PlanetPress Connect User Guide
Version 1.8
Last Revision: 2018-04-09
Objectif Lune, Inc.
2030 Pie-IX, Suite 500
Montréal, QC, Canada, H1V 2C8
+1 (514) 875-5863
www.objectiflune.com

All trademarks displayed are the property of their respective owners.
© Objectif Lune, Inc. 1994-2018. All rights reserved. No part of this documentation may be
reproduced, transmitted or distributed outside of Objectif Lune Inc. by any means whatsoever
without the express written permission of Objectif Lune Inc. Objectif Lune Inc. disclaims
responsibility for any errors and omissions in this documentation and accepts no responsibility
for damages arising from such inconsistencies or their further consequences of any kind.
Objectif Lune Inc. reserves the right to alter the information contained in this documentation
without notice.

Table of Contents

Welcome to PlanetPress Connect 1.8
Setup And Configuration
System and Hardware Considerations
Antivirus Exclusions
Database Considerations
Environment Considerations
Language and Encoding Considerations
Network Considerations
Performance Considerations
System Requirements
Installation and Activation
Where to obtain the installers
Installation - important information
Installation - "How to" guides
Activation
Installation Prerequisites
User accounts and security
The Importance of User Credentials on Installing and Running PlanetPress Connect
Installing PlanetPress Connect on Machines without Internet Access
Installation Wizard
Running connect installer in Silent Mode
Activating a License
Migrating to a new workstation
Information about PlanetPress Workflow 8
Upgrading from PlanetPress Suite 6/7
What do I gain by upgrading to PlanetPress Connect?
Known Issues
Issues with Microsoft Edge browser
Workflow - "Execute Data Mapping" - Issues with multiple PDFs
Installation Paths with Multi-Byte Characters
Switching Languages
GoDaddy Certificates

MySQL Compatibility
PostScript Print Presets
Available Printer Models
External Resources in Connect
Using Capture After Installing Workflow 8
Capturing Spool Files After Installing Workflow 8
Colour Model in Stylesheets
Image Preview in Designer
Merge\Weaver Engines when Printing
REST Calls for Remote Services
Print Content and Email Content in PlanetPress Workflow
Print Limitations when the Output Server is located on a different machine
VIPP Output
Server Configuration Settings
Scheduling Preferences
Server Security Settings
Uninstalling
Important Note: Stop any active Anti-Virus software before uninstalling Connect.
Impacts upon other Applications and Services
Uninstallation Wizard
General information
Connect: a peek under the hood
The Workflow server
The Connect server
The Connect database
The File Store
The engines
The REST API
Connect File Types
The DataMapper Module
DataMapper basics
What's next?
Data mapping configurations
Creating a new data mapping configuration
Opening a data mapping configuration
Saving a data mapping configuration

Using the wizard for CSV and Excel files
Using the wizard for databases
Using the wizard for PDF/VT and AFP files
Using the wizard for XML files
Data mapping workflow
Creating a data mapping workflow
Testing the extraction workflow
Data source settings
Extracting data
Steps
The Data Model
Creating a Data Model
Editing the Data Model
Using the Data Model
Fields
Detail tables
Data types
Data Model file structure
DataMapper User Interface
Keyboard shortcuts
Menus
Panes
Example
Settings for location-based fields in a Text file
Settings for location-based fields in a PDF File
Settings for location-based fields in CSV and Database files
Settings for location-based fields in an XML File
Text and PDF Files
CSV and Database Files
XML File
Text and PDF Files
CSV and Database Files
XML Files
Left operand, Right operand
Condition
Operators
Text file

PDF File
CSV File
XML File
JavaScript
Toolbar
Welcome Screen
DataMapper Scripts API
Using scripts in the DataMapper
Setting boundaries using JavaScript
Objects
Example
Example
Examples
Example
Example
Example
Examples
Examples
Example
Example
Example
Text
XML
Functions
The Designer
Designer basics
Features
Templates
Contexts
Sections
Print
Copy Fit
Creating a Print template with a Wizard
Print context
Print sections
Pages
Master Pages

Media
Email
Designing an Email template
Creating an Email template with a Wizard
Email context
Email templates
Email header settings
Email attachments
Web
Creating a Web template with a Wizard
Web Context
Web pages
Forms
Using Form elements
Using JavaScript
Capture OnTheGo
COTG Forms
Creating a COTG Form
Filling a COTG template
Testing the template
Sending the template to the Workflow tool
Using COTG data in a template
Designing a COTG Template
Capture OnTheGo template wizards
Using Foundation
COTG Elements
Using COTG Elements
Testing a Capture OnTheGo Template
Using the COTG plugin: cotg-2.0.0.js
Dynamically adding COTG widgets
Saving and restoring custom data and widgets
Capture OnTheGo API
Content elements
Element types
Editing HTML
Attributes
Inserting an element

Selecting an element
Deleting an element
Styling and formatting an element
Barcode
Boxes
Business graphics
COTG Elements
Date
Forms
Form Elements
Hyperlink and mailto link
Images
Table
Text and special characters
Snippets
Adding a snippet to the Resources
Adding a snippet to a section
Creating a snippet
JSON Snippets
Styling and formatting
Local formatting versus style sheets
Layout properties
Styling templates with CSS files
Styling text and paragraphs
How to position elements
Rotating elements
Styling a table
Styling an image
Background color and/or image
Border
Colors
Fonts
Locale
Spacing
Personalizing Content
Variable data
Conditional content

Dynamic images
Dynamic tables
Snippets
Scripts
Loading data
Variable Data
Formatting variable data
Showing content conditionally
Conditional Print sections
Dynamic Images
Dynamic table
Personalized URL
Writing your own scripts
Script types
Creating a new script
Writing a script
Managing scripts
Testing scripts
Optimizing scripts
Loading a snippet via a script
Loading content using a server's RESTful API
Control Scripts
The script flow: when scripts run
Selectors in Connect
Designer User Interface
Dialogs
Keyboard shortcuts
Menus
Panes
Toolbars
Welcome Screen
Print Options
Job Creation Presets
Output Creation Settings
Designer Script API
Designer Script API
Examples

Examples
Examples
Examples
Examples
Examples
Examples
Examples
Examples
Examples
Examples
Examples
Examples
Example
Example
Example
Example
Example
Examples
Creating a table of contents
Example
Examples
Examples
Examples
Examples
Replace elements with a snippet
Replace elements with a set of snippets
Example
Example
Creating a Date object from a string
Control Script API
Examples
Generating output
Print output
Email output
Web output
Optimizing a template
Scripts

Images
Generating Print output
Saving Printing options in Print Presets
Connect Printing options that cannot be changed from within the Printer Wizard
Print Using Standard Print Output Settings
Print Using Advanced Printer Wizard
Adding print output models to the Print Wizard
Splitting printing into more than one file
Print output variables
Generating Fax output
Generating Tags for Image Output
Generating Email output
Email output settings in the Email context and sections
Generating Email output from Connect Designer
Generating Email output from Workflow
Using an ESP with PlanetPress Connect
Generating Web output
Attaching Web output to an Email template
Generating Web output from Workflow
Web output settings in the Web context and sections
Overview
Connect 1.8 General Enhancements and Fixes
Connect 1.8 Performance Related Enhancements and Fixes
Connect 1.8 Designer Enhancements and Fixes
Connect 1.8 DataMapping Enhancements and Fixes
Connect 1.8 Output Enhancements and Fixes
Capture OnTheGo (COTG) Enhancements and Fixes
Workflow 8.8 Enhancements and Fixes
Known Issues
Previous Releases
Overview
Connect 1.7.1 General Enhancements and Fixes
Connect 1.7.1 Designer Enhancements and Fixes
Connect 1.7.1 DataMapping Enhancements and Fixes
Connect 1.7.1 Output Enhancements and Fixes
Workflow 8.7 Enhancements and Fixes
Known Issues

Overview
OL Connect Send
Connect 1.6.1 General Enhancements and Fixes
Connect 1.6.1 Designer Enhancements and Fixes
Connect 1.6.1 DataMapping Enhancements and Fixes
Connect 1.6.1 Output Enhancements and Fixes
Connect Workflow 8.6 Enhancements and Fixes
Known Issues
Overview
Connect 1.5 Designer Enhancements and Fixes
Connect 1.5 DataMapping Enhancements and Fixes
Connect 1.5 Output Enhancements and Fixes
Connect 1.5 General Enhancements and Fixes
Connect 8.5 Workflow Enhancements and Fixes
Known Issues
Overview
Connect 1.4.2 Enhancements and Fixes
Connect 1.4.1 New Features and Enhancements
Connect 1.4.1 Designer Enhancements and Fixes
Connect 1.4.1 DataMapping Enhancements and Fixes
Connect 1.4.1 Output Enhancements and Fixes
Connect 8.4.1 Workflow Enhancements and Fixes
Known Issues
Legal Notices and Acknowledgements
Copyright Information

Welcome to PlanetPress Connect 1.8
Note
Since we are always looking for new ways to make your life easier, we welcome your
questions and comments about our products and documentation. Use the feedback tool
at the bottom of the page or shoot us an email at doc@ca.objectiflune.com.

PlanetPress Connect is a series of tools designed to optimize and automate customer
communications management. They work together to improve the creation, distribution,
interaction and maintenance of your communications.
The PlanetPress Connect Datamapper and Designer are designed to create output for print,
email and the web within a single template and from any data type, including formatted print
streams. Output presets applied outside the design phase make templates printing device
independent.
The Designer has an easy-to-use interface that makes it possible for almost anyone to create
multi-channel output. More advanced users may use native HTML, CSS and JavaScript.
PlanetPress Connect also includes a process automation server, called Workflow. It is capable
of serving response form web pages and email to provide interactive business
communications.
PlanetPress Connect can create documents for tablets and mobile devices that run a free
CaptureOnTheGo App. Users with a CaptureOnTheGo subscription can then download
documents to their own devices, interact with them and send the captured data back to
PlanetPress for conversion into additional documents or workflows.
This online documentation covers PlanetPress Connect version 1.8.

Setup And Configuration
This chapter describes the PlanetPress Connect installation and the different considerations
that are important with regard to the installation and use of PlanetPress Connect.
- "System and Hardware Considerations" below
- "Installation and Activation" on page 28
- "Known Issues" on page 79
- "Server Configuration Settings" on page 84
- Uninstalling

System and Hardware Considerations
There are a variety of considerations to be aware of. These are documented in the following
pages:
- "Antivirus Exclusions" below
- "Database Considerations" on page 17
- "Environment Considerations" on page 22
- "Language and Encoding Considerations" on page 24
- "Network Considerations" on page 24
- "Performance Considerations" on page 25
- "System Requirements" on page 27

Antivirus Exclusions
The information on this page is designed to help IT managers and IT professionals decide
which antivirus strategy to follow with respect to PlanetPress Connect and their internal
requirements and needs. It describes the mode of operation and the files and folders used by
PlanetPress Connect, as well as the files, folders and executables that should be excluded from
scanning for the best possible performance and to avoid issues caused by antivirus file locks.

IT managers and IT professionals may then decide on the antivirus strategy that best fits their
internal requirements and needs, based on the information outlined here.
Directories and folders
Main installation folder
All Connect applications are installed under a freely selectable main folder, referred to below
as the "Installation Target". This installation target holds the executables and the files and
folders required for the operation of the whole product suite. All of these files and folders are
static after installation. Whether they are monitored depends on the company's virus protection
strategy; virus protection on these files and folders should, however, not have a big - if any -
impact on the performance of the Connect suite.
Working folders
Working folders for Connect are created and used on a per-user basis under the respective
user's profile folder, accessible on Windows via the standardized system variable
%USERPROFILE%, in the subfolder "Connect". The working folders are:
- %USERPROFILE%\Connect\filestore: This folder holds non-intermediate files for the operation of Connect. Files in this folder are used regularly, but not at a high frequency. Supervising this folder with a virus protection system should not have too much of an impact on the speed of the whole Connect suite.
- %USERPROFILE%\Connect\logs: As the name implies, log files are created and updated here. These log files are plain text files. Virus protection may have an impact on the speed of the whole Connect suite.
- %USERPROFILE%\Connect\temp: Storage folder for temporary data, usually intermediate files in multiple folders. Virus protection on this folder and its subfolders may have a serious impact on the performance of Connect.
- %USERPROFILE%\Connect\workspace: Usually contains settings and helper files and folders. Supervising this folder with a virus protection system should not have too much of an impact on the speed of the whole Connect suite.

Database 1
Depending on the components installed, a database instance is created in the system temp
folder of Windows. This folder is accessible via the standardized system variable %TMP%.
Usually, folders holding such temporary files and folders should be excluded from virus
protection, because monitoring them affects the overall performance of the whole system.
However, the person responsible for protecting the computer has to decide whether to monitor
such temporary folders, in line with company guidelines.
Database 2
Another database instance for Connect is held and used under the folder intended to hold data
accessible by and for all users. The path to this folder is stored in the standardized system
variable %PROGRAMDATA%. The Connect database instance is located in the subfolder
"Connect\MySQL".
As this database is in extremely heavy use, virus protection on this folder and its subfolders
may have a serious impact on the performance of Connect.

Database Considerations
This page describes the different considerations and prerequisites for the database back-end
used by PlanetPress Connect, whether using the MySQL instance provided by the installer or a
pre-existing (external) instance.
Using the MySQL Instance from the Installer
The MySQL Instance provided in the Installation Wizard is already pre-configured with options
to provide the most stable back-end setup.
These are the specific options that have been changed in our version of "my.ini":
- max_connections = 200: PlanetPress Connect uses a lot of database connections. This number ensures that even in high volume environments, enough connections will be available.
- max_allowed_packet = 500M: In some implementations, especially when using Capture OnTheGo, large packet sizes are required to allow transferring binary files. This substantial maximum packet size ensures that the data received by PlanetPress Connect can be stored within the database.
- character-set-server = utf8, collation-server = utf8_unicode_ci, default-character-set = utf8: These indicate database support for UTF-8/Unicode.
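Expressed as a my.ini fragment, those options would look roughly as follows. This is only a partial sketch: the actual file shipped with the installer contains many more entries, and the section placement shown here is an assumption.

    [mysqld]
    # Allow enough simultaneous connections for high-volume environments
    max_connections = 200
    # Allow large packets, e.g. binary files transferred by Capture OnTheGo
    max_allowed_packet = 500M
    # UTF-8/Unicode support on the server side
    character-set-server = utf8
    collation-server = utf8_unicode_ci

    [client]
    # UTF-8/Unicode support for client connections
    default-character-set = utf8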

Installing / Updating Connect Using an existing local MySQL instance
If MySQL Server is already present and you wish to use it, the following should be taken into
consideration:

- The MySQL account must have access to all permissions using the GRANT Command, including creating databases (see the example below this list).
- The database configuration must include the options detailed in the "Using the MySQL Instance from the Installer" topic above.
- The SQL instance must be open to access from other computers. This means the bind-address option should not be set to 127.0.0.1 or localhost.
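As an illustration of the first and last points, the statements below sketch how a suitable MySQL account might be created. The account name 'connect' and its password are placeholders, not values required by Connect; adapt them to your own environment.

    -- Create an account that is allowed to connect from other machines (not only localhost)
    CREATE USER 'connect'@'%' IDENTIFIED BY 'SomeStrongPassword';
    -- Grant it all permissions, including the right to create databases
    GRANT ALL PRIVILEGES ON *.* TO 'connect'@'%' WITH GRANT OPTION;
    FLUSH PRIVILEGES;

In the server's my.ini, also make sure the bind-address option is commented out or not limited to 127.0.0.1, so that other computers can reach the instance.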

Warning
If you choose not to install the supplied MySQL database, and instead opt for using a pre-existing (external) database, then you yourself must ensure that the external database is accessible to Connect.
Objectif Lune Inc. will take no responsibility for database connections to any database other than the supplied MySQL database.

Options available within the installer:
- The Configuration page for the local MySQL is displayed.
- MySQL settings are pre-filled with default values if no existing MySQL db configuration is found.
- MySQL settings are pre-filled with existing db configuration settings, if they point to a MySQL db type.

Installing Connect using an existing Microsoft SQL Server instance
If Microsoft SQL Server is already present and you wish to use it, the following should be taken
into consideration:

Warning
If you choose not to install the supplied MySQL database, and instead opt for using a pre-existing (external) database, then you yourself must ensure that the external database is accessible to Connect.
Objectif Lune Inc. will take no responsibility for database connections to any database other than the supplied MySQL database.

Note
Since PlanetPress Connect version 1.6 the minimum required version of the MS SQL
Server is SQL Server 2012.

- When MS SQL is selected, the default values for the root user are sa and 1433 for the port.
- If db settings from a previous installation are found, the pre-existing settings will be displayed for the matching db type (for MS SQL settings, this will only work if they were created with Server Config Tool 1.5.0 or later, or the Connect installer 1.6.0 or later). If the db type is changed in the configuration page, the default values for this db type will be displayed. If the pre-existing db settings are set to Hsqldb, the default db type selection will be MySQL.
- Selected db settings are stored in the preferences as usual (C:\ProgramData\Objectif Lune\Ol Connect\.settings\ConnectHostScope\com.objectiflune.repository.eclipselink.generic.prefs).

Updating With No Local MySQL Product
- When updating a Connect installation from 1.5.0 which contains a Server Product but no local MySQL Product, the DB Configuration Page will detect which db type was set before (especially if the db configuration was switched from MySQL to MS SQL using the Server Configuration Tool), and default to those settings.
- On update from 1.4.2 or earlier, the DB Configuration Page will always default to MySQL connection settings, and if the installation was manually tweaked to connect to MS SQL Server, the user has to switch to "Microsoft SQL Server" type and enter connection details again.

When modifying Connect
- If local MySQL is removed from an installation, the DB Configuration page will additionally offer the Microsoft SQL Server db type with respective default values.
- If local MySQL is added to an installation, the usual MySQL Configuration page with default values will be displayed.

If the user has installed the Installer Supplied MySQL and then switches to an external
Microsoft SQL by using the Server Configuration Tool, the supplied MySQL cannot be switched
off. By design the installer adds a service dependency between Connect Server and the
supplied MySQL service.

Note
The Microsoft SQL selection capability is only available in version 1.6 and upwards.

To remove this dependency the user needs to do the following
1. Have a foreign Microsoft SQL running, ready for use with Connect Server.
2. Use the Server Configuration Tool "Database Connection preferences" on page 700 to
switch the database to Microsoft SQL.
3. Re-start the Connect Server Service, so that the modifications become active.
4. Counter check that everything is working properly with Microsoft SQL.
5. Open a command-line prompt with full administration rights.
6. Enter the command sc config OLConnect_Server depend= /. This removes the
dependency.
Please be aware: The key word depend must be followed immediately by the equal sign,
but between the equal sign and the forward slash there must be a space.
Additional information can be found here: http://serverfault.com/questions/24821/howto-add-dependency-on-a-windows-service-after-the-service-is-installed#228326.
7. After the dependency has been removed, it is possible to stop the supplied MySQL
service (OLConnect_MySQL).
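For reference, the commands from steps 6 and 7 as they would be typed at an elevated command prompt (note the mandatory space between the equal sign and the forward slash):

    rem Remove the dependency of the Connect Server service on the supplied MySQL service
    sc config OLConnect_Server depend= /
    rem The supplied MySQL service can then be stopped
    net stop OLConnect_MySQL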

Warning
If a Connect 1.5 user wants to use Microsoft SQL instead of MySQL for the Connect Server, there are several points to take care of:
- If a suitable external MySQL instance is available that can be used in the interim, it should be selected during the setup. This ensures that the supplied MySQL is not installed at all. Otherwise the supplied MySQL needs to be installed, and the switch to Microsoft SQL needs to be done as outlined above.
- It is not possible to uninstall the supplied MySQL in this case via a Connect 1.5 modify.

Important
If a Server Product and a MySQL Product were selected to be installed on Connect 1.5.0, and
then the Server Configuration Tool is used to switch the database used by the Server to an
external Microsoft SQL, then the Update to 1.6 requires an extra step. The procedure is as
follows:
1. Run the Update to Connect 1.6. This will assume the local MySQL database needs to
be updated and configured, so the user has to enter a root password on the MySQL
Configuration Page (can be any password matching Connect security rules).
2. After the update, the Connect 1.6 Setup needs to be run once more to modify Connect.
3. On the Product Selection page, now the MySQL product can be unselected.
4. When stepping forward in the Wizard, the DB Configuration page will be displayed, which
allows you to configure the Microsoft SQL Server with appropriate settings.
After this modification, the local MySQL is removed, and also the service dependency from
Server to MySQL is removed.

Note
If Connect was initially installed without the local MySQL product (i.e. an external MySQL was configured as database on a 1.5 installation), then the update to 1.6 will allow selecting either an external MySQL or an external Microsoft SQL on the DB Configuration Page.

Environment Considerations
Virtual Machine Support
PlanetPress Connect supports VMWare Workstation, VMWare Server, VMWare Player,
VMWare ESX (including VMotion), Microsoft Hyper-V and Microsoft Hyper-V/Azure
infrastructure environments as software installed on the Guest operating system.

Warning
Copying (duplicating) a Virtual Machine with Connect installed and using both images
simultaneously constitutes an infringement of our End-User License Agreement.

Note
While some virtual machine environments (from VMWare and Microsoft) are supported,
other virtual environments (such as Parallels, Xen and others) are not supported at this
time.

Terminal Server/Service
PlanetPress Connect does not support the Terminal Server (or Terminal Service) environments
available under Windows 2000, 2003 and 2008. That is to say, if Terminal Service is installed
on the server where PlanetPress Connect is located, unexpected behaviour may occur, and
such setups are not supported by Objectif Lune Inc. Furthermore, using PlanetPress Connect in
a Terminal Service environment is an infringement of our End-User License Agreement.
Remote Desktop
Tests have demonstrated that PlanetPress Connect can be used through Remote Desktop. It is,
however, possible that certain combinations of operating systems could cause issues. If
problems are encountered, please contact OL Support and we will investigate.
PlanetPress Connect 1.3 and later have been certified under Remote Desktop.
64-bit Operating Systems
PlanetPress Connect is a 64-bit software and can only be installed on 64-bit operating systems.

Antivirus Considerations
- Antivirus software may slow down processing or cause issues if it is scanning temporary folders or those used by PlanetPress Connect. Please see KB-002: Antivirus Exclusions for more information.
- Antivirus software might interfere with installation scripts, notably a vbs script to install fonts. McAfee, in particular, should be disabled temporarily during installation in order for MICR fonts to install and the installation to complete successfully.

Windows Search Indexing Service
Tests have concluded that the Windows Search service, used to provide indexing for Windows
Search, can interfere with Connect when installing on a virtual machine. If the installation
hangs during the last steps, it is necessary to completely disable this service during installation.
- Click on Start, Run.
- Type in services.msc and click OK.
- Locate the Windows Search service and double-click on it.
- Change the Startup Type to Disabled, and click Stop to stop the service.
- Try the installation again.
- Once the installation is complete, you may re-enable the service and start it.
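If you prefer the command line, the same result can be achieved from an elevated command prompt. This is a sketch that assumes the standard service name WSearch for the Windows Search service:

    rem Stop the Windows Search service and keep it from starting during the installation
    net stop WSearch
    sc config WSearch start= disabled
    rem After the installation has completed, re-enable and restart it
    sc config WSearch start= delayed-auto
    net start WSearch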

Command-line switches and .ini entries
PlanetPress Connect is intended to work stably and reliably, based on Java and the Eclipse
framework. To ensure this reliability and robustness, many Java and Eclipse parameters have
been tested and tuned, which is reflected in the respective .ini entries and the command-line
switches used. A collection of valuable settings has been compiled into PlanetPress Connect's
"good switches list" (called the "whitelist").
The protection of the end user's system is one of our main goals, and we have therefore
implemented a very strict verification mechanism which ensures that only these whitelisted .ini
entries and command-line switches are accepted when one of the Connect components is
started and run. Please be advised that any non-whitelisted .ini entry or command-line switch
will not be accepted and will, if used, lead to the respective application's "sudden death". If you
encounter such behaviour, please double-check your Connect log file(s) for the respective
entries.

Language and Encoding Considerations
Please note the following considerations:
- Language:
  - PlanetPress Connect is currently offered in several languages, which can be switched between via the Preferences dialog. The current languages include:
    - English
    - French
    - German
    - Spanish
    - Italian
    - Korean
    - Portuguese
    - Chinese (Simplified)
    - Chinese (Traditional)
    - Japanese
  The default language is English.
  The PlanetPress Connect help system (this document) is currently only available in English.
- Encoding:
  - Issues can sometimes be encountered in menus and templates when running PlanetPress Connect on a non-English operating system. These are due to encoding issues and will be addressed in a later release.

Network Considerations
The following should be taken into consideration with regard to network settings and
communications:
- If a local proxy is configured (in the Internet Explorer Options dialog), the option Bypass proxy server for local addresses must be checked, or some features depending on local communication will not work.

Firewall/Port considerations
For Firewall/Port considerations, please see this article in the Knowledge Base: Connect
Firewall/Port Configuration

Performance Considerations
This page is a comprehensive guide to getting the most performance out of PlanetPress
Connect as well as a rough guideline to indicate when it's best to upgrade.
Performance Analysis Details
In order to get the most out of PlanetPress Connect, it is important to determine how best to
maximize performance. The following guidelines will be helpful in extracting the best
performance from PlanetPress Connect before looking into hardware upgrades or extra
PlanetPress Connect performance packs.
- Job Sizes and Speed: In terms of pure output speed, it's important to first determine what job size is expected, and adjust "Scheduling Preferences" on page 85 accordingly. The basic rules are:
  - If processing a small number of very large records (when each individual record is composed of a large number of pages), more instances with an equal amount of speed units is better. For hardware, RAM and hard drive speeds are most important, since the smallest divisible part (the record) cannot be split across multiple machines or even cores.
  - If creating a very large number of small records (hundreds of thousands of 2-3 page individual records, for instance), a smaller number of instances with a large number of speed units is better. As for hardware, the number of cores becomes critical, whereas RAM and hard drive are secondary. Performance Packs, as well as moving the MySQL instance to a separate machine, are helpful if your most powerful machine starts struggling.
  - Mix and match. For example, one instance prioritized for large jobs and the rest for smaller, quicker jobs, or the contrary; whatever suits your workload.
- RAM Configuration: By default, each instance of the Merge Engine and Weaver Engine is set to use 640MB of RAM. This means that regardless of speed units, if not enough memory is available, output speed might not be as expected. Assuming that the machine itself is not running any other software, the rule of thumb is the following: the total amount of used memory on the machine should be close to the maximum available (around 95%).

For each engine, it's necessary to modify the .ini file that controls its Java arguments. Edit as follows:
- For the Merge Engine: see C:\Program Files\Objectif Lune\OL Connect\MergeEngine\Mergeengine.ini
- For the Weaver Engine: see C:\Program Files\Objectif Lune\OL Connect\weaverengine\Weaverengine.ini
- The parameters are -Xms640m for the minimum RAM size and -Xmx640m for the maximum RAM size (see the sketch below this list). Explaining Java arguments is beyond the scope of this document. Please read references here, here and here for more details (fair warning: these can get pretty technical!).
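For example, to let a Merge Engine instance use up to 2GB instead of the default 640MB, the two memory arguments in Mergeengine.ini could be changed as sketched below. The 2048m value is only an illustration; these lines sit among the engine's Java VM arguments, and this type of .ini file does not support comments.

    -Xms640m
    -Xmx2048m

The same change applies to Weaverengine.ini for the Weaver Engine. Remember that every engine instance claims this amount of memory, so the totals must stay within the roughly 95% usage target mentioned above.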

Template and data mapping optimization: Some functionality offered by the DataMapper and Designer modules is very useful, but can cause the generation of records and content items to slow down due to its nature. Here are some examples:
- Preprocessor and Postprocessor scripts: Manipulating data using a script may cause delays before and after the data mapping action has actually taken place, especially file conversion and data enrichment from other sources.
- Loading external and network resources: In the Designer, using images, JavaScript or CSS resources located on a slow network or on a slow internet connection will obviously lead to a loss of speed. While we do our best for caching, a document with 100,000 records which queries a page that takes 1 second to return a different image each time will, naturally, slow output generation down by up to 27 hours.
- External JavaScript libraries: While loading a single JavaScript library from the web is generally very fast (and only done once for the record set), actually running a script on each generated page can take some time, because JavaScript runs for each record and often takes the same time for each record.
- Inefficient selectors: Using very precise ID selectors in script wizards can be much faster than using a text selector, especially on very large documents. See also: "Use an ID as selector" on page 636.
- Complex scripts: Custom scripts with large, complex or non-optimized loops can slow down content creation. While it is sometimes difficult to troubleshoot, there are many resources online to help you learn about JavaScript performance and coding mistakes. Here, here, and here are a few. Note that most resources on the web are about JavaScript in the browser, but the great majority of the tips do, indeed, apply to scripts in general, wherever they are used. A generic example is sketched below this list.
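The following generic JavaScript sketch (not tied to any particular Connect API; lines, total and parsePrice are hypothetical placeholders) shows the kind of loop optimization meant by the last point: invariant work is hoisted out of the loop so it is not repeated for every iteration.

    // Slow: the formatter is rebuilt on every pass through the loop
    for (var i = 0; i < lines.length; i++) {
        var formatter = new Intl.NumberFormat('en-CA', { style: 'currency', currency: 'CAD' });
        total += parsePrice(lines[i], formatter);
    }

    // Faster: create invariant objects once, outside the loop
    var currencyFormat = new Intl.NumberFormat('en-CA', { style: 'currency', currency: 'CAD' });
    var count = lines.length;
    for (var j = 0; j < count; j++) {
        total += parsePrice(lines[j], currencyFormat);
    }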

High-performance hardware
The following is suggested when processing speed is important. Before looking into
Performance Packs to enhance performance, ensure that the below requirements are met.
- MySQL Database on a separate machine. MySQL's main possible bottleneck is file I/O, and as such a high-performance setup will require this server to be on a separate machine, ideally with a high-performance, low-latency hard drive. A Solid State Drive (SSD) is recommended.
- High-quality 16+ GB RAM. This is especially true when working with many server instances ("speed units") running in parallel. The more parallel processing, the more RAM is recommended.
- 4 or 8 physical cores. We're not talking Hyper-Threading here, but physical cores. Hyper-Threading is great with small applications, but the overhead of "switching" between the virtual cores, and the fact that, well, they're virtual, means the performance is much lower in high-power applications such as OL Connect. In short, a dual-core processor with Hyper-Threading enabled is not equivalent to a quad-core processor.
- Preferably use a physical, non-virtualized server. VMWare servers are great for reducing the number of physical machines in your IT space, but they must share the hardware between each other. While you can create a virtual machine that seems as powerful as a physical one, it will still be sharing hardware with any other virtual machines, and this will adversely affect performance.

System Requirements
These are the system requirements for PlanetPress Connect 1.8.
Operating System (64-bit only)
- Microsoft Windows 2008/2008 R2 Server
- Microsoft Windows 2012/2012 R2 Server
- Microsoft Windows Vista
- Microsoft Windows 7
- Microsoft Windows 8.1
- Microsoft Windows 10 (Pro and Enterprise versions only)

Note
Windows 8.0, Windows XP, Windows 2003 and older versions of Windows are not
supported by PlanetPress Connect.

Minimum Hardware Requirements
- NTFS Filesystem (FAT32 is not supported)
- CPU: Intel Core i7-4770 Haswell (4 Core)
- 8GB RAM (16GB Recommended)
- Disk Space: At least 10GB (20GB recommended)

Note
For tips and tricks on performance, see "Performance Considerations" on page 25.

Installation and Activation
This topic provides detailed information about the installation and activation of PlanetPress
Connect 1.8.

Note
A PDF version of this guide is available for use in offline installations. Click here to
download it.

PlanetPress Connect 1.8 comprises two different installers: one for the PlanetPress Connect
software and one for PlanetPress Workflow 8.

Where to obtain the installers
The installers for PlanetPress Connect 1.8 and PlanetPress Workflow 8 can be obtained on
DVD or downloaded as follows:

- If you are a Customer, the installers can be downloaded from the Objectif Lune Web Activations page: http://www.objectiflune.com/activations
- If you are a Reseller, the installers can be downloaded from the Objectif Lune Partner Portal: http://extranet.objectiflune.com/

Installation - important information
For important information about the Installation, including requirements and best practices,
please see the following topics:
- Installation Prerequisites
- User accounts and security
- The importance of User Credentials when installing and running Connect
- Migrating to a new computer

Installation - "How to" guides
For information on how to conduct the installation itself, choose from the following topics:
- Installation
- Silent Installation
- Installation on machines without Internet access

Activation
For information on licensing, please see Activating your license.

Installation Prerequisites
- Make sure your system meets the System requirements.
- PlanetPress Connect Version 1.8 can be installed under a regular user account with Administrator privileges.
- PlanetPress Connect must be installed on an NTFS file system.
- PlanetPress Connect requires Microsoft .NET Framework 3.5 to already be installed on the target system.
- In order to use the automation features in Version 1.8, PlanetPress Workflow 8 needs to be installed. It can be installed on the same machine as an existing PlanetPress® Suite 7.6 installation or on a new computer. For more information, please see Information about PlanetPress Workflow 8.
- As with any Java application, the more RAM available, the faster PlanetPress Connect will execute!

Users of Connect 1.1
In order for users of PlanetPress Connect 1.1 to upgrade to any later version through the
Update Manager it is necessary to install a later version (1.1.8 or later) of the Objectif Lune
Update Client.
If you do not have such a version installed already, the next time you run your Update Client it
will show that an update of the Client itself to Version 1.1.8 (or later) is available.
Simply click on the download button in the dialog to install the new version of the Update
Client. Note that it is no problem to run the update while the Client is open; it will automatically
update itself.
Once you have done this, PlanetPress Connect 1.8 will become available for download.

Note
From PlanetPress Connect Version 1.2 onwards, the new version (1.1.8) of the Update
Client is included by default with all setups.

Users of Connect 1.0
Users of Connect version 1.0 cannot upgrade directly to Version 1.8. This is because Connect
Version 1.0 is a 32-bit version of Connect.
Users must first upgrade to Version 1.1 and from there upgrade to Version 1.8.
If you are updating manually, you must first upgrade to Version 1.1 before installing 1.8. If you
attempt to go directly from Version 1.0 to Version 1.8, the installation will fail.
Also see "Users of Connect 1.1" above for extra information about updating from that version.

User accounts and security
Permissions for PlanetPress Connect Designer
PlanetPress Connect Designer does not require any special permissions to run beyond those
of a regular program. It does not require administrative rights and only needs permission to
read/write in any folder where Templates or Data Mapping Configurations are located.
If generating Print output, PlanetPress Connect Designer requires permission on the printer or
printer queue to send files.
Permissions for PlanetPress Connect Server
The PlanetPress Connect Server module, used by the Automation module, requires some
special permissions to run. These permissions are set during installation, in the Engine
Configuration portion of the Installation Wizard, but they can also be configured later by
modifying the permissions for the service. To do this:
- In Windows, open the Control Panel, Administrative Tools, then Services (this may depend on your operating system).
- Locate the service called Serverengine_UUID, where UUID is a series of characters that depends on the machine where the software is installed.
- Right-click on the service and select Properties.
- In the Log On tab, define the account name and password that the service should use. This can be a local account on the computer or an account on a Windows Domain. The account must have administrative access on the machine. It should also correspond to the user account set up in PlanetPress Workflow.
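Alternatively, the same assignment can be scripted with the Windows sc utility from an elevated command prompt. This is a sketch only; the <UUID> part of the service name, the account and the password are placeholders to be replaced with the actual values on your machine.

    rem Assign the logon account and password to the Connect Server engine service
    sc config Serverengine_<UUID> obj= "MYDOMAIN\connectsvc" password= "S3cretPassw0rd"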

The Importance of User Credentials on Installing and
Running PlanetPress Connect
Which user credentials OL Connect requires depends heavily on the Connect component in
question and the respective tasks it performs.
First of all, it is important to distinguish between installation and run-time.

Installation
The Connect installer puts all required files, folders, registry entries and much more in their
correct places and locations. As many of these locations are protected against malicious
access, the user under whose context the Connect installation is started and run needs very
extensive rights on the respective computer. This user must belong to the Local Administrators
group on that machine. Here are some required capabilities; this user:
- Must be able to write into the "Programs" folder.
- Must be allowed to check for existing certificates and must also be allowed to install new ones into the global certificate store on that machine.
- Must be able to write into HKLM and any subtree of it in the registry.
- Must be able to INSTALL, START and RUN services and also to MODIFY service settings.
- Must be known in the network the machine belongs to and must also be able to use shared network resources like shared drives and/or printers etc.
This list may not be complete, but it gives an idea of the extent of the requirements. Generally,
the local administrator of the machine will have all these credentials, but there may be network
restrictions and policies which block one or more of these capabilities. In such cases, the
respective network administrator should provide a valid user account for the installation.
User Account
The user account will later be used to RUN one of the Connect Server flavours (Server or
Server Extension). This dedicated user account has to be entered on the respective installer
dialog page and must be allowed to START, STOP and RUN services on this machine. This is
different from the credentials of the installation user account, which additionally requires the
right to INSTALL services. Please be aware of this distinction!
Additionally, the Server user must be able to access any network resources that are required for
OL Connect to function properly. This includes, for example, additional drives, printers,
scanners, other computers and, where appropriate, internet resources, URLs, mail servers, FTP
servers, database servers and everything else planned to be used for the intended operation of
Connect. The Server user is the run-time user.
Connect Components
Usually, a standard end user will only be facing the Connect Designer and maybe the License
Activation Tool. The Designer does not require administrator rights. Everything required to
create documents or to run some tasks will either already be available (installed by the
installer) or be accessible in a way where no specific credentials are required. However, some
tasks, like starting an email campaign, will possibly require a respective account at a mail
server. But this generally has nothing to do with the credentials of the Designer user.
Activation Tool
To run the Software Activation Tool, administrator rights are required, because this tool needs to
write the license file in one of the protected folders of Windows. The tool will, however, allow
restarting it with the respective credentials if required.
MySQL
The MySQL database service is installed by the installation user (hence, again, the requirement
to install, start, run and modify services). Once running, it will just work.
Merge and Weaver Engines
These components do run under the Designer (if only Designer is installed) or the Server /
Extension service(s) and inherit the rights of their parent application.
Server (Extension) Configuration Tool
This component needs to access the settings of the Server. As these are stored and read by the
Server, it should be clear that the user used to run the Configuration tool should be the same as
the Server Service user as explained above.

Installing PlanetPress Connect on Machines without
Internet Access
Installing PlanetPress Connect 1.8 in offline mode requires some extra steps. These are listed
below.
GoDaddy Root Certificate Authority needs to be installed.
In order to install PlanetPress Connect it is necessary for the GoDaddy Root Certificate
Authority to be installed (G2 Certificate) on the host machine and for this to be verified online.
When a machine hosting the installation does not have access to the Internet, the installation
will fail because the verification cannot be performed. To solve this problem, one must first
ensure that all Windows updates have been installed on the host machine. Once the Windows
updates are confirmed as being up to date, complete the following steps:
1. Go to https://certs.godaddy.com/repository and download the following two certificates to copy to the offline machine:
   - GoDaddy Class 2 Certification Authority Root Certificate - G2 - the file is gdroot-g2.crt
   - GoDaddy Secure Server Certificate (Intermediate Certificate) - G2 - the file is gdig2.crt

2. Install the certificates: Right mouse click -> Install Certificate, and follow the steps through
the subsequent wizard.
3. Now copy the PlanetPress Connect installer to the offline machine and start the
installation as normal
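As an alternative to the right-click wizard in step 2, the two certificates can also be installed from an elevated command prompt with the built-in certutil tool. A sketch, assuming the downloaded files are in the current folder:

    rem Install the GoDaddy root certificate into the machine's Trusted Root store
    certutil -addstore -f Root gdroot-g2.crt
    rem Install the intermediate certificate into the Intermediate Certification Authorities store
    certutil -addstore -f CA gdig2.crt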
Windows certificate validation - Certificate Revocation List retrieval should be switched
off
For your security, Objectif Lune digitally signs all relevant files with our own name and
certificate. The integrity of these files is checked at various times by different, context-related
methods. One of these checks, done during the installation process, uses the Windows
certificate validation check.
The Windows certificate validation process not only checks the integrity of a file against its
signature, but also usually checks whether the certificate itself is still valid. That check is done
against the current Certificate Revocation List (CRL), which needs to be retrieved from the
internet. However, if the machine in question does not have internet access, the retrieval of the
CRL will fail, which will lead to subsequent validation issues.
To circumvent such issues it is highly recommended to switch off CRL retrieval prior to
installing Connect on machines without internet access. There is no security risk associated
with this, as the CRLs could never be retrieved without internet access anyway. The
advantages of switching it off will be seen not only during the installation and operation of
Connect, but also in some speed improvements for any application which uses signed binaries.
To switch off CRL retrieval on the computer, complete the following steps:
1. Open the “Internet Options” via the Control Panel.
2. Select the “Advanced” tab and scroll down to the “Security” node.
3. Uncheck the entry “Check for publisher’s certificate revocation” under that node.
4. Click the OK button to close the dialog.
5. Re-start the computer.

Installation Wizard
Starting the PlanetPress Connect installer
The PlanetPress Connect installer may be supplied as an ISO image or on a DVD.
- If it is an ISO image, either burn the ISO onto a DVD or unzip the contents to a folder (keeping the folder structure).
- If it is on a DVD, either insert the DVD and initiate the installation from there, or copy the contents to a folder (keeping the folder structure).

Navigate to the PlanetPress_Connect_Setup_x64.exe file and double-click on it. After a short
while the Setup Wizard will appear to guide you through the installation steps.

Note
PlanetPress Connect requires prior installation of Microsoft .NET Framework 3.5.
Please refer to https://www.microsoft.com/en-us/download/details.aspx?id=21 for more
details on how to install Microsoft .NET Framework 3.5, if this is not already done.

Note
If the same version of PlanetPress Connect is already installed on the target machine,
you will be presented with options to either Uninstall or Modify the existing instance.
If Modify is selected, the standard installation Wizard sequence will be followed, but with
all options from the existing installation selected.

Selecting the required components
After clicking the Next button, the component selection page appears, where the different
components of PlanetPress Connect can be selected for installation. Currently, the following
are available:

- PlanetPress Connect Designer: The Designer module (see "The Designer" on page 302). It may be used as a standalone with no other installed modules, but it will not have certain capabilities such as automation and commingling.
- PlanetPress Connect Server: The Server back-end giving capabilities such as automation, commingling and picking. It saves all entities generated from the Automation module into a database for future use.
- MySQL Product: Installs the supplied MySQL database used by PlanetPress Connect. The database is used for referencing shared and temporary Connect files, as well as for sorting temporarily extracted data, and the like.
  A pre-existing MySQL or Microsoft SQL server (referred to as an external database in this documentation) could be used for the same purpose, however. The external database could reside on either the same computer or on a separate server. If you wish to make use of an external database, please make sure the MySQL Product option is not selected.

Warning
If you choose not to install the supplied MySQL database, and instead opt for using a pre-existing external database, then you yourself must ensure that your external database is accessible to Connect. Objectif Lune Inc. will take no responsibility for database connections to any database other than the supplied MySQL database.
See "Database Considerations" on page 17 for more information about setting up external databases.

- Installation Path: This is the location where modules are to be installed.

The installer can also calculate how much disk space is required for installing the selected
components as well as how much space is available:
- Disk space required: Displays the amount of space required on the disk by the selected components.
- Disk space available on drive: Displays the amount of space available for installation on the drive currently in the Installation Path.
- Recalculate disk space: Click to re-check available disk space. This is useful if space has been made available for the installation while the installer was open.
- Source repository location: Displays the path where the installation files are located. This can be a local drive, installation media, or a network path.

Selection Confirmation
The next page confirms the installation selections made. Click Next to start the installation
itself.
End User License Agreement
The next page displays the End User License Agreement, which needs to be read and
accepted before clicking Next.
Configuring Supplied Database Connection
The Default Database Configuration page appears if the supplied MySQL Product module
was selected for installation in the Product Selection screen. It defines the administrative
password for the MySQL server as well as which port it uses for communication.
The installer will automatically configure the Connect Server to use the supplied password and
port.
- MySQL user 'root' Password: Enter the password for the 'root', or administration, account for the MySQL server. The password must be at least 8 characters long and contain at least one of each of the following:
  - a lower case character (a, b, c ...)
  - an upper case character (A, B, C ...)
  - a numeric digit (1, 2, 3 ...)
  - a punctuation character (@, $, ~ ...)
  For example: "This1s@K"

Note
When updating from an earlier Connect version, the appropriate MySQL password
must be entered or the update will fail.

If the password is subsequently forgotten, then the MySQL product must be
uninstalled and its database deleted from disk before attempting to reinstall.

- Confirm 'root' Password: Re-enter to confirm the password. Both passwords must match for installation to continue.
- TCP/IP Port Number: The port on which MySQL will expect, and respond to, requests. A check is run to confirm whether the specified TCP/IP Port Number is available on the local machine. If it is already being used by another service (generally, an existing MySQL installation), the number is highlighted in red and a warning message is displayed at the top of the dialog.

Note
The MySQL Product controlled by the OLConnect_MySQL service communicates
through port 3306 by default.

- Allow MySQL Server to accept non-local TCP connections: Click to enable external access to the MySQL server.

Note
This option is required if MySQL Server will need to be accessed from any other
machine.
It is also required if the MySQL database is on a separate machine to PlanetPress
Connect.

Tip
This option may represent a security risk if the machine is open to the internet.
It is strongly recommended that your firewall is set to block access to port 3306 from external requests.

Configuring External Database Connection
The Database Connection page appears if the supplied MySQL Product module was not
selected for installation. This page is for setting up the connection to the existing External
database.
- Database Configuration: Select the database type to use for the PlanetPress Connect Engine. Currently only MySQL and Microsoft SQL Server are supported.
- Administrator Username: Enter the username for a user with administrative rights on the database. Administrative rights are required since tables need to be created in the database.
  If accessing a database on a different machine, the server must also be able to accept non-local TCP connections, and the username must also be configured to accept remote connections. For example, the "root" MySQL user entered as root@localhost is not allowed to connect from any machine other than the one where MySQL is installed.
- Administrator Password: Enter the password for the above user. The appropriate MySQL password must be entered or the Connect installation will fail.
- TCP/IP Port Number: Enter the port on which the database server expects connections. For MySQL, this is 3306 by default. For MS SQL it is 1433 by default.
- Database Host Name: Enter the existing database server's IP or host name.
- Server Schema/Table: Enter the name of the MySQL database into which the tables will be created. The Connect standard name is "objectiflune".
- Test Connection button: Click to verify that the information provided in the previous fields is valid by connecting to the database.

Note
This test does not check whether the remote user has READ and WRITE
permissions to the tables under the objectiflune schema. It is solely a test of
database connectivity.

PlanetPress Connect Server Configuration
The Server Configuration page is where the Connect Server component is configured.
The Connect Server (Master) settings are as follows:
- Run Server as: Defines the machine username and password that the PlanetPress Connect Server module's service uses.

  Note
  The "Server Security Settings" on page 90 dialog can only ever be executed from the user specified here.

  - Username: The account that the service uses to log in. If the machine is on a domain, use the format domain\username. This account must be an existing Windows profile with local administrator rights.
  - Password: The password associated with the selected user.
  - Validate user button: Click to verify that the entered username and password combination is correct and that the service is able to log in. This button must be clicked and the user validated before the Next button becomes available.

Click Next to start the actual installation process. This process can take several minutes.
Completing the installation
This screen displays a summary of the components that have been installed.
• Configure Update Check checkbox: This option is enabled by default. It causes the
Product Update Manager to run after the installation is complete. This allows
configuring PlanetPress Connect to regularly check for entitled updates.
Note: this checkbox may not be available if an issue was encountered during the
installation.
• Show Log...: If an issue was encountered during the installation, click this button to
obtain details. This information can then be provided to Objectif Lune for troubleshooting.
• When ready, click the Finish button to close the installation wizard and, if the option was
selected, initialize the Product Update Manager.

The Product Update Manager
If the Configure Update Check option has been selected, the following message will be
displayed after clicking “Finish” in the setup:


Click “Yes” to install or open the Product Update Manager, where the frequency with which
updates are checked, and a proxy server (if required), can be specified.
Note: if the Product Update Manager was already installed by another Objectif Lune
application, it will be updated to the latest version and will retain the settings previously
specified.
Select the desired options and then click OK to query the server and obtain a list of any
updates that are available for your software.
• Note that the Product Update Manager can also be called from the “Objectif Lune Update
Client” option in the Start menu.
• It can be uninstalled via Control Panel | Programs | Programs and Features.

Product Activation
After installation, it is necessary to activate the software. See Activating your license for more
information.

Note
Before activating the software, please wait 5 minutes for the database to initialize. If the
software is activated and the services rebooted too quickly, the database can become
corrupted and require a re-installation.

Running connect installer in Silent Mode
PlanetPress Connect can be installed in a so-called "silent mode" to allow an automated setup
during a company-wide roll-out or in comparable situations. The trigger for the Connect Installer
to run in silent mode is a text file with the fixed name install.properties, which is located either in
the same folder as the PlanetPress_Connect_Setup_x86_64.exe or in the unpacked folder of
the installer.exe.

Note
Only the installation can be run silently. Silent Mode does not apply to uninstalling, modifying, or
updating Connect. Any previous version of Connect must be uninstalled before using the Silent
Installer.


The required properties file has the following attributes:
• Comment lines, starting with # (e.g. # The options to configure an external database)
• Key = Value pairs (e.g. install.product.0 = Connect Designer)

For supported keys, please refer to the sections below.

Note
The install.properties file notation must follow Commons Configuration rules. Please refer to
Properties files for more details.

Required and optional properties
Required properties depend on the specified product. Only the fields related to the specified
product(s) need to be entered. If no product is mentioned, properties must be specified for all
valid Connect products.
Here is an example of an install.properties file.
# Verbose logging
logging.verbose = true
# Product selection
install.product.0 = Connect Designer
install.product.1 = Connect Server
# Server settings
server.runas.username = Localadmin
server.runas.password = admin
# Database configuration
database.type = mysql
database.host = 192.168.116.10
database.port = 3308
database.username = root
database.password = admin
database.schema = my_ol


Verbose logging (optional)
By default, the Silent Installer logs the same way as the GUI installer: errors and warnings,
plus certain information during database configuration. More verbose logging can be switched
on by setting logging.verbose = true.
Product selection (optional)
By default, if nothing is entered for the products to be installed (install.product.X), the Silent
Installer will install all products which are visible to the user for the respective brand (except for
the Server Extension, because only one of Server or Server Extension can be installed at a
time).
PlanetPress defaults
install.product.0 = Connect Designer
install.product.1 = Connect Server
install.product.2 = MySQL Product

Note
The values of install.product properties must contain the exact product names.

Server configuration (required if Server is selected for install)
For Server, the following properties need to be provided:
server.runas.username = <username>
server.runas.password = <password>
Server Extension configuration (required if Server Extension is selected for install)
For Server Extension, the following properties need to be provided:
server.runas.username = <username>
server.runas.password = <password>
server.master.host = <host name or IP address of the Master Server>
server.master.port = <port used by the Master Server>
server.master.authenticate = true or false
server.master.username = <username for the Master Server>
server.master.password = <password for the Master Server>


Database configuration
Case 1: MySQL is among the selected Connect products to be installed (new MySQL
installation)
If MySQL is selected and there is no previous MySQL configuration on the machine, the
following properties should be defined:
database.password = <password> (required and must meet the password rules)
database.port = <port> (3306 is the default port value)
database.unlocked = true or false (the default value is false, optional)

Note
The unlocked option should only be used when the database requires external access.
If the Silent Installer runs with the default product selection, MySQL Product is included, and
hence the database.unlocked = true property may optionally be set if MySQL on this machine
is intended to serve as the central database for remote machines as well.
If the Silent Installer runs with the explicit installation of a stand-alone Connect Server
(install.product.0 = Connect Server), the database.unlocked property is irrelevant.

Note
The port will be defined automatically for the MySQL installation. All Connect products selected
in the Silent Installer will automatically be configured to use the MySQL instance running on the
port defined by the database.port property, whether that is the default port 3306 or any other
user-defined port.
A different port is required if 3306 is already taken on that machine by another application.

Case 2: The Connect Server is selected and the MySQL Product is not selected
In this case, an external database must be configured for the Server (and other Connect
products included in the Silent installation) to be used.


2a: Configuring an external MySQL database
To configure an external MySQL database, the following properties should be defined:
database.type = mysql (required)
database.host = <host name or IP address> (default value is localhost, otherwise required)
database.port = <port> (default value is 3306, otherwise required)
database.username = <username> (default value is root, otherwise required)
database.password = <password> (required)
database.schema = <schema name> (default value is objectiflune, optional)
2b: Configuring an external Microsoft SQL Server database

Note
Since PlanetPress Connect version 1.6 the minimum required version of the MS SQL
Server is SQL Server 2012.

To configure an external Microsoft SQL Server database, the following properties should be
defined:
database.type = Microsoft SQL Server (required)
database.host = <host name or IP address> (default value is localhost, otherwise required)
database.port = <port> (default value is 1433, otherwise required)
database.username = <username> (default value is sa, otherwise required)
database.password = <password> (required)
database.schema = <schema name> (default value is objectiflune, optional)
Repository selection
The Connect installation process requires a repository from which the installer copies (locally)
or downloads (online installation) all selected Connect products.


In Silent Installer mode, the installation process looks for the property product.repository in
the install.properties file and then proceeds with the following steps:
1. If the property exists, and its value contains an existing file location with a repository, the
installer will attempt to install from that repository.
2. If the property exists, and its value starts with http://, the installer will attempt to install from
that location. It will fail if no repository can be found at this location.
3. If none of the conditions mentioned in the previous steps are met, the installer will look
next for a local "repository" folder (located in the same folder as the running Installer
(Setup) executable file). If a repository is found, the installer will attempt to install from that
repository.
4. As a last resort, the installer will attempt to install from the default Connect Update Site
URL.
Examples
product.repository = http://192.168.79.73/Connect/Version_01/repository
product.repository = C:\\iso\\2.0.0.39695_unpacked\\repository
Locale definition
It is possible to define the Locale which affects the installation language and installed locale for
Connect products by using the following properties in the install.properties file:
user.language
user.country
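For example, assuming a German installation is wanted (de-DE is in the supported list below),
the install.properties file could contain:

user.language = de
user.country = DE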
Locales supported by Connect
The Connect Setup supports a dedicated list of Locales, which is saved in the preinstall.ini file.
Each entry consists of a language tag and a country tag, formatted by the pattern
<language>-<country>.
The current list of supported Locales is found below, but it may be enhanced in future releases:
• en-US (English, US)
• de-DE (German, Germany)
• fr-FR (French, France)
• ja-JP (Japanese, Japan)
• zh-CN (Chinese, China)
• zh-HK (Chinese, Hong Kong)
• zh-MO (Chinese, Macau)
• zh-TW (Chinese, Taiwan)
• it-IT (Italian, Italy)
• pt-BR (Portuguese, Brazil)
• es-419 (Spanish, Latin America)

Locale selection by defining user.language and user.country
If both user.language and user.country are defined in the install.properties file, the
combination must match exactly one of the supported locales, otherwise the Installer will exit
with an error.
For example, user.language = fr and user.country = CA will cause an error since fr-CA is not
in the list of supported Locales.
Locale selection by defining only user.language
If only user.language is defined in the install.properties file, the Installer will attempt to find a
Locale in the list which starts with the given language code. The first match is selected for
installation. If no match is found, the Installer will exit with an error.
For example:
user.language = zh, will result in an installation with the Locale zh-CN
user.language = no, will result in an error
Default Locale selection
If neither user.language nor user.country is defined in the install.properties file, the Installer will
select a default Locale:
1. If the System Locale is in the list of supported Locales, it will be selected.
2. Otherwise, if there is an entry in the list of supported Locales which matches the System
language, it will be selected (e.g. on a fr-CA system, fr-FR is selected).
3. As a last resort, the first Locale in the preinstall.ini is selected (usually en-US).
Getting the exit code of a silent installation
If getting the exit code of a silent installation is desirable, use the following procedure.
1. Create a new local folder on the machine (or VM) on which Connect shall be installed
and copy/extract the contents of the Connect ISO into this folder.
2. Open a command prompt with Administrator privileges and cd into this local folder.
3. Run this command to unpack the contents of the Connect Setup executable (as a sample,
we use the PReS Connect brand):
PReS_Connect_Setup_x86_64.exe -nr -gm2 -InstallPath=".\\"
4. In the local folder, the repository subfolder should now be located next to the
preinstall.exe, installer.exe and other Installer files.
5. Create the install.properties file for silent installation in the local folder.
6. With a batch file calling preinstall.exe and then querying the %errorlevel%, silent
installation can be started and the exit code can be evaluated. See the sample batch file
below.
Exit codes
0 = Success
1 = General Error in preinstall (e.g. unsupported settings for user.language / user.country; for
the reason, see preinstall_err.log)
2 = Unknown Error in preinstall
10 = General Error in Installer application (for reason see OL_Install_.log)


Sample batch file
@echo off
preinstall.exe
if errorlevel 10 goto err_installer
if errorlevel 2 goto err_unknown
if errorlevel 1 goto err_preinstall
echo Success
goto:eof
:err_installer
echo "Installer error - see OL_Install_.log"
goto:eof
:err_unknown
echo "Unknown preinstall error - see preinstall_err.log"
goto:eof
:err_preinstall
echo "Preinstall error - see preinstall_err.log"
goto:eof

Activating a License
PlanetPress Connect and PlanetPress Workflow 8 include separate 30-day trial periods
during which it is not necessary to have a license for reviewing basic functionality. If a
modification to the license is required, such as an extension to the trial period, or extra
functionality or plugins (e.g., the PReS Plugin for Workflow 8), then a new activation code
will need to be requested.
Obtaining the PlanetPress Connect Magic Number
To obtain an activation file the OL™ Magic Number must first be retrieved. The Magic Number
is a machine-specific code that is generated based on the computer's hardware and software
using a top-secret Objectif Lune family recipe. Each physical or virtual computer should have a
different Magic Number, and thus requires a separate license file to be functional.
To get the PlanetPress Connect Magic Number, open the PlanetPress Connect Designer
application:

• Open the Start Menu.
• Click on All Programs, then Objectif Lune, then PlanetPress Connect.
• Open the PlanetPress Connect Designer [version] shortcut.
• When the application opens, if it has never been activated or the activation has expired,
the Software Activation dialog appears:
• License Information subsection:
  • Magic Number: Displays the PlanetPress Connect Magic Number.
  • Copy to Clipboard: Click to copy the Magic Number to the clipboard. It can
  then be pasted in the activation request email using the CTRL+V keyboard
  shortcut.
  • Licensed Products subsection:
    • Name: Displays the name of the application or module relevant to this
    activation.
    • Serial Number: Displays the activation serial number if the product has been
    activated in the past.
    • Expiration Date: Displays the date when the activation will expire (or the
    current date if the product is not activated).
    • Web Activations: Click to be taken to the online activation page (not yet
    functional).
• End-User License Agreement (appears only when loading a license file):
  • License: This box displays the EULA. Please note that this agreement is
  legally binding.
  • I agree: Select to accept the EULA. This option must be selected to install the
  license.
  • I don't agree: Select if you do not accept the EULA. You cannot install the
  license if this option is selected.
• Load License File: Click to browse to the .olconnectlicense file, once it has been
received.
• Install License: Click to install the license and activate the software (only available
when a license file is loaded).
• Close: Click to cancel this dialog. If a license file has been loaded, it will not
automatically be installed.


Note
The Software Activation dialog can also be reached through a shortcut named Software
Activation, located in All Programs, then Objectif Lune, then PlanetPress Connect. Since it
does not load the full software, it is faster to access for the initial activation.

Requesting a license
After getting the Magic Number, a license request must be made for both PlanetPress Connect
and Workflow 8:
• Customers must submit their Magic Number and serial number to Objectif Lune via the
Web Activations page: http://www.objectiflune.com/activations. The OL Customer Care
team will then send the PlanetPress Connect license file via email.
• Resellers can create an evaluation license via the Objectif Lune Partner Portal by
following the instructions there: http://extranet.objectiflune.com/

Note that if you do not have a serial number, one will be issued to you by the OL Activations
team.
Accepting the license will activate it, after which the PlanetPress Connect services will need to
be restarted. Note that in some cases a service may not restart on its own. To resolve this
issue, restart the computer, or start the service manually from the computer's Control Panel.
Activating PlanetPress Workflow 8
PlanetPress Workflow 8 uses the same licensing scheme as PlanetPress Connect. There are
two ways of activating the license for Workflow 8 after saving it to a suitable location:
• If only PlanetPress Workflow 8 is installed, double-click the license file to open the
PlanetPress Workflow 8 License Activation dialog. Applying the license here activates all of
the Workflow 8 components.
• If you have both PlanetPress Workflow 8 and PlanetPress Connect installed, it will not be
possible to double-click on the license file, as this will always open the PlanetPress
Connect Activations Tool. Instead, open PlanetPress Workflow 8 manually and apply the
license through the activations dialog within.


Activating PlanetPress Connect
To activate PlanetPress Connect, simply save the license file somewhere on your computer
where you can easily find it, such as on your desktop. You can then load the license by
double-clicking on it, or through the start menu:
• Open the Start Menu.
• Click on All Programs, then Objectif Lune, then PlanetPress Connect.
• Open the PlanetPress Connect Designer [version] shortcut. The “PlanetPress Connect
Software Activation” tool displays information about the license and the End-User License
Agreement (EULA).
• Click the Load License File button.
• Read the EULA and click the I agree option to accept it.
• Click Install License to activate the license. The license will then be registered on the
computer and you will be able to start using the software.

Warning
After installation, a message will appear warning that the Server services need to be restarted.
Click OK to proceed.

Migrating to a new workstation
The purpose of this document is to provide a strategy for transferring a Connect installation to a
new workstation. The following guide applies to OLConnect v1.x and Workflow v8.x.
Before installing the software
Before upgrading to a new version, even on a new workstation, consult the product's release
note to find out about new features, bug fixes, system requirements, known issues and much
more. Simply go to the product page and look for "Release notes" in the Downloads area.
You should also consult the following pages for some technical considerations before
installing:
• Network Considerations
• Database Considerations
• Environment Considerations
• Installation Pre-Requisites
• Antivirus Exclusions

Downloading and Installing the Software
In order to migrate to a new workstation, the software must already be installed on the new
workstation. Follow the Installation and Activation Guide to download and install the newest
version of PlanetPress Connect on the new workstation.
Backing Up files from the current workstation
The first step in migrating to a new workstation would be to make sure all necessary production
files and resources are backed up and copied over to the new system.

Technical
Although it is not necessary to convert all of your documents when upgrading to the latest version,
we strongly recommend doing so. It is considered best practice to convert the documents to the
installed version and then re-send them to the Workflow Tools.

Backing up Workflow files
To save all Workflow-related files, backup the entire working directory:
C:\ProgramData\Objectif Lune\PlanetPress Workflow 8
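As an illustration only, the whole working directory can be copied to a backup location with
robocopy from a command prompt; the destination D:\Backup\PlanetPress Workflow 8 is just a
placeholder for a drive or network share of your choosing:

rem /E copies all subfolders, including empty ones; adjust the destination as needed.
robocopy "C:\ProgramData\Objectif Lune\PlanetPress Workflow 8" "D:\Backup\PlanetPress Workflow 8" /E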
Here are a few important points when transferring these files:
• If you are upgrading to the latest version of Connect, it is recommended to open each
template in Designer and produce a proof to make sure the output is correct. Then send the
template with its data mapper, job and output preset files to Workflow by clicking on File >
Send to Workflow…
• If you still use PlanetPress 7 legacy documents, PTK files can be imported by clicking on
the Workflow tool button at the top left corner of the Workflow tool interface. If copying the
PlanetPress Workflow 8 folder directly, it's important to delete any file with the .ps7
extension so as to refresh the PostScript file for the new workstation.


• The Workflow configuration file itself is named ppwatch.cfg and is backed up with the
folders. However, it needs to be re-sent to the Service to be used. To do this, rename the
file to .OL-Workflow, open the file with the Workflow tool, and send the configuration.
• Locate any Custom Plugins (.dll) in the following folder on the old workstation and import
them onto the new workstation:
C:\Program Files (x86)\Common Files\Objectif Lune\PlanetPress Workflow 8\Plugins
To import the plugins:
  • Start the Workflow Configuration Tool
  • Click on the Plug-in Bar
  • Click on the down-pointing triangle under the Uncategorized group
  • Select Import Plug-in and select the .dll file.
• Import external scripts used by the Run Script plugin, making sure they reflect the same
paths as on the previous workstation.
• Install any external applications, executables and configuration files used by the External
Program plugin, making sure they reflect the same paths as on the previous workstation.
• Reconfigure local ODBC connections (i.e. create local copies of databases or recreate
required DSN entries).
• Back up and import other custom configuration files and Microsoft Excel Lookup files,
making sure they reflect the same paths as previously.
• Reinstall any required external printer drivers and recreate all Windows printer queues
and TCP/IP ports.
• On the new workstation, if the "TCP/IP Print Server" service is running in Windows, it is
recommended to disable that service so that it does not interfere with the Workflow
LPD/LPR services.
• Configure the Workflow services account as in the previous installation. If accessing,
reading and writing to network shares, it is recommended to use a domain user account
and make it a member of the local Administrators group on the new workstation. Once the
user account has been chosen:
  • Click on Tools in the Workflow Configuration menu bar
  • Click Configure Services
  • Select the user account
• If required, grant permissions to other machines (Designer clients and other servers) to
send documents and jobs to the new server:
  • Click on Tools in the Workflow Configuration menu bar
  • Click on Access Manager
  • Grant the necessary permissions to remote machines
  • Restart the Workflow Messenger service
• Reconfigure the Workflow Preferences as previously by clicking on the Workflow button
in the top left corner and clicking on Preferences:
  • Reconfigure the Server Connection Settings under Behavior > OL Connect
  • For PlanetPress Capture users, reconfigure the PlanetPress Capture options under
  Behavior > PlanetPress Capture
  • Reconfigure each of the plugins, where necessary, under Plug-in, as previously
  • Capture OnTheGo users may want to enable the Use PHP Arrays option under
  Plug-in > HTTP Server Input 1
  • Send the configuration to the local Workflow service

Backing up Connect Resources
The following resources are used by Connect and can be backed up from their respective
folders:
• Job Presets (.OL-jobpreset):
C:\Users\<username>\Connect\workspace\configurations\JobCreationConfig
• Output Presets (.OL-outputpreset):
C:\Users\<username>\Connect\workspace\configurations\PrinterDefinitionConfig
• OL Connect Print Manager Configuration files (.OL-ipdsprinter):
C:\Users\<username>\Connect\workspace\configurations\PrinterConfig
• OL Printer Definition Files (.OL-printerdef):
C:\Users\<username>\Connect\workspace\configurations\PrinterDefinitionConfig
• OMR Marks Configuration Files (.hcf):
C:\Users\<username>\Connect\workspace\configurations\HCFFiles

Other Resources


• OL Connect Designer Templates, DataMapper or Package files, copied from the folder
where they reside.
• All PostScript, TrueType, OpenType and other host-based fonts used in templates must
be reinstalled on the new workstation.
• Import all dynamic images and make sure their paths match those on the old server.
• Make sure the new workstation can also access network or remote images, JavaScript,
CSS, JSON, and HTML resources referenced in the Connect templates.

Secondary Software and Licenses
The following only applies to specific secondary products and licenses that interact with or are
integrated into the main product.
Image, Fax and Search Modules
• Reconfigure the Image and Fax outputs with the new host information.
• Import the Search Profile and rebuild the database in order to generate the database
structure required by the Workflow.

Capture
• Download the latest version of the Anoto PenDirector.
• Before installing the PenDirector, make sure the pen’s docking station isn’t plugged into
the server. Then install the PenDirector.
• Stop the Messenger 8 service on the old and new servers from the Workflow menu bar >
Tools > Service Console > Messenger > right-click and select Stop.
• Import the following files and folders from the old server into their equivalent location on
the new server:
C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\capture\PPCaptureDefault.mdb
C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\DocumentManager
C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\PGC
• If Capture was previously using an external MySQL or Microsoft SQL Server, reconfigure
the ODBC connection details as previously from the Workflow Preferences by clicking on
the Workflow button in the top left corner and clicking on Preferences, then reconfigure the
PlanetPress Capture options under Behavior > PlanetPress Capture > Use ODBC
Database.
• Start the Messenger 8 service on the new server from the Workflow menu bar > Tools >
Service Console > Messenger > right-click and select Start.

OL Connect Send
• Re-install OL Connect Send on the new Workstation. This should reinstall the OL Connect
Send plugins in the Workflow Tool
• Reconfigure the Server URL and port during the OL Connect Send Printer Driver setup
• Re-run the OL Connect Send printer driver setup on client system and select the Repair
option to point the clients to the new Server URL.
Configuring the Connect Engines
Any changes made to the Server preferences require the OLConnect_Server service to be
restarted to take effect.
• Stop the OLConnect_Server service from Control Panel > Administrative Tools >
Services > OLConnect_Server > Stop
• Configure the Merge and Weaver Engines scheduling preferences as in the previous
installation:
  • Open the Server Configuration from:
  C:\Program Files\Objectif Lune\OL Connect\Connect Server\ServerConfig.exe
  • Configure the Merge and Weaver engines preferences under Scheduling (see
  Engine configuration)
  • Configure any other options for the Clean-up Service

• Configure the minimum (Xms) and maximum (Xmx) memory utilization for the Server,
Merge and Weaver engines as previously or better (see Memory per engine). A sketch of
what these settings might look like follows below. Edit the Xms and Xmx fields in the
following configuration files:
  • C:\Program Files\Objectif Lune\OL Connect\Connect Server\Server.ini
  • C:\Program Files\Objectif Lune\OL Connect\MergeEngine\Mergeengine.ini
  • C:\Program Files\Objectif Lune\OL Connect\weaverengine\weaverengine.ini
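As a sketch only, and assuming the .ini files use the usual Java-style memory switches that the
Xms/Xmx field names suggest, the relevant lines might look as follows. The values shown are
placeholders and should be chosen according to the available RAM and the settings used on
the previous installation; leave all other lines in the .ini files unchanged:

-Xms512m
-Xmx2920m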

• Now start the OLConnect_Server service.

Configuring the Server Extensions
In the case where the OLConnect MySQL is installed on the new Master Server, it is important
to reconnect all Server Extension systems to the new Master Server.
Perform the following actions on each Server Extension:
• Stop the OLConnect_ServerExtension service from Control Panel > Administrative
Tools > Services > OLConnect_ServerExtension > Stop
• Open the Server Extension Configuration from:
C:\Program Files\Objectif Lune\OL Connect\Connect Server Extension\ServerExtension.exe
• Click on Database Connection and configure the JDBC Database connection settings so
that the hostname points to the new Master server
• Click on Scheduling and type in the location of the new Master server
• Start the OLConnect_ServerExtension service

Transferring Software Licenses
Once all the above resources have been transferred over to the new server, it is recommended
to thoroughly test the new system with sample files under normal production load to identify
points of improvement and make sure the output matches the user’s expectations. Output
generated at this point will normally bear a watermark, which can be removed by transferring
licenses from the old server to the new one.
• To transfer Connect and Workflow licenses, the user is usually required to complete a
License Transfer Agreement, which can be obtained from their local Customer Care
department.
• Upgrades cannot be activated using the automated Activation Manager. Contact your
local Customer Care department.


To apply the license file received from the Activation Team:
• Start the PReS Connect, PlanetPress Connect or PrintShopMail Connect Software
Activation module:
C:\Program Files\Objectif Lune\OL Connect\Connect Software Activation\SoftwareActivation.exe
• Click on Load License File to import the license.OLConnectLicense.
• Start the Software Activation module on the Extension servers, where applicable.
• Click on Load License File to import the same license.OLConnectLicense.
• Restart the OLConnect_Server service, and restart the OLConnect_ServerExtension
service on the Extension servers, where applicable.
• The number of Expected Remote Merge and Weaver engines should now be
configurable in the Connect Server Configuration module (C:\Program Files\Objectif
Lune\OL Connect\Connect Server Configuration\ServerConfig.exe).

To apply the PlanetPress Capture License
• Open the Workflow Configuration.
• Click on Help on the Menu Bar and click on PlanetPress Capture License manager to
import your license.

Uninstalling PlanetPress Connect from the previous workstation
It is recommended to keep the previous install for a few days until everything is completed.
However, once your transition is successful and complete, the OL Connect software must be
uninstalled from the original server.

Information about PlanetPress Workflow 8
If you wish to use PlanetPress Workflow (automation) in conjunction with PlanetPress Connect,
you will need to install PlanetPress Workflow 8.8 onto the same machine. Workflow 8.8 is
provided through a separate installer which is available on CD or for download as follows:
• If you are a Customer, the installer can be downloaded from the Objectif Lune Web
Activations page: http://www.objectiflune.com/activations
• If you are a Reseller, the installer can be downloaded from the Objectif Lune Partner
Portal: http://extranet.objectiflune.com/


PlanetPress Workflow 8 can be installed in parallel on the same machine as an existing
PlanetPress® Suite 7.x installation.
Note however:
• If both versions need to be hosted on the same machine, PlanetPress Workflow 8.8 must
always be installed after the legacy PlanetPress® Suite 7.x installation.
• When uninstalling PlanetPress Workflow 8.8, you may be prompted to repair your legacy
PlanetPress® Suite 7.x installation.
• If PlanetPress Workflow 8.8 has been installed alongside PlanetPress® Suite 7, Capture
can no longer be used with Workflow 7. The plugins are now registered uniquely to
Workflow 8.8 and the messenger for Workflow 7 is taken offline. It is only then possible to
use Capture from PlanetPress Workflow 8.8.
• PlanetPress Workflow 8.8 and PlanetPress® Suite Workflow 7 cannot run
simultaneously, since only one version of the Messenger service can run at a time. In fact,
no two versions of Workflow can be run simultaneously on the same machine, regardless
of version.
• It is possible to switch between versions by shutting down one version's services and
then starting the other. However, this is not recommended. There are no technical
limitations that prevent processes from previous PlanetPress Suite Workflow versions (as
far back as Version 4) from running on PlanetPress Workflow 8, removing the need to run
both versions.

For more information on the licensing of Workflow 8.8, please see Activating your license.

Upgrading from PlanetPress Suite 6/7
Note
This document is intended for people who already received their upgrade to PlanetPress Connect.
They should already have their new serial number(s) in hand and the PlanetPress Connect installers.

With the release of PlanetPress Connect, Objectif Lune’s innovative new technology, existing
users of PlanetPress Suite version 7 and 6 have the possibility to migrate to an introductory
version of PlanetPress Connect called “PlanetPress Connect Print-Only”.


This migration benefits existing users in many ways and has limited impact on their current
processes and how they use PlanetPress Suite version 7 and 6.
This document provides information on the migration process and the requirements and
considerations for existing PlanetPress Suite users to upgrade to the latest generation of our
products.

Note
PlanetPress Connect Print-Only is available for existing users of PlanetPress version 7 or 6 with a
valid OL Care agreement. If you are using a previous version or are not covered by OL Care,
please contact your reseller or your Objectif Lune Account Manager for more information.

What does PlanetPress Connect contain?
PlanetPress Connect is comprised of the following modules:
• PlanetPress Workflow 8. This is the natural evolution of PlanetPress Suite Workflow 7
(Watch, Office or Production). PlanetPress Workflow 8 is very similar to the PlanetPress
Suite Workflow 7 version but contains some new features and has the ability to run
PlanetPress Connect jobs, as well as PlanetPress Suite, PrintShop Mail Suite and PReS
Classic documents.
  • Imaging for PlanetPress Connect is available as an option. It contains:
    • PlanetPress Fax
    • PlanetPress Image
    • PlanetPress Search
  • PlanetPress Capture is still supported in PlanetPress Workflow 8, but only with
  documents created with PlanetPress Suite Design 7.
• PlanetPress Connect Designer. This is the design tool based on completely new
technology. It is not backwards compatible and therefore cannot open PlanetPress Suite
Design 7 documents. If you want to continue editing those documents you can keep doing
so in PlanetPress Suite Design 7.
• PlanetPress Connect Server. This is the core of the Connect technology. This new
module automates the merging of data with your new templates and generates the output.
It is required for PlanetPress Workflow 8 to handle templates created with the
PlanetPress Connect Designer. It can be installed on the same or a different machine as
PlanetPress Workflow 8.


IMPORTANT: PlanetPress Connect does not contain the PlanetPress Design 7.
GOOD NEWS: PlanetPress Connect does not need any printer licenses to print from
PlanetPress Connect or PlanetPress Suite. It can also print PrintShop Mail 7 and PReS
Classic documents if these programs are licensed.
You can keep everything you have
The first thing to know is that you can keep your current PlanetPress Suite Workflow 7
configuration and your PlanetPress Suite Design documents. When upgrading to PlanetPress
Connect, they will remain functional.
Please note that PlanetPress Suite Workflow 7 and PlanetPress Workflow 8 cannot run at the
same time. See Information about Connect Workflow 8 for information about these limitations.
The only exception is the PlanetPress Suite Design tool that you can continue to use as it is not
part of PlanetPress Connect.
For customers upgrading to the free “Print only” version: if you wish to continue your OL Care
engagement, the next year will be priced the same as your current agreement.
For customers upgrading to the full version of PlanetPress Connect, with or without new options,
the next year of OL Care will be priced at the value of the new software you upgraded to.
Before going into any further details, please read the following section carefully.
PlanetPress Connect installation considerations
The PlanetPress Suite could run on a computer with a minimum of only 1GB of RAM available.
The PlanetPress Connect Server with PlanetPress Workflow 8, by default, requires 8 GB of
RAM, but if you intend on using the new PlanetPress Connect Designer on the same computer,
you should consider having at least 12 GB of RAM available. See System requirements.
Distributed installation or not
You can decide to install PlanetPress Connect modules all on the same computer or have each
module on a different computer. Reasons for this could be:
• There is insufficient memory in the computer currently running PlanetPress Workflow 8 to
also run PlanetPress Connect Server.
• You want to use a more powerful computer with more RAM and more cores to run the
Server to achieve maximum performance.

What do I gain by upgrading to PlanetPress Connect?
PlanetPress Watch users
When upgrading to PlanetPress Connect, you receive key features of PlanetPress Office such
as the following:
• Ability to input data from PDF
• Ability to print your PlanetPress Suite documents on any Windows printer (no need for
printer licenses)
• Ability to create standard PDF output from your PlanetPress Suite documents
• Even if you don’t recreate your existing PlanetPress Suite documents, you can easily
change your workflow to convert your output to PDF, then output them in PCL to any
device supporting it.

Note
If you were a PlanetPress Production user, you retain all functionalities within
PlanetPress Workflow 8. These are automatically imported during the activation (see
below).

Re-purpose your existing documents
IMPORTANT: PlanetPress Suite users covered by a valid OL Care contract receive a “Print
only” version of PlanetPress Connect which can produce printed output. If you also own
PlanetPress Imaging, which can produce PDF, Tiff and other archive formats, you will also
receive a new version.
The full version of PlanetPress Connect can open your company to the digital world by
enabling you to send HTML responsive emails as well as creating dynamic responses and
interactive web pages. All that for a minimal fee. For more information on the full version of
PlanetPress Connect, contact your reseller or your Objectif Lune Account Manager.


Upgrade to the full multi-channel version and expand onto the Web
If you choose to take the optional “multi-channel” upgrade, you can start right away to reuse the
content of your existing documents and map it onto responsive documents that can be sent by
email in full HTML glory and/or make them available as native HTML web pages using the
latest CSS/JavaScript features.
IMPORTANT: If you owned them, you must also upgrade your Imaging modules to use the new
PReS version.
Create new documents and integrate them into your workflow at your own pace
You can start benefiting from the innovative technology of the new PlanetPress Connect
Designer right away by designing new documents, or re-doing existing ones at your own pace.
With PlanetPress Connect Print-Only, you can now:
• Use the new Data Mapper to easily map any input data into a clean data model that any
designer can use
• Easily create documents with tables that spread over multiple print pages, respecting
widow and orphan rules, and displaying sub-totals and totals properly
• Have text that wraps around images

Upgrade steps
1. To upgrade to PlanetPress Connect, the first step is to stop your PlanetPress Workflow
services. You can do so from the PlanetPress Workflow configuration tool or from the
Windows Service Management console.
2. Then, using the PlanetPress Connect setup, install the Designer and/or Server on the
appropriate computers. Then, using the PlanetPress Workflow 8 setup, install
PlanetPress Workflow and/or PlanetPress Image on the appropriate computers. (See the
installation and activation document for more details)
3. If you installed PlanetPress Workflow 8 on the same computer where you had
PlanetPress Suite Workflow 6 or 7, you can use the Upgrade Wizard to import your:
• PlanetPress Workflow:
  • Processes configuration
  • PlanetPress Suite compiled documents
  • Service configuration
  • Access manager configuration
  • Custom plug-ins
• PlanetPress Fax settings
• PlanetPress Image settings
• PlanetPress Search profiles
• Printer activation codes
• PlanetPress Capture database
• PlanetPress Capture pen licenses
• Custom scripts
• Content of your virtual drive
• PlanetPress Messenger configuration

4. If you installed PlanetPress Workflow 8.8 on a different computer, please see "How to
perform a Workflow migration" on page 70 for help importing all those settings, if you wish
to import them.
5. To launch the Upgrade wizard, open the PlanetPress Workflow 8 configuration tool and,
from the Tools menu, launch the Upgrade Wizard.
IMPORTANT: Before you start this process, make sure you have a backup of your current
installation/computer.


6. Then select your upgrade type:


7. Then select the product from which you wish to upgrade:

8. If you selected to do a Custom upgrade, select the required options:


9. Then finally review the log in the final dialog for details on how it went:


10. After that you will need to get the activation file for your product.
To obtain your activation, download the PlanetPress Connect installer from the Web
Activation Manager, follow the instructions for the installation using the serial number
provided to you. You can activate your license through the Web Activation Manager.
11. From now on, if you need to modify your PlanetPress Design documents, simply open
PlanetPress Design 6 or 7, edit your document and send the updated version to
PlanetPress Workflow 8. In order to do that:
• If you have PlanetPress Design on the same computer as PlanetPress
Workflow 8, you need to save the documents to PTK by using the “Send to” menu,
then “PlanetPress Workflow”, and there use the “Save to file” button. Then, from the
PlanetPress Workflow 8 configuration tool, in the “Import” menu, select “Import a
PlanetPress Document” and select the previously saved file.
• If you have PlanetPress Design on one computer and PlanetPress Workflow 8
on another, you can simply use the “Send to” menu in the Designer and select the
PlanetPress Workflow 8 to which you want to send the PlanetPress Design
document.


How to perform a Workflow migration
What do you need to consider when upgrading from PlanetPress Suite 7 to PlanetPress
Connect Workflow 8.8 on a new computer?
Installing and Activating Workflow 8.8 on a new computer
Points to consider:
• Before installing, be sure to read the Installation and Activation Guide. There you will find
detailed Connect Workflow installation steps as well as system requirements, notes on
license activation and much more.
• It is recommended that you retain your existing PlanetPress Suite installation for a period
of time after the PlanetPress Connect Workflow 8.8 installation. We recommend this
particularly when undertaking a migration from one to the other. Once the migration has
completed, you should uninstall PlanetPress Suite from your original installation. In the
meantime, a fresh installation of PlanetPress Connect will run for 30 days without
requiring an activation code, to simplify the migration process.
• Request new activation codes for your software licenses (a License Transfer Agreement
needs to be filled out and signed). Contact your local Activations Department.
www.objectiflune.com/activations
• Please note that PlanetPress Suite Workflow 7 and PlanetPress Workflow 8 cannot run at
the same time. See Information about Connect Workflow 8 for information about these
limitations. The only exception is the PlanetPress Suite Design tool, which you can
continue to use as it is not part of PlanetPress Connect.

Printer Licences
If you are currently using Printer Licenses under PlanetPress Suite 7 and wish to continue
doing so in PlanetPress Connect Workflow 8.8, there are a few ways in which you can reinstall
those printer activation codes onto PlanetPress Connect Workflow 8.8. They are as follows:
• If you retained the .pac file (printer activation codes) from your previous installation,
double-click on that file from within your new computer, and the printers will be activated.
• If you did not retain the .pac file, you can export a new printer activation code. This is done
from the PlanetPress Suite Designer Help > Printer Activation menu option. When the
"Activate a printer" dialog is launched, right-click within it and select the Export context
menu option, then save the file on the new computer. Double-clicking on the .pac file will
then activate all of your printers on the new computer.
• Login to our Web Activation Manager (www.objectiflune.com/activations) using your
customer number and password to get your Printer Activation Codes.
• If you do not have access to the computer on which PlanetPress Suite was previously
installed, print a Status Page for each printer from your Connect Workflow 8
Configuration. Do this via the Tools > Printer Utilities menu option. Select “Print Status
Page” and then select your printers from the list.
Email the Status Page(s) to activations@ca.objectiflune.com and you will receive a .pac
file in return, with which you can activate your printer(s).

Documents and Resources
PlanetPress Suite Documents and Resources
• Backup all your PlanetPress Suite Design documents from your old computer and copy
them onto the new computer. The files use the extension .ppX, where X is the version
number of the PlanetPress Suite that created the files.
The documents do not have to be in any specific folder.
• Back up the entire directory "C:\ProgramData\Objectif Lune\PlanetPress Suite
7\PlanetPress Watch\Documents".
This folder contains all the PlanetPress Design documents and compiled forms (*.ptk and
*.ptz).
Paste the files onto the new computer in the following folder:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\Documents"
• Back up the latest .pwX (PlanetPress Workflow Tools Configuration) file, found here:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\".
Paste it onto the new computer in the following folder:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\"
A sketch of these copy steps, run from a command prompt, follows this list.

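The following is a minimal sketch of those copy steps, run from a command prompt on the new
computer, and assuming the old computer's folders have been made reachable (for example
via a network share or a backup copied locally):

rem Copy the Design documents and compiled forms (*.ptk, *.ptz).
robocopy "C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\Documents" "C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\Documents" /E
rem Copy the latest PlanetPress Workflow Tools Configuration file (*.pw7).
copy "C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\*.pw7" "C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\"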
There are several ways you can import Documents into PlanetPress Workflow. They are as
follows.
1. In Connect Workflow go to File > Import > PlanetPress Document … and select the .ptk
document you wish to import.
These files will most likely be found in the Documents folder on the PlanetPress Suite
computer:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\Documents"


2. Copy all the PlanetPress Suite 7 Documents and Compiled forms (*.ptk and *.ptz) from
the Documents folder on the PlanetPress Suite computer and paste them into the
equivalent folder on the Connect Workflow Computer.
The PlanetPress Suite 7 folder would be "C:\ProgramData\Objectif Lune\PlanetPress
Suite 7\PlanetPress Watch\Documents".
The PlanetPress Connect Workflow 8 folder will be "C:\ProgramData\Objectif
Lune\PlanetPress Workflow 8\PlanetPress Watch\Documents"
3. Use the File > Send To menu option in PlanetPress Suite Designer and select the
PlanetPress Connect Workflow 8 to which you want to send the PlanetPress Suite
Designer document.
This should work with PlanetPress Suite versions 6 and 7.
Make sure that ports 5863 and 5864 are not blocked by firewall on either machine.
Also make sure you add the PlanetPress Suite machine’s IP address to the permissions
list in Connect Workflow 8 from Tools > Access Manager.
Further information about Workflow Access Manager can be found here: Access
Manager.
Windows Operating System Steps:
• Install all the Windows printer queues from the old computer, making sure they are named
the same.
• If your existing documents referenced any local dynamic image resources in a folder or in
Local Host, make sure that you import them onto the new computer as well, or make them
available on a network accessible drive.
• Any special PostScript or TrueType fonts used will also need to be installed on the new
computer.
• Verify that you have access to any other resources that the PlanetPress Suite used. This
includes network folders, printers, third party software and the like.

Workflow Plug-ins
Back up any custom PlanetPress Suite Workflow configuration Plug-ins (.dll) and copy them
onto the new computer.
The PlanetPress Suite Workflow plug-ins folder can be found here:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\Plugins".
Make sure that you copy only the custom plug-ins.


Alternatively, you can download custom plug-ins from this link onto the new computer.
Once you've copied your PlanetPress Suite Workflow configurations to Connect Workflow, you
can confirm their availability through the Plug-in Bar Uncategorized category. There you will
find all the Custom plug-ins that have been installed.
Missing plug-ins will be represented in Workflow steps through the use of a "?" icon. Such as in
the following image, which shows that the "TelescopingSortPlugin" is not installed.

To import a plugin:
1. Click on the popup control in the Plug-in Bar.
2. Select Import Plugin.
3. Browse to the location of the plug-in DLL file.
4. Click on Open.
5. The new plug-in should appear in the Plug-in Bar Uncategorized category.
Configuring PlanetPress Connect Workflow 8
• Reconfigure any settings that may need to be applied to the PlanetPress Suite
Messenger and PlanetPress Workflow Tools LPD services using the Access Manager.
• All PostScript and TrueType host-based fonts must be reinstalled. Make sure you restart
the computer after this step.
• If necessary, reconfigure local ODBC connections (i.e. create local copies of databases
or recreate required DSN entries).
• Manually install all external executables that will be referenced by the Connect Workflow
processes in the configuration file. If possible, retain the local path structure as used on
the older installation.
• If the Windows "TCP/IP Print Server" service is running on the new computer, it is
recommended that you disable that service so that it does not interfere with the
PlanetPress LPD/LPR services.
• If you are using images from a virtual drive, copy the entire contents of
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PSRIP" and paste them onto the new
computer here: "C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PSRIP".
• Make sure to set the user who will run the PlanetPress Services. This is done by going
into Tools > Configure Services. The user will need to have local administration rights in
order to be able to run the services.
For more information, see Users and Configurations.
• Once all these steps have been completed, you will need to import your configuration file.
Find the latest .pwX file located on the old computer, if it is not already copied across to
the new computer. The default location on the old computer is “C:\ProgramData\Objectif
Lune\PlanetPress Suite 7\PlanetPress Watch\”.
On the new computer, use File > Import > Configuration Components, then browse and
find your file. If the file is not visible, change the file type to *.pw7.

PlanetPress Image, Fax and Search
• Reconfigure the PlanetPress Image and PlanetPress Fax outputs with the new host
information.
• You must import the Search Profile and rebuild the database in order to generate the
required database structure.

PlanetPress Capture
• If you have a Capture Solution, please see "How to perform a Capture migration" below.

How to perform a Capture migration
This page provides information on how to conduct a proper migration of a Capture solution.
These steps must be executed after a proper Workflow migration has been completed;
instructions can be found in "How to perform a Workflow migration" on page 70.
Failure to do so will result in unexpected problems.


Note
It is recommended that you first update your PlanetPress Suite to version 7.6 before
cross-grading to PlanetPress Connect.

Using PlanetPress Connect Workflow 8.8 on the same computer as PlanetPress Suite 7.6
Steps to migrate:
1. Update existing installation to PlanetPress Suite version 7.6 if not already done.
2. Install PlanetPress Connect Workflow 8.8 on the same computer.
3. Do the following for both PlanetPress Suite version 7.6 and PlanetPress Connect
Workflow 8.
1. Open Workflow Service Console. This can be done either via the Windows Start
Menu, or from within Workflow Configuration application (via menu option Tools
> Service Console).
2. Select Messenger in the tree list, right-click and select Stop from the context menu.

Note
These steps must be done for both PlanetPress Suite Workflow 7 and
PlanetPress Connect Workflow 8.

4. Copy the file PPCaptureDefault.mdb from this folder:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\capture"
to this folder:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\capture" and
overwrite the existing database.


Note
Prior to PlanetPress Suite 7.6, all Capture patterns, documents and several other
details were contained within the one single database. As of PlanetPress Suite 7.6
a separate database has been used for the patterns alone
(PPCaptureDefault.mdb).

5. Copy the contents of this folder:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress
Watch\DocumentManager"
to this folder:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress
Watch\DocumentManager".
6. Copy the contents of this folder:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\PGC"
to this folder:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\PGC"
7. Restart the PlanetPress Connect Workflow 8 Messenger. To do this,
1. Open Workflow Service Console. This can be done either via the Windows Start
Menu, or from within Workflow Configuration application (via menu option Tools
> Service Console).
2. Select Messenger in the tree list, right-click and select Start from the context menu.
8. Contact your local Objectif Lune activation team and transfer any Pen(s) licenses across.

Using PlanetPress Connect Workflow 8.8 on a different computer to PlanetPress Suite
7.6

Tip
It is safer to migrate outside high peak production since the Capture solution cannot be
run in parallel on two computers.

Once the Capture database has been transferred to the new computer, any update made
on the old computer will be lost unless the migration steps are reproduced again.
Once a Pen has been docked and the data transfer done, its memory is wiped, thus
rendering the parallel mode very hard to reproduce. It is not impossible, but describing how
it can be done is beyond the scope of this migration topic.

Steps to migrate:
1. Update existing installation to PlanetPress Suite version 7.6 if not already done.
2. Install PlanetPress Connect Workflow 8.8 on new computer.
3. The Anoto PenDirector must be installed. If it is not, you can download it from here and
then install it.

Note
It is strongly recommended that you install the latest version of the PenDirector.
Please use the link provided on the previous line.
Do not get any other version of the PenDirector from the Anoto website, as
they will not have been set up correctly for our Capture solution.

Note
Prior to installation, make sure you unplug the Pen docking station from the USB
port on the computer where you are about to install the Anoto PenDirector.

4. Do the following for both PlanetPress Suite version 7.6 and PlanetPress Connect
Workflow 8.
1. Open Workflow Service Console. This can be done either via the Windows Start
Menu, or from within Workflow Configuration application (via menu option Tools
> Service Console).
2. Select Messenger in the tree list, right-click and select Stop from the context menu.

Note
These steps must be done for both PlanetPress Suite Workflow 7 and
PlanetPress Connect Workflow 8.

5. Copy the file PPCaptureDefault.mdb from this folder on the PlanetPress Suite 7.6
computer:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\capture"
to this folder on the new PlanetPress Connect Workflow 8.8 computer:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\capture" and
overwrite the existing database.

Note
Prior to PlanetPress Suite 7.6, all Capture patterns, documents and several other
details were contained within one single database. As of PlanetPress Suite 7.6,
a separate database (PPCaptureDefault.mdb) is used for the patterns alone.

6. Copy the contents of this folder on the PlanetPress Suite 7.6 computer:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\DocumentManager"
to this folder on the new PlanetPress Connect Workflow 8.8 computer:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\DocumentManager".
7. Copy the contents of this folder on the PlanetPress Suite 7.6 computer:
"C:\ProgramData\Objectif Lune\PlanetPress Suite 7\PlanetPress Watch\PGC"
to this folder on the new PlanetPress Connect Workflow 8.8 computer:
"C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\PGC".
8. Restart the PlanetPress Connect Workflow 8 Messenger. To do this:
1. Open Workflow Service Console. This can be done either via the Windows Start
Menu, or from within Workflow Configuration application (via menu option Tools
> Service Console).
2. Select Messenger in the tree list, right-click and select Start from the context menu.
9. Contact your local Objectif Lune activation team and transfer any Pen licenses across.

Known Issues
This page lists important information that applies to PlanetPress Connect 1.8.

Issues with Microsoft Edge browser
The Microsoft Edge browser fails to display web pages when the Workflow's CORS option (in
the HTTP Server Input 2 section) is set to "*". This issue will be resolved in a future release.

Workflow - "Execute Data Mapping" - Issues with multiple
PDFs
If a process uses a Folder Capture step (other than as the first input step) to capture multiple PDFs
within a folder, followed by an Execute Data Mapping step to extract XML files for each PDF,
only the first PDF file will be processed. The workaround is to put the Execute Data Mapping step in a
Branch, just after the Folder Capture step, as sketched below. This issue will be fixed in a later
release. (SHARED-59752)
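
A minimal sketch of the workaround layout (illustrative only; the surrounding process will differ
per implementation):

  ... input task ...
  Folder Capture (captures the multiple PDFs)
    Branch
      Execute Data Mapping (placed inside the Branch, directly after the Folder Capture)
      ... processing of the extracted data ...
    ... remainder of the original process ...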

Installation Paths with Multi-Byte Characters
When installing the Chinese (Traditional or Simplified) or Japanese versions of Connect, if the
user specifies an alternative installation path containing multi-byte/wide-char characters it can
break some of the links to the Connect-related shortcuts in the Start Menu and cause an error to
appear at the end of the installer. The workaround for the moment is to use the default
installation path. The problem will be addressed in a later release.

Switching Languages
Changing the language using the Window>Preferences>Language Setting menu option
does not currently change all of the strings in the application to the selected language. This is a
known issue and will be fixed in a later release.

In the meantime we offer the following workaround for anyone who needs to change the
language:
1. Go to the .ini files for the Designer and Server Configuration:
- C:\Program Files\Objectif Lune\OL Connect\Connect Designer\Designer.ini
- C:\Program Files\Objectif Lune\OL Connect\Connect Server Configuration\ServerConfig.ini
2. Change the language parameter (-Duser.language=) to the required language tag:
en | es | de | fr | it | ja | ko | pt | tw | zh
Only one of the above language tags should be selected. Once saved, Connect will appear in
the selected language at the next start-up.
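
For example, to switch the user interface to French, the end of Designer.ini might look like the
following. Only the -Duser.language line needs to be edited; the surrounding entries shown here
are placeholders that will differ per installation.

  ...
  -vmargs
  -Duser.language=fr
  ...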

GoDaddy Certificates
When installing Connect offline, dialogs allow installing the GoDaddy certificates. Most users
should use the default settings and click Next. In some cases, however, this may not work
correctly. For this reason those users should activate Place all certificates in the following
store and then select the Trusted Root Certification Authorities as the target certificate store.

MySQL Compatibility
After installing Connect 1.8 a downgrade to a Connect version earlier than Connect 1.3 or to a
MySQL version earlier than 5.6.25 is not seamlessly possible. This is because the database
model used in Connect 1.3 and later (MySQL 5.6) is different to that used in earlier versions. If
you need to switch to an older version of Connect / MySQL, it is first necessary to remove the
Connect MySQL Database folder from "%ProgramData%\Connect\MySQL\data" before
installing the older version.
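
If you prefer to remove the folder from an elevated Command Prompt, a sketch follows (stop the
Connect Server and MySQL services first, and back up the folder if you might still need its
contents):

rem Removes the Connect MySQL data folder prior to installing the older version
rd /s /q "%ProgramData%\Connect\MySQL\data"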

PostScript Print Presets
The print presets for PostScript were changed from Version 1.1 onwards meaning that some
presets created in Version 1.0 or 1.0.1 may no longer work.
Any PostScript print preset from Version 1.0 that contains the following will not work in Version
1.8: *.all[0].*
Any preset containing this code will need to be recreated in Version 1.8.

Available Printer Models
Note that only the single Printer Model (Generic PDF) will appear on the Advanced page of the
Print Wizard by default.
To add additional printer models, click on the settings button next to the Model selection
entry box.
Note that the descriptions of some of the printers were updated in version 1.2, meaning that if
you had version 1.n installed, you may find that the same printer style appears twice in the list,
but with slightly different descriptions.
For example, the following printer types are actually identical:
- Generic PS LEVEL2 (DSC compliant)
- Generic PS LEVEL2 (DSC)

External Resources in Connect
There are certain limitations on how external resources can be used in Connect. For example, if
you want to link a file (e.g. CSS, image, JavaScript etc.) from a location on the network, but you
do not want to have a copy of the file saved with the template, you need to do the following:
1. The resource must be located where it can be accessed by the run-as users of all
Servers/Slaves. Failure to do this will cause the image to appear as a red X in the output for all
documents which were merged by engines that could not access the file. The job will
terminate normally and the error will be logged.
2. The file must be referenced via a UNC path, e.g.
file://///w2k8r2envan/z%20images/Picture/Supported/JPG/AB004763.jpg
(see the sketch at the end of this topic).
- UNC paths are required because the services will be unable to access mapped
network drives (a Windows security feature).
- The engine processing the job will otherwise look on the local file system for the direct
file path, leading to the "resource not found" issue mentioned above.

Warning
Important Note: The Designer itself and Proof Print do not use processes that run as
services, and they may find local files with non-UNC paths, which can lead to the false
impression that the resources are correct.
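
As an illustration only (the server, share and file names below are placeholders), HTML or CSS
in a template that references network resources via UNC paths could look like this:

  <!-- illustrative only: "fileserver" and "resources" are placeholder names -->
  <img src="file://///fileserver/resources/images/logo.jpg" alt="Logo">
  <link rel="stylesheet" href="file://///fileserver/resources/css/house-style.css">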

Using Capture After Installing Workflow 8
If PlanetPress Connect Workflow 8 is installed alongside PlanetPress Suite Workflow 7,
Capture can no longer be used within Workflow 7. The plugins are now registered uniquely to
Workflow 8 and the Messenger for Workflow 7 is taken offline. It is only possible to use Capture
from PlanetPress Connect Workflow 8 thereafter.

Capturing Spool Files After Installing Workflow 8
If PlanetPress Connect Workflow 8 is installed alongside PlanetPress Suite Workflow 7, the
PlanetPress Suite 7 option to capture spool files from printer queues will no longer function.
The solution is to use PlanetPress Connect Workflow 8 to capture spool files from printer
queues.

Colour Model in Stylesheets
The colour model of colours defined in a stylesheet can sometimes change after editing the
stylesheet. This is a known issue and will be addressed in a subsequent release.

Image Preview in Designer
If a proxy is enabled in the Windows Internet settings (Connection Settings > LAN configuration)
but "Bypass proxy settings for local addresses" is not checked, the image preview
service, conversion service and live preview tab in the Designer will not work and will exhibit the
following issues:
- Images will be shown as zero-size boxes (no red 'X' is displayed).
- Live preview does not progress, and when re-activated reports "browser is busy".
To fix the issue you must check the "Bypass proxy settings for local addresses" option.

Merge\Weaver Engines when Printing
The print operation in the Designer will automatically detect whether the Merge\Weaver
engines are available and display a message for the user to retry or cancel if not. Once the
Merge\Weaver engine becomes available and the user presses retry, the print operation will
proceed as normal. This message can also occur in the following circumstances:
- If the server is offline and you are not using Proof Print.
- On some occasions before the Print Wizard opens.

REST Calls for Remote Services
The Server will now accept REST calls for all remote services and will make commands wait
indefinitely until the required engines become available. The Server will log when it is waiting
for an engine and when it becomes available. Note that there is no way to cancel any
commands other than stopping the Server.

Print Content and Email Content in PlanetPress Workflow
In PlanetPress Workflow’s Print Content and Email Content tasks, the option to Update
Records from Metadata will only work for fields whose data type is set to String in the data
model. Fields of other types will not be updated in the database and no error will be raised.
This will be fixed in a later release.

Print Limitations when the Output Server is located on a
different machine
The following limitations may occur when using the Print options from a Designer located on a
different machine to the Output Server:
- The file path for the prompt and directory output modes is evaluated on both the client
AND server side. When printing to a network share it must be available to BOTH the
Designer and Server for the job to terminate successfully.
- The Windows printer must be installed on both the Server and Designer machines.
- When printing via the Server from a remote Designer, the output file remains on the
Server machine. This is remedied by selecting "Output Local" in the Output Creation
configuration.

VIPP Output
Some templates set up with landscape orientation are being produced as portrait in VIPP. It can
also sometimes be the case that text and images can be slightly displaced. These are known
issues and will be addressed in a later release of Connect.

Server Configuration Settings
This chapter describes how to configure the PlanetPress Connect Server.
The Connect Server settings are maintained by the "Connect Server Configuration" utility,
which is installed alongside PlanetPress Connect.
"Connect Server Configuration" can be launched from the Start Menu.
The "Connect Server Configuration" dialog is separated into individual pages, where each
page controls certain aspects of the software.
The following pages are available:
- "Clean-up Service preferences" on page 697
- "Database Connection preferences" on page 700
- "Language Setting Preferences" on page 710
- "Scheduling Preferences" on the next page
  - "Merge Engine Scheduling" on page 86
  - "Weaver Engine Scheduling" on page 88
- "Server Security Settings" on page 90

Scheduling Preferences
The scheduling preferences are a way to control precisely how the PlanetPress Connect
services work in the background.
Whenever an operation is scheduled:
- The number of instances to use is determined based on whether the operation is small,
medium or large.
- If there is a reserved instance for that type of command available, then it will use a
reserved instance.
- If no reserved instances are available, then any unreserved instance that is available will
be used.
- If no instances are available, then the command will be blocked until an appropriate
instance becomes available.

Technical
For more information on instances and performance, see Performance Considerations.

Scheduling Properties
- Definitions group: Defines what is to be considered a Small or Large job. Anything in
between is considered a Medium job.
  - Maximum records in a small job: Enter the maximum number of records for a job
to be considered Small.
  - Minimum records in a large job: Enter the minimum number of records for a job to
be considered Large.

Note
Changes made to these settings will be applied on the run. Existing jobs will
be taken into account when determining if a job can run.
For instance, if the minimum records for a Large job is increased from 1,000 to
10,000 and a job of 2,000 records is already running, then this existing job will
now be considered a Medium job.
Likewise, if the minimum records for a Large job is decreased from 10,000 to
5,000 and a job of 7,000 records is already running, then this existing job will
now be considered a Large job.

Additional Scheduling Preferences:
Scheduling Preferences can be applied to the two distinct Engines used in the Connect
production process. The Merge Engine merges the template and the data to create Email and
Web output, or to create an intermediary file for Printed output. The intermediary file is in turn
used by the Weaver Engine to prepare the Print output.
Each of these Engines has its own Scheduling Preference page:
- "Merge Engine Scheduling" below
- "Weaver Engine Scheduling" on page 88

Merge Engine Scheduling
The Merge Engine merges the template and the data to create Email and Web output, or to
create an intermediary file for Printed output. The intermediary file is in turn used by the Weaver
Engine to prepare the Print output.
This preferences page defines how different instances and speed units are attributed to
different jobs when creating Content Items (Print Content, as well as Web and Email
Content generation). For information on the terminology and some performance tips, see
Performance Considerations. Note that in this dialog, the use of the word "Engine" is
synonymous with both "Instance" and "Speed Unit", since a single Speed Unit can be used for
each Engine.
Email and Web output is generated only with the Merge Engine and thus their output speed is
limited through this engine. However, the output speed of Print jobs is limited through the
Weaver Engine, so when Print Content is generated through the Merge Engine, its speed is not
limited. Additionally, you may launch up to 256 engines for Print Content generation, but Email
and Web may only use the number of engines permitted by your license.

Note
Changes made to the following settings will be applied on the run (when the Apply button is
pressed), and do not require the OLConnect_Server service to be restarted.

- Use one internal engine: Check to limit to a single instance of the server. Useful for
computers that run below the recommended system requirements, or demo machines.
- Total Engines Available: Read-only box indicating the current number of engines that
are active or available.
- Engines Launched: Enter the total number of Merge Engines desired on this server.
When changing the number of engines, it is necessary to save this dialog (Apply) to
actually apply the changes.
- Reserved Count: Read-only box indicating the total number of "Reserved" engines, as
set in the Engine reservations area below.
- Restart After (mins): Due to a currently unfixable memory leak in some libraries used by
PlanetPress Connect, it is necessary to restart our engines after a certain amount of time.
The default is generally sufficient for all our clients. Only change this setting on the advice
of a technical support agent.
- Parallel Engines/Speed Units per job: This area determines how many engines and
speed units are used for each job that runs through it. In short, if a specific type of job
has more than one parallel speed unit assigned to it, that many engines will be used to
run each of its jobs. This is in effect a multiplier of speed, but also a divider of the
number of jobs that can be run simultaneously. You cannot attribute more parallel speed
units than you have available for any specific type of job, and you require at least
that number of floating speed units or reservations for that job type. (A worked example
follows this list.)
- Engine reservations: This area is used to reserve engines specifically for certain types of
jobs. Reserved engines cannot be used by any other type of job.
  - Floating: Read-only box indicating the number of floating engines that can be used
for any type of job. This number is equal to the Total Engines Available minus the
Reservations. For example, if 6 engines are launched and 4 are reserved for small
jobs, 2 will be Floating.
  - Small job speed unit reservations: Enter a number of engines reserved for
small print jobs.
  - Medium job speed unit reservations: Enter a number of engines reserved for
medium print jobs.
  - Large job speed unit reservations: Enter a number of engines reserved for large
print jobs.
  - Email engine reservations: Enter a number of engines reserved for Email jobs.
  - HTML engine reservations: Enter a number of engines reserved for Web jobs.
- Maximum concurrent engines per type: This area defines the maximum number of
engines used for any specific job type. The limit needs to be at least the number of
reservations or parallel speed units, whichever is lowest.
  - Small print job limit: Enter the maximum number of engines that can run small print
jobs.
  - Medium print job limit: Enter the maximum number of engines that can run medium
print jobs.
  - Large print job limit: Enter the maximum number of engines that can run large print
jobs.
  - Email limit: Enter the maximum number of engines that can run Email jobs.
  - Maximum Email limit in license: Read-only box indicating the maximum number of
engines useable for Email content creation.
  - HTML limit: Enter the maximum number of engines that can run Web jobs.
  - Maximum HTML limit in license: Read-only box indicating the maximum number of
engines useable for Web content creation.
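
As a purely illustrative example of the Parallel Engines/Speed Units per job setting: if 8 Merge
Engines are launched, none are reserved, and large print jobs are assigned 2 parallel speed
units per job, then each large job is processed by 2 engines at once (roughly doubling its
speed), but at most 4 large jobs can run simultaneously (8 engines divided by 2 engines per job).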

Weaver Engine Scheduling
The Merge Engine merges the template and the data to create Email and Web output, or to
create an intermediary file for Printed output. The intermediary file is in turn used by the Weaver
Engine to prepare the Print output.
This preference page determines the number of Weaver engines launched, as well as their
speed, when generating the output. One single engine can only process a single job at a time,
at the speed available depending on licence and configuration. Note that in this dialog, the term
"Speed Unit" relates to the available speed for each engine. One speed unit =
one unit of speed at the maximum speed your licence and number of Performance Packs
allows. With no Performance Pack, PlanetPress Connect's Weaver engine can generate output
at 500 ppm (pages per minute). Additional Performance Packs increase this base speed per
engine.
Changes made to the following settings will be applied on the run (when the Apply button is
pressed), and do not require the OLConnect_Server service to be restarted.
- Use one internal engine: Check to limit to a single instance of the server. Useful for
computers that run below the recommended system requirements (see "System
Requirements" on page 27) or demo machines.
- Total Engines Available: Read-only box indicating the current number of engines that
are active or available.
- Local Engines Launched: Enter the total number of Weaver Engines desired on this
server. When changing the number of engines, it is necessary to save this dialog (Apply)
to actually apply the changes.
- Speed Units Launched: Read-only box indicating the number of speed units launched.
- Limit in license: Read-only box indicating the maximum number of speed units useable
to produce output.
- Reserved Count: Read-only box indicating the total number of "Reserved" engines, as
set in the Speed unit reservations area below.
- Restart after (mins): Restarts the Weaver Engines at the selected time interval. If a job is
in progress at that point, the Weaver Engines will await job completion before restarting.
- Parallel engines/speed units per job:
  - Parallel engines/speed units per medium job: Enter the number of engines/speed
units that prioritize medium print jobs.
  - Parallel engines/speed units per large job: Enter the number of engines/speed
units that prioritize large print jobs.
- Speed unit reservations:
  - Floating: Read-only box indicating the number of floating speed units that can be
used for any type of job.
  - Small job speed unit reservations: Enter a number of speed units reserved for
small print jobs.
  - Medium job speed unit reservations: Enter a number of speed units reserved for
medium print jobs.
  - Large job speed unit reservations: Enter a number of speed units reserved for
large print jobs.
- Maximum speed units:
  - Small job limit: Enter the maximum number of speed units that can run small print
jobs.
  - Medium job limit: Enter the maximum number of speed units that can run medium
print jobs.
  - Large job limit: Enter the maximum number of speed units that can run large print
jobs.

Server Security Settings
This dialog controls the security settings for external applications connecting to the PlanetPress
Connect Server, such as PlanetPress Workflow or scripts communicating through the REST
API.

Warning
It is highly recommended to keep security enabled and to change the password on any server
that is accessible from the web. If these precautions are not taken, data saved in the server may
be accessible from the outside!

- Enable server security: Enable to add authentication to the REST server.
When enabled, the same username and password (which cannot be blank) must be
entered in any remote Connect Designer that links to this Server. The Designer username
and password entries can be found under the "Print Preferences" on page 710 sub-section
of the Designer Preferences dialog.
When disabled, a username and password are not required to make REST requests, and
tasks in PlanetPress Workflow do not require them in the Proxy tab. Nor would a
username and password be required on any remote Connect Designer that links to this
Server.
- Administrator's username: Enter the username for the server security. The default
username is ol-admin.
- Administrator's password: Enter a password for the server security. The default
password is secret.
- Confirm password: Re-enter the password for the server security.
- Default session length (min): Enter a session time (in minutes) that the authentication
stays valid for the requested process. This can reduce the number of requests to the
server since an authentication request is not necessary during the session.

Uninstalling
This topic provides some important information about uninstalling (removing) PlanetPress
Connect 1.8.
To uninstall PlanetPress Connect select the application from within the Add/Remove programs
option under the Control Panel. This will start the PlanetPress Connect Setup Wizard in
uninstall mode.

Note
The PlanetPress Connect Setup Wizard might take some seconds to appear.

Important Note: Stop any active Anti-Virus software before
uninstalling Connect.
Some anti-virus systems are known to block the uninstallation of MySQL datafiles, as well as
blocking the uninstallation of the MySQL database application itself. Therefore it is highly
recommended that any anti-virus application be stopped prior to uninstalling PlanetPress
Connect, as otherwise the Connect uninstallation might not work correctly.

Impacts upon other Applications and Services
- The Uninstall will terminate the installed Server / MySQL service(s).
- The following applications / services should be stopped in a controlled fashion, before
running the PlanetPress Connect Uninstall:
1. PlanetPress Connect
2. Connect products on remote systems which refer to this MySQL database.
3. Any Connect Workflow using PlanetPress Connect plugins which connect to this
server.

Uninstallation Wizard
The uninstallation is done by running the PlanetPress Connect Setup Wizard in uninstall mode.
The Wizard consists of the following pages:
1. PlanetPress Connect Setup: An information page, listing what will be uninstalled, and
also warning about impacts upon running Applications and Services.
2. Data Management: A page that provides options for backing up or deleting Connect
data. Selections are as follows:
- Delete Connect Workspace Data: Check this box to delete the Workspace data for
the current user, or for selected users (as determined by the "Select Users" button).
  - Backup Connect Workspace Data for all specified Users: Check this box
to back up the Workspace data for the specified users (as previously
determined) into a compressed ZIP file (whose location can be customized),
before deletion of the full Workspace data.
- Delete MySQL objectlune Data: Check this box to delete the MySQL database
installed with PlanetPress Connect.
  - Backup MySQL Data: If the deletion check box is selected, this option
appears to allow backing up the MySQL database to a customizable location,
prior to uninstallation.

General information
Connect consists of visible and invisible parts. The invisible parts process the Connect job to
provide the actual output. They are introduced to you in the topic: "Connect: a peek under the
hood" below.
For a list of all file types used in Connect, see: "Connect File Types" on page 98.
You can find additional information that complements the user manuals, such as error codes
and frequently asked questions about PlanetPress Connect, in the Knowledge base.

Connect: a peek under the hood
Connect consists of visible and invisible parts.
The visible parts are the tools that you use to create templates, data mapping configurations,
and print presets (the Designer/DataMapper), and to create Workflow configurations (the
Workflow configuration tool).
The invisible parts process the Connect job to provide the actual output. This topic introduces
you to those parts.
Here's a simplified, graphical representation of the architecture of PlanetPress Connect. The
components described below are all located in the 'Server' part.

The Workflow server
The Workflow server (also referred to as the 'Watch service') executes processes
independently, after a Workflow configuration has been uploaded and the services have been
started. The Workflow server can run only one configuration at a time.

There are a number of services related to Workflow. The Messenger service, for example,
receives the files sent to Workflow from the Designer and the Workflow configuration tool.

The Workflow Service Console lets you start and stop the different services, except the
Connect server, and see their log files (see Workflow Service Console).
Note that Workflow isn't strictly limited to Connect functionality. It was originally developed as
part of the PlanetPress Suite. Many of the plugins in the Workflow configuration tool are older
than Connect; they were left in for compatibility reasons, even though they aren't all useful or
usable within Connect. However, the Connect plugins cannot be used with the PlanetPress
Suite software.

The Connect server
As opposed to the Workflow server, the Connect server was designed to be used only with
Connect. The Connect server performs several different tasks, all of which are related to
Connect content and content management:
- It communicates with the Workflow service (with the Connect plugins, specifically) and
with the Designer when output is generated from the Designer.
- It creates records (by extracting data from a data source using a data mapping
configuration), and jobs.
- It communicates with the engines (see below) in order to make them create content items
and output (spool) files.
The Connect server is one of the components that has to be installed with Connect (see
"Installation Wizard" on page 35).
In the Workflow Configuration Tool preferences you have to set the OL Connect server settings
to enable Workflow to communicate with the server (see Workflow Preferences).

The Connect Server Configuration tool lets you change the settings for the Connect server,
the engines and the service that cleans up the database and the file store. These settings can
also be made in the preferences of the Designer.

The Connect database
The Connect database is the database back-end used by Connect itself when processing jobs.
It can be either the MySQL instance provided by the Connect installer, or a pre-existing
(external) instance (see "Database Considerations" on page 17).
All generated items (records, content items etc.) are stored in this database, for the next task in
the process, as well as for future use, making it possible to commingle Print jobs, for example.

Note
Email content items are not stored in the Connect database.

A clean-up of the database is performed at regular intervals in accordance with the settings
(see "Clean-up Service preferences" on page 697).

The File Store
Connect has its own File Store which it uses for transient files.
The Clean-up service takes care of removing obsolete files when those files are not marked as
permanent (see "Clean-up Service preferences" on page 697).

Tip
As of version 1.8, the File Store has become accessible for customer implementations. The
Workflow configuration tool implements three tasks that allow you to Upload, Download and
Delete files in the Connect File Store. The files can be accessed through the REST API, which
means web portals could potentially access the files directly without having to go through a
Workflow process.

The engines
Merge engine(s). A merge engine merges data with a template, using the scripts in the template,
in order to create (Print, Email or Web) content items.
The number of merge engines is configurable. By default, only one merge engine is used, but
this number can be increased depending on the capacity of the machine that runs the solution
(see "Performance Considerations" on page 25).
Weaver engine(s). A Weaver engine creates Print output from Print content items. It takes the
settings made in Print presets or in the Print Wizard into account. It also helps the data mapping
engine by preparing any paginated input data.
The number of Weaver engines is configurable as well (see "Weaver Engine Scheduling" on
page 88).
Speed units (parallels)
The number of 'speed units' is the maximum number of engines that can work in parallel, which
is why they are also called 'parallels'. The output speed of all speed units together is limited to
a certain number of output items (web pages, emails, or printed pages) per minute.
How many speed units you have and what the maximum total output speed will be is
determined by your licence and any additional Performance Packs you might have.
There is one important twist: when generating Print output, the limit imposed by the number of
speed units only applies to the Weaver engines; when creating Email or Web output, the limit
applies to the Merge engines only (the Weaver engine is not involved). Nevertheless, in
situations where Print and Email or Web output are being created at the same time, all engines,
regardless of their type, count towards the maximum number of speed units.
Each engine needs at least one speed unit. However, since the number of engines is
configurable, and since small, medium and large jobs may run concurrently, the number of
engines in use may not match the number of available speed units. When there are more speed
units than there are engines in use, the Connect server distributes the speed units and the
maximum output speed to the engines proportionally.
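
As a purely illustrative example: if the licence provides 4 speed units and only 2 Weaver
engines are in use for Print output, the Connect server can assign 2 speed units to each engine,
so each engine may run at up to twice the base speed while the combined output still respects
the licensed maximum.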

The REST API
The Connect server receives REST commands (see The Connect REST API CookBook),
normally either via the Workflow service or from the Designer. This design allows the Connect
functionality to be used by other applications. The server forwards the commands to the
appropriate engine and returns the results to the caller. The results are the IDs of the items
(records, content items, jobs etc.) that are stored in the Connect database (see below). All
Connect tasks except the Create Web Content task integrate the results in the Metadata in
Workflow.
The figure below shows the communication between Connect tasks and the Connect server in
a Print process.
The figure below shows the communication between Connect tasks and the Connect server in
a Print process.

Printing and emailing from the Designer
To print or send email from within the Designer, the Connect service has to be running. The
service is started automatically when the Designer starts, but it may not be running if the
Connect Server and the Designer are installed on different computers. The Connect service
can be found on the Services tab in the Task Manager.
For a proof print the Connect server is not used. Proof printing is always done locally, by the
Designer.

Connect File Types
This article describes the different file types that are related to PlanetPress Connect and its
different modules. These are files that are generally transferable between machines and can be
sent via email or other means.
- .OL-template: A Designer Template file, including up to 3 contexts. Is linked to a data
mapping configuration by default, but not necessarily.
- .OL-datamapper: A Data Mapping Configuration file, which can include sample data
(excluding database source files such as MySQL, Oracle, etc.).
- .OL-datamodel: A data model file which can be imported into or exported from either a
data mapping configuration or a template. Contains a list of fields and their data type
(date, currency, string, etc.).
- .OL-jobpreset: A job preset file, used when generating a job (ready for output) from the
Designer or through automation (Create Job task). Handles sorting, splitting and adding
metadata fields.
- .OL-outputpreset: An output preset file, used to generate the actual print output in the
appropriate format (PCL, AFP, IPDS, PDF, etc.). Includes print settings such as imposition
(n-up, cut & stack), inserter marks, tray settings, etc.
- .OL-package: A transfer file used to package one or many of the above files (the data
model being part of both the template and the data mapping configuration). Created by
using the File -> Send to Workflow dialog, and choosing "File..." in the Destination box.
- .OL-script: One or more Designer "scripts". Can be imported or exported from the Scripts
pane in the Designer when a template is open.
- .OL-printerdef: A Printer Definition File. Used by the Output Preset to determine what
type of output to produce. These are generated by an internal application that is not
currently distributed outside of OL, but the definition files themselves can be provided.
- .OL-workflow: A Workflow file used by PlanetPress Workflow. Equivalent to .pp7 files
(they are, in fact, essentially the same format), containing the processes and such used by
Workflow.

The DataMapper Module
The DataMapper is the tool to create a data mapping configuration.
A data mapping configuration file contains the information necessary for data mapping: the
settings to read the source file (Delimiter and Boundary settings), the data mapping workflow
with its extraction instructions ('Steps'), the Data Model and any imported data samples.
Data mapping configurations are used to extract data and transpose that data into a format that
can be shared amongst different layouts and outputs created with the Connect Designer and
Workflow.
The original data, located in a file or database outside of Connect, is called a data source.
The first step in the data extraction process is making settings for the input data (see "Data
source settings" on page 115), including boundaries for each record inside the data sample.
When you define the boundaries, you are actually defining a series of records inside your data
sample file.
After configuring these settings you can start working on the logic to extract data from each of
those records. You need to identify and extract data from each record. To achieve this, you will
create a data mapping workflow, consisting of multiple steps (extractions, loops, conditions and
more) (see "Data mapping workflow" on page 113 and "Extracting data" on page 118).
When this process is complete, the result is a Data Model. This model contains the necessary
information to add variable data to Connect Designer templates. (see "The Data Model" on
page 151 for more information). It has a generic format with an emphasis on content, free from
any restrictions imposed by the file types or the origin of the data. This allows the same layout or
output to be populated with data from different sources and formats without the need to modify
it.

DataMapper basics
Connect’s DataMapper lets you build a data mapping workflow to extract data from a variety of
data sources. The data mapping workflow consists of multiple 'steps' which process and extract
data from each record of a data source and store it in a new, extracted record set. The data
mapping workflow is saved in a data mapping configuration.
1. Create a new data mapping configuration.
Run the Designer and start creating a data mapping configuration by selecting a data
source. See "Data mapping configurations" on the next page.

2. Configure settings for the data source.
The data source can be a file (CSV, PDF, TXT, XML) or a particular database. Configure
how the data source is read by the DataMapper and create a record structure. See "Data
source settings" on page 115.
3. Build the data mapping workflow.
A data mapping workflow always starts with the Preprocessor step and ends with the
Postprocessor step. You can add as many steps as you like and edit the Data Model of
the extracted data as required. See "Data mapping workflow" on page 113 and "The Data
Model" on page 151.

What's next?
Use the data mapping configuration in the Designer module to create templates for
personalized customer communications. To learn more, see "The Designer" on page 302.
In Workflow, a data mapping configuration can be used to extract data. The extracted data can
then be merged with a Designer template to generate output in the form of print, email or a web
page. See Workflow and Connect tasks in Workflow.

Data mapping configurations
A data mapping configuration file contains the information necessary for data mapping: the
settings to read the source file (Delimiter and Boundary settings), the data mapping workflow
with its extraction instructions ('Steps'), the Data Model and any imported data samples.
Data mapping configurations are used in the Designer to help add variable data fields and
personalization scripts to a template. In fact, only a Data Model would suffice (see
"Importing/exporting a Data Model" on page 153). The advantage of a data mapping
configuration is that it contains the extracted records to merge with the template, which lets you
preview a template with data instead of field names.
It is also possible to generate output of a data mapping configuration directly from the Designer
(see "Generating output" on page 953).
Ultimately data mapping configurations are meant to be used by the Execute Data Mapping
task in Connect Workflow processes, to extract data from a particular type of data file. Typically
the extracted data is then merged with a template to generate output in the form of print, email
and/or a web page. To make this happen, the data mapping configuration as well as the
template and any print presets have to be sent to Workflow; see "Sending files to Workflow" on
page 309.

Note
AFP input requires the CDP library. The library licence allows PlanetPress Connect to run up to 4
instances of that library on a given machine at a given time.

Creating a new data mapping configuration
A new data mapping configuration can be made with or without a wizard. When you open a
data file with a DataMapper wizard, the wizard automatically detects a number of settings. You
can adjust these settings. Next, the wizard automatically extracts as many data fields (or
metadata, in case of a PDF/VT or AFP file) as it can, in one extraction step.
Without a wizard you have to make the settings yourself, and configure the extraction workflow
manually.

Note
The DataMapper doesn’t use the data source directly, rather it uses a copy of that data: a data
sample. Although the data sample is a copy, it is updated each time the data mapping
configuration is opened or whenever the data sample is selected.
More samples can be added via the Settings pane; see "Data samples" on page 211.

From a file
To start creating a data mapping configuration without a wizard, first select the data file. There
are two ways to do that: from the Welcome screen and from the File menu.
- From the Welcome screen
1. Click Create a New Configuration.
2. From the From a file pane, select a file type (Comma Separated Values or
Excel (CSV/XLSX/XLS), MS-Access, PDF/VT, Text or XML).
3. Click the Browse button and open the file you want to work with (for a database,
you may have to enter a password).
4. Click Finish.
- From the File menu
1. Click the File menu and select New.
2. Click the Data mapping Configuration drop-down and select Files and then the
file type (Comma Separated Values or Excel (CSV/XLSX/XLS), MS-Access,
PDF/VT, Text or XML).
3. Click Next.
4. Click the Browse button and open the file you want to work with.
5. Click Finish.

Note
- Excel files saved in "Strict Open XML" format are not supported yet.
- PCL and PostScript (PS) files are automatically converted to PDF format. When
used in a production environment (a Connect Workflow process) this may influence
the processing speed, depending on the available processing power.

After opening the file, you have to make settings for the input data (see "Data source settings"
on page 115). Then you can start building the data extraction workflow.
With a wizard
Data mapping wizards are available for PDF/VT, AFP, XML, CSV and database tabular files,
because these files are structured in a way that can be used to automatically set record
boundaries.
The wizard for PDF/VT and AFP files cannot extract data, only metadata. After opening such a
file with the wizard, you can build the data extraction workflow.
The other wizards use the Extract All step to extract data, but they cannot create detail tables,
so they are less suitable for files from which you want to extract transactional data.
There are two ways to open a data file with a wizard: from the Welcome screen or from the File
menu.

- From the Welcome screen
1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right,
or select the Help menu and then Welcome.
2. Click Create a New Configuration.
3. From the Using a wizard pane, select the appropriate file type.
- From the File menu
1. In the menu, click File > New.
2. Click the Data mapping Wizards drop-down and select the appropriate file type.

The steps to take with the wizard depend on the file type. See:
- "Using the wizard for CSV and Excel files" on page 106
- "Using the wizard for databases" on page 108
- "Using the wizard for PDF/VT and AFP files" on page 111
- "Using the wizard for XML files" on page 112

Generating a counter
Instead of creating a data mapping configuration for a certain type of data file, you may create a
data mapping configuration that only contains a series of sequential numbers. This is a solution
if, for instance, you need to create sequential tickets or anything that has an ID that changes on
each record.

Note
You can’t join this configuration to another data file. It is just a counter to be applied on a static
template.

To generate a counter:
- From the Welcome screen
1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right,
or select the Help menu and then Welcome.
2. Click Create a New Configuration.
3. From the Using a wizard pane, select Generate counters.
- From the File menu
1. In the menu, click File > New.
2. Click the Data mapping Wizards drop-down and select Generate counters.

You can set the following parameters (a worked example follows the list):
- Starting Value: The starting number for the counter. Defaults to 1.
- Increment Value: The value by which to increment the counter for each record. For
example, an increment value of 3 and a starting value of 1 would give the counter values
1, 4, 7, 10, [...]
- Number of records: The total number of counter records to generate. This is not the end
value but rather the total number of actual records to generate.
- Padding character: Which character to add if the counter's value is smaller than the
width.
- Width: The number of digits the counter will have. If the width is larger than the current
counter value, the padding character will be used on the left of the counter value, until the
width is equal to the set value. For example, for a counter value of "15", a width of "4" and
a padding character of "0", the value will become "0015".
- Prefix: String to add before the counter, for example, adding # to get #00001. The prefix
length is not counted in the width.
- Suffix: String to add after the counter. The suffix length is not counted in the width.
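
As an illustration (the values are arbitrary): with a Starting Value of 1, an Increment Value of 3,
Number of records 5, Width 5, Padding character 0 and Prefix TICKET-, the generated counter
values would be:

  TICKET-00001
  TICKET-00004
  TICKET-00007
  TICKET-00010
  TICKET-00013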

Opening a data mapping configuration
To open an existing data mapping configuration, in the Menus, select File > Open. Make sure
that the file type is either DataMapper files or Connect files. Browse to the configuration file to
open, select it and click Open.
Alternatively, click on File > Open Recent to select one of the recently opened configuration
files.

Saving a data mapping configuration
A Data Mapping Configuration file has the extension .OL-datamapper. The file contains the
settings, the extraction workflow ('Steps'), the Data Model and the imported Data Samples
(excluding database source files such as MySQL, Oracle, etc.).
To save a data mapping configuration:
- In the Menus, click on File > Save, or click on Save As to save a copy of a data mapping
configuration under a different name.
- In the Toolbars, click the Save button.

If the data mapping configuration has not been saved before, you have to browse to the location
where the data mapping configuration should be saved and type a name, then click Save.

Using the wizard for CSV and Excel files
The DataMapper wizard for CSV and Excel files helps you create a data mapping configuration
for such files. The wizard automatically detects delimiters and extracts all data in one extraction
step.
The wizard interprets each line in the file as a record. If your data file contains transactional
data, you will probably want more lines to go in one record and put the transactional data in
detail tables.
The wizard cannot create detail tables. If the file contains transactional data, the data mapping
configuration is best created without a wizard (see "Creating a new data mapping
configuration" on page 102).
There are two ways to open a CSV file or Excel file with a wizard: from the Welcome screen or
from the File menu.
- From the Welcome screen
1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right,
or select the Help menu and then Welcome.
2. Click Create a New Configuration.
3. From the Using a wizard pane, select CSV/XLSX/XLS.
4. Click the Browse button and open the file you want to work with.
5. Click Next.
- From the File menu
1. In the menu, click File > New.
2. Click the Data mapping Wizards drop-down and select From CSV/XLSX/XLS
File.
3. Click Next.
4. Click the Browse button and open the file you want to work with.
5. Click Next.

After selecting the file, take a look at the preview to ensure that the file is the right one and the
encoding correctly reads the data. Click Next.
For an Excel file you can indicate whether or not the first row contains field names, and from
which sheet the data should be extracted.

Note
Excel files saved in "Strict Open XML" format are not supported yet.

For a CSV file, the wizard will display the different settings it has detected, allowing you to
change them (a short sample file follows below):
- Encoding: Defines which encoding is used to read the file.
- Separator: Defines which character separates each field in the file.
- Comment Delimiter: Defines which character starts a comment line.
- Text Delimiter: Defines which character surrounds text fields in the file. Separators and
comment delimiters within text are not interpreted as separator or delimiter; they are seen
as text.
- Ignore unparseable lines: Ignores any line that does not correspond to the settings
above.
- First row contains field names: Uses the first line of the CSV as headers, which
automatically names all extracted fields.
Verify that the data are read properly.
Finally click Finish. All data fields are automatically extracted in one extraction step.
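
As an illustration only (the field names and values are invented), a small CSV sample that
matches a semicolon Separator, a double-quote Text Delimiter, a # Comment Delimiter and a
first row containing field names:

  # customer export - illustrative sample
  ID;Name;City
  1001;"Smith; John";Montreal
  1002;"Martin, Paul";Quebec City

Note that the semicolon inside the quoted "Smith; John" value is treated as text, not as a
separator.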

Using the wizard for databases
The DataMapper wizard for database files helps you create a data mapping configuration for a
database file. The wizard extracts the data in one extraction step.
The wizard cannot create detail tables. If the file contains transactional data, the data mapping
configuration is best created without a wizard (see "Creating a new data mapping
configuration" on page 102).
Opening a database file with a wizard
There are two ways to open a database file with a wizard: from the Welcome screen or from the
File menu.
- From the Welcome screen
1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right,
or select the Help menu and then Welcome.
2. Click Create a New Configuration.
3. From the Using a wizard pane, select Database.
4. Use the drop-down to select the database type.
5. Click Next.
- From the File menu
1. In the menu, click File > New.
2. Click the Data mapping Wizards drop-down and select From databases.
3. Click Next.
4. Use the drop-down to select the database type.
5. Click Next.

Wizard settings for a database file
After opening a database file with a wizard there are a number of settings to make, depending
on the database type (see below).
On the last page of the dialog, click Finish to close the dialog and open the actual data
mapping configuration.
MySQL, SQL Server or Oracle
- Server: Enter the server address for the database.
- Port: Enter the port to communicate with the server. The default port is 3306.
- Database name: Enter the exact name of the database from where the data should be
extracted.
- User name: Enter a user name that has access to the server and specified database. The
user only requires Read access to the database.
- Password: Enter the password that matches the user name above.
- Table name: The selected database is a set of related tables composed of rows and
columns corresponding respectively to source records and fields. Select a table from
which you want to extract data.
- Encoding: Choose the correct encoding to read the file.

Microsoft Access
- Password: Enter a password if one is required.
- Table name: The selected database is a set of related tables composed of rows and
columns corresponding respectively to source records and fields. Select a table from
which you want to extract data.
- Encoding: Choose the correct encoding to read the file.

ODBC Data Source
- ODBC Source: Use the drop-down to select an ODBC System Data Source. This must
be a data source that has been configured in the 64-bit ODBC Data Source Administrator,
as PlanetPress Connect is a 64-bit application and thus cannot access 32-bit data
sources. (See the note after this list on locating the 64-bit administrator.)
- This ODBC source is MSSQL: Check this option if the ODBC source is MSSQL (SQL
Server). The options below appear under MSSQL-ODBC advanced configuration:
  - Windows authentication: Select to use the Windows user name and password that
are used by the Connect Service.
  - SQL Server authentication: Select to use the User name and Password set below
to connect to the SQL Server:
    - User name: Enter the SQL Server user name.
    - Password: Enter the password for the above user name.
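
On 64-bit Windows the 64-bit ODBC Data Source Administrator is normally listed under
Administrative Tools as "ODBC Data Sources (64-bit)". It can typically also be started directly
from the following location (assuming a default Windows installation); the similarly named
program under SysWOW64 is the 32-bit administrator, and data sources created there will not
be visible to Connect:

  %windir%\System32\odbcad32.exe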

JDBC

Note
Since JDBC can connect to multiple types of databases, a specific database driver and path to
this driver's JAR file must be specified.

- JDBC Driver: Use the drop-down to select which JDBC Driver to use for the database
connection.
- JAR file path: Enter a path to the JAR file that contains the appropriate driver for the
database.
- Server: Enter the server address for the database server.
- Database name: Enter the exact name of the database from where the data should be
extracted.
- User name: Enter a user name that has access to the server and specified database. The
user only requires Read access to the database.
- Password: Enter the password that matches the user name above.
- Advanced mode: Check to enable the Connection String field to manually enter the
database connection string.
- Connection string: Type or copy in your connection string (an example follows below).
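
As an illustration only (the server, port and database names are placeholders, and the exact
URL syntax depends on the JDBC driver in use), a MySQL-style connection string could look
like this:

  jdbc:mysql://dbserver.example.com:3306/customers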

Using the wizard for PDF/VT and AFP files
The pages in PDF/VT and AFP files can be grouped on several levels. Additional information
can be attached to each level in the structure. The structure and additional information are
stored in the file's metadata.
The DataMapper wizard for PDF files lets you select a level to trigger the start of a new record
and it also enables you to extract the additional information from the metadata. You can extract
data from the content afterwards.

Tip
To extract information from the metadata in the extraction workflow itself, you have to create a
JavaScript extraction (see "Using scripts in the DataMapper" on page 255 and "extractMeta()" on
page 277).

If the PDF doesn't contain any metadata, each page is a new record (in other words, a
boundary is set at the start of each new page), which is exactly what happens when you open the
file without a wizard.
You can open a PDF/VT file with a wizard using the Welcome screen or the File menu.
- From the Welcome screen
1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right,
or select the Help menu and then Welcome.
2. Click Create a New Configuration.
3. From the Using a wizard pane, select PDF/VT.
4. Click the Browse button and open the PDF/VT file you want to work with. Click
Next.
- From the File menu
1. In the menu, click File > New.
2. Click the Data mapping Wizards drop-down and select From PDF/VT or AFP.
3. Click Next.
4. Click the Browse button and open the PDF/VT file you want to work with. Click
Next.

After selecting the file, select the following options in the Metadata page:
- Metadata record levels: Use the drop-down to select what level in the metadata defines
a record.
- Field List: This list displays all fields on the chosen level and higher levels in the PDF/VT
metadata. The right column shows the field name. The left column displays the level on
which it is located. Check any field to add it to the extraction.

Click Finish to close the dialog and open the actual Data Mapping configuration.
On the Settings pane, you will see that the boundary trigger is set to On metadata. The
selected metadata fields are added to the Data Model.

Using the wizard for XML files
The DataMapper wizard for XML files helps you create a data mapping configuration for an
XML file. The wizard lets you select the type of node and the trigger that delimit the start of a
new record. Next, the wizard extracts the data in one extraction step.
The wizard cannot create detail tables. If the file contains transactional data, the data mapping
configuration is best created without a wizard (see "Creating a new data mapping
configuration" on page 102).
There are two ways to open an XML file with a wizard: from the Welcome screen or from the
File menu.
- From the Welcome screen
1. Open the PlanetPress Connect Welcome page by clicking the icon at the top right,
or select the Help menu and then Welcome.
2. Click Create a New Configuration.
3. From the Using a wizard pane, select XML.
4. Click the Browse button and open the XML file you want to work with. Click Next.
- From the File menu
1. In the menu, click File > New.
2. Click the Data mapping Wizards drop-down and select From XML File.
3. Click Next.
4. Click the Browse button and open the XML file you want to work with. Click Next.
After selecting the file, you have to set the split level and trigger type:
- XML Elements: This is a list of node elements that have children nodes. Select the level
in the data that will define the source record.
- Trigger: Select On element to create a record in the data mapping for each occurrence of
the node element selected in the XML Elements field, or select On change to create a
record each time the element is different. (Check the option to include attributes in the list
of content items that can trigger a boundary.)

Note
The DataMapper only extracts elements for which at least one value is defined in the file. Attribute
values are not taken into account.
Attribute values (prefixed with an @ sign in the Data Viewer) are not extracted automatically.

Click Finish to close the dialog and open the data mapping configuration.

Data mapping workflow
A data mapping workflow is a series of extraction instructions, called steps. These steps
process and extract the data from the source and store them in records, of which the structure is
determined in the Data Model (see "The Data Model" on page 151). Together with the data
source settings, the Data Model, and the sample data, this is what makes a data mapping
configuration (See "Data mapping configurations" on page 101).
The data mapping workflow is shown on the Steps pane at the left (see "Steps pane" on
page 213).

Creating a data mapping workflow
A data mapping workflow always starts with the Preprocessor step and ends with the
Postprocessor step. These steps allow the application to perform actions on the data file itself
before it is handed over to the data mapping workflow ("Preprocessor step" on page 140) and after the Data Mapping workflow has completed ("Postprocessor step" on page 150).
When you create a new data mapping configuration, these steps are added automatically, but
they don't actually do anything until you configure these steps.
In between the Preprocessor and Postprocessor step, the workflow can contain as many steps
as needed to extract the required data.
Adding steps
Extracting data is the main way to build a data mapping workflow; see "Extracting data" on
page 118.
Extract steps, Condition steps and Repeat steps can be added after selecting data in the Data
Viewer.
All steps can be added via the Steps pane:
1. In the extraction workflow on the Steps pane, select the step after which to add the new
step.
2. Right-click on the Steps pane and select Add a Step; then select one of the step types.
Editing steps
The properties of each step in the extraction workflow become visible in the Step properties
pane when you select that step in the Steps pane.
The name of each step is shown in the Steps pane. You can change it under Description in
the Step properties pane.
The other properties are different per step type; see "Steps" on page 140.
Rearranging steps
To rearrange steps, simply drag & drop them somewhere else on the dotted line in the Steps
pane.
Alternatively you can right-click on a step and select Cut Step or use the Cut button in the
Toolbar. If the step is Repeat or Condition, all steps under it will also be placed in the
clipboard. To place the step at its destination, right-click the step in the position before the
desired location and click Paste Step, or use the Paste
button in the toolbar.
Keep in mind that steps may influence each other, so you may have to move other steps as well
to ensure that the workflow continues to function properly. In a Text file for example, an Extract
step may need a Goto step before it that moves the cursor to a certain place in the source data.


Deleting steps
To delete a step, right-click on it in the Steps pane and select Delete Step.

Testing the extraction workflow
The extraction workflow is always performed on the current record in the data source. When an
error is encountered, the extraction workflow stops, and the field on which the error occurred
and all subsequent fields will be greyed out. Click the Messages tab (next to the Step
properties pane) to see any error messages.
To test the extraction workflow on all records, you can:
- Click the Validate All Records toolbar button.
- Select Data > Validate Records in the menu.

If any errors are encountered in one or more records, an error message will be displayed. Errors
encountered while performing the extraction workflow on the current record will also be visible
on the Messages tab.

Data source settings
After opening a data file you have to make a number of settings to make sure that the source
data is interpreted and grouped the way you want. These settings are found on the Settings
pane at the left.
- Input Data settings help the DataMapper read the data source and recognize data correctly.
- Boundaries mark the start of a new record. They let you organize the data, depending on how you want to use them.
- Data format settings define how dates, times and numbers are formatted in the data source.

Input data settings (Delimiters)
The Input Data settings (on the Settings pane at the left) specify how the input data must be
interpreted. These settings are different for each data type. For a CSV file, for example, it is
important to specify the delimiter that separates data fields. PDF files are already delimited naturally by pages, so the input data settings for PDF files are interpretation settings for text in
the file.
For an overview of all options, see: "Input Data" on page 203.
For a CSV File
In a CSV file, data is read line by line, where each line can contain multiple fields, separated by
a delimiter. Even though CSV stands for comma-separated values, fields may be separated
using any character, including commas, tabs, semicolons, and pipes.
The text delimiter is used to wrap around each field just in case the field values contain the
field separator. This ensures that, for example, the field “Smith; John” is not interpreted as two
fields, even if the field delimiter is the semicolon.
For an explanation of all the options, see: "CSV file Input Data settings" on page 203.
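As an illustration, a made-up fragment of such a file could look as follows, with the semicolon as field separator and double quotes as text delimiter; the quotes keep "Smith; John" together as one field:
"Name";"City";"Amount"
"Smith; John";"Montréal";"125.50"
"Doe; Jane";"Québec";"87.20"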
For a PDF File
PDF files have a clear and unmovable delimiter: pages. So, the Input Data settings are not
used to set delimiters. Instead, these options determine how words, lines and paragraphs are
detected when you select content in the PDF to extract data from it.
For an explanation of all the options, see: "PDF file Input Data settings" on page 204.
For a database
Databases all return the same type of information. Therefore the Input Data options for a
database refer to the tables inside the database. Clicking on any of the tables shows the first
line of the data in that table.
If the database supports stored procedures, including inner joins, grouping and sorting, you can
use custom SQL to make a selection from the database, using whatever language the database
supports.
For an explanation of all the options, see: "Database Input Data settings" on page 204.
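For example, a custom SQL statement along the following lines (the table and column names are invented for illustration) could join and sort the data before the DataMapper reads it:
SELECT c.CustomerID, c.Name, i.InvoiceNumber, i.Total
FROM Customers c
INNER JOIN Invoices i ON i.CustomerID = c.CustomerID
ORDER BY c.CustomerID, i.InvoiceNumber;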
For a text file
Because text files come in many different shapes and sizes, there are a lot of input data settings for these files. You can add or remove characters from lines, for example if the file has a header you want to get rid of, or strange characters at the beginning; you can set a line width if you are still working with old line printer data; and so on.
It is important that pages are defined properly. This can be done either by using a set number of lines, or by using a text to detect on the page (for example, the character “P”). Be aware that this is not a Boundary setting; it detects each new page, not each new record.
For an explanation of all the options, see: "Text file Input Data settings" on page 205.


For an XML file
XML is a special file format because these file types can have a theoretically unlimited number
of structure types. The input data has two simple options that basically determine at which node
level a new record is created. You can either select an element type, to create a new delimiter
every time that element is encountered, or choose to use the root node. If there is only one top-level element, there will only be one record before the Boundaries are set.

Note
The DataMapper only extracts elements for which at least one value is defined in the file.

See also: "XML File Input Data settings" on page 206.
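As an illustration, in a made-up file like the one below, selecting the invoice element as the record level creates one record per invoice, whereas selecting the root node (invoices) yields a single record until Boundaries are set:
<invoices>
  <invoice number="1001"><customer>John Smith</customer></invoice>
  <invoice number="1002"><customer>Jane Doe</customer></invoice>
</invoices>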
Record boundaries
Boundaries are the division between records: they define where one record ends and the next
record begins. Using boundaries, you can organize the data the way you want.
You could use the exact same data source with different boundaries in order to extract different
information. If, for instance, a PDF file contains multiple invoices, each invoice could be a
record, or all invoices for one customer could go into a single record.
Keep in mind that when the data is merged with a template, each record generates output
(print, email, web page) for a single recipient.
To set a boundary, a specific trigger must be defined.
The trigger can be a natural delimiter between blocks of data, such as a row in a CSV file or a
page in a PDF file.
It can also be something in the data that is either static (for example, the text "Page 1 of" in a PDF file) or changing (a customer ID, a user name, etc.).
To define a more complex trigger you could write a script (see "Setting boundaries using
JavaScript" on page 257).
A new record cannot start in the middle of a data field, so if the trigger is something in the data,
the boundary will be set on the nearest preceding natural delimiter. If for instance in a PDF file
the text "Page 1 of" is used as the trigger, the new record starts at the page break before that
text.
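To give an idea of what such a boundary script might look like, here is a minimal sketch only; the boundaries and region objects, their methods and the coordinates used here are assumptions, so check "Setting boundaries using JavaScript" on page 257 for the actual API:
/* Sketch: start a new record when the text "Page 1 of" is found in an
   assumed region near the top of a PDF page. */
if (boundaries.find("Page 1 of", region.createRegion(10, 10, 200, 25)).found) {
    boundaries.set();
}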
For an explanation of all Boundaries options per file type, see "Boundaries" on page 207.


Data format settings
By default the data type of extracted data is a String, but each field in the Data Model can be set
to contain another data type (see "Data types" on page 168). When that data type is Date,
Number or Currency, the DataMapper will expect the data in the data source to be formatted in
a certain way, depending on the settings.
The default format for dates, numbers and currencies can be set in three places: in the user
preferences, in the data source settings, and per field in the Data Model.
By default, the user preferences are set to the system preferences. These user preferences
become the default format values for any newly created data mapping configuration. To change
these preferences, select Window > Preferences > DataMapper > DataMapper default
format (see "Datamapper preferences" on page 703).
Data format settings defined for a data source apply to any new extraction made in the current
data mapping configuration. These settings are made on the Settings pane; see "Settings
pane" on page 203.
Settings for a field that contains extracted data are made via the properties of the Extract step
that the field belongs to (see "Setting the data type" on page 159). Any format settings specified
per field are always used, regardless of the user preferences or data source settings.

Note
Data format settings tell the DataMapper how certain types of data are formatted in the data
source. They don't determine how these data are formatted in the Data Model or in a template. In
the Data Model, data are converted to the native data type. Dates, for example, are converted to a
DateTime object in the Data Model, and will always be shown as "year-month-day" plus the time
stamp, for example: 2012-04-11 12.00 AM.

Extracting data
Data are extracted via Extraction steps into fields in the Data Model. This topic explains how to
do that. Fields can also be filled with other data: the result of a JavaScript or the value of a
property. To learn how to do that, see "Fields" on page 155.


Before you start
Data source settings
Data source settings must be made beforehand, not only to make sure that the data is properly
read but also to have it organized in a record structure that meets the purpose of the data
mapping configuration (see "Data source settings" on page 115). It is important to set the
boundaries before starting to extract data, especially transactional data (see "Extracting
transactional data" on page 124). Boundaries determine which data blocks - lines, pages,
nodes - form a record in the source data. Data that are located in different records cannot be put
into the same record in the record set that is the result of the extraction workflow.
Preprocessor step
The Preprocessor step allows the application to perform actions on the data file itself before it is
handed over to the Data Mapping workflow. In addition, properties can be defined in this step.
These properties may be used throughout the extraction workflow. For more information, see
"Preprocessor step" on page 140.
Adding an extraction
In an extraction workflow, Extract steps are the pieces that take care of the actual data
extractions.
To add an Extract step:
1. In the Data Viewer pane, select the data that needs to be extracted. (See "Selecting data"
on page 122.)
2. Choose one of two ways to extract the selected data.
- Right-click on the selected data and select Add Extraction from the contextual menu.

Note
For optimization purposes, it is better to add data to an existing Extract step than to have a succession of extraction steps. To do that, select that step on the Steps pane first; then right-click on the selected data and choose Add Extract Field.

- Alternatively, drag & drop the selected fields into the Data Model pane.

Tip
In a PDF or Text file, use the Drag icon to drag selected data into the Data Model.

With this method, a new Extract step will only be added to the extraction workflow when no Extract step is already present on the Steps pane. Otherwise the field(s) will be added to the selected Extract step or to the one that was last added.
Dragging data into an existing field in the Data Model will replace the data. The field name stays the same.
Drop data on empty fields or on the record itself to add new fields.
Special conditions
The Extract step may need to be combined with another type of step to get the desired result.
- Data can be extracted conditionally with a Condition step or Multiple Conditions step; see "Condition step" on page 145 or "Multiple Conditions step" on page 148.
- Normally the same extraction workflow is automatically applied to all records in the source data. It is however possible to skip records entirely or partially, using an Action step. Add an Action step in a branch under a Condition step or Multiple Conditions step (see "Action step" on page 149) and set the type of action to Stop Processing Record (see "Text and PDF Files" on page 226).
- To extract transactional data, the Extract step must be placed inside a Repeat step. See "Extracting transactional data" on page 124.

Note
Fields cannot be used twice in one extraction workflow. Different Extract steps can only write extracted data to the same field in the Data Model if:
- The field name is the same. (See: "Renaming and ordering fields" on page 158.)
- The Extract steps are mutually exclusive. This is the case when they are located in different branches of a Condition step or Multiple Conditions step.
- The option Append values to current record is checked in the Step properties pane under Extraction Definition.

Extracting data into multiple fields
When you select multiple fields in a CSV or tabular data file and extract them simultaneously,
they are put into different fields in the Data Model automatically.
In a PDF or Text file, when multiple lines are extracted at the same time, they are by default
joined and put into one field in the Data Model. To split them and put the data into different
fields:
1. Select the field in the Data Model that contains the extracted lines.
2. On the Step properties pane, under Field Definition, click the drop-down next to Split
and select Split lines.
Adding fields to an existing Extract step
For optimization purposes, it is better to add fields to an existing Extract step than to have a
succession of extraction steps.
To add fields to an existing Extract step:
1. In the Data Viewer pane, select the data that needs to be extracted. (See "Selecting data"
on the facing page.)
2. Select an Extract step on the Steps pane.
3. Right-click on the data and select Add Extract Field, or drag & drop the data on the Data
Model.
When data are dropped on the Data Model, they are by default added to the last added Extract
step.
Editing fields
After extracting some data, you may want to:

- Change the names of fields that are included in the extraction.
- Change the order in which fields are extracted.
- Set the data type, data format and default value of each field.
- Modify the extracted data through a script.
- Delete a field.

All this can be done via the Step properties pane (see "Settings for location-based fields in a
Text file" on page 221), because the fields in the Data Model are seen as properties of an
Extract step. See also: "Fields" on page 155.
Testing the extraction workflow
The extraction workflow is always performed on the current record in the data source. When an
error is encountered, the extraction workflow stops, and the field on which the error occurred
and all subsequent fields will be greyed out. Click the Messages tab (next to the Step
properties pane) to see any error messages.
To test the extraction workflow on all records, you can:
- Click the Validate All Records toolbar button.
- Select Data > Validate Records in the menu.

If any errors are encountered in one or more records, an error message will be displayed. Errors
encountered while performing the extraction workflow on the current record will also be visible
on the Messages tab.
Selecting data
In order to extract data, you first have to define the data to be extracted by selecting it. How this is done depends on the data source file type. The following paragraphs explain how to create and manipulate a data selection in each type of file.
Data selections are used to extract promotional data ("Extracting data" on page 118),
transactional data ("Extracting transactional data" on page 124) and to apply a condition to an
extraction (Condition step).


Right-clicking on a data selection displays a contextual menu with the actions that can be done
with that selection or the steps that can be added to them. That menu also displays the
keyboard shortcuts.
Text or PDF file
To select data in a Text or PDF file, click on a starting point, keep the mouse button down, drag
to the end of the data that needs to be selected and release the mouse button. The data
selection can contain multiple lines.
To resize a data selection, click and hold on one of the resize handles on the borders or
corners, move them to the new size and release the mouse button.
To move the data selection, click and hold anywhere on the data selection, move it to its new
desired location and release the mouse button.

Note
In a Text or PDF file, when you move the selection rectangle directly after extracting data, you can
use it to select data for the next extraction.
However, moving the selection rectangle that appears after clicking on a field in the Data Model
actually changes which data is extracted into that field.

CSV file or database data
Tabular data is displayed in the Data Viewer in a table where multiple fields appear for each
line or row in the original data.
To select data, click on a field, keep the mouse button down, drag to the last field that you want
to select and release the mouse button.
Alternatively you can select fields just like files in the Windows Explorer: keep the Ctrl button
pressed down while clicking on fields to select or deselect them, or keep the Shift button
pressed down to select consecutive fields.
XML File
XML data is displayed as a tree view inside the Data Viewer. To get a better overview you can
also collapse any XML level.
In this tree view you can select nodes just like files in the Windows Explorer: keep the Ctrl button pressed down while clicking on nodes to select or deselect them, or keep the Shift
button pressed down to select consecutive nodes.
You can select multiple fields even if those fields are in different nodes.

Note
The Goto step isn't used in XML extraction workflows. The DataMapper moves through the file using XPath, a path-like syntax to identify and navigate nodes in an XML document.

Extracting transactional data
Promotional data are data about customers, such as addresses, names and phone numbers.
In Connect, each record in the extracted record set represents one recipient. The number of
fields that contain promotional data is the same in each record. These data are stored on the
root level of the extracted record.
Transactional data, on the other hand, are used in communications about transactions
between a company and their customers or suppliers: invoices, statements, and purchase
orders, for example. Naturally these data differ per customer. They are stored in detail tables in
the extracted record. The number of fields in a detail table can vary from record to record.
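Conceptually, a single extracted record with one detail table could be pictured like this (an illustrative sketch with invented field names, not an actual file format):
Record
  CustomerID : C-1001            (promotional data: one value per record)
  Name       : John Smith
  detail                          (transactional data: zero or more rows per record)
    1: Description=Widget A, Quantity=2, Price=12.50
    2: Description=Widget B, Quantity=1, Price=45.00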


Detail tables are created when an Extract step is added within a Repeat step. The Repeat step
goes through a number of lines or nodes. An Extract step within that loop extracts data from
each line or node.
How exactly this loop is constructed depends on the type of source data.
For more information about detail tables, multiple detail tables and nested detail tables, see
"Example " on page 196.
From a CSV file or a Database
The transactional data (also called line items) appear in multiple rows.


1. Select a field in the column that contains the first line item information.
2. Right-click this data selection and select Add Repeat.

This adds a Repeat step with a GoTo step inside it. The GoTo step moves the cursor
down to the next line, until there are no more lines (see "Goto step" on page 144).
3. (Optional.) Add an empty detail table via the Data Model pane: right-click the Data Model
and select Add a table. Give the detail table a name.
4. Select the Repeat step on the Steps pane.
5. Start extracting data (see "Adding an extraction" on page 119).
When you drag & drop data on the name of a detail table in the Data Model pane, the data
are added to that detail table.
Dropping the data somewhere else on the Data Model pane creates a new detail table,
with a default name that you can change later on (see "Renaming a detail table" on
page 193).
The extraction step is placed inside the Repeat step, just before the GoTo step.


From an XML file
The transactional data appears in repeated elements.
1. Right-click one of the repeating elements and select Add Repeat.

This adds a Repeat step to the data mapping configuration.
By default, the Repeat type of this step is set to For Each, so that each of the repeated elements is extracted. You can see this on the Step properties pane, as long as the
Repeat step is selected on the Steps pane. In the Collection field, you will find the
corresponding node path.

Tip
It is possible to edit the Xpath in the Collection field, to include or exclude elements
from the loop. One example of this is given in a How-to: Using Xpath in a Repeat
step.
The example in the How-to uses the starts-with() function. For an overview of
XPath functions, see Mozilla: XPath Functions.

The Goto step isn't used in XML extraction workflows. The DataMapper moves through the file using XPath, a path-like syntax to identify and navigate nodes in an XML document.
2. (Optional.) Add an empty detail table via the Data Model pane: right-click the Data Model
and select Add a table. Give the detail table a name.
3. Select the Repeat step on the Steps pane.
4. Extract the data: inside a repeating element, select the data that you want to extract. Then
right-click the selected nodes and select Add Extraction, or drag & drop them in the Data
Model.
When you drag & drop data on the name of a detail table in the Data Model pane, the data
are added to that detail table.
Dropping the data somewhere else on the Data Model pane creates a new detail table,
with a default name that you can change later on (see "Renaming a detail table" on
page 193).


The new Extract step will be located in the Repeat step.

From a Text or a PDF file
In a PDF or Text file, transactional data appears on multiple lines and can be spread over
multiple pages.
1. Add a Goto step if necessary. Make sure that the cursor is located where the extraction
loop must start. By default the cursor is located at the top of the page, but previous steps
may have moved it. Note that an Extract step does not move the cursor.
1. Select something in the first line item.
2. Right-click on the selection and select Add Goto. The Goto step will move the
cursor to the start of the first line item.


2. Add a Repeat step where the loop must stop.
1. In the line under the last line item, look for a text that can be used as a condition to stop the loop, for example "Subtotals", "Total" or "Amount".
2. Select that text, right-click on it and select Add Repeat. The Repeat step loops over
all lines until the selected text is found.

3. Include/exclude lines. Lines between the start and end of the loop that don't contain a
line item must be excluded from the extraction. Or rather, all lines that contain a line item
have to be included. This is done by adding a Condition step within the Repeat step.
1. Select the start of the Repeat step on the Steps pane.
2. Look for something in the data that distinguishes lines with a line item from other
lines (or the other way around). Often, a "." or "," appears in prices or totals at the
same place in every line item, but not on other lines.
3. Select that data, right-click on it and select Add Conditional.

Selecting data - especially something as small as a dot - can be difficult in a PDF file. To make sure that a Condition step checks for certain data: type the value in the right operand (in the Step properties pane), move or resize the selection rectangle in the data, and click the Use selection button in the left operand (in the Step properties pane). When the Condition evaluates to true, the value is found in the selected region.
In the Data Viewer, you will see a green check mark in the left margin next to each
included line and an X for other lines.


4. (Optional.) Add an empty detail table to the Data Model: right-click the Data Model and
select Add a table. Give the detail table a name.
5. Extract the data (see "Adding an extraction" on page 119).
When you drag & drop data on the name of a detail table in the Data Model pane, the data
are added to that detail table.
Dropping the data somewhere else on the Data Model pane, or using the contextual
menu in the Data Viewer, creates a new detail table, with a default name that you can
change later on (see "Renaming a detail table" on page 193).


Note
In a PDF or Text file, pieces of data often have a variable size: a product
description, for example, may be short and fit on one line, or be long and cover two
lines. To learn how to handle this, see "Extracting data of variable length" on the
next page.

6. Extract the sum or totals. If the record contains sums or totals at the end of the line items
list, the end of the Repeat step is a good place to add an Extract step for these data. After
the loop step, the cursor position is at the end of line items.
1. Select the amount or amounts.
2. Click on the end of the Repeat step in the Steps panel.
3. Right-click on the selected data and select Add Extraction.
Alternatively, right-click on the end of the Repeat step in the Steps panel and select Add
a Step > Add Extraction.


Tip
This how-to describes in detail how to extract an item description that appears in a variable number
of lines: How to extract multiline items.

Extracting data of variable length
In PDF and Text files, transactional data isn't structured uniformly, as in a CSV, database or
XML file. Data can be located anywhere on a page. Therefore, data are extracted from a
certain region on the page. However, the data can be spread over multiple lines and multiple
pages:
- Line items may continue on the next page, separated from the line items on the first page by a line break, a number of empty lines and a letterhead.
- Data may vary in length: a product description for example may or may not fit on one line.

How to exclude lines from an extraction is explained in another topic: "Extracting transactional
data" on page 124 (see From a PDF or Text file).
This topic explains a few ways to extract a variable number of lines.


Text file: setting the height to 0
If the variable part in a TXT file is at the end of the record (for example, the body of an email) the
height of the region to extract can be set to 0. This instructs the DataMapper to extract all lines
starting from a given position in a record until the end of the record, and store them in a single
field.
This also works with the data.extract() method in a script; see "Examples" on page 271.
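For instance, a sketch of such a call for a Text file; the column positions, line offset and separator are example values, and the parameter order follows the script example shown under "Using a script" later in this section:
/* Sketch: extract columns 1 to 80, starting 3 lines below the current
   position (assuming the third parameter is a line offset), with height 0
   (= until the end of the record), joining the lines with a semicolon. */
data.extract(1, 80, 3, 0, ";");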
Finding a condition
Where it isn't possible to use a setting to extract data of variable length, the key is to find one or
more differences between lines that make clear how big the region is from where data needs to
be extracted.
Whilst, for example, a product description may expand over two lines, other data - such as the
unit price - will never be longer than one line. Either the line above or below the unit price will
be empty when the product description covers two lines.
Such a difference can then be used as a condition in a Condition step or a Case in a Multiple
Conditions step.
A Condition step, as well as each Case in a Multiple Conditions step, can only check for one
condition. To combine conditions, you would need a script.
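To give an idea, the sketch below combines two checks in a single script; it assumes a JavaScript-based field as described under "Using a script" further on, and all region coordinates are example values:
/* Sketch: extract the description over two lines only when BOTH the
   unit price region AND the quantity region are empty (Text file,
   example coordinates). */
var price = data.extract(50, 60, 0, 1, "").trim();
var qty = data.extract(40, 48, 0, 1, "").trim();
if (price.length == 0 && qty.length == 0)
{ data.extract(12, 37, 0, 2, ""); } /* extract two lines */
else { data.extract(12, 37, 0, 1, ""); } /* extract one line */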
Using a Condition step or Multiple Conditions step
Using a Condition step ("Condition step" on page 145) or a Multiple Conditions step ("Multiple
Conditions step" on page 148) one could determine how big the region is that contains the data
that needs to be extracted.
In each of the branches under the Condition or Multiple Conditions step, an Extract step could
be added to extract the data from a particular region. The Extract steps could write their data to
the same field.
Fields cannot be used twice in one extraction workflow. Different Extract steps can only write extracted data to the same field in the Data Model if:
- The field name is the same. (See: "Renaming and ordering fields" on page 158.)
- The Extract steps are mutually exclusive. This is the case when they are located in different branches of a Condition step or Multiple Conditions step.
- The option Append values to current record is checked in the Step properties pane under Extraction Definition.


Tip
Create and edit the Extract step in the 'true' branch, then right-click the step on the Steps pane, select
Copy Step, and paste the step in the 'false' branch. Now you only have to adjust the region from
which this Extract step extracts data.

To learn how to configure a Condition step or a Case in a Multiple Conditions step, see
"Configuring a Condition step" on page 147.


Using a script
A script could also provide a solution when data needs to be extracted from a variable region.
This requires using a JavaScript-based field.


1. Add a field to an Extract step, preferably by extracting data from one of the possible
regions; see "Extracting data" on page 118. To add a field without extracting data, see
"JavaScript-based field" on page 156.
2. On the Step properties pane, under Field Definition, select the field and change its Mode
to JavaScript.
If the field was created with its Mode set to Location, you will see that the script already
contains one line of code to extract data from the original location.
3. Expand the script. Start by doing the check(s) to determine where the data that needs to
be extracted is located. Use the data.extract() function to extract the data. The
parameters that this function expects depend on the data source, see "Examples" on
page 271.
Example
The following script extracts data from a certain region in a Text file; let's assume that this
region contains the unit price. If the unit price is empty (after trimming any spaces), the product
description has to be extracted from two lines; else the product description should be extracted
from one line.
var s = data.extract(1,7,1,2,"");
if (s.substring(1,3).trim().length == 0)
{ data.extract(12,37,0,2,""); } /* extract two lines */
else { data.extract(12,37,0,1,""); } /* extract one line */
The fourth parameter of the extract() function contains the height of the region. When working
with a Text file, this equals a number of lines.

Tip
With a Text file, the data.extract() method accepts 0 as its height parameter. With the height set to 0
it extracts all lines starting from the given position until the end of the record.

Note that this script replicates exactly what can be done in a Condition step. In cases like this, it
is recommended to use a Condition step. Only use a script when no steps are sufficient to give
the expected result, or when the extraction can be better optimized in a script.


Steps
In the DataMapper, steps are part of an extraction workflow (see "Data mapping workflow" on
page 113). They contain a specific instruction for the DataMapper, for example to extract data,
create a loop, or apply a condition. Some types of steps contain other steps.
Steps are executed sequentially, from top to bottom in an extraction workflow.
Inside a Condition step, some steps may be skipped altogether when they are on a particular
branch, whereas in a Repeat step - a loop - several steps may be repeated a number of times.
The Preprocessor and Postprocessor steps are special in that the first can be used to modify
the incoming data prior to executing the rest of the extraction workflow while the latter can be
used to further process the resulting record set after the entire extraction workflow has been
executed.
Step types
These are the types of steps that can be added to a data mapping workflow:
- "Preprocessor step" below
- "Extract step" on page 142
- "Repeat step" on page 143
- "Goto step" on page 144
- "Condition step" on page 145
- "Multiple Conditions step" on page 148
- "Action step" on page 149
- "Postprocessor step" on page 150

Preprocessor step
The Preprocessor step allows the application to modify the incoming data prior to executing the
rest of the extraction workflow through a number of preprocessors. It also lets you define
properties, to be added to each record or to the data as a whole. A unique ID could be created
to be added to each record in the output for integrity checks later on. A time stamp could be added to create reports. A tag could be added to process certain records differently. A
preprocessor could remove certain records altogether.
One example of how a preprocessor could be used is given in a How-to: Using Preprocessors
in DataMapper.
Properties
To add a property:
1. Select the Preprocessor step on the Steps pane.
2. On the Step properties pane, under Properties, click the Add button. See "Properties" on page 217 for an explanation of the settings for properties.
To set the value of a property you can use an Action step (see "Action step" on page 149).
Preprocessors
The Preprocessor step can contain any number of preprocessors. They will be run in sequence
before the data is sent to the Data Mapping workflow. To add a preprocessor:
1. Select the Preprocessor step on the Steps pane.
2. On the Step properties pane, under Preprocessor, click the Add button.

3. Under Preprocessor definition, add the script. Preprocessing tasks must be written in
JavaScript (see "Using scripts in the DataMapper" on page 255 and "DataMapper Scripts
API" on page 252).


Configuring the Preprocessor step
For an explanation of the settings for preprocessors, see: "Preprocessor step properties" on
page 216.
Extract step
The Extract step is essential in each and every data mapping configuration. It extracts data
from the data source, based on their location (a row and column in CSV or tabular data, an
XPath in XML, or a region of the page in PDF and Text) or on a JavaScript. The data is placed
in the record set that is the result of the extraction workflow.
Fields always belong to an Extract step, but they don't necessarily all contain extracted data. To
learn how to add fields without extracted data to an Extract step, see "Fields" on page 155.


Adding an Extract step
To add an Extract step, first select the step on the Steps pane after which to insert the Extract
step. Then:
- In the Data Viewer, select some data, right-click that data and choose Add Extraction, or drag & drop the data in the Data Model. For more detailed information and instructions, see: "Extracting data" on page 118.
- Alternatively, right-click the Steps pane and select Add a Step > Add Extraction. Make the required settings on the Step properties pane.

If an Extract step is added within a Repeat step, the extracted data are added to a detail table
by default; see "Extracting transactional data" on page 124 and "Example " on page 196.
Configuring an Extract step
The names, order, data type and default value of the fields extracted in an Extract step are
properties of that Extract step. These and other properties can be edited via the Step properties
pane. For an explanation of all the options, see "Settings for location-based fields in a Text file"
on page 221.
Fields cannot be used twice in one extraction workflow. Different Extract steps can only write extracted data to the same field in the Data Model if:
- The field name is the same. (See: "Renaming and ordering fields" on page 158.)
- The Extract steps are mutually exclusive. This is the case when they are located in different branches of a Condition step or Multiple Conditions step.
- The option Append values to current record is checked in the Step properties pane under Extraction Definition.

Repeat step
The Repeat step is a loop that may run 0 or more times, depending on the condition specified.
It is used for the extraction of transactional data; see "Extracting transactional data" on
page 124.
Repeat steps do not automatically move the pointer in the source file. Therefore a Goto step
that moves the cursor is added automatically within the loop to avoid an infinite loop, except in XML files. When you select a node in an XML file and add a Repeat step on it, the Repeat step
will automatically loop over all nodes of the same type on the same level in the XML file.
Adding a Repeat step
To add a Repeat step:
1. On the Steps pane, select the step after which to insert the Repeat step.
2. Make sure that the cursor is located where the extraction loop must start. By default the
cursor is located at the top of the page or record, but previous steps in the extraction
workflow may have moved it down. If necessary, add a Goto step (see "Goto step"
below).
This step can be skipped when the data source is an XML file.
3. Add the Repeat step:
- Select data in the line or row where the loop must end, right-click on it and select Add Repeat.
- Right-click the Steps pane and select Add a Step > Add Repeat. Make the required settings on the Step properties pane.

Configuring a Repeat step
For information about how to configure the Repeat step, see "Text and PDF Files" on
page 233.
How to use it in an extraction workflow is explained in the topic: "Extracting transactional data"
on page 124.
Goto step
Although invisible, there is a cursor in the Data Viewer. In an extraction workflow, the cursor
starts off at the top-left corner of each record in the source data.
The Goto step can move the cursor to a certain location in the current record. The new location
can be relative to the top of the record or to the current position.
When the Goto step is used within a Repeat step, it moves the cursor in each loop of the
Repeat step. In this case the new location has to be relative to the current position.


Note
The Goto step isn't used in XML extraction workflows. The DataMapper moves through the file using XPath, a path-like syntax to identify and navigate nodes in an XML document.

Adding a Goto step
To add a Goto step:
- On the Steps pane, select the step after which to insert the Goto step. In the Data Viewer, select some data, right-click that data and choose Add Goto, to add a Goto step that moves the cursor to that data.
- Alternatively, right-click the Steps pane and select Add a Step > Add Goto. Make the required settings on the Step properties pane.

Configuring a Goto step
For information about how to configure the Goto step, see "Text file" on page 244.
Condition step
A Condition step is used when the data extraction must be based on specific criteria. The
Condition step splits the extraction workflow into two separate branches, one that is executed
when the condition is true, the other when it is false.
Extract steps can be added to both the 'true' and the 'false' branch (see "Extracting data" on
page 118 and "Extracting transactional data" on page 124).
In the Data Viewer pane, icons on the left indicate the result of the evaluation in the Condition step: a green check mark when the condition is true and an X when it is false.


Adding a Condition step
To add a Condition step:
- On the Steps pane, select the step after which to insert the Condition step; then, in the Data Viewer, select some data, right-click that data and choose Add Conditional.
  In the Step properties pane, you will see that the newly added Condition step checks if the selected position (the left operand) contains the selected value (the right operand). Both operands and the operator can be adjusted.
  Note that the left operand is by default trimmed, meaning that spaces are cut off.
  Selecting data - especially something as small as a dot - can be difficult in a PDF file. To make sure that a Condition step checks for certain data: type the value in the right operand (in the Step properties pane), move or resize the selection rectangle in the data, and click the Use selection button in the left operand (in the Step properties pane). When the Condition evaluates to true, the value is found in the selected region.
- Alternatively, right-click the Steps pane and select Add a Step > Add Conditional. Enter the settings for the condition on the Step properties pane.

Configuring a Condition step
The condition in a Condition step is expressed in a rule or combination of rules. Rules have a
left operand, an operand type (for example: contains, is empty) and a right operand.
For an overview of all options on the Step properties pane, see "Condition step properties" on
page 238.
Inverting a rule
Inverting a rule adds not to the operand type. For instance, is empty becomes is not empty.
To invert a rule, check the Invert condition option next to Operator under Condition on the
Step properties pane.
Combining rules
One rule is already present in a newly added Condition step. To add another rule, click the Add
condition button under Condition, next to Condition List, on the Step properties pane.

Rules are by default combined with AND. To change the way rules are combined, right-click
"AND" in the Rule Tree, on the Step properties pane, and select OR or XOR instead. (XOR
means one or the other, but not both.)


Renaming a rule
To rename a rule, double-click its name in the Rule Tree and type a new name.
Multiple Conditions step
The Multiple Conditions step is useful to avoid the use of nested Condition steps: Condition
steps inside other Condition steps.
In a Multiple Conditions step, conditions or rather Cases are positioned side by side.
Each Case condition can lead to an extraction.
Cases are executed from left to right.


Adding a Multiple Conditions step
To add a Multiple Conditions step, right-click the Steps pane and select Add a Step > Add
Multiple Conditions.
To add a case, click the Add case button to the right of the Condition field in the Step
properties pane.
Configuring a Multiple Conditions step
For information about how to configure the Multiple Conditions step, see "Left operand, Right
operand" on page 241. The settings for a Case are the same as for a Condition step; see
"Condition step properties" on page 238.
Action step
The Action step can:

- Execute JavaScript code.
- Set the value for a record property. Record properties are defined in the Preprocessor step; see "Preprocessor step" on page 140.
- Stop the processing of the current record. Normally an extraction workflow is automatically executed on all records in the source data. By stopping the processing of the current record, you can filter records or skip records partially.

The Action step can run multiple specific actions one after the other in order.
Adding an Action step
To add an Action step, right-click on the Steps pane and select Add a Step > Add Action.
Configuring an Action step
For information about how to configure the Action step, see "Text and PDF Files" on page 226.
Postprocessor step
The Postprocessor step allows the application to further process the resulting record set after
the entire extraction workflow has been executed, using JavaScript.
For example, a postprocessor can export all or parts of the data to a CSV file which can then be
used to generate daily reports of the Connect Workflow processes that use this data mapping
configuration (see "Data mapping configurations" on page 101).
A postprocessor could also write the results of the extraction process to a file and immediately
upload that file to a Workflow process.
The Postprocessor step can contain any number of postprocessors.
To add a postprocessor:
- Select the Postprocessor step on the Steps pane.
- On the Step properties pane, under Postprocessor, click the Add button.
- Under Postprocessor definition, add the script. Postprocessor tasks must be written in JavaScript (see "Using scripts in the DataMapper" on page 255 and "DataMapper Scripts API" on page 252).


Configuring the Postprocessor step
For an explanation of the settings for postprocessors, see "JavaScript " on page 249.

The Data Model
The Data Model is the structure of records into which extracted data are stored. It contains the
names and types of the fields in a record and in its detail tables. A detail table is a field that
contains a record set instead of a single value. The Data Model is shown in the Data Model pane, filled with data from the current record.

The Data Model is not related to the type of data source: whether it is XML, CSV, PDF, Text or
a database does not matter. The Data Model is a new structure, designed to contain only the
required data.
About records
A record is a block of information that may be merged with a template to generate a single
document (invoice, email, web page...) for a single recipient. It is part of the record set that is
generated by a data mapping configuration.
In each record, data from the data source can be combined with data coming from other
sources.

Creating a Data Model
A Data Model is created automatically within each data mapping configuration, but it is empty at
the start. To fill it you could use another Data Model (see "Importing/exporting a Data Model" on
the next page) or start creating a data mapping workflow (see "Data mapping workflow" on
page 113).
To learn how to add and edit fields, see "Fields" on page 155.


Importing/exporting a Data Model
To use a Data Model in another data mapping configuration, or to use it in a Designer template
without a data mapping configuration, you have to export that Data Model and import it into a
data mapping configuration or template.
Importing and exporting Data Models is done from within the Data Model pane, using the top-right icons.
For information about the structure of the exported Data Model file, see "Data Model file
structure" on page 177.
When you import a Data Model, it appears in the Data Model pane where you can see all the
fields and their types.
You can delete or add fields, or change their type. Once the data model is imported and all the
fields are properly set, all you need to do is extract the information from the active data sample
(see "Extracting data" on page 118).

Note
- Imported Data Model fields always overwrite existing field properties when the field name is the same (although they will still be part of the same Extract step). Non-existent fields are created automatically with the appropriate field settings. The import is case-insensitive.
- All imported data model fields are tagged with an asterisk (*).

Editing the Data Model
Empty fields and detail tables, added via the Data Model pane, can be edited (renamed, deleted etc.) via the Data Model pane, using the contextual menu that opens when you right-click on a field.
Fields in a Data Model that are actually used in the extraction workflow cannot be edited via the
Data Model pane. They are related to a step in the extraction workflow and are edited via the
Step properties pane instead.
The order of the fields can also not be changed via the Data Model pane. It is the Extract step
that determines the order in which data are extracted, so the order of the fields has to be
changed per Extract step.
To learn how to edit fields and change their order, see "Fields" on page 155.


Using the Data Model
The Data Model is what enables you to create personalized templates in the Designer module.
You can drag & drop fields from the Data Model into the template that you are creating (see
"Variable Data" on page 604). For this, you have to have a template and a data mapping
configuration open at the same time, or import a Data Model (see "Importing/exporting a Data
Model" on the previous page).
The Data Model is reusable, meaning that it can be shared amongst different template layouts
and output types.
Different data mapping configurations could use the same Data Model, allowing a template to
be populated with data from different sources and formats, without the need to modify the
template (see "Importing/exporting a Data Model" on the previous page).
In Workflow, when a data mapping configuration is used to extract data from a data source
(see "Data mapping configurations" on page 101), the extracted data is stored in a record set
that is structured according to the Data Model.
About adding fields and data via Workflow
The Data Model is not extensible outside of the DataMapper. When it is used in Workflow - as
part of a data mapping configuration - the contents of its fields can be updated but not its
structure.
There are a number of instances however, where fields may need to be added to the data
model after the initial data mapping operation has been performed. For instance, you might
need to add a cleansed postal address next to the original address, or retrieve a value from a
database and add it to the record.
ExtraData field
You can add empty fields in advance to provide space in the Data Model for Workflow to store
data. For convenience, one field called ExtraData is automatically created at every level of
each data record. That means the record itself gets an ExtraData field, and each detail table
also gets one.
By default the field is not visible in the DataMapper's Data Model, because it is not meant to be
filled via an extraction. It can be made visible using the Show ExtraData Field icon at the top of
the Data Model.
Workflow process
Data can be added to the Data Model in a PlanetPress Connect Workflow process as follows:


1. Use an Execute Data Mapping task or Retrieve Items task to create a record set. On the
General tab select Outputs records in Metadata.
2. Add a value to a field in the Metadata using the Metadata Fields Management task.
Data added to the _vger_fld_ExtraData field on the Document level will appear in the
record's ExtraData field, once the records are updated from the Metadata (in the next
step).
Other fields have the same prefix: _vger_fld_.
3. Update the record/s from the Metadata. There are several ways to do this. You could, for
example:
- Use the Update Data Records plugin.
- Add an Output task and check the option Update records with Metadata.
- Select Metadata as the data source in the Create Preview PDF plugin.

Note
Many of these actions can also be performed using REST calls.

Please refer to PlanetPress Connect Workflow documentation for more information about the
plugins involved.

Fields
Extracted data are stored in fields in the Data Model (see "The Data Model" on page 151).
Fields can be present on different levels: on the record level or in a detail table (see "Example "
on page 196).
Fields always belong to an Extract step, as can be seen on the Step Properties pane (see
"Settings for location-based fields in a Text file" on page 221), but they don't necessarily all
contain extracted data.
Location-based fields do: they read data from a certain location in the data source.
Other fields may contain the result of a JavaScript (JavaScript-based fields) or the value of a
property (property-based fields).


Adding fields
Location-based field
Generally location-based fields are added to a Data Model by extracting data; see "Extracting
data" on page 118. Location-based fields in detail tables are created by extracting
transactional data; see "Extracting transactional data" on page 124.
Alternatively, you can add fields and detail tables directly in the Data Model pane. Right-click
anywhere on the Data Model and a contextual menu will appear. Which menu items are
available depends on where you've clicked. If you right-click inside the record itself, you can
add a field or a detail table. A field will be added at the end with no extraction, while a detail
table will be added with no fields inside.
After adding a field or detail table this way, you can drag & drop data into it. Without data it is
not accessible via the Step properties pane.
JavaScript-based field
JavaScript-based fields are filled by a script: the script provides a value. Note that the last value assigned to a variable is the one used as the result of the expression.
There are a number of ways to add a JavaScript-based field.
Via the Steps pane
1. Make sure there is no data selection in the Data Viewer.
2. Right-click on an Extract step on the Steps pane and select Add a Step > Add Extract
Field. (To add a new Extract step, select Add a Step > Add Extraction first.)
3. On the Step properties pane, under Field Definition, enter the script in the Expression
field.
Via the Step properties pane
1. Select an Extract step on the Steps pane. (To add a new, empty Extract step, right-click
the Steps pane and select Add a Step > Add Extraction.)


2. On the Step properties pane, under Field Definition, click the Add JavaScript Field
button next to the Field List.

3. On the Step properties pane, under Field Definition, enter the script in the Expression
field.
By changing a field's mode
Alternatively you can change a location-based into a JavaScript-based field.
1. Select the field in the Data Model.
2. On the Step properties pane, under Field Definition, change its Mode to JavaScript.
3. Enter the script in the Expression field.
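By way of illustration, a minimal Expression sketch for a Text file follows; the column positions and the field content are assumptions, and, as noted above, the last value assigned is used as the field's value:
/* Sketch: extract the region assumed to hold the customer name and
   normalize it before it is stored in the field (example coordinates). */
var name = data.extract(1, 30, 0, 1, "").trim();
var result = name.toUpperCase();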
Property-based field
A property-based field is filled with the value of a property.
Objects such as the sourceRecord and steps have a number of predefined properties. (For an
explanation of the objects to which the properties belong, see "DataMapper Scripts API" on
page 252.)
Custom properties can be added via the Preprocessor step; see "Preprocessor step" on
page 140.
A property-based field cannot be added directly. To fill a field with the value of a property, you
have to change an existing field's Mode to Properties.
1. Select the field in the Data Model.
2. On the Step properties pane, under Field Definition, change its Mode to Properties.
3. Select the property from the Property drop-down list, or click the button to the right, to
open a filter dialog that lets you find a property based on the first few letters that you type.
Another way to add the value of a property to a field is by setting the field's Mode to JavaScript
and entering the corresponding property in the Expression field, e.g.
data.properties.myProperty;.


Adding fields dynamically
Outside of the DataMapper the Data Model cannot be changed. It isn't possible to add fields to
it when using the data mapping configuration in Workflow. It is however possible to add data to
existing fields via Workflow; see "About adding fields and data via Workflow" on page 154.
Editing fields
The list of fields that are included in the extraction, the order in which fields are extracted and
the data format of each field, are all part of the Extract step's properties. These can be edited via
the Step properties pane (see "Settings for location-based fields in a Text file" on page 221).
Renaming and ordering fields
The order and the names of fields in the Data Model can be changed via the properties of the
Extract step that they belong to.
1. Select the Extract step that contains the fields that you want to rename. To do this you
could click on one of those fields in the Data Model, or on the step in the Steps pane.
2. On the Step properties pane, under Field Definition, click the Order and rename fields
button.

See "Order and rename fields dialog" on page 224.

Note
Fields cannot have the same name, unless they are on a different level in the record.
If you intend to use the field names as metadata in a Workflow process, do not add
spaces to field names, as they are not permitted in metadata field names.


Setting the data type
Fields store extracted data as a String by default. The data type of a field can be changed via
the properties of the Extract step that the field belongs to.
1. Select the Extract step that contains the field. You can do this by clicking on the field in
the Data Model, or on the step in the Steps pane that contains the field.
2. On the Step properties pane, under Field Definition, set the Type to the desired data type.
See "Data types" on page 168 for a list of available types.
Changing the type does not only set the data type inside the record. In the case of dates,
numbers and currencies, it also means that the DataMapper will expect the data in the data
source to be formatted in a certain way. If the actual data doesn't match the format that the
DataMapper expects, it cannot interpret the date, number or currency as such. If for example a
date in the data source is formatted as "yyyy-mm-dd" but the default format adds the time, the
date cannot be read and the DataMapper will stop with an error.
The default format for dates, numbers and currencies can be set in the user preferences
("Datamapper preferences" on page 703), in the data source settings ("Data source settings" on
page 115) and per data field (in the Extract step properties, see "Data Format" on page 223).
Setting a default value
You may want to set a default value for a field, in case no extraction can be made. Make sure to
set the data type of the field via the step properties (see above). Then right-click the field and
select Default Value.
The default value must match the selected data type. If the data type of the field is set to Integer,
for example, you cannot enter a fractional value such as 2.3. A default date must be formatted as a DateTime
object ("year-month-day" plus the time stamp, for example: 2012-04-11 12.00 AM); see "Date"
on page 171.
Modifying extracted data
To modify extracted data - the contents of a field - you have to write a script. The script can be
entered as a Post function in a location-based field or as an Expression in a JavaScript-based
field.


Post function
On the Step properties pane, under Field Definition, you can enter a script in the Post function
field to be run after the extraction. (Click the Use JavaScript Editor button to open the Script
Editor dialog if you need more space.)
A Post function script operates directly on the extracted data. Its results replace the extracted
data. For example, the Post function script replace("-", ""); replaces the first dash character
that occurs inside the extracted string. The code toLowerCase(); converts the string to
lowercase letters.
Note that the function must be appropriate for the field's data type.
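For instance, following the same convention as the examples above (the function is applied to the extracted value itself), a Post function could normalize a decimal separator. This is a sketch only; the sample value is hypothetical:

// Applied to the extracted string, e.g. "27,50"; the first comma is
// replaced with a period so the value can be read as a number.
replace(",", ".");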
JavaScript Expression
Alternatively you can change a field's Mode from Location to JavaScript:
1. Select the field in the Data Model.
2. On the Step properties pane, under Field Definition, change its Mode to JavaScript.
You will see that the JavaScript Expression field is not empty; it contains the code that was
used to extract data from the location. This code can be used or deleted.

Note
The last value attribution to a variable is the one used as the result of the expression.
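As a minimal sketch of this behaviour (the "Country" selection, the values and the intermediate variable are hypothetical):

// JavaScript-based field: the value assigned last becomes the field's result.
var country = extract("Country");
var label = "Other";
if (country == "CA") {
    label = "Canada";
}
label = label.toUpperCase();   // last assignment: "CANADA" or "OTHER" is the result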

Deleting a field
The list of fields that are included in an extraction is one of the properties of an Extract step. To
delete a field:
1. Select the field: click on the field in the Data Model, or select the Extract step that contains
the field that you want to delete, and in the Step properties pane, under Field Definition,
select the field from the Field List.


2. In the Step properties pane, under Field Definition, click the Remove Extract Field button
next to the Field List drop-down.

Detail tables
A detail table is a field in the Data Model that contains a record set instead of a single value.
Detail tables contain transactional data. They are created when an Extract step is added within
a Repeat step; see "Extracting transactional data" on page 124.
In the most basic of transactional communications, a single detail table is sufficient. However, it
is possible to create multiple detail tables, as well as nested tables. Detail tables and nested
tables are displayed as separate levels in the Data Model (see "The Data Model" on page 151).
Renaming a detail table
Renaming detail tables is especially useful when there are multiple detail tables in one record, or
when a detail table contains another detail table. If a detail table contains product data, for example,
‘products’ would be a better name than the default.
1. On the Data Model pane, click one of the fields in the detail table.
2. On the Step Properties pane, under Extraction Definition, in the Data Table field, you
can find the name of the detail table: record.detail by default. Change the detail part in
that name into something else.

Note
A detail table’s name should always begin with ‘record.’.

3. Click somewhere else on the Step Properties pane to update the Data Model. You will
see the new name appear.


Creating multiple detail tables
Multiple detail tables are useful when more than one type of transactional data is present in the
source data, for example purchases (items with a set price, quantity, item number) and services
(with a price, frequency, contract end date, etc).

To create more than one detail table, simply extract transactional data in different Repeat steps
(see "Extracting transactional data" on page 124).
The best way to do this is to add an empty detail table (right-click the Data Model, select Add a
table and give the detail table a name) and drop the data on the name of that detail table.
Otherwise the extracted fields will all be added to one new detail table with a default name at first,
and you will have to rename the detail table created in each Extract step to pull the detail tables
apart (see "Renaming a detail table" on the previous page).


Nested detail tables
Nested detail tables are used to extract transactional data that relate to other data. They are created just like multiple detail tables, with two differences:
• For the tables to be actually nested, the Repeat step and its Extract step that extract the nested transactional data must be located within the Repeat step that extracts data to a detail table.
• In their name, the dot notation (record.services) must contain one extra level (record.services.charges).

Note
Using nested detail tables in the Designer module requires scripting, as described in this How-to:
Cloning your way through nested tables.

Example
An XML source file lists the services of a multi-service provider: Internet, Cable, Home Phone,
Mobile. Each service in turn lists a number of "charges", being service prices and rebates, and

Page 164

a number of "details" such as movie rentals or long distance calls.


The services can be extracted to a detail table called record.services.

The "charges" and "details" can be extracted to two nested detail tables.


The nested tables can be called record.services.charges and record.services.details.


Now one "charges" table and one "details" table are created for each row in the "services"
table.

Data types
By default the data type of extracted data is a String, but each field in the Data Model can be set
to contain another data type.
To do this:
1. In the Data Model, select a field.
2. On the Step properties pane, under Field Definition choose a data type from the Type
drop-down.
Changing the type does not only set the data type inside the record. In the case of dates,
numbers and currencies, it also means that the DataMapper will expect the data in the data
source to be formatted in a certain way. If the actual data doesn't match the format that the
DataMapper expects, it cannot interpret the date, number or currency as such. If for example a
date in the data source is formatted as "yyyy-mm-dd" but the default format adds the time, the
date cannot be read and the DataMapper will stop with an error.
The default format for dates, numbers and currencies can be set in the user preferences
("Datamapper preferences" on page 703), in the data source settings ("Data source settings" on
page 115) and per data field (in the Extract step properties, see "Data Format" on page 223).

Note
Data format settings tell the DataMapper how certain types of data are formatted in the data
source. They don't determine how these data are formatted in the Data Model or in a template. In
the Data Model, data are converted to the native data type. Dates, for example, are converted to a
DateTime object in the Data Model, and will always be shown as "year-month-day" plus the time
stamp, for example: 2012-04-11 12.00 AM.

The following data types are available in PlanetPress Connect.
• "Boolean" on the next page
• "String" on page 176
• "HTMLString" on page 175
• "Integer" on page 175
• "Float" on page 174
• "Currency" on the facing page
• "Date" on page 171
• "Object" on page 176

Note
The Object data type is only available in the DataMapper module. It can be used for properties in
the Preprocessor step, but not for fields in the Data Model.

Boolean
Booleans are a simple true/false data type often used in conditions and comparisons.
Defining Boolean values
• Preprocessor:
  • In the Step properties pane, under Properties, add or select a field.
  • Specify the Type as Boolean and set a default value of either true or false, followed by a semicolon.
• Extraction:
  • In the Data Model, select a field.
  • On the Step properties pane, under Field Definition, set the Type to Boolean. The field value must be true or false.
• JavaScript Expression: Set the desired value to either true or false. Example:
  record.fields["isCanadian"] = true;

Note
The value must be all in lowercase: true, false. Any variation in case (True, TRUE) will not
work.


Boolean expressions
Boolean values can also be set using an expression of which the result is true or false. This is
done using operators and comparisons.
Example: record.fields["isCanadian"] = (extract("Country") == "CA");
For more information on JavaScript comparison and logical operators, please see
w3schools.com or developer.mozilla.org.
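Building on the example above, comparisons can also be combined with JavaScript logical operators. This is a minimal sketch; the "Country" and "Age" data selections and the field name are hypothetical:

// true only when both comparisons hold; && is the logical AND operator.
record.fields["isCanadianAdult"] = (extract("Country") == "CA") && (parseInt(extract("Age"), 10) >= 18);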
Currency
The Currency data type is a signed, numeric, fixed-point 64-bit number with 4 decimals. Values
range from -922 337 203 685 477.5808 to 922 337 203 685 477.5807. This data type is
routinely used for financial calculations: it is as precise as integers.
Defining Currency values
• Preprocessor:
  • In the Step properties pane, under Properties, add or select a field.
  • Specify the Type as Currency and set a default value as a number with up to 4 decimals, followed by a semicolon; such as 546513.8798;
• Extraction:
  • In the Data Model, select a field.
  • On the Step properties pane, under Field Definition, set the Type to Currency.
  • Under Data Format, specify how the value is formatted in the data source (see Extract Step; for the default format settings, see "Data source settings" on page 115). The field value will be extracted and treated as a Float.
• JavaScript Expression: Set the desired value to any Float value. Example:
  record.fields["PreciseTaxSubtotal"] = 27.13465;

Note
While Currency values can have up to 4 decimals, only 2 are displayed on screen.


Building Currency values
Currency values can be the result of direct attribution or mathematical operations just like
Integer values (see "Integer" on page 175).
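For example, a minimal sketch of a JavaScript expression that builds a Currency value from an extracted amount; the "SubTotal" selection, the field name and the tax factor are hypothetical:

// parseFloat turns the extracted string into a number before the
// multiplication; the result is stored in a Currency-typed field.
record.fields["TotalWithTax"] = parseFloat(extract("SubTotal")) * 1.05;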
Date
Dates are values that represent a specific point in time, precise up to the second. They can also
be referred to as datetime values. While dates are displayed using the system's regional
settings, in reality they are stored unformatted.

Note
The Date property is stored in the Connect database with zero time zone offset, which makes it
possible to convert the time correctly in any location. PlanetPress Workflow, however, shows the
date/time as it is stored in the database (with 0 time zone offset). This is expected behavior for the moment,
and the time zone offset must be calculated manually in PlanetPress Workflow.

Extracting dates
To extract data and have that data interpreted as a Date, set the type of the respective field to
Date:
1. Select the field in the data model.
2. On the Step properties pane, under Field Definition, specify the Type as Date.
3. Make sure that the date in the data source is formatted in a way that matches the
expectations of the DataMapper. If the date doesn't match the format that the DataMapper
expects, it cannot be interpreted as a date. For example, if a date in the data source is
formatted as "yyyy-mm-dd" but the DataMapper expects a time as well, the date cannot be
read and the DataMapper will stop with an error.
The expected date format can be set in three places:
• In the user preferences ("Datamapper preferences" on page 703).
• In the data source settings ("Data source settings" on page 115).
• In the field properties: on the Step properties pane, under Data Format, specify the Date/Time Format.


For the letters and patterns that you can use in a date format, see "Defining a date/time
format" below.
Data format settings tell the DataMapper how certain types of data are formatted in the
data source. They don't determine how these data are formatted in the Data Model or in
a template. In the Data Model, data are converted to the native data type. Dates, for
example, are converted to a DateTime object in the Data Model, and will always be
shown as "year-month-day" plus the time stamp, for example: 2012-04-11 12.00 AM.
Defining a date/time format
A date format is a mask representing the order and meaning of each digit in the raw data, as
well as the date/time separators. The mask uses several predefined markers to parse the
contents of the raw data. Here is a list of markers that are available in the DataMapper:
• yy: Numeric representation of the Year when it is written out with only 2 digits (e.g. 13)
• yyyy: Numeric representation of the Year when it is written out with 4 digits (e.g. 2013)
• M: Short version of the month name (e.g. Jan, Aug). These values are based on the current regional settings.
• MM: Long version of the month name (e.g. January, August). These values are based on the current regional settings.
• mm: Numeric representation of the month (e.g. 1, 09, 12)
• D: Short version of the weekday name (e.g. Mon, Wed). These values are based on the current regional settings.
• DD: Long version of the weekday name (e.g. Monday, Wednesday). These values are based on the current regional settings.
• dd: Numeric representation of the day of the month (e.g. 1, 09, 22)
• hh: Numeric representation of the hours
• nn: Numeric representation of the minutes
• ss: Numeric representation of the seconds
• ms: Numeric representation of the milliseconds
• ap: AM/PM string
• In addition, any constant character can be included in the mask, usually to indicate date/time separators (e.g. / - :). If one of those characters happens to be one of the reserved characters listed above, it must be escaped using the \ symbol.


Note
The markers that can be used when extracting dates are different from those that are used to
display dates in a template (see the Designer's "Date and time patterns" on page 918).

Examples of masks

Value in raw data                  Mask to use
June 25, 2013                      MM dd, yyyy
06/25/13                           mm/dd/yy
2013.06.25                         yyyy.mm.dd
2013-06-25 07:31 PM                yyyy-mm-dd hh:nn ap
2013-06-25 19:31:14.1206           yyyy-mm-dd hh:nn:ss.ms
Tuesday, June 25, 2013 @ 7h31PM    DD, MM dd, yyyy @ hh\hnnap

Entering a date using JavaScript
In several places in the DataMapper, Date values can be set through a JavaScript. For
example:
• In a field in the Data Model. To do this, go to the Steps pane and select an Extract step. Then, on the Step properties pane, under Field Definition click the Add JavaScript Field button (next to the Field List drop-down). Type the JavaScript in the Expression field. (To rename the field, click the Order and rename fields button.)
• In a Preprocessor property. To do this, go to the Steps pane and select the Preprocessor step. Then, on the Step properties pane, under Properties add a property, specify its Type as Date and put the JavaScript in the Default Value field.

The use of the JavaScript Date() object is necessary when creating dates through a JavaScript
expression. For more information, see w3schools - JavaScript Dates and w3schools - Date
Object.


Example
The following script creates a date that is the current date + 30 days:
function addDays(date, days) {
    var result = new Date(date);
    result.setDate(result.getDate() + days);
    return result;
}
addDays(new Date(), 30);
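Assuming the result should go into a Date-typed field rather than be returned as the expression value, the same helper could be used as follows; the field name is hypothetical:

// Store a due date 30 days from now in a hypothetical Date-typed field.
function addDays(date, days) {
    var result = new Date(date);
    result.setDate(result.getDate() + days);
    return result;
}
record.fields["DueDate"] = addDays(new Date(), 30);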
Float
Floats are signed, numeric, floating-point numbers whose value has 15-16 significant digits.
Floats are routinely used for calculations. Note that Float values can only have up to 3
decimals. They are inherently imprecise: their accuracy varies according to the number of
significant digits being requested.
The Currency data type can have up to 4 decimals; see "Currency" on page 170.
Defining Float values
• Preprocessor:
  • In the Step properties pane, under Properties, add or select a field.
  • Specify the Type as Float and set a default value as a number with decimals, followed by a semicolon; for example 546513.879;.
• Extraction:
  • In the Data Model, select a field.
  • On the Step properties pane, under Field Definition, set the Type to Float. The field value will be extracted and treated as a Float.
• JavaScript Expression: Set the desired value to any Float value. Example:
  record.fields["PreciseTaxSubtotal"] = 27.134;

Building Float values
Float values can be the result of direct attribution or mathematical operations just like Integer
values (see "Integer" on the next page).
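For instance, a minimal sketch; the "UnitPrice" selection, the field name and the discount factor are hypothetical:

// Build a Float from an extracted price string and apply a 10% discount.
record.fields["DiscountedPrice"] = parseFloat(extract("UnitPrice")) * 0.9;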


HTMLString
HTMLStrings contain textual data that includes HTML markup. They are essentially the same
as String values except in cases where the HTML markup can be interpreted.
Example: Assume that a field has the value He said <b>WOW</b>!. If the data type is String and
the value is placed in a template, it will display exactly as "He said <b>WOW</b>!" (without the
quotes). If the data type is HTMLString, it will display as "He said WOW!" (again, without the
quotes), with "WOW" in bold.
Considering this is the only difference, for more information on how to create and use HTMLString values, see "String" on the facing page.
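For instance, a minimal sketch of a JavaScript expression that fills an HTMLString field; the field name and the markup are illustrative:

// The <b> tags are kept as part of the value; when the field's type is
// HTMLString, a template can interpret them and render "WOW" in bold.
record.fields["Remark"] = "He said <b>WOW</b>!";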
Integer
Integers are signed, numeric, whole 64-bit numbers whose values range from -(2^63) to
(2^63)-1. Integers are the numeric type with the highest precision (and the fastest processing speed)
of all, since they are never rounded.
Defining Integer values
• Preprocessor:
  • In the Step properties pane, under Properties, add or select a field.
  • Specify the Type as Integer and set a default value as a number, such as 42.
• Extraction:
  • In the Data Model, select a field.
  • On the Step properties pane, under Field Definition set the Type to Integer. The field value will be extracted and treated as an integer.
• JavaScript Expression: Set the desired value to any Integer value. Example:
  record.fields["AnswerToEverything"] = 42;

Building Integer Values
Integers can be set through a few methods, all of which produce an integer result.
• Direct attribution: Assign an integer value directly, such as 42, 99593463712 or data.extract("TotalOrdered");
• Mathematical operations: Assign the result of any mathematical operation. For example: 22+51, 3*6, 10/5 or sourceRecord.property.SubTotal. For more information on mathematics in JavaScript, see w3Schools - Mathematical Operators. For more advanced mathematical functions, see w3schools - Math Object.

Note
When adding numbers that are not integers, for instance 4.5 + 1.2, a round-towards-zero rounding is
applied after the operation. In this example the result, 5.7, is rounded to 5. Similarly, -1.5 - 1
results in -2.
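The rounding behaviour described in the note can be illustrated with a short sketch (the field names are hypothetical):

// Both results are rounded towards zero when stored in Integer fields.
record.fields["SumExample"] = 4.5 + 1.2;   // 5.7 is stored as 5
record.fields["NegExample"] = -1.5 - 1;    // -2.5 is stored as -2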

Object
Objects hold references to other objects. You can assign any reference type (string, array,
class, or interface) to an Object variable. An Object variable can also refer to data of any value
type (numeric, Boolean, Char, Date, structure, or enumeration).
Defining Object values
• Preprocessor:
  • In the Step properties pane, under Properties, add or select a field.
  • Specify the Type as Object and set a default value as a semicolon.

String
Strings contain textual data. Strings do not have any specific meaning, which is to say that their
contents are never interpreted in any way.
Defining String values
• Preprocessor:
  • In the Step properties pane, under Properties, add or select a field.
  • Specify the Type as String and set a default value as any text between quotes, followed by a semicolon, e.g. "This is my text";
• Extraction:
  • In the Data Model, select a field.
  • On the Step properties pane, under Field Definition set the Type to String. The field value will be extracted and treated as a string.
• JavaScript Expression: Set the desired value to any string between quotes. Example:
  record.fields["countryOfOrigin"] = "Canada";

Building String values
String values can be made up of more than just a series of characters between quotes. Here
are a few tips and tricks to build strings:
• Both single and double quotes can be used to surround strings and they will act in precisely the same manner. So, "this is a string" and 'this is a string' mean the same thing. However, it's useful to have both in order to remove the need for escaping characters. For instance, "I'm fine!" works, but 'I'm fine!' does not since only 'I' is properly interpreted. 'I\'m fine!' works (escaping the ' with a \).
• It is possible to put together more than one string, as well as variables containing strings, by concatenating them with the + operator. For example, "Hello " + sourceRecord.property.FirstName + ", nice to meet you!".
• Adding more data to an existing string variable or field is possible using a combination of concatenation and assignment. For example, after the statements var myVar = "Is this the real life"; and myVar += " or is this just fantasy?";, the value of myVar will be "Is this the real life or is this just fantasy?". (See also the sketch below.)

For more information on string variables, see quirksmode.org.
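Putting these tips together, a minimal sketch; the "FirstName" selection and the field name are hypothetical:

// Concatenate literals and extracted data, then append more text before
// storing the result in a String field.
var greeting = "Hello " + extract("FirstName") + ", nice to meet you!";
greeting += " Thank you for your order.";
record.fields["Greeting"] = greeting;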

Data Model file structure
The Data Model file is an XML file that contains the structure of the Data Model, including
each field's name, data type, and any number of detail tables and nested tables.
Example: promotional data

Example: transactional details, in a simple invoice format
Example: nested tables (one table into another)
Example: default values
Default values can be added to any field with the defaultValue attribute.

DataMapper User Interface
The main ingredients in the Designer's user interface are the following:
• "Menus" on page 186
• "Toolbar" on page 249
• "Settings pane" on page 203
• "Steps pane" on page 213
• "The Data Viewer" on page 200
• "Step properties pane" on page 215
• "Messages pane" on page 202
• "Data Model pane" on page 190

Keyboard shortcuts
This topic gives an overview of keyboard shortcuts that can be used in the DataMapper. Keyboard shortcuts available in the Designer for menu items, script editors and the data model pane can also be used in the DataMapper; see "Keyboard shortcuts" on page 738.
Although some of the keyboard shortcuts are the same, this isn't a complete list of Windows keyboard shortcuts. Please refer to Windows documentation for a complete list of Windows keyboard shortcuts.

Menu items
The following key combinations activate a function in the menu.
Alt: Put the focus on the menu. (Alt + the underlined letter in a menu name displays the corresponding menu.) The menu can then be browsed using the Enter key, arrow up and arrow down buttons.
Alt + F4: Exit
Ctrl + C or Ctrl + Insert: Copy
Ctrl + N: New
Ctrl + O: Open file
Ctrl + Shift + O: Open configuration file
Ctrl + S: Save file
Ctrl + V or Shift + Insert: Paste
Ctrl + X: Cut
Ctrl + W or Ctrl + F4: Close file
Ctrl + Y or Ctrl + Shift + Y: Redo
Ctrl + Z or Ctrl + Shift + Z: Undo
Ctrl + Shift + S: Save all
Ctrl + Shift + W or Ctrl + Shift + F4: Close all
Ctrl + F5: Revert
Ctrl + F7: Next view
Ctrl + Shift + F7: Previous view
Ctrl + F8: Next perspective
Ctrl + Shift + F8: Previous perspective
Ctrl + F10: Save as
Ctrl + F12: Send to Workflow / Package files
F4: Ignore step
F6: Add an Extract step
F7: Add a Goto step
F8: Add a Condition step
F9: Add a Repeat step
F10: Add an Extract field
F11: Add an Action step
F12: Add a Multiple Conditions step
Alt + F12: Add a Case step (under a Multiple Conditions step)
Home: Go to the first step in the workflow
End: Go to the last step in the workflow
Alt + V: Validate records
Shift + F10 or Ctrl + Shift + F10: Open context menu

Viewer pane
The following key combinations activate a function in the Viewer.
Alt + -: Open system menu
Ctrl + -: Zoom out
Ctrl + +: Zoom in
Ctrl + Shift + E: Switch to Editor
Ctrl + F6: Next editor (when there is more than one file open in the Workspace)
Ctrl + Shift + F6: Previous editor (when there is more than one file open in the Workspace)

Data Model pane
PageUp: Go to previous record
PageDown: Go to next record
Alt + CR: Property page
Alt + PageDown: Scroll down to the last field
Alt + PageUp: Scroll up to the first field

Steps tab
Ctrl + -: Zoom out
Ctrl + +: Zoom in

Edit Script and Expression windows
The following key combinations have a special function in the Expression and in the Edit Script windows (expanded view).
Ctrl + space: Content assist (auto-complete)
Ctrl + A: Select all
Ctrl + D: Duplicate line
Ctrl + I: Indent (Tab)
Ctrl + J: Line break
Ctrl + L: Go to line; a prompt opens to enter a line number.
Ctrl + Shift + D: Delete line
Shift + Tab: Shift selected lines left
Tab: Shift selected lines right
Ctrl + /: Comment out / uncomment a line in code
Ctrl + Shift + /: Comment out / uncomment a code block

Menus
The following menu items are shown in the DataMapper Module's menu:

File Menu
• New...: Opens the Creating a New Data Mapping Configuration dialog.
• Open: Opens a standard File Open dialog. This dialog can be used to open Templates and data mapping configurations.
• Open Recent: Lists the most recently opened Templates and configurations. Clicking on a template will open it in the Designer module, clicking on a data mapping configuration will open it in the DataMapper module.
• Close: Close the currently open data mapping configuration or Template. If the file needs to be saved, the appropriate Save dialog will open.
• Close All: Close any open data mapping configuration or Template. If any of the files need to be saved, the Save Resources dialog opens.
• Save: Saves the current data mapping configuration or Template to its current location on disk. If the file is a data mapping configuration and has never been saved, the Save As dialog appears instead.
• Save As...: Saves the current data mapping configuration or Template to a new location on disk. In the case of Templates, it is saved to a location that can be different than the local repository.
• Save All: Saves all open files. If any of the open files have never been saved, the Save As dialog opens for each new unsaved file.
• Revert: Appears only in the Designer module. Reverts all changes to the state in which the file was opened or created.
• Add Data: Adds data either to the current data mapping configuration or to the open template. In a data mapping configuration:
  • From File...: Opens the dialog to add a new data file to the currently loaded data mapping configuration. Not available if the currently loaded data mapping configuration connects to a database source.
  • From Database...: Opens the Edit Database Configuration dialog. Not available if the currently loaded data mapping configuration is file-based.
• Send to Workflow: Opens the Send to Workflow dialog to send files to a local PlanetPress Workflow software installation.
• Exit: Closes the software. If any of the files need to be saved, the Save Resources dialog opens.

Edit Menu
• Undo: Undoes the previous action.
• Redo: Redoes the last action that was undone.
• Cut Step: Removes the currently selected step and places it in the clipboard. If the step is a Repeat or a Condition, all steps under it are also placed in the clipboard. If there is already a step in the clipboard, it will be overwritten.
• Copy Step: Places a copy of the currently selected step in the clipboard. The same details as for Cut Step apply.
• Paste Step: Takes the step or steps in the clipboard and places them in the Steps pane after the currently selected step.
• Delete Step: Deletes the currently selected step. If the step is a Repeat or Condition, all steps under it are also deleted.
• Cut: Click to remove the currently selected step, or steps, and place them in the clipboard.
• Copy: Click to place a copy of the currently selected step, or steps, in the clipboard.
• Paste: Click to place any step, or steps, from the clipboard before the currently selected step in the Steps Pane.

Data Menu
• Hide/Show datamap: Click to show or hide the icons to the left of the Data Viewer that display how the steps affect the line.
• Hide/Show extracted data: Click to show or hide the extraction selections indicating that data is extracted. This simplifies making data selections in the same areas and is useful to display the original data.
• Validate All Records: Runs the Steps on all records and verifies that no errors are present in any of the records. Errors are displayed in the Messages Pane.

Steps
• Ignore Step: Click to set the step to be ignored (aka disabled). Disabled steps do not run when in DataMapper and do not execute when the data mapping configuration is executed in Workflow. However, they can still be modified normally.
• Add Extract Step: Adds an Extract Step with one or more extract fields. If more than one line or field is selected in the Data Viewer, each line or field will have an extract field.
• Add Goto Step: Adds a Goto step that moves the selection pointer to the beginning of the data selection. For instance if an XML node is selected, the pointer moves to where that node is located.
• Add Condition Step: Adds a condition based on the current data selection. The "True" branch gets run when the text is found on the page. Other conditions are available in the step properties once it has been added.
• Add Repeat Step: Adds a loop that is based on the current data selection, and depending on the type of data. XML data will loop on the currently selected node, CSV loops for all rows in the record. In Text and PDF data, if the data selection is on the same line as the cursor position, the loop will be for each line until the end of the record. If the data selection is on a lower line, the loop will be for each line until the text in the data selection is found at the specified position on the line (e.g. until "TOTAL" is found).
• Add Extract Field: Adds the data selection to the selected Extract step, if an extract step is currently selected. If multiple lines, nodes or fields are selected, multiple extract fields are added simultaneously.
• Add Multiple Conditions: Adds a condition that splits into multiple case conditions.
• Add Action Step: Adds a step to run one or more specific actions such as running a JavaScript expression or setting the value of a Source Record Property.

View Menu
• Zoom In: Click to zoom in the Steps Pane.
• Zoom Out: Click to zoom out the Steps Pane.

Window Menu
• Show View
  • Messages: Shows the Messages Pane.
  • Steps: Shows the Steps Pane.
  • Settings: Shows the Settings Pane.
  • Record: Shows the Record Pane.
  • Detail tables: Each detail table and nested table is listed here. Click on one to show it in the Data Model Pane.
  • Step Properties: Shows the Step Properties Pane.
• Reset Perspective: Resets all toolbars and panes to the initial configuration of the module.
• Preferences: Click to open the Preferences dialog.

Help Menu
• Software Activation: Displays the Software Activation dialog. See Activating your license.
• Help Topics: Click to open this documentation.
• Contact Support: Click to open the Objectif Lune Contact Page in the default system Web browser.
• About PlanetPress Connect Designer: Displays the software's About dialog.
• Welcome Screen: Click to re-open the Welcome Screen.

Panes
The DataMapper screen contains the following panes.
• "Settings pane" on page 203. The Settings pane contains settings for the data source.
• "Steps pane" on page 213. The entire extraction workflow is visible in the Steps pane.
• "The Data Viewer" on page 200. The Data Viewer shows one record in the data source.
• "Step properties pane" on page 215.
  The Step properties pane contains all settings for the step that is currently selected on the Steps pane.
• "Data Model pane" below. The Data Model pane shows one extracted record.
• "Messages pane" on page 202.

Data Model pane
The Data Model pane displays the result of all the preparations and extractions of the extraction workflow. The pane displays the content of a single record within the record set at a time.
Data is displayed as a tree view, with the root level being the record table. On the level below that are detail tables, and a detail table inside a detail table is called a nested table.
The Data Model is also used as a navigation tool between records and in all detail tables.

Data Model toolbar buttons
• Import a Data Model.
• Export the Data Model.
• Synchronize the Data Model and the data sample.
• Show the ExtraData field. Note that this field is not meant to be filled via an extraction. It can be used in Workflow to add data to the Data Model; see "About adding fields and data via Workflow" on page 154.

Data Model contextual menu
The Data Model is generally constructed by extracting data; see "Extracting data" on page 118. It is however possible to modify the Data Model, even if no data is present in the pane. To do this, open the contextual menu within the pane itself by right-clicking on something in the Data Model pane. Depending on where you've clicked, it can contain the following options:
• Add a field: Click to add a new field at the current level (record or detail table). Enter the field name in the dialog and click OK to add it.
• Add a table: Click to add a new detail table at the current level (record or existing detail table). Enter the table name in the dialog and click OK to add it.
• Default Value: Click to set the default value for a field. This value is used if no extraction is present, or if an extraction attached to this field returns no value.
• Collapse Fields: Collapse the fields in the selected level.
• Expand Fields: Clicking the icon that represents collapsed fields enables this menu item. It is used to expand the fields on one level.
• Collapse All Fields: Collapse the fields on the record level and in all detail tables.
• Expand All Fields: Expand the fields on the record level and in all detail tables.

Note
The following options are only available for Data Model fields or detail tables that are not filled via an extraction. Fields and detail tables that are filled via an Extract step are to be changed (renamed, deleted etc.) via the properties of that Extract step; see: "Editing fields" on page 158 and "Renaming a detail table" on page 193.

• Rename: Click to rename the selected table or field. Enter the new name and click OK to rename.
• Delete: Click to delete the selected table or field.
• Set Type: Use the list to select the field type (see "Data types" on page 168).

Field display
Fields in the Data Model pane are displayed in specific ways to simplify comprehension of the displayed data:
• Value: The current value of the extracted field, based on the record shown in the Data Viewer.
A field name with an asterisk to the right indicates that this field is part of an imported Data Model file. A field with a grey background indicates this Data Model field does not have any attached extracted data. A field with a white background indicates that the field has attached extracted data but the step extracting the data is not currently selected. A field with a blue background indicates that the field has attached extracted data and the step extracting the data is currently selected. Record navigation Records can be navigated via the Data Model pane. The default record level navigates between records both in the Data Model pane and the Data Viewer, while each detail table has a similar navigation that influences that table and each detail table under it. l l l l l l Expand/Contract: Click to hide or show any fields or tables under the current table level. Table Name: Displays the name of the table as well as the number of records at that level (in [brackets]). At the record level this is the number of records. In other levels it represents the number of entries in a detail table. Number of Records: The number of available records in the active data sample. This is affected by the Boundary settings (see "Record boundaries" on page 117 and "Settings pane" on page 203) and the Preprocessor step ("Preprocessor step" on page 140). First Record: Go to the first record in the data sample. This button is disabled if the first record is already shown. Previous Record: Go to the previous record in the data sample. This button is disabled if the first record is shown. Current Record: Displays the current record or table entry. Type a record number and press the Enter key to display that record. The number has to be within the number of available records in the data sample. Page 192 l l Next Record: Go to the next record in the data sample. This button is disabled if the last record is shown. Last Record: Go to the last record in the data sample. This button is disabled if the last record is already shown. If a record limit is set in the Settings pane ("Settings pane" on page 203) the last record will be within that limit. Detail tables A detail table is a field in the Data Model that contains a record set instead of a single value. Detail tables contain transactional data. They are created when an Extract step is added within a Repeat step; see "Extracting transactional data" on page 124. In the most basic of transactional communications, a single detail table is sufficient. However, it is possible to create multiple detail tables, as well as nested tables. Detail tables and nested tables are displayed as separate levels in the Data Model (see "The Data Model" on page 151). Renaming a detail table Renaming detail tables is especially useful when there are more detail tables in one record, or when a detail table contains another detail table. For this detail table, ‘products’ would be a better name. 1. On the Data Model pane, click one of the fields in the detail table. 2. On the Step Properties pane, under Extraction Definition, in the Data Table field, you can find the name of the detail table: record.detail by default. Change the detail part in that name into something else. Note A detail table’s name should always begin with ‘record.’. 3. Click somewhere else on the Step Properties pane to update the Data Model. You will see the new name appear. 
Creating multiple detail tables Multiple detail tables are useful when more than one type of transactional data is present in the source data, for example purchases (items with a set price, quantity, item number) and services Page 193 (with a price, frequency, contract end date, etc). To create more than one detail table, simply extract transactional data in different Repeat steps (see "Extracting transactional data" on page 124). The best way to do this is to add an empty detail table (right-click the Data Model, select Add a table and give the detail table a name) and drop the data on the name of that detail table. Else the extracted fields will all be added to one new detail table with a default name at first, and you will have to rename the detail table created in each Extract step to pull the detail tables apart (see "Renaming a detail table" on the previous page). Page 194 Page 195 Nested detail tables Nested detail tables are used to extract transactional data that are relative to other data. They are created just like multiple detail tables, with two differences: l l For the tables to be actually nested, the Repeat step and its Extract step that extract the nested transactional data must be located within the Repeat step that extracts data to a detail table. In their name, the dot notation (record.services) must contain one extra level (record.services.charges). Note Using nested detail tables in the Designer module requires scripting, as described in this How-to: Cloning your way through nested tables. Example An XML source file lists the services of a multi-service provider: Internet, Cable, Home Phone, Mobile. Each service in turn lists a number of "charges", being service prices and rebates, and Page 196 a number of "details" such as movie rentals or long distance calls. Page 197 The services can be extracted to a detail table called record.services. The "charges" and "details" can be extracted to two nested detail tables. Page 198 The nested tables can be called record.services.charges and record.services.details. Page 199 Now one "charges" table and one "details" table are created for each row in the "services" table. The Data Viewer The Data Viewer is located in the middle on the upper half of the DataMapper screen. It displays the data source that is currently loaded in the DataMapper, specifically one record in that data. Where one record ends and the next starts, is set in the Data Source settings (see "Record boundaries" on page 117). One record may contain more than one unit: PDF or Text pages, XML nodes, CSV lines, etc. When the Delimiter or Boundary options are set in the Settings pane, the Data Viewer reflects those changes. Any modification of the source data by a Preprocessor takes place before the data is displayed in the Data Viewer (see "Preprocessor step" on page 140). The Data Viewer lets you select data, extract them ("Extracting data" on page 118), and apply a condition where necessary. How data can be selected depends on the type of source file (see "Selecting data" on page 122). Once data is extracted, clicking on any Extract step on the Steps pane highlights any area from which it extracts data in the Data Viewer. You can click on the Preprocessor step to select all the steps in the extaction workflow and highlight all extracted data. Clicking on other step types also has a visible effect in the Data Viewer: l Clicking on a Repeat step shows where the loop takes place. l Clicking on a Goto step shows where the cursor is moved. 
Clicking on a Condition step shows which data fulfil the condition. For more information about the different steps that can be added to a data mapping workflow, see "Steps" on page 140. l Data Viewer toolbar The Data Viewer has a toolbar at the top to control options in the viewer. Which toolbar features are available depends on the data source type. l Font (Text file only): Use the drop-down to change the font used to display text. Useful for double-byte data. It is recommended that monospace fonts be used. Page 200 l l l l Hide/Show line numbers the left of the Data Viewer. (Text file only): Click to show or hide the line numbers on Hide/Show datamap : Click to show or hide the icons to the left of the Data Viewer which displays how the steps affect the line. Hide/Show extracted data : Click to show or hide the extraction selections indicating that data is extracted. This simplifies making data selections in the same areas and is useful to display the original data. Lock/Unlock extracted data : Click to lock existing extraction selections so they cannot be moved or resized. This simplifies making data selections in the same area. l Zoom Level: Use the arrows to adjust the zoom level, or type in the zoom percentage. l Zoom In (CTRL +) l Zoom Out (CTRL -) : Click to zoom in by increments of 10% : Click to zoom out by increments of 10% Additional Keyboard Shortcuts for XML Files: l l + (while on an XML node with children): Expand the XML Node - (while on an XML node with children): Collapse the XML node, hiding all its children nodes. Contextual Menu You can access the contextual menu using a right-click anywhere inside the Viewer window. Note The Add Extract Field item is available only after an Extract step has been added to the workflow. Page 201 Messages pane The Messages pane is shared between the DataMapper and Designer modules and displays any warnings and errors from the data mapping configuration or template. At the top of the Message pane are control buttons: l Export Log: Click to open a Save As dialog where the log file (.log) can be saved on disk. l Clear Log Viewer: Click to remove all entries in the log viewer. l Filters: Displays the Log filter (see "Log filter" below). l Activate on new events: Click to disable or enable the automatic display of this dialog when a new event is added to the pane. l Time: The date and time when the error occurred. l Type: Whether the entry is a warning or an error. l l Source: The source of the error. This indicates the name of the step as defined in its step properties. Message: The contents of the message, indicating the actual error. Log filter The log filter determines what kind of events are show in the Messages pane (see "Messages pane" above). l l Event Types group: l OK: Uncheck to hide OK-level entries. l Information: Uncheck to hide information-level entries. l Warning: Uncheck to hide any warnings. l Error: Uncheck to hide any critical errors. Limit visible events to: Enter the maximum number of events to show in the Messages Pane. Default is 50. Page 202 Settings pane Settings for the data source and a list of Data Samples and JavaScript files used in the current data mapping configuration, can be found on the Settings tab at the left. The available options depend on the type of data sample that is loaded. The Input Data settings (especially Delimiters) and Boundaries are essential to obtain the data and eventually, the output that you need. For more explanation, see "Data source settings" on page 115. 
Input Data The Input Data settings specify how the input data must be interpreted. These settings are different for each data type. For a CSV file, for example, it is important to specify the delimiter that separates data fields. PDF files are already delimited naturally by pages, so the input data settings for PDF files are interpretation settings for text in the file. CSV file Input Data settings In a CSV file, data is read line by line, where each line can contain multiple fields. The input data settings specify to the DataMapper module how the fields are separated. l l l l l l l Field separator: Defines what character separates each field in the file. Even though CSV stands for comma-separated values, CSV can actually refer to files where fields are separated using any character, including commas, tabs, semicolons, and pipes. Text delimiter: Defines what character surrounds text in the file, preventing the Field separator from being interpreted within those text delimiters. This ensures that, for example, the field “Smith; John” is not interpreted as two fields, even if the field delimiter is the semicolon. Comment delimiter: Defines what character starts a comment line. Encoding: Defines what encoding is used to read the Data Source ( US-ASCII, ISO8859-1, UTF-8, UTF-16, UTF-16BE or UTF-16LE ). Lines to skip: Defines a number of lines in the CSV that will be skipped and not used as records. Set tabs as a field separator: Overwrites the Field separator option and sets the Tab character instead for tab-delimited files. First row contains field names: Uses the first line of the CSV as headers, which automatically names all extracted fields. Page 203 l Ignore unparseable lines: Ignores any line that does not correspond to the settings above. PDF file Input Data settings PDF Files have a natural, static delimiter in the form of pages, so the options here are interpretation settings for text in the PDF file. The Input Data settings for PDF files determine how words, lines and paragraphs are detected in the PDF when creating data selections. Each value represents a fraction of the average font size of text in a data selection, meaning "0.3" represents 30% of the height or width. l l l l l Word spacing: Determines the spacing between words. As PDF text spacing is somehow done through positioning instead of actual text spaces, text position is what is used to find new words. This option determines what percentage of the average width of a single character needs to be empty to consider a new word has started. The default value is 0.3, meaning a space is assumed if there is a blank area of 30% of the width of the average character in the font. Line spacing: Determines the spacing between lines of text. The default value is 1, meaning the space between lines must be equal to at least the average character height. Paragraph spacing: Determines the spacing between paragraphs. The default value is 1.5, meaning the space between paragraphs must be equal to at least 1.5 times the average character height to start a new paragraph. Magic number: Determines the tolerance factor for all of the above values. The tolerance is meant to avoid rounding errors. If two values are more than 70% away from each other, they are considered distinct; otherwise they are the same. For example, if two characters have a space of exactly the width of the average character, any space of between 0.7 and 1.43 of this average width is considered one space. A space of 1.44 is considered to be 2 spaces. 
PDF file color space: Determines if the PDF if displayed in Color or Monochrome in the Data Viewer. Monochrome display is faster in the Data Viewer. This has no influence on the actual data extraction or the data mapping performance. Database Input Data settings Databases all return the same type of information. Therefore the Input Data options for a database refer to the database itself instead of to the data. The following settings apply to any database or ODBC Data Sample. Page 204 l l l l l Connection String: Displays the connection string used to access the Data Source. Table: Displays the tables and stored procedures available in the database. The selected table is the one the data is extracted from. Clicking on any of the tables shows the first line of the data in that table. Encoding: Defines what encoding is used to read the Data Source ( US-ASCII, ISO8859-1, UTF-8, UTF-16, UTF-16BE or UTF-16LE ). Browse button : Opens the Edit Database configuration dialog, which can replace the existing database data source with a new one. This is the same as using the Replace feature in the Data Samples window. Custom SQL button : Click to open the SQL Query Designer (see "SQL Query Designer" on page 213) and type in a custom SQL query. If the database supports stored procedures, including inner joins, grouping and sorting, you can use custom SQL to make a selection from the database, using whatever language the database supports. Text file Input Data settings Because text files have many different shapes and sizes, there are many options for the input data in these files. l l l l l l Encoding: Defines what encoding is used to read the Data Source ( US-ASCII, ISO8859-1, UTF-8, UTF-16, UTF-16BE or UTF-16LE ). Selection/Text is based on bytes: Check for text files that use double-bytes characters (resolves width issues in some text files). Add/Remove characters: Defines the number of characters to add to, or remove from, the head of the data stream. The spin buttons can also increment or decrement the value. Positive values add blank characters while negative values remove characters. Add/Remove lines: Defines the number of lines to add to, or remove from, the head of the data stream. The spin buttons can also increment or decrement the value. Positive values add blank lines while negative values remove lines. Maximum line length: Defines the number of columns on a data page. The spin buttons can also increment or decrement the value. The maximum value for this option is 65,535 characters. The default value is 80 characters. You should tune this value to the longest line in your input data. Setting a maximum data line length that greatly exceeds the length of the longest line in your input data may increase execution time. Page delimiter type: Defines the delimiter between each page of data. Multiples of such pages can be part of a record, as defined by the Boundaries. Page 205 l On lines: Triggers a new page in the Data Sample after a number of lines. l l l Cut on number of lines: Triggers a new page after the given number of lines. With this number set to 1, and the Boundaries set to On delimiter, it is possible to create a record for each and every line in the file. Cut on FF: Triggers a new page after a Form Feed character. On text: Triggers a new page in the Data Sample when a specific string is found in a certain location. l Word to find: Compares the text value with the value in the data source. l Match case: Activates a case-sensitive text comparison. 
l l l Location: Choose Selected area or Entire width to use the value of the current data selection as the text value. Left/Right: Use the spin buttons to set the start and stop columns to the current data selection (Selected area) in the record. Lines before/after: This option places the delimiter a certain number of lines before or after the current line. This is useful if the text that triggers the delimiter is not on the first line of each page. l Text from right to left: Sets the writing direction of the data source to right-to-left. l Expand tabs to spaces: Replaces tabs with the given number of spaces. XML File Input Data settings For an XML file you can either choose to use the root node, or select an element type, to create a new delimiter every time that element is encountered. l l Use root element: Locks the XML Elements option to the top-level element. No other boundaries can be set. If there is only one top-level element, there will only be one record. XML elements: Displays a list containing all the elements in the XML file. Selecting an element causes a new page of data to be created every time an instance of this element is encountered. Note The information contained in all of the selected parent nodes will be copied for each Page 206 instance of that node. For example, if a client node contains multiple invoice nodes, the information for the client node can be duplicated for each invoice. The DataMapper only extracts elements for which at least one value or attribute value is defined in the file. Boundaries Boundaries are the division between records: they define where one record ends and the next record begins; for an explanation see "Record boundaries" on page 117. CSV or Database file boundaries Since database data sources are structured the same way as CSV files, the options for these file types are identical. l l l Record limit: Defines how many records are displayed in the Data Viewer. This does not affect output production; when generating output, this option is ignored. To disable the limit, use the value 0 (zero). Line limit: Defines the limit of detail lines in any detail table. This is useful for files with a high number of detail lines, which in the DataMapper interface can slow down things. This does not affect output production; when generating output, this option is ignored. To disable the limit, use the value 0 (zero). Trigger: Defines the type of rule that controls when a boundary is set, creating a new record. l Record(s) per page: Defines a fixed number of lines in the file that go in each record. l l On change: Defines a new record when a specific field (Field name) has a new value. l l Records: The number of records (lines, rows) to put in each record. Field name: Displays the fields in the top line. The boundaries are set on the selected field name. On script: Defines the boundaries using a custom JavaScript. For more information see "Setting boundaries using JavaScript" on page 257. Page 207 l On field value: Sets a boundary on a specific field value. l l l Field name: Displays the fields in the top line. The value of the selected field is compared with the Expression below to create a new boundary. Expression: Enter the value or Regular Expression to compare the field value to. Use Regular Expression: Treats the Expression as a regular expression instead of static text. For more information on using Regular Expressions (regex), see the Regular-Expressions.info Tutorial. 
PDF file boundaries

For a PDF file, Boundaries determine how many pages are included in each record. You can set this up in one of three ways: by giving a static number of pages; by checking a specific area on each page for text changes, specific text, or the absence of text; or by using an advanced script.

- Record limit: Defines how many records are displayed in the Data Viewer. To disable the limit, use the value 0 (zero).
- Trigger: Defines the type of rule that controls when a boundary is set, creating a new record.
  - On page: Defines a boundary on a static number of pages.
    - Number of pages: Defines how many pages go in each record.
  - On text: Defines a boundary on a specific text comparison.
    - Start coordinates (x,y): Defines the left and top coordinates of the data selection to compare with the text value.
    - Stop coordinates (x,y): Defines the right and bottom coordinates.
    - Use Selection: Select an area in the Data Viewer and click the Use selection button to set the start and stop coordinates to the current data selection.

      Note
      In a PDF file, all coordinates are in millimeters.

    - Times condition found: When the boundaries are based on the presence of specific text, you can specify after how many instances of this text the boundary can be effectively defined. For example, if a string is always found on the first and on the last page of a document, you could specify a number of occurrences of 2. This way, there is no need to inspect other items for whether it is on the first page or the last page. Having found the string two times is enough to set the boundary.
    - Pages before/after: Defines the boundary a certain number of pages before or after the current page. This is useful if the text triggering the boundary is not located on the first page of the record.
    - Operator: Selects the type of comparison (for example, "contains").
    - Word to find: Compares the text value with the value in the data source.
    - Match case: Makes the text comparison case-sensitive.

Text file boundaries

For a text file, Boundaries determine how many 'data pages' are included in each record. These don't have to be actual pages, as is the case with PDF files. The data page delimiters are set in the "Text file Input Data settings" on page 205.

- Record limit: Defines how many records are displayed in the Data Viewer. This does not affect output production; when generating output, this option is ignored. To disable the limit, use the value 0 (zero).
- Selection/Text is based on bytes: Select this option for text records with fixed width fields whose length is based on the number of bytes and not the number of characters.
- Trigger: Defines the type of rule that controls when a boundary is set, creating a new record.
  - On delimiter: Sets a boundary after a given number of page delimiters.
    - Occurrences: The number of times that the delimiter is encountered before fixing the boundary. For example, if you know that your documents always have four pages delimited by the FF character, you can set the boundaries after every four delimiters.
  - On text: Defines a boundary on a specific text comparison.
    - Location:
      - Selected area:
        - Select the area button: Uses the value of the current data selection as the text value. Making a new selection and clicking on Select the area will redefine the location.
        - Top/Bottom: Defines the start and end row of the data selection to compare with the text value.
      - Entire width: Ignores the column values and compares using the whole line.
      - Entire height: Ignores the row values and compares using the whole column.
      - Entire page: Compares the text value on the whole page. Only available with the contains, not contains, is empty and is not empty operators.
    - Times condition found: When the boundaries are based on the presence of specific text, you can specify after how many instances of this text the boundary can be effectively defined. For example, if a string is always found on the first and on the last page of a document, you could specify a number of occurrences of 2. This way, there is no need to inspect other items for whether it is on the first page or the last page. Having found the string two times is enough to set the boundary.
    - Delimiters before/after: Defines the boundary a certain number of data pages before or after the current data page. This is useful if the text triggering the boundary is not located on the first data page of the record.
    - Operator: Selects the type of comparison (for example, "contains").
    - Word to find: Compares the text value with the value in the data source.
      - Left/Right: Defines where to find the text value in the row.
      - Use selected text button: Copies the text in the current selection to use as the comparison value.
    - Match case: Makes the text comparison case-sensitive.
  - On script: Defines the boundaries using a custom JavaScript. For more information see "Setting boundaries using JavaScript" on page 257.

XML file boundaries

The delimiter for an XML file is a node. The Boundaries determine how many of those nodes go in one record. This can be a specific number, or a variable number if the boundary is to be set when the content of a specific field or attribute within a node changes (for example when the invoice_number field changes in the invoice node).

- Record limit: Defines how many records are displayed in the Data Viewer. This does not affect output production; when generating output, this option is ignored. To disable the limit, use the value 0 (zero).
- Trigger: Defines the type of rule that controls when a boundary is set, creating a new record.
  - On Element: Defines a new record on each new instance of the XML element selected in the Input Data settings.
    - Occurrences: The number of times that the element is encountered before fixing the boundary.
  - On Change: Defines a new record when a specific field or attribute in the XML element has a new value.
    - Field: Displays the fields and (optionally) attributes in the XML element. The value of the selected field determines the new boundaries.
    - Also extract element attributes: Check this option to include attribute values in the list of content items that can be used to trigger a boundary.

Data samples

The Data Sample area displays a list of all the imported Data Samples that are available in the current data mapping configuration. As many Data Samples as necessary can be imported to properly test the configuration. Only one of the data samples - the active data sample - is shown in the Data Viewer.
A number of buttons let you manage the Data Samples. In addition to using the buttons listed below, you can right-click a file to bring up the context menu, which offers the same options plus the Copy and Paste options.

Tip
Data samples can be copied and pasted to and from the Settings pane using Windows File Explorer.

- Add: Add a new Data Sample from an external data source. The new Data Sample will need to be of the same data type as the current one. For example, you can only add PDF files to a PDF data mapping configuration.
  Multiple files can be added simultaneously.
- Delete: Remove the current Data Sample from the data mapping configuration.
- Replace: Open a Data Sample and replace it with the contents of a different data source.
- Reload: Reload the currently selected Data Sample and any changes that have been made to it.
- Set as Active: Activates the selected Data Sample. The active data sample is shown in the Data Viewer after it has gone through the Preprocessor step as well as the Input Data and Boundary settings.

External JS Libraries

Right-clicking in the box brings up a control menu, with the same options as are available through the buttons on the right.

- Add: Add a new external library. Use the standard Open dialog to browse and open the .js file.
- Delete: Remove the currently selected library from the data mapping configuration.
- Replace: Open a library and replace it with the contents of a different .js file.
- Reload: Reload the currently selected library and any changes that have been made to it.

Default Data Format

The Default Data Format settings defined here apply to any new extraction made in the current data mapping configuration. Any format already defined for an existing field remains untouched.
It is also possible to set a default format for dates and currencies in the user preferences ("Datamapper preferences" on page 703). Specific settings for a field that contains extracted data are made via the properties of the Extract step that the field belongs to (see "Editing fields" on page 158).

- Negative Sign Before: A negative sign will be displayed before any negative value.
- Decimal Separator: Set the decimal separator for a numerical value.
- Thousand Separator: Set the thousand separator for a numerical value.
- Currency Sign: Set the currency sign for a currency value.
- Date Format: Set the date format for a date value.
- Date Language: Set the date language for a date value (e.g. if English is selected, the term May will be identified as the month of May).
- Treat empty as 0: A numerical empty value is treated as a 0 value.

Note
Default data formats tell the DataMapper how certain types of data are formatted in the data source. They don't determine how these data are formatted in the Data Model or in a template. In the Data Model, data are converted to the native data type. Dates, for example, are converted to a DateTime object in the Data Model, and will always be shown as "year-month-day" plus the time stamp, for example: 2012-04-11 12.00 AM.

SQL Query Designer

The SQL Query Designer is used to design a custom SQL query to pull information from a database. It can be opened via the Settings pane when extracting data from a database.

- Tables: Lists all tables and stored queries in the database.
- Custom Query: Displays the query that retrieves information from a database. Each database type has its own version of the SQL query language. To learn how to build your own query, please refer to your database's user manual.
- Test Query button: Click to test the custom query to ensure it will retrieve the appropriate information.
- Results: Displays the result of the SQL query when clicking on Test Query.

Steps pane

The Steps tab displays the data mapping workflow: the process that prepares and extracts data. The process contains multiple distinct steps and is run for each of the records in the source data.
For more information about the steps and how to use them, please refer to Steps and "Data mapping workflow" on page 113.
Moving a step

To rearrange steps, simply drag & drop them somewhere else on the dotted line in the Steps pane.
Alternatively you can right-click on a step and select Cut Step or use the Cut button in the Toolbar. If the step is Repeat or Condition, all steps under it will also be placed in the clipboard. To place the step at its destination, right-click the step in the position before the desired location and click Paste Step, or use the Paste button in the toolbar.

Viewing step details

Hovering over a step shows a tooltip that displays some of the details of that step. To see all details for a step, click on the step and take a look at the Step properties pane ("Step properties pane" below).
Clicking on any Extract step in the Steps pane highlights any area in the Data Viewer from which it extracts data. You can also click on the Preprocessor step to select all the steps in the workflow and show a complete map of all the extracted data.

Window controls

The following controls appear at the top of the Steps pane:

- Zoom In (CTRL +): Click to zoom in by increments of 10%.
- Zoom Out (CTRL -): Click to zoom out by increments of 10%.

Contextual menu

You can access the contextual menu using a right-click anywhere inside the Steps pane.

- Add a Step: Adds a step to the process. More options are available when a Repeat or a Condition step is selected:
  - Add Step in Repeat: Adds a step to a Repeat loop.
  - Add Step in True: Adds a step to the True branch of a condition step.
  - Add Step in False: Adds a step under the False branch of a condition step.
- Add Multiple Conditions Step: Adds a Multiple Conditions step.
- Add Case Step: Adds a Case condition under the selected Multiple Conditions step.
- Ignore Step: Click to set the step to be ignored (that is, disabled). Disabled steps are grayed out and do not run, neither in the DataMapper nor when the data mapping configuration is executed in Workflow. However, they can still be modified normally.
- Delete Step: To remove a step, right-click on it and select Delete Step from the contextual menu or use the Delete button in the Toolbar. If the step to be deleted is Repeat or Condition, all steps under it will also be deleted.
- Copy/Paste Step: To copy a step, right-click on it and select Copy Step or use the button in the Toolbar. If the step is Repeat or Condition, all steps under it will also be placed in the clipboard. To paste the copied step at its destination, right-click the step in the position before the desired location and select Paste Step, or use the button in the Toolbar.

Step properties pane

The Step Properties pane is used to adjust the properties of each step in the process. The pane is divided into a few subsections depending on the step and the data type. It always contains a subsection to name and document the selected step.

Note
Step properties may also depend on the data sample's file type.

- "Preprocessor step properties" below
- "Settings for location-based fields in a Text file" on page 221
- "Text and PDF Files" on page 226
- "Text and PDF Files" on page 233
- "Condition step properties" on page 238
- "Left operand, Right operand" on page 241
- "Text file" on page 244
- "JavaScript" on page 249

Preprocessor step properties

The Preprocessor step does not run for every record in the source data. It runs once, at the beginning of the extraction workflow, before anything else; see "Preprocessor step" on page 140.
The properties described below become visible in the Step properties pane when the Preprocessor step is selected in the Steps pane.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.

Fixed Automation Properties

The Fixed automation properties subsection lists all the fixed properties available from the PlanetPress Workflow automation module. These properties are equivalent to data available within the PlanetPress Workflow process. For each property, the following is available:

- Name: A read-only field displaying the name of the property.
- Scope: A read-only field indicating that the scope of the property is Automation.
- Type: A read-only field indicating the data type for each property.
- Default Value: Enter a default value for the property. This value is overwritten by the actual value coming from PlanetPress Workflow when the data mapping configuration is run using the Execute Data Mapping task.

The following automation properties are currently available:

- JobInfoX: These properties are the equivalent of the JobInfo values available in the PlanetPress Workflow process. They can be set using the Set Job Info and Variables task. To access these properties inside of any JavaScript code within the data mapping configuration, use automation.jobInfos.JobInfoX (where X is the job info number, from 0 to 9).
- OriginalFilename: This property contains the original file name that was captured by the PlanetPress Workflow process and is equivalent to the %o variable in the process. To access this property inside of any JavaScript code within the data mapping configuration, use automation.properties.OriginalFilename.
- ProcessName: This property contains the name of the process that is currently executing the data mapping configuration and is equivalent to the %w variable in the process. To access this property inside of any JavaScript code within the data mapping configuration, use automation.properties.ProcessName.
- TaskIndex: This property contains the index (position) of the task inside the process that is currently executing the data mapping configuration; it has no equivalent variable in PlanetPress Workflow. To access this property inside of any JavaScript code within the data mapping configuration, use automation.properties.TaskIndex.

Properties

The Properties subsection is used to create specific properties that are used throughout the workflow. Properties can be accessed through some of the interface elements such as the Condition and Repeat step properties, or in scripts, through the "DataMapper Scripts API" on page 252.

Note
Properties are evaluated in the order they are placed in the list, so properties can use the values of previously defined properties in their expression.

- Name: The name of the property used to refer to its value.
- Scope: What this property applies to:
  - Entire Data: These properties are static properties that cannot be changed once they have been set; in other words, they are Global constants.
  - Each Record: These properties are evaluated and set at the beginning of each Source Record. These properties can be modified once they have been set, but are always reset at the beginning of each Source Record.
  - Automation variable: These properties initialize variables coming from the PlanetPress Workflow automation tool. The name of the property needs to be the same as the variable name in Workflow, and they can be either a Local variable or a Global variable. For either one, only the actual name is to be used: for %{MyLocalVar} use only MyLocalVar, and for %{global.MyGlobalVar} use MyGlobalVar. If a global and a local variable have the same name (%{myvar} and %{global.myvar}), the local variable's value is used and the global one is ignored. To access a workflow variable inside of any JavaScript code within the data mapping configuration, use automation.variables.variablename.
- Type: The data type of the property. For more information see "Data types" on page 168.
- Default Value: The initial value of the property. This is a JavaScript expression. See "DataMapper Scripts API" on page 252.

Note
Entire Data properties are evaluated before anything else, such as Preprocessors, Delimiters and Boundaries in the Settings pane (see "Data source settings" on page 115). This means these properties cannot read information from the data sample or from any records. They are mostly useful for static information such as folder locations or server addresses.

Preprocessor

The Preprocessor subsection defines what preprocessor tasks are performed on the data file before it is handed over to the data mapping workflow. Preprocessor tasks can modify the data file in many ways, and each task runs in turn, using the result of the previous one as an input.

- Name: The name to identify the Preprocessor task.
- Type: The type of Preprocessor task. Currently there is only one type available: script.

Preprocessor definition

- Expression: Enter the JavaScript code to be performed on the data file. See "DataMapper Scripts API" on page 252.

Extract step properties

The Extract step takes information from the data source and places it in the record set that is the result of the extraction workflow. For more information see "Extract step" on page 142 and "Extracting data" on page 118.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.

Extraction Definition

- Data Table: Defines where the data will be placed in the extracted record. The root table is record; any other table inside the record is a detail table. For more information see "Extracting transactional data" on page 124.
- Append values to current record: When the Extract step is inside a loop, check this to ensure that the extraction will be done in the same detail table as any previous extractions within the same loop. This ensures that, if multiple extracts are present, only one detail table is created.

Field Definition

The following field definition settings are identical for all fields.

- Field List: The Field List displays each of the single fields that belong to the selected step in a drop-down. Fields can be re-ordered and re-named within the Order and rename fields dialog (see "Order and rename fields dialog" on page 224). Select one of the fields to make further settings for that field.
- Add Unique ID to extraction field: Check to add a unique numerical set of characters to the end of the extracted value.
  This ensures no two values are identical in this field in the record set.
- Mode: Determines the origin of the data. Fields always belong to an Extract step, but they don't necessarily contain extracted data. See "Fields" on page 155 for more information.
  - Location: The contents of the data selection determine the value of the extracted field. The settings for location-based fields are listed separately, per file type:
    - "Settings for location-based fields in a Text file" below
    - "Settings for location-based fields in a PDF File" below
    - "Settings for location-based fields in CSV and Database files" on page 222
    - "Settings for location-based fields in an XML File" on page 223
  - JavaScript: The result of the JavaScript Expression written below the drop-down will be the value of the extracted field. If the expression contains multiple lines, the last value attribution (variable = "value";) will be the value. See the DataMapper Scripts API. (A short sketch is given after the text-file settings below.)
    - Use JavaScript Editor: Click to display the Script Editor dialog.
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

  - Properties: The value of the property selected below will be the value of the selected field.
    - Property: This drop-down lists all the currently defined properties (including system properties). Custom properties can be defined in the Preprocessor step; see "Preprocessor step" on page 140. For an explanation of the objects to which the properties belong, see "DataMapper Scripts API" on page 252.
    - Choose a property button: Click this button to open a filter dialog that lets you find a property based on the first few letters that you type.
- Type: The data type of the selected data; see "Data types" on page 168. Make sure that the data format that the DataMapper expects matches the actual format of the data in the data source; see "Data Format" on page 223.

Settings for location-based fields in a Text file

- Left: Defines the start of the data selection to extract.
- Right: Defines the end of the data selection to extract.
- Top offset: The vertical offset from the current pointer location in the Data Viewer.
- Height: The height of the selection box. When set to 0, this instructs the DataMapper to extract all lines starting from the given position until the end of the record and store them in a single field.
- Use selection: Click to use the value (Left, Right, Top offset and Height) of the current data selection (in the Data Viewer) for the extraction.

  Note
  If the selection contains multiple lines, only the first line is extracted.

- Post Function: Enter a JavaScript expression to be run after the extraction. A Post function script operates directly on the extracted data, and its results replace the extracted data. For example, the Post function script replace("-", ""); would replace the first dash character that occurs inside the extracted string.
  - Use JavaScript Editor: Click to display the Script Editor dialog.
- Trim: Select to trim empty characters at the beginning or the end of the field.
- Concatenation string: The string used when splitting the selection into individual fields (see Split below).
- Split: Separate the selection into individual fields based on the Concatenation string defined above.
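To illustrate the JavaScript mode and the "last value attribution" rule mentioned above, here is a minimal sketch for a text file. The data.extract() call and its (left, right, vertical offset, height, separator) signature are taken from the "DataMapper Scripts API" on page 252 and should be verified there; the column positions are purely hypothetical.

    // JavaScript mode field sketch: read columns 10 to 30 of the current line
    // and tidy the value up. The data.extract() signature is an assumption to
    // verify against the DataMapper Scripts API; the column numbers are hypothetical.
    var invoiceNo = String(data.extract(10, 30, 0, 1, ""));  // converted to a native JavaScript string
    invoiceNo = invoiceNo.trim().replace(/\s+/g, "");        // the last value attribution becomes the field value

A Post function works in the same spirit, but it is applied directly to the value that was just extracted, as in the replace("-", "") example above.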
Settings for location-based fields in a PDF File

These are the settings for location-based fields in a PDF file.

- Left: Defines the start of the data selection to extract.
- Right: Defines the end of the data selection to extract.
- Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
- Height: The height of the selection box.
- Use selection: Click to use the value (Left, Right, Top offset and Height) of the current data selection for the extraction.

  Note
  If the selection contains multiple lines, the lines are by default joined and extracted into one field. To split the lines, select the option Split lines (see below).

- Post Function: Enter a JavaScript expression to be run after the extraction. For example replace("-","") would replace a single dash character inside the extracted string.
- Trim: Select to trim empty characters at the beginning or the end of the field.
- Type: The data type of the selected data; see "Data types" on page 168. If the selected data is split (see below), this setting is applied to the first extracted field. Make sure that the data format that the DataMapper expects matches the actual format of the data in the data source; see "Data Format" below.
- Split:
  - Split lines: Separate a multi-line selection into individual fields.
  - Join lines: Join the lines in the selection with the Concatenation string defined below.
  - Concatenation string: The (HTML) string used to concatenate lines when they are joined.

Settings for location-based fields in CSV and Database files

These are the settings for location-based fields in CSV and Database files.

- Column: Drop-down listing all fields in the Data Sample, of which the value will be used.
- Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
- Use selection: Click to use the value of the current data selection for the extraction.

  Note
  If the selection contains multiple lines, only the first line is selected.

- Post Function: Enter a JavaScript expression to be run after the extraction. For example replace("-","") would replace a single dash character inside the extracted string.
  - Use JavaScript Editor: Click to display the Script Editor dialog.
- Trim: Select to trim empty characters at the beginning or the end of the field.

Settings for location-based fields in an XML File

These are the settings for location-based fields in an XML file.

- XPath: The path to the XML field that is extracted.
- Use selection: Click to use the value of the current data selection for the extraction.

  Note
  If the selection contains multiple lines, only the first line is selected.

- Post Function: Enter a JavaScript expression to be run after the extraction. For example replace("-","") would replace a single dash character inside the extracted string.
  - Use JavaScript Editor: Click to display the Script Editor dialog.
- Trim: Select to trim empty characters at the beginning or the end of the field.

Data Format

Format settings can be defined in three places: in the user preferences ("Datamapper preferences" on page 703), the current data mapping configuration ("Data format settings" on page 118) and per field via the Step properties. Any format settings specified per field are always used, regardless of the user preferences or data source settings.

Note
Data format settings tell the DataMapper how certain types of data are formatted in the data source. They don't determine how these data are formatted in the Data Model or in a template. In the Data Model, data are converted to the native data type. Dates, for example, are converted to a DateTime object in the Data Model, and will always be shown as "year-month-day" plus the time stamp, for example: 2012-04-11 12.00 AM.

- Negative Sign Before: A negative sign will be displayed before any negative value.
- Decimal Separator: Set the decimal separator for a numerical value.
- Thousand Separator: Set the thousand separator for a numerical value.
- Currency Sign: Set the currency sign for a currency value.
- Date Format: Set the date format for a date value.
- Date Language: Set the date language for a date value (e.g. if English is selected, the term May will be identified as the month of May).
- Treat empty as 0: A numerical empty value is treated as a 0 value.

Order and rename fields dialog

The Order and rename fields dialog displays the extracted fields in the currently selected Extract step. To open it, first select an Extract step on the Steps pane. Then, on the Step properties pane, under Field Definition, click the Order and Rename Fields button next to the Field List drop-down.
Field extractions are executed from top to bottom. In JavaScript-based fields, it is possible to refer to previously extracted fields if they are extracted higher in this list or in previous Extract steps in the extraction workflow.

- Name: The name of the field. Click the field name and enter a new name to rename the field.

  Note
  If you intend to use the field names as metadata in a PlanetPress Workflow process, do not add spaces to field names, as they are not permitted in metadata field names.

- Value: Displays the value of the extracted field in the current Record.
- Remove button: Click to remove the currently selected field.
- Move Up button: Click to move the selected field up one position.
- Move Down button: Click to move the selected field down one position.

Action step properties

The Action step can run multiple specific actions one after the other in order; see "Action step" on page 149 for more information.
The properties of an Action step become visible in the Step properties pane when the Action step is selected on the Steps pane.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.

Actions

This subsection lists all actions executed by the step, and their types.

- Name: A name by which to refer to the action. This name has no impact on functionality.
- Type:
  - Set property: Sets the value of a record property which was created in the Preprocessor step (see "Preprocessor step" on page 140).
  - Run JavaScript: Runs a JavaScript expression, giving much more flexibility over the extraction process (a short sketch follows this list).
  - Stop Processing Record: When this option is selected, the extraction workflow stops processing the current record and moves on to the next one. If fields were already extracted prior to encountering the Action step, then those fields are stored as usual. If no fields were extracted prior to encountering the Action step, then no trace of the record is saved in the database at run time.
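As a sketch of what a Run JavaScript action could do, the example below flags records produced by a particular Workflow process in a record property. It assumes that a Boolean record property named Rush (scope: Each Record) was created in the Preprocessor step, and that record properties are exposed to scripts as sourceRecord.properties; both the property name and that accessor are assumptions to verify against the "DataMapper Scripts API" on page 252. The automation object is described under the Preprocessor step properties above.

    // Run JavaScript action sketch: mark records coming from the "RushInvoices" process.
    // The process name and the Rush property are hypothetical; sourceRecord.properties
    // as the accessor for record properties is an assumption to verify.
    if (automation.properties.ProcessName == "RushInvoices") {
        sourceRecord.properties.Rush = true;
    }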
Set Property

Text and PDF Files

- Property: Displays a list of record properties set in the Preprocessor step (see "Preprocessor step" on page 140).
- Type: Displays the type of the property. This is a read-only field.
- Based on: Determines the origin of the data.
  - Location: The contents of the data selection set below will be the value of the extracted field. The data selection settings are different depending on the data sample type.
    - Left: Defines the start of the data selection to extract.
    - Right: Defines the end of the data selection to extract.
    - Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
    - Height: The height of the selection box.
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - JavaScript: The result of the JavaScript Expression written below the drop-down will be the value of the extracted field. If the expression contains multiple lines, the last value attribution (variable = "value";) will be the value. See "DataMapper Scripts API" on page 252.
    - Expression: The JavaScript expression to run.
    - Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 255 and "DataMapper Scripts API" on page 252).
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

- Data Format: Data format settings tell the DataMapper how certain types of data are formatted in the data source. Make sure that this format matches the actual format of the data in the data source.
  - Negative Sign Before: A negative sign will be displayed before any negative value.
  - Decimal Separator: Set the decimal separator for a numerical value.
  - Thousand Separator: Set the thousand separator for a numerical value.
  - Currency Sign: Set the currency sign for a currency value.
  - Date Format: Set the date format for a date value.
  - Date Language: Set the date language for a date value (e.g. if English is selected, the term May will be identified as the month of May).
  - Treat empty as 0: A numerical empty value is treated as a 0 value.

CSV and Database Files

- Property: Displays a list of record properties set in the Preprocessor step (see "Preprocessor step" on page 140).
- Type: Displays the type of the property. This is a read-only field.
- Based on: Determines the origin of the data.
  - Location: The contents of the data selection set below will be the value of the extracted field. The data selection settings are different depending on the data sample type.
    - Column: Drop-down listing all fields in the Data Sample, of which the value will be used.
    - Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - JavaScript: The result of the JavaScript Expression written below the drop-down will be the value of the extracted field.
    If the expression contains multiple lines, the last value attribution (variable = "value";) will be the value. See "DataMapper Scripts API" on page 252.
    - Expression: The JavaScript expression to run.
    - Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 255 and "DataMapper Scripts API" on page 252).
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

- Data Format: Data format settings tell the DataMapper how certain types of data are formatted in the data source. Make sure that this format matches the actual format of the data in the data source.
  - Negative Sign Before: A negative sign will be displayed before any negative value.
  - Decimal Separator: Set the decimal separator for a numerical value.
  - Thousand Separator: Set the thousand separator for a numerical value.
  - Currency Sign: Set the currency sign for a currency value.
  - Date Format: Set the date format for a date value.
  - Date Language: Set the date language for a date value (e.g. if English is selected, the term May will be identified as the month of May).
  - Treat empty as 0: A numerical empty value is treated as a 0 value.

XML File

- Property: Displays a list of Source Record properties set in the Preprocessor step (see "Preprocessor step" on page 140).
- Type: Displays the type of the Source Record property. This is a read-only field.
- Based on: Determines the origin of the data.
  - Location: The contents of the data selection set below will be the value of the extracted field. The data selection settings are different depending on the data sample type.
    - XPath: The path to the XML field that is extracted.
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - JavaScript: The result of the JavaScript Expression written below the drop-down will be the value of the extracted field. If the expression contains multiple lines, the last value attribution (variable = "value";) will be the value. See "DataMapper Scripts API" on page 252.
    - Expression: The JavaScript expression to run.
    - Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 255 and "DataMapper Scripts API" on page 252).
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
    - Use selection: Click to use the value of the current data selection for the extraction.

      Note
      If the selection contains multiple lines, only the first line is selected.

- Data Format: Data format settings tell the DataMapper how certain types of data are formatted in the data source. Make sure that this format matches the actual format of the data in the data source.
  - Negative Sign Before: A negative sign will be displayed before any negative value.
  - Decimal Separator: Set the decimal separator for a numerical value.
  - Thousand Separator: Set the thousand separator for a numerical value.
  - Currency Sign: Set the currency sign for a currency value.
  - Date Format: Set the date format for a date value.
  - Date Language: Set the date language for a date value (e.g. if English is selected, the term May will be identified as the month of May).
  - Treat empty as 0: A numerical empty value is treated as a 0 value.

Run JavaScript

Running a JavaScript expression offers many possibilities. The script could, for example, set record properties and field values using advanced expressions and complex mathematical operations and calculations.

- Expression: The JavaScript expression to run (see "DataMapper Scripts API" on page 252).
- Use JavaScript Editor: Click to display the Edit Script dialog.
- Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
- Use selection: Click to use the value of the current data selection.

  Note
  If the selection contains multiple lines, only the first line is selected.

Repeat step properties

The Repeat step adds a loop to the extraction workflow; see "Steps" on page 140 and "Extracting transactional data" on page 124.
The properties described below become visible in the Step properties pane when the Repeat step is selected in the Steps pane.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.

Repeat Definition

- Repeat type:
  - While statement is true: The loop executes while the statement below is true. The statement is evaluated after the loop, so the loop will always run at least once.
  - Until statement is true: The loop executes until the statement below is true. The statement is evaluated before the loop, so the loop may not run at all.
  - Until no more elements (for Text, CSV, Database and PDF files only): The loop executes as long as there are elements left, as selected below.
  - For Each (for XML files only): The loop executes for all nodes on a specified level.

    Note
    When using an XML For Each loop, it is not necessary to skip to the repeating node or to have a Goto step to jump to each sibling, as this loop takes care of it automatically.

- Maximum iterations on each line: Defines the maximum number of iterations occurring at the same position. This expression is evaluated once when entering the loop. The value returned by the expression must be an integer higher than 0.
  - Use JavaScript Editor: Click to display the Edit Script dialog.

Rule Tree

The Rule tree subsection displays the full combination of rules (defined below under Condition) as a tree, which gives an overview of how the conditions work together as well as the result of each of these conditions for the current record or iteration.

Condition

First, the Condition List displays the conditions in list form, instead of the tree form above. Two buttons are available next to the list:

- Add condition: Click to create a new condition in the list. This will always branch the current condition as an "AND" operator.
- Delete condition: Delete the currently selected condition.

To rename a condition, double-click its name in the Rule tree subsection.
Conditions are made by comparison of two operands using a specific Operator.

Note
Both the Left and Right operands have the same properties.

Text and PDF Files

- Based On:
  - Position: The data in the specified position for the comparison.
    - Left: The start position for the data selection. Note that conditions are evaluated on the current line, either at the current cursor position or on the current line in a Repeat step.
    - Right: The end position for the data selection.
    - Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
    - Height: The height of the selection box.
    - Use Selection: Click to use the value of the current data selection for the extraction.
    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - Value: A specified static text value.
    - Value: The text value to use in the comparison.
    - Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used.
  - Field: The contents of a specific field in the Extracted Record.
    - Field: The Extracted Record field to use in the comparison.
  - JavaScript: The result of a JavaScript Expression.
    - Expression: The JavaScript line that is evaluated. Note that the last value attribution to a variable is the one used as the result of the expression.
    - Use JavaScript Editor: Click to display the Edit Script dialog.
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
  - Data Property: The value of a data-level property set in the Preprocessor step (see "Preprocessor step" on page 140).
  - Record Property: One of the local variables that you can create and that are reset for each document, as opposed to data variables, which are global because they are initialized only once at the beginning of each job.
  - Automation Property: The current value of a Document-level property set in the Preprocessor step (see "Preprocessor step" on page 140).
  - Extractor Property: The value of an internal extractor variable:
    - Counter: The value of the current counter iteration in a Repeat step.
    - Vertical Position: The current vertical position on the page, either in Measure (PDF) or Line (Text and CSV).
- Operators:
  - is equal to: The two specified values must be identical for the condition to be True.
  - contains: The first specified value must contain the second one for the condition to be True.
  - is less than: The first specified value must be smaller, numerically, than the second value for the condition to be True.
  - is greater than: The first specified value must be larger, numerically, than the second value for the condition to be True.
  - is empty: The first specified value must be empty. With this operator, there is no second value.
  - Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty.

CSV and Database Files

- Based On:
  - Position: The data in the specified position for the comparison.
    - Column: Drop-down listing all fields in the Data Sample, of which the value will be used.
    - Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
    - Use Selection: Click to use the value of the current data selection for the extraction.
    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - Value: A specified static text value.
    - Value: The text value to use in the comparison.
    - Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used.
  - Field: The contents of a specific field in the Extracted Record.
    - Field: The Extracted Record field to use in the comparison.
  - JavaScript: The result of a JavaScript Expression.
    - Expression: The JavaScript line that is evaluated. Note that the last value attribution to a variable is the one used as the result of the expression.
    - Use JavaScript Editor: Click to display the Edit Script dialog.
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
  - Data Property: The value of a data-level property set in the Preprocessor step.
  - Record Property: One of the local variables that you can create and that are reset for each document, as opposed to data variables, which are global because they are initialized only once at the beginning of each job.
  - Automation Property: The current value of a Document-level property set in the Preprocessor step.
  - Extractor Property: The value of an internal extractor variable:
    - Counter: The value of the current counter iteration in a Repeat step.
    - Vertical Position: The current vertical position on the page, either in Measure (PDF) or Line (Text and CSV).
- Operators:
  - is equal to: The two specified values must be identical for the condition to be True.
  - contains: The first specified value must contain the second one for the condition to be True.
  - is less than: The first specified value must be smaller, numerically, than the second value for the condition to be True.
  - is greater than: The first specified value must be larger, numerically, than the second value for the condition to be True.
  - is empty: The first specified value must be empty. With this operator, there is no second value.
  - Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty.

XML Files

- Based On:
  - Position: The data in the specified position for the comparison.
    - XPath: The path to the XML field that is extracted.
    - Use Selection: Click to use the value of the current data selection for the extraction.
    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - Value: A specified static text value.
    - Value: The text value to use in the comparison.
    - Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used.
  - Field: The contents of a specific field in the Extracted Record.
    - Field: The Extracted Record field to use in the comparison.
  - JavaScript: The result of a JavaScript Expression.
    - Expression: The JavaScript line that is evaluated. Note that the last value attribution to a variable is the one used as the result of the expression.
    - Use JavaScript Editor: Click to display the Edit Script dialog.
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
  - Data Property: The value of a data-level property set in the Preprocessor step.
  - Record Property: One of the local variables that you can create and that are reset for each document, as opposed to data variables, which are global because they are initialized only once at the beginning of each job.
  - Automation Property: The current value of a Document-level property set in the Preprocessor step.
  - Extractor Property: The value of an internal extractor variable:
    - Counter: The value of the current counter iteration in a Repeat step.
    - Vertical Position: The current vertical position on the page, either in Measure (PDF) or Line (Text and CSV).
- Operators:
  - is equal to: The two specified values must be identical for the condition to be True.
  - contains: The first specified value must contain the second one for the condition to be True.
  - is less than: The first specified value must be smaller, numerically, than the second value for the condition to be True.
  - is greater than: The first specified value must be larger, numerically, than the second value for the condition to be True.
  - is empty: The first specified value must be empty. With this operator, there is no second value.
  - Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty.

Condition step properties

A Condition step is used when the data extraction must be based on specific criteria. See "Condition step" on page 145 for more information.
The properties of a Condition step become visible in the Step properties pane when the Condition step is selected on the Steps pane.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.

Rule tree

The Rule tree subsection displays the full combination of rules (defined below under Condition) as a tree, which gives an overview of how the conditions work together as well as the result of each of these conditions for the current record or iteration.

- To rename a rule, double-click its name in the Rule tree subsection.
- To change the way rules are combined, right-click "AND". Select OR or XOR instead. XOR means one or the other, but not both.

Condition

First, the Condition List displays the conditions in list form, instead of the tree form above. Two buttons are available next to the list:

- Add condition: Click to add a new rule. This will always branch the current condition as an "AND" operator.
- Delete condition: Delete the currently selected condition.

Conditions are made by comparison of two operands using a specific Operator.

Note
Both the Left and Right operands have the same properties.

- Based On:
  - Position: The data in the specified position for the comparison.
    - Left (Txt and PDF only): The start position for the data selection. Note that conditions are evaluated on the current line, either at the current cursor position or on the current line in a Repeat step.
    - Right (Txt and PDF only): The end position for the data selection.
    - Height (Txt and PDF only): The height of the selection box.
    - Column (CSV and Database only): Drop-down listing all fields in the Data Sample, of which the value will be used.
    - XPath (XML only): The path to the XML field that is extracted.
    - Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
    - Use Selection: Click to use the value of the current data selection for the extraction.
    - Trim: Select to trim empty characters at the beginning or the end of the field.
  - Value: A specified static text value.
    - Value: The text value to use in the comparison.
    - Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used.
  - Field: The contents of a specific field in the Extracted Record.
    - Field: The Extracted Record field to use in the comparison.
  - JavaScript: The result of a JavaScript Expression.
    - Expression: The JavaScript line that is evaluated. Note that the last value attribution to a variable is the one used as the result of the expression.
    - Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 255).
    - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
  - Data Property: The value of a data-level property set in the Preprocessor step (see "Preprocessor step" on page 140).
  - Record Property: One of the local variables that you can create and that are reset for each document, as opposed to data variables, which are global because they are initialized only once at the beginning of each job.
  - Automation Property: The current value of a Document-level property set in the Preprocessor step (see "Preprocessor step" on page 140).
  - Extractor Property: The value of an internal extractor variable:
    - Counter: The value of the current counter iteration in a Repeat step.
    - Vertical Position: The current vertical position on the page, either in Measure (PDF) or Line (Text and CSV).
- Operators:
  - is equal to: The two specified values must be identical for the condition to be True.
  - contains: The first specified value must contain the second one for the condition to be True.
  - is less than: The first specified value must be smaller, numerically, than the second value for the condition to be True.
  - is greater than: The first specified value must be larger, numerically, than the second value for the condition to be True.
  - is empty: The first specified value must be empty. With this operator, there is no second value.
  - Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty.

Multiple Conditions step properties

The Multiple Conditions step contains a number of Case conditions (one to start with) and a Default, to be executed when none of the other cases apply. Cases are executed from left to right. For more information see "Steps" on page 140.
The properties described below become visible in the Step properties pane when the Multiple Conditions step is selected in the Steps pane.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.

Condition

Left operand, Right operand

The Left and Right operand can be Based on:

- Position: The data in the specified position for the comparison.
  - Left (Txt and PDF only): The start position for the data selection. Note that conditions are evaluated on the current line, either at the current cursor position or on the current line in a Repeat step.
  - Right (Txt and PDF only): The end position for the data selection.
  - Height (Txt and PDF only): The height of the selection box.
  - Column (CSV and Database only): Drop-down listing all fields in the Data Sample, of which the value will be used.
  - XPath (XML only): The path to the XML field that is extracted.
  - Top offset: The vertical offset from the current pointer location in the Data Sample (Viewer).
  - Use Selection: Click to use the value of the current data selection for the extraction.
  - Trim: Select to trim empty characters at the beginning or the end of the field.
- Value: A specified static text value.
  - Value: The text value to use in the comparison.
  - Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used.
- Field: The contents of a specific field in the Extracted Record.
  - Field: The Extracted Record field to use in the comparison.
- JavaScript: The result of a JavaScript Expression.
  - Expression: The JavaScript line that is evaluated. Note that the last value attribution to a variable is the one used as the result of the expression. See also: "DataMapper Scripts API" on page 252.
  - Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 255).
  - Use selected text: Inserts the text in the current data selection in the JavaScript Expression. If multiple lines or elements are selected, only the first one is used.
- Data Property: The value of a data-level property set in the Preprocessor step (see "Steps" on page 140).
- Record Property: One of the local variables that you can create and that are reset for each document, as opposed to data variables, which are global because they are initialized only once at the beginning of each job.
- Automation Property: The current value of a Document-level property set in the Preprocessor step (see "Steps" on page 140).
- Extractor Property: The value of an internal extractor variable:
  - Counter: The value of the current counter iteration in a Repeat step.
  - Vertical Position: The current vertical position on the page, either in Measure (PDF) or Line (Text and CSV).

Condition

The Condition drop-down displays the cases in list form. Three buttons are available next to the list:

- Add case: Click to add a new case to the step. It will be placed next to any existing cases.
- Remove case: Delete the currently selected case.
- Order Cases: Under the Name column, select a case, then click one of the buttons on the right (Delete, Move Up, Move Down) to delete or change the order of the cases in the list.

Operators

Case conditions are made by comparison of the two operands, left and right, using a specific Operator.

- is equal to: The two specified values must be identical for the condition to be True.
- contains: The first specified value must contain the second one for the condition to be True.
- is less than: The first specified value must be smaller, numerically, than the second value for the condition to be True.
- is greater than: The first specified value must be larger, numerically, than the second value for the condition to be True.
- is empty: The first specified value must be empty. With this operator, there is no second value.
- Invert condition: Inverts the result of the condition. For instance, is empty becomes is not empty.

Goto step properties

The Goto step moves the pointer within the source data to a position that is relative to the top of the record or to the current position. See also: "Steps" on page 140.
The properties of the Goto step described in this topic become visible in the Step properties pane when you select the Goto step on the Steps pane.

Description

This subsection is collapsed by default in the interface, to give more screen space to other important parts.
Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane.
Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane.
Goto Definition With each type of source data, the movement of the cursor is defined in a specific way. Text file l Target Type: Defines the type of jump. l Line: Jumps a certain number of lines or to a specific line. l l l l Current Position: The Goto begins at the current cursor position. l Top of record: The Goto begins at line 1 of the source record. Move by: Enter the number of lines or pages to jump. Page: Jumps between pages or to a specific page. l l l From: Defines where the jump begins: l Current Position: The Goto begins at the current cursor position. l Top of record: The Goto begins at line 1 of the source record. Move by: Enter the number of lines or pages to jump. Next line with content: Jumps to the next line that has contents, either anywhere on the line or in specific columns. l Inspect entire page width: When checked, the Next line with content and Next occurrence of options will look anywhere on the line. If unchecked, options appear below to specify in which area of each line the Goto step checks: l Left: The starting column, inclusively. l Right: The end column, inclusively. l l Use selection: Click while a selection is made in the Data Viewer to automatically set the left and right values to the left and right edges of the selection. Next occurrence of: Jumps to the next occurrence of specific text or a text pattern, either anywhere on the line or in specific columns. l Inspect entire page width: When checked, the Next line with content and Next occurrence of options will look anywhere on the line. If unchecked, options appear below to specify in which area of each line the Goto step checks: l Left: The starting column, inclusively. l Right: The end column, inclusively. l l l l Use selection: Click while a selection is made in the Data Viewer to automatically set the left and right values to the left and right edges of the selection. Expression: Enter the text or Regex expression to look for on the page. Use selection: Click while a selection is made in the Data Viewer to copy the contents of the first line of the selection into the Expression box. Use regular expression: Check so that the Expression box is treated as a regular expression instead of static text. For more information on using Regular Expressions (regex), see the Regular-Expressions.info Tutorial. PDF File l Target Type: Defines the type of jump. l Physical distance: l l From: Defines where the jump begins: l Current Position: The Goto begins at the current cursor position. l Top of record: The Goto begins at line 1 of the source record. Move by: Enter the distance to jump. l Page: Jumps between pages or to a specific page. l l l From: Defines where the jump begins: l Current Position: The Goto begins at the current cursor position. l Top of record: The Goto begins at line 1 of the source record. Move by: Enter the number of pages to jump. Next line with content: Jumps to the next line that has contents, either anywhere on the line or in specific columns. l Inspect entire page width: When checked, the Next line with content and Next occurrence of options will look anywhere on the line. If unchecked, options appear below to specify in which area of each line the Goto step checks: l Left: The starting column, inclusively. l Right: The end column, inclusively. l l Use selection: Click while a selection is made in the Data Viewer to automatically set the left and right values to the left and right edges of the selection.
Next occurrence of: Jumps to the next occurrence of specific text or a text pattern, either anywhere on the line or in specific columns. l Inspect entire page width: When checked, the Next line with content and Next occurrence of options will look anywhere on the line. If unchecked, options appear below to specify in which area of each line the Goto step checks: l Left: The starting column, inclusively. l Right: The end column, inclusively. l l l Use selection: Click while a selection is made in the Data Viewer to automatically set the left and right values to the left and right edges of the selection. Expression: Enter the text or Regex expression to look for on the page. Use selection: Click while a selection is made in the Data Viewer to copy the contents of the first line of the selection into the Expression box. l Use regular expression: Check so that the Expression box is treated as a regular expression instead of static text. For more information on using Regular Expressions (regex), see the Regular-Expressions.info Tutorial. CSV File l From (CSV files): Defines where the jump begins: l Current Position: The Goto begins at the current cursor position. l l Move by: Enter the number of lines or pages to jump. Top of record: The Goto begins at line 1 of the source record. l Move to: Enter the number of lines or pages to jump. XML File l Destination (XML files): Defines what type of jump to make: l l l l l Sibling element: Jumps the number of siblings (nodes at the same level) defined in the Move by option. Sibling element with same name: Jumps the number of same-name siblings (nodes at the same level that have the same name) defined in the Move by option. Element, from top of record: Jumps to the specified node. The XPATH in the Absolute XPATH option starts from the root node defined by /. Element from current position: Jumps to a position relative to the current position of the cursor. The XPATH in the Relative XPATH option defines where to go: ../ goes up a level, ./ refers to the current level. Level Up/Down: Jumps up or down one node level (up to the parent, down to a child). The number of levels to change is defined in the Move by option. Postprocessor step properties The Postprocessor step does not run for every Source Record in the Data Sample. It runs once, at the end of the Steps, after all records have been processed. For more information see "Postprocessor step" on page 150. The properties described below become visible in the Step properties pane when the Postprocessor step is selected in the Steps pane. Description This subsection is collapsed by default in the interface, to give more screen space to other important parts. Name: The name of the step. This name will be displayed on top of the step's icon in the Steps pane. Comments: The text entered here will be displayed in the tooltip that appears when hovering over the step in the Steps pane. Postprocessor The Postprocessor subsection defines what postprocessors run on the Data Sample at the end of the data mapping workflow. Each Postprocessor runs in turn, using the result of the previous one as an input. l Name: The name to identify the Postprocessor. l Type: The type of Postprocessor. Currently there is a single type available. l JavaScript: Runs a JavaScript Expression to modify the Data Sample. See "DataMapper Scripts API" on page 252. l l Use JavaScript Editor: Click to display the Edit Script dialog (see "Using scripts in the DataMapper" on page 255).
Add Postprocessor: Click to add a new Postprocessor. Its settings can be modified once it is added. l Remove Postprocessor: Click to remove the currently selected Postprocessor. l Move Up: Click to move the Postprocessor up one position. l Move Down: Click to move the Postprocessor down one position. l Export: Click to export the current Postprocessor configuration and content to a file. l Import: Click to import a Postprocessor configuration and content from an external file. Page 248 Postprocessor definition JavaScript l l l Expression: The JavaScript expression that will run on the Data Sample. See "DataMapper Scripts API" on page 252. Use JavaScript Editor: Click to display the Script Editor dialog. Use selected text: Uses the text in the current data selection as the Value. If multiple lines or elements are selected, only the first one is used. Toolbar In the DataMapper module, the following buttons are available in the top toolbar. File manipulation l l l New : Displays the New wizard where a new data mapping configuration or a new template can be created. Open : Displays the Open dialog to open an existing data mapping configuration. Save : Saves the current data mapping configuration. If the configuration has never been saved, the Save As... dialog is displayed. Step manipulation Note All steps except the Action step require an active data selection in the Data Viewer (see "Selecting data" on page 122 and "The Data Viewer" on page 200). l l Add Extract Step : Adds an Extract Step with one or more extract fields. If more than one line or field is selected in the Data Viewer, each line or field will have an extract field. Add Goto Step : Adds a Goto step that moves the selection pointer to the beginning of the data selection. For instance if an XML node is selected, the pointer moves to where that node is located. Page 249 l l l l l l l l l l l l Add Condition Step : Adds a condition based on the current data selection. The "True" branch gets run when the text is found on the page. Other conditions are available in the step properties once it has been added. Add Repeat Step : Adds a loop that is based on the current data selection, and depending on the type of data. XML data will loop on the currently selected node, CSV loops for all rows in the record. In Text and PDF data, if the data selection is on the same line as the cursor position, the loop will be for each line until the end of the record. If the data selection is on a lower line, the loop will be for each line until the text in the data selection is found at the specified position on the line (e.g. until "TOTAL" is found). Add Extract Field : Adds the data selection to the selected Extract step, if an extract step is currently selected. If multiple lines, nodes or fields are selected, multiple extract fields are added simultaneously. Add Multiple Conditions : Adds a condition that splits into multiple case conditions. Add Action Step : Adds a step to create a custom JavaScript snippet. See "DataMapper Scripts API" on page 252 for more details. Cut Step : Removes the currently selected step and places it in the clipboard. If the step is a Repeat or a Condition, all steps under it are also placed in the clipboard. If there is already a step in the clipboard, it will be overwritten. Copy Step : Places a copy of the currently selected step in the clipboard. The same details as the Cut step applies. Paste Step : Takes the step or steps in the clipboard and places them after the currently selected step. 
Delete Step : Deletes the currently selected step. If the step is a Repeat or Condition, all steps under it are also deleted. Ignore Step : Click to set the step to be ignored (aka disabled). Disabled steps do not run when in DataMapper and do not execute when the data mapping configuration is executed in Workflow. However, they can still be modified normally. Validate All Records : Runs the process on all records and verifies that no errors are present in any of the records. Errors are displayed in the Messages pane ("Messages pane" on page 202). Add Data Sample : Displays a dialog to open a new Data Source to add it as a Data Sample in the data mapping configuration. Data Samples are visible in the Settings pane ( "Settings pane" on page 203). Page 250 Welcome Screen The Welcome Screen appears when first starting up PlanetPress Connect. It offers some useful shortcuts to resources and to recent documents and data mapping configurations. If you are new to PlanetPress Connect and you don't know where to start, see "Welcome to PlanetPress Connect 1.8" on page 14. The Welcome Screen can be brought back in two ways: l The Welcome Screen button in the "Toolbars" on page 771. l From the Menus in Help, Welcome Screen. Contents l Activation: Click to open the Objectif Lune Web Activation Manager. l Release Notes: Opens the current Release Notes for PlanetPress Connect. l Website: Opens the PlanetPress Connect website. l Take A Tour: Click to open the YouTube Playlist giving you a tour of the software. l Use the DataMapper to...: l l l l Create a New Configuration: Opens the Creating a New Configuration screen. Open an Existing Configuration: Click to open the standard Browse dialog to open an existing data mapping configuration. Recent Configurations: Lists recently used configurations. Click any configuration to open it in the DataMapper module. Use the Designer to...: l l l Create a New Template: Lets you choose a Context to create a new template without a Wizard. Browse Template Wizards: Displays a list of available Template Wizards, producing premade templates with existing demo content; see "Creating a template" on page 304. Open an Existing Template: Click to open the standard Browse dialog to open an existing template. Page 251 l l Recent Templates: Lists recently used templates. Click any template to open it in the Designer module. Other Resources: l Documentation: Opens this documentation. l Courses (OL Learn): Opens the Objectif Lune e-Learning Center. l User Forums: Opens the Questions & Answer forums. DataMapper Scripts API This page describes the different features available in scripts created inside DataMapper. See "Using scripts in the DataMapper" on page 255. Objects Name Description Available in scripts of type "Objects" on page 262 A ScriptableAutomation object encapsulating the properties of the PlanetPress Workflow process that triggered the current operation. Boundaries, all steps except Goto "boundaries" on page 264 An object encapsulating properties and methods allowing to define the boundaries of each document in the job. Boundaries "data" on page 269 A data object encapsulating properties and methods pertaining to the original data stream. Boundaries, all steps except Goto "db" on page 282 An object that allows to connect to a database. Boundaries, all steps except Goto Page 252 Name Description Available in scripts of type "logger" on page 283 An object that allows to log error, warning or informational messages. 
Boundaries, all steps except Goto "record" on page 283 The current record in the main data set. Extract, Condition, Repeat and Multiple Conditions steps "region" on page 284 An object that defines a subsection of the input data. Boundaries "sourceRecord" on page 286 An object containing properties specific to the current source record being processed. Boundaries, all steps except Goto and Postprocessor "steps" on page 287 An object encapsulating properties and methods pertaining to the current data mapping configuration. Extract, Condition, Repeat and Multiple Conditions steps Functions These functions are available in Boudaries and Steps scripts. Page 253 Name Description copyFile() Copies a file to the target file path, replacing it if it already exists. "createHTTPRequest()" on page 294 Creates a new HTTP Request Object. createTmpFile() Creates a file with a unique name in the temporary work folder and returns a file object. deleteFile() Deletes a file. execute() Calls an external program and waits for it to end. newByteArray() Returns a new byte array. newCharArray() Returns a character array. newDoubleArray() Returns a double array. newFloatArray() Returns a float array. newIntArray() Returns an integer array. newLongArray() Returns a long array. newStringArray() Returns a string array. openBinaryReader() Opens a file as a binary file for reading purposes. openBinaryWriter() Opens a file as a binary file for writing purposes. openTextReader() Opens a file as a text file for reading purposes. openTextWriter() Opens a file as a text file for writing purposes. Page 254 Using scripts in the DataMapper In the DataMapper every part of the extraction process can be customized using scripts. A script can be used to set boundaries for a data source (see "Setting boundaries using JavaScript" on page 257). The script determines where a new record starts. Scripts can also be used in different steps in the extraction workflow. You can: l l l l l l Modify the incoming data prior to executing the rest of the extraction workflow, via a Preprocessor (see "Preprocessor step" on page 140). Edit extracted data in a field of the Data Model using a Post function script (entered on the Step properties pane, under Field Definition; see "Modifying extracted data" on page 159 and "Settings for location-based fields in a Text file" on page 221). Enter a script in a JavaScript-based field (see "JavaScript-based field" on page 156). Note that the last value attribution to a variable is the one used as the result of the expression. It is possible to refer to previously extracted fields if they are extracted higher in this list or in previous Extract steps in the extraction workflow. Let an Action step run a JavaScript, or use JavaScript to add a value to a property defined in the Preprocessor step. Change the left and right operands in a Condition step to a JavaScript expression. (On the Step properties pane, under Condition, set Based on to Javascript; see "Condition step properties" on page 238 and "Left operand, Right operand" on page 241.) Further process the resulting record set after the entire extraction workflow has been executed, via a Postprocessor (see "Postprocessor step" on page 150). The script can always be written directly in a small script area or in the Edit script dialog. To invoke this dialog click the Use JavaScript Editor button . Tip In the Edit script dialog, press Ctrl + Space to bring up the list of available JavaScript objects and functions (see Datamapper API). 
Use the arrow keys to select a function or object and press enter to insert it. Type a dot after the name of the function or object to see which features are Page 255 subsequently available. Keyboard shortcuts for the script editor are listed in the following topic: "Keyboard shortcuts" on page 738. Syntax rules In the DataMapper, all scripts must be written in JavaScript, following JavaScript syntax rules. For example, each statement should end with ; and the keywords that can be used, such as var to declare a variable, are JavaScript keywords. There are countless tutorials available on the Internet to familiarize yourself with the JavaScript syntax. For a simple script all that you need to know can be found on the following web pages: http://www.w3schools.com/js/js_syntax.asp and http://www.w3schools.com/js/js_if_else.asp. A complete JavaScript guide for beginners can be found here: https://developer.mozilla.org/en-US/docs/Web/JavaScript. DataMapper API Certain features that can be used in a DataMapper script do not exist in the native JavaScript library. These are additional JavaScript features, designed for use in Connect only. All features designed for use in the DataMapper are listed in the DataMapper API (see DataMapper API). External JavaScript libraries The External JS Libraries box on the Settings pane lets you add JavaScript libraries to your configuration and displays all the libraries that have been imported (see "Settings pane" on page 203). You can use JavaScript libraries to add more JavaScript functionality to your data mapping configuration. Any functions included in a JavaScript library that is imported in a data mapping configuration will be available in Preprocessor scripts as well as Action tasks, Post functions and JavaScript-based extraction steps. Take the following JavaScript function, for example: function myAddFunction(p1, p2) { return p1 + p2; }; Page 256 If this is saved as myFunction.js and imported, then the following would work anywhere in the configuration: var result = myAddFunction(25, 12); // returns 37! Setting boundaries using JavaScript As soon as you select the On Script option as the trigger for establishing record boundaries (see "Record boundaries" on page 117), you are instructing the DataMapper to read the source file sequentially and to trigger an event each and every time it hits a delimiter. (What a delimiter is, depends on the source data and the settings for that data; see "Input data settings (Delimiters)" on page 115). In other words, the script will be executed - by default - as many times as there are delimiters in the input data. If you know, for instance, that a PDF file only contains documents that are 3 pages long, your script could keep count of the number of times it's been called since the last boundary was set (that is, the count of delimiters that have been encountered). Each time the count is a multiple of 3, it could set a new record boundary. This is basically what happens when setting the trigger to On Page and specifying 3 as the Number of Pages. Note Remember that a boundary script is being called on each new delimiter encountered by the DataMapper parsing algorithm. If for instance a database query returns a million records, the script will be executing a million times! Craft your script in such a way that it doesn't waste time examining all possible conditions. Instead, it should terminate as soon as any condition it is evaluating is false. 
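By way of illustration, here is a minimal sketch of the three-page scenario described above, assuming the boundary Trigger is set to On script. It only uses the documented boundaries methods; the variable name "pageCount" is purely illustrative and not part of the API.
var count = boundaries.getVariable("pageCount");
if (count == null) {
    count = 0; // first delimiter since the start of the job
}
count++;
if (count % 3 == 0) {
    // Every third delimiter (page), set the boundary on the current delimiter
    boundaries.set();
    count = 0; // restart the count for the next document
}
boundaries.setVariable("pageCount", count);
Because variables stored with setVariable() persist from one call of the boundaries script to the next (unlike native JavaScript variables), the count survives across delimiters.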
Accessing data Data available inside each event Every time a delimiter is encountered, an event is triggered and the script is executed. The event gives the script access to the data between the current location - the start of a row, line or page - and the next delimiter. So at the beginning of the process for a PDF or text file, you have access to the first page only, and for a CSV or for tabular data, that would be the first row or record. This means that you can: Page 257 l Examine the data found in between delimiters for specific conditions. l Examine specific regions of that data, or the available data as a whole. l Compare the contents of one region with another. l Etc. To access this data in the script, use the get() function of the boundaries object. This function expects different parameters depending on the type of source file; see "Example" on page 266. Getting access to other data Data that is not passed with the event, but that is necessary to define the record boundaries, can be stored in the boundaries object using the setVariable function (see "boundaries" on page 264 and "Example" on page 268). The data can be retrieved using the boundaries' getVariable function (see "getVariable()" on page 266). This way the script can access values that were evaluated in previous pages or rows, across delimiters, so you can easily set record boundaries that span over multiple delimiters. For more information on the syntax, please refer to "DataMapper Scripts API" on page 252. Examples Basic example using a CSV file ​Imagine you are a classic rock fan and you want to extract the data from a CSV listing of all the albums in your collection. Your goal is to extract records that change whenever the artist OR the release year changes. Here's what the CSV looks like: "Artist","Album","Released" "Beatles","Abbey Road",1969 "Beatles","Yellow Submarine",1969 "Led Zeppelin","Led Zeppelin 1",1969 "Led Zeppelin","Led Zeppelin 2",1969 "Beatles","Let it be",1969 "Rolling Stones","Let it bleed",1969 "Led Zeppelin","Led Zeppelin 3",1970 "Led Zeppelin","Led Zeppelin 4",1971 "Rolling Stones","Sticky Fingers",1971 Page 258 Note The first line is just the header with the names of the CSV columns. The data is already sorted per year, per artist, and per album. ​Your goal is to examine two values in each CSV record and to act when either changes. The DataMapper GUI allows you to specify a On Change trigger, but you can only specify a single field. So for instance, if you were to set the record boundary when the "Released" field changes, you'd get the first four lines together inside a single record. That's not what you want since that would include albums from several different artists. And if you were to set it when the "Artist" field changes, the first few records would be OK but near the end, you'd get both the Led Zeppelin 3 and led Zeppelin 4 albums inside the same record, even though they were released in different years. Essentially, we need to combine both these conditions and set the record boundary when EITHER the year OR the artist changes. 
​Here's what the script would look like:​ /* Read the values of both columns we want to ​check */ var zeBand = boundaries.get(region.createRegion("Artist")); var zeYear = boundaries.get(region.createRegion("Released")); /* Check that at least one of our variables holding previous values has been initialized already, before attempting to compare the values */ if (boundaries.getVariable("lastBand")!=null) { if (zeBand[0] != boundaries.getVariable("lastBand") || zeYear[0] != boundaries.getVariable("lastYear") ) { boundaries.set(); } } boundaries.setVariable("lastBand",zeBand[0]); boundaries.setVariable("lastYear",zeYear[0]); l ​The script first reads the two values from the input data, using the createRegion() method (see: "Example" on page 285). For a CSV/database data type, the parameter it expects is Page 259 simply the column name. The region is passed as a parameter to the get() method, which reads its contents and converts it into an array of strings (because any region, even a CSV field, may contain several line​s).​ l l l To "remember" the values that were processed the last time the event was triggered, we use variables that remain available in between events. Note that these variables are specific to the Boundary context and not available in any other scripting context in the DataMapper. The script first checks if those values were initialized. If they weren't, it means this is the first iteration so there's no need to compare the current values with previous values since there have been none yet. But if they have already been initialized, then a condition checks if either field has changed since last time. If that's the case, then a boundary is created through the set() method. ​Finally, the script stores the values it just read in the variables using the setVariables() method. They will therefore become the "last values encountered" until the next event gets fired. When called, setVariables() creates the specified variable if it doesn't already exist and then sets the value to the second parameter passed to the function. You can try it yourself. Paste the data into the text editor of your choice and save the file to Albums.csv. Then create a new DataMapper configuration and load this CSV as your data file. In the Data Input Settings, make sure you specify the first row contains field names and set the Trigger to On script. Then paste the above JavaScript code in the Expression field and click the Apply button to see the result. ​Basic example using a text file This example is similar to the previous example, but now the data source is a plain text file that looks like this: Beatles Abbey Road 1969 Beatles Yellow Submarine 1968 Led Zeppelin Led Zeppelin 1 1969 Led Zeppelin Led Zeppelin 2 1969 Beatles Let it be 1970 Rolling Stones Let it bleed 1969 Led Zeppelin Led Zeppelin 3 1970 Led Zeppelin Led Zeppelin 4 1971 Rolling Stones Sticky Fingers 1971 Page 260 The purpose of the script, again, is to set the record boundary when EITHER the year OR the artist changes. 
The script would look like this:
/* Read the values of both columns we want to check */
var zeBand = boundaries.get(region.createRegion(1,1,30,1));
var zeYear = boundaries.get(region.createRegion(61,1,65,1));
/* Check that at least one of our variables holding previous values has been initialized already, before attempting to compare the values */
if (boundaries.getVariable("lastBand")!=null) {
  if (zeBand[0]!=boundaries.getVariable("lastBand") || zeYear[0]!=boundaries.getVariable("lastYear")) {
    boundaries.set();
  }
}
boundaries.setVariable("lastBand",zeBand[0]);
boundaries.setVariable("lastYear",zeYear[0]);
This script uses the exact same code as used for CSV files, with the exception of the parameters expected by the createRegion() method. The get method adapts to the context (the data source file) and therefore expects different parameters to be passed in order to achieve the same thing. Since a text file does not contain column names as a CSV does, the API expects the text regions to be defined using physical coordinates. In this instance: Left, Top, Right, Bottom. To try this code, paste the data into a text editor and save the file to Albums.txt. Then create a new DataMapper configuration and load this Text file as your data file. In the Data Input Settings, specify On lines as the Page delimiter type with the number of lines set to 1. When you now set the boundary Trigger to On script, the file will be processed line by line (triggering the event on each line). Paste the above code in the JavaScript expression field and click the Apply button to see the result. Note The PDF context also expects physical coordinates, just like the Text context does, but since PDF pages do not have a grid concept of lines and columns, the above parameters would instead be specified in millimeters relative to the upper left corner of each page. So for instance, to create a region for the Year, the code might look like this: region.createRegion(190,20,210,25) which would create a region located near the upper right corner of the page. That's the only similarity, though, since the script for a PDF would have to look through the entire page and probably make multiple extractions on each one since it isn't dealing with single lines like the TXT example given here. For more information on the API syntax, please refer to "DataMapper Scripts API" on page 252. Objects automation Returns a ScriptableAutomation object encapsulating the properties of the PlanetPress Workflow process that triggered the current operation. Note The automation object available in Designer scripts is not of the same type. It has different properties. Properties The following table lists the properties of the automation object. These are available in Boundaries scripts, with all file types. Property Description jobInfo Returns a ScriptableAutomation object containing JobInfo 1 to 9 values from PlanetPress Workflow. properties Returns a ScriptableAutomation object containing additional information (file name, process name and task ID) from PlanetPress Workflow. variables Returns a ScriptableAutomation object containing the list of local and global variables defined by the user in PlanetPress Workflow. Note that there is no way to distinguish local variables from global ones (local variables take precedence over global variables). To be used in the DataMapper, variables must have already been defined in the Preprocessor step as Automation variables.
The Preprocessor step attempts to match variable names passed by the Workflow process to those defined inside the step. Accessing automation properties To make a Workflow variable accessible in scripts, it must first be declared in the Properties of the Preprocessor step (see "Properties" on page 217). Both the name and type of the variable must be the same as the variable in Workflow. The other properties are accessible as they are. Examples To access JobInfo 1 to 9 from Workflow: automation.jobInfo.JobInfo1; To access ProcessName, OriginalFilename or TaskIndex from Workflow: Page 263 automation.properties.OriginalFilename; To access Workflow variables (declared in the Preprocessor properties): automation.variables.Same_as_workflow; boundaries Returns a boundaries object encapsulating properties and methods allowing to define the boundaries of each document in the job. This object is available when triggering document boundaries On script. Properties The following table lists the properties of the boundaries object. Property Return Type currentDelim A read-only 1-based index (number) of the current delimiter in the file. In other words, the Beginning Of File (BOF) delimiter equals 1. It indicates the position of the current delimiter relative to the last document boundary Methods The following table describes the functions of the boundaries object. They are available with all file types. Method Description Script type "find()" on the next page Finds the first occurrence of a string starting from the current position. Boundaries get() Retrieves an array of strings. Preprocessor, Extract, Condition, Repeat, Action, and Postprocessor steps Boundaries Page 264 Method Description Script type getVariable () Retrieves a value of a variable stored in the boundaries object. Boundaries set() Sets a new record boundary. (See: "Record boundaries" on page 117.) Boundaries setVariable () Sets a boundaries variable to the specified value, automatically creating the variable if it doesn't exist yet. Boundaries find() Method of the boundaries object that finds a string in a region of the data source file. The method returns a smaller region which points to the exact location where the match was found. find(stringToFind, in_Region) Finds the string stringToFind in a rectangular region defined by inRegion. stringToFind String to find. in_Region The inRegion can be created prior to the call to find() with the region.createRegion() method. It depends on the type of data source how a region is defined; see "Example" on page 285. The find() method returns a different region object whose range property is adjusted to point to the exact physical location where the match was found. This will always be a subset of the in_Region.range property. It can be used to determine the exact location where the match occurred. Use boundaries.get() to retrieve the actual text from the resulting region; see "Example" on the facing page. Page 265 get() The get() method reads the contents of a region object and converts it into an array of strings (because any region may contain several line​s). How the region is defined, depends on the type of source data; see "region" on page 284 and "Example" on page 285. get(in_Region) in_Region A region object. What type of object this is depends on the type of source data, however in any case the region object can be created with a call to region.createRegion(); see "Example" on page 285. Example This script retrieves all text from the Email_Address field in a CSV or database file. 
boundaries.get(region.createRegion("Email_Address")); getVariable() Method that retrieves the value currently stored in a variable. Note Boundary variables are carried over from one iteration of the Boundaries script to the next, while native JavaScript variables are not. getVariable(varName) varName String name of the variable from which the value is to be retrieved. If the variable does not exist, the value null is returned. It is considered good practice (almost mandatory, even) to always check whether a variable is defined before attempting to access its value. set() Sets a new DataMapper record boundary. Page 266 set(delimiters) delimiters Sets a new record boundary. The delimiters parameter is an offset from the current delimiter, expressed in an integer that represents a number of delimiters. If this parameter is not specified, then a value of 0 is assumed. A value of 0 indicates the record boundary occurs on the current delimiter. A negative value of -n indicates that the record boundary occurred -n delimiters before the current delimiter. A positive value of n indicates that the record boundary occurs +n delimiters after the current delimiter. Note Specifying a positive value not only sets the DataMapper record boundary but it also advances the current delimiter to the specified delimiter. That's where the processing resumes. This allows you to skip some pages/records when you know they do not need to be examined. Negative (or 0) values simply set the boundary without changing the current location. Example This script sets a boundary when the text TOTAL is found on the current page in a PDF file. The number of delimiters is set to 1, so the boundary is set on the next delimiter, which is the start of the next page. if (boundaries.find("TOTAL", region.createRegion (10,10,215,279)).found) { boundaries.set(1); } Assume you want to set record boundaries whenever the text "TOTAL" appears in a specific region of the page of a PDF file, but the PDF file has already been padded with blank pages for duplexing purposes. The boundary should therefore be placed at the end of the page where the match is found if that match occurs on an even page, or at the end of the next blank page, if the match occurs on an odd page. Recall that for PDF files, the natural delimiter is a PDF page. The JavaScript code would look something like the following: var myRegion = region.createRegion(150,220,200,240); if(boundaries.find("TOTAL", myRegion).found) { /* a match was found. Check if we are on a odd or even page and Page 267 set the Boundary accordingly */ if((boundaries.currentDelim % 2) !=0 ) { /* Total is on odd page, let's set the document Boundary on delimiter further, thereby skipping the next blank page */ boundaries.set(1); } else { /* Total is on an even page, set the document Boundary to t current delimiter */ boundaries.set(); } } } setVariable() This method sets a variable in the boundaries to the specified value, automatically creating the variable if it doesn't exist yet. Note Boundary variables are carried over from one iteration of the Boundaries script to the next, while native JavaScript variables are not. setVariable(varName, varValue) Sets variable varName to value varValue. varName String name of the variable of which the value is to be set. varValue Object; value to which the variable has to be set. Example This script examines a specific region and stores its contents in a variable in the boundaries. 
var addressRegion = region.createRegion(10, 30, 100, 50); var addressLines = boundaries.get(addressRegion); boundaries.setVariable("previousLines",addressLines); Page 268 data Returns a data object encapsulating properties and methods pertaining to the original data stream. Properties The following table lists the properties of the data object. Property Description Return type filename The path of the input file. Returns the fully qualified file name of the temporary work file being processed. properties Contains properties declared in the preprocessor step (see Preprocessor Step Properties for details). Returns an array of properties defined in the Preprocessor step with the data scope (i.e. statically set at the start of the job). Methods The following table lists the methods of the data object. Method Description Script type File type "Examples" on page 271 Extracts the text value from a rectangular region. Extract, Condition, Repeat, and Action steps All "extractMeta ()" on page 277 Extracts the value of a metadata field. Extract, Condition, Repeat, and Action steps All "fieldExists ()" on page 277 Method that returns true if the specified metadata field, column or node exists. Boundaries All Preprocessor, Extract, Condition, Repeat, Action, Page 269 Method Description Script type File type and Postprocessor steps "Examples" on page 279 Finds the first occurrence of a string starting from the current position. Boundaries "Examples" on page 282 Finds the first match for a regular expression pattern starting from the current position. Extract, Condition, Repeat, Multiple Conditions and Action steps All Preprocessor, Extract, Condition, Repeat, Action, and Postprocessor steps Text, PDF extract() Extracts the text value from selected data: a node path, column, or rectangular region, depending on the type of data source. This method always returns a String. extract(left, right, verticalOffset, regionHeight, separator) Extracts a value from a position in a text file. Coordinates are expressed as characters (horizontally) or lines (vertically). left Number that represents the distance, measured in characters, from the left edge of the page to the left edge of the rectangular region. The leftmost character is character 1. right Number that represents the distance, measured in characters, from the left edge of the page to the right edge of the rectangular region. verticalOffset Number that represents the current vertical position, measured in lines. regionHeight Page 270 Number that represents the total height of the region, measured in lines. Setting the regionHeight to 0 instructs the DataMapper to extract all lines starting from the given position until the end of the record. Specifying an extraction height that is longer than the number of remaining lines results in a "step out of bound" error message. separator String inserted between all lines returned from the region. If you don't want anything to be inserted between the lines, specify an empty string (""). Tip l l "
" is a very handy string to use as a separator. When the extracted data is inserted in a Designer template, "
" will be interpreted as a line break, because
is a line break in HTML and Designer templates are actually HTML files. Setting the regionHeight to 0 makes it possible to extract a variable number of lines at the end of a record. Examples Example 1: The script command data.extract(1,22,8,1,"
"); means that the left position of the extracted information is located at character 1, the right position at character 22, the offset position is 8 (since the line number is 9) and the regionHeight is 1 (to select only 1 line). Finally, the "
" string is used for concatenation. Page 271 Example 2: The script command data.extract(1,22,9,6,"
"); means that the left position of the extracted information is located at 1, the right position at 22, the offset position is 9 (since the first line number is 10) and the regionHeight is 6 (6 lines are selected). Finally, the "
" string is used for concatenation. Page 272 extract(xPath) Extracts the text value of the specified node in an XML file. xPath String that can be relative to the current location or absolute from the start of the record. Example The script command data.extract('./CUSTOMER/FirstName'); means that the extraction is made on the FirstName node under Customer. Page 273 extract(columnName, rowOffset) Extracts the text value from the specified column and row. columnName String that represents the column name. rowOffset Number that represents the row index (zero-based), relative to the current position. To extract the current row, specify 0 as the rowOffset. Use moveTo() to move the pointer in the source data file (see "Example" on page 290). Example The script command data.extract('ID',0); means that the extraction is made on the ID column in the first row. Page 274 extract(left, right, verticalOffset, lineHeight, separator) Extracts the text value from a rectangular region in a PDF file. All coordinates are expressed in millimeters. left Double that represents the distance from the left edge of the page to the left edge of the rectangular region. right Double that represents the distance from the left edge of the page to the right edge of the rectangular region. verticalOffset Double that represents the distance from the current vertical position. Page 275 lineHeight Double that represents the total height of the region. separator String inserted between all lines returned from the region. If you don't want anything to be inserted between the lines, specify an empty string (""). Tip "
" is a very handy string to use as a separator. When the extracted data is inserted in a Designer template, it will be interpreted as a line break, because
is a line break in HTML and Designer templates are actually HTML files. Example The script command data.extract(4.572,51.815998,37.761333,3.7253342,"
"); means that the left position of the extracted information is located at 4.572mm, the right position at 51.815998mm, the vertical offset is 37.761333mm and the line height is 3.7253342mm. Finally, the "
" string is used for concatenation. Page 276 extractMeta() Method that extracts the value of a metadata field on a certain level in a PDF/VT. This method always return a String. extractMeta(levelName String, propertyName String) levelName String, specifying the PDF/VT's level. Case-sensitive. propertyName String, specifying the metadata field. fieldExists() Method of the data object that returns true if a certain metadata field, column or node exists. (See "data" on page 269.) Page 277 fieldExists(levelName, propertyName) This method returns true if the given metadata field exists at the given level in a PDF file. levelName String that specifies the metadata field. propertyName String that specifies the level. fieldExists(fieldName) This method returns true if the specified column exists in the current record in a CSV file. fieldName String that represents a field name (column) in a CSV file. fieldExists(xPath) This method returns true if the specified node exists in the current record in an XML file. xPath String that specifies a node. find() Method of the data object that finds the first occurrence of a string starting from the current position. find(stringToFind, leftConstraint, rightConstraint) Finds the first occurrence of a string starting from the current position. The search can be constrained to a series of characters (in a text file) or to a vertical strip (in a PDF file) located between the given constraints. The method returns null if the string cannot be found. Otherwise it returns a RectValueText (if the data source is a text file) or RectValuePDF (if the data source is a PDF file) object. This object contains the absolute Left, Top, Right and Bottom coordinates of the smallest possible rectangle that completely encloses the first occurrence of the string. The coordinates are expressed in a number of characters if the data source is a text file, or in millimetres if the data source is a PDF file. Page 278 Partial matches are not allowed. The entire string must be found between the two constraint parameters. The data.find() function only works on the current page. If the record contains several pages, you must create a loop that will perform a jump from one page to another to do a find() on each page. Note Calling this method does not move the current position to the location where the string was found. This allows you to use the method as a look-ahead function without disrupting the rest of the data mapping workflow. stringToFind String to find. leftConstraint Number indicating the left limit from which the search is performed. This is expressed in characters for a text file, or in millimetres for a PDF file. rightConstraint Number indicating the right limit to which the search is performed. This is expressed in characters for a text file, or in millimetres for a PDF file. Examples To look for the word "text" on an entire Letter page (8 1/2 x 11 inch), the syntax is: data.find("text", 0, 216); The numbers 0 and 216 are in millimeters and indicate the left and right limits (constraints) within which the search should be performed. In this example, these values represent the entire width of a page. Note that the smaller the area is, the faster the search is. 
So if you know that the word "text" is within 3 inches from the left edge of the page, provide the following: data.find("text", 0, 76.2); //76.2mm = 3*25.4 mm The return value of the function is: Page 279 Left=26,76, Top=149.77, Right=40,700001, Bottom=154.840302 These values represent the size of the rectangle that encloses the string in full, in millimeters relative to the upper left corner of the current page. findRegExp() Finds the first occurrence of a string that matches the given regular expression pattern, starting from the current position. findRegExp (regexpToFind, flags, leftConstraint, rightConstraint): rectValueText) Finds the first match for a given regular expression pattern starting from the current position. Regular expression flags (i,s,L,m,u,U,d) are specified in the flags parameter. The search can be constrained to a vertical column of characters located between the left and right constraint, each expressed in characters (in a text file) or millimeters (in a PDF file). Partial matches are not allowed. The entire match for the regular expression pattern must be found between the two constraints. The method returns null if the regular expression produces no match. Otherwise it returns a RectValueText object, containing the Left, Top, Right and Bottom coordinates - expressed in characters (in a text file) or millimeters (in a PDF file), relative to the upper left corner of the current page - of the smallest possible rectangle that completely encloses the first match for the regular expression. Note Calling this method does not move the current position to the location where the match occurred. This allows you to use the method as a look-ahead function without disrupting the rest of the data mapping workflow. regexpToFind Regular expression pattern to find. flags i: Enables case-insensitive matching. By default, case-insensitive matching assumes that only characters in the US-ASCII charset are being matched. Unicode-aware case-insensitive Page 280 matching can be enabled by specifying the UNICODE_CASE flag (u) in conjunction with this flag. s: Enables dotall mode. In dotall mode, the expression . matches any character, including a line terminator. By default this expression does not match line terminators. L: Enables literal parsing of the pattern. When this flag is specified, then the input string that specifies the pattern is treated as a sequence of literal characters. Metacharacters or escape sequences in the input sequence will be given no special meaning. The CASE_ INSENSITIVE (i) and UNICODE_CASE (u)flags retain their impact on matching when used in conjunction with this flag. The other flags become superfluous. m: Enables multiline mode. In multiline mode, the expressions ^ and $ match just after or just before, respectively, a line terminator or the end of the input sequence. By default, these expressions only match at the beginning and the end of the entire input sequence. u: Enables Unicode-aware case folding. When this flag is specified, then case-insensitive matching, when enabled by the CASE_INSENSITIVE flag (i), is done in a manner consistent with the Unicode Standard. By default, case-insensitive matching assumes that only characters in the US-ASCII charset are being matched. U: Enables the Unicode version of Predefined character classes and POSIX character classes. 
When this flag is specified, then the (US-ASCII only) Predefined character classes and POSIX character classes are in conformance with Unicode Technical Standard #18: Unicode Regular Expression Annex C: Compatibility Properties. d: Enables Unix lines mode. In this mode, only the '\n' line terminator is recognized in the behavior of ., ^, and $. leftConstraint Number indicating the left limit from which the search is performed. This is expressed in characters for a text file, or in millimeters for a PDF file. rightConstraint Number indicating the right limit to which the search is performed. This is expressed in characters for a text file, or in millimeters for a PDF file. Page 281 Examples data.findRegExp(/\d{3}-[A-Z]{3}/,"gi",50,100); or data.findRegExp("\\d{3}-[A-Z]{3}","gi",50,100);}} Both expressions would match the following strings: 001-ABC, 678-xYz. Note how in the second version, where the regular expression is specified as a string, some characters have to be escaped with an additional backslash, which is standard in JavaScript. db Object that allows to connect to a database. Methods The following table describes the methods of the db object. Method Description Available in File type connect () Method that returns a new database connection object. Boundaries all Preprocessor, Extract, Condition, Repeat, Action, and Postprocessor steps connect() Method that returns a new database connection object. connect(url, user, password) This method returns a new database connection object after connecting to the given URL and authenticating the connection with the provided user and password information. url String that represents the url to connect to. Page 282 user String that represents the user name for authentication. password String that represents the password for authentication. logger Global object that allows logging messages such as error, warning or informational messages. Methods The following table describes the methods of the logger object. Method Parameters Description error() message: string Logs an error message info() message: string Logs an informational message warn() message: string Logs a warning message record The current record in the main data set. Properties Property Return Type fields The field values that belong to this record. You can access a specific field value using either a numeric index or the field name, index The one-based index of this record, or zero if no data is available. tables The details table that belong to this record. You can access a specific table using a numeric index or the table name. Page 283 Example See this How-to for an example of how the current record index, and/or the total number of records in the record set, can be displayed in a document: How to get the record index and count. region The region object defines a sub-section of the input data. Its properties vary according to the type of data. This object is available when triggering document boundaries On script; see "Setting boundaries using JavaScript" on page 257. Methods The following table describes the methods of the region object. This object is available in Boundaries scripts, with all file types. Method Description Return Type found Field that contains a boolean value indicating if the last call to boundaries.find() was successful. Since the find() method always returns a region, regardless of search results, it is necessary to examine the value of found to determine the actual result of the operation. 
Boolean range Read-only object containing the physical coordinates of the region. Physical location of the region: x1 (left), y1 (top), x2 (right), y2 (bottom), expressed in characters for a text file or in millimeters for a PDF file. For a CSV file, it is the name of the column that defines the region. createRegion Creates a region by setting the A region that has the specified Page 284 Method Description Return Type () physical coordinates of the region object. coordinates. createRegion() This method sets the physical coordinates of the region object. The region is available when setting document boundaries using a script (see "region" on the previous page). PDF and Text: createRegion(x1, y1, x2, y2) Creates a region from the data, using the specified left (x1), top (y1), right (x2) and bottom (y2) parameters, expressed in characters for a text file or in millimeters for a PDF file. x1 Double that represents the left edge of the region. y1 Double that represents the top edge of the region. x2 Double that represents the right edge of the region. y2 Double that represents the bottom edge of the region. Example The following script attempts to match ((n,m)) or ((n)) against any of the strings in the specified region and if it does, a document boundary is set. var myRegion = region.createRegion(170,25,210,35); var regionStrings=boundaries.get(myRegion); if (regionStrings) { for (var i=0;i curPage) { steps.moveTo(0, steps.currentPosition+14); /* Moves the current position to 14 lines below the current position of the pointer in the data */ curPage++; } else if(curLine.startsWith("LOAD FACTOR")) { /* Extracts data to the curLine variable until the string "LOAD FACTOR" is encountered */ break; } else { lineArray.push(curLine); /* Adds the current line value (extraction) to the array */ } moveTo() Moves the position of the pointer in the source data file. This is a method of the steps object (see "steps" on page 287). moveTo(scope, verticalPosition) Moves the current position in a text file to verticalPosition where the meaning of verticalPosition changes according to the value specified for scope. scope Number that may be set to: l 0 or steps.MOVELINES l 1 or steps.MOVEDELIMITERS l 2: next line with content verticalPosition Number. What it represents depends on the value specified for scope. Page 289 With the scope set to 0 or steps.MOVELINES, verticalPosition represents the index of the line to move to from the top of the record. With the scope set to 1 or steps.MOVEDELIMITERS, verticalPosition represents the index of the delimiter (as defined in the Input Data settings) to move to from the top of the record. With the scope set to 2, verticalPosition is not used. The position is moved to the next line after the current position that contains any text. Example The following line of code moves the current position in a text file 14 lines down from the current vertical position (steps.currentPosition) of the pointer in the data, as long as it is on the same page. if(steps.currentPage > curPage) { steps.moveTo(0, steps.currentPosition+14); curPage++; } moveTo(scope, verticalOffset) Moves the current position in a PDF file to verticalOffset where the meaning of verticalOffset changes according to the value specified for scope. scope Number that may be set to: l 0 or steps.MOVEMEASURE l 1 or steps.MOVEPAGE verticalOffset Double. What it represents depends on the value specified for scope. 
With the scope set to 0 or steps.MOVEMEASURE, verticalOffset represents the number of millimeters to move the current position, relative to the top of the record (NOT the top of the current page). Page 290 With the scope set to 1 or steps.MOVEPAGES, verticalOffsetrepresents the index of the target page, relative to the top of the record. moveTo(xPath) Moves the current position in a XML file to the first instance of the given node, relative to the top of the record. xPath String that defines a node in the XML file. Tip The XML elements drop-down (on the Settings pane, under Input Data) lists xPaths defining nodes in the current XML file. moveTo(row) Moves the current position in a CSV file to the given row number. row Number that represents the index of the row, relative to the top of the record. moveToNext() Moves the position of the pointer in the source data file to the next line, row or node. The behavior and arguments are different for each emulation type: text, PDF, tabular (CSV), or XML. This is a method of the steps object (see "steps" on page 287). moveToNext(scope) Moves the current position in a text file or XML file to the next instance of scope. What scope represents depends on the emulation type: text or XML. Text scope Number that may be set to: Page 291 l l l 0 or steps.MOVELINES: the current position is set to the next line. 1 or steps.MOVEDELIMITERS: the current position is set to the next delimiter (as defined in the Input Data settings). 2 (next line with content): the current position is set to the next line that contains any text. Example The following line of code moves the current position to the next line that contains any text. steps.moveToNext(2); XML scope Number that may be set to: l l 0 or steps.MOVENODE: the current position is set to the next parent node in the XML hierarchy. 1 or steps.MOVESIBLING: the current position is set to the next sibling node in the XML hierarchy. moveToNext(left, right) Moves the current position in a PDF file to the next line that contains any text, the search for text being contained within the left and right parameters, expressed in millimeters. left Double that represents the left edge (in millimeters) of the text to find. right Double that represents the right edge (in millimeters) of the text to find. moveToNext() Moves the current position in a CSV file to the next row, relative to the current position. Functions copyFile() Function that copies a file to the target file path, replacing it if it already exists. Page 292 copyFile(source, target) source String that specifies the source file path and name. target String that specifies the target file path and name. Example This script copies the file test.txt from c:\Content into the c:\out folder. copyFile("c:\Content\test.txt","c:\out\") createTmpFile() Function that creates a file with a unique name in the temporary work folder and returns a file object. This file stores data temporarily in memory or in a buffer. It is used to prevent multiple input/output access to a physical file when writing. In the end, the contents are transferred to a physical file for which only a single input/output access will occur. Example In the following script, the contents of the data sample file are copied in uppercase to a temporary file. 
try {
    // Open a reader
    var reader = openTextReader(data.filename);
    // Create a temporary file
    var tmpFile = createTmpFile();
    // Open a writer on the temporary file
    var writer = openTextWriter(tmpFile.getPath());
    try {
        var line = null; // Current line
        /* Read line by line; readLine() returns null at the end of the file */
        while ((line = reader.readLine()) != null) {
            // Edit the line
            line = line.toUpperCase();
            // Write the result in the temporary file
            writer.write(line);
            // Add a new line
            writer.newLine();
        }
    } finally {
        // Close the writer of the temporary file
        writer.close();
    }
} finally {
    // Close the reader
    reader.close();
}
deleteFile(data.filename);
tmpFile.move(data.filename);

createHTTPRequest()
Function that creates a new ScriptableHTTPRequest object, in order to issue REST/AJAX calls to external servers. This feature allows the data mapping process to complement its extraction process with external data, including data that could be provided by an HTTP process in Workflow, for instance a process that retrieves certain values from Workflow's Data Repository. Another possible use is to have a Postprocessor that writes the results of the extraction process to a file and immediately uploads that file to a Workflow process.
The returned ScriptableHTTPRequest has a selection of the properties and methods of the standard JavaScript XMLHttpRequest object (see https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest). Supported properties and methods are listed below.

Note
It is not possible to use the async mode, which can be set via the open() function of the ScriptableHTTPRequest (see https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/open), in a data mapping configuration. Async-related properties and methods of the ScriptableHTTPRequest object - for example .onreadystatechange, .readyState and .ontimeout - are not supported. The reason for this is that by the time the response comes back from the server, the DataMapper script may have finished executing and gone out of scope.

Supported properties
• response
• status
• statusText
• timeout (ms). Default: 1 minute.

Supported methods
create()
Creates a new instance of ScriptableHTTPRequest.

open(String method, String url, String user, String password)
open(String verb, String url, String userName, String password, String[] headers, String[] headervalues, String requestBody)
Opens a HTTP request.

Note
If you don't use a user name and password, pass empty strings: request.open("GET",url,"","");

send()
send(String requestBody)
Sends an HTTP request and returns the HTTP status code. This is a blocking call.

getResponseHeader(String header)
Gets the ResponseHeader by name.

getResponseHeaders()
Returns the full response headers of the last HTTP request.

getRequestBody()
Gets the HTTP request body (for POST and PUT).

setRequestHeader(String requestHeader, String value)
Adds an additional HTTP request header.

getResponseBody()
Returns the full response body of the last HTTP request.

setRequestBody(String requestBody)
Sets the HTTP request body (for POST and PUT).

getPassword()
Gets the password for HTTP basic authentication.

setPassword(String password)
Sets the password for HTTP basic authentication.

getTimeout()
Gets the time to wait for the server's response.

setTimeout(int timeout)
Sets the time (in ms) to wait for the server's response.

getUsername()
Gets the username for basic HTTP authentication.

setUsername(String userName)
Sets the username for basic HTTP authentication.

abort()
Aborts the request.
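Example
The sketch below is not one of the standard documentation examples; it only illustrates how the documented members above could be combined in a DataMapper script. The URL, the request header and the log messages are placeholders (assumptions), not prescribed values.

var request = createHTTPRequest();
// Wait up to 30 seconds for the server's response
request.setTimeout(30000);
// Pass empty strings when no user name and password are needed
request.open("GET", "http://127.0.0.1:8080/lookup", "", "");
request.setRequestHeader("Accept", "application/json");
// send() is a blocking call and returns the HTTP status code
var status = request.send();
if (status == 200) {
    logger.info("Lookup response: " + request.response);
} else {
    logger.warn("Lookup failed: " + status + " " + request.statusText);
}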
deleteFile() Function that is used to delete a file. deleteFile(filename) filename String that specifies the path and file name of the file to be deleted. Page 296 Examples 1. Deleting a file in a local folder: deleteFile("c:\Content\test.txt"); 2. Deleting the sample data file used in the DataMapper: deleteFile(data.filename); execute() Function that calls an external program and waits for it to end. execute(command) Calls an external program and waits for it to end. command String that specifies the path and file name of the program to execute. newByteArray() Function that returns a new byte array. newByteArray(size) Returns a new byte array of of the specified number of elements. size Integer that represents the number of elements in the new array. newCharArray() Function that returns a new Char array. newCharArray(size) Returns a new Char array of the specified number of elements. size Integer that represents the number of elements in the new array. Page 297 newDoubleArray() Function that returns a new double array. newDoubleArray(size) Returns a new Double array of the specified number of elements. size Integer that represents the number of elements in the new array. newFloatArray() Function that returns a new float array. newFloatArray(size) Returns a new Float array of the specified number of elements. size Integer that represents the number of elements in the new array. newIntArray() Function that returns a new array of Integers. newIntArray(size) Returns a new Integer array of the specified number of elements. size Integer that represents the number of elements in the new array. newLongArray() Function that returns a new long array. newLongArray(size) Returns a new Long array of the specified number of elements. size Page 298 Integer that represents the number of elements in the new array. newStringArray() Function that returns a new string array. newStringArray(size) Returns a new String array of the specified number of elements. size Integer that represents the number of elements in the new array. openBinaryReader() Function that opens a file as a binary file for reading purposes. The function returns a BinaryReader object. openBinaryReader(filename) filename String that represents the name of the file to open. openBinaryWriter() Function that opens a file as a binary file for writing purposes. The function returns a BinaryWriter object. openBinaryWriter(filename, append) filename String that represents the name of the file to open. append Boolean parameter that specifies whether the file pointer should initially be positioned at the end of the existing file (append mode) or at the beginning of the file (overwrite mode). openTextReader() Function that opens a file as a text file for reading purposes. The function returns a TextReader object. Please note that the temporary file must be closed at the end. Page 299 openTextReader(filename,encoding) filename String that represents the name of the file to open. encoding String that specifies the encoding of the file to read (UTF-8, ISO-8859-1, etc.). Example In the following example, the openTextReader() function is used to open the actual data sample file in the Data Mapper for reading. 
var fileIn = openTextReader(data.filename);
var tmp = createTmpFile();
var fileOut = openTextWriter(tmp.getPath());
var line;
while ((line = fileIn.readLine()) != null) {
    fileOut.write(line.replace((subject), ""));
    fileOut.newLine();
}
fileIn.close();
fileOut.close();
deleteFile(data.filename);
tmp.move(data.filename);
tmp.close();

openTextWriter()
This function opens a file as a text file for writing purposes. The function returns a TextWriter object, which must be closed at the end.

openTextWriter(filename, encoding, append)

filename
String that represents the name of the file to open.

encoding
String specifying the encoding to use (UTF-8, ISO-8859-1, etc.).

append
Boolean parameter that specifies whether the file pointer should initially be positioned at the end of the existing file (append mode) or at the beginning of the file (overwrite mode).

Example
In the following example, the openTextWriter() function is used to open the newly created temporary file for writing:

var fileIn = openTextReader(data.filename);
var tmp = createTmpFile();
var fileOut = openTextWriter(tmp.getPath());
var line;
while ((line = fileIn.readLine()) != null) {
    fileOut.write(line.replace((subject), ""));
    fileOut.newLine();
}
fileIn.close();
fileOut.close();
deleteFile(data.filename);
tmp.move(data.filename);
tmp.close();

The Designer
The Designer is a WYSIWYG (what you see is what you get) editor that lets you create templates for various output channels: Print, Email and Web. A template may contain designs for multiple output channels: a letter intended for print and an e-mail variant of the same message, for example. Content, like the body of the message or letter, can be shared across these contexts. Templates are personalized using scripts and variable data extracted via the DataMapper. More advanced users may use native HTML, CSS and JavaScript.
The following topics will help you quickly familiarize yourself with the Designer.
• "Designer basics" below. These are the basic steps for creating and developing a template.
• "Features" on the next page. These are some of the key features in the Designer.
• "Designer User Interface" on page 666. This part gives an overview of all elements in the Designer User Interface, like menus, dialogs and panes.
More help can be found here:
• Tutorials On Video: watch an introductory video, overview tutorials or practical how-to videos.
• Forum: browse the forum and feel free to ask questions about the use of Connect software.
• Demo site: download demonstrations of OL products.

Designer basics
With the Designer you can create templates for personalized letters, emails and web pages, and generate output from them. These are the basic steps for creating and developing a template:
1. Create a template
Create a template, using one of the Template Wizards. See "Creating a template" on the facing page.
2. Fill the template
Add text, images and other elements to the template and style them. See "Content elements" on page 465 and "Styling and formatting" on page 551.
3. Personalize the content
Personalize the content using variable data. See "Personalizing Content" on page 592.
4. Generate output
Adjust the settings, test the template and generate output: letters, emails, and/or web pages. See "Generating output" on page 953.
5. What's next
Use Workflow to automate your customer communications.

Note
Steps 2 and 3 are not necessarily to be followed in this order.
For example, as you add elements to a template, you may start personalizing them right away, before adding other elements to the template. Features The Designer is Connect's module to create templates for personalized customer communications. These are some of the key features in the Designer: "Templates" on the facing page. Start creating, using and sharing templates. "Contexts" on page 320. A context contains one or more designs for one output channel: l "Print" on page 325. This topic helps you design and fill sections in the Print context. l "Email" on page 359. This topics helps you design an email template. l "Web" on page 381. This topic helps you design a web page. "Sections" on page 321. Sections in one context are designed for the same output channel. Page 303 "Content elements" on page 465. Elements make up the biggest part of the content of each design. "Snippets" on page 548. Snippets help share content between contexts, or insert content conditionally. "Styling and formatting" on page 551. Make your Designer templates look pretty and give them the same look and feel with style sheets. "Personalizing Content" on page 592. Personalize your customer communications using variable data. "Writing your own scripts" on page 624. Scripting can take personalization much further. Learn how to script via this topic. "Generating output" on page 953. Learn the ins and outs of generating output from each of the contexts. Templates The Designer is a WYSIWYG (what you see is what you get) tool to create templates. This topic gets you started. It explains how to create a template, what is found in a template file, and how output can be generated. Creating a template In the Welcome screen that appears after startup, get off to a flying start choosing Browse Template Wizards. Scroll down to see all the Template Wizards. After deciding which output channel – print, email or web – will be prevalent in your template, select a template. The Template Wizards can also be accessed from the menu: click File, click New, expand the Template folder, and then expand one of the templates folders. There are Wizards for the three types of output channels, or contexts as they are called in the Designer: Print, Email and Web. See: l "Creating an Email template with a Wizard" on page 364 l "Creating a Print template with a Wizard" on page 327 Page 304 l "Creating a Web template with a Wizard" on page 382 Tip The quickest way to create a Print template based on a PDF file is to right-click the PDF file in the Windows Explorer and select Enhance with Connect. After creating a template you can add the other contexts (see "Contexts" on page 320), as well as extra sections (see "Sections" on page 321), to the template. It is, however, not possible to use a Template Wizard when adding a context or section to an existing template. Tip If an Email context is going to be part of the template, it is recommended to start with an Email Template Wizard; see "Creating an Email template with a Wizard" on page 364. After creating a template, contexts can be added to it, but that can not be done with a wizard. Opening a template To open a template from the Welcome screen, select Open an Existing Template. To open a template from the menu, select File > Open. Then select the template file. A template file has the extenstion .OL-template. Warning A template created in an older version of the software can be opened in a newer version. 
However, opening and saving it in a newer version of the software will convert the template to the newest file format. The converted template can't be opened in older versions of the software. Opening a package file Templates can also be stored in a package file (see "Sharing a template" on page 308). To open a package file, switch the file type to Package files (*.OL-package) in the Open File Page 305 dialog. When the package contains print presets, you will be asked if you want to import them into the proper repositories. Saving a template A Designer template file has the extension .OL-template. It is a zip file that includes up to 3 contexts, all the related resources and scripts, and (optionally) a link to a Data Mapping Configuration. To save a template for the first time, select File > Save as. After that you can save the template by selecting File > Save or pressing Ctrl+S. Tip To quickly copy the name of any other file, set Save as type to Any file (*.*) in the Save dialog. Select a file to put its name in the File name field. Then set Save as type to Template files (*.OLtemplate) and save the template. When more than one resource (template or data mapping configuration) is open and the Designer software is closed, the Save Resources dialog appears. This dialog displays a list of all open resources with their names and file location. Selected resources will be saved, deselected resources will have all their changes since they were last saved dismissed. Saving older templates Saving a template in a newer version of the software will convert the template to the newest file format. This makes it unreadable to older versions of the software. The warning message that is displayed in this case can be disabled. To re-enable this message (and all other warning dialogs), go to Window > Preferences > General, and click the Reset All Warning Dialogs button at the bottom. Associated data mapping configuration When you save a template, any data mapping configuration that is currently open will be associated with the template by saving a link to the data mapping configuration in the template file. The next time you open the template you will be asked if you want to open the associated data mapping configuration as well. Page 306 To change which data mapping configuration is linked to the template, open both the template and the data mapping configuration that should be linked to it; then save the template. Auto Save After a template has been saved for the first time, Connect Designer can auto save the template with a regular interval. To configure Auto Save: 1. Select the menu option Window > Preferences > Save. 2. Under Auto save, check the option Enable to activate the Auto Save function. 3. Change how often it saves the template by typing a number of minutes. Auto Backup Connect Designer can automatically create a backup file when you manually save a template. To configure Auto Backup: 1. Select the menu option Window > Preferences > Save. 2. Under Auto backup, check the option Enable to activate the Auto Backup function. 3. Type the number of revisions to keep. 4. Select the directory in which the backups should be stored. Backup files have the same name as the original template with two underscores and a progressive number (without leading zeros) at the end: originalname__1.OL-template, originalname__2.OL-template, etc. Note The Auto Save function does not cause backup files to be created. File properties On the menu, select File > Properties to view and complement the file properties. 
See File Properties. The file properties can also be used in scripts; see "template" on page 946. If you are not familiar with writing scripts, refer to "Writing your own scripts" on page 624. Page 307 Sharing a template To share a template, you can send the template file itself, or save the template to a package file, optionally together with a Data Mapping Configuration, a Job Creation Preset and an Output Creation Preset. (See "Job Creation Presets" on page 840 and "Output Creation Settings" on page 850 for more details.) To create a package file, select File > Send to Workflow and choose File in the Destination box. For the other options, see "Sending files to Workflow" on the next page. The package file has the extension .OL-package and can be opened in the Designer (see "Opening a package file" on page 305). Exporting a template report A template report can be used for archiving purposes or to provide information about the template to people who do not have access to Connect. Such a report can be exported in PDF or XML format. By default it contains a summary of the template with an overview of all the settings and resources that are used in the template: media, master pages, contexts, sections, images, scripts etc. The file properties are included as well (see File Properties). To open the Export Template Report wizard, select File > Export Report. For a description of all options, see Export Template Report wizard. Creating a custom template report The Export Template Report wizard also offers the possibility to export custom template reports (in PDF format only). A custom template report could contain another selection of information and present that differently, e.g. with the logo of your company. To create a custom template report, you need two files: l l l A template design with the desired layout and variable data. This .OL-TEMPLATE file has to be made in the Designer. A data mapping configuration that provides the variable data. You could use the data mapping configuration made for the standard template report, or create another one in the DataMapper module, using the standard XML template report as data sample. Data mapping configurations have the extension .OL-DATAMAPPER. Page 308 The following zip file contains both the template and data mapping configuration that are used to generate the standard template report: http://help.objectiflune.com/en/archive/reporttemplate.zip. Generating output from the Designer Output can be generated directly from the Designer; see "Generating Print output" on page 956, "Generating Email output" on page 973 and "Generating Web output" on page 981. To test a template first, select Context > Preflight. Preflights executes the template without actually producing output and it displays any issues once it's done (see also: "Testing scripts" on page 632). Sending files to Workflow Workflow can generate output from a template as well. For this, the template has to be sent to Workflow. The Send to Workflow dialog sends templates, Data Mapping Configurations and print presets to the Workflow server, or saves them as a package file. Print presets make it possible to do such things as filtering and sorting records, grouping documents and splitting the print jobs into smaller print jobs, as well as the more standard selection of printing options, such as binding, OMR markings and the like. See "Job Creation Presets" on page 840 and "Output Creation Settings" on page 850 for more details. To send one or more templates to Workflow: 1. 
Select File > Send to Workflow. 2. Select the template to send. By default the currently active template is listed. Click Browse to select another template. You may select more than one template in the Browse dialog, and each of them is sent to Workflow (or added to a package file). A template file has the extension .OL-template. 3. Select the Data Mapping Configuration to send. By default the current configuration is listed. Click Browse to select another configuration. You may select more than one configuration file in the Browse dialog, and each of them is sent to Workflow (or added to a package file). A Data Mapping Configuration file has the extension .OL-datamapper. 4. Use the drop-down to select a Job Creation Preset to send. Click Browse to select a preset that is not in the default location for presets. A Job Creation Preset file has the extension .OL-jobpreset. Page 309 5. Use the drop-down to select an Output Creation Preset. Click Browse to select a preset that is not in the default location for presets. An Output Creation Preset file has the extension .OL-outputpreset. 6. Finally, choose the Destination: use the drop-down to select where to send the files. The option Workflow machines lists all the PlanetPress Workflow installations detected on the network. Select File to save the files as a package that can be loaded within the Workflow tool. Creating a Web template with a Wizard With the Designer you can design Web templates and output them through Workflow or as an attachment to an email when generating Email output. Capture On The Go templates are a special kind of Web templates; see "Capture OnTheGo template wizards" on page 416. A Web Template Wizard helps you create a Web page that looks good on virtually any browser, device and screen size. Foundation All Web Template Wizards in Connect Designer make use of the Zurb Foundation front-end framework. A front-end framework is a collection of HTML, CSS, and JavaScript files to build upon. Foundation is a responsive framework: it uses CSS media queries and a mobile-first approach, so that websites built upon Foundation look good and function well on multiple devices including desktop and laptop computers, tablets, and mobile phones. Foundation is tested across many browsers and devices, and works back as far as IE9 and Android 2. See http://foundation.zurb.com/learn/about.html. For more information about the use of Foundation in the Designer, see "Using Foundation" on page 420. After creating a Web template, the other contexts can be added, as well as other sections (see "Adding a context" on page 321 and "Adding a Web page" on page 388). To create a Web template with a Template Wizard: Page 310 1. l l In the Welcome screen that appears after startup, choose Browse Template Wizards. Scroll down until you see the Foundation Web Page Starter Template Wizards. Alternatively, on the File menu, click New, expand the Template folder, and then expand the Foundation Web Page Starter folder. 2. Select a template. There are 4 types of Web Template Wizards : l Blank l Contact Us l Jumbotron l Thank You If you don't know what template to choose, see "Web Template Wizards" on page 313 further down in this topic, where the characteristics of each kind of template are described. 3. Click Next and make adjustments to the initial settings. l Section: l l l Description: Enter the description of the page. This is the contents of a HTML tag. Top bar group: l l l l Name: Enter the name of the Section in the Web context. 
This has no effect on output. Set width to Grid: Check this option to limit the width of the top bar contents to the Foundation Grid, instead of using the full width of the page. Stick to the top of the browser window: Check to lock the top menu bar to the top of the page, even if the page has scroll bars. This means the menu bar will always be visible in the browser. Background color: Enter a valid hexadecimal color code for the page background color (see w3school's color picker) , or click the colored circle to the right to open the Color Picker. Colors group: Enter a valid hexadecimal color code (see w3school's color picker) or click the colored square to open the Color Picker dialog (see "Color Picker" on page 674), and pick a color for the following elements: Page 311 l Primary: links on the page. l Secondary: secondary links on the page. l Text: text on the page contained in paragraphs (
<p>). Headings: all headings (<h1> through <h6>
) including the heading section's subhead. 4. Click Finish to create the template. The Wizard creates: l l l l A Web context with one web page template (also called a section) in it. The web page contains a Header, a Section and a Footer element with dummy text, and depending on the type of web page, a navigation bar, button and/or Form elements. Resources related to the Foundation framework (see "Web Template Wizards" on the next page): style sheets and JavaScript files. The style sheets can be found in the Stylesheets folder on the Resources pane. The JavaScript files are located in the JavaScript folder on the Resources pane, in a Foundation folder. A collection of Snippets in the Snippets folder on the Resources pane. The Snippets contain ready-to-use parts to build the web page. Double-click to open them. See "Snippets" on page 548 for information about using Snippets. Images: icons, one picture and one thumbnail picture. Hover your mouse over the names of the images in the Images folder on the Resources pane to get a preview. The Wizard opens the Web section, so that you can fill it with text and other elements; see "Content elements" on page 465, "Web Context" on page 386 and "Web pages" on page 387. Web pages can be personalized just like any other type of template; see "Variable Data" on page 604 and "Personalizing Content" on page 592. Tip Use the Outline pane at the left to see which elements are present in the template and to select an element. Use the Attributes pane at the right to see the current element's ID, class and some other properties. Page 312 Use the Styles pane next to the Attributes pane to see which styles are applied to the currently selected element. Tip Click the Edges button on the toolbar to make borders of elements visible on the Design tab. The borders will not be visible on the Preview tab. Web Template Wizards Foundation All Web Template Wizards in Connect Designer make use of the Zurb Foundation front-end framework. A front-end framework is a collection of HTML, CSS, and JavaScript files to build upon. Foundation is a responsive framework: it uses CSS media queries and a mobile-first approach, so that websites built upon Foundation look good and function well on multiple devices including desktop and laptop computers, tablets, and mobile phones. Foundation is tested across many browsers and devices, and works back as far as IE9 and Android 2. See http://foundation.zurb.com/learn/about.html. Jumbotron The name of the Jumbotron template is derived from the large screens in sports stadiums. It is most useful for informative or marketing-based websites. Its large banner at the top can display important text and its "call to action" button invites a visitor to click on to more information or an order form. Contact Us The Contact Us template is a contact form that can be used on a website to receive user feedback or requests. It's great to use in conjunction with the Thank You template, which can recap the form information and thank the user for feedback. Thank You The Thank You template displays a thank you message with some text and media links. Page 313 Blank web page The Blank Web Page template is a very simple Foundation template that contains a top bar menu and some basic contents to get you started. Capture OnTheGo template wizards With the Designer you can create Capture OnTheGo (COTG) templates. COTG templates are used to generate forms for the Capture OnTheGo mobile application. 
For more information about this application, see the website: Capture OnTheGo. A Capture OnTheGo Form is actually just a Web Form, that you could add without a wizard, but the COTG Template Wizards include the appropriate JavaScript files for the Capture OnTheGo app, and styles to create user-friendly, responsive forms. They are built upon the Foundation framework. Foundation All Web Template Wizards in Connect Designer make use of the Zurb Foundation front-end framework. A front-end framework is a collection of HTML, CSS, and JavaScript files to build upon. Foundation is a responsive framework: it uses CSS media queries and a mobile-first approach, so that websites built upon Foundation look good and function well on multiple devices including desktop and laptop computers, tablets, and mobile phones. Foundation is tested across many browsers and devices, and works back as far as IE9 and Android 2. See http://foundation.zurb.com/learn/about.html. For more information about the use of Foundation in the Designer, see "Using Foundation" on page 420. After creating a COTG template, the other contexts can be added, as well as other sections (see "Adding a context" on page 321 and "Adding a Web page" on page 388). Tip If the COTG Form replaces a paper form, it can be tempting to stick to the original layout. Although that may increase the recognizability, it is better to give priority to the user-friendliness of the form. Keep in mind that the COTG form will be used on a device and don't miss the chance to make it as Page 314 user-friendly as possible. See "Designing a COTG Template" on page 413. Creating a COTG template using a Wizard To create a COTG template with a Template Wizard: 1. l l In the Welcome screen that appears after startup and when you click the Home icon at the top right, choose Browse Template Wizards. Scroll down until you see the Capture OnTheGo Starter Template Wizards. Alternatively, on the File menu, click New, expand the Template folder, and then expand the Capture OnTheGo Starter folder. 2. Select a template. There are 8 types of Web Template Wizards: l l l l l l l l Blank. The Blank COTG Template has some basic design and the appropriate form, but no actual form or COTG elements. Bill of Lading. The Bill of Lading Template is a transactional template that includes a detail table with a checkmark on each line, along with Signature and Date COTG elements. Use this wizard as a way to quickly start any new Zurb Foundation based form for Capture OnTheGo. Event Registration. The Event Registration Template is a generic registration form asking for name, phone, email, etc. Event Feedback. The Event Feedback Template is a questionnaire containing different questions used to rate an experience. Membership Application. The Membership Application Template is a signed generic request form that can be used for memberships such as gyms, clubs, etc. Patient Intake. The Patient Intake Template is a generic medical questionnaire that could potentially be used as a base for insurance or clinic form. Kitchen Sink. The Kitchen Sink Template includes a wide range of basic form and COTG form elements demonstrating various possibilities of the software. Time Sheet. The Time Sheet Template is a single page application used to add time entries to a list. This template demonstrates the dynamic addition of lines within a COTG template, as the Add button creates a new time entry. There is no limit to the number of entries in a single page. 
Submitted data are grouped using arrays (see "Grouping data using arrays" on page 434). Page 315 3. Click Next and make adjustments to the initial settings. l l l l Create Off-Canvas navigation menu: an Off-Canvas menu is a Foundation component that lets you navigate between level 4 headings (
<h4>
) in the form. Check this option to add the menu automatically. Submit URL: enter the URL where the form data should be sent. The URL should be a server-side script that can accept COTG Form data. The Title and the Logo that you choose will be displayed at the top of the Form. Colors: Click the colored square to open the Color Picker dialog (see "Color Picker" on page 674) and pick a color, or enter a valid hexadecimal color code (see w3school's color picker) for the page background color. Do the same for the background color of the navigation bar at the top and for the buttons on the Form. 4. Click Next to go to the next settings page if there is one. 5. Click Finish to create the template. The Wizard creates: l l l A Web context with one web page template (also called a section) in it. The web page contains an 'off-canvas' Div element, Header, a Section and a Footer element with dummy text, and depending on the type of web page, a navigation bar, button and/or Form elements. Style sheets and JavaScript files related to the COTG form itself and others related to the Foundation framework (see above). The style sheets can be found in the Stylesheets folder on the Resources pane. The JavaScript files are located in the JavaScript folder on the Resources pane. A collection of snippets in the Snippets folder on the Resources pane. The snippets contain ready-to-use parts to build the web form. Double-click to open them. See "Snippets" on page 548 and "Loading a snippet via a script" on page 640 for information about using Snippets. The Wizard opens the Web section, so that you can fill the Capture OnTheGo form. 6. Make sure to set the action and method of the form: select the form and then enter the action and method on the Attributes pane. The action of a Capture OnTheGo form should specify the Workflow HTTP Server Input task that receives and handles the submitted data. The action will look like this: Page 316 http://127.0.0.1:8080/action (8080 is Workflow's default port number; 'action' should be replaced by the HTTP action of that particular HTTP Server Input task). The method of a Capture OnTheGo form should be POST to ensure that it doesn't hit a data limit when submitting the form. The GET method adds the data to the URL, and the length of a URL is limited to 2048 characters. Especially forms containing one or more Camera inputs may produce a voluminous data stream that doesn't fit in the URL. GET also leaves data trails in log files, which raises privacy concerns. Therefore POST is the preferred method to use. Filling a COTG template Before inserting elements in a COTG Form, have the design ready; see "Designing a COTG Template" on page 413. In a Capture OnTheGo form, you can use special Capture OnTheGo Form elements, such as a Signature and a Barcode Scanner element. For a description of all COTG elements, see: "COTG Elements" on page 519. To learn how to use them, see "Using COTG Elements" on page 431. Foundation, the framework added by the COTG template wizards, comes with a series of features that can be very useful in COTG forms; see "Using Foundation" on page 420. Naturally, Web Form elements can also be used on COTG Forms (see "Forms" on page 527 and "Form Elements" on page 532) as well as text, images and other elements (see "Content elements" on page 465). Capture OnTheGo templates can be personalized just like any other type of template; see "Variable Data" on page 604 and "Personalizing Content" on page 592. 
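As an illustration of that personalization, a minimal text script could look like the sketch below. This is only a sketch: the selector (#customer-name) and the data field (first_name) are assumptions that would have to match your own template and data mapping configuration; see "Writing your own scripts" on page 624 for the actual scripting reference.

// Script with selector #customer-name (assumed to exist in the form)
var name = record.fields["first_name"]; // field name is an example only
results.html("Dear " + name + ",");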
Tip Use the Outline pane at the left to see which elements are present in the template and to select an element. Use the Attributes pane at the right to see the current element's ID, class and some other properties. Use the Styles pane next to the Attributes pane to see which styles are applied to the currently selected element. Page 317 Tip Click the Edges button on the toolbar to make borders of elements visible on the Design tab. The borders will not be visible on the Preview tab. Resources This page clarifies the difference between Internal, External and Web resources that may be used in a template, and explains how to refer to them in HTML and in scripts. Internal resources Internal resources are files that are added to and saved with the template. To add images, fonts, style sheets, and snippets to your template, you can drag or copy/paste them into the Resources Pane. See also: "Images" on page 537, "Snippets" on page 548, "Styling templates with CSS files" on page 553 and "Fonts" on page 587. Resource files can also be dragged or copy/pasted out of the the application to save them on a local hard drive. Once imported, internal resources are accessed using a relative path, depending where they're called from. Resources can be located in the following folders: l images/ contains the files in the Images folder. l fonts/ contains the files in the Fonts folder. l css/ contains the files in the StyleSheets folder. l js/ contains the files in the JavaScripts folder. l snippets/ contains the files in the Snippets folder. When refering to them, normally you would simply use the path directly with the file name. The structure within those folders is maintained, so if you create a "signatures" folder within the "Images" folder, you need to use that structure, for example in HTML: . In scripts, you can refer to them in the same way, for example: results.loadhtml("snippets/en/navbar.html"); See also: "Loading a snippet via a script" on page 640 and "Writing your own scripts" on page 624. Page 318 Note When referring to images or fonts from a CSS file, you need to remember that the current path is css/, meaning you can't just call images/image.jpg. Use a relative path, for example: #header { background-image: url('../images/image.jpg'); } External resources External resources are not stored in the template, but on the local hard drive or on a network drive. They are accessed using a path. The path must have forward slashes, for example or var json_variables = loadjson ("file:///d:/jsondata/variables.json");. The complete syntax is: file:///. If the host is "localhost", it can be omitted, as it is in the example, resulting in file:///. The empty string is interpreted as `the machine from which the URL is being interpreted'. Network paths are similar: results.loadhtml ("file://servername/sharename/folder/snippet.html"); (note that in this case file is followed by 2 slashes only). Some limitations l l Style sheets cannot refer to external resources. The Connect Server user needs access to whichever network path is used. If the network path is on a domain, the Connect Server must be identified with domain credentials that have access to the domain resources. For more information on network paths, please see this Wikipedia entry: file URI scheme. Web resources Web resources are simply accessed using a full URL. This URL needs to be publicly accessible: if you type in that URL in a browser on the server, it needs to be visible. 
Authentication is possible only through URL Parameters (http://www.example.com/data.json?user=username&password=password) or through HTTP Basic Auth (http://username:password@www.example.com/data.json). Resources can also be called from a PlanetPress Workflow instance: Page 319 l l "Static Resources", as set in the preferences, are accessed using the resource path, by default something like http://servername:8080/_iRes/images/image.jpg. (For guidance on setting the preferences, search for 'HTTP Server Input 2' in the PlanetPress Workflow help files on: OL Help). Resources can also be served by processes: http://servername:8080/my_ process?filename=image.jpg (assuming "my_process" is the action in the HTTP Server Input). Contexts Contexts are parts of a template that are each used to generate a specific type of output: Web, Email or Print. l l l The Print context outputs documents to either a physical printer a PDF file; see "Print context" on page 332. The Email context outputs HTML email, composed of HTML code with embedded CSS. See "Email context" on page 368. The Web context outputs an HTML web page. See "Web Context" on page 386. When a new template is made, the Context appropriate to that new template is automatically created, including one section. After a template has been created, the other two contexts can be added to it; see "Adding a context" on the next page. Tip If an Email context is going to be part of the template, it is recommended to start with an Email Template Wizard; see "Creating an Email template with a Wizard" on page 364. After creating a template, contexts can be added to it, but that can not be done with a wizard. Outputting and combining contexts All three contexts can be present in any template and they can all be used to output documents; see "Generating Email output" on page 973, "Generating Print output" on page 956 and "Generating Web output" on page 981. They can even be combined in output. Page 320 If present in the same template, a Print context and a Web context can be attached to an Email context. Outputting other combinations of contexts, and selecting sections based on a value in the data, can be done via a Control Script; see "Control Scripts" on page 645. Adding a context To add a context, right-click the Contexts folder on the Resources pane and click New print context, New email context or New web context. Only one context of each type can be present in a template. Each context, however, can hold more than one section; see "Sections" below. Deleting a context To delete a context, right-click the context on the Resources pane and click Delete. Warning No backup files are maintained in the template. The only way to recover a deleted section, is to click Undo on the Edit menu, until the deleted section is restored. After closing and reopening the template it is no longer possible to restore the deleted context this way. Sections Sections are parts of one of the contexts in a template: Print, Email or Web. They contain the main text flow for the contents. In each of the contexts there can be multiple sections. A Print context, for example, may consist of two sections: a covering letter and a policy. Adding a section To add a section to a context, right-click the context (Email, Print or Web) on the Resources pane, and then click New section. The new section has the same settings as the first section in the same context. However, custom style sheets and JavaScript files aren't automatically included in the new section. 
It is not possible to use a Template Wizard when adding a section to an existing template. Page 321 Tip If an Email context is going to be part of the template, it is recommended to start with an Email Template Wizard; see "Creating an Email template with a Wizard" on page 364. After creating a template, contexts can be added to it, but that can not be done with a wizard. Editing a section To open a section, expand the Contexts folder on the Resources pane, expand the respective context (Print, Email or Web) and double-click a section to open it. Each section can contain text, images and many other elements (see "Content elements" on page 465), including variable data and other dynamic elements (see "Personalizing Content" on page 592). Copying a section Copying a section, either within the same template or from another template, can only be done manually. You have to copy the source of the HTML file: 1. Open the section that you want to copy and go to the Source tab in the workspace. 2. Copy the contents of the Source tab (press Ctrl+A to select everything and then Ctrl+C to copy the selection). 3. Add a new section (see "Adding a section" on the previous page, above). 4. Go to the Source tab and paste the contents of the other section here (press Ctrl+V). 5. When copying a section to another template, add the related source files, such as images, to the other template as well. Deleting a section To delete a section: l On the Resources pane, expand the Contexts folder, expand the folder of the respective context, right-click the name of the section, and then click Delete. Page 322 Warning No backup files are maintained in the template. The only way to recover a deleted section, is to click Undo on the Edit menu, until the deleted section is restored. After closing and reopening the template it is no longer possible to restore the deleted context this way. Renaming a section To rename a section: l On the Resources pane, expand the Contexts folder, expand the folder of the respective context, right-click the name of the section, and then click Rename. Note Sections cannot have an integer as name. The name should always include alphanumeric characters. Section properties Which properties apply to a section, depends on the context it is part of. See also: "Print sections" on page 335, "Email templates" on page 370, and "Web pages" on page 387. To change the properties for a section: l On the Resources pane, expand the Contexts folder, expand the folder of the respective context, right-click the name of the section, and then click one of the options. Applying a style sheet to a section In order for a style sheet to be applied to a specific section, it needs to be included in that section. There are two ways to do this. Drag & drop a style sheet Page 323 1. Click and hold the mouse button on the style sheet on the Resources pane. 2. Move the mouse cursor within the Resources pane to the section to which the style sheet should be applied. 3. Release the mouse button. Using the Includes dialog 1. On the Resources pane, right-click the section, then click Includes. 2. From the File types dropdown, select Stylesheets. 3. Choose which CSS files should be applied to this section. The available files are listed at the left. Use the arrow buttons to move the files that should be included to the list at the right. 4. You can also change the order in which the CSS files are read: click one of the included CSS files and use the Up and Down buttons. 
Note that moving a style sheet up in the list gives it less weight. In case of conflicting rules, style sheets read later will override previous ones. Note Style sheets are applied in the order in which they are included in a section. The styles in each following style sheet add up to the styles found in previously read style sheets. When style sheets have a conflicting rule for the same element, class or ID, the last style sheet ‘wins’ and overrides the rule found in the previous style sheet. Arranging sections Changing the order of the sections in a context can have an effect on how they are outputted; see: "Print sections" on page 335, "Email templates" on page 370 and "Web pages" on page 387. To rearrange sections in a context: l On the Resources pane, expand the Contexts folder, expand the folder of the respective context, and then drag and drop sections to change the order they are in. Alternatively, right-click a section and click Arrange. In the Arrange Sections dialog you can change the order of the sections in the same context by clicking the name of a section and moving it using the Up and Down buttons. Page 324 Outputting sections Which sections are added to the output, depends on the type of context they are in. When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record. The sections are added to the output in the order in which they appear on the Resources pane. See "Generating Print output" on page 956. In email and web output, only one section can be executed at a time. The section that will be output is the section that has been set as the 'default'. See "Generating Web output" on page 981 and "Web pages" on page 387 and "Generating Email output" on page 973 and "Email templates" on page 370. The 'default' section is always executed when the template is run using the Create Email Content task in Workflow (see Workflow Help: Create Email Content). It is, however, possible to include or exclude sections when the output is generated, or to set another section as the 'default', depending on a value in the data. A Control Script can do this; see "Control Scripts" on page 645. See "Generating output" on page 953 to learn how to generate Print documents, Web pages or Email. Print With the Designer you can create one or more Print templates and merge the template with a data set to generate personal letters, invoices, policies etc. The Print context is the folder in the Designer that can contain one or more Print sections. Print templates, also called Print sections, are part of the Print context. They are meant to be printed to a printer or printer stream, or to a PDF file (see "Generating Print output" on page 956). The Print context can also be added to Email output as a PDF attachment; see "Generating Email output" on page 973. When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record. Page 325 When a Print template is created or when a Print context is added to an existing template the Print context folder is created along with other folders and files that are specific to a Print context (see "Creating a Print template with a Wizard" on the next page, "Adding a context" on page 321 and "Print context" on page 332). Only one Print section is created at the start, but you can add as many Print sections as you need; see "Print sections" on page 335. 
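When not all of these Print sections should be output for every record, a Control Script can include or exclude sections per record (see "Outputting sections" above and "Control Scripts" on page 645). The sketch below only gives an idea of what such a script could look like; the section name ("Policy") and the data field ("LETTER_ONLY") are assumptions for illustration.

// Control Script sketch: skip one Print section for certain records
var policy = merge.template.contexts.PRINT.sections["Policy"];
if (policy) {
    // Disable the section when the (example) field LETTER_ONLY equals "Y"
    policy.enabled = (record.fields["LETTER_ONLY"] != "Y");
}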
Pages Unlike emails and web pages, Print sections can contain multiple pages. Pages are naturally limited by their size and margins. If the content of a section doesn't fit on one page, the overflow goes to the next page. This happens automatically, based on the section's page size and margins; see "Page settings: size, margins and bleed" on page 344. Although generally the same content elements can be used in all three contexts (see "Content elements" on page 465), the specific characteristics of pages make it possible to use special elements, such as page numbers; see "Page numbers " on page 345. See "Pages" on page 343 for an overview of settings and elements that are specific for pages. Headers, footers, tear-offs and repeated elements (Master page) In Print sections, there are often elements that need to be repeated across pages, like headers, footers and logos. In addition, some elements should appear on each first page, or only on pages in between the first and the last page, or only on the last page. Examples are a different header on the first page, and a tear-off slip that should show up on the last page. This is what Master Pages are used for. Master Pages can only be used in the Print context. See "Master Pages" on page 350 for an explanation of how to fill them and how to apply them to different pages. Stationery (Media) When the output of a Print context is meant to be printed on paper that already has graphical and text elements on it (called stationery, or preprinted sheets), you can add a copy of this Page 326 media, in the form of a PDF file, to the Media folder. Media can be applied to pages in a Print section, to make them appear as a background to those pages. This ensures that elements added to the Print context will correspond to their correct location on the preprinted media. When both Media and a Master Page are used on a certain page, they will both be displayed on the Preview tab of the workspace, the Master Page being 'in front' of the Media and the Print section on top. To open the Preview tab, click it at the bottom of the Workspace or select View > Preview View on the menu. The Media will not be printed, unless this is specifically requested through the printer settings in the Print Wizard; see "Generating Print output" on page 956. See "Media" on page 353 for further explanation about how to add Media and how to apply them to different pages. Copy Fit Copy Fit is a feature to scale text to the available space, the name of a person on a greeting card for example, or the name of a product on a shelf talker. This feature is only available with Box and Div elements in Print sections. For more information about this feature see "Copy Fit" on page 566. Creating a Print template with a Wizard A Print template may consist of various parts, such as a covering letter and a policy. Start with one of the Template Wizards for the first part; other parts can be added later. To create a Print template with a Template Wizard: 1. l In the Welcome screen that appears after startup: l l l Choose Browse Template Wizards and scroll down until you see the Print Template wizards and select the Postcard or Formal Letter wizard. Or choose Create a New Template and select the PDF-based Print wizard. Alternatively, on the File menu, click New, expand the Template folder, and then: Page 327 l l Select the PDF-based Print wizard. Or expand the Basic Print templates folder, select Postcard or Formal Letter and click Next. 
See "Print Template Wizards" below for information about the various types of Template wizards. 2. Make adjustments to the initial settings (the options for each type of template are listed below). Click Next to go to the next settings page if there is one. 3. Click Finish to create the template. See "Print context" on page 332 and "Print sections" on page 335 for more information about Print templates. Tip Use the Outline pane at the left to see which elements are present in the template and to select an element. Use the Attributes pane at the right to see the current element's ID, class and some other properties. Use the Styles pane next to the Attributes pane to see which styles are applied to the currently selected element. Print Template Wizards There are three Print Template wizards: one for a formal letter, one for a postcard and one for a Print template based on a PDF that you provide. Postcard The Postcard Wizard lets you choose a page size and two background images, one for the front and one for the back of the postcard. When you click Finish, the Wizard creates: l A Print context with one section in it, that has duplex printing (printing on both sides) enabled. See "Printing on both sides" on page 334. Page 328 l l l l Two Master Pages that each contain a background image. The first Master Page is applied to the front of every page in the Print section. The second Master Page is applied to the back of every page in the Print section. See "Master Pages" on page 350. Scripts and selectors for variable data. The Scripts pane shows, for example, a script called "first_name". This script replaces the text "@first_name@" on the front of the postcard by the value of a field called "first_name" when you open a data set that has a field with that name. See "Variable Data" on page 604. A script called Dynamic Front Image Sample. This script shows how to toggle the image on the front page dynamically. See also "Writing your own scripts" on page 624. One empty Media. Media, also called Virtual Stationery, can be applied to all pages in the Print section. See "Media" on page 353. The Wizard opens the Print section, so that you can fill it with text and other elements; see "Content elements" on page 465. It already has two Positoned Boxes on it: one on the front, for text, and one on the back, for the address. See "Print context" on page 332 and "Print sections" on page 335 for more information about Print templates. Formal letter The Formal Letter Wizard first lets you select the page settings, see "Page settings: size, margins and bleed" on page 344. These settings are fairly self-explanatory, except perhaps these: l l l l Duplex means double-sided printing. The margins define where your text flow will go. The actual printable space on a page depends on your printer. The bleed is the printable space around a page. It can be used on some printers to ensure that no unprinted edges occur in the final trimmed document. Printers that can’t print a bleed, will misinterpret this setting. Set the bleed to zero to avoid this. The number of sections is the number of parts in the Print context. Although this Template wizard can add multiple Print sections to the Print context, it will only add content to the first section. On the next settings page (click Next to go there), you can type a subject, the sender's name and the sender's title. These will appear in the letter. You can also: Page 329 l l Click the Browse button to select a signature image. 
This image will appear above the sender's name and title. Select Virtual Stationery: a PDF file with the letterhead stationery. Also see Media. When you click Finish, the Wizard creates: l l l l A Print context with one section in it; see "Print context" on page 332 and "Print sections" on page 335. One empty Master Page. Master Pages are used for headers and footers, for images and other elements that have to appear on more than one page, and for special elements like tear-offs. See "Master Pages" on page 350. One Media. You can see this on the Resources pane: expand the Media folder. Media 1 is the Virtual Stationery that you have selected in the Wizard. It is applied to all pages in the Print section, as can be seen in the Sheet Configuration dialog. (To open this dialog, expand the Contexts folder on the Resources pane; expand the Print folder and rightclick "Section 1"; then select Sheet Configuration.) See "Media" on page 353. Selectors for variable data, for example: @Recipient@. You will want to replace these by the names of fields in your data. See "Variable Data" on page 604. The Wizard opens the Print section. You can add text and other elements; see "Content elements" on page 465. The formal letter template already has an address on it. The address lines are paragraphs, located in one cell in a table with the ID address-block-table. As the table has no borders, it is initially invisible. The address lines will stick to the bottom of that cell, even when the address has fewer lines. See "Styling and formatting" on page 551 to learn how to style elements. Tip Click the Edges button on the toolbar to make borders of elements visible on the Design tab. The borders will not be visible on the Preview tab. Page 330 PDF-based Print template Tip The quickest way to create a Print template based on a PDF file is to right-click the PDF file in the Windows Explorer and select Enhance with Connect. The PDF-based Print template wizard creates a document from an existing PDF file: a brochure, voucher, letter, etc. The PDF is used as the background image of the Print section (see "Using a PDF file as background image" on page 339).​ Variable and personalized elements, like a reseller address, voucher codes and so on, can be added in front of it (see "Personalizing Content" on page 592 and "Variable Data" on page 604). By default, the PDF itself is added to the Image folder located in the Resources pane. Uncheck the option Save with template if the PDF should not be imported in the template. If not saved with the template, the image will remain external. Note that external images need to be available when the template is merged with a record set to generate output, and that their location should be accessible from the machine on which the template's output is produced. External images are updated (retrieved) at the time the output is generated. After clicking Next, you can change the settings for the page. The initial page size and bleed area are taken from the selected PDF. When you click Finish, the Wizard creates: l l l A Print context with one section in it; see "Print context" on the facing page and "Print sections" on page 335. The selected PDF is used as the background of the Print section; see "Using a PDF file as background image" on page 339.​ For each page in the PDF one page is created in the Print section. One empty Master Page. Master Pages are used for headers, footers, images and other elements that have to appear on more than one page, and for special elements like tearoffs. 
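Whichever wizard you use, the placeholders it inserts (such as @first_name@ or @Recipient@) are replaced by Designer scripts when a record is merged; see "Variable Data" on page 604. As a point of reference, here is a minimal sketch of such a replacement script. It assumes a hypothetical data field named Recipient, with the placeholder text @Recipient@ used as the script's selector; the generated wizard scripts may look slightly different.

// Designer script - selector: @Recipient@ (text)
// Replaces the placeholder with the value of the (hypothetical) Recipient field.
var value = record.fields["Recipient"];
results.text(value !== undefined ? value : "");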
See "Master Pages" on page 350. One empty Media. Media, also called Virtual Stationery, can be applied to all pages in the Print section. See "Media" on page 353. Page 331 Print context The Print context is the folder in the Designer that can contain one or more Print templates. Print templates, also called Print sections, are part of the Print context. They are meant to be printed to a printer or printer stream, or to a PDF file (see "Generating Print output" on page 956). The Print context can also be added to Email output as a PDF attachment; see "Generating Email output" on page 973. When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record. Creating the Print context You can start creating a Print template with a Wizard (see "Creating a Print template with a Wizard" on page 327), or add the Print context to an existing template (see "Adding a context" on page 321). Tip Editing PDF files in the Designer is not possible, but when they're used as a section's background, you can add text and other elements, such as a barcode, to them. The quickest way to create a Print template based on a PDF file is to right-click the PDF file in the Windows Explorer and select Enhance with Connect. Alternatively, start creating a new Print template with a Wizard, using the PDF-based Print template (see "Creating a Print template with a Wizard" on page 327). To use a PDF file as background image for an existing section, see "Using a PDF file as background image" on page 339. When a Print template is created, the following happens: l The Print context is created and one Print section is added to it. You can see this on the Resources pane: expand the Contexts folder, and then expand the Print folder. The Print context can contain multiple sections: a covering letter and a policy, for example, or one section that is meant to be attached to an email as a PDF file and another one that is going to be printed out on paper. Only one Print section is added to it at the beginning, but you can add as many print sections as you need; see "Adding a Print section" on page 336. See "Print sections" on page 335 to learn how to fill a Print section. Page 332 l One Master Page is added to the template, as can be seen on the Resources pane, in the Master Page folder. In Print sections, there are often elements that need to be repeated across pages, like headers, footers and logos. In addition, some elements should appear on each first page, or only on pages in between the first and the last page, or only on the last page. Examples are a different header on the first page, and a tear-off slip that should show up on the last page. This is what Master Pages are used for. Master Pages can only be used in the Print context. See "Master Pages" on page 350. Initially, the (empty) master page that has been created with the Print context will be applied to all pages in the Print section, but more Master Pages can be added and applied to different pages. l l One Media is added to the template, as is visible on the Resources pane, in the Media folder. This folder can hold the company's stationery in the form of PDF files. When applied to a page in a Print section, Media can help prevent the contents of a Print section from colliding with the contents of the stationery. See "Media" on page 353 to learn how to add Media and, optionally, print them. 
Initially, the (empty) media that has been created with the Print context, is applied to all pages in the Print section. You can add more Media and apply them each to different pages. One Stylesheet, named context_print_styles.css, is added to the template, as you can see on the Resources pane, in the Stylesheets folder. This stylesheet is meant to be used for styles that are only applied to elements in the Print context. See also "Styling templates with CSS files" on page 553. Print settings in the Print context and sections The following settings in the Print context and Print sections have an impact on how the Print context is printed. Arranging and selecting sections The Print context can contain one or more Print sections. When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record. The sections are added to the output in the order in which they appear on the Resources pane. This order can be changed; see "Print sections" on page 335. Page 333 It is also possible to exclude sections from the output, or to include a section only on a certain condition that depends on a value in the data. This can be done using a Control Script; see "Control Scripts" on page 645. Printing on both sides To print a Print section on both sides of the paper, that Print section needs to have the Duplex printing option to be enabled; see "Enabling double-sided printing (Duplex, Mixplex)" on page 342. This setting can not be changed in a Job Creation Preset or an Output Creation Preset. Note Your printer must support duplex for this option to work. Setting the binding style for the Print context The Print context , as well as each of the Print sections, can have its own Finishing settings. In printing, Finishing is the way pages are bound together after they have been printed. Which binding styles can be applied depends on the type of printer that you are using. To set the binding style of the Print context: 1. On the Resources pane, expand the Contexts folder; then right-click the Print context and select Finishing. Alternatively, select Context > Finishing on the main menu. This option is only available when editing a Print section in the Workspace. 2. Choose a Binding style and, if applicable, the number of holes. For an explanation of all Binding and Hole making options, see "Finishing Options" on page 841. To set the binding style of a Print section, see "Setting the binding style for a Print section" on page 341. Overriding binding styles in a job creation preset A Job Creation Preset can override the binding styles set for the Print sections and for the Print context as a whole. To bind output in another way than defined in the template’s settings: Page 334 1. Create a Job Creation Preset that overrides the settings of one or more sections: select File > Presets and see "Job Creation Presets" on page 840 for more details. 2. Select that Job Creation Preset in the Print wizard; see "Generating Print output" on page 956. Setting the bleed The bleed is the printable space around a page. It can be used on some printers to ensure that no unprinted edges occur in the final trimmed document. The bleed is one of the settings for a section. See "Page settings: size, margins and bleed" on page 344. Print sections Print templates, also called Print sections, are part of the Print context. They are meant to be printed to a printer or printer stream, or to a PDF file (see "Generating Print output" on page 956). 
The Print context can also be added to Email output as a PDF attachment; see "Generating Email output" on page 973. When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record. Pages Unlike emails and web pages, Print sections can contain multiple pages. Pages are naturally limited by their size and margins. If the content of a section doesn't fit on one page, the overflow goes to the next page. This happens automatically, based on the section's page size and margins; see "Page settings: size, margins and bleed" on page 344. Although generally the same content elements can be used in all three contexts (see "Content elements" on page 465), the specific characteristics of pages make it possible to use special elements, such as page numbers; see "Page numbers " on page 345. See "Pages" on page 343 for an overview of settings and elements that are specific for pages. Using headers, footers, tear-offs and repeated elements In Print sections, there are often elements that need to be repeated across pages, like headers, footers and logos. In addition, some elements should appear on each first page, or only on pages in between the first and the last page, or only on the last page. Examples are a different header on the first page, and a tear-off slip that should show up on the last page. Page 335 This is what Master Pages are used for. Master Pages can only be used in the Print context. See "Master Pages" on page 350 for an explanation of how to fill them and how to apply them to different pages. Using stationery (Media) When the output of a Print context is meant to be printed on paper that already has graphical and text elements on it (called stationery, or preprinted sheets), you can add a copy of this media, in the form of a PDF file, to the Media folder. Media can be applied to pages in a Print section, to make them appear as a background to those pages. This ensures that elements added to the Print context will correspond to their correct location on the preprinted media. Note When both Media and a Master Page are used on a certain page, they will both be displayed on the Preview tab of the workspace, the Master Page being 'in front' of the Media and the Print section on top. To open the Preview tab, click it at the bottom of the Workspace or select View > Preview View on the menu. See "Media" on page 353 for a further explanation about how to add Media and how to apply them to different pages. Note: The Media will not be printed, unless this is specifically requested through the printer settings; see "Generating Print output" on page 956. Copy Fit Copy Fit is a feature to scale text to the available space, the name of a person on a greeting card for example, or the name of a product on a shelf talker. This feature is only available with Box and Div elements in Print sections. For more information about this feature see "Copy Fit" on page 566. Adding a Print section The Print context can contain multiple sections: a covering letter and a policy, for example, or one section that is meant to be attached to an email as a PDF file and another one that is meant Page 336 to be printed out on paper. When a Print template is created (see "Creating a Print template with a Wizard" on page 327 and "Print context" on page 332), only one Print section is added to it, but you can add as many print sections as you need. 
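Besides adding sections manually (as described below), sections can also be added dynamically at merge time by cloning an existing section in a Control Script; see "Dynamically adding sections (cloning)" on page 655. A minimal sketch, assuming a Print section named "Section 1"; the clone name is illustrative, and the exact API is documented in "Control Script API" on page 930.

// Control Script: add a copy of an existing Print section at merge time
var printSections = merge.template.contexts.PRINT.sections;
var clone = printSections["Section 1"].clone();
clone.name = "Section 1 copy";           // must be unique within the context
printSections["Section 1"].addAfter(clone);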
To add a section to a context: l On the Resources pane, expand the Contexts folder, right-click the Print context , and then click New section. The first Master Page (see "Master Pages" on page 350) and Media (see "Media" on page 353) will automatically be applied to all pages in the new section, but this can be changed, see "Applying a Master Page to a page in a Print section" on page 352 and "Applying Media to a page in a Print section" on page 357. Tip Editing PDF files in the Designer is not possible, but when they're used as a section's background, you can add text and other elements, such as a barcode, to them. The quickest way to create a Print template based on a PDF file is to right-click the PDF file in the Windows Explorer and select Enhance with Connect. Alternatively, start creating a new Print template with a Wizard, using the PDF-based Print template (see "Creating a Print template with a Wizard" on page 327). To use a PDF file as background image for an existing section, see "Using a PDF file as background image" on page 339. Note Via a Control Script, sections can be added to a Print context dynamically; see "Dynamically adding sections (cloning)" on page 655. Deleting a Print section To delete a Print section: l On the Resources pane, expand the Contexts folder, expand the Print context, rightclick the name of the section, and then click Delete. Page 337 Warning No backup files are maintained in the template. The only way to recover a deleted section, is to click Undo on the Edit menu, until the deleted section is restored. After closing and reopening the template it is no longer possible to restore the deleted context this way. Arranging Print sections When generating output from the Print context, each of the Print sections is added to the output document, one after the other in sequence, for each record. The sections are added to the output in the order in which they appear on the Resources pane, so changing the order of the sections in the Print context changes the order in which they are outputted to the final document. To rearrange sections in a context: l l On the Resources pane, expand the Print context and drag and drop sections to change the order they are in. Alternatively, on the Resources pane, right-click a section in the Print context and click Arrange. In the Arrange Sections dialog you can change the order of the sections by clicking the name of a section and moving it using the Up and Down buttons. Styling and formatting a Print section The contents of a Print section can be formatted directly, or styled with Cascading Style Sheets (CSS). See "Styling and formatting" on page 551. In order for a style sheet to be applied to a specific section, it needs to be included in that section. There are two ways to do this. Drag & drop a style sheet 1. Click and hold the mouse button on the style sheet on the Resources pane. 2. Move the mouse cursor within the Resources pane to the section to which the style sheet should be applied. 3. Release the mouse button. Using the Includes dialog Page 338 1. On the Resources pane, right-click the section, then click Includes. 2. From the File types dropdown, select Stylesheets. 3. Choose which CSS files should be applied to this section. The available files are listed at the left. Use the arrow buttons to move the files that should be included to the list at the right. 4. You can also change the order in which the CSS files are read: click one of the included CSS files and use the Up and Down buttons. 
Note that moving a style sheet up in the list gives it less weight. In case of conflicting rules, style sheets read later will override previous ones.

Note
Style sheets are applied in the order in which they are included in a section. The styles in each following style sheet add up to the styles found in previously read style sheets. When style sheets have a conflicting rule for the same element, class or ID, the last style sheet 'wins' and overrides the rule found in the previous style sheet.

Using a PDF file as background image
In the Print context, a PDF file can be used as a section's background. It is different from the Media in that the section considers the PDF to be content, so the number of pages in the section will be the same as the number of pages taken from the PDF file.
With this feature it is possible to create a Print template from an arbitrary PDF file or from a PDF file provided by the DataMapper. Of course, the PDF file itself can't be edited in a Designer template, but when it is used as a section's background, text and other elements, such as a barcode, can be added to it.
To use a PDF file as background image:
1. On the Resources pane, expand the Print context, right-click the print section and click Background.
2. Click the downward pointing arrow after Image and select either From Datamapper input or From PDF resource.
From Datamapper input uses the active Data Mapping Configuration to retrieve the PDF file that was used as input file, or another type of input file, converted to a PDF file. With this option you don't need to make any other settings; click OK to close the dialog.
3. For a PDF resource, you have to specify where it is located. Clicking the Select Image button opens the Select Image dialog (see "Select Image dialog" on page 725). Click Resources, Disk or Url, depending on where the image is located.
   - Resources lists the images that are present in the Images folder on the Resources pane.
   - Disk lets you choose an image file that resides in a folder on a hard drive that is accessible from your computer. Click the Browse button to select an image. As an alternative it is possible to enter the path manually. The complete syntax is: file://<host>/<path to the image>. Note: if the host is "localhost", it can be omitted, resulting in file:///<path to the image>, for example: file:///c:/resources/images/image.jpg. Check the option Save with template to insert the image into the Images folder on the Resources pane.
   - Url allows you to choose an image from a specific web address. Select the protocol (http or https), and then enter the web address (for example, http://www.mysite.com/images/image.jpg).

Note
It is not possible to use a remotely stored PDF file as a section's background, because the number of pages in a PDF file cannot be determined via the HTTP and HTTPS protocols. Therefore, with an external image, the option Save with template is always checked.

4. Select the PDF's position:
   - Fit to page stretches the PDF to fit the page size.
   - Centered centers the PDF on the page, vertically and horizontally.
   - Absolute places the PDF at a specific location on the page. Use the Top field to specify the distance between the top side of the page and the top side of the PDF, and the Left field to specify the distance between the left side of the page and the left side of the PDF.
5. Optionally, if the PDF has more than one page, you can set the range of pages that should be used.
Note
The number of pages in the Print section is automatically adjusted to the number of pages in the PDF file that are used as the section's background image.

6. Finally, click OK.

Note
To set the background of a section in script, you need a Control Script; see "Control Scripts" on page 645 and "Control Script API" on page 930. (A minimal sketch is shown just before the "Pages" topic below.)

Setting the binding style for a Print section
In printing, Finishing is the binding style, or the way pages are bound together. Each Print section can have its own Finishing settings, as well as the Print context as a whole; see "Setting the binding style for the Print context" on page 334.
To set the binding style of a Print section:
1. On the Resources pane, expand the Contexts folder, expand the Print context and right-click the Print section.
2. Click Finishing.
3. Choose a Binding style and, if applicable, the number of holes.
To set the binding style of the Print context, see "Setting the binding style for the Print context" on page 334.

Overriding binding styles in a job creation preset
A Job Creation Preset can override the binding styles set for the Print sections and for the Print context as a whole. To bind output in another way than defined in the template's settings:
1. Create a Job Creation Preset that overrides the settings of one or more sections: select File > Presets and see "Job Creation Presets" on page 840 for more details.
2. Select that Job Creation Preset in the Print wizard; see "Generating Print output" on page 956.

Enabling double-sided printing (Duplex, Mixplex)
To print a Print section on both sides of the paper, that Print section needs to have the Duplex printing option enabled. This is an option in the Sheet Configuration dialog. (See "Sheet Configuration dialog" on page 726.)

Note
Your printer must support Duplex for this option to work.

To enable Duplex or Mixplex printing:
1. On the Resources pane, expand the Print context, right-click the print section and click Sheet configuration.
2. Check Duplex to enable content to be printed on the back of each sheet.
3. When Duplex printing is enabled, further options become available.
   - Check Omit empty back side for Last or Single sheet to reset a page to Simplex if it has an empty back side. Changing a Duplex job into a Mixplex job this way may reduce volume printing costs, as omitted back sides aren't included in the number of printed pages. Empty means that there is no content and no master page on that side. To suppress the master page on empty back sides and single sheets, uncheck the option Same for all positions and check the option Omit Master Page Back in case of an empty back page.
   - Check Tumble to duplex pages as in a calendar.
   - Check Facing pages to have the side margins switched alternately, so that after printing and binding the pages, they look like in a magazine or book. See "Pages" on the next page to find out how to set a left and right margin on a page.
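As noted above under "Using a PDF file as background image", a section's background can also be set from a Control Script instead of through the Background dialog. A minimal sketch follows; the section name and file name are illustrative, and the exact property and constant names should be checked against "Control Script API" on page 930.

// Control Script: use a PDF from the template's Images folder as the section background
var section = merge.template.contexts.PRINT.sections["Section 1"];
section.background.source = BackgroundResource.RESOURCE_PDF;
section.background.url = "images/stationery.pdf";   // illustrative resource name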
Although generally the same content elements can be used in all three contexts (see "Content elements" on page 465), the specific characteristics of pages make it possible to use special elements, such as page numbers; see "Page numbers " on page 345. The widow/orphan setting lets you control how many lines of a paragraph stick together, when content has to move to another page; see "Preventing widows and orphans" on page 347. You can also avoid or force a page break before or after an entire element, see "Page breaks" on page 349. Each page in a print section has a natural position: it is the first page, the last page, a 'middle' page (a page between the first and the last page) or a single page. For each of those positions, a different Master Page and Media can be set. A Master Page functions as a page's background, with for example a header and footer. A Media represents preprinted paper that a page can be printed on. See "Master Pages" on page 350 and "Media" on page 353. Page specific content elements The specific characteristics of pages make it possible to use these special elements: l l l Page numbers can only be used in a Print context. See "Page numbers " on page 345 to learn how to add and change them. Conditional content and dynamic tables, when used in a Print section, may or may not leave an empty space at the bottom of the last page. To fill that space, if there is any, an image or advert can be used as a whitespace element; see "Whitespace elements: using optional space at the end of the last page" on the facing page. Dynamic tables can be used in all contexts, but transport lines are only useful in a Print context; see "Dynamic table" on page 618. Positioning and aligning elements Sometimes, in a Print template, you don't want content to move up or down with the text flow. To prevent that, put that content in a Positioned Box. See "Content elements" on page 465. Page 343 When it comes to positioning elements on a page, Guides can be useful, as well as Tables. See "How to position elements" on page 567. Page settings: size, margins and bleed On paper, whether it is real or virtual, content is naturally limited by the page size and margins. These, as well as the bleed, are set per Print section, as follows: l On the Resources pane, right-click a section in the Print context and click Properties. For the page size, click the drop-down to select a page size from a list of common paper sizes. Changing the width or height automatically sets the page size to Custom. Margins define where your text flow will go. Static elements can go everywhere on a page, that is to say, within the printable space on a page that depends on the printer. The bleed is the printable space around a page. It can be used on some printers to ensure that no unprinted edges occur in the final trimmed document. Note: Printers that can’t print a bleed, will misinterpret this setting. Set the bleed to zero to avoid this. Tip By default, measurements settings are in inches (in). You could also type measures in centimeters (add 'cm' to the measurement, for example: 20cm) or in millimeters (for example: 150mm). To change the default unit for measurement settings to centimeters or millimeters: on the menu, select Window > Preferences > Print > Measurements. Whitespace elements: using optional space at the end of the last page Print sections with conditional content and dynamic tables (see "Personalizing Content" on page 592) can have a variable amount of space at the bottom of the last page. 
It is useful to fill the empty space at the bottom with transpromotional material, but of course you don’t want extra pages created just for promotional data. 'Whitespace elements' are elements that will only appear on the page if there is enough space for them. To convert an element into a whitespace element: Page 344 1. Import the promotional image or snippet; see "Images" on page 537 and "Snippets" on page 548. 2. Insert the promotional image or snippet in the content. Note l l Only a top-level element (for example, a paragraph that is not inside a table or div) can function as a whitespace element. Do not place the promotional image or snippet inside an absolute positioned box. Whitespacing only works for elements that are part of the text flow, not for absolute-positioned boxes. 3. Select the image or the element that holds the promotional content: click it, or use the breadcrumbs, or select it on the Outline tab; see "Selecting an element" on page 469. 4. On the Attributes pane, check the option Whitespace element. 5. (Optional.) Add extra space at the top of the element: on the menu Format, click the option relevant to the selected element (Image for an image, Paragraph for a paragraph, etc.) and adjust the spacing (padding and/or margins). Do not add an empty paragraph to provide space between the whitespace element and the variable content. The extra paragraph would be considered content and could end up on a separate page, together with the whitespace element. Page numbers Inserting page numbers Page numbers can be added to a Print section, but they are usually added to a Master Page, because headers and footers are designed on Master Pages; see also: "Master Pages" on page 350. To insert a page number, select Insert > Special character > Markers on the menu, and then click one of the options to decide with what kind of page number the marker will be replaced: l l Page number: The current page number in the document. If a page is empty or does not display a page number, it is still added to the page count. Page count: The total number of pages in the document, including pages with no contents or without a page number. Page 345 l l l l Content page number: The current page number in the document, counting only pages with contents that are supplied by the Print section. A page that has a Master Page (as set in the Sheet Configuration dialog, see "Applying a Master Page to a page in a Print section" on page 352) but no contents, is not included in the Content page count. Content page count: This is the total number of pages in the current document that have contents, supplied by the Print section. A page that has a Master Page but no contents, is not included in the Content page count. Sheet number: The current sheet number in the document. A sheet is a physical piece of paper, with two sides (or pages). This is equivalent to half the page number, for example if there are 10 pages, there will be 5 sheets. Sheet count: This marker is replaced by the total number of sheets in the document, whether or not they have contents. Note When a marker is inserted, a class is added to the element in which the marker is inserted. Do not delete that class. It enables the software to quickly find and replace the marker when generating output. The respective classes are: pagenumber, pagecount, contentpagenumber, contentpagecount, sheetnumber, and sheetcount. 
Tip
Instead of page numbers, you might want to display the current record index and/or the total number of records in the record set, in the document. There is a How-to that explains how to do that: How to get the record index and count.

Creating a table of contents
A table of contents can only be created in a template script. The script should make use of the pageRef() function. For an example, see "Creating a table of contents" on page 900. If you don't know how to write a script, see "Writing your own scripts" on page 624.

Configuring page numbers
By default the page numbers are Arabic numerals (1, 2, 3, etc.) without leading zeros or a prefix, and page numbering starts with page 1 for each section. This can be changed as follows:
1. On the Resources pane, right-click a section in the Print context and click Numbering.
2. Uncheck Restart Numbering if you want this section's pages to continue the numbering of the previous section, instead of restarting the page numbering with this section.

Note
Even if a section is disabled, so it doesn't produce any output, this setting is still taken into account for the other sections. This means that if Restart Numbering is checked on a disabled section, the page numbering will be restarted on the next section. Disabling a section can only be done in a Control Script (see "Control Scripts" on page 645). Control Scripts can also change where page numbers restart.

3. Use the Format drop-down to select uppercase or lowercase letters or Roman numerals instead of Arabic numerals.
4. In Leading Zeros, type zeros to indicate how many digits the page numbers should have. Any page number that has fewer digits will be preceded by leading zeros.
5. Type the Number prefix. Optionally, check Add Prefix to Page Counts to add the prefix to the total number of pages, too.
6. Close the dialog.

Preventing widows and orphans
A widow is the last line of a paragraph, left dangling at the top of a page; an orphan is the first line of a paragraph, left behind at the bottom of a page, separated from the rest of the paragraph. By default, to prevent orphans and widows, lines are moved to the next page as soon as two lines get separated from the rest of the paragraph. The same applies to list items (in unordered, numbered and description lists).
The number of lines that should be considered a widow or orphan can be changed for the entire Print context, per paragraph and in tables.

Note
Widows and orphans are ignored if the page-break-inside property of the paragraph is set to avoid; see "Preventing a page break" on page 350.

In the entire Print context
To prevent widows and orphans in the entire Print context:
1. On the menu, select Edit > Stylesheets.
2. Select the Print context.
3. Click New (or, when there are already CSS rules for paragraphs, click the selector p and click Edit).
4. Click Format.
5. After Widows and Orphans, type the minimum number of lines that should be kept together.
Alternatively, manually set the widows and orphans properties in a style sheet:
1. Open the style sheet for the Print context: on the Resources pane, expand the Styles folder and double-click context_print_styles.css.
2. Add a CSS rule, like the following: p { widows: 4; orphans: 3 }

Per paragraph
To change the widow or orphan setting for one paragraph only:
1. Open the Formatting dialog. To do this, you can:
   - Select the paragraph using the breadcrumbs or the Outline pane (next to the Resources pane) and then select Format > Paragraph in the menu.
   - Right-click the paragraph and select Paragraph...
from the contextual menu.
2. After Widows and Orphans, type the minimum number of lines that should be kept together.

In tables
The CSS properties widows and orphans can be used in tables to prevent a number of rows from being separated from the rest of the table. Detail tables are automatically divided over several pages when needed. A Standard Table doesn't flow over multiple pages by default. Splitting a Standard Table over multiple pages requires setting the Connect-specific data-breakable attribute on all of its rows. You can either open the Source tab, or write a script, to replace each <tr> with <tr data-breakable>. Note that the effect will only be visible in Preview mode.
To set the number of widows and orphans for a table:
1. Open the Formatting dialog. To do this, you can:
   - Select the table using the breadcrumbs or the Outline pane (next to the Resources pane) and then select Format > Table in the menu.
   - Right-click the table and select Table... from the contextual menu.
2. After Widows and Orphans, type the minimum number of table rows that should be kept together.

Page breaks
A page break occurs automatically when the contents of a section don't fit on one page.

Inserting a page break
To insert a page break before or after a certain element, set the page-break-before property or the page-break-after property of that element (a paragraph for example; see also "Styling text and paragraphs" on page 562):
1. Select the element (see "Selecting an element" on page 469).
2. On the Format menu select the respective element to open the Formatting dialog.
3. In the Breaks group, set the before or after property.
   - Before: Sets whether a page break should occur before the element. This is equivalent to the page-break-before property in CSS; see CSS page-break-before property for an explanation of the available options.
   - After: Sets whether a page break should occur after the element. Equivalent to the page-break-after property in CSS; see CSS page-break-after property for an explanation of the available options.
Click the button Advanced to add CSS properties and values to the inline style tag directly.
Alternatively you could set this property on the Source tab in the HTML (for example: <p style="page-break-before: always;">
), or add a rule to the style sheet; see "Styling your templates with CSS files" on page 556.

Note
You cannot use these properties on an empty <div> or on absolute-positioned elements.

Preventing a page break
To prevent a page break inside a certain element, set the page-break-inside property of that element to avoid:
- Select the element (see "Selecting an element" on page 469).
- On the Format menu, select the respective element to open the Formatting dialog.
- In the Breaks group, set the inside property to avoid, to prevent a page break inside the element. For an explanation of all available options of the page-break-inside property in CSS, see CSS page-break-inside property.
Alternatively you could set this property on the Source tab in the HTML (for example: <p style="page-break-inside: avoid;">
), or add a rule to the style sheet; see "Styling your templates with CSS files" on page 556.

Adding blank pages to a section
How to add a blank page to a section is described in a how-to: Create blank page on field value.

Master Pages
In Print sections, there are often elements that need to be repeated across pages, like headers, footers and logos. In addition, some elements should appear only on specific pages, such as only the first page, or the last page, or only on pages in-between. Examples are a different header on the first page, and a tear-off slip that shows up on the last page. This is what Master Pages are used for. Master Pages can only be used in the Print context (see "Print context" on page 332).
Master Pages resemble Print sections, and they are edited in much the same way (see "Editing a Master Page" on the next page), but they contain a single page and do not have any text flow. Only one Master Page can be applied per page in printed output.
When a Print template is created, one master page is added to it automatically. You can add more Master Pages; see "Adding a Master Page" below. Initially, the original Master Page will be applied to all pages, but different Master Pages can be applied to different pages; see "Applying a Master Page to a page in a Print section" on the facing page.

Examples
There are a few How-tos that demonstrate the use of Master Pages:
- Showing a Terms and Conditions on the back of the first page only.
- A tear-off section on the first page of an invoice.
- Tips and tricks for Media and Master Pages.

Adding a Master Page
When a Print template is created, one master page is added to it automatically. Adding more Master Pages can be done as follows:
- On the Resources pane, right-click the Master pages folder and click New Master Page.
- Type a name for the master page.
- Optionally, set the margin for the header and footer. See "Adding a header and footer" on the facing page.
- Click OK.
Initially, the master page that has been created together with the Print context will be applied to all pages in the Print section. After adding more Master Pages, different Master Pages can be applied to different pages; see "Applying a Master Page to a page in a Print section" on the facing page.

Editing a Master Page
Master Pages are edited just like sections, in the workspace. To open a Master Page, expand the Master pages folder on the Resources pane, and double-click the Master Page to open it.
A Master Page can contain text, images and other elements (see "Content elements" on page 465), including variable data and dynamic images (see "Personalizing Content" on page 592). All elements on a Master Page should have an absolute position or be inside an element that has an absolute position. It is good practice to position elements on a Master Page by placing them in a Positioned Box (see "Content elements" on page 465).
Keep in mind that a Master Page always remains a single page. Its content cannot overflow to a next page. Content that doesn't fit will not be displayed.

Note
Editing the Master Page is optional. One Master Page must always exist in a Print template, but if you don't need it, you can leave it empty.

Adding a header and footer
Headers and footers are not designed as part of the contents of a Print section, but as part of a Master Page, which is then applied to a page in a print section.
To create a header and footer:
1.
First insert elements that form the header or footer, such as the company logo and address, on the Master Page; see "Editing a Master Page" on the previous page. 2. Next, define the margins for the header and footer. The margins for a header and footer are set in the Master Page properties. This does not change the content placement within the Master Page itself; in Master Pages, elements can go everywhere on the page. Instead, the header and footer of the Master Page limit the text flow on pages in the Print sections to which this Master Page is applied. Pages in a Print section that use this Master Page cannot display content in the space that is reserved by the Master Page for the header and footer, so that content in the Print section does not collide with the content of the header and footer. To set a margin for the header and/or footer: 1. On the Resources pane, expand the Master pages folder, right-click the master page, and click Properties. 2. Fill out the height of the header and/or the footer. The contents of a print section will not appear in the space reserved for the header and/or footer on the corresponding master page. 3. Finally, apply the master page to a specific page in a print section. See "Applying a Master Page to a page in a Print section" below. Applying a Master Page to a page in a Print section Every page in a print section has a natural position: it can be the first page, the last page, one of the pages in between (a 'middle page'), or a single page. For each of those positions, you can Page 352 set a different Master Page and Media (see "Media" below). It can even have two master pages, if printing is done on both sides (called duplex printing). To apply Master Pages to specific page positions in a Print section: 1. On the Resources pane, expand the Print context; right-click the Print section, and click Sheet configuration. 2. Optionally, check Duplex to enable content to be printed on the back of each sheet. Your printer must support duplex for this option to work. If Duplex is enabled, you can also check Tumble to duplex pages as in a calendar, and Facing pages to have the margins of the section switch alternately, so that pages are printed as if in a magazine or book. 3. If the option Same for all positions is checked, the same Master Page will be applied to every page in the print section (and to both the front and the back side of the page if duplex printing is enabled). Uncheck this option. 4. Decide which Master Page should be linked to which sheet (position): click the downward pointing arrow after Master Page Front and select a Master Page. If Duplex is enabled, you can also select a Master Page for the back of the sheet and consequently, check Omit Master Page Back in case of an empty back page to omit the specified Master Page on the last backside of a section if that page is empty and to skip that page from the page count. 5. Optionally, decide which Media should be linked to each sheet. 6. Click OK to save the settings and close the dialog. Deleting a Master Page To delete a Master Page, expand the Master pages folder on the Resources pane, right-click the master page, and click Delete. Note that one Master Page as well as one Media must always exist in a Print template. Just leave it empty if you don't need it. 
Media When the output of a Print context is meant to be printed on paper that already has graphical and text elements on it (called stationery, or preprinted sheets), you can add a copy of this media, in the form of a PDF file, to the Media folder. Page 353 Media can be applied to pages in a Print section, to make them appear as a background to those pages. This ensures that elements added to the Print context will correspond to their correct location on the preprinted media. For further explanation about how to apply Media to different pages, see "Applying Media to a page in a Print section" on page 357. Media will not be printed, unless you want them to; see below. Per Media, a front and back can be specified and you can specify on what kind of paper the output is meant to be printed on. This includes paper weight, quality, coating and finishing; see "Setting Media properties" below. Adding Media To add a Media, right-click the Media folder on the Resources pane and select New Media. The new Media is of course empty. You can specify two PDF files for the Media: one for the front, and, optionally, another for the back. Specifying and positioning Media Specifying a PDF for the front: the fast way To quickly select a PDF file for the front of a Media, drag the PDF file from the Windows Explorer to one of the Media. The Select Image dialog opens; select an image and check the option Save with template if you want to insert the image into the Images folder on the Resources pane. (For PDF files selected by URL this option is always checked.) Alternatively you could first import the PDF file to the Images folder on the Resources pane (using drag & drop) and drag it from there on one of the Media in the Media folder. Either way, you cannot set any options. To be able to specify a PDF file for both the front and the back of the Media, and to specify a position for the Media's PDF files, you have to edit the properties of the Media. Setting Media properties Media have a number of properties that you can set. You can change the Media's page size and margins (as long as it isn't applied to a section), you can specify a PDF file (or any other Page 354 type of image file) for both the front and the back of the Media, and you can determine how the virtual stationery should be positioned on the page. This is done as follows: 1. On the Resources pane, expand the Contexts folder, expand the Media folder, rightclick the Media and click Properties. 2. Now you can change the name and page size of the Media. Note that it isn't possible to change the page size once the Media is applied to a section. Media can only be applied to sections that have the same size. 3. On the Virtual Stationery tab, you can click the Select Image button to select a PDF image file. 4. Click Resources, Disk or Url, depending on where the image is located. l l l Resources lists the PDF files that are present in the Images folder on the Resources pane. Disk lets you choose an image file that resides in a folder on a hard drive that is accessible from your computer. Click the Browse button to select an image. As an alternative it is possible to enter the path manually. The complete syntax is: file:///. Note: if the host is "localhost", it can be omitted, resulting in file:///, for example: file:///c:/resources/images/image.jpg. Check the option Save with template to insert the image into the Images folder on the Resources pane. Url allows you to choose an image from a specific web address. 
Select the protocol (http or https), and then enter the web address (for example, http://www.mysite.com/images/image.jpg).

Note
It is not possible to use a remotely stored PDF file as virtual stationery, because the number of pages in a PDF file cannot be determined via the HTTP and HTTPS protocols. Therefore, with an external image, the option Save with template is always checked.

5. Select a PDF file.
6. If the PDF file consists of more than one page, select the desired page.
7. Click Finish.
8. For each of the PDF files, select a position:
   - Fit to page stretches the PDF to fit the page size.
   - Centered centers the PDF on the page, vertically and horizontally.
   - Absolute places the PDF at a specific location on the page. Use the Top field to specify the distance between the top side of the page and the top side of the PDF, and the Left field to specify the distance between the left side of the page and the left side of the PDF.
9. Finally, click OK.

Setting the paper's characteristics
To set a Media's paper characteristics:
1. On the Resources pane, expand the Contexts folder, expand the Media folder, and right-click the Media. Click Characteristics.
2. Specify the paper's characteristics:
   - Media Type: The type of paper, such as Plain, Continuous, Envelope, Labels, Stationery, etc.
   - Weight: The intended weight of the media in grammage (g/m²).
   - Front Coating: The pre-process coating applied to the front surface of the media, such as Glossy, High Gloss, Matte, Satin, etc.
   - Back Coating: The pre-process coating applied to the back surface of the media.
   - Texture: The intended texture of the media, such as Antique, Calendered, Linen, Stipple or Vellum.
   - Grade: The intended grade of the media, such as Gloss-coated paper, Uncoated white paper, etc.
   - Hole Name: A predefined hole pattern that specifies the pre-punched holes in the media, such as R2-generic, R2m-MIB, R4i-US, etc.
3. Click OK.

Rename Media
To rename Media:
- On the Resources pane, expand the Contexts folder, expand the Media folder, right-click the Media and click Rename. Type the new name and click OK.
- Alternatively, on the Resources pane, expand the Contexts folder, expand the Media folder, right-click the Media and click Properties. Type the new name in the Name field and click OK.

Applying Media to a page in a Print section
Every page in a print section has a natural position: it can be the first page, the last page, one of the pages in between (a 'middle page'), or a single page. For each of those positions, you can set different Media.
To apply Media to specific page positions in a Print section:
1. On the Resources pane, expand the Print context; right-click the Print section, and click Sheet configuration.
2. Optionally, check Duplex to enable content to be printed on the back of each sheet. Your printer must support duplex for this option to work. If Duplex is enabled, you can also check Tumble to duplex pages as in a calendar, and Facing pages to have the margins of the section switch alternately, so that pages are printed as if in a magazine or book.
3. If the option Same for all positions is checked, the same Media will be applied to every page in the print section. Uncheck this option.
4. Decide which Media should be linked to each sheet position: click the downward pointing arrow after Media and select a Media.
5. Optionally, decide which Master Page should be linked to each sheet; see "Master Pages" on page 350.
Note When both Media and a Master Page are used on a certain page, they will both be displayed on the Preview tab of the workspace, the Master Page being 'in front' of the Media and the Print section on top. To open the Preview tab, click it at the bottom of the Workspace or select View > Preview View on the menu. Page 357 Dynamically switching the Media In addition to applying Media to sheets via the settings, it is possible to change Media dynamically, based on a value in a data field, in a script. The script has already been made; you only have to change the name of the Media and the section in the script, and write the condition on which the Media has to be replaced. 1. On the Resources pane, expand the Contexts folder, expand the Print context, rightclick the print section and click Sheet configuration. 2. Decide which pages should have dynamically switching media: every first page in the Print section, every last page, one of the pages in between (a 'middle page'), or a single page. (Uncheck the option Same for all positions, to see all page positions.) 3. In the area for the respective sheet position, click the Edit script button next to Media. The Script Wizard appears with a standard script: results.attr("content","Media 1"); Media 1 will have been replaced with the name of the media selected for the chosen sheet position. The field Selector in the Script Wizard contains the name of the section and the sheet position that you have chosen. 4. Change the script so that on a certain condition, another media will be selected for the content. For instance: if(record.fields.GENDER === 'M') { results.attr("content","Media 2"); } This script changes the media to Media 2 for male customers. See "Writing your own scripts" on page 624 if you are not familiar with how scripts are written. 5. Click Apply, open the tab Preview and browse through the records to see if the script functions as expected. 6. When you click OK, the script will be added to the Scripts pane. Rotating the Media in a Print section The actual orientation of the Media and that of a section to which the Media is applied may not match. The Media can therefore be rotated per Print section: Page 358 l l On the Resources pane, expand the Print context; right-click the Print section, and click Sheet configuration. Click one of the options next to Media rotation. The Media (to be more accurate: the Virtual Stationery images specified for this Media) as well as the section's background image will be rotated accordingly in the entire section. Note that any Virtual Stationery settings made for the Media also influence how the Media is displayed in each section (see "Setting Media properties" on page 354). If in the Media properties, the Virtual Stationery position is set to Absolute, any offset given by the Top and Left values will be applied after rotation. A Virtual Stationery image located absolutely at the top left (Top: 0, Left: 0) will still appear at the top left of the page after rotating the Media. Printing virtual stationery Media are not printed, unless you want them to. Printing the virtual stationery is one of the settings in a Job Creation Preset. To have the virtual stationery printed as part of the Print output: 1. Create a job creation preset that indicates that Media has to be printed: select File > Presets and see "Job Creation Presets" on page 840 for more details. 2. Select that job creation preset in the Print Wizard; see "Generating Print output" on page 956. 
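The script shown above under "Dynamically switching the Media" extends naturally to more than two Media. A minimal sketch, assuming Media named "Media 1", "Media 2" and "Media 3" and a hypothetical data field REGION; as in the example above, the script is opened via the Edit script button next to Media in the Sheet Configuration dialog.

// Switch the Media for this sheet position based on a data field
if (record.fields.REGION === 'EU') {
    results.attr("content", "Media 2");
} else if (record.fields.REGION === 'APAC') {
    results.attr("content", "Media 3");
}
// Any other value keeps the Media selected in the Sheet Configuration dialog ("Media 1").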
Email
With the Designer you can create one or more Email templates and merge the template with a data set to generate personalized emails.
The Email context is the folder in the Designer that can contain one or more Email templates, also called Email sections. The HTML generated by this context is meant to be compatible with as many clients and as many devices as possible.

Email template
It is strongly recommended to start creating an Email template with a Wizard; see "Creating an Email template with a Wizard" on page 364. Designing HTML email that displays properly on a variety of devices and screen sizes is challenging. Building an email is not like building for the web. While web browsers comply with standards (to a significant extent), email clients do not. Different email clients interpret the same HTML and CSS styles in totally different ways.
When an Email template is created, either with a Wizard or by adding an Email context to an existing template (see "Adding a context" on page 321), the Email context folder is created along with other files that are specific to an Email context; see "Email context" on page 368. Only one Email section is created at the start, but you can add as many Email sections as you need; see "Email templates" on page 370. However, when the Designer merges a data set to generate output from the Email context, it can merge only one of the templates with each record; see "Generating Email output" on page 973.
Email templates are personalized just like any other template; see "Variable Data" on page 604.

Sending email
When the template is ready, you can change the email settings (see "Email header settings" on page 373) and send the email directly from the Designer or via Workflow. To test a template, you can send a test email first.
Output generated from an Email template can have the following attachments:
- The contents of the Print context, in the form of a single PDF attachment.
- The output of the Web context, as an integral HTML file.
- Other files, an image or a PDF leaflet for example.
Attaching the Print context and/or the Web context is one of the options in the Send (Test) Email dialog. See "Email attachments" on page 379 and "Generating Email output" on page 973.

Designing an Email template
With the Designer you can design Email templates. It is strongly recommended to start creating an Email template with an Email Template Wizard, because it is challenging to design HTML email that looks good on all the email clients, devices and screen sizes that customers use when they are reading their email.
This topic explains why designing HTML email is as challenging as it is, which solutions are used in the Email Template Wizards, and it lists good practices, for example regarding the use of images in HTML email. It will help you to create the best possible Email templates in the Designer.

HTML email challenges
Creating HTML email isn't like designing for the Web. That's because email clients aren't like web browsers. Email clients pass HTML email through a preprocessor to remove anything that could be dangerous, introduce privacy concerns or cause the email client to behave unexpectedly. This includes removing javascript, object and embed tags, and unrecognized tags. Most preprocessors are overly restrictive and remove anything with the slightest potential to affect the layout of their email client. Next, the HTML has to be rendered so that it is safe to show within the email client. Unfortunately, desktop, webmail, and mobile clients all use different rendering engines, which support different subsets of HTML and CSS. More often than not, the result of these operations is that they completely break the HTML email's layout.

Designing HTML email in PlanetPress Designer
The problem of HTML email is that preprocessing and rendering engines break the HTML email's layout. HTML tables, however, are mostly left untroubled. As they are supported by every major email client, they are pretty much the only way to design HTML emails that are universally supported. That's why Tables are heavily used to position text and images in HTML email. Nesting tables (putting tables in table cells) and applying CSS styles to each table cell to make the email look good on all screen sizes is precision work that can be tedious and demanding. Connect's Designer offers the following tools to make designing HTML email easier.
Unfortunately, desktop, webmail, and mobile clients all use different rendering engines, which support different subsets of HTML and CSS. More often than not, the result of these operations is that they completely break the HTML email's layout. Designing HTML email in PlanetPressDesigner The problem of HTML email is that preprocessing and rendering engines break the HTML email's layout. HTML tables, however, are mostly left untroubled. As they are supported by every major email client, they are pretty much the only way to design HTML emails that are universally supported. That's why Tables are heavily used to position text and images in HTML email. Nesting tables (putting tables in table cells) and applying CSS styles to each table cell to make the email look good on all screen sizes is a precision work that can be a tedious and demanding. Connect's Designer offers the following tools to make designing HTML email easier. Page 361 Email templates: Slate and others The most obvious solution offered in the Designer is to use one of the templates provided with the Designer; see "Creating an Email template with a Wizard" on page 364. The layout of these templates has been tested and proven to look good in any email client, on any device and screen size. The Tables in these templates are nested (put inside another table) and they have no visible borders, so readers won't notice them. Tip Click the Edges button on the toolbar to make borders of elements visible on the Design tab. The borders will not be visible on the Preview tab or in the output. Emmet Emmet is a plugin that enables the lightning-fast creation of HTML code though the use of a simple and effective shortcut language. The Emmet functionality is available in the HTML and CSS source editors of Connect Designer. Emmet transforms abbreviations for HTML elements and CSS properties to the respective source code. The expansion of abbreviations is invoked with the Tab key. In the Source tab of the Workspace, you could for example type div.row. This is the abbreviation for a
<div> element with the class row. On pressing the Tab key, this abbreviation is transformed to:
<div class="row"></div>
To quickly enter a table with the ID 'green', one row, and two cells in that row, type: table#green>tr>td*2
On pressing the Tab key, this is transformed to:
<table id="green">
    <tr>
        <td></td>
        <td></td>
    </tr>
</table>
All standard abbreviations can be found in Emmet's documentation: Abbreviations.
To learn more about Emmet, please see their website: Emmet.io and the Emmet.io documentation: http://docs.emmet.io/.

Preferences
To change the way Emmet works in the Designer, select Window > Preferences, and in the Preferences dialog, select Emmet; see "Emmet Preferences" on page 706.

Using CSS files with HTML email
Email clients do not read CSS files and some even remove a