EMA User Guide 1.6.0
EMA USER GUIDE
Making Migration Easier and Faster
EMA TEAM
[1.6.0]

PRODUCT DESCRIPTION GUIDE

Table of Contents

INTRODUCTION
  Migrating to a D2 Environment
INSTALLING EMA
EXTRACTION
  Extraction from 3rd party systems
TRANSFORMATION
  Combined JavaScript Approach
  Java Approach
  Properties File Approach (Deprecated)
  Creating a Custom Transform
  JavaScript Approach (Deprecated)
    Script Structure
    Anatomy of a Transformation Script
    Building Blocks
    Reusing DB lookups and Custom Mappers
    Accessing the EMA API from a JavaScript Transformation
INGESTION
DOCUMENTUM DELTA MIGRATIONS
  Preparing for Delta Migrations
  Delta Extraction
  Delta Transformation
  Delta Ingestion
  Delta File Copy
EMA-API
  File System Adaptor
CLONER
EMA-TOOLS
  Morph
  FileCopier
  Replatform
  Folder Structure Generator
  Link Count Update
  DataVerifier
  Compare
  Default File Creator
  Type Extractor
  Audit Trail Extractor
  User Group Extractor
  ACLExtractor
  ExtractFileList
  Encrypt Utils
  Content Migrator
  Reference Update (Beta)
  Connectivity Checker
REPORT AND CONSISTENCY CHECKING
  Consistency-Report SQL Server Database
  Consistency-Report Oracle Database
TIPS & TRICKS
  Log4j logging
    Using your custom log4j.properties file
    Sample log4j.properties file
  MongoDB Basics
  Customize Extractor XML file for your project
  Performance Troubleshooting
  Automation
  Scripting
TROUBLESHOOTING
FAQ


INTRODUCTION
Definition

EMA (Documentum Enterprise Migration Appliance) is a suite of tools that enables a consultant to
move data, content, and even whole repositories from point to point in the enterprise or to the cloud.
- Check the training recordings: https://inside.emc.com/groups/ecdarchitects/blog/2016/03/17/updated-ema-training-recordings-available?sr=stream
- Check the training simulations in the "EMA1.6.0\Training" folder

Components

EMA-Cloner – Used when migrating an entire repository and there is a database change from
Oracle to SQL Server.
EMA-Migrate – Used when you have a typical ETL requirement and you are migrating parts of a
repository rather than the entire repository.
EMA-API – Java APIs to help you build custom adaptors that extract data from a third-party
source, such as a CSV file, YAML file, or database, where Documentum is not the source system.
EMA-Tools – A collection of utilities:

  Morph – Used to do mass object type changes within a repository. Object IDs remain
  the same, in-flight workflows retain their state, and the audit trail stays intact.

  Replatform – Used to update hostname entries in configuration settings stored inside
  the repository. Can also be used to modify configurations when moving from Unix to
  Windows and vice versa.

  File Copier – Used for content copy when the content has not already been copied
  before Ingestion is run. Picks up the FileList generated by the Ingestor and runs a
  multi-threaded copying process.

  Transform – Used when the data is not to be migrated as-is and needs to be modified
  per business requirements.

  Link Count Update – Tool to update the link count of a folder.
    - Required when a transformation moves documents to different folders.
    - Not required when documents are not moved to new folders as part of the
      transformation, or if the D2-Core job will be used to apply auto-linking rules.

  Folder Structure Generator – Used to generate a folder structure based on a simple
  text file containing a list of folders. A sample file "folderListSample.txt" is provided
  in the samples directory.

  Compare – Used to compare an object pre/post-transformation. New, deleted, and
  modified properties are displayed in the output with before/after values.

  Data Verifier – Used to test the compatibility of a MongoDB database with the target
  DB schema into which it is intended to be ingested. This is quicker and more efficient
  than running dry-runs until all INSERTs pass.

  Default File Creator – Used to generate default files (required during Ingestion) for
  the types specified.

  Type Extractor – Used to extract the types present in the source system. It generates
  a DQL file which can be run in the target system to create the corresponding types.

  Audit Trail Extractor – Used to extract the audit trail from the system.

  User Group Extractor – Used to extract the users and groups in a particular system.

  Encrypt Utils – Used to encrypt passwords; can be used in all EMA components
  wherever passwords are used.

  Content Migrator – Helps in moving data from Centera to Isilon.

  Reference Update – Updates references to relations etc. when everything is not
  migrated together.

  Connectivity Checker – Checks connectivity and authentication for source/target
  databases, MongoDB, Content Server, and file shares.

Planning your Migration

EMA-Migrate Considerations

Batch Segmentation
There is no technical limit to the size of a batch migrated with EMA, except for storage
considerations. However, there are a couple of additional practical considerations:
- Long-running processes, if they fail, require a lot of time and effort to re-run.
  Typically we have set the size of a batch to around 1-2 million documents in most
  engagements.
- For very large datasets, we do not want to have hundreds of batches to execute. If we
  have 100 million documents to migrate, using 1-2 million document batches is probably
  near or over the limit of the number of batches that we want to manage, so we might
  increase the size of a batch to around 5 million objects.
Batch Approaches
1. Modify Date – This approach can be used when the data being partitioned does not
   have versions. We can split the data using r_modify_date and provide the ranges.
   Where there are versions of documents and we use the modify-date criterion, the
   partitioning could separate documents in a version tree into separate batches, which
   will cause complications during ingestion. This will be detected during extraction,
   as there is a version tree check. But if we choose to ignore the version tree check
   (using the --ivc option) and migrate batches with split version trees, we will need
   to ingest all batches in "delta" mode to ensure consistency of the version trees.
   This will cause the overall ingestion process to take longer, so it is not preferred:
   the preference is to perform the main ingestion using "ingest" mode, and use "delta"
   mode only for the final delta of the process.
2. Object Type – This approach is used when, after data analysis, you find that the
   data can be partitioned using object types. Different object types can be grouped
   together to form a batch.
   - Extract all the folders first in a batch and ingest them, e.g. run ExtractManager
     with: -wh "i_cabinet_id='0cXXX' and r_object_type IN
     ('dm_cabinet','dm_folder','custom_folder1','custom_folder2')"
   - Remember to dump the IDs during Ingestion using the --dump-ids option.
   - For all the subsequent batches, provide the object types, e.g. run ExtractManager
     with: -wh "i_cabinet_id='0cXXX' and r_object_type IN
     ('dm_document','custom_type1','custom_type2')"
   - Remember to use the preload DB option to load the new object IDs of the folders.
     The preload DB holds the old vs. new object_id mappings, both for reprocessing the
     same objects and for detecting already-ingested documents.

3. Chronicle ID – Similarly to object type, chronicle ID can be used, as it ensures
   that version trees stay together.
4. Other attribute – If after data analysis you find that another attribute (or custom
   attribute) partitions the data better, go ahead with it.
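The Modify Date approach can be sketched as a small loop that emits one where-clause per date range. This is a hypothetical sketch: the cabinet ID and the monthly ranges are placeholder values, and each emitted clause would be passed to ExtractManager via the -wh option described later in this guide.

```shell
# Hypothetical batch-clause generator for the Modify Date approach.
# CABINET_ID and the date ranges are placeholders, not real engagement values.
CABINET_ID='0c0004d28000b94b'
for range in "2013-01-01:2013-02-01" "2013-02-01:2013-03-01"; do
  from=${range%%:*}   # batch start date (inclusive)
  to=${range##*:}     # batch end date (exclusive)
  # Each line below is a candidate value for ExtractManager's -wh option.
  echo "(i_cabinet_id = '${CABINET_ID}' and r_modify_date >= convert(DATETIME,'${from}') and r_modify_date < convert(DATETIME,'${to}'))"
done
```

Keeping the upper bound exclusive and reusing it as the next batch's lower bound avoids gaps or overlaps between batches; the version-tree caveat above still applies.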

Data to Migrate

When we talk about the number of objects in a batch, we are typically concerned with the
number of sysobjects (e.g. documents, folders) and not with additional objects such as
relations, virtual document structures, ACLs, etc. If there are extreme numbers of these
other objects, it could affect our thinking, but typically such objects are very small and
therefore quickly ingested compared to sysobjects and their subtypes.

Typically, we use EMA-Migrate to move business data only. To move configuration data, use
DAR files and similar tools, such as D2-Config and xCP Designer. To this end, we usually do
NOT extract data from the System, Temp, Templates or Resources cabinets. In some specific
cases this may make sense, but please consult with the EMA team before you confirm it as
part of your approach to a migration.

The most common exception to this rule is the handling of deleted chronicle objects. A
chronicle object is the first version created in the system (typically, but not always,
version 1.0). When a user deletes a chronicle object, it is not immediately deleted from
the system, as is the case with any other version in the version tree. Instead, it is
marked with the flag i_is_deleted = True, which hides the object in the Documentum UIs, and
it is moved to the Temp cabinet. From an EMA perspective, if the where clause we define for
extraction does not match such deleted objects, and they exist, we will get an error due to
a failed "version check". If this happens, consider adjusting the where clause to include
deleted chronicle objects.

Delta Migrations

Mostly, we can expect to migrate about 20 million objects during a weekend with EMA. In
some cases we may need to stretch this time and potentially exceed the time limits imposed
by the customer for a "black-out" period. Then we will need to consider running a delta.
Follow these steps to run a delta:

Step 1 – Extraction: Divide the data into batches and run the extraction for each batch.
The division can be based on a date criterion, a range of object IDs, or anything else
that you believe will divide the data into multiple batches.

Step 2 – Transformation: Executed normally, without any changes.

Step 3 – Ingestion: Specify delta mode (--mode DELTA), and specify the Mongo DB used for
the initial migration as a preload DB source of ID mappings (--preload-dbs).
If multiple deltas are expected, plan to either:
- dump out the IDs used for each delta to a Mongo DB that is loaded each time a delta
  runs; or
- use a new DB for each delta, and add its name to the preload DBs list.
Multiple deltas should not be necessary in most cases; however, they may be needed where
the source system is still being actively used during a more complex transitional period.
Ensure that the data of the initial run is not deleted, as we will need those mappings
(old object ID to new object ID) again.
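Putting the ingestion step together: a sketch of what a delta invocation could look like. The Ingestor jar and class names here are placeholders (see the Ingestion chapter for the actual command); only the --mode DELTA and --preload-dbs flags come from the steps above, and the command is echoed rather than executed.

```shell
# Hypothetical delta ingestion command. EMAIngestor-1.6.0.jar,
# com.emc.ema.ingestor.IngestManager and ExtractorDB_initial are placeholders.
PRELOAD_DB="ExtractorDB_initial"   # Mongo DB with old->new ID mappings from the initial run
DELTA_CMD="java -cp \"EMAIngestor-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*\" com.emc.ema.ingestor.IngestManager --mode DELTA --preload-dbs ${PRELOAD_DB}"
echo "$DELTA_CMD"
```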

Storage Management

Plan how the content from the migration will be accommodated in the new system. If we plan
to use the same source storage in the target system, we are not required to migrate the
content: create filestores with the same names as in the source and change the storage
pointers to the same storage location.
Two main options exist when content is to be migrated:
1. "Merge Content" – merge the content into the filestore(s) existing in the target.
2. "Copy Content" – create "legacy" filestores in the target, and store migrated content
   there.

Merge Content
  Pros: Simpler content management going forward.
  Cons: Have to copy content post-migration.

Copy Content
  Pros: Can copy content ahead of migration, and even ahead of extraction; can re-use
        existing storage.
  Cons: New content for new versions is created in the same "legacy" stores.

As a rule of thumb, we would typically use the "Copy" option where volume is high (say in
the TBs), and the "Merge" option for smaller volumes (or where the data is coming from a
3rd party system).
Support for extern, ca and atmos filestores has also been added; content files present in
these filestores can now be migrated using EMA.

Check the PPT deck (@EMA SyncP): “KT_EMA – Migrate Content
Management” for more details.
Retention Policies

1. Retention policies need to be exported from the source system.
2. They are then imported into the target system.
3. The System cabinet needs to be extracted without any documents.

During ingestion, the aspect type and the attribute need to be provided so that a proper
target ID is mapped:
- Get the aspect type for the particular aspect name:
  select * from dmc_aspect_type where object_name = 'dmc_folder_markup_coordinator'
- Dump the r_object_id and get the value of i_attr_def.
- The properties file (IngestorProperties.properties) needs to be updated with the
  i_attr_def value. Edit/create the file as:
  id.dmi_0300000c800001e7.repeating=markup_retainer_id
- Remember to add the property file to the Ingestion command:
  -Doptions.default=<path to the properties file>
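The properties-file step above can be scripted as follows. This is a sketch: the file name and the id.dmi_... line are the example values from this section, and the i_attr_def id must be replaced with the value dumped from your own repository.

```shell
# Append the repeating-attribute mapping to the Ingestor properties file.
# The dmi_... id is this guide's example; substitute your own i_attr_def value.
PROPS_FILE="IngestorProperties.properties"
echo "id.dmi_0300000c800001e7.repeating=markup_retainer_id" >> "$PROPS_FILE"
cat "$PROPS_FILE"
```

The resulting file is then passed to the Ingestion command with -Doptions.default.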


Deployment

Plan out your EMA deployment depending on the amount of data you plan to migrate.

Option 1
[Diagram: the Extractor, Transform, Ingestor and MongoDB all run on one Migration
Appliance (System 1), between the source system and the target Documentum system.]

This option works well when you have fewer than 1 million documents to migrate in each
batch; for anything more than that, we suggest going with Option 2.
Option 2
[Diagram: the Extractor, Transform and Ingestor run on the Migration Appliance (System 1),
while MongoDB runs on a separate machine (System 2), between the source system and the
target Documentum system.]

When you have more than 1 million documents to migrate in each batch, we recommend this
option, where MongoDB is set up on a different system. This is done because MongoDB is
memory-hungry and can starve other components, such as the Ingestor, of memory.

Migrating to a D2 Environment
If the target system is a D2 system, consider migrating the documents to a temporary folder and
then running the OOB D2CoreJob. This moves the documents to the appropriate locations and applies
security based on the D2 configurations.
The OOB D2CoreJob can take a lot of time if the number of documents is huge, so Engineering
provides an alternative standalone utility that can be used for this purpose.
Contact the EMA team for more information on this.


INSTALLING EMA
Requirements

Software Requirements
- Windows Server 2012 R2 Standard 64-bit server, OR Windows Server 2008 R2 64-bit server.
  (We have implementations where consultants have used EMA in Linux environments as well,
  without any issues.)
- Java 8 SDK

Hardware Requirements
- CPU – 4
- RAM – 16 GB (if you have millions of documents being ingested, increase RAM to 32/64 GB)
- Disk space – 120 GB (depends on the size of the metadata being migrated)

Other Requirements
- SQL Developer / Toad / SQL Server Management Studio – to check the connectivity and
  credentials of the source and target databases, as well as to do data verification.
- Robomongo (0.9.x) for MongoDB management and analysis. If you use Eclipse, the MonjaDB
  plugin for Eclipse can be used instead.
- Database connectivity to both source and target, along with the superuser credentials.
- Documentum superuser authentication details for the target system (D7).
- Some familiarity with Mongo concepts and commands (see the MongoDB Basics section under
  Tips & Tricks).
- For additional transformation requirements not provided by EMA, you might need to write
  new transformations. We provide transformations in both Java and JavaScript, so
  knowledge of either would help.

Installation Steps

Step 1

Get the latest EMA package EMA1.6.0.zip and unzip the contents to a location, e.g. C:\EMA.
If you use a different location, the sample scripts and files will need to be modified.

Step 2

Install Hotfix KB2731284 (only on Windows Server 2008) –
Double-click on the hotfix KB2731284 installer file "Windows6.1-KB2731284-v3-x64.msu".
Restart your system after the installation.

Step 3

Install MongoDB
- If you are installing MongoDB on a separate machine, copy the
  "mongodb-win32-x86_64-2008plus-ssl-3.0.7-signed.msi" file along with the "mongod.cfg"
  file.
- Double-click on the MongoDB installer "mongodb-win32-x86_64-2008plus-ssl-3.0.7-signed.msi"
  file.

Step 4

Create folders named log and data inside “C:\Program Files\MongoDB\Server\3.0”


Step 5

Install MongoDB as a Service
Change the privileges of "C:\Program Files\MongoDB\Server\3.0\bin\mongod.exe" so that it
can be installed as a service: right-click on the executable and enable "Run this program
as an administrator".

In a cmd prompt, go to the Mongo installation directory
"C:\Program Files\MongoDB\Server\3.0\bin" and execute the command below:

mongod.exe --auth --config C:\EMA\EMA1.6.0\mongod.cfg --install

Start the Mongo service.


Step 6

Configure Authorization
Configure MongoDB for admin access by creating an "admin" user in the DB. You can use the
"createAdminUser.js" script for this.

C:\Program Files\MongoDB\Server\3.0\bin>mongo
MongoDB shell version: 3.0.7
connecting to: test
Welcome to the MongoDB shell. For interactive help, type "help".
For more comprehensive documentation, see http://docs.mongodb.org/
Questions? Try the support group http://groups.google.com/group/mongodb-user
> use admin
switched to db admin
> db.createUser(
    {
      user: "admin",
      pwd: "Thom2807",
      roles: [
        { role: "root", db: "admin" }
      ]
    });

Step 7

Verify that Mongo installed successfully and is running in authentication mode.
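One way to perform this verification is to open a mongo shell as the admin user from Step 6 and confirm that an unauthenticated session is rejected. The commands below are a sketch, echoed rather than executed here; the password is the Step 6 example value, so adjust user/password to your installation.

```shell
# Sketch: Step 7 verification commands, to be run from the MongoDB bin directory.
# Echoed here instead of executed; adjust user/password to your installation.
VERIFY_AUTH='mongo admin -u admin -p Thom2807 --eval "db.runCommand({ connectionStatus: 1 })"'
VERIFY_NOAUTH='mongo admin --eval "db.getUsers()"'   # should fail with "not authorized" when --auth is active
echo "$VERIFY_AUTH"
echo "$VERIFY_NOAUTH"
```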


EXTRACTION
Definition

Extract data (segmented in defined batches) from the source Documentum repository.
A sample script "Extract.bat" is provided in the "samples" directory.
Options available to define a batch:
1. Cabinet Name: extracts sysobjects in the cabinet and related objects*.
2. Where Clause: extracts sysobjects as defined by the where clause and related objects*.
3. Folder Path: extracts sysobjects in the folder (along with sub-folders) and related objects*.
*Related objects are:
1. Relation objects with the child or parent in the sysobject dataset.
2. Containment objects with the child or parent in the sysobject dataset.
3. ACL objects referenced by objects in the sysobject dataset.
4. Alias sets.
5. Content objects.
6. Assemblies – snapshots of a virtual document.
7. Filestores – filestores where the content objects are stored.
8. Formats – formats of the objects.
9. Policies – lifecycle policies attached to the objects.
   While you technically can extract lifecycle objects (as they are sysobject subtypes),
   we do not support the ingestion of lifecycles; configuration should be handled outside
   of EMA. You might see references to lifecycles, or retention policies, in an EMA
   extract, but they are reference objects, not the actual objects.
10. Aspects attached to the objects.
11. RPS objects attached to the objects.
12. User objects for documents with subscriptions.

MongoDB must be running before you can start extraction.
Parameters

-sd, --source-driver <driver class> (mandatory)
    Source JDBC driver to use for the connection.
-sc, --source-connection <connect string> (mandatory)
    Source JDBC connection string.
-su, --source-user <username> (mandatory)
    Source JDBC username.
-sp, --source-password <password> (mandatory)
    Source JDBC password.
-mh, --mongo-host <host> (mandatory)
    Mongo DB hostname.
-mp, --mongo-port <port> (mandatory)
    Mongo DB port number.
-md, --mongo-db <database> (mandatory)
    Mongo DB database name.
-mu, --mongo-user <username> (mandatory)
    Mongo DB user name.
-mpa, --mongo-password <password> (mandatory)
    Mongo DB user password.
-c, --cabinet-name <cabinet> (mandatory)
    Name of the cabinet or folder path to extract. Multiple cabinets or folder paths are
    separated by the pipe | operator.
-wh, --where-clause <SQL clause> (optional)
    Where SQL clause to extract, e.g. (i_cabinet_id = '0c0004d28000b94b' and
    (r_modify_date > convert(DATETIME,'2013-01-01') and r_modify_date <
    convert(DATETIME,'2013-10-01'))).
-x, --exclude (optional)
    Exclude objects in the selected cabinets rather than include them.
-ot, --other-type <type or where clause> (optional)
    Extract types not extracted by default. Multiple types are separated by the pipe |
    operator, e.g. --other-type "dm_user|dm_group (r_object_id in (select r_object_id
    from dm_group_sp where group_name like '%docu%'))". Only the objects of the specified
    types are extracted, not their related objects.
-fm, --force-migrate (optional)
    Continue extraction even in case of errors.
-ivc, --ignore-version-check (optional)
    Skip the version check (which verifies that all versions of a document are present in
    the where clause / cabinet specified).
-et, --exclude-type (optional)
    Skip extraction of specific types. Types that can be excluded are: dm_relation,
    dmr_content, dmr_containment, dm_assembly, dm_acl, dm_filestore, dm_format,
    dm_policy, rps, all. Multiple types are separated by a pipe |. To exclude all
    default types use 'all'.
-ds, --db-schema <schema name> (optional)
    DB schema to be added to the tables.
-h, --help (optional)
    Show the help text.

You can provide options in a properties file instead of on the command line. If an option
is provided in both, the command-line value overrides the value in the properties file.

Scenarios

Help
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager
OR
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager --help

Providing a properties file for parameters
java -Doptions.default="E:/ema/ExtractorProperties.properties" -cp
"EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager

ExtractorProperties.properties:
source-driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
source-connection=jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_Test2_docbase
source-user=Test2
source-password=Thom2807
mongo-host=127.0.0.1
mongo-port=27017
mongo-db=ExtractorDB_10
mongo-user=admin
mongo-password=Thom2807
cabinet-name=testCab

Extract from a cabinet (SQL Server database)
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_Test2_docbase -su Test2 -sp Thom2807
-mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md ExtractorDB_10 -c "testCab"

Extract from a cabinet (Oracle database)
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.XX.XX:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa Thom2807 -md ExtractorDB_10 -c "testCab"

Extract from a folder path
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_Test2_docbase -su Test2 -sp Thom2807
-mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md ExtractorDB_10 -c "/testCab/folder"

Where clause example
Note: The where clause takes only a SQL statement, NOT DQL. Potentially ambiguous fields
should be prefixed by "s.".
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_Test2_docbase -su Test2 -sp Thom2807
-mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md ExtractorDB_10 -wh "(i_cabinet_id =
'0c0004d28000b94b' and (r_modify_date > convert(DATETIME,'2013-01-01') and r_modify_date <
convert(DATETIME,'2013-10-01')))"

Extract from multiple cabinets
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_Test2_docbase -su Test2 -sp Thom2807
-mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md ExtractorDB_10 -c
"Resources|System|Temp|Templates|dmadmin|netvis_own"

Exclude multiple cabinets
Note: Interpret this as "extract data from all cabinets not in the exclude list". Works
only with cabinets.
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_Test2_docbase -su Test2 -sp Thom2807
-mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md ExtractorDB_10 -c
"Resources|System|Temp|Templates|dmadmin|netvis_own" -x

Extract other types not extracted by default

Extract Users & Groups
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.58.48:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa Thom2807 -md ExtractorDB_10 -ot "dm_user|dm_group"

Extract Formats
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.58.48:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa Thom2807 -md ExtractorDB_10 -ot "dm_format"

Extract only Inline users
along with the cabinet

java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.58.48:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa Thom2807 -md ExtractorDB_10 -c "testCab" -ot "dm_user(r_object_id
in (select r_object_id from dm_user_sp where user_source='inline password'))"

Exclude types that are
being extracted by default

java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.58.48:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa Thom2807 -md ExtractorDB_10 -c "testCab" -et "rps"

Note: To exclude all types extracted by default, use 'all' in the -et option.


Extract register table

java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd
com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_source_docbase -su sa -sp
password@123 -mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md regTable -ot
regtable.table_name

Extract Saved searches

java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd
com.microsoft.sqlserver.jdbc.SQLServerDriver -sc
jdbc:sqlserver://127.0.0.1:1433;databaseName=DM_source_docbase -su sa -sp
password@123 -mh 127.0.0.1 -mp 27017 -mu admin -mpa Thom2807 -md savedSearch
-wh "s.r_object_id IN (select r_object_id from dm_smart_list_sp where
query_type='query_builder')"
Extract data from Oracle using db-schema
java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.XX.XX:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa Thom2807 -md ExtractorDB_10 -c "testCab" -ds "source"

Extraction using encrypted
password

java -cp "EMAExtractManager-1.6.0.jar;C:/EMA/EMA1.6.0/dependency-jars/*"
com.emc.ema.extractor.ExtractManager -sd oracle.jdbc.driver.OracleDriver -sc
jdbc:oracle:thin:@10.8.58.48:1521:ORCL -su source -sp source -mh 127.0.0.1 -mp 27017
-mu admin -mpa "DM_ENCR=B8yMJvZwYLKy8bEG1zZ8AQ==" -md ExtractorDB_10 -c
"testCab"

Running Extraction in single-threaded mode

Modify the documentum-extractor-context.xml file (present inside the jar file) and add
this line

after this comment:

