eDiscovery Admin Guide 5.6


AccessData Legal and Contact Information

Document date: December 30, 2014

Legal Information
©2014 AccessData Group, Inc. All rights reserved. No part of this publication may be reproduced, photocopied,
stored on a retrieval system, or transmitted without the express written consent of the publisher.
AccessData Group, Inc. makes no representations or warranties with respect to the contents or use of this
documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any
particular purpose. Further, AccessData Group, Inc. reserves the right to revise this publication and to make
changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes.
Further, AccessData Group, Inc. makes no representations or warranties with respect to any software, and
specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose.
Further, AccessData Group, Inc. reserves the right to make changes to any and all parts of AccessData
software, at any time, without any obligation to notify any person or entity of such changes.
You may not export or re-export this product in violation of any applicable laws or regulations including, without
limitation, U.S. export regulations or the laws of the country in which you reside.

AccessData Group, Inc.
1100 Alma Street
Menlo Park, California 94025
USA
www.accessdata.com

AccessData Trademarks and Copyright Information
AccessData®
AccessData Certified Examiner® (ACE®)
AD Summation®
Discovery Cracker®
Distributed Network Attack®
DNA®
Forensic Toolkit® (FTK®)
Mobile Phone Examiner Plus®
MPE+ Velocitor™
Password Recovery Toolkit®
PRTK®
Registry Viewer®
ResolutionOne™
SilentRunner®
Summation®
ThreatBridge™


A trademark symbol (®, ™, etc.) denotes an AccessData Group, Inc. trademark. With few exceptions, and
unless otherwise notated, all third-party product names are spelled and capitalized the same way the owner
spells and capitalizes its product name. Third-party trademarks and copyrights are the property of the
trademark and copyright holders. AccessData claims no responsibility for the function or performance of third-party products.
Third party acknowledgements:
FreeBSD® Copyright 1992-2011. The FreeBSD Project.

AFF® and AFFLIB® Copyright 2005, 2006, 2007, 2008 Simson L. Garfinkel and Basis Technology Corp. All rights reserved.

Copyright © 2005 - 2009 Ayende Rahien

BSD License: Copyright (c) 2009-2011, Andriy Syrov. All rights reserved. Redistribution and use in source and
binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following
disclaimer; Redistributions in binary form must reproduce the above copyright notice, this list of conditions and
the following disclaimer in the documentation and/or other materials provided with the distribution; Neither the
name of Andriy Syrov nor the names of its contributors may be used to endorse or promote products derived
from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE
COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
WordNet License

This license is available as the file LICENSE in any downloaded version of WordNet.
WordNet 3.0 license: (Download)
WordNet Release 3.0 This software and database is being provided to you, the LICENSEE, by Princeton
University under the following license. By obtaining, using and/or copying this software and database, you agree
that you have read, understood, and will comply with these terms and conditions.: Permission to use, copy,
modify and distribute this software and database and its documentation for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements,
including the disclaimer, and that the same appear on ALL copies of the software, database and documentation,
including modifications that you make for internal use or for distribution. WordNet 3.0 Copyright 2006 by
Princeton University. All rights reserved. THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND
PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY
WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE
USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD
PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS. The name of Princeton University or


Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database.
Title to copyright in this software, database and any associated documentation shall at all times remain with
Princeton University and LICENSEE agrees to preserve same.

Documentation Conventions
In AccessData documentation, a number of text variations are used to indicate meanings or actions. For
example, a greater-than symbol (>) is used to separate actions within a step. Where an entry must be typed in
using the keyboard, the variable data is set apart using [variable_data] format. Steps that require the user to
click on a button or icon are indicated by Bolded text. This Italic font indicates a label or non-interactive item in
the user interface.
A trademark symbol (®, ™, etc.) denotes an AccessData Group, Inc. trademark. Unless otherwise notated, all
third-party product names are spelled and capitalized the same way the owner spells and capitalizes its product
name. Third-party trademarks and copyrights are the property of the trademark and copyright holders.
AccessData claims no responsibility for the function or performance of third-party products.

Registration
AccessData registers the product after a purchase is made and before the product is shipped. The licenses are
bound to either a USB security device or a Virtual CmStick, according to your purchase.

Subscriptions
AccessData provides a one-year licensing subscription with all new product purchases. The subscription allows
you to access technical support, and to download and install the latest releases for your licensed products during
the active license period.
Following the initial licensing period, a subscription renewal is required annually for continued support and for
updating your products. You can renew your subscriptions through your AccessData Sales Representative.
Use License Manager to view your current registration information, to check for product updates and to
download the latest product versions, where they are available for download. You can also visit our web site,
www.accessdata.com anytime to find the latest releases of our products.
For more information, see Managing Licenses in your product manual or on the AccessData website.

AccessData Contact Information
Your AccessData Sales Representative is your main contact with AccessData. Listed below are the general
AccessData telephone number and mailing address, as well as telephone numbers for contacting individual
departments.


Mailing Address and General Phone Numbers
You can contact AccessData in the following ways:

AccessData Mailing Address, Hours, and Department Phone Numbers

Corporate Headquarters:
AccessData Group, Inc.
1100 Alma Street
Menlo Park, California 94025 U.S.A.
Voice: 801.377.5410; Fax: 801.377.5426

General Corporate Hours:
Monday through Friday, 8:00 AM – 5:00 PM (MST)
AccessData is closed on US Federal Holidays

State and Local Law Enforcement Sales:
Voice: 800.574.5199, option 1; Fax: 801.765.4370
Email: Sales@AccessData.com

Federal Sales:
Voice: 800.574.5199, option 2; Fax: 801.765.4370
Email: Sales@AccessData.com

Corporate Sales:
Voice: 801.377.5410, option 3; Fax: 801.765.4370
Email: Sales@AccessData.com

Training:
Voice: 801.377.5410, option 6; Fax: 801.765.4370
Email: Training@AccessData.com

Accounting:
Voice: 801.377.5410, option 4

Technical Support
Free technical support is available on all currently licensed AccessData solutions.
You can contact AccessData Customer and Technical Support in the following ways:
AD Customer & Technical Support Contact Information

AD SUMMATION and AD EDISCOVERY:
Americas/Asia-Pacific:
800.786.8369 (North America)
801.377.5410, option 5
Email: legalsupport@accessdata.com

AD IBLAZE and ENTERPRISE:
Americas/Asia-Pacific:
800.786.2778 (North America)
801.377.5410, option 5
Email: support@summation.com

All other AD SOLUTIONS:
Americas/Asia-Pacific:
800.658.5199 (North America)
801.377.5410, option 5
Email: support@accessdata.com

AD INTERNATIONAL SUPPORT:
Europe/Middle East/Africa:
+44 (0) 207 010 7817 (United Kingdom)
Email: emeasupport@accessdata.com

Hours of Support:
Americas/Asia-Pacific:
Monday through Friday, 6:00 AM – 6:00 PM (PST), except corporate holidays.
Europe/Middle East/Africa:
Monday through Friday, 8:00 AM – 5:00 PM (UK-London), except corporate holidays.

Web Site:
http://www.accessdata.com/support/technical-customer-support
The Support website allows access to Discussion Forums, Downloads, Previous Releases, our Knowledge Base,
a way to submit and track your “trouble tickets”, and in-depth contact information.

Documentation
Please email AccessData regarding any typos, inaccuracies, or other problems you find with the documentation:
documentation@accessdata.com

Professional Services
The AccessData Professional Services staff have varied and extensive backgrounds in digital
investigations, including law enforcement, counter-intelligence, and corporate security. Their collective
experience in working with both government and commercial entities, as well as in providing expert testimony,
enables them to provide a full range of computer forensic and eDiscovery services.
At this time, Professional Services provides support for sales, installation, training, and utilization of FTK, FTK
Pro, Enterprise, eDiscovery, Lab, and the entire Resolution One platform. They can help you resolve any
questions or problems you may have regarding these solutions.

Contact Information for Professional Services
Contact AccessData Professional Services in the following ways:

AccessData Professional Services Contact Information

Phone:
North America Toll Free: 800-489-5199, option 7
International: +1.801.377.5410, option 7

Email: services@accessdata.com


Contents

AccessData Legal and Contact Information . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Part 1: Introducing Resolution1 eDiscovery . . . . . . . . . . . . . . . . . . . . . . . . 16

Chapter 1: Introducing Resolution1 eDiscovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
About Resolution1 eDiscovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
About the Audience for this Admin Guide . . . . . . . . . . . . . . . . . . . . . . . . 17
What You Can Do with Resolution1 eDiscovery . . . . . . . . . . . . . . . . . . . . . 18
Basic Workflow of Resolution1 eDiscovery . . . . . . . . . . . . . . . . . . . . . . . . 19
About This Admin Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Chapter 2: Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
About the AccessData Web Console . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
About User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Opening the AccessData Web Console . . . . . . . . . . . . . . . . . . . . . . . . . 23
Installing the Browser Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Introducing the Web Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
The Project List Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
User Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Using Elements of the Web Console . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

Part 2: Administrating and Configuring . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Chapter 3: Introduction to Application Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Workflows for Administrators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
Chapter 4: Using the Management Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
About the Management Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Opening the Management Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Management Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Chapter 5: Configuring and Managing System Users, User Groups, and Roles . . . . . . . . . . . . . . . . . . . 45
About Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
About User Roles and Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
About Admin Roles and Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
About the Users Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
About the Admin Roles Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Managing Admin Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Managing Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Configuring and Managing User Groups . . . . . . . . . . . . . . . . . . . . . . . . . 62
Chapter 6: Using System Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
About System Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Adding a System Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Agent Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
ETM Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Executing a System Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Chapter 7: Configuring the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
About System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
System Configuration Tab - Standard Settings . . . . . . . . . . . . . . . . . . . . . 77
Chapter 8: Using the Work Manager Console and Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Using the Work Manager Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Work Manager Console Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Validating Activate Work Orders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Configuring a Work Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Using the System Log and Activity Log . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Chapter 9: Using the Site Server Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Monitoring Site Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Setting Network Traffic Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Managing Jobs on the Site Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Part 3: Configuring Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

Chapter 10: About Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Chapter 11: Managing People, Groups, Computers and Network Shares . . . . . . . . . . . 103
Managing People for Collecting Data . . . . . . . . . . . . . . . . . . . . . . . . . . 103


Managing Computers for Collecting Data . . . . . . . . . . . . . . . . . . . . . . . . 115
Managing Network Shares for Collecting Data . . . . . . . . . . . . . . . . . . . . 120
Configuring Data Source Credant Options . . . . . . . . . . . . . . . . . . . . . . . 123
Managing Groups for Collecting Data . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Configuring Network Collectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Managing Evidence for Collecting Data . . . . . . . . . . . . . . . . . . . . . . . . . 134
Managing Mobile Devices for Collecting Data . . . . . . . . . . . . . . . . . . . . . 136

Chapter 12: Configuring Public Data Repositories for Collecting Data . . . . . . . . . . . . . . 138
Configuring for a Domino Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Configuring for an Exchange Online/365 Server . . . . . . . . . . . . . . . . . . . 141
Configuring for Exchange 2003, 2007, and 2010 Servers . . . . . . . . . . . . . . 142
Configuring for Exchange 2010 SP1 and 2013 Servers . . . . . . . . . . . . . . . 144
Configuring for an Exchange Index Server . . . . . . . . . . . . . . . . . . . . . . . 147
Configuring for an Enterprise Vault Server . . . . . . . . . . . . . . . . . . . . . . . 149
Configuring for a Oracle URM Server . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Configuring for a Documentum Server . . . . . . . . . . . . . . . . . . . . . . . . . 157
Configuring for a SharePoint Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Configuring for Websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Configuring for a DocuShare Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Configuring for Cloud Mail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Configuring for a OpenText ECM Server . . . . . . . . . . . . . . . . . . . . . . . . 168
Configuring for a FileNet Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Configuring for Gmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Configuring for Google Drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Configuring for Druva . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Configuring for a CMIS Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

Part 4: Managing Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

Chapter 13: Introduction to Project Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
About Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Workflow for Project/Case Managers . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Chapter 14: Using the Project Management Home Page . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Viewing the Home Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180


Introducing the Home Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Adding Custom Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Managing People for a Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Chapter 15: Configuring Advanced System Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
System Configuration Tab - Advanced Settings . . . . . . . . . . . . . . . . . . . 192
Chapter 16: Creating a Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Creating Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Using Project Properties Cloning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Viewing and Editing Project Details . . . . . . . . . . . . . . . . . . . . . . . . . . . 220

Chapter 17: Managing People . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Data Sources People Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Adding a Person to a Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Evidence Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Chapter 18: Managing Tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Managing Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Managing Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Chapter 19: Setting Project Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
About Project Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Permissions Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Associating Users and Groups to a Project . . . . . . . . . . . . . . . . . . . . . . 247
Associating Project Roles to Users and Groups . . . . . . . . . . . . . . . . . . . . 248
Creating a Project Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Chapter 20: Running Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Accessing the Reports Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Search Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Export Set Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Summary Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

Chapter 21: Configuring Review Tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Configuring Markup Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258


Configuring Custom Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Configuring Tagging Layouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Configuring Highlight Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Configuring Redaction Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274

Chapter 22: Monitoring the Work List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Accessing the Work List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Chapter 23: Managing Document Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
About Managing Document Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Creating a Document Group During Import . . . . . . . . . . . . . . . . . . . . . . 279
Creating a Document Group in Project Review . . . . . . . . . . . . . . . . . . . . 279
Deleting a Document Group in Project Review . . . . . . . . . . . . . . . . . . . . 280
Chapter 24: Managing Transcripts and Exhibits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Creating a Transcript Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Capturing Realtime Transcripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Using Transcript Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Uploading Exhibits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292

Chapter 25: Managing Review Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Creating a Review Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Deleting Review Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Renaming a Review Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Manage Permissions for Review Sets . . . . . . . . . . . . . . . . . . . . . . . . . . 297

Chapter 26: Project Folder Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Project Folder Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Project Folder Subfolders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Chapter 27: Using Language Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Language Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Chapter 28: Getting Started with KFF (Known File Filter) . . . . . . . . . . . . . . . . . . . . . . . . . . 303
About KFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
About the KFF Server and Geolocation . . . . . . . . . . . . . . . . . . . . . . . . . 308
Installing the KFF Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Configuring the Location of the KFF Server . . . . . . . . . . . . . . . . . . . . . . 310
Migrating Legacy KFF Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311


Importing KFF Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
About CSV and Binary Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Uninstalling KFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Installing KFF Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
KFF Library Reference Information . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
What has Changed in Version 5.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

Chapter 29: Using KFF (Known File Filter) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
About KFF and De-NIST Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 331
Process for Using KFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Configuring KFF Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Adding Hashes to the KFF Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
Using KFF Groups to Organize Hash Sets . . . . . . . . . . . . . . . . . . . . . . . 339
Enabling a Project to Use KFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Reviewing KFF Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Re-Processing KFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Exporting KFF Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Chapter 30: About Cerberus Malware Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
About Cerberus Malware Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
About Cerberus Stage 1 Threat Analysis . . . . . . . . . . . . . . . . . . . . . . . 353
About Cerberus Stage 2 Static Analysis . . . . . . . . . . . . . . . . . . . . . . . . 359
Chapter 31: Using Cerberus Malware Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
About Running Cerberus Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Enabling Cerberus Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
Viewing Cerberus Results and Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

Part 5: Using Lit Holds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

Chapter 32: Managing Litigation Holds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
About Litigation Holds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Configuring the System for Managing Litigation Holds . . . . . . . . . . . . . . . 376
Configuring Litigation Holds System Settings . . . . . . . . . . . . . . . . . . . . . 379
Using the Lit Hold List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Creating a Litigation Hold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391


Managing Existing Litigation Holds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399

Part 6: Loading Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403

Chapter 33: Introduction to Loading Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Importing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Chapter 34: Using the Evidence Wizard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Using the Evidence Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Adding Evidence to a Project Using the Evidence Wizard . . . . . . . . . . . . . 411
Chapter 35: Importing Evidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
About Importing Evidence Using Import . . . . . . . . . . . . . . . . . . . . . . . . 414
Importing Evidence into a Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Chapter 36: Analyzing Document Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Using Cluster Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Using Entity Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Chapter 37: Editing Evidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Editing Evidence Items in the Evidence Tab . . . . . . . . . . . . . . . . . . . . . 423
Evidence Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Chapter 38: Data Loading Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Document Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Email & eDocs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Related Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
Transcripts and Exhibits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Work Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Sample DII Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
DII Tokens . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442

Part 7: Using Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446

Chapter 39: Introduction to Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
About Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Chapter 40: Introduction to the Resolution1 eDiscovery Collection Job . . . . . . . . . . . . 452
About Collection Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452


Chapter 41: Creating and Managing Jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Adding a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
General Job Wizard Tabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Approving a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Executing a Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Processing a Job. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Using Job Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
Using Job Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Using Job Templates and Filter Templates . . . . . . . . . . . . . . . . . . . . . . . 485
Additional Job Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
Chapter 42: Configuring Third Party Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Other Data Sources Filter Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
Cloud Mail Collection Options for People . . . . . . . . . . . . . . . . . . . . . . . . 494
Domino Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Documentum Collections Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
DocuShare Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
Enterprise Vault Server Collection Options . . . . . . . . . . . . . . . . . . . . . . . 500
Collecting Exchange Emails for Custodians . . . . . . . . . . . . . . . . . . . . . . 503
Exchange Public Folder Collection Options . . . . . . . . . . . . . . . . . . . . . . 505
FileNet Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
Google Drive Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
OpenText ECM Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
Oracle URM Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
SharePoint Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Website Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
Druva Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
CMIS Collection Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517

Part 8: Using the Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520

Chapter 43: Using the Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
About the Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521

Part 9: Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525

Chapter 44: Installing the AccessData Elasticsearch Windows Service . . . . . . . . . . . . . 526
About the Elasticsearch Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
Installing the Elasticsearch Service . . . . . . . . . . . . . . . . . . . . . . . . . . . 527


Chapter 45: Using the Site Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
About Site Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Before Installing a Site Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Installing a Site Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
Site Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Chapter 46: Installing the Windows Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
Manually Installing the Windows Agent . . . . . . . . . . . . . . . . . . . . . . . . 535
Using Your Own Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
Controlling Consumption of the CPU . . . . . . . . . . . . . . . . . . . . . . . . . . 541
Resolution1 eDiscovery Additional Instructions . . . . . . . . . . . . . . . . . . . . 542
Important Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543

Chapter 47: Installing the Unix / Linux Agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Installing The Enterprise Agent on Unix/Linux . . . . . . . . . . . . . . . . . . . . 544
Chapter 48: Installing the Mac Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Configuring the AccessData Agent installer . . . . . . . . . . . . . . . . . . . . . . 546
Installing the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
Uninstalling the Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548

Chapter 49: Integrating with AccessData Forensics Products . . . . . . . . . . . . . . . . . . . . . . 549
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Managing User Accounts and Permissions Between FTK and Summation/Resolution1 eDiscovery . . . . . . . . 550
Creating and Viewing Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Known Issues with FTK Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . 553


Part 1

Introducing Resolution1 eDiscovery

This part introduces Resolution1 eDiscovery and includes the following chapters:
Introducing AccessData eDiscovery (page 31)
Getting Started (page 21)


Chapter 1

Introducing Resolution1 eDiscovery

About Resolution1 eDiscovery
Resolution1 eDiscovery helps you identify and collect relevant data in-house to address electronic discovery
from beginning to end. You can run collections across a company’s entire enterprise network. The collected
evidence can then be processed, reviewed, and exported.
Collection results are refined by keyword searches and filters so that you gather only relevant data that pertains to
a case. The resulting production set can then be exported into an AD1 format, or into a variety of load file
formats such as Concordance, Summation, EDRM, Introspect, and iConect.

About the Audience for this Admin Guide
This product is intended for use in gathering and processing electronically stored evidence for criminal, civil, and
internal corporate cases.
The audience for this forensic investigation software tool includes law enforcement officials as well as corporate
security and IT professionals who need to access and evaluate the evidentiary value of files, folders, computers,
and other electronic data sources. They should be well-versed in the eDiscovery process. They should also
have a good understanding of Chain of Custody and the implications of running the Resolution1 eDiscovery
process within an organization. They should also have the following competencies when using this software:
Basic knowledge of and training in forensic policies and procedures
Familiarity with the fundamentals of collecting digital evidence and ensuring the legal validity of the evidence
Understanding of forensic images and how to acquire forensically sound images
Experience with case studies and reports


What You Can Do with Resolution1 eDiscovery
Resolution1 eDiscovery addresses the entire eDiscovery model in a repeatable, defensible, and automated
manner, using a single solution.
See Getting Started on page 21.

What you can do with Resolution1 eDiscovery

Information Management:
Thoroughly audit for and identify electronically stored information (ESI) that falls outside your records retention policies.
Flag non-compliant files and log their locations.

Preserve and Collect:
Forensically collect ESI from workstations, laptops, network servers, email servers, and structured data repositories.
Collect only relevant data from shared resources or all people-created data, as you choose, using advanced searching and filtering options.
Create native PSTs and NSFs from email servers.
Perform incremental collections to only collect data that has changed from a previous collection.
Reuse previously executed collections and associate them with multiple projects.

Processing and deduplication:
Process data as you collect, while maintaining complete chain of custody.
Use distributed processing that greatly reduces processing time.
Automatically identify and categorize data, including encrypted files.
Deduplicate email and documents across the case or for specific people.
Scale processes to handle massive data sets.

Analysis and Review:
Use a friendly web-based interface with native file review that allows for collaborative, full review prior to creating a production set and exporting to a load file format.
Perform advanced searches with hit highlighting in files, emails, and attachments that lets you quickly find responsive evidence without having to read every single word.
Cull data by leveraging sophisticated searching and rich filtering.
View documents by families or similarity.
View email grouped by conversations.

Production:
Produce responsive-only documents and email in native format or an AD1 forensic archive, organized by people or as a single instance, with options to preserve the original folder structure.
Generate load files for export to popular third-party review tools, including Concordance, EDRM XML, iConect, Introspect, Relativity, Ringtail (MDB), or Summation eDII.
Produce detailed reports, such as search reports, processing exception reports, production reports, and exclusion reports.
Utilize rolling production support that enables batch production.


Basic Workflow of Resolution1 eDiscovery
Although there is no formal order in which you collect, process, and export evidence using Resolution1
eDiscovery, you can use the following basic workflow as a guide.

Basic Workflow of Resolution1 eDiscovery

Step 1: Configure and set up eDiscovery and eDiscovery users before you begin collecting evidence.
See Configuring the System on page 77.

Step 2: Add people, network shares, computers, and groups whose data you want to collect.
See About Data Sources on page 101.

Step 3: Create a project.
See Creating a Project on page 205.

Step 4: (Optional) Create a litigation hold.
See Managing Litigation Holds on page 376.

Step 5: Collect evidence from the people, network shares, computers, and groups that you added.
See Using Jobs on page 446.

Step 6: Approve, execute, and then process a collection.
See Approving a Job on page 477.
See Executing a Job on page 477.
See Processing a Job on page 478.

Step 7: Review data. After you process a collection, you open the resulting case from the Project List into
Project Review. From Project Review, you filter, search, and apply labels to the processed data until you have a
production set that contains only the relevant files for the case. At that point, you can export the production set
to a load file as described in the next step.
See the Reviewer Guide.

Step 8: Export the production set to a load file.
See the Reviewer Guide.


About This Admin Guide
This Admin Guide explains how administrators do the following:
Configure system settings
Create and manage projects
Configure data sources
Configure and use e-discovery features
Use the Dashboard
Use platform components such as the Site Server and agents

This guide includes the following parts:
Getting Started (page 21)
Administrating and Configuring (page 41)
Configuring Data Sources (page 100)
Managing Projects (page 177)
Loading Data (page 403)
Using Jobs (page 446)
Using the Dashboard (page 520)
Using Lit Holds (page 375)
Reference (page 525)

For information about reviewing project data using Project Review, see the Resolution1 eDiscovery Reviewer
Guide.
For information about new features, fixed issues, and known issues, see the Resolution1 eDiscovery Release
Notes.
You can download the Reviewer Guide and Release Notes from the Help/Documentation link. See User Actions
on page 33.


Chapter 2

Getting Started

Terminology
The Resolution1 platform is a suite of litigation support and cyber security products. To better reflect
how each of AccessData’s applications works within the Resolution1 platform, AccessData has renamed the
individual products of the Resolution1 platform. The following table lists the name changes:

Application Name Changes

Previous Name        New Name
CIRT                 Resolution1 CyberSecurity
eDiscovery           Resolution1 eDiscovery

To provide greater compatibility between products, some terminology in the user interface and documentation
has been consolidated. The following table lists the common terminology:

Terminology Changes

Previous Term        New Term
Case                 Project
Custodian            Person
Custodians           People
System Console       Work Manager Console
Security Log         Activity Log
Audit Log            User Review Activity

About the AccessData Web Console
The application displays the AccessData web-based console that you can open from any computer connected to
the network.
All users are required to enter a username and password to open the console.


What you can see and do in the application depends on your product license and the rights and permissions
granted to you by the administrator. You may have limited privileges based on the work you do.
See About User Accounts on page 22.

Web Console Requirements
Software Requirements
The following are required for using the features in the web console:
Windows-based PC running the Internet Explorer web browser:
Internet Explorer 9 or higher is required for full functionality of most features.
Internet Explorer 10 or higher is required for full functionality of all features. (Some new features use HTML5, which requires version 10 or higher.)
Note: If you have issues with the interface displaying correctly, view the application in compatibility view for Internet Explorer.
The console may be opened using other browsers but will not be fully functional.

Internet Explorer Browser Add-on Components:
Microsoft Silverlight--Required for the console.
Adobe Flash Player--Required for imaging documents in Project Review.
AccessData console components:
AD NativeViewer--Required for viewing documents in the Alternate File Viewer in Project Review. Includes Oracle OutsideX32.
AD Bulk Print Local--Required for printing multiple records using Bulk Printing in Project Review.

To use these features, install the associated applications on each user’s computer.
See Installing the Browser Components on page 25.

Hardware Recommendations
Use a display resolution of 1280 x 1024 or higher.
Press F11 to display the console in full-screen mode and maximize the viewing area.

About User Accounts
Each user that uses the web console must log in with a user account. Each account has a username and
password. Administrators configure the user accounts.
User accounts are granted permissions based on the tasks those users perform. For example, one account may
have permissions to create and manage projects while another account has permissions only to review files in a
project.
Your permissions determine which items you see and the actions you can perform in the web console.
There is a default Administrator account.


User Account Types
Depending on how the application is configured, your account may be either an Integrated Windows
Authentication account or a local application account.
The type of account that you have will affect a few elements in the web interface. For example, if you use an
Integrated Windows Authentication account, you cannot change your password within the console. However,
you can change your password within the console if you are using an application user account.

Opening the AccessData Web Console
You use the AccessData web console to perform application tasks.
See About the AccessData Web Console on page 21.
You can launch the console from an approved Web browser on any computer that is connected to the
application server on the network.
See Web Console Requirements on page 22.
To start the console, you need to know the IP address or the host name of the computer on which the application
server is installed.
When you first access the console, you are prompted to log in. Your administrator will provide you with your
username and password.

To open the web console
1. Open Internet Explorer.
Note: Internet Explorer 7 or higher is required to use the web console for full functionality. Internet Explorer 10 or 11 is recommended.
2. Enter the following URL in the browser’s address field:
https://<host>/ADG.map.Web/
where <host> is the host name or the IP address of the application server.
This opens the login page.
You can save this web page as a favorite.
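For example, if the application server were installed on a computer named eDiscoveryServer (a hypothetical host name used here only for illustration), you would enter https://eDiscoveryServer/ADG.map.Web/ in the address field.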

3. One of two login pages displays:
If you are using Integrated Windows Authentication, the following login page displays.

Integrated Windows Authentication Page


Note: If you are using Integrated Windows Authentication and are not on the domain, you will see a
Windows login prompt.
If you are not using Integrated Windows Authentication, the login page displays the product name and
version for the product license that your organization is using and provides fields for your username and
password.

Non-Integrated Windows Authentication Login

4. On the login page, enter the username and password for your account.
If you are logging in as the administrator for the very first time and have not enabled Integrated Windows Authentication, enter the pre-set default user name and password. Contact your technical support or sales representative for login information.
5. Click Sign In.
If you are authenticated, the application console displays.
If you cannot log in, contact your administrator.
6. The first time the web console is opened on a computer, you may be prompted to install the following plug-ins:
Microsoft Silverlight
Adobe Flash Player
AD Alternate File Viewer (Native Viewer)
AD Bulk Print Local
Download the plug-ins. When a pop-up from Internet Explorer displays asking to run or download the executable, click Run. Complete the install wizard to finish installing the plug-in.
See Web Console Requirements on page 22.
See Installing Browser Components Manually on page 27.


Installing the Browser Components
To use all of the features of the web console, each computer that runs the web console must have Internet
Explorer and the following add-ons:
Microsoft Silverlight--Required for the console.
Adobe Flash Player--Required for imaging documents in Project Review.
AccessData NativeViewer--Required for imaging documents in Project Review. This includes the Oracle OutsideX32 plug-in.
AccessData Local Bulk Print--Required for printing multiple records using Bulk Printing in Project Review.

Important: Each computer that runs the console must install the required browser components. The installations
require Windows administrator rights on the computer.
Upon first login, the web console will detect if the workstation's browser does not have the required versions of
the add-ons and will prompt you to download and install the add-ons.

See Installing Components through the Browser on page 25.
See Installing Browser Components Manually on page 27.

Installing Components through the Browser
Microsoft Silverlight
To install Silverlight
1. If you need to install Silverlight, click Click now to install in the Silverlight plug-in window.
2. Click Run in the accompanying security prompts.
3. On the Install Silverlight dialog, click Install Now.
When the Silverlight installer completes, on the Installation successful dialog, click Close.


If the web browser does not display the AD logo and then the console, refresh the browser window.

The application Main Window displays and you can install Flash Player from the plug-in installation bar.

Adobe Flash Player
To install Flash Player
1. If you need to install Flash Player, click the Flash Player icon.
2. Click Download now.
3. Click Run in the accompanying security prompts.
4. Complete the installation.
5. Refresh the browser.

Once the application is installed, you need to install the Alternate File Viewer and Local Bulk Print software. You
can find the links to download the add-ons in the dropdown in the upper right corner of the application.

AccessData NativeViewer
To install the AD NativeViewer
1. From the User Actions dropdown, select AD Alternate File Viewer.
2. Click RUN on the NearNativeSetup.exe prompt.
3. Click Next on the InstallShield Wizard dialog.
4. Click Next on the Custom Setup dialog.
5. Click Install on the Ready to Install the Program dialog.
6. Allow the installation to proceed and then click Finish.
7. Close the browser and log in again.
8. Click Allow on the ADG.UI.Common.Document.Views.NearNativeControl prompt.
9. Refresh the browser.


AccessData Local Bulk Print
To install the Local Bulk Print add-on
1. From the User Actions dropdown, select AD Local Bulk Print.
2. Click Run at the AccessData Local Bulk Print .exe prompt in Internet Explorer.
3. In the InstallShield Wizard dialog, click Next.
4. Accept the license terms and click Next.
5. Accept the default location in the Choose Destination Location dialog and click Next.
6. Click Install on the Ready to Install the Program dialog.
7. Click Finish.

Installing Browser Components Manually
You can use EXE files to install the components outside of the browser. You can run these locally or use
software management tools to install them remotely.
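As an illustration only, if you distribute the NearNativeSetup.MSI file (referenced in the steps below) with a software management tool, and assuming the package accepts the standard Windows Installer switches (verify this in a test environment first), a silent install command might look like:

msiexec /i NearNativeSetup.MSI /qn

The EXE-based installers, such as AccessDataBulkPrintLocal.exe, may use different silent-install options; check with AccessData support or your software distribution tool's documentation before scripting them.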

Installing AD Alternate File Viewer
To install the Alternate File Viewer add-on, navigate to the following path on the server:

C:\Program Files (x86)\AccessData\MAP\NearNativeSetup.exe
To install the AD Alternate File Viewer add-on
1. Run the NearNativeSetup.MSI file.
2. Click Next on the InstallShield Wizard dialog.
3. Click Next on the Custom Setup dialog.
4. Click Install on the Ready to Install the Program dialog.
5. Allow the installation to proceed and then click Finish.

Installing the Local Bulk Print Tool
To install the Local Bulk Print tool, navigate to the following path on the server:

C:\Program Files (x86)\AccessData\MAP\AccessDataBulkPrintLocal.exe
To install the Local Bulk Print add-on
1. Run the AccessDataBulkPrintLocal.exe file. The wizard should appear.
2. Click Next to begin.
3. Click Next on the Select Installation Folder dialog.
4. Click Next. After the installation is complete, click Close.

Installing Adobe Flash Player
Visit http://get.adobe.com/flashplayer/ and follow the prompts to install Flash Player.


Introducing the Web Console
The user interface for the application is the AccessData Web console. The console includes different tabs and
elements.

The items that display in the console are determined by the following:
Your application’s license
Your user permissions

The main elements of the application are listed in the following table. Depending on the license that you own and
the permissions that you have, you will see some or all of the following:

Navigation bar: Lets you open multiple pages in the console.

Home page: Lets you create, view, manage, and review projects based on the permissions that you have. This is the default page when you open the console. See Using the Project Management Home Page on page 180.

Dashboard: (Available in Resolution1 CyberSecurity, Resolution1, and Resolution1 eDiscovery) Allows you to view important event information in an easy-to-read visual interface. See Using the Dashboard on page 521.

Data Sources: Lets you manage people, computers, network shares, and evidence, as well as several different connectors. This tab allows you to manage these data sources throughout the system, not just by project. See About Data Sources on page 101.

Lit Hold: (Available in Resolution1 CyberSecurity and Resolution1 eDiscovery) Lets you create and manage litigation holds. See Managing Litigation Holds on page 376.

Alerts: (Available in Resolution1 CyberSecurity, Resolution1, and Resolution1 eDiscovery) Allows you to view alerts as they enter the user interface. See Viewing Alerts on page 566.

Management (gear icon): Lets administrators perform global management tasks. See Opening the Management Page on page 43.

User Actions: Actions specific to the logged-in user that affect the user’s account. See User Actions on page 33.

Project Review: Lets you analyze, filter, code, and label documents for a selected project. You access Project Review from the Home page. See the Reviewer Guide for more information on Project Review. You can download the Reviewer Guide from the Help/Documentation link. See User Actions on page 33.


The Project List Panel
The Home page includes the Project List panel. The Project List panel is the default view after logging in. Users can view only the projects that they created or for which they have been granted permissions.

Administrators and users, given the correct permissions, can use the project list to do the following:
Create projects.
View a list of existing projects.
Add evidence to a project. See Importing Data on page 404.
Launch Project Review.

If you are not an administrator, you will only see either the projects that you created or projects to which you
were granted permissions.
The following table lists the elements of the project list. Some items may not be visible depending on your
permissions.


Elements of the Project List

Create New Project: Click to create a new project. See Creating a Project on page 205.

Filter Options: Allows you to search and filter all of the projects in the project list. You can filter the list based on any number of fields associated with the project, including, but not limited to, the project name. See Filtering Content in Lists and Grids on page 38.

Filter Enabled: Displayed if you have enabled a filter.

Project Name Column: Lists the names of all the projects to which the logged-in user has permissions.

Action Column: Allows you to add evidence to a project or enter Project Review.
Add Data: Allows you to add data to the selected project.
Project Review: Allows you to review the project using Project Review. See the Reviewer Guide for more information on using Project Review. You can download the Reviewer Guide from the Help/Documentation link. See Changing Your Password on page 34.

Processing Status Column: Lists the status of the projects:
Not Started - The project has been created but no evidence has been added.
Processing - Evidence has been added and is still being processed.
Completed - Evidence has been added and processed.
Note: When processing a small set of evidence, the Processing Status may show a delay of two minutes behind the actual processing of the evidence. You may need to refresh the list to see the current status. See Refresh below.

Size Column: Lists the size of the data within the project.

Page Size drop-down: Allows you to select how many projects to display in the list. The total number of projects that you have permissions to see is displayed.

Total: Lists the total number of projects displayed in the Project List.

Page: Allows you to view another page of projects.

Refresh: If you create a new project, or make changes to the list, you may need to refresh the project list.

Custom Properties: Add, edit, and delete custom columns with the default value that will be listed in the Project List panel. When you create a project, this additional column will be listed in the project creation dialog. See Adding Custom Properties on page 185.

Project Property Cloning: Clone the properties of an existing project to another project. You can apply a single project’s properties to another project, or you can pick and choose properties from multiple individual projects to apply to a single project. See Using Project Properties Cloning on page 219.

Export to CSV: Export the Project list to a .csv file. You can save the file and open it in a spreadsheet program.

Columns: Add or remove viewable columns in the Project List.

Delete: Highlight a project and click Delete Project to delete it from the Project List.


User Actions
Once in the web console, you can perform user actions that are specific to you as the logged-in user. You access the options by clicking the logged-in user name in the top right corner of the console.

User Actions

Logged-on user: The username of the logged-on user is displayed; for example, administrator.

Change password: Lets the logged-on user change their password. See Changing Your Password on page 34.
Note: This function is hidden if you are using Integrated Windows Authentication.

Help/Documentation: Lets you access the latest version of the Release Notes and User Guide. The files are in PDF format and are contained in a ZIP file that you can download.

Manage My Notifications: Lets you manage the notifications that you have created and that you belong to. See About Managing Notifications for a Job on page 483. You can delete notifications, export the notifications list to a CSV file, and filter the notifications with the Filter Options. See Filtering Content in Lists and Grids on page 38.

Download Alternate File Viewer: Lets you download the Alternate File Viewer application. See AccessData NativeViewer on page 26.

Download Local Bulk Print software: Lets you access the latest version of the Local Bulk Print software. See AccessData Local Bulk Print on page 27.

Logout: Logs you off and returns you to the login page.
Note: This function is hidden if you are using Integrated Windows Authentication.


Changing Your Password
Note: This function is hidden if you are using Integrated Windows Authentication. You must change your
password using Windows.
Any logged-in user can change their password. You may want to change your password for one of the following reasons:
You are changing a default password after you log in for the first time.
You are changing your password on a schedule, such as quarterly.
You are changing your password after having a password reset.

To change your own password
1. Log in using your username and current password.
   See To open the web console on page 23.
2. In the upper right corner of the console, click Change Password.

Change User Password

3. In the Change User Password dialog, enter the current password and then enter and confirm the new password in the respective fields. The following are the password requirements:
   The password must be between 7 and 50 characters.
   At least one alpha character.
   At least one non-alphanumeric character.
4. Click OK.
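For example, a hypothetical password such as review#2016 satisfies these requirements: it is between 7 and 50 characters, it contains alpha characters, and it includes a non-alphanumeric character (#).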


Using Elements of the Web Console
Maximizing the Web Console Viewing Area
You can press F11 to display the console in full-screen mode.

About Content in Lists and Grids
Many objects within the console are made up of lists and grids. Many elements in the lists and grids recur in the
panels, tabs, and panes within the interface. The following sections describe these recurring elements.
You can manage how the content is displayed in the grids.
See Refreshing the Contents in List and Grids on page 35.
See Managing Columns in Lists and Grids on page 36.
See Sorting by Columns on page 35.
See Filtering Content in Lists and Grids on page 38.
See Changing Your Password on page 34.

Refreshing the Contents in List and Grids
There may be times when the list you are looking at is not dynamically updated. You can refresh the contents by clicking the Refresh button.

Sorting by Columns
You can sort grids by most columns.

To sort a grid by columns
1. Click the column head to sort by that column in ascending order.
   A sort indicator (an up or down arrow) is displayed.
2. Click it a second time to sort in descending order.

Sorting By Multiple Columns
In the Item List in Project Review, you can also sort by multiple columns. For example, you can do a primary sort by file type, then a secondary sort by file size, and a third sort by accessed date.

To sort a grid by multiple columns
1. Click the column head to sort by that column in ascending order.
   A sort indicator (an up or down arrow) is displayed.
2. Click it a second time to sort in descending order.
3. In the Item List in Project Review, to perform a secondary sort on another column, hold the Shift+Alt keys and click another column.
   A sort indicator is displayed for that column as well.
4. You can repeat this for multiple columns.

Moving Columns in a Grid View
You can rearrange columns in a Grid view in any order you want. Some columns have pre-set default positions. Column widths are also adjustable.

To move columns
In the Grid view, click and drag columns to the position you want them.

Managing Columns in Lists and Grids
You can select the columns that you want visible in the Grid view. Project managers can create custom columns
in the Custom Fields tab on the Home page.
See Configuring Custom Fields on page 262.
For additional information on using columns, see Using Columns in the Item List Panel in the Reviewer Guide.

To manage columns
1. In the grid, click Columns.
2. In the Manage Columns dialog, there are two lists:
   Available Columns: Lists all of the columns that are available to display. They are listed in alphabetical order. An icon next to each column indicates whether it is currently in the Visible Columns list, not in the Visible Columns list, or is a non-changeable column (for example, the Action column in the Project List).
   Visible Columns: Lists all of the columns that are displayed. They are listed in the order in which they appear.

Manage Columns Dialog

3. To make a column visible, in the Available Columns list, click the icon for the column that you want visible.
4. To make a column not visible, in the Visible Columns list, click the icon for the column that you want hidden.
5. To change the display order of the columns, in the Visible Columns list, select a column name and use the controls to change its position.
6. Click OK.

Managing the Grid’s Pages
When a list or grid has many items, you can configure how many items are displayed at one time on a page. This
is helpful for customizing your view based on your display size and resolution and whether or not you want to
scroll in a list.

To configure page size
1. Below a list, click the Page Size drop-down menu.
2. Select the number of items to display in one page.
3. Use the arrows by Page n of n to view the different pages.


Filtering Content in Lists and Grids
When a list or grid has many items, you can use a filter to display a portion of the list. Depending on the data you
are viewing, you have different properties that you can filter for.
For example, when looking at the Activity Log, there could be hundreds of items. You may want to view only the
items that pertain to a certain user. You can create a filter that will only display items that include references to
the user.
For example, you could create the following filter:
Activity contains BSmith
This would include activities that pertain to the BSmith user account, such as when the account was created and permissions for that user were configured.
You could add a second filter:
Activity contains BSmith
OR Username = BSmith
This would include the activities performed by BSmith, such as each time she logged in or created a project.
In this example, because an OR was used instead of an AND, both sets of results are displayed.
You can add as many filters as needed to see the results that you need.
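To make the AND/OR distinction concrete, consider two hypothetical Activity Log entries: one whose Activity text is "Created project Alpha" with a Username of BSmith, and one whose Activity text is "Granted permissions to BSmith" with a Username of administrator. With Activity contains BSmith OR Username = BSmith, both entries are displayed, because each entry satisfies at least one filter. With Activity contains BSmith AND Username = BSmith, an entry is displayed only if it satisfies both filters; in this example, neither entry does, so the list would be empty.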

To use filters
1. Above the list, click Filter Options.
   This opens the filter tool.

Filter Options

2. Use the Property drop-down to select a property on which to filter.
   This list will depend on the page that you are on and the data that you are viewing.
3. Use the Operator drop-down to select an operator to use.
   See Filter Operators on page 39.
4. Use the Value field to enter the value on which you want to filter.
   See Filter Value Options on page 40.
5. Click Apply.
   The results of the filter are displayed.
   Once a filter has been applied, the text Filter Enabled is displayed in the upper-right corner of the panel. This is to remind you that a filter is applied and is affecting the list of items.
6. To further refine the results, you can add additional filters by clicking Add.
7. When adding additional filters, be careful to properly select And/Or.
   If you select And, all filters must be true to display a result. If you select Or, all of the results for each filter will be displayed.
8. After configuring your filters, click Apply.
9. To remove a single filter, click Delete.
10. To remove all filters, click Disable or Clear All.
11. To hide the filter tool, click Filter Options.

Filter Operators
The following table lists the possible operators that can be found in the filter options. The operators available
depend upon what property is selected.

Filter Operators

=: Searches for a value that equals the property selected. This operator is available for almost all value filtering and is the default value.

!=: Searches for a value that does not equal the property selected. This operator is available for almost all value filtering.

>: Searches for a value that is greater than the property selected. This operator is available for numerical value filtering.

<: Searches for a value that is less than the property selected. This operator is available for numerical value filtering.

>=: Searches for a value that is greater than or equal to the property selected. This operator is available for numerical value filtering.

<=: Searches for a value that is less than or equal to the property selected. This operator is available for numerical value filtering.

Contains: Searches for a text string that contains the value that you have entered in the value field. This operator is available for text string filtering.

StartsWith: Searches for a text string that starts with the value that you have entered in the value field. This operator is available for text string filtering.

EndsWith: Searches for a text string that ends with a value that you have entered in the value field. This operator is available for text string filtering.
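For example, using the BSmith account from the earlier filtering example, a filter of Username StartsWith B would return BSmith along with any other usernames that begin with B, while Username = BSmith would return only that exact username.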


Filter Value Options
The following table lists the possible value options that can be found in the filter options. The value options
available depend upon what property is selected.

Filter Value Options

Blank field: This value allows you to enter a specific item that you can search for. The Description property is an example of a property where the value is a blank field.

Date value: This value allows you to enter a specific date that you can search for. You can enter the date in a m/d/yy format or you can pick a date from a calendar. The Creation Date property is an example of a property where the value is entered as a date value.

Pulldown: This value allows you to select from a pulldown list of specific values. The pulldown choices are dependent upon the property selected. The Priority property with the choices High, Low, Normal, Urgent is an example of a property where the value is chosen from a pulldown.


Part 2

Administrating and
Configuring

This part describes how to administrate the application and includes the following chapters:
Introduction to Application Management (page 42)
Using the Management Page (page 43)
Configuring and Managing System Users, User Groups, and Roles (page 45)
Configuring the System (page 77)
(page 86)
Using the Site Server Console (page 93)
Using Language Identification (page 301)
Getting Started with KFF (Known File Filter) (page 303)
Using KFF (Known File Filter) (page 331)


Chapter 3

Introduction to Application Management

This chapter is designed to help application administrators perform management tasks. Application
administration tasks are performed on the Management page. Administrators can perform their tasks as long as
they have been granted the correct permissions.
See About User Roles and Permissions on page 45.

Workflows for Administrators
Administrators and managers configure and manage the global application environment.
Before creating and reviewing projects, you should review and perform the following tasks for configuring the
application.

Workflow for Configuring the Application
Step 1 - Decide which authentication mode to use: See Opening the AccessData Web Console on page 23.
Step 2 - Manage users, groups, and roles: See Planning User Roles on page 46. See Managing Users on page 55. See Configuring and Managing User Groups on page 62.
Step 3 - Configure default project settings: See Configuring Default Project Settings on page 82.

At regular intervals, administrators should perform the following tasks to manage the overall system health and
performance of the application.

Workflow for Managing the Application
Step 1 - Monitor system activity using logs: See Viewing the System Log or Activity Log on page 92.
Step 2 - Monitor the performance of the Distribution Server and the Work Managers: See on page 86.

Most of these administrative tasks are performed in the web console in the Management page.


Chapter 4

Using the Management Page

About the Management Page
Administrators manage the application through the Management page. You can manage users and user permissions, configure aspects of the application on a global basis, and monitor activity on the system.
See Management Page on page 44.

Opening the Management Page
Administrators, and users with management permissions, use the Management page to configure and manage
the application.

To access the Management page
1. Log in to the web console as administrator or as a user with management permissions.
   See Opening the AccessData Web Console on page 23.
   See Managing Users on page 55.
2. In the web console, click Management.


Management Page
You can use the Management page to maintain the list of people who use the application, including their specific
usage rights and roles. From Management, you can view system and security logs.
You can also configure Active Directory, agent credentials, and a notification email server. The system administration console area of the Management page lets you view Work Manager status.
Depending on the license that you own and the permissions that you have, you will see some or all of the
following:

Management Page Features and Options

Users: See About the Users Tab on page 50. See Managing Users on page 55.

User Groups: See Configuring and Managing User Groups on page 62. See User Groups Tab on page 63.

Admin Roles: See About Admin Roles and Permissions on page 47. See Managing Admin Roles on page 53.

System Jobs: See Adding a System Job on page 68. See System Job Options on page 69.

System Configuration: See Configuring Active Directory Synchronization on page 78. See Configuring Export Options on page 84. See Configuring Default Project Settings on page 82.

Work Manager Console: See on page 86.

Site Server Console: See Using the Site Server Console on page 93.

Threat Filter Library: See About the Threat Filter Library on page 514.

System Log: See Using the System Log and Activity Log on page 90. See System Log Tab on page 90.

KFF Library: See Using KFF (Known File Filter) on page 331.

KFF Group Templates: See Using KFF (Known File Filter) on page 331.

Activity Log: See Using the System Log and Activity Log on page 90. See Activity Log Tab on page 91.


Chapter 5

Configuring and Managing System Users,
User Groups, and Roles

This chapter will help administrators to configure users, user groups, and roles.

About Users
A user is any person who logs in and performs tasks in the web console. Each person should have their own
user account. You can configure accounts to have specific permissions to perform specific tasks. When users
open the console, what they see and do is based on their assigned permissions.
There are two users in the database that do not appear in the user interface. The passwords for these accounts are strong passwords that are unique to each system:
Administrator - This is a different user than the Application Administrator role.
eDiscoveryProcessingUser
Permissions are managed by user roles.
See Adding Users on page 55.

About User Roles and Permissions
You can assign users different permissions based on the tasks that you want them to perform. The permissions that a user has affect the items that they see and the tasks that they can perform in the web console.
For example, you can have one group of users that manages the whole application, another group that creates projects, and another group that only reviews files in a project.
Changes to permissions for a currently logged-in user take effect when they log out and log back in.
You assign permissions to a user by configuring roles and then associating users, or groups of users, to those roles.
You can configure roles at the following levels:
Admin roles
Project roles

Admin roles provide global permissions to a user for the whole application. The following are examples of admin permissions that you can use:
Application Administrator
Manage Users
Create/Edit Projects
Manage Admin Roles
View the System Console
See About Admin Roles and Permissions on page 47.
Project roles only apply to a specific project. The following are examples of project permissions that you can use:
Project Administrator (for that project only)
Project Reviewer
Manage Evidence
View Project Reports
Manage Project People
For more information, see Introduction to Project Management on page 178.

Planning User Roles
Before creating users, plan the types of roles your users will be performing. This facilitates the process of
assigning roles and permissions to users.
See Workflows for Administrators on page 42.
Possible things to consider when planning user roles:
How many and which users should have Administrator permissions for the entire application?
How many and which users should have application management permissions to perform tasks such as creating and managing other users, roles, and projects?
How do you want to distinguish between users who can create and manage projects versus those who can only review them?
How many and which users should have project-level permissions to perform tasks such as adding and managing evidence and creating production sets?


About Admin Roles and Permissions
An admin role is a set of permissions that you assign to users or groups. Each admin role has specific permissions that allow users to manage the application, such as managing users, managing roles and permissions, and creating and managing projects.
See Admin Permissions on page 47.
You can create admin roles or assign one of the default admin roles already created in the system. There are
three default admin roles:

Default Admin Roles

Application Administrator: This role grants all permissions to manage the application.

Power User: This role grants the user permissions to create/edit projects, manage user groups, and manage users.

Users: This role grants the user permission to create/edit projects.

Creating Admin Roles
When you create an admin role, you can grant users Administrator permissions (all permissions) or grant a
combination of individual permissions.
If you want to grant permissions to a user that only allows them to review a project, then use project roles instead
of admin roles.
Note: The admin permissions available depend upon the Resolution1 license that you have.

Admin Permissions
You can configure admin roles with the following admin permissions.

Admin Permissions

Administrator: Grants all rights to the user/group for all projects.

Custom: You can select the following individual administrator roles:

Create/Edit Projects: Grants the right to create and edit projects on the Home page. Users with this permission are automatic administrators of any projects that they create. See Creating a Project on page 205.

Create/Edit Projects Restricted: Grants the rights to:
- Create projects
- Manage Admin Roles for the projects they create
- Assign permissions for the projects they create
- Link people and data sources to the projects
However, users with this permission do not have administrator status over projects that they create. They cannot create jobs in the project, nor view and search data in Review.

Delete Project: Grants the right to delete projects on the Home page. See Creating a Project on page 205.

Manage User Groups: Grants the right to add, edit, delete, and assign roles to groups. See Planning User Roles on page 46.

Manage Users: Grants the rights to add, edit, delete, activate, deactivate, reset passwords, and assign admin roles to users. See About Users on page 45. See Adding Users on page 55. See Editing the Email Address of a User on page 57. See Deleting Users on page 59. See Deactivating a User on page 59. See Activating a User on page 59. See Resetting a User’s Password on page 58. See Associating Admin Roles to a User on page 56.

Create People: Grants the right to create users. See Adding Users on page 55.

Delete People: Grants the right to delete users. See Deleting Users on page 59.

Create Nodes: Grants the right to create job targets. See Managing People, Groups, Computers and Network Shares on page 103.

Delete Nodes: Grants the right to delete job targets. See Managing People, Groups, Computers and Network Shares on page 103.

Global ID Admin: Grants the right to access and change the permissions of any user in any project. See Associating Admin Roles to a User on page 56.

Manage Project Permissions: Grants the right to manage project permissions. See Setting Project Permissions on page 240.

System Console: Grants the right to view and use the Work Manager Console and Site Server Console on the Management page. See on page 86 and Using the Site Server Console on page 93.

LitHold Manager: Grants the right to manage Lit Holds.

Evidence Admin: Grants the right to add, delete, and associate evidence. See Using the Evidence Wizard on page 405.

Manage Admin Roles: Grants the right to add, edit, delete, and assign admin roles. See About Admin Roles and Permissions on page 47. See Creating an Admin Role on page 53. See Managing Admin Roles on page 53. See Adding Permissions to an Admin Role on page 53.

Review Sentinel Data: Grants the right to review the Sentinel data. See Using Sentinel on page 587.

Execute Integration API: Grants the right to execute a job using the API. See HP ArcSight on page 500. See Adding a Job on page 455.

View Alerts: Grants the right to view alerts. See Using the Dashboard on page 521. See Viewing Alerts on page 566.

Manage KFF: Grants the right to create and manage KFF libraries, sets, templates, and groups. See Using KFF (Known File Filter) on page 331.

Threat Filter Library: Grants the right to access the Threat Filter Library in the Management tab.

System Jobs: Grants the right to view and use the System Jobs tab on the Management page. See Using System Jobs on page 66.

View Activity Log: Grants the right to view the Activity Log on the Management page. See Viewing the System Log or Activity Log on page 92.

Purge Activity Log: Grants the right to purge the Activity Log. See Activity Log Tab on page 91.


About the Users Tab
The Users tab on the Management page can be used by administrators to add, edit, delete, and associate users
on a global scale. Users are people who are logging in and working in the application.
From the Users list, you can also add, edit, or delete the application’s users. You can set users as active or
inactive, reset user passwords, and set global and group permissions.
The Users tab is the default page when you click Management on the menu bar. The User Groups tab below the
Users list pane allows you to associate and remove associations to users. The Admin Roles tab below the Users
list pane identifies the admin roles that are associated with a highlighted user.
Changes to permissions for a currently logged-in user take effect after they log out of the system and log back in.

Elements of the Users Tab

Filter Options: Allows you to search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38.

Users List: Displays all users. Click the column headers to sort by the column.

Refresh: Refreshes the Users list. See Refreshing the Contents in List and Grids on page 35.

Columns: Adjusts what columns display in the Users list. See Sorting by Columns on page 35.

Delete: Deletes the selected user. Only active when a user is selected. See Deleting Users on page 59.

Add Users: Adds a user. See About Users on page 45.

Edit User: Edits the selected user. You can add or change a selected user’s email address that is used for notifications of the application’s events. See Editing the Email Address of a User on page 57.

Delete User: Deletes the selected user(s). See Deleting Users on page 59.

Reset a User’s Password: Assigns a new password for the selected user. See Resetting a User’s Password on page 58.

Deactivate Users: Makes selected user(s) inactive in the application. See Deactivating a User on page 59.

Activate Users: Reactivates selected user(s). See Activating a User on page 59.

User Groups Tab: Allows you to associate or disassociate groups to users. See Associating a Group to a User on page 60.

Admin Roles Tab: Allows you to associate or disassociate admin roles to users. See Associating Admin Roles to a User on page 56.

Add Association: Associates a user to a group or admin role.

Remove Association: Disassociates a user from a group or admin role.


About the Admin Roles Tab
The Admin Roles tab on the Management page can be used to add, edit, delete, and associate admin roles.
Admin roles are a set of global permissions that you can associate with a user or a group.

Elements of the Admin Roles Tab

Filter Options: Allows you to search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38.

Admin Roles List: Displays all admin roles. Click the column headers to sort by the column.

Refresh: Refreshes the Admin Roles List. See Refreshing the Contents in List and Grids on page 35.

Columns: Adjusts what columns display in the Admin Roles List. See Sorting by Columns on page 35.

Delete: Deletes the selected admin roles. Only active when an admin role is selected. See About Admin Roles and Permissions on page 47.

Add Admin Roles: Adds an admin role. See Creating an Admin Role on page 53.

Edit Admin Roles: Edits the selected admin roles.

Delete Admin Roles: Deletes the selected admin roles.

Users Tab: Allows you to associate or disassociate users to an admin role.

Groups Tab: Allows you to associate or disassociate groups to an admin role.

Features Tab: Allows you to add administrator permissions to an admin role. See Adding Permissions to an Admin Role on page 53.


Managing Admin Roles
Creating an Admin Role
Before you can assign permissions to an admin role, you have to create the role.

To create an admin role
1. Log in to the web console using administrator rights.
2. Click the Management tab.
3. Click the Admin Roles tab.
   See About Admin Roles and Permissions on page 47.
4. Click the Add button.

Admin Roles Details

5. Enter a name for the admin role and a description.
6. Click OK.
   The role is added to the Admin Role list.

Adding Permissions to an Admin Role
After you have created an admin role, you need to add permissions to it before you assign it to a user or a group.

To add permissions to an admin role
1. Log in to the web console using administrator rights.
2. Click the Management tab.
3. Click the Admin Roles tab.
   See About Admin Roles and Permissions on page 47.
4. Select the role from the Admin Roles List.
5. Click the Features tab.
6. Select the permissions:
   Administrator: Grants all rights to the user/group for all projects.
   Custom: Select the administrator roles that you want. The following are available:
   - Create/Edit Project: Grants the right to create and edit projects on the Home page.
   - Delete Project: Grants the right to delete projects on the Home page.
   - Manage User Groups: Grants the right to add, edit, delete, and assign roles to groups.
   - Manage Admin Roles: Grants the right to add, edit, delete, and assign admin roles.
   - Manage Users: Grants the rights to add, edit, delete, activate, deactivate, reset passwords, and assign admin roles to users.
   Note: Users with the Manage Admin Roles, Manage Users, or Manage User Groups permission have the ability to upgrade themselves or other users to system administrators.
7. Click Save.


Managing Users
Administrators, and users assigned the Manage Users permission, manage users by doing the following:
Managing the List of Users on page 55
Adding Users on page 55
Editing the Email Address of a User on page 57
Resetting a User’s Password on page 58
Deleting Users on page 59
Deactivating a User on page 59
Activating a User on page 59
Associating Admin Roles to a User on page 56

Managing the List of Users
You create and manage users from the Users tab on the Management page.

To open the Users tab
1. Log in as an administrator or a user that has the Manage Users permission.
   See Opening the AccessData Web Console on page 23.
2. Click Management.
3. Click Users.

The users list lets you view all the users, including the following columns of information about them:
Username
Email Address of the user
Date that the user was created
Date of last login for the user
Active status of a user
First and Last name of the user
Description

From the users list, you can also add, edit, or delete users. You can set users as active or inactive, reset user passwords, and associate groups to users and admin roles.
When you create and view the list of users, they are displayed in a grid. You can do the following to modify the contents of the grid:
Control which columns of data are displayed in the grid.
If you have a large list, you can apply a filter to display the items that you want.

Adding Users
Each person that uses the console must log in with a username and password. Each person should have their
own user account.
Administrators, and users assigned the Manage Users permission, can add new user accounts.


When a user is created, an entry for that user is created in the system databases.
How you add users differs depending on whether you use Integrated Windows Authentication.
If you are not using Integrated Windows Authentication, you need to configure both the username and
password. In this mode, a password is required, and the Password field is bolded.
If you are using Integrated Windows Authentication, enter the domain username but do not enter a password. In
this mode, a password is not required, and the Password field is hidden.

To add a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the User Details pane, click Add.
3. In the Username field, enter a unique username.
   The name must be between 7 and 32 characters and must contain only alphanumeric characters.
   If you are using Integrated Windows Authentication, enter the user’s domain and username, for example, domain\username.
4. Enter the First and Last name of the user.
5. (Optional) In the Email Address field, enter the email address of the user.
6. If you are not using Integrated Windows Authentication, enter a password in the Password and the Reenter Password fields.
   The password must be between 7 and 20 characters.
7. Click OK.
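For example, a hypothetical username such as jdoe2016 meets the requirement: it is between 7 and 32 characters and contains only alphanumeric characters.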

Associating Admin Roles to a User
Administrators, and users assigned the Manage Users permission, can associate admin roles to users.
See About User Roles and Permissions on page 45.

To associate admin roles to a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, select a user to associate to an admin role.
3. In the bottom pane, select the Admin Roles tab.
4. Click the Add Association button.

Associate Admin Roles Dialog

5. Click the add icon to add the role to the user.
6. Click OK.

Disassociating an Admin Role from a User
Administrators, and users assigned the Manage Users permission, can disassociate admin roles from users.
See About User Roles and Permissions on page 45.

To disassociate admin roles from a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, select a user who you want to disassociate from an admin role.
3. In the bottom pane, click the Admin Roles tab.
4. Check the role that you want to remove.
5. Click the Remove Association button.

Editing the Email Address of a User
Administrators, and users assigned the Manage Users permission, can change the email address of an existing
user. If you need to make more than an email change (such as changing the username), you must delete the
user and then recreate the user with the correct information.

To edit the email address of a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, select the user whose email address you want to edit.
3. In the User Details pane, click Edit.
4. In the Email Address field, enter the email address of the user.
5. Click OK.

Resetting a User’s Password
If a user has forgotten their password, administrators and users assigned the Manage Users permission can
reset passwords for users.
Note: This function is hidden if you are using Integrated Windows Authentication. Reset a password using
Windows methods.
You cannot reset the password of the Service Account.
See Changing the Password of the Service Account on page 58.
When you reset a user’s password, a new password is automatically created. You can then give the new
password to the user. After they log in with the new password, they can change the password themselves.
You cannot reset your own password. To change your own login password, use the Change Password dialog,
not the User page.
See Changing Your Password on page 34.

To reset the password of an administrator or user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, select a user.
3. Click Reset Password.
   A new password for the user is generated and displayed.
4. Copy the password and email it to the user, informing them that they can change the password after logging in.

Changing the Password of the Service Account
This only applies if you are not using Integrated Windows Authentication. The service account password can
only be changed by the user who is logged in as the master administrator. This person is typically the one who
initially performed the installation. The username cannot be changed.
See Changing Your Password on page 34.
You can use the same process as you do for a user.
See Resetting a User’s Password on page 58.


Deleting Users
Users can be deleted by an administrator or a user with the right to delete users.
If you try to recreate a deleted user, you receive a warning that the user already exists in the application and was
marked as deleted. You can continue to create the user and assign user rights as a new user.

To delete users
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. Do one of the following:
   In the users list, select the user that you want to delete. In the User Details pane, click Delete.
   In the users list, select one or more users that you want to delete. Click Delete.
3. In the Confirm Deletion dialog box, click OK.

Deactivating a User
You can deactivate users as needed to make the console unavailable to them. When you deactivate a user, that
user remains in the users list of the Users tab, and has the status of False in the Active column. The user’s data
remains in the database; however, the user cannot log in, and they are not available for any other assignments
or work. The user remains inactive until an administrator reactivates them. You can activate or deactivate users
individually or collectively.
See Activating a User on page 59.

To deactivate a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, check one or more users whose Active status is True.
3. Click Deactivate.
4. In the Deactivate user message box, click Yes.

Activating a User
You can activate users as needed. When a user is activated, they can log in and be available for work. An
activated user remains active until an administrator deactivates them. You can activate or deactivate users
individually or collectively.
See Deactivating a User on page 59.

To activate a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, check one or more users whose Active status is False.
3. In the bottom of the middle pane, click Activate.
4. In the Activate user frame, click Yes.

Associating a Group to a User
Groups are sets of users who perform the same tasks. Putting users into groups makes it easier to assign and manage project permissions for users. Administrators, and users assigned the Manage Users permission, can associate groups to users.
See About User Roles and Permissions on page 45.

To associate groups to a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, select a user who you want to associate to a group.
3. In the bottom pane, click the User Groups tab.
4. Click the Add Association button.

All User Groups Dialog

5. Click the add icon to associate the user to the group.
6. Click OK.


Disassociating a Group from a User
Administrators, and users assigned the Manage Users permission, can disassociate groups from users.
See About User Roles and Permissions on page 45.

To disassociate groups from a user
1. Open the Users tab.
   See Managing the List of Users on page 55.
2. In the user list pane, select a user who you want to disassociate from a group.
3. In the bottom pane, click the User Groups tab.
4. Check the group you want to remove.
5. Click the Remove Association button.


Configuring and Managing User Groups
Groups are sets of users who perform the same tasks. Putting users into groups makes it easier to assign and manage project permissions for users.
The project permissions that you assign to users define the tasks that they can perform. Therefore, if you have a
group of users who all are going to review documents, you can put them in a group and grant them permissions
to review, code, and label documents.
Administrators, and users assigned the Manage Groups permission, can manage groups.

Opening the User Groups Tab
To open the User Groups tab
1. Log in as an administrator or a user with the Manage Groups admin role.
   See Opening the AccessData Web Console on page 23.
2. Click Management.
3. Click User Groups.

The groups list lets you view all the groups, including the following columns of information about them:
User Group Name
Description

From the group list, you can also add, edit, or delete groups. You can associate groups to users and admin roles.
When you create and view the list of groups, they are displayed in a grid. You can do the following to modify the contents of the grid:
Control which columns of data are displayed in the grid.
If you have a large list, you can apply a filter to display the items that you want.


User Groups Tab
The User Groups tab on the Management page can be used to add, edit, delete, and associate user groups on a
global scale. Groups are collections of users who perform the same tasks in the application.

Elements of the User Groups Tab

Filter Options: Allows you to search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38.

Groups List: Displays all groups. Click the column headers to sort by the column.

Refresh: Refreshes the Groups List. See Refreshing the Contents in List and Grids on page 35.

Columns: Adjusts what columns display in the Groups List. See Sorting by Columns on page 35.

Export to CSV: Exports the user group list to a CSV file.

Delete: Deletes the selected group. Only active when a group is selected. See Deleting Groups on page 64.

Add Groups: Adds a group. See Adding Groups on page 63.

Edit Groups: Edits the selected group. See Editing Groups on page 64.

Delete Groups: Deletes the selected group. See Deleting Groups on page 64.

Users Tab: Allows you to associate or disassociate users to groups. See Associating Users/Admin Roles to a Group on page 64.

Admin Roles Tab: Allows you to associate or disassociate admin roles to groups. See Associating Users/Admin Roles to a Group on page 64.

Add Association: Associates a group to a user or admin role.

Remove Association: Disassociates a group from a user or admin role.

Adding Groups
To add a group
1. Open the User Groups tab.
   See Opening the User Groups Tab on page 62.
2. In the Groups Details pane, click Add.
3. In the User Group Name field, enter a unique group name.
   The name must be between 7 and 32 characters and must contain only alphanumeric characters.
4. Enter a Description.
5. Click OK.

Deleting Groups
To delete a group
1. Open the User Groups tab.
   See Opening the User Groups Tab on page 62.
2. Do one of the following:
   In the groups list, highlight the group that you want to delete. In the Groups Details pane, click Delete.
   In the groups list, check one or more groups that you want to delete. Click Delete.
3. In the Confirm Deletion dialog box, click OK.

Editing Groups
To edit a group
1. Open the User Groups tab.
   See Opening the User Groups Tab on page 62.
2. In the Groups Details pane, click Edit.
3. In the User Group Name field, enter a unique group name.
   The name must be between 7 and 32 characters and must contain only alphanumeric characters.
4. Enter a Description.
5. Click OK.

Associating Users/Admin Roles to a Group
From the User Groups tab, you can associate users and admin roles to the selected group.

To associate users/admin roles to a group
1. Open the User Groups tab.
   See Opening the User Groups Tab on page 62.
2. In the group list pane, select a group to which you want to add an association.
3. In the bottom pane, do one of the following:
   Select the Users tab to associate users to the group.
   Select the Admin Roles tab to associate roles to the group.
4. Click Add Association.
5. Click the add icon to add users/roles.
6. Click OK.

All User Groups Dialog

7. Click the add icon to associate the user to the group.
8. Click OK.


Chapter 6

Using System Jobs

About System Jobs
The System Jobs Tab on the Management page is dedicated to managing System Jobs. System Jobs are
primarily used for inventorying agents. As an administrator, you can add system jobs to push the agent to
multiple data sources, ping multiple agents to test connectivity, or map nodes to people.

System Jobs Tab

Elements of the System Jobs Tab

Filter Options: Allows you to filter system jobs in the list. See Filtering Content in Lists and Grids on page 38.

System Jobs List: Displays all system jobs. Click the column headers to sort by the column.

Refresh: Refreshes System Jobs List. See Refreshing the Contents in List and Grids on page 35.

Columns: Adjusts what columns display in the System Jobs List. See Sorting by Columns on page 35.

Delete: Deletes the selected system job. Only active when a system job is selected.

Resubmit Job: Reruns a job under a new name.

Stop Job: Stops a current job.

Add System Job: Adds a system job.

Edit System Job: Edits the selected system job.

Delete System Job: Deletes the selected system job(s).

Job Target Results: Lists all of the results for the selected job. You can resubmit a job, stop a job, or cancel ETM policies on a computer(s) in the job.

Status: Lists the failure status of a job in detail.

Associated Computers: Lists the computers associated with the selected job.

Reports: Reports are only available for agent operations. The following report is available: Agent Op Report.


Adding a System Job
As an administrator, you can add system jobs to push the agent to multiple data sources, ping multiple agents to
test connectivity, or map nodes to people.
See Executing a System Job on page 75.
See Deleting System Jobs on page 76.
See Agent Credentials on page 194.

To add a system job
1. On the menu bar, click Management.
2. Click System Jobs.
3. In the System Jobs Details pane, click Add.

System Job Options

4. In the System Job Options, set the options that you want.
   See System Job Options on page 69.
5. Do one or more of the following:
   Note: Depending on the Target Type that you set in the System Job Options, some of the following panels may not be available.
   On the Groups screen, check the groups that will receive the system job.
   On the Group Computers screen, check the computers for the groups that will receive the system job.
   On the Computers screen, check the computers that will receive the system job.
   On the IP Range screen, specify a valid starting IP address and an ending IP address.
6. Click Save to submit the job for execution.

Installing the Agent from a Command Prompt
There may be times when you need to install an agent from a Command Prompt. The following syntax also configures the heartbeat settings.

To install from a Command Prompt
1. Open the Command Prompt.
2. Enter the following syntax, replacing the <> text with the correct paths and IP address:
   msiexec /i <path to agent MSI> cer=<path to certificate> mama=<IP address:port>
   Example:
   msiexec /i c:\agentinstall\agent.msi cer=c:\agentinstall\accessdata_E1.crt mama=10.10.32.17:54545
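If you push the agent with a software management tool, the same parameters can typically be combined with the standard Windows Installer switches for an unattended install. The following is a minimal sketch that assumes the agent MSI honors the standard msiexec options; the paths, certificate name, and address are the hypothetical values from the example above:

rem Unattended agent install; /qn suppresses the installer UI and /norestart defers any reboot
msiexec /i c:\agentinstall\agent.msi cer=c:\agentinstall\accessdata_E1.crt mama=10.10.32.17:54545 /qn /norestart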

System Job Options
The following table describes the options that are found in the System Job Options when you add a system job.
See Agent Credentials on page 194.
See Editing a System Job on page 75.

System Job Options Dialog

System Job Options
Option

Description

Name

Sets the name of the system job.

Description

Lets you add a description of the system job.

Using System Jobs

Adding a System Job

| 69

System Job Options (Continued)
Option

Description

Job Type – Map Node To People

Associates a computer that has the Agent installed on it to people.
When you edit a system job, you cannot change the job type.

Job Type – Verify Agent Connectivity

Tests the reachability of the agents in an Active Directory group, an
IP range, or on selected computers.
Pinging the agent also updates the agent version number in the
database.
When you edit a system job, you cannot change the job type.

Job Type – Agent Operations

Pushes the agent to Active Directory groups, an IP range, or to
selected computers.
When you edit a system job, you cannot change the job type.

Job Type – Endpoint Threat
Monitoring Policy

Pushes an Endpoint Threat Monitoring policy to the agent. See
About Endpoint Threat Monitoring on page 533.

Template

Allows you to:
 Create a job from an existing job template. See Default System
Job Templates on page 71.
 Save the created job as a template to use later. You can choose
to save target options in the template.

Target Type – All Current and Future
Computers

Targets the system job to all computers, even computers that might
be added at a future point. This target is specifically for Endpoint
Threat Monitoring.

Target Type – Custom

Targets the system job to selected computers.

Target Type – Group

Targets the system job to selected Active Directory groups.

Target Type – IpRange

Targets the system job to a specified IPv4 address range.

Job Expiration

Select the days and hours for when unfinished jobs will expire.

Agent Operations dialog

Displays if you select Agent Operations as your job type. See Agent
Operations on page 73.

ETM Policy dialog

Displays if you select Endpoint Threat Monitoring Policy. See ETM
Policy on page 74.

Uninstall

Select to remove the agent from the machine.

Install

Select to push the agent to the machine. Remember that the agent
install may cause the machine to restart without a warning.
Note: You may need to restart Windows 7 machines before you can perform jobs on them.

Make Public Instance

Configure the agent to check a public instance after the agent is
installed.

Configure Periodic Check-In

Configure the agent to communicate back to the server.

Dynamic Agent Options

Dynamic Agents use an encrypted file-based storage. Other agents
use a traditional protected storage.

Dynamic Agent Option – One Time

Creates the agent as a service that functions until the target
machine(s) restarts.

Dynamic Agent Option – Run Time

Hides the One Time agent on the target machine(s).


Dynamic Agent Option – Persistent

Creates the agent as a service that remains on the target
machine(s), even after a reboot.

Agent expires after

Configures the time an agent will be active. When the time expires
on an agent, the agent removes itself. You can set the time using
days, hours, or minutes.
Note: If the agent is executing a job during the expiration date/time,
the agent will complete the job before removing itself.

Size of Data Store

Sets the size of the data that can be stored on the machine.

Size of Store

The amount of storage allocated to the agent's self-administration. It is not recommended that you change this setting.

Port Number

Enter the port designated to communicate with the agent.

Service Name

Enter the name that you want the agent to be displayed as.

Executable Name

Enter the name of the file that is being run.

Default System Job Templates
The following table lists the default System Job templates available.

Default System Job Templates
Template

Description

Agent Verification

System job that verifies if an agent is on a targeted machine.

Internal Agent with Local
Folder Storage

System job that targets machines that do not communicate outside of the network.

Internal Agent with
Protected Store

System job that installs a non-public agent with a hidden data store on selected machines.

Internal Agent with
Periodic Check in and
Local Storage

System job that installs a non-public agent on selected machines. This agent will
also check in periodically via heartbeat.

Internal Agent with
Periodic Check in and
Protected Store

System job that installs a non-public agent with a hidden store on selected
machines. This agent will also check in periodically via heartbeat.

Map Nodes to People

System job that identifies the person who last logged in and associates the node to that person.

Public Agent Install with
Local Storage

System job that installs an agent on target machines that may communicate
outside of the network.

Public Agent Install with
Protected Store

System job that installs an agent with a hidden data store on target machines that may communicate outside of the network.

Install Temporary Agent

This agent is installed on a temporary basis and will uninstall itself after one day.


Endpoint Threat
Monitoring Default Policy

Default ETM policy job for quick deployment.


Agent Operations
The Agent Operations dialog allows you to configure what the System Agent will do. You can choose to install or
uninstall the agent, configure dynamic agent options, and/or configure additional options.
Note: This dialog is only accessible when you select Agent Operations as the job type in the System Job Options dialog. See System Job Options on page 69.


ETM Policy
The ETM Policy dialog allows you to configure a policy that you can push to the agent. You can choose to enforce the policy by time and select the processes and events that you want to include in the policy. See Using Endpoint Threat Monitoring on page 533.
Note: This dialog is only accessible when you select ETM Policy as the job type in the System Job Options
dialog. See System Job Options on page 69.

ETM Policy System Job Options


Executing a System Job
You can execute a system job and view the percent complete in the System Job list pane.
See Agent Credentials on page 194.
See Deleting System Jobs on page 76.

To execute a system job
1. On the Management tab, click System Jobs.
2. In the System Job list pane, highlight a system job that has not yet started.
3. In the System Job Details pane, click Execute to run the job.

Editing a System Job
You can edit an existing system job only if it has not yet executed. If the job has already executed, you can only
view the job’s settings, or you can create a new system job with the settings that you want.
When you edit a system job, you can change everything in the job except the job type.
See About System Jobs on page 113.
See Agent Credentials on page 194.

To edit a system job
1. On the Management tab, click System Jobs.
2. In the System Job list pane, select the system job that you want to edit.
3. In the right side of the upper pane, click Edit.
4. Edit the system job options that you want.
   See System Job Options on page 69.
5. Do one or more of the following:
   Note: Depending on the Target Type that you set in the System Job Options panel, some of the following panels may not be available.
   On the Groups screen, check the groups that will receive the system job.
   On the Group Computers screen, check the computers for the groups that will receive the system job.
   On the Computers screen, check the computers that will receive the system job.
   On the IP Range screen, specify a valid starting IP address and an ending IP address.
6. Click Save to submit the job for execution.


Deleting System Jobs
You can delete one or more jobs from the System Jobs list pane.
See Agent Credentials on page 194.
See Executing a System Job on page 75.

To delete a system job
1. On the Management tab, click System Jobs.
2. Do one of the following:
   In the System Job list pane, highlight a system job that you want to delete. In the System Job Details pane, click Delete.
   In the System Job list pane, check one or more system jobs that you want to delete. In the lower left corner of the System Job list pane, click Delete.
3. Click OK to confirm the deletion.


Chapter 7

Configuring the System

This chapter will help administrators configure the system to their preferences.

About System Configuration
You can configure many settings for the application system. These are global settings that affect the entire
system.

System Configuration Tab - Standard Settings
The System Configuration tab on the Management page allows you to configure multiple items. This section
describes each item.
Depending on the license that you own and the permissions that you have, you will see some or all of the
following:

Elements of the System Configuration Tab
Element

Description

Active Directory

Allows you to configure Active Directory to synchronize and import Active Directory
users. Synchronization is from Active Directory to the application only.
See Configuring Active Directory Synchronization on page 78.

Email Server

Allows you to configure the Email Notification Server so that you can send notification
emails to specified users for certain events. This configuration is also necessary for
sending Litigation Hold emails to appropriate recipients.
See Configuring the Email Notification Server on page 80.

Create
Notifications

Allows you to configure email notifications for the project and user related events.
See Creating Notifications on page 81.

Manage
Certificates

Allows you to manage certificates used for encrypting AD1 files.


Project Defaults

Allows you to configure the following settings that will be used every time you create a
project:



Default paths for project data
Default options for processing evidence in projects

See Default Evidence Processing Options on page 84.
Export Options

Allows you to set the application to include Australian numbering.

Processing
Priority Options

Allows you to configure how much of the available CPU will be used for processing. If
not configured, the evidence processing engine will use all available CPUs.

Notes
Certificates

Allows you to manage certificates used for encrypting Lotus Notes files.

KFF

Allows you to configure KFF.
See Using KFF (Known File Filter) on page 331.

Other Advanced
Options

Depending on the license that you own and the permissions that you have, you may
see other advanced options.
See Configuring Advanced System Settings on page 192.

Configuring Active Directory Synchronization
You can sync with Active Directory to import your domain users as People. When you sync with Active Directory,
all users are imported. Synchronization only occurs from Active Directory to the application. Changes made to
the application do not sync back to Active Directory.
Domain Users can be imported but they cannot be application users. They are only used as people.
Note: After migrating from an earlier version of the application, you must re-enter the Active Directory
password. If not, the Active Directory data does not appear in the application. See Active Directory
Configuration Options on page 80.

To configure Active Directory synchronization
1. Log in as an administrator.
   See Opening the AccessData Web Console (page 23).
2. Click Management.
3. Click System Configuration.
4. Click Active Directory.

5. In the Active Directory Configuration dialog, set all options and click Next.
   See Active Directory Configuration Options on page 80.
6. Click Next.
7. Select which Active Directory fields to import into User information.
   In the Active Directory Fields dialog box, in the Active Directory Fields list box, select an alias attribute and click the green arrow next to the user field that you want associated with the attribute.
   Bold user field names are required fields.
   The following are examples of fields that you can use:

Active Directory Fields
Active Directory Field    Person Field
givenname                 First Name (Required)
sn                        Last Name (Required)
samaccountname            Username (Required)
displayname               Notes Username
mail                      Email

8. Click Next.
9. Do one of the following:
   To save the settings, but not perform a sync, click Save.
   If you have completed all the settings and are ready to sync, click Save and Sync.

10. View the imported user in the Users tab.


Active Directory Configuration Options
Elements of the Active Directory Configuration Dialog
Element

Description

Server

Enter the server name of a domain controller in the enterprise.

Use Global
Catalog

Select to use the global catalog.

Port

Enter the connection port number used by Active Directory.
The default port number is 389.
If you want to support syncing with an entire Active Directory forest, set the port to 3268. Otherwise, the sync only collects information from one domain instead of the entire forest.
The default ports for communicating with Active Directory are:
LDAP: 389
Secure LDAP (SSL): 636
Global Catalog: 3268
Secure Global Catalog (SSL): 3269

Base DN

Enter the starting point in the Active Directory hierarchy at which the search for users
and groups begins.
The Base DN (Distinguished Name) describes where to load users and groups.
For example, in the following base DN
dc=domain,dc=com
you would replace domain and com with the appropriate domain name to search for
objects such as users, computers, contacts, groups, and file volumes.
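For instance, a Base DN that starts the search at a specific organizational unit might look like the following (a hypothetical value; substitute your own OU and domain components):

ou=Legal,dc=accessdata,dc=com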

User DN

Enter the distinguished name of the user that connects to the directory server.
For example
tjones or <domain>\tjones

Password

Enter the password that corresponds to the User DN account. This is the same
password used when connecting to the directory server.

Active Directory
Authentication

Select to enable authentication against Active Directory on login.

AD Sync Objects

Select to include users.

AD Sync
Recurrence

Configure a daily recurrence by selecting or entering the time of day to start the sync. If
a sync is in progress when the interval occurs, the interval is skipped to allow the
current sync to complete.

Test Configuration

Click to test the current configuration to ensure proper communication exists with the
Active Directory server.

AD
Synchronization

Set to inactive by default.

Configuring the Email Notification Server
You can configure the Email Notification Server so that when you create a litigation hold, your notification emails
are sent successfully.


To configure an email notification server
1. Click Management.
2. Click System Configuration.
3. Click Email Server.
4. In the Email Server Configuration dialog box, set the email options that you want.
   See Email Server Configuration Options on page 81.
5. Click Save.

Email Server Configuration Options
Email Server Configuration Options
Option

Description

SMTP Server Address

Specifies the address of the SMTP mail server (for example,
smtpserver.domain.com or server1) on which you have a valid account. You
must have an SMTP-compliant email system, such as a POP3 mail server, to
receive notification messages from the application.

SMTP Port

Specifies the SMTP port to use. Port 25 is the standard non-SSL SMTP port.
However, if a connection is not established with default port 25, contact the email
server administrator to get the correct port number.

SMTP SSL?

Allows you to configure the use of SSL by the SMTP server. The default SSL port is 465.

Default from Address

Specifies the name of the default email account from which alerts and
notifications are sent.

Domain

Specifies the sender’s domain.

Username

Specifies the sender’s name. The default credentials (Username, Password,
Domain) are optional.

Password

Specifies the sender’s password.

Confirm Password

Confirms the sender’s password that had been entered in the Password field.

Creating Notifications
About Event Notifications
You can configure event notifications for when certain system events occur. You select the type of event for which you want a notification and the users to whom the notification is sent.
You can create notifications for the following events:
Project Created
Project Deleted
User Created
User Deleted


Note: For the Resolution1 CyberSecurity and Resolution1 eDiscovery applications, you can also create
notifications for job events.

Creating Event Notifications
To create an email event notification
1. Click Management.
2. Click System Configuration.
3. Click Create Notifications.
4. Click Select Event Type and select the event type for which you want a notification.
5. Select the user or users that you want to receive the notification.
6. Click Create Event Notification.
7. Click Close.

Viewing and Deleting Job Notifications
You can view and delete either the job notifications that you created or the job notifications to which you are
subscribed.

To view and delete event notifications
1. In the console, click your logged-in name (top-right corner) to open the user actions menu.
2. Click Manage My Notifications.
   For information on managing list columns or filtering items in the list, see Managing Columns in Lists and Grids (page 36).
3. Do one or more of the following:
   In the Notifications I Created group box, under the Notification Type column header, select the job notifications that you want to delete.
   In the Notification I Belong To group box, under the Notification Type column header, select the job notifications that you want to delete.
4. Click Delete.
5. In the Confirm Deletion dialog box, click OK.

Configuring Default Project Settings
About Default Project Settings
You can configure the following settings to use every time you create a project:
Default paths for project data
Default options for processing evidence in projects
You are not required to configure defaults. For processing options, there are defaults that are pre-configured.
If no default project paths are configured, the person creating the project provides this information.


If you configure default settings, you can have the application display those settings when a project is created. If
you allow the values to display, the user creating the project can view and/or change the values.
You can also hide the default values. If hidden, the person creating the project cannot view the options and/or
change them.
See Setting Default Project Settings on page 83.
See Default Evidence Folder Options on page 83.
See Default Evidence Processing Options on page 84.

Setting Default Project Settings
You can configure default project evidence settings.
See About Default Project Settings on page 82.

To set default project options
1. Log in as an administrator.
   See Opening the AccessData Web Console (page 23).
2. Click Management.
3. Click System Configuration.
4. Click Project Defaults.
5. On the Info tab, set the default path settings.
   See Default Evidence Folder Options on page 83.
6. On the Processing Options tab, set the default evidence processing options.
   See Default Evidence Processing Options on page 84.
7. Click Save.

Default Evidence Folder Options
You can define default locations where the project data is stored. These locations are configured whenever you create a project.
See Configuring Export Options on page 84.
Local paths only work on single-box installations.
If a network UNC path is specified, you can validate the path to ensure that the application can access the location. If the path is not validated, you may need to re-enter the path correctly or specify a new path. To verify the path, click the validate icon next to the path field.
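For example (hypothetical paths), a local path and a UNC network path might look like the following:

Local path: D:\Projects\ProjectData
UNC path:   \\fileserver\eDiscovery\ProjectData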

Paths
Project Folder Path

Allows you to specify a local path or a UNC network
path to the project folder.

Job Data Path

Allows you to specify a job data path. The responsive
folder path is the location of reports data.


Default Evidence Processing Options
The processing options configured here are the default options used by a project when it is created.
See About Default Project Settings on page 82.
See Evidence Processing and Deduplication Options on page 209.
If you configure default settings, you can have the application display those settings when a project is created. If
you allow the values to display, the user creating the project can view and/or change the values.
Note: After upgrading the application, Enable Standard Viewer Processing Option is turned off by default
because it is a slower performing processing option. If you want this functionality, you need to enable it
manually in System Configuration > Project Defaults > Processing Options.
You can also hide the default values. If hidden, the person creating the project cannot view the options and/or
change them.
Hover the mouse over the information icon to get information about each item.

Default Evidence Processing Options
Option

Description

Hide Processing Options

Allows you to hide the processing options dialog when a user creates a
project. This forces the project to use the default values set here.
The default is off.

Individual Processing Options.

See Evidence Processing and Deduplication Options on page 209.

Show All Time zones

When selected, allows you to select any time zone recognized by the
operating system when adding evidence.

Configuring Export Options
You can configure Export Options to specify the document ID numbering when exporting an export set to a load
file.
For more information on production sets, see the Exporting documentation.

To configure export settings
1. Log in as an administrator.
   See Opening the AccessData Web Console (page 23).
2. Click Management.
3. Click System Configuration.
4. Click Export Options. The option available is described in the following table.


Alternative Numbering
Option

Description

Use Australian
Numbering Scheme

This option is specific to what options are available when exporting to a load file
format.
The same underlying technology performs both U.S. and Australian numbering.
For example, the Box level in the Australian scheme corresponds to the Volume
level in the U.S. scheme, and the Folder level is the same in both schemes.
Changes the Volume/Document Options page in Export to include the
numbering elements that are needed for Australian document IDs.
For example, the U.S. numbering scheme uses volumes and folders in the load
file.
The Australian numbering scheme uses a party code, boxes, and folders for their
volume structure in the load file.
See the Exporting documentation for more information on Australian numbering.

5. If you want to change from the default U.S. numbering scheme, select a different option.
6. Click Save.


Chapter 8

Using the Work Manager Console and Logs

Using the Work Manager Console
From the Work Manager Console, the administrator can monitor the performance of the Distribution Server and the Work Managers. Click any work manager node by name to view specific server details.
As an administrator, you can use the Work Manager Console to view pending, active, or completed work orders.
You can also view the performance of the entire system or specific Work Managers.

Opening the Work Manager Console
To open the Work Manager Console page
1. Log in as an administrator.
   See Opening the AccessData Web Console (page 23).
2. Click Management.
3. Click Work Manager Console.

Work Manager Console Tab
The Work Manager Console tab, on the Management page, allows administrators to monitor the performance of
the Distribution Server and the Work Managers. Click on any work manager node by name to view specific
server details.
As an administrator, you can use the Work Manager Console to view pending, active, or completed work
orders. You can also view the performance of the entire system or specific Work Managers.

Elements of the Work Manager Console Tab
Element

Description

Overall System
Status Pane

Allows you to view the performance of the entire system or specific Work Managers.

Queued Work
Orders

Displays work orders waiting to execute.


Active Work
Orders

Displays active work orders.

Completed Work
Orders

Displays completed work orders.

Overall System
Performance

Displays overall system performance. You can access the Overall System Performance
panel by expanding the Performance pane on the right side of the page. On the Overall
System Performance panel, the displayed time range indicates the time frame in which
the status information was collected.

See Validating Active Work Orders on page 88.
See Viewing the System Log or Activity Log on page 92.
See Configuring a Work Manager on page 89.


Validating Active Work Orders
Validate Active Work Orders allows you to remove orphaned work orders from the Active Work Orders table. Work orders can become orphaned when the work manager handling the work order shuts down or in some other way loses contact with the Distribution Server. When this happens, however, it does not change the status of the associated job in the Jobs list.
See Work Manager Console Tab (page 86).

To validate active work orders
1. In the Work Manager Console, click a work manager name to view active work orders.
2. At the bottom of the left pane, click Validate Active Work Orders to confirm and update current work orders and their status.


Configuring a Work Manager
You can configure a selected Work Manager by setting various property values.
See Work Manager Console Tab (page 86).

To configure a Work Manager
1. Open the Work Manager Console.
   See Opening the Work Manager Console (page 86).
2. In the left pane of the Work Manager Console, under Overall System Status, click a work manager name.
3. In the right pane, click the Configuration tab.
4. In the Configuration pane, click Edit.
5. When completed, click OK.

Using the System Log and Activity Log
About the System Log
When certain internal events occur in the system, they are recorded in the System Log. This log can be used in conjunction with the Activity Log to monitor the work and status of your system.
The following are examples of the types of events that are recorded:
Completion of evidence processing for an individual project
Exports started and finished
Starting of internal services
Job failures
System errors
Errors accessing computers and shares
You can filter the log information that is displayed based on the following types of criteria:
Date and time of the log message
Log type, such as an error, information, or warning
Log message contents
Which component caused the log entry
Which method caused the log entry
Username
Computer name

System Log Tab
The System Log tab on the Management page is only accessible to the administrator. This log maintains an
historical record of the events that take place in the application. The administrator can view, clear, and export the
log file.

Elements of the System Log Tab
Element

Description

Filter Options

Allows you to filter the items in the System Log.
See Filtering Content in Lists and Grids on page 38.

System Log

Displays all the events. Click the column headers to sort by the column.

Clear Log

Deletes all the events in the log.
See Clearing the Log on page 92.

Export Log

Exports the log. It is recommended that you export and save logs before you clear
them.
See Exporting the Log on page 92.


About the Activity Log
When certain internal activities occur in the system, they are recorded in the Activity Log. This log can be used in conjunction with the System Log to monitor the work and status of your system.
See About the System Log on page 90.
The following are examples of the types of activities that are recorded:
A user logged out
A user is forced to log out due to inactivity
Processing started on the project
A project is opened
You can filter the log information that is displayed based on the following types of criteria:
Category
Activity Date
Activity
Username

Activity Log Tab
The Activity Log tab on the Management page can only be accessed by the administrator. The Activity Log can
help you detect and investigate attempted and successful unauthorized activity in the application and to
troubleshoot problems.
The Activity Log event columns include the activity date, username, activity, and category.
Only an administrator can view, clear, and export the Activity Log file.

Elements of the Activity Log Tab
Element

Description

Filter Options

Allows you to filter the items in the activity log.
See Filtering Content in Lists and Grids on page 38.

Activity Log

Displays all the events. Click the column headers to sort by the column.

Clear Log

Deletes all the events in the log.

Export Log

Exports the log. It is recommended that you export and save logs before you clear them.

Refresh

Refreshes the activity log.
See Refreshing the Contents in List and Grids on page 35.

Columns

Adjusts what columns display in the activity log.
See Sorting by Columns on page 35.


Viewing the System Log or Activity Log
An administrator can view, clear, and export the log file.
Event lists are displayed in a grid. You can modify the contents of the grid as follows:
You can control which columns of data are displayed in the grid.
If you have a large list, you can apply a filter to display only the items you want.

To open the Log page
1. Log in as an administrator.
2. Click Management.
3. Click System Log or Activity Log.
4. To refresh the log view, click the refresh icon.

Clearing the Log
As an Administrator, you can clear the log. When you clear the log, you delete all log entries across all pages. A
new entry is created stating that the log was cleared and who cleared it. Before clearing the log, consider
exporting the log file to keep a historical record.

To clear the log
1. Open the Logs page.
2. In the bottom left corner, click Clear Log.
3. Click Yes to confirm the deletion.

Exporting the Log
Exporting the log lets you maintain a historical record of events in the software and saves a copy of the log for
future use, even after the log is cleared. Only an administrator can view, clear, and export the log file. You can
export the log to a CSV file to allow others, who may not have view log access, the ability to query and access
the saved events.

To export the log
1. Open the Logs page.
   See Activity Log Tab (page 91).
2. In the bottom left corner of the View Log pane, click Export Log.
3. In the Save As dialog box, specify a file name and file location.
4. Click Save.


Chapter 9

Using the Site Server Console

Using the Site Server Console, you can monitor your Site Servers, monitor jobs on the Site Servers, get statuses of the various Site Servers, set the bandwidth throttling on an Agent or Site Server from Network Traffic Controls, and set Phone Home Settings for your Site Servers.

Monitoring Site Servers
You can view statistics about your Site Servers using the Status tab of the Site Server Console.

To view the status of a Site Server
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.

Site Server Status Tab

4. Select a Site Server from the list.
5. Click the Status tab.
   Statistics for the selected Site Server are displayed.


Setting Network Traffic Control
You can set the inbound and outbound data maximums for information passed between the Site Server and the
agent.

To set network traffic control maximums
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.
4. Click the Network Traffic Control tab.

Site Server Console Network Traffic Control Tab

5. Move the slider bars to set the maximums of inbound and outbound data.


Managing Jobs on the Site Server
Monitoring Jobs on the Site Server
You can monitor the status of jobs and tasks on the Site Server using the Jobs tab in the Site Server console.
From the Jobs tab, you can cancel jobs or tasks, and delete jobs.

To view the jobs on the Site Server
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.

Site Server Console Jobs Tab

4. Select a Site Server from the list.
5. Click the Jobs tab.


Deleting Jobs on Site Server
You can delete jobs on the Site Server from the Site Server Console. Deleted jobs will be reflected on the Home
page in the application.

To delete jobs on the Site Server
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.
4. Select a Site Server from the list.
5. Click the Jobs tab.
6. Select the job that you want to delete in the Jobs pane.
7. Click the Delete button.

Canceling Jobs on Site Server
You can cancel jobs on the Site Server from the Site Server Console. Canceled jobs will be reflected on the
Home page in the application.

To cancel jobs on the Site Server
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.
4. Select a Site Server from the list.
5. Click the Jobs tab.
6. Select the job that you want to cancel in the Jobs pane.
7. Click the Cancel button.

Canceling Job Tasks on Site Server
You can cancel single tasks within jobs on the Site Server from the Site Server Console.
To cancel tasks within jobs on the Site Server
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.
4. Select a Site Server from the list.
5. Click the Jobs tab.
6. Select the job that contains the task in the Jobs pane.
7. Select the task that you want to cancel.
8. Click the Cancel button.


Configuring Phone Home Settings
You can configure the phone home settings of the Site Server to have agents check in at specified intervals.

To configure the phone home settings
1. Log in to the application as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.
4. Select a Site Server from the list.
5. Click the Phone Home Settings tab.

Site Server Console Phone Home Settings Tab

6. Set how often you want the agent to connect by setting the Connect Every Minute(s).
7. Set how many times you want the agent to try to connect, if it is unable to connect, by setting the Retry Time(s).
8. Set how many seconds between retries that you want the agent to wait before trying to connect again by setting the Wait Second(s) between retries.
9. Check Refresh Metrics on Startup to have the Phone Home Settings refresh on the agent when it starts up.
10. Click Save.

Replacing Windows Agent Installers
To replace the agent installers
1. Log in as a user with Administrative permissions.
2. Click the Management tab.
3. Click the Site Server Console tab.
4. Click the Agent Installers tab.

Site Server Console Agent Installers Tab


5. In the Agent Installer Location, browse to the new MSI you wish to upload.
6. In the Agent File path, enter the location and name where you'd like to put the new installer, within the "Agent" folder in the Site Server Results Directory:
   To replace the 32-bit installer, enter \x32\AccessData Agent.msi
   To replace the 64-bit installer, enter \x64\AccessData Agent (64-bit).msi
   Note: You may want to back up any existing Agent installers before replacing them.
7. Click Replicate Agent File.


Part 3

Configuring Data Sources

This part describes how to configure data sources and includes the following chapters:
About Data Sources (page 101)
Managing People, Groups, Computers and Network Shares (page 103)
Configuring Public Data Repositories for Collecting Data (page 138)


Chapter 10

About Data Sources

Data Sources are sources of data relevant to a project during an electronic discovery or security investigation. The data can include electronically stored information (ESI) on employee or system management computers, and can refer to people, Network shares, Domino or Exchange email accounts, or other public repositories associated with the person.
Once the application has been configured to collect from a data source, you can execute a job to gather the
data. After the job has executed, you can examine the data in Project Review and filter the evidence. You can
define the scope of the data by data sources in the Navigation panel in Project Review.
You can add, define, delete, and edit data sources from the Data Sources page. You can also manage Network
shares, jobs, groups, and computers and their association with a data source.
You can manage the following types of data sources:

Data Source Type

Link for more information

Groups

See Managing Groups for Collecting Data on page 124.

People

See Managing People for Collecting Data on page 103.

Evidence

See Managing Evidence for Collecting Data on page 134.

Computers

See Managing Computers for Collecting Data on page 115.

Network Shares

See Managing Network Shares for Collecting Data on page 120.

Network Collectors

See Configuring Network Collectors on page 133.

Mobile

See Managing Mobile Devices for Collecting Data on page 136.

3rd Party Data Source Type

Link for more information
Domino

See Configuring for a Domino Server on page 139.

Exchange

See Configuring for an Exchange Online/365 Server on page 141.
See Configuring for Exchange 2003, 2007, and 2010 Servers on page 142.
See Configuring for Exchange 2010 SP1 and 2013 Servers on page 144.

Exchange Index Server

See Configuring for an Exchange Index Server on page 147.

Enterprise Vault

See Configuring for an Enterprise Vault Server on page 149.

Oracle URM

See Configuring for a Oracle URM Server on page 155.

Documentum

See Configuring for a Documentum Server on page 157.

SharePoint

See Configuring for a SharePoint Server on page 159.

Websites

See Configuring for Websites on page 162.

DocuShare

See Configuring for a DocuShare Server on page 164.

Cloud Mail

See Configuring for Cloud Mail on page 166.

OpenText ECM

See Configuring for a OpenText ECM Server on page 168.

FileNet

See Configuring for a FileNet Server on page 169.

Gmail

See Configuring for a FileNet Server on page 169.

Google Drive

See Configuring for Google Drive on page 171.

Druva

See Configuring for Druva on page 172.

CMIS Repository

See Configuring for a CMIS Repository on page 174.


Chapter 11

Managing People, Groups, Computers and
Network Shares

This chapter describes how to configure settings for collecting data from people, computers, and Network shares and includes the following topics:
Managing People for Collecting Data (page 103)
Managing Computers for Collecting Data (page 115)
Managing Network Shares for Collecting Data (page 120)
Configuring Data Source Credant Options (page 123)
Managing Groups for Collecting Data (page 124)
Configuring Network Collectors (page 133)
Managing Evidence for Collecting Data (page 134)
Managing Mobile Devices for Collecting Data (page 136)

Managing People for Collecting Data
About People
The term “person” refers to any identified user who may have data relevant to a project under consideration during electronic discovery. This can include electronically stored information (ESI) on employee or management computers, and can refer to computers, shares, email, or other public repositories associated with the user.
In Review, you can use the Person column to see the person that is associated with each item. You can sort,
filter, and search using the Person column.

About the Person Page
You manage people from the People tab on the Data Sources page. The people are listed in the Person List.
The main view of the Person List includes the following sortable columns:


People Information Options
Option

Description

First Name

The first name of the person. This field is required.

Middle Initial

The middle initial of the person.

Last Name

The last name of the person. This field is required.

Username

The computer username of the person. This field is required.

Domain

The network domain to which the person belongs.

Notes
Username

The username of the person as it appears in their Lotus Notes Directory.
A Lotus Notes username is typically formatted as Firstname Lastname/Organization as in
the following example:
Pat Ng/ICM

When you create and view the list of people, this list is displayed in a grid. You can do the following to modify the contents of the grid:
Control which columns of data are displayed in the grid.
Sort the columns.
Define a column on which you can sort.
If you have a large list, you can apply a filter to display only the items you want.
See Managing Columns in Lists and Grids on page 36.
Highlighting a person in the list populates the Person Details info pane on the right side. The Person Details
info pane has information relative to the currently selected person, beginning with the first name.
At the bottom of the page, you can use the following tabs to view and manage the items that the highlighted
person is associated with:
Computers
Network shares
Evidence
Vault Archives
Lit Holds
Jobs
Job Results
Groups
Projects
Cloud Mail


Person Tab Options
The following table lists the various options that are available under the Person tab.

Person Tab Options
Element

Description

Filter Options

Allows you to filter the person list. See Filtering Content in Lists and Grids on page 38.

Add

Click to add a person. See Adding People on page 107.

Edit

Click to edit a person. See Editing a Person on page 108.

Delete

Click to remove a person. See Removing a Person on page 108.

Refresh

Click to refresh the person list.

Delete

Click to remove multiple people. See Removing a Person on page 108.

Import People

Click to import people from a CSV file. See Importing People From a CSV File on page 108.

Custom Properties

Click to add custom properties. Custom properties must be defined before importing CSV files with custom fields in the headers. See Adding Custom Properties on page 185.

Export to CSV

Export the current set of data to a CSV file.

Columns

Click to adjust what columns display in the Person List. See Managing Columns in Lists and Grids on page 36.

Computers

Allows you to view computers that have been associated to a person.
In the Computer pane, you can do the following:
 Filter the Computers list.
 Add a computer. See Adding a Computer on page 116.
 Edit a computer. See Editing a Computer on page 117.
 Associate and disassociate a computer to a person. See Associating Computers to a Person on page 111.
 Export the Computer list to a CSV file.
 Adjust the columns’ display in the Computers list.
Note: You cannot delete a computer that has been added in this pane.
To delete a computer, see the Computers tab under Data Sources.
See Deleting a Computer on page 117.


Network Shares

Allows you to view network shares that have been associated to a person.
In the Network Shares pane, you can do the following:
 Filter the Network Shares list.
 Add a network share. See Adding a Network Share on page 121.
 Edit a network share. See Editing a Network Share Path on page 122.
 Associate and disassociate a network share to a person. See Associating Network Shares to a Person on page 111.
 Export the Network Share list to a CSV file.
 Adjust the columns’ display in the Network Share list.

Evidence

Allows you to view evidence that has been associated to a person. In the Evidence pane, you can do the following:
 Filter the Evidence list.
 Add Custom Properties. See Adding Custom Properties on page 185.
 Export the Evidence list to a CSV file.
 Adjust the columns’ display in the Evidence list.
 See Managing Evidence for Collecting Data on page 134.

Vault Archives

Allows you to view the Enterprise Vault archives that have been associated to a person. In the Vault Archives pane, you can do the following:
 Filter the Vault Archives list.
 Add a Vault archive. See Adding an Enterprise Vault Archive to a Person on page 111.
 Edit a Vault archive. See Editing an Enterprise Vault Archive Added to a Person on page 112.
 Delete a Vault archive. See Removing an Enterprise Vault Archive Added to a Person on page 112.
 Add Custom Properties. See Adding Custom Properties on page 185.
 Export the Vault Archives list to a CSV file.
 Adjust the columns’ display in the Vault Archives list.

Lit Holds

Allows you to view Lit Holds that have been associated to a person. In the Lit Hold pane, you can do the following:
 Filter the Lit Holds list.
 Export the Lit Holds list to a CSV file.
 Adjust the columns’ display in the Lit Hold list.

Jobs

Allows you to view jobs that have been assigned to a person. In the Jobs pane, you can do the following:
 Filter the Jobs list.
 Export the Jobs list to a CSV file.
 Adjust the columns’ display in the Jobs list.

Job Results

Allows you to view job results from a job that has been assigned to a person. In the Job Results pane, you can do the following:
 Filter the Job Results list.
 Export the Job Results list to a CSV file.
 Adjust the columns’ display in the Job Results list.


Groups

Allows you to view groups that a person belongs to. In the Groups pane, you can do the following:
 Filter the Groups list.
 Export the Groups list to a CSV file.
 Adjust the columns’ display in the Groups list.

Projects

Allows you to view a project that a person belongs to. In the Projects pane, you can do the following:
 Filter the Projects list.
 Associate and disassociate a project to a person. See Associating a Project to a Person on page 113.
 Export the Projects list to a CSV file.
 Adjust the columns’ display in the Projects list.

Cloud Mail

Allows you to add people to a cloud mail server. In the Cloud Mail pane, you can do the following:
 Filter the Cloud Mail list.
 Add a person to a cloud mail server. See Adding a Cloud Mail Server to a Person on page 113.
 Edit the person added to a cloud mail server. See Editing a Cloud Mail Server on page 113.
 Delete the person added to a cloud mail server. See Removing a Cloud Mail Server on page 114.
 Export the Cloud Mail list to a CSV file.
 Adjust the columns’ display in the Cloud Mail list.

Mobile

Allows you to view the mobile devices that have been associated to a person. In the Mobile pane, you can do the following:
 Edit the details of a mobile device.
 Associate and disassociate a mobile device to a person. See Associating Mobile Devices to a Person on page 114.

Adding People
Administrators, and users with permissions, can add people.
You can add people in the following ways:
Manually adding people
Importing people from a file
   See Importing People From a CSV File on page 108.
Creating or importing people while importing evidence
   See Managing Evidence for Collecting Data on page 134.
Importing people from Active Directory.
   See Adding People Using Active Directory on page 109.


Manually Creating People
To manually create a person
1. On the Home > Data Sources > People tab, click Add.
2. In Person Details, enter the person details.
3. Click OK.

Editing a Person
You can edit any person that you have added to the project.

To edit a project-level person
1. On the Home > Data Sources > People tab, select a person that you want to edit.
2. Click Edit.
3. In Person Details, edit person details.
4. Click OK.

Removing a Person
You can remove one or more people from a project.

To remove one or more people from a project
1. On the Home > Data Sources > People tab, select the check box for the people that you want to remove.
2. If you want to remove one person, check the person that you want to remove, and select Delete.
3. If you want to remove more than one person, check the people that you want to remove, and select Delete.
4. To confirm the deletion, click OK.

Importing People From a CSV File
From the People tab, you can import a list of people into the system from a CSV file. Before importing people from a CSV file, you need to be aware of the following items:
You must define any custom columns before importing the CSV file. See Adding Custom Properties on page 185.
Make sure that your columns have headers.
Multiple items in columns must be separated by semicolons (see the sample layout after this list).
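For illustration only, a minimal CSV might look like the following. The column names and values here are hypothetical; the exact columns depend on your license and any custom properties you have defined, so use the Download Sample CSV option in the import dialog for an authoritative template. A column that holds multiple values would list them separated by semicolons.

   First Name,Last Name,Username,Domain,Email
   Pat,Ng,png,accessdata.com,pat.ng@accessdata.com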


To import people from a CSV file
1. On the Home > People tab, click Import People.
2. From the Import People from CSV dialog, choose from the following options:
   Import custom columns. This option is not available if custom columns have not been previously defined.
   Merge into existing people. This option will overwrite fields, such as first name, last name, and email address. It also adds new computers, network shares, etc. to existing associations.
   Note: For an entry to be considered a duplicate in the External Evidence column, the network path, assigned person, and type (such as image or native file) must be the same. If there are any differences between these three fields, the entry is brought in as a new External Evidence item.
   Download Sample CSV. This allows you to download a sample CSV file illustrating how your CSV file should be created. This example is dynamic; if you have created custom columns for people, those custom columns appear in the sample CSV file.
   Note: If your license does not support certain features (such as network shares or computers), the columns for those items appear in the CSV without any data populated in the columns.
3. Once options have been selected, click OK.
4. Browse to the CSV file that you want to upload.
5. After the file has been uploaded, a People Import Summary dialog appears. This displays the number of people added, merged, and/or failed, with details if an import failed. Click OK.

Adding People Using Active Directory
You can add people by importing from Active Directory.
If you have not already done so, be sure that you have configured Active Directory in the application. When
Active Directory is properly configured, the Active Directory filter list opens in the wizard.
See Configuring Active Directory Synchronization on page 78.
The person information automatically populates the Person List when you create people using Active Directory.
You can edit person information.
In order to add users with the correct domain name, the system parses the user’s domain name from the user
principal name provided by Active Directory (For example: accessdata.com\hhadley). This allows the system
to use the full domain name instead of truncating the name (For example, development.accessdata.com will
be used instead of development).
If you find that there are errors in the system’s automatic retrieval of the domain name, you can override the domain name and enter a value manually. See To add people using Active Directory on page 110 for more information.
Note: If you want to have the system truncate the domain name, update your Infrastructure service
configuration file. Edit The AppSetting key ReturnDomainAsFullyQualifiedDomainName and change
the value from UserPrincipalName to CanonicalName.
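As a sketch only (the configuration file location and the surrounding XML vary by installation; only the key name and the two values are taken from this guide), the relevant appSettings entry would look something like this after the change:

   <appSettings>
     <!-- CanonicalName truncates the domain name; UserPrincipalName keeps the fully qualified domain name -->
     <add key="ReturnDomainAsFullyQualifiedDomainName" value="CanonicalName" />
   </appSettings>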


To add people using Active Directory
1. In the Data Sources > People page, click Import from AD.
2. Set the search/Browse depth to All Children or Immediate Children.
3. (optional) Check Domain Name Override if you want to specify the domain or domain portion for the users created. If you leave this unchecked, the application ignores any text in the Domain Name Override field.
   Note: The domain for the users created is drawn and parsed from the userPrincipalName in Active Directory. Because all Active Directories are configured according to the needs of the directories’ organization, what populates automatically based on the userPrincipalName may not suit your organization’s needs. In this case, use Domain Name Override to specify the domain.
4. (optional) In the Domain Name Override field, add the domain for the users created. For example, if you type accessdata.com, the user name will appear as accessdata.com\<username>.
   Note: The domain name is applied once you advance to the second screen of the wizard. Navigating back to the first page and changing the domain name will not affect any users added to the import list and queued for creation. To change the domain name, remove all users from the To Be Added list and add them again from the search results.
5. Select where you want to perform the search.
6. Set the search options to one of the following:
   Match Exact
   Starts With
   Ends With
   Contains
7. Enter your search text.
8. Check the usernames that you want to add as people.
9. Click Add to Import List.
10. Click Continue.
11. Review the members selected, members to add as people, and conflicted members. If you need to make changes, click Back.
12. Click Import.


Associating Computers to a Person
From the Computers pane under the Person tab, you can associate and disassociate computers to a selected
person.

To associate a computer to a person
1. In the Computers list pane, click the add icon to add computers.
2. In the Associate Computers to <person name> dialog, do one of the following:
   In the All Computers pane, click the add icon to add computers to the Associated Computers pane.
   In the All Computers pane, click the remove icon to remove computers from the Associated Computers pane.
3. Click OK.
4. (optional) Click the remove icon to remove a computer from an associated person.

Associating Network Shares to a Person
From the Network Shares pane under the Person tab, you can associate and disassociate network shares to a
selected person.

To associate a network share to a person
1. In the Network Shares list pane, click the add icon to add network shares.
2. In the Associate Network Shares to <person name> dialog, do one of the following:
   In the All Network Shares pane, click the add icon to add network shares to the Associated Network Shares pane.
   In the All Network Shares pane, click the remove icon to remove network shares from the Associated Network Shares pane.
3. Click OK.
4. (optional) Click the remove icon to remove network shares from an associated person.

Adding an Enterprise Vault Archive to a Person
From the Vault Archive pane under the Person tab, you can add an Enterprise Vault archive to a selected
person. Before adding an Enterprise Vault archive to a person, you must first configure the system to collect
from an Enterprise Vault archive.
See Configuring for Enterprise Vault on page 151.


To add an Enterprise Vault archive to a person
1. In the Person list, select the person that you want to add an Enterprise Vault archive to.
2. Under the Vault Archives tab, click Add.
3. In the Archive Name field, enter the name of the Vault archive.
4. Enter the archive ID in the Archive ID field.
5. Select the Enterprise Vault server from the Enterprise Vault pull-down.
6. Select the archive type from the Archive Type pull-down. You can choose Exchange, Notes, or File Store.
7. Click OK.

Editing an Enterprise Vault Archive Added to a Person
You can edit any Enterprise Vault archive server that you have added to a person.

To edit an Enterprise Vault archive server
1. On the Vault Archives tab, select the name and username of the Enterprise Vault archive server that you want to edit.
2. Click Edit.
3. In Vault Archives, edit the Enterprise Vault details.
4. Click OK.

Removing an Enterprise Vault Archive Added to a Person
You can remove one or more Enterprise Vault servers that you have added to a person.

To remove one or more Enterprise Vault archive servers
1. On the Vault Archives tab, select the name of the Enterprise Vault archive server that you want to remove.
2. If you want to remove one server, check the name that you want to remove, and select Delete.
3. If you want to remove more than one server, check the names that you want to remove, and select Delete.
4. To confirm the deletion, click OK.


Associating a Project to a Person
From the Projects pane under the Person tab, you can associate and disassociate projects to a selected person.

To associate a project to a person
1. In the Project list pane, click to add projects.
2. In the Associate Projects to <person> dialog, do one of the following:
   • In the All Projects pane, click to add projects to the Associated Projects pane.
   • In the All Projects pane, click to remove projects from the Associated Projects pane.
3. Click OK.
4. (optional) Click to remove projects from an associated person.

Adding a Cloud Mail Server to a Person
From the Cloud Mail pane under the Person tab, you can add a cloud mail server to a selected person. Before
adding a cloud mail server to a person, you must first configure the system to collect from a cloud mail server.
See Configuring for Cloud Mail on page 166.

To add a cloud mail server to a person
1. In the Person list, select the person that you want to add a cloud mail server to.
2. Under the Cloud Mail tab, click Add.
3. In the Name field, enter the name of the person.
4. Select the cloud mail server from the Cloud Mail Server pull-down.
5. In the Username field, enter the name of the user that you will be collecting from on the cloud server.
6. In the Password field, enter the password of the username on the cloud server.
7. Re-enter the password in the Confirm Password field.
8. Click OK.

Editing a Cloud Mail Server
You can edit any cloud mail server that you have added to a person.

To edit a cloud mail server
1. On the Cloud Mail tab, select the name and username of the cloud mail server that you want to edit.
2. Click Edit.
3. In Cloud Mail Details, edit the cloud mail details.
4. Click OK.


Removing a Cloud Mail Server
You can remove one or more cloud mail servers that you have added to a person.

To remove one or more cloud mail servers
1. On the Cloud Mail tab, select the name and username of the cloud mail server that you want to remove.
2. If you want to remove one name, check the name that you want to remove, and select Delete.
3. If you want to remove more than one name, check the names that you want to remove, and select Delete.
4. To confirm the deletion, click OK.

Associating Mobile Devices to a Person
From the Mobile pane under the Person tab, you can associate and disassociate mobile devices to a selected person. You can associate the following:
• One device to one person
• Multiple devices to one person
• Multiple devices to multiple people

To associate a mobile device to a person
1. In the Mobile list pane, click to add devices.
2. In the Associate Mobile to <person> dialog, do one of the following:
   • In the All Mobile pane, click to add devices to the Associated Mobile pane.
   • In the All Mobile pane, click to remove devices from the Associated Mobile pane.
3. Click OK.
4. (optional) Click to remove a device from an associated person.


Managing Computers for Collecting Data
About Computer Management
One of the primary sources of evidence used in a project originates on workstations (or nodes) managed by a
person. To acquire that data, the application installs an agent on any node that could potentially host evidence. A
Work Manager contacts the agent and requests that files, or an entire drive, be transmitted to the Work Manager.
The Work Manager then runs the Evidence Processing sub-system for processing, placing the evidence into the
data store.
On the network, you can add any number of computers as possible evidence sources for a collection. These
may or may not be associated with the people included in the Person List view. These computers are managed
by way of the Computer Management page.
Note: In order for a collection to complete and processing to start, the application must mark any cancelled node. Because of this, nodes that were cancelled before processing will display a completed processing status, even though processing does not occur on the cancelled node. See Processing a Job
on page 478.
When you create and view the list of computers, they are displayed in a grid. You can do the following to modify
the contents of the grid:
• Control which columns of data are displayed in the grid.
• If you have a large list, you can apply a filter to display only the items you want.

See Managing Columns in Lists and Grids on page 36.
On the bottom of the page, you can associate People, Jobs, and Groups to computers.
See Adding People on page 107.
See Managing Groups for Collecting Data on page 124.

Computer Tab Options
The following table lists the various options that are available under the Computer tab.

Computer Tab Options

Filter Options: Allows you to filter the Computer list.
Add: Click to add a computer. See Adding a Computer on page 116.
Edit: Click to edit a computer. See Editing a Computer on page 117.
Delete: Click to remove a computer. See Deleting a Computer on page 117.
Refresh: Click to refresh the computer list.
Delete: Click to remove multiple computers. See Deleting a Computer on page 117.
Import Computers from CSV: Import a list of computers from a CSV file. See Importing Computers from a CSV file on page 118.
Export to CSV: Export the current set of data to a CSV file.
Columns: Click to adjust what columns display in the Computer List.
People: Allows you to view people that have been associated to a computer. In the People pane, you can do the following:
   • Filter the People list.
   • Associate and disassociate people to a computer. See Associating People to a Computer on page 117.
   • Export the Computers list to a CSV file.
   • Adjust the columns’ display in the Computers list.
Jobs: Allows you to view jobs that have run on a computer. In the Jobs pane, you can do the following:
   • Filter the Jobs list.
   • Export the Jobs list to a CSV file.
   • Adjust the columns’ display in the Jobs list.
Groups: Allows you to view groups that a computer belongs to. In the Groups pane, you can do the following:
   • Filter the Groups list.
   • Export the Groups list to a CSV file.
   • Adjust the columns’ display in the Groups list.
ETM Policies: Allows you to view the policies on a computer. You can remove a policy from one or more computers without stopping the entire ETM job by clicking the ETM Policies Stop job button. See About the Endpoint Threat Monitoring Policy Job on page 533.

Adding a Computer
To add a computer
1. Click Add.
2. Enter the computer name and description.
3. (Optional) Enter Credant Options. See Configuring Data Source Credant Options on page 123.
4. Click Save.


Editing a Computer
You can edit the properties of a computer.

To edit a computer
1. Click Edit.
2. Make any desired changes.
3. (Optional) Enter Credant Options. See Configuring Data Source Credant Options on page 123.
4. Click Save.

Deleting a Computer
You can delete one or more computers from the system. You should avoid removing or deleting a computer if it
is already used in a collection.
Note: If you delete a computer, it may cause the Work Manager to stop functioning.
See About Network Shares on page 120.

To delete a computer
1. Select one or more computers that you want to delete.
2. Click Delete.
3. Verify the deletion by clicking OK.

Associating People to a Computer
From the People pane under the Computers tab, you can associate and disassociate people to a selected
computer.

To associate a person to a computer
1. In the People list pane, click to add people.
2. In the Associate People to <computer> dialog, do one of the following:
   • In the All People pane, click to add people to the Associated People pane.
   • In the All People pane, click to remove people from the Associated People pane.
3. Click OK.
4. (optional) Click to remove a person from an associated computer.


Importing Computers from a CSV file
From the Computers tab, you can import a list of computers into the system from a CSV file. Before importing computers from a CSV file, you need to be aware of the following items:
• Make sure that the Computer column has a header. Also, if you import computers with associations to groups, make sure that the Groups column has a header.
• If you want more than one group associated to a computer, separate the groups by semicolon in the Groups column.
• In the Computer column, you can designate computers by host name or IP address.
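For reference, a minimal CSV that follows these rules might look like the following sketch. The host names, IP address, and group names are hypothetical examples, and the Groups column is only needed if you intend to associate groups during the import:

    Computer,Groups
    WKSTN-001,Legal;Finance
    10.10.32.15,Legal
    WKSTN-002,Finance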

To import computers from a CSV file
1. Click to import a list of computers from a CSV file.
2. From the Import Computers from CSV dialog, choose from the following options:
   • Associate to Groups
   • Merge new groups with existing computers. This allows you to associate new groups to computers that were previously added by CSV import. For example, if Group C is added to the system after computers have been added, you can re-import your list with this option selected. This adds Group C to the list of computers added, in addition to groups in the CSV list that are associated to computers.
     Note: Associations can be added by CSV import, but cannot be deleted by CSV import.
   • Download Sample CSV. This allows you to download a sample CSV file illustrating how your CSV file should be created. This example is dynamic; if you select Associate to Groups, the sample CSV file includes a column for groups as well as for computers.
3. Once options have been selected, click OK.
4. Browse to the CSV file that you want to upload.
5. After the file has been uploaded, a Computer Import Summary dialog appears. This displays the number of computers added, merged, and/or failed, with details if an import failed. Click OK.

Managing Network Shares for Collecting Data
About Network Shares
Shares are network folders on which the person may possess read and write access permissions. You can add
or remove shares from this page, edit a share path, or add and edit a share’s locality and description.
When you create and view the list of shares, they are displayed in a grid. You can do the following to modify the
contents of the grid:
• Control which columns of data are displayed in the grid.
• If you have a large list, you can apply a filter to display only the items you want.

See Managing Columns in Lists and Grids on page 36.
Important: When a job targets a network share, if a file on the share is locked from reading, the job skips that file and adds an entry to the log.

Network Shares Tab Options
The following table identifies the tasks that you can perform from the Network Shares page.

Network Shares Tasks

Filter Options: Allows you to filter the Network Shares list.
Add: Adds a network share. See Adding a Network Share on page 121.
Edit: Lets you edit the network path where the share is located. See Editing a Network Share Path on page 122.
Delete: Deletes the selected share from the list of shares associated with the person. See Deleting Network Shares on page 122.
Refresh: Refreshes the Network Shares list.
Delete: Deletes multiple selected shares from the list of shares associated with the person. See Deleting Network Shares on page 122.
Import Network Shares from CSV: Import a list of network shares from a CSV file. See Importing Network Shares from CSV on page 123.
Export to CSV: Export the current set of data to a CSV file.
Columns: Click to adjust what columns display in the Network Shares list.


People: Allows you to view people that have been associated to a network share. In the People pane, you can do the following:
   • Filter the Network Shares list.
   • Add a person to the network share.
   • Edit a person that has been added to the network share.
   • Associate and disassociate people to a network share.
   • Export the Network Shares list to a CSV file.
   • Adjust the columns’ display in the Network Shares list.
Jobs: Allows you to view jobs that have run on a network share. In the Jobs pane, you can do the following:
   • Filter the Jobs list.
   • Export the Jobs list to a CSV file.
   • Adjust the columns’ display in the Jobs list.
Groups: Allows you to view groups that a network share belongs to. In the Groups pane, you can do the following:
   • Filter the Groups list.
   • Export the Groups list to a CSV file.
   • Adjust the columns’ display in the Groups list.

Adding a Network Share
The network identity used to install the application on the server must have Network
administrator privileges to be able to access all shares.
Note: In order to collect from network shares, configuration changes should be made to the application during
the installation process. Please consult with AccessData’s support during installation if you plan on
collecting from network shares as a data source.
See About Network Shares on page 120.

To add a network share
1. Click Add.
2. Enter the name.
3. Specify the path to a network share.
4. Click Validate to verify the network path that you entered.
5. (Optional) In the Description field, enter a description that can help you identify the network path.
6. (Optional) In the Username and Password fields, specify a username and password to the network share.
   Note: Make sure that when you are setting up your network share, you fill in the username and password fields correctly to avoid errors. If you try to collect a network share with an invalid username/password, the job will go to pending and never finish. When you run a job with network shares, make sure to specify a job expiration date in the job wizard. This will allow the job to expire within a specified time, even if there was an invalid user name or password. See Job Expiration Options on page 460.
7. (Optional) Under User Credentials, select either the No Credentials or New Credentials radio button.
8. Click OK.

Editing a Network Share Path
You can edit a network share path if it is not already included in a collection.
See About Network Shares on page 120.

To edit a network share path
1. On the Data Sources page, click Network Shares.
2. Click Edit.
3. In the Path field, update the Network Share path.
4. Click Validate to verify the network path that you entered.
5. (Optional) In the Username and Password fields, specify a username and password to the network share.
6. (Optional) Enter Credant options. See Configuring Data Source Credant Options on page 123.
7. Click Save.

Deleting Network Shares
You should avoid removing or deleting a network share if it is already used in a collection.
Note: If you delete a network share, it may cause the Work Manager to stop functioning.
See About Network Shares on page 120.

To delete network shares
1. On the Data Sources page, click Network Shares.
2. Select one or more shares that you want to delete.
3. If you want to remove one network share, check the network share that you want to remove, and select Delete.
4. If you want to remove more than one network share, check the network shares that you want to remove, and select Delete.
5. Verify the deletion by clicking OK.


Importing Network Shares from CSV
From the Network Shares tab, you can import a list of network shares into the system from a CSV file.

To import network shares from a CSV file
1. Click to import a list of network shares from a CSV file.
2. From the Import Network Shares from CSV dialog, click OK.
3. Browse to the CSV file that you want to upload.
4. After the file has been uploaded, a Network Shares Import Summary dialog appears. This displays the number of network shares added, merged, and/or failed, with details if an import failed. Click OK.

Configuring Data Source Credant Options
The following table describes the options that are available when you add or remove Credant on network shares
or computers as data sources.
See Credant Site Server Configuration Options on page 198.
See Managing Computers for Collecting Data on page 115.
See Managing Network Shares for Collecting Data on page 120.

Manage Credant Options

No Shield (no device encryption): No encryption is enabled on the network share or computer.
Current Shield File: Uses the currently associated Credant Shield file on the network share or the computer.
Upload New Shield File: Upload a new Credant Shield file that you want to associate with the Credant-encrypted network share or computer. You need to provide the file path and password to the new file.


Managing Groups for Collecting Data
Accessing the Groups Tab
To access the Groups tab
1. Click the Data Sources tab.
2. Click the Groups tab.

Groups Tab Options
The following table identifies the tasks that you can perform from the Groups page.

Groups Tasks

Search: Allows you to search the Groups list.
Add: Adds a group. See Adding a Group Manually on page 126.
Edit: Edits a group. See Editing a Manually Added Group on page 126.
Delete: Deletes a group. See Deleting a Manually Added Group on page 126.
Refresh: Refreshes the Groups list.
People: Allows you to view people that have been associated to a group. In the People pane, you can do the following:
   • Filter the People list.
   • Add a person manually. See Adding a Person to a Group Manually on page 127.
   • Add people to a group using Active Directory. See Adding People to Groups Using Active Directory on page 127.
   • Export the People list to a CSV file.
   • Adjust the columns’ display in the People list.


Computers: Allows you to view computers that have been associated to a group. In the Computers pane, you can do the following:
   • Filter the Computers list.
   • Add a computer to a group. See Adding Computers to a Group Manually on page 129.
   • Edit a computer that has been added to a group. See Editing a Computer Added to a Group Manually on page 130.
   • Remove a computer from a group. See Removing Computers from a Group on page 130.
   • Add computers to a group using Active Directory. See Adding Computers to a Group Using Active Directory on page 128.
   • Export the Computers list to a CSV file.
   • Adjust the columns’ display in the Computers list.
   Note: You cannot delete a computer that has been added in this pane. To delete a computer, see the Computers tab under Data Sources. See Deleting a Computer on page 117.
Network Shares: Allows you to view network shares that are associated with a group. In the Network Shares pane, you can do the following:
   • Filter the Network Shares list.
   • Add a network share to a group. See Adding Network Shares to a Group Manually on page 131.
   • Edit a network share that has been added to a group. See Editing Manually Added Network Shares to a Group on page 132.
   • Add a network share using Active Directory. See Adding Network Shares to a Group using Active Directory on page 130.
   • Export the Network Shares list to a CSV file.
   • Adjust the columns’ display in the Network Shares list.

Synching to Active Directory from Groups
Active Directory is the authoritative source for this information, so you need to make sure that your Active Directory listing is kept current. When you synch to Active Directory, the application loads into your Active Directory listing any changes that have been made to people in an organizational unit since the last synchronization.
The Person unique identifier (UID) is used to filter duplicate names.
Synchronization is from Active Directory to the application only.
Groups, people, computers, and network shares that you have added manually in Groups are different record types and are not synchronized with Active Directory. Instead, you must update such records manually.
Note: Before you attempt to sync Active Directory from Groups, you must first make sure that you have
configured Active Directory synchronization in the application.
See Configuring Active Directory Synchronization on page 78.

To synchronize to Active Directory from Groups
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, in the bottom right corner, click Synchronize.

Adding a Group Manually
You can add groups manually instead of using Active Directory. Added groups can contain people, computers,
and network shares that you have also added manually.
When you add a group manually, it is added to the left-most list box in the Groups list pane area.
See Adding a Person to a Group Manually on page 127.
See Adding Computers to a Group Manually on page 129.
See Adding Network Shares to a Group Manually on page 131.

To add a group manually
1. Click Data Sources.
2. Click the Groups tab.
3. In the right side of the Groups list pane, click Add.
4. On the Group Details list pane, enter a name and description.
5. Click OK.

Editing a Manually Added Group
You can edit any group that you have added manually to the Groups page.

To edit a manually added group
1. In the Data Sources page, click Groups.
2. In the right side of the Groups list pane, click Edit.
3. In the Group Details list pane, edit the options you want.
4. Click OK.

Deleting a Manually Added Group
You can delete any group that you have added manually to the Groups page. When you delete a group, all
associated people, computers, and network shares are removed as well.

To delete a manually added group
1. Click the Data Sources tab.
2. Click the Groups tab.
3. In the left-most search list of the upper pane, select a manually added group (indicated by the icon next to its name).
4. In the right side of the Groups list pane, click Delete.
5. Click OK.

Adding People to Groups Using Active Directory
You can add people to groups using Active Directory.
The Filter Options feature is available throughout the user interface in Groups. You can filter on people,
computers, and network shares to refine the list that is displayed.
Before you add people, be sure that you have configured Active Directory synchronization in Management and
recently synched to Active Directory in Groups.
See Configuring Active Directory Synchronization on page 78.
See Synching to Active Directory from Groups on page 125.
See Adding a Person to a Group Manually on page 127.
See Removing People from a Group on page 128.
After you create the groups that you want, you can add jobs and select the groups whose data you want to
collect.
See Adding a Job on page 455.

To add people to a group using Active Directory
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group to which you would like to add people.
4. In the Associated tab, click People.
5. In the People list pane, click to add people.
6. In the Associate People to <group> dialog, do one of the following:
   • In the All People pane, click to add people to the Associated People pane.
   • In the All People pane, click to remove people from the Associated People pane.
7. Click OK.

Adding a Person to a Group Manually
You can add people to groups manually, instead of using Active Directory.
Note: Groups, people, computers, and network shares that you have added manually in Groups are a different record type and are not synchronized with Active Directory. Instead, you must update such records manually.
See Adding People to Groups Using Active Directory on page 127.

To add people to a group manually
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group to which you would like to add people manually.
4. In the Associated tabs, click People.
5. In the right side of the People list pane, click Add.
6. In the Person Details, enter information about the person.
   Note: The Domain is the network domain that the person belongs to. For Active Directory, the domain would have the following syntax: dc=<domain>,dc=com. For example, for the hypothetical domain accessdata.com, this would be dc=accessdata,dc=com.
7. Click OK.

Removing People from a Group
You can remove one or more people from an associated group.
See Adding People to Groups Using Active Directory on page 127.
See Adding a Person to a Group Manually on page 127.

To remove people from a group
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select a group that contains people that you want to disassociate from the group.
4. In the Associated tabs, click People.
5. In the Person list pane, check the people that you want to delete.
6. In the lower left corner of the pane, click to remove the people from the associated group.

Adding Computers to a Group Using Active Directory
You can add computers to groups using Active Directory.
The Filter Options feature is available throughout the user interface in Groups. You can filter by people,
computers, and network shares to refine the list that is displayed.
Before you add computers, be sure that you have configured Active Directory synchronization in Management
and recently synched to Active Directory in Groups.
See Configuring Active Directory Synchronization on page 78.
See Synching to Active Directory from Groups on page 125.
See Adding Computers to a Group Manually on page 129.
See Removing Computers from a Group on page 130.
After you create the groups that you want, you can add jobs and select the groups whose data you want to
collect.


See Adding a Job on page 455.

To add computers to a group using Active Directory
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group to which you would like to add computers.
4. In the Associated tabs, click Computers.
5. In the Computers list pane, click to add computers.
6. In the Associate Computers to <group> dialog, do one of the following:
   • In the All Computers pane, click to add computers to the Associated Computers pane.
   • In the All Computers pane, click to remove computers from the Associated Computers pane.
7. Click OK.

Adding Computers to a Group Manually
You can add computers to groups manually, instead of using Active Directory.
Note: Groups, people, computers, and network shares that you have added manually in Groups are a different record type and are not synchronized with Active Directory. Instead, you must update such records manually.
See Adding Computers to a Group Using Active Directory on page 128.

To add computers to a group manually
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group to which you would like to add computers manually.
4. In the Associated tabs, click Computers.
5. In the right side of the Computers list pane, click Add.
6. On the Computer Details tab, enter a Computer Name and Description.
7. Click OK.

Editing a Computer Added to a Group Manually
You can edit any computer that you have added manually to the Groups page.

To edit a computer
1. In the Data Sources page, click Groups.
2. In the Associated tabs, click Computers.
3. In the right side of the Computers list pane, click Edit.
4. In the Group Details list pane, edit the options you want.
5. Click OK.

Removing Computers from a Group
You can remove one or more computers from an associated group.
See Adding Computers to a Group Using Active Directory on page 128.
See Adding Computers to a Group Manually on page 129.

To remove computers from a group
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select a group that contains computers that you want to disassociate from the group.
4. In the Associated tabs, click Computers.
5. In the Computers list pane, check the computers that you want to remove from the associated group.
6. In the lower left corner of the pane, click to remove the computers from the associated group.

Adding Network Shares to a Group using Active Directory
You can add network shares to groups using Active Directory.
The Filter Options feature is available throughout the user interface in Groups. You can filter by people,
computers, and Network shares to refine the list that is displayed.
Before you add network shares, be sure that you have configured Active Directory synchronization in
Management and recently synched to Active Directory in Groups.
See Configuring Active Directory Synchronization on page 78.
See Synching to Active Directory from Groups on page 125.
See Adding Network Shares to a Group Manually on page 131.
See Removing Network Shares from a Group on page 132.
After you create the groups that you want, you can add jobs and select the groups whose data you want to
collect.


See Adding a Job on page 455.

To add network shares to a group using Active Directory
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group to which you would like to add network shares.
4. In the Associated tabs, click Network Shares.
5. In the Network Shares list pane, click to add network shares.
6. In the Associate Network Shares to <group> dialog, do one of the following:
   • In the All Network Shares pane, click to add network shares to the Associated Network Shares pane.
   • In the All Network Shares pane, click to remove network shares from the Associated Network Shares pane.
7. Click OK.

Adding Network Shares to a Group Manually
You can add network shares to groups manually, instead of using Active Directory.
Note: Groups, people, computers, and network shares that you have added manually in Groups are a different record type and are not synchronized with Active Directory. Instead, you must update such records manually.
See Adding Network Shares to a Group using Active Directory on page 130.

To add network shares to a group manually
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group to which you would like to add network shares manually.
4. In the Associated tabs, click Network Shares.
5. In the right side of the Network Shares list pane, click Add.
6. On the Network Details tab, enter a Path and Description.
   Note: The local folder path or the UNC path to a network share is where the data resides. Make sure double backslash characters (\\) precede the UNC path. Or, enter the IP address path to a network share; make sure double backslash characters (\\) precede the IP address path. (See the example paths after this procedure.)
7. (Optional) Select User Credentials.
8. Click OK.
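To illustrate the note above, a network share path can be given either by host name or by IP address. Both of the following are hypothetical examples, each preceded by double backslash characters:

    \\fileserver01\hr-share
    \\10.10.32.15\hr-share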


Editing Manually Added Network Shares to a Group
You can edit network shares that have been added to groups manually.

To edit network shares that have been added to a group manually
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select the group that contains the network shares that you want to edit.
4. In the Associated tabs, click Network Shares.
5. In the right side of the Network Shares list pane, click Edit.
6. Click OK.

Removing Network Shares from a Group
You can remove one or more network shares from an associated group.
See Adding Network Shares to a Group using Active Directory on page 130.
See Adding Network Shares to a Group Manually on page 131.

To remove network shares from a group
1. Click the Data Sources tab.
2. Click the Groups tab.
3. On the Groups list pane, use the search panes to select a group that contains network shares that you want to disassociate from the group.
4. In the Associated tabs, click Network Shares.
5. In the Network Shares list pane, check the network shares that you want to remove from the associated group.
6. In the lower left corner of the pane, click to remove the network shares from the associated group.


Configuring Network Collectors
The Network Collectors tab on the Data Sources page is where you can add your Sentinel network collectors for
Network Acquisition jobs. In order for the Resolution1 application to collect network activity data, you must have
a Sentinel network collector configured.
Note: If you enter incorrect information in a required field, the system displays a Submit operation failed error
when attempting to save the network collector. This alerts you immediately to any problems with the data
entered. You can then edit the field(s) and provide correct data.
See Using Sentinel on page 587.

Network Collector Detail Options

DB Provider: Displays a choice between MSSQL and Oracle for the database. These are the only supported databases.
Server: Specifies the address of the server. This field is required. For example: 10.10.32.15.
Port: The port that accepts traffic. This field is required.
Database Name/SID: The name of the database. This field is required.
Description: Describes the network collector.
Username: The username of a user who has access to the server and the database. This field is required.
Password: The password associated with the user. This field is required.
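As a hypothetical illustration of these fields (the address, port, database name, and account below are examples only; the port to use depends on how your database was deployed, commonly 1433 for MSSQL and 1521 for Oracle):

    DB Provider: MSSQL
    Server: 10.10.32.15
    Port: 1433
    Database Name/SID: SentinelDB
    Description: Sentinel collector for the corporate network
    Username: sentinel_reader
    Password: (password for the account above)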


Managing Evidence for Collecting Data
About the Evidence Tracker
The Evidence tab under Data Sources is referred to as the Evidence Tracker. It allows you to add and manage
evidence globally throughout the system. In the Evidence Tracker, you can reuse evidence, much like you can
reuse an existing person or computer. You can track evidence activity and view the:
• Time evidence was collected and sent
• Location of evidence
• Person that the evidence was sent to and how the evidence was sent

In addition to viewing the evidence, you can add additional evidence to specific projects or add evidence to the
system that is available to all people. Evidence can be added without processing or can be processed
immediately after adding. When you add evidence, the Evidence Wizard appears.
See Using the Evidence Wizard on page 405.
In the Evidence Tracker, you can edit evidence fields, allowing users to update information associated with a
given piece of evidence. You can edit description, unprocessed paths, associated people, and custom fields.
Users who have system administration permissions or Evidence administration permissions may view all the
evidence in the system. Users who do not have those permissions can only view the evidence that they are
given permission to see.

Accessing the Evidence Tracker
To access the Evidence Tracker
1. Click the Data Sources tab.
2. Click the Evidence tab.

About the Evidence Tracker Page
You can manage evidence from the Evidence Tracker tab on the Data Sources page.
The following table identifies the tasks that you can perform from the Evidence Tracker page.

Evidence Tracker Options

Filter Options: Allows the user to filter the list.
Evidence Path List: Displays the paths of evidence in the project. Click the column headers to sort by the column.
Add: Click to add evidence with the Evidence Wizard. Evidence is added through the Evidence Wizard and may be added without processing. Evidence added through the Evidence Tracker is not associated with any project, but is available in the Global Evidence list. See Using the Evidence Wizard on page 405.
Edit: Click to edit the evidence selected. See Using the Evidence Wizard on page 405.
Delete: Click to delete the evidence selected. See Using the Evidence Wizard on page 405.
Refresh: Click to refresh the evidence list.
Columns: Click to adjust what columns display in the Evidence Path List.
Export to CSV: Export the current set of data to a CSV file.
Custom Properties: Click to add custom properties. Custom properties must be defined before importing CSV files with custom fields in the headers. See Configuring Custom Fields on page 262.
Delete: Click to delete selected evidence.
Projects: Lists the projects that are associated with selected evidence. Projects are shown by name, person, the processing state, description, and last modified date. You can export the list to a CSV file.
Change History: Tracks the changes that have been made to the evidence. You can view the changes by action type, date, who performed the changes, the field name, the project that the evidence is associated with, the target user, the new value added, and the old value that was changed. You can export the list to a CSV file.
Access Permissions: View who has access permissions to the evidence. You can view by username, first name, last name, and last modified date. You can associate and unassociate evidence to the users listed. You can export the list to a CSV file.


Managing Mobile Devices for Collecting Data
About Mobile Management
For Resolution1 and Resolution1 Security users, the Mobile tab displays data received from mobile devices in
your network. The mobile devices communicate with Resolution1 via data gathered by mobile applications
installed on the devices.
See Using Mobile Threat Monitoring on page 554.
When you receive data from the mobile devices, the devices and pertinent information about them are displayed in a grid. You can do the following to modify the contents of the grid:
• Control which columns of data are displayed in the grid.
• If you have a large list, you can apply a filter to display only the items you want.

Mobile Tab Options
The following table identifies the tasks that you can perform from the Mobile page.

Mobile Tab Tasks

Filter Options: Allows you to filter the Mobile list.
Edit: Lets you edit the description of a mobile device.
Refresh: Refreshes the Mobile list. Refreshing the list updates the latitude/longitude listed, apps installed on the device, and any new devices added.
   Note: With iOS devices, there is a significant lag between the installation of the mobile app and the new device populating the Mobile list. This is because iOS devices only communicate with the public share every five to ten minutes.
Import CSV: Imports people-to-mobile-device associations from a CSV file. See Importing Associations by a CSV File on page 137.
Export to CSV: Export the current set of data to a CSV file.
Columns: Click to adjust what columns display in the Mobile list.
Filter by Installed Application: Expands a pane that lists all of the applications installed on all of the devices that communicate with Resolution1. You can filter the list by application.


Apps: Allows you to view the applications that are installed on the selected device(s). If no devices are selected in the Mobile list, the Apps pane displays all of the applications that communicate with Resolution1. In the Apps tab, you can do the following:
   • Filter the Apps list.
   • Export the Apps list to a CSV file.
   • Adjust the columns’ display in the Apps list.
People: Allows you to view people that have been associated to a mobile device. In the People pane, you can do the following:
   • Filter the People listed.
   • Add a person to be available for mobile devices.
   • Edit a person that has been added to the Mobile tab.
   • Associate and disassociate people to a mobile device. You can associate one person to one device, one person to many devices, many people to one device, or many people to many devices.
   • Export the People list to a CSV file.
   • Adjust the columns’ display in the People list.

Importing Associations by a CSV File
You can associate people to a device by uploading a CSV file. This allows you to add a large number of associations at a time. Adding associations by a CSV file does not overwrite associations that previously exist in the application.
In the Import People To Mobile Device Associations dialog, you can download a CSV template, edit the template with your devices and the users to associate to those devices, then save and upload the template.

To import associations by a CSV file
1. Create a CSV file with each device ID listed with the user(s) that you want to associate to the device. You can separate multiple users with a semicolon. The CSV file must have the column headers DeviceID and Usernames. (See the sample CSV after this procedure.)
2. From Data Sources > Mobile, click Import CSV.
3. In the Import People To Mobile Device Associations dialog, select either:
   • Add New Associations - Create and add new associations. Existing associations will not be changed.
   • Overwrite Existing Associations - Overwrites current associations for devices listed in the CSV file.
   Note: Only devices listed in the CSV file will have their associations overwritten. All other devices are unaffected.
4. Browse to the CSV file.
5. Click Import.
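For reference, a CSV that follows the format described in step 1 might look like the following sketch. The device IDs and usernames are hypothetical; use the device IDs shown in the Mobile list and usernames as they appear in the application:

    DeviceID,Usernames
    A1B2C3D4E5,jsmith
    F6G7H8I9J0,jdoe;rlee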


Chapter 12

Configuring Public Data Repositories for
Collecting Data

In order to collect data from a public data repository, you need to perform the following actions:

Public Data Repository Workflow

Step 1: Configure the application to collect from a public data repository.
Step 2: Run a collection job. See About Collection Jobs on page 452.

This chapter describes how to configure settings for collecting data from public data repositories and includes the following topics:
• Configuring for a Domino Server (page 139)
• Configuring for an Exchange Online/365 Server (page 141)
• Configuring for Exchange 2003, 2007, and 2010 Servers (page 142)
• Configuring for Exchange 2010 SP1 and 2013 Servers (page 144)
• Configuring for an Exchange Index Server (page 147)
• Configuring for Enterprise Vault (page 151)
• Configuring for an Oracle URM Server (page 155)
• Configuring for a Documentum Server (page 157)
• Configuring for a SharePoint Server (page 159)
• Configuring for Websites (page 162)
• Configuring for a DocuShare Server (page 164)
• Configuring for Cloud Mail (page 166)
• Configuring for an OpenText ECM Server (page 168)
• Configuring for a FileNet Server (page 169)
• Configuring for Gmail (page 170)
• Configuring for Google Drive (page 171)
• Configuring for Druva (page 172)
• Configuring for a CMIS Repository (page 174)

For information on using jobs to collect data from public data repositories, see About Jobs (page 447).

Configuring for a Domino Server
You can configure the application to collect data from your IBM Lotus Domino server. Such data might include emails, instant messages, calendars, forum messages, and blogs. You can also collect documents associated with Lotus Symphony, such as word processor documents, spreadsheets, and presentations.
Once you have configured the application to collect from your Domino server, you can choose to collect from this
source with a collection job. In the Job Wizard > Job Options, select Person’s Domino as an option under
People in the Custom Selection pane. At that point, a People page appears in the left pane. You can then
specify what to collect from the Domino server.

Note: The Lotus Notes Client must be run at least once to configure it before it can be collected.
See About Collection Jobs on page 452.

To configure the application for collecting from a Domino Server
1. On the Data Sources page, click Domino.
2. Click Add.
3. In the Details pane, set each field. See Domino Server Configuration Fields on page 139.
4. (Optional) On a tab, do any of the following:
   • Click Edit to edit the parameters of a given configuration.
   • Click Delete to delete a configuration.
5. Click OK.

Domino Server Configuration Fields
The following table describes the fields that are available in the Domino Server configuration dialog box.
See Configuring for a Domino Server on page 139.

Notes Server Configuration Fields

Name: Specifies the name that you want to have appear in the Jobs Wizard for the Domino Server.
Locality: Specifies the location of the server.
Address: Specifies the path to the Domino server.
AdminID File: Specifies the path to the administrator’s ID file on the Domino server.
Password: Specifies the password to the administrator’s ID file.
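As a purely hypothetical illustration of how these fields might be filled in (the name, address, and file path below are examples, not defaults):

    Name: Corporate Domino
    Locality: Menlo Park
    Address: domino.mycompany.com
    AdminID File: C:\Lotus\Domino\data\admin.id
    Password: (password to the administrator's ID file)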


Configuring for an Exchange Online/365 Server
You can configure the application to collect data from your Microsoft Online/365 Exchange server. This data
might include email, calendars, contacts, faxes, and voice mail.
Once you have configured the application to collect from your Exchange Online/365 server, you can choose to
collect from this source with a collection job. In the Job Wizard > Job Options, select Person’s Exchange as
an option under People in the Custom Selection pane. At that point, a People page appears in the left pane.
You can then specify what to collect from the Exchange server.

Before configuring the application for an Exchange Online/365 server, you need to do the following:
• Outlook must be run at least once with the application service account (Exchange Administrator) logged in to create the administrative profile.
• You need to configure Outlook to correctly send and receive against the Exchange Server.
• Make sure that the server’s password is current. Passwords for Microsoft Exchange Online/365 servers have an expiration date, and the application cannot collect from the server with an expired password.
• In order to collect from the server, you need to download Microsoft extensions to run Microsoft PowerShell commands on the local system against the server. Consult with AccessData’s support for more information.

The Exchange Connector can also incorporate an Indexing Service that allows you to perform a search inside
any mail boxes that have been indexed.
See Configuring for an Exchange Index Server on page 147.
See About Collection Jobs on page 452.

To configure the application for collecting from an Exchange Online/365 Server
1. On the Data Sources page, click Exchange.
2. Click Add.
3. In the Details pane, set each field. See Exchange Server Online/365 Configuration Fields on page 142.
4. (Optional) On a tab, do any of the following:
   • Click Edit to edit the parameters of a given configuration.
   • Click Delete to delete a configuration.
5. Click OK.

Associating People to an Exchange Online/365 Server
For the application to collect from an Exchange server, people must be assigned to the server in the Exchange tab. You can associate people to more than one server. Assign people in one of two ways:
• Click Associate To All People in the Exchange Mail Server Details panel to associate people to the server.
  Note: If you have previously associated a list of people to the server, Associate To All People will overwrite the previous associations.
• Add individual people from the People tab in the Exchange panel. To add people, click the Associate link.

Exchange Server Online/365 Configuration Fields
The following table describes the fields that are available in the Exchange Server Online/365 configuration
dialog box.
See Configuring for an Exchange Online/365 Server on page 141.

Exchange Server Online/365 Configuration Fields

Name: Specifies the friendly name of the Exchange Server. This name appears in the Job Wizard for the Exchange Server.
Locality: Specifies the location of the server. This field is not required.
Address: Specifies the path to the Exchange Server. The server name is in the form of 'exchange.mycompany.com', where 'exchange' is determined by your IT staff and 'mycompany' is the name of your company. Alternatively, an IP address can be used. The IP address must point to the front-end Exchange Server.
Username: Specifies the username of the Exchange Online/365 Server.
Password: Specifies the password for the Exchange Online/365 Server.
   Note: Exchange server passwords have an expiration date. You cannot collect from Exchange if the password is expired. Make sure that the password is current before setting up the server in the application.
Use Custom AD Settings: By default, the application uses the local Active Directory server. If you have an advanced scenario, such as a cross-domain scenario, you can select this option and specify the AD Server, AD Port, and AD BaseDN settings.
Associate To All People: Check to associate all of your people to the server. If you have previously associated individual people to a server, this action will overwrite the associations of the individual people.
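As a purely hypothetical illustration of these fields (the friendly name, address, and account below are examples only, not defaults):

    Name: Corporate Exchange Online
    Locality: Menlo Park
    Address: exchange.mycompany.com
    Username: exchangeadmin@mycompany.com
    Password: (current, non-expired password for the account)
    Use Custom AD Settings: unchecked
    Associate To All People: checked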

Configuring for Exchange 2003, 2007, and 2010 Servers
You can configure the application to collect data from your Microsoft Exchange server. This data might include
email, calendars, contacts, faxes, and voice mail.
Outlook must be run at least once with the application service account (Exchange Administrator) logged in to create the administrative profile.
You need to configure Outlook to correctly send and receive against the Exchange Server.


Note: the application does not support EWS (Exchange Web Service integration) for Exchange 2010. EWS is
only supported for 2010 SP1 and 2013 versions.
The Exchange Connector can also incorporate an Indexing Service that allows you to perform a search inside
any mail boxes that have been indexed.
See Configuring for an Exchange Index Server on page 147.
See About Collection Jobs on page 452.

To configure the application for collecting from an Exchange 2003, 2007, or 2010 Server
1. On the Data Sources page, click Exchange.
2. Click Add.
3. In the Details pane, set each field. See Server Configuration Fields for Exchange 2003, 2007, and 2010 on page 143.
4. (Optional) On a tab, do any of the following:
   • Click Edit to edit the parameters of a given configuration.
   • Click Delete to delete a configuration.
5. Click OK.

Associating People to Exchange 2003, 2007, or 2010 Server
For the application to collect from an Exchange server, people must be assigned to the server in the Exchange tab. You can associate people to more than one server. Assign people in one of two ways:
• Click Associate To All People in the Exchange Mail Server Details panel to associate people to the server.
  Note: If you have previously associated a list of people to the server, Associate To All People will overwrite the previous associations.
• Add individual people from the People tab in the Exchange panel. To add people, click the Associate link.

Server Configuration Fields for Exchange 2003, 2007, and 2010
The following table describes the fields that are available in the server configuration dialog for Exchange 2003,
2007, and 2010. See Configuring for Exchange 2003, 2007, and 2010 Servers on page 142.

Server Configuration Fields for Exchange 2003, 2007, and 2010

Name: Specifies the friendly name of the Exchange Server. This name appears in the Job Wizard for the Exchange Server.
Locality: Specifies the location of the server. This field is not required.
Address: Specifies the path to the Exchange Server. The server name is in the form of 'exchange.mycompany.com', where 'exchange' is determined by your IT staff and 'mycompany' is the name of your company. Alternatively, an IP address can be used. The IP address must point to the front-end Exchange Server.
Use Custom AD Settings: By default, the application uses the local Active Directory server. If you have an advanced scenario, such as a cross-domain scenario, you can select this option and specify the AD Server, AD Port, and AD BaseDN settings.
Associate To All People: Check to associate all of your people to the server. If you have previously associated individual people to a server, this action will overwrite the associations of the individual people.

Configuring for Exchange 2010 SP1 and 2013 Servers
You can configure the application to collect data from your Microsoft Exchange server.
Outlook must be run at least once with the eDiscovery service account (Exchange Administrator) logged in to
create the administrative profile.
You need to configure Outlook to correctly send and receive against the Exchange Server.
Note: When configuring the application for either a 2010 SP1 or 2013 server, make sure to properly specify the
correct version. Specifying the wrong version of Exchange will cause the connector to fail.
The Exchange Connector can also incorporate an Indexing Service that allows you to perform a search inside any mailboxes that have been indexed.
See Configuring for an Exchange Index Server on page 147.
See About Collection Jobs on page 452.

To configure the application for collecting from an Exchange 2010 SP1 or 2013 Server
1. On the Data Sources page, click Exchange.
2. Click Add.
3. In the Details pane, set each field.
   See Server Configuration Fields for Exchange 2010 SP1 and 2013 on page 145.
4. Click OK.
5. (Optional) On a tab, do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.


Associating People to an Exchange 2010 SP1/2013 Server
For the application to collect from an Exchange server, people must be assigned to the server on the Exchange tab. You can associate people to more than one server. Assign people in one of two ways:

Click Associate To All People in the Exchange Mail Server Details panel to associate all people to the server.
Note: If you have previously associated a list of people to the server, Associate To All People will overwrite the previous associations.

Add individual people from the People tab in the Exchange panel. To add people, click the Associate link.

Server Configuration Fields for Exchange 2010 SP1 and 2013
The following table describes the fields that are available in the server configuration dialog box for Exchange
2010 SP1 and 2013.
See Configuring for Exchange 2010 SP1 and 2013 Servers on page 144.

Name: Specifies the friendly name of the Exchange Server. This name appears in the Job Wizard for the Exchange Server.

Locality: Specifies the location of the server. This field is not required.

Address: Specifies the path to the Exchange Server. The server name is in the form 'exchange.mycompany.com', where 'exchange' is determined by your IT staff and 'mycompany' is the name of your company. Alternatively, an IP address can be used. The IP address must point to the front-end Exchange Server.

Exchange Web Services Enabled?: This must be checked if you want to use EWS (Exchange Web Services). When collecting from a 2010 SP1 server, you must have this checked in order to use specific 2010 SP1 features, such as recoverable items, archive mail, and filters.

Username: Specifies the username for the server.

Password: Specifies the password for the server.
Note: Exchange server passwords have an expiration date. You cannot collect from Exchange if the password is expired. Make sure that the password is current before setting up the server in the application.

Exchange Server-side Mailbox Indexing Enabled?: Check this option if you have indexing enabled on the server. If you want to use filters on the data collected, this option must be checked.

Use Custom AD Settings: By default, the application uses the local Active Directory server. If you have an advanced scenario, such as a cross-domain scenario, you can select this option and specify the AD Server, AD Port, and AD BaseDN settings.

Associate To All People: Check to associate all of your people to the server. If you have previously associated individual people to a server, this action will overwrite the associations of the individual people.
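When Exchange Web Services Enabled? is checked, the connector relies on the server's EWS endpoint being reachable. As a rough connectivity check (illustrative only, not a product feature), the following Python sketch requests the conventional /EWS/Exchange.asmx endpoint; the host name is a placeholder, and an HTTP 401 response simply means the endpoint is up and expects credentials.

    import urllib.error
    import urllib.request

    # Placeholder front-end host; /EWS/Exchange.asmx is the conventional EWS path.
    url = "https://exchange.mycompany.com/EWS/Exchange.asmx"

    try:
        urllib.request.urlopen(url, timeout=10)
        print("EWS endpoint answered without requiring authentication")
    except urllib.error.HTTPError as err:
        # 401 Unauthorized means the endpoint exists and is asking for credentials.
        print(f"EWS endpoint responded with HTTP {err.code}")
    except urllib.error.URLError as err:
        print(f"Could not reach the EWS endpoint: {err.reason}")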


Configuring for an Exchange Index Server
About Configuring for an Exchange Index Server
The Exchange Connector can incorporate an Indexing Service that allows you to perform a search inside any mailboxes that have been indexed. The Indexing Service can be set up to incrementally update its index on a configurable schedule. The Indexing Service updates the index based on a selected list of people from the application system.
Indexing can be performed on an every X hours basis, such as every 2 hours or every 200 hours, or on a daily
basis at a specified time such as 11:00 PM each night.
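To make the two scheduling modes concrete, here is a small illustrative Python sketch (not part of the product) that computes the next index update for either an interval schedule or a fixed daily time; the function and variable names are invented for the example.

    from datetime import datetime, time, timedelta

    def next_index_run(last_run, every_hours=None, daily_at=None):
        # Interval schedule: run again N hours after the last run.
        if every_hours is not None:
            return last_run + timedelta(hours=every_hours)
        # Daily schedule: run at the given time today, or tomorrow if it has passed.
        candidate = datetime.combine(last_run.date(), daily_at)
        if candidate <= last_run:
            candidate += timedelta(days=1)
        return candidate

    last = datetime(2014, 12, 30, 21, 15)
    print(next_index_run(last, every_hours=2))         # every 2 hours
    print(next_index_run(last, daily_at=time(23, 0)))  # daily at 11:00 PM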
The Indexing Engine has the capability to index in three different ways: Metadata Only, Metadata and Body, or
Metadata, Body, and Attachments.

Indexing Engine Options

Metadata Only: This option will index the metadata for the email message and the metadata for any attachments that are found on that email.

Metadata and Body: This option will index the metadata for the email, the body content for the email, and the metadata for any attachments that are found on that email.

Metadata, Body, and Attachments: This option will index the metadata and body content for both the email message and any attachments found on the email.

Note: The Indexing process can take a large amount of time – especially the first time that indexing is run.
When running a collection for email messages from an Exchange Server, the application performs in one of two ways, depending on how the collection has been set up:

If you do not provide an Email Filter, the collection performs a 'punch out' of the person's Exchange Mailbox.

If an Email Filter is specified, the collection performs a search against the Exchange Index and gathers all the emails that come back in the search results.

Configuring for an Exchange Index Server
The Exchange Index Server should be configured after your Exchange Server has already been configured.

To configure the application for collecting from an Exchange Index Server
1. Complete all the steps to configure an Exchange Server for collecting.
2. See Configuring for an Exchange Online/365 Server on page 141.
3. The Exchange Index Server will show up in the list after it has been installed and its server started.
4. On the Data Sources page, click Exchange Index Server.
5. Click Edit. Notice that there is no option to add.
6. Most of the fields in the Details pane will already be populated. If not, make sure that your initial Exchange server has been configured.
   See Configuring for an Exchange Online/365 Server on page 141.


7. Expand the drop-down menu and select the server you want to index.
8. Select how you want to index: Metadata Only, Metadata and Body, or Metadata, Body, and Attachments.
   See About Configuring for an Exchange Index Server on page 147.
9. Click OK.
10. Go to the People tab and select the people that should be indexed on this Indexing Server.
11. Click Start Indexing to start the indexing schedule.
12. Click Stop Schedule to stop the indexing schedule.


Configuring for an Enterprise Vault Server
About Configuring for an Enterprise Vault Server
You can configure the application so that you can collect data from Symantec Enterprise Vault using the Job
Wizard. This data might include email, files, social media communications, SharePoint content, instant
messages, and other electronically stored information.
Once you have configured the application to collect from your Enterprise Vault server, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, you can select Enterprise Vaults as an option under People in the Custom Selection pane. At that point, an Enterprise Vault Server page appears in the left-hand pane. You can then select the Enterprise Vault servers from which you want to collect.
See About the Jobs Tab on page 449.
See Enterprise Vault Server Collection Options on page 500.

Before you can configure Enterprise Vault to collect data, each Enterprise Vault server must be running the AccessData Enterprise Vault Connector. To install the connector, you need the application's installation media.
See Installing the AccessData Enterprise Vault Connector on page 150.
See Configuring for Enterprise Vault on page 151.

The Enterprise Vault Configuration page has three panels that you can configure:
Enterprise Vault Servers
Enterprise Vault Stores
Unassociated Archives

The Enterprise Vault Stores tab and the Unassociated Archives tab both reference servers on the Enterprise Vault Servers tab.
Note: Symantec fixed an issue with Enterprise Vault that has existed in versions prior to 8.0, service pack 4.
The issue, as stated by Symantec, is that “retrieving large items (that is, files larger than 50 MB) resulted
in corrupt data being returned.” This issue adversely impacts the retrieval process for the application
because the application often retrieves attachments and items from various File System Archives that
are larger than 50 MB. As such, it is highly recommended that you upgrade and install the latest version
of Enterprise Vault, along with the most recent service pack.

Note: When collecting email from an Enterprise Vault Server, make sure that the Task Controller Service and the Enterprise Vault Storage Service are running on the Enterprise Vault Server. Otherwise, the collection will run without errors, but the files are not collected. An error stating that Enterprise Vault is unavailable is recorded in the Integration Service logs. If you get this error, start the services and re-submit the collection.

Installing the AccessData Enterprise Vault Connector
Before you can configure Enterprise Vault to collect data, the Enterprise Vault server at your site must have the
AccessData Enterprise Vault Connector service installed on it. This integration service allows the application's remote Work Managers to issue requests against the local Enterprise Vault program.
The service issues the following limited set of requests against Enterprise Vault:
Lookup archive types (Directory Service)
Apply collection filter criteria against the archives (Index Service)
Retrieve matching documents (Storage Service)

You can install the connector service on one Enterprise Vault Server at a site, or you can install the connector
service on multiple servers across different sites to assist with workload balancing.
The following components are necessary to run the Enterprise Vault Connector service on the Enterprise Vault Server:
Microsoft .NET Framework 3.5 (SP1 or greater) Client Profile
Microsoft .NET Framework 3.5 (SP1 or greater) Extended

If you do not have these components installed, the AccessData Enterprise Vault Connector installation prompts
you to install them before you continue.
The connector service needs read access to all of the Enterprise Vault archives. To accomplish this, do one of the following:
Run the service with the same credentials as the service account under which Enterprise Vault runs (the installation steps below use this scenario).
Create a new domain account and grant it read access to each archive.

Following the installation, you can check that the AccessData Enterprise Vault Integration service has started
using Windows Computer Management.
Use Windows Control Panel to uninstall AccessData Enterprise Vault Connector from the Enterprise Vault
Server.

See Configuring for Enterprise Vault on page 151.

To install the AccessData Enterprise Vault Connector
1. Log on to the Enterprise Vault Server computer by using either the Administrator account or an account that has administrator privileges.
2. Insert the application installation media into the media drive of the server.
3. From the root of the installation media, in the \EnterpriseVaultConnector folder, double-click AccessData Enterprise Vault Connector.exe to start the installation.
4. On the Welcome window, click Next.
5. In the License Agreement window, read the license, and then click I accept the terms in the license agreement.
6. Click Next.
7. In the Destination Folder window, do one of the following:
   Click Next to accept the default install path of the connector service.
   Click Change to select a new install path, and then click Next.
8. In the User Credentials window, specify the credentials of the Enterprise Vault service account, and the domain where the server resides.
9. Click Next.
10. In the Ready to Install the Program window, click Install.

You can now configure Enterprise Vault in the application.

Configuring for Enterprise Vault
Before you can configure the application to collect from Enterprise Vault, make sure that each Enterprise Vault
server at your site has an installation of the AccessData Enterprise Vault Connector.
See Installing the AccessData Enterprise Vault Connector on page 150.

To configure the application to collect from Enterprise Vault
1. On the Data Sources page, click Enterprise Vault.
2. Click Add.
3. In the Details pane, set each field.
   See Enterprise Vault Servers Tab Fields on page 152.
4. Click OK to add the configuration to the Enterprise Vault Servers table.
5. (Optional) On a tab, do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.
6. Do one of the following:
   Repeat steps 2-4 to configure additional Enterprise Vault Servers.
   Continue with the next step.


7. On the Enterprise Vault Stores tab, click Add.
8. Set the Enterprise Vault Store fields.
   See Enterprise Vault Stores Tab Fields on page 153.
9. Click OK to add the configuration to the Enterprise Vault Stores table.
10. Do one of the following:
    Repeat steps 6-8 to configure additional Enterprise Vault Stores.
    Continue with the next step.
11. On the Unassociated Archives tab, click Add.
12. Set the Unassociated Archive fields.
    See Unassociated Archives Tab Fields on page 153.
13. Click OK to add the configuration to the Unassociated Archives table.
14. Do one of the following:
    Repeat steps 10-12 to configure additional Unassociated Archives.
    Continue with the next step.
15. (Optional) On a tab, do any of the following:
    Click Edit to edit the parameters of a given configuration.
    Click Delete to delete a configuration.

Enterprise Vault Servers Tab Fields
Servers must be entered on this tab before you can configure the Enterprise Vault Stores tab or the Unassociated Archives tab.
The following table identifies the available fields in the Enterprise Vault Servers tab, on the Enterprise Vault
Configuration page.
See Configuring for Enterprise Vault on page 151.

Name: Specifies the friendly name of the server as chosen by the administrator.

Address: Specifies the IP address or host name of the Enterprise Vault Server.

Port: The port number that is used for communication from the Enterprise Vault server to the application's web application server. The default is 9132.

Locality: (Optional) Lets you choose from a list of existing localities. The server is associated to the location or IP range of nodes.
Note: If you assign the Enterprise Vault Server or the Enterprise Vault Store a locality, only the Work Managers with that locality are able to collect from the specified archive. Otherwise, leave this field blank so that they can be collected by Work Managers that also have a blank locality.
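A quick way to confirm that nothing is blocking the configured port between the Enterprise Vault server and the application's web application server is a plain TCP connection test, run from the Enterprise Vault server. This is an illustrative Python sketch only; the host name is a placeholder and 9132 is simply the default port listed above.

    import socket

    # Placeholder web application server host and the default port listed above.
    host, port = "ediscovery-web.mycompany.com", 9132

    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"TCP connection to {host}:{port} succeeded")
    except OSError as err:
        print(f"Could not connect to {host}:{port}: {err}")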

Enterprise Vault Stores Tab Fields
The Enterprise Vault Store holds logical containers that are configured on an Enterprise Vault server against
which you would like to perform collections. For each record that you configure and add, you must specify the
Vault Store ID. You can get the Vault Store ID from the General tab on the Vault Store Properties dialog box
within the Enterprise Vault Administration Console.
The following table identifies the available fields in the Enterprise Vault Stores tab, on the Enterprise Vault
Configuration page.
See Configuring for Enterprise Vault on page 151.

Name: Specifies the friendly name of the server as chosen by the administrator.

VaultStore ID: You can find the Vault Store ID on the General tab of the Vault Store properties found within the Enterprise Vault Administration Console.

Archive Type: Lets you choose from the following archive types: Exchange, Notes (Domino), FileStore.

Server: Specifies the Enterprise Vault Server against which you would like to perform collections. The servers in the drop-down list come from the Enterprise Vault Servers tab.
See Enterprise Vault Servers Tab Fields on page 152.

Unassociated Archives Tab Fields
You can individually add the archives stored on an Enterprise Vault server against which you would like to
perform collections.
For each record that you configure and add, you must specify the Archive ID. You can get the Archive ID from
the Advanced tab of the Archive Properties dialog box within the Enterprise Vault Administration Console.
The following table identifies the available fields in the Unassociated Archives tab, on the Enterprise Vault
Configuration page.


See Configuring for Enterprise Vault on page 151.

Name: Specifies the friendly name of the server as chosen by the administrator.

Archive ID: Specifies the necessary Archive ID.

Archive Type: Lets you choose from the following archive types: Exchange, Notes (Domino), FileStore.

Server: Specifies the Enterprise Vault Server against which you would like to perform collections. The servers in the drop-down list come from the Enterprise Vault Servers tab.
See Enterprise Vault Servers Tab Fields on page 152.

Internal Location: This field only applies to FileStore archives. It enables the collection of a specific sub-directory found within a FileStore when it is not prudent to collect the entire archive. The values you specify should be formatted as folder paths relative to the parent archive file. For example, if you wanted to collect only Brad Jones's documents out of the specified archive, you would enter the directory path to the files, such as the following: /bjones/docs/
As long as the path exists within the archive, the files within that folder are correctly configured for future collections. (See the sketch following these fields.)
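Because the Internal Location value is read relative to the parent archive, it can help to normalize what is typed before saving it. The following Python sketch is illustrative only; the helper name and the convention of forward slashes with leading and trailing slashes are assumptions drawn from the /bjones/docs/ example above.

    import posixpath

    def normalize_internal_location(value):
        # Treat backslashes as forward slashes and collapse duplicate separators.
        cleaned = posixpath.normpath("/" + value.replace("\\", "/").strip("/"))
        # Keep the trailing slash used in the documented example (/bjones/docs/).
        return cleaned if cleaned == "/" else cleaned + "/"

    print(normalize_internal_location("bjones\\docs"))    # -> /bjones/docs/
    print(normalize_internal_location("/bjones//docs/"))  # -> /bjones/docs/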


Configuring for an Oracle URM Server
You can configure the application so that you can collect data from Oracle URM (Universal Records Management) servers.
Once you have configured the application to collect from your Oracle URM server, you can choose to collect
from this source with a collection job. In the Job Wizard > Job Options, you can select Oracle URM as an
option in the Other Data Sources pane. At that point, an Oracle URM page appears in the left pane. You can
then select the Oracle URM servers from which you want to collect.
See Adding a Job on page 455.
See Oracle URM Collection Options on page 509.
When you include an Oracle URM public data repository in a collection, you can also use the following Oracle
URM-specific inclusion and exclusion filters:
Email filters:
Date Sent
Date Received

Repository filter:
Date Modified

These filters are in the Date Meta Info tab on the filters pages in the Collection Wizard.
Note: The email filters apply only to items that are flagged as 'Correspondence' in the URM database, whereas Repository filters apply only to items that are not flagged as 'Correspondence' in the URM database.

To configure the application to collect from Oracle URM
1. On the Data Sources page, click Oracle URM.
2. Click Add.
3. In the Details pane, set each field.
   See Oracle URM Configuration Parameters on page 156.
4. Click OK to add the configuration to the table.
5. Do one of the following:
   Repeat steps 2-4 to configure additional Oracle URM servers.
   Continue with the next step.
6. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.


Oracle URM Configuration Parameters
The following table describes the parameters that you can set when you are configuring for collection from an
Oracle URM server.
See Configuring for an Oracle URM Server on page 155.

Repository Name: Name of the configuration to help you identify it in the Job Wizard.

Locality: Specifies the location of the Oracle URM server.

Web Server URL: Sets the URL for the instance of the Oracle URM Web service. The URL is required to communicate with the Oracle URM server.
You can get the URL from any URL to the Oracle URM server on the Web interface. For example, a service request on the Web interface may look like the following:
http://urm.abcompany.com/xpedio/idcplg?IdcService=GET_DOC_PAGE&Action=GetTemplatePage&Page=HOME_PAGE&Auth=Internet
To get the server URL, you discard everything after the question mark. So, using the example above, the URL would be the following:
http://urm.abcompany.com/xpedio/idcplg
(A short sketch of this step follows these parameter descriptions.)
Note: If you are running Oracle URM version 11, the web service URL has a different format than in previous versions. The following is an example of a web service URL from Oracle URM v.11:
http://urmserver:16300/_dav/urm/idcplg

User name: Specifies the Records Management administrator name for the Oracle URM instance.

Password: Specifies the Records Management administrator password for the Oracle URM instance.
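The "discard everything after the question mark" rule is simply a matter of dropping the query string from a full service-request URL. A minimal Python sketch of that step, using the example URL from above (illustrative only):

    from urllib.parse import urlsplit, urlunsplit

    request_url = ("http://urm.abcompany.com/xpedio/idcplg"
                   "?IdcService=GET_DOC_PAGE&Action=GetTemplatePage"
                   "&Page=HOME_PAGE&Auth=Internet")

    # Keep scheme, host, and path; drop the query string and fragment.
    parts = urlsplit(request_url)
    web_server_url = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    print(web_server_url)  # http://urm.abcompany.com/xpedio/idcplg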


Configuring for a Documentum Server
You can configure the application for EMC Documentum, a solution for capturing, organizing, storing, and
delivering unstructured content within an enterprise.
Once you have configured the application to collect from your Documentum server, you can choose to collect
from this source with a collection job. In the Job Wizard > Job Options, you can select Documentum as an
option in the Other Data Sources pane. At that point, a Documentum page appears in the left pane. You can
then select the Documentum servers from which you want to collect.
See Adding a Job on page 455.
See Documentum Collections Options on page 496.

To configure the application to collect from Documentum
1. On the Data Sources page, click Documentum.
2. Click Add.
3. In the Details pane, set each field.
   See Documentum Configuration Fields on page 157.
4. Click OK to add the configuration to the table.
5. Do one of the following:
   Repeat steps 2-4 to configure additional Documentum repositories.
   Continue with the next step.
6. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

Documentum Configuration Fields
The following table describes the parameters that you can set when you are configuring the application to
collect from a Documentum repository.
See Configuring for a Documentum Server on page 157.

Repository Name: Name of the Documentum repository from which the application can collect.

Locality: Specifies the location of the Documentum repository.

Server: Sets the URL for the instance of the Documentum Web service. The URL is required to communicate with the Documentum server.

Port: Provides the port number that is used for communication.

Domain: This field is not mandatory.

Username: Specifies the user name to access the Documentum repository.

Password: Specifies the user's password for access to the Documentum repository.


Configuring for a SharePoint Server
You can configure the application to perform collections on Microsoft SharePoint 2013, 2010 and 2007 servers
by using the Job Wizard. The SharePoint connector can collect from document libraries, wikis, blogs, calendars,
contacts, announcements, surveys, and discussion boards on team and individual sites.
Once you have configured the application to collect from your SharePoint server, you can choose to collect from
this source with a collection job. In the Job Wizard > Job Options, you can select SharePoint as an option in
the Other Data Sources pane. At that point, a SharePoint page appears in the left pane. You can then select the
SharePoint servers from which you want to collect.
See Adding a Job on page 455.
Considerations when configuring the application for a SharePoint Server:
If you want to specify the locality of a SharePoint server, only the Work Managers with that locality can collect from the specified SharePoint server. You may want to leave the Locality field empty so that it can be collected by Work Managers that also have a blank locality.

For the application to collect data from a given SharePoint server, you must make sure that you give the AccessData Service Account full read-only permissions to specific SharePoint servers.

If you want to perform keyword searching on data collected from a SharePoint Server, you must configure an index server. The index server must have FrontPage server extensions. The Search Service also needs to be running in order to perform the searches.

You must ensure that the username/password combination for the SharePoint site that you add has credentials to access all sub-sites of the SharePoint site added. Specifically, the given user needs at least Read Access in the Web App Policy. Otherwise, the Collection Service will not be able to collect from these sub-sites and will sit in Waiting For Retry status, waiting to connect and collect from these sub-sites.

See Setting Service Account Permissions for a SharePoint Server on page 160.

To configure the application to collect from SharePoint
1. On the Data Sources page, click SharePoint.
2. Click Add.
3. In the SharePoint Details pane, set each field.
4. See SharePoint Details Fields on page 160.
5. Click OK to add the configuration to the SharePoint Web Applications table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional SharePoint Web Applications.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.


SharePoint Details Fields
The following table describes the fields that are available in the SharePoint Details dialog box.
See Configuring for a SharePoint Server on page 159.

Web Application URL: Lets you specify the URL of the Web application. The value of this field is typically formatted as the following:
http://server_name:port
where server_name is the host name or IP address of the system hosting the SharePoint Web Application. You can optionally include the port if you are connecting to a specific SharePoint web application. If you provide a URL that does not specify the port, port 80 is used.
If you specify a root path, such as http://server_name/, when you run the Collection Wizard, you can select SharePoint site URLs that may exist within sub-sites off of the root path. For example, you could include URLs of any blogs, discussion boards, document libraries, or wikis within the specified root path. If you specify a SharePoint path to a particular organization's department, you can include the blogs, discussion boards, document libraries, or wikis just within that department site. For example, the path may look like http://server_name/sites/marketing.

Locality: (Optional) Lets you type the name of the desired locality to associate this server to a specific location or IP range of nodes.

Domain: (Optional) If the user account entered in the Username field is a domain user account, the domain must be specified; otherwise, leave this field blank.

Username: Lets you specify the username of an account that is granted Full Read access to SharePoint.
See Setting Service Account Permissions for a SharePoint Server on page 160.

Password: Lets you set the current password of the provided user account.

Setting Service Account Permissions for a SharePoint Server
For the application to collect data from a given SharePoint server, you must make sure that you give the AccessData Service Account full read-only permissions to specific SharePoint servers.
See Configuring for a SharePoint Server on page 159.

To set AccessData Service Account permissions for a SharePoint server
1. On the Windows Start menu, click Administrative Tools > SharePoint 3.0 Central Administration.
2. On the Central Administration page, click the Application Management tab.
3. On the Application Management page, under Application Security, click Policy for Web Application.
4. On the toolbar of the Policy for Web Application page, in the Web Application field, make sure that the correct Web application path and port number is shown.
   If the Web application for which you want to set policy for users is not shown, click Change Web Application and select the Web application that you want.
5. On the toolbar, click Add Users.
6. On the Add Users page, click Next.
7. In the Choose Users section, in the Users box, add the domain\username path.
   Optionally, you can click the check mark icon below the Users box to validate the path.
8. In the Choose Permissions section, check Full Read - Has full read-only access.
9. Click Finish.
   The AccessData Service Account now has the correct permissions set so that you can perform a collection on the specified SharePoint server.
10. Repeat the steps for each Web application whose data you want to collect.

Configuring for Websites

About Collecting Files from Websites
You can configure the application to collect files from websites.
Once you have configured the application to collect from websites, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, you can select Website as an option in the Other Data Sources pane. At that point, a Website page appears in the left pane.
You can then select the website from which you want to collect.
See Adding a Job on page 455.
See Website Collection Options on page 514.

Note: In order to collect from websites, you need to install Microsoft SQL Server Compact 3.5 Service Pack 2 for Windows. You can find this service pack at http://www.microsoft.com/en-us/download/details.aspx?id=5783. Make sure that BOTH the 32-bit and 64-bit versions of SQLCE are installed on a 64-bit system. Only the 32-bit version needs to be installed on a 32-bit system. This needs to be done for every Work Manager you have, in addition to your desktop install.

You can use one of the following options:

Websites that you can collect from

General: You can use this option to collect files from your website. It will collect the following file types: html, cs, icon, gif, png, jpeg, jpg, css.
You can configure how many files you collect based on the following settings:
Maximum file size
How deep you go from the main index.html, based on the number of links. For example, some pages may be viewable only after clicking six different links starting from the home page. You can specify how many links you want to "crawl".
Note: You can customize settings to collect additional file types, such as PDF, ISO, and ZIP files. Contact AccessData support for more information.

Collecting from Websites

To configure the application to collect from websites
1. On the Data Sources page, click Websites.
2. Click Add.
3. In the Details pane, set each field.
4. See Website Details Fields on page 163.
5. Click OK to add the configuration to the Websites table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional websites.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

Website Details Fields
The following table describes the fields that are available in the Websites Details dialog box.
See Configuring for Websites on page 162.

Name: Name of the website that will appear in the Job Wizard.

Locality: (Optional) Lets you type the name of the desired locality to associate this server to a specific location or IP range of nodes.

Address: Specify the URL of the website that you are collecting from. For example: http://wikipedia.org

Throttling Delay (MS): (Optional) Lets you specify a throttling delay when collecting files from general websites. Some Web servers may limit access when many files are being copied from them. You can use this setting to put a delay between copying each file. The setting is in milliseconds.

Depth: Specifies how deep in the Web files the collection will go. You specify the number of links from the home page that you want to "crawl".

Max File Size (MB): Specifies the maximum size of a file that is collected. You specify the size in MB.

User Credentials: When collecting public files from a public website, no credentials are required. If you are collecting files from a Web server that has credentials, only Windows credentials are supported, not forms authentication.

Password: Lets you set the current password of the provided user account.

Configuring for a DocuShare Server
You can configure the application to collect data from Xerox DocuShare.
Once you have configured the application to collect from your DocuShare server, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, select DocuShare as an option in the Other Data Sources pane. At that point, a DocuShare page appears in the left pane. You can then specify what to collect from the DocuShare server.
See DocuShare Collection Options on page 498.

You can collect the following entity types:
File
Bulletin
Email
Mail Messages
Blog
Wiki

To configure settings for collecting DocuShare data
1. On the Data Sources page, click DocuShare.
2. Click Add.
3. In the Details pane, set each field.
4. See DocuShare Repository Details Fields on page 164.
5. Click OK to add the configuration to the DocuShare Repository Details table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional DocuShare Repository servers.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

DocuShare Repository Details Fields
The following table describes the fields that are available in the DocuShare Repository Details dialog box.
See Configuring for a DocuShare Server on page 164.

Name: Name of the website from which the application can collect.

Locality: (Optional) Lets you type the name of the desired locality to associate this server to a specific location or IP range of nodes.

Address: The URL of the server. You can specify an IP address or computer name. For example, http://10.10.4.49

Port: Provides the port number that is used for communication. Typically the port is 8080.

DocuShare Root: Provides the root folder. Typically, this is /docushare.

Domain: This field is not mandatory.

Username: Specifies the user name to access the DocuShare repository.

Password: Specifies the user's password for access to the DocuShare repository.

Configuring for Cloud Mail
You can configure the application to collect data from a cloud mail server, such as Yahoo! Mail. For collecting Gmail, use the Gmail connector.
Once you have configured the application to collect from your cloud mail server, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, select Person's Cloud Mail as an option under People in the Custom Selection pane. At that point, a People page appears in the left pane. You can then specify what to collect from the Cloud Mail server.
See Cloud Mail Server Details Fields on page 166.

Note: Make sure to configure your firewall to allow traffic to and from your cloud server. Failure to do so will generate errors.

To configure settings for collecting data from a cloud server
1. On the Data Sources page, click Cloud Mail.
2. Click Add.
3. In the Details pane, set each field.
4. See Cloud Mail Server Details Fields on page 166.
5. Click OK to add the configuration to the Cloud Mail table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional cloud servers.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.
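Before entering the connection details described in the table that follows, it can help to confirm that the cloud mail host accepts the chosen protocol and encryption from the machine that will run the collection. The following is an illustrative Python sketch using the standard library, not a product feature; the IMAP host, port, and credentials are placeholders.

    import imaplib

    # Placeholder values matching the Address, Port, and credential fields below.
    host, port = "imap-ssl.mail.yahoo.com", 993
    username, password = "person@example.com", "app-password"

    conn = imaplib.IMAP4_SSL(host, port)   # SSL-encrypted IMAP connection
    try:
        conn.login(username, password)
        status, mailboxes = conn.list()    # list mailboxes visible to this account
        print(status, len(mailboxes), "mailboxes")
    finally:
        conn.logout()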
Cloud Mail Server Details Fields
The following table describes the fields that are available in the Cloud Mail Server Details dialog box. Information to complete these fields can be provided by the cloud mail server host.

Name: Name of the cloud mail server from which the application can collect.

Connection Type: Specify whether the connection to the cloud mail server is POP or IMAP.

Address: The URL of the cloud server. You can specify an IP address or computer name. For example, http://imap-ssl.mail.yahoo.com

Port: Provides the port number that is used for communication.

Encryption Type: Your cloud mail server may require a secure connection (SSL) or other encryption. Choose between None, SSL, TLS, or Auto.

Locality: (Optional) Lets you type the name of the desired locality to associate this server to a specific location or IP range of nodes.

Password: Lets you set the current password of the provided user account.

Configuring for an OpenText ECM Server
You can configure the application to collect data from OpenText ECM.
Once you have configured the application to collect from your OpenText ECM server, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, you can select OpenText ECM as an option in the Other Data Sources pane. At that point, an OpenText ECM page appears in the left pane. You can then select the OpenText ECM repository from which you want to collect.
See Adding a Job on page 455.
See OpenText ECM Collection Options on page 508.

To configure settings for collecting OpenText ECM data
1. On the Data Sources page, click OpenText ECM.
2. Click Add.
3. In the Details pane, set each field.
4. See OpenText ECM Repository Details Fields on page 168.
5. Click OK to add the configuration to the OpenText ECM Repository Details table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional OpenText ECM Repository servers.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

OpenText ECM Repository Details Fields
The following table describes the fields that are available in the OpenText ECM Repository Details dialog box.

Url: Specifies the URL for the OpenText ECM Content Server.

Username: Specifies the username for the OpenText ECM Content Server.

Password: Specifies the password for the OpenText ECM Content Server.

Configuring for a FileNet Server
You can configure the application to collect data from IBM FileNet.
Once you have configured the application to collect from your FileNet server, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, you can select FileNet as an option in the Other Data Sources pane. At that point, a FileNet page appears in the left pane. You can then select the FileNet repository from which you want to collect.
See Adding a Job on page 455.
See FileNet Collection Options on page 506.

To configure settings for collecting FileNet data
1. On the Data Sources page, click FileNet.
2. Click Add.
3. In the Details pane, set each field.
4. See FileNet Repository Details Fields on page 169.
5. Click OK to add the configuration to the FileNet Repository Details table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional FileNet Repository servers.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

FileNet Repository Details Fields
The following table describes the fields that are available in the FileNet Repository Details dialog box.

Host: Specifies the host to access the FileNet repository.

Port: Specifies the port number to access the FileNet repository.

Username: Specifies the username for the FileNet repository.

Password: Specifies the password for the FileNet repository.

Configuring for Gmail
You can configure the application to collect data from Gmail. If you want to collect from a cloud mail server other than Gmail, you can use the Cloud Mail connector.
Once you have configured the application to collect from Gmail, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, select People > Select Person's Gmail as an option in the Custom Selection pane.
See Configuring for Cloud Mail on page 166.

Note: Make sure to configure your firewall to allow traffic to and from your Gmail server. Failure to do so will generate errors.

Before the application can be configured to collect data from Gmail, important information must be obtained from Google:
1. First, the Provisioning API must be enabled from the Google Apps control panel before calls can be made to the Email Audit API. For more information, see http://support.google.com/a/bin/answer.py?hl=en&answer=60757.
2. Next, an API Project must be created, which will authorize API access. Google will generate an OAuth 2.0 Client ID. The Client ID and the Client Secret obtained will be used in the application's configuration. To create an API Project, log in to Gmail and go to https://code.google.com/apis/console/.
3. Once you have obtained the Client ID and the Client Secret, you can configure the settings to collect Gmail.

To configure settings for collecting Gmail
1. On the Data Sources page, click Gmail.
2. Click Add.
3. In the Details pane, set the following fields:
   Domain
   Google API Client ID - this is the Client ID obtained when creating the API project. See above for more information.
   Google API Client Secret - this is the Client Secret obtained when creating the API project. See above for more information.
4. Click OK to add the configuration to the Gmail Details table.
5. Click the Google button to authorize Gmail access.
6. Google's dialog will appear, asking permission to access the domain's collector. Click Allow access.
7. Copy the key provided by Google, and paste it into the Authorization Code field.
8. Click OK.

Configuring for Google Drive
You can configure the application to collect all of the Google docs from a Google Drive.
Once you have configured the application to collect from your Google Drive, you can choose to collect from this source with a collection job.
In the Job Wizard > Job Options, select Google Drive as an option in the Other Data Sources pane. At that point, a Google Drive page appears in the left pane. You can then select from which Google Drive to collect.
See Adding a Job on page 455.
See Enterprise Vault Server Collection Options on page 500.

Configuring for Google Drive

To configure the application for Google Drive
1. On the Data Sources page, click Google Drive.
2. Click Add.
3. In the Details pane, set the fields.
4. See Google Drive Details Fields on page 171.
5. Click OK to add the configuration to the Google Drive table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional Google Drive sites.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

Google Drive Details Fields
The following table describes the fields that are available in the Google Drive Details dialog box.
See Configuring for Google Drive on page 171.

Name: Specifies the name which will appear in the Job Wizard.

Username: Specifies the username of the Google Drive.

Password: Specifies the password of the Google Drive.

Configuring for Druva
You can configure the application to collect data from your Druva endpoint backup solution.
You need to be aware of the following considerations when configuring the application to attach to a Druva server:

The application uses the WebDAV protocol and is case-sensitive.

Microsoft limits WebDAV to a maximum file size of 50 MB that can be downloaded. This limit is imposed to protect the system from a Denial of Service (DoS) attack. In order to change the file size, follow the instructions found at http://support.microsoft.com/kb/900900. The maximum file size supported by WebDAV is 4 GB.
Note: Files that exceed the WebDAV limit are not collected. If attempting to collect a file larger than the limit, an error from Site Server occurs, stating "Unable to access."

The path set in the configuration tab must be an SSL UNC path. For example: \\Druva-InSync.lab.local@SSL\webdav\TestLegalHold.

The Web Client service on the machine running Site Server must be currently running in order to collect data.

Once you have configured the application to collect from your Druva server, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, select Druva as an option in the Other Data Sources pane. At that point, a Druva page appears in the left pane. You can then select from which Druva server to collect.
See Adding a Job on page 455.
See Druva Collection Options on page 515.

Configuring for Druva

To configure the application for Druva
1. On the Data Sources page, click Druva.
2. Click Add.
3. In the Details pane, set the fields.
4. See Druva Details Fields on page 173.
5. Click OK to add the configuration to the Druva table.
6. Do one of the following:
   Repeat steps 2-4 to configure additional Druva servers.
   Continue with the next step.
7. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

Druva Details Fields
The following table describes the fields that are available in the Druva Details dialog box.
Name: Specifies the name of the Druva server to which you are connecting. This name must be exact because the connector is case-sensitive.

Path: Specifies the path to the Druva server. This path must be an SSL UNC path. For example, \\Druva-InSync.lab.local@SSL\webdav\TestLegalHold.

Locality: Specifies the location of the Druva server.

Username: Specifies the name of the user from the Druva server.

Password: Specifies the password needed to collect from the repository.

Configuring for a CMIS Repository
You can configure the application to collect data from your content management systems through CMIS (Content Management Interoperability Services). You connect to the various content management systems by connecting to a CMIS server. Once you connect to the CMIS server, you can select the specific repository or repositories (for example, a Documentum or FileNet data source) from which to collect.
You can upload a custom filter for the CMIS repository.
See Custom Filters for CMIS on page 175.
When you have configured the application to collect from your CMIS repository, you can choose to collect from this source with a collection job. In the Job Wizard > Job Options, select CMIS Repository as an option in the Other Data Sources pane. At that point, a CMIS Repository page appears in the left pane. You can then select from which CMIS repository to collect.
See Adding a Job on page 455.
See CMIS Repository Details Fields on page 174.

Configuring for a CMIS Repository

To configure the application for a CMIS repository
1. On the Data Sources page, click CMIS Repository.
2. Click Add.
3. In the Details pane, set the fields.
4. See CMIS Repository Details Fields on page 174.
5. Click Connect to retrieve repositories for the newly created CMIS configuration.
   You must have the URL, Username, and Password fields populated in order to retrieve repositories.
6. Select a repository from the dropdown list. These repositories are the individual data sources, such as a Documentum or FileNet data source.
7. Click OK to add the configuration to the CMIS Repository table.
8. Do one of the following:
   Repeat steps 2-4 to configure additional CMIS servers.
   Continue with the next step.
9. (Optional) Do any of the following:
   Click Edit to edit the parameters of a given configuration.
   Click Delete to delete a configuration.

CMIS Repository Details Fields
The following table describes the fields that are available in the CMIS Repository Details dialog box.

Url: Specifies the URL of the CMIS repository.

Username: Specifies the username for the CMIS repository drive.

Password: Specifies the password for the CMIS repository drive.

Protocol Binding: Specifies whether the repository deploys either Atom Publishing or Web Services.

Connect: Allows you to connect to the CMIS repository. If the connection is unsuccessful, a dialog appears warning you of the error.
Note: You must populate the URL, Username, and Password in order to connect to the CMIS repository.

Select Repository: Allows you to select the specific repository from which to collect data. This dropdown does not populate until you have connected to the CMIS repository.

Custom Filters for CMIS
You can upload a custom filter that applies to the data gathered from CMIS.
The filter should be written in XML, and a sample XML filter is available for download from the Upload Custom Filter dialog found on the configuration page. Any custom filters uploaded will be applied to all configured CMIS repositories.
The custom filter can be combined with the Job Wizard filters. The custom filter in combination with the Job Wizard filters acts as an OR operator, not an AND. This means that a piece of collected data will match either the Job Wizard filters or the custom filters, but the data does not have to match both filters concurrently.
A sample of the correct syntax and how to write a custom CMIS filter can be found below.

Example of Custom Filter for CMIS

To upload a custom filter
1. Go to Data Sources > CMIS Repository.
2. Click Upload Custom Filter.
   CMIS Custom Filter XML Dialog
3. Browse to the location where you have saved the custom XML filter.
4. (Optional) Click Sample Filter to download a copy of a custom XML filter.
5. (Optional) Click Current Filter to view the current filter uploaded to the system.
6. Click Save.

Part 4
Managing Projects
This part describes how to manage projects and includes the following chapters:
Introduction to Project Management (page 178)
Using the Project Management Home Page (page 180)
Creating a Project (page 205)
Managing People (page 223)
Managing Tags (page 233)
Setting Project Permissions (page 240)
Running Reports (page 251)
Configuring Review Tools (page 258)
Monitoring the Work List (page 276)
Managing Document Groups (page 278)
Managing Transcripts and Exhibits (page 281)
Managing Review Sets (page 293)
Project Folder Structure (page 298)
Using Language Identification (page 301)
Using KFF (Known File Filter) (page 331)

Chapter 13
Introduction to Project Management
This guide is designed to help project/case managers perform common tasks. Project/case manager tasks are performed on the Home page and in Project Review. Project/case managers can perform their tasks as long as the administrator has granted the project manager the correct permissions.
See the Administrator's guide for more information on how administrators can grant global permissions.

About Projects
When you want to assess a set of evidence, you create a project and then add evidence to the project. When evidence is added to the project, the data is processed so that it can later be reviewed, coded, and labeled by a team of reviewers using the Project Review interface.

Workflow for Project/Case Managers
Administrators, or users that have been given rights to manage projects, use the Home page of the console to create and manage projects by doing the following tasks.

Basic Workflow for Project Managers

Create a project: See Creating a Project on page 205.

Configure the user/group permissions for a project: See Setting Project Permissions on page 240.

Loading Data: You can load data using import or by processing the evidence into the system. See the Loading Data documentation for more information.

Manage evidence and people: See the Loading Data documentation.

Configure the review tools to be used in Project Review: See Configuring Markup Sets on page 258. See Creating Category Values on page 264. See Configuring Custom Fields on page 262. See Configuring Highlight Profiles on page 270.
View details about the project: See Viewing and Editing Project Details on page 220.

Monitor the Work List: See Work List Tab on page 276. See Monitoring the Work List on page 276.

Manage Document Groups: See Managing Document Groups on page 278.

Upload Transcripts/Exhibits: See Updating Transcripts on page 282.

Create Production Sets: See the Exporting documentation.

Export the selected evidence: See the Exporting documentation.

Run reports: See Running Reports on page 251.

Chapter 14
Using the Project Management Home Page

Viewing the Home Page
Administrators, and users given permissions, use the Home page to do the following:
Create projects
View a list of existing projects
Add evidence to a project
Launch Project Review

If you are not an administrator, you will only see either the projects that you created or projects to which you were granted permissions.

To view the home page
1. Log in to the console.
2. In the application console, click Home.
   The Project List panel is on the left side of the page.
   See The Project List Panel on page 182.

Administrators, and users with the Create/Edit Projects permission, create projects to add and process evidence.
See About Projects on page 178.

Introducing the Home Page
The project management Home page is where you see the Project list and details about the project.

Elements of the Home Page

Project List Panel: See The Project List Panel on page 182.

Project Details: See Viewing and Editing Project Details on page 220.

Jobs: See Introduction to Jobs on page 447.

Evidence: The evidence in the project. See the Loading Data Guide for more information.

People: People that are associated to the project. You can add people and associate and disassociate people to the project. See Managing People for a Project on page 187. In the Evidence tab at the bottom, you can also see any people that have been associated to specific evidence within the project.

Tags: See Configuring Tagging Layouts on page 265. See Managing Tags on page 233.

Permissions: See Setting Project Permissions on page 240.

Reports: See Running Reports on page 251.

Processing Options: The processing options used for the project. See the Admin Guide for more information.

KFF: See Using KFF (Known File Filter) on page 331.

Printing/Export: See the Export documentation.

Lit Hold: Resolution1 eDiscovery and Resolution1 Platform only.

Markup Sets: See Configuring Markup Sets on page 258.

Tagging Layout: See Configuring Tagging Layouts on page 265.

Highlight Profiles: See Configuring Highlight Profiles on page 270.

Work List: See Monitoring the Work List on page 276.

Custom Fields: See Configuring Custom Fields on page 262.

Redaction Text: See Configuring Redaction Text on page 274.

The Project List Panel
The Home page includes the Project List panel. The Project List panel is the default view after logging in. Users can only view the projects for which they have been given permissions.
Administrators and users, given the correct permissions, can use the project list to do the following:
Create projects.
View a list of existing projects.
Add evidence to a project.
See Importing Data on page 404. Using the Project Management Home Page Introducing the Home Page | 182 Launch Project Review. If you are not an administrator, you will only see either the projects that you created or projects to which you were granted permissions. The following table lists the elements of the project list. Some items may not be visible depending on your permissions. Elements of the Project List Element Description Create New Project Click to create a new project. Filter Options Allows you to search and filter all of the projects in the project list. You can filter the list based on any number of fields associated with the project, including, but not limited to the project name. See Filtering Content in Lists and Grids on page 38. Project Name Column Lists the names of all the projects to which the logged-in user has permissions. Status Column Lists the status of the projects: Not Started - The project has been created but no evidence has been imported. Processing - Evidence has been imported and is still being processed. Completed - Evidence has been imported and processed. Note: The Processing Status may show a delay of two minutes behind the actual processing of the evidence. This is only noticeable when processing a small set of evidence. See Refresh below. Size Column Lists the size of the data within the project. Action Column Allows you to add evidence to a project or enter Project Review. Allows you to add data to the selected project. Add Data Project Review Allows you to review the project using Project Review. See the Reviewers Guide for more information. Page Size Drop-down Allows you to select how many projects to display in the list. The total number of projects that you have permissions to see is displayed. Total Lists the total number of projects displayed in the Project List. Page Allows you to view another page of projects. Refresh Custom Properties If you create a new project, or make changes to the list, you may need to refresh the project list Add, edit, and delete custom columns with the default value that will be listed in the Project list panel. When you create a project, this additional column will be listed in the project creation dialog. See Adding Custom Properties on page 185. Using the Project Management Home Page Introducing the Home Page | 183 Elements of the Project List (Continued) Element Project Property Cloning Export to CSV Description Clone the properties of an existing project to another project. You can apply a single project’s properties to another project, or you can pick and choose properties from multiple individual projects to apply to a single project. See Using Project Properties Cloning on page 219. Export the Project list to a .CSV file. You can save the file and open it in a spreadsheet program. Add or remove viewable columns in the Project List. Columns Highlight project and click Delete Project to delete it from the Project List. Delete Using the Project Management Home Page Introducing the Home Page | 184 Adding Custom Properties With Custom Properties, you can add, edit, and delete custom columns with the default value that will be listed in the Project list panel. When you create a project, these additional columns will be listed in the project creation dialog and will be available to populate when editing projects that have already been created. 
When you create a new project, any custom properties marked as required will be available at the top of the Create New Project dialog, while non-required custom properties will be at the bottom of the dialog. When you edit an existing project, all custom properties will be at the bottom of the pane, whether they are required or not. However, the required custom properties are bolded to differentiate them from non-required custom property fields.

To add a custom property
1. In the console, in the Project List, click Custom Properties.
2. Click Add.
3. Configure the custom property details and click OK.

The following table lists the options available to you in the Custom Properties dialog:

Custom Properties Dialog
Add: Allows you to add a custom property.
Edit: Allows you to edit a custom property.
Delete: Allows you to delete a custom property.
Refresh: Allows you to refresh the Custom Properties list.
Name: This is a required field for a new custom property.
Description: This field is optional.
Required Field: Mark to make the custom property a required column. If the custom property column is a required field, any previously created project must have this field populated when you edit the project.
Type: Choose whether the column is a text field or a choice field.
Text: Choose to make the custom property field a text field.
Default Value: When this field is populated for text custom properties, the Default Value will display on all existing projects.
Choice: Choose to make the custom property field a choice field. Enter one choice per line, separated by the Enter key. The first choice listed in the choice field will be the default for all projects. If you do not want the first choice to be the default choice, leave the first line blank.

Managing People for a Project

About People
The term "person" references any identified person or custodian who may have data relevant to evidence in a project. You can associate people to a specific project and to specific evidence items within that project. In Review, you can use the Person column to see the person that is associated with each item. You can sort, filter, and search using the Person column.
Note: A person references people that are associated with evidence; they are not the users of the Summation product.

About Managing People
When you manage people, you do the following:
Create a person
Edit the properties of a person
Delete a person
Associate a person with or disassociate a person from a project
Associate a person to a specific evidence item
You can create a person in the following ways:
Using the People tab on the Data Sources page. This creates people at a global level which can be associated with any project. See the Data Sources chapter.
Using the People tab on the Home page. This creates people for a specific project. See Adding People on page 189.
Using the Add Evidence Wizard. See About Associating People with Evidence on page 407.
When managing people, there are more options on the Data Sources page than on the Home page. For example, on the Data Sources page, you can delete people and add them using Active Directory.
You associate people to projects in the following ways:
Associate a person to a whole project when you create a project.
See Creating Projects on page 205.
Associate a person to a whole project after you create a project. See Associating a Project to a Person on page 191.
Associate a person to specific evidence that you add to a project. See About Associating People with Evidence on page 407.

About the Project's Person Tab
You can manage people for a project from the People tab on the Home page. The people are listed in the Person List. The main view of the Person List includes the following sortable columns:

People Information Options
First Name: The first name of the person.
Last Name: The last name of the person.
Username: The computer username of the person.
Email Address: The email address of the person.
Creation Date: The date that the person resource was created.
Domain: The network domain to which the person belongs.

When you create and view the list of people, this list is displayed in a grid. You can do the following to modify the contents of the grid:
Control which columns of data are displayed in the grid.
Sort the columns.
Define a column on which you can sort.
If you have a large list, you can apply a filter to display only the items you want.
See Managing Columns in Lists and Grids on page 36.
Highlighting a person in the list populates the Person Details info pane on the right side. The Person Details info pane has information relative to the currently selected person, beginning with the first name. At the bottom of the page, you can use the Evidence tab to view the evidence that the person is associated with.

Project's Person Tab Options
The following table lists the various options that are available under the Person tab.
Note: To import people from Active Directory or to delete a person, use the Data Sources page.

Person Tab Options
Filter Options: Allows you to filter the person list. See Filtering Content in Lists and Grids on page 38.
Add: Click to add a person. See Adding People on page 189.
Edit: Click to edit a person. See Editing a Person on page 190.
Refresh: Click to refresh the person list.
Import People: Click to import people from a CSV file. See Importing People From a CSV File on page 191.
Export to CSV: Export the current set of data to a CSV file.
Columns: Click to adjust what columns display in the Person List. See Managing Columns in Lists and Grids on page 36.
Evidence: Allows you to view evidence that has been associated to a person. In the Evidence pane, you can do the following: filter the Evidence list; add Custom Properties (see Adding Custom Properties on page 185); export the Evidence list to a CSV file; adjust the columns' display in the Evidence list. See Managing Evidence for Collecting Data on page 134.

Adding People
Administrators, and users with permissions, can add people. You can add people in the following ways:
Manually adding people
Importing people from a file. See Importing People From a CSV File on page 191.
Creating or importing people while importing evidence. See Managing Evidence for Collecting Data on page 134.
Importing people from Active Directory. See Adding People Using Active Directory on page 137.

People Information Options
First Name: The first name of the person. This field is required.
Middle Initial: The middle initial of the person.
Last Name: The last name of the person. This field is required.
Username: The computer username of the person. This field is required.
Domain: The network domain to which the person belongs.
Notes Username: The username of the person as it appears in their Lotus Notes Directory. A Lotus Notes username is typically formatted as Firstname Lastname/Organization, as in the following example: Pat Ng/ICM
Email Address: The email address of the person.

Manually Creating People for a Specific Project
To manually create a person
1. On the Home > Data Sources > People tab, click Add.
2. In Person Details, enter the person details.
3. Click OK.

Editing a Person
You can edit any person that you have added to the project.
To edit a project-level person
1. On the Home > Data Sources > People tab, select a person that you want to edit.
2. Click Edit.
3. In Person Details, edit the person details.
4. Click OK.

Importing People From a CSV File
From the People tab, you can import a list of people into the system from a CSV file. Before importing people from a CSV file, be aware of the following items:
You must define any custom columns before importing the CSV file. See Adding Custom Properties on page 185.
Make sure that your columns have headers. (A hedged example layout appears at the end of this chapter.)
Multiple items in columns must be separated by semicolons.

To import people from a CSV file
1. On the Home > People tab, click Import People.
2. From the Import People from CSV dialog, choose from the following options:
Import custom columns. This option is not available if custom columns have not been previously defined.
Merge into existing people. This option will overwrite fields, such as first name, last name, and email address. It also adds new computers, network shares, and so forth to existing associations.
Note: For an entry to be considered a duplicate in the External Evidence column, the network path, assigned person, and type (such as image or native file) must be the same. If there are any differences between these three fields, the entry is brought in as a new External Evidence item.
Download Sample CSV. This allows you to download a sample CSV file illustrating how your CSV file should be created. This example is dynamic; if you have created custom columns for people, those custom columns appear in the sample CSV file.
Note: If your license does not support certain features (such as network shares or computers), the columns for those items appear in the CSV without any data populated in the columns.
3. Once options have been selected, click OK.
4. Browse to the CSV file that you want to upload.
5. After the file has been uploaded, a People Import Summary dialog appears. This displays the number of people added, merged, and/or failed, with details if an import failed. Click OK.

Associating a Project to a Person
From the Projects pane under the Person tab, you can associate and disassociate projects to a selected person.
To associate a project to a person
1. In the Person list pane, click to add people.
2. In the Associate People to dialog, do one of the following:
In the All People pane, click to add projects to the Associated People pane.
In the All People pane, click to remove projects from the Associated People pane.
3. Click OK.
4. (Optional) Click to remove people from an associated project.
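As referenced in Importing People From a CSV File above, the following is a minimal, hypothetical layout for a people-import CSV. The authoritative header names and column order come from the dynamic Download Sample CSV option; the headers, addresses, and share paths shown here are illustrative assumptions, and the semicolon in the last column shows how multiple items are separated within a single column.

  First Name,Middle Initial,Last Name,Username,Domain,Email Address,Network Shares
  Pat,,Ng,png,EXAMPLECORP,pat.ng@example.com,\\fileserver\legal;\\fileserver\hr

If custom columns have been defined for people, they appear as additional headers, and they must already exist in the system before the file is imported.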
Chapter 15 Configuring Advanced System Settings

This chapter will help administrators configure the advanced system settings for the application. These are global settings that affect the entire system. See Configuring the System on page 77.

System Configuration Tab - Advanced Settings
The System Configuration tab on the Management page allows you to configure multiple items. This section describes the advanced items. For other options, see System Configuration Tab - Standard Settings on page 77. The following options display depending on your license and permissions:

Elements of the System Configuration Tab
Agent Credentials: You can define the credentials used by the system to install the Agent on a target computer. See Agent Credentials on page 194.
Share Credentials: You can define the credentials used by the system to access network shares. See Share Credentials on page 194.
Sentinel Database: You can configure the Sentinel database. See Configuring the Sentinel Database on page 194.
EFS Certificates: You can configure EFS Certificates for decrypting file-system level encryption. See Configuring EFS Certificates on page 195.
Atlas Configuration: You can configure PSS Atlas to enable the integration of its database with AccessData's collection features. See Configuring PSS Atlas on page 196.
Redirected Acquisition: You can use Redirected Acquisition to direct the results of a full disk (logical or physical) collection from the subject agent to the configured collection data path, and bypass the local Work Manager. See Configuring Redirected Acquisition on page 197.
Credant Configuration: You can configure a Credant site server so that it automatically finds and uses Credant Shield files for Credant encrypted network shares and computers in an organization. See Configuring Credant Settings on page 198.
Person 3rd Party Database Sync: Allows you to connect to an outside database and import business-only fields to people instead of adding them by hand. See Configuring the Person 3rd Party Database Sync on page 199.
Cerberus Weighting Templates: Allows you to set custom weight scores. See Using Cerberus Malware Analysis on page 369.
FireEye Integration: Allows you to configure FireEye integration. See Security Device Integration on page 520.
Endpoint Threat Alert: Allows you to enable and configure Endpoint Threat Alerts. See Using Endpoint Threat Alerting on page 528.
Alert Responses: Allows you to set actions that automatically execute when triggered by alerts. See Configuring Alert Responses on page 201.
Palo Alto Networks: Allows you to configure Palo Alto firewall integration. See Using Palo Alto Networks on page 525.
Geolocation: Configures Geolocation data. See Configuring the Geolocation Requirements in the Reviewer Guide. You can download the Reviewer Guide from the Help/Documentation link. See User Actions on page 33.
Note: Custom Geolocation IP data that you have previously entered in the Geolocation Configuration block is not retained when upgrading the application from 5.5 to 5.6. You must re-add the custom Geolocation IP data after upgrading the application.
Important: Any time you save new data, the KFF Service is automatically restarted. This can affect running KFF jobs.
Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 193 Elements of the System Configuration Tab Element Description Alert Rules Engine Allows you to view the location of the Alert Rules Engine configuration file. See Alert Rules Engine on page 546. ThreatBridge Depending on the license that you own and the permissions that you have, you have the option to configure your ThreatBridge server. See Using ThreatBridge on page 501. Other Standard Options Depending on the license that you own and the permissions that you have, you may see other standard configuration options. See System Configuration Tab - Standard Settings on page 77. Agent Credentials You can define the credentials used by the system to install the Agent on a target computer. Enter a Domain, Username, and Password in the provided fields. Share Credentials You can define the credentials used by the system to access network shares. Enter a Domain, Username, and Password in the fields provided. Configuring the Sentinel Database With the Sentinel database configured, the system can gather information from the database, process that information, and display it for review in the Review Sentinel Data dialog or when reviewing a job that retrieves specific information from the Sentinel database. To configure the Sentinel Database 1. Click Management > System Configuration > Sentinel Database. Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 194 2. Enter the configuration information into the Sentinel Database Configuration dialog as follows: Note: Contact your database administrator for Sentinel database configuration information. Sentinel Database Configuration Field Definition DB Provider Select the database provider for the Sentinel database. Server Enter the server name or IP address where the Sentinel database resides. Database Name/SID Enter the database name and SID for the Sentinel database. Port Enter the port that accesses the Sentinel database. Username Enter a valid username used to access the Sentinel database. Password Enter the password used to access the Sentinel database. Confirm Password Re-enter the previous password to confirm the proper password was entered. 3. Click Save. Configuring EFS Certificates EFS is a file system driver that provides file system-level encryption in most Microsoft Windows operating systems. Files are transparently encrypted on NTFS file systems to protect confidential data from attackers with physical access to the computer. To decrypt the EFS files so that the system can process them, you will need to configure an EFS certificate. You can configure an EFS certificate under the Management tab. To configure EFS certificates 1. Log in as an administrator. 2. Click Management. 3. Click System Configuration. 4. Click EFS Certificates. 5. On the EFS Certificates page, do one of the following: In the Certificate field, type the path to a .pfx certificate file. Click Browse to locate a .pfx certificate file. Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 195 6. In the Password field, enter the password that is necessary to access the .pfx file. 7. Click Save Certificate to add the certificate to the Certificate list box. 8. (Optional) Repeat steps 3-5 to add additional certificates. In the Certificates list box, select a certificate and then click to delete the certificate. 
Configuring PSS Atlas
PSS Atlas enables global companies to minimize legal risk, comply with diverse legal duties for information, and proactively manage information based on its business value. You can configure PSS Atlas to enable the integration of its database with AccessData's collection features.
Note: If you installed the PSS Atlas Integration component during the application's installation, you also need to either install an instance of Oracle ODAC (available for download from Oracle) or install the PSS Atlas Integration component on the same computer where FTK Business Services is installed (an Oracle client exists in that location).
Litigation Holds are created in the company's PSS Atlas database, and the associated people exist in this database. To use the application to do collections on these people, the PSS Atlas configuration must be set to Enabled.
Note: The PSS Atlas Sync service must already be installed before you can configure PSS Atlas. Typically, the PSS Atlas Sync service is installed during the installation of the application.
If you choose to integrate using a manual sync of PSS Atlas, the service only syncs once every 60 minutes by default. You can reconfigure the sync time in the following configuration file: PssAtlas.WindowsService.exe.config. By default, the configuration file is located in C:\Program Files\AccessData\eDiscovery\PSS Atlas. The time configuration is found on the following line: synchronizationWaitIntervalInMinutes
When a PSS Atlas sync takes place, it pulls all people associated with the given project. It also pulls the following person data: Name, Description, Attorney, Comments, Creation Date, Effective Start date, Effective End date, Jurisdiction, Outside Counsel.
The PSS Atlas database tables that the application uses during synchronization are the following: REP_RT_MATTER_VW, rep_rt_request_vw, rep_rt_people_inscope_vw, rep_rt_person_vw, legalmatterhistory, rep_rt_ach_execution_vw, rep_rt_ach_plan_vw, person.

To configure PSS Atlas
1. Log in as an administrator.
2. Click Management.
3. Click System Configuration.
4. Click Atlas Configuration.
5. In the PSS Atlas Configuration dialog, click Enabled.
6. In the Oracle Connection String field, specify the connection string ID. If the connection string is valid, you should see a list of projects in PSS Atlas. The connection string contains the information that the provider needs to establish a connection to the database or the data file. The connection is made locally on your computer or on your local network. You can use the following format as an example of an Oracle connection string:
Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost)(PORT=MyPort)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MyOracleSID)));User Id=myUsername;Password=myPassword;
The username and the password must have full read permissions to PSS Atlas.
7. Click Sync Now. You should see a list of all the projects currently available to the account that was used to log in to the Oracle schema for PSS Atlas.
8. In the Import PSS Matter field, select either List or Manual Entry. If you select Manual Entry, enter the Matter ID in the field.
9. Click Import PSS Matter.
10. Click OK.
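The sync interval noted above lives in PssAtlas.WindowsService.exe.config. A .NET service configuration file of this kind normally keeps such settings in an appSettings block; the surrounding structure may differ in your installation, so treat this as a hedged sketch in which only the key name synchronizationWaitIntervalInMinutes and the 60-minute default come from the text above.

  <configuration>
    <appSettings>
      <!-- Sync with PSS Atlas every 30 minutes instead of the default 60. -->
      <add key="synchronizationWaitIntervalInMinutes" value="30" />
    </appSettings>
  </configuration>

Restarting the PSS Atlas Sync service after editing the file is usually required before a new interval takes effect.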
Configuring Redirected Acquisition You can use Redirected Acquisition to direct the results of a full disk (logical or physical) collection from the subject agent to the configured collection data path, and bypass the local Work Manager. This method prevents using all or too much of the Work Manager disk space and also saves time. If you intend to use this feature for a full disk collection, you must first complete the redirect acquisition configuration. To configure Redirected Acquisition 1. Log in as an administrator. 2. Click Management. 3. Click System Configuration. 4. Click Redirected Acquisition. 5. On the Redirected Acquisition page, enter the username, domain, and password. Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 197 6. Click Save. Configuring Credant Settings You can configure a Credant site server so that it automatically finds and uses Credant Shield files for Credant encrypted network shares and computers in an organization. Credant encrypts user data throughout an organization similar to how EFS functions. Instead of configuring a Credant site server, you can choose to configure specific Credant-encrypted network shares or computers with Credant Shield files. When you use this method, select Explicit Asset Configuration to enable it, and to also view the number of configured shares and computers. When you select Disabled, data is collected from the Credant-encrypted network share or computer, but does not decrypt the data using the associated Shield file See Credant Site Server Configuration Options on page 198. To configure Credant 1. Log in as an administrator. 2. Click Management. 3. Click System Configuration. 4. Click Credant Encryption. 5. On the Credant Configuration page, click the configuration option that you want, and set any associated options. See Credant Site Server Configuration Options on page 198. 6. Click Save. Credant Site Server Configuration Options The following table describes the options that are available on the Credant Configuration page. See Configuring Data Source Credant Options on page 123. Credant Site Server Configuration Options Option Description Address Specifies the IP address of the Credant servers. Port Provides the port number that is used for communication to the Credant server. The default is 8081. Domain Specifies the domain address of the Credant server. Username Specifies the Credant management console user name. This name is often “Superadmin” but may have been changed to something else. Password Specifies the user’s password for access to the Credant server. Confirm Password Specifies the user’s password for confirmed password access to the Credant server. Remove Server Removes the configuration of the Credant site server in the application. Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 198 Configuring the Person 3rd Party Database Sync The Person 3rd Party Database Sync allows you to connect to any third-party database that is compatible with ODBC (Open Database Connectivity) and import fields from that database into a custom property that has been added to a person. This allows you to import business-only fields to people instead of adding them by hand. You can use these fields to filter people and viewed wherever custom columns can be viewed (such as the Home page and Project wizard). Before using this feature, you should consult with AccessData’s support and the database administrator or manager for your organization. 
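As background for the requirements listed next, the connection string must be a generic ODBC-style string rather than one tied to a specific product such as SAP or PeopleSoft. The examples below are hedged and hypothetical (driver, server, database, and account names are placeholders); the first embeds credentials as plain text and the second relies on a trusted connection, the two forms the requirements call for.

  Driver={SQL Server Native Client 11.0};Server=hr-db01;Database=EmployeeDirectory;Uid=sync_svc;Pwd=ExamplePassword1;
  Driver={SQL Server Native Client 11.0};Server=hr-db01;Database=EmployeeDirectory;Trusted_Connection=yes;

Your database administrator can confirm the correct driver name and which table or view should be supplied for the View Name setting.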
You will need the following: Configuration string to attach to the third-party database. This configuration string to the database must either contain a trusted connection for the Resolution1 eDiscovery servers or credentials stored in the string as plain text. This configuration string must also be a ODBC connection string and not a connection string for a specific database, such as SAP or PeopleSoft. Please see AccessData’s support and your database administrator for more information. View Name for the view that you want to attach to. You can attach to either a table or a view in the database. Obtain the name for the view from your database administrator. List of people that you want to import from. People need to be created in the system under the Data Sources tab that have the same usernames as the usernames of the people in the third-party database. This allows the system to properly sync with the third-party database. See Adding People on page 107.  List of custom fields that you want to import from. These custom fields should be created in the Custom Fields tab before configuring the 3rd Party database sync. See Configuring Custom Fields on page 262. To add a database to the 3rd party database sync 1. Under the Management tab, click Person 3rd Party Database Sync. 2. Click Add New Database. 3. In the Sync People with 3rd Party Database dialog, enter the following information: In the Config Name field, enter the name that you want to give the database. This does not have to match the third-party database. In the Connection String field, enter the string that you obtained. The string must match exactly, or the databases will not sync. In the View Name field, enter the name that you obtained. The view name must match exactly, or the databases will not sync. 4. Click Connect and Get Fields. 5. (optional) You can add additional databases as needed. 6. (optional) Click Edit to edit any of the fields in the Sync People with 3rd Party Database dialog. 7. (optional) Click Delete to delete the database configuration. Once the database(s) have been created, you can sync the databases, so that the custom field data from the database that you are connecting to populate custom fields in the system. Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 199 3rd Party Database Sync Synchronize Dialog To synchronize the fields between the two databases 1. If there are multiple databases added to the system, select the database that you want to sync with from the dropdown menu. 2. Click on key next to the username in the 3rd Party Database Fields pane in order to select the username. This must be done before proceeding, and a warning appears until you select the username. 3. Select a field that you want to sync with. Click on to add the field to the Person Fields to Synchronize pane. You can add additional fields by selecting the field and clicking . 4. A next to the field indicates that the custom field was not created in the system. However, if you click Save, the custom field will be created in the system. If custom fields were created previously in the system, you can view, select, and edit the custom field options from a dropdown next to the Custom Field name. 5. (optional) Remove custom fields from the Person Fields to Synchronize pane by clicking Remove. 6. (optional) Check Automatic synchronization enabled to allow the system to automatically sync with the third-party database. Select the time to sync from the calendar dropdown. 7. 
If you are syncing a single database, click the sync button for that database. To sync all the databases, click Sync All Databases. All changes made to the fields will be saved and each database configuration is queued up for synchronization.
Note: Once the sync has been committed, you cannot cancel the process.
You can close the window and complete other actions while the synchronization occurs. The status of the synchronization appears in the upper right of the 3rd Party Database Sync Synchronization dialog. This status does not refresh automatically. However, you can check on the progress of the synchronization after the dialog has been closed by selecting Management > Person 3rd Party Database Sync.
After the database(s) have been synchronized, you can view the data from the third-party database under the Data Source tab. You must first refresh the information in the tab before the data appears.

Configuring Alert Responses
In Alert Responses, you can set rules that trigger automatic responses from the system when an alert appears in the Alerts tab. If an alert meets conditions that you have set in a created rule, a job executes based on a job template that you have set in the rule. For example, you can create an alert response rule that triggers a Lockdown NIC job when any alert coming from volatile data has a confidence score that is equal to or greater than 50. If an alert appears that meets these conditions, a Lockdown NIC job occurs. The Alert Responses rules can use the default job templates that come with the application, or job templates that you have created. See Using Job Templates and Filter Templates (page 485).
It is important to be aware that when an alert appears, the application checks the alert against the first rule in the list. If the alert does not meet the first rule listed, the application checks the second rule listed, and so forth. As soon as an alert meets the conditions of a listed rule, the job template chosen for that rule executes the job. Once the action is triggered, the application does not check whether the alert meets any of the subsequent rules, even if the alert meets additional rules. Because of this workflow, you should take care in how you order the rules in the Alert Response list. You can arrange the rules with the arrows in the dialog. The rules that are most important to you should be placed higher in the Alert Response list. If you want to write a rule that is a comprehensive catch-all for any alert that may occur, place that rule at the end of the Alert Response list. Any alert that is received by the Alerts tab is run against the rules to check whether there is an action that the system can perform.
Note: There is no validation for the rules that you write.

To configure Alert Responses
1. Log in as an administrator.
2. Click Management.
3. Click System Configuration.
4. Click Alert Responses.
5. In the Alert Responses dialog, create the rules that you want to execute automatically when an alert triggers the rule. Configure any options that you want. See Alert Responses Configuration Options on page 202.
6. (Optional) There are default Alert Response Templates that you can use. Uncheck any default templates that you do not want to use. See Default Alert Response Templates on page 204.
7. Click Save.
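To make the Lockdown NIC example above concrete, such a rule would be assembled from the pulldowns described in the next table roughly as follows. The rule name is hypothetical, and the choice of source is an assumption because the documented source options do not name volatile data directly.

  Rule Name: Volatile Data Lockdown (hypothetical)
  Severity: Any
  Source: the source that carries your volatile-data alerts (for example, Endpoint Threat Alerts - an assumption)
  Assessment: Any
  Confidence: >= 50
  Create Job Template: Lockdown NIC

Because rules are evaluated top-down and the first match wins, place a broad rule like this below any more specific rules that should take precedence.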
Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 201 Alert Responses Dialog Alert Responses Configuration Options The following table describes the options that are available on the Alert Responses page. Alert Responses Configuration Options Option Move Up Move Down Description Allows you to move a rule up the list of rules. This enables you to reorder the list to best respond to the alerts received by the application. Allows you to move a rule down the list of rules. This enables you to reorder the list to best respond to the alerts received by the application. Allows you to add a rule. Create the rule by choosing options from the pulldowns. Add Rule Allows you to delete a rule. Delete Create Job Template Allows you to specify the job that executes when the conditions of the rule are met. You can choose from default job templates that are installed with the application or create your own job template. See Using Job Templates and Filter Templates on page 485. Disabled Select the checkbox to disable an alert response. Rule Name Enter the name of the rule that you want to create. There are four rules that have been created and can be found in the Alert Response Configuration list:  FireEye Endpoint Threat Response  FireEye Network Isolation Response  Endpoint Threat Volatile Collection Response  ThreatScan Volatile Collection Response Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 202 Alert Responses Configuration Options Option Description Severity Allows you to specify at what severity the alert must be in order to meet the conditions of the rule. You can specify whether the rule executes on a critical, high, medium, low, or warning severity value. Add one of the following operators from the pulldown to the severity value:  < (Less than)  < (Less than or equal)  = (Equals)  != (Does not equal)  >= (Greater than or equal)  > (Greater than)  Any Source Allows you to specify the source from which the alert occurs. You can specify the variables Is, Is Not, or Any for the source. You can choose the following the source:  Collected Files - The alert originates from ThreatBridge data. You can also select if the rule triggers on collected files are found on the Norse Darklist. See Using ThreatBridge on page 501.  Endpoint Threat Alerts - The alert originates from ETA data. You can also select if the rule triggers on alerts located that appear on the Norse Darklist. See Using ThreatBridge on page 501.  FireEye - The alert originates from FireEye alerts. The alert response occurs whether an alert from FireEye has been validated or not. See Specifying FireEye Integration Event Actions on page 521.  Network Acquisition - The alert originates from network acquisition data. You can also select if the rule triggers on data obtained on the Norse Darklist. See Network Acquisition Tab on page 444.  Threat Scan - The alert originates from Threat Scan data. See Using ThreatBridge on page 501.  FireEye - The alert originates from FireEye alerts that have been validated by the application. See Specifying FireEye Integration Event Actions on page 521. Assessment Allows you to specify the assessment from which the alert occurs. Enter the assessment field in the text field. What value you can enter in the text field depends upon the source that you select for the rule. 
Add one of the following operators from the pulldown to the assessment value:  Contains  Equal  Not Equal  Does Not Contain  Any Confidence Allows you to specify the confidence score from which the alert occurs. Enter the confidence value as a numeric value in the field. Add one of the following operators from the pulldown to the confidence value:  < (Less than)  < (Less than or equal)  = (Equals)  != (Does not equal)  >= (Greater than or equal)  > (Greater than)  Any Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 203 Default Alert Response Templates The following table lists the default Alert Response Templates available. Default Alert Response Templates Template Name Criteria Job Response Template FireEye Endpoint ETM Response Validated FireEye event Relative Time FireEye Endpoint Threat Response (Triage) Validated FireEye event IR Triage FireEye IOC Hunt Validated FireEye event ThreatScan all nodes FireEye Endpoint Remediation Validated FireEye event Lockdown NIC PANW (Palo Alto) Endpoint ETM Response Validated PANW event Relative Time PANW Endpoint Threat Response (IR Triage) Validated PANW event IR Triage PANW Endpoint Remediation Validated PANW event Lockdown NIC PANW IOC Hunt Validated PANW event ThreatScan all nodes ThreatBridge Alert Response (Network) Responds on a critical or high severity score IR Triage ThreatBridge Alert Response (Network) Responds on a high severity score or above Relative Time ThreatScan Response Remediation Responds on a critical severity score Lockdown NIC ThreatScan Response (IR Triage) Responds on any ThreatScan hit IR Triage Alert Rules engine (ETM) Responds on any hit Relative Time Alert Rules engine (Remediation) Responds on any hit Lockdown NIC Alert Rules engine (IR) Responds on any hit Deep IR Endpoint Threat Alerting Responds on a high severity score or above IR Triage ThreatLookup Responds on a ThreatLookup confidence score of above 3 IR Triage Configuring Advanced System Settings System Configuration Tab - Advanced Settings | 204 Chapter 16 Creating a Project Creating Projects Administrators and project managers with the Create Project admin role can create projects from the Project List panel. To create a new project 1. Log in as an administrator or as a user that has permissions to create projects. 2. Click Create New Project. 3. In the Create New Project page, on the Info tab, configure the general project properties. See General Project Properties on page 205. 4. (Optional) Click the People tab to add people to the project. This is where you configure the people of the evidence of this project. People for the project can be configured later, but should be done before processing evidence. See the Data Sources chapter. 5. Click the Processing Options tab to set the processing options for the project. This is where you set the options for how the evidence is processed when it is added to the project. This setting may have a default value that you can use or change, or this setting may be configured and hidden by the administrator. See Evidence Processing and Deduplication Options on page 209. Note: You cannot change the processing options after you have created the project. 6. Select one of the following options: Create Project: Click to create the project without importing evidence. This option will create the project and return you to the Project Management page. You can then configure the project by adding evidence, assigning permissions, and so on. 
Create Project and Import Evidence: Click to create the project and begin importing evidence. See the Loading Data documentation for information on how to import evidence. General Project Properties You can set the properties of the specific project. Many of the fields may be populated by values set in the Project Defaults configuration block under the Management tab. See Configuring Default Project Settings on page 82. The following table describes the general Project Properties. Creating a Project Creating Projects | 205 General Project Properties Options Option Description Project Name Project Names must be only alphanumeric characters. Special characters will cause the project creation to fail. Description (Optional) This option allows you to enter the description of the project. Project Folder Path Allows you to specify a local path or a UNC network path to the project folder. This path is the location where all non-Oracle project data is stored. Note: This setting may have a default value that you can use or change, or this setting may be configured and hidden by the administrator. For example, a folder with the Project name can be created in the actual directory to be identified and managed easily. You then change the path to reflect and include the new directory. See the Admin Guide for information on configuring project defaults Job Data Path The responsive folder path is the location of reports data. Display Time Zone This option allows you to display the dates and times of files and emails based on this specified time zone. For example, if data was collected in the Eastern Time zone, you can select to display times in the Pacific Time zone and all dates will be offset by four hours to display in PST. The default is set for (UTC) Coordinated Universal Time. See Normalized Time Zones on page 207. Priority This option allows you to set the priority of the project. AD1 Encryption This option allows you to set the AD1 encryption for the project. Project Type (Optional) This option allows you to enter the project type. Attorney (Optional) This option allows you to specify the attorney for the project. This option may be populated by an entry set in the Project Defaults configuration block under the Management tab, but can be overwritten for the individual project. Legal Assistant (Optional) This option allows you to specify the legal assistant for the project. This option may be populated by an entry set in the Project Defaults configuration block under the Management tab, but can be overwritten for the individual project. Jurisdiction (Optional) This option allows you to specify the jurisdiction for the project. This option may be populated by an entry set in the Project Defaults configuration block under the Management tab, but can be overwritten for the individual project. Outside Counsel (Optional) This option allows you to specify the outside counsel for the project. This option may be populated by an entry set in the Project Defaults configuration block under the Management tab, but can be overwritten for the individual project. Comments (Optional) This option allows you to add comments. Effective Start Date (Optional) This option allows you to set the effective start date by day and month. Effective End Date (Optional) This option allows you set the effective end date by day and month. 
Creating a Project Creating Projects | 206 General Project Properties Options (Continued) Option Description Enable ThreatBridge Checking (CIRT and Resolution1 only) (Optional) Expand Enable ThreatBridge Checking. Enable ThreatBridge Checking -This option enables threat scan feeds for the project. This setting is enabled by default and should be chosen for security projects. See Using ThreatBridge on page 501. Enable ThreatLookup - This option allows you to enable the application to automatically check the data against ThreatLookup. If this option is selected, the application checks against ThreatLookup at the same interval that ThreatBridge updates the feeds. See Using ThreatBridge on page 501. Purge Unrelated Data - This option purges data that is not ThreatBridge data. This allows you to keep security projects free of unnecessary information. See Using ThreatBridge on page 501. Copy Properties from Existing Project (Optional) This allows you to apply properties of an existing project to the newly created project. You can also apply properties to an existing project once it has been created. See Using Project Properties Cloning on page 219. Network Data Purge Options (CIRT and Resolution1 only) (Optional) In order to keep the data from flooding the project's physical storage, you can define a regularly scheduled purge operation to delete the “oldest” data transferred. Set how often you want to purge collected data from Network Acquisition jobs by doing the following:  Retain Network Acquisition Data in database for days after transfer: Select the number of days you want to keep the data in the database after it has been collected.  Date/Time for first purge: Enter or select the date that you want the first purge to begin.  Run purge every days after initial purge: Select the number of days you want to pass before another purge is performed. Note: When setting up the Purge time frame, jobs that are set up for retrieving past data will still retrieve that data, but the system will purge the earlier information the next time the purge executes. For example, you have a continuous Search and Review job that gathers data from the past two months through to the current date. However, you also have a purge request that purges any data over 2 weeks old. The result is: the Search and Review job completes successfully and collects all the data, but the older data is only available until the next purge job runs. Static jobs are not purged since you are able to manually delete the data. Normalized Time Zones All data brought into a project using evidence processing or a collection job is stored in UTC time zone. You can configure a Display Time Zone for the project that will offset the times and display them in the specified time zone. See Display Time Zone on page 206. However, all data brought into a project using import load files is stored in the time setting that the data was created which causes an issue when trying to set the correct display time zone. The following features help you normalize time zone data. When adding data to the case through evidence processing or collection from a FAT storage device, you need to select the proper time zone for the device so that the data can be normalized to UTC. No adjustment is needed for data added to the case from NTFS storage devices. 
Creating a Project Creating Projects | 207 Once data has been loaded into the project, the following areas will show the time zone as the selected project time zone: Natural View for email Images for email Load The files with date and time fields columns in the Item List grid will display the UTC time zone. During load file import, you must choose the time zone that the load file was created with so the date and time values can be converted to a normalized UTC value in the database. See Importing Evidence into a Project on page 415. Creating a Project Creating Projects | 208 Evidence Processing and Deduplication Options The options you select determine the data that is contained in projects, reports, and consequently, production sets. When you create a project, you can specify unique options or use the default options. Options that increase processing time when selected are marked by a turtle icon. See the Configuring the System chapter in the User Guide. Note: You cannot edit any settings on the Processing Options section after you have added evidence to a project The following table describes the Processing Options. Depending on the license that you own, you may some or all of the following options. See Deduplication Options on page 214. Processing Options Option Description Processing Mode Standard Mode Enables the default processing options. Note: These defaults are not editable. Will include: Hashing Deduplication File - Project level for both Documents and Email Signature Analysis Expand Compound Files (archive expansion) of the following file types: 7-ZIP, IPD, BZIP2, DBX, PDF, GZIP, NSF, MBOX, MS Exchange and Office documents, MSG, PST, RAR, RFC822 Internet email, TAR, ZIP Note: You cannot expand system image files, such as AD1 and E01, if they are located inside of another archive. You must first export the files and add the files as evidence to be properly processed. Will index: Text data Will not index: Graphic files and executable files Will refine out: Microsoft Office File 2010 package contents slack Free space Deleted Zero items length files OS/File Creating a Project OLE Streams System Files Creating Projects | 209 Processing Options (Continued) Option Description Standard No Search Uses the default processing options but does not include the indexing of text data. See About Indexing for Text Searches of Content of Files on page 217. Forensic Will include: Hashing Flag (MD-5, SHA-1, SHA-256) bad extensions Thumbnails Deleted for graphics files Microsoft OLE Streams Microsoft OPC documents Refinement File options: slack Free space Will index: all file types Will not include: KFF (for faster processing) Expand HTML Compound Files (archive expansion) file listing eDiscovery Quick Creating a Project Deduplication Increases the speed of the processing of evidence by using minimal options to expedite the processing. Indexing, hashing, archive file drill down, and file identification are disabled. (Files are identified by header analysis instead of file extension.) If you select this option, the KFF Lookup option is disabled. Disabling KFF Lookup occurs because Field Mode is a processing option that is intended to speed up the process. It turns off indexing, hashing, and other options that tend to slow down data processing. The KFF Lookup option takes time to process and slows down data processing. Therefore, if both Field Mode and KFF Lookup were both enabled, it would defeat the purpose of the Quick option. 
Creating Projects | 210 Processing Options (Continued) Option Description Security Enables the default security processing options. Will include: Hashing Indexing eDiscovery File Deduplication - Project level for both Documents and Email signature analysis Expand Compound Files (archive expansion) of the following file types: 7-ZIP, IPD, BZIP2, DBX, PDF, GZIP, NSF, MBOX, Microsoft Exchange, MS Office documents, MSG, PST, RAR, RFC822 Internet email, TAR, ZIP, EMFSPOOL, EXIF, ThumbsDB, TMBLIST, ThumbCacheDB, NTDS, SQLITE, and PKCs7 Will refine out: File slack Free space Deleted items Microsoft Office Zero OLE Streams 2010 package contents length files OS/File System Files Will not index: Graphic files Note: In the Job Wizard, collection jobs executed in projects with standard processing selected have Auto Processing selected by default. See Job Options Tab on page 457. Optical Character Recognition Enable OCR Generates text from graphics files and indexes the resulting content. You can then use Project Review to search and label the content and treat that content the same as any other text in the project. AccessData uses the GlyphReader engine for optical character recognition. Selecting this option can increase processing time up to 50%. It also may give you results that differ between processing jobs on the same computer, with the same piece of evidence. Pre-set default is off. See About Optical Character Recognition (OCR) on page 216. Enabling this option may increase processing times. General Email Options Expand Embedded Graphics Pre-set default is off. Enabling this option may increase processing times. KFF (Known File Filter) Enable KFF Creating a Project Enables the Known File Filter (KFF). See Using KFF (Known File Filter) on page 331. Pre-set default is on. Creating Projects | 211 Processing Options (Continued) Option Description Email Body Caching Enable Email Body Caching Advanced Options This option will speed up load file generation. Pre-set default is off. Enabling this option may increase processing times. Keep the database indexes while processing. Pre-set default is off. Database indexes improve performance, but slow processing when inserting data. If this option is checked, all of the data reindexes every time more data is loaded. Only select this option if you want to load a large amount of data quickly before data is reviewed. Standard Viewer Enable Standard Viewer The option does the following:  Generates files that can be annotated and redacted (SWF format). SWF files are generated for most all user-created processed documents such as .DOC, .PPT, .MSG, and so forth (not .XLS). This enables you to work on a file in Review without waiting for a SWF file to be created. SWF files are generated for documents with a size of 1 MB and larger.  Makes the Standard Viewer the default viewer in Review. For more information, see Using the Standard Viewer and the Alternate File Viewer in the Viewing Data chapter. This option is checked as the default for the Summation license, but can be enabled in other products. Note: This option slows processing speeds. Enable Video Conversion When you process the evidence in your case, you can choose to create a common video type for videos in your case. These common video types are not the actual video files from the evidence, but a copied conversion of the media that is generated and saved as an MP4 file that can be viewed in the Natural Panel. All converted videos are stored in the case folder. 
You can define the following:  Bit rate  Video resolution Generate Thumbnails Creates thumbnail images for each video file in a project. These thumbnails can be seen in the Thumbnails View in Review. The thumbnails let you quickly examine a portion of the contents within video files without having to watch the full content of each media file. You can define the thumbnail generation interval based on one of the following:  Percent (1 thumbnail every “n” % of the video)  Interval (1 thumbnail every “n” minutes of the video) This feature can be used when you choose the Standard, Standard No Search, or Forensic processing modes. This is not available when using the Security or Quick processing mode. This is also not available for import loaded files. Enable Entropy Enables the calculation of entropy during the processing. Video Files Entropy Cerberus Creating a Project Creating Projects | 212 Processing Options (Continued) Option Description Enable Cerberus Stage 1 (Available depending on the license that you own.) Runs a general file and metadata analysis that identifies potentially malicious code. Cerberus generates and assigns a threat score to the executable binary. See the About Cerberus Malware Analysis chapter. Miscellaneous Options Geolocation Allows you to view processed evidence in the Geolocation Visualization filter. Note: Geolocation IP address data may take up to eight minutes to generate, depending upon other jobs currently running in the application. Generate Image Thumbnails Generates thumbnails for all image files in the project. These thumbnails can be viewed in the Thumbnail View in Review. This option is enabled by default with the Standard, Standard No Search, and Forensic Processing Modes. Timeline Options Expand Additional Timeline Events Lets you expand Log2Timeline, Event Logs, Registry, and Browser History. For example, this will recognize CSV files that are in the Log2Timeline format and parses the data within the single CSV into individual records within the case. The individual records from the CSV will be interspersed with other data, giving you the ability to perform more advanced timeline analysis across a very broad set of data. In addition you can leverage the visualization engine to perform more advanced timeline based visual analysis. When you expand CSV files into separate records, you can use several new columns in the Item List to view each CSV Log2Timeline field. Indexing Options Disable Tag Indexing Summation license only. This option is enabled by default. This option disables the reindexing of labels, categories, and issues for projects. This allows the project to process more quickly. This option only applies to new projects. If enabled, after processing, the following text is displayed in Review: Tag indexing is disabled. Document Deduplication See Deduplication Options on page 214. Email Deduplication See Deduplication Options on page 214. Document Analysis Options You can perform an automatic cluster analysis of documents and emails which provides grouping of email and documents by similar content. See Using Cluster Analysis on page 417. You can configure the number of paired keywords that are stored for the comparison of documents during cluster analysis and predictive coding. For performance reasons, the default number of keyword storage is 30 keywords. This can limit the effectiveness of cluster analysis or predictive coding. You can increase the number of pairs, but this will impact the time needed for processing. 
Max Keyword Pairs You can change the number of allowable pairs by a set number or select Unlimited. Cluster Analysis Creating a Project Creating Projects | 213 Processing Options (Continued) Option Description Perform Cluster Analysis: Enables the extended analysis of documents to determine related, near duplicates, and email threads. See Using Cluster Analysis on page 417. Cluster Threshold: Determines the level of similarity required for documents to be considered related or near duplicates. Note: Choosing a higher value will produce fewer documents in a cluster because the documents must contain more similar content. Choosing a lower value will produce more documents in a cluster because the documents will not need to contain as much similar content to be considered near duplicates. Entity Extraction Language Identification Identifies and extracts specific types of data in your evidence. You can process and view each of the following types of entity data:  Credit Card Numbers  Email addresses  People  Phone Numbers  Social Security Numbers See Using Entity Extraction on page 420. In Review, under the Document Content facet category, there is a facet for each data type that you extracted. See Using Language Identification on page 301. None Performs no language identification, all documents are assumed to be written in English. This is the faster processing option. Basic Performs language identification for English, Chinese, Spanish, Japanese, Portuguese, German, Arabic, French, Russian, and Korean. Extended Performs language identification for 67 different languages. This is the slowest processing option. Deduplication Options Deduplication helps a project investigation by flagging duplicate electronic document (e-document) files and emails within the data of a project or person. The duplicates filter, when applied during project analysis, removes all files flagged “True” (duplicate) from the display, significantly reducing the number of documents an investigator needs to review and analyze to complete the project investigation. If you set document deduplication at the project level, and two people have the same file, one file is flagged as primary and the other file or files are flagged as duplicates. The file resides in the project and the file paths are tracked to both people. To limit the production set, the file is only created one time during the load file/native file production. You can also deduplicate email, marking the email, email contents, or email attachments as duplicates of others. Note: In Project Review, if the duplicate filter is on, and if you perform a search for a file using a word that is part of the file path, and that path and file name is a duplicate, the search will not find that file. For example, there is a spreadsheet that is located in one folder called Sales and a duplicate of the file exists in a folder called Marketing. The file in Sales is flagged as the primary and the file in Marketing is flagged as a Creating a Project Creating Projects | 214 duplicate. If you do a search for spreadsheets in the folder named Sales, it is found. However, if you do a search for spreadsheets in the folder named Marketing, it is not found. To locate the file in the Marketing folder, turn off the duplication filter and then perform the search. See Evidence Processing and Deduplication Options on page 209. Deduplication options are integrated on the Processing Options page. 
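As a rough illustration of the project-level flagging described above (a sketch only; the product's actual hash algorithm, flag names, and storage are not specified here), deduplication by content hash can be pictured as follows. An email deduplication key would be built the same way, but from the selected header and body fields rather than from file content.

    import hashlib

    def flag_duplicates(items):
        """Yield (name, flag): the first item with a given content hash is the
        'primary'; every later item with the same content is a 'duplicate'."""
        seen = set()
        for name, content in items:
            digest = hashlib.sha1(content).hexdigest()
            flag = "duplicate" if digest in seen else "primary"
            seen.add(digest)
            yield name, flag

    # Hypothetical items mirroring the Sales/Marketing example above.
    items = [("Sales/budget.xlsx", b"same bytes"), ("Marketing/budget.xlsx", b"same bytes")]
    for name, flag in flag_duplicates(items):
        print(name, flag)   # Sales/... is primary, Marketing/... is duplicate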
The following tables describe the deduplication options that are available in the Processing Options. Document Deduplication Options Option Description No Deduplication Processes the project without document deduplication. This feature allows the case to process more quickly. This option is the default for Security processing. Project Level Deduplication compares each of the e-documents processed within a project against the others as they receive their hash during processing. If the hash remains singular throughout processing, it receives no duplicate flag. In the project of duplicate files, the first hash instance receives a “primary” flag and each reoccurrence of the hash thereafter receives a “secondary” flag. Person Level Deduplication compares the e-documents found in each custodial storage location against the other files from that same custodial location ( people, or in the project of no person, the storage location). If the hash remains singular throughout processing, it receives no duplicate flag. In the project of duplicate files the first hash instance receives a “primary” or “master” flag and each reoccurrence of the hash thereafter receives a “duplicate” flag. Actual Files Only Deduplicates actual files instead of all files. Checking this option excludes OLE files and Alternate Data Stream files. You can also deduplicate email, marking the email, email contents, or email attachments as a duplicate of others. Email Deduplication Options Option Description No Deduplication Processes the project without email deduplication. This feature allows the case to process more quickly. This option is the default for Security processing. Project Level The scope of the email deduplication. Deduplication compares each of the emails processed within a project against the others as they are processed. If the deduplication value remains singular throughout processing, it receives no duplicate flag. In the project of duplicate email, the first value instance receives a “primary” flag and each reoccurrence of the value thereafter receives a “duplicate” flag. If two people have the same email, it is marked as a duplicate. Creating a Project Creating Projects | 215 Email Deduplication Options (Continued) Option Description Person Level The scope of the email deduplication. Deduplication compares the email found in each custodial storage location against the other emails from that same custodial location ( people, or in the project of no person, the storage location). If the value remains singular throughout processing it receives no duplicate flag. In the project of duplicate emails, the first email instance receives a “primary” or “master” flag and each reoccurrence of the email thereafter receives a “duplicate” flag. In the project of duplicate files, the first value instance receives a “primary” flag and each reoccurrence of the value thereafter receives a “duplicate” flag. Email To Deduplicates email based on the recipients in the “To” field. Email From Deduplicates email based on the senders in the “From” field. Email CC Deduplicates email based on the recipients in the “Carbon Copy” field. Email Bcc Deduplicates email based on the recipients in the “Blind Carbon Copy” field. Email Subject Deduplicates email based on the contents in the “Subject” field. Email Submit Time Deduplicates email based on the date and time the email was initially sent. Email Delivery Time Deduplicates email based on the date and time the email was delivered to the recipients. 
Email Attachment Count Deduplicates email based on the number of attached files. Email Hash Deduplicates email based on the hash value. Body and Attachments Includes email body, recipients (the “To” field), sender (the “From” field), CC, BCC, Subject field contents, body, the number of attachments, and the attachments for deduplication. Body Only Includes only the email body and the list of attachment names for deduplication. About Optical Character Recognition (OCR) Optical Character Recognition (OCR) is a feature that generates text from graphic files and then indexes the content so the text can be searched, labeled, and so forth. OCR is currently supported in English only. Some limitations and variables of the OCR process include: OCR can have inconsistent results. OCR engines have error rates which means that it is possible to have results that differ between processing jobs on the same machine with the same piece of evidence. OCR may incur longer processing times with some large images and, under some circumstances, not generate any output for a given file. Graphical images that have no text or pictures with unaligned text can generate illegible output. OCR functions best on typewritten text that is cleanly scanned or similarly generated. All other picture files can generate unreliable output. OCR is only a helpful tool for you to locate images with index searches, and you should not consider OCR results as evidence without further review. Creating a Project Creating Projects | 216 The following table describes the OCR options that are available in Processing Options: OCR Options Option Description Enable OCR Enables OCR and expands the OCR pane to select options for OCR processing. File Types Specifies any or all of the following file types to process for OCR:  PDF. This file type is checked by default when enabling OCR.  JPEG  PNG  TIFF. This file type is checked by default when enabling OCR.  BMP  GIF  Uncommon (PCX, TGA, PSD, PCD. . .) See Supported File Types for OCR on page 218. Do Not OCR. . . Defines the minimum and maximum file size in bytes of documents to be processed by OCR. You can either enter a value in the spin box, or use arrows to select the value. If you clear the box without entering a value, the values return to the default setting. Note: The maximum size that can be specified in the Do not OCR documents over _____ bytes field is 9,223,372,036,854,775,807 bytes  Excludes full color documents to be processed by OCR. PDF Existing Filtered Text Size Excludes documents that have text exceeding the limit specified. Documents over the specified limit will not be OCRed. This option is only available when PDF is selected as a file type.  About Indexing for Text Searches of Content of Files By default, when you add evidence to a project, the files are indexed so that the content of the files can be searched. You can select a No Search processing mode, which is faster, but does not index the evidence. 
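Conceptually, indexing builds a map from each term to the items that contain it, so a later search reads the index instead of rescanning file content. A minimal sketch of the idea (illustrative only, not the product's actual index format):

    from collections import defaultdict

    def build_index(docs):
        """Map each lower-cased term to the set of document IDs that contain it."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    docs = {"DOC-001": "Quarterly sales forecast", "DOC-002": "Sales meeting notes"}
    index = build_index(docs)
    print(sorted(index["sales"]))   # ['DOC-001', 'DOC-002'] without rescanning the files

This is why the No Search mode is faster: skipping index construction avoids this pass over every file, at the cost of content searches later.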
Supported File Types for OCR
The following file types are supported for OCR:
ABC, ABIC, AFP, ANI, ANZt, ARW, AWD, BMP, CAL, CGM, CIN, CLP, CMP, CMW, CMX, CR2, CRW, CUR, CUT, DGN, DOC, DOCX, DCR, DCS, DCM, DCX, DNG, DRW, DWF, DWG, DXF, ECW, EMF, EPS, EXIF, FAX, FIT, FLC, FPX, GBR, GIF, HDP, HTML, ICO, IFF, IOCA, IMG, ITG, JBG, JB2, JPG, JPEG-XR, JPEG-LS, J2K, JP2, JPM, JPX, KDC, MAC, MIF, MNG, MO:DCA, MSP, MRC, NAP, NEF, NITF, NRW, ORF, PBM, PCD, PCL, PCL6, PCT, PCX, PDF, PGM, PLT, PNG, PNM, PPM, PPT, PPTX, PS, PSD, PSPo, PTK, RAS, RAF, RAW, RTF, RW2, SCT, SFF, SGI, SHP, SMP, SNP, SR2, SRF, SVG, TDB, TFX, TGA, TIFF, TIFX, TXT, VFF, WBMP, WFX, WMF, WMZ, WPG, XBM, XLS, XLSX, XPM, XPS, XWD

Interruption of Evidence Processing
On occasion, processing might be interrupted by a catastrophic failure, such as the network going down or a power outage. In these situations, the application performs a roll back of the processing job. A roll back means that records added during the interrupted job are not available in the database and do not appear in Review. Rolling back a job ensures that you do not receive incomplete records in Review. When a catastrophic event occurs, the Processing Status tab of the Work List alerts you to the error and shows that the system is attempting a roll back. See Monitoring the Work List on page 276.

You need to be aware of the following considerations with the roll back option:
For multiple adding-evidence jobs, only the job that fails rolls back. Jobs that complete successfully have their data appear in the system.
If records are locked by another process, the roll back may fail to delete physical files from the case folder. You can view which files were not removed in the log found in \\\Users\Public\Documents\AccessData\Resolution1Logs\Summation.
For Evidence Processing jobs where some records are added, only newly added records roll back.
Roll back only occurs on failure during Evidence Processing jobs, not Import jobs.
Incidents such as an Evidence Processing job failing to advance (for example, the interface displays that the job is processing for a long time) do not trigger the roll back action.

Using Project Properties Cloning
As an administrator, or a project manager with the Create/Edit Project administrator role, you can clone the properties of an existing project to another project. You can apply a single project's properties to another project, or pick and choose properties from multiple individual projects to apply to a single project.

Note: The project data is not copied from one project to another. Only the project properties are copied.

You can apply Project Properties Cloning to a project as it is being created, or to projects that have already been created. You can apply the following properties: Custom Fields, Category and Issue Values, Tagging Layouts, Labels, Users and Groups, Markup Sets, People, Highlight Profiles.

To use Project Properties Cloning
1. From the Source Project menu, select the source project from which you want to copy.
2. If you are applying the properties to a previously created project, select the target project to which you want to copy from the pull-down menu.
3. Under Elements to Copy, select the properties that you want to apply to the project. You can select All or choose specific properties to apply.
Note: If you select only Category Values, Project Properties Cloning will copy over all of the custom fields. If you select only Tagging Layouts, Project Properties Cloning will only copy over the tagging layouts. You must also select Custom Fields and Category Values if you want those values copied over. 4. If you are applying Project Properties Cloning to a project as it is being created, finish the Project Wizard. If you are applying Project Properties Cloning to a project that has already been created, click Merge. Creating a Project Using Project Properties Cloning | 219 Viewing and Editing Project Details You can view the configured properties of the project on the Project Details tab. You can also edit some of the project properties, for example: Name Job Data Path Priority Project Type To access the Project Details tab From the Home page, select a project, and click the Project Details tab. See Project Details Tab on page 221. Creating a Project Viewing and Editing Project Details | 220 Project Details Tab The Project Details tab displays data for the selected project. You can also edit some of the project data from this tab. Project Info Tab Elements of the Project Information Tab Element Description Allows you to edit information about the selected project. Only the Name, Job Data Path, and the Description can be edited. Edit Button General Project Properties See General Project Properties on page 205. Creation Date Displays the date that the project was created. Created By Displays the user who created the project. Creating a Project Viewing and Editing Project Details | 221 Elements of the Project Information Tab (Continued) Element Description Last Modified Date Displays the date when the project was last modified. Last Modified By Displays the user who last modified the project. FTK Case ID Displays the case ID for the associated FTK case if applicable. Associated FTK Case Pane Displays any associated FTK cases. Creating a Project Viewing and Editing Project Details | 222 Chapter 17 Managing People Administrators, and users with the Create/Edit Project permission, can manage people in two ways: Globally across the system using the Data Sources tab. See Data Sources People Tab on page 223. Individually for a project using the People tab on the Home page. See Home People Tab on page 228. For information on user permissions, see Setting Project Permissions (page 240). Note: In order for people to be used in Project Review, people must be created and selected before you process the evidence. See Evidence Tab on page 231. Data Sources People Tab You use the Data Sources > People tab to maintain the list of people available in the application. You can add, edit and delete global people, as well as import lists of people. From Data Sources, you can view evidence and projects associated with a person. Data Sources People Tab Managing People Data Sources People Tab | 223 Opening the Data Sources, People Page Administrators, and users with management permissions, use the Data Sources page to manage global people. To access the Data Sources, People page 1. Log in to the application console as administrator or as a user with management permissions. See The Administrator Guide for more information. 2. In the console, click Data Sources. 3. On the Data Sources page, click People. Data Sources Person Tab Features Element Description Filter Options Allows you to filter admin roles in the list. For more information, see The Administrator Guide. People List Displays all people. 
Click the column headers to sort by the column.
Add Person: Adds a person. See Adding People on page 225.
Edit Person: Edits a selected person. See Editing a Person on page 226.
Delete Person: Deletes the selected person. See Removing a Person on page 226.
Delete: Deletes the selected people. Only active when a person is selected. See Removing a Person on page 226.
Import People: Imports people from a CSV or TXT file. See Importing People From a File on page 226.
Import From AD: Imports people from Active Directory. See Adding People using Active Directory on page 227.
Custom Properties: Add, edit, and delete custom columns with the default value that will be listed in the Project List panel. When you create a project, this additional column will be listed in the project creation dialog.
Export to CSV: Exports the current set of data to a CSV file.
Refresh: Refreshes the list. See Refreshing the Contents in List and Grids on page 35.
Columns: Click to adjust what columns display in the list. See Sorting by Columns on page 35.
Add Associations: Associates a computer to the selected person.
Remove Associations: Removes the association between the selected person and a computer.
Evidence tab: Lists the evidence that is associated with a person.
Projects tab: Lists the projects that are associated with a person.

The main view is the Person List and includes the following sortable columns: First Name, Last Name, Username, Email Address, Creation Date, Domain.

When you create and view the list of people, this list is displayed in a grid. You can do the following to modify the contents of the grid: Control which columns of data are displayed in the grid. Sort the columns. Define a column on which you can sort. If you have a large list, you can apply a filter to display only the items you want.

Highlighting a person in the list populates the Person Details info pane on the right side. The Person Details info pane has information relative to the currently selected person, beginning with the first name. At the bottom of the page, you can use the following tabs to view and manage the items that the highlighted person is associated with: Evidence, Projects.

Adding People
Administrators, and users with permissions, can add people. You can add people from the Data Sources tab in the following ways: manually adding people (see Manually Creating People on page 226), importing people from a file (see Importing People From a File on page 226), or importing people from Active Directory (see Adding People using Active Directory on page 227).

Manually Creating People
To manually create a person
1. On the Data Sources > People tab, click Add.
2. In Person Details, enter the person details.
3. Click OK.

Editing a Person
You can edit any person that you have added to the project.
To edit a project-level person
1. On the Data Sources > People tab, select the person that you want to edit.
2. Click Edit.
3. In Person Details, edit the person details and click OK.

Removing a Person
You can remove one or more people from the global People List.
To remove one or more people from the People List
1. On the Data Sources > People tab, select the check box for the people that you want to remove.
2. If you want to remove one person, select Delete. This icon displays above the Information pane on the right side.
3. If you want to remove more than one person, select Delete. This icon displays on the bottom menu bar of the People pane.
To confirm the deletion, click OK.

Importing People From a File
You can import one or more people into a project from a file. The source file can be either in TXT or CSV format. Custom properties must be defined before importing CSV files with the custom fields in the headers. The person name can be in the following format: first and last name separated by a space, for example, John Smith or Bill Jones. For example, you can create a TXT or CSV file with the following text:
Chris Clark
Sarah Ashland

To import people from a file
1. On the Home > People tab, click Import People.
2. In the Import People Options dialog, mark First row contains headers if you want to import custom columns from the file. Mark 1 or More Custom Columns if you want to import custom columns from the file.
3. Browse to the TXT or CSV file.
4. Click Open.
5. When the import is complete, view the summary and click OK. Any people that have invalid data will not be imported. These people will appear on the summary, along with the field that was flagged for invalid data. You can correct the field and reattempt the import. Only those people who were corrected will import; people that were imported successfully earlier will not import a second time.

Adding People using Active Directory
You can add people by importing from Active Directory. If Active Directory is not configured, configure it in the System Configuration tab. When Active Directory is properly configured, the Active Directory filter list opens in the wizard. For more information on configuring Active Directory, see the Administrator Guide. The person information automatically populates the Person List when you create people using Active Directory. You can also edit person information.

To add people using Active Directory
1. In the Data Sources > People page, click Import from AD.
2. Set the search/browse depth to All Children or Immediate Children.
3. Select where you want to perform the search.
4. Set the search options to one of the following: Match Exact, Starts With, Ends With, Contains.
5. Enter your search text.
6. Select the usernames that you want to add as people.
7. Click Add to Import List.
8. Click Continue.
9. Review the members selected, the members to add as people, and any conflicted members. If you need to make changes, click Back.
10. Click Import.

Home People Tab
Administrators, and users with the Create/Edit Project permission, manage people for a project using the People tab on the Home page. The People tab is project specific, not global.

To manage people for a project
From the Home page, select a project, and click the People tab.

When you create and view the list of people, they are displayed in a grid. You can do the following to modify the contents of the grid: Control which columns of data are displayed in the grid. If you have a large list, you can apply a filter to display only the items you want. See About Content in Lists and Grids on page 35.

Elements of the People Tab
Filter Options: Allows you to search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38.
People List: Displays the people for the project. Click the column headers to sort by the column.
Refresh: Refreshes the list. See Refreshing the Contents in List and Grids on page 35.
Export to CSV: Exports the list to a .csv file.
Columns: Adjusts what columns display in the list. See Sorting by Columns on page 35.
Add Association: Associates existing people to the project.
Remove Association: Disassociates an existing person from the project.
Import People: Imports people from a file.
Add Person: Adds a person.
Edit Person: Edits the selected person.
Evidence Tab: Lists the evidence associated with the selected person.

Adding a Person to a Project
Administrators and users with the Create/Edit Project permission can add people to a project. You can add project-level people in the following ways: adding project-level people from the Shared People list, manually adding people, importing people from a file (see Importing People From a File on page 230), or creating or importing people while importing evidence (see the Loading Data documentation for more information on creating people during import). If you manually add or import people, they are added to the shared list of people.

To add a person from shared people
1. On the Home > People tab, click Add. The Associate People to Project page displays.
2. Select the shared people that you want associated with the project. You can click a single person or use Shift-click or Ctrl-click to select multiple people.
3. Click the add button or Add All Selected. This moves the people to the Associated People list. You can also check the selection box next to First Name to add all of the people.
4. You can remove people from the Associated People list by selecting people and clicking the remove button or Remove All Selected. You can also clear the selection box next to First Name to remove all of the people.
5. Click OK.
You can also add project-level people from shared people using the People tab when creating a project.

Manually Creating People for a Project
To manually create a project-level person
1. On the Home > People tab, click Add.
2. In Person Details, enter the person details.
3. Click OK.
You can also manually create people from the People tab when creating a project.

Editing a Person
You can edit any person that you have added to the project.
To edit a project-level person
1. On the Home > project > People tab, select the person that you want to edit.
2. Click Edit.
3. In Person Details, edit the person details.
4. Click OK.

Removing a Person
You can remove one or more people from a project. This does not delete the person from the shared people; it just disassociates the person from the project.
To remove one or more people from a project
1. On the Home > People tab, select the check box for the people that you want to remove.
2. Below the person list, click Remove. To confirm the deletion, click OK.

Importing People From a File
You can import one or more people into a project from a file. Even though you perform this task at the project level, it also adds the people to the global people list. The source file can be either in TXT or CSV format. The file must not contain any headers. The person name can be in the following format: first and last name separated by a space, for example, John Smith or Bill Jones. For example, you can create a TXT or CSV file with the following text:
Chris Clark
Sarah Ashland

To import people from a file
1. On the Home > People tab, click Import People from File.
2. Browse to the TXT or CSV file.
3. Click Open.
4. When the import is complete, view the summary and click OK.
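If you want to script the creation of a source file, a minimal sketch follows. The header names and the custom "Department" column are illustrative assumptions, not a documented format: the project-level import described above expects no header row at all, while custom columns apply to the global Data Sources import with First row contains headers marked.

    import csv

    # Illustrative people; names follow the "First Last" format shown above.
    people = [("Chris Clark", "Finance"), ("Sarah Ashland", "Legal")]

    with open("people.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "Department"])   # header row (global import only)
        writer.writerows(people)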
Managing People Import People from File. Adding a Person to a Project | 230 Evidence Tab Users with permissions can view information about the evidence that has been added to a project. To view the Evidence tab, users need one of the following permissions: administrator, create/edit project, or manage evidence. Evidence Tab Elements of the Evidence Tab Element Description Filter Options Allows the user to filter the list. Evidence Path List Displays the paths of evidence in the project. Click the column headers to sort by the column. Refresh Managing People Refreshes the Groups List. See Refreshing the Contents in List and Grids on page 35. Evidence Tab | 231 Elements of the Evidence Tab (Continued) Element Description Columns Adjusts what columns display in the Groups List. See Sorting by Columns on page 35. External Evidence Details  Processing Status Lists any messages that occurred during processing. Includes editable information about imported evidence. Information includes: That path from which the evidence was imported  A description of the project, if you entered one  The evidence file type  What people were associated with the evidence  Who added the evidence  When the evidence was added About Associating a Person to an Evidence Item You can use people to associate data to its owner. You can associate a person to an evidence item in one of two ways; however, the results are different. Specify a person when importing an evidence item. This associates the person when the evidence is processed. You can then use person data when in Project Review and in exports. See the Loading Data documentation for more information on creating people on import. When you associate a person to an evidence item, the person will be associated to all evidence in that item, whether the evidence item contains a single file or a folder of many files, messages, and so on. Edit an evidence item that has already been imported and associate a person. Using this method, the person association will not be visible or usable in Project Review nor in exports. You can only view this association in the Evidence and People tabs of the Home page. Managing People Evidence Tab | 232 Chapter 18 Managing Tags Project/case managers can manage the tags for a project in the Project Review. The following tags can be created, deleted, renamed, and managed for permissions: Categories: See Creating Category Values on page 264. Issues: See Managing Issues on page 237. Labels: See Managing Labels on page 233. Case Organizer: See Using the Case Organizer (page 197) or Using the Case Organizer in the User Guide or Reviewer Guide. Managing Labels Labels are a tool that reviewers can use to group documents together. Reviewers apply labels to documents, then project/case managers can use the Labels folder to view all the documents under the selected label. Before reviewers can use a label, the project/case manager must create it. Creating Labels Project/case managers can create labels for reviewers to use when reviewing documents. To create a label 1. Log in as a user with Project Administrator rights. 2. Click the Project Review 3. Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. 4. Expand the Tags folder. 5. Right-click the Labels folder and click Create Label. Managing Tags button next to the project in the Project List. Managing Labels | 233 Create Label Dialog 6. Enter a Label Name. 7. 
Select Is Label Group if the label is a group to contain other labels and then skip to the last step. 8. Do one of the following: No Color: Select this to have no color associated with the label. Color: Select this and then select a color to associate a color with the label. Note: The default color is black if you select the Color option. The color selected appears next to the label in the labels folder. 9. Click Save . Deleting Labels Project/case managers can delete existing labels. To delete a label 1. Log in as a user with Project Administrator rights. 2. Click the Project Review 3. Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. 4. Expand the Tags folder. 5. Expand the Labels folder. 6. Right-click the label that you want to delete and click Delete . 7. Click OK. button next to the project in the Project List. Renaming a Label Project/case managers can rename labels in the Project Review. To rename a label 1. Log in as a user with Project Administrator rights. 2. Click the Project Review Managing Tags button next to the project in the Project List. Managing Labels | 234 3. Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. 4. Expand the Tags folder. 5. Expand the Labels folder. 6. Right-click the label that you want to rename and click Rename. 7. Enter the new name for the label. Managing Tags Managing Labels | 235 Managing Label Permissions Project/case managers can grant permissions of labels to groups for use. Groups of users can only use the labels for which they have permissions. To manage permissions for labels 1. Log in as a user with Project Administrator rights. 2. Click the Project Review 3. Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. 4. Expand the Tags folder. 5. Expand the Labels folder. 6. Right-click the label for which you want to grant permissions and click Manage Permissions . button next to the project in the Project List. Assign Security Permissions 7. Select the groups that you want to grant permissions for the selected label. Note: By default, all groups that the logged-in user belongs to will be selected. To make it a personal label, all groups should be un-selected. 8. Click Save. Managing Tags Managing Labels | 236 Managing Issues Project/case managers with View Issues and Assign Issues permissions can create, delete, rename, and assign permissions for issues. Issues work like labels. Reviewers can apply issues to documents to group similar documents. Creating Issues Project/case managers with View Issues and Assign Issues permissions can create issues for other users to code. To create an issue 1. Log in as a user with View Issues and Assign Issues rights. 2. Click the Project Review 3. Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. 4. Expand the Tags folder. 5. Right-click the Issues folder and click Create Issue. button next to the project in the Project List. Create New Issue Dialog 6. Enter an Issue Name. 7. Do one of the following: No Color: Select this to have no color associated with the issue. Color: 8. Select this and then select a color to associate a color with the issue. Click Save . Deleting Issues Project/case managers with View Issues and Assign Issues permissions can delete issues. To delete an issue 1. Log in as a user with View Issues and Assign Issues rights. 2. Click the Project Review 3. 
Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. Managing Tags button next to the project in the Project List. Managing Issues | 237 4. Expand the Tags folder. 5. Expand the Issues folder. 6. Right-click the issue that you want to delete and click Delete. 7. Click OK. Renaming Issues Project/case managers with View Issues and Assign Issues permissions can rename issues. To rename an issue 1. Log in as a user with View Issues and Assign Issues rights. 2. Click the Project Review 3. Click the Tags button in the Project Explorer. See the Review Guide for more information on tags. 4. Expand the Tags folder. 5. Expand the Issues folder. 6. Right-click the issue that you want to rename and click Rename. 7. Enter the new name for the issue. button next to the project in the Project List. Managing Issue Permissions Project/case managers can grant permissions of issues to groups for use. Groups of users can only use the labels for which they have permissions. To manage permissions for labels 1. Log in as a user with View Issues and Assign Issues rights. 2. Click the Project Review 3. Click the Tags button in the Project Explorer. See the Reviewer Guide for more information on tags. 4. Expand the Tags folder. 5. Expand the Issues folder. 6. Right-click the issue for which you want to grant permissions and click Manage Permissions . Managing Tags button next to the project in the Project List. Managing Issues | 238 Assign Security Permissions 7. Check the groups that you want to grant permissions for the selected issue. 8. Click Save. Applying Issues to Documents After an issue has been created and associated with a user group, it can then be added to a tagging layout for coding. To apply an issue to a document 1. Create an issue. See Creating Issues on page 237. 2. Grant permissions for the issue. See Managing Issue Permissions on page 238. 3. Add Issues to the Tagging Layout. See Associating Fields to a Tagging Layout on page 267. 4. Check out a review set of documents. (optional) See the Reviewer Guide for more information on checking out review sets. 5. Code the documents in the review set with the issues you created. See the Reviewer Guide for more information on coding. Managing Tags Managing Issues | 239 Chapter 19 Setting Project Permissions About Project Permissions You can assign permissions to a user or group of users for a specific project. In the project list of the Home page, users will only see projects to which they have permissions. For example, you can give a user permissions to review a project but not see any project properties on the Home page. Project permissions are project specific, not global. For information on how to manage global permissions, see the Admin Guide. In order to configure project permissions, you must have either Administrator or Create/Edit Projects permissions. You assign project permissions to users or user groups as follows: 1. Associating users or groups to the project. This will allow the user to see the project in the list, but not anything else. 2. Associating those users or groups to a project role. You can do the following: Select Create an existing project role or edit a role and assign permissions to that role About Project Roles Before you can apply permissions to a user or group, you must set up project roles. A project role is a set of permissions that you can associate to multiple users or groups. 
Creating a project role simplifies the process of assigning permissions to users who perform the same tasks. Setting Project Permissions About Project Permissions | 240 Project-level Permissions The following table describes the available project permissions that you can assign to a project role. Project-level Permissions Permission Description Project Administrator      Can Manage Project Roles. Can assign access permissions to users & groups. Has all project level functional permissions listed below. Can import/export. Can see job list for jobs created for his project. Project Reviewer Can open Project Review. Manage Project People Can assign access permissions to users & groups. Run Search Can run searches in the Project Review. Note: User must have this permission to perform other search functions as well. Save Search Can save searches that the user performs themselves. Manage Saved Search Permissions Can share your saved searches with other groups. View Data Reports Can view the Data Volume Reports on the Reports tab for projects which they have the rights to access. View Status Reports Can view the Completion Status Reports on the Reports tab for projects which they have the rights to access. View Audit Reports Can view the Audit Log on the Reports tab for projects which they have the rights to access. View Labels Can view the labels everywhere that labels appear. Create Labels Can create and edit labels in the Project Explorer in Project Review. Note: Must have View Labels permission as well to create and delete labels. Delete Labels Can delete labels in Project Review. Assign to Labels Can label documents. Manage Labels Permissions Can grant permissions to labels View Review Sets Can view the review sets in the Project Explorer and Review Batches panel in the Project Review. Create Review Sets Can create review sets. Delete Review Sets Can delete review sets in Project Review. Manage Review Set Permissions Can assign review sets to users/groups. View Native Can view the Native panel in Project Review. View Text Can view the Text panel in Project Review. Setting Project Permissions About Project Permissions | 241 Project-level Permissions (Continued) Permission Description View Coding Layout Can view the Coding panel in Project Review. Edit Document Can change data for documents using tagging layouts. View Categories Can view categories in Project Review. Assign Categories Can assign a document to a category. Create Categories Can create or edit categories in Project Review. Delete Categories Can delete categories in Project Review. Manage Category Permissions Can assign permissions for categories and category values. View Issues Can view issues in Project Review. Assign Issues Can assign issues to a document. Create Issues Can create and edit issues in Project Review. Delete Issues Can delete issues in Project Review. Manage Issue Permissions Can assign permissions for issue values. View Notes Can view notes everywhere that they appear in Project Review. Add Notes Can add notes in Project Review. Delete Notes Can delete notes in Project Review. View Annotations Can view annotations in Image, Natural, and Transcript panels in Project Review. Add Annotations Can add annotations in Project Review. Delete Annotations Can delete annotations in Project Review. View Activity History Can view Activity panel in Project Review. Create Production Set Can create production sets in Project Review. Delete Production Set Can delete production sets in Project Review. 
Manage Production Set Permissions Can edit and assign permissions for production sets. Export Production Set Can export production sets. Delete Evidence Can delete evidence items from the Item List grid. Imaging Can perform the imaging mass action in the Item List panel and can create an image using the Annotate option in the Natural panel. Create Transcript Group Can create a transcript group in Project Review. Predictive Coding Can apply predictive coding to documents in Project Review. Upload Transcripts Can upload transcripts in Project Review. Upload Exhibits Can upload exhibits in Project Review. Setting Project Permissions About Project Permissions | 242 Project-level Permissions (Continued) Permission Description Manage Transcript Permissions Can assign permissions to Transcript Groups. Global Replace Can search and replace words throughout a project in Project Review. Project-Level Permissions for eDiscovery For Resolution1 and Resolution1 eDiscovery users, you also have the ability to assign the following permissions regarding Litigation Holds: Project-Level Permissions for eDiscovery Permissions Description Approve Litholds Can approve Lit Holds. Create Litholds Can create Lit Holds. Delete Litholds Can delete Lit Holds. Hold Manager Can manage Lit Holds, including creating, approving, viewing, and deleting Lit Holds. View Litholds Can view Lit Holds. Project-Level Permissions for Jobs For Resolution1 and Resolution1 Cybersecurity users, you also have the ability to assign the following permissions for executing jobs: See Introduction to Jobs on page 447. Project-Level Permissions for Jobs Permissions Description Create Jobs Can create all jobs. Delete Jobs Can delete jobs. Approve Jobs Can approve jobs. Execute Jobs Can execute jobs. Create Agent Remediation Can create Agent Remediation jobs. Setting Project Permissions About Project Permissions | 243 Project-Level Permissions for Jobs Permissions Description Create Collection Can create Collection jobs. Note: If a user is assigned this permission and any other permission needed for combination jobs (Volatile, Computer Software Inventory, Memory Operations), that user may also create a combination job with those jobs that the user has permission to create. Create Computer Software Inventory Can create Computer Software Inventories jobs. Note: If a user is assigned this permission and any other permission needed for combination jobs (Volatile, Collection, Memory Operations), that user may also create a combination job with those jobs that the user has permission to create. Create ETM Can create ETM jobs. Create Memory Operations Can create Memory Operations jobs. Note: If a user is assigned this permission and any other permission needed for combination jobs (Volatile, Collection, Computer Software Inventory), that user may also create a combination job with those jobs that the user has permission to create. Create Metadata Only Can create Metadata only jobs. Create Network Acquisition Can create Network Acquisition jobs. Create Remediation Can create Remediation jobs. Create Remediate and Review Can create Remediate and Review jobs. Create Report Only Can create Report Only jobs. Create Removable Media Monitoring Can create Removable Media Monitoring jobs. Create Threat Scan Can create Threat Scan jobs. Create Volatile Can create Volatile jobs. 
Note: If a user is assigned this permission and any other permission needed for combination jobs (Volatile, Computer Software Inventory, Memory Operations), that user may also create a combination job with those jobs that the user has permission to create. Permissions Tab The Permissions tab on the Home page is used to assign users or groups permissions within the project. The Permissions tab is project specific, not global. For information on how to manage global permissions, see the Admin Guide. Setting Project Permissions Permissions Tab | 244 Permissions Tab Setting Project Permissions Permissions Tab | 245 Elements of the Permissions Tab Element Permission Filter Options Allows you search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38. Users/Group List Displays the users and groups associated with the project. Click the column headers to sort by the column. Refreshes the User/Group List. Refresh Exports the Permissions List to a CSV file. Export to CSV Adjusts what columns display in the User/Group List. Columns Adds either a group/user to a role or a role to a group/user. Add Association Disassociates a group/user from a role or disassociate a role from a group/user. Remove Association User/Group Details Pane Displays the details for the selected user or group. Project Roles Tab Displays the available roles for the project. Adds a role. Specify the permissions of the role in this data form. Add Role Edits the selected role. Edit Role To access the Permissions tab 1. On the Home page, select a project. 2. Click the Permissions tab. To apply permissions to a user or group, you must create a project role. You can then associate that project role to a user or group on the Permissions tab. See Creating a Project Role on page 249. See Associating Users and Groups to a Project on page 247. See Project-level Permissions on page 241. Setting Project Permissions Permissions Tab | 246 Associating Users and Groups to a Project Before you can apply a project role to a user or group, you must first associate the user or group to the project. Administrators and project managers with the correct permissions can associate users and groups to a project in the Permissions tab. Once a user or group is added to a project, the user can see the project in the Project List panel. To associate a user or group to a project 1. On the Home page, select a project. 2. Click the 3. In the User/Group list pane, click Add Association Permissions tab. . All Users and Groups Dialog 4. Click to add the user or group to the project. 5. Click OK. 6. To grant specific permissions to a user or group, associate them to a project role. See Associating Project Roles to Users and Groups on page 248. Disassociate Users and Groups from a Project Administrators and project/case managers with the correct permissions can remove users from a project by disassociating them from the project in the Permissions tab. To disassociate a user or group to a project 1. On the Home page, select a project, and click the Permissions tab. 2. Check the user or group you want to remove from the project in the User/Group list pane. 3. In the User/Group list pane, click the Remove Association button Setting Project Permissions . Associating Users and Groups to a Project | 247 Associating Project Roles to Users and Groups After you have associated a user or user group to a project, you can associate them to a project role. 
See Associating Users and Groups to a Project on page 247. You can select an existing project role or create a new one. For information on creating new project roles, see Creating a Project Role (page 249). To associate a project role to a user or group 1. On the Home page, select a project. 2. Click the 3. In the User/UserGoup pane, select a user or group that has been associated to the project. 4. Do one of the following: Associate Permissions tab. the user or group to an existing project role. 4a. In the Project Role pane (bottom of the page), click the 4b. In the All Project Role dialog, click the with the user or group. 4c. Click OK. Add Association button. Add button for the desired project roles to associate Create a new project role. See Creating a Project Role on page 249. Disassociating Project Roles from Users or Groups Administrators and users with the Manage Project permissions can disassociate project roles from users and groups for a specific project. To disassociate a project role to a user or group 1. On the Home page, select a project. 2. Click the 3. In the User/UserGoup pane, select a user or group that has been associated to the project. 4. In the Project Roles pane, click the Remove Association button Permissions tab. Setting Project Permissions . Associating Project Roles to Users and Groups | 248 Creating a Project Role After you have associated a user or user group to a project, you can associate them to a project role. You can use an existing role or create a new role. See About Project Roles on page 240. To create a project role 1. On the Home page, select a project. 2. Click the 3. If no user is associated with the project, associate a user by doing the following: 4. Permissions tab. 3a. In the Users/UserGroup pane, click the 3b. Add a user or group by clicking the 3c. Click OK. Add Associations button. Add button for a user or group. In the Project Roles pane at the bottom of the screen, click the Add button. Add Project Roles Data Form 5. Enter a Project Role Name. 6. Check the permissions that you want to include in the role. See Project-level Permissions on page 241. 7. Click OK. Setting Project Permissions Creating a Project Role | 249 Editing and Managing a Project Role You can edit project roles if you want to alter the permissions in the role. Because project roles can be used across multiple projects, you cannot delete a project role as it may affect other projects. To edit a project role 1. On the Home page, select a project. 2. Click the 3. Select a user that has the project role associated with it. 4. In the Project Roles pane at the bottom of the screen, select a role and click the edit button 5. Edit the role and click OK. Permissions tab. Setting Project Permissions Creating a Project Role . | 250 Chapter 20 Running Reports This chapter is designed to help you execute and understand reports. Reports allow you to view data about your project. Users with the necessary permissions can run reports for a project using the Reports tab and the Exports tab on the Home page. The Reports and Exports tabs are project specific, not global. Accessing the Reports Tab To access the Reports tab From the Home page, select a project, and click the Reports tab. 
The following reports are available: Deduplication Report (page 251), Data Volume Report (page 252), Completion Status Report (page 252), Audit Log Report (page 252), Search Report (page 254), and Export Set Report (page 255) (only appears after it has been generated).

Deduplication Report
You can open the Deduplication Summary report to view duplicate files and emails that were filtered in the project. Also included in the report are the deduplication options that were set for documents and email. You can generate the report, print it, save it in a variety of formats, and download it to a spreadsheet.

To run the deduplication report
1. Select a project in the Project List panel.
2. Click the Reports tab on the Home page.
3. Click Generate Report to create the report.
4. Click Download under the Deduplication Summary Report pane. You can choose to download the report either for files or emails.

Data Volume Report
You can generate the Data Volume Report to view the size of processed data, evidence file counts by file category, and a breakout of files by extension. You can view the report, print it, and save it in a variety of formats.

To run the data volume report
1. Select a project in the Project List panel.
2. Click the Reports tab on the Home page.
3. Click Download under the Data Volume Report pane.

Completion Status Report
The Completion Status report shows the status of a job. You can generate the report after the job starts running and at least one job target status is collecting.

To run the Completion Status Report
1. Select a project in the Project List panel.
2. Click the Reports tab on the Home page.
3. Click Generate Report under the Completion Status Report pane.

Audit Log Report
This log records user activities at the Project Review and evidence object level. The log records the following actions in the report:

Project Review Activities: Entered Review, Exited Review, Perform Search, Save Search, Apply Filter, Create Label, Create Document Group, Create Issue, Create Category, Create Review Set, Check Out Review Set, Check In Review Set, Create Production Set, Export Data, Label Evidence.

Object Activities:
If a document has several keywords that were found within its contents, a count of 1 is added to this total for that document. Note: If a search term contains a keyword hit, due to a variation search (stemming, phonic, or fuzzy), the character “&” is added to the end of each search term in the File details to indicate the variation search. However, a search term found with the synonym or related search will not show the “&.” at the end of the term. Total Unique Family Items: This count is the number of files where any single family member had a keyword hit. If any one file within a document family had a keyword hit, the individual files that make up this family are counted and added to this total. For example, one email had 3 attachments and the email hit on a keyword, a count of 4 files would be added to this count as a result. Total Family Emails: This count is the number of emails that have attachments where either the email itself or any of the attachments had a search hit. This count is for top level emails only. Emails as attachments are counted as attachments. Total Family Attachments: This count is the number of the attachments where either the top level email or any of the attachments had a search hit. For example, if you have an email with an email attached and the attached email has 4 documents attached to it, this count would include the 5 attachments. Total Unique Emails with no Attachments: This count is the number of the emails that have no attachments where a search hit was found. Total Unique Loose eDocs: This count is the number of loose eDocuments where a search hit was found. This does not include attachments to emails, but does count the individual documents where a hit was found from within a zip file. Total Hit Count: This count is the total number of hits that were found within all of the documents. Note: For some queries, the total hit count may be incorrect. To generate and download a search report 1. Perform a search. In Project Review, click Search Options > Generate Report. Running Reports Search Report | 254 Export Set Report The Export Set report supplies information about exported production sets. You can also generate and download a report either before or after you export the set to a load file. Each time you generate the report, it overwrites any previously generated report for that export set. After an export set report has been generated, you can download it in Microsoft Office Excel Worksheet format (XLSX) and save it to a new location. You can also view a list of the Export Set Reports under the Reports tab. To run an export set report 1. Select a project in the Project List panel. 2. Click the Printing/Export tab on the Home page. 3. Under the Export Set History tab, select an export and click Show Reports. 4. Under Summary, click Generate. Once an export report has been generated, click Download. Export Set Info Name: The name of the Export Set as defined by the user when the set was created. Labels: Lists which labels are included in the document set. Comments: Lists any comments that added when the export set was created. File Count: Displays a total of the number of documents contained within the exported set of data. File Size: Displays the total size of the documents being exported. File Breakout Type: Lists the document type by file extension of the files contained within the exported set of documents. Count: Size: Displays a count of how many documents are contained within each group. 
Size: Displays the total size of the files within each of the groupings.

File List
Object Name: Displays the name of the file being exported.
Person: Displays the name of the associated person.
Extension: Displays the file extension of the exported item.
Path: Displays the original file path of the exported item.
Create Date: Displays the metadata property for the created date of the exported item.
Last Access Date: Displays the metadata property for the last access date of the exported item.
Modify Date: Displays the metadata property for the modification date of the exported item.
Logical File Size: Displays the metadata property for the logical size of the exported item.
Type (Generic): Displays the file type of the exported item.

Image Conversion Exception Report

The Image Conversion Exception (ICE) report displays documents that were not imaged due to limitations of the image conversion tools or system failures.

To run an image conversion exception report
1. Select a project in the Project List panel.
2. Click the Export tab on the Home page.
3. Expand the Download Reports button of a production set.
4. Select Download ICE Report.

Summary Report

The Summary report supplies information about summaries in your project. You must generate the report from the Tags tab in Review. After a summary report has been generated, you can download it in Microsoft Word format (DOCX) and save it to a new location. You can also view exported files. For details, see the Using Summaries information in the Review Guide.

Chapter 21 Configuring Review Tools

Project/case managers with the correct permissions can configure many of the review tools that admin reviewers use in Project Review. See Setting Project Permissions (page 240) for information on the permissions needed to set up review tools.

The following review tools can be set up from the Home page:
Markup Sets: Configuring Markup Sets (page 258)
Custom Fields: Configuring Custom Fields (page 262)
Tagging Layouts: Configuring Tagging Layouts (page 265)
Highlight Profiles: Configuring Highlight Profiles (page 270)
Redaction Text: Configuring Redaction Text (page 274)

Configuring Markup Sets

Markup sets are sets of redactions and annotations performed by a specified group of users. For example, you can create a markup set for paralegals; when paralegal reviewers perform annotations on documents in Project Review, their markups appear only when the Paralegal option is selected as the markup set for the document in the Natural or Image panel of Project Review.

Note: Only redactions and annotations are included in markup sets.

Markup Sets Tab

The Markup Sets tab on the Home page can be used to create markup sets for reviewers to use. Markup sets are a set of redactions and highlights performed by a specified group of users.

Markup Sets Elements
Filter Options: Allows you to search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38.
Markup Sets List: Displays the markup sets already created for the project. Click the column headers to sort by the column.
Refresh: Refreshes the Markup Sets List.
Columns: Adjusts what columns display in the Markup Sets List.
Deletes selected markup set. Only active when a markup set is selected.
Delete Adds a markup set. Add Markup Set Edits the selected markup set. Edit Markup Set Deletes the selected markup set. Delete Markup Set Allows you to associate users to a markup set. Users Tab Allows you to associate groups to a markup set. Groups Tab Associates a group/user to a markup set. Add Association Remove Association Configuring Review Tools Disassociates a markup set from a user/group. Configuring Markup Sets | 259 Adding a Markup Set Before you can assign a markup set to a user or group, you must first create the markup set on the Home page. Project/case managers with the Project Administrator permission can create, edit, and delete markup sets. To add a markup set 1. Log in as a user with Project Administrator rights. 2. Click the Markup Sets tab. See Markup Sets Tab on page 259. 3. Click the Add button 4. In the Markup Set Detail form, enter the name of the Annotation Set. 5. Click OK. . Deleting a Markup Set To delete a markup set 1. Log in as a user with Project Administrator rights. 2. Click the Markup Sets tab. See Markup Sets Tab on page 259. 3. Select the markup set that you want to delete. 4. Click the Delete button 5. In the confirm deletion dialog, click OK. . Editing the Name of a Markup Set You can edit the name of an existing markup set if you have Project Administrator rights. To edit a markup set 1. Log in as a user with Project Administrator rights. 2. Click the Markup Sets tab. See Markup Sets Tab on page 259. 3. Select the markup set that you want to edit. 4. Click the Edit button 5. Change the name of the Annotation Set. 6. Click OK. Configuring Review Tools . Configuring Markup Sets | 260 Associating a User or Group to a Markup Set If you are a user with Project Administrator rights, you can associate users or groups to markup sets. Once associated, annotations that the user performs in the Project Review will appear on the document in Native or Image view when the markup set is selected. To associate a user or group to a markup set 1. Log in as a user with Project Administrator rights. 2. Click the Markup Sets tab. See Markup Sets Tab on page 259. 3. Select the markup set that you want to associate to a user or group. 4. Click the User or Group tab at the bottom of the page. 5. Click the Add Association button 6. In the All Users or All User Groups dialog, click the plus sign to add the user or group to the markup set. 7. Click OK. . Disassociating a User or Group from a Markup Set If you are a user with Project Administrator rights, you can disassociate users or groups to markup sets. To disassociate a user or group from a markup set 1. Log in as a user with Project Administrator rights. 2. Click the Markup Sets tab. See Markup Sets Tab on page 259. 3. Check the markup set that you want to disassociate to a user or group. 4. Click the User or Group tab at the bottom of the page. 5. Click the Remove Association button Configuring Review Tools . Configuring Markup Sets | 261 Configuring Custom Fields Custom fields include the columns that appear in the Project Review and categories that can be coded in Project Review. You can create custom fields that will allow you to display the data that you want for each document in Project Review, in production sets, and in exports. Custom fields allow you to: Map fields from documents upon import to the custom fields you create. See the Loading Data documentation for more information on mapping fields. Code documents for the custom fields in Project Review, using tagging layouts. 
See the Reviewer Guide for more information on coding data. See Adding Custom Fields on page 263. See Creating Category Values on page 264. See Adding a Tagging Layout on page 266. Custom Fields Tab The Custom Fields tab on the Home page can be used to add and edit custom fields for Project Review and coding. Elements of the Custom Fields Tab Element Description Filter Options Allows you search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38. Highlight Custom Fields Displays the custom fields already created for the project. Click the column headers to sort by the column. Refresh Columns Delete Add Custom Fields Edit Custom Fields Delete Custom Fields Configuring Review Tools Refreshes the Custom Fields List. Adjusts what columns display in the Custom Fields List. Deletes selected custom fields. Only active when one or more custom fields are selected. IMPORTANT: See About Deleting Custom Fields on page 264. Adds a custom field. Edits the selected custom field. Deletes the selected custom field. IMPORTANT: See About Deleting Custom Fields on page 264. Configuring Custom Fields | 262 Adding Custom Fields Project/case managers with the Project Administrator permission can create and edit custom fields. You can use the custom fields to add categories, text, number, and date fields. To add a custom field 1. Log in as a user with Project Administrator rights. 2. Click the Custom Fields tab. See Custom Fields Tab on page 262. 3. Click the Add button 4. In the Custom Field Detail form, enter the name of the custom field. 5. Select a Display Type: Check Date: box: Create a column that contains a check box. This is for coding categories only. Create a column that contains a date. Number: Radio: Text: . Create a column that contains a number. Create a column that contains a radio button. This is for coding categories only. Create a column that contains text. 6. Enter a Description for the custom field. 7. Select ReadOnly to make the column un-editable. 8. Click OK. Editing Custom Fields Project/case managers with the Project Administrator permission can create and edit custom fields. You cannot edit the Display Type of the custom field. To edit a custom field 1. Log in as a user with Project Administrator rights. 2. Click the Custom Fields tab. See Custom Fields Tab on page 262. 3. Select the custom field you want to edit. 4. Click the Edit button. 5. Make your edits. 6. Click OK. Configuring Review Tools Configuring Custom Fields | 263 Creating Category Values After you have created a Custom Field for check boxes or radio buttons, you can add values to the check boxes and radio buttons in Project Review. You can create multiple values for each category. To add values to categories 1. Log in as a user with Assign Categories permissions. 2. Click the Project Review 3. In the Project Explorer, click the Tags tab. 4. Expand the Categories. 5. Right-click on the category and select Create Category Value. button next to the project in the Project List. Create New Category Value Dialog 6. Enter a Name for the value. 7. Click Save. About Deleting Custom Fields The intent of this feature is that you can quickly delete a custom field that you created with properties that you did not intend. For example, you may realize after saving a custom field that you selected the wrong display type. 
If you have been using a custom field, and there is associated data with it, in most cases you will not want to delete it. IMPORTANT: Be aware of the following: If you delete a custom field that has been previously used, it will also delete the data contained within the field. If you delete a custom field that is used in a Tagging Layout, it will be removed from the layout, but the layout will remain. If you delete a custom field that is in use in the Item List by other user, that other user may experience errors. For example, if a user has enabled a column in the File List for this field, their browser may hang and they will have to refresh their browser and manually remove the column from the list. For this reason, if you must delete a custom field, you may want to do it at a time when fewer people are using the system. But users will still have to manually remove it from the column preferences. It may cause similar problems for any other panel where this field is used. It may also cause problems if the field is used in a global replace job that involves the field that hasn’t run yet. Any user with the appropriate permissions can delete a custom field. For example one user with Admin rights can delete a custom field that was created by a different user. Configuring Review Tools Configuring Custom Fields | 264 Configuring Tagging Layouts Tagging Layouts are layouts used for coding in the Project Review that the project manager creates. Users must have Project Administration permissions to create, edit, delete, and associate tagging layouts. First, you must create the layout, then associate fields to the layout for the reviewer to code, and finally, associate users or groups to the layout so that they can code with it in Project Review. Custom fields must be created by the project manager before they can be added to a tagging layout. See Configuring Custom Fields (page 262) for information on how to create custom fields. Tagging Layouts can be used to code fields in the Project Review for documents in the project. Coding is editing the data that appears in the fields for each document. Tagging Layout Tab The Tagging Layout tab on the Home page can be used to create layouts for coding in the Project Review. Elements of the Tagging Layout Tab Element Description Filter Options Allows you search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38. Tagging Layout List Displays the tagging layouts already created for the project. Click the column headers to sort by the column. Refreshes the Tagging Layout List. Refresh Adjusts what columns display in the Tagging Layout List. Columns Delete Deletes selected tagging layout. Only active when a tagging layout is selected. Adds a tagging layout. Add Tagging Layout Edits the selected tagging layout. Edit Tagging Layout Deletes the selected tagging layout. Delete Tagging Layout Tagging Layout Fields Tab Allows you to associate/disassociate fields to a tagging layout. Allows you to associate users to a tagging layout. Users Tab Configuring Review Tools Configuring Tagging Layouts | 265 Elements of the Tagging Layout Tab (Continued) Element Description Allows you to associate groups to a tagging layout. Groups Tab Associates a group, user, or field to a tagging layout. Add Association Disassociates a tagging layout from a user, group, or field. 
Remove Association Adding a Tagging Layout Project/case managers with the Project Administrator permission can create, edit, delete, and associate tagging layouts. To add a tagging layout 1. Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. 3. Click the Add button 4. In the Tagging Layout Detail form, enter the name of the Tagging Layout. 5. Enter the number of the order that you want the layout to appear to the user in the Project Review. Repeated numbers appear in alphabetical order. 6. Click OK. . Deleting a Tagging Layout Project/case managers with the Project Administrator permission can create, edit, delete, and associate tagging layouts. To delete a tagging layout 1. Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. 3. Check the layout that you want to delete. 4. Click the Delete button . Note: You can also delete multiple layouts by clicking the trash can delete button. 5. In the confirmation dialog, click OK. Configuring Review Tools Configuring Tagging Layouts | 266 Editing a Tagging Layout Project/case managers with the Project Administrator permission can create, edit, delete, and associate tagging layouts. To edit a tagging layout 1. Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. 3. Click the Edit button 4. In the Tagging Layout Detail form, enter the name of the Tagging Layout. 5. Enter the number of the order that you want the layout to appear to the user in the Project Review. Repeated numbers appear in alphabetical order. 6. Click OK. . Associating Fields to a Tagging Layout Project/case managers with the Project Administrator permission can create, edit, delete, and associate tagging layouts. Custom fields must be created before you can associate them with a tagging layout. See Configuring Custom Fields on page 262. To associate fields to a tagging layout 1. Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. 3. Select the layout that you want from the Tagging Layout list pane. 4. Select the fields tab in the lower pane 5. Click the Add Association button Configuring Review Tools . . Configuring Tagging Layouts | 267 Associate Tagging Layouts Dialog 6. Click to add the field to the layout. 7. Click OK. 8. Enter a number for the Order that you would like the fields to appear in the coding layout. 9. Select the fields that you just added (individually) and click the Edit button in the Tagging Layout Field Details. Select one of the following: Read Only: Select to make the field read only and disallow edits. Any standard or custom field that is defined to be 'Read Only' cannot be redefined as a "Required" or "None." Required: None: Select to make the field required to code before the reviewer can save the coding. Select to have no definition on the field. Is Carryable: Check to allow the field data to carry over to the next record when the user selects the Apply Previous button during coding. 10. Click OK. Note: Some fields are populated by processing evidence or are system fields and cannot be changed. These fields, when added to the layout, will have a ReadOnly value of True. Disassociating Fields from a Tagging Layout Project/case managers with the Project Administrator permission can disassociate tagging layouts. To disassociate fields from a tagging layout 1. 
Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. Configuring Review Tools Configuring Tagging Layouts | 268 3. Select the layout that you want from the Tagging Layout list pane. 4. Click the fields tab in the lower pane 5. Click the Remove Association button . . Associate User or Group to Tagging Layout Project/case managers with the Project Administrator permission can create, edit, delete, and associate tagging layouts. To associate users or groups to a tagging layout 1. Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. 3. Select the layout that you want from the Tagging Layout list pane. 4. Open either the User or Groups tab. 5. Click the Add Association button 6. In the All Users or All User Groups dialog, click 7. Click OK. . to add the user or group to the tagging layout. Disassociate User or Group to Tagging Layout Project/case managers with the Project Administrator permission can disassociate tagging layouts. To disassociate users or groups from a tagging layout 1. Log in as a user with Project Administrator rights. 2. Click the Tagging Layout tab. See Tagging Layout Tab on page 265. 3. Check the layout that you want from the Tagging Layout list pane. 4. Open either the User or Groups tab. 5. Check the user or group that you want to disassociate. 6. Click the Remove Association button Configuring Review Tools . Configuring Tagging Layouts | 269 Configuring Highlight Profiles You can set up persistent highlighting profiles that will highlight predetermined keywords in the Natural panel of Project Review. Persistent highlighting profiles are defined by the administrator or project/case manager and can be toggled on and off using the Select Profile drop-down in the Project Review. See Highlight Profiles Tab on page 270. Highlight Profiles Tab The Highlight Profiles tab on the Home page can be used to set up persistent highlighting profiles that will highlight predetermined keywords in the Natural panel in Project Review. Persistent highlighting profiles are defined by the administrator or project manager and can be toggled on and off using the Select Profile dropdown in the Project Review. Elements of the Highlight Profiles Tab Element Description Filter Options Allows you search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38. Highlight Profiles List Displays the highlight profiles already created for the project. Click the column headers to sort by the column. Refreshes the Highlight Profiles List. Refresh Adjusts what columns display in the Highlight Profiles List. Columns Delete Click to delete selected highlight profiles. Only active when a highlight profile is selected. Adds a highlight profile. Add Highlight Profiles Edits the selected highlight profile. Edit Highlight Profiles Deletes the selected highlight profile. Delete Highlight Profiles Highlight Profile Keywords Allows you to add keywords and highlights to the highlight profile. Allows you to associate users to a highlight profile. Users Tab Configuring Review Tools Configuring Highlight Profiles | 270 Elements of the Highlight Profiles Tab (Continued) Element Description Allows you to associate groups to a highlight profile. Groups Tab Associates a user or group to a highlight profile. Add Association Disassociates a highlight profile from a user or group. 
Remove Association Adding Highlight Profiles Project/case managers with the Project Administrator permission can create, edit, delete, and associate highlight profiles. To add a highlight profile 1. Log in as a user with Project Administrator rights. 2. Click the Highlight Profiles tab. See Highlight Profiles Tab on page 270. 3. Click the Add button 4. In the Highlight Profile Detail form, enter a Profile Name. 5. Enter a Description for the profile. 6. Click OK. Configuring Review Tools . Configuring Highlight Profiles | 271 Editing Highlight Profiles Project/case managers with the Project Administrator permission can create, edit, delete, and associate highlight profiles. To edit a highlight profile 1. Log in as a user with Project Administrator rights. 2. Click the Highlight Profiles tab. See Highlight Profiles Tab on page 270. 3. Select the profile that you want to edit. 4. Click the Edit button 5. In the Highlight Profile Detail form, enter a Profile Name. 6. Enter a Description for the profile. 7. Click OK. . Deleting Highlight Profiles Project/case managers with the Project Administrator permission can create, edit, delete, and associate highlight profiles. To delete a highlight profile 1. Log in as a user with Project Administrator rights. 2. Click the Highlight Profiles tab. See Highlight Profiles Tab on page 270. 3. Select the profile that you want to delete. 4. Click the Delete button . Note: You can also delete multiple profiles by clicking the trash can delete button. Add Keywords to a Highlight Profile After you have created a highlight profile, you can add keywords to the profile that will appear highlighted in the Natural panel of the Project Review when the profile is selected. To add keywords to a highlight profile 1. Log in as a user with Project Administrator rights. 2. Click the Highlight Profiles tab. See Highlight Profiles Tab on page 270. 3. Select a profile. 4. Select the Keywords tab Configuring Review Tools . Configuring Highlight Profiles | 272 5. Click the Add Keywords button. 6. In the Keyword Details form, enter the keywords (separated by a comma) that you want highlighted. 7. Expand the color drop-down and select a color you want to use as a highlight. 8. Click OK. 9. You can add multiple keyword highlights, in different colors, to one profile. Note: You can edit and delete keyword details by clicking the pencil or minus buttons in the Keywords tab. Associating a Highlight Profile Project/case managers with the Project Administrator permission can create, edit, delete, and associate highlight profiles. You can associate highlight profiles to users and groups. To associate a highlight profile to a user or group 1. Log in as a user with Project Administrator rights. 2. Click the Highlight Profiles tab. See Highlight Profiles Tab on page 270. 3. Select the profile that you want to associate to a user or group. 4. Open either the User or Groups tab. 5. Click the Add Association button 6. In the All Users or All User Groups dialog, click the plus sign to associate the user or group with the profile. 7. Click OK. . Disassociating a Highlight Profile Project/case managers with the Project Administrator permission can disassociate highlight profiles from users or groups. To disassociate a highlight profile from a user or group 1. Log in as a user with Project Administrator rights. 2. Click the Highlight Profiles tab. See Highlight Profiles Tab on page 270. 3. Select the profile that you want to disassociate from a user or group. 4. 
Open either the User or Groups tab. 5. Select the user or group that you want to disassociate. 6. Click the Remove Association button Configuring Review Tools . Configuring Highlight Profiles | 273 Configuring Redaction Text Project/case managers with the Project Administration permission can create redaction text profiles with text that appears on redactions on documents. Redactions can be made in the Image or Natural panel of the Project Review. Redaction Text Tab The Redaction Text tab on the Home page can be used to add, edit, and delete redaction text profiles. Redactions can be made in the Image view of the Project Review. Elements of the Redaction Text Tab Element Description Filter Options Allows you search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38. Redaction Text Profile List Displays the available redaction text profiles. Click the column headers to sort by the column. Refresh Refreshes the Redaction Text Profile list. For more information, see The Administrator Guide. Columns Adjusts what columns display in the Redaction Text Profile list. For more information, see The Administrator Guide. Delete Create Redaction Text Deletes selected redaction text profile. Only active when a redaction text is selected. Creates a redaction text profile. See Creating a Redaction Text Profile on page 274. Profile Edits the selected redaction text profile. Edit Redaction Text Deletes the selected redaction text profile. Delete Redaction Text Creating a Redaction Text Profile Project/case managers with the Project Administration permission can create the text that appears on redactions by adding redaction text profiles. To create redaction text profiles 1. Log in as a user with Project Administrator rights. 2. Click the Redaction Text tab. See Redaction Text Tab on page 274. Configuring Review Tools Configuring Redaction Text | 274 3. Click the Add button . 4. In the Redaction Text Detail form, enter the text that you want to appear on the redaction. 5. Click OK. Editing Redaction Text Profiles Project/case managers with the Project Administration permission can edit the text that appears on redactions by editing the redaction text profiles. To edit redaction text profiles 1. Log in as a user with Project Administrator rights. 2. Click the Redaction Text tab. See Redaction Text Tab on page 274. 3. Click the Edit button . 4. In the Redaction Text Detail form, enter the text that you want to appear on the redaction. 5. Click OK. Deleting Redaction Text Profiles Project/case managers with the Project Administration permission can delete redaction text profiles. To delete redaction text profiles 1. Log in as a user with Project Administrator rights. 2. Click the Redaction Text tab. See Redaction Text Tab on page 274. 3. Select the redaction text that you want to delete. 4. Click the Delete button Configuring Review Tools . Configuring Redaction Text | 275 Chapter 22 Monitoring the Work List The project/case manager can use the Work List tab on the Home page to monitor certain activities in the project. The following items are recorded in the Work List: searches, review sets, imaging, label assignments, imports, bulk coding, cluster analysis, bulk labeling, transcript/exhibit uploading, and delete summaries. The Job IDs are unique to every job. Jobs cannot be deleted or edited, only monitored. 
Project managers can be informed as to the actions performed in the project and errors that users have encountered in the project from the Work List tab. Accessing the Work List To access the Work List From the Home page, select a project, and click the Work List tab. Work List Tab The Work List tab on the Home page can be used to view data for the selected project. The bottom panel displays the number of documents processed and number of errors. This will be updated periodically to reflect current status. Elements of the Work List Tab Element Description Filter Options Allows you search and filter all of the items in the list. You can filter the list based on any number of fields. See Filtering Content in Lists and Grids on page 38. Work List Displays the jobs associated with the project. Click the column headers to sort by the column. Refresh Refreshes the Work List. Note: The Work List will automatically refresh every three minutes. Adjusts what columns display in the Work List. Columns Monitoring the Work List Accessing the Work List | 276 Elements of the Work List Tab (Continued) Element Description Displays the statistics on the data found in the Work List. Overview Tab Cancelling Review Jobs You can cancel certain jobs that you may have started while in Review. This allows you to resubmit work or cancel a process that you may not want to complete. Cancelling these jobs will cancel any work that has not yet been completed. Any work that has already completed will be retained. You can cancel the following jobs from the work list: Imaging Bulk Coding Network OCR Bulk Printing Documents To cancel a review job from the Work List 1. From the Work List, select the review job that you want to cancel. 2. Click to cancel the review job. Monitoring the Work List Accessing the Work List | 277 Chapter 23 Managing Document Groups About Managing Document Groups Project/case managers with Folders and Project Administration permissions can manage document groups. Document groups are folders where imported evidence is stored. You use document groups to organize your evidence by culling the data via permissions. Document groups can contain numerous documents. However, any given document can be in only one document group. You cannot assign permissions for documents unless the documents are in a document group. All documents in a group will be assigned DocIDs. Documents not within a document group, will NOT have DocIDs. You can name your document group to reflect where the files were located. The name can be a job number, a business name, or anything that will allow you to recognize what files are contained in the group. Document groups can be created in two ways: by importing evidence, or by selecting Document Groups in Project Review. See Creating a Document Group During Import on page 279. See Creating a Document Group in Project Review on page 279. Note: To make sure that the DocID, ParentDocID, and AttachDocIDs fields populate in the Family records, include at least one parent document and one child document when creating the document group. Managing Document Groups About Managing Document Groups | 278 Creating a Document Group During Import While importing evidence, you can create a document group. You can also place the documents into an existing document group. See the Loading Data documentation for information on how to create new document groups while importing evidence and putting evidence into existing document groups. 
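Every document placed in a group is assigned a DocID, and the procedure that follows asks for a Prefix, Suffix, Starting Number, and Padding that control how those IDs are numbered. The sketch below is only a rough illustration of how such settings typically combine into an identifier; the function name and behavior are hypothetical and are not the application's API.

```python
# Hypothetical illustration of prefix + zero-padded number + suffix numbering.
# This is not the product's API; it only shows how the pieces usually combine.

def make_doc_id(prefix: str, number: int, padding: int, suffix: str = "") -> str:
    """Compose an identifier such as ABC-0000001 from its parts."""
    return f"{prefix}{number:0{padding}d}{suffix}"

# A group created with Prefix "ABC-", Starting Number 1, and Padding 7 would
# yield identifiers like these for its first documents:
print(make_doc_id("ABC-", 1, 7))   # ABC-0000001
print(make_doc_id("ABC-", 2, 7))   # ABC-0000002
```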
Creating a Document Group in Project Review

Project/case managers with Folders permissions can create document groups in Project Review.

To create document groups in Project Review
1. Log in as a user with Project Administrator rights.
2. Click the Project Review button next to the project in the Project List.
3. In the Project Explorer, right-click the Document Groups folder and select Create Document Group.
4. Enter a Name for the document group.
5. Enter a Description for the document group.
6. Click Next.
7. Check the labels that you want to include in the document group.
8. Click Next.
9. Select one of the following:
Continue from Last: Select to continue the numbering from the last document.
Assign DocIDs: Select to assign DocID numbers to the records.
10. Enter a Prefix for the new numbering.
11. Enter a Suffix for the new numbering.
12. Select a Starting Number for the documents.
13. Select the Padding for the documents.
14. Click Next.
15. Review the Summary and click Create.
16. Click OK.

Deleting a Document Group in Project Review

Project/case managers with Folders permissions can delete document groups in Project Review. Deleting a document group allows you to move documents from one document group to another, create sub document groups, and create master document groups. When you delete a document group, the application deletes any associations that a particular document has to the deleted group. The application also deletes the DocIDs of documents that were in the deleted group. This allows you to assign a document to a new document group or alter an existing document group. You will need to assign new DocIDs to documents that were in a deleted document group.

To delete document groups in Project Review
1. Log in as a user with Project Administrator rights.
2. Click the Project Review button next to the project in the Project List.
3. In the Project Explorer, right-click the Document Groups folder and select Delete Document Group.
4. Click OK.

Chapter 24 Managing Transcripts and Exhibits

Project/case managers with Upload Exhibits, Upload Transcripts, and Manage Transcripts permissions can upload transcripts, create transcript groups, grant transcript permissions to users, and upload exhibits. Transcripts are uploaded from Project Review and can be viewed and annotated in the Transcripts panel.

Creating a Transcript Group

Project/case managers with the Create Transcript Group permission can create transcript groups to hold multiple transcripts.

To create a transcript group
1. Log in as a user with Create Transcript Group permissions.
2. Click the Project Review button next to the project in the Project List.
3. In the Project Explorer, right-click the Transcripts folder and click Create Transcript Group.
4. Enter a Transcript Group Name.
5. Click Save.
6. After creating the group, refresh the panel by clicking Refresh at the top of the Project Explorer panel.

Uploading Transcripts

Project/case managers with the Upload Transcripts permission can upload either .PTX or .TXT transcript files and put them in transcript groups. You can only add transcripts one at a time. When you upload a transcript, it is automatically indexed.

To upload transcripts
1. Log in as a user with Upload Transcripts permissions.
2. Click the Project Review button next to the project in the Project List.
3.
In the Project Explorer, right-click the Transcripts folder and click Upload Transcript. Managing Transcripts and Exhibits button next to the project in the Project List. Creating a Transcript Group | 281 Upload Transcript Dialog 4. Click Browse to find the transcript file, highlight the file, and click Open. 5. Select a Transcript Group from the menu. See Creating a Transcript Group on page 281. 6. Enter the name of the Deponent. 7. Select the Deposition Date. 8. If you are uploading more than one transcript from the same day, specify the volume number to differentiate between transcripts uploaded on the same date. 9. Select This transcript contains unnumbered preamble pages to indicate that there are pages prior to the testimony. If you check this box, enter the number of preamble pages prior that occur before the testimony. These pages will be numbered as “Preamble 0000#.” The numbering continues as normal after the preamble pages. 10. If the transcript is password protected, enter the password in the Password field. 11. Click Upload Transcript. 12. After the upload is complete, refresh the Item List. 13. To view the transcripts that have been uploaded, select the Transcript Groups that you want to view and click (Apply) on the Project Explorer panel. See the Reviewer Guide for more information on viewing and working with transcripts. Updating Transcripts Project managers with the Upload Transcripts permission can update transcripts in transcript groups. You can only update transcripts one at a time. To update transcripts 1. Log in as a user with Upload Transcripts permissions. 2. Click the Project Review 3. In the Project Explorer, right-click the Transcripts folder and click Update Transcript. Managing Transcripts and Exhibits button next to the project in the Project List. Creating a Transcript Group | 282 Update Transcript Dialog 4. Select a Transcript Group. 5. Select a Transcript. 6. Enter the Deponent name. 7. Enter the Deposition Date. 8. If you are uploading more than one transcript on the same day, specify the volume number to differentiate between transcripts uploaded on the same date. 9. Click Update Transcript. Creating a Transcript Report Project/case managers with the Create Transcript Report permission can create a report of the notes and highlights on a transcript. If there are no notes or highlights on a report, a report will not be generated. Note: You can create a report containing issues with notes or a report containing issues without notes, but you cannot create a report that contains both issues with notes and issues without notes. If you create a report with notes without issues but the selected notes have been previously assigned to an issue, those notes will not appear in the report. To create a transcript report 1. Log in as a user with Create a Transcript Report permissions. 2. Click the Project Review 3. From the Explore tab in the Project Explorer, right-click the Transcripts folder and click Transcript Report. Managing Transcripts and Exhibits button next to the project in the Project List. Creating a Transcript Group | 283 Transcript Report Dialog 4. Select Include Notes. You can mark whether to generate a report of all the users’ notes or just your own notes. 5. Check any issues that you want included in the report. Click Select All to select all of the issues to include or click Select None to deselect all of the issues. 6. Select Include Highlights. You can mark whether to generate a report of all the users’ highlights or just your own highlights. 7. 
Click Generate Report. Managing Transcripts and Exhibits Creating a Transcript Group | 284 Capturing Realtime Transcripts You have the ability to run a Realtime transcript session and capture the stream from a court reporter’s stenographer machine. You can either connect to a court reporter’s machine or run a demonstration of the Realtime transcript with a simulated transcription. To capture a Realtime transcript 1. Log in as a user with Realtime Transcripts permissions. 2. Click the Project Review 3. From the Explore tab in the Project Explorer, right-click the Transcripts folder and select Start Realtime Transcripts. 4. A dialog displays asking to start a new Realtime session or resume a previous session. Click Start New Realtime Session. 5. Click Next. 6. Enter the options that you want associated with this transcript: button next to the project in the Project List. Transcript Group: You must select a group for the realtime transcript. If no groups are defined, exit the wizard and create a group. See Creating a Transcript Group on page 281. Deponent Deposition Date Volume: If you are capturing more than one transcript on the same day, specify the volume number to differentiate between the transcripts captured on the same date. 7. Click Next. 8. Select the serial port that will contain the feed from the court reporter’s machine. The default port is COM1. Once selected, ask the Court Reporter to type a few lines to test the port. If you do not see any lines behind the wizard window, select another port and retry. If none of the ports work, check your connections. 9. Click Next. Set up Realtime Transcript Properties Dialog Managing Transcripts and Exhibits Capturing Realtime Transcripts | 285 10. In the Set up Realtime Transcript Properties dialog, you have several options in setting up your transcript. 11. Click Test to test the connection. Once the connection test is successful, click Finish. Elements of the Set up Realtime Transcript Properties Dialog Element Description Source Source Type Allows you to select from which port you are receiving the stenographer’s feed. The default is the serial port. Lines Per Page Allows you to enter how many lines you want to appear for each page of the transcript. Time Codes Allows you to stamp a time code on the transcript. You can choose to display the time based on the following options:  Time of Day - Marks the transcript with the time of day as indicated by your system.  Time From Court Reporter - Marks the transcript with the same time as indicated by the court reporter’s stenographer machine.  Start Time - Specifies the time stamped on the transcript.  No Time Codes - Specifies that no time code is stamped on the transcript.  Time Codes every x lines - Specifies how frequent the time code appears on the transcript. Steno Feed Allows you to set the options for the court reporter’s stenographer feed. Before connecting and receiving the stenographer feed, make sure that you have the correct serial settings for the stenographer feed. Steno Feed Format Allows you to choose to receive the court reporter’s feed in either CaseView or ASCII format. Line Terminator Available only for ASCII format. Allows you to indicate line termination by CRLF (carriage return line feed), CR only (Carriage return), or LF only (line feed). Serial Port Settings Allows you to configure the serial port settings for the stenographer feed. You can set the following options:  Port - The interface where the feed is transmitted. This will usually be COM1. 
 Baud Rate - The speed in which the data is sent. You can select a rate between 110 baud and 56000 baud.  Data Bits - The number of data bits sent with each character. Most characters will have eight bits (ddb8).  Parity - Parity detects errors in the feed. You can set the parity to either None, Even, Odd, Mark, and Space. The default setting is None.  Stop Bits - Stop bits allow the system to resynchronize with the feed. The default setting is one bit. Marking Realtime Transcripts Once you have a successful connection and start receiving the transcript, you can mark it and link it to other documents in the project. The Transcript window displays after connecting to the stenographer’s machine. The Transcript window displays two panes: the Notes/ Linked pane and the Transcript pane. The following tables describe the functions of the elements of the two panes. Managing Transcripts and Exhibits Capturing Realtime Transcripts | 286 Realtime Notes/Linked Panels Realtime Notes/Linked Panel Elements Element Description Notes This tab manages the Quick Mark notes that are produced in the Realtime transcript. Actions Provides the ability to perform a selected task on the items within the panel. Delete Provides the ability to delete any Quick Mark notes or links. Filters Provides the ability to filter notes and linked documents. You can filter notes by page, line, note, issues, date or owner. You can filter linked documents by DocID, LinkObjectID, or file path. Linked This tab manages links from the transcript to other documents in the project. Provides the ability to link to other documents in the project. Realtime Transcript Panel Managing Transcripts and Exhibits Capturing Realtime Transcripts | 287 Realtime Transcript Panel Elements Element Description Disconnect This option allows you to disconnect from the court reporter’s feed. Line/Word This option controls how the data is entered into the transcript. You can have the data entered word by word, or allow a line to be completed and populated before the data is transmitted. No Scroll/Auto Scroll This option displays whether the feed scrolls or not. If No Scroll is selected, the scroll bar will continue to move, but the feed will not move until you pull down the scroll bar. Exercise this option by toggling. Suspend/Continue This option allows you to either suspend or continue the feed. Exercise this option by toggling. Quick Mark This option allows you to quick mark the transcript. A quick mark is a note that you can enter and add additional information to the transcript. The quick mark will occur at the last known word/line. You can also quick mark the transcript by clicking the space bar. The search bar allows you to search for words or phrases within the transcript. Save Allows you to save the transcript draft. Updating a Realtime Transcript Project managers with the Update Realtime Transcript permission can replace an earlier saved version of a Realtime transcript with a new version. To update a Realtime transcript 1. Click the Project Review button next to the project in the Project List. 2. From the Explore tab in the Project Explorer, right-click the Transcripts folder and click Update Realtime Transcript. 3. Enter the information in the dialog. 4. Click Update. Update a Realtime Transcript Dialog Managing Transcripts and Exhibits Capturing Realtime Transcripts | 288 Elements of the Realtime Transcript Dialog Element Description Update Allows you to enter the transcript that you want to replace. 
Select the transcript name and group name from the pull-down menu. With Allows you to enter the new transcript. You can enter the filename in the field or browse to the location on the system. New Deponent Allows you to add a new deponent to the transcript if you want. Keep Draft Allows you to select to keep the original version that you are replacing. Rename Previous Version to: Allows you to rename the original version to avoid confusion between versions. Is Certified Allows you to select whether the new version of the transcript is certified or not. Managing Transcripts and Exhibits Capturing Realtime Transcripts | 289 Using Transcript Vocabulary The Transcript Vocabulary feature uses dtDearch to create an index of all of the unique words in a transcript. The index lists all of the unique words contained in the specific transcript or all transcripts. (Noise words, such as an and the, are not included in the index.) You can use the Transcript Vocabulary feature to isolate transcripts that include specific words, and search for those words in the transcript. Navigate between highlighted terms and view the highlighted terms in context of the transcript. Note: The content of headers, preambles, and margins of the transcripts are included in the Vocabulary index. To use Transcript Vocabulary 1. Click the Project Review button next to the project in the Project List. 2. Select Vocabulary from the Search Options menu. The Vocabulary dialog appears. Transcript Vocabulary Dialog Elements of the Vocabulary Dialog Element Scope Description Narrows the scope of the vocabulary index as follows: All Transcript - Builds an index from all of the transcripts in the project.  Transcript in List - Builds an index from the transcripts in the Item List.  Managing Transcripts and Exhibits Using Transcript Vocabulary | 290 Elements of the Vocabulary Dialog Element Description Search Allows you to search for a word or a group of words in the vocabulary list. Entering a letter in the search field retrieves a list of words that begins with the letter entered. Displays the word count of the vocabulary index. This count changes depending upon the scope of the transcript vocabulary. Page Size Changes the number of word rows displayed in the pane. Page ___ of Navigates between pages of words listed. Refreshes the word list. View Details Displays more details on documents that contain the word in the highlighted row. This word appears in the Current Word field. Note: Only details of the highlighted word appear in the Current Word field, even when other words are selected in the Vocabulary list. When selected, a dialog appears. See Viewing Details of Words in the Vocabulary Dialog on page 291. Run Search Searches for documents containing certain words selected in the Vocabulary list. Note: This search searches the entire project, not just transcript documents. Any documents found post back to the Item List. You can check any number of words to include in the search. Select Match All from the menu to return documents that contain all of the words selected or Match Any to return documents that contain any of the words selected. Viewing Details of Words in the Vocabulary Dialog In the Vocabulary dialog, you can view details of the documents that contain the word that you are examining. Within the Documents Containing dialog, you can view a list of documents and filter by TranscriptName, ObjectID, or Hit Count. Note: The TranscriptName contains the deponent name, deposition date, and volume (if specified). 
Select a document in the document list and click View Selected Document to open the document to view the selected word. The document opens in the Natural Viewer and the selected word highlights in the Natural Viewer. Click Close to exit the Documents Containing dialog. Managing Transcripts and Exhibits Using Transcript Vocabulary | 291 Uploading Exhibits Project/case managers with the Upload Exhibits permission can upload exhibits in Project Review. You can view exhibits in the exhibits panel. To upload an exhibit 1. Log in as a user with Upload Exhibits permissions. 2. Click the Project Review 3. In the Project Explorer, right-click the Transcripts folder and click Upload Exhibits. button next to the project in the Project List. Upload Exhibit Dialog 4. Select the Transcript Group that contains the transcript to which you want to link the exhibit. 5. From the Transcripts menu, select the transcript to which you want to link the exhibit. 6. Click Browse, highlight the exhibit file, and click Open. 7. In the Text to be linked field, enter the text (from the transcript) that will become a link to the exhibit. You can enter multiple text or aliases to be linked. Separate the terms by either a comma and/or a semicolon. Every occurrence of the text in the transcript becomes a hyperlink to the exhibit. 8. Click Upload Exhibit. Managing Transcripts and Exhibits Uploading Exhibits | 292 Chapter 25 Managing Review Sets Review sets are batches of documents that you can check out for coding and then check back in. Review sets aid in the work flow of the reviewer. It allows the reviewer to track the documents that have been coded and still need to be coded. Project/case managers with Create/Delete Review Set permissions can create and delete review sets. Creating a Review Set Project/case managers with Create/Delete Review Set permissions can create and delete review sets. To create a review set 1. Log in as a user with Project Administrator rights. 2. Click the Project Review 3. Click the Review Sets button in the Project Explorer. See the Reviewer Guide for more information on the Review Sets tab. 4. Right-click the Review Sets folder and click Create Review Set. button next to the project in the Project List. Create Review Set Dialog 5. Enter a Name for the review set. Managing Review Sets Creating a Review Set | 293 6. Select a Review Column that indicates the status of the review. New columns can be created in the Custom Fields tab of the Home page. See Custom Fields Tab on page 262. 7. Enter a prefix for the batch that will appear before the page numbers of the docs. 8. Increase or decrease the Batch Size to match the number of documents that you want to appear in the review set. 9. Check the following options if desired: Keep Families together: Check this to include documents within the same family as the selected documents in the batch. Keep Similar document sets together: Check this to include documents related to the selected documents in the batch. Note: Any “Keep” check box selected will override the restricted Batch Size. 10. Click Next. Create Review Sets Dialog Second Screen 11. Expand Labels and check the labels that you want to include in the review set. All documents with that label applied will be included in the review set. This is only relevant if the documents have already been labeled by reviewers. 12. Expand the Document Groups and check the document groups that you want to include in the review set. 13. Click Next. 14. 
Review the summary of the review set to ensure everything is accurate and click Create.
15. Click Close.

Deleting Review Sets

Project/case managers with Create/Delete Review Set permissions can create and delete review sets.

To delete a review set
1. Log in as a user with Project Administrator rights.
2. Click the Project Review button next to the project in the Project List.
3. Click the Review Sets button in the Project Explorer. See the Reviewer Guide for more information on the Review Sets tab.
4. Expand the All Sets folder.
5. Right-click the review set that you want to delete and click Delete.
6. Click OK.

Renaming a Review Set

Project/case managers with Manage Review Set permissions can rename review sets.

To rename a review set
1. Log in as a user with Project Administrator rights.
2. Click the Project Review button next to the project in the Project List.
3. Click the Review Sets button in the Project Explorer. See the Reviewer Guide for more information on the Review Sets tab.
4. Expand the All Sets folder.
5. Right-click the review set that you want to rename and click Rename.
6. Enter a name for the review set.

Manage Permissions for Review Sets

Project/case managers with Manage Review Set permissions can manage the permissions for review sets.

To manage permissions for a review set
1. Log in as a user with Project Administrator rights.
2. Click the Project Review button next to the project in the Project List.
3. Click the Review Sets button in the Project Explorer. See the Reviewer Guide for more information on the Review Sets tab.
4. Expand the All Sets folder.
5. Right-click the review set that you want to manage permissions for and click Manage Permissions. The Assign Security Permissions dialog appears.
6. Check the groups that you want to grant permissions to the review set. Groups granted the Check In/Check Out Review Batches permission will be able to check out the review sets to which they are granted permission.
7. Click Save.

Chapter 26 Project Folder Structure

This document describes the folder structure of the projects in your database. The location of the project folders will differ depending on the project folder path where you saved the data.

Project Folder Path

When a project is created, a Project Folder is created in the Project Folder Path provided by the user who creates the project. The Project Folder name consists of alphanumeric characters auto-generated by the application. Project Folder example: 3fc04d13-1b48-40a5-80d3-0e410e8e9619.

Finding the Project Folder Path

You can find your project folder path by looking at the Project Details tab.

To find the project folder path
1. Log in to the application.
2. Select the project in the Project List panel.
3. Click the Project Detail tab on the Home page.
4. The path is listed under Project Folder Path.

Project Folder Subfolders

Within the Project Folder, there are multiple subfolders. Which subfolders are available to view will depend on the project and the evidence loaded within the project. This section describes those subfolders. Note that most of the files within the subfolders have the DAT extension. This is the extension that the application requires in order to read the contents of these files. The filename (.dat) represents the ObjectID of that document.
It should match the ObjectID column displayed in the Project Review. CoolHTML: This folder contains the CoolHTML files. The application converts all email files into CoolHTML files in order for the native viewer to display them. Native: This folder contains all the native files. This only pertains to Imported DII Documents and Production Set Documents. Tiff: This folder contains the Image Documents. This only pertains to Imported DII Image Documents, Production Set Image Documents, and Documents imaged using the “Imaging” option in the Item List panel of the Project Review. PDF: This folder contains the Image Documents. These are imaged using the “Imaging” option in the Item List panel of Project Review and selecting the pdf option. Graphic_Swf: This folder contains flash files created when imaging documents. There are two ways to create these flash files: Click on the Annotate button from the Image tab of the Document Viewer. Select Imaging in the mass operations of the Item List panel and then select the Process for Image Annotation option. Native_Swf: This folder contains flash files created when imaging documents. There are two way to create these flash files: Click on the Annotate button from the Natural tab of Document Viewer. Select Imaging in the mass operations of the Item List panel and then select the Process for Native Annotation option. Reports: This folder contains any report that is downloadable from within the program’s interface, including project level reports such as Deduplication, Data Volume, Search, and Audit Log Reports. Slipsheets: This folder is a temporary location to place slipsheets during an imaging, production set, or export job where images are requested. During the job if a particular document cannot be imaged, the program will create a slipsheet for the document, which is stored in this file. As the job gets to completion, the program will move that slipsheet into the appropriate folder (with the appropriate number in the project of export and production sets.) Dts_idx: This folder contains the DT Search Index Files. These are needed to be able to search for full text data. Email_body: This folder contains files that are the text of an email body. Filtered: This folder contains the files that are the text of the Native file extracted by the application at the time of Add Evidence. OCR: This folder contains the files that are the text of the Native/Image files loaded via Import DII. JT: This folder contains files that are used for communication between processing host and processing engine. This is internal EP communication. Jobs: This folder contains the jobs sent via the application (i.e. Import, Add Evidence, Cluster Analysis, etc.) There are multiple Job folders: Project Folder Structure Project Folder Subfolders | 299 AA: This folder contains the Additional Analysis Jobs which consist of Jobs from Import, Imaging, Transcript Uploads, Clustering, etc. This folder also contains subfolders for the respective jobs performed by the Additional Analysis jobs. These folders contain compressed job information log files that are used for troubleshooting. The user should not need to access these log files. AE: This folder contains the jobs processed through Add Evidence. This folder also contains subfolders for the respective Add Evidence jobs. These folders contain compressed job information log files that are used for troubleshooting. The user should not need to access these log files. 
MI: This folder contains files for Index Manager jobs. These are run anytime you run another job to help update the database. This folder also contains subfolders for the respective jobs performed by the Index Manager jobs. These folders contain compressed job information log files that are used for troubleshooting. The user should not need to access these log files. EvidenceHistory.log: This is a log file of Add Evidence, Additional Analysis, and Indexing jobs. A user should not need to access these log files. Opening Project Files To open any of the DAT files, you’ll need to know the original extension of the files. For example, if the file is in the Tiff Folder, you know that it was originally a TIFF file. So if you change the extension from DAT to TIFF, you can open the file and it’ll open as a TIFF file. The files in the Native Folder are a little more complicated. You will need to match up the ObjectID to the one shown in the Project Review and determine what kind of native file it is and then change it to that extension accordingly. So that you do not alter the original file, it is best that you make a copy of the data files and then change the extension accordingly. Files in the Project Folder In the main Project Folder, there are many files that are not in folders. Some of the loose files that you may encounter include: EvidenceHistory.log: This is a log file of Add Evidence Jobs, Imaging Jobs, Production Sets, and Clustering Jobs. Project Folder Structure Project Folder Subfolders | 300 Chapter 27 Using Language Identification Language Identification When selecting Evidence Processing, you can identify documents based on the language they were created in. See Default Evidence Processing Options on page 84. With Language Identification, you can identify and isolate documents that have been created in a specific language. Because Language Identification extends the processing time, only select the Language Identification needed for your documents. There are three levels of language identification to choose from: None The system will perform no language identification. All documents are assumed to be written in English. This is the fastest processing option. Basic The system will perform language identification for the following languages: Arabic Chinese English French German Japanese Korean Portuguese Russian Spanish If the language to identify is one of the ten basic languages (except for English), select Basic when choosing Language Identification. The Extended option also identifies the basic ten languages, but the processing time is significantly greater. Using Language Identification Language Identification | 301 Extended The system will perform language identification for 67 different languages. This is the slowest processing option.
The following languages can be identified: Afrikaans, Albanian, Amharic, Arabic, Armenian, Basque, Belarusian, Bosnian, Breton, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Georgian, German, Greek, Hawaiian, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Korean, Latin, Latvian, Lithuanian, Malay, Manx, Marathi, Nepali, Norwegian, Persian, Polish, Portuguese, Quechua, Romanian, Rumantsch, Russian, Sanskrit, Scots, Scottish Gaelic, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Vietnamese, Welsh, West Frisian, and Yiddish. Using Language Identification Language Identification | 302 Chapter 28 Getting Started with KFF (Known File Filter) This document contains the following information about understanding and getting started using KFF (Known File Filter). About KFF (page 303) About the KFF Server and Geolocation (page 308) Installing the KFF Server (page 309) Configuring the Location of the KFF Server (page 310) Migrating Legacy KFF Data (page 311) Importing KFF Data (page 313) About CSV and Binary Formats (page 320) Installing KFF Updates (page 324) Uninstalling KFF (page 323) KFF Library Reference Information (page 325) What has Changed in Version 5.6 (page 330) Important: AccessData applications versions 5.6 and later use a new KFF architecture. If you are using one of the following applications version 5.6 or later, you must install and implement the new KFF architecture: Resolution1 (Resolution1 Platform, Resolution1 CyberSecurity, Resolution1 eDiscovery) Summation FTK-based products (FTK, FTK Pro, AD Lab, AD Enterprise) See What has Changed in Version 5.6 on page 330. About KFF KFF (Known File Filter) is a utility that compares the file hash values of known files against the files in your project. The known files that you compare against may be the following: Files that you want to ignore, such as operating system files Files that you want to be alerted about, such as malware or other contraband files The hash values of files, such as MD5, SHA-1, etc., are based on the file’s content, not on the file name or extension. This helps you identify files even if they are renamed. Using KFF during your analysis can provide the following benefits: Getting Started with KFF (Known File Filter) About KFF | 303 Immediately identify and ignore 40-70% of files irrelevant to the project. Immediately identify known contraband files. Introduction to the KFF Architecture There are two distinct components of the KFF architecture: KFF Data - The KFF data are the hashes of the known files that are compared against the files in your project. The KFF data is organized in KFF Hash Sets and KFF Groups. The KFF data can be composed of hashes obtained from pre-configured libraries (such as NSRL) or custom hashes that you configure yourself. See Components of KFF Data on page 304. KFF Server - The KFF Server is the component that is used to store and process the KFF data against your evidence. The KFF Server uses the AccessData Elasticsearch Windows Service. After you install the KFF Server, you import your KFF data into it. Note: The KFF database is no longer stored in the shared evidence database or on the file system in EDB format. Components of KFF Data Item Description Hash The unique MD5 or SHA-1 hash value of a file.
This is the value that is compared between known files and the files in your project. Hash Set A collection of hashes that are related in some way. The hash set has an ID, status, name, vendor, package, and version. In most cases, a set corresponds to a collection of hashes from a single source that have the same status. Group KFF Groups are containers that are used for managing the Hash Sets that are used in a project. KFF Groups can contain Hash Sets as well as other groups. Projects can only use a single KFF Group. However, when configuring your project you can select a single KFF Group which can contain nested groups. Status The specified status of a hash set of known files, which can be either Ignore or Alert. When a file in a project matches a known file, this is the reported status of the file in the project. Library A pre-defined collection of hashes that you can import into the KFF Server. There are three pre-defined libraries:  NSRL  NDIC HashKeeper  DHS See About Pre-defined KFF Hash Libraries on page 306. Getting Started with KFF (Known File Filter) About KFF | 304 Item Description Index/Indices When data is stored internally in the KFF Library, it is stored in multiple indexes or indices. The following indices can exist:  NSRL index  A dedicated index for the hashes imported from the NSRL library. NDIC index  A dedicated index for the hashes imported from the NDIC library. DHS index  A dedicated index for the hashes imported from the DHS library. KFF index  A dedicated index for the hashes that you manually create or import from other sources, such as CSV. These indices are internal and you do not see them in the main application. The only place that you see some of them is in the KFF Import Utility. See Using the KFF Import Utility on page 314. The only time you need to be mindful of the indices is when you use the KFF binary format when you either export or import data. See About CSV and Binary Formats on page 320. About the Organization of Hashes, Hash Sets, and KFF Groups Hashes, such as MD5, SHA-1, etc., are based on the file’s content, not on the file name or extension. You can also import hashes into the KFF Server in .CSV format. For FTK-based products, you can also import hashes into the KFF Server that are contained in .TSV, .HKE, .HKE.TXT, .HDI, .HDB, .hash, .NSRL, or .KFF file formats. You can also manually add hashes. Hashes are organized into Hash Sets. Hash Sets usually include hashes that have a common status, such as Alert or Ignore. Hash Sets must be organized into KFF Groups before they can be utilized in a project. Getting Started with KFF (Known File Filter) About KFF | 305 About Pre-defined KFF Hash Libraries All of the pre-configured hash sets currently available for KFF come from three federal government agencies and are available in KFF libraries. See About KFF Pre-Defined Hash Libraries on page 325. You can use the following KFF libraries: NIST NSRL See About Importing the NIST NSRL Library on page 316. NDIC HashKeeper (Sept 2008) See Importing the NDIC Hashkeeper Library on page 318. DHS (Jan 2008) See Importing the DHS Library on page 318. It is not required to use a pre-configured KFF library in order to use KFF. You can configure or import custom hash sets. See your application’s Admin Guide for more information. How KFF Works The Known File Filter (KFF) is a body of MD5 and SHA1 hash values computed from electronic files.
Some predefined data is gathered and cataloged by several US federal government agencies, or you can configure your own. KFF is used to locate files residing within project evidence that have been previously encountered by other investigators or archivists. Identifying previously cataloged (known) files within a project can expedite its investigation. When evidence is processed with the MD5 Hash (and/or SHA-1 Hash) and KFF options, a hash value for each file item within the evidence is computed, and that newly computed hash value is searched for within the KFF data. Every file item whose hash value is found in the KFF is considered to be a known file. Note: If two hash sets in the same group have the same MD5 hash value, they must have the same metadata. If you change the metadata of one hash set, all hash sets in the group with the same MD5 hash value will be updated to the same metadata. The KFF data is organized into Groups and stored in the KFF Server. The KFF Server service performs lookup functions. Status Values In order to accelerate an investigation, each known file can be labeled as either Alert or Ignore, meaning that the file is likely to be forensically interesting (Alert) or uninteresting (Ignore). Other files have a status of Unknown. The Alert/Ignore designation can assist the investigator in homing in on files that are relevant, and avoid spending inordinate time on files that are not relevant. Known files are presented in the Overview Tab’s File Status Container, under “KFF Alert files” and “KFF Ignorable.” Getting Started with KFF (Known File Filter) About KFF | 306 Hash Sets The hash values comprising the KFF are organized into hash sets. Each hash set has a name, a status, and a listing of hash values. Consider two examples. The hash set “ZZ00001 Suspected child porn” has a status of Alert and contains 12 hash values. The hash set “BitDefender Total Security 2008 9843” has a status of Ignore and contains 69 hash values. If, during the course of evidence processing, a file item’s hash value were found to belong to the “ZZ00001 Suspected child porn” set, then that file item would be presented in the KFF Alert files list. Likewise, if another file item’s hash value were found to belong to the “BitDefender Total Security 2008 9843” set, then that file would be presented in the KFF Ignorable list. In order to determine whether any Alert file is truly relevant to a given project, and whether any Ignore file is truly irrelevant to a project, the investigator must understand the origins of the KFF’s hash sets, and the methods used to determine their Alert and Ignore status assignments. You can install libraries of pre-defined hash sets or you can import custom hash sets. The pre-defined hash sets contain a body of MD5 and SHA1 hash values computed from electronic files that are gathered and cataloged by several US federal government agencies. See About KFF Pre-Defined Hash Libraries on page 325. Higher Level Structure and Usage Because hash set groups have the properties just described, and because custom hash sets and groups can be defined by the investigator, the KFF mechanism can be leveraged in creative ways. For example, the investigator may define a group of hash sets created from encryption software and another group of hash sets created from child pornography files and then apply only those groups while processing.
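To make the hashing step concrete: a custom hash set is ultimately just a collection of MD5 and/or SHA-1 values computed from files the investigator has chosen. The short Python sketch below shows how such values are typically computed outside the application; the folder path is an example only, and the results are simply printed rather than written in any particular import format.

    import hashlib
    from pathlib import Path

    def hash_file(path):
        # Read the file in 1 MB chunks so large evidence files do not exhaust memory.
        md5, sha1 = hashlib.md5(), hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                md5.update(chunk)
                sha1.update(chunk)
        return md5.hexdigest().upper(), sha1.hexdigest().upper()

    # Example folder of files to include in a custom hash set (path is illustrative).
    source = Path(r"C:\CustomHashSource")
    for item in sorted(source.rglob("*")):
        if item.is_file():
            md5_value, sha1_value = hash_file(item)
            print(md5_value, sha1_value, item.name)

Because the hashes are computed from file content only, renaming a file does not change the values produced, which is exactly why KFF matching survives renamed files.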
Getting Started with KFF (Known File Filter) About KFF | 307 About the KFF Server and Geolocation In order to use the Geolocation Visualization feature in various AccessData products, you must use the KFF architecture and do the following: Install the KFF Server. See Installing the KFF Server on page 309. Install the Geolocation (GeoIP) Data (this data provides location data for evidence). See Installing the Geolocation (GeoIP) Data on page 319. From time to time, there will be updates available for the GeoIP data. See Installing KFF Updates on page 324. If you are upgrading to 5.6 or later from an application 5.5 or earlier, you must install the new KFF Server and the updated Geolocation data. Getting Started with KFF (Known File Filter) About the KFF Server and Geolocation | 308 Installing the KFF Server About Installing the KFF Server In order to use KFF, you must first configure a KFF Server. For product versions 5.6 and later, you install a KFF Server by installing the AccessData Elasticsearch Windows Service. Where you install the KFF Server depends on the product you are using with KFF: For FTK and FTK Pro applications, the KFF Server must be installed on the same computer that runs the Examiner. For all other applications, such as AD Lab, Resolution1, or Summation, the KFF Server can be installed on either the same computer as the application or on a remote computer. For large environments, it is recommended that the KFF Server be installed on a dedicated computer. After installing the KFF Server, you configure the application with the location of the KFF Server. See Configuring the Location of the KFF Server on page 310. About KFF Server Versions The KFF Server (AccessData Elasticsearch Windows Service) may be updated from time to time. It is best to use the latest version. AccessData Elasticsearch Windows Service versions: Version 1.3.2, released November 2014 with the 5.6 versions of Resolution1, Summation, and FTK-based products. For installation instructions, see Installing the KFF Server Service on page 309. For applications 5.5 and earlier, the KFF Server component was version 1.2.7 and earlier. About Upgrading from Earlier Versions If you have used KFF with applications versions 5.5 and earlier, you can migrate your legacy KFF data to the new architecture. See Migrating Legacy KFF Data on page 311. Installing the KFF Server Service For instructions on installing the AccessData Elasticsearch Windows Service, see Installing the Elasticsearch Service (page 527). Getting Started with KFF (Known File Filter) Installing the KFF Server | 309 Configuring the Location of the KFF Server After installing the KFF Server, on the computer running the application, such as FTK, Lab, Summation, or Resolution1, you configure the location of the KFF Server. Do one of the following: Configuring the KFF Server Location on FTK-based Computers (page 310) Configuring the KFF Server Location on Resolution1 and Summation Applications (page 310) Configuring the KFF Server Location on FTK-based Computers Before using KFF with FTK, FTK Pro, Lab, or Enterprise, you must configure the location of the KFF Server. Important: To configure KFF, you must be logged in with Admin privileges. To view or edit KFF configuration settings 1. In the Case Manager, click Tools > Preferences > Configure KFF. 2. You can set or view the address of the KFF Server. If you installed the KFF Server on the same computer as the application, this value will be localhost.
If you installed the KFF Server on a different computer, identify the KFF Server. 3. Click Test to validate communication with the KFF Server. 4. Click Save. 5. Click OK. Configuring the KFF Server Location on Resolution1 and Summation Applications When using the KFF Server with Summation or Resolution1 applications, two configuration files must point to the KFF Server location. These settings are configured automatically during the KFF Server installation. If needed, you can verify the settings. However, if you change the location of the KFF Server, do the following to specify the new location. 1. Configure AdgWindowsServiceHost.exe.config: 1a. On the computer running the application (for example, the server running Summation), go to C:\Program Files\AccessData\Common\FTK Business Services. 1b. Open AdgWindowsServiceHost.exe.config. Getting Started with KFF (Known File Filter) Configuring the Location of the KFF Server | 310 1c. Modify the line that specifies the KFF Server location. 1d. Change localhost to the location of your KFF Server (you can use the hostname or IP address). 1e. Save and close the file. 1f. Restart the Business Services Common service. 2. Configure AsyncProcessingServices web.config: 2a. On the computer running the application (for example, the server running Summation), go to C:\Program Files\AccessData\AsyncProcessingServices. 2b. Open web.config. 2c. Modify the line that specifies the KFF Server location. 2d. Change localhost to the location of your KFF Server (you can use the hostname or IP address). 2e. Save and close the file. 2f. Restart the AsyncProcessing service. Migrating Legacy KFF Data If you have used KFF with applications versions 5.5 and earlier, you can migrate that data from the legacy KFF Server to the new KFF Server architecture. Important: Applications version 5.6 and later can only use the new KFF architecture that was introduced in 5.6. If you want to use KFF data from previous versions, you must migrate the data. Important: If you have NSRL, NDIC, or DHS data in your legacy data, those sets will not be migrated. You must re-import them using the 5.6 versions or later of those libraries. Only legacy custom KFF data will be migrated. Legacy KFF data is migrated to KFF Groups and Hash Sets on the new KFF Server. Because KFF Templates are no longer used, they will be migrated as KFF Groups, and the groups that were under the template will be added as sub-groups. You migrate data using the KFF Migration Tool. To use the KFF Migration Tool, you identify the following: The Storage Directory folder where the legacy KFF data is located. This folder was configured using the KFF Server Configuration utility when you installed the legacy KFF Server. If needed, you can use this utility to view the KFF Storage Directory. The default location of the KFF_Config.exe file is Program Files\AccessData\KFF. The URL of the new KFF Server (the computer running the AccessData Elasticsearch Windows Service). This is populated automatically if the new KFF Server has been installed. To install the KFF Migration Tool 1. On the computer where you have installed the KFF Server, access the KFF Installation disc, and run the autorun.exe. 2. Click the 64 bit or 32 bit Install KFF Migration Utility. 3. Complete the installation wizard. To migrate legacy KFF data 1. On the legacy KFF Server, you must stop the KFF Service. You can stop the service manually or use the legacy KFF Config.exe utility. Getting Started with KFF (Known File Filter) Migrating Legacy KFF Data | 311 2. On the new KFF Server, launch the KFF Migration Tool. 3.
Enter the directory of the legacy KFF data. 4. The URL of Elasticsearch should be listed. 5. Click Start. 6. When completed, review the summary data. Getting Started with KFF (Known File Filter) Migrating Legacy KFF Data | 312 Importing KFF Data About Importing KFF Data You can import hashes and KFF Groups that have been previously configured. You can import KFF data in one of the following formats: KFF Data sources that you can import Source Description Pre-configured KFF libraries You can import KFF data from the following pre-configured libraries:  NIST NSRL  NDIC HashKeeper  DHS To import KFF libraries, it is recommended that you use the KFF Import Utility. See Using the KFF Import Utility on page 314. See Importing Pre-defined KFF Data Libraries on page 316. See KFF Library Reference Information on page 325. Custom Hash Sets and KFF Groups You can import custom hashes from CSV files. See About the CSV Format on page 320. For FTK-based products, you can also import custom hashes from the following file types:  Delimited files (CSV or TSV)  Hash Database files (HDB)  Hashkeeper files (HKE)  FTK Exported KFF files (KFF)  FTK Supported XML files (XML)  FTK Exported Hash files (HASH) To import these kinds of files, use the KFF Import feature in your application. See Using the Known File Feature chapter. KFF binary files You can import KFF data that was exported in a KFF binary format, such as an archive of a KFF Server. See About CSV and Binary Formats on page 320. When you import a KFF binary snapshot, you must be running the same version of the KFF Server as was used to create the binary export. To import KFF binary files, it is recommended that you use the KFF Import Utility. See Using the KFF Import Utility on page 314. Getting Started with KFF (Known File Filter) Importing KFF Data | 313 About KFF Data Import Tools When you import KFF data, you can use one of two tools: KFF Data Import Tools The application’s Import feature The KFF management feature in the application lets you import both .CSV and KFF Binary formats. Use the application to import .CSV files. See Using the Known File Feature chapter. Even though you can import KFF binary files using the application, it is recommended that you use the KFF Import Utility. KFF Import Utility It is recommended that you use the KFF Import Utility to import KFF binary files. See Using the KFF Import Utility on page 314. About Default Status Values When you import KFF data, you configure a default status value of Alert or Ignore. When adding Hash Sets to KFF Groups, you can configure the KFF Groups to use the default status values of the Hash Set or you can configure the KFF Group with a status that will override the default Hash Set values. See Components of KFF Data on page 304. About Duplicate Hashes If multiple Hash Set files containing the same Hash identifier are imported into a single KFF Group, the group keeps the last Hash Set’s metadata information, overwriting the previous Hash Sets’ metadata. This only happens within an individual group and not across multiple groups. Using the KFF Import Utility About the KFF Import Utility Due to the large size of some KFF data, a stand-alone KFF Import Utility is available for importing the data. This KFF Import Utility can import large amounts of data faster than using the import feature in the application.
It is recommended that you install and use the KFF Import Utility to import the following: the NSRL, NDIC, and DHS libraries; an archive of a KFF Server that was exported in the binary format. After importing NSRL, NDIC, or DHS libraries, these indexes are displayed in the Currently Installed Sets list. See Components of KFF Data on page 304. You can also use the KFF Import Utility to remove the NSRL, NDIC, or DHS indexes that you have imported. An archive of a KFF Server, which is the exported KFF Index, is not shown in the list. Getting Started with KFF (Known File Filter) Importing KFF Data | 314 Installing the KFF Import Utility You should use the KFF Import Utility to import some kinds of KFF data. To install the KFF Import Utility 1. On the computer where you have installed the KFF Server, access the KFF Installation disc, and run the autorun.exe. 2. Click the 64 bit or 32 bit Install KFF Import Utility. 3. Complete the installation wizard. Importing a KFF Server Archive Using the KFF Import Utility You can import an archive of a KFF Server that you have exported using the binary format. If you are importing a pre-defined KFF Library, see Importing Pre-defined KFF Data Libraries (page 316). To import using the KFF Import Utility 1. On the KFF Server, open the KFF Import Utility. 2. To test the connection to the KFF Server’s Elasticsearch service at the displayed URL, click Connect. If it connects correctly, no error is shown. If it is not able to connect, you will get the following error: Failed after retrying 10 times: ‘HEAD accessdata_threat_indicies’. 3. To import, click Import. 4. Click Browse. 5. Browse to the folder that contains the KFF binary files. Specifically, select the folder that contains the Export.xml file. 6. Click Start. 7. Close the dialog. Removing Pre-defined KFF Libraries Using the KFF Import Utility You can remove a pre-defined KFF Library that you have previously imported. You cannot see or remove existing custom KFF data (the KFF Index). To remove pre-defined KFF Libraries 1. On the KFF Server, open the KFF Import Utility. 2. Select the library that you want to remove. 3. Click Remove. Getting Started with KFF (Known File Filter) Importing KFF Data | 315 Importing Pre-defined KFF Data Libraries About Importing Pre-defined KFF Data Libraries After you install the KFF Server, you can import pre-defined NIST NSRL, NDIC HashKeeper, and DHS data libraries. See About Pre-defined KFF Hash Libraries on page 306. In versions 5.5 and earlier, you installed these using an executable file. In versions 5.6 and later, you must import them. It is recommended that you use the KFF Import Utility. After importing pre-defined KFF Libraries, you can remove them from the KFF Server. See Removing Pre-defined KFF Libraries Using the KFF Import Utility on page 315. See the following sections: About Importing the NIST NSRL Library (page 316) Importing the NDIC Hashkeeper Library (page 318) Importing the DHS Library (page 318) About Importing the NIST NSRL Library You can import the NSRL library into your KFF Server. During the import, two KFF Groups are created: NSRL_Alert and NSRL_Ignore. In FTK-based products, these two groups are automatically added to the Default KFF Group. The NSRL libraries are updated from time to time. To import and maintain the NSRL data, you do the following: Process for Importing and Maintaining the NIST NSRL Library 1. Import the complete NSRL library. You must first install the most current complete NSRL library. You can later add updates to it.
To access and import the complete NSRL library, see Importing the Complete NSRL Library (page 317). 2. Import updates to the library. When updates are made available, import the updates to bring the data up to date. See Installing KFF Updates on page 324. Important: In order to use the NSRL updates, you must first import the complete library. When you install an NSRL update, you must keep the previous NSRL versions installed in order to maintain the complete set of NSRL data. Available NSRL library files (new format) NSRL Library Release Complete library version 2.45 (source .ZIP file) Released Information Nov 2014 For use only with applications version 5.6 and later. Contains the full NSRL library up through update 2.45. See Importing the Complete NSRL Library on page 317. Getting Started with KFF (Known File Filter) Importing KFF Data | 316 Available legacy NSRL library files Legacy NSRL Library Release version 2.44 (.EXE file) Released Information Nov 2013 For use with the legacy KFF Server that was used with applications versions 5.5 and earlier. Contains the full NSRL library up through update 2.44. Install this library first. Note: NSRL updates for the legacy KFF format will end in the 2nd quarter of 2015. From that time, NSRL updates will only be provided in the new format. Importing the Complete NSRL Library To add the NSRL library to your KFF Library, you import the data. You start by importing the full NSRL library. You can then import any updates as they are available. See About Importing the NIST NSRL Library on page 316. See Installing KFF Updates on page 324. Important: The complete NSRL library data is contained in a large (3.4 GB) .ZIP file. When expanded, the data is about 18 GB. Make sure that your file system can support files of this size. Important: Due to the large amount of NSRL data, it will take 3-4 hours to import the NSRL data using the KFF Import Utility. If you import from within an application, it will take even longer. To install the NSRL complete library 1. Extract the NSRLSOURCE_2.45.ZIP file from the KFF Installation disc. 2. On the KFF Server, launch the KFF Import Utility. See Installing the KFF Import Utility on page 315. 3. Click Import. 4. Click Browse. 5. Browse to and select the NSRLSource_2.45 folder that contains the NSRLFile.txt file. (Make sure you are selecting the folder and not drilling into the folder to select an individual file. The import process will drill into the folder to get the proper files for you.) 6. Click Select Folder. 7. Click Start. 8. When the import is complete, click OK. 9. Close the Import Utility dialog and the NSRL library will be listed in the Currently Installed Sets. Getting Started with KFF (Known File Filter) Importing KFF Data | 317 Importing the NDIC Hashkeeper Library You can import the Hashkeeper 9.08 library. For application versions 5.6 and later, these files are stored in the KFF binary format. To import the Hashkeeper library 1. Obtain the NDIC source files by downloading the ZIP file from the web: 1a. Go to http://www.accessdata.com/product-download. 1b. Click Known File Filter (KFF). 1c. For KFF Hash Sets, click Download Page. 1d. Click the KFF NDIC library that you want to download. 2. Extract the ZIP file. 3. On the KFF Server, launch the KFF Import Utility. See Installing the KFF Import Utility on page 315. 4. Click Import. 5. Click Browse. 6. Browse to and select the NDIC source folder that contains the Export.xml file.
(Make sure you are selecting the folder and not drilling into the folder to select an individual file. The import process will drill into the folder to get the proper files for you.) 7. Click Select Folder. 8. Click Start. 9. When the import is complete, click OK. 10. Close the Import Utility dialog and the NDIC library will be listed in the Currently Installed Sets. Importing the DHS Library You can import the DHS 1.08 library. For application versions 5.6 and later, these files are stored in the KFF binary format. To import the DHS library 1. Obtain the DHS source files by downloading the ZIP file from the web: 1a. Go to http://www.accessdata.com/product-download. 1b. Click Known File Filter (KFF). 1c. For KFF Hash Sets, click Download Page. 1d. Click the KFF DHS library that you want to download. 2. Extract the ZIP file. 3. On the KFF Server, launch the KFF Import Utility. See Installing the KFF Import Utility on page 315. 4. Click Import. 5. Click Browse. 6. Browse to and select the DHS source folder that contains the Export.xml file. (Make sure you are selecting the folder and not drilling into the folder to select an individual file. The import process will drill into the folder to get the proper files for you.) Getting Started with KFF (Known File Filter) Importing KFF Data | 318 7. Click Select Folder. 8. Click Start. 9. When the import is complete, click OK. 10. Close the Import Utility dialog and the DHS library will be listed in the Currently Installed Sets. Installing the Geolocation (GeoIP) Data Geolocation (GeoIP) data is used for the Geolocation Visualization feature of several AccessData products. See About the KFF Server and Geolocation on page 308. You can also check for and install GeoIP data updates. If you are upgrading to 5.6 or later from an application 5.5 or earlier, you must install the new KFF Server and the updated Geolocation data. The Geolocation data that was used with versions 5.5 and earlier is version 1.0.1 or earlier. The Geolocation data that is used with versions 5.6 and later is version 2014.10 or later. To install the Geolocation IP Data 1. On the computer where you have installed the KFF Server, access the KFF Installation disc, and run the autorun.exe. 2. Click the 64 bit or 32 bit Install Geolocation Data. 3. Complete the installation wizard. Getting Started with KFF (Known File Filter) Importing KFF Data | 319 About CSV and Binary Formats When you export and import KFF data, you can use one of two formats: CSV KFF Binary About the CSV Format When you use the .CSV format, you use a single .CSV file. The .CSV file contains the hashes that you import or export. When you export to a CSV file, it contains the hashes as well as all of the information about any associated Hash Sets and KFF Groups. You can only use the CSV format when exporting individual Hash Sets and KFF Groups. When you import using a CSV file, it can be a simple file containing only the hashes of files, or it can contain additional information about Hash Sets and KFF Groups. However, CSV files will usually take a little longer to export and import. To view a sample of a .CSV file that contains hashes as well as Hash Sets and KFF Groups, perform a CSV export and view the file in Excel. You can also use the format of CSV files that were exported in previous versions. To import .CSV files, use the application’s KFF Import feature.
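Because the exact columns expected by the import feature can vary by product version, the most reliable approach is to perform a CSV export of an existing Hash Set and reuse its header row. Purely as an illustration of the general shape of such a file, the following Python sketch writes a minimal hash list with an assumed three-column layout (hash, file name, Alert/Ignore status); the column names and the first hash value are hypothetical, not the documented format.

    import csv

    # Hypothetical rows: (MD5 hash, file name, Alert/Ignore status).
    rows = [
        ("0123456789ABCDEF0123456789ABCDEF", "contraband_sample.jpg", "Alert"),
        ("D41D8CD98F00B204E9800998ECF8427E", "empty_file.dat", "Ignore"),
    ]

    with open("custom_hash_set.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["MD5", "FileName", "Status"])  # assumed header, not the documented one
        writer.writerows(rows)

A file along these lines could then be brought in with the application's KFF Import feature described above, with any fields it does not supply filled in by the default status, vendor, version, and package values chosen at import time.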
About the KFF Binary Format When you use the KFF binary format, you use a set of files that are in an internal KFF Server (Elasticsearch) format that is referred to as a Snapshot. The binary format is essentially a snapshot of one of the indices contained in the KFF Server. You can only have one binary format snapshot for each index. See Components of KFF Data on page 304. The benefit of the binary format is that it is able to support larger amounts of data than the CSV format. For large data sets, the binary format will export and import faster than the CSV format. For example, when you import the DHS or NDIC HashKeeper libraries, they are imported from a KFF binary format. If you export your custom Hash Sets or KFF Groups using the KFF binary format, everything in the KFF Index is included. See About Choosing to Export in CSV or KFF Binary Format on page 321. When exporting in a Binary format, you specify an existing parent folder and then the name of a new sub-folder for the binary data. The new sub-folder must not previously exist and will be created by the export process. After export, the binary export folder contains the following: Indices sub-folder - This folder contains the exported KFF data. Export.xml - This is the only file that is not an Elasticsearch file; it is created by the export feature and contains the KFF Group and Hash Set definitions for the index. Getting Started with KFF (Known File Filter) About CSV and Binary Formats | 320 Index - an index file generated by Elasticsearch. metadata-snapshot - a file with the date and time it was created. snapshot-snapshot - a file with the date and time it was created. Note: The binary format is dependent on the version of the KFF Server. When exporting and importing the binary format, the systems must be using the same version of the KFF Server. When new versions of the KFF Server are released in the future, an upgrade process will also be provided. About Choosing to Export in CSV or KFF Binary Format When you export your own KFF data, you have the option of using either the CSV or the binary format. The results are different based on the format that you use: CSV format Exporting in CSV format When you export KFF data using the CSV format, you can export specific pieces of KFF data, such as one or more Hash Sets or one or more KFF Groups. The exported data is contained in one .CSV file. The benefits of the CSV format are that CSV files can be easily viewed and can be manually edited. They are also less dependent on the version of the KFF Server. Importing from CSV format When you import a CSV file, the data in the file is added to your existing KFF data that is in the KFF Index. See Components of KFF Data on page 304. For example, suppose you started by manually creating four Hash Sets and one KFF Group. Those would be the only contents of your KFF Index. Suppose you import a .CSV file that contains five hash sets and two KFF Groups. They will be added together for a total of nine Hash Sets and three KFF Groups. To import .CSV files, use the KFF Import feature in your application. See Using the Known File Feature chapter. KFF binary format Exporting in KFF binary format If you export your KFF data using the KFF binary format, all of the data that you have in the KFF Index will be exported together. You cannot use this format to export individual Hash Sets or KFF Groups. See Components of KFF Data on page 304.
You will only want to use this format if you intend to export all of the data in the KFF Index and import it as a whole. This can be useful in making an archive of your KFF data or copying KFF data from one KFF Server to another. Because NSRL, NDIC, and DHS data is contained in their own indices, when you do an export using this format, those sets are not included. Only the data in the KFF Index is exported. Getting Started with KFF (Known File Filter) About CSV and Binary Formats | 321 Importing KFF binary format IMPORTANT: When you import a KFF binary format, it will import the complete index and will replace any data that is currently in that index on the KFF Server. For example, if you import the DHS library, and then later you import the DHS library again, the DHS index will be replaced with the new import. If you have a KFF binary format snapshot of custom KFF data (which would have come from a binary format export), it will replace all KFF data that already exists in your KFF Index. For example, suppose you manually created four Hash Sets and one KFF Group. Suppose you then import a binary format that has five hash sets and two KFF Groups. The binary format will be imported as a complete index and will replace the existing data. The result will be only the imported five Hash Sets and two KFF Groups. When importing KFF binary files, it is recommended that you use the KFF Import Utility. See Installing the KFF Import Utility on page 315. Getting Started with KFF (Known File Filter) About CSV and Binary Formats | 322 Uninstalling KFF You can uninstall KFF application components independently of the KFF Data. Main version Description Applications 5.6 and later For applications version 5.6 and later, you uninstall the following components:  AccessData Elasticsearch Windows Service (KFF Server) v1.2.7 and later  Note: Elasticsearch is used by multiple features in various applications; use caution when uninstalling this service or the related data. AccessData KFF Import Utility (v5.6 and later)  AccessData KFF Migration Tool (v1.0 and later)  AccessData Geo Location Data (v2014.10 and later) Note: This component is not used by the KFF feature, but with the KFF Server for the geolocation visualization feature. The location of the KFF data was configured when the AccessData Elasticsearch Windows Service was installed. By default, it is located at C:\Program Files\AccessData\Elasticsearch\Data. Applications 5.5 and earlier For applications version 5.5 and earlier, you can uninstall the following components:  KFF Server (v1.2.7 and earlier)  Note: The KFF Server is also used by the geolocation visualization feature. AccessData Geo Location Data (1.0.1 and earlier) This component is not used by the KFF feature, but with the KFF Server for the geolocation visualization feature. The location of the KFF data was configured when the KFF Server was installed. You can view the location of the data by running the KFF.Config.exe on the KFF Server. If you are upgrading from 5.5 to 5.6, you can migrate your KFF data before uninstalling the KFF Server. Getting Started with KFF (Known File Filter) Uninstalling KFF | 323 Installing KFF Updates From time to time, AccessData will release updates to the KFF Server and the KFF data libraries. Some of the KFF data updates may require you to update the version of the KFF Server. To check for updates, do the following: 1. Go to the AccessData Product Download website at http://www.accessdata.com/product-download. 2.
On the Product Downloads page, click Known File Filter (KFF). 3. Open the Download page. 4. Check for updates. See About KFF Server Versions on page 309. See About Importing the NIST NSRL Library on page 316. 5. If there are updates, download them. 6. Install or import the updates. Getting Started with KFF (Known File Filter) Installing KFF Updates | 324 KFF Library Reference Information About KFF Pre-Defined Hash Libraries This section includes a description of pre-defined hash collections that can be added as AccessData KFF data. The following pre-defined libraries are currently available for KFF and come from one of three federal government agencies: NIST NSRL (The default library installed with KFF) NDIC HashKeeper (An optional library that can be downloaded from the AccessData Downloads page) DHS (An optional library that can be downloaded from the AccessData Downloads page) Note: Because KFF is now multi-sourced, it is no longer maintained in HashKeeper format. Therefore, you cannot modify KFF data in the HashKeeper program. However, the HashKeeper format continues to be compatible with the AccessData KFF data. Use the following information to help identify the origin of any hash set within the KFF: The NSRL hash sets do not begin with “ZZN” or “ZN”. In addition, in the AD Lab KFF, all the NSRL hash set names are appended (post-fixed) with a multi-digit numeric identifier. For example: “Password Manager & Form Filler 9722.” All HashKeeper Alert sets begin with “ZZ”, and all HashKeeper Ignore sets begin with “Z”. (There are a few exceptions. See below.) These prefixes are often followed by numeric characters (“ZZN” or “ZN” where N is any single digit, or group of digits, 0-9), and then the rest of the hash set name. Two examples of HashKeeper Alert sets are: “ZZ00001 Suspected child porn” “ZZ14W” An example of a HashKeeper Ignore set is: “Z00048 Corel Draw 6”. The DHS collection is broken down as follows: In 1.81.4 and later there are two sets named “DHS-ICE Child Exploitation JAN-1-08 CSV” and “DHS-ICE Child Exploitation JAN-1-08 HASH”. In AD Lab there is just one such set, and it is named “DHS-ICE Child Exploitation JAN-1-08”. Once an investigator has identified the vendor from which a hash set has come, he/she may need to consider the vendor’s philosophy on collecting and categorizing hash sets, and the methods used by the vendor to gather hash values into sets, in order to determine the relevance of Alert (and Ignore) hits to his/her project. The following descriptions may be useful in assessing hits. Getting Started with KFF (Known File Filter) KFF Library Reference Information | 325 NIST NSRL The NIST NSRL collection is described at: http://www.nsrl.nist.gov/index.html. This collection is much larger than HashKeeper in terms of the number of sets and the total number of hashes. It is composed entirely of hash sets generated from application software. So, all of its hash sets are given Ignore status by AccessData staff except for those whose names make them sound as though they could be used for illicit purposes. The NSRL collection divides itself into many sub-collections of hash sets with similar names. In addition, many of these hash sets are “empty”, that is, they are not accompanied by any hash values. The size of the NSRL collection, combined with the similarity in set naming and the problem of empty sets, allows AccessData to modify (or selectively alter) NSRL’s own set names to remove ambiguity and redundancy.
Find contact info at http://www.nsrl.nist.gov/Contacts.htm. NDIC HashKeeper NDIC’s HashKeeper collection uses the Alert/Ignore designation. The Alert sets are hash values contributed by law enforcement agents working in various jurisdictions within the US - and a few that apparently come from Luxemburg. All of the Alert sets were contributed because they were believed by the contributor to be connected to child pornography. The Ignore sets within HashKeeper are computed from files belonging to application software. During the creation of KFF, AccessData staff retains the Alert and Ignore designations given by the NDIC, with the following exceptions. AccessData labels the following sets Alert even though HashKeeper had assigned them as Ignore: “Z00045 PGP files”, “Z00046 Steganos”, “Z00065 Cyber Lock”, “Z00136 PGP Shareware”, “Z00186 Misc Steganography Programs”, “Z00188 Wiping Programs”. The names of these sets may suggest the intent to conceal data on the part of the suspect, and AccessData marks them Alert with the assumption that investigators would want to be “alerted” to the presence of data obfuscation or elimination software that had been installed by the suspect. The following table lists actual HashKeeper Alert Set origins: A Sample of HashKeeper KFF Contributions Hash Contributor Location ZZ00001 Suspected child porn Det. Mike McNown & Randy Stone Wichita PD ZZ00002 Identified Child Porn Det. Banks Union County (NJ) Prosecutor's Office ZZ00003 Suspected child porn Illinois State Police ZZ00004 Identified Child Porn SA Brad Kropp, AFOSI, Det 307 ZZ00000, suspected child porn NDIC Getting Started with KFF (Known File Filter) Contact Information Case/Source (908) 527-4508 case 2000S-0102 (609) 754-3354 Case # 00307D7S934831 KFF Library Reference Information | 326 A Sample of HashKeeper KFF Contributions (Continued) Hash Contributor ZZ00005 Suspected Child Porn Rene Moes, Luxembourg Police ZZ00006 Suspected Child Porn Illinois State Police Location Contact Information Case/Source rene.moes@police.eta t.lu ZZ00007b Suspected KP (US Federal) ZZ00007a Suspected KP Movies ZZ00007c Suspected KP (Alabama 13A-12192) ZZ00008 Suspected Child Pornography or Erotica Sergeant Purcell Seminole County Sheriff's Office (Orlando, FL, USA) (407) 665-6948, dpurcell@seminoleshe riff.org suspected child pornogrpahy from 20010000850 ZZ00009 Known Child Pornography Sergeant Purcell Seminole County Sheriff's Office (Orlando, FL, USA) (407) 665-6948, dpurcell@seminoleshe riff.org 200100004750 ZZ10 Known Child Porn Detective Richard Voce CFCE Tacoma Police Department (253)594-7906, rvoce@ci.tacoma.wa.u s ZZ00011 Identified CP images Detective Michael Forsyth Baltimore County Police Department (410)887-1866, mick410@hotmail.com ZZ00012 Suspected CP images Sergeant Purcell Seminole County Sheriff's Office (Orlando, FL, USA) (407) 665-6948, dpurcell@seminoleshe riff.org ZZ0013 Identified CP images Det. J. 
Hohl Yuma Police Department 928-373-4694 ZZ14W Sgt Stephen May Tamara.Chandler@oa g.state.tx.us, (512)936-2898 ZZ14U Sgt Chris Walling Tamara.Chandler@oa g.state.tx.us, (512)936-2898 ZZ14X Sgt Jeff Eckert YPD02-70707 TXOAG 41929134 TXOAG 41919887 TXOAG Internal Tamara.Chandler@oa g.state.tx.us, (512)936-2898 Getting Started with KFF (Known File Filter) KFF Library Reference Information | 327 A Sample of HashKeeper KFF Contributions (Continued) Hash Contributor ZZ14I Sgt Stephen May Location Contact Information Tamara.Chandler@oa g.state.tx.us, (512)936-2898 ZZ14B Robert Britt, SA, FBI ZZ14S Tamara.Chandler@oa g.state.tx.us, (512)936-2898 Sgt Stephen May Tamara.Chandler@oa g.state.tx.us, (512)936-2898 ZZ14Q Sgt Cody Smirl Tamara.Chandler@oa g.state.tx.us, (512)936-2898 ZZ14V Sgt Karen McKay Tamara.Chandler@oa g.state.tx.us, (512)936-2898 ZZ00015 Known CP Images Det. J. Hohl ZZ00016 Marion County Sheriff's Department Yuma Police Department Case/Source TXOAG 041908476 TXOAG 031870678 TXOAG 041962689 TXOAG 041952839 TXOAG 41924143 928-373-4694 YPD04-38144 (317) 231-8506 MP04-0216808 The basic rule is to always consider the source when using KFF in your investigations. You should consider the origin of the hash set to which the hit belongs. In addition, you should consider the underlying nature of hash values in order to evaluate a hit’s authenticity. Higher Level KFF Structure and Usage Since hash set groups have the properties just described (and because custom hash sets and groups can be defined by the investigator) the KFF mechanism can be leveraged in creative ways. For example: You could define a group of hash sets created from encryption software and another group of hash sets created from child pornography files. Then, you would apply only those groups while processing. You could also use the Ignore status. You are about to process a hard drive image, but your search warrant does not allow inspection of certain files within the image that have been previously identified. You could do the following and still observe the warrant: 6a. Open the image in Imager, navigate to each of the prohibited files, and cause an MD5 hash value to be computed for each. 6b. Import these hash values into custom hash sets (one or more), add those sets to a custom group, and give the group Ignore status. 6c. Process the image with the MD5 and KFF options, and with AD_Alert, AD_Ignore, and the new, custom group selected. Getting Started with KFF (Known File Filter) KFF Library Reference Information | 328 6d. During post-processing analysis, filter file lists to eliminate rows representing files with Ignore status. Hash Set Categories The highest level of the KFF’s logical structure is the categorizing of hash sets by owner and scope. The categories are AccessData, Project Specific, and Shared. Hash Set Categories Category Description AccessData The sets shipped with as the Library. Custom groups can be created from these sets, but the sets and their status values are read only. Project Specific Sets and groups created by the investigator to be applied only within an individual project. Shared Sets and groups created by the investigator for use within multiple projects all stored in the same database, and within the same application schema. Important: Coordination among other investigators is essential when altering Shared groups in a lab deployment. Each investigator must consider how other investigators will be affected when Shared groups are modified. 
Getting Started with KFF (Known File Filter) KFF Library Reference Information | 329 What has Changed in Version 5.6 With the 5.6 release of Resolution1, Summation, and FTK-based products, the KFF feature has been updated. If you used KFF with applications version 5.5 or earlier, you will want to be aware of the following changes in the KFF functionality. Changes from version 5.5 to 5.6 Item Description KFF Server The KFF Server now runs a different service.  In 5.5 and earlier, the KFF Server ran as the KFF Server service.  In 5.6 and later, the KFF Server uses the AccessData Elasticsearch Windows Service. For applications version 5.6 and later, all KFF data must be created in or imported into the new KFF Server. KFF Migration Tool This is a new tool that lets you migrate custom KFF data from 5.5 and earlier to the new KFF Server. NIST NSRL, NDIC HashKeeper, or DHS library data from 5.5 will not be migrated. You must re-import it. See Migrating Legacy KFF Data on page 311. KFF Import Utility This is a new utility that lets you import large amounts of KFF data quicker than using the import feature in the application. See Using the KFF Import Utility on page 314. KFF Libraries, Templates, and Groups In 5.5, all Hash Sets were configured within KFF Libraries. KFF Libraries could then contain KFF Groups and KFF Templates. KFF Libraries and Templates have been eliminated. You now simply create or import KFF Groups and add Hash Sets to the groups. You can now nest KFF Groups. NIST NSRL, NDIC HashKeeper, or DHS libraries In 5.5 and earlier, to use these libraries, you ran an installation wizard for each library. You now import these libraries using the KFF Import Utility. See About Importing Pre-defined KFF Data Libraries on page 316. Import Log FTK-based products no longer include the Import Log. Resolution1 and Summation products did not have it previously. Export When you export KFF data you can now choose two formats: a CSV format, which replaced the XML format, and a new binary format. See About CSV and Binary Formats on page 320. Getting Started with KFF (Known File Filter) What has Changed in Version 5.6 | 330 Chapter 29 Using KFF (Known File Filter) This chapter explains how to configure and use KFF and has the following sections: See About KFF and De-NIST Terminology on page 331. See Process for Using KFF on page 332. See Configuring KFF Permissions on page 332. See Adding Hashes to the KFF Server on page 333. See Using KFF Groups to Organize Hash Sets on page 339. See Exporting KFF Data on page 350. See Enabling a Project to Use KFF on page 343. See Reviewing KFF Results on page 345. See Re-Processing KFF on page 349. About KFF and De-NIST Terminology You can configure the interface to display either the term “KFF” (Known File Filter) or “De-NIST”. For example, this can change references of a “KFF Group” to a “De-NIST Group.” This does not affect the functionality of KFF, but only the term that is displayed. This allows users in forensic environments to see the term “KFF” while users in legal environments can see the term “De-NIST.” By default, the KFF term is used in the interface. This setting only affects text in the interface. The same new icon is used with either setting. In this manual, the KFF term is used. To change the KFF and De-NIST terminology 1. In the web.config file, in the application settings section, add or modify the entry that controls the displayed terminology. 2. To change the setting to use De-NIST terminology, change the value from “KFF” to “De-NIST”.
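web.config is a standard .NET XML configuration file, so the terminology entry is normally an add key/value pair inside the appSettings element. The extracted text above does not preserve the actual entry, so the key name in the following sketch is an assumption for illustration only; use the entry name shown in your version of the guide or already present in your web.config.

    <appSettings>
      <!-- Key name is illustrative only; the value controls the term shown in the interface. -->
      <add key="KFFTerminology" value="KFF" />
      <!-- Change value="KFF" to value="De-NIST" to display De-NIST terminology. -->
    </appSettings>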
Using KFF (Known File Filter) About KFF and De-NIST Terminology | 331 Process for Using KFF To use the KFF feature, you perform the following steps: Process for using KFF Step 1. Install and configure the KFF Server. See Installing the KFF Server on page 309. Step 2. Configure KFF permissions. Configuring KFF Permissions (page 332) Step 3. Add and manage KFF hashes on the KFF Server. See Adding Hashes to the KFF Server on page 333. Step 4. Add and manage KFF Groups to organize KFF Hash Sets. Using KFF Groups to Organize Hash Sets (page 339) Step 5. Configure a project to use KFF. See Enabling a Project to Use KFF on page 343. Step 6. Review KFF results in Project Review. See Reviewing KFF Results on page 345. Step 7. (Optional) Re-process the KFF data using different hashes. See Re-Processing KFF on page 349. Step 8. (Optional) Archive or export KFF data to share with other KFF Servers. See Exporting KFF Data on page 350. Configuring KFF Permissions In order to create and manage KFF libraries, sets, templates, and groups, you must have one of the following permissions: Administrator Manage KFF You assign the Manage KFF permission to an Admin Role and then associate that role with users. See Configuring and Managing System Users, User Groups, and Roles on page 45. A user with project management permissions does not require the Manage KFF permission in order to enable KFF for a new project. Using KFF (Known File Filter) Process for Using KFF | 332 Adding Hashes to the KFF Server You must add the hashes of the files that you want to compare against your evidence data. When adding hashes to the KFF Serer, you add them in KFF Hash Sets. See Components of KFF Data on page 304. You can use the following methods to add hashes to the KFF Library: Migrate legacy KFF Server data You can migrate legacy KFF data that is in a KFF Server in applications versions 5.5 and earlier. See Migrating Legacy KFF Data on page 311. Import hashes You can import previously configured KFF hashes from .CSV files. See Importing KFF Data on page 334. Manually create and manage Hash Sets You can manually add hashes to a Hash Set. See Manually Creating and Managing KFF Hash Sets on page 336. Create hashes from evidence files in Review You can add hashes from the files in your evidence using Review. See Adding Hashes to Hash Sets Using Project Review on page 337. About the Manage KFF Hash Sets Page To configure KFF data, you use the KFF Hash Sets and KFF Groups pages. To open the KFF Hash Sets page 1. Log in as an Administrator or user with Manage KFF permissions. 2. Click Management > Hash Sets If the feature does not function properly, check the following: The KFF Server is installed. See Installing the KFF Server on page 309. The application has been configured for the KFF Server. See Configuring the Location of the KFF Server on page 310. The KFF Service is running. In the Windows Services manager, make sure that the AccessData Elasticsearch service is started. Elements of the KFF Hash Sets page Element Description Hash Sets Displays all of the Hash Sets that have been imported or created in the KFF Server. Lets you create a Hash Set. See Manually Creating and Managing KFF Hash Sets on page 336. Using KFF (Known File Filter) Adding Hashes to the KFF Server | 333 Elements of the KFF Hash Sets page Element Description Lets you edit the active Hash Set. See Manually Creating and Managing KFF Hash Sets on page 336. Lets you delete the active Hash Set. Warning: You are not prompted to confirm the deletion. 
See Manually Creating and Managing KFF Hash Sets on page 336.
Delete: Lets you delete one or more checked Hash Sets.
View Hashes: Lets you view and manage the hashes in the Hash Set. See Searching For, Viewing, and Managing Hashes in a Hash Set on page 337.
Import File: Lets you import KFF data. See Importing KFF Data on page 334.
Export: Lets you export KFF data. See Exporting KFF Data on page 350.
Refreshes the Hash Sets list.

Importing KFF Data

About Importing KFF Data
To understand the methods and formats for importing KFF data, first see About Importing KFF Data (page 313). This chapter explains how to import KFF data using the application’s management console.

Importing KFF Hashes
You can import KFF data from the following:
KFF export CSV files
KFF binary files
Warning: Importing KFF binary files will replace your existing KFF data.
See About CSV and Binary Formats on page 320. (An illustrative sample of a CSV layout is shown after the import procedures below.)
It is recommended that you use the external KFF Import Utility to import KFF binary files. See Using the KFF Import Utility on page 314.
When importing KFF data, you can enter default values for the following fields:
Default Status
Default Vendor
Default Version
Default Package
These default values are used if the import file does not contain the information. When importing hash lists using the CSV import, each hash within the CSV can have the same, a different, or no status. During the import process you must choose a default status of Alert or Ignore. This default status has no effect on any hash in your CSV that already contains a status; however, any hash that does not have a pre-assigned status is assigned this default status. The override status for the hash sets that you import is automatically set to No Override. This ensures that if your hash set contains both Alert and Ignore hashes, the program does not override the original status. You can, however, choose to override the individual hash status within a set by setting the whole set to Alert or Ignore. You can use these values to organize your hashes. For example, you can filter or sort data based on these values.

To import KFF hashes from files
1. Log in as an Administrator or user with Manage KFF permissions.
2. Click Management > Hash Sets.
3. Click Import File.
4. On the KFF Import File dialog, click Add File.
5. Browse to and select the file.
6. Click Select.
7. Specify a Default Status. This sets a default status only for the hashes that do not have a status specified in the file.
8. (Optional) Specify a default Vendor, Version, and Package. This sets values only for the hashes that do not have a value specified in the file.
9. (Optional) Add other files.
10. Click Import.
11. View the Import Summary to see the results of the import.
12. Click Close.

To import KFF data from a binary format
Warning: This process may replace your existing KFF data. See About the KFF Binary Format on page 320.
1. Log in as an Administrator or user with Manage KFF permissions.
2. Click Management > Hash Sets.
3. Click Import File.
4. On the KFF Import File dialog, click Binary Import.
5. Browse to the folder that contains the binary files (specifically the Export.xml file) and click Select.
6. Click Import.
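The authoritative column layout for a KFF CSV file is defined in About CSV and Binary Formats (page 320) and is not reproduced here. The lines below are only an illustrative sketch, with hypothetical column names and made-up values, of the general shape such a file takes: one row per hash, with the hash value, a filename, an optional status (Alert or Ignore), and optional Vendor, Version, and Package fields.

    HashValue,Filename,Status,Vendor,Version,Package
    0cc175b9c0f1b6a831c399e269772661,readme.txt,Ignore,ExampleVendor,1.0,ExamplePackage
    92eb5ffee6ae2fec3ad71c777531578f,dropper.exe,Alert,ExampleVendor,1.0,ExamplePackage

Rows that leave the status column empty receive the Default Status chosen during import.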
Manually Creating and Managing KFF Hash Sets

You can manually create Hash Sets and then add hashes to them. You can also edit and delete Hash Sets, and you can add, edit, or delete the hashes in Hash Sets.
Note: You cannot manually add, edit, or delete hash values that were imported from the NSRL, NDIC HashKeeper, and DHS libraries.

To manually create a Hash Set
1. Log in as an Administrator or user with Manage KFF permissions.
2. Click Management > Hash Sets.
3. On the KFF Hash Sets page, in the right pane, click Add.
4. Enter a name for the Hash Set.
5. Select the status for the Hash Set: Alert, Ignore, or No Override.
6. (Optional) Enter a package, vendor, or version. These are not required, but you can use these values for sorting and filtering results.
7. Click Save.

To manually manage Hash Sets
1. Click Management > Hash Sets.
2. Do one of the following:
To edit a Hash Set, select a set, and click Edit.
To delete a single Hash Set, select a set, and click Delete.
To delete multiple Hash Sets, select the sets, and click Delete.

To manage hashes in a hash set
1. On the KFF Hash Sets page, select a Hash Set.
2. Click View Hashes.

To add hashes to a hash set
1. On the KFF Hash Sets page, select a Hash Set.
2. Click View Hashes.
3. In the KFF Hash Finder dialog, click Add.
4. Enter the KFF hash value.
5. Enter the filename for the hash.
6. (Optional) Enter other reference information about the hash.
7. Click Save.
The new hash is displayed.

Searching For, Viewing, and Managing Hashes in a Hash Set

Due to the large number of hashes that may be in a Hash Set, a list of hashes is not displayed. (However, you can export a KFF Group that contains the Hash Set and view the hashes in the export file.) You can use the KFF Hash Finder dialog to search for hash values within a hash set. You search by entering a complete hash value. You can only search within one hash set at a time. While the KFF Hash Finder does not display a list of hashes, it does display the number of hashes in the set.

To search for hashes in a hash set
1. On the KFF Hash Sets page, select a Hash Set.
2. Click View Hashes.
3. In the KFF Hash Finder dialog, enter the complete hash value that you want to search for.
4. Click Search.
If the hash is found, it is displayed in the hash list. If the hash is not found, a message is displayed.

To edit hashes in a hash set
1. In the KFF Hash Finder dialog, search for the hash that you want to edit.
2. Click Edit.
3. Enter the hash information.
4. Click Save.
The edited hash is displayed.

To delete hashes from a hash set
1. In the KFF Hash Finder dialog, search for the hash that you want to delete.
2. Click Delete.

Adding Hashes to Hash Sets Using Project Review

You may identify files that exist in a project as files that you want to add to your KFF hashes. For example, you may find a graphics file that you want to either alert for or ignore in this or other projects. Using Project Review, you can select files and then add them to existing or new KFF Hash Sets. When you add hashes using Project Review, it starts a job that adds the hashes to the KFF Library.

To use Project Review to add hashes to Hash Sets
1. Log in as an Administrator or user with Manage KFF permissions.
2. Select a project and enter Project Review.
3. Select the files that you want to add to a hash set.
4. In the Actions drop-down, select Add to KFF.
5. Click Go.
6. In the Add Hash to Set dialog, select a status for the hash.
7. Specify a Hash Set. You can select an existing set or create a new set.
To create a new set, do the following:
7a. Select [Add New].
7b. Enter the name of the new set.
7c. Enter a name for the hash set.
7d. (Optional) Add other information.
7e. Click Save.
To use an existing set, do the following:
7a. Select the existing set.
By default, you will only see the sets that match the status that you select. To also see Hash Sets that have a No Override status, enable the Display hash sets with no override status option.
7b. Click Save.

To verify that hashes were added to the KFF Server
1. Exit Review.
2. On the Home page, select the project that you are using.
3. Click Work List. See Monitoring the Work List on page 276.
4. View the Add Hash to KFF job types.
5. Click Refresh to see the current status.
6. When the jobs are completed, at the bottom of the page, you can view the results. The results show the number of files that were added and any errors generated.
7. From the KFF Hash Sets tab on the Management page, you can view the Hash Sets. See Searching For, Viewing, and Managing Hashes in a Hash Set on page 337.

Using KFF Groups to Organize Hash Sets

About KFF Groups
KFF Groups are containers for one or more Hash Sets. When you create a group, you then add Hash Sets to the group. KFF Groups can also contain other KFF Groups. When you enable KFF for a project, you select which KFF Group to use during processing. Within a KFF Group, you can manually edit custom Hash Sets.

About KFF Groups Status Override Settings
When you create a KFF Group, you can choose to use the default status of the Hash Set (Alert or Ignore) or override it. You do this by setting one of the following Status Override settings:
Alert - All Hash Sets within the KFF Group will be set to Alert regardless of the status of the individual Hash Sets.
Ignore - All Hash Sets within the KFF Group will be set to Ignore regardless of the status of the individual Hash Sets.
No Override - All Hash Sets will maintain their default status.
For example, if you have a Hash Set with a status of Alert and you set the KFF Group to No Override, then the default status of Alert is used. If you set the KFF Group with a status of Ignore, then the Hash Set's Alert status is overridden and Ignore is used. As a result, use caution when setting the Status Override for a KFF Group.

About Nesting KFF Groups
KFF Groups can contain Hash Sets or they can contain other KFF Groups. When one KFF Group includes another KFF Group, it is called nesting. The reason that you may want to nest KFF Groups is that you can use multiple KFF Groups when processing your data. When you enable KFF for a case, you can only select one KFF Group. By nesting, you can use multiple KFF Groups.
For example, you may have one KFF Group that contains Hash Sets with an Alert status and a second KFF Group that contains Hash Sets with an Ignore status. When processing a case, you may want to use both of those KFF Groups. To accomplish this, you can create another KFF Group as a parent and then add the other two KFF Groups to it. When processing, you would select the parent KFF Group.
When nesting KFF Groups, be mindful of the Status Override of the parent KFF Group. The Status Override for the highest KFF Group in the hierarchy is used when nesting KFF Groups. In most cases, you will want to set the parent KFF Group with a status of No Override. That way, the status of each child KFF Group (or their Hash Sets) is used.
If you select an Alert or Ignore status for the parent KFF Group, then all child KFF Groups and their Hash Sets will use that status.

Creating a KFF Group

You create KFF Groups to organize your Hash Sets. When you create a KFF Group, you add one or more Hash Sets to it. You can later edit the KFF Group to add or remove Hash Sets.

To create a KFF Group
1. Log in as an Administrator or user with Manage KFF permissions.
2. Click Management > Groups.
3. Click Add.
4. Enter a Name.
5. Set the Status Override. See About KFF Groups Status Override Settings on page 339.
6. (Optional) Enter a Package, Vendor, and Version.
7. Click Save.

To add Hash Sets to a KFF Group
1. Click Management > Groups.
2. In the Groups list, select the group that you want to add Hash Sets to.
3. In the Groups and Hash Sets pane, click Add.
4. Select the Hash Sets that you want to add to the group. You can filter the list of Hash Sets to help you find the hash sets that you want.
5. After selecting the sets, click OK.

Viewing the Contents of a KFF Group

On the KFF Groups page, you can select a KFF Group and, in the Groups and Hash Sets pane, view the Hash Sets and child KFF Groups that are contained in that KFF Group.

Managing KFF Groups

You can edit KFF Groups and do the following:
Rename the group
Change the Override Status
Add or remove Hash Sets and KFF Groups
You can also do the following:
Delete the group
Export the group. See Exporting KFF Data on page 350.

To manage a KFF Group
1. Click Management > Groups.
2. In the Groups list, select a KFF Group that you want to manage.
3. Do one of the following:
Click Edit.
Click Delete.
Click Export. See Exporting KFF Data on page 350.

About the Manage KFF Groups Page

To configure KFF Groups, you use the KFF Groups page.

To open the KFF Groups page
1. Log in as an Administrator or user with Manage KFF permissions.
2. Click Management > Groups.
If the feature does not function properly, check the following:
The KFF Server is installed. See Installing the KFF Server on page 309.
The application has been configured for the KFF Server. See Configuring the Location of the KFF Server on page 310.
The KFF Service is running. In the Windows Services manager, make sure that the AccessData Elasticsearch service is started.

Elements of the KFF Groups page
KFF Groups pane:
KFF Groups: Displays all of the KFF Groups that have been imported or created in the KFF Server.
Lets you create a KFF Group. See Creating a KFF Group on page 340.
Lets you edit the active KFF Group. See Managing KFF Groups on page 340.
Lets you delete the active KFF Group. See Managing KFF Groups on page 340.
Delete: Lets you delete one or more checked KFF Groups.
Export: Lets you export KFF data. See Exporting KFF Data on page 350.
Refreshes the KFF Groups list.
Groups and Hash Sets pane:
Lets you add and remove Hash Sets from KFF Groups. See Managing KFF Groups on page 340.
Add: Displays the list of Hash Sets that you can add to a KFF Group. See Managing KFF Groups on page 340.
Remove: Lets you remove Hash Sets from a KFF Group. See Managing KFF Groups on page 340.
View Hashes: Lets you view and manage the hashes in the Hash Set.
See Searching For, Viewing, and Managing Hashes in a Hash Set on page 337. Using KFF Groups to Organize Hash Sets | 342 Enabling a Project to Use KFF When you create a project, you can enable KFF and configure the KFF settings for the project. About Enabling and Configuring KFF To use KFF in a project you do the following: Process for enabling and configuring KFF 1. Create a new Project If you want to use KFF you must enable it when you create the project. You cannot enable KFF for a project after it has been created. 2. Enable KFF Enable the KFF processing option. See Enabling and Configuring KFF on page 343. 2. Configure how to process ignorable files You can choose how to process ignorable files:  Skip Ignorable Files - This option will not process any files determined to be Ignorable. Any files that are ignorable will not be included or visible in the project. This is the default option.  Process and Flag Ignorable Files - This option will process ignorable files, but flag them as Ignorable. Any files that are Ignorable will be included and visible in the project, but can be filtered. See Using Quick Filters on page 346. 4. Select a KFF Group When enabling KFF for a project, you select one KFF Group that you want to use. You do not create KFF Group at that time. You can only select an existing group. Because of this, you must have at least one KFF Group created before creating a project. See Using KFF Groups to Organize Hash Sets on page 339. However, after processing, you can re-process the data using a different KFF template. This lets you create and use different templates after you initially process the project. See Re-Processing KFF on page 349. Enabling and Configuring KFF To enable and configure KFF for a project 1. Log in as an Administrator or user with Create/Edit Projects permissions. 2. Create a new project. 3. In Processing Options, select Enable KFF. A Options tab option displays. 4. In Processing Options, select how to handle ignorable files. 5. Click Options. The KFF Options window displays. Using KFF (Known File Filter) Enabling a Project to Use KFF | 343 6. In the drop-down menu, select the KFF Group that you want to use. See Using KFF Groups to Organize Hash Sets on page 339. 7. In the Hash Sets pane, verify that this template has the hash sets that you want. Otherwise select a different template. 8. Click Create Project and Import Evidence or click Create Project and add evidence later. Using KFF (Known File Filter) Enabling a Project to Use KFF | 344 Reviewing KFF Results KFF results are displayed in Project Review. You can use the following tools to see KFF results: Project Details page Project Review KFF Information Quick Columns KFF Quick Filters KFF facets KFF Details You can also create and modify KFF libraries and hash sets using files in Review. See Adding Hashes to Hash Sets Using Project Review on page 337. Viewing KFF Data Shown on the Project Details Page To View KFF Data on the Project Details page 1. Click the Home tab. 2. Click the 3. In the right column, you can view the number of KFF known files. Project Details tab. About KFF Data Shown in the Review Item List You can identify and view files that are either Known or Unknown based on KFF results. 
Depending on the KFF configuration options, there are two or three possible KFF statuses in Project Review: Alert (2) - Files that matched hashes in the template with an Alert status Ignore (1) - Files that matched hashes in the template with an Ignore status (not shown in the Item List by default) Unknown (0) - Files that did not match hashes in the template If you configured the project to skip ignorable files, files configured to be ignored (Ignore status) are not included in the data and are not viewable in the Project Review. See Enabling and Configuring KFF on page 343. Using the KFF Information Quick Columns You can use the KFF Information Quick Columns to view and and sort and filter on KFF values. For example, you can sort on the KFF Status column to quickly see all the files with the Alert status. See Using Document Viewing Panels on page 76. To see the KFF columns, activate the KFF Information Quick Columns. Using KFF (Known File Filter) Reviewing KFF Results | 345 To activate the KFF Information Quick Columns 1. From the Item List in the Review window, click Options. 2. Click Quick Columns > KFF > KFF Information. The KFF Columns display. Item List with KFF Tabs displayed KFF Columns Column Description KFF Status Displays the status of the file as it pertains to KFF. The three options are Unknown (0), Ignore (1), and Alert (2).  If you configured the project to skip Ignorable files, these files are not included in the data.  If you configured the project to flag Ignorable files, and the Hide Ignorables Quick Filter is set, these files are in the data, but are not displayed. See Using Quick Filters on page 346. KFF Set Displays the KFF Hash Set to which the file belongs. KFF Group Name Displays the name created for the KFF Group in the project. KFF Vendor Displays the KFF vendor. See Filtering by Column in the Item List Panel on page 139. Using Quick Filters You can use Quick Filters to quickly show or hide KFF Ignorable files. You can toggle the quick filter to do the following: Hide Ignorables - enabled by default Show Ignorables The Hide Ignorables Quick Filter is set by default. As a result, even if you selected to process and flag Ignorable files for the project, they are not included in the Item List by default. To show ignorable files in the Item list, change the Quick Filter to Show Ignorables. Using KFF (Known File Filter) Reviewing KFF Results | 346 Note: If you configured the project to skip ignorable files, files configured to be ignored (Ignore status) will not be shown, even if you select to Show Ignorables. To change the KFF Quick Filters 1. From the Item List in the Review window, click Options. 2. Click Quick Filters > Show Ignorables. Using the KFF Facets You can use the KFF facets to filter data based on KFF values. For example, you can apply a facet to only display items with an Alert status or with a certain KFF set. See About Filtering Data with Facets on page 124. Note: If you configured the project to skip Ignorable files, these files are not included in the data and the Ignore facet is not available. If you configured the project to flag Ignorable files, and the Hide Ignorables Quick Filter is set, the Ignore facet is available, but the files will not be displayed. See Using Quick Filters on page 346. You can use the following KFF facets: KFF Vendors KFFGroups KFF Statuses KFF Sets Within a facet, only the filters that are available in the project are available. 
For example, if no files with the Alert status are in the project, the Alert filter will not be available in the KFF Statuses facet.

To apply KFF facets
1. From the Item List in the Review window, open the facets pane.
2. Expand KFF.
3. Select the facets that you want to apply.

Viewing Detailed KFF Data

You can view detailed KFF results for an individual file.

To view the KFF Details
1. For a project on which you have run KFF, open Project Review.
2. Under Layouts, select the CIRT Layout. See Managing Saved Custom Layouts on page 55.
3. In Project Review, select a file in the Item List panel.
4. In the view panel, click the Detail Information view tab.
5. Click the KFF Details tab.

Re-Processing KFF

After you have processed a project with KFF enabled, you can re-process your data using an updated or different KFF Group. This is useful for re-examining a project after adding or editing hash sets. See Adding Hashes to Hash Sets Using Project Review on page 337.
If you want to re-process KFF with updated hash sets, be sure that the selected KFF Group has the desired sets. You can only select from existing KFF Groups.

To re-process KFF
1. From the Home page, select a project that you want to re-process.
2. Click the tab. The currently selected group is displayed along with its corresponding hash sets.
3. (Optional) If you want to change the KFF Group, in the drop-down menu, select a different KFF Group and click Save.
4. In the Hash Sets pane, verify that the desired sets are included.
5. Click Process KFF.
6. (Optional) On the Home page, for the project, click Work Lists, and verify that the KFF job starts and completes. See Monitoring the Work List on page 276.
7. Click Refresh to see the current status.
8. Review the KFF results. See Reviewing KFF Results on page 345.

Exporting KFF Data

About Exporting KFF Data
You can share KFF Hash Sets and KFF Groups with other KFF Servers by exporting KFF data on one KFF Server and importing it on another. You can also use export as a way of archiving your KFF data. You can export data in one of the following ways:
Exporting Hash Sets - This exports the selected Hash Sets with any included hashes. (CSV format only)
Exporting KFF Groups - This exports the selected KFF Groups with any included sub-groups and any included hashes. (CSV format only)
Exporting an archive of all custom KFF data - This exports all the KFF data except NSRL, NDIC, and DHS data (in a binary format).
When exporting KFF Groups or Hash Sets, you can export in the following formats:
CSV file
Binary format
Important: Even though it appears that you can select and export one Hash Set or one KFF Group, if you export using the KFF binary format, all of the data that you have in the KFF Index will be exported together. You cannot use this format to export individual Hash Sets or KFF Groups. Use the CSV format instead. See About CSV and Binary Formats on page 320.

Exporting KFF Groups and Hash Sets

You can share KFF hashes by exporting KFF Hash Sets or KFF Groups. Exports are saved in a CSV file that can be imported on another KFF Server.

To export one or more KFF Groups or Hash Sets
1. Do one of the following:
Click Management > Hash Sets.
Click Management > Groups.
2. Select one or more KFF Groups or Hash Sets that you want to export.
3. Click Export.
4. Select CSV (do not select Export Binary).
5.
Browse to and select the location to which you want to save the exported file. 6. Click Select. 7. Enter a name for the exported file. 8. Click OK. 9. In the Export Summaries dialog, view the status of the export. 10. Click Close. Using KFF (Known File Filter) Exporting KFF Data | 350 To create an archive of all your custom Hash Sets and Groups 1. Do one of the following: Click Management > Hash Sets. Click Management > Groups. 2. Select a KFF Group or Hash Set. 3. Click Export. 4. Select Export Binary. 5. Browse to and select the location to which you want to save the exported files. 6. Click Select. 7. Enter a name for the folder to contain the binary files (This is a new folder created by the export). 8. Click OK. 9. In the Export Summaries dialog, view the status of the export. 10. Click Close. To view the Export History 1. Do one of the following: Click Management > Hash Sets. Click Management > Groups. 2. Click Export. 3. Select View Export History. 4. In the Export Summaries dialog, view the status of the export. 5. Click Close. Using KFF (Known File Filter) Exporting KFF Data | 351 Chapter 30 About Cerberus Malware Analysis About Cerberus Malware Analysis Cerberus lets you do a malware analysis on executable binaries. You can use Cerberus to analyze executable binaries that are on a disk, on a network share, or that are unpacked in system memory. Cerberus consists of the following stages of analysis Stage 1: Threat Analysis Cerberus stage 1 is a general file and metadata analysis that quickly examines an executable binary file for common attributes it may possess. It identifies potentially malicious code and generates and assigns a threat score to the executable binary. See About Cerberus Stage 1 Threat Analysis on page 353. Stage 2: Static Analysis Cerberus stage 2 is a disassembly analysis that takes more time to examine the details of the code within the file. It learns the capabilities of the binary without running the actual executable. See About Cerberus Stage 2 Static Analysis on page 359. Cerberus first runs the Stage 1 threat analysis. After it completes Stage 1 analysis, it will then automatically run a static analysis against binaries that have a threat score that is higher than the designated threshold. Cerberus analysis may slow down the speed of your overall processing. Note: This feature is available depending on your license. Please contact your sales representative for more information. Important: Cerberus writes binaries to the AD Temp folder momentarily in order to perform the malware analysis. Upon completion it will quickly delete the binary. It is important to ensure that your antivirus is not scanning the AD Temp folder. If antivirus deletes/Quarantines the binary from the temp Cerberus analysis will not be performed. Cerberus analyzes the following types of files: acm com dll exe lex ocx scr tlb ax cpl dll~ iec mui pyd so tmp cnv dat drv ime new rll sys tsp wpc About Cerberus Malware Analysis About Cerberus Malware Analysis | 352 About Cerberus Stage 1 Threat Analysis Cerberus stage 1 analysis is a general analysis for executable binaries. The Stage 1 analysis engine scans through the binary looking for malicious artifacts. It examines several attributes from the file's metadata and file information to determine its potential to contain malicious code within it. For each attribute, if the condition exists, Cerberus assigns a score to the file. The sum of all of the file’s scores is the file’s total threat score. 
More serious attributes have higher positive scores, such as +20 or +30. Safer attributes have smaller or even negative numbers such as +5, -10 or -20. The existence of any particular attribute does not necessarily indicate a threat. However, if a file contains several attributes, then the file will have a higher sum score which may indicate that the executable binary may warrant further investigation. The higher the threat score, the more likely a file may be to contain malicious code. For example, you may have a file that had four attributes discovered. Those attributes may have scores of +10, +20, +20, and +30 for a sum of +80. You may have another file with four attributes of scores of +5, +10, -10, -20 for a sum of -15. The first file has a much higher risk than the second file. Cerberus stage 1 analysis also examines each file’s properties and provides information such as its size, version information, signature etc. About Cerberus Score Weighting There are default scores for each attribute of Cerberus Stage 1 threat scoring. However, you can modify the scoring so that you can weigh the threat score attributes with your own values. For example, the Bad Signed attribute as a default value of +20. You can give it a different weight of +30. You must configure these scores before the files are analyzed. About Cerberus Override Scores Some threat attributes have override scores. If a file has one of these attributes, instead of the score being the sum of the other attributes, the score is overridden with a set value of 100 or -100. This is useful in quickly identifying files that are automatically considered either as a threat or safe. If a bad artifact is found that requires immediate attention, the file is given the maximum score. If an artifact is found that is considered safe, the file is automatically given the minimum score. Score ranges have maximum and minimum values of -100 to 100. High threat signatures will result in a final score of 100. Low threat signatures will result in a final score of -100. Cerberus attributes that that have maximum override scores include: Bad signatures Revoked signatures Expired signatures Packed with known signature If any of these attributes are found, the score is overridden with a score of +100. About Cerberus Malware Analysis About Cerberus Stage 1 Threat Analysis | 353 Cerberus Minimum override score includes: Valid digital signature If this attribute is found, the score is overridden with a score of -100. Important: If a file that is malware has a valid digital signature, the override will score the file as -100 (low threat), even though the file is really malware. About Cerberus Threat Score Reports After you you have processed evidence with Cerberus enabled, you can view a threat score report for each executable file in a threat score reports. This report shows the Cerberus score that were calculated during processing. There are two columns of scores: the weighted score assigned to each attribute (the potential score) and the actual score given if the attribute was found in the file. Cerberus Threat Score Report About Cerberus Malware Analysis About Cerberus Stage 1 Threat Analysis | 354 The report also shows general file properties. 
File Information Threat Score Report About Cerberus Malware Analysis About Cerberus Stage 1 Threat Analysis | 355 Cerberus Stage 1 Threat Scores The following table lists the threat scores that are provided in a Stage 1 analysis: Cerberus Stage 1 Threat Score Attributes Attribute Default Threat Score Description Network +5 The Network category is triggered when a program contains the functionality to access a network. This could involve any kind of protocol from high-level HTTP to a custom protocol written using low-level raw sockets. Persistence +20 Persistence indicates that the application may try to persist permanently on the host. For example, the application would resume operation automatically even if the machine were rebooted. Process +5 Process indicates the application may start a new a process or attempt to gain access to inspect or modify other processes. Malicious applications attempt to gain access to other processes to obfuscate their functionality or attack vector or for many other reasons. For example, reading or writing into a process’s memory, or injecting code into another process. Crypto +6 Crypto is triggered when an application appears to use cryptographic functionality. Malicious software uses cryptography to hide data or activity from network monitors, anti-virus products, and investigators. Protected Storage +10 ProtectedStorage indicates that the application may make use of the Windows "pstore" functionality. This is used on some versions of Windows to store encrypted data on the system. For example, Internet Explorer stores a database for form-filling in protected storage. Registry +5 Registry is triggered when a target application attempts to use the registry to store data. The registry is commonly used to store application settings, auto-run keys, and other data that the application wants to store permanently but not in its own file. Security +5 Imports functions used to modify user tokens. For example, attempting to clone a security token to impersonate another logged on user. Obfuscation +30 Stage 1 searches for signs that the application is 'packed', or obfuscated in a way that hinders quick inspection. The Obfuscation category is triggered when the application appears to be packed, encrypted, or otherwise obfuscated. This represents a deliberate decision on behalf of the developer to hinder analysis. Process Execution Space +2 Unusual activity in the Process Execution Space header. For example, a zero length raw section, unrealistic linker time, or the file size doesn't match the Process Execution Space header. Bad Signed +20 This category is triggered when a binary is cryptographically signed, but the signature is invalid. A signature is generally used to demonstrate that some entity you trust (like a government or legitimate company, called a 'signing authority') has verified the authorship and good intentions of the signed application. However, signatures can be revoked and they can expire, meaning that the signature no longer represents that the signing authority has trust in the application. Embedded Data +10 This category is triggered when an application contains embedded executable code. While all programs contain some program code, this category indicates that the application has an embedded 'resource', which contains code separate from the code which runs normally as part of the application. 
About Cerberus Malware Analysis About Cerberus Stage 1 Threat Analysis | 356 Cerberus Stage 1 Threat Score Attributes (Continued) Attribute Default Threat Score Description Bad / Bit-Bad +20 This category is triggered when the application contains signatures indicating it uses the IRC protocol or shellcode signature. Many malware networks use IRC to communicate between the infected hosts and the command-and-control servers. Signed / Bit Signed -20 This category is triggered when a program is signed. A program that is signed is verified as 'trusted' by a third party, usually a legitimate entity like a government or trusted company. The signature may be expired or invalid though; check the 'BadSigned' category for this information. PE Good -10 Scores for good artifacts in PE headers. PE Malware +30 Scores for known malware artifacts in PE headers. About Cerberus Malware Analysis About Cerberus Stage 1 Threat Analysis | 357 Cerberus Stage 1 File Information The following table lists the threat scores that are provided in a Stage 1 analysis: File Information from Cerberus Stage 1 Analysis Item Description File Size Displays the size of the file in bytes. Import Count Displays the number of functions that Cerberus examined. Entropy Score Displays a score of the binaries entropy used for suspected packing or encrypting. Entropy may be packed New: Interesting Functions Displays the name of functions from the process execution space that contributed to the file’s threat score. Suspected Packer List Attempts to display a list of suspected packers whose signature matches known malware packers. Modules Displays the DLL files included in the binary. Has Version Displays whether or not the file has a version number. Version Info Displays information about the file that is gathered from the Windows API including the following: CompanyName FileDescription FileVersion InternalName LegalCopyright LegalTrademarks OriginalFilename ProductName ProductVersion Is Signed Displays whether or not the file is signed. If the file is signed the following information is also provided: IsValid SignerName ProductName SignatureTime SignatureResult Unpacker results Attempts to show if and which packers were used in the binary. About Cerberus Malware Analysis About Cerberus Stage 1 Threat Analysis | 358 About Cerberus Stage 2 Static Analysis When you run a stage 1 analysis, you configure a score that will launch a Cerberus stage 2 analysis. If an executable receives a score that is equal or higher than the configured score, Cerberus stage 2 is performed. Cerberus stage 2 disassembles the code of an executable binary without running the actual executable. About Cerberus Stage 2 Report Data When a stage 2 analysis runs, it returns its results of the file’s functions in the Functional Call Summary section of the threat score report. 
Cerberus Stage 2 Report Data in Threat Scan Report About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 359 Cerberus Stage 2 Function Call Data Stage 2 analysis data is generated for the following function call categories: File Access Networking functionality Process Manipulation Security Access Windows Registry Surveillance Uses Cryptography Low-level Loads Access a driver Subverts API Misc About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 360 File Access Call Categories Cerberus Stage 2 File Access Function Call Categories Category File Access Description Functions that manipulate (read, write, delete, modify) files on the local file system. Filesystem.File.Read. ExecutableExtension This is triggered by functionality which reads executable files from disk. The executable code can then be executed, obfuscated, stored elsewhere, transmitted, or otherwise manipulated. FileSystem.Physical. Read This application may attempt to read data directly from disk, bypassing the filesystem layer. This is very uncommon in normal applications, and may indicate subversive activity. FileSystem.Physical. Write This application may attempt to write data directly to disk, bypassing the filesystem layer in the operating system. This is very uncommon in normal applications, and may indicate subversive activity. It is also easy to do incorrectly, so this may help explain any system instability seen on the host. FileSystem.Directory. Create: This indicates the application may attempt to create directory. Modifications to the file system are useful for diagnosing how an application persists, where its code and data are stored, and other useful information. FileSystem.Directory. Create.Windows: This indicates an application may try to create a directory in the \Windows directory. This directory contains important operating system files, and legitimate applications rarely need to access it. FileSystem.Directory. Recursion: This indicates the application may attempt to recurse through the file system, perhaps as part of a search functionality. FileSystem.Delete: This indicates the application may delete files. With sufficient permissions, the application may be able to delete files which it did not write or even system files which could affect system stability. FileSystem.File.Delete .Windows: This indicates the application may try to delete files in the \Windows directory, where important system files are stored. This is rarely necessary for legitimate applications, so this is a strong indicator of suspicious activity. FileSystem.File.Delete . System32: This indicates the application may try to delete files in the \Windows\System32 directory, where important system files are stored. This is rarely necessary for legitimate applications, so this is a strong indicator of suspicious activity. FileSystem.File.Read. Windows: This indicates the application may attempt to read from the \Windows directory, which is very uncommon for legitimate applications. \Windows is where many important system files are stored. FileSystem.File.Write. Windows: This indicates the application may attempt to write to the \Windows directory, which is very uncommon for legitimate applications. \Windows is where many important system files are stored. FileSystem.File.Read. System32: This indicates the application may attempt to read from the \Windows\System32 directory, which is very uncommon for legitimate applications. 
\Windows\System32 is where many important system files are stored. About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 361 Cerberus Stage 2 File Access Function Call Categories (Continued) Category Description FileSystem.File.Write. System32: This indicates the application may attempt to write to the \Windows\System32 directory, which is very uncommon for legitimate applications. \Windows\System32 is where many important system files are stored. FileSystem.File.Write. ExecutableExtension: This indicates the application may attempt to write an executable file to disk. This could indicate malicious software that has multiple ‘stages’, or it could indicate a persistence mechanism used by malware (i.e. write an executable file into the startup folder so it is run when the system starts up). FileSystem.File. Filename.Compressio n: This indicates the program may write compressed files to disk. Compression can be useful to obfuscate strings or other data from quick, automated searches of every file on a filesystem. FileSystem.File. Filename.Autorun: This indicates the application may write a program to a directory so that it will run every time the system starts up. This is a useful persistence mechanism. About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 362 Networking Functionality Call Categories Cerberus Stage 2 Networking Functionality Function Call Categories Category Networking functionality Description Functions that enable sending and receiving data over the or other networks. Network.FTP.Get: Describes the use of FTP to retrieve files. This could indicate the vector a malware application uses to retrieve data from a C&C server. Network.Raw: Functions in this category indicate use of the basic networking commands used to establish TCP, UDP, or other types of connections to other machines. Programmers who use these build their own communication protocol over TCP (or UDP or other protocol below the application layer) rather than using an application-layer protocol such as HTTP or FTP. Network.Raw.Listen: Functionality in this category indicates the application accepts incoming connections over tcp, udp, or other lower-level protocol. Network.Raw. Receive: Functionality in this bucket indicates that the application receives data using a socket communicating over a lower-level protocol such as TCP, UDP, or a custom protocol. Network.DNS.Lookup. Country.XX: This indicates the application may attempt to resolve the address of machines in one of several countries. “XX” will be replaced by the ‘top level domain’, or TLD associated with the lookup, indicating the application may attempt to establish contact with a host in one of these countries. Network.HTTP.Read: The application may attempt to read data over the network using the HTTP protocol. This protocol is commonly used by malware so that its malicious traffic appears to ‘blend in’ with legitimate web traffic. Network.HTTP. Connect.Nonstandard. Request: This indicates the application may make an HTTP request which is not a head, get, or post request. The vast majority of web applications use one or more of these 3 kinds of requests, so this category indicates anomalous behavior. Network.HTTP. Connect.Nonstandard. Port: Port: Most HTTP connections occur over either port 80 or 443. This indicates the application is communicating with the server over a non-standard port, which may be a sign that the server is not a normal, legitimate web server. Network.HTTP. Connect.Nonstandard. 
Header: HTTP messages are partially composed of key-value pairs of strings which the receiver will need to properly handle the message. This indicates the application includes non-standard or very unusual header key-value pairs. Network.HTTP.Post: This indicates the application makes a ‘post’ http request. ‘post’ messages are normally used to push data to a server, but malware may not honor this convention. Network.HTTP.Head: This indicates the application makes a ‘head http request. ‘head’ messages are normally used to determine information about a server’s state before sending a huge amount of data across the network, but malware may not honor this convention. Network.Connect. Country.XX: This indicates the application may attempt to connect to a machines in one of several countries. “XX” will be replaced by the ‘top level domain’, or TLD associated with the lookup. About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 363 Cerberus Stage 2 Networking Functionality Function Call Categories (Continued) Category Description FTP.Put: About Cerberus Malware Analysis The application may attempt to send files over the network using FTP. This may indicate an exfiltration mechanism used by malware. About Cerberus Stage 2 Static Analysis | 364 Process Manipulation Call Categories Cerberus Stage 2 Process Manipulation Function Call Categories Category Process Manipulation Description May contain functions to manipulate processes. ProcessManageme nt.Enumeration: This functionality indicates the application enumerates all processes. This could be part of a system survey or other attempt to contain information about the host. ProcessManageme nt.Thread.Create: This indicates the target application may create multiple threads of execution. This can give insight into how the application operates, operating multiple pieces of functionality in parallel. ProcessManageme nt.Thread.Create. Suspended: This indicates the application may create threads in a suspended state. Similar to suspended processes, this may indicate that the threads are only executed some time after they’re created or that some properties are modified after they are created. ProcessManageme nt.Thread.Create: This indicates the application may attempt to create a thread in another process. This is a common malware mechanism for ‘hijacking’ other legitimate processes, disguising the fact that malware is on the machine. ProcessManageme nt.Thread.Create. Remote: This indicates that the application may create threads in other processes such that they start in a suspended state. Thus their functionality or other properties can be modified before they begin executing. ProcessManageme nt.Thread.Open: The application may try to gain access to observe or modify a thread. This behavior can give insight into how threads interact to affect the host. ProcessManageme nt.Process.Open: This application may attempt to gain access to observe or modify other processes. This can give strong insight into how the application interacts with system and what other processes it may try to subvert. ProcessManageme nt.Process.Create: This application may attempt to create one or more other processes. Similar to threads, multiple processes can be used to parallelize an application’s functionality. Understanding that processes are used rather than threads can shed insight on how an application accomplishes its goals. ProcessManageme nt.Process.Create. Suspended: Describes functionality to create new processes in a suspended state. 
Processes can be created in a ‘suspended’ state so that none of the threads execute until it is resumed. While a process is suspended, the creating process may be able to substantially modify its behavior or other properties. About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 365 Security Access Call Categories Cerberus Stage 2 Security Access Function Call Categories Category Security Access Description Functions that allow the program to change its security settings or impersonate other logged on users. Security: This category indicates use of any of a large number of securityrelated functions, including those manipulating security tokens, Access Control Entries, and other items. Even without using an exploit, modification of security settings can enable a malicious application to gain more privileges on a system than it would otherwise have. Windows Registry Call Categories Cerberus Stage 2 Windows Registry Function Call Categories Category Windows Registry Description Functions that manipulate (read, write, delete, modify) the local Windows registry. This also includes the ability to modify autoruns to persist a binary across boots. Registry.Key.Create : The application may attempt to create a new key in the registry. Keys are commonly used to persist settings and other configuration information, but other data can be stored as well. Registry.Key.Delete: Registry.Key.Delete: This application may attempt to delete a key from the registry. While it is common to delete only keys that the application itself created, with sufficient permissions, Windows may not prevent an application from deleting other applications’ keys as well. Registry.Key.Autoru n: This indicates the application may use the registry to try to ensure it or another application is run automatically on system startup. This is a common way to ensure that a program continues to run even after a machine is restarted. Registry.Value.Delet e: This indicates the application may attempt to delete the value associated with a particular key. As with the deletion of a key, this may not represent malicious activity so long as the application only deletes its own keys’ values. Registry.Value.Set: The application may attempt to set a value in the registry. This may represent malicious behavior if the value is set in a system key or the key of another application. Registry.Value.Set. Binary: This indicates the application may store binary data in the registry. This data could be encrypted, compressed, or otherwise is not plain text. Registry.Value.Set. Text: This indicates the application may write plain text to the registry. While the ‘text’ flag may be set, this does not mandate that the application write human-readable text to the registry. Registry.Value.Set. Autorun: The application may set a value indicating it will use the registry to persist on the machine even after it restarts. About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 366 Surveillance Call Categories Cerberus Stage 2 Surveillance Function Call Categories Category Surveillance Description Usage of functions that provide audio/video monitoring, keylogging, etc. Driver.Setup: Functionality in this category involves manipulation of INF files, logging, and other driver-related tasks. Drivers are used to gain complete control over a system, potentially even gaining control of other security products. Driver.DirectLoad: Functionality in this category involves loading drivers. 
As noted in ‘driver.setup’, drivers represent ultimate control over a host system and should be extremely trustworthy. Uses Cryptography Call Categories Cerberus Stage 2 Uses Cryptography Function Call Categories Category Uses Cryptography Description Usage of the Microsoft CryptoAPI functions. Crypto.Hash.Comp ute: This indicates a hash function may be used by the target application. Hash functions are used to verify the integrity of communications or files to ensure they were not tampered with. Crypto.Algorithm.X X: The “XX” could be any of several values, including ‘md5’, ‘sha-1’, or ‘sha-256’. These represent particular kinds of hashes which the target application may use. Crypto.MagicValue: This indicates that the target contains strings associated with cryptographic functionality. Even if the application does not use Windows OS functionality to use cryptography, the ‘magic values’ will exist so long as the target uses standard cryptographic algorithms. About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 367 Low-level Access Call Categories Cerberus Stage 2 Low-level Access Function Call Categories Category Low-level Access Description Functions that access low-level operating system resources, for example reading sectors directly from disk. Driver.Setup: Functionality in this category involves manipulation of INF files, logging, and other driver-related tasks. Drivers are used to gain complete control over a system, potentially even gaining control of other security products. Driver.DirectLoad: Functionality in this category involves loading drivers. As noted in ‘driver.setup’, drivers represent ultimate control over a host system and should be extremely trustworthy. Debugging.dbghelp: This indicates use of functionality included in the dbghelp.dll module from the "Debugging Tools for Windows" package from Microsoft. With the proper permissions, the functionality in this library represents a power mechanism for disguising activity from investigators or for gaining control of other processes. Misc.SystemRestore: Describes functionality involved in the System Restore feature, including removing and adding restore points. Restore points are often used as part of a malware-removal strategy, so removal of arbitrary restore points, especially without user interaction, may represent malicious activity. Debugging. ChecksForDebugger: This is triggered if the application tries to determine whether it is being debugged. Malicious applications commonly try to determine whether they’re being analyzed so that they can modify the behavior seen by analysts, making it difficult to discover their true functionality. Loads a driver Call Categories Cerberus Stage 2 Loads a driver Function Call Categories Category Description Loads a driver Functions that load drivers into a running system. Subverts API Call Categories Cerberus Stage 2 Subverts API Function Call Categories Category Subverts API Description Undocumented API functions, or unsanctioned usage of Windows APIs (for example, using native API calls). About Cerberus Malware Analysis About Cerberus Stage 2 Static Analysis | 368 Chapter 31 Using Cerberus Malware Analysis This chapter includes the following topics about running Cerberus in Resolution1 products. About Running Cerberus Analysis (page 369) Enabling Viewing Cerberus Analysis (page 370) Cerberus Results and Data (page 372) About Running Cerberus Analysis Cerberus Analysis consists of two stages of analysis that help you to locate potentially malicious files. 
See About Cerberus Malware Analysis on page 352. You can enable Cerberus in one of two places: At the project level. This will run Cerberus against files that are added as evidence in a project. You enable Cerberus at the project level in the Processing Options of a new project. At the job level. This will run Cerberus against files that are added as the results of a job. You enable Cerberus at the job level in the Job wizard. You can use Cerberus in the following job types: Collection Metadata (Resolution1) Only Remediate Search and Review and Review (Resolution1 CyberSecurity) Volatile Note: You do not have to enable Cerberus at the project level in order to enable Cerberus in jobs. Note: The Cerberus options are different for a Volatile job. You only select if you want to perform a Stage One on files or a Stage Two on running processes. You do not set a score threshold. See To enable Cerberus Analysis for Volatile Jobs on page 371. If you want to modify the score weighting, you do that on the Management page. If you want to modify the values, you must do that before you process evidence or run jobs. | 369 Enabling Cerberus Analysis To enable Cerberus Analysis for a project 1. When creating a new project, in the Create New Project dialog, click Processing Options. 2. To enable a Stage 1 analysis, select Cerberus Stage One. 3. (Optional) To enable a Stage 2 analysis, select Cerberus Stage Two. 3a. Configure the threshold score that will trigger a Stage 2 analysis. To modify the Cerberus score weighting 1. From the Management page, click the System Configuration tab. 2. From the System Configuration tab, click Cerberus Weighting Templates. 3. Modify the scores or reset scores to the default. 4. Click Save. Enabling Cerberus Analysis | 370 To enable Cerberus Analysis for Collection, MetaData Only, Remediate and Review, and Search and Review Jobs 1. Create a new job using the job wizard. See Creating and Managing Jobs on page 454. See Job Options Tab on page 457. 2. To enable a Stage 1 analysis, select Cerberus Stage One. 3. (Optional) To enable a Stage 2 analysis, select Cerberus Stage Two. 3a. Configure the threshold score that will trigger a Stage 2 analysis. 3b. Select whether or not to collect unscorables files. To enable Cerberus Analysis for Volatile Jobs 1. Create a new Volatile job using the job wizard. See Volatile Options Tab on page 455. 2. Complete the options on the Job Options page. 3. On the Volatile Options page, to enable a Stage 1 analysis, select Perform Cerberus Stage One analysis when corresponding files can be located on disk. 4. (Optional) To enable a Stage 2 analysis, select Perform Cerberus Stage Two analysis on running processes. Enabling Cerberus Analysis | 371 Viewing Cerberus Results and Data About Reviewing Results of Cerberus Cerberus results are displayed in Project Review. You can use the following tools to see Cerberus results: Cerberus facets Cerberus Information Quick Columns Cerberus reports Using the Cerberus Facets You can use the Cerberus facets to filter data based on Cerberus values. For example, you can apply a facet to only display items with a certain range of cerberus score. See About Filtering Data with Facets on page 124. You can also filter on any Stage 1 or Stage 2 attribute. See About Cerberus Malware Analysis on page 352. You can use the following Cerberus facets: Cerberus Stage 1 Analysis Cerberus Stage 1 Analysis Cerberus Threat Score Viewing Cerberus Results and Data | 372 To apply Cerberus facets 1. 
From the Item List in the Review window, open the facets pane.
2. Expand Cerberus.
3. Select the facets that you want to apply.

Using the Cerberus Information Quick Columns
You can use the Cerberus Information Quick Columns to sort on Cerberus score attributes and values. For example, you can sort on the Cerberus score column.
See Using Document Viewing Panels on page 76.
To see the Cerberus columns, activate the Cerberus Information Quick Columns.

To activate the Cerberus Information Quick Columns
1. From the Item List in the Review window, click Options.
2. Click Quick Columns > Cerberus Data > Cerberus Information.
The Cerberus columns display.
See Filtering by Column in the Item List Panel on page 139.

Viewing and Exporting the Cerberus Threat Score Report
You can view Cerberus results details for an individual file by viewing the Cerberus threat score report.
See About Cerberus Threat Score Reports on page 354.

To view the Cerberus threat score report
1. For a project on which you have run Cerberus, open Project Review.
2. Under Layouts, select the Resolution1 CyberSecurity Layout.
See Managing Saved Custom Layouts on page 55.
3. In Project Review, select a file in the Item List panel.
4. In the view panel, click the Detail Information view tab.
5. Click the Cerberus tab.

To export a Cerberus Report to an HTML document
1. In the bottom-right corner of the Cerberus report, click Download.
2. Enter a file name and path.
3. Click Save.

Part 5 Using Lit Holds
This part describes how to use Litigation Holds and includes the following:
Managing Litigation Holds (page 376)

Chapter 32 Managing Litigation Holds

About Litigation Holds
AccessData's Litigation Hold system is a notification management system that efficiently handles all aspects and stages of the litigation hold process within your enterprise. The Litigation Hold system offers email notification templates and interview question templates, reports, histories, reminders, acceptance records, and interview response records, and it centralizes the relevant data in one location.

Configuring the System for Managing Litigation Holds

About System Configuration for Lit Hold
Your application is shipped with Active Directory enabled. By default the application is configured to use Active Directory for the IT and Person acceptance landing pages when you are accepting holds. However, you may choose to turn this feature off. Turning the feature off does not affect the use of Active Directory and IWA in the rest of the application.
See Configuring Active Directory Synchronization on page 78.

Disabling Active Directory
You can disable Active Directory in the application and use anonymous authentication instead.
See About System Configuration for Lit Hold on page 376.

To disable Active Directory
1. On the Windows Start menu, in the Search programs and files field, enter INetMgr.
2. In the Internet Information Services (IIS) Manager application, in the left pane, expand the top-most server option.
3. Expand Sites > Default Web Site.
4. Click LitHoldNotification.
5. In the middle pane, in the IIS section, double-click Authentication.
6. In the Authentication pane, under the Name column, right-click Windows Authentication, and then click Disable.
7. In the Authentication pane, under the Name column, right-click Anonymous Authentication, and then click Enable.
8.
In the left pane, right-click LitHoldNotification, and then click Explore. Notice the Web.config file. 9. Open Web.config in Notepad. 10. Locate the following line in the file: 11. Change "Windows" to "None". The text is case-sensitive. 12. Locate the following line in the file: 13. Change "?" to "0". 14. Save Web.config, and then exit Notepad. 15. Close the Explore window where Web.config is displayed. 16. Exit the Internet Information Services (IIS) Manager window. About Configuring Projects, People, and Users Litigation holds use the projects, users, and people that exist in the application’s database. If you have not already created these, you must do so before you send email notifications. Projects During the creation of a litigation hold, it is required that you associate it with a project. When it first becomes necessary to create a litigation hold, you can create a new legal to associate with the hold, or you can use an existing project. See Creating a Project on page 205. Application users During the litigation hold creation process approvers are selected from the User List page. Only the users with Administrators, Project Manager, Project Administrator, LitHold Managers, Approve Lit Holds rights in your program database are loaded into the Approval page of the Hold Creation Wizard. See Configuring and Managing System Users, User Groups, and Roles on page 45. People People are selected from your program person list during litigation hold creation. You can add people manually (individually), or you can add people using Active Directory. Using Active Directory and Integrated Windows Authentication can help to verify email addresses, and to further authenticate people during their responses to email notifications. This system automatically inputs the person’s email address into the Lit Hold creation People page. See Configuring Active Directory Synchronization on page 78. Email configuration Before you can send any litigation hold notification emails, you must first make sure that you have configured Email Notification Server. See Configuring the Email Notification Server on page 80. Lit Hold Configuration After Email Notification Server is configured, you can create your litigation hold notifications, approvals, and acceptances. Managing Litigation Holds Configuring the System for Managing Litigation Holds | 377 About Litigation Hold Roles You can assign roles and permissions to users to manage lit holds. Some roles are global while others are specific to an individual project. Litigation Hold Roles and Permissions Role/ Permission Description Roles Person A person upon whom the hold is placed. IT Staff Company IT staff assigned to this Litigation Hold by the Hold Manager. IT Staff are added in the Lit Hold creation wizard. See Adding an IT Staff Member for Use in a Litigation Hold (page 380). An IT Staff works, in particular, with file aging. System-based Permission Lit Hold Manager Can manage lit holds for all projects. See Planning User Roles on page 46. Project specific permissions See Setting Project Permissions on page 240. Project Administrator Can manage a lit hold for the given project. Approve Lit Holds Approves the hold and receives updates from the Hold Manager. Create Lit Holds Can create a lit hold for the project. Delete Lit Holds Can delete a lit hold for the project. Hold Manager Creates the hold, submits it for approval (from the Hold Approver), and manages notifications, responses, recipient lists, updates, and reminders. 
A Hold Manager has to be a User, but does not necessarily have the role of creating Projects. A Hold Manager may be granted Hold Approver as well, but that may pose a security risk. View Lit Holds Can view data and reports for a lit hold for the project. Managing Litigation Holds Configuring the System for Managing Litigation Holds | 378 Configuring Litigation Holds System Settings Configuring Lit Hold General Settings Before you create litigation holds, you configure your Litigation Hold general settings. Prior to this, make sure you have configured your Email notification server. See Configuring the Email Notification Server on page 80. To configure Litigation Hold general settings 1. In the application console, click Lit Holds. 2. On the Lit Holds page, click LitHold Configuration. 3. On the LitHold Configuration page, set the options that you want. See Lit Hold Configuration Options on page 379. 4. Click Save. 5. (Optional) In the Send Test Email to: field, enter a single email address of a recipient, and then click Send Test Email. Lit Hold Configuration Options The following table describes the options that are available on the Lit Hold Configuration page. See Configuring Lit Hold General Settings on page 379. Lit Hold Configuration Options Option Description Email Sent From Address Specifies the sender’s email address. If desired, the IT department or a Network administrator can set up a default “From” address that people cannot reply to. Website Base Address The base address includes the protocol and server name, but not the application or the page that is currently displayed. For example, http:/// Default Escalation Stage Two Email Address You can set two levels of escalation policies for person hold acceptance. Stage One: If a person doesn't accept the hold within a number of specified days, the first escalation email is sent to their manager. Note: Stage One escalation requires Active Directory to be configured previously. In the Manager field of the Active Directory Account Screen, enter the manager that you want to be notified for the first escalation email. Stage Two: After a specified number of days, the next escalation is sent to the specified email address. You can configure the default email address for Stage Two Escalations. See People Options on page 393. See Email Notifications Options on page 394. Hold Report temporary storage path Managing Litigation Holds You can specify a dedicated path for reports data. Configuring Litigation Holds System Settings | 379 Lit Hold Configuration Options Option Description Person/IT Acceptance Message Lets you enter any message or instruction that you want the person or IT staff to receive for their acceptance. The acceptance message displays at the bottom of the Person and IT Staff Hold Notification pages, just above the Accept button. This is the “By clicking accept you agree to the terms set forth.” message. Save Saves the settings. Send Test Email To Specifies a single recipient email address that receives the test email. Send Test Email Sends a test email to the recipient specified above. Managing the IT Staff About Managing the IT Staff in a Litigation Hold An IT Staff works with file aging, among other things. Unlike people and approvers, there is no default database list that populates the IT Staff list. Instead, individuals must be entered manually. See Adding an IT Staff Member for Use in a Litigation Hold on page 380. See Editing an IT Staff Member on page 381. See Deleting an IT Staff Member on page 381. 
Individuals that you add to IT Staff become available for you to select from in the Hold Creation Wizard.
See Creating a Litigation Hold on page 391.

Adding an IT Staff Member for Use in a Litigation Hold
You must add individuals to IT Staff manually. Individuals that you add here become available for you to select from in the Hold Creation Wizard.
See About Managing the IT Staff in a Litigation Hold on page 380.

To add an IT staff member for use in a litigation hold
1. On the Lit Holds page, click LitHold IT Staff.
2. On the Manage IT Staff page, click the add button.
3. In the Add New IT Staff dialog box, set the options that you want.
See IT Staff Options on page 381.
4. Click OK to add the individual to the table on the Manage IT Staff page.

IT Staff Options
The following table identifies the options that are available in the Add New IT Staff dialog box and the Edit IT Staff dialog box.
See Adding an IT Staff Member for Use in a Litigation Hold on page 380.
See Editing an IT Staff Member on page 381.

IT Staff Options
Option: Description
First Name: First name of the individual.
Middle Initial: Middle initial of the individual.
Last Name: Last name of the individual.
Email: Email address of the individual. The address is where notifications are sent.
Title: Given job title of the individual.
Username: Computer username of the individual.
Domain: Network domain where the individual's computer resides.
Cancel: Cancels the addition of the individual.
OK: Adds the individual to the Manage IT Staff page.

Editing an IT Staff Member
Any edits or changes that you make here are propagated to existing litigation holds of which the individual may be a part.
See About Managing the IT Staff in a Litigation Hold on page 380.

To edit an IT staff member
1. On the Lit Holds page, click LitHold IT Staff.
2. On the Manage IT Staff page, in the table, select a name whose information you want to edit.
3. Click Edit.
4. In the Edit IT Staff dialog box, set the options that you want.
See IT Staff Options on page 381.
5. Click OK.

Deleting an IT Staff Member
Individuals that you delete are removed from the list of IT Staff that you can select from in the Hold Creation Wizard, and they are removed from all existing litigation holds.
See About Managing the IT Staff in a Litigation Hold on page 380.

To delete an IT staff member
1. On the Lit Holds page, click LitHold IT Staff.
2. On the Manage IT Staff page, in the table, select a name that you want to delete.
3. Click Delete.
4. Click OK to confirm the deletion.

Configuring LitHold Email Templates

About Managing Email Templates for Use in Litigation Holds
The Hold Manager sends email notifications to people, IT Staff, and the Hold Approver informing them that a litigation hold is in place. Using email templates expedites this process. Templates are created in the Manage Email Templates section of the Hold drop-down menu. The Hold Manager can use predefined email templates, or create their own custom email templates. You can edit or delete predefined email templates.
It is possible that messages sent by the litigation hold notification system are flagged as junk email by clients such as Microsoft Outlook. You may need to ensure that these messages are considered "trusted" and not automatically filtered to a junk email folder.
See Creating an Email Template for Use in Litigation Holds on page 382.
See About Managing Email Templates for Use in Litigation Holds on page 382.
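The templates described below support macro fields such as [Hold Name] and [Hold Requestor]; when a notification is sent, each macro is replaced with the corresponding value entered in the Hold Creation Wizard (the available macros are listed under Email Notifications Options later in this chapter). The following is a minimal, purely illustrative sketch of that kind of expansion, not the product's actual template engine, using hypothetical hold values.

```python
import re

# Hypothetical hold metadata; in the product these values come from the fields
# entered in the Hold Creation Wizard (General page, People page, and so on).
hold = {
    "Hold Name": "Acme v. Widget Co.",
    "Hold Requestor": "Jane Counsel",
    "Time Frame Start": "2014-01-01",
    "Time Frame End": "2014-12-31",
    "Hold Person List": "JSmith, Bill Jones, Sarah Johnson",
    "Project Name": "Acme Litigation",
}

template_body = (
    "You have been placed on the litigation hold [Hold Name], requested by "
    "[Hold Requestor] for project [Project Name]. The hold covers "
    "[Time Frame Start] through [Time Frame End]."
)

def expand_macros(body, values):
    """Replace each [Macro Name] field with its value; unknown macros are left as-is."""
    return re.sub(r"\[([^\]]+)\]", lambda m: values.get(m.group(1), m.group(0)), body)

print(expand_macros(template_body, hold))
```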
Creating an Email Template for Use in Litigation Holds
You can create your own email templates from scratch, or you can use an existing email template as the basis for a new template. You can add basic HTML formatting to the message body of an email.
See About Managing Email Templates for Use in Litigation Holds on page 382.

To create an email template for use in litigation holds
1. On the Lit Holds page, click LitHold Email Templates.
2. On the Email Templates page, in the Template Type drop-down list, select the type of template that you want to create.
See Template Type Options on page 383.
3. In the Templates drop-down list, do one of the following:
Click the name of an existing template.
Click Create New Template.
4. In the Subject and Message Body fields, add or delete the text that you want to appear in the email for the given template type.
When you save the template, the text that you entered in the Subject field is also used for the template name that appears in the Templates drop-down list.
You can use the HTML text editor to format the text as you would like to have it displayed. You can also copy HTML text from another source.
5. (Optional) Click Macros. In the Name column, click a macro name to insert it into the message body where your cursor was last located.
Based on the macro that you added to the message body, its associated information is inserted into the email at the time it is sent. The associated information comes from the various fields that were filled in at the time you went through the Hold Creation Wizard to create the litigation hold. You can enter macros manually if the "code" is already known.
Note: The Lit Hold notification email template allows you to manually enter the [CompanyImage] macro. When the macro is not present in the template, the company image's placement defaults to the top center of the email.
6. (Optional) In the Send Test Email to: field, enter an email address of a single recipient, and then click Send Test Email.
7. Click Save.

Template Type Options
The following table describes the types of email templates that are available for a litigation hold.
See Creating an Email Template for Use in Litigation Holds on page 382.

Template Types
Template Type: Description
Approval: Sent to the litigation hold manager for their approval.
Stop Aging Acceptance: Sent to the IT Staff describing the parameters of the hold, and linking them to the Landing Page where they can view the Stop Aging letters and acknowledge receipt of the litigation hold.
Stop Aging Reminder: Reminds the person that they are still involved in a litigation hold order.
Stop Aging Termination: Notifies the IT Staff that their participation in the litigation hold order is no longer necessary.
Hold Acceptance: Notifies the people of the hold, and links them to the Landing page where they can acknowledge receipt of the hold.
Hold Reminder: Reminds the people of the litigation hold.
Hold Termination: Notifies the people that the litigation hold has ended.
Hold Escalation Stage One: There are two levels of escalation policies for person hold acceptance.
Stage One: If a person doesn't accept the hold within a number of specified days, the first escalation email is sent to their manager.
Note: Stage One escalation requires Active Directory to be configured previously.
In the Manager field of the Active Directory Account Screen, enter the manager that you want to be notified for the first escalation email. Stage Two: After a specified number of days, the next escalation is sent to the specified email address. Repeat: Both of these escalations can be set to repeat if necessary. People within a hold can be excluded from the escalation policy if needed. This is the email template for a Stage One Escalation. Hold Escalation Stage Two Managing Litigation Holds This is the email template for a Stage Two Escalation. Configuring Litigation Holds System Settings | 383 Template Types Template Type Description Person Questions Changed Reminder You may change the interview questions of a hold. This is the email template that will remind people of the change in interview questions and that they need to re-answer them. Configuring LitHold Interview Templates About Managing Interview Templates for Use in Litigation Holds When you create a litigation hold, part of the process includes specifying interview questions. You can create interview templates with standard questions that you can re-use when you create a litigation hold. See Creating an Interview Template for Use in Litigation Holds on page 386. See Editing an Interview Template on page 387. See Deleting an Interview Template on page 387. See Creating a Litigation Hold on page 391. About Interview Question and Answer Types When you create an interview question template, you have flexibility in the kinds of questions, and potential answers, that are used. You can also specify that certain interview questions are required to answer. In an interview question template, you can configure the following different types of interview questions: LitHold Interview Template Questions Types Questions Type Description Text Input Question When you use this question type, a user answers the question by typing text. Selection Question (Check Boxes) When you use this question type, you also create a set of answers that the user can select from. The answers are provided as check boxes. The user can answer the question by selecting any of the check boxes that apply. You also have flexibility in the type of answers that you provide. LitHold Interview Template Answer Types (page 385) Depending on the type of question that you ask, you may want to provide a selection for None. Selection Question (Radio Buttons) When you use this question type, you also create a set of answers that the user can choose from. The answers are provided as radio buttons. The user can answer the question by selecting only one radio button. Depending on the type of question that you ask, you may want to provide a selection for None. Managing Litigation Holds Configuring Litigation Holds System Settings | 384 You also have flexibility in the types of answers that accompany the check box and radio button questions. You can configure the following answer types. LitHold Interview Template Answer Types Questions Type Description Add Answer The administrator specifies the text that accompanies the check box or radio button and the user simply chooses which selection to make. Add Input Answer The check box or radio button does not contain any accompanying text and the user must input text after selecting it. Add Input Answer with Text The administrator specifies the text that accompanies the check box or radio button and the user can also input text after selecting it. 
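As a purely illustrative sketch, and not the product's storage format, the question and answer types above could be modeled as follows: each question is a text-input, check-box, or radio-button question, each question can be marked as required, and each selection answer can carry a label, a free-text input, or both.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Answer:
    text: str = ""              # label shown next to the check box or radio button
    allows_input: bool = False  # True for the "Add Input Answer" variants

@dataclass
class Question:
    prompt: str
    kind: str                   # "text", "checkbox", or "radio"
    required: bool = False
    answers: List[Answer] = field(default_factory=list)  # empty for text-input questions

# A small interview template mirroring the three question types described above.
# The prompts and answers here are hypothetical examples.
template = [
    Question("Where do you store project documents?", kind="text", required=True),
    Question("Which devices do you use for work?", kind="checkbox", answers=[
        Answer("Laptop"),                      # plain answer with text
        Answer(allows_input=True),             # input-only answer
        Answer("Other", allows_input=True),    # answer with text plus input
    ]),
    Question("Do you use personal email for work?", kind="radio", required=True, answers=[
        Answer("Yes"),
        Answer("No"),
    ]),
]

for q in template:
    print(f"{q.prompt} ({q.kind}, required={q.required}, {len(q.answers)} answers)")
```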
The following graphic is a sample of a template which has each of the three question types, and each of the three answer types.
Sample of interview questions with the different types of questions and answers
The first question simply provides a box for the user to input the answer.
The second question provides check boxes for answers. The first answer is a simple check box with text provided in the template. The second answer is a check box where the user inputs text after selecting it. The third answer is a check box with text, but also includes a box for a user to input text.
The third question provides radio buttons with the three possible answer types.
The difference between questions with check boxes and questions with radio buttons is that with check boxes, a user can select any and all check boxes. With radio buttons, the user can choose only one.
When creating a template, you can use the green up and down arrows on the right side to change the order of the questions.

Creating an Interview Template for Use in Litigation Holds
You can create any number of interview templates that contain the questions you want to ask people and others. You specify which templates you want to use when you go through the Hold Creation Wizard.
See About Managing Interview Templates for Use in Litigation Holds on page 384.

To create an interview template for use in litigation holds
1. On the Lit Holds page, click the Configuration tab.
2. Click LitHold Interview Templates.
3. On the Manage Interview Templates page, click the add button.
4. Enter a template name.
The name of the template appears in the Templates drop-down list in the LitHold Wizard.
5. Enter a template description.
6. Add interview questions.
The add button includes a drop-down menu. Select the type of question that you want to add.
See About Interview Question and Answer Types on page 384.
7. In the Question field, enter the text of the question.
8. (Optional) Select the Answer Required check box if you want to require an answer.
9. If you selected a Text Input Question (text input only), click Add.
10. If you selected a Selection Question type with either check boxes or radio buttons, do the following:
10a. Click the add button with the drop-down button in the lower left corner of the dialog.
10b. Select an answer type.
See About Interview Question and Answer Types on page 384.
10c. Enter as many answers as desired.
10d. Click Add.
11. Add all of the questions that you want to be in this template.
12. (Optional) To edit a question or an answer, highlight a question and click Edit.
13. (Optional) Highlight a question and use the green up and down arrows on the right side to change the order of the question.
14. Click Save.
15. (Optional) Create additional templates with other questions.

Editing an Interview Template
You can edit an existing interview template to add or delete questions and answers in the template. You can also check or uncheck questions as required or not.
See Creating an Interview Template for Use in Litigation Holds on page 386.
See About Managing Interview Templates for Use in Litigation Holds on page 384.

To edit an interview template
1. On the Lit Holds page, click the Configuration tab.
2. Click LitHold Interview Templates.
3. On the Manage Interview Templates page, highlight a template and click Edit.
4. Make any desired changes.
5. Click Save.
Deleting an Interview Template
You can delete an existing interview template so it is no longer available to choose in the Hold Creation Wizard.
See Creating an Interview Template for Use in Litigation Holds on page 386.
See About Managing Interview Templates for Use in Litigation Holds on page 384.

To delete an interview template
1. On the Lit Holds page, click the Configuration tab.
2. Click LitHold Interview Templates.
3. On the Manage Interview Templates page, highlight a template and click Delete.
4. Click OK to confirm.

Using the Lit Hold List
The Lit Hold list is the default view when you click Lit Holds in the application console. You can use the Holds List view to display all the litigation holds in the application, including the following information:
Name
Status
Creation date
The number of associated IT Staff
The number of People
When you view the list of holds, they are displayed in a grid. You can do the following to modify the contents of the grid:
Control which columns of data are displayed in the grid.
If you have a large list, you can apply a filter to display only the items you want.
See Managing Columns in Lists and Grids on page 36.
You can also perform the following hold actions:
Create a hold
Delete a hold
Activate a hold
Deactivate a hold
Resubmit a hold
Below the list of holds, you can use tabs to see the following information about the highlighted hold:
Overall status
Approvals
List of Associated People
List of the associated IT Staff
Logs
Email History
Hold reports
The following table describes each link in the Hold List task pane.

Hold List Elements
Link: Description
Edit: Lets you view and edit the selected hold.
Delete: Deletes the selected hold.
New Hold: Opens the Hold Creation Wizard so you can add a litigation hold. See Creating a Litigation Hold on page 391.
Delete Hold: Allows the user to delete the selected holds. See Deleting a Litigation Hold on page 399.
Activate/Deactivate Hold: Allows the user to activate or deactivate the selected hold. See Activating or Deactivating a Litigation Hold on page 399.
Resubmit Hold: You can resubmit a hold. This sets it back to its original state so that all actions must be performed again. See Resubmitting a Litigation Hold on page 400.
Overall Status: The filter lets you select Active, Inactive, or All Holds. The drop-down lists all the Holds in the selected category. The following four tab views display detailed information about the status of the selected Hold: Overall Status, Approvals, IT Staff, and People. You can also choose Email Distribution History from the side task bar to view the Event Log and Email Distribution History for that Hold. See Viewing the Overall Status of a Litigation Hold on page 401.
Holds Summary: Summarizes all the litigation holds. Much of the same information is found in the Hold Info tab on the right side. See Viewing the Overall Status of a Litigation Hold on page 401.
Approvals: Displays the approval status and type.
People: Displays the names of the people that are associated with the selected hold. You can click Preview Acceptance Page at the bottom of the tab to open the Person Hold Notification page.
IT Staff: Displays the IT Staff members that are associated with the selected hold.
You can click Preview Acceptance Page at the bottom of the tab to open the IT Staff Hold Notification page. See Adding an IT Staff Member for Use in a Litigation Hold on page 380. Log Displays filter options, a list of event types and related information, messages and date stamp for the selected Hold. See About the Hold Event Log for a Litigation Hold on page 401. Email Distribution History Displays filter options, a list of emails, and date stamp for the selected hold. See Viewing the Email Distribution History of a Litigation Hold on page 401. Managing Litigation Holds Using the Lit Hold List | 390 Hold List Elements Links Description Hold Reports Details the people involved in the hold, and the approval/acceptance status of the approvers, people, and IT Staff. See About Viewing Litigation Hold Reports on page 402. Creating a Litigation Hold You use the Litigation Hold Wizard to create and configure litigation holds. To create a litigation hold 1. On the Lit Holds page, click New Hold. 2. For each page of the wizard, set the options that you want. Lit Holds Options Table 1 General page See General Info Options on page 392. Approval page See Approval Options on page 392. IT Staff page See IT Staff Options on page 393. People page See People Options on page 393. Email Notifications page See Email Notifications Options on page 394. Documents page See Documents Options on page 396. Interview Questions page See Interview Questions Options on page 396. Summary page See Summary on page 397. 3. Click Next. 4. On the Summary page, Click Save to save the hold. 5. In the Success dialog box, click Hold List. 6. In the Hold List view, select the litigation hold that you just created. 7. Click (Approve Hold). Managing Litigation Holds Creating a Litigation Hold | 391 General Info Options The following table describes the options that you can set on the General Info page of the Litigation Hold Wizard. See Creating a Litigation Hold on page 391. General Info Page Options Option Description Name (Required) Sets the name of the litigation hold. Description Describes the litigation hold. Requested By Sets the name of the person who requested the litigation hold. Force Time Constraints Defines the time period associated with the hold. When the time period expires, the system sends hold termination emails, and the hold is closed. Note: You cannot edit a litigation hold that has this option checked. However, you can edit the people associated with the hold, as they change. Be sure you enable all email templates when you create the hold. If you fail to do so, any email templates that are listed as not required, cannot be enabled after you create the hold. Start Date (Required) Specifies the start date of the litigation hold. End Date Specifies the end date of the litigation hold. Project (Required) Sets the project that is associated with the litigation hold. Approval Options The following table describes the options that you can set on the Approver page of the Litigation Hold Wizard. Users who have rights to approve holds assigned to them in the projects are displayed on this page. During the litigation hold creation process approvers are selected from the User List page. Only the users with Administrators, Project Manager, Project Administrator, LitHold Managers, Approve Lit Holds rights in your program database are loaded into the Approval page of the Hold Creation Wizard. 
About Litigation Hold Roles (page 378) If you check Any Approver, it only takes one of the approvers in the table list to approve the litigation hold. See Creating a Litigation Hold on page 391. Approval Page Options Option Description Any Approver (Default) Any valid user that is listed in the table can approve the litigation hold. All Selected Selects all usernames in the Approval table list, meaning that all users must approve the litigation hold. Managing Litigation Holds Creating a Litigation Hold | 392 Approval Page Options Option Description Send Acceptance Emails to People and IT Staff on hold approval. After the hold is approved, acceptance notification e-mails are sent to the IT staff and the people that are associated with the hold. Send Approval Notifications Approval notification e-mails are sent to the approvers that are selected in the Approval table list. Send Approval Reminder every x days After a specified number of days, the approval notification e-mail is resent to the approvers that are selected in the Approval table list. IT Staff Options The following table describes the options that you can set on the IT Staff page of the Litigation Hold Wizard. The litigation hold does not go into effect until all selected IT Staff have accepted it. When acceptance is complete, the reminder emails are cancelled, but aging notifications continue. See Creating a Litigation Hold on page 391. IT Staff Page Options Option Description Add IT Staff members to the litigation hold. (Add New Staff Member) Filter for or select from IT Staff that has been pre-configured. See See Managing the IT Staff on page 380. Send Aging Acknowledgement every x Days Sends the litigation hold Aging Acknowledgment email to one or more IT Staff members that are checked in the table list, after so many days. Send Aging Reminder every x Days Resends the litigation hold Aging Reminder email to one or more IT Staff members that are listed in the table, after so many days. People Options The following table describes the options that you can set on the People page of the Litigation Hold Wizard. Multiple people can be involved in a litigation hold. However, only people that are associated with the selected project are displayed in the list. You can also specify people within a hold to be excluded from the interview or escalation policies. Managing Litigation Holds Creating a Litigation Hold | 393 See Creating a Litigation Hold on page 391. People Page Options Option Description Display Person data sources on acceptance page. Shows the sources of the person’s data on the Acceptance page. Send Hold Acknowledgement every x Days Sends the litigation hold Acknowledgment email to one or more people that are checked in the table list, after a specified number of days. This email continues to be sent until it is acknowledged. Send Hold Reminder every x Days Re-sends the litigation hold Reminder email to one or more people that are listed in the table, after a specified number of days. Escalations These settings allows you to set two levels of escalation policies for person hold acceptance. Stage One: If a person doesn't accept the hold within a number of specified days, the first escalation email is sent to their manager. Note: Stage One escalation requires Active Directory to be configured previously. In the Manager field of the Active Directory Account Screen, enter the manager that you want to be notified for the first escalation email. 
Stage Two: After a specified number of days, the next escalation is sent to the specified email address. Repeat: Both of these escalations can be set to repeat if necessary. People within a hold can be excluded from the escalation policy if needed. Email Notifications Options The following table describes the options that you can set on the Email Notifications page of the Litigation Hold Wizard. The Required section of the Email Notifications page records the notifications that you have completed. The Not Required section lists the notifications that are not necessary to complete. See Creating a Litigation Hold on page 391. General Email Notification Page Options Option Description Load from Template Lets you select an email template for the associated tab. See About Managing Email Templates for Use in Litigation Holds on page 382. Load Loads the selected email template into the Edit tab. Preview Opens the subject and message body of the email in a preview frame. Edit Lets you edit the subject and message body of the email. You can use the HTML text editor to format the text as you would like to have it displayed. You can also copy HTML text from another source. Managing Litigation Holds Creating a Litigation Hold | 394 General Email Notification Page Options Option Description View Lets you view the email message with any macro fields populated with data. The macro field data comes from the information that you entered on the wizard pages prior to the Email Notifications page. For example, the macro field [Hold Name] retrieves the name that was entered on the General page of the Hold Creation Wizard. In the predefined email templates that come with the system, some emails have “XXXX” or “YYYY” in the message body. When a recipient receives the email, these fields appear as requested data that a recipient must fill in with the appropriate information. Macros Lets you add, edit, or delete macro fields in the message body of the email. You can edit the macro fields inserted into the message body by highlighting the text between the brackets and changing the text. The following macros are available for the email Hold Name -Lets you insert the name of the hold. Hold Requestor - Lets you insert the name of the person who requested the hold. Time Frame Start - Lets you insert the date when the hold starts. Time Frame End - Lets you insert the date when the hold ends. Hold Person List - Lets you insert a list of people for the hold. This list must be separated with commas. Hold Description - Lets you insert the description of the hold. Project Name - Lets you insert the name of the associated project. View Hold Link - Lets you insert a Hold Link hyperlink into the email. The Hold Link allows recipients of the email to view a list of active holds. Send Test Email to You can send a test email so that you can verify the email notification. Enter a single email address of a recipient, and then click Send Test Email. Add CC: You can add additional email address of people other than the specified people and IT staff that you would like to receive the email. Email Notification Page Options Option Description Approval tab Lets you edit the Approval email notification that is sent to users who are identified on the Approval list. Person Acceptance tab Lets you edit the Person Acceptance email that is sent to inform associated people of the litigation hold and have them accept the hold. 
Person Reminder tab Lets you edit the Person Reminder email that is sent to remind people of their involvement with the hold. Person Termination tab Lets you edit the Person Termination email that is sent to inform people that the hold is complete and closed. Managing Litigation Holds Creating a Litigation Hold | 395 Email Notification Page Options Option Description IT Acceptance tab Lets you edit the IT Staff Acceptance email that is sent to inform associated IT Staff members of the litigation hold and have them accept the hold. IT Reminder tab Lets you edit the IT Staff Reminder email that is sent to remind IT Staff members of their involvement with the hold. IT Termination tab Lets you edit the Person Termination email that is sent to inform people that the hold is complete and closed. Escalation Stage One Escalation Stage Two You can set two levels of escalation policies for person hold acceptance. Stage One: If a person doesn't accept the hold within a number of specified days, the first escalation email is sent to their manager. Note: Stage One escalation requires Active Directory to be configured previously. In the Manager field of the Active Directory Account Screen, enter the manager that you want to be notified for the first escalation email. Stage Two: After a specified number of days, the next escalation is sent to the specified email address. These tabs let you configure the Escalation email that is sent to inform managers of the escalation. Documents Options The following table describes the options that you can set on the Documents page of the Litigation Hold Wizard. Documents are any supporting documents that you want to attach to the litigation hold notification emails. The document files are stored on the hard drive of the Hold Manager who creates the hold. Attached documents have read-only permissions. See Creating a Litigation Hold on page 391. Documents Page Options Option Description (Add supporting files button) Lets you add files in support of the litigation hold and have them categorized and distributed by Notice - Person or Aging - IT Staff. Documents that you add to a litigation hold are visible to the email recipient by way of a link back to the landing page. Description field Lets you double-click the description field of an added file and enter information you want about the file. Delete button Removes the file from the Supporting Documents table list. Interview Questions Options The following table describes the options that you can set on the Interview Questions page of the Litigation Hold Wizard. See Creating a Litigation Hold on page 391. Managing Litigation Holds Creating a Litigation Hold | 396 You can create interview questions here or you can load questions from your templates. When you create interview questions, you have a variety of options on how to configure the questions and answers. See About Interview Question and Answer Types on page 384. Interview Questions Page Options Option Description (Load question from template) Lets you select a previously defined interview question template that has the question set you want. See About Managing Interview Templates for Use in Litigation Holds on page 384. Add a interview question Specifies a question you want to ask recipients. You should enter and add one question at a time. 
For information on how to create and format questions and answers, see the following:
About Interview Question and Answer Types (page 384)
Creating an Interview Template for Use in Litigation Holds (page 386)
Delete button: Removes the highlighted question from the list.
Edit button: Edits the highlighted question in the list.
You can select a question and change its order in the list.
Allow Interview Review: Allows recipients to see the interview questions and their answers after they accept the litigation hold notification.
Allow Modification: If you select this option, people can change their answers after the initial interview.

Summary
1. On the Summary page, do one of the following:
Click the icon in the upper-right corner of the General or Approval sections to edit the information you want.
In the left pane of the wizard, click a wizard page name to navigate the wizard pages and edit any information you want. Click Summary in the left pane again to return to the Summary page and activate the Save button.
2. Click Save to save the hold.
3. In the Success dialog box, click Hold List.
4. In the Hold List view, select the litigation hold that you just created.
5. Click Approve Hold.

Managing Existing Litigation Holds

Editing a Litigation Hold
You can open an existing litigation hold to either edit the settings, or to just view the settings.
See Creating a Litigation Hold on page 391.

To edit a litigation hold
1. On the Lit Holds page, highlight a hold and click Edit.
2. Click Next to navigate the pages of the hold so you can review the settings, or make any necessary changes to existing settings.
3. When you have advanced to the Summary page, do one of the following:
Click Cancel if you did not make any changes to the litigation hold settings, or you want to cancel any changes you made to the hold.
Click Save to save the litigation hold settings that you changed.

Activating or Deactivating a Litigation Hold
You can activate or deactivate a litigation hold. Deactivating a hold does not delete the hold; instead, the hold is "turned off" or made inactive, even if it has not yet been approved. If you make the litigation hold inactive, its status is displayed as Not Active in the Lit Hold view. If you make a litigation hold active, the hold's last known status is displayed in the Lit Hold view.
See Creating a Litigation Hold on page 391.

To activate or deactivate a litigation hold
1. On the Lit Holds page, under the Lit Hold tab, select a litigation hold.
2. Click Activate or Deactivate, either to activate or deactivate the litigation hold.
3. In the Confirm Holds dialog, click OK.

Deleting a Litigation Hold
You can delete an existing litigation hold, even if the hold is not active.
See Creating a Litigation Hold on page 391.

To delete a litigation hold
1. On the Lit Holds page, under the Lit Hold tab, select a litigation hold.
2. Click Delete. You can find this icon by the litigation hold and also at the bottom of the task pane.
3. (Optional) Check Keep Archive to keep an archive record of the litigation holds, and remove the holds from the user interface.
4. Click Yes in the Confirm Deletion dialog to confirm the deletion.

Resubmitting a Litigation Hold
You can resubmit a hold. This sets it back to its original state so that all actions must be performed again.
See Creating a Litigation Hold on page 391.
To resubmit a litigation hold
1. On the Lit Holds page, under the Lit Hold tab, select a litigation hold.
2. Click Resubmit Hold at the bottom of the task pane.
3. The Resubmit Hold dialog appears.
Resubmit Hold Dialog
4. Enter the New Hold Name in the field provided.
5. You can check Terminate existing hold and/or Provide new email termination notice.
6. Add your information in the message body. You can format your text with basic word processing commands.
7. Under Macros, find macros to add to the body of your message. These macros include:
Hold Name
Hold Requestor
Time Frame Start
Time Frame End
Hold Person List
Project Name
View Hold Link
8. Click Ok.

Viewing Information About Holds
You can view the overall status, approvals, IT Staff, and people of a selected litigation hold.
See Using the Lit Hold List on page 388.

Viewing the Overall Status of a Litigation Hold
You can view the overall status of a highlighted hold, including the following:
Whether or not it is active
The number of IT Staff and People
The configured time frame
Which actions have been completed, with links for more information
You can refresh the information shown on the tab to check the current status.

About the Approvals Tab
The Approvals tab displays the hold's approval status and approval type. The option Send/Resend All Approval Notices becomes inactive after the hold is approved.

About the People Tab
The People tab displays the list of people that are involved in the litigation hold and the Total, Accepted, and Pending counts of all the people. The sent, visited, and accepted status of each person is displayed in a grid. When you highlight a person in the grid, the associated Detail View shows the custodial options and responses to interview questions.

About the IT Staff Tab
The IT Staff tab displays the total, accepted, and pending count of the IT Staff that are listed. The status of Sent, Visited, Accepted, and End Notice is also displayed. When you select an IT staff name, the associated Detail View area is displayed.

About the Hold Event Log for a Litigation Hold
You can use the Hold Event Log to review the events and messages of a selected litigation hold. You can also apply filter options to select the Hold and Event Type. The Log pane displays the type, date and time, initiator, and the message of each log item. Select a type item from the list to view the associated Message.

Viewing the Email Distribution History of a Litigation Hold
You can view the history of emails that were sent, their type, date sent, by whom, recipient count, and subject. You can also use filtering to select a hold and type of email.
See About the Hold Event Log for a Litigation Hold on page 401.

About Viewing Litigation Hold Reports
You can use the Reports in the Hold List to generate various predefined reports with summary or detailed information about a particular litigation hold. For most reports, you can view the report and then export the report to the following file formats:
Portable Document Format (PDF)
Comma delimited (CSV)
Excel 97-2003
Rich Text Format (RTF)
Tagged Image File Format (TIFF)
Web page archive format (.mhtml)
You can also print the report. For some reports, you can also generate an Excel file that has tabs for the different sets of data.
You can view the following types of reports for a given litigation hold.
Available Litigation Hold Reports
Report: More information
Holds Summary: You can generate the Holds Summary report to display an overview of all litigation holds, all active holds, and all inactive holds. These reports list each hold's approval and acceptance status, associated project, and when it was created. Also included are the number of people and IT Staff associated with a litigation hold, and the current stage of approval.
Hold Details: You can generate the Hold Details report to display a detailed overview of a litigation hold's approvers, people, IT Staff, any associated document files, and interview questions. Also included are the start and end dates of the hold, the priority of the hold, and a description, if one was entered in the Hold Creation Wizard.
Interview Responses: You can generate the Interview Responses report to display the answers to interview questions that are associated with a litigation hold.
Person Details: You can generate a detail report of the people's hold information.
Selected Project's Holds: You can generate a summary of all holds in the selected project.

Part 6 Loading Data
This part describes how to load data and includes the following sections:
Importing Data (page 404)
Using the Evidence Wizard (page 405)
Importing Evidence (page 414)
Cluster Analysis (page 417)
Editing Evidence (page 423)

Chapter 33 Introduction to Loading Data

Importing Data
This document will help you import data into your project. You create projects in order to organize data. Data can be added to projects in the form of native files, such as DOC, PDF, XLS, PPT, and PST files, or as evidence images, such as AD1, E01, and AFF files.
To manage evidence, administrators and users with the Create/Edit Projects permission can do the following:
Add evidence items to a project
View properties about evidence items in a project
Edit properties about evidence items in a project
Associate people to evidence items in a project
Note: You will normally want to have people created and selected before you process evidence.
See About Associating People with Evidence on page 407.
See the following chapters for more information:

To import data
1. Log in as a project manager.
2. Click the Add Data button next to the project in the Project List panel.
3. In the Add Data dialog, select one of the methods by which you want to import data. The following methods are available:
Evidence (wizard): See Using the Evidence Wizard on page 405.
Job (Resolution1 applications): See About Jobs on page 447.
Import: See Importing Evidence on page 414.
Cluster Analysis: See Using Cluster Analysis on page 417.

Chapter 34 Using the Evidence Wizard

Using the Evidence Wizard
When you add evidence to a project, you can use the Add Evidence Wizard to specify the data that you want to add. You specify whether to add parent folders or individual files.
Note: If you activated Cluster Analysis as a processing option when you created the project, cluster analysis will automatically run after processing data.
You select sets of data that are called "evidence items." It is useful to organize data into evidence items because each evidence item can be associated with a unique person. For example, you could have a parent folder with a set of subfolders.
\\10.10.3.39\EvidenceSource\
\\10.10.3.39\EvidenceSource\John Smith
\\10.10.3.39\EvidenceSource\Bobby Jones
\\10.10.3.39\EvidenceSource\Samuel Johnson
\\10.10.3.39\EvidenceSource\Edward Peterson
\\10.10.3.39\EvidenceSource\Jeremy Lane
You could import the parent \\10.10.3.39\EvidenceSource\ as one evidence item. If you associated a person to it, all files under the parent would have the same person. On the other hand, you could have each subfolder be its own evidence item, and then you could associate a unique person to each item.
An evidence item can either be a folder or a single file. If the item is a folder, it can have other subfolders, but they would be included in the item.
When you use the Evidence Wizard to import evidence, you have options that will determine how the evidence is organized in evidence items.
When you add evidence, you select from the following types of files.

Evidence File Types
File Type: Description
Evidence Images: You can add AD1, E01, or AFF evidence image files.
Native Files: You can add native files, such as PDF, JPG, DOC, PPT, PST, XLSX, and so on.

When you add evidence, you also select one of the following import methods.

Import Methods
Method: Description
CSV Import: This method lets you create and import a CSV file that lists multiple paths of evidence and optionally automatically creates people and associates each evidence item with a person. Like the other methods, you specify whether the parent folder contains native files or image files. See Using the CSV Import Method for Importing Evidence on page 407. This is similar to adding people by importing a file. See the Project Manager Guide for more information on adding people by importing a file.
Immediate Children: This method takes the immediate subfolders of the specified path and imports each of those subfolders' content as a unique evidence item. You can automatically create a person based on the child folder's name (if the child folder has a first and last name separated by a space) and have it associated with the data in the subfolder. See Using the Immediate Children Method for Importing on page 409. Like the other methods, you specify whether the parent folder contains native files or image files.
Folder Import: This method lets you select a parent folder, and all data in that folder will be imported. You specify that the folder contains either native files (JPG, PPT) or image files (AD1, E01, AFF). A parent folder can have both subfolders and files. Using this method, each parent folder that you import is its own evidence item and can be associated with one person. For example, if a parent folder had several AD1 files, all data from each AD1 file can have one associated person. Likewise, if a parent folder has several native files, all of the contents of that parent folder can have one associated person.
Individual File(s): This method lets you select individual files to import. You specify that these individual files are either native files (JPG, PPT) or image files (AD1, E01, AFF). Using this method, each individual file that you import is its own evidence item and can be associated with a person. For example, all data from an AD1 file can have an associated person. Likewise, each PDF or JPG can have its own associated person.
Note: The source network share permissions are defined by the administrator credentials.
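If you plan to use the CSV Import method described in the table above, a small script can build the file and reduce path typos. The following is a purely illustrative sketch, not part of the product: it walks the immediate child folders of a parent evidence folder (the UNC path shown is hypothetical) and writes a person,path CSV with a header row, matching the layout of CSV Example 1 later in this chapter. Review the generated person names before importing.

```python
import csv
from pathlib import Path

# Hypothetical parent folder; in practice this is a share the application can reach.
PARENT = Path(r"\\10.10.3.39\EvidenceSource")
OUTPUT = "evidence_import.csv"

def folder_to_person(name):
    """Derive a person name from a child folder name (e.g. 'JSmith' or 'John Smith')."""
    return name.strip()

with open(OUTPUT, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["People", "Paths"])  # header row; select "First row contains headers"
    for child in sorted(PARENT.iterdir()):
        if child.is_dir():
            writer.writerow([folder_to_person(child.name), str(child)])

print(f"Wrote {OUTPUT}")
```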
Using the Evidence Wizard Using the Evidence Wizard | 406 About Associating People with Evidence When you add evidence items to a project, you can specify people, or custodians, that are associated with the evidence. These custodians are listed as People on the Data Sources tab. In the Add Evidence Wizard, after specifying the evidence that you want to add, you can then associate that evidence to a person. You can select an existing person or create a new person. Important: If you want to select an existing Person, that person must already be associated to the project. You can either do that for the project on the Home page > People tab, or you can do it on the Data Sources page > People tab. You can create people in the following ways: On the Data Sources tab before creating a project. See the Data Sources chapter. When adding evidence to a project within the Add Evidence Wizard. See Adding Evidence to a Project Using the Evidence Wizard on page 411. On the People tab on the Home page for a project that has already been created. About Creating People when Adding Evidence Items In the Add Evidence Wizard, you can create people as you add evidence. There are three ways you can create people while adding evidence to a project: Using a CSV Evidence Import. See Using the CSV Import Method for Importing Evidence on page 407. Importing immediate children. See Using the Immediate Children Method for Importing on page 409. Adding a person in the Add Evidence Wizard. You can select a person from the drop-down in the wizard or enter a new person name. See the Project Manager Guide for more information on creating people. Using the CSV Import Method for Importing Evidence When specifying evidence to import in the Add Evidence Wizard, you can use one of two general options: Manually browse to all evidence folders and files. Specify folders, files, and people in a CSV file. There are several benefits of using a CSV file: You can more easily and accurately plan for all of the evidence items to be included in a project by including all sources of evidence in a single file. You can more easily and accurately make sure that you add all of the evidence items to be included in a project. If you have multiple folders or files, it is quicker to enter all of the paths in the CSV file than to browse to each one in the wizard. If you are going to specify people, you can specify the person for each evidence item. This will automatically add those people to the system rather than having to manually add each person. Using the Evidence Wizard Using the Evidence Wizard | 407 When using a CSV, each path or file that you specify will be its own evidence item. The benefit of having multiple items is that each item can have its own associated person. This is in contrast with the Folder Import method, where only one person can be associated with all data under that folder. Specifying people is not required. However, if you do not specify people, when the data is imported, no people are created or associated with evidence items. Person data will not be usable in Project Review. See the Project Manager Guide for information on associating a person to an evidence item. If you do specify people in the CSV file, you use the first column to specify the person’s name and the second column for the path. If you do not specify people, you will only use one column for paths. When you load the CSV file in the Add Evidence Wizard, you will specify that the first column does not contain people’s names. 
That way, the wizard imports the first column as paths and not people. If you do specify people, they can be in one of two formats: A single name or text string with no spaces For example, JSmith or John_Smith First and last name separated by a space For example, John Smith or Bill Jones In the CSV file, you can optionally have column headers. You will specify in the wizard whether it should use the first row as data or ignore the first row as headers. CSV Example 1 This example includes headers and people. In the wizard, you select both First row contains headers and First column contains people names check boxes. When the data is imported, the people are created and associated to the project and the appropriate evidence item. People, Paths JSmith,\\10.10.3.39\EvidenceSource\JSmith JSmith,\\10.10.3.39\EvidenceSource\Sales\Projections.xlsx Bill Jones,\\10.10.3.39\EvidenceSource\BJones Sarah Johnson,\\10.10.3.39\EvidenceSource\SJohnson Evan_Peterson,\\10.10.3.39\EvidenceSource\EPeterson Evan_Peterson,\\10.10.3.39\EvidenceSource\HR Jill Lane,\\10.10.3.39\EvidenceSource\JLane Jill Lane,\\10.10.3.39\EvidenceSource\Marketing This will import any individual files that are specified as well as all of the files (and additional subfolders) under a listed subfolder. Using the Evidence Wizard Using the Evidence Wizard | 408 You may normally use the same naming convention for people. This example shows different conventions simply as examples. CSV Example 2 This example does not include headers or people. In the wizard, you clear both First row contains headers and First column contains people names check boxes. When the data is imported, no people are created or associated with evidence items. \\10.10.3.39\EvidenceSource\JSmith \\10.10.3.39\EvidenceSource\Sales\Projections.xlsx \\10.10.3.39\EvidenceSource\BJones \\10.10.3.39\EvidenceSource\SJohnson \\10.10.3.39\EvidenceSource\EPeterson \\10.10.3.39\EvidenceSource\HR \\10.10.3.39\EvidenceSource\JLane \\10.10.3.39\EvidenceSource\Marketing Using the Immediate Children Method for Importing If you have a parent folder that has children subfolders, when importing it through the Add Evidence Wizard, you can use one of three methods: Folder Import Immediate Children CSV Import See Using the CSV Import Method for Importing Evidence on page 407. When using the Immediate Children method, each child subfolder of the parent folder will be its own evidence item. The benefit of having multiple evidence items is that each item can have its own associated person. This is in contrast with the Folder Import method, where all data under that folder is a single evidence item with only one possible person associated with it. Specifying people is not required. However, if you do not specify people, when the data is imported, no people are created or associated with evidence items. Person data will not be usable in Project Review. See the Project Manager Guide for more information on associating a person to evidence. When you select a parent folder in the Add Evidence Wizard, you select whether or not to specify people. If you do specify people, the names of people are based on the name of the child folders. 
Imported names of people can be imported in one of two formats: A single name or text string with no spaces For example, JSmith or John_Smith Using the Evidence Wizard Using the Evidence Wizard | 409 First and last name separated by a space For example, John Smith or Bill Jones For example, suppose a parent folder had four subfolders, each containing data from a different user. Using the Immediate Children method, each subfolder would be imported as a unique evidence item and the subfolder name could be the associated person. \Userdata\ (parent folder that is selected) \Userdata\lNewstead (unique evidence item with lNewstead as a person) \Userdata\KHetfield (unique evidence item with KHetfield as a person) \Userdata\James Ulrich (unique evidence item with James Ulrich as a person) \Userdata\Jill_Hammett (unique evidence item with Jill_Hammett as a person) Note: In the Add Evidence Wizard, you can manually rename the people if needed. The child folder may be a parent folder itself, but anything under it would be one evidence item. This method is similar to the CSV Import method in that it automatically creates people and associates them to evidence items. The difference is that when using this method, everything is configured in the wizard and not in an external CSV file. Using the Evidence Wizard Using the Evidence Wizard | 410 Adding Evidence to a Project Using the Evidence Wizard You can import evidence for projects for which you have permissions. When you add evidence, it is processed so that it can be reviewed in Project Review. Some data cannot be changed after it has been processed. Before adding and processing evidence, do the following: Configure the Processing Options the way you want them. See the Admin Guide for more information on default processing options. Plan whether or not you want to specify people. See the Project Manager Guide for more information on associating a person to evidence. Unless you are importing people as part of the evidence, you must have people already associated with the project. See the Project Manager Guide for more information on creating people. Note: Deduplication can only occur with evidence brought into the application using evidence processing. Deduplication cannot be used on data that is imported. To import evidence for a project 1. In the project list, click (add evidence) in the project that you want to add evidence to. 2. Select Evidence. 3. In the Add Evidence Wizard, select the Evidence Data Type and the Import Method. See Using the Evidence Wizard on page 405. 4. Click Next. 5. Select the evidence folder or files that you want to import. This screen will differ depending on the Import Method that you selected. If you are using the CSV Import method, do the following: 5a. If the CSV file uses the first row as headers rather than folder paths, select the First row contains headers check box, otherwise, clear it. If the CSV file uses the first column to specify people, select the First column contains people’s names check box, otherwise, clear it. See Using the CSV Import Method for Importing Evidence on page 407.  Click Browse.  Browse to the CSV file and click OK. The CSV data is imported based on the check box settings. Confirm that the people and evidence paths are correct. You can edit any information in the list. If the wizard can’t validate something in the CSV, it will highlight the item in red and place a red box around the problem value. If a new person will be created, it will be designated by 5b. . 
If you are using the Immediate Children method, do the following:  If you want to automatically create people, select Sub folders are people’s names, otherwise, clear it. See Using the Immediate Children Method for Importing on page 409.  Click Browse.  Enter the IP address of the server where the evidence files are located and click Go. Using the Evidence Wizard Adding Evidence to a Project Using the Evidence Wizard | 411 For example, 10.10.2.29 to the parent folder and click Select. Each child folder is listed as a unique evidence item. If you selected to create people, they are listed as well. Confirm that the people and evidence paths are correct. You can edit any information in the list. If the wizard can’t validate something, it will highlight the item in red and place a red box around the problem value.  Browse If a new person will be created, it will be designated by 5c. 6. . If you are using the Folder Input or Individual Files method, do the following:  Click Browse.  Enter the IP address of the server where the evidence files are located and click Go. For example, 10.10.2.29  Expand the folders in the left pane to browse the server.  In the right pane highlight the parent folder or file and click Select. If you are selecting files, you can use Ctrl-click or Shift-click to select multiple files in one folder. The folder or file is listed as a unique evidence item. If you want to specify a person to be associated with this evidence, select one from the Person Name drop-down list or type in a new person name to be added. See About Associating People with Evidence on page 407. If you enter a new person that will be created, it will be designated by . You can also edit a person’s name if it was imported. 7. Specify a Timezone. From the Timezone drop-down list, select a time zone. See Evidence Time Zone Setting on page 413. 8. (Optional) Enter a Description. This is used as a short description that is displayed with each item in the Evidence tab. For example, “Imported from Filename.csv” or “Children of path”. This can be added or edited later in the Evidence tab. 9. (Optional) If you need to delete an evidence item, click the for the item. 10. Click Next. 11. In the Evidence to be Added and Processed screen, you can view the evidence that you selected so far. From this screen, you can perform one of the following actions: Add More: Click this button to return to the Add Evidence screen. Add Evidence and Process: Click this button to add and process the evidence listed. When you are done, you are returned to the project list. After a few moments, the job will start and the project status should change to Processing. 12. If you need to manually update the list or status, click Refresh. 13. When the evidence import is completed, you can view the evidence items in the Evidence and People labels. Using the Evidence Wizard Adding Evidence to a Project Using the Evidence Wizard | 412 Evidence Time Zone Setting Because of worldwide differences in the time zone implementation and Daylight Savings Time, you select a time zone when you add an evidence item to a project. In a FAT volume, times are stored in a localized format according to the time zone information the operating system has at the time the entry is stored. For example, if the actual date is Jan 1, 2005, and the time is 1:00 p.m. on the East Coast, the time would be stored as 1:00 p.m. with no adjustment made for relevance to Greenwich Mean Time (GMT). 
Anytime this file time is displayed, it is not adjusted for time zone offset prior to being displayed. If the same file is then stored on an NTFS volume, an adjustment is made to GMT according to the settings of the computer storing the file. For example, if the computer has a time zone setting of -5:00 from GMT, this file time is advanced 5 hours to 6:00 p.m. GMT and stored in this format. Anytime this file time is displayed, it is adjusted for time zone offset prior to being displayed. For proper time analysis to occur, it is necessary to bring all times and their corresponding dates into a single format for comparison. When processing a FAT volume, you select a time zone and indicate whether or not Daylight Savings Time was being used. If the volume (such as removable media) does not contain time zone information, select a time zone based on other associated computers. If those do not exist, select your local time zone settings. With this information, the system creates the project database, converts all FAT times to GMT, and stores them as such. Adjustments are made for each entry depending on historical use data and Daylight Savings Time. Every NTFS volume will have the times stored with no adjustment made. With all times stored in a comparable manner, you need only set your local machine to the same time and date settings as the project evidence to correctly display all dates and times. Adding Evidence to a Project Using the Evidence Wizard | 413 Chapter 35 Importing Evidence About Importing Evidence Using Import As an Administrator or Project Manager with the Create/Edit Projects permissions, you can import evidence for a project. You import evidence by using a load file, which allows you to import metadata and physical files, such as native, image, and/or text files that were obtained from another source, such as a scanning program or another processing program. You can import the following types of load files: Summation DII - A proprietary file type from Summation. See Data Loading Requirements on page 426. Generic - A delimited file type, such as a CSV file. Concordance/Relativity - A delimited DAT file type that has established guidelines as to what delimiter should be used in the fields. This file should have a corresponding LFP or OPT image file to import. Transcripts and exhibits are uploaded from Project Review and not from the Import dialog. See the Project Manager Guide for more information on how to upload transcripts and exhibits. About Mapping Field Values When importing, you must specify which import file fields should be mapped to database fields. Mapping the fields puts the correct information about the document in the correct columns in Project Review. After clicking Map Fields, a process runs that checks the imported load file against existing project fields. Most of the import file fields will automatically be mapped for you. Any fields that could not be automatically mapped are flagged as needing to be mapped. Note: If you need custom fields, you must create them in the Custom Fields tab on the Home page before you can map to those fields during the import. If the custom names are the same, they will be automatically mapped as well. Any errors that have to be corrected before the file can be imported are reported at this time. When importing a CSV or DAT load file that is missing the unique identifier used to map to the DocID field, an error message is displayed.
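As a rough illustration of the matching behavior described above (not the application's actual code), the following Python sketch pairs load file columns with identically named project fields and flags the remainder for manual mapping; the field names used here are hypothetical.

# Illustrative only: auto-map load file columns to project fields by name and
# flag any column that needs manual mapping. Field names are hypothetical.
PROJECT_FIELDS = {"DOCID", "TITLE", "AUTHOR", "DOCDATE", "BODY"}
CUSTOM_FIELDS = {"DEPARTMENT"}  # created beforehand on the Custom Fields tab

def map_fields(load_file_columns):
    known = PROJECT_FIELDS | CUSTOM_FIELDS
    mapped, unmapped = {}, []
    for column in load_file_columns:
        if column.upper() in known:
            mapped[column] = column.upper()  # automatic match by name
        else:
            unmapped.append(column)          # flagged as needing to be mapped
    return mapped, unmapped

mapped, unmapped = map_fields(["DocID", "Title", "Department", "CustodianNotes"])
print(mapped)    # {'DocID': 'DOCID', 'Title': 'TITLE', 'Department': 'DEPARTMENT'}
print(unmapped)  # ['CustodianNotes']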
Notes: In review, the AttachmentCount value is displayed under the EmailDirectAttachCount column. Importing Evidence About Importing Evidence Using Import | 414 The Importance value is not imported as a text string but is converted and stored in the database as an integer representing a value of either Low, Normal, High, or blank. These values are case sensitive and in the import file must be an exact match. The Sensitivity value is not imported as a text string but is converted and stored in the database as an integer representing a value of either Confidential, Private, Personal, or Normal. These values are case sensitive and in the import file must be an exact match. The Language value is not imported as a text string but is converted and stored in the database as an integer representing one of 67 languages. Body text that is mapped to the Body database field is imported as an email body stream and is viewable in the Natural viewer. When importing all file types, the import Body field is now automatically mapped to the Body database field. Importing Evidence into a Project To import evidence into a project 1. Log into the application as an Administrator or a user with Create/Edit Project rights. 2. In the Project List panel, click Add Evidence 3. Click Import. 4. In the Import dialog, select the file type (EDII, Concordance/Relativity, or Generic ). 5. next to the project. 4a. Enter the location of the file or Browse to the file’s location. 4b. (optional - Available only for Concordance/Relativity) Select the Image Type and enter the location of the file, or Browse to the file’s location. You can choose from the following file options:  OPT - Concordance file type that contains preferences and option settings associated with the files.  LFP - Ipro file type that contains load images and related information. Perform field mapping. Most fields will be automatically mapped. If some fields need to be manually mapped, you will see an orange triangle. 5a. Click Map Fields to map the fields from the load file to the appropriate fields. See About Mapping Field Values on page 414. 5b. To skip any items that do not map, select Skip Unmapped. 5c. To return the fields back to their original state, click Reset. Note: Every time you click the Map Fields button, the fields are reset to their original state. 6. Select the Import Destination. 6a. 7. Choose from one of the following:  Existing Document Group: This option adds the documents to an existing document group. Select the group from the drop-down menu. See the Project Manager Guide for more information on managing document groups.  Create New Document Group: This option adds the documents to a new document group. Enter the name of the group in the field next to this radio button. Select the Import Options for the file. These options will differ depending on whether you select DII, Concordance/Relativity, or Generic. DII Options: Importing Evidence Importing Evidence into a Project | 415  Page Count Follows Doc ID: Select this option if your DII file has an @T value that contains both a Doc ID and a page count.  Import OCR/Full Text: Select this option to import OCR or Full Text documents for each record.  Import Native Documents/Images: Select this option to import Native Documents and Images for each record. Concordance/Relativity, or Generic Options:  First Row Contains Field Names: Select this option if the file being imported contains a row header. 
 Field, Quote, and Multi-Entry Separators: From the pull-down menu, select the symbols for the different separators that the file being imported contains. Each separator value must match the imported file separators exactly or the field being imported for each record is not populated correctly.  Return Placeholder: From the pull-down menu, select the same value contained in the file being imported as a replacement value for carriage return and line feed characters. Each return placeholder value must match the imported file separators. 8. Configure the Date Options. Select the date format from the Date Format drop-down menu. This option allows you to configure what date format appears in the load file system, allowing the system to properly parse the date to store in the database. All dates are stored in the database in a yyy-mm-dd hh:mm:ss format. Select the Load File Time Zone. Choose the time zone that the load file was created in so the date and time values can be converted to a normalized UTC value in the database. See Normalized Time Zones on page 207. 9. Select the Record Handling Options. New Record:  Add: Select to add new records.  Skip: Select to ignore new records. Existing Record: Select to update duplicate records with the record being imported.  Overwrite: Select to overwrite any duplicate records with the record being imported.  Skip: Select to skip any duplicate records.  Update: 10. Validation: This option verifies that: The path information within the load file is correct The records contain the correct fields. For example, the system verifies that the delimiters and fields in a Generic or Concordance/Relativity file are correct. You have all of the physical files (that is, Native, Image, and Text) that are listed in the load file. 11. (optional) Drop DB Indexes. Database indexes improve performance, but slow processing when inserting data. If this option is checked, all of the data reindexes every time more data is loaded. Only select this option if you want to load a large amount of data quickly before data is reviewed. 12. Click Start. Importing Evidence Importing Evidence into a Project | 416 Chapter 36 Analyzing Document Content Using Cluster Analysis About Cluster Analysis You can use Cluster Analysis to group Email Threaded data and Near Duplicate data together for quicker review. Note: If you activated Cluster Analysis as a processing option when you created the project, cluster analysis will automatically run after processing data and will not need to be run manually. Cluster Analysis is performed on the following file types: Documents (including PDFs) Spreadsheets Presentations Emails Cluster Analysis is also performed on text extracted from OCR if the OCR text comes from a PDF. Cluster Analysis cannot be performed on OCR text extracted from a graphic. To perform cluster analysis 1. Load the email thread or near duplicate data using Evidence Processing or Import. 2. On the Home page, in the Project List panel, click the Add Evidence button next to the project. 3. In the Add Data dialog, click Cluster Analysis. 4. Select a threshold to group the documents based on similarity. The default value is 80%. 5. Click Start. The data for the email thread appears in the Conversation tab in Project Review. The data for Near Duplicate appears in the Related tab in Project Review. An entry for cluster analysis will appear in the Work List. 
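The application performs clustering internally, but the role of the similarity threshold can be illustrated with a small sketch. The following Python example is an approximation, not the product's algorithm: it groups documents whose cosine similarity over word counts meets an 80% threshold, and the sample documents and the short noise-word list are invented for the example.

# Illustrative sketch (not the application's algorithm): group documents whose
# cosine similarity meets a threshold, e.g. the default 80%.
from collections import Counter
import math

NOISE = {"a", "and", "the", "of", "to", "or", "if"}  # small sample of noise words

def vector(text):
    return Counter(w for w in text.lower().split() if w not in NOISE)

def similarity(a, b):
    va, vb = vector(a), vector(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

docs = {
    "DOC001": "quarterly sales report for the northern region",
    "DOC002": "quarterly sales report for the southern region",
    "DOC003": "arctic oil drilling operation safety briefing",
}
threshold = 0.80
pairs = [(x, y) for x in docs for y in docs
         if x < y and similarity(docs[x], docs[y]) >= threshold]
print(pairs)  # DOC001 and DOC002 group together; DOC003 stays unclustered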
Words Excluded from Cluster Analysis Processing Noise words, such as “if,” “and,” “or,” are excluded from Cluster Analysis processing. The following words are excluded in the processing: a, able, about, across, after, ain't, all, almost, also, am, among, an, and, any, are, aren't, as, at, be, because, been, but, by, can, can't, cannot, could, could've, couldn't, dear, did, didn't, do, does, doesn't, don't, either, else, Analyzing Document Content Using Cluster Analysis | 417 ever, every, for, from, get, got, had, hadn't, has, hasn't, have, haven't, he, her, hers, him, his, how, however, i, if, in, into, is, isn't, it, it's, its, just, least, let, like, likely, may, me, might, most, must, my, neither, no, nor, not, of, off, often, on, only, or, other, our, own, rather, said, say, says, she, should, shouldn't, since, so, some, than, that, the, their, them, then, there, these, they, they're, this, tis, to, too, twas, us, wants, was, wasn't, we, we're, we've, were, weren't, what, when, where, which, while, who, whom, why, will, with, would, would've, wouldn't, yet, you, you'd, you'll, you're, you've, your Filtering Documents by Cluster Topic Documents processed with Cluster Analysis can be filtered by the content of the documents in the evidence. The Cluster Topic filter is created in Review under the Document Contents filter from data processed with Cluster Analysis. Data included in the Cluster Topic is taken from the following types of documents: Word documents and other text documents, spreadsheets, emails, and presentations. In order for the application to filter the data with the Cluster Topic filter, the following must occur: Prerequisites How for Cluster Topic (page 418) Cluster Topic Works (page 418) Filtering with Cluster Topic (page 419) Considerations of Cluster Topic (page 419) Prerequisites for Cluster Topic Before Cluster Topic filter facets can be created, the data in the project must be processed by Cluster Analysis. The data can be processed automatically when Cluster Analysis is selected in the Processing options or you can process the data manually by performing Cluster Analysis in the Add Evidence dialog. Evidence Processing and Deduplication Options (page 209) How Cluster Topic Works The application uses an algorithm to cluster the data. The algorithm accomplishes this by creating an initial set of cluster centers called pivots. The pivots are created by sampling documents that are dissimilar in content. For example, a pivot may be created by sampling one document that may contain information about children’s books and sampling another document that may contain information about an oil drilling operation in the Arctic. Once this initial set of pivots is created, the algorithm examines the entire data set to locate documents that contain content that might match the pivot’s perimeters. The algorithm continues to create pivots and clusters documents around the pivots. As more data is added to the project and processed, the algorithm uses the additional data to create more clusters. Word frequency or occurrence count is used by the algorithm to determine the importance of content within the data set. Noise words that are excluded from Cluster Analysis processing are also not included in the Cluster Topic pivots or clusters. Analyzing Document Content Using Cluster Analysis | 418 Filtering with Cluster Topic Once data has been processed by Cluster Analysis and facets created under the Cluster Topic filter, you can filter the data by these facets. 
Cluster Topic Filters The topics of the facets available are cluster terms created. Documents containing these terms are included in the cluster and are displayed when the filter is applied. Topics are comprised of two word phrases that occur in the documents. This is to make the topic more legible. The UNCLUSTERED facet contains any documents that are not included under a Cluster Topic filter. For more information, see Filtering Data in Case Review in the Reviewer Guide. Considerations of Cluster Topic You need to aware the following considerations when examining the Cluster Topic filters: Not all data will be grouped into clusters at once. The application creates clusters in an incremental fashion in order to return results as quickly as possible. Since the application is continually creating clusters, the Cluster Topic facets are continually updated. Duplicate documents are clustered together as they match a specific cluster. However, if a project is particularly large, duplicate documents may not be included as part of any cluster. This is to avoid performance issues. You can examine any duplicate documents or any documents not included in a cluster by applying the UNCLUSTERED facet of the Cluster Topic filter. Analyzing Document Content Using Cluster Analysis | 419 Using Entity Extraction About Entity Extraction You can extract entity data from the content of files in your evidence and then view those entities. You can extract the following types of entity data: Credit Card Numbers Email Addresses People Phone Numbers Social Security Numbers The data that is extracted is from the body of documents, not the meta data. For example, email addresses that are in the To: or From: fields in emails are already extracted as meta data and available for filtering. This option will extract email addresses that are contained in the body text of an email. Using entity extraction is a two-step process: 1. Process the data with the Entity Extraction processing options enabled. You can select which types of data to extract. 2. View the extracted entities in Review. The following tables provides details about the type of data that is identified and extracted: Type Credit Card Numbers Examples Numbers in the following formats will be extracted as credit card numbers: 16-digit numbers used by VISA, MasterCard, and Discover in the following formats. For example,  1234-5678-9012-3456 (segmented by dashes)  1234 5678 9012 3456 (segmented by spaces) Not:  1234567890123456 (no segments)  12345678-90123456 (other segments) 15-digit numbers used by American Express in the following formats. For example,  1234-5678-9012-345 (segmented by dashes)  1234 5678 9012 345 (segmented by spaces) Notes: Other formats, such as 14-digit Diners Club numbers, will not be extracted as credit card numbers Analyzing Document Content Using Entity Extraction | 420 Type Email Addresses Examples Text in standard email format, such as jsmith@yahoo.com will be extracted. Note: Email addresses that are in the To: or From: fields in emails are already extracted as meta data and available for filtering. This option will extract email addresses that are contained in the body text of an email. People Text that is in the form of proper names will be extracted as people. Proper names in the content are compared against personal names from 1880 - 2013 U.S. census data in order to validate names. 
Type Phone Numbers Examples Numbers in the following formats will be extracted as phone numbers: Standard 7-digit For example:  123-4567  123.4567  123 4567 Not: 1234567 (not segmented) Standard 10-digit For example:  (123)456-7890  (123)456 7890  (123) 456-7809  (123) 456.7809  +1 (123) 456.7809  123 456 7809 Not 1234567890 (not segmented) Note: A leading 1, for long-distance or 001 for international, is not included in the extraction, however, a +1 is. Analyzing Document Content Using Entity Extraction | 421 Type Examples International Some international formats are extracted, for example,  +12-34-567-8901  +12 34 567 8901  +12-34-5678-9012  +12 34 5678 9012 Not 12345678901 (not segmented) Other international formats are not extracted, for example,  123-45678  (10) 69445464  07700 954 321  (0295) 416,72,16 Notes: Be aware that you may get some false positives. For example, a credit number 5105-1051-051-5100 may also be extracted as the phone number 510-5100. Type Examples Social Security Numbers Numbers in the following formats will be extracted as Social Security Numbers:   123-45-6789 (segmented by dashes) 123 45 6789 (segmented by spaces) The following will not be extracted as Social Security Numbers:  123456789 (not segmented)  12345-6789 (other segments) Enabling Entity Extraction To enable entity extracting processing options: 1. You enable Entity Extraction when creating a project and configuring processing options. See Evidence Processing and Deduplication Options on page 209. Viewing Entity Extraction Data To view extracted entity data 1. For the project, open Review. 2. In the Facet pane, expand the Document Content node. 3. Expand the Document Content category. 4. Expand a sub-category, such as Credit Card Numbers or Phone Numbers. 5. Apply one or more facets to show the files in the Item List that contain the extracted data. Analyzing Document Content Using Entity Extraction | 422 Chapter 37 Editing Evidence Editing Evidence Items in the Evidence Tab Users with Create/Edit project admin permissions can view and edit evidence for a project using the Evidence tab on the Home page. To edit evidence in the Evidence tab 1. Log in as a user with Create/Edit project admin permissions. 2. Select a project from the Project List panel. 3. Click on the Evidence tab. 4. Select the evidence item you want to edit and click the Edit button. 5. In the External Evidence Details form, edit the desired information. Editing Evidence Editing Evidence Items in the Evidence Tab | 423 Evidence Tab Users with permissions can view information about the evidence that has been added to a project. To view the Evidence tab, users need one of the following permissions: Administrator, Create/Edit Project, or Manage Evidence. Evidence Tab Elements of the Evidence Tab Element Description Filter Options Allows the user to filter the list. Evidence Path List Displays the paths of evidence in the project. Click the column headers to sort by the column. Refreshes the Evidence Path List. Refresh Editing Evidence Evidence Tab | 424 Elements of the Evidence Tab (Continued) Element Description Click to adjust what columns display in the Evidence Path List. Columns External Evidence Details Includes editable information about imported evidence. 
Information includes: The path from which the evidence was imported; a description of the project, if you entered one; the evidence file type; what people were associated with the evidence; who added the evidence; and when the evidence was added. Processing Status Lists any messages that occurred during processing. Editing Evidence Evidence Tab | 425 Chapter 38 Data Loading Requirements This chapter describes the data loading requirements of Resolution1 Platform and Summation and contains the following sections: Document Groups (page 426) Email & eDocs (page 429) Coding (page 431) Related Documents (page 434) Transcripts and Exhibits (page 435) Work Product (page 437) Sample DII Files (page 438) DII Tokens (page 442) Document Groups Note: You can import and display Latin and non-Latin Unicode characters. While the application supports the display of fielded data in either Latin or non-Latin Unicode characters, the modification of fielded data is supported only in Latin Unicode characters. Note: The display of non-Latin Unicode characters does not apply to transcript filenames, since transcript deponents are defined by project users, or work product filenames, which are not displayed in the application. Images The following describes the required and recommended formats for images. Required A DII load file is required to load image documents. Group IV TIFFs: single or multi-page, black and white (or color), compressed images, no DPI minimum. Single-page JPEGs for color images. Data Loading Requirements Document Groups | 426 Full-Text or OCR The following describes the required and recommended formats for full-text or OCR. Required If submitting document level OCR, page breaks should be included between each page of text in the document text file. Failure to insert page breaks will result in a one-page text file for a multi-page document. The ASCII character 12 (decimal) is used for the “Page Break” character. All instances of the character 12 are interpreted as page breaks. Document level OCR or page level OCR. OCR files should be in ANSI or Unicode text file format, with a *.txt extension. A DII load file. Loading Control List (.LST) files are not supported. Recommended OCR text files should be stored in the same directories as image files. Page level OCR is recommended to ensure proper page breaks. DII Load File Format for Image/OCR Note: When selecting the Copy ESI option, the DII and source files must reside in a location accessible by the IEP server; otherwise, import jobs will fail during the Check File process. The following describes the required format for a DII load file to load images and OCR. Required A blank line after each document summary. @T to identify each document summary. @T should equal the beginning Bates number. If OCR is included, then use @FULLTEXT at the beginning of the DII file (@FULLTEXT DOC or @FULLTEXT PAGE). If @FULLTEXT DOC is included, OCR text files are assumed to be in the Image folder location with the same name as the first image (TIFF or JPG) file. If @FULLTEXT PAGE is included, OCR text files are assumed to be in the Image folder location with the same name as the image files (each page should have its own txt file). If the @O token is used, the @FULLTEXT token is not required. If full text is located in a directory other than the images, use @FULLTEXTDIR followed by the directory path.
Data Loading Requirements Document Groups | 427 The page count identifier on the @T line can be interpreted ONLY if it is denoted with a space character. For example: @FULLTEXT PAGE @T AAA0000001 2 @D @I\IMAGES\01\ AAA0000001.TIF AAA0000002.TIF @T AAA0000003 1 @D @I\IMAGES\02\ AAA0000003.TIF Import controls the Page Count Follows DocID option. If this option is deselected, the page count identifier on the @T line would not be recognized. Recommended DII load file names should mirror that of the respective volume (for easy association and identification). @T values (that is, the BegBates) and EndBates should include no more than 50 characters. Non-alphabetical and non-numerical characters should be avoided. Data Loading Requirements Document Groups | 428 Email & eDocs You can host email, email attachments, and eDocs (electronic documents in native format) for review and attorney coding, as well as associated full-text and metadata. It is also possible to include an imaged version (in TIFF format) of the file at loading. A DII load file is required in order to load e-mail and electronic documents. Note: You can import and display of Latin and non-Latin Unicode characters. While the application supports the display of fielded data in either Latin or non-Latin Unicode characters, the modification of fielded data is supported only in Latin Unicode characters. Note: The display of non-Latin Unicode characters does not apply to transcript filenames, since transcript deponents are defined by users, or work product filenames, which are not displayed. General Requirements The following describes the required and recommended formats for DII files that are used to load email, email attachments, and eDocs. A DII load file with a *.dii file extension, using only the tokens, is listed in DII Tokens (page 442). @T to identify each email, email attachment, or eDoc record. @T is the first line for each summary. @T equals the unique Docid for each email, email attachment, or eDoc record. There should be only one @T per record. A blank line between document records. @EATTACH token is required for email attachments and @EDOC for eDocs. These tokens contain a relative path to the native file. @MEDIA is required for email data with a value of eMail or Attachment. For eDocs, the @MEDIA value must be eDoc. @EATTACH is required when @MEDIA has a value of Attachment and is not required when @MEDIA has a value of eMail. To maintain the parent/child relationship between an e-mail and its attachments (family relationships for eDocs), the @PARENTID and @ATTACH tokens are used. To include images along with the native file delivery, use the @D @I tokens at the end of the record. @O token is extended to support loading FullText into eDoc and eMails also. If record has both @O and @EDOC/@EATTACH tokens, FullText is loaded from the file specified by the @O token. If @O token does NOT exist for the record, FullText is extracted from the file specified by the @EDOC/@EATTACH token. @AUTHOR and @ITEMTYPE tokens are NOT supported. Recommended @T values (Begbates/Docid) should include no more than 50 characters. Non-alphabetical and non-numerical characters should be avoided. Specify parent-child relationship in the DII file based on the following rule: Data Loading Requirements Email & eDocs | 429 In the DII file, email attachments should immediately follow the parent record, that is: @T ABC000123 @MEDIA eMail @EMAIL-BODY Please reply with a copy of the completed report. Thanks for your input. 
Beth @EMAIL-END @ATTACH ABC000124; ABC000125 @T ABC000124 @MEDIA Attachment @EATTACH \Native\ABC000124.doc @PARENTID ABC000123 @T ABC000125 @MEDIA Attachment @EATTACH \Native\ABC000125.doc @PARENTID ABC000123 Data Loading Requirements Email & eDocs | 430 Coding The following describes the required and recommended formats for coded data. Recommended Coded Use data should be submitted in a delimited text file, with a *.txt extension. the following default delimiter characters: Field Separator | Multi-entry Separator ; Return Placeholder ~ Quote Separator ^ Users can, however, specify any custom character in the Import user interface for any of the separators above. The standard comma and quote characters (‘,’ ‘”’) are accepted. When these characters are present within coded data, different characters must be used as separators. For instance, DOCID|SUMMARY|AUTHOR ^DOJ000001^|^Test “Summary1”^|^Smith, John^ In the above file, Field Separator | Quote Separator ^ field values should have any of the following formats. The date 16th August 2009 can be represented in the load file as: Date 08/16/2009 16/08/2009 20090816 In addition, fuzzy dates are also supported. Currently only DOCDATE field supports fuzzy dates. If a day is fuzzy, then replace dd with 00. If a month is fuzzy, then replace mm with 00. If a year is fuzzy, replace yyyy with 0000. Data Loading Requirements Coding | 431 Format Example mm/dd/yyyy 00/16/2009 (month fuzzy) 08/00/2009 (day fuzzy) 08/16/0000 (year fuzzy) 00/16/0000 (month and year fuzzy) 08/00/0000 (day and year fuzzy) 00/00/2009 (month and day fuzzy) 00/00/0000 (all fuzzy) 08/16/2009 (no fuzzy) yyyymmdd 00000816 (year fuzzy) 20090016 (month fuzzy) 20090800 (day fuzzy) 00000016 (year and month fuzzy) 00000800 (year and day fuzzy) 20090000 (month and day fuzzy) 00000000 (all fuzzy) 20090816 (no fuzzy) dd/mm/yyyy 00/08/2009 (day fuzzy) 16/00/2009 (month fuzzy) 16/08/0000 (year fuzzy) 16/00/0000 (month and year fuzzy) 00/08/0000 (day and year fuzzy) 00/00/2009 (day and month fuzzy) 00/00/0000 (all fuzzy) 16/08/2009 – no fuzzy Time values should have any of the following formats. The time 1:27 PM can be represented in the load file as: 1:27 PM 01:27 PM 1:27:00 PM 01:27:00 PM 13:27 13:27:00 Data Loading Requirements Coding | 432 Time values for standard tokens @TIMESENT/@TIMERCVD/@TIMESAVED/TIMECREATED will not be loaded for a document unless accompanied by a corresponding DATE token DATESENT/ @DATERCVD/ @DATESAVED/@DATECREATED. Recommended You can use Field Mapping where the user can select different fields to be populated from the DII/CSV files. Fields would be automatically mapped during Import if the name of the database field matches the name of the field within the DII/CSV file. Field names within the header row will appear exactly as they appear within the delimited text file. Use consistent field naming for subsequent data deliveries. DocID/BegBates/EndBates values should include no more than 50 characters. Non-alphabetical and non-numerical characters should be avoided. Coding file names should mirror that of the respective volume (for easy association and identification). 
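The accepted date and time layouts above, including the fuzzy-date convention of zeroed components, can be illustrated with a short parsing sketch. The following Python example is illustrative only and is not the application's parser; the layout argument names are invented for the example.

# Illustrative only: interpret a coding-file date in one of the accepted
# formats, treating 00 (or 0000) components as "fuzzy" (unknown) values.
import re

def parse_load_date(value, layout="mm/dd/yyyy"):
    if layout == "yyyymmdd":
        m = re.fullmatch(r"(\d{4})(\d{2})(\d{2})", value)
        year, month, day = m.groups()
    elif layout == "dd/mm/yyyy":
        day, month, year = value.split("/")
    else:  # mm/dd/yyyy
        month, day, year = value.split("/")
    return {
        "year": None if year == "0000" else int(year),   # None = fuzzy
        "month": None if month == "00" else int(month),
        "day": None if day == "00" else int(day),
    }

print(parse_load_date("08/00/2009"))             # day is fuzzy
print(parse_load_date("20090816", "yyyymmdd"))   # fully specified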
For example: DOCID|TITLE|AUTHOR ^AAA-000001^|^Report to XYZ Corp^|^Jillson, Deborah;Ward, Simon;LaBelle, Paige^ ^AAA-000005^|^Financial Statement^|^Mubark, Byju;Aminov, Marina^ ^AAA-000008^|^Memo^|^McMahon, Brian^ Data Loading Requirements Coding | 433 Related Documents You can review related documents the @ATTACHRANGE token or the @PARENTID and @ATTACH tokens. . The related documents must be coded in sequential order by their DOCID. The sequence determines the first document and the last document in the related document set. Note: Bates number of the first document in @ATTACHRANGE populates the ParentDoc column. Note: @ParentID populates the ParentDoc field and @ATTACH populates the AttachIDs. Either @Attachrange or @ParentID can be used at a time. For example: @ATTACHRANGE ABC001-ABC005 OR @PARENTID ABC001 OR @ATTACH ABC001;ABC002;ABC003;ABC004;ABC005 Data Loading Requirements Related Documents | 434 Transcripts and Exhibits Note: You can import and display of Latin and non-Latin Unicode characters. While the application supports the display of fielded data in either Latin or non-Latin Unicode characters, the modification of fielded data s supported only in latin Unicode characters. Note: The display of non-Latin Unicode characters does not apply to transcript filenames, since transcript deponents are defined by users, or work product filenames, which are not displayed. From Menu > Transcript > Manage, you can upload new transcripts to any transcript collection to which they have access. All transcripts are displayed individually, and each has its own menu that controls various transcript management functions. Transcripts The following describes the required and recommended formats for transcripts. Required ASCII or Unicode files (*.txt) in AMICUS format. Recommended Transcript Page size is less than one megabyte. number specifications: All transcript pages are numbered. Page numbers are up against the left margin. The first digit of the page number should appear in Column 1. See the figure below. Page numbers appear at the top of each page. Page numbers contain no more than six digits, including zeros, if necessary. For example, Page 34 would be shown as 0034, 00034, or 000034. The first line of the transcript (Line 1 of the title page) contains the starting page number of that volume. For example, if the volume starts on Page 1, either 0001 or 00001 are correct. If the volume starts on Page 123, either 0123 or 00123 are correct. Line numbers appear in Columns 2 and 3. Text starts at least one space after the line number. It is recommended to start text in Column 7. No lines are longer than 78 characters (including letters and spaces). No page breaks, if possible. If page breaks are necessary, they should be on the line preceding the page number. Consistent numbers of lines per page, if neither page breaks nor page number formats are used. No headers or footers. All transcript lines are numbered. Data Loading Requirements Transcripts and Exhibits | 435 Preferred Transcript Format Exhibits The following describes the required format for Exhibits. Required Exhibits If that will be loaded must be in PDF format. an Exhibit has multiple pages, all pages must be contained in one file instead of a file per page. Data Loading Requirements Transcripts and Exhibits | 436 Work Product Note: You can import and display of Latin and non-Latin Unicode characters. 
While the application supports the display of fielded data in either Latin or non-Latin Unicode characters, the modification of fielded data is supported only in Latin Unicode characters. Note: The display of non-Latin Unicode characters does not apply to transcript filenames, since transcript deponents are defined by users, or work product filenames, which are not displayed. From Menu > Work Product > Manage you can upload, view, and review Work Product files. Work Product can be any type of file: text, word processing, PDF, or even MP3. (MP3 files are useful when you wish to send an audio transcript or message to the members of the group who have access to Work Product). The application does not maintain edits or keep version control information for the documents stored. Users working with Work Product documents must have the appropriate native application, such as Microsoft Word or Adobe Acrobat, to open them. Data Loading Requirements Work Product | 437 Sample DII Files Note: You can import and display of Latin and non-Latin Unicode characters. While the application supports the display of fielded data in either Latin or non-Latin Unicode characters, the modification of fielded data is supported only in Latin Unicode characters. Note: The display of non-Latin Unicode characters does not apply to transcript filenames, since transcript deponents are defined by users, or work product filenames, which are not displayed. Note: When selecting the Copy ESI option, the DII source files must reside in a location accessible by the IEP server; otherwise, import jobs will fail during the Check File process. eDoc DII Load Files Required DII Format (eDocs) @T SSS00000007 @MEDIA eDoc @EDOC \folder\SSS00000007.xls @T SSS00000008 @MEDIA eDoc @EDOC \Native\SSS00000008.doc Recommended DII format (eDocs) @T ABC00000123 @MEDIA eDoc @EDOC \Natives\ABC00000123.xls @APPLICATION Microsoft Excel @DATECREATED 05/25/2002 @DATESAVED 06/05/2002 @SOURCE Dee Vader Data Loading Requirements Sample DII Files | 438 eMail DII Load Files Required DII File Format for Parent Email (Emails) @T ABC000123 @MEDIA eMail @EMAIL-BODY Please reply with a copy of the completed report. Thanks for your input. Beth @EMAIL-END @ATTACH ABC000124;ABC000125 Required DII File Format for Related Email Attachment (Emails) @T ABC000124 @MEDIA Attachment @EATTACH \Native\ABC000124.doc @PARENTID ABC000123 Data Loading Requirements Sample DII Files | 439 Recommended DII Format for Parent Email (Emails) @T ABC000123 @MEDIA eMail @ATTACH ABC000124; ABC000125 @EMAIL-BODY Please reply with a copy of the completed report. Thanks for your input. Beth @EMAIL-END @FROM Abe Normal (anormal@ctsummation.com) @TO abcody@ctsummation.com; rob.hood@wolterskluwer.com @CC Willie Jo @BCC Jopp@ctsummation.com @SUBJECT Please reply @APPLICATION Microsoft Outlook @DATECREATED 06/16/2006 @DATERCVD 06/16/2006 @DATESENT 06/16/2006 @FOLDERNAME \ANormal\Sent Items @READ Y @SOURCE Abe Normal @TIMERCVD 1:36 PM @TIMESENT 1:35 PM Recommended DII Format for Related Email Attachments (Emails) @T ABC000124 @MEDIA Attachment @EATTACH \Native\ABC000124.doc @PARENTID ABC000123 @APPLICATION Microsoft Word @DATECREATED 05/25/2005 @DATESAVED 06/05/2005 @SOURCE Abe Normal @AUTHOR Abe Normal @DOCTITLE Sales Report June 2005 Data Loading Requirements Sample DII Files | 440 Recommended DII Format for Native Plus Images Deliveries (Email and eDocs) (Append to the previous recommended DII formats for eDocs or email.) 
@D @|\Images\ ABC000124-001.tif ABC000124-002.tif Data Loading Requirements Sample DII Files | 441 DII Tokens Data for all tokens must be in a single line except the @OCR…@OCR-END, @EMAIL-BODY … @EMAIL-END and @HEADER … @HEADER-END. TOKEN FIELD POPULATED DESCRIPTION OF USAGE @T DOCID & BEGBATES This token is required for each DII record. This must be the first token listed for the document. This must be unique in the case. The @BEGBATES or @DOCID should not be used. @T ABC000123 @APPLICATION Application The application used to view the electronic document. For example: @APPLICATION Microsoft Word @ATTACH AttachDocs IDs of attached documents. For example: @ATTACH ABC000124;ABC000125 @ATTACHRANG E ParentDoc The document number range of all attachments if more than one attachment exists. The beginning number in the range populates the PARENTDOC. For example: @ATTACHRANGE WGH000008 – WGH0000010 @ATTMSG Media & Native file is copied into the filesystem using the path provided The file name of the e-mail attachment (that is an e-mail message itself) including the relative or absolute path to the document. The relative path is evaluated using the path to the DII file as the root path. The native file is then loaded. The Media field is populated with the value eMail. @BATESBEG Begbates Beginning Bates number, used with @BATESEND. For example: @BATESBEG SGD00001 @BATESEND EndBates Ending Bates number. For example: @BATESEND SGD00055 @BCC EmailBCC Anyone sent a blind copy on an e-mail message. For example: @BCC Nick Thomas @C Custom Field Code used to load a custom field in the database. The syntax for the @C token is: @C The FIELDNAME value cannot contain spaces. For example, to fill in the DEPARTMENT field of the database with the value Accounting, the line would read: @C DEPARTMENT Accounting @CC EmailCC Anyone copied on an e-mail message. For example: @CC John Ace Data Loading Requirements DII Tokens | 442 @D @I Link to images Required token for each DII record that has an image associated with it. This designates the directory location of the image file(s). Note that only the “@D @I” sequence is allowed. The “@D @V” sequence is not recognized. The following 2 examples are equivalent: --Example 1 @D @I\Images\001\ ABC00123.tif ABC00124.tif --Example 2 @D @I\Images\ 001\ABC00123.tif 001\ABC00124.tif. Note the directory should be relative to the load file. If this token is in the record, it must be the last token in the record. Also UNC paths in the Image Directory field (For example @D \\Server\PFranc\Images) are recognized but no hard coded drive letters. @DATECREATE D CreationDateFT The date that the file was created. For example: @DATECREATED 01/04/2003 @DATERCVD DeliveryTimeFT Date that the e-mail message was received. @DATESAVED ModificationDateFT Date that the file was saved. @DATESENT SubmitTimeFT Date that the e-mail message was sent. @EATTACH Native file is copied into the filesystem using the path provided Relative path (from the load file location) of the native file to be loaded. Valid for Attachments. @EDOC Native file is copied into the filesystem using the path provided Same as @EATTACH except for eDocs. For example @EDOC \Attachments\ABC000123.xls Valid for edocs only. @EMAIL-BODY @EMAIL-END Email body is copied into a file in the file system. Body of an e-mail message. Must be a string of text contained between @EMAIL-BODY and @EMAIL-END. The @EMAIL-END token must be on its own line. For example: @EMAIL-BODY Bill, This looks excellent. 
Ted @EMAIL-END @FILENAME Filename of the native Original Filename of the native file (Edoc/Email/Attachment) For example @FILENAME AnnualReport.xls @FOLDERNAME FolderNameID The name of the folder that the e-mail message came from. For example: @FOLDERNAME \Inbox\Projects\ARProject @FROM EmailFrom From field in an e-mail message. For example: @FROM Kelly Morris Data Loading Requirements DII Tokens | 443 @FULLTEXT N/A (text processing directive) Determines how OCR is associated with the document. This token should be placed at the top of the file, before any @T tokens. The OCR files must have the same names as the images (not including the extension), and they must be located in the same directory. Variations: @FULLTEXT DOC - One text file exists for each database record. The name of the file must be the same name as the first image file. @FULLTEXT PAGE - One text file exists for each page. @FULLTEXTDIR Link to Full text Directory The @FULLTEXTDIR token is a partner to the @FULLTEXT token. @FULLTEXTDIR allows specifying a directory from which the full-text will be copied during the import. Therefore, the full-text files do not have to be located in the same directory as the images at the time of import. The @FULLTEXTDIR token gives you the flexibility to import the DII file and full-text files without requiring you to copy the full-text files to the network first. For example: @FULLTEXTDIR Vol001\Box001\ocrFiles The above example shows a relative path. The application searches for the full-text files in the same location as the DII file that is imported and follows any subdirectories listed after the @FULLTEXTDIR token. The @FULLTEXTDIR token applies to all subsequent records in the DII file until it is changed or turned off. @HEADER @HEADER-END EmailHeader E-mail header content. The @HEADER-END token must be on its own line. For example: @HEADER
@HEADEREND @INTMSGID InternetMessageID Internet message ID. For example: @INTMSGID <00180c34fe5$bf2d5$050@SKEETER> @MEDIA Media Indicates the type of document. This must be populated with one of the following values: {email, attachment, and eDoc} This value is REQUIRED. This value is used by the application to determine how to display the document. For example : @MEDIA eDoc @MSGID EntryID E-mail message ID generated by Microsoft Outlook or Lotus Notes. For example: @MSGID 00E8324B3A0A800F4E954B8AB427196A1304012000 @MULTILINE Any custom field with multiple lines Allows carriage returns and multiple lines of text to populate a specified text field. Text must be between @MULTILINE and @MULTILINE-END. The @MULTILINE-END token must be on its own line. For example: @MULTILINE FIELDNAME Here is the first line. Here is the second line. Here is the third line. Here is the last line. @MULTILINE-END @O OCRTEXT / FULLTEXT is copied into a file in the file system This token is used to load full-text documents. The text files can be located someplace other than the image location as specified by the @D line of the DII file. There can only be one text file for the record. The value following the @O should contain the relative path (from the load file location) of the .txt file. @O \Text\ABC000123.txt Data Loading Requirements DII Tokens | 444 @OCR @OCREND OCRTEXT is copied into a file in the file system The @OCR and @OCR-END tokens offer the flexibility to include the full-text (including carriage returns) in the DII file. The @OCREND token must appear on a separate line. For example: @OCR @OCR-END @PARENTID ParentDoc Parent document ID of an attachment. For example: @PARENTID ABC000123 @PSTFILE0 PSTFilePath and PSTStoreNameID The original PST File name and ID 1) The name and/or location of the .PST file. 2) The unique ID of the .PST file. The two values are separated by a comma. The unique ID can be any unique value that identifies the .PST file. For example: @PSTFILE EMAIL001\PFranc.pst, PFranc_14April_07 The .PST file’s unique ID (the second value) is populated into the PST ID field designated in eMail Defaults. The PST ID value specified by the @PSTFILE token is assigned to the record it appears in and will apply to all subsequent e-mail records. The value is applied until either the @PSTFILE token is turned off by setting the token to a blank value or the value changes. The @PSTFILE token can occur multiple times in a single DII file and assign a different value each time. This allows processing multiple .PST files and presenting the data for all .PST files in a single DII file. As a best practice, the @PSTFILE token should be placed above the @T token. @READ IsUnread (stores 0 if Y and 1 if N) Notes whether the e-mail message was read. For example: @READ Y @RELATED LinkedDocs The document IDs of related documents. For example: @RELATED WGH000006 @SOURCE Source Custodian of the data. You can quickly filter documents by this field. @SOURCE Joe Custodian @SUBJECT Subject The subject of an e-mail message. For example: @SUBJECT RE: Town Issues @TIMECREATED CreationDateFT Time the file/e-mail/edoc was created @TIMERCVD DeliveryTimeFT Time that the e-mail message was received. @TIMESAVED ModificationDateFT Time that the file/e-mail/edoc was last saved @TIMESENT SubmitTimeFT Time that the e-mail message was sent. @TO EmailTo To field in an e-mail message. 
For example: @TO Conner Stevens @UUID UUID Customer-specific and unique identifier for a record (not used internally by the application) For example : @UUID AE01R95 Data Loading Requirements DII Tokens | 445 Part 7 Using Jobs This part describes how to create and manage jobs. Depending on the license that you own and the permissions that you have, you will see some or all of the following and includes the following sections: About Jobs (page 447) Introduction Creating and Managing Jobs (page 454) Configuring Using Jobs to the Resolution1 eDiscovery Collection Job (page 452) Third Party Data Sources (page 491) | 446 Chapter 39 Introduction to Jobs About Jobs You can create jobs to perform tasks and collections on a computer, network share, public data repository, email account, or all of the above within the enterprise. The collection can be set up with filters to find only the files that are needed for the project. Jobs are responsible for the gathered, filtered, and archived information that comes from a variety of sources within an organization such as computers, laptops, personal digital assistants, and so forth. Once you collect the data from the job, you can view the data in Project Review. You can filter the data and view the data by job or data source. You use the Job Wizard to create Jobs. You can access the Job Wizard from one of two places in the application: From Home > Jobs tab, click the Add button on the Info pane. See About the Jobs Tab on page 449. From the Project List on the Home page, click the Add button next to a particular case. Important: When a job targets a network share, if a file on the share is locked from reading, the job will skip that file and enter an entry in the log. See Adding a Job on page 455. Introduction to Jobs About Jobs | 447 About Job Categories Depending on the license that you own, you can use the following categories of jobs: Job Categories Option Description Collection Job (Resolution1 eDiscovery and Resolution1) You can use a collection job to collect data for process and review. If you are using Resolution1, this job can do one of the following:  Perform a Resolution1 eDiscovery Collection job  Perform a security (Resolution1 CyberSecurity) Search and Review job. See the Introduction to the Resolution1 eDiscovery Collection Job chapter. See the Introduction to Security Jobs chapter. When configuring the Processing Options for a project, if you select the Security processing mode, the Collection job functions as a Search and Review job. Otherwise, it functions as an Resolution1 eDiscovery Collection job. Security Jobs (Resolution1 CyberSecurity and Resolution1) Security Jobs let you capture data and perform tasks on client computers. See the About Performing Security Analysis chapter. Report Only Provides you only with the location of the target document. This job type is used primarily if you suspect inappropriate activity, but you are not ready to act upon it yet. About Approving Jobs After you configure a job, it must first be approved before it is executed. Job approval allows administrative oversight of the job by either supervisors or legal professionals prior to executing the job. You can designate that a job be approved by one or more approvers. You designate who has permissions to approve a job by using roles and permissions. 
In order to approve a job, a user must have one of the following:
Application Administrator role
Case Administrator role
Project Manager Role
Approve LitHold
LitHold Rights Manager
Custom role with the Approve Jobs permission
You can designate that a job be approved by any user with the approve role permission, or you can designate specific users with the approver permission. If you designate multiple specific users, all of them must approve the job. See Approving a Job on page 477.

About the Jobs Tab
Administrators, and users given permissions, use the Jobs tab to do the following:
Create jobs
View a list of existing jobs and their associations to people, computers, network shares, and groups
Manage jobs
If you are not an administrator, you will only see the jobs that you created or the projects to which you were granted permissions. The Jobs tab refreshes every three minutes.

To view the Jobs tab
1. Log in to the console.
2. In the application console, click Home.
3. Select a project.
4. Click the Jobs tab.

Jobs tab

Elements of the Jobs Tab
Filter Options: Allows the user to filter jobs in the list. See Filtering Content in Lists and Grids on page 38.
Jobs List: Displays the jobs associated with the project. Click the column headers to sort by the column. Note: If a job does not collect and report on certain types of data, NA displays in the column. For example, for volatile jobs, the Hits and Errors columns display NA.
Refresh: Refreshes the Jobs List. See Refreshing the Contents in List and Grids on page 35.
Columns: Adjusts what columns display in the Jobs List. See Sorting by Columns on page 35.
Delete: Deletes the selected job. The button is only active when a job is selected.
Resubmit: Resubmits a job under a new name.
Cancel: Stops the current job.
Manage Notifications: Creates notifications for the checked job(s). See About Managing Notifications for a Job on page 483.
Manage Templates: Manages the templates for jobs. See Managing Job Templates and Filter Templates on page 485.
Test Work Flow: Tests the work flow of the job. Note: This may take up to 30 seconds.
Export to CSV: Exports the job list to a CSV file.
Job Details Pane: Includes the ability to add jobs (plus sign button), edit jobs (pencil button), and delete jobs (minus sign button).
Job Target Results Tab: Displays all the targets for the selected job.
Status Tab: Displays the failure status of a job in detail. See Status Tab on page 450.
People Target Tab: Displays the People targeted for the selected job.
Computers Target Tab: Displays the Computers targeted for the selected job.
Network Shares Target Tab: Displays the Network Shares targeted for the selected job.
Groups Target Tab: Displays the Groups targeted for the selected job.
Reports Tab: Displays statistics about jobs run. See Reports Tab on page 451.

Status Tab
The Status tab allows you to view the failure status of a job in detail. The errors that cause a failure status to display are invalid network shares (for collection jobs against a network share) and any errors reported to the application by Site Server. See Network Shares Tab on page 470.
The Status tab can be viewed by any user, even a user without admin permissions. You can see whether a job has failed on an individual target and why the job failed for that target. If the entire job fails, a red error bar displays the reason why the job has failed.
Note: For combination jobs, the Status tab displays the status of each job being processed.

Job Status Tab

Reports Tab
The Reports tab allows you to generate and download reports on a selected job. You can download the following reports:
Full Error Report: This report shows a breakdown of failed targets and the errors associated with them.
Job Report: This report displays details pertinent to the specific job. The report can be created on a completed job or a job that is in the middle of executing.

Chapter 40 Introduction to the Resolution1 eDiscovery Collection Job

About Collection Jobs
You can use the Jobs tab to perform Collection Jobs on a computer, network share, public data repository, email account, or all of the above within the enterprise. Collection Jobs let you capture data for processing and review. Jobs are the gathered, filtered, and archived information that comes from a variety of sources within an organization, such as computers, laptops, personal digital assistants, and so forth.
You use the Job Wizard to create Collection Jobs. See Adding a Job on page 455.

About Collections
Collections are the gathered, filtered, and archived information from a wide variety of sources. This allows a transfer of data from an organization to legal counsel. After collection, data is processed and reviewed for relevance. This collection process and the review of collected files is the essence of eDiscovery.
In the Custom Selection and Other Data Sources panes under the Job Options tab, you can select the data sources that you want to collect from.

About Collection Job Sources
The following are the types of data sources that you can collect from:
People. When you select a person to collect from, you can also choose to collect from the following data sources that the person is associated with: Computers, Network Shares, Enterprise Vaults, Microsoft Exchange server, Cloud Mail server (such as Yahoo), Domino Server, and Gmail. See People Tab on page 465.
Computers. See Computers Tab on page 467.
Network Shares. See Network Shares Tab on page 470.
Documentum. See Documentum Collections Options on page 496.
DocuShare. See DocuShare Collection Options on page 498.
Enterprise Vault Server. See Enterprise Vault Server Collection Options on page 500.
Exchange Public Folder. See Exchange Public Folder Collection Options on page 505.
FileNet. See FileNet Collection Options on page 506.
Google Drive. See Enterprise Vault Server Collection Options on page 500.
OpenText ECM. See OpenText ECM Collection Options on page 508.
Oracle URM. See Oracle URM Collection Options on page 509.
SharePoint. See SharePoint Collection Options on page 511.
Website. See Website Collection Options on page 514.
Druva. See Druva Collection Options on page 515.
Note: If you collect from the data sources under the People option, data will only be collected from data sources that are associated with a person. If you want to collect data from a particular data source, both associated and unassociated with a person, select the data source by name and not by the People option.
Chapter 41 Creating and Managing Jobs

This chapter explains how to create, run, and manage jobs and includes the following topics:
Adding a Job (page 455)
General Job Wizard Tabs (page 457)
Approving a Job (page 477)
Processing a Job (page 478)
Using Job Reports (page 479)
Using Job Notifications (page 483)
Using Job Templates and Filter Templates (page 485)
Additional Job Tasks (page 488)
Testing the Collection Workflow (page 488)
Stopping a Job (page 488)
Resubmitting a Job (page 488)
Editing a Job (page 489)
Deleting Jobs (page 490)

Adding a Job
You use the Job Wizard to create jobs for a project. See About Jobs on page 447. You can set up the job with filters to find only the files that are needed for the project in Project Review.
Note: It is strongly recommended to configure your antivirus to exclude the database (PostgreSQL, Oracle database, MS SQL), the AD Temp folder, source images/loose files, and project folders for performance and data integrity. Cerberus writes binaries to the AD Temp folder momentarily in order to perform the malware analysis and quickly deletes the binary upon completion. It is important to ensure that your antivirus is not scanning the AD Temp folder; if the antivirus deletes or quarantines the binary from the temp folder, Cerberus analysis will not be performed.

To add a job
1. Do one of the following:
In the Project List panel, click the Add button next to the project, and then click Job.
On the Home tab, select a project, click Jobs, and then, on the right side of the upper pane, click the Add button.
The Job Wizard opens.
2. In the Job Wizard dialog, in the Job Options screen, set the options that you want and click Next. Bold names in the user interface indicate required fields. See Job Options Tab on page 457.
3. Do one or more of the following:
Note: Based on the Job Target Options that you set in the previous step, some of the following wizard screens may not be available.
In the IP Range screen, enter the range of IP addresses from which you want to collect. See IP Range Tab on page 464.
In the Group Selection screen, check the groups whose data you want to collect. See Group Selection Tab on page 462.
In the People screen, check the people whose data you want to collect. See People Tab on page 465.
In the Computers screen, check the computers whose data you want to collect. Set the include or exclude filtering criteria and advanced options that you want. See Computers Tab on page 467 and Computer and Network Share Filter Options on page 474.
In the Network Shares screen, check the network shares whose data you want to collect. Set the include or exclude filtering criteria and advanced options that you want. See Network Shares Tab on page 470 and Computer and Network Share Filter Options on page 474.
4. Click Next.
5. The next screen you see depends on the Job Type that you selected. The following Job Types result in a screen specific to that type:
Memory Operations: See Memory Operations Tab on page 443.
Agent Remediation: See Agent Remediation Tab on page 441.
Network Acquisition: See Agent Remediation Tab on page 441.
Removable Media Monitoring: See RMM Options Tab on page 454.
Volatile: See Volatile Options Tab on page 455.
6. In the Scheduling screen, set how you would like the job to be executed.
You can execute the job manually, or schedule a time for the job to be executed. See Scheduling Tab on page 471. 7. Click Next. 8. In the Approvers screen, set the options that you want and click Next. See Approvers Tab on page 473. Job Wizard Summary 9. On the Job Summary page, carefully review the settings that you have made to ensure that it includes and excludes the proper terms and documents. 10. Click Save to submit the job for approval. Creating and Managing Jobs Adding a Job | 456 General Job Wizard Tabs You can use the following general tabs to configure Jobs: Job Options Tab (page 457) Group IP Selection Tab (page 462) Range Tab (page 464) People Tab (page 465) Computers Network Tab (page 467) Shares Tab (page 470) Scheduling Tab (page 471) Approvers Tab (page 473) Computer and Network Share Filter Options (page 474) For information about Security Jobs tabs, see Security Jobs Configuration Tabs (page 440). For information about third party data sources tabs, see Configuring Public Data Repositories for Collecting Data (page 138). Job Options Tab The following describes the options that are available in the Job Options tab of the Job Wizard. General Job Options Option Description Job Type Select the type of job. See About Job Categories on page 448. Name Enter the name of the data collection job. The job name should not be longer than 255 characters. Description (Optional) Enter a description to help you further identify what you are collecting in the job. Template Use Job Template Lets you choose a job template that you have previously saved. There is also a list of pre-defined job templates that come with the application from which you can select. See Default Job Templates on page 486.  Save As Job Template Lets you save the configuration of the job as a template for use in future jobs that you add. If a job is saved as a job template, the name of the job should be no longer than 64 characters. Note: When using a job template as a secondary action, the total number of characters of the job name and the job template name should be no longer than 255 characters. For example, a job that contains a name with 200 characters with a job template that contains a name with 64 characters, the job fails because the combined job name is 264 characters. See Deleting Job Templates on page 486. Creating and Managing Jobs  General Job Wizard Tabs | 457 General Job Options (Continued) Option Description Job Data Path Specifies the job data path destination root if Inherit from Project is not checked. For a multiple server installation, this path is the UNC path or IP address path to a network share that serves as the output location for all the items that are copied during the job collection. Make sure double backslash characters (\\) precede the UNC path or the IP address path. If a network UNC path is specified, the path can be validated to ensure that the program can access the location. The validation also ensures that your job output is available for viewing. Local paths only work on single box installations. Inherit from Project Inherits the job data path from the associated project. Job Data Browse Button Lets you browse to the job data path root field to expedite finding the job by this name. Note: The folder does not have to exist. If a new folder is specified, the system will create it for the user upon execution of the specified job. 
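The name and path limits in the table above are easy to trip over when many jobs are being set up. The following short Python sketch (illustrative only, not part of the product; the function name and messages are invented for this example) shows the arithmetic behind those limits: a job name of at most 255 characters, a template name of at most 64 characters, a combined job-plus-template name of at most 255 characters, and, on multiple-server installations, a job data path given as a UNC path preceded by double backslashes.

    # Illustrative pre-checks for the Job Options limits described above.
    # This is a sketch only; the product performs its own validation.
    def check_job_options(job_name, job_data_path, template_name=None):
        problems = []
        if len(job_name) > 255:
            problems.append("Job name exceeds 255 characters.")
        if template_name is not None:
            if len(template_name) > 64:
                problems.append("Job template name exceeds 64 characters.")
            # Example from the table: 200 + 64 = 264 characters, so the job fails.
            if len(job_name) + len(template_name) > 255:
                problems.append("Combined job and template name exceeds 255 characters.")
        # Multiple-server installations need a UNC path preceded by double backslashes;
        # local paths only work on single-box installations.
        if not job_data_path.startswith("\\\\"):
            problems.append("Job data path is not a UNC path (for example \\\\server\\share).")
        return problems

    print(check_job_options("Q3 Custodian Collection", r"\\fileserver\collections\q3"))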
Job Target Options Option Description Job Target Options - Custom Lets you manually select sources such as people, computers, network shares, and email servers, whose data you want to collect. Job Target Options - Group Lets you select data sources that you want to collect from based on Active Directory organizational units and logical administrative units for people, groups, and resource objects such as computers and file shares. When you create a job that includes Group as a target, a snapshot of all of the data sources in the group is made and used for the life of the job. If the group changes after the job is created and executed (not just approved), those changes do not affect the targets of the group that were used in an executing job. Job Target Options - IP Range Select this to enter a range of IP addresses from which you want to collect data. This is an easy way to collect from a group of computers that are in an IP range. Note: If you select this option, configure a short Cancel Pending date or the job will never complete, because there is no guarantee of an agent being in the IP range. Job Target Options - People A network user who can be responsible for or have access to computers, network shares, email, or public data repositories that contain files of interest for the current job. The user’s non-email and email are also included in a CIRT job when selected. See People Tab on page 465. Job Target Options - Computers A computer in the network that can contain files of interest. In order to collect from a computer, the computer must have the appropriate agent installed on it. See Computers Tab on page 467. Job Target Options - Network Shares A network repository for stored files that can contain files of interest. See Network Shares Tab on page 470. Creating and Managing Jobs General Job Wizard Tabs | 458 Job Other Data Sources Options Option Description Other Data Sources Note: The Other Data Sources option is invalid for the following jobs: Remediate, Remediate and Review, and Metadata Only. See Configuring Public Data Repositories for Collecting Data on page 138. Documentum See Configuring for a Documentum Server on page 157. DocuShare See Configuring for a DocuShare Server on page 164. Enterprise Vault Server See Configuring for an Enterprise Vault Server on page 149. Exchange Public Folder See Configuring for a Documentum Server on page 157. FileNet See Configuring for a FileNet Server on page 169. Google Drive See Configuring for Google Drive on page 171. OpenText ECM See Configuring for a OpenText ECM Server on page 168. Oracle URM See Configuring for a Oracle URM Server on page 155. SharePoint See Configuring for a SharePoint Server on page 159. Website See Configuring for Websites on page 162. Job Priority and Agent Speed Options Option Description Job Priority - Inherit from Project Inherits the job priority from the associated project. Job Priority - Low, Medium, High Select a priority for the job. Agent Speed - Inherit from Project Inherits the agent speed from the associated project. Agent Speed - Low, Medium, High Select a speed at which you want the agent to run. Processing And Remediation Options Option Description Auto Process Options Check to have the data auto processed. If you are running either a Search and Review or a Metadata only job, you need to check this option. Remediation Options Only available if you choose a Remediate Job Type. 
Creating and Managing Jobs General Job Wizard Tabs | 459 Processing And Remediation Options Option Description Collection Options    Remediation Options   Filtered Collection: Collects only files matching filters on computers or network shares pages. Full Disk Acquisition: Creates an E01 file from the disk. Auto Process Job: If this option is checked, the job and evidence is processed automatically. If you do not want to process the evidence at this time, leave this option unselected. Secure Delete: Removes files from the hard drive. Verify Successful Remediation: Check to verify for a successful remediation Job AD1 Encryption Options Option Description AD1 Encryption The AD1 Encryption option set is only available if you choose the Search and Review Job Type. Inherit from Project Inherits the AD1 encryption setting from the associated project. Disabled Turns off encryption of an AD1 evidence image file. Password Encrypts an AD1 evidence image file with a password that you specify. Certificate Encrypts an AD1 evidence image file with a certificate. Certificates use public keys for encryption and corresponding private keys for decryption. You can configure the certificates that appear in the drop-down menu. Agent Collection Check to create AD1 image on the agent. Job Expiration Options Option Description Job Expiration Define the amount of time the system (Site Servers) will try and contact data sources within a job. After the time period, jobs meeting the conditions cancel. You have two condition options to specify for the job: Cancel Pending and Cancel Incomplete. Cancel Pending Define the amount of time the system (Site Servers) will try and contact data sources within a job when the job is in a pending state. After the time period, any jobs still pending cancel. This stops the job from attempting to contact agents on which it has not yet started tasks (pending tasks). Agents that have already been contacted within the time defined with continue to run until the task is complete regardless of the expiration date. This only cancels the pending job(s), not other jobs in various states. Note: When cancelling a recurring job, only the job that is currently running in Site Server will cancel. The next occurrence of the job will start at its appointed time. A recurring Volatile job is cancelled according to the Cancel Pending parameters. Creating and Managing Jobs General Job Wizard Tabs | 460 Job Expiration Options (Continued) Option Description Cancel Incomplete Define the amount of time the system (Site Servers) will try and contact data sources within a job. After the time period, any incomplete jobs cancel. This is selected by default. This cancels all jobs that have not completed, even jobs that are in progress. Job Cerberus Score Options Option Description Cerberus Score You can enable Cerberus in the following security job types: Collection (Resolution1)  Metadata Only  Remediate and Review  Search and Review (Resolution1 CyberSecurity)  Volatile (settings are on Volatile Job Options page) Cerberus lets you do a malware analysis on executable binaries. You can use Cerberus to analyze executable binaries that are on a disk, on a network share, or are unpacked in system memory. See About Cerberus Malware Analysis on page 352.  None: Select to not run a Cerberus analysis  Cerberus Stage One: Cerberus stage 1 is a general file and metadata analysis that identifies potentially malicious code. Cerberus generates and assigns a threat score to the executable binary. 
 Cerberus Stage Two: Cerberus stage 2 is a disassembly analysis that examines elements of the code. It learns the capabilities of the binary without running the actual executable.  Job Auto Deploy Agents Options Option Description Auto Deploy Agents Turn on or Off. It is Off by default. Creating and Managing Jobs General Job Wizard Tabs | 461 Note: When running Symantec Endpoint Protection with Removable Media Protection versions 8.X, the agent will not obtain Handle information when performing Volatile jobs. You can obtain Handle information through the Memory Operations job type and choosing Handles. Note: During an edit, when changing between the “Approved By” options, you must unselect any users or roles previously selected. Failure to unselect these options will discard any changes to the job when the job is saved. Job Secondary Actions Option Description Enable Secondary Actions Select to enable secondary actions. Secondary actions allow you to apply a secondary job to the results of a primary job by using a job template. See Using Secondary Actions on page 457. Group Selection Tab The Group Selection appears only if you select Group in the Job Target Options earlier in the wizard. See Adding a Job on page 455. Group Selection Tab Creating and Managing Jobs General Job Wizard Tabs | 462 The following table describes the options that are available in the Select People of the Job Wizard. Group Selection Options Option Description Groups list (upper pane) Displays the computers that you can select to add to the job. The list box identifies computers by their name and by their description and locality, if specified. Filter Options (lower pane) Allows you to filter the information in the associated list pane. Displays all people within the selected group. Displays all computers within the selected group. Displays all file shares within the selected group. Creating and Managing Jobs General Job Wizard Tabs | 463 IP Range Tab The IP Range screen appears if you select IP Range as the Job Target Option in the Job Options screen of the Job Wizard. See Adding a Job on page 455. IP Range Tab IP Range Options Option Description Start Allows you to enter the IP address for the starting point of the IP range. End Allows you to enter the IP address for the ending point of the IP range. Include Filters See Computer and Network Share Filter Options on page 474. Exclude Filters See Computer and Network Share Filter Options on page 474. Advanced Options See Computers Tab on page 467. Creating and Managing Jobs General Job Wizard Tabs | 464 People Tab The People options appear only if you selected Custom > People in the Job Target Options group box in the Job Options. See Adding a Job on page 455. You can select the people that you want to collect from. In addition to selecting people, you can select a person’s: Computers (Network) Shares Enterprise Vault Exchange Server Cloud Mail Domino Gmail Server Mail People Options Option Description View by Project Displays people associated with the selected project. View All Displays all people. Filter Options Allows you to filter the information in the associated list pane. Person Details (upper pane, right side) Specifies the full name and username of the person. You can set the highlighted person’s default associations with computers, network shares, Exchange email, Lotus Notes email, or non-email data such as task items, calendar items, and so forth. 
For example, if you check Computers, all the computers that are listed in the Computers tab of the Select People frame, become associated with the person. Computers List tab Computer Details area Network Shares List tab Displays the computers that you can associate or unassociate with the highlighted person. Identifies the name of the highlighted computer and, if available, its locality and description. Displays the network shares that you can associate or unassociate with the highlighted person. Network Share Details area Identifies the network share path of the highlighted share and, if available, its locality and description. Enterprise Vault wizard page Lets you collect Enterprise data for the highlighted person. Exchange wizard page Lets you collect Exchange email for the highlighted person. Cloud Mail wizard page Lets you collect Cloud Mail email for the highlighted person. Creating and Managing Jobs General Job Wizard Tabs | 465 People Options (Continued) Option Description Domino wizard page Lets you collect Notes email for the highlighted person. Gmail wizard page Lets you collect Cloud Mail email for the highlighted person. Adds a data source. Edits a data source. Depending on the selected tab above, opens the Associate Computers to panel or the Associate Network Shares to panel. This allows you to associate one or more computers or network shares to the person. Creating and Managing Jobs General Job Wizard Tabs | 466 Computers Tab The Computers options appear only if you click Custom, and then check Computers in the Job Target Options group box earlier in the wizard. See Adding a Job on page 455. For agents that are configured to use a proxy server, the Work Manager initiates a secure connection with the first proxy server in the list. If the proxy is configured with two network interface cards, the internal IP address is used. If a secure connection cannot be established, the next proxy server in the list is attempted until the list is exhausted. Several attempts are made to contact a proxy server, after which an error is recorded for the job. Upon successful connection, the connected proxy server is recorded for the collection. The file request is transmitted to the proxy server. Every 20 minutes, the agent initiates a secure connection. The file request is transmitted to the agent, which reads the file request and transmits the file back to the proxy server. The Work Manager repeats these steps for each identified node (computer) that is configured to use a Proxy server. The following table describes the options that are available on the Computers options of the Job Wizard. See Network Shares Tab on page 470. Computers Options Option Description Filter Options Filters the computers in the associated list pane. See Filtering Content in Lists and Grids on page 38. Note: If your filter results in listing multiple computers, you can choose to either target all of the computers matching the filter you applied, or target only specific computers that you have checked in the list. If you choose to target all computers matching filter, the filter must be enabled. Computers list box Displays all the computers that you can select to add to the job. This list comes from the computers that are defined in the Data Sources tab. See Managing Computers for Collecting Data on page 115. The list box identifies computers by their name and by their description and locality, if specified. 
Computer Details area (upper pane, right side) Identifies the name of the highlighted computer and, if available, its locality and description. Filters You can click to add a computer to the list. You can click to edit the details of a computer in the list. Click the arrow to either show or hide the Filters options. See Computer and Network Share Filter Options on page 474. Include Lets you create or load an Include filter. Opens the Include panel where you can specify file inclusion filter information such as meta data information, file content, or MD5 hash sets. Deletes the selected filter template from the Include list box. Creating and Managing Jobs General Job Wizard Tabs | 467 Computers Options (Continued) Option Description Allows you to edit the settings of a selected filter in the Include list box. Lets you load a previously saved Include filter template. Exclude Displays the names of each file exclusion filter that you have created. Opens the Exclude panel where you can specify file exclusion filter information such as meta data information, and file content. See Computer and Network Share Filter Options on page 474. Deletes the selected filter template from the Exclude list box. Allows you to edit the settings of a selected filter in the Exclude list box. Lets you load a previously saved Exclude filter template. Advanced Options Collect from Target options Click the arrow to either show or hide the Advanced Options. Allows you to see advanced options for collection. Depending on the job type that you are creating, not all Advanced options are available.    ‘Search with’ options    File System: Select to collect the drives from the target’s file system. Logical Disk: Select to collect only the target’s logical drive space. Physical Disk: Select to collect the target’s entire physical drive. Search with Agent: Select to search files using the agent. Search with Either Agent or Site Server: Select to search first with the agent and then with the Site Server. Search with the Site Server: Select to search using the Site Server. System Files Allows you to search system files that are normally hidden from view. Files with “$” contain system metadata and in NTFS, the $MFT contains the file system pointers to all files. Scan Deleted Files Scans free space of a partition for files matching the filter criteria. Scan Unused Disk Area Scans unallocated disk space for files matching filter criteria. Archive Drill Down If archive files exist in any of the available data sources that contain compressed files of interest, this option lets you open the archive files as part of the job and checks them against keywords supplied in the keyword filter. Note: When selecting specific files for a Remediation job with Archive Drill Down selected, the Remediation job will delete the entire archive file if one or more of the specified files match the criteria of the job. Collect Response Archive Collects any archive that contains files that match filter criteria. Specify Extensions for Archive Drill Down Allows you to specify the extension for the archive drill down. If you don’t specify, the default will be used. Creating and Managing Jobs General Job Wizard Tabs | 468 Computers Options (Continued) Option Description Collect NonExtension Files Collects all files that do not have an extension. Use Internal File Identification Recognizes internal file identification when checking file extensions. Collect Encrypted Files Collects files that cannot be accessed to search for keyword filter criteria. 
Report on NonResponsive Items Generates a report detailing files that matched all filter criteria, but did not contain the specified keyword. Exclude Removable Drives Excludes removable drives that are recognized by Site Server from the collection. This option is only available for collection jobs. Not all removable drives are recognized as such so this option may not exclude ALL removable drives. Creating and Managing Jobs General Job Wizard Tabs | 469 Network Shares Tab The Network Shares options appear only if you click Custom and then check either Network Shares or in the Job Target Options panel earlier in the wizard. See Adding a Job on page 455. The following table describes the options that are available in the Network Shares options of the Job Wizard. See Computers Tab on page 467. Network Shares Options Option Description Filter Options Filters the network shares in the associated list pane. See Filtering Content in Lists and Grids on page 38. Note: If your filter results in listing multiple network shares, you can choose to either target all of the network shares matching the filter you applied, or target only specific network shares that you have checked in the list. If you choose to target all network shares matching filter, the filter must be enabled. Network shares list box Displays all the network shares that you can select to add to the job. This list comes from the network shares that are defined in the Data Sources tab. See Managing Network Shares for Collecting Data on page 120. The list box identifies network shares by their name and by their description. Network Share Details area (upper pane, right side) Identifies the name of the highlighted share and description. Filters You can click to add a new network share to the list. You can click to edit the details of network sharer in the list. Click the arrow to either show or hide the Filters options. See Computer and Network Share Filter Options on page 474. Include Lets you create or load an Include filter. Opens the Include panel where you can specify file inclusion filter information such as meta data information, file content, or MD5 hash sets. See Computer and Network Share Filter Options on page 474. Deletes the selected filter template from the Include list box. Allows you to edit the settings of a selected filter in the Include list box. Lets you load a previously saved Include filter template. Exclude Displays the names of each file exclusion filter that you have created. Opens the Exclude panel where you can specify file exclusion filter information such as meta data information, and file content. See Computer and Network Share Filter Options on page 474. Deletes the selected filter template from the Exclude list box. Creating and Managing Jobs General Job Wizard Tabs | 470 Network Shares Options (Continued) Option Description Allows you to edit the settings of a selected filter in the Exclude list box. Lets you load a previously saved Exclude filter template. Advanced Options Click the arrow to either show or hide the Advanced Options. Archive Drill Down If archive files exist in any of the available data sources that contain compressed files of interest, this option lets you open the archive files as part of the job and checks them against keywords supplied in the keyword filter. Note: When selecting specific files for a Remediation job with Archive Drill Down selected, the Remediation job will delete the entire archive file if one or more of the specified files match the criteria of the job. 
Collect Responsive Archives Collects any archive that contains any fields that match keyword filter criteria. Specify extensions for archive drill down Allows you to specify extensions for Archive Drill Down. Collect NonExtension Files Collects all files that do not have an extension. Use Internal File Identification Recognizes internal file identification when checking file extensions. Collect Encrypted Files Collects files that cannot be accessed to search for keyword filter criteria. Report on NonResponsive Items Generates a report detailing files that matched all filter criteria, but did not contain the specified keyword. System Files Allows you to search system files that are normally hidden from view. Files with “$” contain system metadata and in NTFS, the $MFT contains the file system pointers to all files. Scheduling Tab You can schedule when you would like a job to execute using the Scheduling options screen in the Job Wizard. You can also set when and if you would like the job to reoccur. See Scheduling a Recurring Job on page 472. There are two different types of scheduling. Server Scheduled: Available for all jobs except RMM, Network Acquisition, and Volatile. Server scheduled starts a new instance of the job on the Server. The server job collects data from the agents as they report results. Agent Scheduled: Available for volatile jobs. Agent scheduled jobs are set to repeat on agents. Once an agent has been contacted and the job is received, it will repeat as specified in the scheduling options. See Adding a Job on page 455. Creating and Managing Jobs General Job Wizard Tabs | 471 Scheduling Tab in the Job Wizard Options in the Scheduling Tab Option Description Scheduled Job Execution Select this to set a date and time when you want the job to execute. You can also set a reoccurrence on the job to execute on a regular basis. Manual Job Execution Select this to manually execute a job. Scheduling a Recurring Job You can schedule a job to execute multiple times by enabling recurrence for that particular job. When recurrence is enabled for a job, the job executes the same requested actions during each recurrence. All the data and objects that meet the job criteria are collected again each time the job reoccurs. The application allows you to configure your job(s) to execute by the minute, hourly, or daily. You can also configure the job to end at a given time. Note: When scheduling Volatile jobs within a Combination Job, the recurrence schedule for the combination job overrides the recurrence schedule for the Volatile job itself. To schedule a reoccurring job 1. From the Scheduling tab, click Scheduled Job Execution. 2. Select Enable Recurrence. 3. Under Recurrence Pattern, specify how often the job reoccurs. 4. Specify when the recurring job will end. You can specify the recurrence of the job to end after so many occurrences or specify the recurrence of the job to end after a specific date and time. Creating and Managing Jobs General Job Wizard Tabs | 472 5. Click Next and follow the Job Wizard. Recurrence Options Option Description Minute Allows you to specify the number of minutes between job recurrences with the minimum option being 1 minute and the maximum being 30 minutes. Hourly Allows you to specify the number of hours between job recurrences with the minimum being 1 hour and the maximum being 12 hours. Daily Allows you to specify a specific time for the job recurrence to occur. The time specified must be an hourly instance, such as 4:00 AM or 7:00 PM. 
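To make the recurrence bounds above concrete, the following Python sketch (illustrative only; it is not how Site Server implements scheduling) enumerates the run times that a Minute, Hourly, or Daily pattern would produce until an end-after-N-occurrences condition is reached. The example mirrors the Vol-Quick-Sched default template described later, which runs every five minutes for five times.

    # Illustration of the recurrence patterns described above (not product code).
    from datetime import datetime, timedelta

    def occurrences(start, pattern, interval, max_occurrences):
        # Enforce the bounds from the Recurrence Options table.
        if pattern == "minute" and not 1 <= interval <= 30:
            raise ValueError("Minute recurrence allows 1-30 minutes.")
        if pattern == "hourly" and not 1 <= interval <= 12:
            raise ValueError("Hourly recurrence allows 1-12 hours.")
        if pattern == "daily" and (start.minute or start.second):
            raise ValueError("Daily recurrence must fall on a whole hour, such as 4:00 AM.")
        step = {"minute": timedelta(minutes=interval),
                "hourly": timedelta(hours=interval),
                "daily": timedelta(days=1)}[pattern]
        run = start
        for _ in range(max_occurrences):
            yield run
            run += step

    # Every 5 minutes, ending after 5 occurrences.
    for t in occurrences(datetime(2014, 12, 30, 4, 0), "minute", 5, 5):
        print(t.strftime("%Y-%m-%d %H:%M"))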
Approvers Tab The following describes the options that are available on the Approvers screen of the Job Wizard. See Adding a Job on page 455. Job Approvers Tab Job Approvers Options Option Description Is Approved By Role Allows any user with job approval rights to approve the collection. After you complete the Job Wizard, the job must first be approved and then it must be executed. Is Approved By User List Allows you to select one or more users that are associated with the selected project, and that have approval rights, to approve the job. After you complete the Job Wizard, the job must first be approved. If you selected more than one user to approve the job, each user must log into CIRT and approve the collection. Once all approvals are complete, you can execute the job. Creating and Managing Jobs General Job Wizard Tabs | 473 Computer and Network Share Filter Options When using a job to collect data, you can use filters to either include or exclude specified data. The Include and Exclude filter options are visible on the the Computers and Network Shares tabs on the following collection-type jobs: Collection Remediate Remediate Search and Review and Review Note: If you run an Resolution1 collection job in Security mode, you need to use an inclusion filter. If you run a Resolution1 collection job in Standard mode, no inclusion filter is needed. Report Only See Computers Tab on page 467. See Network Shares Tab on page 470. When you configure a filter for a job, you can save it as a template and load it in another occurrence. Note: If you submit the filter, and then decide later to edit the filter, you cannot change the filter name or save it as a template. Note: When a multi-path exclusion file filter is used, all paths after the first one are searched for responsive files. Creating and Managing Jobs General Job Wizard Tabs | 474 The following tables describe the filter options that are available. Meta Info Tab Option Description Filter Name (Required) The name of the new file include filter. Extension(s) Includes files by extension. You can separate multiple extensions with a comma. For example, bmp,jpg,png. You can use an asterisk (*) as a wildcard. Path Contains Includes any folder with the designated name in the path. For example, if you added “confidential”, it would include all folders with “confidential” in the path. File Size Includes files based on file size. You can designate file size ranges using Is, Greater Than, or Less Than and on an associated file size in bytes, kilobytes, or megabytes. File Creation Date Includes files based on any date, a specific creation date, or a data range. File Modified Date Includes files based on any edit date, a specific edit date, or an edit data range. File Last Accessed Date Includes files based on any last accessed date, a specific last accessed date, or a last accessed date range. Save Filter As Template Lets you save the configured filter as a template so that you can reuse it in other jobs. Creating and Managing Jobs General Job Wizard Tabs | 475 File Content Tab (Include filter only) Option Description Keywords Drop-down list Lets you include files that match any, all, or regular expression keywords that you have entered in the keyword text field. Keyword text field Lets you enter text, patterns of data (regular expressions), or hexadecimal values. When writing queries for the Keyword(s) field, use the terms AND or OR to help refine your search. For example:  Apple AND orange returns files with both terms apple and orange. 
 Apple OR orange returns files with either the term apple or orange.  (Apple AND orange) OR (banana) returns files with either the terms apple and orange or files with the term banana.  ‘Apple and orange’ OR banana returns files with either the term apple and orange or files with the term banana. For more information on regular expressions, see the Regular Expressions Reference Guide at http://www.accessdata.com/regular-expressions (last accessed 1/15/2014) Search File name only Lets you narrow the keyword filter to search only the file name. Luhn Options Credit Card Numbers Custom Includes credit card numbers using Luhn testing. Luhn testing distinguishes valid credit card numbers from what could be a random selection of digits. Includes a custom regex expression. To filter by regular expressions, check Custom, and then enter the regular expression delimiters. For example: \d\d\d\d. Note: You are not able to use dashes when creating a custom regex expression. For example: \d\d\d\-\d\d\-\d\d\d\d Save Filter As Template Lets you save the configured filter as a template so that you can reuse it in other jobs. MD5 Tab Option Description MD5 hash list box Lets you add MD5 hash values to the MD5 list box. The added values are included in the job. Import Hash List Lets you browse and open an MD5 hash value file into the MD5 hash list box. Save Filter As Template Lets you save the configured filter as a template so that you can reuse it in other jobs. Creating and Managing Jobs General Job Wizard Tabs | 476 Approving a Job Each Job has to be approved before it can be executed. Select By Role to allow any user with specified roles to approve the job, or select specific users from the User List. See Adding a Job on page 455. See Executing a Job on page 477. To approve a job 1. Log in to CIRT if you are a user who has been grant permission to give approval to a specific job. 2. Click Jobs. 3. In the Jobs list pane, highlight a job that has not yet been approved. 4. In the right pane, click Approve . Executing a Job You can execute a job after it is approved. Executing a job begins the process of collecting the data that meets any filter or keyword criteria that you configured in the Job Wizard. See Adding a Job on page 455. See Approving a Job on page 477. To execute a job 1. Log in if you are a user who has been granted permission to execute a specific job. 2. Click Jobs. 3. In the Jobs list pane, highlight a job that has not yet executed. 4. In the right pane, click Execute. Creating and Managing Jobs Approving a Job | 477 Processing a Job When you add a job, you have the option of having the job automatically processed. See Job Options Tab on page 457. If you do not enable this options, you can process a job after it is executed. See Executing a Job on page 477. To process a job 1. If not already, log in as a user who has been granted permission to approve a specific job. 2. Select the project that has the job that you want to process. 3. In the Jobs list pane, highlight a job that has not yet been processed. 4. In the right Information pane, click Process. If a job has already been processed, you can reset the processing status. To reset the processing status 1. If not already, log in as a user who has been granted permission to approve a specific job. 2. Select the project that has the job that you want to reset the process. 3. In the Jobs list pane, highlight the processed job that you want to reset. 4. Click Reset Processing Status in the Information pane. Reset Processing Dialog 5. 
In the Reset Processing dialog, select whether you want the status to be reset to either Not Started or Completed. 6. Select between the Collection Status Only or Collection and Sub Items option. 7. Click OK. Creating and Managing Jobs Processing a Job | 478 Using Job Reports You can use Job Reports to generate various predefined reports with detailed information about collected files, emails, file statistics, remediated files, and so forth. You can download a job report in the Excel spreadsheet format (.xls) format. The job reports available to you depends on the type of job that you ran. You can view the following types of reports for a given job. Available Job Reports Job Type Available Reports Agent Remediation   Combination Job       Computer Software Inventory    Creating and Managing Jobs Job Details Report: Displays comprehensive information on Agent Remediation Job options that were applied when the job was created. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Combination Job options that were applied when the job was created. Search and Review Report: Displays information on job results for the Search and Review portion of the job. Volatile Report: Displays information on job results for Volatile portion of the job. Memory Operation Report: Displays information on job results for Memory Operation portion of the job. Computer Software Inventory Report: Displays information on job results for Computer Software Inventory portion of the job. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Computer Software Inventory Job options that were applied when the job was created. Software Inventory Report: Details the software inventory results of the job. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Using Job Reports | 479 Available Job Reports (Continued) Job Type Available Reports Memory Operation 1. Memory Acquisition  Job Details Report: Displays comprehensive information on Memory Operation (Memory Acquisition) Job options that were applied when the job was created.  Memory Acquisition Report: Displays information on job results for Memory Acquisition job.  Full Error Report: Displays a breakdown of failed targets and the errors associated to them. 2. Memory Analysis  Job Details Report: Displays comprehensive information on Memory Operation (Memory Analysis) Job options that were applied when the job was created.  Memory Analysis Report: Displays information on job results for Memory Analysis job.  Full Error Report: Displays a breakdown of failed targets and the errors associated to them. 3. Process Dump  Job Details Report: This report gives comprehensive information on Memory Operation (Process Dump) Job options that were applied when the job was created.  Process Dump Report: This report gives information on job results for Process Dump job.  Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Metadata Only     Network Acquisition   Remediate   Remediate and Review    Creating and Managing Jobs Job Details Report: Displays comprehensive information on Metadata Only Job options that were applied when the job was created. Job Results: Displays information on job results for the job. 
Keyword Search Report: Displays information on file and keyword statistics that contained the keywords that were defined in the Include Filter section of job wizard. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Network Acquisition Job options that were applied when the was created. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Remediate Job options that were applied when the job was created. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Remediate and Review Job options that were applied when the job was created. Job Results: Displays information on job results for the job. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Using Job Reports | 480 Available Job Reports (Continued) Job Type Available Reports Removable Media Monitoring    Reports Only    Search and Review    Volatile    Job Details Report: Displays comprehensive information on Removable Media Monitoring Job options that were applied when the job was created. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Reports Only Job options that were applied when the job was created. Job Results: Displays information on job results for the job. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Search and Review Job options that were applied when the job was created. Job Results: Displays information on job results for the job. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Job Details Report: Displays comprehensive information on Volatile Job options that were applied when the job was created. Job Results: Displays information on job results for the job. Full Error Report: Displays a breakdown of failed targets and the errors associated to them. Running the Job Details Report All jobs have the Job Details report available. To run the Job Details report 1. On the Home page, click the Jobs tab. 2. In the Jobs list pane, select a job. 3. In the lower pane, click Reports 4. Click Job Details > Download to view the report. . Running the Job Results Report The Job Results report is available for Search and Review and Volatile jobs. You can generate a job results report once a job begins collecting and at least one job target status is also collecting. To run the Job Results report 1. On the Home page, click the Jobs tab. 2. In the Jobs list pane, select a job. 3. In the lower pane, click Reports 4. Click Job Results > Download to download the report. Creating and Managing Jobs . Using Job Reports | 481 5. Click Job Results > View to view the report. Running the Full Error Report The Full Error Report shows a break down of failed targets and the errors associated to them. You can generate a full error report on a completed job where one or more targets have failed. To run the Job Results report 1. On the Home page, click the Jobs tab. 2. In the Jobs list pane, select a job. 3. In the lower pane, click Reports 4. 
Click Full Error Report > Download to download the report. 5. Open the report. . Retrieving Reports for Deleted Jobs You can retrieve Job reports, System logs, and Activity logs for jobs that have been deleted. You can retrieve the logs by navigating to a folder that you have specified in the web.config file. In order to enable this feature, you must edit the web.config file. You can find the web.config file at C:\Program Files\AccessData\MAP\Web.config . In the web.config file, locate the . For the PersistLogsToPath value, enter a path to where you would like to save the logs. Note: Only previously generated reports for a job are available after a job has been deleted. Creating and Managing Jobs Using Job Reports | 482 Using Job Notifications About Managing Notifications for a Job You can use Manage Notifications to set up a list of subscribers to email notifications for a given target job or target project, and an event type such as when job processing is completed. Target types and their associated event types include the following: Notification Type Target type Associated event types Projects     Jobs Job Approved Job Completed Job Created Processing Completed Job Approved Job Completed  Processing Completed See Creating Job Notifications on page 483. See Deleting Job Notifications on page 484.   System     User Created User Deleted Project Created Project Deleted Before you can have email notifications sent for a job event, you must first make sure that you have configured the email notification server that you want to use. See Configuring the Email Notification Server on page 80. Creating Job Notifications After you create a job notification, you can view all the notifications that you have created by going to the Manage Notification Subscriptions view available from the Home page. See About Managing Notifications for a Job on page 483. To create job notifications 1. On the menu bar, click Jobs. 2. In the Jobs list pane, check one or more jobs whose events you want to target for email notification. 3. In the lower left area of the project list pane, click 4. In the Create Event Notification page, select a notification event type from the drop-down list. 5. In the Select Users to Notify group box, check the users who will receive the notification email message. 6. Click Create Event Notification. Creating and Managing Jobs . Using Job Notifications | 483 Deleting Job Notifications You can delete job notifications that you created or job notifications that you are subscribed to. See About Managing Notifications for a Job on page 483. To delete job notifications 1. On the Home page, in the Project List panel, click 2. Do one or more of the following: . In the Notifications I Created group box, under the Notification Type column header, check the job notifications that you want to delete. In the Notification I Belong To group box, under the Notification Type column header, check the job notifications that you want to delete. 3. Click . 4. In the Confirm Deletion dialog box, click OK. Creating and Managing Jobs Using Job Notifications | 484 Using Job Templates and Filter Templates Managing Job Templates and Filter Templates You can view and delete job templates and filter templates that you created for jobs. See Job Options Tab on page 457. To view and delete templates 1. On the Home page, select the project that has the job that want to create a template for. 2. In the Jobs list pane, select a job. 3. Click Manage Templates at the bottom of the upper right pane. 
Manage Templates Dialog 4. Click the Job Templates tab. 5. Select the template from the list and click 6. Click Close. Creating and Managing Jobs Delete. Using Job Templates and Filter Templates | 485 Deleting Job Templates You can delete job templates that you create for jobs from the Jobs tab on the Home page. To delete jobs 1. On the Home page, click Jobs. 2. Click the Manage Job Templates button . Manage Templates Dialog 3. Select the job template from the list and click the delete button 4. Click Close. . Default Job Templates In addition to creating your own job templates, you can choose from a list of default job templates that is available in the application. The following table lists the job templates available. Default Job Templates Template Description Coll-evtx Executes a collection job that collects all the evtx (Windows Event log) files in the Windows/System32 folder. Deep IR Executes a volatile job that searches processes, sockets, DNS Cache,browser history, DLL, users, prefetch, filesystem, registry, and event logs. Creating and Managing Jobs Using Job Templates and Filter Templates | 486 Default Job Templates Template Description Drop Process by PID Executes a Process Dump/Memory Operations job for a PID specified by the user. EXE-Metadata-Cerb Executes a metadata only job on all executables in the Windows\System32 folder and performs a Cerberus score. File System Enumeration - Metadata Executes a metadata only job that retrieves directory and file system information. IR Triage Executes a volatile job that searches for DLLs and shared libraries, users, prefetch, sockets, and DNS. Lockdown NIC Executes an agent remediation that executes a script on the agent machine to disable its NIC card. LockdownEnableNIC Executes an agent remediation job that executes a script to disable the NIC card on the agent for four hours. After four hours, the NIC is enabled. Memory Acquisition Executes a memory acquisition job that includes a page file and creates an archive file. Memory Analysis Executes a memory analysis job collecting DLLs, Drivers, Handles, Registry, Sockets, and VAD information. Registry-Autostart Executes a volatile job collecting only the Autostart information. Registry-Full Executes a volatile job collecting all the registry information. Remediate-Name Executes an agent remediation job to stop a process by a name specified by the user. Remediate-PID Executes an agent remediation job to stop a process by the PID specified by the user. Small-exes-cerb Executes a collection job that looks for any executable file that is under 250kb in size. Software Inventory Executes a software inventory job. Vol-Deep Executes a volatile job with all of the options selected except registry. Vol-Deep-Cerb Executes a volatile job with all of the options except registry. The job performs Cerberus scoring on running processes. Vol-Hidden-Cerb Executes a volatile job that searches for hidden processes and performs a Cerberus score. Vol-Hidden-Injected Executes a volatile job that searches for hidden processes and injected DLLs. Vol-Quick-Cerb Executes a volatile job that searches for just processes and DLLs and performs a Cerberus score. Vol-Quick-Sched Executes a volatile job that searches for just processes and DLLs running every five minutes for five times. Creating and Managing Jobs Using Job Templates and Filter Templates | 487 Additional Job Tasks Testing the Collection Workflow You can test the collections workflow to insure that everything is collecting properly. 
To test the collections workflow 1. On the Home page, select the project that has the job that you want to check the collection workflow. 2. In the Jobs list pane, select a job or jobs. 3. Click Test Collection Workflow at the bottom of the Jobs list pane. Note: This process could take up to 30 seconds to execute. 4. Click OK. Stopping a Job You can stop active jobs after they have been approved and executed. When you stop a job, the Job Status column in the Jobs list pane does not immediately show “Canceled.” Instead, the status shows “Canceling” until the task is complete. See Deleting Jobs on page 490. To stop a job 1. On the menu bar, click Jobs. 2. In the Jobs list pane, check a job you want to cancel. 3. In the lower left corner of the Jobs list pane, click 4. Click Yes. . Note: Stopping an already executed job (completed) results in a dialog box that says "There are no jobs to cancel. None of the selected jobs are executing." Resubmitting a Job You can resubmit a job if it has failed, the computer has restarted, some of the items in the job did not complete, or you want to add incremental data. To resubmit a job 1. On the menu bar, click Jobs. 2. In the Jobs list pane, check a job name. 3. In the lower left corner of the Jobs list pane, click Creating and Managing Jobs . Additional Job Tasks | 488 4. In the Resubmit Job dialog, set the options that you want. The following table describes the available options. Resubmit Job Dialog Resubmit Collection Options Option Description New Job Name Specify a new name for the job. Item Options Include Failed Items Only Collects only targeted items that have failed for various reasons, such as no connection. Include all Incompleted Items Only Collects only targeted items that do not have a “Completed” status. The status may be Collecting, Queued, Waiting for Retry, Cancelled, Terminated, and so forth. Include all Failed Files (Shares Only) Tries to collect only failed files that reside on a network share. Copy Job Recollects all the originally targeted items. Resubmit Type Full (Recommended) Reruns the entire job again and gathers all hung, new, or modified data. Incremental Reruns the job, but only gathers new or modified data since the last collection. 5. Click Create Job. Editing a Job You can edit a job only if it has not yet been approved or executed. If a job is already approved or executed, you can only view the job’s settings. See Approving a Job on page 477. See Executing a Job on page 477. Creating and Managing Jobs Additional Job Tasks | 489 To edit a job 1. On the menu bar, click Jobs. 2. In the Jobs list pane, highlight a job name. 3. In the task pane, click 4. In the Edit Job page, open the desired panel of the wizard, and then set the options that you want. 5. Click Save to return to the Jobs list pane where you can select the job, approve it, and then execute it. . Deleting Jobs You can delete one or more jobs from the Jobs list view. You should use caution when you use this feature because a selected job may be active. If a job is active and you delete it, the Work Manager may stop. Note: There may be a delay between the time you delete the job and the time that the program updates the overall project size. You can still proceed with your work while the program is updating the project size. See Stopping a Job on page 488. To delete jobs 1. On the menu bar, click Jobs. 2. Do one of the following: In the Jobs list pane, highlight a job name you want to delete. In the right side of the upper pane, click . 
In the Jobs list pane, check one or more jobs that you want to delete. In the lower left corner of the Jobs list pane, click the delete button.
3. (Optional) In the Confirm Deletion pane, check Keep Archive to keep an archive record of the jobs, and remove the jobs from the user interface.
4. Click OK.

Chapter 42 Configuring Third Party Data Sources
Depending upon your license, you can access third party data sources for data. To access these sources, you need to configure job options in the Other Data Sources pane in the Job Options tab in the Job Wizard. The following jobs access third party data sources:
Collection. See Introduction to the Resolution1 eDiscovery Collection Job on page 452.
Metadata only. See Introduction to Security Jobs on page 392.
Remediate. See About Remediation on page 392.
Remediate and Review. See About Remediation on page 392.
Report only. See Introduction to Security Jobs on page 392.
Search and Review. See Introduction to Security Jobs on page 392.
Note: Before you can access third party data sources with a job, you need to configure the application to connect to the third party data source in Data Sources.
When configuring job options, you may configure the following Other Data Sources.
Other Data Sources Job Options
Cloud Mail: Lets you select data sources from a cloud mail server. See Cloud Mail Collection Options for People on page 494.
CMIS: Lets you select data sources from a server connected by CMIS. See CMIS Collection Options on page 517.
Documentum: Lets you select data sources from a Documentum server. See Documentum Collections Options on page 496.
DocuShare: Lets you select data sources from a DocuShare server. See DocuShare Collection Options on page 498.
Domino: Lets you select data sources from a Domino server. See Domino Collection Options on page 495.
Druva: Lets you select data sources from a Druva server. See Druva Collection Options on page 515.
Enterprise Vault Server: Lets you select data sources from an Enterprise Vault Server or select from a particular person on an Enterprise Vault Server. See Enterprise Vault Server Collection Options on page 500.
Exchange: Lets you collect Exchange emails from a person. See Collecting Exchange Emails for Custodians on page 503.
Exchange Public Folder: Lets you collect data sources from an Exchange Public Folder. See Exchange Public Folder Collection Options on page 505.
FileNet: Lets you select data sources from a FileNet server. See FileNet Collection Options on page 506.
Gmail: Lets you select data sources from a Gmail server.
Google Drive: Lets you select data sources from a Google Drive. See Google Drive Collection Options on page 507.
OpenText ECM: Lets you select data sources from an OpenText ECM server. See OpenText ECM Collection Options on page 508.
Oracle URM: Lets you select data sources from an Oracle URM. See Oracle URM Collection Options on page 509.
SharePoint: Lets you select data sources from a SharePoint server. See SharePoint Collection Options on page 511.
Website: Lets you select data sources from a website through a Google account. See Website Collection Options on page 514.

Other Data Sources Filter Options
When using a job to collect data, you can use filters to either include or exclude specified data. You are not required to configure filters to complete a job.
If you do not configure any filters, the application collects all the files in the data storage locations. You configure filters by expanding the Filters panel on the wizard page and then clicking or to add or edit an Include or Exclude filter. Configuring Third Party Data Sources Other Data Sources Filter Options | 493 Cloud Mail Collection Options for People You can collect cloud mail for Custodians. To collect, select People and Select Person’s Cloud Mail in the Job Target Options group box in the Job Wizard. When you collect the mail, you may notice a discrepancy in the email count between collecting from an POP server and collecting from an IMAP server. It might seem that there is more email collected from the IMAP server than the POP server. The reason is because of the difference between the way IMAP handles email compared with the way POP handles email. If there is an email sent on an IMAP server that has the same To: address as the Sent From: address (For example, if you had sent an email to yourself), IMAP will store a copy of the email in two separate locations: one in the To: folder, and one in the Sent From: folder. POP will only store one copy of the email. Configuring Third Party Data Sources Cloud Mail Collection Options for People | 494 Domino Collection Options The Domino tab lets you collect Notes email for the highlighted custodian. In the Domino pane, you can do the following: Include Notes Collect Notes email for the highlighted custodian in the list box. Collect Folders Collect email folders on the highlighted custodian in the list box. Note: You should not put spaces in a comma-delimited list of folders that you want to collect. Collect Non-Email Data Collect non-email data, such as task items or calendar items, on the highlighted custodian. Domino Filters Filter the collected emails by variables such as subject, creation date, or keywords. You can customize the filters, edit them, and delete them. You must select a custodian from the Custodians pane before you can select any of the above options. When dealing with a Domino Server, you should understand that Domino differentiates between internet email servers and other email servers. As an administrator, you need to make sure that you have the correct value listed in the Domino filter when setting up collecting with a Domino server. To obtain the values for the Domino filter 1. On your Domino server, select an email from the user you want to define as a Domino custodian in Resolution1 eDiscovery. 2. Right click the user. 3. The Domino server will display a fields tab and the values associated with those fields. 4. Highlight and copy the value string of the field that you want to edit. Note: On the Domino server, the value string for a sender’s email server is listed in From under the Fields tab, while the value string for a sender’s internet email server is listed in INetfromfield under the Fields tab. Domino Email Values To set up email values in the Domino filter 1. In the Custodians option, under Job Wizard, select the custodian that you want. 2. Select the Domino tab. 3. Check Include Notes. 4. Select Domino Filters. 5. In the Include group box, click 6. Enter the value string in the Senders’ Internet Email and Senders’ Email fields. 7. Click OK. Configuring Third Party Data Sources Add. The Include dialog appears. 
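The following is a brief illustration of the value formats used by the Domino options described above. The folder names and addresses shown here are hypothetical examples only; copy the actual value strings from the Fields tab on your own Domino server as described in the procedure above.

Collect Folders: Inbox,Sent,Drafts
Senders' Email: CN=Jane Doe/OU=Sales/O=Example (value copied from the From entry under the Fields tab)
Senders' Internet Email: jane.doe@example.com (value copied from the INetfrom entry under the Fields tab)

Note that the Collect Folders list is comma-delimited with no spaces between entries, as recommended in the note above.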
Domino Collection Options | 495 Documentum Collections Options This option appears only if you click Custom, and then check Documentum in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured the Documentum data source. See Configuring for a Documentum Server on page 157. In the Documentum panel, you can select a server that you want to collect from. Documentum Include and Exclude Filters You also have the option to configure the Documentum filters. You can customize filters to include or exclude certain variables. Documentum Filters Documentum Filters Option Description Filter Name (Required) The name of the new filter. Configuring Third Party Data Sources Documentum Collections Options | 496 Documentum Filters (Continued) Option Description Cabinet(s) The name of the cabinet that you are collecting from. Author(s) Filters files based on the author(s). Owner Filters files based on the owner. Creator Filters files based on the creator. Keyword(s) Filters files based on keywords. Modified By: Filters files based on the Modified By: field. Name Filters files based on the name. Extension(s) Filters files by extension. You can separate multiple extensions with a comma. For example, bmp,jpg,png. You can use an asterisk (*) as a wildcard. File Size (bytes) Filters files based on file size. You can designate file size ranges using Equals, Not Equals, Greater Than, or Less Than size in bytes. Subject Filters files based on the subject. Title Filters files based on the title. File Creation Date Filters files based on any date, a specific creation date, or a data range. File Modified Date Filters files based on any edit date, a specific edit date, or an edit data range. Configuring Third Party Data Sources Documentum Collections Options | 497 DocuShare Collection Options This option appears only if you click Custom, and then check DocuShare in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured the DocuShare data source. See Configuring for a DocuShare Server on page 164. In the DocuShare panel, you can select a server from which you want to collect. DocuShare Include and Exclude Filters You also have the option to configure the Docushare Filters. You can customize filters to include certain values or exclude certain values. DocuShare Filters Configuring Third Party Data Sources DocuShare Collection Options | 498 DocuShare Filters Option Description Filter Name (Required) The name of the new filter. Author(s) Filters files based on the author(s). Keyword(s) Filters files based on keyword(s). Description Filters files based on content in the description File Type Filters files based on file type. Handle Filters files based on the handle. Keyword Property Filters files based on keyword property. Modified By Filters files based on the Modified By: field. Owner Filters files based on the owner. File Size (bytes) Filters files based on file size. You can designate file size ranges using Equals, Not Equals, Greater Than, or Less Than size in bytes. Summary Filters files based on the content in the summary. Title Filters files based on the title. File Name Filters files based on the file name. File Creation Date Filters files based on any date, a specific creation date, or a data range. File Modified Date Filters files based on any edit date, a specific edit date, or an edit data range. 
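Because the File Size filters in the Documentum and DocuShare tables above take values in bytes, convert larger units before entering a threshold. For example, assuming binary units (1 KB = 1024 bytes), a 10 MB limit would be entered as Greater Than or Less Than 10485760, since 10 x 1024 x 1024 = 10,485,760 bytes; a 500 KB limit would be 500 x 1024 = 512,000 bytes.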
Configuring Third Party Data Sources DocuShare Collection Options | 499 Enterprise Vault Server Collection Options The Enterprise Vault Server options appear only if you click Custom and then check Enterprise Vault Server in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a Enterprise Vault Server data source. Configuring for an Enterprise Vault Server (page 149) In the Enterprise Vault Server panel, you can select an Enterprise Vault Server, Enterprise Vault Store, and Unassociated Archives. Enterprise Vault Include and Exclude Filters You also have the option to configure the Email Archive Filters or the File Archive Filters. You can customize filters to include certain values or exclude certain values. Email Archive Filters for Enterprise Vault Server Enterprise Vault Email Archive Filters Option Description Filter Name (Required) The name of the new filter. BCC’s Email Filters files based on the BCC’s email. CC’s Email Filters files based on the CC’s email. Keyword(s) Filters emails based on keyword(s). Configuring Third Party Data Sources Enterprise Vault Server Collection Options | 500 Enterprise Vault Email Archive Filters (Continued) Option Description Apply Keywords Applies keywords entered in the Keyword field by content, attachments, or both. Recipient’s Email Filters files based on recipient’s email. Sender’s Email Filters files based on sender’s email. Senders Names Filters files based on the senders names. Subject Filters files based on the subject. Mailbox Folder Name Filters files based on the mailbox folder name. Created Date Filters files based on the created date. You can filter by a single date, a range of dates, or any date. File Archive Filters for Enterprise Vault Server Enterprise Vault File Archive Filters Option Description Filter Name (Required) The name of the new filter. Extension(s) Filters files by extension. You can separate multiple extensions with a comma. For example, bmp,jpg,png. You can use an asterisk (*) as a wildcard. File Size (bytes) Filters files based on file size. You can designate file size ranges using Equals, Not Equals, Greater Than, or Less Than size in bytes. Keywords Filters files based on keywords. Configuring Third Party Data Sources Enterprise Vault Server Collection Options | 501 Enterprise Vault File Archive Filters (Continued) Option Description Apply Keywords Applies keywords entered in the Keyword field by content, attachments, or both. File Creation Date Filters files based on any edit date, a specific edit date, or an edit data range. File Modified Date Filters files based on any edit date, a specific edit date, or an edit data range. Configuring Third Party Data Sources Enterprise Vault Server Collection Options | 502 Collecting Exchange Emails for Custodians The Exchange tab lets you collect Exchange email for the highlighted custodian. The data that you can collect from a server depends upon the version of Exchange server that you are collecting from. A custodian must be associated to an Exchange server before you can collect from that server. See Configuring for an Exchange Online/365 Server on page 141. See Configuring for Exchange 2003, 2007, and 2010 Servers on page 142. See Configuring for Exchange 2010 SP1 and 2013 Servers on page 144. See Configuring for an Exchange Index Server on page 147. Collecting Data from an Exchange Server To collect Exchange email from a custodian 1. 
In Job Wizard, under Custom Selection, select People and Select Person’s Exchange. 2. Click Next. 3. Select the person or people that you want to collect from using Exchange. 4. Under Exchange tab, click Include Exchange. 5. Populate the Include Exchange fields. 6. Select Next. Exchange Collection Options The following table describes the fields that are available in the Include Exchange panel. Note: When collecting from Exchange public folders, Include and Exclude filters will only work with Exchange 2013. Attempting to use a filter when collecting from public folders from earlier versions of Exchange will result in job target failure. Include Exchange Fields Field Description Exchange MAPI MAPI (Messaging Application Programming Interface) data is available from Exchange 2003, 2007, and 2010 servers. Depending upon which servers the custodians are associated with, both MAPI and EWS options may be available. If only a server is set up to collect MAPI data only, only MAPI data options will be available. Complete Mailbox Creates a local, unfiltered PST containing the full contents of the custodian's mailbox. With this option, you can collect additional data besides the custodian’s mailbox. Configuring Third Party Data Sources Collecting Exchange Emails for Custodians | 503 Include Exchange Fields Field Description Filtered Mailbox Allows you to filter email by variables such as subject, creation date, or keywords.You can customize the filters, edit them, and delete them. If filters are not set, the complete mailbox will be collected. Note: Only custodians that have been indexed can successfully use this option. See Configuring for an Exchange Index Server on page 147. Include Dumpster Allows you to collect emails that are soft-deleted. Collect Non-Email Data Allows you to collect non-email data, such as task items or calendar items associated with that custodian. Exchange Web Services Exchange Web Services data is available from Exchange Online/365, Exchange 2010 SP1, and 2013 servers. Depending upon which servers the custodians are associated with, both MAPI and EWS options may be available. Note: Resolution1 eDiscovery does not support EWS data for Exchange 2010. Only MAPI data can be collected from Exchange 2010. Apply Filter Allows you to apply the Exchange filters to the EWS data. Include Recoverable Deletes Allows you to collect deletions. Deletions are enabled by default in Exchange.There’s no need to specify a folder path because there is no folder structure retained for those items. Include Recoverable Purges Allows you to collect purges (hard deletes) of data. In order to collect purges from an Exchange server, enable purges in the Exchange server. There is no need to specify a folder path because there is no folder structure retained for those items. Include Recoverable Versions Allows you to collect versions of data that have been saved. In order to collect versions from an Exchange server, enable versions in the Exchange server. There is no need to specify a folder path because there is no folder structure retained for those items. Include Archive MailBox Allows you to collect from an archive mailbox. Mailbox Folder Path(s) Specifies the mailbox folder to collect from an archive mailbox. In the field, you can put in the exact path of the destination of the mailbox, a root path of the destination, or you can put in a keyword. If you use a keyword, the application will collect from every mailbox with the keyword. 
Note: Each mailbox folder path, and its options, is assigned per custodian, so if you need to have multiple custodians with the same job target, you need to define the mailbox folder and options under each custodian. Configuring Third Party Data Sources Collecting Exchange Emails for Custodians | 504 Exchange Public Folder Collection Options The Exchange Public Folder options appear only if you check Exchange Public Folder in the Other Data Sources group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a Exchange Server data source. In the Exchange Public Folder panel, you can select a server that you want to collect from. Exchange Include and Exclude Filters You also have the option to configure the Exchange Public Folder filters. You can customize filters to include or exclude certain variables. Exchange Public Folder Include Filter Exchange Public Folder Filters Option Description Filter Name (Required) The name of the new filter. Keywords Filters files based on keywords. Title Filters files based on the title. Creation Date Filters files based on any edit date, a specific edit date, or an edit data range. Configuring Third Party Data Sources Exchange Public Folder Collection Options | 505 FileNet Collection Options The FileNet options appear only if you click Custom and then check FileNet in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a FileNet data source. Configuring for a FileNet Server (page 169) In the FileNet panel, you can select a FileNet host. FileNet Include and Exclude Filters You also have the option to configure the FileNet filters. Include Filter for FileNet FileNet Filters Option Description Filter Name (Required) The name of the new filter. Creator Filters files based on the creator . Keywords Filters files based on keywords. Title Filters files based on the title. File Creation Date Filters files based on any edit date, a specific edit date, or an edit data range. Configuring Third Party Data Sources FileNet Collection Options | 506 Google Drive Collection Options The Google Drive option appears only if you click Custom and then check Google Drive in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a Google Drive data source. Note: The Google Drive connector will only collect documents that have been created in Google Drive. It will not collect documents that have been uploaded to Google Drive from other sources, such as Microsoft Word or Excel files. Configuring for Google Drive (page 171) In the Google Drive panel, you can select a server from which you want to collect. Google Drive Include and Exclude Filters You also have the option to configure the Google Drive Filters. You can customize filters to include or exclude certain keywords. Note: For Google Drive, the exclude filters are ignored Make sure to separate multiple keywords by commas. Google Drive Filters Google Drive Filters Option Description Filter Name (Required) The name of the new filter. Keyword(s) Filters files based on keywords. Configuring Third Party Data Sources Google Drive Collection Options | 507 OpenText ECM Collection Options The OpenText ECM options appear only if you click Custom and then check OpenText ECM in the Job Target Options group box in the Job Options screen of the wizard. 
In order to make any selections, you must have already configured a OpenText ECM data source. Configuring for Cloud Mail (page 166) In the OpenText ECM panel, you can select a OpenText ECM repository. OpenText ECM Include and Exclude Filters You also have the option to configure the OpenText ECM filters. You can customize filters to include certain values or exclude certain values. Include Filter for OpenText ECM FileNet Filters Option Description Filter Name (Required) The name of the new filter. Creator Filters files based on the creator . Keyword(s) Filters files based on keywords. Title Filters files based on the title. File Creation Date Filters files based on any edit date, a specific edit date, or an edit data range. Configuring Third Party Data Sources OpenText ECM Collection Options | 508 Oracle URM Collection Options This option appears only if you click Custom, and then check Oracle URM in the Job Target Options group box in the Job Options group box of the wizard. In order to make any selections, you must have already configured the data sources. See Configuring for a Oracle URM Server on page 155. In the Oracle URM panel, you can select an Oracle URM repository. Oracle URM Include and Exclude Filters You also have the option to configure the File Filters. You can customize filters to include certain values or exclude certain values. Oracle URM File Filters Oracle URM Filters Option Description Filter Name (Required) The name of the new filter. Author(s). Filters files based on author(s). Configuring Third Party Data Sources Oracle URM Collection Options | 509 Oracle URM Filters Option Description Keyword(s) Filters files based on keyword(s). You can filter by any keyword or all keywords. Title Filters files based on title. Initial Check-in Filters files based on initial check-in. You can filter by any check-in, a single check-in, or a range of check-ins. Document Specification Filters files based on the document specification. Freeze Name Filters files based on the freeze name. Profile Trigger Filters files based on the profile trigger. Security Account Filters files based on the security account. Document Category (L1) Filters files based on the first level of the documentary categories. Document Category (L2) Filters files based on the second level of the documentary categories. Document Category (L3) Filters files based on the third level of the documentary categories. Amount Filters files based on the amount. Last Check-in Filters files based on the last check-in. You can filter by any check-in, a single check-in, or a range of check-ins. Configuring Third Party Data Sources Oracle URM Collection Options | 510 SharePoint Collection Options The SharePoint options appear only if you click Custom and then check SharePoint in the Job Target Options panel earlier in the wizard. In order to make any selections, you must have already configured a SharePoint data source. See Configuring for a SharePoint Server on page 159. You can select the Top-Level Site URL(S) and SubSites. For the SubSite, you can select to include the following: Select SharePoint Collection Type Options Option Description Top-Level Site URL(S) list box Lists all the Top-Level Site URL(S) that you can select to add to the job to collect from. This list is populated based on settings in the Data Sources tab. See Configuring for a SharePoint Server on page 159. Filter Options Lets you filter the information in the associated list pane. See Managing Columns in Lists and Grids on page 36. 
SubSites list box Lists all the SubSites that you can select to add to the job to collect from. This list is populated based on settings in the Data Sources tab. See Configuring for a SharePoint Server on page 159. SubSite Options Include Blog Collects blog data within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. You can choose to include the whole page in collecting or not. Include Discussion Board Collects discussion board data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. You can choose to include the whole page in collecting or not. Include Wiki Collects wiki data within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Include Document Library Collects document data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Include Calendar Collects calendar data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Include Contacts Collects contacts data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Include Tasks Collects tasks data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Include Announcements Collects announcements data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Include Survey Collects survey data from within the specified root path of a highlighted individual site or a team site in the SharePoint Site URL list. Configuring Third Party Data Sources SharePoint Collection Options | 511 Sharepoint Include and Exclude Filters You also have the option to configure the File Filters. You can customize filters to include certain values or exclude certain values. Sharepoint Filters Option Description Filter Name (Required) The name of the new filter. Extension(s) Filters files by extension. You can separate multiple extensions with a comma. For example, bmp,jpg,png. You can use an asterisk (*) as a wildcard. URL Contains Filters any URL with the designated name in the path. File Size (bytes) Filters files based on file size. You can designate file size ranges using Equals, Not Equals, Greater Than, or Less Than size in bytes. Title Filters files based on the title. Author(s) Filters files based on the author(s). Editor(s) Filters files based on editor(s). Content Type Filters files based on the content type. Configuring Third Party Data Sources SharePoint Collection Options | 512 Sharepoint Filters (Continued) Option Description Keyword(s) Filters files based on keywords. Name Filters files based on the name. File Creation Date Filters files based on any date, a specific creation date, or a data range. File Modified Date Filters files based on any edit date, a specific edit date, or an edit data range. Configuring Third Party Data Sources SharePoint Collection Options | 513 Website Collection Options The Website option appears only if you check Website in the Other Data Sources pane in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a website data source. See Configuring for Websites on page 162. In the Website panel, you can select a website from which you want to collect. 
There are no filters available for websites. Configuring Third Party Data Sources Website Collection Options | 514 Druva Collection Options The Druva option appears only if you check Druva in the Other Data Sources pane in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a Druva data source. See Configuring for Druva on page 172. In the Druva panel, you can select a Druva server. Druva Include and Exclude Filters You also have the option to configure the File Filters. You can customize filters to include certain values or exclude certain values. You can add a filter, delete a filter, edit a filter, or load a saved filter. Druva Include Filter Druva Include Filter Options Option Description Filter Name (Required) The name of the new filter. Extension(s) Filter files by extensions. Specify whether the value(s) filtered equals or does not equal the data entered in the field. Separate multiple extensions by comma. Configuring Third Party Data Sources Druva Collection Options | 515 Druva Include Filter Options Option Description Path Contains Filter files by what values are contained in the path. Specify whether the value filtered is text or a regular expression. Separate multiple extensions by comma. File Size (bytes) Filter files by file size. Specify whether the value filtered is greater than, is, less than, or any value entered in the file size field. You can specify the size by bytes, kilobytes, and kilobytes. File Creation Date Filter files by file creation. Specify the data by a range of dates, a single date, or any specific date. File Modified Date Filter files by the time the file was modified. Specify the data by a range of dates, a single date, or any specific date. File Last Accessed Date Filter files by the last time the file was accessed. Specify the data by a range of dates, a single date, or any specific date. Save Filter as a Template Save the filter created as a template that can be loaded by other users. Configuring Third Party Data Sources Druva Collection Options | 516 CMIS Collection Options The CMIS Repository option appears only if you check CMIS in the Other Data Sources pane in the Job Target Options group box in the Job Options screen of the wizard. In order to make any selections, you must have already configured a CMIS Repository data source. See Configuring for a CMIS Repository on page 174. In the CMIS panel, you can select a CMIS Repository server. Checking Use Global Custom Filters allows the job to use a custom filter that you may have uploaded when configuring the application for CMIS collection. Note: The custom filter can combine with the Include and Exclude filters. The custom filter in combination with the Include/Exclude filters acts as an OR not AND. That is, data matching either the specifications in the Include/Exclude or in the custom filter. The data does not need to match both filters. CMIS Include and Exclude Filters You also have the option to configure the File Filters. You can customize filters to include certain values or exclude certain values. You can add a filter, delete a filter, edit a filter, or load a saved filter. Although there are many fields available in the Include and Exclude filters, not all fields available can be filtered. The values that are available for you to filter depends upon how you have set up your CMIS repository. 
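To illustrate the OR behavior of the Use Global Custom Filters option described above, consider a hypothetical job in which the uploaded custom filter matches only files with a pdf extension and an Include filter matches the keyword contract. Because the custom filter and the Include/Exclude filters combine as OR rather than AND (an item is collected if it matches the custom filter OR matches the Include/Exclude filters), a Word document containing the keyword contract is collected, a PDF that never mentions contract is also collected, and only items that match neither condition are excluded.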
Configuring Third Party Data Sources CMIS Collection Options | 517 Note: Please note that the user interface displays the application’s default filters. Not all of the values that are available in the filter apply to every CMIS repository. If you filter on a value that is not available in the CMIS repository, the collection job will fail. CMIS Include Filters CMIS Include Filter Options The following lists the options that are available to filter in the Include filter. Note: In the following table, if there is no description listed, the field cannot be searched and the job will fail. CMIS Include Filter Options Option Description Filter Name (Required) The name of the new filter. Name Filters files based on the name. Specify whether the data contains the value or not. Description Path Configuring Third Party Data Sources CMIS Collection Options | 518 CMIS Include Filter Options The following lists the options that are available to filter in the Include filter. Note: In the following table, if there is no description listed, the field cannot be searched and the job will fail. CMIS Include Filter Options Option Description Keyword(s) Filters files based on the keyword(s). Specify whether to filter on any of the keywords or all of the keywords. Creator Filters files based on the creator. Specify whether the data contains the value or not. Modified By Filters files based on the person who modified the file. Specify whether the data contains the value or not. File Creation Date Filters files based on the file creation date. Specify the data by a range of dates, a single date, or any specific date. File Modified Date Filters files based on the file modified date. Specify the data by a range of dates, a single date, or any specific date. Object ID Object Type ID Parent ID Version Label Version Series ID Content Stream Length Content Stream Mime Type Content Stream File Name Filters files based on the content stream file name. Specify whether the data contains the value or not. Content Stream ID Configuring Third Party Data Sources CMIS Collection Options | 519 Part 8 Using the Dashboard This part describes how to use the dashboard and includes the following section: Using the Dashboard (page 521) Using the Dashboard | 520 Chapter 43 Using the Dashboard About the Dashboard The Dashboard allows you to view important information in an easy-to-read visual interface. The Dashboard has different widgets that display the monitored data. You can configure widgets to show information about all projects or selected projects. There are two tabs to view this data, Alerts and Dashboard. See Alerts on page 521. See Dashboard on page 522. Alerts The Alerts Dashboard provides a list of threat alerts that have been received. You can also narrow your threat list and view details on any specific threat. See Viewing Alerts on page 566. Using the Dashboard About the Dashboard | 521 Alerts Widget: Dashboard You can choose how the data is presented in either a pie chart, a horizontal bar chart, or a vertical bar chart. Depending upon your license, you can view the following widgets: Dashboard widgets Element Description Lit Hold View the top Lit Holds assigned to people. View the days pending approval for top people.  View the days pending approval.  View the status of all holds.  View the Lit Holds assigned to the top IT staff. See the documentation on Lit Holds for more information.   Jobs View the percentage of jobs that have completed, completed with errors, or failed. See About Jobs on page 447. 
Lit Hold Widget:

Configuring Dashboard Widgets
The Dashboard tab has several widgets that display the monitored data. You can use the following elements to view and filter the data.
To view the Dashboard
1. Click the Dashboard tab at the top of the screen.
Elements of a Dashboard Widget
Widget Options: Click the gear icon to configure the following options:
Chart type button: Changes the appearance of the chart. You can choose to display the data in either pie, vertical bar, or horizontal bar chart form.
Purge Interval (Alerts only): The interval at which alerts are purged from the list.
Filter button: Filters the chart results by project. The button displays what projects are being filtered and displayed. See The Filter Case Chart Results Pane on page 524.
Refresh button: Refreshes the data in the widget. The button displays the last time that the data had been refreshed, either manually or automatically.

The Filter Case Chart Results Pane
In the Filter Case Chart Results pane, you can filter the items displayed in the widget.
Elements of the Filter Case Chart Results Pane
Filter by selected case(s): Allows you to search for a specific case. Click Filter to filter by the search terms.
Selected cases only: Posts only the selected projects to the widget. You can scroll down the project list and check the projects that you want to display.
Unselect all: Deselects all of the projects in the project list.
Apply/Apply - all cases: Applies the selected projects to the Dashboard widget. This button displays the number of projects selected. For example, if you have selected four cases, the button displays Apply - 4 cases.
Cancel: Returns you to the main widget.

Part 9 Reference
Installing the AccessData Elasticsearch Windows Service (page 526)
Using the Site Server (page 529)
Installing the Windows Agent (page 535)
Installing the Unix / Linux Agent (page 544)
Installing the Mac Agent (page 546)
Integrating with AccessData Forensics Products (page 549)

Chapter 44 Installing the AccessData Elasticsearch Windows Service
About the Elasticsearch Service
The AccessData Elasticsearch Windows Service is used by multiple features in multiple applications, including the following:
ThreatBridge in Resolution1
Mobile Threat Monitoring in Resolution1
KFF (Known File Filter) in all applications
Visualization and Geolocation in all applications
The AccessData Elasticsearch Windows Service uses the Elasticsearch open source search engine.
Prerequisites
For best results with Resolution1 products and AD Lab and Enterprise, you should install the AccessData Elasticsearch Windows Service on a dedicated computer that is different from the computer running the application that uses it. For single-computer installations such as FTK, you can install the AccessData Elasticsearch Windows Service on the same computer as the application.
A single instance of an AccessData Elasticsearch Windows Service is usually sufficient to support multiple features. However, if your network is extensive, you may want to install the service on multiple computers on the network. Consult with support for the best configuration for your organization's network.
You can install the AccessData Elasticsearch Windows Service on 32-bit or 64-bit computers. The service requires the following:
16 GB of RAM or higher
Microsoft .NET Framework 4
To install the AccessData Elasticsearch Windows Service, Microsoft .NET Framework 4 is required. If you do not have .NET installed, it will be installed automatically.
If you install the AccessData Elasticsearch Windows Service on a system that has not previously had an AccessData product installed upon it, you must add a registry key to the system in order for the service to install correctly.

Installing the Elasticsearch Service
Installing the Service
To install the AccessData Elasticsearch Windows Service
1. Click the AccessData Elasticsearch Windows Service installer. It is available on the KFF Installation disc by clicking autorun.exe.
2. Accept the License Agreement and click Next.
3. On the Destination Folder dialog, click Next to install to the default folder, or click Change to install to a different folder. This is where the Elasticsearch folder with the Elasticsearch service is installed.
4. On the Data Folder dialog, click Next to install to the default folder, or click Change to install to a different folder. This is where the Elasticsearch data is stored.
Note: This folder may contain up to 10 GB of data.
5. (For use with KFF) In the User Credentials dialog, you can configure credentials to access KFF Data files that you want to import if they exist on a different computer. This provides the credentials for the Elasticsearch service to use in order to access a network share with a user account that has permissions to the share. Enter the user name, the domain name, and the password. If the user account is local, do not enter any domain value, such as localhost. Leave it blank instead.
6. In the Allow Remote Communication dialog, enter the IP address(es) of any machine(s) that will have ThreatBridge installed. If you plan on installing ThreatBridge on the same server as the AccessData Elasticsearch Windows Service, click Next.
7. Select Enable Remote Communication.
Note: If Enable Remote Communication is selected, a firewall rule will be created to allow communication to the AccessData Elasticsearch Windows Service for every IP address added to the IP Address field. If no IP addresses are listed, then ANY IP address will be able to access the AccessData Elasticsearch Windows Service.
8. In the following Allow Remote Communication dialog, accept the default HTTP and Transport TCP Port values and click Next. However, if there are conflicts with these ports on the network, change the values to use other ports.
9. The Configuration 1 dialog contains the following fields:
Cluster name: This field automatically populates with the system's name.
Node name: This field automatically populates with the system's name.
Note: If installing the AccessData Elasticsearch Windows Service on more than one system, allow the first system to install with the system's name in the cluster and the node fields. In the second and subsequent systems, enter the first system's name in the cluster field, and in the node field, enter the name of the system to which you are installing.
Heap size: This is the memory allocated for the AccessData Elasticsearch Windows Service. Normally you can accept the default value. For improved performance of the AccessData Elasticsearch Windows Service, increase the heap size.
10.
The Configuration 2 dialog contains the following options: Discovery - Selecting the default of Multicast allows the AccessData Elasticsearch Windows Service search to communicate across the network to other Elasticsearch services. If the network does not give permissions for the service to communicate this way, select Unicast and enter the IP address(es) of the server(s) that the AccessData Elasticsearch Windows Service is installed on in the Unicast host names field. Separate multiple addresses with commas. Node - The Master node receives requests, and can pass requests to subsequent data nodes. Select both Master node and Data node if this is the primary system on which the AccessData Elasticsearch Windows Service is installed. Select only Data node if this is a secondary system on which the AccessData Elasticsearch Windows Service is installed. Click Next. 11. In the next dialog, click Install. 12. If the service installs properly, a command line window appears briefly, stating that the service has installed properly. 13. At the next dialog, click Finish. Troubleshooting the AccessData Elasticsearch Windows Service Once installed, the AccessData Elasticsearch Windows Service service should run without further assistance. If there are issues, go to C:\Program Files\Elasticsearch\logs to examine the logs for errors. Installing the AccessData Elasticsearch Windows Service Installing the Elasticsearch Service | 528 Chapter 45 Using the Site Server About Site Servers You can use Site Servers to collect data that you gather from agent sources and network shares. Jobs for data sources can be initiated from the interface and sent down through the site server path to a group of agent sources. After jobs are completed, the resulting data can be stored on the Site Server and then replicated up to either Parent Site Servers or to the Work Manager. Site Servers can help you do reduce the quantity of traffic that must be sent through the network. For example, instead of sending the same job 100 times to 100 computers over a low bandwidth connection, you can send the job once to a site server, and then the site server can pass the job on to each the computers. Likewise, instead of multiple computers reporting the data back to work manager, they can report it to the Site Server. The Site Server can gather the data and report it back up. The following are the types of Site Servers. Site Sever types Type Description Root Root Site Servers are the main collection point. Root Site Servers store data to be collected and then pass it upstream to the Work Manager. Each Root Site Server must be bound to a locally installed Work Manager. You can have a hierarchy of Site Servers where the Root Site Server is the parent and hands off jobs to child Site Servers. Root Site Servers are the final destination of data from multiple Children Site Servers before data is handed off to the Work Manager. Root Site Servers can also directly serve agent sources. Private Private Site Servers are used to support agent sources that are connected through the local intranet. A private site server can function as both a child and a parent. For example, a Private Site Server may function at a regional level. It could receive jobs from a Parent Root Site Server, and then pass jobs to a child site servers at each specific site. Private (protected) Protected Private Site Servers are used only in environments when, due to security issues, you don’t want the child Site Server calling to the parent Site Server. 
| 529 Site Sever types Type Description Public Public Site Servers are used to support agent sources that are not currently connected to the local intranet. Public Site Servers may not support Children Site Servers. They are able to receive data and hold it, but they are not able to transmit it. For example, if an agent source has been given an acquisition job, and then is disconnected from the intranet before the results of the job are collected, if the agent source later connects through the internet, then it can pass the data to public site server. The data on the public site server can then be collected by a parent Private or Root Site Server. Before Installing a Site Server Before you install the Site Server software, do the following on the Site Server computer: Determine which type of Site Server you want the computer to function as. Root Site Servers must be installed on the same computer as the Work Manager. Public and Private Site Servers must report to either a Parent Site Server or a Root Site Server. Install the .Net 4.0 software locally. Install a PostgreSQL database locally. Record the database’s system password. Record the names and ports to use of any Parent or Children Site server that the computer will work with. If the Site Server will directly support agents, record the IP ranges of the agent sources that you want the Site Server to support. Copy your Public and Private certificates to a local destination on the computer. Installing a Site Server You manually install the software on each Site Server computer. To Install a Site Server 1. On the computer where you want to install a Site Server, run the Site Server installation file. 2. In the Welcome to the AccessData Site Server Setup Wizard window, click Next. 3. In the End-User License Agreement window select I accept the terms in the License Agreement, and click Next. 4. In the Destination Folder window, specify where you want to install the Site Server application files. To browse to a specific destination folder, click Change. 5. In the User Credentials window you can configure credentials. If you are installing this computer as a child site server, you can configure a service that automatically communicates to the Resolution1 Server's response path without having to communicate through the parent site server. In order to do this, you must specify the credentials of an account name that exists on both the site server and the Resolution One server. Use the Specific User Account option to set the credentials. Otherwise you can use the default Local System Account setting. | 530 6. In the Ready to install AccessData Site Server window, click Install. 7. In the Completed the AccessData Site Server Setup Wizard window, click Finish. The Site Server Configuration Utility automatically opens. See Site Server Configuration (page 532) | 531 Site Server Configuration The Site Server Configuration utility automatically opens after you install the Site Server software on the computer. If you need to access the Site Server Configuration utility, on the Site Server computer, click Start > Programs > AccessData > Site Server > Site Server Configuration. There are three to configure for Site Server: General, ThreatBridge Service, and Mobile. Site Server Configuration General Options Category Option Type Description See About Site Servers on page 529. Root, Private, Private protected, Public Friendly Name Secure Communications (Optional) Lets you provide a name (identifier) for the Site Server. 
The certificates used for communication between multiple Site Servers and between Site Servers and agents. Private Certificate This is the location of the private key certificate. The private key must be available on the local computer. Supported certificate types include ADP12, and P12. Public Certificate This is the location of the public key certificate. The public key must be available on the local computer. Supported certificate types include CERT, CRT, and P7B. System Password This is the system password for the locally installed PostgreSQL database. Database Port The port for the database listener service. The default is port 5432. This port is configured at the time when the PostgreSQL database is installed. Database IP Configuration Results Internal/FQDN (Public type only) This lets you specify an internal name of the Site Server computer. The is used for communications between multiple Site Servers. For example, server.company.local. External/FQDN (Public type only) This lets you specify public facing resolvable name. The is used for communications between the Site Server and agents. For example, server.company.com. Internet Protocol Version Lets you specify the following: IPv4, IPv6, or both. Port The port used for the Site Server to communicate with other Site Servers or agents. Client Port (Root type only) This is the outbound port from which a Root Site Server communicates through to the Work Manager. The default port is 54321. Use Secure Client (Root type only) This option encrypts data communications between the Root Site Server and the Work Manager. (Enabled by default) This location is where data is stored before it is replicated up through the Site Server system to the Work Manager. You can use a local folder or a domain share. Site Server Configuration | 532 Site Server Configuration General Options Category Option Description Results Directory Enter either the local directory or the UNC path. Share domain Lets you specify a domain and the credentials to access that domain. Site Server System Parent Instance This option is used for Public and Private (child) Site Servers only. This option lets you define a parent server to replicate data up-stream to. You must provide a definition of the parent server and the port to access. You can replace the string “parent” with the computer name, IP address, DNS Alias, or IPv6 address. Children Instances This option is used for Root and Private (parent) Site Servers only. This option lets you define child Site Servers from which data can be gathered from. You must provide a definition of the child servers and the ports to access. You can replace the string “child” with the computer name, IP address, DNS alias, or IPv6 address. You can add multiple children instances in this field by separating each with a comma character. Managed Subnet (Address(es) This option lets you define the range of agent computers the Site Server can interact with. This option requires CIDR notation. You can add multiple ranges by separating each with a comma character. You can configure multiple site servers to support overlapping ranges. Locality Default Domain Configuration This lets you configure communication settings. Max Client Connections Max Incoming Threads Retry Count Max Outgoing Threads Retry Delay (ms) Bandwidth Control This lets you configure communication settings. 
___ bits/second in from SiteServer ___ bits/second out from SiteServer ___ bits/second in from Agent ___ bits/second out from Agent Logging Level Site Server Configuration | 533 Site Server Configuration General Options Category Option Description NONE Disable all logging. ERROR WARNING DEBUG TRACE INFO USER AUDIT ALL Includes all logging. Site Server Configuration ThreatBridge Service Options Option Description Use ThreatBridge Service Select to communicate with the ThreatBridge Management Service. This should be selected if you want to designate a particular Site Server to communicate with ThreatBridge. See Enabling Endpoint Threat Alert on page 532. See About ThreatBridge on page 501. Address and Port Enter the IP Address and port of the server where you have installed the ThreatBridge Service. API Key Enter the API Key that you received when installing ThreatBridge. See Installing ThreatBridge on page 504. Click Validate to verify that the Site Server is communicating with the ThreatBridge Management Service. Alert reporting interval (seconds) Allows you to change how often the Site Server communicates with the ThreatBridge service. The default setting is 120 seconds. Alert minimum confidence Allows you to set the minimum confidence score of a threat that Site Server passes to the ThreatBridge service. Threats under the minimum confidence score are ignored unless the threat is on the black list. The default setting is 50. Site Server Configuration Mobile Options Option Description Enable Mobile Interface Select to enable the site server to communicate with the mobile interface. Public Web Interface URL In the Public Web Interface URL field, enter the IP address and port of the server where you have installed the Public Web Interface. This should be the same server that your organization has provided to AccessData for building the mobile app See Deploying the Mobile Threat Monitoring App on page 556. Elastic Search URL Enter the IP address and port of the Elasticsearch server that you have previously created. See Installing the Elasticsearch Service on page 527. Mobile Data Event Expiration (in days) Allows you to define when you want the server to discard the mobile data. The default is 30 days. Site Server Configuration | 534 Chapter 46 Installing the Windows Agent This chapter covers the manual installation of the agent in a Windows environment. This appendix includes the following topics: See Manually Installing the Windows Agent on page 535. See Using Your Own Certificates on page 540. Manually Installing the Windows Agent Perform the following steps to manually install the Enterprise Agent in Windows: See Preparing the AD Enterprise Agent Certificate on page 535. (AD Enterprise only) See Installing the Agent on page 536. See Configuring Execname and Servicename Values on page 538. Preparing the AD Enterprise Agent Certificate About Enterprise Security Certificates: When installing AccessData Enterprise Examiner, you need a security certificate. Enterprise Management Server creates Enterprise security certificates, the CRT public key and the PEM public and private key pair files. However, the Enterprise Configuration Management Tool now also accepts PKCS#12 certificates. If you have a third-party certificate chain in the PKCS#12 format, the Enterprise Configuration Management Tool reads the PKCS#12 certificate and asks for the user password. 
The certificate is decrypted only long enough to gather the information necessary for the Enterprise installation, and the private key is then re-encrypted. The public key, regardless of source, must be in standard binary or base-64 encoding. If the Agent is installed, or pushed, to the workstations using Enterprise, the certificate information is read automatically from the Enterprise Configuration Management Tool. If the Agent is installed manually rather than pushed out, the certificate information (paths and filenames) must be re-entered. The public certificate itself must be in an area of the network where it can be accessed by the Agent machine during installation, but it does not need to be stored on the Agent machine. In addition, the Agent uses only a public key. As long as that public key is in binary or base-64 format, it is read automatically by the Agent. For more information, see Using Your Own Certificates (page 540).

To prepare the certificate
1. Prepare the Agent certificate.
2. Copy the needed certificate from the Management Server to your deployment location. The Management Server creates certificates during setup in [Drive]:\Program Files\AccessData\AccessData Management Server\certificates. The certificate name is ManagementServer.crt.
3. Copy ManagementServer.crt to a folder of your choice where it can be accessed while installing the Agent.

Installing the Agent

To install the Agent
1. Run AccessDataAgent.msi or AccessDataAgent(64bit).msi using msiexec.
Note: These .msi files are located in the Program Files\AccessData\Forensic Toolkit\5.1\Bin\Agent\ folder after installation.
Several command-line parameters are available for use with this .msi, as documented below. For example, if AccessDataAgent.msi resides in the folder [Drive]:\enterprise and ManagementServer.crt resides in [Drive]:\certificates, type the following command line to install the agent with the defaults:
msiexec /i [Drive]:\enterprise\AccessDataAgent.msi CER=[Drive]:\certificates\ManagementServer.crt
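As a further illustration only, a quiet (unattended) installation that also overrides some of the optional parameters documented in the table below might look like the following; the drive letters, paths, and values are hypothetical:
msiexec /i "C:\enterprise\AccessDataAgent.msi" /qn CER="C:\certificates\ManagementServer.crt" INSTALLDIR="D:\AccessData\Agent" PORT=3999 LIFETIME=30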
The following table lists the command-line options available for use with AccessDataAgent.msi:

Command Line Options
/i (i or x required): Specifies install.
/x (i or x required): Specifies uninstall.
/qn (optional): Installs in quiet mode with no user interaction.
[path to .msi] (required): If running from the folder where the .msi file is located, you do not have to include the path, only the filename.
CER= (required): Specifies the certificate the agent uses. Always include the path, regardless of location.
ALLUSERS=: Configures the installer availability. The default option varies per operating system. The options are: allusers=1 configures the installer to be available to all users; allusers=0 configures the installer to be available only to the user who is installing the agent.
INSTALLDIR= (optional): Changes the install location from the default folder (C:\Program Files\AccessData\Agent).
PORT= (optional): Changes the port from the default port (3999).
LIFETIME= (optional): Configures the life cycle of the agent. The value is the Time To Live (TTL) measured in days. A value preceded by a dash (a negative number) measures the TTL in minutes. For example, LIFETIME=1 (one day) or LIFETIME=-5 (five minutes).
CONNECTIONS=: Configures the maximum number of connections for the agent.
STORESIZE=: Configures the size of the data store.
TRANSIENT=1: Configures the agent as a Transient Agent. Transient Agents have no protected storage and remove themselves when the agent machine is restarted.
FOLDER_STORAGE=1: Configures the agent as a Persistent Agent. Persistent Agents use a "local" file system based storage and not protected storage. Persistent Agents also remain on the agent machine after the machine is restarted. This allows local logical disc space to store the results of Public Site Server jobs that run while the Agent is not on the WAN or cannot reach the Public Site Server.
SERVICELESS=1: Configures the agent to install with no protected storage and no installed service. The agent removes itself when the agent machine restarts or when the lifetime option expires, whichever comes first.
PCD= (optional): Enterprise only: Configures the Proxy Cycle Delay (PCD). The PCD is the time interval at which the agent attempts to connect to the proxy to check if any work has been assigned. The PCD value is measured in seconds. The default is 1200 (20 minutes).
PROXY= (optional): Enterprise only: Configures a proxy-able agent, in the form PrimaryIP,SecondaryIP:Port~PrimaryIP2,SecondaryIP2:Port. PrimaryIP refers to the IP address with which the agent should try to communicate (usually the internal private network IP of the proxy server). SecondaryIP refers to the IP address to which the agent should try to connect when the attempts to connect to the PrimaryIP have failed (often the public IP of the proxy server). PrimaryIP2 and SecondaryIP2 refer to an additional proxy server address and are delimited by a tilde (~). Additional proxy servers can be added by following this same pattern.
MAMA=: Resolution1, Resolution1 CyberSecurity, and Resolution1 eDiscovery only: Configures the IP address of the Site Server to which the agent reports, for example, 10.32.41.113:54545. This parameter is used so that Agents know which Site Server to check into for the first time. After that first check-in, the Agents learn the Site Server that covers their CIDR range and check in there next time. This updates based on movement of the physical IP of the node.
PUBSS=: Resolution1, Resolution1 CyberSecurity, and Resolution1 eDiscovery only: Configures the agent to connect to a Public Site Server (PUBSS). See About Site Servers on page 529. For example, pubss=192.192.192.192:5432. The Agent in Public Site Server (PUBSS) mode checks in to the original PUBSS value that was part of the install. After that first check-in, it receives a list of other Public Site Servers in the DMZ and then pings around to find the closest/fastest connection. For example, if the user is in New York and a job starts there, and then the user goes to Los Angeles, the user goes from the NYC PUBSS to the LA PUBSS, and the collection should resume and support interruption. This is all done based on the resolution of the IP address for the target and assignment in a proper CIDR range in the Site Server configuration. See Site Server Configuration on page 532. This list is also updated whenever it changes.
The list comes from the Site Server configuration parameters you set up on your internal servers, not from any additional data entry; it exists by virtue of having Public Site Servers deployed. See MAMA= on page 537.
PUBSS_DELAY=: Resolution1, Resolution1 CyberSecurity, and Resolution1 eDiscovery only: Can be used to delay the default check-in interval (30 minutes). You may want to alter this value if you have a lot of Agents on the PUBSS system.

Example Command Line Install
msiexec /i "C:\AgentInstall\AccessData Agent (64-bit).msi" cer="C:\AgentInstall\AccessData E1.crt" mama=10.10.35.32:54545 TRANSIENT=1 Persistent=1 Serviceless=1 lifetime=1 or lifetime=-5 pubss=192.192.192.192:5432

Configuring Execname and Servicename Values
The Execname and Servicename values change the names of the agent executable and the agent service, respectively. These values are added to the MSI using an MSI editor (such as Orca.exe, a free MSI editor).

Changing the Execname Value
To make changes to the execname value
1. Run Orca.exe.
2. Click File > Open.
3. Browse to the folder containing the "AccessData Agent.msi" or "AccessData Agent (64-bit).msi" file and open the file. The default path is [Drive]:\Program Files\AccessData\Forensic Toolkit\3.2\Bin\Agent\x32 (or x64)\.
4. In the Tables list, select "File".
5. In the FileName column, double-click "u4jwdc7h.exe|agentcore.exe".
5a. Enter the filename to use for the agent core executable.
Note: Replace the entire string with the filename.
6. Press Enter.
7. Click File > Save.
Note: Do not close Orca if you are also changing the service name.

Changing the Servicename Value
To make changes to the Servicename value (if you closed Orca, begin with Step 1; otherwise, skip to Step 4)
1. Run Orca.exe.
2. Click File > Open.
3. Browse to the folder containing the "AccessData Agent.msi" or "AccessData Agent (64-bit).msi" file and open the file. The default path is [Drive]:\Program Files\AccessData\Forensic Toolkit\3.2\Bin\Agent\x32 (or x64)\.
4. In the Tables list, select "ServiceControl".
5. In the Name column, double-click "AgentService".
5a. Enter the name to use for the AgentService and press Enter.
Note: Use the same value in steps 5a, 7a, and 8a.
6. In the Tables list, select "ServiceInstall".
7. In the Name column, double-click "AgentService".
7a. Enter the name to use for the AgentService (use the same value entered in step 5a) and press Enter.
8. In the DisplayName column, double-click "AgentService".
8a. Enter the name to use for the AgentService (use the same value entered in steps 5a and 7a) and press Enter.
9. Click File > Save.
10. Click File > Close.

Using Your Own Certificates
Definitions:
PKCS#12: Standard certificate packaging used to securely transfer public/private key pairs.
PKCS#7: Standard certificate packaging used to store certificates for S/MIME encryption. It is used here for storing sets of public key chains.
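As a command-line alternative to the GUI export procedure that follows, the following is a minimal sketch using OpenSSL. OpenSSL is not part of this product and is assumed to be installed separately; the file names are hypothetical:
openssl pkcs12 -in AgentKeyPair.pfx -nokeys -out PublicChain.pem
openssl crl2pkcs7 -nocrl -certfile PublicChain.pem -outform DER -out AgentPublic.p7b
The first command extracts only the public certificates from the PKCS#12 file (you are prompted for its password); the second packages that chain as a binary PKCS#7 (.p7b) file that can be used as the certificate for the agent installation.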
To export the public certificate when using a PFX (PKCS#12) key
1. Using the PKCS#12 file provided by the Certificate Administrator, double-click the PKCS#12 file to open it.
2. Install the certificate into a local Microsoft certificate store by following the wizard supplied when you double-click the certificate file.
3. View the public certificate of the installed certificate by opening the local machine's certificate store. (This can be done with Microsoft Management Console or in Internet Explorer under Tools > Internet Options > Content > Certificates.)
4. Find the bottom-level certificate and double-click the certificate to view it.
5. Click the Certification Path tab to verify that the certificate has a full verification path, meaning that nothing is missing from the top of the chain to the bottom.
6. Click the Details tab and click Copy to File.
7. Click Next and click Cryptographic Message Syntax Standard - PKCS #7 Certificates.
8. Select Include all certificates in the certificate path if possible.
9. Click Next and enter a file export path.
10. Click Next.
11. Click Finish.
12. Double-click the exported PKCS#7 file and verify that all of the public certificates in the chain are in the PKCS#7.
The exported file you created is used as the certificate for the agent installation.

Controlling Consumption of the CPU
You can edit a registry key to control what percentage of the CPU is used by the agent. This gives you the ability to throttle the CPU and ensure that the agent does not consume all of the CPU available.

To add a throttling registry key
1. In the Registry Editor, expand the HKEY_LOCAL_MACHINE hive and locate the HKEY_LOCAL_MACHINE\SOFTWARE\AccessData\Shared folder.
2. Add a new DWORD (32-bit) value named throttling to the Shared folder (HKEY_LOCAL_MACHINE\SOFTWARE\AccessData\Shared\throttling).
3. Set the data value of the DWORD to the maximum percentage of the CPU allowed to be used by the module. For example, if you want the maximum percentage of the CPU used to be 25 percent, modify the DWORD data value and enter 25 in the Edit DWORD dialog. The value should be from 0 to 100. If the data value is left at 0, the CPU is not throttled when the agent is started.
4. In the Edit DWORD dialog, select the Decimal radio button and click OK.
5. After applying the registry key changes, restart the agent service.
For more information on adding and editing registry keys, see Microsoft's documentation.
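As a sketch of the same change made from an elevated command prompt (assuming a 25 percent limit), the value can also be created with the reg utility; restart the agent service afterward, as in step 5:
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\AccessData\Shared" /v throttling /t REG_DWORD /d 25
The reg add command treats the /d value as decimal, which matches the Decimal selection in step 4.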
Resolution1 eDiscovery Additional Instructions
1. Obtain the public key and private key pair (pfx and cer).
2. Copy the private certificates to the collection workers. The only certificates that need to be changed are the ones that talk with the agent (collectors).
2a. (Optional) You can make the certificates available to the processing worker in the event you want to use it for collection testing in the interim.
3. Run CollectorCfgTool.exe from [Drive]:\[Program Files]\AccessData\eDiscovery\WorkManager\.
3a. Select the new private key.
3b. Provide the password.
3c. (Optional) Delete the original private key.
4. Uninstall the agent:
4a. Click Start > Control Panel > Add or Remove Programs.
4b. If the previous agent was installed using the "allusers" option, you must uninstall from the command prompt using msiexec:
msiexec /x [Path to Installer]Adagentinstaller.msi
5. Install the agent:
5a. Assuming the certificates are installed on the C: drive, in the Run Command box, type:
msiexec /i [Path to Installer]Adagentinstaller.msi CER="[Full Path to Certificate File]"

Important Information
The following information is important to know about installing and executing an agent:
The ADMON module does not run on low resource priority. The ADMON module must run on Normal priority or higher in order to maintain its connection to the system drivers.

Chapter 47 Installing the Unix / Linux Agent
This chapter discusses the Unix Agent Installer. It includes the following topics:
See Installing The Enterprise Agent on Unix/Linux on page 544.

Installing The Enterprise Agent on Unix/Linux
The AccessData Agent is available for Unix-, Linux-, and Mac-based operating systems as well as for Windows. This chapter discusses the specific installation files to use for supported Unix and Linux platforms.

Supported Platforms
The Unix Agent Installer supports the following platforms:
agent-rh5.sh or agent-rh5x64.sh: RedHat 5 (32- and 64-bit), SLED 11 (SUSE Linux Enterprise Desktop) (32- and 64-bit), CentOS Enterprise 5 (32- and 64-bit), Ubuntu 9 (and newer) (64-bit)
agent-rh3.sh or agent-rh3x64.sh: RedHat 3 (32- and 64-bit), Novell Linux Desktop (NLD) 9 (32-bit), SLED 10 (SUSE Linux Enterprise Desktop) (32- and 64-bit)
Be sure to use the correct installer file for your 32- or 64-bit architecture/OS.

To install the Unix Agent
Execute the following command as root, and provide the appropriate information:
agent-<OS>.sh <certificate path> [-installpath | -i <install path>]
where <OS> identifies the operating system agent that is being used, <certificate path> is the location of the public certificate to be used for identification, and [-i | -installpath] indicates the directory in which to install the agent. The install path defaults to /usr/AccessData/agent.

Enterprise Unix/Linux Agent Install Parameters and Options
-installpath, -i: The destination path for installing the agent. Default: /usr/AccessData/agent/.
-lifetime, -l: The lifetime of the agent. Default: 0. If the value is 0, the agent never uninstalls itself. If it is greater than 0, it is the number of days before uninstall. If it is less than 0, it is the number of minutes before uninstall.
-port, -p: The port the agent listens on. Default: 3999.
-connections, -c: The maximum number of concurrent connections allowed by the agent. Default: 10.
-size, -s: The protected storage area size. Default: 16777216 (16 MB).
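For example, a sketch of installing the 64-bit RedHat 5 agent with a custom install path and a 30-day lifetime; the certificate location and values shown are hypothetical:
# ./agent-rh5x64.sh /tmp/ManagementServer.crt -i /opt/AccessData/agent -lifetime 30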
Uninstallation
To uninstall the Unix Agent, execute the following command as root:
# ./agent.sh -rf

Configuration
The configuration file is located in the install path and is named ADAgent.conf. It supports the following parameters:
Port: The port on which to listen for activity.
MinThreadCount: The minimum number of threads to have ready, waiting for connections.
MaxThreadCount: The maximum number of threads servicing connections.
CertificatePath: A fully qualified network path or local path to the certificate. The installer, by default, puts the certificate in the installation path.

Starting the Service
To start the Unix Agent service, execute the following command as root:
/etc/init.d/adagentd start

Stopping the Service
To stop the Unix Agent service, execute the following command as root:
/etc/init.d/adagentd stop

Chapter 48 Installing the Mac Agent
This chapter discusses the Agent Installer for Apple Macintosh. It includes the following topics:
See Configuring the AccessData Agent installer on page 546.
See Installing the Agent on page 548.
See Uninstalling the Agent on page 548.

Configuring the AccessData Agent installer
The AccessData Agent requires an X.509 certificate in order to establish a secure network connection to the server or, for AD Enterprise, to the computer running Examiner. The package installer has been provided to aid in the distribution of these certificates by allowing an Administrator to modify the AccessData Agent package installer prior to installation of the AccessData Agent software for Apple Macintosh. In addition to certificate distribution, the port used by the Agent can be configured. The following instructions allow an Administrator to configure the AccessData Agent package installer.

Bundling a Certificate
The AccessData Agent installer requires that a certificate (or certificate tree) is bundled with the installer. The following sequence of steps must be followed to bundle a certificate file into the installer:
1. Create a folder named Configure.
2. Create a single file named adagent.cert that contains one or more X.509 certificates to be distributed to each installation of the Agent, and place it in the Configure folder.
3. Right-click the AccessDataAgent package installer file on the install disc ([Drive]:\Enterprise\Agents\agent-Mac.dmg).
4. Select the Show Package Contents popup menu item.
5. Drag the Configure folder into the package contents window opened in Step 4 (alongside the Contents folder).

Configuring the Port
The AccessData Agent installer allows an Administrator to (optionally) configure the port the Agent will use to communicate with an Examiner when installed. This is done by adding a file containing the port number to the AccessData Agent package installer. To do so, complete Steps 1-5 under Bundling a Certificate, then continue with Step 1 here. If you do not need a custom configuration of the port, skip to Step 6 below.
1. Create a text file named adagent.port that contains the port number the Agent is to use; this file is distributed to each installation of the Agent.
2. Place the adagent.port file into the Configure folder (previously created to contain the X.509 certificate).
3. Right-click the AccessData Agent package installer file.
4. Select the Show Package Contents popup menu item.
5. Ensure that the Configure folder is located in the same folder opened in Step 4 (alongside the Contents folder).
6. Close the window.
Note: The installer will not run successfully unless all of the above steps are completed. The folder and file names must be exactly as documented.

Additional Configuration Options
The Mac installer now supports the same settings as the Unix installer. Each setting should be added to the .mpkg file in a directory called Configure.
adagent.cert: Specifies the certificate file used for communication.
adagent.port: Specifies the port the agent will listen on. The file should contain nothing more than a number. The default port number is 3999.
adagent.lifetime: Specifies the amount of time before the agent dissolves. The file should contain nothing more than a number. The same sign and value rules apply as for the Linux agent. The default is 0.
adagent.connections: Sets the maximum number of concurrent connections allowed by the agent. The file should contain only a number. The default is 10.
adagent.size: Sets the protected storage area size. The file should contain only the number. The default is 16777216 (16 MB).
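For example, a fully prepared Configure folder might contain the following files, each holding nothing more than the single value shown; the values here are hypothetical and simply mirror the documented defaults:
Configure/
  adagent.cert (the X.509 certificate or certificate chain)
  adagent.port containing 3999
  adagent.lifetime containing 0
  adagent.connections containing 10
  adagent.size containing 16777216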
Installing the Agent
When the certificate is bundled and the port configuration file is complete and saved, distribute the AccessData Agent package installer to each target computer and run it locally.

Uninstalling the Agent
The AccessData Agent can be uninstalled by double-clicking the uninstall utility located in /Library/Application Support/AccessData. You are required to enter your password, and you must have administration rights for the uninstall to complete correctly.
Note: The account must have a password assigned to it.

Chapter 49 Integrating with AccessData Forensics Products
Web-based products (Summation, Resolution1, Resolution1 eDiscovery, and Resolution1 CyberSecurity) can work collaboratively with FTK-based forensics products (FTK, Lab, FTK Pro, and Enterprise).
Note: For brevity, in this chapter, all FTK-based products are referenced as FTK and all Summation and Resolution1 applications are referenced as Summation.
You can access the same project data on the same database to perform legal review and forensic examination simultaneously. The benefit of this compatibility is that FTK provides some features that are not available in the web-based products. For example, you can create projects in Summation, then open, review, and perform additional tasks in FTK, and then continue your work in Summation.
Using FTK, you can do the following with Summation projects:
Open and review a project
Back up and restore a project
Add and remove evidence
Perform Additional Analysis after the initial processing
Search, index, and label data
View graphics and videos
Export data
Important: For compatibility, the version of the web-based product and the version of FTK must be the same; both must be 5.0.x or both must be 5.1.x. For example:
Summation 5.2.x must be used with FTK 5.2.x
Resolution1 5.5 must be used with FTK 5.5

Installation
You can install FTK and Summation on either the same computer or on different computers. The key is that they share a common database. The database that the data is stored in is unified so that the data can be shared between products. It is recommended that you install the web-based product first, configure the database, and then install FTK and point FTK to that database.
The administrator account for the web-based product is the administrative account for the FTK database. When launching FTK and logging into the database, you use the administrator credentials from the web-based product.
Important: For compatibility, the version of Summation and the version of FTK must be the same.
Important: FTK and Summation may use different versions of the processing engine. If this is the case, there will be information in the Release Notes.

Managing User Accounts and Permissions Between FTK and Summation/Resolution1 eDiscovery
You can create a user account in either product and then use that user name in the other product.
Permissions
When users are assigned permissions in one application, such as Summation, the permissions of the user in FTK are not affected.

Creating and Viewing Projects
Using either product, you can create projects and add evidence to that project.
You can then use either product to open the project and perform tasks on the project data. You can have users in each program reviewing the data at the same time.

Managing Evidence in FTK

Adding Evidence using FTK
You can use FTK to add evidence to a project that was created in Summation. Reviewers in Summation can then review the new evidence. Using FTK, you can add live evidence and static evidence. When you add evidence, you can add image files (such as AD1 and E01), individual files, physical drives, and logical drives.
Important: When you collect volatile data in FTK, you cannot see it in Summation.

Processing Evidence using FTK
FTK provides processing options that are not available in Summation. You can utilize the processing abilities of FTK and then review the data in Summation/Resolution1 eDiscovery. You can do all processing in FTK, or you can perform an Additional Analysis in FTK after an initial processing. The following are examples of additional processing options that are available in FTK:
Processing Profiles
Known File Filter (KFF)
Automatic File Decryption
Create Thumbnails for Video
Generate Common Video File
Explicit Image Detection
PhotoDNA
Cerberus Analysis
When you create a project with specific processing options, those options are maintained when the project is viewed in the other product. (15940)
Important: If you create a project in Summation, process the evidence, and then add more evidence using FTK, a comparison of the JobInformation.log files shows that the processing options applied by FTK differ from those applied by Summation.

Managing Evidence Groups in FTK and People in Summation
It is important to note that FTK does not use people, but rather has evidence groups. Evidence groups let you create and modify groups of evidence. In FTK, you can share groups of evidence with other projects, or make them specific to a single project. When you create people in a project in Summation and then look at the project in FTK, the people are listed as evidence groups. The opposite is also true: if you create an evidence group in FTK, it is listed as a person in Summation.
Important: When you use FTK to add data to an evidence group that was an existing Summation person, two child entries of the same person are created for the data. When you look at the person data in Summation, there will be two child objects under the person with the same name, one with Summation data and the other with FTK data.

Reviewing Evidence in FTK

Searching Evidence using FTK
You can use FTK to search evidence in Summation projects. The search capabilities in FTK are more robust than in Summation. In FTK, you can perform an index search as well as a live search. Live search includes options such as text searching, pattern searching, and hexadecimal searching.
Important: Note the following issue:
Issue: The search result counts for the same project may be different when viewed in the different products due to the way search options are executed in the respective products. For example:
Summation only searches columns that are visible to the user; FTK will search columns that are not visible to a Resolution1 user.
Because of FTK's Live Search feature, FTK will return more search result hits than Summation.
Re-indexing the data will change the search results.
Labeling Evidence Using FTK
After searching and identifying data in FTK, you can label the data, then review the project in Summation and see the labeled data. You can then perform additional review, culling, and export tasks.

Viewing Labeled Evidence in FTK
When reviewing data in Summation, you can label data, and that labeled data is then viewable in FTK. This can be useful in workflow management. For example, when reviewing the data, you can label data to indicate that it needs additional analysis. When the project is opened in FTK, the labeled data is visible.

Exporting Data using FTK
You can review and cull data in Summation and then export the data from FTK using its export capabilities. The following are examples of what you can export using FTK:
Save files to an AD1 image file
Export file list information
Export the contents of the project list to a word list
Export hashes from a project
Export search hits
Export emails to PST or MSG

Viewing Document Groups and Review Sets in FTK
Important: In Summation, there are separate views and permissions defined for Document Groups and Review Sets. In FTK, Document Groups and Review Sets that were created in Summation are displayed within the Manage Labels dialog.

Reviewing FTK Data in Summation
You can use the following review features in Summation to help manage the workflow of working with data that was added and processed using FTK:
Review the data and get the desired data set.
Cull the data by reviewers in the Web console.
Export the data using Summation's export capabilities.

Known Issues with FTK Compatibility
See the product's and FTK Release Notes for a list of known issues with FTK compatibility.
