(Studies In Big Data 26) Srinivasan S. (ed.) Guide To Applications Springer (2018)


Studies in Big Data 26

S. Srinivasan Editor

Guide to Big Data Applications

Studies in Big Data
Volume 26

Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl

About this Series
The series “Studies in Big Data” (SBD) publishes new developments and advances
in the various areas of Big Data – quickly and with a high quality. The intent
is to cover the theory, research, development, and applications of Big Data, as
embedded in the fields of engineering, computer science, physics, economics and
life sciences. The books of the series refer to the analysis and understanding of
large, complex, and/or distributed data sets generated from recent digital sources
coming from sensors or other physical instruments as well as simulations,
crowdsourcing, social networks or other internet transactions, such as emails or
video click streams, and others. The series contains monographs, lecture notes and
edited volumes in Big Data spanning the areas of computational intelligence including
neural networks, evolutionary computation, soft computing, fuzzy systems, as well
as artificial intelligence, data mining, modern statistics and operations research, as
well as self-organizing systems. Of particular value to both the contributors and
the readership are the short publication timeframe and the world-wide distribution,
which enable both wide and rapid dissemination of research output.

More information about this series at http://www.springer.com/series/11970

S. Srinivasan

Guide to Big Data Applications


S. Srinivasan
Jesse H. Jones School of Business
Texas Southern University
Houston, TX, USA

ISSN 2197-6503
ISSN 2197-6511 (electronic)
Studies in Big Data
ISBN 978-3-319-53816-7
ISBN 978-3-319-53817-4 (eBook)
DOI 10.1007/978-3-319-53817-4
Library of Congress Control Number: 2017936371
© Springer International Publishing AG 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
Printed on acid-free paper
This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To my wife Lakshmi and grandson Sahaas


Foreword

It gives me great pleasure to write this Foreword for this timely publication on the
topic of the ever-growing list of Big Data applications. The potential for leveraging
existing data from multiple sources has been articulated over and over, in an
almost infinite landscape, yet it is important to remember that in doing so, domain
knowledge is key to success. Naïve attempts to process data are bound to lead to
errors such as accidentally regressing on noncausal variables. As Michael Jordan
at Berkeley has pointed out, in Big Data applications the number of combinations
of the features grows exponentially with the number of features, and so, for any
particular database, you are likely to find some combination of columns that will
predict perfectly any outcome, just by chance alone. It is therefore important that
we do not process data in a hypothesis-free manner or skip sanity checks on our results.
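Jordan's warning is easy to demonstrate. The short Python sketch below, an illustration of the point rather than material from any chapter, generates purely random binary features and still finds one that "predicts" a random outcome well in-sample, by chance alone:

```python
import random

random.seed(0)

n_samples = 30      # a small study
n_features = 5000   # a wide table of candidate predictors

# A purely random outcome: no feature can truly predict it.
outcome = [random.randint(0, 1) for _ in range(n_samples)]

best_accuracy = 0.0
for _ in range(n_features):
    # Each "feature" is just coin flips, i.e. pure noise.
    feature = [random.randint(0, 1) for _ in range(n_samples)]
    matches = sum(f == o for f, o in zip(feature, outcome))
    # Count the feature or its negation, whichever fits better.
    accuracy = max(matches, n_samples - matches) / n_samples
    best_accuracy = max(best_accuracy, accuracy)

print(f"best in-sample accuracy of a pure-noise feature: {best_accuracy:.0%}")
```

Out of 5000 noise columns, some column typically matches the outcome on roughly three quarters or more of the 30 samples; a held-out test set or a domain-knowledge sanity check exposes such spurious predictors immediately.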
In this collection titled “Guide to Big Data Applications,” the editor has
assembled a set of applications in science, medicine, and business where the authors
have attempted to do just this—apply Big Data techniques together with a deep
understanding of the source data. The applications covered give a flavor of the
benefits of Big Data in many disciplines. This book has 19 chapters broadly divided
into four parts. In Part I, there are four chapters that cover the basics of Big
Data, aspects of privacy, and how one could use Big Data in natural language
processing (a particular concern for privacy). Part II covers eight chapters that
look at various applications of Big Data in environmental science, oil and gas, and
civil infrastructure, covering topics such as deduplication, encrypted search, and the
friendship paradox.
Part III covers Big Data applications in medicine, covering topics ranging
from “The Impact of Big Data on the Physician,” written from a purely clinical
perspective, to the often discussed deep dives on electronic medical records.
Perhaps most exciting in terms of the future landscape is the application of Big
Data in healthcare from a developing-country perspective. This is one of the
most promising growth areas in healthcare, due to the current paucity of
services and the explosion of mobile phone usage. The tabula rasa that exists in
many countries holds the potential to leapfrog many of the mistakes we have made
in the west with stagnant silos of information, arbitrary barriers to entry, and the
lack of any standardized schema or nondegenerate ontologies.
In Part IV, the book covers Big Data applications in business, which is perhaps
the unifying subject here, given that none of the above application areas are likely
to succeed without a good business model. The potential to leverage Big Data
approaches in business is enormous, from banking practices to targeted advertising.
The need for innovation in this space is as important as the underlying technologies
themselves. As Clayton Christensen points out in The Innovator’s Prescription,
three revolutions are needed for a successful disruptive innovation:
1. A technology enabler which “routinizes” a previously complicated task
2. A business model innovation which is affordable and convenient
3. A value network whereby companies with disruptive mutually reinforcing
economic models sustain each other in a strong ecosystem
We see this happening with Big Data almost every week, and the future is bright.
In this book, the reader will encounter inspiration in each of the above topic
areas and be able to acquire insights into applications that provide the flavor of this
fast-growing and dynamic field.
Atlanta, GA, USA
December 10, 2016

Gari Clifford


Preface

Big Data applications are growing very rapidly around the globe. This new approach
to decision making takes into account data gathered from multiple sources. Here my
goal is to show how these diverse sources of data are useful in arriving at actionable
information. In this collection of articles the publisher and I have tried to bring
together in one place several diverse applications of Big Data. The goal is for users to see
how a Big Data application in another field could be replicated in their discipline.
With this in mind I have assembled in the “Guide to Big Data Applications” a
collection of 19 chapters written by academics and industry practitioners globally.
These chapters reflect what Big Data is, how privacy can be protected with Big
Data and some of the important applications of Big Data in science, medicine and
business. These applications are intended to be representative and not exhaustive.
For nearly two years I spoke with leading researchers around the world and with the
publisher. These discussions led to this project. The initial Call for Chapters was
sent to several hundred researchers globally via email. Approximately 40 proposals
were submitted. Out of these came commitments for completion in a timely manner
from 20 people. Most of these chapters are written by researchers while some are
written by industry practitioners. One of the submissions was not included as it
could not provide evidence of use of Big Data. This collection brings together in
one place several important applications of Big Data. All chapters were reviewed
using a double-blind process and comments provided to the authors. The chapters
included reflect the final versions of these chapters.
I have arranged the chapters in four parts. Part I includes four chapters that deal
with basic aspects of Big Data and how privacy is an integral component. In this
part I include an introductory chapter that lays the foundation for using Big Data
in a variety of applications. This is then followed with a chapter on the importance
of including privacy aspects at the design stage itself. This chapter, by two leading
researchers in the field, shows how privacy issues arising in Big Data applications
can be better addressed by incorporating privacy at the design stage. A team of
researchers from a major research university in
the USA addresses the importance of federated Big Data. They are looking at the
use of distributed data in applications. This part is concluded with a chapter that
shows the importance of word embedding and natural language processing using
Big Data analysis.
In Part II, there are eight chapters on the applications of Big Data in science.
Science is an important area where data analysis can enhance decisions about how
to approach a problem. The applications selected here deal with Environmental
Science; High Performance Computing (HPC); the friendship paradox, in noting
which friend’s influence will be significant; the significance of using encrypted
search with Big Data; the importance of deduplication in Big Data, especially
when data is collected from multiple sources; applications in Oil & Gas; and how
decision making can be enhanced in identifying bridges that need to be replaced
as part of meeting safety requirements. All these application areas
selected for inclusion in this collection show the diversity of fields in which Big
Data is used today. The Environmental Science application shows how the data
published by the National Oceanic and Atmospheric Administration (NOAA) is
used to study the environment. Since such datasets are very large, specialized tools
are needed to benefit from them. In this chapter the authors show how Big Data
tools help in this effort. A team of industry practitioners discusses the great
similarity between the way HPC deals with low-latency, massively parallel and
distributed systems and the way Big Data is processed with tools such as
MapReduce, Hadoop and Spark. Quora is a leading provider of answers to user
queries, and in this context one of their data scientists addresses how the
friendship paradox plays a significant part in Quora answers. This is a classic
illustration of a Big Data application using social media.
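For readers unfamiliar with the friendship paradox (on average, your friends have more friends than you do), a minimal Python sketch, illustrative only and not taken from the Quora chapter, makes the effect concrete on a toy network:

```python
# Toy undirected friendship network: a hub "a" with several spokes.
friends = {
    "a": ["b", "c", "d"],
    "b": ["a"],
    "c": ["a"],
    "d": ["a", "e"],
    "e": ["d"],
}

degree = {person: len(fs) for person, fs in friends.items()}

# Average number of friends per person.
avg_degree = sum(degree.values()) / len(degree)

# Average number of friends of a randomly chosen *friend*:
# each directed edge (person -> friend) samples that friend's degree.
friend_degrees = [degree[f] for fs in friends.values() for f in fs]
avg_friend_degree = sum(friend_degrees) / len(friend_degrees)

print(avg_degree)         # 1.6
print(avg_friend_degree)  # 2.0: friends have more friends on average
```

The inequality holds in any network where degrees differ, because the friend-average weights each person by their own popularity.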
Big Data applications exist in many branches of science, and Big Data is very
heavily used in the Oil and Gas industry. The two chapters that address Oil and Gas
applications are written by two sets of authors with extensive industry experience.
Two specific chapters are devoted to how Big Data is used in deduplication practices
involving multimedia data in the cloud and how privacy-aware searches are done
over encrypted data. Today, people are very concerned about the security of data
stored with an application provider. Encryption is the preferred tool to protect such
data and so having an efficient way to search such encrypted data is important.
This chapter’s contribution in this regard will be of great benefit for many users.
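As background to the deduplication discussion, the core idea in most cloud deduplication schemes is content addressing: each blob is stored under a hash of its contents, so duplicates collapse to one copy. The sketch below is a minimal illustration of that idea, not the scheme proposed in the chapter:

```python
import hashlib

# A toy content-addressed store: identical blobs are kept once,
# keyed by the SHA-256 digest of their contents.
store = {}

def put(blob: bytes) -> str:
    key = hashlib.sha256(blob).hexdigest()
    if key not in store:   # a duplicate blob is not stored again
        store[key] = blob
    return key

k1 = put(b"same video frame")
k2 = put(b"same video frame")  # duplicate upload
k3 = put(b"different frame")

print(k1 == k2)    # True: duplicates map to the same key
print(len(store))  # 2 objects stored for 3 uploads
```

Real systems add chunking, and the encrypted case commonly uses convergent encryption so that identical plaintexts still deduplicate after encryption.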
We conclude Part II with a chapter that shows how Big Data is used in assessing the
structural safety of the nation’s bridges. This practical application shows how Big Data
is used in many different ways.
Part III considers applications in medicine. A group of expert doctors from
leading medical institutions in the Bay Area discuss how Big Data is used in the
practice of medicine. This is one area where applications abound, and the
interested reader is encouraged to explore them further. Another chapter looks
at how data scientists are important in analyzing medical data. This chapter reflects
a view from Asia and discusses the roadmap for data science use in medicine.
Smoking has been noted as one of the leading causes of human suffering. This part
includes a chapter on comorbidity aspects related to smokers based on a Big Data
analysis. The details presented in this chapter would help the reader to focus on other
possible applications of Big Data in medicine, especially cancer. Finally, a chapter is
included that shows how scientific analysis of Big Data helps with epileptic seizure
prediction and control.
Part IV of the book deals with applications in Business. This is an area where
Big Data use is expected to provide tangible results quickly to businesses. The
three applications listed under this part include an application in banking, an
application in marketing and an application in Quick Serve Restaurants. The
banking application is written by a group of researchers in Europe. Their analysis
shows the importance of identifying financial fraud early, a global problem,
and how Big Data is used in this effort. The marketing application highlights the
various ways in which Big Data could be used in business. Many large business
sectors such as the airlines industry are using Big Data to set prices. The application
with respect to a Quick Serve Restaurant chain deals with the impact of Yelp ratings
and how they influence people’s use of Quick Serve Restaurants.
As mentioned at the outset, this collection of chapters on Big Data applications
is expected to serve as a sample for other applications in various fields. The readers
will find novel ways in which data from multiple sources is combined to derive
benefit for the general user. Also, in specific areas such as medicine, the use of Big
Data is having profound impact in opening up new areas for exploration based on
the availability of large volumes of data. These are all having practical applications
that help extend people’s lives. I earnestly hope that this collection of applications
will spur the interest of the reader to look at novel ways of using Big Data.
This book is a collective effort of many people. The contributors to this book
come from North America, Europe and Asia. This diversity shows that Big Data is
a truly global phenomenon, with people using data to enhance their decision-making
capabilities and to derive practical benefits. The book greatly benefited from the
careful review by many reviewers who provided detailed feedback in a timely
manner. I have carefully checked all chapters for consistency of information in
content and appearance. In spite of careful checking and taking advantage of the
tools provided by technology, it is likely that some errors have crept into
the chapter content. In such cases I take responsibility for the errors and request
your help in bringing them to my attention so that they can be corrected in future
editions.
Houston, TX, USA
December 15, 2016

S. Srinivasan


Acknowledgments

A project of this nature would be possible only with the collective efforts of many
people. Initially I proposed the project to Springer, New York, over two years ago.
Springer expressed interest in the proposal and one of their editors, Ms. Mary
James, contacted me to discuss the details. After extensive discussions with major
researchers around the world we finally settled on this approach. A global Call for
Chapters was made in January 2016 both by me and Springer, New York, through
their channels of communication. Ms. Mary James helped throughout the project
by providing answers to questions that arose. In this context, I want to mention
the support of Ms. Brinda Megasyamalan from the printing house of Springer.
Ms. Megasyamalan has been a constant source of information as the project
progressed. Ms. Subhashree Rajan from the publishing arm of Springer has been
extremely cooperative and patient in getting all the page proofs and incorporating
all the corrections. Ms. Mary James provided all the encouragement and support
throughout the project by responding to inquiries in a timely manner.
The reviewers played a very important role in maintaining the quality of this
publication by their thorough reviews. We followed a double-blind review process
whereby the reviewers were unaware of the identity of the authors and vice versa.
This helped in providing quality feedback. All the authors cooperated very well by
incorporating the reviewers’ suggestions and submitting their final chapters within
the time allotted for that purpose. I want to thank individually all the reviewers and
all the authors for their dedication and contribution to this collective effort.
I want to express my sincere thanks to Dr. Gari Clifford of Emory University and
Georgia Institute of Technology for providing the Foreword to this publication. In
spite of his many commitments, Dr. Clifford was able to find the time to go over all
the Abstracts and write the Foreword without delaying the project.
Finally, I want to express my sincere appreciation to my wife for accommodating
the many special needs that arise when working on a project of this nature.



Contents

Part I General

Strategic Applications of Big Data
Joe Weinman

Start with Privacy by Design in All Big Data Applications
Ann Cavoukian and Michelle Chibba

Privacy Preserving Federated Big Data Analysis
Wenrui Dai, Shuang Wang, Hongkai Xiong, and Xiaoqian Jiang

Word Embedding for Understanding Natural Language: A Survey
Yang Li and Tao Yang

Part II Applications in Science

Big Data Solutions to Interpreting Complex Systems in the Environment ..... 107
Hongmei Chi, Sharmini Pitter, Nan Li, and Haiyan Tian

High Performance Computing and Big Data ..... 125
Rishi Divate, Sankalp Sah, and Manish Singh

Managing Uncertainty in Large-Scale Inversions for the Oil and Gas Industry with Big Data ..... 149
Jiefu Chen, Yueqin Huang, Tommy L. Binford Jr., and Xuqing Wu

Big Data in Oil & Gas and Petrophysics ..... 175
Mark Kerzner and Pierre Jean Daniel

Friendship Paradoxes on Quora ..... 205
Shankar Iyer

Deduplication Practices for Multimedia Data in the Cloud ..... 245
Fatema Rashid and Ali Miri

Privacy-Aware Search and Computation Over Encrypted Data Stores ..... 273
Hoi Ting Poon and Ali Miri

Civil Infrastructure Serviceability Evaluation Based on Big Data ..... 295
Yu Liang, Dalei Wu, Dryver Huston, Guirong Liu, Yaohang Li, Cuilan Gao, and Zhongguo John Ma

Part III Applications in Medicine

Nonlinear Dynamical Systems with Chaos and Big Data: A Case Study of Epileptic Seizure Prediction and Control ..... 329
Ashfaque Shafique, Mohamed Sayeed, and Konstantinos Tsakalis

Big Data to Big Knowledge for Next Generation Medicine: A Data Science Roadmap ..... 371
Tavpritesh Sethi

Time-Based Comorbidity in Patients Diagnosed with Tobacco Use Disorder ..... 401
Pankush Kalgotra, Ramesh Sharda, Bhargav Molaka, and Samsheel Kathuri

The Impact of Big Data on the Physician ..... 415
Elizabeth Le, Sowmya Iyer, Teja Patil, Ron Li, Jonathan H. Chen, Michael Wang, and Erica Sobel

Part IV Applications in Business

The Potential of Big Data in Banking ..... 451
Rimvydas Skyrius, Gintarė Giriūnienė, Igor Katin, Michail Kazimianec, and Raimundas Žilinskas

Marketing Applications Using Big Data ..... 487
S. Srinivasan

Does Yelp Matter? Analyzing (And Guide to Using) Ratings for a Quick Serve Restaurant Chain ..... 503
Bogdan Gadidov and Jennifer Lewis Priestley

Author Biographies ..... 523

Index ..... 553

List of Reviewers

Maruthi Bhaskar
Jay Brandi
Jorge Brusa
Arnaub Chatterjee
Robert Evans
Aly Farag
Lila Ghemri
Ben Hu
Balaji Janamanchi
Mehmed Kantardzic
Mark Kerzner
Ashok Krishnamurthy
Angabin Matin
Hector Miranda
P. S. Raju
S. Srinivasan
Rakesh Verma
Daniel Vrinceanu
Haibo Wang
Xuqing Wu
Alec Yasinsac


Part I


Chapter 1

Strategic Applications of Big Data
Joe Weinman

1.1 Introduction
For many people, big data is virtually synonymous with one application—
marketing analytics—in one vertical—retail. For example, by collecting purchase
transaction data from shoppers based on loyalty cards or other unique identifiers
such as telephone numbers, account numbers, or email addresses, a company can
segment those customers better and identify promotions that will boost profitable
revenues, either through insights derived from the data, A/B testing, bundling, or
the like. Such insights can be extended almost without bound. For example, through
sophisticated analytics, Harrah’s determined that its most profitable customers
weren’t “gold cuff-linked, limousine-riding high rollers,” but rather teachers, doctors, and even machinists (Loveman 2003). Not only did they come to understand
who their best customers were, but how they behaved and responded to promotions.
For example, their target customers were more interested in an offer of $60 worth of
chips than a total bundle worth much more than that, including a room and multiple
steak dinners in addition to chips.
While marketing such as this is a great application of big data and analytics, the
reality is that big data has numerous strategic business applications across every
industry vertical. Moreover, there are many sources of big data available from a
company’s day-to-day business activities as well as through open data initiatives,
such as data.gov in the U.S., a source with almost 200,000 datasets at the time of
this writing.
To apply big data to critical areas of the firm, there are four major generic
approaches that companies can use to deliver unparalleled customer value and

J. Weinman
Independent Consultant, Flanders, NJ 07836, USA
e-mail: joeweinman@gmail.com
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_1




achieve strategic competitive advantage: better processes, better products and
services, better customer relationships, and better innovation.

1.1.1 Better Processes
Big data can be used to optimize processes and asset utilization in real time, to
improve them in the long term, and to generate net new revenues by entering
new businesses or at least monetizing data generated by those processes. UPS
optimizes pickups and deliveries across its 55,000 routes by leveraging data ranging
from geospatial and navigation data to customer pickup constraints (Rosenbush
and Stevens 2015). Or consider 23andMe, which has sold genetic data it collects
from individuals. One such deal with Genentech, focused on Parkinson’s disease,
gained net new revenues of fifty million dollars, rivaling the revenues from its “core”
business (Lee 2015).

1.1.2 Better Products and Services
Big data can be used to enrich the quality of customer solutions, moving them up
the experience economy curve from mere products or services to experiences or
transformations. For example, Nike used to sell sneakers, a product. However, by
collecting and aggregating activity data from customers, it can help transform them
into better athletes. By linking data from Nike products and apps with data from
ecosystem solution elements, such as weight scales and body-fat analyzers, Nike
can increase customer loyalty and tie activities to outcomes (Withings 2014).

1.1.3 Better Customer Relationships
Rather than merely viewing data as a crowbar with which to open customers’ wallets
a bit wider through targeted promotions, it can be used to develop deeper insights
into each customer, thus providing better service and customer experience in the
short term and products and services better tailored to customers as individuals
in the long term. Netflix collects data on customer activities, behaviors, contexts,
demographics, and intents to better tailor movie recommendations (Amatriain
2013). Better recommendations enhance customer satisfaction and value which
in turn makes these customers more likely to stay with Netflix in the long term,
reducing churn and customer acquisition costs, as well as enhancing referral
(word-of-mouth) marketing. Harrah’s determined that customers who were “very happy”
with their customer experience increased their spend by 24% annually; those who
were unhappy decreased their spend by 10% annually (Loveman 2003).



1.1.4 Better Innovation
Data can be used to accelerate the innovation process, and make it of higher quality,
all while lowering cost. Data sets can be published or otherwise incorporated as
part of an open contest or challenge, enabling ad hoc solvers to identify a best
solution meeting requirements. For example, GE Flight Quest incorporated data
on scheduled and actual flight departure and arrival times, for a contest intended
to devise algorithms to better predict arrival times, and another one intended to
improve them (Kaggle n.d.). As the nexus of innovation moves from man to
machine, data becomes the fuel on which machine innovation engines run.
These four business strategies are what I call digital disciplines (Weinman
2015), and represent an evolution of three customer-focused strategies called value
disciplines, originally devised by Michael Treacy and Fred Wiersema in their international bestseller The Discipline of Market Leaders (Treacy and Wiersema 1995).

1.2 From Value Disciplines to Digital Disciplines
The value disciplines originally identified by Treacy and Wiersema are operational
excellence, product leadership, and customer intimacy.
Operational excellence entails processes which generate customer value by being
lower cost or more convenient than those of competitors. For example, Michael
Dell, operating as a college student out of a dorm room, introduced an
assemble-to-order process for PCs by utilizing a direct channel, which was originally the
phone or physical mail and then became the Internet and eCommerce. He was
able to drive the price down, make it easier to order, and provide a PC built to
customers’ specifications by creating a new assemble-to-order process that bypassed
indirect-channel middlemen who stocked pre-built machines en masse and offered
no customization but charged a markup nevertheless.
Product leadership involves creating leading-edge products (or services) that
deliver superior value to customers. We all know the companies that do this: Rolex
in watches, Four Seasons in lodging, Singapore Airlines or Emirates in air travel.
Treacy and Wiersema considered innovation as being virtually synonymous with
product leadership, under the theory that leading products must be differentiated in
some way, typically through some innovation in design, engineering, or technology.
Customer intimacy, according to Treacy and Wiersema, is focused on segmenting
markets, better understanding the unique needs of those niches, and tailoring solutions to meet those needs. This applies to both consumer and business markets. For
example, a company that delivers packages might understand a major customer’s
needs intimately, and then tailor a solution involving stocking critical parts at
their distribution centers, reducing the time needed to get those products to their
customers. In the consumer world, customer intimacy is at work any time a tailor
adjusts a garment for a perfect fit, a bartender customizes a drink, or a doctor
diagnoses and treats a medical issue.



Traditionally, the thinking was that a company would do well to excel in a given
discipline, and that the disciplines were to a large extent mutually exclusive. For
example, a fast food restaurant might serve a limited menu to enhance operational
excellence. A product leadership strategy of having many different menu items, or a
customer intimacy strategy of customizing each and every meal might conflict with
the operational excellence strategy. However, now, the economics of information—
storage prices are exponentially decreasing and data, once acquired, can be leveraged elsewhere—and the increasing flexibility of automation—such as robotics—
mean that companies can potentially pursue multiple strategies simultaneously.
Digital technologies such as big data enable new ways to think about the insights
originally derived by Treacy and Wiersema. Another way to think about it is that
digital technologies plus value disciplines equal digital disciplines: operational
excellence evolves to information excellence, product leadership of standalone
products and services becomes solution leadership of smart, digital products and
services connected to the cloud and ecosystems, customer intimacy expands to
collective intimacy, and traditional innovation becomes accelerated innovation. In
the digital disciplines framework, innovation becomes a separate discipline, because
innovation applies not only to products, but also processes, customer relationships,
and even the innovation process itself. Each of these new strategies can be enabled
by big data in profound ways.

1.2.1 Information Excellence
Operational excellence can be viewed as evolving to information excellence, where
digital information helps optimize physical operations including their processes and
resource utilization; where the world of digital information can seamlessly fuse
with that of physical operations; and where virtual worlds can replace physical.
Moreover, data can be extracted from processes to enable long term process
improvement, data collected by processes can be monetized, and new forms of
corporate structure based on loosely coupled partners can replace traditional,
monolithic, vertically integrated companies. As one example, location data from
cell phones can be aggregated and analyzed to determine commuter traffic patterns,
thereby helping to plan transportation network improvements.

1.2.2 Solution Leadership
Products and services can become sources of big data, or utilize big data to
function more effectively. Because individual products are typically limited in
storage capacity, and because there are benefits to data aggregation and cloud
processing, normally the data that is collected can be stored and processed in the
cloud. A good example might be the GE GEnx jet engine, which collects 5000
data points each second from each of 20 sensors. GE then uses the data to develop
better predictive maintenance algorithms, thus reducing unplanned downtime for
airlines (GE Aviation n.d.). Mere product leadership becomes solution leadership,
where standalone products become cloud-connected and data-intensive. Services
can also become solutions, because services are almost always delivered through
physical elements: food services through restaurants and ovens; airline services
through planes and baggage conveyors; healthcare services through x-ray machines
and pacemakers. The components of such services connect to each other and externally. For example, healthcare services can be better delivered through connected
pacemakers, and medical diagnostic data from multiple individual devices can be
aggregated to create a patient-centric view to improve health outcomes.

1.2.3 Collective Intimacy
Customer intimacy is no longer about dividing markets into segments, but rather
dividing markets into individuals, or even further into multiple personas that an
individual might have. Personalization and contextualization offer the ability to
deliver products and services tailored not just to a segment, but to an individual.
To do this effectively requires up-to-date information as well as historical
data, collected at the level of the individual and his or her individual activities
and characteristics down to the granularity of DNA sequences and mouse moves.
Collective intimacy is the notion that algorithms running on collective data from
millions of individuals can generate better tailored services for each individual.
This represents the evolution of intimacy from face-to-face, human-mediated
relationships to virtual, human-mediated relationships over social media, and from
there, onward to virtual, algorithmically mediated products and services.

1.2.4 Accelerated Innovation
Finally, innovation is not just associated with product leadership, but can create new
processes, as Walmart did with cross-docking or Uber with transportation, or new
customer relationships and collective intimacy, as Amazon.com uses data to better
upsell/cross-sell, and as Netflix innovated its Cinematch recommendation engine.
The latter was famously done through the Netflix Prize, a contest with a
million-dollar award for whoever could best improve Cinematch by at least 10%
(Bennett and Lanning 2007). Such accelerated innovation can be faster, cheaper, and better
than traditional means of innovation. Often, such approaches exploit technologies
such as the cloud and big data. The cloud is the mechanism for reaching multiple
potential solvers on an ad hoc basis, with published big data being the fuel for
problem solving. For example, Netflix published anonymized customer ratings of
movies, and General Electric published planned and actual flight arrival times.


J. Weinman

Today, machine learning and deep learning based on big data sets are a means
by which algorithms are innovating themselves. Google DeepMind's AlphaGo Go-playing system beat the human world champion at Go, Lee Sedol, partly based on
learning how to play by not only “studying” tens of thousands of human games, but
also by playing an increasingly tougher competitor: itself (Moyer 2016).

1.2.5 Value Disciplines to Digital Disciplines
The three classic value disciplines of operational excellence, product leadership and
customer intimacy become transformed in a world of big data and complementary
digital technologies to become information excellence, solution leadership, collective intimacy, and accelerated innovation. These represent four generic strategies
that leverage big data in the service of strategic competitive differentiation; four
generic strategies that represent the horizontal applications of big data.

1.3 Information Excellence
Most of human history has centered on the physical world: hunting and
gathering, fishing, agriculture, mining, and eventually manufacturing and physical
operations such as shipping, rail, and air transport. It's not news that the
focus of human affairs is increasingly digital, but the many ways in which digital
information can complement, supplant, enable, optimize, or monetize physical
operations may be surprising. As more of the world becomes digital, the use of
information, which after all comes from data, becomes more important in the
spheres of business, government, and society (Fig. 1.1).

1.3.1 Real-Time Process and Resource Optimization
There are numerous business functions, such as legal, human resources, finance,
engineering, and sales, and a variety of ways in which different companies in a
variety of verticals such as automotive, healthcare, logistics, or pharmaceuticals
configure these functions into end-to-end processes. Examples of processes might
be “claims processing” or “order to delivery” or “hire to fire”. These in turn use a
variety of resources such as people, trucks, factories, equipment, and information.
Data can be used to optimize resource use as well as to optimize processes for
goals such as cycle time, cost, or quality.
Some good examples of the use of big data to optimize processes are inventory
management/sales forecasting, port operations, and package delivery logistics.

1 Strategic Applications of Big Data


Fig. 1.1 High-level architecture for information excellence

Too much inventory is a bad thing, because there are costs to holding inventory:
the capital invested in the inventory, risk of disaster, such as a warehouse fire,
insurance, floor space, obsolescence, shrinkage (i.e., theft), and so forth. Too little
inventory is also bad, because not only may a sale be lost, but the prospect may
go elsewhere to acquire the good, realize that the competitor is a fine place to
shop, and never return. Big data can help with sales forecasting and thus setting
correct inventory levels. It can also help to develop insights, which may be subtle
or counterintuitive. For example, when Hurricane Frances was projected to strike
Florida, analytics helped stock stores, not only with “obvious” items such as
bottled water and flashlights, but non-obvious products such as strawberry Pop-Tarts
(Hayes 2004). This insight was based on mining store transaction data from prior hurricanes.
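The kind of transaction mining described above can be sketched in a few lines of Python. All store records, item names, and figures below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical transaction records: (store, item, units_sold, period),
# where period is "normal" or "pre_hurricane". The items and numbers
# are invented, not actual retail data.
transactions = [
    ("s1", "bottled water", 120, "normal"),
    ("s1", "bottled water", 480, "pre_hurricane"),
    ("s1", "strawberry pop-tarts", 40, "normal"),
    ("s1", "strawberry pop-tarts", 280, "pre_hurricane"),
    ("s1", "toothpaste", 60, "normal"),
    ("s1", "toothpaste", 63, "pre_hurricane"),
]

def sales_uplift(records):
    """Per item, the ratio of pre-hurricane sales to normal-period sales."""
    totals = defaultdict(lambda: {"normal": 0, "pre_hurricane": 0})
    for _store, item, units, period in records:
        totals[item][period] += units
    return {item: t["pre_hurricane"] / t["normal"] for item, t in totals.items()}

# Items with the highest uplift are candidates for pre-storm stocking.
uplift = sales_uplift(transactions)
top_item = max(uplift, key=uplift.get)
```

An analyst would then review the high-uplift items, which is where non-obvious results like the Pop-Tarts finding surface.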
Consider a modern container port. There are multiple goals, such as minimizing
the time ships are in port to maximize their productivity, minimizing the time ships
or rail cars are idle, ensuring the right containers get to the correct destinations,
maximizing safety, and so on. In addition, there may be many types of structured
and unstructured data, such as shipping manifests, video surveillance feeds of roads
leading to and within the port, data on bridges and loading cranes, weather forecasts,
truck license plates, and so on. All of these data sources can be used to optimize port
operations in line with the multiple goals (Xvela 2016).
Or consider a logistics firm such as UPS. UPS has invested hundreds of millions
of dollars in ORION (On-Road Integrated Optimization and Navigation). It takes
data such as physical mapping data regarding roads, delivery objectives for each
package, customer data such as when customers are willing to accept deliveries,
and the like. For each of 55,000 routes, ORION determines the optimal sequence of
an average of 120 stops per route. The combinatorics here are staggering, since there
are roughly 10^200 different possible sequences, making it impossible to calculate
a perfectly optimal route, but heuristics can take all this data and try to determine
the best way to sequence stops and route delivery trucks to minimize idling time,
time waiting to make left turns, fuel consumption and thus carbon footprint, and
to maximize driver labor productivity and truck asset utilization, all the while
balancing out customer satisfaction and on-time deliveries. Moreover, real-time data
such as geographic location, traffic congestion, weather, and fuel consumption, can
be exploited for further optimization (Rosenbush and Stevens 2015).
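ORION's actual models are proprietary, but the heuristic idea can be illustrated with a minimal sketch: build a route greedily by nearest neighbor, then improve it with 2-opt segment reversals. The stops below are invented planar points; the real problem also involves road networks, delivery windows, and turn costs:

```python
import math

# Hypothetical delivery stops as (x, y) points; ORION's real inputs
# (road maps, time windows, turn penalties) are far richer.
stops = [(0, 0), (2, 1), (5, 0), (6, 4), (1, 3), (3, 5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route):
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def nearest_neighbor(points):
    """Greedy construction: always drive to the closest unvisited stop."""
    route, remaining = [points[0]], set(range(1, len(points)))
    while remaining:
        nxt = min(remaining, key=lambda j: dist(route[-1], points[j]))
        route.append(points[nxt])
        remaining.remove(nxt)
    return route

def two_opt(route):
    """Improvement: reverse sub-segments while doing so shortens the route."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                candidate = route[:i] + route[i:j][::-1] + route[j:]
                if route_length(candidate) < route_length(route) - 1e-12:
                    route, improved = candidate, True
    return route

route = two_opt(nearest_neighbor(stops))
```

Heuristics like these cannot guarantee the optimal sequence, but they produce good routes in a tiny fraction of the time exhaustive search would take.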
Such capabilities could also be used to not just minimize time or maximize
throughput, but also to maximize revenue. For example, a theme park could
determine the optimal location for a mobile ice cream or face painting stand,
based on prior customer purchases and exact location of customers within the
park. Customers’ locations and identities could be identified through dedicated long
range radios, as Disney does with MagicBands; through smartphones, as Singtel’s
DataSpark unit does (see below); or through their use of related geographically
oriented services or apps, such as Uber or Foursquare.

1.3.2 Long-Term Process Improvement
In addition to such real-time or short-term process optimization, big data can also
be used to optimize processes and resources over the long term.
For example, DataSpark, a unit of Singtel (a Singaporean telephone company)
has been extracting data from cell phone locations to be able to improve the
MTR (Singapore’s subway system) and customer experience (Dataspark 2016).
For example, suppose that GPS data showed that many subway passengers were
traveling between two stops but that they had to travel through a third stop—a hub—
to get there. By building a direct line to bypass the intermediate stop, travelers could
get to their destination sooner, and congestion could be relieved at the intermediate
stop as well as on some of the trains leading to it. Moreover, this data could also be
used for real-time process optimization, by directing customers to avoid a congested
area or line suffering an outage through the use of an alternate route. Obviously a
variety of structured and unstructured data could be used to accomplish both short-term and long-term improvements, such as GPS data, passenger mobile accounts
and ticket purchases, video feeds of train stations, train location data, and the like.
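A sketch of the origin-destination analysis described above, over invented trip records (in reality these would be reconstructed from fare-card and location data):

```python
from collections import Counter

# Hypothetical trips: (origin, stations passed through, destination).
# Station names are invented for illustration.
trips = [
    ("A", ["HUB"], "B"),
    ("A", ["HUB"], "B"),
    ("A", ["HUB"], "B"),
    ("C", [], "HUB"),
    ("A", ["HUB"], "D"),
]

# Count origin-destination pairs whose journeys are forced through a hub;
# the heaviest pairs are candidates for a new direct line.
via_hub = Counter((o, d) for o, via, d in trips if "HUB" in via)
busiest_pair, volume = via_hub.most_common(1)[0]
```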

1.3.3 Digital-Physical Substitution and Fusion
The digital world and the physical world can be brought together in a number of
ways. One way is substitution, as when a virtual audio, video, and/or web conference
substitutes for physical airline travel, or when an online publication substitutes for
a physically printed copy. Another way to bring together the digital and physical
worlds is fusion, where both online and offline experiences become seamlessly
merged. An example is in omni-channel marketing, where a customer might browse
online, order online for pickup in store, and then return an item via the mail. Or, a
customer might browse in the store, only to find the correct size out of stock, and
order in store for home delivery. Managing data across the customer journey can
provide a single view of the customer to maximize sales and share of wallet for that
customer. This might include analytics around customer online browsing behavior,
such as what they searched for, which styles and colors caught their eye, or what
they put into their shopping cart. Within the store, patterns of behavior can also be
identified, such as whether people of a certain demographic or gender tend to turn
left or right upon entering the store.

1.3.4 Exhaust-Data Monetization
Processes which are instrumented and monitored can generate massive amounts
of data. This data can often be monetized or otherwise create benefits in creative
ways. For example, Uber’s main business is often referred to as “ride sharing,”
which is really just offering short-term ground transportation to passengers desirous
of rides by matching them up with drivers who can give them rides. However, in
an arrangement with the city of Boston, it will provide ride pickup and drop-off
locations, dates, and times. The city will use the data for traffic engineering, zoning,
and even determining the right number of parking spots needed (O’Brien 2015).
Such inferences can be surprisingly subtle. Consider the case of a revolving-door
firm that could predict retail trends and perhaps even recessions. Fewer shoppers
visiting retail stores means fewer shoppers entering via the revolving door. This
means lower usage of the door, and thus fewer maintenance calls.
Another good example is 23andMe. 23andMe is a firm that was set up to leverage
new low cost gene sequence technologies. A 23andMe customer would take a saliva
sample and mail it to 23andMe, which would then sequence the DNA and inform
the customer about certain genetically based risks they might face, such as markers
signaling increased likelihood of breast cancer due to a variant in the BRCA1 gene.
They also would provide additional types of information based on this sequence,
such as clarifying genetic relationships among siblings or questions of paternity.
After compiling massive amounts of data, they were able to monetize the
collected data outside of their core business. In one $50 million deal, they sold data
from Parkinson’s patients to Genentech, with the objective of developing a cure
for Parkinson’s through deep analytics (Lee 2015). Note that not only is the deal
lucrative, especially since essentially no additional costs were incurred to sell this
data, but also highly ethical. Parkinson’s patients would like nothing better than for
Genentech—or anybody else, for that matter—to develop a cure.



1.3.5 Dynamic, Networked, Virtual Corporations
Processes don’t need to be restricted to the four walls of the corporation. For
example, supply chain optimization requires data from suppliers, channels, and
logistics companies. Many companies have focused on their core business and
outsourced or partnered with others to create and continuously improve supply
chains. For example, Apple sells products, but focuses on design and marketing,
not manufacturing. As many people know, Apple products are built by a partner,
Foxconn, with expertise in precision manufacturing electronic products.
One step beyond such partnerships or virtual corporations are dynamic, networked virtual corporations. An example is Li & Fung. Apple sells products such
as iPhones and iPads, without owning any manufacturing facilities. Similarly, Li &
Fung sells products, namely clothing, without owning any manufacturing facilities.
However, unlike Apple, which relies largely on one main manufacturing partner, Li &
Fung relies on a network of over 10,000 suppliers. Moreover, the exact configuration
of those suppliers can change week by week or even day by day, even for the same
garment. A shirt, for example, might be sewn in Indonesia with buttons from
Thailand and fabric from South Korea. That same SKU, a few days later, might be
made in China with buttons from Japan and fabric from Vietnam. The constellation
of suppliers is continuously optimized, by utilizing data on supplier resource
availability and pricing, transportation costs, and so forth (Wind et al. 2009).

1.3.6 Beyond Business
Information excellence also applies to governmental and societal objectives. Earlier
we mentioned using big data to improve Singapore's subway operations and
customer experience; later we’ll mention how it’s being used to improve traffic
congestion in Rio de Janeiro. As an example of societal objectives, consider the
successful delivery of vaccines to remote areas. Vaccines can lose their efficacy
or even become unsafe unless they are refrigerated, but delivery to outlying areas
can mean a variety of transport mechanisms and intermediaries. For this reason,
it is important to ensure that they remain refrigerated across their “cold chain.”
A low-tech method could potentially warn of unsafe vaccines: for example, put a
container of milk in with the vaccines, and if the milk spoils it will smell bad and the
vaccines are probably bad as well. However, by collecting data wirelessly from the
refrigerators throughout the delivery process, not only can it be determined whether
the vaccines are good or bad, but improvements can be made to the delivery process
by identifying the root cause of the loss of refrigeration, for example, loss of power
at a particular port, and thus steps can be taken to mitigate the problem, such as the
deployment of backup power generators (Weinman 2016).



1.4 Solution Leadership
Products (and services) were traditionally standalone and manual, but now have
become connected and automated. Products and services now connect to the cloud
and from there on to ecosystems. The ecosystems can help collect data, analyze it,
provide data to the products or services, or all of the above (Fig. 1.2).

1.4.1 Digital-Physical Mirroring
In product engineering, an emerging approach is to build a data-driven engineering
model of a complex product. For example, GE mirrors its jet engines with “digital
twins” or “virtual machines” (unrelated to the computing concept of the same
name). The idea is that features, engineering design changes, and the like can be
made to the model much more easily and cheaply than building an actual working jet
engine. A new turbofan blade material with different weight, brittleness, and cross
section might be simulated to determine impacts on overall engine performance.
To do this requires product and materials data. Moreover, predictive analytics can
be run against massive amounts of data collected from operating engines (Warwick).

Fig. 1.2 High-level architecture for solution leadership



1.4.2 Real-Time Product/Service Optimization
Recall that solutions are smart, digital, connected products that tie over networks to
the cloud and from there onward to unbounded ecosystems. As a result, the actual
tangible, physical product component functionality can potentially evolve over time
as the virtual, digital components adapt. As two examples, consider a browser that
provides “autocomplete” functions in its search bar, i.e., typing shortcuts based
on previous searches, thus saving time and effort. Or, consider a Tesla, whose
performance is improved by evaluating massive quantities of data from all the Teslas
on the road and their performance. As Tesla CEO Elon Musk says, “When one car
learns something, the whole fleet learns” (Coren 2016).

1.4.3 Product/Service Usage Optimization
Customers can use products or services more effectively when those products collect data and provide feedback. The Ford Fusion's EcoGuide SmartGauge provides
feedback to drivers on their fuel efficiency. Jackrabbit starts are bad; smooth driving
is good. The EcoGuide SmartGauge grows “green” leaves to provide drivers with
feedback, and is one of the innovations credited with dramatically boosting sales of
the car (Roy 2009).
GE Aviation’s Flight Efficiency Services uses data collected from numerous
flights to determine best practices to maximize fuel efficiency, ultimately improving
airlines' carbon footprint and profitability. This is an enormous opportunity, because
it’s been estimated that one-fifth of fuel is wasted due to factors such as suboptimal
fuel usage and inefficient routing. For example, voluminous data and quantitative
analytics were used to develop a business case to gain approval from the Malaysian
Directorate of Civil Aviation for AirAsia to use single-engine taxiing. This conserves fuel because only one engine is used to taxi, rather than all the engines
running while the plane is taxiing or waiting on the ground.
Perhaps one of the most interesting examples of using big data to optimize
products and services comes from a company called Opower, which was acquired
by Oracle. It acquires data on buildings, such as year built, square footage, and
usage, e.g., residence, hair salon, real estate office. It also collects data from smart
meters on actual electricity consumption. By combining all of this together, it
can message customers such as businesses and homeowners with specific, targeted
insights, such as that a particular hair salon’s electricity consumption is higher than
80% of hair salons of similar size in the area built to the same building code (and
thus equivalently insulated). Such “social proof” gamification has been shown to
be extremely effective in changing behavior compared to other techniques such as
rational quantitative financial comparisons (Weinman 2015).
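Opower's models are richer, but the peer-comparison mechanic can be sketched simply; the salon names and kWh readings here are invented:

```python
# Hypothetical monthly kWh readings for hair salons of similar size and
# building code; the names and figures are invented.
peer_usage = {"salon_a": 410, "salon_b": 520, "salon_c": 390,
              "salon_d": 700, "salon_e": 455}

def peer_percentile(customer, usage):
    """Percentage of peers that use less electricity than this customer."""
    others = [v for k, v in usage.items() if k != customer]
    below = sum(1 for v in others if v < usage[customer])
    return 100 * below / len(others)

# The "social proof" message compares a customer only to comparable peers.
pct = peer_percentile("salon_d", peer_usage)
message = (f"Your usage is higher than {pct:.0f}% of similar salons."
           if pct >= 50 else "Your usage is typical for similar salons.")
```

The analytics are trivial; the behavioral effect comes from the peer framing rather than the arithmetic.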



1.4.4 Predictive Analytics and Predictive Maintenance
Collecting data from things and analyzing it can enable predictive analytics and
predictive maintenance. For example, the GE GEnx jet engine has 20 or so sensors,
each of which collects 5000 data points per second in areas such as oil pressure,
fuel flow, and rotation speed. This data can then be used to build models that identify
anomalies and predict when the engine will fail.
This in turn means that airline maintenance crews can “fix” an engine before it
fails. This maximizes what the airlines call “time on wing,” in other words, engine
availability. Moreover, engines can be proactively repaired at optimal times and
optimal locations, where maintenance equipment, crews, and spare parts are kept
(Weinman 2015).
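GE's predictive models are far more sophisticated; a minimal stand-in for the idea is to flag readings that deviate sharply from a recent rolling baseline. The oil-pressure stream, window size, and threshold below are all invented:

```python
import statistics

def anomalies(readings, window=10, threshold=3.0):
    """Indices of readings that deviate sharply from the recent baseline."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        sd = statistics.pstdev(recent) or 1e-9  # guard against zero spread
        if abs(readings[i] - mean) / sd > threshold:
            flagged.append(i)
    return flagged

# Invented oil-pressure stream: steady around 50 psi, then a sudden spike
# that a maintenance crew would want to investigate before failure.
stream = [50.1, 49.8, 50.0, 50.2, 49.9, 50.1, 50.0, 49.7, 50.3, 50.0, 58.0]
flagged = anomalies(stream)
```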

1.4.5 Product-Service System Solutions
When formerly standalone products become connected to back-end services and
solve customer problems, they become product-service system solutions. Data can
be the glue that holds the solution together. A good example is Nike and the
Nike+ ecosystem.
Nike has a number of mechanisms for collecting activity tracking data, such as
the Nike+ FuelBand, mobile apps, and partner products such as Nike+ Kinect,
which is a video "game" that coaches you through various workouts. These can
collect data on activities, such as running or bicycling or doing jumping jacks. Data
can be collected, such as the route taken on a run, and normalized into “NikeFuel”
points (Weinman 2015).
Other elements of the ecosystem can measure outcomes. For example, a variety
of scales can measure weight, but the Withings Smart Body Analyzer can also
measure body fat percentage, and link that data to NikeFuel points (Choquel 2014).
By linking devices measuring outcomes to devices monitoring activities—with
the linkages being data traversing networks—individuals can better achieve their
personal goals to become better athletes, lose a little weight, or get more toned.

1.4.6 Long-Term Product Improvement
Actual data on how products are used can ultimately be used for long-term product
improvement. For example, a cable company can collect data on the pattern of
button presses on its remote controls. A repeated pattern of clicking around the
“Guide” button fruitlessly and then finally ordering an “On Demand” movie might
lead to a clearer placement of a dedicated “On Demand” button on the control. Car
companies such as Tesla can collect data on actual usage, say, to determine how
many batteries to put in each vehicle based on the statistics of distances driven;
airlines can determine what types of meals to offer; and so on.

1.4.7 The Experience Economy
In the Experience Economy framework, developed by Joe Pine and Jim Gilmore,
there is a five-level hierarchy of increasing customer value and firm profitability.
At the lowest level are commodities, which may be farmed, fished, or mined, e.g.,
coffee beans. At the next level of value are products, e.g., packaged, roasted coffee
beans. Still one level higher are services, such as a corner coffee bar. One level
above this are experiences, such as a fine French restaurant, which offers coffee
on the menu as part of a “total product” that encompasses ambience, romance, and
professional chefs and services. But, while experiences may be ephemeral, at the
ultimate level of the hierarchy lie transformations, which are permanent, such as
a university education, learning a foreign language, or having life-saving surgery
(Pine and Gilmore 1999).

1.4.8 Experiences
Experiences can be had without data or technology. For example, consider a hike
up a mountain to its summit followed by taking in the scenery and the fresh air.
However, data can also contribute to experiences. For example, Disney MagicBands
are long-range radios that tie to the cloud. Data on theme park guests can be used to
create magical, personalized experiences. For example, guests can sit at a restaurant
without expressly checking in, and their custom order will be brought to their
table, based on tracking through the MagicBands and data maintained in the cloud
regarding the individuals and their orders (Kuang 2015).

1.4.9 Transformations
Data can also be used to enable transformations. For example, the Nike+ family
and ecosystem of solutions mentioned earlier can help individuals lose weight or
become better athletes. This can be done by capturing data from the individual on
steps taken, routes run, and other exercise activities undertaken, as well as results
data through connected scales and body fat monitors. As technology gets more
sophisticated, no doubt such automated solutions will do what any athletic coach
does, e.g., coaching on backswings, grip positions, stride lengths, pronation and the
like. This is how data can help enable transformations (Weinman 2015).



1.4.10 Customer-Centered Product and Service Data
When multiple products and services each collect data, they can provide a 360° view
of the patient. For example, patients are often scanned by radiological equipment
such as CT (computed tomography) scanners and X-ray machines. While individual
machines should be calibrated to deliver a safe dose, too many scans from too
many devices over too short a period can deliver doses over accepted limits, leading
potentially to dangers such as cancer. GE Dosewatch provides a single view of the
patient, integrating dose information from multiple medical devices from a variety
of manufacturers, not just GE (Combs 2014).
Similarly, financial companies are trying to develop a 360° view of their
customers’ financial health. Rather than the brokerage division being run separately
from the mortgage division, which is separate from the retail bank, integrating data
from all these divisions can help ensure that the customer is neither over-leveraged
nor underinvested.

1.4.11 Beyond Business
The use of connected refrigerators to help improve the cold chain was described
earlier in the context of information excellence for process improvement. Another
example of connected products and services is cities, such as Singapore, that help
reduce carbon footprint through connected parking garages. The parking lots report
how many spaces they have available, so that a driver looking for parking need not
drive all around the city: clearly visible digital signs and a mobile app describe how
many—if any—spaces are available (Abdullah 2015).
This same general strategy can be used with even greater impact in the developing world. For example, in some areas, children walk an hour or more to a well
to fill a bucket with water for their families. However, the well may have gone dry.
Connected, “smart” pump handles can report their usage, and inferences can be
made as to the state of the well. For example, a few pumps of the handle and then
no usage, another few pumps and then no usage, etc., is likely to signify someone
visiting the well, attempting to get water, then abandoning the effort due to lack of
success (ITU and Cisco 2016).
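The inference described above can be sketched as a simple heuristic over hypothetical hourly pump-handle counts; the thresholds are invented for illustration:

```python
# Hypothetical hourly pump-handle counts from a connected handle. A working
# well shows sustained pumping; a dry well shows short, abandoned bursts
# (someone tries, gets no water, gives up) separated by silence.
def looks_dry(hourly_counts, burst_max=5, min_bursts=3):
    """Heuristic: several short bursts and no sustained use suggests a dry well."""
    bursts = [c for c in hourly_counts if 0 < c <= burst_max]
    sustained = [c for c in hourly_counts if c > burst_max]
    return len(bursts) >= min_bursts and not sustained

working_well = [0, 40, 35, 0, 50, 42, 0, 38]
dry_well = [0, 3, 0, 0, 2, 0, 4, 0]
```

A rule this crude would be tuned against field observations, but it shows how usage patterns alone can signal the state of the well.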

1.5 Collective Intimacy
At one extreme, a customer "relationship" is a one-time, anonymous transaction.
Consider a couple celebrating their 30th wedding anniversary with a once-in-a-lifetime
trip to Paris. While exploring the Left Bank, they buy a baguette and some
Brie from a hole-in-the-wall bistro. They will never see the bistro again, nor vice versa.

At the other extreme, there are companies and organizations that see customers
repeatedly. Amazon.com sees its customers’ patterns of purchases; Netflix sees its
customers’ patterns of viewing; Uber sees its customers’ patterns of pickups and
drop-offs. As other verticals become increasingly digital, they too will gain more
insight into customers as individuals, rather than anonymous masses. For example,
automobile insurers are increasingly pursuing “pay-as-you-drive,” or “usage-based”
insurance. Rather than customers’ premiums being merely based on aggregate,
coarse-grained information such as age, gender, and prior tickets, insurers can
charge premiums based on individual, real-time data such as driving over the speed
limit, weaving in between lanes, how congested the road is, and so forth.
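A usage-based premium might be computed roughly as follows; the base rate and per-event weights are invented, not any insurer's actual model:

```python
# Hypothetical usage-based premium: a base rate adjusted by telematics
# events. All weights are invented for illustration.
def monthly_premium(base=100.0, miles=0, speeding_events=0,
                    hard_brakes=0, night_miles=0):
    premium = base
    premium += 0.02 * miles           # exposure: distance driven
    premium += 5.0 * speeding_events  # driving over the speed limit
    premium += 2.0 * hard_brakes      # proxy for weaving or tailgating
    premium += 0.05 * night_miles     # riskier driving conditions
    return round(premium, 2)

cautious = monthly_premium(miles=400, hard_brakes=1)
risky = monthly_premium(miles=400, speeding_events=12, hard_brakes=9)
```

The point is that the premium now reflects individual, observed behavior rather than the coarse demographic averages of traditional underwriting.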
Somewhere in between, there are firms that may not have any transaction history
with a given customer, but can use predictive analytics based on statistical insights
derived from large numbers of existing customers. Capital One, for example,
famously disrupted the existing market for credit cards by building models to create
"intimate" offers tailored to each prospect rather than a one-size-fits-all model
(Pham 2015).
Big data can also be used to analyze and model churn. Actions can be taken to
intercede before a customer has defected, thus retaining that customer and his or her lifetime value.
In short, big data can be used to determine target prospects, determine what to
offer them, maximize revenue and profitability, keep them, decide to let them defect
to a competitor, or win them back (Fig. 1.3).

Fig. 1.3 High-level architecture for collective intimacy



1.5.1 Target Segments, Features and Bundles
A traditional arena for big data and analytics has been better marketing to customers
and market basket analysis. For example, one type of analysis entails clustering
customers and prospects into three groups: loyal customers, who will buy your
product no matter what; those who won't buy no matter what; and those who can
be swayed. Marketing funds for advertising and
promotions are best spent with the last category, which will generate sales uplift.
A related type of analysis is market basket analysis, identifying those products
that might be bought together. Offering bundles of such products can increase profits
(Skiera and Olderog 2000). Even without bundles, better merchandising can goose
sales. For example, if new parents who buy diapers also buy beer, it makes sense to
put them in the aisle together. This may be extended to product features, where the
“bundle” isn’t a market basket but a basket of features built in to a product, say a
sport suspension package and a V8 engine in a sporty sedan.
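The market basket analysis described above can be sketched with a simple lift computation; lift above 1 means two items co-occur more often than independence would predict. The baskets are invented:

```python
from collections import Counter
from itertools import combinations

# Invented market baskets; the diapers-and-beer pairing echoes the classic
# (and possibly apocryphal) example in the text.
baskets = [
    {"diapers", "beer", "wipes"},
    {"diapers", "beer"},
    {"diapers", "wipes"},
    {"beer", "chips"},
    {"diapers", "beer", "chips"},
    {"milk", "chips"},
]

n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(pair for b in baskets
                      for pair in combinations(sorted(b), 2))

def lift(a, b):
    """Lift > 1: the pair is bought together more often than chance predicts."""
    support_pair = pair_counts[tuple(sorted((a, b)))] / n
    return support_pair / ((item_counts[a] / n) * (item_counts[b] / n))

diapers_beer_lift = lift("diapers", "beer")
```

High-lift pairs are the candidates for bundling or co-merchandising.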

1.5.2 Upsell/Cross-Sell
Amazon.com uses a variety of approaches to maximize revenue per customer. Some,
such as Amazon Prime, which offers free two-day shipping, are low tech and based
on behavioral economics principles such as the “flat-rate bias” and how humans
frame expenses such as sunk costs (Lambrecht and Skiera 2006). But they are
perhaps best known for their sophisticated algorithms, which do everything from
automating pricing decisions, making millions of price changes every day (Falk
2013), to recommending additional products to buy through a variety of
algorithmically generated capabilities such as “People Also Bought These Items”.
Some are reasonably obvious, such as, say, paper and ink suggestions if a copier
is bought or a mounting plate if a large flat screen TV is purchased. But many are
subtle, and based on deep analytics at scale of the billions of purchases that have
been made.
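A plain co-occurrence count is the simplest possible version of a "People Also Bought" feature; Amazon's actual algorithms are far more sophisticated and are not public. A sketch with invented orders:

```python
from collections import Counter

def also_bought(orders, item, top=3):
    """Rank items most often co-purchased with `item` (plain co-occurrence)."""
    co = Counter()
    for order in orders:
        if item in order:
            co.update(order - {item})    # count every other item in the order
    return [product for product, _ in co.most_common(top)]

# Invented order history
orders = [{"copier", "paper", "ink"}, {"copier", "ink"},
          {"copier", "paper"}, {"tv", "mount"}, {"copier", "paper"}]
print(also_bought(orders, "copier"))  # → ['paper', 'ink']
```

At Amazon's scale the same idea runs over billions of orders, which is what makes the subtler associations discoverable.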

1.5.3 Recommendations
If Amazon.com is the poster child for upsell/cross-sell, Netflix is the one for a
pure recommendation engine. Because Netflix charges a flat rate for a household,
there is limited opportunity for upsell without changing the pricing model. Instead,
the primary opportunity is for customer retention, and perhaps secondarily, referral
marketing, i.e., recommendations from existing customers to their friends. The key
to that is maximizing the quality of the total customer experience. This has multiple
dimensions, such as whether DVDs arrive in a reasonable time or a streaming video


J. Weinman

plays cleanly at high resolution, as opposed to pausing to rebuffer frequently. But
one very important dimension is the quality of the entertainment recommendations,
because 70% of what Netflix viewers watch comes about through recommendations.
If viewers like the recommendations, they will like Netflix, and if they don’t, they
will cancel service. So, reduced churn and maximal lifetime customer value are
highly dependent on this (Amatriain 2013).
Netflix applies extremely sophisticated algorithms to trillions of data points in an
attempt to solve the recommendation problem as well as possible. For
example, they must balance out popularity with personalization. Most people like
popular movies; this is why they are popular. But every viewer is an individual,
hence will like different things. Netflix continuously evolves their recommendation
engine(s), which determine which options are presented when a user searches, what
is recommended based on what’s trending now, what is recommended based on
prior movies the user has watched, and so forth. This evolution spans a broad
set of mathematical and statistical methods and machine learning algorithms, such
as matrix factorization, restricted Boltzmann machines, latent Dirichlet allocation,
gradient boosted decision trees, and affinity propagation (Amatriain 2013). In addition, a variety of metrics—such as member retention and engagement time—and
experimentation techniques—such as offline experimentation and A/B testing—are
tuned for statistical validity and used to measure the success of the ensemble of
algorithms (Gomez-Uribe and Hunt 2015).
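Of the methods listed above, matrix factorization is perhaps the most widely known. A toy stochastic-gradient-descent version can be sketched in Python; the ratings, dimensions, and hyperparameters below are invented for illustration and bear no relation to Netflix's production systems:

```python
import random

def factorize(ratings, n_users, n_items, k=2, steps=5000, lr=0.02, reg=0.02, seed=0):
    """Fit user/item latent vectors to observed (user, item, rating) triples by SGD."""
    rnd = random.Random(seed)
    P = [[rnd.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rnd.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        u, i, r = rnd.choice(ratings)
        err = r - sum(P[u][f] * Q[i][f] for f in range(k))
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)   # step toward reducing error,
            Q[i][f] += lr * (err * pu - reg * qi)   # with L2 regularization
    return P, Q

# Invented (user, item, stars) triples; the learned factors fill in missing cells.
ratings = [(0, 0, 5), (0, 1, 3), (1, 1, 3), (2, 0, 4), (2, 1, 2)]
P, Q = factorize(ratings, n_users=3, n_items=2)
predicted = sum(P[1][f] * Q[0][f] for f in range(2))  # user 1's unseen item 0
```

The dot product of a user vector and an item vector predicts the unobserved rating, which is the essence of the approach that performed well in the Netflix Prize era.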

1.5.4 Sentiment Analysis
A particularly active current area in big data is the use of sophisticated algorithms to
determine an individual’s sentiment (Yegulalp 2015). For example, textual analysis
of tweets or posts can determine how a customer feels about a particular product.
Emerging techniques include emotional analysis of spoken utterances and even
sentiment analysis based on facial imaging.
Some enterprising companies are using sophisticated algorithms to conduct such
sentiment analysis at scale, in near real time, to buy or sell stocks based on how
sentiment is turning as well as additional analytics (Lin 2016).
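At its simplest, textual sentiment analysis can be approximated with a word-list scorer. Production systems use trained models, but a sketch with an invented mini-lexicon conveys the core idea:

```python
# Invented mini-lexicons for illustration; real systems use trained models.
POSITIVE = {"love", "great", "excellent", "happy", "amazing"}
NEGATIVE = {"hate", "terrible", "awful", "broken", "disappointed"}

def sentiment(text):
    """Score a post by counting positive vs. negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, it is amazing!"))  # → positive
print(sentiment("Arrived broken. Terrible support."))    # → negative
```

Scaled across millions of tweets or posts, even a crude score of this kind becomes a usable signal of how aggregate sentiment is trending.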

1.5.5 Beyond Business
Such an approach is relevant beyond the realm of corporate affairs. For example,
a government could utilize a collective intimacy strategy in interacting with its
citizens, in recommending the best combination of public transportation based on
personal destination objectives, or the best combination of social security benefits,
based on a personal financial destination. Dubai, for example, has released a mobile
app called Dubai Now that will act as a single portal to thousands of government
services, including, for example, personalized, contextualized GPS-based real-time
traffic routing (Al Serkal 2015).

1 Strategic Applications of Big Data


1.6 Accelerated Innovation
Innovation has evolved through multiple stages, from the solitary inventor, such
as the early human who invented the flint hand knife, through shop invention,
a combination of research lab and experimental manufacturing facility, to the
corporate research labs (Weinman 2015). However, even the best research labs can
hire only so many people, and ideas can come from anywhere.
The theory of open innovation proposes loosening the firm boundaries to partners
who may have ideas or technologies that can be brought into the firm, and to
distribution partners who may be able to make and sell ideas developed from within
the firm. Open innovation suggests creating relationships that help both of these
approaches succeed (Chesbrough 2003).
However, even preselecting relationships can be overly constricting. A still more
recent approach to innovation lets these relationships be ad hoc and dynamic. I call
it accelerated innovation, but it can be not only faster, but also better and cheaper.
One way to do this is by holding contests or posting challenges, which theoretically
anyone in the world could solve. Related approaches include innovation networks
and idea markets. Increasingly, machines will be responsible for innovation, and we
are already seeing this in systems such as IBM’s Chef Watson, which ingested a
huge database of recipes and now can create its own innovative dishes, and Google
DeepMind’s AlphaGo, which is innovating game play in one of the oldest games in
the world, Go (Fig. 1.4).

Fig. 1.4 High-level architecture for accelerated innovation



1.6.1 Contests and Challenges
Netflix depends heavily on the quality of its recommendations to maximize
customer satisfaction, thus customer retention, and thereby total customer lifetime
value and profitability. The original Netflix Cinematch recommendation system
let Netflix customers rate movies on a scale of one star to five stars. If Netflix
recommended a movie that the customer then rated a one, there is an enormous
discrepancy between Netflix’s recommendations and the user’s delight. With a perfect algorithm, customers would always rate a Netflix-recommended movie a five.
Netflix launched the Netflix Prize in 2006. It was open to anyone who wished
to compete, and multiple teams did so, from around the world. Netflix made 100
million anonymized movie ratings available. These came from almost half a million
subscribers, across almost 20,000 movie titles. It also withheld 3 million ratings to
evaluate submitted contestant algorithms (Bennett and Lanning 2007). Eventually,
the prize was awarded to a team that did in fact meet the prize objective: a 10%
improvement in the Cinematch algorithm. Since that time, Netflix has continued
to evolve its recommendation algorithms, adapting them to the now prevalent
streaming environment which provides billions of additional data points. While
user-submitted DVD ratings may be reasonably accurate, actual viewing behaviors
and contexts are substantially more accurate and thus better predictors. As one
example, while many viewers say they appreciate foreign documentaries, actual
viewing behavior shows that crude comedies are much more likely to be watched.
GE Flight Quest is another example of a big data challenge. For Flight Quest 1,
GE published data on planned and actual flight departures and arrivals, as well
as weather conditions, with the objective of better predicting flight times. Flight
Quest II then attempted to improve flight arrival times through better scheduling
and routing. The key point of both the Netflix Prize and GE’s Quests is that large
data sets were the cornerstone of the innovation process. Methods used by Netflix
were highlighted in Sect. 1.5.3. The methods used by the GE Flight Quest winners
span gradient boosting, random forest models, ridge regressions, and dynamic
programming (GE Quest n.d.).
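The Netflix Prize scored submissions by root-mean-square error (RMSE) on the withheld ratings, with the prize requiring a 10% improvement over Cinematch. A sketch of the scoring arithmetic, using invented ratings and predictions:

```python
def rmse(predicted, actual):
    """Root-mean-square error between predictions and withheld ratings."""
    mean_sq = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    return mean_sq ** 0.5

withheld   = [5, 3, 1, 4, 2]                 # ratings held back for evaluation
baseline   = [4.2, 3.4, 2.1, 3.6, 2.9]       # hypothetical Cinematch predictions
challenger = [4.7, 3.1, 1.4, 3.9, 2.3]       # hypothetical contestant predictions

improvement = 1 - rmse(challenger, withheld) / rmse(baseline, withheld)
# The prize required an improvement of at least 10% over the baseline RMSE.
```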

1.6.2 Contest Economics
Running such a contest exhibits what I call “contest economics.” For example,
rather than paying for effort, as a firm would when paying salaries to its R&D
team, it can now pay only for results. The results may be qualitative, for example,
“best new product idea,” or quantitative, for example, a percentage improvement
in the Cinematch algorithm. Moreover, the “best” idea may be selected, or the best
one surpassing a particular given threshold or judges’ decision. This means that
hundreds or tens of thousands of “solvers” or contestants may be working on your
problem, but you only need to pay out in the event of a sufficiently good solution.
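The pay-for-results logic can be made concrete with a back-of-the-envelope model; all figures below are hypothetical:

```python
def expected_contest_cost(prize, p_success, admin_cost):
    """The sponsor pays the prize only if some contestant clears the bar."""
    return admin_cost + prize * p_success

# Hypothetical comparison: a year of internal R&D vs. an open contest.
internal_rd = 10 * 150_000                     # ten researchers' salaries
contest = expected_contest_cost(prize=1_000_000, p_success=0.6, admin_cost=100_000)
# Many solvers may work on the problem, but the sponsor's payout is capped
# at the prize and is owed only for a sufficiently good solution.
```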



Moreover, because the world’s best experts in a particular discipline may be
working on your problem, the quality of the solution may be higher than if
conducted internally, and the time to reach a solution may be faster than internal
R&D could do it, especially in the fortuitous situation where just the right expert is
matched with just the right problem.

1.6.3 Machine Innovation
Technology is evolving to be not just an enabler of innovation, but the source of
innovation itself. For example, a program called AlphaGo developed by DeepMind,
which has been acquired by Google, bested the European champion, Fan Hui, and
then the world champion, Lee Sedol. Rather than mere brute force examination of
many moves in the game tree together with a board position evaluation metric, it
used a deep learning approach coupled with some game knowledge encoded by its
developers (Moyer 2016).
Perhaps the most interesting development, however, was in Game 2 of the
tournament between AlphaGo and Sedol. Move 37 was so unusual that the human
commentators thought it was a mistake—a bug in the program. Sedol stood up and
left the game table for 15 min to regain his composure. It was several moves later
that the rationale and impact of Move 37 became clear, and AlphaGo ended up
winning that game, and the tournament. Move 37 was “beautiful,” (Metz 2016) in
retrospect, the way that the heliocentric theory of the solar system or the Theory of
Relativity or the concept of quasicrystals now are. To put it another way, a machine
innovated in a way that thousands of years and millions of human players had not.

1.6.4 Beyond Business
Of course, such innovation is not restricted to board games. Melvin is a program
that designs experiments in quantum physics, which are notoriously counterintuitive
or non-intuitive to design. It takes standard components such as lasers and beam
splitters, and determines new ways to combine them to test various quantum
mechanics hypotheses. It has already been successful in creating such experiments.
In another example of the use of big data for innovation, automated hypothesis
generation software was used to scan almost two hundred thousand scientific paper
abstracts in biochemistry to determine the most promising “kinases”—a type of
protein—that activate another specific protein, “p53,” which slows cancer growth.
All but two of the top prospects identified by the software proved to have the desired
effect (The Economist 2014).



1.7 Integrated Disciplines
A traditional precept of business strategy is the idea of focus. As firms select
a focused product area, market segment, or geography, say, they also make a
conscious decision on what to avoid or say “no” to. A famous story concerns
Southwest, an airline known for its no frills, low-cost service. Its CEO, Herb
Kelleher, in explaining its strategy, explained that every strategic decision could be
viewed in the light of whether it helped achieve that focus. For example, the idea of
serving a tasty chicken Caesar salad on its flights could be instantly nixed, because
it wouldn’t be aligned with low cost (Heath and Heath 2007).
McDonald’s famously ran into trouble by attempting to pursue operational
excellence, product leadership, and customer intimacy at the same time, and these
were in conflict. After all, having the tastiest burgers—product leadership—
would mean foregoing mass pre-processing in factories that created frozen
patties—operational excellence. Having numerous products, combinations and
customizations such as double patty, extra mayo, no onions—customer intimacy—
would take extra time and conflict with a speedy drive through line—operational
excellence (Weinman 2015).
However, the economics of information and information technology mean that
a company can well orient itself to more than one discipline. The robots that
run Amazon.com’s logistics centers, for example, can use routing and warehouse
optimization programs—operational excellence—that are designed once, and don’t
necessarily conflict with the algorithms that make product recommendations based
on prior purchases and big data analytics across millions or billions of transactions.
The efficient delivery of unicast viewing streams to Netflix streaming
subscribers—operational excellence—doesn’t conflict with the entertainment
suggestions derived by the Netflix recommender—collective intimacy—nor does
it conflict with the creation of original Netflix content—product leadership—nor
does it impact Netflix’s ability to run open contests and challenges such as the
Netflix Prize or the Netflix Cloud OSS (Open Source Software) Prize—accelerated
innovation.
In fact, not only do the disciplines not conflict, but, in such cases, data captured
or derived in one discipline can be used to support the needs of another in the
same company. For example, Netflix famously used data on customer behaviors,
such as rewind or re-watch, contexts, such as mobile device or family TV, and
demographics, such as age and gender, that were part of its collective intimacy
strategy, to inform decisions made about investing in and producing House of Cards,
a highly popular, Emmy-Award-winning show, that supports product leadership.
The data need not even be restricted to a single company. Uber, the “ridesharing” company, entered into an agreement with Starwood, the hotel company
(Hirson 2015). A given Uber customer might be dropped off at a competitor’s hotel,
offering Starwood the tantalizing possibility of emailing that customer a coupon for
20% off their next stay at a Sheraton or Westin, say, possibly converting a lifelong
competitor customer into a lifelong Starwood customer. The promotion could be
extremely targeted, along the lines of, say, “Mr. Smith, you’ve now stayed at our



competitor’s hotel at least three times. But did you know that the Westin Times
Square is rated 1 star higher than the competitor? Moreover, it’s only half as far
away from your favorite restaurant as the competitor, and has a health club included
in the nightly fee, which has been rated higher than the health club you go to at the
competitor hotel.”
Such uses are not restricted to analytics for marketing promotions. For example,
Waze and the City of Rio de Janeiro have announced a collaboration. Waze is a
mobile application that provides drivers information such as driving instructions,
based not only on maps but also real-time congestion data derived from all other
Waze users, a great example of crowdsourcing with customer intimacy. In a bidirectional arrangement with Rio de Janeiro, Waze will improve its real-time routing by
utilizing not only the data produced by Waze users, but additional data feeds offered
by the City. Data will flow in the other direction, as well, as Rio uses data collected
by Waze to plan new roads or to better time traffic signals (Ungerleider 2015).
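Under the hood, congestion-aware routing of this kind reduces to a shortest-path computation over travel times. A minimal Dijkstra sketch in Python, with an invented road graph whose edge weights stand in for crowd-reported travel minutes:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra over travel times; congestion simply raises an edge's weight."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, minutes in graph.get(node, {}).items():
            nd = d + minutes
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    path, node = [goal], goal             # walk predecessors back to the start
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Invented road graph; weights stand in for crowd-reported travel minutes.
roads = {"A": {"B": 5, "C": 2}, "B": {"D": 4}, "C": {"B": 1, "D": 9}, "D": {}}
print(fastest_route(roads, "A", "D"))  # → (['A', 'C', 'B', 'D'], 7.0)
```

Crowdsourced data changes the edge weights minute by minute; the routing algorithm itself stays the same.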

1.8 Conclusion
A combination of technologies such as the cloud, big data and analytics, machine
learning, social, mobile, and the Internet of Things is transforming the world
around us. Information technologies, of course, have information at their nexus;
consequently, data, together with the capabilities to extract insight from it and to
make decisions and take action on that insight, is key to the strategic application of
information technology to increase the competitiveness of our firms, enhance the
value created for our customers, and to excel beyond these domains into the areas
of government and society.
Four generic strategies—information excellence, solution leadership, collective
intimacy, and accelerated innovation—can be used independently or in combination
to utilize big data and related technologies to differentiate and create customer
value—for better processes and resources, better products and services, better
customer relationships, and better innovation, respectively.

References

Abdullah, Z. (2015). New app to help motorists find available parking. http://
www.straitstimes.com/singapore/new-app-to-help-motorists-find-available-parking. Accessed
15 September 2016.
Al Serkal, M. (2015). Dubai to launch smart app for 2000 government services. http://
ices-1.1625556. Accessed 15 September 2016.
Amatriain, X. (2013). Big and personal: data and models behind Netflix recommendations. In
Proceedings of the 2nd International Workshop on Big Data, Streams and Heterogeneous
Source Mining: Algorithms, Systems, Programming Models and Applications (pp. 1–6). ACM.
Bennett, J., & Lanning, S. (2007). The Netflix prize. In Proceedings of KDD Cup and Workshop
(Vol. 2007, p. 35).



Chesbrough, H. W. (2003). Open innovation: The new imperative for creating and profiting from
technology. Boston: Harvard Business School Press.
Choquel, J. (2014). NikeFuel total on your Withings scale. http://blog.withings.com/2014/07/22/
new-way-to-fuel-your-motivation-see-your-nikefuel-total-on-your-withings-scale. Accessed
15 September 2016.
Combs, V. (2014). An infographic that works: I want dose watch from GE healthcare. Med City
News. http://medcitynews.com/2014/05/infographic-works-want-ges-dosewatch. Accessed 15
September 2016.
Coren, M. (2016). Tesla has 780 million miles of driving data, and adds another million
every 10 hours. http://qz.com/694520/tesla-has-780-million-miles-of-driving-data-and-addsanother-million-every-10-hours/. Accessed 15 September 2016.
Dataspark (2016) Can data science help build better public transport? https://
datasparkanalytics.com/insight/can-data-science-help-build-better-public-transport. Accessed
15 September 2016.
Falk, T. (2013). Amazon changes prices millions of times every day. ZDnet.com. http://
www.zdnet.com/article/amazon-changes-prices-millions-of-times-every-day. Accessed 15
September 2016.
GE Aviation (n.d.). http://www.geaviation.com/commercial/engines/genx/. Accessed 15 September 2016.
GE Quest (n.d.). http://www.gequest.com/c/flight. Accessed 15 November 2016.
Gomez-Uribe, C., & Hunt, N. (2015). The netflix recommender system: algorithms, business value,
and innovation. ACM Transactions on Management Information Systems, 6(4), 1–19.
Hayes, C. (2004). What Wal-Mart knows about customers’ habits. The New York
Times. http://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-aboutcustomers-habits.html. Accessed 15 September 2016.
Heath, C., & Heath, D. (2007). Made to Stick: Why some ideas survive and others die. New York,
USA: Random House.
Hirson, R. (2015). Uber: The big data company. http://www.forbes.com/sites/ronhirson/2015/03/
23/uber-the-big-data-company/#2987ae0225f4. Accessed 15 September 2016.
ITU and Cisco (2016). Harnessing the Internet of Things for Global Development. http:/
Accessed 15 September 2016.
Kaggle (n.d.) GE Tackles the industrial internet. https://www.kaggle.com/content/kaggle/img/
casestudies/Kaggle%20Case%20Study-GE.pdf. Accessed 15 September 2016.
Kuang, C. (2015). Disney’s $1 Billion bet on a magical wristband. Wired. http://www.wired.com/
2015/03/disney-magicband. Accessed 15 September 2016.
Lambrecht, A., & Skiera, B. (2006). Paying too much and being happy about it: Existence, causes
and consequences of tariff-choice biases. Journal of Marketing Research, XLIII, 212–223.
Lee, S. (2015). 23 and Me and Genentech in deal to research Parkinson’s treatments.
SFgate, January 6, 2015. http://www.sfgate.com/health/article/23andMe-and-Genentech-indeal-to-research-5997703.php. Accessed 15 September 2016.
Lin, D. (2016). Seeking and finding alpha—Will cloud disrupt the investment management industry? https://thinkacloud.wordpress.com/2016/03/07/seeking-and-finding-alphawill-cloud-disrupt-the-investment-management-industry/. Accessed 15 September 2016.
Loveman, G. W. (2003). Diamonds in the data mine. Harvard Business Review, 81(5), 109–113.
Metz, C. (2016). In two moves, AlphaGo and lee sedol redefined the future. Wired.
http://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/. Accessed 15
September 2016.
Moyer, C. (2016). How Google’s AlphaGo beat a Go world champion. http://www.theatlantic.com/
technology/archive/2016/03/the-invisible-opponent/475611/. Accessed 15 September 2016.
O’Brien, S. A. (2015). Uber partners with Boston on traffic data. http://money.cnn.com/2015/01/
13/technology/uber-boston-traffic-data/. Accessed 15 September 2016.



Pham, P. (2015). The Impacts of big data that you may not have heard of. forbes.com. http://
-heard-of/#3b1ccc1c957d. Accessed 15 September 2016.
Pine, J., & Gilmore, J. (1999). The experience economy: Work is theatre and every business a stage.
Boston: Harvard Business School Press.
Rosenbush, S., & Stevens, L. (2015). At UPS, the algorithm is the driver. The Wall Street Journal. http://www.wsj.com/articles/at-ups-the-algorithm-is-the-driver-1424136536. Accessed 15
September 2016.
Roy, R., (2009). Ford’s green goddess grows leaves. http://www.autoblog.com/2009/10/29/fordsmart-gauge-engineer/. Accessed 15 September 2016.
Skiera, B., & Olderog, T. (2000). The benefits of bundling strategies. Schmalenbach Business
Review, 52, 137–159.
The Economist (2014). Computer says try this. http://www.economist.com/news/science-andtechnology/21621704-new-type-software-helps-researchers-decide-what-they-should-be-loo
king. Accessed 15 September 2016.
Treacy, M., & Wiersema, F. (1995). The discipline of market leaders. Reading, USA: Addison-Wesley.
Ungerleider, N. (2015). Waze is driving into city hall. http://www.fastcompany.com/3045080/
waze-is-driving-into-city-hall. Accessed 15 September 2016.
Warwick, G. (2015). GE advances analytical maintenance with digital twins. http://
aviationweek.com/optimizing-engines-through-lifecycle/ge-advances-analytical-maintenancedigital-twins. Accessed 15 September 2016.
Weinman, J. (2015). Digital disciplines. New Jersey: John Wiley & Sons.
Weinman, J. (2016). The internet of things for developing economies. CIO. http://www.cio.com/
Accessed 15 September 2016.
Wind, J., Fung, V., & Fung, W. (2009). Network orchestration: Creating and managing global
supply chains without owning them. In P. R. Kleindorfer, Y. (Jerry) R. Wind, & R. E.
Gunther (Eds.), The Network Challenge: Strategy, Profit, and Risk in an Interlinked World
(pp. 299–314). Upper Saddle River, USA: Wharton School Publishing.
Withings (2014). NikeFuel total on your Withings scale. http://blog.withings.com/2014/07/22/
new-way-to-fuel-your-motivation-see-your-nikefuel-total-on-your-withings-scale/. Accessed
15 September 2016.
Xvela (2016). https://xvela.com/solutions.html. Accessed 15 September 2016.
Yegulalp, S. (2015). IBM’s Watson mines Twitter for sentiments. 17 Mar 2015. Infoworld.com.
timents-good-bad-and-ugly.html. Accessed 15 September 2016.

Chapter 2

Start with Privacy by Design in All Big Data
Ann Cavoukian and Michelle Chibba

2.1 Introduction
The evolution of networked information and communication technologies has, in
one generation, radically changed the value of and ways to manage data. These
trends carry profound implications for privacy. The creation and dissemination of
data has accelerated around the world, and is being copied and stored indefinitely,
resulting in the emergence of Big Data. The old information destruction paradigm
created in an era of paper records is no longer relevant, because digital bits
and bytes have now attained near immortality in cyberspace, thwarting efforts
to successfully remove them from “public” domains. The practical obscurity of
personal information—the data protection of yesteryear—is disappearing as data
becomes digitized, connected to the grid, and exploited in countless new ways.
We’ve all but given up trying to inventory and classify information, and now rely
more on advanced search techniques and automated tools to manage and “mine”
data. The combined effect is that while information has become cheap to distribute,
copy, and recombine, personal information has also become far more available
and consequential. The challenges to control and protect personal information are
significant. Implementing and following good privacy practices should not be a
hindrance to innovation, to reaping societal benefits or to finding the means to
reinforce the public good from Big Data analytics—in fact, by doing so, innovation
is fostered with doubly-enabling, win–win outcomes. The privacy solution requires
a combination of data minimization techniques, credible safeguards, meaningful
individual participation in data processing life cycles, and robust accountability
measures put in place by organizations, informed by an enhanced and enforceable
set of universal privacy principles better suited to modern realities. This is where
Privacy by Design becomes an essential approach for Big Data applications. This
chapter begins by defining information privacy, then provides an overview of the
privacy risks associated with Big Data applications. Finally, the authors discuss
Privacy by Design as an international framework for privacy, and provide guidance
on using the Privacy by Design Framework and its 7 Foundational Principles to
achieve both innovation and privacy—not one at the expense of the other.

A. Cavoukian • M. Chibba
Faculty of Science, Privacy and Big Data Institute, Ryerson University, 350 Victoria Street,
Toronto, ON M5B 2K3, Canada
e-mail: ann.cavoukian@ryerson.ca; michelle.chibba@ryerson.ca

© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_2

2.2 Information Privacy Defined
Information privacy refers to the right or ability of individuals to exercise control
over the collection, use and disclosure by others of their personal information
(Clarke 2000). The ability to determine the fate of one’s personal information is
so important that the authors wish to bring to the attention of the readers, the
term “informational self-determination” which underpins the approach taken to
privacy in this chapter. This term was established in 1983 in Germany when the
Constitutional Court ruled that individuals, not governments, determine the fate of
their personal information. More recently, in December 2013, the United Nations
General Assembly adopted resolution 68/167 (UN 2016), which expressed deep
concern at the negative impact that surveillance and interception of communications
may have on human rights. The General Assembly affirmed that the rights held by
people offline must also be protected online, and it called upon all States to respect
and protect the right to privacy in digital communication.
Information privacy makes each of us ‘masters’ of the data that identifies each
of us – as individual, citizen, worker, consumer, patient, student, tourist, investor,
parent, son, or daughter. For this, the notions of empowerment, control, choice and
self-determination are the very essence of what we refer to as information privacy.
We expect that governments and businesses, as ‘custodians’ of our information,
can be trusted with its safekeeping and proper use.
There have also been references to statements such as “If you have nothing to
hide, you have nothing to fear” (Solove 2007). Privacy is not about secrecy. It is
about the freedom to exercise one’s right to decide with whom to share the
personal details of one’s life. Democracy does not begin with intrusions into
one’s personal sphere—it begins with human rights, civil liberties and privacy—all
fundamental to individual freedom.
Sometimes, safekeeping or information security is taken to mean that privacy
has been addressed. To be clear, information security does not equal privacy.
While data security certainly plays a vital role in enhancing privacy, there is an
important distinction to be made—security is about protecting data assets. It is
about achieving the goals of confidentiality, integrity and availability. Privacy-related
goals developed in Europe that complement this security triad are: unlinkability,
transparency and intervenability. In other words, information privacy incorporates
a much broader set of protections than security alone. We look to the work on

2 Start with Privacy by Design in All Big Data Applications


‘contextual integrity’ (Dwork 2014), which extends the meaning of privacy to a much
broader class of transmission principles that cannot be presumed unless warranted
by other context-specific parameters, such as the actors and information types
involved. Privacy relates not only to the way that information is protected and accessed,
but also to the way in which it is collected and used. For example, user access
controls protect personal information from internal threats by preventing even the
possibility of accidental or intentional disclosure or misuse. This protection is
especially needed in the world of Big Data.

2.2.1 Is It Personally Identifiable Information?
Not all data gives rise to privacy concerns. An important first step for any Big
Data application is to determine whether the information involved falls under the
definition of personally identifiable information (PII). Privacy laws around the
world each include a definition of personal information, and it is this definition
that determines whether or not the rules apply. Here we will use the NIST
definition, under which personal information (also known as personally identifiable
information) is any information, recorded or otherwise, relating to an identifiable
individual (NIST 2010).
information (e.g. biographical, biological, genealogical, historical, transactional,
locational, relational, computational, vocational, or reputational), may become
personal in nature. Privacy laws and associated rules will apply to information
if there is a reasonable possibility of identifying a specific individual—whether
directly, indirectly, or through manipulation or data linkage.
Understanding the different forms of non-personal data helps to better understand
what constitutes personal information. One example is de-identified or anonymous
information, which will be dealt with in more detail later in this chapter. NIST
defines de-identified information as records that have had enough personal information removed or obscured in some manner such that the remaining information
does not identify an individual, and there is no reasonable basis to believe that the
information can be used to identify an individual (NIST 2015). As an illustration,
under a U.S. law known as the Health Insurance Portability and Accountability Act
(HIPAA), a set of standards exist to determine when health-care information is
no longer ‘individually identifiable’ or de-identified (HHS 2012). If this standard
is achieved, then the health-care information would not be subject to this law
governing the privacy of health-care information. Another example is the EU
General Data Protection Regulation (GDPR), which similarly excludes anonymous
information (EU Commission 2015). Of interest, however, is that this European
law introduces the concept of “pseudonymization” defined as the processing of
personal data in such a way as to prevent attribution to an identified or identifiable


A. Cavoukian and M. Chibba

person without additional information that may be held separately. For research and
statistical purposes, certain requirements under the GDPR are relaxed if the personal
data is pseudonymized, which is considered an appropriate safeguard alongside
encryption (Official Journal of the European Union 2016).
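Pseudonymization in the GDPR's sense can be illustrated with a keyed hash, where the key plays the role of the "additional information" held separately. The record fields and the key below are invented for illustration:

```python
import hashlib
import hmac

def pseudonymize(record, key):
    """Swap the direct identifier for a keyed hash. Re-identification requires
    the key, i.e. the 'additional information' held separately."""
    token = hmac.new(key, record["patient_id"].encode(), hashlib.sha256).hexdigest()
    out = dict(record)
    out["patient_id"] = token[:16]        # truncated token replaces the identifier
    return out

key = b"held-by-a-separate-custodian"     # hypothetical secret, stored apart from the data
row = {"patient_id": "A-1024", "diagnosis": "J45", "age_band": "40-49"}
pseudo = pseudonymize(row, key)
# The same input and key always yield the same token, so records can still be
# linked for research without exposing the underlying identity.
```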
Another form is when personal information is aggregated. Aggregation refers
to summary data that have been generated by performing a calculation across all
individual units as a whole. For example, medical researchers may use aggregated
patient data to assess new treatment strategies; governments may use aggregated
population data for statistical analysis on certain publicly funded programs for
reporting purposes; companies may use aggregated sales data to assist in determining future product lines. Work has also been done on privacy-preserving data
aggregation in wireless sensor networks, especially relevant in the context of the
Internet of Things (Zhang et al. 2016). By using aggregated data, there is a reduced
risk of connecting this information to a specific person or identifying an individual.
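A minimal sketch (with invented rows and field names) shows how individual-level records are reduced to group-level summaries from which no single person's data can be read back:

```python
from collections import defaultdict

# Hypothetical individual-level rows: one per patient.
patients = [
    {"region": "North", "treatment": "A", "recovered": True},
    {"region": "North", "treatment": "A", "recovered": False},
    {"region": "South", "treatment": "B", "recovered": True},
    {"region": "South", "treatment": "B", "recovered": True},
]

# Aggregate across all individuals: counts per (region, treatment) group.
summary = defaultdict(lambda: {"n": 0, "recovered": 0})
for p in patients:
    group = summary[(p["region"], p["treatment"])]
    group["n"] += 1
    group["recovered"] += p["recovered"]

for (region, treatment), s in sorted(summary.items()):
    print(region, treatment, s["n"], s["recovered"] / s["n"])
# North A 2 0.5
# South B 2 1.0
```

In practice, groups with very few members would also be suppressed (small-cell suppression), since a count of one is effectively an individual record.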
Lastly, while personal information may be classified as confidential, not all
confidential information should be governed under privacy rules. Confidential
information includes information that should not be publicly available and often
holds tremendous value and importance for organizations, such as strategic business
plans, interim revenue forecasts, proprietary research, or other intellectual property.
The distinction is that while the theft or loss of such confidential information is of
grave concern for an organization, it would not constitute a privacy breach because
it does not involve personal information—rather, it is business information.
The growth in Big Data applications and other information communication technologies has added to the challenge of defining personal information.
There are times when information architectures, developed by engineers to ensure
the smooth functioning of computer networks and connectivity, lead to unforeseen
uses that have an impact on identity and privacy. These changes present challenges
to what constitutes personal information, extending it from obvious tombstone
data (name, address, telephone number, date of birth, gender) to the innocuous computational data or metadata that was once the purview of engineering requirements for communicating between devices (Cameron 2013; Mayer et al. 2016).
Metadata, for example, is information generated by our communications devices
and our communications service providers as we use landline or mobile phones,
computers, tablets, or other computing devices. Metadata is essentially information
about other information—in this case, relating to our communications (Mayer
et al. 2016). Using metadata in Big Data analysis requires understanding of context.


NIST (2015) defines ‘pseudonymization’ as a specific kind of transformation in which
the names and other information that directly identifies an individual are replaced with
pseudonyms. Pseudonymization allows linking information belonging to an individual across
multiple data records or information systems, provided that all direct identifiers are systematically
pseudonymized. Pseudonymization can be readily reversed if the entity that performed the
pseudonymization retains a table linking the original identities to the pseudonyms, or if the
substitution is performed using an algorithm for which the parameters are known or can be determined.

2 Start with Privacy by Design in All Big Data Applications


Metadata reveals detailed patterns of associations that can be far more invasive of
privacy than merely accessing the content of one’s communications (Cavoukian
2013a, b). Addresses, such as the Media Access Control (MAC) address, that are designed to be persistent and unique for the purposes of running software applications and utilizing Wi-Fi positioning systems to communicate with a local area network, can now reveal much more about an individual through advances in geo-location services and the use of smart mobile devices (Cavoukian and Cameron 2011). Another
good example in the mobile environment would be a unique device identifier such as
an International Mobile Equipment Identity (IMEI) number: even though this does
not name the individual, if it is used to treat individuals differently, it will fit the
definition of personal data (Information Commissioner’s Office ICO 2013).
No doubt, the mobile ecosystem is extremely complex and architectures that
were first developed to ensure the functioning of wireless network components
now act as geo-location points, thereby transforming the original intent or what
might be an unintended consequence for privacy. As noted by the International
Working Group on Data Protection in Telecommunications (IWGDPT 2004) “The
enhanced precision of location information and its availability to parties other
than the operators of mobile telecommunications networks create unprecedented
threats to the privacy of the users of mobile devices linked to telecommunications
networks.” When a unique identifier may be linked to an individual, it often falls
under the definition of “personal information” and carries with it a set of regulatory obligations.

2.3 Big Data: Understanding the Challenges to Privacy
Before moving into understanding the challenges and risks to privacy that arise
from Big Data applications and the associated data ecosystem, it is important to
emphasize that these should not be deterrents to extracting value from Big Data.
The authors believe that by understanding these privacy risks early on, Big Data
application developers, researchers, policymakers, and other stakeholders will be
sensitized to the privacy issues and therefore, be able to raise early flags on potential
unintended consequences as part of a privacy/security threat risk analysis.
We know that with advances in Big Data applications, organizations are developing a more complete understanding of the individuals with whom they interact
because of the growth and development of the data analytical tools and systems available to them. Public health authorities, for example, need more detailed
information in order to better inform policy decisions related to managing their
increasingly limited resources. Local governments are able to gain insights never
before available into traffic patterns that lead to greater road and pedestrian safety.
These examples and many more demonstrate the ability to extract insights from Big
Data that will, without a doubt, be of enormous socio-economic significance. These
challenges and insights are further examined in the narrative on the impact of Big
Data on privacy (Lane et al. 2014).



With this shift to knowledge creation and service delivery, the value of information and the need to manage it responsibly have grown dramatically. At the
same time, rapid innovation, global competition and increasing system complexity
present profound challenges for informational privacy. The notion of informational
self-determination seems to be collapsing under the weight, diversity, speed and
volume of Big Data processing in the modern digital era. When a Big Data set comprises identifiable information, a host of customary privacy risks apply.
As technological advances improve our ability to exploit Big Data, potential privacy
concerns could stir a regulatory backlash that would dampen the data economy and
stifle innovation (Tene and Polonetsky 2013). These concerns are reflected in, for
example, the debate around the new European legislation that includes a ‘right to
be forgotten’ that is aimed at helping individuals better manage data protection
risks online by requiring organizations to delete their data if there are no legitimate
grounds for retaining it (EU Commission 2012). The genesis of the incorporation
of this right comes from a citizen complaint to a data protection regulator against
a newspaper and a major search engine concerning outdated information about the
citizen that continued to appear in online search results of the citizen’s name. Under
certain conditions now, individuals have the right to ask search engines to remove
links with personal information about them that is “inaccurate, inadequate, irrelevant
or excessive.” (EU Commission 2012)
Big Data challenges the tenets of information security, which may also be
of consequence for the protection of privacy. Security challenges arise because
Big Data involves several infrastructure layers for data processing, new types of
infrastructure to handle the enormous flow of data, and encryption of large data sets, which does not scale easily. Further, a data breach may have more severe
consequences when enormous datasets are stored. Consider, for example, the value
of a large dataset of identifiable information or confidential information for that
matter, that could make it a target of theft or for ransom—the larger the dataset, the
more likely it may be targeted for misuse. Once unauthorized disclosure takes place,
the impact on privacy will be far greater, because the information is centralized and
contains more data elements. In extreme cases, unauthorized disclosure of personal
information could put public safety at risk.
Outsourcing Big Data analytics and managing data accountability are other
issues that arise when handling identifiable datasets. This is especially true in
a Big Data context, since organizations with large amounts of data may lack
the ability to perform analytics themselves and will outsource this analysis and
reporting (Fogarty and Bell 2014). There is also a growing presence of data
brokers involved in collecting information, including personal information, from
a wide variety of sources other than the individual, for the purpose of reselling
such information to their customers for various purposes, including verifying an
individual’s identity, differentiating records, marketing products, and preventing
financial fraud (FTC 2012). Data governance becomes a sine qua non for the
enterprise and the stakeholders within the Big Data ecosystem.



2.3.1 Big Data: The Antithesis of Data Minimization
To begin, the basis of Big Data is the antithesis of a fundamental privacy principle
which is data minimization. The principle of data minimization or the limitation
principle (Gürses et al. 2011) is intended to ensure that no more personal information is collected and stored than what is necessary to fulfil clearly defined purposes.
This approach follows through the full data lifecycle, where personal data must be
deleted when it is no longer necessary for the original purpose. The challenge to this
is that Big Data entails a new way of looking at data, where data is assigned value in
itself. In other words, the value of the data is linked to its future and potential uses.
In moving from data minimization to what may be termed data maximization
or Big Data, the challenge to privacy is the risk of creating automatic data linkages
between seemingly non-identifiable data which, on its own, may not be sensitive, but
when compiled, may generate a sensitive result. These linkages can result in a broad
portrait of an individual including revelations of a sensitive nature—a portrait once
inconceivable since the identifiers were separated in various databases. Through the
use of Big Data tools, we also know that it is possible to identify patterns which may
predict people’s dispositions, for example related to health, political viewpoints or
sexual orientation (Cavoukian and Jonas 2012).
By connecting key pieces of data that link people to things, the capability of data
analytics can render ordinary data into information about an identifiable individual
and reveal details about a person’s lifestyle and habits. A telephone number or postal
code, for example, can be combined with other data to identify the location of a
person’s home and work; an IP or email address can be used to identify consumer
habits and social networks.
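A toy illustration of such a linkage (all records below are fabricated): neither dataset names anyone on its own, yet joining them on shared quasi-identifiers re-identifies every row:

```python
# "Anonymous" health data: no names, only quasi-identifiers.
health = [
    {"postal": "M5V 2T6", "birthdate": "1980-04-12", "condition": "asthma"},
    {"postal": "V6B 1A1", "birthdate": "1975-09-30", "condition": "diabetes"},
]

# A separately published directory containing the same quasi-identifiers.
directory = [
    {"name": "Alice Smith", "postal": "M5V 2T6", "birthdate": "1980-04-12"},
    {"name": "Bob Jones", "postal": "V6B 1A1", "birthdate": "1975-09-30"},
]

# Joining on (postal code, birthdate) attaches a name to each "anonymous" row.
index = {(d["postal"], d["birthdate"]): d["name"] for d in directory}
linked = []
for h in health:
    key = (h["postal"], h["birthdate"])
    if key in index:
        linked.append({"name": index[key], "condition": h["condition"]})

print(linked[0])  # {'name': 'Alice Smith', 'condition': 'asthma'}
```

The join itself is trivial; the privacy risk comes entirely from the fact that the quasi-identifier combination is rare enough to be unique to one person.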
An important trend and contribution to Big Data is the movement by government
institutions to open up their data holdings in an effort to enhance citizen participation
in government and at the same time spark innovation and new insights through
access to invaluable government data (Cavoukian 2009).2
With this potential for Big Data to create data linkages being so powerful, the
term “super” data or “super” content has been introduced (Cameron 2013). “Super”
data is more powerful than other data in a Big Data context, because the use of
one piece of “super” data, which on its own would not normally reveal much, can
spark new data linkages that grow exponentially until the individual is identified.
Each new transaction in a Big Data system would compound this effect and spread
identifiability like a contagion.
Indeed, to illustrate the significant implications of data maximization on privacy
we need only look at the shock of the Snowden revelations and the eventual repercussions. A top EU court decision in 2015 declared the longstanding Safe Harbor


There are many government Open Data initiatives such as U.S. Government’s Open Data at
www.data.gov; Canadian Government’s Open Data at http://open.canada.ca/en/open-data; UN
Data at http://data.un.org/; EU Open Data Portal at https://data.europa.eu/euodp/en/data/. This is
just a sample of the many Open Data sources around the world.



data transfer agreement between Europe and the U.S. invalid (Lomas 2015). The
issues had everything to do with concerns about not just government surveillance
but the relationship with U.S. businesses and their privacy practices. Eventually, a new
agreement was introduced known as the EU-U.S. Privacy Shield (US DOC 2016)
(EU Commission 2016). This new mechanism introduces greater transparency
requirements for the commercial sector on their privacy practices among a number
of other elements including U.S. authorities affirming that collection of information
for intelligence is focussed and targeted.
The authors strongly believe that an important lesson learned for Big Data success is that when the individual participant is more directly involved in information
collection, the accuracy of the information’s context grows and invariably increases
the quality of the data under analysis. Another observation, that may seem to be
contradictory, is that even in Big Data scenarios where algorithms are tasked with
finding connections within vast datasets, data minimization is not only essential
for safeguarding personally identifiable information—it could help with finding the
needle without the haystack by reducing extraneous irrelevant data.

2.3.2 Predictive Analysis: Correlation Versus Causation
Use of correlation analysis may yield completely incorrect results for individuals.
Correlation is often mistaken for causality (Ritter 2014). If the analyses show that
individuals who like X have an eighty per cent probability rating of being exposed
to Y, it is impossible to conclude that this will occur in 100 per cent of the cases.
Thus, discrimination on the basis of statistical analysis may become a privacy issue
(Sweeney 2013). A development where more and more decisions in society are
based on use of algorithms may result in a “Dictatorship of Data”, (Cukier and
Mayer-Schonberger 2013) where we are no longer judged on the basis of our actual
actions, but on the basis of what the data indicate will be our probable actions.
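The gap can be simulated with purely synthetic data: in the sketch below, a hidden factor drives both "likes X" and "exposure to Y", producing a conditional probability of roughly eighty per cent even though neither variable has any causal effect on the other:

```python
import random

random.seed(0)  # reproducible synthetic data

n = 10_000
likes_x = exposed_given_x = 0
for _ in range(n):
    z = random.random() < 0.5                  # hidden confounder
    x = random.random() < (0.9 if z else 0.1)  # "likes X" (driven by z)
    y = random.random() < (0.9 if z else 0.1)  # "exposed to Y" (also driven by z)
    if x:
        likes_x += 1
        exposed_given_x += y

p_y_given_x = exposed_given_x / likes_x
print(round(p_y_given_x, 2))  # close to 0.8, yet X does not cause Y
```

Acting on such a figure as if it were destiny would misjudge the remaining individuals, which is exactly the discrimination risk described above.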
In a survey undertaken by the Annenberg Public Policy Center, the researchers
found that most Americans overwhelmingly consider forms of price discrimination
and behavioral targeting ethically wrong (Turow et al. 2015). Not only are these
approaches based on profiling individuals, but they also use personal information about an individual for purposes the individual is unaware of. The openness of data sources
and the power of not just data mining but now predictive analysis and other complex
algorithms also present a challenge to the process of de-identification. The risks of
re-identification are more apparent, requiring more sophisticated de-identification
techniques (El Emam et al. 2011). In addition, while the concept of “nudging”
is gaining popularity, using identifiable data for profiling individuals to analyse,
predict, and influence human behaviour may be perceived as invasive and unjustified.
Data determinism and discrimination are also concerns that arise from a Dictatorship of Data. Extensive use of automated decisions and prediction analyses
may actually result in adverse consequences for individuals. Algorithms are not
neutral, but reflect choices, among others, about data, connections, inferences,



interpretations, and thresholds for inclusion that advance a specific purpose. The
concern is that Big Data may consolidate existing prejudices and stereotyping, as
well as reinforce social exclusion and stratification (Tene and Polonetsky 2013;
IWGDPT 2014; FTC 2016). This is said to have implications for the quality of Big
Data analysis because of “echo chambers”3 in the collection phase (Singer 2011;
Quattrociocchi et al. 2016).

2.3.3 Lack of Transparency/Accountability
As an individual’s personal information spreads throughout the Big Data ecosystem
amongst numerous players, it is easy to see that the individual will have less control
over what may be happening to the data. This secondary use of data raises privacy
concerns. A primary purpose is identified at the time of collection of personal
information. Secondary uses are generally permitted with that person’s consent,
unless otherwise permitted by law. Using personal information in Big Data analytics
may not be permitted under the terms of the original consent as it may constitute a
secondary use—unless consent to the secondary use is obtained from the individual.
This characteristic is often linked with a lack of transparency. Whether deliberate or
inadvertent, lack of openness and transparency on how data is compiled and used,
is contrary to a fundamental privacy principle.
It is clear that organizations participating in the Big Data ecosystem need to
have a strong privacy program in place (responsible information management). If
individuals don’t have confidence that their personal information is being managed
properly in Big Data applications, then their trust will be eroded and they may withdraw or find alternative mechanisms to protect their identity and privacy. The consequences of a privacy breach can include reputational harm, legal action, damage
to a company’s brand, regulatory sanctions, and disruption to internal operations.
In more severe cases, it could cause the demise of an organization (Solove 2014).
According to TRUSTe’s Consumer Privacy Confidence Index 2016, 92 per cent of
individuals worry about their privacy online, 44 per cent do not trust companies with
their personal information, and 89 per cent avoid doing business with companies that
they believe do not protect their privacy (TRUSTe/NCSA 2016).
Despite the fact that privacy and security risks may exist, organizations should
not fear pursuing innovation through data analytics. Through the application of
privacy controls and use of appropriate privacy tools privacy risks may be mitigated,
thereby enabling organizations to capitalize on the transformative potential of Big
Data—while adequately safeguarding personal information. This is the central


In news media an echo chamber is a metaphorical description of a situation in which information,
ideas, or beliefs are amplified or reinforced by transmission and repetition inside an “enclosed” system, where different or competing views are censored, disallowed, or otherwise underrepresented.
The term is by analogy with an acoustic echo chamber, where sounds reverberate.



motivation for Privacy by Design, which is aimed at preventing privacy violations
from arising in the first place. Given the necessity of establishing user trust in
order to gain public acceptance of its technologies, any organization seeking to take
advantage of Big Data must apply the Privacy by Design framework as new products
and applications are developed, marketed, and deployed.

2.4 Privacy by Design and the 7 Foundational Principles
The premise of Privacy by Design has at its roots the Fair Information Practices, or
FIPs. Indeed, most privacy laws around the world are based on these practices. By
way of history, the Code of Fair Information Practices (FIPs) was developed in the
1970s and based on essentially five principles (EPIC n.d.):
1. There must be no personal data record-keeping systems whose very existence is
secret.
2. There must be a way for a person to find out what information about the person
is in a record and how it is used.
3. There must be a way for a person to prevent information about the person that was
obtained for one purpose from being used or made available for other purposes
without the person’s consent.
4. There must be a way for a person to correct or amend a record of identifiable
information about the person.
5. Any organization creating, maintaining, using, or disseminating records of
identifiable personal data must assure the reliability of the data for their intended
use and must take precautions to prevent misuses of the data.
FIPs represented an important development in the evolution of data privacy since
they provided an essential starting point for responsible information management
practices. However, many organizations began to view enabling privacy via FIPs
and associated laws as regulatory burdens that inhibited innovation. This zero-sum
mindset viewed the task of protecting personal information as a “balancing act”
of competing business and privacy requirements. This balancing approach tended
to overemphasize the significance of notice and choice as the primary method
for addressing personal information data management. As technologies developed,
the possibility for individuals to meaningfully exert control over their personal
information became more and more difficult. It became increasingly clear that FIPs
were a necessary but not a sufficient condition for protecting privacy. Accordingly,
the attention of privacy protection had begun to shift from reactive compliance with
FIPs to proactive system design.
With advances in technologies, it became increasingly apparent that systems
needed to be complemented by a set of norms that reflect broader privacy dimensions (Damiani 2013). The current challenges to privacy relate to the dynamic
relationship associated with the forces of innovation, competition and the global
adoption of information communications technologies. These challenges have been



mirrored in security by design. Just as users rely on security engineers to ensure the
adequacy of encryption key lengths, for example, data subjects will rely on privacy
engineers to appropriately embed risk-based controls within systems and processes.
Given the complex and rapid nature of these developments, it becomes apparent that
privacy has to become the default mode of design and operation.
Privacy by Design (PbD) is a globally recognized proactive approach to privacy.
It is a framework developed in the late 1990s by co-author Dr. Ann Cavoukian
(Cavoukian 2011). Privacy by Design is a response to compliance-based approaches
to privacy protection that tend to focus on addressing privacy breaches after-the-fact.
Our view is that this reactive approach does not adequately meet the demands of the
Big Data era. Instead, we recommend that organizations consciously and proactively
incorporate privacy strategies into their operations, by building privacy protections
into their technology, business strategies, and operational processes.
By taking a proactive approach to privacy and making privacy the default setting,
PbD can have a wide-ranging impact across an organization. The approach can
result in changes to governance structures, operational and strategic objectives, roles
and accountabilities, policies, information systems and data flows, decision-making
processes, relationships with stakeholders, and even the organization’s culture.
PbD has been endorsed by many public- and private-sector authorities in the
United States, the European Union, and elsewhere (Harris 2015). In 2010, PbD
was unanimously passed as a framework for privacy protection by the International
Assembly of Privacy Commissioners and Data Protection Authorities (CNW 2010).
This approach transforms consumer privacy issues from a pure policy or compliance
issue into a business imperative. Since getting privacy right has become a critical
success factor to any organization that deals with personal information, taking
an approach that is principled and technology-neutral is now more relevant than
ever. Privacy is best interwoven proactively and to achieve this, privacy principles
should be introduced early on—during architecture planning, system design, and
the development of operational procedures. Privacy by Design, where possible,
should be rooted into actual code, with defaults aligning both privacy and business objectives.
The business case for privacy focuses on gaining and maintaining customer trust,
breeding loyalty, and generating repeat business. The value proposition typically
reflects the following:
1. Consumer trust drives successful customer relationship management (CRM) and
lifetime value—in other words, business revenues;
2. Broken trust will result in a loss of market share and revenue, translating into less
return business and lower stock value; and
3. Consumer trust hinges critically on the strength and credibility of an organization’s data privacy policies and practices.
In a marketplace where organizations are banding together to offer suites of
goods and services, trust is clearly essential. Of course, trust is not simply an end-user issue. Companies that have done the work to gain the trust of their customers
cannot risk losing it as a result of another organization’s poor business practices.



2.4.1 The 7 Foundational Principles
Privacy by Design Foundational Principles build upon universal FIPPs in a way
that updates and adapts them to modern information management needs and
requirements. By emphasizing proactive leadership and goal-setting, systematic
and verifiable implementation methods, and demonstrable positive-sum results, the
principles are designed to reconcile the need for robust data protection and an organization’s desire to unlock the potential of data-driven innovation. Implementing
PbD means focusing on, and living up to, the following 7 Foundational Principles,
which form the essence of PbD (Cavoukian 2011).
Principle 1: Use proactive rather than reactive measures, anticipate and prevent
privacy invasive events before they happen (Proactive not Reactive; Preventative not Remedial).
Principle 2: Personal data must be automatically protected in any given IT system
or business practice. If an individual does nothing, their privacy still remains intact
(Privacy as the Default). Data minimization is also a default position for privacy, i.e.
the concept of always starting with the minimum personal data possible and then
justifying additional collection, disclosure, retention, and use on an exceptional and
specific data-by-data basis.
Principle 3: Privacy must be embedded into the design and architecture of IT
systems and business practices. It is not bolted on as an add-on, after the fact. Privacy
is integral to the system, without diminishing functionality (Privacy Embedded into Design).
Principle 4: All legitimate interests and objectives are accommodated in a
positive-sum manner (Full Functionality—Positive-Sum [win/win], not Zero-Sum).
Principle 5: Security is applied throughout the entire lifecycle of the data
involved—data is securely retained, and then securely destroyed at the end of the
process, in a timely fashion (End-to-End Security—Full Lifecycle Protection).
Principle 6: All stakeholders are assured that whatever the business practice or
technology involved, it is in fact, operating according to the stated promises and
objectives, subject to independent verification; transparency is key (Visibility and
Transparency—Keep it Open).
Principle 7: Architects and operators must keep the interests of the individual
uppermost by offering such measures as strong privacy defaults, appropriate notice,
and empowering user-friendly options (Respect for User Privacy—Keep it User-Centric).



2.5 Big Data Applications: Guidance on Applying the PbD
Framework and Principles
While the 7 Foundational Principles of PbD should be applied in a holistic manner
as a broad framework, there are specific principles worthy of pointing out because
they are what defines and distinguishes this approach to privacy. These are principles
1 (Proactive and Preventative), 2 (By Default/Data Minimization), 3 (Embedded
in Design) and 4 (Positive-sum). Although the two examples provided below are
specific to mobile apps, they are illustrative of the Privacy by Design approach to
being proactive, focussing on data minimization and embedding privacy by default.

2.5.1 Being Proactive About Privacy Through Prevention
Privacy by Design aspires to the highest global standards of practical privacy and
data protection possible and to go beyond compliance and achieve visible evidence
and recognition of leadership, regardless of jurisdiction. Good privacy doesn’t just
happen by itself—it requires proactive and continuous goal-setting at the earliest
stages. Global leadership in data protection begins with explicit recognition of the
benefits and value of adopting strong privacy practices, early and consistently (e.g.,
preventing data breaches or harms to individuals from occurring in the first place).

Your app’s main purpose is to display maps. These maps are downloaded by a
mobile device from your central server. They are then later used on the device,
when there may be no network connection available. You realise that analytics
would be useful to see which maps are being downloaded by which users.
This in turn would allow you to make targeted suggestions to individual users
about which other maps they might want to download. You consider using
the following to identify individuals who download the maps: i) the device’s
IMEI number; ii) the MAC address of the device’s wireless network interface;
and iii) the mobile phone number used by the device. You realise that any of
those identifiers may constitute personal data, so for simplicity you decide
not to take on the responsibility of dealing with them yourself. Instead, you
decide to gain users’ consent for the map suggestions feature. When a user
consents, they are assigned a randomly generated unique identifier, solely for
use by your app. (Excerpted from Information Commissioner’s Office ICO 2013)
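The approach in this excerpt can be sketched as follows (the function name and the in-memory store are hypothetical): a random, app-scoped identifier replaces hardware identifiers such as the IMEI or MAC address, and is only ever created with consent:

```python
import uuid

app_ids = {}  # hypothetical in-memory store of consent-scoped identifiers

def analytics_id(user, consented):
    """Return a random app-scoped identifier, or None without consent."""
    if not consented:
        return None
    if user not in app_ids:
        # Random UUID: not derived from the IMEI, MAC address, or phone
        # number, so it identifies nothing outside this app.
        app_ids[user] = str(uuid.uuid4())
    return app_ids[user]

a1 = analytics_id("alice", consented=True)
a2 = analytics_id("alice", consented=True)
assert a1 == a2                                      # stable for a consenting user
assert analytics_id("bob", consented=False) is None  # no consent, no identifier
```

Because the identifier is random rather than derived from the device, withdrawing consent and deleting the stored value severs the link entirely.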



2.5.2 Data Minimization as the Default Through
Personal information that is not collected, retained, or disclosed is data that does
not need to be protected, managed, or accounted for. If the personal information
does not exist, then it cannot be accessed, altered, copied, enriched, shared, lost,
hacked, or otherwise used for secondary and unauthorized purposes. Privacy by
Design is premised on the idea that the starting point for designing information
technologies and systems should always be maximally privacy-enhancing. The
default configuration or settings of technologies, tools, platforms, or services offered
to individuals should be as restrictive as possible regarding use of personally
identifiable data.
When Big Data analytics involves the use of personally identifiable information,
data minimization has the biggest impact on managing data privacy risks, by
effectively eliminating risk at the earliest stage of the information life cycle.
Designing Big Data analytical systems at the front end with no collection of
personally identifiable information—unless and until a specific and compelling purpose is defined—is the ideal. For example, use(s) of personal information should
be limited to the intended, primary purpose(s) of collection and only extended to
other, non-consistent uses with the explicit consent of the individual (Article 29
Data Protection Working Party 2013). In other cases, organizations may find that
summary or aggregate data may be more than sufficient for their needs.

Your app uses GPS location services to recommend interesting activities near
to where the user is. The database of suggested activities is kept on a central
server under your control. One of your design goals is to keep the amount of
data your app downloads from the central server to a minimum. You therefore
design your app so that each time you use it, it sends location data to the
central server so that only the nearest activities are downloaded. However,
you are also keen to use less privacy-intrusive data where possible. You design
your app so that, by default, the device itself works out where the nearest town
is and uses this location instead, avoiding the need to send exact GPS coordinates of the user’s location back to the central server. Users who want
results based on their accurate location can change the default behaviour.
(Excerpted from Information Commissioner’s Office ICO 2013)
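The default described in the excerpt can be sketched as below; rounding coordinates to a coarse grid stands in for the "nearest town" lookup, and the names and precision are assumptions for illustration:

```python
def location_for_server(lat: float, lon: float, share_exact: bool = False):
    """Return the coordinates the app sends to the activity server.

    By default the device coarsens its position to about 0.1 degree
    (roughly 10 km), so exact GPS coordinates never leave the device.
    Users who opt in to accurate results get precise coordinates instead.
    """
    if share_exact:
        return lat, lon
    return round(lat, 1), round(lon, 1)
```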

De-identification strategies are considered data minimization. De-identification
provides for a set of tools or techniques to strip a dataset of all information that
could be used to identify an individual, either directly or indirectly, through linkages
to other datasets. The techniques involve deleting or masking “direct identifiers,”
such as names or social insurance numbers, and suppressing or generalizing indirect
identifiers, such as postal codes or birthdates. Indirect identifiers may not be
personally identifying in and of themselves, but when linked to other datasets that
contain direct identifiers, may personally identify individuals. If done properly,
de-identified data can be used for research purposes and data analysis—thus
contributing new insights and achieving innovative goals—while minimizing the
risk of disclosure of the identities of the individuals behind the data (Cavoukian and
El Emam 2014).
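A minimal sketch of these two techniques, deleting direct identifiers and generalizing indirect ones, might look like this; the field names and the generalization rules (first three postal characters, birth year) are assumptions for the example, not prescribed by the source:

```python
def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize indirect ones."""
    direct = {"name", "social_insurance_number"}
    out = {k: v for k, v in record.items() if k not in direct}
    if "postal_code" in out:        # generalize: keep only the postal prefix
        out["postal_code"] = out["postal_code"][:3]
    if "birthdate" in out:          # generalize: keep only the year
        out["birthdate"] = out["birthdate"][:4]
    return out

patient = {"name": "A. Smith", "social_insurance_number": "123-456-789",
           "postal_code": "M5V 2T6", "birthdate": "1980-07-14",
           "diagnosis": "type 2 diabetes"}
deidentified = deidentify(patient)   # keeps diagnosis, "M5V", and "1980"
```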
This is not to suggest, of course, that data should be collected exclusively in
instances where it may become useful or that data collected for one purpose may be
repurposed at will. Rather, in a big data world, the principle of data minimization
should be interpreted differently, requiring organizations to de-identify data when
possible, implement reasonable security measures, and limit uses of data to those
that are acceptable from not only an individual but also a societal perspective (Tene
and Polonetsky 2013).

2.5.3 Embedding Privacy at the Design Stage
When privacy commitments and data protection controls are embedded into technologies, operations, and information architectures in a holistic, integrative manner,
innovation and creativity are often by-products (Cavoukian et al. 2014a, b). By
holistic, we mean that broader contexts should always be considered for a proper
assessment of privacy risks and remedies. An integrative approach takes into consideration all stakeholder interests as part of the development dialogue. Sometimes,
having to re-look at alternatives because existing solutions are unacceptable from
a privacy perspective spurs innovative and creative thinking. Embedding privacy
and data protection requires taking a systematic, principled approach—one that
not only relies on accepted standards and process frameworks, but that can stand
up to external reviews and audits. All of the 7 Foundational Principles should be
applied with equal rigour, at every step in design and operation. By doing so, the
privacy impacts of the resulting technology, process, or information architecture,
and their uses, should be demonstrably minimized, and not easily degraded through
use, misconfiguration, or error. To minimize concerns of untoward data usage,
organizations should disclose the logic underlying their decision-making processes
to the extent possible without compromising their trade secrets or intellectual
property rights.
The concept of “user-centricity” may evoke contradictory meanings in networked
or online environments. Through a privacy lens, it contemplates a right of control by
an individual over his or her personal information when online, usually with the help
of technology. For most system designers, it describes a system built with individual
users in mind that may perhaps incorporate users’ privacy interests, risks and needs.
The first may be considered libertarian (informational self-determination), the other,
paternalistic. Privacy by Design embraces both. It acknowledges that technologies,
processes and infrastructures must be designed not just for individual users, but
also structured by them. Users are rarely, if ever, involved in every design decision
or transaction involving their personal information, but they are nonetheless in an
unprecedented position today to exercise a measure of meaningful control over
those designs and transactions, as well as the disposition and use of their personal
information by others.
User interface designers know that human-computer interface can often make
or break an application. Function (substance) is important, but the way in which
that function is delivered is equally as important. This type of design embeds an
effective user privacy experience. As a quid pro quo for looser data collection
and minimization restrictions, organizations should be prepared to share the
wealth created by individuals’ data with those individuals. This means providing
individuals with access to their data in a “usable” format and allowing them to take
advantage of third party applications to analyze their own data and draw useful
conclusions (e.g., consume less protein, go on a skiing vacation, invest in bonds)
(Tene and Polonetsky 2013).

2.5.4 Aspire for Positive-Sum Without Diminishing Functionality
In Big Data scenarios, networks are more complex and sophisticated, thereby
undermining the dominant “client-server” transaction model because individuals
are often far removed from the client side of the data processing equation. How
could privacy be assured when the collection, disclosure, and use of personal
information might not even involve the individual at all? Inevitably, a zero-sum
paradigm prevails where more of one good (e.g., public security, fraud detection,
operational control) cancels out another good (individual privacy, freedom). The
authors challenge the premise that privacy and data protection necessarily have to
be ceded in order to gain public, personal, or information security benefits from
Big Data. The opposite of zero-sum is positive-sum, where multiple goals may be
achieved concurrently.
Many security technologies and information systems could be designed (or
redesigned) to be effective while minimizing or even eliminating their privacy-invasive features. This is the positive-sum paradigm. We need only look to the work
of researchers in the area of privacy preserving data mining (Lindell and Pinkas
2002). In some cases, however, this requires broadening the scope of application
from only information communication technologies (ICTs) to include the “soft”
legal, policy, procedural, and other organizational controls and operating contexts
in which privacy might be embedded.
De-identification tools and techniques are gaining popularity and there are
several commercially available products. Nonetheless, furthering research into
de-identification continues (El Emam 2013a, b). Some emerging research-level
technologies hold much promise for enabling privacy and utility of Big Data analysis to co-exist. Two of these technologies are differential privacy and synthetic data.

2 Start with Privacy by Design in All Big Data Applications


Differential privacy is an approach that injects random noise into the results
of dataset queries to provide a mathematical guarantee that the presence of any
one individual in the dataset will be masked—thus protecting the privacy of each
individual in the dataset. Typical implementations of differential privacy work by
creating a query interface or “curator” that stands between the dataset’s personal
information and those wanting access to it. An algorithm evaluates the privacy risks
of the queries. The software determines the level of “noise” to introduce into the
analysis results before releasing it. The distortion that is introduced is usually small
enough that it does not affect the quality of the answers in any meaningful way—yet
it is sufficient to protect the identities of the individuals in the dataset (Dwork 2014).
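A toy sketch of such a curator, using the standard Laplace mechanism for a single count query, is shown below; the dataset, epsilon value, and names are illustrative, and a real curator must also track a cumulative privacy budget across queries:

```python
import math
import random

def private_count(records, predicate, epsilon=0.5):
    """Answer a count query with Laplace noise calibrated to sensitivity 1.

    Adding or removing one individual changes a count by at most 1, so noise
    drawn from Laplace(0, 1/epsilon) masks any single person's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5                    # uniform on (-0.5, 0.5)
    scale = 1.0 / epsilon
    # Inverse-transform sample from the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; the researcher receives only the noisy answer, never the underlying records.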
At an administrative level, researchers are not given access to the dataset to
analyze themselves when applying differential privacy. Not surprisingly, this limits
the kinds of questions researchers can ask. Given this limitation, some researchers
are exploring the potential of creating “synthetic” datasets for researchers’ use. As
long as the number of individuals in the dataset is sufficiently large in comparison
to the number of fields or dimensions, it is possible to generate a synthetic dataset
comprised entirely of “fictional” individuals or altered identities that retain the
statistical properties of the original dataset—while delivering differential privacy’s
mathematical “noise” guarantee (Blum et al. 2008). While it is possible to generate
such synthetic datasets, the computational effort required to do so is usually
extremely high. However, there have been important developments into making the
generation of differentially private synthetic datasets more efficient and research
continues to show progress (Thaler et al. 2010).

2.6 Conclusion
There are privacy and security risks and challenges that organizations will face
in the pursuit of Big Data nirvana. While a significant portion of this vast digital
universe is not of a personal nature, there are inherent privacy and security risks
that cannot be overlooked. Make no mistake, organizations must seriously consider
not just the use of Big Data but also the implications of a failure to fully realize
the potential of Big Data. Big Data and Big Data analysis promise new insights
and benefits such as medical/scientific discoveries, new and innovative economic
drivers, and predictive solutions to otherwise unknown, complex societal problems.
Misuses and abuses of personal data diminish informational self-determination,
cause harms, and erode the confidence and trust needed for innovative economic
growth and prosperity. By examining success stories and approaches such as Privacy
by Design, the takeaway should be practical strategies to address the question of
‘How do we achieve the value of Big Data and still respect consumer privacy?’
Above all, Privacy by Design requires architects and operators to keep the interests
of the individual uppermost by offering such measures as strong privacy defaults,
appropriate notice, and empowering user-friendly options. Keep it user-centric!


References

Article 29 Data protection working party (2013). Opinion 03/2013 on purpose limitation. http://
ec.europa.eu/justice/data-protection/index_en.htm. Accessed 2 August 2016.
Blum, A., Ligett, K., Roth, A. (2008). A learning theory approach to non-interactive database
privacy. In Proceedings of the 40th ACM SIGACT Symposium on Theory of Computing
(pp. 609–618).
Cameron, K. (2013). Afterword. In M. Hildebrandt et al. (Eds.), Digital Enlightenment Yearbook
2013. Amsterdam: IOS Press.
Cavoukian, A. (2009). Privacy and government 2.0: the implications of an open world. http://
www.ontla.on.ca/library/repository/mon/23006/293152.pdf. Accessed 22 November 2016.
Cavoukian, A. (2011). Privacy by Design: The 7 Foundational Principles. Ontario: IPC.
Cavoukian, A. (2013a). A Primer on Metadata: Separating Fact from Fiction. Ontario: IPC. http:/
Cavoukian, A. (2013b). Privacy by design: leadership, methods, and results. In S. Gutwirth, R.
Leenes, P. de Hert, & Y. Poullet (Eds.), European data protection: Coming of age
(pp. 175–202). Dordrecht: Springer Science & Business Media.
Cavoukian, A., & Cameron, K. (2011). Wi-Fi Positioning Systems: Beware of Unintended
Consequences: Issues Involving Unforeseen Uses of Pre-Existing Architecture. Ontario: IPC.
Cavoukian, A., & El Emam, K. (2014). De-identification Protocols: Essential for Protecting Privacy.
Ontario: IPC.
Cavoukian, A., & Jonas, J. (2012). Privacy by Design in the Age of Big Data. Ontario: IPC.
Cavoukian, A., Bansal, N., & Koudas, N. (2014a). Building Privacy into Mobile Location Analytics
(MLA) through Privacy by Design. Ontario: IPC.
Cavoukian, A., Dix, A., & El Emam, K. (2014b). The Unintended Consequences of Privacy
Paternalism. Ontario: IPC.
Clarke, R. (2000). Beyond OECD guidelines; privacy protection for the 21st century. Xamax
Consultancy Pty Ltd. http://www.rogerclarke.com/DV/PP21C.html. Accessed 22 November
CNW (2010). Landmark resolution passed to preserve the future of privacy. Press Release. Toronto,
ON, Canada. http://www.newswire.ca/news-releases/landmark-resolution-passed-to-preservethe-future-of-privacy-546018632.html. Accessed 22 November 2016.
Cukier, K., & Mayer-Schonberger, V. (2013). The dictatorship of data. MIT Technology Review.
https://www.technologyreview.com/s/514591/the-dictatorship-of-data/. Accessed 22 November 2016.
Damiani, M. L. (2013). Privacy enhancing techniques for the protection of mobility patterns in
LBS: research issues and trends. In S. Gutwirth, R. Leenes, P. de Hert, & Y. Poullet (Eds.),
European data protection: Coming of age (pp. 223–238). Dordrecht: Springer
Science & Business Media.
Department of Commerce (US DOC) (2016). EU-U.S. privacy shield fact sheet. Office of public
affairs, US department of commerce. https://www.commerce.gov/news/fact-sheets/2016/02/euus-privacy-shield. Accessed 22 November 2016.
Dwork, C. (2014). Differential privacy: a cryptographic approach to private data analysis. In J.
Lane, V. Stodden, S. Bender, & H. Nissenbaum (Eds.), Privacy, big data, and the public good:
Frameworks for engagement. New York: Cambridge University Press.
El Emam, K. (2013a). Benefiting from big data while protecting privacy. In K. El Emam
(Ed.), Risky business: Sharing health data while protecting privacy. Bloomington,
IN: Trafford Publishing.
El Emam, K. (2013b). Who’s afraid of big data? In K. El Emam (Ed.), Risky business:
Sharing health data while protecting privacy. Bloomington, IN: Trafford Publishing.



El Emam, K., Buckeridge, D., Tamblyn, R., Neisa, A., Jonker, E., & Verma, A. (2011). The
re-identification risk of Canadians from longitudinal demographics. BMC Medical Informatics and Decision Making, 11:46. http://bmcmedinformdecismak.biomedcentral.com/articles/
10.1186/1472-6947-11-46. Accessed 22 November 2016.
EPIC (n.d.). Website: https://epic.org/privacy/consumer/code_fair_info.html. Accessed 22 November 2016.
EU Commission (2012). Fact sheet on the right to be forgotten. http://ec.europa.eu/justice/dataprotection/files/factsheets/factsheet_data_protection_en.pdf. Accessed 22 November 2016.
EU Commission (2015). Fact sheet—questions and answers—data protection reform. Brussels.
http://europa.eu/rapid/press-release_MEMO-15-6385_en.htm. Accessed 4 November 2016.
EU Commission (2016). The EU data protection reform and big data factsheet. http://
Accessed 22 November 2016.
Fogarty, D., & Bell, P. C. (2014). Should you outsource analytics? MIT Sloan Management Review,
55(2), Winter.
FTC (2012). Protecting consumer privacy in an era of rapid change: Recommendations
for businesses and policymakers. https://www.ftc.gov/sites/default/files/documents/
reports/federal-trade-commission-report-protecting-consumer-privacy-era-rapid-changerecommendations/120326privacyreport.pdf Accessed August 2016.
FTC (2016). Big data: A tool for inclusion or exclusion? Understanding the Issues.
https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusionunderstanding-issues/160106big-data-rpt.pdf. Accessed 23 November 2016.
Gürses, S.F. Troncoso, C., & Diaz, C. (2011). Engineering privacy by design, Computers, Privacy
& Data Protection. http://www.cosic.esat.kuleuven.be/publications/article-1542.pdf. Accessed
19 November 2016.
Harris, M. (2015). Recap of Covington’s privacy by design workshop. Inside privacy:
updates on developments in data privacy and cybersecurity. Covington & Burling LLP, U.S. https://www.insideprivacy.com/united-states/recap-of-covingtons-privacy-bydesign-workshop/. Accessed 19 November 2016.
HHS (2012). Guidance regarding methods for de-identification of protected health information
in accordance with the health insurance portability and accountability act (HIPAA) privacy rule. http://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/
index.html. Accessed 2 August 2016.
Information Commissioner’s Office (ICO) (2013). Privacy in Mobile Apps: Guide for app developers. https://ico.org.uk/media/for-organisations/documents/1596/privacy-in-mobile-apps-dpguidance.pdf Accessed 22 November 2016.
International Working Group on Data Protection in Telecommunications (IWGDPT) (2004)
Common position on privacy and location information in mobile communications services.
https://datenschutz-berlin.de/content/europa-international/international-working-group-ondata-protection-in-telecommunications-iwgdpt/working-papers-and-common-positionsadopted-by-the-working-group. Accessed 22 November 2016.



International Working Group on Data Protection in Telecommunications (IWGDPT) (2014).
Working Paper on Big Data and Privacy: Privacy principles under pressure in the age of
Big Data analytics. 55th Meeting. https://datenschutz-berlin.de/content/europa-international/
international-working-group-on-data-protection-in-telecommunications-iwgdpt/workingpapers-and-common-positions-adopted-by-the-working-group. Accessed 22 November 2016.
Lane, J., et al. (2014). Privacy, big data and the public good: frameworks for engagement.
Cambridge: Cambridge University Press.
Lindell, Y., & Pinkas, B. (2002). Privacy preserving data mining. Journal of Cryptology, 15,
177–206. International Association for Cryptologic Research.
Lomas, N. (2015). Europe’s top court strikes down safe Harbor data-transfer agreement
with U.S. Techcrunch. https://techcrunch.com/2015/10/06/europes-top-court-strikes-downsafe-harbor-data-transfer-agreement-with-u-s/. Accessed 22 November 2016.
Mayer, J., Mutchler, P., & Mitchell, J. C. (2016). Evaluating the privacy properties of telephone
metadata. Proceedings of the National Academy of Sciences, USA, 113(20), 5536–5541.
NIST. (2010). Guide to protecting the confidentiality of personally identifiable information (PII).
NIST special publication 800–122. Gaithersburg, MD: Computer Science Division.
NIST (2015). De-identification of Personal Information. NISTR 8053. This publication is available
free of charge from: http://dx.doi.org/10.6028/NIST.IR.8053. Accessed 19 November 2016.
Official Journal of the European Union (2016). Regulation (EU) 2016/679 Of The European Parliament and of the Council. http://ec.europa.eu/justice/data-protection/reform/files/
regulation_oj_en.pdf. Accessed 19 November 2016.
Quattrociocchi, W. Scala, A., & Sunstein, C.R. (2016) Echo Chambers on Facebook. Preliminary
draft, not yet published. Available at: http://ssrn.com/abstract=2795110. Accessed 19 November 2016.
Ritter, D. (2014). When to Act on a correlation, and when Not To. Harvard Business Review. https://
hbr.org/2014/03/when-to-act-on-a-correlation-and-when-not-to. Accessed 19 November 2016.
Singer, N. (2011). The trouble with the echo chamber online. New York Times online. http://
www.nytimes.com/2011/05/29/technology/29stream.html?_r=0. Accessed 19 November 2016.
Solove, D. J. (2007). ‘I’ve got nothing to hide’ and other misunderstandings of privacy. San Diego
Law Review, 44, 745.
Solove, D. (2014). Why did inBloom die? A hard lesson about education privacy. Privacy +
Security Blog. TeachPrivacy. https://www.teachprivacy.com/inbloomdie-hard-lesson-education-privacy/. Accessed 4 August 2016.
Sweeney, L. (2013) Discrimination in online ad delivery. http://dataprivacylab.org/projects/
onlineads/1071-1.pdf. Accessed 22 November 2016.
Tene, O., & Polonetsky, J. (2013). Big data for all: Privacy and user control in the age of analytics.
New Journal of Technology and Intellectual Property, 11(5), 239–272.
Thaler, J., Ullman, J., & Vadhan, S. (2010). PCPs and the hardness of generating synthetic data.
Electronic Colloquium on Computational Complexity, Technical Report, TR10–TR07.
TRUSTe/NCSA (2016). Consumer privacy infographic—US Edition. https://www.truste.com/
resources/privacy-research/ncsa-consumer-privacy-index-us/. Accessed 4 November 2016.
Turow, J., Feldman, L, & Meltzer, K. (2015). Open to exploitation: american shoppers
online and offline. A report from the Annenberg Public Policy Center of the University
of Pennsylvania. http://www.annenbergpublicpolicycenter.org/open-to-exploitation-americanshoppers-online-and-offline/. Accessed 22 November 2016.
United Nations General Assembly (2016). Resolution adopted by the General Assembly. The right
to privacy in the digital age (68/167). http://www.un.org/ga/search/view_doc.asp?symbol=A/
RES/68/167. Accessed 4 November 2016.
Zhang, Y., Chen, Q., & Zhong, S. (2016). Privacy-preserving data aggregation in mobile phone
sensing. IEEE Transactions on Information Forensics and Security, 11, 980–992.

Chapter 3

Privacy Preserving Federated Big Data Analysis
Wenrui Dai, Shuang Wang, Hongkai Xiong, and Xiaoqian Jiang

3.1 Introduction
With the introduction of electronic health records (EHRs), massive patient data have
been involved in biomedical research to study the impact of various factors on
disease and mortality. Large clinical data networks have been developed to facilitate
analysis and improve treatment of diseases by collecting healthcare data from
a variety of organizations, including healthcare providers, government agencies,
research institutions and insurance companies. The National Patient-Centered Clinical Research Network, PCORnet (n.d.), facilitates clinical effectiveness research
to provide decision support for prevention, diagnosis and treatment with the data
gathered nationwide. PopMedNet (n.d.) enables the distributed analyses of EHR
held by different organizations without requiring a central repository to collect
data. HMORNnet (Brown et al. 2012) combines PopMedNet platform to provide a
shared infrastructure for distributed querying to allow data sharing between multiple
HMO Research Network projects. Integrating PopMedNet, ESPnet achieves disease
surveillance by collecting and analyzing EHRs owned by different organizations in
a distributed fashion.

W. Dai ()
Department of Biomedical Informatics, University of California San Diego, La Jolla,
CA 92093, USA
Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
e-mail: wed004@ucsd.edu
S. Wang • X. Jiang
Department of Biomedical Informatics, University of California San Diego, La Jolla,
CA 92093, USA
e-mail: shw070@ucsd.edu; x1jiang@ucsd.edu
H. Xiong
Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
e-mail: xionghongkai@sjtu.edu.cn
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_3
Although data sharing can benefit both biomedical discovery and public health,
it would also pose risks for disclosure of sensitive information and consequent
breach of individual privacy. Leakage of demographic, diagnostic, phenotypic and
genotypic information would lead to unexpected implications like discrimination by
employers and health insurance companies. To protect the individually identifiable
health information, Health Insurance Portability and Accountability Act (HIPAA)
(n.d.) was enacted in the United States, in which the security and privacy of protected health information (PHI) are guaranteed under the standards and regulations
specified by the HIPAA Privacy Rule. It defines two methods, Expert Determination and
Safe Harbor (Lafky 2010), to meet the de-identification standard. In practice,
the Safe Harbor method is widely adopted, where specific information should be
removed and suppressed according to a predefined checklist. However, these de-identification
privacy protection for healthcare data, as argued by McGraw (2008). Taking
advantage of publicly available background information about an individual, it is
possible to infer sensitive information like predisposition to disease and surnames
from de-identified data. Homer et al. (2008) utilized aggregated allele frequencies
in genome-wide association studies (GWAS) to re-identify individual patients in
a case group based on the reference population from the International HapMap
Project. Wang et al. (2009) extended Homer’s attack with two models to identify
patients from a smaller subset of published statistics or under limited precision
and availability of statistics. Sweeney et al. (2013) showed that most (84–97%)
patients could be exactly identified by linking their profiles in the Personal Genome
Project (PGP) with publicly available records like voter lists. Gymrek et al. (2013)
inferred the surnames from personal genome data sets by profiling Y-chromosome
haplotypes based on recreational genetic genealogy databases, which are public and
online accessible. For Healthcare Cost and Utilization Project (HCUP), Vaidya et al.
(2013) demonstrated the vulnerability of its querying system by making query inference attacks to infer patient-level information based on multiple correlated queries.
Privacy concerns have presented a challenge to efficient collaborative prediction
and analysis for biomedical research that needs data sharing. Patients would be
unwilling to provide their data to research projects or participate in treatments under
insufficient privacy protection. For data custodians, data utility might be degraded
to lower the potential privacy risk, as they are responsible for the security and
confidentiality of their data. Due to institutional policies and legislation, it is not
viable to explicitly transfer patient-level data to a centralized repository or share
them among various institutions in many scenarios. For example, without specific
institutional approval, the U.S. Department of Veterans Affairs requires all patient
data to remain in its server. Naive procedure for exchanging patient-level data
would also be restricted in international cross-institutional collaboration. The Data
Protection Act (1998) in the UK, in line with the EU Data Protection Directive,
prohibits transferring clinical data outside the European Economic Area, unless the
protection for data security is sufficiently guaranteed. Therefore, it is necessary to
develop models and algorithms for cross-institutional collaboration with sufficient
protection of patient privacy and full compliance with institutional policies.
Federated data analysis has been developed as an alternative for cross-institutional
collaboration, in which aggregated statistics are exchanged instead of patient-level
data. It enables a variety of privacy-preserving distributed algorithms
that facilitate computation and analysis with a guarantee of prediction accuracy
and protection of privacy and security of patient-level data. These algorithms are
commonly designed to perform regression, classification and evaluation over data
with different types of partition, i.e. horizontally partitioned data (Kantarcioglu
2008) and vertically partitioned data (Vaidya 2008), respectively. Horizontally
partitioned data are composed of data of different patients with the same clinical
variables. Thus, multiple institutions share the same types of attributes for all
the patients in the federated database. Horizontally partitioned data would be
suitable for collaborative analysis and computation over patients from organizations
in different geographical areas, especially for studies of diseases that require a
large number of examples (patient-level data). On the other hand, for vertically
partitioned data, each institution owns a portion of clinical variables for the same
patients. Attributes from all the institutions are collected and aligned as a federated
database for computation and analysis. In fact, vertically partitioned data would
facilitate collaboration among organizations owning different types of patient-level data. For example, the PCORnet clinical data research network (CDRN),
pSCANNER (Ohno-Machado et al. 2014), allows distributed analysis over data of
31 million patients that are vertically partitioned across Centers for Medicare and
Medicaid Services, Department of Veteran Affairs, insurance companies, and health
systems. The studies of diseases can be jointly performed based on the diagnostic
information from medical centers, demographic and financial data from insurance
companies and genomic data from laboratories. Figure 3.1 provides an illustrative
example for horizontally and vertically partitioned data, respectively.
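The two partition schemes of Fig. 3.1 can be illustrated with a small table of records; the column names and values are made up for the example:

```python
# Each record: patient id plus clinical covariates.
records = [
    {"id": 1, "age": 54, "bp": 130, "glucose": 6.1},
    {"id": 2, "age": 61, "bp": 145, "glucose": 7.3},
    {"id": 3, "age": 47, "bp": 118, "glucose": 5.4},
    {"id": 4, "age": 70, "bp": 150, "glucose": 8.0},
]

# Horizontal partition: institutions hold different patients, same covariates.
site_a = records[:2]
site_b = records[2:]

# Vertical partition: institutions hold different covariates of the same patients.
hospital = [{"id": r["id"], "age": r["age"], "bp": r["bp"]} for r in records]
laboratory = [{"id": r["id"], "glucose": r["glucose"]} for r in records]
```

In the horizontal case the sites' rows are concatenated; in the vertical case the sites' columns are aligned by patient id to form the federated database.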
In this chapter, we review the privacy-preserving federated data analysis algorithms for large-scale distributed data, especially biomedical data. To collaborate
on distributed data analysis in a privacy-preserving fashion, institutions might not
be able to explicitly share their patient-level data. Privacy-preserving federated
data analysis algorithms aim to establish global models for analysis and prediction
based on non-sensitive local statistics, e.g., intermediary results for Hessian matrix
and kernel matrix, instead of explicitly transferring sensitive patient-level data to
a central repository. Over the past few decades, a series of federated analysis
algorithms have been developed for regression, classification and evaluation of
distributed data with a protection of data privacy and security. Enlightened by the
principle of sharing model without sharing data, federated data modeling techniques
have been proposed to securely derive global model parameters and perform
statistical tests over horizontally and vertically partitioned data. In comparison
to centralized realizations, federated modeling techniques achieved equivalent
accuracy in model parameter estimation and statistical tests with no exchange of
patient-level data. Server/client and decentralized architectures have been developed
to realize federated data analysis in a distributed and privacy-preserving fashion.
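The principle of sharing the model without sharing the data can be sketched for a toy least-squares model under the server/client architecture: each institution computes a gradient over its own rows and sends only that aggregate to the server. The data layout, learning rate, and function names below are illustrative, not from the chapter:

```python
def local_gradient(rows, beta):
    """Gradient of squared error at one site; only this aggregate leaves the site."""
    g = [0.0] * len(beta)
    for x, y in rows:
        err = sum(b * xi for b, xi in zip(beta, x)) - y
        for j, xi in enumerate(x):
            g[j] += 2 * err * xi
    return g

def federated_step(sites, beta, lr=0.01):
    """Server sums the sites' local gradients and updates the global model."""
    total = [0.0] * len(beta)
    for rows in sites:
        for j, gj in enumerate(local_gradient(rows, beta)):
            total[j] += gj
    n = sum(len(rows) for rows in sites)
    return [b - lr * g / n for b, g in zip(beta, total)]
```

Because the summed gradient equals the gradient over the pooled data, the global model converges to the same parameters a centralized fit would produce, with no patient-level rows exchanged.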



Fig. 3.1 Illustrative examples for horizontally and vertically partitioned data in federated data
analysis. N records with M covariates are distributed across K institutions. (a) Horizontally
partitioned data. The K institutions have different patients sharing the same type of covariates;
(b) Vertically partitioned data. Covariates from the same patients are distributed across all the K institutions.

For regression tasks, distributed optimization was efficiently achieved using the
Newton-Raphson method. To further improve distributed optimization in federated
models, alternating direction method of multipliers (ADMM) was integrated to
formulate decomposable minimization problems with additional auxiliary variables.
Inheriting the convergence properties of methods of Lagrangian multipliers, it
is robust for a variety of distributed analyses under horizontal and vertical data
partitioning. Recognizing that communication between the server and clients was
not protected, secure multiparty computation (SMC) protocols were adopted for
stronger data security and privacy. Secure protocols were widely considered for
distributed analysis like regression, classification and evaluation. The intermediary
results from multiple institutions were aggregated with secure summation, product
and reordering to support the server/client or decentralized architecture. To handle
real-world network conditions, we discuss asynchronous optimization under the
server/client and decentralized architectures. To support privacy-preserving data
sharing and analysis, this paper summarizes the relevant literature, presents
state-of-the-art algorithms and applications, and discusses promising improvements
and extensions along current trends.
The rest of the paper is organized as follows. Section 3.2 overviews the
architecture and optimization for federated modeling analysis over horizontally and
vertically partitioned data. In Sect. 3.3, we review the applications in regression
and classification based on the Newton-Raphson method and ADMM framework.
Section 3.4 integrates secure multiparty computation protocols with federated
data analysis to protect intermediary results in distributed analysis and
computation. In Sect. 3.5, we present asynchronous optimization for the general
fixed-point problem and for specific coordinate gradient descent and ADMM-based
methods. Finally, Sect. 3.6 presents the discussion and conclusion.

3.2 Federated Data Analysis: Architecture and Optimization
In this section, we overview the architectures and optimization methods for federated data analysis. Server/client and decentralized architectures are well established
in privacy-preserving analysis. Under these architectures, the Newton-Raphson
method and alternating direction method of multipliers (ADMM) framework are
leveraged for distributed computation.

3.2.1 Architecture

Server/Client Architecture

In the federated models, the server/client architecture has been established to
estimate global model parameters and perform statistical tests over horizontally and
vertically partitioned data, as shown in Fig. 3.2. Under this architecture, federated
data modeling shares models rather than patient-level information. The server
iteratively optimizes the global model parameters based on aggregated intermediary
results that are decomposable over the clients. Each client can utilize its local data
to separately calculate corresponding intermediary results. Subsequently, instead of
sharing the sensitive patient-level data, these intermediary results are exchanged for
secure computation and analysis. Taking maximum likelihood estimation (MLE)
of binary logistic regression for example, each institution calculates and exchanges



Fig. 3.2 Server/client architecture for federated data analysis. Each institution only exchanges
intermediary results for the estimation of global model parameters and statistical tests. Each
institution does not communicate with the others to prevent unexpected information leakage

the partial Hessian matrix derived from its local records for horizontally partitioned
data and partial kernel matrix of its local attributes for vertically partitioned data.
Thus, federated models only exchanged aggregated intermediary results rather
than collecting raw data in a central repository for parameter estimation.
Moreover, clients will not collude to infer the raw data, as each client separately
performs the computation. These facts imply that distributed optimization can be
performed in a privacy-preserving fashion, as the raw data cannot easily be recovered
from the aggregated intermediary results. It should be noted that the accuracy
of parameter estimation could be guaranteed in federated models, as aggregated
intermediary results do not lose any information in comparison to the centralized
methods. Furthermore, the security of federated models can be further improved
by integrating secure protocols and encryption methods to protect the intermediary
results exchanged between the server and clients.
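This decomposability can be checked directly: for horizontally partitioned data, the per-site Hessian contributions of binary logistic regression sum exactly to the centralized Hessian. A minimal numpy sketch, using synthetic data and a hypothetical three-site split:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(90, 4))          # 90 patient records, 4 covariates
x = rng.normal(size=4)                # current model parameters
p = 1.0 / (1.0 + np.exp(-A @ x))      # logistic predictions
W = np.diag(p * (1 - p))

H_central = A.T @ W @ A               # centralized Hessian

# Horizontally partition the rows across 3 hypothetical institutions.
H_federated = np.zeros((4, 4))
for Ak in np.split(A, 3):
    pk = 1.0 / (1.0 + np.exp(-Ak @ x))
    Wk = np.diag(pk * (1 - pk))
    H_federated += Ak.T @ Wk @ Ak     # each site shares only this 4x4 matrix

assert np.allclose(H_central, H_federated)
```

No information is lost in the aggregation, which is why federated parameter estimates match the centralized ones.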
For logistic regression and multinomial regression models, federated models can
also support distributed statistical tests over horizontally partitioned data, including
goodness-of-fit test and AUC score estimation (Chambless and Diao 2006). Besides
model parameters, variance-covariance matrix could be similarly obtained by aggregating the decomposable intermediary results. Using global model parameters and
variance-covariance matrix, federated models were able to estimate the statistics of
logistic and multinomial regression, including confidence intervals (CIs), standard
error, Z-test statistics and p-values. Furthermore, goodness-of-fit test and AUC score
estimation can be achieved in a distributed manner. The Hosmer and Lemeshow
(H-L) test (Hosmer et al. 2013) is considered to check model fitness, where each
institution only shares its number of records with positive patient outcomes per



Fig. 3.3 Decentralized architecture for federated data analysis. To estimate global model parameters and perform statistical tests, each institution exchanges intermediary results with its
neighboring institutions and updates its local model with received aggregated results

decile. Thus, patient-level information and estimated outcomes will not be exchanged
for privacy-preserving consideration. For computation of AUC score, raw data and
estimated outcomes of patients can be protected with peer-to-peer communication.
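To illustrate the distributed H-L computation, here is a minimal sketch in which three hypothetical institutions share only decile-level summaries (record counts, observed positives, and summed predicted risks). The numbers are made up, only three deciles are shown for brevity, and global risk deciles are assumed to be pre-aligned across sites:

```python
import numpy as np

# Hypothetical per-decile summaries from 3 institutions: each row is one
# decile with (n records, observed positives, sum of predicted risks).
site_summaries = [
    np.array([[30, 4, 3.5], [30, 9, 8.2], [30, 16, 15.1]]),
    np.array([[25, 3, 3.9], [25, 8, 7.6], [25, 14, 13.8]]),
    np.array([[20, 2, 2.4], [20, 6, 6.3], [20, 11, 10.9]]),
]

# The server only sees aggregated decile-level counts, never patient data.
agg = sum(site_summaries)
n, obs, exp = agg[:, 0], agg[:, 1], agg[:, 2]

# Hosmer-Lemeshow statistic over the pooled deciles.
hl = np.sum((obs - exp) ** 2 / (exp * (1 - exp / n)))
print(round(hl, 3))
```

The statistic depends only on the pooled decile counts, so it equals the centralized H-L value while each site discloses nothing at the patient level.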

Decentralized Architecture

Figure 3.3 illustrates the decentralized architecture for federated analysis over
horizontally and vertically partitioned data. Contrary to server/client architectures, decentralized architectures do not require a central node (server) to collect
aggregated intermediary results from all the institutions and make global model
parameter estimation and statistical tests. Each institution only communicates with
its neighbors to exchange messages, e.g. institutions with linked health records.
To prevent leakage of patient-level information, institutions would exchange intermediary results rather than raw data. At each iteration, each institution derives its
intermediary results from local data and the aggregated results from its neighboring institutions. Taking global consensus optimization under ADMM framework
(Boyd et al. 2011) for example, local model parameters are exchanged among
neighboring institutions for global consensus in applications like sparse linear
regression (Mateos et al. 2010), principal component analysis (PCA) (Schizas and
Aduroja 2015) and support vector machine (SVM) (Forero and Giannakis 2010).
Under such architecture, distributed optimization can be performed in a privacy-preserving manner, as patient-level information would never be exchanged. It is
worth mentioning that communication cost would be reduced in the decentralized
architecture, as messages are only exchanged among neighboring institutions.
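The neighbor-only message pattern can be illustrated with a simple consensus-averaging sketch (a hypothetical four-institution ring; repeated local averaging drives every node to the global mean without any central server):

```python
import numpy as np

# Hypothetical local model parameters at 4 institutions on a ring network.
params = np.array([1.0, 3.0, 5.0, 7.0])
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

# Each round, every institution averages its value with its neighbors'
# values: only neighbor-to-neighbor messages, no central server.
for _ in range(100):
    params = np.array([
        np.mean([params[i]] + [params[j] for j in neighbors[i]])
        for i in range(4)
    ])

print(np.round(params, 3))  # all institutions converge to the global mean 4.0
```

The same exchange pattern underlies decentralized ADMM: local models are shared only with neighbors, yet all institutions agree on a global quantity.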



3.2.2 Distributed Optimization

The Newton-Raphson Method

For model parameter estimation, the Newton-Raphson method can be extended
to perform distributed optimization of log-likelihood functions over multiple
institutions. It is a powerful technique for finding numerical solutions to nonlinear
algebraic equations with successive linear approximations. Given a twice-differentiable
objective function f(x), the Newton-Raphson method iteratively constructs a
sequence of positions towards the stationary point x with gradient-like optimization.
For vector inputs x, the gradient \nabla f and Hessian matrix H(f) are computed as the
first and second partial derivatives of f, respectively. At the t-th step, x is updated by
maximizing the log-likelihood function f:

x^{(t+1)} = x^{(t)} - [H(f(x^{(t)}))]^{-1} \nabla f(x^{(t)})


In federated data modeling, to enable distributed optimization, the first and
second partial derivatives are required to be decomposable over multiple institutions.
Thus, the gradient and Hessian matrix can be derived from the aggregated
intermediary results separately obtained from all the institutions. The intermediary
results vary with the task and the data partitioning. For example,
each institution holds a portion of records A_k for horizontally partitioned data.
Consequently, the intermediary results exchanged in binary logistic regression
are A_k^T \Lambda_k A_k with \Lambda_k = diag(\pi(A_k, x_k)(1 - \pi(A_k, x_k))) related to the logit function,
while they tend to depend on the set of records at risk for the Cox regression model.
For vertically distributed logistic regression, the Legendre transform is required for
distributed dual optimization, where the kernel matrix A_k A_k^T is separately calculated
based on the portion of attributes A_k held by the k-th institution.
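A minimal sketch of this scheme for horizontally partitioned binary logistic regression (synthetic data, three hypothetical institutions; the server sees only each site's gradient and Hessian contributions, never the records):

```python
import numpy as np

rng = np.random.default_rng(1)
true_x = np.array([0.8, -1.2, 0.5])
A = rng.normal(size=(300, 3))
y = (rng.random(300) < 1 / (1 + np.exp(-A @ true_x))).astype(float)

# Horizontally partition records across 3 hypothetical institutions.
sites = list(zip(np.split(A, 3), np.split(y, 3)))

def local_terms(Ak, yk, x):
    """Each institution computes its gradient and Hessian contribution."""
    pk = 1 / (1 + np.exp(-Ak @ x))
    grad = Ak.T @ (yk - pk)
    hess = -(Ak * (pk * (1 - pk))[:, None]).T @ Ak
    return grad, hess

x = np.zeros(3)
for _ in range(25):  # server-side Newton-Raphson iterations
    grads, hesses = zip(*(local_terms(Ak, yk, x) for Ak, yk in sites))
    # Only these aggregated intermediary results reach the server.
    x = x - np.linalg.solve(sum(hesses), sum(grads))

print(np.round(x, 2))
```

Because the aggregated gradient and Hessian are identical to their centralized counterparts, the federated iterates match a centralized Newton-Raphson run step for step.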
The Newton-Raphson method can achieve faster convergence towards a local
optimum in comparison to gradient descent, when f is a valid objective function.
Meanwhile, distributed optimization with the Newton-Raphson method can
achieve equivalent accuracy in model parameter estimation, when compared to its
centralized realization. However, it requires separable first and second partial derivatives
over multiple institutions, which makes the Newton-Raphson method restrictive
in some distributed optimization scenarios, e.g. distributed Cox regression. Next,
we introduce the alternating direction method of multipliers (ADMM) as a more
ubiquitous distributed optimization framework.

Alternating Direction Method of Multipliers

Alternating direction method of multipliers (ADMM) (Boyd et al. 2011) is a variant
of the augmented Lagrangian scheme that supports decomposable dual ascent
solution for the method of Lagrangian multipliers. It develops a decomposition-coordination procedure to decompose the large-scale optimization problem into a
set of small local subproblems. Inheriting the convergence properties of the method



of Lagrangian multipliers, ADMM is able to obtain the globally optimal solution.
In comparison to general equality-constrained minimization, ADMM introduces
auxiliary variables to split the objective function for decomposability. To be
concrete, ADMM solves the minimization problem of the summation of two convex
functions f(x) and g(z) under the constraint Ax + Bz = C. Thus, the augmented
Lagrangian function for the optimal solution is formulated with dual variable \lambda:

min L(x, z, \lambda) = f(x) + g(z) + \lambda^T (Ax + Bz - C) + (\rho/2) \|Ax + Bz - C\|_2^2

where \rho is the positive Lagrangian parameter. The augmented Lagrangian function
can be iteratively solved with three steps that partially update the primal variables
x, the auxiliary variables z and the dual variables \lambda. In the t-th iteration, these three steps are
conducted in a sequential manner:
1. x-minimization: partially update x by minimizing the augmented Lagrangian
function L(x, z, \lambda) with fixed z and dual variables \lambda, or x^{(t+1)} = argmin_x L(x, z^{(t)}, \lambda^{(t)});
2. z-minimization: partially update z by minimizing the augmented Lagrangian
function L(x, z, \lambda) with updated x and fixed \lambda, or z^{(t+1)} = argmin_z L(x^{(t+1)}, z, \lambda^{(t)});
3. Dual update: update the dual variables \lambda with updated x and z, or
\lambda^{(t+1)} = \lambda^{(t)} + \rho (Ax^{(t+1)} + Bz^{(t+1)} - C).
Here, the dual variables \lambda are updated with step size \rho in each iteration.
According to Steps 1–3, x and z are alternately updated in each iteration based on
f(x) and g(z). This fact implies that the minimization problem over x and z can be
separately solved for distributed data, when f(x) or g(z) is decomposable. Since
x and z can be derived from each other based on the dual variables \lambda, they can
be exchanged among multiple institutions for distributed optimization. Therefore,
ADMM is desirable for privacy-preserving federated data analysis, where each
institution can solve the decomposable optimization problem based on local data
and submit intermediary results for a global solution.
Under the ADMM framework, the two generic optimization problems, consensus
and sharing, are formulated for distributed analysis over horizontally and vertically
partitioned data, respectively. For horizontally partitioned data, the consensus
problem splits the primal variables x and separately optimizes the decomposable
cost function f(x) for all the institutions under the global consensus constraints.
Considering that the submatrix A_k \in R^{N_k \times M} of A \in R^{N \times M} corresponds to the local data
held by the k-th institution, the primal variables x_k \in R^{M \times 1} for the K institutions are
solved by

min \sum_{k=1}^{K} f_k(A_k x_k) + g(z)

s.t. x_k - z = 0, k = 1, ..., K.




Here, f_k(A_k x_k) is the cost function for the k-th institution and g(z) commonly
represents the regularization term for the optimization problem. According to the
derived augmented Lagrangian function, x_k is independently solved based on the local
data A_k and the corresponding dual variables \lambda_k, while z is updated by averaging x_k and
\lambda_k. Thus, the global model can be established by optimizing over the K institutions
under the global consensus constraints.
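A runnable sketch of the consensus formulation for a distributed Lasso over horizontally partitioned data (synthetic data, three hypothetical sites; f_k is a local least-squares term and g is the l1 regularizer, with illustrative values for the regularization weight and the ADMM parameter rho):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(120, 5))
b = A @ np.array([1.5, 0.0, -2.0, 0.0, 0.7]) + 0.05 * rng.normal(size=120)
sites = list(zip(np.split(A, 3), np.split(b, 3)))  # horizontal partition

lam, rho, K = 0.5, 1.0, 3
z = np.zeros(5)
xs = [np.zeros(5) for _ in range(K)]
us = [np.zeros(5) for _ in range(K)]

for _ in range(200):
    # x-step: each institution solves its local least-squares subproblem.
    xs = [np.linalg.solve(Ak.T @ Ak + rho * np.eye(5),
                          Ak.T @ bk + rho * (z - uk))
          for (Ak, bk), uk in zip(sites, us)]
    # z-step: soft-thresholding of the averaged (x_k + u_k), the l1 prox.
    v = np.mean([xk + uk for xk, uk in zip(xs, us)], axis=0)
    z = np.sign(v) * np.maximum(np.abs(v) - lam / (rho * K), 0.0)
    # dual step: each institution updates its own dual variable.
    us = [uk + xk - z for xk, uk in zip(xs, us)]

print(np.round(z, 2))
```

Only the local parameter vectors x_k and dual variables cross institutional boundaries; the raw records A_k, b_k never leave their sites.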
The sharing problem is considered for vertically partitioned data, where A and
x are vertically split into A_k \in R^{N \times M_k} and x_k \in R^{M_k \times 1} for the K institutions. Auxiliary
variables z_k \in R^{N \times 1} are introduced for the k-th institution based on A_k and x_k. In such
a case, the sharing problem is formulated based on the decomposable cost function
f_k(x_k):

min \sum_{k=1}^{K} f_k(x_k) + g(\sum_{k=1}^{K} z_k)

s.t. A_k x_k - z_k = 0, k = 1, ..., K.

Under the ADMM framework, x_k and its dual variables \lambda_k can be separately
solved, while z_k is derived from the aggregated results of A_k x_k and \lambda_k. This fact
implies that each institution can locally optimize the decomposable cost function
using its own portion of attributes and adjust the model parameters according to
auxiliary variables derived from the global optimization problem. It is worth mentioning
that the global consensus constraints implicitly exist for the dual variables in the
sharing problem.
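The sharing formulation can be sketched for a ridge-penalized linear model over vertically partitioned covariates (synthetic data, two hypothetical institutions; f_k is a local l2 penalty, g is the squared fitting loss, and the updates follow the standard simplified sharing-ADMM iteration, which is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 60, 2
A = rng.normal(size=(N, 4))
b = A @ np.array([2.0, -1.0, 0.5, 1.0]) + 0.1 * rng.normal(size=N)
A_parts = np.split(A, K, axis=1)   # vertical partition: 2 covariates per site

lam, rho = 1.0, 50.0
xs = [np.zeros(2) for _ in range(K)]
z_bar = np.zeros(N)
u = np.zeros(N)

for _ in range(500):
    Ax_bar = sum(Ak @ xk for Ak, xk in zip(A_parts, xs)) / K
    # x-step: each institution updates its own block of coefficients locally.
    xs = [np.linalg.solve(lam * np.eye(2) + rho * Ak.T @ Ak,
                          rho * Ak.T @ (Ak @ xk + z_bar - Ax_bar - u))
          for Ak, xk in zip(A_parts, xs)]
    Ax_bar = sum(Ak @ xk for Ak, xk in zip(A_parts, xs)) / K
    # z-step: only the averaged predictions Ax_bar are needed centrally.
    z_bar = (b + rho * (u + Ax_bar)) / (K + rho)
    u = u + Ax_bar - z_bar

x = np.concatenate(xs)
print(np.round(x, 2))
```

Each institution shares only its partial predictions A_k x_k, matching the sharing problem above where g acts on the sum of the auxiliary variables.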

3.3 Federated Data Analysis Applications
In this section, we review federated data analysis models for regression and
classification based on the Newton-Raphson method and ADMM framework.

3.3.1 Applications Based on the Newton-Raphson Method
The Newton-Raphson method is widely adopted in federated data analysis for
generalized linear models, e.g. logistic regression, multinomial regression, and
Cox proportional hazard model, which are widely used in biomedicine. Table 3.1
summarizes the existing federated modeling techniques for distributed data with
their application scenarios, data partitioning, mechanisms for model parameter
estimation, statistical tests and communication protection.
An early federated data analysis paper in biomedical informatics introduced
the Grid binary LOgistic REgression (GLORE) framework (Wu et al. 2012). In
this work, a binary logistic regression model was developed for model parameter
estimation and statistical tests over data horizontally distributed across multiple
institutions in a privacy-preserving manner. For security and confidentiality, the
proposed model shared models rather than patient-level information. Besides model
parameter estimation, distributed algorithms were developed for H-L test and AUC

Table 3.1 Federated modeling techniques for distributed data based on the Newton-Raphson method

Method                                 | Application scenario           | Data partitioning | Parameter estimation | Statistical test    | Communication protection
GLORE (Wu et al. 2012)                 | Logistic regression            | Horizontal        | Yes                  | H-L test, AUC score | -
IPDLR (Wu et al. 2012)                 | Logistic regression            | Horizontal        | Yes                  | ROC/AUC             | Secure summation protocol
EXPLORER (Wang et al. 2013)            | Logistic regression            | Horizontal        | Yes                  | -                   | SINE protocol
SMAC-GLORE (Shi et al. 2016)           | Logistic regression            | Horizontal        | Yes                  | -                   | Garbled circuits
VERTIGO (Li et al. 2016)               | Logistic regression            | Vertical          | Yes                  | -                   | -
Multi-category GLORE (Wu et al. 2015)  | Ordinal/multinomial regression | Horizontal        | Yes                  | H-L test, AUC score | -
HPPCox (Yu et al. 2008)                | Cox regression                 | Horizontal        | Yes                  | -                   | -
WebDISCO (Lu et al. 2014)              | Cox regression                 | Horizontal        | Yes                  | -                   | -




score estimation in GLORE. It was shown to achieve equivalent model parameter
estimation and statistical tests over simulated and clinical datasets in comparison to
centralized methods. WebGLORE (Jiang et al. 2013) provided a free web service
to implement the privacy-preserving architecture for GLORE, where AJAX, JAVA
Applet/Servlet and PHP technologies were seamlessly integrated for secure and
easy-to-use web service. Consequently, it would benefit biomedical researchers
to deploy the practical collaborative software framework in real-world clinical
settings.
Inspired by GLORE, a series of extensions and improvements have been made
for various regression tasks with different privacy concerns. Despite shedding light
on federated data analysis, GLORE still suffered from two main limitations: privacy
protection of intermediary results and synchronization for iterative distributed optimization. Wu et al. (2012) considered the institutional privacy for GLORE. During
iterative optimization, sensitive information of an institution would be leaked, as
its contribution to each matrix of coefficients is known to the server. Therefore,
institutional privacy-preserving distributed binary logistic regression (IPDLR) was
developed to enhance the institutional privacy in GLORE by masking the ownership
of the intermediary results exchanged between the server and institutions. To make
all the institutions remain anonymous, a secure summation procedure was developed
to integrate all the intermediary results without identifying their ownership. At each
iteration, client-to-client communication was conducted to merge the intermediary
results on a client basis based on the random matrix assigned by the server. Thus,
the server would obtain the aggregated results without knowing the contribution
of each institution. The secure summation procedure was also employed in ROC
curve plotting to securely integrate local contingency tables derived in the institutions. Wang et al. (2013) proposed a Bayesian extension for GLORE, namely
EXpectation Propagation LOgistic REgRession (EXPLORER) model, to achieve
distributed privacy-preserving online learning. In comparison to frequentist logistic
regression model, EXPLORER made maximum a posteriori (MAP) estimation
using expectation propagation along the derived factor graph. The model parameters
were iteratively updated based on partial posterior function w.r.t. the records
held by each institution (intra-site update) and the messages passing among the
server and institutions (inter-site update). As a result, EXPLORER improved the
security and flexibility for distributed model learning with similar discrimination
and model fit performance. To reduce the information leakage from unprotected
intermediary results, EXPLORER exchanged the encrypted posterior distribution
of coefficients rather than the intermediary results for model parameter estimation.
The sensitive information about individual patient would not be disclosed, as only
statistics like mean vector and covariance matrix were shared to represent the
aggregated information of the raw data. Moreover, secured intermediate information
exchange (SINE) protocol was adopted to further protect aggregation information.
To guarantee flexibility, EXPLORER leveraged online learning to update the model
based on the newly added records. It also supported asynchronous communication
to avoid coordinating multiple institutions, so that it would be robust under
the emergence of offline institution and interrupted communication. Shi et al.
(2016) developed a grid logistic regression framework based on secure multiparty



computation. In addition to raw data, the proposed SMAC-GLORE protected the
decomposable intermediary results based on garbled circuits during iterative model
learning. Secure matrix multiplication and summation protocols were presented for
maximum likelihood estimation using fixed-Hessian methods. For MLE, Hessian
matrix inversion problem was securely transferred to a recursive procedure of matrix
multiplications and summations using the Strassen algorithm, while the exponential
function was approximated with Taylor series expansion.
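The secure summation idea underlying IPDLR-style aggregation can be sketched as a ring protocol (hypothetical 3x3 intermediary matrices at four clients; the server-assigned random mask hides each individual contribution and is removed only at the end):

```python
import numpy as np

rng = np.random.default_rng(5)
local_results = [rng.normal(size=(3, 3)) for _ in range(4)]  # per-site matrices

# Server assigns a random mask matrix; clients pass the running sum along a
# ring, each adding its local intermediary result to the masked total.
mask = rng.normal(size=(3, 3))
running = mask.copy()
for contribution in local_results:      # client-to-client hand-off
    running = running + contribution

# The server removes the mask; it learns only the aggregate, and no client
# ever sees another client's unmasked contribution.
aggregate = running - mask

assert np.allclose(aggregate, sum(local_results))
```

The aggregate is numerically identical to the unmasked sum, so parameter estimation proceeds exactly as before while the ownership of each contribution stays hidden.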
Wu et al. (2015) extended GLORE to address multi-centric modeling of multi-category response, where grid multi-category response models were developed
for ordinal and multinomial logistic regression over horizontally partitioned data.
Grid Newton method was proposed to make maximum likelihood estimation of
model parameters in a privacy-preserving fashion. At each iteration, each institution
separately calculated partial gradients and Hessian matrix based on its own data
and the server could integrate these intermediary results to derive the global model
parameters. Thus, the proposed models could reduce disclosure risk, as patient-level
data would not be moved outside the institutions. Furthermore, privacy-preserving
distributed algorithms were presented for grid model fit assessment and AUC score
computation in ordinal and multinomial logistic regression models by extending
the corresponding algorithms for binary response models. The proposed models
were demonstrated to achieve the same accuracy with a guarantee of data privacy in
comparison to the corresponding centralized models.
Recently, Li et al. (2016) proposed a novel method that leveraged dual optimization to solve binary logistic regression over vertically partitioned data. The
proposed vertical grid logistic regression (VERTIGO) derived the global solution
with aggregated intermediate results rather than the sensitive patient-level data.
In the server/client architecture, the server iteratively solved the dual problem of
binary logistic regression using the Newton-Raphson method. To compute the
Hessian matrix, each institution transmitted the kernel matrix of its local statistics
to the server for merging. A dot-product kernel matrix was adopted to guarantee
that the global Gram matrix was decomposable over multiple institutions. For
iterative optimization, it was only required to exchange the dot products of patient
records and dual parameters between the server and institutions. This fact implies
that the patient-level information would not be revealed, when the distribution of
covariates is not highly unbalanced. Employed on both synthetic and real datasets,
VERTIGO was shown to achieve equivalent accuracy for binary logistic regression
in comparison to its centralized counterpart.
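The decomposability that VERTIGO relies on is easy to verify: for a vertical (column-wise) partition, the per-site kernel matrices A_k A_k^T sum exactly to the global Gram matrix. A minimal numpy check with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 6))          # 50 patients, 6 covariates
A_parts = np.split(A, 3, axis=1)      # vertical partition across 3 institutions

# Each institution shares only the kernel matrix of its local attributes;
# their sum is exactly the global Gram matrix used in the dual problem.
K_global = sum(Ak @ Ak.T for Ak in A_parts)

assert np.allclose(K_global, A @ A.T)
```

This identity is why the dual solution computed from aggregated kernels matches the centralized one.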
Distributed survival analysis is one of the prevailing topics in biomedical
research, which studies the development of a symptom, disease, or mortality with
distributed time-to-event data. Cox proportional hazard model (Cox 1972) is widely
concerned in survival analysis, which evaluates the significance of time-varying
covariates with a hazard function. Yu et al. (2008) proposed a privacy-preserving
Cox model for horizontally partitioned data, where affine projections of patient
data in a lower dimensional space were shared to learn survival model. The
proposed HPPCox model utilized a rank-deficient projection matrix to hide sensitive
information in raw data, as the lower dimensional projections were commonly
irreversible. To minimize the loss of information caused by these lower dimensional
projections, the projection matrix was optimized to simultaneously
maintain the major structures (properties) and reduce the dimensionality
of input data. Thus, feature selection could be enabled to prevent overfitting for
scenarios requiring limited training data. This model was shown to achieve nearly
optimal predictive performance for multi-centric survival analysis. O’Keefe et al.
(2012) presented explicit confidentialisation measures for survival models to avoid
exchanging patient-level data in a remote analysis system, but did not consider
distributed learning model. The work considered and compared confidentialised
outputs for non-parametric survival model with Kaplan-Meier estimates (Kaplan
and Meier 1958), semiparametric Cox proportional hazard model and parametric
survival model with Weibull distribution (Weibull 1951). The confidentialised
outputs would benefit model fit assessment with similar model statistics and
significance in comparison to traditional methods. However, the work was focused
on securely generating survival outputs and did not perform distributed model learning.
Lu et al. (2014) proposed a web service WebDISCO for distributed Cox proportional
hazard model over horizontally partitioned data. The global Cox model was
established under the server/client architecture, where each institution separately
calculated its non-sensitive intermediary statistics for model parameter estimation
at the server. WebDISCO investigated the technical feasibility of employing federated
data analysis on survival data. The distributed Cox model was shown to be identical
in mathematical formulation and achieve an equivalent precision for model learning
in comparison to its centralized realization.
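The HPPCox-style idea of sharing only rank-deficient projections can be sketched as follows (synthetic data; an SVD-based projection stands in for the optimized projection matrix, which is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(40, 8))          # one institution's patient covariates

# A rank-deficient projection: keep the top-3 right singular vectors, so the
# shared matrix preserves the dominant structure but cannot be inverted to
# recover the raw 8-dimensional records.
_, s, Vt = np.linalg.svd(A, full_matrices=False)
P = Vt[:3].T                          # 8 x 3 projection matrix
shared = A @ P                        # what actually leaves the institution

assert shared.shape == (40, 3)
# Reconstruction from the projection is only approximate (information loss).
residual = np.linalg.norm(A - shared @ P.T)
assert residual > 0
```

Choosing the projection to retain the dominant structure is what lets the survival model stay close to its centralized predictive performance despite the deliberate information loss.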

3.3.2 Applications Based on ADMM
In this subsection, we review the ADMM-based distributed algorithms for regression and classification with a brief overview on the convergence analysis for
ADMM-based methods. Table 3.2 summarizes the ADMM-based algorithms for
horizontally and vertically partitioned data and most of them have not been applied
in the context of biomedical informatics.
Table 3.2 Federated modeling techniques for distributed data based on ADMM

Reference                    | Application scenario                                            | Data partitioning
Boyd et al. (2011)           | Logistic regression; Lasso/Group Lasso; Support vector machine  | Horizontal and vertical
Mateos et al. (2010)         | Linear regression (Lasso)                                       | Horizontal
Mateos and Schizas (2009)    | Recursive least-squares                                         | Horizontal
Mateos and Giannakis (2012)  | Recursive least-squares                                         | Horizontal
Forero and Giannakis (2010)  | Support vector machine                                          | Horizontal and vertical
Schizas and Aduroja (2015)   | Principal component analysis                                    | Horizontal
Scardapane et al. (2016)     | Recurrent neural networks                                       | Horizontal




Boyd et al. (2011) summarized the ADMM-based distributed ℓ1-penalized logistic
regression model for horizontally and vertically partitioned data, respectively. For
horizontally partitioned data, model parameters were separately solved for each
institution by minimizing an ℓ2-regularized log-likelihood function over local data.
Subsequently, the auxiliary variable for global consensus was found by minimizing the
combination of its ℓ1-norm and the squared difference between the auxiliary
variable and the averaged primal and dual variables. It should be noted that the
auxiliary variable could be computed for each attribute in parallel for improved
efficiency. Each institution would not leak its sensitive information, as only its local
model parameters were exchanged for global consensus. When data are vertically
distributed across multiple institutions, a Lasso problem based on the local attributes
was formulated for each institution under the ADMM framework. Aggregating the
intermediary results from all the institutions, the auxiliary variables were derived
from the ℓ2-regularized logistic loss function. The dual variables were updated
based on the aggregated intermediary results and averaged auxiliary variables and
remained the same for all the institutions. The distributed logistic regression could be
performed in a privacy-preserving manner, as local data owned by each institution
would not be inferred from the aggregated intermediary results. In both cases, the ℓ2-regularized
minimization based on log-sum-exp functions can be iteratively solved
using the Newton-Raphson method or the L-BFGS algorithm.
Mateos et al. (2010) leveraged alternating direction method of multipliers
(ADMM) to make model parameter estimation in sparse linear regression. The
centralized Lasso model was transformed into a decomposable consensus-based
minimization problem for horizontally partitioned data. The derived minimization
problem can be iteratively solved with ADMM in a decentralized manner, where
each institution only communicates with its neighboring institutions to protect
data privacy. To balance computational complexity and convergence rate, three
iterative algorithms, DQP-Lasso, DCD-Lasso and D-Lasso, were developed based
on ADMM. DQP-Lasso introduced strictly convex quadratic terms to constrain the
model parameters and auxiliary variables for each institution and its neighbors in
augmented Lagrangian function. Thus, quadratic programming (QP) is adopted for
each institution to obtain its model parameters by minimizing the decomposable
Lagrangian function in a cyclic manner. To improve efficiency of iterative optimization, DCD-Lasso introduced coordinate descent to simplify the QP process
for each institution. At each iteration, DCD-Lasso updated the model parameters
for each institution with coordinate descent rather than iteratively obtained the
exact solution to the corresponding QP problem. Furthermore, D-Lasso enabled
parallelized computation of model parameters corresponding to multiple institutions
to enhance convergence speed. D-Lasso can achieve equivalent model estimation in
comparison to DCD-Lasso without additional relaxation terms. It is demonstrated
that the model parameters locally derived by these algorithms are convergent to the
global solution to Lasso.



Similar to sparse linear regression, Mateos and Schizas (2009) adopted ADMM
to develop a distributed recursive least-squares (D-RLS) algorithm for time series
data horizontally distributed across multiple institutions. The proposed algorithm
reformulated the exponentially weighted least-squares estimation to a consensus-based optimization problem by introducing auxiliary variables for corresponding
institutions. The reformulated minimization problem was decomposed into a series
of quadratic optimization problems that were recursively solved for each institution.
Furthermore, Mateos and Giannakis (2012) improved the efficiency of D-RLS by
avoiding explicit matrix inversion in recursively solving quadratic problems for each
institution. The proposed algorithms are demonstrated to be stable for time series
data with sufficient temporal samples under the metric of means and mean squared
errors (MSE). These ADMM-based linear regression and least-squares estimation
were validated over the wireless sensor networks.
The sharing formulations have been also studied for Lasso and group Lasso
problems over vertically partitioned data. Similar to distributed logistic regression
model, sparse least-squares estimation was formulated for each institution to
independently obtain the model parameters corresponding to its own attributes.
In group Lasso, institutions would adopt various regularization parameters for the ℓ1-regularized problem. Since the auxiliary variables were updated analytically with a
linear combination of the aggregated intermediary results and dual variables, the
computational complexity mainly depended on the decomposable ℓ1-regularized
problem for multiple institutions. To improve the efficiency, the x-minimization for a
certain institution could be skipped, when its attributes were not considered to be
involved in distributed optimization based on a threshold w.r.t. the regularization
parameters and Lagrangian multiplier.


Forero and Giannakis (2010) leveraged ADMM to develop a distributed support
vector machine (DSVM) classifier for training data horizontally distributed across
multiple institutions. Introducing auxiliary variables for the local model parameters
at each node, the linear SVM classifier was decomposed into a series of convex
sub-problems over these auxiliary variables under the consensus constraints. For
each node, its local model parameters were independently derived from the corresponding sub-problem. The decomposable optimization was iteratively performed
to obtain the unique optimal model parameters in a decentralized manner. Under the
ADMM framework, the global SVM classifier was trained in a privacy-preserving
fashion, as each node exchanges its locally estimated model parameters rather than
the training data it owns. To handle sequential and asynchronous learning tasks, the
linear DSVM classifier supports online updates for time-varying training data. The
classifier could be partially updated to adapt to cases where training samples
are added to or removed from the training set. The ADMM-based distributed
optimization was also generalized to nonlinear SVM classifiers, where consensus
constraints on discriminant functions of local model parameters were shrunk to
a rank-deficient subspace of the reproducing kernel Hilbert space.

3 Privacy Preserving Federated Big Data Analysis

In analogy to generalized linear regression, ADMM was also adopted for the SVM classifier
over vertically partitioned data. The centralized SVM classifier was decomposed
into a series of quadratic programming problems to separately derive the model
parameters corresponding to the attributes owned by each institution. These model
parameters were iteratively adjusted with the auxiliary variables obtained by soft
thresholding over the aggregated intermediary results and averaged dual variables.
Therefore, each institution only needs to share the aggregated intermediary results
of its attributes for each patient’s record.
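To make the consensus mechanism concrete, here is a minimal Python sketch of consensus ADMM for distributed least squares over horizontally partitioned data. It is a simplified illustration of the general idea rather than code from any cited work, and all names are ours; note that only the local estimates and the global average cross institutional boundaries, never the raw data.

```python
import numpy as np

def consensus_least_squares(parts, rho=1.0, n_iter=100):
    """Consensus ADMM for least squares over horizontally partitioned data.
    Each tuple (A_j, b_j) stays at its node; only x_j and z are exchanged."""
    n = parts[0][0].shape[1]
    xs = [np.zeros(n) for _ in parts]
    us = [np.zeros(n) for _ in parts]
    z = np.zeros(n)
    for _ in range(n_iter):
        for j, (A, b) in enumerate(parts):
            # Local x-update: uses only this node's own data.
            xs[j] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                    A.T @ b + rho * (z - us[j]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # global average
        for j in range(len(parts)):
            us[j] += xs[j] - z                                # dual update
    return z

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
parts = []
for _ in range(3):  # three institutions, each with its own records
    A = rng.standard_normal((40, 3))
    parts.append((A, A @ w_true))
z = consensus_least_squares(parts)
print(np.round(z, 3))  # close to w_true
```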
Schizas and Aduroja (2015) proposed a distributed principal component analysis (PCA) framework based on ADMM under the metric of mean-square error (MSE). An equivalent constrained optimization problem was formulated for the classical PCA framework, where each node used its local covariance matrix to separately estimate its principal eigenspace in a recursive manner. Coordinate descents
were adopted in the ADMM framework to iteratively minimize the augmented
Lagrangian function under the consensus constraints. The proposed framework
enabled distributed dimensionality reduction over horizontally partitioned data in
a privacy-preserving fashion, as each node only exchanged aggregated intermediary
results with its neighboring nodes. Under sufficient iterative optimization, the
estimated principal eigenspace is demonstrated to asymptotically converge to
the subspace spanned by the actual principal eigenvectors. For validation, the
proposed distributed PCA framework was employed in data denoising over wireless
sensor networks, where synthetic and practical evaluations showed it could achieve
enhanced convergence rate and improved performance for noise resilience.
Scardapane et al. (2016) adopted ADMM in the decentralized training of recurrent neural networks (RNNs) that optimized the global loss function over the data
horizontally distributed across all the nodes. The proposed distributed algorithm
was designed specifically for Echo State Networks. The decomposable optimization
under consensus constraints was formulated for each node to separately calculate
the model parameter based on the `2 -regularized least-squares estimation. For each
node, its auxiliary variable for global consensus was obtained with a weighted
average of model parameters from its neighboring nodes. Thus, it was not required
to share the training data or the hidden matrix representing the current states of
neural networks among multiple nodes. Since the auxiliary variables were not calculated based on all the nodes in the network, the algorithm did not necessarily depend on a central server to perform global optimization. Evaluations over large-scale synthetic
data showed that the proposed distributed algorithm could achieve comparable
classification performance in comparison to its centralized counterpart.

Convergence and Robustness for Decentralized Data Analysis

Ling et al. (2015) proposed a decentralized linearized ADMM (DLM) method
to reduce computational complexity and enhance convergence speed over standard ADMM. The proposed DLM method simplified the decomposable optimization problems with a linearized approximation.

W. Dai et al.

Thus, the computational cost of implementation based on gradient descent could be significantly reduced for applications like distributed logistic regression and least-squares estimation. It is demonstrated that the DLM method converges to the optimal solution at a linear rate,
when the strong convexity and Lipschitz continuity conditions are satisfied for the decomposable cost functions and their gradients, respectively. Furthermore, Mokhtari
et al. (2016) considered a quadratic approximation of the ADMM-based formulation for the logistic regression model to improve the accuracy of model parameter estimation.
The proposed DQM method was shown to obtain more accurate estimates for model
parameters with a linear convergence rate in comparison to DLM.
Investigations have also been made to improve the robustness of ADMM
for various application scenarios. Wang and Banerjee (2012) introduced an online algorithm into ADMM to yield an enhanced convergence rate. Goldfarb et al.
(2012) developed a fast first-order linearized augmented Lagrangian minimization
to accelerate the convergence of the ADMM algorithm. Considering the insufficient
accessibility of true data values, Ouyang et al. (2013) presented a stochastic ADMM
algorithm aiming to minimize non-smooth composite objective functions.

3.4 Secure Multiparty Computation
The previous sections introduced federated technologies reducing the privacy risks
through model decomposition so that we build global models only on locally
aggregated statistics (without accessing the patient-level data). To further mitigate
the privacy risk, we need to ensure the confidentiality of transmitted summary
statistics in each institution as well. This can be achieved using Secure Multiparty
Computation (SMC), which ensures computation and communication security via
advanced cryptographic protocols. Many state-of-the-art SMC schemes are based
upon the idea of translating an algorithm to a binary circuit. Scalable SMC protocols
like Yao’s garbled circuit (Yao 1982) could represent arbitrary functions with a
Boolean circuit to enable masking of inputs and outputs of each gate for sensitive
information. In federated data analysis, two architectures are widely considered to
integrate secure multiparty computation protocols, namely spoke-hub and peer-to-peer architectures, as shown in Fig. 3.4. The spoke-hub architecture requires one or more non-colluding institutions to securely perform computation and analysis based on the data collected from the other institutions, while the peer-to-peer architecture allows secure exchange of encrypted data in a decentralized manner. In this section, we will introduce some secure multiparty computation protocols for regression,
classification and evaluation tasks.
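To give a flavor of the peer-to-peer style, the sketch below shows the classic masked-ring secure summation idea in Python: the initiator injects a random offset, each party adds its private value to the running total, and the initiator removes the offset at the end. This is a didactic sketch under a semi-honest, non-colluding assumption, not a production protocol; the function name and modulus choice are ours.

```python
import secrets

PRIME = 2**61 - 1  # public modulus; all arithmetic is done mod PRIME

def secure_sum(local_values):
    """Ring-based secure summation sketch: the running total that each
    party sees is masked by the initiator's random offset, so no party
    learns any individual contribution."""
    mask = secrets.randbelow(PRIME)        # known only to the initiator
    running = mask
    for v in local_values:                 # each party adds its own value
        running = (running + v) % PRIME
    return (running - mask) % PRIME        # initiator removes the mask

print(secure_sum([120, 85, 240]))  # 445
```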

3.4.1 Regression
Fienberg et al. (2006) proposed privacy-preserving binary logistic regression for
horizontally partitioned data with categorical covariates. Secure summation and data
integration protocols were adopted to securely compute the maximum likelihood



Fig. 3.4 Illustration of architectures for federated data analysis using secure multiparty computation (SMC) protocols. (a) Spoke-hub architecture. The non-colluding institution is required to perform global computation and operation based on the encrypted data collected from the other institutions; (b) Peer-to-peer architecture. All the institutions exchange and aggregate encrypted data with each other in a successive manner to achieve federated data analysis

and perform contingency table analysis for log-linear models. Under the categorical
covariates, the logistic regression model could be constructed from the log-linear
models in a distributed and privacy-preserving fashion. Slavkovic and Nardi (2007)
presented a secure logistic regression approach on horizontally and vertically
partitioned data without actually combining them. Secure multiparty computation
protocols were developed for the Newton-Raphson method to solve binary logistic
regression with quantitative covariates. At each iteration, multiparty secure matrix
summation and product protocols were employed to compute the gradients and
Hessian matrix. Here, the inverse of Hessian was derived by recursively performing
secure matrix product over its sub-block matrices. The proposed protocol would
protect the privacy of each institution under the secure multiparty protocols, when
the matrices of raw data are all with a dimensionality greater than one. However,
Fienberg et al. (2009) showed that it would lead to serious privacy breach when
intermediary results are shared among multiple institutions in numerous iterations,
even though they can be protected by mechanism like encryption with random
shares. Nardi et al. (2012) developed a secure protocol to fit logistic regression
over vertically partitioned data based on maximum likelihood estimation (MLE).
In comparison to previous works, it enhanced security by providing only the final results of the private computation, since shared intermediate values could compromise the confidentiality of the raw data held by each institution. Two protocols were proposed for approximating the logistic function using
existing cryptographic primitives including secret sharing, secure summation and
product with random shares, secure interval membership evaluation and secure
matrix inversion. The first protocol approximated the logistic function with the
summation of step functions. However, this protocol would be computationally
prohibitive for high-dimensional large-scale problems due to the secret sharing and



comparison operation over encrypted data. To relieve the computational burden, the
second protocol formulated the approximation by solving an ordinary differential
equation numerically integrated with Euler’s method. Thus, the logistic regression
was iteratively fit with secure summations and products of approximation and
average derivatives of logistic function evaluated on the intermediate values.
The accuracy bound of the two approximation protocols is demonstrated to be related to the minimum eigenvalue of the Fisher information matrix and the step size for approximation. However, these protocols would be prohibitive for real-world applications due to their high computational complexity and number of communication rounds.
Sanil et al. (2004) addressed privacy-preserving linear regression analysis over
vertically partitioned data. Quadratic optimization was formulated to derive the exact
coefficients of global regression model over multiple institutions. Distributed
computation of regression coefficients was achieved by implementing Powell’s
algorithm under the secure multiparty computation framework. At each iteration,
each institution updated its local regression coefficients and its own subset of
search directions. Subsequently, a common vector was generated by aggregating the
products of local attributes and search directions using a secure summation protocol for computation in the next iteration. Finally, the proposed algorithm obtained the global
coefficients and the vector of residuals. Thus, global linear regression could be
made based on the entire dataset collected from all the institutions without actually
exchanging the raw data owned by each institution. Moreover, basic goodness-of-fit
diagnostics could be performed with the coefficient of determination that measured
the strength of the linear relationship. However, this algorithm would be unrealistic
due to the assumption that the institution holding the response attribute should share
it with the other institutions. Karr et al. (2009) proposed a protocol for secure linear
regression over data vertically distributed across multiple institutions. The proposed
protocol was able to estimate the regression coefficients and related statistics like
standard error in a privacy-preserving manner. In distributed optimization, multiple
institutions collaborated to calculate the off-diagonal blocks of the global covariance
matrix using secure matrix products, while the diagonal blocks were obtained based
on the corresponding local attributes. The global covariance matrix was shared
among multiple institutions for secure linear regression and statistical analyses.
Moreover, model diagnostic measures based on residuals can be similarly derived
from the global covariance matrix using secure summation and matrix products
protocols. Remarkably, the proposed protocol could be generalized to a variety
of regression models, e.g. weighted least squares regression, stepwise regression
and ridge regression, under the constraint that sample means and co-variances are
sufficient statistics.

3.4.2 Classification
The naïve Bayes classifier is an effective Bayesian learning method with consistent and reasonable performance, and is commonly adopted as a benchmark against which classification methods are evaluated. Vaidya and Clifton (2003b) developed a privacy-preserving



Naive Bayes classifier for vertically partitioned data, where multiple institutions
collaborated to achieve classification with random shares of the global model.
In the proposed Bayes classifier, each institution is only required to share the
class of each instance, rather than the distribution of classes or attribute values.
Secure protocols were developed for training and classification under the secure
multiparty computation framework. In training, random shares of the conditionally
independent probabilities for nominal and numeric attributes were computed for
model parameter estimation. To be concrete, the probability estimates for classes
of instances given the attributes were computed for shares of nominal attributes.
For numeric attributes, mean and variance for the probability density function of
Normal distribution are required. For each class and attribute value, the shares can
be obtained from the institutions owning them. For evaluation, each new instance
was classified by maximizing its posterior probability using Bayes theorem under
the assumption of conditionally independent attribute values. Here, the probabilities
of class conditioned on attributes are derived based on the model parameters.
Furthermore, it is demonstrated that these protocols for training and evaluation are able to securely compute shares of nominal and numeric attributes and classify instances, respectively. Later, Vaidya et al. (2008) introduced secure logarithm primitives from secure multiparty computation for naïve Bayes classification over
horizontally partitioned data. The model parameters for nominal and numeric attributes were directly computed based on the local counts using secure sum
protocols, as each institution held all the attributes required for classifying an
instance. Therefore, classification could be made locally for each institution.
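To illustrate how a naïve Bayes model can be assembled from per-institution counts, consider the toy Python sketch below: each institution contributes only its local (class, attribute-value) counts, which in a real deployment would be combined via a secure summation protocol rather than in the clear. The names, the single-attribute simplification, and the Laplace smoothing are our own assumptions.

```python
import numpy as np

def federated_nb(local_datasets, n_classes, n_values):
    """Aggregate local counts into naive Bayes parameters. Each element of
    local_datasets is (values, classes) for one institution; only the count
    tables below would ever need to leave a site."""
    counts = np.zeros((n_classes, n_values))
    class_counts = np.zeros(n_classes)
    for values, classes in local_datasets:
        for v, c in zip(values, classes):   # purely local tallying
            counts[c, v] += 1
            class_counts[c] += 1
    # Laplace-smoothed conditional probabilities P(value | class).
    probs = (counts + 1) / (class_counts[:, None] + n_values)
    priors = class_counts / class_counts.sum()
    return priors, probs

def classify(v, priors, probs):
    # Posterior maximization for a single-attribute instance.
    return int(np.argmax(priors * probs[:, v]))

site1 = (np.array([0, 0, 1]), np.array([0, 0, 1]))
site2 = (np.array([1, 1, 0]), np.array([1, 1, 0]))
priors, probs = federated_nb([site1, site2], n_classes=2, n_values=2)
print(classify(0, priors, probs), classify(1, priors, probs))  # 0 1
```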
K-means clustering, which partitions observations into clusters with the nearest means, is popular for cluster analysis. For data vertically distributed across
multiple institutions, privacy-preserving K-means clustering (Vaidya and Clifton
2003a) was studied to perform clustering without sharing raw data. Using the
proposed K-means clustering algorithm, each institution can obtain its projection
of the cluster means and learn the cluster assignment of each record without
revealing its exact attributes. Since the high-dimensional problem cannot be simply decomposed into a combination of lower-dimensional problems for each institution,
cooperation between multiple institutions is required to learn the cluster that each
record belongs to. To achieve privacy-preserving clustering, a secure multiparty computation framework using homomorphic encryption is introduced for
multiple institutions. For common distance metrics like Euclidean and Manhattan,
the distances between each record and the means of K clusters can be split over
these institutions. For security, these distances were disguised with random values
from a uniform random distribution and non-colluding institutions were adopted to
compare the randomized distances and permute the comparison results. The secure
permutation algorithm is performed in an asymmetric two-party manner according
to the permutation owned by the non-colluding institution. It should be noted that the
initial values of the K means were assigned to their local shares for the institutions to
obtain a feasible solution. As a result, non-colluding institutions would only know
the selected cluster in the permutation, while the exact attributes owned by each
institution would not be disclosed to the others.
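The distance-splitting idea can be seen in a few lines of Python: for squared Euclidean distance, each institution computes the contribution of its own attribute columns, and only these per-site shares (which the actual protocol would additionally randomize and compare securely) need to be combined. This is an unprotected toy illustration with our own variable names.

```python
import numpy as np

def partial_sq_distances(X_local, centers_local):
    """One institution's share of the squared Euclidean distances,
    computed only on its own attribute projection. Shape: (n_records, k)."""
    return ((X_local[:, None, :] - centers_local[None, :, :]) ** 2).sum(axis=2)

# Vertically partitioned toy data: institution A holds column 0, B holds column 1.
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.2, -0.1]])
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
d_A = partial_sq_distances(X[:, :1], centers[:, :1])
d_B = partial_sq_distances(X[:, 1:], centers[:, 1:])
assign = np.argmin(d_A + d_B, axis=1)  # only the shares are combined
print(assign)  # [0 1 0]
```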



Liang et al. (2013) developed a distributed PCA algorithm to estimate the global
covariance matrix for principal components. Each institution leveraged the standard PCA algorithm to determine its principal components over its local data. These local principal components were exchanged and collected to obtain the global covariance
matrix. The proposed algorithm integrated the distributed coreset-based clustering
to guarantee that the number of vectors for communication was independent of
size and dimension of the federated data. It is demonstrated that the divergence
between approximations on projected and original data for k-means clustering can
be upper bounded. Guo et al. (2013) developed a covariance-free iterative distributed principal component analysis (CIDPCA) algorithm for vertically partitioned high-dimensional data. Instead of approximating global PCA with a sampled covariance matrix, the proposed CIDPCA algorithm is designed to directly determine the principal components by estimating their eigenvalues and eigenvectors. The first principal component, corresponding to the maximum eigenvalue of the covariance matrix, was derived by maximizing the Rayleigh quotient using a gradient ascent method. The iterative method is demonstrated to converge at an exponential rate under arbitrary
initial values of principal components. Subsequently, the remaining principal
components could be iteratively calculated in the orthogonal complement of the
subspace spanned by the previously derived principal components. In comparison
to previous distributed PCA methods, it is shown to achieve higher accuracy
in estimating principal components and better classification performance with a significant reduction in communication cost. This conclusion is also validated with
a variety of studies over real-world datasets.
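The Rayleigh-quotient gradient ascent at the heart of CIDPCA can be sketched as follows. For clarity, this toy Python version forms the covariance matrix centrally, whereas CIDPCA itself avoids that and works over vertically partitioned data; the parameter values and names are our own.

```python
import numpy as np

def first_pc_rayleigh(X, lr=0.01, n_iter=500, seed=0):
    """Find the first principal component by gradient ascent on the
    Rayleigh quotient r(w) = (w^T C w) / (w^T w), renormalizing each step."""
    C = X.T @ X / len(X)                   # sample covariance (data centered)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        r = w @ C @ w                      # current Rayleigh quotient
        w += lr * 2 * (C @ w - r * w)      # ascent direction on the sphere
        w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) * np.array([5.0, 1.0, 0.5])
w = first_pc_rayleigh(X - X.mean(axis=0))
print(np.round(np.abs(w), 3))  # dominant weight on the first coordinate
```

Once the first component is found, the remaining ones can be obtained by repeating the ascent in the orthogonal complement of the components found so far, as the text describes.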
Du et al. (2004) studied multivariate statistical analysis for vertically partitioned
data in the secure two-party computation framework, including secure two-party
multivariate linear regression (S2-MLR) and classification (S2-MC). These problems were addressed in a privacy-preserving manner with secure two-party matrix
computation based on a set of basic protocols. A new security model was proposed
to lower security requirements for higher efficiency, where each institution was
allowed to reveal a part of information about its raw data under the guarantee that
the raw data would not be inferred from the disclosed information. To securely
perform matrix computation, building blocks for matrix product, matrix inverse and
matrix determinant were presented. Thus, S2-MLR and S2-MC could be securely
solved with these building blocks. It is demonstrated that, in the proposed two-party
multivariate statistical analysis, it would be impossible to infer the raw data owned
by each institution, when less than half of its disguised matrix is revealed.
Yu et al. (2006) proposed an efficient privacy-preserving support vector machine
(SVM) classification method, namely PP-SVMV, for vertically partitioned data. In
the proposed method, the global SVM model was constructed from local SVM
models rather than directly exchanging the local data. Thus, both local SVM models
and their corresponding local data for each institution were not disclosed. For
linear kernels, the global kernel matrix is computed by directly merging gram
matrices from multiple institutions to solve the dual problem. This result can
also be extended to ordinary non-linear kernels that can be represented by dot
products of covariates, i.e. polynomial and radial basis function (RBF) kernels.



To guarantee data and model privacy, the merging of local models is performed with
secure summation of scalar integers and matrices. Experimental results demonstrated the accuracy and scalability of PP-SVMV in comparison to the centralized
SVM over the original data. Similarly, Yu et al. (2006) presented a privacypreserving solution to support non-linear SVM classification over horizontally
partitioned data. It required that the nonlinear kernel matrices could be directly
calculated based on the gram matrix. Thus, widely-used nonlinear kernel matrices
like polynomial and RBF kernels can be derived from the dot products of all data
pairs using the proposed solution. Secure set intersection cardinality was adopted
as an equivalency to these dot products based on the data horizontally distributed
across the institutions. Thus, commutative one-way hash functions were utilized to
securely obtain the set intersection cardinality. The proposed method was shown to
achieve equivalent classification performance in comparison to the centralized SVM
classifiers. Mangasarian et al. (2008) constructed a reduced kernel matrix with the
original data and a random matrix to perform classification under the protection of
local data. The random kernel based SVM classifier could support both horizontally
and vertically partitioned data. Yunhong et al. (2009) proposed a privacy-preserving
SVM classifier without using secure multiparty computation. The proposed SVM
classifier built its kernel matrix by combining local gram matrices derived from
the original data owned by the corresponding institutions. By matrix factorization theory, the local Gram matrices would not reveal the original data, since the factorization is not unique and the covariates cannot be inferred from a local Gram matrix. The
proposed classification algorithm is developed for SVM classifier with linear and
nonlinear kernels, where the accuracy of distributed classification is comparable
to the ordinary global SVM classifier. Que et al. (2012) presented a distributed
privacy-preserving SVM (DPP-SVM), where server/client collaborative learning
framework is developed to securely estimate parameters of covariates based on
the aggregated local kernel matrices from multiple institutions. For security, all
the model operations are performed on the trusted server, including service layer
for server/client communication, task manager for data validation and computation
engine for parameter estimation.
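The Gram-matrix merging behind these SVM approaches is easy to verify numerically: for a linear kernel, the sum of local Gram matrices over a vertical partition equals the centralized Gram matrix, and dot-product kernels such as RBF follow directly from it. The snippet below is our own toy check, not code from any cited work.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# Vertically partitioned covariates: each institution holds some columns.
X_full = rng.standard_normal((n, 5))
parts = [X_full[:, :2], X_full[:, 2:4], X_full[:, 4:]]

# Each institution shares only its local Gram matrix X_j X_j^T.
local_grams = [P @ P.T for P in parts]
K_merged = sum(local_grams)

# For a linear kernel the merged Gram matrix equals the centralized one.
assert np.allclose(K_merged, X_full @ X_full.T)

# Kernels expressible via dot products follow directly, e.g. an RBF kernel:
sq = np.diag(K_merged)
K_rbf = np.exp(-(sq[:, None] + sq[None, :] - 2 * K_merged) / 2.0)
print(K_rbf.shape)  # (6, 6)
```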
Vaidya et al. (2008) presented a generalized privacy-preserving algorithm for
building ID3 decision tree over data vertically distributed across multiple institutions. For efficient and secure classification using the ID3 decision tree, the
proposed algorithm only revealed the basic structure of the tree and the specific
institution responsible for decision making at each node, rather than the exact values
of attributes. For each node in the tree, the basic structure includes its number of
branches and the depths of its subtrees, which represented the number of distinct
values for corresponding attributes. Thus, it is not necessary for each institution
to introduce complex cryptographic protocol at each possible level to securely
classify an instance. It should be noted that the proposed algorithm only needs to assign the class attribute to one institution, though each interior node could learn the count of classes. Consequently, the institution owning the class attribute estimated the
distributions throughout the decision tree based on the derived transaction counts,
which would not disclose much new information. The distribution and majority class were determined based on the cardinality of the set intersection protocol. Given multiple
institutions, classification of an instance was conducted by exchanging the control
information based on the decision made for each node, but not the attribute values.
To further enhance the efficiency of the privacy-preserving ID3 algorithm, Vaidya et al. (2014) developed a random decision tree (RDT) framework to fit parallel and distributed architectures. Random decision trees are desirable for privacy-preserving distributed data mining, as they can achieve an effect equivalent to perturbation without diminishing the utility of information from data mining. For horizontally partitioned
data, the structure of RDTs was known to all the institutions that held the same
types of attributes. The RDTs were constructed by considering the accessibility of
the global class distribution vector for leaf nodes. Each institution could derive its
local distribution vector from its own data and submitted the encrypted versions for
aggregation. When the class distribution vectors were known to all the institutions
or the institution owning the RDTs, the aggregation could be directly made based
on homomorphically encrypted data. If the class distribution vectors were forced
to remain unrevealed, a secure electronic voting protocol was presented to make
decision based on the collected encrypted local vectors. For vertically partitioned
data, fully distributed trees with a specified total number were considered, so
that the sensitive attribute information for each institution was not revealed. Each
random tree was split among multiple institutions and constructed recursively using
the BuildTree procedure in a distributed fashion. It is worth mentioning that this
procedure does not require the transaction set. Subsequently, the statistics of each node were securely updated based on the training set from multiple institutions using
additively homomorphic encryption. Similarly, instance classification was achieved
by averaging the estimated probabilities from multiple RDTs in a distributed
manner. The proposed RDT algorithm is secure, as neither the attribute values nor
the RDT structure is shared during RDT construction and instance classification.

3.4.3 Evaluation
Sorting algorithms are essential for privacy-preserving distributed data analysis, including ranked element queries, group-level aggregation and statistical tests. In the secure multiparty computation framework, oblivious sorting can be implemented by hiding the propagation of values in a sorting network or by directly using a sorting algorithm as a basis. Bogdanov et al. (2014) investigated four different
oblivious sorting algorithms for vertically partitioned data, where two algorithms
improved the existing sorting network and quicksort algorithms, and the other two were developed to achieve a low round count for short vectors and low communication cost for large inputs, respectively. For short vectors, a naive sorting
protocol NaiveCompSort was presented based on oblivious shuffling of input data
and vectorized comparison of shuffled data. Given large inputs, oblivious radix
sorting protocol was developed as an efficient alternative. It leveraged binary count
sorting algorithm to rearrange the input integer vectors based on the sorted digits



in the same positions. Thus, the oblivious radix sorting protocol is efficient, as it
does not require oblivious comparisons. Furthermore, optimization methods were
proposed to improve the efficiency of the oblivious sorting algorithms. For example,
bitwise shared representation and vectorization would allow data parallelization
to reduce the communication cost and complexity for SMC. For sorting network
structure, shuffling the inputs and re-using the generated network could optimize
its implementation, while uniqueness transformation for comparison-based sorting
protocols could avoid information leakage in the sortable vector. It should be noted
that these sorting algorithms could also be generalized to support matrix sorting. The
complexity and performance analysis for all four sorting algorithms, including detailed running time, network and memory usage, was also presented.
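The binary counting-sort core of the radix approach can be sketched in plain, non-oblivious Python; the oblivious protocol performs the same stable per-bit rearrangement, but over secret-shared values instead of cleartext. The function name is ours.

```python
def binary_radix_sort(values, n_bits):
    """Plain (non-oblivious) binary radix sort: repeatedly apply a stable
    counting sort on each bit, least significant first. Because each pass
    is a fixed data rearrangement, no pairwise comparisons are needed."""
    for bit in range(n_bits):
        zeros = [v for v in values if not (v >> bit) & 1]
        ones = [v for v in values if (v >> bit) & 1]
        values = zeros + ones   # stable partition on the current bit
    return values

print(binary_radix_sort([5, 3, 7, 0, 2, 6], 3))  # [0, 2, 3, 5, 6, 7]
```

The absence of comparisons in each pass is precisely what makes the radix strategy attractive for the oblivious setting, where every comparison over encrypted data is expensive.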
Makri et al. (2014) proposed a privacy-preserving statistical verification for
clinical research based on the aggregated results from statistical computation. It
leveraged secure multiparty computation primitives to perform evaluations for
Student’s and Welch’s t-test, ANOVA (F-test), chi-squared test, Fisher’s exact test
and McNemar’s test over horizontally partitioned data. The proposed statistical
verification could be outsourced to a semi-honest third party, namely a verifier, as no private data were exchanged during the verification process. Secure protocols based
on secret sharing were utilized to compute the means and variance. Consequently,
Student’s t-test, Welch’s test and F-test could be evaluated based on the derived
means and variances. Meanwhile, the chi-squared test, Fisher's exact test and
McNemar’s test could be performed based on the frequencies in the contingency
table, which would not reveal the individual records in the group of data held by
certain institutions. The proposed mechanism is proven to protect the data security
and privacy in the semi-honest model using secure multiparty computation protocols
from Shamir’s secret sharing.

3.5 Asynchronous Optimization
In Sect. 3.2, we discussed the Newton-Raphson method and ADMM framework
for distributed optimization in federated data analysis. In these methods, all
institutions are commonly synchronized for computation at each iteration. However,
these synchronous methods would fail due to unexpected communication delay
and interruption under practical network conditions. Asynchronous optimization
algorithms have been developed to perform distributed and parallel optimization based on local updates from institutions with various delays, e.g. institutions with
different computation and processing speeds and access frequencies. Thus, server
and institutions can proceed without waiting for prerequisite information as in
synchronous methods.



3.5.1 Asynchronous Optimization Based on Fixed-Point
Chazan and Miranker (1969) first proposed an asynchronous parallel method to solve linear systems of equations with chaotic relaxation. The proposed method developed totally (infinite delay) and partially (bounded delay) asynchronous parallel algorithms to improve the efficiency of iterative schemes based on a limited number of updates with a guarantee of convergence. Baudet (1978) extended totally asynchronous chaotic relaxation to solve fixed-point problems with contracting operators. The proposed methods were guaranteed to converge to the fixed point under the sufficient condition of contracting operators.
Bertsekas and Tsitsiklis (1989) introduced gradient-like optimization based
on totally and partially asynchronous parallel algorithms in unconstrained and
constrained optimization problems. Totally asynchronous gradient algorithm was
studied for optimization problems with contraction mapping under the metric of
weighted maximum norm. Its convergence is guaranteed when the diagonal dominance condition is satisfied for its Hessian matrix. Provided finite asynchrony for
communication and computation, the partially asynchronous parallel optimization
is shown to converge when the step size for iterative optimization is sufficiently
small in comparison to the asynchrony measure. Tseng (1991) analyzed the rate
of convergence for partially asynchronous gradient projection algorithm. Given an
objective function with a Lipschitz continuous gradient, a linear convergence rate can be achieved when its isocost surfaces can be discriminated and the upper Lipschitz property holds for at least one of its multivalued functions. Tai and Tseng (2002)
studied the rate of convergence for strongly convex optimization problems based on
asynchronous domain decomposition methods.
Recently, Peng et al. (2016) developed an algorithmic framework AROCK for
asynchronous optimization related to non-expansive operator T with fixed point.
Under the assumption of atomic updates, AROCK randomly selects a component of the primal variable x and updates it without memory locking, applying the sub-operator S = I − T to the (possibly stale) variables x̂ read within a bounded delay:

x^(t+1) = x^(t) − η S_{i_t} x̂^(t)

Here η is the step size and i_t is the random variable indicating the coordinate chosen for atomic writing at
time t. AROCK is demonstrated to achieve convergence under finite-dimensional
operator. It is also guaranteed to converge to the fixed point with a linear rate,
when the difference between the identity matrix and non-expansive operator is
quasi-strongly monotone. Furthermore, Peng et al. (2016) studied coordinate update
methods for specific optimization problems with coordinate friendly operators.
Remarkably, coordinate update methods could derive and leverage a variety of
coordinate friendly operators to make asynchronous realization for coordinate
descent, proximal gradient, ADMM and primal-dual methods.
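To make the update above concrete, the following single-threaded simulation (our own sketch, not the authors' implementation) applies one random coordinate of S = I − T to a possibly stale iterate whose staleness is bounded, mimicking a lock-free atomic coordinate write.

```python
import numpy as np

def arock_style(T, x0, eta, max_delay, iters, rng):
    """Single-threaded simulation of an ARock-style scheme: pick a random
    coordinate i_t and apply one component of S = I - T to a possibly
    stale iterate x_hat (staleness bounded by max_delay)."""
    hist = [np.array(x0, dtype=float)]
    n = len(hist[0])
    for t in range(iters):
        d = rng.integers(0, min(max_delay, t) + 1)  # bounded staleness
        x_hat = hist[-1 - d]                        # delayed read
        i = rng.integers(n)                         # random coordinate i_t
        x = hist[-1].copy()
        x[i] -= eta * (x_hat - T(x_hat))[i]         # atomic coordinate write
        hist.append(x)
    return hist[-1]

# Non-expansive operator with fixed point at the origin: T(x) = 0.5 * x.
rng = np.random.default_rng(1)
x = arock_style(lambda v: 0.5 * v, [2.0, -3.0, 1.0],
                eta=0.5, max_delay=4, iters=2000, rng=rng)
```

Because T here is a contraction with fixed point 0, the randomly updated coordinates drive the iterate to the fixed point despite the stale reads.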

3 Privacy Preserving Federated Big Data Analysis


3.5.2 Asynchronous Coordinate Gradient Descent
A series of partially asynchronous parallel methods have been developed for
coordinate gradient descent. Niu et al. (2011) presented HOGWILD!, an
asynchronous strategy that performs stochastic gradient descent in parallel
without memory locking; it enables multiple processors to make gradient updates
in shared memory. For sparse learning problems, HOGWILD! achieves a sublinear
convergence rate, as the error introduced by overlapping gradient updates is
negligible. Liu et al. (2015) proposed AsySCD, an asynchronous stochastic
coordinate descent algorithm to improve the parallel minimization of smooth
convex functions. When the essential strong convexity condition is satisfied,
the minimization can be solved at a linear convergence rate. Later, Liu and
Wright (2015) developed AsySPCD, an asynchronous parallel algorithm that
minimizes a composite objective composed of smooth convex functions using
stochastic proximal coordinate descent. AsySPCD considers the scenario in which
data are simultaneously accessed and updated by multiple institutions. Under
the relaxed optimal strong convexity condition, the proposed algorithm can
achieve a linear convergence rate. Remarkably, both AsySCD and AsySPCD can be
accelerated at a nearly linear rate given a sufficient number of processors
(institutions). Hsieh et al. (2015) developed PASSCoDe, a family of parallel
algorithms for stochastic dual coordinate descent based on the LIBLINEAR
software, to efficiently solve ℓ2-regularized empirical risk minimization
problems. At each iteration, PASSCoDe allows each institution to randomly
select a dual variable to update the primal variables with coordinate descent.
Under asynchronous settings, three parallel algorithms were developed to handle
the possible violation of the primal-dual relationship in ℓ2-regularized
minimization. PASSCoDe-Lock and PASSCoDe-Atomic maintain the primal-dual
relationship with memory locking and atomic writing, respectively.
PASSCoDe-Wild achieves nearly optimal performance with no locking or atomic
operations by performing backward error analysis for memory conflicts.
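A minimal HOGWILD!-style sketch (ours, not the reference implementation): several threads apply stochastic gradient steps for least squares to one shared weight vector with no locks. On this small dense problem it is purely illustrative; the original analysis targets sparse problems where concurrent updates rarely collide.

```python
import threading

import numpy as np

def hogwild_sgd(X, y, n_threads=4, epochs=10, lr=0.05):
    """HOGWILD!-style sketch: threads run SGD on a shared weight vector
    with no locks; concurrent (possibly conflicting) writes are tolerated."""
    w = np.zeros(X.shape[1])                    # shared parameter vector

    def worker(rows, w):
        for _ in range(epochs):
            for i in rows:
                g = (X[i] @ w - y[i]) * X[i]    # squared-loss gradient
                w -= lr * g                     # unsynchronized in-place write

    parts = np.array_split(np.arange(len(y)), n_threads)
    threads = [threading.Thread(target=worker, args=(p, w)) for p in parts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

# Recover w* = [1, -2] from noiseless linear measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0])
w = hogwild_sgd(X, y)
```

Because the measurements are consistent, the true weights are the unique fixed point of every worker's updates, so the lock-free races only slow convergence rather than change the answer.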

3.5.3 Asynchronous Alternating Direction Method
of Multipliers
Recently, asynchronous ADMM frameworks with consensus constraints have been
studied for distributed optimization under practical network conditions.
Similar to SMC, two architectures have been developed to realize asynchronous
optimization under the ADMM framework. In the peer-to-peer architecture,
institutions are interconnected, with their delay or inconsistency indicated by
an asynchronous clock, so that optimization proceeds in a decentralized manner.
Iutzeler et al. (2013) adopted randomized Gauss-Seidel iterations of the
Douglas-Rachford operator to develop a generalized asynchronous ADMM framework
over multiple institutions. The proposed asynchronous optimization methods
allow partial activation of isolated decision variables (those not involved in
the active constraints) to perform distributed


W. Dai et al.

minimization without coordinating all the institutions. Based on monotone
operator theory, a randomized ADMM algorithm was formulated for institutions
with different frequencies of activation; it is demonstrated to converge to the
optimal solution under a connectivity assumption on the graph derived from the
institutions. Wei and Ozdaglar (2013) proposed an asynchronous algorithm based
on ADMM for separable linearly constrained optimization over multiple
institutions. It established a general ADMM-based formulation under
asynchronous settings that iteratively draws a pair of random variables
representing active constraints and coupled decision variables and uses them to
update the primal and dual variables. Relating the iterative results under the
asynchronous setting to those based on all the institutions, the proposed
asynchronous algorithm is demonstrated to converge at a rate inversely
proportional to the number of iterations.
The spoke-hub architecture maintains a master node (server) to reach consensus
over the primal variables distributed across multiple institutions. Zhang and
Kwok (2014) proposed an asynchronous ADMM algorithm that introduces a partial
barrier and bounded delay to control asynchrony and guarantee convergence. The
proposed async-ADMM algorithm updates the local primal and dual variables for
each institution with the most recent consensus variable. Meanwhile, the master
node requires only a partial subset of the updated primal and dual variables,
within a bounded delay, to update the consensus variable in an asynchronous
manner. Under the bounded delay, institutions with different computation and
processing speeds can be properly accounted for in the global consensus. The
convergence property holds for a composition of non-smooth convex local
objective functions. Hong (2014) presented a generalized proximal ADMM
framework to solve non-convex optimization problems with nonsmooth objective
functions in an asynchronous and distributed manner. An incremental algorithm
was developed for iterative asynchronous optimization with varying active
constraints. At each iteration, the server updates all the primal and dual
variables with the most recent gradients of the local objective functions from
the institutions. The consensus variables are derived with a proximal operation
to compute the gradient in each institution. It should be noted that the server
only requires a subset of institutions with newly computed gradients to update
the primal and dual variables. Thus, the proposed Async-PADMM algorithm is
robust to the asynchrony caused by communication delay and interruption.
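The spoke-hub pattern can be sketched on the simplest consensus problem, minimizing the sum of ½(x − a_i)² where each "institution" holds one a_i. This is a synchronous toy version of our own devising (the asynchronous variants above additionally bound the staleness of each institution's contribution):

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=50):
    """Consensus ADMM for minimizing sum_i 0.5*(x - a_i)^2: each
    'institution' i keeps a local copy x_i and dual u_i; the 'server'
    forms the consensus z by averaging the local contributions."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    u = np.zeros_like(a)
    z = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local primal updates
        z = np.mean(x + u)                      # server consensus step
        u = u + x - z                           # local dual updates
    return z

z = consensus_admm([1.0, 2.0, 6.0])   # consensus approaches the mean, 3.0
```

The local primal step has a closed form because each f_i is quadratic; for general convex f_i it becomes a proximal subproblem solved inside each institution, which is what makes the scheme privacy-friendly: only (x_i + u_i) travels to the server.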

3.6 Discussion and Conclusion
In this chapter, we review privacy-preserving federated data analysis
algorithms in the context of biomedical research. Federated data analysis
algorithms aim to facilitate biomedical research and clinical treatment based
on large-scale healthcare data collected from various organizations, in full
compliance with institutional policies and legislation. Federated models are
developed to bring computation to horizontally or vertically partitioned data
rather than explicitly transfer the data to a central repository. Server/client
and decentralized architectures are established to estimate global model
parameters and perform statistical tests based on the aggregated intermediary
results from multiple institutions. As a result, patient-level data cannot be
inferred, provided they are not distributed in an extremely unbalanced way.
Distributed optimization for federated data analysis is achieved and
generalized by integrating the Newton-Raphson method and the ADMM framework.
For generalized linear regression, the Newton-Raphson method is demonstrated to
achieve accuracy equivalent to its centralized counterparts in model parameter
estimation and statistical tests. The ADMM framework splits the objective
function for distributed optimization with auxiliary variables to formulate a
decomposable cost function for multiple institutions. Since it does not require
deriving the global consensus from all the local data, ADMM can support
decentralized analysis without a data server. The separate estimates from the
institutions are guaranteed to converge under the consensus constraints based
on auxiliary variables. To further protect the communication among servers and
clients, secure multiparty computation protocols were adopted for applications
like regression, classification and evaluation over distributed data; they
protect the process of computation and communication for improved data security
and privacy. Finally, asynchronous parallel optimization can be incorporated to
handle real-world network conditions and improve the efficiency of parallel
computation.
Privacy-preserving federated data analysis can be further studied for improved
security and efficiency and for generic formulations suited to practice. One
major challenge for federated data analysis is distributed model parameter
estimation and statistical testing for vertically partitioned data. Although
federated models are shown to be effective for logistic regression models and
SVM classifiers over vertically partitioned data, an analogous formulation
based on dual optimization would fail for analysis tasks with non-separable
objective functions like the Cox proportional hazards model. Since each
institution only holds a portion of the covariates, the kernel matrix cannot be
explicitly decomposed across the institutions. In such cases, ADMM tends to be
a promising alternative. However, ADMM commonly suffers from a slow convergence
rate and, consequently, a high communication cost when solving the iterative
optimization with decomposable subproblems. A two-loop recursion would be
required to solve the optimization problems based on augmented Lagrangian
functions and the regularized optimization problem on the server or client
side. Thus, it is imperative to develop privacy-preserving algorithms that
consider both accuracy and efficiency for distributed analysis over vertically
partitioned data.
The second challenge is the communication cost between institutions and the
server. In the first architecture, a server is required to collect the data
information or computation results from each institution and output the final
results, while in the second architecture, all the institutions collaborate in
a round-robin manner without a server. El Emam et al. (2013) showed that
privacy risk might exist even when exchanging intermediary results that are
aggregated from a local patient cohort. Taking horizontally partitioned data as
an example, there is a chance to identify sensitive patient-level data from the
Gram matrix for linear models, or from indicator variables, the covariance
matrix, intermediary results from multiple iterations, or the models
themselves. On the other hand, encryption methods can prevent information
leakage when exchanging data but noticeably affect computational and storage
efficiency. For example, homomorphic encryption would significantly increase
computation and storage costs when securely outsourcing data and computations.
To protect all raw data and intermediary results, SMC-based distributed
analysis methods require a high communication cost to deploy oblivious transfer
protocols. Moreover, many existing encryption-based methods cannot be scaled up
to handle large biomedical data. As a result, a proper protocol or mechanism is
crucial to balance security and efficiency in institution/server communication.
Acknowledgement This research was supported by the Patient-Centered Outcomes
Research Institute (PCORI) under contract ME-1310-07058, the National
Institutes of Health (NIH) under award numbers R01GM118574, R01GM118609,
R00HG008175, R21LM012060, and

References
Act, D. P. (1998). Data protection act. London: The Stationery Office.
Baudet, G. M. (1978). Asynchronous iterative methods for multiprocessors. Journal of the ACM,
25(2), 226–244.
Bertsekas, D. P., & Tsitsiklis, J. N. (1989). Parallel and distributed computation: numerical
methods (Vol. 23). Englewood Cliffs, NJ: Prentice hall.
Bogdanov, D., Laur, S., & Talviste, R. (2014). A practical analysis of oblivious sorting algorithms
for secure multi-party computation. In K. Bernsmed & S. Fischer-Hübner (Eds.), Secure IT
systems (pp. 59–74). Springer International Publishing.
Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). Distributed optimization and
statistical learning via the alternating direction method of multipliers. Foundations and Trends
in Machine Learning, 3(1), 1–122.
Brown, J., Balaconis, E., Mazza, M., Syat, B., Rosen, R., Kelly, S., et al. (2012). PS146: HMORNnet: Shared infrastructure for distributed querying by HMORN collaboratives.
Clinical Medicine & Research, 10(3), 163.
Chambless, L. E., & Diao, G. (2006). Estimation of time-dependent area under the ROC curve for
long-term risk prediction. Statistics in Medicine, 25(20), 3474–3486.
Chazan, D., & Miranker, W. (1969). Chaotic relaxation. Linear Algebra and Its Applications, 2(2),
Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society,
Series B, Statistical Methodology, 34(2), 187–220.
Du, W., Han, Y. S., & Chen, S. (2004). Privacy-preserving multivariate statistical analysis: Linear
regression and classification. In Proceedings of the 2004 SIAM international conference on
data mining (pp. 222–233).
El Emam, K., Samet, S., Arbuckle, L., Tamblyn, R., Earle, C., & Kantarcioglu, M. (2013). A secure
distributed logistic regression protocol for the detection of rare adverse drug events. Journal of
the American Medical Informatics Association: JAMIA, 20(3), 453–461.
Fienberg, S. E., Fulp, W. J., Slavkovic, A. B., & Wrobel, T. A. (2006). “Secure” log-linear and
logistic regression analysis of distributed databases. In J. Domingo-Ferrer & L. Franconi (Eds.),
Privacy in statistical databases (pp. 277–290). Berlin: Springer.
Fienberg, S. E., Nardi, Y., & Slavković, A. B. (2009). Valid statistical analysis for logistic
regression with multiple sources. In C. S. Gal, P. B. Kantor, & M. E. Lesk (Eds.), Protecting
persons while protecting the people (pp. 82–94). Berlin: Springer.
Forero, P. A., & Giannakis, G. B. (2010). Consensus-based distributed support vector machines.
Journal of Machine Learning Research: JMLR, 11, 1663–1707.

Goldfarb, D., Ma, S., & Scheinberg, K. (2012). Fast alternating linearization methods for
minimizing the sum of two convex functions. Mathematical Programming A Publication of
the Mathematical Programming Society, 141(1–2), 349–382.
Guo, Y.-F., Lin, X., Teng, Z., Xue, X., & Fan, J. (2013). A covariance-free iterative algorithm for
distributed principal component analysis on vertically partitioned data. Pattern Recognition,
45(3), 1211–1219.
Gymrek, M., McGuire, A. L., Golan, D., Halperin, E., & Erlich, Y. (2013). Identifying personal
genomes by surname inference. Science, 339(6117), 321–324.
Health Insurance Portability and Accountability Act (HIPAA). (n.d.). Retrieved from http://
Homer, N., Szelinger, S., Redman, M., Duggan, D., Tembe, W., Muehling, J., et al. (2008).
Resolving individuals contributing trace amounts of DNA to highly complex mixtures using
high-density SNP genotyping microarrays. PLoS Genetics, 4(8), e1000167.
Hong, M. (2014). A distributed, asynchronous and incremental algorithm for nonconvex optimization: An ADMM based approach. arXiv [cs.IT]. Retrieved from http://arxiv.org/abs/1412.6058.
Hosmer, D. W., Lemeshow, S., & Sturdivant, R. X. (2013). Applied logistic regression. New York,
NY: Wiley.
Hsieh C-J., Yu H-F., & Dhillon I. S. (2015). Passcode: Parallel asynchronous stochastic dual coordinate descent. In Proceedings of the 32nd international conference on machine learning
(ICML-15) (pp. 2370–2379).
Iutzeler, F., Bianchi, P., Ciblat, P., & Hachem, W. (2013). Asynchronous distributed optimization
using a randomized alternating direction method of multipliers. In 52nd IEEE conference on
decision and control (pp. 3671–3676). ieeexplore.ieee.org.
Jiang, W., Li, P., Wang, S., Wu, Y., Xue, M., Ohno-Machado, L., & Jiang, X. (2013). WebGLORE:
A web service for Grid LOgistic REgression. Bioinformatics, 29(24), 3238–3240.
Kantarcioglu, M. (2008). A survey of privacy-preserving methods across horizontally partitioned
data. In C. C. Aggarwal & P. S. Yu (Eds.), Privacy-preserving data mining (pp. 313–335).
New York, NY: Springer.
Kaplan, E. L., & Meier, P. (1958). Nonparametric estimation from incomplete observations.
Journal of the American Statistical Association, 53(282), 457–481.
Karr, A. F., Lin, X., Sanil, A. P., & Reiter, J. P. (2009). Privacy-preserving analysis of vertically
partitioned data using secure matrix products. Journal of Official Statistics, 25(1), 125–138.
Lafky, D. (2010). The Safe Harbor method of de-identification: An empirical test. In Fourth
national HIPAA summit west.
Liang, Y., Balcan, M. F., & Kanchanapally, V. (2013). Distributed PCA and k-means clustering. In
The big learning workshop at NIPS 2013 (pp. 1–8).
Ling, Q., Shi, W., Wu, G., & Ribeiro, A. (2015). DLM: Decentralized linearized alternating
direction method of multipliers. IEEE Transactions on Signal Processing: A Publication of
the IEEE Signal Processing Society, 63(15), 4051–4064.
Liu, J., & Wright, S. J. (2015). Asynchronous stochastic coordinate descent: Parallelism and
convergence properties. SIAM Journal on Optimization, 25(1), 351–376.
Liu, J., Wright, S. J., Ré, C., Bittorf, V., & Sridhar, S. (2015). An asynchronous parallel stochastic
coordinate descent algorithm. Journal of Machine Learning Research JMLR, 16, 285–322.
Retrieved from http://www.jmlr.org/papers/volume16/liu15a/liu15a.pdf.
Li, Y., Jiang, X., Wang, S., Xiong, H., & Ohno-Machado, L. (2016). VERTIcal Grid lOgistic
regression (VERTIGO). Journal of the American Medical Informatics Association, 23(3),
Lu, C.-L., Wang, S., Ji, Z., Wu, Y., Xiong, L., & Jiang, X. (2014). WebDISCO: A web service
for distributed cox model learning without patient-level data sharing. Journal of the American
Medical Informatics Association, 22(6), 1212–1219.
Makri, E., Everts, M. H., de Hoogh, S, Peter, A., H. op den Akker., Hartel, P., & Jonker, W.
(2014). Privacy-preserving verification of clinical research (pp. 481–500). Presented at the
Sicherheit 2014 - Sicherheit, Schutz und Zuverlässigkeit, Beiträge der 7. Jahrestagung des
Fachbereichs Sicherheit der Gesellschaft für Informatik e.V. (GI), Bonn, Germany: Gesellschaft
für Informatik.



Mangasarian, O. L., Wild, E. W., & Fung, G. M. (2008). Privacy-preserving classification of
vertically partitioned data via random kernels. ACM Transactions on Knowledge Discovery
from Data, 2(3), 12:1–12:16.
Mateos, G., Bazerque, J. A., & Giannakis, G. B. (2010). Distributed sparse linear regression.
IEEE Transactions on Signal Processing: A Publication of the IEEE Signal Processing Society,
58(10), 5262–5276.
Mateos, G., & Giannakis, G. B. (2012). Distributed recursive least-squares: Stability and performance analysis. IEEE Transactions on Signal Processing: A Publication of the IEEE Signal
Processing Society, 60(612), 3740–3754.
Mateos, G., & Schizas, I. D. (2009). Distributed recursive least-squares for consensus-based in-network adaptive estimation. IEEE Transactions on Signal Processing, 57(11), 4583–4588.
Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5061644.
McGraw, D. (2008). Why the HIPAA privacy rules would not adequately protect personal health
records: Center for Democracy and Technology (CDT) brief. http, 1–3.
Mokhtari, A., Shi, W., Ling, Q., & Ribeiro, A. (2016). DQM: Decentralized quadratically approximated alternating direction method of multipliers. IEEE Transactions on Signal Processing: A
Publication of the IEEE Signal Processing Society, 64(19), 5158–5173.
Nardi, Y., Fienberg, S. E., & Hall, R. J. (2012). Achieving both valid and secure logistic
regression analysis on aggregated data from different private sources. Journal of Privacy and
Confidentiality, 4(1), 9.
Niu, F., Recht, B., Re, C., & Wright, S. (2011). Hogwild: A lock-free approach to parallelizing
stochastic gradient descent. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, & K. Q.
Weinberger (Eds.), Advances in neural information processing systems 24 (pp. 693–701). Red
Hook, NY: Curran Associates, Inc.
Ohno-Machado, L., Agha, Z., Bell, D. S., Dahm, L., Day, M. E., & Doctor, J. N. (2014).
pSCANNER: Patient-centered scalable national network for effectiveness research. Journal
of the American Medical Informatics Association: JAMIA, 21(4), 621–626.
O’Keefe, C. M., Sparks, R. S., McAullay, D., & Loong, B. (2012). Confidentialising survival
analysis output in a remote data access system. Journal of Privacy and Confidentiality, 4(1), 6.
Ouyang, H., He, N., Tran, L., & Gray, A. (2013). Stochastic alternating direction method
of multipliers. In Proceedings of the 30th international conference on machine learning
(pp. 80–88).
PCORnet: The National Patient-Centered Clinical Research Network. (n.d.). Retrieved from http:/
Peng, Z., Wu, T., Xu, Y., Yan, M., & Yin, W. (2016a). Coordinate-friendly structures, algorithms
and applications. Annals of Mathematical Sciences and Applications, 1(1), 57–119.
Peng, Z., Xu, Y., Yan, M., & Yin, W. (2016b). ARock: An algorithmic framework for asynchronous
parallel coordinate updates. SIAM Journal of Scientific Computing, 38(5), A2851–A2879.
PopMedNet. (n.d.). Retrieved from http://www.popmednet.org/.
Que, J., Jiang, X., & Ohno-Machado, L. (2012). A collaborative framework for distributed privacy-preserving support vector machine learning. In AMIA annual symposium (pp. 1350–1359).
Chicago, IL: AMIA.
Sanil, A. P., Karr, A. F., Lin, X., & Reiter, J. P. (2004). Privacy Preserving regression modelling via
distributed computation. In Proceedings of the tenth ACM SIGKDD international conference
on knowledge discovery and data mining (pp. 677–682). New York, NY, USA: ACM.
Scardapane, S., Wang, D., & Panella, M. (2016). A decentralized training algorithm for Echo State
Networks in distributed big data applications. Neural Networks: The Official Journal of the
International Neural Network Society, 78, 65–74.
Schizas, I. D., & Aduroja, A. (2015). A distributed framework for dimensionality reduction
and denoising. IEEE Transactions on Signal Processing: A Publication of the IEEE Signal
Processing Society, 63(23), 6379–6394.
Shi, H., Jiang, C., Dai, W., Jiang, X., Tang, Y., Ohno-Machado, L., & Wang, S. (2016).
Secure multi-pArty computation grid LOgistic REgression (SMAC-GLORE). BMC Medical
Informatics and Decision Making, 16(Suppl 3), 89.



Slavkovic, A. B., & Nardi, Y. (2007). “Secure” logistic regression of horizontally and vertically partitioned distributed databases. Seventh IEEE International. Retrieved from http://
Sweeney, L., Abu, A., & Winn, J. (2013). Identifying participants in the personal genome project
by name. Available at SSRN 2257732. https://doi.org/10.2139/ssrn.2257732.
Tai, X.-C., & Tseng, P. (2002). Convergence rate analysis of an asynchronous space decomposition
method for convex minimization. Mathematics of Computation, 71(239), 1105–1135.
Tseng, P. (1991). On the rate of convergence of a partially asynchronous gradient projection
algorithm. SIAM Journal on Optimization, 1(4), 603–619.
Vaidya, J. (2008). A survey of privacy-preserving methods across vertically partitioned data. In C.
C. Aggarwal & P. S. Yu (Eds.), Privacy-preserving data mining (pp. 337–358). New York, NY: Springer.
Vaidya, J., & Clifton, C. (2003a). Privacy-preserving K-means clustering over vertically partitioned
data. In Proceedings of the ninth ACM SIGKDD international conference on knowledge
discovery and data mining (pp. 206–215). New York, NY: ACM.
Vaidya, J., & Clifton, C. (2003b). Privacy preserving naive bayes classifier for vertically partitioned
data. In Proceedings of the ninth acm SIGKDD international conference on knowledge
discovery and data mining.
Vaidya, J., Clifton, C., Kantarcioglu, M., & Patterson, A. S. (2008a). Privacy-preserving decision
trees over vertically partitioned data. ACM Transactions on Knowledge Discovery from Data,
2(3), 14:1–14:27.
Vaidya, J., Kantarcıoğlu, M., & Clifton, C. (2008b). Privacy-preserving naive bayes classification.
Journal on Very Large Data Bases, 17(4), 879–898.
Vaidya, J., Shafiq, B., Fan, W., Mehmood, D., & Lorenzi, D. (2014). A random decision tree
framework for privacy-preserving data mining. IEEE Transactions on Dependable and Secure
Computing, 11(5), 399–411.
Vaidya, J., Shafiq, B., Jiang, X., & Ohno-Machado, L. (2013). Identifying inference attacks against
healthcare data repositories. In AMIA joint summits on translational science proceedings. AMIA
summit on translational science, 2013, (pp. 262–266).
Wang, H., & Banerjee, A. (2012). Online alternating direction method. In J. Langford & J. Pineau
(Eds.), Proceedings of the 29th international conference on machine learning (ICML-12) (pp.
1119–1126). New York, NY: Omnipress.
Wang, R., Li, Y. F., Wang, X., Tang, H., & Zhou, X. (2009). Learning your identity and disease
from research papers: information leaks in genome wide association study. In Proceedings of
the 16th ACM conference on computer and communications security (pp. 534–544). New York, NY: ACM.
Wang, S., Jiang, X., Wu, Y., Cui, L., Cheng, S., & Ohno-Machado, L. (2013). EXpectation
propagation LOgistic REgRession (EXPLORER ): Distributed privacy-preserving online
model learning. Journal of Biomedical Informatics, 46(3), 1–50.
Weibull, W. (1951). A statistical distribution function of wide applicability. Journal of Applied
Mechanics, 18(3), 293–297.
Wei, E., & Ozdaglar, A. (2013). On the O(1/k) convergence of asynchronous distributed
alternating direction method of multipliers. In Global conference on signal and information
processing (GlobalSIP), 2013 IEEE (pp. 551–554). ieeexplore.ieee.org.
Wu, Y., Jiang, X., Kim, J., & Ohno-Machado, L. (2012a). Grid binary LOgistic REgression
(GLORE): Building shared models without sharing data. Journal of the American Medical
Informatics Association, 19(5), 758–764.
Wu, Y., Jiang, X., & Ohno-Machado, L. (2012b). Preserving institutional privacy in distributed
binary logistic regression. In AMIA annual symposium (pp. 1450–1458). Chicago, IL: AMIA.
Wu, Y., Jiang, X., Wang, S., Jiang, W., Li, P., & Ohno-Machado, L. (2015). Grid multi-category
response logistic models. BMC Medical Informatics and Decision Making, 15(1), 1–10.
Yao, A. C. (1982). Protocols for secure computations. In Foundations of computer science, 1982.
SFCS '82. 23rd annual symposium on (pp. 160–164). ieeexplore.ieee.org.



Yu, H., Jiang, X., & Vaidya, J. (2006a). Privacy-preserving SVM using nonlinear kernels
on horizontally partitioned data. In Proceedings of the 2006 ACM symposium on applied
computing (pp. 603–610). New York, NY: ACM.
Yu, H., Vaidya, J., & Jiang, X. (2006b). Privacy-preserving SVM classification on vertically
partitioned data. Lecture Notes in Computer Science, 3918 LNAI, 647–656.
Yunhong, H., Liang, F., & Guoping, H. (2009). Privacy-preserving SVM classification on vertically
partitioned data without secure multi-party computation. In 2009 fifth international conference
on natural computation (Vol. 1, pp. 543–546). ieeexplore.ieee.org.
Yu, S., Fung, G., Rosales, R., Krishnan, S., Rao, R. B., Dehing-Oberije, C., & Lambin, P. (2008).
Privacy-preserving cox regression for survival analysis. In Proceedings of the 14th ACM
SIGKDD international conference on Knowledge discovery and data mining (pp. 1034–1042).
New York, NY: ACM.
Zhang, R., & Kwok, J. (2014). Asynchronous distributed ADMM for consensus optimization. In
Proceedings of the 31st international conference on machine learning (pp. 1701–1709).

Chapter 4

Word Embedding for Understanding Natural
Language: A Survey
Yang Li and Tao Yang

4.1 Introduction
Natural language understanding from text data is an important field in Artificial
Intelligence. As images and acoustic waves can be mathematically modeled by
analog or digital signals, we also need a way to represent text data in order to process
it automatically. For example, the sentence "The cat sat on the mat." cannot be
processed or understood directly by the computer system. The easiest way to
represent it is through a sparse discrete vector
{(i_cat, 1), (i_mat, 1), (i_on, 1), (i_sat, 1), (i_the, 2)},
where i_w denotes the index of word w in the vocabulary. This is called one-hot
embedding. However, this simple model has several disadvantages. First, it
generates high-dimensional vectors whose length depends on the size of the
vocabulary, which is usually very large. Meanwhile, the semantic relationship
between words (e.g., "sit on") cannot be reflected by these separate counts.
Due to the subjectivity of languages, the meaning of a word (phrase) varies
across contexts. This makes automatically processing text data a more
challenging task.
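The sparse count representation above can be reproduced in a few lines (the toy vocabulary and the function name are ours):

```python
from collections import Counter

def count_vector(sentence, vocab):
    """Sparse count representation {index-in-vocab: count}, lowercasing
    and stripping the final period as in the running example."""
    tokens = sentence.lower().rstrip(".").split()
    return {vocab.index(w): c for w, c in Counter(tokens).items()}

vocab = ["cat", "mat", "on", "sat", "the"]           # toy vocabulary
vec = count_vector("The cat sat on the mat.", vocab)
# vec == {4: 2, 0: 1, 3: 1, 2: 1, 1: 1}
```

Note that the dictionary stores only the non-zero entries; the implicit dense vector has one slot per vocabulary word, which is what makes this representation impractically high-dimensional for real vocabularies.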
A goal of language modeling is to learn the joint probability that a given word
sequence occurs in texts. A larger probability value indicates that the sequence is
more commonly used. One candidate solution to involve the relationship between
words is called “n-gram” model, where n is the hyperparameter manually chosen.
The “n-gram” model takes into consideration the phrases of n consecutive words. In
the example above, “the cat”, “cat sat”, “sat on”, “on the”, “the mat” will be counted,
if we set n = 2. The disadvantage of the "n-gram" model is that it is
constrained by the parameter n. Moreover, the intrinsic difficulty of language
modeling is that a word sequence used to test the model is likely to be
different from all the sequences in the training set (Bengio et al. 2003). To
make this more concrete, suppose
Y. Li • T. Yang ()
School of Automation, NorthWestern Polytechnical University, Xi’an, Shanxi 710072, P.R. China
e-mail: liyangnpu@mail.nwpu.edu.cn; yangtao107@nwpu.edu.cn
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_4




we want to model the joint distribution of 5-word sequences over a vocabulary
of 100,000 words; the possible number of combinations is
100000^5 − 1 = 10^25 − 1, which is also the number of free parameters to learn.
This is prohibitively large for further processing. This phenomenon is called
"the curse of dimensionality". The root of this curse is the "generalization"
problem: we need an effective mechanism to extrapolate the knowledge obtained
during training to new cases. For the discrete spaces mentioned above, the
structure of generalization is not obvious, i.e., any change in the discrete
variables, or in their combinations, can have a drastic influence on the value
of the joint distribution to be estimated.
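The bigram extraction and the parameter-count arithmetic above can both be checked directly (the function name is ours):

```python
def ngrams(tokens, n=2):
    """All phrases of n consecutive words ("n-grams")."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams("the cat sat on the mat".split(), n=2)
# ['the cat', 'cat sat', 'sat on', 'on the', 'the mat']

# Curse of dimensionality: the joint distribution of 5-word sequences
# over a 100,000-word vocabulary has 100000**5 - 1 free parameters.
free_params = 100000 ** 5 - 1
```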
To solve the dilemma, we can model the variables in a continuous space, where
we can think of how training points trigger probability mass and distribute it
smoothly to the neighborhood around them. Then it goes back to the problem
of word embedding, whose concept was first introduced in Hinton (1986). Word
embedding, sometimes named as word representation, is a collective name for
a set of language models and feature selection methods. Its main goal is to
map textual words or phrases into a low-dimensional continuous space. Using
the example above, “cat” can be denoted as Œ0:1; 0:3; : : : ; 0:2M and “mat” can
be expressed as Œ0:1; 0:2; : : : ; 0:4M , where M is the hyperparameter. After that,
advanced NLP tasks can be processed implemented based on these real-valued
vectors. Word embedding encodes the semantic and syntactic information of words,
where semantic information mainly correlates with the meaning of words, while
syntactic information refers to their structural roles. Is a basic procedure in natural
language processing. From the high level, most of the models try to optimize a
loss function trying to minimize the discrepancy between prediction values and
target values. A basic assumption is that words in similar context should have
similar meaning (Harris 1954). This hypothesis emphasizes the bond of $(w, \tilde{w})$
for common word-context pairs (word $w$ and its contextual word $\tilde{w}$ usually appear
together) and weakens the correlation of rare ones.
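Distances in such an embedded space can be compared directly. The minimal sketch below uses hand-picked toy vectors (illustrative values, not from any trained model) to show how cosine similarity relates words embedded in an $M$-dimensional space:

```python
import numpy as np

# Toy M=4 dimensional embeddings (illustrative values, not from a trained model)
emb = {
    "cat": np.array([0.1, 0.3, -0.2, 0.2]),
    "mat": np.array([0.1, 0.2, -0.1, 0.4]),
    "economy": np.array([-0.5, 0.1, 0.6, -0.3]),
}

def cosine(u, v):
    """Cosine similarity: 1 means same direction, 0 orthogonal, -1 opposite."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words appearing in similar contexts should end up closer in the space
print(cosine(emb["cat"], emb["mat"]) > cosine(emb["cat"], emb["economy"]))  # True
```

Any other smooth distance would serve the same purpose; cosine similarity is simply the most common choice in the word embedding literature.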
The existing word embedding approaches are diverse. There are several ways to
group them into different categories. According to Sun et al. (2015) and Lai et al.
(2015), the models can be classified as either paradigmatic models or syntagmatic
models, based on the word distribution information. The text region where words
co-occur is the core of the syntagmatic model, but for the paradigmatic model
it is the similar context that matters. Take “The tiger is a fierce animal.” and
“The wolf is a fierce animal.” as examples, “tiger-fierce” and “wolf-fierce” are the
syntagmatic words, while “wolf-tiger” are the paradigmatic words. Shazeer et al.
(2016) divides the models into two classes, matrix factorization and sliding-window
sampling methods, according to how word embedding is generated. The former is
based on the word co-occurrence matrix, where word embedding is obtained from
matrix decomposition. For the latter one, data sampled from sliding windows is used
to predict the context word.
In this chapter, word embedding approaches are introduced according to the
method of mapping words to latent spaces. Here we mainly focus on the Neural
Network Language Model (NNLM) (Xu and Rudnicky 2000; Mikolov et al. 2013)
and the Sparse Coding Approach (SPA) (Yogatama et al. 2014a).

4 Word Embedding for Understanding Natural Language: A Survey

The Vector Space Model aims at feature expression. A word-document matrix is first constructed,
where each entry counts the occurrence frequency of a word in documents. Then
embedding vectors containing semantic information of words are obtained through
probability generation or matrix decomposition. In the Neural Network Language
Model, fed with training data, word embedding is encoded as the weights of a certain
layer in the neural network. There are several types of network architectures, such
as Restricted Boltzmann Machine, Recurrent Neural Network, Recursive Neural
Network, Convolutional Neural Network and Hierarchical Neural Network. They
are able to capture both the semantic and syntactic information. Sparse coding
model is another state-of-the-art method to get word embedding. Its goal is to
discover a set of bases that can represent the words efficiently.
The main structure of this chapter is as follows: the models mentioned above
and the evaluation methods are introduced in Sect. 4.2; the applications of word
embedding are described in Sect. 4.3; conclusion and future work are in Sect. 4.4.

4.2 Word Embedding Approaches and Evaluations
Generally speaking, the goal of word embedding is to map the words in unlabeled
text data to a continuous, low-dimensional space, in order to capture the
internal semantic and syntactic information. In this section, we first introduce the
background of text representation. Then, according to the specific methods for
generating the mapping, we mainly focus on the Neural Network Language Model
and Sparse Coding Approach. The former is further introduced in three parts,
depending on the network structures applied in the model. In the end, we provide
evaluation approaches for measuring the performance of word embedding models.

4.2.1 Background
One of the widely used approaches for expressing text documents is the Vector
Space Model (VSM), where documents are represented as vectors. VSM was
originally developed for the SMART information retrieval system (Salton et al.
1997). Some classical VSMs can be found in Deerwester et al. (1990) and Hofmann.
There are various ways of building VSMs. In scenarios of information retrieval
where people care more about the textual features that facilitate text categorization,
various feature selection methods such as document frequency (DF), information
gain (IG), mutual information (MI), the $\chi^2$-test (CHI-test) and term strength (TS)
have different effects on text classification (Yang and Pedersen 1997). These
approaches help reduce the dimension of the text data, which could be helpful
for the subsequent processing. DF refers to the number of documents in which a word
appears. IG measures the information gained from the presence or absence of a term in



the document. MI is the ratio between the term-document joint probability and the
product of their marginal probabilities. The CHI-test applies the sums of squared errors
and tries to find the significant difference between observed word frequency and
expected word frequency. TS estimates how likely a term will appear in “closely-related” documents. In information retrieval systems, these methods are useful for
transforming the raw text to the vector space, and tremendous improvement has
been achieved for information classification. However, they do not work well alone
in applications that require both semantic and syntactic information of words, since
part of the original text information is already lost in the feature selection procedures.
Nevertheless, various word embedding models, such as sparse coding, are built based
on the VSM.
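As a minimal illustration of one of these statistics, the sketch below computes document frequency (DF) over a toy corpus and applies a crude DF threshold; the documents and threshold are invented for the example:

```python
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "stock markets fell sharply",
]

# Document frequency: the number of documents in which a word appears
df = Counter()
for doc in docs:
    for word in set(doc.split()):   # set(): count each word once per document
        df[word] += 1

# Keep only words that occur in more than one document (a crude DF threshold)
vocab = sorted(w for w, c in df.items() if c > 1)
print(vocab)  # ['on', 'sat', 'the']
```

The other statistics (IG, MI, CHI, TS) are built from the same kind of counts, combined with per-category document counts.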
In Deerweste et al. (1990), Latent Semantic Analysis (LSA) is proposed to index
and retrieve the information from text data automatically. It applies SVD on the
term-document association data to obtain the embedding output. Work in Landauer
et al. (1998) and Landauer and Dumais (1997) explains the word
similarity results derived from the Test of English as a Foreign Language (TOEFL).
After that, Landauer (2002) tries to detect synonyms through LSA. Furthermore,
Yih et al. (2012) uses LSA to distinguish between synonyms and antonyms in
documents, and its extension Multiview LSA (MVLSA) (Rastogi et al. 2015)
supports the fusion of arbitrary views of data. However, all of them are limited by
their dependence on a single matrix of term-document co-occurrences.
Usually the word embedding from those models is not flexible enough, because of
the strong reliance on the observed matrix.
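The core of LSA can be sketched in a few lines: apply a truncated SVD to a term-document count matrix and read off low-dimensional term vectors. The tiny matrix below is invented for illustration:

```python
import numpy as np

# Tiny term-document count matrix X (rows: terms, columns: documents)
terms = ["cat", "dog", "pet", "stock", "market"]
X = np.array([
    [2, 1, 0, 0],   # cat
    [1, 2, 0, 0],   # dog
    [1, 1, 0, 0],   # pet
    [0, 0, 2, 1],   # stock
    [0, 0, 1, 2],   # market
], dtype=float)

# LSA: truncated SVD of the term-document matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_emb = U[:, :k] * s[:k]     # k-dimensional term embeddings

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "cat" and "dog" share documents, so their LSA vectors are close;
# "cat" and "stock" never co-occur, so theirs are nearly orthogonal
print(cos(term_emb[0], term_emb[1]) > cos(term_emb[0], term_emb[3]))  # True
```

In practice the raw counts are usually reweighted (e.g. by tf-idf) before the decomposition; that step is omitted here for brevity.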
Latent Dirichlet Allocation (LDA) (Blei et al. 2003) is another useful
model for feature expression. It assumes that some latent topics are contained in
documents. The latent topic $T$ is characterized by the conditional distribution $p(w \mid T)$,
i.e. the probability that word $w$ appears in $T$. Word embedding from traditional LDA
models can only capture the topic information but not the syntactic one, which does
not fully achieve its goal. Moreover, the traditional LDA-based model is only used for
topic discovery, not word representation. However, we can get a $k$-dimensional word
embedding by training a $k$-topic model: the word embedding comes from the rows of
the word-topic matrix, whose entries are the $p(w \mid T)$ values.
Although both LSA and LDA directly utilize statistical information for
word embedding generation, there are some differences between them. Specifically,
LSA is based on matrix factorization and is subject to the non-negativity constraint.
LDA relies on the word distribution, which is expressed by the Dirichlet prior
distribution, the conjugate prior of the multinomial distribution.
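Assuming a fitted $k$-topic model is already available, extracting such embeddings is mechanical. The sketch below uses a hypothetical topic-word count matrix and a smoothing value chosen purely for illustration:

```python
import numpy as np

# Hypothetical topic-word count matrix from a fitted k=2 topic LDA model:
# rows are topics, columns are words in the vocabulary (invented values).
vocab = ["cat", "dog", "stock", "market"]
topic_word_counts = np.array([
    [40, 35, 1, 2],    # an "animals" topic
    [2, 1, 50, 45],    # a "finance" topic
], dtype=float)

beta = 0.01  # Dirichlet smoothing hyperparameter (illustrative)
# p(w | T): normalize each topic's (smoothed) word counts into a distribution
smoothed = topic_word_counts + beta
p_w_given_t = smoothed / smoothed.sum(axis=1, keepdims=True)

# k-dimensional word embedding: the row of the word-topic matrix,
# filled with the p(w|T) values as described above
word_emb = p_w_given_t.T
print(word_emb[vocab.index("cat")])  # 2-dimensional vector for "cat"
```

As the text notes, such vectors carry topical but not syntactic information, which limits their use as general-purpose word embeddings.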

4.2.2 Neural Network Language Model
The concept of word embedding was first introduced with the Neural Network
Language Model (NNLM) in Xu and Rudnicky (2000) and Bengio et al. (2003).
The motivation behind it is that words are more likely to share similar meaning if they

4 Word Embedding for Understanding Natural Language: A Survey


are in the same context (Harris 1954). The probability that the word sequence $W$
occurs can be formulated according to the Bayes rule:

$$P(W) = \prod_{t} P(w_t \mid w_1, \ldots, w_{t-1}) = \prod_{t} P(w_t \mid h_t) \qquad (4.1)$$
where P(W) is the joint distribution of sequence W, and ht denotes the context
words around word wt . The goal is to evaluate the probability that the word wt
appears given its context information. Because there are a huge number of possible
combinations of a word and its context, it is impractical to specify all $P(w_t \mid h_t)$. In
this case, we can use a function $\Phi$ to map the contexts into equivalence classes,
so that we have

$$P(w_t \mid h_t) = P(w_t \mid \Phi(h_t)) \qquad (4.2)$$

where histories mapped to the same equivalence class are treated as statistically
equivalent.
If we use the n-gram model to construct a table of conditional probabilities for each
word, then the last $n-1$ words are combined together as the context information:

$$P(w_t \mid h_t) \approx P(w_t \mid w_{t-n+1}^{t-1}) \qquad (4.3)$$

where $n$ is the number of words considered, while 1 and 2 are the most
commonly used values. Only the successive words occurring before the current word
$w_t$ are taken into account.
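For $n = 2$, the conditional probability in Eq. (4.3) can be estimated by simple maximum-likelihood counting; the toy corpus below is invented for illustration:

```python
from collections import Counter

corpus = "the cat sat on the mat . the cat ran .".split()

# Maximum-likelihood bigram model: P(w_t | w_{t-1}) = count(w_{t-1}, w_t) / count(w_{t-1})
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_bigram(prev, word):
    return bigrams[(prev, word)] / unigrams[prev]

print(p_bigram("the", "cat"))  # 2 of the 3 occurrences of "the" precede "cat"
```

This table-based estimate is exactly what suffers from the curse of dimensionality for larger $n$, which motivates the neural models below.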
Most NNLM-based approaches belong to the class of unsupervised models.
Networks with various architectures such as the Restricted Boltzmann Machine (RBM),
Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and
Long Short-Term Memory (LSTM) can be used to build word embedding. The
Log-BiLinear (LBL) model, SENNA, and Word2Vector are some of the most
representative examples of NNLM.
Usually the goal of the NNLM is to maximize or minimize a log-likelihood
function, sometimes with additional constraints. Suppose we have a sequence
$w_1, w_2, \ldots, w_n$ in the corpus, and we want to maximize the log-likelihood of
$P(w_t \mid w_{t-n+1}^{t-1})$ in the Feed Forward Neural Network (Bengio et al. 2003). Let $x$ be
the embedding vector of the preceding words, so that

$$x = [e(w_{t-n+1}), \ldots, e(w_{t-2}), e(w_{t-1})] \qquad (4.4)$$


where $e(w)$ represents the embedding of word $w$. In a Feed Forward Neural Network
(without direct input-to-output connections) with one hidden layer, the layer
function can be expressed as:

$$y = b + U \tanh(d + Wx) \qquad (4.5)$$




Fig. 4.1 The basic structure of the Feed Forward Neural Network

where $U$ is the transformation matrix, $W$ is the weight matrix of the hidden layer, and $b$ and
$d$ are bias vectors. Finally, $y$ is fed into a softmax layer to obtain the probability
of the target word; the basic structure of this model is shown in Fig. 4.1. The
parameters in this model are $(b, U, d, W)$. However, since we need to explicitly
normalize (e.g. use softmax) over all of the values in the vocabulary when computing
the probability for the next word, training and testing are very costly
(Morin and Bengio 2005). After estimating the conditional probability in Eq. (4.3),
the total probability $P(w_{t-n+1}, \ldots, w_{t-1}, w_t)$ can be obtained directly by locating the
mark of the phrase constructed by the words appearing in the same window
(Collobert et al. 2011). The resultant probability value quantifies how natural
such a sentence is in the language.
Instead of using matrix multiplication on the final layer as in Bengio et al. (2003),
Mnih and Hinton (2007) applies a hierarchical structure (see the section on the
Hierarchical Neural Language Model below) to reduce the computational complexity.
It constructs the Log-BiLinear (LBL) model by adding bilinear interaction between
word vectors and hidden variables on the basis of an energy function (see the
section on the Restricted Boltzmann Machine below).
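The forward pass of Eqs. (4.4)-(4.5) followed by a softmax can be sketched with plain numpy; all sizes and the random initialization below are arbitrary choices for illustration, not values from Bengio et al. (2003):

```python
import numpy as np

rng = np.random.default_rng(0)
V, M, H, n = 10, 4, 8, 3                  # vocab size, embedding dim, hidden size, n-gram order
E = rng.normal(0, 0.1, (V, M))            # word embedding table e(w)
W = rng.normal(0, 0.1, (H, (n - 1) * M))  # hidden-layer weights
d = np.zeros(H)
U = rng.normal(0, 0.1, (V, H))            # output transformation
b = np.zeros(V)

def next_word_distribution(context_ids):
    """y = b + U tanh(d + W x), followed by a softmax over the vocabulary."""
    x = np.concatenate([E[i] for i in context_ids])  # Eq. (4.4): concatenated embeddings
    y = b + U @ np.tanh(d + W @ x)                   # Eq. (4.5): hidden layer + output
    p = np.exp(y - y.max())
    return p / p.sum()                               # softmax: P(w_t | context)

p = next_word_distribution([3, 7])  # n-1 = 2 context word ids
print(p.shape, round(float(p.sum()), 6))
```

The softmax denominator runs over the whole vocabulary, which is exactly the normalization cost the hierarchical models discussed below try to avoid.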
Recently, Word2Vector, proposed by Mikolov et al. (2013), which contains
the two frameworks Skip-Gram and Continuous Bag-of-Words (CBOW), has become
very popular in NLP tasks. It can be seen as a two-layer Neural Network Language
Model. Furthermore, applying Noise Contrastive Estimation (NCE) increases its
efficiency. We can also add prior information into this model: Liu
et al. (2015) extends Skip-Gram by treating topical information as important prior



knowledge for training word embedding and proposes the topical word embedding
(TWE) model, where text words and their affiliated topic (context) derived from
LDA are combined to obtain the embedding.
In the subsections below, we will introduce some basic architectures of NNLM
for word embedding.

Restricted Boltzmann Machine (RBM)

The ability to capture the latent features among words usually depends on the
model structure. The Boltzmann Machine (BM) originates from the log-linear Markov
Random Field (MRF) and its energy function is linearly related to its free parameters.
According to statistical dynamics, the energy function is useful in word embedding
generation (Mnih and Hinton 2007). The Restricted Boltzmann Machine (RBM), which
prunes the visible-visible and hidden-hidden connections, is a simplified version of
BM. A diagram of RBM is given in Fig. 4.2.
The main components of the energy function in RBM include binary variables
(hidden units $h$ and visible units $v$), weights $W = (w_{i,j})$ that establish connections
between $h_j$ and $v_i$, and biases $a_i$ for the visible units and $b_j$ for the hidden ones.
Specifically, the energy function is formulated as follows:

$$E(v, h) = -v^T W h - a^T v - b^T h \qquad (4.6)$$

During the generation of word embedding, the energy function is used to define the
probability distribution, as shown in Eq. (4.7):

$$P(v, h) = e^{-E(v,h)} / Z \qquad (4.7)$$

where $Z$ is the partition function defined as the sum of $e^{-E(v,h)}$ over all configurations.

Fig. 4.2 The basic structure of the RBM
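Eqs. (4.6) and (4.7) can be sketched directly; the sizes and random weights below are illustrative, and the partition function $Z$ is only described in a comment, since summing over all configurations is exponential in the number of units:

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # visible-hidden connection weights
a = np.zeros(n_visible)                        # visible biases
b = np.zeros(n_hidden)                         # hidden biases

def energy(v, h):
    """E(v, h) = -v^T W h - a^T v - b^T h, as in Eq. (4.6)."""
    return float(-v @ W @ h - a @ v - b @ h)

# Unnormalized probability e^{-E(v,h)}; the partition function Z would sum
# this quantity over all 2^(6+4) binary configurations of (v, h).
v = rng.integers(0, 2, n_visible)
h = rng.integers(0, 2, n_hidden)
unnorm = np.exp(-energy(v, h))
print(energy(v, h), unnorm > 0)
```

The intractability of $Z$ is why RBM training relies on approximations such as contrastive divergence rather than the exact likelihood.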




Mnih and Hinton (2007) proposes three models on the basis of RBM by adding
connections to the previous status. Starting from the undirected graphical model,
they try to estimate the latent conditional distribution of words. To compute the
probability of the next word in a sequence, the energy function is defined as follows:

$$E(w_t, h; w_{t-n+1:t-1}) = -\Big(\sum_{i=t-n+1}^{t-1} v_i^T R W_i\Big) h - b_h^T h - b_r^T R^T v_n - b_v^T v_n \qquad (4.8)$$

where $v_i^T R$ denotes the vector that represents word $w_i$ from the word dictionary $R$,
$W_i$ in the hidden units denotes the similarity between two words, and $b_h, b_v, b_r$ are the
biases for the hidden units, words and word features respectively.
The disadvantage of this class of models is that it is costly to estimate massive
parameters during the training process. To reduce the number of parameters,
Mnih and Hinton (2007) extends the factored RBM language model by adding
connections C among words. It also removes the hidden variables and directly
applies the stochastic binary variables. The modified energy function is as below:

$$E(w_t; w_{t-n+1:t-1}) = -\Big(\sum_{i=t-n+1}^{t-1} v_i^T R C_i\Big) R^T v_n - b_r^T R^T v_n - b_v^T v_n \qquad (4.9)$$



Here $C_i$ denotes the correlation between word vectors $w_i$ and $w_t$, and $b_r$ and $b_v$ denote the
word biases. This function quantifies the bilinear interaction between words, which
is also the reason why models of this kind are called Log-BiLinear (LBL)
models. In Eq. (4.9), the entries of the vector $h = \sum_{i=t-n+1}^{t-1} v_i^T R C_i$ correspond to the nodes
in the hidden layer, and $y = h R^T v_n$ represents the set of nodes in the output layer.
We can see that the syntax information is contained in the hidden layer, for $C_i$ here
can be seen as the contribution of word $i$ to the current word $n$, which is like the cosine
similarity between two words.

Recurrent and Recursive Neural Network

To reduce the number of parameters, we could unify the layer functions with some
repetitive parameters. By treating the text as sequential data, Mikolov et al. (2010)
proposes a language model based on the Recurrent Neural Network, the structure of
which is shown in Fig. 4.3. The model has a circular architecture. At time $t$, the word
embedding $w(t)$ is generated from the first layer. Then it is transferred, together
with the output from the context layer at time $t-1$, as the new input $x(t)$ to the
context layer, which is formulated as follows:

$$x(t) = w(t) + s(t-1) \qquad (4.10)$$



Fig. 4.3 The structure of the recurrent neural network

Fig. 4.4 The structure of the Recursive Neural Network

In each cycle, the outcomes of the context layer and output layer are denoted as $s(t)$ and
$y(t)$ respectively. Inspired by Bengio et al. (2003), the output layer also applies the
softmax function.
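A single recurrent step in the spirit of this model can be sketched as follows; the one-hot input, sigmoid context layer and all dimensions are simplifying assumptions for illustration, not Mikolov et al.'s exact configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
V, H = 8, 5                       # vocabulary size, context-layer size
Wx = rng.normal(0, 0.1, (H, V))   # input w(t) -> context weights
Ws = rng.normal(0, 0.1, (H, H))   # context s(t-1) -> context s(t) weights
Wy = rng.normal(0, 0.1, (V, H))   # context -> output weights

def step(word_id, s_prev):
    """One recurrent step: combine current word w(t) with previous context s(t-1)."""
    w = np.zeros(V); w[word_id] = 1.0                 # one-hot w(t)
    s = 1 / (1 + np.exp(-(Wx @ w + Ws @ s_prev)))     # context layer s(t)
    y = Wy @ s
    p = np.exp(y - y.max()); p /= p.sum()             # softmax output layer
    return s, p

s = np.zeros(H)
for wid in [1, 4, 2]:             # feed a short word-id sequence
    s, p = step(wid, s)
print(p.shape)
```

Because $s(t)$ is fed back at every step, the context layer can in principle summarize an unbounded history, unlike the fixed $n-1$ word window of the feed-forward model.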
Unlike the Recurrent Neural Network, the Recursive Neural Network (Goller and
Kuchler 1996) unfolds as a hierarchical structure in space, as shown in Fig. 4.4. To fully analyze
the sentence structure, the process starts from two word vectors in leaf nodes
$b, c$, which merge into a hidden output $h_1$; this output is then concatenated with another
input word vector $a$ as the next input. The computing order is determined either by
the sentence structure or by the graph structure. Not only can the hidden layers capture
the context information, they can also involve the syntax information. Therefore,



Fig. 4.5 (A) The structure of the Recursive Neural Network model where each node represents
a vector and all the word vectors are in the leaf nodes. (B) The structure of the MV-RNN model
where each node consists of a vector and a matrix

Recursive Neural Network can capture more information than the context-based
model, which is desirable for NLP tasks.
For the sequential text data in the Recursive Neural Network, Socher et al. (2011,
2012, 2013) parses the sentences into tree structures through tree bank models
(Klein and Manning 2003; Antony et al. 2010). The model proposed by Socher
et al. (2011) employs the Recursive Neural Network, and its structure is shown
in Fig. 4.5A:

$$p = f\Big(W \begin{bmatrix} a \\ b \end{bmatrix}\Big) \qquad (4.11)$$

where $a, b \in \mathbb{R}^{d \times 1}$ are the word vectors, the parent node $p \in \mathbb{R}^{d \times 1}$ is the hidden output,
and $f = \tanh$ adds element-wise nonlinearity to the model. $W \in \mathbb{R}^{d \times 2d}$ is the
parameter to learn.
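The composition $p = f(W[a; b])$ can be sketched in a few lines; the dimension and the random, shared matrix $W$ below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
W = rng.normal(0, 0.1, (d, 2 * d))   # W in R^{d x 2d}, shared at every merge

def compose(left, right):
    """p = f(W [left; right]) with f = tanh, as in Eq. (4.11)."""
    return np.tanh(W @ np.concatenate([left, right]))

# Leaves b, c merge first, then the result merges with a (following the parse order)
a, b, c = (rng.normal(0, 1, d) for _ in range(3))
h1 = compose(b, c)
root = compose(a, h1)
print(root.shape)   # the root vector has the same dimension as a word vector
```

Because every merge outputs a $d$-dimensional vector, the same $W$ can be applied bottom-up over an arbitrary parse tree, which is what lets the model represent phrases and whole sentences in the word vector space.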
To incorporate more information in each node, the work in Socher et al. (2012)
proposes the Matrix-Vector Recursive Neural Network (MV-RNN), for
which the parsing-tree structure is shown in Fig. 4.5B. Different from that in
Fig. 4.5A, each node in this parsing tree includes a vector and a matrix. The vector
represents the word vector in leaf nodes and the phrase in non-leaf ones. The matrix
is applied to neighboring vectors, and it is initialized by the identity matrix $I$ plus
Gaussian noise. The vectors $a, b, p$ capture the inherent meaning transferred in
the parsing tree, and the matrices $A, B, P$ capture the changes of meaning
for the neighboring words or phrases. Apparently, the drawback of this model is
the large number of parameters to be estimated. So in the work of Socher et al.
(2013), the Recursive Neural Tensor Network (RNTN) uses a tensor-based
composition function to avoid this problem.
In comparison, the main differences between recurrent and recursive models are
the computing order and the network structure. The structure of the former has
a closed loop dealing with the time sequence, while the latter has an open loop
tackling the spatial sequence.



Convolutional Neural Network (CNN)

The Convolutional Neural Network (CNN) consists of several feature extraction layers
and is inspired by biological processes (Matsugu et al. 2003). It has already been
successfully applied to many image recognition problems. The main advantage of
CNN is that it does not depend on prior knowledge or human effort for
feature selection.
CNN can also be applied to extract latent features from text data (Collobert
and Weston 2008; Collobert et al. 2011). As shown in Fig. 4.6, three different
convolutional kernels, in three different colors, select different features from word
embedding separately. The feature vector is the combination of the max values from
the rows of the selected features.
In the models above, supervised learning is applied to train the whole neural
network, and the convolutional layer models the latent features of the
initialized word embedding. Since a supervised training method is applied here, it
is able to generate word embedding suitable for the specific task. Though the word
embedding is usually a by-product in this case, it still has a sound performance in
capturing semantic and syntactic information.
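This max-over-time feature extraction can be sketched with plain numpy; the kernel widths, sentence length and random values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
M, T = 6, 9                       # embedding dim, sentence length
X = rng.normal(0, 1, (M, T))      # word embeddings of a sentence, one column per word

def conv_max(X, kernel):
    """Slide a kernel of width k over the word positions, then max-pool over time."""
    M, T = X.shape
    k = kernel.shape[1]
    responses = [np.sum(kernel * X[:, t:t + k]) for t in range(T - k + 1)]
    return max(responses)         # one feature per kernel: the max response

kernels = [rng.normal(0, 1, (M, w)) for w in (2, 3, 4)]   # three different kernels
feature_vector = np.array([conv_max(X, K) for K in kernels])
print(feature_vector.shape)       # (3,): combination of max values, one per kernel
```

The max-pooling step makes the feature vector's length independent of the sentence length, so sentences of any size map to a fixed-size input for the downstream classifier.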

Fig. 4.6 The structure of CNN for feature extraction by using convolutional kernels



Hierarchical Neural Language Model (HNLM)

There are many tricks to speed up the training of NNLM, such as a short list,
a hash table for the words, and stochastic gradient descent (Bengio et al. 2003). But
when facing large-scale datasets, further improvement in model efficiency is
still needed.
Transferring the knowledge from a small portion of observed data examples to
other cases can save a lot of computational efforts, thus speeding up the learning
process. However, this is a difficult task for neural language models, which involve
computing the joint probability of certain word combinations. This is because the
number of possible combinations of words from a vocabulary is immensely larger
than the number observed in all potential texts. So it is necessary to look for some prior information before
searching all possible combinations. Based on human knowledge or statistical
information, we can cluster all the words into several classes where some similar
statistical properties are shared by the words in each class. Some hierarchical models
are constructed based on this idea (Mnih and Hinton 2008; Morin and Bengio 2005).
Hierarchical structures have a big influence on the algorithm complexity. For
example, Mnih and Hinton (2008) and Morin and Bengio (2005) reduce the
complexity of language models by clustering words into a binary tree. Specifically,
given a dictionary $V$ containing $|V|$ words, the speed-up will be $O(|V| / \log |V|)$
(Morin and Bengio 2005) if the tree is binary.
To determine the hierarchical structure, hierarchical word classes (lexical word
categories in a narrow sense) are needed in most cases (Rijkhoff and Jan 2007). The
way of constructing word classes is sensitive to the word distribution. Word classes
can be built based on prior (expert) knowledge (Fellbaum 1998) or through data-driven
methods (Sun et al. 2012), but they can also be automatically generated in
accordance with usage statistics (Mnih and Hinton 2008; McMahon and Smith
1996). The prior knowledge is accurate but limited in application range, since
there are many problems for which expert categories are unavailable. The data-driven
methods can achieve a high level of accuracy but still need a seed corpus that is
manually tagged. Universality is the major advantage of the automatic generation
method. It can be applied to any natural language scenario without expert
knowledge, though it is more complex to construct.
By utilizing the hierarchical structure, lots of language models have been
proposed (Morin and Bengio 2005; Mnih and Hinton 2008; Yogatama et al.
2014a; Djuric et al. 2015). Morin and Bengio (2005) introduces a hierarchical
decomposition based on the word classes extracted from WordNet. Similar
to Bengio et al. (2003), it uses the layer function to extract the latent features; the
process is shown in Eq. (4.12):

$$P(d_i = 1 \mid q_i, w_{t-n+1:t-1}) = \operatorname{sigmoid}\big(U \tanh(d + Wx)\, q_i + b_i\big) \qquad (4.12)$$

Here $x$ is the concatenation of the context words [same as Eq. (4.4)], $b_i$ is the bias
vector, and $U, W, d, x$ play the same roles as in Eq. (4.5). The disadvantage is the
procedure of tree construction, which has to combine manual and data-driven



processing (Morin and Bengio 2005) together. The Hierarchical Log-BiLinear (HLBL)
model proposed by Mnih and Hinton (2008) overcomes this disadvantage
by using a boosting method to generate the tree automatically. The binary tree with
words as leaves consists of two components: the words in the leaves, each of which is
represented uniquely by a sequential binary code from top to bottom, and the
probability for decision making at each node. Each non-leaf node in the tree also
has an associated vector $q$ which is used for discrimination. The probability of the
predicted word $w$ in HLBL is formulated as follows:

$$P(w_t = w \mid w_{t-n+1:t-1}) = \prod_i P(d_i \mid q_i, w_{t-n+1:t-1}) \qquad (4.13)$$

where $d_i$ is the $i$th digit in the binary code sequence for word $w$, and $q_i$ is the feature
vector for the $i$th node in the path to word $w$ from the root. In each non-leaf node,
the probability of the decision is given by

$$P(d_i = 1 \mid q_i, w_{t-n+1:t-1}) = \sigma\Big(\big(\sum_j C_j w_j\big)^T q_i + b_i\Big) \qquad (4.14)$$

where $\sigma(x)$ is the logistic function for decision making, $C_j$ is the weight matrix
corresponding to the context word $w_j$, and $b_i$ is the bias that captures the
context-independent tendency to go to one of the children when leaving this node.
There are both advantages and disadvantages to the hierarchical structure. Many
models benefit from the reduction of computational complexity, while the time
consumed on structure building is the main drawback.

4.2.3 Sparse Coding Approach
Sparse Coding is an unsupervised model that learns a set of over-complete bases to
represent data efficiently. It generates basis vectors $\{\phi_i\}$ such that the input vector
$x \in \mathbb{R}^n$ can be represented by a linear combination of them, $x = \sum_{i=1}^{k} \alpha_i \phi_i$, where $k$
is the number of basis vectors and $k > n$. Although there are many techniques, such as
Principal Component Analysis (PCA), that help learn a complete set of basis vectors
efficiently, Sparse Coding aims to use the fewest bases to represent the input $x$. The
over-complete basis vectors are able to capture the structures and patterns inherent
in the data, so the natural characteristics of the input can be seized through Sparse
Coding.
To be specific, Sparse Coding constructs an over-complete dictionary $D \in \mathbb{R}^{L \times K}$
of basis vectors, together with a code matrix $A \in \mathbb{R}^{K \times V}$, to represent $V$ words in
contexts $X \in \mathbb{R}^{L \times V}$ by minimizing the following function:

$$\arg\min_{D,A} \|X - DA\|_2^2 + \lambda \Omega(A) \qquad (4.15)$$

where $\lambda$ is a regularization hyperparameter and $\Omega$ is the regularizer. It applies
the squared loss for the reconstruction error, but loss functions like the $L_1$-regularized
quadratic function can also be an efficient alternative for this problem (Schölkopf
et al. 2007).
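For a fixed dictionary $D$, the per-word subproblem $\min_a \|x - Da\|_2^2 + \lambda \|a\|_1$ can be solved by ISTA, a standard proximal-gradient solver chosen here for illustration (not necessarily the algorithm used in the works above); all sizes and values below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
L, K = 8, 20                       # input dim, dictionary size (over-complete: K > L)
D = rng.normal(0, 1, (L, K))
D /= np.linalg.norm(D, axis=0)     # unit-norm dictionary atoms
x = rng.normal(0, 1, L)            # one word vector to encode

lam = 0.3                          # l1 regularization strength
step = 1.0 / np.linalg.norm(D, 2) ** 2   # step size from the Lipschitz constant

a = np.zeros(K)
for _ in range(300):               # ISTA: gradient step + soft-thresholding
    grad = D.T @ (D @ a - x)
    z = a - step * grad
    a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(int(np.count_nonzero(a)), K)   # the soft-thresholding zeroes out most entries
```

The soft-thresholding step is exactly the proximal operator of the $\ell_1$ term, which is what produces codes with few nonzero entries.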
Plenty of algorithms have been proposed to extend the sparse coding framework
starting from the general loss function above. If we apply a non-negativity constraint
on $A$ (i.e., $A_{i,j} \ge 0$), the problem turns into Non-Negative Sparse Embedding
(NNSE). Besides adding constraints on $A$, if we also require that all entries in $D$ be
non-negative, then the problem can be transformed into Non-Negative Sparse
Coding (NNSC). NNSC is a matrix factorization technique previously studied in
the machine learning community (Hoyer 2002).
For $\Omega(A)$, Yogatama et al. (2014b) designs a forest-structured regularizer that
enables mixed usage of the dimensions. The structure of this model is shown in Fig. 4.7:
there are seven variables in the tree, which describes the order in which variables ‘enter
the model’. In this work, the rule is that a node may take a nonzero value only
if its ancestors do. For example, nodes 3 and 4 may only be nonzero if nodes 1 and
2 are also nonzero. The regularizer for $A$ is shown in Eq. (4.16):

$$\Omega(a_v) = \sum_{i} \big\| \langle a_{v,i}, a_{v,\mathrm{Descendants}(i)} \rangle \big\|_2 \qquad (4.16)$$

where $a_v$ is the $v$th column of $A$.
Faruqui et al. (2015) proposes two methods by adding restrictions on the
dictionary $D$:

$$\arg\min_{D,A} \|X - DA\|_2^2 + \lambda \Omega(A) + \tau \|D\|_2^2 \qquad (4.17)$$

Fig. 4.7 An example of a regularization forest that governs the order in which variables enter the model


where $\tau$ is the regularization hyperparameter. The first method applies an $\ell_1$ penalty on
$A$, so that Function (4.17) can be broken into a loss for each word vector, which
makes parallel optimization possible:
$$\arg\min_{D, a_i} \sum_{i=1}^{V} \|x_i - D a_i\|_2^2 + \lambda \|a_i\|_1 + \tau \|D\|_2^2 \qquad (4.18)$$



Here $x_i$ and $a_i$ are the $i$th column vectors of matrices $X$ and $A$ respectively. The second
method adds non-negativity constraints on the variables, so that:

$$\arg\min_{D \in \mathbb{R}_{\ge 0}^{L \times K},\, A \in \mathbb{R}_{\ge 0}^{K \times V}} \sum_{i=1}^{V} \|x_i - D a_i\|_2^2 + \lambda \|a_i\|_1 + \tau \|D\|_2^2 \qquad (4.19)$$


Apart from the work based on Sparse Coding, Sun et al. (2016) adds the $\ell_1$ regularizer
to Word2Vector. The challenge in this work lies in the optimization method,
for stochastic gradient descent (SGD) cannot produce sparse solutions directly with
the $\ell_1$ regularizer in online training. The method of Regularized Dual Averaging (RDA)
proposed by Lin (2009) keeps track of the online average subgradients at each update,
and optimizes the $\ell_1$-regularized loss function based on the Continuous Bag-of-Words
(CBOW) model (Mikolov et al. 2013) in online learning.

4.2.4 Evaluations of Word Embedding
Measurements for word embedding usually depend on the specific applications.
Some representative examples include perplexity, analogy precision and sentiment
classification precision.
Perplexity is an evaluation method that originates from information theory.
It measures how well a distribution or a model can predict a sample. Bengio
et al. (2003) uses the perplexity of the corpus as the target: the lower
the perplexity value, the higher the quality of the word embedding, since the
information is more specific. Word or phrase analogy analysis is another important
assessment method. Work in Mikolov et al. (2013) designs the precision of semantic
and syntactic prediction as the standard measurement to evaluate the quality of word
embedding. It is applied at the phrase and word levels independently. All the assessment
methods mentioned above build on the linguistic phenomenon that the
distance between two words in the vector space reflects the correlation between them.
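Perplexity itself is easy to compute once the model's per-word probabilities $P(w_t \mid h_t)$ on held-out text are known; the probability lists below are invented for illustration:

```python
import math

def perplexity(probs):
    """PP = exp(-(1/N) * sum_t log P(w_t | h_t)) over a held-out word sequence."""
    n = len(probs)
    return math.exp(-sum(math.log(p) for p in probs) / n)

# A model that is more certain about the test words gets lower perplexity
confident = [0.5, 0.4, 0.6, 0.5]
uniform_over_10 = [0.1, 0.1, 0.1, 0.1]
print(perplexity(confident) < perplexity(uniform_over_10))  # True
print(round(perplexity(uniform_over_10), 6))  # 10.0, as for a uniform 10-word vocabulary
```

Intuitively, perplexity is the size of a uniform vocabulary that would be equally hard to predict, which is why lower values indicate a better model.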
Besides measurements of linguistic phenomena, a lot of work evaluates
word embedding through the task where it is applied. For example, if the word
embedding is used in sentiment classification, then the precision of the classification
can be used for evaluation. If it is applied in machine translation, then the precision



of the translation is the one that matters. This rule also holds for other tasks like
POS tagging, named entity recognition, textual entailment, and so on.
There is no best measurement method for all scenarios. The most useful way is to
combine the measurement with the target of the task to achieve a more reasonable evaluation result.

4.3 Word Embedding Applications
There are many applications for word embedding, especially in NLP tasks where
word embedding is fed as input data or the features of the text data directly. In this
section, we will discuss how word embedding can be applied in scenarios such as
semantic analysis, syntax analysis, idiomaticity analysis, Part-Of-Speech (POS)
tagging, sentiment analysis, named entity recognition, textual entailment, as well as
machine translation.
The goal of syntax analysis is to extract the syntactic structure from sentences.
Recent works (Socher et al. 2011; Huang et al. 2012; Collobert and Weston 2008;
Mikolov et al. 2013) have taken the syntax information into consideration to obtain
word embedding, and the result in Andreas and Dan (2014) shows that word
embedding can entail the syntactic information directly. A fundamental problem
to syntax analysis is part of the speech tagging, which is about labeling the words
to different categories (e.g., plural, noun, adverb). It requires the knowledge of
definitions and contextual information. Part of the speech tagging is a word-level
NLP task. Collobert and Weston (2008) and Collobert et al. (2011) apply word
embedding for POS tagging and achieve state-of-the-art results. In some scenarios,
in order to figure out the syntax of text, we need to first recognize the named
entities in it. Named entity recognition aims to find out the names of persons,
organizations, time expressions, monetary values, locations and so on. Works in
Collobert and Weston (2008), Collobert et al. (2011), Zou et al. (2013), Luo et al.
(2014), Pennington et al. (2014) show the expressiveness of word embedding in
these applications.
As a subsequent task, semantic analysis (Goddard 2011) relates the syntactic
structures of words to their language-independent meanings. In other words, it
reveals words that are correlated with each other. For example, the vector (“Madrid”
$-$ “Spain” $+$ “France”) is close to (“Paris”). Previous approaches (Scott et al.
1999; Yih et al. 2012) mainly use the statistical information of the text. The works
in Mikolov et al. (2013), Socher et al. (2011), Huang et al. (2012), Collobert
and Weston (2008) apply the analogy precision to measure the quality of word
embedding (see in Sect. 4.2.4). It is shown that word embedding can manifest the
semantic information of words in the text. The semantics of each word can help us
understand the combinatory meaning of several words. The phenomenon of multiword expression idiomaticity is common in natural language, and it is difficult to be inferred. For example, the meaning of "ivory tower" could not be inferred from the
separate words "ivory" and "tower" directly. Unlike semantic analysis and syntax analysis, idiomaticity analysis (Salehi et al. 2015) requires predicting the compositionality of multiword expressions. Once we figure out the syntax and semantics of words, we can perform sentiment analysis, whose goal is to extract opinion, sentiment, and subjectivity from the text. Word embedding can act as the text features in the sentiment classifier (Socher et al. 2012; Dickinson and Hu 2015).
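As an illustrative aside, the vector analogy above can be sketched with toy vectors and cosine similarity. The three-dimensional embeddings below are invented for illustration only; real models such as word2vec use hundreds of dimensions, and the analogy-precision metric of Sect. 4.2.4 averages this kind of test over many word pairs.

```python
import math

# Toy 3-dimensional embeddings, chosen by hand so the analogy holds.
vectors = {
    "Madrid": [0.9, 0.1, 0.6],
    "Spain":  [0.8, 0.0, 0.1],
    "France": [0.1, 0.8, 0.1],
    "Paris":  [0.2, 0.9, 0.6],
    "Berlin": [0.5, 0.5, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def analogy(a, b, c):
    """Return the word whose vector is closest to vec(a) - vec(b) + vec(c)."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = (w for w in vectors if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("Madrid", "Spain", "France"))  # Paris
```

The same nearest-neighbor search, run over trained embeddings and a large analogy test set, is exactly how analogy precision is computed.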
In some NLP problems, we need to deal with more than one type of language corpus. Textual entailment (Saurí and Pustejovsky 2007) is such a task involving various knowledge sources: it attempts to decide whether the meaning of a new text can be inferred from other known text sources. Works in Bjerva et al. (2014) and Zhao et al. (2015) apply word embedding to score computation in combination with a clustering procedure. Machine translation is another important field in NLP, which aims to replace text in one language with the corresponding text in another language. Statistical information is widely used in some previous works (Ueffing et al. 2007). Neural network models are also an important class of approaches. Bahdanau et al. (2014) build an end-to-end model based on the encoder-decoder framework, while it is still common for machine translation to use word embedding as the word representation (Mikolov et al. 2013; Hill et al. 2014). Furthermore, Zou et al. (2013) handle the translation task using bilingual embeddings, which are shown to be more efficient.
Depending on the role it plays in an application, word embedding can be used either as a selected feature or in its raw numerical form. As a selected feature, it usually serves higher-level NLP tasks such as sentiment classification, topic discovery, and word clustering. In linguistic tasks such as semantic analysis, syntax analysis, and idiomaticity analysis, word embedding is usually used in its raw numerical form. Based on the type of machine learning problem involved, applications of word embedding can be divided into regression, clustering, and classification tasks. Zhao et al. (2015) show that word embedding can improve regression and classification problems, which attempt to find patterns in the input data and make predictions after fitting. Tasks like semantic analysis, idiomaticity analysis, and machine translation belong to this category. Luo et al. (2014) measure the similarity between short texts by applying a regression step to word embeddings, where the resulting model can be used for search, query suggestion, image retrieval, and so on. Word embedding is also commonly used in classification tasks like sentiment analysis and textual entailment (Amir et al. 2015). POS tagging and named entity recognition discussed above belong to clustering problems.
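As a minimal sketch of word embedding used as a selected feature for classification, the snippet below averages per-word vectors into a document vector and applies a trivial decision rule. The vectors and the rule are invented stand-ins: a real system would use trained embeddings (e.g., word2vec or GloVe) and a learned classifier such as logistic regression.

```python
# Toy embeddings; the two dimensions loosely stand for "polarity" and "topicality".
emb = {
    "good": [1.0, 0.2], "great": [0.9, 0.1], "bad": [-0.8, 0.1],
    "awful": [-1.0, 0.0], "movie": [0.0, 0.5], "plot": [0.0, 0.4],
}

def doc_vector(tokens):
    """Average the embeddings of known tokens -- a common baseline feature."""
    known = [emb[t] for t in tokens if t in emb]
    return [sum(v[i] for v in known) / len(known) for i in range(2)]

def classify(tokens):
    # Trivial threshold on the first dimension, standing in for a trained classifier.
    return "positive" if doc_vector(tokens)[0] > 0 else "negative"

print(classify("a great movie with a good plot".split()))  # positive
print(classify("an awful movie with a bad plot".split()))  # negative
```

The averaged document vector is the "selected feature" in the sense above; everything downstream is an ordinary supervised learning problem.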
Besides word embedding, phrase embedding and document embedding are other choices for representing text. Phrase embedding vectorizes
the phrases for higher level tasks, such as web document management (Sharma
and Raman 2003), paraphrase identification (Yin and Schütze 2016) and machine
translation (Zou et al. 2013). Document embedding treats documents as basic units.
It can be learned from documents directly (Huang et al. 2013) or aggregated


Y. Li and T. Yang

Fig. 4.8 The citation numbers of the topics in each year

by word embedding (Lin and He 2009; Zhou et al. 2015). Similar to phrase
embedding, document embedding can also be applied in sentiment analysis and
machine translation.
Figure 4.8 summarizes the distribution of word embedding applications across different NLP tasks.2 We can see that word embedding has steadily gained popularity since 2004, as reflected in the rising curve. One of the most common applications is semantic analysis, which involves nearly half of the works on word embedding. Next comes syntax analysis, whose popularity increased dramatically between 2009 and 2011. Although other applications account for a much smaller proportion of work than syntax and semantic analysis, domains such as machine translation, named entity recognition, and sentiment analysis have received dramatically increasing attention since 2010.

4.4 Conclusion and Future Work
In conclusion, word embedding is a powerful tool for many NLP tasks, especially those that take raw text as input features. There are various types of models for building word embedding, and each of them has its own advantages and disadvantages. Word embedding can be regarded as a set of textual features, so it can be counted as a preprocessing step in more advanced NLP tasks. Not only can it be fed into classifiers, but it can also be used for clustering and regression problems. Regarding the level of representation, word embedding is a fine-grained representation compared with phrase embedding and document embedding.
2 The citation numbers are from http://www.webofscience.com.

Word embedding is an attractive research topic worthy of further exploration. First, to enrich the information contained in word embedding, we can try to incorporate various kinds of prior knowledge, such as synonymy relations between words, domain-specific information, sentiment information, and topical information. The word embedding generated in this direction will be more expressive. Then, besides using words to generate embeddings, we may want to explore how character-level terms affect the output, since words themselves are
made up of character-level elements. Such morphological analysis matches the way people perceive and create words, and can thus help deal with the occurrence of new words. Moreover, as data volumes are accumulating rapidly, it is necessary to develop techniques capable of efficiently processing huge amounts of text data. Since text data may arrive in streams, word embedding models that incorporate online learning are desirable in this scenario: when a new chunk of data is available, we do not need to learn a new model from the entire corpus; instead, we only need to update the original model to fit the new data.
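The online-update idea can be sketched conceptually with simple co-occurrence counting (a stand-in for updating neural embedding weights): each new chunk of text is folded into the model's statistics without revisiting earlier data. The class and parameter names below are illustrative only.

```python
from collections import Counter, defaultdict

class StreamingCooccurrence:
    """Conceptual sketch of online learning for embeddings: fold each new
    chunk of text into co-occurrence statistics incrementally, instead of
    recomputing over the entire corpus."""

    def __init__(self, window=2):
        self.window = window
        self.counts = defaultdict(Counter)  # word -> Counter of context words

    def update(self, tokens):
        # Count context words within +/- window of each position.
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - self.window), min(len(tokens), i + self.window + 1)
            for j in range(lo, hi):
                if i != j:
                    self.counts[w][tokens[j]] += 1

model = StreamingCooccurrence()
model.update("word embedding models learn from text".split())
model.update("streaming text arrives in chunks".split())  # incremental update
print(model.counts["text"]["streaming"])  # 1
```

A neural model with the same interface would replace the counter update with a few gradient steps on the new chunk, which is exactly the update-only-on-new-data behavior described above.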

References

Amir, S., Astudillo, R., Ling, W., Martins, B., Silva, M. J., & Trancoso, I. (2015). INESC-ID: A
regression model for large scale twitter sentiment lexicon induction. In International Workshop
on Semantic Evaluation.
Andreas, J., & Dan, K. (2014). How much do word embeddings encode about syntax? In Meeting
of the Association for Computational Linguistics (pp. 822–827).
Antony, P. J., Warrier, N. J., & Soman, K. P. (2010). Penn treebank. International Journal of
Computer Applications, 7(8), 14–21.
Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to
align and translate. Eprint arxiv.
Bengio, Y., Schwenk, H., Senécal, J. S., Morin, F., & Gauvain, J. L. (2003). A neural probabilistic
language model. Journal of Machine Learning Research, 3(6), 1137–1155.
Bjerva, J., Bos, J., van der Goot, R., & Nissim, M. (2014). The meaning factory: Formal semantics
for recognizing textual entailment and determining semantic similarity. In SemEval-2014
Collobert, R., & Weston, J. (2008). A unified architecture for natural language processing: deep
neural networks with multitask learning. In International Conference, Helsinki, Finland, June
(pp. 160–167).
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural
language processing (almost) from scratch. Journal of Machine Learning Research, 12(1),
Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent
semantic analysis. Journal of the American Society for Information Science, 41, 391–407.
Dickinson, B., & Hu, W. (2015). Sentiment analysis of investor opinions on twitter. Social
Networking, 04(3), 62–71.
Djuric, N., Wu, H., Radosavljevic, V., Grbovic, M., & Bhamidipati, N. (2015). Hierarchical neural
language models for joint representation of streaming documents and their content. In WWW.
Faruqui, M., Tsvetkov, Y., Yogatama, D., Dyer, C., & Smith, N. (2015). Sparse overcomplete word
vector representations. Preprint, arXiv:1506.02004.
Fellbaum, C. (1998). WordNet. Wiley Online Library.
Goddard, C. (2011). Semantic analysis: A practical introduction. Oxford: Oxford University Press.
Goller, C., & Kuchler, A. (1996). Learning task-dependent distributed representations by
backpropagation through structure. In IEEE International Conference on Neural Networks
(Vol. 1, pp. 347–352).



Harris, Z. S. (1954). Distributional structure. Synthese Language Library, 10(2–3), 146–162.
Hill, F., Cho, K., Jean, S., Devin, C., & Bengio, Y. (2014). Embedding word similarity with neural
machine translation. Eprint arXiv.
Hinton, G. E. (1986). Learning distributed representations of concepts. In Proceedings of CogSci.
Hofmann, T. (2001). Unsupervised learning by probabilistic latent semantic analysis. Machine
Learning, 42(1–2), 177–196.
Hoyer, P. O. (2002). Non-negative sparse coding. In IEEE Workshop on Neural Networks for
Signal Processing (pp. 557–565).
Huang, E. H., Socher, R., Manning, C. D., & Ng, A. Y. (2012). Improving word representations via
global context and multiple word prototypes. In Meeting of the Association for Computational
Linguistics: Long Papers (pp. 873–882).
Huang, P.-S., He, X., Gao, J., Deng, L., Acero, A., & Heck, L. (2013). Learning deep
structured semantic models for web search using clickthrough data. In Proceedings of the
22Nd ACM International Conference on Information & Knowledge Management, CIKM ’13
(pp. 2333–2338). New York, NY: ACM.
Klein, D., & Manning, C. D. (2003). Accurate unlexicalized parsing. In Meeting on Association
for Computational Linguistics (pp. 423–430).
Lai, S., Liu, K., Xu, L., & Zhao, J. (2015). How to generate a good word embedding? arXiv preprint.
Landauer, T. K. (2002). On the computational basis of learning and cognition: Arguments from
lsa. Psychology of Learning & Motivation, 41(41), 43–84.
Landauer, T. K., & Dumais, S. T. (1997). A solution to plato’s problem: The latent semantic
analysis theory of acquisition, induction, and representation of knowledge. Psychological
Review, 104(2), 211–240.
Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to latent semantic analysis.
Discourse Processes, 25(2), 259–284.
Lin, C., & He, Y. (2009). Joint sentiment/topic model for sentiment analysis. In ACM Conference
on Information & Knowledge Management (pp. 375–384).
Lin, X. (2009). Dual averaging methods for regularized stochastic learning and online optimization. In Conference on Neural Information Processing Systems 2009 (pp. 2543–2596).
Liu, Y., Liu, Z., Chua, T. S., & Sun, M. (2015). Topical word embeddings. In Twenty-Ninth AAAI
Conference on Artificial Intelligence.
Luo, Y., Tang, J., Yan, J., Xu, C., & Chen, Z. (2014). Pre-trained multi-view word embedding
using two-side neural network. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
Matsugu, M., Mori, K., Mitari, Y., & Kaneda, Y. (2003). Subject independent facial expression
recognition with robust face detection using a convolutional neural network. Neural Networks,
16(5–6), 555–559.
McMahon, J. G., & Smith, F. J. (1996). Improving statistical language model performance with
automatically generated word hierarchies. Computational Linguistics, 22(2), 217–247.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations
in vector space. CoRR, abs/1301.3781.
Mikolov, T., Karafiát, M., Burget, L., Cernocký, J., & Khudanpur, S. (2010). Recurrent neural
network based language model. In INTERSPEECH 2010, Conference of the International
Speech Communication Association, Makuhari, Chiba, Japan, September (pp. 1045–1048).
Mnih, A., & Hinton, G. (2007). Three new graphical models for statistical language modelling. In
International Conference on Machine Learning (pp. 641–648).
Mnih, A., & Hinton, G. E. (2008). A scalable hierarchical distributed language model. In
Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second
Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia,
Canada, December 8–11, 2008 (pp. 1081–1088).
Morin, F., & Bengio, Y. (2005). Hierarchical probabilistic neural network language model. Aistats
(Vol. 5, pp. 246–252). Citeseer.
Pennington, J., Socher, R., & Manning, C. (2014). Glove: Global vectors for word representation.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing



Rastogi, P., Van Durme, B., & Arora, R. (2015). Multiview LSA: Representation learning
via generalized CCA. In Conference of the North American chapter of the association for
computational linguistics: Human language technologies, NAACL-HLT’15 (pp. 556–566).
Rijkhoff, J. (2007). Word classes. Language & Linguistics Compass, 1(6), 709–726.
Salehi, B., Cook, P., & Baldwin, T. (2015). A word embedding approach to predicting the
compositionality of multiword expressions. In Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies.
Salton, G., Wong, A., & Yang, C. S. (1997). A vector space model for automatic indexing. San
Francisco: Morgan Kaufmann Publishers Inc.
Saurí, R., & Pustejovsky, J. (2007). Determining modality and factuality for text entailment. In
International Conference on Semantic Computing (pp. 509–516).
Schölkopf, B., Platt, J., & Hofmann, T. (2007). Efficient sparse coding algorithms. In NIPS
(pp. 801–808).
Scott, D., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Richard, H. (1999). Indexing by latent
semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial
Intelligence (pp. 391–407).
Sharma, R., & Raman, S. (2003). Phrase-based text representation for managing the web
documents. In International Conference on Information Technology: Coding and Computing
(pp. 165–169).
Shazeer, N., Doherty, R., Evans, C., & Waterson, C. (2016). Swivel: Improving embeddings by
noticing what’s missing. Preprint, arXiv:1602.02215.
Socher, R., Huval, B., Manning, C. D., & Ng, A. Y. (2012). Semantic compositionality through
recursive matrix-vector spaces. In Joint Conference on Empirical Methods in Natural Language
Processing and Computational Natural Language Learning (pp. 1201–1211).
Socher, R., Pennington, J., Huang, E. H., Ng, A. Y., & Manning, C. D. (2011). Semi-supervised
recursive autoencoders for predicting sentiment distributions. In Conference on Empirical
Methods in Natural Language Processing, EMNLP 2011, 27–31 July 2011, John Mcintyre
Conference Centre, Edinburgh, A Meeting of SIGDAT, A Special Interest Group of the ACL
(pp. 151–161).
Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning, C. D., Ng, A. Y., & Potts, C. (2013).
Recursive deep models for semantic compositionality over a sentiment treebank. In Conference
on Empirical Methods on Natural Language Processing.
Sun, F., Guo, J., Lan, Y., Xu, J., & Cheng, X. (2015). Learning word representations by jointly
modeling syntagmatic and paradigmatic relations. In AAAI.
Sun, F., Guo, J., Lan, Y., Xu, J., & Cheng, X. (2016). Sparse word embeddings using l1 regularized
online learning. In International Joint Conference on Artificial Intelligence.
Sun, S., Liu, H., Lin, H., & Abraham, A. (2012). Twitter part-of-speech tagging using pre-classification hidden Markov model. In IEEE International Conference on Systems, Man, and
Cybernetics (pp. 1118–1123).
Ueffing, N., Haffari, G., & Sarkar, A. (2007). Transductive learning for statistical machine
translation. In ACL 2007, Proceedings of the Meeting of the Association for Computational
Linguistics, June 23–30, 2007, Prague (pp. 25–32).
Xu, W., & Rudnicky, A. (2000). Can artificial neural networks learn language models? In
International Conference on Statistical Language Processing (pp. 202–205).
Yang, Y., & Pedersen, J. O. (1997). A comparative study on feature selection in text categorization.
In Fourteenth International Conference on Machine Learning (pp. 412–420).
Yih, W.-T., Zweig, G., & Platt, J. C. (2012). Polarity inducing latent semantic analysis. In
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12 (pp. 1212–1222).
Stroudsburg, PA: Association for Computational Linguistics.
Yin, W., & Schütze, H. (2016). Discriminative phrase embedding for paraphrase identification.
Preprint, arXiv:1604.00503.
Yogatama, D., Faruqui, M., Dyer, C., & Smith, N. A. (2014a). Learning word representations with
hierarchical sparse coding. Eprint arXiv.



Yogatama, D., Faruqui, M., Dyer, C., & Smith, N. A. (2014b). Learning word representations with
hierarchical sparse coding. Eprint arXiv.
Zhao, J., Lan, M., Niu, Z. Y., & Lu, Y. (2015). Integrating word embeddings and traditional
NLP features to measure textual entailment and semantic relatedness of sentence pairs. In
International Joint Conference on Neural Networks (pp. 32–35).
Zhou, C., Sun, C., Liu, Z., & Lau, F. (2015). Category enhanced word embedding. Preprint,
Zou, W. Y., Socher, R., Cer, D. M., & Manning, C. D. (2013). Bilingual word embeddings for
phrase-based machine translation. In EMNLP (pp. 1393–1398).

Part II

Applications in Science

Chapter 5

Big Data Solutions to Interpreting Complex
Systems in the Environment
Hongmei Chi, Sharmini Pitter, Nan Li, and Haiyan Tian

5.1 Introduction
The role of big data analysis in various fields has only recently been explored.
The amount of data produced in the digital age is profound. Fortunately, the
technical capabilities of the twenty-first century are beginning to meet the needs of processing such immense amounts of information. Data collected through social media, marketing transactions, and internet search engines have opened the path to in-depth, real-time quantitative research in the fields of sociology, anthropology,
and economics (Boyd and Crawford 2012).
Other fields, e.g., the medical and business fields, have been quick to recognize
the utility of rapidly collecting, storing, and analyzing vast amounts of data such
as patient records and customer purchasing patterns. Even individuals now have
the ability to track their health statistics through ever-increasing access to personal
tracking devices leading to the Quantified Self Movement (Kelly 2007; Swan 2013).
In the realms of environmental science and ecology the capabilities of big data
analysis remain largely unexplored. We have merely scratched the surface of possibilities (Hampton et al. 2013). And yet it is in these areas that we may have the most
to gain. The complexity of environmental systems, particularly in relation to human behavior and impact on human life, requires the capabilities of modern data analysis.

H. Chi () • S. Pitter
Florida A&M University, Tallahassee, FL 32307, USA
e-mail: hongmei.chi@famu.edu; sharmini.pitter@famu.edu
N. Li
Guangxi Teachers Education University, Nanning, 530001, China
e-mail: nli@yic.ac.cn
H. Tian
University of Southern Mississippi, Hattiesburg, MS 39406, USA
e-mail: haiyan.tian@usm.edu
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_5



H. Chi et al.

By looking at large data sets through platforms that closely resemble neural networks we are no longer restricted to simple 1:1 relationships between environmental
variables and effects on economic, recreational, and agricultural variables. It is now
possible to look at environmental impacts through the lens of the system. Take for
instance the combination of factors that contribute to higher intensity of hurricanes
in Florida and the impact on property distribution (coastal vs. inland). Instead of
simply asking how increased storm intensity on the Saffir-Simpson hurricane wind
scale is affecting property purchasing, it is now possible to understand what factors
may be leading to increased storm intensity in order to effectively produce mediation
strategies to reduce the impact high intensity storms have on the Florida economy.
These types of analysis will become increasingly necessary in order to provide communities with adequate information to prepare for and adapt to the regional effects
of climate change (Bjarnadottir et al. 2011).
In the case of agriculture a good example of the new way forward is to
pool together information regarding crop production, local/regional soil quality,
and temperature/climate variation from several farms to strategize ways to adapt
to increasing temperatures or shifting growing seasons. Companies such as the
Farmers Business Network provide access to data at a much more precise level,
allowing farmers to make informed decisions based on data sets collected region by
region, farm by farm.
In order to apply information garnered through Big Data analytics (Shiffrin 2016) to real-world issues, a certain amount of interpretation is necessary. The variables that have been considered or ignored must be taken into account in order to discern what can and cannot be interpreted from any given dataset. For instance, in the example of GMO adoption, any number of factors could have a substantial effect on the adoption process. Just as with any implementation of a new agricultural
methodology, social interaction, economic standing, family structure, community
structure, and a plethora of other factors may have a significant effect on any one
farmer’s likelihood of adopting the method. As Big Data analysis develops these
broader impacts on decision-making will likely become clearer. However, as we
explore the interdependence of information it is important to avoid drawing direct
connections where none exist (Boyd and Crawford 2012).
In this chapter we will explore several possible applications of data analytics in the environmental sciences, as well as the data analysis tools RapidMiner, Apache Spark, and the statistical package R. The strengths of each analysis tool will be highlighted through two case studies. We will use the example of hurricane frequency in Florida, and the chapter will also include an environmental genomic example. Our hope is that this information will help set the stage for others to delve deeper into the possibilities that big data analytics can provide. Collaborations between data scientists and environmental scientists will lead to increased analytical capabilities and perhaps a more accurate dissection of the complexity of the systems that
environmental scientists seek to understand. Beyond that, the simple examples
provided in this chapter should encourage environmental scientists to further their
own discovery of the analytical tools available to them.

5 Big Data Solutions to Interpreting Complex Systems in the Environment


5.2 Applications: Various Datasets
Data analysis is the process of examining data to uncover hidden patterns, unknown
correlations, and other useful information that can be used to make better decisions.
Data analytics is playing an ever-increasing role in the process of scientific discovery. EPA- and NOAA-related datasets are in high demand for analysis using such tools, as these data are difficult to handle with traditional database systems.
A wireless sensor network (WSN) is defined as a human-engineered, complex
dynamic system of interacting sensor nodes that must combine its understanding of
the physical world with its computational and control functions, and operate with
constrained resources. These miniature sensor nodes must collectively comprehend
the time evolution of physical and operational phenomena and predict their effects
on mission execution and then actuate control actions that execute common
high-level mission goals. Rapid modern advancements in micro-electromechanical
systems (MEMS) and distributed computing have propelled the use of WSNs in
diverse applications including education, geological monitoring, ecological habitat
monitoring, and healthcare monitoring. Generally, sensor nodes are equipped
with modules for sensing, computing, powering, and communicating to monitor
specific phenomena via self-organizing protocols, since node positions are not predetermined. Figure 5.1 represents a general architecture for a sensor node, where the
microcontroller or computing module processes the data observed by the sensing
module, which is then transmitted to a required destination via a wireless link with
a communication module.
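The sense-process-transmit architecture described above can be sketched as follows; the class and field names are illustrative inventions, not an actual WSN API.

```python
# Sketch of a sensor node: a sensing module produces readings, the
# computing module (microcontroller) processes them, and a communication
# module forwards the result over the wireless link.
class SensorNode:
    def __init__(self, node_id, sense, transmit):
        self.node_id = node_id
        self.sense = sense          # sensing module (callable)
        self.transmit = transmit    # communication module (callable)

    def step(self):
        reading = self.sense()
        # Computing module: cheap on-node processing (here, rounding)
        # before spending energy on transmission.
        processed = round(reading, 1)
        self.transmit({"node": self.node_id, "value": processed})

log = []  # stands in for the wireless link / base station
node = SensorNode("n1", sense=lambda: 23.456, transmit=log.append)
node.step()
print(log)  # [{'node': 'n1', 'value': 23.5}]
```

On real hardware the `transmit` callable would drive a radio, and the processing step is where duty-cycling and aggregation decisions conserve the node's constrained power budget.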
Some environmental applications of sensor networks include tracking the movements of birds, small animals, and insects; monitoring environmental conditions that
affect crops and livestock (Greenwood et al. 2014); monitoring irrigation; the use
of macro-instruments for large-scale Earth monitoring and planetary exploration;
chemical/biological detection; precision agriculture; biological and environmental
monitoring in marine, soil, and atmospheric contexts; forest fire detection; meteorological or geophysical research; flood detection; bio-complexity mapping of the
environment; and pollution study. In Sect. 5.2.1 a few of these examples have been
further explained.

5.2.1 Sample Applications
Forest fire detection: Since sensor nodes may be strategically, randomly, and densely deployed in a forest, they can relay the exact origin of a fire to the end users before it spreads uncontrollably. Millions of sensor nodes can be
deployed and integrated using radio frequencies/optical systems. The nodes may be
equipped with effective power scavenging methods, such as solar cells, because the
sensors may be left unattended for months and even years. The sensor nodes will
collaborate with each other to perform distributed sensing and overcome obstacles,
such as trees and rocks that block wired sensors’ line of sight.







Fig. 5.1 Wireless sensor networks (WSN) used in precision agriculture. These networks allow
remote monitoring of field conditions for crops and livestock

Biocomplexity mapping of the environment: This strategy requires sophisticated
approaches to integrate information across temporal and spatial scales. Advances in remote sensing and automated data collection technology have enabled
higher spatial, spectral, and temporal resolution at a geometrically declining cost
per unit area. Along with these advances, the sensor nodes also have the ability
to connect with the Internet, which allows remote users to control, monitor and
observe the biocomplexity of the environment (Khedo et al. 2010a; Khedo et al.
2010b). Although satellite and airborne sensors are useful in observing large
biodiversity, e.g., the spatial complexity of dominant plant species, they do not have enough resolution to observe small-scale biodiversity, which makes up most of the biodiversity in an ecosystem. As a result, there is a need for ground-level deployment
of wireless sensor nodes to observe this level of biocomplexity. One example of
biocomplexity mapping of the environment is done at the James Reserve in Southern
California (Ravi and Subramaniam 2014). Three monitoring grids, each having 25–
100 sensor nodes, will be implemented for fixed-view multimedia and environmental
sensor data loggers.
Flood detection: An example of flood detection is the ALERT system deployed in
the US (Basha et al. 2008). Several types of sensors deployed in the ALERT system
are rainfall, water level and weather sensors. These sensors supply information to the centralized database system in a pre-defined way. Research projects, such as the
COUGAR Device Database Project at Cornell University and the Data Space project
at Rutgers, are investigating distributed approaches in interacting with sensor nodes
in the sensor field to provide snapshot and long-running queries.
Precision agriculture: Some of the benefits include the ability to monitor the
pesticide levels in soil and groundwater, the level of soil erosion, and the level of air
pollution in real-time (Lehmann et al. 2012) (Fig. 5.1).
Every day a large number of Earth Observation spaceborne and airborne sensors
from many different countries provide a massive amount of remotely sensed data
(Ma et al. 2015). A vast amount of remote sensing data is now freely available from
the NASA Open Government Initiative (http://www.nasa.gov/open/). The most challenging
issues are managing, processing, and efficiently exploiting big data for remote
sensing problems.

5.3 Big Data Tools
In this section we will explore the data analysis tools RapidMiner and Apache Spark, and the statistical package R. Examples of the possible uses of RapidMiner and R will be highlighted through two corresponding case studies. There are many open-source tools available for analyzing environmental big datasets.

5.3.1 RapidMiner
RapidMiner is a software platform developed by the company of the same name
that provides an integrated environment for machine learning, data mining, text
mining, predictive analytics, and business analytics. It is used for business and
industrial applications as well as for research, education, training, rapid prototyping,
and application development and supports all steps of the data mining process
including results visualization, validation, and optimization. In addition to data
mining, RapidMiner also provides functionality like data preprocessing and visualization, predictive analytics and statistical modeling, evaluation, and deployment.
RapidMiner also provides the ability to run real-time data analysis on a set schedule,
which is helpful in analyzing the high velocity and high volume data that is
characteristic of big data. Written in the Java programming language, this tool offers advanced analytics through template-based frameworks, and users hardly have to write any code. Offered as a service rather than as local software, this tool holds a top position among data mining tools.
This chapter focuses on how RapidMiner can be used to analyze Florida hurricane datasets.



5.3.2 Apache Spark
Apache Spark is an open-source cluster-computing framework originally developed at the University of California, Berkeley's AMPLab. Spark is one of the most active and fastest-growing Apache projects, and with heavyweights like IBM throwing their weight behind the project and major corporations bringing applications into large-scale production, the momentum shows no signs of letting up.

5.3.3 R
R is a language and environment for statistical computing and graphics, implementing the S language originally developed at Bell Laboratories. One of R's strengths is the ease with which well-designed, publication-quality plots can be produced, including mathematical symbols and formulae where needed. R provides an environment within which statistical techniques
are implemented. R can be extended easily via packages. Programming with Big
Data in R (pbdR) (http://r-pbd.org/) is a series of R packages and an environment
for statistical computing with Big Data by using high-performance statistical
computation. The significance of pbdR is that it mainly focuses on distributed
memory systems. The package pbdR can deal with big data in a timely manner.

5.3.4 Other Tools
There are many other open source tools for processing big data, such as Weka and
Apache projects like MapReduce and Spark. A few tools, such as GraphLab and Pegasus,
are built on top of MapReduce. All of these open source tools are excellent at handling
environmental datasets.

5.4 Case Study I: Florida Hurricane Datasets (1950–2013)
5.4.1 Background
Due to its large coastal area and the warm Gulf Stream waters that surround it,
Florida is particularly vulnerable to hurricanes (Blake et al. 2007; Frazier et al.
Within the past century, over $450 billion in damage has occurred in Florida
as a result of hurricanes (Malmstadt et al. 2009). Hurricanes pose the greatest
meteorological threat to the state. They not only threaten property damage but
can also be costly in terms of revenue, employment, and loss of life (Belasen and
Polachek 2009; Blake et al. 2007).

5 Big Data Solutions to Interpreting Complex Systems in the Environment


The frequency of hurricanes in the state of Florida from 1950 to 2015
was explored. This case study seeks to highlight the utility of public databases.
Combining location data with information on the landfall location, intensity,
and categorization of storm events from 1950 to 2015 allows us to demonstrate which
areas may be the most vulnerable, and thus should invest in proper education and
infrastructure for the communities that need them most.
At this time, areas of South Florida are already experiencing the effects of climate
change in the form of sea level rise and increased high-tide flooding events (SFRC
Compact 2012; Wdowinski et al. 2016). It is likely that future hurricanes
will be of increased intensity and frequency.
With the possibility of extreme sea level rise on the horizon for the state of Florida
over the next 100 years, this threat is even more severe: studies have
shown that increased water levels in this region will likely result in higher storm surges,
causing more damage with each major storm event (Frazier et al. 2010).

5.4.2 Dataset
Data for this case study was selected from the Atlantic HURDAT2 Database (NHC
Data Archive 2016).
A record of tropical storms, tropical depressions, and hurricanes was explored to
demonstrate the potential of analyzing large data sets in understanding the impact
of possible environmental disasters. The data utilized was sourced from the Atlantic
Hurricane Database (HURDAT2) of the National Hurricane Center (NHC). The
National Weather Service originally collected the data. This database is widely used
for risk assessment (Landsea and Franklin 2013).
Modern methods of storm data collection include observations measured from
Air Force and NOAA aircraft, ships, buoys, coastal stations, and other means
(Powell et al. 1998). Data collection in the 2000s has also been enhanced by the
use of satellite-based scatterometers, Advanced Microwave Sounding Units, the
Advanced Dvorak Technique, and aircraft-based Stepped Frequency Microwave
Radiometers (Landsea and Franklin 2013; Powell et al. 2009).
The wind speed categories used are denoted in Table 5.1, which shows categories
as they are defined by the NHC. This information is available through the NHC website (http://www.nhc.noaa.gov/aboutsshws.php). Unlike the original Saffir-Simpson
scale, this modified version does not include information such as central pressure
or storm surge and only determines category based on peak maximum sustained
wind speed (Saffir 1973; Simpson and Saffir 1974; Blake et al. 2007). The scale
used has been modified over the years (Landsea et al. 2004; Schott et al. 2012).
For the purpose of this study, the wind speed records reported in the HURDAT2
database were converted into modern hurricane categories. The classifications of
Tropical Depression (TD) and Tropical Storm (TS) were assigned to storm events
with sustained wind speeds of less than 39 mph and 39–73 mph, respectively.
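The conversion just described can be sketched as a simple threshold lookup. The snippet below is illustrative only, based on the thresholds in Table 5.1; the function name and return labels are ours, not part of HURDAT2 or the chapter's workflow:

```python
def storm_category(wind_mph):
    """Classify a storm by peak sustained wind speed (mph),
    following the modified Saffir-Simpson thresholds in Table 5.1."""
    if wind_mph < 39:
        return "TD"          # Tropical Depression
    if wind_mph <= 73:
        return "TS"          # Tropical Storm
    if wind_mph <= 95:
        return "Category 1"
    if wind_mph <= 110:
        return "Category 2"
    if wind_mph <= 129:
        return "Category 3"
    if wind_mph <= 156:
        return "Category 4"
    return "Category 5"
```

Applying such a function to every HURDAT2 wind record yields the category labels used in the figures below.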



Table 5.1 Modified Saffir-Simpson wind scale

Tropical Depression (TD): sustained winds below 39 mph
Tropical Storm (TS): sustained winds of 39–73 mph

Category 1 (74–95 mph; 64–82 kt; 119–153 km/h). Very dangerous winds will produce
some damage: Well-constructed frame homes could have damage to roof, shingles, vinyl
siding and gutters. Large branches of trees will snap and shallowly rooted trees may
be toppled. Extensive damage to power lines and poles likely will result in power
outages that could last a few to several days.

Category 2 (96–110 mph; 83–95 kt; 154–177 km/h). Extremely dangerous winds will
cause extensive damage: Well-constructed frame homes could sustain major roof and
siding damage. Many shallowly rooted trees will be snapped or uprooted and block
numerous roads. Near-total power loss is expected with outages that could last from
several days to weeks.

Category 3 (111–129 mph; 96–112 kt; 178–208 km/h). Devastating damage will occur:
Well-built framed homes may incur major damage or removal of roof decking and gable
ends. Many trees will be snapped or uprooted, blocking numerous roads. Electricity
and water will be unavailable for several days to weeks after the storm passes.

Category 4 (130–156 mph; 113–136 kt; 209–251 km/h). Catastrophic damage will occur:
Well-built framed homes can sustain severe damage with loss of most of the roof
structure and/or some exterior walls. Most trees will be snapped or uprooted and
power poles downed. Fallen trees and power poles will isolate residential areas.
Power outages will last weeks to possibly months. Most of the area will be
uninhabitable for weeks or months.

Category 5 (157 mph or higher; 137 kt or higher; 252 km/h or higher). Catastrophic
damage will occur: A high percentage of framed homes will be destroyed, with total
roof failure and wall collapse. Fallen trees and power poles will isolate residential
areas. Power outages will last for weeks to possibly months. Most of the area will
be uninhabitable for weeks or months.

Source: http://www.nhc.noaa.gov/aboutsshws.php

5.4.3 Data Analysis
Data was selected from the HURDAT2 database based on location to limit the analysis
to storm events that occurred in the state of Florida during the years 1950–2013. As
expected, storms of lower sustained wind speeds are more likely to occur, with category 5
hurricanes comprising less than 1% of the total storms experienced in Florida from
1950 to 2013 (Fig. 5.2, Table 5.2).
In Fig. 5.3 wind speed is plotted against the number of storms reported for a given
sustained maximum wind speed. The average maximum sustained wind speed was
found to be approximately 45 mph.
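An analysis along these lines can be sketched with pandas. The snippet below is purely illustrative, using made-up records and hypothetical column names ("year", "max_wind_mph"); real HURDAT2 files are in a fixed-text format that must be parsed first:

```python
import pandas as pd

# Hypothetical storm records: year of occurrence and peak sustained wind (mph)
storms = pd.DataFrame({
    "year":         [1950, 1960, 1992, 2004, 2005],
    "max_wind_mph": [45, 85, 165, 120, 50],
})

# Restrict to the study period, then compute the average maximum sustained wind
florida = storms[(storms["year"] >= 1950) & (storms["year"] <= 2013)]
avg_wind = florida["max_wind_mph"].mean()
```

The same filtered frame can then be tabulated by category or plotted as in Fig. 5.3.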


Fig. 5.2 Hurricane frequency values by category from 1950–2015
Table 5.2 Number of storm events in each category (nominal value and absolute count)
It is also possible to determine the time of year that most storms occur by
breaking down storm frequency per month. In Fig. 5.4 this has been further broken
down by year to show the distribution of high-powered storms over time.
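The monthly breakdown can be sketched with a simple frequency count. This is an illustrative snippet over hypothetical month values, not the actual HURDAT2 records:

```python
from collections import Counter

# Hypothetical landfall months for a set of storm events (1 = January)
months = [6, 8, 8, 9, 9, 9, 10]

# Storm frequency per month, as plotted in Fig. 5.4
per_month = Counter(months)
peak_month, peak_count = per_month.most_common(1)[0]
```

Grouping the same counts by year as well yields the year-by-month distribution shown in Fig. 5.4.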

5.4.4 Summary
This simple example demonstrates how easily large databases can be utilized to
discover information that may be useful to the general public. In this instance the
high frequency of hurricanes occurring during the months of August and September
could affect travel and recreational tourism planning. This information becomes
even more valuable when compared to other large datasets. The low average wind
speed of the storms (Fig. 5.2) may seem surprising given the reputation of Florida






Fig. 5.3 Frequency of storms based on wind speed (mph)
Table 5.3 Key databases for environmental microbiologists

- Comprehensive platform for annotation and analysis of microbial genomes and
  metagenomes
- Portal for curated genomic data and automated annotation of microbial genomes
- NCBI (http://www.ncbi.nlm.nih.gov/): A series of databases relevant to
  biotechnology and biomedicine and an important resource for bioinformatics tools
  and services. Major databases include GenBank for DNA sequences and PubMed, a
  bibliographic database for the biomedical literature
- Database of protein families
- STRING (http://string-db.org/): Database of protein association networks
- 16S rRNA gene database
- rRNA gene database

as being highly vulnerable to hurricanes. However, when compared with a study
of the Georgia coastline, which experienced only 14 hurricane landfalls
during the same time period, we can see why Florida is viewed as highly vulnerable
to high-powered storms (Bossak et al. 2014).
Public sharing of data is something that must be addressed (Fig. 5.5). Making data
available for research collaboration is vital to realizing the true potential of big
data analytics, whether to solve complex environmental challenges or to assess
the economic impact of various solution strategies (Bjarnadottir et al. 2011).


Fig. 5.4 Distribution of storms for a given month from 1950 to 2013

Fig. 5.5 Distribution of ABC systems across the phylogenetic tree of BALOs. The phylogenetic
tree was constructed based on their 16S rRNA sequences using the Neighbor-Joining method.
The reliability of the tree was evaluated with 1,000 replicates of bootstrapping test and only
high bootstrap value scores (>50%) were indicated on the branches. In addition, each strain is
followed by its isolation habitat, total number of ORFs, as well as absolute and relative number
of ABC systems and other information. *Clusters were identified by a previous study [28]. 16S
rRNA sequences of strains BSW11, DB6, SEQ25 and BAL6 were extracted from their genomic
sequences according to the annotation.



5.5 Case Study II: Big Data in Environmental Microbiology
5.5.1 Background
Microorganisms are found in every habitat present in nature, such as rivers, soil,
and oceans. From the extreme environments of hydrothermal vents deep beneath
the ocean's surface to surface soils, they are ubiquitous. As the ability to identify
organisms, isolate novel compounds and their pathways, and characterize molecular and
biochemical cellular components rapidly expands, the potential uses of
biotechnology are also increasing exponentially (Nielsen and Lee 2012). Under
laboratory conditions, most environmental microorganisms, especially those living
under extreme conditions, cannot be cultured easily. The genomes of uncultured
organisms are thought to contain a wide range of novel genes of scientific and
industrial interest. Environmental genomic methods, which are analyses of mixed
populations of cultured and uncultured microbes, have been developed to identify
novel and industrially useful genes and to study microbial diversity in various
environments (Denman et al. 2015; Guo et al. 2016).
Microbial ecology examines the diversity and activity of microorganisms
in their environments. In the last 20 years, the application of genomics tools has
revolutionized microbial ecological studies and drastically expanded our view of
the previously underappreciated microbial world. This section introduces genomics
methods, including popular genome databases and basic computing techniques, that
have been used to examine microbial communities and their evolutionary history.

5.5.2 Genome Dataset
Global resources have many interconnected databases and tools in order to provide
convenient services for users from different areas (Table 5.3). At the start of
2016, Integrated Microbial Genomes & Microbiomes (IMG/M) had a total of
38,395 genome datasets from all domains of life and 8077 microbiome datasets,
out of which 33,116 genome datasets and 4615 microbiome datasets are publicly
available. The National Center for Biotechnology Information (NCBI) at the
National Institutes of Health in the United States and the European Molecular
Biology Laboratory/European Bioinformatics Institute (EMBL-EBI) are undisputed
leaders that offer the most comprehensive suites of genomic and molecular biology data collections in the world. All genomes of bacteria, archaea, eukaryotic
microorganisms, and viruses have been deposited to GenBank, EMBL Bank or
DNA Data Bank of Japan (DDBJ). Taking NCBI as an example, environmental
microbiologists use its literature resources, PubMed and PubMed Central, to access
the full text of peer-reviewed journal articles, as well as NCBI Bookshelf, which
has rights to the full text of books and reports. The central features of the NCBI
collection are nonredundant (NR) databases of nucleotide and protein sequences and

5 Big Data Solutions to Interpreting Complex Systems in the Environment


their curated subset, known as Reference Sequences or RefSeq (Pruitt et al. 2007).
The NCBI Genome database maintains genome sequencing projects, including
all sequenced microbial genomes, and provides links to corresponding records in
NR databases and BioProject, which is a central access point to the primary data
from sequencing projects. NCBI also maintains the Sequence Read Archive (SRA),
which is a public repository for next-generation sequence data (Kodama et al. 2012)
and GEO (Gene Expression Omnibus), the archive for functional genomics data
sets, which provides an R-based web application to help users analyze its data
(Barrett et al. 2009).
The NCBI’s Basic Local Alignment Search Tool (BLAST) (Cock et al. 2015) is
the most popular sequence database search tool, and it now offers an option to search
for sequence similarity within any taxonomic group, either from the NCBI web page or
on a local computer. For example, a user may search for similarity only in
cyanobacteria, or within a single organism, such as Escherichia coli. Alternatively,
any taxonomic group or organism can be excluded from the search. NCBI
BLAST also allows users to search sequence data from environmental samples,
providing a way to explore metagenomics data. The NCBI Taxonomy database
(Federhen 2012) is another useful resource for environmental microbiologists,
because it contains information for each taxonomic node, from domains and kingdoms
down to subspecies.

5.5.3 Analysis of Big Data
Because of the unprecedented increase in information available in public
databases, bioinformatics has become an important part of many areas of biology,
including environmental microbiology. In the fields of genetics and genomics, it
helps in mining sequencing information and annotating genomes. Bioinformatics
tools, many of them developed in the R language, help in the comparison of genetic
and genomic data and, more generally, in the understanding of evolutionary aspects
of molecular biology. A case study that demonstrates the basic process of analyzing
environmental microbial genomes using various bioinformatics tools is provided below.
In this study (Li et al. 2015), genome sequence data in the text-based FASTA format
were utilized. FASTA files representing eight Bdellovibrio and like organisms
(BALOs) genomes (Fig. 5.4): Bx BSW11 (NZ_AUNE01000059.1), Bx DB6
(NZ_AUNJ01000508.1), Bx SJ (NR 028723.1), Bx SEQ25 (NZ_AUNI01000021.1),
Bx BAL6 (NZ_AUMC01000010.1), BD HD100 (NR 027553.1), BD Tiberius
(NR 102470.1), BD JSS (CP003537.1), were downloaded from NCBI (ftp://ftp.
ncbi.nih.gov/genomes/) on March 23, 2014. ABC systems in these genomes were
identified using the bacteria-specific ATP-binding cassette (ABC) system
profile HMM and HMMER 3.0 hmmsearch at default settings. Sequences with a
domain-independent E-value ≤ 0.01 and a score/bias ratio ≥ 10 were accepted. The
ABCdb database (https://www-abcdb.biotoul.fr/), which provides comprehensive



information on ABC systems, such as ABC transporter classification and predicted
function (Fichant et al. 2006), was used to check predicted ABC systems.
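The acceptance criteria above can be expressed as a simple filter over search results. The sketch below is illustrative only: the tuple fields loosely mirror hmmsearch tabular output, and the hit names and the zero-bias handling are our assumptions, not from Li et al. (2015):

```python
def accept_hit(evalue, score, bias):
    """Keep a hit only if its domain-independent E-value is <= 0.01
    and its score/bias ratio is >= 10 (a zero bias is treated as passing)."""
    if evalue > 0.01:
        return False
    return bias == 0 or score / bias >= 10

# Hypothetical hits: (name, E-value, score, bias)
hits = [
    ("hit_A", 1e-30, 250.0, 2.0),   # strong hit: kept
    ("hit_B", 0.5,   15.0,  0.1),   # weak E-value: dropped
    ("hit_C", 1e-5,  20.0,  5.0),   # high bias relative to score: dropped
]
kept = [name for name, e, s, b in hits if accept_hit(e, s, b)]
```

In practice the same filter would be applied to the parsed hmmsearch output before checking candidates against ABCdb.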
Finally, a total of 269 putative ABC proteins were identified in BALOs. The
genes encoding these ABC systems occupy nearly 1.3% of the gene content in
freshwater Bdellovibrio strains and about 0.7% in their saltwater counterparts
(Fig. 5.6). The proteins found belong to 25 ABC systems families based on
their structural characteristics and functions. Among these, 16 families function as
importers, 6 as exporters and 3 are involved in various cellular processes. Eight of
these 25 ABC system families were deduced to be the core set of ABC systems
conserved in all BALOs. All Bacteriovorax strains have 28 or fewer ABC systems.
By contrast, the freshwater Bdellovibrio strains have more ABC systems,
typically around 51. In the genome of Bdellovibrio exovorus JSS (CP003537.1), 53
putative ABC systems were detected, representing the highest number among all the
BALOs genomes examined in this study. Unexpectedly high numbers of ABC systems
involved in cellular processes were found in all BALOs. Phylogenetic analysis (Fig.
5.6) suggests that the majority of ABC proteins can be assigned to separate
families with high bootstrap support (>50%). In this study, a general framework
of sequence-structure-function connections for the ABC systems in BALOs was
revealed, providing novel insights for future investigations.

5.5.4 Summary
Genomic (and other “omic”) information builds the foundation for a comprehensive
analysis of environmental microbes. It has become very important for environmental
microbiologists to know how to utilize genomic resources (databases and computational
tools) to enhance their research and to gain and share knowledge
in a useful way. One valuable option is to deposit the results of experimental
research, especially high-throughput data, to public repositories. Submission of
genomic and metagenomic sequencing and other similar data to public databases
has become mandatory. Similarly, when publishing their research papers, it is
necessary for authors to use standard database accession numbers that link with
genes, proteins, and other data sets described in the paper. Since many electronic
journals now provide hyperlinks to genomic databases, one can access relevant
data with one click. It is clear that with the development of biological databases,
research applications involving large datasets will play an increasingly important
role in environmental microbiological discovery.

5.6 Discussion and Future Work
In this chapter we have provided a basic introduction to a few available open
source tools and several applications to environmental related research for both

Fig. 5.6 Phylogenetic tree of all of the ABC systems in BALOs. The phylogenetic tree is constructed based on the ABC system domains of ABC systems.
Strain names are shortened for brevity on the phylogenetic tree using the Neighbor-Joining method. The branches of 9 common ABC systems families are
marked in deep green; the branches of expanded freshwater specific groups and salt water specific groups are separately marked in deep blue and light blue.
Representative families were labeled with family name followed by putative substrate in bracket. BD Bdellovibrio, Bx Bacteriovorax


academic and policy-related pursuits. Re-packaging complex information into useful
summaries is a major benefit of big data analysis and can serve as a jumping-off
point for researchers in a variety of fields. Yet it should be noted that it is
possible to go far beyond the types of analyses outlined herein.
Some researchers have begun to take things a few steps further by creating
predictive analytics to measure several possible outcomes of various environmental
adaptation and policy strategies (Bjarnadottir et al. 2011; Ulrichs et al. 2015).
The complexity of nature and the environmental challenges we currently face
demand the simplification of complex systems that big data analysis provides. On an
individual level there is so much more that can be done even in something as simple
as increasing public data collection and sharing. At this point there are also many
public databases available for exploration. Both NASA and NOAA (https://azure.
microsoft.com/en-us/features/gov/noaacrada/) boast several. For example, through
the Earth Science Data and Information System (ESDIS) Project, NASA offers large
datasets comprising continuous, global collections of satellite imagery that are
freely available to the public (Earth Observing System Data and Information System
(EOSDIS) 2009). The NOAA Big Data Project also offers datasets of near-real-time
environmental data.
Finally, one of the fastest growing areas of data collection is directly from public
sources. Individuals are now able to contribute to the creation of large datasets to
increase precision in ways that were previously impossible. A great example of this
is the collection of bird migration information from amateur bird watchers. This
citizen science project, known as eBird, from the Cornell Lab of Ornithology (CLO)
highlights the benefits of enlisting the public in gathering vast amounts of data and
has helped in addressing such issues as studying the effects of acid rain on bird
migration patterns. Through the eBird project CLO is able to collect over 1 million
avian observations per month.
At this time, remarkable progress has already been made in terms of data
collection, storage, and analysis capabilities. However, there is still much more
that can be explored, particularly in the use of big data analytics (Kitchin 2014;
Jin et al. 2015) in the Earth Sciences. In this chapter, we have only touched on the
basic skills and applications for analyzing environmental big datasets. More
exploration of data analysis tools for environmental datasets is in high demand.

Barrett, T., Troup, D. B., Wilhite, S. E., Ledoux, P., Rudnev, D., Evangelista, C., et al. (2009).
NCBI GEO: Archive for high-throughput functional genomic data. Nucleic Acids Research,
37(suppl 1), D885–D890.
Basha, E. A., Ravela, S., & Rus, D. (2008). Model-based monitoring for early warning flood
detection. In Proceedings of the 6th ACM conference on embedded network sensor systems
(pp. 295–308). ACM.
Belasen, A. R., & Polachek, S. W. (2009). How disasters affect local labor markets the effects of
hurricanes in Florida. Journal of Human Resources, 44(1), 251–276.



Bjarnadottir, S., Li, Y., & Stewart, M. G. (2011). A probabilistic-based framework for impact and
adaptation assessment of climate change on hurricane damage risks and costs. Structural Safety,
33(3), 173–185.
Blake, E. S., Rappaport, E. N., & Landsea, C. W. (2007). The deadliest, costliest, and most
intense United States tropical cyclones from 1851 to 2006 (and other frequently requested
hurricane facts) (p. 43). NOAA/National Weather Service, National Centers for Environmental
Prediction, National Hurricane Center.
Bossak, B. H., et al. (2014). Coastal Georgia is not immune: Hurricane history, 1851–2012.
Southeastern Geographer, 54(3), 323–333.
Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication and
Society, 15(5), 662–679.
Cock, P. J., et al. (2015). NCBI BLAST+ integrated into Galaxy. Gigascience, 4, 39.
Denman, S. E., Martinez Fernandez, G., Shinkai, T., Mitsumori, M., & McSweeney, C. S. (2015).
Metagenomic analysis of the rumen microbial community following inhibition of methane
formation by a halogenated methane analog. Frontiers in Microbiology, 6, 1087.
Earth Observing System Data and Information System (EOSDIS) (2009). Earth Observing System
ClearingHOuse (ECHO) /Reverb, Version 10.X [online application]. Greenbelt, MD: EOSDIS, Goddard Space Flight Center (GSFC) National Aeronautics and Space Administration
(NASA). URL: http://reverb.earthdata.nasa.gov.
Federhen, S. (2012). The NCBI Taxonomy database. Nucleic Acids Research, 40, D136–D143.
Fichant, G., Basse, M. J., & Quentin, Y. (2006). ABCdb: An online resource for ABC transporter
repertories from sequenced archaeal and bacterial genomes. FEMS Microbiology Letters, 256,
Frazier, T. G., Wood, N., Yarnal, B., & Bauer, D. H. (2010). Influence of potential sea level rise
on societal vulnerability to hurricane storm-surge hazards, Sarasota County, Florida. Applied
Geography, 30(4), 490–505.
Greenwood, P. L., Valencia, P., Overs, L., Paull, D. R., & Purvis, I. W. (2014). New ways of
measuring intake, efficiency and behaviour of grazing livestock. Animal Production Science,
54(10), 1796–1804.
Guo, J., Peng, Y., Fan, L., Zhang, L., Ni, B. J., Kartal, B., et al. (2016). Metagenomic analysis of
anammox communities in three different microbial aggregates. Environmental Microbiology,
18(9), 2979–2993.
Hampton, S. E., Strasser, C. A., Tewksbury, J. J., Gram, W. K., Budden, A. E., Batcheller, A. L.,
et al. (2013). Big data and the future of ecology. Frontiers in Ecology and the Environment,
11(3), 156–162.
Jin, X., Wah, B. W., Cheng, X., & Wang, Y. (2015). Significance and challenges of Big Data
research. Big Data Research, 2(2), 59–64.
Kelly, K. (2007). What is the quantified self. The Quantified Self, 5, 2007.
Khedo, K. K., Perseedoss, R., & Mungur, A. (2010a). A wireless sensor network air pollution
monitoring system. Preprint arXiv:1005.1737.
Khedo, K. K., Perseedoss, R., Mungur, A., & Mauritius. (2010b). A wireless sensor network air
pollution monitoring system. International Journal of Wireless and Mobile Networks, 2(2),
Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1),
Kodama, Y., Shumway, M., & Leinonen, R. (2012). The International Nucleotide Sequence
Database Collaboration. The sequence read archive: explosive growth of sequencing data.
Nucleic Acids Research, 40, D54–D56.
Landsea, C. W., & Franklin, J. L. (2013). Atlantic hurricane database uncertainty and presentation
of a new database format. Monthly Weather Review, 141(10), 3576–3592.
Landsea, C. W., et al. (2004). The Atlantic hurricane database re-analysis project: Documentation
for the 1851–1910 alterations and additions to the HURDAT database. In Hurricanes and
typhoons: Past, present and future (pp. 177–221).



Lehmann, R. J., Reiche, R., & Schiefer, G. (2012). Future internet and the agri-food sector: State-of-the-art in literature and research. Computers and Electronics in Agriculture, 89, 158–174.
Li, N., Chen, H., & Williams, H. N. (2015). Genome-wide comparative analysis of ABC systems
in the Bdellovibrio-and-like organisms. Gene, 562, 132–137.
Ma, Y., Wu, H., Wang, L., Huang, B., Ranjan, R., Zomaya, A., & Jie, W. (2015). Remote sensing
big data computing: challenges and opportunities. Future Generation Computer Systems, 51,
Malmstadt, J., Scheitlin, K., & Elsner, J. (2009). Florida hurricanes and damage costs. Southeastern
Geographer, 49(2), 108–131.
NHC Data Archive. Retrieved from , June 7, 2016.
Nielsen, J., & Lee, S. Y. (2012). Systems biology: The ‘new biotechnology’. Current Opinion in
Biotechnology, 23, 583–584.
Powell, M. D., Houston, S. H., Amat, L. R., & Morisseau-Leroy, N. (1998). The HRD real-time
hurricane wind analysis system. Journal of Wind Engineering and Industrial Aerodynamics,
77, 53–64.
Powell, M. D., Uhlhorn, E. W., & Kepert, J. D. (2009). Estimating maximum surface winds from
hurricane reconnaissance measurements. Weather and Forecasting, 24(3), 868–883.
Pruitt, K. D., Tatusova, T., & Maglott, D. R. (2007). NCBI reference sequences (RefSeq): A
curated non-redundant sequence database of genomes, transcripts and proteins. Nucleic Acids
Research, 35, D61–D65.
Ravi, M., & Subramaniam, P. (2014). Wireless sensor network and its security—A survey.
International Journal of Science and Research (IJSR), 3, 12.
Saffir, H. S. (1973). Hurricane wind and storm surge, and the hurricane impact scale (p. 423). Alexandria, VA: The Military Engineer.
Schott, T., Landsea, C., Hafele, G., Lorens, J., Taylor, A., Thurm, H., et al. (2012). The
Saffir-Simpson hurricane wind scale. National Hurricane Center. National Weather Service.
Coordinación General de Protección Civil de Tamaulipas. National Oceanic and Atmospheric
Administration (NOAA) factsheet. URL: http://www.nhc.noaa.gov/pdf/sshws.pdf.
Shiffrin, R. M. (2016). Drawing causal inference from Big Data. Proceedings of the National
Academy of Sciences, 113(27), 7308–7309.
Simpson, R. H., & Saffir, H. (1974). The hurricane disaster potential scale. Weatherwise, 27(8),
South Florida Regional Climate Compact (SFRCCC) 2012. Analysis of the vulnerability of Southeast Florida to sea-level rise. Available online: http://www.southeastfloridaclimatecompact.org/
wp-content/uploads/2014/09/regional-climate-action-plan-final-ada-compliant.pdf. Accessed
14 August 2016.
Swan, M. (2013). The quantified self: Fundamental disruption in big data science and biological
discovery. Big Data, 1(2), 85–99.
Ulrichs, M., Cannon, T., Newsham, A., Naess, L. O., & Marshall, M. (2015). Climate change
and food security vulnerability assessment. Toolkit for assessing community-level potential for
adaptation to climate change. Available online: https://cgspace.cgiar.org/rest/bitstreams/55087/
retrieve. Accessed 15 August 2016.
Wdowinski, S., Bray, R., Kirtman, B. P., & Wu, Z., et al. (2016). Increasing flooding hazard in
coastal communities due to rising sea level: Case study of Miami Beach, Florida. Ocean and
Coastal Management, 126, 1–8.

Chapter 6

High Performance Computing and Big Data
Rishi Divate, Sankalp Sah, and Manish Singh

6.1 Introduction
Big Data systems are characterized by variable and changing datasets from multiple
sources across language, culture and geo-location. The data could be in various
formats such as text, video or audio files. The power of Big Data analytics
compared to traditional relational database management systems (RDBMS) or data
warehouses is the fact that multiple disparate sources of information can be quickly
analyzed to come up with meaningful insight that a customer or an internal user can
take advantage of. Companies can build products or services based on the insights
provided by Big Data analytics platforms.
Datasets themselves are growing rapidly, with organizations seeing at least a
20% increase in data volumes year over year; as a result, Big Data systems need
to keep up with the demand to ingest, store, and process them. In a survey by
VansonBourne (2015), the top two drivers for
new Big Data projects are improved customer experience and the need to get new
Any perceived slowness, wherein the insight is stale, undermines the appeal and the
value proposition of Big Data systems, in which case customers will move to other
providers and internal users will simply not trust the data. For example, once you buy
a product online from Amazon.com, you instantly get recommendations (Linden
a product online from Amazon.com, you instantly get recommendations (Linden
et al. 2003) about what else to buy given your recent and past purchases, never mind
the kind of data-crunching that goes behind the scenes wherein large datasets are
analyzed by highly optimized clusters of machines.

R. Divate • S. Sah • M. Singh ()
MityLytics Inc., Alameda, CA 94502, USA
e-mail: rishi@mitylytics.com; sankalp@mitylytics.com; mksingh@mitylytics.com
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_6




In addition, with the growing adoption of the Internet of Things (IoT) (Xia et al.
2012) and mobile phones sending and receiving data in real time (Gartner n.d.),
performance becomes even more critical. Imagine what would happen if you fired up
your favorite ride-sharing application on your smartphone and saw information
that is a few minutes old. This is in spite of the fact that the application backend
needs to first collect location information from all cars in your city, sift out only the
ones that are available, and then present the car and model that works best for you.
All this has to happen within a few seconds for you to have enough faith to continue
using it.
Therefore, when you are starting a Big Data project, the first step is to understand
the kind of response time you are looking for; whether it is in hours, minutes,
seconds or sub-seconds.

6.2 High Performance in Action
Now that we have established the need for high performance, let us look at how to
make this happen.

6.2.1 Defining a Data Pipeline
To start with, let us define a data pipeline, which is essentially how data flows through the various components in a Big Data deployment, as shown in the figure below. Each component in the pipeline operates by splitting and replicating data across multiple machines or servers, analyzing each piece of data, and combining the results either to store categorized data or to come up with insight (Fig. 6.1).
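The split-analyze-combine flow described above can be sketched in miniature. The following is a single-process Python toy (function names and the event data are illustrative, not from any particular framework), where partitions stand in for machines:

```python
from concurrent.futures import ThreadPoolExecutor

def split(events, n_partitions):
    """Partition incoming events across workers (here: simple round-robin)."""
    parts = [[] for _ in range(n_partitions)]
    for i, e in enumerate(events):
        parts[i % n_partitions].append(e)
    return parts

def analyze(partition):
    """Per-partition analysis: count events per category."""
    counts = {}
    for category, _payload in partition:
        counts[category] = counts.get(category, 0) + 1
    return counts

def combine(results):
    """Merge per-partition results into one overall insight."""
    total = {}
    for r in results:
        for k, v in r.items():
            total[k] = total.get(k, 0) + v
    return total

events = [("click", 1), ("view", 2), ("click", 3), ("view", 4), ("buy", 5)]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(analyze, split(events, 3)))
print(combine(partials))  # {'click': 2, 'view': 2, 'buy': 1}
```

In a real deployment each partition would live on a different server and `analyze` would be a framework task (e.g. a Spark stage or a MapReduce mapper); the shape of the computation is the same.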


Events are pieces of information coming in from various sources, often in real-time. This could be from other databases, mobile phones, IoT devices, or user-generated data such as instant messages.


Once events are received, it is important that they are processed and categorized appropriately. Given that data is potentially coming in at a very high rate, it is critical that ingestion works fast and does not miss any data. Example: LinkedIn developed Apache Kafka (Kafka Ecosystem n.d.), a messaging system specifically designed for handling real-time data feeds, which currently processes over a trillion messages a day.

6 High Performance Computing and Big Data

Fig. 6.1 Data pipeline (showing the streaming and batch event processing paths)

Streaming Event Processing

This component of the pipeline is specifically designed to analyze and store categorized data from the ingestion component in real-time. As data is being received, depending on how critical it is, it may also need to be made available to customers or end-users in a live view before it is stored in a datastore. Stream processing is typically done on smaller sets of data; response time is sub-second, or seconds at the most, from when data is received. For example, Pinterest uses Apache Spark (Apache Spark™–Lightning-Fast Cluster Computing n.d.), a stream processing framework, to measure user engagement in real-time (MemSQL n.d.).

Batch Event Processing

Batch event processing is used for analyzing and storing high volumes of data in
batches in a scheduled manner. In batch processing, it is assumed that the results
of processing are not needed for real-time analysis unlike stream event processing.
Batch processing can take minutes or hours depending on the size of the dataset. For
example, Netflix runs a daily job that looks through the customer base to determine
the customers to be billed that day and the amount to be billed by looking at their
subscription plans and discounts (Netflix n.d.-b).
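A daily billing scan of the kind described can be sketched as a simple batch job. The customer records and the discount rule below are made up purely for illustration, not Netflix's actual schema:

```python
from datetime import date

# Hypothetical customer records: (id, plan_price, billing_day_of_month, discount)
customers = [
    ("c1", 9.99, 1, 0.0),
    ("c2", 15.99, 15, 0.10),
    ("c3", 9.99, 15, 0.0),
]

def daily_billing(customers, today):
    """Select customers whose billing day falls today and compute the amount due."""
    bills = []
    for cust_id, price, billing_day, discount in customers:
        if billing_day == today.day:
            bills.append((cust_id, round(price * (1 - discount), 2)))
    return bills

print(daily_billing(customers, date(2018, 3, 15)))
# [('c2', 14.39), ('c3', 9.99)]
```

The key property of batch jobs like this is that they scan the whole dataset on a schedule; at real scale the loop would be distributed across a cluster, but latency is measured in minutes or hours rather than seconds.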



Data Store

In modern Big Data systems, a data store is often used to store large datasets or files, typically across multiple machines. This is done by replicating data across the machines without the need for expensive hardware like a RAID (Redundant Array of Independent Disks) controller. To optimize storage performance, high-performance storage such as SSDs (Solid-State Drives) is generally used. Example: Netflix uses the Cassandra database to lower costs and for continuous availability and flexibility (Datastax n.d.-a).

Query/Data Warehouse Processing

This component of the pipeline is meant to be used for reporting and data analysis (Data Warehouse n.d.). In the context of Big Data, data warehousing systems are built upon distributed storage. Example: Facebook built Apache Hive (Apache Hive n.d.) to give end-users an easy-to-use interface to query their dataset, which spanned petabytes of data (Hive n.d.).

6.2.2 Deploying for High Performance
Given the difference in performance across various data processing, querying and storage frameworks, it is critical to first understand the performance requirements (seconds, minutes, hours) of your Big Data deployment before actually deploying it.
In subsequent sections, we will discuss performance-related tradeoffs based on where you deploy your Big Data stack and the kind of applications that you will run.

6.3 High-Performance and Big Data Deployment Types
There are many different ways an enterprise can choose to deploy their Big Data pipeline. They have an option to choose from various cloud-based configuration-ready vendors, on-premise vendors, cloud-based hardware vendors, or Big Data as a Service providers.
When an enterprise has to choose among the above-mentioned options, they typically weigh several deployment considerations that arise depending on the type of deployment.
Broadly, these considerations can be broken down into the following:
1. Provisioning
2. Deployability
3. Configurability

4. Manageability
5. Costs
6. Supportability
The components in the Big Data stack can be deployed in either one of the
following broad categories:
1. Platform as a Service (PaaS) (Platform as a Service n.d.): cloud-based config-ready deployment
2. Cloud-based hardware
3. On-premise
4. Big Data as a Service
We will now outline each of these deployment types in detail, focusing on what they do or do not offer.

6.3.1 Platform as a Service (PaaS): Cloud Based Config-Ready
For someone looking to get started without spending a lot on hardware, or on IT teams to monitor and troubleshoot clusters, cloud-based config-ready providers are the best bet. Since this type of solution comes with the Big Data software already installed, the operator does not have to install and configure most of the software pieces in the data pipeline. The major issue seen here is inflexibility: if problems arise on the cluster, it becomes very hard to discover the root cause of a failure, given that the various software components are pre-configured by the vendor. Another major issue is inconsistent performance at various times of the day, due to the multi-tenant nature of these platforms. If you need control over what version or type of software is installed, then this solution is not for you. In the short term, costs are lower compared to on-premise vendors and time to deploy is quick. Example vendors include Amazon Web Services (AWS) (Amazon Web Services n.d.) with its Elastic MapReduce (EMR) (AWS EMR n.d.) offering and Microsoft Azure's HDInsight (Microsoft Azure n.d.-a) offering.

6.3.2 Cloud-Based Hardware Deployment
This is similar to cloud-based config-ready solutions except that you can pick and choose the Big Data software stack. The vendor provides either dedicated bare metal or virtual machines on shared hardware with varied configurations (CPU, memory, network throughput, disk space, etc.). The operator needs to install and configure the Big Data software components; that is as far as the flexibility goes compared to config-ready cloud-based solutions. The option to have a single tenant on dedicated bare-metal machines gives more predictability and consistency. Example vendors include Packet.net (Premium Bare Metal Servers and Container Hosting–Packet n.d.), Internap (Internap n.d.), IBM Softlayer (SoftLayer n.d.), AWS EC2 (Amazon Elastic Compute Cloud n.d.) and Microsoft Azure (Microsoft Azure n.d.-b).

6.3.3 On-Premise Deployment
With this deployment type, an in-house IT team procures and manages all hardware and software aspects of Big Data clusters. Costs and security are generally the top drivers for these kinds of deployments. If data or processing needs change, in-house staff will have to add or remove machines and re-configure the cluster. In the long run, on-premise deployments can be more cost-effective, flexible, secure and performant, but time to deploy is much slower compared with cloud-based solutions.

6.3.4 Big Data as a Service (BDaaS)
Big Data as a Service is an up-and-coming offering wherein enterprises outsource the setup and management of the entire Big Data hardware and software stack to a vendor. Enterprises use the analytical capabilities of the vendor's platform either by directly accessing the vendor's user interface or by developing applications on top of the vendor's analytics platform once the data is ingested.
Internally, BDaaS systems can comprise:
1. Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) bundled together
2. PaaS and Software as a Service (SaaS) bundled together
3. IaaS, PaaS and SaaS bundled together
The implementation of the platform itself is transparent to the enterprise and is usually highly optimized by the vendor for cost. Time to deploy will be much faster than with all of the above options, but costs will generally be much higher, and an increase in performance will also result in an increase in the price of these systems. Example vendors include Qubole and Altiscale.

6.3.5 Summary
If Big Data software were given scores based on the six considerations mentioned earlier (Provisioning, Deployability, Configurability, Manageability, Costs, Supportability), then generally the cost is higher for higher scores on the other factors, yet performance per dollar is still pretty low.
To elaborate, the performance per dollar of PaaS systems is generally lower than that of other systems because today's generally available systems are still at a stage where applications are being transitioned from older systems, and the performance gain from custom-built clusters is considerable. For example, in the banking sector, newer BI applications realize order-of-magnitude gains with customised cluster deployments. For next-generation systems, which perform real-time analytics on high volumes of data coming in from all the devices in the world, performance considerations become paramount; we see this as the biggest challenge for Big Data platforms going forward.
Now that we have identified deployment types and their tradeoffs, let us look at specific hardware and software considerations. We do not consider pure BDaaS (Big Data as a Service) systems in our analysis, because one generally does not have control over the kind of hardware and software combinations provided by those types of vendors.

6.4 Software and Hardware Considerations for Building
Highly Performant Data Platforms
6.4.1 Software Considerations
Fig. 6.2 below shows sample Big Data ingestion, processing and querying technologies. All of the technologies shown (Hive (Apache Hive n.d.), Flume (Apache Flume n.d.), Pig (Apache Pig! n.d.), Spark (Apache Spark™–Lightning-Fast Cluster Computing n.d.), Hadoop-MR (MapReduce n.d.), Storm (Apache Storm n.d.) and Phoenix (Apache Phoenix n.d.)) are open-sourced via the Apache Software Foundation.
Let us look at what drives the requirements for software stacks.

Data Ingestion Stacks

Rate of Data Ingestion
Companies such as LinkedIn and Netflix (Netflix n.d.-a) report millions of events per second, corresponding to tens and sometimes hundreds of billions of writes per day. This will likely increase given the expected proliferation of sensors driven by an IoT world. An organization hoping to handle large streams of data should plan on systems that can handle tens of millions of events/messages per second.
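The arithmetic behind such planning is simple but worth making explicit. A quick sketch (the rates are illustrative):

```python
def writes_per_day(events_per_second):
    """Convert a sustained ingest rate into a daily write count."""
    return events_per_second * 86_400  # seconds in a day

# 1 million events/s is roughly 86 billion writes/day
assert writes_per_day(1_000_000) == 86_400_000_000

# Planning for tens of millions of events/s means closing in on a trillion writes/day
print(f"{writes_per_day(10_000_000):,}")  # 864,000,000,000
```

Numbers at this scale drive every downstream decision in the pipeline: buffer sizes, replication overhead and storage capacity all multiply from the per-second rate.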



Fig. 6.2 Big Data ingestion technology

Replication for High Availability
Replication is a feature of most Big Data systems, wherein multiple copies of the
same data exist across various storage devices. Distributed software stacks such as
Kafka are very good at replication and having this feature in the software makes
it possible for data pipelines to work at a higher level of abstraction without the
need for specialized hardware such as RAID. In order to replicate efficiently one
has to establish the amount of replication levels, depending on the access patterns of
the data from different clients. Clients may include stream processing engines such
as Spark (Apache Spark™–Lightning-Fast Cluster Computing n.d.) and/or batch
processing frameworks such as Hadoop (MapReduce n.d.). A normal replication
factor is three and if needed it should be increased but this would typically mean
that more RAM and storage potentially could be needed.

Replication for Fault-Tolerance
There may be scenarios in which one requires replication so that if data pipelines fail, data events or messages are retained to be accessed at a later stage. Typically, storage in the order of terabytes should be set aside for this type of fault-tolerance, given that typically a day's worth of data would need to be accessible by a batch processing engine.



Low Latency Processing Hooks
The message or event ingestion system should provide hooks for data processing (e.g. Storm spouts) (Apache Storm n.d.), where processing systems can hook in. What is important here is low latency and some amount of buffering. The amount of buffering would typically depend on the latency tolerance of the system. So, for example, data processing systems that can tolerate hundreds of milliseconds of delay should have that kind of buffering built into the event engine or the data processing engine. Doing some simple math, 100 milliseconds of delay with 10 million messages coming in per second translates to a million messages worth of data being buffered, which would amount to 200 MBytes of buffer space assuming an average message size of 200 bytes. Buffering also happens between the storage system and the data ingestion system, but that would typically be hardware dependent.
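The buffering estimate above can be captured in a small helper; the numbers match the worked example (100 ms tolerance, 10 million messages/s, 200-byte messages):

```python
def buffer_bytes(latency_ms, msgs_per_sec, avg_msg_bytes):
    """Messages in flight during the tolerated delay, times average message size."""
    buffered_msgs = msgs_per_sec * latency_ms / 1000
    return buffered_msgs * avg_msg_bytes

# 100 ms * 10M msg/s = 1M messages; at 200 bytes each, that is 200 MB
mb = buffer_bytes(100, 10_000_000, 200) / 1_000_000
print(mb)  # 200.0
```

Plugging in your own latency tolerance and message profile gives a first-order buffer budget before any framework-specific tuning.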

Data Processing Stacks

Low latency (tens of milliseconds of delay) is the maximum that should be planned for, since real-time analytics means no further delays can be tolerated. When considering interfaces for writing to data stores, fast interfaces such as Storm (Apache Storm n.d.) bolts are key.

Data Stores

A few universally important considerations for data stores are:
1. Read performance—Latency and throughput for reads from the storage system.
2. Write performance—Latency and throughput for writes to the storage system.
3. Query performance—Throughput in queries/second and individual query performance.

Indexing and Querying Engine

Indexing and querying engines are increasingly being used as front-ends and, through well-published APIs, are being integrated into applications. They are available on-premise as well as as a service in the cloud. Some examples are the ELK (ELK Stack n.d.) stack from Elastic, keen.io (Keen n.d.) and Solr. Their key performance indicators are:
• Latency—on the order of 100 milliseconds, including speed of indexing
• Throughput—A measure of how many concurrent queries can be run
• Real-Time—The responses for the system should be in real-time or near real-time



• Performance with scale—How the system scales up with more queries and how
indexing performs
• Replication—An important question to ask is: are replicas real-time too?

6.4.2 Getting the Best Hardware Fit for Your Software Stack
The building blocks for each component cluster of the data pipeline software stack are discussed in detail here. Given that Big Data systems use a distributed model, where data storage and analytics are spread across multiple machines connected via a high-speed network, the discussion is framed in terms of the following:
1. Compute nodes
2. Storage
3. Networking
We will be using hardware from AWS (Amazon Web Services n.d.) as an
example and will use information that is publicly available from the AWS website
(Amazon Web Services n.d.).

Data Ingestion Cluster

For a data ingestion cluster such as Kafka (Apache Kafka n.d.), a million writes per second is a goal that is achievable with commodity machines, and it is something that happens in systems today that sustain tens of billions of writes per day (e.g. a social media site like Facebook or LinkedIn). Here are the node types that would be required in the different cloud providers' offerings.
A million writes per second translates to around 80 billion writes a day, par for the course for data processing systems in industries such as advertising or verticals such as finance. Given that one would need at least a day's worth of data saved on disk for retrieval by batch processing systems, the data ingestion cluster would need around 80,000,000,000 × 300 = 24,000,000,000,000 bytes = 24 terabytes of storage, given that each event or message is on average 300 bytes. Add 3x replication and that amounts to 72 TB of storage. The process of choosing an instance type (EC2 Instance Types—Amazon Web Services (AWS) n.d.) for this cluster in AWS goes something like this.

• Compute nodes
– EC2, i2.xlarge



• Storage
– Elastic Block Storage (EBS) (Amazon Elastic Block Store n.d.) or directly attached SSD. A combination would need to be used given the large amount of data. However, with more EBS (Amazon Elastic Block Store n.d.), the access times for log messages go up; how much this matters depends on how many events need to be available in real-time. Up to a terabyte of data can be available and replicated 3x on the six i2.xlarge nodes listed here, which should be fine for most systems.
• Networking
– Enhanced 1 Gig or 10 Gig. If the data ingestion cluster is to be co-located with the data processing cluster, then the moderate network performance (400–500 Mbits/s) that comes with the i2.xlarge system will suffice. However, the needs of the data processing cluster, which requires large amounts of RAM (120 GB per machine for a message processing system that can keep latency at 100s of milliseconds with a million messages per second), will drive the choice towards r3.4xlarge, which in any case means High-Performance (700/800 Mbits/s) networking comes with it. The cost of this cluster would be $7.80/hour just for ingestion; the cost of the data processing cluster would be an add-on. One needs to be mindful of the resource contention that might occur in a co-located cluster, where multiple software stacks are running at the same time. If the ingestion cluster were to be separate from the data processing cluster, then the choice would be to go with 6 nodes of c3.8xlarge, which is somewhat of an overkill, but that is AWS pricing for you. The cost of the above c3.8xlarge cluster would be $10/hr, which amounts to $240/day, for annual costs of $86,400, excluding costs for support, tertiary storage and bandwidth. One could get a yearly discount of 45% if pre-paid, so one could expect annual costs of around $50,000 for just the ingestion cluster.
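The sizing and cost arithmetic above, collected in one place (AWS prices as quoted in the text, which will have changed since; the 360-day year follows the text's $240/day × 360 calculation):

```python
def ingestion_storage_tb(writes_per_day, avg_bytes, replication=3):
    """Raw storage needed to retain one day of writes, with replication."""
    return writes_per_day * avg_bytes * replication / 1e12

def annual_cost(hourly_rate, prepay_discount=0.0):
    """Annual cluster cost, using the text's 360-day year."""
    return hourly_rate * 24 * 360 * (1 - prepay_discount)

# 80 billion writes/day at 300 bytes each, 3x replicated -> 72 TB
assert ingestion_storage_tb(80_000_000_000, 300) == 72.0

# $10/hr -> $86,400/yr; the ~45% prepay discount brings it near $50,000
print(round(annual_cost(10)))        # 86400
print(round(annual_cost(10, 0.45)))  # 47520
```

Keeping this math in a script makes it easy to re-run the sizing exercise whenever message rates, retention windows or instance prices change.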
Let us now consider an on-premise cluster with the same characteristics as above. In order to do 1 million writes per second, we expect six machines. Note that if the networking is only 1 Gbps and the data processing cluster runs on a separate cluster, the networking will not suffice; and if the data processing cluster is co-located with the messaging cluster, then more RAM, at least 120 GB, will be required. A good per-node configuration for a non-co-located cluster would therefore be:
• Six SSD drives with 1 TB each
• 120 GB of RAM
• 10 Gbps Ethernet



Stream or Data Processing Cluster

Stream processing engines come in different shapes, sizes and profiles. Examples of these engines include Spark (Apache Spark™–Lightning-Fast Cluster Computing n.d.) and Concord (n.d.).
They work with different models of execution (for example, micro-batches), so their needs differ. However, most work in-memory and therefore require large amounts of memory for low-latency processing. For example, an aggregate RAM of 300 GB is required to run a benchmark such as spark-perf (Databricks spark-perf n.d.) at a scale of 1.0. Balancing the resource usage of a data processing cluster against other clusters is key to building a robust data pipeline.

Hardware Requirements for Data Processing Cluster
If one were to go for the cheapest option in AWS (Amazon Web Services (AWS) n.d.), which is 20 m3.xlarge nodes offering 300 GB of RAM and about 1.6 TB of storage across the cluster, and the data storage cluster is co-located (as is being recommended by a lot of vendors), then clearly the total storage available is not enough, and neither is the RAM, which only suffices to run the data processing cluster. This option is also not suitable when changing over to a non-co-located data processing cluster, since network throughput is only about 800 Mbps.
For a co-located cluster, based on RAM, CPU and disk, the best configuration in terms of cost/performance tradeoff would be r3.8xlarge nodes, primarily because of high RAM and high storage, amounting to 244 GB and 640 GB respectively.
For a non-co-located cluster, the best node configuration would be c3.8xlarge, given that it comes with 10 Gbps network connectivity and 64 GB of RAM per node, so about five nodes would be required from a RAM perspective.
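The node-count reasoning above generalizes to a one-line ceiling division (instance RAM figures as quoted in the text):

```python
import math

def nodes_needed(required_ram_gb, ram_per_node_gb):
    """Smallest node count whose aggregate RAM meets the requirement."""
    return math.ceil(required_ram_gb / ram_per_node_gb)

# 300 GB aggregate RAM (spark-perf at scale 1.0) on 64 GB c3.8xlarge nodes
print(nodes_needed(300, 64))  # 5

# ...versus 15 GB m3.xlarge nodes
print(nodes_needed(300, 15))  # 20
```

The same helper applies to any single bottleneck resource (storage, cores); in practice you size for each resource separately and take the maximum.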

Data Stores

Apache Cassandra (Cassandra n.d.), Aerospike (Aerospike High Performance NoSQL Database n.d.), ScyllaDB (World Fastest NoSQL Database n.d.), MemSQL (Real-Time Data Warehouse n.d.) and more come out every day. The best hardware fit for a given cluster depends on the specific software stack, but configurations generally veer towards high RAM and large amounts of directly attached SSD. The vendors of such stacks typically overprovision hardware to be on the safe side, so configurations such as 32 cores, 244 GB of RAM and 10 TB of storage per node are common. These vendors also typically recommend co-locating the data processing cluster with the data storage cluster, for example Cassandra data nodes co-located with Spark workers. This results in contention for resources and requires tuning for peak performance. We have also observed, while implementing projects at large customers, that performance depends on the fit of the data model to the underlying hardware infrastructure. Getting RoI (return on investment) and keeping TCO (total cost of ownership) low becomes a challenge for enterprises trying to deploy these clusters. One of the best starting points for companies looking to deploy high-performance clusters is to run benchmarking tools to figure out the performance of their hardware in terms of the peak number of operations their infrastructure can support, and then work backwards, profiling their applications on smaller PoC (proof of concept) clusters to characterize performance in terms of peak operations and obtain trends. This will help IT teams to better plan, deploy, operate and scale their infrastructure. Good tools are therefore an essential requirement for such an exercise.
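The benchmark-then-extrapolate idea is, at its core: measure operations per second on a PoC node, then divide the target workload by that figure. A toy sketch (the timed lambda is a stand-in for a real benchmark such as TeraSort or YCSB, and the extrapolation ignores coordination overhead):

```python
import time

def measure_ops_per_sec(op, iterations=100_000):
    """Time a workload and report sustained operations per second."""
    start = time.perf_counter()
    for _ in range(iterations):
        op()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

def cluster_size(target_ops_per_sec, ops_per_node):
    """Work backwards from target load to a node count (ceiling division)."""
    return max(1, -(-target_ops_per_sec // ops_per_node))

ops = measure_ops_per_sec(lambda: sum(range(100)))
print(f"PoC node sustains ~{ops:,.0f} ops/s")
print("nodes for a 10M ops/s target:", cluster_size(10_000_000, int(ops)))
```

Real benchmarking tools do the same thing with representative workloads and report trends over time rather than a single number, but the planning arithmetic is unchanged.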

Indexing and Query-Based Frameworks

Quite a few of these frameworks are available as services in the cloud and/or on-premise, such as the Elastic Stack (Elastic n.d.), Keen.io (Labs n.d.) and Solr (Apache Lucene™ n.d.). All these frameworks are heavily memory-bound. The number of concurrent queries that can typically be supported is in the hundreds of thousands, with a response time of less than a second, on a cluster of 20 nodes where each node has at least 64 GB of RAM. The challenge is to keep up the performance as concurrent queries per second grow. The more replicas there are, the faster the query response will be under a higher number of concurrent requests. At some point the number of queries per second no longer improves with more memory, at which point one has to change the caching policies on the cluster to be able to serve queries in near real-time. The key problems that we generally observe for scale and performance are Java garbage collection, cache performance and limitations on the size of heap memory allocated to the Java Virtual Machine (JVM), if present. We see this as the single most important area of research and improvement.

Batch-Processing Frameworks

Batch processing MapReduce frameworks (MapReduce Tutorial n.d.) such as Hadoop have gained in popularity and become common. In order to find the right hardware fit for these frameworks, one needs to figure out the peak, sustained and minimum performance, in framework operations per second, that can be executed on the cluster. This can be done by running different types of benchmark applications, categorized as I/O-bound (disk or network), compute-bound or RAM-bound jobs, in the cluster and observing the operations per second achieved with the different types of workloads. In our tests, a typical configuration offering a good price-performance tradeoff for running an application such as Terasort (O'Malley 2008) at 1 TB size requires about 12 TB of storage for a replication factor of three, with five nodes, each of which has eight cores and 64 GB of RAM. To maximize cluster usage and drive down RoI time, one needs a good job mix in the cluster with a good scheduling mechanism, so as to prevent contention for the same resources (memory or network).



Interoperability Between Frameworks

In this section we discuss the possibility of running two frameworks together,
wherein either both run on the same cluster or on different clusters. We show what
to expect with Cassandra-Spark and Kafka-Spark combinations.

Cassandra and Spark Co-Located
The recommendation today from the vendors of Spark and Cassandra is to co-locate Spark and Cassandra nodes. To do so, let us examine what kind of hardware resources we will need:
• Both stacks use significant memory; therefore we recommend at least hundreds of gigabytes of memory on each node.
• Since Cassandra is a data store, we recommend state-of-the-art SSD or NVME (Jacobi 2015) drives and at least 1 TB of storage per node.
• Spark will use significant CPU resources, given it is a processing engine; therefore we recommend tens of CPUs per node.
• Based on our experience, we recommend a peak network bandwidth of at least 1 Gbps. This should suffice for the two frameworks to operate in unison, given that Spark processing should be restricted as far as possible to data stored on the same node.

Cassandra and Spark located on Different Machines
With this configuration, we are looking at the following requirements:
• Moderate RAM requirements, which could be as low as tens of gigabytes of RAM per node
• Storage requirements will still be high for Cassandra nodes, and we recommend at least 500 GB of SSD/NVME per node
• Networking requirements in this case are much higher, given that data will be continuously transferred between Spark and Cassandra; therefore we recommend a network with a peak bandwidth of at least 10 Gbps
• In this case, we recommend tens of cores for each Spark node and about half of those for the Cassandra nodes
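Whether a given link can carry the inter-cluster traffic is, again, simple arithmetic. A sketch using the message rates from earlier sections (rates and sizes are the text's illustrative figures):

```python
def required_gbps(msgs_per_sec, avg_msg_bytes):
    """Bandwidth needed to stream messages between clusters, in Gbps."""
    return msgs_per_sec * avg_msg_bytes * 8 / 1e9  # bytes/s -> bits/s -> Gbps

# 1M messages/s at 300 bytes each needs 2.4 Gbps of sustained bandwidth
rate = required_gbps(1_000_000, 300)
print(rate)        # 2.4
print(rate <= 10)  # True: fits within the recommended 10 Gbps link
print(rate <= 1)   # False: would saturate a 1 Gbps link
```

This is a floor, not a budget: replication traffic, retries and bursts all add headroom requirements on top of the steady-state rate.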
Even with the above ballpark figures, it is hard to estimate the exact capacity of such clusters. One therefore needs ways and means of estimating capacity by looking at existing clusters and profiling for operations per second, so that capacity can be planned.


Spark and Kafka
If Spark nodes and Kafka nodes are to be deployed together, then some of the most important considerations are RAM (and the contention for it) and the disparate amounts of storage required. Kafka requires moderate amounts of storage (in the terabyte range while streaming events of size 300 bytes at a million messages per second), while Spark typically works with in-memory datasets, although some amount of buffering may be planned to avoid data loss. This data can also be used by batch processes, or by Spark with a small delay. There are both receiver and receiver-less approaches to reading data from Kafka into Spark; receiver-less approaches typically result in lower latencies.
Therefore, Spark nodes and Kafka nodes can be co-located, but one needs to monitor RAM usage closely and take appropriate action to start buffering if memory starts running out, so that messages are not lost. If there is sufficient network throughput with link bonding (20–40 Gbps) and redundant links, then it is best to separate the clusters. One caveat, though, is monitoring and effective resource utilization, so that resource managers do not schedule network-intensive workloads on these clusters. As we will see in the following sections on hyper-converged infrastructure, there may be another way of solving this conundrum of co-locating different clusters when planning for high performance.


In Sect. 6.4.2, we have looked at how to get the best hardware fit for your software stack, and what stands out is that it is a non-trivial exercise to figure out what hardware to start your deployment with and to keep it running for high performance.
We recommend understanding the workings and limitations of each component (ingestion/processing/query) in your data pipeline and specifically identifying the following characteristics:
• Number of events per second or millisecond generated by each component
• Latency between components
• Overall data volume (number of bytes ingested/stored/processed)
Given the above information, one can move on to the next stage of planning the size of your overall production cluster (more details on how to do so are explained in Sect. 6.5) and determining various co-location strategies.



6.4.3 On-Premise Hardware Configuration and Rack
Based on the discussion so far, when designing a cluster for deployment in an on-premise data center, the following are the options and the most preferred deployment scenarios for each.


1. Indexing and query engines on one rack
2. Spark and Cassandra on one rack
3. Kafka on another rack
Racks are typically laid out with a redundant top-of-rack switch connected in a leaf-spine topology in the data center. The top-of-rack switches should have dual connectivity to the servers in the rack, which implement storage as well as cluster nodes. Cassandra is rack-aware, so replicas are typically placed on different racks for redundancy purposes. Enterprise Cassandra vendors such as DataStax (DataStax n.d.-b) advise against using distributed storage, so all storage should be local to the node.

On-Premise Converged Infrastructure

Converged infrastructure offerings from companies like Nutanix (Nutanix n.d.) are a new way of providing all infrastructure elements in one chassis. Nutanix implements its own virtualized infrastructure to provide compute and storage together. The benefits are obvious; however, virtualized infrastructure poses challenges for high-performance computing due to multi-tenancy. The typical deployment for such infrastructure would be to have all data pipeline stacks on one chassis in a rack, with inter-chassis replication for fault tolerance in case of rack failures.

6.4.4 Emerging Technologies

Software Defined Infrastructure (SDI)

Software Defined Infrastructure (SDI), which encompasses technologies such as Software Defined Networking (SDN) (Software-Defined Networking (SDN) Definition n.d.), Software Defined Compute (SDC) (Factsheet–IDC_P10666 2005) and Software Defined Storage (SDS) (A Guide to Software-Defined Storage n.d.), is a new paradigm of deploying infrastructure in which the hardware is logically separated from the software that manages it, and infrastructure can be programmatically provisioned and deployed with a high degree of dynamic control, as opposed to statically allocating large sets of resources. This paradigm is perfectly suited to Big Data applications, since these technologies can be used to dynamically scale infrastructure based on data volumes or processing needs, in an automated manner driven by defined SLAs or performance requirements.
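The SLA-driven scaling loop that SDI enables can be sketched as a plain decision function. The thresholds, step size and hysteresis band below are made-up policy for illustration, not any vendor's API:

```python
def scaling_decision(p99_latency_ms, sla_ms, current_nodes,
                     min_nodes=3, max_nodes=100, step=2):
    """Return the new node count: scale out when latency breaches the SLA,
    scale in when there is ample headroom (below half the SLA)."""
    if p99_latency_ms > sla_ms:
        return min(max_nodes, current_nodes + step)
    if p99_latency_ms < sla_ms / 2:
        return max(min_nodes, current_nodes - step)
    return current_nodes

print(scaling_decision(450, 300, 10))  # 12: SLA breached, add nodes
print(scaling_decision(100, 300, 10))  # 8: headroom, remove nodes
print(scaling_decision(200, 300, 10))  # 10: within band, hold steady
```

An SDI platform would feed such a function with live metrics and act on its output by provisioning or releasing resources; the hysteresis band keeps the cluster from oscillating on every small latency fluctuation.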
Software Defined Networking (SDN)
Software Defined Networking is defined by technologies or switches that allow operators to control and set up networking flows, links or tunnels. A typical use case of SDN is reconstructing a network topology on top of an underlying physical topology to avoid hotspots. Vendors in this space include the Cumulus Virtual Appliance (Better, Faster, Easier Networks n.d.) and Cisco's (n.d.) SDN APIs on its routers.

Software Defined Storage (SDS)
Software Defined Storage (SDS) is an approach to data storage in which the
programming that controls storage-related tasks is decoupled from the physical
storage (Virtual Storage n.d.). In a Big Data context, this means the ability to
provision storage dynamically as datasets grow, potentially across different storage
types (such as SSDs and HDDs), which reduces hardware and storage management
costs. Application performance may or may not be directly affected, depending on
how the SDS solution is implemented, but this is a promising long-term approach
to scalable storage. Vendors in this space include EMC's ScaleIO
(ScaleIO/Software-Defined Block Storage n.d.) and HP's StoreVirtual VSA
(Virtual Storage n.d.).
Software Defined Compute (SDC)
Software Defined Compute (SDC) is about automatically adding or removing
servers in a cluster on demand as processing load goes up or down. This is
especially important in Big Data systems: one can scale up when the compute
performance of the cluster drops below desired levels and additional resources are
needed. SDC can be achieved either through virtualization, with vendors such as
VMware in a data center, or via cloud-based providers such as AWS EC2.
Since these are emerging technologies, little research has been done to showcase
their benefits in Big Data scenarios, and we look forward to such case studies
going forward.
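As a concrete illustration, the scale-up/scale-down decision that SDC enables might be sketched as follows; the thresholds, node counts and proportional-growth rule are illustrative assumptions, not any vendor's actual autoscaling policy:

```python
# Hypothetical sketch of an SDC scaling decision: compare observed cluster
# utilization against target bounds and compute how many worker nodes to
# request for the next provisioning cycle. All thresholds are assumptions.

def scaling_decision(cpu_utilization, current_nodes,
                     high_water=0.80, low_water=0.30, min_nodes=3):
    """Return the desired node count for the next provisioning cycle."""
    if cpu_utilization > high_water:
        # Cluster is saturated: grow proportionally to the overshoot.
        desired = int(current_nodes * (cpu_utilization / high_water)) + 1
    elif cpu_utilization < low_water and current_nodes > min_nodes:
        # Cluster is idle: shrink, but never below the replication minimum.
        desired = max(min_nodes, current_nodes - 1)
    else:
        desired = current_nodes
    return desired

print(scaling_decision(0.95, 8))   # saturated cluster: grow
print(scaling_decision(0.10, 8))   # idle cluster: shrink
print(scaling_decision(0.50, 8))   # within bounds: unchanged
```

In a real deployment, the returned node count would be passed to an SDI provisioning API (e.g. a cloud provider's instance API) rather than printed.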


R. Divate et al.

Advanced Hardware

NVMe for Storage
Non-Volatile Memory Express, or NVMe (Jacobi 2015), is a communications
interface protocol that enables an SSD to transfer data at much higher speed than
traditional SATA or SAS (Serial Attached SCSI). It makes use of the
high-bandwidth PCIe bus for communication. With faster speeds and lower latency
compared to SATA (Wilson 2015), the time taken to access storage to process and
shuffle data in a Big Data deployment cluster is greatly reduced.
HyperConverged Infrastructure
This involves combining compute, storage and networking into a single chassis.
Compared to traditional infrastructure, performance improves due to the lower
latency in transferring data among compute, storage and networking nodes. Nutanix
(Nutanix n.d.) and Hypergrid (Hyperdrive Innovation n.d.) are a few of the
companies that provide hyper-converged infrastructure equipment to enterprises.

Intelligent Software for Performance Management

In Sect. 6.4, we articulated various hardware and software considerations that
determine the performance of a Big Data system. Given the sheer complexity
and number of options, we at MityLytics find there is a need for intelligent
software that:
• Measures and reports the performance of individual Big Data jobs, components and the data pipeline as a whole
• Makes and executes intelligent decisions about how to boost the performance
of a data pipeline, by optimizing the current infrastructure or by using APIs
to provision or deprovision SDI (Software Defined Infrastructure) elements
given known or current workloads
• Predicts future needs of the infrastructure based on current performance
and resources
We are so convinced of the need for such intelligent, data-driven analytics software
that we at MityLytics have been developing software to do exactly the above. To
us it makes total sense to use analytics to solve the performance issues of analytics
platforms.

6 High Performance Computing and Big Data


6.5 Designing Data Pipelines for High Performance
Now that we have outlined various considerations for high performance, let us look
at how one can measure performance and design a data pipeline so that it scales
with varying dataset sizes (Fig. 6.3).
To do so, we at MityLytics typically go through the following steps in sequence
for each component in the pipeline:
1. Identify the series of steps in your Big Data application that describes what you
typically need to do for that component in the Big Data pipeline; this may be a
streaming or batch job that processes incoming messages, or a Hive query that
runs across large datasets. Let us call this set of steps an application job.
2. Deploy the Big Data application on a small cluster.
3. Run an application job with a small dataset and measure the time taken and
infrastructure statistics (e.g. CPU utilization, memory consumed, bandwidth
observed) across the various machines in your cluster. At this point, you are just
characterising how your application behaves, not whether it meets your end
performance goal.
4. If the application is deployed on shared infrastructure, run your job multiple
times and record average statistics across all your runs.
5. Increase the size of the dataset and repeat steps 3 and 4.
6. Once you have enough data points, you can plan infrastructure needs for the
application as the dataset is scaled up.

Fig. 6.3 TPC-DS benchmark



Fig. 6.4 Big Data performance measure

7. For example, in Fig. 6.4 we have plotted dataset size against the time taken in
milliseconds to execute two Big Data queries from the TPC-DS benchmark
(TPC-DS n.d.). In this example, we see that the time taken to execute these
queries initially grows linearly with the dataset size, but as datasets grow larger,
the time taken increases quadratically. On closer examination of the various Big
Data operations, we see that this is on account of the Big Data reduce operation
(red line) shown in the graph (Fig. 6.4). One should aim to create similar plots
for a Big Data application job, so as to show how time taken and system resource
utilization increase as dataset sizes increase.
8. Once an application is deployed in production, we use our application performance management software to ensure that the application job continues to meet
the intended levels of performance and that the underlying infrastructure is being
used efficiently.
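The measurement loop in steps 1–6 and the scaling check in step 7 can be sketched as follows; `run_job` is a placeholder for submitting a real streaming or batch job, and here it merely simulates a quadratic workload:

```python
# Sketch of the methodology above: run an application job at increasing
# dataset sizes, average repeated runs to smooth out shared-infrastructure
# noise (step 4), then estimate the local scaling exponent between
# successive points -- near 1 indicates linear scaling, near 2 quadratic.
import math
import statistics
import time

def run_job(dataset_size):
    # Stand-in workload; in practice this would launch e.g. a Hive query
    # or a Spark job over a dataset of the given size.
    time.sleep(2e-4 * dataset_size ** 2)

def profile(dataset_sizes, runs_per_size=3):
    points = []
    for size in dataset_sizes:
        timings = []
        for _ in range(runs_per_size):       # step 4: average repeated runs
            start = time.perf_counter()
            run_job(size)                    # step 3: run and measure
            timings.append(time.perf_counter() - start)
        points.append((size, statistics.mean(timings)))
    return points                            # step 6: data for planning

def scaling_exponents(points):
    """Local exponent between successive (size, time) points."""
    return [math.log(t2 / t1) / math.log(n2 / n1)
            for (n1, t1), (n2, t2) in zip(points, points[1:])]

points = profile([4, 8, 16, 32])
print([round(e, 1) for e in scaling_exponents(points)])
```

On a quadratic workload the exponents trend toward 2.0, which is exactly the inflection one looks for in plots like Fig. 6.4.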

6.6 Conclusions
As we have seen above, data analytics is key to the operation and growth of
data-driven businesses, and deploying high performance analytics applications
leads to top-line growth. Given the importance of data analytics platforms in this
environment, it is imperative that data pipelines work in a robust manner with
Quality of Service (QoS) guarantees.



It is of the utmost importance, then, to plan data platforms with the right
capacity planning tools so that desired levels of performance are met when the
platforms are scaled up to handle more data. Once the platforms are deployed,
it is imperative that they are continuously monitored using key metrics, which
could point to degradation of performance or, worse, interruptions in service
that could result in loss of revenue. We at MityLytics (MityLytics
n.d.) believe that analytics is so important that we use analytics to help drive the
performance, and hence the viability, of analytics platforms.
What is even more desirable, however, is to have software and mechanisms to fix
problems proactively. When such auto-remediation is not possible, support system
tickets should be raised that can be manually remediated. The state of the art
today is such that operations teams engage in trial-and-error methodologies, which
increase the time to RoI (Return on Investment) and the total cost of
ownership of the data platform, thereby reducing the productivity of development
and operations teams. It has also been our experience that consultants from software
vendors tend to play it safe when designing clusters and frequently overprovision
resources, resulting in increased time to RoI.
One of the most common problems deployments run into is resource contention,
since most software stacks are designed to be greedy so that they perform best
when working in isolation. However, as we have seen in the previous section,
in most practical scenarios distributed software systems will be working together
and sharing resources such as compute, storage and networking, so it is very
important that software stacks be evaluated with multiple components working in
unison. In conclusion, we recommend using software tools to plan, monitor and
auto-tune in real time and to perform proactive remediation (prevention). One
should operate Big Data clusters with software that will learn and self-heal, and
use tools to do capacity planning when scaling up. Development processes should
also be streamlined to incorporate planning and performance testing for scalability
as early in the development cycle as possible, to bring HPC-like performance to Big
Data platforms.

References

A Guide to Software-Defined Storage. (n.d.). Retrieved November 29, 2016, from http://
Aerospike High Performance NoSQL Database. (n.d.). Retrieved November 29, 2016, from http://
Amazon Elastic Block Store (EBS)—Block storage for EC2. (n.d.). Retrieved November 29, 2016,
from https://aws.amazon.com/ebs/
Amazon Elastic Compute Cloud. (n.d.). Retrieved May 5, 2017, from https://aws.amazon.com/ec2/
Amazon EMR. (n.d.). Retrieved May 5, 2017, from https://aws.amazon.com/emr/
Amazon Web Services. (n.d.). What is AWS? Retrieved November 29, 2016, from https://
Amazon Web Services (AWS). (n.d.). Cloud Computing Services. Retrieved November 29, 2016,
from https://aws.amazon.com/



Apache Hive. (n.d.). Retrieved November 29, 2016, from https://hive.apache.org/
Apache Kafka. (n.d.). Retrieved November 29, 2016, from http://kafka.apache.org/
Apache Lucene™. (n.d.). Solr is the popular, blazing-fast, open source enterprise search platform
built on Apache Lucene™. Retrieved November 29, 2016, from http://lucene.apache.org/solr
Apache Spark™—Lightning-Fast Cluster Computing. (n.d.). Retrieved November 29, 2016, from
Apache Storm. (n.d.). Retrieved November 29, 2016, from http://storm.apache.org/
Better, Faster, Easier Networks. (n.d.). Retrieved November 29, 2016, from https://
Cassandra. (n.d.). Manage massive amounts of data, fast, without losing sleep. Retrieved November
29, 2016, from http://cassandra.apache.org/.
Cisco. (n.d.). Retrieved November 29, 2016, from http://www.cisco.com/
Concord Documentation. (n.d.). Retrieved November 29, 2016, from http://concord.io/docs/
Data Warehouse. (n.d.). Retrieved November 29, 2016, from https://en.wikipedia.org/wiki/
Databricks Spark-Perf. (n.d.). Retrieved November 29, 2016, from https://github.com/databricks/
Datastax. (n.d.-a). Case Study: Netflix. Retrieved November 29, 2016, from http://
DataStax. (n.d.-b). Retrieved November 29, 2016, from http://www.datastax.com/
EC2 Instance Types—Amazon Web Services (AWS). (n.d.). Retrieved November 29, 2016, from
Elastic. (n.d.). An introduction to the ELK Stack (Now the Elastic Stack). Retrieved November 29,
2016, from https://www.elastic.co/webinars/introduction-elk-stack
Gartner. (n.d.). Gartner says the internet of things will transform the data center. Retrieved
November 29, 2016, from http://www.gartner.com/newsroom/id/2684915
Google Research Publication. (n.d.). MapReduce. Retrieved November 29, 2016, from http://
Hive. (n.d.). A Petabyte Scale Data Warehouse using Hadoop–Facebook. Retrieved November
29, 2016, from https://www.facebook.com/notes/facebook-engineering/hive-a-petabyte-scale-data-warehouse-using-hadoop/89508453919/
Hyperdrive Innovation. (n.d.). Retrieved November 29, 2016, from http://hypergrid.com/
Jacobi, J. L. (2015). Everything you need to know about NVMe, the insanely fast future for SSDs.
Retrieved November 29, 2016, from http://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html
Kafka Ecosystem at LinkedIn. (n.d.). Retrieved November 29, 2016, from https://
Keen IO. (n.d.). Retrieved May 5, 2017, from https://keen.io/
Linden, G., Smith, B., & York, J. (2003). Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1), 76–80. doi:10.1109/mic.2003.1167344.
MapReduce Tutorial. (n.d.). Retrieved November 29, 2016, from https://hadoop.apache.org/docs/
MemSQL. (n.d.). How pinterest measures real-time user engagement with spark. Retrieved
November 29, 2016, from http://blog.memsql.com/pinterest-apache-spark-use-case/
Microsoft Azure. (n.d.-a). HDInsight-Hadoop, Spark, and R Solutions for the Cloud/Microsoft
Azure. Retrieved November 29, 2016, from https://azure.microsoft.com/en-us/services/
Microsoft Azure. (n.d.-b). Cloud computing platform and services. Retrieved November 29, 2016,
from https://azure.microsoft.com/
MityLytics. (n.d.). High performance analytics at scale. Retrieved November 29, 2016, from https:/
Netflix. (n.d.-a). Kafka inside keystone pipeline. Retrieved November 29, 2016, from http://



Netflix. (n.d.-b). Netflix Billing Migration to AWS–Part II. Retrieved November 29, 2016, from
Nutanix–The Enterprise Cloud Company. (n.d.). Retrieved November 29, 2016, from http://
O’Malley, O. (2008, May). TeraByte Sort on Apache Hadoop. Retrieved November 29, 2016, from
Overview/Apache Phoenix. (n.d.). Retrieved November 29, 2016, from http://phoenix.apache.org/
Performance without Compromise/Internap. (n.d.). Retrieved November 29, 2016, from http://
Platform as a Service. (n.d.). Retrieved November 29, 2016, from https://en.wikipedia.org/wiki/
Premium Bare Metal Servers and Container Hosting–Packet. (n.d.). Retrieved November 29, 2016,
from http://www.packet.net/
Real-Time Data Warehouse. (n.d.). Retrieved November 29, 2016, from http://www.memsql.com/
ScaleIO | Software-Defined Block Storage | EMC. (n.d.). Retrieved November 29, 2016, from http:/
SoftLayer | Cloud Servers, Storage, Big Data, and more IAAS Solutions. (n.d.). Retrieved November
29, 2016, from http://www.softlayer.com/
Software-Defined Compute–Factsheet–IDC_P10666. (2005). Retrieved August 31, 2016, from https://
Software-Defined Networking (SDN) Definition. (n.d.). Retrieved November 29, 2016, from https:/
Spark Streaming/Apache Spark. (n.d.). Retrieved November 29, 2016, from https://
TPC-DS–Homepage. (n.d.). Retrieved November 29, 2016, from http://www.tpc.org/tpcds/
VansonBourne. (2015). The state of big data infrastructure: benchmarking global big data users
to drive future performance. Retrieved August 23, 2016, from http://www.ca.com/content/dam/
Virtual Storage: Software defined storage array and hyper-converged solutions. (n.d.). Retrieved
November 29, 2016, from https://www.hpe.com/us/en/storage/storevirtual.html
Welcome to Apache Flume. (n.d.). Retrieved November 29, 2016, from https://flume.apache.org/.
Welcome to Apache Pig! (n.d.). Retrieved November 29, 2016, from https://pig.apache.org/
Wilson, R. (2015). Big data needs a new type of non-volatile memory. Retrieved November
29, 2016, from http://www.electronicsweekly.com/news/big-data-needs-a-new-type-of-non-volatile-memory-2015-10/
World fastest NoSQL Database. (n.d.). Retrieved November 29, 2016, from http://
Xia, F., Lang, L. T., Wang, L., & Vinel, A. (2012). Internet of things. International Journal of
Communication Systems, 25, 1101–1102. doi:10.1002/dac.2417.

Chapter 7

Managing Uncertainty in Large-Scale Inversions
for the Oil and Gas Industry with Big Data
Jiefu Chen, Yueqin Huang, Tommy L. Binford Jr., and Xuqing Wu

7.1 Introduction
Obtaining a holistic view of the interior of the Earth is a very challenging task
faced by the oil and gas industry. When direct observation of the Earth’s interior
is out of reach, we depend on external sensors and measurements to produce a
detailed interior map. Inverse problems arise in almost all stages of oil and gas
production cycles to address the question of how to reconstruct material properties
of the Earth’s interior from measurements extracted by various sensors. Inverse
theory is a set of mathematical techniques to determine unknown parameters of a
postulated model from a set of observed data. Accurate parameter estimation leads
to better understanding of the material properties and physical state of the Earth’s
interior. In general, inverse problems are ill-posed due to the sparsity of the data and
incomplete knowledge of all the physical effects that contribute significantly to the
data (Menke 2012).
Recently, technological improvements in other fields have enabled the development of more accurate sensors that produce larger amounts of accurate data
for well planning and production performance. These data play a critical role in
the day-to-day decision-making process. However, the sheer volume of data, and
the high-dimensional parameter spaces involved in analyzing those data, means

J. Chen • X. Wu ()
University of Houston, Houston, TX 77004, USA
e-mail: jchen84@uh.edu; xwu7@uh.edu
Y. Huang
Cyentech Consulting LLC, Cypress, TX 77479, USA
e-mail: yueqin.duke@gmail.com
Tommy L. Binford Jr.
Weatherford, Houston, TX 77060, USA
e-mail: tommy.binford@weatherford.com
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_7



J. Chen et al.

that established numerical and statistical methods can scarcely keep pace with the
demand to deliver information with tangible business value. The objective of this
article is to demonstrate some scalable deterministic and statistical algorithms for
solving large-scale inverse problems typically encountered during oil exploration
and production. In particular, we would like to show how these algorithms can fit
into the MapReduce programming model (Dean and Ghemawat 2008) to take
advantage of the potential speed-up. Both the model and the data involved in
inverse problems contain uncertainty (Iglesias and Stuart 2014). We address this
fundamental problem by proposing a Bayesian inversion model, which can be used
to improve inversion accuracy, in terms of physical state classification, by learning
from a large dataset.
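As a toy illustration of the fit with MapReduce (not the specific algorithms developed later in this chapter), a grid search over candidate models can be phrased as a map step that scores each candidate's data misfit and a reduce step that keeps the best fit; the forward model and data below are hypothetical placeholders:

```python
# Toy MapReduce phrasing of an inversion step: each map task evaluates the
# data misfit of one candidate model parameter, and the reduce step keeps
# the best-fitting candidate. The forward model and observed data are
# made-up placeholders, not a real logging tool response.
from functools import reduce

observed = [2.0, 4.0, 6.0]

def forward(u):
    # Placeholder forward model y = f(u): here simply y_i = u * i.
    return [u * i for i in (1, 2, 3)]

def map_misfit(u):
    predicted = forward(u)
    misfit = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return (u, misfit)

def reduce_best(a, b):
    # Keep the candidate with the smaller misfit.
    return a if a[1] <= b[1] else b

candidates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best = reduce(reduce_best, map(map_misfit, candidates))
print(best)
```

Because each `map_misfit` call is independent, the map phase parallelizes trivially across a cluster, which is the property that makes MapReduce attractive for large model spaces.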

7.2 Improve Classification Accuracy of Bayesian Inversion
Through Big Data Learning
Many solutions to engineering and applied science problems involve inversion:
inferring values of unknown model parameters from measurements. Accurate
parameter estimation leads to better understanding of the physical state of the
target. In many cases, these physical states can be categorized into a finite number
of classes. In other words, the real objective of the inversion is to differentiate the
state of the measured target; the parameters estimated through inversion act as
state indicators, or a feature set, that can be used to classify the state of the object.
For example, well logging plays a critical role in meeting various specific needs of
the oil and gas industry (Ellis and Singer 2007). Interpretation of well logging
measurements is a process of inversion. It has many applications in structural
mapping, reservoir characterization, sedimentological identification, and well
integrity evaluation. The interpretation of the inversion involves correlating
inverted parameters with properties of the system and deducing the physical state
of the system.
There exist many other situations in engineering and science where we need to
evaluate the physical state of a target through indirect measurement. For example,
scientists studying global climate change and its impact on our planet depend on
climate data collected through measurements from various sources, e.g., ocean
currents, the atmosphere, and speleothems. Researchers categorize ocean eddies
through the study and interpretation of satellite oceanography data (Faghmous et
al. 2013). All of this critical work involves inversion. In particular, the inverted
results can be categorized into a finite number of classes. Past research on inverse
problems has concentrated on deducing the parameters of a model and largely
ignored their function as a feature set and indicator of the physical state, which is
the ultimate objective for many problems. In this article, we propose a common
Bayesian inversion model and an inference framework that is computationally
efficient for solving inverse problems. It is in our interest to investigate how to
extend the proposed framework to serve more general science and engineering
problems that involve categorizing the result of an inversion.

7 Managing Uncertainty in Large-Scale Inversions for the Oil and Gas. . .


7.2.1 Bayesian Inversion and Measurement Errors
Given a set of measurements, parameters are traditionally estimated by analyzing
discrepancies between the expected and the actual well logging response. The
analysis defines the solution as the minimizer of a criterion with two components:
the data-model matching term and the regularizer. The data-model matching term
defines the distance metric between the observed data and the forward model
outputs. The regularizer imposes a smoothness or sparsity constraint.
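For a linear forward model with a single unknown, this criterion can be sketched in closed form; the operator, data and regularization weight below are toy values, and `tikhonov_1d` is our own illustrative helper, not a method from the chapter:

```python
# Minimal sketch of the two-term criterion: for a linear forward model
# y_i = a_i * u with a single unknown u, minimize the data-model matching
# term sum_i (y_i - a_i * u)^2 plus the Tikhonov regularizer lam * u^2.
# Setting the derivative to zero gives the closed form
#   u = (sum_i a_i * y_i) / (sum_i a_i^2 + lam).

def tikhonov_1d(a, y, lam):
    numerator = sum(ai * yi for ai, yi in zip(a, y))
    denominator = sum(ai * ai for ai in a) + lam
    return numerator / denominator

a = [1.0, 2.0, 3.0]            # toy forward operator coefficients
u_true = 2.0
y = [ai * u_true for ai in a]  # noise-free synthetic data

print(tikhonov_1d(a, y, lam=0.0))   # exact recovery without regularization
print(tikhonov_1d(a, y, lam=1.0))   # regularization shrinks the estimate
```

The shrinkage with `lam > 0` is the price paid for stability; on noisy or ill-posed data, that trade is what makes the inversion well behaved.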
What is missing from this model is the historical data matching, correction, and
expertise of the engineer, which play a pivotal role in handling uncertainties in
the data or model and minimizing systematic errors. In various fields that involve
inversion, historical data keeps accumulating at large scale. Many of these data
sets have been studied and labeled by domain experts. However, only a small part
of the data has been studied, due to its volume, variety and variability. Given this
explosive increase, the massive unstructured data, or so-called Big Data, brings
opportunities for discovering new value and helps us gain a more in-depth
understanding of the hidden characteristics of the system. Our proposed Bayesian
inversion model will take advantage of the information existing in the large-scale
dataset; learn the correlation between inverted parameters and possible physical
states; and explore the dependency relationships among directly and indirectly
observed data, noise/error, and the forward model. The target is to improve
confidence in justifying and predicting the physical state of the measured target.
Measurement errors pose another challenge to the inverse problem. How best to
integrate knowledge of the measurement error into the context of a Bayesian
inversion model has been the subject of much interest (Kaipio and Somersalo 2005).
In the ideal case, devoid of any measurement errors, the data $y$ and model
parameters $u$ are related through the forward model $y = f(u)$. However,
assumptions made by the forward model may not include all factors that affect the
measurements. The accuracy of observed data is also a consequence of execution,
which adds another layer of uncertainty related to potential human error.
Uncertainty also comes from a lack of direct measurement of the surrounding
environment and other coupling factors. This inadequate information ultimately
compromises the accuracy of predictions made by the forward model, and it needs
to be considered during the inverse process.
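A small numerical illustration of this point, with made-up numbers: an unmodeled systematic effect in the measurement (here a constant offset that the assumed forward model does not include) propagates directly into a naively inverted parameter:

```python
# Illustration with toy numbers: the inversion assumes y = f(u) = 3u, but
# the real measurement also contains an unmodeled systematic offset, so the
# naive inverse returns a biased parameter estimate.

def forward(u):
    return 3.0 * u          # assumed forward model

def naive_inverse(y):
    return y / 3.0          # exact inverse of the assumed model

u_true = 2.0
unmodeled_offset = 0.6      # e.g. an environmental coupling the model ignores
y_observed = forward(u_true) + unmodeled_offset

u_est = naive_inverse(y_observed)
print(u_est)                # biased away from the true u = 2.0
```

A Bayesian treatment instead folds such unmodeled effects into the error model rather than letting them silently bias the parameter estimate.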
Although most parameters that characterize the physical properties are continuous,
it is the physical state of the subject that is of concern in solving the inverse
problem. For example, the material properties obtained through inversion of well
logs are a set of surrogate measures that can be used to characterize the formation
by its lithology, pore fluid, porosity, clay content, and water saturation. Therefore,
the potential search space for the parameters is dramatically reduced when the
inversion framework shifts its attention from obtaining an accurate numerical
solution for the model parameters to estimating the probability of occurrence of a
finite number of formation types. One of the objectives here is to propose a
unifying approach for improving categorization



accuracy and confidence through inversion. We will explore the common
mathematical framework that supports this new approach. Furthermore, we will
investigate the computational efficiency, scalability and practical implementation
of the proposed framework.

7.2.2 Bayesian Graphical Model
Data obtained through indirect measurement by a sensor network often has errors
and unknown interferences that can affect the assessment of the relation between
environmental factors and the outcome of the measuring instruments. A precise
mathematical description of the measurement process does not exist. Under the
traditional Bayesian inversion framework, a simple way to incorporate surrogate
measurement errors into the system is to construct a Berkson error model (Carroll
et al. 2006), shown in Fig. 7.1a, if we know how to formulate the error through
rigorous analysis and replace the deterministic $f$ with a stochastic function. A
solution of the Bayesian inversion is to find a probability measure of $\tilde{x}$ given
the data $\tilde{y}$ and to locate the most probable point by searching the pool of points
sampled from the posterior distribution. The inversion result does not provide an
answer to the classification problem; instead, it serves as input to an external
classifier. Considering that the parameter space consists of a collection of random
variables, the system can be modeled as a stochastic process. Given a big data set
collected through a similar data acquisition process, it is possible to learn
the process autonomously. Alternatively, it is clear that any subset of the parameter
space can only represent a finite number of physical states. Following the Bayesian
approach, the posterior distribution of the model parameters can be deduced given
the measurements and prior knowledge of the correlation between parameters and
corresponding physical states. In the natural environment, the occurrence of any
of these physical states can be random, and they are independent of each other.
Since the physical state is classified based on the values of the model parameters,
the parameter space can be clustered. The number of clusters depends on the
number of physical states. The center of each cluster is chosen to be typical or
Fig. 7.1 Graphical model. (a) Berkson error model. (b) Dependency among $(x_i, \tilde{x}_i, y_i)$. (c) Dependency among $(x_i, \tilde{x}_i, y_i, \tilde{y}_i)$


Fig. 7.2 Deep resistivity (ohm·m) vs. N-D porosity difference (frac)

representative of the corresponding physical state. This can be generalized by an
additive model or, more specifically, a mixture model mathematically represented
as $\sum_{k=1}^{K} \pi_k p_k(x)$, with the $\pi_k$ being the mixing weights,
$\pi_k > 0$ and $\sum_{k=1}^{K} \pi_k = 1$. Figure 7.2 (Xu 2013) shows four rock
types (RT1–4) and the distribution of the corresponding parameters obtained from
the resistivity and neutron-density logs. We can use a hierarchical mixture model
to formulate the uncertainty
involved in identifying the current physical state. The mixture distribution
represents the probability distribution of the observations in the overall population.
The mixture model is a probabilistic model for representing the presence of
subpopulations within an overall population (Wikipedia 2016). The mixture is
usually multimodal and can facilitate statistical inferences about the properties of
the subpopulations given only observations on the pooled population. Rather than
modeling all of the noisy details, errors can be estimated by simple statistical
measures such as the mean and variance.
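A hedged sketch of this idea, with illustrative numbers: treat each physical state (e.g. a rock type) as a mixture component whose weight, mean and variance would in practice be learned from labeled historical logs, and assign a new inverted parameter to the state with the largest posterior responsibility:

```python
# One-dimensional, two-component Gaussian mixture used as a classifier over
# physical states. The component parameters below are illustrative stand-ins
# for values learned from labeled historical log data.
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# (weight, mean, variance) per physical state, e.g. two rock types.
states = {
    "RT1": (0.5, 10.0, 4.0),
    "RT2": (0.5, 20.0, 4.0),
}

def classify(x):
    # Posterior responsibility of each state for the observation x.
    joint = {s: w * gaussian_pdf(x, m, v) for s, (w, m, v) in states.items()}
    total = sum(joint.values())
    responsibilities = {s: p / total for s, p in joint.items()}
    return max(responsibilities, key=responsibilities.get), responsibilities

label, resp = classify(11.0)
print(label, {s: round(p, 3) for s, p in resp.items()})
```

Each classification comes with a probability measure (the responsibilities), which is exactly the property exploited in advantage 5 below.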
Let us consider three variables $x_i$, $\tilde{x}_i$, and $y_i$ for a physical state $i$:
$x_i$ is the parameter (possibly a vector) that characterizes the physical state $i$,
$y_i$ is the indirect measurement, and $\tilde{x}_i$ is deduced through an inversion
process related to the forward model $y = f(x)$. Conditional independence models
separate known from unknown factors, and we can use more general statistical
measures to describe the error. In Fig. 7.1b, $f$ is a deterministic function, and
$\tilde{x}$ absorbs the system uncertainty due to unknown environmental
interferences. The dependency among $x_i$, $\tilde{x}_i$, and $y_i$ can be
formalized as $[y_i \mid f(x_i)]$ and $[\tilde{x}_i \mid x_i]$. The structure of
Fig. 7.1b can be enriched by introducing $\tilde{y}$ for the inclusion of surrogate
measurement errors, which are the result of a mixture of unknown environmental
factors and execution errors. The relationship among $\tilde{x}_i$, $x_i$, $y_i$
and $\tilde{y}_i$ can be graphically represented as in Fig. 7.1c. The new
dependency changes to $[\tilde{y}_i \mid f(x_i)]$ and $[\tilde{x}_i \mid x_i]$.
The proposed dependency map can handle both forward and inversion errors more
robustly, since the error model is not explicitly constrained by the forward or
inverse modeling process. In the proposed model, it is interesting to see that
$\tilde{x}_i$ is conditionally independent of $\tilde{y}_i$, the observed surrogate
measurement. This is reasonable because the uncertainty of $\tilde{x}$ is due to
the unknown distribution of the parameter represented by $x$ and other factors that
are ignored by the forward model $f$. A key statement made by the new model is
that $x_i$ serves as the cluster center. As a statistical descriptor of the posterior
distribution of $\tilde{x}$, $x_i$ can be learned by mining the big historical data.
Advantages of the proposed model in Fig. 7.1c are summarized as follows:
1. Historical data retrieved from well logging is important in understanding the
structure and parameter distribution of the target, which is characterized by
a set of parameters. The structure of the target remains unchanged regardless
of what logging tools have been used. However, the perceived distribution of
the parameters differs depending on which logs were used to derive those
parameters. We can use the mixture model to formulate the parameter space of
the target. The parameters that define the mixture model depend on the selected
integrated logs. It should be noted here that the conditional independence
relationship in Fig. 7.1c removes the one-to-one mapping constraint between
$\tilde{x}_i$ and $\tilde{y}_i$, which makes it possible to learn the mixture model
of the parameter space with any selected set of logs. There are two advantages
brought by the proposed model: first, the information on the distribution of the
formation parameters hidden in the historical data is kept intact, since the
learning process includes all data sources; second, after the learning process, the
parameters of the mixture model are re-calibrated to be adaptive to the sensitivity
of the new set of integrated logs while maintaining their discriminative ability
for characterizing the target.
2. The model provides much more flexibility in selecting a proper probabilistic
model for each of the subpopulations in the mixture model, which can be
optimized ahead of time by learning from the historical data. Just like the
traditional Bayesian inversion model, the proposed method does not impose any
limitation on the choice of probabilistic approaches for each sub-component in
the mixture model. Thus, the richer representation of the probabilistic approach
is kept intact in the new model.
3. Since the distribution of the well logging responses $\tilde{y}_i$ is not the target
of the learning process, it can be modeled in a nonparametric way. In the case of
estimating the joint distribution of multiple log responses, nonparametric
statistical inference techniques for measures of location and spread of a
distribution on a Riemannian manifold could be considered to capture global
structural features of the high-dimensional space (Pelletier 2005; Bengio et al.
2005).
4. According to the Bayesian inversion model, the solution of the inverse problem
for $x_0$ given a new measurement $y_0$ can be obtained by calculating the
MAP estimate:
$$\arg\max_{x_0} \, p(x_0 \mid y_0, u_x, u_y),$$
where $p$ is the pre-learned mixture model with model parameters $u_x$ and
$u_y$. In solving the inverse problem, we try to propagate uncertainty from a
selected set of well logging measurements to petrophysical parameters by taking
into account uncertainty in the petrophysical forward function and a priori
uncertainty in the model parameters.

7 Managing Uncertainty in Large-Scale Inversions for the Oil and Gas. . .


Fig. 7.3 Bayesian inversion with multiple sensors

5. Since the mixture model itself can serve as a classifier, additional classification
efforts are not necessary in most cases, and each classification result is
gauged with a probability measure.
6. Expertise and noise patterns hidden in the historical data play a key role in optimizing the pre-learned model. In particular, the distribution of the surrogate
measurement $\tilde{y}$ directly affects the estimation of the model parameters according
to the dependency relationship of the new model. In other words, both unknown
environmental factors and execution errors are considered when learning model
parameters from the historical data.
7. Characterization is an art of multiple-log interpretation. Figure 7.3 shows how the
proposed model can be easily extended to multi-sensor integration, where
$f_1$ and $f_2$ represent forward models used for different measurement approaches.
The dependency relationship remains the same, and the likelihood estimation
of observed data from different sensory modalities is subject to conditional
independence constraints in the additive model.
Figure 7.4a and b highlight the difference between the framework used by the
traditional Bayesian inversion and our proposed model optimized by big data.
Another advantage inherited by the proposed model is that the Bayesian approach
allows a more sophisticated hierarchical structure, with different priors to account for
prior knowledge, hypotheses, or desired properties of the error and unknown
factors. This gives us considerable flexibility to regularize smoothness, sparsity,
and piecewise continuity for a finite number of states.

7.2.3 Statistical Inference for the Gaussian Mixture Model
The pdf of a Gaussian mixture model has the form

$$p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),$$

where $\pi_k$ is the mixing coefficient and $\sum_{k=1}^{K} \pi_k = 1$. We augment the model by
introducing a latent variable $z$, which is a $K$-dimensional binary random variable
that has a 1-of-$K$ representation. If $x_n$, $n \in \{1, \dots, N\}$ for $N$ observations, belongs


J. Chen et al.

Fig. 7.4 (a) Traditional Bayesian inversion. (b) Proposed Bayesian inversion

to the $k$th component of the mixture, then $z_{nk} = 1$. The distribution of $z$ is then
$p(z) = \prod_{k=1}^{K} \pi_k^{z_k}$. The conditional distribution of $x$ given $z$ is
$p(x \mid z) = \prod_{k=1}^{K} \mathcal{N}(x \mid \mu_k, \Sigma_k)^{z_k}$. The likelihood of the joint distribution of $x$ and $z$ is then

$$p(X, Z) = \prod_{n=1}^{N} \prod_{k=1}^{K} \pi_k^{z_{nk}} \, \mathcal{N}(x_n \mid \mu_k, \Sigma_k)^{z_{nk}}.$$
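For concreteness, the mixture density and the complete-data likelihood above can be evaluated directly. Below is a minimal univariate numpy sketch; the component parameters are illustrative values, not taken from the chapter:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # Univariate normal density N(x | mu, var); vectorized over x
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def mixture_pdf(x, pis, mus, varis):
    # p(x) = sum_k pi_k N(x | mu_k, sigma_k^2)
    return sum(p * gaussian_pdf(x, m, v) for p, m, v in zip(pis, mus, varis))

def complete_log_likelihood(X, Z, pis, mus, varis):
    # log p(X, Z) = sum_n sum_k z_nk [log pi_k + log N(x_n | mu_k, sigma_k^2)]
    ll = 0.0
    for x_n, z_n in zip(X, Z):
        k = int(np.argmax(z_n))  # 1-of-K coding: only the active component contributes
        ll += np.log(pis[k]) + np.log(gaussian_pdf(x_n, mus[k], varis[k]))
    return ll

# Illustrative two-component univariate mixture
pis, mus, varis = [0.3, 0.7], [0.0, 5.0], [1.0, 2.0]
```

Because the mixing coefficients sum to one, `mixture_pdf` integrates to one over the real line, which is a quick sanity check for any learned parameter set.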

According to Bayesian inference rules (Marin et al. 2005), given predefined hyperparameters $\mu_0$, $\lambda_0$, $\alpha_0$ and $\beta_0$, the posterior of the conditional distribution of $u_k$ will be

$$u_k \propto \mathcal{N}(\mu_n, \sigma^2 / \lambda_n) \cdot L(\tilde{y} \mid y),$$

$$\mu_n = \frac{\lambda_0 \mu_0 + n \bar{x}}{\lambda_0 + n} \quad \text{and} \quad \lambda_n = \lambda_0 + n,$$

where $L$ is the likelihood function. Sampling $x$ is expensive, since repeated
evaluation of the posterior for many instances means running the forward model
frequently. A random-walk Metropolis algorithm is shown in Algorithm 1, which
summarizes the Metropolis-Hastings sampling steps, a Markov chain Monte Carlo
(MCMC) method for obtaining a sequence of random samples from a posterior
distribution (Chib and Greenberg 1995).



Algorithm 1: Metropolis-Hastings algorithm for sampling $p(u \mid \tilde{y})$

input : initial value $u^{(0)}$, jump function $q(u^{(i)} \mid u^{(j)})$ (a normal distribution is a popular
choice for the jump function $q$)
output: $u^{(k)}$, where $k \in \{1, 2, \dots, K\}$

Initialize with arbitrary value $u^{(0)}$
while length of MCMC chain < pre-defined length $K$ do
    Generate $u^{(k)}$ from $q(u^{(k)} \mid u^{(k-1)})$
    $\alpha(u^{(k)}, u^{(k-1)}) = \min\!\left[\dfrac{p(u^{(k)} \mid \tilde{y})\, q(u^{(k-1)} \mid u^{(k)})}{p(u^{(k-1)} \mid \tilde{y})\, q(u^{(k)} \mid u^{(k-1)})},\; 1\right]$
    Generate $\alpha_0$ from uniform distribution $\mathcal{U}(0, 1)$
    if $\alpha_0 < \alpha(u^{(k)}, u^{(k-1)})$ then
        keep $u^{(k)}$
    else
        $u^{(k)} = u^{(k-1)}$
    save $u^{(k)}$ in the chain
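Algorithm 1 takes only a few lines of Python. In the sketch below the target is a toy one-dimensional posterior (a standard normal) and the jump function is a symmetric Gaussian, so the $q$ terms in the acceptance ratio cancel; both choices are illustrative assumptions, not the chapter's actual posterior:

```python
import numpy as np

def metropolis_hastings(log_post, u0, n_samples, step=1.0, rng=None):
    # Random-walk Metropolis: symmetric Gaussian jump, so q(u'|u) = q(u|u')
    rng = np.random.default_rng() if rng is None else rng
    u = u0
    chain = [u]
    for _ in range(n_samples):
        u_new = u + step * rng.standard_normal()
        # alpha = min(p(u_new | y) / p(u | y), 1), computed in log space
        log_alpha = log_post(u_new) - log_post(u)
        if np.log(rng.uniform()) < log_alpha:
            u = u_new          # accept the proposal
        chain.append(u)        # otherwise keep the previous state
    return np.array(chain)

# Toy target: log p(u | y) = -u^2 / 2 up to a constant
samples = metropolis_hastings(lambda u: -0.5 * u**2, 0.0, 20000,
                              rng=np.random.default_rng(0))
```

Working in log space avoids underflow when the posterior involves products of many small likelihood terms, which is the usual situation with large log data sets.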

Distributed Markov Chain Monte Carlo for Big Data

Although the MCMC method guarantees an asymptotically exact solution for recovering the posterior distribution, the cost is prohibitively high for training our
model with a large-scale and heterogeneous data set. General strategies for parallel
MCMC, such as Calderhead (2014) and Song et al. (2014), require the full data set
at each node, which is impractical for Big data. For applications with Big data,
multi-machine computing provides scalable memory, disk, and processing power.
However, limited storage space and network bandwidth require distributed
algorithms that minimize communication. Embarrassingly parallel MCMC,
proposed in Neiswanger et al. (2013), tackles both problems simultaneously. The
basic idea is to allocate a portion of the data to each computing node. MCMC
is performed independently on each node without communication. At the end, a
combining procedure is deployed to yield asymptotically exact samples from the
full-data posterior. This procedure is particularly suitable for use in a MapReduce
framework (Dean and Ghemawat 2008).
Embarrassingly parallel MCMC partitions the data $x^N = \{x_1, \dots, x_N\}$ into $M$ subsets
$\{x^{n_1}, \dots, x^{n_M}\}$. For each subset $m \in \{1, \dots, M\}$, the sub-posterior is sampled as:

$$p_m(\theta) \propto p(\theta)^{1/M}\, p(x^{n_m} \mid \theta).$$

The full-data posterior is then proportional to the product of the sub-posteriors, i.e.
$p_1 \cdots p_M(\theta) \propto p(\theta \mid x^N)$. When $N$ is large, $p_1 \cdots p_M(\theta)$ can be approximated by
$\mathcal{N}_d(\theta \mid \hat{\mu}_M, \hat{\Sigma}_M)$, where $\hat{\mu}_M$ and $\hat{\Sigma}_M$ are:

$$\hat{\Sigma}_M = \Big(\sum_{m=1}^{M} \hat{\Sigma}_m^{-1}\Big)^{-1}, \qquad \hat{\mu}_M = \hat{\Sigma}_M \Big(\sum_{m=1}^{M} \hat{\Sigma}_m^{-1} \hat{\mu}_m\Big).$$
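Under this Gaussian approximation, combining the sub-posteriors reduces to a precision-weighted average of the per-node moments. A sketch follows; the two "nodes" here simply hold synthetic sample arrays standing in for the chains each machine would produce:

```python
import numpy as np

def combine_subposteriors(sub_samples):
    # Combine M Gaussian-approximated sub-posteriors (Neiswanger et al. 2013):
    #   Sigma_M = (sum_m Sigma_m^{-1})^{-1}
    #   mu_M    = Sigma_M (sum_m Sigma_m^{-1} mu_m)
    precisions, weighted_means = [], []
    for s in sub_samples:
        mu_m = np.atleast_1d(s.mean(axis=0))              # sub-posterior mean
        sigma_m = np.atleast_2d(np.cov(s, rowvar=False))  # sub-posterior covariance
        prec = np.linalg.inv(sigma_m)
        precisions.append(prec)
        weighted_means.append(prec @ mu_m)
    sigma_M = np.linalg.inv(sum(precisions))
    mu_M = sigma_M @ sum(weighted_means)
    return mu_M, sigma_M

# Two illustrative 1-D "sub-posteriors", as if sampled on separate nodes
rng = np.random.default_rng(0)
node1 = rng.normal(0.0, 1.0, size=(50000, 1))
node2 = rng.normal(2.0, 1.0, size=(50000, 1))
mu, sigma = combine_subposteriors([node1, node2])
```

Only the per-node means and covariances ever travel over the network, which is what keeps the communication cost of this scheme so low in a MapReduce setting.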




7.2.4 Tests from Synthetic Well Integrity Logging Data
Well integrity is critical for blocking migration of oil, gas, brine and other detrimental substances to freshwater aquifers and the surface. Any deficiencies in primary
cementing tend to affect long-term isolation performance. Wide fluctuations in
downhole pressure and temperature can negatively affect cement integrity or cause
debonding. Tectonic stresses also can fracture set cement (Newell and Carey 2013).
New wells could disturb layers of rock near fragile old wells, the “communication”
between the old and the new can create pathways for oil, gas or brine water to
contaminate groundwater supplies or to travel up to the surface (Vidic et al. 2013).
Regardless of the cause, loss of cement integrity can result in fluid migration and
impairment of zonal isolation. In an effort to minimize environmental impacts,
advanced sensing technology is urgently needed for continuous monitoring of well
integrity. However, the precise interpretation of the cement log is a challenging
problem since the response of acoustic tools is also related to the acoustic properties
of the surrounding environment such as casing, mud, and formation. The quality
of the acoustic coupling between the casing, cement, mud, and formation will
alter the response as well (Nelson 1990). The analysis of the log requires detailed
information concerning the well geometry, formation characteristics, and cement
job design to determine the origin of the log response. Therefore, a fair interpretation
of an acoustic log can only be made when it is possible to anticipate the log response,
which is accomplished through forward modeling.
The goal is to differentiate set cement, contaminated cement, and fluid through
multiple measurements obtained by different sonic logging tools. If the cement is
not properly set, the probability of isolation failure is high in the long term. The key
to this research is to estimate the distribution of model parameters under different
physical states of the cement. As demonstrated in Fig. 7.5, the red area is marked as
contaminated cement, and the green area represents pure cement. The gray area
indicates ambiguous cases. Blue curves show the distribution of the density for
contaminated cement and pure cement. Numbers on the first row of the axis are
the measurement. The second row shows the inversed cement density given the
measurement. Accurate estimation of the distribution of the density will improve
classification accuracy for those cases whose inversed parameters fall into the gray
area.
We have applied the aforementioned framework with simulated data. The
simulated training data set is generated to cover a large variety of combinations
of different cement states (pure or contaminated) and fluids (water, spacer, mud).
The data is generated to simulate five cementing conditions behind the casing: pure
cement, contaminated cement, mud, spacer, and water. The synthetic log used as
the ground truth is generated with additional artificial noise. Two measurements
are used by running the forward-modeling simulator: impedance and flexural-wave attenuation. The settings of the borehole geometry, casing parameters, and
formation properties are fixed. During the training stage, 500 samples were generated
for each cementing condition and corresponding physical properties. Parameters



Fig. 7.5 A non-uniform distribution improves classification accuracy
Fig. 7.6 Improve log interpretation with the pre-learned Gaussian mixture model: (1) ground
truth; (2) log interpretation without learning; (3) log interpretation with pre-learned Gaussian
mixture model

for the Gaussian mixture model are learned through MCMC sampling. Inversed
parameters (density, compressional and shear velocity) are used to differentiate the
five cementing conditions. Test results on the synthetic dataset are presented in Fig. 7.6.
Figure 7.6(1) is the ground truth. Figure 7.6(2) shows the classification result from running
an off-the-shelf classifier on inversed results obtained through a traditional inversion
tool without learning. Figure 7.6(3) demonstrates the improvement in
classification accuracy achieved by applying the pre-learned Gaussian mixture model.
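Classification with a pre-learned mixture reduces to evaluating posterior responsibilities for each component. Below is a minimal one-dimensional numpy sketch; the component parameters are made-up stand-ins for learned cement classes, not the chapter's trained values:

```python
import numpy as np

def classify_with_gmm(x, pis, mus, varis):
    # Responsibility of component k:
    #   pi_k N(x | mu_k, var_k) / sum_j pi_j N(x | mu_j, var_j)
    dens = np.array([p * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)
                     for p, m, v in zip(pis, mus, varis)])
    resp = dens / dens.sum()
    # Predicted class plus its probability gauge, as described in point 5 above
    return int(np.argmax(resp)), resp

# Hypothetical classes: 0 = pure cement, 1 = contaminated cement, 2 = fluid
pis   = [0.4, 0.4, 0.2]
mus   = [1.9, 1.4, 1.0]     # e.g. inverted density in g/cm^3 (illustrative)
varis = [0.01, 0.02, 0.01]
label, resp = classify_with_gmm(1.85, pis, mus, varis)
```

The returned responsibilities sum to one, so ambiguous cases in the gray zone show up directly as responsibilities far from 0 or 1 rather than as hard labels.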



7.3 Proactive Geosteering and Formation Evaluation
Geosteering is a technique to actively adjust the direction of drilling, often in
horizontal wells, based on real-time formation evaluation data (Li et al. 2005).
A schematic illustration of geosteering is shown in Fig. 7.7. This process enables
drillers to efficiently reach the target zone and actively respond while drilling to
geological changes in the formation so they can maintain maximal reservoir contact.
These key features of geosteering lead to increased production. Geosteering also
provides early warning of approaching bed boundaries and faults, leading to a
reduction in sidetracks; drilling time and drilling cost are thus also significantly
reduced (Bittar and Aki 2015). Depending on the formation properties and
reservoir complexity, several different types of sensors can be used for geosteering,
such as nuclear, acoustic, gamma ray, or electromagnetic (EM) measurements.
Among all these measurements, azimuthal resistivity LWD tools are widely used in
geosteering worldwide due to their azimuthal sensitivity and relatively large depth of
investigation. Azimuthal resistivity LWD tools provide, in addition to conventional
resistivity measurements, information such as distance to bed interface, relative dip
angle, and formation anisotropy. Since their introduction into well logging in the
2000s (Bittar 2002; Li et al. 2005), azimuthal resistivity LWD tools have been a
key device for delivering better well placement, more accurate reserve estimation,
and efficient reservoir drainage. An azimuthal resistivity tool comprises a
set of antennas with different polarizations and working frequencies. All the major
oilfield service companies have developed their own designs, but all tools share the
major components, i.e., transmitters operating at different frequencies to generate
signals and receivers to measure those signals. Examples of these
products are: PeriScope by Schlumberger (Li et al. 2005), ADR by
Halliburton (Bittar et al. 2009), AziTrak by Baker Hughes (Wang et al. 2007), and
GuideWave by Weatherford (Chen et al. 2014). Most of these tools can provide a
depth of detection up to 20 feet (Li et al. 2005; Omeragic et al. 2005), and they have
been used to successfully place thousands of wells. We will consider the
Weatherford GuideWave azimuthal resistivity tool as an example.
A schematic diagram of the GuideWave Azimuthal Resistivity tool is shown
in Fig. 7.8. It has transmitters and receivers both along the tool axis (the Z direction)
and perpendicular to it (the X direction). This tool design is fully compensated, i.e.,
transmitters and receivers are always in pairs and symmetric to the center of the tool.
Fig. 7.7 A schematic of geosteering using an azimuthal resistivity LWD tool: the center bright yellow layer represents the reservoir, and the blue line denotes the well



T6 T4 T2 R6 R4 R2 R1 R3 R5 T1 T3 T5

Fig. 7.8 The structure and schematic of an azimuthal resistivity LWD tool. The black arrows
denote transmitters or receivers with components along the tool axis (the Z direction), and the
red arrows denote transmitters or receivers with components perpendicular to the tool axis (the X
direction)
Fig. 7.9 The full mutual inductance tensors of formation solved from receiver voltages with

While rotating, the transmitters are energized and the receiver voltages are measured
by electronic components. Processing algorithms in the electronic hardware reduce
the measured signals from the receiver antennas to the full mutual inductance tensor
(see Fig. 7.9), which is related to the resistivity tensor of the geological formation occupied by
the tool (Dupuis and Denichou 2015). These different inductance tensors are used
to generate different types of measurement curves.
Examples of these measurement curves are the ZZ curves of standard resistivity
measurement, the azimuthally sensitive ZX curves, and the XX curves with
sensitivity to anisotropy. The ZZ curves are obtained by two Z direction transmitters,
such as T1 and T2, and two Z direction receivers, such as R1 and R2, thus four
separate transmitter-receiver measurement pairs are used to determine a data point
by full compensation, which is regarded as very robust to unwanted environmental
and temperature variations. The ZZ curves have no directionality: i.e. they cannot
distinguish up from down, or left from right. The ZZ curves are used for standard
measurement of formation resistivity (Mack et al. 2002). The ZX curves are
obtained by transmitters with components along the X direction, such as T7 and
T8, and Z direction receivers, such as R3 and R4. Each data point of a ZX curve
is calculated after a full 360° tool rotation about the tool direction (the
Z axis). The ZX curves have an azimuthal sensitivity, which means they can be used
to determine the azimuthal angle of a bed interface relative to the tool. This property
of the ZX curves makes them very suitable for determining the azimuth and distance
of adjacent boundaries and keeping the tool in the desired bed, or in one word,
geosteering. The XX curves are obtained by transmitters with components along
the X direction, such as T7 and T8, and X-direction receivers, such as R5 and R6.
The construction of a data point of an XX curve also requires a full tool rotation
around the Z axis. The XX curves are sensitive to formation resistivity anisotropy
at any relative dip angle; thus they are the best candidates for anisotropy inversion
(Li et al. 2014).



Geosteering can be conducted by evaluating one or several azimuthally sensitive
curves, or can be based on an inversion of a subset of all the measured curves. In the
latter case, we usually assume an earth model, for example a 3-layer model (Dupuis
et al. 2013), and then invert the model parameters (such as distance of the tool to
interfaces, bed azimuth, and formation resistivity) by solving an inverse problem.
In recent years a new generation of EM geosteering tools with a much larger depth
of investigation has emerged on the market. This class of devices has longer spacings
between sensors compared with the aforementioned tools, and correspondingly the
working frequency can be as low as several kilohertz. GeoSphere, developed by
Schlumberger, is one such tool, claiming a depth of investigation in excess of
100 feet (Seydoux et al. 2014) and the ability to optimize well landing without
pilot wells (it can cost tens of millions of dollars to drill a pilot hole).
Another new-generation tool recently released by Baker Hughes (Hartmann et al.
2014), commercially named ViziTrak, can also directionally "see" the formation up
to 100 feet from the well bore. As the new-generation tools can see much further
than the previous ones, the associated inverse problem becomes more challenging
and its complexity and uncertainty greatly increase.

7.3.1 Inversion Algorithms
The process of adjusting tool position in real time relies on an inversion method
the output of which is used to generate an earth model within the constraints of
the measurements and inversion techniques. In real jobs, modeling and inversion
of azimuthal resistivity LWD measurements are usually based on a 1D parallel
layer model, i.e., all bed interfaces are infinitely large and parallel to each other.
Two different inversion schemes, a deterministic inversion method and a stochastic
inversion scheme, are introduced below.

The Deterministic Inversion Method

The conventional inversion method in geosteering applications is the deterministic
approach, based on algorithms fitting the model function to measured data. Deterministic inversion methods can produce excellent results for distances-to-boundaries
and resolve the resistivities of the layers given a good initial guess. Suppose an azimuthal
resistivity tool can produce $N$ measurements denoted by $m \in \mathbb{R}^N$. A computational
model function $S : \mathbb{R}^M \to \mathbb{R}^N$ is used to synthesize these same measurements
based on $M$ model parameters $x \in \mathbb{R}^M$, where the response of the model to these
parameters is denoted $S(x) \in \mathbb{R}^N$. Results are considered of high quality when there
is good agreement between the computational model and the measured data. Such
agreement is measured by the misfit

$$F(x) = S(x) - m,$$




where $x$, $S$, and $m$ are as defined above. Define the cost function as a sum of squares
of nonlinear functions $F_i(x)$, $i = 1, \dots, N$:

$$f(x) = \sum_{i=1}^{N} F_i^2(x),$$

where, again, $N$ is the number of measurement curves. We consider the optimization
problem of iteratively finding the value of the variables which minimizes the cost
function. This is an unconstrained nonlinear least-squares minimization problem.
A family of mature iterative numerical algorithms has been established to solve
this least-squares problem, such as the method of gradient descent, the Gauss-Newton
method, and the Levenberg-Marquardt algorithm (LMA). Here we apply LMA to
find the unknown parameter vector $x$.
Suppose the current value of the parameter vector at the $k$th iteration is $x_k$. The
goal of successive steps of the iteration is to move $x_k$ in a direction such that the
value of the cost function $f$ decreases. Steps $h_k$ in LMA are determined by solving
the linear subproblem

$$\big(J^T(x_k) J(x_k) + \lambda I\big) h_k = -J^T(x_k)\big(S(x_k) - m\big),$$

where $J(x_k)$ is the Jacobian matrix evaluated at $x_k$, $I$ is the identity matrix with the
same dimension as $J^T(x_k) J(x_k)$, and $\lambda$ is a small positive damping parameter for
regularization, e.g. $\lambda = 0.001 \max(\mathrm{diag}(J^T J))$. Updates to the unknown parameters
in this method can be written as

$$x_{k+1} = x_k + h_k, \qquad k = 0, 1, 2, \dots$$


During optimization we can stop the iteration when

$$\|S(x_k) - m\| < \epsilon_1,$$

which means the data misfit is smaller than the threshold $\epsilon_1$ (a small positive number), or

$$\|h_k\| < \epsilon_2,$$

which means the increment of the parameter vector is smaller than the threshold $\epsilon_2$ (also a
small positive number), or simply

$$k > k_{\max},$$

which means the maximum number of iterations is reached.
In real jobs, it is usually very expensive, if not impossible, to obtain the closed
form of the Jacobian. Here we use the finite difference method to obtain this matrix:

$$J_{i,j}(x) = \frac{\partial S_i(x)}{\partial x_j} \approx \frac{S_i(x + \hat{e}_j\, \delta x_j) - S_i(x)}{\delta x_j}, \quad i = 1, 2, \dots, N \text{ and } j = 1, 2, \dots, M.$$

The above formulation shows that every iteration during optimization needs to
evaluate $2 \times M \times N$ data. This part can be quite expensive, requiring significant
computational resources. To address this issue we employ Broyden's rank-one
update method for the numerical Jacobian. In Broyden's method the Jacobian
is updated at each iteration by

$$J_{k+1} = J_k + \frac{\big(S(x_{k+1}) - S(x_k) - J_k h_k\big)\, h_k^T}{\|h_k\|^2}.$$

Though this kind of gradient-based method is efficient and effective for any number of parameter variables, it has the drawback that the solution is highly dependent
on the initial guess. A poor initial guess can quickly lead the iteration into a local
minimum far from the true solution. In our study, to overcome this drawback, a
scheme combining the local optimization method with a global search method
is applied for EM geosteering: we make a coarse exponential discretization of the
model space, use the global search method to obtain an initial guess for local
optimization, and then use the LMA method to refine that initial guess.
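The LMA loop described above, with the finite-difference Jacobian and Broyden's rank-one update, can be sketched as follows. The forward model `S` here is a simple nonlinear toy function standing in for a resistivity simulator, and the damping and stopping thresholds follow the text:

```python
import numpy as np

def finite_diff_jacobian(S, x, dx=1e-6):
    # J[i, j] ≈ (S_i(x + e_j dx) - S_i(x)) / dx
    y0 = S(x)
    J = np.empty((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += dx
        J[:, j] = (S(xp) - y0) / dx
    return J

def levenberg_marquardt(S, m, x0, max_iter=50, eps1=1e-8, eps2=1e-10):
    x = x0.astype(float)
    J = finite_diff_jacobian(S, x)        # full Jacobian only once
    for _ in range(max_iter):
        r = S(x) - m
        if np.linalg.norm(r) < eps1:      # data misfit small enough
            break
        lam = 1e-3 * np.max(np.diag(J.T @ J))   # damping parameter
        h = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(h) < eps2:      # step small enough
            break
        x_new = x + h
        # Broyden rank-one update avoids re-evaluating the full Jacobian
        J = J + np.outer(S(x_new) - S(x) - J @ h, h) / (h @ h)
        x = x_new
    return x

# Toy forward model: S(x) = [x0^2 + x1, x0 * x1]
S = lambda x: np.array([x[0] ** 2 + x[1], x[0] * x[1]])
m = S(np.array([2.0, 3.0]))                      # synthetic "measurements"
x_hat = levenberg_marquardt(S, m, np.array([1.5, 2.5]))
```

Starting close to the truth, as the global search stage is meant to guarantee, the damped Gauss-Newton steps converge quickly; a poor starting point would instead illustrate the local-minimum problem discussed above.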

The Statistical Inversion Method

The aforementioned deterministic regularization methods have a few limitations.
First, they are sensitive to the choice of the distance metric and the parameter selection
of the regularizer. Second, it is hard to quantify the uncertainty of the proposed
solutions. Finally, gradient-descent-based methods cannot easily take advantage of
multi-core hardware platforms due to computational dependencies in the algorithm.
As an alternative method to deal with uncertainty, the Bayesian approach has attracted
increasing attention (Knapik et al. 2011). Unlike the deterministic formulation, the
observation model, or so-called likelihood, is usually built upon the forward
model and some knowledge about the errors (e.g. measurement noise). The desired
properties and the prior knowledge of the uncertainty of the solution are translated
into prior distributions. Bayes' rule is used to obtain the posterior distribution,
from which the solution is deduced after combining the likelihood and the prior.
Parallel sampling is also available for more complicated Bayesian inference cases.
Experiments and observations suggest physical theories, which in turn are used
to predict the outcome of experiments. Solving a forward problem is to calculate the
output ($y$) of a physical model ($f$) given its parameter set $u$: $y = f(u)$. The inverse
relationship can be written as $u = \tilde{f}(\tilde{y})$, where $\tilde{f}$ defines the inverse mapping and
$\tilde{y}$ is the observed output. The classical definition of a well-posed problem, due to
Hadamard (1923), is that the solution exists, is unique, and depends continuously on
the data. On the contrary, most inverse problems are fundamentally underdetermined
(ill-posed) because the parameter space is large, measurements are too sparse, and
different model parameters may be consistent with the same measurement. The
problem is usually solved by minimizing

$$\Phi(u) + \lambda R(u),$$




where $\Phi$ is the objective function and $R$ serves the role of a regularizer with
regularization constant $\lambda > 0$. Under the least-squares criterion, the objective
function is written as $\|\tilde{y} - f(u)\|_2^2$. This formulation limits its modeling ability with
regard to different characteristics of noise. It is also impractical to search for the
optimal solution for both $u$ and a proper mapping $\tilde{f}$ when the forward problem is
non-linear.
To overcome the shortcomings of the deterministic method, a stochastic approach
generates a set of solutions distributed according to a probability distribution function. In a Bayesian context, the solution is sampled from the posterior distribution
of the model parameters $u$ given the measurement $y$, $p(u \mid y) \propto p(y \mid u)\, p(u)$. Here $p(u)$
is the prior that describes the dependency between the model parameters and,
therefore, constrains the set of possible inverse solutions. The likelihood $p(y \mid u)$
models the stochastic relationship between the observed data $y$ and parameter set
$u$. The stochastic approach provides more probabilistic arguments for modeling the
likelihood and prior, and for interpreting the solution. In particular, it represents a
natural way to quantify the uncertainty of the solution via Bayesian inference.
Let’s revisit the forward model of the inverse problem. Suppose the noise is
additive and comes from external sources, then the relationship between observed
outputs and corresponding parameters of the physical system can be represented as:
yQ D f .u/ C  ;


where  represents additive noise. In classical Bayesian approaches,  is a zero-mean
Gaussian random variable,   N .0;  2 I/. Then we have the following statistical
model instead, p.Qyju/  N .Qy  f .u/;  2 I/. From a Bayesian point of view, suppose
the prior distribution of u is governed by a zero-mean isotropic Gaussian such that:
p.ujˇ/  N .0; ˇ 2 I/. By virtue of the Bayes formula, the posterior of x is given by:
p.ujQy/  N .Qy  f .u/;  2 I/N .0; ˇ 2 I// .


It is easy to show that the log-likelihood of Eq. (7.18) is equivalent to using the Tikhonov
regularization method for ill-posed problems (Bishop 2006). Furthermore, the
solution of a lasso estimation can be interpreted as the posterior mode of $u$ in the
above Bayesian model under the assumption of a Laplacian prior (Tibshirani 1996).
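The Tikhonov equivalence follows directly from taking the negative log of the Gaussian posterior; a short sketch, with additive constants dropped:

```latex
-\log p(u \mid \tilde{y})
  = \frac{1}{2\sigma^2}\,\lVert \tilde{y} - f(u) \rVert_2^2
  + \frac{1}{2\beta^2}\,\lVert u \rVert_2^2
  + \text{const},
```

so maximizing the posterior is the same as minimizing $\lVert \tilde{y} - f(u) \rVert_2^2 + \lambda \lVert u \rVert_2^2$ with $\lambda = \sigma^2 / \beta^2$, i.e. Tikhonov (ridge) regularization; replacing the Gaussian prior with a Laplacian turns the second term into $\lambda \lVert u \rVert_1$, the lasso penalty.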
To search for the Bayesian solution according to Eq. (7.18), we draw samples
from the posterior probability density function. The goal is to locate the most likely
value the parameter $u$ will take; in other words, we are looking for the most
probable point $u$ in the posterior distribution. Usually, this point is attained as
the maximum a posteriori (MAP) estimator, namely, the point at which the posterior
density is maximized. In the case of a linear mapping function $f$, the posterior
density function can be easily derived by selecting a conjugate prior. However,
conducting Bayesian inference according to Eq. (7.18) becomes a challenge when
$f$ is a non-linear mapping between $u$ and $y$. Consequently, an
analytical solution for the posterior distribution is no longer always available.
In practice, MCMC is one of the most popular sampling approaches to draw iid



samples from an unknown distribution (Robert and Casella 2004). For solving
an inverse problem, the sampling algorithm requires repeated evaluation of the
posterior for many instances of u.

7.3.2 Hamiltonian Monte Carlo and MapReduce
The random-walk Metropolis algorithm shown in Algorithm 1 suffers from a low
acceptance rate and converges slowly, with a long burn-in period (Neal 2011). In
practice, a burn-in period, in which an initial set of samples is discarded, is needed
to avoid starting biases. It is also hard to determine the length of the Markov
chain. Running MCMC with multiple chains is a natural choice when the platform
supports parallel computing (Gelman and Rubin 1992). In this section, we discuss
how Hamiltonian Monte Carlo and the MapReduce platform can be used
to improve sampling performance.
Hamiltonian Monte Carlo (HMC) (Neal 2011) reduces the correlation between
successive sampled states by using a Hamiltonian evolution. In short, let $x \in \mathbb{R}^d$
be a random variable and $p(x) \propto \exp(L(x))$, where $L$ is the log-likelihood function.
HMC defines a stationary Markov chain on the augmented state space $X \times P$ with
distribution $p(x, p) = u(x) k(p)$ (Zhang and Sutton 2011). A Hamiltonian function,
the sampler, is defined as

$$H(x, p) = \underbrace{-L(x)}_{\text{potential energy}} + \underbrace{\tfrac{1}{2}\, p^T M^{-1} p}_{\text{kinetic energy}}. \tag{7.19}$$

A new state $(x^*, p^*)$ is obtained by following the Hamiltonian dynamics

$$\dot{x} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x}.$$
An approximation of the above system is obtained with the leapfrog method (Neal
2011). If we consider the kinetic energy part in Eq. (7.19) as a jump function, we
can use Newton's method to approximate $L(x)$ by a quadratic function; in other
words, we can use a local approximation to serve as the jump function. Let $H_L = \nabla^2 L$
be the Hessian matrix; then the Hamiltonian function is

$$H(x, p) = -L(x) + \tfrac{1}{2}\, p^T H_L^{-1} p.$$

However, since inverting the Hessian matrix is computationally prohibitive in high-dimensional spaces, we take the L-BFGS approach (Liu and Nocedal 1989), and the
Hamiltonian energy is simply (Zhang and Sutton 2011)







$$H(x, p) = -L(x) + \tfrac{1}{2}\, p^T H_{\mathrm{BFGS}}^{-1} p.$$

Fig. 7.10 MapReduce programming mode
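A plain HMC sampler with the leapfrog integrator can be sketched as follows. Here the mass matrix is the identity (rather than a Hessian or BFGS approximation) and the target is a toy standard normal, so both are illustrative simplifications of the scheme above:

```python
import numpy as np

def leapfrog(grad_log_p, x, p, step, n_steps):
    # Simulate Hamiltonian dynamics for H(x, p) = -L(x) + p.p/2 (M = I)
    p = p + 0.5 * step * grad_log_p(x)   # half step for momentum
    for _ in range(n_steps - 1):
        x = x + step * p                 # full step for position
        p = p + step * grad_log_p(x)     # full step for momentum
    x = x + step * p
    p = p + 0.5 * step * grad_log_p(x)   # final half step for momentum
    return x, p

def hmc(log_p, grad_log_p, x0, n_samples, step=0.1, n_steps=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    chain = []
    for _ in range(n_samples):
        p0 = rng.standard_normal(x.shape)        # resample momentum
        x_new, p_new = leapfrog(grad_log_p, x.copy(), p0.copy(), step, n_steps)
        # Metropolis correction using the change in Hamiltonian energy
        h_old = -log_p(x) + 0.5 * (p0 @ p0)
        h_new = -log_p(x_new) + 0.5 * (p_new @ p_new)
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        chain.append(x.copy())
    return np.array(chain)

# Toy target: standard normal, L(x) = -x.x/2, gradient -x
samples = hmc(lambda x: -0.5 * (x @ x), lambda x: -x, [3.0], 2000,
              rng=np.random.default_rng(1))
```

Because the leapfrog integrator nearly conserves the Hamiltonian, most proposals are accepted even for long trajectories, which is exactly the acceptance-rate advantage over the random-walk Metropolis of Algorithm 1.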


HMC also benefits from multiple chains, and MapReduce is an ideal
parallel computing platform for processing large data sets on a
cluster. A simple MapReduce diagram is shown in Fig. 7.10 (Pal 2016). In the
MapReduce model, a master node is responsible for generating initial starting values
and assigning each chain to a different mapper. Data generated by each chain is
cached locally for the subsequent map-reduce invocations. The master coordinates
the mappers and the reducers. After the intermediate data is collected, the master
in turn invokes the reducer to process it and return the final results. In the multiple-chain case, the reducer calculates the within- and between-chain variances to
evaluate convergence. The mean value of all chains collected by the master will be
the final result.
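The within/between chain variance calculation performed in the reducer is the Gelman-Rubin diagnostic; a minimal sketch for scalar chains, with synthetic chains standing in for mapper output:

```python
import numpy as np

def gelman_rubin(chains):
    # chains: (M, N) array of M parallel chains, N samples each
    chains = np.asarray(chains, dtype=float)
    M, N = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = N * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (N - 1) / N * W + B / N       # pooled variance estimate
    return np.sqrt(var_hat / W)             # potential scale reduction R-hat

rng = np.random.default_rng(2)
converged = rng.standard_normal((4, 1000))  # four well-mixed chains
```

Values of R-hat near 1 indicate the chains have mixed into the same distribution; chains stuck in different regions inflate the between-chain variance and push R-hat well above 1.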

7.3.3 Examples and Discussions
In this part, we take the tool design of GuideWave (the first-generation azimuthal
resistivity LWD tool by Weatherford) as an example to generate tool measurements
and conduct different inversions for demonstration. Note that the synthetic
data are generated based on analytical EM wave solutions for a 1D multi-layer model
with dipole transmitters and receivers. The effect of the borehole is considered
negligible in the forward modeling here. Field data acquired by this tool will also be
shown and processed with these inversion methods.

Tests from Synthetic Data

In the first example, we consider a three-layer model as shown in Fig. 7.11.
The resistivities of the upper, center, and lower layers are 10 Ω·m, 50 Ω·m, and
1 Ω·m, respectively. The central green line indicates the tool navigation trajectory.


Fig. 7.11 The three-layer model (panel: "real model"; axes x (ft), z (ft))
Fig. 7.12 The inverted model by deterministic method (panel: "inverted model"; axes x (ft), z (ft))
Fig. 7.13 The inverted model by statistical inversion (panel: "inverted model"; legend: confidence interval (m ± 2 × s); axes x (ft), z (ft))

In this case, we assume the tool dip angle is fixed at 90°. Figures 7.12 and 7.13
show the inverted models obtained by the deterministic and statistical methods. One can
see that when the tool is far from the top boundary (see the beginning part), the
measurements and the inversion cannot resolve the top boundary accurately. In
other words, the uncertainty involved in solving the inverse problem is high. The
reason is that the sensitivity of the measurements is relatively poor when the tool is
relatively far from a boundary. As the tool moves forward to where the top boundary
bends downward, the inversion can clearly resolve both the upper and the lower
boundaries. Comparing the two inverted models obtained through the deterministic
and statistical methods in this case, the results are comparable to each other. The
statistical method, however, provides a quantified interpretation of the uncertainty of
the inverse problem (Fig. 7.14).



Fig. 7.14 The inverted formation resistivities

Test from Field Data

In the end, we show a real field case. Figure 7.15 shows a model of this well
created from the conventional correlation and the inverted 3-layer model based on
resistivity measurements. This figure shows that the upper layer has lower resistivity,
around 1 Ω·m, and the lower layer has higher resistivity, around 20 Ω·m. The tool
may penetrate or approach a bed boundary three times: the first is around X050 ft,
the second is close to X300 ft, and near X900 ft the tool penetrated into a relatively
resistive zone. We can see that this inverted 3-layer model is reasonably consistent
with the model created from the conventional correlation, and the good quality
of the inversion can also be verified by the good agreement between measured and
synthetic curve values shown in Fig. 7.16.



Fig. 7.15 The underground structures of a real field job. The upper one is a model created based
on conventional correlation, and the lower one is from a 3-layer inversion of azimuthal resistivity

Fig. 7.16 Comparison of measured and synthetic values of six curves: AD2 and PD2 are the
amplitude attenuation and phase difference of a 2 MHz ZZ measurement, AU1 and PU1 are the
amplitude attenuation and phase difference of a 100 kHz ZZ measurement, and QAU1 and QPU1
are the amplitude attenuation and phase difference of a 100 kHz ZX measurement. Good agreement
can be observed for all six curves, which indicates a good quality of the inversion for this field



7.3.4 Conclusion
Most research on inverse problems centers on the development of formulas that
yield a description of the system as a function of the measured data, as well as
on the theoretical properties of such formulas. The Bayesian approach is mainly used
as a convenient way to estimate the measurement error and model the uncertainty
in the system. The proposed work is, foremost, an effort to investigate both
the inversion and interpretation process and to build an intelligent framework for
improving classification results through inversion. A more critical issue is that
the deeper knowledge and repeatable patterns hidden in the big data
accumulated in the past are poorly described and studied. Even when they are used, there
is a lack of a systematic approach towards inverse problems. In addition, we think that
research exploring computationally efficient statistical inference at large scale
is also instrumental in learning and understanding inverse problems.
The techniques developed in this research could have a large positive impact on
many areas of computing.


Chapter 8

Big Data in Oil & Gas and Petrophysics
Mark Kerzner and Pierre Jean Daniel

8.1 Introduction
This paper was born as a result of attending O&G and petrophysics shows, of
discussions with many smart people involved in the development and operation
of petrophysical software and energy development in general, and, last but not least,
of the authors' personal experience with oilfield-related clients (Bello et al.).
Mark has been involved in petrophysical software since early in his career, dealing
with advanced log analysis research (Kerzner 1983) and practical software applications (Kerzner 1986). Later he turned to Big Data software in general, and has been
teaching and implementing it for the last eight years for various clients in different
countries, in the Oil & Gas, high-tech, and legal verticals.
Pierre Jean has been involved in every aspect of the well logging business, from
tool research and patents to petrophysics and tool/software integration. He is
currently working on the development of a cloud software platform for all oilfield-related activities, including petrophysical interpretation.
When discussing the subject together, we arrived at the same conclusion: Big
Data is not far away from petrophysics; it is already there. Multiple companies,
large and small, are beginning to implement their first Big Data projects.
As these are their first explorations in the area, they are often faced with the same
problems that the designers of current Big Data systems (think Google, Yahoo, Facebook, Twitter) faced early on (Birman 2006). Often, they may not be aware that the

M. Kerzner ()
Elephant Scale, Houston, TX, USA
e-mail: mark@elephantscale.com
P.J. Daniel
Antaeus Technologies, Houston, TX, USA
e-mail: pdaniel@antaeus-tech.com
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_8



M. Kerzner and P.J. Daniel

encountered problems have already been solved in a larger context. This also means
that even the implementation itself has already been created and likely open sourced.
Therefore, our intent is to benefit the Big Data work in the industry by
describing common problems that we see in implementing Big Data specifically
in petrophysics, and by providing guidance for such implementations.
Generally, Mark contributes to the Big Data view of things and Pierre Jean makes
sure that it can be practically tied into real life. However, since their experiences
cross over, both authors draw on their knowledge of both areas.
Finally, we are well aware that the oilfield industry is going through a deep
financial crisis, with each company reducing costs to survive the downturn while at
the same time preparing for the next cycle. Big Data technology, developed
through intense innovation for other industries, can and must help oil and gas
companies become more cost-effective through innovation. There has never
been so much data and information coming from so many sources without proper
software tools to integrate them. There are existing cloud applications, but each
covers a very small portion of the oilfield scheme, and integration is close
to an impossible task. If all this data and information were interpreted and
integrated, higher-quality decisions could be expected at every level of
the decision chain.

8.2 The Value of Big Data for the Petroleum Industry
The oil and gas industry is at the brink of a digital revolution (McGrath and
Mahowald 2014). Organizations and measurements scattered across the globe are in
dire need of a better understanding of their environment, which can be brought by
enhanced calculation capabilities, communication, and collaboration within and
between companies (Industry Week 2015).
The oilfield industry lags behind other industries in terms of access to the latest
cloud software technology (Crabtree 2012). The immediate need is for the creation
of a platform in which each user can add plugins, process data, and share and visualize
information (data, documents, calculation techniques, and results) with other users,
teams, or companies in a very secure and swift process.
This is really the first step to move from an Excel-dominated world to the universe
of Big Data analytics. For a platform to become a standard in this industry,
four necessary components need to co-exist simultaneously (Fig. 8.1): a
flexible and simple cost model for all companies, adapted to user responsibilities;
optimized cloud databases; simplified external plugin integration from all sources;
and a dashboard accessed through a web interface.
Once achieved, this novel platform will bring a new level of work quality and
open the way to larger data analytics in the oil and gas industry.

8 Big Data in Oil & Gas and Petrophysics


Fig. 8.1 Four components of a petrophysical software platform

8.2.1 Cost
The traditional model, where software licenses are purchased and installed on
a user's computer, is referred to as Product as a Product (PaaP). With the many
advantages offered by cloud computing, the SaaS model (Wikipedia 2016), where
the software provider hosts the software on its own servers, usually in the cloud,
and offers a subscription to users, is projected to grow at a rate
about five times that of the traditional model. The advantages of the SaaS model to
the user are: limited capital investment in hardware; many IT functions
handled by the cloud provider, freeing up internal resources; flexibility in changing
the system sizing (increasing or decreasing); cost advantages in cloud sourcing;
software provider savings from single-platform development and testing that are
passed on to the user; and many others.
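The cost trade-off between the two models can be made concrete with a back-of-the-envelope comparison; the license fee, maintenance rate, and seat price below are made-up illustrative figures, not vendor prices:

```python
def perpetual_cost(months, license_fee, maintenance_rate=0.20):
    """Cumulative cost of a classic license: up-front fee plus yearly
    maintenance charged as a fraction of the fee (hypothetical terms)."""
    return license_fee + (months // 12) * maintenance_rate * license_fee

def saas_cost(months, monthly_fee):
    """Cumulative cost of a SaaS subscription seat."""
    return months * monthly_fee

# Illustrative figures: a $30,000 license vs. an $800/month seat
for months in (12, 36, 60):
    cheaper = "SaaS" if saas_cost(months, 800) < perpetual_cost(months, 30_000) else "license"
```

Under these assumed numbers the subscription stays cheaper for several years, and, unlike the license, it can be dropped when a seat is no longer needed.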

SaaS (Software as a Service)

SaaS is a business model defined in Wikipedia as follows: "Software as a Service (SaaS;
pronounced /sæs/) is a software licensing and delivery model in which software is
licensed on a subscription basis and is centrally hosted. It is sometimes referred to
as 'on-demand software'. SaaS is typically accessed by users via a web browser.
SaaS has become a common delivery model for many business applications,
including office and messaging software, payroll processing software, DBMS
software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration, customer relationship management
(CRM), management information systems (MIS), enterprise resource planning
(ERP), invoicing, human resource management (HRM), talent acquisition, content
management (CM), antivirus software, and service desk management." SaaS has
been incorporated into the strategy of all leading enterprise software companies
(see, e.g., the IBM cloud strategy). One of the biggest selling points for these companies
is the potential to reduce IT support costs by outsourcing hardware and software
maintenance and support to the SaaS provider.
There are significant advantages to a software provider in content delivery via
SaaS, primarily that the software always operates on a defined platform without the
necessity to design and test the software on all possible hardware and operating
systems. Reliability is therefore increased and user issues are more easily resolved. Access
to the content is commonly via an internet browser, with SSH or VPN providing security and encryption during transit over the unsecured internet. Browser
technology has incorporated many features to inhibit any corruption of the host
computer through the browser, and this approach is commonly used in many
security-conscious industries such as banking and brokerage communications.

Cost Flexibility

One of the main lessons from the drop in oil prices in late 2014 is that companies
must be aware of their immutable fixed costs. With the SaaS subscription model
(Fig. 8.2), where reassignable seats are provided on a monthly basis with hardware
expense included, access to increased resources can be quickly provided and can just
as quickly be scaled back as markets change. Making the user interface
simpler and more intuitive reduces the time to productive results and mitigates the
effects of staff reductions of skilled operators.

Reduced IT Support

Many users do not appreciate the hidden costs of the traditional model of
software license purchase. Only very large companies fully account for such essential services as backup, disaster recovery, physical security, and proper environmental

Fig. 8.2 SaaS vs. classic software models (chart of users required by a company, month by month, under SaaS vs. a classic license)



Fig. 8.3 Traditional software licensing vs. on-demand web services

and power conditioning for on-site servers. The planning of hardware resources
requires commitments to future use. These areas have been recognized
as essential to cloud acceptance and receive significant attention from all
cloud providers, allowing IT personnel to concentrate on their more immediate tasks
(Fig. 8.3). As a cloud software platform provides its software on cloud providers'
infrastructure, software updates are delivered automatically, without threatening other
user software or hardware and avoiding the necessity of a lengthy user approval.


With the organization of files into a family of searchable databases and with enhanced
collaboration in mind, the system increases the efficiency of employees. A cost
structure that encourages wider participation in the platform tools reinforces this
efficiency gain. Direct transfer of log data as inputs to the workflow engine, direct
transfer of workflow outputs, and automatic identification of all workflow input files
and parameters enhance operator efficiency. Software designed for state-of-the-art
computational efficiency reduces computation time.

8.2.2 Collaboration
Oilfield companies have cooperated with varying success over the years
through joint ventures, shared asset projects, and client/service-provider relationships. A very
important technical aspect of this collaboration is the transmission of the data
itself, and enabling all individuals along the decision chain to make the right moves to
optimize an asset. The ultimate goal of a browser platform is to add value to the company
through a better decision process. Several factors contribute, such as access
to knowledge and response time, and the ability to share information between actors in a
timely and user-friendly manner. Another benefit of an enhanced software platform is
the ability to standardize, remove siloed groups, avoid data duplication, and visualize
results based on the same data and workflows, thus avoiding uncertainties and errors.
Current software platforms conflict with this goal: they are either created
by large service companies whose aim is to push their own products, by
large software companies with prohibitive software costs, or by medium-sized software
companies without a platform shareable in the cloud and with limited options for
internal and external workflows. Under these conditions, it is very difficult for the oil and
gas industry to move to the digital age of Big Data analytics. Only a small fraction of
companies, the ones with extensive financial means, can hope to make the leap.
A cloud platform must have the clear objective of correcting all these issues by
providing a flexible and agile database schema ideal for cooperation, a low-cost
Software as a Service (SaaS) pricing model widely employed in other industries,
an improved cooperation mechanism for external companies, and access to a wide
selection of plugins. To allow worldwide collaboration independent of the operating
system, the secure platform has to be accessible from a secure, intuitive browser.

Ideal Ecosystem

Collaboration is the cooperative effort of more than one person to achieve more
progress than the collaborators could achieve individually. It requires shared
information and the freedom to manipulate that information to achieve greater
knowledge. With cooperative ecosystems as currently practiced, the transfer of logs and
other file types (images, reports) discourages optimal collaboration and division of
work:
• Sources and results of the collaboration are scattered; many intermediate discarded logs are retained and source logs overwritten. The source logs and
parameters of workflows are uncertain.
• Reports are scattered on different computers, servers and local memory drives,
without easy access by all users.
• High cost of workflow seats and restricted suites of plugins limit the number of
collaborators available to work during the cycle of an asset development.
• Communication with customers or consultants requires the ability to share data,
workflow, and overall interpretation process for an enhanced collaboration using
the same tools.
To reach the Big Data analytics stage, the cloud software platform should
stimulate collaboration by providing a complete, integrated platform
with cloud databases, a calculation plugin engine, and a browser-based dashboard.
Through the platform, information represented by many file types can then be
searched, shared, and manipulated.

Fig. 8.4 Ideal ecosystem for oilfield actors

Users have workflows and plugins available for a variety of oilfield disciplines under the simplified subscription model. In this ideal
ecosystem, each employee in a company is given a specified level of access to
particular data (logs, reports, lab results, etc.) to make the best decision.
In this context, workflows developed and shared by other oilfield actors (tool
manufacturers, services companies, consultants, academics, oil operators) are accessible by each authorized user, leading to enhanced shared knowledge and higher
quality of operation (Fig. 8.4).
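The "specified level of access" idea can be sketched as a simple role-to-data-type permission check; the roles and data types below are hypothetical examples, not a prescribed schema:

```python
# Hypothetical role-to-data-type permission map for the ideal ecosystem
PERMISSIONS = {
    "field_operator": {"logs"},
    "petrophysicist": {"logs", "lab_results", "reports"},
    "manager": {"reports"},
}

def can_access(role, data_type):
    """True if the given role is authorized for the given data type."""
    return data_type in PERMISSIONS.get(role, set())
```

A real platform would back such a check with per-company and per-project grants, but the principle — every read passing through an authorization function — is the same.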

Collaboration Enhancement by Universal Cloud-based Database

Nothing inhibits the evaluation of a well more than the frustration of
searching for relevant information scattered haphazardly through a
computer filing system. Well logs, for instance, may have been obtained at different
times, with depths that were adjusted over time as more information became
available. Associated files with relevant information may be in different formats:
text, PDF, LAS, DLIS, JPEG, etc.



The need for a universal cloud-based database has never been more important:
with files and logs organized to be easily searchable, particular logs, reports, or
parameters can quickly be found in a selection of wells in a field. For optimum
efficiency, it is important that there be communication and collaboration with clients and consultants.

Collaboration Enhancement by Calculation

With access to cloud technology through a lower-cost subscription model, companies can afford to have field and office personnel use the same cloud tool as the
experts and managers. By giving multi-discipline users access to the same database
information and workflows, authorized users can view and share data and results
on the same platform.
This is part of the strategy to move the oilfield to the Big Data world. If each
user in a company can collaborate and share data, then tools to interpret and
enhance the understanding of this large flow of data will be necessary.
It is very important that, very early on, there is a mechanism for checking
data integrity. By uploading field data to a collaboration space, any problems can be
recognized and analyzed by both the field and the office before the crew leaves
the field. Specialized workflows can validate data integrity, rationalize units, and
normalize depths before time-consuming multi-well workflows are attempted.
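Such integrity checks can be sketched as small validation functions. A minimal sketch, assuming depth indexes recorded in feet or metres (the function names are illustrative, not from any specific platform):

```python
FT_PER_M = 3.28084

def depths_monotonic(depths):
    """A well log's depth index should increase strictly from sample to sample."""
    return all(b > a for a, b in zip(depths, depths[1:]))

def normalize_depth(value, unit):
    """Convert a depth value to metres; only 'ft' and 'm' are assumed here."""
    if unit == "m":
        return value
    if unit == "ft":
        return value / FT_PER_M
    raise ValueError(f"unknown unit: {unit}")
```

Running checks like these on upload lets the office flag a bad depth track while the logging crew is still on location.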
A very important aspect of workflows and plugins is the quality and accuracy of the calculations. Classical software platforms rely on advertisements and
conferences to become known and recognized. Just as quality is checked
by users in many SaaS applications through ratings and comments, this method
can be applied to the oilfield: a software platform can let
users add comments and ratings to workflows developed internally or by oilfield
actors (service companies, tool manufacturers, consultants). The gain in quality,
efficiency, and productivity is immediate, since users will choose calculation modules
already proven and tested by others.
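The rating-and-comment mechanism can be sketched as a small in-memory store; the class and the workflow name below are hypothetical:

```python
from collections import defaultdict

class WorkflowRatings:
    """Hypothetical store of user ratings (1-5 stars) and comments per workflow."""

    def __init__(self):
        self._ratings = defaultdict(list)

    def rate(self, workflow_id, stars, comment=""):
        if not 1 <= stars <= 5:
            raise ValueError("stars must be between 1 and 5")
        self._ratings[workflow_id].append((stars, comment))

    def average(self, workflow_id):
        """Mean star rating, or None if the workflow has no votes yet."""
        votes = self._ratings[workflow_id]
        return sum(s for s, _ in votes) / len(votes) if votes else None
```

Sorting the workflow catalog by this average is what surfaces community-proven modules first.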

Collaboration Enhancement by Low Cost Subscription Model

If Big Data has penetrated deeper into other industries than into oil and
gas, it is partly because the cost structures of current oilfield software platforms
are not adapted to the agility and flexibility required by clients.
Collaboration is inhibited when large immobilized capex and expensive annual
subscriptions for each additional module purchased by the company make it
prohibitively expensive for many users to share the same access to data, projects
and workflows.
SaaS (Software as a Service) has been applied to other sectors such as tourism,
travel, accounting, CAD software, development software, gamification, and management information systems. This model, through its lower total cost and the ability to
increase and decrease the commitment, allows wider usage, including lab technicians
and field operators doing more analysis before involving senior staff.



The new cost structure must start at an inexpensive basic level for those not
involved in workflow calculations, such as field personnel who only download
field information, management interested in viewing results, and accounting or sales
personnel. More expensive tiers must then be adapted to the job function of the
users. For instance, a field operator will not need to run thousands of workflows at
the same time, while an office expert will.

Collaboration Enhanced by Intuitive Browser Interface

System access via a browser allows the platform dashboard to be independent of the operating system or user hardware. Major effort must be expended to
make the dashboard very intuitive, thus minimizing the need for lengthy software
training. The dashboard can be Google-inspired, intelligently accessing user profiles
and preferences, selecting databases, and searching for template views, reports, and
workflows. The project tree is closer to a data management system than a simple
project tree, with the ability to browse for existing data, files, templates, reports,
or images present in the database.

8.2.3 Knowledge (Efficiency and Mobility)
The efficiency of an organization in which all users can use the same database and
software to interpret and/or visualize data and documents is bound to increase.
The cloud software platform developed by Antaeus Technologies is seen as an
intermediate step between current software using virtual machines and the ability
to use a Big Data system for the oilfield arena.
An intuitive interface reduces the non-productive time required for training. The
full suite of workflows available with each subscription seat allows the sharing
of knowledge between all users, while each workflow is optimized for speed of
execution and display, reducing non-productive time. Workflows are available from
a number of sources and are user-rated, with comments to promote workflows
found to be very satisfactory by the community of users. The innovation inherited
from other industries will minimize the non-productive time spent selecting the
right workflow while improving the quality of the plugins provided (by tool
manufacturers, oil companies, service companies, consultants, and academics).

Workflow Variety

The availability of suitable workflows is essential to user efficiency and to cloud
acceptance by the oilfield industry. There are many sources of workflows and
many disciplines for their application. The cloud platform must provide
essential workflows for many disciplines, and give each subscribed seat access to
the entire suite of workflows.



Fig. 8.5 Complete set of oilfield activities

This allows a seat to be shared amongst different disciplines and users. It is
also our opinion that many in the community, e.g. academics, students, and consultants,
would appreciate having a platform for the exercise of their own workflow creations
in return for community recognition and higher visibility of their technical
contributions. Others who have invested heavily in the development of high-end workflows
may have a product for which users would pay an additional fee. For all
workflows, the mechanism of the workflow has to be hidden from users for the
protection of intellectual property, allowing users to create their own proprietary
workflows with limited, controlled dissemination. All oilfield activities can be
considered (Fig. 8.5).

8.3 General Explanation of Terms
8.3.1 Big Data
Big Data technically means that you have more data than one computer can hold.
This is not a strict definition, and it depends on the size of that computer, but it
is safe to assume that 10 TB already falls into the Big Data category.
However, it would be wrong to define Big Data by size alone. It is also characterized
by the 3 Vs: volume (which we just discussed), velocity, and variety. Velocity
means that a lot of data arrives every second; hundreds of thousands of
transactions per second, and even millions, are well within reach of Big Data tools.
Variety means that the data comes in different formats, or is not formatted at all.



The last aspect of Big Data is its availability. Once engineers got tools that
allowed the storage and processing of terabytes and petabytes of data, they got
into the habit of storing more data. One thing leads to another.
Let us look at a very simple practical example. Imagine that you design a system
capable of storing a large number of logs. You do this so that people no longer need
to keep individual copies on their own computers. That helps, your system gets noticed, and you
immediately get requests like: "We heard that you can centrally store a lot of data.
Can we please store all of our PDF documents there as well?"
Thus, by Big Data we mean all of these things. The tools to handle them do exist, and
we will describe them below. Petrophysics is uniquely positioned to benefit
from Big Data, because its typical data sizes are big (tens of terabytes) but not too
big (petabytes), which would require higher-level skills. So it is a "low-hanging
fruit."
For a simple introduction to Big Data and Hadoop, please see Kerzner and
Maniyam (2016).
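The map/reduce style behind Hadoop can be illustrated without a cluster: a map step emits key-value pairs per record, and a reduce step aggregates per key. A toy sketch, counting curve mnemonics across some hypothetical well files (the curve lists are invented):

```python
from collections import Counter
from itertools import chain

def map_phase(file_curves):
    """Map: emit one (mnemonic, 1) pair per curve found in a single file."""
    return [(curve, 1) for curve in file_curves]

def reduce_phase(pairs):
    """Reduce: sum the counts per mnemonic."""
    totals = Counter()
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Hypothetical curve lists from three wells
files = [["GR", "RHOB", "NPHI"], ["GR", "RT"], ["GR", "RHOB"]]
counts = reduce_phase(chain.from_iterable(map_phase(f) for f in files))
```

On a real cluster the map calls run in parallel across machines and the framework shuffles pairs by key before the reduce, but the programming contract is exactly this.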

8.3.2 Cloud
Cloud is better explained as “elastic cloud.” People often think that if they
store the data in a hosting center and deliver it through the Internet, this is called
a “cloud-based system.” This is a mistake. This is called “internet delivery,” and it is
as old as the Internet, circa the sixties and seventies (Leiner et al. 2016). By contrast,
elastic cloud means that your resources are elastic. If you need more storage, you
can get it by a simple request, and if you need more processing power, you can get
this just as well, all in a matter of minutes. Examples of such clouds are Amazon
AWS, Google Cloud, and Microsoft Azure. But any system that can scale up and
down qualifies as a cloud-based system.
What is special about cloud-based systems is that they are easily available and
very economical. The best, though old, comparison is building your own
power-generating plant (old style) vs. buying power from a large provider (cloud-based).
Because of obvious economies of scale, flexibility and ease of access, once
cloud-based systems appear, the old systems can't compete. The cloud-based
technologies behind Uber are a good example.

8.3.3 Browser
Everybody knows what a browser is, but not everybody appreciates the benefits
of delivering applications through it. The browser gives you a universal interface,
allows systems to look the same on any device, and saves developers a lot of
work. As modern browser-based systems imitate desktop applications more and
more, the distinction may blur in the users' minds, but developers should
always keep the principle of browser delivery paramount.


M. Kerzner and P.J. Daniel

In oil and gas, browser delivery is especially important. Let us remember that
security for browser delivery has been under constant development for the last ten
years, and the result can now be called a “secure browser platform.”
The aim of a secure browser platform is to allow the different actors of
the oil and gas field to work and collaborate. Current software offerings are
inadequate to cover all specific needs in a single framework, due to the cost of
licensing and the fact that services companies develop software for their own
workflows and measurements, thus depriving oil companies of other workflows
and measurements that are not part of the original workflow. The missing link
is a software platform, independent of the main services companies, with which
all companies can interact.
Companies have user profiles with very different needs. For instance,
a field operator will need to run a workflow on a single well at a time,
whereas an office expert needs to run workflows on hundreds or thousands of wells at
once, and managers need reports and history trends.

8.4 Steps to Big Data in the Oilfield
8.4.1 Past Steps
For the last 20 years, software and services companies have been developing PC-based applications with a large number of calculation modules. The migration to the
cloud is bound to be difficult due to the inherent cost of the technical and business
legacies, the necessary support of existing systems and the requirement for data
continuity. One solution which has been put in place is the use of virtual machines in
which the PC-based application is emulated remotely, with the user getting a graphic
rendering (Angeles 2014). This technology, although facilitating communication
between centers, suffers from an absolute requirement of high-speed internet, since
all commands are sent back and forth to the server, with a bandwidth-hungry
rendering on the screen. Although a necessary solution for seismic, due to the large
volume of data and cluster-hungry processing, it is also being used for petrophysics,
geology and other domains in which this solution is far from ideal. It is
extremely sensitive to network quality, and users have been complaining about
this technology for domains outside of the seismic arena. Due to these inefficiencies,
it does not help the oilfield migrate to Big Data, since users are not
enthusiastic about this technology.
To reach a point where all data are used efficiently, the IT department has to
be able to manage data in databases efficiently and securely, upgrade workflows
for every user concurrently, and keep time and user versioning of past projects.
Central document management is key to making this happen, both for structured and
unstructured information. On a logistic level, these options are effective with a
central platform system and users accessing it through the network, provided that the
information is passed through the network in an effective way. The best option is
to use user and client machines in the most intelligent way, through some artificial
intelligence and machine learning.

8.4.2 Actual Step: Introduce Real Cloud Technology
The software industry is rapidly accelerating towards cloud computing. Software
packages purchased by companies under a Software as a Service (SaaS) model
accounted for 10% of software purchased in 2013. This market share increased to
30% in 2015.
Considering this rapid take-over, companies are developing cloud platforms for
multiple industries. Antaeus Technologies has set its course to provide a full cloud
collaboration platform to the oilfield industry incorporating various file formats
(DLIS, LAS, ...) and different protocols (WITSML, PRODML, SCADA). The
primary goal is to expand the number of users of its cloud software platform
worldwide, and then, in a second phase, to connect the office personnel (technicians,
accounting, experts, managers, ...) to their instrumented wells and flow stations
through the same dashboard. The users will have the ability to run processes
to optimize the decision process of the companies. The mid-term goal is to add
SCADA and MODBUS protocol integration for intelligent oilfield operations.
Antaeus cloud software platform - GeoFit - is a combination of technologies
(Daniel 2016), all inherited from the cloud industry, cross-optimized for speed and
user collaboration. It is designed as a cooperation tool inside and between
companies. Its workflow engine accepts calculation modules from a variety of sources
(oil companies, services companies, consultants, tool manufacturers, academics),
supervised by an intelligent input guidance that helps users navigate between
multiple inputs.
Using Node.js for computational efficiency, it includes multiple wrappers for
workflows written in other languages, thus facilitating company migration. The
databases accept multiple data types, providing both user and time versioning, and
with calculations done at the server for efficiency and security. GeoFit is a full cloud
platform where communication and visualization are optimized without requiring
any installation, apart from a standard web browser on the user’s computer. The
cloud platform hosted by Amazon Web Services can also be provided on a portable
mini-server allowing the user full mobility without internet connection.

What Is GeoFit?

GeoFit is a cloud based software platform providing users in the energy industry
access to a cloud database, specialized suite of calculation modules and browser
based dashboard to visualize and manipulate well data (Fig. 8.6). The heart of



Fig. 8.6 GeoFit architecture

the program is a suite of related databases which are accessed by a computation
engine to run workflows and search engines and visualization tools to manipulate
and display the contents of the database.

Why Use a Universal Cloud-Based Hybrid Database?

GeoFit provides a very agile, fully multi-user and real-time-ready relational database
performing at up to 40 million data entries per second. There is much related
information concerning a well. The information can be in a variety of formats, for
example: well log formats (DLIS, LAS, CSV, etc.); related documents that can be in
text, PDF, CSV, etc.; related files (JPEG pictures, audio or video clips, etc.).
The first issue is the access to the files appropriate to the investigation at hand.
In normal computer storage, relationships are indicated by directories. Should the
files related to a log be stored in a file position related to the well, to the field, or
to the type of log? The GeoFit database answers such questions by the following features:
• Searchable: In the database every file and every log has keys indicating searchable properties. Files can be found by searching for those files matching the
desired key properties. If you want to find every log in a specified field that is
a gamma-ray log created after the year 2000, this is done almost instantaneously.
Files and logs could be searched, for example, by a client name or tool
manufacturer. Organization is accomplished by the database itself and not by
the file location. Spreadsheets such as Excel are not relational databases and do
not incorporate this flexible search capability.

• Multiple data types: While specially catering to well log data, the GeoFit
database can accommodate multiple file types. This allows retrieving files related
to a well without regard to a predefined categorization. PDF, JPEG, Excel, log
files and files of many other formats can be stored and referenced.
• Multi-user: Two or more users can access a log at the same time. The system
avoids problems that could be created with two users accessing the same file.
When the source file is in the Master database the user has read-only access to
the data. Any changes or any new files created by a workflow are in a new file that
can only be saved to the user’s Project database. The system enforces that each
new file has a unique name to avoid inadvertent overwriting. Project database
files are designed for maximum flexibility in the quest for the best interpretation.
If users are given delete permissions by the system administrator, they can delete
valuable project files; if not given these permissions, they may create confusion or
clutter up the project. To alleviate this, project files that are considered worthy of
protection can be transferred by the administrator to the Master database to avoid
inadvertent destruction, while remaining available read-only.
• Preprocessed: It would be very inconvenient if the logs in a database were
stored in their native format, e.g. some in DLIS and some in LAS. This would
require different viewers for each format, would hide inside the formats the
information that might be useful in searching, and would require some extraction
and formatting of data that might be used in calculations. Before entry of an
item into the GeoFit database all the pertinent items in each log are extracted
and stored in the database in a uniform proprietary format. The data is stored
in a form amenable to rapid calculation and the log information is stored in
searchable keys. By this means calculations and searches are sped up by orders of
magnitude. The GeoFit viewer can then display any database log without further
transformation, and any log can be exported to other users in a standard format
by a single export application.
• Calculation access: The GeoFit workflow engine is designed to access the
database by its keys so that the selection of workflow inputs is quick and
simple. Since the workflow engine knows from the database the units and format
of the data, no additional specification is necessary, and conversions between user
and calculation units are made automatically. With the data in a curve put by the
preprocessing into the form for easiest processing, the calculations are orders of
magnitude faster. The results can automatically be placed into the database in the
GeoFit format for immediate viewing or further manipulation. The identification
of the particular workflow version, the identification of the particular source logs
and the identification of the input parameters are all stored in the GeoFit entry of
every workflow calculation output file.
• Collaboration: GeoFit is supplied with three individual but interactive types of
database: a master database with controlled insertion, deletion and modification
of masterfiles, allowing their general read-only use; project databases with
shareable files to try different trial interpretations of the data; customer databases
which allow secure access by customers to selected files allowing log and report
viewing and downloading by the customer.
• Security: An administrator can assign each user an access level which determines
privileges, with multiple levels of access available. Secure files can be restricted
to only chosen viewers and users requiring very limited access can have access
to only those files selected.
• Scalability: The database scales by building on top of Cassandra. The use of
Cassandra is hidden by the API, so to the users it looks like a relational database.
Not all the ACID properties are provided, though, and they are not needed. The
important capability of Cassandra for O&G is its flexible eventual consistency,
which allows up to days of being offline to be considered a normal occurrence.
Being, if not completely offline, then on very reduced connectivity at times is the
norm in the oilfield.
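The key-based search described in the “Searchable” bullet can be sketched in a few lines; the record layout, key names and sample values below are hypothetical, not the GeoFit schema:

```python
# A sketch (not the GeoFit API) of key-based search over preprocessed logs:
# each log carries searchable keys, so queries filter on keys rather than
# on file locations. All field names and values are hypothetical.

logs = [
    {"field": "Eagle Ford", "type": "gamma-ray",   "year": 2003, "client": "AcmeOil"},
    {"field": "Eagle Ford", "type": "resistivity", "year": 2005, "client": "AcmeOil"},
    {"field": "Permian",    "type": "gamma-ray",   "year": 1998, "client": "BetaGas"},
]

def search(logs, **criteria):
    """Return the logs whose keys match all of the given criteria."""
    return [log for log in logs
            if all(log.get(k) == v for k, v in criteria.items())]

# "Every gamma-ray log in a specified field created after the year 2000":
hits = [log for log in search(logs, field="Eagle Ford", type="gamma-ray")
        if log["year"] > 2000]
print(len(hits))  # 1
```

Because the matching happens on keys stored in the database, not on file paths, the same query works regardless of where the underlying files sit.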

Workflow Engine

The GeoFit workflow engine is designed for optimized performance in conjunction
with the GeoFit database.
• Intelligent input guidance: A common problem in the application of workflows is
a workflow design that favors inputs from a particular family or tool manufacturer
or the application of an inappropriate curve or parameter to a tool. To address
this problem, each GeoFit workflow or plugin contains intelligence that directs
the user to select from inputs most appropriate to the particular plugin.
• Multidisciplinary library: GeoFit provides at no additional charge an
ever-increasing selection of multi-discipline workflows. GeoFit also provides at no
charge third-party workflows provided in return for advertisement and attribution.
There is a capability for users who develop proprietary workflows to distribute
to their customers password-enabled versions of those workflows. Third-party
workflows providing particular value will be provided to users at an additional
charge shared with the third party. All workflows are described in the GeoFit
form with user comments and ratings.
• Computational efficiency: GeoFit-supplied plugins operate multicore in the
user's instance and ensure full computer usage and the fastest calculation speeds.
Calculations which consume a large percentage of the computer's capacity do
not prevent its use by other operators.
• Easy migration: For ease in migration from other systems, GeoFit can support
plugins written in a number of languages: C, Python, Node, Scilab, Matlab.
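The automatic conversion between user and calculation units that the workflow engine performs can be sketched as follows; the factor table and function names are our own illustration, not the GeoFit API:

```python
# A minimal sketch of automatic unit conversion between user and
# calculation units; the factor table and names are our own illustration.

FACTORS = {
    ("ft", "m"): 0.3048,
    ("m", "ft"): 1 / 0.3048,
}

def convert(value, from_unit, to_unit):
    """Convert a value between two units using the factor table."""
    if from_unit == to_unit:
        return value
    return value * FACTORS[(from_unit, to_unit)]

# A workflow that calculates in metres can transparently accept user data in feet:
depth_m = convert(6561.68, "ft", "m")
print(round(depth_m, 1))  # 2000.0
```

Because the engine knows each curve's units from the database, the user never has to specify them at workflow time.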

Software For Reliability, Security and Speed
GeoFit incorporates the latest software techniques, including Node.js running on the
V8 engine. Both the display and the workflow interface directly with the database,
utilizing the specialized structure of the pre-processed log formats to directly
and efficiently extract the data points. Wherever possible the data is manipulated
in Node.js to accommodate multi-core processing. Event loops incorporating
single-threaded, non-blocking I/O allow handling a huge number of simultaneous
connections with high throughput. Node uses Google's “V8” JavaScript engine,
which is also used by Google Chrome. Code optimization and thorough testing are
made easier by designing for operation in the defined environment of the cloud,
rather than requiring a system adaptable to all operating systems and hardware
environments found in traditional sales models. The database is tightly coupled
to workflows and displays and was completely rewritten three times before a
satisfactory design was arrived at.
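The single-threaded, non-blocking event loop described above is the pattern Node.js uses; the same idea can be sketched in Python with asyncio (an analogy, not GeoFit code):

```python
# The single-threaded, non-blocking event loop described above, sketched
# with Python's asyncio as an analogy to Node.js: one thread interleaves
# many connections instead of blocking on each one in turn.
import asyncio

async def handle(conn_id):
    await asyncio.sleep(0.01)  # stands in for non-blocking I/O (disk, network)
    return conn_id

async def main():
    # A thousand "connections" in flight at once, on a single thread:
    results = await asyncio.gather(*(handle(i) for i in range(1000)))
    return len(results)

print(asyncio.run(main()))  # 1000
```

All thousand handlers complete in roughly the time of one sleep, because the loop switches to another connection whenever one is waiting on I/O.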

Cloud Instantiation

As described elsewhere, there are many advantages to operation in the cloud,
primarily the flexibility in changing the computational and memory resources, the
enhanced security and reliability, and the cost. For us a major consideration is
the opportunity to optimize in a controlled environment, allowing rapid problem
addressing and more thorough testing. Traditionally such software was installed
on each user's server, requiring software compromises to allow operation on many
operating systems on different hardware; testing on each instance had to be done
on the customer site, and problems could be specific to a particular instance.


Database entries can be viewed in a number of modes: single or multiwell views of
multiple logs, cross-plots, histograms, etc. (Fig. 8.7). The viewer allows such editing
features as formation definition, splicing, depth modification, etc. Searches of the
database for the well selection can be shown, and other format files selected for
viewing. The individual views (2-d, crossplots, etc.) can be extracted in a form
suitable for insertion into reports (JPEG, SVG). Plot colors can be easily modified,
and color preferences (default, company, project, user or customer) can be easily
created and utilized. Templates for common groupings are easily created.
Communications

Remote connectivity is essential in the oil industry. The exchange of information
with field offices over the internet is an everyday occurrence. Because many
oilfields are in remote places, the issue of connectivity over poor or nonexistent
internet lines deserves attention. Often connections that are slow or spotty are
blamed on faulty software while the true source of the problem is a



Fig. 8.7 GeoFit main screen

poorly configured firewall or internet connection. In this case, an effort to improve the
internet connection can pay big dividends. GeoFit addresses this issue in a number
of ways:
• Internet quality: GeoFit continually monitors the internet quality and displays on
the dashboard a measurement of the quality. If there is no connectivity the user is
alerted, so that corrective actions can be taken and time is not spent trying to fix
other, nonexistent problems.
• Maximum utilization of bandwidth: If the internet connection is present but
the quality is low (e.g. dropped packets requiring multiple retransmissions)
GeoFit detects this and automatically takes steps to reduce the transmission of
information not necessary for the active display. This makes the best use of
the bandwidth available. This adaptive adjustment is not done for high-quality connections.
• Portable Server: There are some cases where an internet connection is so
unreliable that alternatives are required. The GeoFit platform can be deployed in
a portable server providing full functionality (Fig. 8.8). If an internet connection
should be occasionally available then the portable server can synchronize with
the company database. Selected files can be downloaded to the portable server
before deployment and workflows run and the results viewed on those files and
any additional files collected in the field. There are additional security measures
incorporated into the portable server to protect the code and data stored, but the
ability to limit the totality of files initially loaded provides additional security in
deployment to high-risk areas.
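The bandwidth-adaptive behaviour described in the second bullet can be sketched as follows; the quality threshold and payload fields are invented for illustration:

```python
# A toy sketch of the adaptive adjustment described above: when measured
# link quality is low, drop the data not needed for the active display.
# The threshold and payload fields are invented for illustration.

def payload_for(link_quality, full_payload):
    """Trim the payload when the connection is poor (quality in [0, 1])."""
    if link_quality >= 0.8:
        return full_payload  # good link: send everything
    # poor link: keep only what the active display needs
    return {k: v for k, v in full_payload.items() if k == "active_view"}

full = {"active_view": [1, 2, 3], "background_logs": list(range(10000))}
print(list(payload_for(0.3, full)))  # ['active_view']
```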

Fig. 8.8 Portable server for
limited internet connectivity

8.4.3 Structure Your Data Before You Go Big

Project Structure

The project tree is very generic, in the sense that it contains all required levels for
a field study, with the option to select worldmap, basin, field, planned and actual
well, borehole, activities and containers with data created by users or inherited from
DLIS/LAS/ASCII files. The project tree has intrinsic data management ability,
with the selection of files based on their purpose, like data, core, plot, or documents
such as reports or videos. Without changing the project tree view, the user can quickly
visualize all document types contained at each level of the well hierarchy. This is
particularly useful when the user tries to locate the field drilling report or the latest
mud content used in a wireline operation. It can also be useful for the accountant to
understand cost discrepancies between operations. The experts will want to research,
through a machine learning process, whether existing operation history could enhance
further drilling and completion operations in another field.
Saving all tool parameters in the database at each run gives the ability to monitor
tool quality and apply preemptive measures to avoid field failures.
Having a project tree containing all objects in the database, with easy access to tool
parameters, benefits the quality process of all services companies. It is possible at
that point to run a Big Data process to fully understand the root of the issues, which
could very well be linked to the field petrophysical environment or to an electronic
component reacting to an environment parameter (pressure, temperature, radiation)
over time and over runs. Such a problem is impossible to solve if the user
is looking at only a very limited number of runs and datasets.
For a geoscience expert, having the ability to quickly access reports, images and
cores in the same project tree when visualizing and processing data in a wellbore
enhances the quality of the interpretation. This level of integration is necessary for
an enhanced decision cycle.



8.4.4 Timestamping in the Field
Data can be filtered based on time, tool, location, user, version, and other main
metadata attached to the project itself. Through the time period selection, the user
has the ability to select the period for which he/she would like to visualize the
changes in production data, tool runs, etc. By selecting a particular date, the project
tree and all visualization graphics can selectively show data and results at that
particular date.
Saving all data and metadata versus time allows the right propagation of parameters
along the workflows. For instance, an important aspect is the depth of the reservoir.
If the kelly bushing (KB) is different from one run to the next, then the measured
depth and true vertical depth will be different. If the software platform cannot
handle the changes in metadata such as KB, mistakes will be introduced in the
processing and the wrong depth will be calculated for the reservoir.
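The KB remark can be made concrete with a toy calculation (all numbers are hypothetical): referencing a later run's measured depth to the wrong KB shifts the computed reservoir depth by exactly the KB difference.

```python
# Hypothetical numbers illustrating the KB remark above. Depths measured
# from the kelly bushing (KB) are only comparable across runs once each
# is referenced to a fixed datum using its own run's KB elevation.

def depth_below_datum(measured_depth, kb_elevation):
    """Depth below a fixed datum for a measurement referenced to the KB."""
    return measured_depth - kb_elevation

top_run1 = 2000.0  # formation top at 2000 m measured depth, KB at 25 m
top_run2 = 2002.0  # the same top in a later run, after the KB moved to 27 m

# With the correct per-run KB, both runs agree on the true depth:
assert depth_below_datum(top_run1, 25.0) == depth_below_datum(top_run2, 27.0)

# Applying run 1's KB to run 2's measurement introduces a 2 m error:
error = depth_below_datum(top_run2, 25.0) - depth_below_datum(top_run2, 27.0)
print(error)  # 2.0
```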
The fact that all data and metadata are timestamped will increase the quality of
the workflows, since the parameters in the workflow will be those for the particular
run in the well. It also opens the door to the field of data analytics, with the ability
to monitor specific parameters over runs. For example, monitoring Rmf over time
provides the petrophysicist with information on the well's porosity and permeability.
Having metadata linked to runs also increases the chance to fully understand
a tool failure or issue in a well, since the right environment is known versus the
measurement. For instance, suppose a tool has failed repeatedly without the engineering
center finding the quality issues. If the problem is due to a special environment
encountered by the tool while drilling or logging, it is very hard to diagnose
the main reasons by looking at a single case. With access to past environments for
each similar tool, a complete review of all possible reasons becomes possible through
big data analytics processes.
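Timestamped run metadata of the kind described above can be filtered with a few lines of code; the record layout, tool name and values are invented for illustration:

```python
# A sketch of time-based filtering over run metadata, as described above;
# the record layout, tool name and values are invented for illustration.
from datetime import date

runs = [
    {"tool": "GR-7", "date": date(2014, 3, 1), "rmf": 0.09, "temp_c": 85},
    {"tool": "GR-7", "date": date(2015, 6, 9), "rmf": 0.11, "temp_c": 92},
    {"tool": "GR-7", "date": date(2016, 1, 4), "rmf": 0.16, "temp_c": 118},
]

def runs_between(runs, start, end):
    """Select the runs whose timestamp falls within the chosen period."""
    return [r for r in runs if start <= r["date"] <= end]

# Monitoring one parameter (here Rmf) over the selected runs:
selected = runs_between(runs, date(2015, 1, 1), date(2016, 12, 31))
print([r["rmf"] for r in selected])  # [0.11, 0.16]
```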

8.5 Future of Tools, Example
For a new software platform to be adopted by users and management at every
level in a company, the learning curve must be very short, on the order of a day of
training. A company like Apple has been successful thanks to the inherent simplicity
of its applications, keeping the complexity in the background of the interface.
The advantage of Google and Android is the simplicity of creating and sharing new
applications between users, creating a whole world of opportunities and access to
knowledge previously impossible to apprehend. The software platform developed
by Antaeus Technologies follows the same principles of simplicity for the users, with
extensibility and evolutionary functionality in the background, incorporating the
latest cyber-security for data, workflows and results.
The software design is based on the principle of “for but not with,” meaning
that the design does not prevent further evolution. Excellent ideas which require
additional work are not precluded in the future, while not necessary in the current
version for the platform to work properly.

The visualization graphics are being developed with time and real-time data as
a major requirement. All views are created through a data-driven graphics language
developed for time-related information, enhancing the experience and capability of
the user to comprehend the flow of data from all origins.
Users in a single team might not be allowed to work directly on the company
reference database. In this case, a subset of the database (based on field, well, or
zones) can be shared by selected members of a team and published once approved by
the data manager.
To achieve the cost efficiency required by the oil and gas industry to survive
during this downturn, the technology has to be extremely optimized. Regarding the
database, a very strong option to manage speed and security is the use of a database
on a Linux system. Through an optimized process, we obtained search queries in
very large and complex files on the order of tens of microseconds per row of data
for multiple selected channels. Security has been designed by security experts with
years of proven records of data security over browser-based platforms. In the same
line, workflows have the option to have plugins inside the platform, in a separate
controller isolated from the platform, or run on a remote server.
The scalability of the platform is two-fold: at the database level, with the option to
select a small computer box as well as an AWS-type of server, but also at the
workflow level, with the option to send lengthy processing to a high-powered server
for increased efficiency. This is an important factor for a company's mid- to long-term
growth. Considering the effort to migrate information, the company has to be certain
that there will be no limitation as to the size of the database and the use of the
software by its users. The platform developed by Antaeus is being designed with
the concept that any users with specific profiles can access and use the platform: in
the field without network, or connected to a localized network, the main company
intranet network, or an external network database such as AWS (Amazon Web Services).
TIME: Overall process time is an important factor in a company's effectiveness.
Workflows run by users can be lengthy and occupy most of the processing power
of a server. To avoid any latency, the Antaeus platform can run a workflow with
plugins run on different servers for processing-time efficiency. Another reason for
loss of time is the amount of data shared on the network. This amount of data can
be optimized, lowering the requirement for high network bandwidth. The last main
reason for wasted time in a project is a lack of collaboration, due to system inability
to share data, workflows and results. Antaeus is developing a true multi-user, multi-well,
realtime architecture in which data, workflows and results can be shared within
a group on a team-dedicated workspace before publishing to the company database.
The same dedicated space can also be shared with partners to accelerate the project.
Data integration from different origins is a key part of the software platform
to achieve the digital oilfield, through the integration of data from any sensors at
the rig or in the field and wellbore data from logging tools. To be complete, the digital
oilfield requires a processing link between data and modeling to ensure adapted
model prediction and adapted decisions. Data and metadata are linked through time
in the database; integrating this information allows a better quality of data, a stronger
decision process and a higher return on investment.
The database is collocated at the server level with the calculation modules. The
user sends requests to the server, which are handled through an asynchronous,
non-blocking and concurrent event-based language, thus intrinsically utilizing the
computational power of the server. The graphics are handled by the client, limiting
the response delay generated in virtual machines by low network bandwidth. The
graphical language is a powerful data-driven graphics language, developed around
the notion of streaming and realtime data. The workflow engine run at the server
level can integrate languages such as Python, Matlab, Scilab or C. From a security
point of view, plugins can be located in the server or in a separate dedicated server
isolated from the software platform, with only input/output/parameters accessible.
This overall system creates an optimized cloud server/client platform with
operations shared between the server and the client for optimum network usage
and client experience.

8.6 Next Step: Big Data Analytics in the Oil Industry
In this section, we set out the major considerations for planning your Big Data implementation.

8.6.1 Planning a Big Data Implementation

Planning Storage

The standard platform for Big Data implementations is Hadoop. The description of
the technology that eventually became Hadoop was initially published by Google,
in 2003–2004.
When I travel and have to explain to my Uber driver what Hadoop is, I usually
tell them: “Hadoop is the glue that keeps Uber together. Imagine all the millions
of drivers and riders, and the size of the computer that handles them. It is not one
computer, but many, working together, and Hadoop is the glue that makes sure
they work together, registering drivers and riders and all the rest.” This explanation
always works. Incidentally, Uber has recently made public their architecture stack
(Lozinski 2015).
Hadoop gives you two things: unlimited storage and unlimited computation (Fig.
8.9). More precisely, it is limited only by the size of the cluster, that is, by the
number of servers that are tied together using the Hadoop software. It is better to
call it Apache Hadoop, because Hadoop is an Apache trademark.
Keep in mind that there are many Big Data projects and products, but Hadoop is
the most popular, so it will serve as the basis of our first explanation.

Fig. 8.9 Two faces of Hadoop: storage and processing

As an example, let us plan a Hadoop cluster. Practically, here are the steps you
would go through. For example, if you plan 1 GB ingestion per day, this means that
you need to prepare 3 GB of raw storage per day, since Hadoop replicates all data
three times. So, 1 GB becomes 3 GB.
Next, Hadoop wants 25% of storage to be allocated to temporary storage, needed
to run the jobs. This turns our 3 GB into approximately 5 GB. Finally, you never
want to run at full storage capacity, and you would want to leave at least 50% of
your storage in reserve. This makes 5 GB into 10 GB.
Thus, every GB of data requires 10 GB of raw hard drive storage. This is not a
problem, however, since at the current storage prices of about 5 cents per gigabyte,
you can store 10 GB for 50 cents. A terabyte (TB) would then be $500. As a
consequence, a petabyte (PB) would be $500,000. A petabyte, however, is a very
large amount of information, equivalent to a thousand large laptops. An enterprise
big enough to need this much data is usually big enough to be able to afford it.
This is illustrated in Table 8.1 below.
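The sizing arithmetic above can be written as a short calculation; the helper names are ours, and note that the chapter rounds more conservatively (10 GB of raw disk per GB of data, where the straight formula below yields 8):

```python
# The rules of thumb above, as a short calculation: 3x replication, 25%
# of disk for MapReduce temp space, and 50% of disk held in reserve.
import math

def raw_storage_needed(data_gb, replication=3, temp_frac=0.25, reserve_frac=0.5):
    """Raw disk (GB) needed to hold data_gb of user data."""
    replicated = data_gb * replication        # HDFS keeps three copies
    with_temp = replicated / (1 - temp_frac)  # leave 25% of disk for temp space
    return with_temp / (1 - reserve_frac)     # keep 50% of disk in reserve

def nodes_needed(yearly_raw_tb, usable_per_node_tb=18):
    """Nodes required for a year of replicated data, as in Table 8.1."""
    return math.ceil(yearly_raw_tb / usable_per_node_tb)

print(raw_storage_needed(1))  # 8.0 raw GB per GB of data
print(nodes_needed(1095))     # 61 nodes, matching Table 8.1
```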

Planning CPU Capacity

Usually, as a rule of thumb, once you have enough storage capacity in your cluster,
the CPUs that come with the servers hosting the hard drives provide sufficient
processing power. In your estimates, processing would include the need to process
all the incoming data on time, and accounting for ingest spikes.


Table 8.1 Planning Hadoop storage capacity

Average daily ingest                 1 TB
Daily 'raw data'                     3 TB
Node raw storage                     24 TB (12 × 2 TB)
MapReduce temp space                 25% = 6 TB
'Raw space' available per node       18 TB (raw − MR reserve)
One node can store data for          6 days (18/3)
1 year worth of data (flat growth)   1 TB × 3 × 365 (ingest × replication × days)
                                     = 1095 TB (1 PB+); 61 nodes needed (1095/18)
1 year (5% growth per month)         80+ nodes
1 year (10% growth per month)        110+ nodes

However, you may face a chicken-and-egg problem. If your cluster is very CPU-intensive,
as in image processing applications, it may require more than the standard
processor. You will either know this up front, or you will see it after you build
your cluster and load it. Luckily, clusters are grown by adding, not replacing, the
hardware. So if your cluster does not cope, you will increase its size, but not throw
it away to buy a new one.
The cost of processing is thus included in the planning for storage above.

Planning Memory Requirements

Modern clusters are very memory-hungry, especially with the recent trend toward in-memory
computing. The result is that RAM sizes from 128 GB to 256 GB per
server are common. Many production clusters that we work with run to thousands
of nodes, with 50–100 TB of RAM in total.
The petrophysics clusters for Hadoop and for NoSQL will likely be much more
modest, starting from a one-machine cluster for development, and extending to 5–10
servers when they go into the first acceptance testing and production.
The real-world petrophysics application will also present a unique problem, not
found in regular Hadoop/Big Data deployments: network bandwidth may be plentiful
in the office, but not so in the field, where it may be limited and unreliable.
The cost of memory is also included in the planning for storage above.
More details on Big Data and Hadoop capacity planning can be found in
(Sammer 2012).



Fig. 8.10 Eventual consistency – Starbucks analogy

8.6.2 Eventual Consistency
Consistency is one area that needs to be addressed before we can announce that our
O&G application is ready for the real world. Consistency can be strong or eventual.
A good example of eventual consistency is provided by the line at a Starbucks coffee
shop, an analogy that is used very often; see, for example, (Jones 2015).
At Starbucks, you submit your order, and the baristas make your coffee, with many
people involved. They exchange messages and do not use a strongly coupled chain
of events. The result is that you may have to wait longer for your coffee than the
guy who was after you in line, and getting the coffee itself may require a retry.
But eventually you will get it. This is good enough for you, and it is scalable for
Starbucks, and that is exactly the point (Fig. 8.10).
First of all, what is consistency? Databases provide a good example of strong
consistency: their transactions are atomic (all or nothing), consistent (no inconsistent
states are visible to outside clients), isolated (multiple processes can do updates
independently) and durable (data is not lost).
That is all true because the database resides on one server. In the world of
Big Data, multiple servers are used to store the data, and the ACID properties we
just described are not guaranteed. The only one where Big Data systems, such as
NoSQL, shine is durability: they don't lose data.
The real question to discuss is consistency. Instead of providing the strong
consistency of SQL databases, Big Data systems usually provide what is called



eventual consistency. The data will be correct with the passage of time. How much
time? Sometimes the data is propagated through the cluster in a second.
In the extreme case, however, consistency will be preserved even if one of the data
centers where the data is located is out of circulation for up to ten days. We are
talking about Cassandra, of course. Cassandra was designed to tolerate
long outages of large parts of it, and then reconcile automatically when
the cut-off data center comes back online.
It is exactly this kind of eventual consistency, with very relaxed limitations,
that the O&G systems will have to provide. In doing so, they can learn from the
technologies already developed, tested, and put into practice by Big Data architects.
Cassandra's eventual consistency can be an especially useful example. You do
not have to borrow the code, but you can certainly study and re-apply the eventual
consistency architectural principles. How that can be achieved is addressed in the
immediately following section.

8.6.3 Fault Tolerance
Once your system has more than one computer, you have a cluster. And once you
have a cluster, some of the computers will fail or will be in the process of failing.
Out of 10,000 servers, one fails every day.
This poses two problems. First, if we lose a server, we must make sure that our
unlimited storage has no deficiency, and that we have not lost any data.
This kind of fault tolerance, the fault tolerance of data, is achieved in Hadoop
by replicating all data three times. The replication factor is configurable, first as a
default on the cluster, and then as a per-file setting. The replication is done block
by block, not file by file, and blocks are well distributed across the cluster, so that the
loss of any server does not lead to the loss of data. Instead, the storage master finds
under-replicated blocks and creates more copies of the same data, using good blocks
on other servers (Fig. 8.11). Thus, Hadoop storage is self-healing.
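In miniature, the self-healing loop that the storage master runs might look like the sketch below. The names and data structures are invented for illustration; they are not HDFS internals:

```python
import random

REPLICATION = 3  # Hadoop's default replication factor

def heal(block_locations, live_servers):
    """Re-replicate blocks whose live replica count fell below REPLICATION.

    block_locations: dict mapping block id -> set of servers holding a replica.
    live_servers: set of servers currently alive.
    """
    for block, servers in block_locations.items():
        holders = servers & live_servers           # replicas that survived
        missing = REPLICATION - len(holders)
        candidates = list(live_servers - holders)  # servers able to take a copy
        k = max(0, min(missing, len(candidates)))
        for target in random.sample(candidates, k):
            holders.add(target)                    # copy the block there
        block_locations[block] = holders
    return block_locations
```

After a server failure, one pass of `heal` restores every block to three live replicas, provided enough servers remain.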
In other areas, such as NoSQL, the designers follow similar principles. For
example, in Cassandra data replication is left to each individual server. All servers
are imagined as forming a logical ring, and a copy of the data is stored first on the
primary server chosen by the hashing algorithm, and then on the next two servers on
the ring.
This solution is known to work very well, helping with a few aspects of the system:
• Data loss is prevented: if a server fails, we have more copies of the same data;
• Data locality is preserved: with multiple copies of the data, you may be
able to find a copy that is local to the server running the computations,
rather than a remote copy across the network;
• Computational efficiency and good overall throughput: efficiency is ensured
by locality, and good overall throughput by minimizing network traffic.
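The ring placement described above can be sketched in a few lines. This is a simplified model under stated assumptions: real Cassandra uses token ranges and virtual nodes, and the hash function here is only illustrative:

```python
import hashlib

def replica_servers(key, ring, replication=3):
    """Pick the primary server for `key` by hashing, plus the next
    servers around the logical ring (simplified Cassandra-style placement).

    ring: list of server names in ring order.
    """
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    primary = digest % len(ring)
    return [ring[(primary + i) % len(ring)] for i in range(replication)]
```

Any key deterministically maps to three distinct, consecutive servers on the ring, so every server knows where to find a replica without central coordination.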



Fig. 8.11 Data replication in HDFS

The designers of Big Data O&G systems, in addition to using tools which already
implement replication, will do well to apply the same principles where the O&G
specifics require them and the current tools do not provide them. How this can be done is
explained in the section immediately following.
The second part of fault tolerance implies that your system continues to function
even if some of the servers that constitute it fail. This is achieved in Hadoop by
having a standby, or failover, server, which will kick in immediately on the failure
of the primary server for a specific function. The function may be storage or
processing, or any other part of the workflow.
As Amazon Web Services design courses teach, “Plan for failure, and nothing
fails.” (Varia 2011). In other words, failure should be considered part of the
primary design path. The question is not “if my component fails” but “when this
component fails.” This is achieved by decoupling the system, using message passing
to communicate the tasks, and by having an elastic fleet of servers on standby.
All these principles should be used by O&G designers, who are either building
on top of the existing Big Data tools, or building their own fault tolerance using the
storage and processing fault tolerance ideas developed in Big Data.



Fig. 8.12 Jenkins logo

8.6.4 Time Stamping in Big Data
Time stamping and hash stamping are useful in all areas of Big Data. For example,
the version control system Git uses hash stamps throughout, to minimize the changes
that it needs to transfer over the network while pushing and pulling changes to and
from remote locations.
Jenkins, a CI (Continuous Integration) and Deployment tool (Fig. 8.12), computes the hash
stamp of every artifact that it creates. It then does not have to re-do
any build that it has already done in another build pipeline.
HBase, a NoSQL solution on top of Hadoop, uses timestamps for a crucial step:
finding the latest correct value of any field. HBase allows any writer to just write
the data, without any locks that would ensure consistency but ruin performance.
Then, when a request to read the data comes, HBase compares the timestamps
on the multiple field values written by multiple servers, and simply chooses the latest
value as the correct one. Then it updates the other values.
Cassandra does the same time comparison, also fixing the old, incorrect,
or corrupt data values. These lessons can be applied in O&G, as we show in
the next section.
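A minimal sketch of this last-write-wins read repair follows. The replica representation is invented for illustration; real HBase and Cassandra use much richer data models:

```python
def read_with_repair(replicas, key):
    """Pick the newest (timestamp, value) pair among replicas and repair
    the stale copies, in the spirit of the HBase/Cassandra reads above.

    replicas: list of dicts mapping key -> (timestamp, value).
    """
    versions = [r[key] for r in replicas if key in r]
    latest = max(versions, key=lambda tv: tv[0])   # newest timestamp wins
    for r in replicas:                             # read repair: overwrite old copies
        r[key] = latest
    return latest[1]
```

Writers never take locks; the read path resolves conflicts and leaves every replica holding the winning version.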

8.7 Big Data is the future of O&G
A secure browser platform has intrinsic functionality for knowledge
collaboration once it is connected to a Big Data database. The collaboration occurs at multiple
levels, both internally and externally. Internally, a team shares a workspace
linked to the main database through publish/synchronize actions, giving every user
the ability to share the same workflows and results. Externally, companies can
interact either at the level of workflows or of the data themselves. A services
company has a strong interest in developing plugins that its clients will use,
because the clients will be able to make full use of the data taken in the wellbore.
On the other side, an oil company will have access to a larger number of workflows,
and thus a larger number of services companies to choose from for a logging job.
In the same way, a tool manufacturer's interest lies in having clients hear about
the full potential of a tool, and what better way than promoting it through the
full capability of the tool's measurements. A consultant's strength lies in the data
processing needed to get the maximum knowledge out of the information from measurements



taken in the wellbore. Another aspect of the collaboration is the ability to interact
faster with academic work, accelerating the spread of new techniques and ideas.

8.8 In Conclusion
As we have shown above, Big Data is the future of O&G, so let us now summarize
the steps that the industry needs to take.
1. Standardize the data storage approach. The O&G industry is not there yet.
2. With that, you will be able to scale and move into cloud-based databases.
3. People with a "my data never goes into the cloud!" mental block will soon be
converted. The same thing happened in health and in legal tech, and by now both
fields are happily using compute clouds (Kerzner 2015–2016).
4. Scalability will be achieved through cloud-provided scalability.
5. Customer separation will be achieved by each customer working within their own
cloud account.
6. The cloud will also allow multiple clients to inter-operate when this is required.
7. Learning Big Data implementation principles is a must for designers of O&G
Big Data systems.

References

Angeles, S. (2014). Virtualization vs. cloud computing: What's the difference? http://
Bello, O., Srivastava, D., & Smith, D. (2014). Cloud-based data management in oil and gas fields: Advances, challenges, and opportunities. https://www.onepetro.org/conference-paper/
Birman, K. (2006). Reliable distributed systems. London: Springer. http://www.springer.com/us/
Crabtree, S. (2012). A new era for energy companies. https://www.accenture.com/
Daniel, P. J. (2016). Secure cloud based oilfield platform. https://antaeus.cloud/
Industry Week (2015). The digital oilfield: A new model for optimizing production. http://
Jones, M. (2015). Eventual consistency: The Starbucks problem. http://www.tother.me.uk/2015/06/
Kerzner, M. (1983). Formation dip determination – an artificial intelligence approach. SPWLA Magazine, 24(5).
Kerzner, M. (1986). Image processing in well log analysis. IHRDC Publishing. Republished by Springer in 2013. https://www.amazon.com/Image-Processing-Well-Log-Analysis/
Kerzner, M. (2015–2016). Multiple articles on Bloomberg Law. https://bol.bna.com/author/
Kerzner, M., & Maniyam, S. (2016). Hadoop illuminated. Open source book about technology.
Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R. E., Kleinrock, L., & Lynch, D. C. (2016). Brief history of the Internet. http://www.internetsociety.org/internet/what-internet/history-internet/
Lozinski, L. (2015). The Uber engineering tech stack. https://eng.uber.com/tech-stack-part-one/
McGrath, B., & Mahowald, R. P. (2014). Worldwide SaaS and cloud software 2015–2019 forecast and 2014 vendor shares. https://www.idc.com/getdoc.jsp?containerId=257397
Sammer, E. (2012). Hadoop operations. Sebastopol, CA: O'Reilly Publishing.
Varia, J. (2011). Architecting for the cloud: Best practices. https://media.amazonwebservices.com/
Wikipedia. (2016). SaaS – Software as a service. https://en.wikipedia.org/wiki/

Chapter 9

Friendship Paradoxes on Quora
Shankar Iyer

9.1 Introduction
The “friendship paradox” is a statistical phenomenon occurring on networks that
was first identified in an influential 1991 paper by the sociologist Feld (1991). That
paper’s title declares that “your friends have more friends than you do,” and this
pithy and intuitive articulation of the core implication of the friendship paradox is
now one of the most popularly recognized discoveries of the field of social network
analysis. Despite its name, the “friendship paradox” is not really a paradox. Instead,
it is a term for a statistical pattern that may, at first glance, seem surprising or
“paradoxical”: in many social networks, most individuals’ friends have more friends
on average than they do. The simplest forms of the friendship paradox can, however,
be explained succinctly: compared to the overall population, a person’s friends are
typically more likely to be people who have a lot of friends rather than people who
have few friends.
That explanation of the mechanism underlying the friendship paradox may make
the phenomenon seem “obvious,” but in the wake of Feld’s paper, researchers
identified several nontrivial ramifications of the phenomenon for behavior on real-world social networks. Feld himself pointed out that this phenomenon may have
psychological consequences: if people tend to determine their own self-worth by
comparing their popularity to that of their acquaintances, then the friendship paradox may make many people feel inadequate (Feld 1991). Meanwhile, D. Krackhardt
independently identified that the friendship paradox has important implications for
marketing approaches where customers are encouraged to recommend products to
their friends; essentially, these approaches may be more lucrative than is naively
expected, because the friends of the original customers tend to be people who
S. Iyer ()
Quora, Inc., Mountain View, CA 94041, USA
e-mail: siyer.shankar@gmail.com
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_9



S. Iyer

are better connected and more influential (Krackhardt 1996). This is a line of
investigation that continues to be actively pursued in marketing research (Seeman
and Singer 2013). The friendship paradox was also found to have implications
for designing immunization strategies to combat epidemics spreading in social
contact networks. Cohen et al. showed that, with limited immunization resources
and incomplete knowledge of the network, it is more effective to immunize
randomly chosen acquaintances of randomly chosen people than the randomly
chosen people themselves (Cohen et al. 2003). The enhanced effectiveness of such
a strategy is again due to the greater connectivity of those acquaintances, and their
commensurately greater risk of contracting and transmitting the epidemic. This idea
was later turned on its head by N.A. Christakis and J.H. Fowler, who pointed out
that those acquaintances make better sensors for recognizing the early stages of an
outbreak, since they tend to get infected sooner (Christakis and Fowler 2010).
In the past decade, the widespread usage of online social networks and the vast
quantities of data that usage generates have enabled the study of the friendship
paradox in new contexts and on unprecedented scales. For example, in 2011,
researchers at Facebook found that 92.7% of active Facebook users (where an
“active” user was defined to be someone who had visited Facebook in a particular
28-day window in the spring of 2011 and who had at least one Facebook friend)
had fewer friends than the mean friend count of their friends (Ugander et al.
2011). Similarly, researchers have studied the network of people following one
another on Twitter. The Twitter follow network is a directed network, where follow
relationships need not be reciprocated. Such a directed network allows for the
existence of four different types of degree-based friendship paradoxes:
• Most people have fewer followers than the mean follower count of the people
whom they follow.
• Most people have fewer followers than the mean follower count of their
followers.
• Most people follow fewer people than the mean number followed by the people
whom they follow.
• Most people follow fewer people than the mean number followed by their
followers.
All four of these paradoxes have been shown to occur empirically (Hodas et al. 2013).
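For a directed network stored as a list of follow edges, the first of these paradoxes (comparing one's own follower count to the mean follower count of one's followees) can be measured in a few lines; the other three variants differ only in which degree and which neighbor set are used. This sketch assumes a simple edge-list representation:

```python
from collections import defaultdict

def weak_paradox_fraction(edges):
    """Fraction of people with fewer followers than the mean follower
    count of the people they follow.

    edges: iterable of (follower, followee) pairs.
    """
    followers = defaultdict(set)
    followees = defaultdict(set)
    for a, b in edges:                 # a follows b
        followers[b].add(a)
        followees[a].add(b)
    in_paradox = total = 0
    for person, follows in followees.items():
        mean_nbr = sum(len(followers[f]) for f in follows) / len(follows)
        total += 1
        if len(followers[person]) < mean_nbr:
            in_paradox += 1
    return in_paradox / total if total else 0.0
```

On a star network where several accounts all follow one hub, every follower is in the paradox condition, so the function returns 1.0.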
Researchers have also extended the core friendship-paradox idea to quantities
other than friend count or degree, showing that, in various social networks, the
average neighbor of most people in the network scores higher according to some
other metric. Hodas et al. have shown, for instance, that for most people on Twitter,
the people whom they follow are more active on average and see more viral content
on average (Hodas et al. 2013). Very recently, Bollen et al. combined sentiment and
network analysis techniques to show that a "happiness" paradox holds in the mutual-follow network on Twitter: if we define a network which contains a link whenever a
pair of Twitter users follow one another, then for most people in this network, their
neighbors are on average happier than they are, at least according to the sentiment

9 Friendship Paradoxes on Quora


encoded in their tweets (Bollen et al. 2016). These types of phenomena have been
called “generalized friendship paradoxes” by Eom and Jo, who identified similar
phenomena in academic collaboration networks (Eom and Jo 2014).
Hodas, Kooti, and Lerman have also emphasized the point that, in many
social networks, even stronger versions of the friendship paradox and generalized
friendship paradoxes occur. For example, not only do most Twitter users have
fewer followers than the mean follower count of their followers, they also have
fewer followers than the median follower count of their followers. This stronger
phenomenon (which Hodas et al. refer to as the “strong paradox,” in contrast
to the "weak paradox" in terms of the mean) holds for all four of the degree-based paradoxes in directed networks and also for generalized paradoxes. As some
examples of the latter, Hodas et al. show that most people on Twitter are less active
and see less diverse content than most of the people whom they follow (Hodas et al.
2014). Similar observations have been made before: the 2011 Facebook study cited
above found that 83.6% of active Facebook users had fewer friends than the median
friend count of their friends, and indeed, Feld pointed out in this seminal 1991
paper that the strong paradox held empirically in real-world high school friendship
networks (Ugander et al. 2011; Feld 1991; Coleman 1961). Therefore, the important
contribution of Hodas et al. was not simply to observe that the strong paradox
holds in online social networks like Twitter. Instead, their key observation was that,
when distributions of quantities over people in a social network follow heavy-tailed
distributions (where the mean is higher than the median due to the presence of rare,
anomalously large values), the existence of the weak paradox is usually guaranteed
by the statistical properties of the distribution. The strong paradox, in contrast, is
not. Hodas et al. demonstrate this empirically by showing that the weak paradox
often survives random reshufflings of the network that destroy correlations between
degree and other quantities, but the strong paradox usually disappears. As such, the
existence of a strong paradox reveals something about behavioral correlations on
the actual network that the weak paradox does not (Hodas et al. 2014).
In the present work, we take these developments in the study of the friendship
paradox and examine their implications for Quora, an online knowledge repository
whose goal is “to share and grow the world’s knowledge.” Quora is structured in
a question-and-answer format: anyone with a Quora account can add a question
about any topic to the product. Quora product features are then designed to route
the question to people who have the knowledge required to answer it. This can
happen through a question being automatically recommended to an expert in their
“homepage feeds,” through the question asker or other participants on Quora
requesting an answer from a particular individual, or through various other mechanisms. Frequently, these mechanisms lead to questions on physics being answered
by physics professors, questions on music theory being answered by professional
musicians, questions on politics and international affairs being answered by policy
experts and journalists, etc. Processes also exist for merging duplicate questions
or questions that are phrased differently but are logically identical. The goal is for
there ultimately to be a single page on Quora for each logically distinct question,
ideally with one or more high-quality, authoritative answers. Each such page can



then serve as a canonical resource for anyone who is interested in that knowledge.
People can discover answers that may interest them by reading their homepage
feeds (where content is automatically recommended to them), by searching for
something they specifically want to learn about, or through several other product
features. People can refine Quora’s recommendations for them by following topics
and questions that interest them, following people whose content they value reading,
or simply by providing feedback on answers through “upvoting” or “downvoting.”
People “upvote” an answer when they consider it to be factually correct, agree
with the opinions expressed in the answer, or otherwise find it to be compelling
reading; people “downvote” answers that they deem to be factually wrong or low
quality. Several of the core interactions on Quora (including following, upvoting,
and downvoting) generate rich relationships between people, topics, and questions.
Many of these relationships can be represented as networks, and the existence of
various strong paradoxes in these networks may reveal important aspects of the
Quora ecosystem.

9.1.1 Organization of the Chapter and Summary of Results
The rest of this chapter is devoted to exploring contexts in which variants of the
friendship paradox arise on Quora and to exploring the consequences of these
paradoxes for the Quora ecosystem. Before diving into the data however, we will
briefly review some of the statistics of the friendship paradox in Sect. 9.2, with an
emphasis on why strong paradoxes are special and can reveal important aspects of
how Quora works.
Then, in Sects. 9.3–9.5, we will explore three different sets of paradoxes on Quora:
• Section 9.3—Strong Paradoxes in the Quora Follow Network: We first study
the network of people following one another on Quora and show that the strong
versions of all four of the directed-network paradoxes occur.
• Section 9.4—A Strong Paradox in Downvoting: We next turn our attention to
a less traditional setting for the friendship paradox: the network induced by a
specific, negative interaction between people during a given time period. More
specifically, we will study a network where a directed link exists for each unique
“downvoter, downvotee pair” within a given time window; in other words, a link
exists from person A to person B if person A downvoted at least one of person B’s
answers during the time window. We will show that, for most “sufficiently active”
downvotees (where “sufficiently active” means that they have written a few
answers during the time window), most of their “sufficiently active” downvoters
get downvoted more than they do.
• Section 9.5—A Strong Paradox in Upvoting: Finally, we will show that, for
answerers on Quora who have small numbers of followers, the following property
holds: for most of their upvoted answers, most of the upvoters of those answers
have more followers than they do.



Each of these incarnations of the friendship paradox is worth studying for different
reasons. The paradoxes in the follow network, which we explore in Sect. 9.3,
are the “canonical” manifestations of the friendship paradox in directed networks.
Measuring and interpreting these paradoxes is a natural precursor to studying
paradoxes in less familiar contexts in Sects. 9.4 and 9.5, and it also allows us the
opportunity to develop methodologies that are useful in those later analyses. The
paradox in downvoting, which we study in Sect. 9.4, is a variant of the friendship
paradox in a context that has been rarely explored: a network representing negative
interactions. Furthermore, because it occurs in such a context, the downvoting
paradox turns the usually demoralizing flavor of the friendship paradox (“your
friends are more popular than you are on average”) on its head: it may offer some
consolation to active writers that the people in their peer group who downvote them
typically receive just as much negative feedback themselves. Finally, the paradox
in upvoting, which we discuss in Sect. 9.5, has potential practical benefits for the
distribution of content that is written by people who have yet to amass a large
following. Their content may ultimately win more visibility because of this paradox.
After exploring each of these paradoxes and their implications in detail in
Sects. 9.3–9.5, we conclude in Sect. 9.6 by recapping our results and indicating
interesting avenues for future investigation.

9.2 A Brief Review of the Statistics of Friendship Paradoxes:
What are Strong Paradoxes, and Why Should We
Measure Them?
Our ultimate goal in this chapter is to understand what various versions of the strong
paradox can tell us about the Quora ecosystem. As a precursor to studying any
specific paradox, however, we will first explore the statistical origins of friendship-paradox phenomena in general. We will begin by reviewing traditional arguments
for the friendship paradox in undirected networks. In doing so, we will show how
recent work has put Feld’s original argumentation on stronger mathematical footing
and explained why weak and strong friendship paradoxes in degree are ubiquitous in
undirected social networks. We will also explore why weak generalized paradoxes
are often inevitable consequences of the distributions of quantities in a social
network. In contrast, we will argue that the following types of friendship-paradox
phenomena are not statistically inevitable:
• Strong degree-based paradoxes in directed networks.
• Strong generalized paradoxes in undirected or directed networks.
The existence of these types of paradoxes depends upon correlations between degree
and other quantities, both within individuals and across links in the social network.
Thus, these phenomena reveal the impact of these correlations on the network
structure, and that network structure can, in turn, reveal nontrivial aspects of the
functioning of the Quora ecosystem.



9.2.1 Feld’s Mathematical Argument
We now return to Feld’s 1991 paper (Feld 1991). In that paper, Feld presented a
mathematical argument for the existence of the friendship paradox in which he
compared the mean friend count over people in a social network to the mean
friend count over neighbors. Suppose a social network contains N nodes (each node
represents a person) and has degree distribution p(k). Here, p(k) is the fraction of
people in the network with degree k (i.e., with exactly k friends or neighbors; we
will use the terms degree, friend count, and neighbor count interchangeably when
discussing undirected networks). The mean degree in this network is just:

\[
\langle k \rangle = \sum_k k\, p(k) \tag{9.1}
\]

Meanwhile, in the distribution of degree over neighbors, each node with degree k
gets represented k times; hence, the degree distribution over neighbors is:

\[
p_{\mathrm{nbr}}(k) = \frac{k\, p(k)}{\sum_{k'} k'\, p(k')} \tag{9.2}
\]

and the mean degree of neighbors is:

\[
\langle k_{\mathrm{nbr}} \rangle = \frac{1}{\langle k \rangle} \sum_k k^2\, p(k) = \frac{\langle k^2 \rangle}{\langle k \rangle} \tag{9.3}
\]

In Eqs. (9.1)–(9.3) above, the sums over k and k' are over all possible degrees in the
network; if self-loops and multiple links between individuals are prohibited, then
these indices will range from 0 (representing isolated nodes) to N − 1 (representing
nodes that are connected to all others in an N-node network). From Eq. (9.3), we
find that:

\[
\langle k_{\mathrm{nbr}} \rangle - \langle k \rangle = \frac{\langle k^2 \rangle - \langle k \rangle^2}{\langle k \rangle} = \frac{\mathrm{Var}(k)}{\langle k \rangle} \tag{9.4}
\]

This demonstrates that \(\langle k_{\mathrm{nbr}} \rangle > \langle k \rangle\) whenever there is non-zero variance (i.e.,
Var(k) > 0) of k in the degree distribution p(k) (Feld 1991).
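The identity ⟨k_nbr⟩ − ⟨k⟩ = Var(k)/⟨k⟩ can be checked numerically on any small network. The sketch below computes the two averages Feld compares: the mean degree over people, and the mean degree over link endpoints (each node weighted by its degree):

```python
def feld_gap(adj):
    """Return (mean degree over people, mean degree over link endpoints).

    adj: dict mapping node -> set of neighbors, undirected.
    The second average weights each node by its degree, so it equals
    <k^2>/<k>, the mean degree of a randomly chosen neighbor.
    """
    degs = {v: len(nbrs) for v, nbrs in adj.items()}
    mean_k = sum(degs.values()) / len(degs)
    # every node of degree k appears k times among link endpoints
    endpoint_degs = [degs[v] for v in adj for _ in range(degs[v])]
    mean_k_nbr = sum(endpoint_degs) / len(endpoint_degs)
    return mean_k, mean_k_nbr
```

On the three-node path a–b–c, the means are 4/3 and 3/2, and their gap is exactly Var(k)/⟨k⟩ = 1/6.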

9.2.2 What Does Feld’s Argument Imply?
The argument above is clearly a mathematically valid statement about the two
means \(\langle k \rangle\) and \(\langle k_{\mathrm{nbr}} \rangle\), but it is important to reflect carefully on what it implies. It
is, in fact, easy to construct examples of networks where the degree distribution has



Fig. 9.1 Example network
where no one has fewer
friends than any of their
friends. Inspired by similar
examples given by Lattanzi
and Singer (2015)

Fig. 9.2 Example network
where most people have more
friends than the mean number
of friends of their friends.
Reproduction of an example
given by Feld (1991)

non-zero variance, but where no one has a lower degree than the average degree of
his or her neighbors. Figure 9.1 provides an example, inspired by similar examples
given by Lattanzi and Singer (Lattanzi and Singer 2015). Here, as is mathematically
guaranteed, \(\langle k_{\mathrm{nbr}} \rangle > \langle k \rangle\) for the network
in Fig. 9.1, but there is no one in the network to whom it would be possible to
accurately say, quoting Feld, that "your friends have more friends than you do."
Figure 9.2 provides another example, actually provided by Feld himself in his 1991
paper, where most people have more friends than the mean friend count of their
friends (Feld 1991).
These examples highlight a tension between Feld’s argument and the intuitive
conception of what the friendship paradox means. This tension exists because Feld’s
argument actually compares the following two calculations:
• Iterate over each person in the network, write down his or her degree, and take
an average over the list.
• Iterate over each friendship pair in the network, write down the degree of each of
the people involved, and take an average over the list.
In the process of comparing these two quantities, we actually never directly compare
the degree of a person to the degrees of his or her neighbors, and the fact that
\(\langle k_{\mathrm{nbr}} \rangle > \langle k \rangle\) does not, by itself, put any guarantees on these ratios. Dating back
to Feld however, people have empirically measured these ratios and found that most
people in many real-world social networks are in the “weak-paradox condition” in
terms of degree: in other words, their degree is lower than the mean degree of their
neighbors (Feld 1991; Ugander et al. 2011; Hodas et al. 2013). Thus, there is an
explanatory gap between Feld’s argument and this paradox.


S. Iyer

9.2.3 Friendship Paradox Under Random Wiring
One simple candidate explanation for closing this gap is to consider what happens
in a network with degree distribution p(k) and purely random wiring between nodes.
Such a network can be constructed through the “configuration model” approach of
Newman: assign the nodes in the network a number of “stubs” drawn randomly
from the degree distribution p(k) and then randomly match the stubs (Newman
2003). Then, if we repeatedly randomly sample a node in this network and then
randomly sample a neighbor of that node, the degree distribution over the neighbors
will obey the distribution (9.2). Again assuming non-zero variance in the original
degree distribution p(k), the neighbor degree distribution p_nbr(k) is clearly shifted
towards higher values of k, making both the weak and strong paradoxes in terms of
degree seem very plausible.
The weakness in this line of argumentation is that real-world social networks
are not randomly wired. Instead, empirical networks exhibit across-link correlations
such as assortativity, the tendency of people of similar degree to link to one another
more often than would be expected by random chance. Assortativity generically
tends to reduce the gap in degree between a person and his or her neighbors, and
thus, generically tends to weaken the friendship paradox.
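The random-wiring baseline is easy to simulate. The sketch below (pure Python, with a made-up degree sequence; not data from the chapter) builds a configuration-model network by randomly matching stubs, then compares the mean degree of a uniformly random node with the mean degree of a randomly sampled neighbor of a random node:

```python
import random
from statistics import mean

def configuration_model(degrees, rng):
    """Build an undirected multigraph by randomly matching 'stubs',
    following the configuration-model construction described above."""
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    adjacency = {node: [] for node in range(len(degrees))}
    for i in range(0, len(stubs) - 1, 2):
        u, v = stubs[i], stubs[i + 1]
        adjacency[u].append(v)
        adjacency[v].append(u)
    return adjacency

rng = random.Random(0)
# Hypothetical degree sequence: many low-degree nodes, a few hubs.
degrees = [1] * 600 + [2] * 300 + [10] * 90 + [50] * 10

adj = configuration_model(degrees, rng)
mean_degree = mean(len(nbrs) for nbrs in adj.values())

# Repeatedly sample a random node, then a random neighbor of that node.
samples = []
nodes_with_neighbors = [n for n in adj if adj[n]]
for _ in range(20000):
    node = rng.choice(nodes_with_neighbors)
    neighbor = rng.choice(adj[node])
    samples.append(len(adj[neighbor]))

print(mean_degree, mean(samples))
```

For this degree sequence ⟨k⟩ = 2.6, while the neighbor average lands near the size-biased value Σk²/Σk ≈ 13.8, illustrating the shift toward higher degrees described above.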

9.2.4 Beyond Random-Wiring Assumptions: Why Weak
and Strong Friendship Paradoxes are Ubiquitous
in Undirected Networks
Very recently, Cao and Ross have proved a relationship between the degree distribution of X, a randomly selected individual from a network, and the degree distribution
of Z, a randomly selected neighbor of X, without making any assumptions about
random wiring or absence of assortativity. Specifically, these authors have proven
that the degree of Z, which we denote by k_Z, is “stochastically larger” than the degree
of X, which we denote by k_X. This means that, for any degree k*, the following
property holds (Cao and Ross 2016):
P(k_Z ≥ k*) ≥ P(k_X ≥ k*)     (9.5)


If we let k_med refer to the median of the overall degree distribution p(k) and set
k* = k_med in Eq. (9.5), then we can see that P(k_Z ≥ k_med) ≥ 1/2. Thus, the median of
the neighbor degree distribution is at least as large as the median of the overall degree
distribution.
Returning to Fig. 9.2, Feld explained away this counterexample to the friendship
paradox by arguing that it represents a very fine-tuned situation and that there is no
reason to expect these improbable situations to be realized in empirical networks.
Cao and Ross’ work puts this type of argumentation on sounder mathematical

9 Friendship Paradoxes on Quora


footing. In large networks, perhaps there are regions where the local topology of
the network happens to align such that some nodes do not experience the friendship
paradox; overall though, we can usually expect the weak and strong degree-based
paradoxes to prevail in undirected networks.
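The Cao–Ross inequality can be checked exhaustively on a small example. The sketch below uses a hypothetical five-node undirected network and exact rational arithmetic: k_X is the degree of a uniformly random node, k_Z the degree of a uniformly random neighbor of that node, and the complementary cumulative distributions are compared at every k:

```python
from fractions import Fraction

# A small hand-made undirected network (hypothetical), as adjacency sets.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {3}}

n = len(adj)
deg = {v: len(nbrs) for v, nbrs in adj.items()}

# P(k_X = k): X is a uniformly random node.
px = {}
for v in adj:
    px[deg[v]] = px.get(deg[v], Fraction(0)) + Fraction(1, n)

# P(k_Z = k): pick X uniformly, then Z uniformly among X's neighbors.
pz = {}
for v in adj:
    for w in adj[v]:
        pz[deg[w]] = pz.get(deg[w], Fraction(0)) + Fraction(1, n) / deg[v]

def ccdf(p, k):
    """P(degree >= k) under the distribution p."""
    return sum(prob for kk, prob in p.items() if kk >= k)

dominates = all(ccdf(pz, k) >= ccdf(px, k)
                for k in range(0, max(deg.values()) + 2))
print(dominates)  # True
```

No random-wiring assumption enters anywhere; the check runs over the actual topology, which is exactly the point of the Cao–Ross result.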

9.2.5 Weak Generalized Paradoxes are Ubiquitous Too
Hodas et al. have advanced an argument that we should also expect the weak
generalized paradoxes to occur very generically in undirected and directed social
networks. The reason for this is that distributions of quantities in social networks
(e.g., content contribution or activity in an online social network) often follow
heavy-tailed distributions. These are distributions where the mean is larger than the
median due to the presence of rare but anomalously large values that skew the mean.
If a person P in the social network has n neighbors, that means the mean of some
quantity y over those n neighbors has n opportunities to sample an extreme value
in the heavy tail. Meanwhile, P only has one opportunity to sample an extreme
value. Therefore, it is statistically likely that the mean of y over neighbors of P will
be greater than the value of y for P. As such, we should expect a large number of
people in the social network to be in the weak generalized paradox condition, just
because of the structure of the distribution of y (Hodas et al. 2014).
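This statistical argument can be illustrated with a simulation in which topology plays no role at all. In the sketch below (synthetic data), each person sits on a ring with four neighbors, and y is drawn i.i.d. from a heavy-tailed Pareto distribution; a clear majority still falls below the mean y of their neighbors:

```python
import random

rng = random.Random(42)
N = 5000
# Ring network: each person is friends with the two nearest people on
# each side (a stand-in topology; the argument only needs each person
# to have several neighbors).
neighbors = {i: [(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N]
             for i in range(N)}

# Heavy-tailed quantity y (e.g., content contribution), drawn i.i.d.
# from a Pareto distribution, whose mean is skewed above its median.
y = [rng.paretovariate(1.3) for _ in range(N)]

# Count people whose own y is below the mean y of their neighbors
# (the weak generalized paradox condition).
in_weak_paradox = sum(
    1 for i in range(N)
    if y[i] < sum(y[j] for j in neighbors[i]) / len(neighbors[i])
)
print(in_weak_paradox / N)  # well above 1/2
```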

9.2.6 Strong Degree-Based Paradoxes in Directed Networks
and Strong Generalized Paradoxes are Nontrivial
In contrast to the case with weak generalized paradoxes, Hodas et al. show that
strong paradoxes are not guaranteed by the form of heavy-tailed distributions.
These authors make their case empirically by studying real data from two online
networking products, Twitter and Digg. First, they show that both weak and
strong degree-based paradoxes occur on these products and that weak and strong
generalized paradoxes do too. As an example of the degree-based paradoxes, for
the majority of Twitter users, most of their followers have more followers than they
do; as an example of the generalized paradoxes, for the majority of Twitter users,
most of the people whom they follow tweet more frequently than they do. Next,
Hodas et al. randomly permute the variable under consideration over the network.
In the case of the paradox in follower count, this destroys the correlation between
indegree (follower count) and outdegree (followee count). In the case of the paradox
in tweet volume, this destroys the correlations between people’s degrees and their
content contribution. In both cases, the weak paradox survives the scrambling, but
the strong paradox disappears: it is no longer the case, for example, that for most
people in the scrambled network, most of their followees tweet more frequently than


S. Iyer

they do. This shows that the strong paradoxes depended upon correlations between
degree and other quantities in the network (Hodas et al. 2014).
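A permutation test in the spirit of Hodas et al. can be sketched on synthetic data (this illustrates the method, not their Twitter or Digg analysis). Followees are sampled in proportion to a heavy-tailed activity metric y, so degree and activity are positively correlated by construction; shuffling y over the nodes then removes exactly that correlation:

```python
import random
from statistics import mean, median

rng = random.Random(7)
N = 2000
# Hypothetical activity metric (e.g., posting volume), heavy-tailed.
y = [rng.paretovariate(1.5) for _ in range(N)]

# Each person follows 10 others, preferentially attaching to active
# people, so that degree and activity are positively correlated.
followees = {i: rng.choices(range(N), weights=y, k=10) for i in range(N)}

def paradox_fractions(values):
    """Fractions of people whose followees' mean / median activity
    exceeds their own (weak / strong generalized paradox)."""
    weak = strong = 0
    for i, fws in followees.items():
        vals = [values[j] for j in fws]
        weak += values[i] < mean(vals)
        strong += values[i] < median(vals)
    return weak / N, strong / N

weak_real, strong_real = paradox_fractions(y)

# Scramble: randomly permute the activity values over the nodes,
# destroying degree-activity correlations but keeping the topology.
y_shuffled = y[:]
rng.shuffle(y_shuffled)
weak_perm, strong_perm = paradox_fractions(y_shuffled)

print(weak_real, strong_real)
print(weak_perm, strong_perm)
```

Before the shuffle, both fractions sit well above one half; after it, the weak paradox survives (the mean still has ten chances to catch an extreme value) while the strong fraction drops to roughly one half.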
Another way to see this is to return to our random-wiring assumption and imagine
what would happen if we looked for a paradox in a quantity y by sampling a node in
our network, sampling a follower of that node, and then comparing the values of y
across that link. Under the random-wiring assumption, we would sample followers
in proportion to their outdegree. If the joint distribution of outdegree and y in the
network is p.kout ; y/, we would expect the joint distribution of outdegree and y for
the follower to obey:
p_fwr(k_out, y) = k_out p(k_out, y) / ⟨k_out⟩


where ⟨k_out⟩ is the mean outdegree in the whole network. If y is a metric that is
positively correlated with the number of people an individual follows, then we
can expect the marginal distribution of y over followers to shift to higher values
compared to the overall distribution of y. In these circumstances, we might expect
the typical follower to have a higher value of y, at least within the random-wiring
assumption. On the other hand, y could be anti-correlated with outdegree, and this
could lead to an “anti-paradox” instead. These possibilities emerge even before
we relax the random-wiring assumption and allow correlations across links. These
correlations introduce effects like assortativity, which can compete with the within-node correlations between degree and other metrics, enriching and complicating the
picture. Thus, the strong paradoxes are not inevitable consequences of the distributions of individual degrees or quantities in the network, and these phenomena can
reveal nontrivial features of how the network functions.

9.3 Strong Paradoxes in the Quora Follow Network
9.3.1 Definition of the Network and Core Questions
We now begin our actual analysis of friendship paradoxes on Quora by analyzing
the network of people following one another. On Quora, one person (the “follower”)
follows another person (the “followee”) to indicate that he or she is interested in
that person’s content. These follow relationships then serve as inputs to Quora’s
recommendation systems, which are designed to surface highly relevant content to
people in their homepage feed and digests. The network generated by these follow
relationships is a classic example of a directed network, and as we noted in the
introduction, a directed network allows for four different degree-based friendship
paradoxes. We will now confirm that all four of these paradoxes occur on Quora.
In this analysis, we consider the follow relationships between all people who
visited Quora at least once in the 4 weeks preceding June 1, 2016 and who had at
least one follower or followee who made a visit during that four-week period. For



each of 100,000 randomly chosen people in this group who had at least one follower
and one followee, we ask the following questions:

1. What is the average follower count (i.e., average indegree) of their followers?
2. What is the average followee count (i.e., average outdegree) of their followers?
3. What is the average follower count (i.e., average indegree) of their followees?
4. What is the average followee count (i.e., average outdegree) of their followees?

In all of these cases, the “average” can be either a mean or a median over neighbors,
and we compute both to see how they differ, but our claims about the existence of
strong paradoxes are always on the basis of medians. Note that the followers and
followees that we include in each average must also have visited Quora in the 4
weeks preceding June 1, 2016, but need not have both incoming and outgoing links.
For the 100,000 randomly chosen people themselves, requiring them to have both
followers and followees allows us to actually pose the question of whether they
experience the paradox with respect to both types of neighbors.

9.3.2 All Four Degree-Based Paradoxes Occur in the Quora
Follow Network
In Table 9.1, we report median values of degree over the 100,000 randomly sampled
users as well as median values of the averages over their neighbors. The “mean
follower” row in Table 9.1 reports the results of the following calculation:

Table 9.1 This table reports statistics for the degrees of 100,000 randomly sampled people in the
follow network

Typical values of degree    Follower count (indegree)    Followee count (outdegree)
Person                      [6.0, 6.0, 6.0]              [9.0, 9.0, 9.0]
Mean follower               [35.0, 35.5, 36.0]           [72.7, 73.5, 74.2]
Median follower             [17.0, 17.0, 17.5]           [42.0, 42.0, 42.5]
Mean followee               [104.7, 106.3, 108.0]        [63.8, 64.4, 65.0]
Median followee             [51.0, 52.0, 52.0]           [32.0, 33.0, 33.0]

The “person” row shows the median values of indegree (follower count) and outdegree (followee
count) over these randomly-sampled people. Meanwhile, the “mean follower” and “mean followee”
rows show the “typical” (i.e., median) value of the mean degree of the neighbors of the
randomly sampled people. Finally, the “median follower” and “median followee” rows show the
“typical” (i.e., median) value of the median degree of the neighbors of the 100,000 randomly
sampled people. Since we subsample the full population in these estimates, we also report a 95%
confidence interval around each of our estimates, computed using the “distribution-free” method
(Hollander et al. 1999). The estimates themselves are in bold
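The “distribution-free” interval of Hollander et al. is built from order statistics: the endpoints are the sample values at ranks n/2 ± z√n/2. A minimal sketch, using the large-sample normal approximation and synthetic Pareto data standing in for the degree statistics:

```python
import math
import random
from statistics import NormalDist

def median_ci(sample, confidence=0.95):
    """Distribution-free confidence interval for a median, built from
    order statistics; the ranks use the large-sample normal
    approximation to Binomial(n, 1/2)."""
    xs = sorted(sample)
    n = len(xs)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * math.sqrt(n) / 2
    lo = max(int(math.floor(n / 2 - half_width)), 0)       # lower rank
    hi = min(int(math.ceil(n / 2 + half_width)) + 1, n - 1)  # upper rank
    return xs[lo], xs[hi]

rng = random.Random(1)
data = [rng.paretovariate(1.5) for _ in range(100000)]
low, high = median_ci(data)
print(low, high)  # close to the true median 2**(1/1.5) ≈ 1.587
```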



1. For each of the 100,000 randomly sampled users, compute the mean degree
(indegree or outdegree) over his or her followers.
2. Compute the median of those 100,000 means. This gives the “typical” value of
the mean degree.
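Over an explicit edge list, the two steps above amount to the following sketch (the edges are a toy example, not Quora data):

```python
from statistics import mean, median

# Hypothetical follow edges: (follower, followee).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a"),
         ("d", "c"), ("d", "b"), ("b", "a")]

followers = {}   # person -> set of unique followers
for src, dst in edges:
    followers.setdefault(dst, set()).add(src)

indegree = {p: len(s) for p, s in followers.items()}

# Step 1: for each person with followers, the mean indegree of
# his or her followers (people with no followers have indegree 0).
mean_follower_indegree = {
    p: mean(indegree.get(f, 0) for f in flist)
    for p, flist in followers.items()
}

# Step 2: the median of those means, i.e., the "typical" value of
# the mean follower indegree (the "mean follower" row of Table 9.1).
typical = median(mean_follower_indegree.values())
print(typical)
```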
The data in Table 9.1 implies that all four directed-network paradoxes occur,
because the “typical” values of the averages over neighbors are greater than the typical values for the randomly-chosen people. Table 9.1 is not completely conclusive
though, because we have computed statistics over the randomly-chosen people and
their neighbors independently, ignoring correlations in degree across links. We can
remedy this by computing statistics for differences in degree across the links. We
do this in Table 9.2, which shows directly that all four variants of directed-network
friendship paradoxes occur. For example, a typical followee of a typical individual
gets followed by 28 more people and follows 9.5 more people than that individual
(note that fractional values like 9.5 are possible, because if someone has an even
number of neighbors, it is possible for the computed median to be the midpoint
between two integers).
Table 9.2 shows that the typical gap between the degree of a randomly selected
person and his or her neighbors is greater when computed in terms of the mean
over neighbors (i.e., when measuring a weak paradox). As we argued in Sect. 9.2,
this is because the mean is more affected by extreme values: when we take a mean
over n > 1 neighbors, we are giving the mean more opportunities to be inflated by
someone with an exceptionally large degree. Consequently, the statement that most
people have lower degree than the mean over their neighbors is generally weaker
than the statement that most people have lower degree than the median over their
neighbors (Hodas et al. 2014). Table 9.2 shows that all four paradoxes survive when
we take the median over neighbors and thus measure the “strong” paradox.

Table 9.2 This table reports statistics for the differences in degree between 100,000 randomly
sampled people in the follow network and their neighbors

Typical values of differences in degree    Follower count (indegree)    Followee count (outdegree)
Mean follower − person                     [16.0, 16.4, 16.7]           [49.4, 50.0, 50.7]
Median follower − person                   [2.0, 2.5, 2.5]              [20.0, 20.0, 20.5]
Mean followee − person                     [75.0, 76.2, 77.3]           [35.3, 35.8, 36.2]
Median followee − person                   [27.5, 28.0, 28.0]           [9.0, 9.5, 10.0]

The “mean follower—person” and “mean followee—person” rows show the typical (i.e., median)
values of the difference between the mean degree of the neighbors of P and the degree of P for
each of the randomly sampled people P. Meanwhile, the “median follower—person” and “median
followee—person” rows show the typical (i.e., median) values of the difference between the median
degree of the neighbors of P and the degree of P for each of the randomly sampled people P.
Compared to Table 9.1, averaging over differences better captures correlations in degree across
links in the network. Since we subsample the full population in these estimates, we also report
a 95% confidence interval around each of our estimates, computed using the “distribution-free”
method (Hollander et al. 1999). The estimates themselves are in bold



9.3.3 Anatomy of a Strong Degree-Based Paradox in Directed Networks
Before proceeding to look at more exotic paradoxes, we pause to dissect the impact
that within-node and across-link correlations have on the paradoxes in the follow
network. These paradoxes are ultimately consequences of positive correlations
between indegree and outdegree: people who are followed by more people also tend
to follow more people. We can verify that these correlations exist by tracking how
the distribution of indegrees changes as we condition on people having larger and
larger outdegrees. We do this in Fig. 9.3.
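The bucketing behind a plot like Fig. 9.3 can be sketched on synthetic data. Here indegree and outdegree are generated from a shared latent “engagement” level, so they are positively correlated by construction, and the indegree percentiles rise across outdegree buckets:

```python
import random
from statistics import quantiles

rng = random.Random(3)
N = 20000
# Hypothetical users: indegree and outdegree both grow with a common
# latent engagement level, inducing positive within-node correlation.
users = []
for _ in range(N):
    engagement = rng.paretovariate(1.5)
    kin = sum(rng.random() < 0.5 for _ in range(int(3 * engagement)))
    kout = sum(rng.random() < 0.5 for _ in range(int(3 * engagement)))
    users.append((kin, kout))

# Bucket users by outdegree; report indegree percentiles per bucket.
buckets = [(0, 2), (2, 8), (8, 32), (32, 10**9)]
rows = []
for lo, hi in buckets:
    kins = sorted(kin for kin, kout in users if lo <= kout < hi)
    if len(kins) >= 4:
        p25, p50, p75 = quantiles(kins, n=4)
        rows.append((lo, hi, p25, p50, p75))
        print((lo, hi), p25, p50, p75)
```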
To see how across-link correlations impact the paradoxes, we can calculate how
strong the paradox would be under a random-wiring assumption, which ignores
these correlations. Then, we can compare against reality. To demonstrate this, we
focus upon the following paradox: most people have fewer followers than most of
their followers. The follow network can be characterized by a joint distribution of
indegrees and outdegrees p(k_in, k_out), which gives the probability that a person has
k_in followers and follows k_out people. The joint distribution encodes the within-node
correlations between indegree and outdegree that we see in Fig. 9.3. Suppose that
the wiring between nodes is completely random. Now, imagine repeatedly sampling
a random person in the network and then sampling a random follower of that

Fig. 9.3 This plot shows percentiles of the overall indegree distribution in the follow network
vs. ranges of outdegree. We show the 25th, 50th, and 75th percentiles of the distribution. As we
consider people who follow more and more people, the distribution of follower counts shifts to
higher and higher values. This reveals strong positive correlations between indegree and outdegree
in the follow network



person. Based on the argumentation from Sect. 9.2, we would expect that the joint
distribution of indegrees and outdegrees for that person would look like:
p_fwr(k_in, k_out) = k_out p(k_in, k_out) / ⟨k_out⟩     (9.7)


Using our empirical distribution p(k_in, k_out) and Eq. (9.7), we can compute
the expected distribution p_fwr(k_in, k_out) under the random-wiring assumption. In
practice, we actually calculate the expected marginal distribution over followers of
just the variable k_in and then calculate the complementary cumulative distribution
of this variable:
p_fwr(k_in ≥ k) = Σ_{k′_in ≥ k} Σ_{k′_out} k′_out p(k′_in, k′_out) / ⟨k_out⟩     (9.8)



In Fig. 9.4, we plot four complementary cumulative distributions of k_in:

Fig. 9.4 This plot shows four distributions of indegree. We plot complementary cumulative
marginal distributions, which show probabilities that the indegree is at least the value on the
x-axis. In blue, we show the real distribution of follower count (indegree) over 100,000 sampled
people in the follow network who had at least one follower or one followee. In green, we show the
distribution over 100,000 sampled people who had at least one follower and one followee. In red,
we show the real distribution of indegree over followers that we find if we repeatedly randomly
sample an individual with at least one follower and one followee and then randomly sample one
of that person’s followers. In purple, we show the inferred distribution of indegree over followers
that we would expect if we apply the random-wiring assumption in Eq. (9.8) to our empirical data



1. In blue, we plot the distribution over 100,000 randomly sampled people from the
network (i.e., people with at least one follower or one followee).
2. In green, we plot the distribution over 100,000 randomly sampled people from
the network who had at least one follower and one followee.
3. In red, we plot the distribution that we find if we adopt a two-step sampling
procedure where we repeatedly randomly sample someone with at least one
follower and one followee, randomly sample a follower of that person, and
measure that follower’s indegree.
4. In purple, we measure the distribution implied by the random-wiring assumption
from Eqs. (9.7) and (9.8).
The two-step sampling distribution is shifted outward from distribution 1 (i.e., the
overall distribution). For all but the lowest indegrees, the two-step sampling
distribution is also shifted outward from distribution 2, with the behavior at low
indegrees arising from the fact that we have required all people included in
distribution 2 to have at least one follower. The fact that the median of the two-step sampling distribution is shifted outward from the median of distribution 2 is
consistent with the observation of the paradox in Table 9.2. However, note that
the two-step sampling distribution is not shifted out as far as distribution 4, the
distribution computed via the random-wiring assumption. This indicates that the
paradox is indeed weakened, but not fully destroyed, by correlations in degree across links.
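Computationally, Eq. (9.8) is a straightforward reweighting of the empirical joint distribution. The sketch below uses a tiny made-up sample of (k_in, k_out) pairs and compares the overall complementary cumulative distribution of indegree with the random-wiring prediction for followers:

```python
from collections import Counter

# Hypothetical empirical sample of (k_in, k_out) pairs.
sample = [(1, 2), (1, 0), (2, 3), (5, 6), (10, 12), (1, 1),
          (3, 2), (0, 1), (2, 2), (40, 35)]

joint = Counter(sample)
total = sum(joint.values())
mean_kout = sum(kout * c for (_, kout), c in joint.items()) / total

def p_follower_indeg_at_least(k):
    """P_fwr(k_in >= k) under the random-wiring assumption of
    Eq. (9.8): reweight the joint distribution by k_out / <k_out>."""
    return sum(
        (kout / mean_kout) * (c / total)
        for (kin, kout), c in joint.items()
        if kin >= k
    )

def p_overall_indeg_at_least(k):
    """The overall complementary cumulative distribution of k_in."""
    return sum(c / total for (kin, _), c in joint.items() if kin >= k)

for k in (1, 2, 5, 10):
    print(k, p_overall_indeg_at_least(k), p_follower_indeg_at_least(k))
```

Because indegree and outdegree are positively correlated in this toy sample, the reweighted distribution is shifted outward at every threshold, mirroring the purple curve in Fig. 9.4.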

9.3.4 Summary and Implications
In this section, we have established that all four strong degree-based friendship
paradoxes occur in the network of people following one another on Quora. We have
previously argued that strong friendship paradoxes rely on correlations between
quantities in the network. In this case, the relevant correlations are between
indegree (i.e., follower count) and outdegree (i.e., followee count). These within-node correlations outcompete across-link correlations (i.e., assortativity) to result in
strong friendship paradoxes.
It is worth noting that this need not have worked out this way. It is possible
to imagine a product where indegree and outdegree are anti-correlated. Suppose a
hypothetical product contains two very different types of users: there are consumers
who follow many producers but do not receive many followers themselves, and there
are producers who have many followers but do not follow many people themselves.
In a situation like this, it is possible to devise scenarios where most people have more
followers than most of their followers. Maybe a scenario similar to this is actually
realized in online marketplaces, where buyers purchase from various merchants but
do not sell to many people themselves and where merchants sell their goods to many
buyers without patronizing many other merchants.



Nevertheless, in the Quora context, it is perhaps not very surprising that indegree
and outdegree are strongly correlated, as seen in Fig. 9.3. People who attract many
followers are typically active writers, who are likely to be sufficiently engaged with
Quora that they follow many other writers too. This, in turn, makes the existence
of strong paradoxes in the follow network less surprising, but it is still important to
examine this set of “canonical” friendship paradoxes before moving on to more
exotic examples. Moreover, the technology that we developed in this section to
probe the origins of these standard paradoxes will be very useful in dissecting the
less familiar paradoxes to come.

9.4 A Strong Paradox in Downvoting
9.4.1 What are Upvotes and Downvotes?
Although the network of people following one another on Quora provides valuable
input into the product’s recommendation systems, in practice, people end up seeing
content originating from outside their follow network as well. This can be because
they follow topics or directly follow questions, or because they access content
through non-social means like search. In other words, the network of people
following one another is not synonymous with the actual network of interactions
on Quora. In this section, we show that friendship-paradox phenomena also exist
in “induced networks” of real interactions on the product. We focus on a specific
interaction, the downvote, for which we identify the special variant of the friendship
paradox that we referred to in the Introduction as the “downvoting paradox.”
Before proceeding, we should clarify what a downvote is. On any Quora answer,
any person who has a Quora account has the opportunity to provide feedback by
either “upvoting” or “downvoting” the answer. To understand what a “downvote”
represents in the system, it is helpful to first understand what an “upvote” is. An
upvote typically signals that the reader identifies the answer as factually correct,
agrees with the opinions expressed in the answer, or otherwise finds the answer to
be compelling reading. Upvotes are used as one of a large number of signals that
decide where the answer gets ranked, relative to other answers to the same question,
on the Quora page for that question. Upvotes are also a signal that the reader values
the type of content represented by the answer, and therefore, can serve as one of
many features for Quora’s recommendation systems. Finally, upvotes serve a role in
social propagation of content: if someone upvotes an answer (without using Quora’s
“anonymity” feature), that person’s followers are more likely to see that piece of
content in their homepage feeds or digests. We will explore this role of the upvote
in much greater detail in Sect. 9.5.
In many ways, the “downvote” is the negative action that complements the
“upvote.” People cast downvotes on answers to indicate that they believe the answer
to be factually wrong, that they find the answer to be low quality, etc. Downvotes are



used as a negative signal in ranking answers on a question page, and they also signal
that the reader is not interested in seeing further content of this type. Meanwhile,
in contrast to the upvote, they do not serve a social distribution function: that is, a
follower of someone who downvotes an answer is, naturally, not more likely to see
that answer in homepage feeds or digests.
There is another way that downvotes differ fundamentally from upvotes. In cases
where upvoters do not use Quora’s “anonymity” feature, writers can directly see
who has upvoted their answers, but the identity of their downvoters is hidden from
them. This is true even if the downvoter has not elected to “go anonymous” on the
piece of content that he or she is downvoting. This is one of the features that makes
downvoting a compelling setting to look for friendship paradoxes: in the ordinary
settings in which friendship paradoxes are explored, the public nature of the social
ties encoded in the network has implications for the growth of the network. For
example, on networks such as Twitter or Quora, information about who is following
whom is public, increasing the odds that any particular follow relationship will be
reciprocated. The inherently hidden nature of downvoting precludes these types
of dynamics, so it is interesting to explore the ramifications of this fundamental
difference for friendship paradoxes.
Another reason that downvoting is a compelling setting for exploring this
phenomenon is simply that downvoting represents a negative interaction between
two individuals. Networks representing negative interactions show several important
structural differences with networks representing positive interactions (Harrigan and
Yap 2017). Friendship paradoxes, as the term “friendship” itself suggests, have
relatively rarely been explored in these types of contexts.

9.4.2 The Downvoting Network and the Core Questions
We are now prepared to define the downvoting network that we study in this section.
Our network represents all non-anonymous downvotes cast (and not subsequently
removed) on non-anonymous answers written (and not subsequently deleted) in the
four-week period preceding June 1, 2016. Note that the conclusions that we reach
are not peculiar to this particular four-week window, and we have checked that they
hold in other four-week windows as well; we will see one such example of another
time window later in this section.
In our network, we draw a directed link for every unique “downvoter, downvotee”
pair within the four-week window. In any downvoting interaction, the “downvotee” is the person who wrote the answer that received the downvote, and the
“downvoter” is the person who cast that downvote. A directed link exists between
two nodes in our network if the downvoter (who is represented by the origin node)
downvoted even a single non-anonymous answer written by the downvotee (who
is represented by the target node) within the four-week window. In other words,
just through the network structure, we cannot tell if a given link represents one
or multiple downvotes from a particular downvoter to a particular downvotee. We
present a cartoon version of this network in Fig. 9.5.



Fig. 9.5 A cartoon illustration of the downvoting network, representing the downvotes within a
four-week period on Quora. A directed link exists between two nodes if the person represented
by the origin node (the “downvoter”) cast at least one downvote on any answer by the person
represented by the target node (the downvotee) during the four-week period. In this diagram, the
nodes in green represent all the unique downvoters of a particular downvotee, who is represented
by the node in red

Here are the core questions that we pose about this network:
1. The downvotee → downvoter question: For most downvotees, did most of their
downvoters receive more or fewer downvotes than they did?
2. The downvoter → downvotee question: For most downvoters, did most of their
downvotees receive more or fewer downvotes than they did?
Note that these questions amount to asking whether two versions of the strong
paradox exist in the downvoting network. This is because we ask if most of the
neighbors of most downvotees or downvoters score more highly according to a
metric (namely, the total number of downvotes received).
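Both questions reduce to simple aggregations over a downvote edge list. The sketch below answers the first (downvotee → downvoter) question on a toy list of individual downvotes; the names and counts are hypothetical:

```python
from statistics import median

# Hypothetical individual downvotes: (downvoter, downvotee).
downvotes = [("u1", "w"), ("u1", "w"), ("u2", "w"), ("u3", "w"),
             ("w", "u1"), ("u2", "u3"), ("u4", "w")]

received = {}           # total downvotes received per person
downvoters_of = {}      # downvotee -> set of unique downvoters
for voter, votee in downvotes:
    received[votee] = received.get(votee, 0) + 1
    downvoters_of.setdefault(votee, set()).add(voter)

# Question 1 (downvotee -> downvoter): for each downvotee, does the
# median downvote count of his or her unique downvoters exceed his
# or her own count?
in_paradox = {
    votee: median(received.get(v, 0) for v in voters) > received[votee]
    for votee, voters in downvoters_of.items()
}
print(in_paradox)
```

In this toy data, the heavily downvoted writer "w" is not in the paradox condition (most of "w"'s downvoters receive fewer downvotes than "w" does), anticipating the pattern reported below for the real network.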

9.4.3 The Downvoting Paradox is Absent in the Full
Downvoting Network
In Tables 9.3 and 9.4, we report data that addresses the “downvotee → downvoter”
and “downvoter → downvotee” questions in the full downvoting network. Table 9.3
reports statistics for the typical values of the number of downvotes received by each
party; meanwhile, Table 9.4 reports statistics for the typical values of differences
between the number of downvotes received by the average neighbor and the
individual in question. As in Sect. 9.3, the statistics reported in Table 9.4 better
capture the impact of correlations across links in the downvoting network.
Tables 9.3 and 9.4 reveal that the analog of the strong paradox is present in the
answer to the “downvoter → downvotee” question: the typical downvotee of most
downvoters gets downvoted more than the downvoter. However, in the “downvotee
→ downvoter” case, the paradox is absent: for most downvotees, most of their
downvoters get downvoted less than they do.



Table 9.3 This table reports the typical number of downvotes received by people and their
average “neighbors” in the “downvotee → downvoter” and “downvoter → downvotee” questions

Typical values of downvotes received    Downvotee → downvoter    Downvoter → downvotee
Mean for downvotee
Mean for downvoter
Median for downvotee
Median for downvoter

Consider the “downvotee → downvoter” column. The “mean for downvotee” and “median for
downvotee” rows here are identical because they correspond to iterating over each downvotee in
the network, taking a mean or median over a single value (the number of downvotes received
by that downvotee), and then taking a median of those values over the downvotees in the network.
Meanwhile, the “mean for downvoter” and “median for downvoter” rows are different because they
correspond to iterating over each downvotee in the network, taking a mean or median over each
downvotee’s downvoters, and then taking a median over downvotees to compute a typical value
of the average over downvoters. The statistics in the “downvoter → downvotee” column are
analogous, with the roles of downvotees and downvoters in the computation reversed. Note that
these are population values over the entire downvoting network
Table 9.4 This table reports the typical differences in the number of downvotes received by
people and their average “neighbors” in the “downvotee → downvoter” and “downvoter →
downvotee” questions

Typical values of differences in downvotes received    Downvotee → downvoter    Downvoter → downvotee
Mean downvoter − downvotee
Median downvoter − downvotee

Consider the “downvotee → downvoter” column. The “mean downvoter − downvotee” and “median
downvoter − downvotee” rows correspond to the following calculations: (1) iterate over each
downvotee in the network, (2) compute the mean or median number of downvotes received by
the downvotee’s downvoters, (3) subtract the number of downvotes received by the downvotee,
and (4) take a median of the difference from step 3 over all downvotees. The statistics in the
“downvoter → downvotee” column are analogous, with the roles of downvotees and downvoters
in the computation reversed except in the order in which we compute the difference. In other
words, we continue to subtract the number of downvotes received by downvotees from the number
received by downvoters, rather than reversing the order of the subtraction. Note that these are
population values over the entire downvoting network

From one perspective, the fact that the paradox does not occur in the “downvotee
→ downvoter” case may be unsurprising. It may be the case that most downvotees
get downvoted for understandable reasons (e.g., they write controversial or factually
incorrect content). Consequently, we may expect them to get downvoted more
frequently than their downvoters. However, it is useful to think about what it would
mean if the analog of this paradox was absent in the follow network. In that context,
the analogous situation would be if, for most people, most of their followers have
fewer followers than they do. As we saw in the previous section, the strong positive
correlations between indegree and outdegree actually produce the opposite trend:


S. Iyer

we showed that most people who have both followers and followees have fewer
followers than most of their followers. We now examine how the correlations
between downvoting and being downvoted produce a different outcome in the
downvoting network.
Consider the joint distribution over people in the downvoting network of four variables:
• k_in: the number of unique downvoters of the person (i.e., that person’s indegree
in the downvoting network).
• d_in: the number of downvotes the person received, which should respect d_in ≥ k_in.
• k_out: the number of unique downvotees of the person (i.e., that person’s outdegree
in the downvoting network).
• d_out: the total number of downvotes the person cast, which should respect
d_out ≥ k_out.
We call this joint distribution p(k_in, d_in, k_out, d_out). Now, imagine choosing a
downvotee and then following a random incoming link out to a downvoter of
that downvotee. If we adopt a random-wiring assumption, then we ought to reach
downvoters in proportion to the number of distinct people whom they downvoted,
which is k_out. We expect the distribution of the four variables over the randomly
sampled downvoter to follow:

p_dvr(k_in, d_in, k_out, d_out) = k_out p(k_in, d_in, k_out, d_out) / ⟨k_out⟩    (9.9)

This shifts the distribution to higher values of k_out. If k_out and d_in are positively
correlated, we might expect the typical downvoter of most downvotees to get
downvoted more than the downvotee.
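A minimal sketch of the reweighting in Eq. (9.9), restricted to the k_out marginal; the people and counts here are hypothetical toy values, not Quora data:

```python
# Under random wiring, a downvoter is reached in proportion to k_out, so the
# expected distribution over sampled downvoters weights each person by
# k_out / <k_out>. Toy values for illustration only.
def random_wiring_weights(k_out):
    """Map person -> probability of being reached as a random downvoter."""
    total = sum(k_out.values())             # equals <k_out> * (number of people)
    return {p: k / total for p, k in k_out.items()}

k_out = {"a": 0, "b": 1, "c": 3}            # unique downvotees per person
w = random_wiring_weights(k_out)
print(w)  # {'a': 0.0, 'b': 0.25, 'c': 0.75}; "a" never downvoted, so weight 0
```

Note how anyone with k_out = 0 can never be reached this way, which is why the sampled population consists only of downvoters.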
We probe correlations between k_out and d_in in Fig. 9.6 by bucketing downvoters
by their values of k_out and then plotting percentiles of the distribution of d_in. The plot
shows that, over a large range of k_out, the majority of downvoters actually receive no
downvotes at all (i.e., they have d_in = 0 and are “undownvoted downvoters”). This
reveals a type of anti-correlation that is at play in the downvoting network: a typical
person who has k_out > 0 (i.e., they have downvoted someone) is actually more likely
to have k_in = d_in = 0 than to have k_in > 0. We actually could have anticipated this
from Table 9.3 above: the typical downvoter of most downvotees is an undownvoted
downvoter, which trivially means that this downvoter gets downvoted less than the
downvotee.
In Fig. 9.7, we follow the logic that led to Fig. 9.4 to plot four complementary
cumulative distributions of d_in:
1. In blue, we plot the overall distribution of d_in in the downvoting network,
including all downvoters and downvotees.
2. In green, we plot the distribution of d_in over downvotees in the network.

9 Friendship Paradoxes on Quora


Fig. 9.6 This plot shows percentiles of the number of downvotes an individual received vs. ranges
of the number of unique people that individual downvoted. We show the 25th, 50th, and 75th
percentiles of the distribution. This plot shows that, over a large range of unique downvotee counts,
the median number of downvotes received is zero. In other words, the distribution is strongly
affected by the presence of “undownvoted downvoters”

3. In red, we plot the distribution that arises from repeatedly randomly sampling
a downvotee, randomly sampling a downvoter of that downvotee, and then
measuring d_in for the downvoter.
4. In purple, we plot the distribution of d_in over downvoters that we would expect
from the random-wiring assumption in Eq. (9.9).
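The two-step sampling behind distribution 3 can be sketched as follows; the toy network and downvote counts are hypothetical stand-ins for the real data:

```python
# Distribution 3: repeatedly pick a downvotee uniformly at random, then pick
# one of that downvotee's downvoters uniformly at random, and record the
# downvoter's d_in. Toy data for illustration only.
import random

def sample_downvoter_din(downvoters_of, d_in, trials=10000, seed=0):
    rng = random.Random(seed)
    downvotees = list(downvoters_of)
    samples = []
    for _ in range(trials):
        v = rng.choice(downvotees)          # step 1: random downvotee
        u = rng.choice(downvoters_of[v])    # step 2: random downvoter of v
        samples.append(d_in[u])
    return samples

downvoters_of = {3: [1, 2], 1: [3]}         # downvotee -> list of downvoters
d_in = {1: 1, 2: 0, 3: 2}
s = sample_downvoter_din(downvoters_of, d_in)
print(sum(s) / len(s))                      # expected value is 1.25 here
```

Because the sampling walks the actual links, this estimate reflects correlations across links, unlike the random-wiring prediction of Eq. (9.9).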
Here, the random-wiring assumption actually predicts an inward shift of the median
of the distribution of d_in over downvoters (relative to the overall distribution 1). This
inward shift survives in distribution 3 (which takes into account correlations across
links by applying the two-step sampling procedure to the actual network). This,
in turn, helps to explain the strong anti-paradox that we observe in Table 9.4.

9.4.4 The Downvoting Paradox Occurs When the Downvotee
and Downvoter Are Active Contributors
Let us now ask why, in terms of real behavior on the product, it may be more
likely for the typical downvoter of most downvotees to not receive any downvotes.
One possible explanation is that there is a barrier to receiving downvotes that does



Fig. 9.7 This plot shows four distributions of the number of downvotes received. We plot
complementary cumulative marginal distributions, which show probabilities that the number of
downvotes received by an individual is at least the value on the x-axis. In blue, we show the
real distribution of downvotes received over all people in the downvoting network, including both
downvoters and downvotees. In green, we show the real distribution of downvotes received over
people who received at least one downvote (i.e., over all downvotees). In red, we show the real
distribution of downvotes received over downvoters that we find if we repeatedly randomly sample
a downvotee and then randomly sample a downvoter of that downvotee. In purple, we show the
inferred distribution of downvotes received over downvoters that we would expect if we apply the
random-wiring assumption in Eq. (9.9) to our empirical data

not exist for casting downvotes: in particular, to receive downvotes, it is necessary
to actually write answers. This motivates posing the “downvotee → downvoter”
and “downvoter → downvotee” questions again, but just for people who are active
answer writers.
We now pursue this line of investigation and show that, when we condition on
content contribution, both sides of the downvoting paradox hold. In particular, we
revise the two questions as follows:
1. The downvotee → downvoter question: For most people who wrote at least
n answers and who received downvotes from people who also wrote at least n
answers, did most of those downvoters receive more or fewer total downvotes
than they did?
2. The downvoter → downvotee question: For most people who wrote at least n
answers and who downvoted people who also wrote at least n answers, did most
of those downvotees receive more or fewer total downvotes than they did?
Note that, as before, we are always referring to non-anonymous answers and non-anonymous
downvotes here, even if we sometimes omit the adjectives for convenience.
In Tables 9.5 and 9.6, we fix n = 3 and report results for the revised questions. These
results reveal that both sides of the downvoting paradox now hold.



Table 9.5 In this table, we report statistics that we obtain when we repeat the calculations that led
to Table 9.3 but restrict our attention to downvotees and downvoters who contributed at least n = 3
non-anonymous answers during the four-week window that our downvoting network represents

Typical values of downvotes received
                        Downvotee → downvoter   Downvoter → downvotee
Mean for downvotee
Mean for downvoter
Median for downvotee
Median for downvoter

These are population values over all downvoting pairs in the downvoting network that satisfy the
content-contribution condition. Note that the variable that we compare between downvoters and
downvotees is still total downvotes received, not just downvotes received from active contributors
Table 9.6 In this table, we report statistics that we obtain when we repeat the calculations that led
to Table 9.4 but restrict our attention to downvotees and downvoters who contributed at least n = 3
non-anonymous answers during the four-week window that our downvoting network represents

Typical values of differences in downvotes received
                                 Downvotee → downvoter   Downvoter → downvotee
Mean downvoter − downvotee
Median downvoter − downvotee

These are population values over all downvoting pairs in the downvoting network that satisfy the
content-contribution condition. Note that the variable that we compare between downvoters and
downvotees is still total downvotes received, not just downvotes received from active contributors

We now study the “anatomy” of the “downvotee → downvoter” side of the
paradox, under the content-contribution condition. Note first that the content-contribution
condition motivates revising the definitions of the four variables in
Eq. (9.9):
• k_in: the number of unique downvoters of the person who have written at least n
answers.
• d_in: the number of downvotes the person received, which should still respect
d_in ≥ k_in.
• k_out: the number of unique downvotees of the person who have written at least n
answers.
• d_out: the total number of downvotes the person cast, which should still respect
d_out ≥ k_out.
If we study correlations between k_out and d_in for just those people who satisfy the
n = 3 threshold, we find that, consistent with the existence of the strong paradox in the “downvotee →
downvoter” analysis, a strong positive correlation is evident. We plot this in Fig. 9.8.
Note that k_out for each node now just refers to the number of people whom the
downvoter downvoted who wrote at least three answers. We can now probe the
impact that the correlations in Fig. 9.8 have upon the downvoter distribution of
d_in by plotting the analog of Fig. 9.7, but with the content-contribution condition.
We do this in Fig. 9.9 and observe that the distribution of d_in over downvoters



Fig. 9.8 This plot, like Fig. 9.6, shows percentiles of the number of downvotes an individual
received vs. ranges of the number of unique people that individual downvoted. The difference with
respect to Fig. 9.6 is that we have imposed the content-contribution threshold that we discuss in the
text. This means that all people considered for this plot contributed at least n = 3 non-anonymous
answers during the four-week window represented by the downvoting network. Furthermore,
the number of “threshold unique downvotees” for each individual only counts those downvotees
who also satisfy the content-contribution criteria. Meanwhile, the number of “overall downvotes
received” still includes all downvotes received from any downvoter, not just those who satisfy the
content-contribution threshold

now shifts outward as expected. Moreover, we observe that the random-wiring
assumption works extremely well in this context: this means that the outward shift
of the distribution is approximately what we would expect from the within-node
correlations seen in Fig. 9.8, with correlations across links playing a minor role.

9.4.5 Does a “Content-Contribution Paradox” Explain
the Downvoting Paradox?
We now consider a possible explanation for the “downvotee → downvoter” side
of the downvoting paradox: that the typical downvoter of the typical downvotee
contributes more content. If this is the case, then maybe the downvoter generally
gives himself or herself more opportunities to be downvoted, and consequently, we
should not be surprised that the downvoter typically gets downvoted more. Table 9.7
shows that this “content-contribution paradox” actually occurs: for most downvotees
who wrote at least three recent answers and got downvoted by someone who also



Fig. 9.9 This plot, like Fig. 9.7, shows four distributions of the number of downvotes received. We
plot complementary cumulative marginal distributions, which show probabilities that the number
of downvotes received by an individual is at least the value on the x-axis. In blue, we show the
real distribution of downvotes received over all people in the downvoting network, including both
downvoters and downvotees. In green, we show the real distribution of downvotes received over
people who received at least one downvote (i.e., over all downvotees). In red, we show the real
distribution of downvotes received over downvoters that we find if we repeatedly randomly sample
a downvotee and then randomly sample a downvoter of that downvotee. In purple, we show the
inferred distribution of downvotes received over downvoters that we would expect if we apply
the random-wiring assumption in Eq. (9.9) to our empirical data. The difference with respect to
Fig. 9.7 is that we have imposed the content-contribution threshold that we discuss in the text.
Thus, the distributions are computed over people who contributed at least n = 3 non-anonymous
answers during the four-week window represented by the downvoting network. Furthermore, when
we randomly sample a downvotee and then a downvoter, we require that both parties satisfy the
threshold. However, the number of downvotes received still includes all downvotes received from
any downvoter, not just those who satisfy the content-contribution threshold

wrote at least three recent answers, most of those downvoters wrote at least four
more recent answers than they did. Moreover, comparing Tables 9.6 and 9.7, we
see that the ratio of downvotes to recent answers may actually be lower for the
downvoters than for the downvotees.
Does this “content-contribution paradox” fully account for the downvoting
paradox? To determine this, we can see if the downvoting paradox survives the
following procedure: in each unique downvoting pair of the dataset, in place of the
actual number of downvotes received by the downvoter, assign the number of
downvotes received by a randomly chosen person who wrote the same number of
recent public answers. Then, we can redo the calculations for the “downvotee →
downvoter” side of the downvoting paradox in this “null model” and check if the
paradox still occurs. If so, then that would support the argument that the content-contribution
paradox explains the downvoting paradox.
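A minimal sketch of this null model, assuming a simple pair list and per-person answer counts; all names and values below are hypothetical:

```python
# Null model: in each downvoting pair, replace the downvoter's observed
# downvote count with that of a randomly chosen person who wrote the same
# number of recent answers. Toy data for illustration only.
import random

def null_model_downvoter_counts(pairs, received, answers, seed=0):
    """pairs: (downvoter, downvotee) tuples; received: downvotes received;
    answers: recent answer counts. Returns per-pair reassigned counts."""
    rng = random.Random(seed)
    by_answer_count = {}
    for person, n in answers.items():       # pool people by answer volume
        by_answer_count.setdefault(n, []).append(person)
    reassigned = {}
    for downvoter, downvotee in pairs:
        pool = by_answer_count[answers[downvoter]]
        reassigned[(downvoter, downvotee)] = received[rng.choice(pool)]
    return reassigned

pairs = [("carol", "alice")]
received = {"alice": 4, "bob": 9, "carol": 7}
answers = {"alice": 5, "bob": 5, "carol": 7}    # carol is the only 7-answer person
print(null_model_downvoter_counts(pairs, received, answers))  # {('carol', 'alice'): 7}
```

The paradox calculation is then rerun on the reassigned counts; if the paradox survives, content volume alone would suffice to explain it.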



Table 9.7 In this table, we report statistics that we obtain when we repeat the calculations that led
to Table 9.6, including imposition of the content-contribution threshold

Typical values of differences in answers written
                                 Downvotee → downvoter   Downvoter → downvotee
Mean downvoter − downvotee
Median downvoter − downvotee

However, the variable that we compare between downvoters and downvotees is now the number of
non-anonymous answers contributed during the four-week window represented by the downvoting
network. Note that these are population values over all downvoting pairs in the downvoting network
that satisfy the content-contribution condition

To check if this is the case, we have followed up on the analysis in this section by
running the “null-model” analysis on a later snapshot of the downvoting network,
representing the 4 weeks preceding October 1, 2016. We show the results in
Fig. 9.10. In this plot, we show the paradox calculation for various values of the
content-contribution threshold n in the real downvoting network, as well as in the
null model described above. This analysis reveals three things:
1. The “downvotee → downvoter” side of the downvoting paradox is not a
peculiarity of a particular snapshot of the downvoting network, since it occurs
in this later snapshot as well. The “downvoter → downvotee” side of the paradox
also occurs in this later snapshot, although that data is not represented in this plot.
2. The “downvotee → downvoter” side of the downvoting paradox is not a
peculiarity of a specific content-contribution threshold n. In the plot, the paradox
gets stronger as n grows.
3. More to the current point, the content-contribution paradox in terms of recent
answers composed does not fully account for the downvoting paradox, since the
paradox disappears under the null model.
A follow-up suggestion might be that, if recent content-contribution volume cannot
account for the paradox, then maybe historical content-contribution volume does:
namely, maybe the downvoters in the “downvotee → downvoter” side of the paradox
receive more downvotes because they have more answers over their entire history
on Quora. Figure 9.10 provides evidence against this possibility too, by presenting
data for a null model with respect to historical answers.
One weakness of the null-model analysis above might be that, for high values of
content-contribution volume, we may not have sufficient examples of people who
contributed precisely that number of answers. This could make it difficult to get a
good estimate of the distribution of number of downvotes received over people who
contribute a specific number of answers, and that could, in turn, compromise the
randomization process employed by the null model. To check if this is a problem,
we have rerun the null models while treating everyone with more than n_t recent or
historical answers equivalently. In other words, in any downvoter–downvotee pair
where the downvoter has at least n_t answers, we assign that person a number of
downvotes received by someone else who also wrote at least n_t answers, irrespective



Fig. 9.10 This plot compares the “downvotee → downvoter” side of the downvoting paradox (the
blue line) to two null models. In the red line, for each downvotee, downvoter pair in the downvoting
network, we match the downvoter to someone who contributed the same number of recent public
answers and use that person’s downvote count instead. In the green line, we do the same type
of matching, but with someone who has the same number of historical public answers. This plot
shows that recent and historical content-contribution volume alone cannot explain the downvoting
paradox. The plot also shows that the downvoting paradox is not a peculiarity of a particular choice
of the content contribution threshold n. Note that this plot, unlike all the others in this section, is for
a later snapshot of the downvoting network (representing the 4 weeks preceding October 1, 2016)
and shows that the paradox is not a peculiarity of any one time window. Two points about the
null-model data: (1) It may be surprising that fractional values appear. This can happen because,
if a downvotee has an even number of downvoters, then the median downvote count may fall
between two integers. (2) The error bars on the null-model data are computed by repeating the null
model analysis 100 times and then taking 2.5th and 97.5th percentiles of those 100 samples; more
sampling would have been ideal, but this is a slow calculation, and the current level of sampling
already shows that the actual observations (i.e., the blue line) are unlikely to be explained by either
null model

of the precise number of answers. The paradox disappears for various choices of n_t,
so the conclusions that we draw from Fig. 9.10 appear to be robust to this issue.
Thus, content-contribution volume alone does not seem to account for the
downvoting paradox, despite the fact that a “content-contribution paradox” also
occurs in the downvoting network.

9.4.6 Summary and Implications
We now discuss the implications of the observations in this section. First of all, the
absence of the “downvotee ! downvoter” side of the downvoting paradox in the full
downvoting network provides an example of why the strong generalized paradox is



not statistically guaranteed in social networks. Strong generalized paradoxes may
or may not occur in any given network, depending upon the interplay of within-node
and across-link correlations between degrees and metrics in the network. Thus, our
data reinforces the point of Hodas, Kooti, and Lerman that strong paradoxes reflect
behavioral correlations in the social network (Hodas et al. 2014).
Meanwhile, the fact that both sides of the downvoting paradox occur once we
condition on sufficient content contribution indicates that strong versions of the
friendship paradox can occur in networks representing negative interactions. This is
relatively unexplored territory, since most studies of the friendship paradox are set
in networks representing positive interactions (e.g., friendship in real-world social
networks or follow relationships in online networks). Moreover, this observation
shows that paradoxes also occur when the interactions are largely hidden from the
participants. This matters because one possible explanation of the paradox would
be retaliation: people who downvote often are also likely to get downvoted more
because people downvote them to “get even.” This explanation is implausible
in the Quora context, because the identity of downvoters is hidden. Meanwhile,
explanations in terms of more content contribution on the part of downvoters are also
insufficient, based on our argumentation in this section. This leaves the intriguing
possibility that the actual explanation lies in a correlation between downvoting and
composing controversial content. A deeper natural-language study may be needed
to assess whether such a correlation exists, whether it accounts for the downvoting
paradox, or whether the actual explanation is something altogether different (e.g.,
more subtle explanations in terms of content contribution may still work, an example
being a situation where being a downvoter is correlated with getting more views on
your historical content).
Finally, we note that the setting of these strong paradoxes in a network representing negative interactions reverses the usually demoralizing nature of the friendship
paradox: Feld’s 1991 paper announced that “your friends have more friends than you
do,” and he noted that this statistical pattern could have demoralizing consequences
for many people (Feld 1991). In contrast, when analogs of the paradox occur in
networks representing negative interactions, that means that the people who interact
negatively with you are usually just as likely to be the recipients of negative
interactions themselves. This may provide some comfort to participants in online
social communities.

9.5 A Strong Paradox in Upvoting
9.5.1 Content Dynamics in the Follow Network
In this section, we turn our attention to upvoting and, in particular, to its role
as a social distribution mechanism. When a person on Quora casts an upvote
on an answer, that answer then has an increased chance to be seen by that



person’s followers in their homepage feeds, digest emails, and certain other social
distribution channels. As such, a plausible route for a piece of content to be
discovered by a sequence of people on Quora is the one depicted in Fig. 9.11. In this
network diagram, each node represents a person on Quora, and the links represent
follow relationships. Recall that one person (the “follower”) follows another (the
“followee”) because the follower is interested in the content contributed by the
followee. In Fig. 9.11, we depict a situation in which the person represented by
the node with an “A” has written an answer. A follower of this author reads the
answer in homepage feed and upvotes it. This first upvoter’s followers then have
an increased chance of encountering the answer for social reasons, and one of them
finds the answer in a digest email and upvotes it. This allows the answer to propagate
to the second upvoter’s followers, and a third person finds the answer in feed and
upvotes it. This is a purely social or viral pathway for content to propagate through
We take this opportunity to introduce the notion of an “upvote distance”: since
the first upvoter in Fig. 9.11 is a direct follower of the answer author, once that
upvote is cast, the answer reaches distance d = 1 from the author. Until then, we
say it is at distance d = 0 (i.e., still with the author). After the third upvote is
cast, because it would take three hops along directed links to get back to the answer
author, we say that the answer is at distance d = 3. Since this is a purely social or
viral pathway for propagation, the answer sequentially moves from distance d = 0
to 1 to 2 to 3.
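Under this definition, an upvote distance is just the length of the shortest directed path of follow links from the upvoter back to the author. A minimal pure-Python sketch on a toy graph (the chapter’s actual analysis uses NetworkX):

```python
# Breadth-first search for the upvote distance: the number of hops along
# follow links from the upvoter back to the answer author (d = 1 for a
# direct follower). The tiny graph below is illustrative, not Quora's.
from collections import deque

def upvote_distance(follows, upvoter, author):
    """follows[u] is the set of people u follows; BFS from upvoter to author."""
    seen, frontier = {upvoter}, deque([(upvoter, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == author:
            return dist
        for nxt in follows.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None                 # no path back (e.g. a topic-page reader)

follows = {"u1": {"author"}, "u2": {"u1"}, "u3": {"u2"}}
print(upvote_distance(follows, "u3", "author"))  # three hops back: 3
```

Returning None for unreachable upvoters mirrors the case discussed later, where no path back to the author exists and no distance is naturally assigned.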

Fig. 9.11 Network diagram representing, in cartoon form, a fully social way for content to
propagate in the Quora follow network. Each node represents a person, and each arrow represents a
follow relationship. The person represented by the center node (indicated with an “A”) has written
a non-anonymous answer and then three people read and upvote the answer for social reasons. The
upvote plays a social distribution role at each step of this process. See the text for further discussion



But content on Quora also has the opportunity to be discovered via non-social
channels. A prominent one is search: people can find answers on Quora by issuing a
relevant query to an external search engine like Google or Bing or by using Quora’s
internal search engine. Moreover, people can be shown content in their homepage
feeds and digest emails for non-social reasons (e.g., because they follow a relevant
topic), so these parts of the product are themselves only partially social. Yet another
non-social channel for content discovery is to navigate directly to a topic page of
interest and read content specifically about that topic.
Figure 9.12 illustrates how these non-social channels may impact the dynamics
of content. As before, the person indicated with an “A” has written an answer. In
this case however, the first person to discover and upvote the answer does so via
internal search. This first upvoter happens to follow someone who directly follows
the answer author, but does not directly follow the answer author. As such, the
answer hops out to distance d = 2 without ever having been at distance d = 1; this
“leapfrogging” is possible because internal search is not a social means of discovery
and does not rely upon the follow graph. On the basis of the first upvoter’s action,
though, that person’s followers are more likely to see the answer, and a second
person finds the answer in homepage feed for this social reason and upvotes it.
Thus, the answer reaches a distance of d = 3 through a combination of social and
non-social means. The third upvoter in this cartoon scenario is a direct follower of
the answerer who encounters the answer in a digest email; this person’s upvote is

Fig. 9.12 Network diagram representing, in cartoon form, a content propagation pathway that
mixes social and non-social channels. Each node represents a person, and each arrow represents a
follow relationship. The person represented by the center node (indicated with an “A”) has written
a non-anonymous answer. A person discovers the answer via search. This person happens to be a
second-degree follower of the author (i.e., a follower of a direct follower of the author). Thus, the
answer hops out to distance d = 2 and does so without ever having been at distance d = 1, since
search is a non-social pathway for propagation. See the text for further discussion



at d = 1, since he or she directly follows the author. The fourth and final upvoter
finds the answer on a topic page. There actually is no path back from this person
to the answer author, so it is less clear what distance to assign to this upvote.
Nevertheless, this fact highlights an important aspect of the non-social channels
of content discovery: through them, answer authors can reach audiences that they
could never reach through purely social distribution.

9.5.2 Core Questions and Methodology
The fundamental question that we want to address in this section is the following:
when an answer receives an upvote, does the upvoter who casts that upvote typically
have more or fewer followers than the answer author? Furthermore, how does the
answer to this question vary with the number of followers of the answer author and
with the network distance of the upvote? We will see below that, in many practically
relevant cases, the following property holds: for most answer authors, for most of
their non-anonymous answers that get non-anonymous upvotes, most of the non-anonymous
upvoters have more followers than they do. This is a strong variant of
the friendship paradox that can have very important consequences for how content
dynamics play out in the Quora ecosystem.
To address the questions above, we will track all answers written on Quora in
January 2015 and track all upvotes cast on these answers until near the end of
October 2016. In this section, as in Sect. 9.4, we always mean that we look at
data for non-anonymous, non-deleted answers and non-anonymous upvotes (that
were not removed after being cast) on those answers, but we sometimes drop the
adjectives for brevity. To each answer, we attach the number of followers the answer
author had when the answer was composed, and to each upvote, we attach the
number of followers the upvoter had when the upvote was cast. Furthermore, to
each upvote, we attach the network distance from the upvoter to answer author
when the upvote was cast. Now, it is important to note that the underlying network
changed significantly between January 2015 and October 2016, with many more
people joining Quora and many new follow relationships being added. We take
these dynamics seriously and update the underlying network as we assign distances
and follower counts to each upvote. Our overall procedure is as follows:
1. Construct the follow network including people who signed up before January 1,
2015 and the links between them.
2. Iterate day-by-day from the start of the answer cohort (January 1, 2015) to the end
of the observation period. For each day, perform two actions:
(a) Update the network by adding a node for new people who signed up and
adding a link for new follow relationships.
(b) Compute the follower count for the upvoter and the distance between the
upvoter and answer author for each upvote that was cast during that day (and
was not subsequently removed) on the January 2015 cohort of answers.



3. Look at all upvotes or at some relevant subset of upvotes (e.g., those cast at
distance d) and ask:
(a) For each answer, what is the median follower count of the upvoters?
(b) For each answer author, what is the median over answers of the median
upvoter follower count?
(c) For each answer author, what is the ratio of the answer to question (b) to the
author’s follower count when the answer was composed?
(d) What is the median over the ratios in question (c)? If this median ratio
exceeds 1, then the strong paradox holds.
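The aggregation in step 3 can be sketched as follows, simplifying to a single follower count per author and using purely hypothetical toy numbers:

```python
# Sketch of steps (a)-(d): per answer, take the median upvoter follower
# count; per author, take the median over answers; divide by the author's
# own follower count; the strong paradox holds if the median ratio exceeds 1.
from statistics import median

def strong_paradox_ratio(authors):
    """authors: {author: (own_followers, [list of upvoter follower counts
    per answer, one inner list per answer])}. Returns the median ratio (d)."""
    ratios = []
    for own, answers in authors.values():
        per_answer = [median(u) for u in answers]   # (a) per-answer median
        per_author = median(per_answer)             # (b) median over answers
        ratios.append(per_author / own)             # (c) ratio to own count
    return median(ratios)                           # (d) median of the ratios

authors = {
    "a1": (2, [[10, 4], [6]]),      # medians 7, 6 -> 6.5 -> ratio 3.25
    "a2": (10, [[5, 5, 20]]),       # median 5 -> ratio 0.5
    "a3": (4, [[8], [2, 12]]),      # medians 8, 7 -> 7.5 -> ratio 1.875
}
print(strong_paradox_ratio(authors))  # median of {3.25, 0.5, 1.875} = 1.875
```

A result above 1, as in this toy example, is the strong-paradox condition of step (d).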
We implement these calculations using the NetworkX Python package (Hagberg
et al. 2008).
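A minimal NetworkX sketch of the day-by-day loop in steps 1–2, in which each day’s new follow links are added before that day’s upvotes are measured (the “update-first” protocol); the per-day data structure and toy values are assumptions for illustration, not the actual pipeline:

```python
# Day-by-day "update-first" procedure on a toy follow network. Each day
# supplies (new_edges, upvotes); an upvote is an (upvoter, author) pair.
import networkx as nx

def run_update_first(initial_edges, days):
    """Returns one (distance, upvoter_follower_count) tuple per upvote."""
    G = nx.DiGraph(initial_edges)           # edge u -> v means u follows v
    results = []
    for new_edges, upvotes in days:
        G.add_edges_from(new_edges)         # (a) update the network first
        for upvoter, author in upvotes:     # (b) then measure today's upvotes
            try:
                d = nx.shortest_path_length(G, upvoter, author)
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                d = None                    # no path back to the author
            followers = G.in_degree(upvoter) if upvoter in G else 0
            results.append((d, followers))
    return results

days = [([("u1", "a")], [("u1", "a")]),     # day 1: u1 follows a, upvotes a
        ([("u2", "u1")], [("u2", "a")])]    # day 2: u2 follows u1, upvotes a
print(run_update_first([], days))           # [(1, 0), (2, 0)]
```

The choice to update before computing is exactly the trade-off discussed below: it underestimates distances for upvotes cast just before a new follow, but it captures follows that precede the upvote within the same day.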
There are a number of subtleties to consider in the procedure outlined above.
Here are the most important ones:
• What links should we include in the network? In Sect. 9.3 above, we kept
only links between people who had been active on Quora within a four-week
window. Similarly, in this setting, we may want to avoid including all links, so as
to not compute artificially short paths through people who do not actively engage
with the product. In our analysis, we only add a link to the follow network if
the person being followed has added a non-deleted, non-anonymous answer or
non-anonymously upvoted a non-deleted answer since joining Quora. If that is
not the case, then we defer adding the link until that person takes one of those
actions. The interested reader can refer to our blog post, where we check how
pruning links affects the results of this analysis; the high-level takeaway is that it
does not dramatically affect the results (Iyer 2015).
• Within a given day of the calculation (see step 2 in the procedure above),
should we update the network or compute distances first? Both options
introduce some amount of error. To see why, consider a scenario where someone
finds an answer by an author that he or she has not yet followed and upvotes it.
In fact, the answer is so compelling that the reader also follows the answerer.
Later in the day, the reader visits feed, sees new content by the person whom
he or she just followed, and upvotes that too. Suppose we treat this scenario
within the “update-first” protocol where we update the graph before computing
distances. In this case, we would miss the fact that the original upvote happened
when the network distance between upvoter and answerer was greater than 1, and
possibly substantially greater. We end up underestimating the upvote distance.
Alternatively, suppose we use the “compute-first” protocol where we compute
distances before updating the graph. In this case, we miss out on the fact that
the second upvote likely happened because the reader was a first-degree follower
of the author. We end up overestimating the upvote distance. In the calculations
reported in this chapter, we always use the “update-first” protocol, but we check
robustness to changing the protocol in the blog post (Iyer 2015).
• What should we do about link removal? On Quora, people can stop following
others whose content no longer interests them. Our analysis only includes links

9 Friendship Paradoxes on Quora


that survived when the analysis was performed for the last time (in November
2016) and does not consider links that existed in the past that were subsequently
removed. Link removal is relatively rare compared to link addition (a very rough
estimate is that it happens about 4–5% as often) and substantially harder to track
in our dataset. Furthermore, we consider it very unlikely that link removal would
qualitatively change the results because of the robustness of the findings to other
structural modifications to the network, such as the two discussed above. For
these reasons, we have not checked robustness to this issue directly; however, we
cannot completely exclude the possibility that people who take the time to curate
their networks through link removal represent unusually important nodes in the network.
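To make the trade-off in the second point above concrete, the daily step under each protocol can be sketched as follows. This is a minimal illustration with our own function names and a plain adjacency-set graph, not the production code behind the analysis:

```python
from collections import deque

def distance(graph, src, dst, max_d=6):
    """BFS distance from src to dst in the directed follow graph;
    None if dst is farther than max_d hops."""
    if src == dst:
        return 0
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d >= max_d:
            continue
        for nxt in graph.get(node, ()):
            if nxt == dst:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

def daily_step(graph, new_links, upvotes, protocol="update-first"):
    """One day of the calculation: fold the day's new follow links into
    the graph either before measuring upvote distances ("update-first")
    or after ("compute-first")."""
    def add_links():
        for follower, followee in new_links:
            graph.setdefault(follower, set()).add(followee)
    if protocol == "update-first":
        add_links()
    dists = [distance(graph, voter, author) for voter, author in upvotes]
    if protocol == "compute-first":
        add_links()
    return dists
```

In the worked scenario above, "update-first" reports an upvote cast just before a new follow at distance 1 even though it happened at distance 2 or more, while "compute-first" makes the opposite error for upvotes cast just after the follow.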

9.5.3 Demonstration of the Existence of the Paradox
We first quote results for the paradox occurring over all upvotes. For most answer
authors from January 2015, for most of their answers written during that month that
subsequently received upvotes, most of those upvotes came from people who had
at least 2 more followers than the answer author. This means that a strong paradox
occurs, but this paradox may not seem very dramatic. To see why this paradox may
actually have practical implications for content dynamics, we need to look deeper,
and in particular, we need to look at the strength of the paradox for answer authors
who had few followers themselves. For example, for answer authors who had 1–9
followers when composing their answers, for most of their upvoted answers, the
median upvoter on those answers had around five times as many followers as they
did. This suggests that these answers could typically be exposed to larger audiences
from these upvotes.
The potential impact is made clearer by incorporating network structure into the
analysis: in Fig. 9.13, we plot the output of the procedure from Sect. 9.5.2, broken
down by order-of-magnitude of the follower count of the answerer and by network
distance from the upvoter to the answerer. The plot shows that, for answer authors
who had 1–9 followers, when their answers were upvoted by people at distance
d = 1 on the network, the median follower count of the upvoters on those answers
was around 6.9 times their own. The effect is actually more dramatic at d = 2, with
the median follower count of the d = 2 upvoters on most answers being around
29.6 times the follower count of the answer authors. For these answer authors, the
typical ratio remains above 1 all the way out to d = 6, which is the largest distance
that we study. Meanwhile, the paradox also holds for authors with tens of followers
out to distance d = 3, again peaking at d = 2 with a typical ratio of around 3.4.
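The "median over answers of the median within each answer," broken down by distance, can be sketched as follows. This is a toy illustration under our own naming; `upvote_records` is a hypothetical flat list of per-upvote tuples, not the actual data format used in the analysis:

```python
import statistics

def typical_ratio_by_distance(upvote_records):
    """upvote_records: iterable of (answer_id, distance,
    upvoter_followers, author_followers) tuples, one per upvote, with
    author_followers > 0. Returns, per network distance, the median over
    answers of each answer's median upvoter-to-author follower ratio."""
    per_answer = {}
    for answer, d, up_f, au_f in upvote_records:
        # Group the ratios by (answer, distance), mirroring the
        # per-distance breakdown of Fig. 9.13.
        per_answer.setdefault((answer, d), []).append(up_f / au_f)
    by_distance = {}
    for (answer, d), ratios in per_answer.items():
        # Inner median: the median upvoter ratio within one answer.
        by_distance.setdefault(d, []).append(statistics.median(ratios))
    # Outer median: the median over answers at each distance.
    return {d: statistics.median(meds) for d, meds in by_distance.items()}
```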
The practical consequence of this version of the strong friendship paradox is
the following: answers by people with low follower counts initially face a stark
funnel for social distribution. However, if these answers can get upvoted through this
stark funnel or circumvent the funnel through non-social means, then future social
propagation may be easier. This may help content written by these people to reach


S. Iyer

Fig. 9.13 For the January 2015 cohort of answers, this plot tracks the “typical” ratio of the follower
count of the upvoter to the follower count of the answer author vs. the network distance between
the upvoter and the answer author at the time of the upvote. Here, “typical” refers to the median
over answers of the median within each answer; see the procedure outlined in Sect. 9.5.2 for more
details. This plot demonstrates the existence of the strong paradox for answer authors with low
follower counts, which has potentially beneficial consequences for distribution of content by these authors

readers who are further from them in the follow network. These benefits are possible
because the upvoters in these situations are typically much better followed than the
answer author. We can make the claim about typicality because we have measured a
strong paradox. We emphasize again that strong paradoxes are not bound to occur:
it could have been the case that the typical upvoter of an author with few followers
has a comparable or lower number of followers than the answer author. This would
curtail the answer’s opportunities for social propagation, even if it was discovered
by some other means.
We should briefly comment on the behavior in Fig. 9.13 for answers by people
with many followers. In these cases, we see a sharp drop off of the ratio at each
network distance. This makes sense, because the distance of an upvoter from a
highly-connected author tells us something about the upvoter. If someone is several
hops away from a highly-connected author that means that they have not followed
that author, they have not followed anyone who follows that author, etc. This means
that they are unlikely to be very active participants in the online community, and
therefore, are unlikely to have many followers themselves. This selection effect gets
more dramatic with each step away from a highly-connected author, so the sharp
decay of the red and purple curves in Fig. 9.13 is completely expected.



Fig. 9.14 For answers written in January 2015 by answer authors with 1–9 followers at the time of
answer composition, this plot shows the distribution of upvotes received in the first 4 weeks after
the answer received its first upvote, broken down by the follower count of the first upvoter. The plot
includes those answers that received upvotes from people who had follower counts in the stated
ranges. We plot complementary cumulative distributions, which show the fractions of answers that
received at least the number of upvotes on the x-axis. The outward shift of the distribution with the
order-of-magnitude of follower count of the first upvoter suggests that the upvoting paradox may
assist in answer distribution, but we point out important caveats in the text

9.5.4 Can We Measure the Impact of the Paradox?
We now ask if we can measure the potential impact of the paradox reported in this
section. One way that we might probe this is the following:
1. Consider all answers written in January 2015 by people with 1–9 followers at the
time of answer composition.
2. For those answers that received their first upvote by 4 weeks before the cutoff
time of the analysis (November 1, 2016), record the follower count of the first
upvoter at the time the upvote was cast.
3. Measure how many upvotes the answer received in the 4 weeks beginning at the
time of that first upvote.
4. Look at the distribution of number of upvotes received, broken down by the
order-of-magnitude of followers of the first upvoter.
We report the results of this analysis in Fig. 9.14.
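The four steps above amount to grouping answers by the order-of-magnitude of the first upvoter's follower count and computing a complementary cumulative distribution per group. A rough sketch, with a hypothetical input format and our own function name:

```python
import math
from collections import defaultdict

def ccdf_by_first_upvoter(answers):
    """answers: iterable of (first_upvoter_followers, upvotes_in_4_weeks)
    pairs, one per answer. Groups answers by the order-of-magnitude of
    the first upvoter's follower count and returns, per group, the
    fraction of answers that received at least k upvotes, for each k."""
    groups = defaultdict(list)
    for followers, upvotes in answers:
        magnitude = int(math.log10(followers)) if followers > 0 else None
        groups[magnitude].append(upvotes)
    ccdfs = {}
    for magnitude, counts in groups.items():
        n = len(counts)
        # Complementary cumulative distribution: P(upvotes >= k).
        ccdfs[magnitude] = {k: sum(c >= k for c in counts) / n
                            for k in range(1, max(counts) + 1)}
    return ccdfs
```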
As the order-of-magnitude of the follower count of the first upvoter grows in
Fig. 9.14, the distribution shifts outward. This suggests that the upvoting paradox
could actually boost distribution of answers from people with few followers, as
described above. However, it is important to treat this observation with care: there
are many confounding effects here. For instance, it might be the case that people



with many followers are better at recognizing content that will be successful on
Quora, and as such, their early upvote simply reveals preexisting qualities of the
answer that make it more likely to attract attention, rather than directly causing that attention.
Meanwhile, we should also point out a surprising observation that may naively
seem at tension with Fig. 9.14. For the answers represented in the figure, in those
cases where the first upvoter had 1–9 followers, the mean number of upvotes
received within 4 weeks of the first upvote was almost twice as large as the mean
number received in cases where the first upvoter had thousands of followers. This
is because there were more cases where answers in the former category received
very large numbers of upvotes. This too could be due to a type of correlation: the
mere fact that the first upvoter on an answer was someone with very few followers
might reveal that this answer was one that had the opportunity to be seen by people
who are less engaged with the product. In turn, this might mean that the answer was
one that received wide topic-based distribution or that was very popular via search.
In these cases, maybe we should not be surprised that the answer went on to receive
many upvotes.
We cannot completely remove these confounding effects via the type of observational
analysis that we have done in this section. Nevertheless, the outward shift of
the distributions in Fig. 9.14 should serve as further motivation for trying to ascertain
the actual impact of the upvoting paradox, perhaps through an appropriately-designed
controlled experiment.

9.5.5 Summary and Implications
In this section, we have shown that the following variant of the strong friendship
paradox holds: for most answer authors with low follower counts, for most of their
answers, most of their distance d upvoters have more followers than they do. This
holds for a large range of d for people with the lowest follower counts. We have
demonstrated this friendship paradox for the January 2015 cohort of answers in
Fig. 9.13, but our findings are not peculiar to this group of answers. The interested
reader can find relevant data for other sets of answers in a related blog post entitled
“Upvote Dynamics on the Quora Network” (Iyer 2015).
This variant of the paradox is special for a number of reasons. First, the analysis
takes into account dynamics of the underlying network, whereas most studies of the
friendship paradox focus upon static snapshots of a social network. Secondly, the
paradox does not fit neatly into either the standard or generalized paradox categories.
The metric being compared is a form of degree (specifically, indegree or follower
count), but the links of the network that get considered in the comparison are subject
to a condition: that an upvote happened across the link. Furthermore, the same link
can count multiple times if one person upvotes another many times. Finally, this
paradox generalizes the notion of a “link” itself to consider followers of followers
(i.e., d = 2 upvoters), followers of followers of followers (i.e., d = 3 upvoters),
etc. Studies of the friendship paradox for anything other than first-degree neighbors



have been rather rare, although Lattanzi and Singer have recently touched upon this
subject (Lattanzi and Singer 2015).
Most importantly though, this variant of the friendship paradox highlights how
the friendship paradox, or variants thereof, can have practical ramifications for
the fate of content on online social networks. People with low follower counts
would have quantitatively lower opportunity for their content to win visibility if this
paradox did not occur. This issue could actually be more acute in social networks
other than Quora, where people do not have the opportunity to win visibility for
non-social reasons. In products where distribution is purely social, the existence of
this type of paradox (or something similar) may be vital for new participants to be
able to attract an audience. Therefore, we hope that this study will inspire inquiry
into similar phenomena in other online social products.

9.6 Conclusion
The “friendship paradox,” the statistical pattern where the average neighbor of a
typical person in a social network is better connected than that person, is one of the
most celebrated findings of social network analysis. Variants of this phenomenon
have been observed in real-world social networks over the past 25 years, dating back
to the work of Feld (Feld 1991). In recent years, the availability of large volumes of
data collected from online social networks has ushered in a new era of theoretical
and empirical developments on this phenomenon. There has been theoretical work
aimed at clarifying when and how certain variants of the friendship paradox occur,
putting the original work of Feld on stronger mathematical footing (Cao and Ross
2016; Lattanzi and Singer 2015). There has also been empirical work that points out
that the friendship paradox occurs for metrics other than friend count or degree, so
that the average neighbor of most individuals in many social networks scores higher
according to several metrics, for example activity or content contribution in that
network (Eom and Jo 2014; Hodas et al. 2013; Bollen et al. 2016). Finally, there has
been work that clarifies that, when analyzing friendship paradoxes, not all averages
are created equally (Hodas et al. 2014). The so-called strong paradox (where the
median neighbor of most individuals in a network scores higher on some metric)
can often teach us much more about the functioning of the network than the weak
paradox (where only the mean neighbor of most individuals scores higher on that metric).
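On a toy network, the distinction between the two averages can be made concrete. The sketch below uses our own function name and data, not code from any of the cited studies; a single well-connected outlier drags up the mean neighbor value but not the median:

```python
import statistics

def paradox_share(graph, metric, avg):
    """Fraction of people whose `avg` neighbor value (median for the
    strong paradox, mean for the weak one) strictly exceeds their own
    value of the metric. graph maps each person to their neighbors."""
    hits = total = 0
    for person, neighbors in graph.items():
        if not neighbors:
            continue
        total += 1
        if avg([metric[n] for n in neighbors]) > metric[person]:
            hits += 1
    return hits / total

# Toy network: "p" has two low-scoring neighbors and one outlier "h".
graph = {"p": ["a", "b", "h"], "a": ["p"], "b": ["p"], "h": ["p"]}
followers = {"p": 3, "a": 1, "b": 1, "h": 100}
weak = paradox_share(graph, followers, statistics.mean)      # p, a, b: 0.75
strong = paradox_share(graph, followers, statistics.median)  # only a, b: 0.5
```

Here the weak paradox holds for "p" (the mean neighbor has 34 followers) while the strong one does not (the median neighbor has only 1), illustrating why the two versions can tell different stories about the same network.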
In this chapter, we have applied these recent developments to the study of various
realizations of the friendship paradox on Quora, an online knowledge-sharing
platform that is structured in a question-and-answer format. We have identified three
different incarnations of the strong paradox in networks that represent core parts of
the Quora ecosystem. First, in Sect. 9.3, we have analyzed the network of people
following one another on Quora. We have confirmed that the four “canonical”
degree-based paradoxes in directed social networks all occur in the follow network.
Next, in Sect. 9.4, we studied the network induced by people downvoting one



another during a four-week period and found that, for most sufficiently active writers
who got downvoted, most of the sufficiently-active writers who downvoted them got
downvoted just as frequently. Finally, in Sect. 9.5, we found that, for writers with
low follower counts, most of their upvoters have many more followers than they do.
We noted the potential benefits that this phenomenon has for the distribution of
content written by people who have yet to amass a large following on Quora.
Our results in Sect. 9.3 represent the first published measurements of the standard
degree-based paradoxes on Quora, and investigating these paradoxes is a natural
and necessary precursor to examining more exotic variants of the phenomenon.
However, it is the more exotic paradoxes that we study in Sects. 9.4 and 9.5 that,
we believe, point the way to important future studies. As we have mentioned above,
the “downvoting paradox” in Sect. 9.4 occurs in a context that is relatively rarely
examined in research on the friendship paradox: a network representing adversarial
or negative interactions. Our analysis of the downvoting paradox motivates many
follow-up questions. For example, to what extent is the downvoting paradox
explained by an increased tendency of downvoters to produce controversial content
themselves? Furthermore, downvoting on Quora represents a very particular type
of negative interaction. The identity of downvoters is hidden from downvotees and
this can have important consequences for the behavior of these parties: downvoters
may feel freer to give negative feedback if they are not publicly identified, and
the downvotees cannot retaliate against any specific individual if they believe that
they have been downvoted. Does something like the downvoting paradox survive if
the underlying product principles are different (e.g., if the identity of downvoters
is public), or would such a situation fundamentally alter the dynamics? We may
be able to address these questions by analyzing friendship paradoxes in networks
representing other types of negative interactions in online or real-world social networks.
Meanwhile, we have noted that the paradox in upvoting that we demonstrate in
Sect. 9.5 can have direct practical consequences for the fate of content on Quora.
This underscores why the friendship paradox should not be thought of as merely
a sampling bias. It actually matters that the typical upvoter of a typical piece of
content by a relatively undiscovered writer is not a typical person from the full
population. The fact that this typical upvoter is more highly followed than that
typical person may help new writers be discovered and win influence in the network.
We hope that this study motivates researchers to study the role that strong friendship
paradoxes play in content propagation on online social networks. There has been
recent work on how various versions of the friendship paradox can influence
opinion spreading and adoption on social networks, but as far as we know, the role
that friendship paradoxes play in the discovery of individual pieces of content is
relatively unexplored (Jackson 2016; Lerman et al. 2016). As we have mentioned
above, the role that these phenomena play may be especially important in products
that employ purely social means of content distribution and discovery.



Acknowledgements Some of the data that is presented in this chapter was initially shared publicly
through two posts on Quora’s data blog, entitled Upvote Dynamics on the Quora Network and
Friendship Paradoxes and the Quora Downvoting Paradox (Iyer 2015; Iyer and Cashore 2016a).
M. Cashore, who was an intern on Quora’s data team during the winter of 2015, collaborated
with the present author on early stages of the research for Friendship Paradoxes and the Quora
Downvoting Paradox and coauthored a forthcoming article summarizing some of the findings from
Sects. 9.3 and 9.4 in the 2016 Proceedings of the American Statistical Association’s Joint Statistical
Meeting (Iyer and Cashore 2016b). The present author would also like to thank K. Lerman and Y.
Singer for stimulating discussions about their work and B. Golub for informing him about the
recent work of Cao and Ross (Cao and Ross 2016). Finally, the author thanks the Quora data team
for reviewing the work reported in this chapter: within the team, the author is particularly indebted
to W. Chen for reviewing important parts of the code for Sect. 9.4 and to W. Chen, O. Angiuli, and
Z. Kao for introducing him to the “distribution-free” method for computing confidence intervals
on percentile metrics, which was useful in Sect. 9.3 (Hollander et al. 1999).

References

Bollen, J., Gonçalves, B., van de Leemput, I., & Ruan, G. (2016). The happiness paradox: your
friends are happier than you. arXiv preprint arXiv:1602.02665.
Cao, Y., & Ross, S. M. (2016). The friendship paradox. Mathematical Scientist, 41(1).
Christakis, N. A., & Fowler, J. H. (2010). Social network sensors for early detection of contagious
outbreaks. PLOS One, 5(9), e12948.
Cohen, R., Havlin, S., & Ben-Avraham, D. (2003). Efficient immunization strategies for computer
networks and populations. Physical Review Letters, 91(24), 247901.
Coleman, J. S. (1961). The adolescent society. New York: Free Press.
Eom, Y.-H., & Jo, H.-H. (2014). Generalized friendship paradox in complex networks: The case
of scientific collaboration. Scientific Reports, 4, 4603.
Feld, S. L. (1991). Why your friends have more friends than you do. American Journal of
Sociology, 96(6), 1464–1477.
Hagberg, A. A., Schult, D. A., & Swart, P. J. (2008, August). Exploring network structure,
dynamics, and function using NetworkX. In Proceedings of the 7th Python in Science
Conference (SCIPY2008), Pasadena, CA, USA (pp. 11–15).
Harrigan, N., & Yap, J. (2017). Avoidance in negative ties: Inhibiting closure, reciprocity and
homophily. Social Networks, 48, 126–141.
Hodas, N., Kooti, F., & Lerman, K. (2013). Friendship paradox redux: Your friends are more
interesting than you. arXiv preprint arXiv:1304.3480.
Hodas, N., Kooti, F., & Lerman, K. (2014). Network weirdness: Exploring the origins of
network paradoxes. In Proceedings of the International Conference on Web and Social Media
(pp. 8–10).
Hollander, M., Wolfe, D., & Chicken, E. (1999). Nonparametric statistical methods (3rd ed.,
pp. 821–828). Wiley Series in Probability and Statistics.
Iyer, S. (2015). Upvote dynamics on the quora network. Data @ Quora (data.quora.com).
Iyer S., & Cashore, M. (2016a). Friendship paradoxes and the quora downvoting paradox.
Data@Quora (data.quora.com).
Iyer, S., & Cashore, M. (2016b). Friendship paradoxes and the quora downvoting paradox. In JSM
Proceedings, Section on Statistics in Marketing (pp. 52–67). Alexandria: American Statistical
Association.
norms. Available at SSRN.
Krackhardt, D. (1996). Structural leverage in marketing. In Networks in marketing (pp. 50–59).
Thousand Oaks: Sage.


S. Iyer

Lattanzi, S., & Singer Y. (2015). The power of random neighbors in social networks. In
Proceedings of the Eighth ACM International Conference on Web Search and Data Mining
(pp. 77–86).
Lerman, K., Yan, X., & Wu, X.-Z. (2016). The “majority illusion” in social networks. PLOS One,
11(2), e0147617.
Newman, M. E. (2003). The structure and function of complex networks. SIAM Review, 45(2), 167–256.
Seeman, L., & Singer, Y. (2013). Adaptive seeding in social networks. In 2013 IEEE 54th Annual
Symposium on Foundations of Computer Science (FOCS) (pp. 459–468).
Ugander, J., Karrer, B., Backstrom, L., & Marlow, C. (2011). The anatomy of the facebook social
graph. arXiv preprint arXiv:1111.4503.

Chapter 10

Deduplication Practices for Multimedia
Data in the Cloud
Fatema Rashid and Ali Miri

10.1 Context and Motivation
With the advent of cloud computing and its digital storage services, the growth of
digital content has become irrepressible at both enterprise and individual levels.
According to the EMC Digital Universe Study (Gantz and Reinsel 2010), the global
data supply had already reached 2.8 trillion gigabytes (GB) in 2012, with data
volumes projected to reach about 5247 GB per person by 2020. Due to this explosive
growth of digital data, there is a clear demand from cloud service providers (CSPs)
for more cost-effective use of their storage and network bandwidth for data storage.
A recent survey (Gantz and Reinsel 2010) revealed that only 25% of the data in
data warehouses are unique. According to a survey (Anderson 2017), currently only
25 GB of the total data for each individual user are unique and the remainder are
similar shared data among various users. At the enterprise level (Adshead 2017), it
was reported that businesses hold an average of three to five copies of files, with
15–25% of these organizations having more than ten. This information points to
massive storage savings that can be gained, if CSPs store only a single copy of
duplicate data.
A major issue hindering the acceptance of cloud storage services by users has
been data privacy, as users no longer have physical control of their data
once it resides in the cloud. Although some CSPs, such as SpiderOak, Tresorit,
and Mozy, allow users to encrypt their data using their own keys before uploading
it to the cloud, many other CSPs, such as the popular Dropbox, require access to the
plaintext data in order to enable deduplication services. Although many of these
service providers, who do not allow encryption of data at the client site using users’
F. Rashid • A. Miri ()
Department of Computer Science, Ryerson University, Toronto, ON, Canada
e-mail: fatema.rashid; Ali.Miri@ryerson.ca
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_10




keys, do encrypt the data themselves during transmission, the data would still be
accessible at the server end, exposing it to both internal and external attacks
(Kovach 2017).
In the next section, we will discuss some of the technical design issues pertaining to
data deduplication.

10.2 Data Deduplication Technical Design Issues
In this section, we will briefly discuss common types of deduplications used in
practice, based on methods used or where and how they take place.

10.2.1 Types of Data Deduplication
• Hash-Based: Hash-based data deduplication methods use a hashing algorithm to
identify duplicate chunks of data after a data chunking process.
• Content Aware: Content aware data deduplication methods rely on the structure
or common patterns of data used by applications. Content aware technologies are
also called byte-level deduplication or delta-differencing deduplication. A key
element of the content-aware approach is that it uses a higher level of abstraction
when analyzing the data (Keerthyrajan 2017). In this approach, the deduplication
server sees the actual objects (files, database objects, etc.) and divides data into
larger segments of typical sizes from 8 to 100 MB. Since it sees the content of the
data, it finds segments that are similar and stores only the bytes that have changed
between objects. That is why it is called a byte-level comparison.
• HyperFactor: HyperFactor is a patent pending data deduplication technology
that is included in the IBM System Storage ProtecTIER Enterprise Edition V2.1
software (IBM Corporation 2017). According to Osuna et al. (2016), a new data
stream is sent to the ProtecTIER server, where it is first received and analyzed by
HyperFactor. For each data element in the new data stream, HyperFactor searches
the Memory Resident Index in ProtecTIER to locate the data in the repository that
is most similar to the data element. The similar data from the repository is read.
A binary differential between the new data element and the data from the
repository is performed, resulting in the delta difference which is stored with
corresponding pointers.
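A minimal sketch of the first, hash-based approach above. The function names are ours, and fixed-size chunking is used for simplicity, whereas production systems often use variable-size, content-defined chunking:

```python
import hashlib

def dedup_store(data, chunk_size, store):
    """Hash-based deduplication: split the data into fixed-size chunks,
    fingerprint each chunk with SHA-256, and store only chunks whose
    hash is not already in the shared chunk store. Returns the file's
    recipe: the ordered list of chunk hashes needed to rebuild it."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # new chunk: keep a single copy
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def restore(recipe, store):
    """Rebuild the original data from its recipe and the chunk store."""
    return b"".join(store[digest] for digest in recipe)
```

A file containing repeated chunks (or chunks shared with other files in the same store) costs only one stored copy per unique chunk plus the per-file recipe.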

10.2.2 Deduplication Level
Another aspect to be considered when dealing with any deduplication system is the
level of deduplication. Data deduplication can take place at two levels, file level or



block level. In block level deduplication, a file is first broken down into blocks (often
called chunks) and then the blocks are compared with other blocks to find duplicates.
In some systems, only complete files are compared, which is called Single Instance
Storage (SIS). This is not as efficient as block-level deduplication, as an entire file has
to be stored again as a result of any minor modification to it.

10.2.3 Inline vs Post-Processing Data Deduplication
Inline data deduplication refers to the situation when deduplication is performed as
the data is written to the storage system. With inline deduplication, the entire hash
catalog is usually placed into system memory to facilitate fast object comparisons.
The advantage of inline deduplication is that it does not require the duplicate data to
actually be written to the disk. If the priority is high-speed data backups with optimal
space conservation, inline deduplication is probably the best option. Post-processing
deduplication refers to the situation when deduplication is performed after the
data is written to the storage system. With post-processing, deduplication can be
performed at a more leisurely pace, and it typically does not require heavy utilization
of system resources. The disadvantage of post processing is that all duplicate data
must first be written to the storage system, requiring additional, although temporary,
physical space on the system.

10.2.4 Client- vs Server-Side Deduplication
Client side deduplication refers to the comparison of data objects at the source
before they are sent to a destination (usually a data backup destination). A benefit
of client side deduplication is that less data is required to be transmitted and
stored at the destination point. A disadvantage is that the deduplication catalog
and indexing components are dispersed over the network so that deduplication
potentially becomes more difficult to administer. If the main objective is to reduce
the amount of network traffic when copying files, client deduplication is the
only feasible option. On the other hand, server-side deduplication refers to the
comparison of data objects after they arrive at the server/destination point. A benefit
of server deduplication is that all the deduplication management components are
centralized. A disadvantage is that the whole data object must be transmitted over
the network before deduplication occurs. If the goal is to simplify the management
of the deduplication process, server-side deduplication is preferred. Among many
popular vendors, such as Dropbox, SpiderOak, Microsoft SkyDrive, Amazon S3,
Apple iCloud, and Google Drive, only SpiderOak performs server-side deduplication
(Xu et al. 2013).



10.2.5 Single-User vs Cross-User Deduplication
Single User: Deduplication can be done by a single user, where the redundancy
among his/her data is identified and removed, but single-user data deduplication
is not very practical and does not yield maximum space saving. To maximize the
benefits of data deduplication, cross-user deduplication is used in practice. This
technique consists of identifying the redundant data among different users and
then removing the redundancy by saving a single copy of the data. According
to (Anderson 2017), 60% of data can be deduplicated on average for individual
users by using cross-user deduplication techniques. In order to save bandwidth,
CSPs and users of their services are inclined to apply client-side deduplication
where similar data is identified at the client side before being transmitted to cloud
storage. However, the potential benefit of data deduplication in terms of storage
space and storage cost can also have some associated drawbacks. In February 2012,
DropBox disabled client-side, cross-user deduplication due to security concerns
(Xu et al. 2013).

10.3 Chapter Highlights
The highlights of this chapter can be summarized as follows.
• We will introduce a secure image deduplication scheme in cloud storage environments through image compression, which consists of a partial encryption and a
unique image hashing scheme embedded into the SPIHT compression algorithm.
This work initially appeared in part in Rashid et al. (2014).
• We will introduce a secure video deduplication scheme in cloud environments
using H.264 compression, which consists of embedding a partially convergent
encryption along with a unique signature generation scheme into a H.264 video
compression scheme. This work initially appeared in part in Rashid et al. (2015).

10.4 Secure Image Deduplication Through Image Compression
In this chapter, a novel compression scheme that achieves secure deduplication of
images in cloud storage is proposed (Rashid et al. 2014). Its design consists
of embedding a partial encryption along with a unique image hashing into the
SPIHT compression algorithm. The partial encryption scheme is meant to ensure
the security of the proposed scheme against a semi-honest CSP whereas the unique
image hashing scheme is meant to enable classification of the identical compressed
and encrypted images so that deduplication can be performed on them, resulting in
a secure deduplication strategy with no extra computational overhead incurred for
image encryption, hashing and deduplication.



10.5 Background
The scheme proposed in this chapter is comprised of three components, namely,
image compression, partial image encryption, and image hashing. For our scheme,
we have used the SPIHT algorithm (Said and Pearlman 2017) for image compression, which is a variant of the EZW algorithm. The EZW algorithm (Shapiro 1993)
has the property that the bits in the bit stream are generated in order of importance,
yielding a fully embedded code. For partial encryption of the image, we need to
utilize data produced during the compression process. Cheng and Li (2000) identify
certain of these data as significant information that is vital for decompressing
the image. Hence, encrypting only this partial information can improve the security of
the compressed images. We, therefore, use some of the basic concepts presented in
Cheng and Li (2000). In Yang and Chen (2005), Yang and Chen introduced an image
hash scheme which is based on the knowledge of the locations of the significant
wavelet coefficients of the SPIHT algorithm. In our work, we have adapted this
scheme, by utilizing the wavelet coefficients of the SPIHT compression algorithm
to generate the image hash.

10.6 Proposed Image Deduplication Scheme
The design of the proposed approach for cross-user client side secure deduplication
of images in the cloud involves three components, namely, an image compression
scheme, a partial encryption scheme, and a hashing scheme as shown in Fig. 10.1.
Typically, on the user's side, the user will process the image by applying image compression, then partial encryption, and will then calculate the hash signature of the image. Next, he/she will send only the hash signature to the CSP. On the
CSP side, the CSP will compare the received image hash against all the signatures
already present in the cloud storage. If a match is not found, the CSP will instruct
the user to upload the image. Otherwise, the CSP will update the image metadata
and then will deduplicate the image by saving only a single, unique copy.
Fig. 10.1 Image deduplication process


F. Rashid and A. Miri

10.6.1 Image Compression
We propose to utilize image compression in order to achieve image deduplication. The reason for doing so is that images are compressed anyway for efficient storage and transmission purposes. In fact, by applying the compression first and encrypting the compressed information next, one can save computation time and resources. As discussed above, the SPIHT algorithm (Said and Pearlman 2017) is used for image compression since it produces an embedded bit stream from which the best images can be reconstructed (Singh and Singh 2011). In addition, it uses an embedded coding method, which makes it a suitable scheme for progressive optimal transmission that produces fine compression ratios. It also uses significant information sets to determine tree structures, and its execution relies upon the structure of the zero trees of the EZW algorithm. The SPIHT compression algorithm is independent of any keys or other form of user input. Hence, it is suitable for deduplication of identical images compressed by different users, as these identical images should appear identical in the compressed or encrypted form to the CSPs. Furthermore, this algorithm involves the use of chaos theory or random permutation from the user's side, adding to the security of the system.
The SPIHT algorithm is based on the fact that there is a correlation between the coefficients at different levels of the hierarchy pyramid (bands) of the underlying structure. It maintains this information in the zero trees by grouping insignificant coefficients together. Typically, each 2 × 2 block of coefficients at the root level of this tree structure corresponds to further trees of coefficients. Basically, the SPIHT algorithm can be divided into three phases, namely the initialization, sorting, and refinement phases.
• Initialization phase: Let Coef(i,j) denote the coefficient at node (i,j). For Coef(i,j), three coefficient sets are further defined: O(i,j), the set of coordinates of the children of node (i,j) when this node has children; D(i,j), the set of coordinates of the descendants of node (i,j); and H, the set of coordinates of all coefficients at the root level. Then L(i,j) = D(i,j) − O(i,j) is the set of descendants of the tree node, except for its four direct offspring. Starting at threshold T, any set of coefficients S is said to be significant (with respect to that threshold) if there is a coefficient in S that has a magnitude at least equal to T. Three lists are maintained by the algorithm, namely (1) LIS, the list of insignificant sets, which contains the coordinates of the roots of insignificant subtrees; this list is further divided into two types of entries, D(i,j) and L(i,j); (2) LIP, the list of insignificant pixels, which contains the coordinates of those coefficients that are insignificant with respect to the current threshold T; these are insignificant coefficients that are not part of any of the sets of coefficients in LIS; and (3) LSP, the list of significant pixels, which contains the coordinates of those coefficients that are significant with respect to the current threshold T.
At the beginning of the algorithm, the threshold T is selected such that T ≤ max |Coef(i,j)| < 2T. The initial state of the lists is set in a specific manner: LIP maintains H, the set of coordinates of the tree roots; LIS maintains the sets D(i,j), where (i,j) are coordinates with descendants in H; and LSP is empty.

• Sorting phase: The algorithm identifies the significant coefficients by dividing the D(i,j) sets into two subsets, namely the set L(i,j) and the set O(i,j) of individual coefficients, and by putting each significant coefficient into the LSP list with respect to the current threshold level.
• Refinement phase: This step consists of the refinement of the significant coefficients in the list LSP from the previous pass, in a binary-search fashion. After each pass, the threshold is decreased by a factor of 2 and the whole process starts again. The algorithm ends when the desired bit rate is achieved.
A description of the SPIHT algorithm is available in Said and Pearlman (1996).
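The three phases can be illustrated with a minimal Python sketch. This is not the actual SPIHT coder — the set partitioning of the D(i,j) and L(i,j) sets and the bit-plane output are omitted — it only shows the threshold selection rule and the per-pass migration of coefficients from LIP to LSP, using an illustrative coefficient map.

```python
def initial_threshold(coeffs):
    # Choose T as the power of two satisfying T <= max|Coef(i,j)| < 2T
    m = max(abs(c) for c in coeffs.values())
    T = 1
    while 2 * T <= m:
        T *= 2
    return T

def significant(coeff_set, coeffs, T):
    # A set is significant w.r.t. T if some member has magnitude >= T
    return any(abs(coeffs[p]) >= T for p in coeff_set)

def sorting_passes(coeffs, passes=3):
    # Simplified sorting/refinement loop: each pass moves the newly
    # significant coefficients from LIP to LSP, then halves the threshold.
    T = initial_threshold(coeffs)
    LIP = set(coeffs)      # list of insignificant pixels
    LSP = []               # list of significant pixels, in discovery order
    maps = []              # (threshold, newly significant coords) per pass
    for _ in range(passes):
        newly = sorted(p for p in LIP if abs(coeffs[p]) >= T)
        LIP -= set(newly)
        LSP.extend(newly)
        maps.append((T, newly))
        T //= 2
    return LSP, maps
```

For a toy map such as `{(0,0): 29, (0,1): 15, (1,0): 7, (1,1): 3}` the initial threshold is 16 (since 16 ≤ 29 < 32), and the three passes discover (0,0), then (0,1), then (1,0), in order of magnitude, which is exactly the order-of-importance property the chapter relies on.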

10.6.2 Partial Encryption of the Compressed Image
We propose to partially encrypt the compressed image (obtained from the image compression step) before uploading it to cloud storage. The reason for doing so is to ensure the security of the data from the CSP or any malicious user. Encrypting only a part of the compressed image will reduce the amount of data to be encrypted, thereby reducing the computational time and resources (Cheng and Li 2000). In order to satisfy the basic requirement of cross-user deduplication, i.e. that identical images compressed by different users should appear identical in the compressed/encrypted form, we propose to use convergent encryption (Thwel and Thein 2009) to encrypt the coefficients generated by the SPIHT algorithm, since such a scheme will allow the CSP to classify the identical compressed and encrypted images.
Typically, the output of the SPIHT compression algorithm is a stream of encoded
wavelet coefficients along with the zero trees for the structure of the coefficients. It contains the sign bits, the refinement bits, the significance of the pixels, and the significance of the sets. Thus, to correctly decompress the image, the decompression algorithm must infer the significant bits accurately. In this regard, it was suggested in Cheng and Li (2000) that only the significant information be encrypted. This information is determined by the significance information of the pixels in the highest two levels of the pyramid as well as by the initial threshold of the SPIHT algorithm. As an example, if the root is of dimension 8 × 8, then the (i,j) coordinates are encrypted if and only if 0 ≤ i, j < 16, and there is no need to encrypt the significant information belonging to subsequent levels, i.e. the pixels in the third, fourth, or sixth level of the pyramid. This is because the coefficients in those levels will be reconstructed with the help of the information obtained from the coefficients belonging to the highest two levels of the pyramid.
Using the information obtained from the coefficients at the highest two levels of the pyramid, the states of the above-mentioned lists (i.e. LIS, LIP, LSP) will also be initialized. From these levels on, the states of these lists will constantly change, depending upon the subsequent levels. But if the initial information is not correctly derived, these lists will be prone to errors and the image will not be decompressed accurately. Hence, by having the user encrypt only the significant information of the
highest two levels of the pyramid of the compressed image generated by the SPIHT
algorithm before uploading it to the cloud storage, the image will be prevented from
being decompressed by the CSP. The reason is that without the knowledge of the
significant information from the highest two levels, the CSP will not be able to know
the initial states of the above-mentioned lists, hence will not be able to decompress
the image.
The convergent encryption algorithm (Thwel and Thein 2009) uses the content of the image to generate the key utilized by the users for encryption purposes; thus it is expected that two identical images will generate identical keys, and hence identical ciphertexts or cipher images. More precisely, the user constructs a SHA-3 hash of the significant information Sig_Info of the coefficients belonging to the highest two levels of the pyramid and uses it as the key for encryption, i.e.

Image_Key (IK) = Hash(Sig_Info)    (10.1)

Using Eq. (10.1), identical encrypted significant information of the first two levels of the pyramid (hence identical ciphertext) will be produced from two identical compressed images, irrespective of the fact that these images have been encrypted by different users. Using this key, the users will perform symmetric encryption on the significant information as shown in Eq. (10.2):

Sig_Info′ = E(IK, Sig_Info)    (10.2)

where E denotes the symmetric encryption function. This way, the CSP will be able to determine a match between identical images from different users without having to process the images in plaintext form.
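The key derivation and encryption steps of Eqs. (10.1) and (10.2) can be sketched as follows. This is an illustrative toy, not the scheme's actual cipher: the SHA-3 counter-mode keystream stands in for whatever symmetric cipher a deployment would use, and `sig_info` stands in for the significant information of the top two pyramid levels.

```python
import hashlib

def convergent_key(sig_info: bytes) -> bytes:
    # Eq. (10.1): Image_Key (IK) = Hash(Sig_Info); identical content
    # yields the identical key, regardless of which user derives it.
    return hashlib.sha3_256(sig_info).digest()

def keystream(key: bytes, n: int) -> bytes:
    # Illustrative SHA-3 counter-mode keystream; a real deployment
    # would use a standard authenticated symmetric cipher instead.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha3_256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    # Eq. (10.2): symmetric encryption of the significant information;
    # XOR with the keystream, so applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Two users holding the same image derive the same key and ciphertext,
# which is what lets the CSP match identical images without plaintext.
sig_info = b"significant info of the top two pyramid levels"
ct_user1 = encrypt(convergent_key(sig_info), sig_info)
ct_user2 = encrypt(convergent_key(sig_info), sig_info)
assert ct_user1 == ct_user2
```

The deterministic key is precisely what makes deduplication possible, and also why the scheme must assume that users do not collude with the CSP, as discussed in the security analysis below.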

10.6.3 Image Hashing from the Compressed Image
In the previous steps, the image has been compressed (using the SPIHT algorithm) and only the significant information of the highest two levels of the pyramid has been encrypted. In order for the CSP to perform client-side deduplication on the image, there has to be a unique identity (referred to as the image hash) for each image. This step consists in generating such an image hash. The image hash is generated by the user and sent to the CSP for performing the client-side deduplication. In the case of a redundant image, the CSP will only set the pointers of the image ownership to the new user and will not request the image to be sent again. Using this hash, the CSP will be able to identify and classify the identical images without needing to possess the original image in plaintext format. Indeed, the CSP will need to scan only through the already stored hashes in the cloud storage to find matches, rather than scanning the entire images. It should be noted that by doing so, the CSP will be able to remove the redundant images and store only a single unique copy of each image. The CSP only has knowledge of the compressed image and its partially encrypted version, and the original image cannot be reconstructed from these elements in any case. It is worth mentioning that the image hash generation will not involve any additional computational or storage overhead, since the coefficients generated by the SPIHT algorithm are already known. In addition, the image deduplication performed in this way will be secure against the CSP since all the significant information is encrypted.
Typically, the sorting pass of the SPIHT algorithm identifies the significance map, i.e. the locations of the significant coefficients with respect to a threshold. The binary hash sequence (where 1 denotes a significant coefficient and 0 denotes a non-significant coefficient) is then designed based on this significance map and on the fact that the first four highest pyramid levels under the first three thresholds are more than enough to be used as the signature of the image (Yang and Chen 2005). In addition, the use of convergent encryption is meant to ensure that the same ciphertext is obtained for the same coefficients in plaintext format.
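A sketch of the signature construction and the CSP-side comparison might look as follows; the coefficient map, thresholds, and store layout are illustrative stand-ins for the actual significance maps produced by the SPIHT coder.

```python
def significance_signature(coeffs, thresholds, coords):
    # Binary hash sequence: 1 marks a significant coefficient
    # (|c| >= T), 0 a non-significant one, per threshold level.
    return tuple(1 if abs(coeffs[p]) >= T else 0
                 for T in thresholds for p in coords)

def csp_check(store, user_id, signature):
    # Client-side deduplication: the CSP compares signatures only,
    # never the plaintext images.
    if signature in store:
        store[signature]["owners"].add(user_id)  # redundant image: metadata update
        return False                             # no upload required
    store[signature] = {"owners": {user_id}}
    return True                                  # instruct the user to upload
```

Note that the first user to present a given signature is asked to upload, while later users merely gain ownership pointers, which is the client-side deduplication flow of Fig. 10.1.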

10.6.4 Experimental Results
Experiments were conducted keeping in mind the following three requirements: (1) using the SPIHT compression algorithm, the deduplication of the image should be performed accurately, i.e. it should not be faulty, the proposed scheme should not identify two different images as identical, and even minor alterations should be detected; a failure to meet this requirement would enable the CSP to eliminate one of two images which appear to be identical but are actually not, keeping only a single copy; (2) the proposed scheme should be secure against the semi-honest CSP, since the CSP is not given access to the compressed image and thus cannot decompress it; the CSP can identify the identical images from different users only through the significance maps of the images generated by the SPIHT algorithm; and (3) the proposed scheme should be efficient in terms of computation, complexity, and storage overheads.

Experimental Settings

First, some altered images are generated from one single image. For this purpose, six classic images are considered: Lena, bridge, mandrill, goldhill, pepper, and clock, all represented in grayscale format. Ten different kinds of alterations are introduced for each single classic image and 10 altered versions of that particular classic image are generated, yielding a total of 126 images (i.e. 120 altered images added to the six original images). The reason for using a large number of altered images is to demonstrate the effects of compression on different kinds of alterations. The first, second, and third altered images are obtained by changing the brightness, contrast, and gamma levels of the original image, respectively. The fourth altered image is obtained by compressing the original image first (using a JPEG compression algorithm), then decompressing it. The fifth altered image is obtained by applying rotations of the original image at the standard angular positions of 90°, 180°, and 270°, respectively. The sixth altered image is obtained by applying a 2D median
filtration on the original image. The seventh altered image is obtained by introducing Poisson noise into the original image; Poisson noise is a type of noise generated from the content of the data itself rather than by adding artificial noise. The eighth altered image is constructed by rescaling the original image, i.e. by converting the uint8 pixel values into the double data type. Rescaling can also be done to convert the image pixels into 16-bit integers, single type, or double type. The ninth altered image is obtained by applying a rectangular shear on both sides of the original image; and the tenth altered image is obtained by removing noise from the original image using the Wiener filter.
Next, the Computer Vision Toolbox and Image Processing Toolbox of MATLAB
are used for applying the above-mentioned alterations to the original images. For
our experiments, we have also developed a MATLAB implementation of the SPIHT
compression algorithm described in Said and Pearlman (2017).

Deduplication Analysis

As stated earlier, one of the main requirements of the proposed scheme is that it should be robust enough to identify minor changes between two images even when they are in compressed form, i.e. it is desirable that the signature of the image generated from the SPIHT compression algorithm be different even when the original image has been only slightly altered (hence producing a different image). The results showing the percentage of dissimilarity between the signature of the original image and that of the altered image (obtained as described above) are captured in Tables 10.1 and 10.2, for image data coded at rates of 0.50 bpp and 0.80 bpp respectively, for two dimensions, namely 256 × 256 and 512 × 512. In Table 10.1, it is observed that the difference between the signatures is smaller for the commonly used alterations (namely brightness, gamma correction, contrast, and rotation changes) compared to the uncommon and severe alterations. The common alterations are often performed by the users on the images, but they do not result in an entirely different appearance of the image. Although the images do not appear to change to a great extent to a human eye, the uncommon alterations (i.e. median filtration, noise insertion and removal, shearing, and rescaling) cause significant changes in the signature of the image. As shown in Table 10.1, the percentage difference between the original and altered image increases significantly for the severe alterations. As far as the dimensions are concerned, the percentage change between the signatures tends to decrease slightly as the image dimension increases. The same trend is maintained when the image data are coded at a rate of 0.80 bpp, as shown in Table 10.2. The remaining 63 images have been tested at 0.80 bpp to ensure that the images exhibit the same patterns when generating the signatures at different rates as well. As shown in Table 10.2, the percentage of change for the commonly used alterations is less than that observed for the severe alterations, including at higher compression rates.



Table 10.1 % difference between significance maps of the original and altered images at 0.50 bpp (columns: median filtration and noise removal, each at 256 × 256 and 512 × 512)

Table 10.2 % difference between significance maps of the original and altered images at 0.80 bpp (columns: median filtration and noise removal, each at 256 × 256 and 512 × 512)

The alterations introduced into the original images are also analyzed to determine how much they affect the images' signatures. The results are captured in Fig. 10.2, where it is observed that the shearing process has drastically changed the mathematical content of the image, producing compressed image data significantly different from the original. The same trend prevails, in decreasing order of impact, when the other alteration techniques are utilized, namely noise insertion, rescaling, noise removal, and median filtration. It is observed that when alterations such as rotation, gamma, and contrast are used, their impact on the images' signatures is not that significant.

Fig. 10.2 Percentage of dissimilarity caused by the different alterations

Fig. 10.3 Percentage of dissimilarity caused by the different rotations

In the case of JPEG compression, it is observed that the significance maps of the original images and those of the compressed images are similar. This is attributed to the fact that the significance maps of the original image are produced by a compression algorithm as well. Thus, in all the trials with different dimensions and compression rates, the percentage of change in the case of JPEG compression is 0.
Angular rotations are among the alterations most commonly performed by users on images before uploading them to cloud storage (Yang and Chen 2005). In this work, angular rotations are performed and their impact on the images' signatures is captured in Fig. 10.3. There, it can be observed that as the image is rotated away from its initial reference point, the images' signatures start to change progressively and then drastically, resulting in a percentage of dissimilarity between the signature of the original image and that of the altered image of about 4.6% (resp. 5.4% and 7.1%) when the angular position of 90° (resp. 180° and 270°) is applied. This confirms a normal behavior of the signatures, since the image itself changes in appearance as it is rotated.

Performance Analysis

The deduplication process of our proposed scheme also deserves some analysis to determine its performance in terms of the computational overheads generated on the users' side. Since the compressed images' data have been used to produce the unique hashes of the images for deduplication purposes, no algorithm other than the SPIHT compression algorithm is involved in the proposed scheme for generating the images' signatures. The SPIHT compression algorithm is invoked by the users to generate their compressed images before uploading them to the cloud storage. In doing so, the users need to calculate the images' signatures from the lists LIS, LIP, and LSP that have been generated during the compression process under the first three threshold levels. The performance of the deduplication process can be judged by comparing the time taken to generate the signatures of the images against the time taken for the generation of the above-mentioned lists.
Let T1 be the time (measured in seconds) taken by the user to calculate the
images’ signatures from the lists; let T2 (also measured in seconds) be the time taken
by the SPIHT algorithm to generate the three lists under all the threshold levels, not
including the time taken to perform the binary encoding and decoding steps.
In Tables 10.3 and 10.4, T1 is the time consumed in calculating the signatures, T2 is the time taken by the compression algorithm to calculate the three lists under all the threshold levels, and the third column shows T1 as a percentage of T2. T2 is not the total time taken by the compression algorithm, since that also includes

Table 10.3 Performance in terms of time for 0.80 bpp (rows: median filtration, noise insertion, noise removal; for the Pepper image at 256 × 256 and 512 × 512: T1 (s), T2 (s), and T1 as % of T2)



Table 10.4 Performance in terms of time for 0.50 bpp (rows: median filtration, noise insertion, noise removal; for the Goldhill image at 256 × 256 and 512 × 512: T1 (s), T2 (s), and T1 as % of T2)

some binary encoding and decoding as well. Tables 10.3 and 10.4 show that the time taken to calculate the signatures is significantly lower than that taken by the SPIHT algorithm to generate the data for the signatures. This was expected, since the work carried out by the user on his/her own machine, i.e. performing the compression and extracting the signatures already generated by the SPIHT algorithm, is expected to take a relatively small amount of time (or CPU cycles). Thereby, some storage space is saved by performing the image deduplication. At a compression rate of 0.50 bpp, it can also be noticed that for all the images of 256 × 256 dimensions (resp. 512 × 512 dimensions), T2 is more than 10 times (resp. 90 times) the value of T1 on average. Therefore, as the dimension of the image increases, T1 becomes relatively smaller, further improving the performance of the deduplication process. The same observations prevail when the image data are compressed at a rate of 0.80 bpp. In this case, T2 is more than 10 times (resp. 56 times) the value of T1 on average for the images of 256 × 256 dimensions (resp. 512 × 512 dimensions). As far as the compression efficiency is concerned, the SPIHT compression algorithm is not affected by the encryption and image hashing steps, since these steps are performed only after the image compression has been completed. In this way, the image deduplication is achieved with the help of the image compression.

10.6.5 Security Analysis
The setup of the proposed scheme allows the user to upload only the image data generated during the image compression step (it should be noted that a part of it is encrypted by the user) and the unique hash of the image calculated by the user. The CSP should not be able to decompress the image from the received compressed image data.



The security of the proposed scheme depends upon the partial encryption of the wavelet coefficients. The significant information, which is in encrypted form, is required in order to correctly decompress the image. The decoder of the SPIHT algorithm keeps track of the order of execution of the encoder; the lists and the image mask are both required from the most recent iteration of the encoder. From there on, the SPIHT algorithm decodes/decompresses the image in exactly the reverse order of what the encoder has done, until the process terminates.
In Cheng and Li (2000), it has been proved that the size of the important part is at least 160 bits if at least two sorting passes are performed by the compression algorithm. Therefore, for an exhaustive search, an attacker would need at least 2^160 tries. For example, considering the Lena image of 512 × 512 dimension, only the significant coefficients in the first two largest sub-bands are required to be encrypted. These are the coefficients in the 16 × 16 sub-band, with 8 × 8 being the root level, according to Cheng and Li (2000). We have calculated the size of the indicated data to be encrypted. The size of the two lists to be encrypted comes to 768 bytes in total. In addition, the mask of the image, indicating whether the pixels in the 16 × 16 sub-bands are significant or not, also needs to be encrypted. The size of this mask is 575 bytes, compared to that of the complete mask for 512 × 512 pixels, which is 26,000 bytes. Therefore, the partial encryption is indeed very efficient, since the encrypted part is significantly smaller than the entire image data.
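The sizes quoted above make the saving easy to check; the arithmetic below merely restates the chapter's numbers (a total compressed bit-stream size is not given, so only the mask ratio is computed).

```python
lists_bytes = 768          # the two lists covering the 16 x 16 sub-band
mask_bytes = 575           # significance mask for the 16 x 16 sub-band
full_mask_bytes = 26000    # significance mask for the full 512 x 512 image

encrypted_total = lists_bytes + mask_bytes    # 1343 bytes encrypted in total
mask_fraction = mask_bytes / full_mask_bytes  # roughly 2.2% of the full mask
```

Encrypting on the order of a kilobyte per image, rather than the entire compressed stream, is what keeps the partial encryption overhead negligible.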
As far as convergent encryption is concerned, each user has his/her own key for the decryption of the image, which is kept secret from the CSP. Due to the deduplication requirements, the users cannot be allowed to choose different keys (using a public- or private-key model) for encryption purposes. The keys generated for two different users for the same image through convergent encryption are exactly the same. Hence, we assume that the users of the same cloud storage will not collude with the CSP; otherwise, the security of the proposed scheme may be compromised.

10.7 Secure Video Deduplication Scheme in Cloud Storage Environment Using H.264 Compression
In this section, a secure scheme is proposed that achieves cross-user client-side video deduplication in cloud storage environments. Since it is claimed in Venter and Stein (2012) that a major part of today's digital data is composed of videos and pictures, we focus on video deduplication in this section. The design consists of embedding a partial convergent encryption along with a unique signature generation scheme into an H.264 video compression scheme. The partial convergent encryption scheme is meant to ensure that the proposed scheme is secure against a semi-honest CSP; the unique signature generation scheme is meant to enable a classification of the encrypted compressed video data in such a way that deduplication can be efficiently performed on them. Experimental results are provided, showing the effectiveness and security of our proposed schemes.



10.8 Background
H.264 is an object-based algorithm that makes use of local processing power to recreate sounds and images (Richardson 2011). The H.264 algorithm (Al Muhit 2017) is a block-based motion compensation codec most widely used for HD videos. Its operation is based on frames, which can be further grouped into GOPs (groups of pictures), and can thus yield high deduplication ratios for our scheme. A selective encryption of H.264 compressed video is proposed in Zhao and Zhuo (2012); it works at the GOP level and on various types of videos. These characteristics make this selective encryption scheme very suitable for our video deduplication scheme. The signature generation is inspired by the method proposed in Saadi et al. (2009), but this method is modified to fit our requirements. In Saadi et al. (2009), the proposed scheme is used for watermarking, which is not useful for deduplication; therefore, it is used only for signature extraction purposes in our scheme.

10.9 Proposed Video Deduplication Scheme
The design of the proposed approach for secure deduplication of videos in cloud storage involves three components: an H.264 video compression scheme, signature generation from the compressed videos, and selective encryption of the compressed videos, as shown in Fig. 10.4.
First, the user compresses the original video using the H.264 compression algorithm. Second, he/she calculates the signatures based on the GOPs from the output bit stream. Third, he/she encrypts the important parts of the DCT coefficients and motion vectors according to the type of the video. After these processing steps on the original video, it is uploaded to the cloud storage. The CSP will then check for identical GOPs with the help of the signatures and the encrypted data. If identical GOPs are detected, the CSP will delete the new data and update the metadata for the particular data already in the cloud storage. In this way, the CSP will save considerable space by performing cross-user video deduplication in the cloud storage. The detailed description of the proposed scheme follows.
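The CSP-side GOP matching just described can be sketched as follows. The per-GOP signature here is simply a hash of the compressed GOP bytes, standing in for the chapter's signature generation scheme, and the store layout is hypothetical.

```python
import hashlib

def gop_signature(gop: bytes) -> str:
    # Stand-in signature: a hash of the compressed (and partially
    # encrypted) GOP data, used by the CSP for matching.
    return hashlib.sha3_256(gop).hexdigest()

def upload_video(store, user_id, gops):
    # Cross-user, GOP-level deduplication: identical GOPs are stored
    # once; later uploaders only gain an ownership entry.
    uploaded = 0
    for gop in gops:
        sig = gop_signature(gop)
        if sig in store:
            store[sig]["owners"].add(user_id)   # duplicate GOP: metadata only
        else:
            store[sig] = {"data": gop, "owners": {user_id}}
            uploaded += 1
    return uploaded                             # GOPs actually transferred
```

Deduplicating at the GOP level rather than per whole file is what lets two different videos that share scenes still save storage.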

Fig. 10.4 Proposed secure video deduplication scheme



10.9.1 H.264 Video Compression
The first part of our video deduplication scheme is to compress the original video sequence. The compression algorithm should be strong and efficient enough to produce a highly compressed, retrievable, and smaller version of the video. We use the H.264 compression algorithm (Richardson 2011). This algorithm includes a prediction step, a transform step, and an entropy coding step.
• The prediction step: The coded video sequence generated by the H.264 algorithm is made of a sequence of coded pictures. Each picture is divided into fixed-size macro blocks, which are the basic building blocks in the H.264 algorithm. Each macro block is a rectangular picture area made of 16 × 16 samples for the luma component and the associated 8 × 8 sample regions for the two chroma components. The luma and chroma samples of a macro block are used for prediction purposes. They are either spatially or temporally predicted by the algorithm, and the resulting residual is divided into blocks. These blocks are then transformed, quantized, and encoded in the final stages.
The macro blocks of the picture are further grouped into slices, which divide the picture into regions that can be decoded independently. Each slice is a sequence of macro blocks and is processed following a raster scan, i.e. from top-left to bottom-right. Slices can be used for error resilience, since the partitioning of the picture allows a spatial concealment within the picture, and the start of each slice provides a resynchronization point at which the decoding process can be reinitialized (Richardson 2011). Slices can also be used for parallel processing, since each slice can be encoded and decoded independently of the other slices of the picture. The robustness of the algorithm can be further strengthened by separating the more important data (such as macro block types and motion vector values) from the less important ones (such as inter residual transform coefficient values).
Two fundamental frame types are used, namely: (1) the I frame, in which all
macro blocks are coded using intra prediction; it is the most important type of
frame, used by all versions of the algorithm; and (2) the P frame, in which
macro blocks can also be coded using inter prediction, with at most one
motion-compensated prediction (MCP) signal per block.
It should be noted that intra-picture prediction can be performed in two
intra coding modes: the intra 4 × 4 and the intra 16 × 16 modes. In the
intra 4 × 4 mode, each 4 × 4 luma block is predicted from spatially neighboring
samples. In the intra 16 × 16 mode, the whole 16 × 16 luma component of the
macro block is predicted with four prediction modes, namely, vertical, horizontal,
DC, and plane.
The inter-picture prediction in P frames is another prediction method that can
be utilized by the H.264 algorithm. It involves the use of P macro blocks with
luma block sizes of 16 × 16, 16 × 8, 8 × 16, and 8 × 8 samples. For each 8 × 8
partition, this method can be used to decide whether or not that partition
should be further partitioned into smaller sizes of 8 × 4, 4 × 8, or 4 × 4 luma
samples and corresponding chroma samples.


F. Rashid and A. Miri

• The transform step: This includes the transform, scaling, and quantization
substeps. In the H.264 algorithm, an integer transform such as the 4 × 4 DCT is
applied to 4 × 4 blocks instead of the larger 8 × 8 blocks used in previous standard
methods. The inverse transform of H.264 uses simple exact integer operations so
that mismatches are avoided and the decoding complexity is minimized. For the
luma component in the intra 16 × 16 mode and the chroma components in all intra
macro blocks, the DC coefficients of the 4 × 4 transform blocks undergo a second
transform, so that the lowest-frequency transform basis functions cover the entire
macro block (Richardson 2011). A quantization parameter (QP) is then used for
determining the quantization of the transform coefficients. The quantization step
size is controlled logarithmically by this QP in such a way that the decoding
complexity is reduced and the bit rate control capability is enhanced (Stutz and
Uhl 2012).
• The entropy coding step: In the H.264 algorithm, two entropy coding modes are
supported: CAVLC and CABAC. CABAC has better coding efficiency than CAVLC, but
at a higher complexity. In both of these modes, the syntax elements are coded
using a single infinite-extent codeword set (referred to as the Exp-Golomb code).
In our proposed video deduplication scheme, the videos are divided into GOPs
(groups of pictures) based on similarity, where each GOP is made of I frames and
P frames. The H.264 algorithm is used with the following specifications: the GOP
size is set to 15 frames, where the first frame is always the I frame, used as
the reference for the subsequent 14 frames, which are all P frames.
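To make the GOP layout concrete, the following sketch (a hypothetical illustration, not the actual codec interface) splits a frame sequence into 15-frame GOPs with the IPPP pattern described above:

```python
def split_into_gops(frames, gop_size=15):
    """Split a frame sequence into GOPs; the first frame of each GOP
    is treated as the I frame, the rest as P frames (IPPP... pattern)."""
    gops = []
    for start in range(0, len(frames), gop_size):
        chunk = frames[start:start + gop_size]
        # Label each frame: one I frame followed by up to 14 P frames
        labeled = [("I" if i == 0 else "P", f) for i, f in enumerate(chunk)]
        gops.append(labeled)
    return gops

gops = split_into_gops(list(range(45)))   # 45 dummy frames -> 3 GOPs
print(len(gops), gops[0][0][0], gops[0][1][0])  # 3 I P
```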

10.9.2 Signature Generation from the Compressed Videos
The generated signature captures the content-dependent robust bits from the macro
blocks generated by the H.264 compression algorithm. These bits are then used
to authenticate a GOP and to distinguish one GOP from another; hence, they are
treated as the signature of the GOP. The signature calculation is done in parallel
with the compression algorithm, as the GOPs are being generated by the H.264
algorithm.
The signature generation is carried out in the compressed domain, and the
signatures are generated from the information produced in the transform domain of
the H.264 compression algorithm. The content-dependent robust bits are extracted
from the macro blocks and are further used as the signature for authenticating the
compressed video. It should be noted that in our use of the H.264 algorithm, the I
and P frames are the minimum frame types that must be present for the algorithm
to work.
Indeed, the video is first broken down into GOPs, which are authenticated
individually by hashing the features extracted from their I frames and P frames. The
hash is then considered as the digest (digital signature) for all the frames in the GOP.
The digital signature is composed of the features extracted from the intra 16 × 16,

10 Deduplication Practices for Multimedia Data in the Cloud


intra 4 × 4 and inter 4 × 4 MBs. The I slices are composed of intra coded MBs, in
which each intra 16 × 16 and intra 4 × 4 luma region is predicted from the previously
coded 16 × 16 and 4 × 4 MBs of the same I frame.
For the intra 4 × 4 and inter 4 × 4 MBs, the quantized DC coefficient and the first
two quantized AC coefficients in the zig-zag scan order (surrounding the only DC
coefficient value) of every 4 × 4 MB are extracted as the signature data. Then, for
every intra 16 × 16 MB, all the nonzero quantized Hadamard transform coefficients and
the first two quantized AC coefficients in the zig-zag scan order (also surrounding
the only DC coefficient value) are extracted as the signature data. The signature is
calculated for each MB in a frame until the end of the GOP is reached. Meanwhile,
these signatures are saved in a buffer, and when the algorithm signals an IDR
(instantaneous decoder refresh), the signatures for all the MBs of all the frames in
the GOP are hashed using SHA-256, producing a 256-bit signature for each GOP.
Finally, the signatures for all the GOPs are calculated and transmitted along with the
video. The CSP will compare these signatures against the ones already stored in the
cloud storage in order to identify any possible duplicated parts of the videos.
Deduplication at the GOP level will further increase the deduplication ratios, since
different users are less likely to upload entire identical videos to the cloud than
identical parts of videos.
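The per-GOP signature step above can be sketched as follows; the representation of the extracted coefficients is a hypothetical simplification, but the hashing mirrors the text (buffer the robust bits of every MB, then hash with SHA-256 at the end of the GOP):

```python
import hashlib

def gop_signature(macroblock_features):
    """Hash the robust coefficient data of all MBs in a GOP into a
    single 256-bit signature using SHA-256."""
    h = hashlib.sha256()
    for coeffs in macroblock_features:
        # coeffs: the extracted quantized DC + first two AC coefficients
        # of one macro block (hypothetical integer representation)
        h.update(b",".join(str(c).encode() for c in coeffs))
    return h.digest()  # 32 bytes = 256 bits

sig = gop_signature([[12, 3, 1], [0, 5, 2]])
print(len(sig) * 8)  # 256
```

Identical GOPs always yield the same 256-bit digest, which is what lets the CSP match duplicated GOPs without seeing the plain video.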

10.9.3 Selective Encryption of the Compressed Videos
Assuming that the compressed videos have been produced by the H.264 algorithm,
and the signatures have been generated by the users, the next step consists of
encrypting the compressed videos so that it will not be possible for the CSP to
access the plain videos.
We have used the partial encryption scheme proposed in Zhao and Zhuo (2012),
with some modifications applied to it in order to fulfill the requirements of
cross-user deduplication. The partial encryption is carried out in the compressed
domain and is therefore well in line with the signature generation process, which
is also performed in the compressed domain. Since the video is split into
GOPs, the encryption is performed at the GOP level. This encryption scheme
is content-based, since no user-dependent variables are involved in its process.
Content-based encryption ensures that different users will obtain the same
encrypted videos for the same plain videos, which is the basic requirement for
cross-user deduplication.
The encryption process starts by first generating the compressed bit stream for the
video. First, the user classifies the video into six different categories, namely high,
medium, and low intensity motion (for complex texture) and high, medium, and low
intensity motion (for non-complex texture), by utilizing the information generated
in the intra prediction mode, the DCT coefficients, and the motion vectors (as described
earlier). It is assumed that if the I frame of a GOP is complex in texture, the
corresponding P frames will also be complex in texture. For the motion intensity, the
motion vectors of the P frames are taken into account. Second, the user organizes the
GOPs into the above-mentioned six classes. Basically, the user calculates the average
of the nonzero coefficients and the average number of intra 4 × 4 predictions for the I
frames. For the P frames, the average of the suffix length of the MV keywords and
the corresponding variance are calculated. These values are then compared against
the threshold values given by the CSP to all the users of that cloud. The users will
use these threshold values to classify their videos and start the encryption process.
The details of the classification of the videos are beyond the scope of our current
research, but can be found in Zhao and Zhuo (2012). The partial encryption then
follows after the video classification step.
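As a rough illustration of this classification step (the feature names and threshold values below are hypothetical; the actual criteria are those of Zhao and Zhuo (2012)):

```python
def classify_gop(avg_nonzero_coeffs, avg_mv_suffix_len,
                 texture_threshold, motion_thresholds):
    """Toy classification into one of the six classes: texture is judged
    from the I-frame coefficient average, motion intensity from the
    P-frame motion-vector statistics (thresholds come from the CSP)."""
    texture = "complex" if avg_nonzero_coeffs > texture_threshold else "non-complex"
    lo, hi = motion_thresholds
    if avg_mv_suffix_len > hi:
        motion = "high"
    elif avg_mv_suffix_len > lo:
        motion = "medium"
    else:
        motion = "low"
    return texture, motion

print(classify_gop(40.0, 2.5, 30.0, (1.0, 3.0)))  # ('complex', 'medium')
```

Because every user applies the same CSP-supplied thresholds, any two users classify identical videos identically, which keeps the subsequent encryption deterministic across users.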
The compressed video has already been broken into GOPs for the purpose
of signature generation. This same set of GOPs is used for partial encryption
as follows. The DCT coefficients and intra prediction modes in every GOP are
encrypted according to the texture of the video. In doing so, the I frames are
considered crucial for the decoding of the P frames, in the sense that if errors occur
in the I frames, or if these frames are tampered with (even slightly), the decoding of
the P frames of that GOP will be affected. Therefore, all the intra prediction modes
are encrypted, no matter what the texture or motion intensity of the video is. For
complex texture videos, all the nonzero DCT coefficients except for the trailing
ones are encrypted, because such videos have a large number of high band DCT
coefficients. For non-complex texture videos, only the first three low band DCT
coefficients are encrypted. For motion intensity purposes, the user will encrypt the
motion vectors of the P frames. In the case of a complex texture video, the user will
encrypt the motion vector differences in all the P frames, in the first 70% of the
P frames, or in the first 30% of the P frames of each GOP, for high, medium, and
low intensity videos, respectively. In the case of a non-complex texture video, the
user will likewise encrypt the motion vector differences in all the P frames, in the
first 70% of the P frames, or in the first 30% of the P frames of each GOP, for high,
medium, and low intensity videos, respectively.
In our scheme, convergent encryption (Wang et al. 2010) is employed to
derive the key for partial encryption from the content of the compressed video,
rather than having each user choose a key individually. Therefore, for the
same content, the same key will be generated without the users knowing each other.
Thus, different users will have the same key as well as the same encrypted videos,
irrespective of any knowledge of each other's keys. This makes it easier for the
CSP to compare the encrypted parts of the videos and perform deduplication in the
case of duplicated videos, without actually decrypting or decompressing the video data.
The convergent encryption steps are described as follows. Let us assume that a
user partially encrypts the video data as described earlier, and that the video is
complex textured with medium motion intensity. Then three data sets are to be
encrypted, namely INTRA-PRED = (all the intra prediction modes in a GOP),
NZ-DCT = (all the nonzero DCT coefficients in a GOP), and
PER-MVD = (the percentage of the MVDs in a GOP). The user will calculate the
keys K1, K2, and K3 as follows: K1 = SHA2(INTRA-PRED), K2 = SHA2(NZ-DCT),
and K3 = SHA2(PER-MVD); then will perform the following
encryption: Enc(INTRA-PRED) = AES(K1, INTRA-PRED), Enc(NZ-DCT) =
AES(K2, NZ-DCT), and Enc(PER-MVD) = AES(K3, PER-MVD). In this way,
for every vector and for every MB of any two identical GOPs, the encrypted data
will appear to be similar. The CSP will then be able to compare the GOPs without
breaching the privacy and security of the uploaded data, while storage space savings
will be maximized through cross-user deduplication. Once the video and the signatures
are uploaded by the user to the cloud, the CSP will compare the signatures of the
compressed video for deduplication purposes. If the signatures match, the new
copy is discarded and the metadata is updated. Depending upon the resource and
security requirements, the CSP can compare the encrypted parts of the videos after
the signatures match, in order to ensure that the two GOPs are identical before
deleting the data. Since convergent encryption is applied, the same plain videos
result in the same cipher videos, thus enabling cross-user deduplication in
the presence of a semi-honest CSP.
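The convergent encryption property that enables cross-user deduplication can be sketched as follows. AES is not in the Python standard library, so a deterministic SHA-256 counter-mode keystream stands in for AES here; this is an illustration of the key-from-content idea only, not the scheme's actual cipher:

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    """Convergent key: the SHA-256 digest of the content itself."""
    return hashlib.sha256(data).digest()

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Stand-in for AES (not in the stdlib): a deterministic SHA-256
    counter-mode keystream XORed with the data. Illustration only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Two users encrypting the same GOP data derive the same key and therefore
# the same ciphertext -- the property cross-user deduplication relies on.
gop = b"quantized DCT coefficients of a GOP"
c1 = toy_encrypt(convergent_key(gop), gop)
c2 = toy_encrypt(convergent_key(gop), gop)
print(c1 == c2)   # True
```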

10.9.4 Experimental Results
The experiments have been conducted keeping in mind the following requirements:
(1) some digital space must be saved by applying the H.264 compression
algorithm and deduplication; (2) the compression algorithm must be efficient
in terms of computation, complexity, and storage overheads; (3) the signature
generation step must be robust enough to identify the GOPs for deduplication;
and (4) attacks from external users are much more straightforward than those from
internal users. Therefore, our goal is to show that our proposed scheme is secure
against a semi-honest CSP. The use of partial encryption and signature generation
done at the user end is therefore analyzed thoroughly for security and efficiency.
In order to analyze the security and efficiency of our proposed scheme, we tested
it on six different video sequences, namely Akiyo, Foreman, Claire, Grandma,
Highway, and Mobile. The specifications of all of these video sequences are well known and are
presented in Table 10.5. We have chosen these videos for our testing because they
belong to different classes of videos (Thomas et al. 2007).

Table 10.5 Video sequence specifications: size of video (MB), total GOPs, and average GOP size for each sequence (data not reproduced)

Table 10.6 Deduplication performance of the video sequences: percentage of space saved for 25%, 50%, 75%, and 100% of duplicated GOPs (data not reproduced)

Foreman and Highway are classified as non-complex textured and medium intensity videos. Grandma,
Akiyo and Claire are classified as non-complex textured and low intensity motion
videos. Each video sequence is first encoded into the QCIF format, and then the frames
are extracted. The videos are then input into the H.264 compression algorithm with
a GOP size of 15. The first frame is the I frame and the rest of the frames are
P frames, which implies an IPPP format. The QP value is set to 28. The algorithm
works on the 4 × 4 and 16 × 16 MB modes for I and P frames. The rate of compression for
H.264 is set to 30 Hz. The details of the algorithm can be found in Al Muhit (2017).
The system configuration for the experiments is an Intel Core i3 processor running
at 2.40 GHz with 4 GB of RAM, on a 64-bit operating system. We
implemented our scheme in MATLAB version R2014a. Finally, different methods,
all implemented in MATLAB, were used for the extraction and processing of the QCIF
videos, as well as the extraction of the frames.
Video deduplication is the most important requirement that we need to fulfill.
The deduplication aspect of our scheme is captured in Table 10.6. We have
calculated the amount of space saved in cloud storage for the case where the CSP
practices cross-user deduplication at the GOP level for 25%, 50%, 75%, and 100% of
GOPs in our experiments. From the results shown in Table 10.6, it can be observed
that for complex textured videos such as Mobile, the percentage of digital space
saved is higher than that of the non-complex textured videos. This is attributed to the
fact that these videos are larger on disk than the others (i.e. there is more information
for complex textured videos). As more and more GOPs are replicated,
the space savings increase at the cloud storage end. The CSP simply needs to
update some metadata for the GOP, indicating that this particular GOP also
belongs to this user in addition to some other users. The size of the metadata is
very small compared to the space saved by storing a single copy of the
GOP rather than a copy for each user. Moreover, for CSPs with large storage, the
metadata space is negligible because of its textual nature. For videos that are large in
size and of high motion intensity, it can be seen that there are substantial space savings,
as in the case of the Highway video. The percentage of space saved increases as more
and more GOPs are identical. When the entire video is the same, 100% of the space
is saved. It can also be noted that the amount of space saved increases as the size
of the video increases, which is very favourable for a cloud computing setup, since
users at times try to upload large videos. Videos uploaded to the cloud by different
users are often not exactly identical, but are at times cropped at the beginning or at the
end. Unlike images, the brightness and colour of videos are not often modified by
users, but videos can be changed in length by simply cropping some parts of them.
This explains why our scheme is based on GOPs, so that the deduplication ratios can
be improved as much as possible.
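The GOP-level savings can be estimated with simple arithmetic; the following sketch uses a hypothetical 64-byte metadata size per deduplicated GOP (the text only states that the metadata is small and textual):

```python
def space_saved_kb(duplicated_gops, avg_gop_size_kb, metadata_bytes_per_gop=64):
    """Estimate storage saved (KB) when duplicated GOPs are stored once
    and replaced by small metadata entries pointing to the shared copy.
    The 64-byte metadata size is a hypothetical figure for illustration."""
    saved = duplicated_gops * avg_gop_size_kb
    overhead = duplicated_gops * metadata_bytes_per_gop / 1024
    return saved - overhead

# e.g. half of 58 GOPs duplicated at ~53.33 KB each (figures from the text)
print(round(space_saved_kb(29, 53.33), 2))
```

As the sketch shows, the metadata overhead is orders of magnitude smaller than the GOP data it replaces, so savings grow almost linearly with the number of duplicated GOPs.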
The second requirement of our scheme is that the inclusion of the deduplication
process should not incur any extra overhead on the process of uploading the
videos to the cloud storage from the user’s end, i.e. the proposed scheme should
be computationally cost-effective in terms of resources at the user’s end. We
propose to use the H.264 compression algorithm as the basis of all our further
computations. Since videos are compressed anyway before being uploaded to
the cloud, the computational overhead is reduced to a great extent. The user
calculates the signatures from the information produced by the compression
algorithm. The encryption is also carried out on the information generated by the
compression algorithm. Therefore, the performance efficiency should be determined
in terms of how much time the user spends on these two actions. From Table 10.7, it
can be observed that the average time to encode the videos is much higher than that
to calculate the signature. In the case of the Highway video sequence, the average time
to compress the video is 165.65 s and the average time to calculate the signature at
the GOP level is 0.0603 s, which is nominal compared to the time to encode. It can
also be noticed that as the size of the video increases (from Foreman to Highway),
the number of GOPs naturally increases, but the time taken to encode and the PSNR
also depend on the nature of the video. The Mobile video sequence is complex
in texture and smaller in size, but took the longest time to encode. The time taken
(and consequently the number of CPU cycles used) to compare the signatures is
also insignificant in the case of large cloud storages, since the size of the signature
is merely 256 bits.

Table 10.7 Compression performance of the video sequences: average time to encode (s), average time to calculate the hash of the signature (s), and length of the signature (data not reproduced)

For instance, as can be seen from Table 10.7, the size of the
actual signature for the Grandma video is 1568745 bytes, which has been reduced
to 256 bits by the SHA hash. These signatures will be transmitted along with the
compressed videos, but because of their size, they do not incur any overhead in
terms of bandwidth consumption.
For the Grandma video, a total of 58 signatures need to be transmitted, with an
approximate size of 14848 bits, which again is negligible for the user. The signatures
are also calculated at the GOP level, i.e. for each GOP, a 256-bit signature is
generated, which is very small compared to the original size of the GOP. Depending
upon the computational capacity of the cloud and the security/efficiency tradeoff,
the signatures can be calculated for each frame rather than at the GOP level, and
each frame signature can be checked for deduplication. The time taken to calculate
those signatures is also negligible, even at the user’s end. For the CSP, the benefit
comes from comparing the signatures (256 bits) rather than the GOPs
themselves (e.g. 53.33 KB). The size of the video to be stored also gets reduced
after compression, which is a further benefit for the CSP in terms of storage savings.
The proposed scheme is designed in such a way that the performance of the H.264
compression algorithm is not affected by the signature calculation and the partial
encryption, because these methods are applied on top of the information generated
by the H.264 compression algorithm. The performance of the H.264 compression
algorithm is shown in Table 10.7 in terms of PSNR. The performance of our
proposed scheme can be judged from the results depicted in Tables 10.6 and 10.7.
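The signature overhead quoted above is easy to verify: 58 GOP signatures of 256 bits each give 14848 bits, i.e. under 2 KB of extra transmission:

```python
gop_count = 58          # number of GOPs in the Grandma video (from the text)
sig_bits = 256          # one SHA-256 signature per GOP
total_bits = gop_count * sig_bits
total_kb = total_bits / 8 / 1024
print(total_bits, round(total_kb, 2))  # 14848 1.81
```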
We have also quantified the time consumed in the partial encryption of each
GOP. In our proposed video deduplication scheme, the partial encryption presented
in Zhao and Zhuo (2012) is adopted and adjusted to meet the requirements of
cross-user deduplication. Therefore, we have considered the same experimental settings
used in Zhao and Zhuo (2012), in terms of the number of frames in the GOPs, the
compression rate, and the QP factor. We conducted the experiment on a few samples
and obtained almost the same results as the ones shown in Zhao and Zhuo (2012)
in terms of time consumed for partial encryption. Indeed, the Foreman video took
0.44019 s, the Highway video took 2.26 s, the Akiyo video took 0.354 s, and the
Mobile video took 1.308 s. From these results, it is clear that the time taken to
perform partial encryption increases as the size of the video increases (the Highway
video being the largest of all), and this time tends to increase for videos with complex
texture, since they have more information to encrypt. Overall, the time taken to
encrypt the partial compressed information is negligible compared to the time taken
to encode the video. Moreover, considering the tradeoff between security and
efficiency, this amount of time can be overlooked when it comes to cloud computing.

10.9.5 Security Analysis
The setup of the proposed scheme allows the user to only upload the video data
generated during the H.264 compression step, and at the same time, to partially
encrypt the video and generate the unique hash of each GOP. The CSP should
not be able to decompress the video from the received compressed data, which
is partially encrypted. The security of the proposed scheme depends upon the
partial encryption of the DCT coefficients and motion vectors. We have used
convergent encryption, which utilizes the 256-bit hash of the data as the key for
the AES encryption. The possibility of the AES encryption being compromised is
very low, since recovering the key requires a computational complexity of about
2^126.1. The security of SHA-256 is very strong, since a differential attack
requires a computational complexity of at least 2^178. The perceptual quality of
the videos would be very poor if decrypted by the CSP, since the CSP does
not have access to the video compression information required to accurately
recover the video. Due to the deduplication requirement, the users cannot be allowed
to choose different keys or a public/private key model for the encryption step.
The keys generated for two different users for the same GOP through convergent
encryption are exactly the same. Hence, it is assumed that the users of the same
cloud storage will not collude with the CSP; otherwise, the security of the proposed
scheme could be compromised.

10.10 Chapter Summary
In the first part of this chapter, we have presented a novel secure image
deduplication scheme through image compression for cloud storage services.
The proposed scheme is composed of three parts, namely: the SPIHT compression
algorithm, partial encryption, and hashing. Experimental results have shown that
(1) the proposed scheme is robust enough to identify minor changes between two
images even if they are in compressed form; (2) the proposed scheme is secure
against the semi-honest CSP, since the CSP does not have access to the compressed
images, but can identify the identical images from different users only through
the significant maps of these compressed images; and (3) the proposed POR and POW
schemes are efficient.
In the second half, we have proposed a secure video deduplication scheme
through video compression in cloud storage environments. The proposed scheme is
made of three components: the H.264 video compression scheme, signature generation
from the compressed videos, and selective encryption of the compressed videos. The
compressed video obtained from the H.264 algorithm is partially encrypted through
convergent encryption, in such a way that the semi-honest CSP or any malicious
user cannot access it in the plain, hence ensuring the security of the video
data. Experimental results have shown that:
(1) for complex textured videos, the percentage of digital storage space saved by
the CSP practicing cross-user deduplication using our scheme is higher than that
of the non-complex textured videos; (2) for videos that are large in size and of high
motion intensity, there are substantial space savings for the CSP practicing
cross-user deduplication using our scheme; (3) the average time taken to encode the videos
is much higher than that taken to calculate the signatures; and (4) the proposed scheme
is secure against the semi-honest CSP, since the CSP does not have access to the
video compression information required for the recovery of the video.


References

Adshead, A. (2017). A guide to data de-duplication. http://www.computerweekly.com/feature/A-guide-to-data-de-duplication. Accessed 24 August 2016.
Al Muhit, A. (2017). H.264 baseline codec v2. http://www.mathworks.com/matlabcentral/
fileexchange/40359-h-264-baseline-codec-v2. Accessed 24 August 2016.
Anderson, R. (2017). Can you compress and dedupe? it depends. http://storagesavvy.com.
Accessed 23 August 2016.
Cheng, H., & Li, X. (2000). Partial encryption of compressed images and videos. IEEE
Transactions on Signal Processing, 48(8), 2439–2451.
Gantz, J., & Reinsel, D. (May 2010). The digital universe decade - are you ready? https://
www.emc.com/collateral/analyst-reports/idc-digital-universe-are-you-ready.pdf. Accessed 24
August 2016.
IBM Corporation (2017). IBM protectier deduplication. http://www-03.ibm.com/systems/storage/
tape/protectier/index.html. Accessed 24 August 2016.
Keerthyrajan, G. (2017). Deduplication internals. https://pibytes.wordpress.com/2013/02/17/
deduplication-internals-content-aware-deduplication-part-3/. Accessed February 2016.
Kovach, S. (2017). Dropbox hacked. http://www.businessinsider.com/dropbox-hacked-2014-10.
Accessed 24 August 2016.
Osuna, A., Balogh, E., Ramos, A., de Carvalho, G., Javier, R. F., & Mann, Z. (2011). Implementing
IBM storage data deduplication solutions. http://www.redbooks.ibm.com/redbooks/pdfs/
sg247888.pdf. Accessed 24 August 2016.
Rashid, F., Miri, A., & Woungang, I. (2014). Proof of retrieval and ownership protocols for images
through SPIHT compression. In Proceedings of The 2014 IEEE 6th International Symposium
on Cyberspace Safety and Security (CSS’14). New York: IEEE.
Rashid, F., Miri, A., & Woungang, I. (March 2015). A secure video deduplication in cloud
storage environments using h.264 compression. In Proceedings of the First IEEE International
Conference on Big Data Computing Service and Applications San Francisco Bay, USA. New
York, IEEE.
Richardson, I. E. (2011). The H.264 advanced video compression standard. New York: Wiley.
Saadi, K., Bouridane, A., & Guessoum, A. (2009). Combined fragile watermark and digital
signature for H.264/AVC video authentication. In Proceedings of The 17th European Signal
Processing Conference (EUSIPCO 2009).
Said, A., & Pearlman, W. (2017). SPIHT image compression. http://www.cipr.rpi.edu/research/
SPIHT/spiht1.html. Accessed 23 August 2016.
Said, A., & Pearlman, W. A. (1996). A new, fast, and efficient image codec based on set partitioning
in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology, 6(3).
Shapiro, J. M. (1993). Embedded image coding using zerotrees of wavelet coefficients. IEEE
Transactions on Signal Processing, 41(12), 3445–3462.
Transactions on Signal Processing, 41(12), 3445–3462.
Singh, P., & Singh, P. (2011). Design and implementation of EZW and SPIHT image coder for
virtual images. International Journal of Computer Science and Security (IJCSS), 5(5), 433.
Stutz, T., & Uhl, A. (2012). A survey of H.264 AVC/SVC encryption. IEEE Transactions on
Circuits and Systems for Video Technology, 22(3), 325–339.
Thomas, N. M., Lefol, D., Bull, D. R., & Redmill, D. (2007). A novel secure H.264 transcoder
using selective encryption. In Proceedings of The IEEE International Conference on Image
Processing (ICIP 2007) (Vol. 4, pp. 85–88). New York: IEEE.
Thwel, T. T., & Thein, N. L. (December 2009). An efficient indexing mechanism for data
deduplication. In Proceedings of The 2009 International Conference on the Current Trends
in Information Technology (CTIT) (pp. 1–5).
Venter, F., & Stein, A. (2012). Images & videos: really big data. http://analytics-magazine.org/
images-a-videos-really-big-data/. Accessed 24 August 2016.

Wang, C., Qin, Z., Peng, J., & Wang, J. (July 2010). A novel encryption scheme
for data deduplication system. In Proceedings of The 2010 International Conference on
Communications, Circuits and Systems (ICCCAS) (pp. 265–269).
Xu, J., Chang, E.-C., & Zhou, J. (2013). Weak leakage-resilient client-side deduplication
of encrypted data in cloud storage. In Proceedings of the 8th ACM SIGSAC Symposium
on Information, Computer and Communications Security (ASIA CCS ’13) (pp. 195–206).
New York: ACM.
Yang, S.-H., & Chen, C.-F. (2005). Robust image hashing based on SPIHT. In Proceedings of
The 3rd International Conference on Information Technology: Research and Education (ITRE
2005) (pp. 110–114). New York: IEEE.
Zhao, Y., & Zhuo, L. (2012). A content-based encryption scheme for wireless H.264 compressed
videos. In Proceedings of The 2012 International Conference on Wireless Communications
and Signal Processing (WCSP) (pp. 1–6). New York: IEEE.

Chapter 11

Privacy-Aware Search and Computation Over
Encrypted Data Stores
Hoi Ting Poon and Ali Miri

11.1 Introduction
It is only recently, with the rapid development of Internet technologies, the
emergence of the Internet of Things, and the appetite for multimedia, that Big Data
began to capture the attention of companies and researchers alike. Amidst the constant
reminders of the importance of Big Data and of the role data scientists will have on
our future, there is also a growing awareness that the availability of data in all forms
and at large scale could constitute an unprecedented breach of security and privacy.
While much progress was made in the past decade, a practical and secure big
data solution remains elusive. In response to the privacy concerns, anonymization
techniques, through the removal of personally identifiable information such as names
and identification numbers from customer data, have been in use by many companies such as Facebook and Netflix. Yet, many studies have shown that they do
not provide reliable protection. For instance, a study by MIT (How hard is it to
‘de-anonymize’ cellphone data? n.d.) showed that knowing a person’s location four
times in a year is enough to uniquely identify 95% of users in a set of 1.5 million
cellphone usage records. In genomics, short subsequences of chromosomes were
found to be enough to identify individuals with high probability (Gymrek et al.
2013). The anonymized Netflix Prize dataset was famously deanonymized using
publicly available information (Narayanan and Shmatikov 2008). In all cases, the
conclusion seems to be that reliable anonymization could well be infeasible since
information about individuals is so widely available and easily accessible largely
due to the Internet.

H.T. Poon • A. Miri ()
Department of Computer Science, Ryerson University, Toronto, ON, Canada
e-mail: hoiting.poon@ryerson.ca; Ali.Miri@ryerson.ca
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_11



H.T. Poon and A. Miri

As an alternative to anonymization, encryption has well defined security properties
and has endured the scrutiny of academics and security professionals.
Rather than maintaining seemingly non-identifying information in the plain, all data is
encrypted, with mechanisms in place to perform the required functionalities. Much
of the difficulty in securing distributed computation and data storage is due to the
fact that strong encryption tends to require significant computation, which in turn
reduces the throughput of the system. Nonetheless, there has been a growing body
of work on the topic of processing encrypted data that holds promise for what a
secure encrypted big data system may one day be.
In this chapter, we will look at some recent works on enabling search over
encrypted data. Their common objective is to enable data to be stored and queried
in encrypted form at numerous facilities, which may not be under the organization's
control, since data centers tend to be expensive to maintain and cloud solutions are
fast becoming the standard. We will also briefly present the topic of homomorphic
encryption, which has been a very active area of research since the breakthrough
work on the first fully homomorphic encryption scheme. The ability to perform
computations in encrypted form has significant ramifications for secure distributed
computing, increasing end-node security and privacy.

11.2 Searchable Encryption Models
Since Google popularized MapReduce in the early 2000s, search has been recognized as a central function of many big data systems. Similarly, research into
encrypted data processing also began with search. Searchable encryption schemes
generally involve up to three parties:
• Data Owner
• Users
• Storage Provider
The Data Owner has access to the secret/private key to decrypt the data set and
is considered trusted. Users are parties other than the data owner that add
material to the encrypted data set or that search over it. Users do not have the ability
to decrypt or obtain information without the Data Owner's authorization.
The encrypted data set is stored in an untrusted storage server, such as a cloud
service provider (e.g. Amazon EC2, Microsoft Azure). While it is accepted that
legitimate service providers will perform the required protocols and encryptions
without fault to ensure their business operation, it is also conceivable that they
may wish to perform analysis on the information available to them, such as the
search patterns, to acquire additional knowledge on their clients that may benefit
them. This is termed the honest but curious model, where a non-malicious entity
follows the required protocols but desires to learn more on the protected data set.
In addition to the storage provider, users are also generally considered to be honest
but curious.

11 Privacy-Aware Search and Computation Over Encrypted Data Stores

More complicated schemes have also been proposed that consider
malicious settings where the storage provider or users may deliberately manipulate
the protocols and data to compromise the data security. Other models also exist
where researchers propose solutions to mitigate information leakage resulting from
colluding users and cloud operators. For our discussion, we will restrict ourselves to the honest
but curious model for cloud storage providers and users.
Searchable encryption schemes can be classified into four categories, denoted by
• Private/Private: A private keyword search scheme that allows the data owner
possessing the secret key to search over an encrypted data set placed by the data
owner without compromising or decrypting the data.
• Public/Private: A keyword search scheme that allows the data owner possessing
the secret key to search over an encrypted data set consisting of content submitted
by various users without compromising or decrypting the data.
• Private/Public: A keyword search scheme that allows any authorized user to
search over an encrypted data set placed by the data owner without compromising
or decrypting the data.
• Public/Public: A public keyword search scheme that allows any authorized user
to search over an encrypted corpus consisting of content submitted by various
users without compromising or decrypting the data.
In a completely private scenario, the data owner is the sole party performing
searches and providing the encrypted data set. As the security risk is limited to
the storage provider, this setting also admits the most efficient solutions, using
symmetric encryption. In a private/public setting, where other users may search
over a private collection, an extension of the completely private solutions may be
used. Where public contribution to the encrypted data set is required, asymmetric
encryption is used to maintain secrecy.
To provide a better understanding of searchable encryption, this chapter will
begin by describing some classic and recent works in the context of textual
information for the aforementioned categories.

11.3 Text
Much sensitive and confidential information is stored in text. Documents containing medical records, financial spreadsheets, business transactions, credit card
records and customer information are among those most frequently cited as requiring privacy
protection. One of the central needs of text processing systems is search.
For our discussions, we will focus on conjunctive keyword search, which deals
with queries for multiple keywords linked with an AND relationship. While English
is assumed in our discussions, the techniques can easily be extended to other natural languages.



Fig. 11.1 Private/private search scheme

11.3.1 Private/Private Search Scheme: Cloud Document
The simplest scenario is the case where the data owner uploads data to a third party
server and wishes to selectively access the data while hiding its content from the
server. A typical example would be a student uploading his assignments and notes
to a cloud storage host such as DropBox so that he may easily access the documents
at home or at school. Another example would be an executive uploading merger
and acquisition records to a cloud-hosted system. In both cases, the data owner
is the only person searching and generating the encrypted data (Fig. 11.1).

Encrypted Indexes

Indexing has been one of the most efficient approaches to search over data.
The technique can also be extended to encrypted data.
An index works by first parsing a data set for keywords and then generating a
table that maps the keywords to the data. Consider a document set with three books
with the following keywords:

Book A   'Horror', 'Fiction'
Book B   'World War', 'Biography'
Book C   'World War', 'Pandemic', 'Fiction'

Parsing the document set would result in the following index:

'Horror'      A
'Fiction'     A, C
'World War'   B, C
'Biography'   B
'Pandemic'    C


Extending the approach to encrypted data consists simply of hashing and
encrypting the keys and entries in a manner that is consistent with the index
structure. The data set itself is symmetrically encrypted using a separate secret key.

Hk('Horror')      Ek(A)
Hk('Fiction')     Ek(A, C)
Hk('World War')   Ek(B, C)
Hk('Biography')   Ek(B)
Hk('Pandemic')    Ek(C)

Suppose a user wishes to upload a document collection, D = {D_1, D_2, ..., D_n}.
It is first parsed for a list of keywords, kw_j, which may include document content or
meta-data such as date and department. An index is generated mapping keywords to
documents such that I(kw_j) = {d_a, d_b, ..., d_n}, where d_i = 1 if kw_j is a keyword for
the i-th document and d_i = 0 otherwise. The index is then encrypted and uploaded
to the cloud server:

I(H_K(kw_j)) = {E_K(d_a, d_b, ..., d_n)},

where H_K(·) is a keyed cryptographic hash function and E_K(·) is a symmetric encryption
algorithm such as AES. Briefly, cryptographic hash functions are mappings H(x): C → D,
where x ∈ C, |C| ≫ |D|, and where it is computationally infeasible
to determine any information about x given H(x). Common cryptographic hash
functions include SHA-2, SHA-3 and BLAKE.
For the discussed example, the encrypted index would be

Hk('Horror')      Ek(100)
Hk('Fiction')     Ek(101)
Hk('World War')   Ek(011)
Hk('Biography')   Ek(010)
Hk('Pandemic')    Ek(001)

To perform a search for a set of keywords kw' = {kw_1, kw_2, ..., kw_q}, the data
owner computes their hashes, H_K(kw'), using the secret key and sends them to the
cloud server. The cloud server looks up entries in the index tables corresponding to
H_K(kw') and returns the encrypted index entries to the data owner. The data owner
then decrypts and finds the intersection of the index entries, identifying the matching
documents as

D_K(I(H_K(kw_1))) & D_K(I(H_K(kw_2))) & ... & D_K(I(H_K(kw_q))),

where & is a bitwise AND operation. Suppose a query was made for all biographies
of World War veterans; a search for 'World War' and 'Biography' would require
Hk('World War') and Hk('Biography') to be sent to the cloud server. Ek(011) and
Ek(010) would respectively be returned to the data owner, who identifies Book B as the
matching result from 011 & 010 = 010.
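The index construction and conjunctive search above can be sketched in a few lines of Python. This is an illustrative toy with assumed names and key values: HMAC-SHA256 stands in for the keyed hash H_K, and a fixed SHA-256 keystream stands in for the symmetric cipher E_K, which in practice would be AES with per-row randomness.

```python
import hashlib
import hmac

HASH_KEY = b"owner-hash-key"   # secret hash key K (hypothetical value)
ENC_KEY = b"owner-enc-key"     # secret encryption key (hypothetical value)

def H(keyword: str) -> str:
    """Keyed cryptographic hash H_K of a keyword."""
    return hmac.new(HASH_KEY, keyword.encode(), hashlib.sha256).hexdigest()

def E(bits: str) -> bytes:
    """Toy stand-in for E_K: XOR with a fixed keystream (use AES in practice)."""
    ks = hashlib.sha256(ENC_KEY).digest()
    return bytes(b ^ k for b, k in zip(bits.encode(), ks))

def D(ct: bytes) -> str:
    """Inverse of the toy E_K."""
    ks = hashlib.sha256(ENC_KEY).digest()
    return bytes(b ^ k for b, k in zip(ct, ks)).decode()

books = {"A": {"Horror", "Fiction"},
         "B": {"World War", "Biography"},
         "C": {"World War", "Pandemic", "Fiction"}}
ids = sorted(books)   # bit positions: A, B, C

# Data owner: build and encrypt the keyword-to-bit-vector index.
keywords = set().union(*books.values())
index = {H(kw): E("".join("1" if kw in books[d] else "0" for d in ids))
         for kw in keywords}

# Search 'World War' AND 'Biography': send hashes, decrypt rows, bitwise AND.
rows = [D(index[H(kw)]) for kw in ("World War", "Biography")]
matches = [d for i, d in enumerate(ids) if all(r[i] == "1" for r in rows)]
print(matches)  # ['B']
```

The server sees only hashed keywords and encrypted rows; the intersection step happens client-side after decryption, exactly as in the scheme above.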



Bloom Filters

While indexes provide a reliable and familiar approach to searching encrypted
data, the need for decryption and encryption during search can be computationally
expensive for certain applications. As an alternative, Bloom filters offer a similar
level of performance without the need for decryption and require only one round
of communication; unlike indexing, however, results can contain false positives. While
generally undesirable, false positives can provide some level of privacy protection
(Goh 2003).
Bloom filters are space-efficient probabilistic data structures used to test whether
an element is a member of a set. A Bloom filter contains m bits, and κ hash functions,
H_i(x), are used to map elements to the m bits in the filter. All bits in the filter
are initially set to zero. To add an element, a, to the filter, we compute H_i(a) for
i = 1 to κ, and set the corresponding positions in the filter to 1. For example, for
κ = 2 and m = 5, to add 'Happy' to the filter, we compute H1('Happy') = 1 and
H2('Happy') = 4. Setting positions 1 and 4, the Bloom filter becomes 1,0,0,1,0.
To test for membership of an element, b, in a sample Bloom filter, we compute
H_i(b) for i = 1 to κ; the element is determined to be a member if all corresponding
positions of the sample Bloom filter are set to 1. For example, 'Happy' would be a
member of the Bloom filter 1,1,0,1,1.
While Bloom filters have no false negatives, they can falsely identify an element as a
member of a set. Given κ hash functions, n items inserted and m bits used in the
filter, the probability of a false positive is approximately p = (1 − e^(−κn/m))^κ.
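A minimal Bloom filter matching the description above can be sketched as follows (m, κ and the hash construction are illustrative choices; positions are derived from salted SHA-256 digests):

```python
import hashlib

class BloomFilter:
    def __init__(self, m: int, kappa: int):
        self.m, self.kappa = m, kappa
        self.bits = [0] * m

    def _positions(self, item: str):
        # Derive kappa positions H_1..H_kappa from salted SHA-256 digests.
        for i in range(self.kappa):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def contains(self, item: str) -> bool:
        # No false negatives; false positives occur with probability
        # roughly (1 - exp(-kappa * n / m)) ** kappa.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter(m=64, kappa=2)
bf.add("Happy")
print(bf.contains("Happy"))  # True
```

With m = 64, κ = 2 and one inserted item, the false-positive probability is roughly (1 − e^(−2/64))² ≈ 0.1%, illustrating the space/accuracy trade-off.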
Applying Bloom filters to search consists of viewing the keywords associated
with a document as a set and individual keywords as its members. Using the
same example as in the previous section, Book A would need to add 'Horror' and
'Fiction' to its filter. Suppose κ = 2, m = 5, H1('Horror') = 1, H2('Horror') =
4, H1('Fiction') = 2 and H2('Fiction') = 4; Book A's keyword filter would be
1,1,0,1,0. Suppose further, for illustration, that H1('World War') = 2, H2('World War') = 5,
H1('Biography') = 3, H2('Biography') = 5, H1('Pandemic') = 1 and H2('Pandemic') = 4.
Proceeding similarly for the remaining documents yields the following
Bloom filters, analogous to the index table in the previous section:

Book A   1,1,0,1,0
Book B   0,1,1,0,1
Book C   1,1,0,1,1

To search for 'World War' and 'Biography', we would construct a query filter
where H1('World War'), H2('World War'), H1('Biography') and H2('Biography')
are set and send it to the server. Here, the query filter is 0,1,1,0,1; the server
identifies all filters with the 2nd, 3rd and 5th bits set and returns Book B as the
matching result.
Using Bloom filters for encrypted data proceeds in the same manner, except that
members of the filters consist of encrypted keywords. That is, to add 'Happy' to a filter,
we first compute its keyed cryptographic hash, Hk('Happy'). Then, we hash the



result and set the filter bits as before, using H1(Hk('Happy')) and H2(Hk('Happy')).
To perform a search, we construct a query using the cryptographic hashes of the keywords
under search as members. Since the cloud server does not have access to k, it cannot
perform searches without the data owner's authorization.
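The encrypted variant can be sketched by chaining the keyed hash before the position hashes (the key, filter size and keyword values below are illustrative):

```python
import hashlib
import hmac

K = b"owner-secret-key"   # data owner's secret k (hypothetical value)
M, KAPPA = 64, 2          # filter size and number of position hashes

def trapdoor(keyword: str) -> list:
    # First the keyed hash H_k(keyword), then KAPPA position hashes of the result.
    tag = hmac.new(K, keyword.encode(), hashlib.sha256).hexdigest()
    return [int.from_bytes(hashlib.sha256(f"{i}:{tag}".encode()).digest(), "big") % M
            for i in range(KAPPA)]

def make_filter(keywords) -> list:
    bits = [0] * M
    for kw in keywords:
        for p in trapdoor(kw):
            bits[p] = 1
    return bits

# The server stores only the filter; without k it cannot construct queries.
doc_filter = make_filter(["World War", "Biography"])
query = trapdoor("Biography")                 # sent to the server as a trapdoor
print(all(doc_filter[p] for p in query))      # True: the document matches
```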
Note that if the file names also require privacy, a small lookup table matching
numerical identifiers to file names can be stored privately by the data owner. The
matching numerical identifiers can then be used in place of file names on the cloud
server. A sample file name to identifier table is as follows:
Book A   1
Book B   2
Book C   3

When compared to the encrypted-index approach, the use of Bloom filters will
generally lead to a much smaller storage requirement, at the cost of having false
positives.

11.3.2 Private/Public Search Scheme: Cloud Document
Storage Extended
Suppose you wish to allow certain users to search your documents (Fig. 11.2);
the previous solutions can be extended to achieve this goal.
Consider the encrypted-index based search scheme: a user wishing to search the
data owner's documents can simply send the queried keywords to the data owner.
The data owner computes the encrypted keywords and either forwards them to the
server or returns them to the user. The server then processes the query as in the private
scenario and returns the results to the user. Figure 11.3 illustrates the technique.
The Bloom filter based scheme can also be extended by having the data owner
process all user query requests prior to sending the query filter to the server.
Note that the encrypted keyword set, or the query filter generated from encrypted
keywords, constitutes a trapdoor, i.e. data that can be used to search the data set but
reveals no information about the keywords. This hides the keywords being searched for
from the cloud server.

Fig. 11.2 Private/public search scheme



Fig. 11.3 Extension of an
encrypted private storage to
allow third party searches

Fig. 11.4 Public/private search scheme

One advantage of this setup is that it allows an access control mechanism to be
in place where the data owner can authorize and revoke user access to the data set.
Consider a multi-national corporation with offices in many nations that needs to
share documents among its employees. The corporation's master server(s) may be
considered the data owner and its employees the users. Depending on their roles, employees
may not be permitted to search for certain keywords. The simple extension discussed in
this section allows the data owner to process all query requests, as opposed to
a non-interactive model.

11.3.3 Public/Private Search Scheme: Email Filtering System
At a time when email was the primary method of communication online, Boneh
et al. (2004) proposed a system that would allow emails to be securely stored on
an untrusted email server while allowing selective retrieval of messages based on
keywords. In this setting, the users are entities sending emails to the data owner,
and the cloud server is the untrusted email server. Since there are many users, we
have public contribution of encrypted data and private search by the data owner
(Fig. 11.4).
The usefulness of the system, shown in Fig. 11.5, is demonstrated in a scenario
where certain emails sent by various people may be urgent and require immediate
attention from the recipient. Hence, rather than waiting for an email retrieval request,
the recipient may be immediately alerted to the urgent matter, all while maintaining
the secrecy of the email contents.
The construction is based on public key encryption, which facilitates the key
management in the considered scenario, where a single public key may be used
by all users wishing to encrypt and send emails to the recipient. In particular, the
authors noted that the scenario implies that an Identity-Based Encryption is required.



Fig. 11.5 Email filtering system

Identity-Based Encryption (IBE)

Identity-Based Encryption schemes allow arbitrary strings to be used to generate
a public key. With traditional public key encryption, different parties must go to
trusted certificate authorities to obtain the public key of the intended recipient to
encrypt a message. The main advantage of IBE is that parties can effectively forego
this process by using the recipient's identity, such as his email address, as the public
key to encrypt messages. To decrypt the messages, the recipient must obtain the
corresponding private key from a trusted authority. Note that the trusted authority in
IBE is a key escrow; that is, it can generate private keys for any user in the system
and must be considered highly trusted. It is interesting to note, however, that in a
system with a finite number of users, the key escrow can be removed once all users
have obtained their private keys. Furthermore, the ability to generate public keys
from arbitrary strings also enables the use of attributes in public keys, upon which
access to the private key may depend. For example, emails sent to support@abc.com
may be encrypted using 'support@abc.com,lvl=2,Exp=09Oct2016' to signify that
the email is valid only for level 2 support staff and only until October 9th. If the party
requesting the private key fails to meet either condition, the private key would not
be issued.
The most efficient IBE schemes today are based on bilinear pairings over elliptic
curves (Boneh and Franklin 2001). Due to its importance in the literature as the
basis of many solutions on searchable encryption, we’ll include the basic scheme
here to illustrate the operations. Interested readers are encouraged to refer to Boneh
and Franklin (2001) for a detailed description.
An Identity-Based Encryption based on bilinear pairings, generally Weil or Tate
pairings, consists of the following four algorithms:
• Setup—Select two large primes, p and q, two groups, G1 and G2, of order q and
a generator, P ∈ G1. Two cryptographic hash functions, H1: {0,1}* → G1
and H2: G2 → {0,1}^n, and a bilinear map e: G1 × G1 → G2 are also selected.
A master secret s ∈ Zq* is held by the authority responsible for generating private
keys. The public key, Ps, is set to sP. The public parameters of the scheme are

{p, q, G1, G2, e, P, Ps}.

• Private Key Generation—Given a party's ID, the corresponding private key is
generated as d_ID = sH1(ID).
• Encryption—Given a message, m, to be sent to the recipient with identity, ID, the
recipient's public key is first computed as H1(ID). Then, we compute g_ID =
e(H1(ID), Ps) and the ciphertext is set to c = E_IBE,H1(ID)(m) = {rP, m ⊕ H2(g_ID^r)},
where r ∈ Zq* is randomly chosen.
• Decryption—Given a ciphertext, c = {u, v}, the message can be retrieved by
computing v ⊕ H2(e(d_ID, u)).
Setup and Private Key Generation are generally performed by the key escrow
authority. Encryption is performed by message senders and Decryption is performed by recipients. IBE is a probabilistic encryption algorithm: the use of
randomness allows different encryptions of the same plaintext to result in different
ciphertexts. The security of IBE is based on the discrete log problem in elliptic
curves and the Bilinear Diffie-Hellman Assumption. The former states that, given P
and sP, it is computationally infeasible to determine s. The latter states that, given
P, aP, bP and cP, it is computationally infeasible to determine e(P, P)^abc.
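Correctness of decryption follows directly from the bilinearity of e: the recipient recomputes the sender's mask and cancels it.

```latex
\begin{aligned}
e(d_{ID}, u) &= e\big(sH_1(ID),\, rP\big) = e\big(H_1(ID), P\big)^{sr}
             = e\big(H_1(ID),\, sP\big)^{r} = g_{ID}^{\,r},\\
v \oplus H_2\big(e(d_{ID}, u)\big) &= m \oplus H_2\big(g_{ID}^{\,r}\big) \oplus H_2\big(g_{ID}^{\,r}\big) = m.
\end{aligned}
```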

An IBE-Based Secure Email Filtering System

We first provide a high level description of the email filtering system, which also
involves four algorithms:
• KeyGen—Generates the public and private keys, Apub and Apriv.
• PEKS(Apub, W)—Computes a searchable encryption of the keyword, W, for the
recipient with public key, Apub.
• Trapdoor(Apriv, W)—Computes a trapdoor for the keyword, W, for the recipient
with private key, Apriv.
• Test(Apub, S, TW)—Tests whether the keywords used to generate the searchable
encryption, S = PEKS(Apub, W'), and the trapdoor, TW = Trapdoor(Apriv, W),
match, i.e. whether W = W'.
Suppose a user wishes to filter for the keyword 'urgent'. He generates the
public and private keys using KeyGen. Then, to allow filtering for the keyword,
the user computes Trapdoor(Apriv, 'urgent') and places the result on the mail
server. Emails sent to the user would contain the encrypted email and a series of
keywords encrypted using the recipient's public key, {E(email) ‖ PEKS(Apub, W1) ‖
PEKS(Apub, W2) ‖ ...}. Suppose W2 = 'urgent'. The mail server would first compute
Test(Apub, PEKS(Apub, W1), T_'urgent') to find that W1 ≠ 'urgent'. Then, Test(Apub,
PEKS(Apub, W2), T_'urgent') would reveal that W2 = 'urgent' and that the email is
urgent, while protecting the content of the email and the non-matched keyword W1.
One of the advantages of this approach is that it is non-interactive. Once the recipient
has generated the trapdoor for the keyword filters, incoming emails may be
filtered even if the recipient is offline.



To implement the filtering system using IBE, the various parameters are chosen
as follows:

Apriv = a, where a ∈ Zq* is randomly chosen, and Apub = {P, aP}
PEKS(Apub, W) = {rP, H2(e(H1(W), r(aP)))}, where r ∈ Zq* is randomly chosen
Trapdoor(Apriv, W) = TW = aH1(W)
Test(Apub, S, TW) = 'Match' if H2(e(TW, A)) = B, where S = {A, B}
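That Test identifies matches again follows from bilinearity: for S = {A, B} = {rP, H2(e(H1(W'), r(aP)))},

```latex
H_2\big(e(T_W, A)\big) = H_2\big(e(aH_1(W),\, rP)\big)
  = H_2\big(e(H_1(W), P)^{ar}\big) = H_2\big(e(H_1(W),\, r(aP))\big),
```

which equals B exactly when W = W'.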

As seen, the recipient acts as the key escrow authority, using the master secret,
a, to generate private keys for identities that are keywords he wishes the email server
to filter for. The email server is given these private keys. Any user wishing to send
emails to the recipient can encrypt using the public keys assigned to the keywords.
PEKS(Apub, W) is equivalent to an IBE encryption of the message m = 0. Given the
private keys for the specified keywords, the email server can decrypt only the PEKS(·)
corresponding to those keywords, allowing matching. Since the content of the email is
encrypted separately, the security of the scheme is maintained.

11.3.4 Public/Public Search Scheme: Delegated Investigation
of Secured Audit Logs
Waters et al. (2004) considered a distributed system in which multiple servers
belonging to the same organization generate audit logs while in operation
(Fig. 11.7). The servers are off-site, e.g. in the cloud, and raise privacy and security
concerns. Third parties, such as a consulting firm's investigators, wishing to access
the audit logs may do so, but only for records that are relevant to their investigation.
Hence, we have public generation of encrypted data, i.e. the audit logs, by the off-site
servers, destined for the data owner, which is the organization. We also have
public search of the encrypted data by the investigators, the users. Figure 11.6
illustrates the scenario.
Two solutions, one based on symmetric encryption and one based on public key
encryption, are described, combining some of the techniques from previous sections.

Fig. 11.6 Public/public search scheme



Fig. 11.7 Audit records, with keywords and meta-data such as user and time (Waters et al. 2004)

Symmetric Scheme

The symmetric scheme provides an efficient and practical solution for searching and
delegating audit log queries. Symmetric encryption has well established security
properties and is generally very efficient. In addition, cryptographic hash functions
provide an equally efficient tool for one-way mapping of data.
Suppose there are n servers generating audit logs. Each server holds a secret key,
S_i, and encrypts its log entries, m, using a symmetric encryption algorithm, E_K(m),
where K is randomly generated for each entry. For each keyword, w_i, the server
computes

a_i = H_{S_i}(w_i); b_i = H_{a_i}(r); c_i = b_i ⊕ (K | CRC(K)),

where H_{k'}(·) is a keyed cryptographic hash function with key k'. That is, we
compute the hash of the keyword using the server's secret key. Then, the result is
used as the key to hash a random value, r. The final result is XOR'ed with the
symmetric encryption key used on the log entry. CRC(K) provides a means to verify
whether the decrypted symmetric key is correct, i.e. the keywords match. The encrypted
log entry is stored as

{E_K(m), r, c_1, ..., c_n}.


To delegate log investigations, a search capability ticket, d_w, must be issued,
as shown in Fig. 11.8. Suppose all logs of the class 'Security' are required; the
organization computes d_{w,i} = H_{S_i}('Security') for each audit log server and gives
them to the investigator.

Fig. 11.8 Delegation of log auditing to investigators through search capability tickets (Waters
et al. 2004)

To obtain the log entries with the specified keyword at
server i, the server computes p = H_{d_{w,i}}(r), where r is the random value stored with
each encrypted entry. Then, it computes p ⊕ c_i for each c_i listed. If the result, of the
form K'|H, is such that H = CRC(K'), a match is found and the encrypted log entry,
E_K(m), is decrypted using K'.
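The whole symmetric flow can be sketched in Python. This is illustrative only: HMAC-SHA256 truncated to 20 bytes plays the keyed hash H, zlib.crc32 supplies CRC(K), and the key values are hypothetical; the log entry itself would be encrypted with E_K separately.

```python
import hashlib
import hmac
import os
import zlib

def H(key: bytes, msg: bytes) -> bytes:
    """Keyed hash, truncated to 20 bytes to match the 16+4 byte payload."""
    return hmac.new(key, msg, hashlib.sha256).digest()[:20]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

S = b"audit-server-secret"                 # this server's secret S_i

def encrypt_entry(keywords, K: bytes):
    r = os.urandom(16)                     # per-entry random value r
    payload = K + zlib.crc32(K).to_bytes(4, "big")   # K | CRC(K)
    cs = []
    for w in keywords:
        a = H(S, w.encode())               # a_i = H_{S_i}(w_i)
        b = H(a, r)                        # b_i = H_{a_i}(r)
        cs.append(xor(b, payload))         # c_i = b_i XOR (K | CRC(K))
    return r, cs                           # stored alongside E_K(m)

def search(r, cs, ticket: bytes):
    """ticket = H_{S_i}(w) is the capability issued for keyword w."""
    p = H(ticket, r)
    for c in cs:
        cand = xor(p, c)
        K2, crc = cand[:16], cand[16:]
        if zlib.crc32(K2).to_bytes(4, "big") == crc:
            return K2                      # match: decrypt E_K(m) with K2
    return None

K = os.urandom(16)
r, cs = encrypt_entry(["Security", "Login"], K)
print(search(r, cs, H(S, b"Security")) == K)   # True
```

A ticket for a keyword not attached to the entry (say 'Reboot') unmasks only random bytes, so the CRC check fails with overwhelming probability and no key is recovered.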
Despite the scheme's efficiency, it can be problematic in the event of a compromise of a server secret. Since each individual log server (encrypted data generator)
maintains its own secret key, S_i, to provide keyword search capability for any
keyword, compromise of any server's secret would allow an attacker to gain search
capability for any keywords he chooses on that server.

Asymmetric Scheme

The asymmetric solution addresses this issue by using IBE to provide search
capability, similar to Sect. 11.3.3. In addition to simplifying key management,
asymmetric encryption facilitates the protection of private keys by reducing the
number of parties that must learn them. However, asymmetric encryption is also
computationally more expensive than symmetric encryption.
Recall from Sect. 11.3.3 that a master secret, s, is held by the trusted private key
generating authority in IBE. In our scenario, this would be the organization, i.e. the
data owner. To encrypt a log entry, a server chooses a random secret symmetric
encryption key, K, and encrypts the entry to produce E_K(m). For each keyword, w_i,
the server computes the IBE encryption of K|CRC(K) using H1(w_i) as the public key
to produce c_i = E_IBE,H1(w_i)(m = K|CRC(K)). The encrypted log entry is stored as


{E_K(m), c_1, ..., c_n}.


To delegate investigations to authorized parties, the data owner must first
generate and assign search capability tickets, d_w, for the required keywords, w_i.
Each ticket, d_w = sH1(w_i), represents the private key for the keyword. To identify
matching log entries, the server attempts to decrypt each c_i using d_w. If the result
is of the form K'|CRC(K'), i.e. it contains a valid CRC, then E_K(m) is decrypted
using K' and retained as a match.
Note that the last step of matching records in both schemes has a possibility of
false positives. Namely, it is possible that an invalid K' is, by chance, followed by a
valid CRC. However, even if a false positive occurs, the decryption of a log entry
using an invalid key would result in random data unrelated to the actual log entry,
preserving the security of the schemes.

11.4 Range Queries
Suppose we wish to retrieve audit logs from a time period in which we believe
an online attack occurred; that is, we wish to execute "Select * from SecLogs where
time > 2014/04/10 and time < 2014/04/13", where the time element may be as in
Fig. 11.7. It would be impractically inefficient to do a keyword search for
every single possible time value in the interval using the scheme in Sect. 11.3.4.
A recent proposal, however, generated significant interest by showing that it is
possible to preserve ordering after encryption, thus opening the possibility of
performing numerical comparisons of plaintexts in encrypted form.

11.4.1 Order Preserving Encryption
Order preserving encryption (OPE) (Agrawal et al. 2004) centers on the idea that,
given a plaintext space P and a ciphertext space C,

p1 > p2 ⟹ c1 > c2   for all p1, p2 ∈ P and c1 = E(p1), c2 = E(p2) ∈ C.

The encryption scheme does not rely on any mathematically hard problem, but
instead generates a random mapping from plaintext to ciphertext through the use
of a pseudo-random function. Boldyreva et al. (2009) later improved the efficiency
by providing a technique to generate the encryption mappings on-the-fly, without
having to reproduce the entire mapping from the key each time.



The security guarantee is that an attacker learns nothing about the plaintexts p_i
beyond their order. While there has been significant interest in and adoption of order
preserving encryption, it should be noted that the scheme has been shown to leak
information, up to half the bits of the plaintexts, under certain conditions (Boldyreva
et al. 2011).
Adapting OPE to range queries is straightforward. For the audit log example
in Fig. 11.7, encrypt all time elements using OPE and all keywords using
the scheme in Sect. 11.3.4. To search for logs on 2014/04/10, we search
for time > 2014/04/09 and time < 2014/04/11. For A = E(2014/04/09),
B = E(2014/04/11) and an encrypted log record, {E(Log), E(keywords), E(time)},
a match is identified if

B > E(time) > A.
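A toy order-preserving mapping makes the range query concrete. This is an illustration only: ciphertexts are cumulative sums of key-dependent pseudo-random gaps, which preserves order by construction; a real scheme such as Boldyreva et al. (2009) samples the mapping lazily and comes with a proper security analysis.

```python
import hashlib
from datetime import date

def ope_encrypt(key: bytes, p: int) -> int:
    """Map plaintext integer p to a strictly increasing ciphertext."""
    c = 0
    for i in range(p + 1):
        digest = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        c += int.from_bytes(digest[:2], "big") + 1   # gap >= 1 keeps order strict
    return c

key = b"ope-demo-key"
base = date(2014, 1, 1)

def enc_date(d: date) -> int:
    # Encode dates as day offsets from a base date before OPE encryption.
    return ope_encrypt(key, (d - base).days)

logs = {d: enc_date(d) for d in
        [date(2014, 4, 9), date(2014, 4, 10), date(2014, 4, 11), date(2014, 4, 12)]}

# Range query: time > 2014/04/09 AND time < 2014/04/11, entirely on ciphertexts.
A, B = enc_date(date(2014, 4, 9)), enc_date(date(2014, 4, 11))
matches = [d for d, c in logs.items() if A < c < B]
print(matches)  # [datetime.date(2014, 4, 10)]
```

The server compares only ciphertexts; it never sees the plaintext dates inside the query interval.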


11.5 Media
The growing importance of media cannot be overstated. It is estimated that, by
2019, 80% of the world's bandwidth will be consumed by media. At the
end of 2015, the figure sat at 70%, with Netflix and YouTube together accounting for over
50% of the data sent over the Internet. Naturally, the need to process videos, audio or
images in a secure and privacy-aware manner will be of growing interest in the coming
years. While studies in encrypted media processing are still in their infancy, there exist
some interesting and working solutions. As with text, investigations began with search.

11.5.1 Keyword Based Media Search
The simplest approach to adapting existing text-based searchable encryption to
media is a meta-data-only media search. Consider a set of images,
I = {I_1, I_2, ..., I_n}; we first extract a list of keywords for each image. This
extraction process can be manual, i.e. a person assigning keywords such as 'Man',
'bird', 'table', 'HighRes', or it may be done through artificial intelligence (AI) and
image recognition systems.
Once all images have been assigned keywords, all text-based solutions discussed
in Sect. 11.3 apply as is, by considering the image set as the document set. This is
achievable because the search mechanisms of the text-based solutions are all based
on extracted keywords, and the documents are encrypted separately from the search
mechanism, be it an index, Bloom filters or IBE-encrypted keywords. The reason
for the separation is that searching data content on-the-fly is computationally
expensive, even when unencrypted, since each file must be scanned as a whole
if no pre-processing was performed. Due to the security guarantees of standard



Fig. 11.9 Keywords-based image filtering system based on IBE

symmetric and asymmetric encryption algorithms, processing of encrypted data is
often impossible or very computationally intensive.
Figure 11.9 shows a media filtering system based on the email filtering system
in Sect. 11.3.3. The system depicts an image bank that receives images from various
sources. Occasionally, some images may contain sensitive or personally identifiable
information that requires blurring before being placed in the image bank. The data
owner would like to maintain privacy and prevent the host server from learning the
image contents. However, due to the quantity of images received, the data owner
would like an automatic sorting system that facilitates post-processing upon
decryption. The depicted system achieves this by requiring each user to encrypt their
images using a symmetric encryption algorithm and to encrypt a series of keywords
describing each image using IBE.

11.5.2 Content Based Media Search
Searching for images based on their content is a far more challenging task. Generally,
images are first processed to produce feature vectors, analogous to keywords in
documents, using algorithms such as SIFT (Lowe 1999). Then, the Euclidean
distance between feature vectors provides a means to measure the similarity between
different images. However, doing so in the encrypted domain is generally impossible
with standard encryption algorithms. We'll describe two recently proposed solutions, one based on OPE and one based on homomorphic encryption.
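For unencrypted data, the similarity measure itself is simple; the difficulty lies entirely in evaluating it under encryption. A toy example with hypothetical 4-dimensional feature vectors:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

query = [0.1, 0.8, 0.3, 0.5]
img_a = [0.1, 0.7, 0.3, 0.6]   # close to the query: likely a similar image
img_b = [0.9, 0.1, 0.8, 0.0]   # far from the query: likely a different image
print(euclidean(query, img_a) < euclidean(query, img_b))  # True
```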

11 Privacy-Aware Search and Computation Over Encrypted Data Stores


Media Search Using OPE

Nister and Stewenius (2006) introduced a highly cited technique for searching
through a database of images. The idea was that feature vectors extracted using
popular extraction algorithms can be arranged into clusters, and each cluster can be
thought of as a visual word. This opens up the possibility of using well-studied
text-based solutions for content-based image search. In particular, it was shown
that a vocabulary tree can efficiently process searches over extremely large databases.
In Lu et al. (2009), Lu described an adaptation of the technique to enable secure
searches on encrypted images. The solution considers the visual words as keywords
for images and builds a visual-word-to-image index. Unlike with text, the goal of the
index is not to provide a simple mapping to all images containing a visual word, but
rather to enable comparison between a query image and the images in the database.
Given a database of n images, feature vectors are first extracted and separated
into sets of visual words using k-means clustering. Figure 11.10
illustrates the process. Then, the feature cluster frequencies are counted to produce
an index mapping visual words to images along with the words' frequencies, as shown
in Fig. 11.11.
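A minimal sketch of this indexing step, under the assumption that SIFT descriptors are replaced by toy 2-D vectors (the names `kmeans`, `nearest`, and the `images` dictionary are illustrative):

```python
from collections import Counter, defaultdict

def nearest(pt, centers):
    # Index of the closest cluster centre (squared Euclidean distance).
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(pt, centers[i])))

def kmeans(points, k, iters=25):
    # Minimal k-means; each resulting cluster acts as one "visual word".
    centers = points[::max(1, len(points) // k)][:k]   # deterministic seeds
    for _ in range(iters):
        buckets = defaultdict(list)
        for pt in points:
            buckets[nearest(pt, centers)].append(pt)
        new_centers = []
        for i in range(k):
            b = buckets[i]
            new_centers.append(tuple(sum(c) / len(b) for c in zip(*b))
                               if b else centers[i])
        centers = new_centers
    return centers

# Toy descriptors per image (stand-ins for SIFT feature vectors).
images = {
    "img_0": [(0.1, 0.2), (0.2, 0.1), (9.0, 9.1)],
    "img_1": [(8.9, 9.2), (9.1, 8.8), (0.2, 0.2)],
}
all_desc = [d for ds in images.values() for d in ds]
centers = kmeans(all_desc, k=2)

# Inverted index: visual word -> {image id: word frequency}, as in Fig. 11.11.
index = defaultdict(Counter)
for img, descs in images.items():
    for d in descs:
        index[nearest(d, centers)][img] += 1
```

The per-image word frequencies stored in `index` are exactly the values that the scheme below encrypts with OPE.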
Fig. 11.10 Vocabulary tree generation using k = 3 means

Fig. 11.11 Visual word to image index with frequencies (columns: Word ID, Image ID, Word frequency)

All visual word frequency values are encrypted using OPE, yielding E(w_i). Then,
borrowing the notion of inverse document frequency (IDF), each encrypted value
is further scaled by

IDF = log(M / N_i)

where M is the total number of images in the database and Ni is the number of
images containing the visual word, i. The scaling factor, IDF, skews the frequency
values such that less common and more distinctive visual words carry more weight
in determining similarity matches. The resulting index is stored on the cloud server.
The similarity of a query image and a database image is determined by the similarity between their
visual word frequency patterns. Given the visual word frequency list of a query image, we encrypt the values using OPE, obtaining
E(Q_1), E(Q_2), ..., E(Q_V), and scale the values by IDF. Note that V represents the
total number of visual keywords in the database. To compare against the word
frequencies of a database image, E(D_1), E(D_2), ..., E(D_V), the Jaccard similarity is computed:

Sim(Q, D) = |E(Q) ∩ E(D)| / |E(Q) ∪ E(D)| = Σ_{i=1}^{V} min(E(Q_i), E(D_i)) / Σ_{i=1}^{V} max(E(Q_i), E(D_i))


Due to the use of OPE, the order of the encrypted frequencies, and thus the results
of the min(·) and max(·) functions, are preserved. The computed similarity reflects that
computed over plaintext; therefore, the matching can be performed server-side.
In addition to protecting the frequency values, the use of OPE also protects their
distribution, which may reveal information about the image content, by flattening it.
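To make the mechanics concrete, here is a toy sketch (all names, e.g. `make_ope` and `sim`, are ours; a real deployment would use a scheme such as Boldyreva et al.'s OPE). A strictly increasing random mapping stands in for OPE, and the server ranks database images by the Jaccard similarity computed entirely on ciphertexts.

```python
import random

def make_ope(domain_max, seed=7):
    # Toy order-preserving "encryption": a strictly increasing map built
    # from cumulative random gaps (illustration only, not secure).
    rng = random.Random(seed)
    table, acc = [], 0
    for _ in range(domain_max + 1):
        acc += rng.randint(1, 10)
        table.append(acc)
    return lambda x: table[x]

def sim(enc_q, enc_d):
    # Jaccard similarity over ciphertexts: because OPE preserves order,
    # min/max pick out the same elements as they would on the plaintexts.
    return sum(map(min, zip(enc_q, enc_d))) / sum(map(max, zip(enc_q, enc_d)))

E = make_ope(100)
query = [3, 0, 7, 2]                       # IDF-scaled visual-word frequencies
db = {"img_a": [3, 0, 7, 2], "img_b": [1, 5, 0, 9]}

enc_q = [E(v) for v in query]
scores = {name: sim(enc_q, [E(v) for v in vec]) for name, vec in db.items()}
best = max(scores, key=scores.get)         # server-side ranking on ciphertexts
assert best == "img_a"                     # identical image scores exactly 1.0
```

Note that the encrypted similarity values themselves differ numerically from the plaintext ones; what survives encryption is the min/max selection, which is what the ranking relies on.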

Homomorphic Encryption

Homomorphic encryption allows computations to be carried out on ciphertexts,
where the results decrypt to the corresponding computation on the plaintexts. For
example, an additively homomorphic scheme has the following property:

E(A) + E(B) = E(A + B)

This feature allows third parties to perform computations without exposing
confidential information. An additively homomorphic scheme may also allow multiplication by a plaintext to be performed via

E(m_1)^{m_2} = E(m_1 · m_2),


where m_2 is in the clear. Until recently, most homomorphic encryption algorithms were
either additive or multiplicative, but not both. Gentry (2009) described
the first fully homomorphic encryption algorithm, which supports both addition
and multiplication over ciphertexts, opening the door to many applications and a
dramatic increase in interest in computing over encrypted data. After many years of



improvements, fully homomorphic encryption algorithms can now run in relatively
reasonable time. However, their computational cost remains much higher than that of all
popular encryption algorithms, limiting their use in practice.
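The two homomorphic properties above can be checked with a textbook Paillier sketch (toy parameters, illustration only, not secure). In Paillier, the abstract "addition" of ciphertexts is realized as modular multiplication, and multiplication by a plaintext as modular exponentiation.

```python
import math
import random

# Textbook Paillier cryptosystem with toy primes (illustration only, not secure).
p, q = 1000003, 1000033
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)                   # Carmichael function of n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # decryption constant

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:                # r must be a unit mod n
            return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

A, B = 1234, 5678
# Additive homomorphism: E(A) * E(B) mod n^2 decrypts to A + B.
assert decrypt(encrypt(A) * encrypt(B) % n2) == A + B
# Multiplication by a plaintext: E(A)^B mod n^2 decrypts to A * B.
assert decrypt(pow(encrypt(A), B, n2)) == A * B
```

Because a fresh random r enters every encryption, two encryptions of the same plaintext look unrelated, yet both assertions hold.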
Recall that content-based image search typically relies on Euclidean distances
between feature vectors as a measure of similarity, that is, on computing

||F_Q − F_{D_j}|| = sqrt( Σ_{i=1}^{n} (F_{Q,i} − F_{D_j,i})² )



Since the square root is monotonic, using the squared Euclidean distance is equally valid as a distance measure. Naturally,
computing the summation is possible using fully homomorphic encryption, although
at a high computational cost. To do so, the data owner encrypts the features F_{D_j,i} for
each image D_j and uploads them to the server. To perform a query for an image Q, the data
owner encrypts all features F_{Q,i} of the query image and uploads them to the server. The
server computes

dist_j = Σ_{i=1}^{n} (F_{Q,i} − F_{D_j,i})²

for all images in the database and returns the dist_j's to the data owner. The smallest distance values are identified as matches
by decrypting the results.
Additive homomorphic encryption, such as the Paillier cryptosystem (Paillier 1999),
can also be used to query an image in plaintext against an encrypted image database,
by exploiting the following property:

(F_{Q,i} − F_{D_j,i})² = F_{Q,i}² − 2 F_{Q,i} F_{D_j,i} + F_{D_j,i}²





Since F_{Q,i} is in the clear, the first term is available. With E(F_{D_j,i}) as ciphertext, the
second term can be computed using Eq. (11.13). The third term can be uploaded
with the features during setup. The server can then compute the encrypted distance
without interacting with the data owner. The encrypted distance must then be sent back
to the data owner for decryption. It should be noted, however, that the ability to
perform similarity searches using images in the clear allows the server to infer that a
matched encrypted image is close to the query image presented.
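The three-term assembly can be sketched by reusing the textbook Paillier scheme from above (toy parameters, not secure; the variable names `stored`, `enc_dist`, etc. are ours). The server holds the query in plaintext and builds the encrypted squared distance term by term, exactly as the expansion prescribes.

```python
import math
import random

# Textbook Paillier with toy primes (illustration only, not secure).
p, q = 1000003, 1000033
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Setup: the data owner uploads E(F_D,i) and E(F_D,i^2) for a database image.
F_D = [5, 1, 9]
stored = [(encrypt(f), encrypt(f * f)) for f in F_D]

# Query: the server knows F_Q in plaintext and assembles, per feature i,
# E(F_Q,i^2 - 2*F_Q,i*F_D,i + F_D,i^2) without contacting the owner.
F_Q = [4, 3, 8]
enc_dist = 1
for fq, (e_fd, e_fd2) in zip(F_Q, stored):
    term = encrypt(fq * fq)                                  # query term is plain
    term = term * pow(pow(e_fd, -1, n2), 2 * fq, n2) % n2    # cross term, E(-2*fq*fd)
    term = term * e_fd2 % n2                                 # owner-supplied square
    enc_dist = enc_dist * term % n2                          # homomorphic addition

# Only the data owner can decrypt the squared Euclidean distance.
expected = sum((a - b) ** 2 for a, b in zip(F_Q, F_D))
assert decrypt(enc_dist) == expected  # (4-5)^2 + (3-1)^2 + (8-9)^2 = 6
```

The negative cross term is handled by exponentiating the modular inverse of the ciphertext, which encrypts −2·F_{Q,i}·F_{D_j,i} modulo n; since the true distance is non-negative and smaller than n, the decryption is exact.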

11.6 Other Applications
While conjunctive keyword searches remain the central functionality required for
many secure data outsourcing applications, researchers have also investigated more
specialized searches and functionalities. Examples include:
• Ranking of search results
• Fuzzy search
• Phrase search
• Similarity search for texts
• Privacy-protected recommender systems
• Copyright management
• Signal processing

The ability to rank search results can increase the relevance and accuracy
of matches, particularly in large databases. Fuzzy searches offer a more user-friendly and intelligent search environment. Phrase search deals with sequential data
processing, which may find use in genomics, where individuals' DNA may pose
a privacy risk. Even virus detection (Poon and Miri 2016) has been suggested, to
identify the increasing amount of malware that may remain dormant in cold storage.
Aside from search, the ability to compute over encrypted data offered by
homomorphic encryption could lead to secure outsourcing of computation. In
addition to enabling a remote server to store and search over encrypted data without
revealing its content, the server may even manipulate the data in meaningful ways without
learning its content. A sample application would be a privacy-protected recommender
system. Suppose a recommender system has identified that users belonging to
certain clusters are interested in certain products; a customer may then send his feature
set, which may include previously viewed products and stated preferences, in encrypted
form and have the recommendation computed by the server and sent back in
encrypted form, without the server learning what products he has viewed or what his preferences are.
Homomorphic encryption has also been suggested as a way to protect user
privacy while managing digital rights of encrypted media. In particular, the use
of watermarks and a simple detection algorithm, consisting of the computation of
the correlation between the watermark and an image, can be performed in a manner
similar to the encrypted distance computation described above.

11.7 Conclusions
In this chapter, we gave an overview of the most important works in searchable
encryption, along with some more recent works and the relationships
between them. Classic keyword search techniques such as encrypted indexes, IBE,
and Bloom filters continue to form the basis of many works in the literature today.
Recent years have also seen the development of order-preserving encryption and
fully homomorphic encryption, which have generated significant interest in the research
community. While the sample applications discussed in this chapter were described
in simple scenarios, it is not difficult to imagine these applications at larger scale. For
example, private Twitter feeds could be encrypted with searchable keyword tags to
support sentiment analysis, and recommender systems could provide recommendations
without revealing users' interests by using homomorphic encryption.
As the world continues to become more connected and the amount of data
generated continues to climb, the need for privacy and security will become
correspondingly more important. Much of the difficulty in adapting cryptographic
techniques to Big Data lies in their relatively high computational cost on today's
technology. As our computational power increases and algorithms improve, the
cryptographic techniques developed today may well form the basis of secure big
data systems in the future.

References

Agrawal, R., Kiernan, J., Srikant, R., & Xu, Y. (2004). Order preserving encryption for numeric
data. In Proceedings of the 2004 ACM SIGMOD International Conference on Management of
Data (pp. 563–574).
Boldyreva, A., Chenette, N., Lee, Y., & O’Neill, A. (2009). Order-preserving symmetric encryption. In Proceedings of the 28th Annual International Conference on Advances in Cryptology:
The Theory and Applications of Cryptographic Techniques (pp. 224–241).
Boldyreva, A., Chenette, N., & O’Neill, A. (2011). Order-preserving encryption revisited:
improved security analysis and alternative solutions. In Proceedings of the 31st Annual
Conference on Advances in Cryptology (pp. 578–595).
Boneh, D., & Franklin, M. (2001). Identity-based encryption from the Weil pairing. In Advances in
Cryptology - Crypto 2001: 21st Annual International Cryptology Conference, Santa Barbara,
California, USA, August 19–23, 2001 Proceedings (pp. 213–229).
Boneh, D., Crescenzo, G. D., Ostrovsky, R., & Persiano, G. (2004). Public key encryption with
keyword search. In Proceedings of Eurocrypt (pp. 506–522).
Gentry, C. (2009). Fully homomorphic encryption using ideal lattices. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing (pp. 169–178).
Goh, E.-J. (2003). Secure indexes. Cryptology ePrint Archive, Report 2003/216.
Gymrek, M., McGuire, A. L., Golan, D., Halperin, E., & Erlich, Y. (2013). Identifying personal
genomes by surname inference. Science, 339(6117), 321–324.
How hard is it to ‘de-anonymize’ cellphone data? (n.d.). http://news.mit.edu/2013/how-hard-itde-anonymize-cellphone-data. Accessed 10 September 2016.
Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the
International Conference on Computer Vision.
Lu, W., Swaminathan, A., Varna, A. L., & Wu, M. (2009). Enabling search over encrypted
multimedia databases. In Proceedings of SPIE, Media Forensics and Security (pp. 7254–7318).
Narayanan, A., & Shmatikov, V. (2008). Robust de-anonymization of large sparse datasets. In 2008
IEEE Symposium on Security and Privacy (pp. 111–125).
Nister, D., & Stewenius, H. (2006). Scalable recognition with a vocabulary tree. In Proceedings
of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(pp. 2161–2168).
Paillier, P. (1999). Public-key cryptosystems based on composite degree residuosity classes.
Lecture Notes in Computer Science, 1592, 223–238.
Poon, H., & Miri, A. (2016). Scanning for viruses on encrypted cloud storage. In IEEE
Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing,
Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People,
and Smart World Congress (pp. 954–959).
Waters, B., Balfanz, D., Durfee, G., & Smetters, D. K. (2004). Building an encrypted and
searchable audit log. In Network and Distributed System Security Symposium (pp. 215–224).

Chapter 12

Civil Infrastructure Serviceability Evaluation
Based on Big Data
Yu Liang, Dalei Wu, Dryver Huston, Guirong Liu, Yaohang Li, Cuilan Gao,
and Zhongguo John Ma

12.1 Introduction
12.1.1 Motivations of Big-Data Enabled Structural Health Monitoring
Civil infrastructure, such as bridges and pipelines, is the mainstay of economic
growth and sustainable development; however, the safety and economic viability of
civil infrastructure are becoming increasingly critical issues. According to a report
based on the U.S. Federal National Bridge Inventory, the average age of the nation's 607,380

Y. Liang () • D. Wu
Department of Computer Science and Engineering, University of Tennessee at Chattanooga,
Chattanooga, TN 37403, USA
e-mail: yu-liang@utc.edu; dalei-wu@utc.edu
D. Huston
Department of Mechanical Engineering, University of Vermont, Burlington, VT 05405, USA
G. Liu
Department of Aerospace Engineering & Engineering Mechanics, University of Cincinnati,
Cincinnati, OH 45221, USA
Y. Li
Department of Computer Science, Old Dominion University, Norfolk, VA 23529, USA
C. Gao
Department of Mathematics, University of Tennessee at Chattanooga, Chattanooga,
TN 37403, USA
Z.J. Ma
Department of Civil and Environmental Engineering, University of Tennessee, Knoxville,
TN 37996, USA
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_12



Y. Liang et al.

bridges is currently 42 years. One in nine of those bridges is rated as structurally
deficient. According to a report of the United States Department of Transportation,
the U.S. oil and gas pipeline mileage had reached 158,329 miles by 2014. A new
analysis of oil and gas pipeline safety in the United States reveals that nearly 300
incidents have occurred per year since 1986.
On the other hand, a prompt and accurate detection of infrastructure deficiency,
such as bridge collapse and pipeline leaking, is extremely challenging. In this
work, a multiscale structural health monitoring and measuring system (Catbas
et al. 2012; Ye et al. 2012) based on Hadoop Ecosystem (Landset et al. 2015),
denoted as MS-SHM-Hadoop for simplicity, is proposed. By integrating sensor
technology, advanced wireless network, data-mining based on big-data platform,
and structural mechanics modeling and simulation, MS-SHM-Hadoop is equipped
with the following functions: (1) real-time sensory data acquisition, integration,
and analysis (Liang and Wu 2014, 2016; Liang et al. 2013a,b); (2) quantitative
measurement of the deficiency of nation-wide civil infrastructure; (3) identification
of civil infrastructure's structural faults and quantitative prediction of their life
expectancy according to long-term surveillance of the dynamic behavior
of the civil infrastructure.
To monitor and manage civil infrastructure effectively, wireless sensor networks
(WSNs) have received extensive attention. A WSN is generally comprised of
multiple wireless smart sensor nodes (WSSNs) and a base station which can be
a computer server with ample computation and storage resources. Featured with
low cost in installation and maintenance and high scalability, the WSSNs have
been deployed on the Golden Gate Bridge by UC Berkeley in 2006 (Kim et al.
2007) and recently on Jindo Bridge in Korea through a collaborative research
among Korea, the US, and Japan (Jang et al. 2010). Researchers have also demonstrated
enthusiasm for using wireless smart sensors to monitor full-scale civil bridge structures
in Lynch et al. (2006) and Pakzad (2008). Full-scale deployment of WSNs on real
bridge structures is transformative because wired sensor networks
still dominate SHM projects. Challenges lie in the availability of power supplies and of
mature damage-monitoring algorithms.
Heterogeneous and multi-modal data about the structural health of civil infrastructure have been collected. Although there has been some work adopting
data management systems and machine learning techniques for structural monitoring, few platforms have been investigated that integrate the full spectrum of input data
seamlessly. In Sofge (1994), neural-network-based techniques are used for modeling
and analyzing dynamic structural information to recognize structural defects.
In Guo et al. (2014), to avoid the need for a large amount of labeled real-world
training data, a large amount of unlabeled data is used to train a feature
extractor based on the sparse coding algorithm. Features learned from sparse coding
are then used to train a neural network classifier to distinguish different statuses
of a bridge. The work in Roshandeh et al. (2014) presents a layered big data
and real-time decision-making framework for bridge data management as well
as health monitoring. In Nick et al. (2015), both supervised and unsupervised
learning techniques for structural health monitoring are investigated by considering
acoustic emission signals. A data management system for civil infrastructure based on NoSQL

12 Civil Infrastructure Serviceability Evaluation Based on Big Data


Fig. 12.1 Three major inputs for MS-SHM-Hadoop

database technologies for bridge monitoring applications was proposed in Jeong
et al. (2016b). A cloud service platform is also deployed to enhance the scalability,
flexibility, and accessibility of the data management system (Jeong et al. 2016a).

12.1.2 Overview of the Proposed MS-SHM-Hadoop
The three major inputs for MS-SHM-Hadoop are shown in Fig. 12.1. Sensory data
includes the cyclic external load and structural response, and surrounding environmental conditions. Supporting information refers to all civil infrastructure related
information, such as bridge configuration database (National Bridge Inventory),
pipeline safety information (the Pipeline and Hazardous Materials Safety Administration Inventory), transportation status (National Transit Database), and weather
conditions (National Climatic Data Center). Structural configurations include the
geometric formulation of civil infrastructure and construction material description.
The joint consideration of big data and sensor-oriented structural health monitoring and measuring is based on the following considerations: (1) Many critical
aspects of civil infrastructure performance are not well understood. The reasons
for this include the extreme diversity of civil infrastructure, the widely varying
conditions under which civil infrastructure serves, and the lack of reliable data
needed to understand performance and loads. Meanwhile, as sensors for civil
infrastructure structural health monitoring are increasingly deployed across the
country, massive, information-rich data from different kinds of sensors are acquired
and transmitted to the databases of civil infrastructure management administrations.
(2) There exist high-degree correlations among civil infrastructure data,
which can be effectively discovered by data mining over a big-data platform.
The MS-SHM-Hadoop has the following capabilities: (1) real-time processing
and integration of structure-related sensory data derived from heterogeneous sensors



through advanced wireless networks and edge computing (EC); (2) highly efficient
storage and retrieval of SHM-related heterogeneous data (i.e., with varying formats,
durability, functions, etc.) over a big-data platform; (3) prompt yet accurate
evaluation of the safety of civil structures according to historical and real-time
sensory data.
The following issues in the MS-SHM-Hadoop need to be investigated:
(1) research samples screening: survey the nation-wide civil infrastructure’s
information databases, characterize and screen representative research samples
with low safety levels; (2) performance indicators (PIs) determination: evaluate
and determine proper multiple PIs to predict civil infrastructure performance in a
quantitative manner; (3) data fetching and processing: fetch relevant sensor data
from Hadoop platform, according to PIs requirement, and process raw sensor data
into load effects and load spectrum (Sohn et al. 2000) through edge computing
technology; (4) multi-scale structural dynamic modeling and simulation: based
on historical data of sample civil infrastructure, establish finite element (FE) and
particle models for global structural analysis and local component fatigue analysis
(Zhu et al. 2011); (5) evaluation of the impact of innovative civil infrastructures
construction methods on infrastructure performance by instrumenting two new
bridges in Tennessee. Civil infrastructure construction, design, and materials have
changed over time, and these changes may affect civil infrastructure performance.
For example, accelerated bridge construction (ABC) is a new process in bridge
construction and may affect bridge performance (He et al. 2012). These two
new bridges can also serve as a testing bed for the proposed activities in this
project. (6) Civil infrastructure performance evaluation: assess civil infrastructure’s
performance by PIs of global structure and local critical components (Frangopol
et al. 2008).
The implementation of MS-SHM-Hadoop involves the following cutting-edge
technologies: (1) machine learning, including classification, clustering, regression,
and predictive analysis based on general civil infrastructure information (e.g.,
age, maintenance management, weather conditions, etc.), sensory data, and
structural configurations (e.g., infrastructure material, length, etc.), as well as Bayesian
networks and stochastic analysis; (2) structural dynamic analysis; (3) signal processing
for external loads and structural responses; (4) a multi-scale strategy ranging from
nationwide civil infrastructure survey to specific components' structural reliability
analysis; and (5) a Hadoop ecosystem deployed on an edge computing server to achieve
high scalability, including acquisition, fusion, and normalization of heterogeneous sensory data, and highly scalable and robust data analysis and information query.

12.1.3 Organization of This Chapter
The remainder of this chapter is organized as follows: Sect. 12.2 describes the architecture and flowchart of MS-SHM-Hadoop; Sect. 12.3 introduces the acquisition
of sensory data and the integration of structure-related data; Sect. 12.4 presents the
nationwide bridge survey; Sect. 12.5 investigates the global structural integrity
of civil infrastructure according to structural vibration; Sect. 12.6 investigates the
reliability analysis of localized critical components; Sect. 12.7 employs Bayesian
networks to investigate civil infrastructure's global integrity according to components' reliability, which is obtained in Sect. 12.6; and Sect. 12.8 concludes the chapter.

12.2 Implementation Framework About MS-SHM-Hadoop
12.2.1 Infrastructure of MS-SHM-Hadoop
Figure 12.2 shows the infrastructure of MS-SHM-Hadoop, which consists of
the following three modules: the sensor grid (SG) module, the data processing
and management (DPM) module based on Hadoop platform, and the reliability
evaluation (RE) module based on structural dynamics modeling and simulation.
A more detailed description about each module is given below.
The sensor grid (SG) module mainly acquires and pre-processes the raw sensory data
and then transmits it to the data processing and management (DPM) module. Mobile
computing gateways (denoted as MC for simplicity) coordinate with each other
through a wireless network. SensorCtrl is the control module that tunes the sensors'
configurations for better observation of the area of interest, which is located through
structural analysis (the RE module).
The Hadoop-enabled data processing and management (DPM) module mainly
integrates, transforms, classifies, and stores the data with high fault-tolerance

Fig. 12.2 Infrastructure of MS-SHM-Hadoop



and scalability. Based on the Hadoop Distributed File System (HDFS) and the MapReduce high-performance parallel data processing paradigm (Landset et al. 2015),
R-Connector and Mahout (Landset et al. 2015) provide powerful statistics and
machine learning capabilities. Inspired by BigTable techniques (including row key,
column key, and time stamp), HBase (Landset et al. 2015) efficiently accesses
large-scale heterogeneous real-time and historical data. Flume (Landset et al. 2015)
collects, aggregates, and moves large amounts of streaming data (i.e., the sensory
data about civil infrastructure status) into Hadoop from a variety of sources. Hive
(Landset et al. 2015) provides a data warehouse infrastructure to manage all
the data corresponding to civil infrastructure’s serviceability; Pig (Landset et al.
2015) offers MapReduce-enabled query and processing; Sqoop (Landset et al.
2015) supports the ingestion of log data, which is related to civil infrastructure
design and operation such as civil infrastructure configuration (e.g., National Bridge
Inventory, the Pipeline and Hazardous Materials Safety Administration Inventory),
transportation status (e.g., National Transit Database), and weather conditions (e.g.,
NOAA’s National Climatic Data Center). In this work, InfoSys manages the external
log data. VIBRA stores the cyclic external force load (or vibration signals), which is
applied by the wind or vehicles, and the corresponding structural response. StConD
component stores the structure configuration (i.e., geometry configuration and
mesh) of the civil structure. EnvD (the environmental data component) keeps environmental
parameters such as temperature, moisture, etc. SenD is a database component that
keeps the configurations (e.g., location, brand, mechanism, maintenance schedule,
etc.) of sensors attached to the civil infrastructure.
Based on structural dynamics theory and signal processing techniques, the RE
module mainly uses historical or real-time sensory data to identify the global
(or infrastructure-wise) or component-wise structural faults. In addition, Bayesian
network is employed to formulate the integrity analysis according to components’
structural reliability.
The DPM and the RE modules are deployed in an edge computing (EC) server
to facilitate data storage and processing while decreasing the data transmission
cost from multi-modal sensors and increasing system agility and responsiveness.
Multi-modal sensors, MC gateway, 3G/4G base stations, and end devices used by
operators/users will form a network edge. EC is pushing the frontier of computing
applications, data, and services away from centralized nodes to a network edge
(Ahmed and Ahmed 2016). It enables analytics and knowledge generation to occur
at the source of the data (Mäkinen 2015). An EC platform reduces network latency
and offers location-aware and context-related services by enabling computation and
storage capacity at the network edge (Luo et al. 2010; Nunna et al. 2015; Wu et al.



Fig. 12.3 Flowchart of multiscale structural health evaluation: (a) nationwide civil infrastructure
survey; (b) global structural integrity analysis; (c) localized structural component reliability
analysis (The bridge pictures are derived from mnpoliticalroundtable.com)

12.2.2 Flowchart of the MS-SHM-Hadoop
Figure 12.3 shows the systematic approach to the implementation of the MS-SHM-Hadoop system. Based on the acquired sensory data and civil infrastructure related
log data, multiscale structural health monitoring and measurement consists of
the following stages: (Stage 1) nationwide civil infrastructure database survey
using machine learning techniques; (Stage 2) global structural integrity analysis
using signal processing and structural dynamics; and (Stage 3) localized structural
component reliability analysis using stochastic methods or multiscale modeling and
simulation.
With reference to Fig. 12.2, it can be observed that: Stage 1 is implemented in
the Sensor Grid (SG) module and partially in the Data Processing and Management
(DPM) module; Stage 2 is implemented in the DPM module; and Stage 3 is
implemented in the Structure Evaluation module.
By surveying the nationwide civil infrastructure's status on a big-data platform
or other information systems, Stage 1 aims to obtain a preliminary characterization
of the safety level of major bridges in the United States from the National Bridge Inventory (NBI) database and of pipelines from the Pipeline and Hazardous Materials Safety
Administration Inventory. The NBI covers dimensions, location, type, design criteria,
traffic, structural and functional condition, and much other information. A general
screening and prioritization analysis based on weighting/loading considerations can
be performed to identify aging civil infrastructure with relatively low safety levels.
The serviceability of a civil infrastructure is qualitatively determined by a number
of overall factors, such as the year of construction, structural configuration, construction
material, weather conditions, traffic flow intensity, and life-cycle cost. In this project,
clustering analysis is employed to categorize civil infrastructure according to
serviceability.
Stage 2 aims to evaluate quantitatively the global structural health status of
targeted infrastructure that are characterized with low safety level from Stage 1.
Global structural integrity analysis consists of the following intensive data-based
structural dynamics steps: (1) extraction of the measured structural resonance frequencies
from the time-history sensory data via the fast Fourier transform (FFT) for the
targeted infrastructure; (2) computation of the fundamental natural frequencies (e.g.,
the 10 lowest natural frequencies) of the infrastructure using the finite-element method
(FEM), which gives the upper bound of the solution; (3) computation of the
fundamental natural frequencies of the infrastructure using the node-based smoothed finite-element
method (NS-FEM), which gives the lower bound of the solution; (4) evaluation of
the discrepancy between the measured and computed fundamental natural frequencies;
(5) establishment of the relationship between the discrepancy of
the fundamental natural frequencies and the health status of the infrastructure; and
(6) identification of possible zones with heavy damage and degradation, based on the
distribution of the discrepancy obtained using a sufficiently large number
of sensors deployed over the span of the infrastructure.
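Step (1) of the pipeline above can be illustrated with a short, self-contained sketch. The function name `dominant_frequencies` and the synthetic two-mode record are ours; a production system would apply an optimized FFT (e.g., numpy.fft) to real sensor streams rather than the direct DFT used here.

```python
import cmath
import math

def dominant_frequencies(signal, fs, k=3):
    # Identify the k strongest spectral peaks (candidate resonance
    # frequencies) in a time-history record via a direct DFT.
    N = len(signal)
    mags = []
    for f_bin in range(1, N // 2):            # skip the DC component
        s = sum(x * cmath.exp(-2j * math.pi * f_bin * t / N)
                for t, x in enumerate(signal))
        mags.append((abs(s), f_bin * fs / N)) # (magnitude, frequency in Hz)
    mags.sort(reverse=True)
    return sorted(f for _, f in mags[:k])

# Synthetic sensor record: two structural modes at 2 Hz and 5 Hz, fs = 64 Hz.
fs, N = 64, 256
record = [math.sin(2 * math.pi * 2 * t / fs)
          + 0.5 * math.sin(2 * math.pi * 5 * t / fs)
          for t in range(N)]
print(dominant_frequencies(record, fs, k=2))  # prints [2.0, 5.0]
```

In the MS-SHM-Hadoop setting, the recovered peak frequencies would then be compared against the FEM (upper-bound) and NS-FEM (lower-bound) natural frequencies from steps (2) and (3).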
Following the time-domain or frequency-domain algorithms, Stage 3 aims to
obtain a precise description of the serviceability of local components in the
heavily damaged zones identified in Stage 2. This provides the remaining
service life of the infrastructure, as well as possible strategies for life prolongation. With the load effects from sensors and the computational values from
FE analysis, structural performance indicators can be calculated in both the
local scale and the global scale. A proper assessment theory, such as the neuro-fuzzy hybrid
method (Kawamura and Miyamoto 2003) or the DER&U method (Zhao and Chen
2001), can be evaluated and utilized. Finally, the structural performance evaluation
results can be uploaded to the management system of the structural administration to
provide professional support for decision making (Catbas et al. 2008).

12.3 Acquisition of Sensory Data and Integration
of Structure-Related Data
Table 12.1 lists representative sensors in the proposed system needed to acquire
the following information: external load; structure’s response to external load; and
environmental circumstance parameters. To provide localized monitoring data
analysis, we adopt a mobile computing (MC) gateway that collects the raw sensory
data, pre-processes it, and sends it to the DPM module via a wired or wireless
network. The MC provides real-time analysis of the situation at a specific location
on the infrastructure. The MC is carried by a robot or unmanned aerial vehicle
(UAV) to collect the acquired data from the sensors covering a specified area on

12 Civil Infrastructure Serviceability Evaluation Based on Big Data


Table 12.1 List of sensors

Monitoring data category     Sensor type                  Data to be collected
External loading and         Accelerometer                Proper acceleration
structural response          Displacement transducer      Structural displacement
                             Strain gage                  Strain of the structure
                             Laser doppler vibrometer     Vibration amplitude
                             GPS station                  Location of structure and time
Environment                  Thermometer and hygrometer   Temperature and humidity
                             Anemometer                   Wind speed and direction
Traffic flow                 CCD camera                   Vehicle type, throughput, velocity
                             Weigh-in-motion station      Weight of the still/moving vehicles

the infrastructure. The MC also communicates with the DPM module, where further
extensive analysis of the collected data is performed. For large-scale monitoring,
multiple MCs can be deployed based on the structure of an infrastructure and
communicate with each other to acquire more data from sensors and broaden the
monitoring analysis.
Wireless sensor networks (WSNs) play a major role in monitoring infrastructure
health, where data is collected and sent to the data processing and management module
(Wu et al. 2011, 2014, 2015a). Despite the benefits that WSNs can provide, such as
high scalability, high deployment flexibility, and low maintenance
cost, sensors suffer from computational and energy limitations, which need to be
taken into consideration for extended, reliable, and robust monitoring.
Energy-efficient sensors are crucial for accurate long-duration monitoring in
SHM systems. On the one hand, to accurately capture the random processes of
structural mechanics and detect potential damage of complex structures in
real time, both long-term and real-time monitoring of these structures by sensor
networks is needed. On the other hand, sensors usually have a very limited energy
supply, for example, battery power, which is consumed by different modules in
the sensors, including the sensing module, the on-board data processing and storage
module, and the communication module. Therefore, the development of methods and
strategies for optimizing sensors' energy consumption is imperative. Also,
the proposed system may incorporate various energy-harvesting devices, which can
capture and generate power from ambient energy sources such as vibration, strain,
wind, solar, and thermal energy. Civil infrastructure is ideally suited to harvesting such
types of energy (Gupta et al. 2014). For example, sensors with piezoelectric materials
can be mounted on infrastructure, based on its structural information, to harvest
vibrational energy.
In the proposed SHM system, the parameters to be monitored are heterogeneous:
temperature, wind, acceleration, displacement, corrosion, strain, traffic,
etc. These parameters have different spatial and temporal properties, for example,


Y. Liang et al.

different variation speeds and locations. Depending on the nature of the monitored
parameters, some sensors may work continuously while others may work in a
trigger mode. Based on these observations, the sampling rates in data acquisition and
the duty cycles (Ye et al. 2002) in wireless networking are optimized for the different
types of sensors.
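The idea of matching sampling rates and radio duty cycles to each parameter's variation speed can be sketched as follows; the bandwidth figures and the duty-cycle rule are illustrative assumptions, not values from the proposed system:

```python
# Per-sensor sampling rates (Nyquist-based) and duty cycles from the expected
# signal bandwidth of each monitored parameter; numbers are illustrative.
signal_bandwidth_hz = {
    "acceleration": 50.0,        # structural vibration: continuous, fast
    "strain": 10.0,
    "temperature": 0.001,        # slow environmental drift
    "wind_speed": 1.0,
}

def plan(bw_hz):
    fs = 2.5 * bw_hz             # sample somewhat above the Nyquist rate 2*bw
    # Slow sensors can sleep most of the time (trigger/low duty-cycle mode).
    duty = min(1.0, fs / 100.0)  # fraction of time the radio stays awake
    return fs, duty

for name, bw in signal_bandwidth_hz.items():
    fs, duty = plan(bw)
    print(f"{name}: sample at {fs:g} Hz, duty cycle {duty:.2%}")
```

Under this rule, fast vibration channels run continuously while temperature sensors wake only rarely.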

12.4 Nationwide Civil Infrastructure Survey
As the major task of the data processing and management (DPM) module, the nationwide
civil infrastructure survey is dedicated to classifying the nation's civil
infrastructure according to its life-expectancy. The Hadoop ecosystem (Landset et al.
2015) and deep learning (LeCun et al. 2010) are two enabling techniques for the
nationwide civil infrastructure survey.

12.4.1 The Features Used in the Nationwide Civil Infrastructure Survey
A variety of accurate features can be used in the nationwide civil infrastructure survey.
Material erosion, cyclic and random external loads, and the corresponding structural
responses are the major causes of civil infrastructure aging. A quantitative investigation
of the dynamic behavior of civil infrastructure helps to extract the features
for structural health. The following governing equation (Hulbert 1992; Petyt 2015)
describes the linear dynamics of infrastructure:
[M]{ü} + [C]{u̇} + [K]{u} = {L_traffic} + {L_wind} + {L_self-weight}
where [M], [C], and [K] are the mass, damping, and stiffness matrices, respectively
([C] = α[M] + β[K]); {ü}, {u̇}, and {u} are the acceleration, velocity, and displacement
vectors, respectively; and the external load effects {L_self-weight}, {L_traffic}, and
{L_wind} are the self-weight of the bridge, the traffic load, and the aerodynamic load
induced by wind, respectively. Load effects are stochastic due to random variations in
space and time. The Turkstra load combination (adding up the peak values) (Naess and
Røyset 2000) and the Ferry Borges-Castanheta load combination (time-scale based)
(Thoft-Christensen and Baker 1982) are two applicable strategies to model the
combination of load uncertainties.
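In its usual statement, Turkstra's rule takes one load process at its peak together with the companion loads at their simultaneous values, rather than summing all individual peaks. A sketch on synthetic histories (the gamma-distributed processes are illustrative, not calibrated load models):

```python
# Turkstra load combination vs. summing individual peaks (always >= Turkstra).
import numpy as np

rng = np.random.default_rng(7)
T = 1000
traffic = rng.gamma(2.0, 50.0, size=T)   # illustrative traffic load history (kN)
wind = rng.gamma(1.5, 40.0, size=T)      # illustrative wind load history (kN)

# Turkstra's rule: each load at its peak with the companion at the same
# instant; keep the worst case over all choices of the "leading" load.
c_traffic = traffic.max() + wind[np.argmax(traffic)]
c_wind = wind.max() + traffic[np.argmax(wind)]
turkstra = max(c_traffic, c_wind)

naive = traffic.max() + wind.max()       # peaks rarely coincide in time
print(turkstra <= naive)                 # → True
```

The naive sum of peaks is an upper bound, since load maxima seldom occur at the same instant.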
For small or medium scale civil infrastructure, the traffic load ({L_traffic}), which is
determined by the traffic velocity, density, and vehicle/load weight, often dominates
the external load effects. For large-scale, long-span civil infrastructure like suspension
bridges and cable-stayed bridges, the wind load ({L_wind}) dominates the external
loads. {L_wind} involves two degrees of freedom: the lift (or drag) force and the moment.



They are formulated in terms of the applied force and the corresponding aerodynamic
derivatives (Simiu and Scanlan 1986):

L_wind,lift = (1/2) ρ U² (2B) [K H₁ (ḣ/U) + K H₂ (B α̇/U) + K² H₃ α + K² H₄ (h/B)]

L_wind,moment = (1/2) ρ U² (2B²) [K A₁ (ḣ/U) + K A₂ (B α̇/U) + K² A₃ α + K² A₄ (h/B)]

In the above equations, ρ is the air mass density; U is the mean wind velocity; B is the
width of the bridge's girder; K = Bω/U is the reduced non-dimensional frequency, where
ω is the circular frequency; h and ḣ are the wind-induced displacement and its derivative;
α and α̇ are the structure's rotation and its derivative; and Aᵢ and Hᵢ (i = 1, 2, 3, 4) are the
aerodynamic derivatives, which are computed from the results of wind tunnel
experiments (Simiu and Scanlan 1986).
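Numerically, the lift expression evaluates as below; every parameter value, including the flutter derivatives H1–H4, is an illustrative assumption rather than a wind-tunnel result:

```python
# Self-excited lift per unit span from the Scanlan-type formulation above.
import numpy as np

rho = 1.225          # air density (kg/m^3)
U = 20.0             # mean wind velocity (m/s)
B = 12.0             # girder width (m)
omega = 1.8          # circular frequency (rad/s)
K = B * omega / U    # reduced non-dimensional frequency

h, h_dot = 0.05, 0.02           # heave displacement (m) and its rate (m/s)
alpha, alpha_dot = 0.01, 0.005  # rotation (rad) and its rate (rad/s)
H = [-2.0, 0.5, 3.0, 0.8]       # illustrative flutter derivatives H1..H4

L_lift = 0.5 * rho * U**2 * (2.0 * B) * (
    K * H[0] * h_dot / U + K * H[1] * B * alpha_dot / U
    + K**2 * H[2] * alpha + K**2 * H[3] * h / B)
print(L_lift)        # lift force per unit span (N/m)
```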
The above discussion of the wind-induced load focuses only on the vortex-induced
response (Simiu and Scanlan 1986). Less common aerodynamic phenomena
such as the buffeting response (i.e., random forces in turbulent wind) and flutter (i.e.,
self-excited forces in smooth or turbulent wind) (Simiu and Scanlan 1986) are not
investigated in this work. The dynamic behavior of civil infrastructure caused by
extreme weather or environmental conditions is not covered in this work either.
Figure 12.4 shows the features to be used to measure civil infrastructure's
life-expectancy. Structural dynamics features include the civil infrastructure's structural
configuration (e.g., the mass, damping, and stiffness matrices) and the cyclic external
load-effect/structural response (derived from in-house sensors or the National Transit
Database). The weather information can be derived from NOAA's National Climatic
Data Center. Accessory civil infrastructure information such as age, maintenance
policy, and construction budgets can be found in related databases, such as the
National Bridge Inventory (NBI) database. In particular, the Nationwide Bridge
Sufficiency Rating provides training data (https://www.fhwa.dot.gov/bridge/).

Fig. 12.4 Classification of features involved in nationwide civil infrastructure survey



Table 12.2 Sample bridge data from the National Bridge Inventory Database (updated by 2012)

Structure types include stringer/multi-beam or girder, tee beam, and box beam or girders;
materials include prestressed concrete.

ADT (ton/day) average daily traffic, SR Sufficiency rate

As shown in Table 12.2, the National Bridge Inventory Database uses a series of
general features, including material and structural types, climatic conditions,
highway functional classes, traffic loading, precipitation, and past preservation
history (where data are available), to specify the life-expectancy of bridges.
Only five features are presented here. As a measurement of a bridge's life-expectancy,
the sufficiency rating scales from 100% (entirely sufficient bridge) to 0% (deficient
bridge).



12.4.2 Estimation of the Life-Expectancy of Nationwide Civil
Infrastructure Using Machine Learning Methods

Overview of Machine-Learning-Enabled Life-Expectancy Estimation

The goal of the statistical analysis of the life-expectancy of civil infrastructure
using empirical (statistical-evidence-based) models is to identify target civil
infrastructure at risk of a short service life. To estimate life expectancy
based on statistical analysis, the following three tasks need to be completed:
(1) Definition of the end of functioning life. Various definitions of end-of-life may
be applied. For instance, the National Bridge Inventory Database uses the Sufficiency
Rating, which is based on four rationales. If we use the sufficiency rating as the
definition of end-of-life, we may set the end-of-life threshold as the time when the
sufficiency rating first drops to or below 50% on a scale from 100% (entirely sufficient
bridge) to 0% (deficient bridge).
(2) Selection of general approaches. Three general life-estimation approaches are
common in the current literature: (a) the condition-based approach, (b) the age-based
approach, and (c) the hybrid approach. As we will have a large amount
of time-series data from sensors, a condition-based approach may be more
appropriate. Civil infrastructure is periodically monitored with respect to its
condition. As such, deterioration models can be readily developed. For instance,
if a performance threshold is set at which point a bridge "failure" is considered
to occur, then the lifetime is the time from construction or last reconstruction to
the time when the threshold is first crossed. Combining the two approaches may be
preferable, as we have a large amount of many different types of data.
(3) Model selection. From the literature review (Melhem and Cheng 2003), three
potential models may be applied in this project: (a) Linear or non-linear
regression models, which fit a continuous, deterministic model with direct
interpretations of model fit and parameter strength. These regression models
are commonly used due to their simplicity, the clarity of their results, and their
ability to be calibrated with widely available statistical software such as SAS and R.
Linear regression methods are only appropriate when the dependent variable is a
linear function of the explanatory variables, which is not necessarily the case for
highway asset performance behavior over time. Furthermore, such models give
deterministic estimates that may not reflect the true value of condition or service life
that could be expected. On the other hand, for non-linear models it is difficult to
develop a set of significant independent variables while providing a high
goodness-of-fit metric such as R². (b) Markov chain-based models, which fit a
continuous, probabilistic, fully-parametric survival curve to a Markov chain estimate.
A Markov chain is a stochastic process with a finite, integer number of possible
non-negative states, whose transition probabilities depend solely on the present state,
not on the past states of the civil infrastructure; it is used to predict the probability



of being in any state after a period of time. (c) Artificial neural networks, which are
non-linear, adaptive models that can predict conditions based on what they have
"learned" from historical data. The approach updates posterior means by
applying weighted averages based on previous estimates; typically the weights
are based on the number of observations. Such models have been found to work
well with noisy data and to be relatively quick [53]. (d) Deep learning models, which
provide a more accurate and powerful formulation of the hypothesis functions.
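Under the condition-based approach of task (2) with the end-of-life definition of task (1), the lifetime is simply the first threshold crossing; a minimal sketch with made-up yearly sufficiency ratings:

```python
# Lifetime = first year the sufficiency rating drops to or below 50%.
ratings = [92, 88, 85, 80, 74, 69, 63, 58, 52, 49, 47]  # illustrative yearly ratings
lifetime = next(year for year, r in enumerate(ratings) if r <= 50)
print(lifetime)  # → 9
```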

Weibull Linear Regression Model

The use of a Weibull model is justified by past findings in the literature, by comparing
model fit statistics of alternative distributions and by validating the prediction against
non-parametric methods such as the Kaplan-Meier estimate. The lifetime factors
to be investigated may include material and structural types, climatic conditions,
highway functional classes, traffic loading, precipitation, and past preservation
history where data are available. Table 12.3 provides a demo output of estimated
average lifetimes with 90% confidence intervals for the Sufficiency Rate ranging
from 40 to 80.
As our preliminary results, Table 12.2 provides a sample of data for 15 bridges
located on the highways of Tennessee from the National Bridge Inventory
Database. Estimation of the lifetime function is a regression analysis of
life-expectancy T on the covariates (factors). Only six variables are presented here.
A parametric fit with the Weibull distribution will be carried out using the SAS
procedure "lifereg". This procedure handles both left-censored and right-censored
observations and includes checks on the significance of the estimated parameters.
Based on the significance (P-value or associated t-statistics) of each covariate
(lifetime factor) and the estimated parameters, a final lifetime model can be
determined after the variable selection procedure. Using the final lifetime model,
we can obtain the targeted bridges with short lifetimes via prediction of the
life-expectancy of new bridges from sensor data.
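A parametric Weibull fit along these lines can be sketched with SciPy in place of SAS "lifereg" (no censoring is handled in this sketch). The synthetic lifetimes below are drawn from the Table 12.3 parameters, so the fit should roughly recover them:

```python
# Weibull lifetime fit (uncensored sketch; SAS "lifereg" also handles censoring).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic bridge lifetimes (years) from Weibull(shape=2.73, scale=57.04).
lifetimes = stats.weibull_min.rvs(2.73, scale=57.04, size=500, random_state=rng)

# Maximum-likelihood fit with the location fixed at 0 (lifetimes start at age 0).
shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)

expected_life = stats.weibull_min.mean(shape, scale=scale)
ci_90 = stats.weibull_min.ppf([0.05, 0.95], shape, scale=scale)
print(shape, scale, expected_life, ci_90)
```

Covariates would enter through a regression on log-lifetime, which "lifereg" (or R's "WeibullReg") performs directly.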

Table 12.3 Weibull regression model predictions of bridge life, for Sufficiency Rates
from 40 to 80. Reported 90% CIs on expected life (years): [15, 65], [22, 96], [27, 84],
[34, 123], [56, 183]. Weibull model parameters: scale factor α = 57.04, shape factor
β = 2.73. Model statistics: log-likelihood function at convergence = 1043.6, restricted
log-likelihood function = 1903.5.

End-of-life is the age when the sufficiency rating drops to or below 50%



Markov Chain Models

Markov chains can be used to model condition ratings based on data from large
numbers of infrastructure systems using transition probabilities. Jiang (2010) used
Markov chains to model the condition of bridge substructures in Indiana. Table 12.3
shows the transition probabilities for concrete bridge substructures. In this case,
the transition probabilities change as the bridge ages. Each entry in the table
indicates the probability that a bridge currently in a certain state will remain
in that state next year.
Using the transition matrix, the Markov models were then calibrated by minimizing
the root mean square error (RMSE) while holding the median life
prediction constant and changing the baseline ancillary survival factors. Advantages
of Markov-based models include a probabilistic estimate, sole dependence on
current conditions (i.e., minimal data are needed if the transition probabilities are
known), flexibility in modifying state durations, and efficiency in dealing with larger
data sets.
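The mechanics of such a model can be sketched with an illustrative transition matrix (not Jiang's Indiana estimates), propagating a condition-state distribution year by year:

```python
# Propagating a bridge's condition-state distribution through a Markov chain.
import numpy as np

ratings = [9, 8, 7, 6, 5]                     # condition states, 9 best
P = np.array([                                # illustrative transition matrix
    [0.90, 0.10, 0.00, 0.00, 0.00],           # stay in 9 with prob. 0.90
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],           # rating 5 (threshold) is absorbing
])

state = np.zeros(len(ratings))
state[0] = 1.0                                # new bridge starts at rating 9
for _ in range(25):                           # evolve the distribution 25 years
    state = state @ P

p_failed = state[-1]                          # P(rating reached 5 by year 25)
print(round(float(p_failed), 3))
```

An age-dependent model, as in Jiang (2010), would use a different matrix P for each age band.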

Neural Network Models

Neural networks are a class of parametric models that can accommodate a wider
variety of nonlinear relationships between a set of predictors and a target variable.
Being far more complex than regression models or decision trees, neural
network models have much stronger prediction and classification capabilities.
Building a neural network model involves two main phases: (1) configuring the
network model and (2) iteratively training the model.
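These two phases can be sketched on a toy regression problem; the architecture, learning rate, and data are illustrative assumptions:

```python
# Two phases of building a neural network model, on a toy regression task:
# (1) configure the architecture, (2) train iteratively by gradient descent.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # target: a smooth nonlinear map

# Phase 1: configure the network (one hidden layer with 8 tanh units).
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.5
b2 = 0.0

# Phase 2: iterative training (full-batch gradient descent on squared error).
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    err = H @ W2 + b2 - y                    # prediction error
    dH = np.outer(err, W2) * (1.0 - H**2)    # backprop through tanh
    W2 -= lr * (H.T @ err) / len(y)
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dH) / len(y)
    b1 -= lr * dH.mean(axis=0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)  # small after training
```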

Deep Learning Models

The goal of the nationwide civil infrastructure survey is to identify those target
infrastructure systems at risk of a short service life. Most previous work adopted
supervised machine learning methods (Agrawal and Kawaguchi 2009; Morcous
2006; Robelin and Madanat 2007) such as linear and nonlinear regression, Markov
chains (Jiang 2010), and Support Vector Machines (SVMs) (Zhou and Yang 2013)
to estimate civil infrastructure's life expectancy from hand-crafted features.
This work instead investigates a deep learning algorithm (LeCun et al. 2015),
which automatically formulates constructive features from raw input data without
supervision.
Figure 12.5 shows a flowchart of the deep-learning-enabled nationwide civil
infrastructure survey. Compared with many other classifiers, a deep learning algorithm
has the following advantages: (1) little or no human supervision is needed; (2) some
un-interpretable yet constructive features (or intermediate representations) can be
directly derived from raw data; (3) less training data is required (this advantage is
very important in the addressed project because the archived real-world sensory

Fig. 12.5 Flow-chart of the deep-learning-centric nation-wide infrastructure survey:
Information System → Feature Extraction (age, structure, material, traffic load,
rainfall) → Deep Learning → Estimated Life (the original pictures referred to in the
figure are derived from images.google.com)

data for highly deficient civil infrastructure is limited); and (4) the mid-layers of deep
networks can be repurposed from one application to another, which motivates
the hybrid deep learning (HDL) method (Wang et al. 2009; Wu et al. 2015b) that
merges multiple different deep learning algorithms to handle heterogeneous raw
input data.
To efficiently and accurately classify the observed civil infrastructure, a hybrid
deep-learning (HDL) algorithm is investigated in this work. HDL features the
following techniques: (1) Multiple data sources with heterogeneous modalities, such
as raw streams of sensory data (i.e., audio/video data and images) and textual
information (operational data, city open data, environmental factors, and other
hand-crafted data), are exploited so as to give a panoramic, full-spectrum description
of the status of the targeted civil infrastructure. (2) HDL is equipped with different
deep-learning algorithms, at least at the lower levels, to learn features from multiple
input data with heterogeneous modalities. A convolutional neural network (CNN)
(LeCun et al. 2010) is used to learn spatial features from visual media such as
video and images, because it demonstrates superior performance (high accuracy and
fast training speed) on matrix-oriented feature learning. A recurrent neural network
(RNN) (Sak et al. 2014) is employed to learn temporal features from streaming
data such as acoustic or vibration signals, because the RNN exhibits dynamic
temporal behavior (enabled by the directed cycles inside the RNN). A deep Boltzmann
machine (DBM) (Salakhutdinov and Hinton 2009) specializes in learning high-level
features from textual information such as weather conditions, traffic status,
maintenance policy, etc. (3) Deep learning algorithms always learn upper-level
features from lower ones (LeCun et al. 2015), and the input data with heterogeneous
modalities eventually fuse at the upper layers into a somewhat homogeneous modality.
Therefore, HDL can use a unified deep learning algorithm such as a DBM for the
feature learning of the upper levels.
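The fusion structure in (2) and (3), with modality-specific lower branches whose features merge at the upper layers, can be sketched as a numpy-only forward pass; the random weights and layer sizes are illustrative stand-ins, and real branches would be trained CNN/RNN/DBM models:

```python
# Late fusion of modality-specific branches, numpy-only forward pass.
import numpy as np

rng = np.random.default_rng(42)
relu = lambda v: np.maximum(v, 0.0)

# Lower level, branch 1: spatial features from an image patch (CNN stand-in).
image = rng.standard_normal((8, 8))
W_img = rng.standard_normal((16, 64)) * 0.1
h_img = relu(W_img @ image.ravel())           # 16-dim "spatial" feature

# Lower level, branch 2: temporal features from a sensor stream (RNN stand-in).
stream = rng.standard_normal(32)
W_seq = rng.standard_normal((16, 32)) * 0.1
h_seq = relu(W_seq @ stream)                  # 16-dim "temporal" feature

# Upper level: heterogeneous modalities fuse into one homogeneous vector,
# which a unified model (e.g., a DBM in the chapter) would process further.
fused = np.concatenate([h_img, h_seq])
W_out = rng.standard_normal((1, 32)) * 0.1
life_score = float((W_out @ fused)[0])        # scalar life-expectancy score
print(life_score)
```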



Champion Model Selection

In real-world applications, a variety of models can be jointly used to predict
civil infrastructure's life expectancy. First, we select a champion model that,
according to an evaluation criterion, performs best on the validation data; second,
we employ the champion model to score new data.

12.4.3 Techniques to Boost the Nationwide Civil Infrastructure Survey
To boost the performance of the nationwide civil infrastructure survey, various
techniques such as missing-data handling, variable transformation, data management
optimization, and dimensionality reduction are employed in this work.

Imputation and Transformation of Variables

Most software packages, such as "WeibullReg" in R or "lifereg" in SAS, can
handle missing data. However, if there is a relatively large amount of missing data
in the input to the statistical model, some data imputation is required so that all
observed values can be exploited during model training.
Besides imputation, variable transformation techniques can improve the quality
of the data.

Optimization of Data Management

The proposed project employs discrete hash tables to formulate the correlations
among data, controls the data partitioning to optimize data placement, and uses
in-memory technology (Robelin and Madanat 2007).
Using word-count problems as a benchmark and Amazon EC2 as the computing
platform, Table 12.4 demonstrates that Apache Spark, an implementation
of Resilient Distributed Datasets (RDDs) that enables users to explicitly persist
intermediate results in memory and control their partitioning to optimize data
placement, is 40 times faster than Hadoop.
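The benefit RDD persistence provides, computing an intermediate result once and serving later actions from memory, can be illustrated in plain Python (a stand-in illustration, not actual Spark code):

```python
# Word count computed once, then reused from memory for later "actions",
# mimicking rdd.persist(); recomputation stands in for Hadoop re-reading input.
import time
from collections import Counter

lines = ["to be or not to be"] * 100_000

def word_count(lines):
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

t0 = time.perf_counter()
counts = word_count(lines)        # 1st count: full computation, kept in memory
t1 = time.perf_counter()
total = counts["to"]              # 2nd "action": served from the cached result
t2 = time.perf_counter()

print(total, t1 - t0 > t2 - t1)   # → 200000 True
```

In Spark, the same effect comes from `rdd.persist()` before running repeated actions on the dataset.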

Dimensionality Reduction

The data involved in sensor-oriented structural analysis is always extremely
high-dimensional (Tran et al. 2013). As one of our preliminary achievements, a
rank-revealing randomized singular value decomposition (R3SVD) (Ji et al. 2016) was



Table 12.4 Time cost of Spark (with in-memory) vs. Hadoop (without in-memory) on word-count

File size (B)   Time on Spark (s)                     Time on Hadoop (s)
                1st count    2nd count    3rd count   1st count    2nd count    3rd count
                0.7923488    0.268379
                0.7908732    1.040815
                0.5838947    0.570067
                0.7753005    1.310154

proposed to reduce the dimensionality of a dataset by adaptive sampling. As a
variant of principal component analysis (PCA), R3SVD uses local statistical errors
to estimate the global approximation error.
Our preliminary investigations (Ji et al. 2013; Liu and Han 2004) demonstrated
that R3SVD scales well to extremely big matrices and is efficient, with minimal
sacrifices in accuracy, for the following reasons: (1) R3SVD is based on statistical
sampling, which is also applicable to incomplete or noisy data. (2) R3SVD is able
to obtain a low-accuracy approximation quickly, which is particularly suitable for
the many applications where high-accuracy solutions are not necessary but fast
decision making is of the utmost importance. (3) R3SVD is naturally parallelizable.
As illustrated in Fig. 12.6, the R3SVD algorithm with adaptive sampling dominates
its rivals, such as the general randomized SVD algorithm [57-59], in terms of
stability and timing performance [55, 60-62]. As a next step, we intend to
transplant R3SVD into sensor-oriented structural analysis for civil infrastructure.
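The sampling idea underlying R3SVD can be sketched with a basic, non-adaptive randomized SVD; this is the generic algorithm family, not the published R3SVD with adaptive sampling and local error estimation:

```python
# Basic randomized SVD via Gaussian range sampling (non-adaptive sketch).
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    """Approximate top-k SVD of A by projecting onto a sampled range basis."""
    if rng is None:
        rng = np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for range(A Omega)
    B = Q.T @ A                           # small (k+p) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))  # rank-5 matrix
U, s, Vt = randomized_svd(A, k=5, rng=rng)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)  # near machine precision for an exactly rank-5 matrix
```

Adaptive variants grow the sample until a statistical estimate of the residual error falls below a tolerance, which is the rank-revealing behavior R3SVD adds.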

12.5 Global Structural Integrity Analysis
The global structural integrity analysis module aims to provide further structural
integrity analysis of the deficient civil infrastructure identified in Sect. 12.4. The
objectives are itemized as follows: (1) to apply big data and perform quantitative
analysis of the global structural integrity of the targeted bridges; (2) to provide
guidelines for the more intensive and predictive examination of the civil infrastructure
at the component level carried out in Sect. 12.7; and (3) to feed the integrity analysis
results back to the database for future use.




(> 70 Hz), while intermediate-frequency stimulation (IFS, around 50 Hz) had no effect.
Our own studies and preliminary results (further discussed below) also indicate that one
needs to consider multiple stimulation points and durations to account for the varying
effectiveness of the stimulation. The development and maintenance of such "control
efficacy" maps adds a new dimension to the seizure control problem.
The difficulties in designing and implementing effective seizure-suppressing
controllers motivated a new look at the problem, one that integrates our collective
experience in physiology, nonlinear signal processing, and adaptive feedback
systems. The brain is not an unstructured, random collection of neurons guided
by statistics; thus, there is no need for brain models to be so. Our approach was
to consider "functional models" of the epileptic brain, i.e., networks of elements
that have internal feedback structure, perform high-level operations, and produce
seizure-like behavior when they fail. In our work with such theoretical models,
we have shown that "seizure" suppression can be achieved by employing a
feedback decoupling control strategy (Tsakalis et al. 2005). The implementation of
such theoretical models required only weak knowledge of their detailed structure
and was guided by two principles: (a) synchronization between elements increases
as the coupling between them gets stronger (also observed in experimental EEG data
from epileptic rats and humans), and (b) the pathology of hyper-synchronization in
the network lies primarily in its elements' internal feedback connections. These
principles were introduced to explain the transitions from normal to epileptic
behavior, as well as the triggering of epileptic seizures by external stimuli
(e.g., strobe lights).
Based on our prior work on seizure prediction, we have found that measures
of the spatial synchronization of dynamics, in particular the behavior of the Lyapunov
exponents of critical brain sites, provide a good characterization of the transition
to seizures and can be utilized in the reliable issuance of warnings of impending
seizures. Results from our most recent project show that the maximum Lyapunov
exponents exhibit pronounced and persistent drops prior to seizures (refer to
Figs. 13.11 and 13.12a). This observation is consistent across epileptic subjects,
although it is not deterministically followed by a seizure at all times. As such, and in
view of our postulated models in Chakravarthy et al. (2008), it can be interpreted as an
increased seizure-susceptibility state of the brain. This may not be entirely useful as
a prediction measure, especially since such periods arise frequently in epileptic
subjects (whereas they do not occur at all in healthy subjects), but it creates the
possibility of being used as an appropriate signal for feedback control
and stimulation.
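A minimal sketch of estimating Lmax from a scalar series, in the spirit of Rosenstein's divergence-tracking method (the chapter's figures use the Kantz algorithm; the embedding parameters here are illustrative):

```python
# Maximum Lyapunov exponent (Lmax) estimate via nearest-neighbor divergence.
import numpy as np

def lmax_estimate(x, dim=3, tau=1, horizon=12, theiler=5):
    n = len(x) - (dim - 1) * tau
    Y = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    n_use = n - horizon
    # Pairwise distances between embedded states (short series: brute force).
    d2 = np.sum((Y[:n_use, None, :] - Y[None, :n_use, :]) ** 2, axis=2)
    idx = np.arange(n_use)
    d2[np.abs(idx[:, None] - idx[None, :]) < theiler] = np.inf  # Theiler window
    nn = np.argmin(d2, axis=1)                 # nearest neighbor of each state
    # Mean log-divergence of neighbor pairs k steps ahead; its slope is Lmax.
    div = [np.mean(np.log(np.linalg.norm(Y[idx + k] - Y[nn + k], axis=1) + 1e-12))
           for k in range(1, horizon)]
    return np.polyfit(np.arange(1, horizon), div, 1)[0]

# Logistic map at r = 4: known chaotic, Lyapunov exponent ln 2 ≈ 0.69 per step.
x = np.empty(1200)
x[0] = 0.31
for i in range(1, 1200):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
print(lmax_estimate(x))  # positive value, indicating chaotic dynamics
```

On EEG, the same quantity would be tracked in sliding windows, with sustained drops of the mean Lmax used as the warning signal described above.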
While seizures can be predicted with good sensitivity and specificity (Iasemidis
et al. 2004), the question remains whether we can intervene effectively to change the
brain dynamics and prevent a seizure from occurring. Results from our work
show that the applied electrical stimulation dynamically disentrains the brain sites.
This constitutes evidence that electrical stimulation can actually change the
spatio-temporal dynamics of the brain in the desired direction (Fig. 13.12b), and hence


A. Shafique et al.

Fig. 13.12 Plots of maximum Lyapunov exponents computed using the Kantz algorithm on all 10
channels of a rodent; the thick black line is the mean of the Lmax over the channels. The Lmax was
computed using 10 s of data, sampled at 512 Hz, every 2 s moving in time. (a) Shows a seizure
event at 126 h 25 min 26 s from the beginning of the recording. The red vertical line marks
the beginning of the seizure. As can be seen, the Lyapunov exponent drops a little a few minutes
before the seizure, then, at the seizure, rises to a value higher than its mean. (b) Shows the effect of
stimulation on the F3-RA electrodes, started when the mean Lyapunov exponent dropped beyond a
certain threshold. As can be seen, stimulation gradually pulled the Lyapunov exponent back up to
its average level before the drop occurred. Two channels show Lmax profiles that seem spurious;
these are the electrodes that were stimulated, which at that moment were in open circuit. Refer to
Sect. 13.7 for an explanation and the locations of the electrodes

has been used as actuation in a control scheme for epileptic seizure prevention in
our most recent work. This work is an expansion of our proof of concept toward
the development of an efficacious controller for the epileptic brain using adaptive,
spatially distributed control. The developed (online) controller is to be validated
in vivo. Our main thrust includes multivariable stimulation with guidance from the
focus localization techniques and multivariable modeling that we have recently
developed, and the use of pulse-train-modulated signals for the implementation of
realistic adaptive stimulation.
During the course of the ongoing study, we have shown that impulsive electrical
stimulation does desynchronize the rat's epileptic brain dynamics, provided that
the stimulus duration and the electrode location are chosen wisely, i.e., based on
a prior "control efficacy" experiment (Fig. 13.14). So our choice of electrical
stimulation as a control input appears to be a viable candidate. For a realistic and
feasible implementation of the desired effect, the input to the system is a biphasic
control signal with its pulse width or pulse number modulated to accomplish the
desynchronization of the brain dynamics. Figure 13.13 shows the efficacy of
applying stimulation to a particular epileptic rat over a 10-week period. In this
study, the animal was labeled Rat 13 and was allowed 4 weeks of rest after Status
Epilepticus (SE) was induced, so that the seizure frequency would stabilize. This was
followed by 5 weeks of baseline recording; in the sixth week, stimulation was
applied to a set of electrodes every time a "seizure warning" was generated by
the computer system upon detecting a drop of Lmax beyond a threshold. The
case of 5-min stimulation had an adverse effect and seemed to have increased both
the seizure length and the average seizure rate per week. However, weeks 7 through 9 were

13 Nonlinear Dynamical Systems with Chaos and Big Data: A Case Study of. . .





Fig. 13.13 (a) Box-and-whisker plot of seizure lengths for Rat 13 over the course of 10 weeks,
preceded by 4 weeks of rest after status epilepticus induction. The black dots indicate the mean
of each box. (b) Average number of seizures per day for the same rat over the same 10-week period
of experimentation. Here a week essentially corresponds to a recording file, which in general is
never exactly 168 h. The first 5 weeks were baseline recording, so no stimulation was provided.
Week 10 was another case of no stimulation, but it was recorded 5 weeks after the end of the
week-9 file. No stimulation was provided during those 5 weeks, and it can be seen that the seizure
lengths and the seizure rate per week had both started to increase gradually. Refer to Sect. 13.7 for
an explanation and the locations of the electrodes

followed by stimulations of 10 min or more and, as Fig. 13.13a, b shows, both the
seizure lengths and the average seizure frequency started coming down. Following
week 9, the rat was given a period of stimulation cessation in order to see if any
plasticity effects of the stimulation would wear out. Sure enough, after 4 weeks, the
tenth week of recording showed that the seizure length and frequency had both gone up
slightly. While we recognize that results from only one animal are not statistically
significant, they do shed light on our conjecture that controlling seizures will require
proper tuning of the stimulation length and possibly more parameters of the stimulation.
Another potential difficulty in the design of a seizure control system arises from
the multivariable nature of this stimulation as well as its relation to the energy of the
stimulus. The following figure (Fig. 13.14) shows a characterization of the stimulus
efficacy (as a binary decision over a week-long set of experiments), when the
stimulation is applied across a given electrode pair. The top triangular part shows the
result after a 2 min stimulation to each electrode pair, while the lower triangular part
shows the same for 15 min. It is quite apparent that stimulation of arbitrary electrodes
may not bring the desired effect. The same is true if the duration (i.e., energy) of
the stimulus is not high enough, as determined by the stimulation current, pulse width,
frequency, etc.
A partial direction towards the selection of an appropriate pair of electrodes
for stimulation may be obtained by analyzing the EEG data for focus localization, using the GPDC method of Baccalá and Sameshima (2001), as refined by


A. Shafique et al.

Fig. 13.14 Plot of the effect of stimulation on electrode pairs (“control efficacy”). The upper triangle
shows the effect of 2 min stimulation on each pair; the lower triangle shows the same for 15 min. A grey box
indicates no effect, whereas white indicates a Lyapunov increase due to stimulation. It can be
seen that 2 min of stimulation has little effect in bringing the Lyapunov exponent up, whereas 15 min of stimulation
is quite capable of doing so. Refer to Sect. 13.7 for explanation and location of the electrodes

Vlachos et al. (2013). This method can provide an indication of the brain site that is
the most likely candidate for a focus, and hence a good first choice for stimulation.
However, the drawback of this method is that the results are (so far) based on the
analysis of averaged short-term data. Long term effects and possible plasticity have
been observed in the work of Iasemidis et al. (2009), and therefore further analysis
of the data is necessary to provide quantifiable relationships. Another direction that
we are currently investigating is what follows from the result shown in Fig. 13.12b,
where stimulation of a particular site for a fixed amount of time will start to
produce disentrainment, as validated by the Lmax profile being pulled back to
its mean level over time; the mean level in this case is the level from which it
had dropped sharply. These experiments are long, on the order of weeks, so analyzing
the results and producing meaningful interpretations takes quite a while. An added
difficulty arises when an animal dies due to health issues. In such cases the
experiments have to be restarted with a new animal. Mortality rates for the animals
are typically between 40 and 60%. Another contributor to the mortality rate is headcap
breakage, whereby during a seizure a rat would violently hit the cage walls and
the recording headcap would break off; in such cases euthanizing the animal is the



only remaining option. In the following sections we describe the experimental setup
utilized to gather data and provide both offline and online stimulation as well as how
the animals are made epileptic.
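The focus-localization step discussed above can be made concrete with a minimal sketch of ordinary partial directed coherence (PDC) computed from a least-squares VAR(1) fit — a simplified stand-in for the GPDC of Baccalá and Sameshima (2001), not the refined method of Vlachos et al. (2013). The two-channel simulation and all numerical values below are illustrative assumptions, not data from our experiments:

```python
import cmath
import random

def fit_var1(x):
    """Least-squares fit of a 2-channel VAR(1) model x_t = A x_{t-1} + e_t.
    Returns the 2x2 coefficient matrix A."""
    B = [[0.0, 0.0], [0.0, 0.0]]  # sum of x_t x_{t-1}^T
    C = [[0.0, 0.0], [0.0, 0.0]]  # sum of x_{t-1} x_{t-1}^T
    for t in range(1, len(x)):
        for i in range(2):
            for j in range(2):
                B[i][j] += x[t][i] * x[t - 1][j]
                C[i][j] += x[t - 1][i] * x[t - 1][j]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    Cinv = [[C[1][1] / det, -C[0][1] / det],
            [-C[1][0] / det, C[0][0] / det]]
    return [[sum(B[i][k] * Cinv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def pdc(A, f):
    """Partial directed coherence at normalized frequency f for a VAR(1)
    model: pdc(A, f)[i][j] measures the directed influence of channel j
    on channel i, normalized column-wise."""
    z = cmath.exp(-2j * cmath.pi * f)
    Abar = [[(1.0 if i == j else 0.0) - A[i][j] * z for j in range(2)]
            for i in range(2)]
    out = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        col = (abs(Abar[0][j]) ** 2 + abs(Abar[1][j]) ** 2) ** 0.5
        for i in range(2):
            out[i][j] = abs(Abar[i][j]) / col
    return out

def simulate(n=4000, seed=0):
    """Toy data: channel 0 drives channel 1 (unidirectional coupling)."""
    rng = random.Random(seed)
    x = [[0.0, 0.0]]
    for _ in range(n):
        x1, x2 = x[-1]
        x.append([0.5 * x1 + rng.gauss(0, 1),
                  0.8 * x1 + 0.2 * x2 + rng.gauss(0, 1)])
    return x
```

With the simulated coupling above, `pdc(fit_var1(simulate()), 0.25)` yields a much larger influence of channel 0 on channel 1 than the reverse, which is precisely the asymmetry one would exploit to rank candidate focus sites.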

13.7 Future Research Opportunities and Conclusions
As the ability to predict leads to the possibility of control, research in control of
seizures is expected to flourish in the near future, much to the benefit of the epileptic
patients. Investigations in stimulation and control of the brain have attracted the
attention of the academic community, and medical device companies have started
off designing and implementing intervention devices for various neurodegenerative
diseases (e.g. stimulators for Parkinsonian patients) in addition to the existing ones
for cardiovascular applications (e.g. pacemakers, defibrillators). For epilepsy, there
is currently an explosion of interest in academic centers and medical industry, with
clinical trials underway to test potential prediction and intervention methodology
and devices for FDA approval.
Electromagnetic stimulation and/or administration of anti-epileptic drugs at the
beginning of the preictal period, to disrupt the observed entrainment of normal brain
with the epileptogenic focus, may result in a significant reduction of the number and
severity of epileptic seizures. Our underlying hypothesis is that an epileptic seizure
will be prevented if an external intervention successfully resets the brain prior to
the seizure’s occurrence. Preliminary results from our experiments have shown that
both the length of seizures and the rate can be lowered by such means. However, it
is very important to investigate the parameters that lead to maximum efficacy and
minimum side effects of such an intervention.
We have shown that a successful and robust controller should correct the
pathological part of the system, that is, where the coupling between brain sites
increases excessively, a situation the existing internal feedback in the brain cannot
compensate for. To this end, we have shown how Lmax computed using the tuned
Kantz algorithm can be treated as a synchronization measure (output) of interest.
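The flavor of such a computation can be sketched as follows. This is a bare-bones Kantz-style Lmax estimator, not the tuned implementation referred to above; the embedding and neighbourhood parameters are illustrative:

```python
import math

def kantz_lmax(x, m=2, tau=1, eps=0.01, horizon=5, theiler=50, step=20):
    """Minimal Kantz-style estimate of the largest Lyapunov exponent:
    delay-embed the series, find neighbours of each reference point
    within eps (Chebyshev norm), track the average log-divergence of
    neighbouring trajectories over `horizon` steps, and return the slope
    of that stretching curve."""
    n = len(x) - (m - 1) * tau           # number of embedded points
    emb = [[x[i + j * tau] for j in range(m)] for i in range(n)]
    last = (m - 1) * tau                 # offset of last embedded coordinate
    S = [[] for _ in range(horizon + 1)]
    for i in range(0, n - horizon, step):
        nbrs = [j for j in range(n - horizon)
                if abs(i - j) > theiler   # Theiler window: skip close times
                and max(abs(a - b) for a, b in zip(emb[i], emb[j])) < eps]
        if not nbrs:
            continue
        for k in range(horizon + 1):
            d = sum(abs(x[i + last + k] - x[j + last + k])
                    for j in nbrs) / len(nbrs)
            if d > 0:
                S[k].append(math.log(d))
    curve = [sum(s) / len(s) for s in S if s]
    if len(curve) < 2:
        return float("nan")
    # Least-squares slope of the stretching curve S(k) versus k
    ks = range(len(curve))
    kbar = sum(ks) / len(curve)
    cbar = sum(curve) / len(curve)
    den = sum((k - kbar) ** 2 for k in ks)
    return sum((k - kbar) * (c - cbar) for k, c in zip(ks, curve)) / den
```

Applied to a chaotic test signal such as the logistic map x → 4x(1 − x), this returns a positive estimate in the vicinity of the known exponent ln 2; applied per EEG channel and per window, the same quantity serves as the synchronization output discussed above.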
A possible future direction in this work will entail generating dynamical system
models from the applied stimulus (input) to the Lyapunov exponents. The inputs,
as always, will be time modulated with a biphasic impulse train, since this is the
type of signal that has given us the best decoupling results so far. A parameterization
of these signals in terms of their duration, frequency, and location should also be
considered in order to eventually develop a comprehensive input-output model from
the average stimulus power to the output of interest. It is worth mentioning that the
measure of synchronization can include, but is not limited to, Lyapunov exponents
computed from EEG, as well as other biomarkers such as heart rate, pulse oxygen levels,
body temperature, etc. Using such disparate data as potential markers undoubtedly
presents significant challenges and complexity, bringing in all of the issues associated
with big data, namely volume, velocity, variety, and value. Tools such as HPCmatlab,
suited for HPC platforms, help make such problems
tractable. Future compute platforms will be heterogeneous and will comprise
accelerator devices such as GPUs and FPGAs. Standard APIs like MPI, POSIX
threads, and OpenCL provide a uniform underlying approach to distributed memory,
shared memory and accelerated computing. Overall, the scientific community has
embraced these approaches and the trend will continue in the near future towards
new uses of accelerated and reconfigurable computing with completely new or
modified algorithms tuned to these platforms.
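As a toy illustration of the shared-memory pattern (one task per EEG channel; the per-channel feature below is a hypothetical stand-in for an Lmax computation, not code from our pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def channel_feature(channel):
    """Hypothetical per-channel analysis: here, just the signal energy.
    In practice this slot would hold a per-channel Lmax computation."""
    return sum(v * v for v in channel)

def analyze_all(channels, workers=4):
    """Fan the per-channel work out over a pool of threads, one task per
    EEG channel, and gather results in channel order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(channel_feature, channels))
```

Note that in CPython, CPU-bound work of this kind would in practice use processes (or native threads outside the interpreter, as with POSIX threads or MPI ranks); the sketch only shows the fan-out/gather pattern.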
The method exploited in our work, involving the reliable computation of Lyapunov exponents, can be utilized to generate a highly accurate automated seizure
detection system. A reliable seizure detection mechanism will aid medical practitioners and patients with epilepsy. The established practice is to admit patients
into the hospital, once identified with epilepsy, and record long-term EEG data.
Trained technicians and doctors then have to sift through hours of patient data in
order to identify the times of seizures and make their diagnosis. The number
of epileptic patients admitted to a hospital at any given time can become overwhelming,
since epilepsy is such a common neurological disorder. Technicians
and doctors simply cannot keep up with the volumes of EEG generated in such short
periods of time. Studies show that about 30% of Intensive Care Unit (ICU) patients
have undiagnosed seizures due to this labor-intensive process that is prone to human
error (Claassen et al. 2004). Untimely detection of seizures increases the morbidity
of patients and can lead to mortality in certain cases. It also means that the patients
have to stay in the hospital longer, thus adding to cost. With some fine-tuning, the
envisioned automated seizure detection mechanism, equipped with high true-positive and
low false-positive detection rates, will help alleviate much of that burden.
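The thresholding idea behind such a detector can be sketched as follows. This is an illustrative stand-in, not the detection algorithm developed in this work: it flags a warning when a synchronization measure drops well below a calibration baseline for several consecutive windows:

```python
import statistics

def detect_drops(series, window=20, k=3.0, persist=3):
    """Flag indices where `series` stays below (baseline mean - k * std)
    for `persist` consecutive samples. The baseline statistics come from
    the first `window` samples, treated as a calibration segment."""
    base = series[:window]
    threshold = statistics.mean(base) - k * statistics.pstdev(base)
    hits, run = [], 0
    for t in range(window, len(series)):
        run = run + 1 if series[t] < threshold else 0
        if run == persist:        # report once per sustained drop
            hits.append(t)
    return hits
```

Requiring persistence over several consecutive windows is what keeps the false-positive rate down; a single noisy dip below threshold is not reported.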
Based on the above, we expect that the envisioned active real-time seizure
detection and feedback control techniques will mature into a technology for
intervention and control of the transition of the brain towards epileptic seizures and
would result in a novel and effective treatment of epilepsy. The “heavy machinery”
of computing Lyapunov exponents is a significant overhead to be paid, as compared
to existing linear measures, in order to improve the reliability of seizure detection
and prevention. Computational power is definitely a great concern, especially if the
technology is to be applied in a portable fashion so that patients can go on with their
day-to-day lives without interference. Consideration must be given to how such
devices can perform big data operations while remaining small enough to be
portable. The ultimate goal is to provide a seizure-free epileptic brain capable of
functioning “normally”, with minimum time-wise and power-wise intervention and
side effects, with the help of these advancements. We envision that
this technology will eventually enable a long anticipated new mode of treatment for
other brain dynamical disorders too, with neuromodulation, anti-epileptic drugs and
electromagnetic stimuli as its actuators.
Acknowledgements The authors’ work was supported by the National Science Foundation, under
grant ECCS-1102390. This research was partially supported through computational resources
provided by the Extreme Science and Engineering Discovery Environment (XSEDE), which is
supported by National Science Foundation grant number ACI-1053575.



Appendix 1: Electrical Stimulation and Experiment Setup
Deep Brain Stimulation (DBS) protocols in several “animal models” of epilepsy
have shown some effectiveness in controlling epileptic seizures with high frequency
stimulation targeting the subthalamic nucleus, anterior thalamic nucleus, caudal
superior colliculus, substantia nigra, and hippocampus (Vercueil et al. 1998; Lado
et al. 2003). All these investigators used stimulation parameters in the following
ranges: frequency from 50 to 230 Hz, bipolar constant-current pulses (30–1000 μs)
at current intensities from 0.1 to 2 mA. In contrast, low-frequency (between 1 and
30 Hz) stimulation resulted in an increase of seizure susceptibility or synchronization
of the EEG. The electrical stimulation we used in our experiments had a pulse width of
300 μs at a current intensity ranging from 100 to 750 μA and a pulse-train duration of
10–20 min. Considering possible tissue damage induced by the electrical stimulus,
the maximum current intensity of 750 μA falls under the safe allowable charge
density limit of 30 μC/cm2 as reported by Kuncel and Grill (2004) for deep brain
stimulation, given the size of the stimulating electrode and the pulse parameters chosen.
While these specific values of the stimulation parameters are being utilized in our
work, their optimization can become a project in its own right and is not considered
at this point.
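The safety check reduces to simple arithmetic. In the sketch below, the electrode surface area is an assumed illustrative value; it is not stated in the chapter:

```python
def charge_density(current_uA, pulse_width_us, area_cm2):
    """Charge per phase (in uC) delivered per cm^2 of electrode surface.
    1 uA * 1 us = 1e-12 C = 1e-6 uC."""
    charge_uC = current_uA * pulse_width_us * 1e-6
    return charge_uC / area_cm2

# Worst case from the text: 750 uA at 300 us per phase;
# electrode surface area assumed to be 0.01 cm^2 (illustrative).
density = charge_density(750, 300, 0.01)   # 22.5 uC/cm^2
```

For the assumed geometry this gives 22.5 μC/cm2 per phase, below the 30 μC/cm2 limit cited from Kuncel and Grill (2004); a smaller electrode would raise the density proportionally.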
The stimulation is applied between pairs of electrodes (amygdalar, hippocampal,
thalamic, frontal) according to the localization analysis results, whenever a seizure
warning is issued by our seizure warning algorithm, or, in the case of offline
stimulation, in an open-loop manner for a fixed duration on various pairs of electrodes. Stimulation switching circuitry was developed in-house for the purpose
of stimulating two sites at will. This equipment consisted of an Arduino Mega and a
printed circuit board containing electronic switches. Figure 13.15 shows a sketch of the
hardware setup used for all experiments.
The Intan RHD2000 development board and its 16-channel amplifier were set up
as the EEG acquisition machine. The rats used in our study have 10 EEG channels,
a ground and a reference channel. We collect EEG data from 10 electrodes located
in different parts of the rat brain (see Fig. 13.16). The EEG channels go through a
switch board into the Intan EEG data acquisition system. The Intan board has an
amplifier that conditions the signal so that its 16-bit ADC has sufficient resolution
for the EEG waveforms, which are typically in the 100s of μV range. Once digitized,
the data are collected in an FPGA buffer until a MATLAB program polls them from the
buffer over USB. The MATLAB code operates every 2 s, bringing in 2 s worth
of data sampled from the ADC at 512 Hz on all channels. In MATLAB,
either the offline fixed stimulation code or the seizure warning algorithm decides
on when and how to stimulate and those parameters are sent over emulated Serial
Port (USB Virtual COM port) to an Arduino MEGA. The MEGA then commands
the stimulator to provide stimulation on its output port. The amplitudes are fixed
using analog knobs and are not programmable. Another function of the Arduino is
to command the switch board so that it disconnects a chosen pair of electrodes from
the rat to the EEG board and connects them to the stimulator so that the stimulation



Fig. 13.15 Block diagram showing the experimental setup used. The switching circuit is controlled by the Arduino Due, which in turn gets commands from the MATLAB program running
on the computer. The switching circuit enables us to stimulate any pair of electrodes at will. The
stimulation signal is generated by the A-M Systems stimulator

signal can pass through to the rat brain. Once stimulation needs to be switched
off, the Arduino commands the stimulator to switch off and reconnects the EEG
channels to the rat electrodes.
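The raw data rates implied by this setup are easy to tally (the channel count, sampling rate, and 16-bit sample width are taken from the text; the weekly total assumes continuous recording):

```python
def eeg_data_volume(channels=10, fs_hz=512, bytes_per_sample=2, seconds=2):
    """Bytes produced by `channels` channels sampled at fs_hz, with
    bytes_per_sample bytes each (16-bit ADC), over `seconds` of recording."""
    return channels * fs_hz * bytes_per_sample * seconds

per_poll = eeg_data_volume(seconds=2)            # one 2-s MATLAB poll
per_week = eeg_data_volume(seconds=168 * 3600)   # one nominal 168-h file
```

This comes to about 20 KB per 2-s poll and roughly 6.2 GB per nominal week-long file per animal, before any derived measures are computed — the “volume” component of the big-data problem described above.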

Appendix 2: Preparation of Animals
The animals used in this study were male Sprague Dawley rats, weighing between
200 and 225 g, from Harlan Laboratories. All animal experimentation in the study
was performed in the Laboratory for Translational Epilepsy Research at Barrow
Neurological Institute (BNI) upon approval by the Institutional Animal Care and
Use Committee (IACUC). The protocol for inducing chronic epilepsy was described
previously by Walton and Treiman (1988). This procedure generates generalized
convulsive status epilepticus (SE). Status epilepticus was induced by intraperitoneal
(IP) injection of lithium chloride (3 mmol/kg) followed by subcutaneous (SC)
injection of pilocarpine (30 mg/kg) 20–24 h later. Following injection of pilocarpine,
the EEG of each rat was monitored visually for clinical signs of SE, noted
behaviorally by the presence of a Racine level 5 seizure (rearing with loss of balance,
Racine 1972). At EEG Stage V (approximately 4 h after pilocarpine injection) SE
was stopped using a standard cocktail of diazepam 10 mg/kg, and Phenobarbital
25 mg/kg, both IP. The rats were then kept under visual observation for 72 h, during
which all measures were taken to keep them from dying. In the event that none
of these methods worked, the animals were euthanized.
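For concreteness, the per-animal doses implied by this protocol can be computed directly. The molar mass of lithium chloride (42.39 g/mol) is the only value below not stated in the text:

```python
LICL_MOLAR_MASS_G_PER_MOL = 42.39  # lithium chloride, g/mol

def licl_dose_mg(weight_kg, mmol_per_kg=3.0):
    """IP lithium chloride dose in mg for a given body weight.
    mmol * (g/mol) gives mg directly."""
    return weight_kg * mmol_per_kg * LICL_MOLAR_MASS_G_PER_MOL

def pilocarpine_dose_mg(weight_kg, mg_per_kg=30.0):
    """SC pilocarpine dose in mg for a given body weight."""
    return weight_kg * mg_per_kg

# A 225 g rat at the top of the stated weight range:
licl = licl_dose_mg(0.225)         # ~28.6 mg lithium chloride
pilo = pilocarpine_dose_mg(0.225)  # 6.75 mg pilocarpine
```

These per-animal absolute doses are what would actually be drawn up for injection; the kg-normalized values in the protocol make the procedure transferable across animals of different weights.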



Fig. 13.16 Diagram of surgical placement of electrodes in the rat’s brain (top view)

After SE was successfully induced in the animals, they were allowed 5 weeks for
the seizure frequency to stabilize. Following this 5-week period, the animals were
taken into surgery and an electrode array, as shown in Fig. 13.16, was implanted
into their brains. Not including the reference and ground connections, each rat had
10 electrodes implanted. After surgery, each animal was allowed a week before
being connected to an EEG machine. The referential voltages from each of the 10
electrodes mentioned were then recorded using an EEG machine (Intan RHD2000
development board).

References

Annegers, J. F., Rocca, W. A., & Hauser, W. A. (1996). Causes of epilepsy: Contributions of the
Rochester epidemiology project. Mayo Clinic Proceedings, 71, 570–575; Elsevier.
Apache Software Foundation. (2016). Hadoop. https://hadoop.apache.org/ [Accessed: Aug 2016].
Aram, P., Postoyan, R., & Cook, M. (2013). Patient-specific neural mass modeling-stochastic and
deterministic methods. Recent advances in predicting and preventing epileptic seizures (p. 63).
River Edge: World Scientific.



Baccalá, L.A., & Sameshima, K. (2001). Partial directed coherence: A new concept in neural
structure determination. Biological Cybernetics, 84(6), 463–474.
Ben-Menachem, E. (2016). Epilepsy in 2015: The year of collaborations for big data. The LANCET
Neurology, 15(1), 6–7.
Beverlin, B., & Netoff, T. I. (2013). Dynamic control of modeled tonic-clonic seizure states with
closed-loop stimulation. Frontiers in Neural Circuits, 6, 126.
Blanco, J. A., Stead, M., Krieger, A., Stacey, W., Maus, D., Marsh, E., et al. (2011). Data mining
neocortical high-frequency oscillations in epilepsy and controls. Brain, 134(10), 2948–2959.
Boroojerdi, B., Prager, A., Muellbacher, W., & Cohen, L. G. (2000). Reduction of human visual
cortex excitability using 1-Hz transcranial magnetic stimulation. Neurology, 54(7), 1529–1531.
Centers for Disease Control and Prevention. (2016). Epilepsy fast facts. http://www.cdc.gov/
epilepsy/basics/fast-facts.htm [Accessed: Feb 2016].
Chakravarthy, N., Sabesan, S., Tsakalis, K., & Iasemidis, L. (2008). Controlling epileptic seizures
in a neural mass model. Journal of Combinatorial Optimization, 17(1), 98–116.
Chen, C. P., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and
technologies: A survey on big data. Information Sciences, 275, 314–347.
Citizens for Research in Epilepsy. (2016). Epilepsy facts. http://www.cureepilepsy.org/
aboutepilepsy/facts.asp [Accessed: Aug 2016].
Claassen, J., Mayer, S. A., Kowalski, R. G., Emerson, R. G., & Hirsch, L. J. (2004). Detection
of electrographic seizures with continuous eeg monitoring in critically ill patients. Neurology,
62(10), 1743–1748.
Dean, J., & Ghemawat, S. (2010). System and method for efficient large-scale data processing. US
Patent 7,650,331.
Devinsky, O., Dilley, C., Ozery-Flato, M., Aharonov, R., Goldschmidt, Y., Rosen-Zvi, M., et al.
(2016). Changing the approach to treatment choice in epilepsy using big data. Epilepsy &
Behavior, 56, 32–37.
Dodson, W. E., & Brodie, M. J. (2008). Efficacy of antiepileptic drugs. Epilepsy: A comprehensive
textbook (Vol. 2, pp. 1185–1192), Philadelphia: Lippincott Williams & Wilkins .
Edward, L. (1972). Predictability: Does the flap of a butterfly’s wings in Brazil set off a tornado in
Texas? Washington, DC: American Association for the Advancement of Science.
Engel, J. (2013). Seizures and epilepsy (Vol. 83). Oxford: Oxford University Press.
Food and Drug Administration. (2015). Rns system. http://www.fda.gov/MedicalDevices/
ucm376685.htm [Accessed: Aug 2016].
Forsgren, L. (1990). Prospective incidence study and clinical characterization of seizures in newly
referred adults. Epilepsia, 31(3), 292–301.
Good, L. B., Sabesan, S., Marsh, S. T., Tsakalis, K., Treiman, D., & Iasemidis, L. (2009).
Control of synchronization of brain dynamics leads to control of epileptic seizures in rodents.
International Journal of Neural Systems, 19(03), 173–196. PMID: 19575507.
Grassberger, P., & Procaccia, I. (1983). Characterization of strange attractors. Physical Review
Letters, 50(5), 346–349.
Grassberger, P., Schreiber, T., & Schaffrath, C. (1991). Nonlinear time sequence analysis.
International Journal of Bifurcation and Chaos, 01(03), 521–547.
Guo, X., Dave, M., & Mohamed, S. (2016). HPCmatlab: A framework for fast prototyping of
parallel applications in Matlab. Procedia Computer Science, 80, 1461–1472.
Han, J., Kamber, M., & Pei, J. (2011). Data mining: Concepts and techniques (3rd ed.). Burlington:
Morgan Kaufmann.
Hasselblatt, B., & Katok, A. (2003). A first course in dynamics: With a panorama of recent
developments. Cambridge: Cambridge University Press.
Hirtz, D., Thurman, D. J., Gwinn-Hardy, K., Mohamed, M., Chaudhuri, A. R., & Zalutsky, R.
(2007). How common are the “common” neurologic disorders? Neurology, 68(5), 326–337.
Hodaie, M., Wennberg, R. A., Dostrovsky, J. O., & Lozano, A. M. (2002). Chronic anterior
thalamus stimulation for intractable epilepsy. Epilepsia, 43(6), 603–608.



Holmes, M. D., Brown, M., & Tucker, D. M. (2004). Are “generalized” seizures truly generalized?
evidence of localized mesial frontal and frontopolar discharges in absence. Epilepsia, 45(12),
Howbert, J. J., Patterson, E. E., Stead, S. M., Brinkmann, B., Vasoli, V., Crepeau, D., et al. (2014).
Forecasting seizures in dogs with naturally occurring epilepsy. PLoS ONE, 9(1), e81920.
Human Brain Project. (2016). Human brain project. https://www.humanbrainproject.eu/
[Accessed: Aug 2016].
Iasemidis, L., Sabesan, S., Good, L., Tsakalis, K., & Treiman, D. (2009). Closed-loop control of
epileptic seizures via deep brain stimulation in a rodent model of chronic epilepsy. In World
Congress on Medical Physics and Biomedical Engineering, September 7–12, 2009, Munich,
Germany (pp. 592–595). New York: Springer.
Iasemidis, L., Shiau, D.-S., Sackellares, J., Pardalos, P., & Prasad, A. (2004). Dynamical
resetting of the human brain at epileptic seizures: Application of nonlinear dynamics and global
optimization techniques. IEEE Transactions on Biomedical Engineering, 51(3), 493–506.
Iasemidis, L., Zaveri, H., Sackellares, J., Williams, W., & Hood, T. (1988). Nonlinear dynamics of
electrocorticographic data. Journal of Clinical Neurophysiology, 5, 339.
IEEE. (2004). IEEE POSIX standard. http://www.unix.org/version3/ieee_std.html [Accessed: Aug 2016].
Jansen, B. H., & Rit, V. (1995). Electroencephalogram and visual evoked potential generation in a
mathematical model of coupled cortical columns. Biological Cybernetics, 73(4), 357–366.
Kalitzin, S. N., Velis, D. N., & da Silva, F. H. L. (2010). Stimulation-based anticipation and control
of state transitions in the epileptic brain. Epilepsy & Behavior, 17(3), 310–323.
Kantz, H. (1994). A robust method to estimate the maximal Lyapunov exponent of a time series.
Physics Letters A, 185(1), 77–87.
Kouzes, R. T., Anderson, G. A., Elbert, S. T., Gorton, I., & Gracio, D. K. (2009). The changing
paradigm of data-intensive computing. Computer, 42(1), 26–34.
Kramer, U., Kipervasser, S., Shlitner, A., & Kuzniecky, R. (2011). A novel portable seizure
detection alarm system: Preliminary results. Journal of Clinical Neurophysiology, 28(1), 36–38.
Kuncel, A. M., & Grill, W. M. (2004). Selection of stimulus parameters for deep brain stimulation.
Clinical Neurophysiology, 115(11), 2431–2441.
Lado, F. A., Velíšek, L., & Moshé, S. L. (2003). The effect of electrical stimulation of the
subthalamic nucleus on seizures is frequency dependent. Epilepsia, 44(2), 157–164.
Lantz, G., Spinelli, L., Seeck, M., de Peralta Menendez, R. G., Sottas, C. C., & Michel,
C. M. (2003). Propagation of interictal epileptiform activity can lead to erroneous source
localizations: a 128-channel eeg mapping study. Journal of Clinical Neurophysiology, 20(5),
Iasemidis, L. D., Principe, J. C., & Sackellares, J. C. (2000). Measurement and quantification of
spatiotemporal dynamics of human epileptic seizures. Nonlinear biomedical signal processing:
Dynamic analysis and modeling (Vol. 2, pp. 294–318). New York: Wiley
Liu, Q., Logan, J., Tian, Y., Abbasi, H., Podhorszki, N., Choi, J. Y., et al. (2014). Hello adios:
the challenges and lessons of developing leadership class i/o frameworks. Concurrency and
Computation: Practice and Experience, 26(7), 1453–1473.
Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2),
Lulic, D., Ahmadian, A., Baaj, A. A., Benbadis, S. R., & Vale, F. L. (2009). Vagus nerve
stimulation. Neurosurgical Focus, 27(3), E5.
Mathworks (2016). Mex library API. http://www.mathworks.com/help/matlab/mex-library.html
[Accessed: Aug 2016].
Miller, K. (2014). Seizures, in theory: Computational neuroscience and epilepsy. http://biomedicalcomputationreview.org/content/seizures-theory-computational-neuroscience-and-epilepsy [Accessed: Aug 2016].
Mina, F., Benquet, P., Pasnicu, A., Biraben, A., & Wendling, F. (2013). Modulation of epileptic
activity by deep brain stimulation: a model-based study of frequency-dependent effects.
Frontiers in Computational Neuroscience, 7, 94.



Mormann, F., Kreuz, T., Andrzejak, R. G., David, P., Lehnertz, K., & Elger, C. E. (2003). Epileptic
seizures are preceded by a decrease in synchronization. Epilepsy Research, 53(3), 173–185.
MPI Forum. (2016). Mpi forum. http://www.mpi-forum.org/ [Accessed: Aug 2016].
National Institute of Health. (2014). Brain initiative. BRAIN2025_508C.pdf [Accessed: Aug 2016].
National Science Foundation. (2016). Empowering the nation through discovery and innovation. http://www.nsf.gov/news/strategicplan/nsfstrategicplan_2011_2016.pdf [Accessed: Aug 2016].
Niedermeyer, E., & da Silva, F. H. L. (2005). Electroencephalography: Basic principles, clinical
applications, and related fields. Philadelphia: Wolters Kluwer Health.
Oestreicher, C. (2007). A history of chaos theory. Dialogues in Clinical Neuroscience, 9(3),
OpenCL. (2016). The open standard for parallel programming of heterogeneous systems. https://
www.khronos.org/opencl/ [Accessed: Aug 2016].
OpenMP. (2016). The OpenMP API specification for parallel programming. http://openmp.org/
wp/ [Accessed: Aug 2016].
Poincaré, H. (1992). New methods of celestial mechanics (Vol. 13). New York: Springer.
Racine, R. J. (1972). Modification of seizure activity by electrical stimulation: II. Motor seizure.
Electroencephalography and Clinical Neurophysiology, 32(3), 281–294.
Rosenstein, M. T., Collins, J. J., & De Luca, C. J. (1993). A practical method for calculating largest
Lyapunov exponents from small data sets. Physica D: Nonlinear Phenomena, 65(1), 117–134.
Ruelle, D., & Takens, F. (1971). On the nature of turbulence. Communications in Mathematical
Physics, 20(3), 167–192.
Salsa Group. (2010). Applicability of DryadLINQ to scientific applications. Pervasive Technology
Institute, Indiana University http://salsaweb.ads.iu.edu/salsa/ [Accessed: Aug 2016].
Shafique, A. B., & Tsakalis, K. (2012). Discrete-time PID controller tuning using frequency loop-shaping. In Advances in PID Control (Vol. 2, pp. 613–618).
Simon, P., de Laplace, M., Truscott, F. W., & Emory, F. L. (1951). A philosophical essay on
probabilities (Vol. 166). New York: Dover Publications.
Socolar, J. E. S. (2006). Nonlinear dynamical systems (pp. 115–140). Boston: Springer.
Staba, R. J., Wilson, C. L., Bragin, A., Fried, I., & Engel, J. (2002). Quantitative analysis of high-frequency oscillations (80–500 Hz) recorded in human epileptic hippocampus and entorhinal
cortex. Journal of Neurophysiology, 88(4), 1743–1752.
Suffczynski, P., Kalitzin, S., da Silva, F. L., Parra, J., Velis, D., & Wendling, F. (2008). Active
paradigms of seizure anticipation: Computer model evidence for necessity of stimulation.
Physical Review E, 78(5), 051917.
Takens, F. (1981). Dynamical systems and turbulence (detecting strange attractors in fluid
turbulence). Lecture notes in mathematics. New York: Springer.
Tassinari, C. A., Cincotta, M., Zaccara, G., & Michelucci, R. (2003). Transcranial magnetic
stimulation and epilepsy. Clinical Neurophysiology, 114(5), 777–798.
Temkin, O. (1994). The falling sickness: a history of epilepsy from the Greeks to the beginnings of
modern neurology. Baltimore: Johns Hopkins University Press.
The Mathworks. (2016). Best practices for a matlab to c workflow using real-time workshop. http://www.mathworks.com/company/newsletters/articles/best-practices-for-a-matlab-to-c-workflow-using-real-time-workshop.html?requestedDomain=www.mathworks.com.
The Neurology Lounge. (2016). 12 fascinating advances in epilepsy: big data to pacemakers.
https://theneurologylounge.com/2015/12/28/12-fascinating-advances-in-epilepsy-bigdata-to-pacemakers/ [Accessed: Nov 2016].
The White House. (2014). Brain initiative. https://www.whitehouse.gov/share/brain-initiative
[Accessed: Aug 2016].
Thiel, M., Schelter, B., Mader, M., & Mader, W. (2013). Signal processing of the EEG: Approaches
tailored to epilepsy. In R. Tetzlaff, C. E. Elgar, & K. Lehnertz (Eds.), Recent advances in
preventing and predicting epileptic seizures (pp. 119–131). Singapore: World Scientific.
TOP 500. (2016). Top 500. https://www.top500.org/lists/2016/06/ [Accessed: Aug 2016].



Tsakalis, K., Chakravarthy, N., & Iasemidis, L. (2005). Control of epileptic seizures: Models of
chaotic oscillator networks. In Proceedings of the 44th IEEE Conference on Decision and
Control (pp. 2975–2981).
Tsakalis, K., & Iasemidis, L. (2004). Prediction and control of epileptic seizures. In International
Conference and Summer School Complexity in Science and Society European Advanced Studies
Conference V, Patras and Ancient Olympia, Greece (pp. 14–26).
Vercueil, L., Benazzouz, A., Deransart, C., Bressand, K., Marescaux, C., Depaulis, A., et al. (1998).
High-frequency stimulation of the sub-thalamic nucleus suppresses absence seizures in the rat:
Comparison with neurotoxic lesions. Epilepsy Research, 31(1), 39–46.
Vlachos, I., Krishnan, B., Sirven, J., Noe, K., Drazkowski, J., & Iasemidis, L. (2013). Frequency-based connectivity analysis of interictal iEEG to localize the epileptogenic focus. In 2013 29th
Southern Biomedical Engineering Conference. New York: Institute of Electrical & Electronics
Engineers (IEEE).
Walton, N. Y., & Treiman, D. M. (1988). Response of status epilepticus induced by lithium and
pilocarpine to treatment with diazepam. Experimental Neurology, 101(2), 267–275.
Watson, J. W. (2014). Octave. https://www.gnu.org/software/octave/ [Accessed: Aug 2016].
Wolf, A., Swift, J. B., Swinney, H. L., & Vastano, J. A. (1985). Determining Lyapunov exponents
from a time series. Physica D: Nonlinear Phenomena, 16(3), 285–317.
World Health Organization. (2016). Epilepsy fact sheet No. 999. http://www.who.int/mediacentre/
factsheets/fs999/en/ [Accessed: Aug 2016].

Chapter 14

Big Data to Big Knowledge for Next Generation
Medicine: A Data Science Roadmap
Tavpritesh Sethi

14.1 Introduction
Living systems are inherently complex. This complexity plays out as health and
disease states over the lifetime of an organism. Deciphering health has been
one of the grand endeavors of humanity since time immemorial. However, it
is only in the recent decades that a disruptive transformation of healthcare and
its delivery seems imminent. Like other disciplines, this transformation is being
fueled by exponentially growing Big-data. This has sparked a widespread move
for transitioning to Next-generation medicine which aims at being Preventive,
Predictive, Personalized, Participatory, i.e., P4 (Auffray et al. 2009) and Precise
(Collins and Varmus 2015). However, this also requires a major upgrade of our
scientific methods and approach. Our current model of medical discovery has
evolved over the past 500 years and relies upon testing pre-stated hypotheses
through careful experimental design. This approach of “hypothesis-driven medical
discovery” has been instrumental in advancing medicine and has consistently led
to breakthroughs in newer treatments, vaccines, and other interventions to promote
health. However, for the first time, medicine is at a threshold
where the rate of data-generation has overtaken the rate of hypothesis generation by
clinicians and medical researchers. It has been estimated that biomedical Big-data
will reach 25,000 petabytes by 2020 (Sun 2013) largely attributable to digitization
of health-records and the pervasive genomic revolution. Therefore, a new paradigm
of “data-driven medical discovery” has emerged and is expected to revolutionize

T. Sethi ()
Department of Computational Biology, Indraprastha Institute of Information Technology,
New Delhi, India
Department of Pediatrics, All India Institute of Medical Sciences, New Delhi, India
e-mail: tavpriteshsethi@iiitd.ac.in
© Springer International Publishing AG 2018
S. Srinivasan (ed.), Guide to Big Data Applications, Studies in Big Data 26,
DOI 10.1007/978-3-319-53817-4_14




the next hundred years of medicine (Kohane et al. 2012). The biggest challenge
in this direction will be to incorporate the complex adaptive properties of human
physiology into Big-data technologies.
Genomics has been the poster child of the Big-data movement in medicine as the
cost of sequencing has been falling faster than the limits imposed by Moore’s law
(NHGRI 2016). However, the genomics revolution has also taught us a sobering
lesson in science. The scientific community realized that Big-data is a necessary
condition, but not a sufficient condition for translation to bedside, community or
policy. Even before the advent of genomics era, there were glaring gaps in our
understanding of biology and it was hoped that Big-data would fill these gaps.
What followed was quite the opposite: the more Big-data we generated,
the more we realized our lack of understanding of the complexity of living systems.
Following conventional statistical approaches, the predictive power of genomics
was found to be low for common diseases and traits. This is the well-known problem
of missing heritability, which arises partly due to complex (and often unpredictable)
biological interactions and partly due to limitations of currently available statistical
techniques. For example, for Type II Diabetes, a complex disease, this figure is
estimated to be around 10% (Ali 2013). This is because the genetic code in DNA,
which was thought to be a major health determinant is now known to be just one of
the layers of the multiscale influences. These layers include the Exposome consisting
of sum-total of environmental exposures (Miller 2014), the Microbiome consisting
of resident micro-organisms (Turnbaugh et al. 2007) and even the health influences
spreading over social networks (Christakis 2007), in addition to other “omics”
operating in an individual such as transcriptomics, proteomics, and metabolomics
(Fig. 14.1a). These layers can be thought of as “Russian doll” hierarchies with DNA
(genome) as the blueprint for dictating the RNA (transcriptome) which further
gets translated into proteins (proteome) and metabolites (metabolome). This is a
simplified picture of the hierarchical organization with feedback loops and more
“omics” layers added each day. Therefore, with these characteristics, the current
approaches of Big-data analytics are not sufficient by themselves to tackle the
challenges of Next-generation medicine. The rich complexity, multiscale nature and
interactions between these scales necessitates a Data-science roadmap incorporating
the distinguishing features of Biomedical Big-data as presented in Table 14.1.
The aim of this chapter is to propose a Data-science roadmap for knowledge-generation
from Big-data through a combination of modern machine learning
(data-driven) and statistical-inference (hypothesis-driven) approaches. The roadmap
is not a comprehensive one, and could be one of the many possible approaches
that could be geared towards the final objective of delivering better clinical
(Personalized, Predictive) and community (Preventive, Predictive, Participatory)
care through Big-data.



Fig. 14.1 Eric Topol’s vision is to synthesize various layers of information ranging from
environmental exposures (exposome) to the individual’s genetic blueprint (genome) to enable the
Next-generation medicine (a). The CAPE roadmap (b) proposed in this chapter serves as a
Data-science blueprint for executing such a vision. With the explosion of data across genetic,
cellular, organismal and supra-organismal layers, there is an immediate need for such roadmaps,
and CAPE is one of the directions discussed in this chapter. (Permission to use the image authored
by Eric J. Topol obtained from the publisher, Elsevier, under License Number 3967190064575,
License date Oct 13, 2016)
Table 14.1 Challenges in biomedical big-data that make it unique

Unique challenges in healthcare big-data, arising from the data (D) or from generative
physiology (P), with a non-exhaustive list of possible approaches:

- Heterogeneity (P, D): Large-scale observational studies (Hripcsak et al. 2016), Patient
  Aggregation (Longhurst et al. 2014), Stratified Medicine (Athey and Imbens 2016; Sethi
  et al. 2011)
- Messy (D): Imputation (Longford 2001), Natural Language Processing (Doan et al.),
  Machine Learning (Rothman et al. 2013; Dinov et al. 2016)
- Inter-connected (P): Omics (Topol 2014), Graphs and Networks (Barabási et al. 2011)
- Adaptive (P): Complex Adaptive Systems (Kottke et al. 2016; Coveney et al. 2016)
- Integration (P): Multiscale Modeling (Walpole et al.)
- Data Privacy & Open Data: Cybersecurity, Citizen-science (Follett and Strezov 2015)



14.1.1 The CAPE Roadmap
The four guiding principles of the CAPE roadmap proposed in this chapter are
(1) Capture Reliably (2) Approach Systemically (3) Phenotype Deeply and (4)
Enable Decisions (Fig. 14.1b). While the sequence is not strictly important, Data-science initiatives built around healthcare Big-data will find it naturally applicable.
Therefore, the chapter addresses each of these principles in sequence while building
upon the preceding ones and presenting case studies. The purpose of the case studies
is to introduce the reader to examples illustrating the real-world impact of Big-data
driven Data-science in healthcare.

14.2 Capture Reliably

Biomedical data are notoriously messy. Unless data quality is ensured, the maxim
of “garbage in, garbage out” may constrain the scientific and clinical utility of
Big-data in healthcare. Very often the measurements are missing, incorrect, or
corrupted. In addition to measurement errors in structured data, biomedical data
are also messy because a large fraction of them is unstructured. It is estimated that up to
80% of healthcare data resides in the form of text notes, images and scans (Sun
2013), yet contains highly valuable information with respect to patients’ health
and disease. Hence reliable capture of Big-data is one of the most crucial steps in
the successful execution of healthcare applications. Ensuring reliable capture not only
involves ensuring the fidelity of existing systems such as Electronic Health Records
(EHRs) and Electronic Medical Records (EMRs) but also in newer technologies
such as mHealth. While challenges of the former (such as interoperability of EMRs)
need to be addressed at a healthcare policy level, the latter offers a different set
of scientific challenges. mHealth leverages mobile devices such as mobile phones
and tablet computers for recording health parameters through sensors and wireless
technologies. This has led to the recent surge in health-tracking and wellness
monitoring (Steinhubl et al. 2015). If captured reliably, these could reveal key
factors in regulation of health and disease thus enabling Precision and P4 medicine.



The scientific potential of this approach is already being evaluated and has sparked
interest in personal “individualomes” (Snyder 2014) i.e., longitudinal tracking of
multiple variables reflecting an individual’s health. Further, consensus guidelines
on collection and analysis of temporally tracked data have started emerging in order
to establish its scientific validity (Wijndaele et al. 2015). However, in the current
scenario, most wellness tracking and home monitoring devices are not certified to
be research-grade. Important features such as sampling rates and filters are often not
specified and vendors typically do not seek approvals from regulatory agencies such
as the Food and Drug Administration (FDA).
Therefore, from the standpoint of reliable capture, the key open challenges in
healthcare are (i) creating data standards, and (ii) developing tools that can recover
data from noisy and/or missing measurements.

14.2.1 Biomedical Data Quality and Standards
Mining of Electronic Health Records (EHRs) is one of the most promising directions
for Precision medicine. Defining medical ontologies such as International Classification of Diseases (ICD) has been a key enabling feature in this direction. Despite
this enforcement of ontologies, the lack of interoperability has led to siloing of Big-data. Typically, in any Biomedical Big-data project, data harmonization and dealing
with messy data alone take up to 80% of the time and effort thus leading to initiatives
such as those being implemented for cancer (Rolland et al. 2015) and HIV (Chandler
et al. 2015). These approaches are expected to evolve as more Individualome data
such as genomes and environmental exposures become available. It is anticipated
that harmonization of such multidimensional data sources would require expertise
from diverse domains and hence would be achievable only through
community efforts. Community efforts to create open-source frameworks such as
the R Language for Statistical Programming, Python, Hadoop and Spark are already
revolutionizing data-science. In a similar fashion, a major role is anticipated for the
open-source movement in sharing code and APIs for reliable, secure and inter-operable
capture of healthcare data (Wilbanks and Topol 2016; Topol 2015).
In addition, reliable capture must also ensure implementation of better data-security
standards and mechanisms for protecting the privacy of individuals.

14.2.2 Data Sparsity
Sparsity is a double-edged sword in data-science. While sparsity is important for
removing redundancy, efficient storage and transmission of data (e.g. Telemedicine),
it should not be induced at the cost of completeness of data. Biomedical data often
suffer from both problems at the same time. While some variables might
be redundantly represented, others might not have acceptable fidelity. The latter



kind of sparsity in observations is more often observed and may result from
factors which are technical (such as missing data) or human (such as inaccurate
recording, corrupted data, textual errors, etc.). Data-science approaches to deal with
the latter problem include variable removal, data imputation and data reconstruction.
However, data-scientists are advised to inspect the possible generative mechanisms
of such missing data through statistical tests and assumptions must be tested. For
example, a naïve assumption may be about data Missing Completely At Random
(MCAR). Simple statistical computations to check whether the missing values are
distributed uniformly across all samples and groups of data must be applied before
imputing the data. As an illustrative example, this assumption may be invalidated
if data missingness was coupled with the generative process, leading to a non-random
pattern in the missingness of data. One of the simplest and most common
forms of non-random missingness in healthcare data is the monotone pattern. A
monotone pattern of missingness arises in situations such as patient drop-out or
death in longitudinal data (Fig. 14.2). More complex patterns, such as stratification
of missing data by sub-groups, arise when a sub-set of patients is more prone to
drop-out or yields erroneous measurements. While imputation with a central tendency
(mean/median/mode) may suffice in the MCAR scenario, missing data with structure
might need more sophisticated techniques such as k-nearest neighbors, Robust
Least Squares Estimation with Principal Components (RLSP), Bayesian Principal
Component Analysis and multiple imputation (Hayati et al. 2015). Sophisticated
algorithms for testing the assumptions and performing imputations are available in
most of the standard statistical software such as R, Stata, SPSS etc.
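The per-group missingness check and the simplest (MCAR-only) imputation strategy described above can be sketched in a few lines. This is an illustrative Python sketch, not the chapter's software stack; it assumes missing values are encoded as None, and the per-group missing fractions stand in for the formal statistical tests mentioned in the text:

```python
def missing_fraction_by_group(values, groups):
    """Per-group fraction of missing (None) entries for one variable.

    Roughly equal fractions across groups are consistent with MCAR; a
    monotone or group-specific pattern argues for structured missingness.
    """
    totals, missing = {}, {}
    for v, g in zip(values, groups):
        totals[g] = totals.get(g, 0) + 1
        if v is None:
            missing[g] = missing.get(g, 0) + 1
    return {g: missing.get(g, 0) / totals[g] for g in totals}


def mean_impute(values):
    """Fill missing entries with the observed mean (adequate only under MCAR)."""
    observed = [v for v in values if v is not None]
    mu = sum(observed) / len(observed)
    return [mu if v is None else v for v in values]
```

If the missing fractions differ sharply between, say, early and late study visits, the monotone (drop-out) pattern of Fig. 14.2 is more plausible than MCAR, and mean imputation should give way to the structured methods listed above.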

14.2.3 Feature Selection
While data-imputation aims to recover missing data, many datasets suffer
from the opposite problem of redundancy. In such datasets,
it is desirable to induce a sparsity of variables and preserve only the ones that
may be of relevance to the data-science question. This procedure of selecting
relevant features is most useful in the presence of multi-collinearity (some features
being a linear combination of others) or even nonlinear relationships in the data.
The presence of linear and nonlinear correlations between variables makes it a
mathematically underdetermined system. Further, Occam’s razor suggests that the most
parsimonious models should be selected, since adding variables often leads to overfitted models. In many such Big-data problems, traditional methods of statistics for
dimensionality reduction such as Principal Component Analysis (PCA) and Multidimensional Scaling (MDS) are sufficient to reduce the dimensionality using linear
transformations to an orthonormal basis. However, in other Big-data situations
such as genomics, where the number of features can run into millions per patient,
specific statistical and machine learning approaches are often required. Briefly, such
feature selection methods can be classified into (i) Filter (ii) Wrapper and (iii)
Embedded approaches and the interested reader is referred to (Wang et al. 2016) for

Fig. 14.2 Undesirable Sparsity and Desirable Sparsity in Biomedical Big-data. (a) shows the
monotone pattern of missing data with patients dropping out over the length of the study,
hence invalidating the statistical assumption of Missing Completely at Random (MCAR). (b,
c) Deliberate induction of sparsity of variables in the data (feature selection) by using machine
learning algorithms such as Boruta. The example shows a run of variable selection carried out by
the author for selecting variables important for predicting Dengue severity (b) and recovery (c)
from Severe Dengue illness (Singla et al. 2016)



an excellent review for bioinformatics applications. Filter approaches are the easiest
to implement and are computationally efficient. These include selection of variables
through statistical measures such as correlation-coefficients (for regression) and
tests of statistical significance (for classification). However, filter-based approaches
suffer from the problem of multiple hypothesis testing (Farcomeni 2008) as the
number of variables becomes large, hence leading to sub-optimal models. Wrapper-based
approaches address this problem in a more direct manner by selecting the
subset of variables which minimizes model error. However, these are computationally
expensive. A popular wrapper algorithm for feature selection is Boruta
(Miron and Witold 2010), a machine learning wrapper around the Random
Forest algorithm. This algorithm is centered around the concept
of Variable Importance. While Random Forest generates variable importances, it
does not test for their statistical significance. The Boruta
algorithm fills this gap by testing variable importances for significance
against permuted (shadow) datasets and has been shown to be one of the most robust
methods of feature selection currently available (Fig. 14.2b, c).
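The shadow-feature idea behind Boruta can be conveyed with a deliberately simplified sketch. Here absolute Pearson correlation with the outcome stands in for Random Forest variable importance (an assumption made purely to keep the example self-contained; the real Boruta uses Random Forest importances and a more careful statistical test):

```python
import random

def shadow_feature_selection(X, y, n_rounds=100, seed=0):
    """Toy sketch of Boruta's shadow-feature idea.

    A feature is kept if its importance (here: |Pearson correlation| with
    y) exceeds the best importance achieved by any permuted "shadow" copy
    of the features in a majority of rounds.
    """
    rng = random.Random(seed)
    n, p = len(X), len(X[0])

    def abs_corr(col):
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (vx * vy)) if vx > 0 and vy > 0 else 0.0

    cols = [[X[i][j] for i in range(n)] for j in range(p)]
    real_imp = [abs_corr(c) for c in cols]
    hits = [0] * p
    for _ in range(n_rounds):
        # best importance among permuted (shadow) copies of every feature
        shadow_max = max(abs_corr(rng.sample(c, n)) for c in cols)
        for j in range(p):
            hits[j] += real_imp[j] > shadow_max
    return [j for j in range(p) if hits[j] > n_rounds / 2]
```

Features that beat the best shadow importance in most rounds are deemed genuinely informative; the rest are discarded, inducing the desirable sparsity of variables.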

14.2.4 State-of-the-Art and Novel Algorithms
A third scenario of data sparsity and need for reliable capture exists for biological
signals such as ECG and EEG and for images such as MRI and CT scans. These signals
are often corrupted by noise, compressed with a loss of fidelity
(lossy compression), and subject to technological limitations, sensor failure,
machine breakdown, etc.
machine breakdown etc. Therefore, a challenging problem in data science is to
reconstruct the original signal such as an MRI image or an ECG signal from its
lossy version. Such reconstruction might be critical in settings such as Intensive
Care Units (ICUs) where loss of signal often results from sensors dislodging
secondary to patient movements. A similar situation could often arise with wellness
trackers in the community settings. While until about a decade ago, it was thought
impossible to recover the original signal from such under-sampled data (because
of the restrictions imposed by the Shannon-Nyquist criterion) (Jerri 1977), recent
research (Ravishankar and Bresler 2015) has proved that such reconstruction is
possible because most signals have little contribution from higher order terms. Thus,
regularization approaches can be leveraged to perfectly reconstruct the underlying
signal. This branch of signal processing called Compressed Sensing (CS) has
proven to be of immense value in image reconstruction and in physiological signals
(Ravishankar and Bresler 2015) and is finding applications in Biomedical Big-data.
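The flavor of such reconstruction can be conveyed by a minimal iterative soft-thresholding (ISTA) sketch for the L1-regularized recovery problem min ½||y − Ax||² + λ||x||₁. This is an illustrative pure-Python implementation with toy-sized dimensions, not the dictionary-learning machinery of the cited work:

```python
import math
import random

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def ista_reconstruct(A, y, lam=0.01, iters=300):
    """Recover a sparse signal x from under-sampled measurements y = A x
    by minimizing 0.5*||y - A x||^2 + lam*||x||_1 with ISTA."""
    m, n = len(A), len(A[0])
    # Estimate the Lipschitz constant L = ||A^T A||_2 by power iteration.
    v, L = [1.0] * n, 1.0
    for _ in range(50):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        L = math.sqrt(sum(c * c for c in w))
        v = [c / L for c in w]
    x, step = [0.0] * n, 1.0 / L
    for _ in range(iters):
        # gradient step on the data-fit term: A^T (A x - y)
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        grad = [sum(A[i][j] * (Ax[i] - y[i]) for i in range(m)) for j in range(n)]
        x = [soft_threshold(x[j] - step * grad[j], step * lam) for j in range(n)]
    return x
```

With far fewer measurements than unknowns (e.g. 30 measurements of a 60-dimensional, 3-sparse signal), the L1 penalty drives the iterates to the sparse solution that the Shannon-Nyquist argument alone would declare unrecoverable.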
Another state-of-the-art approach for signal reconstruction is using deep learning
and autoencoders. Briefly, autoencoders are a special class of neural networks where
the original data are re-constructed in the output layer while the hidden layers
learn the features and representations of the input signals. A full discussion on
autoencoders is beyond the scope of this chapter and the interested reader is referred
to (Goodfellow et al. 2016).



14.2.5 Physiological Precision and Stratified Medicine
The final case for reliable capture of data stresses upon the physiological heterogeneity of health and disease. The original classification of diseases was based
mostly upon signs, symptoms and pathological features of a disease. However, it
is being realized that many different mechanisms lead to similar pathophysiological
outcomes and hence manifestations of diseases. This is especially valid for chronic
multifactorial disorders such as Asthma, Diabetes, Obesity and Cardiovascular
disorders which have common underlying themes such as inflammation (Arron et al.
With the advent of Big-data and data-science technologies, it is now possible
to address this heterogeneity and imprecise disease classification. This understanding
was instrumental in the proposition of the Precision Medicine Initiative
(Collins et al. 2015). State-of-the-art computational and mathematical
techniques for clustering multidimensional data (Hinks et al. 2015) are being used
to characterize individuals on millions of features and lend precision
to diagnosis. Evidently, active development of unsupervised clustering methods is
one of the most important advances in this direction. Data-driven aggregation of
patients based upon multivariate similarity measures derived from data has led to the
approach known as Stratified Medicine (Fig. 14.3a). These algorithms of patient
stratification and aggregation attempt to define pure-subclasses by minimizing the
ratio of within-group to between-group variability. In most medical conditions
stratified so far (e.g. breast cancer), the discovered clusters are expected to have
differential disease evolution or response to therapy (Nielson et al. 2015).
On the mathematical side, most clustering algorithms rely upon the notion
of a dissimilarity measure. This may range from simple distance metrics such
as Euclidean (2-norm), Manhattan (taxicab norm) and Mahalanobis metrics to

Fig. 14.3 Stratified Medicine. Human physiology integrates the diverse layers mentioned in Fig.
14.1 to produce complex health and disease phenotypes. (a) Our understanding of this integration
and the physiological network is sparse. (b) Application of data-science to recapitulate physiological
integration may create a more complete understanding of physiology and stratification of
diversity. This stratification is helping doctors in tailoring therapies to sub-groups of patients
(see text)



dissimilarities obtained through machine learning algorithms such as unsupervised
Random Forests (Breiman 2001). For the healthcare data-scientist, the choice of
distance metric may prove to be critical. Often, this choice is dictated by the type
of the variables (numerical, categorical or mixed) and the presence of noise in the
data. Following the choice of a metric and the application of a clustering algorithm,
cluster quality inspection and visualization algorithms such as Multidimensional
Scaling (MDS), Partitioning around Medoids (PAM) further help the data-scientist
in making an informed choice on the strata that may exist in a particular disease.
In addition to standard clustering algorithms, newer methods and approaches have
focused on clustering complex data of arbitrary shapes and include multivariate
density based clustering (Ester et al. 1996), Self-organizing maps (Kohonen 1982),
Message passing between the data-points (Frey and Dueck 2007) and Topological
Data Analysis (Nielson et al. 2015).
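A minimal PAM-style (k-medoids) sketch shows why the choice of dissimilarity matters: the medoid update works with any metric supplied as a function, so Euclidean, Manhattan, or a dissimilarity learned by, say, unsupervised Random Forests can be swapped in without changing the algorithm. This is an illustrative sketch, not a full PAM implementation:

```python
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def k_medoids(points, k, dist, iters=20, seed=0):
    """Minimal PAM-style clustering around medoids.

    Medoids (actual data points) replace centroids, so any dissimilarity
    measure -- not just Euclidean -- can drive the stratification.
    """
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        # assign each point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(points[m], p))
            clusters[nearest].append(i)
        # re-pick each medoid as the member minimizing within-cluster distance
        new = [min(members,
                   key=lambda c: sum(dist(points[c], points[i]) for i in members))
               for members in clusters.values()]
        if sorted(new) == sorted(medoids):
            break
        medoids = new
    return medoids, clusters
```

Because the medoid is itself a patient record, the cluster "centers" of such a stratification are directly interpretable at the bedside, unlike the abstract centroids of k-means.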

14.2.6 The Green Button: A Case Study on Capture Reliably
Principle of the CAPE Roadmap
Tailoring medical decisions, practices and products to the individual patient is the goal of precision
medicine and personalized medicine. As discussed in the preceding section, stratified medicine is a key step towards this goal. The Green Button (Longhurst et al. 2014) is
an example of stratification applied to Big-data that resides in the Electronic Health
Records (EHRs). Since every patient is unique, this proposition aims at creating
patient cohorts “on the fly” by matching characteristics across millions of patient
records present in the EHRs. This would enable the new paradigm of practice-based
evidence combined with the current paradigm of evidence-based practice.
Such an approach would also complement Randomized Controlled Trials (RCTs),
the current gold standard in medical knowledge discovery. RCTs, apart from being
prohibitively expensive, also suffer from over-estimation of statistical effect sizes
because of stringent patient selection criteria. Often, such criteria are not met by
routinely seen patients and hence the conclusions of most RCTs fail to generalize
to routine clinical settings. The Green Button proposes to minimize such bias by
allowing the creation of patient aggregates at the point-of-care in the clinic itself thus
enabling generalization and bed-side application. Additionally, the Green Button
approach inherently demands inter-operability of EHRs, and secure data-sharing by
the formation of hospital-networks such as PCORnet (Fleurence et al. 2014), thus
pushing for data-standards in the biomedical community.



14.3 Approach Systemically

Dense inter-connectivity, regulatory phenomena, and continuous adaptations distinguish biological systems from mechanical and physical systems. Therefore,
reductionist approaches, while immensely successful in physical systems (e.g. automobiles and spacecraft), have met with limited success in biology and medicine.
Hence, holistic approaches to biomedical data with interdisciplinary application of
mathematics, computer science and clinical medicine are required. This understanding led to the birth of Systems Biology and to the recent fields of Networks Medicine
and Systems Medicine. The umbrella term of Systems Medicine is proposed to be
the next step for healthcare advancement and has been defined by (Auffray et al.
2009) as, “the implementation of Systems Biology approaches in medical concepts,
research, and practice. This involves iterative and reciprocal feedback between clinical investigations and practice with computational, statistical, and mathematical
multiscale analysis and modeling of pathogenetic mechanisms, disease progression
and remission, disease spread and cure, treatment responses and adverse events,
as well as disease prevention both at the epidemiological and individual patient
level. As an outcome Systems Medicine aims at a measurable improvement of patient
health through systems- based approaches and practice”.
Therefore, this section reviews the most common approaches that are being
applied to achieve a holistic and data-driven systems medicine.

14.3.1 Networks Medicine
In the Networks Medicine paradigm, the states of health, disease, and recovery
can be thought of as networks of complex interactions and this approach has
recently gained much popularity in biomedical Data-science (Barabási et al. 2011).
A network is a data structure with variables represented as ‘nodes’ and connections



between objects represented as ‘edges’. Therefore, the network representation is
not only an excellent tool for visualizing complex biological data but also serves
as a mathematical model for representing multivariate data. It has been found that
most biological networks display a common underlying pattern called scale free
behavior which has been discovered and re-discovered multiple times and in various
contexts such as Economics, Statistics and Complexity Science (Newman 2005). It
has been variously described as the Pareto Principle, the 80–20 principle and the
Power-law distribution, and simply reflects the absence of a single characteristic scale
in biological networks. Intuitively, this implies that the frequency of nodes falls off
as a power law of their number of connections (linearly on a log-log scale, Fig. 14.4b),
leading to a very small number of hub nodes holding most of the edges in the
network. This picture is consistent with
many of the known natural and societal phenomena.
In the context of evolutionary development of function, it has been proposed that
scale free behavior emerges because of “preferential attachment” of new functions
to already well-developed components of the network, thus implying a “rich get
richer” scenario. Being a quantifiable property, scale free behavior has been exploited
to target the key components of a network, reveal communities, interactions and
the spread of information, and to design interventions that may disrupt this communication.
This strategy has found recent uses in effective understanding of drug development
(Rodriguez-Esteban 2016) and for understanding community health dynamics
(Salvi et al. 2015).
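The “rich get richer” mechanism is easy to simulate. The sketch below grows a Barabási-Albert-style network by preferential attachment; sampling an endpoint uniformly from the edge list is equivalent to sampling a node proportionally to its degree. All parameter values are illustrative:

```python
import random

def preferential_attachment_graph(n, m=2, seed=0):
    """Grow a Barabasi-Albert-style network by preferential attachment.

    Each new node links to m existing nodes chosen proportionally to
    their current degree ("rich get richer"), which yields the
    heavy-tailed, scale free degree distribution described above.
    """
    rng = random.Random(seed)
    # start from a small fully connected core of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # a uniform draw from this endpoint list is degree-proportional sampling
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

def degrees(edges):
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d
```

Even a few hundred nodes grown this way show the signature of Fig. 14.4: most nodes keep the minimum degree m while a handful of early hubs accumulate a large share of the edges.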

Fig. 14.4 Scale free property of complex networks. Most complex networks including those
encountered in healthcare and biology display the presence of hubs at each scale of the network.
These hubs are important in understanding not only technological networks such as (a) The Internet
but also biological networks. (b) Illustration of the power law distribution of degree-connectivity
that defines a scale free network



14.3.2 Information Theory for Biology
In the late nineteenth century, Ludwig Eduard Boltzmann revolutionized the study of
physical systems by abstracting away the microscopic properties of a system into a
macroscopic quantity called Thermodynamic Entropy,

S = k log W
Almost a century later, Claude Shannon proposed the theory of information
(Shannon 1948) to calculate the information content of any system and proposed
the famous equation,

H(X) = −Σᵢ pᵢ log pᵢ
The two concepts and equations share a deep theoretical relationship through
the Landauer Principle (Landauer 1961), that connects thermodynamic entropy
with change in information content of a system. Since physiological and cellular
functioning essentially involves information transfer through mediators such as
neural tracts or chemical messengers, the concepts of entropy and information have
found application in designing Big-data applications in biology at a fundamental
level (Rhee et al. 2012). For example, quantification of the entropy and complexity of
heart-beat patterns has profound applications in critical care settings, with a lower
complexity of inter-beat intervals shown to be a risk factor for a higher five-year
mortality and a poor prognosis after critical events such as an ischemic heart attack
(Mäkikallio et al. 1999).
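As a toy version of such complexity measures, the Shannon entropy of a histogram of inter-beat (RR) intervals can be computed directly from the formula above. This sketch uses a fixed-width histogram for illustration; clinical analyses use more refined measures such as approximate or sample entropy:

```python
import math

def shannon_entropy(samples, bins=10):
    """Shannon entropy (in bits) of a fixed-width histogram of the samples.

    Applied to inter-beat (RR) intervals, lower entropy means a less
    complex, more metronomic rhythm -- the pattern associated with worse
    outcomes in the critical-care studies cited above.
    """
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # guard against a constant series
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

A perfectly regular heart-beat yields zero entropy, while a healthily variable one spreads its intervals across many bins and scores higher.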

14.3.3 Agent Based Models
Agent Based Models (ABMs) are a class of computational models that are particularly suited for holistic modeling of a system in the presence of interactions
and rules. These models are based upon the theory of planned behavior and
allow autonomous entities (called agents) to interact in space-time, hence allowing
collective dynamics of the system to emerge. ABMs allow learning from data
in a fashion akin to societal interactions, i.e., peer-to-peer interaction where each
agent can be thought of as an individual with a set of rules to enable decisions
while interacting. The set of rules is updated every time an interaction occurs
in accordance with the perceived gain or loss to the individual. Combined with the
principles of behavioral psychology (such as reward and punishment) this leads to
a powerful tool in the arsenal of data-science known as Reinforcement Learning,
which has been applied to healthcare at clinical (e.g. optimal planning to prevent
sepsis in Intensive Care Units), (Tsoukalas et al. 2015) as well as community levels
(Fig. 14.5).
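A toy agent-based model of health-practice adoption conveys the flavor of such simulations. Agents interact with a few random peers per round and convert with a fixed per-contact probability; the random-mixing contact pattern and all parameter values are illustrative assumptions, not calibrated to the Bihar study:

```python
import random

def simulate_adoption(n_agents=200, influencers=10, rounds=30,
                      p_peer=0.05, seed=1):
    """Toy agent-based model of health-practice adoption.

    Seeded "influencers" start as adopters; at each round a non-adopter
    converts with probability p_peer per adopting contact. Returns the
    adoption curve (number of adopters after each round).
    """
    rng = random.Random(seed)
    adopted = [i < influencers for i in range(n_agents)]
    curve = [sum(adopted)]
    for _ in range(rounds):
        next_state = adopted[:]
        for i in range(n_agents):
            if adopted[i]:
                continue
            # each agent interacts with a few random peers this round
            contacts = rng.sample(range(n_agents), 5)
            exposures = sum(adopted[c] for c in contacts)
            if rng.random() < 1 - (1 - p_peer) ** exposures:
                next_state[i] = True
        adopted = next_state
        curve.append(sum(adopted))
    return curve
```

Varying the number of influencers or the contact pattern reproduces, in miniature, the kind of conversion-rate experiments shown in Fig. 14.5b.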



Fig. 14.5 Agent Based Modeling for Behavioral adoption of critical health practices in the State
of Bihar (a) (Aaron Schecter). This example simulation run (b) shows the conversion rate in a
community as a function of advisors and influencers in the local community. Such models are
expected to be extremely useful for modifying community behavior towards adoption of better
health practices in the community

14.3.4 Prevalence of Symptoms on a Single Indian Healthcare
Day on a Nationwide Scale (POSEIDON): A Case
Study on Approach Systemically Principle of the CAPE
This case study (Salvi et al. 2016) illustrates application of Networks Analysis
upon a unique patient data resource of 204,912 patients collected on a single
day across India by Chest Research Foundation, Pune, India. The purpose of
the study was to get a snapshot of “what ails India”. India is amongst the
countries with the highest disease burden in the world as per its Annual Report,
2010, and an epidemiologic transition from infectious to life-style disorders has
been documented. The methodology adopted for the POSEIDON study was to
conduct a one-day, point prevalence study, across India using an ICD-10 compliant
questionnaire. In addition to the standard statistical approaches, a Networks-based
approach to understanding the global structure of the “Symptome” of India was carried
out by the author of this chapter. Data were divided by decades of age to dissect the
out by the author of this chapter. Data were divided by decades of age to dissect the
change in network structure across the different age groups of the Indian population.
Edges were derived based upon significance achieved by the Fisher’s exact test, a
standard test applied to test for associations. Since a weighted network analysis
gives better information about the community structure, the negative log of p-value
was used as weights for pairwise associations of symptoms in each age group.
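The edge-weighting step can be sketched directly: a one-sided Fisher's exact p-value for the 2 × 2 co-occurrence table of a symptom pair, with −log(p) used as the edge weight. This is a minimal stdlib illustration of the computation described above, not the study's actual code:

```python
import math

def fisher_one_sided_p(a, b, c, d):
    """One-sided Fisher's exact p-value for a 2x2 co-occurrence table.

    Rows: symptom X present/absent; columns: symptom Y present/absent.
    Returns P(observing >= a co-occurrences) under the hypergeometric null.
    """
    N, K, n = a + b + c + d, a + b, a + c
    denom = math.comb(N, n)
    p = 0.0
    for k in range(a, min(K, n) + 1):
        if n - k <= N - K:
            p += math.comb(K, k) * math.comb(N - K, n - k) / denom
    return p

def edge_weight(a, b, c, d):
    """-log(p) edge weight for the symptom association network."""
    return -math.log(fisher_one_sided_p(a, b, c, d))
```

Strongly associated symptom pairs get small p-values and hence heavy edges, so downstream community-detection algorithms preferentially group them into the same module.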
Mapequation algorithm (Rosvall and Bergstrom 2008) was then used to detect
community structure (modularity) in the network, and the dynamics of the modules
were represented as an alluvial diagram (Rosvall and Bergstrom 2008). As can be
seen in the association network (Fig. 14.6) each symptom/disease is represented



Fig. 14.6 Association Networks and Alluvial Mapping for quantitative insights. In the POSEIDON
case study (a) Diseases (nodes) formed communities which merged together (right).
(b) These communities were found to change across the age groups of the population in the
visualization in the form of an alluvial mapping of 204,912 Indian OPD patients (the POSEIDON
study, Salvi et al. 2015)

as a node and each pair of symptoms/diseases was tested for association. Force-directed and Kamada-Kawai graph-layout algorithms were then used to visually
inspect the structure of these networks which were found to be strikingly different
between young and elderly age groups (Fig. 14.6a). Further, the dynamic nature
of these community patterns was represented through an alluvial mapping that
showed the change in communities represented as flow of streamlines. It was seen
that the respiratory group of comorbidities (blue, thick streamline in the figure)
was most prevalent across all age groups. Most interestingly, the Circulatory (red)
streamline was seen to merge with Endocrine (purple) comorbidities later in life.
This was confirmed to be due to diabetes being the most common comorbid endocrine
disorder at this age. Hence, a data-driven merger of nodes represented the comorbid association of cardiovascular diseases and diabetes. Similarly, a merger
of Female Genitalia (violet) with Anemia (sky-blue) was seen in the reproductive
age group, anemia due to menstrual disorders being hugely prevalent in this age


T. Sethi

group in women of India. Interestingly, the reader may note that the violet and sky-blue streamlines parted ways after the 40–50 age group, signifying the disassociation
of anemia from reproductive disorders after menopause. In addition to the Approach
Systemically principle, this example also highlights the importance of data-visualization in Big-data analytics and the emerging need to devise new methods of visualizing complex
multivariate data to enable an intuitive grasp of Big-data.
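The edge-derivation step described above can be sketched with standard-library Python alone; the 2×2 contingency counts below are hypothetical, and in practice a library routine such as scipy.stats.fisher_exact would replace the hand-rolled two-sided test:

```python
import math

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].
    Sums the hypergeometric probabilities of every table that is as
    extreme as, or more extreme than, the observed one, with the row
    and column totals held fixed."""
    r1, r2 = a + b, c + d
    c1 = a + c
    n = r1 + r2

    def hyper(x):  # P(top-left cell = x) under the null hypothesis
        return (math.comb(r1, x) * math.comb(r2, c1 - x)) / math.comb(n, c1)

    p_obs = hyper(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)

def edge_weight(a, b, c, d):
    """Edge weight for a symptom pair: the negative log of the p-value,
    so stronger (more significant) associations get heavier edges."""
    p = fisher_exact_p(a, b, c, d)
    return -math.log(p) if p > 0 else float("inf")
```

A perfectly associated table such as [[10, 0], [0, 10]] yields a tiny p-value and hence a heavy edge, while a flat table such as [[5, 5], [5, 5]] yields p close to 1 and a weight near zero; community detection (e.g. with the Mapequation/Infomap tools) would then run on the resulting weighted graph.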

14.4 Phenotype Deeply

While “Approaching Systemically” addresses the breadth and scope of Big-data,
there is an equally important need for deep phenotyping of healthy individuals and
patients. Hence, the next CAPE principle, Phenotype Deeply, aims to discover
meaningful “patterns within noise” and to exploit these for understanding the
healthy and disease states. In contrast to gross changes in summary statistics (such as
average measurements), pre-disease states are often characterized by subtle changes
in dynamical behavior of the system (such as change in variability, fractal behavior,
long-range correlations etc.). As an example, the human body is made up of about
30 trillion cells consisting roughly of about 200 cell-types arranged into tissues
that perform orchestrated physiological functions. Despite sharing the same genetic
code, there is enormous functional and phenotypic variability in these cells. Further,
each cell-type population (tissue) has a considerable amount of heterogeneity within
the tissue itself. Most cellular-level experiments ignore this variation, which is often
summarized into the average properties (e.g. gene expression) of the cell population. It
was only the advent of single-cell sequencing technology that shed light
on the enormous diversity and distinct functional sub-types even within a cellular


population. In a manner reminiscent of scale-free networks, similar heterogeneity
(noisiness) of function exists at the phenotypic and physiological levels, heart rate
variability being a prominent example.

14.4.1 Principal Axes of Variation
Since it is impossible to measure the entire physiology of an individual with the
available technologies, a natural question that a biomedical data-scientist faces is
“where to start with deep phenotyping of an individual?” The answer may lie in a
combination of expert-driven and data-driven approaches. Knowledge of the key network players may be combined with expert knowledge of human physiology and the
disease in question. In most scientific approaches to Big-data, it is prudent to form
a scientific hypothesis which may be tested through Big-data analytics. One such
scientific hypothesis, consistently validated, is the existence of physiological axes that
form the core of many complex disorders (Ghiassian et al. 2016). These axes (also
called endophenotypes, endotypes or shared intermediate patho-phenotypes) can be
thought of as major relay stations for the development of a multitude of diseases
including complex diseases such as diabetes and cardiovascular diseases (Ghiassian
et al. 2016). At the cellular level, such key axes of health-regulation are found to be
(i) inflammation, (ii) fibrosis, and (iii) thrombosis. Similarly, at the physiological
level, systems which are known to integrate regulatory influences and maintain
homeostasis are expected to be strong candidates for deep-phenotyping approaches.
Since the Autonomic Nervous System (ANS) is a natural choice as an integrator
of physiological networks, its quantification through Heart Rate Variability may be
one of the key factors in untangling the complexity of diseases as discussed before
and in the following case study.

14.4.2 Heart Rate Variability: A Case Study on Phenotype
Deeply Principle of the CAPE Roadmap
A large majority of the automatic and unconscious regulation of human physiological functions happens through the Autonomic Nervous System (ANS). The
nerve supply from the ANS controls some of the most vital physiological processes,
including heart rate, breathing rate, blood pressure, digestion, sweating, etc., through
rich innervation bundled into two opposing components that dynamically balance
each other out. These are the sympathetic ("fight or flight") and the parasympathetic
("rest and digest") components, respectively. Thus,
the assessment of sympathetic and parasympathetic components can yield insights
into the delicate dynamical balance of this physiological axis. Interestingly, this
axis has been shown to be perturbed early in the presence of a variety of diseases


and these perturbations can be measured non-invasively through heart beat intervals
(Task Force for Heart rate variability 1996). The beating of the heart is not a
perfectly periodic phenomenon as it constantly adapts to the changing demands
of the body. Therefore, the heart rate exhibits complex behavior even at rest and
this variation in the rhythm of the heart is known as heart rate variability (HRV).
An illustration of the inter-beat interval time series obtained from an ECG is depicted
in Fig. 14.7. One of the most significant uses of heart rate variability is in the
prediction of a devastating and often fatal bloodstream infection (sepsis) in
newborn children admitted to an Intensive Care Unit (ICU). In the study conducted
by Fairchild et al. (2013), the heart rhythm showed lower complexity (as evidenced
by a fall in entropy) up to 72 h before clinical recognition of sepsis. A pervasive
application of this technique can therefore open a window of early recognition in
which newborn babies could be treated for sepsis. This was further validated in a
clinical trial of the test's clinical value, and the results supported the
predictive value of these features in decreasing deaths due to sepsis by about 5%
(Fairchild et al. 2013).
The lack of popularity of deep phenotyping stems from the highly mathematical
nature of such patterns. However, it is anticipated that interdisciplinary application
of mathematics and computer science shall be the driving force for next-generation
medicine. Thus, adequate mathematical training and comfort with computational
tools cannot be over-emphasized for the development of Big-data and Data-science
for medicine. The mathematical features of interest in HRV are defined under three
broad types, i.e. Time Domain, Frequency Domain and Nonlinear Features (Task
Force for Heart rate variability 1996).
(a) Time domain analysis involves calculating summary statistics and includes:

i. Mean of the normal RR intervals, given by:

   mean(RR) = (1/N) Σ_{i=1}^{N} RR_i

ii. SDNN: the standard deviation of the NN time series, calculated as:

   SDNN = √[ (1/(N − 1)) Σ_{i=1}^{N} (RR_i − mean(RR))² ]


Fig. 14.7 Heart Rate Variability. (a) Heart rate time series is derived from peak-to-peak time
difference in R waves from normal QRS complexes in an ECG. The heart accelerates when these
intervals get shorter and vice versa. A few such events are marked. Even within healthy individuals
(b–d) there is a considerable heterogeneity of physiological patterns that needs to be deciphered
through methods of data-science such as pattern mining


iii. SDSD: the standard deviation of the successive differences of the RR
intervals, given as:

   SDSD = √( E{ΔRR_i²} − (E{ΔRR_i})² ),  where ΔRR_i = RR_{i+1} − RR_i

iv. SDANN: This is the standard deviation of the average of NN intervals over a
short duration of time (usually taken over 5 min periods) (4).
v. RMSSD: the root mean square of the differences between successive NN intervals,
given by:

   RMSSD = √[ (1/(N − 1)) Σ_{i=1}^{N−1} (RR_{i+1} − RR_i)² ]

vi. NN50: the count of pairs of successive NN intervals differing by more than 50 ms.
vii. pNN50: the percentage of NN50 in the total number of NN intervals recorded (4),
calculated as:

   pNN50 = (NN50 / (N − 1)) × 100%
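The time-domain features above reduce to a few lines of stdlib Python; the RR values used in the usage note are arbitrary illustrative numbers (in milliseconds):

```python
import math

def time_domain_hrv(rr):
    """Basic time-domain HRV features from a list of NN intervals (ms):
    mean RR, SDNN, RMSSD, NN50 and pNN50, as defined in the text."""
    n = len(rr)
    mean_rr = sum(rr) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (n - 1))
    diffs = [rr[i + 1] - rr[i] for i in range(n - 1)]  # successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    nn50 = sum(1 for d in diffs if abs(d) > 50)        # differences > 50 ms
    pnn50 = 100.0 * nn50 / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd,
            "nn50": nn50, "pnn50": pnn50}
```

For example, time_domain_hrv([800, 810, 790, 860, 805]) reports two successive differences larger than 50 ms (NN50 = 2) out of four, i.e. pNN50 = 50%.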

(b) Frequency domain analysis aims to discover the hidden periodic influences in the
heart rate series and includes:
i. Fourier transform: a theoretically well-founded algorithm developed
from first principles of mathematics (8) which uses sines and cosines as "basis"
functions for calculating the power associated with each frequency:

   RR_n = Σ_{t=0}^{N−1} RR_t e^{2πitn/N},  n = 0, …, N − 1

where f_n denotes the frequency of the n-th component, n = −N/2, …, N/2.


The power spectrum is defined by the following equations:

   P_0 = (1/N²) |RR_0|²,  at zero frequency
   P_n = (1/N²) ( |RR_n|² + |RR_{N−n}|² ),  at n = 1, 2, …, N/2 − 1
   P_c = (1/N²) |RR_{N/2}|²,  at the Nyquist critical frequency

The application of the Fourier transform in HRV has given many insights of
physiological relevance. It was first found from the Fourier spectrum that the heart
rate has at least two distinct modes of oscillation: a low-frequency (LF) mode
in the 0.04–0.15 Hz band and a high-frequency (HF) mode in the 0.15–0.40 Hz band.
Experiments in animals and humans showed that the HF component arises from
parasympathetic control of the heart. It has been found to be associated with the
breathing frequency, and hence is also known as Respiratory Sinus Arrhythmia (RSA).
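The periodogram above can be sketched in stdlib Python with a direct O(N²) DFT; in practice numpy.fft.rfft over an evenly resampled RR series would be used, and the sampling rate fs in the band-power helper is an assumption of the sketch:

```python
import cmath

def power_spectrum(rr):
    """Folded periodogram P[0..N//2] of a real series, normalized as in
    the text: P_0 = |RR_0|^2 / N^2, P_n = (|RR_n|^2 + |RR_{N-n}|^2) / N^2,
    and the Nyquist term |RR_{N/2}|^2 / N^2."""
    n = len(rr)
    spec = []
    for k in range(n // 2 + 1):
        h = sum(rr[t] * cmath.exp(2j * cmath.pi * t * k / n) for t in range(n))
        scale = 1.0 / n ** 2
        if k == 0 or (n % 2 == 0 and k == n // 2):
            spec.append(scale * abs(h) ** 2)       # unpaired bins
        else:
            spec.append(2 * scale * abs(h) ** 2)   # fold RR_{N-k} onto RR_k
    return spec

def band_power(spec, fs, lo, hi):
    """Total power in [lo, hi) Hz, e.g. LF = 0.04-0.15, HF = 0.15-0.40;
    fs is the resampling rate, and bin k sits at k * fs / N Hz."""
    n = 2 * (len(spec) - 1)
    return sum(p for k, p in enumerate(spec) if lo <= k * fs / n < hi)
```

With this normalization the spectrum sums to the mean square of the signal, so the LF/HF ratio can be read off directly from two calls to band_power.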
(c) Nonlinear analyses of HRV for complexity quantification include:
i. Poincaré plot: In this method, time-delayed embedding of a signal is
accomplished by reconstructing the phase space using lagged values
of the signal. Poincaré plot analysis is one of the simplest and most popular
methods of phase-space reconstruction of the cardiac inter-beat (RR) interval
series, where RR(n) is plotted against RR(n + d).
ii. Fractal analysis using Detrended Fluctuation Analysis: Fractal structures
are self-similar structures which exhibit the property of long-range correlations. Detrended Fluctuation Analysis (DFA) is a robust method of fractal
analysis, and its use for physiological signals is described in (Task
Force for Heart rate variability 1996).
iii. Entropies: As discussed earlier, entropy methods such as Shannon Entropy,
Approximate Entropy and Sample Entropy analyze the complexity or irregularity of the time series. Mathematically, these entropy measures represent
the conditional probability that a time series of length N having repeated patterns
of length m, within a tolerance range r, will also have repeated patterns of
length m + 1. For example, sample entropy is defined as:

   SampEn(m, r, N) = −log [ C(m + 1, r) / C(m, r) ]
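This definition translates directly into stdlib Python; the quadratic-time sketch below assumes the tolerance r is given in the units of the series (in practice r is often set to 0.2 times the series' standard deviation):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r, N) = -log(A / B): B counts template pairs of length m
    within tolerance r (Chebyshev distance), A counts the same pairs when
    the templates are extended to length m + 1. Self-matches are excluded."""
    n = len(series)

    def count(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular alternating series yields SampEn of 0 (every length-m match extends to length m + 1), while irregular series yield larger values, which is exactly the fall in complexity exploited in the neonatal sepsis work discussed above.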


14.5 Enable Decisions

Until recently, medical knowledge discovery relied solely upon the rigor
of statistical approaches such as hypothesis testing. With the availability of Big-data,
complementary paradigms of Machine Learning and Artificial Intelligence have
emerged. These approaches place more emphasis upon predictive modeling than
upon hypothesis testing. Therefore, in contrast to traditional statistics, the data-driven
paradigm takes a more flexible approach at the outset, relaxing assumptions
about data such as parametric distributions. This relaxation of assumptions is
countered by rigorous testing of predictive power of the models thus learnt through
cross-validation and testing sets. The most popular approaches to machine learning
include Support Vector Machines (SVM), Random Forests (RF), Shallow and Deep
Neural Networks. Although doubts are raised about the flaws and spurious results
of such approaches, machine learning has consistently outperformed statistical modeling in many clinical situations because it better handles
the nonlinearity in these data (Song et al. 2004). Although statistical approaches
are more rigorous, these have often led to fishing for significance without clinical
translation. A meta-analysis of published findings shows that fishing for p-values
has led to a proliferation of scientific studies which are either false or nonreproducible (Ioannidis 2005). In this situation, machine learning approaches can
bring sanity by emphasizing predictive value rather than mere statistical support for
the proposed hypotheses. Therefore, these approaches have a definite potential if
applied in a robust manner. The key algorithms that are particularly important from
the standpoint of the CAPE approach are discussed below.


Fig. 14.8 Going Beyond Association networks to Enable Decisions through Causal Networks.
A causal model on the POSEIDON data is shown; it can help data-scientists and clinicians
in making informed decisions. This example shows the probability of tuberculosis in an Indian
patient presenting in the OPD with Lymph Node enlargement, in the absence of Anemia. Notice
that a particular Indian state (code 26) is associated with a higher probability of tuberculosis than
other states if everything else is held constant

14.5.1 Causal Modeling Through Bayesian Networks
Bayesian Networks (BN) are Probabilistic Graphical Models that extend the
argument on “Approach Systemically” and enable decisions. Unlike association
networks where the edges might merely represent indirect or spurious relationships, Bayesian Networks discover direct causal relationships through conditional
probabilities and repeated application of the Bayes Rule. Thus, BNs are one of the
most advanced analytical tools for enabling causal decisions (Pearl 2010). Having
adjusted for the possibility of spurious relationships, BNs are typically sparser than
association networks, and hence combine feature reduction, a systemic approach and
decision making in a single algorithm. Moreover, they reduce the possibility of
false relationships by fitting a joint multivariate model to the data, in contrast
to finding pairwise associations as is done for association networks. Further, these
models allow statistical inference to be conducted over the nodes of interest, thus
enabling actionable decisions and policy, making them one of the tools of choice
for community health models. An example of a Bayesian Network constructed
upon the POSEIDON data described earlier is shown in Fig. 14.8. Notice that this
network allows the data-scientist to take actionable decisions based on quantitative
inferences in contrast to pattern-mining for associations.
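The POSEIDON network itself is not reproduced here; as a stand-in, the stdlib-Python sketch below shows the kind of conditional-probability query such a model supports, computed empirically on entirely hypothetical records (a real BN would factorize the joint distribution, e.g. with the pgmpy library):

```python
# Hypothetical records: (lymph_node_enlargement, anemia, tuberculosis), 1/0
records = [
    (1, 0, 1), (1, 0, 1), (1, 0, 0), (1, 1, 0), (0, 0, 0),
    (0, 1, 0), (1, 0, 1), (0, 0, 1), (1, 1, 1), (1, 0, 0),
]

def p_tb_given(lymph, anemia):
    """Empirical P(TB = 1 | evidence): condition by filtering the records
    to the evidence, then take the fraction of them with tuberculosis."""
    matched = [tb for ln, an, tb in records if ln == lymph and an == anemia]
    return sum(matched) / len(matched) if matched else None

# Query mirroring Fig. 14.8: lymph node enlargement present, anemia absent
print(p_tb_given(1, 0))  # → 0.6 on these made-up records
```

A fitted Bayesian Network answers the same query by variable elimination over its conditional probability tables rather than by brute-force filtering, which is what makes inference tractable on many variables.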


14.5.2 Predictive Modeling
When the goal is not to find causal structure, Random Forests (Breiman 2001),
Support Vector Machines (Cortes and Vapnik 1995) and Neural Networks (Hinton
et al. 2006) are the most common classes of machine learning models
employed on complex datasets. A full discussion of each of these is beyond the
scope of this chapter. In clinical situations, the litmus test of predictive models
is "generalizability", i.e. optimal performance across different clinical sites and
sources of patient and subject data. This requires machine learning models to be
trained on real-life clinical data with all their complexity, such as class imbalance,
missingness, stratification and nonlinearity, rather than on synthetic, clean data.
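Before any external validation, generalizability is usually estimated internally by cross-validation; a minimal, model-agnostic stdlib-Python harness might look as follows (train_fn and predict_fn are placeholders for any learner):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled folds for cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(xs, ys, train_fn, predict_fn, k=5):
    """Per-fold accuracy: train on k-1 folds, test on the held-out one."""
    scores = []
    for fold in k_fold_indices(len(xs), k):
        test = set(fold)
        tr_x = [x for i, x in enumerate(xs) if i not in test]
        tr_y = [y for i, y in enumerate(ys) if i not in test]
        model = train_fn(tr_x, tr_y)
        correct = sum(predict_fn(model, xs[i]) == ys[i] for i in fold)
        scores.append(correct / len(fold))
    return scores
```

The spread of the per-fold scores is itself informative: a model whose scores vary wildly across folds is unlikely to transfer to a new clinical site, whatever its average accuracy.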

14.5.3 Reproducibility of Data Science for Biomedicine
Reproducibility is one of the biggest challenges of Biomedical Data-science. Hence,
there are global efforts to create standards for predictive modeling and machine
learning that enable reproducibility. One such standard is the Transparent Reporting
of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD)
(Collins et al. 2015), formulated under the Enhancing the QUAlity and Transparency
Of health Research (EQUATOR) consortium. TRIPOD provides guidelines and a
checklist of twenty-two criteria that must be met by a well-conducted predictive
modeling study in medicine. From the data-science perspective, the key criteria
are to clearly specify (i) data missingness, (ii) handling of predictors, (iii) type of
model, (iv) validation strategy, (v) model performance and model comparison, (vi)
model recalibration, and (vii) limitations of the study. It is expected that such initiatives
will improve the generalizability of predictive models and would help these to be
adopted across laboratories and clinics.

14.5.4 SAFE-ICU Initiative: A Full Spectrum Case Study
in Biomedical Data-Science
Intensive Care Units are one of the biggest sources of biomedical Big-data. However,
the application of Data-science to biomedical Big-data is relatively nascent. At the All
India Institute of Medical Sciences, New Delhi, India, an end-to-end initiative based
upon the CAPE principles has been launched for Pediatric Intensive Care Units. The
overarching goal of this initiative is to create a Sepsis Advanced Forecasting Engine
for ICUs (SAFE-ICU) for preventing mortality due to killer conditions such as
sepsis. Delay in the recognition of sepsis in the ICU can be devastating, with a mortality
as high as 50% in developing countries like India. The CAPE principles for SAFE-ICU have been built from the ground up, with in-house design and customization of
pipelines for reliable capture and analysis of Big-data (Fig. 14.9). Multi-parameter


Fig. 14.9 (a) While traditional statistics likes to deal with neat data (analogous to a library
catalogue), Big-data often starts with messy data (analogous to Einstein's messy desk). The
emerging role of Data-science is analogous to Einstein's brain, which synthesizes the two into
new knowledge. Full-spectrum demonstration of the CAPE principles through SAFE-ICU (b): in-house pipelines for warehousing Big-data from the Pediatric ICU. Lean prototyping of the pipelines
was initially carried out using a Raspberry Pi (c) and finally deployed on a server. Reliable capture
was ensured by deploying alert mechanisms (c). The reliably captured data were then structured
using text mining (d), followed by exploratory data analysis of the multivariate time series (d).
These time series include clinically relevant features such as Heart Rate, Oxygen Saturation,
Respiratory Rate, Pulse Rate, Blood Pressure, End-Tidal CO2, etc. These data are integrated with
laboratory investigations and treatment charts entered by clinicians and have led to the creation of
a unique pediatric intensive care Big-data resource

monitoring data are being warehoused using open-source platforms such as Spark,
Python and R, and are documented for reproducibility using markdown documentation. Text mining of unstructured files such as treatment notes has been
carried out, and the structured data are being warehoused alongside the multiparameter
monitoring data for building graphical models. A unique Pediatric ICU resource
of over 50,000 h of continuous multivariate monitoring data, followed by deep
phenotyping using mathematical and computational analyses, has been generated,
and models are being developed and tested with the aim of improving the delivery of care
and saving lives.
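As an illustration of the text-mining step, the sketch below pulls vital signs out of a free-text note with regular expressions; the note format and field names are invented for illustration and do not reflect the actual SAFE-ICU pipeline:

```python
import re

# Hypothetical free-text treatment note
note = "HR 142 bpm; SpO2 91%; RR 38/min; BP 84/46 mmHg"

PATTERNS = {
    "heart_rate": r"HR\s+(\d+)\s*bpm",
    "spo2": r"SpO2\s+(\d+)%",
    "resp_rate": r"RR\s+(\d+)/min",
}

def structure_note(text):
    """Pull named vital signs out of free text into a dict of ints;
    fields that do not appear in the note are simply omitted."""
    out = {}
    for name, pattern in PATTERNS.items():
        m = re.search(pattern, text)
        if m:
            out[name] = int(m.group(1))
    return out

print(structure_note(note))  # → {'heart_rate': 142, 'spo2': 91, 'resp_rate': 38}
```

Real clinical notes are far messier (abbreviations, typos, varying units), so production pipelines combine many such patterns with normalization and validation steps before the structured values are warehoused.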


14.6 Conclusion
It has been projected that by 2017 the United States alone will face a shortage of about
190,000 professionals trained at the interface of data-science
and their respective domain expertise. This number is expected to be even higher for
healthcare professionals, as bridging medicine with mathematics is undoubtedly
a challenging endeavor. However, over the past decade, common themes have
emerged in biomedical science which have led to this proposal of the CAPE
roadmap of Capture Reliably, Approach Systemically, Phenotype Deeply and
Enable Decisions. This roadmap is enabling us to identify blind spots in
the application of Data-science to medicine, and other data-science initiatives may
find a similar utility in this roadmap. Finally, the need of the hour for biomedical
data-science is to develop many such roadmaps and to adopt principles that may be
critical for the bedside translation of Big-data analytics.
Acknowledgements I acknowledge the Wellcome Trust/DBT India Alliance for supporting the
SAFE-ICU project at All India Institute of Medical Sciences (AIIMS) and Indraprastha Institute
of Information Technology Delhi (IIIT-Delhi), New Delhi, India. I also express deep gratitude to
Dr. Rakesh Lodha, Professor-in-charge of the Pediatric Intensive Care Unit at AIIMS for providing
an immersive clinical environment and constant clinical feedback upon Data-science experiments
for creating a SAFE-ICU. I also acknowledge the mentorship of Prof. Charles Auffray and Prof.
Samir K. Brahmachari and analytics support provided by Mr. Aditya Nagori.

References

Ali, O. (2013). Genetics of type 2 diabetes. World Journal of Diabetes, 4(4), 114–123.
Arron, J. R., Townsend, M. J., Keir, M. E., Yaspan, B. L., & Chan, A. C. (2015). Stratified
medicine in inflammatory disorders: From theory to practice. Clinical Immunology, 161(1),
11–22. doi:10.1016/j.clim.2015.04.006.
Athey, S., & Imbens, G. (2016). Recursive partitioning for heterogeneous causal effects: Table 1.
Proceedings of the National Academy of Sciences of the United States of America, 113(27),
7353–7360. doi:10.1073/pnas.1510489113.
Auffray, C., Chen, Z., & Hood, L. (2009). Systems medicine: The future of medical genomics and
healthcare. Genome Medicine, 1(1