# Analysis of Epidemiological Data Using R and Epicalc




Analysis of Epidemiological Data Using R and Epicalc

Author: Virasakdi Chongsuvivatwong

cvirasak@medicine.psu.ac.th

Epidemiology Unit

Prince of Songkla University

THAILAND


Preface

Data analysis is very important in epidemiological research. The capacity of computing facilities has been steadily increasing, moving state-of-the-art epidemiological studies along the same direction as computer advancement. Currently, there are many commercial statistical software packages widely used by epidemiologists around the world. For developed countries, the cost of software is not a major problem. For developing countries, however, the real cost is often too high, and many researchers in developing countries thus eventually rely on pirated copies of the software.

Freely available software packages are limited in number and readiness of use. EpiInfo, for example, is free and useful for data entry and simple data analysis. Advanced data analysts, however, find it too limited in many aspects. For example, it is not suitable for data manipulation in longitudinal studies, its regression facilities cannot cope with repeated measures and multi-level modelling, and its graphing facilities are also limited.

A relatively new and freely available software package called R is promising. Supported by leading statistical experts worldwide, it has almost everything that an epidemiological data analyst needs. However, it is difficult to learn and to use compared with similar statistical packages for epidemiological data analysis, such as Stata. The purpose of this book is therefore to bridge this gap by making R easy to learn for researchers from developing countries and to promote its use.

My experience in epidemiological studies spans over twenty years, with a special fondness for teaching data analysis. Inspired by the spirit of the open-source software philosophy, I have spent tremendous effort exploring the potential and use of R. For four years, I have been developing an add-on package for R that allows new researchers to use the software with enjoyment. More than twenty chapters of lecture notes and exercises have been prepared, with datasets ready for self-study.

Supported by WHO, TDR and the Thailand Research Fund, I have also run a number of workshops for this software in developing countries including Thailand, Myanmar, North Korea, Maldives and Bhutan, where R and Epicalc were very much welcomed. With this experience, I hereby propose that the use of this software should be encouraged among epidemiological researchers, especially those who cannot afford to buy expensive commercial software packages.


R is an environment that can handle several datasets simultaneously. Users get access to the variables within each dataset either by copying it to the search path or by including the dataset name as a prefix. This power of R is, however, a drawback for data manipulation. When creating a variable or modifying an existing one without prefixing the dataset name, the new variable is isolated from its parent dataset; if prefixing is used instead, the original data is changed but not the copy in the search path. Careful users need to remove the copy in the search path and recopy the new dataset into it, which is a clumsy procedure. Not being tidy will eventually leave too many copies in the search path, overloading the system or confusing the analyst about where a variable is actually located.

Epicalc offers a solution for the common situation where the data analyst works on one dataset at a time. Using only a few commands, the user can virtually eliminate the need to specify the dataset and can avoid overloading the search path effectively and efficiently. In addition to making memory tidying easy to accomplish, Epicalc makes variables easy to recognize by adopting variable labels or descriptions, whether prepared in other software such as SPSS or Stata or created locally by Epicalc itself.

R has very powerful graphing functions that the user has to spend time learning. Epicalc exploits this power by automatically producing a plot of the distribution whenever a single variable is summarised. A breakdown of the first variable by a second, categorical variable is also simple, with graphical results displayed automatically. This automatic graphing strategy is also applied to one-way and two-way tabulation. Descriptions of the variables and the value or category labels are fully exploited in these descriptive graphs.

Additional epidemiological functions added by Epicalc include calculation of sample size, matched 1:n tabulation (where n can vary), kappa statistics, drawing of ROC curves from a table or from logistic regression results, population pyramid plots from age and sex, and follow-up plots.

R has several advanced regression modelling functions, such as multinomial logistic regression, ordinal logistic regression, survival analysis and multi-level modelling. By using Epicalc, tidy tables of odds ratios and 95% CIs are produced, ready for simple transfer into a manuscript document with minimal further modification required.

Although use of Epicalc implies a different way of working with R from conventional use, installation of Epicalc has no effect on any existing or new functions of R. Epicalc functions only increase the efficiency of data analysis and make R easier to use.


This book is essentially about learning R with an emphasis on Epicalc. Readers should have some background in basic computer usage. With R, Epicalc and the supplied datasets, users should be able to work through each lesson, learning the concepts of data management, related statistical theories and the practice of data analysis and powerful graphing.

The first four chapters introduce R concepts and simple handling of important basic elements such as scalars, vectors, matrices, arrays and data frames. Chapter 5 deals with simple data exploration. Date and time variables are defined and dealt with in Chapter 6 and fully exploited in a real dataset in Chapter 7. Descriptive statistics and one-way tabulations are automatically accompanied by corresponding graphs, making it rather unlikely that important information is overlooked. Finally, time plots of exposure and disease onset are drawn with a series of demonstration commands. Chapter 8 continues to investigate the outbreak with two-way tabulation. Various kinds of risk assessment, such as the risk ratio and protective efficacy, are analysed with numeric and graphic results.

Chapter 9 extends the analysis of the dataset to deal with levels of association, or odds ratios. Stratified tabulation, the Mantel-Haenszel odds ratio, and the test of homogeneity of odds ratios are explained in detail. All results are complemented by simultaneous plots; with these graphs, the concept of confounding is made more understandable.

Before proceeding further, the reader gets a thorough exercise in data cleaning and standard data manipulation in Chapter 10. Simple looping commands are introduced to increase the efficiency of data management. Subsequently, and from time to time in the book, readers will learn how to develop these loops to create powerful graphs.

Scatter plots, simple linear regression and analysis of variance are presented in Chapter 11. Stratified scatter plots, used to enhance the concepts of confounding and interaction for continuous outcome variables, are given in Chapter 12. Curvilinear models are discussed in Chapter 13, and linear modelling is extended to generalized linear modelling in Chapter 14.

For binary outcome variables, Chapter 15 introduces logistic regression, with additional comparison with the stratified cross-tabulation learned in Chapter 9. The concept of a matched case control study is discussed in Chapter 16, with matched tabulation for 1:1 and 1:n matching; finally, conditional logistic regression is applied. Chapter 17 introduces polytomous logistic regression using a case-control study in which one type of case series is compared with two types of control groups. Ordinal logistic regression is applied to ordered outcomes in Chapter 18.


For cohort studies with grouped exposure datasets, Poisson regression is used in Chapter 19. Extra-Poisson regression for overdispersion is also discussed, including modelling the outcome using the negative binomial error distribution. Multi-level modelling and longitudinal data analysis are discussed in Chapter 20. For cohort studies with individual follow-up times, survival analysis is discussed in Chapter 21, and the Cox proportional hazards model is introduced in Chapter 22. In Chapter 23 the focus is on analysing datasets involving attitudes, such as those encountered in the social sciences. Chapter 24 deals with the day-to-day work of sample size calculation, and the technique of documentation that all professional data analysts must master is explained in Chapter 25.

Some suggested strategies for handling large datasets are given in Chapter 26. The book ends with a demonstration of the tableStack command, which dramatically shortens the preparation of a tidy stack of tables that can be copied and pasted into a manuscript.

At the end of each chapter some references are given for further reading. Most chapters also end with some exercises to practice on; solutions to these are given at the end of the book.

Colour

It is assumed that readers of this book will practice the commands and see the results on screen as they read. The explanations in the text sometimes describe the colour of graphs that appear in black and white in this book (purely to reduce printing costs). The electronic copy of the book, however, does include colour.

Explanations of fonts used in this book:

- MASS: an R package or library
- Attitudes: an R dataset
- plot: an R function
- *summ*: an Epicalc function (shown in italic)
- 'abc': an R object
- 'pch': an argument to a function
- 'saltegg': a variable within a data frame
- "data.txt": a data file on disk


Table of Contents

- Chapter 1: Starting to use R
  - Installation
  - Text Editors
  - Starting R Program
  - R libraries & packages
  - On-line help
  - Using R
  - Exercises
- Chapter 2: Vectors
  - Concatenation
  - Subsetting a vector with an index vector
  - Missing values
  - Exercises
- Chapter 3: Arrays, Matrices and Tables
  - Arrays
  - Matrices
  - Tables
  - Lists
  - Exercises
- Chapter 4: Data Frames
  - Data entry and analysis
  - Datasets included in Epicalc
  - Reading in data
  - Attaching the data frame to the search path
  - The 'use' command in Epicalc
  - Exercises
- Chapter 5: Simple Data Exploration
  - Data exploration using Epicalc
  - Exercise
- Chapter 6: Date and Time
  - Computation functions related to date
  - Reading in a date variable
  - Dealing with time variables
  - Exercises
- Chapter 7: An Outbreak Investigation: Describing Time
  - Case definition
  - Paired plot
  - Exercise
- Chapter 8: An Outbreak Investigation: Risk Assessment
  - Recoding missing values
  - Exploration of age and sex
  - Comparison of risk: Risk ratio and attributable risk
  - Dose-response relationship
  - Exercise
- Chapter 9: Odds Ratios, Confounding and Interaction
  - Odds and odds ratio
  - Confounding and its mechanism
  - Interaction and effect modification
  - Exercise
- Chapter 10: Basic Data Management
  - Data cleaning
  - Identifying duplication ID
  - Missing values
  - Recoding values using Epicalc
  - Labelling variables with 'label.var'
  - Adding a variable to a data frame
  - Collapsing categories
  - Exercises
- Chapter 11: Scatter Plots & Linear Regression
  - Scatter plots
  - Components of a linear model
  - Regression line, fitted values and residuals
  - Checking normality of residuals
  - Exercise
- Chapter 12: Stratified Linear Regression
  - Exercise
- Chapter 13: Curvilinear Relationship
  - Stratified curvilinear model
  - Modelling with a categorical independent variable
  - References
  - Exercise
- Chapter 14: Generalized Linear Models
  - Model attributes
  - Attributes of model summary
  - Covariance matrix
  - References
  - Exercise
- Chapter 15: Logistic Regression
  - Distribution of binary outcome
  - Logistic regression with a binary independent variable
  - Interaction
  - Interpreting the odds ratio
  - References
  - Exercises
- Chapter 16: Matched Case Control Study
  - 1:n matching
  - Logistic regression for 1:1 matching
  - Conditional logistic regression
  - References
  - Exercises
- Chapter 17: Polytomous Logistic Regression
  - Polytomous logistic regression using R
  - Exercises
- Chapter 18: Ordinal Logistic Regression
  - Modelling ordinal outcomes
  - References
  - Exercise
- Chapter 19: Poisson and Negative Binomial Regression
  - Modelling with Poisson regression
  - Goodness of fit test
  - Incidence density
  - Negative binomial regression
  - References
  - Exercise
- Chapter 20: Introduction to Multi-level Modelling
  - Random intercepts model
  - Model with random slopes
  - Exercises
- Chapter 21: Survival Analysis
  - Survival object in R
  - Life table
  - Kaplan-Meier curve
  - Cumulative hazard rate
  - References
  - Exercises
- Chapter 22: Cox Regression
  - Testing the proportional hazards assumption
  - Stratified Cox regression
  - References
  - Exercises
- Chapter 23: Analysing Attitudes Data
  - tableStack for logical variables and factors
  - Cronbach's alpha
  - Summary
  - References
  - Exercise
- Chapter 24: Sample Size Calculation
  - Field survey
  - Comparison of two proportions
  - Comparison of two means
  - Lot quality assurance sampling
  - Power determination for comparison of two proportions
  - Power for comparison of two means
  - Exercises
- Chapter 25: Documentation
  - Crimson Editor
  - Tinn-R
  - Saving the output text
  - Saving a graph
- Chapter 26: Strategies for Handling Large Datasets
  - Simulating a large dataset
- Chapter 27: Table Stacking for a Manuscript
  - Concept of 'tableStack'
  - Column of total
  - Exporting 'tableStack' and other tables into a manuscript
- Solutions to Exercises
- Index
- Epicalc Functions
- Epicalc Datasets

Chapter 1: Starting to use R

This chapter concerns first use of R, covering installation, how to obtain help, the syntax of R commands and additional documentation. Note that this book was written for Windows users; however, R also works on other operating systems.

Installation

R is distributed under the terms of the GNU General Public License and is freely available for use and distribution under that license. The latest versions of R and Epicalc and their documentation can be downloaded from CRAN (the Comprehensive R Archive Network).

The main web site is http://cran.r-project.org/ but there are mirrors all around the world, and users should download the software from the nearest site. R runs on the three common contemporary operating systems: Linux, MacOS X and Windows.

To install R, first go to the CRAN website and select your operating system from the top of the screen. For Windows, click the Windows link and follow the link to the base subdirectory. On this page you can download the setup file for Windows, which at the time of publication of this book was R-2.6.1-win32.exe. Click this link and click the "Save" button.

The set-up file for R is around 28 Mb. To run the installation simply double-click this file and follow the instructions. After installation, a shortcut icon of R should appear on the desktop. Right-click this R icon to change its start-up properties. Replace the default 'Start in' folder with your own working folder, the folder where you want R to work. Otherwise, the input and output of files will be done in the program folder, which is not good practice. You can create multiple shortcut icons with different start-in folders, one for each project you are working on.

Suppose the work related to this book will be stored in a folder called 'C:\RWorkplace'. The 'Properties' of the icon should then have the 'Start in:' text box filled with 'C:\RWorkplace' (do not type the single quote signs ' and '; they are used in this book to indicate objects or technical names).
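As an aside (these commands are not part of the original text), the working folder can also be inspected and changed from within R itself. The path below is the example folder from the text and may not exist on your machine:

```r
# Check and change the working folder from inside R.
# 'C:/RWorkplace' is the example folder from the text; note that R
# accepts forward slashes in Windows paths.
getwd()                  # show the current working folder
setwd("C:/RWorkplace")   # change it for the current session only
```

Unlike the shortcut's 'Start in' setting, setwd() affects only the current R session.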

R detects the main language of the operating system and tries to use menus and dialog boxes in that language. For example, if you are running R on Windows XP in the Chinese language, the menus and dialog boxes will appear in Chinese. Since this book is written in English, it is advised to set the language to English so that the responses on your computer will be the same as those in this book. In the 'Shortcut' tab of the R icon properties, add Language=en at the end of the 'Target' field, including a space before the word 'Language'.

So, the Target text box for the R-2.6.1 icon would be:

"C:\Program Files\R\R-2.6.1\bin\Rgui.exe" Language=en

To use this book efficiently, a specialised text editor such as Crimson Editor or Tinn-R must be installed on your computer. In addition, the Epicalc package needs to be installed and loaded.


Text Editors

Crimson Editor

This software can be installed in the conventional fashion, like any other software, i.e. by executing the setup.exe file and following the instructions.

Crimson Editor has some nice features that can assist the user when working with R. It is very powerful for editing the script or command files used by various software programs, such as C++, PHP and HTML files. Line numbers can be shown, and open and close brackets can be matched. These features are important because brackets are commonly used in the R command language.

Installation and set-up of Crimson Editor is explained in Chapter 25.

Tinn-R

Tinn-R is probably the best text file editor to use in conjunction with the R program. It is specifically designed for working with R script files. In addition to syntax highlighting of R code, Tinn-R can interact with R using specific menus and tool bars. This means that sections of commands can be highlighted and sent to the R console (sourced) with a single button click. Tinn-R can be downloaded from the Internet at: www.sciviews.org/Tinn-R.

Starting R Program

After modifying the start-up properties of the R icon, double-click the R icon on the desktop. The program should then start and the following output is displayed on the R console.

R version 2.6.1 (2007-11-26)

Copyright (C) 2007 The R Foundation for Statistical Computing

ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.

You are welcome to redistribute it under certain conditions.

Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.

Type 'contributors()' for more information and

'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or

'help.start()' for an HTML browser interface to help.

Type 'q()' to quit R.

>


The output shown above was produced by R version 2.6.1, released on November 26, 2007. The second paragraph declares and briefly explains the warranty and license. The third paragraph gives information about contributors and how to cite R in publications. The fourth paragraph suggests a few commands for first-time users to try.

In this book, R commands begin with the ">" sign, similar to what is shown in the R console window. You should not type the ">"; just type the commands. Within this document both the R commands and output lines are in Courier New font, whereas the explanatory text is in Times New Roman. Epicalc commands are shown in italic, whereas standard R commands are shown in normal font style.

The first thing to practice is to quit the program. Click the cross sign at the far upper right corner of the program window or type the following at the R console:

> q()

A dialog box will appear asking "Save workspace image?" with three choices: "Yes", "No" and "Cancel". Choose "Cancel" to continue working with R. If you choose "Yes", two new files will be created in your working folder: any previous commands typed at the R console will be saved into a file called '.Rhistory', while the current workspace will be saved into a file called '.RData'. Notice that these two files have no prefix. In the next computing session, when R is started in this folder, the image of the working environment of the last saved R session will be retrieved automatically, together with the command history.

Continued use of R in this fashion (quitting and saving the unnamed workspace image) will result in these two files becoming larger and larger. Usually one would like to start R afresh every time, so it is advised to always choose "No" when prompted to save the workspace. Alternatively you may type:

> q("no")

to quit without saving the workspace image and prevent the dialog box from appearing.

Note that before quitting R you can save your workspace image by typing

> save.image("C:/RWorkplace/myFile.RData")

where 'myFile' is the name of your file. Then when you quit R you should answer "No" to the question.

R libraries & packages

R can be defined as an environment within which many classical and modern statistical techniques, called functions, are implemented. A few of these techniques are built into the base R environment, but many are supplied as packages. A package is simply a collection of these functions together with datasets and documentation. A library is a collection of packages, typically contained in a single directory on the computer.


There are about 25 packages supplied with R (called "standard" or "recommended" packages) and many more are available through the CRAN web site. Only 7 of these packages are loaded into memory when R is executed. To see which packages are currently loaded into memory you can type:

> search()

[1] ".GlobalEnv" "package:methods" "package:stats"

[4] "package:graphics" "package:grDevices" "package:utils"

[7] "package:datasets" "Autoloads" "package:base"

The list shown above is the search path of R. When R is told to do any work, it looks for the object it needs along the search path. First, it looks inside '.GlobalEnv', the global environment, which is always the first search position. If R cannot find what it wants there, it then looks in the second search position, in this case "package:methods", and so forth. Any function that belongs to one of the loaded packages is always available during an R session.
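As a small illustration (these commands are not from the book), the standard find() function reports where along the search path R finds an object:

```r
# Ask R where it finds an object along the search path.
x <- 5         # an object created by the user
find("x")      # ".GlobalEnv": objects you create live in the global environment
find("mean")   # "package:base": a standard R function further down the path
```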

Epicalc package

The Epicalc package can be downloaded from the web site http://cran.r-project.org. On the left pane of this web page, click 'Packages' and move down the alphabetical list of packages to find 'epicalc'. The short and humble description is 'Epidemiological calculator'. Click 'epicalc' to go to the download page. On this page you can download the Epicalc source code (.tar.gz), the two binary versions for Macintosh (.tgz) and Windows (.zip), and the documentation (.pdf).

The Epicalc package is updated from time to time, and the version number is given in the suffix of the file name. For example, epicalc_2.6.1.6.zip is the binary file for use on the Windows operating system, and the version of Epicalc is 2.6.1.6. A newer version is created to fix bugs (errors in the programme), to improve the features of existing functions (commands) and to include new functions.

The file epicalc_version.zip ('version' increases with time) is a

compressed file containing the fully compiled Epicalc package. Installation of this

package must be done within R itself. Usually there is only one session of

installation needed unless you want to overwrite the old package with a newer one

of the same name. You will also need to reinstall this package if you install a new

version of R. To install Epicalc, click 'Packages' on the menu bar at the top of the

window. Choose 'Install packages from local zip files...". When the navigating

window appears, browse to find the file and open it.


Successful installation will result in:

> utils:::menuInstallLocal()

package 'epicalc' successfully unpacked and MD5 sums checked

updating HTML package descriptions

Installation is now complete; however functions within Epicalc are still not

available until the following command has been executed:

> library(epicalc)

Note the use of lowercase letters. If the console accepts the command quietly, the package has been loaded successfully. Otherwise, errors or warnings will be reported.

A common warning is a report of a conflict. This warning is, most of the time, not

very serious. This just means that an object (usually a function) with the same name

already exists in the working environment. In this case, R will give priority to the

object that was loaded more recently. The command library(epicalc) must be typed every time a new session of R is run.

Updating packages

Whenever a new version of a package is released it is advised to keep up to date by

removing (unloading) the old one and loading the new one. To unload the Epicalc

package, you may type the following at the R console:

> detach(package:epicalc)

After typing the above command, you may then install the new version of the

package as mentioned in the previous section. If there are any problems, you may

need to quit R and start afresh.

RProfile.site

Whenever R is run it will execute the commands in the "RProfile.site" file, which

is located in the C:\Program Files\R\R-2.6.1\etc folder. Remember to replace the R

version with the one you have installed. By including the command

library(epicalc) in the RProfile.site file, every time R is run, the Epicalc

package will be automatically loaded and ready for use. You may edit this file and

insert the command above.

Your Rprofile.site file should look something like this:

library(epicalc)

# Things you might want to change

# options(papersize="a4")

# options(editor="notepad")

# options(pager="internal")


# to prefer Compiled HTML help

# options(chmhelp=TRUE)

# to prefer HTML help

# options(htmlhelp=TRUE)

On-line help

On-line help is very useful when using software, especially for first time users.

Self-studying is also possible from the on-line help of R, although with some

difficulty. This is particularly true for non-native speakers of English, for whom the manuals can often seem too technical or wordy. It is advised to combine the use of this

book as a tutorial and on-line help as a reference manual.

On-line help documentation comes in three different versions in R. The default

version is to show help information in a separate window within R. This format is

written in a simple markup language that can be read by R and can be converted to

LaTeX, which is used to produce the printed manuals. The other versions, which

can be set in the "Rprofile.site" file mentioned previously, are HTML

(htmlhelp=TRUE) and compiled HTML (chmhelp=TRUE). The latter format is Windows-specific and, if chosen, help documentation will appear in a Windows help

viewer. Each help format has its own advantages and you are free to choose the

format you want.

For self study, type

> help.start()

The system will open your web browser at the main help menu of R. 'An Introduction

to R' is the section that all R users should try to read first. Another interesting

section is 'Packages'. Click this to see what packages you have available. If the

Epicalc package has been loaded properly, then this name should also appear in the

list. Click 'Epicalc' to see the list of the functions available. Click each of the

functions one by one and you will see the help for that individual function. This

information can also be obtained by typing 'help(myFun)' at the R console, where

'myFun' is the name of the function. To get help on the 'help' function you can type

> help(help)

or perhaps more conveniently

> ?help

For fuzzy searching you can try

> help.search("...")

Replace the dots with the keyword you want to search for. This function also allows

you to search on multiple keywords. You can use this to refine a query when you

get too many responses.


Very often the user will want to find statistical analysis functions that are not available in any currently installed package. A good option is to use the 'search' feature located on the left side of the CRAN web page, which uses Google to search within CRAN. The results are usually extensive and useful, and the user can then choose which website to visit for further learning.

Now type

> search()

You should see "package:epicalc" in the list. If the Epicalc package has not been

loaded, then the functions contained inside will not be available for use.

Having the Epicalc package in the search path means we can use all commands or

functions in that package. Other packages can be called when appropriate. For

example, the package survival is necessary for survival analysis. We will encounter

this in the corresponding section.

The order of the search path is sometimes important. For Epicalc users, it is

recommended that any additional library should be called early in the session of R,

i.e. before reading in and attaching to a data frame. This is to make sure that the

active dataset will be in the second search position. More details on this will be

discussed in Chapter 4.

Using R

A basic but useful purpose of R is to perform simple calculations.

> 1+1

[1] 2

When you type '1+1' and hit the <Enter> key, R will show the result of the

calculation, which is equal to 2.

For the square root of 25:

> sqrt(25)

[1] 5

The word in front of the left round bracket is called a 'function'. The entity inside

the bracket is referred to as the function's 'argument'. Thus in the last example,

'sqrt()' is a function, and when imposed on 25, the result is 5.

To find the value of e:

> exp(1)

[1] 2.718282


Exponentiation of 1 results in the value of e, which is about 2.7. Similarly, the

exponential value of -5, or e⁻⁵, would be

> exp(-5)

[1] 0.006738

Syntax of R commands

R will compute only when the commands are syntactically correct. For example, if

the number of closed brackets is fewer than the number of opened ones and the

<Enter> key is pressed, the new line will start with a '+' sign, indicating that R is

waiting for completion of the command. After the number of closed brackets equals

the number of opened ones, computation is carried out and the result appears.

> log(3.8

+ )

[1] 1.335001

However, if the number of closed brackets exceeds the number of opened ones, the result is a syntax error, the computer equivalent of a grammatical error.

> log(3.2))

Error: syntax error

R objects

In the above simple calculation, the results are immediately shown on the screen

and are not stored. To perform a calculation and store the result in an object type:

> a = 3 + 5

We can check whether the assignment was successful by typing the name of the

newly created object:

> a

[1] 8

More commonly, the assignment is written in the following way.

> a <- 3 + 5

> a

[1] 8

For ordinary users, there is no obvious difference between the use of = and <-. The

difference applies at the R programming level and will not be discussed here.

Although <- is slightly more awkward to type than =, the former technique is

recommended to avoid any confusion with the comparison operator (==). Notice

that there is no space between the two components of the assignment operator <-.

Now create a second object called 'b' equal to the square root of 36.

> b <- sqrt(36)


Then, add the two objects together.

> a + b

[1] 14

We can also compute the value on the left and assign the result to a new object

called 'c' on the right, using the right assign operator, ->.

> a + 3*b -> c

> c

[1] 26

However, the following command does not work.

> a + 3b -> c

Error: syntax error

R does not recognise '3b'. The * symbol is needed, which indicates multiplication.

The name of an object can consist of more than one letter.

> xyx <- 1

> xyx

[1] 1

A nonsense thing can be typed into the R console such as:

> qwert

Error: Object "qwert" not found

What is typed in is syntactically correct. The problem is that 'qwert' is not a

recognizable function nor a defined object.

A dot can also be used as a delimiter for an object name.

> baht.per.dollar <- 40

> baht.per.dollar

[1] 40

In conclusion, when one types in anything at the R console, the program will try to

show the value of that object. If the signs =, <- or -> are encountered, the value will be stored in the object on the left of = or <-, or to the right of ->.

Character or string objects

Character or string objects consist of text, for example the name of a person or an address. Objects of this type cannot be used for calculation. Telephone numbers and post-codes, although composed of digits, are also best stored as strings.


> A <- "Prince of Songkla University"

> A

[1] "Prince of Songkla University"

R is case sensitive, so 'A' is not the same as 'a'.

> a

[1] 8

> A

[1] "Prince of Songkla University"

Putting comments in a command line

In this book, as with most other programming documents, the author usually inserts

some comments as a part of documentation to remind him/herself or to show some

specific issue to the readers.

R ignores anything following the # symbol on a line, so such text can be used for comments. Examples:

> 3*3 = 3^2 # This gives a syntax error

> 3*3 == 3^2 # This is correct syntax-wise.

> 3*2 == 3^2 # Correct syntax but the result is FALSE

Logical: TRUE and FALSE

In the last few commands:

> 3*3 == 3^2

[1] TRUE

But

> 3*2 == 3^2

[1] FALSE

Note that we need two equals signs to check equality but only one for assignment.

> 3*2 < 3^2

[1] TRUE

Logical connection using & (logical 'and')

Both TRUE and FALSE are logical objects. They are both in upper case.

Connection of more than one such object results in either TRUE or FALSE. If all

are TRUE, the final result is TRUE. For example:

> TRUE & TRUE

[1] TRUE


A combination of FALSE with any other logical is always FALSE.

> TRUE & FALSE

[1] FALSE

> FALSE & FALSE

[1] FALSE

Note that

> (FALSE & TRUE) == (TRUE & FALSE)

[1] TRUE

Without brackets, the comparison operator == has higher precedence than &, so the expression below is evaluated as FALSE & (TRUE == TRUE) & FALSE.

> FALSE & TRUE == TRUE & FALSE

[1] FALSE

Logical connection with | (logical 'or')

This kind of connection looks for anything which is TRUE.

> TRUE | TRUE

[1] TRUE

> TRUE | FALSE

[1] TRUE

> 3*3 == 3^2 | 3*2 == 3^2

[1] TRUE

Value of TRUE and FALSE

Numerically, TRUE is equal to 1 and FALSE is equal to 0.

> TRUE == 1

[1] TRUE

> FALSE == 0

[1] TRUE

> (3*3 == 3^2) + (9 > 8)

[1] 2

Each of the values in the brackets is TRUE, which is equal to 1. The addition of two

TRUE objects results in a value of 2. However,

> 3*3 == 3^2 + 9 > 8

Error: syntax error in "3*3 == 3^2 + 9 >"

This is because comparison operators cannot be chained: the expression contains two comparisons (== and >) at the same level, which R's syntax does not allow. Therefore, it is always better to use brackets in order to specify the exact sequence of computation.

Let's leave R for the time being. Answer "Yes" to the question: "Save workspace

image?".


Please remember that answering "No" is the preferred response in this book as we

recommend typing

> q("no")

to end each R session. Responding "Yes" here is just an exercise in understanding

the concept of workspace images, which follows in chapter 2.

References

An Introduction to R. ISBN 3-900051-12-7.

R Language Definition. ISBN 3-900051-13-5.

Both references above can be downloaded from the CRAN web site.


Exercises

Problem 1.

The formula for sample size of a descriptive survey is

n = 1.96² × π(1 − π) / δ²

where n is the sample size,

π is the prevalence in the population (not to be confused with the constant pi), and

δ is half the width of the 95% confidence interval (precision).
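As a typing hint, the formula above translates directly into R syntax. The values below are arbitrary placeholders, not the answer to this problem:

```r
# Placeholder values only; substitute the values given in the problem
p <- 0.5                             # prevalence (pi)
delta <- 0.1                         # half-width of the 95% CI
n <- 1.96^2 * p * (1 - p) / delta^2
n                                    # 96.04
```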

Compute the required sample size if the prevalence is estimated to be 30% of the

population and the 95% confidence interval is not farther from the estimated

prevalence by more than 5%.

Problem 2.

Change the above prevalence to 5% and suppose each side of the 95% confidence

interval is not farther from the estimated prevalence by more than 2%.

Problem 3.

The term 'logit' denotes 'log{P/(1-P)}' where P is the risk or prevalence of a disease.

Compute the logits from the following prevalences: 1%, 10%, 50%, 90% and

100%.


Chapter 2: Vectors

In the previous chapter, we introduced simple calculations and storage of the results

of the calculations. In this chapter, we will learn slightly more complicated issues.

History and saved objects

Outside R, examine the working folder; you should see two new files: ".Rdata",

which is the working environment saved from the latest R session, and

".Rhistory", which recorded all the commands in the preceding R session.

".Rdata" is a binary file and only recognised by the R program, while ".Rhistory"

is a text file and can be edited using any text editor such as Notepad, Crimson

Editor or Tinn-R.

Open R from the desktop icon. You may see this in the last line:

[Previously saved workspace restored]

This means that R has restored the command history from the previous R session and the objects saved from that session. Press the up arrow key and you will see the

previous commands (both correct and incorrect ones). Press <Enter> following the

command; the results will come up as if you continued to work in the previous

session.

> a

[1] 8

> A

[1] "Prince of Songkla University"

Both 'a' and 'A' are retained from the previous session.

Note: ______________________________________________________________________

The image saved from the previous session contains only objects in the '.GlobalEnv', which is

the first position in the search path. The whole search path is not saved. For example, any

libraries manually loaded in the previous session need to be reloaded. However, the Epicalc

library is automatically loaded every time we start R (from the setting of the "Rprofile.site"

file that we modified in the previous chapter). Therefore, under this setting, regardless of

whether the workspace image has been saved in the previous session or not, Epicalc will

always be in the search path.


If you want to remove the objects in the environment and the history, quit R

without saving. Go to the 'start in' folder and delete the two files ".Rhistory" and

".Rdata". Then restart R. There should be no message indicating restoration of

previously saved workspace and no history of previous commands.

Concatenation

Objects of the same type, i.e. numeric with numeric, string with string, can be

concatenated. In fact, a vector is an object containing concatenated, atomic (not further divisible) objects of the same type.

To concatenate, the function 'c()' is used with at least one atomised object as its

argument. Create a simple vector having the integers 1, 2 and 3 as its elements.

> c(1,2,3)

[1] 1 2 3

This vector has three elements: 1, 2 and 3. Press the up arrow key to reshow this

command and type the right assign operator (->) to assign the result to a new object called 'd'. Then

have a look at this object.

> c(1,2,3) -> d

> d

Do some calculation with 'd' and observe the results.

> d + 4

> d - 3

> d * 7

> d / 10

> d * d

> d ^ 2

> d / d

> d == d
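Each of these commands applies its operation to every element of the vector at once (vectorised arithmetic). For instance:

```r
d <- c(1, 2, 3)
d + 4     # the scalar 4 is added to every element: 5 6 7
d * d     # element-wise multiplication: 1 4 9
d == d    # element-wise comparison: TRUE TRUE TRUE
```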

In addition to numbers, words can be used to create string vectors.

> B <- c("Faculty of Medicine","Prince of Songkla University")

> B

[1] "Faculty of Medicine" "Prince of Songkla University"

Vectors of systematic numbers

Sometimes a user may want to create a vector of numbers with a certain pattern.

The following command will create a vector of integers from 1 to 10.

> x <- 1:10; x

[1] 1 2 3 4 5 6 7 8 9 10


For 5 repetitions of 13:

> rep(13, times=5)

[1] 13 13 13 13 13

The function 'rep' is used to replicate values of the argument. For sequential

numbers from -1 to 11 with an incremental step of 3 type:

> seq(from = -1, to = 11, by = 3)

[1] -1 2 5 8 11

In this case 'seq' is a function with three arguments 'from', 'to' and 'by'. The

function can be executed with at least two parameters, 'from' and 'to', since the 'by'

parameter has a default value of 1 (or -1 if 'to' is less than 'from').

> seq(10, 23)

[1] 10 11 12 13 14 15 16 17 18 19 20 21 22 23

> seq(10, -3)

[1] 10 9 8 7 6 5 4 3 2 1 0 -1 -2 -3

The order of the arguments 'from', 'to' and 'by' is assumed if the words are omitted.

When explicitly given, the order can be changed.

> seq(by=-1, to=-3, from=10)

This rule of argument order and omission applies to all functions. For more details

on 'seq' use the help feature.

Subsetting a vector with an index vector

In many instances, only a certain part of a vector needs to be used. Let's assume we

have a vector of running numbers from 3 to 100 in steps of 7. What would be the

value of the 5th number?

> seq(from=3, to=100, by=7) -> x

> x

[1] 3 10 17 24 31 38 45 52 59 66 73 80 87 94

In fact, the vector does not end with 100, but rather 94, since a further step would

result in a number that exceeds 100.

> x[5]

[1] 31

The number inside the square brackets '[]' is called a subscript. It denotes the

position or selection of the main vector. In this case, the value in the 5th position of

the vector 'x' is 31. If the 4th, 6th and 7th positions are required, then type:

> x[c(4,6,7)]

[1] 24 38 45


Note that in this example, the object within the subscript can be a vector, thus the

concatenate function c is needed, to comply with the R syntax. The following

would not be acceptable:

> x[4,6,7]

Error in x[4, 6, 7] : incorrect number of dimensions

To select 'x' with the first four elements omitted, type:

> x[-(1:4)]

[1] 31 38 45 52 59 66 73 80 87 94

A minus sign in front of the subscript vector denotes removal of the elements of 'x'

that correspond to those positions specified by the subscript vector.

Similarly, a string vector can be subscripted.

> B[2]

[1] "Prince of Songkla University"

Using a subscript vector to select a subset

A vector is a set of numbers or strings. Application of a condition within the

subscript results in a subset of the main vector. For example, to choose only even

numbers of the vector 'x' type:

> x[x/2 == trunc(x/2)]

[1] 10 24 38 52 66 80 94

The function trunc means to truncate or remove the decimals. The condition that

'x' divided by 2 is equal to its truncated value is true iff (if and only if) 'x' is an even

number. The same result can be obtained by using the 'subset' function.

> subset(x, x/2==trunc(x/2))

If only odd numbers are to be chosen, then the comparison operator can simply be

changed to !=, which means 'not equal'.

> subset(x, x/2!=trunc(x/2))

[1] 3 17 31 45 59 73 87

The operator ! prefixing an equals sign means 'not equal', thus all the chosen

numbers are odd. Similarly, to choose the elements of 'x' which are greater than 30

type:

> x[x>30]

[1] 31 38 45 52 59 66 73 80 87 94
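As an aside, base R also provides the modulo operator %%, which gives the remainder of a division; it is not used in the book's examples but offers a more direct even/odd test than trunc:

```r
x <- seq(from = 3, to = 100, by = 7)
x[x %% 2 == 0]   # even elements: 10 24 38 52 66 80 94
x[x %% 2 != 0]   # odd elements: 3 17 31 45 59 73 87
```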


Functions related to manipulation of vectors

R can compute statistics of vectors very easily.

> fruits <- c(5, 10, 1, 20)

> summary(fruits)

Min. 1st Qu. Median Mean 3rd Qu. Max.

1.0 4.0 7.5 9.0 12.5 20.0

> sum(fruits)

[1] 36

There are 36 fruits in total.

> length(fruits) # number of different types of fruits

[1] 4

> mean(fruits) # mean of number of fruits

[1] 9

> sd(fruits) # standard deviation

[1] 8.205689

> var(fruits) # variance

[1] 67.33333
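The standard deviation is simply the square root of the variance, which can be verified directly:

```r
fruits <- c(5, 10, 1, 20)
sqrt(var(fruits))                      # 8.205689, the same as sd(fruits)
all.equal(sd(fruits)^2, var(fruits))   # TRUE
```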

Non-numeric vectors

Let's create a string vector called 'person' containing 11 elements.

> person <- c("A","B","C","D","E","F","G","H","I","J","K")

Alternatively, and more economically:

> person <- LETTERS[1:11]

Now check the class of the 'person' and 'fruits' objects.

> class(person)

[1] "character"

> class(fruits)

[1] "numeric"

Character types are used for storing names of individuals. To store the sex of the

person, initially numeric codes are given: 1 for male, 2 for female, say.

> sex <- c(1,2,1,1,1,1,1,1,1,1,2)

> class(sex)

[1] "numeric"

> sex1 <- as.factor(sex) # Creating sex1 from sex

The function as.factor coerces the object 'sex' into a factor, which is a

categorical data type in R.

> sex1

[1] 1 2 1 1 1 1 1 1 1 1 2

Levels: 1 2


There are two levels of sex.

> class(sex1)

[1] "factor"

> is.factor(sex)

[1] FALSE

> is.factor(sex1)

[1] TRUE

Now try to label 'sex1'.

> levels(sex1) <- c("male", "female")

Assigning this string vector to the levels of 'sex1' relabels the factor.

> sex1

[1] male female male male male male male

[8] male male male female

Levels: male female
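An alternative, not used in the text, is the factor function itself, which can code and label the categories in one step via its levels and labels arguments:

```r
sex <- c(1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2)
sex1 <- factor(sex, levels = c(1, 2), labels = c("male", "female"))
levels(sex1)   # "male" "female"
table(sex1)    # male: 9, female: 2
```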

Ordering elements of a vector

Create an 11-element vector of ages.

> age <- c(10,23,48,56,15,25,40,21,60,59,80)

To sort the ages:

> sort(age)

[1] 10 15 21 23 25 40 48 56 59 60 80

The function sort creates a vector with the elements in ascending order. However,

the original vector is not changed.

> median(age)

[1] 40

The median of the ages is 40. To get other quantiles, the function quantile can

be used.

> quantile(age)

0% 25% 50% 75% 100%

10.0 22.0 40.0 57.5 80.0

By default (if other arguments are omitted), the 0th, 25th, 50th, 75th and 100th percentiles

are displayed. To obtain the 30th percentile of age, type:

> quantile(age, prob = .3)

30%

23


Creating a factor from an existing vector

An age group vector can be created from age using the cut function.

> agegr <- cut(age, breaks=c(0,15,60,100))

This creates 3 distinct groups, which we can call 'children', 'adults' and 'elderly'.

Note that the minimum and maximum values given to the breaks argument of cut are the outermost boundaries.

> is.factor(agegr)

[1] TRUE

> attributes(agegr)

$levels

[1] "(0,15]" "(15,60]" "(60,100]"

$class

[1] "factor"

The object 'agegr' is a factor, with levels shown above. We can check the

correspondence of 'age' and 'agegr' using the data.frame function, which

combines (but does not save) the two variables in a data frame and displays the result. More details on this function are given in chapter 4.

> data.frame(age, agegr)

age agegr

1 10 (0,15]

2 23 (15,60]

3 48 (15,60]

4 56 (15,60]

5 15 (0,15]

6 25 (15,60]

7 40 (15,60]

8 21 (15,60]

9 60 (15,60]

10 59 (15,60]

11 80 (60,100]

Note that the 5th person, who is 15 years old, is classified into the first group and the

9th person, who is 60 years old, is in the second group. The label of each group ends with a square bracket, indicating that the upper boundary is included in the group (inclusive cutting). A round bracket at the front of the label indicates that the lower boundary is excluded. To obtain a frequency table of the age groups, type:

> table(agegr)

agegr

(0,15] (15,60] (60,100]

2 8 1

There are two children, eight adults and one elderly person.

> summary(agegr) # same result as the preceding command

> class(agegr)

[1] "factor"
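The cut function also accepts a labels argument, not used above, which assigns descriptive group names directly instead of the default interval notation:

```r
age <- c(10, 23, 48, 56, 15, 25, 40, 21, 60, 59, 80)
agegr <- cut(age, breaks = c(0, 15, 60, 100),
             labels = c("child", "adult", "elderly"))
table(agegr)   # child: 2, adult: 8, elderly: 1
```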


The age group vector is a factor or categorical vector. It can be transformed into a

simple numeric vector using the 'unclass' function, which is explained in more

detail in chapter 3.

> agegr1 <- unclass(agegr)

> summary(agegr1)

Min. 1st Qu. Median Mean 3rd Qu. Max.

1.000 2.000 2.000 1.909 2.000 3.000

> class(agegr1)

[1] "integer"

Categorical variables, for example sex, race and religion, should always be stored as factors.

Age group in this example is a factor although it has an ordered pattern. Declaring a

vector as a factor is very important, particularly when performing regression

analysis, which will be discussed in future chapters.

The unclassed value of a factor is used when the numeric (or integer) values of the

factor are required. For example, if we have a dataset containing a 'sex' variable,

classed as a factor, and we want to draw a scatter plot in which the colours of the

dots are to be classified by the different levels of 'sex', the colour argument to the

plot function would be 'col = unclass(sex)'. This will be demonstrated in future

chapters.

Missing values

Missing values usually arise from data not being collected. For example, missing

age may be due to a person not giving his or her age. In R, missing values are

denoted by 'NA', abbreviated from 'Not Available'. Any calculation involving NA

will result in NA.

> b <- NA

> b * 3

[1] NA

> c <- 3 + b

> c

[1] NA

As an example of a missing value occurring within a vector, type the following

commands:

> height <- c(100,150,NA,160)

> height

[1] 100 150 NA 160

> weight <- c(33, 45, 60,55)

> weight

[1] 33 45 60 55


Among four subjects in this sample, all weights are available but one height is

missing.

> mean(weight)

[1] 48.25

> mean(height)

[1] NA

We can get the mean weight but not the mean height, although the length of this

vector is available.

> length(height)

[1] 4

In order to get the mean of all available elements, the NA elements should be

removed.

> mean(height, na.rm=TRUE)

[1] 136.6667

The argument 'na.rm' means 'not available (values) removed'. The same result can be obtained by first removing the missing values with the function na.omit().

> length(na.omit(height))

[1] 3

> mean(na.omit(height))

[1] 136.6667

Thus na.omit() is an independent function that omits missing values from the

argument object. 'na.rm = TRUE' is an internal argument of descriptive statistics

for a vector.
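A related base R function, not used above, is is.na, which tests each element for missingness and returns a logical vector; subscripting with its negation gives the same result as na.omit:

```r
height <- c(100, 150, NA, 160)
is.na(height)                  # FALSE FALSE TRUE FALSE
height[!is.na(height)]         # 100 150 160
mean(height[!is.na(height)])   # 136.6667, same as mean(height, na.rm = TRUE)
```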


Exercises

Problem 1.

Compute the value of 1² + 2² + 3² + ... + 100²

Problem 2.

Let 'y' be a series of integers running from 1 to 1,000. Compute the sum of the

elements of 'y' which are multiples of 7.

Problem 3.

The heights (in cm) and weights (in kg) of 10 family members are shown below:

ht wt

Niece 120 22

Son 172 52

GrandPa 163 71

Daughter 158 51

Yai 153 51

GrandMa 148 60

Aunty 160 50

Uncle 170 67

Mom 155 53

Dad 167 64

Create a vector called 'ht' corresponding to the heights of the 10 family members.

Assign the names of the family members to the 'names' attribute of this vector.

Create a vector called 'wt' corresponding to the family member's weights.

Compute the body mass index (BMI) of each person, where BMI = weight / height² (weight in kg, height in metres).

Identify the persons who have the lowest and highest BMI and calculate the

standard deviation of the BMI.


Chapter 3: Arrays, Matrices and Tables

Real data for analysis rarely come as a vector. In most cases, they come as a

dataset containing many rows or records and many columns or variables. In R,

these datasets are called data frames. Before delving into data frames, let us go

through something simpler such as arrays, matrices and tables. Gaining concepts

and skills in handling these types of objects will empower the user to manipulate the

data very effectively and efficiently in the future.

Arrays

An array may generally mean something finely arranged. In mathematics and

computing, an array consists of values arranged in rows and columns. A dataset is

basically an array. Most statistical packages can handle only one dataset or array at

a time. R has a special ability to handle several arrays and datasets simultaneously.

This is because R is an object-oriented program. Moreover, R interprets rows and

columns in a very similar manner.

Folding a vector into an array

Usually a vector has no dimension.

> a <- (1:10)

> a

[1] 1 2 3 4 5 6 7 8 9 10

> dim(a)

NULL

Folding a vector to make an array is simple. Just declare or re-dimension the

number of rows and columns as follows:

> dim(a) <- c(2,5)

> a

[,1] [,2] [,3] [,4] [,5]

[1,] 1 3 5 7 9

[2,] 2 4 6 8 10


The numbers in the square brackets are the row and column subscripts. The

command 'dim(a) <- c(2,5)' folds the vector into an array consisting of 2 rows

and 5 columns.
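Note that this folding fills the array column by column. The matrix function, a related base R function not used in the text, can instead fill row by row via its byrow argument:

```r
a <- 1:10
dim(a) <- c(2, 5)                      # fills column by column, as above
matrix(1:10, nrow = 2, byrow = TRUE)   # fills row by row: first row is 1 2 3 4 5
```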

Extracting cells, columns, rows and subarrays using subscripts

While extracting a subset of a vector requires only one component number (or

vector), an array requires two components. Individual elements of an array may be

referenced by giving the name of the array followed by two subscripts separated by

commas inside the square brackets. The first subscript defines row selection; the

second subscript defines column selection. Specific rows and columns may be

extracted by omitting one of the components, but keeping the comma.

> a[1,] # for the first row and all columns of array 'a'

> a[,3] # for all rows of the third column

> a[2,4] # extract 1 cell from the 2nd row and 4th column

> a[2,2:4] # 2nd row, from 2nd to 4th columns

The command a[,] and a[] both choose all rows and all columns of 'a' and thus

are the same as typing 'a'. An array may also have 3 dimensions.

> b <- 1:24

> dim(b) <- c(3,4,2) # or b <- array(1:24, c(3,4,2))

> b

, , 1

[,1] [,2] [,3] [,4]

[1,] 1 4 7 10

[2,] 2 5 8 11

[3,] 3 6 9 12

, , 2

[,1] [,2] [,3] [,4]

[1,] 13 16 19 22

[2,] 14 17 20 23

[3,] 15 18 21 24

The first value of the dimension refers to the number of rows, followed by number

of columns and finally the number of strata.

Elements of this three-dimensional array can be extracted in a similar way.

> b[1:3,1:2,2]

[,1] [,2]

[1,] 13 16

[2,] 14 17

[3,] 15 18

In fact, an array can have many more dimensions, but arrays of more than three dimensions are rarely used or needed in epidemiological analysis.


Vector binding

Apart from folding a vector, an array can be created from vector binding, either by

column (using the function cbind) or by row (using the function rbind). Let's

return to our fruits vector.

> fruits <- c(5, 10, 1, 20)

Suppose a second person buys fruit, but in different amounts from the first person.

> fruits2 <- c(1, 5, 3, 4)

To bind 'fruits' with 'fruits2', which are vectors of the same length, type:

> Col.fruit <- cbind(fruits, fruits2)

We can give names to the rows of this array:

> rownames(Col.fruit) <- c("orange","banana","durian","mango")

> Col.fruit

fruits fruits2

orange 5 1

banana 10 5

durian 1 3

mango 20 4

Alternatively, the binding can be done by row.

> Row.fruit <- rbind(fruits, fruits2)

> colnames(Row.fruit) <- c("orange","banana","durian","mango")

> Row.fruit

orange banana durian mango

fruits 5 10 1 20

fruits2 1 5 3 4

Transposition of an array

Array transposition means exchanging rows and columns of the array. In the above

example, 'Row.fruit' is a transposition of 'Col.fruit' and vice versa.

Array transposition is achieved using the t function.

> t(Col.fruit)

> t(Row.fruit)

Basic statistics of an array

The total number of fruits bought by both persons is obtained by:

> sum(Col.fruit)

The total number of bananas is obtained by:

> sum(Col.fruit[2,])

To obtain descriptive statistics of each buyer:


> summary(Col.fruit)

To obtain descriptive statistics of each kind of fruit:

> summary(Row.fruit)

Suppose fruits3 is created but with one more kind of fruit added:

> fruits3 <- c(20, 15, 3, 5, 8)

> cbind(Col.fruit, fruits3)

fruits fruits2 fruits3

orange 5 1 20

banana 10 5 15

durian 1 3 3

mango 20 4 5

Warning message:

number of rows of result is not a multiple of vector length

(arg 2) in: cbind(Col.fruit, fruits3)

Note that the last element of 'fruits3' is removed before being added.

> fruits4 <- c(1,2,3)

> cbind(Col.fruit, fruits4)

fruits fruits2 fruits4

orange 5 1 1

banana 10 5 2

durian 1 3 3

mango 20 4 1

Warning message:

number of rows of result is not a multiple of vector length

(arg 2) in: cbind(Col.fruit, fruits4)

Note that 'fruits4' is shorter than the length of the first vector argument. In this

situation R will automatically recycle the elements of the shorter vector, inserting the first element of 'fruits4' into the fourth row, and issue a warning.

String arrays

Similar to a vector, an array can consist of character string objects.

> Thais <- c("Somsri", "Daeng", "Somchai", "Veena")

> dim(Thais) <- c(2,2); Thais

[,1] [,2]

[1,] "Somsri" "Somchai"

[2,] "Daeng" "Veena"

Note that the elements are folded in column-wise, not row-wise, sequence.

Implicit array of two vectors of equal length

Two vectors, especially two of the same length, may refer to each other without

formal binding.

> cities <- c("Bangkok","Hat Yai","Chiang Mai")

> postcode <- c(10000, 90110, 50000)

> postcode[cities=="Bangkok"]

[1] 10000

This gives the same result as

> subset(postcode, cities=="Bangkok")

[1] 10000

For a single vector, there are many ways to identify the position of a specific element.

For example, to find the index of "Hat Yai" in the city vector, the following four

commands all give the same result.

> (1:length(cities))[cities=="Hat Yai"]

> (1:3)[cities=="Hat Yai"]

> subset(1:3, cities=="Hat Yai")

> which(cities=="Hat Yai")

Note that when a character vector is bound with a numeric vector, the numeric

vector is coerced into a character vector, since all elements of an array must be of

the same type.

> cbind(cities,postcode)

cities postcode

[1,] "Bangkok" "10000"

[2,] "Hat Yai" "90110"

[3,] "Chiang Mai" "50000"

Matrices

A matrix is a two-dimensional array. It has several mathematical properties and

operations that are used behind statistical computations such as factor analysis,

generalized linear modelling and so on.

Users of statistical packages do not need to deal with matrices directly, but some of

the results of analyses come in matrix form, either displayed on the screen where they can

readily be seen, or hidden as a returned object that can be used later. For exercise

purposes, we will examine the covariance matrix, which is an object returned from

a regression analysis in a future chapter.
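As a small taste of what matrix operations look like, here is a minimal sketch; the 2-by-2 matrix below is invented for illustration and is not part of the fruit example.

```r
# A minimal sketch of basic matrix operations, using an invented 2x2 matrix.
m <- matrix(c(2, 0, 1, 3), nrow = 2)  # filled column-wise, as with arrays
m %*% t(m)      # matrix multiplication with the transpose
solve(m)        # the inverse of m
m %*% solve(m)  # multiplying by the inverse recovers the identity matrix
```

Operations like these underlie the regression computations mentioned above, although R performs them behind the scenes.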

Tables

A table is an array emphasizing the relationship between values among cells.

Usually, a table is the result of an analysis, e.g. a cross-tabulation between two

categorical variables (using the function table).

Suppose six patients who are male, female, female, male, female and female attend

a clinic. If the code is 1 for male and 2 for female, then to create this in R type:

> sex <- c(1,2,2,1,2,2)

Similarly, if we characterize the ages of the patients as being either young or old

and the first three patients are young, the next two are old and the last one is young,

and the codes for this age classification are 1 for young and 2 for old, then we can

create this in R by typing.

> age <- c(1,1,1,2,2,1)

Suppose also that these patients had one to six visits to the clinic, respectively.

> visits <- c(1,2,3,4,5,6)

> table1 <- table(sex, age); table1

age

sex 1 2

1 1 1

2 3 1

Note that table1 gives counts of each combination of the vectors sex and age

while 'table2' (below) gives the sum of the number of visits based on the four

different combinations of sex and age.

> table2 <- tapply(visits, list(Sex=sex, Age=age), FUN=sum)

> table2

Age

Sex 1 2

1 1 4

2 11 5

To obtain the mean of each combination type:

> tapply(visits, list(Sex=sex, Age=age), FUN=mean)

Age

Sex 1 2

1 1.000 4

2 3.667 5

Although 'table1' has class table, the class of 'table2' is still a matrix. One can

convert it simply using the function as.table.

> table2 <- as.table(table2)

Summary of table vs summary of array

In R, applying the function summary to a table performs a chi squared test of

independence.

> summary(table1)

Number of cases in table: 6

Number of factors: 2

Test for independence of all factors:

Chisq = 0.375, df = 1, p-value = 0.5403

Chi-squared approximation may be incorrect

In contrast, applying summary to a non-table array produces descriptive statistics

of each column.

> is.table(Col.fruit)

[1] FALSE

> summary(Col.fruit)

fruits fruits2

Min. : 1.0 Min. :1.00

1st Qu.: 4.0 1st Qu.:2.50

Median : 7.5 Median :3.50

Mean : 9.0 Mean :3.25

3rd Qu.:12.5 3rd Qu.:4.25

Max. :20.0 Max. :5.00

> fruits.table <- as.table(Col.fruit)

> summary(fruits.table)

Number of cases in table: 49

Number of factors: 2

Test for independence of all factors:

Chisq = 6.675, df = 3, p-value = 0.08302

Chi-squared approximation may be incorrect

> fisher.test(fruits.table)

Fisher's Exact Test for Count Data

data: fruits.table

p-value = 0.07728

alternative hypothesis: two.sided

Lists

An array forces all cells from different columns and rows to be the same type. If

any cell is a character then all cells will be coerced into a character. A list is

different. It can be a mixture of different types of objects compounded into one

entity. It can be a mixture of vectors, arrays, tables or any object type.

> list1 <- list(a=1, b=fruits, c=cities)

> list1

$a

[1] 1

$b

[1] 5 10 1 20

$c

[1] "Bangkok" "Hat Yai" "Chiang Mai"

Note that the arguments of the function list consist of a series of new objects

being assigned a value from existing objects or values. When properly displayed,

each new name is prefixed with a dollar sign, $.

The creation of a list is not a common task in ordinary data analysis. However, a list

is sometimes required in the arguments to some functions.

Removing objects from the computer memory also requires a list as the argument to

the function rm.

> rm(list=c("list1", "fruits"))

This is equivalent to

> rm(list1); rm(fruits)

A list may also be returned from the results of an analysis, but appears under a

special class.

> sample1 <- rnorm(10)

This generates a sample of 10 numbers from a normal distribution.

> qqnorm(sample1)

The qqnorm function plots the sample quantiles, or the sorted observed values,

against the theoretical quantiles, or the corresponding expected values if the data

were perfectly normally distributed. It is used here only to demonstrate a function

that returns a list.

> list2 <- qqnorm(sample1)

This stores the results into an object called list2.

> list2

$x

[1] 0.123 -1.547 -0.375 0.655 1.000 0.375 -0.123

[8] -1.000 -0.655 1.547

$y

[1] -0.4772 -0.9984 -0.7763 0.0645 0.9595 -0.1103

[7] -0.5110 -0.9112 -0.8372 2.4158

The command qqnorm(sample1) is used as a graphical method for checking

normality. While it produces a graph on the screen, it also gives a list of the x and y

coordinates, which can be saved and used for further calculation.

Similarly, boxplot(sample1) returns another list of objects to facilitate

plotting of a boxplot.
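The list returned by boxplot can be inspected without actually drawing the graph by using the 'plot=FALSE' argument; the component names below come from R's documentation of boxplot.

```r
# Obtain the list of plotting statistics without drawing the boxplot.
sample1 <- rnorm(10)
list3 <- boxplot(sample1, plot = FALSE)
names(list3)  # components include "stats", "n" and "out", among others
```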

Exercises

Problem 1.

Demonstrate a few simple ways to create the array below

     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]

[1,]    1    2    3    4    5    6    7    8    9    10

[2,]   11   12   13   14   15   16   17   18   19    20

Problem 2.

Extract from the above array the odd-numbered columns.

Problem 3.

Cross-tabulation between the status of a disease and a putative exposure gave the

following results:

Diseased Non-diseased

Exposed 15 20

Non-exposed 30 22

Create the table in R and perform chi-squared and Fisher's exact tests.


Chapter 4: Data Frames

In the preceding chapter, examples were given on arrays and lists. In this chapter,

data frames will be the main focus. Many researchers refer to data frames as

datasets. However, a complete dataset can contain more than one data frame. Data

frames contain the real data that most researchers have to work with.

Comparison of arrays and data frames

Many rules used for arrays are also applicable to data frames. For example, the

main structure of a data frame consists of columns (or variables) and rows (or

records). Rules for subscripting, column or row binding and selection of a subset in

arrays are directly applicable to data frames.

Data frames are however slightly more complicated than arrays. All columns in an

array are forced to be character if just one cell is a character. A data frame, on the

other hand, can have different classes of columns. For example, a data frame can

consist of a column of 'idnumber', which is numeric and a column of 'name', which

is character.
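This difference can be seen with a tiny invented example: binding the same two columns with cbind coerces everything to character, while a data frame keeps each column's own class.

```r
# As an array (via cbind), the numeric column is coerced to character.
cbind(idnumber = 1:2, name = c("A", "B"))
# As a data frame, each column keeps its own class.
d <- data.frame(idnumber = 1:2, name = c("A", "B"))
sapply(d, class)  # idnumber stays numeric; name is character (or factor,
                  # depending on the stringsAsFactors default of your R version)
```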

A data frame can also have extra attributes. For example, each variable can have

lengthy variable descriptions. A factor in a data frame often has 'levels' or value

labels. These attributes can be transferred from the original dataset in other formats

such as Stata or SPSS. They can also be created in R during the analysis.

Obtaining a data frame from a text file

Data from various sources can be entered using many different software programs.

They can be transferred from one format to another through the ASCII file format.

In Windows, a text file is the most common ASCII file, usually having a ".txt"

extension. There are several other files in ASCII format, including the ".R"

command file discussed in chapter 25.

Data from most software programs can be exported or saved as an ASCII file. From

Excel, a very commonly used spreadsheet program, the data can be saved as ".csv"

(comma separated values) format. This is an easy way to interface between Excel

spreadsheet files and R. Simply open the Excel file and 'save as' the csv format. As

an example, suppose the file "csv1.xls" is originally an Excel spreadsheet. After

'save as' into csv format, the output file is called "csv1.csv", the contents of which

are:

"name","sex","age"

"A","F",20

"B","M",30

"C","F",40

Note that the characters are enclosed in quotes and the delimiters (variable

separators) are commas. Sometimes the file may not contain quotes, as in the file

"csv2.csv".

name,sex,age

A,F,20

B,M,30

C,F,40

For both files, the R command to read in the dataset is the same.

> a <- read.csv("csv1.csv", as.is=TRUE)

> a

name sex age

1 A F 20

2 B M 30

3 C F 40

The argument 'as.is=TRUE' keeps all characters as they are. Had this not been

specified, the characters would have been coerced into factors. The variable 'name'

should not be factored but 'sex' should. The following command should therefore be

typed:

> a$sex <- factor(a$sex)

Note firstly that the object 'a' has class data frame and secondly that the names of

the variables within the data frame 'a' must be referenced using the dollar sign

notation. If not, R will inform you that the object 'sex' cannot be found.

For files with white space (spaces and tabs) as the separator, such as in the file

"data1.txt", the command to use is read.table.

> a <- read.table("data1.txt", header=TRUE, as.is=TRUE)

The file "data2.txt" is in fixed field format without field separators.

namesexage

AF20

BM30

CF40

To read in such a file, the function read.fwf is preferred. The first line, which is

the header, must be skipped. The width of each variable and the column names

must be specified by the user.

> a <- read.fwf("data2.txt", skip=1, widths=c(1,1,2),

col.names=c("name","sex","age"), as.is=TRUE)

Data entry and analysis

The above section deals with creating data frames by reading in data created from

programs outside R, such as Excel. It is also possible to enter data directly into R

by using the function data.entry. However, if the data size is large (say more

than 10 columns and/or more than 30 rows), the chance of human error is high with

the spreadsheet or text mode data entry. A software program specially designed for

data entry, such as Epidata, is more appropriate. Its web site is:

http://www.epidata.dk. Epidata has facilities for setting up useful constraints such

as range checks, automatic jumps and labelling of variables and values (codes) for

each variable. There is a direct transfer between Epidata and R (using 'read.epiinfo')

but it is recommended to export data from Epidata (using the export procedure

inside that software) to Stata format and use the function read.dta to read the

dataset into R. Exporting data into Stata format maintains many of the attributes of

the variables, such as the variable labels and descriptions.
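A sketch of this route, assuming the foreign package is installed; the file name "demo.dta" and the small data frame are invented for illustration.

```r
library(foreign)  # provides write.dta and read.dta

# Round trip: save a small data frame in Stata format and read it back.
d <- data.frame(id = 1:3, sex = factor(c("F", "M", "F")))
write.dta(d, "demo.dta")   # hypothetical file name
a <- read.dta("demo.dta")
levels(a$sex)              # factor levels survive the trip as Stata value labels
```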

Clearing memory and reading in data

At the R console type:

> rm(list=ls())

The function rm stands for "remove". The command above removes all objects in

the workspace. To see what objects are currently in the workspace type:

> ls()

character(0)

The command ls() shows a list of objects in the current workspace. The name(s)

of objects have class character. The result "character(0)" means that there are

no ordinary objects in the environment.

If you do not see "character(0)" in the output but something else, it means

those objects were left over from the previous R session. This will happen if you

agreed to save the workspace image before quitting R. To avoid this, quit R and

delete the file ".RData", which is located in your working folder, or rename it if you

would like to keep the workspace from the previous R session.

Alternatively, to remove all objects in the current workspace without quitting R,

type:

> zap()

This command will delete all ordinary objects from R memory. Ordinary objects

include data frames, vectors, arrays, etc. Function objects are spared deletion.

Datasets included in Epicalc

Most add-on packages for R contain datasets used for demonstration and teaching.

To check what datasets are available in all loaded packages in R type:

> data()

You will see names and descriptions of several datasets in various packages, such

as datasets and epicalc. In this book, most of the examples use datasets from the

Epicalc package.

Reading in data

Let's try to load an Epicalc dataset.

> data(Familydata)

The command data loads the Familydata dataset into the R workspace. If there

is no error you should be able to see this object in the workspace.

> ls()

[1] "Familydata"

Viewing contents of a data frame

If the data frame is small such as this one (11 records, 6 variables), just type its

name to view the entire dataset.

> Familydata

code age ht wt money sex

1 K 6 120 22 5 F

2 J 16 172 52 50 M

3 A 80 163 71 100 M

4 I 18 158 51 200 F

5 C 69 153 51 300 F

6 B 72 148 60 500 F

7 G 46 160 50 500 F

8 H 42 163 55 600 F

9 D 58 170 67 2000 M

10 F 47 155 53 2000 F

11 E 49 167 64 5000 M

To get the names of the variables (in order) of the data frame, you can type:

> names(Familydata)

[1] "code" "age" "ht" "wt" "money" "sex"

Another convenient function that can be used to explore the data structure is str.

> str(Familydata)

'data.frame': 11 obs. of 6 variables:

$ code : chr "K" "J" "A" "I" ...

$ age : int 6 16 80 18 69 72 46 42 58 47 ...

$ ht : int 120 172 163 158 153 148 160 163 170 155 ...

$ wt : int 22 52 71 51 51 60 50 55 67 53 ...

$ money: int 5 50 100 200 300 500 500 600 2000 2000 ...

$ sex : Factor w/ 2 levels "F","M": 1 2 2 1 1 1 1 1 2 ...

================ remaining output omitted ================

Summary statistics of a data frame

A quick way to explore a dataset is to obtain the summary statistics of all

variables. This can be achieved in a single command.

> summary(Familydata)

code age ht

Length:11 Min. : 6.0 Min. :120

Class :character 1st Qu.:30.0 1st Qu.:154

Mode :character Median :47.0 Median :160

Mean :45.7 Mean :157

3rd Qu.:63.5 3rd Qu.:165

Max. :80.0 Max. :172

wt money sex

Min. :22.0 Min. : 5 F:7

1st Qu.:51.0 1st Qu.: 150 M:4

Median :53.0 Median : 500

Mean :54.2 Mean :1023

3rd Qu.:62.0 3rd Qu.:1300

Max. :71.0 Max. :5000

The function summary is from the base library. It gives summary statistics of each

variable. For continuous variables such as 'age', 'wt', 'ht' and 'money', non-

parametric descriptive statistics such as minimum, first quartile, median, third

quartile and maximum, as well as the mean (parametric) are shown. There is no

information on the standard deviation or the number of observations. For

categorical variables, such as sex, a frequency tabulation is displayed. The first

variable code is a character variable. There is therefore no summary for it.

Compare this result with the version of summary statistics using the function summ

from the Epicalc package.

> summ(Familydata)

Anthropometric and financial data of a hypothetical family

No. of observations = 11

Var. name Obs. mean median s.d. min. max.

1 code

2 age 11 45.73 47 24.11 6 80

3 ht 11 157.18 160 14.3 120 172

4 wt 11 54.18 53 12.87 22 71

5 money 11 1023.18 500 1499.55 5 5000

6 sex 11 1.364 1 0.505 1 2

The function summ gives a more concise output, showing one variable per line. The

number of observations and standard deviations are included in the report replacing

the first and third quartile values in the original summary function from the R base

library. Descriptive statistics for factor variables use their unclassed values. The

values 'F' and 'M' for the variable 'sex' have been replaced by the codes 1 and 2,

respectively. This is because R interprets factor variables in terms of levels, where

each level is stored as an integer starting from 1 for the first level of the factor.

Unclassing a factor variable converts the categories or levels into integers. More

discussion about factors will appear later.
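The unclassing step can be seen directly with a short sketch using the same F/M coding as the 'sex' variable above.

```r
# Unclassing a factor exposes the underlying integer codes.
sex <- factor(c("F", "M", "F", "F"))
unclass(sex)        # integer codes 1 2 1 1, plus a "levels" attribute
mean(unclass(sex))  # the kind of numeric summary summ reports for a factor
```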

From the output above, the same statistic from different variables is lined up in

the same column. Information on each variable is complete, with no missing values, as

the number of observations is 11 for every variable. The minimum and maximum are shown close

to each other, enabling the range of each variable to be easily determined.

In addition, summary statistics for each individual variable are available with both

functions. The results are similar to the summary statistics of the whole dataset. Try the

following commands:

> summary(Familydata$age)

> summ(Familydata$age)

> summary(Familydata$sex)

> summ(Familydata$sex)

Note that summ, when applied to a variable, automatically gives a graphical output.

This will be examined in more detail in subsequent chapters.

Extracting subsets from a data frame

A data frame has a subscripting system similar to that of an array. To choose only

the third column of Familydata type:

> Familydata[,3]

[1] 120 172 163 158 153 148 160 163 170 155 167

This is the same as

> Familydata$ht

Note that subscripting the data frame Familydata with a dollar sign $ and the

variable name will extract only that variable. This is because a data frame is also a

kind of list (see the previous chapter).

> typeof(Familydata)

[1] "list"

To extract more than one variable, we can use either the index number of the

variable or the name. For example, if we want to display only the first 3 records of

'ht', 'wt' and 'sex', then we can type:

> Familydata[1:3,c(3,4,6)]

ht wt sex

1 120 22 F

2 172 52 M

3 163 71 M

We could also type:

> Familydata[1:3,c("ht","wt","sex")]

ht wt sex

1 120 22 F

2 172 52 M

3 163 71 M

The condition in the subscript can be a selection criterion, such as selecting the

females.

> Familydata[Familydata$sex=="F",]

code age ht wt money sex

1 K 6 120 22 5 F

4 I 18 158 51 200 F

5 C 69 153 51 300 F

6 B 72 148 60 500 F

7 G 46 160 50 500 F

8 H 42 163 55 600 F

10 F 47 155 53 2000 F

Note that the conditional expression must be followed by a comma to indicate

selection of all columns. In addition, two equals signs are needed in the conditional

expression. Recall that one equals sign represents assignment.

Another method of selection is to use the subset function.

> subset(Familydata, sex=="F")

To select only the 'ht' and 'wt' variables among the females:

> subset(Familydata, sex=="F", select = c(ht,wt))

Note that the commands to select a subset do not have any permanent effect on the

data frame. The user must save this into a new object if further use is needed.

Adding a variable to a data frame

Often it is necessary to create a new variable and append it to the existing data

frame. For example, we may want to create a new variable called 'log10money'

which is equal to log base 10 of the pocket money.

> Familydata$log10money <- log10(Familydata$money)

Alternatively we can use the transform function.

> Familydata <- transform(Familydata, log10money=log10(money))

The data frame is now changed with a new variable 'log10money' added. This can

be checked by the following commands.

> names(Familydata)

> summ(Familydata)

Anthropometric and financial data of a hypothetical family

No. of observations = 11

Var. name Obs. mean median s.d. min. max.

1 code

2 age 11 45.73 47 24.11 6 80

3 ht 11 157.18 160 14.3 120 172

4 wt 11 54.18 53 12.87 22 71

5 money 11 1023.18 500 1499.55 5 5000

6 sex 11 1.364 1 0.505 1 2

7 log10money 11 2.51 2.7 0.84 0.7 3.7

Removing a variable from a data frame

Conversely, if we want to remove a variable from a data frame, just specify a minus

sign in front of the column subscript:

> Familydata[,-7]

code age ht wt money sex

1 K 6 120 22 5 F

2 J 16 172 52 50 M

3 A 80 163 71 100 M

4 I 18 158 51 200 F

5 C 69 153 51 300 F

6 B 72 148 60 500 F

7 G 46 160 50 500 F

8 H 42 163 55 600 F

9 D 58 170 67 2000 M

10 F 47 155 53 2000 F

11 E 49 167 64 5000 M

Note again that this only displays the desired subset and has no permanent effect on

the data frame. The following command permanently removes the variable and

returns the data frame back to its original state.

> Familydata$log10money <- NULL

Assigning a NULL value to a variable in the data frame is equivalent to removing

that variable.

At this stage, it is possible that you may have made some typing mistakes. Some of

them may be serious enough to make the data frame Familydata distorted or

even not available from the environment. You can always refresh the R

environment by removing all objects and then read in the dataset afresh.

> zap()

> data(Familydata)

Attaching the data frame to the search path

Accessing a variable in the data frame by prefixing the variable with the name of

the data frame is tidy but often clumsy, especially if the data frame and variable

names are lengthy. Placing or attaching the data frame into the search path

eliminates the tedious requirement of prefixing the name of the variable with the

data frame. To check the search path type:

> search()

[1] ".GlobalEnv" "package:epicalc"

[3] "package:methods" "package:stats"

[5] "package:graphics" "package:grDevices"

[7] "package:utils" "package:datasets"

[9] "package:foreign" "Autoloads"

[11] "package:base"

The general explanation of search() is given in Chapter 1. Our data frame is not

in the search path. If we try to use a variable in a data frame that is not in the search

path, an error will occur.

> summary(age)

Error in summary(age) : Object "age" not found

Try the following command:

> attach(Familydata)

The search path now contains the data frame in the second position.

> search()

[1] ".GlobalEnv" "Familydata" "package:methods"

[4] "package:datasets" "package:epicalc" "package:survival"

[7] "package:splines" "package:graphics" "package:grDevices"

[10] "package:utils" "package:foreign" "package:stats"

[13] "Autoloads" "package:base"

Since 'age' is inside Familydata, which is now in the search path, computation of

statistics on 'age' is now possible.

> summary(age)

Min. 1st Qu. Median Mean 3rd Qu. Max.

6.00 30.00 47.00 45.73 63.50 80.00

Attaching a data frame to the search path is similar to loading a package using the

library function. The attached data frame, as well as the loaded packages, are

actually read into R's memory and are resident in memory until they are detached.

This is true even if the original data frame has been removed from the memory.

> rm(Familydata)

> search()

The data frame Familydata is still in the search path allowing any variable

within the data frame to be used.

> age

[1] 6 16 80 18 69 72 46 42 58 47 49

Loading the same library over and over again has no effect on the search path but

re-attaching the same data frame is possible and may eventually overload the

system resources.

> data(Familydata)

> attach(Familydata)

The following object(s) are masked from Familydata

( position 3 ) :

age code ht money sex wt

These variables are already in the second position of the search path. Attaching

again creates conflicts in variable names.

> search()

[1] ".GlobalEnv" "Familydata" "Familydata"

[4] "package:methods" "package:datasets" "package:epicalc"

[7] "package:survival" "package:splines" "package:graphics"

[10] "package:grDevices" "package:utils" "package:foreign"

[13] "package:stats" "Autoloads" "package:base"

The search path now contains two objects named Familydata in positions 2 and

3. Both have more or less the same set of variables with the same names. Recall that

every time a command is typed in and the <Enter> key is pressed, the system will

first check whether it is an object in the global environment. If not, R checks

whether it is a component of the remaining search path, that is, a variable in an

attached data frame or a function in any of the loaded packages.

Repeatedly loading the same library does not add to the search path because R

knows that the contents in the library do not change during the same session.

However, a data frame can change at any time during a single session, as seen in the

previous section where the variable 'log10money' was added and later removed.

The data frame attached at position 2 may well be different to the object of the same

name in another search position. Confusion arises if an independent object (e.g.

vector) is created outside the data frame (in the global environment) with the same

name as the data frame or if two different data frames in the search path each

contain a variable with the same name. The consequences can be disastrous.

In addition, all elements in the search path occupy system memory. The data frame

Familydata in the search path occupies the same amount of memory as the one

in the current workspace. Doubling of memory is not a serious problem if the data

frame is small. However, repeatedly attaching to a large data frame may cause R to

not execute due to insufficient memory.

For these reasons it is good practice, firstly, to remove a data frame from the

search path once it is no longer needed. Secondly, remove any objects from the

environment using rm(list=ls()) when they are no longer wanted. Thirdly,

do not define a new object (say vector or matrix) that may have the same name as

the data frame in the search path. For example, we should not create a new vector

called Familydata as we already have the data frame Familydata in the

search path.

Detach both versions of Familydata from the search path.

> detach(Familydata)

> detach(Familydata)

Note that the command detachAllData() in Epicalc removes all attachments

to data frames. The command zap() does the same, but in addition removes all

non-function objects. In other words, the command zap() is equivalent to

rm(list=lsNoFunction()) followed by detachAllData().

The 'use' command in Epicalc

Attaching to and detaching from a data frame is often tedious and cumbersome, and

if there is more than one data frame in the workspace, users must be careful that

they are attached to the correct one when working with their data. Most data

analysis deals only with a single data frame. In order to reduce these steps of

attaching and detaching, Epicalc contains a command called use which eases the

process. At the R console type:

> zap()

> data(Familydata)

> use(Familydata)

The command use() reads in a data file from Dbase (.dbf), Stata (.dta), SPSS

(.sav), EpiInfo (.rec) and comma separated value (.csv) formats, as well as those

that come pre-supplied with R packages. The Familydata data frame comes with

Epicalc. If you want to read a dataset from a Stata file format, such as

"family.dta", simply type use("family.dta") without typing the data

command above. The dataset is copied into memory in a default data frame called

.data. If .data already exists, it will be overwritten by the new data frame. The

original Familydata, however, remains.

In fact, all the datasets in Epicalc were originally in one of the file formats of .dta,

.rec, .csv or .txt. These datasets in their original format can be downloaded from

http://medipe.psu.ac.th/Epicalc/. If you download the files and set the working

directory for R to the default folder "C:\RWorkplace", you do not need to type

data(Familydata) and use(Familydata), but instead simply type:

> use("family.dta")

The original Stata file will be read into R and saved as .data. If successful, it will

make no difference whether you type data(Familydata) followed by

use(Familydata) or simply use("family.dta").

In most parts of the book, we chose to tell you to type data(Familydata) and

use(Familydata) instead of use("family.dta") because the dataset is

already in the Epicalc package, which is readily available when you practice

Epicalc to this point. However, putting "filename.extension" as the argument such

as use("family.dta") in this chapter or use("timing.dta") in the next

chapter, and so forth, may give you a real sense of reading actual files instead of the

approach that is used in this book.

The command use also automatically places the data frame, .data, into the

search path. Type:

> search()

You will see that .data is in the second position of the search path. Type:

> ls()

You will see only the Familydata object, and not .data because the name of

this object starts with a dot and is classified as a hidden object. In order to show that

.data is really in the memory, type:

> ls(all=TRUE)

You will see .data in the first position of the list.

.data is resistant to zap()

Type the following at the R console:

> zap()

> ls(all=TRUE)

The object Familydata is gone but .data is still there. However, the

attachment to the search path is now lost.

> search()

In order to put it back to the search path, we have to attach to it manually.

> attach(.data)

The advantage of use() is not only that it saves time by making attach and

detach unnecessary, but .data is placed in the search path as well as being

made the default data frame. Thus des() is the same as des(.data), summ()

is equivalent to summ(.data).

> des()

> summ()

The sequence of commands zap, data(datafile), use(datafile),

des() and summ() is recommended for starting an analysis of almost all datasets

in this book. A number of other commands from the Epicalc package work based

on this strategy of making .data the default data frame and exclusively attached

to the search path (all other data frames will be detached, unless the argument

'clear=FALSE' is specified in the use function). For straightforward data

analysis, the command use() is sufficient to create this setting. In many cases

where the data that is read in needs to be modified, it is advised to rename or copy

the final data frame to .data. Then detach from the old .data and re-attach to

the most updated one in the search path.

This strategy does not have any effect on the standard functions of R. The users of

Epicalc can still use the other commands of R while still enjoying the benefit of

Epicalc.

Exercises________________________________________________

With the datasets provided with Epicalc, use the above commands (zap, data,

use, des, summ) to have a quick look at them.

Chapter 5: Simple Data Exploration

Data exploration using Epicalc

In the preceding chapter, we learnt the commands zap for clearing the workspace

and memory, use for reading in a data file, and codebook, des and summ for

initially exploring the data frame, keeping in mind that these are all Epicalc

commands. The use function places the data frame into a hidden object called

.data, which is automatically attached to the search path. In this chapter, we will

work with more examples of data frames as well as ways to explore individual

variables.

> zap()

> data(Familydata)

> use(Familydata)

> des()

Anthropometric and financial data of a hypothetical family

No. of observations = 11

Variable Class Description

1 code character

2 age integer Age(yr)

3 ht integer Ht(cm.)

4 wt integer Wt(kg.)

5 money integer Pocket money(B.)

6 sex factor

The first line after the des() command shows the data label, which is the

descriptive text for the data frame. This is usually created by the software that was

used to enter the data, such as Epidata or Stata. Subsequent lines show variable

names and individual variable descriptions. The variable 'code' is a character string

while 'sex' is a factor. The other variables have class integer. A character variable is

not used for statistical calculations but simply for labelling purposes or for record

identification. Recall that a factor is what R calls a categorical or group variable.

The remaining integer variables ('age', 'ht', 'wt' and 'money') are intuitively

continuous variables. The variables 'code' and 'sex' have no variable descriptions

due to omission during the preparation of the data prior to data entry.


> summ()

Anthropometric and financial data of a hypothetical family

No. of observations = 11

Var. name Obs. mean median s.d. min. max.

1 code

2 age 11 45.73 47 24.11 6 80

3 ht 11 157.18 160 14.3 120 172

4 wt 11 54.18 53 12.87 22 71

5 money 11 1023.18 500 1499.55 5 5000

6 sex 11 1.364 1 0.505 1 2

As mentioned in the previous chapter, the command summ gives summary statistics

of all variables in the default data frame, in this case .data. Each of the six

variables has 11 observations, which means that there are no missing values in the

dataset. Since the variable 'code' is class 'character' (as shown from the 'des()'

command above), information about this variable is not shown. The ages of the

subjects in this dataset range from 6 to 80 (years). Their heights range from 120 to

172 (cm), and their weights range from 22 to 71 (kg). The variable 'money' ranges

from 5 to 5,000 (baht). The mean and median age, height and weight are quite close

together indicating relatively non-skewed distributions. The variable 'money' has a

mean much larger than the median signifying that the distribution is right skewed.

The last variable, 'sex', is a factor. However, the statistics are based on the

unclassed values of this variable. We can see that there are two levels, since the

minimum is 1 and the maximum is 2. For factors, all values are stored internally as

integers, i.e. only 1 or 2 in this case. The mean of 'sex' is 1.364 indicating that 36.4

percent of the subjects have the second level of the factor (in this case it is male). If

a factor has more than two levels, the mean will have no useful interpretation.
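This relationship between the mean of the internal codes and the group proportion can be checked in plain base R. The sketch below rebuilds a 'sex' factor with the same 7 F / 4 M composition as the Familydata example; it is an illustration only, not an Epicalc command.

```r
# Rebuild the 'sex' factor from the Familydata example: 7 females, 4 males
sex <- factor(c(rep("F", 7), rep("M", 4)))

# Factors are stored internally as integer codes 1, 2, ...
as.integer(sex)

# The mean of the codes equals 1 plus the proportion of the second level
mean(as.integer(sex))    # 1.3636...

# Equivalently, the proportion of males can be computed directly
mean(sex == "M")         # 0.3636...
```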

Codebook

The function summ gives summary statistics of each variable, line by line. This is

very useful for numeric variables but less so for factors, especially those with more

than two levels. Epicalc has another function that gives summary statistics for a

numeric variable and a frequency table with level labels and codes for factors.

> codebook()

Anthropometric and financial data of a hypothetical family

code :

A character vector

==================

age : Age(yr)

obs. mean median s.d. min. max.

11 45.727 47 24.11 6 80

==================


ht : Ht(cm.)

obs. mean median s.d. min. max.

11 157.182 160 14.3 120 172

==================

wt : Wt(kg.)

obs. mean median s.d. min. max.

11 54.182 53 12.87 22 71

==================

money : Pocket money(B.)

obs. mean median s.d. min. max.

11 1023.182 500 1499.55 5 5000

==================

sex :

Label table: sex1

code Frequency Percent

F 1 7 63.6

M 2 4 36.4

==================

Unlike results from the summ function, codebook deals with each variable in the

data frame with more details. If a variable label exists, it is given in the output. For

factors, the name of the label table for the levels is shown, and the codes for the levels are displayed in a column, followed by the frequency and percentage of each level. The function is therefore very useful. The output can be used to write the table of baseline data for a manuscript arising from the data frame.

The output combines variable description with summary statistics for all numeric

variables. For 'sex', which is a factor, the original label table is named 'sex1' where

1 = F and 2 = M. There are 7 females and 4 males in this family.

Note that the label table for the codes of a factor can easily be created while preparing the data entry in Epidata, by setting up the ".chk" file. If the data is

exported in Stata format, then the label table of each variable will be exported along

with the dataset. The label tables are passed as attributes in the corresponding data

frame. The Epicalc codebook command fully utilizes this attribute allowing users

to see and document the coding scheme for future referencing.

We can also explore individual variables in more detail with the same commands

des and summ by placing the variable name inside the brackets.

> des(code)

'code' is a variable found in the following source(s):

Var. source Var. order Class # records Description

.data 1 character 11


The output tells us that 'code' is in .data. Suppose we create an object, also called

'code', but positioned freely outside the hidden data frame.

> code <- 1

> des(code)

'code' is a variable found in the following source(s):

Var. source Var. order Class # records Description

.GlobalEnv numeric 1

.data 1 character 11

The output tells us that there are two 'codes'. The first is the recently created object

in the global environment. The second is the variable inside the data frame, .data.

To avoid confusion, we will delete the recently created object 'code'.

> rm(code)

After removal of 'code' from the global environment, the latest des() command will describe the old 'code' variable, which is part of .data and remains usable. Using des() with other variables shows similar results.

Now try the following command:

> summ(code)

This gives an error because 'code' is a character vector. Next type:

> summ(age)

Obs. mean median s.d. min. max.

11 45.727 47 24.11 6 80

[Figure: sorted dot chart, Distribution of Age(yr), subjects sorted by X-axis values]

The results are similar to what we saw from summ. However, since the argument to the summ command is a single variable, a graph is also produced showing the distribution of age.

The main title of the graph contains a description of the variable after the words

"Distribution of". If the variable has no description, the variable name will be

presented instead. Now try the following commands:

> abc <- 1:20

> summ(abc)

Obs. mean median s.d. min. max.

20 10.5 10.5 5.916 1 20

[Figure: sorted dot chart, Distribution of abc, subjects sorted by X-axis values]

The object 'abc' has a perfectly uniform distribution since the dots form a straight

line.

The graph produced by the command summ is called a sorted dot chart. A dot chart

has one axis (in this case the X-axis) representing the range of the variable. The

other axis, the Y-axis, labelled 'Subject sorted by X-axis values', represents each

subject or observation sorted by the values of the variable. For the object 'abc', the

smallest number is 1, which is plotted at the bottom left, then 2, 3, 4 etc. The final

observation is 20, which is plotted at the top right. The values increase from one

observation to the next higher value. Since this increase is steady, the line is

perfectly straight.
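The construction of this graph can be sketched in plain base R (an illustration of the idea only; Epicalc's summ adds the title, axis formatting and data label automatically):

```r
# The 11 ages from Familydata, in arbitrary order
age <- c(80, 6, 16, 42, 47, 49, 58, 69, 72, 18, 46)

# Sorted values go on the X-axis, rank (1..n) on the Y-axis:
# each subject is one dot, and steady increments give a straight line
plot(sort(age), seq_along(age), pch = 1,
     xlab = "Age(yr)", ylab = "Subject sorted by X-axis values")
```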


To look at a graph of age again type:

> summ(age)

> axis(side=2, 1:length(age))

The 'axis' command adds tick marks and value labels on the specified axis (in this

case, 'side=2' denotes the Y-axis). The ticks are placed at values of 1, 2, 3, up to 11

(which is the length of the vector age). The ticks are omitted by default since if the

vector is too long, the ticks would be too congested. In this session, the ticks will

facilitate discussion.

[Figure: sorted dot chart, Distribution of Age(yr), with Y-axis ticks 1 to 11]

To facilitate further detailed consideration, the sorted age vector is shown with the

graph.

> sort(age)

[1] 6 16 18 42 46 47 49 58 69 72 80

The relative increment on the X-axis from the first observation (6 years) to the

second one (16 years) is larger than from the second to the third (18 years). Thus

we observe a steep increase in the Y-axis for the second pair. From the 3rd

observation to the 4th (42 years), the increment is even larger than the 1st one; the

slope is relatively flat. In other words, there is no dot between 20 and 40 years. The

4th, 5th, 6th and 7th values are relatively close together, thus these give a relatively

steep increment on the Y-axis.

> summ(ht)

Obs. mean median s.d. min. max.

11 157.182 160 14.303 120 172

> axis(side=2, 1:length(ht))

> sort(ht)

[1] 120 148 153 155 158 160 163 163 167 170 172


The distribution of height as displayed in the graph is interesting. The shortest

subject (120cm) is much shorter than the remaining subjects. In fact, she is a child

whereas all the others are adults. There are two persons (7th and 8th records) with the

same height (163cm). The increment on the Y-axis is hence vertical.

[Figure: sorted dot chart, Distribution of Ht(cm.), with Y-axis ticks 1 to 11]

> summ(wt)

> axis(side=2, 1:length(wt))

[Figure: sorted dot chart, Distribution of Wt(kg.), with Y-axis ticks 1 to 11]

There is a higher level of clustering of weight than height from the 2nd to 7th

observations; these six persons have very similar weights. From the 8th to 11th

observations, the distribution is quite uniform.

For the distribution of the money variable, type:

> summ(money)

Money has the most skewed distribution. The first seven persons carry less than

1,000 baht. The next two persons carry around 2,000 baht whereas the last carries

5,000 baht, far away (in the X-axis) from the others. This is somewhat consistent

with a theoretical exponential distribution.

[Figure: sorted dot chart, Distribution of Pocket money(B.), with Y-axis ticks 1 to 11]

Next have a look at the distribution of the sex variable.

> summ(sex)

Obs. mean median s.d. min. max.

11 1.364 1 0.5 1 2

The graph shows that four out of eleven (36.4%, as shown in the textual statistics)

are male. When the variable is a factor and has been labelled, the values shown are the names of the groups.

[Figure: sorted dot chart, Distribution of sex, groups F and M, with Y-axis ticks 1 to 11]

In fact, a better result can be obtained by typing

> tab1(sex)

sex :

Frequency Percent Cum. percent

F 7 63.6 63.6

M 4 36.4 100.0

Total 11 100.0 100.0

[Figure: bar chart, Distribution of sex; frequencies F = 7, M = 4]

Since there are two sexes, we may simply compare the distributions of height by

sex.

> summ(ht, by=sex)

For sex = F

Obs. mean median s.d. min. max.

7 151 155 14.514 120 163

For sex = M

Obs. mean median s.d. min. max.

4 168 168.5 3.916 163 172

[Figure: sorted dot charts, Distribution of Ht(cm.) by sex, panels F and M]

Clearly, males are taller than females.

Dotplot

In addition to summ and tab1, Epicalc has another exploration tool called

dotplot.

> dotplot(money)

While the graph created by the summ command plots individual values against their rank, dotplot divides the scale into several small equally sized bins (default =

40) and stacks each record into its corresponding bin. In the figure above, there are

three observations at the leftmost bin and one on the rightmost bin. The plot is very

similar to a histogram except that the original values appear on the X-axis. Most

people are more acquainted with a dot plot than the sorted dot chart produced by

summ. However, the latter plot gives more detailed information with better

accuracy. When the sample size is small, plots by summ are more informative.


When the sample size is large (say above 200), dotplot is more understandable

by most people.

[Figure: dot plot, Distribution of Pocket money(B.); Y-axis shows frequency]

> dotplot(money, by=sex)

[Figure: dot plots, Distribution of Pocket money(B.) by sex, panels F and M]

The command summ easily produces a powerful graph. One may want to show

even more information. R can serve most purposes, but the user must spend some

time learning it.


Let's draw a sorted dot chart for the heights. The command below should be

followed step by step to see the change in the graphic window resulting from typing

in each line. If you make a serious mistake simply start again from the first line.

Using the up arrow key, the previous commands can be edited before executing

again.

> zap()

> data(Familydata)

> use(Familydata)

> sortBy(ht)

> .data

The command sortBy, unlike its R base equivalent sort, has a permanent effect

on .data. The whole data frame has been sorted in ascending order by the value

of height.
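For comparison, the same permanent reordering can be sketched in base R with order() on a small, made-up data frame (Epicalc's sortBy does this for .data in one step):

```r
# A tiny stand-in data frame (hypothetical values, for illustration only)
df <- data.frame(code = c("B", "K", "C"), ht = c(172, 120, 163))

# order() returns the permutation that sorts 'ht'; indexing the rows with it
# rearranges the whole data frame, as sortBy(ht) does for .data
df <- df[order(df$ht), ]
df$ht    # 120 163 172
```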

> dotchart(ht)

Had the data not been sorted, the incremental pattern would not be seen.

> dotchart(ht, col=unclass(sex), pch=18)

Showing separate colours for each sex is done using the 'unclass' function. Since 'sex' is a factor, unclassing it gives a numeric vector with 1 for the first level (female)

and 2 for the second level (male). Colours can be specified in several different ways

in R. One simple way is to utilise a small table of colours known as the palette. The

default palette has eight colours, where number 1 represents black, number 2 represents red, and so on up to 8, which represents gray. Thus the black dots represent

females and the red dots represent males. More details on how to view or

manipulate the palette can be found in the help pages.
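A quick way to inspect and temporarily change the palette in base R (the replacement colours below are arbitrary examples):

```r
palette()                      # list the current colours; position 1 is black

# Replace the palette, e.g. blue for the first level and orange for the second
palette(c("blue", "orange"))
palette()[1:2]                 # "blue" "orange"

palette("default")             # restore the standard palette
```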

To add the y-axis, type the following command:

> axis(side=2,at=1:length(ht), labels=code, las=1)

The argument 'las' is a graphical parameter, which specifies the orientation of tick

labelling on the axes. When 'las=1', all the labels of the ticks will be horizontal to

the axis. A legend is added using the 'legend' command:

> legend(x=130, y=10, legend=c("female","male"), pch=18,

col=1:2, text.col=1:2)

The argument 'pch' stands for point or plotting character. Code 18 means the

symbol is a solid diamond shape which is more prominent than pch=1 (a hollow

round dot). Note that 'col' is for plot symbol colours and 'text.col' is for text colour

in the legend.

To add the titles type:

> title(main="Distribution of height")

> title(xlab="cms")

[Figure: dot chart, Distribution of height (cms), subjects labelled by code on the Y-axis, with a legend distinguishing female and male]

To summarise, after use(datafile), des and summ, individual variables can

be explored simply by summ(var.name) and summ(var.name,

by=group.var). In addition to summary statistics, the sorted dot chart can be

very informative. The dotplot command trades the accuracy of individual values for a frequency dot plot, which is similar to a histogram. Further use of this

command will be demonstrated when the number of observations is larger.


Exercise_________________________________________________

Try the following simulations for varying sample sizes and number of groups.

Compare the graph of different types from using three commands, summ, dotplot

and boxplot. For each condition, which type of graph is the best?

## Small sample size, two groups.

> grouping1 <- rep(1:2, times=5)

> random1 <- rnorm(10, mean=grouping1, sd=1)

> summ(random1, by=grouping1)

> dotplot(random1, by=grouping1)

> boxplot(random1 ~ grouping1)

## Moderate sample size, three groups.

> grouping2 <- c(rep(1, 10),rep(2, 20), rep(3, 45))

> random2 <- rnorm(75, mean=grouping2, sd=1)

> summ(random2, by=grouping2)

> dotplot(random2, by=grouping2)

> boxplot(random2 ~ grouping2, varwidth=TRUE, col=1:3,

horizontal=TRUE, las=1)

## Large sample size, four groups.

> grouping3 <- c(rep(1, 100), rep(2, 200), rep(3,450), rep(4,

1000))

> random3 <- rnorm(1750, mean=grouping3, sd=1)

> summ(random3, by=grouping3)

> dotplot(random3, by=grouping3)

> boxplot(random3 ~ grouping3, varwidth=TRUE, col=1:4,

horizontal=TRUE, las=1)

Which kind of graph is the best for the different conditions?


Chapter 6: Date and Time

One of the purposes of an epidemiological study is to describe the distribution of a

population's health status in terms of time, place and person. Most data analyses,

however, deal more with person than with time and place. In this chapter, the emphasis

will be on time.

The time unit includes century, year, month, day, hour, minute and second. The

most common unit that is directly involved in epidemiological research is day. The

chronological location of a day is its date, which is a function of the year, month and day.

There are several common examples of the use of dates in epidemiological studies.

Birth date is necessary for computation of accurate age. In an outbreak

investigation, description of date of exposure and onset is crucial for computation

of incubation period. In follow up studies, the follow-up time is usually marked by

date of visit. In survival analysis, the dates of starting treatment and of assessing the outcome are needed to compute the survival time.

Computation functions related to date

Working with dates can be computationally complicated. There are leap years,

months with different lengths, days of the week and even leap seconds. Dates can

even be stored in different eras depending on the calendar. The basic task in

working with dates is to link the time from a fixed date to the display of various

date formats that people are familiar with.

Different software use different starting dates for calculating dates. This is called an

epoch. R uses the first day of 1970 as its epoch (day 0). In other words, dates are

stored as the number of days since 1st January 1970, with negative values for earlier

dates. Try the following at the R console:


> a <- as.Date("1970-01-01")

> a

[1] "1970-01-01"

> class(a)

[1] "Date"

> as.numeric(a)

[1] 0

The first command above creates an object 'a' with class Date. When converted to

numeric, the value is 0. Day 100 would be

> a + 100

[1] "1970-04-11"

The default display format in R for a Date object is ISO format. The American

format of 'month, day, year' can be achieved by

> format(a, "%b %d, %Y")

[1] "Jan 01, 1970"

The function 'format' displays the object 'a' in a fashion chosen by the user. '%b'

denotes the month in the three-character abbreviated form. '%d' denotes the day

value and '%Y' denotes the value of the year, including the century.

Under some operating system conditions, such as the Thai Windows operating

system, '%b' and '%a' may not work or may present some problems with fonts. Try

the following command:

> Sys.setlocale("LC_ALL", "C")

Now try the above format command again. This time, it should work. R has the

'locale' or working location set by the operating system, which varies from country

to country. "C" is the motherland of R and the language "C" is American English.

'%A' and '%a' are formats representing full and abbreviated weekdays, respectively,

while '%B' and '%b' represent the months. These are language and operating system

dependent.

Try these:

> b <- a + (0:3)

> b

Then change the language and see the effect on the R console and graphics device.

> setTitle("German"); summ(b)

> setTitle("French"); summ(b)

> setTitle("Italian"); summ(b)


The command setTitle changes the locale as well as the fixed wording of the

locale to match it. To see what languages are currently available in Epicalc try:

> titleString()

> titleString(return.look.up.table=TRUE)

Note that these languages all use standard ASCII text characters. The displayed

results from these commands will depend on the operating system. Thai and

Chinese versions of Windows may give different results.

You may try setTitle with different locales. To reset the system to your original

default values, type

> setTitle("")

For languages with non-standard ASCII characters, the three phrases often used in

Epicalc ("Distribution of", "by", and "Frequency") can be changed to your own

language. For more details see the help for the 'titleString' function.

Manipulation of title strings, variable labels and levels of factors using your own

language means you can have the automatic graphs tailored to your own needs. This

is however a bit too complicated to demonstrate in this book. Interested readers can

contact the author for more information.

Epicalc displays the results of the summ function in ISO format to avoid country

biases. A graph covering only a small range of days, like the vector 'b', has its X-axis tick mark labels in '%a%d%b' format. Note that '%a' denotes weekday in the

three-character abbreviated form.

In case the dates are not properly displayed, just solve the problem by typing:

> Sys.setlocale("LC_ALL", "C")

Then, check whether the date format containing '%a' and '%b' works.

> format(b, "%a %d%b%y")

[1] "Thu 01Jan70" "Fri 02Jan70" "Sat 03Jan70" "Sun 04Jan70"

> summ(b)

obs. mean median s.d. min. max.

4 1970-01-02 1970-01-02 <NA> 1970-01-01 1970-01-04

[Figure: sorted dot chart, Distribution of b, with X-axis tick labels Thu01Jan to Sun04Jan]

Reading in a date variable

Each software has its own way of reading in dates. Transferring date variables from

one software to another sometimes results in 'characters' which are not directly

computable by the destination software.

R can read in date variables from Stata files directly, but not from older versions of EpiInfo with the <dd/mm/yy> format. These will be read in as 'character' or 'AsIs'.

When reading in data from a comma-separated values (.csv) file, it is a

good habit to put an argument 'as.is = TRUE' in the read.csv command to avoid

date variables being converted to factors.
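A minimal sketch of this habit, using a small temporary file created on the spot (the column names and values here are made up for illustration):

```r
# Create a small .csv file for demonstration
tmp <- tempfile(fileext = ".csv")
writeLines(c("id,visit", "1,13/07/2004", "2,01/08/2004"), tmp)

# as.is = TRUE keeps 'visit' as character instead of converting it to a factor
dat <- read.csv(tmp, as.is = TRUE)
class(dat$visit)    # "character"

# The character dates can then be converted explicitly
dat$visit <- as.Date(dat$visit, "%d/%m/%Y")
class(dat$visit)    # "Date"
```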

It is necessary to know how to create date variables from character format. Create a

vector of three dates stored as character:

> date1 <- c("07/13/2004","08/01/2004","03/13/2005")

> class(date1)

[1] "character"

> date2 <- as.Date(date1, "%m/%d/%Y")

The format or sequence of the original characters must be reviewed. In the first

element of 'date1', '13', which can only be day (since there are only 12 months), is

in the middle position, thus '%d' must also be in the middle position. Slashes '/'

separate month, day and year. This must be correspondingly specified in the format

of the as.Date command.


> date2

[1] "2004-07-13" "2004-08-01" "2005-03-13"

> class(date2)

[1] "Date"

The default date format is "%Y-%m-%d". Changing into the format commonly

used in Epicalc is achieved by:

> format(date2, "%d%b%y")

[1] "13Jul04" "01Aug04" "13Mar05"

Other formats can be further explored by the following commands:

> help(format.Date)

> help(format.POSIXct)

It is not necessary to have all day, month and year presented. For example, if only

month is to be displayed, you can type:

> format(date2, "%B")

[1] "July" "August" "March"

To include day of the week:

> format(date2, "%a-%d%b")

[1] "Tue-13Jul" "Sun-01Aug" "Sun-13Mar"

> weekdays(date2)

[1] "Tuesday" "Sunday" "Sunday"

This is the same as

> format(date2, "%A")

Conversely, if there are two or more variables that are parts of date:

> day1 <- c("12","13","14");

> month1 <- c("07","08","12")

> paste(day1, month1)

[1] "12 07" "13 08" "14 12"

> as.Date(paste(day1,month1), "%d %m")

[1] "2007-07-12" "2007-08-13" "2007-12-14"

The function paste joins two character variables together. When the year value is omitted, R automatically adds the current year from the computer's system clock.
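If relying on the system year is undesirable, the year can be supplied explicitly (2004 below is an arbitrary example year):

```r
day1   <- c("12", "13", "14")
month1 <- c("07", "08", "12")
year1  <- "2004"    # example year, supplied explicitly

# Join year, month and day, and declare the matching format
as.Date(paste(year1, month1, day1), "%Y %m %d")
# [1] "2004-07-12" "2004-08-13" "2004-12-14"
```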

Dealing with time variables

A Date object contains year, month and day values. For time, values of hour,

minute and second must be available.


A sample dataset involving a number of timing variables was collected from

participants of a workshop on 14th December 2004, asking about personal

characteristics, times they went to bed, woke up, and arrived at the workshop. The

workshop commenced at 8.30 am.

> zap()

> data(Timing)

> use(Timing)

Note: ______________________________________________________________________

The original file for this dataset is in Stata format and is called "timing.dta". If you have

downloaded this file into the working directory (as explained in the previous chapter), you

may simply type use("timing.dta").

> des()

Timing questionnaire

No. of observations =18

Variable Class Description

1 id integer

2 gender factor

3 age integer

4 marital factor

5 child integer No. of children

6 bedhr integer Hour to bed

7 bedmin integer Min. to bed

8 wokhr integer Hour woke up

9 wokmin integer Min. woke up

10 arrhr integer Hour arrived at wkshp

11 arrmin integer Min. arrived at wkshp

> summ()

Timing questionnaire

No. of observations = 18

Var. name Obs. mean median s.d. min. max.

1 id 18 9.5 9.5 5.34 1 18

2 gender 18 1.611 2 0.502 1 2

3 age 18 31.33 27.5 12.13 19 58

4 marital 18 1.611 2 0.502 1 2

5 child 18 0.33 0 0.59 0 2

6 bedhr 18 7.83 1.5 10.34 0 23

7 bedmin 18 19.83 17.5 17.22 0 45

8 wokhr 18 5.61 6 1.61 1 8

9 wokmin 18 23.83 30 17.2 0 49

10 arrhr 18 8.06 8 0.24 8 9

11 arrmin 18 27.56 29.5 12.72 0 50


To create a variable equal to the time the participants went to bed, the function

ISOdatetime is used.

> bed.time <- ISOdatetime(year=2004, month=12, day=14,

hour=bedhr, min=bedmin, sec=0, tz="")

> summ(bed.time)

Min. Median Mean Max.

2004-12-14 00:00 2004-12-14 01:30 2004-12-14 08:09 2004-12-14 23:45

[Figure: sorted dot chart, Distribution of bed.time, spanning 00:00 to 23:45]

The graph shows interrupted time. In fact, the day should be calculated based on the

time that the participants went to bed. If the participant went to bed between 12pm

(midday) and 12am (midnight), then the day should be December 13th, otherwise

the day should be the 14th, the day of the workshop. To recalculate the day type:

> bed.day <- ifelse(bedhr > 12, 13, 14)

The ifelse function chooses the second argument if the first argument is TRUE,

the third otherwise.
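In isolation, this vectorised choice works element by element (the bed hours below are arbitrary examples):

```r
bedhr <- c(23, 1, 0, 22)      # example bed hours
ifelse(bedhr > 12, 13, 14)    # day 13 for p.m. hours, day 14 otherwise
# [1] 13 14 14 13
```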

> bed.time <- ISOdatetime(year=2004, month=12, day=bed.day,

hour=bedhr, min=bedmin, sec=0, tz="")

> summ(bed.time)

Min. Median Mean Max.

2004-12-13 21:30 2004-12-14 00:22 2004-12-14 00:09 2004-12-14 02:30

[Figure: sorted dot chart, Distribution of bed.time, spanning 21:30 to 02:30]

After this, woke up time and arrival time can be created and checked.

> woke.up.time <- ISOdatetime(year=2004, month=12, day=14,

hour=wokhr, min=wokmin, sec=0, tz="")

> summ(woke.up.time)

Min. Median Mean Max.

2004-12-14 01:30 2004-12-14 06:10 2004-12-14 06:00 2004-12-14 08:20

The object 'woke.up.time' looks normal, although one or two participants woke

up quite early in the morning. To compute sleeping duration type:

> sleep.duration <- difftime(woke.up.time, bed.time)

> summ(sleep.duration)

Obs. mean median s.d. min. max.

18 5.844 6.25 1.7 1 8

[Figure: sorted dot chart, Distribution of sleep.duration, in hours]

A suitable unit for 'sleep.duration' is chosen automatically, but it can be changed by the user if desired. Somebody slept very little.
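In base R, the units of a difftime can be fixed explicitly, either at creation or afterwards (the times below are arbitrary examples):

```r
t1 <- ISOdatetime(2004, 12, 13, 23, 30, 0, tz = "GMT")  # went to bed
t2 <- ISOdatetime(2004, 12, 14,  6, 15, 0, tz = "GMT")  # woke up

difftime(t2, t1)                   # Time difference of 6.75 hours
difftime(t2, t1, units = "mins")   # Time difference of 405 mins

d <- difftime(t2, t1)
units(d) <- "mins"                 # change the units of an existing difftime
d
```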

Displaying two variables in the same graph

The command summ of Epicalc is not appropriate for displaying two variables

simultaneously. The original dotchart of R is the preferred graphical method.

> sortBy(bed.time)

> plot(bed.time, 1:length(bed.time),

xlim=c(min(bed.time),max(woke.up.time)), pch=18, col="blue",

ylab=" ", yaxt="n")

The argument 'xlim' (x-axis limits) is set to be the minimum of 'bed.time' and the

maximum of 'woke.up.time'. The argument yaxt="n" suppresses the tick labels

on the Y-axis.

> n <- length(bed.time)

> segments(bed.time, 1:n, woke.up.time, 1:n)

> points(woke.up.time, 1:n, pch=18, col="red")

> title(main="Distribution of Bed time and Woke up time")

[Figure: Distribution of Bed time and Woke up time, subjects sorted by bed time]

Finally, arrival time at the workshop is created:

> arrival.time <- ISOdatetime(year=2004, month=12, day=14,

hour=arrhr, min=arrmin, sec=0, tz="")

> summ(arrival.time)

Min. Median Mean Max.

2004-12-14 08:00 2004-12-14 08:30 2004-12-14 08:30 2004-12-14 09:20

> summ(arrival.time, by=gender)

For gender = male

Min. Median Mean Max.

2004-12-14 08:25 2004-12-14 08:30 2004-12-14 08:37 2004-12-14 09:20

For gender = female

Min. Median Mean Max.

2004-12-14 08:00 2004-12-14 08:30 2004-12-14 08:26 2004-12-14 08:50

[Figure: Distribution of arrival.time by gender, panels male and female]

The command summ works relatively well with time variables. In this case, it

demonstrates that there were more females than males. Females varied their arrival

time considerably. Quite a few of them arrived early because they had to prepare

the workshop room. Most males, who had no such responsibility, arrived just in time.

There was one male who was slightly late and one male who was late by almost one

hour.

Age and difftime

Computing age from birth date usually gives more accurate results than obtaining

age from direct interview. The following dataset contains subject's birth dates that

we can use to try computing age.

> zap()

> data(Sleep3)

> use(Sleep3)

> des()

Sleepiness among the participants in a workshop

No. of observations =15

Variable Class Description

1 id integer code

2 gender factor gender

3 dbirth Date Date of birth

4 sleepy integer Ever felt sleepy in workshop

5 lecture integer Sometimes sleepy in lecture

6 grwork integer Sometimes sleepy in group work

7 kg integer Weight in Kg

8 cm integer Height in cm


The date of analysis was 13th December 2004.

> age <- as.Date("2005-12-13") - dbirth

The variable 'age' has class difftime as can be seen by typing:

> class(age)

[1] "difftime"

The unit of age is 'days'.

> attr(age, "units")

[1] "days"

To display age:

> age

Time differences of 7488, 10557, 8934, 9405, 11518, 11982,

10741, 11122, 12845, 9266, 11508, 12732, 11912, 7315,

NA days

> summ(age)

Obs. mean median s.d. min. max.

15 10520 10930 1787.88 7315 12850

[Figure: sorted dot chart, Distribution of age, in days]

Note the one missing value. To convert age into years:

> age.in.year <- age/365.25

> attr(age.in.year, "units") <- "years"

> summ(age.in.year)

Obs. mean median s.d. min. max.

14 28.81 29.93 4.89 20.03 35.17


> summ(age.in.year, by=gender)

For gender = male

Obs. mean median s.d. min. max.

4 29.83 32.06 6.712 20.03 35.17

For gender = female

Obs. mean median s.d. min. max.

10 28.4 29.16 4.353 20.5 34.86

[Figure: Distribution of age.in.year by gender (years), panels male and female]

Note that there is a blank dotted line at the top of the female group. This is a missing value. The males have an obviously smaller sample size, with a range similar to the females', but most of their observations have relatively high values.


Exercises________________________________________________

In the Timing dataset:

Compute time since woke up to arrival at the workshop.

Plot time to bed, time woke up and arrival time on the same axis.


Chapter 7: An Outbreak Investigation:

Describing Time

An outbreak investigation is a commonly assigned task to an epidemiologist. This

chapter illustrates how the data can be described effectively. The time and date data types are not well prepared and must be further modified to suit the needs of the descriptive analysis.

On 25 August 1990, the local health officer in Supan Buri Province of Thailand

reported the occurrence of an outbreak of acute gastrointestinal illness on a national

handicapped sports day. Dr Lakkana Thaikruea and her colleagues went to

investigate. The dataset is called Outbreak. Most variable names are self-explanatory. Variables are coded as 0 = no, 1 = yes and 9 = missing/unknown for

three food items consumed by participants: 'beefcurry' (beef curry), 'saltegg' (salted

eggs) and 'water'. Also on the menu were eclairs, a finger-shaped iced cake of

choux pastry filled with cream. This variable records the number of pieces eaten by

each participant. Missing values were coded as follows: 80 = "ate but do not remember how much", while code 90 represents totally missing information. Some

participants experienced gastrointestinal symptoms, such as: nausea, vomiting,

abdominal pain and diarrhea. The age of each participant is recorded in years,

with 99 representing a missing value. The variables 'exptime' and 'onset' are the

exposure and onset times, which are in character format, or 'AsIs' in R terminology.

Quick exploration

Let's look at the data. Type the following at the R console:

> zap()

> data(Outbreak)

> use(Outbreak)

> des()

No. of observations =1094


Variable Class Description

1 id numeric

2 sex numeric

3 age numeric

4 exptime AsIs

5 beefcurry numeric

6 saltegg numeric

7 eclair numeric

8 water numeric

9 onset AsIs

10 nausea numeric

11 vomiting numeric

12 abdpain numeric

13 diarrhea numeric

> summ()

No. of observations = 1094

Var. name valid obs. mean median s.d. min. max.

1 id 1094 547.5 547.5 315.95 1 1094

2 sex 1094 0.66 1 0.47 0 1

3 age 1094 23.69 18 19.67 1 99

4 exptime

5 beefcurry 1094 0.95 1 0.61 0 9

6 saltegg 1094 0.96 1 0.61 0 9

7 eclair 1094 11.48 2 27.75 0 90

8 water 1094 1.02 1 0.61 0 9

9 onset

10 nausea 1094 0.4 0 0.49 0 1

11 vomiting 1094 0.38 0 0.49 0 1

12 abdpain 1094 0.35 0 0.48 0 1

13 diarrhea 1094 0.21 0 0.41 0 1

We will first define the cases, examine the timing in this chapter and investigate the

cause in the next section.

Case definition

It was agreed among the investigators that a case should be defined as a person who

had any of the four symptoms: 'nausea', 'vomiting', 'abdpain' or 'diarrhea'. A case

can then be computed as follows:

> case <- (nausea==1)|(vomiting==1)|(abdpain==1)|(diarrhea==1)

To incorporate this new variable into .data, we use the function label.var.

> label.var(case, "diseased")

The variable 'case' is now incorporated into .data as the 14th variable together

with a variable description.

> des()
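The logical OR construction used in the case definition above can be checked on a toy example (the values below are made up for illustration, not the outbreak data): a subject becomes a case as soon as any one of the four symptoms equals 1.

```r
# Toy illustration of the case definition: TRUE if any symptom is present
nausea   <- c(1, 0, 0, 0)
vomiting <- c(0, 0, 1, 0)
abdpain  <- c(0, 0, 0, 0)
diarrhea <- c(0, 1, 0, 0)
case <- (nausea==1)|(vomiting==1)|(abdpain==1)|(diarrhea==1)
case   # TRUE TRUE TRUE FALSE
```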


Timing of exposure

For the exposure time, first look at the structure of this variable.

> str(exptime)

Class 'AsIs' chr [1:1094] "25330825180000" "25330825180000"...

The values of this variable contain fourteen digits. The first four digits represent the

year in the Buddhist Era (B.E.) calendar, which is equal to A.D. + 543. The 5th and 6th digits represent the month, the 7th and 8th the day, the 9th and 10th the hour, the 11th and 12th the minute, and the 13th and 14th the second.

> day.exptime <- substr(exptime, 7, 8)

The R command substr (from substring) extracts parts of character vectors.

First, let's look at the day of exposure.

> tab1(day.exptime)

day.exptime :

Frequency %(NA+) cum.%(NA+) %(NA-) cum.%(NA-)

25 1055 96.4 96.4 100 100

<NA> 39 3.6 100.0 0 100

Total 1094 100.0 100.0 100 100

The day of exposure was 25th of August for all records (ignoring the 39 missing

values). We can extract the exposure time in a similar fashion.

> hr.exptime <- substr(exptime, 9, 10)

> tab1(hr.exptime)

All values seem acceptable, with the mode at 18 hours.

> min.exptime <- substr(exptime, 11, 12)

> tab1(min.exptime)

These are also acceptable, although note that most minutes have been rounded to

the nearest hour or half hour. The time of exposure can now be calculated.

> time.expose <- ISOdatetime(year=1990, month=8, day=

day.exptime, hour=hr.exptime, min=min.exptime, sec=0)

Then, the variable is labelled in order to integrate it into the default data frame.

> label.var(time.expose, "time of exposure")

> summ(time.expose)

Min. Median Mean Max.

1990-08-25 11:00 1990-08-25 18:00 1990-08-25 18:06 1990-08-25 21:00
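Putting the pieces together, the whole conversion, from a Buddhist Era timestamp string to an R date-time, can be sketched in base R (the sample value is one from the str() listing above; tz = "GMT" is used here only to make the sketch reproducible):

```r
# Sketch: parse a Buddhist Era timestamp such as "25330825180000"
be.stamp <- "25330825180000"
yr <- as.numeric(substr(be.stamp, 1, 4)) - 543   # B.E. - 543 = A.D., so 2533 -> 1990
mo <- as.numeric(substr(be.stamp, 5, 6))
dy <- as.numeric(substr(be.stamp, 7, 8))
hr <- as.numeric(substr(be.stamp, 9, 10))
mi <- as.numeric(substr(be.stamp, 11, 12))
stamp <- ISOdatetime(yr, mo, dy, hr, mi, sec = 0, tz = "GMT")
format(stamp, "%Y-%m-%d %H:%M")   # "1990-08-25 18:00"
```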

[Graph: 'Distribution of time of exposure' — subjects sorted by X-axis values, times from 11:00 to 21:00]

A dotplot can also be produced.

> dotplot(time.expose)

[Graph: dotplot, 'Distribution of time of exposure' — frequency by time (HH:MM), 11:00 to 21:00]

Almost all the exposure times were during dinner, between 6 and 7 o'clock in the evening, while only a few were during lunchtime.


Timing the onset

Exploration of the data reveals that three non-cases have non-blank onset times.

> sum(!is.na(onset[!case])) # 3

For simplicity we will make sure that the 'onset' variable is used exclusively for cases.

> onset[!case] <- NA

The extraction of symptom onset times is similar to that for time of exposure.

> day.onset <- substr(onset, 7, 8)

> tab1(day.onset)

day.onset :

Frequency %(NA+) cum.%(NA+) %(NA-) cum.%(NA-)

25 429 39.2 39.2 92.9 92.9

26 33 3.0 42.2 7.1 100.0

<NA> 632 57.8 100.0 0.0 100.0

Total 1094 100.0 100.0 100.0 100.0

Of the subjects interviewed, 57.8% had a missing value for 'onset' and consequently for the derived variable 'day.onset'. This was because the subject either had no symptoms or could not remember the time. Among those who reported the time, 429 had the onset on the 25th of August; the remaining 33 had it on the day after.

> hr.onset <- substr(onset, 9, 10)

> tab1(hr.onset)

> min.onset <- substr(onset, 11, 12)

> tab1(min.onset)

> time.onset <- ISOdatetime(year = 1990, month = 8, day =

day.onset, hour = hr.onset, min = min.onset, sec=0, tz="")

> label.var(time.onset, "time of onset")

> summ(time.onset)

[Graph: 'Distribution of time of onset' — subjects sorted by X-axis values, times from 15:00 to 09:00 the next day]

Min. Median Mean Max.

1990-08-25 15:00 1990-08-25 21:30 1990-08-25 21:40 1990-08-26 09:00

The upper part of the graph is empty due to the many missing values.

Perhaps a better visual display can be obtained with a dotplot.

> dotplot(time.onset)

[Graph: dotplot, 'Distribution of time of onset' — frequency by time (HH:MM), 15:00 to 09:00]

Both graphs show the classic single-peak epidemic curve, suggesting a single point

source. The earliest case had its onset at 3pm on August 25. The

majority of cases had the onset in the late evening. By the next morning, only a few

cases were seen. The last reported case occurred at 9am on August 26.

Incubation period

The analysis for incubation period is straightforward.

> incubation.period <- time.onset - time.expose

> label.var(incubation.period, "incubation period")

> summ(incubation.period)

Valid obs. mean median s.d. min. max.

462 3.631 3.5 1.28 1 14.5

> dotplot(incubation.period, las=1)

Incubation period has a median of 3.5 hours with right skewness.
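The subtraction of two POSIXct date-times yields a 'difftime' object, which is how the incubation period is obtained; a minimal sketch with two made-up times (tz = "GMT" is only for reproducibility):

```r
# Sketch: incubation period as the difference of two date-times, in hours
expose <- ISOdatetime(1990, 8, 25, 18, 0, 0, tz = "GMT")
onset  <- ISOdatetime(1990, 8, 25, 21, 30, 0, tz = "GMT")
incub  <- difftime(onset, expose, units = "hours")
as.numeric(incub)   # 3.5
```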

[Graph: dotplot, 'Distribution of incubation period' — frequency by hours, 2 to 14]

Paired plot

We now try putting the exposure and onset times in the same graph. A sorted graph

usually gives more information, so the whole data frame is now sorted.

> sortBy(time.expose)

With this large sample size, it is better to confine the graph to plot only complete

values for both 'time.expose' and 'time.onset'. This subset is kept as another data

frame called 'data.for.graph'.

> data.for.graph <- subset(.data, (!is.na(time.onset) &

!is.na(time.expose)), select = c(time.onset, time.expose))

> des(data.for.graph)

No. of observations =462

Variable Class Description

1 time.onset POSIXt

2 time.expose POSIXt

There are only two variables in this data frame. All the missing values have been

removed leaving 462 records for plotting.

> n <- nrow(data.for.graph)

> with(data.for.graph, {

plot(time.expose, 1:n, col="red", pch=20,

xlim = c(min(time.expose), max(time.onset)),

main = "Exposure time & onset of food poisoning outbreak",

xlab = "Time (HH:MM)", ylab = "Subject ID" )

} )


The plot pattern looks similar to that produced by 'summ(time.expose)'. The point

character, 'pch', is 20, which plots small solid circles, thus avoiding too much

overlapping of the dots. The limits on the horizontal axis are from the minimum of

time of exposure to the maximum of the time of onset, allowing the points of onset

to be put on the same graph. These points are added in the following command:

> with(data.for.graph, {

points(time.onset, 1:n, col="blue", pch=20)

} )

The two sets of points are paired by subjects. A line joining each pair is now drawn

by the segments command.

> with(data.for.graph, {

segments(time.expose, 1:n, time.onset, 1:n, col = "grey45")

} )

The complete list of built-in colour names used by R can be found from

colours().

A legend is inserted to make the graph self-explanatory.

> legend(x = ISOdatetime(1990,8,26,2,0,0), y = 150,

legend=c("Exposure time","Onset time","Incubation period"),

pch=c(20,20,-1), lty=c(0,0,1),col=c("red","blue","grey45"),

bg="lavender")

The left upper corner of the legend is located at the right lower quadrant of the

graph with the x coordinate being 2am and y coordinate being 150. The legend

consists of three items as indicated by the character vector. The point characters and

colours of the legend are specified in accordance with those inside the graph. The

last argument, incubation period, has 'pch' equal to -1 indicating no point is to be

drawn. The line type, 'lty', of exposure time and onset are 0 (no line) whereas that

for incubation period is 1 (solid line). The colours of the points and lines correspond to those in the graph. The background of the legend is given a lavender colour to cover any lines or points behind it.

Finally, some text describing the key statistic of this variable is placed inside the

plot area at 5pm and centred at 200.

> text(x = ISOdatetime(1990, 8, 25, 17, 0, 0), y = 200, labels

= "median incubation period = 3.5 hours", srt = 90)


The middle of the text is located at x = 17:00 and y = 200 in the graph. The

parameter 'srt' comes from 'string rotation'. In this case a rotation of 90 degrees

gives the best picture. Since the background colour is already grey, white text

would be suitable.

[Graph: 'Exposure time & onset of food poisoning outbreak' — red exposure points and blue onset points joined by grey incubation-period segments, Subject ID sorted by exposure time; annotated "Median incubation period = 3.5 hours"]

Analysis of timing data has finished. The main data frame .data is saved for

further use in the next chapter.

> save(.data, file = "Chapter7.Rdata")

Reference

Thaikruea, L., Pataraarechachai, J., Savanpunyalert, P., Naluponjiragul, U. 1995

An unusual outbreak of food poisoning. Southeast Asian J Trop Med Public Health

26(1):78-85.


Exercise_________________________________________________

We recode the original time variable 'onset' right from the beginning using the

command:

> onset[!case] <- NA

In the data frame that we are passing on to the next chapter, has the variable 'onset' been changed? If not, why not, and how can we make a permanent change to the data

frame that we are using? Note: the built-in Outbreak dataset cannot be modified.


Chapter 8: An Outbreak Investigation:

Risk Assessment

The next step in analysing the outbreak is to deal with the level of risk. However,

let's first load the data saved from the preceding chapter.

> zap()

> load("Chapter7.Rdata")

> ls(all=TRUE) # .data is there

> search() # No dataset in the search path

> use(.data)

> search() # .data is ready for use

> des()

Recoding missing values

There are a number of variables that need to be recoded. The first variable to recode

is 'age'. The Epicalc command recode is used here. More details on this function

are given in chapter 10.

> recode(var = age, old.value = 99, new.value = NA)

The variables with the same recoding scheme, 9 to missing value, are 'beefcurry',

'saltegg' and 'water'. They can be recoded together in one step as follows:

> recode(vars = c(beefcurry, saltegg, water), 9, NA)

The three variables can also be changed to factors with value labels attached.

> beefcurry <- factor(beefcurry, labels=c("No","Yes"))

> saltegg <- factor(saltegg, labels=c("No","Yes"))

> water <- factor(water, labels=c("No","Yes"))

> label.var(beefcurry, "Beefcurry")

> label.var(saltegg, "Salted egg")

> label.var(water, "Water")
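The same recoding idea can be sketched in base R on a toy vector (the values below are illustrative, not the real data); Epicalc's recode performs the equivalent replacement inside .data:

```r
# Base-R sketch of the recoding: set the missing code 9 to NA, then
# convert to a labelled factor
water <- c(0, 1, 9, 1, 0)               # illustrative values
water[water == 9] <- NA                 # 9 means missing/unknown
water <- factor(water, levels = c(0, 1), labels = c("No", "Yes"))
table(water, useNA = "ifany")
```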

For 'eclair', the code for a totally missing value is 90. This should be recoded first, and the data frame then re-checked for missing values.

> recode(eclair, 90, NA)

> summ()


All variables look fine except 'eclair', which still contains the value 80 representing "ate but do not remember how much". We will analyse its relationship with 'case' by

considering it as an ordered categorical variable.

At this stage, cross tabulation can be performed by using the Epicalc command

tabpct.

> tabpct(eclair, case)

[Graph: mosaic plot, 'Distribution of diseased by eclair' — column widths proportional to the frequency of each eclair count (0, 0.5, 1, 2, ..., 19, 20, 80), shaded by diseased FALSE/TRUE]

The width of the columns of the mosaic graph denotes the relative frequency of that

category. The highest frequency is 2 pieces followed by 0 and 1 piece. The other

numbers have relatively low frequencies, particularly the 5 records where 'eclair'

was coded as 80.

There is a tendency of increasing red area or attack rate from left to right indicating

that the risk was increased when more pieces of eclair were consumed. We will use

the distribution of these proportions to guide our grouping of eclair consumption.

The first column of zero consumption has a very low attack rate, therefore it should

be a separate category. Only a few took half a piece and this could be combined

with those who took only one piece. Persons consuming 2 pieces should be kept as

one category as their frequency is very high. Others who ate more than two pieces

should be grouped into another category. Finally, those coded as '80' will be

dropped due to the unknown amount of consumption as well as its low frequency.

> eclairgr <- cut(eclair, breaks = c(0, 0.4, 1, 2, 79),

include.lowest = TRUE, labels=c("0","1","2",">2"))

The argument 'include.lowest=TRUE' indicates that 0 eclair must be included

in the lowest category.
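The behaviour of cut() with these breaks can be verified on a few illustrative values (not the real data): intervals are right-closed, so 1 falls into the "1" group and 2 into the "2" group.

```r
# Sketch: how cut() assigns illustrative eclair counts to the four groups
eclair.toy <- c(0, 0.5, 1, 2, 3, 6)
grp <- cut(eclair.toy, breaks = c(0, 0.4, 1, 2, 79),
           include.lowest = TRUE, labels = c("0", "1", "2", ">2"))
as.character(grp)   # "0" "1" "1" "2" ">2" ">2"
```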


It is a good practice to label the new variable in order to describe it as well as put it

into .data.

> label.var(eclairgr, "pieces of eclair eaten")

> tabpct(eclairgr, case)

======== lines omitted =========

Row percent

diseased

pieces of eclai FALSE TRUE Total

0 279 15 294

(94.9) (5.1) (100)

1 54 51 105

(51.4) (48.6) (100)

2 203 243 446

(45.5) (54.5) (100)

>2 38 89 127

(29.9) (70.1) (100)

======== lines omitted =========

[Graph: mosaic plot, 'Distribution of diseased by pieces of eclair eaten' — groups 0, 1, 2 and >2, shaded by diseased FALSE/TRUE]

The attack rate, or percentage of diseased, in each category of exposure (shown in brackets in the TRUE column) increases from 5.1% among those who did not eat any eclairs to 70.1% among the heavy eaters of eclair. The graph output is similar to the preceding one except that the groups are more concise.

We now have a continuous variable of 'eclair' and a categorical variable of 'eclairgr'.

The next step is to create a binary exposure for eclair.

> eclair.eat <- eclair > 0

> label.var(eclair.eat, "eating eclair")

This binary exposure variable is now similar to the others, i.e. 'beefcurry', 'saltegg'

and 'water'.


Exploration of age and sex

Simple exploration can be done by using the summ and dotplot commands on

'age', such as:

> summ(age); dotplot(age)

[Graph: dotplot, 'Distribution of age' — frequency by age, 0 to 60 years]

The age distribution classified by sex can easily be done via:

> sex <- factor(sex, labels=c("Female","Male"))

> summ(age, by = sex)

> dotplot(age, by = sex)

[Graph: dotplot, 'Distribution of age by Sex' — Female and Male panels, ages 0 to 60 years]

An alternative is to draw a population pyramid of age and sex, using the Epicalc

function pyramid, as follows:

> pyramid(age, sex)

[Graph: population pyramid — Female counts on the left, Male counts on the right, in 5-year age groups from [0,5] to (55,60]]

From the resulting graph, males aged 10-20 years predominated. The

binwidth can also be changed to have fewer age groups.

> pyramid(age, sex, binwidth = 15)

The table generated by the pyramid function can also be shown, as follows:

> pyramid(age, sex, printTable=TRUE)

Tabulation of age by sex (frequency).

sex

age Female Male

[0,5] 1 1

(5,10] 12 7

(10,15] 170 217

(15,20] 81 223

(20,25] 25 112

(25,30] 41 54

(30,35] 23 20

(35,40] 7 10

(40,45] 5 8

(45,50] 3 12

(50,55] 0 1

(55,60] 0 1

The percentage (for each sex) can also be shown.

> pyramid(age, sex, printTable=TRUE, percent="each")


Tabulation of age by sex (percentage of each sex).

Female Male

[0,5] 0.272 0.150

(5,10] 3.261 1.051

(10,15] 46.196 32.583

(15,20] 22.011 33.483

(20,25] 6.793 16.817

(25,30] 11.141 8.108

(30,35] 6.250 3.003

(35,40] 1.902 1.502

(40,45] 1.359 1.201

(45,50] 0.815 1.802

(50,55] 0.000 0.150

(55,60] 0.000 0.150

Finally, both the table and age group can be saved as R objects for future use.

> (age.tab <- pyramid(age, sex))

> ageGrp <- age.tab$ageGroup

> label.var(ageGrp, "Age Group")

> des()

> des("age*")

No. of observations =1094

Variable Class Description

3 age numeric

20 ageGrp factor Age Group

The des function can also display variables using wild card matching.

> des("????????")

No. of observations =1094

Variable Class Description

11 vomiting numeric

13 diarrhea numeric

18 eclairgr factor pieces of eclair eaten

We have spent some time learning these features of Epicalc for data exploration (a

topic of the next chapter). Let's return to the analysis of risk, which is another main

feature of Epicalc.

Comparison of risk: Risk ratio and attributable risk

There are basically two methods for comparing the risk of disease in different

exposure groups.

Risk ratio – RR (also called relative risk) is the ratio of the risk of getting the disease among the exposed to that among the non-exposed. It indicates how many times the risk would increase had the subject changed status from non-exposed to exposed. The increase is measured in folds, hence the mathematical notation of a 'multiplicative model'.


Risk difference, on the other hand, suggests the amount of risk gained or lost had the

subject changed from non-exposed to exposed. The increase is absolute, and has the

mathematical notation of an additive model.

The Epicalc command cs is used to analyse such relationships.

> cs(case, eclair.eat)

eating eclair

case FALSE TRUE Total

FALSE 279 300 579

TRUE 15 383 398

Total 294 683 977

Rne Re Rt

Risk 0.05 0.56 0.41

Estimate Lower95 Upper95

Risk difference (attributable risk) 0.51 0.44 0.58

Risk ratio 10.99 8 15.1

Attr. frac. exp. -- (Re-Rne)/Re 0.91

Attr. frac. pop. -- (Rt-Rne)/Rt*100 % 87.48

'Rne', 'Re' and 'Rt' are the risks in the non-exposed, exposed and the total

population, respectively. 'Rne' in this instance is 15/294 = 0.05. Similarly 'Re' is

383/683 = 0.56 and 'Rt' is 398/977 = 0.41. The risk difference is 'Re' - 'Rne', an

absolute increase of 51%, whereas the risk ratio is 'Re' / 'Rne', an increase of 11 fold.

The risk of getting the disease among those eating eclairs could have been reduced

by 91% and the risk among all participants in the sports carnival could have been

reduced by 87.5% had they not eaten any eclairs.
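The figures in the cs output can be reproduced by hand from the 2-by-2 table; a base-R sketch using the cell counts shown above:

```r
# Sketch: risks, risk difference and risk ratio from the 2-by-2 table
case.exp <- 383; noncase.exp <- 300    # exposed (ate eclair)
case.une <- 15;  noncase.une <- 279    # non-exposed
Re  <- case.exp / (case.exp + noncase.exp)   # risk among exposed, 383/683
Rne <- case.une / (case.une + noncase.une)   # risk among non-exposed, 15/294
round(Re - Rne, 2)          # risk difference, 0.51
round(Re / Rne, 2)          # risk ratio, 10.99
round((Re - Rne) / Re, 2)   # attributable fraction exposed, 0.91
```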

The risk ratio is an important indicator for causation. A risk ratio above 10 would

strongly suggest a causal relationship.

The risk difference has more public health implications than the risk ratio. A high

risk ratio may not be of public health importance if the disease is very rare. The risk

difference, on the other hand, measures direct health burden and the need for health

services. Those who ate eclairs had a high chance (55%) of getting symptoms. A

reduction of 51% substantially reduces the burden on the sports event attendees and

the hospital services.

The attributable fraction of the population indicates that the number of cases could have been reduced by 87% had the eclairs not been contaminated. This outbreak was transient; compare this with a chronic, overwhelming problem such as cardiovascular disease or cancer, where even a relatively low fraction of risk attributable to tobacco in the population, say 20%, could lead to a huge amount of resources being spent on health services.


The attributable fraction among the exposed has little to do with the level of disease burden in the population. It is equal to 1 - 1/RR, and is therefore just another way to express the risk ratio.

We have established eclair as a cause of the disease. Some interventions can prevent such diseases: vaccination, education, law enforcement and improvement of the environment. In our example, let's assume that not eating eclairs is a preventive process.

> eclair.no <- !eclair.eat # The ! sign means "NOT"

> cs(case, eclair.no)

eclair.no

case FALSE TRUE Total

FALSE 300 279 579

TRUE 383 15 398

Total 683 294 977

Rne Re Rt

Risk 0.56 0.05 0.41

Estimate Lower95 Upper95

Risk difference (absolute change) -0.51 -0.44 -0.58

Risk ratio 0.09 0.12 0.07

protective efficacy (%) 90.9

Number needed to treat (NNT) 1.96

The risk among the exposed (not eating eclair) is lower than that among the non-

exposed (eating eclair). The risk difference changes sign to negative. The risk ratio

reciprocates to a small value of 0.09. Instead of displaying the attributable fraction

exposure and attributable fraction population, the command shows protective

efficacy and number needed to treat (NNT).

From the protective efficacy value, the exposure to the prevention program would

have reduced the risk of the eclair eater (unexposed under this hypothetical

condition) by 90.9%. NNT is just the reciprocal of the negative of risk difference. A

reduction of risk of 0.51 comes from an intervention on one individual. A reduction

of 1 would need to come from an intervention on 1/0.51 or 1.96 individuals. An

intervention with a high NNT would need to be given to many individuals just to avert

one unwanted event. The lowest possible level of NNT is 1 or perfect prevention

which also has 100% protective efficacy. NNT is one measure of the worthiness of an intervention (whether prevention or treatment). To avert the

same type of unwanted event, an intervention with low NNT is preferred to another

with high NNT, although the cost must also be taken into account.
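Protective efficacy and NNT follow directly from the same risks; a short base-R sketch using the counts from the table above:

```r
# Sketch: protective efficacy and number needed to treat
risk.prevented <- 15 / 294     # risk among those "exposed" to prevention (no eclair)
risk.eaters    <- 383 / 683    # risk among eclair eaters
pe  <- 100 * (1 - risk.prevented / risk.eaters)   # protective efficacy (%)
nnt <- 1 / (risk.eaters - risk.prevented)         # reciprocal of |risk difference|
round(pe, 1)    # 90.9
round(nnt, 2)   # 1.96
```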

Dose-response relationship

One of the criteria for causation is the evidence of a dose-response relationship. If a

higher dose of exposure is associated with a higher level of risk in a linear fashion,

then the exposure is likely to be the cause.


We now explore the relationship between the risk of getting the disease and the

number of eclairs consumed.

> cs(case, eclairgr)

eclairgr

case 0 1 2 >2

FALSE 279 54 203 38

TRUE 15 51 243 89

Absolute risk 0.05 0.49 0.54 0.7

Risk ratio 1 9.52 10.68 13.74

lower 95% CI 6.6 8.04 10.11

upper 95% CI 13.72 14.19 18.66

Chi-squared = 237.12 , 3 d.f., P value = 0

Fisher's exact test (2-sided) P value = 0
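The absolute risks and risk ratios in this output can be recomputed directly from the table; a base-R sketch:

```r
# Sketch: per-group attack rates and risk ratios relative to the zero group
cases    <- c(15, 51, 243, 89)     # TRUE row of the table above
noncases <- c(279, 54, 203, 38)    # FALSE row
risk <- cases / (cases + noncases)
round(risk, 2)                  # 0.05 0.49 0.54 0.70
rr <- round(risk / risk[1], 2)
rr                              # 1.00 9.52 10.68 13.74
```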

[Graph: 'Risk ratio from a cohort study' — risk ratio (log scale) by eclairgr: 1 at group 0, then 9.52 (6.6, 13.72), 10.68 (8.04, 14.19) and 13.74 (10.11, 18.66)]

The risk ratio increases as the dose of exposure to eclairs increases. The step from

not eating to the first group (up to one piece) is wide whereas further increases are

shown at a flatter slope. The p values in the output are both zero. In fact, they are

not really zero, but have been rounded to three decimal places. By default, odds ratios and relative risks are rounded to two decimal places and p-values to three. See 'help(cs)' for more details on the arguments.

Before finishing this chapter, the current data is saved for further use.

> save(.data, file = "Chapter8.Rdata")


Exercise_________________________________________________

Compute the attributable risk and risk ratio of 'beefcurry', 'saltegg' and 'water'. Are

these statistically significant? If so, what are the possible reasons?


Chapter 9: Odds Ratios, Confounding and

Interaction

Having assessed various parameters of risk of participants in the outbreak in the last

chapter, we now focus on confounding among various types of foods.

The assessment now shifts from the level of risk to the relationships among the possible causes. Let's first load the data saved from the preceding chapter.

> zap()

> load("Chapter8.Rdata")

> use(.data)

Odds and odds ratio

Odds has a meaning closely related to probability. If 'p' is the probability of an event, then p/(1-p) is the odds. Conversely, the probability equals odds/(odds+1).
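This back-and-forth between probability and odds can be verified numerically, using the case counts reported below (469 cases out of 1094 subjects):

```r
# Sketch: probability -> odds -> back to probability
p <- 469 / 1094              # probability of being a case
odds <- p / (1 - p)          # = 469/625
round(odds, 4)               # 0.7504
round(odds / (odds + 1), 3)  # 0.429, the original probability
```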

> tab1(case)

Frequency Percent

FALSE 625 57.1

TRUE 469 42.9

Total 1094 100.0

The probability of being a case is 469/1094 or 42.9%. In this situation where non-

cases are coded as 0 and cases as 1, the probability is

> mean(case)

On the other hand the odds of being a case is 469/625 = 0.7504, or

> mean(case)/(1 - mean(case))

Note that when there are missing values in the variable, mean must be given the argument 'na.rm = TRUE'. For example, the odds of eating eclairs is:

> m.eclair <- mean(eclair.eat, na.rm = TRUE)

> m.eclair /(1 - m.eclair)

[1] 2.323129


While a probability always ranges from 0 to 1, an odds ranges from 0 to infinity.

For a cohort study we may compute the ratios of the odds of being a case among the

exposed vs the odds among the non-exposed.

> table(case, eclair.eat)

eclair.eat

case FALSE TRUE

FALSE 279 300

TRUE 15 383

The conventional method for computing the odds ratio is therefore:

> (383/300)/(15/279)

[1] 23.746

This is the same value as the ratio of the odds of being exposed among cases and

among non-cases.

> (383/15)/(300/279)

It is also equal to the cross-product ratio.

> (383 * 279)/(300 * 15)

Epicalc has a function cc that produces the odds ratio and its 95% confidence interval, performs the chi-squared and Fisher's exact tests, and draws an explanatory graph.

> cc(case, eclair.eat)

eating eclair

case FALSE TRUE Total

FALSE 279 300 579

TRUE 15 383 398

Total 294 683 977

OR = 23.68

95% CI = 13.74 43.86

Chi-squared = 221.21 , 1 d.f. , P value = 0

Fisher's exact test (2-sided) P value = 0

The value of odds ratio from the cc function is slightly different from the

calculations that we have done. This is because the 'cc' function uses the exact

method to calculate the odds ratio.

[Graph: 'Odds ratio from prospective/X-sectional study' — odds of outcome (log scale, 1/16 to 1) for the non-exposed and exposed groups; OR = 23.68, 95% CI = 13.74, 43.86]

The vertical lines of the resulting graph show the estimate and 95% confidence

intervals of the two odds of being diseased, non-exposed on the left and exposed on

the right, computed by the conventional method. The size of the box at the estimate

reflects the relative sample size of each subgroup. There were more exposed than

non-exposed. The non-exposed group has its estimate slightly below 1/16, since its real value is 15/279. The exposed group estimate is 383/300, or slightly higher than 1. The latter value is over 23 times the former.

> fisher.test(table(case, eclair.eat))$estimate

odds ratio

23.681

> fisher.test(table(case, eclair.eat))$conf.int

[1] 13.736 43.862

attr(,"conf.level")

[1] 0.95

Confounding and its mechanism

For 'saltegg', the odds ratio can be similarly computed.

> cc(case, saltegg)

saltegg

case 0 1 Total

FALSE 66 554 620

TRUE 21 448 469

Total 87 1002 1089

OR = 2.54

95% CI = 1.51 4.44

Chi-squared = 13.82 , 1 d.f. , P value = 0

Fisher's exact test (2-sided) P value = 0


The total number of valid records for computation is 1,089, which is higher than the 977 of the

cross-tabulation results between 'case' and 'eclair.eat'. The value of the odds ratio is

not as high but is of statistical significance. Similar to the analysis of the odds ratio

for 'eclair', the size of the box on the right is much larger than that on the left

indicating a large proportion of exposure.

Both eclairs and salted eggs have significant odds ratios and were consumed by a

large proportion of participants. Let's check the association between these two

variables.

> cc(saltegg, eclair.eat, graph = FALSE)

eating eclair

saltegg FALSE TRUE Total

0 53 31 84

1 241 647 888

Total 294 678 972

OR = 4.58

95% CI = 2.81 7.58

Chi-squared = 47.02 , 1 d.f. , P value = 0

Fisher's exact test (2-sided) P value = 0

There might be only one real cause, the other being merely confounded with it. In other

words, those participants who ate salted eggs also tended to eat eclairs. Stratified

analysis gives the details of confounding as follows.

> mhor(case, saltegg, eclair.eat)

[Graph: stratified prospective/X-sectional analysis, Outcome = case, Exposure = saltegg — upper line eclair.eat TRUE: OR = 1.07 (0.48, 2.36); lower line eclair.eat FALSE: OR = 0.87 (0.22, 5); MH-OR = 1.02 (0.54, 1.93); homogeneity test P value = 0.787; odds plotted on a log scale from 1/32 to 2]

Stratified analysis by eclair.eat

OR lower lim. upper lim. P value

eclair.eat FALSE 0.874 0.224 5.00 0.739

eclair.eat TRUE 1.073 0.481 2.36 0.855

M-H combined 1.023 0.541 1.93 0.944

M-H Chi2(1) = 0 , P value = 0.944

Homogeneity test, chi-squared 1 d.f.=0.07, P value = 0.787

The above analysis of association between the disease and salted egg is stratified by

level of eclair consumption based on records that have valid values of 'case',

'eclair.eat' and 'saltegg'. There are two main parts of the results. The first part

concerns the odds ratio of the exposure of interest in each stratum defined by the

third variable, in this case 'eclair.eat' as well as the odds ratio and chi-squared

statistics computed by Mantel-Haenszel's technique. The second part suggests

whether the odds ratio of these strata can be combined. We will focus on the first

part at this stage and come back to the second part later.

In both strata, the odds ratios are close to 1 and are not statistically significant. The

slopes of the two lines are rather flat. The Mantel-Haenszel (MH) odds ratio, also

called the adjusted odds ratio, is the weighted average of the two odds ratios, which

is also close to 1. Both the stratum-specific odds ratios and the MH odds ratio are

not significantly different from 1 but the crude odds ratio is significantly different.

The distortion of the crude result from the adjusted result is called confounding.
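The Mantel-Haenszel weighted average described above can be illustrated with base R's mantelhaen.test on a small 2 x 2 x 2 array (the counts below are made up for illustration, not the outbreak data):

```r
# Toy sketch of the Mantel-Haenszel combined odds ratio over two strata
tab <- array(c(10, 20, 30, 40,    # stratum 1: cases/controls by exposure
               15, 15, 25, 45),   # stratum 2
             dim = c(2, 2, 2))
mh <- mantelhaen.test(tab)
round(unname(mh$estimate), 2)   # MH common odds ratio
```

The MH estimate is sum(a*d/n) over sum(b*c/n) across strata, so each stratum contributes in proportion to its size.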

The mechanism of this confounding can be explained with the above graph. The

upper line of the graph denotes the subset or stratum of subjects who had eaten

eclairs whereas the lower line represents those who had not. The upper line lies far

above the lower line meaning that the subset of eclair eaters had a much higher risk

than the non-eaters. The distance between the two lines represents a 16- to 32-fold difference in odds. It is important to note that the distribution of subjects in this study is

imbalanced in relation to eclair and salted eggs consumption. On the right-hand side

(salted egg consumers), there are a lot more eclair eaters (upper box) than non-eaters

(lower box). The centre of this right-hand side then tends to be closer to the location

of the upper box. In contrast, on the left-hand side, or those not consuming salted

eggs, the number of eclair non-consumers (as represented by the size of the lower

box) is higher than that of the consumers. The centre of the left-hand side therefore

tends to lie closer to the lower box. In other words, when the two strata are

combined, the (weighted average) odds of diseased among the salted egg consumers

is therefore closer to the upper box. The opposite is true for the left-hand side where

the weighted average odds of getting the disease should be closer to the lower box.

A higher average odds on the right-hand side leads to the crude odds ratio being

higher than one. This crude odds ratio misleads us into thinking that salted egg is

another cause of the disease where in fact it was just confounded by eclairs. The

level of confounding is noteworthy only if both of the following two conditions are

met.


Firstly, the stratification factor must be an independent risk factor. Secondly, there

must be a significant association between the stratification factor and the exposure

of interest.
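Both conditions can be illustrated numerically. As a sketch (using hypothetical counts, not the outbreak data), base R's mantelhaen.test accepts a 2 x 2 x 2 array of case-control counts by exposure within each stratum and returns the MH combined odds ratio:

> tab <- array(c(20, 10, 5, 40, 60, 30, 15, 120), dim=c(2,2,2))

> mantelhaen.test(tab)

Here each stratum has an odds ratio of (20 x 40)/(10 x 5) = 16 and (60 x 120)/(30 x 15) = 16 respectively, so the MH weighted average is also 16.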

Now we check whether the relationship between the disease and eclair is

confounded by salted egg.

> mhor(case, eclair.eat, saltegg)

Stratified analysis by saltegg

OR lower lim. upper lim. P value

saltegg 0 19.3 4.68 117.9 6.06e-07

saltegg 1 24.8 13.56 49.7 2.42e-51

M-H combined 24.3 13.96 42.4 8.12e-49

M-H Chi2(1) = 215.63 , P value = 0

Homogeneity test, chi-squared 1 d.f. = 0.11 , P value = 0.736

[Figure: stratified prospective/X-sectional analysis plot. Outcome = case, Exposure = eclair.eat; x-axis: Non-exposed, Exposed; y-axis: odds of outcome on a log scale (1/32 to 2). saltegg1: OR = 24.78 (13.56, 49.71); saltegg0: OR = 19.31 (4.68, 117.88); MH-OR = 24.32 (13.96, 42.36); homogeneity test P value = 0.736.]

Stratified by 'saltegg', the odds ratio of eclair.eat in both strata (19.3 and 24.8) and

the MH odds ratio (24.3) are strong and close to the crude odds ratio (23.68).

Graphically, the two lines of strata are very close together indicating that 'saltegg' is

not an independent risk factor. In each of the exposed and non-exposed groups, the

odds for disease are close and the weighted average odds is therefore not influenced

by the number of subjects. Thus, a variable that is not an independent risk factor

cannot confound another exposure variable.


Interaction and effect modification

Let's analyse the association between eating eclairs and developing acute

gastrointestinal illness again but now using 'beefcurry' as the stratification factor.

> mhor(case, eclair.eat, beefcurry)

Stratified analysis by beefcurry

OR lower lim. upper lim. P value

beefcurry 0 5.33 1.53 21.7 3.12e-03

beefcurry 1 31.63 16.49 68.1 4.79e-56

M-H combined 24.08 13.85 41.9 1.39e-48

M-H Chi2(1) = 214.56 , P value = 0

Homogeneity test, chi-squared 1 d.f. = 7.23 , P value = 0.007

[Figure: stratified prospective/X-sectional analysis plot. Outcome = case, Exposure = eclair.eat; x-axis: Non-exposed, Exposed; y-axis: odds of outcome on a log scale (1/32 to 1). beefcurry1: OR = 31.63 (16.49, 68.11); beefcurry0: OR = 5.33 (1.53, 21.71); MH-OR = 24.08 (13.85, 41.89); homogeneity test P value = 0.007.]

The slopes of the odds ratios of the two strata cross each other. Among those who

had not eaten beef curry, the odds of getting the disease among those not eating

eclair was slightly below 1 in 6. The odds increase to over 1 in 2 for those who ate

eclairs only. This increase is 5.33 fold or an odds ratio of 5.33. In contrast, the

baseline odds among those eating beef curry only (left point of the green line) is

somewhere between 1 in 32 and 1 in 16, which is the lowest risk group in the graph.

The odds, however, step up very sharply to over 1 among the subjects who had

eaten both eclairs and beef curry. The homogeneity test in the last line concludes

that the odds ratios are not homogeneous. In statistics, this is called significant

interaction. In epidemiology, the effect of 'eclair' was modified by 'beefcurry'.

Eating beef curry increased the harmful effect of eclair or increased the

susceptibility of the person to get ill by eating eclairs.

We now check the effect of 'beefcurry' stratified by 'eclair.eat'.


> mhor(case, beefcurry, eclair.eat)

Stratified analysis by eclair.eat

OR lower lim. upper lim. P value

eclair.eat FALSE 0.376 0.111 1.47 0.1446

eclair.eat TRUE 2.179 1.021 4.83 0.0329

M-H combined 1.401 0.769 2.55 0.2396

M-H Chi2(1) = 1.38 , P value = 0.24

Homogeneity test, chi-squared 1 d.f. = 6.78 , P value = 0.009

[Figure: stratified prospective/X-sectional analysis plot. Outcome = case, Exposure = beefcurry; x-axis: Non-exposed, Exposed; y-axis: odds of outcome on a log scale (1/32 to 2). eclair.eatTRUE: OR = 2.18 (1.02, 4.83); eclair.eatFALSE: OR = 0.38 (0.11, 1.47); MH-OR = 1.4 (0.77, 2.55); homogeneity test P value = 0.009.]

The effect of beef curry among those not eating eclairs tends to be protective but

without statistical significance. The odds ratio among those eating eclairs is 2.18

with statistical significance. The homogeneity test also concludes that the two odds

ratios are not homogeneous. The stratification factor eclair has modified the effect

of beef curry from a non-significant protective factor to a significant risk factor.

Tabulation and stratified graphs are very useful in explaining confounding and

interaction. However, they are limited to only two or three variables. For a dataset

with a larger number of variables, logistic regression is needed. We put the new

variable 'eclair.eat' into .data by using label.var and save the whole data

frame for future use with logistic regression.

> label.var(eclair.eat, "ate at least some eclair")

> save(.data, file="chapter9.Rdata")

Exercise_________________________________________________

Analyse the effect of drinking water on the odds of the disease. Check whether it is

confounded with eating eclairs or other foods. Check for interaction.


Chapter 10: Basic Data Management

Data cleaning

The previous datasets were relatively clean. Let's look at an uncleaned dataset that

came from a family planning clinic in the mid-1980s. The coding scheme can be

seen from

> help(Planning)

Cleaning will enable you to learn Epicalc functions for data management.

> zap()

> data(Planning)

> des(Planning)

Note that all of the variable names are in upper case. To convert them to lower case

simply type the following command.

> names(Planning) <- tolower(names(Planning))

> use(Planning)

> summ()

No. of observations = 251

Var. name Obs. mean median s.d. min. max.

1 id 251 126 126 72.6 1 251

2 age 251 27.41 27 4.77 18 41

3 relig 251 1.14 1 0.59 1 9

4 ped 251 3.83 3 2.32 0 9

5 income 251 2.84 2 2.38 1 9

6 am 251 20.66 20 5.83 15 99

7 reason 251 1.55 1 0.86 1 9

8 bps 251 137.74 110 146.84 0 999

9 bpd 251 97.58 70 153.36 0 999

10 wt 251 52.85 51.9 11.09 0 99.9

11 ht 251 171.49 154 121.82 0 999

Identifying duplication ID

Let's look more closely at the 'id' object. This variable represents the unique

identification number for the subject.


> summ(id)

Valid obs. mean median s.d. min. max.

251 125.996 126 72.597 1 251

[Figure: "Distribution of id" dot plot, subjects sorted by X-axis values; id ranges from 0 to 250.]

The graph looks quite uniformly distributed. However, the mean of id (125.996) is

not equal to what it should be.

> mean(1:251)

[1] 126

There must be some duplication and/or some gaps within these id numbers.

Looking carefully at the graph, there is no noticeable irregularity.

To check for duplication, we can type the following:

> any(duplicated(id))

[1] TRUE

The result tells us that there is in fact at least one duplicated id. To specify the id of

the duplicates type:

> id[duplicated(id)]

[1] 215

We see that id = 215 has one duplicate. Further inspection of the data reveals that

the record numbers are 215 and 216. These two records should be investigated as to

which one is incorrect. One of them should be changed to 'id' = 216.
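The two records can be listed for inspection with standard subscripting (a minimal sketch; the output is not shown here):

> .data[.data$id == 215, ]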


Missing values

This file is not ready for analysis yet. As is often the case, the data were coded

using outlier numbers to represent missing codes.

We first explore the data with boxplots.

> boxplot(.data, horizontal=T, las=1, main="Family Planning

Clinic")

[Figure: "Family Planning Clinic" horizontal box plots of the 11 variables (id to ht); x-axis 0 to 1000.]

The outlier values of 'bps', 'bpd' and 'ht' are rather obvious. These are confirmed

with the numerical statistics from the summ command seen earlier in this chapter.

In this dataset, the value '9' represents a missing code for religion (3rd variable),

patient education (4th variable), income group (5th variable) and reason for family

planning (7th variable).

There are four methods of changing values to missing (NA). The first method is

based on the function replace, which handles one vector or variable at a time.

The second uses extraction and indexing with subscript '[]'. This method can

handle either a vector or array (several variables at the same time). The third

method is based on the transform command. These three methods use

commands that are native to R. The fourth method uses the recode command

from Epicalc, which is by far the simplest method.


We will use the replace function for the 3rd variable, 'relig', extraction and

indexing for the 4th to 7th variables, 'ped', 'am', 'income' and 'reason', transform

for the 'wt' variable, and finally recode for the remaining necessary variables.

Replacing values in a data frame

We wish to replace all occurrences of 9 with the missing value 'NA'. The replace

function handles only one variable at a time.

> summ(relig)


> replace(relig, relig==9, NA) -> .data$relig

There are three essential arguments to the replace function; the target vector, the

index vector and the value. See the online help for more detailed information on its

usage.

The first argument, 'relig', is the target vector containing values to be replaced.

The second argument, 'relig==9', is the index vector specifying the condition, in

this case, whenever 'relig' is equal to 9. The final argument, 'NA', is the new

value that will replace the old value of 9. Thus, whenever 'relig' is equal to 9, it

will be replaced with 'NA'.

Note that the index vector, or condition for change, need not be the same vector as

the target vector. For example, one may want to coerce the value of diastolic blood

pressure to be missing if the systolic blood pressure is missing.
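A sketch of such a command, using replace with an index vector built from a different variable, might be:

> replace(.data$bpd, is.na(.data$bps), NA) -> .data$bpd

Here 'bpd' is the target vector but the condition is based on 'bps'.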

Secondly, replace is a function, not a command. It has no effect on the original

values. The values obtained from this function must be assigned to the original

values using the assignment operators, '->' or '<-'.

After the assignment above, the variable has changed.

> summ(.data$relig)

Obs. mean median s.d. min. max.

250 1.108 1 0.31 1 2

There was one subject with a missing value leaving 250 records for statistical

calculations. The remaining subjects have values of one and two only for 'religion'.

Changing values with extraction and indexing

The first variable to be replaced with this method is the 6th one, 'am', which denotes

age at first marriage.


> summ(.data$am)

Valid obs. mean median s.d. min. max.

251 20.657 20 5.83 15 99

The value 99 represents a missing value code during data entry. Note that the mean,

median and standard deviation are not correct due to this coding of missing values.

Instead of using the previous method, the alternative is:

> .data$am[.data$am==99] <- NA

With the same three components of the target vector, conditions and replacing

value, this latter command is slightly more straightforward than the above one using

the replace function.

This method can also be used for many variables with the same missing code. For

example, the 4th, 5th and 7th variables all use the value 9 as the code for a missing

value.

> .data[,c(4,5,7)][.data[,c(4,5,7)]==9] <- NA

All the 4th, 5th, and 7th variables of .data that have a value of 9 are replaced with

'NA'. The above command can be explained as follows. There are two layers of

subsets of .data marked by '[ ]'.

.data[,c(4,5,7)] means extract all rows of columns 4, 5 and 7, ('ped',

'income' and 'reason').

'[.data[,c(4,5,7)]==9]'means the subset of each particular column where

the row is equal to 9.

' <- NA' means the expression on the left is to be assigned a missing value (NA).

Thus, for these three variables, any element in which the value equals 9 will be

replaced by 'NA'.

Transforming variables in a data frame

The function transform does a similar job as the previous methods described

above. For example, to transform 'wt'

> transform(.data, wt=ifelse(wt>99, NA, wt)) -> .data

The expression inside the function tells R to replace values of 'wt' that are greater

than 99 with the NA value. The resulting object is saved into the data frame.

Now check the 'wt' variable inside the data frame.

> summ(.data$wt)

Valid obs. mean median s.d. min. max.

246 51.895 51.45 8.91 0 73.8


Note the two outliers on the left-hand side of the graph. Similar to the results of

previous methods, transform did not change the 'wt' variable inside the data

frame in the search path.

> summ(wt)

Valid obs. mean median s.d. min. max.

251 52.851 51.9 11.09 0 99.9

Note that the transformed data frame does not keep the variable labels or

descriptions with it. The new .data will have all variable descriptions removed.

So this method reduces the power of Epicalc.

Recoding values using Epicalc

The function recode in Epicalc was created to make data transformation easier.

Similar to other commands in Epicalc, for example use, des, summ, tab1 and

label.var, the command recode is restricted to the setting of having .data

as the default data frame.

We need to replace the value '999' with a missing value for the variables 'bps', 'bpd' and

'ht'. The command is simple. Let's start with 'bps'.

> recode(var=bps, old.value=999, new.value=NA)

> summ(.data)

Notice that the variable 'bps' has been changed. In fact, recode has automatically

detached the old data frame and attached the new one, as shown below.

> summ(bps)

Valid obs. mean median s.d. min. max.

244 113.033 110 14.22 0 170

Variable 'bps' in .data and that in the search path have been synchronised. The

number of valid records is reduced to 244 and the maximum is now 170 not 999.

This automatic updating has also affected other variables in the search path that we

changed before.

> summ(am)

Valid obs. mean median s.d. min. max.

250 20.344 20 3.06 15 31

When the variable 'am' is used as the argument of summ, the program looks for an

independent object called 'am', which does not exist. It then looks in the search

path. Since the data frame in the search path ('search()[2]') has been updated with

the new .data, the variable 'am' that is used now is the updated one which has

been changed from the command in the preceding section. The command recode

makes variable manipulation simpler than the above three standard R methods.


The command recode can be further simplified:

> recode(bpd, 999, NA)

> recode(ht, 999, NA)

> summ()

All the maxima have been corrected but the minima of 0 are also missing values for

the last four variables plus 'ped'. We can use recode to turn all the zeros into

missing values in one step.

> recode(c(ped, bps, bpd, wt, ht), 0, NA)

> summ()

No. of observations = 251

Var. name Obs. mean median s.d. min. max.

============ variables #1, #2, #3 omitted =========

4 ped 226 3.3 2 1.66 2 7

============ variables #5, #6, #7 omitted =========

8 bps 243 113.5 110 12.25 90 170

9 bpd 243 72.02 70 9.9 60 110

10 wt 245 52.11 51.5 8.28 16 73.8

11 ht 245 155.3 153 28.08 141 585

The minimum weight of 16 kg and the maximum height of 585 cm are dubious and

in fact should not be accepted. Any weight below 30 kg and any height above

200 cm should also be treated as missing (unless there are very good reasons to

leave them as is). A scatter plot is also useful here.

> plot(wt, ht, pch=19)

[Figure: scatter plot of 'ht' (200-600) against 'wt' (20-70).]

The outlier is clearly seen (top left corner). To correct these errors type:

> recode(wt, wt < 30, NA)

> recode(ht, ht > 200, NA)

> summ()

It should be noted that after cleaning, the effective sample size is somewhat less

than the original value of 251. The box plot of all variables now has a different

appearance.

> boxplot(.data, horizontal=T, main="Family Planning Clinic",

las=1)

[Figure: "Family Planning Clinic" horizontal box plots of all variables after recoding; x-axis 0 to 250.]

Labelling variables with 'label.var'

When there are only a few variables in the dataset, all of which are for common

purposes, such as 'age', 'sex', or 'education', naming is not a problem. However,

when there are a large number of variables, it is difficult to have intuitively

understandable names for each variable. A system separating variable labels from

variable names is a better way of documentation.

R does not come with a built-in variable labelling facility. Epicalc however, adds in

this useful facility in a simple way.

Firstly, the variable names of the data are displayed.

> names(.data)

[1] "id" "age" "relig" "ped" "income" "am"

[7] "reason" "bps" "bpd" "wt" "ht"


Then, an appropriate label or description for each variable can be created one at a

time.

> label.var(id, "Id code")

At this stage, checking the description of the dataset will reveal the description of the

first variable.

> des()

No. of observations =251

Variable Class Description

1 id numeric Id code

2 age numeric

3 relig numeric

========= subsequent lines omitted ==========

A description of the variable alone can also be displayed.

> des(id)

'id' is a variable found in the following source(s):

Var. source Var. order Class # records Description

.data 1 numeric 251

Now let's complete all other variable labels.

> label.var(age, "age")

> label.var(relig, "religion")

> label.var(ped, "education")

> label.var(income, "monthly income")

> label.var(am, "age(yr) 1st marriage")

> label.var(reason, "reason for fam. plan.")

> label.var(bps, "systolic BP")

> label.var(bpd, "diastolic BP")

> label.var(wt, "weight (kg)")

> label.var(ht, "height (cm)")

> des()

No. of observations =251

Variable Class Description

1 id numeric Id code

2 age numeric age

3 relig numeric religion

4 ped numeric education

5 income numeric monthly income

6 am numeric age(yr) 1st marriage

7 reason numeric reason for fam. plan.

8 bps numeric systolic BP

9 bpd numeric diastolic BP

10 wt numeric weight (kg)

11 ht numeric height (cm)


It is advised to keep each label short since it will be frequently used in the process

of automatic graphical display and tabulation.

Labelling a categorical variable

Labelling values of a categorical variable is good practice and an important part of

documentation. During the analysis, a labelled variable is much easier to understand

and interpret than an unlabelled one.

As mentioned previously, the best way to label variables is during the preparation

of data entry using the data entry software. However, occasionally one may

encounter an unlabelled dataset, such as those directly imported from EpiInfo, 'txt'

or 'csv' formats. It is therefore important to know how to label variables in R.

In our example of the family planning data the variable 'ped' (patient's education

level) is an unlabelled categorical variable. In fact, at this stage, it is not really a

categorical variable. When we summarise the statistics, either by the

summary(.data) command or by summ, both outputs show means, medians

and standard deviations, indicating a continuous, numeric variable.

> summary(ped)

Min. 1st Qu. Median Mean 3rd Qu. Max. NA's

2.000 2.000 2.000 3.296 5.000 7.000 25.000

> summ(ped)

Obs. mean median s.d. min. max.

226 3.296 2 1.66 2 7

Note that there is no count for category 1 of 'ped'. According to the coding scheme:

1 = no education, 2 = primary school, 3 = secondary school, 4 = high school,

5 = vocational school, 6 = bachelor degree, 7 = other.

The data are numeric and therefore need to be converted into a factor. The labels

can be put into a list of 7 elements.

> label.ped <- list(None="1", Primary="2", "Secondary

school"="3", "High school"="4", Vocational="5", "Bachelor

degree"="6", Others="7")

Each label needs to be enclosed in double quotes if it contains a space, otherwise it

is optional. For example, one can have: None="1" or "None"="1".

To convert a numeric vector to a categorical one use the 'factor' function.

> educ <- factor(ped, exclude = NULL)

The new variable is a result of factoring the values of 'ped' in .data. The

argument 'exclude' is set to 'NULL' indicating no category (even missing or 'NA')

will be excluded in the factoring process.
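By contrast, the default behaviour of factor is to drop missing values from the set of levels, as can be seen by omitting the 'exclude' argument:

> levels(factor(ped))

[1] "2" "3" "4" "5" "6" "7"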


> summary(educ)

2 3 4 5 6 7 <NA>

117 31 20 26 16 16 25

We can check the labels of a factor object using the levels command.

> levels(educ)

[1] "2" "3" "4" "5" "6" "7" NA

There are six known levels, ranging from "2" to "7", and one missing level (NA).

Note that these numbers are actually characters or group names. There was no "1"

in the data, so it is correspondingly omitted from the levels.

The levels for the codes should be changed to meaningful words as defined

previously.

> levels(educ) <- label.ped

> levels(educ)

[1] "None" "Primary" "Secondary school"

[4] "High school" "Vocational" "Bachelor degree"

[7] "Others"

Adding a variable to a data frame

Note that the variable 'educ' is not inside the data frame .data. Remember that R

has the capacity to handle more than one object simultaneously. However, although

it is possible to go on analysing data with this variable outside the data frame,

incorporating all the important variables into the main data frame .data is

advised, especially if any sorting is done. In addition, the variable can have a

descriptive label. More importantly, when necessary, the whole data frame

including the old and new variables can be written into another data format easily

(see the function 'write.foreign' in the foreign package).

> des() # same as before

To incorporate a new variable derived from the data frame .data, simply label the

variable name as follows.

> label.var(educ, "education")

Then recheck.

> des()

No. of observations =251

Variable Class Description

1 id numeric ID code

============ Variables # 2 to 11 omitted =======

12 educ factor education


For a variable outside .data, the command label.var actually accomplishes

five tasks.

- The new variable is incorporated into the data frame .data,

- The new variable is labelled with a description,

- The old data frame is detached,

- The old 'free' variable outside the data frame is removed, unless the

argument 'pack=FALSE' is specified,

- The new data frame is attached to the search path.

Order of one-way tabulation

The new education variable can be tabulated.

> tab1(educ)

educ: education

Frequency %(NA+) %(NA-)

None 0 0.0 0.0

Primary 117 46.6 51.8

Secondary school 31 12.4 13.7

High school 20 8.0 8.8

Vocational 26 10.4 11.5

Bachelor degree 16 6.4 7.1

Others 16 6.4 7.1

NA's 25 10.0 0.0

Total 251 100.0 100.0

[Figure: horizontal bar chart, "Distribution of education". Frequencies: Primary 117, Secondary school 31, High school 20, Vocational 26, Bachelor degree 16, Others 16, Missing 25, None 0.]

The table and the graph show that most subjects had only primary education. A

horizontal bar chart is produced when the number of groups exceeds 6 and the

longest label of the group has more than 8 characters. The tabulation can also be

sorted.


> tab1(educ, sort.group = "decreasing")

educ : education

Frequency %(NA+) %(NA-)

Primary 117 46.6 51.8

Secondary school 31 12.4 13.7

Vocational 26 10.4 11.5

NA's 25 10.0 0.0

High school 20 8.0 8.8

Bachelor degree 16 6.4 7.1

Others 16 6.4 7.1

None 0 0.0 0.0

Total 251 100.0 100.0

[Figure: the same bar chart sorted by decreasing frequency: Primary 117, Secondary school 31, Vocational 26, Missing 25, High school 20, Others 16, Bachelor degree 16, None 0.]

Alternatively the sorting can be increasing.

> tab1(educ, sort.group = "increasing")

educ : education

Frequency %(NA+) %(NA-)

None 0 0.0 0.0

Bachelor degree 16 6.4 7.1

Others 16 6.4 7.1

High school 20 8.0 8.8

NA's 25 10.0 0.0

Vocational 26 10.4 11.5

Secondary school 31 12.4 13.7

Primary 117 46.6 51.8

Total 251 100.0 100.0

A sorted table and bar chart are easier to read when the categories have no inherent

order. However, education level is partially ordered in nature, so the non-sorted

chart may be better.


Collapsing categories

Sometimes a categorical variable may have too many levels. The analyst may want

to combine two or more categories together into one. For example, vocational and

bachelor degree, which are the 5th and the 6th levels, could be combined into one

level called 'tertiary'. We can do this by creating a new variable, which is then

incorporated into .data at the end.

> ped2 <- educ

> levels(ped2)[5:6] <- "Tertiary"

> label.var(ped2, "level of education")

> des()

> tab1(ped2)

ped2 : level of education

Frequency %(NA+) %(NA-)

None 0 0.0 0.0

Primary 117 46.6 51.8

Secondary school 31 12.4 13.7

High school 20 8.0 8.8

Tertiary 42 16.7 18.6

Others 16 6.4 7.1

NA's 25 10.0 0.0

Total 251 100.0 100.0

The two categories have been combined into one giving 42 subjects having a

tertiary level of education.

Conclusion

In this chapter, we have looked at a dataset with a lot of data cleaning required. In

real practice, it is very important to have preventive measures to minimise any

errors during data collection and data entry. For example, a constraint of range

check is necessary in data entry. Missing values are better entered with the

missing codes specific to the software. In EpiInfo, Stata and SPSS these are period

marks '.' or simply left blank.

One of the best ways of entering data is to use the EpiData software, which can set

legal ranges and several other logical checks as well as label the variables and

values in an easy way. If this had been properly done, then the difficult commands

used in this chapter would not have been necessary. In the remaining chapters, we

will use datasets which have been properly entered, treated for missing values and

properly labelled.

Whenever a variable is modified it is a good practice to update the variable inside

the attached data frame with the one outside.


The best way to modify data is to use recode, which is a powerful command of

Epicalc. It can work with one variable or a number of variables with the same

recoding scheme or recoding a variable or variables under a condition. Finally, the

best way to update the data frame with new or modified variable(s) is to use

label.var. This command not only labels the variable for further use but also

updates and incorporates the data frame with the variable outside. Attachment to the

new data frame is automatic, making data manipulation in R smoother and

simpler.

There are many other more advanced data management functions in R that are not

covered in this chapter. These include aggregate, reshape and merge, and

readers are encouraged to explore these very useful and powerful commands on

their own.
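For example, aggregate can summarise one variable by the levels of another. The following sketch computes the mean weight in each education group of the current dataset (output not shown):

> aggregate(wt ~ educ, data=.data, FUN=mean)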


Exercises________________________________________________

The VCT dataset contains data from a questionnaire involving female sex workers

from Phuket, Thailand in 2004.

Read the file into R and use the commands in this chapter to clean the data.


Chapter 11: Scatter Plots & Linear

Regression

Linear regression involves modelling a continuous outcome variable with one or

more explanatory variables. With all data analysis the first step is always to explore

the data. In this case, scatter plots are very useful in determining whether or not the

relationships between the variables are linear.

Example: Hookworm & blood loss

The dataset in this chapter concerns the relationship between hookworm and blood

loss from a study conducted in 1970.

> zap()

> data(Suwit); use(Suwit)

> des()

HW and Blood loss SEAJTMH 1970;

No. of observations = 15

Variable Class Description

1 id numeric

2 worm numeric No. of worms

3 bloss numeric Blood loss/day

> summ()

HW and Blood loss SEAJTMH 1970;

No. of observations =15

Var. name Obs. mean median s.d. min. max.

1 id 15 8 8 4.47 1 15

2 worm 15 552.4 525 513.9 32 1929

3 bloss 15 33.45 33.8 24.85 5.03 86.65

There are 3 variables and 15 records.

> .data


The file is clean and ready for analysis. With this small sample size it is somewhat

straightforward to verify that there is no repetition of 'id' and no missing values. The

records have been sorted in ascending order of 'worm' (number of worms) ranging

from 32 in the first subject to 1,929 in the last one. Blood loss ('bloss'), however, is

not sorted. The 13th record has the highest blood loss of 86 ml per day, which is

very high. The objective of this analysis is to examine the relationship between

these two variables.

Scatter plots

When there are two continuous variables cross plotting is the first necessary step.

> plot(worm, bloss)

The above command gives a simple scatter plot with the first variable on the

horizontal axis and the second on the vertical axis.

[Figure: scatter plot of 'bloss' (20-80) against 'worm' (0-2000), with default axis labels and no title.]

The names of the variables are used for the axis labels, and there is no title. The

axis labels can be modified and a title added by supplying extra arguments to the

plot function, as follows:

> plot(worm, bloss, xlab="No. of worms", ylab="ml. per day",

main = "Blood loss by number of hookworms in the bowel")


For a small sample size, putting the identification of each dot can improve the

information conveyed in the graph.

[Figure: the same scatter plot with title "Blood loss by number of hookworms in the bowel", x-axis "No. of worms" and y-axis "ml. per day".]

> plot(worm, bloss, xlab="No. of worms", ylab="ml. per day",

main="Blood loss by number of hookworms in the bowel",

type="n")

The above command produces an empty plot. The argument 'type' specifies the type

of plot to be drawn. A value of "n" tells R not to plot anything. This is to set a

proper frame for further points and lines.

The variable 'id' can be used as the text to write at the coordinates using the 'text'

command.

> text(worm, bloss, labels=id)


[Figure: "Blood loss by number of hookworms in the bowel", with each point labelled by its id number (1-15).]

In order to draw a regression line, a linear model using the above two variables

should be fit to the data.

Components of a linear model

The function lm is used to perform linear modelling in R.

> lm1 <- lm(bloss ~ worm)

> lm1

Call:

lm(formula = bloss ~ worm)

Coefficients:

(Intercept) worm

10.84733 0.04092

The model 'lm1' is created. Be careful not to confuse the letter "l" with the

number "1", which look very similar. Displaying the model by typing 'lm1' gives

limited information. To get more information, one can look at the attributes of this

model, its summary and attributes of its summary.

> attr(lm1, "names")

[1] "coefficients" "residuals" "effects"

[4] "rank" "fitted.values" "assign"

[7] "qr" "df.residual" "xlevels"

[10] "call" "terms" "model"

There are 12 attributes. Most of them can be displayed with the summary function.

> summary(lm1)


Call:

lm(formula = bloss ~ worm)

Residuals:

Min 1Q Median 3Q Max

-15.8461 -10.8118 0.7502 4.3562 34.3896

Coefficients:

Estimate Std. Error t value Pr(>|t|)

(Intercept) 10.84733 5.30857 2.04 0.062

worm 0.04092 0.00715 5.73 7e-05

Residual standard error: 13.7 on 13 degrees of freedom

Multiple R-Squared: 0.716, Adjusted R-squared: 0.694

F-statistic: 32.8 on 1 and 13 DF, p-value: 6.99e-05

The first section of summary shows the formula that was 'called'. The second

section gives the distribution of residuals. The pattern is clearly not symmetric. The

maximum is too far on the right (34.38) compared to the minimum (-15.84) and the

first quartile is further left (-10.81) of the median (0.75) than the third quartile

(4.35) is. Otherwise, the median is close to zero. The third section gives coefficients

of the intercept and the effect of 'worm' on blood loss. The intercept is 10.8

meaning that when there are no worms, the blood loss is estimated to be 10.8 ml per

day. This, however, is not significantly different from zero as the P value is 0.0618.

The coefficient of 'worm' is 0.04 indicating that each worm will cause an average of

0.04 ml of blood loss per day. Although the value is small, it is highly significantly

different from zero. When there are many worms, the level of blood loss can be

very substantial.
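To get a sense of the scale (a rough illustration, not from the original text; the worm burden of 1,000 is hypothetical), the extra blood loss for a heavy infection can be computed from the model coefficient:

> 1000 * coef(lm1)["worm"]

This gives about 41 ml of additional blood loss per day.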

The multiple R-squared value of 0.716 indicates that 71.6% of the variation in the

data is explained by the model. The adjusted value is 0.6942. (The calculation of R-

squared is discussed in the analysis of variance section below). The last section

describes more details of the residuals and hypothesis testing on the effect of 'worm'

using the F-statistic. The P value from this section (6.99 × 10⁻⁵) is equal to that

tested by the t-distribution in the coefficient section. This F-test more commonly

appears in the analysis of variance table.

Analysis of variance table, R-squared and adjusted R-squared

> summary(aov(lm1))

Df Sum Sq Mean Sq F value Pr(>F)

worm 1 6192 6192 32.8 7e-05

Residuals 13 2455 189

The above analysis of variance (aov) table breaks down the degrees of freedom,

sum of squares and mean square of the outcome (blood loss) by sources (in this

case there are only two: 'worm' and residuals).


The so-called 'square' is actually the squared difference between each value and the

mean. The total sum of squares of blood loss is therefore:

> SST <- sum((bloss-mean(bloss))^2); SST

[1] 8647

The sum of squares from residuals is:

> SSR <- sum(residuals(lm1)^2); SSR

[1] 2455.5 # See also the analysis of variance table

The sum of squares of worm or sum of squares of difference between the fitted

values and the grand mean is:

> SSW <- sum((fitted(lm1)-mean(bloss))^2); SSW

[1] 6191.6

The latter two sums add up to the first one. The R-squared is the proportion of sum

of squares of the fitted values to the total sum of squares.

> SSW/SST

[1] 0.71603

This value of R-squared can also be interpreted as the proportional reduction in the total sum of squares when the explanatory variable is fitted. Thus the number of worms explains about 72% of the variation in blood loss.

Instead of sum of squares, one may consider the mean square as the level of

variation. In such a case, the number of worms can reduce the total mean square (or

variance) by: (total mean square - residual mean square) / total mean square, or

(variance - residual mean square) / variance.

> resid.msq <- sum(residuals(lm1)^2)/lm1$df.residual

> Radj <- (var(bloss)- resid.msq)/var(bloss); Radj

[1] 0.69419

This is the adjusted R-squared shown in 'summary(lm1)' in the above section.

F-test

When the mean square of 'worm' is divided by the mean square of residuals, the

result is:

> F <- SSW/resid.msq; F

[1] 32.78

Using this F value with the two corresponding degrees of freedom (from 'worm' and

residuals) the P value for testing the effect of 'worm' can be computed.

> pf(F, df1=1, df2=13, lower.tail=FALSE)

[1] 6.9904e-05


The function pf is used to compute a P value from a given F value together with

the two values of the degrees of freedom. The last argument 'lower.tail' is set to

FALSE to obtain the right margin of the area under the curve of the F distribution.

In summary, both the regression and analysis of variance give the same conclusion;

that the number of worms has a significant linear relationship with blood loss. Now the

regression line can be drawn.
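As a quick check (a sketch, assuming 'lm1' is still in the workspace): with a single predictor, the square of the t value for 'worm' equals the F-statistic.

> summary(lm1)$coefficients["worm", "t value"]^2

This returns the same value, 32.8, as the F-statistic in the summary above.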

Regression line, fitted values and residuals

A regression line can be added to the scatter plot with the following command:

> abline(lm1)

The regression line has an intercept of 10.8 and a slope of 0.04. The expected value

is the value of blood loss estimated from the regression line with a specific value of

'worm'.

> points(worm, fitted(lm1), pch=18, col="blue")

A residual is the difference between the observed and expected value. The residuals

can be drawn by the following command.

> segments(worm, bloss, worm, fitted(lm1), col="pink")

[Figure: scatter plot "Blood loss by number of hookworms in the bowel" — No. of worms (X-axis, 0-2000) against ml. per day (Y-axis, 20-80), with the fitted regression line, blue fitted-value points and pink residual segments; points labelled 1 to 15.]

The actual values of the residuals can be checked from the specific attribute of the

defined linear model.

> residuals(lm1) -> lm1.res; lm1.res

Note that some residuals are positive and some are negative. The 13th residual has


the largest value (furthest from the fitted line). The sum of the residuals and the sum

of their squares can be checked.

> sum(lm1.res); sum(lm1.res ^2)

[1] 3.9968e-15

[1] 2455.5

The sum of residuals is close to zero whereas the sum of their squares is the value

previously displayed in the summary of the model. The distribution of residuals, if

the model fits well, should be normal. A common sense approach is to look at the

histogram.

> hist(lm1.res)

Checking normality of residuals

Plots from the above two commands do not suggest that residuals are normally

distributed. However, with such a small sample size, it is difficult to draw any

conclusion. A better way to check normality is to plot the residuals against the

expected normal score or (residual-mean) / standard deviation. A reasonably

straight line would indicate normality.

> qqnorm(lm1.res)

Numerically, the Shapiro-Wilk test can also be applied.

> shapiro.test(lm1.res)

Shapiro-Wilk normality test

data: lm1.res

W = 0.8978, p-value = 0.0882

> qqline(lm1.res)

Epicalc combines the three commands and adds the p-value of the test to the graph.

> qqnorm(lm1.res) -> a

This puts the coordinates of the residuals into an object.

> shapiro.qqnorm(lm1.res, type="n")

> text(a$x, a$y, labels=as.character(id))

The X and Y coordinates are 'a$x' and 'a$y', respectively.

If the residuals were perfectly normally distributed, the text symbols would have

formed along the straight dotted line. The graph suggests that the largest residual

(13th) is too high (positive) whereas the smallest value (7th) is not large enough

(negative). However, the P value from the Shapiro-Wilk test is 0.088, suggesting that the hypothesis of normally distributed residuals cannot be rejected.

[Figure: Normal Q-Q plot of lm1.res — Theoretical Quantiles against Sample Quantiles, points labelled 1 to 15, with the annotation "Shapiro-Wilk test P value = 0.0882".]

Finally, the residuals are plotted against the fitted values to see if there is a pattern.

> plot(fitted(lm1), lm1.res, xlab="Fitted values")

> plot(fitted(lm1), lm1.res, xlab="Fitted values", type="n")

> text(fitted(lm1), lm1.res, labels=as.character(id))

> abline(h=0, col="blue")

[Figure: residuals(lm1) plotted against fitted values (20-80), points labelled 1 to 15, with a horizontal blue line at zero.]

There is no obvious pattern. The residuals are quite independent of the expected

values. With this and the above findings from the 'qqnorm' command we may

conclude that the residuals are randomly and normally distributed.


The above two diagnostic plots for the model 'lm1' can also be obtained from:

> windows(7, 4)

> par(mfrow=c(1,2))

> plot.lm(lm1, which=1:2)

Final conclusion

From the analysis, it is clear that blood loss is associated with number of

hookworms. On average, each worm may cause 0.04 ml of blood loss. The

remaining uncertainty of blood loss, apart from hookworm, is explained by random

variation or other factors that were not measured.

Exercise_________________________________________________

Load the SO2 dataset and label the variables using the following commands.

> label.var(smoke, "Smoke (mg/cu.m.)")

> label.var(SO2, "SO2 (ppm.)")

Using scatter plots and linear regression check whether smoke or SO2 has more

influence on logarithm of deaths.

Interpret the results of the best simple linear regression.


Chapter 12: Stratified linear regression

Datasets usually contain many variables collected during a study. It is often useful

to see the relationship between two variables within the different levels of a third, categorical variable.

Example: Systolic blood pressure

A small survey on blood pressure was carried out. The objective is to see the

hypertensive effect of adding extra table salt to meals.

> zap()

> data(BP); use(BP)

> des()

cross-sectional survey on BP & risk factors

No. of observations =100

Variable Class Description

1 id integer id

2 sex factor sex

3 sbp integer Systolic BP

4 dbp integer Diastolic BP

5 saltadd factor Salt added on table

6 birthdate Date

> summ()

cross-sectional survey on BP & risk factors

No. of observations = 100

Var. name Obs. mean median s.d. min. max.

id 100 50.5 50.5 29.01 1 100

sex 100 1.55 2 0.5 1 2

sbp 100 154.34 148 39.3 80 238

dbp 100 98.51 96 22.74 55 158

saltadd 80 1.538 2 0.502 1 2

birthdate 100 1952-10-11 1951-11-17 <NA> 1930-11-14 1975-12-08

Note that the maximum systolic and diastolic blood pressures are quite high. There

are 20 missing values in 'saltadd'. The frequencies of the categorical variables 'sex'

and 'saltadd' are now inspected.


> summary(data.frame(sex, saltadd))

sex saltadd

male :45 no :37

female:55 yes :43

NA's:20

The next step is to create a new age variable from birthdate. The calculation is

based on 12th March 2001, the date of the survey.

> age.in.days <- as.Date("2001-03-12") - birthdate

There is a leap year every four years, so an average year has 365.25 days.

> class(age.in.days)

[1] "difftime"

> age <- as.numeric(age.in.days)/365.25

The function as.numeric is needed to convert age from the 'difftime' class into a plain number; otherwise modelling would not be possible.

> summ(sbp, by = saltadd)

For saltadd = no

Obs. mean median s.d. min. max.

37 137.5 132 29.624 80 201

For saltadd = yes

Obs. mean median s.d. min. max.

43 163 171 39.39 80 224

For saltadd = missing

Obs. mean median s.d. min. max.

20 166.9 180 45.428 106 238

[Figure: distribution of Systolic BP (100-200) by Salt added on table, for the groups "no", "yes" and "missing".]

Recoding missing values into another category

The missing value group has the highest median and average systolic blood

pressure. In order to create a new variable with three levels type:

> saltadd1 <- saltadd

> levels(saltadd1) <- c("no", "yes", "missing")

> saltadd1[is.na(saltadd)] <- "missing"

> summary(saltadd1)

no yes missing

37 43 20

> summary(aov(age ~ saltadd1))

Df Sum Sq Mean Sq F value Pr(>F)

saltadd1 2 114.8 57.4 0.4484 0.64

Residuals 97 12421.8 128.1

Since there is not enough evidence that the missing group is important and for

additional reasons of simplicity, we will ignore this group and continue the analysis

with the original 'saltadd' variable consisting of only two levels. Before doing this

however, a simple regression model and regression line are first fitted.

> lm1 <- lm(sbp ~ age)

> summary(lm1)

Coefficients:

Estimate Std. Error t value Pr(>|t|)

(Intercept) 65.1465 14.8942 4.374 3.05e-05

age 1.8422 0.2997 6.147 1.71e-08

Residual standard error: 33.56 on 98 degrees of freedom

Multiple R-Squared: 0.2782, Adjusted R-squared: 0.2709

F-statistic: 37.78 on 1 and 98 DF, p-value: 1.712e-08

Although the R-squared is not very high, the P value is small, indicating an important influence of age on systolic blood pressure.

A scatterplot of age against systolic blood pressure is now shown with the

regression line added using the 'abline' function, previously mentioned in chapter

11. This function can accept many different argument forms, including a regression

object. If this object has a 'coef' method, and it returns a vector of length 1, then the

value is taken to be the slope of a line through the origin, otherwise the first two

values are taken to be the intercept and slope, as is the case for 'lm1'.

> plot(age, sbp, main = "Systolic BP by age", xlab = "Years",

ylab = "mm.Hg")

> coef(lm1)

(Intercept) age

65.1465 1.8422

> abline(lm1)

[Figure: "Systolic BP by age" — Years (X-axis, 30-70) against mm.Hg (Y-axis, 100-200), with the fitted regression line.]

Subsequent exploration of residuals suggests a non-significant deviation from

normality and no pattern. Details can be adapted from the techniques

discussed in the previous chapter and are omitted here. The next step is to provide

different plot patterns for different groups of salt habits.

> lm2 <- lm(sbp ~ age + saltadd)

> summary(lm2)

====================

Coefficients:

Estimate Std. Error t value Pr(>|t|)

(Intercept) 63.1291 15.7645 4.005 0.000142

age 1.5526 0.3118 4.979 3.81e-06

saltaddyes 22.9094 6.9340 3.304 0.001448

---

Residual standard error: 30.83 on 77 degrees of freedom

Multiple R-Squared: 0.3331, Adjusted R-squared: 0.3158

F-statistic: 19.23 on 2 and 77 DF, p-value: 1.68e-07

On average, a one-year increment of age increases systolic blood pressure by 1.5

mmHg. Adding table salt increases systolic blood pressure significantly by

approximately 23 mmHg.
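The effect can also be illustrated with predict (a sketch; 'newd' is a hypothetical data frame, and the age of 50 is arbitrary):

> newd <- data.frame(age = 50, saltadd = c("no", "yes"))
> predict(lm2, newdata = newd)

The two predicted values differ by the 'saltaddyes' coefficient, about 22.9 mmHg, regardless of the age chosen.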

Similar to the method used in the previous chapter, the following step creates an

empty frame for the plots:

> plot(age, sbp, main="Systolic BP by age", xlab="Years",

ylab="mm.Hg", type="n")


Add blue hollow circles for subjects who did not add table salt.

> points(age[saltadd=="no"], sbp[saltadd=="no"], col="blue")

Then add red solid points for those who did add table salt.

> points(age[saltadd=="yes"], sbp[saltadd=="yes"], col="red",

pch = 18)

Note that the red dots corresponding to those who added table salt are higher than

the blue circles. The final task is to draw two separate regression lines for each

group.

Since model 'lm2' contains 3 coefficients, the command abline now requires the

argument 'a' as the intercept and 'b' as the slope.

> coef(lm2)

(Intercept) age saltaddyes

63.129112 1.552615 22.909449

We now have two regression lines to draw, one for each group. The intercept for

non-salt users will be the first coefficient and for salt users will be the first plus the

third. The slope for both groups is the same. Thus the intercept for the non-salt

users is:

> a0 <- coef(lm2)[1]

For the salt users, the intercept is the first plus the third coefficient:

> a1 <- coef(lm2)[1] + coef(lm2)[3]

For both groups, the slope is fixed at:

> b <- coef(lm2)[2]

Now the first (lower) regression line is drawn in blue, then the other in red.

> abline(a = a0, b, col = "blue")

> abline(a = a1, b, col = "red")

Note that the X-axis does not start at zero. Thus the intercepts are outside the plot frame.

The red line is for the red points of salt adders and the blue line is for the blue

points of non-adders. In this model, age has a constant independent effect on

systolic blood pressure.

Look at the distributions of the points of the two colours; the red points are higher

than the blue ones but mainly on the right half of the graph. To fit lines with

different slopes, a new model with interaction term is created.

[Figure: "Systolic BP by age" — Years (X-axis, 30-70) against mm.Hg (Y-axis, 100-200), with two parallel regression lines: blue for non-adders and red for salt adders.]

The next step is to prepare a model with different slopes (or different 'b' for the

abline arguments) for different lines. The model needs an interaction term

between 'saltadd' and 'age'.

> lm3 <- lm(sbp ~ age * saltadd)

> summary(lm3)

Call:

lm(formula = sbp ~ age * saltadd)

===============

Coefficients:

Estimate Std. Error t value Pr(>|t|)

(Intercept) 78.0066 20.3981 3.824 0.000267 ***

age 1.2419 0.4128 3.009 0.003558 **

saltaddyes -12.2540 31.4574 -0.390 0.697965

age:saltaddyes 0.7199 0.6282 1.146 0.255441

---

Multiple R-Squared: 0.3445, Adjusted R-squared: 0.3186

F-statistic: 13.31 on 3 and 76 DF, p-value: 4.528e-07

In the formula part of the model, 'age * saltadd' is the same as 'age + saltadd +

age:saltadd'. The four coefficients are displayed in the summary of the model. They

can also be checked as follows.

> coef(lm3)

(Intercept) age saltaddyes age:saltaddyes

78.0065572 1.2418547 -12.2539696 0.7198851
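As a check (a sketch, assuming the dataset is still attached), fitting the expanded formula gives identical coefficients:

> coef(lm(sbp ~ age + saltadd + age:saltadd))

The names and values match those of 'lm3' exactly.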

The first coefficient is the intercept of the fitted line among non-salt users.


For the intercept of the salt users, the second and fourth terms are zero (since age is zero) but the third must be added. This term is negative, so the intercept of the salt users is lower than that of the non-users.

> a0 <- coef(lm3)[1]

> a1 <- coef(lm3)[1] + coef(lm3)[3]

For the slope of the non-salt users, the second coefficient alone is enough, since the first and third coefficients do not change with age and the fourth term is zero ('saltaddyes' is 0). The slope for the salt users includes both the second and the fourth coefficients, since 'saltaddyes' is 1.

> b0 <- coef(lm3)[2]

> b1 <- coef(lm3)[2] + coef(lm3)[4]

These terms are used to draw the two regression lines.

Redraw the graph but this time with black representing the non-salt adders.

> plot(age, sbp, main="Systolic BP by age", xlab="Years",

ylab="mm.Hg", pch=18, col=as.numeric(saltadd))

> abline(a = a0, b = b0, col = 1)

> abline(a = a1, b = b1, col = 2)

> legend("topleft", legend = c("Salt added", "No salt added"),

lty=1, col=c("red","black"))

[Figure: "Systolic BP by age" — Years (X-axis, 30-70) against mm.Hg (Y-axis, 100-200); black points and line for "No salt added", red for "Salt added", with a legend at the top left.]

Note that 'as.numeric(saltadd)' converts the factor levels into the integers 1

(black) and 2 (red), representing the non-salt adders and the salt adders,

respectively. These colour codes come from the R colour palette.


This model suggests that at a young age the systolic blood pressures of the two groups are not much different, as the two lines are close together on the left of the plot. For

example, at the age of 25, the difference is 5.7mmHg. Increasing age increases the

difference between the two groups. At 70 years of age, the difference is as great as

38mmHg. (For simplicity, the procedures for computation of these two levels of

difference are skipped in these notes). In this aspect, age modifies the effect of

adding table salt.
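These differences can be computed from the coefficients (a sketch, assuming 'lm3' is still available): at a given age, the difference between the groups is the 'saltaddyes' coefficient plus age times the interaction coefficient.

> coef(lm3)[3] + 25 * coef(lm3)[4]
> coef(lm3)[3] + 70 * coef(lm3)[4]

These give approximately 5.7 and 38 mmHg, the two values quoted above.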

On the other hand the slope of age is 1.24mmHg per year among those who did not

add salt but becomes 1.24+0.72 = 1.96mmHg among the salt adders. Thus, salt

adding modifies the effect of age. Interaction is a statistical term whereas effect

modification is the equivalent epidemiological term.

The coefficient of the interaction term 'age:saltaddyes' is not statistically significant.

The two slopes just differ by chance.

Exercise_________________________________________________

Plot the systolic and diastolic blood pressures of the subjects, using red for males and blue for females, as shown in the following figure. [Hint: segments]

[Figure: "Systolic and diastolic blood pressure of the subjects" — Index (X-axis, 0-100) against blood pressure (Y-axis, 0-200).]

Check whether there is any significant difference of diastolic blood pressure among

males and females after adjustment for age.


Chapter 13: Curvilinear Relationship

Example: Money carrying and age

This chapter returns to the family data and explores the relationship between money

carried and age.

> zap()

> data(Familydata)

> use(Familydata)

> des()