The author(s) shown below used Federal funds provided by the U.S.
Department of Justice and prepared the following final report:
Document Title: Measuring Police and Community Performance
Using Web-Based Surveys: Findings from the
Chicago Internet Project
Author(s): Dennis Rosenbaum; Amie Schuck; Lisa Graziano; Cody Stephens
Document No.: 221076
Date Received: January 2008
Award Number: 2004-IJ-CX-0021
This report has not been published by the U.S. Department of Justice.
To provide better customer service, NCJRS has made this Federally-funded grant final
report available electronically in addition to traditional paper copies.
Opinions or points of view expressed are those
of the author(s) and do not necessarily reflect
the official position or policies of the U.S.
Department of Justice.
MEASURING POLICE AND COMMUNITY
PERFORMANCE USING WEB-BASED SURVEYS:
FINDINGS FROM THE CHICAGO INTERNET PROJECT
Final Report
Prepared by:
Dennis P. Rosenbaum
Amie M. Schuck
Lisa M. Graziano
Cody D. Stephens
Center for Research in Law and Justice
Department of Criminal Justice
University of Illinois at Chicago
November 25, 2007
This project was supported under award number 2004-IJ-CX-0021 to the University of Illinois at
Chicago by the National Institute of Justice, Office of Justice Programs, U.S. Department of
Justice. Findings and conclusions of the research reported here are those of the authors and do
not necessarily reflect the official position or policies of the U.S. Department of Justice.
ACKNOWLEDGMENTS
A project of this magnitude leaves us indebted to many people. First, we would like to
thank the Chicago Police Department for agreeing to work with us as partners and for making
this project a priority at police headquarters. We are especially grateful for the leadership
provided by Assistant Deputy Superintendent Frank Limon, Lt. Michael Kuemmeth, and Sgt.
Brian Daly. They worked diligently from the CAPS Project Office to minimize implementation
problems within the police bureaucracy. Our gratitude also extends to Beth Forde, Vance Henry,
Ron Huberman, Barbara McDonald, and Ted O'Keefe, each of whom provided thoughtful
feedback during our feasibility study and the early stages of this project.
We also wish to acknowledge Willie Cade, president of Computers for Schools, for
assisting with this project and many others. Willie donated six laptop computers in order to
stimulate greater community involvement. Participation rates were enhanced because survey
respondents were eligible for as many as six drawings to win a laptop from Computers for
Schools, a Microsoft Authorized Refurbisher.
At the technical end, we are indebted to Raphael Villas, president of 2 Big Division, and
his colleagues, who designed the web interface, background infrastructure for the website, and
final graphics. We also wish to thank our own Academic Communications and Computing
Center, whose employees helped us create the project website on the University's server, post the
surveys, access the results and update the monthly content links.
Finally, we are deeply grateful to Lois Mock, our grant monitor at the National Institute
of Justice, whose guidance and support throughout this project were invaluable. Lois was always
available when we needed her assistance with either substantive or bureaucratic questions, and
she always provided encouragement when we needed it the most. Her insights about policing
and evaluation issues, drawn from dozens of projects over the years, were especially helpful.
With her retirement at the end of this year, she will be sorely missed by the entire criminal
justice community. We wish her all the best.
EXECUTIVE SUMMARY
This is the story of the Chicago Internet Project, a joint information technology project
involving the University of Illinois at Chicago, the Chicago Police Department and community
residents in Chicago’s neighborhoods. The dual goals of this project are: (1) to successfully
implement a large scale comprehensive web-based community survey and identify the
challenges encountered when transferring this infrastructure to other settings; and (2) to
determine whether a web-based survey system can enhance the problem solving process,
increase community engagement, and strengthen police-community relations.
A. Background
Over the past two decades, American policing has been in a continual state of change and
innovation. The COPS Office and the National Institute of Justice have promoted substantive
reforms and evaluation research, respectively, at the local and national levels. Community
policing and problem solving emerged as substantial reform models (Goldstein, 1990; Greene &
Mastrofski, 1988; Rosenbaum, 1994), but the obstacles to full-scale implementation have been
numerous (see Fridell & Wycoff, 2004; Skogan & Frydl, 2004; Skogan, 2003a). Other police
innovations are now competing for dominance, including broken windows, hot spots, Compstat,
and pulling-levers policing, all of which are aided by advances in information technology (see
Weisburd & Braga, 2006). Critics, however, argue that these aggressive policing strategies are
undermining trust and confidence in the police, especially in minority communities (Tyler, 2005;
Walker & Katz, 2008) and could have other adverse effects down the road (see Rosenbaum,
2007).
The Chicago Internet Project is based on the premise that, while much has been done
under the community policing and problem-oriented policing models, progress in reforming
police organizations and communities has been restricted by our failure to explore new measures of
success and new methods of accountability that are grounded in the community. While
community residents demand safer streets and less violence, they also want a police force that is
fair and sensitive to their needs (Rosenbaum et al., 2005; Skogan, 2005; Tyler, 2005; Weitzer &
Tuch, 2005). How can all of this be achieved, and how can it be measured?
Police accountability. Traditionally, police accountability has been an internal and legal
process, focusing on the control of officers through punitive enforcement of rules, regulations,
and laws (Chan, 2001). Today, police organizations are under pressure to be responsive to the
public both for crime control and police conduct. Consequently, there has been widespread
interest in computer-driven measurement of police performance using traditional crime
indicators. Although useful for specific purposes, these indicators of performance do not
meaningfully gauge customer satisfaction with the quality of police service or
the quality of police-community partnerships. As noted previously, only when the performance
evaluation systems change can we expect police–community interactions to change (Rosenbaum,
2004). To achieve marked improvements in police performance, accountability systems will
need to expand and incorporate new standards based on the goals of solving problems, engaging
the community, building effective partnerships, and providing services that are satisfactory to all
segments of the community.
Similarly, local residents must be held more accountable for public safety and crime
prevention. Too often, community residents expect the police to solve all of their public safety
problems and concerns. The community's role in the prevention of crime is well established
(Rosenbaum et al., 1998; Sampson, 1998). Unfortunately, we have yet to implement a
standardized set of measures to capture the social ecology and crime prevention behavior of
neighborhoods. In a limited way, the Chicago Internet Project also represents an attempt to
measure community performance indicators systematically.
The information imperative. In this information-driven society, the POP/COP models
foster a new information imperative (Dunworth et al., 2000; Rosenbaum, 2004) and call on police
executives and universities to “measure what matters” in the 21st century (see Masterson &
Stevens, 2002; Mastrofski, 1999; Mirzer, 1996; Skogan, 2003b). Particularly important (and often
neglected) are data about concerns and priorities of local residents and community organizations,
as well as factors in the local environment that are either preventative or criminogenic. If
community engagement is a priority, then police officers need reliable information about
community capacity, current levels of community crime prevention behaviors, and local
resources that can be leveraged to help prevent crime and disorder. Measuring the police-
community interface is critical for achieving strong police-community relations and for
stimulating community-based crime prevention. Both are needed to create the effective
partnerships that are postulated as the heart of community policing and problem solving
(Cordner, 1997; Rosenbaum, 2002; Schuck & Rosenbaum, 2006).
If police-community relations are a priority, accountability systems should begin to
examine the day-to-day interactions between police and citizens. We should ask: How are the
police responding to residents as victims, witnesses, suspects, complainants, callers, and
concerned citizens? And how do residents respond to the police? In short, researchers,
community leaders, and police administrators should begin to ask: What are the important
dimensions of community-police relationships and interactions and how do we begin to collect
timely, geo-based information on these constructs?
Information technology and the police. The “information technology (IT) revolution” is
finally reaching law enforcement (see Brown, 2000; Chan, 2001; Dunworth, 2000; Reuland,
1997). Several technology-driven law enforcement initiatives have received national attention in
recent years, especially New York's COMPSTAT initiative (McDonald, 2000, 2005) and
Chicago’s CLEAR program (see Skogan et al., 2002; Skogan et al., 2005). While law
enforcement agencies are making significant progress toward harnessing the power of
information technology, rarely do these initiatives give attention to the information imperative of
problem solving and community policing. Rather, police tend to focus on new ways of
processing traditional data elements (Chan et al., 2001; Rosenbaum, 2006; Weisburd et al., 2006).
Measuring the fears, concerns, and behaviors of the community has not been a priority.
Using web-based community surveys. Many police departments in the United States
now offer on-line information about their services, programs and crime statistics (Haley &
Taylor, 1998; Rosenbaum, Graziano, & Stephens, in preparation). To date, however, few have
moved beyond simply posting information to embracing the Internet as a proactive tool for
obtaining new information about neighborhood conditions, solving problems, building
partnerships, evaluating programs, and assessing unit performance. The focus of the Chicago
Internet Project is a web-based community survey with repeated measurement. Although a few
police departments post Internet surveys, these sporadic efforts, on the whole, have not been
comprehensive, methodologically sound, institutionalized, or used for strategic planning and
accountability.
B. Research Questions and Research Findings
The Chicago Internet Project addressed several key questions. In this summary, each
research question is followed by an answer derived from our research findings.
(1) Can we successfully design and implement a comprehensive community Internet survey?
The answer is a resounding "Yes." As a team, we developed a comprehensive system of
measurement. Building this system required the following core activities: (1) identifying
samples of potential survey respondents; (2) developing multiple web surveys; (3) purchasing
and installing appropriate Internet survey software; (4) gaining approval to use the University's
server; (5) recruiting respondents by email and at community meetings; (6) monitoring survey
returns and answering questions posed by respondents; (7) analyzing community specific survey
data; (8) posting survey results; (9) developing and posting (or otherwise disseminating)
educational/training information; (10) arranging incentives to increase participation rates; and
(11) managing communication with the police department to ensure fidelity of implementation. In
other words, a number of resources were needed to build and sustain the infrastructure for
conducting online surveys and providing feedback to the target communities.
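To make these activities concrete, the sketch below shows one minimal way that activities (5) and (6), emailing invitations and tracking returns, might be implemented. It is illustrative only; the database schema, server names, addresses, and URL are hypothetical, not the project's actual system.

```python
import smtplib
import sqlite3
from email.message import EmailMessage

# Hypothetical respondent store: one row per sampled resident.
# Schema, hosts, and URL are illustrative, not the project's actual system.
DB = "cip_respondents.db"
SURVEY_URL = "https://example.edu/survey"  # placeholder URL

def invite_pending(smtp_host: str = "localhost") -> int:
    """Email a survey invitation to every respondent not yet invited."""
    conn = sqlite3.connect(DB)
    rows = conn.execute(
        "SELECT id, email, beat FROM respondents WHERE invited = 0"
    ).fetchall()
    with smtplib.SMTP(smtp_host) as smtp:
        for rid, email, beat in rows:
            msg = EmailMessage()
            msg["Subject"] = "Your neighborhood safety survey"
            msg["From"] = "survey@example.edu"
            msg["To"] = email
            msg.set_content(
                f"Please share your views on beat {beat}:\n{SURVEY_URL}?r={rid}"
            )
            smtp.send_message(msg)
            # Mark the respondent so survey returns can be monitored later.
            conn.execute("UPDATE respondents SET invited = 1 WHERE id = ?", (rid,))
    conn.commit()
    return len(rows)
```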
To say that we achieved successful implementation should not go without qualification.
Numerous obstacles were encountered along the way, ranging from problems with using the
University server to getting the police bureaucracy to distribute and discuss the survey findings
at beat meetings. These problems are discussed in great detail in this report, but they should not
obscure the bottom line that web-based community surveys can be employed with success if
cities are willing to invest the time and resources necessary.
(2) As a measurement device, how well does the Internet survey perform with respect to
measuring neighborhood problems, community and police performance, and local program
outcomes?
One primary objective of the Chicago Internet Project was to develop new external
measures of police performance. In 1996, the National Institute of Justice held a series of
workshops entitled "Measuring What Matters," where leading police scholars and police chiefs
reflected on the problems with traditional performance measures and agreed that there is a
pressing need to conceptualize and measure the dimensions of police performance that matter
most to the community. Although the theoretical dialogue has continued over the past decade,
little progress has been made at the empirical level.
To fill this void, the Chicago Internet Project sought to develop, field test, and validate a
number of survey measures. Our conceptual scheme for evaluation of the police posits three
primary types of community assessment: general assessments of police officers; experience-
based assessments of police officers; and assessments of the police organization as a whole.
Reaching far beyond traditional crime statistics, particular emphasis was given to addressing the
following performance questions:
Are the police exhibiting good manners during encounters with residents?
Are the police competent in the exercise of their duties?
Are the police fair and impartial when enforcing the law?
Are the police acting lawfully in the exercise of their duties?
Are the police equitable in the distribution of services?
Drawing on theories of community policing and problem-oriented policing, the following process
and outcome questions were also measured:
Are the police responsive to the community's concerns and problems?
Are the police effective in solving neighborhood problems?
Are the police engaging the community in crime control and prevention actions?
Are the police creating cooperative partnerships with the community?
Does the public perceive less crime and disorder?
Does the public report lower rates of victimization?
Does the public report less fear of crime?
Does the public perceive a higher quality of life in their neighborhood?
Does the public attribute organizational legitimacy to the police?
In each of these domains, we found that Internet surveys are capable of yielding reliable
and valid data. First, as the above questions suggest, the measures we developed (or selected)
covered a wide range of theoretical constructs regarding police performance. Second, these
survey items were subjected to various tests of validity and reliability. When constructing
composite indices, factor and reliability analyses were used to establish that the items formed a
unidimensional factor with strong internal consistency. Measures were often taken at two or
more points in time, allowing test-retest reliability coefficients to be computed. Finally, for key
indices, additional validity tests were conducted to establish construct and criterion validity. For
example, based on prior research using telephone survey data, we hypothesized that African
Americans and Latinos would report (via the Internet) more negative views of the police on
several performance dimensions, which is a test of "known groups" validity. The results strongly
support this hypothesis, suggesting that web-based indices of police performance can
successfully capture known group differences.
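The report does not reproduce its computations here, but the two reliability checks named above are standard. The following minimal sketch, with simulated data in place of the project's, computes Cronbach's alpha for a composite index and a test-retest correlation across two survey waves.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a composite index.

    items: (n_respondents, n_items) matrix of scale responses.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def test_retest_r(wave1: np.ndarray, wave2: np.ndarray) -> float:
    """Pearson correlation between the same scale measured at two waves."""
    return float(np.corrcoef(wave1, wave2)[0, 1])

# Simulated example: 200 respondents, a 5-item index at two waves.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
wave1 = latent[:, None] + rng.normal(scale=0.7, size=(200, 5))
wave2 = latent[:, None] + rng.normal(scale=0.7, size=(200, 5))
print(cronbach_alpha(wave1))                      # internal consistency
print(test_retest_r(wave1.sum(1), wave2.sum(1)))  # stability over time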
More importantly, we found that geo-based web-surveys are sensitive to neighborhood
differences that have been masked by large-scale surveys in the past. National or citywide
surveys, for example, often contribute to the impression that African Americans, Latinos and
whites are homogeneous groups with little within-group variability in their assessments of the
police. Because the data were collected in smaller geographic areas over time, the Internet
surveys were able to capture sizable differences in police performance assessments within
racial/ethnic groups.
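As an illustration of this kind of disaggregation (with invented numbers, not project data), beat-level means computed within each racial/ethnic group reveal the spread that a single citywide average would conceal:

```python
import pandas as pd

# Hypothetical respondent-level data: beat, race, and a police rating scale.
df = pd.DataFrame({
    "beat":   ["0111", "0111", "0812", "0812", "1533", "1533"],
    "race":   ["Black", "Black", "Black", "Black", "Latino", "Latino"],
    "rating": [2.1, 2.4, 3.8, 3.5, 3.0, 3.3],
})

# Beat-level means within each racial/ethnic group expose the within-group
# variation that a citywide group mean would mask.
beat_means = df.groupby(["race", "beat"])["rating"].mean()
within_group_spread = beat_means.groupby("race").agg(["min", "max", "std"])
print(within_group_spread)
```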
The community measurement component of the web survey covered several variable
domains:
Neighborhood conditions: Fear of crime, social and physical disorder, crime problems,
and overall perceptions of neighborhood conditions
Individual resident performance: Individual, household, and collective crime prevention
knowledge and behaviors
Community performance: Informal social control and collective efficacy
For the community scales, validity tests were performed using multi-method procedures
and the results were encouraging. For example, looking at neighborhood conditions in our 51
police beats, we compared data from our web survey against telephone survey data. The
correlations between these two methods, using data collected three years apart with different
random samples, are remarkably strong (ranging from .552 to .790). These findings suggest that
the Internet can be used to capture valid impressions of neighborhood conditions in relatively
small geographic areas.
The table below shows the construct validity testing; we compared the findings from the
Internet and telephone surveys with official police records for the 51 police beats. Again, the
correlations between data collected from three very different methods are consistently positive
and almost always statistically significant. Furthermore, actual neighborhood problems predict
perceptions and fear. Neighborhoods (police beats) with higher levels of violent crime, illegal
drugs, weapons, and disorder (as defined by the Chicago police) are places where web-survey
respondents report significantly higher levels of fear, victimization, and disorder.
Official Chicago Police Department Crime Data (logged)

                               Crime    Violent  Robbery  Homicide  Drug     Weapons  Disorder
UIC Internet Survey
  Fear                         .489**   .702**   .694**   .522**    .725**   .770**   .345*
  Victimization                .186     .342*    .331*    .353*     .394**   .477**   .354*
  Disorder                     .276     .541**   .463**   .485**    .698**   .680**   .284*
  Disorder                     .216     .476**   .422**   .427**    .652**   .620**   .265
Northwestern Telephone Survey
  Fear                         .292*    .482**   .490**   .351**    .519**   .586**   .371**
  Disorder                     .188     .400**   .405**   .308*     .514**   .563**   .233
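Correlations of this kind can be reproduced in a few lines of code. The sketch below, using simulated rather than actual beat data, correlates a beat-level survey scale with log-transformed official counts, the transformation implied by "(logged)" in the table header; adding 1 before logging is our assumption to guard against zero counts.

```python
import numpy as np
from scipy.stats import pearsonr

def beat_level_correlation(survey_means, crime_counts):
    """Correlate beat-level survey scale means with logged official counts.

    survey_means: one scale mean per beat (e.g., the fear-of-crime index).
    crime_counts: official incident counts for the same beats.
    Returns (Pearson r, two-sided p-value) across the beats.
    """
    logged = np.log(np.asarray(crime_counts, dtype=float) + 1.0)  # log(x+1)
    return pearsonr(survey_means, logged)

# Simulated data for 51 beats, for illustration only.
rng = np.random.default_rng(1)
violent = rng.poisson(lam=120, size=51)
fear = 0.01 * violent + rng.normal(scale=0.4, size=51)
r, p = beat_level_correlation(fear, violent)
print(f"r = {r:.3f}, p = {p:.4f}")
```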
(3) What are the effects on the target audiences of collecting and feeding back this
information?
For the general public, how does participating in online surveys affect their perceptions
of neighborhood problems, community capacity, the local police, and participants' own crime
prevention knowledge and behavior? For CAPS members, does their participation have some of
the same effects, and furthermore, does it enhance the problem solving process?
To explore these questions, the Chicago Internet Project included randomized trials with
two separate groups: CAPS participants and a random sample of residents from the same police
beats. The random sample provided stronger external validity because it is more representative
of households in the study population. The CAPS group, however, provided the opportunity to
test the effects of information sharing within a face-to-face police-community partnership. A
third group, police officers, was also studied. Because CAPS is a joint problem-solving environment,
how the police respond to beat problems or evaluate their partnership with the community is also
important.
The procedural elements of the experiment include the collection of new information
through Internet surveys, the dissemination (feedback) of this information to police and residents
in selected beats, and supplemental education/training in the use of survey findings and/or crime
prevention advice.
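The report describes the design at this high level only. As a simplified sketch of the assignment step (the condition labels follow the experimental conditions described below, and the project's actual matching or stratification procedure is not reproduced), beats might be randomized as follows:

```python
import random

def assign_beats(beats, conditions, seed=42):
    """Randomly assign study beats to experimental conditions.

    A simplified sketch: shuffle the beats, then deal them out across
    the conditions in near-equal numbers.
    """
    rng = random.Random(seed)
    shuffled = beats[:]
    rng.shuffle(shuffled)
    return {b: conditions[i % len(conditions)] for i, b in enumerate(shuffled)}

conditions = ["control", "feedback", "feedback+training"]
beats = [f"beat_{i:02d}" for i in range(1, 52)]  # 51 study beats
assignment = assign_beats(beats, conditions)
```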
Despite months of careful planning, training, and implementation monitoring, the results
of these two experiments were not encouraging. Detailed analyses indicate that police beats
assigned to the experimental conditions (survey feedback or survey feedback plus additional
training and/or crime prevention advice) did not differ from the control beats on a wide range of
police and community outcome measures. The question is, “Why?” For the CAPS experiment,
one possibility is that the survey response rate was extremely low. This was due in large part to
inconsistent implementation of essential project tasks on the part of the police. Police in the test
beats were assigned primary responsibility for making residents aware of the opportunity to
participate in the Internet survey and leading discussions of the survey results at beat meetings, a
strategy intended to invest police more fully in the process. However, they frequently failed to
carry out the project tasks, most notably failing to discuss survey results on more than half of the
occasions where they were expected to do so. Multiple obstacles to implementation were
identified, including communication breakdowns within the organization, police resistance or
lack of commitment to the project, rigid patterns of communication, immutable expectations of
police and residents as to their roles at beat meetings, reassignment of personnel, and a general
lack of organizational support for CAPS.
CAPS participants were still given survey results (primarily results collected from the
random sample in their beat), but this did not change their opinions about the police or the
community. The absence of serious problem solving at most beat meetings is considered the
most formidable obstacle to implementation and the most likely explanation for the lack of
impact. We learned that CAPS is a culture unto itself, with strong (and relatively traditional)
norms about police and community roles (see Graziano, 2007). Rather than engage in joint
problem solving, the police are expected to respond to residents' complaints, similar to 911 calls,
but in person. In lower-crime neighborhoods, CAPS meetings can sometimes become social
events where problem solving would be viewed as an inconvenience. Hence, the introduction of
new survey information and pressure from the police administration (and the University) to
engage in problem solving was met with some passive resistance at various levels.
On a more positive note, we were able to identify a group of randomly selected residents
in each of the 51 police beats who were willing to go online and remain engaged in the panel
survey for several months and multiple surveys. These participants were generally younger than
the CAPS sample and less inclined to attend community meetings. Hence, through random
sampling and telephone outreach, we were able to "democratize" the process of engaging the
community in a dialogue about public safety issues. (Skogan et al., 2002, note that CAPS
participants represent, on average, only 0.5% of the beat population.) These randomly selected
individuals represent "the silent majority" in neighborhoods and their public safety input is rarely
sought, except via occasional large-scale surveys. Their knowledge, perceptions, beliefs,
attitudes and opinions became the primary data for testing a new measurement system. Although
these randomly sampled residents agreed to participate, and provided valuable data, the
experimental interventions did not change their perceptions or behavior. We suspect that these
null effects were due to weak "dosage of the treatment." Most reported that they saw the survey
results, but only a small number of respondents clicked on the community resource links to
receive additional information about community crime prevention.
(4) What lessons are learned that are transferable to other communities?
From management and research perspectives, the efficiency of the Internet allows for
hundreds of performance comparisons within and across jurisdictions. This tool can be used, for
example, to assess the impact of localized interventions (e.g. the impact of installing cameras in
crime hot spots on residents’ awareness, fear, and risk of detection compared to control
locations), to compare performance across beats or districts (e.g. police visibility and response
times across different Latino beats), or to compare performance across jurisdictions (e.g.
perceived police demeanor during traffic stops in African American neighborhoods in Chicago,
London, and Los Angeles). The possibilities are endless, but making comparisons is the key to
good measurement (Maguire, 2004).
Our experience in Chicago suggests that motivated communities will find it quite feasible
to institute a system of online performance measurement. However, this project raises many
questions that must be addressed in the future. First, what is the primary purpose of the system?
For example, are you interested in: (1) assessing community needs, defining and analyzing
problems, and identifying community resources? (2) measuring police performance? (3)
measuring community performance? (4) evaluating new public safety programs? and/or (5)
querying the public about police policies and procedures and other justice initiatives? In the
Chicago Internet Project, we covered a plethora of domains, but we would encourage other
communities to identify their primary interest first.
Second, whose opinions in the "community" are you seeking? Do you want the views of
random, representative samples? How about persons who monitor places and are experts on
specific locations? How about persons with recent police encounters? Although community
samples are feasible (as we have shown), they are not the most cost-efficient way to implement a
web-based system. To begin with, we would encourage technologically savvy communities to
systematically evaluate how police perform during routine police encounters. This topic has
been a major source of tension between the police and the community for the past two decades.
As public interest in procedurally just policing reaches unprecedented heights, web-based
surveys offer one possible solution. We believe that customer satisfaction with police services
and police encounters is the next frontier for systematic measurement to address equity concerns.
In the U.S., 43.5 million persons had face-to-face contact with the police in 2005 (Durose et al.,
2007). Residents have contact with the police in various settings (e.g. calls for service, incident
reports, community meetings, vehicular or pedestrian stops, arrests) and in each of these
encounters, email addresses could be collected and entered into the system. Email invitations to
complete a short customer satisfaction survey could be sent in the days that follow. This type of
feedback can be used to monitor and adjust performance at the individual, beat or district levels
or within special units or bureaus.
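A minimal sketch of this follow-up protocol appears below. The encounter fields and the three-day delay are assumptions for illustration; the report specifies only that invitations would be sent "in the days that follow."

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Encounter:
    email: str
    beat: str
    kind: str        # e.g., "call_for_service", "traffic_stop"
    occurred: date
    invited: bool = False

def due_for_invitation(encounters, today, delay_days=3):
    """Return encounters whose follow-up satisfaction survey is now due.

    delay_days is an assumption; the report says only that invitations
    would follow the encounter by a few days.
    """
    cutoff = today - timedelta(days=delay_days)
    return [e for e in encounters if not e.invited and e.occurred <= cutoff]

log = [Encounter("resident@example.org", "0111", "traffic_stop", date(2007, 6, 1))]
for enc in due_for_invitation(log, today=date(2007, 6, 5)):
    # send_invitation(enc)  # hypothetical, e.g. the SMTP helper sketched earlier
    enc.invited = True
```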
The National Research Council report on the status of American policing entitled,
Fairness and Effectiveness in Policing (2004), emphasizes that public confidence in the police
depends not only on an organization's effectiveness in fighting crime, but also on the public's
perception of how they are treated by the police and the perceived quality of police service
during police-citizen encounters. We believe that police organizations that endorse this type of
measurement system will receive very high marks for organizational transparency, legitimacy,
professionalism, and responsiveness to the community.
As a final question, who should have access to, and control over, the information
generated? Although police cooperation and partnership are essential, we reluctantly conclude
that the police organization should not completely control the data collection system. History
suggests that maximum reform will occur when outsiders are watching and involved. Without
question, internal gains have been achieved by introducing new mission statements, rules and
regulations, officer training, and supervision, but critics have argued that external oversight is
necessary to achieve sizable and lasting change in police organizations. Reiss (1971) and
Mastrofski (1999) have proposed independent “auditing bureaus” to collect data on how citizens
are treated by the police and vice versa. Additionally, in recent years, independent police
auditors have been created as part of new accountability structures (Walker, 2005), although not
necessarily with survey skills or interests. We propose that the important function of external
performance measurement be assigned to universities or other independent organizations with
expertise in both policing and social science research and that are able to establish working
partnerships with the police and other stakeholders.
TABLE OF CONTENTS
CHAPTER ONE. INTRODUCTION………………………………………………………….. 1
A. Overview…………………………………………………………………………….... 1
B. Goals and Objectives………………………………………………………………….. 2
C. Key Research Questions………………………………………………………………. 2
CHAPTER TWO. LITERATURE REVIEW…………………………………………………. 4
A. Overview………………………………………………………………………………. 4
B. Accountability…………………………………………………………………………. 4
C. The Information Imperative………………………………………………………….... 5
D. Information Technology and the Police……………………………………………….. 6
E. Testing Web-Based Community Survey………………………………………………. 7
F. Survey Feedback……………………………………………………………………….. 8
CHAPTER THREE. PROCEDURES AND METHODS…………………………………….. 9
A. Experimental Design…………………………………………………………………. 9
B. Setting…………………………………………………………………………………. 9
C. Selection and Assignment of Study Beats……………………………………….….... 10
D. Selection of Internet Users within Study Beats……………………………………….12
E. Development of Internet Survey Instruments………………………………………... 16
F. Development and Implementation of Observation and Questionnaire Methods…….. 16
G. Development of Website and Educational Linkages………………………………….18
H. Posting Surveys/Managing the Monthly Sample…………………………………….. 24
CHAPTER FOUR. LEVELS OF PARTICIPATION IN THE CHICAGO INTERNET PROJECT……….………. 28
A. CAPS Resident Participation………………………………………………………… 28
B. Random Sample Participation………………………………………………………... 37
CHAPTER FIVE. ADVANCES IN MEASUREMENT: THE DIMENSIONS OF INTERNET SURVEY INFORMATION……………. 46
A. Measurement Overview……………………………………………………………… 46
1. Traditional Performance Measures…………………………………………… 46
2. Establish a Mandate and Information Imperative……………………………. 48
3. Level of Measurement………………………………………………………... 49
4. Community-Based Measurement…………………………………………….. 50
5. Community Performance……………………………………………………... 51
B. Measurement Theory and Scale Construction……………………………………….. 51
C. Measures of Police Performance…………………………………………………….. 52
1. Dimensions of Police Performance …………………………………………... 53
2. General Assessments of the Police…………………………………………… 54
3. Global Evaluations of the Police……………………………………………… 54
D. Competency Indices…………………………………………………………………. 56
1. Police Knowledge Index……………………………………………………… 56
2. Police Reliability Index………………………………………………………. 56
E. Neighborhood Specific Evaluations of the Police……………………………………. 57
1. Responsiveness to Community Index………………………………………… 57
2. Satisfaction with Neighborhood Police………………………………………. 58
F. Experience-based Assessments of the Police………………………………………… 58
1. Assessments of Police Stops Index …………………………………………... 59
2. Satisfaction with Police Contacts…………………………………………….. 60
G. Performance at Public Meeting Index……………………………………………….. 62
H. Affective Response to Police Encounters…………..………………………………... 63
1. Anxiety Reaction Index………………………………………………………. 63
2. Secure Reaction Index………………………………………………………... 63
I. Assessments of Organization Outcome………………………………………………. 64
1. Police Visibility Index………………………………………………………... 65
2. Effectiveness in Preventing Crime Index…………………………………….. 65
3. Effectiveness in Solving Problems Index…………………………………….. 66
4. Willingness to Partner with the Police Index………………………………… 67
5. Engagement of the Community Index………………………………………... 67
6. Police Misconduct Index………………………………………………………68
7. Racial Profiling Index………………………………………………………… 69
8. Organizational Legitimacy Index…………………………………………….. 70
J. Measuring Individual and Collective Performance Indicators……………………….. 71
1. Neighborhood Conditions……………………………………………………. 71
2. Individual Resident Performance…………………………………………….. 76
3. Collective Performance………………………………………………………. 80
K. Further Validation of Scales…………………………………………………………. 82
1. Multi-Method Validation of Scales…………………………………………... 82
2. Known Groups Validation of Scales…………………………………………. 83
L. Measurement Sensitivity……………………………………………………………... 86
1. Within-Race Differences……………………………………………………... 86
2. Identifying Hot Spots………………………………………………………… 90
CHAPTER SIX. THE CAPS EXPERIMENT: FINDINGS AND LESSONS LEARNED... 91
A. Implementation Results within the CAPS Framework……………………………… 91
1. Feasibility Study……………………………………………………………… 91
2. Implementation: Protocol and Integrity……………………………………… 91
3. Police Attitudes about Participation in Project……………………………… 100
4. Obstacles to Implementation………………………………………………… 104
B. Testing the Effects on Residents and Officers: Hypotheses and Methods …………. 111
1. Hypotheses …………………………………………………………………. 111
2. Measurement and Scale Construction………………………………………. 112
3. Sample………………………………………………………………………. 118
4. Statistical Techniques and Analysis………………………………………… 121
C. Results……………………………………………………………………………… 124
CHAPTER SEVEN. RESULTS FROM THE RANDOM SAMPLE: THE IMPACT OF INTERNET INFORMATION ON PARTICIPANTS’ PERCEPTIONS AND BEHAVIORS……………… 133
A. Hypothesized Impact of Interventions on Random Sample………………………… 133
B. Implementation Problems with Website……………………………………………. 135
C. Statistical Techniques and Analysis Strategy………………………………………. 136
D. Results………………………………………………………………………………. 140
E. Summary……………………………………………………………………………. 147
CHAPTER EIGHT. CONCLUSIONS……………………………………………………… 148
A. The Experimental Interventions…………………………………………………….. 148
B. New Measurement System…………………………………………………………. 150
REFERENCES……………………………………………………………………………….. 153
LIST OF APPENDICES
APPENDIX A. Pre-test and Post-Test Police Questionnaires……………………………... A1
APPENDIX B. Pre-test and Post-Test Citizen Questionnaires……………………………. B1
APPENDIX C. Beat Meeting Observation Form…………………………………………… C1
APPENDIX D. Directives Memo for CPD Personnel………………………………………. D1
APPENDIX E. Instructions Flyer for Completing Web Survey…………………………… E1
APPENDIX F. Example of Results Distributed at Beat Meetings…………………………. F1
APPENDIX G. Problem Solving Exercise…………………………………………………... G1
LIST OF TABLES
CHAPTER THREE
Table 3.1 Descriptive Statistics of Chicago Internet Study Beats……………………………… 11
Table 3.2 Telephone Pre-Experiment -- Final Calling Outcomes (April 2005)………………… 14
Table 3.3 Post-Experiment Final Calling Outcomes (January 2006)…………………………… 15
Table 3.4 Overview of the Monthly Link Content……………………………………………… 23
CHAPTER FOUR
Table 4.1 CAPS Response Rate for Internet Surveys (%)………………………………………. 28
Table 4.2 Participants at CAPS Beat Meetings and CAPS Participants who Completed Internet Surveys……………... 30
Table 4.3 Obstacles to Resident Participation as Identified by Civilian and Police Facilitators (N=70)…………….. 34
Table 4.4 Response Rates for Internet Surveys…………………………………………………. 38
Table 4.5 The Number of Internet Surveys Completed…………………………………………. 40
Table 4.6 Respondent’s Profiles………………………………………………………………… 40
Table 4.7 Summary of Demographic Characteristics of Respondents by Participation Level….. 42
Table 4.8 Bivariate Results for Predictors of Participation in Internet Surveys………………… 43
CHAPTER FIVE
Table 5.1 A Comparison of Telephone and Internet Data………………………………………. 83
Table 5.2 A Comparison of Official and Internet Data…………………………………………. 83
Table 5.3 HLM Linear Regression Estimates for the Impact of Residents’ Race on Policing Constructs……………. 85
Table 5.4 Bivariate Correlations for Residents from African American Communities………… 87
CHAPTER SIX
Table 6.1 Implementation Protocol by Experimental Condition……………………………….. 93
Table 6.2 Project Flyer Distribution Rate by Experimental Condition (%)…………………….. 95
Table 6.3 Distribution of Survey Results in Feedback and Training Groups (%)……………………. 97
Table 6.4 Discussion of Survey Results in Feedback and Training Groups (%)……………….. 97
Table 6.5 Obstacles to Implementation as Identified by Civilian and Police Facilitators (N=68)………….. 105
Table 6.6 Demographic Characteristics of Citizen Beat Meeting Participants by Experimental Conditions (N=668)…………119
Table 6.7 Demographic Characteristics of Citizen Beat Meeting Participants by Experimental Conditions (N=184)………. 120
Table 6.8 Summary of Community Hypotheses for the CAPS Experiment…………………… 122
Table 6.9 Summary of Police Hypotheses for the CAPS Experiment…………………………. 123
Table 6.10 A Summary of Multilevel Regression Estimates for Community Hypotheses of CAPS Experiment (Resident Questionnaire)………………………. 125
Table 6.11 OLS Regression Estimates for Community Hypotheses of CAPS Experiment (Observation Form)…………... 126
Table 6.12 Logistic Regression Estimates for Commitment to Future Action for Community Hypotheses/CAPS Experiment (N = 48)………………………... 127
Table 6.13 OLS Regression Estimates for Police Hypotheses of CAPS Experiment (Police Questionnaire)……………… 128
Table 6.14 OLS Regression Estimates for the Level of Implementation for Community Hypotheses of CAPS Experiment (Resident Questionnaire)……………… 129
Table 6.15 OLS Regression Estimates for the Level of Implementation for Community Hypotheses of CAPS Experiment (Observation Form)……………… 130
Table 6.16 Logistic Regression Estimates for the Level of Implementation for Commitment to Future Action of CAPS Experiment (N = 48)………………….. 131
Table 6.17 OLS Regression Estimates for the Level of Implementation for Police Hypotheses of CAPS Experiment (Police Questionnaire)……………… 131
Table 6.18 OLS Regression Estimates for Citizen/Beat Characteristics for Survey Completion Rates (N=50)………………………. 132
CHAPTER SEVEN
Table 7.1 Summary of Policing Hypotheses for Random Sample with Descriptive Statistics……………….. 138
Table 7.2 Summary of Individual and Community Hypotheses for Random Sample with Descriptive Statistics………………. 139
Table 7.3 A Summary of Multilevel Regression Results for Policing Hypotheses with Random Sample of Respondents…………… 141
Table 7.4 A Summary of Multilevel Regression Results for Individual/Community Hypotheses with Random Sample of Respondents…………. 142
Table 7.5 A Summary of Multilevel Regression Results for the Number of Internet Surveys Completed for Policing Hypotheses………….. 143
Table 7.6 A Summary of Multilevel Regression Results for the Number of Internet Surveys Completed for Individual/Community Hypotheses………….. 144
Table 7.7 A Summary of Multilevel Regression Estimates for Viewed Survey Results on Policing Hypotheses………………. 145
Table 7.8 A Summary of Multilevel Regression Estimates for Viewing Results on Individual/Community Hypotheses……………… 146
LIST OF FIGURES
CHAPTER THREE
Figure 3.1 Chicago Internet Project Website Page……………………………………… 19
Figure 3.2 Login Page…………………………………………………………………… 20
Figure 3.3 Survey Results Example……………………………………………………... 21
Figure 3.4 Crime Prevention Concepts Example……………………………………… 22
Figure 3.5 Community Participation Example………………………………………...... 22
Figure 3.6 Community Resource Links…………………………………………………. 24
CHAPTER FIVE
Figure 5.1 Box plots for Police Manners and Fairness Scales…………………………... 87
Figure 5.2 Box Plots for Police Problem Solving and Reliability Scales……………….. 88
Figure 5.3 Box Plots for Police Responsiveness Scales………………………………… 89
CHAPTER ONE
INTRODUCTION
A. Overview
Despite the observable progress in the areas of partnership building, problem solving,
information technology, data-driven deployment, and police accountability, the fact remains that
law enforcement organizations (or other entities) have yet to develop data systems to measure
“what matters” to the public and “what matters” according to community policing and problem-
oriented policing models. One can argue that this reality has limited organizational change,
stunted the growth of police-community relations, and restricted organizational and community
efficacy (Rosenbaum, 2004). The current project sought to fill this gap by developing,
implementing, and evaluating the Chicago Internet Project. The University of Illinois at
Chicago, in cooperation with the Chicago Police Department and community residents,
developed and field tested a comprehensive web-based community survey in 51 Chicago police
beats. The elements of the intervention included the collection of new information online, the
dissemination (feedback) of this information to police and residents, the use of these new data
elements in a problem solving setting (CAPS), and training in the use and interpretation of
survey findings.
CAPS (Chicago Alternative Policing Strategy), the Chicago Police Department’s
community policing program, consists of multiple components seeking to form and strengthen
police-community partnerships in Chicago. This project was implemented, in part, within the
context of CAPS community beat meetings. Chicago has 281 police beats in 25 police districts.
Each month community residents have a structured opportunity to meet with beat officers and
their team sergeant to engage in beat-level problem solving (see Skogan & Hartnett, 1997;
Skogan et al., 1999; Bennis et al., 2003). On average, 25 residents and 7 police officers attend
typical beat meeting. In one component of this study, monthly Internet surveys were completed
by CAPS beat meeting participants. In another component, monthly Internet surveys were
completed by a random sample of residents from each study beat who do not regularly attend
CAPS meetings. CAPS participants comprise a voluntary, self-selected group that represents, on
average, only 0.5% of the beat population. They are more likely than the typical resident to be
homeowners, non-Latinos, and residents over 65 (Skogan, 2006). Hence, the random sample of
online participants provided a separate test of external validity and allowed us to examine
feedback effects without public deliberation.
Whether online surveys and information feedback loops can contribute to public
safety processes is an important question in this electronic age. The Internet opens the door to
virtually unlimited possibilities for two-way information sharing between the police and the
community. For years, police have resisted sharing crime-related information with residents
because of concern about heightened fear, but this apprehension has not been supported by
controlled tests of this hypothesis (see Rosenbaum et al., 1998). But the question remains, what
is the impact of sharing diverse types of geo-based information on public perceptions of crime,
police, police-community partnerships and the community itself? Can partnerships, problem
solving, and community crime prevention behaviors be enhanced via this process? To date, we
know very little about how new crime-related and prevention-related information will affect
perceptions and behaviors at the individual or collective level. At the same time that social
scientists are beginning to explore the social consequences of Internet use (see Katz & Rice,
2002, for a review of the literature), our knowledge of its effects in the police-community
context is virtually nonexistent.
B. Goals and Objectives
The dual goals of this project are (1) to successfully implement, on a large scale, a
comprehensive web-based community survey and identify the challenges to transferring this
infrastructure to other settings; and (2) to determine whether a web-based survey system can
enhance the problem solving process, increase community engagement, and strengthen police-
community relations. The elements of the intervention include the collection of new information
through the Internet, the dissemination (feedback) of this information to police and residents, the
use of these new data elements in a problem solving setting (CAPS), and training in the use and
interpretation of survey findings.
C. Key Research Questions
Several key questions were addressed as part of this Chicago Internet Project:
(1) Can we successfully design and implement a comprehensive community Internet
survey? Specifically, what resources and design processes are necessary to build the
infrastructure and implement online surveys and feedback mechanisms in the field? What
obstacles were encountered and what lessons were learned?
(2) As a measurement device, how well does the Internet survey perform with respect to
measuring neighborhood problems, community and police performance, and local program
outcomes? Are the survey questions reliable over time? Do they have content validity, covering
a wide range of relevant constructs and components of these constructs? Do they have construct
validity, thus tapping into some of the key underlying constructs in the police-community arena?
(3) How well does the Internet survey work as a mechanism for giving feedback to
community residents and police officers? Specifically, are the recipients open to receiving
feedback and do they take the process seriously? Do they spend time discussing and reacting to
the information? Are they able to use the information to identify and prioritize neighborhood
problems?
(4) What are the effects on the target audiences of collecting and feeding back this
information? For the general public, how does participating in this process affect their
perceptions of neighborhood problems, community capacity, the local police, and residents’ own
crime prevention knowledge and behavior? For CAPS participants, does it enhance the problem
solving process by improving problem identification or analysis? Does it stimulate more
solutions and action plans?1
Whether residents participate in CAPS or receive feedback online, at the core of this
evaluation is a set of questions about whether survey information, when supported with training
and technical assistance, will influence residents’ perceptions of crime, neighborhood safety, the
local police, and their own capacity to prevent crime.
Here we seek to understand the effects of four processes: (1) feedback of public
attitudinal and perceptual survey data; (2) feedback of crime prevention tips; (3) public
deliberation about survey findings; and (4) training in the use of survey research findings (see
research design below for details). We have conducted randomized field experiments testing
specific Internet interventions with two separate groups: CAPS participants and a random
sample of residents from the beat. The latter provides greater external validity as this group is
more likely to be representative of households in their neighborhood. The CAPS group,
however, provides the advantage of allowing us to examine the impact of information sharing
within a police-community partnership.
This study will also examine the impact of survey information and public deliberation on
officers’ perceptions and behaviors. Because CAPS is a joint problem solving environment, the
role of police is important. How they respond to beat problems or evaluate their partnership with
the community may or may not be affected by the survey findings.
(5) Do the effects of information feedback vary as a function of individual or group
characteristics? The beats and individuals sampled are quite diverse, and thus various factors
may interact with the treatment to produce conditional effects. Will residents respond differently
to this experiment than the police? Will residents from predominantly African American or
Latino beats respond differently than residents of predominantly White beats? Will neighborhood
context (e.g. levels of crime and poverty), which affects community capacity, influence this
particular intervention?
(6) What lessons are learned that are transferable to other communities? Despite
extensive field testing, many feasibility questions remain. These questions include: What
obstacles are encountered in the data collection, analysis and feedback process? How can the
Internet survey be refined and adapted for use in other cities? Can the results be helpful in
establishing expectations about levels of participation, identifying obstacles to online surveys,
and evaluating specific survey items and scales? The Chicago Internet Project also raises some
fundamental questions about how best to enhance the accountability of the police to the
communities they serve, and the potential role of universities or other independent entities in the
data collection and reporting process.
1 In 2003, only 1 in 5 Chicago beat meetings resulted in an action plan.
CHAPTER TWO
LITERATURE REVIEW
A. Overview
Over the past 20 years, we have witnessed a flurry of activity directed at improving
policing in America. The COPS Office and NIJ have promoted substantive reforms and
evaluation research, respectively, at the local and national levels. Community policing and
problem solving have emerged as the primary models for policing in the future (see Greene,
2000; Rosenbaum, 1994), but the challenges ahead are numerous (see Fridell & Wycoff, 2004;
National Research Council, 2004; Skogan, 2003a). A few quick examples:

(1) Advanced technology is now available, but using it for sophisticated problem solving,
strategic planning or community engagement is a task for the future.

(2) Accountability is the coin of the realm, but accountability to whom and for what? Rather
than accountability to central administrators for crime rates, can police organizations become
more transparent to the public and accountable at the neighborhood level for things that matter
to local residents?

(3) Zero-tolerance, broken windows, Compstat, and hot spots policing may have played some
role in declining crime rates in the 1990s (Weisburd & Braga, 2006), but critics argue that
aggressive policing will undermine trust and confidence in the police, especially in minority
communities (Tyler, 2005; Walker & Katz, 2008), and may have other adverse effects as well
(see Rosenbaum, 2006, 2007). Some have argued that police can be both stronger and gentler
under the right conditions (Harris, 2003), but what are those conditions and why haven’t we
created them more often?

(4) Police organizations are learning the value of partnerships in crime fighting (McGarrell &
Chermak, 2003; Roehl et al., 2006), but we know little about the partnership dynamics that too
often undermine these relationships and limit problem solving skills (see Rosenbaum, 2002). As
we strengthen the capacity of police organizations to respond to public safety issues (with better
training, intelligence, analysis capabilities, accountability, etc.), what have we done to
strengthen communities? How institutionalized are the partnerships and what can be done to
strengthen them?

(5) Finally, police organizations today are engaging in a multitude of problem solving and
community engagement projects, including street-level hot spots policing and disorder policing,
but how do we know when these efforts have been effective? How is success defined and what
data system can be used to measure success?
The Chicago Internet Project is based on the premise that, while much has been done
under the community policing and problem-oriented policing models, progress in reforming
police organizations and communities has been restricted by our failure to explore new measures
of success and new methods of accountability that are grounded in the community. While
community residents demand safer streets and less violence, they also want a police force that is
fair and sensitive to their needs (Rosenbaum et al., 2005; Skogan, 2005; Tyler, 2005; Weitzer &
Tuch, 2005). How can all of this be achieved, and how can it be measured?
B. Accountability
Traditionally, police accountability has been an internal and legal process, focusing on
the control of officers through punitive enforcement of rules, regulations and laws (Chan, 2001).
Today, police organizations are under pressure to be responsive to the public both for crime
control and police conduct. Following the lead of New York’s COMPSTAT model, we have
seen widespread interest in computer-driven measurement of police performance using
traditional crime indicators. These traditional measures are important, but grossly inadequate for
satisfying the new information imperative of community policing and for taking urban police
organizations to the next level of performance. Simply put, these indicators of performance do
not attempt to gauge in a meaningful way customer satisfaction with the quality of police service
or the quality of police-community partnerships. As Rosenbaum (2004) notes, only when the
performance evaluation systems change can we expect police-community interactions to change.
To achieve marked improvements in police performance, accountability systems will need to be
expanded to incorporate new standards based on the goals of partnership building, problem
solving, community engagement, and resident satisfaction with police services.
Similarly, local residents must be held more accountable for public safety and crime
prevention. Despite the overall success of community policing in Chicago, for example, most
local residents do not attend CAPS meetings and do not participate in problem solving (Skogan & Hartnett, 1997; Skogan, 2006). The residents who do participate often expect the police to
solve their problems, and sustaining their participation is a continuous challenge for the police.
Thus, creating official measures of residents’ performance may increase their accountability for
neighborhood conditions.
C. The Information Imperative
In this information driven society, community- and problem-oriented models of policing
create a new information imperative (Dunworth et al., 2000; Rosenbaum, 2004) and call on police
executives to “measure what matters” in the 21st century (see Masterson & Stevens, 2002; Mirzer, 1996; Skogan, 2003b). If policing organizations are serious about decentralization of
authority, for example, then beat officers must be empowered with up-to-date information about
neighborhood characteristics and should be accountable for their relationship with neighborhood
residents. If data-driven problem solving is a priority, then police officers and supervisors need
timely geo-based information relevant to all phases of the problem analysis process (see Boba,
2003; Goldstein, 1990). Especially important (and often neglected) are data about the concerns
and priorities of local residents and community organizations, as well as factors in the local
environment that are either preventative or criminogenic. If community engagement is a
priority, then police officers need reliable information about community capacity, current levels
of community crime prevention behaviors of neighborhood residents, and local resources that
can be leveraged to help prevent crime and disorder. Measuring the police-community interface
is critical for achieving strong police-community relations, and stimulating community-based
crime prevention. Both are needed to create the effective partnerships that were postulated as the heart of community policing and problem solving (Cordner, 1997; Rosenbaum, 2002; Schuck & Rosenbaum, 2006).
If police-community relations are a priority, accountability systems should begin to
examine the mundane day-to-day interactions between police and citizens. Here we should ask:
How are the police responding to residents as victims, witnesses, suspects, complainants, callers,
and concerned citizens? And how do residents respond to the police? In a nutshell, researchers,
community leaders, and police administrators should begin to ask: what are the important
dimensions of this relationship? Drawing on the private sector model, Mastrofski (1999)
outlines six characteristics of good police service: attentiveness, reliability, responsiveness,
competence, manners, and fairness. Skogan and Hartnett (1997) have validated several of these through citywide resident telephone surveys in Chicago. Yet a system for collecting timely, geo-based survey data has yet to be created.
An important policy question is how to get police officers to engage in these behaviors more often. Some gains have been achieved by introducing new mission statements, rules and
regulations, and officer training, but critics have argued that external oversight is necessary to
achieve sizable and lasting change in police organizations. Reiss (1971) and Mastrofski (1999)
have proposed independent “auditing bureaus” to collect data on how citizens are treated by the
police and vice versa. In any event, various methods have been employed for collecting new
information from citizens, ranging from surveys to beat meetings (see Skogan and Hartnett
1997). According to national surveys, one of the largest changes in police organizations between
1992 and 1997 was the use of citizen surveys to gauge public reactions (Fridell & Wycoff,
2004). By 1997, roughly 3 out of 4 departments claimed to have used citizen surveys to help them
identify needs and priorities, and nearly as many used them to evaluate police services. The
challenge, as laid out here, is to institutionalize this process using web-based technology.
D. Information Technology and the Police
The “information technology (IT) revolution” is a half-century old, yet it is just beginning
to impact the criminal justice system (see Brown, 2000; Chan, 2001), which lags far behind the
private sector (Dunworth, 2000; Reuland, 1997). Since the mid-1990s, the COPS Office has
helped to stimulate a renewed interest in IT, especially by funding laptop computers for patrol
officers (see Roth et al., 2000). A variety of new technology-driven law enforcement initiatives
have received national attention in recent years, such as COMPSTAT (McDonald, 2000, 2005)
and COMPASS (Dalton, 2002), and these models have given police organizations a taste of what
is possible. While law enforcement agencies are making significant progress toward harnessing
the power of information technology, rarely do these initiatives give attention to the information
imperative of problem solving and community policing. Rather, police tend to focus on new
ways of processing traditional data elements to catch known criminals (Chan et al., 2001; Rosenbaum, 2006; Weisburd et al., 2006).
One of the most sophisticated of these information systems is Chicago’s CLEAR (Citizen
and Law Enforcement Analysis and Reporting) program. As with other police data systems, it
has multiple components intended to improve traditional law enforcement strategies. A key
difference, however, is its community component, which has been conceptualized as a vehicle
for increased information sharing with the community (see Skogan et al., 2002; Skogan et al.,
2005). This community component of Chicago’s CLEAR program remains undeveloped, but
recently, the Chicago Police Department, in partnership with community organizations, has
moved ahead with a new initiative–CLEARPath–in the hope of beginning to fill this gap.
Hence, when the Chicago Internet Project was initiated, the research team faced a unique
opportunity to begin measuring, for the first time, neighborhood concerns and behaviors that are
known to be important for maintaining public safety and strengthening police-community
partnerships. Having spent considerable time developing and testing measures of the public’s
perceptions of crime, disorder, and police performance, as well as residents’ reactions to crime
(e.g. Rosenbaum, 1986; 1994; Rosenbaum & Baumer, 1981; Rosenbaum, Lurigio, & Davis,
1998; Rosenbaum et al., 2005; Schuck & Rosenbaum, 2005), we began working with the CPD
and community leaders to develop an unprecedented web-based system of data collection with
the potential for transferability.
E. Testing a Web-Based Community Survey
A number of police departments in the United States now offer online information about
their services, programs and crime statistics (Haley & Taylor, 1998; Rosenbaum, Graziano, &
Stephens, in preparation). To date, however, few have moved beyond simply posting information
to the point of embracing the Internet as a proactive tool for obtaining new information about
neighborhood conditions, solving problems, building partnerships, evaluating programs, and
assessing unit performance. Rosenbaum (2004) has proposed a comprehensive website with five
major components, ranging from crime reporting to performance assessment. The focus of this
project is on one major component, namely, web-based community surveys. Although a few
police departments conduct Internet surveys, these efforts are not comprehensive,
methodologically defensible, or institutionalized. Furthermore, the information is not used as a
primary source for strategic or tactical planning by police or community residents.
In Chicago, we developed and pilot tested a comprehensive web-based community
survey that is designed to achieve several measurement objectives:
(1) Monitor neighborhood conditions and citizen performance. Citizen performance measures will capture levels of community involvement, collective efficacy, perceptions of and fears about safety and crime, problem solving skills, and crime prevention behaviors, among other constructs.
Knowing the level of community efficacy and involvement will someday allow police and
community leaders to determine the scope of community building efforts that are needed before
satisfactory community-police collaborations can occur. Abrupt reductions or increases in
citizen perceptions of crime problems and fears will help monitor “perceptual hot spots,” direct
police resources, and evaluate police and/or community initiatives within particular
communities.
(2) Monitor police performance. An Internet survey can be used to gauge residents’
perceptions of police performance, capturing aspects of police performance important to the
community, such as general perceptions of police competency and fairness. In addition to
capturing general sentiments, an Internet survey can be used to screen for persons who have had
a recent encounter with the police (as victims, witnesses, callers, drivers, walkers, arrestees,
meeting attendees, complainants, etc.) and then branch off into a series of questions about how
they were treated. These “customer satisfaction” items build upon the citywide resident
telephone surveys used to evaluate community policing in Chicago for 10 years (see Skogan &
Steiner, 2004) and adapted by the Vera Institute of Justice to evaluate police performance in New
York City and Seattle (Miller et al., 2005). Thus, key aspects of police-resident encounters can
be captured.
The major limitation of previous high-quality survey research, usually involving
telephone survey methods, is that the findings cannot be disaggregated to small geographic areas
and they are one-time “snapshots” of community responses. The added cost of collecting data
more frequently or producing larger sample sizes (needed for smaller areas) has been prohibitive. Online surveys offer a potentially cost-effective alternative.
(3) Evaluate anti-crime interventions. By conducting monthly or bi-monthly online
surveys at the police beat level, both the police and residents can receive timely data indicating
whether problems are increasing, decreasing, or staying the same. Data from comparable beats
that do not receive a particular police or community intervention can serve as control groups to
estimate program impact.
(4) Offer policy recommendations. Web-based surveys present a great opportunity for
police agencies to receive new ideas and suggestions from citizens. Multiple perspectives on
problems, programs, and policies can be encouraged.
F. Survey Feedback
An Internet survey is valuable for communities if it can produce timely, geo-based
information that is useful for planning or evaluating local police and community actions
designed to improve neighborhood safety. Within a community policing/problem oriented
policing framework, the Chicago Internet Project will examine whether survey feedback is useful
for changing the perceptions and behaviors of residents and police officers.
CHAPTER THREE
PROCEDURES AND METHODS
A. Experimental Design
This project employed a randomized trial to study and test our previously stated research questions (see Chapter 1) regarding the development of a comprehensive web-based
community survey. The basic design included (1) the collection of new community-based
information through the use of Internet surveys; (2) dissemination (feedback) of this information
to police and residents; and (3) exposure to additional training and educational materials. Both a random sample of residents and residents attending CAPS beat meetings in 51 Chicago police beats were asked to complete monthly Internet surveys. The survey results were then
supplied to residents in the random sample to view through the Internet and to police and
residents at their beat meetings for discussion and use in problem solving.
To test the impact of using this information and the potential benefits of additional
training/education for police and residents, the study beats were assigned to one of three
experimental conditions: control, feedback, and training. While residents in beats within each
condition were asked to complete Internet surveys, the critical component of feeding back survey
results was reserved for beats in the feedback and training conditions. Residents in the control
condition were simply asked to complete surveys each month and served as the baseline for
testing the impact of receiving feedback. Given the difference in samples (a random sample of
residents from the study beats vs. residents attending beat meetings), the nature of the
intervention varied for these groups. For the random sample, feedback consisted of providing
survey results to residents in the feedback and training beats via an Internet website. The
training component, administered through the same website, consisted of crime prevention and
other public safety information (discussed below). For CAPS participants, feedback consisted of
providing police and residents with paper copies of the survey results at their monthly beat
meetings to use in discussion and problem solving activities. Police in the training condition
were provided with additional classroom instruction on problem solving and use of survey
results. For a full discussion of the research design employed in the CAPS experiment, see
Chapter 6.
The project was introduced to participating beats in March 2005 and continued until
September 2005, during which six waves of Internet surveys were administered and five sets of
survey results were fed back to residents in the random sample and CAPS beat meeting
participants.
B. Setting
The study took place in Chicago, Illinois, a large Midwestern city. In 2004, the year before the research began, Chicago was the third-largest city in the United States, behind only New York and Los Angeles, with a total population of about 2.7 million residents. In 2004, about 46.8% of residents were White, 36.2% African American, and 27.4% Latino/a (these categories overlap because Latino/a residents may be of any race). The median household income was $40,656, slightly below the national median of $44,684.
C. Selection and Assignment of Study Beats
Using data from the 2000 U.S. Census and survey data from evaluations of the CAPS
programs, all 280 police beats in Chicago were analyzed in order to generate profiles of relevant
demographic variables for each beat. Using the key variables of income and race, the lowest
income beats were excluded and the remaining beats were then stratified to ensure diversity
within the final 60 beats selected. After initial telephone interviews within these 60 beats
(described below), 51 beats were selected to participate in the study. The 51 beats were
geographically located throughout the city and represent 18 of the 25 administrative Chicago
Police Department districts. Table 3.1 presents a comparison between the study beats and all beats in the city.2 Residents of the study beats tended to be more affluent than the general population of the city. For example, the median income of residents in the study beats was $51,663, compared to $36,981 for all residents. The study beats also had greater percentages of college graduates and homeowners, and fewer female-headed households. The overall crime rate, the violent crime rate, and the robbery rate were similar between the study beats and the entire city. However, on average, the homicide rate was significantly lower in the study beats than in all beats in the city.
There were two primary objectives in designing the beat sampling strategy. The first objective was to select beats that would have a large percentage of residents with Internet access. An earlier pilot project of the Chicago Internet Project (Skogan et al., 2004) highlighted the problem of the “digital divide”; that is, a larger percentage of economically disadvantaged residents do not have computers or access to the Internet. Because of cost and resource considerations, a decision was made to target more affluent beats in the city so that we could recruit a sample of residents who could participate in the project. This beat selection process accounts for the differences described above. The second objective was to ensure adequate representation of the diverse racial and ethnic communities in Chicago, especially African Americans, Latinos, and Whites.
The first step of the beat sampling strategy was to stratify all of the Chicago police beats into four racial and ethnic groups: predominantly White, African American, Latino/a, and mixed-race neighborhoods (a homogeneity index was computed for this purpose). The second step was to sort each of the four strata by the percentage of residents with annual incomes of $40,000 or greater. This process yielded a sampling frame that could be used to select an adequate number of beats from racially and ethnically homogeneous communities (i.e., White, African American, and Latino/a) and from racially and ethnically heterogeneous communities (i.e., no racial or ethnic group comprising a majority of the residents), while maximizing the potential for selecting beats within each of the racial/ethnic strata with a large percentage of residents who had Internet access.
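The report does not give the formula for its homogeneity index, so the sketch below assumes a common Herfindahl-style construction (the sum of squared group shares) and an illustrative 0.60 dominance cutoff; both are assumptions for illustration, not the project's documented choices.

    # Sketch of the stratification step, assuming a Herfindahl-style
    # homogeneity index and an illustrative 0.60 dominance threshold.
    def homogeneity_index(shares):
        """Sum of squared group shares; values near 1.0 mean one group dominates."""
        return sum(s ** 2 for s in shares)

    def classify_beat(beat, threshold=0.60):
        """Label a beat by its dominant group, or 'Mixed' if no group dominates."""
        shares = {"White": beat["pct_white"],
                  "African American": beat["pct_black"],
                  "Latino/a": beat["pct_latino"]}
        group, share = max(shares.items(), key=lambda kv: kv[1])
        return group if share >= threshold else "Mixed"

    # Hypothetical beat; within each stratum, beats would then be sorted by
    # the percentage of households with incomes of $40,000 or more.
    beat = {"pct_white": 0.72, "pct_black": 0.15, "pct_latino": 0.13}
    print(homogeneity_index(beat.values()), classify_beat(beat))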
The number of beats selected within each stratum was based on the racial/ethnic representation
of the population of Chicago beats. Additionally, because the study design dictated three conditions (i.e., control, feedback only, and feedback/training), the number of beats selected within each stratum had to be divisible by three.
2 Information on one beat was missing from Skogan’s data.
Table 3.1 Descriptive Statistics of Chicago Internet Study Beats

                                   Study Beats (N = 51)   All Chicago Beats (N = 279)
                                   M         SD           M         SD          F-Test
2000 Census Data
  Population                       12,884    4,928        10,379    5,427       9.44**
  % pop. 65 and older              11.03     5.47         10.08     4.54        1.76
  % pop. 15-24 years old           13.72     3.62         14.83     3.02        5.11*
  % White                          38.62     35.54        25.89     28.83       7.79**
  % African American               39.18     42.92        48.74     42.03       2.22
  % Latino/a                       17.58     26.01        20.45     26.08       .52
  Median income                    51,663    15,086       36,981    15,634      38.43***
  % income > $40,000               59.08     10.04        43.77     16.38       41.62***
  % income < $15,000               13.96     4.78         24.73     13.77       30.45***
  % college graduates              57.19     20.98        45.86     19.54       14.17***
  % female-headed households       7.92      6.18         13.56     9.77        15.82***
  % homeowners                     55.87     21.47        40.40     21.14       22.96***
2004 Crime Data (per 1,000 residents)
  Crime rate                       145.68    72.43        293.09    837.83      1.57
  Violent crime rate               14.94     11.49        28.06     72.18       1.67
  Robbery rate                     5.66      5.11         9.20      14.36       3.03
  Homicide rate                    .16       .19          .26       .32         5.01*
*p<.05 **p<.01 ***p<.001
As such, the first 21 beats from the White stratum, the first 21 beats from the African American stratum, and the first 9 beats from the Latino/a stratum were selected for inclusion in the study. One of the first 21 African American beats had to be excluded because residents of that beat had participated in an earlier pilot study of the Chicago Internet Project. The next beat on the sampling frame was selected as a replacement. One of the first 21 White beats was excluded because its crime rate was extremely high: more than 3 standard deviations above the mean of the
distribution of crime rates for the other White beats selected to be included in the study. The
next beat on the sampling frame was selected as a replacement.
Because the beats in the racially and ethnically heterogeneous stratum were very diverse, a decision was made to select groups of beats based on the largest racial or ethnic group represented in the beat. Three beats were selected that had a significant African American population, three that had a significant Latino/a population, and three that did not have a clear majority of any racial or ethnic group.
Within the White, African American, and Latino/a strata, the beats were matched in
groups of three based on the economic advantage characteristics of the beat (i.e., income,
education and homeownership) and the robbery rate. A random selection process was then used
to assign each one of the matched beats to one of three conditions – control beat, feedback only
beat or feedback and training beat.
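As a concrete illustration of this assignment step, here is a minimal Python sketch; the beat IDs are hypothetical placeholders, and the report does not describe the exact randomization mechanics used.

    # Sketch: randomly assigning one matched triple of beats to the three
    # experimental conditions. Beat IDs are hypothetical placeholders.
    import random

    CONDITIONS = ["control", "feedback only", "feedback and training"]

    def assign_matched_triple(triple, rng=random):
        """Shuffle the three conditions and pair them with the matched beats."""
        shuffled = CONDITIONS[:]
        rng.shuffle(shuffled)
        return dict(zip(triple, shuffled))

    print(assign_matched_triple(["beat_0111", "beat_0934", "beat_1823"]))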
For the racially and ethnically heterogeneous stratum, a random selection process was used to assign each of the beats with a significant African American population to one of the three conditions, each of the beats with a significant Latino/a population to one of the three conditions, and each of the beats with no clear majority of any racial or ethnic group to one of the three conditions. Although the heterogeneous stratum beats were not matched in the same way as the racially and ethnically homogeneous strata beats, within each of the three groups (i.e., significantly African American, significantly Latino/a, and no clear majority) the heterogeneous stratum beats were relatively similar in terms of economic advantage and crime rates.
During the telephone recruiting phase of the project, nine beats were dropped from the study because of cost overruns. The lowest-performing White, African American, and Latino/a beats, along with their respective matched beats, were dropped prior to the administration of the first Internet survey.
D. Selection of Internet Users within Study Beats
Telephone surveys were employed primarily to locate a representative sample of Internet
users in the selected police beats. The telephone data were also used to validate some Internet
findings and to test specific hypotheses about the impact of Internet participation (vs.
nonparticipation).
For the telephone surveys, we subcontracted with Northern Illinois University’s Public
Opinion Laboratory, whose job it was to identify Internet-ready households and to collect limited
survey data from respondents. Using a Chicago reverse directory, which displays listed phone
numbers by block, our research team generated random samples of households in each of the 60
beats. This information was given to the survey lab as the telephone survey sample.
Pre-experiment telephone survey. The first telephone survey (pre-experiment) was pilot
tested to achieve an interview of approximately 10 minutes in length with items that are
meaningful to respondents. In the fielded survey, screening questions allowed interviewers to
identify respondents who: (1) have regular access to email from home or work; (2) do not attend
CAPS meetings on a regular basis (defined as attending 2 or more times in the past 6 months); (3) are at least 18
years old and (4) continue to live in the neighborhood/police beat from which they were
sampled. For respondents who met these criteria and who agreed to participate in the Chicago
Internet Project, email addresses were obtained and additional perceptual data were gathered.
Specifically, survey questions were asked about fear of crime, individual and collective capacity
to respond to crime, general assessments of police in their neighborhood, personal Internet usage
patterns, and standard demographic characteristics.
Pre-experiment telephone data were collected between January 14 and April 17, 2005.
UIC provided the survey laboratory with 45,992 phone numbers. The lab used 32,688 of these
numbers and made 94,890 calls. This effort yielded 2,085 completed interviews. The survey lab
then sent the UIC research team a list of all completed interviews, with names and email addresses, for future participation in the Chicago Internet Project.
During the data collection process, we realized that, despite our efforts to over-sample middle-income neighborhoods, Internet access remained a significant problem for several
neighborhoods. Hence, data collection was discontinued in nine police beats because the survey
lab was struggling to generate a sufficient sample of Internet users. Consequently, resources
were transferred to other beats, with the goal of achieving 25 to 35 respondents per beat.3
(Across all 60 beats, 24% of all households were excluded from the study because they reported a lack of access to the Internet, and for many beats non-access exceeded one-third of the sample.)
Hence, our survey strategy sought to balance the total sample size against our concern for the
sample size per beat. The final number of completed interviews dropped from 2,085 to 1,976 for
the pre-experiment telephone survey as a result of this decision, but more stable estimates were
achieved for participating beats. The disposition of all calls for the pre-experiment survey is
shown in Table 3.2. Using standard formulas established by the American Association for
Public Opinion Research (1998), the pre-experiment survey outcomes can be described in these
terms: 11% response rate, 24% cooperation rate, 34% refusal rate, and 57% contact rate. These
figures suggest the difficulty associated with the task of identifying random households that both have Internet access and are willing to participate in a long-term Internet survey. Nevertheless,
after making nearly 95,000 phone calls, we were able to identify approximately 2,000 willing
participants. We estimate the cost of this entire screening process at roughly $30 to $35 per
participant.
3 To prevent problems with random assignment, the 9 beats were discontinued/dropped in matched groups of three,
representing clusters of three African American, three Latino, and three White beats.
Table 3.2 Telephone Pre-Experiment -- Final Calling Outcomes (April 2005)
# Calls CODE Interim Call Dispositions
78 4R PWA refused for R,incl HUDI w/sel R/R won't come to phone
39 8A 800 line CB, set appointment
3 8C 800 line CB, completion
1 8N 800 line CB, not a residence
16 8R 800 line CB, HH or respondent refusal
2,482 BY Normal busy signal
329 BZ Business or pay phone
10 CB All circuits busy message
2,085 CM Completed Interview
112 FA Firm appointment with a respondent
178 FB Fast busy signal
887 FM Fax/data/modem, no human contact
6 GH Group Home(>9 men or >9 women),temporary residence
59 HA HH away for entire interview period
9,478 HC Neutral HH contact, no respondent selected
10,315 HR HH refuses,incl HUDI's if known household
2,875 HU Hang up without contact, not known if elig, no respondent
75 IM Physical/mental impairment at the household
543 LB Language barrier at HH, (HH does not speak ENG or SPAN)
2,437 MM Answering Machine, left message at HH
7,785 MR Answering machine at residence
4,091 MS Answering machine, left message, unknown if residence
21,579 MU Answering machine, unknown if business or HH
14,686 NA No Answer by any device, Normal ring
7,201 NE R does not meet eligibility requirements of project
4,113 NW NonWorking/NIS/Disc#/Changed/# Can't be verified by PWA
531 OF Person who answers says take off list/don't call back
66 OG Outside geography
2,090 OS Temp OutOfService/Checked for trouble
104 PC Partial, R broke off, either refused to go on or finish later
208 PN Possible non-working number
59 TB Tech barrier at residence, any automated call-blocking
7 TM Tech barrier, left message or stated why calling
253 TU Tech barrier,unverified residence, NO MSG OR ID POSSIBLE
109 UA All residents of HH under 18 or phone line strictly a teen phone
94,890 Total Phone Calls
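For readers who wish to replicate these calculations, the following is a minimal Python sketch of the simplest AAPOR-style rate formulas. AAPOR defines several variants of each rate (e.g., RR1-RR6), the report does not specify which were used, and the grouped counts below are illustrative stand-ins rather than an exact mapping of the disposition codes in Table 3.2.

    # Simplified, illustrative outcome rates. AAPOR's formal definitions also
    # involve eligibility estimates for unknown-status cases, not modeled here.
    def outcome_rates(completes, refusals, other_contacts, non_contacts):
        eligible = completes + refusals + other_contacts + non_contacts
        contacted = completes + refusals + other_contacts
        return {
            "response rate": completes / eligible,
            "contact rate": contacted / eligible,
            "cooperation rate": completes / contacted,
            "refusal rate": refusals / eligible,
        }

    # Hypothetical grouped counts, for illustration only.
    print(outcome_rates(completes=500, refusals=1700,
                        other_contacts=800, non_contacts=1500))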
Post-experiment telephone survey. The second and final telephone survey (post-
experiment) served as a posttest to measure a small set of outcomes and to query respondents
about their experience as participants in the Chicago Internet Project. The post-experiment survey was fielded after 7 waves of Internet surveys, roughly 6 months after the pre-experiment survey. This was a
panel sample, with telephone interviewers re-contacting persons who had completed a telephone
interview at wave 1. To allow for comparisons between different levels of participation in the
Internet project, we stratified our post-experiment panel sample to include eligible non-
participators (i.e., completed the pre-experiment survey but did not complete any online
surveys), moderate participators (i.e., completed 1-2 online surveys), and heavy participators (i.e.,
completed 3 or more online surveys). The survey lab was given a sample of 1,594 cases to call
across these groups. Eligible non-participators were, as expected, much more difficult to reach,
but the lab was able to achieve a response rate of 37% with this group. Better success was
achieved with the moderate participators (56% response rate), and the most success was achieved
with the heavy participators (70% response rate). The overall response rate for the post-experiment
survey was 56.1%, fully consistent with results from other telephone surveys that involve a 6-
month lag between waves. As expected, response rates varied by neighborhood/beat, with 13
beats between 30-49%, 23 beats between 50-59% and 15 beats having response rates of 60% or
higher. The final calling outcomes for post-experiment survey are shown in Table 3.3. In the
end, 894 respondents completed both the pre- and post-telephone surveys.
Table 3.3 Post-Experiment Final Calling Outcomes (January 2006)

Case dispositions (N = 1,594 cases fielded)
  110 - complete                              894
  210/220 - refusals                           77
  280 - contact/no action                     271
  335 - only pickup was answering machine     202
  360 - no answer                              26
  417/410 - no new number                     119
  430 - fax/modem                               5
  Total                                     1,594

Calling summary
  Calls made                                8,330
  Completes                                   894
  Calls per complete                         9.32
  Average interview length                   9.75 mins
  Overall response rate                      56.1%
  Contact rate                               77.9%
  Cooperation rate (among contacted)         72.0%
  Refusal rate (among contacted)              6.2%
E. Development of Internet Survey Instruments
Prior to the beginning of the project, the research team developed a data matrix which
identified the theoretically important constructs needed to test the impact of the experiment and
to advance our knowledge of survey measurement in the domain of public safety and police
performance. The research team also gathered relevant instruments and questions used in prior
research on community-policing and community crime prevention, as well as created drafts of
new questions and scales designed specifically for use in this study. A preliminary schedule was
created that identified which of the constructs would be measured at what wave. The general
plan was to collect at least two waves of data on items that required testing for reliability and
validity. Also, each instrument was developed to reflect coherent sets of items that were ordered
in a meaningful way for the benefit of the respondents.
At the beginning of each wave, a draft of the survey instrument for that wave was
created. Research team members would then examine the draft and provide feedback on: (a) whether the questions were measuring what they were designed to measure;
(b) whether any important constructs were missing; and (c) whether the project was generally on
track. After the survey was finalized it would go through a quality control process that was
developed to decrease the likelihood of typographical errors, to ensure that the survey was
functioning properly when completed online (e.g. working skip patterns), and to test that the
survey could be accessed with the most common web browsers (i.e., Internet Explorer, Firefox,
etc.).
F. Development and Implementation of Observation and Questionnaire Methods
A three-part methodology, consisting of questionnaires, field observations, and field interviews, was developed to examine the effects of the intervention for CAPS participants.
Questionnaires. To measure the effect of feeding back web survey results on the public
safety perceptions and behaviors of CAPS participants, questionnaires were administered to both
police (see Appendix A) and citizens (see Appendix B). The questionnaires were designed to
measure three primary areas of hypothesized change. The first area was interaction within the
beat; items previously developed by the Chicago Community Policing Evaluation Consortium
(CCPEC) were used to measure changes in the extent to which police and citizens generally
interacted and engaged in public safety activities, such as attending other local meetings and
problem solving, with one another (see Skogan & Hartnett, 1997; Skogan, Hartnett, DuBois,
Comey, Kaiser, & Lovig, 1999). The second area was citizen capacity for engaging in public
safety behaviors theorized to be most likely enhanced by participation in the project. Items measured citizens' beliefs about their ability to engage in effective problem solving for their beat and whether they possessed the necessary knowledge to keep themselves and their beat safe. The final area examined perceptions regarding the police-citizen partnership. Partly
based on previously developed and tested measures employed by the CCPEC, items examined
citizen attitudes towards the partnership formed between the police and community both in
general within the beat and more specifically within the CAPS beat meeting framework.
Prior to the introduction of the project in March 2005 and upon completion of the project
in September 2005, questionnaires were administered to all CAPS participants who were in
attendance at their monthly meetings within the study beats. For beats that held meetings only
every other month and were not meeting during March or September, questionnaires were then
administered in April and October 2005. Arrangements were made to have 10-15 minutes on the
meeting agenda in order to introduce the project to participants and allow them to complete a
questionnaire. Participants were typically able to complete questionnaires in the time allotted; however, some had to finish their questionnaires after the meeting resumed.
Field observations. In order to document the dynamics of the police-citizen partnership
and problem solving activities at beat meetings, including how the web survey findings were
used in relation to deliberating about problems, observations of meetings in the study beats were
conducted between March and October 2005. Observations in all beats were planned during
April (wave 2), June (wave 4), and July (wave 5). During May (wave 3), only those beats that
had failed to carry out the steps of the project in the prior month were observed to determine
whether they were now following the protocol. During August (wave 6), beats were selected for
observation based on the number of previous observations that had occurred within each beat to
ensure an equal number of observations for as many beats as possible. For each beat in which
meetings were held on a monthly basis, a minimum of 4 meetings were attended by an observer,
with the exception of a single beat where only 3 meetings were observed during the course of the
project. For beats holding meetings every other month, each meeting held during the course of
the project was attended, thus allowing a more reliable assessment of problem solving efforts. In
all, 266 beat meetings were observed during the course of the project.
The observation protocol included two primary components: completing a structured
observation form and preparing a narrative from notes taken during the meeting. The
observation form was adapted from forms previously developed by the CCPEC; these existing
forms provided a well-tested format for the systematic recording of relevant information about
beat meetings such as counting the number of police and residents in attendance, classifying the
nature of problems brought up by citizens, and depicting the roles citizens and police played in
both running the meeting and engaging in different steps of the problem solving process (e.g.
identification of problems and solutions). Additional items were prepared to detail the
presentation and use of the web survey results during meetings. (To view observation form, see
Appendix C.) Observers also prepared a supplemental 1-2 page narrative that included a
summary of events, their impressions of the police-citizen relationship, the quality of discussions
about problems and web survey results, and notation of unusual or otherwise significant
occurrences during each meeting. Through both the observation form and narratives, it was not
only possible to systematically examine the nature of problem solving and use of survey results
within and across the study beats, but also to identify characteristics of meetings where program
effects were greatest.
During the project, 32 individuals participated in conducting field observations, primarily
undergraduates from UIC. Of these, 7 were Spanish-speaking and responsible for attending
meetings in those beats with sizeable Hispanic populations. All observers underwent a two-part
training protocol that was developed and originally administered by Dr. Wesley Skogan of the
CCPEC (individuals who joined the project after the original training dates were trained with the
same protocol by project managers). The first part consisted of a one-day training session in
which observers were instructed on project objectives, CAPS framework, and observer role and
responsibilities, including guidelines for filling out observation forms and preparing narratives.
The second part required observers to attend a beat meeting outside of the study beats with a
project manager in small groups (3-4 individuals) so they would have the opportunity to practice
completing the observation form and taking field notes. Immediately following the meeting,
each group went over their completed forms and discussed any concerns. Observers also
received certification from UIC for participating in data collection by completing a three-hour
training on the protection of research subjects. Throughout the study, project managers
monitored the content of observation forms and narratives and observers were provided with
regular feedback as to the quality of their work.
Field interviews. Field interviews gave key participants the opportunity to share their opinions about the usefulness of the project and to discuss their own experiences in completing surveys and discussing survey results at meetings. This not only allowed us to gain insight into how the project was received by participants, but also to identify areas for future improvement. The interview protocol was designed to collect
information on four main topics: (1) Content of the surveys, particularly whether the issues covered by the surveys were relevant for addressing beat concerns and whether other important dimensions were omitted; (2) Utility of survey results in terms of format, as a vehicle for
understanding beat concerns, and assisting in the problem solving process; (3) Obstacles to
implementing the project in terms of both police support for the initiative and citizen
participation in completing surveys; and (4) Overall support for the use of the Internet, both in
their personal lives and to facilitate communication between citizens and the police.
Interviews were sought with individuals key to facilitating and attending meetings in the
study beats. For the police, interviews were conducted with a Sergeant, patrol officer, or
community policing officer who had attended at least 3 beat meetings during the course of the
project; in all, interviews were conducted with police personnel in 48 of the 51 study beats (with
a single interview conducted for the two beats that held joint meetings). Per the CAPS
framework, each beat is also to have a citizen facilitator who is jointly responsible with police
for running beat meetings. To this end, we conducted interviews with a subset of 22 citizen
facilitators that provided an equal representation of study beats across both the three
experimental conditions and the four dominant racial/ethnic populations (African American,
Latino, White, and mixed). All interviews were conducted between June and November 2005
and typically lasted between thirty minutes and an hour. Interviews with police personnel were
conducted at their stationhouse, while citizen facilitators were generally interviewed at the beat
meeting location or over the telephone.
G. Development of Website and Processing Survey Results
Development of website and educational linkages. Web developers designed the CIP
website to be as straightforward as possible in order to enhance ease of use and to limit any
“Internet apprehension” that some study participants may have felt. The CIP introduction page is
presented below in Figure 3.1. The website also allowed CIP researchers to upload graphs and
tables and change and update link content. After completing surveys, respondents in both
experimental conditions were invited to visit the website to view selected survey results for their
police beat.4 In addition to viewing beat specific survey results, respondents in the
feedback/training condition were further encouraged to visit links on the CIP website providing
information about various public safety topics.
Figure 3.1 Chicago Internet Project Website Page
CIP passwords or neighborhood codes. All participants in the study were assigned passwords, also called neighborhood codes, which were unique to the police beat in which they resided.5 Passwords served two purposes: first, participants used them to access the monthly online surveys; second, people in the experimental conditions used beat-specific passwords to access survey results posted each month.6 It should be noted that the monthly surveys and the CIP website results pages were posted on separate websites to prevent control beat participants from viewing survey results. Over the course of the study, participants were reminded of their beat-specific password in all email and mail correspondence.
When participants in the experimental conditions went to the CIP website they were prompted to
type in their beat and their accompanying neighborhood code in order to log on (see the log in
screen in Figure 3.2).
4 Participants in the experimental conditions received the website link via email and postcards.
5 Participants’ beat pass code remained the same throughout the study unless they moved out of the study beat.
Participants who moved during the course of the study were still encouraged to complete the monthly online surveys
but they were assigned a different code to differentiate them as “moved” and no longer residing in the study beat.
6 Although technologically basic, these passwords allowed researchers to track the number of times respondents by
beat accessed the CIP website and any CIP public safety links.
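A minimal sketch of how such beat-level codes could gate access and tally page views follows; the codes shown are hypothetical, and the report does not describe the site's actual implementation.

    # Sketch: beat-level password check with per-beat visit tallies.
    # The beat numbers and codes below are hypothetical placeholders.
    from collections import defaultdict

    BEAT_CODES = {"0111": "lakeside7", "0934": "maple22"}  # hypothetical
    page_views = defaultdict(int)  # visits are tallied per beat, not per person

    def log_in(beat, code):
        """Grant access when the code matches the beat's assigned password."""
        if BEAT_CODES.get(beat) == code:
            page_views[beat] += 1
            return True
        return False

    print(log_in("0111", "lakeside7"), dict(page_views))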
Figure 3.2 Login Page
Processing survey results. While each online survey was made available to beat
participants for an entire month, survey results were processed from responses compiled within
the first two weeks after the survey became available. After this two-week period, researchers
took a week to process, aggregate, and post beat-level survey results on the CIP website for beats
in the experimental conditions. Respondents’ survey responses were stored on the University of
Illinois at Chicago’s server as database files. Survey results processing took several steps. First,
researchers downloaded survey database files, saved them to password protected CIP office
desktop computers, and converted them into SPSS files. Second, CIP researchers chose
approximately three to five questions per survey wave to analyze further and post on the CIP
website. Researchers chose questions primarily based on their perceived utility for problem
solving and introduction of new topics for deliberation at CAPS beat meetings. Finally, survey
results were tabulated with SPSS, graphics and tables were created in Microsoft Excel, and the
graphics and tables were uploaded onto the CIP website.
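The project team did this work in SPSS and Excel; purely as an illustration of the aggregation step, here is a sketch in Python with pandas, with hypothetical file and column names.

    # Sketch: tabulating the handful of questions chosen for feedback each
    # wave. File and column names are hypothetical; the project used SPSS.
    import pandas as pd

    responses = pd.read_csv("wave3_responses.csv")  # one row per respondent
    selected = ["q_safety_day", "q_police_fair", "q_disorder"]  # 3-5 per wave

    for beat, group in responses.groupby("beat"):
        for question in selected:
            # Percentage distribution of answers, ready for a bar chart.
            pct = group[question].value_counts(normalize=True).mul(100).round(1)
            print(f"Beat {beat}, {question}:\n{pct}\n")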
To protect experimental participants' confidentiality, at least ten survey completions per beat per survey wave were required before beat-level data were displayed on the website.7 To ensure that every experimental beat had survey results to view, when fewer than ten beat participants responded to a survey wave, researchers aggregated responses across other study beats and analyzed and displayed police district-level data.8 Additionally, if a study beat had a high number of Spanish speakers, the results were made available in both English and Spanish.
7 In any given month of the study, anywhere from 10-20 percent of the study beats had less than 10 respondents.
8 Chicago Police Department Districts are comprised of 10-15 beats.
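A minimal sketch of the ten-completion rule with the district fallback, assuming the pandas layout from the previous sketch and a hypothetical beat-to-district lookup:

    # Sketch: enforce the ten-completion confidentiality threshold, falling
    # back to district-level aggregation when a beat falls short.
    import pandas as pd

    MIN_COMPLETIONS = 10  # confidentiality threshold from the project protocol

    def rows_to_display(beat, responses, beat_to_district):
        """Return ('beat', rows) or ('district', rows) for the feedback page."""
        beat_rows = responses[responses["beat"] == beat]
        if len(beat_rows) >= MIN_COMPLETIONS:
            return "beat", beat_rows
        district = beat_to_district[beat]
        district_beats = [b for b, d in beat_to_district.items() if d == district]
        return "district", responses[responses["beat"].isin(district_beats)]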
The CIP website design allowed researchers to upload specific survey results for each study beat.
Each month of the study, participants could scroll through several screens of survey results
specific to their neighborhood. Participants could also view archival survey results when they
logged on. Figure 3.3 shows an example of what participants might have seen upon logging on to the website.
Figure 3.3 Survey Results Example
Community resource links. When participants in the experimental condition with
feedback and training viewed their survey results, they were directed to visit the “Community
Resource Links” (herein referred to as “links”). The links were posted in Spanish and English.
Figures 3.4 and 3.5 show examples of what a participant would see upon visiting the links page.
There were two types of links on the web page – links internal to the CIP site and links to
external websites. Content of the internal links was controlled and edited by the researchers
every month. These links provided topical public safety information in four broad areas:
1) Community participation
2) Citizen and police relationships
3) Problem solving
4) Individual safety behavior.
Researchers updated the content of each of these links and tied the content to topics
contained in the survey. For example, if participants were asked about their participation in
community activities, then the community participation link would provide information about how to join community initiatives.

Figure 3.4 Crime Prevention Concepts Example

Figure 3.5 Community Participation Example

Table 3.4 provides a brief overview of the monthly content covered in each of the public safety areas.9 Figure 3.6 presents examples of some
of the links that respondents could access during the study. The external links were the Chicago
Alternative Policing Strategy (CAPS) and Citizen-ICAM (CPD's crime mapping resource) sites, which are part of the overall Chicago Police Department website. The two external links did not change over the course of the study. These links were included because they provided both citywide and beat-specific information. The CAPS site offered participants community policing information such as beat meeting times and locations, crime watch and hotline information, and specific Chicago Police Department contacts. The ICAM site allowed citizens to map crimes based on selected geographic and crime-type parameters.
Table 3.4 Overview of the Monthly Link Content

Community Participation
  Month 1: Importance of reporting neighborhood crimes
  Month 2: How to describe a suspect; links to numerous Chicago community organizations
  Month 3: Security checklist for keeping your home safe
  Month 4: What is Neighborhood Watch and how do you join or start it up in your community?

Citizen and Police Relationships
  Month 1: Importance of partnership in crime prevention
  Month 2: Accessing non-emergency police services and city services by calling 3-1-1
  Month 3: Your rights and responsibilities when interacting with the police during a traffic stop or an arrest
  Month 4: Summer safety tips - ways to contribute and keep your community safe during the summer months

Problem Solving
  Month 1: Steps of problem solving
  Month 2: Educational description of the Crime Triangle
  Month 3: Ways to keep your children safe when they are online
  Month 4: Problem solving exercise and fear of crime matrix

Individual Safety Behavior
  Month 1: Crime Prevention through Environmental Design (CPTED)
  Month 2: Tips on how to protect yourself from becoming a crime victim
  Month 3: How to reduce the risk of car theft or carjacking
  Month 4: Ways to protect yourself and your family
9 The “Community Resource Links” content was gleaned from various online and paper sources, namely the National Crime Prevention Council, the National Center for Missing and Exploited Children, and the American Civil Liberties Union.
Figure 3.6 Community Resource Links Example
H. Posting Surveys and Managing the Monthly Sample
Posting surveys. In all, six surveys were made available to both the random sample of residents and CAPS participants; a seventh survey was made available exclusively to the random sample. Because the availability of surveys and the preparation of survey results were necessarily based on the monthly CAPS beat meeting schedule, the protocol was staggered: surveys were made available to participants of the random sample during the week that their beat held its CAPS meeting, and survey results were made available to the random sample feedback groups the week before the next CAPS meeting was to be held. Surveys were converted into a web-based format using Perseus SurveySolutions software and then published to a website maintained on the UIC server to house multiple surveys. The original research design called for surveys to be available to participants for a two-week period, considered necessary to facilitate the timely processing of survey responses into distributable results. However, this scheme was discarded after the first month in order to maximize the response rate; during all subsequent months, surveys remained available online for one month and were only taken down once the next survey had been posted and participants had been notified that it was available.10
10 As previously noted, however, survey results were prepared using the compiled responses from the first two
weeks after the survey first became available in order to ensure timely dissemination of results to police for use at
the CAPS beat meetings.
To ensure the correct individuals were completing surveys, participants were required to enter both their name and a password that we provided them in order to access the survey. During the first month only, each beat was provided a unique website address to access the survey. Each survey had to be published to the Internet separately and, with the inclusion of surveys in Spanish and separate surveys for CAPS participants in each beat, this required over 100 different surveys. After the first month, it was decided to streamline the process by maintaining fewer website addresses. To this end, survey addresses were assigned in weekly blocks according to when CAPS meetings were held, and individual beat results were tracked within the larger response files by means of the passwords that were unique to each beat (see the sketch below). There was little difficulty in adhering to this schedule for posting surveys; the only deviations were contingent upon problems with the availability of the survey results, which needed to be posted prior to the surveys (see “Implementation: Protocol and Integrity” section of the CAPS Experiment).
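Because all beats in a weekly block shared a single survey address, attributing responses to beats rested entirely on the per-beat passwords. The following is a minimal sketch of that bookkeeping in Python; the passwords, beat codes, and the "password" field name are hypothetical, as the report does not describe the actual response file layout.

```python
import csv
from collections import defaultdict

# Hypothetical password-to-beat mapping; the actual passwords and beat
# codes used in the study are not published in this report.
PASSWORD_TO_BEAT = {
    "maple41": "0111",
    "cedar07": "0522",
    "birch22": "1013",
}

def split_block_file(path):
    """Attribute rows in a pooled weekly-block response file to their
    beats using the unique password each respondent entered."""
    by_beat = defaultdict(list)
    unmatched = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            beat = PASSWORD_TO_BEAT.get(row["password"].strip().lower())
            if beat is None:
                unmatched.append(row)  # mistyped or out-of-study password
            else:
                by_beat[beat].append(row)
    return by_beat, unmatched
```

Collecting the unmatched rows as a by-product makes mistyped or improperly shared passwords visible for follow-up, a point that proved relevant in the “50th beat” incident described in Chapter Four.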
Resident notification and contact. A protocol incorporating multiple points of contact via both postal mail and the Internet was established to notify participants of the random sample about the availability of each survey and, for those in feedback beats, survey results. A notification letter and refrigerator magnet thanking residents for their participation were sent to each individual after recruitment by telephone; these letters explained the basic premise of the project, how participants would receive information about surveys each month, the password that participants would be required to use to access surveys, and information about the first month’s survey. After this, a monthly notification system was followed that consisted of: (1) a postcard sent 3-4 days prior to the start of the survey alerting residents that the survey would become available soon; (2) an email sent the day the survey began providing residents with a direct link to the survey and the password necessary to access it; (3) a postcard sent 8 days after the initial postcard reminding residents to complete the survey if they had not already done so; and (4) a final email sent 2-3 days before the point at which survey responses would be compiled to prepare results for distribution, including another link to the survey website and the password. For those participants in feedback beats, an additional email was sent a week prior to the start of that month’s survey notifying them that survey results from the previous month had become available, including a link to the results website and the necessary password.
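Expressed as offsets from each survey’s start date, this contact protocol is easy to pin down. The sketch below is illustrative only: the report gives some offsets as ranges (“3-4 days prior”, “2-3 days before”), so single values within those ranges are assumed here, and the dates themselves are invented.

```python
from datetime import date, timedelta

def notification_schedule(survey_start: date, compile_date: date):
    """Return the four monthly contact points, assuming fixed offsets
    within the reported ranges (3 days prior, 8 days after the first
    postcard, 2 days before responses are compiled)."""
    first_postcard = survey_start - timedelta(days=3)
    return [
        (first_postcard, "postcard: survey available soon"),
        (survey_start, "email: direct survey link and password"),
        (first_postcard + timedelta(days=8), "postcard: completion reminder"),
        (compile_date - timedelta(days=2), "email: final reminder with link"),
    ]

# Invented dates for illustration.
for when, what in notification_schedule(date(2006, 3, 6), date(2006, 3, 20)):
    print(when.isoformat(), "-", what)
```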
The rationale for these multiple points of contact was threefold. First, our prior experience during a feasibility study indicated that residents often forgot to go online to complete surveys when provided with only a single reminder, and their feedback suggested they would respond at a greater rate if additional reminders were provided. Second, the use of both postcards and email ensured that participants who did not regularly go online to check their email would be prompted to do so by the arrival of a postcard. Conversely, the use of emails with a direct link to the survey website and a reminder of the password (which was not provided on the postcards) facilitated access by eliminating the chance that participants would mistype the website address or forget their password. Third, multiple reminders were considered important because some participants would inevitably mislay their postcards, delay completing surveys and require assurance that the surveys were still available online, or inadvertently delete the email. Ultimately, most participants were satisfied with the notification system, although the majority indicated they found email more useful than postcards for learning about each month’s survey. Some, however, said they preferred to receive both; the reasoning for this
confirmed our rationale that some participants might require prompting to check their email account or had not received the email. While there is a segment of participants for whom postcards were important, the trends in survey completions across the waves indicate that participants were more responsive to email notifications: completion rates were always greatest in the first days immediately following the first email notification, with a subsequent, smaller jump in completions following the reminder email.
Maintaining contact with participants was an ongoing concern of the project, and one barrier was incorrect contact information. In the case of bad email addresses, participants were notified by postal mail that we were having difficulty contacting them and were invited to contact us with the correct information; many did so. In the case of bad postal addresses, participants were called by phone to verify their information. Despite these efforts, we were unable to contact approximately 10% of the participants by email and 2-3% by postal mail because of faulty addresses. In a limited number of cases, participants had to be withdrawn from the project because we had no means of contacting them with the information necessary for participation. Because emails had to be sent out in bulk, a small portion of participants also experienced difficulty receiving our emails when their Internet provider blocked the messages as spam, although we were usually able to work with such individuals to overcome this particular problem. Because responses to surveys were monitored on a weekly basis, we were aware within the first week of implementation that response rates were far lower than expected. Our original protocol had not anticipated this issue, so it was necessary to devise a new strategy for encouraging more participation. During waves 1 and 2, members of the UIC research staff made outreach phone calls to participants in those beats with response rates of 20% or below. During waves 3, 4, and 5, outreach calls concentrated on individuals from all the beats who had not been regularly completing surveys. These phone calls were intended both to determine why an individual had not been completing surveys and to encourage them to do so. Most individuals indicated they simply forgot or had been too busy, while some told us they no longer had Internet access or had had problems with their computers. Individuals who expressed willingness to complete that month’s survey were sent a follow-up email with a link to the survey and the necessary password.
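The weekly monitoring that triggered these outreach calls amounts to a running threshold check on each beat’s response rate. A minimal sketch follows; the beat codes and counts are invented, and only the 20% threshold comes from the protocol described above.

```python
def beats_needing_outreach(completions, recruited, threshold=0.20):
    """Flag beats whose running response rate is at or below the
    threshold used during waves 1 and 2 (20%)."""
    flagged = {}
    for beat, n_done in completions.items():
        rate = n_done / recruited[beat]
        if rate <= threshold:
            flagged[beat] = rate
    return flagged

# Illustrative per-beat tallies only; not taken from the study data.
recruited = {"0111": 40, "0522": 38, "1013": 41}
completions = {"0111": 5, "0522": 12, "1013": 7}
print(beats_needing_outreach(completions, recruited))
# -> {'0111': 0.125, '1013': 0.17...}
```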
Implementation problems. Given the scope of the project, we experienced relatively few problems in the implementation process; apart from those associated with processing and posting survey results, most problems that occurred were due primarily to human error. We prepared standard forms for all notification letters, postcards, and emails that participants received, which required only the addition of the survey addresses and passwords specific to the particular beats before being sent out. While this eliminated the redundancy of preparing the same materials for each beat, it still left room for errors when supplying the survey addresses and passwords, something that occurred on approximately six occasions. For most of these incidents, we were almost immediately aware that they had occurred and were able to send out correction notifications. Because participants were required to provide their names whenever they completed a survey, such incidents were easily resolved by reconciling survey responses with the proper beats; only one incident occurred in which this was not possible. Participants in nine beats were inadvertently provided with the information for their CAPS counterpart in a reminder email during wave 3. We immediately sought to correct this problem by sending participants emails acknowledging the error and requesting their assistance in verifying the receipt of their results, but this was nevertheless problematic because the CAPS
survey to which they had been directed did not request their names. Through the use of multiple
informational elements collected during the wave 3 survey, we were ultimately able to identify
many of those individuals who had taken the incorrect survey.
Participation problems and assistance. Participants also experienced few problems in the completion of surveys. During both waves 1 and 3, survey respondents were asked how easy they had found it to complete the survey; for each wave, over 97% indicated they had found it somewhat or very easy to do so. Participants were encouraged to contact UIC with any problems they encountered, either by telephone or through the email account from which survey notifications were sent. The problem most frequently brought to our attention related to accessing the survey, either because participants required a password reminder or because they had difficulty pulling up the website address for a survey. Although email notifications about surveys always contained the survey password, roughly half of all phone calls and emails that we received were requests for the password. Because the validity of survey responses dictated the necessity of requiring participants to enter a password, this was a methodological point that we could not dispense with in order to make the process easier for the participants.
The other problem associated with accessing surveys, specifically being able to get to the website, had sources that we were not always able to identify when a participant contacted us for assistance, but two main categories emerged. In some instances, there did appear to be a compatibility issue between the survey software we employed and the software participants were using to access the Internet; when this was suspected to be the case, participants were directed to use another Internet browser to access the survey, which generally resolved the problem. The software issue also appeared to be related to complaints we received from some participants about the survey not being properly formatted on their computer screens (e.g., response sets not lining up with the buttons). In other instances, the inability to access a survey was due to error or lack of computer skills on the part of individual participants. Some participants were simply typing in the website address incorrectly, while others were unfamiliar with the Internet beyond using email. In these cases, problems were easily resolved by sending participants another email with a direct link to the survey, which required nothing more of them than clicking on the link.
Ultimately, we considered maintaining these avenues of communication with participants, in order to answer their questions and assist them with problems, invaluable given the nature of this project, particularly the absence of face-to-face contact between UIC researchers and participants. While it would be impossible to determine the number of study participants who experienced problems or had questions and did not contact us, we feel strongly that the ready availability of the UIC research team via phone and email helped sustain the participation of individuals who could easily have become frustrated when they encountered technological difficulties, or who had other questions about the project that might have caused them to withdraw from the study. Participants were generally pleased with the assistance that we gave them and the promptness with which we responded to their concerns.
CHAPTER FOUR
LEVELS OF PARTICIPATION IN THE CHICAGO INTERNET PROJECT
A. CAPS Resident Participation
Overall, 511 web surveys were completed by citizens who attended their CAPS beat meetings within 49 of the 50 study beats during the course of the project.11 The overall web survey response rate for the CAPS attendees was 10.6%. An average of 20 residents attended the CAPS meetings in each of the study beats, which means an average of 2-3 residents per beat completed surveys each month (see Table 4.1). There were no significant differences in completion rates across the experimental conditions. These findings suggest that receiving feedback and discussing web survey results did not serve as an incentive for residents to complete Internet surveys.
Table 4.1 CAPS Response Rate for Internet Surveys (%)

                   Overall  Wave 1  Wave 2  Wave 3  Wave 4  Wave 5  Wave 6
All beats            10.6    10.9     7.5    10.8    12.4    10.4    11.4

Experimental Condition (3 Groups)
Control               9.9    12.4     8.5     9.5    12.3     9.2     7.5
Feedback              9.5     9.3    15.2     9.5    11.3     9.9    11.6
Training             12.4    10.8     8.8    13.5    13.8    12.2    15.4
                            F=.14   F=.58   F=.99   F=.12   F=.59  F=1.34

Experimental Condition (2 Groups)
Control               9.9    12.4     8.5     9.5    12.3     9.2     7.5
Experiment           10.9    10.1     6.9    11.4    12.5    11.0    13.3
                            F=.22   F=.27   F=.43   F=.01   F=.56   F=.20

Race/Ethnicity of Beat
African American      7.1     8.2     3.7     7.9     8.1     8.2     6.8
Latino                7.7     7.5     7.2     7.9     7.4     6.6     9.8
Mixed                15.1    15.5    17.0    16.5    16.6    13.9    14.1
White                13.4    13.0     7.7    12.9    17.6    12.8    13.4
                            F=.56  F=3.45* F=2.05  F=1.79  F=1.88  F=1.68

* p<.05
Although experimental condition was not significantly related to response rates, it should be noted that the police in the experimental beats tended to demonstrate more consistency in implementing project tasks, and their investment in the project appeared greater than that of officers from the other beats. To this end, it would seem logical to expect that resident participation in completing surveys would be influenced not just by whether, but by the manner in which, police implemented the project.
11 An additional 241 web surveys completed by residents in relation to a single beat were excluded from this count for reasons discussed below in the section entitled “The 50th Beat”.
There were no significant differences in response rates across the racial/ethnic beat types except at wave 2, when the response rate for primarily African American beats dropped to 3.7% while beats with a mixed racial/ethnic composition remained steady at 15.5%. The greatest overall response rates occurred within mixed racial/ethnic beats, followed closely by White beats; the response rates of these beats were roughly double those seen in the African American and Latino/a beats. There are three possible explanations for the response rate differences across the beat types. The first is the “digital divide” - disparities in Internet access rates according to income and race/ethnicity. Studies show that, while the gap is closing, African Americans and Latino/as tend to have lower access rates than Whites. Looking at the Internet access rates reported by CAPS participants in the study beats, over 70% of the residents in the Latino/a, mixed, and White beats reported having access, while under 60% of residents in the African American beats had access, lending partial support to the digital divide argument. Because residents in the Latino/a beats reported rates of access similar to those of residents in White and mixed beats, disparities in access cannot explain the low response rates in the Latino/a beats. Yet residents in African American beats did report a significantly lower rate of access (χ² = 14.549, p < .01) that may have contributed, in part, to the low response rates there.
The second possible explanation for the response rate differences between the racial/ethnic study beats may be the nature of participation in CAPS; it is feasible that, given their greater participation in the CAPS program, residents in African American beats had established stronger problem solving partnerships with the police and felt less of a need to supplement this partnership by completing surveys than residents in the White and mixed beats. The final possible explanation is that residents in the White and mixed racial/ethnic beats may have felt a greater need to have their opinions heard through an additional venue (i.e., the web survey). This might be particularly true of residents in the mixed beats, most of which are gentrifying neighborhoods. While it could be argued that the original residents (typically racial and ethnic minorities) would use this new form of communication with the police as their foothold in the area is threatened by newcomers, it is more likely that the reverse is true: newcomers, predominately White residents, wanted to increase their stake in the community by increasing communication with the police. Survey completion trends support the latter explanation: 80% of the respondents in the mixed beats were White, compared to just 17% African American.
Resident survey participation by beat varied widely, ranging from 5 beats in which a single survey was completed for the duration of the project to 6 beats where citizens completed between 21 and 30 surveys during the same time. Interestingly, 40 beats account for 60% of the surveys completed, while another 9 beats account for 40% of all completions. These nine beats are not in the experimental conditions, and the residents attending CAPS meetings in those beats were exposed to varying levels of implementation by police. In testing for predictors of participation in these beats, the common denominator for the higher participation was residents’ level of education (overall survey completion rates were also predicted by beat-level crime and meeting attendance rates). Residents in these 9 non-intervention beats exhibited
significantly higher levels of college attendance (F = 6.465, p < .05); all but one of the nine beats reported an average education level of at least some college, while 28 of the 40 beats accounting for 60% of the survey completions reported an average education level of either a high school degree or technical training (but no college).
Table 4.2 Participants at CAPS Beat Meetings and CAPS Participants Who Completed Internet Surveys

                    CAPS (%)   Internet (%)
Gender
Male                  42.9        45.9
Female                57.1        54.1
Race/Ethnicity
African American      44.4        36.4
White                 45.0        55.7
Hispanic/Latino        8.5         4.9
Other                  2.1         3.0
Age
18-19                   .2          .2
20-29                  5.5         3.0
30-39                  8.4        10.6
40-49                 18.2        22.9
50-59                 23.5        29.0
60-69                 24.1        25.6
70 and older          20.0         8.7
Who completed the surveys each month? As Table 4.2 shows, the distribution of participation is comparable to the distribution of participation at CAPS beat meetings. Just as slightly more females than males attend CAPS meetings, more females than males completed surveys. We also see higher participation rates for individuals between thirty and fifty-nine, particularly those between the ages of fifty and fifty-nine. Not surprisingly, those seventy or older completed surveys at a far lower rate than their attendance at meetings would suggest; as discussed below, age represents one of the obstacles to participation and was a barrier encountered during a previous feasibility study. Undoubtedly, many individuals in this age group did not have computers, or their computer skills were such that they did not feel comfortable completing surveys. Ultimately, the greatest difference between the distribution of CAPS participants and those who completed surveys is among racial/ethnic groups. White residents completed surveys at a greater rate and were responsible for completing over half of all surveys, while African Americans and Latino/as participated at lower rates. As previously noted, there could be several reasons for this disparity, but there is no single indicator to explain why. Because two of the highest participating beats were African American, however, it is also possible that a confluence of characteristics unique to each beat, such as individual Internet use, residents’ concerns, and the quality of the police-citizen relationship, may be responsible for differences in resident participation among beats of different racial/ethnic composition.
The 50th beat. We found it necessary to exclude the data from 241 survey completions that occurred in relation to a single beat because of the circumstances under which such a high number of responses was achieved. In wave 1 alone, 124 survey responses from this beat were recorded, immediately alerting us that something unusual was going on. It soon came to our
attention that a resident from this beat was posting the website address and password for each of our surveys, as well as survey results, on a blog that he maintained about his neighborhood. As his blog showed, this resident was angry and frustrated about ongoing problems in his area, which was undergoing gentrification; he detailed these problems and posted photos of them on the site, and was extremely critical of city officials, the police, and developers. While he clearly cared about what was going on in his neighborhood, it was less clear how he regarded CIP; he probably sought to make the opportunity for completing surveys available to other residents who visited his blog, but there can be no question he was also using the material to bolster his own point of view. In posting the survey information, he used the exact wording that we had used to describe the project, but inserted sarcastic references to local officials into the sentences. Similarly, when he posted survey results, he added his own interpretation, using less than acceptable language. For instance, regarding one set of results about participation in CAPS, he commented, “What it tells us is our CAPS program is in the CRAPPER.” While he was expressing what we considered rather valid concerns on his blog, we were more concerned with the volatile manner in which he did so, and we did not want the project represented this way. Discussions with police from the beat confirmed our suspicions that confronting this resident would only agitate him further, and we thereafter monitored his blog to document his postings.
This incident demonstrates the difficulty of controlling how, and to whom, research information is disseminated beyond its intended purpose. This is a concern that anyone faces when making information public, and the best safeguards cannot completely remove the potential for misrepresentation. For our purposes, the greater concern was verifying that only residents within the study beats completed surveys, since this was in part a geo-based study. There were no limitations on who could complete the surveys beyond being a resident of a study beat, something monitored through the cross streets provided by respondents on the survey. In this way, it was determined that 82 of the 241 responses submitted using the password for the 50th beat were from individuals who did not live in a study beat. Of the remaining 159 responses from residents in study beats, 60% were from individuals who did not live in the particular beat for which the password was intended, but in nearby beats. These survey completions were not included when determining response rates or examining the nature of participation because they did not reflect most beats’ experiences of participation by residents involved in the CAPS program. While unexpected, the experience in this beat actually suggests the potential for achieving greater resident participation in such web-based initiatives through broader recruitment strategies utilizing multiple mediums, such as blogs, discussion groups, or listservs.
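The residency screen described here, matching respondent-reported cross streets against the study beats, can be sketched as a simple lookup. The intersections and beat codes below are hypothetical; the actual check relied on CPD beat boundaries, which are not reproduced in this report.

```python
# Hypothetical lookup from cross-street pairs to beat codes.
INTERSECTION_TO_BEAT = {
    frozenset({"state", "35th"}): "0111",
    frozenset({"halsted", "79th"}): "0522",
}

def classify_response(cross_streets: str, expected_beat: str) -> str:
    """Classify a response as in-beat, in another study beat, or outside
    the study beats, from the cross streets the respondent reported."""
    key = frozenset(s.strip().lower() for s in cross_streets.split("&"))
    beat = INTERSECTION_TO_BEAT.get(key)
    if beat is None:
        return "outside study beats"
    return "in beat" if beat == expected_beat else f"other study beat ({beat})"

print(classify_response("State & 35th", "0111"))    # in beat
print(classify_response("79th & Halsted", "0111"))  # other study beat (0522)
```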
Resident attitudes about participation in the project. Residents were given the opportunity to offer suggestions for improving the project during wave 5 of the web survey. Of the 90 respondents who participated in wave 5, 70% indicated they had no suggestions or that changes were not necessary. The remaining 30% supplied responses that can be divided among four categories: praise, methodological concerns, topics for inclusion on surveys, and use of survey results. Some residents took advantage of the opportunity simply to offer support for the project, urging researchers to “keep up the good work” and calling the project a “great idea.” Surveys were praised as being “comprehensive”, with questions “right on target”, “covering all the right things for us”, and “asking the difficult questions, i.e. like the ones on racial profiling.” Other residents expressed concerns with methodological
issues that stemmed largely from information about sampling methods and the number of participants that, due to the experimental nature of the project, had not been made available to the beats, although a few requested “more opportunity to explain some of the answers which cannot always be answered as ‘yes or no’ but need more words.” Respondents also offered suggestions for topics they wished to see included on surveys; these focused on resident understanding of police operations, knowledge of crime in the area, and participation in CAPS. Finally, residents commented on survey results. For residents in beats where results had not been made available (either in control beats or in experimental beats in which police had not made results available as instructed), there was a natural desire to see results. Others expressed the desire for results to be used in a meaningful way, such as turning results into “comparative statistical reports”, passing results on to beat officers, and improving “conditions in our area” and “police presence/service.” Overall, these responses suggested that residents who completed surveys took their participation seriously and saw potential for new data elements to be collected and for results to be used to improve both the beat and police services.
Citizen meeting facilitators were interviewed to explore resident attitudes about
participation in the project, as well as their own responses to participating in completing surveys
and discussing survey results. Facilitators almost unanimously expressed support for the general
concept of residents using Internet surveys as a means to communicate concerns about public
safety to the police, although many had some reservations as to its feasibility due to problems
associated with access to computers or the Internet, computer literacy, and age of residents
(discussed below in “Obstacles to Resident Participation”). Internet surveys were perceived as
supplying a new “avenue” for communication between residents and police, acting as a
“supplemental tool to the current system” for providing police with more information and
expanding resident participation. Facilitators welcomed the ability of Internet surveys to act as
an additional source of information for police about what was happening in the neighborhood,
yet stressed its value as merely supplemental to existing structures; as one facilitator stated, “The
Internet is a tool. It doesn’t replace face-to-face interaction like at the beat meetings.” The
potential for broadening the scope of resident participation was also recognized, particularly for
residents who might have attended beat meetings but were prevented from doing so for various
reasons, including conflicting work schedules, health problems or physical handicaps, and those
who were reluctant to speak to police one-on-one or in front of other residents.
Regarding their own participation in the project, 13 of the 22 civilian facilitators interviewed had completed at least one of the Internet surveys and offered feedback regarding survey content. Overall, they felt the surveys had covered important issues and were unbiased and fair. The broad range of issues included on surveys was considered both a good and a bad thing. While some facilitators were pleased with this aspect and felt that “most of the questions got to the heart of a lot of issues”, others felt surveys should be more tailored to each
beat’s particular concerns. Indicative of the varying issues and state of police-community
relations within the participating beats, facilitators offered widely diverse opinions as to the
topics they considered most important for inclusion on surveys or problems that they saw with
the survey. For instance, a facilitator from a beat in which police-community relations were
clearly strained complained that “there were too many softball questions”. While one facilitator
observed that surveys were “much too heavy on safety” and felt “the issue of crime in relation to
the beat was overlooked”, another felt that “the most important thing is being safe” and therefore
surveys covering broader topics of public safety were beneficial to residents. Suggested topics
for inclusion bore the hallmark of the CAPS philosophy in which facilitators are well-versed:
emphasis on community-wide problems as opposed to problems specific to individuals, concerns
of the residents, suggestions from residents on how police could work more closely with them,
and the extent of residents’ knowledge about what happens in the community.
Facilitators also spoke about the presentation and discussion of survey results during beat meetings. According to the facilitators, full implementation did not occur in many beats. Six stated they could not recall ever having results distributed or discussed at their meetings, while several others stated results had only been made available once or twice during the course of the project. Of the 15 facilitators representing beats in which survey results were supposed to have been made available, 8 recalled results being made available, and all of them found the graphs and tables provided easy to read and understand. Some noted that the
importance assigned to neighborhood problems and public safety attitudes as indicated by
residents in survey results did not always match how residents who attended meetings felt.
However, most agreed that discussion of the results had the primary value of allowing residents
to understand how other community residents felt about certain issues, particularly residents who
did not attend the beat meetings.
While they tended to agree that discussing the survey results was also useful to police in helping them understand residents’ concerns, doubt was expressed about the extent to which police would use the information. The differences between police and residents as to the importance assigned to problems identified through the surveys were identified as a factor determining the use of the results, meaning that police would regard results as useful only if they contained information about issues that police thought of as “big” and that “they might take it more seriously then.” Otherwise, while it was believed that police “take into consideration what the residents say”, they were “not going to change because of it.” This last observation reflects the police-resident interaction observed at other points during beat meetings. The primary aim of activities such as reading crime reports or identifying new problems is ultimately information sharing between the police and residents, with little attempt at problem solving. In some cases, it is possible police regarded information they were required to share through survey results as threatening; hence the limited effort made to foster discussion and, in some instances, the adoption of attitudes that precluded meaningful resident participation. A facilitator from one beat where police fully implemented the project tasks and faithfully covered results with residents noted, “When you gave an answer on the survey that the police department wasn’t happy with, they came into the meeting on the defense…they would try to act like nothing was their fault.”
Obstacles to resident participation. During the project, citizens and police were given opportunities to offer their insight into the obstacles to citizen participation in completing the web surveys. The post-test questionnaires administered to citizens at their beat meetings included questions regarding the extent of their participation in the project; those who indicated that they had not completed a web survey were asked to supply a reason for not participating. Excluding individuals who were attending their first meeting at the time of the post-test administration, 170 citizens provided reasons for lack of participation. The most common reason, cited by 50.6%, was lack of awareness about the project or failure to have received the necessary information to complete surveys, reflecting the poor levels of implementation by police in many beats, which stand as perhaps the most serious obstacle to achieving resident
participation. Of the 170 residents providing reasons for not participating, 31.8% reported that they did not have a computer or Internet access, while 13.5% reported a lack of time, forgetting, or simply not being interested. Only 4.1% indicated having encountered some sort of technological problem that prevented them from completing the surveys.
Ultimately, the reasons given by citizens regarding their own lack of participation mirror
many of the reasons both citizen facilitators and police provided during interviews when asked to
consider why more residents had not participated in completing surveys (see Table 4.3). As
citizens and police typically identified the same obstacles, the decision was made to present their
views together and it is noted below where opinions diverged between the two.
Table 4.3 Obstacles to Resident Participation as Identified
by Civilian and Police Facilitators (N=70)
Obstacle %
Lack of Internet access/computer 41.4
Apathy 30.0
Computer illiteracy 24.3
Not beneficial 21.4
Lack of time/too busy 18.6
Lack of awareness 18.6
Age 15.7
Fear of sharing information 7.1
Personal agendas 7.1
Lack of trust in police 4.3
Not surprisingly, the most frequently cited obstacle to participation (41.4%) was lack of Internet access or a computer. This was often linked to the concept of “computer illiteracy”, or lack of the skills necessary to use either computers or the Internet (24.3%). Citizen facilitators and police alike believed that many residents, whether living in their beats or at least attending the meetings, simply could not participate because of a lack of access or technical ability. Responses in this vein were typically either left unqualified by further explanation or accompanied by comments indirectly indicating the belief that this lack was a function of income and that residents could not afford access (e.g. “you won’t find Internet users in high crime areas”; “the Internet is a luxury”). Age was often cited as a factor in non-participation (15.7%), particularly in relation to lack of access or ability; it was noted in several beats that meeting participants tended to be older residents who were generally considered either uninterested (e.g. “they’re set in their ways”) or afraid of using computers and the Internet (e.g. “they’re intimidated by computers”). However, as over 34% of survey respondents were 60 or older and their participation was in proportion to the rate at which individuals in this age group attended meetings, age is not necessarily a primary reason for non-participation so much as a matter of individual preference.
Two interrelated obstacles also identified by facilitators and police were resident apathy
(30%) and an inability to see the completion of surveys as being beneficial (21.4%). Common
refrains were that residents “don’t want to get involved” and “they don’t care”, an assessment
that extended beyond simply completing surveys to being involved in public safety initiatives
and their communities at large. Residents were perceived as only getting involved when they
had been personally affected by something going on in their community or had a specific
problem they wished to bring to the attention of the police; as one sergeant stated, “If the crime
is not close to them, it’s not their problem.” This view was somewhat tempered by an
accompanying belief that residents had not participated because they did not consider it
beneficial to do so. A small portion of such responses equated “benefit” with direct or
immediate rewards for residents to induce participation, basically the need to “dangle the carrot”
with prizes. One sergeant opined, “They want immediacy. The reward of improved police
service is not enough.” Most responses, however, suggested a failure to participate because
residents did not feel that information shared through the surveys would result in tangible
changes for police services or problems in the community. Some likened it to a voting process
and the belief “I’m just one person, what will it matter?”, while others discussed it in terms of
residents believing their views would not be valued by the police. There was also reference to
resident frustration with having coped with long-term community problems, exemplified by one
civilian facilitator’s comments that many residents “just don’t believe in it. Some people think
that things are never going to change regardless of what you do”.
Such responses often did not question the web survey methodology or the type of information being collected, but rather framed the benefit to residents as a question of whether the information would actually be used in a meaningful way. This observation lends support to the idea that beat meetings have moved forward within a narrow framework of information sharing consisting of three main components featured prominently on the accepted CAPS agenda: (1) police report crime statistics; (2) residents report new problems; and (3) police report progress on problems identified by residents. Very little meaningful discussion or problem solving is usually attempted. Within such a narrow framework, broader
communication about topics such as police-community relations, fear of crime, and the nature of
police work would be difficult to achieve, as witnessed by observers who regularly reported
survey results being treated as yet another group of statistics to be provided to residents. The
communication between police and residents in many beats seems to be based on an assumption
that certain problems are not to be discussed, such as quality of police performance and the
police-community partnership, and the framework for meetings is structured to allow discussions
predominately about crime and disorder problems. Clearly the value assigned to the project by
residents is also related to the manner in which it was implemented. Several police personnel
commented on this point, best stated by a beat sergeant who said:
It’s not being talked up enough for whatever its importance. [Mimicking officer handing
out information and speaking in a monotone] And here’s this and that and, oh, yeah,
there’ll be a drawing for a laptop or something. There’s a difference. When I talk about
it, I’m putting it to them as “help me out here…and it’s all about relationship.” It’s
“please do me a favor” versus “do this crap.” If there’s a problem, it’s how we’re
selling it.
Arguably, residents would be less likely to see the benefit of being the source of these “statistics” if prior experience with discussions at their meetings had set their expectations regarding the nature of the discussion about the results. This suggests that a broader obstacle to participation lies within the CAPS framework itself: the nature of police-resident communication as demonstrated at beat meetings is such that some residents are possibly unable to see any potential benefit, because the well-established patterns of interaction make it difficult for new information and dimensions of discussion to be used in anything other than the now-accepted manner.
Conversely, others noted the nature of attendance as an obstacle, alluding to what was
commonly termed the problem of residents having “personal agendas” that precluded interest in
the broader topics included on the surveys. The idea of personal agendas was explained as
residents “want to talk about what they want to talk about” with limited attendance by the same
group of residents as a contributing factor: “the same people every time, same issues all the
time” have essentially resulted in meetings with “such a narrow focus.” Observation of meetings
has indeed shown residents intent on bringing particular problems to police attention with
seemingly little concern for other community-wide problems. Yet such behavior would seem to
be a by-product of the police-community interaction as it exists and, to a degree, has come to be
accepted within the CAPS framework. Certainly both police and citizens expressed
dissatisfaction with the quality of interaction and involvement of participants on both sides, but
expectations for police and citizen roles have often relegated residents to nothing more than the
“eyes and ears” of police, lending justification to resident expectation that they are fulfilling their
role by bringing their personal agendas to the table. If beat meetings are not regarded by
residents as venues for meaningful discussion and problem solving or feedback about police
performance, then interaction that consists primarily of bringing their problems to the attention
of the police and receiving information about crime fulfills the perceived purpose for attending
meetings.
Lack of resident awareness or understanding of project objectives was cited as another obstacle, something acknowledged by citizens and police alike, although only a single individual (a civilian facilitator) specified that this had anything to do with poor implementation by the police. Most comments about lack of awareness were made in a general vein, e.g. “people do not know about them”, “more people need to be made aware of it.” Resident confusion about project objectives was also discussed in general terms: “people could have been confused on where to go on the Internet to complete the survey” and they “don’t understand the purpose of the survey.” When offering suggestions to improve the project, a common refrain was to increase residents’ awareness of the survey and, again, there was the same disjuncture between acknowledging the problem of resident awareness and positioning this problem as the responsibility of the CPD. In a sense, this oversight is to be expected, as both civilian facilitators and police would in effect be blaming themselves for not making residents aware of the project. Yet we consider lack of awareness on the part of residents to be one of the greatest obstacles to achieving participation. As discussed in the implementation section, police in many beats failed to consistently make survey information available, fully explain project objectives, and encourage awareness of the project. Deprived of the necessary information, residents were then deprived of the opportunity to decide whether or not to participate. If residents did not truly understand the purpose of the project because police did not take the time to explain it to
them, many may have disregarded the opportunity to participate. Ultimately, there is no telling the true impact that poor implementation had on resident participation.
Finally, fear was cited less frequently as an obstacle, but it represents a valid issue for participation in such initiatives, one that would no doubt be encountered with greater incidence if implementation were widespread. Some felt that fear of retribution, by either criminal elements in the community or the police, for information and views shared through the surveys prevented some residents from participating, e.g. “People aren’t sure where the information is going and who is going to see it”. While the flyers with survey information provided to residents assured them that their answers were confidential and that no one from the CPD would see their personal responses, it is certainly conceivable that some residents did not believe their identity would remain protected. This is a problem well known to police in relation to calling 911 and retaining anonymity; although assured of this protection, there are residents who have had officers come to their doors and who would be distrustful of confidentiality claims under a police-sponsored initiative no matter how they are presented. Indeed, some identified resident lack of trust or confidence in the police as a primary obstacle that would make participation difficult to achieve under any circumstances. Fear of sharing information is not completely insurmountable where it is based on lack of knowledge about how information is collected for web surveys; brief tutorials explaining the process of protecting participant identity could help allay such fears. Concerns about sharing information perceived as going directly to the police, however, are not as easily addressed where maintaining trust between the two parties is an ongoing issue. This suggests that police departments may not be the best institutions for processing information collected from the community, and that an independent agency would be better suited to the task.
B. Random Sample Participation
Response rates for Internet surveys completed by the random sample. The response rates for the Internet surveys for the random sample are presented in Table 4.4. The average response rate across all seven waves was 34.5%. Not surprisingly, the largest percentage of respondents participated in the first Internet survey (40.5%) and the smallest percentage in the last survey (22.1%). There was a gradual decline in participation from wave 1 to wave 6 and then a sharp decline for wave 7. The large drop in participation for wave 7 may be attributed to changes in the administration of the survey; as stated earlier, there were no postcard reminders mailed out for wave 7.
The response rate was calculated by dividing the number of respondents for each wave by the total number of respondents recruited into the study through the first telephone survey (N = 1976). Because of recruiting delays, not all respondents were given an opportunity to participate in the first wave of the Internet surveys. As such, the response rate for the first wave is based on the number of respondents who had been recruited into the study at the time the survey was administered (N = 1872). The response rates for the other Internet surveys (waves 2 through 7) are based on the final number of participants recruited into the study (N = 1976).
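In code, this calculation reduces to one division per wave, with wave 1 using the smaller recruitment base. A minimal sketch follows; only the two denominators come from the report, and the respondent counts are back-calculated from the published rates purely for illustration.

```python
RECRUITED_AT_WAVE_1 = 1872  # recruited by the time wave 1 was fielded
RECRUITED_FINAL = 1976      # final recruitment total, used for waves 2-7

def response_rate(wave: int, n_respondents: int) -> float:
    """Response rate in percent, using the wave-specific denominator."""
    base = RECRUITED_AT_WAVE_1 if wave == 1 else RECRUITED_FINAL
    return 100 * n_respondents / base

# Counts back-calculated from the published rates, for illustration only.
for wave, n in [(1, 758), (2, 779), (7, 437)]:
    print(f"wave {wave}: {response_rate(wave, n):.1f}%")  # 40.5, 39.4, 22.1
```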
Table 4.4 Response Rates for Internet Surveys (%)

                   Overall  Wave 1    Wave 2    Wave 3  Wave 4  Wave 5  Wave 6   Wave 7
All beats            34.5    40.5      39.4      37.2    36.7    33.6    31.7     22.1

Experimental Condition (3 Groups)
Control              33.6    37.2      34.4      36.4    36.7    34.0    33.8     22.6
Feedback             33.0    40.4      37.9      36.2    35.0    30.7    29.6     21.5
Feedback/Training    34.1    41.3      43.0      36.2    35.8    33.7    29.3     19.5
                           χ²=2.5   χ²=10.8**  χ²=.01  χ²=.41  χ²=2.1  χ²=4.2   χ²=2.0

Experimental Condition (2 Groups)
Control              33.6    37.2      34.4      36.4    36.7    34.0    33.8     22.6
Experimental         33.6    40.8      40.5      36.2    35.4    32.2    29.4     20.5
                           χ²=2.4   χ²=7.3**   χ²=.01  χ²=.32  χ²=.68  χ²=4.2*  χ²=1.2

Race/Ethnicity of Beat
White                41.4    49.0      47.9      42.6    41.9    42.1    39.1     27.3
African American     25.6    31.7      28.0      28.5    28.4    24.2    22.5     15.7
Latino/a             23.0    24.4      26.5      27.0    26.0    19.1    21.4     16.3
Mixed                39.5    42.5      46.6      44.4    44.1    39.3    37.0     22.9
                       χ²=61.3*** χ²=84.1*** χ²=49.5*** χ²=48.6*** χ²=78.0*** χ²=62.0*** χ²=33.4***

* p<.05; ** p<.01; *** p<.001
Table 4.4 also presents the response rates by experimental condition. The average response rate for participants in the feedback/training beats was 34.1%, compared to 33.0% for participants in feedback-only beats and 33.6% for participants in control beats. Statistical tests were conducted for each of the seven waves of the survey, both for comparisons between three groups (i.e., control vs. feedback only vs. feedback/training) and between two groups (i.e., control vs. experiment). It is important to note that there are slight differences between the overall response rates and the response rates for the experimental conditions because a few respondents chose to participate in the Internet surveys anonymously. As a consequence, their Internet surveys could not be linked back to their initial telephone surveys and are missing information on geographic location and socio-demographic characteristics. The number of anonymous respondents varies across the seven waves, from a low of less than 1% in wave 1 to a high of 2.3% in wave 7. Efforts were made to recover key geographic and demographic information for the anonymous respondents; however, it was not possible to recapture all of the missing information. Fortunately, geographic information was recovered for all but 9 of the anonymous respondents. Demographic information was slightly more difficult to recapture; however, no wave is missing more than 2% of the respondents’ demographic information due to participants’ desire for anonymity.
Regarding response rates across the three experimental conditions, there are some
interesting findings. Significantly more respondents from feedback only and feedback/training
beats completed the wave 2 survey compared with respondents from control beats. The wave 2
response rate for the feedback/training group (43.0%) was nearly nine percentage points higher
than that for the control group (34.4%).
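As a rough illustration of the per-wave tests reported in Table 4.4 (not the authors' code), each chi-square statistic can be obtained from a contingency table of responders and non-responders by condition; the counts below are invented for the sketch.

    # Hypothetical sketch of one per-wave test: a 3 (condition) x 2
    # (responded / did not respond) chi-square test of independence.
    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: control, feedback only, feedback/training (invented counts)
    wave2 = np.array([[230, 438],
                      [250, 410],
                      [280, 370]])
    chi2, p, dof, expected = chi2_contingency(wave2)
    print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4f}")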
There were significant differences in respondent participation across the different
racial/ethnic beats. The average response rate for participants from predominately White beats
was 41.4% compared to 25.6% for participants from predominately African American beats and
23.0% for participants from predominately Latino/a beats. Interestingly, there were relatively
high levels of participation for individuals from racially/ethnically heterogeneous or mixed beats.
The trend in the reduction of participation across the seven waves of the Internet survey was
relatively consistent across the four racial/ethnic groups. There was about a 44% reduction in
participation from wave 1 to wave 7 for respondents from predominately White beats, 50% for
those from African American beats, 33% for Latino/a beats and 46% for mixed beats.
On average, each of the respondents completed 2.3 (SD = 2.6) Internet surveys. About
60% of the original sample recruited through the initial telephone survey completed at least one
of the Internet surveys and about 12% completed all seven surveys (see Table 4.5). There were
no statistical differences in the number of Internet surveys completed by experimental condition
(see Table 4.6). Respondents from feedback/training beats completed 2.4 (SD = 2.6) surveys
compared to 2.3 (SD = 2.6) surveys for respondents from feedback only beats and 2.3 (SD = 2.6)
for respondents from control beats.
Respondents from White and racially heterogeneous beats completed more Internet
surveys than respondents from African American and Latino beats. On average respondents
from White beats completed 2.9 (SD = 2.7) surveys and respondents from racially heterogeneous
beats completed 2.8 (SD = 2.7) surveys, compared to 1.8 (SD = 2.4) surveys for respondents from
African American beats and 1.5 (SD = 2.3) surveys for respondents from Latino/a beats.
Table 4.5 The Number of Internet Surveys Completed
Number of Surveys Completed   Number of Respondents   Percent   Cumulative Percent
7                             229                     12.23     12.23
6                             188                     9.51      21.7
5                             124                     6.28      28.0
4                             123                     6.22      34.2
3                             105                     5.31      39.5
2                             147                     7.44      47.0
1                             272                     13.77     60.7
Demographic characteristics of respondents for random sample. Out of the original
1976 respondents recruited from the telephone survey, there were significantly more females than
males (61.5% reported being female) and significantly more homeowners than renters (71.8%
reported being homeowners). The average age of the respondents was 43 with a standard
deviation of 14. The average education level of the participants was about 14 years, which is
equivalent to an associate degree. There was a large percentage of college graduates (62.1%) and
a significant number of respondents with advanced degrees (25.8%). The sample was
relatively affluent, with over 50% of the sample reporting an annual income of $60,000 or
greater. Almost 25% of the respondents reported an annual income of $100,000 or greater and
only about 8% reported an income of less than $20,000.
Table 4.6 Respondents’ Profiles
Number of Internet Surveys Completed
None 1 to 2 3 or more
Beat Race/Ethnicity
White 28.0% 37.1% 45.6%
African American 42.7% 37.1% 26.4%
Latino/a 13.9% 10.7% 7.0%
Other 15.5% 15.1% 20.9%
χ² = 91.12***
Experimental Condition (3 groups)
Control 35.2% 36.1% 34.7%
Feedback Only 32.7% 30.5% 31.9%
Feedback/Training 32.1% 33.4% 32.9%
χ² = .83
Experimental Condition (2 groups)
Control 35.2% 36.1% 34.7%
Experimental 64.8% 63.9% 65.3%
χ² = .22
*p<.05 **p<.01 ***p<.001
The demographic characteristics of the sample changed significantly over the course of the study
(see Table 4.7). Those who participated in a greater number of Internet surveys were more likely
to be White vs. African American, Latino/a or other, and were more likely to be homeowners vs.
renters. Those who participated in more surveys were also generally older, had higher levels of
educational achievement and greater annual incomes. There were no sample differences across
participation in terms of gender or years at residence.
Predictors of non-response for random sample. In the course of the study we tested
nine different quantitative predictors of participation in completing Internet surveys (see Table
4.8). The information came from a range of sources, including crime rates from the Chicago
Police Department, respondents’ answers to the first telephone survey, and respondents’ answers
to the second telephone survey. Using different sources was important because it allowed us to
test some predictors in the correct temporal order (i.e., respondents’ answers preceding
opportunities for participation) and to test some predictors for individuals who did not complete
any of the waves of the Internet surveys.
Nine predictors of participation were created. From official data provided by the Chicago
Police Department we computed the violent crime rate (per 1,000) for each of the study beats.
From the pre-experiment telephone survey we created variables measuring respondents’
self-efficacy about public safety, knowledge about crime prevention, perceptions of police
misconduct, and time on the Internet at home and at work. Self-efficacy
about public safety is a scale that included three questions in which respondents were asked to
rate on a scale from 0 to 100 how much they agreed with statements about influencing their
neighbors to take action, getting the police to be responsive, and working with the police to make
the neighborhood safer (α = .63). Knowledge about crime prevention is a 2-item scale composed
of questions about knowing the things needed to stay safe when out on the streets and knowing
the things needed to keep your house and property safe (α = .76). Perceptions of police
misconduct is a 2-item scale in which respondents were asked how much of a problem (big
problem, some problem, or no problem) it is that the police stop too many people on the streets
without good reason and that the police do not respond quickly to emergency calls in the
neighborhood (α = .56). Higher values indicate more perceived misconduct. Time on the
Internet at home and at work were each measured with a single question asking respondents how
often (every day, several times a week, several times a month, just a few times a year, or never)
they spent time on the Internet. Higher values indicate more time on the Internet.
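A minimal pandas sketch of how two of these predictors could be scored is shown below; the column names and values are hypothetical, and the scoring rules (item means for the scale, counts per 1,000 residents for the crime rate) follow the descriptions above.

    # Hypothetical sketch: scoring the self-efficacy scale and the beat
    # violent crime rate. Column names and values are invented.
    import pandas as pd

    df = pd.DataFrame({
        "influence_neighbors": [80, 60, 90],   # 0-100 agreement items
        "police_responsive":   [70, 50, 85],
        "work_with_police":    [75, 65, 95],
        "violent_crimes":      [120, 80, 45],  # beat-level counts
        "beat_population":     [8000, 6500, 5200],
    })

    # Self-efficacy about public safety: mean of the three 0-100 items
    efficacy_items = ["influence_neighbors", "police_responsive", "work_with_police"]
    df["self_efficacy"] = df[efficacy_items].mean(axis=1)

    # Violent crime rate per 1,000 residents in each beat
    df["violent_rate_per_1000"] = 1000 * df["violent_crimes"] / df["beat_population"]
    print(df[["self_efficacy", "violent_rate_per_1000"]])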
The scales measuring computing capabilities and feelings about the web-based surveys were
created from the post-experimental telephone survey data.12 For all the measures the
respondents were asked whether they strongly agreed, somewhat agreed, somewhat disagreed, or
strongly disagreed with a series of statements. Computing capabilities consists of three items: I am
confident I can learn computer skills, I am apprehensive about using computers (reverse coded),
and I am able to keep up with advances happening in the computer field (α =.49). Less dystopian
feelings about computers consists of two items: computers turn people into just another number
12 Factor analysis was conducted on the eight computing/technology items. The results suggested that there were
three factors, with the first factor, computing capabilities, accounting for 29% of the variance; the second factor,
dystopian feelings about computers, accounting for 18% of the variance; and the third factor, perceived usefulness of
Web surveys, accounting for 17% of the variance. The three factors combined accounted for 65% of the total
variance.
Table 4.7 Summary of Demographic Characteristics of Respondents by Participation Level
Number of Internet Surveys Completed
None 1 to 2 3 or more
Respondent’s Race/Ethnicity
White 36.7% 48.5% 66.8%
African American 43.1% 38.3% 25.1%
Latino/a 14.5% 8.9% 4.7%
Other 5.7% 4.3% 3.4% χ² = 153.57***
Respondent’s Gender
Male 39.5% 39.5% 37.0%
Female 60.5% 60.5% 63.0% χ² = 1.24
Respondent’s Housing Status
Homeowner 68.1% 71.5% 75.9%
Renter 31.9% 28.5% 24.1% χ² = 11.83**
Age (years) M = 40.8, SD = 14.7 M = 43.1, SD = 13.6 M = 46.1, SD = 13.4 F = 28.80***
Education (years) M = 14.2, SD = 2.2 M = 14.8, SD = 2.0 M = 15.3, SD = 1.8 F = 61.21***
Income M = 3.2, SD = 1.3 M = 3.5, SD = 1.3 M = 3.7, SD = 1.2 F = 32.68***
Years at current residence M = 11.1, SD = 11.5 M = 11.2, SD = 10.8 M = 11.4, SD = 11.2 F = .18
*p<.05 **p<.01 ***p<.001
and computers make people become isolated (α = .50). And last, the perceived usefulness of
web surveys consists of two items: Web surveys are a good way to collect data about community
crime problems and I believe that the Chicago Police Department will make good use of
information collected through online Web surveys to help the community (α = .66).
In general, respondents who felt more capable had higher levels of participation. This
included personal assessments about their knowledge and role in promoting public safety, and
general assessments about access to and skills with computers and the Internet. Individuals who
reported spending more time on the Internet, either at work or at home, also participated in more
Internet surveys.
A few of the findings were somewhat surprising. First, respondents from beats with
higher levels of crime were less likely to participate. Second, there was no relationship between
the respondents’ beliefs about the usefulness of Internet surveys and their participation in the
project.
Table 4.8 Bivariate Results for Predictors of Participation in Internet Surveys
Number of Internet Surveys Completed
None 1 to 2 3 or more
M SD M SD M SD
Violent crime rate in
beat (per 1,000) 16.62 11.73 15.26 11.88 12.76 9.83 F = 24.70***
Self-efficacy about public
safety 62.91 21.32 64.35 19.26 66.38 17.52 F = 6.32**
Knowledge about crime
prevention 82.20 19.58 84.59 15.72 85.60 14.09 F = 8.40***
Perceptions of police
misconduct 1.55 .62 1.45 .57 1.34 .48
F = 29.43***
Time on Internet at home 4.22 1.14 4.52 .94 4.51 .96 F = 19.90***
Time on Internet at work 3.47 1.80 3.71 1.74 3.96 1.62 F = 14.53***
Computing capabilities 3.55 .52 3.68 .46 3.75 .39 F = 13.28***
Less dystopian feelings
about computers 2.78 .92 2.88 .84 3.02 .79
F = 6.23**
Perceived usefulness of
web surveys 3.17 .69 3.20 .57 3.23 .62
F = .60
**p<.01 ***p<.001
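The F statistics in Table 4.8 are consistent with one-way analyses of variance comparing each predictor across the three participation groups; a hedged sketch of such a test, with invented values, follows.

    # Hypothetical sketch of one row of Table 4.8: a one-way ANOVA
    # comparing a predictor (e.g., beat violent crime rate) across the
    # three participation groups. Values are invented.
    from scipy.stats import f_oneway

    none_group  = [15.2, 18.7, 22.1, 30.5, 17.4]
    one_to_two  = [14.8, 16.0, 12.3, 19.9, 15.5]
    three_plus  = [11.5, 13.2, 9.8, 14.1, 12.0]

    f_stat, p_value = f_oneway(none_group, one_to_two, three_plus)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")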
Post-experiment telephone findings. Prior to the implementation of the post-experiment
telephone survey, we compiled information on which of the respondents had completed Internet
surveys and how many Internet surveys each had completed. This information was used to
screen respondents and ask those individuals who never filled out any of the surveys why they
had decided not to participate. Additionally, respondents who had completed fewer than three
surveys were asked why they had decided to stop participating. The questions were open-ended.
Gathering these data provided us a unique opportunity to better understand the barriers to
participating in Internet surveys.
Seven hundred and ten respondents provided answers as to why they had chosen not to
participate in the project and 701 provided answers as to why they stopped participating.13 By
far the most common response was a lack of time. A large percentage of the respondents said
that they were simply too busy to participate. Many other respondents gave answers revolving
around a lack of motivation, such as “I’m just too lazy,” “I just did not feel like it,” and “It’s not
in my nature to do surveys.” One person went so far as to say, “I was just busy with other stuff
and I didn’t feel like it, I didn’t feel like being a good citizen and doing the responses” (emphasis
added). Many of the reasons for nonparticipation or discontinued participation highlight the
diversity of normal life events that people experience, even over such a short period of time,
including babies being born, serious illnesses, deaths in the family, changes in employment, and
relocating. Overall, the results overwhelmingly suggest that time and motivation were the
biggest barriers to getting people to complete on-line surveys.
While it is important to know that a lack of time and motivation were the most prevalent
barriers to participation, this finding is not particularly surprising or unique to on-line surveys.
There were, however, other issues raised that are more specific to this type of project. For
example, many people reported technical problems, ranging from their computer being infected
with a virus to trouble with their Internet connection. Many respondents also reported technical
problems specific to accessing the survey. For example, Mac users had problems getting their
default Internet browser, Safari, to work with our survey engine software. Even more
problematic was the password system, as described earlier. A large percentage of people reported
not filling out surveys because they lost their passwords. Although we had a mechanism in place
for retrieving lost passwords, many of the respondents who forgot their password did not try to
contact us and simply did not participate. All of these examples highlight the need to provide
easy access to timely technical support for all study participants.
Another important issue revolved around where and when respondents could access the Internet.
We recruited respondents into the study if they had access to the Internet at home or at work. A
significant number of respondents cited not being able to access the Internet at work as a reason
for not participating. As employers become increasingly concerned about employees wasting
time online, they may institute more restrictive policies about Internet usage. Researchers should
be aware of this issue.
The last important reason cited for lack of participation was related to email. In general,
we found that the email reminder with a link to the survey was very helpful. However, many
respondents cited a change in email address as the reason for discontinued participation. Others
stated that they stopped getting the emails, and still others seemed overwhelmed by the amount of email. For
example, one person stated, “I get about 150 emails everyday. It was always at the bottom of the
list. I’m sorry I didn’t participate. I volunteered yes, but I literally get about 150 emails
everyday.” While the email delivery system was useful to a significant number of the
respondents, overreliance on it can be problematic. Many people were overwhelmed by the
amount of spam in their inboxes, and spam-filtering software may prevent the emails from
reaching the respondents.
13 A few respondents indicated that our records were incorrect and that they had filled out the surveys. The number
of respondents who provided this answer was consistent with the number of respondents who chose to fill out the
Internet surveys anonymously.
There were a few respondents who stated that they did not participate or discontinued
participation because of their attitudes about the police or the survey content. For example, one
respondent stated, “I did not trust the police department with this information.” Another
respondent commented “I didn’t think that they were going to be responsive. I didn’t think
they’d do anything.” Respondents also stated concerns about the survey questions not
addressing their needs. For example, one respondent said, “The question did not address my
concerns. The prevention of property crime is a major concern for people in my community. I’d
like to see more things done to prevent that.”
CHAPTER FIVE
ADVANCES IN MEASUREMENT: THE DIMENSIONS OF INTERNET
SURVEY INFORMATION
A. Measurement Overview
The Chicago Internet Project provided a unique opportunity to develop and field test a
new measurement system that could serve a number of objectives within the community policing
and problem-oriented policing paradigms, including assessing neighborhood problems,
identifying community capacities, evaluating police performance, evaluating community
performance, and evaluating local anti-crime initiatives. To provide some context,
we begin this section with a brief assessment of the limits of traditional measurement schemes
and the value added by this new online system. We then describe our methodology for scale
construction and validation. At the core of this section is a description of the various constructs
we have sought to capture through this web-based system and our findings with regard to the
scaling effort. When creating these scales we paid special attention to representing a variety of
theoretically important dimensions of policing as suggested in the policing literature (Maguire,
2004; Mastrofski, 1999; Skogan & Frydl, 2004) and the community literature (Sampson, 1989;
Schuck & Rosenbaum, 2006). We have also created some new scales that we believe are
important for understanding the police-community nexus, but have received little attention in the
literature.
1. Traditional Performance Measurement
We have witnessed significant changes in law enforcement operations in recent years as a
result of new technology, new accountability systems, and a range of new policing strategies.
But as noted in the introduction, despite this progress, police organizations have yet to develop
data systems to measure “what matters” to the public and “what matters” in policing according to
widely touted theories of community policing and problem-oriented policing. We have argued
that this measurement deficiency has placed an upper limit on an organization’s capacity to
introduce needed changes, to build healthy police-community relations, and to maximize
effectiveness in fighting crime and disorder (Rosenbaum, 2004).
The growing pressure to increase police accountability begs the question of how to
measure police performance and how to define “good policing.” Traditional measures of police
performance, emerging from efforts to professionalize the police (beginning in the 1920s), have
focused on crime-related counts, such as the number of crimes reported, arrests made,
contraband seized, and cases cleared, as well as police response times. The limitations of these
traditional measures of performance are well documented in the literature (Alpert & Moore,
2000; Goldstein, 1990; Grant & Terry, 2005; Skolnick & Fyfe, 1993; White, 2007). While these
measures are consistent with the dominant mythology of police as crime fighters, they do not
capture much of what police actually do (Alpert & Moore, 1993; Moore, 2002; Moore &
Poethig, 1999; Reisig, 1999). Official statistics, such as crime rates and clearance rates, are not
only inaccurate and subject to manipulation (i.e. they contain large measurement error), but more
importantly, they provide a very incomplete picture of police work and performance. They fail to
capture some of the critical elements of police work, especially efforts to enhance the quality of
life through improved interactions with the public and improved problem solving (Alpert &
Moore, 1993; Blumstein, 1999; Greene, 2000; Maguire, 2004; Masterson & Stevens, 2002;
Moore et al., 2002; Moore & Poethig, 1999; Reisig, 1999).
These traditional measures also fail to provide any indication of the quality of day-to-day
encounters between the police and community residents, whether these contacts are police
initiated or citizen initiated. Historically, police have taken calls and reports about incidents, but
have rarely sought external feedback on their non-crime fighting performance. Systematically
seeking citizen input is a recent invention in the history of policing. (The one exception is the
case of citizen complaint mechanisms, which remain severely flawed to this day). In the 1930s,
Bellman introduced one of the first systematic police measurement tools that involved extensive
checklists of effective departmental practices (Bellman, 1935; Parratt, 1938). Grounded firmly in
the era of professional policing, Bellman stated that police must do their job regardless of public
opinion, and in his scale of several hundred items he stuck to insular police issues. Community
input was treated as superfluous, summed up in a single item on his large inventory: “Are there any
particular circumstances, geographical, political, social or otherwise, which affect the problem of
the police?”
Consistent with Bellman’s position, some critics today would argue that the public is
fickle and civilian evaluations of police performance will change dramatically with one well-
publicized incident, such as the Rodney King beating, and therefore, should not be given any credibility or
weight. Indeed, there is evidence suggesting that a single incident can alter public opinion about
the police (Weitzer, 2002). However, any valid measurement system should provide continuous
data collection on a large scale and therefore, have the capacity to identify stable patterns across
different neighborhoods and demographic subgroups (which our data suggest are present), as
well as pinpoint the amount of variance due to local or national events. The stability of these
changes can also be examined.
Returning to the current state of measurement, the crime fighting model retains its
dominance within police organizations. Even the large volume of citizen-initiated calls for service
(up to 18,000 calls per day in Chicago) is quickly channeled and screened into a narrow set of
crime measures. Rarely do police departments report on their handling of the roughly 80% of
the calls that do not involve crime incidents, per se (Scott, 1981). In fact, the attention of the
police is drawn to the roughly 2% of calls that involve violent crime. The question then
becomes—how well are the police performing on activities or tasks that consume the vast
majority of their time but are not captured in traditional statistics? We simply don’t know.
Another problem with traditional measures of police performance is that, with the
exception of crime rates (which, in the long run, are shaped by forces beyond the control of the
police), the organizational focus is on counting activities as indicators of success rather than
measuring whether the organization has achieved its desired goals. Goldstein (1979; 1990)
argues that this obsession with “means” rather than “ends” has dramatically reduced the
effectiveness of police organizations. Measuring processes is not inherently evil, but as noted
earlier, only a few processes (such as arrests, seizures, enforcement activities) are recorded,
leaving the quality of daily policing activities largely unmeasured. Also, a wide range of non-
crime outcomes are ignored as well, such as fear of crime, perceptions of crime and disorder, and
public assessments of police performance.
Progressive police departments have recently enhanced traditional measurement with an
accountability push, both department-wide (i.e. COMPSTAT) and officer specific (i.e. early
intervention or warning systems). Some departments are even seeking to integrate their internal
measurement and external monitoring systems involving citizen complaints (see Walker, 2005).
This is an important step forward in police accountability, but the primary limitation of these
new systems is that they are typically reactive and incident driven. The focus remains on the
unrepresentative group of citizens who formally complain about police conduct rather than
aggregate data from the entire community. The quality of policing during routine encounters
remains below the radar screen.
2. Establishing a Mandate and an Information Imperative
If police organizations attempt to move beyond traditional “bean counting” or “the
numbers game,” the question then becomes—what are the goals of the organization? What
problems are they trying to solve? Unfortunately, the police do not have a clear mandate—they
have been given a wide range of duties and responsibilities, ranging from preventing crime to
controlling crowds to saving lives. However, the emergence of several new policing paradigms
during the past 30 years has provided some guidance in this regard, bringing with them a new
information imperative (Dunworth et al., 1998; Rosenbaum, 2004).
Some of the most popular policing models—community policing, broken windows
policing, and problem-oriented policing—suggest that the function of the police reaches far
beyond crime fighting to outcomes such as improving the quality of neighborhood life as
measured by social disorder, physical decay, and fear of crime. Community policing also
mandates that the police play a role in creating self-regulating neighborhoods by strengthening
informal social controls (Rosenbaum, Lurigio & Davis, 1998). Achieving this goal presumably
entails promoting community crime prevention behaviors, strengthening collective efficacy, and
building partnerships with other agencies and organizations—functions that are not reflected in
traditional police performance measures. The achievement of these goals cannot be assessed
without the collection of new information, which requires new measurement systems.
Thus, to solve neighborhood problems and engage the community in preventative
behaviors, police organizations would benefit from knowledge of (1) public perceptions of the
severity of various problems; (2) the community’s capacity to engage in community crime
prevention; and (3) the community’s support for police initiatives and willingness to engage in
joint ventures. If police had a better understanding of problems in their respective communities
then they would be better equipped to deal with them (Moore et al., 2002). Furthermore, to
achieve maximum effectiveness, the police must have the support and cooperation of the public.
To do this, they must have the trust and confidence of the people they serve. This is an
indispensable outcome in and of itself. Hence, new measurements of performance should
capture not only the community’s assessment of problems and its capacity to prevent crime, but
also the community’s evaluation of police services, police encounters, police-community
relations, and overall organizational legitimacy. These will determine the health of the police-
community partnership and its ability to “co-produce” public safety.
Finally, we emphasize that the importance of perceived fairness in police encounters
reaches far beyond building problem solving teams to the very essence of law and order.
Research on procedural justice theory suggests that citizens’ willingness to obey the law hinges
on their perceptions that the police are procedurally just and fair (Sunshine & Tyler, 2003; Tyler,
2004). If the police are free to be abusive to citizens or to violate the law, many residents will
scoff at their own responsibility to be law abiding.
3. Level of Measurement
Measurement of police performance can occur at the individual, small group or
organizational level. Traditional internal systems to assess individual police officers are seriously
flawed. In most agencies, the performance evaluation process has no credibility and is unrelated
to real officer performance. Researchers and police executives have offered many suggestions
for improving internal evaluation systems (e.g. Oettmeier & Wycoff, 1997; Skolnick & Fyfe,
1993; Walker, 2005), but the fact remains that an officer’s work is largely unsupervised and
difficult to measure. Evidence of successful problem solving by officers holds promise as an
outcome measure, and more work is needed to develop good internal measurement systems in
this domain. But community judgments of success in problem solving are equally important. If
the quality of life in the neighborhood has not changed in noticeable ways (e.g. fear of crime,
perceived severity of local crime and disorder problems, residents’ use of the local parks and
facilities), this reflects on the true success of the problem solving project. Hence, external
community assessments are essential, not only to capture perceived changes in local conditions,
but also to provide independent judgments about police performance and to serve as a real-time
barometer for police-community relations.
In our measurement framework, we have chosen to focus on measuring organizational
and small group performance from a community perspective that brings attention to police
performance at the neighborhood level. This decision is based on a number of factors. First, we
believe that holding groups of officers responsible for police performance within specific
geographic areas is a sensible accountability strategy, consistent with the logic behind the
popular COMPSTAT and community policing models. Second, the performance of
individual officers is difficult to measure with our community methodology because local
residents (similar to police supervisors) are unable to make reliable observations at the individual
level (e.g. most residents cannot recall the name of a police officer serving their neighborhood).
Having said this, we are not opposed to external measurement at the individual level. Indeed, our
measurement scheme provides for assessments of police service during individual police-resident
encounters. The only question is whether these data should be incorporated into individual
performance review systems or aggregated to hold small groups of officers accountable for the
overall quality of policing at the neighborhood level. (More on this issue later).
4. Community-Based Measurement
Readers may ask, “why the need for a community-based measurement system? Can’t
police organizations evaluate their own performance?” Certainly, we are not suggesting that
current systems be abandoned, but rather supplemented with new information. There are two
questions here: (1) why new information? (2) why can’t the police collect it? Beginning with the
second question, there are several key reasons for creating a community-based survey system
rather than a police-based system. First, most police organizations do not have sufficient
motivation, on their own, to systematically collect and utilize community feedback to improve
agency performance.14 If anything, the history of police reform suggests that departments have
sought to insulate themselves from public scrutiny. The largest police reforms occur only after
external oversight is exercised. Second, the information loses some degree of credibility if it is
managed by the police, who spend considerable energy working on impression management.
Third, the external management of information guarantees that the information will be publicly
available in aggregate form and thus allows police stakeholders to exercise pressure on the
organization to improve its performance.
In part we have already addressed the question of “why new information?” To elaborate,
the history of efforts to reform the police also suggests that police misconduct is the source of
most political crises involving law enforcement agencies. Policing in the 21st century is
characterized by a heightened awareness of the importance of equity, fairness, demeanor, and
overall professionalism as requisites for maintaining public confidence and trust in the police.
Communities continue to want effective cops who can reduce crime, but they also insist that
officers treat community members with respect and dignity. The title of the National Research
Council report on the status of American policing says it all—Fairness and Effectiveness in
Policing (Skogan & Frydl, 2004). Today, the emphasis (and measurement!) must extend beyond
efficiency and effectiveness to issues of equity and fairness of treatment across race, class,
gender, sexual orientation, and religion. Hence, we have proposed a system of measurement that
provides regular feedback about the quality (and quantity) of policing at the neighborhood level
through the eyes of the police service consumers.
Finally, we believe that such a system is timely because of a growing schism between
popular policing strategies to address violent crime on the one hand and community expectations
of fair treatment on the other. With the increased application of aggressive zero-tolerance
approaches in many cities, police organizations are running the risk of numerous adverse
community consequences (see Rosenbaum, 2006; 2007), some of which may be preventable if
community feedback loops are introduced.
Progressive police leaders have acknowledged the importance of community input and
feedback, as well as the need to create transparent learning organizations (Fridell & Wycoff,
2004). Interest in gauging local public opinion is apparently widespread, as reflected in the
statistic that 75% of police departments in the U.S. participated in a community survey in 1997
(Fridell & Wycoff, 2004). Externally focused police measurement can include surveys of
14 Current efforts by police agencies to invite community input are widespread, but are best characterized as public
relations events or crisis management meetings rather than reflecting a deep institutionalized commitment to
strategic planning at the neighborhood level.
arrestees, victims, witnesses, callers, persons stopped, the general public, as well as community
observations and focus groups. In recent history, a salient example of advancement in crime
measurement is the introduction of the National Crime Victimization Survey (NCVS); started in
1973, the NCVS is the Nation’s primary source of information on criminal victimization and
provides essential data on unreported crime and victim responses. The NCVS, however, does
not regularly capture public perceptions of the police or the community and does not allow for
reliable estimates of smaller geographic areas or even cities. To fill this gap, hundreds of police
departments have conducted or outsourced local community surveys to obtain local feedback.
Unfortunately, most of these are unscientific mail or telephone surveys and provide only a
snapshot (one time) view of local conditions. Occasionally, we will see valid surveys that
capture many of the dimensions of interest here (e.g. see Skogan and Hartnett, 1997, for a
citywide survey conducted over several years), but even in these cases, the sampling usually does
not allow for estimates at smaller geographic areas, such as neighborhoods, and the time lag
between measurement periods is one year or more. As we have noted previously, we are
unaware of “any efforts to establish (a) representative samples of community residents, (b)
regular reporting periods, (c) comprehensive survey content to measure the important dimensions
of policing and public safety, (d) data analysis or feedback mechanisms, and (e) plans for the
systematic use of these data for strategic or tactical planning” (Rosenbaum, 2004). The
Chicago Internet Project attempted to expand the policing measurement paradigm not only by
broadening performance measures but also by focusing the measurement on a small geographic
unit, the police beat, something made possible by the Internet.
Information technology has expanded possibilities by offering new public safety tools
and measurement methods (i.e. web surveys, websites with accessible up-to-date crime data,
crime mapping and forecasting software). New measurement systems will enhance analyses that
are central to smart policing, such as community analysis, problem analysis, deployment
decisions, and program evaluation (Dunworth, 2000).
5. Community Performance
With all the attention focused on police, it is easy for police leaders, politicians, and the
general public to forget that the community is indispensable in public safety. The community’s
role in the prevention of crime is well established (Rosenbaum et al, 1998; Schuck &
Rosenbaum, 2006; Sampson, 1989). Hence, this measurement system presumes that community
residents should also be held accountable for certain types of “performance.” Society has not
held the community accountable for neighborhood safety, and therefore, has not developed any
standardized measures of community performance. To some extent, we have attempted to do so
here.
B. Measurement Theory and Scale Construction
This section provides a technical description of our approach to scale construction.
Within the context of measurement theory, it is important that we provide evidence of the
validity and reliability of any composite measures being constructed. First, social science
methodologists consistently argue that a relatively small number of measures can represent a
particular theoretical construct. This generalization from measures to constructs, however,
requires some explanatory theorizing (cf. Cronbach & Meehl, 1955) and empirical evidence (cf.
Shadish, Cook, & Campbell, 2002) to articulate the nature of the construct and its components,
how the construct is related to other similar and dissimilar constructs, and if appropriate, what
mediating or moderating processes might be operating. In essence, researchers need to establish
the validity of their measures and scales and provide evidence that the selected items are suitable
reflections of the underlying construct of interest. As part of this process, we begin with the
well-established premise that multi-item indices are superior to single-item measures of
constructs because of their relative strength in reducing measurement error, increasing
measurement stability, and capturing more of the content or components of the construct.
Hence, for each of the constructs of interest, we have followed a rigorous plan of scale
construction and testing to determine whether relevant survey items can be combined into a
single composite index. To begin with, a “good” scale should be unidimensional, internally
consistent, and stable over time. Factor analysis was used to establish the presence of
unidimensionality. Once a factor was identified, Cronbach’s Alpha coefficient was calculated to
measure internal consistency or reliability (i.e. how well these items “hang together”). If an item
did not contribute to the scale’s reliability, it was dropped prior to constructing the index.
Finally, to establish the test-retest reliability of the index, Pearson’s Correlation Coefficient was
calculated to determine the correlation of the index with itself as measured at two points in time,
typically 3 months apart. For key indices, additional tests were performed to explore construct
validity (as described below).
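For readers who want the mechanics, the sketch below (invented data, not the project's code) implements the three checks just described for a hypothetical respondent-by-item matrix measured at two waves: a first-factor screen for unidimensionality, Cronbach's alpha for internal consistency, and a Pearson correlation of wave scores for test-retest reliability.

    # A hypothetical sketch of the scale checks described above.
    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # Cronbach's alpha for a respondent-by-item score matrix.
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    def first_factor_share(items: pd.DataFrame) -> float:
        # Share of variance carried by the first principal component,
        # a common screen for unidimensionality.
        eigvals = np.linalg.eigvalsh(np.corrcoef(items.T))[::-1]
        return eigvals[0] / eigvals.sum()

    # Simulated three-item index measured at two waves (n = 200)
    rng = np.random.default_rng(0)
    trait = rng.normal(size=200)
    items_w3 = pd.DataFrame(
        {f"item{i}": trait + rng.normal(scale=0.5, size=200) for i in (1, 2, 3)})
    items_w6 = items_w3 + rng.normal(scale=0.4, size=(200, 3))

    print(f"first factor share: {first_factor_share(items_w3):.0%}")
    print(f"Cronbach's alpha:   {cronbach_alpha(items_w3):.2f}")
    r, p = pearsonr(items_w3.mean(axis=1), items_w6.mean(axis=1))
    print(f"test-retest r = {r:.2f} (p = {p:.3g})")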
C. Measures of Police Performance
One of the primary objectives of the CIP initiative was to develop new external measures
of police performance. In 1996, the National Institute of Justice held a series of workshops
entitled “Measuring What Matters,” where leading police scholars and practitioners reflected on
the problems with traditional performance measures and explored new possibilities more
consistent with emerging community-oriented and problem-oriented policing strategies. The
participants agreed that police organizations and other stakeholders would need to creatively
define and measure the dimensions of police performance that matter most to the community. In
the ensuing decade, unfortunately, little progress has been made at the empirical level, although
the theoretical dialogue has continued with some refinements.
Recent conceptual work builds on the public’s expectations for the police and what we
value as a democratic society. As Herman Goldstein notes, the police are expected to do many
things, including preventing crime, resolving interpersonal conflicts, managing pedestrian and
vehicular traffic, protecting constitutionally guaranteed rights and creating a sense of security in
the community (Scott, 2000, p. 17). Furthermore, our demands on the police do not end with
these role expectations. The public increasingly insists that the police achieve these objectives in
a manner consistent with our democratic principles. As Goldstein (1990, p. xiii) underscores,
“we have an obligation to strive constantly—not periodically—for a form of policing that is not
only effective, but humane and civil; that not only protects individual rights, equality, and other
values basic to a democracy, but strengthens our commitment to them.”
1. Dimensions of Police Performance
In this framework of policing in a democratic society, several dimensions of police
performance appear repeatedly in the literature (Moore, 2002; Mastrofski, 1999; Skogan et al.,
2000). The central focus has been on assessing the processes of policing. Reaching far beyond
traditional crime statistics, particular emphasis has been given to the following performance
questions, each of which we have sought to measure in this project:
Are the police exhibiting good manners during encounters with residents?
Are the police competent in the exercise of their duties?
Are the police fair and impartial when enforcing the law?
Are the police lawful in the exercise of their duties?
Are the police equitable in the distribution of services?
In addition, theories of community policing and problem-oriented policing have uniquely
underscored the importance of other process and outcome questions for the police. In particular:
Are the police responsive to the community’s concerns and problems?
Are the police effective in solving neighborhood problems?
Are the police engaging the community in crime control and prevention actions?
Are the police creating cooperative partnerships with the community?
Does the public perceive less crime and disorder?
Does the public report lower rates of victimization?
Does the public report less fear of crime?
Does the public perceive a higher quality of life in their neighborhood?
Does the public attribute organizational legitimacy to the police?
These questions define the scope of the measurement system developed as part of this
project. This system assumes that the various behaviors of officers will be reflected in the
perceptions and judgments of local residents, which in turn, will shape residents’ overall
assessment of the police organization. If public expectations of the police are met, then public
confidence in the police and perceptions of police as a legitimate authority should increase
accordingly.
Our conceptual scheme for external evaluation of the police offers three primary types of
community assessment: general assessments of police officers; experience-based assessments of
police officers; and assessments of the police organization. Each is described and distinguished
below.
2. General Assessments of the Police
General assessments of the police provide civilians with the opportunity to broadly
evaluate the behavior patterns or characteristics of police officers without reference to a
particular observation, encounter, or incident. Anyone who is generally aware of the existence
of municipal police is, arguably, qualified to express their opinions via general assessments. We
have constructed two types of general assessments—global evaluations (e.g. survey items
referring to “Chicago police officers”) and neighborhood-specific evaluations (e.g. survey items
referring to “Chicago police officers in your neighborhood”). Past community surveys
demonstrated that these are distinct constructs and although global and specific perceptions can
influence one another, they are unique assessments (Brandl, Frank, Worden & Bynum, 1994;
Schuck & Rosenbaum, 2005). Typically, surveys have focused on global perceptions, but a
better understanding of perceptions of neighborhood policing practices will provide a strong
foundation for a local geo-based evaluation system. Many of these evaluation dimensions were
designed to capture perceptions of efficacy and fairness which are conceptually distinct from
judgments about officers’ crime fighting abilities and thus require different measures to capture
them (Eck & Rosenbaum, 1994; Skogan & Frydl, 2004; Sunshine & Tyler, 2003; Tyler, 2004).
Our general assessment measures have been influenced by previous theoretical and
empirical work. Mastrofski (1999) proposed six global dimensions for assessing police
officers—attentiveness, reliability, responsiveness, competence, good manners and fairness, but
to our knowledge, these dimensions have yet to be fully validated. Additionally, in repeated
telephone surveys, Skogan and Hartnett (1997) measured three neighborhood-specific
dimensions of policing in Chicago—demeanor when dealing with people in the neighborhood,
responsiveness to community concerns, and effectiveness in preventing crime and disorder.
Even broader conceptualizations of performance measurement have been proposed in the
literature (Moore, 1999, 2002). For this project, multi-item scales were constructed to measure
both global and neighborhood-specific indicators of police performance.
3. Global Evaluations of the Police
Police Manners Index. The Police Manners Index was designed to measure the
public’s general perception of officers’ courtesy or manners when interacting with the public.
This three-item index, measured at waves 4 and 6, includes courtesy/respectfulness toward
residents in general, youth, and minorities.
Factor analyses produced a single factor at both waves, explaining 82.7% of the variance
in the items at wave 4 and 80.5% at wave 6. The factor structure was replicated across each of
the four racial/ethnic groups for both waves.
The Police Manners Index exhibited good internal consistency, as reflected in the
Cronbach alpha coefficients at wave 4 (alpha = .89) and wave 6 (alpha = .88). The Index also
exhibited good test-retest reliability between waves 4 and 6, r = .68 (p < .01).
In sum, the Police Manners Index is unidimensional, internally consistent, stable across
racial/ethnic groups, and reliable over time. The final index properties are shown in the table
below. Higher scores on the index denote more frequent displays of good manners by the police.
Police Manners Index: In your opinion, how often do Chicago police officers act in the
following manner? (5 = Always, 1 = Never)
Items
1. Courteous to residents.
2. Respectful of youth.
3. Respectful of minorities.
Scale Statistics N M SD Min Max
Wave 4 676 3.54 .74 1 5
Wave 6 580 3.47 .73 1.33 5
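As an illustration of how such an index is scored (the scale statistics above imply item-mean scoring, e.g., the wave 6 minimum of 1.33 equals 4/3), a small hypothetical sketch:

    # Hypothetical sketch: each respondent's Police Manners Index score
    # is the mean of the three 1-5 items, so higher scores denote more
    # frequent displays of good manners. Values are invented.
    import pandas as pd

    responses = pd.DataFrame({
        "courteous_to_residents":   [4, 3, 5],
        "respectful_of_youth":      [4, 2, 5],
        "respectful_of_minorities": [3, 3, 4],
    })
    responses["manners_index"] = responses.mean(axis=1)
    print(responses["manners_index"].round(2))  # 3.67, 2.67, 4.67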
Police Fairness Index. The Police Fairness Index seeks to measure the general
perceptions of officers’ fairness or evenhandedness in the treatment of citizens and their
application of the law. A two-item index was constructed after analyses indicated that the items
were strongly and consistently correlated in all neighborhoods. The Index revealed good internal
consistency at both wave 3 (alpha=.87) and wave 6 (alpha=.90). The test-retest reliability for the
Fairness Index was also high, r = .77, p < .01.
The final index properties are shown in the table below. Higher scores on the index
denote a stronger belief that Chicago police are fair when dealing with citizens.15
Police Fairness Index: Please indicate whether you agree or disagree with the following
statements about Chicago police officers. (4=Strongly agree; 1=Strongly disagree)
Items
1. Chicago police officers treat all people with dignity and respect.
2. Chicago police officers are fair and impartial when applying the law.
Scale Statistics N M SD Min Max
Wave 3 670 2.43 .80 1 4
Wave 6 560 2.42 .77 1 4
15 In the future, we would consider including two additional items: The police are fair to residents; The police make
decisions based upon facts, not personal biases. In the present survey, however, these items used a different
response format (always-never).
D. Competency Indices
Several indices were developed to measure the public’s view of police competency in a
wide range of areas. Given that police engage in a variety of behaviors, a single index was
considered too insensitive to capture their performance. These are global assessments of officer
competency, not specific to local beat officers.
1. Police Knowledge Index
This two-item index measures whether the public believes police officers are
knowledgeable about police procedures and are well trained. The internal consistency is high
(alpha=.79). The test-retest reliability between waves 3 and 6 was moderately high, r = .60, p <
.01.16
The final index properties are shown in the table below. Higher scores indicate stronger
belief in the knowledge and training of Chicago police officers.
Police Knowledge Index: Please indicate whether you agree or disagree with the following
statements about Chicago police officers. (4=Strongly agree; 1=Strongly disagree)
Items
1. Chicago police officers are well trained.
2. Chicago police officers are knowledgeable about police procedures.
Scale Statistics n M SD Min Max
Wave 3 651 3.16 .54 1 4
Wave 6 538 3.10 .64 1 4
2. Police Reliability Index
This 4-item index taps residents’ feelings about the reliability and consistency of Chicago
police officers. The scaling results show a single factor across all neighborhoods, explaining
66.3% of the variance at wave 3 and 75.8% at wave 6. The internal consistency of the index was
good at both waves (w3 alpha = .82; w6 alpha = .84). The test-retest reliability was moderately
strong, given that the two indices were not identical, r = .67, p < .01.17
The final index properties are shown in the table below. Higher scores indicate a stronger
belief in the reliability of Chicago police officers.
16 Only a single item was measured at wave 6, so this correlation coefficient indicates the relation between that item
and the Index score at wave 3.
17 Only 3 of the 4 items were used at wave 6.
Police Reliability Index: Please indicate whether you agree or disagree with the following
statements about Chicago police officers. (4=Strongly agree; 1=Strongly disagree)
Items
1. Chicago police officers follow through on their commitments.
2. Chicago police officers are reliable when you need them.
3. Chicago police officers respond quickly to emergency calls.
4. Chicago police officers are visible on the streets.
Scale Statistics n M SD Min Max
Wave 3 729 2.96 .64 1 4
Wave 6 596 2.95 .64 1 4
E. Neighborhood-Specific Evaluations of the Police
1. Responsiveness to Community Index
Consistent with the community policing paradigm, this 4-item index measures the extent to which
residents view their neighborhood police as responsive to their concerns, including a willingness
to share information and work with residents on problems of high priority to the community. This
index is modeled after Skogan and Hartnett’s (1997) Responsiveness Index, with some new items
added (#2 and #3) for content validity.
The scaling results show a single factor across all neighborhoods, explaining 81.7% of
the variance at wave 3 and 83.2% at wave 6. The internal consistency of the index was strong at
both waves (w3 alpha = .93; w6 alpha = .93). The index demonstrated good test-retest
reliability, r = .74, p < .01.
The final index properties are shown in the table below. Higher scores indicate that local
Chicago police officers are viewed as more responsive to community concerns and engaged in a
problem solving dialogue with the community.
Responsiveness to Community Index: Please rate how good a job you feel the Chicago police
are doing in your neighborhood: (4=Very good job, 1=Poor job)
Items
1. Dealing with problems that really concern residents.
2. Sharing information with residents.
3. Being open to input and suggestions from residents.
4. Working with residents to solve local problems.
Scale Statistics n M SD Min Max
Wave 3 653 2.36 .84 1 4
Wave 6 535 2.40 .83 1 4
2. Satisfaction with Neighborhood Police
A single item was used to measure overall satisfaction with policing at the neighborhood
level regardless of whether the respondent reported any contact with the police in the past year.
The item properties are shown in the table below. Higher scores indicate greater overall
satisfaction with police officers who serve the neighborhood.
Satisfaction with Neighborhood Police: (4=Very satisfied, 1=Very dissatisfied)
Items
1. In general, how satisfied are you with the police who serve your neighborhood? Are you …
Scale Statistics n M SD Min Max
Wave 6 611 3.06 .62 1 4
F. Experience-based Assessments of the Police
Since the bulk of police work involves some kind of community contact (responding to
calls for service, traffic stops, order maintenance, community meetings), and since most of these
interactions are not criminal in nature, police-citizen interactions are especially important to
capture. Measuring constituents’ perceptions of the policing process is central to capturing
whether or not police are “doing justice” (Alpert & Moore, 1993), and arguably, citizens want
the police to be fair and equitable when they are meting out justice. Additionally, Tyler’s (1990)
work indicates that procedural justice, the perception of fair treatment, is related to satisfaction
regardless of whether or not citizens perceive that the police have solved the problem in
question.
Interactions with community residents – either voluntary, citizen-initiated (e.g. calls
for service) or involuntary, police-initiated (i.e. traffic stops, arrests or requests to change
behavior) – are at the heart of police work. Research suggests that positive police contact can
reduce fear and improve public attitudes about the police (Pate et al., 1989), but a larger body of
studies indicates that negative police encounters have a much greater impact on perceptions of
the police than positive interactions (Skogan, 2006). For our purposes, the important point is that
these encounters are only examined occasionally via research surveys and not measured
systematically by police organizations or outside entities. Unless a police contact results in an
arrest, ticket, or citizen complaint, no data are collected on these encounters. Given that
citizens’ trust of police hinges on citizen-police interactions, vicarious and direct, and given that
most police-citizen interactions do not result in “formal police action” (i.e. arrest), it seems
imperative that we find a way to evaluate the quality of these contacts.
The police process measures were conceived under, and influenced by, the “customer
service” model. The idea is that these measures would allow the customers (individuals who call
the police, organized petitioners, or those who experience “obligation encounters”) to evaluate
the police service they received (Moore, 1999, 2002). As in the private sector, and increasingly
in the public sector, customer satisfaction surveys are integral to evaluating and adapting
operating procedures and to giving consumers a voice in service delivery.
Procedural justice theory (Lind & Tyler, 1988; Tyler, 1990) provided a framework for
developing measures of police-civilian encounters, as people’s judgments about the police are
based heavily on their sense of whether the process is fair. Research suggests that a process is
more likely to be judged fair when the following elements are present (Skogan & Frydl, 2004, p.
304): (1) demeanor—people are treated with dignity and respect; (2) participation—people have
a voice and are allowed to explain their situation; (3) neutrality—the authority is seen as
evenhanded and objective; (4) trust—people trust the motives of the authority as serving their
needs, concerns, and well-being. Our experience-based assessment questions capture all or part
of these dimensions.
From a crime victim’s perspective, these dimensions are also important, as too often
victims of violence encounter non-supportive professionals, which can inhibit their
psychological recovery (Ullman, 1996). Using restorative justice theory (Bazemore, 1998), one
can argue that police should be judged by their ability to “restore crime victims” (Alpert &
Moore, 1993). This implies the need for police to be sensitive to the needs and concerns of
crime victims when the incident is reported (Rosenbaum, 1987).
The experience-based assessment measures described below cover a wide range of direct
and indirect encounters with the police. Direct encounters include calls to 311 and 911, domestic
home visits, traffic stops, and crime incidents as a victim or witness. Some are police-initiated,
others are civilian-initiated. Some are close personal encounters; others are observations from a
distance (e.g. witnessing encounters in the neighborhood). Regardless, survey respondents were
asked to report their overall satisfaction with their most recent encounter (described in the table
below). More importantly, they were queried about the procedural justice and restorative justice
aspects of these encounters. Only the traffic stop responses are reported here to illustrate the
potential for measurement.
In addition, we have developed new measures of emotional responses to police
encounters. Other than an occasional item about fear of being stopped by the police, surveys
have yet to capture the affective component of potential police encounters.
1. Assessments of Police Stops Index
A national survey in 2005 indicated that roughly one in five U.S. residents ages 16 or
older (43.5 million people) has face-to-face contact with the police each year and that more than
half of these contacts (56%) are traffic related (Durose, Smith, & Langan, 2007). Over the past
decade, police stops have become a lightning rod for tensions between the police and minority
communities. Complaints about racial profiling, as well as verbal and physical abuse, have been
widespread. Hence, there is a pressing need to institutionalize the measurement of police conduct
during traffic stops.
A Police Stop Index was constructed to capture some key procedural elements of police
stops as perceived by the person being stopped. The questions (and what they measure) are
listed in the following table in sequential order. The screening question asked, “In the past year,
have you been stopped by a Chicago police officer when you were in a car, on a motorcycle, on a
bike, or out walking?” We also asked, “Did this police stop occur in your neighborhood or
somewhere else?”
Assessments of Police Stops
Items
During the most recent time you were stopped by Chicago police … (1 = Yes; 0 = No)
1. Did the police clearly explain why they stopped you? (trust/concern)
2. Did you feel that you were stopped for a good reason? (neutrality)
3. When they talked with you, did the police pay careful attention to what you had to say?
(participation/voice)
4. Did the police clearly explain what action they would take? (trust/concern)
During this stop…(1 = Yes; 0 = No)
5. Did the Chicago police say anything that you thought was insulting, disrespectful, or
rude? (demeanor)
6. During this stop, did any Chicago officer use any form of physical force against you,
including pushing, grabbing, kicking, or hitting? (demeanor)
During this stop… (4 = Very polite; 1 = Very impolite)
7. Did you find the Chicago police …? (demeanor)
During this stop… (4 = Very fair; 1 = Very unfair)
8. How fair were the Chicago police? (neutrality)
During this stop… (4 = Very satisfied; 1 = Very dissatisfied)
9. Overall, how satisfied were you with the way the Chicago police responded?
(satisfaction)
Why were you dissatisfied with the way the police responded? (open ended question)
Scale Statistics n M SD Min Max
Wave 3 111 6.02 3.03 1 9
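Because the module above is gated by a screening question, only respondents who report a stop see the follow-up items. The sketch below illustrates that skip logic in Python; the field names and placeholder answers are invented for illustration, not taken from the CIP survey engine.

```python
from typing import Optional

def administer_stop_module(was_stopped: bool) -> Optional[dict]:
    """Skip logic: a 'no' on the screener bypasses the whole stop module."""
    if not was_stopped:
        return None                       # module skipped entirely
    answers = {}                          # placeholder answers for illustration
    answers["in_neighborhood"] = True     # "Did this stop occur in your neighborhood?"
    answers["explained_reason"] = 1       # item 1, coded 1=Yes / 0=No
    # ... remaining procedural justice, demeanor, and satisfaction items
    return answers

print(administer_stop_module(False))      # -> None: no stop, no follow-up items
```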
2. Satisfaction with Police Contacts
The following items measure Chicagoans’ overall satisfaction with diverse police
encounters during the past year, ranging from residents’ calls for police assistance to police-
initiated vehicle stops. Satisfaction varies by type of encounter. Given the different sample sizes
for each encounter, a composite satisfaction index was not computed.
Satisfaction with Police Contacts: (4=Very satisfied; 3 = Somewhat satisfied; 2 = Somewhat
dissatisfied; 1=Very dissatisfied)
Items
1. How satisfied were you with the person who answered your 311 call?
Scale Statistics n M SD Min Max
Wave 3 445 3.25 .89 1 4
Satisfaction with Police Contacts: (4=Very satisfied; 3 = Somewhat satisfied; 2 = Somewhat
dissatisfied; 1=Very dissatisfied)
Items
1. How satisfied were you with the person who answered your 911 call?
Scale Statistics n M SD Min Max
Wave 3 250 3.43 .78 1 4
Satisfaction with Police Contacts: During this stop… (4=Very satisfied; 3 = Somewhat satisfied; 2 =
Somewhat dissatisfied; 1=Very dissatisfied)
Items
1. Overall, how satisfied were you with the way the Chicago police responded?
Scale Statistics n M SD Min Max
Wave 3 110 2.87 1.06 1 4
Satisfaction with Police Contacts: Concerning the incident… [In the past year, have you had any in-
person contact with a Chicago police officer because someone in your family had a problem, either a
child and/or an adult?] (4=Very satisfied; 3 = Somewhat satisfied; 2 = Somewhat dissatisfied; 1=Very
dissatisfied)
Items
1. Overall, how satisfied were you with the way the Chicago police responded?
Scale Statistics n M SD Min Max
Wave 3 88 3.10 .94 1 4
Satisfaction with Police Contacts: Concerning the incident… [In the past year, have you had any in-
person contact with a Chicago police officer because you were a victim of or witness to a crime?] (4=Very
satisfied; 3 = Somewhat satisfied; 2 = Somewhat dissatisfied; 1=Very dissatisfied)
Items
1. Overall, how satisfied were you with the way the Chicago police responded?
Scale Statistics n M SD Min Max
Wave 3 115 3.12 .96 1 4
Satisfaction with Police Contacts: Concerning the incident… [In the past year, have you had any in-
person contact with a Chicago police officer because you were involved in a traffic accident or witnessed
a traffic accident.] (4=Very satisfied; 3 = Somewhat satisfied; 2 = Somewhat dissatisfied; 1=Very
dissatisfied)
Items
1. Overall, how satisfied were you with the way the Chicago police responded?
Scale Statistics n M SD Min Max
Wave 3 72 3.29 .94 1 4
Satisfaction with Police Contacts: (4=Very satisfied; 3 = Somewhat satisfied; 2 = Somewhat
dissatisfied; 1=Very dissatisfied)
Items
1. How satisfied were you with the handling of the complaint?
Scale Statistics n M SD Min Max
Wave 3 22 2.05 1.05 1 4
G. Performance at Public Meetings Index
Police officers today are expected to attend public events, organize and facilitate
community meetings, give educational presentations, and engage in problem solving tasks with
other agencies, community organizations, and local residents. The Chicago police hold monthly
beat meetings for each of the city’s 280 police beats and attend other community meetings as
well. The Performance at Public Meetings Index is a new 8-item scale that seeks to gauge public
assessments of police officers’ performance in these group settings. A wide range of performance
dimensions is explored.
The final index properties are shown in the table below. The internal reliability of the
scale is high (alpha = .93). Higher scores indicate more positive police performance in public
meetings.
Performance at Public Meetings Index: In the past year, have you had any in-person contact
with a Chicago police officer because you attended a CAPS meeting or another community
meeting? (1=Yes, CAPS; 2=Yes, other meeting; 3=No)
How would you rate the performance of the Chicago police officers at the community meetings
you have attended this past year? (1=Very good; 4=Poor; 5=DK)
Items
1. Leadership skills
2. Communication skills
3. Problem solving skills
4. Openness to input from residents
5. Fairness to all residents
6. At the meeting, did you find the police… (1=very helpful; 4=not at all helpful)?
7. When residents talked to the police at the meeting, were the police… (1=very polite;
4=very impolite)?
8. Overall, how satisfied were you with the way the police acted at the meeting? (1=very
satisfied; 4=very dissatisfied)
Scale Statistics n M SD Min Max
Wave 4 131 3.20 .58 1.38 4
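Because these items place the favorable response at 1 while higher index scores indicate better performance, reverse coding is implied before aggregation. A minimal sketch on placeholder data, assuming “don’t know” (5) responses are treated as missing and that the index is an item mean (consistent with the 1.38-4 range in the table):

```python
import numpy as np

def reverse_code(x: np.ndarray, low: int = 1, high: int = 4) -> np.ndarray:
    """Flip a low..high rating so that higher values mean a better rating."""
    return (high + low) - x

rng = np.random.default_rng(2)
raw = rng.integers(1, 5, size=(131, 8)).astype(float)  # 1=very good ... 4=poor
raw[rng.random(raw.shape) < 0.05] = np.nan             # 5=DK recoded to missing
recoded = reverse_code(raw)                            # now 4 = very good
meeting_index = np.nanmean(recoded, axis=1)            # mean score per respondent
print(f"index mean = {np.nanmean(meeting_index):.2f}")
```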
H. Affective Response to Police Encounters
Researchers have largely overlooked residents’ emotional or affective responses to police
encounters. Contact with the police can produce a wide range of emotions, from being upset or
angry to feeling reassured or comforted. Six items were tested, and factor analyses yielded two
separate dimensions as described below. The first factor accounted for 52.4% and 57.5% of the
variance at wave 4 and wave 6, respectively, while the second factor was predictably less
explanatory (19.6% and 16.8% at waves 4 and 6). Importantly, this factor structure remained
stable when tested across all four types of neighborhoods.
1. Anxiety Reaction Index
This 2-item index, reflecting the primary factor, captures a negative emotional response
upon seeing a police officer and includes feeling afraid and uneasy. (A third item about feeling
“angry” suppressed the internal consistency of the scale, and therefore, was dropped). The index
was internally reliable at wave 4 (alpha = .84) and wave 6 (alpha = .88). The Anxiety Reaction
Index also demonstrated strong test-retest reliability, r = .69, p < .01.
The final index properties are shown in the table below. Higher scores indicate that
residents feel less anxious when seeing a Chicago police officer.
Anxiety Reaction Index: When you see a Chicago police officer, how often do you feel …
(1=Always, 5=Never)
Items
1. Afraid
2. Uneasy
Scale Statistics n M SD Min Max
Wave 4 711 4.11 .92 1 5
Wave 6 618 4.07 .90 1 5
2. Secure Reaction Index
This 3-item index measures a positive emotional response to seeing a police officer,
including feeling relieved, proud, and secure. The internal consistency of the index was strong at
wave 4 (alpha = .81) and wave 6 (alpha = .81). Also, the Secure Reaction Index exhibited strong
test-retest reliability, r = .73, p < .01.
The final index properties are shown in the table below. Higher scores indicate residents
feel more secure or relieved when seeing a Chicago police officer.
Secure Reaction Index: When you see a Chicago police officer, how often do you feel …
(5=Always, 1=Never)
Items
1. Relieved
2. Proud
3. Secure
Scale Statistics n M SD Min Max
Wave 4 720 3.33 .94 1 5
Wave 6 624 3.40 .88 1 5
I. Assessment of Organizational Outcomes
Not unlike individual officers, police organizations can be judged using both process and
outcome indicators. Police organizations are often judged by the three E’s: efficiency,
effectiveness, and equity (Eck & Rosenbaum, 1994). Efficiency is not the primary focus of the
present measurement system, but it is addressed in previously discussed measures of police
reliability, response time, follow-through, and accessibility (variables captured in the Police
Reliability Index). Here we have added a police visibility index as an organizational measure.
Police visibility remains a concern in many communities and a primary organizational objective,
so we have developed a composite measure of visible police activity through the eyes of local
residents.
On the issue of effectiveness, certainly official crime statistics will continue to be
important for measuring the achievement of crime fighting objectives. Similarly, we have
constructed measures of residents’ perceptions of the severity of crime and disorder as well as
their level of fear of crime (See “Neighborhood Conditions” section below). Tracking these can
be useful for monitoring changes in the environment and the effectiveness of new police
programs. As police departments tailor solutions and strategies to neighborhood-specific crime
issues, be it via problem solving or hot spot policing, measuring outcomes beyond the crime rate
is crucial for understanding the full impact of any one strategy.
We sought to measure the overall social ecology of the neighborhood, such as informal
social control and collective efficacy. To the extent that organizational objectives include
engaging and strengthening the community, reducing fears and concerns, reducing disorder, and
improving the overall quality of life, regularly measuring these variables is a necessity.
If police departments focus on community policing and problem-oriented policing, then
they should directly measure their effectiveness at engaging the community, solving neighborhood
problems, and preventing crime by surveying community residents. Whether or not community
residents believe that police organizations are effective in these domains is an important question
addressed with these new performance indicators.
Finally, the third E, equity, has become a dominant organizational performance indicator
in the past decade. Equity includes the distribution of services (distributive justice) and equity in
the treatment of customers (procedural justice). This project has measured both, but most
attention is given to the equitable treatment of service recipients, regardless of their race, gender,
religion, or other defining characteristics. Earlier, we discussed the measures that captured
perceptions of police manners during police-civilian encounters. Here the focus is on street-level
processes that have been the subject of considerable legal action and over which police
organizations are expected to have more control, including racial profiling and police misconduct.
It is important to emphasize that these measures are designed to capture the perspective of the
community and not the viewpoint of the police or investigative bodies.
Perhaps the most important indicator of organizational performance is the community’s
overall faith in the institution of policing and confidence in the police organization staff and
structure. Organizational legitimacy, as conceived by community stakeholders, is an
indispensable indicator of overall police performance. Is the department transparent and
accountable to its constituents? Does it share information, and is it responsive to citizen
inquiries and complaints? Do residents feel the department is committed to the principles of
problem solving? Is the department committed to principles of community policing, such as
communication, cooperation, and collaboration? These elements of organizational legitimacy are
important measurement dimensions because, for constituents to partner and cooperate with the
police maximally and effectively, residents have to think that the police department is a
legitimate, professional entity with competent staff.
1. Police Visibility Index
One of the most consistent public expectations for the police, across diverse
communities, is the demand for greater police visibility. Despite research evidence
demonstrating that the visibility of randomized patrols is insufficient to deter crime, the public
outcry for more police officers on the streets remains consistent. We should note that the
demonstrated effectiveness of hot spots policing and directed patrol missions may be due, in part,
to the visibility of the police units and the enforcement actions that occur with additional manpower.
In any event, measuring public perceptions of police presence is critically important for external
accountability and may be important in people’s overall assessment of police performance.
The Police Visibility Index was computed by summing the scores on 8 different types of
police activity. The index properties are shown in the table below. A higher score indicates
greater police visibility in the neighborhood. Conceivably, this index could be used as an
indicator of policing at the neighborhood level, but since deployment decisions are dictated by
management, we decided to include it as an organizational measure of performance.
Police Visibility Index: In your neighborhood, how often do you see Chicago police officers engage
in the following activities? (1=Never; 5=Daily)
Items
1. Drive through on patrol
2. Walk or stand on foot patrol
3. Patrol the alley, checking garages or the backs of buildings
4. Chat or have friendly conversation with people
5. Make a traffic stop
6. Search and frisk someone
7. Break up a group of people
8. Arrest someone
Scale Statistics N M SD Min Max
Wave 3 640 20.19 6.05 8 38
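Since the index is meant to describe visible police activity at the neighborhood level, a natural next step is aggregating respondent scores by small geography. A sketch on placeholder data; the beat codes and scores below are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "beat": rng.choice(["0111", "0112", "0113", "0114"], size=640),  # invented codes
    "visibility": rng.integers(8, 41, size=640),  # summed 8-item score, range 8-40
})

# Beat-level visibility estimates (with respondent counts, since thin beats
# produce unstable means) for deployment and accountability review
beat_profile = df.groupby("beat")["visibility"].agg(["mean", "count"])
print(beat_profile)
```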
2. Effectiveness in Preventing Crime Index
This 3-item index measures residents’ assessments of the police department’s
effectiveness in preventing crime and disorder in their neighborhood. A range of items were
analyzed but three items provided the most parsimonious results, with a focus on creating a safe
neighborhood for children.[18] The final scale was unidimensional across all neighborhoods, with
a factor that explained 82.0% of the variance at wave 3 and 81.1% at wave 6. The index was
internally reliable at waves 3 and 6, alphas = .89 and .88, respectively. The index also has good
test-retest reliability, r = .70, p < .01.
The final index properties are shown in the table below. Higher scores indicate greater
perceived police effectiveness in preventing crime and keeping order within the neighborhood.
Effectiveness in Preventing Crime Index: Please rate how good a job you feel the Chicago
police are doing in your neighborhood. (4=Very good job; 1=Poor job)
Items
1. Preventing crime.
2. Keeping order on the streets and sidewalks.
3. Keeping children safe.
Scale Statistics n M SD Min Max
Wave 3 685 2.58 .79 1 4
Wave 6 578 2.66 .73 1 4
3. Effectiveness in Solving Problems Index
This 2-item index captures residents’ judgments about the police department’s
effectiveness in solving neighborhood problems and fighting crime. While problem solving and
crime fighting are conceptually distinct outcomes, they are similar as “bottom line” results, and
are empirically related, as these findings suggest. The internal consistency of this index was
stable across neighborhoods within wave 3 (alpha=.79; range =.67-.81) and within wave 6
(alpha=.80, range = .75 to .81). The test-retest stability of this index was relatively strong, r =
.71, p < .01.
The final index properties are shown in the table below. Higher scores indicate a stronger
belief in the effectiveness of Chicago police officers in solving problems and fighting crime.
Effectiveness in Solving Problems Index: Please indicate whether you agree or disagree with
the following statements about Chicago police officers. (4=Strongly agree; 1=Strongly disagree)
Items
1. Chicago police officers are effective at solving neighborhood problems.
2. Chicago police officers are effective at fighting crime.
Scale Statistics n M SD Min Max
Wave 3 672 2.73 .68 1 4
Wave 6 585 2.75 .65 1 4
[18] Although other items retained membership in a single Effectiveness factor, such as reducing homicide and helping
crime victims, they did not contribute to the internal consistency of this dimension.
4. Willingness to Partner with Police Index
The community’s willingness to work with the police has never been more critical. The
criminal justice system can only achieve justice when victims and witnesses are willing to
cooperate in the identification and prosecution of suspects. Today, police detectives are unable
to solve most homicides because of community fear, exacerbated by websites that post the
pictures, names and addresses of “snitches.” Also, effective problem solving is not possible
without the creation and maintenance of cooperative partnerships between the police and
community stakeholders.
Three survey items were used to measure residents’ willingness to participate with the
police in the co-production of public safety. The items were measured three months apart and
explain 70% of the variance at wave 3 and 66% of the variance at wave 6. Scale reliability was
good at wave 3 (α = .77) and wave 6 (α = .71). The re-test reliability was moderately high (r =
.52, n = 509, p < .001). The variable is coded so that higher values indicate a greater willingness
to work with the police.
Willingness to Partner with the Police Index: Please indicate how likely you would be to: (4
= Very likely; 1= Never)
Items
1. Call the police to report a crime occurring in your neighborhood.
2. Help the police to find someone suspected of committing a crime by providing them
with information.
3. Report dangerous or suspicious activities in your neighborhood.
Scale Statistics n M SD Min Max
Wave 3 732 4.68 .54 1 5
Wave 6 627 4.71 .48 2 5
5. Engagement of the Community Index
Police organizations are expected to make their officers accessible to the public and
increase public awareness and knowledge about crime prevention. This 2-item index measures
residents’ perceptions of community engagement or outreach activities by the police. This is a
limited scale, but the two items hang together well. The internal consistency of this index was
stable across neighborhoods within wave 3 (alpha=.82; range =.78-.86) and within wave 6
(alpha=.81, range = .74 to .90). The index also exhibited reasonable test-retest reliability, r=.60,
p < .01.
The final index properties are shown in the table below. Higher scores indicate more
positive perceptions of Chicago police involvement in community engagement activities.
Engagement of Community Index: How often do Chicago police officers act in the following
manner? (5=Always, 1=Never):
Items
1. Provide crime prevention tips to residents.
2. Make themselves available to talk to residents.
Scale Statistics n M SD Min Max
Wave 3 665 3.26 .94 1 5
Wave 6 534 3.20 .93 1 5
6. Police Misconduct Index
For better or worse, police organizations and their leaders are ultimately judged not by
the performance of their best officers, but rather by the misconduct of officers and the official
response to their behavior. The problem with even the most innovative early warning systems
(see Walker, 2005) is that they are reactive by nature and focus on severely delinquent
individuals rather than seeking to improve the aggregate performance of officers assigned to
particular units or geographic areas. Geo-based surveys and customer satisfaction audits of
targeted police-civilian encounters have the potential to generate near-real time data that can be
used for management intervention, especially problem solving and training about “hot spots of
misconduct.” The option of intervening with individual officers remains available as well.
For the CIP project, we sought to demonstrate that web-based surveys can be used to
measure misconduct through the eyes of the public, short of filing an official complaint against
an individual officer. There are numerous factors that discourage civilians from filing such
complaints, and therefore, alternative measures of police performance would be beneficial. Our
web survey sequence on misconduct began with the following screening question: “In the past
year, have you had any contact with the police, or witnessed an encounter with the police, where
you felt the officer(s) acted inappropriately?” If the response was affirmative (in this study,
14.8%), respondents were asked to report on the nature of the most recent incident, using
categories familiar to the Office of Professional Standards, the agency assigned to investigate
civilian complaints in Chicago. The Police Misconduct Index measures the severity of the
incident as reflected in the number of misconduct behaviors listed by the respondent. Although
not shown here, our web survey also captured whether the incident was reported, how quickly, to
whom, and the complainant’s level of satisfaction with the way the complaint was handled. The
latter question taps into procedural justice considerations.
The Police Misconduct Index is only preliminary and could be expanded to include other
types of delinquent behaviors. We recommend that specific behaviors be separated into different
survey questions. We would also recommend that personal experience with misconduct be
separated from observed incidents. The final index properties are shown in the table below.
Higher scores indicate greater perceived severity of the most recent incident. By far, the most
frequent types of misconduct listed for those who reported an incident were verbal abuse
(54.9%), stopping people without sufficient cause (30.1%), and discrimination by race or other
characteristics (27.4%).
Police Misconduct Index: What was the nature of the incident that you experienced or witnessed?
(check all that apply)
Items
1. Use of excessive force (officers were physically abusive or used weapons unjustifiably)
2. Verbal abuse (officers used profanity, made verbal threats or were generally
discourteous)
3. Misuse of police power (officers accepted bribes or forced residents to perform an illegal
activity)
4. Failure to address a known crime
5. Failure to give name when asked or failure to wear nametag
6. Discrimination on the basis of race, gender, sexual orientation, class, religion
7. Too often stopping people in the neighborhood without sufficient cause
8. Other [please specify]
Scale Statistics n M SD Min Max
Wave 3 113 1.80 1.29 0 7
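Because the question is check-all-that-apply, the index reduces to a count of checked categories. A minimal sketch on placeholder data, counting the seven listed categories (consistent with the 0-7 range in the table; the “Other” category is left out of the count here):

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder grid: rows are the 113 respondents who passed the misconduct
# screener; columns are categories 1-7, coded 1 if checked
checked = rng.integers(0, 2, size=(113, 7))

severity = checked.sum(axis=1)  # Misconduct Index: behaviors listed per incident
print(f"mean severity = {severity.mean():.2f}")  # the report's observed mean was 1.80
```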
7. Racial Profiling Index
The extent to which racial profiling is a problem is believed to vary by organization and
even within larger organizations, suggesting that leadership, supervision, and norms of behavior
play some role. Hence, we consider community-based measures of racial profiling as indicators
of organizational (rather than individual) performance.
The Racial Profiling Index captures residents’ beliefs about the frequency of racial
profiling behaviors by police officers. The content validity of this 4-item index is strengthened
by including a range of circumstances under which profiling behaviors might occur, from police
stops to arrests. Factor analyses revealed that the index was unidimensional for the total sample
and for each of the four neighborhood types. The internal consistency of the index was very high
(alpha = .94 at wave 3 and .95 at wave 6), as was the test-retest reliability, r = .71, p < .01.
The final index properties are shown in the table below. Higher scores represent a
stronger belief among residents that Chicago police officers use race when making decisions to
stop, search and arrest.
Police Racial Profiling: Please indicate how often you think that Chicago police officers
consider race when deciding: (4=All the time; 3=Often; 2=Not very often; 1=Never)
Items
1. Which cars to stop for possible traffic violations.
2. Which people to stop and question on the street.
3. Which people to search.
4. Which people to arrest and take to jail.
5. How quickly they will respond to calls for help.
Scale Statistics n M SD Min Max
Wave 3 559 11.16 2.82 4 16
Wave 6 471 11.11 2.69 4 16
8. Organizational Legitimacy Index
This index seeks to capture the public’s general trust and confidence in the Chicago
Police Department as an organization, reflecting the extent to which residents believe that the
organization is under good leadership, is doing a good job overall and holds its officers
accountable for their actions.
Five relevant items were included on wave 3 and then repeated on wave 6. Factor
analyses of these five items yielded a single factor at both waves, accounting for 66.1% of the
variance in the items at wave 3 and 69.5% at wave 6. This unidimensional factor structure was
replicated across each of the racial/ethnic groups for both waves.
The Organizational Legitimacy Index exhibited strong internal consistency or reliability,
as demonstrated by the Cronbach alpha coefficient at wave 3 (alpha = .87) and wave 6 (alpha =
.89). Furthermore, the Index was shown to have strong test-retest reliability between waves 3
and 6, r =.73 (p < .01).
In sum, the Organizational Legitimacy Index is unidimensional, has strong internal
consistency, is stable across racial/ethnic groups, and is reliable over time. The final index
properties are shown in the table below. Higher scores indicate higher perceived organizational
legitimacy of the Chicago Police.
Organizational Legitimacy Index: Please indicate whether you agree or disagree with the
following statements about the Chicago Police Department. (4 = Strongly agree; 1 = Strongly
disagree)
Items
1. I have confidence the Chicago Police Department can do its job well.
2. I trust the leaders of the Chicago Police Department to make decisions that are good for
everyone in the city.
3. People's basic rights are well protected by the Chicago Police Department.
4. Chicago police officers are held accountable and disciplined when they do something
wrong.
5. When Chicagoans are upset with the police, there is usually someone they can talk to at
the Chicago Police Department.
Scale Statistics n M SD Min Max
Wave 3 728 2.83 .61 1 4
Wave 6 621 2.77 .63 1 4
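The replication checks across racial/ethnic groups amount to recomputing the same reliability statistics within each subgroup. A compact sketch on placeholder data, with invented group labels and sizes (the report's actual subgroup analyses are not reproduced here):

```python
import numpy as np

def alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n, k) response matrix of complete cases."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(5)
groups = {name: rng.integers(1, 5, size=(180, 5)).astype(float)
          for name in ["African American", "Latino", "White", "Other"]}

for name, items in groups.items():        # per-group replication check
    print(f"{name}: alpha = {alpha(items):.2f}")
```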
J. Measuring Individual and Collective Performance Indicators
Community crime prevention theory is built on the premise that community members
play an integral role in maintaining social order and preventing criminal activity (Rosenbaum,
1988). Criminologists have made it clear that crime rates are influenced by a wide range of
social factors outside the police function (Reiss, 1986; Reiss & Roth, 1993), and that public order
is heavily influenced by informal social control processes within the community (Greenberg et
al., 1985; Sampson & Raudenbush, 1997). Hence, communities should be enlisted in the job
of enforcing informal social mores, taking individual and collective action to prevent crime,
providing information and resources to police, and working with the police to solve public
safety problems. Building partnerships with the police has been identified as particularly
important for the co-production of public safety (Cordner 1997; Rosenbaum 2002; Schuck &
Rosenbaum, 2006).
Building on this knowledge, community policing and problem-oriented policing theory
confirm the importance of community engagement and police-community partnerships as
vehicles for solving neighborhood problems and maintaining a safe environment (Goldstein,
1990; Greene, 2000; Rosenbaum, 1994). To test these ideas and hold both the community and
police accountable for community change, we need to construct a new measurement system.
This new system should regularly monitor the social ecology of urban neighborhoods and
evaluate the "performance" of the community, individually and collectively. Knowing the levels
of community social capital, crime prevention behaviors, and collective efficacy within small
geographic areas can assist police and community leaders in determining the scope of resources
and planning needed to achieve a measurable reduction in crime and disorder. Also, tracking
abrupt reductions or increases in citizen perceptions of crime problems and fears can help identify
“perceptual hot spots,” direct police resources, and evaluate police and/or community initiatives
within particular communities.
The community component of the CIP web survey taps into several overarching variable
domains:
Neighborhood conditions: Fear of crime, social and physical disorder and overall
perceptions of neighborhood conditions
Individual resident performance: Individual, household, and collective crime prevention
knowledge and behaviors
Community performance: Informal social control and collective efficacy
1. Neighborhood Conditions
The social and physical conditions of a neighborhood are what define the quality of urban
life. The presence of liquor stores, vacant lots, abandoned cars, garbage on the street, graffiti on
the walls, and broken windows are physical conditions that, collectively, send a strong message
about the level of safety in the neighborhood. Similarly, loud music, groups of youth hanging
out, prostitution, panhandling, and public drinking are social conditions that define the
interpersonal landscape and quality of life in a neighborhood. These conditions, whether signs of
physical or social disorder, are dangerous because, as Skogan (1990) cogently argues, they
undermine the neighborhood's capacity to exercise informal social control, enhance residents'
fear of crime, contribute to more serious crime, and destabilize the housing market. Although
researchers continue to debate whether disorder contributes directly to serious violent crime
(Sampson & Raudenbush, 1999; Taylor, 2006), overall, there is consensus that it is an indicator
of neighborhood decline that should have the attention of police, community leaders and policy
makers. Hence, reliable measurement of this construct is essential for managing the quality of
neighborhood life.
Similarly, fear of crime and actual crime rates are widely used as indicators of
community stability. When residents are afraid and when crime rates are high, the community's
capacity to defend itself is undermined. For planners and policy makers, having baseline
information on the perceptual and behavioral conditions that define each target neighborhood is
critical. Problem-oriented policing stresses the importance of identifying, defining, and solving
these neighborhood problems and conditions (Goldstein, 1979; Goldstein, 1990). Community
policing stresses the importance of addressing residents' perceptions of and reactions to crime
and disorder (Rosenbaum, 1994; Skogan & Hartnett, 1997). The public's fears, concerns, and
behavioral responses to their environment can either make the neighborhood more hospitable or
repellent to potential offenders and criminogenic conditions (Skogan, 1990).
Hence, the measurement framework we have developed via the Chicago Internet Project
assumes that planning and problem solving demand reliable estimates of community perceptions
of disorder, perceptions of crime problems, fear of crime and actual rates of victimization.
Although our sample sizes were insufficient to generate reliable estimates of victimization, the
survey items were constructed with this goal in mind.
These CIP measures tap into the concerns and priorities of communities at a local level.
Repeatedly measuring these concerns can alert police and communities to emerging “disorder
hot spots” or "fear hot spots." Essentially, crime forecasters in the future may be able to identify
neighborhoods that are near the tipping point or about to enter a “cycle of decline.” Additionally,
police can use these measures to evaluate police-community problem solving efforts or targeted
police missions. Overall, these survey measures capture the perceived physical and social
conditions related to crime and quality of life. Presently, communities have no sensible way of
assessing the impact of police or community interventions on neighborhood conditions.
Crime and disorder index. The importance of measuring the public's perceptions of
disorder can be found in the “broken windows” theory of crime. The central notion is that when
a neighborhood is physically and socially disorganized it is a breeding ground for crime because
these conditions heighten fear, reduce natural crime prevention (e.g. guardianship) and therefore,
contribute to more serious crime (Felson, 2006; Skogan, 1990; Taylor, 2006; Sousa & Kelling,
2006; Wilson & Kelling, 1982).
A crime and disorder index can be used not only to assess neighborhood conditions for
planning purposes, but as an outcome indicator to monitor the effectiveness of order maintenance
strategies. Perceptions of disorder should change if police take action to ameliorate actual
disorder problems (e.g. youth congregating or prostitution).
Community disorder has been studied carefully with both resident surveys and
observations (Sampson & Raudenbush, 1999; Taylor, 1999). Disorder has been found to be
important to community residents (Skogan, 1990; Skogan & Hartnett, 1997) and to be strongly
associated with fear and other public safety constructs (Scheider, Rowell, & Bezdikian, 2003;
Skogan, 1990; Warr, 2000). The scale items in this study are adapted from Skogan and Hartnett's
(1997) physical and social disorder scale used in the annual evaluations of Chicago’s CAPS
program. Most social and physical disorder problems are area-specific, and given this web-based
survey methodology, we are able to track small geographic “units of disorder.”
We have constructed a single index for crime and disorder, although the range of items
covers social disorder, physical disorder, and crime, which could be treated as subscales. The
overall Crime and Disorder Index showed strong internal consistency at wave 1 (α = .88) and
wave 5 (α =.90) and demonstrated very high test-retest reliability (r = .86, n = 323, p <.001).
Crime and Disorder Index: The following is a list of things that you may think are problems in
your neighborhood. Please indicate whether you think each is a big problem, some problem, or
no problem in your neighborhood. (3 = Big problem; 2 = Some problem; 1 = No problem)
Items
1. Garbage in the streets.
2. Poor street repairs.
3. Poor street lighting.
4. Graffiti — writing or painting on walls or buildings.
5. Public drinking.
6. Loud music and/or noise.
7. Illegally parked vehicles.
8. Abandoned houses and other empty buildings.
9. Dogs off leash or owners not picking up after them.
10. Groups of youth hanging out.
11. Speeding or drag racing.
12. Homeless people asking for money.
13. Cars being vandalized — things like windows or aerials being broken.
14. Drug dealing on the streets.
15. Prostitution.
16. People breaking into homes/garages to steal things.
17. Shootings and violence by gangs.
Scale Statistics n M SD Min Max
Wave 1 625 24.68 6.08 16 49
Wave 5 524 24.72 6.40 16 50
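As discussed above, one intended use of this index is flagging emerging “disorder hot spots.” A sketch of that monitoring step, comparing beat-level means across waves on placeholder data with invented beat codes:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

def wave_data(n: int) -> pd.DataFrame:
    """Placeholder respondent-level disorder sums keyed to invented beats."""
    return pd.DataFrame({
        "beat": rng.choice(["0111", "0112", "0113", "0114"], size=n),
        "disorder": rng.integers(17, 52, size=n),  # 17-item summed score
    })

w1, w5 = wave_data(625), wave_data(524)
change = (w5.groupby("beat")["disorder"].mean()
          - w1.groupby("beat")["disorder"].mean()).sort_values(ascending=False)
print(change.head(2))  # beats with the largest increases: candidate hot spots
```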
Fear of crime index. Fear is an important social factor with real consequences for
individuals, communities and cities. It is a defining feature of urban neighborhoods. As noted
above, fear of crime can curtail residents’ activities by increasing apprehension about venturing
into public spaces and reducing their interaction with neighbors, thus jeopardizing social control
(Garofalo, 1981; Hartnagel, 1979; Moore & Poethig, 1999; Perkins & Taylor, 1996; Skogan,
1986). Early research focused on the factors associated with fear, especially behavioral
avoidance and other crime prevention measures, such as carrying a weapon or locking doors
(DuBois, 1979; Lavrakas, 19xx; Rosenbaum & Heath, 1991?; Skogan & Maxfield, 1981).
Considerable research has focused on fear of crime as a consequence of victimization (Skogan,
1987), particularly sexual assault recovery (Ferraro, 1996). Also, community crime prevention
and community policing initiatives have been evaluated using fear of crime as a central outcome
measure (Brown & Wycoff, 1987; Ditton, Khan, & Chadee, 2005; Eck & Spelman, 1987;
Rosenbaum, 1987). Other research studies have focused on fear of crime as a social condition in
its own right (Denkers & Winkel, 1998; Perkins & Taylor, 1996).
The Chicago Internet Project generated two kinds of fear measures. First, we sought to
replicate the widely used item employed in national surveys to capture a general sense of fear
when "alone outside in your neighborhood at night." Second, we sought to measure localized
fear in particular neighborhood settings. We explored fear levels in various settings, ranging
from public transportation to local parks, both during the daytime and at night. These types of
questions about anticipatory fear of victimization under particular circumstances have been
utilized in prior research (Denkers & Winkel, 1998). All of our measures focus on settings
within the neighborhood, and therefore, allow for the construction of fear hot spots within small
geographic areas.
A 10-item Fear of Crime Index was computed at two waves. The internal consistency of
the Index was high at both waves (wave 2 alpha = .914; wave 6 alpha = .920). The Fear Index
also showed strong test-retest reliability, r = .77, p<.001, n = 464.
Fear of Crime Index: How safe do you feel or would you feel being alone in the following
locations at night? (1=Very safe; 2 = Somewhat safe; 3= Somewhat unsafe; 4= Very unsafe)
Items
1. Walking around my neighborhood.
2. In your lobby or stairway.
3. In local parks.
4. Walking to/from transportation.
5. On public buses or trains.
How safe do you feel or would you feel being alone in the following locations during the
daytime? (1= Very safe; 2 = Somewhat safe; 3= Somewhat unsafe; 4= Very unsafe)
Items
6. Walking around my neighborhood.
7. In your lobby or stairway.
8. In local parks.
9. Walking to/from transportation.
10. On public buses or trains.
Scale Statistics n M SD Min Max
Wave 2 739 17.10 5.32 7 40
Wave 6 604 17.05 5.34 8 40
Victimization index. As we know from the National Crime Victimization Survey, survey
methods provide an excellent opportunity to generate knowledge about the nature of crime and
victimization that cannot be captured through official police reports. Geo-based surveys have the
additional benefit of being able to produce information about local crime and victimization
patterns, data which can be used both for community planning and evaluating localized public
safety initiatives.
In the CIP initiative, victimization questions were asked at only one point in time, but
still gave us an opportunity to explore the feasibility of web-based measurement in this domain.
The victimization items used a six-month reference period to minimize problems of memory
decay and telescoping (Skogan & Lehnen, 1985) and provide more opportunity to evaluate short-
term programs. The content validity was reasonably good as the instrument captured
victimization experiences with residential burglary, theft and criminal damage to property,
completed and attempted robbery, and completed and attempted assault. When victimization was
indicated, the victims were queried about two important conditions: Did the incident happen in
the victim's current neighborhood? (in order to establish local crime rates) and was it reported to
the police? Crime reporting behavior is an important measure of public trust in the police and
perceived importance of the incident, and will likely vary by neighborhood.
The sample size was not sufficient to compute separate victimization indices. An overall
Victimization Index was computed with 8 items. (For future applications, we recommend that
the index exclude victimization incidents that occurred outside the neighborhood. A Crime
Reporting Index can also be developed).
The final index properties are shown in the table below. Higher scores indicate more
victimization experience.
Victimization: In the past 6 months, have you or members of your household experienced the
following…(check all that apply)
Items
1. Has someone broken into your home or garage to steal something?
2. Have you found any sign that someone tried to break into your home or garage?
3. Has anyone stolen, damaged, or taken something from your car or truck?
4. Have you had anything stolen that you left outside, including motorcycles or bicycles?
5. Has anyone stolen something directly from you by force, or after threatening you with
harm?
6. Has anyone tried to steal something from you by force or threat, even though they did
not get it?
7. Has anyone physically attacked you?
8. Has anyone threatened to physically attack you?
Scale Statistics N M SD Min Max
Wave 2 778 .49 .85 0 5
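The two follow-up conditions support exactly the refinements recommended above: a neighborhood-specific victimization rate and a crime-reporting rate. A minimal sketch on placeholder incident-level data (the counts and field names are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# Placeholder follow-ups: one row per victimization incident reported in the survey
incidents = pd.DataFrame({
    "in_neighborhood": rng.integers(0, 2, size=120).astype(bool),
    "reported_to_police": rng.integers(0, 2, size=120).astype(bool),
})

n_respondents = 778
local_rate = incidents["in_neighborhood"].sum() / n_respondents  # local victimization rate
reporting_rate = incidents["reported_to_police"].mean()          # trust/reporting measure
print(f"local rate = {local_rate:.1%}; reporting rate = {reporting_rate:.1%}")
```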
2. Individual Resident Performance
Knowledge about crime prevention and staying safe indices. Since the introduction of the
national crime prevention media campaign in the late 1970s (better known as the McGruff
campaign), there have been many efforts to educate the public about possible crime and drug
prevention behaviors (O'Keefe et al., 1996) and many academic statements about the need for
local residents to become more actively involved in community crime prevention (Lab, 1988;
Rosenbaum, 1988; Surette, 1992). The assumption is that residents' awareness and knowledge of
crime prevention are the first steps on the road to preventative behaviors, such as self-protection,
household protection, neighborhood problem solving, as well as enhanced perceptions of
individual and collective efficacy (Rosenbaum, 1986).
For the CIP project, knowledge about crime prevention emerged as a multidimensional
construct consisting of one dimension tapping into individuals’ knowledge about keeping
themselves and their property safe and one dimension tapping into individuals’ general
knowledge about crime prevention and crime in their neighborhood. The items were coded on a
four-point scale with higher values indicating greater knowledge. The two factors accounted for
64% of the variance at wave 2 and 64% of the variance at wave 6. Reliability was high for both
the general measure (wave 2 α = .80; wave 6 α = .84) and the staying safe measure (wave 2 α =
.79; wave 6 α = .78). The items were measured four months apart and the re-test reliability was
high for both the general measure (r = .53, n = 495, p <.001) and the staying safe measure (r =
.60, n = 487, p <.001). All items should be generalizable to other communities and cities, with
the exception of item #2 in the Knowledge about Staying Safe index, which may be relevant only
to Chicago.
Knowledge about Crime Prevention: Please indicate whether you agree or disagree with the
following statements about safety. (4=Strongly agree; 1= Strongly disagree)
Items
1. I know the things I need to do to stay safe when I’m out on the streets.
2. I know the things I need to do to keep my home and property safe from crime.
Scale Statistics N M SD Min Max
Wave 2 776 3.36 .51 1 4
Wave 6 627 3.39 .52 1 4
Knowledge about Staying Safe: Please indicate whether you agree or disagree with the
following statements about safety. (4=Strongly agree; 1= Strongly disagree)
Items
1. I know how to work with the police to solve crime problems in my neighborhood.
2. I know when beat community meetings take place in my neighborhood.
3. I know how to contact the police for non-emergency problems.
4. I know where to find information about crime prevention.
5. I know where to find information about crime in my neighborhood.
Scale Statistics n M SD Min Max
Wave 2 773 2.79 .69 1 4
Wave 6 623 2.84 .67 1 4
Knowledge about specific prevention concepts index. Communities that are serious
about measuring their own performance in the public safety arena will need some baseline
information on local residents' knowledge of specific crime prevention theories, concepts, and
local programs. For community leaders and organizers, as well as neighborhood police officers,
this information will help to identify police beats or other neighborhoods where remediation is
most needed.
The Knowledge about Specific Prevention Concepts Index was designed to measure
residents’ knowledge of important concepts and theories in crime prevention (e.g. CPTED,
the SARA model, routine activities) as well as local crime prevention initiatives. In terms of the latter,
Chicago residents should be familiar with the Chicago Police Department's community policing
program (CAPS) and crime mapping program that is available to the public (ICAM). These
local items should not be used in other cities.
The six items were scored on a four point response category scale and summed to create
the final measure. The reliability was acceptable at both wave 1 (α = .61) and wave 5 (α = .71).
The re-test reliability was high (r = .64, n = 466, p <.001).
Knowledge about Specific Crime Prevention Concepts Index: You may or may not be
familiar with the following terms or concepts in public safety. Please indicate whether or not
these terms are familiar to you. (4 =Very familiar; 1 = Not at all familiar)
Items
1. CAPS
2. The Crime Triangle
3. CPTED
4. SARA Model
5. ICAM
6. Broken Windows Theory
Scale Statistics n M SD Min Max
Wave 1 757 9.61 2.46 5 20
Wave 5 663 10.08 2.96 4 24
Protection behaviors index. Criminologists have established, as routine activities theory suggests (Cohen & Felson, 1979), that an individual's daily activities are predictive of criminal victimization (Maxfield, 1987). Patterns of travel, work, affiliation, and recreation can affect one's chances of falling victim to crime. Similarly, crime prevention theories suggest that victimization will be reduced when actions are taken to reduce the opportunities to commit crime – either by reducing access to vulnerable persons, places, or things or by changing the environment to increase the likelihood that potential offenders will be detected or apprehended (Clarke, 1992; Greenberg & Williams, 1987; Rosenbaum, 1988). Hence, law enforcement agencies and community leaders have sought to educate the public about specific behavioral responses they can take to protect themselves, their property, and public spaces from crime.
The protective behaviors scale measures the frequency with which individuals take actions to protect themselves, their loved ones, or their property. The five-item scale accounted for 45% of the variance at wave 1 and 52% at wave 5. The index had high internal consistency (wave 1 α = .68; wave 5 α = .76). The scale was measured four months apart and the test-retest reliability was very high (r = .81, n = 466, p < .001). The variable is coded so that higher values indicate a greater frequency of engagement in safety measures. Future research should expand this set of items to include more indicators of protective behaviors in public places and crime prevention measures for property outside the household (see Lavrakas et al., 1980).
Protective Behaviors Index: How often do you take any of the following actions in your
neighborhood to protect your home, yourself, or loved ones? (4 = Always; 3 = Frequently; 2 =
Sometimes; 1 = Never)
Items
1. Keep a look out for suspicious activities.
2. Ask a neighbor to watch your home when you’re away.
3. Leave the radio or TV on when you go out at night.
4. Carry mace or pepper spray.
5. Limit the amount of jewelry you wear or amount of money you carry on the street.
Scale Statistics n M SD Min Max
Wave 1 755 12.92 3.59 5 20
Wave 5 664 13.40 3.88 5 20
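The proportion of variance accounted for by a scale's factors, reported for each index in this chapter, can be estimated as in the sketch below. The report does not specify the extraction method, so principal components analysis is shown here as one common choice; the file and item names are hypothetical placeholders, not the actual CIP variables.

    # Hypothetical sketch: variance accounted for by the first component
    # of the five protective-behavior items. All names are placeholders.
    import pandas as pd
    from sklearn.decomposition import PCA

    items = ["lookout", "neighbor_watch", "radio_tv",
             "pepper_spray", "limit_valuables"]
    wave1 = pd.read_csv("wave1.csv")[items].dropna()   # hypothetical file

    pca = PCA(n_components=1).fit(wave1)
    print(f"variance accounted for: {pca.explained_variance_ratio_[0]:.0%}")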
Formal collective action. Community crime prevention is often conceived as a combination of individual, household, and collective actions. Crime prevention experts have warned that individual crime prevention measures involving risk avoidance (e.g. not using streets or parks) or household measures that create a fortress with locks, fences, and cameras may work for the individual but can increase the risk of public street crimes. Hence, since the 1970s, police have played a critical role in initiating, orchestrating, and encouraging public-minded collective strategies designed to prevent crime where residents share an interest in public safety. Neighborhood Watch is the prototype for collective action (Rosenbaum, 1987), but some cities hold regular meetings with the police to engage in local problem solving (Skogan & Hartnett, 1997). Police and community leaders can foster public safety by organizing residents, working with community partners to collectively define and address crime problems, and encouraging more positive social interactions. These collective processes are expected to strengthen informal social controls, reduce crime, and reduce fear of crime (Rosenbaum, 1988).
The collective action construct was multidimensional with one dimension tapping into
informal collective action and another dimension tapping into formal collective action. The two
dimensions accounted for 77% of the variance at wave 1 and 79% of the variance at wave 5.
There was high internal consistency for the informal collective action scale (wave 1 α = .73;
wave 5 α = .81) and moderate internal consistency for the formal collective action scale (wave 1
α = .63; wave 5 α = .62). The test-retest reliability for the informal collective action was r = .64
(n = 465, p < .001) and the test-retest reliability for the formal collective action was r = .64 (n =
464, p < .001). The scale items were assessed approximately four months apart. Higher scores
indicate more informal collective action (i.e. more talking with others about crime) and more
formal collective action (i.e. more involvement in local neighborhood meetings).
Informal Collective Action Index: In the past 6 months, how often have you done the following things: (1 = Never; 2 = Once or twice; 3 = About once a month; 4 = About once a week; 5 = More than once a week)
Items
1. Talked with your neighbors about crime issues.
2. Talked with your family or friends about crime.
Scale Statistics n M SD Min Max
Wave 1 756 2.35 .92 1 5
Wave 5 663 2.37 .98 1 5
Formal Collective Action Index: In the past 6 months, how often have you done the following things: (1 = Never; 2 = Once or twice; 3 = About once a month; 4 = About once a week; 5 = More than once a week)
Items
1. Attended a CAPS beat meeting.
2. Attended a community meeting in your neighborhood.
Scale Statistics n M SD Min Max
Wave 1 754 1.34 .53 1 5
Wave 5 663 1.30 .51 1 4
Self-efficacy about crime prevention index. Self-efficacy about Crime Prevention is a
new measure developed specifically for the Chicago Internet Project. Self-efficacy is rooted in
social cognition theory (Bandura, 1997) and has been used to help explain a wide range of
behaviors including academic and work-related performance (Bandura, 1993; Stajkovic &
Luthans, 1998), the use of technology (Compeau & Higgins, 1995) and health and well-being
(Lorig et al., 1989). Self-efficacy is the belief that people hold about their causal capabilities
(Bandura, 1997). The research suggests that perceptions of self-efficacy shape several
dimensions of behavior including: (a) decisions about what behaviors to engage in, (b) the
amount and persistence of effort in attempting a specific behavior, (c) the individual’s emotional
response when carrying out the behavior, and (d) the actual achievement of the individual with
respect to the behavior (Bandura, Adams, & Beyer, 1977; Wood & Bandura, 1989). According to Bandura (1997), self-efficacy is not a generalized concept, but rather is specific to the behavior being studied.
Self-efficacy about crime prevention refers to an individual's beliefs regarding his/her
capabilities to secure and organize resources and execute a course of action that improves
neighborhood safety. There are several aspects of self-efficacy when applied to crime prevention. First, the individual must perceive that he or she has the means necessary to achieve success, such as
knowledge about crime and the skills related to working with the police, such as problem-solving
aptitude. Second, key aspects of self-efficacy are perceptions of the importance and seriousness of the problem, as well as the motivation or incentives to take action. Finally, and most importantly,
self-efficacy about crime prevention includes the belief that one can carry out the desired actions
and that participation in these behaviors will lead to positive results. These components of the
self-efficacy construct are consistent with the health belief model, which has been used to
explain public health and crime prevention behaviors (see O'Keefe et al., 1996).
Self-efficacy about crime prevention is a five-item index measuring an individual's perceived capacity to carry out effective crime prevention actions at the neighborhood level. A single factor accounted for 57% of the variance at wave 2 and 56% of the variance at wave 5. The internal consistency of the items was high at wave 2 (α = .80) and wave 5 (α = .79). The test-retest reliability was also high (r = .63, n = 662, p < .001). The variable is coded so that higher values indicate higher levels of self-efficacy regarding crime prevention.
Self-Efficacy Index: Please indicate whether you agree or disagree with the following statements about yourself. (4 = Strongly agree; 1 = Strongly disagree)
Items
1. I can influence my neighbors to take action on important crime issues.
2. I can influence the police to take action on important crime issues.
3. I know I can make a difference in my neighborhood.
4. If I work with the police, my neighborhood will be a safer place to live.
5. If I work with other community members, my neighborhood will be a safer place to live.
Scale Statistics n M SD Min Max
Wave 2 775 3.67 .75 1 5
Wave 5 662 3.64 .74 1 5
3. Collective Performance
Informal social control index. Social control refers to community residents’ efforts to
regulate their behavior and the behavior of visitors to the neighborhood in order to achieve living
in an area that is relatively free from the threat of crime (Bursik & Grasmick, 1988). Albert
Hunter (1985) developed a three-level approach to understanding how social control operates in
a community. The first level of social control, called private social control, is used to describe
the informal efforts of intimate primary groups in the community. For example, private social control includes the use of relationships among friends to shape an individual's behavior through positive reinforcement, such as social support or mutual esteem, and through negative reinforcement, such as criticism, banishment from the group, or even violence (Hunter, 1985; Black, 1989). The second level, or parochial social control, refers to the efforts of broader local interpersonal networks such as churches, schools, local businesses, or voluntary organizations. These broader social networks have a vested interest in the well-being of the community and will exercise control through formal and informal interactions that establish norms about acceptable behavior in the group and by intervening to stop deviant behavior, among other ways. The third and final level, public social control, is used to describe the residents' ability to acquire goods
and services that are allocated by organizations and agencies outside the neighborhood. This
includes the ability of local residents to leverage resources from both private and public
organizations in an effort to maintain public order and keep residents safe. This would include
the relationship between community residents and the police (Bursik & Grasmick, 1988).
Our informal social control scale captures one form of parochial social control using specific survey items drawn from an established literature (Sampson, Raudenbush & Earls, 1997).
The re-test reliability was .65 (n = 494, p < .001). Higher scores indicate greater informal social
control.
Informal Social Control Index: For each of the following questions, please indicate how likely
it is that your neighbors would do something if … (5 = Very likely; 1 = Very unlikely; 3 = Don’t
know)
Items
1. Children were spray-painting graffiti on a local building.
2. Children were skipping school and hanging out on a street corner.
3. A fight broke out in front of your house and someone was being beaten.
4. The fire station closest to your home was threatened with budget cuts.
Scale Statistics n M SD Min Max
Wave 2 777 3.84 .94 1 5
Wave 6 626 3.92 .91 1 5
Collective efficacy index. Collective efficacy refers to the combination of the social
cohesion among neighbors and their willingness to intervene for the common good (Sampson,
Raudenbush, & Earls, 1997). Collective efficacy is based on characteristics such as mutual trust,
solidarity, and shared expectations among neighbors. It also includes the element of active
informal social control where there is a perception that neighbors will intervene for the common
good of the neighborhood. Research suggests that collective efficacy is strongly related to
victimization and crime rates (Sampson et al., 1997; Morenoff, Sampson, and Raudenbush,
2001).
The Collective Efficacy scale replicates the work of Sampson, Raudenbush and Earls (1997). The scale items for collective efficacy were assessed at only one time point. The test-retest reliability was .65 (n = 494, p < .001). Higher scores indicate stronger collective efficacy.
Collective Efficacy Index:
For each of the following questions, please indicate how likely it is that your neighbors
would do something if … (5 = Very likely; 1 = Very unlikely; 3 = Don’t know)
Items
1. Children were spray-painting graffiti on a local building.
2. Children were skipping school and hanging out on a street corner.
3. A fight broke out in front of your house and someone was being beaten.
4. The fire station closest to your home was threatened with budget cuts.
The next few questions are about people in your neighborhood. For each statement, please indicate whether you agree or disagree. (4 = Strongly agree; 1 = Strongly disagree; 3 = Don't know)
5. People around here are willing to help their neighbors.
6. People in this neighborhood can be trusted.
7. People in this neighborhood do not share the same values (reverse coded).
Scale Statistics n M SD Min Max
Wave 2 777 3.82 .76 1 5
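Because item 7 above is negatively worded, it must be reverse coded before the items are combined. The fragment below is a hypothetical illustration of that step; the file and column names are placeholders, not the project's actual variables.

    # Hypothetical sketch: reverse-code a negatively worded 1-4 item with
    # (max + min) - x, then average the cohesion items. Names illustrative.
    import pandas as pd

    df = pd.read_csv("wave2.csv")                       # hypothetical file
    df["shared_values_rev"] = 5 - df["shared_values"]   # maps 1<->4, 2<->3
    cohesion_items = ["help_neighbors", "trust_neighbors", "shared_values_rev"]
    df["cohesion"] = df[cohesion_items].mean(axis=1)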
K. Further Validation of Scales
The scale validation process began with factor analysis and reliability analysis to confirm
the unidimensionality and internal consistency of the scales. The indices were also examined for
stability over time using test-retest reliability scores. Additional validity analyses were
performed on selected scales to test their robustness. In particular, we used a multi-method
approach to examine whether the web-based findings would correspond to the results derived
from other methods (telephone surveys and police statistics). We also utilized "known groups"
validation techniques to assess whether the scales would behave in predictable ways as dictated
by prior research and theory.
1. Multi-Method Validation of Scales
For the community scales, we were able to compare three sets of data for the same 51 police beats: our web-based survey data from 2005, official police records from 2005, and telephone survey data collected in 2002 from these same 51 police beats. As shown in the table below, the correlations between telephone and Internet findings, using data collected three years apart with different random samples, are remarkably strong (ranging from .552 to .790). These findings suggest that the Internet can be used to capture valid impressions of neighborhood conditions in relatively small geographic areas.
The data in this table are also useful for construct validity. The research literature
indicates that neighborhood disorder is linked to fear of crime and informal social control, and
indeed, the web survey findings confirm these relationships. Stated differently, our disorder measure behaves in a predictable manner at the neighborhood level, with higher levels of disorder associated with higher levels of fear and less informal social control (see Table 5.1).
Table 5.1 A Comparison of Telephone and Internet Data

                                     Northwestern University Telephone Survey Data
University of Illinois at Chicago                               Informal Social
Internet Survey                      Fear         Disorder      Control
Fear                                 .667**       .642**        -.600**
Disorder                             .696**       .790**        -.521**
Informal Social Control              -.566**      -.634**       .552**
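The beat-level comparisons in Table 5.1 can be reproduced, in outline, by aggregating respondent-level scale scores to police-beat means in each data source and then correlating the beat means. The sketch below is a hypothetical illustration; the file and column names are placeholders.

    # Hypothetical sketch of the multi-method comparison; all file and
    # column names are placeholders, not the actual project variables.
    import pandas as pd
    from scipy.stats import pearsonr

    web = pd.read_csv("web_survey_2005.csv")        # hypothetical files
    phone = pd.read_csv("phone_survey_2002.csv")

    cols = ["fear", "disorder", "informal_sc"]
    web_beats = web.groupby("beat")[cols].mean()
    phone_beats = phone.groupby("beat")[cols].mean()
    both = web_beats.join(phone_beats, lsuffix="_web", rsuffix="_phone").dropna()

    r, p = pearsonr(both["disorder_web"], both["disorder_phone"])
    print(f"beat-level disorder: r = {r:.3f}, p = {p:.4f}")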
The table below (Table 5.2) compares the findings from the Internet and telephone
surveys with official police records for the 51 police beats. Again, the correlations between data
collected from three very different methods are consistently positive and almost always
statistically significant. Neighborhoods (police beats) with higher levels of violent crime, illegal
drugs, weapons, and disorder (as defined by the Chicago police) are places where web-survey
respondents report significantly higher levels of fear, victimization, and disorder.
The telephone survey findings are also consistent with the police data, but the Internet
survey findings (especially for fear of crime) are more highly correlated with the police findings.
This difference may be the result of a time lag, as the Internet data were collected during the
same time period as the police data, while the telephone data were collected three years earlier.
Table 5.2 A Comparison of Official and Internet Data

                               Official Chicago Police Department Crime Data (logged)
                               Crime    Violent  Robbery  Homicide  Drug     Weapons  Disorder
UIC Internet Survey
Fear                           .489**   .702**   .694**   .522**    .725**   .770**   .345*
Victimization                  .186     .342*    .331*    .353*     .394**   .477**   .354*
Disorder                       .276     .541**   .463**   .485**    .698**   .680**   .284*
Disorder                       .216     .476**   .422**   .427**    .652**   .620**   .265
Northwestern Telephone Survey
Fear                           .292*    .482**   .490**   .351**    .519**   .586**   .371**
Disorder                       .188     .400**   .405**   .308*     .514**   .563**   .233
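A parallel check against official records, as in Table 5.2, simply swaps the second survey for logged beat-level crime counts. Again the sketch below is illustrative only; the file and column names are hypothetical, and log1p is one reasonable choice for handling beats with zero recorded incidents.

    # Hypothetical sketch: beat-level survey measures vs. logged official
    # crime counts. All file and column names are placeholders.
    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    survey = pd.read_csv("web_beat_means.csv").set_index("beat")   # hypothetical
    crime = pd.read_csv("cpd_crime_2005.csv").set_index("beat")    # hypothetical

    both = survey.join(crime, how="inner")
    both["log_violent"] = np.log1p(both["violent_count"])
    r, p = pearsonr(both["fear"], both["log_violent"])
    print(f"fear vs. logged violent crime: r = {r:.3f}, p = {p:.4f}")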
2. Known Groups Validation of Scales
Additional analyses were performed on some of the new policing scales to further
validate the constructs. Methodologists often employ "known groups" validation procedures to
demonstrate that a particular measure is able to discriminate between groups that are known, on
the basis of prior research and/or theory, to have different scores on the test variable. Race is
one variable that has been shown previously to predict citizen perceptions and judgments about the police. In particular, minorities consistently report, on telephone surveys, more negative attitudes toward, and less satisfaction with, the police (Skogan, 2006; Rosenbaum & Schuck, 2005; Weitzer, xxxx). Hence, we performed a series of regression analyses to determine whether scores on web-based police performance scales could be predicted from the race/ethnicity of the respondents.
The findings are consistent with prior research using telephone survey methods. As
predicted, African Americans and Latinos were more likely than whites to report more negative
views of the police on several performance dimensions (see Table 5.3). These findings suggest
that our web-based indices of police performance are successful at capturing known group
differences.
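A simplified analogue of the known-groups models in Table 5.3 is sketched below: a mixed-effects regression of one performance scale on race/ethnicity indicators, with respondents nested in beats. The variable names are hypothetical, and this simplified model is not the report's exact HLM specification.

    # Hypothetical sketch of a known-groups regression; variable names
    # are placeholders. White is the reference category, so negative
    # coefficients indicate less favorable ratings than whites'.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("wave3.csv").dropna(subset=["manners", "race", "beat"])

    model = smf.mixedlm("manners ~ C(race, Treatment('white'))",
                        data=df, groups=df["beat"])   # random intercept per beat
    print(model.fit().summary())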
Table 5.3 HLM Linear Regression Estimates for the Impact of Residents' Race on Policing Constructs

                                            African American   Latino             Other              Residual
                                            (vs. White)        (vs. White)        (vs. White)        Variance
Dependent Variables                    n    Est.      SE       Est.      SE       Est.      SE       Est.
General Assessments of Police
  Manners                            665    -.48***   .07      -.25      .13      -.21      .16      .47***
  Fairness                           654    -.55***   .08      -.37*     .14      -.22      .16      .56***
Competency Indices
  Knowledge                          636    -.27***   .06      -.15      .10      -.22      .12      .26***
  Reliability                        713    -.45***   .06      -.33***   .11      -.50***   .13      .35***
Assessments of Neighborhood Police
  Responsiveness to the Community    637    -.49***   .08      -.62***   .15      -.52**    .18      .64***
  Satisfaction with Neighborhood
  Police                             607    -.32***   .07      -.35**    .13      -.36**    .14      .31***
Organizational Outcomes
  Organizational Legitimacy          712    -.29***   .06      -.17      .11      -.07      .12      .34***
  Effectiveness in Problem Solving   656    -.40***   .07      -.27*     .12      -.25      .14      .41***
  Effectiveness in Preventing Crime  669    -.56***   .08      -.43**    .14      -.32      .16      .49***
  Engagement of the Community        654    -.47***   .09      -.58***   .16      -.44*     .20      .82***
Affective Responses to Police Encounters
  Security                           709    -.42***   .09      -.34*     .17      -.32      .19      .81***

* p < .05  ** p < .01  *** p < .001
L. Measurement Sensitivity
1. Within-Race Differences
In this report we have argued that one of the benefits of the web-based survey
methodology is the ability to cost-effectively detect differences between small geographic areas.
The 51 police beats in this study are examples of relatively small areas (arguably neighborhoods)
where stable estimates of community perceptions and behaviors are possible. Although the sample sizes at the beat level are limited in the current study, we are nevertheless able to illustrate the potential benefits of this approach. Earlier we described differences by racial/ethnic group in perceptions of the police. These types of findings, whether citywide or national, have contributed to the impression that race is the primary variable for explaining community evaluations of the police. African Americans, Latinos, and whites are thus viewed as homogeneous groups with very little within-group variability regarding assessments of the police. The analyses that follow illustrate that differences exist within these groups when data are collected for smaller geographic areas. Social class differences have been artificially restricted in these data (lower-income police beats were excluded from the study), but even so, not all minority communities hold the same impressions of the police.
The bivariate correlations in Table 5.4 show predictable differences across African American communities in their assessments of the police. African American neighborhoods with higher levels of disorder and fear of crime and lower levels of collective efficacy are significantly less satisfied with police performance on virtually all dimensions than African American neighborhoods where disorder and fear are under control and residents feel efficacious. High rates of violent crime showed less predictive power in African American neighborhoods. High violent crime rates predicted lower assessments of police effectiveness in fighting crime, but did not predict assessments of police manners and fairness. Only in African American neighborhoods with high levels of disorder were police subject to more negative evaluations on the demeanor and equity dimensions.
The box plots below confirm some predictable differences in the judgments of the police when comparing African American, Latino, white, and mixed neighborhoods. But these charts also illustrate that there is substantial variability within each racial/ethnic cluster. The 18 predominately African American neighborhoods, for example, are fairly divergent in their views of police manners, fairness, and effectiveness in problem solving, with some beats expressing more positive views of the police than those of white neighborhoods. For some indices, however, such as police effectiveness in preventing crime or police reliability, the medians are further apart and the standard deviations are smaller, thus producing little or no overlap in the distributions for African American and white neighborhoods.
Table 5.4 Bivariate Correlations for Residents from African American Communities

                                      Violent Crime Rate  Disorder       Fear           Collective Efficacy
Dependent Variables                   n      r            n      r       n      r       n      r
General Assessments of Police
  Manners                             189    -.13         109    -.37**  123    -.30**  135    .34**
  Fairness                            193    -.04         100    -.32**  129    -.16    130    .16
Competency Indices
  Knowledge                           186    -.07         96     -.28**  124    -.07    126    .20*
  Reliability                         203    -.05         104    -.41**  136    -.29**  139    .23**
Assessments of Neighborhood Police
  Responsiveness to the Community     183    -.13         95     -.35**  121    -.29**  123    .37**
  Satisfaction with Neighborhood
  Police                              154    -.23**       104    -.38**  103    -.28**  113    .33**
Organizational Outcomes
  Organizational Legitimacy           202    -.09         104    -.33**  135    -.22*   139    .16
  Effectiveness in Problem Solving    189    -.18*        98     -.46**  128    -.29**  129    .31**
  Effectiveness in Preventing Crime   190    -.20**       99     -.49**  125    -.35**  128    .34**
  Engagement of the Community         187    -.14         108    -.29**  122    -.28**  131    .27**
Affective Responses to Police Encounters
  Security                            201    -.08         112    .00     132    .01     141    .18*
  Anxiety                             198    -.05         109    -.33**  130    -.24**  139    .18*

* p < .05  ** p < .01
Figure 5.1 Box Plots for Police Manners and Fairness Scales
Figure 5.2 Box Plots for Police Problem Solving and Reliability Scales
Figure 5.3 Box Plots for Police Responsiveness Scales
2. Identifying Hot Spots
One of the implications of these findings is that cities can geographically identify not only hot spots of violent crime (as is conventionally done), but also hot spots of police-community tensions, fears, and other concerns. For example, web-based survey findings can be used to locate police beats, regardless of race/ethnicity, where police manners are rated as poor, where anxiety about police stops is high, and where residents are most dissatisfied with police services. These would be ideal locations for police officers to engage in problem-solving exercises with community leaders around these concerns.
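Operationally, flagging such hot spots can be as simple as screening beat-level aggregates against distributional cutoffs, as in the hypothetical sketch below. The file, column names, and quartile cutoffs are all illustrative choices, not the report's method.

    # Hypothetical sketch: flag beats in the worst quartile on both rated
    # police manners and anxiety about police stops. Names illustrative.
    import pandas as pd

    beats = pd.read_csv("beat_aggregates.csv")   # hypothetical beat-level file
    poor_manners = beats["manners"] <= beats["manners"].quantile(0.25)
    high_anxiety = beats["stop_anxiety"] >= beats["stop_anxiety"].quantile(0.75)

    hot_spots = beats.loc[poor_manners & high_anxiety, "beat"]
    print("candidate beats for problem-solving outreach:", list(hot_spots))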
CHAPTER SIX
THE CAPS EXPERIMENT: FINDINGS AND LESSONS LEARNED
A. Implementation Results within the CAPS Framework
1. Feasibility Study
A preliminary study was conducted to explore the feasibility of using a web-based system to collect data from residents about public safety concerns and of providing monthly feedback sessions at CAPS beat meetings (see Skogan et al., 2005). Conducted in three beats from February to September 2004, the study consisted of components and objectives similar to those of the Chicago Internet Project (CIP): (1) residents attending CAPS meetings in the study beats were asked to go online each month and complete a survey on various public safety issues; (2) survey results were presented to residents and police at their meetings; and (3) training was provided to police and civilian facilitators on problem solving. Given the very limited sample size and the exploratory nature of the study, few program effects were found, but the study was invaluable for allowing us to identify what worked and what problems we could expect to encounter if we were to implement such a project on a broader scale. The study demonstrated that residents would be willing to participate repeatedly in Internet surveys and experienced little difficulty doing so. The feasibility study also demonstrated that it would be possible to incorporate the presentation of survey results into the existing beat meeting framework, although problems with the meeting agenda would need to be addressed. Most obstacles that we identified were taken into consideration during the planning phases of the current project, as discussed below with regard to the implementation of CIP.
The major difference between the 2004 feasibility study and the CIP was the University's role in project implementation. During the feasibility study, university researchers assumed full responsibility for all facets of implementation, from the preparation of study materials (e.g. handouts and survey results) to the distribution of handouts and the facilitation of the presentation and discussion of survey results. The same researchers attended the beat meetings each month and, while only three beats participated in the study, this nevertheless required a major commitment on the researchers' part that would have been difficult to sustain on a regular basis. While it appeared the researchers' presence was accepted by most participants, by the end of the study they retained the distinction of being "from the University." The presentation and discussion of survey results by expert facilitators could have prevented police and residents from feeling fully invested in the study. Given the demands of implementation in 50 to 60 beats and the need for participants to take ownership of the project, we decided that CIP would need to be adopted and internalized by the Chicago Police Department.
2. Implementation: Protocol and Integrity
Protocol. A protocol was developed jointly by UIC and the CPD that assigned primary
responsibility for the administration of the project at CAPS meetings to beat team leaders
(selected community residents) and community policing officers facilitating the meetings each
month. To combat the perception of the project as a "university experiment," it was agreed that all directives and memos would be issued formally through the CAPS Project Office. At the onset of the project, District Commanders, CAPS managers, and beat team leaders were issued directives detailing the necessary tasks to be completed as part of the project (see Appendix D). Because implementation problems with beat personnel were observed early on and turnover was a significant problem during the course of the project, these directives were re-issued twice to ensure that all relevant police personnel were informed about project objectives and tasks. UIC staff prepared and supplied the necessary handouts via email each month for both the appropriate beat personnel and the CAPS Project Office. The CAPS Project Office also provided beat personnel with faxed and hard copies of all handouts.
UIC assumed initial responsibility for introducing the project to participants at their beat
meetings. In subsequent months, administration of project tasks was solely assumed by meeting
facilitators. Implementation integrity remained a concern throughout the duration of the project
and numerous attempts were made to secure cooperation using various avenues. In order to hold
personnel in participating beats accountable for carrying out project objectives, the CPD
monitored implementation levels and prepared formal audit reports detailing compliance in each
beat with implementing tasks. These audit reports were distributed as memos from the Assistant
Deputy Superintendent of the CAPS Project Office to District Commanders and CAPS managers
after Waves 2 and 4.
The CAPS Project Office also flagged low-compliance beats and sought cooperation
from beat personnel through multiple informal contacts. During Wave 2, District Commanders
and CAPS managers were sent a memo regarding survey participation by residents and the
importance of full implementation, including the need to increase levels of participation in the
online surveys. Similarly, UIC researchers attended a monthly CAPS Lieutenants meeting to
discuss the objectives of the project and the importance of making sure the necessary materials
were distributed at meetings, as well as to introduce the possibility of the CPD using raffles to
encourage participation by residents. Full implementation of project tasks by all participating
beats, however, was never achieved despite consistent efforts by the CAPS Project Office and
UIC. Continuous efforts were made to simplify the process of receiving materials and to clarify
the project objectives. On a positive note, implementation levels steadily increased over time.
Experimental design. As Table 6.1 shows, the 51 participating beats were randomly
assigned to one of three experimental conditions receiving varying levels of treatment: control,
feedback, and training. Originally, each condition had an equal number of beats, but we
discovered early on that one of the beats assigned to the control group held joint meetings with a beat in the training group. A decision was made to treat these two beats as a single beat within the training group. As noted in the methodology section, we were asking two
primary questions: (1) Does receiving feedback on public safety issues affect police-resident
discussion and problem solving at CAPS beat meetings? and (2) Would additional training and
guidance have supplemental effects on discussion and problem solving at meetings? To this end,
project tasks increased progressively across conditions as follows:
All beats: All participating beats followed the basic steps of implementation, which consisted of including the project on the printed agenda, distributing flyers about the Internet surveys, and encouraging residents to complete the monthly surveys. The sole
purpose of these steps was to make residents aware of the project, provide them with the
necessary information, and solicit their participation in completing surveys each month.
Feedback: In some beats, participants were also given feedback in the form of printed
survey results and then encouraged to discuss the findings during their CAPS meetings. Results
were selected each month based on their perceived utility to police and residents for both
assisting in identifying and prioritizing local problems and introducing new discussion topics.
Training: In some beats, in addition to the encouragement to participate in the surveys and the survey feedback, participants received two training components. The first was an all-day training for beat sergeants consisting of a problem-solving refresher, instruction on using survey results in problem solving, and an overview of the CPD's planned expansion of information technology use. The second was a monthly problem-solving exercise to guide discussion about selected survey results in order to gain a fuller understanding of certain problems, gather resident input for solutions, and otherwise educate residents.
Table 6.1 Implementation Protocol by Experimental Condition

Experimental Condition   Basic Steps               Feedback             Training
CONTROL (N = 16)         Include on agenda
                         Distribute flyers
                         Encourage participation
FEEDBACK (N = 17)        Include on agenda         Distribute/discuss
                         Distribute flyers         survey results
                         Encourage participation
TRAINING (N = 17)        Include on agenda         Distribute/discuss   Training of sergeants
                         Distribute flyers         survey results       Discuss problem-
                         Encourage participation                        solving exercise

Examples: survey flyer (Appendix E); survey results (Appendix F); problem-solving exercise (Appendix G)
Basic steps. Beats in all three experimental conditions were to follow several fundamental steps meant to incorporate the project into their meetings' regular proceedings and encourage resident participation. Activity in control group beats was limited to these primary steps; survey responses were collected from participants but were not made available to police or residents during the course of the project. Residents in these control beats were aware only that they had been chosen to participate in a joint UIC-CPD project testing a new web-based survey system for gathering resident input.
Inclusion on meeting agenda. A CAPS-required component, agendas provide a basic framework for the identification and discussion of new problems at meetings. Because meetings typically last no more than an hour, placing the project on the agenda would ensure that time was given to introducing the project to residents who were not familiar with it and to encouraging their participation; additionally, it would provide time for beats receiving survey results to discuss them. Although required, printed agendas were made available at only 71% of the 266 meetings observed during the course of the project, with another 12% providing the agenda verbally to residents. At the 216 meetings where police were asked to include the project on their meeting agenda, 76% actually provided a printed agenda and another 8% offered the agenda verbally. Rates for the inclusion of CIP on the meeting agenda were exceedingly low, with CIP appearing on only 22% of printed agendas. There were no significant differences among the rates at which beats in the different experimental conditions provided printed agendas and included CIP on the agendas (χ2 = 6.705, p > .05), although beats in the training group included CIP at a slightly higher rate (29%) than beats in either the control (24%) or feedback (14%) conditions.
Distribution of survey flyers: CPD personnel were provided with flyers containing
instructions for accessing and completing the web survey each month, which they were told to
distribute to meeting participants (see Appendix E for example). This information included the
basic objectives of the project, the website address, a password to access the survey, and contact
information for the UIC research team. Officers were encouraged to pass out flyers directly to
meeting attendees rather than simply place them on the table with other handouts in order to
draw residents’ attention to the opportunity to complete the surveys.
As Table 6.2 indicates, police did not fully comply with instructions to distribute project flyers to residents at meetings, although distribution occurred regularly in most beats and the distribution rate remained constant or increased across experimental conditions. Overall, police provided flyers at 80% of the 169 meetings observed during the project when requested to do so. Distribution of flyers was most problematic during Wave 2, the first point at which responsibility for carrying out this task was assumed solely by police personnel, with significantly lower rates for beats in the control and feedback conditions. Overall distribution rates were significantly higher for training beats. Distribution occurred more sporadically within control and feedback beats, gradually increasing in frequency for both conditions by Wave 5. Of the 27 beats in which flyers were provided consistently at all points of observation, 14 were training beats, 7 were feedback beats, and 6 were control beats. Survey flyer distribution was the most fundamental task police personnel could perform to foster resident survey participation; even so, residents were not offered the opportunity to complete the survey at 1 in 5 beat meetings.
While anecdotal evidence demonstrates that police in a few beats did indeed pass out the materials (vs. placing them on the table with other brochures), examination of handouts collected by observers indicated that CIP materials (the flyer and survey results) were frequently stapled to the general meeting "packet," which usually included the meeting agenda and ICAM crime reports. While there were on average only seven separate handouts provided at any given meeting over the course of the project, some beats frequently had double or even triple that number of handouts. The same type of informational "packet" was repeatedly offered month after month. Beats that prepared meeting packets also brought other handouts that they wished to draw special attention to, such as crime alerts or announcements about community events, and would typically distinguish them from the usual handouts provided. Likewise, regular attendees at meetings
appeared familiar with the practice of facilitators using the packet to cover certain agenda points; facilitators directed residents to certain handouts (e.g. ICAM reports) while discussing them at the meetings. Participation may have increased had the flyer and survey results been treated as separate handouts rather than stapled to the standard meeting packet.
Table 6.2 Project Flyer Distribution Rate by Experimental Condition (%)
Wave 2 (%) Wave 4 (%) Wave 5 (%) Wave 2-6 (%)
Control N = 14 N = 16 N = 25 N = 52
Yes 50.0 68.8 93.3 73.1
No 50.0 31.2 6.7 26.9
Feedback N = 15 N = 15 N = 14 N = 58
Yes 73.3 66.7 85.7 75.4
No 26.7 33.3 14.3 24.6
Training N = 14 N = 14 N = 15 N = 59
Yes 92.9 92.9 93.3 93.1
No 7.1 7.1 6.7 6.9
χ2 = 6.4*        χ2 = 3.3        χ2 = 3.3        χ2 = 8.07*
* p < .05
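The chi-square statistics reported with Table 6.2 are ordinary tests of independence on condition-by-compliance counts. The sketch below reconstructs the Wave 2 cell counts from the percentages and sample sizes shown in the table (the counts are our reconstruction, not taken from the project files) and reproduces the reported χ2 of 6.4.

    # Counts reconstructed from the Wave 2 column of Table 6.2 (50.0% of 14,
    # 73.3% of 15, 92.9% of 14); a reconstruction, not the project data.
    from scipy.stats import chi2_contingency

    #            distributed  not distributed
    observed = [[7,  7],    # control  (N = 14)
                [11, 4],    # feedback (N = 15)
                [13, 1]]    # training (N = 14)

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3f}")   # chi2 = 6.4, p < .05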
Encourage completion of surveys: To facilitate survey completion, interested residents were asked to supply their email addresses for a UIC-maintained list that provided monthly email notifications with links and passwords for the surveys. Residents had the opportunity to supply their addresses on questionnaires completed at the time the project was first introduced, as well as at the end of each web survey. Officers were specifically instructed to encourage resident participation in the web surveys and to discuss any resident concerns about accessing or using the Internet. During Wave 3, beat personnel were also supplied with pens and magnets bearing the name of the project, to be given to residents as encouragement from the CPD for participation. When police made survey flyers available to residents, they also tended to offer some form of encouragement for resident participation. The extent of encouragement varied among beats, ranging from simple reminders to go online and complete a survey to explanations of the benefits of resident participation. The latter was most commonly described in terms of its utility to police for understanding resident concerns, increasing resident participation in CAPS, and generally improving CAPS as a program.
As an added inducement, the CPD reached an agreement with a non-profit agency, Computers for Schools, to supply refurbished laptop computers to be raffled off in the last four months of the project among residents who completed a survey, printed out the survey submission page, and brought the page to the next meeting. Officers were asked to announce the raffle and collect survey submission pages. Information about the raffle was also added to the survey flyers for residents. As with distributing flyers, police did not fully comply with the request to announce the raffle, doing so at 63.7% of the 102 observed meetings, although announcement rates increased from Wave 4 to Wave 5 by at least 30% in each experimental
condition. There were no significant differences among the rates at which beats in the different conditions made the raffle announcements (χ2 = 4.771, p > .05), although the training beats had the highest rate (76.9%) when compared with feedback beats (56.3%) and control beats (54.8%).
Providing survey results. Residents in the 34 beats in the feedback and training conditions also received selected results from the web surveys completed by residents (both CAPS participants and the randomly selected panel) from their respective beats. Survey results were emailed with project flyers to beat personnel and the CAPS Project Office; beat personnel were directed to provide paper copies of the results at the meeting and to facilitate discussions with residents about the findings. Survey results were also posted on the project website; as with email notifications about the availability of surveys, residents on the UIC-maintained listserv were notified when survey results became available online. Results were typically available to each beat during the week prior to the scheduled beat meeting to ensure residents had time to receive the email and view the results online if they wished.
Each survey wave, items were selected for public dissemination to CAPS participants. The selection was based on two factors: (1) perceived utility to police and residents for assisting in the identification and prioritization of local problems; and (2) the introduction of new areas for deliberation, including resident fear of crime and perceptions of the realities of police work. Likewise, the number of results made available was contingent not only upon the necessity to process results from multiple beats in a timely manner, but also upon the limited time that would be available during beat meetings for discussion. For this reason, monthly results usually consisted of 10-12 items, with related items grouped together to form 4-5 tables (see Appendix F for an example). The content of the results selected from each survey is as follows:
Wave One: Items included feelings of resident safety, the most serious local problems as
identified by residents, and high priority activities deserving public resources (e.g. after-school
programs for youth, neighborhood watch programs).
Wave Two: Items included frequency of individual safety behaviors (e.g. locking doors,
asking neighbors to watch home), residents’ feelings of efficacy regarding ability to solve
neighborhood problems, and level of resident engagement in community safety activities (e.g.
attending beat meetings, speaking with neighbors about crime issues).
Wave Three: Items included willingness to engage in crime reporting, attitudes towards
the CPD and CPD officers, and satisfaction with 911 services.
Wave Four: Items included residents’ beliefs/stereotypes about the nature of police work,
potential areas for improvement of CPD services, and visibility of CPD officers.
Wave Five: Items included resident engagement in community-level safety activities,
knowledge about public safety strategies (e.g. CPTED, the Crime Triangle), and residents’
feelings of efficacy regarding ability to leverage resources to address crime issues.
As with the previous steps of the protocol, difficulties were encountered in having police both provide and discuss survey results at the meetings. The pattern of provision and discussion of survey results is shown in Tables 6.3 and 6.4. While full compliance for both groups was never achieved, implementation levels increased between Waves 2 and 6, with greater and more
consistent performance by police in the training group. Of the 17 feedback beats, only 3 consistently made results available, with a single beat failing to provide results at any point of observation, while 7 of the feedback/training beats consistently made results available to residents and all other training beats provided results on at least one occasion. Ultimately, making the survey results available was the key indicator of whether survey results would then be discussed with residents. When printed results were provided at meetings, survey results were discussed at 86% of those meetings; there were only 8 occasions when printed results were not provided yet police still discussed results with residents.
Table 6.3 Availability of Survey Results in Feedback and Training Groups (%)
Wave 2 (%) Wave 4 (%) Wave 5 (%) Wave 2-6 (%)
Feedback N = 15 N = 15 N = 14 N = 58
Yes 40.0 73.3 78.6 60.3
No 60.0 26.7 21.4 39.7
Training N = 14 N = 14 N = 15 N = 59
Yes 71.4 71.4 80.0 74.6
No 28.6 28.6 20.0 25.4
χ2 = 2.9        χ2 = .01        χ2 = .00        χ2 = 2.7
Table 6.4 Discussion of Survey Results in Feedback and Training Groups (%)
Wave 2 (%) Wave 4 (%) Wave 5 (%) Wave 2-6 (%)
Feedback N = 15 N = 15 N = 14 N = 58
Results discussed 13.3 26.7 57.1 31.0
Residents told to read results 13.3 6.6 6.6 8.6
Results not discussed 73.4 66.7 36.3 60.4
Training N = 14 N = 14 N = 15 N = 59
Results discussed 85.7 50.0 46.7 64.4
Residents told to read results 0.0 14.3 13.3 6.8
Results not discussed 14.3 35.7 40.0 28.8
χ2 = 15.5***        χ2 = 2.8        χ2 = .43        χ2 = 13.3*
* p < .05, ** p < .01, *** p < .001
Discussing survey results. The discussion of survey results was the most critical component of the study, and it increased over the course of the project, especially in the feedback beats. As Table 6.4 shows, police in all but 2 of these beats failed to discuss survey results in the first month of implementation, but the rate of discussion increased to over half of the beats by Wave 5. Police in feedback/training beats, however, demonstrated the opposite pattern for holding discussions, starting off at a high rate of participation in Wave 2 that declined by Wave
5. Yet it should be noted that of the 12 beats observed in the feedback/training group during Wave 6, 75% discussed the survey results during their meetings. Despite these conflicting patterns, police in feedback/training beats discussed survey results significantly more consistently than police in the feedback group. Of the 8 beats in which police discussed results at all points of observation, 7 were from the feedback/training group; conversely, of the 7 beats in which police failed to discuss survey results at any point of observation, 6 were from the feedback group. Ultimately, survey results were discussed at 48% (56) of the 117 observed meetings; of the meetings where discussions occurred, 67.3% were in feedback/training beats as opposed to just 32.7% in the feedback beats.
What was the nature of the discussions that took place regarding survey results? Given that results were intended to foster police-resident communication and increase problem solving, what was the quality of the ensuing discussions between police and residents? Because the police facilitator role was central, the manner in which police solicited resident input about the survey results stands as the key behavior for encouraging resident participation in the overall discussion. Of the meetings at which discussion of survey results occurred, only 8 instances (14.5% of meetings) were recorded in which police did not actively seek resident participation in the discussion. Police most commonly encouraged residents to join in the discussion by first providing their own feedback regarding the survey results (56.4%) or asking residents whether they had any questions about the survey results (50.9%); less frequently (34.5%) they requested that residents supply their own feedback about the results. Police in feedback/training beats encouraged residents to participate in discussions more than police in feedback beats, using multiple forms of encouragement during 37.8% of these meetings versus 27.8% for feedback-only beats (χ2 = .542, p > .05). UIC observers were instructed to determine whether police or residents seemed to dominate discussions, defined as controlling and otherwise talking the most, in relation to both the overall discussions taking place during meetings and the specific discussions about survey results. Despite encouraging resident input, police tended to dominate discussions about survey results (85.5%) more frequently than they dominated general discussion during meetings (45.6%). Residents, in contrast, dominated or contributed equally in discussions about survey results at only 14.5% of the meetings, well below the rate at which they actively participated in discussions during meetings (55.4%).
The nature of discussions about survey results was categorized according to the inclusion of specific problem-solving components on the part of police and residents: causes of problems, proposal of solutions, and agreed courses of action. Discussions most frequently included the first component, with exploration of the causes and nature of problems identified through the surveys occurring during 46.3% of all discussions about survey results. Solutions to address problems were proposed with slightly less frequency (40.7% of discussions) and occurred during almost 60% of the discussions that also covered the nature of problems. Police tended to propose more solutions, doing so during 35.2% of discussions, while residents did so in only 18.5%. Discussions rarely, however, led to arranging a definite course of action for either police or residents to address the identified problems, a component that occurred in only 7.4% of discussions. While no significant differences were found between the rates at which problem-solving components were included in discussions held in feedback/training beats and discussions in feedback beats, feedback/training beats still exhibited greater rates of inclusion of these components. Exploring the nature of problems occurred in 52.8% of
discussions in feedback/training beats versus 33.3% of discussions in feedback-only beats
(χ² = 1.83, p > .05); proposing solutions occurred in 44.4% of feedback/training discussions
versus 33.3% of feedback-only discussions (χ² = .614, p > .05); and reaching an agreed course of
action occurred in 11.1% of feedback/training discussions versus 0% of feedback-only
discussions (χ² = 2.16, p > .05).
The quality of discussions varied, as did many other aspects of project implementation.
Often discussions amounted to little more than police reading the survey results to residents
directly from the printed sheet, asking if there were questions, and then moving on to the next
agenda item. In these instances, survey results were treated in the same manner as crime reports,
where statistics about arrests and crime rates are read aloud and residents are then given an
opportunity to ask questions. Arguably, some police perceived survey results as simply another
set of information to be shared with residents and believed the project's objectives had been met
once the information was made available, particularly if they considered information sharing
(rather than genuine dialogue or problem solving) the primary outcome to be achieved at
meetings. At the
other end of the spectrum, some discussions about survey results were treated in a fashion quite
similar to that exhibited when discussing problems identified by residents during meetings,
exploring both the causes and nature of the survey findings and possible solutions to address the
issue at hand. This occurred more often when survey findings focused on residents'
identification of serious problems and priority activities, possibly because of their similarity to
the problems residents routinely bring to meetings. In one beat, for example, burglary had been
identified as the most serious problem in the survey findings, and police prepared a short
presentation on preventive measures residents could take to guard against burglary. In another,
residents rated fixing potholes as a high-priority activity; police responded by distributing
pothole-location forms for residents to fill out during the meeting, which would then be delivered
to the appropriate city service agency.
Ultimately, discussion of survey results lasted on average 5-10 minutes. On the surface,
this would appear to be insufficient time for genuine discussion of results or for engaging in
problem-solving behaviors. Yet the time devoted to discussing survey results must be understood
within the context of how time is apportioned during the beat meeting itself. Meetings typically
last no more than one hour, and their structure is dictated by the standard CAPS agenda, which
inevitably restricts the amount of time that can be devoted to any single topic. That discussions
about survey results lasted 5-10 minutes likely reflects this reality; indeed, more than once an
observer noted that the time spent on survey results equaled or exceeded the time afforded any
other single topic covered during the meeting.
Training components. In addition to receiving selected survey results, the 17
feedback/training beats also received two separate training components. As noted earlier, the
first component was an all-day training session for beat sergeants, run jointly by UIC and the
CAPS Project Office prior to the start of the project. The training consisted of a problem-solving
refresher and instruction on the use and interpretation of survey results in problem solving. The
session gave sergeants an opportunity not only to ask questions and express concerns about
project objectives and their own expected roles, but also to become familiar with the CPD's
broader plans to expand police-community communication through technology. Given that
police in these beats typically outperformed those in other beats on both implementation levels
and quality, the training session appears to have made
sergeants feel more invested in the project and better understand the CPD's larger goal of
incorporating information technology into the organization's day-to-day operations, thereby
lending the project greater legitimacy.
The second training component was the introduction, in Wave 4, of exercises to guide
discussion of selected survey results. These exercises were intended to enhance discussions by
providing officers with questions that sought to (1) develop a fuller understanding of citizen
concerns and the nature of the problem related to the particular survey findings; (2) engage
citizens more fully in the problem-solving process by seeking their input on potential solutions;
and (3) give officers an opportunity to educate citizens about the police function and crime
prevention activities. The topics selected as the focus of the three problem-solving exercises
were (1) citizen perceptions of how fair and impartial Chicago police officers are; (2) common
misperceptions citizens hold regarding the police function; and (3) resident fear of crime within
their neighborhoods (see Appendix G for an example). To ensure compliance, officers were
required to use a form to summarize the points discussed during each exercise and return the
completed form to the CAPS Project Office.
As with other tasks, full police compliance was not achieved, but implementation rates
steadily increased with each subsequent wave. Based on 41 observations, the rate of participation
in the exercises increased from 35.7% in Wave 4 to 58.3% by Wave 6. When police did use the
exercises to guide their discussions, observers tended to record more robust discussion of the
survey results and somewhat more problem solving on the part of participants. Examination of
the nature of problems occurred in 52.9% of discussions where exercises were used, compared to
40% of discussions where they were not (χ² = .259, p > .05). Solutions to problems were
proposed during 52.9% of discussions using the exercises, but in only 20% of discussions
without them (χ² = 1.69, p > .05). Observers at times also noted somewhat more participation by
residents during discussions using exercises, likely because questions specifically intended to
elicit resident input were built into the exercises.
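The exercise comparisons can be checked in the same way. Here the back-calculated counts
(implying 17 discussions with exercises and 5 without) yield very small expected cell
frequencies, so an exact test is shown alongside for comparison; as before, the counts are
assumptions inferred from the reported percentages, not figures printed in the report.

    # Same uncorrected Pearson test for the exercise comparisons; cell
    # counts are back-calculated from the percentages and are assumptions.
    from scipy.stats import chi2_contingency, fisher_exact

    # rows: exercises used, exercises not used; cols: present, absent
    nature = [[9, 8], [2, 3]]     # 52.9% vs 40% explored nature of problems
    solutions = [[9, 8], [1, 4]]  # 52.9% vs 20% proposed solutions

    for table in (nature, solutions):
        chi2, p, _, _ = chi2_contingency(table, correction=False)
        _, p_exact = fisher_exact(table)  # safer with cells this small
        print(f"chi2 = {chi2:.2f} (p = {p:.2f}); Fisher exact p = {p_exact:.2f}")

The χ² values match those reported (.26 and 1.69), and the exact test likewise finds no
significant difference, reinforcing the cautious reading of these patterns at such small sample
sizes.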
The forms on which police summarized their discussions during the exercises are a less
reliable, but nonetheless valuable, source of information. While some are terse or state that
residents had no questions or did not respond to police encouragement to contribute, others
suggest the depth of the discussions that occurred. This is particularly evident with the final
exercise, in which discussion centered not only on identifying locations around the beat where
residents felt unsafe, but on exploring the causes of this fear and possible ways to reduce it.
Responses from several beats demonstrated a more in-depth analysis of causes and solutions than
is typical at beat meetings. This suggests there is value in providing police with further guidance
for discussing and examining beat problems, particularly those problems police feel they can do
little about, such as resident fear of crime.
3. Police Attitudes about Participation in the Project
Internet communication. Police personnel, including beat sergeants, patrol officers, and
community policing officers, who regularly attended and/or facilitated beat meetings were
interviewed at length regarding their attitudes about the project and the nature of their
participation. Of the 48 individuals interviewed, roughly 80% expressed at least partial support
for the general idea of citizens using the Internet as a means to communicate public safety
concerns to the police, while 12% did not support the idea in any way and another 8% took a
neutral stance. Support was largely based on the premise of the Internet as a viable alternative
for information sharing by residents whether they did or did not attend beat meetings. Police
perceived the anonymity of Internet communication as the primary benefit to residents; as one
patrol officer stated, it would provide residents with a “good opportunity to express feelings
without being in a public forum.” Police acknowledged that some residents were leery of
speaking out when they did attend beat meetings, just as other residents failed to attend at all
due to “a fear of reprisal.” The implied source of this fear was typically the criminal element
within a community, but it was recognized that police were also a source of “intimidation” for
some residents. As one community policing officer said, “they want to talk to us, but they don’t
want to talk to us.” Accordingly, it was felt that residents would be “more prone to speak out on
the Internet” and “more open and honest than at a meeting.”
Those who supported the use of the Internet presented two sets of beliefs regarding its
role within the CAPS