MASSACHUSETTS INSTITUTE OF TECHNOLOGY
ARTIFICIAL INTELLIGENCE LABORATORY
and

CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING
DEPARTMENT OF BRAIN AND COGNITIVE SCIENCES
A.I. Memo No. 1510
C.B.C.L. Memo No. 109

December, 1994

Fast Object Recognition in Noisy Images Using
Simulated Annealing
Margrit Betke

Nicholas C. Makris

This publication can be retrieved by anonymous ftp to publications.ai.mit.edu.

Abstract

A fast simulated annealing algorithm is developed for automatic object recognition. The object recognition problem is addressed as the problem of best describing a match between a hypothesized object and an image. The normalized correlation coefficient is used as a measure of the match. Templates are generated on-line during the search by transforming model images. Simulated annealing reduces the search time by orders of magnitude with respect to an exhaustive search. The algorithm is applied to the problem of how landmarks, for example traffic signs, can be recognized by an autonomous vehicle or a navigating robot. Images are assumed to be taken while the robot or the vehicle is moving through its environment. The algorithm tries to match them with templates created on-line from models stored in a database. We illustrate the performance of our algorithm with real-world images of complicated scenes containing traffic signs. False positive matches occur only for templates with very small information content. To avoid false positive matches, we propose a method to select model images for robust object recognition by measuring the information content of the model images. The algorithm works well in noisy images for model images with high information content.

Copyright © Massachusetts Institute of Technology, 1993

This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041.
The first author can be reached at margrit@ai.mit.edu. The second author can be reached at Naval Research Laboratory, Washington, D.C. 20375.

1 Introduction

The field of automated object recognition is one of the most complex areas in computer vision and image understanding. Object recognition based on matched filtering has been a very active research area in computer vision for many years. Matched filtering was used much earlier in the areas of radar, sonar, and signal processing [Opp78], and valuable information for visual object recognition can be obtained from that literature.

Although template matching has been widely used in computer vision [BB82, Yar85], a crucial problem with the method is the size of the search space [MR90, LC88]. Several approaches published in the literature either reduce the size of the search space or direct the search towards areas of the search space in which a match is more likely [Gri88, Gri90, NR72, MR90, AF86]. In this paper a new approach is proposed that uses both kinds of techniques. We discuss the problem of how certain landmarks, for example traffic signs, can be recognized by an autonomous vehicle or robot. For this particular application, a five-dimensional search space is sufficiently large for robust object recognition and small enough for efficient object recognition.

The method presented constructs templates on-line during the search. The algorithm uses an efficient local definition of the correlation coefficient to evaluate the match. The algorithm correctly finds the location, shape, size, and orientation of objects. If a template image contains enough independent information, it can be matched uniquely with an object in an image. False positive matches occur only for objects that have very small information content. To avoid false matches, templates with insufficient information content should not be used for recognition tasks. We describe how to compute the information content of template images.
Although the main objective of this paper is to describe a new approach to the general problem of visual object recognition, the solution to the special problem of recognizing traffic signs is significant by itself. Automatically recognizing traffic signs in images is very valuable for mobile robot or autonomous vehicle navigation. A robot that can recognize a traffic sign as a familiar landmark in its map of the environment can then use this information to localize itself in its environment [BG94, Bra90]. Our method stands apart from previous approaches to traffic sign recognition because, first, it is efficiently applied to real-world landscape images (as opposed to Ettinger's isolated signs [Ett88]), and second, it does not rely on color perception, which is very sensitive to lighting changes. This sensitivity limits the approach of May [May94] and Zheng et al. [ZRJ94], who address the problem of recognizing traffic signs using color information.

The optimization technique fast simulated annealing is applied to avoid the cost of brute-force search by directing the search successfully. It reduces the search time by orders of magnitude. Recent publications in the sonar literature [CBK+93, KCPD90] show that fast simulated annealing has been very successful in coherent signal extraction and localization in noisy environments. We use it in a similar way for incoherent image processing. Kirkpatrick et al. [KGV83] show how to implement a Metropolis algorithm [MRR+53] to simulate annealing of combinatorial optimization problems. Szu and Hartley [SH87] propose an inverse linear cooling schedule for simulated annealing; this version is called "fast simulated annealing." The original, slower version of simulated annealing has been applied to segmentation and noise reduction of degraded images by Geman and Geman [GG84], to the representation of lobed objects by Friedland and Rosenfeld [FR91], and to boundary detection by Geman et al. [GGGD90]. However, fast simulated annealing has not yet been exploited for visual object recognition.
This paper is organized in the following way: The object recognition problem is defined as a parameter search problem in Section 2. Section 3 shows how templates are generated from model images. Section 4 examines the search space of the recognition problem and introduces "ambiguity surfaces." Section 5 describes our simulated annealing algorithm, and Section 6 reports our experimental results. Section 7 analyzes the error in the correlation and proposes how to avoid false matches. Section 8 describes our results on noisy images. We conclude with a summary of this work and suggestions for applying these results to other problems.

2 The Recognition Problem

An object in an image I is defined to be recognized if it correlates highly with a template image T of the hypothesized object. This template image T is a transformed version of the model of the hypothesized object. Model images of objects are stored in a library. Section 3 shows how to compute the template from the model. A template T(x, y), for 0 ≤ x < n_T, 0 ≤ y < m_T, is generally much smaller than the image I(x, y). The template is compared with the part I_T(x, y) of image I(x, y) that contains the hypothesized object. Assuming pixel (x_0, y_0) is at the lower-left corner of the hypothesized object in I, subimage I_T is defined to be
\[
I_T(x, y) = I(x_0 + x,\, y_0 + y) \quad \text{for } 0 \le x < n_T,\ 0 \le y < m_T .
\]
We use the normalized correlation coefficient as a measure of how well images I_T and T correlate or match. For images I_T and T, the normalized correlation coefficient ρ is the covariance of I_T and T normalized by the standard deviations of I_T and T. The correlation coefficient is dimensionless, and |ρ| ≤ 1. The correlation coefficient measures how accurately image I_T can be approximated by template T. Image I_T and template T are perfectly correlated if ρ = 1. We approximate ρ using the sampled coefficient of correlation
\[
r = \frac{p_T \sum_{x,y} I_T(x,y)\, T(x,y) \;-\; \sum_{x,y} I_T(x,y) \sum_{x,y} T(x,y)}{\sigma_{I_T}\, \sigma_T},
\]
where
\[
\sigma_{I_T} = \sqrt{\,p_T \sum_{x,y} I_T(x,y)^2 - \Bigl(\sum_{x,y} I_T(x,y)\Bigr)^{2}}, \qquad
\sigma_T = \sqrt{\,p_T \sum_{x,y} T(x,y)^2 - \Bigl(\sum_{x,y} T(x,y)\Bigr)^{2}},
\]
and p_T is the number of pixels in the template image T with nonzero brightness values, with p_T ≤ n_T · m_T. Note that this last condition means that not all the pixels in images T and I_T are actually compared, but only the nonzero pixels in T with the corresponding pixels in I_T. This is important, for example, if the template contains a circular object: here pixels in T bordering the circle (or the background) will be zero (black). The computation time of r is proportional to the number of pixels in the hypothesized object, which is usually much smaller than the number of pixels in I. Using the correlation as a measure of successful recognition is also advantageous because it is a very robust measure. That is, it is relatively insensitive to fluctuations in the environment compared to higher-resolution methods, as is well documented in spectral, bearing, and range estimation problems [Joh82, BKM93].
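As an illustration of this local definition, the sketch below (our own, not code from the memo; the function name sampled_correlation is hypothetical) evaluates r over only the nonzero template pixels, so its cost is proportional to p_T as noted above.

```python
import numpy as np

def sampled_correlation(image, template, x0, y0):
    """Sampled correlation coefficient r between template T and the
    subimage I_T of I whose lower-left corner is at (x0, y0).
    Only pixels where T is nonzero enter the sums, so p_T <= n_T * m_T."""
    nT, mT = template.shape
    if x0 < 0 or y0 < 0 or x0 + nT > image.shape[0] or y0 + mT > image.shape[1]:
        return 0.0                          # template does not fit at this offset
    I_T = image[x0:x0 + nT, y0:y0 + mT].astype(float)
    T = template.astype(float)

    mask = T != 0                           # compare only nonzero template pixels
    pT = int(mask.sum())
    if pT == 0:
        return 0.0
    i, t = I_T[mask], T[mask]
    num = pT * np.sum(i * t) - np.sum(i) * np.sum(t)
    sigma_i = np.sqrt(pT * np.sum(i * i) - np.sum(i) ** 2)
    sigma_t = np.sqrt(pT * np.sum(t * t) - np.sum(t) ** 2)
    if sigma_i == 0 or sigma_t == 0:
        return 0.0
    return num / (sigma_i * sigma_t)
```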

3 Generating Templates from Model Images

A template T(x, y) is generated from a model image M(x, y) by choosing three parameters that describe a transformation from M into T. The parameters determine how the model is sampled and, if necessary, how it is interpolated to generate the template. The parameters used are a rotation parameter θ and two sampling parameters s_x and s_y.
For notational convenience, we define the origin of a coordinate system for model image M(x, y) to be in the middle of the image, i.e., M(x, y) is defined for −(n_M − 1)/2 ≤ x ≤ (n_M − 1)/2 and −(m_M − 1)/2 ≤ y ≤ (m_M − 1)/2, with n_M, m_M odd. The rotation parameter θ then determines how the x and y axes of M(x, y) are rotated to define the x and y axes of T(x, y). More precisely, given vectors
\[
m_x = \Bigl(\frac{n_M - 1}{2},\, 0\Bigr) \quad \text{and} \quad m_y = \Bigl(0,\, \frac{m_M - 1}{2}\Bigr),
\]
which lie on the coordinate axes of M, and model radius
\[
R_M = \sqrt{\Bigl(\frac{n_M - 1}{2}\Bigr)^{2} + \Bigl(\frac{m_M - 1}{2}\Bigr)^{2}},
\]
we compute vectors
\[
t_x = R_M (\cos\theta,\, \sin\theta) \quad \text{and} \quad t_y = R_M (-\sin\theta,\, \cos\theta),
\]
which define the coordinate axes of the template image T in continuous space. The axes of T always span the model object, as shown in Figure 1.
The sampling parameters s_x and s_y determine how many samples along vectors t_x and t_y, respectively, are used for the template image. The spacing between the samples along t_x is ((n_M − 1)/2)/s_x. If there is a pixel in M(x, y) after every (n_M − 1)/(2 s_x) step along t_x, its brightness is used to define T along its x-axis. For example, this scenario may occur if θ = 45 degrees and s_x = (n_M − 1)/2. As shown in Figure 1, if s_x = (n_M − 1)/4 the model is down-sampled and transformed into a template that is about one-quarter the size of the model. Pixels of zero brightness are added where necessary, as shown in Figure 1.
In general, there may not be a pixel in M at the sampling point on vector t_x. If this is the case, we use a four-point interpolation to define the brightness of the template at that point. Similarly, M is sampled (and if necessary interpolated) along vectors t_y, −t_x, and −t_y to obtain the brightness of the template pixels along the template coordinate axes. The rest of the template is then determined from M along the grid that is defined by the samples on the template coordinate axes.
Since the sampling rates s_x and s_y in the template coordinate system are in general different, the template is a rotated, scaled, and uniformly deformed version of the model. More parameters would be needed to describe more general non-uniform and non-linear deformations of the model. A straightforward extension would be to add a fourth parameter to obtain a non-uniform linear deformation of the model. However, for our purposes the transformation described is sufficient, because the objects to be recognized are usually flat, normal to the viewing direction, and far away from the camera compared to the object size. Our method computes the template very quickly by sweeping over the model image only once. The time for creating an n_T × m_T template image is O(n_T m_T).
Examples of a model and corresponding transformed templates are shown in Figure 2. The first two templates are scaled by s_x = s_y and are not rotated. The remaining templates in Figure 2 are defined by more general transformations with s_x ≠ s_y.
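The sketch below (our own illustration with a simplified parameterization, not the memo's exact single-sweep implementation; make_template and the fixed template size are assumptions) samples the model along axes rotated by θ with spacings ((n_M − 1)/2)/s_x and ((m_M − 1)/2)/s_y and uses four-point (bilinear) interpolation, leaving zero-brightness pixels where a sample falls outside the model.

```python
import numpy as np

def make_template(model, theta_deg, sx, sy, nT, mT):
    """Sample model image M along axes rotated by theta to build an
    nT x mT template: a sketch of the three-parameter transformation."""
    nM, mM = model.shape
    cx, cy = (nM - 1) / 2.0, (mM - 1) / 2.0      # model center (origin of M)
    step_x = ((nM - 1) / 2.0) / sx               # spacing of samples along t_x
    step_y = ((mM - 1) / 2.0) / sy               # spacing of samples along t_y
    th = np.radians(theta_deg)
    T = np.zeros((nT, mT))
    for i in range(nT):
        for j in range(mT):
            u = (i - (nT - 1) / 2.0) * step_x    # offset along t_x
            v = (j - (mT - 1) / 2.0) * step_y    # offset along t_y
            # rotate the template offsets into model coordinates
            x = cx + u * np.cos(th) - v * np.sin(th)
            y = cy + u * np.sin(th) + v * np.cos(th)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            if 0 <= x0 < nM - 1 and 0 <= y0 < mM - 1:
                ax, ay = x - x0, y - y0
                # four-point (bilinear) interpolation between the neighbors
                T[i, j] = ((1 - ax) * (1 - ay) * model[x0, y0]
                           + ax * (1 - ay) * model[x0 + 1, y0]
                           + (1 - ax) * ay * model[x0, y0 + 1]
                           + ax * ay * model[x0 + 1, y0 + 1])
            # samples outside the model stay zero, as in Figure 1
    return T
```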

4 The Parameter Search Space

The space of possible solutions of the recognition problem is extremely large, even if a particular object is known a priori to be in the image. The dimension of the search space is determined by the number of possibilities for position, size, shape, and orientation of the object. The number of possibilities for the position of the centroid of the object in the image is O(n^2) for an n × n image. Assuming that the size and shape of the object can be approximated by sampling the model along two perpendicular axes as described in the previous section, the number of possibilities to approximate the size and shape of the object is also O(n^2). Even with this assumption, the number of possible angles is still very large; since the image is discrete, we assume that the number of possible angles is O(n). Thus, the size of the search space is O(n^5) for an n × n image. For a typical image of size 256 × 256, the search space has a size of order 10^14. An exhaustive search of this space would take too long to find a good match between templates and images.
We use terminology from the radar and sonar literature to describe the search space. We call the space an ambiguity surface. A peak in the ambiguity surface means that the correlation coefficient is high for a particular set of parameters. Figure 3 shows an example of a two-dimensional ambiguity surface with a peak shown in black. There may be several peaks in an ambiguity surface. If the template and the object in the image match perfectly, the cross-correlation between template and image results in a peak in the ambiguity surface which is the global optimum. Due to noise and the reduction of the search space by our template transformation, we do not expect a perfect match. However, in most cases the global optimum corresponds to a correct match or recognition.
[Figure 1 panels: model image with axes m_x, m_y; template image with axes t_x, t_y; pixels taken from the model and added zero pixels are marked.]
Figure 1: A 5 × 5 template image is obtained from a 9 × 9 model image using parameters s_x = s_y = 2 and θ = 45 degrees.

Figure 2: Model of slow sign with 101 × 111 pixels, and six templates of the slow sign. Templates are obtained by sampling the model sign at various sampling rates and degrees of rotation.

Figure 3: On the left, image Slow3. On the right, the ambiguity surface of image Slow3 computed for all possible translations given fixed angle and scaling parameters. A deterministic search would compute each value on this surface. A steepest descent procedure would fail because of local minima. Therefore, a stochastic search is used to find the best correlation value (here the darkest pixel value).

As we can also see in Figure 3, an iterative search for a peak in the ambiguity surface, such as steepest descent, would fail because it would get "stuck" in local minima. Simulated annealing, however, is able to "jump" out of local minima and find the globally best correlation value.

5 The Simulated Annealing Algorithm

In this section we describe our algorithm for finding an optimal match between images and templates. Our algorithm is based on a fast version of simulated annealing. Simulated annealing has become a popular search technique for solving optimization problems. Its name originates from the process of slowly cooling molecules to form a perfect crystal. The cooling process and its analogous search algorithm is an iterative process, controlled by a decreasing temperature parameter. At each iteration, our algorithm generates templates on-line as described in Section 3. New test values for the location, sampling, and rotation parameters of the template are randomly perturbed from the current values. If the correlation coefficient r_j increases over the previous coefficient r_{j−1}, the new parameter values are accepted in the j-th iteration (as in the gradient method). Otherwise, they are accepted if
\[
e^{-(E_j - E_{j-1})/T_j} > \eta,
\]
where η is randomly chosen in [0, 1], T_j is the temperature parameter, and E_j = 1 − r_j is the cost function in the j-th iteration. For a sufficiently high temperature this allows "jumps" out of local minima. We choose
\[
T_j = T_0 / j, \qquad 1 \le j \le L,
\]
as the cooling schedule for the j-th update of the temperature parameter, where T_0 is the initial temperature and L is the number of iterations during the search. Note that the rate at which the temperature decreases is inverse linear, as first proposed by Szu and Hartley [SH87], and converges faster than the often used inverse logarithmic cooling schedule [GG84]. As a criterion for stopping the annealing process, we simply put a limit on the search length L. Although this does not ensure convergence to the optimal correlation coefficient, the solutions we obtain for the parameters are generally sufficient and solve the recognition task.
As Kuperman et al. [KCPD90] point out, if the search problem involves different kinds of parameters, the annealing algorithm is rather analogous to the cooling of a mixture of liquids, each of which has a different freezing point. An algorithm that randomly perturbs all parameters at the same time has poor convergence properties. Therefore, at a specific temperature we do not combine the tests for the choice of the location, sampling, and rotation angle. We also obtain good results using simulated annealing only for the location parameters, and a gradient descent procedure [CBK+93] for the remaining parameters, given large enough perturbations.
To deal properly with the boundaries of an image I(x, y), for which 0 ≤ x < n_I and 0 ≤ y < m_I, we use the following formula to perturb the x-coordinate c_x of the centroid position of a template with radius R_T in image I(x, y):
\[
c_x \;\leftarrow\;
\begin{cases}
c_x & \text{if } c_x - R_T \ge 0 \text{ and } c_x + R_T \le n_I, \\
-c_x & \text{if } c_x + R_T < 0 \text{ and } c_x - R_T \ge -n_I, \\
2 n_I - c_x & \text{if } c_x - R_T > n_I \text{ and } c_x + R_T \le 2 n_I, \\
n_I / 2 & \text{otherwise (unlikely perturbation).}
\end{cases}
\]
The y-coordinate c_y of the centroid of the template is perturbed similarly. This formula avoids attracting the centroid position to the rim or corners of the image.
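Putting the pieces together, the sketch below (our own; the perturbation step sizes, the fixed 41 × 41 template size, and the helpers sampled_correlation and make_template from the earlier sketches are assumptions, and the memo's separate per-parameter tests are collapsed into a single loop for brevity) applies the acceptance rule exp(−(E_j − E_{j−1})/T_j) > η with the inverse-linear cooling schedule T_j = T_0/j and the reflection rule above.

```python
import numpy as np

def reflect(c, R, n):
    """Boundary rule of Section 5: keep a perturbed centroid coordinate valid."""
    if c - R >= 0 and c + R <= n:
        return c
    if c + R < 0 and c - R >= -n:
        return -c
    if c - R > n and c + R <= 2 * n:
        return 2 * n - c
    return n // 2                                  # unlikely perturbation: reset

def anneal(image, model, L=400, T0=1.0, rng=np.random.default_rng(0)):
    nI, mI = image.shape
    # state: centroid (cx, cy), sampling rates (sx, sy), rotation angle theta
    state = [nI // 2, mI // 2, 20.0, 20.0, 0.0]
    RT = 20                                        # radius of the 41 x 41 template

    def energy(s):
        cx, cy, sx, sy, th = s
        T = make_template(model, th, sx, sy, 2 * RT + 1, 2 * RT + 1)
        r = sampled_correlation(image, T, int(cx) - RT, int(cy) - RT)
        return 1.0 - r                             # cost E = 1 - r

    E = energy(state)
    for j in range(1, L + 1):
        Tj = T0 / j                                # inverse-linear cooling schedule
        cand = list(state)
        cand[0] = reflect(cand[0] + rng.integers(-20, 21), RT, nI)
        cand[1] = reflect(cand[1] + rng.integers(-20, 21), RT, mI)
        cand[2] = max(2.0, cand[2] + rng.normal(0.0, 2.0))
        cand[3] = max(2.0, cand[3] + rng.normal(0.0, 2.0))
        cand[4] = (cand[4] + rng.normal(0.0, 10.0)) % 360.0
        Ec = energy(cand)
        # accept if the correlation improved, otherwise with annealing probability
        if Ec < E or np.exp(-(Ec - E) / Tj) > rng.random():
            state, E = cand, Ec
    return state, 1.0 - E
```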

6 Experimental Results

The algorithm described above was implemented on a Sun workstation and on a Silicon Graphics Iris. We used the model images shown in Figure 4 to find templates that correlate optimally with the scene images shown in Figure 5. The images are quantized using 256 grey levels. The size of the model images is 122 × 117 pixels (except for the one-way sign, which has 178 × 60 pixels). The size of the scene images varies between 100 × 70 and 516 × 365 pixels.
For all scene images, the shape, size, orientation, and location of any traffic sign is found if it is known a priori what kind of sign to look for. For example, using the stop sign model shown in Figure 4, the algorithm finds the stop sign in a complicated scene image like image Stop5. (This is the second image in the last row of images in Figure 5; see also Figure 6.) The stop sign in scene image Stop5 is recognized although the stop sign model was constructed from a picture of a completely different stop sign. Note that the stop sign in image Stop5 has graffiti, while the model sign does not.
For the more general problem of recognizing which object is in a scene image (i.e., not knowing the kind of traffic sign a priori), we ran 144 experiments with 18 scene images and 8 model images. Table 1 contains the correlation values obtained in the experiments. For each scene image, our algorithm computes the highest correlation coefficient among the set of values obtained for each model (the row maxima in Table 1). The model corresponding to the maximum correlation value is selected as the sign recognized in the scene image. For most scene images, the correlation coefficient is highest when a match between a sign in the image and its corresponding template occurs. Only for three images, Slow2, Stop4, and Stop5, does a false positive match occur, because the best correlation coefficient is not the one for the corresponding model. We show the templates causing these false positive matches in Figure 6.
There are two facts that contribute to the false positive matches. First, some models do not have enough structure by themselves and match easily with arbitrary parts of the images. For example, the European no-entry sign's white middle bar matches with the roof of a car in image Stop5, as shown in Image 5 of Figure 6. In Section 7 we analyze this problem quantitatively. Second, some models look quite different from the actual landmark in the scene image. For example, as mentioned before, the stop sign model does not have any graffiti while the signs in Stop4 and Stop5 do. The templates constructed from the model stop sign do not match the stop signs in images Stop4 and Stop5 well enough to result in a correlation coefficient larger than the one obtained with the model E-no-entry (see Images 4 and 5 of Figure 6). One could try to solve this problem by making a model of each traffic sign (including its graffiti) in the environment. However, this would result in a huge library of signs, which would increase the search time substantially. Moreover, the environment may change and quickly outdate the library. Therefore, we instead propose to select a small number of model images with high information content (see Section 7) so that false positive matches are avoided.

Figure 4: Model images used in experiments: Footpath, E-no-entry, No-entry, One-way, Priority, Slow, Stop, and Yield.


6.1 Illumination Changes
The correlation coefficient ρ(I_T, T) measures not only how accurately image I_T can be approximated by template T, but also how accurately image I_T can be approximated by a linear function of T, since ρ(I_T, T) = ρ(I_T, aT + b) for constants a > 0 and b. Therefore, the correlation coefficient is invariant to constant scale factors in brightness. Thus recognition is not affected by new lighting conditions that mainly result in such brightness changes.

6.2 Simulated Annealing vs. Exhaustive Search
We also implemented an exhaustive search of the entire parameter space to compare its running time to that of our fast simulated annealing algorithm. The comparison drastically demonstrates the advantage of simulated annealing. We used image Noentry2, which has 112 × 77 pixels. The search space had about 6.8 × 10^7 sets of parameters. It took 15 seconds to recognize the sign using our simulated annealing algorithm. In contrast, exhaustive search found the sign only after more than 10 hours of computation time.
Figure 7 illustrates how fast our simulated annealing algorithm recognizes a sign in a scene image.
[Figure 7 plots the correlation coefficient r against the number of iterations.]

Figure 7: A typical run of our simulated annealing algorithm. The sign is found after about 300 iterations (ca.
18 s).

7 Avoiding False Matches

The error in the sampled coefficient of correlation r increases as the number of pixels p_T in the image window considered decreases. For large samples of p_T pixels, the error of r can be expressed as the mean squared error (MSE)
\[
E\bigl[(r - \rho)^2\bigr] = \frac{(1 - \rho^2)^2}{p_T}
\]
(see Figure 8 and Weatherburn [Wea62]). As Weatherburn points out, the sampling distribution of r is never even approximately normal. The probability curve is very skewed in the neighborhood of ρ = 1, even for large samples.
The normalized auto-correlation of model image M(x, y) is
\[
R(\delta_x, \delta_y) = \frac{\sum_x \sum_y M(x, y)\, M(x - \delta_x,\, y - \delta_y)}{\sum_x \sum_y \bigl(M(x, y)\bigr)^2}.
\]
The faster the auto-correlation falls off, the higher the resolution of the model image. Examples of auto-correlation images are shown in Figure 9.

Figure 5: Scene images used in recognition experiments. The images are named by the sign in the scene and a number
if the same sign is in more than one scene image. Reading left to right, the images are: Footpath, E-no-entry, No-entry
1 & 2, One-way, Priority 1, 2, & 3, Slow 1, 2, 3, & 4, Stop 1, 2, 3, 4, & 5, and Yield 1 & 2.


[Figure 6: six panels, labeled Image 1 through Image 6.]

Figure 6: False positive matches: Images 1 and 2 show templates constructed from models Slow and Yield overlying
the sign in image Slow2 (correlation values 0.56 and 0.58, respectively.) Images 3 and 4 are cropped images of Stop4
and Stop5 illustrating the best match with templates made from the Stop model. For images Stop4 and Stop5, we
obtain better correlation values using models E-no-entry and Yield. Cropped versions of image Stop5 illustrating
these false positive matches are shown in Images 5 and 6.

TABLE 1
Correlation Values for Recognition Task

            Models
Images      Footpath  E-no-entry  No-entry  One-way  Priority  Slow        Stop        Yield
Footpath    0.77      0.59        0.38      0.37     0.46      0.29        0.35        0.62
E-no-entry  0.49      0.73        0.39      0.43     0.46      0.26        0.38        0.62
No-entry1   0.22      0.21        0.67      0.31     0.24      0.18        0.17        0.40
No-entry2   0.29      0.18        0.84      0.37     0.14      0.26        0.23        0.35
One-way     0.37      0.55        0.24      0.70     0.40      0.38        0.31        0.58
Priority1   0.36      0.49        0.34      0.35     0.58      0.32        0.30        0.44
Priority2   0.46      0.54        0.40      0.45     0.66      0.29        0.32        0.31
Priority3   0.37      0.57        0.40      0.39     0.62      0.34        0.37        0.56
Slow1       0.25      0.29        0.25      0.25     0.45      0.74        0.15        0.38
Slow2       0.38      0.48        0.39      0.39     0.32      0.56 (2nd)  0.21        0.58
Slow3       0.39      0.58        0.41      0.38     0.40      0.62        0.30        0.59
Stop1       0.41      0.47        0.42      0.30     0.22      0.25        0.69        0.58
Stop2       0.23      0.16        0.27      0.25     0.18      0.11        0.38        0.30
Stop3       0.26      0.20        0.33      0.19     0.13      0.00        0.34        0.19
Stop4       0.42      0.73        0.46      0.50     0.43      0.32        0.56 (3rd)  0.66
Stop5       0.43      0.73        0.44      0.48     0.29      0.31        0.51 (3rd)  0.65
Yield1      0.45      0.75        0.39      0.50     0.53      0.32        0.37        0.78
Yield2      0.42      0.73        0.39      0.50     0.43      0.32        0.36        0.82

(The highest value in each row identifies the recognized model; "(2nd)" and "(3rd)" mark the rank of the correct model's value where it is not the row maximum.)

[Figure 8 plots the MSE in the sampled correlation r against the correlation coefficient ρ.]

Figure 8: Mean squared error of r for p_T = 100, 400, and 2500.
The resolution of a given model image can be measured with a single number, the coherence area
\[
A = \sum_x \sum_y \bigl(R(x, y)\bigr)^2 .
\]
Given the coherence area A and the number of pixels n of M(x, y), the number of coherence cells is c = n/A. The number of coherence cells is equivalent to the number of degrees of freedom of the model image. It can be used as a measure of the information content of the model image.
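As a sketch of how these quantities can be computed (our own illustration; the function name coherence_cells is hypothetical, and scipy.signal.fftconvolve is used to form the full-lag auto-correlation):

```python
import numpy as np
from scipy.signal import fftconvolve

def coherence_cells(model):
    """Number of coherence cells c = n / A, where A = sum_x sum_y R(x, y)^2
    and R is the normalized auto-correlation of the model image."""
    M = model.astype(float)
    # full auto-correlation of M over all shifts (correlation = convolution
    # with the flipped image)
    auto = fftconvolve(M, M[::-1, ::-1], mode="full")
    R = auto / np.sum(M * M)          # normalize so that R = 1 at zero shift
    A = np.sum(R ** 2)                # coherence area
    n = M.size                        # number of pixels in the model image
    return n / A
```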
We examine the information content of each model image to evaluate how useful it is for the recognition task. All our model images M(x, y) have the same number of pixels n. Model images with low resolution (little structure), such as the European no-entry and yield signs, do not have enough information content for robust object recognition. This, and the mean squared error in r for small p_T, are responsible for the false positive matches reported in Table 1. In order to avoid false matches, we need to avoid using such model images with low information content.
The models that contribute to the false positive matches, E-no-entry and Yield, have coherence areas of 313 and 197, respectively. This is much higher than the coherence area of models with more reliable matching results. For example, the Footpath and Stop signs' auto-correlation falls off much faster; their coherence areas are 148 and 56, respectively. The number of coherence cells in E-no-entry is 297 and in Yield 473, but in Footpath it is 628 and in Stop even 1641.
Thus, the number of coherence cells is a quantitative measure for determining whether a model has enough information content to be useful as a template. Most of the models we use have a large enough number of coherence cells for robust detection, but subsequent down-sampling during template generation may corrupt this.

8 Results on Noisy Images

Gaussian noise is added to the brightness values of some of the scene images to examine the robustness of our algorithm. The algorithm is able to find the sign even in strongly degraded pictures. The signal-to-noise ratio (SNR) of a noisy image is defined as 10 log10 of the ratio of the variance of the noisy image to the variance of the noise.
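Under this definition, zero-mean Gaussian noise reaching a given SNR can be generated as in the sketch below (our own illustration; it uses the approximation var(noisy) ≈ var(image) + var(noise) for independent noise, and assumes an SNR above 0 dB):

```python
import numpy as np

def add_noise_for_snr(image, snr_db, rng=np.random.default_rng(0)):
    """Add zero-mean Gaussian noise so that
    10 * log10(var(noisy) / var(noise)) is approximately snr_db (> 0 dB)."""
    ratio = 10.0 ** (snr_db / 10.0)              # var(noisy) / var(noise)
    # var(noisy) = var(image) + var(noise) for independent noise
    var_noise = np.var(image) / (ratio - 1.0)
    noisy = image + rng.normal(0.0, np.sqrt(var_noise), image.shape)
    return np.clip(noisy, 0, 255)                # keep 8-bit brightness range
```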


Several noisy images are obtained by corrupting image Slow3 with zero-mean Gaussian noise at various signal-to-noise ratios. Our results for image Slow3 are summarized in Figure 10. Note that the correlation increases as the signal-to-noise ratio increases.

Figure 10: Correlation coefficient for sign recognition in noisy versions of image Slow3.
Figure 11 shows images Slow3 and Slow4 corrupted by Gaussian noise with zero mean and SNR 3 dB and 5 dB, respectively. Matches in pictures with much lower SNR are possible for templates with a much larger number of pixels and higher information content than those presented. (In radar and sonar, signals with negative SNR are commonly extracted given sufficient information content.)

9 Conclusions

Our method has been shown to efficiently recognize objects in complicated landscapes in the presence of noise. To our knowledge, our work is the first to apply fast simulated annealing to object recognition. Our results show that it makes the parameter search of object recognition feasible.
We strongly advocate the use of template matching in recognition tasks and provide quantitative techniques to analyze its limits. We show how to measure the information content of templates as a way to make the recognition algorithm robust.
For the application of traffic signs, we have shown that the search space can be successfully reduced by using a three-parameter transformation from model image to template. This method is well suited for recognition tasks that involve objects with scale and shape variations. The method is so efficient that templates can be constructed on-line during the search.
For future work, severe illumination variations within the object and occlusion problems can be addressed. Other applications of our method, for example in medical computer vision and in face recognition, are being investigated. A recent paper by Brunelli and Poggio [BP93] reports successful face recognition using template matching. The authors normalize their test images by fixing the direction of the eye-to-eye axis and the interocular distance. The locations of the masks for the eye, nose, mouth, and face templates are also fixed.

Figure 9: Auto-correlation of model images Footpath, Stop, E-no-entry, and Yield. To illustrate how fast the auto-correlation falls off, the e-folding lengths, i.e., pixels (x, y) with R(x, y) ≥ 1/e, are shown on a dark contour.

Figure 11: The first and third images are images Slow3 and Slow4 degraded by Gaussian noise with zero mean and SNR 3 dB and 5 dB, respectively. The second and fourth images illustrate that the object is recognized: the computed templates are shown overlying the recognized sign in the scene. (These images are shown brighter so that the overlying template can be seen better.)
We believe that we can generalize Brunelli and Poggio's application to recognize faces in images that are not normalized but contain more general scenes with varied backgrounds.

Acknowledgements

We would like to thank Wilfried Betke for taking some of the pictures, and Marney Smyth, Michelle Hsu, Ou-Dan Peng, and Acee Agoyo for their help in preparing the document.

References

[AF86] Nicholas Ayache and Olivier D. Faugeras. HYPER: A new approach for the recognition and positioning of two-dimensional objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):44-54, January 1986.

[BB82] Dana H. Ballard and Christopher M. Brown. Computer Vision. Prentice-Hall, 1982.

[BG94] Margrit Betke and Leonid Gurvits. Mobile robot localization using landmarks. IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, September 1994. Also published as Technical Report SCR-94-TR-474, Siemens Corporate Research.

[BKM93] Arthur B. Baggeroer, William A. Kuperman, and Peter N. Mikhalevsky. An overview of matched field methods in ocean acoustics. IEEE Journal of Oceanic Engineering, 18(4):401-424, 1993.

[BP93] Roberto Brunelli and Tomaso Poggio. Face recognition: Features versus templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10):1042-1052, October 1993.

[Bra90] David J. Braunegg. MARVEL: A system for recognizing world locations with stereo vision. Technical Report 1229, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, May 1990.

[CBK+93] Michael D. Collins, Jonathan M. Berkson, William A. Kuperman, Nicholas C. Makris, and John S. Perkins. Applications of optimal time-domain beamforming. Journal of the Acoustical Society of America, 93(4):1851-1865, April 1993.

[Ett88] Gil J. Ettinger. Large hierarchical object recognition using libraries of parameterized model sub-parts. In IEEE Proceedings of Computer Vision and Pattern Recognition, pages 32-41, June 1988.

[FR91] Noah S. Friedland and Azriel Rosenfeld. Lobed object delineation using a multipolar representation. Technical Report CS-TR-2779, Center for Automation Research, University of Maryland, October 1991.

[GG84] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6(6):721-741, November 1984.

[GGGD90] Donald Geman, Stuart Geman, Christine Graffigne, and Ping Dong. Boundary detection by constrained optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):609-628, July 1990.

[Gri88] W. Eric L. Grimson. On the recognition of parameterized 2D objects. International Journal of Computer Vision, 3:353-372, 1988.

[Gri90] W. Eric L. Grimson. Object Recognition by Computer: The Role of Geometric Constraints. MIT Press, 1990.

[Joh82] J. H. Johnson. The application of spectral estimation methods to bearing estimation problems. Proceedings of the IEEE, 70(9):1018-1028, 1982.

[KCPD90] William A. Kuperman, Michael D. Collins, John S. Perkins, and N. R. Davis. Optimal time-domain beamforming with simulated annealing including application of a priori information. Journal of the Acoustical Society of America, 88(4):1802-1810, October 1990.

[KGV83] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, pages 671-680, May 1983.

[LC88] Z.-Q. Liu and Terry M. Caelli. Multiobject pattern recognition and detection in noisy backgrounds using a hierarchical approach. Computer Vision, Graphics, and Image Processing, 44:296-306, 1988.

[May94] Franz May. Vision system for safe driving. Presented at the Center for Biological and Computational Learning, MIT, September 1994.

[MR90] Avraham Margalit and Azriel Rosenfeld. Using probabilistic domain knowledge to reduce the expected computational cost of template matching. Computer Vision, Graphics, and Image Processing, 51:219-234, 1990.

[MRR+53] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equations of state calculations by fast computing machines. J. Chem. Physics, C-21, 1953.

[NR72] Roger N. Nagel and Azriel Rosenfeld. Ordered search techniques in template matching. Proceedings of the IEEE, 60(2):242-244, 1972.

[Opp78] A. V. Oppenheim, editor. Applications of Digital Signal Processing. Englewood Cliffs: Prentice-Hall, 1978.

[SH87] H. Szu and R. Hartley. Fast simulated annealing. Physics Letters A, 122(3-4):157-162, June 1987.

[Wea62] C. E. Weatherburn. A First Course in Mathematical Statistics. Cambridge University Press, 1962.

[Yar85] Leonid P. Yaroslavsky. Digital Picture Processing. Springer-Verlag, Berlin, 1985.

[ZRJ94] Yong-Jian Zheng, Werner Ritter, and Reinhard Janssen. An adaptive system for traffic sign recognition. In Proceedings of the Intelligent Vehicles Symposium, 1994.


