Fourier Optics Manual


Contents

1 Introduction

I Fourier Optics
2 Theory
  2.1 Fourier Series
  2.2 Fourier Transform
  2.3 Bandwidth Theorem
  2.4 Fourier Optics
3 Manipulations
  3.1 Uni-dimensional Periodic Functions
  3.2 Non-Periodic Functions: Bandwidth Theorem
  3.3 Inverse Fourier Transform: Image Reconstruction

II Spatial Filtering
4 Introduction
5 Manipulations
  5.1 Uni-dimensional Filtering
  5.2 Bi-dimensional Filtering
  5.3 Image Processing
  5.4 Supplemental Activities

III Holography
6 Introduction
7 Manipulations
  7.1 Michelson Interferometer
  7.2 Transmission Hologram
  7.3 Optional: Rainbow Hologram
  7.4 Optional: Digital Hologram
  7.5 Matlab Implementation of the Reconstruction Algorithm
The relevant appendices can be found under 'Related Documents' for each experiment at this link:
http://sky.campus.mcgill.ca/Exp/exp.a5w#
Fourier Optics
Updated in August 2018
Warnings: Be careful when using the He-Ne laser. Direct eye exposure can severely damage your retina; if
need be, wear protective goggles. Specular reflections (such as from mirrors) are as dangerous as direct exposure.
Diffuse reflections (such as reflection on paper) are not as dangerous, but you should still be careful about
highly reflective surfaces, including the optical table.
1 Introduction
Optical data processing, spatial filtering and holography are amongst a large variety of optical applications
that are closely related to a basic concept in modern optics: Fourier optics. The goal of this experiment is
to demonstrate how optical image formation can be treated within the mathematical framework of Fourier
methods. Once the concept is established through a few fundamental experiments, some image processing
and holography will demonstrate the practical applications of the subject.
Part I
Fourier Optics
2 Theory
We will first turn to the theoretical basis of the Fourier methods. You are advised to consult Appendices A
and C for a more complete treatment of the subject.
2.1 Fourier Series
You are probably familiar with Fourier series as a mathematical method. Let us restate their basic theorems.
A periodic function f(t), of angular frequency ω, can be considered as the sum of harmonic functions whose
frequencies are multiples of ω, which is the fundamental frequency – a square wave example is shown in
figure 1. We write:
f(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega t) + b_n \sin(n\omega t) \right].   (1)

Or,

f(t) = \sum_{n=-\infty}^{+\infty} c_n e^{i n\omega t}.   (2)
The amplitude of each term of the series can be calculated using the following relations:
a_0 = \frac{1}{T} \int_0^T f(t)\, dt,   (3)

a_n = \frac{2}{T} \int_0^T f(t) \cos(n\omega t)\, dt,   (4)

b_n = \frac{2}{T} \int_0^T f(t) \sin(n\omega t)\, dt,   (5)

c_n = \frac{1}{T} \int_0^T f(t)\, e^{-i n\omega t}\, dt,   (6)

where T = 2π/ω is the period of f(t).
We define the frequency spectrum as a plot of the coefficients as a function of n.
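As an illustration (a short sketch, not part of the original manual; the period and the number of terms are arbitrary choices), the following Matlab script sums the first few odd harmonics of a square wave, reproducing the behaviour sketched in figure 1:

% fourier_series_demo.m -- partial sums of the Fourier series of a square wave
% For a square wave of period T alternating between -1 and +1, a_n = 0 and
% only the odd sine terms contribute, with b_n = 4/(n*pi).
T = 2*pi;                 % period (arbitrary choice)
w = 2*pi/T;               % fundamental angular frequency
t = linspace(0, 2*T, 2000);
f = zeros(size(t));
for n = 1:2:19            % first ten odd harmonics
    f = f + (4/(n*pi))*sin(n*w*t);
end
plot(t, f);
xlabel('t'); ylabel('f(t)');
title('Partial sum of the Fourier series of a square wave');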
2.2 Fourier Transform
We can get to a more general definition by considering a non-periodic function f(t). In this case, Fourier
series need to be replaced by Fourier integrals. A way to think about the difference between those is to
view a non-periodic function as a periodic function with infinite period. Thus, to express such functions as
Fourier decompositions, it is necessary to sum over a continuous spectrum of frequencies.

Figure 1: Example of the Fourier decomposition of a square wave. The first few components of the Fourier series are plotted, and the sum of the series is plotted at n = 0. The plot of y vs. n represents the frequency spectrum of the square wave.

The sum of (eq. 2) can then be changed to an integral over frequencies ν:

f(t) = \int_{-\infty}^{\infty} F(\nu)\, e^{i 2\pi\nu t}\, d\nu,   (7)

where F(ν) is the equivalent of a Fourier coefficient:

F(\nu) = \int_{-\infty}^{\infty} f(t)\, e^{-i 2\pi\nu t}\, dt.   (8)

The last two equations define the Fourier transform and inverse transform. It is possible to simplify by using
the notation:

F(\nu) = \mathcal{F}[f(t)],
f(t) = \mathcal{F}^{-1}[F(\nu)].
The mathematically inclined might have noticed that the Fourier transform indicates a correspondence
between two spaces: the space of time t and the space of frequencies ν. As we will see later, this experiment
is not concerned with time-dependent phenomena but rather with position-dependent ones. We will study
the correspondence between the spatial position x and the spatial frequency ν_x.
As a side note, it is possible to express the Fourier transform in terms of the angular frequency ω. In such
a case, we normalize by introducing a factor of 1/√(2π):

f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega,

F(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt.
2.3 Bandwidth Theorem
You have learned in your quantum mechanics class that momentum and position form a Fourier transform
pair. You have also learned that the Heisenberg uncertainty principle states that ∆x ∆p ≥ ℏ/2. In fact, this
kind of uncertainty principle can be generalized to any wave phenomenon. In the case of a time-dependent
pulse, it can be shown that1

\frac{\Delta\omega\, \Delta t}{2\pi} = \Delta\nu\, \Delta t \approx 1,   (9)

1 See H. J. Pain. The Physics of Vibrations and Waves. Wiley, Chichester, UK, 2005, pp. 132-135.
where ∆ν and ∆ω are frequency spreads and ∆t is the time interval of a pulse. The relations presented
above are often called the bandwidth theorem. We define ∆ν, ∆ω and ∆t as the full widths at half maximum
(FWHM) of the peaks they are associated with. Also, we consider positive frequencies only, as shown on figure
2. The theorem is only meaningful for non-periodic functions.
Figure 2: Illustration of the quantities involved in the bandwidth theorem.
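As a rough numerical check of eq. (9) (an illustrative sketch only; the pulse width and the sampling parameters are arbitrary choices, not values from the manual), one can compute the FWHM of a Gaussian pulse and of the magnitude of its FFT and verify that the product ∆ν ∆t is of order unity:

% bandwidth_demo.m -- check that the FWHM product (delta nu)(delta t) is of order 1
dt = 1e-3;                        % sampling interval (arbitrary)
t  = (-2048:2047)*dt;             % time axis, 4096 samples
sigma = 0.05;                     % pulse width parameter (arbitrary)
f  = exp(-t.^2/(2*sigma^2));      % Gaussian pulse

F  = abs(fftshift(fft(f)));       % magnitude of the spectrum
nu = (-2048:2047)/(4096*dt);      % frequency axis matching the fftshift ordering

% FWHM of a single-peaked curve: distance between the first and last half-maximum crossings
fwhm = @(x, y) abs(x(find(y >= max(y)/2, 1, 'last')) - x(find(y >= max(y)/2, 1, 'first')));
dT  = fwhm(t, f);                 % FWHM of the pulse
dNu = fwhm(nu, F);                % FWHM of its spectrum
fprintf('FWHM product (delta nu)(delta t) = %.2f\n', dNu*dT);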
2.4 Fourier Optics
Interference
This experiment will study optical wave phenomena that are dependent on interference, or the ability of
several beams of light to interact with each other. For our studies, the light of the beams must be at the
same frequency and must be coherent. The use of a laser is therefore appropriate.
Quantitatively, waves can be described by complex-valued functions, having an amplitude and a phase. A
typical detector returns a response proportional to the intensity of the wave, which is the square of the field.
In the uni-dimensional case, the combination of two waves A = a e^{iα} and B = b e^{iβ} has an intensity given by:

|A + B|^2 = (A + B)(A + B)^*   (10)
          = |A|^2 + |B|^2 + A B^* + A^* B   (11)
          = a^2 + b^2 + 2ab \cos(\alpha - \beta).   (12)
The first two terms of the last line represent the intensities of the individual waves; the last term introduces
a dependence on their relative phase and thus constitutes the interference term.
Given that the waves have the same wavelength λ and are in phase at the source, we can deduce the laws
for constructive and destructive interference.2 Indeed, the intensity is maximal when:

∆x = mλ,   (13)

and is minimal when:

∆x = (m + 1/2)λ,   (14)

where ∆x is the path-length difference between the two waves and m is an integer.
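As a quick check of eqs. (12) to (14) (a sketch with arbitrary illustrative values, not part of the original manual), the two-beam intensity can be plotted against the path-length difference; maxima appear at ∆x = mλ and minima at ∆x = (m + 1/2)λ:

% interference_demo.m -- two-beam intensity as a function of the path-length difference
lambda = 632.8e-9;                 % He-Ne wavelength (m)
a = 1; b = 1;                      % field amplitudes (arbitrary units)
dx = linspace(0, 3*lambda, 600);   % path-length difference
I = a^2 + b^2 + 2*a*b*cos(2*pi*dx/lambda);   % eq. (12), with alpha - beta = 2*pi*dx/lambda
plot(dx/lambda, I);
xlabel('\Delta x / \lambda'); ylabel('intensity (arb. units)');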
Diffraction
In order to understand, or at least gain some intuition about, the theory of Fourier optics, it is necessary
to become familiar with the Huygens-Fresnel principle. According to this principle, each point of a wavefront
(light, mechanical wave, etc.) acts as a point source; the resulting wave at each position is the sum of
each of the resulting “wavelets” at this position. If one takes a small aperture that allows only a fraction
of the wavefront to propagate, the opening acts as a new source. This phenomenon is called diffraction.
Figures 3(a) and 3(b) illustrate this idea when dealing with small and large apertures.

2 See Eugene Hecht. Optics. Addison-Wesley, New York, NY, USA, third edition, 1998.
Figure 3: Illustration of diffraction based on the Huygens-Fresnel principle in the case of small (a) and large (b) apertures.
We can now turn to a more quantitative formulation of the Huygens-Fresnel principle. As you probably
know, a wave can be described as having an amplitude and a complex phase. Furthermore, point sources
emit what are called spherical waves, which decrease in amplitude as the distance increases. Also, points
of the wavefront that are equidistant from the source all have the same phase, since they were emitted
simultaneously. The intensity, or the square of the wave function, should be inversely proportional to the
area of the sphere it covers, and so fall off as 1/r². The resulting wave function is then:

\psi_{\mathrm{spherical}} = \frac{A}{r}\, e^{ikr},   (15)

where k is the wavenumber, i.e. k = 2π/λ, r is the distance from the source and A is an amplitude.
Figure 4: Diffraction of the incident light on an aperture. The observation plane is at a distance z from the aperture.
We are interested in the diffraction pattern produced by an aperture, as in figure 4. A(x, y) is a 2-
dimensional function describing that opening. It takes values varying from 0, when the light intensity is
entirely blocked, to 1, when it is fully transmitted. The Huygens-Fresnel principle suggests that the resulting
diffraction pattern is the sum of the waves produced by point sources at every point of the aperture. The
following integral describes the diffracted wavefront observed at a distance z from the aperture:

A'(x', y') = \frac{z}{i\lambda} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} A(x, y)\, \frac{e^{ikr}}{r^2}\, dx\, dy,   (16)

where r = \sqrt{(x - x')^2 + (y - y')^2 + z^2} is the distance between a given point (x, y) of the aperture and a
point (x', y') of the observation plane. The extra factor of z/r corresponds to the cosine of the angle between
the vectors r and z, since we only sum over the component of the wave in the z direction.3 The above formula is
a special case of the Fresnel-Kirchhoff diffraction formula from scalar diffraction theory.4 The factor of 1/iλ
comes up in the derivation of this formula.
Fraunhofer Diffraction
A possible approximation is to take the observation plane to be far away. Using a Taylor series, in such a case, it is
possible to approximate r such that:

r \approx z\left[1 + \frac{(x - x')^2}{2z^2} + \frac{(y - y')^2}{2z^2}\right]   (17)

\approx z + \frac{x^2 + y^2}{2z} - \frac{x x' + y y'}{z} + \frac{x'^2 + y'^2}{2z}.   (18)

This is the Fresnel approximation.
Furthermore, it is possible to approximate the observation plane to be at infinity; this idea is called the
Fraunhofer approximation. In this case, we assume (x^2 + y^2)/z \to 0 as z \to \infty. The term \frac{x^2 + y^2}{2z} is thus
omitted, so that eq. (16) takes the form:

A'(\nu_x, \nu_y) = \frac{e^{ikz}}{i\lambda z}\, e^{i\pi\lambda z (\nu_x^2 + \nu_y^2)} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} A(x, y)\, e^{-i 2\pi (x\nu_x + y\nu_y)}\, dx\, dy,   (19)

where \nu_x = \frac{x'}{\lambda z} and \nu_y = \frac{y'}{\lambda z}. Since we will only observe the relative intensity of the signal, the terms in front
of the integral can be considered to be a constant c:

A'(\nu_x, \nu_y) = c \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} A(x, y)\, e^{-i 2\pi (x\nu_x + y\nu_y)}\, dx\, dy.   (20)
We recognize the 2-dimensional case of the Fourier transform as defined in eq. (8). In our case, however,
the “time - frequency” correspondence (t - ν) is replaced by a “position - spatial frequency” correspondence
(x - ν_x).
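Equation (20) can be explored numerically before going to the optical bench. The sketch below (the grid size, slit width and spacing are arbitrary choices, not values from the manual) builds an aperture function A(x, y) as a matrix of 0s and 1s and displays the squared magnitude of its 2D FFT, which models the Fraunhofer intensity pattern up to the scaling ν_x = x'/(λz):

% fraunhofer_demo.m -- Fraunhofer pattern as the 2D Fourier transform of an aperture
N = 512;                                   % grid size (arbitrary)
A = zeros(N);                              % aperture function: 0 = opaque, 1 = transparent
A(:, 249:264) = 1;                         % a single vertical slit, 16 pixels wide
% For an amplitude grating, open several equally spaced slits instead, e.g.:
% for k = 0:4, A(:, 200 + 32*k + (0:7)) = 1; end
I = abs(fftshift(fft2(A))).^2;             % Fraunhofer intensity (arbitrary units)
imagesc(log(1 + I)); axis image; colormap gray;
title('log-scaled Fraunhofer diffraction pattern');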
Performing a Fourier Transform Experimentally
In theory, for the Fourier transform of an aperture to be observed, it would be necessary to measure the
signal at a near infinite distance from the opening. Luckily, it is possible to use a converging lens in order
to focus the diffracted light at a finite distance, as shown on figure 5. It is often asserted that the lens itself
performs the Fourier transform; this is not totally accurate. The lens does allow one to observe the pattern
but the transform is a consequence of the diffraction from the aperture.
It is also interesting to note the relation between the position on the transform plane (x') and the spatial
frequency of the aperture function (ν_x). According to eq. (19), the relation is given by ν_x = x'/(λz), where λ
is the wavelength of the laser and z is the distance between the object and the plane. The use of a lens as
shown above stops the divergence of the wave after a distance f. Thus, the proper formula is:

\nu_x = \frac{x'}{\lambda f}.   (21)
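As an illustrative numerical example (the numbers are arbitrary, not taken from the manual): with a He-Ne laser (λ = 632.8 nm) and a lens of focal length f = 50 cm, a first-order diffraction spot observed at x' = 3 mm from the optical axis corresponds, by eq. (21), to a spatial frequency ν_x = x'/(λf) = 0.003 / (632.8 × 10⁻⁹ × 0.5) ≈ 9.5 × 10³ lines/m, i.e. roughly 9.5 lines/mm for the grating.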
3 This situation is analogous to the case you probably encountered in electromagnetism, where one has to calculate the electric
potential at a given distance from a charge distribution. In our case, we are not concerned with a point-charge distribution
σ(x, y)/(4πε_0 r), but rather with a point-source distribution A(x, y) e^{ikr}/(iλr).
4See Thomas Kreis. Handbook of Holographic Interferometry: Optical and Digital Methods. Wiley-VCH, Weinheim,
Germany, 2005.
Figure 5: The image of the Fourier transform of an aperture can be observed using a converging lens of focal length f.
3 Manipulations
Figure 6: Experimental setup for studying the optical Fourier spectrum. L0 is a 10X microscope objective, LC and L1 are
converging lenses (focal lengths fC = 30 cm and f1 = 50 cm). L0 and LC form a beam expander. The object can be any grating
you want to study.
First, we study the lens system presented on figure 6. This arrangement constitutes a frequency analyzer.
When you set up the apparatus, you will notice that the optical table is too small to have every component
in line; you must make use of mirrors. The diffraction patterns (or interference patterns) are measured
with a digital camera. You may take the picture of the reflection of the diffraction pattern using plain
paper as a screen; it is recommended, however, to observe the diffraction pattern by transmission using the
translucent slide that is part of the collection of slides. That way, the camera can be placed on the optical
axis, minimizing distortion. If the diffraction pattern is too small in the picture, then the resolution becomes
limited by the pixel resolution. To minimize this effect, you should arrange for the picture to cover roughly
the size of the translucent screen. This requires that you play with the distance from the camera to the
screen, the zoom value and possibly the use of the “macro” option.
Note that for some distance and zoom values, the camera may not be able to focus on the slide, resulting
in a blurred picture. It is recommended that you use the fully manual control of exposure time and aperture
so as to control possible saturation of some part of the diffraction pattern. You may also want to take
overexposed pictures to visualize the faint regions of the diffraction pattern when measuring the diffraction
of a single slit. It is recommended that you take equivalent pictures with various exposure times and that
you select the best one. Because of the huge contrast difference between the black background and the very
intense diffraction pattern, it is difficult to get a good picture using automatic control. The camera provided
has automatic focus (manual focus is the preferred option). Because the diffraction pattern is often not well
defined, the camera may have difficulty focusing on it. Thus, if you use automatic focus, it is recommended
that you turn on some lights in the room so as to light up the frame of the slide, on which the camera can
focus.
You have to transfer the pictures to a computer. You can perform a quantitative analysis of the picture
using your preferred photo analysis program. In the case of the diffraction patterns of a grating or of a single
slit, you should generate a projection of the diffraction pattern on the axis normal to the diffraction pattern
and calculate the various distances (to be compared to theory) by using a distance calibration that you can
obtain by including some ruler or calibrated graph paper in your pictures of the diffraction pattern.
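One way to carry out such a projection is sketched below in Matlab (the file name and the calibration factor are placeholders to be replaced by your own values; rgb2gray requires the Image Processing Toolbox):

% projection_demo.m -- project a diffraction-pattern picture onto one axis
img = imread('diffraction.jpg');           % placeholder file name
img = double(rgb2gray(img));               % convert to grayscale (skip if already grayscale)
prof = sum(img, 1);                        % sum over rows: intensity profile along the horizontal axis
px2m = 0.10/800;                           % calibration in m/pixel, e.g. a 10 cm ruler spanning 800 pixels
x = (0:numel(prof)-1) * px2m;              % position on the screen, in metres
plot(x, prof);
xlabel('position on screen (m)'); ylabel('summed intensity (arb. units)');
% The spacing between successive maxima can then be read off and compared with eq. (21).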
3.1 Uni-dimensional Periodic Functions
The aperture used is a simple amplitude grating; you may pick any slides presenting such a periodic pattern.
Now, you will measure the spatial frequency of the grating using two methods. Firstly, using a traveling
microscope, take a measurement of the spacing between the grating’s lines and calculate the spatial frequency
in lines/meter (ask a technician where to find the traveling microscope). Secondly, measure the diffraction
pattern of the grating and use equation (21) to determine the fundamental spatial frequency ν.
Repeat the experiment with gratings of different frequency, and try both slides with a grating in one
direction and with a grating in both directions. How does the interference pattern change? You may also
try the grating with a spherical pattern.
3.2 Non-Periodic Functions: Bandwidth Theorem
We are now concerned with non-periodic functions such as a simple slit. These are the optical counterparts of
pulse functions discussed in the introduction. Thus, we should be able to formulate an analogous bandwidth
theorem for the “position - spatial frequency” domain. Find the equivalent of eq. (9) in our case. Show that the
theorem is valid by measuring the diffraction pattern of slits with different widths.
3.3 Inverse Fourier Transform: Image Reconstruction
Figure 7: Experimental setup for studying image reconstruction. LC, L1 and L2 are converging lenses (focal lengths fC = 30 cm,
f1 = 50 cm and f2 = 50 cm). This setup completes the system shown on figure 6.
The Fourier transform has an inverse, which maps a transformed function back to its initial value. That is,
if A'(ν_x, ν_y) = \mathcal{F}[A(x, y)], then A(x, y) = \mathcal{F}^{-1}[A'(ν_x, ν_y)]. The optical system presented on figure 7 performs
both of these tasks. Experiment with the slide of your choice to verify that the reconstruction is valid. This
setup is even longer than the preceding one. You may need to use an extra mirror to make it all fit on the
optical table.
L1 focuses the transform on the plane marked “filter”, while L2 takes the diffracted pattern and reproduces
the initial object on the screen. Note that for this section, the “filter” plane is empty. You should find where
the interference pattern is the sharpest and use that point as your filter plane, to properly set the distance
with the other instruments. In the next section, you will have to position a low-pass or a high-pass filter at that
location.
Part II
Spatial Filtering
4 Introduction
So far, you have experimented with the basics of Fourier optics; it is now time to explore some of its
applications. Image processing by spatial filtering is the first one that you will be studying. This method is
quite simple: the idea is to remove some part of the Fourier transform before transforming the diffraction
pattern back into the image. High and low frequencies occupy different positions on the transform plane,
which gives the freedom to filter out some components of an image.
The origin of the transform plane, where the zero frequency can be found, is on the axis of the optical
system, at the centre of the plane. This fact suggests that frequencies increase radially outward from
this point. A low-pass filter would therefore transmit only the light in the neighborhood of the origin,
and vice-versa for a high-pass filter. Furthermore, the spatial frequencies in different directions are placed
at distinct positions in the transform plane. The measurements are taken in a similar fashion to the first part
of the experiment; use a digital camera.
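Before building physical filters, it can be instructive to simulate the process numerically. The sketch below (the test image and the cutoff radius are arbitrary assumptions; cameraman.tif ships with the Image Processing Toolbox) applies a low-pass and a high-pass mask to the centred 2D FFT of an image and transforms back, which is the numerical analogue of placing a mask in the filter plane:

% spatial_filter_demo.m -- numerical low-pass / high-pass spatial filtering
img = double(imread('cameraman.tif'));     % any grayscale test image
[M, N] = size(img);
F = fftshift(fft2(img));                   % centred spectrum: zero frequency in the middle
[u, v] = meshgrid(1:N, 1:M);
r = sqrt((u - N/2).^2 + (v - M/2).^2);     % radial distance from the zero-frequency spot
lowpass  = r < 20;                         % keep only the low spatial frequencies (cutoff arbitrary)
highpass = ~lowpass;                       % keep only the high spatial frequencies
imgLow  = real(ifft2(ifftshift(F .* lowpass)));
imgHigh = real(ifft2(ifftshift(F .* highpass)));
subplot(1,3,1); imagesc(img);     axis image; title('original');
subplot(1,3,2); imagesc(imgLow);  axis image; title('low-pass');
subplot(1,3,3); imagesc(imgHigh); axis image; title('high-pass');
colormap gray;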
5 Manipulations
5.1 Uni-dimensional Filtering
You will now study the effect of filters on a one dimensional amplitude grating. You should build this filter
using empty slide frames, electric tape or any other material you think is appropriate.
What would you define as a high or a low pass filter in one dimension? Build both of these and study their
effect on the image focused on the screen. Relate your results to the Fourier series. A low pass filter keeps
only the lower frequency components of the image, which carry the most intensity. The high pass filter keeps
only the terms in the Fourier series that “smooth out” the image. Knowing this, what should you expect to
obtain in each case?
5.2 Bi-dimensional filtering
Take a slide that presents an image containing different spatial frequencies, such as slide 14, 20 or 21. Where
are the regions that present higher and lower spatial frequencies?
What would you define as a high or low pass filter in the 2-dimensional case? Build them and observe
their effect. Again, use the decomposition in terms of the Fourier series to interpret your results. Use the
image of a grid such as slide 11. Try to allow a diagonal (45°) row of dots to pass. Based on a geometrical
argument, you should make a prediction about the relation between the spatial frequencies of this image and
the one corresponding to a vertical or horizontal grid. Compare your prediction with experimental results.
Again, you should measure the spatial frequencies by using the camera, as discussed above.
5.3 Image Processing
The goal of this section is to modify images in specific ways by using appropriate filters. Use the cloud
chamber simulation photograph (slide 22). The straight lines correspond to incoming particles in the cloud
chamber experiment, while the curved ones are caused by interactions. In a typical experiment, it would be
convenient to remove the large bands. Your goal is to remove these lines leaving only the curved tracks.
Based on your observations from the previous section, design a filter that will accomplish this task.
Similarly, spatial filtering makes it possible to view any one of several pictures stored on a single slide. Each
picture must be encoded using lines at a particular angle. Ask the lab technician for the AB slide, which
presents a pattern similar to that of figure 8. Note that the letters are represented by lines going in different
directions. Build filters that will enable you to individually reconstruct the letters A and B. (Hint: Since
the object is not very visible on the slide, you should probably block the central spot corresponding to the
zero spatial frequency.)
Figure 8: Schematic representation of the AB slide.
A half-tone image represents gray tones by small dots of different sizes. By filtering, it is possible to
obtain a continuous-tone image. Use slide 24 and experiment with the effect of different filters, such as circular
apertures and slits at different angles. For each of them, try to predict what the image should look like.
Based on your experimental results, build a filter that removes the dots from the picture.
5.4 Supplemental Activities
At this point, you probably have enough experience to perform experiments that you may have come across
in textbooks or that you may design on your own. Here are a few ideas:
- Prove the convolution theorem for the Fourier transform;
- Observe the Fourier transform of different functions, such as circular apertures, multiple slits, etc., and
explain the observed diffraction patterns.
Part III
Holography
6 Introduction
See Appendices B and D for more details on, respectively, the theory and the applications of holography
Holography involves the recording of a wave, including both the amplitude and the phase of the signal.
Another name for this technique is wavefront reconstruction. In simpler words, this means that when a
hologram is reconstructed, an observer can see the same light signal as he or she would have seen during
the recording. Now, a hologram is, by definition, the recording of an interference pattern. As we saw
previously, interference implies an interaction between different coherent waves, in this case an object wave
and a reference wave. An intuitive example of a hologram is presented on figure 9.
Mathematically, each wave can be represented as a complex function. Let U(x, y) = A(x, y) e^{iφ(x,y)} be
the object wave and R_r(x, y) = B(x, y) e^{iψ(x,y)} be the reference wave. The intensity resulting from the sum
of both of these is given by:

I(x, y) = |A(x, y)|^2 + |B(x, y)|^2 + 2 A(x, y) B(x, y) \cos\left(ψ(x, y) - φ(x, y)\right),   (22)

where the last term corresponds to the interference U R_r^* + U^* R_r and includes both A(x, y) and φ(x, y). Thus,
the interference pattern contains all the information about the object wave, including both its amplitude
and its phase.
Figure 9: Graphical illustration of the recording (a) and the reconstruction (b) of a waveform. The hologram is the recording
of the interference pattern between the reference wave and the object wave (a). When the reference beam is shone on the
processed hologram, the object wave is reconstructed (b).
The interference pattern can be stored on a holographic film. The transmission function of the film is
defined as the proportion of incident light that is transmitted, as a function of position. In this case, it is
given by:

t(x, y) = C + β\left(|U|^2 + U R_r^* + U^* R_r\right).   (23)

The resulting waveform, when an illumination beam R is shone on the hologram, can be written as:

R t = U_1 + U_2 + U_3 + U_4,

where

U_1 = C R,
U_2 = β |U|^2 R,
U_3 = β R_r^* R U,
U_4 = β R_r R U^*.

Here, U_3 is the reconstruction of the object wave and is equal to U, the object wave, up to a constant factor.
We call it the virtual image since it can be observed directly. U_4 is proportional to U^*, the conjugate of the
object wave, which means that the wave converges, as is easily deduced from its functional form. It is possible
to focus this image on a screen, hence the name real image. U_3 and U_4 are together called the twin images.
U_1 and U_2 are simple constants.
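To make the origin of the twin images explicit, one can carry out the multiplication for the particular case where the illumination beam is the reference wave itself, R = R_r (a brief worked step using the notation above, not part of the original manual):

R_r t = C R_r + \beta |U|^2 R_r + \beta |R_r|^2 U + \beta R_r^2 U^*.

For a reasonably uniform reference beam, |R_r|^2 = B^2 is approximately constant, so the third term reproduces the object wave U up to a constant factor (the virtual image), while the last term is proportional to U^* (the real image).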
7 Manipulations
This part of the experiment uses holographic film rather than the digital camera.
7.1 Michelson Interferometer
To be successful at making a hologram, one must keep in mind that the interference fringes can be very
close to one another. The stability of the system is thus critical. Air vents must be blocked and the use of
an air table is preferable in order to isolate the setup from surrounding vibrations. It is possible to study
the stability of the optical system by setting up a Michelson interferometer as shown on figure 10. You
can observe the effect of vibrations on the interference pattern. During the exposure, avoid instabilities
resulting in a movement of more than a quarter of a fringe; even better stability is preferable.
When you are satisfied with the setup, you can proceed to constructing the hologram.
Figure 10: Experimental setup for the Michelson interferometer. M1 and M2 are mirrors, BS-5% is a 5% beam splitter, L1 is
a 60X microscope objective and PH is a screen.
7.2 Transmission Hologram
Figure 11: Experimental system for transmission hologram recording. M1 and M2 are mirrors, BS-5% is a 5% beam splitter,
L1 and L2 are 60X microscope objectives, and PH is the photographic plate.
First, roughly lay out the positions of the components as shown in figure 11. When this is done, you
can fine tune the position and orientation of each component. Your goal is to obtain a reference beam
and an object beam that have approximately the same intensity on the screen. To make sure this is the
case, successively block each beam and observe the light on a sheet of white paper placed in front of the
photographic plate. You also want to make sure that the whole photographic plate is spanned by the beams.
If you wish to improve the quality of your hologram, you might want to use a pinhole to “clean up” the
reference beam. The positioning of this component is not trivial; some practice might be necessary. You
need to be careful when handling the holographic film. It is particularly sensitive to red (632 nm)
light; handle it only under a green safety light. Make sure to turn the laser off when loading the film
on the photographic plate. Put the emulsion (sticky) side toward the beams. After you pull out the security
plate, wait a few seconds before turning on the laser. This will allow any vibration to die out. The exposure
time will vary depending on the intensity of your beams. Proper holograms have been recorded in 30 to 45
seconds but your setup may differ slightly. It is preferable to first try a small piece of film as a test.
When you have recorded the hologram, it still needs to be developed. The general procedure goes as
follows: 10 minutes in the D-19 developer, 1 minute in the stopper, 10 minutes in the fixer and about
10 minutes in the wash bath – ask for help from a professor or a technician when you first try it. As it is
being developed, you can monitor the hologram by indirectly shining the green safety light on it. A proper
hologram should be greenish but still mostly translucent. If you see that the hologram becomes very dark
while still in the developer, you can remove it even though 10 minutes have not passed. As soon as the
hologram is in the fixer, you can turn the lights back on; it will not ruin the result.
Figure 12: Illustration of the reconstruction procedure for the virtual image (left) and the real image (right).
After the hologram is dry, you have two ways of reconstructing the wave: putting it back in the reference
beam or having a non-expanded laser beam go through it. Figure 12 illustrates the procedure. At what
angle between the film and the beam are the images easier to observe? Is there a relation between this angle
and the experimental setup used to record the hologram? How does this relate to the theory?
7.3 Optional: Rainbow Hologram
Figure 13: Experimental system for rainbow hologram recording. M1 and M2 are circular mirrors. M3 is a tall mirror. L6 is
a small cylindrical lens while L7 is a tall cylindrical lens. GG is a ground glass. PH is the photographic plate. BS-5% is a 5%
beam splitter.
You might wonder how common holograms are made. In fact, there exists a plethora of hologram types
that can be reconstructed under white light. The setup presented on figure 13 allows one to record what is called
a rainbow hologram, a type of reflection hologram. You may attempt to record one of those by taking even
greater care about stability than in the case of a transmission hologram.
It is important to pick an object that is not too large and that will only block a small portion of light
coming from the ground glass. The type of emulsion used for the transmission hologram is appropriate as
well in this case. The development procedure is the same for this type of hologram but it is preferable
that the film be pale in order to improve reconstruction quality. Hence, a considerably shorter exposure
is appropriate. The best reconstruction results have been achieved in direct sunlight, placing the film in
front of a dark background. It was also observed that the angle at which the hologram is placed is quite
important. You should expect to see the dark silhouette of the object in front of an iridescent background.
You will notice that the colors change depending on the angle at which the film is placed.
7.4 Optional: Digital Hologram
Digital holography is a process by which one records a hologram with a digital camera and then reconstructs
the object wave numerically. It replaces the process of putting the hologram back in the reference beam
with a clever mathematical trick.
Theory
As we saw previously, the information stored on a hologram corresponds to an interference pattern between
a reference beam and an object beam. The intensity is given by:

I(x, y) = |E_P + E_R|^2 = E_P E_P^* + E_R E_R^* + E_P^* E_R + E_P E_R^*,   (24)

where E_P is the object wave and E_R is the reference wave. Each of those is described by a complex-valued
two-dimensional function. Let h(ξ, η) be the function describing the recorded intensity on the hologram plane
(ξ–η) and b(x, y) the complex function corresponding to the object wave. The hologram (ξ–η) and the
object (x–y) planes are separated by a distance d.
The optical reconstruction of the images would require the illumination of the hologram by the reference
beam. It is possible to numerically model this process by multiplying the function h(ξ, η) by the conjugate of
the reference wave, r^*(ξ, η). The latter can be approximated to 1 if the reference wave is planar and parallel
to the hologram plane. The real image is then determined approximately by the inverse Fresnel transform
of this product:5

b(x, y) = e^{\frac{i\pi}{d\lambda}(x^2 + y^2)} \int\!\!\int h(\xi, \eta)\, r^*(\xi, \eta)\, e^{\frac{i\pi}{d\lambda}(\xi^2 + \eta^2)}\, e^{\frac{i 2\pi}{d\lambda}(x\xi + y\eta)}\, d\xi\, d\eta,   (25)

where λ is the wavelength of the laser used. You might have recognized a 2-dimensional Fourier transform
in the previous formula. The only differences are the two following terms:

z(x, y) = e^{\frac{i\pi}{d\lambda}(x^2 + y^2)},   (26)

w(\xi, \eta) = e^{\frac{i\pi}{d\lambda}(\xi^2 + \eta^2)}.   (27)

These are called chirp functions. We can thus rewrite (eq. 25) as:

b = z \cdot \mathcal{F}^{-1}\{h \cdot r^* \cdot w\},   (28)

where \mathcal{F}^{-1} represents the inverse Fourier transform operator. As is usual in numerical applications, the
description we have of h(ξ, η) is discrete and is given by an image matrix. This fact enables us to use the
Fast Fourier Transform algorithm to compute the Fresnel transform.
Reconstruction Algorithm. Given h, an M×N matrix; d, the distance between the camera recording chip
and the object; ∆η and ∆ξ, the vertical and horizontal centre-to-centre distances of the pixels; and λ, the wave-
length of the laser. Then the following method reconstructs numerically the image stored on hologram h:

1. If the reference wave plane is not parallel to the camera, multiply the matrix elements of h by
r = \exp\left(\frac{i 2\pi}{\lambda}\, \eta \sin\theta\right), where θ is the angle between the two.

2. Multiply each element (k, l) of h·r by w(k, l) = \exp\left\{\frac{i\pi}{d\lambda}\left(k^2 \Delta\xi^2 + l^2 \Delta\eta^2\right)\right\}.

3. Compute the inverse 2D Fast Fourier Transform of the resulting matrix.

4. Multiply each element (k, l) of the transform by z(k, l) = \exp\left\{i\pi d\lambda \left(\frac{k^2}{\Delta\xi^2 N^2} + \frac{l^2}{\Delta\eta^2 M^2}\right)\right\}.
5This formula is the inverse version of (eq. 16). The Fresnel approximation is used in place of the Fraunhofer approximation.
Procedure
Figure 14: Experimental setup for digital holography. BS-5% are 5% beam splitters, MO-60X and MO-10X are microscope
objectives of 60X and 10X respectively. L1 is a converging lens (f = 30 cm) and L2 is a diverging lens (f = -30 cm). PH is a pinhole.
The M's are mirrors. CCD is a CCD camera. The object is about 2 cm across.
Figure 15: Picture of the experimental setup for digital holography. (1) Laser; (2) 5% beam splitter; (3) mirror; (4) 60X
microscope objective; (5) pinhole; (6) converging lens (f=30cm); (7) & (8) mirrors; (9) CCD camera; (10) 5% beam splitter;
(11) diverging lens (f=-30cm); (12) 10X microscope objective; (13) object.
In order to simplify the reconstruction algorithm, it is necessary to make sure that the reference beam
is planar and that the wavefront is parallel to the camera’s iris. To obtain a planar beam, use a microscope
objective to enlarge the beam. Then, place a converging lens such that the distance that separates it from
the objective is equal to its focal length. Fine tuning the second beam splitter will enable you to have an
incident beam at a right angle to the camera. The size of the camera recording chip being quite small, you
need the object to have a reduced angular size. You can achieve this by increasing the distance between the
camera and the object. A diverging lens can also be used to save some room on the table (11 on figure 15).
The intensity of the reference and the object beams should be similar, which implies that the illumination
beam must be much more powerful than the reference beam to compensate for the distance.
Finally, when you place the object, you should observe what the camera captures at the same time. At
one point, the interference pattern will be varying a lot, even though the object is stable; this indicates that
the alignment is correct. This method was shown to give some results as presented on figure 16. The object
used was a Teflon nut.
Figure 16: Example of a recorded hologram on the right and of the reconstructed image on the left. The contrast of the
reconstructed image was slightly enhanced with photo-editing software. We can distinguish both the real and virtual images.
7.5 Matlab Implementation of the Reconstruction Algorithm
holo.m
% Digital Hologram Reconstruction Algorithm
% This algorithm is based on the Fresnel transform; feel free to improve the code.
% h: hologram (grayscale image matrix), d: distance CCD-object (in metres)
function [ima] = holo(h, d)
    [M, N] = size(h);
    hd = double(h)/255;              % normalize pixel values to [0, 1]
    ps = 7.4e-6;                     % pixel size (m)
    lw = 632.8e-9;                   % laser wavelength (m)
    mea = mean(mean(hd));            % mean value, used to suppress the DC term
    for m = 1:M
        for n = 1:N
            % DC term filtering
            hd(m,n) = hd(m,n) - mea;
            % Chirp functions (discrete forms of eqs. 26 and 27)
            chirp(m,n) = exp(i*pi*(ps^2*(m-1)^2 + ps^2*(n-1)^2)/d/lw);
            pre(m,n)   = exp(i*pi*(1/ps^2*(m-1)^2/M^2 + 1/ps^2*(n-1)^2/N^2)*d*lw);
        end
    end
    % Reconstruction (eq. 28): inverse FFT of the chirped hologram
    mult = hd.*chirp;
    ima = abs(pre.*ifft2(mult));
    % Display image (showsc is presumably a local display helper;
    % imagesc(ima); axis image; colormap gray; can be used instead)
    showsc(ima);
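A possible way to call this function from the Matlab prompt (the file name and the CCD-to-object distance are placeholders to be replaced by your own values):

% Example call:
h   = imread('hologram.bmp');        % grayscale picture recorded by the CCD camera
ima = holo(h, 0.50);                 % reconstruct with a CCD-object distance of 50 cm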
Minor Changes by Liam Halloran, Aug. 2018
Updated by Gregory Bell, July 2017
Updated by Guillaume Huot, May 2016
