Python Multimedia: Beginner's Guide (2010)

Python Multimedia
Beginner's Guide
Learn how to develop multimedia applications using Python
with this practical step-by-step guide
Ninad Sathaye
Copyright © 2010 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted in any form or by any means, without the prior written permission of the
publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the
information presented. However, the information contained in this book is sold without
warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers
and distributors will be held liable for any damages caused or alleged to be caused directly or
indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the
companies and products mentioned in this book by the appropriate use of capitals.
However, Packt Publishing cannot guarantee the accuracy of this information.
First published: August 2010
Production Reference: 1060810
Published by Packt Publishing Ltd.
32 Lincoln Road
Birmingham, B27 6PA, UK.
ISBN 978-1-849510-16-5
Cover Image by Ed Maclean
Author
Ninad Sathaye
Reviewers
Maurice HT Ling
Daniel Waterworth
Sivan Greenberg
Acquision Editor
Steven Wilding
Development Editor
Eleanor Duy
Technical Editor
Charumathi Sankaran
Hemangini Bari
Tejal Daruwale
Editorial Team Leader
Aanchal Kumar
Project Team Leader
Priya Mukherji
Project Coordinator
Prasad Rai
Lynda Sliwoski
Geetanjali Sawant
Producon Coordinators
Shantanu Zagade
Aparna Bhagat
Cover Work
Aparna Bhagat
About the Author
Ninad Sathaye has more than six years of experience in
software design and development. He is currently working at IBM, India. Prior to working for
IBM, he was a Systems Programmer at Nanorex Inc. based in Michigan, U.S.A. At Nanorex,
he was involved in the development of an open source, interactive 3D CAD software, written
in Python and C. This is where he developed a passion for the Python programming language.
Besides programming, his favorite hobbies are reading and traveling.
Ninad holds a Master of Science degree in Mechanical Engineering from Kansas State
University, U.S.A.
I would like to thank everyone at Packt Publishing, especially Eleanor Duffy,
Steven Wilding, Charu Sankaran, and Prasad Rai for their co-operation.
This book wouldn't have been possible without your help. I also want to
thank all the technical reviewers of the book for their valuable suggestions.
I wish to express my sincere thanks and appreciation to Rahul Nayak, my
colleague, who provided many professional quality photographs for this
book. I owe a special thanks to Mark Sims and Bruce Smith, my former
colleagues, for introducing me to the amusing world of Python. Finally,
this book wouldn't have been possible without the encouragement and
support of my whole family. I owe my loving thanks to my wife, Arati, for
providing valuable feedback. She also happens to be the photographer of
several of the pictures used throughout this book.
About the Reviewers
Maurice HT Ling completed his Ph.D. in Bioinformatics and B.Sc (Hons) in Molecular and
Cell Biology, where he worked on microarray analysis and text mining for protein-protein
interactions. He is currently an Honorary Fellow at The University of Melbourne and
a Lecturer at Singapore Polytechnic where he lectures on microbiology and
computational biology.
Maurice holds several Chief Editorships including The Python Papers, iConcept Journal
of Computational and Mathematical Biology, and Methods and Cases in Computational,
Mathematical, and Statistical Biology. In his free time, Maurice likes to train in the gym,
read, and enjoy a good cup of coffee. He is also a Senior Fellow of the International Fitness
Association, U.S.A.
Daniel Waterworth is a Python fanatic who can often be found behind his keyboard. Having
learned to program from a young age, he is always beavering away on a new project.
He is a keen blogger and shares his ideas on his blog.
Sivan Greenberg is a Forum Nokia Champion, with almost ten years of multi-disciplinary
IT experience and a sharp eye for quality. He started with open source technologies and
the Debian project back in 2002. Joining Ubuntu development two years later, Sivan also
contributed to various other open source projects, such as Plone and Nokia's Maemo.
He has experience with quality assurance, application and web development, UNIX system
administration (including some rather exotic IBM platforms), and GUI programming and
documentation. He's been using Python for all of his development needs for the last five
years. He is currently involved with Nokia's MeeGo project and works with CouchDB and
Python in his day job.
I thank my unique and amazing family, specifically my Dad Eric for igniting
the spark of curiosity from day zero.
To my daughter, Anvita
Table of Contents
Preface 1
Chapter 1: Python and Mulmedia 7
Mulmedia 8
Mulmedia processing 8
Image processing 8
Audio and video processing 10
Compression 10
Mixing 11
Eding 11
Animaons 11
Built-in mulmedia support 12
winsound 12
audioop 12
wave 13
External mulmedia libraries and frameworks 13
Python Imaging Library 13
PyMedia 13
GStreamer 13
Pyglet 14
PyGame 14
Sprite 14
Display 14
Surface 14
Draw 14
Event 15
Image 15
Music 15
Time for acon – a simple applicaon using PyGame 15
QT Phonon 18
Other mulmedia libraries 19
Snack Sound Toolkit 19
PyAudiere 20
Summary 20
Chapter 2: Working with Images 21
Installation prerequisites 21
Python 21
Windows platform 22
Other platforms 22
Python Imaging Library (PIL) 22
Windows platform 22
Other platforms 22
PyQt4 23
Windows platform 23
Other platforms 24
Summary of installation prerequisites 24
Reading and writing images 25
Time for action – image file converter 25
Creating an image from scratch 28
Time for action – creating a new image containing some text 28
Reading images from archive 29
Time for action – reading images from archives 29
Basic image manipulations 30
Resizing 30
Time for action – resizing 30
Rotating 33
Time for action – rotating 34
Flipping 35
Time for action – flipping 35
Capturing screenshots 36
Time for action – capture screenshots at intervals 36
Cropping 39
Time for action – cropping an image 39
Pasting 40
Time for action – pasting: mirror the smiley face! 40
Project: Thumbnail Maker 42
Time for action – play with Thumbnail Maker application 43
Generating the UI code 45
Time for action – generating the UI code 45
Connecting the widgets 47
Time for action – connecting the widgets 48
Developing the image processing code 49
Time for acon – developing image processing code 49
Summary 53
Chapter 3: Enhancing Images 55
Installation and download prerequisites 56
Adjusting brightness and contrast 56
Time for action – adjusting brightness and contrast 56
Tweaking colors 59
Time for action – swap colors within an image! 59
Changing individual image band 61
Time for action – change the color of a flower 61
Gray scale images 63
Cook up negatives 64
Blending 65
Time for action – blending two images 65
Creating transparent images 68
Time for action – create transparency 68
Making composites with image mask 70
Time for action – making composites with image mask 71
Project: Watermark Maker Tool 72
Time for action – Watermark Maker Tool 73
Applying image filters 81
Smoothing 82
Time for action – smoothing an image 82
Sharpening 84
Blurring 84
Edge detection and enhancements 85
Time for action – detecting and enhancing edges 85
Embossing 87
Time for action – embossing 87
Adding a border 88
Time for action – enclosing a picture in a photoframe 89
Summary 90
Chapter 4: Fun with Animaons 91
Installaon prerequisites 92
Pyglet 92
Windows plaorm 92
Other plaorms 92
Summary of installaon prerequisites 93
Tesng the installaon 93
A primer on Pyglet 94
Important components 94
Window 94
Image 95
Sprite 95
Animation 95
AnimationFrame 95
Clock 95
Displaying an image 96
Mouse and keyboard controls 97
Adding sound effects 97
Animations with Pyglet 97
Viewing an existing animation 97
Time for action – viewing an existing animation 98
Animation using a sequence of images 100
Time for action – animation using a sequence of images 100
Single image animation 102
Time for action – bouncing ball animation 102
Project: a simple bowling animation 108
Time for action – a simple bowling animation 108
Animations using different image regions 113
Time for action – raindrops animation 114
Project: drive on a rainy day! 117
Time for action – drive on a rainy day! 118
Summary 122
Chapter 5: Working with Audios 123
Installation prerequisites 123
GStreamer 124
Windows platform 124
Other platforms 125
PyGobject 125
Windows platform 125
Other platforms 125
Summary of installation prerequisites 126
Testing the installation 127
A primer on GStreamer 127
gst-inspect and gst-launch 128
Elements and pipeline 128
Plugins 129
Bins 129
Pads 130
Dynamic pads 130
Ghost pads 131
Caps 131
Bus 131
Playbin/Playbin2 131
Playing music 132
Time for action – playing an audio: method 1 133
Building a pipeline from elements 137
Time for action – playing an audio: method 2 138
Playing an audio from a website 141
Converting audio file format 142
Time for action – audio file format converter 142
Extracting part of an audio 150
The Gnonlin plugin 151
Time for action – MP3 cutter! 152
Recording 156
Time for action – recording 157
Summary 160
Chapter 6: Audio Controls and Effects 161
Controlling playback 161
Play 162
Pause/resume 162
Time for action – pause and resume a playing audio stream 162
Stop 165
Fast-forward/rewind 166
Project: extract audio using playback controls 166
Time for action – MP3 cutter from basic principles 167
Adjusting volume 173
Time for action – adjusting volume 173
Audio effects 175
Fading effects 175
Time for action – fading effects 176
Echo echo echo... 179
Time for action – adding echo effect 179
Panning/panorama 182
Project: combining audio clips 183
Media 'timeline' explained 184
Time for action – creating custom audio by combining clips 185
Audio mixing 194
Time for action – mixing audio tracks 194
Visualizing an audio track 196
Time for action – audio visualizer 196
Summary 199
Chapter 7: Working with Videos 201
Installation prerequisites 202
Playing a video 203
Time for action – video player! 203
Playing video using 'playbin' 208
Video format conversion 209
Time for action – video format converter 209
Video manipulations and effects 215
Resizing 215
Time for action – resize a video 216
Cropping 217
Time for action – crop a video 218
Adjusting brightness and contrast 219
Creating a gray scale video 220
Adding text and time on a video stream 220
Time for action – overlay text on a video track 220
Separating audio and video tracks 223
Time for action – audio and video tracks 223
Mixing audio and video tracks 226
Time for action – audio/video track mixer 226
Saving video frames as images 230
Time for action – saving video frames as images 230
Summary 235
Chapter 8: GUI-based Media Players Using QT Phonon 237
Installation prerequisites 238
PyQt4 238
Summary of installation prerequisites 238
Introduction to QT Phonon 238
Main components 239
Media graph 239
Media object 239
Sink 239
Path 239
Effects 239
Backends 239
Modules 240
MediaNode 240
MediaSource 240
MediaObject 240
Path 240
AudioOutput 241
Effect 241
VideoPlayer 241
SeekSlider 241
VolumeSlider 241
Project: GUI-based music player 241
GUI elements in the music player 242
Generating the UI code 243
Time for action – generating the UI code 243
Connecting the widgets 247
Time for action – connecting the widgets 247
Developing the audio player code 249
Time for action – developing the audio player code 250
Project: GUI-based video player 257
Generating the UI code 258
Time for action – generating the UI code 258
Connecting the widgets 260
Developing the video player code 261
Time for action – developing the video player code 261
Summary 264
Index 265
Mulmedia applicaons are used in a broad spectrum of elds. Wring applicaons that
work with images, videos, and other sensory eects is great. Not every applicaon gets
to make full use of audio/visual eects, but a certain amount of mulmedia makes any
applicaon very appealing.
This book is all about mulmedia processing using Python. This step by step guide gives
you a hands-on experience with developing excing mulmedia applicaons. You will build
applicaons for processing images, creang 2D animaons and processing audio and video.
There are numerous mulmedia libraries for which Python bindings are available. These
libraries enable working with dierent kinds of media, such as images, audio, video, games,
and so on. This book introduces the reader to some of these (open source) libraries through
several implausibly excing projects. Popular mulmedia frameworks and libraries, such
as GStreamer, Pyglet, QT Phonon, and Python Imaging library are used to develop various
mulmedia applicaons.
What this book covers
Chapter 1, Python and Multimedia, teaches you a few things about popular multimedia
frameworks for multimedia processing using Python and shows you how to develop a
simple interactive application using PyGame.
Chapter 2, Working with Images, explains basic image conversion and manipulation
techniques using the Python Imaging Library. With the help of several examples and code
snippets, we will perform some basic manipulations on images, such as pasting an image
onto another, resizing, rotating/flipping, cropping, and so on. We will write tools to capture
a screenshot and convert image files between different formats. The chapter ends with
an exciting project where we develop an image processing application with a graphical
user interface.
Chapter 3, Enhancing Images, describes how to add special effects to an image using the
Python Imaging Library. You will learn techniques to enhance digital images using image
filters, for example, reducing 'noise' from a picture, smoothing and sharpening images,
embossing, and so on. The chapter will cover topics such as selectively changing the colors
within an image. We will develop some exciting utilities for blending images together,
adding transparency effects, and creating watermarks.
Chapter 4, Fun with Animations, introduces you to the fundamentals of developing
animations using Python and the Pyglet multimedia application development framework.
We will work on some exciting projects such as animating a fun car out for a ride in a
thunderstorm, a 'bowling animation' with keyboard controls, and so on.
Chapter 5, Working with Audios, gives you a primer on the GStreamer multimedia framework
and shows how to use this API for audio and video processing. In this chapter, we will develop
some simple audio processing tools for 'everyday use', such as a command-line audio player,
a file format converter, an MP3 cutter, and an audio recorder.
Chapter 6, Audio Controls and Effects, describes how to develop tools for adding audio effects,
mixing audio tracks, creating custom music tracks, visualizing an audio track, and so on.
Chapter 7, Working with Videos, explains the fundamentals of video processing. This
chapter will cover topics such as converting video between different video formats, mixing
or separating audio and video tracks, saving one or more video frames as still images, and
performing basic video manipulations such as cropping, resizing, adjusting brightness,
and so on.
Chapter 8, GUI-based Media Players using QT Phonon, takes you through the fundamental
components of the QT Phonon framework. We will use QT Phonon to develop audio and
video players with a graphical user interface.
Who this book is for
This book is for Python developers who want to dip their toes into working with images,
animations, and audio and video processing using Python.
In this book, you will find several headings appearing frequently.
To give clear instructions on how to complete a procedure or task, we use:
Time for action – heading
1. Acon 1
2. Acon 2
3. Acon 3
Instrucons oen need some extra explanaon so that they make sense, so they are
followed with:
What just happened?
This heading explains the working of tasks or instrucons that you have just completed.
You will also nd some other learning aids in the book, including:
Pop quiz – heading
These are short mulple choice quesons intended to help you test your own understanding.
Have a go hero – heading
These set praccal challenges and give you ideas for experimenng with what you
have learned.
You will also nd a number of styles of text that disnguish between dierent kinds of
informaon. Here are some examples of these styles, and an explanaon of their meaning.
Code words in text are shown as follows: "The diconary self.addedEffects keeps track
of all the audio."
A block of code is set as follows:
def __init__(self):
    self.constructPipeline()
    self.is_playing = False
    self.connectSignals()
When we wish to draw your attention to a particular part of a code block, the relevant lines
or items are set in bold:
def constructPipeline(self):
    self.pipeline = gst.Pipeline()
    self.filesrc = gst.element_factory_make(
        "gnlfilesource")
Any command-line input or output is written as follows:
>>> import pygst
New terms and important words are shown in bold. Words that you see on the screen, in
menus or dialog boxes for example, appear in the text like this: "You will need to tweak the
Effects menu UI and make some other changes in the code to keep track of the added effects."
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about this
book: what you liked or may have disliked. Reader feedback is important for us to
develop titles that you really get the most out of.
To send us general feedback, simply send us an e-mail and mention the book title in the
subject of your message.
If there is a book that you need and would like to see us publish, please send us a note via
the SUGGEST A TITLE form or by e-mail.
If there is a topic that you have expertise in and you are interested in either writing or
contributing to a book, see our author guide.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to help you
to get the most from your purchase.
Downloading the example code for this book
You can download the example code files for all Packt books you have purchased from your
account. If you purchased this book elsewhere, you can register to have the files e-mailed
directly to you.
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes do happen.
If you find a mistake in one of our books (maybe a mistake in the text or the code), we
would be grateful if you would report this to us. By doing so, you can save other readers from
frustration and help us improve subsequent versions of this book. If you find any errata, please
report them by visiting the errata page, selecting your book, clicking on the errata submission
form link, and entering the details of your errata. Once your errata are verified, your submission
will be accepted and the errata will be uploaded on our website, or added to any list of existing
errata, under the Errata section of that title. Any existing errata can be viewed by selecting
your title.
Piracy
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt,
we take the protection of our copyright and licenses very seriously. If you come across any
illegal copies of our works, in any form, on the Internet, please provide us with the location
address or website name immediately so that we can pursue a remedy.
Please contact us with a link to the suspected pirated material.
We appreciate your help in protecting our authors, and our ability to bring you
valuable content.
Questions
You can contact us if you are having a problem with any aspect of the book, and we will
do our best to address it.
Python and Multimedia
Since its concepon in 1989, Python has gained increasing popularity as a
general purpose programming language. It is a high-level, object-oriented
language with a comprehensive standard library. The language features
such as automac memory management and easy readability have aracted
the aenon of a wide range of developer communies. Typically, one can
develop complex applicaons in Python very quickly compared to some other
languages. It is used in several open source as well as commercial scienc
modeling and visualizaon soware packages. It has already gained popularity
in industries such as animaon and game development studios, where the focus
is on mulmedia applicaon development. This book is all about mulmedia
processing using Python.
In this introductory chapter, we shall:
Learn about mulmedia and mulmedia processing
Discuss a few popular mulmedia frameworks for mulmedia processing
using Python
Develop a simple interacve applicaon using PyGame
So let's get on with it.
Python and Mulmedia
[ 8 ]
We use mulmedia applicaons in our everyday lives. It is mulmedia that we deal
with while watching a movie or listening to a song or playing a video game. Mulmedia
applicaons are used in a broad spectrum of elds. Mulmedia has a crucial role to play
in the adversing and entertainment industry. One of the most common usages is to add
audio and video eects to a movie. Educaonal soware packages such as a ight or a drive
simulator use mulmedia to teach various topics in an interacve way.
So what really is mulmedia? In general, any applicaon that makes use of dierent sources
of digital media is termed as a digital mulmedia. A video, for instance, is a combinaon
of dierent sources or contents. The contents can be an audio track, a video track, and a
subtle track. When such video is played, all these media sources are presented together
to accomplish the desired eect.
A mulchannel audio can have a background music track and a lyrics track. It may even
include various audio eects. An animaon can be created by using a bunch of digital images
that are displayed quickly one aer the other. These are dierent examples of mulmedia.
In the case of computer or video games, another dimension is added to the applicaon,
the user interacon. It is oen termed as an interacve type of mulmedia. Here, the users
determine the way the mulmedia contents are presented. With the help of devices such as
keyboard, mouse, trackball, joysck, and so on, the users can interacvely control the game.
Multimedia processing
We discussed some of the application domains where multimedia is extensively used.
The focus of this book will be on multimedia processing, using which various multimedia
applications will be developed.
Image processing
Aer taking a snap with a digital camera, we oen tweak the original digital image for
various reasons. One of the most common reasons is to remove blemishes from the image,
such as removing 'red-eye' or increasing the brightness level if the picture was taken in
insucient light, and so on. Another reason for doing so is to add special eects that
give a pleasing appearance to the image. For example, making a family picture black and
white and digitally adding a frame around the picture gives it a nostalgic eect. The next
illustraon shows an image before and aer the enhancement. Somemes, the original
image is modied just to make you understand important informaon presented by the
image. Suppose the picture represents a complicated assembly of components. One can add
special eects to the image so that only edges in the picture are shown as highlighted. This
Chapter 1
[ 9 ]
informaon can then be used to detect, for instance, interference between the components.
Thus, we digitally process the image further unl we get the desired output image.
An example where a border is added around an image to change its appearance is as follows:
Digital image processing can be viewed as the application of various algorithms/filters on
the image data. One example is an image smoothing filter. Image smoothing means
reducing the noise from the image. Random changes in brightness and color levels within
the image data are typically referred to as image noise. Smoothing algorithms modify
the input image data so that this noise is reduced in the resultant image.
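To make the idea concrete, here is a toy mean (box) filter in pure Python. The function name, the 3x3 kernel, and the sample pixel values are illustrative assumptions for this sketch; it is not the algorithm PIL uses internally.

```python
def box_smooth(pixels):
    """Smooth a grayscale image (a list of rows) with a 3x3 mean filter.

    Border pixels are left unchanged to keep the sketch short.
    """
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Replace each interior pixel with the average of its
            # 3x3 neighborhood, damping sudden jumps (noise).
            total = sum(pixels[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9
    return out

# A bright noise spike in a flat gray region is pulled toward its
# neighbors: (8 * 10 + 90) // 9 == 18.
noisy = [[10, 10, 10],
         [10, 90, 10],
         [10, 10, 10]]
print(box_smooth(noisy)[1][1])  # 18
```

Real smoothing filters differ mainly in the weights they give the neighbors; a Gaussian blur, for instance, weights nearby pixels more heavily than distant ones.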
Another commonly performed image processing operation is blending. As the name
suggests, blending means mixing two compatible images to create a new image. Typically,
the data of the two input images is interpolated using a constant value of alpha to produce
a final image. The next illustration shows the two input images and the resultant image
after blending. In the coming chapters we will learn several such digital image
processing techniques.
The pictures of the bridge and the flying birds were taken at different locations. Using image
processing techniques, these two images can be blended together so that they appear as a
single picture:
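The alpha interpolation just described boils down to one formula per pixel value: out = (1 - alpha) * p1 + alpha * p2, which is what PIL's Image.blend() applies per channel. The pixel values below are made up for illustration:

```python
def blend(pixels1, pixels2, alpha):
    """Interpolate two equal-length pixel sequences with a constant alpha."""
    return [round((1 - alpha) * p1 + alpha * p2)
            for p1, p2 in zip(pixels1, pixels2)]

bridge = [200, 100, 50]   # illustrative pixel values from one image
birds = [100, 200, 150]   # illustrative pixel values from the other
# With alpha = 0.5 every output value lands midway between the inputs.
print(blend(bridge, birds, 0.5))  # [150, 150, 100]
```

An alpha of 0 reproduces the first image and an alpha of 1 reproduces the second; values in between mix the two.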
Python and Mulmedia
[ 10 ]
Audio and video processing
When you are listening to music on your computer, your music player is doing several things
in the background. It processes the digital media data so that it can be transformed into a
playable format that an output media device, such as an audio speaker, requires. The media
data flows through a number of interconnected media handling components before it
reaches a media output device or a media file to which it is written. This is shown in the
next illustration.
The following image shows a media data processing pipeline:
Audio and video processing encompasses a number of things. Some of them are briefly
discussed in this section. In this book, we will learn various audio-video processing
techniques using Python bindings of the GStreamer multimedia framework.
Compression
If you record footage on your camcorder and then transfer it to your computer, it will
take up a lot of space. In order to save those moments on a VCD or a DVD, you almost
always have to compress the audio-video data so that it occupies less space. There are two
types of audio and video compression: lossy and lossless. Lossy compression is very
common. Here, some data is assumed unnecessary and is not retained in the compressed
media. For example, in a lossy video compression, even if some of the original data is lost,
it has much less impact on the overall quality of the video. On the other hand, in lossless
compression, the data of a compressed audio or video perfectly matches the original data.
The compression ratio, however, is very low. As we go along, we will write audio-video data
conversion utilities to compress the media data.
Mixing
Mixing is a way to create composite media using more than one media source. In the case of
audio mixing, the audio data from different sources is combined into one or more audio
channels. For example, it can be used to add an audio effect, or to synchronize separate
music and lyrics tracks. In the coming chapters, we will learn more about the media mixing
techniques used with Python.
Editing
Media mixing can be viewed as a type of media editing. Media editing can be broadly divided
into linear editing and non-linear editing. In linear editing, the programmer doesn't control
the way media is presented, whereas in non-linear editing, editing is done interactively. This
book will cover the basics of media editing. For example, we will learn how to create a new
audio track by combining portions of different audio files.
An animaon can be viewed as an opcal illusion of moon created by displaying a
sequence of image frames one aer the other. Each of these image frames is slightly
dierent from the previously displayed one. The next illustraon shows animaon
frames of a 'grandfather's clock':
As you can see, there are four image frames in a clock animaon. These frames are
quickly displayed one aer the other to achieve the desired animaon eect. Each of
these images will be shown for 0.25 seconds. Therefore, it simulates the pendulum
oscillaon of one second.
Python and Mulmedia
[ 12 ]
Cartoon animaon is a classic example of animaon. Since its debut in the early tweneth
century, animaon has become a prominent entertainment industry. Our focus in this
book will be on 2D cartoon animaons built using Python. In Chapter 4, we will learn some
techniques to build such animaons. Creang a cartoon character and bringing it to 'life' is
a laborious job. Unl the late 70s, most of the animaons and eects were created without
the use of computers. In today's age, much of the image creaon work is produced digitally.
The state-of-the-art technology makes this process much faster. For example, one can apply
image transformaons to display or move a poron of an image, thereby avoiding the need
to create the whole cartoon image for the next frame.
Built-in multimedia support
Python has a few built-in multimedia modules for application development. We will skim
through some of these modules.
winsound
The winsound module is available on the Windows platform. It provides an interface that
can be used to implement fundamental audio-playing elements in the application. A sound
can be played by calling PlaySound(sound, flags). Here, the argument sound is used
to specify the path of an audio file. If this parameter is specified as None, the presently
streaming audio (if any) is stopped. The second argument specifies whether the file to be
played is a sound file or a system sound. The following code snippet shows how to play a
wave-formatted audio file using the winsound module.
from winsound import PlaySound, SND_FILENAME
PlaySound("C:/AudioFiles/my_music.wav", SND_FILENAME)
This plays the sound file specified by the first argument to the function PlaySound. The
second argument, SND_FILENAME, says that the first argument is an audio file. If the flag
is set as SND_ALIAS, it means the value for the first argument is a system sound from
the registry.
audioop
This module is used for manipulating raw audio data. One can perform several useful
operations on sound fragments. For example, it can find the minimum and maximum values
of all the samples within a sound fragment.
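The kind of min/max query that audioop.minmax() answers can also be reproduced with the standard struct module. The helper below is a hypothetical sketch, assuming little-endian signed PCM samples:

```python
import struct

def sample_minmax(fragment, width=2):
    """Return (min, max) over the samples in a raw signed PCM fragment.

    width is the sample size in bytes: 1, 2, or 4.
    """
    fmt = {1: 'b', 2: 'h', 4: 'i'}[width]
    count = len(fragment) // width
    samples = struct.unpack('<%d%s' % (count, fmt), fragment)
    return min(samples), max(samples)

# Four 16-bit samples: 1, 5, -3, and 2.
fragment = struct.pack('<4h', 1, 5, -3, 2)
print(sample_minmax(fragment))  # (-3, 5)
```

In practice, audioop works directly on the frame bytes returned by the wave module, so the two are often used together.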
wave
The wave module provides an interface to read and write audio files in the WAV file format.
The following lines of code open a wav file:
import wave
fil = wave.open('horn.wav', 'r')
The first argument of the method wave.open is the path to the wave file. The second
argument is the mode in which the audio file is opened: 'r' or 'rb' for read-only mode and
'w' or 'wb' for write-only mode. Opening a file in read mode returns a Wave_read object.
External multimedia libraries and frameworks
There are several open source multimedia frameworks available for multimedia application
development. The Python bindings for most of these are readily available. We will discuss
a few of the most popular multimedia frameworks here. In the chapters that follow, we will
make use of many of these libraries to create some useful multimedia applications.
Python Imaging Library
The Python Imaging Library provides image processing functionality in Python. It supports
several image formats. Later in this book, a number of image processing techniques using PIL
will be discussed thoroughly. We will learn things such as image format conversion and various
image manipulation and enhancement techniques using the Python Imaging Library.
PyMedia
PyMedia is a popular open source media library that supports audio/video manipulation of
a wide range of multimedia formats.
GStreamer
This framework enables multimedia manipulation. It is a framework on top of which one
can develop multimedia applications. The rich set of libraries it provides makes it easier
to develop applications with complex audio/video processing capabilities. GStreamer is
written in the C programming language and provides bindings for some other programming
languages, including Python. Several open source projects use the GStreamer framework to
develop their own multimedia applications. Comprehensive documentation is available on
the GStreamer project website. The GStreamer Application Development Manual is a very good
starting point. This framework will be used extensively later in this book to develop audio
and video applications.
Python and Mulmedia
[ 14 ]
Interested in animations and gaming applications? Pyglet is here to help. Pyglet provides
an API for developing multimedia applications using Python. It is an OpenGL-based library
that works on multiple platforms. It is one of the popular multimedia frameworks for the
development of games and other graphically intense applications. It supports the multiple
monitor configurations typically needed for gaming application development. Later in this
book, we will be extensively using this Pyglet framework for creating animations.
PyGame is another very popular open source framework that provides
an API for gaming application development needs. It provides a rich set of graphics and
sound libraries. We won't be using PyGame in this book. But since it is a prominent
multimedia framework, we will briefly discuss some of its most important modules and
work out a simple example. The PyGame website provides ample resources on the use of
this framework for animation and game programming.
The Sprite module contains several classes; out of these, Sprite and Group are the
most important. Sprite is the super class of all the visible game objects. A Group object
is a container for several instances of Sprite.
As the name suggests, the Display module has functionality dealing with the display. It is
used to create a Surface instance for displaying the PyGame window. Some of the important
methods of this module include flip and update. The former is called to make sure that
everything drawn is properly displayed on the screen, whereas the latter is used if you
just want to update a portion of the screen.
This module is used to display an image. An instance of Surface represents an image. The
following line of code creates such an instance.
surf = pygame.display.set_mode((800,600))
The API method display.set_mode is used to create this instance. The width and height
of the window are specified as arguments to this method.
With the Draw module, one can render several basic shapes within the Surface. Examples
include circles, rectangles, lines, and so on.
The Event module is another important module of PyGame. An event is said to occur when,
for instance, the user clicks a mouse button or presses a key. The event information is used
to instruct the program to execute in a certain way.
The Image module is used to process images with different file formats. The loaded image is
represented by a Surface.
The Music module provides convenient methods for controlling playback, such as play,
rewind, stop, and so on.
The following is a simple program that highlights some of the fundamental concepts
of animation and game programming. It shows how to display objects in an application
window and then interactively modify their positions. We will use PyGame to accomplish
this task. Later in this book, we will use a different multimedia framework, Pyglet, for
creating animations.
Time for action – a simple application using PyGame
This example will make use of the modules we just discussed. For this application to
work, you will need to install PyGame. Binary and source distributions of PyGame are
available on PyGame's website.
1. Create a new Python source file and write the following code in it.
1 import pygame
2 import sys
4 pygame.init()
5 bgcolor = (200, 200, 100)
6 surf = pygame.display.set_mode((400,400))
8 circle_color = (0, 255, 255)
9 x, y = 200, 300
10 circle_rad = 50
12 pygame.display.set_caption("My Pygame Window")
14 while True:
15 for event in pygame.event.get():
Python and Mulmedia
[ 16 ]
16 if event.type == pygame.QUIT:
17 sys.exit()
18 elif event.type == pygame.KEYDOWN:
19 if event.key == pygame.K_UP:
20 y -= 10
21 elif event.key == pygame.K_DOWN:
22 y += 10
23 elif event.key == pygame.K_RIGHT:
24 x += 10
25 elif event.key == pygame.K_LEFT:
26 x -= 10
28 circle_pos = (x, y)
30 surf.fill(bgcolor)
31, circle_color ,
32 circle_pos , circle_rad)
33 pygame.display.flip()
2. The rst line imports the pygame package. On line 4, the modules within this
pygame package are inialized. An instance of class Surface is created using
display.set_mode method. This is the main PyGame window inside which the
images will be drawn. To ensure that this window is constantly displayed on the
screen, we need to add a while loop that will run forever, unl the window is
closed by the user. In this simple applicaon everything we need is placed inside the
while loop. The background color of the PyGame window represented by object
surf is set on line 30.
3. A circular shape is drawn in the PyGame surface by the code on line 31. The
arguments to are (Surface, color, position, radius). This
creates a circle at the position specified by the argument circle_pos. The instance
of class Surface is sent as the first argument to this method.
4. The code block on lines 16-26 captures certain events. An event occurs when, for
instance, a mouse button or a key is pressed. In this example, we instruct the program
to do certain things when the arrow keys are pressed. When the RIGHT arrow key is
pressed, the circle is drawn with its x coordinate offset by 10 pixels from the previous
position. As a result, the circle appears to move towards the right whenever you
press the RIGHT arrow key. When the PyGame window is closed, the pygame.QUIT
event occurs. Here, we simply exit the application by calling sys.exit(), as done
on line 17.
5. Finally, we need to ensure that everything drawn on the Surface is visible. This is
accomplished by the code on line 33. If you disable this line, incompletely drawn
images may appear on the screen.
6. Execute the program from a terminal window. It will show a new graphics window
containing a circular shape. If you press the arrow keys on the keyboard, the circle
will move in the direction indicated by the arrow key. The next illustration shows
screenshots of the original circle position (left) and after it is moved using the UP
and RIGHT arrow keys.
A simple PyGame application with a circle drawn within the Surface (window).
The image on the right side is a screenshot taken after maneuvering the position
of the circle with the help of arrow keys:
What just happened?
We used PyGame to create a simple user-interactive application. The purpose of this
example was to introduce some of the basic concepts behind animation and game
programming. It was just a preview of what is coming next! Later in this book we
will use the Pyglet framework to create some interesting 2D animations.
Python and Mulmedia
[ 18 ]
QT Phonon
When one thinks of a media player, it is almost always associated with a graphical user
interface. Of course, one can work with command-line multimedia players, but a media
player with a GUI is a clear winner, as it provides an easy-to-use, intuitive user interface
for streaming media and controlling playback. The next screenshot shows the user interface
of an audio player developed using QT Phonon.
An Audio Player application developed with QT Phonon:
QT is an open source GUI framework. 'Phonon' is a multimedia package within QT that
supports audio and video playback. Note that Phonon is meant for simple media player
functionality. For complex audio/video player functionality, you should use multimedia
frameworks like GStreamer. Phonon depends on a platform-specific backend for media
processing. For example, on the Windows platform, the backend framework is DirectShow.
The supported functionality may vary depending on the platform.
To develop a media processing application, a media graph is created in Phonon. This media
graph contains various interlinked media nodes. Each media node does a portion of the media
processing. For example, an effects node will add an audio effect, such as echo, to the media.
Another node will be responsible for outputting the media to an audio or video device,
and so on. In Chapter 8, we will develop audio and video player applications using the Phonon
framework. The next illustration shows a video player streaming a video. It is developed
using QT Phonon. We will be developing this application in Chapter 8.
Using the various built-in modules of QT Phonon, it is very easy to create GUI-based audio
and video players. This example shows a video player in action:
Other multimedia libraries
Python bindings for several other multimedia libraries are available on various platforms.
Some of the popular libraries are mentioned below.
Snack Sound Toolkit
Snack is an audio toolkit that is used to create cross-platform audio applications.
It includes audio analysis and input-output functionality, and it has support for
audio visualization as well.
Python and Mulmedia
[ 20 ]
PyAudiere is an open source audio library. It provides an API
to easily implement audio functionality in various applications. It is based on the Audiere
Sound Library.
This chapter served as an introduction to multimedia processing using Python.
Specifically, in this chapter we covered:
An overview of multimedia processing. It introduced us to digital image, audio, and
video processing.
We learned about a number of freely available multimedia frameworks that can be
used for multimedia processing.
Now that we know what multimedia libraries and frameworks are out there, we're ready to
explore these to develop exciting multimedia applications!
Working with Images
In this chapter, we will learn basic image conversion and manipulation
techniques using the Python Imaging Library. The chapter ends with an exciting
project where we create an image processing application.
In this chapter, we shall:
Learn various image I/O operations for reading and writing images using the Python
Imaging Library (PIL)
With the help of several examples and code snippets, perform some basic
manipulations on the image, such as resizing, rotating/flipping, cropping,
pasting, and so on
Write an image-processing application by making use of PIL
Use the QT library as a frontend (GUI) for this application
So let's get on with it!
Installation prerequisites
Before we jump into the main chapter, it is necessary to install the following packages.
In this book we will use Python Version 2.6, or to be more specific, Version 2.6.4.
It can be downloaded from the following location:
Windows platform
For Windows, just download and install the platform-specific binary distribution of
Python 2.6.4.
Other platforms
For other platforms, such as Linux, Python is probably already installed on your machine.
If the installed version is not 2.6, build and install it from the source distribution. If you are
using a package manager on a Linux system, search for Python 2.6. It is likely that you will
find the Python distribution there. Then, for instance, Ubuntu users can install Python from
the command prompt as:
$sudo apt-get install python2.6
Note that for this, you must have administrative permission on the machine on which you
are installing Python.
Python Imaging Library (PIL)
We will learn image-processing techniques by making extensive use of the Python Imaging
Library (PIL) throughout this chapter. As mentioned in Chapter 1, PIL is an open source
library. You can download it from the PIL website.
Install PIL Version 1.1.6 or later.
Windows platform
For Windows users, installation is straightforward: use the binary distribution PIL 1.1.6 for
Python 2.6.
Other platforms
For other platforms, install PIL 1.1.6 from the source. Carefully review the README file in
the source distribution for the platform-specific instructions. The libraries listed in the
following table are required to be installed before installing PIL from the source. For some
platforms like Linux, the libraries provided with the OS should work fine. However, if those
do not work, install a pre-built "libraryName-devel" version of the library. For example, for
JPEG support, the name will contain "jpeg-devel-", and something similar for the others.
This is generally applicable to rpm-based distributions. For Linux flavors like Ubuntu, you
can use the following command in a shell window.
$sudo apt-get install python-imaging
Chapter 2
[ 23 ]
However, you should make sure that this installs Version 1.1.6 or later. Check the PIL
documentation for further platform-specific instructions. For Mac OS X, see if you can use
fink to install these libraries. You can also check websites such as DarwinPorts to see if a
binary package installer is available. If such a pre-built version is not available for any
library, install it from the source.
The PIL prerequisites for installing PIL from source are listed in the following table:

Library: libjpeg (JPEG support)
Version: 7, 6a, or 6b
Installation options (a) or (b):
(a) Pre-built version. For example, check if you can do:
sudo apt-get install libjpeg
(works on some flavors of Linux)
(b) Source tarball.

Library: zlib (PNG support)
Version: 1.2.3 or later
Installation options (a) or (b):
(a) Pre-built version.
(b) Install from the source.

Library: freetype2 (OpenType/TrueType support)
Version: 2.1.3 or later
Installation options (a) or (b):
(a) Pre-built version.
(b) Install from the source.
PyQt4
This package provides Python bindings for the Qt libraries. We will use PyQt4 to generate
the GUI for the image-processing application that we will develop later in this chapter. The
GPL version is available at:
Windows platform
Download and install the binary distribution pertaining to Python 2.6. For example, the
executable file's name could be 'PyQt-Py2.6-gpl-4.6.2-2.exe'. Other than Python, it includes
everything needed for GUI development using PyQt.
Other platforms
Before building PyQt, you must install the SIP Python binding generator. For further details,
refer to the SIP homepage.
After installing SIP, download and install PyQt 4.6.2 or later from the source tarball. For
Linux/Unix, the source filename will start with PyQt-x11-gpl-.. and for Mac OS X,
PyQt-mac-gpl-... Linux users should also check if the PyQt4 distribution is already
available through the package manager.
Summary of installation prerequisites

Package: Python
Version: 2.6.4 (or any 2.6 release)
Windows platform: Install using the binary distribution.
Linux/Unix/OS X platforms:
(a) Install from binary; also install additional developer
packages (for example, packages with python-devel in the
package name on rpm-based systems), OR
(b) Build and install from the source tarball.
(c) Mac users can also check websites such as DarwinPorts for
binary package installers.

Package: PIL
Version: 1.1.6 or later
Windows platform: Install PIL 1.1.6 (binary) for Python 2.6.
Linux/Unix/OS X platforms:
(a) Install prerequisites if needed. Refer to Table #1 and
the README file in the PIL source distribution.
(b) Install PIL from source.
(c) Mac users can also check websites such as DarwinPorts for
binary package installers.

Package: PyQt4
Version: 4.6.2 or later
Windows platform: Install using the binary pertaining to Python 2.6.
Linux/Unix/OS X platforms:
(a) First install SIP 4.9 or later.
(b) Then install PyQt4.
Reading and writing images
To manipulate an existing image, we must first open it for editing, and we also require the
ability to save the image in a suitable file format after making changes. The Image module in
PIL provides methods to read and write images in the specified image file format. It supports
a wide range of file formats.
To open an image, use the method. Start the Python interpreter and write the
following code. You should specify an appropriate path on your system as an argument to
the method.
>>>import Image
>>>inputImage ="C:\\PythonTest\\image1.jpg")
This will open an image file by the name image1.jpg. If the file can't be opened, an
IOError will be raised; otherwise, it returns an instance of class Image.
For saving an image, use the save method of the Image class. Make sure you replace the
following string with an appropriate /path/to/your/image/file.
You can view the image just saved, using the show method of the Image class.
>>>outputImage ="C:\\PythonTest\\outputImage.jpg")
Here, it is essentially the same image as the input image, because we did not make any
changes to the output image.
Time for action – image file converter
With this basic information, let's build a simple image file converter. This utility will
batch-process image files and save them in a user-specified file format.
To get started, download the source file for this utility from the Packt website. This file
can be run from the command line as:
python [arguments]
Here, [arguments] are:
--input_dir: The directory path where the image files are located.
--input_format: The format of the image files to be converted. For example, jpg.
--output_dir: The location where you want to save the converted images.
--output_format: The output image format. For example, jpg, png, bmp,
and so on.
The following screenshot shows the image conversion utility in action on Windows XP, that
is, running the image converter from the command line.
Here, it will batch-process all the .jpg images within C:\PythonTest\images and save
them in png format in the directory C:\PythonTest\images\OUTPUT_IMAGES.
The le denes class ImageConverter . We will discuss the most important methods in
this class.
def processArgs: This method processes all the command-line arguments
listed earlier. It makes use of Python's built-in module getopts to process these
arguments. Readers are advised to review the code in the le
in the code bundle of this book for further details on how these arguments
are processed.
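As a rough sketch of how such getopt-based processing might look (the option names follow the list above; everything else, including the process_args helper, is illustrative rather than the book's actual code):

```python
import getopt

def process_args(argv):
    """Parse the image-converter command-line options into a dict."""
    options, _ = getopt.getopt(
        argv, "",
        ["input_dir=", "input_format=", "output_dir=", "output_format="])
    # getopt returns pairs like ("--input_dir", "C:\\images");
    # strip the leading dashes to get plain keys.
    return dict((opt.lstrip("-"), value) for opt, value in options)

args = process_args(["--input_dir", "C:\\images", "--input_format", "jpg",
                     "--output_dir", "C:\\out", "--output_format", "png"])
print(args["input_format"], args["output_format"])  # jpg png
```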
def convertImage: This is the workhorse method of the image-conversion utility.
1 def convertImage(self):
2 pattern = "*." + self.inputFormat
3 filetype = os.path.join(self.inputDir, pattern)
4 fileList = glob.glob(filetype)
5 inputFileList = filter(imageFileExists, fileList)
7 if not len(inputFileList):
8 print "\n No image files with extension %s located \
9 in dir %s"%(self.inputFormat, self.inputDir)
10 return
11 else:
12 # Record time before beginning image conversion
13 starttime = time.clock()
14 print "\n Converting images.."
16 # Save image into specified file format.
17 for imagePath in inputFileList:
18 inputImage =
19 dir, fil = os.path.split(imagePath)
20 fil, ext = os.path.splitext(fil)
21 outPath = os.path.join(self.outputDir,
22 fil + "." + self.outputFormat)
25 endtime = time.clock()
26 print "\n Done!"
27 print "\n %d image(s) written to directory:\
28 %s" %(len(inputFileList), self.outputDir)
29 print "\n Approximate time required for conversion: \
30 %.4f seconds" % (endtime - starttime)
Now let's review the preceding code.
1. Our rst task is to get a list of all the image les to be saved in a dierent format.
This is achieved by using glob module in Python. Line 4 in the code snippet nds all
the le path names that match the paern specied by the local variable fileType.
On line 5, we check whether the image le in fileList exists. This operaon can
be eciently performed over the whole list using the built-in filter funconality
in Python.
2. The code block between lines 7 to 14 ensures that one or more images exist. If so, it
will record the time before beginning the image conversion.
3. The next code block (lines 17-23) carries out the image file conversion. On line 18,
we use to open the image file; this creates an Image object.
Then the appropriate output path is derived, and finally the output image is saved
using the save method of the Image class.
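The path handling used to derive the output file name can be tried out on its own. The following snippet performs the same split/rebuild steps with hypothetical input values:

```python
import os.path

image_path = "C:/PythonTest/images/image1.jpg"        # hypothetical input file
output_dir = "C:/PythonTest/images/OUTPUT_IMAGES"     # hypothetical output dir
output_format = "png"

# Split off the directory, then the extension, then rebuild the name
# with the new extension in the output directory.
dir_name, file_name = os.path.split(image_path)
base_name, ext = os.path.splitext(file_name)
out_path = os.path.join(output_dir, base_name + "." + output_format)
print(out_path)
```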
What just happened?
In this simple example, we learned how to open and save image files in a specified image
format. We accomplished this by writing an image file converter that batch-processes
specified image files. We used PIL's and functionality along
with Python's glob module and built-in filter function.
Now we will discuss other key aspects related to image reading and writing.
Creating an image from scratch
So far we have seen how to open an existing image. What if we want to create our own
image? As an example, if you want to create fancy text as an image, the functionality that we
are going to discuss now comes in handy. Later in this book, we will learn how to use such
an image containing some text to embed into another image. The basic syntax for creating a
new image is:
foo =, size, color)
Where new is a built-in method of the class Image. It takes three arguments,
namely, mode, size, and color. The mode argument is a string that gives information about
the number and names of image bands. The most common values for the mode
argument are L (gray scale) and RGB (true color). The size is a tuple specifying the dimensions
of the image in pixels, whereas color is an optional argument. It can be assigned an RGB
value (a 3-tuple) if it's a multi-band image. If it is not specified, the image is filled with
black color.
Time for action – creating a new image containing some text
As already stated, it is often useful to generate an image containing only some text or a
common shape. Such an image can then be pasted onto another image at a desired angle
and location. We will now create an image with text that reads, "Not really a fancy text!"
1. Write the following code in a Python source file:
1 import Image
2 import ImageDraw
3 txt = "Not really a fancy text!"
4 size = (150, 50)
5 color = (0, 100, 0)
6 img ='RGB', size, color)
7 imgDrawer = ImageDraw.Draw(img)
8 imgDrawer.text((5, 20), txt)
2. Let's analyze the code line by line. The first two lines import the necessary modules
from PIL. The variable txt is the text we want to include in the image. On line 6,
the new image is created using Here we specify the mode and size
arguments. The optional color argument is specified as a tuple with RGB values
pertaining to the "dark green" color.
3. The ImageDraw module in PIL provides graphics support for an Image object.
The function ImageDraw.Draw takes an image object as an argument and creates a
Draw instance. In our code, it is called imgDrawer, created on line 7. This Draw
instance enables drawing various things in the given image.
4. On line 8, we call the text method of the Draw instance and supply the position
(a tuple) and the text (stored in the string txt) as arguments.
5. Finally, the image can be viewed using the call. You can optionally
save the image using the method. The following screenshot shows
the resultant image.
What just happened?
We just learned how to create an image from scratch. An empty image was created using
the method. Then, we used the ImageDraw module in PIL to add text to this image.
Reading images from archive
If the image is part of an archived container, for example, a TAR archive, we can use the
TarIO module in PIL to open it, and then pass the TarIO instance as an argument
Time for action – reading images from archives
Suppose there is an archive file images.tar containing the image file image1.jpg. The
following code snippet shows how to read image1.jpg from the tarball.
>>>import TarIO
>>>import Image
>>>fil = TarIO.TarIO("images.tar", "images/image1.jpg")
>>>img =
What just happened?
We learned how to read an image located in an archived container.
Have a go hero – add new features to the image file converter
Modify the image conversion code so that it supports the following new functionality, which:
1. Takes a ZIP file containing images as input
2. Creates a TAR archive of the converted images
Basic image manipulations
Now that we know how to open and save images, let's learn some basic techniques to
manipulate images. PIL supports a variety of geometric manipulation operations, such as
resizing an image, rotating it by an angle, flipping it top to bottom or left to right, and so on.
It also facilitates operations such as cropping, cutting and pasting pieces of images, and
so on.
Changing the dimensions of an image is one of the most frequently used image manipulation
operations. Image resizing is accomplished using Image.resize in PIL. The following
line of code shows how it is achieved.
foo = img.resize(size, filter)
Here, img is an image (an instance of class Image) and the result of the resizing operation is
stored in foo (another instance of class Image). The size argument is a tuple (width,
height). Note that the size is specified in pixels. Thus, resizing the image means modifying
the number of pixels in the image. This is also known as image re-sampling. The Image.
resize method also takes filter as an optional argument. A filter is an interpolation
algorithm used while re-sampling the given image. It handles the deletion or addition of
pixels during re-sampling, when the resize operation is intended to make the image smaller
or larger, respectively. There are four filters available. In increasing order of
quality, the resize filters are NEAREST, BILINEAR, BICUBIC, and ANTIALIAS. The default
filter option is NEAREST.
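To make the idea of re-sampling concrete, here is a toy NEAREST-style resize over a plain 2D list of pixel values. This only illustrates the principle and is not PIL's implementation; the real BILINEAR and BICUBIC filters interpolate between neighboring pixels instead of copying the nearest one:

```python
def resize_nearest(pixels, new_width, new_height):
    """Nearest-neighbor re-sampling of a 2D list of pixel values."""
    old_height = len(pixels)
    old_width = len(pixels[0])
    result = []
    for y in range(new_height):
        # Map each output row/column back to the nearest source pixel.
        src_y = y * old_height // new_height
        row = [pixels[src_y][x * old_width // new_width]
               for x in range(new_width)]
        result.append(row)
    return result

image = [[0, 1],
         [2, 3]]
# Upsampling 2x2 -> 4x4 simply duplicates each source pixel.
print(resize_nearest(image, 4, 4))
# [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```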
Time for action – resizing
Let's now resize images by modifying their pixel dimensions and applying various filters
for re-sampling.
1. Download the file ImageResizeExample.bmp from the Packt website. We will
use this as the reference file to create scaled images. The original dimensions of
ImageResizeExample.bmp are 200 x 212 pixels.
2. Write the following code in a file or in the Python interpreter. Replace the inPath and
outPath strings with the appropriate image paths on your machine.
1 import Image
2 inPath = "C:\\images\\ImageResizeExample.jpg"
3 img =
4 width , height = (160, 160)
5 size = (width, height)
6 foo = img.resize(size)
8 outPath = "C:\\images\\foo.jpg"
3. The image specied by the inPath will be resized and saved as the image
specied by the outPath. Line 6 in the code snippet does the resizing job and
nally we save the new image on line 9. You can see how the resized image looks
by calling
4. Let's now specify the filter argument. In the following code, on line 14, the
filterOpt argument is specified in the resize method. The valid filter options
are specified as values in the dictionary filterDict. The keys of filterDict
are used as the filenames of the output images. The four images thus obtained are
compared in the next illustration. You can clearly notice the difference between the
ANTIALIAS image and the others (particularly, look at the flower petals in these
images). When the processing time is not an issue, choose the ANTIALIAS filter
option, as it gives the best quality image.
1 import Image
2 inPath = "C:\\images\\ImageResizeExample.jpg"
3 img =
4 width , height = (160, 160)
5 size = (width, height)
6 filterDict = {'NEAREST':Image.NEAREST,
7 'BILINEAR':Image.BILINEAR,
9 'ANTIALIAS':Image.ANTIALIAS}
11 for k in filterDict.keys():
12 outPath= "C:\\images\\" + k + ".jpg"
13 filterOpt = filterDict[k]
14 foo = img.resize(size, filterOpt)
The resized images with different filter options appear as follows. Clockwise
from left: Image.NEAREST, Image.BILINEAR, Image.BICUBIC, and Image.ANTIALIAS.
5. The resize functionality illustrated here, however, doesn't preserve the aspect
ratio of the resulting image. The image will appear distorted if one dimension is
stretched more or less in comparison with the other dimension. PIL's
Image module provides another built-in method to fix this. It will override the
larger of the two dimensions, such that the aspect ratio of the image is maintained.
import Image
inPath = "C:\\images\\ResizeImageExample.jpg"
img =
width , height = (100, 50)
size = (width, height)
outPath = "C:\\images\\foo.jpg"
img.thumbnail(size, Image.ANTIALIAS)
6. This code will override the maximum pixel dimension value (width in this case)
specified by the programmer and replace it with a value that maintains the aspect
ratio of the image. In this case, we end up with an image with pixel dimensions
(47, 50). The resultant images are compared in the following illustration.
It shows the comparison of output images for the methods Image.thumbnail
and Image.resize.
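The arithmetic behind Image.thumbnail can be checked by hand: both dimensions are scaled by the same factor, chosen so that the image fits inside the requested size. The helper below is a sketch of that calculation (PIL's internal rounding may differ slightly), applied to the 200 x 212 example image and the requested size of (100, 50):

```python
def thumbnail_size(orig_w, orig_h, max_w, max_h):
    """Scale both dimensions uniformly so the image fits inside
    (max_w, max_h), as Image.thumbnail does."""
    scale = min(float(max_w) / orig_w, float(max_h) / orig_h)
    return (int(round(orig_w * scale)), int(round(orig_h * scale)))

# Height is the binding constraint here: scale = 50/212, so the
# width 200 shrinks to about 47 pixels.
print(thumbnail_size(200, 212, 100, 50))  # (47, 50)
```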
What just happened?
We just learned how image resizing is done using PIL's Image module, by writing a few lines
of code. We also learned about the different types of filters used in image resizing
(re-sampling). And finally, we also saw how to resize an image while still keeping the aspect
ratio intact (that is, without distortion), using the Image.thumbnail method.
Like image resizing, rotating an image about its center is another commonly performed
transformation. For example, in a composite image, one may need to rotate the text by
certain degrees before embedding it in another image. For such needs, there are methods
such as rotate and transpose available in PIL's Image module. The basic syntax to
rotate an image using Image.rotate is as follows:
foo = img.rotate(angle, filter)
Where the angle is provided in degrees and filter, the optional argument, is the
image re-sampling filter. The valid filter values are NEAREST, BILINEAR, and BICUBIC.
You can rotate the image using Image.transpose only for 90-, 180-, and 270-degree
rotation angles.
Time for action – rotating
1. Download the file Rotate.png from the Packt website. Alternatively, you can use
any supported image file of your choice.
2. Write the following code in the Python interpreter or in a Python file. As always,
specify the appropriate path strings for the inPath and outPath variables.
1 import Image
2 inPath = "C:\\images\\Rotate.png"
3 img =
4 deg = 45
5 filterOpt = Image.BICUBIC
6 outPath = "C:\\images\\Rotate_out.png"
7 foo = img.rotate(deg, filterOpt)
3. Upon running this code, the output image, rotated by 45 degrees, is saved to the
outPath. The filter option Image.BICUBIC ensures the highest quality. The next
illustration shows the original image and the images rotated by 45 and 180 degrees,
respectively.
4. There is another way to accomplish rotation for certain angles, by using the
Image.transpose functionality. The following code achieves a 270-degree
rotation. Other valid options for rotation are Image.ROTATE_90 and
import Image
inPath = "C:\\images\\Rotate.png"
img =
outPath = "C:\\images\\Rotate_out.png"
foo = img.transpose(Image.ROTATE_270)
What just happened?
In the previous section, we used Image.rotate to accomplish rotating an image by the
desired angle. The image filter Image.BICUBIC was used to obtain a better quality output
image after rotation. We also saw how Image.transpose can be used for rotating the
image by certain angles.
There are mulple ways in PIL to ip an image horizontally or vercally. One way to achieve
this is using the Image.transpose method. Another opon is to use the funconality from
the ImageOps module . This module makes the image-processing job even easier with some
ready-made methods. However, note that the PIL documentaon for Version 1.1.6 states
that ImageOps is sll an experimental module.
Time for action – flipping
Imagine that you are building a symmetric image using a bunch of basic shapes. To create
such an image, an operation that can flip (or mirror) the image would come in handy. So let's
see how image flipping can be accomplished.
1. Write the following code in a Python source file.
1 import Image
2 inPath = "C:\\images\\Flip.png"
3 img =
4 outPath = "C:\\images\\Flip_out.png"
5 foo = img.transpose(Image.FLIP_LEFT_RIGHT)
2. In this code, the image is flipped horizontally by calling the transpose method.
To flip the image vertically, replace line 5 in the code with the following:
foo = img.transpose(Image.FLIP_TOP_BOTTOM)
3. The following illustration shows the output of the preceding code when the image is
flipped horizontally and vertically.
4. The same effect can be achieved using the ImageOps module. To flip the
image horizontally, use ImageOps.mirror, and to flip the image vertically,
use ImageOps.flip.
import ImageOps
# Flip image horizontally
foo1 = ImageOps.mirror(img)
# Flip image vertically
foo2 = ImageOps.flip(img)
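Conceptually, both flips are simple reversals over the grid of pixel values. The following sketch (again, not PIL's implementation) shows the idea:

```python
def flip_horizontal(pixels):
    """Mirror each row, like Image.FLIP_LEFT_RIGHT / ImageOps.mirror."""
    return [list(reversed(row)) for row in pixels]

def flip_vertical(pixels):
    """Reverse the row order, like Image.FLIP_TOP_BOTTOM / ImageOps.flip."""
    return list(reversed(pixels))

grid = [[1, 2],
        [3, 4]]
print(flip_horizontal(grid))  # [[2, 1], [4, 3]]
print(flip_vertical(grid))    # [[3, 4], [1, 2]]
```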
What just happened?
With the help of an example, we learned how to flip an image horizontally or vertically using
Image.transpose, and also by using methods in the ImageOps module. This operation will
be applied later in this book for further image processing, such as preparing composite images.
Capturing screenshots
How do you capture the desktop screen, or a part of it, using Python? There is an ImageGrab
module in PIL. This simple line of code will capture the whole screen.
img = ImageGrab.grab()
Where img is an instance of class Image.
However, note that in PIL Version 1.1.6, the ImageGrab module supports screen grabbing
only for the Windows platform.
Time for action – capture screenshots at intervals
Imagine that you are developing an application where, after a certain time interval, the
program needs to automatically capture the whole screen or a part of the screen. Let's
develop code that achieves this.
1. Write the following code in a Python source file. When the code is executed, it will
capture part of the screen after every two seconds. The code will run for about
three seconds.
1 import ImageGrab
2 import time
3 startTime = time.clock()
4 print "\n The start time is %s sec" % startTime
5 # Define the four corners of the bounding box.
6 # (in pixels)
7 left = 150
8 upper = 200
9 right = 900
10 lower = 700
11 bbox = (left, upper, right, lower)
13 while time.clock() < 3:
14 print " \n Capturing screen at time %.4f sec" \
15 %time.clock()
16 screenShot = ImageGrab.grab(bbox)
17 name = str("%.2f"%time.clock())+ "sec.png"
18"C:\\images\\output\\" + name)
19 time.sleep(2)
2. We will now review the important aspects of this code. First, import the necessary
modules. The time.clock() call keeps track of the time spent. On line 11, a bounding
box is defined. It is a 4-tuple that defines the boundaries of a rectangular region.
The elements in this tuple are specified in pixels. In PIL, the origin (0, 0) is defined
at the top-left corner of an image. The next illustration is a representation of a
bounding box for image cropping; see how left, upper and right, lower are specified
as the ends of a diagonal of the rectangle.
Example of a bounding box used for image cropping.
3. The while loop runs till time.clock() reaches three seconds. Inside the loop,
the part of the screen bounded within bbox is captured (see line 16) and then the
image is saved on line 18. The image name corresponds to the time at which
it is taken.
4. The time.sleep(2) call suspends the execution of the application for two
seconds. This ensures that it grabs the screen every two seconds. The loop
repeats until the given time is reached.
5. In this example, it will capture two screenshots, one when it enters the loop for the
first time and the next after a two-second time interval. In the following illustration,
the two images grabbed by the code are shown. Notice the time and console prints
in these images.
The preceding screenshot is taken at wall clock time 00:02:15, as shown in the dialog. The
next screenshot is taken 2 seconds later, at wall clock time 00:02:17.
What just happened?
In the preceding example, we wrote a simple application that captures the screen at regular
time intervals. This helped us to learn how to grab a screen region using ImageGrab.
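The loop above relies on time.clock(), which was removed in Python 3.8, and on ImageGrab, which PIL 1.1.6 supports only on Windows. The same timing logic can be sketched in a platform-neutral way; here `capture` is a hypothetical stand-in for ImageGrab.grab, and the intervals are shortened so the sketch finishes quickly:

```python
import time

def capture(bbox):
    """Stand-in for ImageGrab.grab(bbox); returns a fake 'screenshot' record."""
    return {"bbox": bbox, "at": time.monotonic()}

def capture_at_intervals(bbox, duration=0.3, interval=0.1):
    """Grab the region bbox every `interval` seconds, for `duration` seconds."""
    shots = []
    start = time.monotonic()   # time.clock() was removed in Python 3.8
    while time.monotonic() - start < duration:
        shots.append(capture(bbox))
        time.sleep(interval)   # wait before the next grab
    return shots

shots = capture_at_intervals((150, 200, 900, 700))
print(len(shots))  # roughly duration / interval captures
```

On Windows with PIL installed, swapping the stand-in for ImageGrab.grab(bbox) and restoring the two-second interval reproduces the book's behavior.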
In previous secon, we learned how to grab a part of the screen with ImageGrab. Cropping
is a very similar operaon performed on an image. It allows you to modify a region within
an image.
Time for action – cropping an image
This simple code snippet crops an image and applies some changes on the cropped portion.
1. Download the file Crop.png from the Packt website. The size of this image is 400 x
400 pixels. You can also use your own image file.
2. Write the following code in a Python source file. Modify the path of the image file to
an appropriate path.
import Image
img ="C:\\images\\Crop.png")
left = 0
upper = 0
right = 180
lower = 215
bbox = (left, upper, right, lower)
img = img.crop(bbox)
3. This will crop a region of the image bounded by bbox. The specification of the
bounding box is identical to what we have seen in the Capturing screenshots
section. The output of this example is shown in the following illustration.
Original image (left) and its cropped region (right).
What just happened?
In the previous section, we used the Image.crop functionality to crop a region within an image
and save the resultant image. In the next section, we will apply this while pasting a region of
an image onto another.
Pasng a copied or cut image onto another one is a commonly performed operaon while
processing images. Following is the simplest syntax to paste one image on another.
img = img.paste(image, box)
Here image is an instance of class Image and box is a rectangular bounding box that
denes the region of img, where the image will be pasted. The box argument can be a
4-tupleError: Reference source not found or a 2-tuple. If a 4-tuple box
is specied, the size of the image to be pasted must be same as the size of the region.
Otherwise, PIL will throw an error with a message ValueError: images do not match.
The 2-tuple on the other hand, provides pixel coordinates of the upper-le corner of the
region to be pasted.
Now look at the following line of code. It is a copy operation on an image.
img2 = img.copy()
The copy operation can be viewed as pasting the whole image onto a new image. This
operation is useful when, for instance, you want to keep the original image unaltered
and make alterations to a copy of the image.
Time for action – pasting: mirror the smiley face!
Consider the example in the earlier section where we cropped a region of an image. The cropped
region contained a smiley face. Let's modify the original image so that it has a 'reflection' of
the smiley face.
1. If not done already, download the file Crop.png from the Packt website.
2. Write this code, replacing the file path with an appropriate file path on your system.
1 import Image
2 img ="C:\\images\\Crop.png")
3 # Define the elements of a 4-tuple that represents
4 # a bounding box ( region to be cropped)
5 left = 0
6 upper = 25
7 right = 180
8 lower = 210
9 bbox_1 = (left, upper, right, lower)
10 # Crop the smiley face from the image
11 smiley = img.crop(bbox_1)
12 # Flip the cropped image vertically
13 smiley = smiley.transpose(Image.FLIP_TOP_BOTTOM)
14 # Define the box as a 2-tuple.
15 bbox_2 = (0, 210)
16 # Finally paste the 'smiley' on to the image.
17 img.paste(smiley, bbox_2)
3. First we open an image and crop it to extract a region containing the smiley
face. This was already done in the Cropping section. The only minor difference you
will notice is the value of the tuple element upper. It is intentionally kept as 25
pixels from the top to make sure that the cropped image has a size that can fit in
the blank portion below the original smiley face.
4. The cropped image is then flipped vertically with the code on line 13.
5. Now we define a box, bbox_2, for pasting the cropped smiley face back onto the
original image. Where should it be pasted? We intend to make a 'reflection' of the
original smiley face. So the y coordinate of the top edge of the pasted image
should be greater than or equal to the bottom y coordinate of the cropped region,
indicated by the 'lower' variable (see line 8). The bounding box is defined on line 15,
as a 2-tuple representing the upper-left coordinates of the smiley.
6. Finally, on line 17, the paste operation is performed to paste the smiley on the
original image. The resulting image is then saved with a different name.
7. The original image and the output image after the paste operation are shown in the
next illustration.
The illustration shows the comparison of the original and resulting images after the
paste operation.
What just happened?
Using a combination of Image.crop and Image.paste, we accomplished cropping a
region, making some modifications, and then pasting the region back on the image.
Project: Thumbnail Maker
Let's take up a project now. We will apply some of the operations we learned in this chapter
to create a simple Thumbnail Maker utility. This application will accept an image as an input
and will create a resized version of that image. Although we are calling it a thumbnail maker, it
is a multi-purpose utility that implements some basic image-processing functionality.
Before proceeding further, make sure that you have installed all the packages discussed at
the beginning of this chapter. The screenshot of the Thumbnail Maker dialog is shown in the
following illustration.
The Thumbnail Maker GUI has two components:
1. The left panel is a 'control area', where you can specify certain image parameters
along with options for input and output paths.
2. A graphics area on the right-hand side where you can view the generated image.
In short, this is how it works:
1. The application takes an image file as an input.
2. It accepts user input for image parameters such as dimensions in pixels, the filter
for re-sampling, and the rotation angle in degrees.
3. When the user clicks the OK button in the dialog, the image is processed and saved
at a location indicated by the user in the specified output image format.
Time for action – play with the Thumbnail Maker application
First, we will run the Thumbnail Maker application as an end user. This warm-up exercise
intends to give us a good understanding of how the application works. This, in turn, will help
us develop/learn the involved code quickly. So get ready for action!
1. Download the les,,
and from Packt website. Place these les
in some directory.
2. From the command prompt, change to this directory locaon and type the
following command:
The Thumbnail Maker dialog that pops up was shown in the earlier screenshot.
Next, we will specify the input-output paths and various image parameters. You can
open any image le of your choice. Here, the ower image shown in some previous
secons will be used as an input image. To specify an input image, click on the small
buon with three dots . It will open a le dialog. The following illustraon shows
the dialog with all the parameters specied.
3. If the Maintain Aspect Ratio checkbox is checked, internally it will scale the image
dimensions so that the aspect ratio of the output image remains the same. When
the OK button is clicked, the resultant image is saved at the location specified by the
Output Location field and the saved image is displayed in the right-hand panel of
the dialog. The following screenshot shows the dialog after clicking the OK button.
4. You can now try modifying different parameters, such as the output image format or
the rotation angle, and save the resulting image.
5. See what happens when the Maintain Aspect Ratio checkbox is unchecked. The
aspect ratio of the resulting image will not be preserved and the image may appear
distorted if the width and height dimensions are not properly specified.
6. Experiment with different re-sampling filters; you can notice the difference between
the quality of the resultant image and the earlier image.
7. There are certain limitations to this basic utility. It is required to specify reasonable
values for all the parameter fields in the dialog. The program will print an error if
any of the parameters is not specified.
What just happened?
We got ourselves familiar with the user interface of the Thumbnail Maker dialog and saw
how it works for processing an image with different dimensions and quality. This knowledge
will make it easier to understand the Thumbnail Maker code.
Generating the UI code
The Thumbnail Maker GUI is written using PyQt4 (Python bindings for the Qt4 GUI framework).
Detailed discussion on how the GUI is generated and how the GUI elements are connected
to the main functions is beyond the scope of this book. However, we will cover certain main
aspects of this GUI to get you going. The GUI-related code in this application can simply
be used 'as-is' and if this is something that interests you, go ahead and experiment with it
further! In this section, we will briefly discuss how the UI code is generated using PyQt4.
Time for action – generating the UI code
PyQt4 comes with an application called Qt Designer. It is a GUI designer for Qt-based
applications and provides a quick way to develop a graphical user interface containing some
basic widgets. With this, let's see how the Thumbnail Maker dialog looks in Qt Designer and
then run a command to generate Python source code from the .ui file.
1. Download the thumbnailMaker.ui file from the Packt website.
2. Start the Qt Designer application that comes with the PyQt4 installation.
3. Open the file thumbnailMaker.ui in Qt Designer. Notice the red-colored borders
around the UI elements in the dialog. These borders indicate a 'layout' in which
the widgets are arranged. Without a layout in place, the UI elements may appear
distorted when you run the application and, for instance, resize the dialog. Three
types of QLayouts are used, namely Horizontal, Vertical, and Grid layout.
4. You can add new UI elements, such as a QCheckbox or a QLabel, by dragging
and dropping them from the 'Widget Box' of Qt Designer. It is located in the left
panel by default.
5. Click on the field next to the label "Input file". In the right-hand panel of Qt
Designer, there is a Property Editor that displays the properties of the selected
widget (in this case it's a QLineEdit). This is shown in the following illustration.
The Property Editor allows us to assign values to various attributes, such as the
objectName, width, and height of the widget, and so on.
Qt Designer shows the details of the selected widget in the Property Editor.
6. Qt Designer saves the file with the extension .ui. To convert this into Python source
code, PyQt4 provides a conversion utility called pyuic4. On Windows XP, for a
standard Python installation, it is present at the following location: C:\Python26\
Lib\site-packages\PyQt4\pyuic4.bat. Add this path to your environment
variable. Alternatively, specify the whole path each time you want to convert a .ui
file to a Python source file. The conversion utility can be run from the command
prompt as:
pyuic4 thumbnailMaker.ui -o
7. This will generate a Python source file with all the GUI
elements defined. You can further review this file to understand how the UI
elements are defined.
What just happened?
We learned how to autogenerate the Python source code defining the UI elements of the
Thumbnail Maker dialog from a Qt Designer file.
Have a go hero – tweak the UI of the Thumbnail Maker dialog
Modify the thumbnailMaker.ui file in Qt Designer and implement the following list of
things in the Thumbnail Maker dialog.
1. Change the color of all the line edits in the left panel to pale yellow.
2. Tweak the default file extension displayed in the Output File Format combobox such
that the first option is .png instead of .jpeg.
Double click on this combobox to edit it.
3. Add a new option .tiff to the output format combobox.
4. Align the OK and Cancel buttons to the right corner.
You will need to break layouts, move the
spacer around, and recreate the layouts.
5. Set the range of the rotation angle to 0 to 360 degrees instead of the
current -180 to +180 degrees.
After this, regenerate the UI source file by running the pyuic4 script and then
run the Thumbnail Maker application.
Connecting the widgets
In the earlier section, the Python source code representing the UI was automatically generated
using the pyuic4 script. This, however, only has the widgets defined and placed in a nice
layout. We need to teach these widgets what they should do when a certain event occurs.
To do this, Qt's slots and signals will be used. A signal is emitted when a particular GUI event
occurs. For example, when the user clicks on the OK button, internally, a clicked() signal
is emitted. A slot is a function that is called when a particular signal is emitted. Thus, in
this example, it will call a specified method whenever the OK button is clicked. See the PyQt4
documentation for a complete list of available signals for various widgets.
Time for action – connecting the widgets
You will notice several different widgets in the dialog. For example, the field which accepts
the input image path or the output directory path is a QLineEdit. The widget where the
image format is specified is a QCombobox. On similar lines, the OK and Cancel buttons are
QPushButtons. As an exercise, you can open up the thumbnailMaker.ui file and click
on each element to see the associated Qt class from the Property Editor.
With this, let's learn how the widgets are connected.
1. Open the le The _connect method of class
ThumbnailMakerDialog is copied. The method is called in the constructor
of this class.
def _connect(self):
Connect slots with signals.
SIGNAL("clicked()"), self._openFileDialog)
SIGNAL("clicked()"), self._outputLocationPath)
SIGNAL("clicked()"), self._processImage)
SIGNAL("clicked()"), self.close)
2. self._dialog is an instance of the class Ui_ThumbnailMakerDialog.
self.connect is a method inherited from the Qt class QDialog. Here, it takes the following
arguments (QObject, signal, callable), where QObject is any widget type
(all widgets inherit QObject), signal is the Qt SIGNAL that tells us what event
occurred, and callable is any method handling this event.
3. For example, consider the highlighted lines of the code snippet. They connect
the OK button to a method that handles image processing. The first argument,
self._dialog.okPushButton, refers to the button widget defined in the class
Ui_ThumbnailMakerDialog. Referring to the QPushButton documentation, you
will find there is a "clicked()" signal that it can emit. The second argument
SIGNAL("clicked()") tells Qt that we want to know when that button is clicked
by the user. The third argument is the method self._processImage that gets
called when this signal is emitted.
4. Similarly, you can review the other connections in this method. Each of these
connects a widget to a method of the class ThumbnailMakerDialog.
What just happened?
We reviewed the ThumbnailMakerDialog._connect() method to understand how the UI
elements are connected to various internal methods. The previous two sections helped us
learn some preliminary concepts of GUI programming using Qt.
Developing the image processing code
The previous sections were intended to get ourselves familiar with the application as an end
user and to understand some basic aspects of the GUI elements in the application. With all
the necessary pieces together, let's focus our attention on the class that does all the main image
processing in the application.
The class ThumbnailMaker handles the pure image processing code. It defines various
methods to achieve this. For example, the class methods such as _rotateImage,
_makeThumbnail, and _resizeImage manipulate the given image to accomplish
rotation, thumbnail generation, and resizing respectively. This class accepts input from
ThumbnailMakerDialog. Thus, no Qt-related UI code is required here. If you want to
use some other GUI framework to process input, you can do that easily. Just make sure to
implement the public API methods defined in class ThumbnailMakerDialog, as those are
used by the ThumbnailMaker class.
Time for action – developing the image processing code
Thus, with ThumbnailMakerDialog at your disposal, you can develop your own
code from scratch in the class ThumbnailMaker. Just make sure to implement the method
processImage, as this is the only method called by ThumbnailMakerDialog.
Let's develop some important methods of the class ThumbnailMaker.
Let's develop some important methods of class ThumbnailMaker.
1. Write the constructor for class ThumbnailMaker. It takes dialog as an argument.
In the constructor, we only inialize self._dialog, which is an instance of class
ThumbnailMakerDialog. Here is the code.
def __init__(self, dialog):
    """
    Constructor for class ThumbnailMaker.
    """
    # This dialog can be an instance of
    # ThumbnailMakerDialog class. Alternatively, if
    # you have some other way to process input,
    # it will be that class. Just make sure to implement
    # the public API methods defined in
    # ThumbnailMakerDialog class!
    self._dialog = dialog
2. Next, write the processImage method in the class ThumbnailMaker. The code is
as follows:
Note: You can download the application source file
from the Packt website. The code written here is from that file. The only
difference is that some code comments are removed here.
1 def processImage(self):
2 filePath = self._dialog.getInputImagePath()
3 imageFile =
5 if self._dialog.maintainAspectRatio:
6 resizedImage = self._makeThumbnail(imageFile)
7 else:
8 resizedImage = self._resizeImage(imageFile)
10 rotatedImage = self._rotateImage(resizedImage)
12 fullPath = self._dialog.getOutImagePath()
14 # Finally save the image.
3. On line 2, it gets the full path of the input image file. Note that it relies on
self._dialog to provide this information.
4. Then the image file is opened the usual way. On line 5, it checks a flag that decides
whether or not to process the image by maintaining the aspect ratio. Accordingly,
the _makeThumbnail or _resizeImage method is called.
5. On line 10, it rotates the image resized earlier, using the _rotateImage method.
6. Finally, on line 15, the processed image is saved at a path obtained from the
getOutImagePath method of the class ThumbnailMakerDialog.
7. We will now write the _makeThumbnail method.
1 def _makeThumbnail(self, imageFile):
2 foo = imageFile.copy()
3 size = self._dialog.getSize()
4 imageFilter = self._getImageFilter()
5 foo.thumbnail(size, imageFilter)
6 return foo
8. First a copy of the original image is made. We will manipulate this copy and the
method will return it for further processing.
9. Then the necessary parameters, such as the image dimensions and the filter
for re-sampling, are obtained from self._dialog and _getImageFilter
respectively.
10. Finally the thumbnail is created on line 5 and then the method returns this
image instance.
11. We have already discussed how to resize and rotate an image. The related code is
straightforward to write and readers are suggested to write it as an exercise.
You will need to review the application source code
for getting the appropriate parameters. Write the remaining routines, namely
_resizeImage, _rotateImage, and _getImageFilter.
12. Once all the methods are in place, run the main script from the command line.
13. It should show our application dialog. Play around with it to make sure
everything works!
What just happened?
In the previous section, we completed an exciting project. Several things learned in this
chapter, such as image I/O, resizing, and so on, were applied in the project. We developed a
GUI application where some basic image manipulation features, such as creating thumbnails,
were implemented. This project also helped us gain some insight into various aspects of GUI
programming using Qt.
Have a go hero – enhance the ThumbnailMaker application
Want to do something more with the Thumbnail Maker. Here you go! As you will add more
features to this applicaon, the rst thing you would need to do is to change its name—at
least from the capon of the dialog that pops up! Edit the thumbnailMaker.ui le in
QT designer, change the name to something like "Image Processor", and recreate the
corresponding .py le. Next, add the following features to this applicaon.
If you don't want to deal with any UI code, that is ne too! You
can write a class similar to ThumbnailMakerDialog. Do
the input argument processing in your own way. All that class
ThumbnailMaker requires is implementaon of certain public
methods in this new class, to get various input parameters.
1. Accept output lename from the user. Currently, it gives the same name as the
input le.
Edit the .ui le. You would need to break the layouts before adding a QLineEdit
and its QLabel and then recreate the layouts.
2. If there is a previously created output image le in the output directory, clicking OK
would simply overwrite that le. Add a checkbox reading, "Overwrite exisng le
(if any)". If the checkbox in deselected, it should pop up a warning dialog and exit.
For the laer part, there is a commented out code block in
ThumbnailMakerDialog._processImage. Just enable the code.
3. Add a feature that can add specied text in the lower-le corner of the
output image.
4. Create an image with this text, and use the combinaon of crop and paste to
achieve desired results. For user input, you will need to add a new QLineEdit
for accepng text input and then connect signals with a callable method in
Summary
We learned a lot in this chapter about basic image manipulation.
Specifically, we covered image input-output operations that enable reading and writing of
images, and the creation of images from scratch.
With the help of numerous examples and code snippets, we learned several image
manipulation operations. Some of them are:
How to resize an image with or without maintaining the aspect ratio
Rotating or flipping an image
Cropping an image, manipulating it using techniques learned earlier in the chapter,
and then pasting it on the original image
Creating an image with text
We developed a small application that captures a region of your screen at regular
time intervals
We created an interesting project implementing some image processing
functionality learned in this chapter
With this basic image manipulation knowledge, we are ready to learn how to add some cool
effects to an image. In the next chapter, we will see how to enhance an image.
Enhancing Images
In the previous chapter, we learned a lot about day-to-day image processing.
We accomplished the learning objective of performing basic image
manipulation by working on several examples and small projects. In this
chapter, we will move a step further by learning how to add special effects to
an image. The special effects added to an image serve several purposes. They
not only give a pleasing appearance to the image but may also help you to
understand important information presented by the image.
In this chapter, we shall:
Learn how to adjust the brightness and contrast levels of an image
Add code to selectively modify the color of an image and create grayscale images
and negatives
Use PIL functionality to combine two images together and add transparency effects
to the image
Apply various image-enhancement filters to an image to achieve effects such as
smoothing, sharpening, embossing, and so on
Undertake a project to develop a tool to add a watermark or text or a date stamp
to an image
So let's get on with it.
Installation and download prerequisites
The installation prerequisites for this chapter are the same as the ones in Chapter 2, Working
with Images. Please refer to that chapter for further details.
It is important to download all the images required for this chapter from the Packt website.
We will be using these images throughout this chapter
in the image processing code. Additionally, please download the PDF file, Chapter 3
Supplementary Material.pdf, from the Packt website. This is very important if you are
reading a hard copy of this book, which is printed in black and white. In the upcoming
sections such as "Tweaking Colors", we compare the images before and after processing. In
the black and white edition, you won't be able to see the difference between the compared
images. For example, effects such as changed image color, modified contrast, and so on,
won't be noticeable. The PDF file contains all these image comparisons. So please keep this
file handy while working on the examples in this chapter!
Adjusting brightness and contrast
One often needs to tweak the brightness and contrast level of an image. For example, you
may have a photograph that was taken with a basic camera when there was insufficient
light. How would you correct that digitally? The brightness adjustment helps make the image
brighter or darker, whereas the contrast adjustment emphasizes differences between the
color and brightness levels within the image data. The image can be made lighter or darker
using the ImageEnhance module in PIL. The same module provides a class that can
auto-contrast an image.
Time for action – adjusting brightness and contrast
Let's learn how to modify the image brightness and contrast. First, we will write code to adjust
the brightness. The ImageEnhance module makes our job easier by providing the Brightness
class.
1. Download the image 0165_3_12_Before_BRIGHTENING.png and rename it to
Before_BRIGHTENING.png.
2. Use the following code:
1 import Image
2 import ImageEnhance
4 brightness = 3.0
5 peak ="C:\\images\\Before_BRIGHTENING.png")
6 enhancer = ImageEnhance.Brightness(peak)
7 bright = enhancer.enhance(brightness)
8"C:\\images\\BRIGHTENED.png")
3. On line 6 in the code snippet, we created an instance of the class Brightness. It
takes an Image instance as an argument.
4. Line 7 creates a new image bright by using the specified brightness value.
A value between 0.0 and 1.0 gives a darker image, whereas a value
greater than 1.0 makes it brighter. A value of 1.0 keeps the brightness of the
image unchanged.
5. The original and resultant images are shown in the next illustration.
Comparison of images before and after brightening.
6. Let's move on and adjust the contrast of the brightened image. We will append the
following lines of code to the code snippet that brightened the image.
10 contrast = 1.3
11 enhancer = ImageEnhance.Contrast(bright)
12 con = enhancer.enhance(contrast)
13"C:\\images\\CONTRAST.png")
7. Thus, similar to what we did to brighten the image, the image contrast was tweaked
by using the ImageEnhance.Contrast class. A contrast value of 0.0 creates a
solid gray image. A value of 1.0 keeps the current contrast.
8. The resultant image is compared with the original in the following illustration.
NOTE: As menoned in the Installaon and Download Prerequisites
secon, the images compared in the following illustraon will appear idencal
if you are reading a hard copy of this book. Please download and refer to the
supplementary PDF le Chapter 3 Supplementary Material.pdf. Here,
the color images are provided, which will help you see the dierence.
The original image with the image displaying the increasing contrast.
9. In the preceding code snippet, we were required to specify a contrast value. If you
prefer PIL to decide an appropriate contrast level, there is a way to do this. The
ImageOps.autocontrast functionality sets an appropriate contrast level. This
function normalizes the image contrast. Let's use this functionality now.
10. Use the following code:
import ImageOps
bright ="C:\\images\\BRIGHTENED.png")
con = ImageOps.autocontrast(bright, cutoff = 0)
11. The highlighted line in the code is where the contrast is automatically set. The
autocontrast function computes a histogram of the input image. The cutoff
argument represents the percentage of lightest and darkest pixels to be trimmed
from this histogram. The image is then remapped.
What just happened?
Using the classes and functionality in the ImageEnhance module, we learned how to
increase or decrease the brightness and the contrast of an image. We also wrote code
to auto-contrast an image using functionality provided in the ImageOps module. The
things we learned here will be useful in the upcoming sections of this chapter.
Tweaking colors
Another useful operation performed on an image is adjusting the colors within it.
An image may contain one or more bands of image data. The image mode
contains information about the depth and type of the image pixel data. The most common
modes we will use in this chapter are RGB (true color, 3x8-bit pixel data), RGBA (true color
with transparency mask, 4x8-bit) and L (black and white, 8-bit).
In PIL, you can easily get information about the band data within an image. To get the
names and number of bands, the getbands() method of the class Image can be used. Here,
img is an instance of the class Image.
>>> img.getbands()
('R', 'G', 'B', 'A')
Time for action – swap colors within an image!
To understand some basic concepts, let's write code that just swaps the image band data.
1. Download the image 0165_3_15_COLOR_TWEAK.png and rename it to
COLOR_TWEAK.png.
2. Type the following code:
1 import Image
3 img ="C:\\images\\COLOR_TWEAK.png")
4 img = img.convert('RGBA')
5 r, g, b, alpha = img.split()
6 img = Image.merge("RGBA", (g, r, b, alpha))
3. Let's analyze this code now. On line 3, the Image instance is created as usual. Then,
we change the mode of the image to RGBA.
Here we should check whether the image already has that mode or whether this
conversion is possible. You can add that check as an exercise!
4. Next, the call to Image.split() creates separate instances of the Image class,
each containing a single band of data. Thus, we have four Image instances, r, g, b,
and alpha, corresponding to the red, green, and blue bands, and the alpha
channel respectively.
Enhancing Images
[ 60 ]
5. The code on line 6 does the main image processing. Image.merge takes the
mode as its first argument, whereas the second argument is a tuple of Image
instances containing band information. All the bands are required to have the same
size. As you can notice, we have swapped the order of the band data in the Image
instances r and g while specifying the second argument.
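What merge does per pixel can be mimicked on raw RGBA tuples (a pure-Python sketch of the swap, not PIL code):

```python
def swap_red_green(pixels):
    """Exchange the R and G values of each RGBA tuple, mirroring the
    effect of Image.merge("RGBA", (g, r, b, alpha))."""
    return [(g, r, b, a) for (r, g, b, a) in pixels]

# A predominantly red pixel becomes predominantly green.
print(swap_red_green([(200, 30, 10, 255)]))  # [(30, 200, 10, 255)]
```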
6. The original and resultant images thus obtained are compared in the next illustration.
The color of the flower now has a shade of green, and the grass behind the flower is
rendered with a shade of red.
As menoned in the Installaon and Download Prerequisites
secon, the images compared in the following illustraon will
appear idencal if you are reading a hard copy of this book. Please
download and refer to the supplementary PDF le Chapter 3
Supplementary Material.pdf. Here, the color images are
provided that will help you see the dierence.
Original (le) and the color swapped image (right).
What just happened?
We accomplished creang an image with its band data swapped. We learned how to use
PIL's Image.split() and Image.merge() to achieve this. However, this operaon was
performed on the whole image. In the next secon, we will learn how to apply color changes
to a specic color region.
Changing individual image band
In the previous secon, we saw how to change the data represented by the whole band. As
a result of this band swapping, the color of the ower was changed to a shade of green and
the grass color was rendered as a shade of red. What if we just want to change the color
of the ower and keep the color of the grass unchanged? To do this, we will make use of
Image.point funconality along with Image.paste operaon discussed in depth in the
previous chapter.
However, note that we need to be careful in specifying the color region that needs to be
changed. It may also depend on the image. Somemes, it will select some other regions
matching the specied color range, which we don't want.
Time for action – change the color of a ower
We will make use of the same flower image used in the previous section. As mentioned
earlier, our task is to change the color of the flower while keeping the grass color unchanged.
1. Add this code in a Python source file.
1 import Image
3 img ="C:\\images\\COLOR_TWEAK.png")
4 img = img.convert('RGBA')
5 r, g, b, alpha = img.split()
6 selection = r.point(lambda i: i > 120 and 150)
7"C:\\images\\COLOR_BAND_MASK.png")
8 r.paste(g, None, selection)
9 img = Image.merge("RGBA", (r, g, b, alpha))
10"C:\\images\\COLOR_CHANGE_BAND.png")
2. Lines 1 to 5 remain the same as seen earlier. On line 5, we split the original image,
creating four Image instances, each holding the data of a single band.
3. A new Image instance, selection, is created on line 6. This is an important operation
that holds the key to selectively modifying color! So let's see what this line of code
does. If you observe the original image, the flower region (well, most of it) is
rendered with a shade of red. So, we have called the point(function)
method on the Image instance r. The point method takes a function as an
argument and maps the image through this function. It returns a new Image instance.
4. What does the lambda function on line 6 do? Internally, PIL's point function does
something of this sort:
lst = map(function, range(256)) * no_of_bands
In this example, function is nothing but the lambda function. The no_of_bands
for the image is 1. Thus, line 6 is used to select a region where the red value is
greater than 120. The lst is a list which, in this case, has the values up to index 120
as False, whereas the remaining values are 150. The value of 150 plays a role in
determining the final color when we perform the paste operation.
5. The image mask thus created after the application of the point operation is shown
in the following illustration. The white region in this image represents the region
captured by the point operation that we just performed. Only the white region
will undergo change when we perform the paste operation next.
6. On line 8, we perform a paste operation, discussed in the last chapter. Here, the
image g is pasted onto the image r using the mask selection. As a result, the band data
of image r is modified.
7. Finally, a new Image instance is created using the merge operation, by making
use of the individual r, g, b, and alpha image instances containing the new
band information.
8. The original and final processed images are compared in the next illustration.
The new flower color looks as cool as the original color, doesn't it?
As menoned in the Installaon and download prerequisites secon, the
images compared in the following illustraon will appear idencal if you
are reading a hard copy of this book. Please download and refer to the
supplementary PDF le Chapter 3 Supplementary Material.pdf.
The color images are provided that will help you see the dierence.
What just happened?
We worked out an example that modified a selective color region. Individual image band
data was processed to accomplish this task. With the help of the point, paste, and merge
operations in PIL's Image module, we accomplished changing the color of the flower in
the provided image.
Gray scale images
If you want to give a nostalgic effect to an image, one of the many things you can do
is convert it to gray scale. There is more than one way to create a gray scale image in
PIL. When the mode is specified as L, the resultant image is gray scale. The basic syntax
to convert color images to black and white is:
img = img.convert('L')
Alternatively, we can use the functionality provided in the ImageOps module.
img = ImageOps.grayscale(img)
If you are creating the image from scratch, the syntax is:
img ='L', size)
The following illustraon shows the original and the converted gray scale images created
using one of these techniques.
Please download and refer to the supplementary PDF le Chapter 3
Supplementary Material.pdf. The color images are provided
that will help you see the dierence between the following images.
Original and gray scale images of a bridge:
Cook up negatives
Creang a negave of an image is straighorward. We just need to invert each color pixel.
Therefore, if you have a color x at a pixel, the negave image will have (255 x) at that pixel.
The ImageOps module makes it very simple. The following line of code creates a negave of
an image.
img = ImageOps.invert(img)
Here is the result of this operaon:
Original image (le) and its negave (right).
Blending
Have you ever wished to see yourself in a family photo, taken at a time when you were not
around? Or what if you just want to see yourself at the top of Mount Everest, at least in a
picture? Well, it is possible to do this digitally, using functionality provided in PIL such
as blending, composite image processing, and so on.
In this section, we will learn how to blend images together. As the name suggests, blending
means mixing two compatible images to create a new image. The blend functionality in PIL
creates a new image using two input images of the same size and mode. Internally, the two
input images are interpolated using a constant value of alpha.
In the PIL documentation, it is formulated as:
blended_image = in_image1 * (1.0 - alpha) + in_image2 * alpha
Looking at this formula, it is clear that alpha = 1.0 will make the blended image the same
as in_image2, whereas alpha = 0.0 returns in_image1 as the blended image.
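The formula is easy to verify for a single pair of band values (plain Python, mirroring the equation from the PIL documentation):

```python
def blend_value(v1, v2, alpha):
    """Interpolate two band values with a constant alpha factor,
    as Image.blend does for every pixel."""
    return v1 * (1.0 - alpha) + v2 * alpha

print(blend_value(100, 200, 0.0))  # 100.0 -> the first image
print(blend_value(100, 200, 1.0))  # 200.0 -> the second image
print(blend_value(100, 200, 0.3))  # 130.0 -> mostly the first image
```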
Time for action – blending two images
Somemes, the combined eect of two images mixed together makes a big impact
compared to viewing the same images dierently. Now it's me to give way to your
imaginaon by blending two pictures together. In this example, our resultant image shows
birds ying over the Mackinac bridge in Michigan. However, where did they come from? The
birds were not there in the original image of the bridge.
1. Download the following les from Packt website: 0165_3_28_BRIDGE2.png
and 0165_3_29_BIRDS2.png. Rename these les as BRIDGE2.png and
BIRDS2.png respecvely.
2. Add the following code in a Python source file.
1 import Image
3 img1 ="C:\\images\\BRIDGE2.png")
4 img1 = img1.convert('RGBA')
6 img2 ="C:\\images\\BIRDS2.png")
7 img2 = img2.convert('RGBA')
9 img = Image.blend(img1, img2, 0.3)
11"C:\\images\\BLEND.png")
3. The next illustraon shows the two images before blending, represented by img1
and img2 in the code.
Individual images of a bridge and ying birds, before blending.
4. The lines 3 to 7 open the two input images to be blended. Noce that we have
converted both the images RGBA. It need not be necessarily RGBA mode. We can
specify the modes such as 'RGB' or 'L'. However, it is required to have both the
images with same size and mode.
5. The images are blended on line 9 using the Image.blend method in PIL. The rst
two arguments in the blend method are two Image objects represenng the two
images to be blended. The third argument denes the transparency factor alpha. In
this example, the image of the bridge is the main image we want to focus on. Thus,
the factor alpha is dened such that more transparency is applied to the image of
the ying birds while creang the nal image. The alpha factor can have a value
between 0.0 to 1.0. Note that, while rendering the output image, the second
image, img2, is mulplied by this alpha value whereas the rst image is
mulplied by 1 - alpha. This can be represented by the following equaon.
blended_img = img1 * (1 – alpha) + img2* alpha
Thus, if we select an alpha factor of, for instance, 0.8, it means that the birds will
appear more opaque compared to the bridge. Try changing the alpha factor to see
how it changes the resultant image. The resultant image with alpha = 0.3 is:
Blended image showing birds flying over a bridge.
6. The picture appears a bit dull due to the transparency effect applied while creating
the image. If you convert the input images to mode L, the resultant image will
look better; however, it will be rendered as gray scale. This is shown in the
next illustration.
Blended gray scale image when both the input images have mode L.
What just happened?
Blending is an important image enhancement feature. With the help of examples, we
accomplished creating blended images. We learned to use the Image.blend method and
applied the transparency factor alpha to achieve this task. The technique learned in this
section will be very useful throughout this chapter. In the next section, we will apply the
blending technique to create a transparent image.
Creating transparent images
In the previous secon, we learned how to blend two images together. In this secon, we
will go one step further and see how the same blend funconality can be used to create a
transparent image! The images with mode RGBA dene an alpha band. The transparency
of the image can be changed by tweaking this band data. Image.putalpha() method
allows dening new data for the alpha band of an image. We will see how to perform
point operaon to achieve the same eect.
Time for action – create transparency
Let's write a few lines of code that add a transparency effect to an input image.
1. We will use one of the images used in Chapter 2. Download 0165_3_25_SMILEY.png
and rename it to SMILEY.png.
2. Use the following code:
1 import Image
3 def addTransparency(img, factor = 0.7):
4     img = img.convert('RGBA')
5     img_blender ='RGBA', img.size, (0,0,0,0))
6     img = Image.blend(img_blender, img, factor)
7     return img
9 img ="C:\\images\\SMILEY.png")
11 img = addTransparency(img, factor = 0.7)
3. In this example, the addTransparency() function takes the img instance as input
and returns a new image instance with the desired level of transparency.
4. Now let's see how this function works. On line 4, we first convert the image mode to
RGBA. As discussed in an earlier section, you can add a conditional here to check
whether the image is already in RGBA mode.
5. Next, we create a new Image class instance, img_blender, using the
method. It has the same size and mode as the input image. The third argument
represents the color. Here, we specify the transparency (the fourth band value) as 0.
6. On line 6, the two images, img (the input image) and img_blender, are blended together
by applying a constant alpha value. The function then returns this modified
Image instance.
7. The original image and the one with the transparency effect are compared. The
images are screenshots of the images opened in the GIMP editor. This is done
so that you clearly understand the effect of transparency. The checkered pattern
in these images represents the canvas. Notice how the canvas appears in the
transparent image.
8. There is another simple way to add transparency to an image, using the
Image.point functionality! Enter the following code in a Python source file and execute it.
1 import Image
2 img ="C:\\images\\SMILEY.png")
3 r, g, b, alpha = img.split()
4 alpha = alpha.point(lambda i: i > 0 and 178)
5 img.putalpha(alpha)
6"C:\\images\\Transparent_SMILEY.png")
9. In this new code, we split the original image into four new image instances, each
holding the data of one image band (r, g, b, or alpha). Note that we are assuming
here that the mode of the image is RGBA. If it is not, you need to convert the image
to RGBA! As an exercise, you can add that check in the code.
10. Next, on line 4, the Image.point method is called. The lambda function operates
on the alpha band data. It sets the value to 178. This is roughly equal to the alpha
factor of 0.7 that we set earlier; it is computed as int(255 * 0.7). In the
Changing individual image band section, where we learned to modify colors
within images, the point operation was discussed thoroughly.
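The correspondence between a blend factor and an 8-bit alpha band value is worth pinning down (a one-line helper of our own, not a PIL function):

```python
def factor_to_band_value(factor):
    """Convert a transparency factor in 0.0-1.0 to an 8-bit band value."""
    return int(255 * factor)

# The 178 used on line 4 above comes from a factor of 0.7.
print(factor_to_band_value(0.7))  # 178
```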
11. On line 5, we put the new alpha band data back in img. The resultant images using
the blend and point functionality are shown in the next illustration.
Image before and after adding transparency.
What just happened?
We accomplished adding a transparency effect to an image. This is a very useful image
enhancement that we need from time to time. We learned how to create a transparent image
using two different techniques, namely, the Image.blend functionality and the Image.point
operation. The knowledge gained in this section will be applied later in this chapter.
Making composites with image mask
So far, we have seen how to blend two images together. This was done using the
Image.blend operation, where the two input images were blended using a constant
alpha transparency factor. In this section, we will learn another technique to combine two
images. Here, instead of a constant alpha factor, an image instance that defines
a transparency mask is used as the third argument. Another difference is that the input
images need not have the same mode. For instance, the first image can have mode L
and the second mode RGBA. The syntax to create composite images is:
outImage = Image.composite(img1, img2, mask)
Here, the arguments to the composite method are Image instances. The mask is used as the
alpha. The mode of the mask image instance can be 1, L, or RGBA.
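Per pixel, composite behaves like blend with a spatially varying alpha read from the mask. A plain-Python sketch of the idea (PIL's handling of the different mask modes differs in detail):

```python
def composite_value(v1, v2, mask_value):
    """Combine two band values using an 8-bit mask value as the
    per-pixel alpha: mask 255 selects the first image, 0 the second."""
    alpha = mask_value / 255.0
    return v1 * alpha + v2 * (1.0 - alpha)

print(composite_value(10, 250, 255))  # 10.0  -> first image wins
print(composite_value(10, 250, 0))    # 250.0 -> second image wins
```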
Time for action – making composites with image mask
We will mix the same two images blended in an earlier section. Just to try out something
different, in the composite image we will focus on the flying birds instead of the bridge.
1. We will use the same set of input images as used in the Blending section.
1 import Image
3 img1 ="C:\\images\\BRIDGE2.png")
4 img1 = img1.convert('RGBA')
6 img2 ="C:\\images\\BIRDS2.png")
7 img2 = img2.convert('RGBA')
9 r, g, b, alpha = img2.split()
10 alpha = alpha.point(lambda i: i > 0 and 204)
12 img = Image.composite(img2, img1, alpha)
2. The code unl line 7 is idencal to the one illustrated in the blending example. Note
that the two input images need not have the same mode. On line 10, the Image.
point method is called. The lambda funcon operates on the alpha band data.
The code on lines 9 and 10 is similar to that illustrated in the secon Creang
Transparent Images. Please refer to that secon for further details. The only
dierence is that the pixel value is set as 204. This modies the band data in the
image instance alpha. This value of 204 is roughly equivalent to the alpha factor
of 0.7 if the image were to be blended. What this implies is the bridge will have a
fading eect and the ying birds will appear prominently in the composite image.
3. One thing you will noce here is we are not pung the modied alpha band data
back in img2. Instead, on line 12, the composite image is created using the mask
as alpha.
4. The resultant composite image is shown in the next illustration, with emphasis on
the image of the flying birds.
What just happened?
We learned how to create an image by combining two images using an alpha mask. This was
accomplished by using the Image.composite functionality.
Project: Watermark Maker Tool
We have now learned enough image enhancement techniques to take up a simple project
applying them. Let's create a simple command-line utility, a "Watermark Maker
Tool". Although we call it a "Watermark Maker", it actually provides some more useful
features. Using this utility, you can add a date stamp to the image (the date on which
the image was enhanced using this tool). It also enables embedding custom text within
an image. The tool can be run on the command line using the following syntax:
python [options]
Where, the [options] are as follows:
--image1: The file path of the main image that provides the canvas.
--waterMark: The file path of the watermark image (if any).
--mark_pos: The coordinates of the top-left corner of the watermark image to be
embedded. The values should be specified in double quotes, like "100, 50".
--text: The text that should appear in the output image.
--text_pos: The coordinates of the top-left corner of the text to be embedded.
The values should be specified in double quotes, like "100, 50".
--transparency: The transparency factor for the watermark (if any).
--dateStamp: Flag (True or False) that determines whether to insert a date
stamp in the image. If True, the date stamp at the time this image was processed
will be inserted.
The following is an example that shows how to run this tool with the options specified.
python --image1="C:\foo.png"
--watermark="C:\watermark.png"
--mark_pos="200, 200"
--text="My Text"
--text_pos="10, 10"
This creates an output image file WATERMARK.png, with a watermark and text at the
specified anchor points within the image.
Time for action – Watermark Maker Tool
Think about all the methods we would need to accomplish this. The first thing that comes to
mind is a function that will process the command-line arguments mentioned earlier. Next,
we need to write code that can add a watermark image to the main image. Let's call it
addWaterMark(). On similar lines, we will need methods that add text and a date stamp
to the image. We will call them addText() and addDateStamp() respectively. With this
information, we will develop code to make this work. In this project, we will encapsulate
this functionality in a class, but it is not necessary. We do so to make this tool extensible
for future use.
1. Download the code file for this project from the Packt website. It has the code required in this
project. Just keep it for further use. Some of the methods will not be discussed in
this section. If you encounter difficulties while developing those methods, you can
always go back and refer to this file.
2. Open a new Python source file and declare the following class and its methods. Just
create empty methods for now (each body is simply pass); we will expand them as we proceed.
import Image, ImageDraw, ImageFont
import os, sys
import getopt
from datetime import date

class WaterMarkMaker:
    def __init__(self):
        pass
    def addText(self):
        pass
    def addDateStamp(self):
        pass
    def _addTextWorker(self, txt, dateStamp = False):
        pass
    def addWaterMark(self):
        pass
    def addTransparency(self, img):
        pass
    def createImageObjects(self):
        pass
    def _getMarkPosition(self, canvasImage, markImage):
        pass
    def processArgs(self):
        pass
    def printUsage(self):
        pass
3. Next, we will write code in the constructor of this class.
def __init__(self):
    # Image paths
    self.waterMarkPath = ''
    self.mainImgPath = ''
    # Text to be embedded
    self.text = ''
    # Transparency Factor
    self.t_factor = 0.5
    # Anchor point for embedded text
    self.text_pos = (0, 0)
    # Anchor point for watermark.
    self.mark_pos = None
    # Date stamp
    self.dateStamp = False
    # Image objects
    self.waterMark = None
    self.mainImage = None
    if self.dateStamp:
        self.addDateStamp()
4. The code is self-explanatory. First, all the necessary attributes are initialized, and
then the relevant methods are called to create the image with the watermark and/or
the embedded text. Let's write the methods in the order in which they are called
in the constructor.
5. The processArgs() method processes the command-line arguments. You
can write this method as an exercise. Alternatively, you can use the code in the
reference file from the Packt website. The process-arguments method
should make the assignments shown in the following table. In the reference file, the
getopt module is used to process these arguments. Alternatively, you can use
OptionParser in the optparse module of Python.
Argument       Value
image1         self.mainImgPath
waterMark      self.waterMarkPath
mark_pos       self.mark_pos
text           self.text
text_pos       self.text_pos
transparency   self.t_factor
dateStamp      self.dateStamp
6. The printUsage() method just prints how to run this tool. You can easily write
that method.
7. Let's review the addText() and _addTextWorker() methods now. Note that
some of the code comments are removed from the code samples for clarity. You
can refer to the code in the reference file for detailed comments.
def addText(self):
    if not self.text:
        return
    if self.mainImage is None:
        print "\n Main Image not defined. Returning."
        return
    txt = self.text
    self._addTextWorker(txt)
The addText() method simply calls _addTextWorker(), providing the
self.text argument received from the command line.
8. The _addTextWorker() method does the main processing that embeds the text within the
image. This method is shown in the following code:
1 def _addTextWorker(self, txt, dateStamp = False):
2     size = self.mainImage.size
3     color = (0, 0, 0)
4     textFont = ImageFont.truetype("arial.ttf", 50)
6     # Create an ImageDraw instance to draw the text.
7     imgDrawer = ImageDraw.Draw(self.mainImage)
8     textSize = imgDrawer.textsize(txt, textFont)
10     if dateStamp:
11         pos_x = min(10, size[0])
12         pos_y = size[1] - textSize[1]
13         pos = (pos_x, pos_y)
14     else:
15         # We need to add text. Use self.text_pos
16         pos = self.text_pos
17     # finally add the text
18     imgDrawer.text(pos, txt, font=textFont)
20     if (textSize[0] > size[0]
21             or textSize[1] > size[1]):
22         print ("\n Warning, the specified text is "
23                "going out of bounds.")
In Chapter 2, we created a new image containing a text string. It read "Not really
a fancy text". Do you remember? Here, we have written similar code with some
improvements. The function ImageDraw.Draw takes self.mainImage
(an Image instance) as an argument to create a Draw instance, imgDrawer.
On line 18, the text is embedded at the given position using the given font. The
text() method of the Draw instance takes three arguments, namely, the position, the
text, and the font. In the previous chapter, we already made use of the first two
arguments. The third argument, font, is an instance of the ImageFont class in PIL.
On line 4, we create this instance, specifying a font type (arial.ttf) and a font
size (50). The given text string is now added onto the main image!
9. The next method we will discuss is addDateStamp(). It calls the same
_addTextWorker() in the end. However, the placement of this date stamp is
fixed at the bottom-left corner of the image, and of course, we create our date string
using Python's datetime module. The code is illustrated below, along with the
import statement declared earlier.
from datetime import date
def addDateStamp(self):
    today =
    time_tpl = today.timetuple()
    year, month, day = map(str, time_tpl[:3])
    datestamp = "%s/%s/%s" % (year, month, day)
    self._addTextWorker(datestamp, dateStamp = True)
The first line of code in this method creates a date instance, today, holding today's
date; its representation looks like, 1, 20).
Next, we call the timetuple method of the date instance. The first three values in this
tuple are the year, month, and day respectively; the [:3] slice picks exactly those three.
The rest of the code is just the processing of the date stamp as a text string, and then
calling the main worker method just discussed.
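The date-string processing can be tried on its own with a fixed date (standard library only; a fixed date stands in for so the output is reproducible):

```python
from datetime import date

# A fixed date stands in for
d = date(2010, 1, 20)
# timetuple() starts with (year, month, day, ...); slice off the first three.
year, month, day = map(str, d.timetuple()[:3])
datestamp = "%s/%s/%s" % (year, month, day)
print(datestamp)  # 2010/1/20
```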
10. Now we will review the code in the addWaterMark() method. A watermark is
typically a semi-transparent image that appears in the main image. There are two
different approaches to creating a watermark. The following code
considers both of these approaches.
1 def addWaterMark(self):
2     # There is more than one way to achieve creating a
3     # watermark. The following flag, if True, will use
4     # Image.composite to create the watermark instead of a
5     # simple Image.paste
6     using_composite = False
8     if self.waterMark is None:
9         return
10     # Add Transparency
11     self.waterMark = self.addTransparency(self.waterMark)
12     # Get the anchor point
13     pos_x, pos_y = self._getMarkPosition(self.mainImage,
14                                          self.waterMark)
15     # Create the watermark
16     if not using_composite:
17         # Paste the image using the transparent
18         # watermark image as the mask.
19         self.mainImage.paste(self.waterMark,
20                              (pos_x, pos_y),
21                              self.waterMark)
22     else:
23         # Alternate method to create the watermark,
24         # using Image.composite. Create a new canvas.
25         canvas ='RGBA',
26                            self.mainImage.size,
27                            (0,0,0,0))
28         # Paste the watermark on the canvas
29         canvas.paste(self.waterMark, (pos_x, pos_y))
30         # Create a composite image
31         self.mainImage = Image.composite(canvas,
32                                          self.mainImage,
33                                          canvas)
11. To add a watermark, we first make the watermark image transparent. This is
accomplished by calling the addTransparency() method, which also changes the
mode of the image to RGBA. The method is shown here. It is almost identical to the
one we developed in an earlier section, where an image was made transparent using
the blending functionality of PIL.
def addTransparency(self, img):
    img = img.convert('RGBA')
    img_blender ='RGBA', img.size, (0,0,0,0))
    img = Image.blend(img_blender, img, self.t_factor)
    return img
Next, on line 13, we determine the anchor point on the main image where the top-left
corner of the watermark will appear. By default, we match the bottom-left
corner of the watermark with that of the main image. You can review the code for the
method _getMarkPosition() in the reference file to see how this is done.
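Since _getMarkPosition() itself lives in the downloaded file, here is a hypothetical sketch of its default behavior as described above (matching the watermark's bottom-left corner with the main image's; the helper name and signature here are assumptions, not the book's actual code):

```python
def get_mark_position(canvas_size, mark_size):
    """Return the top-left anchor that puts a watermark of mark_size
    in the bottom-left corner of a canvas of canvas_size.
    Hypothetical stand-in for the real _getMarkPosition()."""
    canvas_w, canvas_h = canvas_size
    mark_w, mark_h = mark_size
    # x stays at the left edge; y drops the mark to the bottom edge.
    return (0, canvas_h - mark_h)

# An 800x600 canvas with a 200x200 watermark anchors it at (0, 400).
print(get_mark_position((800, 600), (200, 200)))  # (0, 400)
```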
Moving forward, the code block between lines 16-21 creates the watermark using
the paste functionality. This is one way to create the image with a watermark. The
arguments provided to the Image.paste function are the image to be pasted, the
anchor point, and the mask. The mask is specified as the watermark image itself, so as
to account for its transparency. Otherwise, the watermark image would appear opaque.
The resultant images with and without the image mask specification are compared in the
following illustration.
Resultant images using the Image.paste operation, created with and
without a mask.
Next, in the else condition block (lines 22 to 33), we use the Image.composite
functionality in PIL to embed the watermark. The dimensions of the example
watermark image used here are 200x200 pixels, whereas the dimensions of the
main image are 800x600 pixels. To use the composite() method, we need to
make these images the same size, and yet make sure to paste the watermark at
the specified position. How do we achieve this? The first thing to do is to create a canvas
image to hold the watermark. The canvas image is of the same size as the
main image. The code block on lines 25-29 creates the canvas and pastes the watermark at
the appropriate location.
Finally, on line 31, the composite image is created using the canvas image instance
as the alpha mask.
12. Now let's run this tool! You can use your own image files for the main image or the
watermark. Alternatively, you can use the image 0165_3_34_KokanPeak_for_
WATERMARK.png as the main image and 0165_3_38_SMILEY_small.png as the
watermark image. The command-line arguments for this run are:
--image1="C:\images\KokanPeak_for_WATERMARK.png"
--text="Peak"
--text_pos="10, 10"
--waterMark="C:\images\SMILEY_small.png"
13. The resultant image with the text, date stamp, and the watermark is shown in the
next illustration.
Final processed image with text, date stamp, and a watermark.
What just happened?
We created a very useful utility that can add a watermark, and/or a text string, and/or a date
stamp to an input image. We used several of the image processing techniques learned in this
chapter, as well as in an earlier chapter on image processing. In particular, image enhancement
features such as blending, creating composite images, and adding transparency were applied to
accomplish this task. Additionally, we made use of common functionality such as pasting
an image, drawing text onto the image, and so on.
Have a go hero – do more with Watermark Maker Tool
Our Watermark Maker Tool needs an upgrade. Extend this application so that it supports
the following features:
1. The text or date stamp color is currently hardcoded. Add a new command-line
argument so that a text color can be specified as an optional argument.
2. Add some standard default options for specifying the anchor position of the text, date
stamp, and the watermark image. These options can be TOP_RIGHT, TOP_LEFT,
BOTTOM_RIGHT, and BOTTOM_LEFT.
3. The command-line options list is too long. Add code so that all the arguments can be
read from a text file.
4. Add support so that it can batch-process images to create the desired effect.
Applying image lters
In the previous chapter, filter argument was used while performing the image resize
operaon. This filter determined the quality of the output image. However, there were
only four filter opons available and the scope was limited to a resize operaon. In this
secon, some addional image enhancement lters will be introduced. These are predened
lters and can be directly used on any input image. Following is a basic syntax used for
applying a lter.
img ='foo.jpg')
filtered_image = img.filter(FILTER)
Here, we created a new image filtered_image by ltering imageby ltering image img . The FILTER
argument can be one of the predened lters in the ImageFilter module of PIL for
ltering the image data. PIL oers several predened image enhancement lters. These can
be broadly classied into the following categories. With the help of examples, we will learn
some of these in the coming secons.
Blurring and sharpening: BLUR, SHARPEN, SMOOTH, SMOOTH_MORE
Edge detecon and enhancement: EDGE_ENHANCE, EDGE_ENHANCE_MORE,
Distoron/special eects: EMBOSS
The le in the PIL source code denes the-menoned lter classes. You
can create your own custom lter by tweaking various arguments in these lter classes.
filterargs = size, scale, offset, kernel
Here, kernel is the convolution kernel. The 'convolution' is a mathematical
operation on the image matrix by the 'kernel' matrix to produce a resultant matrix.
The size of the kernel matrix is specified by the size argument, in the form (width, height).
This can be either (3, 3) or (5, 5) in the current PIL version. The result at each pixel is
divided by the scale argument, which is optional. The offset value, if specified,
is added to the result after the division by the scale argument.
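The kernel arithmetic is easy to demonstrate without PIL. The following pure-Python sketch (the convolve3x3 helper is written here purely for illustration and is not a PIL API) applies a 3x3 kernel with the scale and offset semantics described above to a tiny grayscale matrix:

```python
def convolve3x3(pixels, kernel, scale=None, offset=0):
    """Apply a 3x3 convolution kernel to a 2D grayscale matrix.

    Mirrors the filterargs semantics: each output pixel is the weighted
    sum of its 3x3 neighbourhood, divided by `scale` (defaulting to the
    sum of the kernel weights), plus `offset`. Border pixels are left
    unchanged for simplicity.
    """
    if scale is None:
        scale = sum(kernel) or 1
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky * 3 + kx] * pixels[y + ky - 1][x + kx - 1]
            out[y][x] = acc // scale + offset
    return out

# The SMOOTH kernel weights used by PIL.
smooth_kernel = (1, 1, 1,
                 1, 5, 1,
                 1, 1, 1)

# A tiny image with one bright "noise" pixel in the middle.
img = [[0, 0, 0],
       [0, 130, 0],
       [0, 0, 0]]
# The spike is damped: the center becomes (5 * 130) // 13 = 50.
print(convolve3x3(img, smooth_kernel))
```

This is, in essence, what img.filter(ImageFilter.SMOOTH) does over every pixel of a real image.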
In some of the image enhancement lter examples, we will create our own custom lter.
Enhancing Images
[ 82 ]
Smoothing an image means reducing the noise within the image data. For this, a certain
mathematical approximation is applied to the image data to recognize the important
patterns within it. The ImageFilter module defines the class SMOOTH for smoothing
an image. PIL specifies the following default filter arguments for the image-smoothing filter.
filterargs = (3, 3), 13, 0, (
    1, 1, 1,
    1, 5, 1,
    1, 1, 1)
Time for action – smoothing an image
Let's work out an example where a smoothing filter is applied to an image.
1. Download the image file 0165_3_Before_SMOOTHING.png from the Packt
website and save it as Before_SMOOTH.png.
2. This is a low-resolution image scanned from a developed photograph. As you can
see, there is a lot of salt-and-pepper noise in the image. We will apply a smoothing
filter to reduce some of this noise in the image data.
3. Add the following code in a Python file.
import ImageFilter
import Image

img ="C:\\images\\Before_SMOOTH.png")
img = img.filter(ImageFilter.SMOOTH)"C:\\images\\ch3\\After_SMOOTH.png")
4. The highlighted line in the code is where the smoothing filter is applied to the
image. The results are shown in the next illustration.
Picture before and after smoothing:
5. To reduce the noise further, you can use ImageFilter.SMOOTH_MORE
or try reapplying ImageFilter.SMOOTH multiple times until you get the
desired effect.
import ImageFilter
import Image

img ="C:\\images\\0165_3_2_Before_SMOOTH.png")
i = 0
while i < 5:
    img = img.filter(ImageFilter.SMOOTH)
    i += 1"C:\\images\\0165_3_3_After_SMOOTH_5X.png")
As you can observe in the illustration, the noise is further reduced but the
image appears a little bit hazy. Thus, one has to determine an appropriate
level of smoothness.
Comparison of the resultant image with single and multiple smoothing filters.
What just happened?
We learned how to reduce high-level noise from the image data using the smoothing filter in
the ImageFilter module.
In the earlier section, we learned image-smoothing techniques. If you want to bring out
the finer details within an image, a sharpening filter can be applied over the image.
Like the image-smoothing filters, PIL provides a predefined filter for sharpening, called
ImageFilter.SHARPEN. The basic syntax to sharpen an image is as follows:
img = img.filter(ImageFilter.SHARPEN)
You can try this filter on the image that was smoothed multiple times in the earlier section.
In general, blurring makes an image lose its focus. In PIL, the predefined filter for this is
ImageFilter.BLUR. This is typically useful if you want to fade out the background to
highlight some object in the foreground. The syntax is similar to the one used for the other filters.
img = img.filter(ImageFilter.BLUR)
The following illustration shows the effect of this filter.
Image before and after application of the blurring filter:
Edge detection and enhancements
In this section, we will learn about some general edge detection and enhancement filters. The
edge enhance filter improves the edge contrast. It increases the contrast of the region very
close to the edge, which makes the edge stand out. The edge detection algorithm looks for
discontinuities within the pixel data of the image. For example, it looks for a sharp change in
brightness to identify an edge.
Time for action – detecting and enhancing edges
Let's see how the edge detection and enhancement filters modify the data of a picture. The
photograph that we will use is a close-up of a leaf. The original photo is shown in the next
illustration. Applying an edge detection filter on this image creates a cool effect where only
the edges are highlighted and the remaining portion of the image is rendered as black.
1. Download the image 0165_3_6_Before_EDGE_ENHANCE.png from the Packt
website and save it as Before_EDGE_ENHANCE.png.
2. Add the following code in a Python file.
1 import Image
2 import ImageFilter
3 import os
4 paths = [ "C:\\images\\Before_EDGE_ENHANCE.png",
5           "C:\\images\\After_EDGE_ENHANCE.png",
6           "C:\\images\\EDGE_DETECTION_1.png",
7           "C:\\images\\EDGE_DETECTION_2.png"
8         ]
9 paths = map(os.path.normpath, paths)
11 (imgPath, outImgPath1,
12  outImgPath2, outImgPath3) = paths
13 img =
14 img1 = img.filter(ImageFilter.FIND_EDGES)
17 img2 = img.filter(ImageFilter.EDGE_ENHANCE)
20 img3 = img2.filter(ImageFilter.FIND_EDGES)
3. Line 14 modifies the image data using the FIND_EDGES filter and then the resulting
image is saved.
4. Next, we modify the original image data, so that more veins within the leaf become
visible. This is accomplished by the application of the EDGE_ENHANCE filter (line 17).
5. On line 20, the FIND_EDGES filter is applied to the edge-enhanced image. The
resultant images are compared in the next illustration.
a) First row: Images before and after application of the edge enhancement filter b)
Second row: The edges detected by the ImageFilter.FIND_EDGES filter.
What just happened?
We created an image with enhanced edges by applying the EDGE_ENHANCE filter in the
ImageFilter module. We also learned how to detect edges within the image using the
edge detection filter. In the next section, we will apply a special form of the edge filter that
highlights or darkens the detected edges within an image. It is called an embossing filter.
In image processing, embossing is a process that gives an image a 3D appearance. The edges
within the image appear raised above the image surface. This optical illusion is accomplished
by highlighting or darkening the edges within the image. The following illustration shows original
and embossed images. Notice how the edges along the characters in the embossed image
are either highlighted or darkened to give the desired effect.
The ImageFilter module provides a predefined filter, ImageFilter.EMBOSS, to achieve
the embossing effect for an image. The convolution kernel of this filter is of a (3, 3) size and
the default filter arguments are:
filterargs = (3, 3), 1, 128, (
    -1, 0, 0,
     0, 1, 0,
     0, 0, 0)
Time for action – embossing
1. Download the image 0165_3_4_Bird_EMBOSS.png from the Packt website and
save it as Bird_EMBOSS.png.
2. Add the following code in a Python file:
1 import os, sys
2 import Image
3 import ImageFilter
4 imgPath = "C:\\images\\Bird_EMBOSS.png"
5 outImgPath = "C:\\images\\Bird_EMBOSSED.png"
6 imgPath = os.path.normpath(imgPath)
7 outImgPath = os.path.normpath(outImgPath)
8 bird =
9 bird = bird.filter(ImageFilter.EMBOSS)
3. On line 9, the embossing filter ImageFilter.EMBOSS is applied to the
image object bird. The resultant embossed image of the bird is shown
in the next illustration.
Original and embossed images of a bird using ImageFilter.EMBOSS.
What just happened?
We applied an embossing lter on an image and created an embossed image. As seen in
previous secon, the lter modied the brightness of various edges to make them appear
highlighted or darkened. This created an opcal illusion where the image appeared raised
above the surface.
Adding a border
How would you prefer viewing a family photo? As a bare picture or enclosed in a nice photo
frame? In the ImageOps module, PIL provides preliminary functionality to add a plain border
around an image. Here is the syntax to achieve this:
img = ImageOps.expand(img, border, fill)
This code creates a border around the image. Internally, PIL just creates an image with
dimensions such that:
new_width = ( right_border_thickness + image_width +
              left_border_thickness )
new_height = ( top_border_thickness + image_height +
               bottom_border_thickness )
Then, the original image is pasted onto this new image to create the border effect. The
border argument in the preceding code specifies the border thickness in pixels. It is uniform
in this example and is set to 20 pixels for the left, right, top, and bottom borders. The fill
argument specifies the border color. It can be a number in the range 0 to 255 indicating the
pixel color, where 0 is a 'black' and 255 a 'white' border. Alternatively, you can specify a
string representing a color, such as 'red' for red color, and so on.
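The size arithmetic above can be sketched in a few lines of plain Python. The expanded_size helper below is a hypothetical function written for illustration; it mirrors, but is not part of, PIL's ImageOps:

```python
def expanded_size(width, height, border):
    """Compute the size of an image after adding a border.

    `border` is either a single int (uniform thickness on all four
    sides) or a (left, top, right, bottom) tuple of thicknesses.
    """
    if isinstance(border, int):
        left = top = right = bottom = border
    else:
        left, top, right, bottom = border
    # new_width  = left + image_width  + right
    # new_height = top  + image_height + bottom
    return (left + width + right, top + height + bottom)

# A uniform 20-pixel border around a 640x480 image:
print(expanded_size(640, 480, 20))  # (680, 520)
```

Each call to ImageOps.expand grows the image in exactly this way, which is why stacked borders (as in the photo frame example that follows) simply add their thicknesses.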
Time for action – enclosing a picture in a photoframe
Let's develop code that adds a frame around a picture.
1. Download the image 0165_3_15_COLOR_TWEAK.png and rename it to
FLOWER.png.
2. Add the following code in a Python source file. Make sure to modify the code
to specify the input and output paths appropriately.
1 import Image, ImageOps
2 img ="C:\\images\\FLOWER.png")
3 img = ImageOps.expand(img, border=20, fill='black')
4 img = ImageOps.expand(img, border=40, fill='silver')
5 img = ImageOps.expand(img, border=2, fill='black')
6"C:\\images\\PHOTOFRAME.png")
3. In this code snippet, three stacked borders are created. The innermost border layer
is rendered in black. This darker color is chosen intentionally.
4. Next, there is a middle layer of border, rendered in a lighter color (silver in
this case). This is done by the code on line 4. It is thicker than the innermost border.
5. The outermost border is created by the code on line 5. It is a very thin layer rendered
in black.
6. Together, these three layers of borders create an optical illusion of a photo frame, by
making the border appear raised above the original image.
7. The following image shows the result of adding this border to the specified input
image; it shows the image before and after enclosing it in a 'photo frame'.
What just happened?
We learned how to create a simple border around an image. By calling ImageOps.expand
multiple times, we created a multi-layered border with each layer of a different thickness and
color. With this, we accomplished creating an optical illusion where the picture appears to be
enclosed within a simple photo frame.
This chapter taught us several important image enhancement techniques, specifically:
With the help of ample examples, we learned how to adjust the color, brightness,
and contrast of an image.
We learned how to blend images together to create composites using an image mask and
how to add transparency.
We applied blending, pasting, and other techniques learned to develop an
interesting tool. We implemented features in this tool that enabled inserting
a watermark, text, or date stamp into an image.
A number of image enhancement filters were discussed. Using code snippets, we
learned how to reduce high-level noise from an image, enhance edges, add sharpening
or blurring effects, emboss an image, and so on.
We learned miscellaneous other useful image enhancements, such as creating
negatives and adding border effects to an image.
Fun with Animations
Cartoons have always fascinated the young and old alike. An animation
is where imaginary creatures become alive and take us to a totally
different world.
Animation is a sequence of frames displayed quickly one after the other. This
creates an optical illusion where the objects, for instance, appear to be moving
around. This chapter will introduce you to the fundamentals of developing
animations using Python and the Pyglet multimedia application development
framework. Pyglet is designed to do 3D operations, but we will use it for
developing very simple 2D animations in this book.
In this chapter, we shall:
Learn the basics of the Pyglet framework. This will be used to develop code to create or
play animations.
Learn how to play an existing animation file and create animations using a sequence
of images.
Work on the project 'Bowling animation', where the animation can be controlled using
inputs from the keyboard.
Develop code to create an animation using different regions of a single image.
Work on an exciting project that animates a car moving in a thunderstorm. This
project will bring together many of the important things covered throughout this chapter.
So let's get on with it.
Installation prerequisites
We will cover the prerequisites for the installation of Pyglet in this section.
Pyglet provides an API for multimedia application development using Python. It is an
OpenGL-based library, which works on multiple platforms. It is primarily used for developing
gaming applications and other graphically-rich applications. Pyglet can be downloaded from
the Pyglet website. Install Pyglet version 1.1.4 or later. The
Pyglet installation is pretty straightforward.
Windows platform
For Windows users, the Pyglet installation is straightforward: use the binary distribution
Pyglet 1.1.4.msi or later.
You should have Python 2.6 installed. For Python 2.4, there are some more
dependencies. We won't discuss them in this book, because we are using
Python 2.6 to build multimedia applications.
If you install Pyglet from the source, see the instructions under the next sub-section,
Other platforms.
Other platforms
The Pyglet website provides a binary distribution file for Mac OS X. Download and install
pyglet-1.1.4.dmg or later.
On Linux, install Pyglet 1.1.4 or later if it is available in the package repository of your
operating system. Otherwise, it can be installed from the source tarball as follows:
Download and extract the tarball pyglet-1.1.4.tar.gz or a later version.
Make sure that python is a recognizable command in the shell. Otherwise, add the
directory of the Python executable to the PATH environment variable.
In a shell window, change to the extracted directory and then run the
following command:
python install
Review the succeeding installation instructions using the README/install instruction
files in the Pyglet source tarball.
Chapter 4
[ 93 ]
If you have the package setuptools (
installed, the Pyglet installation should be very easy.
However, for this, you will need a runtime egg of Pyglet. But the egg file for
Pyglet is not available at If you get hold of a
Pyglet egg file, it can be installed by running the following command on Linux or
Mac OS X. You will need administrator access to install the package:
$ sudo easy_install -U pyglet
Summary of installation prerequisites
The following table illustrates the installation prerequisites depending on the version
and platform.

Package: Python
  Download location: http://python.
  Version: 2.6.4 (or any 2.6.x)
  Windows platform: Install using the binary distribution.
  Linux/Unix/OS X platforms: Install from a binary; also install the additional
  developer packages (for example, packages with python-devel in the package name
  on an rpm-based Linux), or build and install from the source tarball.

Package: Pyglet
  Download location: http://www.
  Version: 1.1.4 or later
  Windows platform: Install using the binary distribution (the .msi file).
  Linux/Unix/OS X platforms: Mac: Install using the disk image file (.dmg file).
  Linux: Build and install using the source tarball.
Testing the installation
Before proceeding further, ensure that Pyglet is installed properly. To test this, just start
Python from the command line and type the following:
>>> import pyglet
If this import is successful, we are all set to go!
A primer on Pyglet
Pyglet provides an API for multimedia application development using Python. It is an
OpenGL-based library that works on multiple platforms. It is primarily used for developing
gaming and other graphically-rich applications. We will cover some important aspects of
the Pyglet framework.
Important components
We will briefly discuss some of the important modules and packages of Pyglet that we will
use. Note that this is just a tiny chunk of the Pyglet framework. Please review the Pyglet
documentation to know more about its capabilities, as this is beyond the scope of this book.
The pyglet.window.Window module provides the user interface. It is used to create a
window with an OpenGL context. The Window class has API methods to handle various
events such as mouse and keyboard events. The window can be viewed in normal or full
screen mode. Here is a simple example of creating a Window instance. You can define a size
by specifying the width and height arguments in the constructor.
win = pyglet.window.Window()
The background color for the window can be set using the OpenGL call glClearColor,
as follows:, 1, 1, 1)
This sets a white background color. The first three arguments are the red, green, and blue
color values, whereas the last value represents the alpha. The following code will set up a
gray background color., 0.5, 0.5, 1)
The following illustration shows a screenshot of an empty window with a gray
background color.
The pyglet.image module enables the drawing of images on the screen. The following
code snippet shows a way to create an image and display it at a specified position within the
Pyglet window.
img = pyglet.image.load('my_image.bmp')
x, y, z = 0, 0, 0
img.blit(x, y, z)
A later section will cover some important operations supported by the
pyglet.image module.
The pyglet.sprite module is another important module. It is used to display an image or
an animation frame within a Pyglet window discussed earlier. A sprite allows us to position
an image anywhere within the Pyglet window. A sprite can also be rotated and scaled. It is
possible to create multiple sprites of the same image and place them at different locations
and with different orientations inside the window.
The Animation module is a part of the pyglet.image package. As the name indicates,
pyglet.image.Animation is used to create an animation from one or more image frames. There
are different ways to create an animation. For example, it can be created from a sequence
of images or using AnimationFrame objects. We will study these techniques later in the
chapter. An animation sprite can be created and displayed within the Pyglet window.
An AnimationFrame creates a single frame of an animation from a given image. An animation
can be created from such AnimationFrame objects. The following line of code shows an example.
animation = pyglet.image.Animation(anim_frames)
anim_frames is a list containing instances of AnimationFrame.
Among many other things, the pyglet.clock module is used for scheduling functions to be
called at a specified time. For example, the following code calls a method moveObjects ten
times every second.
pyglet.clock.schedule_interval(moveObjects, 1.0/10)
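The interval arithmetic is worth making concrete: an interval of 1.0/10 means ten calls per second. The toy scheduler below is a pure-Python stand-in written purely for illustration (it is not the pyglet.clock implementation); it steps through simulated time and counts the callbacks:

```python
def schedule_interval(func, interval, duration):
    """Toy model of an interval scheduler: invoke func(dt) once per
    `interval` seconds over `duration` seconds of simulated time."""
    t = 0.0
    calls = 0
    # Step simulated time forward; each step triggers one callback.
    # The small epsilon guards against floating-point accumulation error.
    while t + interval <= duration + 1e-9:
        t += interval
        func(interval)
        calls += 1
    return calls

ticks = []
n = schedule_interval(lambda dt: ticks.append(dt), 1.0 / 10, 1.0)
print(n)  # 10 callbacks over one simulated second
```

In Pyglet itself, dt passed to the callback is the real elapsed time since the previous call, which may drift slightly from the nominal interval.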
Displaying an image
In the Image sub-section, we learned how to draw an image using image.blit. However,
image blitting is a less efficient way of drawing images. There is a better and preferred
way to display an image: creating an instance of Sprite. Multiple Sprite objects
can be created for drawing the same image. For example, the same image might need
to be displayed at various locations within the window. Each of these images should be
represented by a separate Sprite instance. The following simple program just loads an
image and displays the Sprite instance representing this image on the screen.
1 import pyglet
3 car_img = pyglet.image.load('images/car.png')
4 carSprite = pyglet.sprite.Sprite(car_img)
5 window = pyglet.window.Window()
6, 1, 1, 1)
8 @window.event
9 def on_draw():
10     window.clear()
11     carSprite.draw()
On line 3, the image is opened using the pyglet.image.load call. A Sprite instance
corresponding to this image is created on line 4. The code on line 6 sets a white background
for the window. The on_draw is an API method that is called when the window needs to be
redrawn. Here, the image sprite is drawn on the screen. The next illustration shows the loaded
image within a Pyglet window.
In various examples in this chapter and others, the file path strings are
hardcoded. We have used forward slashes for the file paths. Although this works
on the Windows platform, the convention there is to use backward slashes. For example,
images/car.png would be represented as images\car.png. Additionally,
you can also build a complete path to the file by using the os.path.join
method in Python. Regardless of which slashes you use, os.path.normpath
will make sure it modifies the slashes to fit the ones used
for the platform. The use of os.path.normpath is illustrated in the
following snippet:
import os
original_path = 'C:/images/car.png'
new_path = os.path.normpath(original_path)
The preceding image illustrates a Pyglet window showing a still image.
Mouse and keyboard controls
The Window module of Pyglet implements some API methods that enable user input to a
playing animation. The API methods such as on_mouse_press and on_key_press are
used to capture mouse and keyboard events during the animation. These methods can be
overridden to perform a specific operation.
Adding sound effects
The media module of Pyglet supports audio and video playback. The following code loads a
media file and plays it during the animation.
1 background_sound =
2     'C:/AudioFiles/background.mp3',
3     streaming=False)
The second optional argument, provided on line 3, decodes the media file completely in
memory at the time the media is loaded. This is important if the media needs to be played
several times during the animation. The API method play() starts streaming the specified
media file.
Animations with Pyglet
The Pyglet framework provides a number of modules required to develop animations. Many
of these were discussed briefly in the earlier sections. Let's now learn techniques to create 2D
animations using Pyglet.
Viewing an existing animation
If you already have an animation in, for example, the .gif file format, it can be loaded and
displayed directly with Pyglet. The API method to use here is pyglet.image.load_animation.
Time for action – viewing an existing animation
This is going to be a short exercise. The goal of this section is to develop a primary
understanding of the use of Pyglet for viewing animations. So let's get on with it.
1. Download the file from the Packt website. Also download
the file SimpleAnimation.gif and place it in a sub-directory images. The code is
illustrated as follows:
1 import pyglet
3 animation = pyglet.image.load_animation(
4     "images/SimpleAnimation.gif")
6 # Create a sprite object as an instance of this animation.
7 animSprite = pyglet.sprite.Sprite(animation)
9 # The main pyglet window with OpenGL context
10 w = animSprite.width
11 h = animSprite.height
12 win = pyglet.window.Window(width=w, height=h)
14 # r, g, b color values and transparency for the background
15 r, g, b, alpha = 0.5, 0.5, 0.8, 0.5
17 # OpenGL method for setting the background.
18, g, b, alpha)
20 # Draw the sprite in the API method on_draw of
21 # pyglet.window.Window
22 @win.event
23 def on_draw():
24     win.clear()
25     animSprite.draw()
27
The code is self-explanatory. On line 3, the API method image.load_animation
creates an instance of the class image.Animation using the
specified animation file. For this animation, a Sprite object is created on
line 7. The Pyglet window created on line 12 will be used to display the
animation. The size of this window is specified by the height and width of
the animSprite. The background color for the window is set using the OpenGL
call glClearColor.
2. Next, we need to draw this animation sprite into the Pyglet window. The
pyglet.window.Window defines the API method on_draw which gets called when an
event occurs. The call to the draw() method of the animation Sprite is made on
line 25 to render the animation on screen. The code on line 22 is important. The
decorator @win.event allows us to modify the on_draw API method of
pyglet.window.Window when an event occurs. Finally, the code on line 27 runs this
You can create your own animation file like SimpleAnimation.gif
using freely available image editing software packages like GIMP.
This animation file was created using GIMP 2.6.7, by drawing each
of the characters on a separate layer and then blending all the layers
using Filters | Animation | Blend.
3. Put the file along with the animation file
SimpleAnimation.gif in the same directory and then run the program
as follows:
This will show the animation in a Pyglet window. You can use a different
animation file instead of SimpleAnimation.gif. Just modify the related
code in this file or add code to accept any GIF file as a command-line
argument for this program. The next illustration shows some of the
frames from this animation at different time intervals.
The preceding image is a screen capture of the running animation at different time intervals.
What just happened?
We worked out an example where an already created animation file was loaded and viewed
using Pyglet. This short exercise taught us some preliminary things about viewing animations
using Pyglet. For example, we learned how to create a Pyglet window and load an animation
using a pyglet.sprite.Sprite object. These fundamentals will be used throughout this chapter.
Animation using a sequence of images
The API method Animation.from_image_sequence enables the creation of an animation
using a bunch of sequential images. Each of the images is displayed as a frame in the
animation, one after the other. The time for which each frame is displayed can be specified
as an argument while creating the animation object. It can also be set after the animation
instance is created.
Time for action – animation using a sequence of images
Let's develop a tool that can create an animation and display it on the screen. This tool will
create and display the animation using the given image files. Each of the image files will
be displayed as a frame in the animation for a specified amount of time. This is going to
be a fun little animation that shows a grandfather clock with a pendulum. We will animate
the pendulum oscillations while keeping the other things, including the clock dial, still. This
animation has only three image frames; two of them show the pendulum at opposite
extremes. These images are sequenced as shown in the next illustration.
Clock image frames to be animated appear in the preceding image.
1. Download the file from the Packt website.
2. The code in this file is presented below.
1 import pyglet
3 image_frames = ('images/clock1.png',
4                 'images/clock2.png',
5                 'images/clock3.png')
7 # Create the list of pyglet images
8 images = map(lambda img: pyglet.image.load(img),
9              image_frames)
11 # Each of the image frames will be displayed for 0.33
12 # seconds.
13 # 0.33 seconds is chosen so that the pendulum in the clock
14 # animation completes one oscillation in ~ 1 second!
16 animation = pyglet.image.Animation.from_image_sequence(
17     images, 0.33)
18 # Create a sprite instance.
19 animSprite = pyglet.sprite.Sprite(animation)
21 # The main pyglet window with OpenGL context
22 w = animSprite.width
23 h = animSprite.height
24 win = pyglet.window.Window(width=w, height=h)
26 # Set window background color to white.
27, 1, 1, 1)
29 # The @win.event is a decorator that helps modify the API
30 # methods such as
31 # on_draw, called when a draw event occurs.
32 @win.event
33 def on_draw():
34     win.clear()
35     animSprite.draw()
The tuple image_frames contains the paths of the images. The map
function call on line 8 creates pyglet.image objects corresponding to
each of the image paths and stores the resultant images in a list. On
line 16, the animation is created using the API method
Animation.from_image_sequence. This method takes the list of image objects
as an argument. The other optional argument is the time in seconds for
which each of the frames will be shown. We set this time to 0.33 seconds
per image so that the total time for a complete animation loop is nearly 1
second. Thus, in the animation, one complete oscillation of the pendulum
will finish in about one second. We already discussed the rest of the
code in an earlier section.
3. Place the image files in a sub-directory images within the directory in which
the file is placed. Then run the program using:
You will see a clock with an oscillating pendulum in the window. The
animation will continue in a loop; closing the window will end it.
What just happened?
By rapidly displaying still images, we just created something like a 'flipbook' cartoon! We
developed a simple utility that takes a sequence of images as an input and creates an
animation using Pyglet. To accomplish this task, we used
Animation.from_image_sequence to create the animation and re-used most of the
framework from the Viewing an existing animation section.
Single image animation
Imagine that you are creating a cartoon movie where you want to animate the motion of
an arrow or a bullet hitting a target. In such cases, typically there is just a single image. The
desired animation effect is accomplished by performing an appropriate translation or
rotation of the image.
Time for action – bouncing ball animation
Let's create a simple animation of a 'bouncing ball'. We will use a single image file,
ball.png, which can be downloaded from the Packt website. The dimensions of this
image in pixels are 200x200, created on a transparent background. The following screenshot
shows this image opened in the GIMP image editor. The three dots on the ball identify its side.
We will see why this is needed. Imagine this as a ball used in a bowling game.
The image of a ball opened in GIMP appears as shown in the preceding image. The ball size
in pixels is 200x200.
1. Download the files and ball.png from the Packt
website. Place the ball.png file in a sub-directory 'images' within the directory in
which is saved.
2. The following code snippet shows the overall structure of the code.
1 import pyglet
2 import time
4 class SingleImageAnimation(pyglet.window.Window):
5     def __init__(self, width=600, height=600):
6         pass
7     def createDrawableObjects(self):
8         pass
9     def adjustWindowSize(self):
10         pass
11     def moveObjects(self, t):
12         pass
13     def on_draw(self):
14         pass
15 win = SingleImageAnimation()
16 # Set window background color to gray.
17, 0.5, 0.5, 1)
19 pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
Although it is not required, we will encapsulate the event handling and other
functionality within a class SingleImageAnimation. The program
to be developed is short, but in general, this is a good coding practice. It
will also be good for any future extension to the code. An instance of
SingleImageAnimation is created on line 15. This class is inherited from
pyglet.window.Window. It encapsulates the functionality we need here.
The API method on_draw is overridden by the class. on_draw is called
when the window needs to be redrawn. Note that we no longer need a
decorator statement such as @win.event above the on_draw method,
because the window API method is simply overridden by this inherited class.
3. The constructor of the class SingleImageAnimation is as follows:
1 def __init__(self, width=None, height=None):
2     pyglet.window.Window.__init__(self,
3                                   width=width,
4                                   height=height,
5                                   resizable=True)
6     self.drawableObjects = []
7     self.rising = False
8     self.ballSprite = None
9     self.createDrawableObjects()
10     self.adjustWindowSize()
As mentioned earlier, the class SingleImageAnimation inherits pyglet.window.Window.
However, its constructor doesn't take all the arguments
supported by its superclass. This is because we don't need to change
most of the default argument values. If you want to extend this application
further and need these arguments, you can do so by adding them as
__init__ arguments. The constructor initializes some instance variables
and then calls methods to create the animation sprite and resize the
window, respectively.
4. The method createDrawableObjects creates a sprite instance using the
ball.png image.
1 def createDrawableObjects(self):
2     """
3     Create sprite objects that will be drawn within the
4     window.
5     """
6     ball_img = pyglet.image.load('images/ball.png')
7     ball_img.anchor_x = ball_img.width / 2
8     ball_img.anchor_y = ball_img.height / 2
10     self.ballSprite = pyglet.sprite.Sprite(ball_img)
11     self.ballSprite.position = (
12         self.ballSprite.width + 100,
13         self.ballSprite.height*2 - 50)
14     self.drawableObjects.append(self.ballSprite)
The anchor_x and anchor_y properties of the image instance are set
such that the image has an anchor exactly at its center. This will be useful
while rotating the image later. On line 10, the sprite instance
self.ballSprite is created. Later, we will be setting the width and height of
the Pyglet window to three times the sprite width and three times the sprite
height. The position of the image within the window is set on line 11. The
initial position is chosen as shown in the next screenshot. In this case, there
is only one Sprite instance. However, to make the program more general,
a list of drawable objects called self.drawableObjects is maintained.
5. To connue the discussion from the previous step, we will now review the
on_draw method.
def on_draw(self):
    self.clear()
    for d in self.drawableObjects:
        d.draw()
As menoned previously, the on_draw funcon is an API method of class
pyglet.window.Window that is called when a window needs to be
redrawn. This method is overridden here. The self.clear() call clears
the previously drawn contents within the window. Then, all the Sprite
objects in the list self.drawableObjects are drawn in the for loop.
The preceding image illustrates the inial ball posion in the animaon.
Fun with Animaons
[ 106 ]
6. The method adjustWindowSize sets the width and height parameters of the
Pyglet window. The code is self-explanatory:
def adjustWindowSize(self):
w = self.ballSprite.width * 3
h = self.ballSprite.height * 3
self.width = w
self.height = h
7. So far, we have set up everything for the animation to play. Now comes the fun part.
We will change the position of the sprite representing the image to achieve the
animation effect. During the animation, the image will also be rotated, to give it
the natural feel of a bouncing ball.
1 def moveObjects(self, t):
2 if self.ballSprite.y - 100 < 0:
3 self.rising = True
4 elif self.ballSprite.y > self.ballSprite.height*2 - 50:
5 self.rising = False
7 if not self.rising:
8 self.ballSprite.y -= 5
9 self.ballSprite.rotation -= 6
10 else:
11 self.ballSprite.y += 5
12 self.ballSprite.rotation += 5
This method is scheduled to be called 20 times per second using the
following code in the program.
pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
To start with, the ball is placed near the top. The animation should be such
that it gradually falls down, hits the bottom, and bounces back. After this,
it continues its upward journey to hit a boundary somewhere near the top
and again it begins its downward journey. The code block from lines 2 to 5
checks the current y position of self.ballSprite. If it has hit the upward
limit, the flag self.rising is set to False. Likewise, when the lower limit
is hit, the flag is set to True. The flag is then used by the next code snippet
to increment or decrement the y position of self.ballSprite.
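The rise-and-fall logic above can be condensed into one small, self-contained function. This is a sketch only: the limit values 100 and 300 and the step of 5 pixels below are stand-ins for the window-dependent bounds used in the program, and the function name is hypothetical.

```python
def update_ball(y, rising, lower=100, upper=300, step=5):
    """One animation tick of the bounce logic: flip the rising flag
    at the limits, then move the ball by `step` pixels up or down."""
    if y - lower < 0:        # hit the lower boundary: start rising
        rising = True
    elif y > upper:          # hit the upper boundary: start falling
        rising = False
    if rising:
        y += step
    else:
        y -= step
    return y, rising
```

Calling such a function from a method scheduled with pyglet.clock.schedule_interval, as the program does, produces the bounce.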
8. The highlighted lines of code rotate the Sprite instance. The current rotation angle
is incremented or decremented by the given value. This is the reason why we set the
image anchors, anchor_x and anchor_y, at the center of the image. The Sprite
object honors these image anchors. If the anchors are not set this way, the ball will
be seen wobbling in the resultant animation.
9. Once all the pieces are in place, run the program from the command line.
This will pop up a window that will play the bouncing ball animation. The
next illustration shows some intermediate frames from the animation while
the ball is falling down.
What just happened?
We learned how to create an animation using just a single image. The image of a ball was
represented by a sprite instance. This sprite was then translated and rotated on the screen to
accomplish a bouncing ball animation. The whole functionality, including the event handling,
was encapsulated in the class SingleImageAnimation.
Fun with Animaons
[ 108 ]
Project: a simple bowling animation
It's me for a small project. We will re-use most of the code we used in the Single Image
Animaon secon and some more stu to create an animaon where a rolling ball hits a
pin in a bowling game. Although this chapter covers animaon, this project will give you a
preliminary understanding on how to turn an animaon into a game. This is not a real game
as such, but it will involve some user interacons to control the animaon.
The starng posion in the bowling animaon, showing ball and pin images.
Time for action – a simple bowling animation
Let's develop the code for this application. As mentioned earlier, a big chunk of the code
comes from the Single Image Animation section. So we will only discuss the new and
modified methods needed to create a bowling animation.
1. Download the Python source file from the Packt
website. The overall class design is the same as the one developed in the Single
Image Animation section. We will only discuss the new and modified methods.
You can review the rest of the code from this file.
2. Also, download the image files used in this project. These files are ball.png and
pin.png. Place these files in a sub-directory images. The images directory should
be placed in the directory in which the above Python source file is located.
3. The __init__ method of the class is identical to that of class
SingleImageAnimation. The only change here is that it initializes the
following flags:
self.paused = False
self.pinHorizontal = False
The ag self.pinHorizontal is used later to check if the pin is knocked
out by the ball. Whereas, self.paused is used to pause or resume the
animaon depending on its value.
4. The createDrawableObjects method is modified to create a sprite instance for
the pin image. Also, the positions of the ball and pin sprites are adjusted for our
animation needs. The code is presented as follows:
1 def createDrawableObjects(self):
2 ball_img= pyglet.image.load('images/ball.png')
3 ball_img.anchor_x = ball_img.width / 2
4 ball_img.anchor_y = ball_img.height / 2
6 pin_img = pyglet.image.load('images/pin.png')
7 pin_img.anchor_x = pin_img.width / 2
8 pin_img.anchor_y = pin_img.height / 2
10 self.ballSprite = pyglet.sprite.Sprite(ball_img)
11 self.ballSprite.position = (0 + 100,
12 self.ballSprite.height)
14 self.pinSprite = pyglet.sprite.Sprite(pin_img)
15 self.pinSprite.position = (
16 (self.ballSprite.width*2 + 100,
17 self.ballSprite.height) )
19 # Add these sprites to the list of drawables
20 self.drawableObjects.append(self.ballSprite)
21 self.drawableObjects.append(self.pinSprite)
The code block on lines 6 to 8 creates an image instance for the pin image and then
sets the image anchor at its center. The Sprite instances representing the ball
and pin images are created on lines 10 and 14, respectively. Their positions
are set such that the initial positions appear as shown in the earlier
illustration. Finally, these sprites are added to the list of drawable
objects that are drawn in the on_draw method.
Fun with Animaons
[ 110 ]
5. Next, let's review the moveObjects method. As before, this method is called every
0.05 seconds.
1 def moveObjects(self, t):
2 if self.pinHorizontal:
3 self.ballSprite.x = 100
4 self.pinSprite.x -= 100
6 if self.ballSprite.x < self.ballSprite.width*2:
7 if self.ballSprite.x == 100:
8 time.sleep(1)
9 self.pinSprite.rotation = 0
10 self.pinHorizontal = False
12 self.ballSprite.x += 5
13 self.ballSprite.rotation += 5
15 if self.ballSprite.x >= self.ballSprite.width*2:
16 self.pinSprite.rotation = 90
17 self.pinSprite.x += 100
18 self.pinHorizontal = True
The if block, from lines 6 to 13, executes when the x position of the ball
sprite is between 100 pixels and twice the width of self.ballSprite.
On line 12, the x position of self.ballSprite is incremented by 5 pixels.
Also, the sprite is rotated by 5 degrees. The combination of these two
transforms creates an effect where we see the ball rolling horizontally, from
left to right, inside the Pyglet window. As seen earlier, the center of the
pin is located at x = self.ballSprite.width*2 + 100 and y = self.ballSprite.height.
The if block from lines 15 to 18 is where the ball appears to have hit
the pin. That is, the x coordinate of the ball sprite's center is about 100 pixels
away from the center of the pin. The 100-pixel value is chosen to account
for the ball radius. Therefore, once the ball hits the pin, the pin image is
rotated by 90 degrees (line 16). This creates a visual effect where the pin
appears to be knocked down by the ball. The x coordinate of the pin is
incremented by 100 pixels so that, after the pin rotation, the ball and pin
images don't overlap. You can make a further improvement here: shift the
y position of the pin sprite further down, so that the pin appears to be lying on
the ground. In this if block, we also set the flag self.pinHorizontal
to True. When the moveObjects method is called the next time, the first
thing that is checked is whether the pin is vertical or horizontal. If the pin is
horizontal, the original positions of the ball and pin are restored by the code
on lines 2 to 4. This is a preparation for the next animation loop. On line
9, the pin is rotated back to 0 degrees, whereas on line 10, the flag self.
pinHorizontal is reset to False.
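The roll, hit, and reset cycle described above can be sketched as one pure function. The pixel values below (ball width of 100, start position of 100) are hypothetical stand-ins for the sprite-dependent values in the program, and the function name is illustrative only:

```python
def bowling_tick(ball_x, pin_x, pin_down, ball_w=100, start_x=100):
    """One scheduled call of the bowling loop: restore the starting
    positions after a knock-down; otherwise roll the ball right and
    knock the pin over once the ball reaches the contact point."""
    pin_start = ball_w * 2 + 100       # pin center x, as in the listing
    if pin_down:                       # prepare the next animation loop
        return start_x, pin_start, False
    ball_x += 5                        # roll 5 pixels to the right
    if ball_x >= ball_w * 2:           # contact: offset the fallen pin
        return ball_x, pin_start + 100, True
    return ball_x, pin_x, False
```

The pin rotation is omitted here; in the program it is toggled between 0 and 90 degrees alongside the pin_down flag.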
6. With the code we developed so far, and with the remaining code from class
SingleImageAnimation, if you run the program, it will show the bowling
animation. Now let's add some controls to this animation. A flag, self.paused,
was initialized in the constructor. It will be used here. Just like on_draw, on_key_
press is another API method of pyglet.window.Window. It is overridden here to
implement pause and resume controls.
1 def on_key_press(self, key, modifiers):
2 if key == pyglet.window.key.P and not self.paused:
3 pyglet.clock.unschedule(self.moveObjects)
4 self.paused = True
5 elif key == pyglet.window.key.R and self.paused:
6 pyglet.clock.schedule_interval(
7 self.moveObjects, 1.0/20)
8 self.paused = False
The key argument is one of the keyboard keys pressed by the user. The if
block from lines 2 to 4 pauses the animation when the P key is pressed. The
method self.moveObjects is scheduled to be called every 0.05 seconds.
The scheduled callback to this method is canceled using the pyglet.
clock.unschedule method. To resume the animation, the schedule_
interval method is called on line 6. The self.paused flag ensures that
multiple keypresses won't have any undesirable effect on the animation.
For example, if you press the R key multiple times, the code will just ignore
the keypress events that follow.
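The guard logic can be isolated as a small function. In this sketch, string key names stand in for the pyglet.window.key constants, and the returned action labels are hypothetical:

```python
def handle_key(key, paused):
    """Return the new paused state and the clock action to take.
    Duplicate presses of the same key are ignored via the flag."""
    if key == 'P' and not paused:
        return True, 'unschedule'    # cancel the moveObjects callback
    if key == 'R' and paused:
        return False, 'schedule'     # re-register the callback
    return paused, None              # no state change, no action
```

Pressing P twice in a row, for instance, unschedules the callback only once.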
Fun with Animaons
[ 112 ]
7. Refer to the le to review or develop the rest of the code
and then run the program from the command line as:
This will pop up a window in which the animaon will be played. Press
the P key on the keyboard to pause the animaon. To resume a paused
animaon, press the R key. The next illustraon shows a few intermediate
frames in this animaon.
The intermediate frames in the bowling animaon appear as shown in the preceding image.
What just happened?
We completed a simple but exciting project where an animation of a ball hitting a pin was
developed. This was accomplished by moving and rotating the image sprites on the screen.
Several methods from the SingleImageAnimation class were re-used. Additionally, we
learned how to control the animation by overriding the on_key_press API method.
Animations using different image regions
It is possible to create an animation using different regions of a single image. Each of these
regions can be treated as a separate animation frame. In order to achieve the desired
animation effect, it is important to properly create the image with regions. In the following
example, the animation will be created using such regions. We will also be using the default
position parameters for each of the regions within that image. Thus, our main task in this
section is simply to use these regions in their original form and create animation frames out
of them. We will first see how the image looks. The following illustration shows this image.
A single image le with an imaginary 'grid' on top of it appears in the previous image.
The horizontal doed lines overlaying this image indicate how an imaginary image grid
divides the image into dierent regions. Here we have four rows and just a single column.
Thus, during the animaon, each of these image regions will be shown as a single animaon
frame. Noce how the droplet images are drawn. In the rst row, the four droplets are drawn
at the top. Then in the next row, these images are slightly oset to the south-west direcon
compared to the droplets in the rst row. This oset is increased further in the third and
fourth rows.
Fun with Animaons
[ 114 ]
Time for action – raindrops animation
Let's create an animaon of falling raindrops by using dierent regions of a single image.
1. Download the Python source le and the image le
droplet.png from the Packt website. As done before, place the image le in a
sub-directory images. The images directory should be placed in the directory in
which the Python source le is located.
2. The __init__ method of the class RainDropsAnimation is presented.
1 def __init__(self, width=None, height=None):
2 pyglet.window.Window.__init__(self,
3 width=width,
4 height=height)
5 self.drawableObjects = []
6 self.createDrawableObjects()
The code is self-explanatory. The class RainDropsAnimation inherits pyglet.
window.Window. The constructor of the class calls the method that creates the
Sprite instance for displaying the animation on the screen.
3. Let's review the createDrawableObjects method.
1 def createDrawableObjects(self):
2 num_rows = 4
3 num_columns = 1
4 droplet = 'images/droplet.png'
5 animation = self.setup_animation(droplet,
6 num_rows,
7 num_columns)
9 self.dropletSprite = pyglet.sprite.Sprite(animation)
10 self.dropletSprite.position = (0,0)
12 # Add these sprites to the list of drawables
13 self.drawableObjects.append(self.dropletSprite)
The pyglet.image.Animation instance is created on line 5 by calling the
setup_animation method. On line 9, the Sprite instance is created for
this animation object.
4. The setup_animation method is the main worker method that uses regions
within the image file to create individual animation frames.
1 def setup_animation(self, img, num_rows, num_columns):
2 base_image = pyglet.image.load(img)
3 animation_grid = pyglet.image.ImageGrid(base_image,
4 num_rows,
5 num_columns)
6 image_frames = []
8 for i in range(num_rows*num_columns, 0, -1):
9 frame = animation_grid[i-1]
10 animation_frame = (
11 pyglet.image.AnimationFrame(frame,
12 0.2))
13 image_frames.append(animation_frame)
15 animation = pyglet.image.Animation(image_frames)
16 return animation
First, the image instance is created on line 2. The ImageGrid is an
imaginary grid placed over the droplet image. Each 'cell' or 'image
region' within this image grid can be viewed as a separate image frame in an
animation. The ImageGrid instance is constructed by providing the image
object and the number of rows and columns as arguments. The number of
rows in this case is 4 and there is only a single column. Thus, there will be
four image frames in the animation, corresponding to each of these
rows in the ImageGrid. The AnimationFrame object is created on line 10.
The loop on line 8 decrements the value of i from the maximum to the minimum
region or cell of the imaginary grid. Line 9 gets the specific image region and
this is then used to create the pyglet.image.AnimationFrame instance,
as we did on line 10. The second argument is the time for which each frame
will be displayed on the screen. Here, we are displaying each frame for 0.2
seconds. All such animation frames are stored in a list, image_frames,
and then the pyglet.image.Animation instance is created using this list.
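Note the direction of the loop: an ImageGrid numbers its cells starting at the bottom-left of the image, so iterating from the highest index down visits the regions from the top row to the bottom row, matching the order in which the droplets were drawn. The index sequence the loop produces can be sketched as:

```python
def frame_order(num_rows, num_columns):
    """Grid cell indices in the order the loop above visits them:
    highest index (topmost region of the image) first."""
    return [i - 1 for i in range(num_rows * num_columns, 0, -1)]
```

For the droplet image (4 rows, 1 column) this yields the regions top-to-bottom, so the droplets appear to fall.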
Fun with Animaons
[ 116 ]
5. Refer to the le to review the rest of the code and then
run the program from the command line as:
This animaon displays four image regions of a single image, one aer
another. The next illustraon shows these four images.
The four image frames that display dierent regions of a single image appear in the previous
illustraon. These four image frames are repeated in the animaon loop.
What just happened?
We created an animaon using dierent regions of a single image. Each of these regions was
treated as a separate animaon frame. The creaon of an image used in this animaon was
briey discussed. Among many other things, we learned how to create and use Pyglet classes
such as ImageGrid and AnimationFrame.
Project: drive on a rainy day!
This project is essenally a summary of what we have learned so far in this chapter.
Addionally, it will cover a few other things such as adding sound eects to an animaon,
showing or hiding certain image sprites while the animaon is being played, and so on. In
this animaon, there will be a staonary cloud image. We will re-use the code from the
raindrops animaon secon to animate falling rain. There will be an image sprite to
animate lightning eect. Finally, a car cartoon will be shown passing by from le to right
in this heavy rain. The following snapshot is an animaon frame that captures all these
component images.
Component images of animaon drive on a rainy day illustrated in the preceding image.
Fun with Animaons
[ 118 ]
Time for action – drive on a rainy day!
It's me to write the code for this animaon.
1. Download the Python source file from the Packt website. We will discuss some
of the important methods from this file. You can go through the rest of the code
from this file.
2. Download the image les, droplet.png, cloud.png, car.png, and
lightening.png from the Packt website. Place these image les in a sub-directory
called images. The images directory should be placed in the directory where the
Python source le is located.
3. The constructor of the class is written as follows:
1 def __init__(self, width=None, height=None):
2 pyglet.window.Window.__init__(self,
3 width=width,
4 height=height,
5 resizable=True)
6 self.drawableObjects = []
7 self.paused = False
10 self.createDrawableObjects()
11 self.adjustWindowSize()
12 # Make sure to replace the following media path
13 # with an appropriate path on your computer.
14 self.horn_sound = (
16 streaming=False) )
The code is the same as the one developed in the raindrops animation. The
media file horn.wav is decoded on line 14. The flag streaming is set
to False so that the media can be played multiple times during the
animation. Make sure you specify an appropriate audio file path on
your computer on line 15.
4. Let's review the createDrawableObjects method:
1 def createDrawableObjects(self):
3 num_rows = 4
4 num_columns = 1
5 droplet = 'images/droplet.png'
6 animation = self.setup_animation(droplet,
7 num_rows,
8 num_columns)
10 self.dropletSprite = pyglet.sprite.Sprite(animation)
11 self.dropletSprite.position = (0,200)
13 cloud = pyglet.image.load('images/cloud.png')
14 self.cloudSprite = pyglet.sprite.Sprite(cloud)
15 self.cloudSprite.y = 100
17 lightening = pyglet.image.load('images/lightening.png')
18 self.lSprite = pyglet.sprite.Sprite(lightening)
19 self.lSprite.y = 200
21 car = pyglet.image.load('images/car.png')
22 self.carSprite = pyglet.sprite.Sprite(car, -500, 0)
24 # Add these sprites to the list of drawables
25 self.drawableObjects.append(self.cloudSprite)
26 self.drawableObjects.append(self.lSprite)
27 self.drawableObjects.append(self.dropletSprite)
28 self.drawableObjects.append(self.carSprite)
The code block from lines 3 to 10 is identical to the one developed in
the raindrops animation. The self.dropletSprite image is placed
at an appropriate position. Next, we just create sprites to load the images of
the cloud, lightning, and car in the Pyglet window. These sprites are placed
at appropriate locations within the window. For example, the starting
position of the car is off the screen. It is anchored at x = -500 and y = 0.
The code block from lines 24 to 28 adds all the Sprite instances to self.
drawableObjects. The draw() method of each one of these instances is
called in the on_draw method.
Fun with Animaons
[ 120 ]
5. To achieve the desired animation effect, we have to move around various sprites
during the animation. This is done by scheduling a few methods to be called at
specified time intervals. These methods update the coordinates of the sprite or
toggle its visibility when the Pyglet window is redrawn. The code is illustrated
as follows:
# Schedule the method RainyDayAnimation.moveObjects to be
# called every 0.05 seconds.
pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
# Show the lightening every 1 second
pyglet.clock.schedule_interval(win.show_lightening, 1.0)
We have already seen an example of the moveObjects method
in earlier sections. In this project, we schedule another method,
RainyDayAnimation.show_lightening, to be called every
second. This method creates an animation effect where lightning
strikes every second at different positions.
6. We will now review the method show_lightening.
1 def show_lightening(self, t):
2 if self.lSprite.visible:
3 self.lSprite.visible = False
4 else:
5 if(self.lSprite.x == 100):
6 self.lSprite.x += 200
7 else:
8 self.lSprite.x = 100
10 self.lSprite.visible = True
self.lSprite is the sprite representing the lightning image. Our target
is to create an animation effect where the lightning flashes for a moment
and then disappears. This can be accomplished by toggling the Sprite.
visible property. When this property is set to False, the lightning is
not shown. When it is set to True, the else block (lines 4 to 10) is executed. The
position of self.lSprite is changed so that the lightning appears at a
different location the next time this method is called.
7. The moveObjects method is scheduled to be called every 0.05 seconds.
1 def moveObjects(self, t):
2 if self.carSprite.x <= self.cloudSprite.width:
3 self.carSprite.x += 10
4 else:
5 self.carSprite.x = -500
Every me it is called, it moves the posion of the Sprite represenng
the car by 10 pixels in the posive direcon of x axis. However, if the x
coordinate of the self.carSprite exceeds its width, the car is reset to its
original posion. Also, at the starng posion of the car, the horn sound
is played.
8. Review the rest of the code from the source file. Make sure to
replace the audio file path for self.horn_sound with an appropriate file path
on your computer. Once everything is all set, run the program from the command
line. This will pop up a window that will play the animation, in which a fun
car cruises along in a thunderstorm. The next illustration shows some
intermediate frames from the animation.

Intermediate frames from an animation where a car drives along on a rainy
day appear in the preceding image.
Fun with Animaons
[ 122 ]
What just happened?
The animaon developed in this project used four dierent images. We learned how to add
sound eects and change the visibility of the image sprites during the animaon. Some of
the images were translated or made intermiently visible to achieve the desired animaon
eect. Dierent regions of a single image were used to animate raindrops. Overall, this fun
project covered most of the things we learned throughout this book.
Have a go hero – add more effects
1. Addional sound eects—whenever lightning strikes in the animaon, play a
thunderstorm sound.
2. In the code presented earlier, the lightning image posion is toggled between
two xed locaons. Use random module in Python to get a random coordinate
between 0 to self.cloudSprite.width and use that as the x coordinate of
3. Add keyboard controls to change the speed of the car, the frequency of lightning,
and so on.
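For exercise 2, a sketch of the random-position helper might look like this. The function name is hypothetical; random.uniform from the standard library does the actual work:

```python
import random

def random_lightning_x(cloud_width):
    """Pick a random x coordinate between 0 and the cloud sprite's
    width for the next lightning flash."""
    return random.uniform(0, cloud_width)
```

Inside show_lightening, the result would replace the two hard-coded positions, e.g. self.lSprite.x = random_lightning_x(self.cloudSprite.width).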
We learned a lot in this chapter about creating 2D animations in Python using Pyglet.
Specifically, we:

Learned some fundamental components of the Pyglet framework for
creating animations. Classes such as Window, Image, Animation, Sprite,
AnimationFrame, ImageGrid, and so on were discussed.

Wrote code to create an animation using a sequence of images or to play a
pre-created animation.

Learned things such as modifying the position of a Pyglet sprite, adding keyboard
and mouse controls, and adding sound effects to an animation.

Worked on a cartoon animation project, 'Drive on a Rainy Day'. Here we applied
several of the techniques learned throughout the chapter.
Working with Audios
Decades ago, silent movies lit up the screen, but later it was the audio effect that
brought life into them. We deal with digital audio processing quite frequently,
whether playing a CD track, recording our own voice, or converting songs into
a different audio format. There are many libraries and multimedia frameworks
available for audio processing. This chapter teaches some common digital
audio processing techniques using the Python bindings of a popular multimedia
framework called GStreamer.
In this chapter, we shall:

Learn the basic concepts behind the GStreamer multimedia framework

Use the GStreamer API for audio processing

Develop some simple audio processing tools for 'everyday use'. We will develop
tools that batch-convert audio file formats, record audio, and play audio files

So let's get on with it!
Installation prerequisites
Since we are going to use an external multimedia framework, it is necessary to install the
packages mentioned in this section.
GStreamer is a popular open source multimedia framework that supports audio/video
manipulation of a wide range of multimedia formats. It is written in the C programming
language and provides bindings for other programming languages, including Python.
Several open source projects use the GStreamer framework to develop their own multimedia
applications. Throughout this chapter, we will make use of the GStreamer framework
for audio handling. In order to get this working with Python, we need to install both
GStreamer and the Python bindings for GStreamer.
Windows platform
The binary distribuon of GStreamer is not provided on the project website Installing it from the source may require
considerable eort on the part of Windows users. Fortunately, GStreamer WinBuilds
project provides pre-compiled binary distribuons. Here is the URL to the project website:
The binary distribuon for GStreamer as well as its Python bindings (Python 2.6) are
available in the Download area of the website:
You need to install two packages. First, the GStreamer and then the Python bindingsou need to install two packages. First, the GStreamer and then the Python bindings
to the GStreamer. Download and install the GPL distribuon of GStreamer available on
the GStreamer WinBuilds project website. The name of the GStreamer executable is
GStreamerWinBuild- The version should be 0.10.5 or higher. By default,
this installaon will create a folder C:\gstreamer on your machine. The bin directory
within this folder contains runme libraries needed while using GStreamer.
Next, install the Python bindings for GStreamer. The binary distribution is available on the
same website. Use the executable whose name begins with Pygst-, pertaining to
Python 2.6. The version should be 0.10.15 or higher.
GStreamer WinBuilds appears to be an independent project. It is based on
the OSSBuild developing suite. Visit the OSSBuild website for more information.
It could happen that the GStreamer binary
built with Python 2.6 is no longer available on the mentioned website at the
time you are reading this book. Therefore, it is advised that you contact
the developer community of OSSBuild. Perhaps they might help you out!
Alternavely, you can build GStreamer from source on the Windows plaorm, using a
Linux-like environment for Windows, such as Cygwin ( Under
this environment, you can rst install dependent soware packages such as Python 2.6, gcc
compiler, and others. Download the gst-python- package from the
GStreamer website Then extract this package and install it
from sources using the Cygwin environment. The INSTALL le within this package will have
installaon instrucons.
Other platforms
Many Linux distributions provide a GStreamer package. You can search for the
appropriate gst-python distribution (for Python 2.6) in the package repository. If such a
package is not available, install gst-python from source as discussed in the earlier
Windows platform section.

If you are a Mac OS X user, visit the darwinports website. It
has detailed instructions on how to download and install the package Py26-gst-python
version 0.10.17 (or higher).

Mac OS X 10.5.x (Leopard) comes with the Python 2.5 distribution. If
you are using packages with this default version of Python, GStreamer
Python bindings built against Python 2.5 are available on the darwinports website.
There is a free multiplatform software utility library called 'GLib'. It provides data
structures such as hash maps, linked lists, and so on. It also supports the creation of
threads. The 'object system' of GLib is called GObject. Here, we need to install the
Python bindings for GObject. The Python bindings are available on the PyGTK website.
Windows platform
The binary installer is available on the PyGTK website.
Download and install version 2.20 for Python 2.6.
Other platforms
For Linux, the source tarball is available on the PyGTK website. There could even be a binary
distribution in the package repository of your Linux operating system. A direct link to
version 2.21 of PyGObject (source tarball) is provided on that website.
If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is
available on darwinports. Install version 2.14 or later.
Summary of installation prerequisites
The following table summarizes the packages needed for this chapter.
Package: GStreamer
Version: 0.10.5 or later
Windows platform: Install using the binary distribution available on the GStreamer WinBuild website: GStreamerWinBuild- with the appropriate version suffix (or a later version if available).
Linux/Unix/OS X platforms: Linux: use the GStreamer distribution in the package repository. Mac OS X: download and install by following the instructions on the website.

Package: Python bindings for GStreamer
Version: 0.10.15 or later for Python 2.6
Windows platform: Use the binary provided by the GStreamer WinBuild project. See the project website for details pertaining to Python 2.6.
Linux/Unix/OS X platforms: Linux: use the gst-python distribution in the package repository. Mac OS X: use the Py26-gst-python package (if you are using Python 2.6). Linux/Mac: alternatively, build and install from the source tarball.

Package: Python bindings for GObject (PyGObject)
Version: 2.14 or later for Python 2.6
Windows platform: Use the binary package from the PyGTK website.
Linux/Unix/OS X platforms: Linux: install from source if pygobject is not available in the package repository. Mac: use the package on darwinports (if you are using Python 2.6). See the darwinports website for details.
Testing the installation
Ensure that GStreamer and its Python bindings are properly installed. It is simple to test
this. Just start Python from the command line and type the following:
>>>import pygst
If there is no error, it means the Python bindings are installed properly.
Next, type the following:
>>>import gst
If this import is successful, we are all set to use GStreamer for processing audio and video!
If import gst fails, it will probably complain that it is unable to load some required DLL/
shared object. In this case, check your environment variables and make sure that the PATH
variable has the correct path to the gstreamer/bin directory. The following lines of code
in a Python interpreter show the typical location of the pygst and gst modules on the
Windows platform.
>>> import pygst
>>> pygst
<module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'>
>>> pygst.require('0.10')
>>> import gst
>>> gst
<module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__
Next, test if PyGObject is successfully installed. Start the Python interpreter and try
importing the gobject module.
>>>import gobject
If this works, we are all set to proceed!
A primer on GStreamer
In this chapter, we will be using the GStreamer multimedia framework extensively. Before
we move on to the topics that teach us various audio processing techniques, a primer
on GStreamer is necessary.
So what is GStreamer? It is a framework on top of which one can develop multimedia
applications. The rich set of libraries it provides makes it easier to develop applications with
complex audio/video processing capabilities. The fundamental components of GStreamer are
briefly explained in the coming sub-sections.

Comprehensive documentation is available on the GStreamer project website.
The GStreamer Application Development Manual is a very good starting point. In this
section, we will briefly cover some of the important aspects of GStreamer. For
further reading, you are recommended to visit the GStreamer project website.
gst-inspect and gst-launch
We will start by learning the two important GStreamer commands. GStreamer can
be run from the command line, by calling gst-launch-0.10.exe (on Windows) or
gst-launch-0.10 (on other platforms). The following command shows a typical execution
of GStreamer on Linux. We will see what a pipeline means in the next sub-section.
$gst-launch-0.10 pipeline_description
GStreamer has a plugin architecture. It supports a huge number of plugins. To see more
details about any plugin in your GStreamer installation, use the command gst-inspect-0.10
(gst-inspect-0.10.exe on Windows). We will use this command quite often. Use
of this command is illustrated here.
$gst-inspect-0.10 decodebin
Here, decodebin is a plugin. Upon execution of the preceding command, it prints detailed
information about the plugin decodebin.
Elements and pipeline
In GStreamer, the data flows in a pipeline. Various elements are connected together forming
a pipeline, such that the output of the previous element is the input to the next one.
A pipeline can be logically represented as follows:
Element1 ! Element2 ! Element3 ! Element4 ! Element5
Here, Element1 through Element5 are the element objects chained together by
the symbol !. Each of the elements performs a specific task. One of the element objects
performs the task of reading input data, such as an audio or a video. Another element
decodes the file read by the first element, whereas another element performs the job of
converting this data into some other format and saving the output. As stated earlier, linking
these element objects in a proper manner creates a pipeline.
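Such a chain can be sketched as a plain string in Python before handing it to GStreamer. The helper name below is our own, but the " ! " separator is exactly the one gst-launch uses:

```python
def build_pipeline_description(elements):
    """Join element descriptions with the '!' link symbol, gst-launch style."""
    return " ! ".join(elements)

description = build_pipeline_description(
    ["filesrc location=audio.ogg", "decodebin", "audioconvert", "fakesink"])
print(description)
# filesrc location=audio.ogg ! decodebin ! audioconvert ! fakesink
```

A string like this can later be passed to gst.parse_launch, which we use in the "Playing music" section.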
The concept of a pipeline is similar to the one used in Unix. The following is a Unix example of a
pipeline. Here, the vertical separator | defines the pipe.
$ls -la | more
Here, ls -la lists all the files in a directory. However, sometimes this list is too long to
be displayed in the shell window. So, adding | more allows a user to navigate the data.
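The same producer/consumer idea can be reproduced with Python's standard subprocess module, which is a reasonable mental model for how data flows from one GStreamer element to the next. The commands used here are just for illustration:

```python
import subprocess

# Equivalent of the shell pipeline: echo "hello pipeline" | tr a-z A-Z
p1 = subprocess.Popen(["echo", "hello pipeline"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()          # allow p1 to get SIGPIPE if p2 exits first
output = p2.communicate()[0]
print(output.decode().strip())
# HELLO PIPELINE
```

As in GStreamer, each stage only knows about the data arriving on its input and the data it writes to its output.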
Now let's see a realisc example of running GStreamer from the command prompt.
$ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin !
audioconvert ! fakesink
For a Windows user, the gst command name would be gst-launch-0.10.exe. The
pipeline is constructed by specifying dierent elements. The !symbol links the adjacent
elements, thereby forming the whole pipeline for the data to ow. For Python bindings of
GStreamer, the abstract base class for pipeline elements is gst.Element, whereas gst.
Pipeline class can be used to created pipeline instance. In a pipeline, the data is sent to a
separate thread where it is processed unl it reaches the end or a terminaon signal is sent.
GStreamer is a plugin-based framework. There are several plugins available. A plugin
is used to encapsulate the functionality of one or more GStreamer elements. Thus we
can have a plugin where multiple elements work together to create the desired output.
The plugin itself can then be used as an abstract element in the GStreamer pipeline. An
example is decodebin. We will learn about it in the upcoming sections. A comprehensive
list of available plugins is available at the GStreamer website http://gstreamer. In this book, we will be using several of them to develop audio/
video processing applications. For example, the Playbin plugin will be used for audio
playback. In almost all applications to be developed, the decodebin plugin will be used. For
audio processing, the functionality provided by plugins such as gnonlin, audioecho,
monoscope, interleave, and so on will be used.
Bin
In GStreamer, a bin is a container that manages the element objects added to it. A bin
instance can be created using the gst.Bin class. It is inherited from gst.Element and can act
as an abstract element representing the bunch of elements within it. The GStreamer plugin
decodebin is a good example of a bin. The decodebin contains decoder elements.
It auto-plugs the decoders to create the decoding pipeline.
Pads
Each element has some sort of connection points to handle data input and output.
GStreamer refers to them as pads. Thus an element object can have one or more
"receiver pads", termed sink pads, that accept data from the previous element in the
pipeline. Similarly, there are "source pads" that take the data out of the element as an
input to the next element (if any) in the pipeline. The following is a very simple example
that shows how source and sink pads are specified.
>gst-launch-0.10.exe fakesrc num-buffers=1 ! fakesink
The fakesrc is the first element in the pipeline. Therefore, it only has a source pad. It
transmits the data to the next linked element, that is, fakesink, which only has a sink pad
to accept data. Note that, in this case, since these are fakesrc and fakesink, just
empty buffers are exchanged. A pad is defined by the class gst.Pad. A pad can be attached
to an element object using the gst.Element.add_pad() method.
The following is a diagrammatic representation of a GStreamer element with a pad. It
illustrates two GStreamer elements within a pipeline, each having a single source and sink pad.
Now that we know how the pads operate, let's discuss some special types of pads. In
the example, we assumed that the pads for the element are always 'out there'. However,
there are some situations where the element doesn't have its pads available all the time.
Such elements request the pads they need at runtime. Such a pad is called a dynamic pad.
Another type of pad is called a ghost pad. Both types are discussed in this section.
Dynamic pads
Some objects, such as decodebin, do not have pads defined when they are created. Such
elements determine the type of pad to be used at runtime. For example, depending on
the media file input being processed, the decodebin will create a pad. This is often referred
to as a dynamic pad, or sometimes the available pad, as it is not always available in elements
such as decodebin.
Ghost pads
As stated in the Bin section, a bin object can act as an abstract element. How is this
achieved? For that, the bin uses 'ghost pads' or 'pseudo link pads'. The ghost pads of
a bin are used to connect an appropriate element inside it. A ghost pad can be created
using the gst.GhostPad class.
Caps
The element objects send and receive data by using the pads. The type of media data
that the element objects will handle is determined by the caps (short for capabilities).
It is a structure that describes the media formats supported by the element. The caps are
defined by the class gst.Caps.
Bus
A bus refers to the object that delivers the messages generated by GStreamer. A message is
a gst.Message object that informs the application about an event within the pipeline. A
message is put on the bus using the method. The following
code shows an example usage of the bus.
1 bus = pipeline.get_bus()
2 bus.add_signal_watch()
3 bus.connect("message", message_handler)
The first line in the code obtains the gst.Bus instance associated with the pipeline. Here,
pipeline is an instance of gst.Pipeline. On the next line, we add a signal watch so that the
bus gives out all the messages posted on that bus. Line 3 connects the signal with a Python
method. In this example, "message" is the signal string and the method it calls is
message_handler.
Playbin/Playbin2
Playbin is a GStreamer plugin that provides a high-level audio/video player. It can
handle a number of things, such as automatic detection of the input media file format,
auto-determination of the decoders, audio visualization and volume control, and so on.
The following line of code creates a playbin element.
playbin = gst.element_factory_make("playbin")
It defines a property called uri. The URI (Uniform Resource Identifier) should be an
absolute path to a file on your computer or on the Web. According to the GStreamer
documentation, Playbin2 is just the latest unstable version, but once stable, it will
replace Playbin.
A Playbin2 instance can be created the same way as a Playbin instance, and you can
inspect its details with the following command.
gst-inspect-0.10 playbin2
With this basic understanding, let us learn about various audio processing techniques using
GStreamer and Python.
Playing music
Given an audio file, one of the first things you will do is play that audio file, isn't it? In
GStreamer, what basic elements do we need to play an audio file? The essential elements
are listed as follows.
The first thing we need is to open an audio file for reading.
Next, we need a decoder to transform the encoded information.
Then, there needs to be an element to convert the audio format so that it is in a
'playable' format required by an audio device such as speakers.
Finally, an element that will enable the actual playback of the audio file.
How will you play an audio file using the command-line version of GStreamer? One way to
execute it using the command line is as follows:
$gst-launch-0.10 filesrc location=/path/to/audio.mp3 ! decodebin !
audioconvert ! autoaudiosink
The autoaudiosink automatically detects the correct audio device on your
computer to play the audio. This was tested on a machine with Windows XP
and it worked fine. If there is any error playing an audio file, check if the audio
device on your computer is working properly. You can also try using the element
sdlaudiosink, which outputs to the sound card via SDLAUDIO. If this doesn't
work, and you want to install a plugin for the audio sink, here is a partial list of
GStreamer plugins:
Mac OS X users can try installing osxaudiosink if the default
autoaudiosink doesn't work.
The audio file should start playing with this command unless there are any missing plugins.
Time for action – playing an audio: method 1
There are a number of ways to play an audio file using Python and GStreamer. Let's start with a
simple one. In this section, we will use a command string, similar to what you would specify
using the command-line version of GStreamer. This string will be used to construct a
gst.Pipeline instance in a Python program.
So, here we go!
1. Start by creating an AudioPlayer class in a Python source file. Just define the
empty methods illustrated in the following code snippet. We will expand those in
the later steps.
1 import thread, time
2 import gobject
3 import pygst
4 pygst.require("0.10")
5 import gst

7 class AudioPlayer:
8     def __init__(self):
9         pass
10    def constructPipeline(self):
11        pass
12    def connectSignals(self):
13        pass
14    def play(self):
15        pass
16    def message_handler(self):
17        pass

19 # Now run the program
20 player = AudioPlayer()
21 thread.start_new_thread(, ())
22 gobject.threads_init()
23 evt_loop = gobject.MainLoop()
Lines 1 to 5 in the code import the necessary modules, including the time
module used later by As discussed in the
Installation prerequisites section, the package pygst is imported first.
Then we call pygst.require to enable the import of the gst module.
2. Now focus on the code block between lines 19 to 24. It is the main execution code.
It enables running the program until the music is played. We will use this or similar
code throughout this book to run our audio application.
On line 21, the thread module is used to create a new thread for playing
the audio. The method is sent on this thread. The
second argument of thread.start_new_thread is the tuple of arguments
to be passed to the method play. In this example, we do not support any
command-line arguments. Therefore, an empty tuple is passed. Python
adds its own thread management functionality on top of the operating
system threads. When such a thread makes calls to external functions (such
as C functions), it puts the 'Global Interpreter Lock' on other threads until,
for instance, the C function returns a value.
The gobject.threads_init() is an initialization function for facilitating
the use of Python threading within the gobject modules. It can enable or
disable threading while calling the C functions. We call this before running
the main event loop. The main event loop for executing this program is
created using gobject on line 23, and this loop is started by the call.
3. Next, fill the AudioPlayer class methods with the code. First, write the constructor
of the class.
1 def __init__(self):
2     self.constructPipeline()
3     self.is_playing = False
4     self.connectSignals()
The pipeline is constructed by the method call on line 2. The flag
self.is_playing is initialized to False. It will be used to determine whether the
audio being played has reached the end of the stream. On line 4, the method
self.connectSignals is called, to capture the messages posted on a
bus. We will discuss both these methods next.
4. The main driver for playing the sound is the following gst command:
"filesrc location=C:/AudioFiles/my_music.mp3 "\
"! decodebin ! audioconvert ! autoaudiosink"
The preceding string has four elements separated by the symbol !. These
elements represent the components we briefly discussed earlier.
5. The first element, filesrc location=C:/AudioFiles/my_music.mp3, defines
the source element that loads the audio file from a given location. In this string, just
replace the audio file path represented by location with an appropriate file path
on your computer. You can also specify a file on a disk drive.
If the filename contains spaces, make sure you specify the path within
escaped quotes. For example, if the filename is my sound.mp3, specify it as follows:
filesrc location=\"C:/AudioFiles/my sound.mp3\"
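This quoting rule is easy to forget, so here is a tiny hedged helper (the function name is our own, not part of the GStreamer API) that escapes the location only when the path contains a space:

```python
def format_location(path):
    """Return a filesrc 'location' clause, quoting paths that contain spaces."""
    if " " in path:
        # Emit literal \" around the path, as required in the pipeline string.
        return 'location=\\"%s\\"' % path
    return "location=%s" % path

print(format_location("C:/AudioFiles/my sound.mp3"))
print(format_location("C:/AudioFiles/my_music.mp3"))
```

The first call yields the escaped form shown above; the second leaves the path untouched.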
6. The element just described loads the file; it is connected to the next element, a
decodebin. As discussed earlier, the decodebin is a plugin to GStreamer and it inherits
gst.Bin. Based on the input audio format, it determines the right type of decoder element
to use.
The third element is audioconvert. It translates the decoded audio data
into a format playable by the audio device.
The final element, autoaudiosink, is a plugin; it automatically detects the
audio sink for the audio output.
We have sufficient information now to create an instance of
gst.Pipeline. Write the following method.
1 def constructPipeline(self):
2     myPipelineString = \
3     "filesrc location=C:/AudioFiles/my_music.mp3 "\
4     "! decodebin ! audioconvert ! autoaudiosink"
5     self.player = gst.parse_launch(myPipelineString)
An instance of gst.Pipeline is created on line 5, using the
gst.parse_launch method.
7. Now write the following method of the class AudioPlayer.
1 def connectSignals(self):
2     # In this case, we only capture the messages
3     # put on the bus.
4     bus = self.player.get_bus()
5     bus.add_signal_watch()
6     bus.connect("message", self.message_handler)
On line 4, an instance of gst.Bus is obtained. In the introductory section
on GStreamer, we already learned what the code between lines 4 to 6
does. This bus has the job of delivering the messages posted on it from the
streaming threads. The add_signal_watch call makes the bus emit the
message signal for each message posted. This signal is used by the method
message_handler to take appropriate action.
Write the following method:
1 def play(self):
2     self.is_playing = True
3     self.player.set_state(gst.STATE_PLAYING)
4     while self.is_playing:
5         time.sleep(1)
6     evt_loop.quit()
On line 3, we set the state of the gst pipeline to gst.STATE_PLAYING to
start the audio streaming. The flag self.is_playing controls the while
loop on line 4. This loop ensures that the main event loop is not terminated
before the end of the audio stream is reached. Within the loop, the call to
time.sleep just buys some time for the audio streaming to finish. The
value of the flag is changed in the method message_handler, which watches for
the messages from the bus. On line 6, the main event loop is terminated.
This gets called when the end-of-stream message is emitted or when some
error occurs while playing the audio.
8. Next, develop the method AudioPlayer.message_handler. This method sets the
appropriate flag to terminate the main loop and is also responsible for changing the
playing state of the pipeline.
1 def message_handler(self, bus, message):
2     # Capture the messages on the bus and
3     # set the appropriate flag.
4     msgType = message.type
5     if msgType == gst.MESSAGE_ERROR:
6         self.player.set_state(gst.STATE_NULL)
7         self.is_playing = False
8         print "\n Unable to play audio. Error: ", \
9             message.parse_error()
10    elif msgType == gst.MESSAGE_EOS:
11        self.player.set_state(gst.STATE_NULL)
12        self.is_playing = False
In this method, we only check two things: whether the message on the bus
says the streaming audio has reached its end (gst.MESSAGE_EOS) or whether
any error occurred while playing the audio stream (gst.MESSAGE_ERROR).
For both these messages, the state of the gst pipeline is changed from
gst.STATE_PLAYING to gst.STATE_NULL. The self.is_playing flag
is updated to instruct the program to terminate the main event loop.
We have defined all the necessary code to play the audio. Save the file and run the
application from the command line.
This will begin playback of the input audio file. Once it is done playing, the
program will be terminated. You can press Ctrl + C on Windows or Linux to
interrupt the playing of the audio file. It will terminate the program.
What just happened?
We developed a very simple audio player, which can play an input audio file. The code we
wrote covered some of the most important components of GStreamer. These components
will be useful throughout this chapter. The core component of the program was a GStreamer
pipeline that had instructions to play the given audio file. Additionally, we learned how to
create a thread and then start a gobject event loop to ensure that the audio file is played
until the end.
Have a go hero – play audios from a playlist
The simple audio player we developed can only play a single audio file, whose path is
hardcoded in the constructed GStreamer pipeline. Modify this program so it can play the
audio files in a "playlist". Here, the playlist should define the full paths of the audio files you
would like to play, one after the other. For example, you can specify the file paths as
arguments to this application, load the paths defined in a text file, or load all audio files
from a directory.
Hint: In a later section, we will develop an audio file converter utility. See if you can use
some of that code here.
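One possible starting point for this exercise is sketched below. The function names are our own, and the pipeline string simply mirrors the one used earlier in this section; playing the playlist would then mean running one pipeline per entry to end-of-stream before starting the next:

```python
import os

AUDIO_EXTENSIONS = (".mp3", ".ogg", ".wav")

def collect_playlist(directory):
    """Gather full paths of audio files in a directory, sorted by name."""
    return [os.path.join(directory, name)
            for name in sorted(os.listdir(directory))
            if name.lower().endswith(AUDIO_EXTENSIONS)]

def pipeline_for(path):
    """Build the same pipeline string used earlier, for one playlist entry."""
    return ('filesrc location="%s" ! decodebin ! '
            'audioconvert ! autoaudiosink' % path)
```

With these helpers, the play method could loop over collect_playlist(some_dir), construct a pipeline per file with gst.parse_launch(pipeline_for(path)), and wait for the EOS message before moving on.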
Building a pipeline from elements
In the last section, a gst.Pipeline was automatically constructed for us by the
gst.parse_launch method. All it required was an appropriate command string, similar to the one
specified while running the command-line version of GStreamer. The creation and linking
of elements was handled internally by this method. In this section, we will see how to
construct a pipeline by adding and linking individual element objects. GStreamer pipeline
construction is a fundamental technique that we will use throughout this chapter and also in
other chapters related to audio and video processing.
Time for action – playing an audio: method 2
We have already developed code for playing an audio file. Let's now tweak the method
AudioPlayer.constructPipeline to build the gst.Pipeline using different
element objects.
1. Rewrite the constructPipeline method as follows. You can also download this
file from the Packt website for reference. It has all the
code we discussed in this and the previous sections.
1 def constructPipeline(self):
2     self.player = gst.Pipeline()
3     self.filesrc = gst.element_factory_make("filesrc")
4     self.filesrc.set_property("location",
5                    "C:/AudioFiles/my_music.mp3")

7     self.decodebin = gst.element_factory_make("decodebin",
8                    "decodebin")
9     # Connect the decodebin signal with a method.
10    # You can move this call to self.connectSignals()
11    self.decodebin.connect("pad-added",
12                   self.decodebin_pad_added)

14    self.audioconvert = \
15        gst.element_factory_make("audioconvert",
16                   "audioconvert")

18    self.audiosink = \
19        gst.element_factory_make("autoaudiosink",
20                   "a_a_sink")

22    # Construct the pipeline
23    self.player.add(self.filesrc, self.decodebin,
24                    self.audioconvert, self.audiosink)
25    # Link elements in the pipeline.
26    gst.element_link_many(self.filesrc, self.decodebin)
27    gst.element_link_many(self.audioconvert, self.audiosink)
2. We begin by creating an instance of the class gst.Pipeline.
3. Next, on line 3, we create the element for loading the audio file. Any new gst
element can be created using the API method gst.element_factory_make.
The method takes the element name (a string) as an argument. For example, on
line 3, this argument is specified as "filesrc" in order to create an instance of the
element GstFileSrc. Each element will have a set of properties. The path of the
input audio file is stored in the property location of the self.filesrc element. This
property is set on line 4. Replace the file path string with an appropriate audio
file path.
You can get a list of all the properties of an element by running the
gst-inspect-0.10 command from a console window. See the introductory
section on GStreamer for more details.
4. The second optional argument serves as a custom name for the created object. For
example, on line 20, the name for the autoaudiosink object is specified as
a_a_sink. Like this, we create all the essential elements necessary to build the pipeline.
5. On line 23 in the code, all the elements are put in the pipeline by calling the
gst.Pipeline.add method.
6. The method gst.element_link_many establishes a connection between two or
more elements for the audio data to flow between them. The elements are linked
together by the code on lines 26 and 27. However, notice that we haven't linked
together the elements self.decodebin and self.audioconvert. Why? That's
up next.
7. We cannot link the decodebin element with the audioconvert element at the
time the pipeline is created. This is because decodebin uses dynamic pads. These
pads are not available for connection with the audioconvert element when the
pipeline is created. Depending upon the input data, it will create a pad. Thus, we
need to watch out for a signal that is emitted when the decodebin adds a pad!
How do we do that? It is done by the code on line 11 in the code snippet above.
The "pad-added" signal is connected with the method decodebin_pad_added.
Whenever decodebin adds a dynamic pad, this method will get called.
8. Thus, all we need to do is manually establish a connection between the decodebin
and audioconvert elements in the method decodebin_pad_added. Write the
following method.
1 def decodebin_pad_added(self, decodebin, pad):
2     caps = pad.get_caps()
3     compatible_pad = \
4         self.audioconvert.get_compatible_pad(pad, caps)
5     # Link the dynamic pad with the compatible pad.
The method takes the element (in this case it is self.decodebin) and the
pad as arguments. The pad is the new pad of the decodebin element. We
need to link this pad with the appropriate one on self.audioconvert.
9. On line 2 in this code snippet, we find out what type of media data the pad handles.
Once the capabilities (caps) are known, we pass this information to the method
get_compatible_pad of the object self.audioconvert. This method returns a
compatible pad, which is then linked with pad on line 6.
10. The rest of the code is identical to the one illustrated in the earlier section. You
can run this program the same way as described earlier.
What just happened?
We learned about some very crucial components of the GStreamer framework. With the simple
audio player as an example, we created a GStreamer pipeline 'from scratch' by creating various
element objects and linking them together. We also learned how to connect two elements by
'manually' linking their pads, and why that was required for the element self.decodebin.
Pop Quiz – element linking
In the earlier example, most of the elements in the pipeline were linked using
gst.element_link_many in the method AudioPlayer.constructPipeline. However, we did not
link the elements self.decodebin and self.audioconvert at the time when the pipeline was
constructed. Why? Choose the correct answer from the following.
1. We were just trying out a different technique of manually linking these
elements together.
2. Decodebin uses a dynamic pad that is created at runtime. This pad is not
available when the pipeline is created.
3. We don't need to link these elements in the pipeline. The media data will just find
its way somehow.
4. What are you talking about? It is impossible to connect the decodebin and
audioconvert elements no matter what you try.
Playing an audio from a website
If there is an audio file somewhere on a website that you would like to play, we can pretty much
use the same AudioPlayer class developed earlier. In this section, we will illustrate the use of
gst.Playbin2 to play an audio file by specifying a URL. The code snippet below shows the revised
AudioPlayer.constructPipeline method. The name of this method could be changed, as it is a
playbin object that it creates.
1 def constructPipeline(self):