The Definitive Guide to SUSE Linux Enterprise Server 12

For your convenience Apress has placed some of the front
matter material after the index. Please use the Bookmarks
and Contents at a Glance links to access them.

Contents at a Glance
About the Author .............................................................. xxi
About the Technical Reviewer .................................................. xxiii
Acknowledgments ............................................................... xxv
Introduction .................................................................. xxvii

Part I: Basic Skills .......................................................... 1
Chapter 1: Introduction and Installation ...................................... 3
Chapter 2: Basic Skills ....................................................... 33

Part II: Administering SUSE Linux Enterprise Server ........................... 49
Chapter 3: Managing Disk Layout and File Systems .............................. 51
Chapter 4: User and Permission Management ..................................... 81
Chapter 5: Common Administration Tasks ........................................ 99
Chapter 6: Hardening SUSE Linux ............................................... 131
Chapter 7: Managing Virtualization on SLES .................................... 161
Chapter 8: Managing Hardware, the Kernel, and the Boot Procedure .............. 177

Part III: Networking SUSE Linux Enterprise Server ............................. 197
Chapter 9: Configuring Network Access ......................................... 199
Chapter 10: Securing Internet Services: Certificates and SUSE Firewall ........ 229
Chapter 11: Basic Network Services: xinetd, NTP, DNS, DHCP, and LDAP .......... 259
Chapter 12: Setting Up a LAMP Server .......................................... 309
Chapter 13: File Sharing: NFS, FTP, and Samba ................................. 331
Part IV: Advanced SUSE Linux Enterprise Server Administration ................. 359
Chapter 14: Introduction to Bash Shell Scripting .............................. 361
Chapter 15: Performance Monitoring and Optimizing ............................. 389
Chapter 16: Creating a Cluster on SUSE Linux Enterprise Server ................ 433
Chapter 17: Creating a SLES 12 Installation Server ............................ 471
Chapter 18: Managing SUSE Linux ............................................... 479
Index ......................................................................... 527


Introduction
This book is about SUSE Linux Enterprise Server 12. It is intended for readers who already have basic Linux skills, so
you won’t find information on how to perform really basic tasks. Some elementary skills are briefly explained, after
which, in a total of 18 chapters, the specifics of working with SUSE Linux Enterprise Server are touched upon.
While writing this book, I have decided it should not be just any generic Linux book that happens by accident to
be about SUSE Linux Enterprise Server. Instead, I have focused on those tasks that are essential for Linux professionals
who need to know how specific tasks are performed in an SUSE environment. That is why the SUSE administration
tool YaST plays an important role in this book. YaST was developed to make administering SUSE Linux easy. In
previous versions of SUSE Linux, YaST had a bad reputation, as on some occasions, it had overwritten configurations
that the administrator had carefully built manually. On SUSE Linux Enterprise Server (SLES) 12 that doesn’t happen
anymore, and that is why YaST provides an excellent tool to build the basic configurations that are needed to do
whatever you want to do on your Linux server. That is why many chapters begin with an explanation of how tasks are
accomplished through YaST.
I am also aware, however, that using YaST alone is not sufficient to build a fully functional SLES server. That is
why after explaining how to accomplish tasks with YaST, you’ll learn which processes and configuration files are used
behind them, which allows you to manually create the exact configuration you require to accomplish whatever you
need to accomplish on your server.
As I am a technical trainer myself, I have also included exercises throughout this book. These exercises help
readers apply newly acquired skills in SLES and also help those who are preparing for the SUSE CLA and CLP exams.
This book is not written as a complete course manual for these exams, but it will serve as an
excellent guide in preparing for them.
This book is organized in four parts. The first part briefly touches on basic skills. In Chapter 1, you’ll
learn how SUSE relates to other Linux distributions, and Chapter 2 covers the SUSE Linux Management basics. In this
chapter, you’ll learn how YaST is organized and what you can do to make the best possible use of it.
The second part is about Linux administration basics. You’ll first learn about file systems, including the new Btrfs
file system and its features, in Chapter 3. Following that, you’ll learn how to create users, configure permissions, apply
common tasks, and harden SLES. The last two chapters in this section are about virtualization and management of
hardware, the kernel, and the boot procedure, which includes the new systemd process that takes care of everything
that happens while booting.
The third part is about networking SLES. You’ll learn how to use the new wicked tool to configure networking and
how to set up essential services that are used in a network context, including firewalling, SSL managing, DNS, DHCP,
LDAP, LAMP, NFS, and FTP. This section should help you get going, no matter which network services you want to
configure.
The fourth and final part of this book is about advanced administration tasks. You’ll learn how to write and read
shell scripts, how to optimize performance, how to build a high-availability cluster, how to configure an installation
server, and how to manage SUSE Linux using SUSE Manager.


Part I

Basic Skills


Chapter 1

Introduction and Installation
In this chapter, you’ll learn about SUSE Linux Enterprise 12 and how to install it. You’ll read how it relates to other
versions of SUSE Linux and how modules are used to deliver functionality in a flexible way.

Understanding SUSE Linux Enterprise
Linux is an open source operating system. That means that the programs are available for free and that anyone can
install Linux without having to pay for it. It also means that the source code for all software in Linux is freely available.
There are no secrets in open source. Because of this freedom, Linux features can be used by anyone and implemented
in a specific way by anyone, as long as the source code remains open.
To work with Linux, users can gather software themselves and install all programs for themselves. That is a lot
of work and is why, since the earliest days of Linux, distributions have been offered. A distribution is a collection of
Linux packages that is offered with an installation program, to make working with the distribution easy. One of these
distributions is SUSE. Other distributions that currently are often used include Ubuntu and Red Hat.
SUSE (which stands for Software und System Entwicklung—Software and Systems Development) was founded in
Germany in September 1992 and, as such, is one of the oldest Linux distributions available. When it was purchased by
Novell in 2004, SUSE rapidly became one of the leading enterprise Linux distributions.

Versions of SUSE
Currently, there are two branches of SUSE Linux. openSUSE is the pure open source version of SUSE. It is available for
free and is released on a regular basis. In openSUSE, new features and programs are tested before they find their way
to SUSE Linux Enterprise.
openSUSE provides a very decent operating system, but it was never meant to be an enterprise operating system.
One of the reasons is that a version of openSUSE is not maintained very long, meaning that openSUSE users have to
upgrade to a newer version of the operating system after a relatively short period. openSUSE, however, is an excellent
operating system for professionals who are working with Linux. It allows them to explore new features before they are
brought to market in a new version of SUSE Linux Enterprise.
SUSE also provides a branch of the operating system for enterprise use. This branch is known as SUSE Linux
Enterprise. Two main versions of SUSE Linux Enterprise are available: SUSE Linux Enterprise Server (SLES) and SUSE
Linux Enterprise Desktop (SLED).
In the past, some serious attempts have been made to make Linux into a desktop operating system. That,
however, never became a large-scale success. On the server, however, SUSE Linux has become an important player,
being used by small and large companies all over the world.

About Supported Linux
An important difference between SUSE Linux Enterprise and openSUSE is that SUSE Linux Enterprise is supported.
That is also why customers are paying for SUSE Linux Enterprise, even if it can be downloaded and installed for free.
The support of SUSE Linux Enterprise includes a few important features that are essential for corporate IT.
•	SUSE is certified for specific hardware. That means that hardware vendors certify their platform for SUSE Linux Enterprise. So, if a customer gets in trouble on specific hardware, he or she will receive help, even if the hardware runs SUSE Linux Enterprise. Also, hardware vendors are knowledgeable about SUSE Linux Enterprise, so customers can get assistance from that side, in case of problems.

•	Specific applications are certified for use on SUSE Linux Enterprise. If a company wants to run business applications on Linux, it is important that the business application is well integrated with Linux. That is what running a supported application means. More than 5,000 applications are certified for SUSE Linux Enterprise, which means that if a user has problems with the application, the application vendor will be able to offer support, because it is used on a known and supported platform.

•	Updates are verified and guaranteed. On a new version of SUSE Linux Enterprise, updates will be provided for a period of seven years, after which an additional five years of extended support is available. That means that SUSE Linux Enterprise can be used for twelve years, thus guaranteeing that business customers don’t have to perform any upgrade of the software in the meantime.

•	Support also means that SUSE offers direct help to customers who are having trouble. Different levels of support are available, from e-mail support, which is available for a relatively low price, up to premium support from engineers who will contact clients within a few hours.

Working with SUSE Linux Enterprise 12 Modules
In SLE 12, SUSE has introduced modules. Modules consist of specific software solutions, but with a custom life cycle.
By working with modules, SUSE makes it easier to provide updates on specific software. A module is not a new way of
selling solutions. Software that was included in earlier versions of SLE is still included in SLE 12. A module, however,
is a collection of software packages with a common use case, a common support status, and a common life cycle.
This makes sense, because for some modules, a support life cycle of ten years is too much. Public cloud management
software, for example, is developing very fast, as is the case for solutions such as web and scripting. By putting these
in modules, SUSE makes it possible to provide updates on versions that are providing new functionality, without
breaking the generic support status of SUSE Linux Enterprise.
Currently, SUSE is providing modules for different solutions, including the following:
•	Scripting languages, such as PHP, Python, and Ruby on Rails

•	UNIX legacy, such as sendmail, old IMAP, and old Java

•	Public cloud integration tools

•	Advanced systems management

While installing SLE, these modules can be selected in the Extension Selection option. At the time of writing,
modules were provided not as an ISO image but via online repositories only, although this policy might change.
Aside from the modules that are provided as an integrated part, there are extensions as well. The most common
extension is the High Availability Extension (see Chapter 18), but other extensions may be provided too.
Apart from these, SUSE is also selling different products. An example of these is SUSE Manager, which is
discussed in Chapter 18.
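Because modules are delivered through online repositories, they can also be listed and activated from the command line once the system is registered. A brief sketch, assuming a registered SLES 12 system; the module identifier shown is an example, so copy the exact string from the listing output:

```
# List all modules and extensions available for this system
# (requires a registered SLES 12 installation):
SUSEConnect --list-extensions

# Activate a module using the identifier reported by the listing,
# for example the Web and Scripting Module:
SUSEConnect -p sle-module-web-scripting/12/x86_64

# Verify that the module's repositories have been added:
zypper repos
```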


Installing SUSE Linux Enterprise Server 12
To perform a basic installation of SUSE Linux Enterprise Server 12, you need an ISO or an installation disk. Advanced
installation solutions are available also, such as an installation by using PXE boot and an installation server. These
are discussed in Chapter 17. To install SLES, your server needs to meet some minimal system requirements. These
depend on the kind of installation you want to perform. A text-only installation has requirements other than a full
graphical installation. Table 1-1 provides an overview of recommended minimal specifications.
Table 1-1. Installation Requirements

                        Text-based      Graphical
CPU                     i5 or better    i5 or better
RAM                     512MB           1GB
Available disk space    2GB             4GB
Network                 100Mbit         100Mbit

The SLES software is available at www.suse.com. Although SLES is a paid product, you can download an ISO image
for free; you will find it classed as “trial” on the web site. With the free version, you won’t be able to get support
or updates, but don’t worry about the “trial” classification: the software you install is fully functional.
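After downloading, it’s wise to verify the integrity of the ISO against the checksum published on the download page before burning or booting it. The file names below are stand-ins (a real download has the full SLES ISO name); for demonstration, a small placeholder file and its checksum are created first:

```shell
# Create a stand-in "ISO" and its checksum file; with a real download,
# you would fetch the .sha256 file from the vendor instead:
echo "demo image content" > sles12.iso
sha256sum sles12.iso > sles12.iso.sha256

# The actual verification step -- prints "sles12.iso: OK" if the file
# is intact:
sha256sum -c sles12.iso.sha256
```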

Performing a Basic Installation
After starting the installation from the installation media, you’ll see the welcome screen (see Figure 1-1). On this
screen, you see different options, of which Boot from Hard Disk is selected by default. Select Installation to start the
installation procedure. Other options are
•	Upgrade: Use this to upgrade a previous version of SUSE Linux Enterprise Server.

•	Rescue System: This option provides access to a rescue system that you can use to repair a server that doesn’t start normally anymore.

•	Check Installation Media: Use this option to verify that your installation disk has no physical problems before starting the installation. Note that, in general, this option takes a lot of time.

•	Firmware Test: This option verifies the compatibility of firmware that is used.

•	Memory Test: This option checks the integrity of system RAM and can mark segments of a RAM chip as unusable, so that it will not be used upon installation.

In the lower part of the screen, you’ll also see several function keys that allow you to change settings, such as
installation language, video mode, and installation source. Also, by using these options, you can specify additional
drivers to be loaded. If you’re using a non-US keyboard, it makes sense to set the installation language and the
correct keyboard layout before continuing, as this option lets you change both. If you want to install in English but
need a different keyboard layout, you can also select the layout on the next screen.


Figure 1-1. The Installation menu


After selecting the Installation option, a Linux kernel and the corresponding installation program is loaded.
While this happens, the hardware in your server is detected. This can take some time. After hardware detection has
occurred, you’ll see the screen shown in Figure 1-2, from which you can select the Language and Keyboard and agree
to the License Agreement.

Figure 1-2. Selecting the installation language


To access patches and updates, you must provide an e-mail address and associated registration code at this point
(see Figure 1-3). If you don’t, you can still continue the installation and continue this part later. So, if you have a valid
e-mail address and registration code, enter it now. If you don’t, or if you want to perform an offline installation, click
Skip Registration. If you’re using a local registration server, such as a SUSE Manager server or an SMT server, click
Local Registration Server and enter the relevant credentials.

Figure 1-3. Entering your registration details


After entering your registration details, you can select optional Add On Products (see Figure 1-4). These are
additional SUSE solutions, such as the High Availability Extension, which is not included in SUSE Linux Enterprise.
To tell the installation program where to find the installation files, select the installation source from this screen. You
can install add-on products from any installation source, including local directories, hard disks, or installation servers.
If you don’t have any additional products to install, just select nothing and click Next.

Figure 1-4. Selecting an optional add-on product


On the screen that you see in Figure 1-5, you can select the partitioning for your server. By default, two partitions
are created: one containing a swap volume, and the other containing a Btrfs file system. If you want to use Btrfs on
SLES 12, it doesn’t make much sense to create several partitions, as every directory can be mounted as a subvolume,
with its own mount properties (see Chapter 3 for more details on this). If you don’t want to use Btrfs, you can use the
Expert Partitioner, to create your own partitioning. In the section “Installing with a Custom Partition Scheme,” later in
this chapter, you can read how to do that.

Figure 1-5. Specifying hard disk layout


Many services such as databases rely on correct time configuration. In the Clock and Time Zone window that you
see in Figure 1-6, you can specify your locale settings. Normally, you first click on the map, to set the right settings.
Next, you specify whether the hardware clock in your computer is set to Coordinated Universal Time (UTC). UTC more
or less corresponds to Greenwich Mean Time (GMT), and using it ensures that all of your servers share the same time reference.
UTC is often used for Linux servers. If your server is using local time, you can set it here. If you’re not sure, just look at
the current time that is shown. If it’s wrong, it is likely that you’re using the wrong setting here. You can also manually
adjust the time settings, by clicking the Other Settings button. This allows you to manually set time and specify which
NTP time servers you want to use. (Read Chapter 11 for more details about working with NTP.)
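A quick way to check the distinction described above is to compare local time and UTC side by side. The first two commands work anywhere; the `timedatectl` commands are shown commented out because they assume a running systemd system, which is what you have after installation:

```shell
# Show the current time as local time and as UTC; if the time shown
# during installation is wrong, the UTC/local-time interpretation of
# the hardware clock is a likely culprit:
date
date -u

# After installation, inspect and adjust the same settings with
# timedatectl (part of systemd):
# timedatectl status
# timedatectl set-ntp true
```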

Figure 1-6. Specifying clock and time zone settings


On the screen shown in Figure 1-7, you can create a new user account and set properties for this user. It’s a good
idea to create at least one local user account, so that you don’t have to work as root if that’s not necessary. If you don’t
want to create a local user account, you can just click Next, to proceed to the next step.

Figure 1-7. Creating a local user account


At this point, you’ll have to enter a password for the user root (see Figure 1-8). Make sure to set a password
that is complicated enough to be secure. To make sure that you don’t enter a wrong password because of keyboard
incompatibility, you can use the Test Keyboard Layout option, to verify the current keyboard settings.

Figure 1-8. Setting the root password


You’ll now access the Installation Settings window, which you can see in Figure 1-9. In this window, you’ll find
many options to further fine-tune your installation settings.

Figure 1-9. Fine-tuning installation settings


The Software option allows you to choose from different package categories, to make an entire class of software
and all of its dependencies available. If you require more detail, click the Details button, which still shows all of the
different categories of software but also allows you to select or de-select individual packages (see Figure 1-10). After
selecting this option, you can select one of the software patterns on the left, to show all the individual packages in that
category. If you’re looking for specific packages, you can use the Search option (see Figure 1-11). Enter a keyword and
click Search, to start your search operation. This shows a list of packages found to the left, from which you can select
everything you need. From any of the four tabs in the Software Selection utility, click Accept, once you’re done. You
may now see a window about dependencies, telling you that in order to install the packages you’ve selected, some
other packages must be installed as well. Confirm, to get back to the main settings window, from which you can
continue configuring the next part of your environment.

Figure 1-10. Getting more details on available software


Figure 1-11. Use the Search option, if you’re looking for something specific


The next part of the configuration settings is about the boot loader (see Figure 1-12). SLES 12 uses GRUB 2 as its
default boot loader. The correct version is selected automatically: depending on the hardware you’re using, you
need either GRUB2 or GRUB2-EFI. You can also select where to install the boot loader. By default, SLES installs to the
boot sector of the partition that contains the root file system (which is also set as the active partition in the partition table).
In the MBR, some generic boot code is written, which allows the boot loader to find the code you’ve written to the
active partition. If you prefer to write the boot code directly to the MBR, you can select Boot from Master Boot Record
instead.
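The choices made here can be revisited after installation. On SLES 12, GRUB 2 reads its defaults from /etc/default/grub, and changes take effect after regenerating the configuration file. A sketch (the device name in the last command is an example, so adjust it to your disk):

```
# Edit boot loader defaults (timeout, kernel parameters, console), then
# regenerate the GRUB 2 configuration file:
vi /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

# Re-install the boot code itself, e.g. to the MBR of /dev/sda:
grub2-install /dev/sda
```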

Figure 1-12. Selecting a boot loader


While booting, you can pass kernel parameters to the kernel from the boot loader (see Figure 1-13). This allows
you to further fine-tune the behavior of the kernel and to include or exclude specific drivers, which is sometimes
required for compatibility reasons. From this window, you can also specify which type of console you want to use
(graphical or something else) and specify a console resolution.

Figure 1-13. Specifying kernel boot parameters


The third tab of the boot loader configuration menu allows you to set a timeout, the default section you want to
boot, and a boot loader password. You should consider setting a boot loader password, as without such a password,
anyone can access the GRUB boot menu and pass specific options to the boot loader. This is a security risk for
environments in which the console can be physically accessed. If you protect the boot loader with a password, such
options can only be entered after providing the correct password.
After indicating how you want the boot loader to work, you can configure the firewall and Secure Shell (SSH).
By default, the firewall is enabled, as is the SSH service, but the SSH port is blocked. To change this configuration,
select Firewall and SSH and make appropriate selections (see Figure 1-14). There is no advanced interface for firewall
configuration available at this point, but you probably want to open at least the SSH port.
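On SLES 12, the firewall behind this screen is SuSEfirewall2, and the choice you make here ends up in its configuration file. A minimal sketch of opening the SSH port by hand after installation, assuming the default SuSEfirewall2 service is in use:

```
# In /etc/sysconfig/SuSEfirewall2, open TCP services on the external
# zone; "ssh" opens port 22:
FW_SERVICES_EXT_TCP="ssh"

# Apply the change by restarting the firewall service:
# systemctl restart SuSEfirewall2
```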

Figure 1-14. Opening the firewall for SSH


Next, you can specify if you want to use Kdump. Kdump is a core dump kernel that can be loaded with your
default kernel. If the kernel crashes, the core dump kernel can write a memory core dump to a specified partition, to
make it easier to analyze what went wrong when your server crashed. If you want to enable Kdump, you must
specify settings for available memory, as well as the Dump target, which is the default location to which the core
dump will be written (see Figure 1-15).

Figure 1-15. Specifying Kdump settings


After selecting Kdump settings, you can choose a default systemd target. This determines the mode your server is
started in. By default, it will be started in a graphical mode, if graphical packages have been installed. From this menu
interface, you can choose Text mode as an alternative start-up mode (see Figure 1-16).
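The same choice can be changed on a running system at any time with systemctl. A sketch of the relevant commands:

```
# Show the current default target:
systemctl get-default

# Boot to a text console by default (the equivalent of "Text mode"):
systemctl set-default multi-user.target

# Or boot to the graphical login:
systemctl set-default graphical.target
```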

Figure 1-16. Selecting the startup mode


Next, you’ll encounter the System option. This is a very interesting option that probes for available hardware in
your server and allows you to easily change settings for that hardware. These are advanced hardware settings that
change the performance profile of your server (see Figure 1-17). Don’t change them here unless you know what
you’re doing; read Chapter 15 instead, which explains in detail the results of the modifications you can apply here.

Figure 1-17. During installation, you can easily change advanced performance parameters


The last setting allows you to clone the configuration settings to the file /root/autoinst.xml. This is default behavior
that makes it easy to install another server using the same settings. If you don’t want to do that, click Do not write
it. After selecting appropriate settings, click Install, to start the actual installation procedure. Once the file copy has
completed, the system is started with the new settings, and you can start working.
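The /root/autoinst.xml file is an AutoYaST control file, and a later installation can consume it by passing an autoyast= parameter at the installer boot prompt. A sketch, where the server address and device name are examples:

```
# At the boot prompt of the next installation, point the installer at
# the saved control file, e.g. served over HTTP:
#   autoyast=http://192.168.1.10/autoinst.xml
# or read from a local device:
#   autoyast=device://sdb1/autoinst.xml
```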

Installing with a Custom Partition Scheme
By default, the SLES installer proposes a partition scheme in which two partitions are created. The first partition is
configured as swap space, while the second partition is configured as the root file system, using a Btrfs file system. In
some cases, you might want to select a different partitioning scheme, for example, if you’re using applications that
haven’t been certified for Btrfs yet, or if you want to separate different data types. If that’s the case, you have to use the
custom partitioning interface. In this section, you’ll learn how to use it.
When the installer prompts the Suggested partitioning window, click Expert Partitioner, to open the custom
partitioning interface. This opens the window shown in Figure 1-18. On this window, you see a System View tree
on the left, with, under the Linux item, an overview of the storage on your server. By default, the installer shows the
detected hard disk(s), as well as the default partitioning that is proposed for the detected storage.

Figure 1-18. The Expert Partitioner interface


To make changes to a disk, you first have to activate the Linux ➤ Hard Disks ➤ sda item. This brings you to the
Partitions window, which you can see in Figure 1-19. From this window, you can use different operations on the
partitions. To start with, you probably want to delete the existing partitions, so that you can create new partitions.
Select the partitions one by one, and next, click Delete, to remove them from your system. This gives you a starting
point from which your hard disk is completely empty.

Figure 1-19. The Partitions interface


On modern configurations, you might want to start creating your custom disk layout by setting a partition table.
Default partitioning is based on the MSDOS partition table, which allows you to address partitions with a maximum
size of 2TiB. If you want to use the modern GPT (GUID Partition Table) disk layout, select Expert ➤ Create new
partition table. After selecting the GPT partition table type, you’ll work in an environment that is a bit different. For
example, there are no extended partitions in GPT. Read Chapter 3 for more details about these differences.
To create a new partition, from the Partitions menu on your selected hard disk, click Add. This opens the window
shown in Figure 1-20. In this window, you can select the size of the partition you want to use. When specifying a
custom size, enter a size in GiB (1,024 × 1,024 × 1,024 bytes) and not GB. You should note that many hardware vendors work
with GB (1,000 × 1,000 × 1,000 bytes) as the default unit, so you may find that you don’t have as many GiB available as the
amount of GB that was sold to you by your hardware vendor.
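The gap between the two units is easy to quantify. As a worked example, here is how much of a marketed “500 GB” (decimal) drive remains when expressed in the binary GiB the partitioner counts in:

```shell
# A 500 GB drive as marketed, in bytes:
bytes=$((500 * 1000 * 1000 * 1000))

# The same capacity expressed in binary GiB (integer part):
gib=$((bytes / 1024 / 1024 / 1024))
echo "$gib"    # prints 465
```

So roughly 7% of the advertised capacity “disappears” purely through the change of unit.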

Figure 1-20. Creating a new partition


After specifying the size of the partition, in the next screen, you can select a role for the partition: the type
of use you intend for the new partition. Based on the selection you make here, you’ll also get a default selection for the
file system to use. What you select here doesn’t matter much, as you can change the selection in the next
screen anyway.
In the following step, you’ll have to specify formatting options. There are two important choices to make here:
which file system you are going to use and whether or not you are going to use the Logical Volume Manager (LVM).
If you’re planning on using any file system other than Btrfs, it can be interesting to use LVM. When using LVM,
disk devices are joined in a volume group (VG), and from the VG, logical volumes are created as the base
allocation unit for all your storage. LVM allows you to easily resize your storage volumes and offers some other
benefits as well, which is why it is a relatively frequently used solution for storage layout.
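Under the hood, the LVM layout that YaST builds in this section corresponds to a handful of shell commands. The following sketch only assembles and displays them, because they need root and a real disk; the device name /dev/sda2, the vgdata name, and the LV sizes are illustrative assumptions, not values from the installer.

```shell
DEV=/dev/sda2   # the partition of type 0x8E created earlier (assumption)
VG=vgdata       # volume group name suggested in the text
cmds=$(cat <<EOF
pvcreate $DEV                  # initialize the partition as a physical volume
vgcreate -s 4M $VG $DEV        # create the VG with 4MiB physical extents
lvcreate -n lvroot -L 10G $VG  # root LV; then mkfs.xfs /dev/$VG/lvroot
lvcreate -n lvswap -L 2G $VG   # swap LV; then mkswap /dev/$VG/lvswap
lvcreate -n lvvar -L 5G $VG    # /var LV; then mkfs.btrfs /dev/$VG/lvvar
EOF
)
printf '%s\n' "$cmds"
```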
Next, you’ll have to decide on the file system you want to use. For a server, there are basically three choices.
Some other file systems are listed, but they are not used very often anymore. The choice is between XFS, Btrfs,
and Ext4. Use Btrfs if you want the latest and greatest file system for Linux; in Chapter 3, you’ll learn about all the
features Btrfs has to offer. If you want a very stable and reliable file system, you’re better off using XFS, a flexible, fast,
and well-proven file system that has been around for a long time. If you need backward compatibility, you can select
the Ext4 file system. Ext4 doesn’t offer the speed and scaling options that you might need on a modern
server, which is why it should not be your first choice. But it’s still around, and it’s a very stable file system, so if your
applications require Ext4, it’s there for you.
To show you as many options as possible, in the following procedure, you’ll learn how to configure a server that
uses the following disk layout:
•	A small boot partition, using Ext4
•	All remaining disk space in another partition, configured for use with LVM
•	A root logical volume, using XFS
•	A swap logical volume
•	A dedicated logical volume for /var, using Btrfs

To offer the best possible booting support, it’s a good idea to use a small boot partition. To do this, first create
the partition with a size of 500MiB. Next, set the mount point to /boot and select the Ext4 file system. As the boot
partition contains only a small number of files, there’s no need to use an advanced file system, such as XFS or Btrfs, on
this partition. After selecting these options, you can click Finish, to write the changes to disk and continue
(see Figure 1-21).

Figure 1-21. Creating a /boot partition
Back in the main Partitions overview, you can now add all remaining disk space to a partition that you’re going
to use for LVM. To do this, when asked for the New Partition Size, select the option Maximum Size, which
allocates all remaining disk space. Then click Next and select the Raw Volume role. This automatically selects the
Do not format partition option in the next screen and sets the partition ID 0x8E (Linux LVM). You can now click
Finish, to complete this part of the configuration.

After creating the partitions, from the Expert Partitioner main window, you’ll have to select the Volume
Management option. From this interface, click Add and select Volume Group. This opens the Add Volume Group
interface, which you can see in Figure 1-22. The Volume Group is the collection of all available storage. You’ll have to
give it a name and assign storage devices to it. It’s a good idea to use a volume group name that is easy to recognize.
For example, use vgdata as its name.

Figure 1-22. Creating a volume group

Next, you can set the Physical Extent Size. A physical extent is the base building block for creating logical
volumes: every logical volume has a size that is a multiple of the physical extent size. In many cases, an extent size of
4MiB works well, but if you want to use large logical volumes, you’re better off using bigger physical extents.
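This rounding to whole extents can be made concrete with a little arithmetic (the numbers below are illustrative, not from the installer):

```shell
# A size request that is not a multiple of the extent size is rounded up
# to whole physical extents.
extent_mib=4          # physical extent size chosen when creating the VG
request_mib=2501      # requested logical volume size in MiB
extents=$(( (request_mib + extent_mib - 1) / extent_mib ))   # ceiling division
actual_mib=$(( extents * extent_mib ))
echo "$request_mib MiB requested -> $extents extents = $actual_mib MiB allocated"
# → 2501 MiB requested -> 626 extents = 2504 MiB allocated
```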
The last step in creating the volume group is to assign physical volumes to it. You’ll find all partitions
that have been set up with partition type 0x8E in the list of available physical volumes. Select them and click Add,
to add them to the volume group. Next, click Finish, to proceed to the next step.
After creating the volume group, the installer brings you back to the Expert Partitioner window. From here, click
Add ➤ Logical Volume, to add a logical volume. This opens the window shown in Figure 1-23, from which you can
specify a name and type for the logical volume. For normal use, select the Normal Volume type. Use Thin
Pool / Thin Volume for environments in which you want to do thin provisioning, which makes sense, for example, in
an environment in which desktop virtualization is used. In addition, all logical volumes require a unique name. You’re
free to choose whatever name you like, but it makes sense to select a name that makes it easy to identify the
purpose of the volume.

Figure 1-23. Creating logical volumes

After specifying the properties of the logical volume, you must specify a size as well (see Figure 1-24). If you
plan on using more than one logical volume, don’t leave the Maximum Size option selected. It would take all available
disk space, leaving no room to create any additional volumes. Logical volumes support resizing: you can grow
any file system, but not all file systems can be shrunk. As a volume is easy to grow, it’s a good idea to keep the
volumes relatively small and leave some disk space unallocated, to accommodate future growth. Once the volume has
been created, you’ll get to the same interface that is used for the creation of partitions. From this interface, you can select
the file system to use, as well as the mount point. Note that when configuring an LVM volume for swap, you don’t have
to set a directory as a mount point. The system interface swap is set as the mount point, and that cannot be changed.

Figure 1-24. Specifying volume size

After finalizing your custom disk layout, you can write it to disk. The installer will now bring you back to the main
installation procedure, which has been explained in the previous section.

Summary
In this chapter, you’ve learned about the SUSE product portfolio, focusing on SUSE Linux Enterprise Server, in particular.
You have also learned about the different choices you have to make while performing an installation. A basic installation
has been explained, as has the creation of an advanced disk layout. In the next chapter, you’ll learn about some of the
essentials required to get you going with SUSE Linux Enterprise Server.


Chapter 2

Basic Skills
Now that you have SUSE Linux Enterprise Server (SLES) installed, in this chapter, I’ll cover some basic skills to help
you get started. I won’t cover Linux basics in much detail here. If you have never worked with Linux before,
I suggest you read my Beginning the Linux Command Line (Apress, 2009). What you will learn in this chapter is
how an SLES system is organized and where you can find important components of the operating system. You’ll receive an
introduction to working from the GNOME graphical environment, as well as an introduction to working
with YaST, the integrated management utility on SUSE Linux Enterprise Server.
In this chapter, the following topics are discussed:
•	Exploring SLES Interfaces
•	Working with YaST

Exploring SLES Interfaces
After installing SLES, there are two interfaces that you can work from: the graphical interface and the console
interface. When working from the console interface, you can use SLES just like any other Linux distribution. You’ll
notice it has some minor differences, related to some of the utilities that are used and locations of files and directories,
but it’s still a bash shell, and if you have previous experience with any other Linux distribution, it should not be
difficult to work with it.

Graphical or Not?
In the past, the graphical interface was considered an interface for novice Linux administrators. “Real administrators
work from the shell,” is what many people stated. Also, servers in the past had a good reason not to run a graphical
interface by default. Only a few utilities needed a graphical interface; most were written to be used in a text-only
environment. In addition, servers had limited resources, and it was considered a waste to install a server with a
graphical interface, especially if it wasn’t going to be used.
Nowadays, servers tend to have many more resources, so the waste of resources is not that significant anymore.
Also, there are quite a few good graphical utilities available, which makes it more tempting to use a graphical
interface. And last but not least, on a graphical interface, administrators can open more than one shell window at the
same time, which may make it easier to work on complex tasks.
All this doesn’t take away the fact that servers are also frequently configured to run in “headless” mode, without
even a terminal connected, with administrators connecting only through Secure Shell (SSH). In that case,
it doesn’t make sense to run a complete graphical environment. In the end, you’ll have to decide for yourself how you
want to run your servers. Pick what’s best; SUSE has developed SLES to fully support both environments.


GNOME or KDE?
SUSE has a long history of rivalry between GNOME and KDE users. While in openSUSE, you can choose which
graphical interface to use, on SUSE Linux Enterprise, GNOME is used as the default graphical interface. SUSE does not
want to dedicate resources to the development and maintenance of another graphical interface, which is why KDE
packages are not included.

Exploring GNOME
If you’ve selected a default installation, and the hardware configuration of your server has allowed for it, you’ll have a
GNOME 3 graphical interface. Figure 2-1 shows what this interface looks like after logging in to it.

Figure 2-1. The default GNOME 3 graphical interface
You can see that the interface has a pretty clean configuration, to make it easy to find the items you need to work
with. Basically, there are just a few things to be aware of to get you started with SUSE easily.
To begin with, the SUSE GNOME interface uses different workspaces, which means that the desktop is bigger than
the part you can see. By default, you’re on workspace one of four. If you click the 1/4 indicator (to the left of the
time in the bar at the lower end of the screen), you can select a different workspace. Using workspaces makes it easy
to work with multiple graphical applications.

A second part of the interface that is rather useful is the menu that pops up after right-clicking
somewhere on the desktop. From this menu, you can easily access the most important part of the graphical interface:
the terminal. Just click Open in Terminal to open as many terminals as you like (see Figure 2-2).

Figure 2-2. Easy access to new terminals
Third, there are a few menus in the lower-left part of the interface. The Applications menu provides access
to some applications, and the Places menu allows you to easily access folders on this server or in other
locations. Note, however, that for a server administrator, the applications that are available from the
menus are of limited use, and many more applications can be started directly from a terminal shell. Still, a
few of these applications can be useful.

GNOME Configuration Editor
Not many people know it, but the GNOME interface comes with something that resembles the Windows Registry Editor.
The GNOME Configuration Editor (see Figure 2-3) allows you to lock down or configure different parts of GNOME. Select,
for example, the option desktop ➤ gnome ➤ applications ➤ terminal, and you get access to the exec and exec_arg keys,
which tell GNOME which binary to associate with the GNOME terminal and which startup arguments to use when running
this binary. In Exercise 2-1, you’ll learn how to apply a simple setting in the GNOME Configuration Editor.

Figure 2-3. The GNOME Configuration Editor provides access to many features to tune and limit graphical applications

EXERCISE 2-1. MAKING CHANGES IN THE GNOME CONFIGURATION EDITOR
In this exercise, you’ll work in the GNOME Configuration Editor to apply some simple changes to the GNOME
configuration.
1.	Log in as root and open System Tools ➤ GNOME Configuration Editor.
2.	Browse to apps ➤ gdm ➤ simple-greeter.
3.	You’ll see that the disable_user_list option is currently selected. This option ensures that upon login to the graphical environment, you won’t see a list of all users on your computer. Deselect this option.
4.	Another interesting candidate is in apps ➤ firefox ➤ lockdown. From there, select the options disable_extensions and disable_history. Using these options makes Firefox a little more secure.
5.	Next, use apps ➤ firefox ➤ web and select the cache_size parameter. Click the value to change it, and decrease it to 20000. This reserves a bit less memory for cache usage, which makes sense on a server.
6.	Restart your server. You’ll notice that the login screen is different. Next, start Firefox and try to access the Firefox history. You’ll note that it no longer works.

Network Tools
If you’re a long-term Linux administrator, you’ll probably know your tools, and you’ll be able to fix network issues
from the command line. If you don’t know the different tools very well yet, you may like the Network Tools. This
graphical program provides access to different useful functions that you can use to analyze the network. Figure 2-4
shows an overview of its default appearance. Note that you’ll need to be root to get full access to the tools provided
from this interface. Exercise 2-2 gives you an idea of what you can do with the Network Tools.

Figure 2-4. Network Tools provides an easy interface to test networking functionality

EXERCISE 2-2. USING THE GNOME NETWORK TOOLS
In this exercise, you’ll explore some options that are provided by the GNOME Network Tools.
1.	Log in to your server as a normal user. From the Applications menu, select System Tools ➤ Network Tools. The interface opens, from which you can select the network devices that are available. Use the drop-down list to have a look at the different network devices and find the IP address that is in use on your system.
2.	Click the Ping tab and enter a network address that should be accessible. The default gateway, for example, will do well. Specify that you want to send only five requests and click Ping, to start pinging the other node. You’ll get a nice overview of the round-trip time statistics while doing this (see Figure 2-5).

Figure 2-5. Using Ping from the Network Tools

3.	On the Netstat tab, you can analyze the networking configuration of your computer. Select Active Network Services, and next, click Netstat. This provides an overview of all the ports that are listening on your server. In subsequent chapters of this book, you’ll learn more about configuring these services.
4.	Last, click Port Scan. Enter an IP address and click Scan. This will tell you that nmap has to be installed to access this functionality. Open a terminal window and type su -. Next, enter the root password. You are now root, which allows you to install software. Type zypper in nmap to install the nmap port scanner. Go back to Network Tools and try to issue the port scan once again. You’ll see that it now works, and if the target host you’re scanning allows it, you’ll get a list of all ports that are open on that host.

■■Warning Port scanning is regarded by many network administrators as a malicious activity. You could be banned
from the network (or worse) while performing a port scan. Use this on your own networks only, to analyze the
availability and accessibility of services.
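The graphical tabs above also have command-line counterparts. The following is a sketch using the standard iproute2 utilities; the first two commands are read-only and safe to run, the last two are shown as comments because they need a target and, for nmap, an installed package.

```shell
ip -o addr show                                     # Devices tab: addresses per interface
command -v ss >/dev/null 2>&1 && ss -tuln || true   # Netstat tab: listening TCP/UDP ports
# Ping tab:      ping -c 5 <default-gateway>
# Port Scan tab: zypper in nmap; nmap <target>      (your own networks only)
```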

Settings
To make your GNOME desktop experience as good as it gets, you can access the GNOME Settings menu
(see Figure 2-6). In this menu, you’ll get access to different personalized GNOME settings. All of the settings that
you configure here are written to your own user home directory. As an administrator who wants to provide default
settings to users, you can change many of these settings and distribute them to other users. In Exercise 2-3, you’ll
learn how to change settings from this menu and distribute them to the home directories of other users.

Figure 2-6. Changing GNOME settings

EXERCISE 2-3. CHANGING AND DISTRIBUTING GNOME USER SETTINGS
In this exercise, you’ll learn how to change and distribute GNOME user settings. You’ll work with some commands
that are related to user management, which are discussed in more detail in Chapter 4.
1.	Right-click the GNOME desktop and click Open in Terminal to open a GNOME terminal. Type su - and enter the root password to become root.
2.	Type useradd -m user to create a user with the name user. Use passwd user to set a password for this user. Set the password to “password.” It will state that it’s a bad password, but you can set it anyway.
3.	Click the Off button in the lower-right part of the GNOME desktop. From the menu that appears, click Log out.
4.	Back at the login prompt, log in as the user you’ve just created.
5.	As this user, access the GNOME Settings menu by selecting Applications ➤ System Tools ➤ Settings.
6.	Click Displays, select your primary display, and make sure the display resolution is set to a minimum of 1024 × 768.
7.	Select Background. From the window that opens, click Background, and from the next window, click Wallpapers. Select another wallpaper and close the Settings application.
8.	Open a GNOME terminal, and type su - to become root. Enter the root password.
9.	Use cd /home/user to access the home directory of the template user you’ve been using. From there, type cp -R .config /etc/skel.
10.	Type useradd -m lori, to create a user with the name lori. Type passwd lori to set a password for user lori.
11.	Log out and log in again as user lori. Note that the settings you changed for the template user are applied for user lori also.
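The /etc/skel mechanism that makes step 11 work can be simulated without root; in this sketch, temp directories stand in for /etc/skel and /home, and the settings.ini file name is purely illustrative.

```shell
SKEL=$(mktemp -d)                 # stand-in for /etc/skel
NEWHOME=$(mktemp -d)/lori         # stand-in for /home/lori
mkdir -p "$SKEL/.config"
echo "wallpaper=mytheme" > "$SKEL/.config/settings.ini"
# This is roughly what useradd -m does: copy skel content into the new home.
mkdir -p "$NEWHOME"
cp -R "$SKEL/." "$NEWHOME/"
cat "$NEWHOME/.config/settings.ini"
# → wallpaper=mytheme
```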

Working with YaST
Where other Linux distributions assume that their administrators want to work with command-line utilities and
configuration files to configure their distribution, SUSE has created an extensive management platform with the name
YaST (Yet another Setup Tool). YaST offers easy access to many tasks in SUSE and makes it easy for administrators
who are not Linux administration experts to do what they have to do anyway.
In this section, you’ll learn how to work with YaST. We won’t go through every single utility that is available in
YaST, but you’ll get a generic overview that will help you in understanding how to use YaST in your environment. In
subsequent chapters in this book, you’ll get detailed information about many of the utilities that are available from YaST.

■■Note The official name of YaST is YaST (Yet another Setup Tool). However, when referring to the binary, you’ll
also encounter yast and yast2. yast typically refers to the program file that starts YaST in
non-graphical mode, while yast2 is the binary that starts YaST in graphical mode.
YaST vs. Configuration Files
A common prejudice heard from administrators who don’t know YaST is that working with YaST
makes it impossible to apply changes directly to the configuration files. While glitches existed in very old
versions of SUSE Linux, this is no longer the case. For SUSE, it is a top priority to make sure that YaST can be used as an
easy interface to the configuration files. YaST is there to create a basic configuration, which gives the administrator a
starting point for further fine-tuning the configuration files.
In many cases, YaST will notice when an administrator has applied manual modifications. These modifications
will, in most cases, be picked up by YaST automatically, so there will rarely be conflicts. If, however, the administrator
has applied changes that are incompatible with the way YaST works, a backup configuration file is created,
to make sure that administrators do not lose their hard work but can manually integrate their own modifications
with those made by YaST. Many configuration files also contain clear instructions on
how to act if you want to make manual modifications, to ensure that you don’t conflict with what YaST has been
doing. Listing 2-1 offers an example of the first few lines of the /etc/default/grub configuration file.
Listing 2-1. Integration Between YaST and Configuration Files
linux-m6gc:~ # head /etc/default/grub
# Modified by YaST2. Last modification on Mon Sep 1 07:31:52 EDT 2014
# THIS FILE WILL BE PARTIALLY OVERWRITTEN by perl-Bootloader
# For the new kernel it try to figure out old parameters. In case we are not able to recognize
it (e.g. change of flavor or strange install order ) it it use as fallback installation parameters
from /etc/sysconfig/bootloader
# If you change this file, run 'grub2-mkconfig -o /boot/grub2/grub.cfg' afterwards to update
# /boot/grub2/grub.cfg.

YaST in This Book
In this book, I want to promote the philosophy behind YaST as a tool that makes working with Linux a lot easier for
the system administrator. Many topics will be configured from YaST first. Once YaST has created a basic configuration,
you’ll get to know the configuration file behind it and learn which parts in the configuration file are important and
what they mean.

YaST Interfaces
To make working with YaST as easy as possible, there are three different appearances of YaST. First, there is the
so-called ncurses interface, which was developed for use in a non-graphical environment. You can start this interface
by typing yast or yast --ncurses. In Figure 2-7, you can see what this interface looks like. It is likely that you’ll use this
interface a lot, because it works perfectly over remote sessions, such as PuTTY or other kinds of SSH connections.

Figure 2-7. The YaST ncurses interface
To work with the ncurses interface, a few keys are important. First, there is the Tab key, which allows you to
navigate between the main parts of the screen. Looking at the screen in Figure 2-7, you can see that there are a few
main options. There’s the pane on the left, from which you select the task category you want to work on. Within a task
category, you’ll see the different modules available in that task. Next, there are a couple of items available in the lower
part of the screen. To navigate between these, use the Tab key, or use Shift+Tab to go backward.
If you’re in one of the option panes, you need the arrow keys to move between options. Don’t use the Tab key,
because that will bring you to another pane of the YaST window. You’ll also use the arrow keys to open drop-down
lists. After selecting an option, press Enter to open its further configuration. If within a certain window you
find options to select or switch on/off, you would normally use the space bar to do that.
Also notice that on many occasions, there are shortcuts provided by function keys. F10, for example, is often used
as the OK key, and F9 is used to Quit. Using these shortcuts makes it easier to make fast selections in YaST.
Apart from the ncurses interface, there are two graphical appearances of YaST, based on the Qt and the GTK
graphical user interfaces. The Qt interface is the KDE interface, and as that interface is not supported on SLES, you
won’t use it. The GTK interface is what is needed in GNOME. An easy way to start YaST in GTK mode is by typing the
command yast2. Alternatively, you can use yast --gtk to start it. (Depending on the software that is installed, this
can generate an error message; it’s safe to ignore that.) In Figure 2-8, you can see what the GTK interface looks like.

Figure 2-8. The YaST GTK interface

YaST Modules
YaST offers many different modules to accomplish a large diversity of tasks. That is convenient, because it allows
you to browse if you don’t know exactly what you’re looking for. If, however, you’re an experienced administrator who
just wants to start a specific module, it’s a waste of time to go through all the different menu options. There are a few
solutions for that.
If you’re using the GTK interface, there’s a search bar in the upper-right corner. From this search bar, type a
keyword, representing the kind of task that you want to perform. This will narrow the number of YaST modules that
you see. If, for example, you type the text iscsi, it will only show the iSCSI management modules that are installed on
your system, and nothing else.
Another approach is that you can directly call the module when starting YaST. If you type yast --list, a list of
available modules is shown. From this list, you can look up the specific module that you need, and once you’ve found
it, you can call it directly. If, for example, you want to start the Software Management utility directly, you can type
yast sw_single. After closing the module, you won’t see the main YaST window, but you’ll get back to your starting
terminal immediately.

Behind YaST
As mentioned before, YaST is modular and easily extensible. Specific modules belong to specific programs, which
means that a module might not be available until the program using it is installed. When looking for software to
install, you’ll see the YaST modules mentioned, as well as their current installation state. In Listing 2-2, you can see
what this looks like when searching for the iscsi packages (used just as a random example here; read Chapter 16
for further details about iSCSI).
Listing 2-2. YaST Modules Have to Be Installed Before Being Visible
linux-m6gc:~ # zypper se iscsi
Loading repository data...
Reading installed packages...
S | Name                   | Summary                                   | Type
--+------------------------+-------------------------------------------+-----------
i | iscsiuio               | Linux Broadcom NetXtremem II iscsi server | package
i | open-iscsi             | Linux* Open-iSCSI Software Initiator      | package
  | open-iscsi             | Linux* Open-iSCSI Software Initiator      | srcpackage
i | yast2-iscsi-client     | YaST2 - iSCSI Client Configuration        | package
  | yast2-iscsi-client     | YaST2 - iSCSI Client Configuration        | srcpackage
  | yast2-iscsi-lio-server | Configuration of iSCSI LIO target         | package
  | yast2-iscsi-lio-server | Configuration of iSCSI LIO target         | srcpackage
So, imagine that you want to configure the iSCSI LIO target. You must install the corresponding YaST module
first. To do that, use zypper in yast2-iscsi-lio-server. This will be explained in more detail in Chapter 5. To get
an overview of all YaST modules that are available, type zypper search yast.
Some modules are so commonly used that they’re installed by default. That is, the module is installed in YaST,
but the software you need to configure the related service is not. If that is the case, you’ll be prompted to install the
corresponding packages—just click Install to do that automatically (see Figure 2-9).

Figure 2-9. Installing related software packages automatically
An interesting aspect of YaST is that the modules it uses are written in Perl. That means that if you’re a
Perl programmer, you can easily create your own YaST modules. By default, YaST modules are installed in
/usr/share/YaST2/modules. Have a look at them to get an impression of how they are organized.

YaST Logging
YaST activity is logged as well. You’ll find the YaST logs in the directory /var/log/YaST2. Some modules have their
own logging, which allows for detailed analysis of what they’re doing. The generic YaST log is /var/log/YaST2/y2log.
In this log, you’ll find detailed information about all the modules that were called by YaST and the status of each
action (see Listing 2-3). If at any time YaST doesn’t do what you expect it to, you can check here to find out what
has happened.

Listing 2-3. Getting More Information Through y2log
linux-m6gc:/var/log/YaST2 # tail y2log
2014-09-01 12:41:40 <1> linux-m6gc(37460) [Pkg] PkgFunctions.cc(~PkgFunctions):158 Releasing the zypp pointer...
2014-09-01 12:41:40 <1> linux-m6gc(37460) [zypp] RpmDb.cc(closeDatabase):734 Calling closeDatabase: RpmDb[V4(X--)V3(---): '(/)/var/lib/rpm']
2014-09-01 12:41:40 <1> linux-m6gc(37460) [zypp] librpmDb.cc(blockAccess):328 Block access
2014-09-01 12:41:40 <1> linux-m6gc(37460) [zypp] RpmDb.cc(closeDatabase):765 closeDatabase: RpmDb[NO_INIT]
2014-09-01 12:41:40 <1> linux-m6gc(37460) [zypp] TargetImpl.cc(~TargetImpl):953 Targets closed
2014-09-01 12:41:40 <1> linux-m6gc(37460) [zypp] RpmDb.cc(~RpmDb):268 ~RpmDb()
2014-09-01 12:41:40 <1> linux-m6gc(37460) [zypp] RpmDb.cc(~RpmDb):271 ~RpmDb() end
2014-09-01 12:41:40 <1> linux-m6gc(37460) [Pkg] PkgFunctions.cc(~PkgFunctions):160 Zypp pointer released
2014-09-01 12:41:40 <1> linux-m6gc(37460) [Y2Ruby] binary/YRuby.cc(~YRuby):107 Shutting down ruby interpreter.
2014-09-01 12:41:40 <1> linux-m6gc(37460) [Y2Perl] YPerl.cc(destroy):164 Shutting down embedded Perl interpreter.

YaST Configuration Files
The main configuration file for YaST is /etc/sysconfig/yast2. In this file, some variables are set that define the
default behavior of YaST. Listing 2-4 shows a list of those variables. You can open the file to see comments on how to
use them.
Listing 2-4. Configuration Variables from /etc/sysconfig/yast2
linux-m6gc:/etc/sysconfig # grep -v ^\# yast2
WANTED_SHELL="auto"
WANTED_GUI="auto"
Y2NCURSES_COLOR_THEME="mono"
STORE_CONFIG_IN_SUBVERSION="no"
SUBVERSION_ADD_DIRS_RECURSIVE="no"
PKGMGR_ACTION_AT_EXIT="summary"
PKGMGR_AUTO_CHECK="yes"
PKGMGR_VERIFY_SYSTEM="no"
PKGMGR_REEVALUATE_RECOMMENDED="no"
USE_SNAPPER="no"
A setting that has been important for this book is Y2NCURSES_COLOR_THEME. By default, YaST uses light blue text on
a dark blue background, which can be very hard to read. By setting this variable to "mono", YaST displays in different
shades of gray only, which in many situations is much easier to read.
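Setting the variable can also be done non-interactively. The following sketch works on a small sample copy instead of the real /etc/sysconfig/yast2, so it is safe to run anywhere; as root on a server, you would apply the same sed command to the file itself.

```shell
CFG=$(mktemp)                     # sample copy standing in for /etc/sysconfig/yast2
cat > "$CFG" <<'EOF'
WANTED_SHELL="auto"
Y2NCURSES_COLOR_THEME=""
EOF
# Rewrite the variable assignment in place, whatever its current value.
sed -i 's/^Y2NCURSES_COLOR_THEME=.*/Y2NCURSES_COLOR_THEME="mono"/' "$CFG"
grep ^Y2NCURSES "$CFG"
# → Y2NCURSES_COLOR_THEME="mono"
```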

EXERCISE 2-4. WORKING WITH YAST

In this exercise, you’ll explore some of the advanced YaST features. All tasks in this exercise have to be performed
with root permissions.
1.	Start YaST in GTK mode by typing yast2 from a graphical environment. Use the search option
	in the upper-right part of the screen to look for iSCSI modules. You’ll see one module only.

2.	From a console, type zypper se iscsi. Install the iSCSI LIO server package, using zypper
	in yast2-iscsi-lio-server. Restart YaST and repeat step 1. You’ll see that the module is
	now listed.

3.	Use yast --list to find the module that allows you to manage users. Start it by specifying
	the module name as the argument.

4.	Change YaST to use monochrome when started in ncurses mode. To do this, open the
	configuration file /etc/sysconfig/yast2 and set Y2NCURSES_COLOR_THEME="mono".

Summary
In this chapter, you’ve learned to work with some of the particulars of SUSE Linux Enterprise Server (SLES). First, you
have explored the GNOME 3 graphical interface and worked with some of the most useful programs that it offers.
Next, you’ve learned how to use YaST as the default tool for configuration of many aspects of SLES. In the next chapter,
you’ll learn how to manage file systems on SLES.


Part II

Administering SUSE Linux
Enterprise Server


Chapter 3

Managing Disk Layout and File Systems
On a Linux server, the way in which the hard disk is organized, and in which the file systems are created on that
hard disk, is essential. There are many choices to be made, and no single solution fits all needs. In
this chapter, you’ll first assess whether you have to work with partitions or whether you’re better off working with logical
volumes. You’ll also examine how partitions and volumes behave differently on a master boot record (MBR) and on a
globally unique identifier (GUID) partition table. Next, you’ll discover how to create partitions and logical volumes.
Once the storage volume has been created, you have to put a file system on it. In this chapter, you’ll learn which file
system best fits your needs and how to manage specific file-system features.

Creating a Storage Volume
When deciding on the design of hard disk layout, different options are available, depending on the hardware that is
used. Options start with the type of disk that is used. On each disk, a boot loader is required. You’ll have to decide
between the classical master boot record and the newer globally unique identifier–based partition table. After making
that choice, you’ll have to decide whether to use partitions on logical volumes. This section explains your options.

The Partition Table: GUID vs. MBR
Since 1981, the MS-DOS-type boot sector has been used. With this type of boot sector, which is also known as a master
boot record (MBR), a maximum disk size of 2TB is supported, and disk layout is based on partitions. As the amount
of space in this type of boot sector is limited, a maximum amount of four partitions can be created. If more than four
partitions are required, an extended partition is used, and within the extended partition, multiple logical partitions
are created.
In current data centers, the maximum size of disks goes more and more frequently beyond 2TB. With the limited
amount of address space that is available in MBR, this no longer can be addressed. That is why a new type of boot
loader has been introduced. In this boot loader, the GUID Partition Table (GPT) is used. In this type of partition
table, all partitions are primary partitions. Owing to the increased address space, the necessity to work with logical
partitions has ceased.
A modern Linux distribution such as SUSE Linux Enterprise Server (SLES) can handle the difference between
GPT and MBR. If partitions are created from YaST, the differences aren’t even visible. If, however, you’re using
command-line utilities, you must be wary, because GPT partitions demand a different approach than MBR partitions.
For the administrator, it’s often not a choice whether to use GPT.
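The 2TB boundary follows directly from the MBR format: partition offsets and sizes are stored as 32-bit sector counts, and with the common 512-byte sector size, that caps what an MBR can address. A quick sketch of the arithmetic:

```shell
# MBR stores partition start and size as 32-bit LBA (sector) values:
max_sectors=$((2**32))
sector_size=512

max_bytes=$((max_sectors * sector_size))
echo "$max_bytes bytes"              # 2199023255552 bytes
echo "$((max_bytes / 1024**4)) TiB"  # 2 TiB
```

Disks with 4KB native sectors can address more under MBR, but 512-byte sectors remain the common case, which is why the 2TB figure is the one usually quoted.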

Partitions or Logical Volumes?
The other choice that you’ll have to make as an administrator is between partitions and logical volumes. Partitions are
the traditional way of organizing a disk, where every storage volume has a fixed size. Using partitions has a few
disadvantages: you cannot easily resize them, and the number of partitions that can be created is limited.
Logical volumes, managed by the Logical Volume Manager (LVM), have been introduced as an alternative. With LVM,
it is relatively easy to resize a storage volume. Also, logical volumes aren’t bound to the physical device they are
created on. In LVM, all storage devices are grouped in a volume group, and logical volumes are created from the volume
group. The result is that if a logical volume runs out of disk space, it is easy to add a disk to the volume group,
which allows for growth of the logical volume.
While LVM is very flexible and offers important benefits compared to traditional partitions, with the rise of the
Btrfs file system, the need to create logical volumes has decreased. Many features that were previously supported only
on LVM are now also supported in the Btrfs file system. So, if Btrfs is used, you can do without LVM. But if multiple
file systems are used side by side on your server, it can still be worthwhile to use LVM. To allow you to use the disk
layout that works best for your environment, this chapter discusses both solutions.

Creating Partitions
If you’re used to tools such as fdisk for managing partitions, you can still use them on SLES. If you want to make it
a bit easier, while keeping full access to all of the advanced options that exist when working with storage, you can use
YaST as a partitioning tool. When creating partitions or logical volumes from YaST, everything is integrated, and after
creating the partition, you can easily put a file system on it.

Creating Partitions from YaST
To start the partitioning utility from YaST, select System ➤ Partitioner. This will elicit a warning, from which you can
select Yes to continue. You’ll then see the Expert Partitioner screen, which is shown in Figure 3-1.

Figure 3-1. The Expert Partitioner interface

■■Note In this book, I prefer showing the ncurses interface of YaST, not because it is prettier, but because it is always
available, whether you’re working from a text-only session or a complete graphical environment.
To add a partition, from the Expert Partitioner window, you’ll have to use the Tab key to navigate to the disk on
which you want to create a partition. This gives access to the interface you see in Figure 3-2.

Figure 3-2. Options for creating new partitions
To add a new partition, select Add. This first opens a window, from which you can specify the size you want to
use (see Figure 3-3). By default, all available disk space is selected, so if you don’t want to use it all, specify the size you
want to assign in MiB, GiB, or TiB. Note that MiB (mebibyte) and its larger siblings are based on powers of 1024
(1 MiB = 1024 × 1024 bytes), in contrast to MB, GB, and TB, which are based on powers of 1000.
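The difference is easy to underestimate. A short sketch of how the binary (IEC) and decimal (SI) units compare:

```shell
echo "1 MiB = $((1024**2)) bytes; 1 MB = $((1000**2)) bytes"
echo "1 GiB = $((1024**3)) bytes; 1 GB = $((1000**3)) bytes"

# This is also why a disk sold as "1 TB" shows up as roughly 931 GiB:
echo "1 TB = $((10**12 / 1024**3)) GiB"   # 1 TB = 931 GiB
```

At the terabyte scale, the gap between the two unit systems is close to 10 percent, so it pays to be precise about which one a tool is using.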

Figure 3-3. Specifying partition size
After specifying the partition size you want to use, you can select the role for the partition. Depending on the
role you’ve selected, a preselection of file system and mount point will be made. You can also choose the option Raw
Volume, which allows you to format the partition at a later stage. This can be useful if, for example, you want to reserve
disk space for use as a storage back end for an iSCSI LUN or a virtual machine.
In the next screen, you can select the file system type and mount options. These topics are explained in detail
later in this chapter. When you’re back on the main screen of the Expert Partitioner, you’ll see the new partition. It
hasn’t been committed to disk yet, however. To commit the changes to disk, select Next and Finish.

Creating Partitions from the Command Line
If you want to create partitions on an MBR disk, you have to use fdisk. To create partitions on a GPT disk, you’ll have
to use gdisk. The utilities are pretty similar, but the information that is written to disk is different, so make sure to
use the right tool. As GPT partitions are rapidly becoming more common, in this chapter, I’ll explain gdisk and not fdisk.
To create a partition, type gdisk, followed by the name of the device you want to use. This gives you a message
about the current disk partitioning and will next allow you to perform your manipulations on that disk. For an
overview of available commands, type ? (see Listing 3-1).

Listing 3-1. Working from the gdisk Interface
linux-s0gc:/etc/sysconfig # gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.8
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): ?
b	back up GPT data to a file
c	change a partition's name
d	delete a partition
i	show detailed information on a partition
l	list known partition types
n	add a new partition
o	create a new empty GUID partition table (GPT)
p	print the partition table
q	quit without saving changes
r	recovery and transformation options (experts only)
s	sort partitions
t	change a partition's type code
v	verify disk
w	write table to disk and exit
x	extra functionality (experts only)
?	print this menu

Command (? for help):
To create a new partition, type n. You can now select the partition number you want to create. The maximum
number of GPT partitions is 128, but you should number partitions in order, so normally, you can just press Enter to
select the partition number that is proposed. Next, it asks for the first sector to use. A sector has a size of 512 bytes, and
by default, the first sector that is used is 2048. You should not have your partition start anywhere else, unless you have
a very good reason.
After selecting the starting point for the partition, you can specify the size. The easiest way to specify a size is by
using a + sign, followed by an amount and the identifier K, M, G, T, or P, according to the number of kibi-, mebi-, gibi-,
tebi-, or pebibytes you want to assign. You next have to assign the partition type you want to use. For all normal Linux
file systems, the default partition type 8300 works fine. Note that the partition type is not the same as the file system
type. The partition type indicates the intended use of a partition, and in many cases, it’s not necessary to set a specific
partition type. After creating the partition, type p, for an overview. The result should resemble Listing 3-2.

Listing 3-2. Verifying Partition Creation
Command (? for help): p
Disk /dev/sdb: 17179869184 sectors, 8.0 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): DF090B7D-9509-4B13-95E2-BC8D7E98B4C1
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 17179869150
Partitions will be aligned on 2048-sector boundaries
Total free space is 17177771965 sectors (8.0 TiB)
Number  Start (sector)    End (sector)  Size        Code  Name
   1              2048         2099199  1024.0 MiB  8300  Linux filesystem
Command (? for help):
If you’re happy with the results so far, type w to write the results to disk. If you’re not happy after all, type q to quit
without writing any of the new partitions to disk.
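You can cross-check the numbers gdisk printed in Listing 3-2. With 512-byte sectors, a 1024 MiB partition that starts at sector 2048 must end at sector 2099199, because the end sector is inclusive:

```shell
sector_size=512
start=2048

# 1024 MiB expressed in 512-byte sectors:
size_sectors=$((1024 * 1024 * 1024 / sector_size))
echo "size: $size_sectors sectors"         # size: 2097152 sectors

# The end sector is inclusive, hence the -1:
echo "end:  $((start + size_sectors - 1))" # end:  2099199
```

The same arithmetic explains the "aligned on 2048-sector boundaries" line: 2048 sectors of 512 bytes is exactly 1 MiB of alignment.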

■■Warning It is extremely important that you not use gdisk on a disk that is set up with an MBR, if you are
booting from that disk. When gdisk writes its changes, it replaces the MBR with a GPT, which can make your disk
unbootable.

Creating Logical Volumes
As with partitions, you can create logical volumes either from YaST or from the command line. In this section, you’ll
learn how to do both.

Creating Logical Volumes from YaST
If you want to create logical volumes, YaST offers an easy-to-use interface. First, you should make sure that you have
disk space available for creating logical volumes. That can be a partition that you’ve created with partition type 0x8E
(Linux LVM). You’re also fine if you have unallocated disk space available.
To create logical volumes, start from YaST ➤ System ➤ Partitioner. Click Yes when the warning is displayed,
which opens Expert Partitioner. Before you’re able to add new logical volumes, you need either a complete disk or a
partition that has been set up with the partition type 0x8e. To create such a partition, from the YaST Expert Partitioner,
select Hard Disks, and after selecting it, press the space bar. This will show you all available hard disks. Select the hard
disk on which you want to work and press Enter. This opens the window that you can see in Figure 3-4.

Figure 3-4. The Expert Partitioner overview

From this window, use the Tab key to select Add and create a new partition. By default, the Custom Size option
is selected, which allows you to specify the intended partition size manually. Enter the partition size you’d like to use
and select Next. You’ll now see the screen on which you can select the partition role. From this screen, select Raw
Volume (unformatted) and press Next. This will by default set the Linux LVM system ID. Click Finish to complete this
part of the procedure.
After creating the LVM partition, you can move on to the Volume Management part in the Expert Partitioner.
Select Volume Management and press Enter. This will open the Volume Management interface, which you can see
in Figure 3-5.

Figure 3-5. Creating LVM logical volumes from YaST
From this interface, select Add. Depending on what has been created previously, you can now choose between adding
a volume group and adding a logical volume. In Figure 3-5, you can see an example of this window, taken on a
computer that was set up to use LVM. Before you can add a new logical volume, you must have a volume group. If no
volume group exists yet, select Volume Group. If you already have an LVM setup on your computer, you can directly
add a logical volume, provided disk space is still available in the volume group.
The volume group is the abstraction of all available disk space that can be assigned to LVM. You put disks or
partitions in it (the so-called physical volumes), and once their disk space has been assigned to the volume group,
you can create logical volumes from it.
To create the volume group, you’ll first have to specify a name. To make it easy to find volume groups later,
it’s a good idea to start volume group names with the letters vg, but you’re not required to do that. Next, you’ll have
to specify the physical extent size. Physical extents are the minimal building blocks used in LVM. If you’re
planning on creating huge logical volumes, set the physical extent size to the maximum of 64MiB. Every logical
volume you create will then always have a size that is a multiple of 64MiB. For regular purposes, the default size of
4MiB does just fine.
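Because a logical volume always consists of whole extents, any requested size is rounded up to the next extent multiple. A small sketch of that rounding (the round_up helper is just for illustration):

```shell
# Round a requested size (in MiB) up to a whole number of extents.
round_up() {  # $1 = requested size in MiB, $2 = extent size in MiB
  echo $(( ($1 + $2 - 1) / $2 * $2 ))
}

# With the default 4 MiB extents, a 500 MiB request fits exactly:
echo "$(round_up 500 4) MiB"    # 500 MiB
# With 64 MiB extents, the same request is rounded up:
echo "$(round_up 500 64) MiB"   # 512 MiB
```

This is why a volume created with large extents can end up slightly bigger than the size you asked for.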
After specifying the physical extent size, you must add physical volumes (disks or partitions) to the volume group.
Make sure that you only select the intended disks or partitions, and after selecting them, click Add to add them to the
set of selected physical volumes. At this point, you should see an interface that looks as in Figure 3-6.

Figure 3-6. Creating an LVM volume group
After creating the volume group, you’ll return to the Expert Partitioner main window. From this window, select
the volume group that you’ve just created and use the Tab key on your keyboard to navigate to the Add option. From
the drop-down list, select Logical Volume. This opens the screen that you see in Figure 3-7.

Figure 3-7. Adding a logical volume
To add a logical volume, you have to set the name of the logical volume and select the normal volume type. Then
click Next and specify the size of the logical volume. By default, it wants to allocate the maximum size that is available
in the volume group. That is not always a good idea, so you might be better off selecting a custom size. If there are
multiple physical volumes in your volume group, you can also set the number of stripes. By selecting a number of
stripes that is equal to the number of physical volumes, you’ll load-balance read and write requests across the
physical volumes, comparable to a striping (RAID 0) set on top of LVM. If you have just one disk, set the number of
stripes to 1.
From the next screen, you’ll have the opportunity to add a file system. As this will be discussed separately in a
subsequent section, we’ll skip it here. To step out without creating a file system now, select Raw Volume (unformatted)
and in the next screen, make sure that Do not format partition and Do not mount partition are selected (see Figure 3-8).

Figure 3-8. Skipping File System creation for now

Creating Logical Volumes from the Command Line
To create an LVM setup from the command line, you’ll start by creating a partition. To do this, you can follow the
directions that were given in the section “Creating Partitions from the Command Line.” When gdisk asks which
partition type to use, make sure to enter 8E00, which assigns the LVM partition type to the partition (see Listing 3-3).
Listing 3-3. Creating an LVM Partition Type
Command (? for help): n
Partition number (3-128, default 3): 3
First sector (34-17179869150, default = 4208640) or {+-}size{KMGTP}:
Last sector (4208640-17179869150, default = 17179869150) or {+-}size{KMGTP}: +1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8E00
Changed type of partition to 'Linux LVM'
Command (? for help):
Next, type w to write the partition to disk. If you’re using fdisk to create the partition, you must first define the
partition and next type t to change the partition type. Set the partition type to 8e and type w to write the changes to
disk. Next, run partprobe to make sure that the kernel is updated with the new changes to the partition table.

After creating the partition, you have to make it a physical volume. To do that, you’ll use the pvcreate command.
The command is easy and straightforward to use: just type pvcreate, followed by the name of the partition you’ve
just created, as in pvcreate /dev/sdb3. Next, type pvs to verify that you succeeded and the physical volume has been
created (see Listing 3-4).
Listing 3-4. Verifying the Creation of Physical Volumes
linux-m6gc:~ # pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  vgdata lvm2 a--  19.51g   3.51g
  /dev/sdb2  vgdisk lvm2 a--   1.00g 516.00m
  /dev/sdb3         lvm2 a--   1.00g   1.00g
In Listing 3-4, you can see that the physical volume has been added, but it’s not a part of any volume group yet.
To put it in a volume group, you can now use the vgcreate command. This command has two mandatory options:
you’ll have to specify the name of the volume group that you want to create, as well as the name of the device that you
want to add to it, as in vgcreate vgsan /dev/sdb3. That would create the volume group for you, and you can verify
that by using vgs or vgdisplay. Use vgs if you want to see a short summary of the volume groups on your system and
their properties; use vgdisplay if you’re looking for more extended information. In Listing 3-5, you can see the output
of both commands.
Listing 3-5. Showing Volume Group Properties
linux-m6gc:~ # vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  vgdata   1   3   0 wz--n- 19.51g   3.51g
  vgdisk   1   1   0 wz--n-  1.00g 516.00m
linux-m6gc:~ # vgdisplay vgdisk
  --- Volume group ---
  VG Name               vgdisk
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.00 GiB
  PE Size               4.00 MiB
  Total PE              257
  Alloc PE / Size       128 / 512.00 MiB
  Free PE / Size        129 / 516.00 MiB
  VG UUID               c2l6y9-ibIo-Zdp0-tWeE-WxZl-jmW9-YHaC1Y
In both commands, you can see how many physical volumes have been added to the volume groups. You can also see
the number of logical volumes currently existing in each volume group and a summary of the total size and available
free space in the volume groups.

Now that you have created the volume group, you can add a logical volume to it. To do that, you’ll use the
lvcreate command. This command requires at least two arguments: the size you want to use and the name of the
volume group into which you want to put the logical volume. It is also a good idea to specify a name. If you don’t,
a random name will be generated. If, for example, you wanted to create a logical volume with a size of 500MB,
the name lvdb, and put it in the volume group that uses the name vgdisk, the command to use is
lvcreate -L 500M -n lvdb vgdisk. You can next use the command lvs to get an overview of the logical volume
properties, or lvdisplay, if you want to see more details about it.

Creating and Mounting File Systems
At this point, you should have either a partition or a logical volume, and you’re ready to create a file system on it.
Before moving on and actually doing that, you’ll have to know about differences between Linux file systems. In the
next section, you can read about these. Next, you’ll learn how to create a file system and how to make sure that the file
system is automatically mounted on reboot.

Understanding File System Features
On Linux, many file systems are available, and recently some important changes have occurred, which leaves you with
many choices for the file system that you want to use. Before moving on and teaching you how to actually create and
manage a file system, we’ll first discuss the different options, so that you can select the file system that fits your needs best.

Ext File Systems
Shortly after the initial release of Linux, in 1993, the second extended file system (Ext2) was released. This file system
perfectly met the data needs of the early days of Linux. Ext2 was the default Linux file system for some years, until
the need for journaling became more manifest.
A file system journal is used to keep track of all ongoing transactions on the file system. That means that if at
any time something goes wrong on a server and the server crashes, the file system can easily be recovered from the
journal. If a journal is present, recovering the file system is a matter of seconds. If no journal is present, an extensive
file system check has to be performed to recover the file system, and the consistency of every single file has to be
verified. Even on a small file system, that can take a long time; on a large, modern file system, it can take days. That is
why file system journaling is an essential requirement for any new file system.
The biggest improvement in Ext3, the successor of Ext2, is that it has a file system journal by default. Also, some
improvements have been made to make the kernel module, as well as the indexing that is used in the file system, more
efficient. Ext3 was the default file system in SUSE Linux Enterprise Server 11.
The major disadvantage of Ext3 is the way it organizes access to files. Ext3 keeps track of its files by using linear
indexes. This makes the file system slower when more files are used in the file system. Unfortunately, stepping away
from the way an Ext3 file system is organized would mean a radical redesign of the whole file system. That is why
on the day the Ext3 successor Ext4 was released, it was already clear that this file system wasn’t going to last, and
something else would be needed to fit the needs of modern-day data centers.
Nevertheless, Ext4 has been, and still is, an important Linux file system. Compared to Ext3, the kernel module
has been rewritten to make it faster and more efficient, and a new way of allocating blocks to the file system has been
added: the use of extents.
An extent is a group of blocks that can be addressed and administered as one entity. A typical extent size
is 2MB. Compared to the often-used default file system block size of 4KB, using extents means an important reduction
in the number of blocks that have to be administered for a file. Imagine a file with a size of 2GB, for example: it would
need 500,000 blocks for its administration, whereas only 1,000 extents are needed. Ext4 extents make file system
administration a lot more efficient, but they don’t take away the inefficiency in addressing files. For that reason, on
modern generation Linux distributions, new solutions have been introduced.
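The numbers in that example are easy to verify. Using round decimal units, as the text does, a 2GB file needs half a million 4KB block references but only a thousand 2MB extents:

```shell
file_size=$((2 * 1000**3))   # 2 GB, decimal units as used in the text

echo "4KB blocks:  $((file_size / 4000))"      # 4KB blocks:  500000
echo "2MB extents: $((file_size / 2000000))"   # 2MB extents: 1000
```

With binary units (4096-byte blocks) the block count comes out slightly different, but the 500-to-1 reduction in bookkeeping entries is the point.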

Even if newer file systems have been introduced, Ext4 is still commonly used. In its favor, it is a very stable and very
well-known file system. That means that excellent tooling is available to repair problems occurring on Ext4 and that
many options are available for its optimal use.

ReiserFS
The first serious attempt to offer an alternative to the Ext file systems was made with ReiserFS. This file system was
introduced in the late 1990s as the default file system used by SUSE. Its approach was revolutionary, as it was
organized around a database to keep track of file system administration, instead of the slow and inefficient
linear tables that are used in Ext file systems.
Unfortunately, the integration of ReiserFS with the kernel was not optimal, which is why, with the release of
SLES 11, SUSE dropped ReiserFS as the default file system. Nevertheless, ReiserFS is still available on SLES 12,
and if you need a file system that can deal with many small files in an efficient way, it is still a valid choice. You should
realize, however, that it is no longer a popular file system, which means that it may be difficult to get support, should
you ever encounter serious trouble using it.

Btrfs
Since 2008, developer Chris Mason has been working on the next-generation Linux file system: Btrfs. This file system
is designed as a Copy on Write (CoW) file system, which means that old versions of files can be maintained while
working on them. When a block is written, the old block is copied to a new location, so that two different versions of the
data block exist, which helps prevent problems on the file system. In 2009, Btrfs was accepted in the Linux kernel,
and since then, it has become available in several Linux distributions. From the beginning, SUSE has been one of the
leading distributions in supporting Btrfs.
Apart from being a CoW file system, Btrfs has many other useful features, among them subvolumes. A subvolume can
be seen as something that sits between a volume (or logical partition) and a directory. It is not a separate device, but
subvolumes can be mounted with their own specific mount options. This makes working with file systems quite
different: whereas on older Linux file systems you needed a dedicated device to mount a file system with specific
options, in Btrfs you can simply use a subvolume on the same device.
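As an illustration only (the device name and subvolume names here are hypothetical), two subvolumes of the same Btrfs file system could be mounted with different options like this:

```
# Both mounts use the same device, but each subvolume gets its own options:
mount -o subvol=@/srv /dev/sda2 /srv
mount -o subvol=@/tmp,noatime /dev/sda2 /tmp
```

The subvol= option selects which subvolume is attached to the mount point; generic per-mount options such as noatime can then differ between subvolumes of the same file system.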
Another important feature of Btrfs is snapshots. A snapshot freezes the state of the file system at a specific
moment, which can be useful if you must be able to revert to an old state of the file system, or if you have to make a
backup of the file system.
Because Btrfs is a CoW file system, snapshots are very easy to create. While modifying files, a copy is made of the
old file. That means that the state of the old file is still available, and only new data blocks have to be added to that.
From the metadata perspective, it is very easy to deal with both of these, which is why it is easy to create snapshots
and revert files to an earlier version.
Snapshots are useful if you want to revert to a previous version of a file, but they also come in handy for making
backups. Files in a snapshot will never have a status of open. That means that files in a snapshot are always in a
stable state that can be used to create a backup. Because it is easy to create snapshots of Btrfs subvolumes, SLES
takes care of this automatically. These snapshots are managed with the snapper utility, a front-end utility that makes
it easier to revert to earlier versions of a file. You can even revert to an earlier state of your system from the GRUB
boot menu.

XFS
While Btrfs definitely is the file system of the future, it is relatively new, and not so long ago it still had properties
that made many administrators question whether it really was the best choice for their environment. That is why in
SLES 12, Btrfs is the default for the operating system itself, while the XFS file system is used as the default file system
for data volumes.

XFS is a file system that was developed by SGI in the mid-1990s and that organizes its metadata administration in
database-like structures. It is a proven file system that has been around for a long time and is very flexible in the
different options that can be used with it. XFS can be tuned for different environments: it is very usable for servers
that have to deal with many small files, but also for servers that have to be configured for streaming large files.

Creating and Mounting the File System
The easiest way to create a file system on SLES is from YaST. You can just follow the prompts that are provided when
creating logical volumes or partitions. You can also add a new file system to a device that has been created previously,
which you’ll read about in this section.

Creating File Systems from YaST
To create a file system on a device, you first have to select the device. In the YaST Partitioner, you’ll either open the
Hard Disks view, or you’ll open the Volume Management view, to select the device on which you want to create
the file system. Select the device and, next, select Edit, to modify its properties in the interface, which you can see
in Figure 3-9.

Figure 3-9. Creating a file system on an existing device

You’ll now see the screen shown in Figure 3-10, on which you have to specify how you want to create
the file system. In Formatting Options, select Format Partition. Next, use the down arrow key to select the file system
you want to use, or accept the default selection of the Btrfs file system. You’ll also have to specify a mount point.
This is the directory through which users will access the file system.

Figure 3-10. Making a file system
While creating a file system, you can specify specific fstab options. To make sure that the file system is
automatically mounted on system boot, a reference to the file system is placed in the /etc/fstab file. YaST will
take care of that automatically for you, but you can easily access different mount options, by selecting the fstab
options, which you can see in Figure 3-11.


Figure 3-11. Specifying mount options
To start with, you can specify how you want the device to be mounted. By default, the device is mounted by its
UUID. That is a universally unique ID that is written to the file system on the device and that will never change, not
even if the disk topology changes. Unfortunately, UUIDs are not very readable, which is why other options, such as
the device ID and device path, are offered as well. You can also choose to use one of the more classical approaches,
such as a mount that is based on device name (such as /dev/sdb1), or a mount that is based on a volume label that is
written to the file system. From YaST, you can specify the volume label that you want to use (see Figure 3-11).
While mounting the file system, you can also select some mount options. The mount option No Access Time
can be useful. With this option, the access time stamp in the file metadata is not updated every time a file is read,
which is good for performance. There are many more mount options, however, that are not listed here,
so if you have specific needs, you might want to edit the /etc/fstab file manually.
If you have elected to create a Btrfs file system, you can also create subvolumes from YaST. A subvolume is
created as a subdirectory in the mount point that you’re working on. To add a subvolume, specify its name in the
YaST interface, shown in Figure 3-12, and select Add new. This will add the subvolume to the list (but won’t do
anything else with it). When done, select OK, to write the changes to disk. You have now added the file system to the
/etc/fstab file, to ensure that it is mounted automatically on boot.


Figure 3-12. Adding subvolumes from YaST

Creating File Systems Manually
Instead of using YaST, you can also create file systems manually. This even adds some flexibility, because the
command line offers some tools that are not available from YaST. These allow you, for example, to create a
ReiserFS file system, which you cannot do from YaST.
To add a file system from the command line, different utilities are available. To get an overview of these, type mkfs
and hit the Tab key on your keyboard twice. This shows all commands that have a name beginning in mkfs. As each
file system has different features and options, the options offered by these commands will be quite different from one
another. If you want to create the file system the easy way, you can just type the name of the command, followed by
the device name on which you want to create the file system, as in mkfs.ext4 /dev/sdb2. This will create a file system
with default options for you, which, in most cases, will work just fine.

Making Manual Modifications to /etc/fstab
After creating the file system, you can put it in /etc/fstab for an automatic mount after a reboot. In Listing 3-6, you can
see an example of what the contents of /etc/fstab typically look like.


Listing 3-6. Sample /etc/fstab Contents
linux-m6gc:~ # cat /etc/fstab
/dev/vgdata/swap                           swap   swap   defaults        0 0
/dev/vgdata/lvroot                         /      xfs    defaults        1 1
UUID=781ab8eb-b1eb-49b4-b44a-8bf309e5b99c  /boot  ext4   acl,user_xattr  1 2
UUID=bab5beb7-02df-4697-809e-63d4a25d68bd  /var   btrfs  defaults        0 0
UUID=b0738048-2a89-4cd5-9d8e-dce32bc13f88  /data  btrfs  defaults        0 0

In /etc/fstab, six columns are used to mount the file system. In the first column, you’ll have to specify the name
of the device. Read the next section, “Device Naming,” for more information on how to do that in a smart way. The
second column tells the system on which directory to mount the device. If you make manual adjustments to /etc/
fstab, make sure that you create the mount point before you try mounting it! In the third column, you’ll specify the
type of file system you have used on this device.
In the fourth column, mount options can be specified. If you don't need any specific mount options, just type
"defaults," but if you need support for quota (see the section "Managing Quota," later in this chapter),
or if you need support for access control lists (see Chapter 4), you can enter the required mount option in this column.
In the fifth column, you'll specify the backup (dump) option. This column needs a 1, if you want the file system to
be included in backups made with the legacy dump utility, and a 0, if you don't. Set it to 1 on regular file systems, but
use a 0 in this column if you're using Btrfs. In the last column, you'll specify how the file system needs to be checked
during boot. Use 0, if you don't want an automatic check to occur or if you're using Btrfs. Use a 1, if this is the root file
system. Using a 1 ensures that it is checked before anything else. On all other file systems that are not using Btrfs and
are not the root file system, use a 2.
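The six-field layout just described lends itself to a quick automated sanity check. The following sketch is only an illustration (the sample entries are made up): it verifies that each entry has six fields and that the fsck pass number follows the conventions above, that is, 1 for the root file system, 0 for Btrfs and swap, and 2 for everything else:

```shell
#!/bin/sh
# Validate a sample fstab: six fields per entry and a sane fs_passno.
cat > /tmp/fstab.sample <<'EOF'
/dev/vgdata/lvroot  /      xfs    defaults  1 1
/dev/sdb1           /data  btrfs  defaults  0 0
/dev/vgdata/swap    swap   swap   defaults  0 0
/dev/sdc1           /srv   ext4   defaults  1 2
EOF

awk '
{
    if (NF != 6) { printf "line %d: expected 6 fields, got %d\n", NR, NF; bad = 1; next }
    mnt = $2; type = $3; passno = $6
    # Expected pass number: 1 for /, 0 for btrfs and swap, 2 for the rest.
    expected = (mnt == "/") ? 1 : (type == "btrfs" || type == "swap") ? 0 : 2
    if (passno != expected) { printf "line %d: fs_passno is %s, expected %d\n", NR, passno, expected; bad = 1 }
}
END { exit bad }
' /tmp/fstab.sample && echo "fstab sample OK"
```

The same check can be pointed at the real /etc/fstab; it only reads the file, so it is safe to run at any time.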

Device Naming
When creating a partition, the name of the partition will be similar to /dev/sdc1, which indicates the first partition
on the third hard disk that will be found on the SCSI bus. Using device names such as /dev/sdc1 works perfectly in a
scenario in which servers are connected to physical disks, and the storage topology never changes.
Nowadays, many servers are connected to a Storage Area Network (SAN) to access storage. This means that
storage has become much more flexible and that storage topology can change, with the result that the device that was
previously known as /dev/sdc1 will now be known as /dev/sdf1, or any other device name that you did not expect.
That is why a more flexible approach to file system naming is needed. On modern Linux systems, three different
alternatives for file system naming exist:
•	UUID
•	File System Labels
•	/dev/disk naming

When a file system is created, it is automatically assigned a universally unique ID (UUID). This
UUID is long and difficult to read, but it does have one benefit: it is bound to the file system, and it will never change
spontaneously. If you're on a dynamic storage topology, it might be smart to use UUIDs. You can get a list of all UUIDs
currently available on your system by typing the blkid command (see Listing 3-7).
Listing 3-7. Typing blkid for an Overview of UUIDs on Your System
linux-m6gc:~ # blkid
/dev/sda1: UUID="781ab8eb-b1eb-49b4-b44a-8bf309e5b99c" TYPE="ext4" PTTYPE="dos"
PARTLABEL="primary" PARTUUID="e1695ed9-f2db-495b-9fdd-b970eb7569a7"
/dev/sda2: UUID="Pteo9u-cBKv-3PG7-Vt1n-Mv2H-Tm53-EbK50i" TYPE="LVM2_member"
PARTLABEL="primary" PARTUUID="3ca331eb-c2fc-4d29-9dc7-ba3df66034f1"
/dev/sr0: UUID="2014-08-21-14-06-45-00" LABEL="SLE-12-Server-DVD-x86_6406991" TYPE="iso9660"
PTUUID="2663792f" PTTYPE="dos"
/dev/mapper/vgdata-swap: UUID="7e6db5e9-69cf-4292-b75a-24f4c4871a27" TYPE="swap"
/dev/mapper/vgdata-lvroot: UUID="f029648c-2218-403b-8d86-b2ef9b684c48" TYPE="xfs"
/dev/mapper/vgdata-var: UUID="bab5beb7-02df-4697-809e-63d4a25d68bd"
UUID_SUB="555f2b23-5d33-4ea6-8903-d66faf5b8a82" TYPE="btrfs"
/dev/sdb2: UUID="evADgP-Nb3w-X6cz-g9CD-fZrq-alPE-AjeERC" TYPE="LVM2_member" PARTLABEL="primary"
PARTUUID="4757de6c-7688-49c8-9a84-0c251381f361"
/dev/sdb3: UUID="qfCHtq-dK2Q-JSMG-GM8s-BdNM-K265-ZndJ3T" TYPE="LVM2_member" PARTLABEL="Linux LVM"
PARTUUID="dfe179c2-ed97-4cd2-829a-c619f8ee240c"
/dev/sdb1: UUID="b0738048-2a89-4cd5-9d8e-dce32bc13f88"
UUID_SUB="a5618c9e-ca89-4a6d-bc97-570e54b55276" TYPE="btrfs" PARTLABEL="Linux filesystem"
PARTUUID="927ec3f4-963f-41e1-a0e1-b34d69e1ff21"
Originally, UUIDs were assigned when creating the file system. If you're using a GUID partition table (GPT), there
is an alternative to the file system UUID, and that is the partition UUID, which is displayed as PARTUUID in the output
of the blkid command. To mount a file system based on its UUID, you can include UUID= in /etc/fstab; to mount it
based on its partition UUID, you can use PARTUUID= in /etc/fstab.
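Because blkid prints one device per line, mapping a UUID back to a device name is easy to script, which helps when a mount by UUID fails. The sketch below runs against a saved copy of blkid output (a few lines borrowed from Listing 3-7), so it works without root access:

```shell
#!/bin/sh
# Map a file system UUID back to its device name, using a saved copy of
# blkid output (lines taken from Listing 3-7), so no root access is needed.
cat > /tmp/blkid.out <<'EOF'
/dev/sda1: UUID="781ab8eb-b1eb-49b4-b44a-8bf309e5b99c" TYPE="ext4" PTTYPE="dos"
/dev/mapper/vgdata-swap: UUID="7e6db5e9-69cf-4292-b75a-24f4c4871a27" TYPE="swap"
/dev/mapper/vgdata-lvroot: UUID="f029648c-2218-403b-8d86-b2ef9b684c48" TYPE="xfs"
EOF

uuid_to_dev() {
    # $1 is the UUID to look up; prints the matching device name, if any.
    awk -v u="$1" -F: 'index($0, "UUID=\"" u "\"") { print $1 }' /tmp/blkid.out
}

uuid_to_dev f029648c-2218-403b-8d86-b2ef9b684c48   # prints /dev/mapper/vgdata-lvroot
```

On a live system, blkid -U <uuid> performs the same lookup directly against the devices.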
While UUIDs do serve their purpose and offer worldwide unique naming, they are not very user-friendly. If a
problem occurs while mounting a file system, you won't see at first sight that it is caused by a wrong UUID.
That is where file system labels come in handy. Most mkfs utilities support the option -L to assign a human-readable
name to a file system. Using a label, you can specify a mount option such as LABEL=database in /etc/fstab.
You can also get an overview of currently defined labels by using the blkid command.
Another option that ensures you have unique device naming is to use the names that are created in the /dev/disk
directory. Depending on the specific configuration of the device, you’ll find the device name represented in different
ways. In Listing 3-8, you can see what the contents of the /dev/disk/by-path directory looks like. Using these names
can be a relatively readable way to refer to devices.
Listing 3-8. Device Naming in /dev/disk/by-path
linux-m6gc:/dev/disk/by-path # \ls -l
total 0
lrwxrwxrwx 1 root root  9 Sep  9 12:52 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Sep  8 16:57 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Sep  8 16:57 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  9 Sep  9 13:39 pci-0000:00:10.0-scsi-0:0:1:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Sep  9 13:21 pci-0000:00:10.0-scsi-0:0:1:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Sep  9 12:18 pci-0000:00:10.0-scsi-0:0:1:0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Sep  8 17:42 pci-0000:00:10.0-scsi-0:0:1:0-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Sep  8 16:57 pci-0000:02:05.0-ata-2.0 -> ../../sr0
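The names in /dev/disk are nothing more than symbolic links that udev keeps pointing at the current kernel device name, which is why they survive changes in probe order. The following sketch imitates that layout in a scratch directory (all paths are mock values) and resolves a link with readlink -f:

```shell
#!/bin/sh
# The persistent names in /dev/disk are plain symbolic links maintained by
# udev. Imitate that layout in a scratch directory and resolve one link.
mkdir -p /tmp/mockdev/disk/by-path
: > /tmp/mockdev/sdb1          # stand-in for the real block device node
ln -sf ../../sdb1 /tmp/mockdev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0-part1

# readlink -f follows the link back to the kernel device name.
readlink -f /tmp/mockdev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0-part1
# prints /tmp/mockdev/sdb1
```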

EXERCISE 3-1. CREATING A FILE SYSTEM ON TOP OF LVM
Now that you’ve learned all about the file system creation basics, it’s time to practice your skills in an exercise.
To perform this exercise, you need to have available storage. If you’re using a virtual machine, you can easily add
a new disk to the machine. If you’re working on physical hardware, you can use a USB key to work through this
exercise. I will assume that the device you’ll be working on is known to your computer as /dev/sdb. Make sure to
change that, according to your specific hardware setup!


1.	To find out the name of the storage device that you can use, type cat /proc/partitions. In this file, you'll get an overview of all disk devices that are attached to your computer.

2.	Type fdisk /dev/sdb to open fdisk and add a new partition. Before changing anything, let's verify that there is nothing on the disk. Type p to show current partitioning. If any partitions do exist, and you are sure that you can use the entire disk in this exercise, type d to delete them.

3.	Now type n to create a new partition. The utility next asks which type of partition you want to create. Type p to create a primary partition, and enter the partition number 1, which is suggested as the default partition. Now press Enter to accept the default starting point of the partition, and type +500M to create the partition as a 500MiB partition.

4.	Verify that you have succeeded. Now type t to change the partition type, and set it to type 8e, which makes it usable for LVM. From the fdisk main menu, type w to write the changes to disk and quit fdisk.

5.	To make sure that the kernel is updated with the modifications you've just applied, type the partprobe command. If you see anything that looks like an error, reboot your system to make sure that the kernel has been updated properly before continuing. Verify that your partition is known as /dev/sdb1.

6.	Use pvcreate /dev/sdb1 to mark the partition as an LVM physical volume.

7.	Type vgcreate vgdata /dev/sdb1 to put the partition in a volume group that has the name vgdata.

8.	Type lvcreate -n lvdata -l 50%FREE vgdata. This command creates a volume with the name lvdata and allocates 50 percent of available disk space from vgdata to it.

9.	Type the commands pvs, vgs, and lvs to verify the creation of the LVM devices.

10.	If the LVM devices have been created properly, use mkfs.btrfs -L fsdata /dev/vgdata/lvdata to put a Btrfs file system on top of it, which uses the file system label fsdata.

11.	Type blkid to verify that the file system has been created and the label can be read from the file system.

12.	Use mkdir /data to create a mount point on which the file system can be mounted.

13.	Enter the following line in /etc/fstab, to ensure that the file system can be mounted automatically:

	LABEL=fsdata   /data   btrfs   defaults   0 0

14.	To test that all works well, type mount -a. This mounts all file systems that have an entry in /etc/fstab but haven't been mounted yet.

15.	Type mount and df -h to verify that you can see the new mount.


Managing File Systems
As a Linux administrator, on occasion, you’ll have to perform some file-system-management tasks as well. The exact
tasks depend on the type of file system that you’re using. In this section, we’ll cover some of the more common of
these tasks.

Checking and Repairing File Systems
An important responsibility of the administrator is to guarantee the integrity of the file systems in use. That means
that on occasion, you’ll have to check, and sometimes also repair, file systems on your systems. Fortunately, problems
on file systems do not occur very frequently, but they do occur.
As the first line of defense, file systems are automatically checked while your system reboots—with the exception
of the Btrfs file system, because the way it is organized makes a thorough check unnecessary. On occasion, the
automated check may fail, and you’ll have to perform a manual check using the fsck command. This is especially the
case if you’re using XFS or Ext4 file systems.
The Ext4 fsck utility has a few useful options. To start with, there is the option -p, which allows you to start an
automatic repair where no questions are asked. If you add the option -y, you’ll automatically answer “yes” to all
questions asked. This is a good idea, because if serious problems have occurred, you may be prompted many times to
confirm a repair action.
Other useful options are -c, which checks for bad blocks and marks them as such, and -f, which forces a file
system check. Normally, when fsck is started, it won’t do anything if the file system is considered clean, that is, free of
problems. If there are problems, using the -f option is useful.
If serious problems have arisen on the file system, you can benefit from using an alternative superblock. On Ext4,
the file system metadata is stored in the superblock, and the superblock is required to check and repair the file system.
If the superblock is damaged, you cannot do anything to the file system anymore.
Fortunately, on any Ext file system, backup superblocks are stored. On a file system with a 1KiB block size, you'll
typically find the first backup on block 8193; with a 4KiB block size, it is on block 32768 (dumpe2fs lists all of them).
To use this backup superblock, type fsck -b 8193 /dev/sda1. This allows your file system to be restored to its original
state (see Listing 3-9).
Listing 3-9. Using fsck on the Backup Superblock
linux-m6gc:/dev/disk/by-path # fsck.ext4 -b 8193 /dev/sda1
e2fsck 1.42.11 (09-Jul-2014)
/dev/sda1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: +(73729--73987) +(204801--205059) +(221185--221443) +(401409--401667)
Fix?
Note that in Listing 3-9, you can see that some errors were encountered, and the fsck utility began prompting
for confirmation, to verify that errors can be fixed automatically. If you don’t want to press the y key many times, it’s a
good idea to interrupt the fsck utility, using Ctrl+C, and to start it again, using the -y option.
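If you want to practice these fsck options without risking a real disk, you can run them against a file-backed file system image. This sketch assumes the e2fsprogs tools (mkfs.ext4, e2fsck, dumpe2fs) are installed; no root privileges or spare disk are needed:

```shell
#!/bin/sh
# Practice fsck safely on a file-backed ext4 image instead of a real disk.
truncate -s 16M /tmp/practice.img
mkfs.ext4 -q -F /tmp/practice.img       # -F: work on a regular file

# -f forces a check even though the file system is clean; -y answers yes.
e2fsck -f -y /tmp/practice.img

# dumpe2fs lists the backup superblocks that you could hand to fsck -b.
dumpe2fs /tmp/practice.img 2>/dev/null | grep -i 'backup superblock'
```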


XFS Tools
When working with the XFS file system, there are a lot of XFS-specific tools that you can use to perform management
tasks on the file system. You can get an overview of these by typing xfs[tab][tab] as root on the command line. Some
of these tools are pretty common, and you’ll use them on occasion. Some tools are not so common, and you will rarely
use them.
To start with, there is the xfs_admin command. This command gives access to a few options, such as xfs_admin -l,
which prints the file system label, or xfs_admin -L, which allows you to set a label on a file system. Note that when
working with the XFS tools, some tools work on a device; other tools work on a mount point. So, on some occasions,
you’ll have to enter a device name, and with other tools, you’ll have to use a directory name.
An interesting XFS tool is xfs_fsr, which will try to defragment an XFS file system. On many Linux file systems,
you don't really have to do any defragmentation; on XFS, it can be worthwhile, and this tool works on a mounted file system.
The XFS file system also has its specific repair utility, xfs_repair. This utility can be used on unmounted devices
only; it does not run at boot time. The fsck option in /etc/fstab will only replay the XFS journal if that is necessary.
For performing a repair on an XFS file system, you’ll have to make sure that the file system journal (which is referred
to as the “log” in XFS) is clean. If you receive warnings about a dirty log, you can mount and unmount the file system,
which, in general, will clean the log. If the journal is corrupt, just mounting and unmounting won’t be sufficient. You’ll
have to use the command xfs_repair -L on your device to fix the problem.
In the section “Using LVM Snapshots,” later in this chapter, you’ll learn how to work with snapshots on LVM. If
you don’t use snapshots on LVM, it is good to know that the XFS file system allows you to temporarily freeze a file
system, so that you can take a snapshot without changes being made to the file system at the same time.
To freeze an XFS file system, use xfs_freeze -f /mountpoint (use the mount point, not the device name). This
will temporarily stall all writes, so that you can take a snapshot. Once completed, use xfs_freeze -u /mountpoint to
unfreeze the file system and commit all writes to the file system.

Btrfs Tools and Features
As mentioned before, the Btrfs file system introduces many new features. Some of the Btrfs features make working
with LVM unnecessary. The key property of Btrfs is that it is a copy-on-write file system. Because of this, it supports
snapshots natively, allowing users and administrators an easy rollback to a previous situation.
Btrfs also supports multiple devices: when running out of disk space on a particular Btrfs volume, another device
can be added, and after adding or removing a device, online shrinking and growing of the file system is supported.
The Btrfs file system also supports metadata balancing. This means that, depending on the number of devices used,
the file system metadata can be spread in the most efficient way. Apart from that, there are Btrfs subvolumes.

Understanding Subvolumes
A Btrfs subvolume is a namespace that can be mounted independently with specific mount options. Multiple
subvolumes can reside on the same file system and allow administrators to create different mount points for specific
needs. By default, all file systems have at least one subvolume, which is the file system device root, but additional
subvolumes can also be created. Apart from the support of per-subvolume mount options, snapshots are created on
subvolumes. After unmounting a subvolume, a rollback of the snapshot can be effected.
After a default installation of SLES 12, Btrfs is used on the root file system, and subvolumes are created
automatically. In Listing 3-10, you can see how they are created from different mounts in the /etc/fstab file.


Listing 3-10. Btrfs Default Subvolumes
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /                       btrfs  defaults                        0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /boot/grub2/i386-pc     btrfs  subvol=@/boot/grub2/i386-pc     0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /boot/grub2/x86_64-efi  btrfs  subvol=@/boot/grub2/x86_64-efi  0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /home                   btrfs  subvol=@/home                   0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /opt                    btrfs  subvol=@/opt                    0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /srv                    btrfs  subvol=@/srv                    0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /tmp                    btrfs  subvol=@/tmp                    0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /usr/local              btrfs  subvol=@/usr/local              0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/crash              btrfs  subvol=@/var/crash              0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/lib/mailman        btrfs  subvol=@/var/lib/mailman        0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/lib/named          btrfs  subvol=@/var/lib/named          0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/lib/pgsql          btrfs  subvol=@/var/lib/pgsql          0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/log                btrfs  subvol=@/var/log                0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/opt                btrfs  subvol=@/var/opt                0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/spool              btrfs  subvol=@/var/spool              0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/tmp                btrfs  subvol=@/var/tmp                0 0
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /.snapshots             btrfs  subvol=@/.snapshots             0 0

Using these default subvolumes allows administrators to treat the most common directories with their own
mount options and to create snapshots for them as well, if required. The subvolumes are mounted by including the
Btrfs-specific subvol=@/some/name mount option. A subvolume can only be mounted if the parent volume is
mounted first. You can see that in the first line of output in Listing 3-10, where the root device is mounted as a
Btrfs file system. For each subvolume, specific mount options can be added to the mount options column
in /etc/fstab.
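To illustrate such a per-subvolume mount option, the /var/log line from Listing 3-10 could, for example, be extended with transparent compression. This is a hypothetical tweak, not a SLES default:

```
UUID=c7997ed8-2568-49c3-bb84-3d231978707c  /var/log  btrfs  subvol=@/var/log,compress=lzo  0 0
```

After editing the line, remount the subvolume (mount -o remount /var/log) for the new option to take effect on newly written files.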
From a shell prompt, you can request a list of subvolumes that are currently being used. Use the command btrfs
subvolume list / to do so, which will give you a result like that in Listing 3-11.
Listing 3-11. Requesting a List of Current Subvolumes
linux-ia9r:~ # btrfs subvolume list /
ID 257 gen 48 top level 5 path @
ID 258 gen 39 top level 257 path boot/grub2/i386-pc
ID 259 gen 39 top level 257 path boot/grub2/x86_64-efi
ID 260 gen 42 top level 257 path home
ID 261 gen 28 top level 257 path opt
ID 262 gen 39 top level 257 path srv
ID 263 gen 45 top level 257 path tmp
ID 264 gen 39 top level 257 path usr/local
ID 265 gen 39 top level 257 path var/crash
ID 266 gen 39 top level 257 path var/lib/mailman
ID 267 gen 39 top level 257 path var/lib/named
ID 268 gen 39 top level 257 path var/lib/pgsql
ID 269 gen 48 top level 257 path var/log
ID 270 gen 39 top level 257 path var/opt
ID 271 gen 48 top level 257 path var/spool
ID 272 gen 41 top level 257 path var/tmp
ID 276 gen 39 top level 257 path .snapshots


Apart from the subvolumes that are created by default, an administrator can add new subvolumes manually. To do
this, the command btrfs subvolume create is used, followed by the path of the desired subvolume (which must not
exist yet). Use, for example, the command btrfs subvolume create /root to create a subvolume for the home directory
of the user root.
After creating a subvolume, snapshots can be created. To do this, use the command btrfs subvolume snapshot,
followed by the name of the subvolume and the name of the snapshot. Note that it is good practice, but not
mandatory, to create snapshots within the same namespace as the subvolume. In Exercise 3-2, you’ll apply these
commands to work with snapshots yourself.

EXERCISE 3-2. WORKING WITH BTRFS SUBVOLUMES
In this exercise, you’ll create a subvolume. You’ll next put some files in the subvolume and create a snapshot in it.
After that, you’ll learn how to perform a rollback to the original state, using the snapshot you’ve just created.
1.	On an existing Btrfs file system, type btrfs subvolume create /test.

2.	Type btrfs subvolume list /. This will show all currently existing subvolumes, including the subvolume you have just created.

3.	Copy some files to /test, using the command cp /etc/[abc]* /test.

4.	At this point, it's time to create a snapshot, using btrfs subvolume snapshot /test /test/snap.

5.	Remove all files from /test (but leave the snap subvolume itself in place).

6.	To get back to the original state of the /test subvolume, use mv /test/snap/* /test.

Working with Multiple Devices in Btrfs
Another benefit of the Btrfs file system is that it allows you to work with multiple devices. By doing this, Btrfs offers a
new approach to creating RAID volumes. To create a Btrfs volume that consists of multiple devices, type a command
such as mkfs.btrfs /dev/sda1 /dev/sda2 /dev/sda3. To mount a composed device through /etc/fstab, you’ll
have to take a special approach. You’ll have to refer to the first device in the composed device and specify the names
of the other devices as a Btrfs mount option, as in the following sample line:
/dev/sda1   /somewhere   btrfs   device=/dev/sda1,device=/dev/sda2,device=/dev/sda3   0 0

Btrfs also allows you to add devices to a file system that has already been created. Use btrfs device add /dev/sda4
/somewhere to do so. Notice that the device add command works on the name of the mount point, not the name
of the volume. After adding a device to a Btrfs file system, you should rebalance the file system metadata, using btrfs
filesystem balance /somewhere. You can request per-device I/O error statistics for a multi-device Btrfs volume by
using the btrfs device stats /somewhere command.
A multivolume device, as just described, is simply a device that consists of multiple volumes. If one of the devices
in the volume gets damaged, there's no easy option to repair it. If you do want an easy option for repair, you should
create a Btrfs RAID volume. The command mkfs.btrfs -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde will do
that for you. If one of the devices in the RAID setup is missing, you'll first have to mount the volume in degraded state.
That's for metadata consistency, and it allows you to remove the failing device. If, for example, /dev/sdb is showing
errors, you would mount the volume through one of its remaining devices, using the command mount -o degraded
/dev/sdc /mnt. Notice that it must be mounted on a temporary mount point and not on the mount point of the Btrfs
RAID device. After mounting it, you can use btrfs device delete missing /mnt to remove the failing device.


Managing Logical Volumes
You have previously learned how to create LVM logical volumes. Working with logical volumes adds some flexibility
to your configuration, especially if no Btrfs file systems are used. Some important features that LVM was used for
previously are now included in the Btrfs file system, however, which makes the need for LVM volumes less urgent.
If you’re not using Btrfs, configuring LVM can add useful features. In this section, you’ll learn how to work with LVM
snapshots and how to resize logical volumes.

Using LVM Snapshots
The purpose of LVM snapshots is to freeze the current state of a volume. That can be useful when making a backup:
without a snapshot, files that are open and changing while the backup runs cannot be backed up consistently. It can
also be useful, if you want to be able to easily revert to a previous configuration.
To create an LVM snapshot, you need available space in the volume group. The snapshot must be able to store
the original blocks of all files that have changed during the lifetime of the snapshot. That means that if you’re creating
a snapshot just to make sure that your backup procedure will run smoothly, the size of the snapshot can be limited. If,
however, you want to create a snapshot before setting up a complex test environment, so that in case all goes wrong
you can easily get back to the original configuration, the size requirements for the snapshot will be considerably
higher. If you’re not sure, make certain that you have 10 percent of the size of the original volume. This will be
sufficient in most cases.
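The 10 percent rule of thumb is easy to turn into a small helper when you create snapshots for several volumes. A sketch, with hypothetical sizes in MiB (on a live system, you could feed it values from lvs --units m):

```shell
#!/bin/sh
# Turn the 10 percent rule of thumb into a helper. Sizes are in MiB.
snap_size() {
    size=$1
    snap=$((size / 10))
    [ "$snap" -lt 100 ] && snap=100    # keep a sensible floor of 100MiB
    echo "$snap"
}

snap_size 20480    # 20GiB origin: prints 2048
snap_size 512      # small origin: prints the 100MiB floor
# e.g.: lvcreate -s -L "$(snap_size 20480)M" -n myvol-snap /dev/myvg/myvol
```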
While working with snapshots, you should be aware that a snapshot is not a replacement for a backup. Snapshots
are linked to the original volume. If the original volume just dies, the snapshot will die with it. A snapshot is a tool to
help you create a reliable backup.
To create a snapshot, you first have to make sure that you have a volume for which you want to create
the snapshot and that you have available disk space in the volume group. Next, you have to make sure that no
modifications are written to that volume at the moment that the snapshot is created. You can do this by stopping
all services that are using the volume, or by using the XFS Freeze feature that was discussed earlier in this chapter.
Next, use lvcreate -s -L 100M -n myvol-snap /dev/myvg/myvol. This creates a snapshot with a size of 100MiB
for the volume myvol.
As discussed, LVM snapshots, in general, are created for two reasons: to create a backup or to revert to a previous
situation. If you want to create a backup based on the snapshot volume you’ve just created, you should mount it
somewhere. You can next take the backup of the mount point. Once the backup has been completed, the snapshot
should be removed. This is important, because a snapshot that has not been removed will keep on claiming disk space
until it is completely full and that will generate I/O errors on your system. To remove a snapshot, unmount it and,
next, use lvremove /dev/yoursnapshot to remove it.
If you have created a snapshot to make it easier to revert to a previous state, you’ll have to use the lvconvert
utility. To revert to the original state, apply the following steps:
1.	Unmount the volume that you want to revert.

2.	Use lvconvert --merge /dev/yoursnapshot.

3.	If the previous command complains that it cannot merge over an open origin volume, use lvchange -an /dev/yourvolume first.

4.	At this point, the original volume will be reverted, and you can mount it again.


Resizing Logical Volumes
Another common task when working with LVM is the resizing of a volume. The size of an LVM volume can be
increased as well as decreased, but you should know that not all file systems offer the same options. Btrfs, for example,
can easily be resized in both directions, but an XFS file system can be grown, not reduced. So, before starting a resize
operation, make sure that your file system fully supports it.
In some manuals, you'll read that you have to apply two separate steps: resizing the file system and resizing the
LVM volume. That is not necessary: the LVM commands can apply both steps in the same operation.
The most flexible utility for a resize operation is lvresize. Using this tool, you can grow as well as reduce a
file system. Alternatively, you can use lvextend to grow an LVM volume or lvreduce to reduce its size. All of these
commands honor the -r option, to resize the file system on the LVM volume at the same time, and the -L option, to
specify the new size of the file system. The generic procedure for growing a file system is as follows:
1.	Make sure that disk space is available in the volume group. If this is not the case, use
	vgextend to add disk space to the volume group. If, for example, you want to add the disk
	/dev/sdc to the volume group vgsan, you would use vgextend vgsan /dev/sdc.

2.	At this point, you can grow the LVM volume. Use, for example, lvextend -L +1G -r
	/dev/vgsan/mylv, to add 1GiB of disk space to the volume mylv.
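The two grow steps can be sketched together. The dry-run wrapper is my addition (the commands would otherwise require root and a real volume group); the names vgsan, mylv, and /dev/sdc are the examples from the steps above.

```shell
#!/bin/sh
# Dry-run sketch of the two grow steps: extend the VG first, then the LV.
# RUN=echo prints the commands; set RUN= (empty) and run as root to apply.
RUN=echo

grow_volume() {
    $RUN vgextend vgsan /dev/sdc             # step 1: add disk space to the volume group
    $RUN lvextend -L +1G -r /dev/vgsan/mylv  # step 2: grow LV and file system together (-r)
}

grow_volume
```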

To reduce a file system, you’ll use lvreduce; for example, use lvreduce -L -1G -r /dev/vgsan/mylv to reduce
the size of a logical volume by 1GiB. You should note that the success of this command differs according to the file
system that you’re using.

•	To reduce the size of an Ext4 file system, the file system must be unmounted and checked
	before the actual reduce operation.

•	XFS file systems cannot be reduced.

•	Btrfs file systems can be reduced while being online.
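The three reduce rules above can be encoded in a small helper. This is a hypothetical function for illustration (the name can_reduce is mine, not part of any LVM tool); it simply maps a file system type to the rule that applies.

```shell
#!/bin/sh
# Hypothetical helper encoding the reduce rules above: prints how a file
# system of the given type can be reduced, and fails for XFS (no reduce).
can_reduce() {
    case "$1" in
        ext4)  echo "offline: unmount and check first" ;;
        btrfs) echo "online: can be reduced while mounted" ;;
        xfs)   echo "cannot be reduced"; return 1 ;;
        *)     echo "unknown file system: $1"; return 1 ;;
    esac
}

can_reduce btrfs
```

Checking this before calling lvreduce -r avoids starting an operation the file system cannot complete.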

Instead of using the command line, you can also resize file systems easily from YaST. In YaST, select System ➤
Partitioner. Next, select the logical volume that you want to resize and navigate to the Resize option. This opens the
window that you can see in Figure 3-13. From this window, specify the resulting size that you need. (Don’t specify how
much you want to add; specify how big you want the file system to be.) Next, select OK, to perform the resize operation.


Figure 3-13. Resizing file systems from YaST

Creating Swap Space
On Linux, swap space is used in a very efficient way. If a shortage of memory arises, the Linux kernel moves memory
pages that are not actively being used to swap, to make more memory available for processes that do really need it. On
some occasions, you may find that all available swap space has been consumed. If that happens, you can add swap
space to your computer. This can be done from YaST, as well as from the command line.
The procedure to add swap space from YaST is easy and intuitive. First, you have to create the storage device
(partition or logical volume), and after doing that, you can format it as swap, by selecting the swap file system from
the drop-down list. The procedure from the command line involves a couple of commands and is outlined in the
following exercise.


EXERCISE 3-3. CREATING A SWAP FILE

This exercise works only on a non-Btrfs file system, so make sure that you put the swap file on a file system that
does not use Btrfs.
1.	Create a partition or logical volume to be used as swap space. If you don’t have any disk
	space that can be dedicated to swap, you can consider creating an empty file using dd.
	This is not ideal but always better than having a server running out of memory. To create an
	empty file using dd, use dd if=/dev/zero of=/root/swapfile bs=1M count=1024. In this
	command, the swap file is filled with zeroes and consists of a total of 1,024 blocks with a
	size of 1MiB each.

2.	Type free -m to get an overview of current memory and swap usage.

3.	Type mkswap /root/swapfile to put a swap file system on your swap file.

4.	Set the permission mode to allow root access to the swap file only, by typing
	chmod 600 /root/swapfile.

5.	Type swapon /root/swapfile to activate the swap file.

6.	Type free -m again. You’ll see that the amount of available swap space has increased by 1GiB!
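Steps 1 and 4 of the exercise can be tried without root privileges, in a scaled-down form. The sketch below uses a 16MiB temporary file instead of a 1GiB file in /root (my substitution, so it runs anywhere); mkswap and swapon are left out, as they require root.

```shell
#!/bin/sh
# Scaled-down, non-root sketch of steps 1 and 4: create the file with dd
# and restrict its permissions. mkswap/swapon are omitted (root only).
SWAPFILE=$(mktemp /tmp/swapfile.XXXXXX)

dd if=/dev/zero of="$SWAPFILE" bs=1M count=16 2>/dev/null  # 16 blocks of 1MiB
chmod 600 "$SWAPFILE"                                      # root-only access

ls -l "$SWAPFILE"
```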

Summary
In this chapter, you have learned how to work with storage on SUSE Linux Enterprise Server (SLES). You have
read how to set up partitions as well as logical volumes, and you have learned how to do that from YaST and from the
command line. You’ve also read about common file-system-management tasks. In the next chapter, you’ll learn how
to work with users, groups, and permissions on SLES.


Chapter 4

User and Permission Management
On Linux, a distinction is made between processes that run with full privileges to the operating system and
processes that run without them. In the first case, the user typically needs to be running as root. In the latter
case, regular user accounts are required. This chapter explains how to set up user and
group accounts and how, after setting them up, they can be granted access to specific resources on the server, using
permissions. The following topics are covered in this chapter:
•	Creating and Managing User Accounts

•	Creating and Managing Group Accounts

•	Configuring Base Linux Permissions

•	Configuring Special Permissions

•	Working with Access Control Lists

•	Working with Attributes

Creating and Managing User Accounts
In this chapter, you’ll learn how to create and manage user accounts. Before diving into the details of user
management, you’ll read how users are used in a Linux environment.

Users on Linux
On Linux, there are two ways to look at system security. There are privileged users, and there are unprivileged users.
The default privileged user is root. This user account has full access to everything on a Linux server and is allowed to
work in system space without restrictions. The root user account is meant to perform system administration tasks and
should only be used for that. For all other tasks, an unprivileged user account should be used.
In a typical Linux environment, two kinds of user accounts exist. There are user accounts for the people who
need to work on a server and who need limited access to the resources on that server. These user accounts typically
have a password that is used for authenticating the user to the system. There are also system accounts that are used by
the services the server is offering. Both kinds of user accounts share common properties, which are kept in the files
/etc/passwd and /etc/shadow. Listing 4-1 shows the contents of the /etc/passwd file. Note that the actual usernames
depend upon users added and software installed.


Listing 4-1. Partial Contents of the /etc/passwd User Configuration File
linux:~ # tail -n 10 /etc/passwd
rtkit:x:492:491:RealtimeKit:/proc:/bin/false
rpc:x:491:65534:user for rpcbind:/var/lib/empty:/sbin/nologin
pulse:x:490:490:Pulse...:/var/lib/pulseaudio:/sbin/nologin
statd:x:489:65534:NFS statd daemon:/var/lib/nfs:/sbin/nologin
postfix:x:51:51:Postfix Daemon:/var/spool/postfix:/bin/false
scard:x:488:488:Smart Card...:/var/run/pcscd:/usr/sbin/nologin
gdm:x:487:487:Gnome Display ... daemon:/var/lib/gdm:/bin/false
hacluster:x:90:90:heartbeat processes:/var/lib/heartbeat/cores/hacluster:/bin/bash
lighttpd:x:486:486:user for ...:/var/lib/lighttpd:/bin/false
linda:x:1000:100::/home/linda:/bin/bash
As you can see, to define a user account, different fields are used in /etc/passwd. The fields are separated from
one another by a colon. Below is a summary of these fields, followed by a short description of their purpose.

•	Username: This is a unique name for the user. Usernames are important to match a user to his
	password, which is stored separately in /etc/shadow (see next).

•	Password: In the old days, the second field of /etc/passwd was used to store the hashed
	password of the user. Because the /etc/passwd file is readable by all users, this poses a
	security threat, and for that reason, on current Linux systems, the hashed passwords are stored
	in /etc/shadow (discussed in the next section).

•	UID: Each user has a unique User ID (UID). This is a numeric ID, and values from 0 to 65535
	can be used. It is the UID that really determines what a user can do; when permissions are set
	for a user, the UID is stored in the file metadata (and not the username). UID 0 is reserved for
	root, the unrestricted user account. The lower UIDs (typically up to 499) are used for system
	accounts, and the higher UIDs (from 1000 on SUSE by default) are reserved for people who
	need a secure working environment on a server.

•	GID: On Linux, each user is a member of at least one group. This group is referred to as the
	primary group, and this group plays a central role in permissions management, as will be
	discussed later in this chapter.

•	Comment field: The comment field, as you can guess, is used to add comments for user
	accounts. This field is optional, but it can be used to describe what a user account is created
	for. Some utilities, such as the obsolete finger utility, can be used to get information from
	this field. The field is also referred to as the GECOS field, which stands for General Electric
	Comprehensive Operating System, and had a specific purpose for identifying jobs in the early
	1970s when General Electric was still an important manufacturer of servers.

•	Directory: This is the initial directory in which the user is placed after logging in, also
	referred to as the home directory. If the user account is used by a person, this is where the
	person would store his personal files and programs. For a system user account, this is the
	environment where the service can store files it needs while operating.

•	Shell: This is the program that is started after the user has successfully connected to a server.
	For most users, this will be /bin/bash, the default Linux shell. For system user accounts, it
	will typically be a shell such as /usr/bin/false or /sbin/nologin, to make sure that if by
	accident an intruder were capable of starting a shell, he won’t get access to the system
	environment.
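The seven fields can be split on the colon separator with a plain shell read. The sketch below uses the line for user linda from Listing 4-1 as sample input, rather than reading the real /etc/passwd.

```shell
#!/bin/sh
# Splitting one /etc/passwd line into the seven fields described above,
# using the entry for user linda from Listing 4-1 as sample input.
line='linda:x:1000:100::/home/linda:/bin/bash'

IFS=: read -r name passwd uid gid comment home shell <<EOF
$line
EOF

echo "user=$name uid=$uid primary-gid=$gid home=$home shell=$shell"
```

Note that the comment field is empty for linda, which is allowed: the field is optional.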


A part of the user properties is kept in /etc/passwd, which was just discussed. Another part of the configuration
of user properties is in /etc/shadow. Parameters in this file mostly refer to the user password. Typical for /etc/shadow
is that no one except the superuser root has permission to access it, which makes sense, as it contains all the
information that is required for connecting to a system. Listing 4-2 gives an overview of /etc/shadow fields.
Listing 4-2. Example Contents from /etc/shadow
linux:~ # tail -n 5 /etc/shadow
scard:!:16165::::::
gdm:!:16165::::::
hacluster:!:16168::::::
lighttpd:!:16168::::::
linda:$6$5mItSouz$Jkg5qdROahuN3nWJuIqUO/hXSdIwi9zjwpW2OL\
X3cWOHN.XWCP09jXNhDwSHdHRsNiWnV85Yju.:16171:0:99999:7:::
The following fields are used in /etc/shadow:

•	Login name: Notice that /etc/shadow doesn’t contain any UIDs, but usernames only. This
	opens a possibility for multiple users using the same UID but different passwords.

•	Encrypted password: This field contains all that is needed to store the password in a secure
	way. The first part of it ($6$ in the example) indicates the encryption algorithm used. The
	second part contains the “salt,” random data that is combined with the password before it is
	hashed. The last part has the encrypted password itself.

•	Days since Jan 1st 1970 that the password was last changed: Many things on Linux refer to Jan
	1st 1970, which on Linux is considered the beginning of days. It is also referred to as “epoch.”

•	Days before password may be changed: This allows system administrators to use a stricter
	password policy, whereby it is not possible to change back to the original password
	immediately after a password has been changed. Typically, this field is set to the value 0.

•	Days after which password must be changed: This field contains the maximal validity period of
	passwords. Notice that by default, it is set to 99,999 (about 273 years).

•	Days before password is to expire that user is warned: This field is used to warn a user when a
	forced password change is upcoming. Notice that the default is set to 7 (even if the password
	validity is set to 99,999 days!).

•	Days after password expires that account is disabled: Use this field to enforce a password
	change. After password expiry, users can no longer log in.

•	Days since Jan 1st 1970 that account is disabled: An administrator can set this field to disable
	an account. This is typically a better approach than removing an account, as all associated
	properties of the account will be kept, but it can no longer be used to authenticate on your
	server. An alternative would be to use userdel, which removes the user account but by default
	will keep the files the user has created.

•	A reserved field: This will probably never be used.
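The day-count fields can be turned back into a readable date by multiplying by 86,400 (the number of seconds in a day). The sketch below uses 16171, the last-change value for user linda in Listing 4-2, and assumes GNU date (as shipped on SLES).

```shell
#!/bin/sh
# Converting a "days since Jan 1st 1970" field from /etc/shadow into a
# readable date (GNU date assumed). 86400 = seconds per day.
days_to_date() {
    date -u -d "@$(( $1 * 86400 ))" +%Y-%m-%d
}

days_to_date 16171   # the last-change value for linda in Listing 4-2
```

The result, April 11, 2014, matches the “Last password change” line that chage -l reports for linda later in this chapter.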

Most of the password properties can be managed with the passwd or chage command, which are discussed later
in this chapter.


Creating Users
There are many solutions for creating users on a Linux server. To start with, you can edit the contents of the
/etc/passwd and /etc/shadow files directly (with the risk of making an error that could make logging in impossible to
anyone). There’s also useradd (called adduser on some distributions) and on some distributions, there are fancy
graphical utilities available. What utility is used doesn’t really matter, as the result will be the same anyway: a user
account is added to the appropriate configuration files.

Modifying the Configuration Files
To add user accounts, it suffices that one line is added to /etc/passwd and another line is added to /etc/shadow,
in which the user account and all of its properties are defined. It is not recommended, though. By making an error,
you might mess up the consistency of the file and make logging in completely impossible to anyone. Also, you
might encounter locking problems, if one administrator is trying to modify the file contents directly, while another
administrator wants to write a modification with some tool.
If you insist on modifying the configuration files directly, you should use vipw. This command opens an editor
interface on your configuration files, and more important, it sets the appropriate locks on the configuration files to
prevent corruption. It does not, however, check syntax, so make sure you know what you’re doing, because by
making a typo, you might still severely mess up your server. If you want to use this tool to make modifications to the
/etc/shadow file, use vipw -s.

Using useradd
The useradd utility is probably the most common tool on Linux for managing users. It allows you to add a user
account from the command line by using many of its parameters. Use, for instance, the command useradd -m -u
1201 -G sales,ops linda to create a user, linda, who is a member of the groups sales and ops, with UID 1201, and
add a home directory to the user account as well.

Using YaST2
On SUSE, users can also be created with the YaST2 management tool. Type yast2 users to access the user
management tool directly. The interface offers easy access to all the options that can be used while creating users.
Figure 4-1 shows the default interface that YaST2 uses to add user accounts.


Figure 4-1. Adding users with YaST
YaST offers the options that are needed to add user accounts on four different tabs.

•	User Data: Generic options to create the user account and set a password

•	Details: More details, such as the default location of the user home directory and group
	membership

•	Password Settings: Password-related settings that are stored in /etc/shadow

•	Plug-Ins: Here, you can select additional user management plug-ins and configure additional
	properties. It offers, for instance, the Quota Manager plug-in, which allows you to set
	restrictions on the number of files that users can create (needs support for quota on the file
	system; see Chapter 3 for more details on that).

Home Directories
Typically, users will have a home directory. For service accounts, the home directory is very specific. As an
administrator, you’ll normally not change home directory–related settings for system accounts, as they are created
automatically from the RPM post installation scripts when installing the related software packages. If you have people
that need a user account, you probably do want to manage home directory contents a bit.
If when creating user accounts you tell your server to add a home directory as well (for instance, by using
useradd -m), the contents of the “skeleton” directory are copied to the user home directory. The skeleton directory is
/etc/skel, and it contains files that are copied to the user home directory at the moment this directory is created.
These files will also get the appropriate ownership and permissions to make sure the new user can use and access them.

85
www.it-ebooks.info
|||||||||||||||||||||||||||||||||||||||||||||||||

Chapter 4 ■ User and Permission Management

By default, the skeleton directory contains mostly configuration files, which determine how the user environment
is set up. If, in your environment, specific files need to be present in the home directories of all users, you’ll take care
of that by adding the files to the skeleton directory.
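The skeleton mechanism can be demonstrated without creating real users. The sketch below builds a throwaway skeleton directory and a throwaway “home” directory (both hypothetical temporary paths, not /etc/skel itself) and copies the skeleton contents across, which is roughly what useradd -m does.

```shell
#!/bin/sh
# Demonstrates the skeleton mechanism with throwaway directories: files
# placed in the skeleton end up in every newly created home directory.
SKEL=$(mktemp -d)
NEWHOME=$(mktemp -d)/linda

echo 'export EDITOR=/usr/bin/vim' > "$SKEL/.bashrc"   # sample skeleton file
mkdir "$SKEL/Documents"                               # sample default directory

mkdir -p "$NEWHOME"
cp -a "$SKEL/." "$NEWHOME"    # roughly what useradd -m does with /etc/skel

ls -A "$NEWHOME"
```

In real life, useradd additionally changes the ownership of the copied files to the new user.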

Managing User Properties
For changing user properties, the same rules apply as for creating user accounts. You can work directly in the
configuration files using vipw, use yast2 users for easy access to user properties, or use the
command line.
The ultimate command line utility for modifying user properties is usermod. It can be used to set all properties of
users as stored in /etc/passwd and /etc/shadow, plus some additional tasks, such as managing group membership.
There’s just one task it doesn’t do well: setting passwords. Although usermod has an option, -p, which tells you to
“use encrypted password for the new password,” it expects you to do the password encryption before adding the user
account. That doesn’t make it particularly useful. If, as root, you want to change the user password, you’d better use
the passwd command.

Configuration Files for User Management Defaults
When working with such tools as useradd, some default values are assumed. These default values are set in two
configuration files: /etc/login.defs and /etc/default/useradd. Listing 4-3 shows the contents of
/etc/default/useradd.
Listing 4-3. useradd Defaults in /etc/default/useradd
linux:~ # cat /etc/default/useradd
# useradd defaults file
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
As can be seen from Listing 4-3, the /etc/default/useradd file contains some default values that are applied
when using useradd.
In the file /etc/login.defs, different login-related variables are set. login.defs is used by different commands,
and it relates to setting up the appropriate environment for new users. Following is a list of some of the most
significant properties that can be set from /etc/login.defs:
•	MOTD_FILE: Defines the file that is used as “message of the day” file. In this file, you can
	include messages to be displayed after the user has successfully logged in to the server.

•	ENV_PATH: Defines the $PATH variable, a list of directories that should be searched for
	executable files after logging in.

•	PASS_MAX_DAYS, PASS_MIN_DAYS, and PASS_WARN_AGE: Define the default password
	expiration properties when creating new users.

•	UID_MIN: The first UID to use when creating new users.

•	CREATE_HOME: Indicates whether or not to create a home directory for new users.

•	USERGROUPS_ENAB: Set to “yes” to create a private group for all new users. That means that a
	new user has a group with the same name as the user as its default group. If set to “no,” all
	users are made a member of the group “users.”

Managing Password Properties
You have read about the password properties that can be set in /etc/shadow. There are two commands that you can
use to change these properties for users: chage and passwd. The commands are rather straightforward; for instance,
the command passwd -n 30 -w 3 -x 90 linda would set the password for user linda to a minimal usage period of
30 days and an expiry after 90 days, where a warning is generated 3 days before expiry.
Many of the tasks that can be accomplished with passwd can be done with chage also. For instance, use chage
-E 2015-12-31 bob to have the account for user bob expire on December 31st of 2015. To see current password
management settings, you can use chage -l (see Listing 4-4 for an example).

Listing 4-4. Showing Password Expiry Information with chage -l

linux:~ # chage -l linda
Last password change                                  : Apr 11, 2014
Password expires                                      : Jul 10, 2014
Password inactive                                     : never
Account expires                                       : never
Minimum number of days between password change        : 30
Maximum number of days between password change        : 90
Number of days of warning before password expires     : 3

Creating a User Environment
When a user logs in, an environment is created. The environment consists of some variables that determine how the
user environment is used. One such variable, for instance, is $PATH, which defines a list of directories that should be
searched when a user types a command.
To construct the user environment, a few files play a role.
•	/etc/profile: Used for default settings for all users when starting a login shell

•	/etc/bash.bashrc: Used to define defaults for all users when starting a subshell

•	~/.profile: Specific settings for one user, applied when starting a login shell

•	~/.bashrc: Specific settings for one user, applied when starting a subshell

When logging in, the files are read in this order, and variables and other settings that are defined in these files are
applied. If a variable or setting occurs in more than one file, the last one wins.


EXERCISE 4-1. CREATING USER ACCOUNTS

In this exercise, you will apply common solutions to create user accounts.

1.	Type vim /etc/login.defs to open the configuration file /etc/login.defs and change a
	few parameters before you start creating users. Look for the parameter CREATE_HOME and
	make sure it is set to “yes.” Also, set the parameter USERGROUPS_ENAB to “no,” which means
	that a new user is added to the group users instead of a private group with the same name
	as the user.

2.	Use cd /etc/skel to go to the /etc/skel directory. Type mkdir Pictures and mkdir
	Documents to add two default directories to all user home directories. Also, change the
	contents of the file .bashrc to include the line export EDITOR=/usr/bin/vim, which sets
	the default editor for tools that have to modify text files.

3.	Type useradd linda to create an account for user linda. Next, type id linda to verify that
	linda is a member of the group users. Also, verify that the
	directories Pictures and Documents have been created in linda’s home directory.

4.	Use passwd linda to set a password for the user you’ve just created. Use the password
	password.

5.	Type passwd -n 30 -w 3 -x 90 linda to change the password properties. This has the
	password expire after 90 days (-x 90). Three days before expiry, the user will get a warning
	(-w 3), and the password has to be used for at least 30 days before (-n 30) it can be
	changed.

6.	Create a few more users (lisa, lori, and bob) using for i in lisa lori bob; do
	useradd $i; done.

Creating and Managing Group Accounts
Every Linux user has to be a member of at least one group. In this section, you’ll learn how to manage settings for
Linux group accounts.

Understanding Linux Groups
Linux users can be members of two different kinds of groups. First, there is the primary group. Every user must be a
member of a primary group, and there is only one primary group. When creating files, the primary group becomes
group owner of these files. (File ownership is discussed in detail in the section “Understanding File Ownership,” later
in this chapter.) Users can also access all files their primary group has access to.
Besides the mandatory primary group, users can be members of one or more secondary groups as well.
Secondary groups are important for getting access to files. If a group that a user is a member of has access to specific files,
the user will get access to these files also. Working with secondary groups is important, in particular in environments
where Linux is used as a file server to allow people working for different departments to share files with one another.


Creating Groups
As is the case for creating users, there are also different options for creating groups. The group configuration files can
be modified directly using vigr; the command line utility groupadd can be used; and graphical utilities are available,
such as SUSE YaST.

Creating Groups with vigr
With the vigr command, you open an editor interface directly on the /etc/group configuration file. In this file, groups
are defined in four fields, per group (see Listing 4-5).
Listing 4-5. Example /etc/group Content
maildrop:x:59:postfix
scard:x:488:
ntadmin:x:71:
gdm:x:487:
haclient:x:90:
lighttpd:x:486:
sales:x:1000:linda,lisa
The following fields are used:

•	Group name: As is suggested by the name of the field, this contains the name of the group.

•	Group password: A feature that is hardly used anymore. A group password can be used by users
	who want to join the group on a temporary basis, so that access is allowed to files the group
	has access to.

•	Group ID: This is a unique numeric group identification number.

•	Members: In here, you find the names of users who are members of this group as a secondary
	group. Note that it does not show users who are members of this group as their primary group.
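As with /etc/passwd, the four fields can be split on the colon separator. The sketch below parses the sales group line from Listing 4-5 as sample input and lists the secondary members one per line.

```shell
#!/bin/sh
# Splitting one /etc/group line into the four fields described above,
# using the sales group from Listing 4-5 as sample input.
line='sales:x:1000:linda,lisa'

IFS=: read -r group passwd gid members <<EOF
$line
EOF

echo "group=$group gid=$gid"
echo "$members" | tr ',' '\n'   # secondary members, one per line
```

Remember that users who have this group as their primary group do not appear in the members field.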

Using groupadd to Create Groups
Another method to create new groups is by using the groupadd command. This command is easy to use: just use
groupadd, followed by the name of the group you want to add. There are some advanced options, but you’ll hardly
ever use them.

Creating Groups with YaST
SUSE YaST also provides an easy-to-use interface that is accessible from the yast2 users module, which you can see
in Figure 4-2. It allows you to add a new group by entering its name and selecting its members directly, as well.


Figure 4-2. Adding groups from YaST

Managing Group Properties
To manage group properties, groupmod is available. You can use this command to change the name or group ID of the
group, but it doesn’t allow you to add group members. To do this, you’ll use usermod. As discussed before, usermod
-aG adds a user to a group, which then becomes one of the user’s secondary groups.

EXERCISE 4-2. WORKING WITH GROUPS

In this exercise, you’ll create two groups and add some users as members to these groups.

1.	Type groupadd sales, followed by groupadd account, to add groups with the names
	sales and account.

2.	Use usermod to add users linda and lisa to the group sales, and lori and bob to the group
	account, as follows:

	usermod -aG sales linda
	usermod -aG sales lisa
	usermod -aG account lori
	usermod -aG account bob

3.	Type id linda to verify that user linda has correctly been added to the group sales. In the
	results of this command, you’ll see that linda is assigned to the group with gid=100(users).
	This is her primary group. With the groups parameter, all groups she is a member of as a
	secondary group are mentioned.

	linux:~ # id linda
	uid=1000(linda) gid=100(users) groups=1000(sales),100(users)

Configuring Base Linux Permissions
To determine which user can access which files, Linux uses permissions. Originally, a set of basic permissions was
created, but over time, these didn’t suffice. Therefore, a set of special permissions was added. These didn’t seem to
be enough either, and access control lists (ACLs) were added as well. Apart from these three solutions that relate to
permissions, attributes can be used as well, to determine access to files and directories.

Understanding File Ownership
To understand file access security on Linux, you have to understand ownership. On Linux, each file has a user owner
and a group owner. On each file, specific permissions are set for the user owner, the group owner, and “others,” which
refers to all other users. You can display current ownership with the ls -l utility. Listing 4-6 shows ownership on the
user home directories in /home. You can see the specific users set as owners of their directories, and the name of the
group users that is set as group owner.
Listing 4-6. Showing File Ownership with ls -l

linux:/home # ls -l
total 0
drwxr-xr-x 1 bob   users 258 Apr 11 23:49 bob
drwxr-xr-x 1 linda users 224 Apr 11 10:50 linda
drwxr-xr-x 1 lisa  users 258 Apr 11 23:49 lisa
drwxr-xr-x 1 lori  users 258 Apr 11 23:49 lori

To determine which access a user has to a specific file or directory, Linux adheres to the following rules:

1.	If the user is owner of the file, apply the related permissions and exit.

2.	If the user is a member of the group that is group owner of the file, apply its permissions
	and exit.

3.	Apply the permissions assigned to “others.”

Note that in specific cases, this may lead to surprises. Imagine the unusual situation where user linda is owner
of a file and, as owner, has no permissions, but she’s also a member of the group that is group owner, and that group
does have permissions to the file. In this case, she wouldn’t have any permissions, because only the first
match applies: linda is user owner and, therefore, group ownership becomes irrelevant.
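The first-match-wins behavior can be made concrete with a small function. This is a hypothetical helper for illustration (which_perms is my name, not a system tool): given the user, the user’s groups, the file’s owner and group, and the three permission sets, it returns the one set that applies.

```shell
#!/bin/sh
# Hypothetical helper implementing the three access rules above: only the
# FIRST matching entity (owner, then group, then others) is applied.
# Args: user user-groups(comma-sep) file-owner file-group owner-perms group-perms other-perms
which_perms() {
    user=$1 ugroups=$2 owner=$3 group=$4 up=$5 gp=$6 op=$7
    if [ "$user" = "$owner" ]; then
        echo "$up"                                   # rule 1: owner match wins
    elif echo ",$ugroups," | grep -q ",$group,"; then
        echo "$gp"                                   # rule 2: group owner match
    else
        echo "$op"                                   # rule 3: others
    fi
}

# The surprise from the paragraph above: linda owns the file with no
# permissions, and although her group sales has rwx, ownership matches first.
which_perms linda sales linda sales --- rwx r-x
```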


Changing File Ownership
To change file ownership, an administrator can use the chown and chgrp commands. You can use chown to set user as
well as group owners. With chgrp, only group ownership can be modified.
Setting user ownership with chown is easy: just type chown username filename. To set group ownership with
chown, make sure the name of the group is preceded by a dot or colon: chown :sales myfile would make the group
sales group owner of “myfile.” You can also use chown to set both user and group ownership of a file; chown
linda:sales myfile would set user linda and group sales as the owners of myfile.
As an alternative to chown, you can use chgrp to change group ownership. The command chgrp sales myfile
would do exactly the same as chown :sales myfile, which is setting the group sales as owner of myfile.

P

EXERCISE 4-3. CHANGING FILE OWNERSHI

As a preparation for creating a directory structure that can be used in a shared group environment, this exercise
shows you how to create the directories and set appropriate ownership.
1. Type mkdir -p /data/sales /data/account to create two shared group directories. Note
   the use of the option -p, which makes sure the /data directory is created if it didn’t exist
   already.

2. Set group ownership for both directories by typing chgrp sales /data/sales and chgrp
   account /data/account.

Note that because the names of the groups match the names of the directories, you could also use a simple Bash
scripting structure to set the appropriate owners. This works only if /data is the active directory: for i in *;
do chgrp $i $i; done. You can read this as “for each element in *” (which is sales and account), put the name
of this element in a variable with the name i, and use that variable in the chgrp command. Don’t worry if this
looks too complicated at the moment; you’ll learn lots more about Bash shell scripting in Chapter 15.
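The loop from the note can be sketched against a temporary directory. Because chgrp to an arbitrary group requires membership in that group (or root), this sketch uses the caller’s primary group instead of matching group names, which is an assumption for portability:

```shell
# Per-directory chgrp loop, modeled on: for i in *; do chgrp $i $i; done
# Here every directory gets the caller's primary group, because groups
# named "sales" and "account" may not exist on a test system.
workdir=$(mktemp -d)
mkdir "$workdir/sales" "$workdir/account"
grp=$(id -gn)

cd "$workdir"
for i in *; do
  chgrp "$grp" "$i"   # the book's version uses the directory name: chgrp $i $i
done

stat -c '%n %G' sales account
```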

Understanding Base Linux Permissions
Now that ownership is set, it’s time to take care of Linux permissions. There are three base Linux permissions, and
they can be set on files as well as directories. Permissions can be changed by the user root and by the owner of the file.
Table 4-1 gives an overview of these permissions.
Table 4-1. Base Permission Overview

Permission   Files                                           Directories
read         open the contents of the file                   list the contents of the directory
write        modify the contents of an existing file         add files to or remove files from a directory
execute      execute a file if it contains executable code   use cd to make the directory the active directory


Even if this overview looks rather straightforward, there are a few things that are often misunderstood. Try, for
instance, to answer the following questions:

1. The “others” entity has read and execute on the /data directory. Does that mean that any
   given user can read the contents of files in that directory?

2. User root places a file in the home directory of user linda. User linda has read permissions
   only to that file. Can she remove it?

The answer to the first question is no. To determine if a user has read access to a file, the permissions on that file
matter, and nothing else. Linux would first see if the user is owner and then if the user is a member of the group owner,
and if neither is the case, Linux would apply the permissions assigned to “others.” The fact that the user as part of
“others” has read rights on the directory doesn’t mean anything for what the user can do on the file.
Many people answer the second question incorrectly. The important fact is that the file is in the home directory of
user linda. Users normally have write permissions on their home directory, which allows them to add and delete
files in that directory. So, the fact that user linda cannot read the contents of the file doesn’t matter. It’s her home
directory, so she is allowed to remove it.

Applying Base Linux Permissions
Now that you understand a bit about permissions, let’s see how they are applied. To set permissions on a file or
directory, the chmod command is used. chmod can be used in absolute mode and in relative mode. In absolute mode, numbers
are used to specify which permissions have to be set: read = 4, write = 2, and execute = 1. You’ll specify for user, group,
and others which permissions you want to assign. An example is chmod 764 myfile. In this command, user gets 7, group
gets 6, and others get 4. To determine exactly what that means, you’ll now have to do some calculations: 7 = 4 + 2 + 1,
so user gets read, write, and execute; 6 = 4 + 2, so group gets read and write; and 4 is just 4, so others get read.
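A minimal way to check this arithmetic is to apply the mode to a scratch file and read it back with stat (the use of mktemp and stat here is illustrative, not from the book):

```shell
# chmod 764: user 7 (rwx), group 6 (rw-), others 4 (r--)
f=$(mktemp)
chmod 764 "$f"
stat -c '%a %A' "$f"   # prints: 764 -rwxrw-r--
```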
Another way to apply permissions is by using the relative mode. This mode is best explained with two
examples. Let’s start with chmod u=rwx,g-x,o+r myfile. In this command, you set the user permissions to
read, write, and execute; remove x from the current group permissions; and add read to the permissions for others. As
you can see, this is not a particularly easy way to assign permissions. Relative mode is easy, though, if you just want
to change file properties in general. Take, for instance, the command chmod +x myscript, which adds the execute
permission to the myscript file for everyone. This is a fairly common way of assigning permissions in relative mode.
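The two relative-mode examples can be replayed on a scratch file, starting from mode 700 so the effect of each step is predictable (the starting mode is an assumption made for the demonstration):

```shell
f=$(mktemp)
chmod 700 "$f"                # known starting point: rwx------
chmod u=rwx,g-x,o+r "$f"      # user rwx, group minus x, others plus r
mode1=$(stat -c '%a' "$f")    # 704
chmod a+x "$f"                # like the text's chmod +x, but a+x ignores the umask
mode2=$(stat -c '%a' "$f")    # 715
echo "$mode1 $mode2"
```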

EXERCISE 4-4. ASSIGNING PERMISSIONS
In this exercise, we’ll build further on the directories and groups that were added in the previous exercises. You’ll
make sure that the user and group owners have permissions to do anything in their respective group directories,
while removing all permissions assigned to “others.”
1. Use cd /data to make /data the current directory.

2. Use chmod 770 * to grant all permissions to user and group and none to others.

3. Type ls -l to verify. The results should look as in Listing 4-7.


Listing 4-7. Directory Settings
linux:/data # ls -l
total 0
drwxr-xr-x 1 root account 0 Apr 12 00:03 account
drwxr-xr-x 1 root sales   0 Apr 12 00:03 sales
linux:/data # chmod 770 *
linux:/data # ls -l
total 0
drwxrwx--- 1 root account 0 Apr 12 00:03 account
drwxrwx--- 1 root sales   0 Apr 12 00:03 sales

Configuring Special Permissions
In some cases, the Linux base permissions cannot do all that is needed. That is why some special permissions have
been added as well. They are called special permissions because they are not part of the default permission set.
You will use them on occasion, though, in particular the Set Group ID (SGID) permission on directories and the
Sticky bit on directories. Table 4-2 gives an overview of the special permissions and their use.

Table 4-2. Special Permissions Overview

Permission   Files                                     Directories
SUID (4)     Execute with permissions of owner         -
SGID (2)     Execute with permissions of group owner   Inherit group owner to newly created items below
Sticky (1)   -                                         Allow deletion only for owner of the file or parent directory

So, let’s discuss in some detail what these permissions are all about. The Set User ID (SUID) permission is a
particularly dangerous but, in some cases, also very useful permission. Normally, when a user executes a program, a
subshell is started in which the program runs with the permissions of that user. If SUID is applied to a program file, the
program runs with the permissions of the owner of the program file. This is useful in some cases, such as that of the
passwd command, which needs write access to /etc/shadow, a file that cannot even be read by ordinary users. In most
cases, however, it is a very dangerous permission. To be brief about it: if you ever think about using it, just don’t. There
are other options in most cases. SUID has no use on directories.

■■Tip As it opens good opportunities for potential evil-doers, the SUID permission is liked a lot by hackers. If a
malicious program is installed that can access a shell environment with root permissions, the hacker would be able to
take over your server completely. For that reason, it may be wise to periodically scan your server, to find out if any SUID
permissions have been set that you don’t know about. To do this, run find / -perm /4000. If you do this on a regular
basis, you’ll easily spot files that have the SUID permission set but shouldn’t, by comparing the output with that of a
previous run.


Applied to files, the same explanation goes for SGID as for SUID. Applied to directories, however, it is a very
useful permission. If SGID is set on a directory, all items created in that directory will get the same group owner as the
directory. This is very useful for shared group directories, because it allows for easy access to the files created in that
directory.
The Sticky bit permission has no use on files. On directories, it ensures that items in that directory can only be
deleted by the user who is owner, or the user who is owner of the parent directory. Sticky bit is also a useful permission
on shared group directories, as it prevents users from deleting files that they haven’t created themselves.
To set the special permissions, you can use chmod. It works in either absolute or in relative mode. Absolute mode
is a bit complicated, though; you’ll be adding a fourth digit, and you’ll have to make sure that permissions that were
previously set are not overwritten by accident. Therefore, you’re probably better off using relative mode. Use the
following commands to set these permissions:
• chmod u+s myfile: Set SUID on “myfile.”

• chmod g+s mydirectory: Set SGID on “mydirectory.”

• chmod +t mydirectory: Set Sticky bit on “mydirectory.”

If the special permissions are set on a file, you can see that using ls -l. In the original design of the output of
ls -l, however, there is no place to show additional permissions. That is why the special permissions take the
position where you can normally find the x for user, group, and others. If the special permission identifier appears as
an uppercase letter, no execute permission is effective at that position. If it appears as a lowercase letter, the execute
permission is effective as well. The following examples show how it works:
• myfile has SUID but no execute: -rwSr--r-- myfile

• myfile has SUID and execute for user: -rwsr--r-- myfile

• myfile has SGID but no execute: -rw-r-Sr-- myfile

• myfile has SGID and execute for group: -rw-r-sr-- myfile

• myfile has Sticky bit but no execute: -rw-r--r-T myfile

• myfile has Sticky bit and execute for others: -rw-r--r-t myfile
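These markers can be reproduced on scratch files, where the uppercase S or T flips to lowercase as soon as the underlying execute bit is added (a sketch; the 644 and 755 starting modes are assumptions):

```shell
f=$(mktemp)
d=$(mktemp -d)

chmod 644 "$f"; chmod u+s "$f"
suid_no_x=$(stat -c '%A' "$f")   # -rwSr--r--  (SUID set, no user execute)
chmod u+x "$f"
suid_x=$(stat -c '%A' "$f")      # -rwsr--r--  (SUID set, user execute too)

chmod 755 "$d"; chmod +t "$d"
sticky=$(stat -c '%A' "$d")      # drwxr-xr-t  (Sticky bit plus execute for others)

echo "$suid_no_x $suid_x $sticky"
```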

EXERCISE 4-5. SETTING SPECIAL PERMISSIONS
In this exercise, you’ll learn how to further define your shared group environment, by setting special permissions
on the environment that you’ve created in previous exercises in this chapter.
1. Use cd /data to go into the data directory.

2. Type chmod -R g+s * to set the SGID permission on the current directories and all files and
   subdirectories that might exist in them.

3. Type chmod -R +t * to apply Sticky bit as well.

4. Use su - linda to take the identity of user linda.

5. Use cd /data/sales, followed by touch linda, to create a file with the name “linda” in
   /data/sales.

6. Type ls -l. It will show that group sales is group owner of the file.

7. Type exit to go back to the root shell.


8. Type su - lisa to take the identity of user lisa.

9. From the /data/sales directory, type rm -f linda. You’ll get an access denied message,
   because the Sticky bit permission is set on the directory.

10. Type touch lisa to create a file owned by lisa and type exit.

11. Assuming that linda is the manager of the sales department, it makes sense to make her
    owner of the sales directory. Use chown linda /data/sales.

12. Use su - linda to take the identity of user linda and type rm -f /data/sales/lisa.
    Because the owner of a directory may delete all files in that directory, there’s nothing
    preventing linda from removing the file that was created by lisa.

Working with Access Control Lists
Another extension to the original Linux permission scheme is made by access control lists. This section describes
how to use them.

Understanding ACLs
By adding the special permissions, the Linux permission scheme was already improved a bit. Some functionality was still
missing, though, and that is why access control lists (ACLs) were added as a new option. ACLs make it
possible to give permissions to more than one user or group on a file or directory, which is useful if you want to give
full control of a directory to one group, read-only access to another group, and no permissions at all to others.
ACLs also allow you to set permission inheritance, also known as “default ACLs.” That means that you can use
them to create an environment in which permissions are set at the directory level, and these permissions will be
applied automatically to all items that are created in that directory structure.
Imagine that you have a /data/sales directory, and you want all members of the group account to be
able to read all files that will ever be created in /data/sales. ACL permission inheritance will do that for you. When
working with default ACLs, you should know that Linux will never apply the execute permission to newly created
files automatically. In ACLs, the mask takes care of that and filters out the execute permission on files. You’ll see this
in the next section, in which a default ACL makes read, write, and execute inherited by items created below, but
the mask shows that on files, only read and write are used as effective permissions.

Applying ACLs
When applying ACLs, you’ll normally apply them twice. The normal ACLs take care of files that already
exist. To make sure permissions are also set for newly created files, you’ll have to add a default ACL as well. To set
an ACL, you’ll use setfacl; to display current ACL settings, you’ll use getfacl.
Before you start working with ACLs, you should know that they are always used as an addition to the normal
permissions that are already set for a file or directory. So, before taking care of ACLs, you should make sure to set the
normal permissions. Next, when you apply the first default ACL to set inheritance, the normal permissions that are
already set on that file will be inherited as well.
Let’s have a look at how that works for the /data/sales directory that we’ve created previously. The current
settings are that user linda is owner, and group sales is group owner and has all permissions to the directory. To add
permissions by means of ACLs for the group account, you would use the following command:
setfacl -R -m g:account:rx /data/sales


This, however, just means that members of the group account can list files in the directory and read their current
contents; if new files are created, account would have no access to them at all. To take care of that, now type setfacl
-m d:g:account:rx /data/sales. This makes sure that members of account can read the contents of all new files
that will be created.
Now let’s see what this looks like from getfacl. The command getfacl /data/sales shows current settings
(see Listing 4-8).
Listing 4-8. Showing Current ACL Assignments with getfacl
linux:/data # getfacl sales
# file: sales
# owner: root
# group: sales
user::rwx
group::rwx
group:account:r-x
mask::rwx
other::---
default:user::rwx
default:group::rwx
default:group:account:r-x
default:mask::rwx
default:other::---

As you can see, the output consists of two parts. The first part shows the permissions that apply to current users
and groups (including groups that have been given permissions by means of ACLs). The second part shows the
default permission settings.
Now let’s create a new file in the /data/sales directory: touch myfile. Then look at the permissions that were set
for this file with getfacl myfile. The result should look as in Listing 4-9:
Listing 4-9. Showing ACL Assignments as Applied by Inheritance
linux:/data/sales # getfacl myfile
# file: myfile
# owner: root
# group: root
user::rw-
group::rwx              #effective:rw-
group:account:r-x       #effective:r--
mask::rw-
other::---

As you can see, the permissions have been inherited by the new file. There’s just one thing that isn’t nice. User
root has created this file, so the file has root:root as owners. That is why in a shared group environment, you would
always want to set the SGID permission. It makes sure that new files are group-owned by the group that owns
the /data/sales directory, with the effect that all members of the sales group have the appropriate permissions on
the file.
In Listing 4-9 you can also see that the mask has become effective. For new files, the mask is set to rw-, which
means that the execute permission will never be inherited for files created in a directory.


File System ACL Support
To work with ACLs, your file systems must offer support for them. A modern file system such as Btrfs does this by
default, but for older file systems, this is not the case. The easiest way to add ACL support to your file system is
by adding the acl mount option into the /etc/fstab file. If while working with ACLs you get an “Operation not
supported” error, you’ll have to do this. The following line shows an example of a file system that is mounted from
/etc/fstab with ACL support.
/dev/sda3 /data ext4 acl 1 2

EXERCISE 4-6. APPLYING ACLS

In this exercise, you’ll apply ACLs to allow members from the group “sales” to read files that were created by
members of the group “account,” and vice versa.
1. Type the commands setfacl -m g:account:rx /data/sales and setfacl -m g:sales:rx
   /data/account. This sets the ACL assignments on the directories but not their contents.

2. Type setfacl -m d:g:account:rx /data/sales and setfacl -m d:g:sales:rx
   /data/account. This also sets the default ACLs.

3. Use getfacl /data/sales to verify the ACLs have been set correctly.

4. Use touch /data/sales/somefile, followed by getfacl /data/sales/somefile, to verify
   that the ACLs have been applied to the file as well.

Working with Attributes
A third system that can be used to manage what can be done with files is file system attributes. Attributes apply
restrictions to files, no matter which user accesses the file. You can use chattr to set them and lsattr to list them.
Even though the man page of chattr lists many attributes, not all of them work. Also, some attributes are
used internally by the operating system, and it doesn’t make much sense to switch them on or off manually. The e
attribute, for instance, which is commonly applied on Ext4 file systems, stores files in extents instead of blocks, which
makes for more efficient storage.
From a security perspective, there are a few restrictions that do matter and work well in general.
•

immutable (i): makes it impossible to make any modification to the file

•

append only (a): allows users to modify the file, but not to delete it

•

undeletable (u): makes it impossible to delete files

So, if you want to protect a configuration file in a user’s home directory, chattr +i file.conf would do
the trick. Even user root would no longer be able to modify the contents of the file. To remove attributes, use chattr -i
on the file that has them set.

Summary
In this chapter, you have learned how to create users and groups. You have also learned how to work with permissions
to create a secure working environment on your server. In the next chapter, you will learn how to perform common
administration tasks, such as setting up printers, scheduling jobs, and configuring logging.


Chapter 5

Common Administration Tasks
In this chapter, you’ll read how to perform some common administration tasks on SUSE Linux Enterprise Server.
You’ll learn how to manage printers, software, and tasks and processes, and also how to schedule tasks and
set up an environment for logging.

Managing Printers
Even though printers are often connected to print servers, SUSE Linux Enterprise Server (SLES) contains
everything that is needed to manage printers. Locally connected printers are supported, as are printers connected
to some other print server, no matter whether that is a dedicated hardware print server or a software print server
defined on another computer.
To communicate with printers, the Common UNIX Printing System (CUPS) is used. This is a process that monitors
print queues for print jobs and forwards these jobs to the appropriate printer. In this section, you’ll read how to
manage printers from YaST as well as from the command line.

Managing Printers from YaST
To manage printers from YaST, you have to select the Hardware ➤ Printer tool. Before doing anything, this tool will tell
you that it is going to restart the CUPS daemon that is started by default upon installation. As you probably don’t have
any printers configured yet, that is no problem, so select Yes to proceed configuring printers.
At this point, you should see the interface that can be seen in Figure 5-1. From this interface, you can configure all
properties of printers on your server.


Figure 5-1. Configuring printers from YaST
To create new printers, you start from the Printer Configurations screen. From that screen, select Add to create
a new printer. This starts a printer detection process. CUPS is rather good at detecting printers. If the printer is
physically connected to your server, the udev process (discussed in Chapter 8) will find it and initialize it. If it’s on the
network, the CUPS process will detect it.
For some types of printer connection, automatic detection doesn’t work out well. If that is the case, you can
manually start the Connection Wizard. On its first screen, you select the type of printer, after which you can
specify the further printer properties. The options are divided into four different categories.
• Directly Connected Device: This is for any printer that is directly connected to your computer.
  In general, these types of printers will auto-install themselves, so normally, you won’t have to
  configure any of these.

• Network Printers: These printers may need some help getting configured, as depending on the
  configuration of your network, the automatic detection may fail.

• Print Server: These are printers configured on another computer. They may require additional
  configuration also.

• Special: This is a rarely used option that allows administrators to configure special printer
  types, such as sending printer data directly to another program.

After configuring printers this way, the CUPS process on your server will give access to the printer, no matter
whether it is a local printer or a remote printer. You can then access it from the command line and from applications
running on your computer.


Command-Line Printer Management Tools
Once printers have been installed—which, as you’ve read, can be done in quite an easy way—as an administrator, you
might want to perform some management tasks on them. Managing a printer often relates to the print jobs that are
in the queue. The following commands are available to manage print jobs:

• lpstat: Provides information about printers. This command offers many options that allow
  you to query the print queue. For example, use lpstat -l to show a list of printers, classes, or
  jobs, lpstat -W all for an overview of jobs and their current status, or lpstat -p to get a list
  of printers that are available on a specific host.

• lpr: The lpr command submits files for printing. It will send them to the default printer, unless
  the option -P is used to specify another destination to send print jobs to. While lpr is the old
  UNIX command, on Linux, the lp command is normally used to send print jobs to a printer.

• lpadmin: Allows administrators to create printers. Basically, everything that can be done
  from YaST can be done with lpadmin as well. The command has a large number of options to
  specify exactly what needs to be done.

• lpq: Shows you which print jobs are in the print queue.

• lprm: Provides an easy-to-use interface to remove jobs from a print queue.

Managing Software
An important task for a SUSE Linux Enterprise Server (SLES) administrator is managing software. Two systems are
available for software management: the old RPM-based system and the new Zypper-based system. Before explaining
the details of both systems, it’s good to have some generic understanding of the use of repositories.

Understanding Repositories and Meta Package Handlers
When programmers write a program, they normally don’t develop something that contains all that is required.
Programmers tend to focus on specific functionality and will get the rest from libraries that have been written by
someone else. This is called a dependency. In order to install a specific program, other items must already be available
on the system where the program is installed.
These dependencies are reflected in software package management. An administrator may select to install a
single package, but to install that package, all of its dependencies have to be installed as well. In the old days, when
packages were installed directly, that often led to challenges. An administrator who wanted to install one package
might get the response that some dependencies were missing, and these dependencies might even have had their
own dependencies. This is what was referred to as dependency hell.
In current Linux distributions, meta package handlers are used. In a meta package handler, repositories are
used for installing software. A repository is an installation source containing a collection of packages. The servers
and workstations on which software has to be installed are normally configured to use multiple repositories and
will regularly download an index of available packages. When administrators install software from repositories,
all dependencies are looked up in the index files, and every dependency that is found is installed
automatically.
In the way package management is organized on SUSE Linux, apart from repositories, there are also services
involved. A service manages repositories or does some special task. Currently, the only type of service that is
supported is the Repository Index Service (RIS). RIS contains a list of other repositories, which are indexed by using
this list. This offers the benefit that in case many repositories are used, indexes don’t have to be downloaded for
every individual repository but can be downloaded for many repositories at the same time. This makes working with
repositories faster.


On SUSE Linux Enterprise, packages can be installed individually as packages in the RPM format. They can also
be installed from repositories, using Zypper or YaST. In the next subsection, you’ll learn how to install packages using
these tools.

Installing Software from YaST
YaST offers everything an administrator needs for installing and managing software. To configure the large amount of
options available, you’ll select the Software category from YaST. This shows the following different options:
• Online Update: This performs an update of your server against the repositories it has been
  configured to use.

• Software Management: Use this for common package management tasks, such as installing,
  deleting, updating, and more.

• Add-On Products: Add-on products such as the High Availability Extension are available for
  purchase on SLES. To install such an add-on product, this option from YaST is used.

• Media Check: This option can be used to verify that the installation medium does not contain
  any errors.

• Software Repositories: This option is used to define the repositories that will be used on
  this server.

Apart from the options that are offered from this YaST interface, in large environments, SUSE Manager can be
used for managing software. From SUSE Manager, multiple packages and patches can be selected, and these can be
installed on multiple servers in an automated way. Using SUSE Manager is highly recommended in an environment in
which many servers have to be managed. You can read more about SUSE Manager in Chapter 18.

Managing Repositories
Before an administrator can start managing software, software repositories must be available. After selecting the
Software Repositories option from YaST, you’ll see the screen shown in Figure 5-2. Normally in this list, you’ll at
least see the installation medium that was used while installing SLES. If your server is registered, you’ll also see the
SUSE update repositories, which are provided by SUSE to make sure that you’ll always be using the latest version of
available software.


Figure 5-2. The Software Repositories management interface
To add a new repository, you can select the Add option. This opens the screen that you can see in Figure 5-3.
From this, you can specify the source of the repository. As you can see, every possible type of installation source is
supported. To add a repository, you have to specify the URL for that specific repository. You can also specify whether
you want to download repository description files. This is done by default, and it’s a good idea to do it, because it
makes the index of the selected repository available on your computer.

Figure 5-3. Selecting the source for the new repository


After selecting which type of repository you want to add, the screen that you can see in Figure 5-4 opens. On this
screen, you can enter a repository name and the location of the repository. For repositories that are on
different kinds of repositories. A repository can contain a list of RPM files or an ISO image. If the repository contains
an ISO image, you’ll have to select the ISO image option as well, after which you can select the ISO image to be used.
This will loop-mount the ISO image, to make it available on your computer.

Figure 5-4. Specifying repository properties
From the list of available repositories, you can perform some management tasks as well. To start with, you can
enable or disable a repository. If you know that a repository will be unavailable for some time, it makes sense to tell
your server; if your server knows that a repository is temporarily unavailable, it won’t try to install software from that
repository. So, if you have to bring down a repository for maintenance, make sure to disable it as well.
The Autorefresh option makes sense on an online repository. This option tells your computer to fetch updated
index files every day, so that your server is completely up to date about the most current state of packages. Another
useful option is Priority. If you’re working with multiple repositories, there is a risk that conflicts between package
versions arise. In case of a conflict, the Priority option makes clear which package version takes precedence. The last
option is Keep Downloaded Packages. If your Internet connection is slow, it may be beneficial to use this option in
order to cache packages locally.

Managing Software Packages
After configuring the repositories, you can start managing software. For this purpose, YaST offers the interface that you
can see in Figure 5-5. The most important item in this interface is the Filter option. By default, it is on Search, but by
pressing the down arrow key, other options can be selected as well. Before you do anything, make sure to select the
option you need from this list.


• Patterns: With the Patterns option, the software is presented in different categories. By
  selecting a pattern, you can easily select all the software you need for a specific purpose.

• Languages: Use this option if you need a specific language version of packages. The number of
  packages available in a specific language often doesn’t go much beyond the browser language.

• RPM Groups: RPM groups are another way of grouping RPM packages, as in the Patterns
  option. Use this option if you want to browse software according to the category of
  software packages.

• Repositories: Use this if you’re looking for packages from a specific repository.

• Search: Use this option if you want to search for a package with a specific name.

• Installation Summary: This option summarizes the work to be done, before actually starting it.

• Package Classification: Every package, by default, has a classification. This classification
  gives a hint as to whether or not a package is needed on your server. Use this option to
  browse packages according to their current classification: Recommended, Suggested,
  Orphaned, or Unneeded.

Figure 5-5. Managing packages from YaST

Installing Software Using the Patterns Option
The Patterns option provides a convenient way for installing software packages. This is the same interface as that used
during the initial installation of SLES (see Figure 5-6).


Figure 5-6. Managing packages through the Patterns interface
Managing packages through the Patterns interface is relatively easy. Just select the pattern you want to install and press the space bar. This changes the current status of the package category (a + is put in front of the category name) and selects all packages with a classification of Recommended in this category. Before starting the actual installation, the administrator can change the default suggestion by selecting the Action menu. This menu shows a drop-down list from which the status of individual packages can be changed. Packages can easily be (de)selected for installation by pressing the space bar. Also, you may choose to Update a package or to make it a Taboo package or a Locked package. If a package is locked, it will never be upgraded. If it's marked as a Taboo package, it will never be installed.
When managing software packages, it’s also a good idea to consider dependencies. Different options are available
through the Dependencies menu option (see Figure 5-7).


Figure 5-7. Managing package dependency options
The default setting is that dependencies are checked automatically. That means that before installing software
packages, a check is performed to find out which requirements there are to install the selected package. Another
useful option that is presented through the Dependencies menu is System Verification Mode. This mode allows you
to verify that the state of the system is consistent with the actual packages that you have selected. You can set both of
these options as default options, but it’s also possible to run them on your system now, by selecting the appropriate
option from the Dependencies menu.
While installing software through YaST, you can use the options in the View menu to get more information about selected packages (see Figure 5-8). These options query the RPM package itself for specific information. The most interesting of the view options are the File list and the Dependencies list. The File list shows the exact contents of the package, and by selecting Dependencies, you can manually examine what else is required on this system before the package can be installed. You'll see the results of the View option you have selected in the lower right part of the YaST package management screen.


Figure 5-8. Examining package contents prior to installation
After selecting one or more packages for installation, an overview of Package Dependencies is provided. If there are problems preventing the package from being installed normally, you'll be prompted as to what to do. In general, the very first option listed is also the best option. In Figure 5-9, you can see an example of such a problem.

Figure 5-9. Fixing package dependency problems


If no problems become manifest, you'll see a window listing the automatic changes that will be applied. Read through the list of packages that is going to be installed, and if you're OK with it, press OK to start the installation.

Installing Software from the Command Line
While YaST offers an easy-to-use interface for package installation, SUSE also offers some good command-line utilities. To work on individual packages, the rpm command is useful; to manage software, the zypper utility is what you need. You'll find yourself working with zypper more frequently than with rpm, because zypper offers all you need to install, update, and remove packages.

Managing Software Packages with zypper
zypper is the command-line tool that you want to use for installing, removing, and updating packages—and more.
Typically, the first step to perform, if you have to install a package, is zypper se, which allows you to search packages.
zypper search will also work, but why would you want to type a long command if there’s a short alternative, right?
The zypper se command shows a list of results corresponding to what you were looking for, including the
current state of the package. By default, zypper se performs a match on partial expressions. That means that if you
type zypper se ap, you’ll receive a list of all packages matching the string 'ap'. zypper also knows how to treat
regular expressions: make sure the search string is between single quotes, if you’re looking for a regex match.
Note that zypper se searches in the package name and description but not in the package contents. That means that you may miss a specific package while looking for it with zypper se. If you want to find the package that contains a specific file, you can use zypper se --provides filename. Note that most binary files will also be found with a plain zypper se, but if you have to go into a bit more detail, the --provides option may be useful.
After finding the package you want to install with zypper se, you may want to get additional details about it.
The command zypper info will do that for you. Type zypper info packagename, if you only want some generic
information about the package (see Listing 5-1).
Listing 5-1. zypper info Provides Generic Information About a Specific Package
linux-3kk5:~ # zypper info nmap
Loading repository data...
Reading installed packages...
Information for package nmap:
-----------------------------
Repository: SLES12-12-0
Name: nmap
Version: 6.46-1.62
Arch: x86_64
Vendor: SUSE LLC 
Installed: No
Status: not installed
Installed Size: 16.4 MiB
Summary: Portscanner
Description:
Nmap is designed to allow system administrators and curious individuals
to scan large networks to determine which hosts are up and what
services they are offering. XNmap is a graphical front-end that shows
nmap's output clearly.
Find documentation in /usr/share/doc/packages/nmap


If you need more specific information about the package, you may appreciate some of the additional options that
zypper info provides. You can, for example, receive information about a package from a specific repository, if zypper
se has shown that the package is available from different repositories. Use zypper info -r, followed by the URL or
name of the repository, to do that.
The zypper info command provides some other useful options also. Use zypper info --provides
packagename, for example, to find out exactly what is in a package, or zypper info --requires packagename, to
determine which software has to be installed for a package to be functional.
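A typical search-and-inspect session can be sketched as follows. All of these are read-only operations; the block skips gracefully on systems without zypper, and nmap is just the example package used above.

```shell
# Search for a package and inspect it (read-only operations).
if command -v zypper >/dev/null 2>&1; then
  zypper se nmap                       # match on name and description
  zypper se --provides /usr/bin/nmap   # which package provides this file
  zypper info nmap                     # generic package information
  zypper info --provides nmap          # what the package provides
  zypper info --requires nmap          # what the package requires
  HAVE_ZYPPER=yes
else
  echo "zypper not available on this system"
  HAVE_ZYPPER=no
fi
```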
Of course you can use zypper to work with patterns also. Start by typing zypper pt, to show a list of all available
patterns. This gives a result as in Listing 5-2.
Listing 5-2. The zypper pt Command Shows All Software Patterns
linux-3kk5:~ # zypper pt
Loading repository data...
Reading installed packages...
S | Name             | Version | Repository  | Dependency
--+------------------+---------+-------------+-----------
  | 32bit            | 12-57.1 | SLES12-12-0 |
i | 32bit            | 12-57.1 | @System     |
  | Basis-Devel      | 12-57.1 | SLES12-12-0 |
  | Minimal          | 12-57.1 | SLES12-12-0 |
i | Minimal          | 12-57.1 | @System     |
  | WBEM             | 12-57.1 | SLES12-12-0 |
  | apparmor         | 12-57.1 | SLES12-12-0 |
i | apparmor         | 12-57.1 | @System     |
  | base             | 12-57.1 | SLES12-12-0 |
i | base             | 12-57.1 | @System     |
  | dhcp_dns_server  | 12-57.1 | SLES12-12-0 |
  | directory_server | 12-57.1 | SLES12-12-0 |
  | documentation    | 12-57.1 | SLES12-12-0 |
i | documentation    | 12-57.1 | @System     |
  | file_server      | 12-57.1 | SLES12-12-0 |
  | fips             | 12-57.1 | SLES12-12-0 |
  | gateway_server   | 12-57.1 | SLES12-12-0 |
  | gnome-basic      | 12-5.1  | SLES12-12-0 |
i | gnome-basic      | 12-5.1  | @System     |
  | kvm_server       | 12-57.1 | SLES12-12-0 |
i | kvm_server       | 12-57.1 | @System     |
  | kvm_tools        | 12-57.1 | SLES12-12-0 |
i | kvm_tools        | 12-57.1 | @System     |
  | lamp_server      | 12-57.1 | SLES12-12-0 |
  | mail_server      | 12-57.1 | SLES12-12-0 |
  | ofed             | 12-57.1 | SLES12-12-0 |
  | oracle_server    | 12-57.1 | SLES12-12-0 |
  | printing         | 12-57.1 | SLES12-12-0 |
  | sap_server       | 12-57.1 | SLES12-12-0 |
  | x11              | 12-57.1 | SLES12-12-0 |
i | x11              | 12-57.1 | @System     |
  | xen_server       | 12-57.1 | SLES12-12-0 |
  | xen_tools        | 12-57.1 | SLES12-12-0 |


Next, to find out what exactly is in the pattern, you can use zypper info -t pattern patternname, as in zypper
info -t pattern fips, which shows a description and a list of all packages in the fips pattern.
After getting the required information about packages, you can move on and install them, using zypper in. You
can, of course, just perform a basic installation, by using zypper in packagename, but you can also do a somewhat
more sophisticated installation, such as zypper in vim -nano, which will install Vim and remove nano at the same
time. Instead of installing individual packages, you can install patterns as well, as in zypper in -t pattern fips.
In case your attempt to install packages results in an error message, you can insist a bit more by adding the -f option.
A specific case of installation is source packages. These are packages that don't contain ready-to-use binaries but the source code instead. On occasion, you may need these if you have to do some tweaking of the package source code. To install a source package, you'll use zypper si instead of zypper in.
A specific kind of package operation that you can do using zypper is patch management. That starts by typing zypper list-patches, to show a list of all patches that are available. To get more information about a specific patch, you can next type zypper info -t patch name. If you want to install all available patches, you can use zypper patch. This command has some additional options as well, such as zypper patch -b ###, in which ### is replaced with a Bugzilla issue number. This allows you to install the patches documented in that specific Bugzilla issue. Related to the patch commands is the up command, which simply upgrades all packages that have an upgrade available.
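The patch workflow can be sketched as follows. The Bugzilla number 123456 is hypothetical, and the commands that would actually modify the system are left as comments, so the block is safe to run anywhere.

```shell
# Patch management sketch. The Bugzilla number 123456 is hypothetical,
# and the system-modifying commands are left as comments.
if command -v zypper >/dev/null 2>&1; then
  zypper list-patches        # show all patches that are available
  # zypper patch             # install all needed patches
  # zypper patch -b 123456   # only patches for one Bugzilla issue
  # zypper up                # upgrade all packages that have updates
  HAVE_ZYPPER=yes
else
  echo "zypper not available on this system"
  HAVE_ZYPPER=no
fi
```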
When performing an update, normally all packages are updated. This is often OK, but on some occasions, it
is not. A program might require a specific version of a package to be installed. To make sure a package will never
be upgraded, you can create a lock. Use zypper addlock package to put a lock on a specific package name. This
guarantees that it will never be updated. To get an overview of packages that are locked, use zypper ll.

EXERCISE 5-1. MANAGING SOFTWARE WITH ZYPPER
In this exercise, you’ll learn how to work with some of the essential zypper commands.
1.	Type zypper se nmap. This will search for packages containing nmap in their package name or description.
2.	Type zypper info nmap to get more information about the nmap package. To see what will be installed with it, use zypper info --provides nmap, and to find out about its dependencies, type zypper info --requires nmap.
3.	Now that you know more about the package, you can type zypper in nmap to install it.
4.	Working with patterns is also convenient. Type zypper se -t pattern to show a list of all patterns that are available.
5.	Request more information about the fips pattern, by typing zypper info -t pattern fips.
6.	Assuming that you like what it is doing, install fips, using zypper in -t pattern fips.
7.	To make sure that nmap will never be upgraded, type zypper addlock nmap. Verify that the lock has successfully been set, using zypper ll. To remove the lock again, use zypper rl nmap.


Querying Packages with rpm
The zypper command, in general, is used to manage software installations and upgrades. Once the software has
been installed, the RPM database keeps track of it. On a modern Linux server such as SLES 12, you won’t use the rpm
command anymore to install, update, or remove software. It is still convenient, however, for querying software and its
current state.
There are two types of RPM query that can be performed. The database can be queried, and package files can be
queried. To query the database, you’ll use rpm -q, followed by the specific query option. To query a package file, you’ll
use rpm -qp, followed by the specific query option. There are a few useful query options, as follows:
-l: Lists package contents
-i: Lists information about a package
-c: Lists configuration files included in the package
-d: Lists documentation provided by a package
-f: Lists the name of the RPM package that a specific file belongs to
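These query options combine into a short read-only session. A sketch, using vsftpd as the example package and skipping gracefully where rpm is not installed:

```shell
# Read-only rpm queries, using vsftpd as the example package.
if command -v rpm >/dev/null 2>&1; then
  rpm -qf /usr/sbin/vsftpd   # which package owns this file
  rpm -qi vsftpd             # package description
  rpm -ql vsftpd             # all files in the package
  rpm -qc vsftpd             # configuration files only
  rpm -qd vsftpd             # documentation files only
  # The same queries work against a package file when you add -p:
  # rpm -qpl vsftpd-3.0.2.x86_64.rpm   (filename is hypothetical)
  HAVE_RPM=yes
else
  echo "rpm not available on this system"
  HAVE_RPM=no
fi
```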

In Exercise 5-2, you’ll work with some of the most significant query options.

EXERCISE 5-2. USING RPM QUERIES
Have you ever been in a situation in which you needed to find the configuration file that is used by a specific
binary? This exercise shows exactly what you can do in such cases. We’ll use the vsftpd binary as an example.
1.	To begin with, you need the exact name of the binary you want to query. Type which vsftpd to find out. It will show the name /usr/sbin/vsftpd.
2.	Now we have to find out which RPM this file comes from. The command rpm -qf /usr/sbin/vsftpd will do that for us and show vsftpd as the RPM name. It shows a version number as well, but for querying the database, the version number is not important.
3.	Now let's read the package description. Type rpm -qi vsftpd to get more information.
4.	To get a list of all files in the package, type rpm -ql vsftpd.
5.	To see which files are used for its configuration, use rpm -qc vsftpd.
6.	And if you have to read some more documentation before you can begin, type rpm -qd vsftpd.

Managing Jobs and Processes
Most of the work that you'll be doing as a Linux administrator will be done from a terminal window. To start a task, you'll type a specific command. For example, you'll type ls to display a listing of files in the current directory. Every command you type, from the perspective of the shell, is started as a job. Most commands are started as a job in the foreground. That means the command is started, it shows its result in the terminal window, and then it exits. As many commands only take a short while to complete their work, you don't have to do any specific job management on them.


While some commands only take a few seconds to finish, other commands take much longer. Imagine, for
example, the mandb command that is going to update the database that is used by the man -k command. This
command can easily take a few minutes to complete. For commands such as these, it makes sense to start them as a
background job, by putting an & sign (ampersand) at the end of the command, as in the following example:
mandb &
By putting an & at the end of a command, you start it as a background job. When starting a command this way, the shell shows its job number (between square brackets), as well as its unique process identification number, the PID. You can use these to manage your background jobs, as is explained in the following paragraphs.
The benefit of starting a job in the background is that the terminal remains available to launch other commands, which is good if the job takes a long time to complete. The moment the background job is finished, you'll see a message that it has completed, but this message is typically only displayed the next time the shell prints its prompt, for example, after you press Enter.
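The background-job mechanics can be tried safely anywhere; in this sketch, sleep stands in for a long-running command such as mandb:

```shell
# Start a long-running command in the background with &.
# sleep stands in for something like mandb here.
sleep 2 &
BG_PID=$!    # the shell stores the PID of the last background job in $!
echo "started background job with PID $BG_PID"

# The terminal stays available for other commands while the job runs.
echo "doing other work in the meantime"

# wait blocks until the background job has finished.
wait "$BG_PID"
echo "background job finished"
```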
To manage jobs that are started in the background, there are a few commands and key sequences that you can
use (see Table 5-1).
Table 5-1. Managing Shell Jobs

Command/Key Sequence   Use
Ctrl+Z                 Use this to pause a job. Once paused, you can put it in the foreground or in the background.
fg                     Use this to start a paused job as a foreground job.
bg                     Use this to start a paused job as a background job.
jobs                   Use this to show a list of all current jobs.
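Ctrl+Z, fg, and bg require an interactive shell, but the same pause-and-resume mechanism can be driven from a script with the STOP and CONT signals. A sketch:

```shell
# Pause and resume a background process with signals: the scriptable
# equivalent of Ctrl+Z (STOP) and bg (CONT).
sleep 10 &
PID=$!

kill -STOP "$PID"                      # pause, like Ctrl+Z
sleep 1                                # give the kernel a moment
STATE_STOPPED=$(ps -o stat= -p "$PID")
echo "state while stopped: $STATE_STOPPED"

kill -CONT "$PID"                      # resume in the background, like bg
sleep 1
STATE_RUNNING=$(ps -o stat= -p "$PID")
echo "state after continue: $STATE_RUNNING"

kill "$PID"                            # clean up the helper process
wait "$PID" 2>/dev/null
echo "done"
```

In the ps STAT column, a stopped process shows as T; after CONT, the sleeping process shows as S again.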

Normally, you won’t do too much job management, but in some cases, it does make sense to move a job you’ve
already started to the background, so that you can make the terminal where it was started available for other tasks.
Exercise 5-3 shows how to do this.

EXERCISE 5-3. MANAGING JOBS
In this exercise, you’ll learn how to move a job that was started as a foreground job to the background. This can
be especially useful for graphical programs that are started as a foreground job and occupy your terminal until
they have finished.
1.	From a graphical user interface, open a terminal, and from that terminal, start the gedit program. You will see that the terminal is now occupied by the graphical program you've just started, and at this moment, you cannot start any other programs.
2.	Click in the terminal where you started gedit and use the Ctrl+Z key sequence. This temporarily stops the graphical program and gives back the prompt on your terminal.
3.	Use the bg command to move the job you've started to the background.
4.	From the terminal window, type the jobs command. This shows a list of all jobs that are started from this terminal. You should see just the gedit command. In the list that the jobs command shows you, every job has a unique job number. If you have only one job, it will always be job number 1.
5.	To put a background job back in the foreground, use the fg command. By default, this command will put the last command you've started in the background back in the foreground. If you want to put another background job in the foreground, use fg, followed by the job number of the job you want to manage, for example, fg 1.

■■Note Job numbers are specific to the shell in which you’ve started the job. That means that if you have multiple
terminals that are open, you can manage jobs in each of these terminals.

System and Process Monitoring and Management
In the preceding text, you've learned how to manage jobs that you have started from a shell. As mentioned, every command that you've started from the shell can be managed as a job. There are, however, many more tasks running at any given moment on your server. These tasks are referred to as processes.
Every command you enter or program you start from the shell becomes not only a job but also a process. Apart from that, when your server boots, many other processes are started to provide services on your server. These are the so-called daemons: processes that are always started in the background and provide services on your server. If, for example, your server starts an Apache web server, this web server is started as a daemon.
For a system administrator, managing processes is an important task. You may have to send a specific signal to a process that no longer responds properly. Also, on a very busy system, it is important to get an overview of your system and check exactly what it is doing. You will use a few commands to manage and monitor processes on your computer (see Table 5-2).
Table 5-2. Common Process Management Commands

Command   Use
ps        Used to show all current processes
kill      Used to send signals to processes, such as asking or forcing a process to stop
pstree    Used to give an overview of all processes, including the relation between parent and child processes
killall   Used to kill all processes, based on the name of the process
top       Used to get an overview of the current system activity

Managing Processes with ps
As an administrator, you might need to find out what a specific process is doing on your server. The ps command helps you with that. If started as root with the appropriate options, ps shows information about the current status of processes. For historical reasons, the ps command can be used in two different modes: the BSD mode, in which options are not preceded by a - sign, and the System V mode, in which all options are preceded by a - sign. Between these two modes, there are options with overlapping functionality.


Two of the most useful ways to use the ps command are ps fax, which gives a tree-like overview of all current processes, and ps aux, which gives an overview with lots of usage information for every process. Listing 5-3 shows partial output of the ps aux command.
Listing 5-3. Partial Output of the ps aux Command
linux-3kk5:~ # ps aux | head -n 10
USER       PID %CPU %MEM   VSZ  RSS TTY STAT START TIME COMMAND
root         1  0.0  0.3 33660 3468 ?   Ss   Sep19 0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 19
root         2  0.0  0.0     0    0 ?   S    Sep19 0:00 [kthreadd]
root         3  0.0  0.0     0    0 ?   S    Sep19 0:00 [ksoftirqd/0]
root         5  0.0  0.0     0    0 ?   S<   Sep19 0:00 [kworker/0:0H]
root         7  0.0  0.0     0    0 ?   S    Sep19 0:00 [migration/0]
root         8  0.0  0.0     0    0 ?   S    Sep19 0:00 [rcu_bh]
root         9  0.0  0.0     0    0 ?   S    Sep19 0:01 [rcu_sched]
root        10  0.0  0.0     0    0 ?   S    Sep19 0:01 [watchdog/0]
root        11  0.0  0.0     0    0 ?   S    Sep19 0:00 [watchdog/1]
If using ps aux, process information is shown in different columns:
USER: The name of the user whose identity is used to run this process
PID: The process identification number (PID), a unique number that is needed to manage processes
%CPU: The percentage of CPU cycles used by this process
%MEM: The percentage of memory used by this process
VSZ: The Virtual Memory Size: the total amount of memory claimed by this process. It is normal for processes to claim much more memory than they really need; that's no problem, because most of the memory in the VSZ column isn't actually in use
RSS: The Resident Set Size: the total amount of memory this process is really using
TTY: If the process was started from a terminal, the device name of the terminal is mentioned in this column
STAT: The current status of the process. The three most common status indicators are S for sleeping, R for running, and Z for a process that has entered the zombie state
START: The time the process was started
TIME: The cumulative CPU time this process has used since it was started
COMMAND: The name of the command file that was used to start the process. If the name of this file is between square brackets, it is a kernel process

Another common way to show process information is by using the command ps fax. The most useful addition in this command is the f option, which shows the relation between parent and child processes. For an administrator, this is important information to be aware of, because process management goes via the parent process. That means that in order to kill a process, you must be able to contact the parent of that specific process. Also, if you kill a process that currently has active children, all of the children of the process are terminated as well. In Exercise 5-4, you can find out for yourself how this works.
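The parent-child relation is visible in the PID and PPID columns of ps. A minimal sketch that inspects the current shell and its parent:

```shell
# Show the current shell process together with its parent.
ps -o pid=,ppid=,comm= -p "$$"

# Extract the parent PID of this shell and look that process up, too.
PARENT=$(ps -o ppid= -p "$$" | tr -d ' ')
ps -o pid=,comm= -p "$PARENT"
echo "parent PID: $PARENT"
```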


Sending Signals to Processes with the kill Command
To manage processes as an administrator, you can send signals to the process in question. According to the POSIX
standard—a standard that defines how UNIX-like operating systems should behave—different signals can be used.
In practice, only a few of these signals are always available. It is up to the person who writes a program to determine
which signals are available and which are not.
A well-known example of a command that offers more than the default signals is the dd command. When this
command is active, you can send SIGUSR1 to the command, to show details about the current progress of the dd
command.
Three signals are available at all times: SIGHUP (1), SIGKILL (9), and SIGTERM (15). Of these, SIGTERM is
the best way to ask a process to stop its activity. If as an administrator you request closure of a program, using the
SIGTERM signal, the process in question can still close all open files and stop using its resources.
A more brutal way of terminating a process is by sending it SIGKILL, which doesn’t give any time at all to the
process to cease its activity. The process is just cut off, and you risk damaging open files.
A completely different way of managing processes is by using the SIGHUP signal, which tells a process that it
should reinitialize and read its configuration files again.
To send signals to processes, you will use the kill command. This command typically has two arguments: the
number of the signal that you want to send to the process and the PID of the process to which you want to send a
signal. An example is the command kill -9 1234, which will send the SIGKILL signal to the process with PID 1234.
When using the kill command, you can use the PIDs of multiple processes to send specific signals to multiple processes simultaneously. Another convenient way to send a signal to multiple processes simultaneously is by using the killall command, which takes the name of a process as its argument. For example, the command killall -SIGTERM vsftpd would send the SIGTERM signal to all active vsftpd processes.
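The effect of SIGTERM can be demonstrated safely with a disposable background process; the killall line is shown as a comment, because it would signal every process with that name:

```shell
# Send SIGTERM to a disposable background process and observe the result.
sleep 30 &
VICTIM=$!
kill -15 "$VICTIM"           # same as: kill -SIGTERM "$VICTIM"
wait "$VICTIM" 2>/dev/null
KILL_STATUS=$?
echo "process $VICTIM terminated, wait status $KILL_STATUS"

# killall addresses processes by name instead of PID (sketch, do not
# run casually: it would signal every process with that name):
# killall -SIGTERM vsftpd
```

In POSIX-style shells, the wait status of a child killed by a signal is 128 plus the signal number, so a SIGTERM-terminated child reports a status above 128.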

EXERCISE 5-4. MANAGING PROCESSES WITH PS AND KILL
In this exercise, you will start a few processes to make the parent-child relation between these processes visible.
Next, you will kill the parent process and see that all related child processes also disappear.
1.	Open a terminal window (right-click the graphical desktop and select Open in Terminal).
2.	Use the bash command to start bash as a subshell in the current terminal window.
3.	Use ssh -X localhost to start ssh as a subshell in the bash shell you've just opened. When asked if you want to permanently add localhost to the list of known hosts, type "yes." Next, enter the password of the user root.
4.	Type gedit & to start gedit as a background job.
5.	Type ps efx to show a listing of all current processes, including the parent-child relationship between the commands you've just entered.
6.	Find the PID of the SSH shell you've just started. If you can't find it, use ps aux | grep ssh. One of the output lines shows the ssh -X localhost command you've just entered. Note the PID that you see in that output line.
7.	Use kill, followed by the PID number you've just found, to close the SSH shell. As the SSH environment is the parent of the gedit command, killing ssh will kill the gedit window as well.


Using top to Show Current System Activity
The top program offers a convenient interface in which you can monitor current process activity and perform some
basic management tasks. Figure 5-10 shows what a top window might look like.

Figure 5-10. Monitoring system activity with top
In the upper five lines of the top interface, you can see information about the current system activity. The lower part of the top window shows a list of the most active processes at the moment, which is refreshed regularly (every three seconds, by default). If you notice that a process is very busy, you can press the k key from within the top interface to terminate that process. top will first ask for the PID of the process to which you want to send a signal (PID to kill). After you enter this, it will ask which signal you want to send to that PID, and then it will operate on the requested PID immediately.
In the upper five lines of the top screen, you'll find a status indicator of current system performance. The most important information in the first line is the load average. It shows three figures: the load average over the last minute, the last five minutes, and the last fifteen minutes.
To understand the load average parameter, you should understand that it reflects the average number of processes in the run queue, which is the queue in which processes wait before they can be handled by the scheduler. The scheduler is the kernel component that ensures a process is handled by any of the CPU cores in your server.
A rough rule of thumb for estimating whether your system can handle its workload is that the number of processes waiting in the run queue should not be higher than the total number of CPU cores in your server. A quick way to find out how many CPU cores are in your server is by pressing the 1 key from the top interface. This will show one line for every CPU core in your server.
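The same numbers top displays can be read directly from the kernel; the paths and tools below are Linux-specific:

```shell
# The three load-average figures, straight from the kernel.
read LOAD1 LOAD5 LOAD15 REST < /proc/loadavg
echo "load averages: $LOAD1 (1m) $LOAD5 (5m) $LOAD15 (15m)"

# The number of CPU cores, to compare the load average against.
CORES=$(nproc)
echo "CPU cores: $CORES"
```

If the first load-average figure is persistently higher than the number of cores, processes are queueing up for CPU time.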


In the second line of the top window, you’ll see how many tasks your server is currently handling and what each
of these tasks is currently doing. In this line, you may find four different status indications, as follows:
•	Running: The number of processes that have been active in the last polling loop
•	Sleeping: The number of processes that are currently loaded in memory but haven't issued any activity in the last polling loop
•	Stopped: The number of processes that have been sent a stop signal but haven't yet freed all of the resources they were using
•	Zombie: The number of processes that are in a zombie state. This is an unmanageable process state in which the parent of the zombie process has disappeared while the child still exists but cannot be managed anymore, because you need the parent of a process to manage that process.

A zombie process is normally the result of bad programming. If you're lucky, zombie processes will go away by themselves. Sometimes they don't, and that can be an annoyance. In that case, the only way to clean up your current zombie processes is by rebooting your server.
In the third line of top, you get an overview of the current processor activity. If you're experiencing a problem (which is typically expressed as a high load average), the %Cpu(s) line tells you exactly what the CPUs in your server are doing. When trying to understand current system activity, be aware that the %Cpu(s) line summarizes all CPUs in your system. For a per-CPU overview of the current activity, press the 1 key from the top interface.
In the %Cpu(s) line, you’ll find the following information about CPU status:
•	us: The percentage of time your system is spending in user space, which is the amount of time your system is handling user-related tasks
•	sy: The percentage of time your system is working on kernel-related tasks in system space. On average, this should be (much) lower than the amount of time spent in user space
•	ni: The amount of time your system has worked on handling tasks of which the nice value has been changed (see the next section)
•	id: The amount of time the CPU has been idle
•	wa: The amount of time the CPU has been waiting for I/O requests. This is a very common indicator of performance problems. If you see an elevated value here, you can often make your system faster by optimizing disk performance. See Chapter 15 for more details about performance optimization
•	hi: The amount of time the CPU has been handling hardware interrupts
•	si: The amount of time the CPU has been handling software interrupts
•	st: The amount of time that has been stolen from this CPU. You'll typically see this in virtual machines; the value increases when the hypervisor is servicing other virtual machines and this machine has to wait for CPU cycles
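If you want to capture these CPU figures non-interactively, for a script or a quick check over SSH, top can run in batch mode. A small sketch; the exact label of the summary line may differ slightly between top versions:

```shell
# Run a single batch-mode iteration of top and keep only the CPU summary line(s)
top -b -n 1 | grep -i 'cpu'
```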
In the last two lines of the top status information, you'll find current information about memory usage. The first
line contains information about memory usage; the second line has information about the usage of swap space. The
formatting is not ideal, however: the last item on the second line actually gives information about the usage of
memory. The following parameters show how memory currently is used:
•	Mem: The total amount of memory that is available to the Linux kernel
•	used: The total amount of memory that currently is used
•	free: The total amount of memory that is available for starting new processes

Chapter 5 ■ Common Administration Tasks

•	buffers: The amount of memory that is used for buffers. In buffers, essential system tables are stored in memory, as well as data that remains to be committed to disk
•	cached: The amount of memory that is currently used for cache
The Linux kernel tries to use system memory as efficiently as possible. To accomplish this goal, the kernel caches
a lot. When a user requests a file from disk, it is first read from disk and copied to RAM. Fetching a file from disk is an
extremely slow process, compared to fetching the file from RAM. For that reason, once the file is copied in RAM, the
kernel tries to keep it there as long as possible. This process is referred to as caching.
From the top interface, you can see the amount of RAM that currently is used for caching of data. You’ll notice
that the longer your server is up, the more memory is allocated to cache, and this is good, because the alternative to
use memory for caching would be to do nothing at all with it. The moment the kernel needs memory that currently is
allocated to cache for something else, it can claim this memory back immediately.
Related to cache is the memory that is in buffers. In here, the kernel caches tables and indexes that it needs
in order to allocate files, as well as data that still has to be committed to disk. Like cache, the buffer memory is also
memory that can be claimed back by the kernel immediately, but you should make sure that a minimal amount of
buffers, as well as cache, is available at all times. See Chapter 15 for further details.
As an administrator, you can tell the kernel to free all memory in buffers and cache immediately. Make sure
that you do this on test servers only, however, because in some cases, it may lead to a crash of the server! To free the
memory in buffers and cache immediately, as root, use the command echo 3 > /proc/sys/vm/drop_caches.
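The effect is easy to observe with the free command. A short sketch, to be run as root on a test system only, per the warning above:

```shell
free -m                              # note the buffers/cache figures
sync                                 # commit dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries, and inodes
free -m                              # the cache figures should now be much lower
```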

Managing Process Niceness
By default, every process is started with the same priority. On occasion, it may happen that some processes require
some additional time, or can offer some of their processor time because they are not that important. In those cases,
you can change the priority of the process by using the nice command.
When using the nice command, you can adjust the process niceness from -20, which is good for the most
favorable scheduling, to 19 (least favorable). By default, all processes are started with a niceness of 0. The following
example code line shows how to start the dd command with an adjusted niceness of -10, which makes it more
favorable and, therefore, allows it to finish its work faster:
nice -n -10 dd if=/dev/sda of=/dev/null
Apart from specifying the niceness to use when starting a process, you can also use the renice command to
adjust the niceness of a command that was already started. By default, renice works on the PID of the process whose
priority you want to adjust, so you have to find this PID before using renice, which you can do with the ps command
that was described earlier in this chapter.
If, for example, you want to adjust the niceness of the find command that you've just started, you would begin by
using ps aux | grep find, which gives you the PID of the command. Assuming that this gives you the PID 1234, you can
then use renice -n -10 1234 to adjust the niceness of the command.
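The change can be verified in the NI column of ps. A minimal sketch that uses a harmless sleep process as the target; the PID handling is the same for any command:

```shell
# Start a background process and capture its PID
sleep 60 &
pid=$!

ps -o pid,ni,comm -p "$pid"   # NI column shows 0, the default niceness
renice -n 5 -p "$pid"         # raise the nice value (lower the priority)
ps -o pid,ni,comm -p "$pid"   # NI column now shows 5
kill "$pid"
```

Note that only root may assign negative nice values; an ordinary user can only make processes less favorable.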
Another method of adjusting process niceness is to do it from top. The convenience of using top for this purpose
is that top shows only the busiest processes on your server, which typically are the processes whose niceness you
want to adjust anyway. After identifying the PID of the process you want to adjust, from the top interface, press r. On
the sixth line of the top window, you’ll now see the message PID to renice:. Now, enter the PID of the process you
want to adjust. Next, top prompts Renice PID 3284 to value:. Here, you enter the positive or negative nice value
you want to use. Next, press Enter to apply the niceness to the selected process. In Exercise 5-5, you can apply these
procedures.


EXERCISE 5-5. CHANGING PROCESS PRIORITY
In this exercise, you’ll start four dd processes, which, by default, will go on forever. You’ll see that all of them are
started with the same priority and get about the same amount of CPU time and capacity. Next, you’ll adjust the
niceness of two of these processes from within top, which immediately shows the effect of using nice on these
commands.
1.	Open a terminal window and use su - to escalate to a root shell.
2.	Type the command dd if=/dev/zero of=/dev/null & and repeat this four times.
3.	Now start top. You'll see the four dd commands listed on top. In the PR column, you can see that the priority of all of these processes is set to 20. The NI column, which indicates the actual process niceness, shows a value of 0 for all of the dd processes, and in the TIME column, you can see that all of the processes use about the same amount of processor time.
4.	Now, from within the top interface, press r. On the PID to renice prompt, type the PID of one of the four dd processes and press Enter. When asked to provide Renice PID <PID> to value:, type 5 and press Enter.
5.	With the preceding action, you have lowered the priority of one of the dd commands. You should immediately start to see the result in top, as one of the dd processes will receive a significantly smaller amount of CPU time.
6.	Repeat the procedure to adjust the niceness of one of the other dd processes. Now use the niceness value of -15. You will notice that this process now tends to consume all of the available resources on your computer, which shows that you should avoid the extremes when working with nice.
7.	Use the k command from the top interface to stop all processes for which you've just adjusted the niceness.

Scheduling Tasks
Up to now, you have learned how to start processes from a terminal window. For some tasks, it makes sense to have
them started automatically. Think, for example, of a backup job that you want to execute automatically every night.
To start jobs automatically, you can use cron.
cron consists of two parts. First, there is the cron daemon, a process that starts automatically when your server
boots. This cron daemon checks its configuration every minute to see if there are any tasks that should be
executed at that moment.
Some cron jobs are started from the directories /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, and
/etc/cron.monthly. Typically, as an administrator, you’re not involved in managing these. Programs and services
that require some tasks to be executed on a regular basis simply put a script in the directory where they need it, which
ensures that the task is automatically executed. Some RPM packages will copy scripts that are to be executed by cron to
the /etc/cron.d directory. The files in this directory contain everything that is needed to run a command through cron.
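Files in /etc/cron.d use the system crontab format, which has one extra field compared to a user crontab: the user the command should run as. A hypothetical example; the file name and script path are made up for illustration:

```shell
# /etc/cron.d/nightly-backup (hypothetical example)
# min hour dom mon dow  user  command
15    2    *   *   *    root  /usr/local/bin/nightly-backup.sh
```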
As an administrator, you can start cron jobs as a specific user, by first logging in as that user (or by using su - to
take the identity of the user you want to start the cron job as). After doing that, you’ll use the command crontab -e,
which starts the crontab editor, which is a vi interface by default. That means that you work from crontab -e the way
that you are used to working from vi.


As crontab files are created with crontab -e, you'll specify on separate lines which command has to be executed
at which moment. Following, you can see an example of a line that can be used in crontab:

0 2 * * *    /root/bin/runscript.sh
It is very important that your cron jobs start at the right moment. To define this, five different
fields are used to specify date and time. Table 5-3, following, lists the time and date indicators that can be used.
Table 5-3. cron Time Indicators

Field           Allowed Value
minute          0-59
hour            0-23
day of month    1-31
month           1-12
day of week     0-7 (0 and 7 are Sunday)

This means, for example, that in a crontab specification, the time indicator 0 2 3 4 * would translate to
minute 0 of hour 2 (which is 2 a.m.) on the third day of the fourth month. Day of the week, in this example, is not
specified, which means that the job would run on any day of the week.
In a cron job definition, you can use ranges and step values as well. For example, the line */5 * * * 1-5 would mean that a job
has to run every five minutes, but only from Monday through Friday.
After creating the cron configuration file, the cron daemon automatically picks up the changes and ensures that
the job will run at the time indicated.
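A few more schedule examples may help to illustrate the syntax; the script names are hypothetical:

```shell
# min hour dom mon dow  command
30    3    *   *   0    /root/bin/weekly-backup.sh     # 03:30 every Sunday
*/10  8-18 *   *   1-5  /root/bin/poll-status.sh       # every 10 minutes, 08:00-18:59, Mon-Fri
0     0    1   1,7 *    /root/bin/halfyear-report.sh   # midnight on January 1 and July 1
```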

EXERCISE 5-6. CONFIGURING CRON JOBS
In this exercise, you’ll learn how to schedule a cron job. You’ll use your own user account to run a cron job that
sends an e-mail message to user root on your system. In the final step, you’ll verify that root has indeed received
the message.
1.	Open a terminal and make sure that you are logged in with your normal user account.
2.	Type crontab -e to open the crontab editor.
3.	Type the following line, which will write a message to syslog every five minutes: */5 * * * * logger hello
4.	Use the vi command :wq! to close the crontab editor and save your changes.
5.	Wait five minutes. After five minutes, type tail -f /var/log/messages to verify that the message has been written to the logs.
6.	Go back to the terminal where you are logged in with the normal user account and type crontab -r. This deletes the current crontab file from your user account.


Configuring Logging
On SLES 12, two different systems are used for logging. The rsyslog service takes care of writing log messages to
different files, and the journald service works with systemd to fetch messages that are generated through systemd
units and writes that information to the journal. The two can also be interconnected, to ensure that messages from
services that are handled by systemd occur in the rsyslog log files as well. In this section, you'll learn how to
configure both of these services.

Understanding rsyslog
Since the old days of UNIX, the syslog service has been used for logging information. This service is compatible with
many devices, which makes it a true log service that can take care of messages that are generated by multiple devices
in the network. rsyslogd is the latest incarnation of syslog, providing full backward compatibility with syslog
as well as new features.
The basis of the rsyslogd service configuration is in the file /etc/rsyslog.conf. In this file, logging is configured
by the definition of facilities, priorities, and destinations. Also, modules are used to provide additional functionality.

Understanding Facilities
The rsyslog facilities define what needs to be logged. To maintain backward compatibility with syslog, the facilities
are fixed, and it’s not possible to add new ones. Table 5-4 gives an overview of facilities and their use.
Table 5-4. Facilities and Their Use

Facility        Use
auth            Messages related to authentication
authpriv        Same as auth
cron            Messages that are generated by the cron service
daemon          A generic facility that can log messages that are generated by daemons that don't have their own facilities
kern            Kernel-related messages
lpr             Printer-related messages
mail            Messages that are related to the mail system
mark            A special facility that can be used to write log information at a specified interval
news            Messages related to the NNTP news system
security        Same as auth
syslog          Messages that are generated by the rsyslog service itself
user            User-related messages
uucp            Messages related to the legacy UUCP system
local0-local7   Facilities that can be assigned to services that don't have their own syslog facility


As you can see in Table 5-4, the list of available facilities is far from complete, and some commonly used services
don't have their own facility. That is what the facilities local0 through local7 are created for. Many services that don't
do syslog logging by default can be configured to log through one of these facilities. If, for example, an administrator
wants Apache to write its log messages to syslog rather than directly to its own log files, the line ErrorLog
syslog:local3 can be included in the Apache configuration file. Next, in syslog, the local3 facility can be further
defined, so that Apache-related log information is written to the right location.

Understanding Priorities
Where the facility defines what should be logged, the priority defines when information should be sent to a log file.
Priorities, in ascending order of severity, are debug, info, notice, warning (or warn), err (or error), crit, alert,
and emerg (or panic, which is equivalent to emerg).
While defining a syslog rule, you should always use a facility.priority pair, as in kern.debug, which specifies
that the kernel facility should send everything with the priority debug (and higher) to the destination that is specified.
When defining facilities and priorities, a scripting language can be used to select the exact conditions under which to
write log messages.
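Put together, a rule consists of a facility.priority selector followed by a destination. A few illustrative lines; the file names here are arbitrary examples, not SLES defaults:

```shell
kern.warning        /var/log/kern-warnings     # kernel messages with priority warning and up
mail.err            /var/log/mail-errors       # mail errors and worse
authpriv.*          /var/log/auth              # everything from the authpriv facility
*.crit              :omusrmsg:root             # critical messages to root's terminal
```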

Understanding Destinations
The destination defines where messages should be sent. Typically, this will be a file in the /var/log directory. Many
alternatives can be used, too, such as the name of a specific console, a remote machine, a database, a specific user or
all users who are logged in, and more. If used with output modules (see the next section), the possibilities are many.

Understanding Modules
Apart from facilities, priorities, and destinations, modules can be used. A module is an extension to the original
syslog code and adds functionality. Many modules are available to allow syslog to receive messages from specific
subsystems or to send messages to specific destinations.
In general, there are two types of modules. The input modules (whose names begin with im) are used to receive
incoming messages, and the output modules (whose names start with om) are used to send messages in a
specific direction. Common modules are immark, which writes marker messages at a regular interval; imuxsock,
which allows syslog to communicate with journald; and imudp, which allows for reception of messages from remote
servers over UDP.
In Listing 5-4, following, you can see an example of the rsyslog.conf configuration file on SLES 12.
Listing 5-4. Sample rsyslog.conf Configuration File
$ModLoad immark.so
$MarkMessagePeriod      3600

$ModLoad imuxsock.so
$RepeatedMsgReduction   on

$ModLoad imklog.so
$klogConsoleLogLevel    1

$IncludeConfig /run/rsyslog/additional-log-sockets.conf
$IncludeConfig /etc/rsyslog.d/*.conf

if      ( \
            /* kernel up to warning except of firewall */      \
            ($syslogfacility-text == 'kern')      and          \
            ($syslogseverity <= 4 /* warning */ ) and not      \
            ($msg contains 'IN=' and $msg contains 'OUT=')     \
        ) or ( \
            /* up to errors except of facility authpriv */     \
            ($syslogseverity <= 3 /* errors */ ) and not       \
            ($syslogfacility-text == 'authpriv')               \
        ) \
then {
        /dev/tty10
        |/dev/xconsole
}

*.emerg                         :omusrmsg:*

if      ($syslogfacility-text == 'kern') and \
        ($msg contains 'IN=' and $msg contains 'OUT=') \
then {
        -/var/log/firewall
        stop
}

if      ($programname == 'acpid' or $syslogtag == '[acpid]:') and \
        ($syslogseverity <= 5 /* notice */) \
then {
        -/var/log/acpid
        stop
}

if      ($programname == 'NetworkManager') or \
        ($programname startswith 'nm-') \
then {
        -/var/log/NetworkManager
        stop
}

mail.*                          -/var/log/mail
mail.info                       -/var/log/mail.info
mail.warning                    -/var/log/mail.warn
mail.err                         /var/log/mail.err

news.crit                       -/var/log/news/news.crit
news.err                        -/var/log/news/news.err
news.notice                     -/var/log/news/news.notice

*.=warning;*.=err               -/var/log/warn
*.crit                           /var/log/warn

*.*;mail.none;news.none         -/var/log/messages

local0.*;local1.*               -/var/log/localmessages
local2.*;local3.*               -/var/log/localmessages
local4.*;local5.*               -/var/log/localmessages
local6.*;local7.*               -/var/log/localmessages


As you can see from the sample file in the preceding listing, there is more than just the definition of facilities,
priorities, and destinations. At the beginning of the file, some modules are defined. The immark module writes a
marker message every hour, which helps verify that rsyslog is still operational. The imuxsock module allows syslog
to receive messages from journald, and the $RepeatedMsgReduction setting ensures that repeated messages are not
all written to the syslog files.
After the part where the modules are defined, two inclusions are defined. In particular, the
$IncludeConfig /etc/rsyslog.d/*.conf line is important. This tells syslog to read additional configuration files
as well. These configuration files may have been dropped in the /etc/rsyslog.d directory by software installation
from RPM packages.
Next, there are a few lines that use scripting. In these scripting lines, the $msg contains conditions are especially
interesting. They let rsyslog inspect the contents of a message and decide exactly what to do with it.
The last part of the sample configuration file defines where messages should be written to. In most cases, the
destination is a file name. The file name can be preceded by a -. This tells syslog that it's not necessary to write the
message immediately, but that the message can be buffered for better performance. Some other log destinations are
defined as well. The destination :omusrmsg:*, for example, uses the output module user message, which sends the
message to all users who are currently logged in.

Reading Log Files
The result of the work of rsyslog is in the log files. These log files are in the /var/log directory. According to the
definitions in /etc/rsyslog.conf, different files are used, but the main log file is /var/log/messages. Listing 5-5 shows
partial contents of this file.
Listing 5-5. Partial Contents of the /var/log/messages File
2014-09-21T03:30:01.253425-04:00 linux-3kk5 cron[48465]: pam_unix(crond:session): session opened
for user root by (uid=0)
2014-09-21T03:30:01.255588-04:00 linux-3kk5 kernel: [160530.013392] type=1006
audit(1411284601.249:220): pid=48465 uid=0 old auid=4294967295 new auid=0 old ses=4294967295 new
ses=182 res=1
2014-09-21T03:30:01.293811-04:00 linux-3kk5 CRON[48465]: pam_unix(crond:session): session closed
for user root
2014-09-21T03:38:57.020771-04:00 linux-3kk5 wickedd-dhcp4[925]: eth0: Committed DHCPv4 lease with
address 192.168.4.210 (lease time 1800 sec, renew in 900 sec, rebind in 1575 sec)
2014-09-21T03:38:57.021297-04:00 linux-3kk5 wickedd[929]: eth0: address 192.168.4.210 covered by a
dhcp lease
2014-09-21T03:38:57.052381-04:00 linux-3kk5 wickedd[929]: eth0: Notified neighbours about IP address
192.168.4.210
2014-09-21T03:38:57.052774-04:00 linux-3kk5 wickedd[929]: route ipv4 0.0.0.0/0 via 192.168.4.2 dev
eth0 type unicast table main scope universe protocol dhcp covered by a ipv4:dhcp lease
2014-09-21T03:38:57.230038-04:00 linux-3kk5 wickedd[929]: Skipping hostname update, none available
2014-09-21T03:45:01.311979-04:00 linux-3kk5 kernel: [161429.729495] type=1006
audit(1411285501.307:221): pid=48669 uid=0 old auid=4294967295 new auid=0 old ses=4294967295 new
ses=183 res=1
2014-09-21T03:45:01.311470-04:00 linux-3kk5 cron[48669]: pam_unix(crond:session): session opened
for user root by (uid=0)
2014-09-21T03:45:01.338933-04:00 linux-3kk5 CRON[48669]: pam_unix(crond:session): session closed
for user root
2014-09-21T03:53:57.152972-04:00 linux-3kk5 wickedd-dhcp4[925]: eth0: Committed DHCPv4 lease with
address 192.168.4.210 (lease time 1800 sec, renew in 900 sec, rebind in 1575 sec)
2014-09-21T03:53:57.153516-04:00 linux-3kk5 wickedd[929]: eth0: address 192.168.4.210 covered by a
dhcp lease


2014-09-21T03:53:57.188390-04:00 linux-3kk5 wickedd[929]: eth0: Notified neighbours about IP address
192.168.4.210
2014-09-21T03:53:57.188638-04:00 linux-3kk5 wickedd[929]: route ipv4 0.0.0.0/0 via 192.168.4.2 dev
eth0 type unicast table main scope universe protocol dhcp covered by a ipv4:dhcp lease
2014-09-21T03:53:57.359250-04:00 linux-3kk5 wickedd[929]: Skipping hostname update, none available
2014-09-21T03:54:05.119585-04:00 linux-3kk5 dbus[790]: [system] Activating via systemd: service
name='org.freedesktop.PackageKit' unit='packagekit.service'
2014-09-21T03:54:05.124790-04:00 linux-3kk5 PackageKit: daemon start
2014-09-21T03:54:05.180829-04:00 linux-3kk5 dbus[790]: [system] Successfully activated service
'org.freedesktop.PackageKit'
2014-09-21T03:54:25.350410-04:00 linux-3kk5 PackageKit: daemon quit
2014-09-21T04:00:01.355931-04:00 linux-3kk5 kernel: [162329.431431] type=1006
audit(1411286401.349:222): pid=48878 uid=0 old auid=4294967295 new auid=0 old ses=4294967295 new
ses=184 res=1
2014-09-21T04:00:01.355448-04:00 linux-3kk5 cron[48878]: pam_unix(crond:session): session opened for
user root by (uid=0)
2014-09-21T04:00:01.397676-04:00 linux-3kk5 CRON[48878]: pam_unix(crond:session): session closed for
user root
Each message is structured in a similar way. It starts with the date and time the message has been logged. Next,
the name of the host that has logged the message is printed (linux-3kk5, in this example). Then follows the name of
the process and its PID, followed by the specific message that is logged.
You will note that services tend to have their own method of writing information to the syslog. You can see that some
commands perform logging in a way that is rather difficult to read, while other log messages are easy to understand.

Configuring Remote Logging
In a large network environment, it makes sense to set up remote logging. This allows you to create one log server that
is configured with a large amount of storage and will keep messages for a longer period. Other servers can be used as
clients toward that server and maintain small logs for themselves.
To set up remote logging, you’ll have to specify a remote log destination on the servers on which you want to do
remote logging. The lines that do this follow:
*.*    @@remotehost.example.com
*.*    @remotehost.example.com

The first line tells rsyslog to send messages to the remote host specified, using TCP; the second line tells rsyslog
to do the same, but using UDP. Sending messages via TCP is more reliable: TCP is a connection-oriented protocol, so
delivery of the log messages is acknowledged, and you can be reasonably sure that no messages will get lost. If you
want to forward messages to a remote host that does not support TCP log reception, UDP can be used instead.
On the remote host, the file /etc/rsyslog.d/remote.conf must be used to enable log reception. The default
remote.conf file on SLES contains many examples that show how to set up advanced remote log servers, on which
it is even possible to use TLS for securing the message stream. The most important parameters that should be
considered are the following:
$ModLoad imtcp.so
$TCPServerRun 514
$ModLoad imudp.so
$UDPServerRun 514
These lines enable the TCP as well as the UDP log reception modules and tell your server to listen for incoming
messages on port 514.
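Once both sides are configured, you can generate a test message with the logger command. A sketch; the host name is a placeholder, and the -n, -P, and -d options require the util-linux version of logger:

```shell
# Send a test message to the local syslog
logger "local logging test"

# Send a test message straight to a remote syslog server over UDP port 514
logger -n remotehost.example.com -P 514 -d "remote logging test"

# On the remote host, the message should then appear in /var/log/messages
```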


Working with journal
On SLES 12, apart from rsyslog, journald is used for logging as well. The journald service keeps extensive
information about services and other unit files that are managed through systemd (see Chapter 8 for further details).
The information in journald must be considered as an addition to the information that is logged through rsyslog.
By default, rsyslog is configured for receiving journald log messages: the line $ModLoad imuxsock.so takes care of
this. There is little need to configure the reverse and have rsyslog forward messages to journald;
rsyslog should really be considered the central system for logging messages.
The journal is created at the moment that the journald service is started. That means that it won’t survive a
reboot, but the messages are forwarded to rsyslog anyway, so that shouldn’t be a big deal. If you want to make the
journal persistent, you can use the following procedure:
1.	Create a journal directory using mkdir -p -m 2775 /var/log/journal.
2.	Set the appropriate ownership: chown :systemd-journal /var/log/journal.
3.	Restart the journal service using killall -USR1 systemd-journald.

The most important benefit of using a journal is that the journalctl command allows administrators to perform
smart filtering on messages. To start with, an administrator can type the journalctl command, which will just show
all messages, starting with the oldest. So, to see only the last five messages, the command journalctl -n 5 can be
used. To see live messages scrolling by at the moment they are written, type journalctl -f. Also very useful is the
option to filter according to the time the message was written, as, for example, journalctl --since today, which
shows all messages that were written since the start of the day.
To get even more specific information from the system, you can specify time ranges as well, as, for example,
journalctl --since "2014-09-19 23:00:00" --until "2014-09-20 8:00:00". You can also filter for specific
process information only, as in the case of journalctl _SYSTEMD_UNIT=sshd.service, or obtain detailed
information, if you want, by adding -o verbose. And it is possible to make all this very specific, if, for example, you’re
looking for a detailed report on everything that has been logged by the sshd process in a specific date/time range.
To do this, you can use a command such as journalctl --since "2014-09-19 23:00:00" --until "2014-09-20
8:00:00" _SYSTEMD_UNIT=sshd.service -o verbose.
When using journald logging, you should always remember that the journal is cleared at reboot. So, if you try
to show messages that were logged too long ago, it may occur that they no longer exist, because you have rebooted
in the meantime.

Configuring logrotate
On a very busy server, you may find that entries are added to your log files really fast. This poses a risk: your server
may be quickly filled with log messages, which leaves no more place for normal files that have to be created. There are
some solutions to this possible problem.
•	To begin with, the directory /var/log should be on a dedicated partition or logical volume, so that in case too much information is written to the log files, this will never completely fill your server's file system.
•	Always include the $RepeatedMsgReduction setting in rsyslog.conf. It will ensure that the volume of messages that are repeatedly written is reduced.
•	Another solution that you can use to prevent your server from being filled up completely by log files that grow too big is logrotate. The logrotate command runs as a cron job by default once a day from /etc/cron.daily, and it helps you to define a policy whereby log files that grow beyond a certain age or size are rotated.


Rotating a log file basically means that the old log file is closed and a new log file is opened. In most cases,
logrotate keeps a number of the old logged files, often stored as a compressed file on disk. In the logrotate
configuration, you can define how exactly you want to handle the rotation of log files.
The configuration of logrotate is spread between two different locations. The main logrotate file is
/etc/logrotate.conf. In this file, some generic parameters are stored, as well as specific parameters that define how
specific files should be handled.
The logrotate configuration for specific files is stored in the directory /etc/logrotate.d. These files are
typically put there when you install the service, but you can modify them as you like. The logrotate file for the
apache2 service provides a good example that you can use, if you want to create your own logrotate file. You can see
part of its contents in Listing 5-6.
Listing 5-6. Sample logrotate Configuration File
/var/log/apache2/error_log {
    compress
    dateext
    maxage 365
    rotate 99
    size=+1024k
    notifempty
    missingok
    create 644 root root
    postrotate
        /usr/bin/systemctl reload apache2.service
    endscript
}

/var/log/apache2/suexec.log {
    compress
    dateext
    maxage 365
    rotate 99
    size=+1024k
    notifempty
    missingok
    create 644 root root
    postrotate
        /usr/bin/systemctl reload apache2.service
    endscript
}
You can see that the contents of the configuration is pretty straightforward. It tells which files should be rotated
and under which conditions. This configuration shows that the maximum age of a file is set to 365 days, after
which a rotation will follow, and that a file is also rotated once it grows beyond 1,024KB. logrotate will keep a
maximum of 99 rotated log files, which allows administrators to go a long way back in time.


EXERCISE 5-7. CONFIGURING LOGGING
In this exercise, you’ll learn how to configure logging on your server. You’ll first set up rsyslogd to send all
messages that relate to authentication to the /var/log/auth file. Next, you’ll set up logrotate to rotate this
file on a daily basis and keep just one old version of the file.
1.	Open a terminal and make sure that you have root permissions, by opening a root shell using su -.
2.	Open the /etc/rsyslog.conf file in an editor and scroll down to the RULES section. Add the line authpriv.* /var/log/auth to the end of the file.
3.	Close the file and make sure to save the changes. Now, use the command systemctl restart rsyslog to ensure that rsyslog uses the new configuration.
4.	Use the Ctrl+Alt+F4 key sequence to log in as a user. It doesn't really matter which user account you're using for this.
5.	Switch back to the graphical user interface using Ctrl+Alt+F1 (or Ctrl+Alt+F7, depending on your configuration). From here, use tail -f /var/log/auth. This should show the contents of the freshly created file that contains authentication messages. Use Ctrl+C to close tail -f.
6.	Create a file with the name /etc/logrotate.d/auth and make sure it has the following contents:
	/var/log/auth {
	    daily
	    rotate 1
	    compress
	}
7.	Normally, you would have to wait a day until logrotate is started from /etc/cron.daily. As an alternative, you can run it from the command line, using the following command: /usr/sbin/logrotate /etc/logrotate.conf.
8.	After one day, check the contents of the /var/log directory. You should see the rotated /var/log/auth file.

Summary
In this chapter, you’ve learned how to perform important daily system administration tasks. You’ve read how to work
with printers and manage software. Next, you have learned how to manage processes and jobs and how to use cron to
run processes at specific times. Following that, you have read how to configure logging on your server, to ensure that
you can always find what has gone wrong and why.


Chapter 6

Hardening SUSE Linux
Before you start offering real services on your server, it’s a good idea to think about how you can harden your SUSE
Linux installation. In this chapter, you’ll learn how to do that. We’ll first have a look at the YaST Security Center and
Hardening module, which provides some common options that are easy to apply. Next, you’ll learn how to set up
a sudo configuration that allows you to delegate tasks to specific users. Following that, you’ll read about how to
configure the Linux Audit Framework to get more detailed events about modifications that have been applied to your
server. Then, you’ll read how a pluggable authentication module (PAM) is used to make the authentication procedure
modular. Last, you’ll read about SELinux, an advanced way of profiling your system, so that only specific operations
are allowed, while everything that is not specifically allowed will be denied.
In this chapter, the following topics are discussed:
•	Using the YaST Security Center and Hardening
•	Working with sudo
•	The Linux Audit Framework
•	Understanding PAM
•	Configuring SELinux

SUSE offers different solutions for hardening a server. To give you a quick head start, the YaST Security Center
and Hardening module is offered. After configuring basic settings using this module, you can use sudo to define which
administrators are allowed access to which tasks.

Using the YaST Security Center and Hardening
The YaST Security Center and Hardening module was developed to provide an easy interface for setting up a secure server. It contains a list of security settings that you can walk through to verify and set security parameters on your server. After starting the module, you'll see the Security Overview section (see Figure 6-1). From here, you can easily see the current status of a collection of security settings and modify them. For each setting, you can change the current status or get a description of what exactly the setting does on your server.


Figure 6-1. The Security Overview gives easy access to a list of security settings
In the Predefined Security Configurations section, you can choose between a few default security settings templates: Home Workstation, Networked Workstation, and Network Server, or you can use Custom Settings. The templates cover a few options related to booting, passwords, and file permissions. Home Workstation was created for computers that are not connected to any network (!); Networked Workstation is for end-user computers that are connected to a network; and Network Server offers the highest level of security settings. Don't use these options; they are too basic.
The Password Settings options provide access to a few of the password-related options that can be set in the /etc/login.defs file. You can set a minimum password length, specify the password encryption method, which by default is set to the robust SHA-512 algorithm, and set a minimum and maximum password age. The settings specified here are written to /etc/login.defs and will be applied to all new users that are created from that moment on.
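These YaST options correspond to variables in /etc/login.defs. As a sketch of what the file can contain (the values shown here are examples, not the SLES defaults):

```
# /etc/login.defs excerpts (illustrative values)
PASS_MAX_DAYS   90
PASS_MIN_DAYS   1
PASS_MIN_LEN    8
ENCRYPT_METHOD  SHA512
```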
Under Boot Settings, you find a few permissions that relate to booting. These options are old and obsolete. In the Login Settings section, you can specify the delay after an incorrect login attempt and whether or not you want to allow remote graphical login. User Addition specifies the minimum and maximum user IDs that are available for the creation of users. In the Miscellaneous Settings section, you find a few settings that are of interest (see Figure 6-2).


Figure 6-2. Miscellaneous security settings
The File Permissions option allows you to select one of the file permissions templates available in /etc/permissions.easy, permissions.local, permissions.secure, and permissions.paranoid. In these files, standard permissions are set for a number of default configuration files, and by selecting one of them from YaST, you can easily set the permissions for many files on your computer. Of the other options on this tab, the Magic SysRq Keys option is interesting. These keys give access to some debugging and advanced administration options. By default, they can be used only by the root user, but having them enabled poses an increased risk. If enabled, the root user can use the command echo b > /proc/sysrq-trigger to write a "reset" trigger to sysrq, and other nasty options are available as well. In general, having these options enabled is useful but dangerous, which is why you might consider switching them off.
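Outside YaST, the same effect can be achieved through the kernel.sysrq sysctl. A sketch, with an illustrative drop-in file name:

```
# /etc/sysctl.d/90-sysrq.conf (illustrative file name)
# 0 disables all Magic SysRq functions; 1 enables them all
kernel.sysrq = 0
```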

Working with sudo
By default, on Linux, there are two kinds of users. The user root is the privileged user; there are no limitations whatsoever for root. All other users, including the system users employed by the services that are configured to run on your server, have limited access to the system. To give normal users access to administrator tasks, there is a middle way: using sudo. With sudo, you can define a list of tasks that can be performed by a specific list of users. That means that you can create a configuration in which user linda has permission to shut down the server (which is normally a task that only root can do), but nothing else. In addition, using sudo, you can also specify that a user can run a task as any other user. For example, you can have user bob execute a tar job that creates a backup of your Oracle databases as the oracle user (an example of a user that normally can't log in directly).
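The bob/Oracle example could be implemented with a sudoers rule like the following sketch (the user name, group, and command path are illustrative):

```
# bob may run tar as user oracle, from any host
bob ALL=(oracle) /bin/tar
```

bob would then invoke the backup as, for example, sudo -u oracle /bin/tar cf /backup/oracle.tar /var/lib/oracle.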

Understanding sudo
Working with sudo offers a few important benefits from a security perspective. First, all commands that are executed through sudo can be logged; output logging only works if Defaults log_output is enabled in the sudoers file. This makes it possible to trace who has done what at which specific moment. Also, sudoreplay can be used to replay a sudo session, which allows administrators to see exactly what has happened during that session.


To run a command through sudo, the user prefixes the command with sudo, as in sudo ip a. The user will first see a warning and is next prompted for his or her password. After entering the correct password, the command is executed (see Listing 6-1).
Listing 6-1. Using sudo to Run Commands
linda@ldap:~>sudo ip a
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
linda's password:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:82:a5:c9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.180/24 brd 192.168.4.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe82:a5c9/64 scope link
       valid_lft forever preferred_lft forever
linda@ldap:~>
To create a sudo configuration, you can use YaST as well as visudo. In this section, I'll explain how to use visudo to create the configuration, because YaST doesn't add much to the ease of configuring sudo rules. In both cases, the configuration is written to the file /etc/sudoers, a file that should never be modified directly!
When creating a sudo configuration, the ultimate goal is to create rules such as the following:
linda ALL=/sbin/shutdown -h now
This rule allows user linda to execute the command /sbin/shutdown -h now from all computers. When working with sudo, it is useful to work with Linux group names as well. The following line gives an example of how you could do that:
%users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom
This line ensures that all members of the group users (which typically includes all users defined on the system) have permission to mount and unmount the /mnt/cdrom device. Note that with this command, an argument is specified, which means that users can only execute the command exactly as specified here; no other arguments are allowed.


Creating sudo Configuration Lines
The generic structure of any line in /etc/sudoers is
WHO FROM_WHERE=(AS_WHOM) WHICH_COMMANDS
The WHO part specifies the user or group that is allowed to run the commands. As an alternative, user aliases
can be used as well. A User_Alias defines a group of users who are allowed to perform a specific task, but using
user aliases is not very common, as Linux groups can be used instead. If you want to define a user alias, it should be
defined as follows:
User_Alias ADMINS = linda, denise
After defining the alias, it can be referred to in a sudo rule as
ADMINS ALL=ALL
The FROM_WHERE part in /etc/sudoers specifies the hostname from which the user is allowed. In many cases,
it is just set to ALL, which implements no host restrictions. If hostnames are used, these hostnames must be resolvable
by using DNS or another hostname resolution mechanism. You can define a Host_Alias to make it easy to refer to a
group of servers. The Host_Alias would look like the following:
Host_Alias WEBSERVERS = web1, web2
To use this alias, you would use a sudo rule such as
ADMINS WEBSERVERS=/usr/sbin/httpd
The AS_WHOM part is optional and specifies the user as whom the command is supposed to be executed. If nothing is specified here, the command will be executed as root. Optionally, you can specify another target user here. For this part also, an alias can be defined: the RunAs_Alias.
The final part specifies the commands that are allowed. This can be mentioned as a list of commands, and
alternatively, a command alias can be used. Such an alias would look as follows:
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/zypper
And to use it in a rule, the rule would look like the following:
linda ALL=SOFTWARE
When configuring sudo on SUSE, there are two lines in particular that deserve attention. They are the following:
Defaults targetpw
ALL ALL = (ALL) ALL
The line Defaults targetpw asks for the password of user root in all cases. Next, the ALL ALL = (ALL) ALL line allows any user to run any command from any host as any target user. There's nothing wrong with this line, as long as the Defaults targetpw line is present. If that line is removed, the second line becomes very dangerous. For that reason, to prevent any errors from occurring, make sure to remove (or comment out) both lines before doing anything else in sudo.


Working in a sudo Shell
In many environments, root login is not permitted, and all root tasks have to be executed using sudo. That’s not always
seen as convenient, with the result that many administrators like using sudo -i. This opens a root shell, from which
administrators can run any command they like, without passing through sudo every single time.
From an administrator perspective, using sudo -i is convenient, but from a security perspective, it's not very secure. If you want to disable access to shells through sudo -i, as well as the option to use sudo su to open a root shell, you can define command aliases for which access is then denied using the exclamation point. The following three lines give access to all commands to all users who are members of the group wheel, with the exception of the commands listed in the NSHELLS and NSU command aliases. When using these, make sure that the NSHELLS alias includes all shells that are installed on your server!
Cmnd_Alias NSHELLS = /bin/sh, /bin/bash
Cmnd_Alias NSU = /bin/su

%wheel ALL=ALL, !NSHELLS, !NSU

Replaying sudo Sessions
An important benefit of working with sudo is that sessions are logged, and it is easy to replay a sudo session; note that this only works if the log_output entry is uncommented in the sudoers file. To replay a session, you can use the sudoreplay command. The command sudoreplay -l gives an overview of all sudo commands that have been issued and the time when they were used. For each of these commands, a session ID is listed in the TSID field. Using this session ID, a session can be replayed, which shows all output that occurred in that sudo session. In Listing 6-2, you can see how sudoreplay can be used.
Listing 6-2. Using sudoreplay to Get Information About a sudo Session
ldap:~ # sudoreplay -l
Jul 6 06:54:59 2014 : linda : TTY=/dev/pts/0 ; CWD=/home/linda ; USER=root ; TSID=000001 ;
COMMAND=/sbin/ip a
ldap:~ # sudoreplay 000001
Replaying sudo session: /sbin/ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:82:a5:c9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.180/24 brd 192.168.4.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe82:a5c9/64 scope link
       valid_lft forever preferred_lft forever


EXERCISE 6-1. CREATING A SUDO CONFIGURATION
1.	Log in as root and enter the command visudo.
2.	Locate the lines Defaults targetpw and ALL ALL = (ALL) ALL and put a comment sign (#) in front of both of them.
3.	Add a command alias: Cmnd_Alias NETWORK = /usr/sbin/wicked, /sbin/ip.
4.	Add the following line to allow all users who are members of the group users to run commands from the NETWORK alias: %users ALL = NETWORK.
5.	Open a shell as any user on your system. Type sudo /sbin/ip addr show to run the ip addr show command with root permissions.
6.	Open the sudo configuration again, using visudo.
7.	Include the following lines to allow access to all commands, except shells and su, for all users who are members of the group wheel:

Cmnd_Alias NSHELLS = /bin/sh, /bin/bash
Cmnd_Alias NSU = /bin/su

%wheel ALL=ALL, !NSHELLS, !NSU

8.	Create a user, linda, and make her a member of the group wheel, using useradd linda; usermod -aG wheel linda.
9.	Log in as linda and try to run the command sudo -i. You'll notice that it doesn't give you access to a root shell. It does not disallow the use of visudo, though, but at this point, you should be able to understand how that can be fixed.

The Linux Audit Framework
Logging is a part of hardening a server. By setting up logging, you make sure that you don’t miss information about vital
security incidents that have occurred. In Chapter 5, you have already learned how to configure logging on your server.
In this section, we’ll go one step further and talk about the Linux Audit Framework. The Linux Audit Framework allows
administrators to set up the system for logging detailed messages, using the audit daemon (auditd).

Configuring Auditing from YaST
To configure auditing, you can start the Linux Audit Framework module from YaST ➤ Security and Users. After
activating this module, you’ll see a bit of a cryptic message, indicating the following:

The “apparmor” kernel module is loaded. The kernel uses a running audit daemon to log audit
events to /var/log/audit/audit.log (default). Do you want to start the daemon now?
This message is talking about the AppArmor kernel module, because auditing takes care of messages generated
by AppArmor but also messages generated by SELinux and other services that use the libaudit library, such as PAM
login events.


If you just skip the part about the AppArmor kernel module (which isn't entirely relevant here), what matters is the part where it says that the audit daemon has to be started. Just select Yes to make sure that it is started, and it will start logging messages to /var/log/audit/audit.log.
You'll now see a screen from which you can configure the different aspects of the working of Linux auditing. The screen that you see in Figure 6-3 allows you to specify log file properties. In the General Settings section, you specify what will be logged and where. The default audit log file is /var/log/audit/audit.log, and there is no reason to change that. Next, you must specify the format. The RAW format is the only format that makes sense here: it makes sure that messages are logged exactly the way they were generated by the kernel.

Figure 6-3. Specifying log file properties
The Flush parameter determines how data is written to disk. The default setting of INCREMENTAL flushes a specified number of records at once; use the Frequency parameter to specify how many records that is. Select NONE if you don't want the audit daemon to make an effort to flush data; DATA if you want complete synchronization, where the risk of losing log events is minimized; and SYNC if you want to keep metadata and data fully synchronized, which provides the smallest risk of losing data.
In the Size and Action section, you can specify the maximum size of the log file. By default, it is set to 6MB. When
that size is reached, the Maximum File Size Action determines what will happen. If set to ROTATE, the log file will
be rotated, and the maximum number of log files that is kept is specified in the Number of Log Files option. Other
options after reaching the maximum log file size are the following:
•	Ignore: Nothing will happen if the maximum size is reached.
•	Syslog: A message is sent to syslog, indicating that the maximum size has been reached.
•	Suspend: The audit daemon stops writing messages to disk.
•	Keep_logs: Logs are rotated, but without a limit on the maximum number of files that is kept.
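The YaST settings discussed here map onto keys in the audit daemon's configuration file, /etc/audit/auditd.conf. A sketch with illustrative values:

```
# /etc/audit/auditd.conf excerpts (illustrative values)
log_file = /var/log/audit/audit.log
log_format = RAW
flush = INCREMENTAL
freq = 20
max_log_file = 6
max_log_file_action = ROTATE
num_logs = 5
```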

In the Computer Names section, you can specify if and, if so, how computer names are used in log messages. As
auditing is typically set up locally, the default setting is NONE, but you can select to log a hostname, the FQDN, or a
user-specified name to the log file.


On the Dispatcher tab (see Figure 6-4), you can specify how the dispatcher is used to handle log messages.
The dispatcher is a program that is started with the audit daemon. It takes audit events and sends them to child
programs that can analyze the events in real time. These child programs are specified in /etc/audisp/plugins.d
and make sure that action is taken if something happens to the audit daemon. One of the dispatcher modules, for
example, is syslog, which makes sure that events related to the audit daemon are treated by syslogd as well.

Figure 6-4. Configuring the dispatcher
On the Dispatcher tab, you can specify which program is used as the default dispatcher program and how
that program should communicate to its child programs. Normally, there is no reason to change this program. The
communication method, by default, is set to lossy, which means that a 128KB queue is filled, and once this queue is
full, events will be discarded. Use lossless for fewer chances of losing events handled by the dispatcher.
Auditd can also monitor the availability of disk space on the log partition. To do this, use the settings on the Disk Space tab, which you can see in Figure 6-5. On this tab, you can set a warning threshold and a minimum threshold. On reaching the warning threshold, a message is written to syslog; on reaching the minimum threshold, the audit system shuts down.


Figure 6-5. Specifying what to do when the disk is full
On the Rules for auditctl tab, you can specify how additional rules are processed by the Linux Auditing
Framework. To start with, you have to set Auditing to enabled, and next you can edit the contents of the audit.rules
file to define additional rules for auditing. YaST provides direct access to this file, which, of course, can be manually
edited as well. The rules specified here contain parameters that are directly passed to the auditctl process. Read the
man page for this process for more information on how to use these options.
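As a sketch of what such rules can look like (the watched paths and key names are illustrative), entries in audit.rules use auditctl syntax:

```
# Log every write or attribute change to /etc/passwd, tagged with a key
-w /etc/passwd -p wa -k passwd-changes

# Log every execution of the mount command
-w /bin/mount -p x -k mount-exec
```

The -k key makes it easy to search the resulting events later, for example with ausearch -k passwd-changes.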
After configuring the auditing system, you can see audit messages in the /var/log/audit/audit.log file. You'll notice that these messages are hard to read in some cases, because they have been generated by the kernel. When working with some subsystems, in particular SELinux, they provide an important source of information on what is happening on your server.

Understanding PAM
On Linux, many programs require access to information that relates to authentication. To make accessing this
information easy, Linux uses pluggable authentication modules (PAMs). The authentication-related programs
are configured to use the libpam.so and libpam_misc.so libraries. These libraries tell the program to look in the
/etc/pam.d directory for a corresponding configuration file. From this directory, different authentication plug-ins
from the directory /lib64/security can be included to implement specific login behavior (see Figure 6-6).


Figure 6-6. PAM schematic overview
In Exercise 6-2, you’ll explore what the PAM configuration for the su program looks like.

EXERCISE 6-2. EXPLORING PAM CONFIGURATION
1.	Open a root shell and type ldd $(which su). This command lists the libraries that are used by the su command. You'll see that they include libpam and libpam_misc.
2.	Type cat /etc/pam.d/su. This shows the following configuration (the contents are explained after this exercise):

ldap:/etc/pam.d # cat su
#%PAM-1.0
auth     sufficient   pam_rootok.so
auth     include      common-auth
account  sufficient   pam_rootok.so
account  include      common-account
password include      common-password
session  include      common-session
session  optional     pam_xauth.so

3.	From the previous step of the exercise, you have seen that some common files are included. Type cat /etc/pam.d/common-auth to show the contents of one of these common files. It will show you something like the following:

ldap:/etc/pam.d # cat common-auth
#%PAM-1.0
#
# This file is autogenerated by pam-config. All changes
# will be overwritten.
#
# Authentication-related modules common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.
#
auth  required    pam_env.so
auth  optional    pam_gnome_keyring.so
auth  sufficient  pam_unix.so  try_first_pass
auth  required    pam_sss.so   use_first_pass

4.	Use cd /lib64/security and type ls to display all the PAM modules that are in this directory. You'll see that all modules that the configuration files refer to are listed here.
5.	Type less /usr/share/doc/packages/pam/Linux-PAM_SAG.txt. This gives access to the PAM System Administrator Guide, in which all the PAM modules are explained, including the parameters that can be used when using these modules.

PAM Configuration Files
In the preceding exercise, you have looked at the contents of a PAM configuration file. In each configuration file, the
authentication process is defined in four different phases. In each phase, different PAM files can be included, and
there are different ways to include the PAMs.
The following phases are defined in authentication:
•	auth: This is where the authentication is initialized.
•	account: This refers to the phase in authentication where account settings are checked.
•	password: This is for checking password-related settings.
•	session: This defines what happens after authentication, when the user wants to access specific resources.

A PAM file can be called in different ways:
•	required: The conditions that are implemented by this PAM library file must be met. If this is not the case, the rest of the procedure is followed, but access will be denied.
•	requisite: With this, the conditions implemented by the PAM library file must also be met. If this is not the case, the authentication procedure stops immediately.
•	sufficient: Conditions imposed by this PAM file don't have to be met, but if they are met, that is sufficient, and further PAM files in this phase of the authentication don't have to be processed anymore. This is useful if you want users to authenticate on an LDAP server first but, if that is not successful, to continue with local authentication. This would look as follows:

...
auth  sufficient  pam_ldap.so
auth  required    pam_unix2.so
...

•	optional: Used to include functionality that typically doesn't deal with the real authentication process. Use this, for example, to display the contents of a text file, or anything else that is nonessential.
•	include: This is used to include the contents of another PAM configuration file.

In Exercise 6-3, you’ll change the PAM configuration to include the pam_securetty file. This file can be used
to define the names of terminals (TTYs) that are considered to be secure and where the user root can log in. In the
exercise, you’ll first use su on tty4 (virtual terminal 4, which can be accessed by using the Ctrl+Alt+F4 key sequence)
when the pam_securetty.so file is not included. Next, you will include the pam_securetty.so file in the login
sequence and modify the contents of /etc/securetty to disable root access on tty4.

EXERCISE 6-3. USING PAM TO LIMIT SU
1.	Use the key sequence Ctrl+Alt+F4. On the login prompt, log in as user linda and type su - to become root. Enter the root password. You will authenticate. Type exit twice to log out both from the su session and from the session in which you are user linda.
2.	Open a root shell and type vim /etc/pam.d/su.
3.	As the second entry (make sure it is before the line where common-auth is included), add the line auth required pam_securetty.so.
4.	Open the file /etc/securetty in an editor and make sure the line tty4 is removed.
5.	Repeat step 1 of this exercise. It will no longer work.

Understanding nsswitch
PAM is related to authentication and specifies where the authentication process should look for user-related
information and more. PAM doesn’t help you to get information from nonlocal sources when they are not directly
related to authentication. For that purpose, there is nsswitch, which uses the /etc/nsswitch.conf configuration file.
In Listing 6-3, you can see what its contents looks like.


Listing 6-3. Sample /etc/nsswitch.conf Contents
passwd:         files sss
group:          files sss

hosts:          files dns
networks:       files dns

services:       files
protocols:      files
rpc:            files
ethers:         files
netmasks:       files
netgroup:       files
publickey:      files

bootparams:     files
automount:      files
aliases:        files
passwd_compat:  files
group_compat:   files

By default, all information about users, but also about network hosts and much more, is looked up in local
configuration files. If you want utilities that require access to such information to look beyond, you will tell them, in
the /etc/nsswitch.conf file, where to look. You can see, for instance, that password-related information is looked for
in files, but also in sss. This refers to sssd, which is used to get user information from network authentication sources.
(sssd configuration is discussed in more detail in the section about LDAP in Chapter 11.) In addition, you can see that
for host- and network-related information, first the local configuration files are checked, after which DNS is consulted.
The information in /etc/nsswitch.conf is used by all utilities that require access to dispersed information but that don't deal with authentication. Two examples of such utilities are id and host. The id utility can show user-related information and get information about that user from any source that is specified in /etc/nsswitch.conf. The host utility will retrieve host information from local configuration files and also from DNS (if /etc/nsswitch.conf indicates that DNS should be used). In the example in Listing 6-4, you can see how the id command shows information about a user, linda, that is defined in LDAP and how host gets information from DNS.
Listing 6-4. Using nsswitch.conf-Related Information
ldap:~ # id linda
uid=1001(linda) gid=100(users) groups=100(users)
ldap:~ # host www.sandervanvugt.nl
www.sandervanvugt.nl has address 213.124.112.46
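You can also test nsswitch lookups directly with the getent utility, which queries a given database through the sources listed in /etc/nsswitch.conf, in order. A short sketch:

```shell
# Look up user root through the passwd databases
# (local files first, then any additional sources such as sss)
getent passwd root

# Look up group root through the group databases, following the same order
getent group root
```

Because getent goes through the same name service switch as id and host, it is a convenient way to verify that a network source such as LDAP is actually being consulted.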

Securing SLES 12 with SELinux
The Linux kernel offers a security framework in which additional security restrictions can be imposed. This
framework can be used to filter activity on the lowest level, by allowing or denying system calls, as they are issued by
the Linux kernel.
In SUSE Linux Enterprise Server 12, there are two different solutions that implement this kind of security. First,
there is AppArmor, which has been the default solution in SUSE Linux Enterprise Server since Novell purchased the
company that developed AppArmor back in 2006. A recent alternative is SELinux. As the AppArmor code hasn’t really
changed since it was included in 2006, we’ll just have a look at the way that SELinux can be configured.


■■Note The current state of kernel-level security framework support in SLES is rather unclear. When I wrote this, I was
working on RC1 of SLES 12, which normally contains all features that will be included in the final release. In this version
of the software, serious functionality was missing from the AppArmor implementation, and at the same time, the SELinux
code wasn’t complete either. Because SUSE is planning to offer AppArmor support for current customers and seems to be
gradually shifting toward SELinux, I’ve chosen to discuss only SELinux configuration in this section, even if it is far from
complete in the current release of the software. For that reason, you should realize that everything discussed from here
on isn't about stable software but about future directions, which may very well have been implemented by the time you're
reading this.

SELinux Background
Security in Linux was inherited from UNIX security. UNIX security is basically oriented toward file permissions: users
are allowed to read, write, or execute files, and that’s all in the original UNIX permission scheme. An example explains
why in some cases that isn’t enough.

One morning, I found out that my server was hacked. The server was running SLES 10 at the time
and was fully patched up to the latest level. A firewall was configured on it and no unnecessary
services were offered by this server. After further analyzing the hack, it became clear that the hacker
had come in through a flaky PHP script that was a part of one of the Apache virtual hosts that was
running on this server. Through this script, the intruder had managed to get access to a shell, using
the wwwrun account that was used by the webserver. Using the legal permissions of this account,
the intruder had created several scripts in the /var/tmp and /tmp directories that were part of a
botnet launching Distributed Denial of Service attacks against multiple servers.
The interesting lesson that can be drawn from this hack is that nothing really was wrong on this server. The main
issue that the hacker had taken advantage of was the fact that every user is allowed to create and run scripts in the
/tmp and /var/tmp directories. At the time that the UNIX security model was originally developed, this was good
enough. Now that servers are connected to the Internet, it’s no longer good enough. That’s why the kernel security
framework has been developed.
There are two solutions built on the Linux kernel security framework: AppArmor and SELinux. SUSE has
been offering AppArmor since SUSE Linux Enterprise Server 9 and has recently introduced support for SELinux as
well. This is because SELinux has become the market standard, and in many environments, SELinux security is a
requirement for Linux servers.
The basic principle of SELinux is that all system calls are blocked by default on a system on which SELinux is
enabled. That means that if nothing else is done, the kernel will generate a kernel panic at a very early stage, and
everything halts. Everything that is allowed on a server is defined in the SELinux policy, in which many security rules
are set to strictly define what is allowed and what isn’t.
To define the rules in the policy, SELinux uses labels on different objects on the system. Labels can be set to files,
processes, ports, and users. These labels define what kind of object it is, and if an object with a specific kind of label
needs access to another object, this is only allowed if in the policy a rule exists allowing such access. This means that,
by default, an intruder that breaks in via the web server would be in an environment that is labeled as the web server
environment and would, therefore, never get access to either the /tmp or the /var/tmp directory.


Understanding SELinux Components
Before starting the configuration of SELinux, you should know a bit about how SELinux is organized. Three components
play a role:
•	The security framework in the Linux kernel
•	The SELinux libraries and binaries
•	The SELinux policy

Since SUSE Linux Enterprise 11 SP2, SLES comes with standard support for SELinux in the Linux kernel and
the tools that are needed to manage the SELinux solution. You will shortly learn how to install these tools on your
server. The most important part of the work of the administrator with regard to SELinux is managing the policy.
In the SELinux policy, security labels are applied to different objects on a Linux server. These objects typically
are users, ports, processes, and files. Using these security labels, rules are created that define what is and what isn’t
allowed on a server. Remember: By default, SELinux denies all syscalls, and by creating the appropriate rules, you can
again allow the syscalls that you trust. Rules, therefore, should exist for all programs that you want to use on a system.
Alternatively, you might want to configure parts of a system to run in unconfined mode, which means that specific
ports, programs, users, files, and directories are not protected by SELinux at all. This mode is useful if you only want
to use SELinux to protect some essential services and don't care much about other services. To get a completely
confined system, however, you should avoid it.
To ensure the appropriate protection for your system, you need an SELinux policy. This must be a tailor-made
policy in which all files are provided with a label, and all services and users have a security label as well, to express
which files and directories can be accessed by which user and process on the server. Developing such a policy
requires technical expertise and a significant amount of work.
At the time this was written, the SELinux framework was supported, and there was no policy on SUSE Linux
Enterprise. That means that you will have to create your own policy or use one of the standard policies that are
available for free. Do be aware, however, that a freely available SELinux policy might work on your server, but it will
never offer complete protection for all aspects of security on your server! Also, SUSE does not support these policies.
You may, however, contact SUSE to discuss your options.

The Policy
As mentioned, the policy is the key component in SELinux. It defines rules that specify which objects can access
which files, directories, ports, and processes on a system. To do this, a security context is defined for all of these. On an
SELinux system on which the policy has been applied to label the file system, you can use the ls -Z command on any
directory to find the security context for the files in that directory. Listing 6-5 shows the security context settings for
the directories in the / directory of an SLES system with an SELinux labeled file system.
Listing 6-5. Showing Security Context Settings Using ls -Z
mmi:/ # ls -Z
system_u:object_r:default_t .autorelabel
system_u:object_r:file_t .viminfo
system_u:object_r:bin_t bin
system_u:object_r:boot_t boot
system_u:object_r:device_t dev
system_u:object_r:etc_t etc
system_u:object_r:home_root_t home
system_u:object_r:lib_t lib
system_u:object_r:lib_t lib64
system_u:object_r:lost_found_t lost+found
system_u:object_r:mnt_t media
system_u:object_r:mnt_t mnt
system_u:object_r:usr_t opt
system_u:object_r:proc_t proc
system_u:object_r:default_t root
system_u:object_r:bin_t sbin
system_u:object_r:security_t selinux
system_u:object_r:var_t srv
system_u:object_r:sysfs_t sys
system_u:object_r:tmp_t tmp
system_u:object_r:usr_t usr
system_u:object_r:var_t var
system_u:object_r:httpd_sys_content_t www
The most important element of the security context is the context type. This is the part of the security context that
ends in _t. It tells SELinux which kind of access is allowed to the object. In the policy, rules are specified to define
which type of user or which type of role has access to which type of context. For example, this can be defined by using
a rule such as the following:
allow user_t bin_t:file {read execute getattr};
This sample rule states that the user who has the context type user_t (this user is referred to as the source
object) is allowed to access the file with the context type bin_t (the target), using the permissions read, execute, and
getattr. Later in this section, you will learn how to use a standard policy to apply this kind of security context settings
to the file system on your server.
The standard policy that you are going to use contains a huge amount of rules. To make it more manageable,
policies are often applied as modular policies. This allows the administrator to work with independent modules that
allow him or her to switch protection on or off for different parts of the system. When compiling the policy for your
system, you will have a choice to work either with a modular policy or with a monolithic policy, in which one huge
policy is used to protect everything on your system. It is strongly recommended that you use a modular policy and not
a monolithic policy. Modular policies are much easier to manage.

Installing SELinux on SUSE Linux Enterprise 12 FCS
SELinux still doesn’t come as the default option for security. If you want to use it, you’ll have to install it yourself.
This section discusses how to do that. Configuring SELinux on SLES 12 consists of three phases:
•	Install all SELinux packages
•	Enable GRUB boot options
•	Install a policy

Installing SELinux Packages and Modifying GRUB
The easiest way to make sure that all SELinux components are installed is by using YaST2. The following procedure
outlines what to do on an installed SLES 12 server.
	1.	Log in to your server as root and start YaST2.
	2.	Select Software ➤ Software Management.
	3.	Select Filter ➤ Patterns and select the entire C/C++ Compiler and Tools software category for installation.
	4.	Select Filter ➤ Search and make sure that Search in Name, Keywords and Summary is selected. Now enter the keyword "selinux" and click Search. You now see a list of packages.
	5.	Make sure that all packages you've found are selected, then click Accept to install them.

After installing the SELinux packages, you have to modify the GRUB2 boot loader. To do this, from YaST, select
System ➤ Boot Loader. On the Kernel Parameters tab, select the line that contains the Optional Kernel Command
Line Parameters and add the following to the end of that line: security=selinux selinux=1 enforcing=0.
The preceding options are used for the following purposes:
•	security=selinux: This option tells the kernel to use SELinux and not AppArmor.
•	selinux=1: This option switches on SELinux.
•	enforcing=0: This option puts SELinux in permissive mode. In this mode, SELinux is fully functional but doesn't enforce any of the security settings in the policy. Use this mode while configuring your system, and once it is fully operational, change it to enforcing=1, to switch on SELinux protection on your server.

After installing the SELinux packages and enabling the SELinux GRUB boot options, reboot your server to activate
the configuration. You may notice that while rebooting, an error is displayed, mentioning that the policy file could not
be loaded. At this point in the configuration, the error is normal, and you can safely ignore it. It will disappear once
you have compiled the policy.
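If you prefer to make the same change by hand rather than through YaST, the edit amounts to appending those options inside the quotes of the kernel command line variable in the GRUB2 defaults file. The sketch below works on a demonstration copy; on a real SLES 12 system the file is /etc/default/grub, and the sample GRUB_CMDLINE_LINUX_DEFAULT value is made up for illustration:

```shell
# Work on a demo copy; on a real system, edit /etc/default/grub instead.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/sda1 splash=silent quiet"\n' > /tmp/grub-demo

# Append the SELinux options inside the closing quote of the variable.
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT=".*\)"/\1 security=selinux selinux=1 enforcing=0"/' /tmp/grub-demo
cat /tmp/grub-demo
```

After editing the real file, regenerate the boot configuration with grub2-mkconfig -o /boot/grub2/grub.cfg, then reboot.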

Compiling the Policy
As mentioned, the policy is an essential component of SELinux, but no default policy is available for SUSE Linux
Enterprise. That means that you’ll have to obtain a policy from somewhere else. The best choice is to get a policy
from the OpenSUSE download site at software.opensuse.org. In the Package Search bar presented at this site, type
“selinux-policy” to get access to a list of all open source policy packages presented at openSUSE.org. Download the
policy for OpenSUSE 13.1 to your server and install it. Do not use the One-click install. Instead, use the rpm command
from the command line to install both packages. Note that you need two packages: the selinux-policy package and
the selinux-policy-targeted package.
After rebooting the system, you now have to perform a few additional tasks to finalize your work. First, open
/etc/passwd with an editor and change the shell of the user nobody to /sbin/nologin. Next, you should use the
command pam-config -a --selinux to make PAM aware of SELinux.
At this point, all prerequisites have been met, and you are ready to start file system labeling. To do this, use the
command restorecon -Rv /. This command starts the /sbin/setfiles command to label all files on your system.
To do this, the input file /etc/selinux/refpolicy/contexts/files/file_contexts is used. Because there currently
is no SELinux policy for SUSE Linux Enterprise, this is a delicate part of the configuration. The file_contexts file has
to match your actual file system as closely as possible; if anything goes wrong, it is likely to go wrong at this point,
and that can lead to a completely unbootable system. If that happens, tune the contents of the file_contexts file to
match the structure of the file system your server is using. Before doing this, make sure to read the rest of this
section, so that you fully understand how context types are applied to files and directories (and don't forget to make
a backup of the file_contexts file before starting). At the end of this section, you'll find tips to help you
troubleshoot SELinux and create a system that fully works with your SELinux policy.

■■Note If while using semanage you receive a message that complains about the user nobody’s home directory, you
can change the login shell of user nobody to /sbin/nologin. This ensures that the user nobody’s settings match the
current policy settings.


After another reboot, SELinux should be operational. To verify this, use the command sestatus -v. It should give
you an output that looks like that in Listing 6-6.
Listing 6-6. Verifying That SELinux Is Functional, by Using sestatus -v, After Labeling the File System
mmi:/ # sestatus -v
SELinux status:              enabled
SELinuxfs mount:             /selinux
Current mode:                permissive
Mode from config file:       permissive
Policy version:              26
Policy from config file:     refpolicy

Process contexts:
Current context:             root:staff_r:staff_t
Init context:                system_u:system_r:init_t
/sbin/mingetty               system_u:system_r:sysadm_t
/usr/sbin/sshd               system_u:system_r:sshd_t

File contexts:
Controlling term:            root:object_r:user_devpts_t
/etc/passwd                  system_u:object_r:etc_t
/etc/shadow                  system_u:object_r:shadow_t
/bin/bash                    system_u:object_r:shell_exec_t
/bin/login                   system_u:object_r:login_exec_t
/bin/sh                      system_u:object_r:bin_t -> system_u:object_r:shell_exec_t
/sbin/agetty                 system_u:object_r:getty_exec_t
/sbin/init                   system_u:object_r:init_exec_t
/sbin/mingetty               system_u:object_r:getty_exec_t
/usr/sbin/sshd               system_u:object_r:sshd_exec_t
/lib/libc.so.6               system_u:object_r:lib_t -> system_u:object_r:lib_t
/lib/ld-linux.so.2           system_u:object_r:lib_t -> system_u:object_r:ld_so_t

Configuring SELinux
At this point, you have a completely functional SELinux system, and it is time to further configure it. In its
current state, SELinux is operational but not in enforcing mode. That means that it doesn't prevent you from doing
anything; it just logs everything that it would be doing if it were in enforcing mode. This is good, because, based on
the log files, you can find out what it would prevent you from doing. As a first test, it is a good idea to put SELinux in enforcing
mode and find out if you can still use your server after doing that. Before doing so, modify GRUB so that it has two
boot options: one where no SELinux configuration is used at all, and one that contains all SELinux configuration. This
makes it easier to revert to a working situation, in case your SELinux configuration doesn’t work. (See Chapter 8 for
detailed instructions on how to modify GRUB parameters).
To do this, open the Kernel Parameters option from the YaST module for GRUB configuration and make sure that
the enforcing=1 option is set as one of the Optional Kernel Command Line Parameters. Reboot your server and see
if it still comes up the way you expect it to. If it does, leave it like that and start modifying the server in such a way that
everything works as expected. Chances are, though, that you won’t even be able to boot the server properly. If that is
the case, switch back to the mode in which SELinux is not enforcing and start tuning your server.


Verifying the Installation
Before you start tuning your server, it is a good idea to verify the SELinux installation. You have already used the
command sestatus -v to view the current mode and process and file contexts. Next, use semanage boolean -l,
which shows a list of all Boolean switches that are available and, at the same time, verifies that you can access the
policy. Listing 6-7 shows a part of the output of this command.
Listing 6-7. Use semanage boolean -l to Get a List of Booleans and Verify Policy Access
mmi:~ # semanage boolean -l
SELinux boolean                         Description

ftp_home_dir                   -> off   ftp_home_dir
mozilla_read_content           -> off   mozilla_read_content
spamassassin_can_network       -> off   spamassassin_can_network
httpd_can_network_relay        -> off   httpd_can_network_relay
openvpn_enable_homedirs        -> off   openvpn_enable_homedirs
gpg_agent_env_file             -> off   gpg_agent_env_file
allow_httpd_awstats_script_anon_write -> off  allow_httpd_awstats_script_anon_write
httpd_can_network_connect_db   -> off   httpd_can_network_connect_db
allow_user_mysql_connect       -> off   allow_user_mysql_connect
allow_ftpd_full_access         -> off   allow_ftpd_full_access
samba_domain_controller        -> off   samba_domain_controller
httpd_enable_cgi               -> off   httpd_enable_cgi
virt_use_nfs                   -> off   virt_use_nfs

Another command that should produce output at this stage is semanage fcontext -l. This command shows the
default file context settings, as provided by the policy (see Listing 6-8 for a partial output of this command).
Listing 6-8. Use semanage fcontext -l to Get File Context Information
/var/run/usb(/.*)?              all files      system_u:object_r:hotplug_var_run_t
/var/run/utmp                   regular file   system_u:object_r:initrc_var_run_t
/var/run/vbe.*                  regular file   system_u:object_r:hald_var_run_t
/var/run/vmnat.*                socket         system_u:object_r:vmware_var_run_t
/var/run/vmware.*               all files      system_u:object_r:vmware_var_run_t
/var/run/vpnc(/.*)?             all files      system_u:object_r:vpnc_var_run_t
/var/run/watchdog\.pid          regular file   system_u:object_r:watchdog_var_run_t
/var/run/winbindd(/.*)?         all files      system_u:object_r:winbind_var_run_t
/var/run/wnn-unix(/.*)          all files      system_u:object_r:canna_var_run_t
/var/run/wpa_supplicant(/.*)?   all files      system_u:object_r:NetworkManager_var_run_t
/var/run/wpa_supplicant-global  socket         system_u:object_r:NetworkManager_var_run_t
/var/run/xdmctl(/.*)?           all files      system_u:object_r:xdm_var_run_t
/var/run/yiff-[0-9]+\.pid       regular file   system_u:object_r:soundd_var_run_t


Managing SELinux
Now that the base SELinux configuration is operational, it's time to start configuring it in a way that secures your
server. First, let's recap what SELinux is all about. In SELinux, an additional set of rules is used to define exactly
which process or user can access which files, directories, or ports. To accomplish this, SELinux applies a context to
every file, directory, process, and port. This context is a security label that defines how this file, directory, process,
or port should be treated. These context labels are used by the SELinux policy, which defines exactly what should
be done with the context labels. By default, the policy blocks all non-default access, which means that, as an
administrator, you have to enable all features that are non-default on your server.

Displaying the Security Context
As mentioned, files, directories, and ports can be labeled. Within each label, different contexts are used. For your
daily administration work, the type context is what you're most interested in, and it's what you'll mostly work with as
an administrator. Many commands allow you to use the -Z option to show a list of current context
settings. In Listing 6-9, you can see what the context settings are for the directories in the root directory.
Listing 6-9. The Default Context for Directories in the Root Directory
[root@hnl /]# ls -Z
dr-xr-xr-x. root root system_u:object_r:bin_t:s0              bin
dr-xr-xr-x. root root system_u:object_r:boot_t:s0             boot
drwxr-xr-x. root root system_u:object_r:cgroup_t:s0           cgroup
drwxr-xr-x+ root root unconfined_u:object_r:default_t:s0      data
drwxr-xr-x. root root system_u:object_r:device_t:s0           dev
drwxr-xr-x. root root system_u:object_r:etc_t:s0              etc
drwxr-xr-x. root root system_u:object_r:home_root_t:s0        home
dr-xr-xr-x. root root system_u:object_r:lib_t:s0              lib
dr-xr-xr-x. root root system_u:object_r:lib_t:s0              lib64
drwx------. root root system_u:object_r:lost_found_t:s0       lost+found
drwxr-xr-x. root root system_u:object_r:mnt_t:s0              media
drwxr-xr-x. root root system_u:object_r:autofs_t:s0           misc
drwxr-xr-x. root root system_u:object_r:mnt_t:s0              mnt
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0      mnt2
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0      mounts
drwxr-xr-x. root root system_u:object_r:autofs_t:s0           net
drwxr-xr-x. root root system_u:object_r:usr_t:s0              opt
dr-xr-xr-x. root root system_u:object_r:proc_t:s0             proc
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0      repo
dr-xr-x---. root root system_u:object_r:admin_home_t:s0       root
dr-xr-xr-x. root root system_u:object_r:bin_t:s0              sbin
drwxr-xr-x. root root system_u:object_r:security_t:s0         selinux
drwxr-xr-x. root root system_u:object_r:var_t:s0              srv
-rw-r--r--. root root unconfined_u:object_r:swapfile_t:s0     swapfile
drwxr-xr-x. root root system_u:object_r:sysfs_t:s0            sys
drwxrwxrwt. root root system_u:object_r:tmp_t:s0              tmp
-rw-r--r--. root root unconfined_u:object_r:etc_runtime_t:s0  tmp2.tar
-rw-r--r--. root root unconfined_u:object_r:etc_runtime_t:s0  tmp.tar
drwxr-xr-x. root root system_u:object_r:usr_t:s0              usr
drwxr-xr-x. root root system_u:object_r:var_t:s0              var


In the preceding listing, you can see the complete context for all directories. It consists of a user, a role, and a type.
The s0 settings indicate the security level in multilevel security (MLS) environments. These environments are not
discussed in this section, so just make sure that it is set to s0, and you'll be fine. The context type defines what kind of
activity is permitted in the directory. Compare, for example, the /root directory, which has the admin_home_t context
type, and the /home directory, which has the home_root_t context type. In the SELinux policy, different kinds of access
are defined for these context types.
Multilevel security is the application of a computer system to process information with incompatible
classifications (i.e., at different security levels), permit access by users with different security clearances and
need-to-know, and prevent users from obtaining access to information for which they lack authorization. There are two
contexts for the use of MLS. One is to refer to a system that is adequate to protect itself from subversion and has
robust mechanisms to separate information domains, that is, trustworthy. Another context is to refer to an
application of a computer that will require the computer to be strong enough to protect itself from subversion and
possess adequate mechanisms to separate information domains, that is, a system we must trust. This distinction is
important, because systems that have to be trusted are not necessarily trustworthy (Wikipedia, "Multilevel security,"
http://en.wikipedia.org/wiki/Multilevel_security).
Security labels are not only associated with files, but also with other items, such as ports and processes.
In Listing 6-10, for example, you can see the context settings for processes on your server.
Listing 6-10. Showing SELinux Settings for Processes
mmi:/ # ps Zaux
LABEL                              USER     PID %CPU %MEM    VSZ  RSS TTY  STAT START TIME COMMAND
system_u:system_r:init_t           root       1  0.0  0.0  10640  808 ?    Ss   05:31 0:00 init [5]
system_u:system_r:kernel_t         root       2  0.0  0.0      0    0 ?    S    05:31 0:00 [kthreadd]
system_u:system_r:kernel_t         root       3  0.0  0.0      0    0 ?    S    05:31 0:00 [ksoftirqd/0]
system_u:system_r:kernel_t         root       6  0.0  0.0      0    0 ?    S    05:31 0:00 [migration/0]
system_u:system_r:kernel_t         root       7  0.0  0.0      0    0 ?    S    05:31 0:00 [watchdog/0]
system_u:system_r:sysadm_t         root    2344  0.0  0.0  27640  852 ?    Ss   05:32 0:00 /usr/sbin/mcelog --daemon --config-file /etc/mcelog/mcelog.conf
system_u:system_r:sshd_t           root    3245  0.0  0.0  69300 1492 ?    Ss   05:32 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pid
system_u:system_r:cupsd_t          root    3265  0.0  0.0  68176 2852 ?    Ss   05:32 0:00 /usr/sbin/cupsd
system_u:system_r:nscd_t           root    3267  0.0  0.0 772876 1380 ?    Ssl  05:32 0:00 /usr/sbin/nscd
system_u:system_r:postfix_master_t root    3334  0.0  0.0  38320 2424 ?    Ss   05:32 0:00 /usr/lib/postfix/master
system_u:system_r:postfix_qmgr_t   postfix 3358  0.0  0.0  40216 2252 ?    S    05:32 0:00 qmgr -l -t fifo -u
system_u:system_r:crond_t          root    3415  0.0  0.0  14900  800 ?    Ss   05:32 0:00 /usr/sbin/cron
system_u:system_r:fsdaemon_t       root    3437  0.0  0.0  16468 1040 ?    S    05:32 0:00 /usr/sbin/smartd
system_u:system_r:sysadm_t         root    3441  0.0  0.0  66916 2152 ?    Ss   05:32 0:00 login -- root
system_u:system_r:sysadm_t         root    3442  0.0  0.0   4596  800 tty2 Ss+  05:32 0:00 /sbin/mingetty tty2

Selecting the SELinux Mode
In SELinux, three different modes can be used:
	•	Enforcing: This is the default mode. SELinux protects your server according to the rules in the policy, and SELinux logs all of its activity to the audit log.
	•	Permissive: This mode is useful for troubleshooting. If set to Permissive, SELinux does not protect your server, but it still logs everything that happens to the log files. Also, in permissive mode, the Linux kernel still maintains the SELinux labels in the file system. This is good, because it prevents your system from relabeling everything after turning SELinux on again.
	•	Disabled: In this mode, SELinux is switched off completely, and no logging occurs. The file system labels, however, are not removed from the file system.


You have already read how you can set the current SELinux mode from GRUB while booting, using the enforcing
boot parameter.
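Assuming the standard SELinux user-space utilities are installed, the current mode can also be inspected and switched at runtime, without editing GRUB (a sketch; setenforce changes do not survive a reboot):

```shell
getenforce                # prints the current mode: Enforcing, Permissive, or Disabled
setenforce 0              # switch to permissive mode until the next reboot
setenforce 1              # switch back to enforcing mode
sestatus | grep -i mode   # shows both the current mode and the configured mode
```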

Modifying SELinux Context Types
An important part of the work of an administrator is setting context types on files, to ensure appropriate working of
SELinux.
If a file is created within a specific directory, it inherits the context type of the parent directory by default.
If, however, a file is moved from one location to another, it retains the context type that it had in the former location.
To set the context type for files, you can use the semanage fcontext command. With this command, you write the
new context type to the policy, but it doesn’t change the actual context type immediately! To apply the context types
that are in the policy, you have to run the restorecon command afterward.
The challenge when working with semanage fcontext is to find out which context you actually need. You can use
semanage fcontext -l to show a list of all contexts in the policy, but because it is rather long, it might be a bit difficult
to find the actual context you need from that list (see Listing 6-11).
Listing 6-11. Displaying Default File Contexts with semanage fcontext -l
[root@hnl ~]# semanage fcontext -l | less
SELinux fcontext                type            Context

/                               directory       system_u:object_r:root_t:s0
/.*                             all files       system_u:object_r:default_t:s0
/[^/]+                          regular file    system_u:object_r:etc_runtime_t:s0
/\.autofsck                     regular file    system_u:object_r:etc_runtime_t:s0
/\.autorelabel                  regular file    system_u:object_r:etc_runtime_t:s0
/\.journal                      all files       <<None>>
/\.suspended                    regular file    system_u:object_r:etc_runtime_t:s0
/a?quota\.(user|group)          regular file    system_u:object_r:quota_db_t:s0
/afs                            directory       system_u:object_r:mnt_t:s0
/bin                            directory       system_u:object_r:bin_t:s0
/bin/.*                         all files       system_u:object_r:bin_t:s0

There are three ways to find out which context settings are available for your services:
	1.	Install the service and look at the default context settings that are used. This is the easiest and recommended option.
	2.	Consult the man page for the specific service. Some services have a man page that ends in _selinux, which contains all the information you need to find the correct context settings.
	3.	Use seinfo -t to display a list of all type contexts that are available on your system, combined with grep to find the type context you need for a specific purpose. The amount of information provided with seinfo -t is a bit overwhelming; about 3,000 type contexts are available by default!

After finding the specific context setting you need, you just have to apply it, using semanage fcontext. This
command takes the -t context type as its first argument, followed by the name of the directory or file to which you
want to apply the context settings. To apply the context to everything that already exists in the directory in which you
want to apply the context, you add the regular expression (/.*)? to the name of the directory. This means:
optionally, match a slash followed by any characters. The examples section of the semanage man page has some useful
applicable examples for semanage.


Applying File Contexts
To help you apply the SELinux context properly, the following procedure shows how to set a context, using semanage
fcontext and restorecon. You will notice that at first attempt, the web server with a non-default document root
doesn’t work. After changing the SELinux context it will.
	1.	Use zypper in apache2 to install the Apache web server.
	2.	Use mkdir /web and then go to that directory, using cd /web.
	3.	Use a text editor to create the file /web/index.html, which contains the text "welcome to my website."
	4.	Open the file /etc/apache2/default-server.conf with an editor and change the DocumentRoot line to DocumentRoot /web. Also change the <Directory> statement in this file, so that it refers to the directory /web.
	5.	Start the Apache web server, using systemctl start apache2.
	6.	Use w3m localhost to open a session to your local web server. You will receive a connection refused message. Press Enter and then q, to quit w3m.
	7.	Use ls -Z /srv/www to find the current type context for the default Apache DocumentRoot, which is /srv/www/htdocs. It should be set to httpd_sys_content_t.
	8.	Use semanage fcontext -a -t httpd_sys_content_t '/web(/.*)?' to set the new context in the policy and press Enter.
	9.	Now use restorecon /web to apply the new type context.
	10.	Use ls -Z /web to show the context of the files in the directory /web. You'll see that the new context type has been set properly on the /web directory, but not on its contents.
	11.	Use restorecon -R /web to apply the new context recursively to the /web directory. The type context has now been set correctly.
	12.	Restart the web server, using rcapache2 restart. You should now be able to access the content of the /web directory.

Configuring SELinux Policies
The easiest way to change the behavior of the policy is by working with Booleans. These are on-off switches that you
can use to change the settings in the policy.
To find out which Booleans are available, you can use the semanage boolean -l command. It will show you a
long list of Booleans, with a short description of what each of these will do for you. Once you have found the Boolean
you want to set, you can use setsebool -P, followed by the name of the Boolean that you want to change and its new value. The -P option writes the setting to the policy file on disk, which is the only way to make sure that the Boolean is still applied automatically after a reboot.
The following procedure provides an example of changing Boolean settings.
1. From a root shell, type semanage boolean -l | grep ftp. This shows a list of Booleans that are related to FTP servers.

2. Use setsebool allow_ftpd_anon_write on to switch this Boolean on. Note that it doesn’t take much time to write the change. Use semanage boolean -l | grep ftpd_anon to verify that the Boolean is indeed turned on. (If you don’t succeed with this step, use semodule -e ftp first.)


Chapter 6 ■ Hardening SUSE Linux

3. Reboot your server.

4. Check again to see if the allow_ftpd_anon_write Boolean is still turned on. As it hasn’t yet been written to the policy, you’ll see that it is off at the moment.

5. Use setsebool -P allow_ftpd_anon_write on to switch on the Boolean and write the setting to the policy.
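The difference between a runtime-only change and a persistent one can be sketched as follows. The Boolean name is the one from the procedure; the function is only defined, since changing Booleans requires root on an SELinux system.

```shell
#!/bin/bash
# Sketch: temporary vs. persistent SELinux Boolean changes.

set_bool_persistent() {
  local name="$1" value="$2"
  setsebool "$name" "$value"      # runtime only: lost after a reboot
  setsebool -P "$name" "$value"   # -P also writes the setting to the policy on disk
  getsebool "$name"               # show the current value to verify
}

echo "run 'set_bool_persistent allow_ftpd_anon_write on' as root"
```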

Working with SELinux Modules
You have compiled SELinux as modular. That means that the policy that implements SELinux features is not just one
huge policy, but it consists of many smaller modules. Each module covers a specific part of the SELinux configuration.
The concept of the SELinux module was introduced to make it easier for third-party vendors to make their services
compatible with SELinux. To get an overview of the SELinux modules, you can use the semodule -l command. This
command shows a list of all current modules in use by SELinux and their version numbers.
As an administrator, you can switch modules on or off. This can be useful if you want to disable only a part of
SELinux and not everything, to run a specific service without SELinux protection. Especially when building your own
policy, it makes sense to switch off all modules that you don’t need, so that you can focus on the services that really do
require SELinux protection. To switch off an SELinux module, use semodule -d modulename. If you want to switch it
on again, you can use semodule -e modulename. Using this command will change the current state of the module in
the /etc/selinux/refpolicy/policy/modules.conf file. Alternatively, you could also edit this file by hand.
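Listing, disabling, and re-enabling a module can be sketched as below; ftp is used as the example module name, matching the procedure earlier in this chapter.

```shell
#!/bin/bash
# Sketch: inspect and toggle SELinux policy modules.

toggle_module() {
  local mod="$1"
  semodule -l | grep "^$mod"   # show the module and its version, if loaded
  semodule -d "$mod"           # disable the module
  semodule -e "$mod"           # enable it again
}

# semodule needs root on an SELinux system; the function is only defined here.
echo "run 'toggle_module ftp' as root"
```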
To handle policy modules properly, it helps to understand what you’re dealing with. In the end, a policy module
is a compiled policy file that you can load using the semodule -i command. You can recognize these files by the
extension they use: *.pp (which stands for “Policy Package”). In some cases, it can be useful to modify modules to
have them do exactly what you need them to. If all the sources of the SELinux policy are installed, three different
kinds of files are used as input files for policy modules, and you can find them in subdirectories of the /etc/selinux/
refpolicy/policy/modules directory, as follows:
• te files contain transition rules. These rules tell the policy how to deal with specific subprocesses that are started. You won’t often change these as an administrator.

• if files define what exactly the policy should be doing. As an administrator, you don’t typically change the contents of these files.

• fc files contain the labeling instructions that apply to this policy. As an administrator, you might want to change the contents of the .fc files to modify the default behavior of policies.

In Listing 6-12, you can see the first 20 lines of the apache.fc file. This is the file that contains the default file
contexts that are used for the Apache server.
Listing 6-12. The First 20 Lines from the apache.fc File
mmi:/etc/selinux/refpolicy/policy/modules/services # head -n 20 apache.fc
HOME_DIR/((www)|(web)|(public_html))(/.+)?  gen_context(system_u:object_r:httpd_user_content_t,s0)
/etc/apache(2)?(/.*)?                       gen_context(system_u:object_r:httpd_config_t,s0)
/etc/apache-ssl(2)?(/.*)?                   gen_context(system_u:object_r:httpd_config_t,s0)
/etc/htdig(/.*)?                            gen_context(system_u:object_r:httpd_sys_content_t,s0)
/etc/httpd                    -d            gen_context(system_u:object_r:httpd_config_t,s0)
/etc/httpd/conf.*                           gen_context(system_u:object_r:httpd_config_t,s0)
/etc/httpd/logs                             gen_context(system_u:object_r:httpd_log_t,s0)
/etc/httpd/modules                          gen_context(system_u:object_r:httpd_modules_t,s0)
/etc/vhosts                   --            gen_context(system_u:object_r:httpd_config_t,s0)


/srv/([^/]*/)?www(/.*)?                     gen_context(system_u:object_r:httpd_sys_content_t,s0)
/srv/gallery2(/.*)?                         gen_context(system_u:object_r:httpd_sys_content_t,s0)
/usr/bin/htsslpass            --            gen_context(system_u:object_r:httpd_helper_exec_t,s0)
/usr/lib/apache-ssl/.+        --            gen_context(system_u:object_r:httpd_exec_t,s0)
/usr/lib/cgi-bin(/.*)?                      gen_context(system_u:object_r:httpd_sys_script_exec_t,s0)
/usr/lib(64)?/apache(/.*)?                  gen_context(system_u:object_r:httpd_modules_t,s0)
/usr/lib(64)?/apache2/modules(/.*)?         gen_context(system_u:object_r:httpd_modules_t,s0)

In the fc file, you’ll be able to recognize different elements. First is the name of the directory or file to which the file context will apply. As you can see, variables can be used (as in the first line, which starts with HOME_DIR), and typically, regular expressions are used as well. Next, the gen_context command tells the policy which context should be set on the files that the policy module relates to. This is the same context setting that you see when using ls -Z on the file or directory.
As an administrator, you don’t typically change the contents of the policy files that come from the SELinux Policy
RPM. You would rather use semanage fcontext to change file contexts. If you are using audit2allow to generate
policies for your server, you might want to change the policy files after all. If you want to change the contents of any of
the policy module files, you’ll need to compile the changes into a new policy module file. To do this, copy or link the
SELinux Makefile from /etc/selinux/refpolicy to the directory that contains the policy module input files and run
the following command to compile the module:
make && make install && make load
Once the make command has completed, you can manually load the module into the system, using semodule -i.
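Assuming the module sources live in a working directory of their own, the compile step above can be sketched as a small helper. The source directory argument is a placeholder for your own layout.

```shell
#!/bin/bash
# Sketch: rebuild a policy module from its .te/.if/.fc sources.

build_policy_module() {
  local srcdir="$1"
  cd "$srcdir" || return 1
  # Reuse the refpolicy Makefile, as described in the text
  ln -sf /etc/selinux/refpolicy/Makefile .
  make && make install && make load
}

# Building policy modules requires the refpolicy sources and root privileges,
# so the function is only defined here.
echo "run 'build_policy_module /path/to/module/sources' as root"
```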

Troubleshooting SELinux
By default, if SELinux is the reason why something isn’t working, a log message to that effect is sent to the /var/log/audit/audit.log file. That is, if the auditd service is running. If you see an empty /var/log/audit, start the auditd service using systemctl start auditd, and enable it at boot time, using systemctl enable auditd. In Listing 6-13, you can see a partial example of the contents of /var/log/audit/audit.log.
Listing 6-13. Example Lines from /var/log/audit/audit.log
type=DAEMON_START msg=audit(1348173810.874:6248): auditd start, ver=1.7.7 format=raw kernel=3.0.130.27-default auid=0 pid=4235 subj=system_u:system_r:auditd_t res=success
type=AVC msg=audit(1348173901.081:292): avc: denied { write } for pid=3426 comm="smartd"
name="smartmontools" dev=sda6 ino=581743 scontext=system_u:system_r:fsdaemon_t
tcontext=system_u:object_r:var_lib_t tclass=dir
type=AVC msg=audit(1348173901.081:293): avc: denied { remove_name } for pid=3426 comm="smartd"
name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state~" dev=sda6 ino=582390 scontext=system_
u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=dir
type=AVC msg=audit(1348173901.081:294): avc: denied { unlink } for pid=3426 comm="smartd"
name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state~" dev=sda6 ino=582390 scontext=system_
u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:295): avc: denied { rename } for pid=3426 comm="smartd"
name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" dev=sda6 ino=582373 scontext=system_
u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:296): avc: denied { add_name } for pid=3426 comm="smartd"
name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state~" scontext=system_u:system_r:fsdaemon_
t tcontext=system_u:object_r:var_lib_t tclass=dir


type=AVC msg=audit(1348173901.081:297): avc: denied { create } for pid=3426 comm="smartd"
name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" scontext=system_u:system_r:fsdaemon_t
tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:298): avc: denied { write open } for pid=3426 comm="smartd"
name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" dev=sda6 ino=582390 scontext=system_
u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:299): avc: denied { getattr } for pid=3426 comm="smartd"
path="/var/lib/smartmontools/smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" dev=sda6
ino=582390 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.309:300): avc: denied { append } for pid=1316
comm="syslog-ng" name="acpid" dev=sda6 ino=582296 scontext=system_u:system_r:syslogd_t
tcontext=system_u:object_r:apmd_log_t tclass=file
At first look, the lines in audit.log are a bit difficult to read. On closer examination, however, they are not that hard to understand. Every line can be broken down into a number of default sections. Let’s have a look at the different sections in the last line.
• type=AVC: Every SELinux-related audit log line starts with the type identification type=AVC (Access Vector Cache).

• msg=audit(1348173901.309:300): This is the timestamp, which, unfortunately, is written in epoch time, the number of seconds that have passed since January 1, 1970. You can use date -d on the part up to the dot in the epoch time notation, to find out when the event occurred, as follows:

mmi:~ # date -d @1348173901
Thu Sep 20 16:45:01 EDT 2012

• avc: denied { append }: The specific action that was denied. In this case, the system has denied appending data to a file. While browsing through the audit log file, you can see other system actions, such as write open, getattr, and more.

• for pid=1316: The process ID of the command or process that initiated the action.

• comm="syslog-ng": The specific command that was associated with that PID.

• name="acpid": The name of the subject of the action.

• dev=sda6 ino=582296: The block device and inode number of the file that was involved.

• scontext=system_u:system_r:syslogd_t: The source context, which is the context of the initiator of the action.

• tcontext=system_u:object_r:apmd_log_t: The target context, which is the context set on the file on which the action was initiated.

• tclass=file: A class identification of the subject.

Instead of interpreting the events in audit.log yourself, you can take another approach: the audit2allow command, which helps analyze the cryptic log messages in /var/log/audit/audit.log. An audit2allow troubleshooting session always consists of three different commands. First, you would use audit2allow -w -a, to present the audit information in a more readable way. audit2allow -w -a, by default, works on the audit.log file. If you want to analyze a specific message in the audit.log file, copy it to a temporary file and analyze that, using audit2allow -w -i filename (see Listing 6-14).


Listing 6-14. Analyzing Audit Messages Using audit2allow
mmi:/var/log/audit # audit2allow -w -i testfile
type=AVC msg=audit(1348173901.309:300): avc: denied { append } for pid=1316 comm="syslog-ng"
name="acpid" dev=sda6 ino=582296 scontext=system_u:system_r:syslogd_t
tcontext=system_u:object_r:apmd_log_t tclass=file
Was caused by:
Missing type enforcement (TE) allow rule.
You can use audit2allow to generate a loadable module to allow this access.
To find out which specific rule has denied access, you can use audit2allow -a, to show the enforcing rules from all events that were logged to the audit.log file, or audit2allow -i filename, to show it for messages that you have stored in a specific file (see Listing 6-15).
Listing 6-15. Using audit2allow to See Which Lines Have Denied Access
mmi:/var/log/audit # audit2allow -i testfile
  
#============= syslogd_t ==============
allow syslogd_t apmd_log_t:file append;
As the last part, use audit2allow -a -M mymodule to create an SELinux module with the name mymodule that
you can load in order to allow the access that was previously denied. If you want to do this for all events that have been
logged to the audit.log, use the -a -M command arguments. To do it only for specific messages that are in a specific file,
use -i -M, as in the example in Listing 6-16.
Listing 6-16. Use audit2allow to Create a Policy Module That Will Allow the Action Previously Denied
mmi:/var/log/audit # audit2allow -i testfile -M auditresult
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i auditresult.pp
As indicated by the audit2allow command, you can now activate this module by using the semodule -i command, followed by the name of the module file that audit2allow has just created for you.

Switching to Enforcing Mode
With everything you’ve done so far, you still cannot switch SELinux to enforcing mode. This is because of a misconfiguration in the context types for some files. When you switch to enforcing mode now, many AVC denied messages related to the tmpfs are written, and following these messages, your system hangs. The messages that you’ll see just before the system stops look as follows:
[5.595812] type=1400 audit(1361363803.588:3): avc: denied { read write } for pid=431
comm="sh" name="console" dev=tmpfs ino=2513 scontext=system_u:system_r:sysadm_t
tcontext=system_u:object_r:tmpfs_t tclass=chr_file
[5.607734] type=1400 audit(1361363803.604:4): avc: denied { read write } for pid=431
comm="sh" path="/dev/console" dev=tmpfs ino=2513 scontext=system_u:system_r:sysadm_t
tcontext=system_u:object_r:tmpfs_t tclass=chr_file
As you can see, this message is repeated several times.


To fix this problem, reboot your computer in permissive mode. Copy the /var/log/audit/audit.log file to a temporary file (such as /var/log/audit/audit.2allow) and remove all lines, with the exception of the lines that contain the audit.log messages for the errors listed above (use grep denied audit.log > audit.2allow to find them). Assuming that the name of the log file that you’ve created is audit.2allow, you should now run the following command:
audit2allow -i audit.2allow -M bootme
This creates a policy module file with the name bootme.pp. Make sure that this module is included in your
SELinux configuration by using semodule -i bootme.pp. Now reboot your computer in enforcing mode. You will
be able to boot and log in as root in your SELinux-protected system. If this is not the case, you’ll have to repeat this
procedure until you have no more messages in your audit.log that refer to path="/dev/console" dev=tmpfs.
This may involve several reboots.
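The boot-fix loop described above can be collected in a small script. The file names (audit.2allow, bootme) follow the text; the function is only defined, because it requires root and a populated audit log.

```shell
#!/bin/bash
# Sketch: turn the boot-time denials into a loadable policy module.

build_bootme() {
  # Keep only the denial lines from the audit log
  grep denied /var/log/audit/audit.log > /var/log/audit/audit.2allow
  # Generate a policy module named bootme from those denials ...
  audit2allow -i /var/log/audit/audit.2allow -M bootme
  # ... and load it, so the next enforcing-mode boot gets further
  semodule -i bootme.pp
}

echo "run 'build_bootme' as root after booting in permissive mode"
```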
From here, the fine-tuning begins. You will notice that many items on your system don’t work yet. You’ll have
to fix them one by one. The approach to fix all of these errors is by using audit2allow, as described in the preceding
example, and by setting the appropriate context on files and directories on your system. Until a supported version of
the SELinux policy is provided with SUSE Linux Enterprise, you’ll have to follow this approach to get it to work. At least
using this procedure does allow you to configure a computer with the very detailed security settings that are offered
with SELinux and will make your system more secure than when using other solutions.

Summary
In this chapter, you have read how to harden an SLES 12 server. You’ve read about different approaches to hardening,
starting with the relatively simple YaST security module, then working through such essentials as configuration of
auditing and sudo, up to the application of advanced security settings with SELinux. Using all of the information in
this chapter, you should be able to create a relatively safe basic server installation!


Chapter 7

Managing Virtualization on SLES
Even though SUSE Linux Enterprise Server (SLES) is not developed specifically as a virtualization platform, it is a Linux distribution, and the Linux kernel includes embedded virtualization options. This chapter provides an overview of the available options and goes into further detail about setting up a Kernel-based Virtual Machine (KVM) host platform.

Understanding Linux Virtualization Solutions
In Linux, no fewer than three approaches are available to create virtual machines. Before going into detail about the most significant virtualization approach, let’s have a look at the available techniques.
The first virtualization hype on Linux started with the introduction of Xen in the early 2000s. The Xen virtualization platform used a modified Linux kernel that offered virtualization extensions. Because Xen allowed virtual machines to address hardware directly, using an approach known as paravirtualization, it took a long time before the Xen virtualization extensions were really integrated into the Linux kernel, which stimulated the rise of an alternative virtualization solution: KVM.
KVM is the Linux Kernel-based Virtual Machine, a kernel module that offers support for creating virtual
machines. Because KVM is so simple in its design and approach, it has been a huge success since the moment it was
launched. At present, KVM is the de facto virtualization solution on Linux.
Apart from Xen and KVM, which both are tightly integrated into the Linux kernel, there is also container virtualization. In container virtualization, one kernel is used, and on top of that kernel, different isolated environments are created. Each of these environments behaves as an independent machine, but it isn’t one. They all depend on the same kernel, which also means that it’s not possible to run different operating systems in such an environment. In Linux, LXC (Linux Containers) is the default solution for offering container-based virtualization.

Understanding the KVM Environment
To set up a KVM environment, a few elements are needed. To start with, you’ll need hardware support for virtualization. That means that the CPU on the hypervisor platform requires virtualization extensions. In general, this is the case for mid- to high-end CPUs. It can be verified by checking the contents of the /proc/cpuinfo file; in the CPU flags, vmx (Intel) or svm (AMD) should be listed. If they are not, your CPU does not offer support for virtualization, and KVM cannot be used. Listing 7-1 shows sample contents of the /proc/cpuinfo file for a CPU that does offer virtualization support.
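The flag check can also be scripted. The helper below reads /proc/cpuinfo by default; the optional file argument exists only so the check can be exercised against sample data.

```shell
#!/bin/bash
# Check for hardware virtualization support: look for the vmx (Intel)
# or svm (AMD) flag in the CPU flags line, as described above.

has_virt_support() {
  # Accepts an alternative file for testing; defaults to /proc/cpuinfo
  grep -Eq '^flags[[:space:]]*:.*\b(vmx|svm)\b' "${1:-/proc/cpuinfo}"
}

if has_virt_support; then
  echo "CPU offers virtualization support; KVM can be used"
else
  echo "no vmx/svm flag found; KVM cannot be used"
fi
```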


Listing 7-1. The Availability of Virtualization Support in the CPU Is Shown in /proc/cpuinfo
processor        : 1
vendor_id        : GenuineIntel
cpu family       : 6
model            : 58
model name       : Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz
stepping         : 9
microcode        : 0x15
cpu MHz          : 2693.694
cache size       : 6144 KB
fpu              : yes
fpu_exception    : yes
cpuid level      : 13
wp               : yes
flags            : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology
tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic
popcnt aes xsave avx f16c rdrand hypervisor lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow
vnmi ept vpid fsgsbase smep
bogomips         : 5387.38
clflush size     : 64
cache_alignment  : 64
address sizes    : 40 bits physical, 48 bits virtual
power management :
If the CPU extensions for virtualization are available, the required kernel modules can be loaded. These are kvm
and kvm_intel or kvm_amd, depending on the hardware platform that is used.
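Loading the right module pair can be sketched as follows; modprobe requires root, so the function is only defined here.

```shell
#!/bin/bash
# Sketch: load the KVM kernel module matching the CPU vendor.

load_kvm_modules() {
  if grep -q GenuineIntel /proc/cpuinfo; then
    modprobe kvm_intel        # pulls in the generic kvm module as well
  else
    modprobe kvm_amd
  fi
  lsmod | grep '^kvm'         # verify that the modules are loaded
}

echo "run 'load_kvm_modules' as root on the hypervisor host"
```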
To manage a virtual machine, libvirt is used. libvirtd is a daemon that is started on the hypervisor platform
and offers management support for different virtualization platforms. Using libvirtd, KVM can be managed, but
other virtualization platforms can be used as well, such as Xen or LXC. libvirtd is also the interface that is used by
different management utilities. Common management utilities that can work on top of libvirt are the graphical
utility virt-manager and the command-line utility virsh.

Creating KVM Virtual Machines
Using SLES 12 as a KVM host is not difficult. You start by selecting the software pattern from the Software option in YaST. There’s a pattern for Xen as well as for KVM, as SUSE believes it is important to continue supporting customers who have a current infrastructure on top of Xen and, at the same time, wants to offer full KVM support.
After installing the KVM software pattern, YaST will show a Virtualization option (see Figure 7-1). From this
virtualization menu, the following three options are available:
• Create Virtual Machines for Xen and KVM: This option helps you to configure virtual machines from YaST.

• Install Hypervisor and Tools: This is the option you’ll have to use first, as it ensures that all of the required components are available.

• Relocation Server Configuration: Use this option if you have multiple KVM host platforms and you want to be able to use live migration, whereby a running virtual machine can be migrated from one hardware platform to another.


Figure 7-1. YaST virtualization options

Configuring the KVM Host
When selecting the Install Hypervisor and Tools option, you can choose between three different virtualization platforms (see Figure 7-2). To use KVM, select the KVM server, as well as the KVM tools to be installed, and select Accept to proceed.

Figure 7-2. Selecting your hypervisor of choice


While installing the KVM environment, multiple things will occur. An important element is the creation of a
software bridge. This software bridge will be used as the intermediate layer between the physical network card(s)
in your server and networking in the virtual machines. When using KVM, multiple virtual machines need access to
one physical network card, and that traffic has to be managed. To ensure that no conflicts arise, a virtual bridge is
created. In the section “Managing KVM Networking,” later in this chapter, you’ll learn how to manage this network
environment.
The installer may prompt a few times, depending on the exact configuration you’re using. First, it will ask if you want to install graphical components as well. If you’re installing from a text-only environment, it normally doesn’t make much sense to install graphical management software, but KVM virtual machines are best managed from the graphical tools, so make sure that these are installed. The next prompt is about setting up a network bridge. You should do this for easy access to the network.
Once the installation of the required components is complete, there are a few things to verify. First, type ip link
show, for an overview of available network devices. You’ll note that a device with the name br0 has been added. Next,
type brctl show. This will show that the bridge br0 is using the physical network card in your server as its interface.
(See Listing 7-2). That is a good point from which to proceed (although you might consider creating more complex
network configurations, such as a bridge that uses a teamed network interface for redundancy).
Listing 7-2. Verifying KVM Host Network Configuration
linux-3kk5:~ # ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP mode
DEFAULT group default qlen 1000
    link/ether 00:0c:29:aa:91:f2 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:0c:29:aa:91:f2 brd ff:ff:ff:ff:ff:ff
linux-3kk5:~ # brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29aa91f2       no              eth0

Creating Virtual Machines
Once the KVM host has been configured and at least the network bridging is in place, you can proceed and create
some virtual machines. To do this, you need a graphical interface, and from the graphical interface, you can start the
Create Virtual Machines for Xen and KVM option in YaST. This starts the virt-install utility, which will now prompt
you about how you would like to install the operating system (see Figure 7-3).


Figure 7-3. Starting virtual machine installation
To make installation of virtual machines easy, it’s a good idea to use network installation. You might want to set up an installation server through HTTP, FTP, or NFS that offers the repositories required for installation of the virtual machine, and configure PXE as well, to ensure that the machines can boot from the network and get an installation image delivered automatically. Read Chapter 17 for more details on setting up such an environment.
If no installation server is available, select Local install media. This allows you to install the virtual machine from a physical DVD or an ISO image that is available on the KVM host.
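The same installation can also be started non-interactively with virt-install. All values below (VM name, ISO path, disk size) are example assumptions; adjust them to your environment.

```shell
#!/bin/bash
# Sketch: create a VM from a local ISO image with virt-install.
# The VM name, ISO path, and disk size are example values.

create_vm() {
  virt-install \
    --name sles12-vm1 \
    --ram 1024 \
    --vcpus 1 \
    --disk path=/var/lib/libvirt/images/sles12-vm1.qcow2,size=10 \
    --cdrom /isos/SLES-12-DVD-x86_64.iso \
    --network bridge=br0
}

echo "run 'create_vm' as root on a configured KVM host"
```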
After selecting the installation source you want to use, you can provide more details on where exactly the
installation files can be found. If you have selected to install from a DVD or ISO, you’ll select the disk you want to use
(see Figure 7-4), or if you have selected to install from an installation server, you have to provide a URL to make sure
that the files installation packages can be located (see Figure 7-5).


Figure 7-4. Providing the path to access an installation disk or ISO

Figure 7-5. Providing details about the installation server
After specifying details about the installation source you want to use, you’ll have to provide information about the amount of RAM and the number of CPUs you want to assign to the virtual machine (see Figure 7-6). A single virtual machine cannot go beyond the physical limits of available RAM and CPUs on your computer, but the total of all RAM assigned to your virtual machines together doesn’t have to be less than the total amount of RAM in the host.


Figure 7-6. Allocating virtual machine RAM and CPUs
In KVM, a smart feature that is known as Kernel Samepage Merging (KSM) is used. KSM makes it possible to load memory pages that are used multiple times only once, as shared memory. That means that if you have four virtual machines that all use the same Linux kernel, you don’t physically have to load that kernel four times as well, which is why the total amount of RAM that is configured on your virtual machines can go beyond the total amount of RAM in the host.
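On the host, KSM activity can be inspected through sysfs. The files below exist only on kernels with KSM support, so the sketch guards for their presence.

```shell
#!/bin/bash
# Inspect KSM statistics on the KVM host via sysfs.

ksm=/sys/kernel/mm/ksm
if [ -r "$ksm/pages_sharing" ]; then
  echo "KSM enabled: $(cat "$ksm/run")"                        # 1 = merging active
  echo "pages currently shared: $(cat "$ksm/pages_sharing")"
else
  echo "KSM not available on this kernel"
fi
```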
After allocating RAM and CPUs to the virtual machine, you’ll have to configure storage (see Figure 7-7). The easy solution is to create a disk image on the computer’s hard drive. That option creates a disk file for each virtual machine and requires no additional measures at the storage level. If you want a more advanced setup for storage, it makes sense to create an LVM logical volume for each virtual machine and configure that as storage, using the Select managed or other existing storage option.
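Creating an LVM-backed disk for a virtual machine can be sketched as below; the volume group name (vgvm) and the size are example assumptions.

```shell
#!/bin/bash
# Sketch: one logical volume per virtual machine as VM storage.

create_vm_disk() {
  local vg="$1" name="$2" size="$3"
  # Create the logical volume that will act as the VM's disk
  lvcreate -L "$size" -n "$name" "$vg"
  # The resulting device can then be selected in the wizard as
  # managed or other existing storage
  echo "/dev/$vg/$name"
}

echo "example: run 'create_vm_disk vgvm sles12-vm1 10G' as root"
```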


Figure 7-7. Selecting storage
After selecting which storage to use, you’ll see a summary screen. From this screen, you can start the installation
of your virtual machine.

Managing KVM Virtual Machines
Once the virtual machine has been installed, you can start using it. On operational virtual machines, there are a few
parameters that can be managed as well. Many of these can be managed from the graphical interfaces that are offered through the virt-manager utility. These include networking and virtual machine properties. Alternatively, the virsh
command-line utility can be used for performing basic management tasks.

Managing KVM Networking
Networking is an important part of virtual machine management. The network properties can be accessed through
Virtual Machine Manager, by selecting the local hypervisor, which is indicated as localhost (qemu). After selecting
this, you’ll see the window shown in Figure 7-8, from which the network interfaces and the virtual networks can be
managed.


Figure 7-8. Managing virtual network interfaces
By default, the virtual bridge that is active on the KVM host offers networking through NAT. On the virtual network, a private IP address range is used, and hosts in the same network can access other hosts in that network. External users, however, cannot access hosts in the NATted network directly. The default IP address range for the internal NATted network is 192.168.4.0/24.
In many cases, the default NATted network works well, but in some cases, you might require something else. To create an alternative network configuration, select the Virtual Networks tab (see Figure 7-9). On
this tab, you can create different virtual network configurations that can be connected to the network interfaces seen
on the Network Interfaces tab.


Figure 7-9. Configuring virtual networks
When adding new virtual networks, you’ll walk through a small wizard. The last screen in the window asks how
you want to be connected to the physical network. On this screen, you can select between an Isolated virtual network
and Forwarding to physical network. The Isolated virtual network is what it says it is: a network that is connected to
nothing else. If you want to connect the network to the outside world, you’ll have to select the Forwarding to physical
network option. This option first lets you select the physical network to connect to and, next, has you select a mode.
You can choose between NAT and routed modes. In NAT mode, the network configuration of virtual machines is set
up automatically, which ensures an easy connection to external computers. If selecting Routed networking, you’ll
have to manually configure routing between the internal virtual network and external networks (see Figure 7-10).


Figure 7-10. Setting up virtual networking

Managing Virtual Machine Properties
On a virtual machine, a lot of virtual hardware is available. This hardware can be managed and changed by opening the virtual machine in virt-manager and clicking the lightbulb icon. This shows the interface that can be seen in Figure 7-11.


Figure 7-11. Managing virtual machine properties
As you can see in Figure 7-11, an interface is available for every hardware element configured in the virtual
machine, and many properties of these devices can be managed. This includes advanced settings, such as properties
of hardware devices, but also more basic settings, such as the amount of RAM or the hard disks allocated to a
virtual machine. Note that many changes to the virtual hardware configuration require a reboot of the virtual
machine, and some hardware settings can only be changed while the virtual machine is powered off.

Managing Virtual Machines from the Command Line
In addition to the options offered by the graphical interface, virtual machines can also be managed from the
command line, using the virsh utility. virsh offers a shell interface with a huge number of options that allows
advanced administrators to perform any possible manipulation on virtual machines. To start with, there is
virsh list, which shows all virtual machines that are currently running. It doesn't show virtual machines
that are not operational, however; use virsh list --all to see those too.
From the command line, the state of a virtual machine can also be managed. Type virsh shutdown vmname
to shut down a virtual machine gracefully. If that doesn't work, you can use virsh destroy vmname, which halts it
immediately, as if you had pulled the power plug.
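These two commands combine naturally into a small helper that tries a graceful shutdown first and only pulls the virtual power plug when the guest does not stop in time. The function name stop_vm and the 30-second timeout are arbitrary choices for this sketch:

```shell
# Try a graceful shutdown; fall back to a hard stop after a timeout.
stop_vm() {
  vm="$1"
  virsh shutdown "$vm"
  i=0
  while [ "$i" -lt 30 ]; do
    state=$(virsh domstate "$vm" 2>/dev/null)
    if [ "$state" = "shut off" ]; then
      echo "$vm stopped gracefully"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "$vm did not stop in time, destroying it"
  virsh destroy "$vm"
}

# Example invocation, guarded so the sketch is safe to run anywhere:
if command -v virsh >/dev/null 2>&1; then
  stop_vm sles12
fi
```

virsh domstate reports the current state of a machine; polling it is a simple way to find out whether the guest has honored the shutdown request.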


The virtual machine itself is stored in an XML configuration file in /etc/libvirt/qemu. All settings of the
virtual machine are stored in that configuration file. You can see an abbreviated example of it in Listing 7-3.
Listing 7-3. Sample Virtual Machine Configuration File
linux-3kk5:/etc/libvirt/qemu # cat sles12.xml
<domain type='kvm'>
  <name>sles12</name>
  <uuid>c0352e07-795d-404c-86a6-bf045f7aa729</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
    ...
  </os>
  <cpu>
    <model>Westmere</model>
    ...
  </cpu>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    ...
  </devices>
</domain>
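Rather than editing the file in /etc/libvirt/qemu by hand, the usual workflow is to let virsh manage it: dump the definition to a working copy, change it, and define it again. A minimal sketch of that cycle follows; the machine name sles12 comes from the listing, while the helper name refresh_vm_definition and the /tmp copy path are arbitrary:

```shell
# Dump a machine's definition to a file, then re-define it after edits.
refresh_vm_definition() {
  vm="$1"
  copy="/tmp/$vm.xml"
  virsh dumpxml "$vm" > "$copy"   # export the current definition
  # ... edit "$copy" here, e.g. change the <memory> element ...
  virsh define "$copy"            # make the edited definition permanent
}

# Guarded example call, safe on hosts without libvirt:
if command -v virsh >/dev/null 2>&1; then
  refresh_vm_definition sles12
fi
```

For interactive changes, virsh edit sles12 performs the same dump/edit/define cycle in one step, opening the XML in your editor and validating it on save.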
