JENETICS
LIBRARY USER’S MANUAL 4.3

Franz Wilhelmstötter

franz.wilhelmstoetter@gmail.com
http://jenetics.io

4.3.0—2018/10/28

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. To view
a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a
letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041,
USA.

Contents
1 Fundamentals
  1.1 Introduction
  1.2 Architecture
  1.3 Base classes
    1.3.1 Domain classes
      1.3.1.1 Gene
      1.3.1.2 Chromosome
      1.3.1.3 Genotype
      1.3.1.4 Phenotype
      1.3.1.5 Population
    1.3.2 Operation classes
      1.3.2.1 Selector
      1.3.2.2 Alterer
    1.3.3 Engine classes
      1.3.3.1 Fitness function
      1.3.3.2 Engine
      1.3.3.3 EvolutionStream
      1.3.3.4 EvolutionResult
      1.3.3.5 EvolutionStatistics
      1.3.3.6 Engine.Evaluator
  1.4 Nuts and bolts
    1.4.1 Concurrency
      1.4.1.1 Basic configuration
      1.4.1.2 Concurrency tweaks
    1.4.2 Randomness
    1.4.3 Serialization
    1.4.4 Utility classes

2 Advanced topics
  2.1 Extending Jenetics
    2.1.1 Genes
    2.1.2 Chromosomes
    2.1.3 Selectors
    2.1.4 Alterers
    2.1.5 Statistics
    2.1.6 Engine
  2.2 Encoding
    2.2.1 Real function
    2.2.2 Scalar function
    2.2.3 Vector function
    2.2.4 Affine transformation
    2.2.5 Graph
  2.3 Codec
    2.3.1 Scalar codec
    2.3.2 Vector codec
    2.3.3 Subset codec
    2.3.4 Permutation codec
    2.3.5 Mapping codec
    2.3.6 Composite codec
  2.4 Problem
  2.5 Validation
  2.6 Termination
    2.6.1 Fixed generation
    2.6.2 Steady fitness
    2.6.3 Evolution time
    2.6.4 Fitness threshold
    2.6.5 Fitness convergence
    2.6.6 Population convergence
    2.6.7 Gene convergence
  2.7 Reproducibility
  2.8 Evolution performance
  2.9 Evolution strategies
    2.9.1 (µ, λ) evolution strategy
    2.9.2 (µ + λ) evolution strategy
  2.10 Evolution interception

3 Internals
  3.1 PRNG testing
  3.2 Random seeding
4 Modules
  4.1 io.jenetics.ext
    4.1.1 Data structures
      4.1.1.1 Tree
      4.1.1.2 Parentheses tree
      4.1.1.3 Flat tree
    4.1.2 Genes
      4.1.2.1 BigInteger gene
      4.1.2.2 Tree gene
    4.1.3 Operators
    4.1.4 Weasel program
    4.1.5 Modifying Engine
      4.1.5.1 ConcatEngine
      4.1.5.2 CyclicEngine
      4.1.5.3 AdaptiveEngine
    4.1.6 Multi-objective optimization
      4.1.6.1 Pareto efficiency
      4.1.6.2 Implementing classes
      4.1.6.3 Termination
  4.2 io.jenetics.prog
    4.2.1 Operations
    4.2.2 Program creation
    4.2.3 Program repair
    4.2.4 Program pruning
  4.3 io.jenetics.xml
    4.3.1 XML writer
    4.3.2 XML reader
    4.3.3 Marshalling performance
  4.4 io.jenetics.prngine

5 Examples
  5.1 Ones counting
  5.2 Real function
  5.3 Rastrigin function
  5.4 0/1 Knapsack
  5.5 Traveling salesman
  5.6 Evolving images
  5.7 Symbolic regression
  5.8 DTLZ1

6 Build

Bibliography

List of Figures
1.2.1 Evolution workflow
1.2.2 Evolution engine model
1.2.3 Package structure
1.3.1 Domain model
1.3.2 Chromosome structure
1.3.3 Genotype structure
1.3.4 Row-major Genotype vector
1.3.5 Column-major Genotype vector
1.3.6 Genotype scalar
1.3.7 Fitness proportional selection
1.3.8 Stochastic-universal selection
1.3.9 Single-point crossover
1.3.10 2-point crossover
1.3.11 3-point crossover
1.3.12 Partially-matched crossover
1.3.13 Uniform crossover
1.3.14 Line crossover hypercube
1.4.1 Evaluation batch
1.4.2 Block splitting
1.4.3 Leapfrogging
1.4.4 Seq class diagram
2.2.1 Undirected graph and adjacency matrix
2.2.2 Directed graph and adjacency matrix
2.2.3 Weighted graph and adjacency matrix
2.3.1 Mapping codec types
2.6.1 Fixed generation termination
2.6.2 Steady fitness termination
2.6.3 Execution time termination
2.6.4 Fitness threshold termination
2.6.5 Fitness convergence termination: N_S = 10, N_L = 30
2.6.6 Fitness convergence termination: N_S = 50, N_L = 150
2.8.1 Selector-performance (Knapsack)
4.0.1 Module graph
4.1.1 Example tree
4.1.2 Parentheses tree syntax diagram
4.1.3 Example FlatTree
4.1.4 Single-node crossover
4.1.5 Engine concatenation
4.1.6 Cyclic Engine
4.1.7 Adaptive Engine
4.1.8 Circle points
4.1.9 Maximizing Pareto front
4.1.10 Minimizing Pareto front
4.3.1 Genotype write performance
4.3.2 Genotype read performance
5.2.1 Real function
5.3.1 Rastrigin function
5.6.1 Evolving images UI
5.6.2 Evolving Mona Lisa images
5.7.1 Symbolic regression polynomial
5.8.1 Pareto front DTLZ1

Chapter 1

Fundamentals
Jenetics is an advanced Genetic Algorithm, Evolutionary Algorithm and
Genetic Programming library, respectively, written in modern day Java.
It is designed with a clear separation of the several algorithm concepts, e. g.
Gene, Chromosome, Genotype, Phenotype, population and fitness Function.
Jenetics allows you to minimize or maximize a given fitness function without
tweaking it. In contrast to other GA implementations, the library uses the
concept of an evolution stream (EvolutionStream) for executing the evolution
steps. Since the EvolutionStream implements the Java Stream interface, it
works smoothly with the rest of the Java Stream API. This chapter describes
the design concepts and their implementation. It also gives some basic examples
and best practice tips.1

1.1 Introduction

Jenetics is a library, written in Java2, which provides a genetic algorithm (GA)
and genetic programming (GP) implementation. It has no runtime dependencies
on other libraries, except the Java 8 runtime. Although the library is written in
Java 8, it is compilable with Java 9, 10 and 11. Jenetics is available in the Maven
central repository3 and can be easily integrated into existing projects. The very
clear structuring of the different parts of the GA allows an easy adaptation to
different problem domains.

This manual is not an introduction or a tutorial for genetic and/or evolutionary algorithms in general. It is assumed that the reader has knowledge about
the structure and the functionality of genetic algorithms. Good introductions to GAs can be found in [29], [21], [28], [20], [22] or [33]. For
genetic programming you can have a look at [18] or [19].

1 The classes described in this chapter reside in the io.jenetics.base module or
io:jenetics:jenetics:4.3.0 artifact, respectively.
2 The library is built with and depends on Java SE 8: http://www.oracle.com/technetwork/
java/javase/downloads/index.html
3 https://mvnrepository.com/artifact/io.jenetics/jenetics: If you are using Gradle,
you can use the following dependency string: »io.jenetics:jenetics:4.3.0«.


To give you a first impression of how to use Jenetics, let's start with a
simple »Hello World« program. This first example implements the well-known
bit-counting problem.
import io.jenetics.BitChromosome;
import io.jenetics.BitGene;
import io.jenetics.Genotype;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionResult;
import io.jenetics.util.Factory;

public final class HelloWorld {
    // 2.) Definition of the fitness function.
    private static int eval(final Genotype<BitGene> gt) {
        return gt.getChromosome()
            .as(BitChromosome.class)
            .bitCount();
    }

    public static void main(final String[] args) {
        // 1.) Define the genotype (factory) suitable
        //     for the problem.
        final Factory<Genotype<BitGene>> gtf =
            Genotype.of(BitChromosome.of(10, 0.5));

        // 3.) Create the execution environment.
        final Engine<BitGene, Integer> engine = Engine
            .builder(HelloWorld::eval, gtf)
            .build();

        // 4.) Start the execution (evolution) and
        //     collect the result.
        final Genotype<BitGene> result = engine.stream()
            .limit(100)
            .collect(EvolutionResult.toBestGenotype());

        System.out.println("Hello World:\n\t" + result);
    }
}

Listing 1.1: »Hello World« GA
In contrast to other GA implementations, Jenetics uses the concept of an
evolution stream (EvolutionStream) for executing the evolution steps. Since
the EvolutionStream implements the Java Stream interface, it works smoothly
with the rest of the Java Stream API. Now let’s have a closer look at listing 1.1
and discuss this simple program step by step:
1. Probably the most challenging part, when setting up a new evolution
Engine, is to transform the problem domain into an appropriate Genotype
(factory) representation.4 In our example we want to count the number
of ones of a BitChromosome. Since we are counting only the ones of one
chromosome, we are adding only one BitChromosome to our Genotype.
In general, the Genotype can be created with 1 to n chromosomes. For a
detailed description of the genotype's structure have a look at section 1.3.1.3
on page 8.

2. Once this is done, the fitness function, which should be maximized, can
be defined. Utilizing language features introduced in Java 8, we simply
write a private static method, which takes the genotype we defined and
calculates its fitness value. If we want to use the optimized bit-counting
method, bitCount(), we have to cast the Chromosome<BitGene> class
to the actually used BitChromosome class. Since we know for sure that we
created the Genotype with a BitChromosome, this can be done safely. A
reference to the eval method is then used as fitness function and passed
to the Engine.builder method.

4 Section 2.2 on page 45 describes some common problem encodings.
3. In the third step we are creating the evolution Engine, which is responsible
for changing, respectively evolving, a given population. The Engine is highly
configurable and takes parameters for controlling the evolutionary and the
computational environment. For changing the evolutionary behavior, you
can set different alterers and selectors (see section 1.3.2 on page 12). By
changing the used Executor service, you control the number of threads
the Engine is allowed to use. A new Engine instance can only be created
via its builder, which is created by calling the Engine.builder method.
4. In the last step, we can create a new EvolutionStream from our Engine.
The EvolutionStream is the model (or view) of the evolutionary process.
It serves as a »process handle« and also allows you, among other things,
to control the termination of the evolution. In our example, we simply
truncate the stream after 100 generations. If you don’t limit the stream, the
EvolutionStream will not terminate and run forever. The final result, the
best Genotype in our example, is then collected with one of the predefined
collectors of the EvolutionResult class.
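The termination behavior and the computational environment from steps 3 and 4 can be configured further. The following sketch is not part of the original example; it reuses HelloWorld::eval and the gtf factory from listing 1.1, and the thread-pool size, the steady-fitness window and the generation limit are purely illustrative values. It additionally requires imports of java.util.concurrent.Executors, io.jenetics.Phenotype and io.jenetics.engine.Limits (the available termination strategies are described in section 2.6).

// Sketch only: a variation of listing 1.1 with an explicit executor and
// a combined termination criterion. All parameter values are illustrative.
final ExecutorService executor = Executors.newFixedThreadPool(4);
final Engine<BitGene, Integer> engine = Engine
    .builder(HelloWorld::eval, gtf)
    .executor(executor)                // threads the Engine is allowed to use
    .build();

final Phenotype<BitGene, Integer> best = engine.stream()
    .limit(Limits.bySteadyFitness(20)) // stop after 20 generations without improvement
    .limit(500)                        // but never run more than 500 generations
    .collect(EvolutionResult.toBestPhenotype());

executor.shutdown();

The two limit calls are simply chained stream operations: the evolution stops as soon as either of them truncates the stream.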
As the example shows, Jenetics makes heavy use of the Stream and Collector
classes of Java 8. Also the newly introduced lambda expressions and the
functional interfaces (SAM types) play an important role in the library design.
There are many other GA implementations out there and they may slightly
differ in the order of the single execution steps. Jenetics uses a classical
approach. Listing 1.2 shows the (imperative) pseudo-code of the Jenetics
genetic algorithm steps.
1  P_0 ← P_initial
2  F(P_0)
3  while !finished do
4      g ← g + 1
5      S_g ← select_S(P_{g-1})
6      O_g ← select_O(P_{g-1})
7      O_g ← alter(O_g)
8      P_g ← filter[g_i ≤ g_max](S_g) + filter[g_i ≤ g_max](O_g)
9      F(P_g)

Listing 1.2: Genetic algorithm
Line (1) creates the initial population and line (2) calculates the fitness value
of the individuals. The initial population is created implicitly before the first
evolution step is performed. Line (4) increases the generation number and lines (5)
and (6) select the survivor and the offspring population. The offspring/survivor
fraction is determined by the offspringFraction property of the Engine.Builder. The selected offspring are altered in line (7). The next line combines
the survivor population and the altered offspring population—after removing
the dead individuals—into the new population. The steps from line (4) to (9) are
repeated until a given termination criterion is fulfilled.

1.2 Architecture

The basic metaphor of the Jenetics library is the Evolution Stream, implemented
via the Java 8 Stream API. Therefore it is no longer necessary (nor advised)
to perform the evolution steps in an imperative way. An evolution stream is
powered by—and bound to—an Evolution Engine, which performs the needed
evolution steps for each generation; the steps are described in the body of the
while-loop of listing 1.2 on the previous page.

Figure 1.2.1: Evolution workflow
The described evolution workflow is also illustrated in figure 1.2.1, where
ES(i) denotes the EvolutionStart object at generation i and ER(i) the EvolutionResult at the ith generation. Once the evolution Engine is created, it
can be used by multiple EvolutionStreams, which can be safely used in different
execution threads. This is possible because the evolution Engine doesn't have
any mutable global state. It is practically a stateless function, f_E : P → P,
which maps a start population, P, to an evolved result population. The Engine
function, f_E, is, of course, non-deterministic. Calling it twice with the same
start population will lead to different result populations.
The evolution process terminates when the EvolutionStream is truncated, and
the EvolutionStream truncation is controlled by the limit predicate. As
long as the predicate returns true, the evolution is continued.5 At last, the
EvolutionResult is collected from the EvolutionStream by one of the available
EvolutionResult collectors.

Figure 1.2.2: Evolution engine model
Figure 1.2.2 shows the static view of the main evolution classes, together
with their dependencies. Since the Engine class itself is immutable, and can't
be changed after creation, it is instantiated (configured) via a builder. The
Engine can be used to create an arbitrary number of EvolutionStreams. The
EvolutionStream is used to control the evolutionary process and collect the final
result. This is done in the same way as for the normal java.util.stream.Stream classes. With the additional limit(Predicate) method, it is possible
to truncate the EvolutionStream if some termination criterion is fulfilled. The
separation of Engine and EvolutionStream is the separation of the evolution
definition and evolution execution.
5 See section 2.6 on page 60 for a detailed description of the available termination strategies.

Figure 1.2.3: Package structure
In figure 1.2.3 the package structure of the library is shown and it consists of
the following packages:
io.jenetics This is the base package of the Jenetics library and contains all
domain classes, like Gene, Chromosome or Genotype. Most of these types
are immutable data classes and don't implement any behavior. It also
contains the Selector and Alterer interfaces and their implementations.
The classes in this package are (almost) sufficient to implement your own
GA.
io.jenetics.engine This package contains the actual GA implementation
classes, e. g. Engine, EvolutionStream or EvolutionResult. They
mainly operate on the domain classes of the io.jenetics package.
io.jenetics.stat This package contains additional statistics classes which are
not available in the Java core library. Java only includes classes for calculating the sum and the average of a given numeric stream (e. g. DoubleSummaryStatistics). With the additions in this package it is also possible
to calculate the variance, skewness and kurtosis—using the DoubleMomentStatistics class. The EvolutionStatistics object, which can be calculated for every generation, relies on the classes of this package.
io.jenetics.util This package contains the collection classes (Seq, ISeq and
MSeq) which are used in the public interfaces of the Chromosome and Genotype. It also contains the RandomRegistry class, which implements the
global PRNG lookup, as well as helper IO classes for serializing Genotypes
and whole populations.
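As a small illustration of the RandomRegistry mentioned above, the following sketch replaces the globally used PRNG with a seeded java.util.Random instance; the seed value is arbitrary and only serves the example.

// Register a seeded PRNG, which is then used by the library classes.
RandomRegistry.setRandom(new Random(123));
// Obtain the currently registered PRNG.
final Random random = RandomRegistry.getRandom();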

1.3 Base classes

This chapter describes the main classes which are needed to set up and run a
genetic algorithm with the Jenetics6 library. They can be roughly divided into
three types:
6 The documentation of the whole API is part of the download package or can be viewed
online: http://jenetics.io/javadoc/jenetics/4.3/index.html.


Domain classes These classes form the domain model of the evolutionary algorithm and contain the structural classes like Gene and Chromosome. They
are directly located in the io.jenetics package.
Operation classes These classes operate on the domain classes and include the
Alterer and Selector classes. They are also located in the io.jenetics
package.
Engine classes These classes implement the actual evolutionary algorithm and
reside solely in the io.jenetics.engine package.

1.3.1 Domain classes

Most of the domain classes are pure data classes and can be treated as value
objects7 . All Gene and Chromosome implementations are immutable as well as
the Genotype and Phenotype class.

Figure 1.3.1: Domain model
Figure 1.3.1 shows the class diagram of the domain classes. All domain
classes are located in the io.jenetics package. The Gene is the base of the
class structure. Genes are aggregated in Chromosomes. One to n Chromosomes
are aggregated in Genotypes. A Genotype and a fitness Function form the
Phenotype, which are collected into a population Seq.
1.3.1.1 Gene

Genes are the basic building blocks of the Jenetics library. They contain the
actual information of the encoded solution, the allele. Some of the implementations also contain domain information of the wrapped allele. This is the
case for all BoundedGenes, which contain the allowed minimum and maximum
values. All Gene implementations are final and immutable. In fact, they are all
value-based classes and fulfill the properties which are described in the Java API
documentation[24].8
Beside the container functionality for the allele, every Gene is its own factory
and is able to create new, random instances of the same type and with the same
constraints. The factory methods are used by the Alterers for creating new
Genes from the existing ones and play a crucial role in the exploration of the
problem space.
7 https://en.wikipedia.org/wiki/Value_object
8 It is also worth reading the blog entry from Stephen Colebourne: http://blog.joda.org/
2014/03/valjos-value-java-objects.html
public interface Gene<A, G extends Gene<A, G>>
    extends Factory<G>, Verifiable
{
    public A getAllele();
    public G newInstance();
    public G newInstance(A allele);
    public boolean isValid();
}

Listing 1.3: Gene interface
Listing 1.3 shows the most important methods of the Gene interface. The
isValid method, introduced by the Verifiable interface, allows the gene to
mark itself as invalid. All invalid genes are replaced with new ones during the
evolution phase.
The available Gene implementations in the Jenetics library should cover a
wide range of problem encodings. Refer to chapter 2.1.1 on page 39 for how to
implement your own Gene types.
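A short usage sketch of this factory behavior, using the DoubleGene implementation; the concrete values are only illustrative.

// A gene with a random allele in the range [0.0, 1.0).
final DoubleGene gene = DoubleGene.of(0.0, 1.0);
// A new gene with the same range but a given allele.
final DoubleGene fixed = gene.newInstance(0.5);
// A new gene with the same range and a new random allele.
final DoubleGene random = gene.newInstance();
// Check the gene against its own constraints.
final boolean valid = gene.isValid();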
1.3.1.2 Chromosome

A Chromosome is a collection of Genes which contains at least one Gene. This
allows encoding problems which require more than one Gene. Like the Gene
interface, the Chromosome is also its own factory and allows creating a new
Chromosome from a given Gene sequence.
public interface Chromosome<G extends Gene<?, G>>
    extends Factory<Chromosome<G>>, Iterable<G>, Verifiable
{
    public Chromosome<G> newInstance(ISeq<G> genes);
    public G getGene(int index);
    public ISeq<G> toSeq();
    public Stream<G> stream();
    public int length();
}

Listing 1.4: Chromosome interface
Listing 1.4 shows the main methods of the Chromosome interface. This are the
methods for accessing single Genes by index and as an ISeq respectively, and
the factory method for creating a new Chromosome from a given sequence of
Genes. The factory method is used by the Alterer classes which were able to
create altered Chromosome from a (changed) Gene sequence.
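The following sketch illustrates these access methods with a DoubleChromosome; the chromosome range and length are illustrative values.

// A chromosome with five genes, each with an allele in [0.0, 1.0).
final DoubleChromosome ch = DoubleChromosome.of(0.0, 1.0, 5);
final DoubleGene first = ch.getGene(0);     // gene at index 0
final ISeq<DoubleGene> genes = ch.toSeq();  // genes as an immutable sequence
final int length = ch.length();             // number of genes: 5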
Most of the Chromosome implementations can be created with variable length.
E. g. the IntegerChromosome can be created with variable length, where the
minimum value of the length range is included and the maximum value of the
length range is excluded.
IntegerChromosome chromosome = IntegerChromosome.of(
    0, 1_000, IntRange.of(5, 9)
);

The factory method of the IntegerChromosome will now create chromosome
instances with a length between [range_min, range_max), equally distributed. Figure 1.3.2 on the following page shows the structure of a Chromosome with variable
length.


Figure 1.3.2: Chromosome structure
1.3.1.3 Genotype

The central class, the evolution Engine is working with, is the Genotype. It
is the structural and immutable representative of an individual and consists
of one to n Chromosomes. All Chromosomes must be parameterized with the
same Gene type, but they are allowed to have different lengths and constraints. The
allowed minimal and maximal values of a NumericChromosome are an example
of such a constraint. Within the same chromosome, all numeric gene alleles must
lie within the defined minimal and maximal values.

Figure 1.3.3: Genotype structure
Figure 1.3.3 shows the Genotype structure. A Genotype consists of N_G
Chromosomes and a Chromosome consists of N_C[i] Genes (depending on the
Chromosome). The overall number of Genes of a Genotype is given by the sum
of the Chromosome's Genes, which can be accessed via the Genotype.geneCount() method:

    N_g = \sum_{i=0}^{N_G - 1} N_{C[i]}    (1.3.1)

As already mentioned, the Chromosomes of a Genotype don't necessarily have
to have the same size. It is only required that all genes are from the same
type and the Genes within a Chromosome have the same constraints; e. g. the
same min- and max values for numerical Genes.
Genotype<DoubleGene> genotype = Genotype.of(
    DoubleChromosome.of(0.0,  1.0,  8),
    DoubleChromosome.of(1.0,  2.0, 10),
    DoubleChromosome.of(0.0, 10.0,  9),
    DoubleChromosome.of(0.1,  0.9,  5)
);

The code snippet in the listing above creates a Genotype with the same structure
as shown in figure 1.3.3 on the preceding page. In this example the DoubleGene
has been chosen as Gene type.
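For the genotype above, the Genotype.geneCount() method, following equation 1.3.1, returns 8 + 10 + 9 + 5 = 32 genes.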
Genotype vector The Genotype is essentially a two-dimensional composition
of Genes. This makes it trivial to create Genotypes which can be treated
as Gene matrices. If it's needed to create a vector of Genes, there are two
possibilities to do so:
1. creating a row-major or
2. creating a column-major
Genotype vector. Each of the two possibilities has specific advantages and
disadvantages.

Figure 1.3.4: Row-major Genotype vector
Figure 1.3.4 shows a Genotype vector in row-major layout. A Genotype
vector of length n needs one Chromosome of length n. Each Gene of such a vector
obeys the same constraints. E. g., for Genotype vectors containing NumericGenes, all Genes must have the same minimum and maximum values. If the
problem space doesn’t need to have different minimum and maximum values, the
row-major Genotype vector is the preferred choice. Beside the easier Genotype
creation, the available Recombinator alterers are more efficient in exploring the
search domain.

If the problem space allows equal Gene constraints, the row-major Genotype
vector encoding should be chosen. It is easier to create and the available
Recombinator classes are more efficient in exploring the search domain.

The following code snippet shows the creation of a row-major Genotype
vector. All Alterers derived from the Recombinator do a fairly good job in
exploring the problem space for row-major Genotype vectors.
Genotype<DoubleGene> genotype = Genotype.of(
    DoubleChromosome.of(0.0, 1.0, 8)
);

The column-major Genotype vector layout must be chosen when the problem
space requires components (Genes) with different constraints. This is almost the

only reason for choosing the column-major layout. The layout of this Genotype
vector is shown in 1.3.5. For a vector of length n, n Chromosomes of length one
are needed.

Figure 1.3.5: Column-major Genotype vector
The code snippet below shows how to create a Genotype vector in column-major
layout. It's a little more effort to create such a vector, since every Gene
has to be wrapped into a separate Chromosome. The DoubleChromosome in the
given example has a length of one, when the length parameter is omitted.
Genotype<DoubleGene> genotype = Genotype.of(
    DoubleChromosome.of(0.0,  1.0),
    DoubleChromosome.of(1.0,  2.0),
    DoubleChromosome.of(0.0, 10.0),
    DoubleChromosome.of(0.1,  0.9)
);

The greater flexibility of a column-major Genotype vector has to be paid for with
a lower exploration capability of the Recombinator alterers. Using Crossover
alterers will have the same effect as the SwapMutator, when used with row-major
Genotype vectors. Recommended alterers for vectors of NumericGenes are:
• MeanAlterer9,
• LineCrossover10 and
• IntermediateCrossover11
See also 2.3.2 on page 53 for an advanced description on how to use the predefined
vector codecs.
Genotype scalar A very special case of a Genotype contains only one Chromosome with length one. The layout of such a Genotype scalar is shown in 1.3.6
on the next page. Such Genotypes are mostly used for encoding real function
problems.
How to create a Genotype for a real function optimization problem is shown
in the code snippet below. The recommended Alterers are the same as for
column-major Genotype vectors: MeanAlterer, LineCrossover and IntermediateCrossover.
9 See 1.3.2.2 on page 20.
10 See 1.3.2.2 on page 21.
11 See 1.3.2.2 on page 21.


Figure 1.3.6: Genotype scalar

Genotype<DoubleGene> genotype = Genotype.of(
    DoubleChromosome.of(0.0, 1.0)
);

See also 2.3.1 on page 52 for an advanced description on how to use the predefined
scalar codecs.
1.3.1.4 Phenotype

The Phenotype is the actual representative of an individual and consists of
the Genotype and the fitness Function, which is used to (lazily) calculate the
Genotype’s fitness value.12 It is only a container which forms the environment
of the Genotype and doesn’t change the structure. Like the Genotype, the
Phenotype is immutable and can’t be changed after creation.
public final class Phenotype<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
>
    implements Comparable<Phenotype<G, C>>
{
    public C getFitness();
    public Genotype<G> getGenotype();
    public long getAge(long currentGeneration);
    public void evaluate();
}

Listing 1.5: Phenotype class
Listing 1.5 shows the main methods of the Phenotype. The fitness property
will return the actual fitness value of the Genotype, which can be fetched with
the getGenotype method. To make the runtime behavior predictable, the fitness
value is evaluated lazily: either by querying the fitness property or through
the call of the evaluate method. The evolution Engine calls the evaluate
method in a separate step and makes the fitness evaluation time available
through the EvolutionDurations class. In addition to the fitness value, the
Phenotype contains the generation when it was created. This allows calculating
the current age and removing overaged individuals from the population.
1.3.1.5 Population

There is no special class which represents a population. It's just a collection
of phenotypes. As collection class, the Seq interface is used. The own Seq
implementations allow expressing the mutability state of the population at the
type level and make the coding more reliable. For a detailed description of these
collection classes see section 1.4.4 on page 36.
12 Since the fitness Function is shared by all Phenotypes, calls to the fitness Function must
be idempotent. See section 1.3.3.1 on page 22.

1.3.2 Operation classes

Genetic operators are used for creating genetic diversity (Alterer) and selecting
potentially useful solutions for recombination (Selector). This section gives an
overview about the genetic operators available in the Jenetics library. It also
contains some theoretical information, which should help you to choose the right
combination of operators and parameters, for the problem to be solved.
1.3.2.1 Selector

Selectors are responsible for selecting a given number of individuals from the
population. The selectors are used to divide the population into survivors
and offspring. The selectors for offspring and for the survivors can be chosen
independently.

The selection process of the Jenetics library acts on Phenotypes and indirectly, via the fitness function, on Genotypes. Direct Gene- or population
selection is not supported by the library.

Engine engine = Engine.builder(...)
    .offspringFraction(0.7)
    .survivorsSelector(new RouletteWheelSelector<>())
    .offspringSelector(new TournamentSelector<>())
    .build();

The offspringFraction, f_O ∈ [0, 1], determines the number of selected offspring

    N_{O_g} = \|O_g\| = \mathrm{rint}(\|P_g\| \cdot f_O)    (1.3.2)

and the number of selected survivors

    N_{S_g} = \|S_g\| = \|P_g\| - \|O_g\|.    (1.3.3)

The Jenetics library contains the following selector implementations:
• TournamentSelector
• TruncationSelector
• MonteCarloSelector
• ProbabilitySelector
• RouletteWheelSelector
• LinearRankSelector
• ExponentialRankSelector
• BoltzmannSelector
• StochasticUniversalSelector
• EliteSelector
Beside the well-known standard selector implementations, the ProbabilitySelector is the base of a set of fitness proportional selectors.

Tournament selector In tournament selection the best individual from a
random sample of s individuals is chosen from the population P_g. The samples are
drawn with replacement. An individual will win a tournament only if the fitness
is greater than the fitness of the other s − 1 competitors. Note that the worst
individual never survives, and the best individual wins in all the tournaments it
participates in. The selection pressure can be varied by changing the tournament
size s. For large values of s, weak individuals have less chance of being selected.
Compared with fitness proportional selectors, the tournament selector is often
used in practice because of its lack of stochastic noise. Tournament selectors are
also independent of the scaling of the genetic algorithm fitness function.
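As an illustration, the sketch below configures tournament selection with a tournament size of s = 5 for the offspring and s = 3 for the survivors. The toy fitness function and the 1×1 genotype factory are only placeholders and not taken from the text.

final Engine<DoubleGene, Double> engine = Engine
    .builder(
        gt -> gt.getGene().doubleValue(),            // placeholder fitness function
        Genotype.of(DoubleChromosome.of(0.0, 1.0)))  // placeholder genotype factory
    .offspringSelector(new TournamentSelector<>(5))  // tournament size s = 5
    .survivorsSelector(new TournamentSelector<>(3))  // tournament size s = 3
    .build();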
Truncation selector In truncation selection individuals are sorted according
to their fitness and only the n best individuals are selected. The truncation
selection is a very basic selection algorithm. Its strength lies in quickly selecting
individuals in large populations, but it is not very often used in practice; whereas
the truncation selection is a standard method in animal and plant breeding. Only
the best animals, ranked by their phenotypic value, are selected for reproduction.
Monte Carlo selector The Monte Carlo selector selects the individuals from
a given population randomly. Instead of a directed search, the Monte Carlo
selector performs a random search. This selector can be used to measure the
performance of other selectors. In general, the performance of a selector should
be better than the selection performance of the Monte Carlo selector. If the
Monte Carlo selector is used for selecting the parents for the population, it will
be a little bit more disruptive, on average, than roulette wheel selection.[29]
Probability selectors Probability selectors are a variation of fitness proportional selectors and select individuals from a given population based on their
selection probability P(i). Fitness proportional selection works as shown in

Figure 1.3.7: Fitness proportional selection
figure 1.3.7. A uniformly distributed random number r ∈ [0, F) specifies which
individual is selected, by argument minimization:

    i \leftarrow \operatorname{argmin}_{n \in [0,N)} \left( r < \sum_{i=0}^{n} f_i \right)    (1.3.4)

where N is the number of individuals and f_i the fitness value of the ith individual.
The probability selector works the same way, only the fitness value f_i is replaced

by the individual's selection probability P(i). It is not necessary to sort the
population. The selection probability of an individual i follows a binomial
distribution

    P(i, k) = \binom{n}{k} P(i)^k \left(1 - P(i)\right)^{n-k}    (1.3.5)

where n is the overall number of selected individuals and k the number of
individual i in the set of selected individuals. The runtime complexity of the
implemented probability selectors is O(n + log(n)) instead of O(n²) as for the
naive approach: a binary (index) search is performed on the summed probability
array.
Roulette-wheel selector The roulette-wheel selector is also known as fitness
proportional selector and Jenetics implements it as probability selector. For
calculating the selection probability P(i), the fitness value f_i of individual i is
used.

    P(i) = \frac{f_i}{\sum_{j=0}^{N-1} f_j}    (1.3.6)

Selecting n individuals from a given population is equivalent to playing n times
on the roulette-wheel. The population doesn't have to be sorted before selecting
the individuals. Notice that equation 1.3.6 assumes that all fitness values are
positive and the sum of the fitness values is not zero. To cope with negative
fitnesses, an adapted formula is used for calculating the selection probabilities.

    P'(i) = \frac{f_i - f_{min}}{\sum_{j=0}^{N-1} \left(f_j - f_{min}\right)}, \quad
    \text{where} \quad f_{min} = \min_{i \in [0,N)} \{f_i, 0\}    (1.3.7)

As you can see, the worst fitness value f_{min}, if negative, has now a selection
probability of zero. In the case that the sum of the corrected fitness values is
zero, the selection probability of all fitness values will be set to 1/N.
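A small numeric sketch of equations 1.3.6 and 1.3.7, using three illustrative fitness values, one of them negative (java.util.Arrays needs to be imported):

// Illustrative fitness values; one of them is negative.
final double[] f = {3.0, -1.0, 2.0};
// f_min = min{f_i, 0} = -1.0
final double fMin = Math.min(Arrays.stream(f).min().getAsDouble(), 0.0);
// Sum of the corrected fitness values: 4 + 0 + 3 = 7
final double sum = Arrays.stream(f).map(fi -> fi - fMin).sum();
// Adapted selection probabilities: {4/7, 0, 3/7}
final double[] p = Arrays.stream(f).map(fi -> (fi - fMin)/sum).toArray();

As described above, the worst (negative) fitness value ends up with a selection probability of zero.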
Linear-rank selector The roulette-wheel selector will have problems when
the fitness values differ very much. If the best chromosome fitness is 90%, its
circumference occupies 90% of the roulette-wheel, and then other chromosomes have
too few chances to be selected.[29] In linear-ranking selection the individuals
are sorted according to their fitness values. The rank N is assigned to the best
individual and the rank 1 to the worst individual. The selection probability P(i)
of individual i is linearly assigned to the individuals according to their rank.

    P(i) = \frac{1}{N} \left( n^{-} + \left( n^{+} - n^{-} \right) \frac{i - 1}{N - 1} \right)    (1.3.8)

Here n^{-}/N is the probability of the worst individual to be selected and n^{+}/N the
probability of the best individual to be selected. As the population size is held
constant, the conditions n^{+} = 2 − n^{-} and n^{-} ≥ 0 must be fulfilled. Note that
all individuals get a different rank, respectively a different selection probability,
even if they have the same fitness value.[5]
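For example, with N = 5, n^{-} = 0 and n^{+} = 2, equation 1.3.8 assigns the selection probabilities 0, 0.1, 0.2, 0.3 and 0.4 to the individuals with rank 1 (worst) up to rank 5 (best); the probabilities sum up to one.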

Exponential-rank selector An alternative to the weak linear-rank selector
is to assign survival probabilities to the sorted individuals using an exponential
function:

    P(i) = (c - 1) \frac{c^{\,i-1}}{c^N - 1}    (1.3.9)

where c must be within the range [0, 1). A small value of c increases the probability
of the best individual to be selected. If c is set to zero, the selection probability of
the best individual is set to one. The selection probability of all other individuals
is zero. A value near one equalizes the selection probabilities. This selector sorts
the population in descending order before calculating the selection probabilities.

Boltzmann selector The selection probability of the Boltzmann selector is
defined as

    P(i) = \frac{e^{b \cdot f_i}}{Z}    (1.3.10)

where b is a parameter which controls the selection intensity and Z is defined as

    Z = \sum_{i=1}^{n} e^{f_i}.    (1.3.11)

Positive values of b increase the selection probability of individuals with high
fitness values and negative values of b decrease it. If b is zero, the selection
probability of all individuals is set to 1/N.
Stochastic-universal selector Stochastic-universal selection[1] (SUS) is a
method for selecting individuals according to some given probability in a way
that minimizes the chance of fluctuations. It can be viewed as a type of roulette
game where we now have p equally spaced points which we spin. SUS uses
a single random value for selecting individuals by choosing them at equally
spaced intervals. Weaker members of the population (according to their fitness)
have a better chance to be chosen, which reduces the unfair nature of fitnessproportional selection methods. The selection method was introduced by James
Baker.[2] Figure 1.3.8 shows the function of the stochastic-universal selection,

Figure 1.3.8: Stochastic-universal selection
where n is the number of individuals to select. Stochastic universal sampling
ensures a selection of offspring, which is closer to what is deserved than roulette
wheel selection.[29]

Elite selector The EliteSelector copies a small proportion of the fittest
candidates, without changes, into the next generation. This may have a dramatic
impact on performance by ensuring that the GA doesn't waste time re-discovering
previously refused partial solutions. Individuals that are preserved through
elitism remain eligible for selection as parents of the next generation. Elitism is
also related to memory: remember the best solution found so far. A problem
with elitism is that it may cause the GA to converge to a local optimum, so pure
elitism is a race to the nearest local optimum. The elite selector implementation of
the Jenetics library also lets you specify the selector for the non-elite individuals.
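A configuration sketch for this: keep the two best phenotypes unchanged and select the remaining individuals with a tournament selector. The elite count and the non-elite selector are illustrative choices, and the sketch assumes the EliteSelector constructor that takes the elite count together with the non-elite selector.

// Assumed constructor: (eliteCount, nonEliteSelector).
final Selector<DoubleGene, Double> selector =
    new EliteSelector<>(2, new TournamentSelector<>(3));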
1.3.2.2 Alterer

The problem encoding/representation determines the bounds of the search space,
but the Alterers determine how the space can be traversed: Alterers are
responsible for the genetic diversity of the EvolutionStream. The two Alterer
hierarchies used in Jenetics are:
1. mutation and
2. recombination (e. g. crossover).

First we will have a look at the mutation — There are two distinct
roles mutation plays in the evolution process:
1. Exploring the search space: By making small moves, mutation allows
a population to explore the search space. This exploration is often slow
compared to crossover, but in problems where crossover is disruptive this
can be an important way to explore the landscape.
2. Maintaining diversity: Mutation prevents a population from converging
to a local minimum by stopping the solutions from becoming too close to one
another. A genetic algorithm can improve the solution solely by the
mutation operator. Even if most of the search is being performed by
crossover, mutation can be vital to provide the diversity which crossover
needs.
The mutation probability, P (m), is the parameter that must be optimized. The
optimal value of the mutation rate depends on the role mutation plays. If
mutation is the only source of exploration (if there is no crossover), the mutation
rate should be set to a value that ensures that a reasonable neighborhood of
solutions is explored.
The mutation probability, P(m), is defined as the probability that a specific
gene, over the whole population, is mutated. That means the (average) number
of genes mutated by a mutator is

    \hat{\mu} = N_P \cdot N_g \cdot P(m)    (1.3.12)

where N_g is the number of available genes of a genotype and N_P the population
size (refer to equation 1.3.1 on page 8).
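For example, a population of N_P = 50 genotypes with N_g = 32 genes each and a mutation probability of P(m) = 0.01 yields, on average, \hat{\mu} = 50 · 32 · 0.01 = 16 mutated genes per generation.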


Mutator The mutator has to deal with the problem that the genes are
arranged in a 3D structure (see chapter 1.3.1.3). The mutator selects the gene
which will be mutated in three steps:
1. Select a genotype G[i] from the population with probability PG (m),
2. select a chromosome C[j] from the selected genotype G[i] with probability
PC (m) and
3. select a gene g[k] from the selected chromosome C[j] with probability
Pg (m).
The needed sub-selection probabilities are set to
    P_G(m) = P_C(m) = P_g(m) = \sqrt[3]{P(m)}.    (1.3.13)
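For example, an overall mutation probability of P(m) = 0.125 leads to a sub-selection probability of \sqrt[3]{0.125} = 0.5 on each of the three levels.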

Gaussian mutator The Gaussian mutator performs the mutation of number
genes. This mutator picks a new value based on a Gaussian distribution around
the current value of the gene. The variance of the new value (before clipping to
the allowed gene range) will be
    \hat{\sigma}^2 = \left( \frac{g_{max} - g_{min}}{4} \right)^2    (1.3.14)

where g_{min} and g_{max} are the valid minimum and maximum values of the number
gene. The new value will be cropped to the gene's boundaries.
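For a number gene with the range [0, 10], for instance, equation 1.3.14 gives \hat{\sigma} = (10 − 0)/4 = 2.5, i.e. a variance of 6.25, before the new value is cropped to the gene's boundaries.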
Swap mutator The swap mutator changes the order of genes in a chromosome,
with the hope of bringing related genes closer together, thereby facilitating the
production of building blocks. This mutation operator can also be used for
combinatorial problems, where no duplicated genes within a chromosome are
allowed, e. g. for the TSP.

The second alterer type is the recombination — An enhanced genetic
algorithm (EGA) combines elements of existing solutions in order to create a new
solution, with some of the properties of each parent. Recombination creates a
new chromosome by combining parts of two (or more) parent chromosomes. This
combination of chromosomes can be made by selecting one or more crossover
points, splitting these chromosomes on the selected points, and merging those
portions of different chromosomes to form new ones.
void recombine(final ISeq<Phenotype<G, C>> pop) {
    // Select the Genotypes for crossover.
    final Random random = RandomRegistry.getRandom();
    final int i1 = random.nextInt(pop.length());
    final int i2 = random.nextInt(pop.length());
    final Phenotype<G, C> pt1 = pop.get(i1);
    final Phenotype<G, C> pt2 = pop.get(i2);
    final Genotype<G> gt1 = pt1.getGenotype();
    final Genotype<G> gt2 = pt2.getGenotype();

    // Choosing the Chromosome index for crossover.
    final int chIndex =
        random.nextInt(min(gt1.length(), gt2.length()));
    final MSeq<Chromosome<G>> c1 = gt1.toSeq().copy();
    final MSeq<Chromosome<G>> c2 = gt2.toSeq().copy();
    final MSeq<G> genes1 = c1.get(chIndex).toSeq().copy();
    final MSeq<G> genes2 = c2.get(chIndex).toSeq().copy();

    // Perform the crossover.
    crossover(genes1, genes2);
    c1.set(chIndex, c1.get(chIndex).newInstance(genes1.toISeq()));
    c2.set(chIndex, c2.get(chIndex).newInstance(genes2.toISeq()));

    // Creating two new Phenotypes and replace the old one.
    final MSeq<Phenotype<G, C>> result = pop.copy();
    result.set(i1, pt1.newInstance(gt1.newInstance(c1.toISeq())));
    result.set(i2, pt2.newInstance(gt1.newInstance(c2.toISeq())));
}

Listing 1.6: Chromosome selection for recombination
Listing 1.6 on the preceding page shows how two chromosomes are selected for
recombination. It is done this way for preserving the given constraints and to
avoid the creation of invalid individuals.

Because of the possible different Chromosome length and/or Chromosome
constraints within a Genotype, only Chromosomes with the same
Genotype position are recombined (see listing 1.6 on the previous page).
The recombination probability, P(r), determines the probability that a given
individual (genotype) of a population is selected for recombination. The (mean)
number of changed individuals depends on the concrete implementation and
can vary from P(r) · N_G to P(r) · N_G · O_R, where O_R is the order of the
recombination, which is the number of individuals involved in the combine
method.
Single-point crossover The single-point crossover changes two children chromosomes by taking two chromosomes and cutting them at some, randomly
chosen, site. If we create a child and its complement we preserve the total
number of genes in the population, preventing any genetic drift. Single-point
crossover is the classic form of crossover. However, it produces very slow mixing
compared with multi-point crossover or uniform crossover. For problems where
the site position has some intrinsic meaning to the problem, single-point crossover
can lead to smaller disruption than multiple-point or uniform crossover.
Figure 1.3.9 shows how the SinglePointCrossover class is performing the
crossover for different crossover points—in the given example for the chromosome
indexes 0, 1, 3, 6 and 7.
Multi-point crossover If the MultiPointCrossover class is created with
one crossover point, it behaves exactly like the single-point crossover. The
following picture shows how the multi-point crossover works with two crossover
points, defined at index 1 and 4.
In figure 1.3.11 you can see how the crossover works for an odd number of
crossover points.

Figure 1.3.9: Single-point crossover

Figure 1.3.10: 2-point crossover
Partially-matched crossover The partially-matched crossover guarantees
that all genes are found exactly once in each chromosome. No gene is duplicated
by this crossover strategy. The partially-matched crossover (PMX) can be applied
usefully in the TSP or other permutation problem encodings. Permutation
encoding is useful for all problems where the fitness only depends on the ordering
of the genes within the chromosome. This is the case in many combinatorial
optimization problems. Other crossover operators for combinatorial optimization
are:
• order crossover

• edge recombination crossover

• cycle crossover

• edge assembly crossover

The PMX is similar to the two-point crossover. A crossing region is chosen
by selecting two crossing points (see figure 1.3.12 a)).
After performing the crossover we normally get two invalid chromosomes
(figure 1.3.12 b)). Chromosome 1 contains the value 6 twice and misses the value
3. On the other side, chromosome 2 contains the value 3 twice and misses the
value 6. We can observe that this crossover is equivalent to the exchange of the
values 3→6, 4→5 and 5→4. To repair the two chromosomes, we have to apply
this exchange outside the crossing region (figure 1.3.12 b)). At the end, figure
1.3.12 c) shows the repaired chromosome.
Uniform crossover In uniform crossover, the genes at index i of two chromosomes are swapped with the swap-probability, p_S. Empirical studies show
that uniform crossover is a more exploitative approach than the traditional

Figure 1.3.11: 3-point crossover

Figure 1.3.12: Partially-matched crossover
exploitative approach that maintains longer schemata. This leads to a better
search of the design space while maintaining the exchange of good information.[8]

Figure 1.3.13: Uniform crossover
Figure 1.3.13 shows an example of a uniform crossover with four crossover
points. A gene is swapped, if a uniformly created random number, r ∈ [0, 1], is
smaller than the swap-probability, pS . The following code snippet shows how
these swap indexes are calculated, in a functional way.
final Random random = RandomRegistry.getRandom();
final int length = 8;
final double ps = 0.5;
final int[] indexes = IntStream.range(0, length)
    .filter(i -> random.nextDouble() < ps)
    .toArray();

Mean alterer The Mean alterer works on genes which implement the Mean
interface. All numeric genes implement this interface by calculating the arithmetic
mean of two genes.
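A minimal sketch of this Mean behavior, using two DoubleGenes with illustrative values:

final DoubleGene a = DoubleGene.of(0.2, 0.0, 1.0);
final DoubleGene b = DoubleGene.of(0.6, 0.0, 1.0);
// Allele (0.2 + 0.6)/2 = 0.4, with the same [0, 1) range.
final DoubleGene m = a.mean(b);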

Line crossover The line crossover13 takes two numeric chromosomes and
treats them as real number vectors. Each of these vectors can also be seen as a point
in ℝⁿ. If we draw a line through these two points (chromosomes), we have the
possible values of the new chromosomes, which all lie on this line.

Figure 1.3.14: Line crossover hypercube
Figure 1.3.14 shows how the two chromosomes form the two three-dimensional
vectors (black circles). The dashed line, connecting the two points, forms the
possible solutions created by the line crossover. An additional variable, p,
determines how far out along the line the created children will be. If p = 0 then
the children will be located along the line within the hypercube. If p > 0, the
children may be located on an arbitrary place on the line, even outside of the
hypercube. This is useful if you want to explore unknown regions, and you need
a way to generate chromosomes further out than the parents are.
The internal random parameters, which define the location of the new
crossover point, are generated once for the whole vector (chromosome). If
the LineCrossover generates numeric genes which lie outside the allowed minimum and maximum value, it simply uses the original gene and rejects the
generated, invalid one.
Intermediate crossover The intermediate crossover is quite similar to the line crossover. It differs in the way the internal random parameters are generated and in the handling of invalid (out of range) genes. The internal random parameters of the IntermediateCrossover class are generated for each gene of the chromosome, instead of once for all genes. If the newly generated gene is not within the allowed range, a new one is created. This is repeated until a valid gene is built.
The crossover parameter, p, has the same properties as for the line crossover. If the chosen value for p is greater than 0, it is likely that some genes must be created more than once, because they are not in the valid range. The probability for gene re-creation rises sharply with the value of p. Setting p to a value greater than one doesn't make sense in most cases. A value greater than 10 should be avoided.
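A hedged configuration sketch for both crossover variants, assuming constructors of the form (crossover probability, p) and illustrative values:

// Sketch: configuring the line and intermediate crossover. Both alterers
// are assumed to take a crossover probability and the parameter p
// described above; the values 0.3 and 0.1 are illustrative only, and
// 'fitness' is an assumed placeholder fitness function.
final Engine<DoubleGene, Double> engine = Engine
    .builder(fitness, DoubleChromosome.of(0.0, 1.0, 3))
    .alterers(
        new LineCrossover<>(0.3, 0.1),
        new IntermediateCrossover<>(0.3, 0.1))
    .build();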

1.3.3 Engine classes

The executing classes, which perform the actual evolution, are located in the
io.jenetics.engine package. The evolution stream (EvolutionStream) is the
base metaphor for performing a GA. On the EvolutionStream you can define the termination predicate and then collect the final EvolutionResult. This decouples the static data structure from the executing evolution part. The EvolutionStream is also very flexible when it comes to collecting the final result. The EvolutionResult class has several predefined collectors, but you are free to create your own, which can be seamlessly plugged into the existing stream.
1.3.3.1 Fitness function

The fitness Function is also an important part when modeling a genetic algorithm. It takes a Genotype as argument and returns, at least, a Comparable object as result—the fitness value. This allows the evolution Engine, respectively the selection operators, to select the offspring and survivor populations. Some selectors have stronger requirements on the fitness value than a Comparable, but these constraints are checked by the Java type system at compile time.

Since the fitness Function is shared by all Phenotypes, calls to the fitness Function have to be idempotent. A fitness Function is idempotent if, whenever it is applied twice to any Genotype, it returns the same fitness value as if it were applied once. In the simplest case, this is achieved by Functions which don't contain any global mutable state.
The following example shows the simplest possible fitness Function. This
Function simply returns the allele of a 1x1 float Genotype.
public class Main {
    static Double identity(final Genotype<DoubleGene> gt) {
        return gt.getGene().getAllele();
    }

    public static void main(final String[] args) {
        // Create fitness function from method reference.
        Function<Genotype<DoubleGene>, Double> ff1 =
            Main::identity;

        // Create fitness function from lambda expression.
        Function<Genotype<DoubleGene>, Double> ff2 = gt ->
            gt.getGene().getAllele();
    }
}

The first type parameter of the Function defines the kind of Genotype from
which the fitness value is calculated and the second type parameter determines
the return type, which must be, at least, a Comparable type.
1.3.3.2 Engine

The evolution Engine controls how the evolution steps are executed. Once the Engine is created, via a Builder class, it can't be changed. It doesn't contain any mutable global state and can therefore safely be used/called from different threads. This allows creating more than one EvolutionStream from the Engine and executing them in parallel.
public final class Engine<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
>
    implements Function<EvolutionStart<G, C>, EvolutionResult<G, C>>,
               EvolutionStreamable<G, C>
{
    // The evolution function, performs one evolution step.
    EvolutionResult<G, C> evolve(
        ISeq<Phenotype<G, C>> population,
        long generation
    );

    // Evolution stream for "normal" evolution execution.
    EvolutionStream<G, C> stream();

    // Evolution iterator for external evolution iteration.
    Iterator<EvolutionResult<G, C>> iterator();
}

Listing 1.7: Engine class
Listing 1.7 shows the main methods of the Engine class. It is used for performing the actual evolution of a given population. One evolution step is executed by calling the Engine.evolve method, which returns an EvolutionResult object. This object contains the evolved population plus additional information like the execution durations of the several evolution sub-steps and information about the killed individuals and those marked as invalid. With the stream method you create a new EvolutionStream, which is used for controlling the evolution process—see section 1.3.3.3 on page 25. Alternatively it is possible to iterate through the evolution process in an imperative way (for whatever reasons this should be necessary). Just create an Iterator of EvolutionResult objects by calling the iterator method.
As already shown in previous examples, the Engine can only be created via its Builder class. Only the fitness Function and the Chromosomes, which represent the problem encoding, must be specified for creating an Engine instance. For the rest of the parameters default values are specified. These are the Engine parameters which can be configured; a short configuration sketch follows the list:
alterers A list of Alterers which are applied to the offspring population, in
the defined order. The default value of this property is set to SinglePointCrossover<>(0.2) followed by Mutator<>(0.15).
clock The java.time.Clock used for calculating the execution durations. A
Clock with nanosecond precision (System.nanoTime()) is used as default.
executor With this property it is possible to change the java.util.concurrent.Executor engine used for evaluating the evolution steps. This property can be used to define an application-wide Executor or for controlling the number of execution threads. The default value is set to ForkJoinPool.commonPool().
evaluator This property allows you to replace the evaluation strategy of the Phenotype's fitness function. Normally, each fitness value is evaluated concurrently, but independently from each other. In some configurations it is necessary, for performance reasons, to evaluate the fitness values of a population at once. This is then performed by the Engine.Evaluator or Engine.GenotypeEvaluator interface.
fitnessFunction This property defines the fitness Function used by the evolution Engine. (See section 1.3.3.1 on page 22.)
genotypeFactory Defines the Genotype Factory used for creating new individuals. Since the Genotype is its own Factory, it is sufficient to create a
Genotype, which then serves as template.
genotypeValidator This property lets you override the default implementation of the Genotype.isValid method, which is useful if the Genotype validity does not only depend on the validity of the elements it consists of.
maximalPhenotypeAge Sets the maximal allowed age of an individual (Phenotype). This prevents super individuals from living forever. The default value is set to 70.
offspringFraction Through this property it is possible to define the fraction of offspring (and survivors) for evaluating the next generation. The fraction value must be within the interval [0, 1]. The default value is set to 0.6. Additionally to this property, it is also possible to set the survivorsFraction, survivorsSize or offspringSize. All these additional properties effectively set the offspringFraction.
offspringSelector This property defines the Selector used for selecting the offspring population. The default value is set to TournamentSelector<>(3).
optimize With this property it is possible to define whether the fitness Function should be maximized or minimized. By default, the fitness Function is maximized.
phenotypeValidator This property lets you override the default implementation of the Phenotype.isValid method, which is useful if the Phenotype validity does not only depend on the validity of the elements it consists of.
populationSize Defines the number of individuals of a population. The evolution Engine keeps the number of individuals constant. That means the population of the EvolutionResult always contains the number of entries defined by this property. The default value is set to 50.
selector This method allows setting the offspringSelector and survivorsSelector in one step with the same selector.
survivorsSelector This property defines the Selector used for selecting the survivors population. The default value is set to TournamentSelector<>(3).
individualCreationRetries The evolution Engine tries to create only valid individuals. If a newly created Genotype is not valid, the Engine creates another one, until the created Genotype is valid. This parameter sets the maximal number of retries before the Engine gives up and accepts invalid individuals. The default value is set to 10.
mapper This property lets you define a mapper, which transforms the final EvolutionResult object after every generation. One usage of the mapper is to remove duplicate individuals from the population. The EvolutionResult.toUniquePopulation() method provides such a de-duplication mapper.
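The following sketch shows how several of the listed parameters could be combined on the Builder. The fitness function ff and the genotype factory gtf are assumed placeholders, and the parameter values are illustrative, not recommended defaults.

// Sketch: configuring an Engine with several of the parameters listed
// above. 'ff' and 'gtf' are assumed placeholders for a fitness function
// and a genotype factory; the chosen values are illustrative only.
final Engine<DoubleGene, Double> engine = Engine
    .builder(ff, gtf)
    .populationSize(200)
    .offspringFraction(0.7)
    .maximalPhenotypeAge(30)
    .survivorsSelector(new TournamentSelector<>(4))
    .offspringSelector(new RouletteWheelSelector<>())
    .alterers(
        new SinglePointCrossover<>(0.2),
        new Mutator<>(0.15))
    .optimize(Optimize.MINIMUM)
    .individualCreationRetries(20)
    .build();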
The EvolutionStreams created by the Engine class are unlimited. Such streams must be limited by calling the available EvolutionStream.limit methods. Alternatively, the Engine instance itself can be limited with the Engine.limit methods. Such limited Engines no longer create infinite EvolutionStreams; they are truncated by the limit predicate defined by the Engine. This feature is needed when you are concatenating evolution Engines (see section 4.1.5.1 on page 85).
final EvolutionStreamable<DoubleGene, Double> engine =
    Engine.builder(problem)
        .minimizing()
        .build()
        .limit(() -> Limits.bySteadyFitness(10));

As shown in the example code above, one important difference between the Engine.limit and the EvolutionStream.limit method is that the limit method of the Engine takes a limiting Predicate Supplier instead of the Predicate itself. The reason for this is that some Predicates have to maintain internal state to work properly. This means that, every time the Engine creates a new stream, it must also create a new limiting Predicate.
1.3.3.3 EvolutionStream

The EvolutionStream controls the execution of the evolution process and can
be seen as a kind of execution handle. This handle can be used to define
the termination criteria and to collect the final evolution result. Since the
EvolutionStream extends the Java Stream interface, it integrates smoothly
with the rest of the Java Stream API.14
public interface EvolutionStream<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
>
    extends Stream<EvolutionResult<G, C>>
{
    public EvolutionStream<G, C>
    limit(Predicate<? super EvolutionResult<G, C>> proceed);
}

Listing 1.8: EvolutionStream class
Listing 1.8 shows the whole EvolutionStream interface. As can be seen, it only adds one additional method. This additional limit method allows truncating the EvolutionStream based on a Predicate which takes an EvolutionResult. Once the Predicate returns false, the evolution process is stopped. Since the limit method returns an EvolutionStream, it is possible to define more than one Predicate, which all must be fulfilled to continue the evolution process.
14 It is recommended to make yourself familiar with the Java Stream API. A good introduction can be found here: http://winterbe.com/posts/2014/07/31/java8-stream-tutorial-examples/

Engine<DoubleGene, Double> engine = ...
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(predicate1)
    .limit(predicate2)
    .limit(100);

The EvolutionStream created in the example above will be truncated if one of the two predicates is false or if the maximal allowed number of generations, 100, is reached. An EvolutionStream is usually created via the Engine.stream() method. The immutable and stateless nature of the evolution Engine allows creating more than one EvolutionStream with the same Engine instance.

The generations of the EvolutionStream are evolved serially. Calls of the
EvolutionStream methods (e. g. limit, peek, ...) are executed in the
thread context of the created Stream. In a typical setup, no additional
synchronization and/or locking is needed.
In cases where you appreciate the usage of the EvolutionStream but need a different Engine implementation, you can use the EvolutionStream.of factory method for creating a new EvolutionStream.
static <G extends Gene<?, G>, C extends Comparable<? super C>>
EvolutionStream<G, C> of(
    Supplier<EvolutionStart<G, C>> start,
    Function<? super EvolutionStart<G, C>, EvolutionResult<G, C>> f
);

This factory method takes a start value, of type EvolutionStart, and an
evolution Function. The evolution Function takes the start value and returns
an EvolutionResult object. To make the runtime behavior more predictable,
the start value is fetched/created lazily at the evolution start time.
final Supplier<EvolutionStart<DoubleGene, Double>> start = ...
final EvolutionStream<DoubleGene, Double> stream =
    EvolutionStream.of(start, new MySpecialEngine());

1.3.3.4 EvolutionResult

The EvolutionResult collects the result data of an evolution step into an
immutable value class. This class is the type of the stream elements of the
EvolutionStream, as described in section 1.3.3.3 on the previous page.
public final class EvolutionResult<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
>
    implements Comparable<EvolutionResult<G, C>>
{
    ISeq<Phenotype<G, C>> getPopulation();
    long getGeneration();
}

Listing 1.9: EvolutionResult class
Listing 1.9 shows the two most important properties, the population and the generation the result belongs to. These are also the two properties needed for the next evolution step. The generation is, of course, incremented by one. To make collecting the EvolutionResult objects easier, it also implements the Comparable interface. Two EvolutionResults are compared by their best Phenotype.
The EvolutionResult class has three predefined factory methods, which will return Collectors usable for the EvolutionStream:
toBestEvolutionResult() Collects the best EvolutionResult of an EvolutionStream according to the defined optimization strategy.
toBestPhenotype() This collector can be used if you are only interested in the
best Phenotype.
toBestGenotype() Use this collector if you only need the best Genotype of the
EvolutionStream.
The following code snippet shows how to use the different EvolutionStream collectors.
// Collecting the best EvolutionResult of the EvolutionStream.
EvolutionResult<DoubleGene, Double> result = stream
    .collect(EvolutionResult.toBestEvolutionResult());

// Collecting the best Phenotype of the EvolutionStream.
Phenotype<DoubleGene, Double> result = stream
    .collect(EvolutionResult.toBestPhenotype());

// Collecting the best Genotype of the EvolutionStream.
Genotype<DoubleGene> result = stream
    .collect(EvolutionResult.toBestGenotype());

1.3.3.5 EvolutionStatistics

The EvolutionStatistics class allows you to gather additional statistical information from the EvolutionStream. This is especially useful during the development phase of the application, when you have to find the right parametrization of the evolution Engine. Besides other information, the EvolutionStatistics contains (statistical) information about the fitness, invalid and killed Phenotypes and runtime information of the different evolution steps. Since the EvolutionStatistics class implements the Consumer<EvolutionResult<G, C>> interface, it can be easily plugged into the EvolutionStream, adding it with the peek method of the stream.
Engine<DoubleGene, Double> engine = ...
EvolutionStatistics<Double, ?> statistics =
    EvolutionStatistics.ofNumber();
engine.stream()
    .limit(100)
    .peek(statistics)
    .collect(toBestGenotype());

Listing 1.10: EvolutionStatistics usage
Listing 1.10 shows how to add the EvolutionStatistics to the EvolutionStream. Once the algorithm tuning is finished, it can be removed in the production environment.

There are two different specializations of the EvolutionStatistics object available. The first is the general one, which works for every kind of Gene and fitness type. It can be created via the EvolutionStatistics.ofComparable() method. The second one collects additional statistical data for numeric fitness values. This can be created with the EvolutionStatistics.ofNumber() method.
+---------------------------------------------------------------------------+
|  Time statistics                                                           |
+---------------------------------------------------------------------------+
|              Selection: sum=0.046538278000 s; mean=0.003878189833 s        |
|               Altering: sum=0.086155457000 s; mean=0.007179621417 s        |
|    Fitness calculation: sum=0.022901606000 s; mean=0.001908467167 s        |
|      Overall execution: sum=0.147298067000 s; mean=0.012274838917 s        |
+---------------------------------------------------------------------------+
|  Evolution statistics                                                       |
+---------------------------------------------------------------------------+
|            Generations: 12                                                 |
|                Altered: sum=7,331; mean=610.916666667                      |
|                 Killed: sum=0; mean=0.000000000                            |
|               Invalids: sum=0; mean=0.000000000                            |
+---------------------------------------------------------------------------+
|  Population statistics                                                     |
+---------------------------------------------------------------------------+
|                    Age: max=11; mean=1.951000; var=5.545190                |
|                Fitness:                                                    |
|                    min = 0.000000000000                                    |
|                    max = 481.748227114537                                  |
|                   mean = 384.430345078660                                  |
|                    var = 13006.132537301528                                |
+---------------------------------------------------------------------------+

A typical output of a number EvolutionStatistics object will look like the example above.
The EvolutionStatistics object is a simple way of inspecting the EvolutionStream after it is finished. It doesn't give you a live view of the current evolution process, which can be necessary for long running streams. In such cases you have to maintain/update the statistics yourself.
public class TSM {
    // The locations to visit.
    static final ISeq<Point> POINTS = ISeq.of(...);

    // The permutation codec.
    static final Codec<ISeq<Point>, EnumGene<Point>>
        CODEC = Codecs.ofPermutation(POINTS);

    // The fitness function (in the problem domain).
    static double dist(final ISeq<Point> p) { ... }

    // The evolution engine.
    static final Engine<EnumGene<Point>, Double> ENGINE = Engine
        .builder(TSM::dist, CODEC)
        .optimize(Optimize.MINIMUM)
        .build();

    // Best phenotype found so far.
    static Phenotype<EnumGene<Point>, Double> best = null;

    // You will be informed on new results. This allows to
    // react on new best phenotypes, e.g. log it.
    private static void update(
        final EvolutionResult<EnumGene<Point>, Double> result
    ) {
        if (best == null ||
            best.compareTo(result.getBestPhenotype()) < 0)
        {
            best = result.getBestPhenotype();
            System.out.print(result.getGeneration() + ": ");
            System.out.println("Found best phenotype: " + best);
        }
    }

    // Find the solution.
    public static void main(final String[] args) {
        final ISeq<Point> result = CODEC.decode(
            ENGINE.stream()
                .peek(TSM::update)
                .limit(10)
                .collect(EvolutionResult.toBestGenotype())
        );
        System.out.println(result);
    }
}

Listing 1.11: Live evolution statistics
Listing 1.11 shows how to implement a manual statistics gathering. The update method is called whenever a new EvolutionResult has been calculated. If a new best Phenotype is available, it is stored and logged. With the TSM::update method, which is called on every finished generation, you have a live view on the evolution progress.
1.3.3.6 Engine.Evaluator

This interface allows you to define different strategies for evaluating the fitness
functions of a given population. Usually, it is not necessary to override the
default evaluation strategy. It is helpful if you have performance problems when
the fitness function is evaluated serially—or in small concurrent batches, as it
is implemented by the default strategy. In this case, the Engine.Evaluator
interface can be used to calculate the fitness function for a population in one
batch.
public interface Evaluator<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
> {
    ISeq<Phenotype<G, C>> evaluate(Seq<Phenotype<G, C>> pop);
}

Listing 1.12: Engine.Evaluator interface
The implementer is free to do the evaluation in place, or to create new Phenotype instances and return the newly created ones.
A second use case for the Evaluator interface is when the fitness value also depends on the current composition of the population. E. g. it is possible to normalize the population's fitness values.
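A minimal sketch of such an evaluator is shown below. It assumes that Phenotype.evaluate() forces the lazy fitness evaluation and that ISeq.of(Iterable) copies the population; both are assumptions, and a real implementation could, for example, normalize the fitness values of the whole population here.

// Sketch: a batch evaluator following the shape of listing 1.12. It
// forces the fitness evaluation of all phenotypes in one place and
// returns the population unchanged. Phenotype.evaluate() and
// ISeq.of(Iterable) are assumed APIs in this sketch.
final Engine.Evaluator<DoubleGene, Double> evaluator = population -> {
    population.forEach(Phenotype::evaluate);
    return ISeq.of(population);
};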

1.4 Nuts and bolts

1.4.1 Concurrency

The Jenetics library parallelizes independent tasks whenever possible. Especially the evaluation of the fitness function is done concurrently. That means that the fitness function must be thread safe, because it is shared by all phenotypes of a population. The easiest way for achieving thread safety is to make the fitness function immutable and re-entrant.
Since the number of individuals of one population, which determines the number of fitness functions to be evaluated, is usually much higher than the number of available CPU cores, the fitness evaluation is done in batches. This reduces the evaluation overhead for lightweight fitness functions.

Figure 1.4.1: Evaluation batch
Figure 1.4.1 shows an example population with 12 individuals. The phenotypes' fitness functions are evaluated in batches with three elements. For this purpose, the individuals of one batch are wrapped into a Runnable object. The batch size is automatically adapted to the available number of CPU cores. It is assumed that the evaluation cost of one fitness function is quite small. If this assumption doesn't hold, you can configure the maximal number of batch elements with the io.jenetics.concurrency.maxBatchSize system property15.

15 See section 1.4.1.2 on the following page.
1.4.1.1 Basic configuration

The used Executor can be defined when building the evolution Engine object.
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    private static Double eval(final Genotype<DoubleGene> gt) {
        // calculate and return fitness
    }

    public static void main(final String[] args) {
        // Creating a fixed size ExecutorService
        final ExecutorService executor = Executors
            .newFixedThreadPool(10);
        final Factory<Genotype<DoubleGene>> gtf = ...
        final Engine<DoubleGene, Double> engine = Engine
            .builder(Main::eval, gtf)
            // Using 10 threads for evolving.
            .executor(executor)
            .build()
        ...
    }
}


If no Executor is given, Jenetics uses a common ForkJoinPool16 for concurrency.
Sometimes it might be useful to run the evaluation Engine single-threaded,
or even execute all operations in the main thread. This can be easily achieved
by setting the appropriate Executor.
final Engine<DoubleGene, Double> engine = Engine.builder(...)
    // Doing the Engine operations in the main thread
    .executor((Executor)Runnable::run)
    .build()

The code snippet above shows how to do the Engine operations in the main thread, whereas the snippet below executes the Engine operations in a single thread other than the main thread.
final Engine<DoubleGene, Double> engine = Engine.builder(...)
    // Doing the Engine operations in a single thread
    .executor(Executors.newSingleThreadExecutor())
    .build()

Such a configuration can be useful for performing reproducible (performance)
tests, without the uncertainty of a concurrent execution environment.
1.4.1.2 Concurrency tweaks

Jenetics uses different strategies for minimizing the concurrency overhead, depending on the configured Executor. For the ForkJoinPool, the fitness evaluation of the population is done by recursively dividing it into sub-populations using
the abstract RecursiveAction class. If a minimal sub-population size is reached,
the fitness values for this sub-population are directly evaluated. The default value
of this threshold is five and can be controlled via the io.jenetics.concurrency.splitThreshold system property. Besides the splitThreshold, the size of
the evaluated sub-population is dynamically determined by the ForkJoinTask.getSurplusQueuedTaskCount() method.17 If this value is greater than three,
the fitness values of the current sub-population are also evaluated immediately.
The default value can be overridden by the io.jenetics.concurrency.maxSurplusQueuedTaskCount system property.
$ java -Dio.jenetics.concurrency.splitThreshold=1 \
-Dio.jenetics.concurrency.maxSurplusQueuedTaskCount=2 \
-cp jenetics-4.3.0.jar:app.jar \
com.foo.bar.MyJeneticsApp
You may want to tweak these parameters if you observe a low CPU utilization during the fitness value evaluation. Long running fitness functions could lead to CPU under-utilization while evaluating the last sub-population. In this case, only one core is busy, while the other cores are idle, because they already finished the fitness evaluation. Since the workload has already been distributed, no work-stealing is possible. Reducing the splitThreshold can help to have a more equal workload distribution between the available CPUs. Reducing the maxSurplusQueuedTaskCount property will create a more uniform workload for fitness functions with heavily varying computation cost for different genotype values.

16 https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html
17 Excerpt from the Javadoc: Returns an estimate of how many more locally queued tasks are held by the current worker thread than there are other worker threads that might steal them. This value may be useful for heuristic decisions about whether to fork other tasks. In many usages of ForkJoinTasks, at steady state, each worker should aim to maintain a small constant surplus (for example, 3) of tasks, and to process computations locally if this threshold is exceeded.

The fitness function shouldn't acquire locks for achieving thread safety. It is also recommended to avoid calls to blocking methods. If such calls are unavoidable, consider using the ForkJoinPool.managedBlock method, especially if you are using a ForkJoinPool executor, which is the default.
If the Engine is using an ExecutorService, a different optimization strategy is used for reducing the concurrency overhead. The original population is divided into a fixed number18 of sub-populations, and the fitness values of each sub-population are evaluated by one thread. For long running fitness functions, it is better to have smaller sub-populations for a better CPU utilization. With the io.jenetics.concurrency.maxBatchSize system property, it is possible to reduce the sub-population size. The default value is set to Integer.MAX_VALUE. This means that only the number of CPU cores influences the batch size.
$ java -Dio.jenetics.concurrency.maxBatchSize=3 \
-cp jenetics-4.3.0.jar:app.jar \
com.foo.bar.MyJeneticsApp
Another source of under-utilized CPUs is lock contention. It is therefore strongly recommended to avoid locking and blocking calls in your fitness function altogether. If blocking calls are unavoidable, consider using the managed block functionality of the ForkJoinPool.19

18 The number of sub-populations actually depends on the number of available CPU cores, which are determined with Runtime.availableProcessors().
19 A good introduction on how to use managed blocks, and the motivation behind it, is given in this talk: https://www.youtube.com/watch?v=rUDGQQ83ZtI

1.4.2 Randomness

In general, GAs heavily depend on pseudo random number generators (PRNG) for creating new individuals and for the selection- and mutation-algorithms. Jenetics uses the Java Random object, respectively sub-types of it, for generating random numbers. To make the random engine pluggable, the Random object is always fetched from the RandomRegistry. This makes it possible to change the implementation of the random engine without changing the client code. The central RandomRegistry also allows you to easily change the Random engine, even for specific parts of the code.
The following example shows how to change and restore the Random object. When opening the with scope, changes to the RandomRegistry are only visible within this scope. Once the with scope is left, the original Random object is restored.
List<Genotype<DoubleGene>> genotypes =
    RandomRegistry.with(new Random(123), r ->
        Genotype.of(DoubleChromosome.of(0.0, 100.0, 10))
            .instances()
            .limit(100)
            .collect(Collectors.toList())
    );

With the previous listing, a random, but reproducible, list of genotypes is created.
This might be useful while testing your application or when you want to evaluate
the EvolutionStream several times with the same initial population.
Engine<DoubleGene, Double> engine = ...;
// Create a new evolution stream with the given
// initial genotypes.
Phenotype<DoubleGene, Double> best = engine.stream(genotypes)
    .limit(10)
    .collect(EvolutionResult.toBestPhenotype());

The example above uses the generated genotypes for creating the EvolutionStream. Each created stream uses the same starting population, but will, most
likely, create a different result. This is because the stream evaluation is still
non-deterministic.

Setting the PRNG to a Random object with a defined seed has the effect that every evolution stream will produce the same result—in a single-threaded environment.
The parallel nature of the GA implementation requires the creation of streams ti,j of random numbers which are statistically independent, where the streams are numbered with j = 1, 2, 3, ..., p and p denotes the number of processes. We expect statistical independence between the streams as well. The used PRNG should enable the GA to play fair, which means that the outcome of the GA is strictly independent of the underlying hardware and the number of parallel processes or threads. This is essential for reproducing results in parallel environments where the number of parallel tasks may vary from run to run.

The Fair Play property of a PRNG guarantees that the quality of the
genetic algorithm (evolution stream) does not depend on the degree of
parallelization.
When the Random engine is used in a multi-threaded environment, there must be a way to parallelize the sequential PRNG. Usually this is done by taking the elements of the sequence of pseudo-random numbers and distributing them among the threads. There are essentially four different parallelization techniques used in practice: Random seeding, Parameterization, Block splitting and Leapfrogging.
Random seeding Every thread uses the same kind of PRNG but with a different seed. This is the default strategy used by the Jenetics library. The RandomRegistry is initialized with the ThreadLocalRandom class from the java.util.concurrent package. Random seeding works well for most problems, but without theoretical foundation.20 If you assume that this strategy is responsible for some non-reproducible results, consider using the LCG64ShiftRandom PRNG instead, which uses block splitting as parallelization strategy.
Parameterization All threads use the same kind of PRNG but with different parameters. This requires the PRNG to be parameterizable, which is not the case for the Random object of the JDK. You can use the LCG64ShiftRandom class if you want to use this strategy. The theoretical foundation for this method is weak. In a massively parallel environment you will need a reliable set of parameters for every random stream, which are not trivial to find.
Block splitting With this method each thread will be assigned a non-overlapping, contiguous block of random numbers, which should be enough for the whole runtime of the process. If the number of threads is not known in advance, the length of each block should be chosen much larger than the maximal expected number of threads. This strategy is used by the LCG64ShiftRandom.ThreadLocal class. This class assigns every thread a block of 2^56 ≈ 7.2 · 10^16 random numbers. After 128 threads, the blocks are recycled, but with changed seed.

Figure 1.4.2: Block splitting

Leapfrog With the leapfrog method each thread t ∈ [0, P) only consumes every P-th random number and jumps ahead in the random sequence by the number of threads, P. This method requires the ability to jump very quickly ahead in the sequence of random numbers by a given amount. Figure 1.4.3 on the next page graphically shows the concept of the leapfrog method.
LCG64ShiftRandom21 The LCG64ShiftRandom class is a port of the trng::lcg64_shift PRNG class of the TRNG22 library, implemented in C++.[4] It implements additional methods, which allow implementing the block splitting—and also the leapfrog—method.
20 This is also expressed by Donald Knuth’s advice: »Random number generators should not
be chosen at random.«
21 The LCG64ShiftRandom PRNG is part of the io.jenetics.prngine module (see section 4.4
on page 99).
22 http://numbercrunch.de/trng/


Figure 1.4.3: Leapfrogging
public class LCG64ShiftRandom extends Random {
    public void split(final int p, final int s);
    public void jump(final long step);
    public void jump2(final int s);
    ...
}

Listing 1.13: LCG64ShiftRandom class
Listing 1.13 shows the interface used for implementing the block splitting and leapfrog parallelization techniques. These methods have the following meaning:
split Changes the internal state of the PRNG in a way that future calls to nextLong() will generate the s-th sub-stream of p sub-streams. s must be within the range of [0, p − 1). This method is used for parallelization via leapfrogging.
jump Changes the internal state of the PRNG in such a way that the engine jumps s steps ahead. This method is used for parallelization via block splitting.
jump2 Changes the internal state of the PRNG in such a way that the engine jumps 2^s steps ahead. This method is used for parallelization via block splitting.
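A small usage sketch of these methods; the seed, the number of sub-streams and the jump distance are illustrative values only:

// Sketch: selecting a sub-stream (leapfrogging) and jumping ahead
// (block splitting) with the methods of listing 1.13. The values are
// illustrative; each thread would use its own, exclusively owned instance.
final LCG64ShiftRandom random = new LCG64ShiftRandom(123);
random.split(4, 1);        // this instance generates sub-stream 1 of 4
random.jump(1_000_000);    // skip ahead one million steps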

1.4.3 Serialization

Jenetics supports serialization for a number of classes, most of them located in the io.jenetics package. Only the concrete implementations of the Gene and the Chromosome interfaces implement the Serializable interface. This gives greater flexibility when implementing your own Genes and Chromosomes.
• BitGene

• BitChromosome

• CharacterGene

• CharacterChromosome

• IntegerGene

• IntegerChromosome

• LongGene

• LongChromosome

• DoubleGene

• DoubleChromosome

• EnumGene

• PermutationChromosome

• Genotype

• Phenotype

With the serialization mechanism you can write a population to disk and load it into a new EvolutionStream at a later time. It can also be used to transfer populations to evolution engines, running on different hosts, over a network link. The IO class, located in the io.jenetics.util package, supports native Java serialization.
// Creating result population.
EvolutionResult<DoubleGene, Double> result = stream
    .limit(100)
    .collect(toBestEvolutionResult());

// Writing the population to disk.
final File file = new File("population.obj");
IO.object.write(result.getPopulation(), file);

// Reading the population from disk.
ISeq<Phenotype<DoubleGene, Double>> population =
    (ISeq<Phenotype<DoubleGene, Double>>)IO.object.read(file);
EvolutionStream<DoubleGene, Double> stream = Engine
    .builder(ff, gtf).build()
    .stream(population, 1);

1.4.4 Utility classes

The io.jenetics.util and the io.jenetics.stat packages of the library contain utility and helper classes which are essential for the implementation of the GA.
io.jenetics.util.Seq Most notable are the Seq interfaces and their implementations. They are used, among others, in the Chromosome and Genotype classes and hold the Genes and Chromosomes, respectively. The Seq interface itself represents a fixed-sized, ordered sequence of elements. It is an abstraction over the Java built-in array type, but much safer to use for generic elements, because there are no casts needed when using nested generic types.
Figure 1.4.4 on the next page shows the Seq class diagram with its most important methods. The interfaces MSeq and ISeq are the mutable and immutable specializations of the base interface, respectively. Creating instances of the Seq interfaces is possible via the static factory methods of the interfaces.
// Create "different" sequences.
final Seq<Integer> a1 = Seq.of(1, 2, 3);
final MSeq<Integer> a2 = MSeq.of(1, 2, 3);
final ISeq<Integer> a3 = MSeq.of(1, 2, 3).toISeq();
final MSeq<Integer> a4 = a3.copy();

// The 'equals' method performs element-wise comparison.
assert(a1.equals(a2) && a1 != a2);
assert(a2.equals(a3) && a2 != a3);
assert(a3.equals(a4) && a3 != a4);

How to create instances of the three Seq types is shown in the listing above. The Seq classes also allow a more functional programming style. For a full method description refer to the Javadoc.

Figure 1.4.4: Seq class diagram
io.jenetics.stat This package contains classes for calculating statistical moments. They are designed to work smoothly with the Java Stream API and are divided into mutable (number) consumers and immutable value classes, which hold the statistical moments. The additional classes calculate the

• minimum,

• maximum,

• sum,

• mean,

• variance,

• skewness and

• kurtosis value.

Numeric type    Consumer class              Value class
int             IntMomentStatistics         IntMoments
long            LongMomentStatistics        LongMoments
double          DoubleMomentStatistics      DoubleMoments

Table 1.4.1: Statistics classes
Table 1.4.1 contains the available statistical moments for the different numeric types. The following code snippet shows an example of how to collect double statistics from a given DoubleGene stream.
// Collecting into a statistics object.
DoubleChromosome chromosome = ...
DoubleMomentStatistics statistics = chromosome.stream()
    .collect(DoubleMomentStatistics
        .toDoubleMomentStatistics(v -> v.doubleValue()));

// Collecting into a moments object.
DoubleMoments moments = chromosome.stream()
    .collect(DoubleMoments.toDoubleMoments(v -> v.doubleValue()));

The stat package also contains a class for calculating the quantile23 of a stream of double values. Its implementing algorithm, which is described in [15], calculates—or estimates—the quantile value on the fly, without storing the consumed double values. This allows using the Quantile class even for a very large number of double values. How to calculate the first quartile of a given, random DoubleStream is shown in the code snippet below.

23 https://en.wikipedia.org/wiki/Quantile
final Quantile quartile = new Quantile(0.25);
new Random().doubles(10_000).forEach(quartile);
final double value = quartile.getValue();

Be aware that the calculated quartile is just an estimation. For sufficient accuracy, the stream size should be sufficiently large (size ≫ 100).


Chapter 2

Advanced topics
This section describes some advanced topics for setting up an evolution Engine
or EvolutionStream. It contains some problem encoding examples and how to
override the default validation strategy of the given Genotypes. The last section
contains a detailed description of the implemented termination strategies.

2.1 Extending Jenetics

The Jenetics library was designed to give you great flexibility in transforming your problem into a structure that can be solved by a GA. It also comes with different implementations for the base data-types (genes and chromosomes) and operators (alterers and selectors). If some functionality is still missing, this section describes how you can extend the existing classes. Most of the extensible classes are defined by an interface and have an abstract implementation which makes it easier to extend them.

2.1.1 Genes

Genes are the starting point in the class hierarchy. They hold the actual information, the alleles, of the problem domain. Besides the classical bit-gene, Jenetics comes with gene implementations for numbers (double-, int- and long values), characters and enumeration types.
For implementing your own gene type you have to implement the Gene interface with three methods: (1) the getAllele() method which will return the wrapped data, (2) the newInstance method for creating new, random instances of the gene—these must be of the same type and have the same constraints—and (3) the isValid() method which checks if the gene fulfills the expected constraints. The gene constraints might be violated after mutation and/or recombination. If you want to implement a new number-gene, e. g. a gene which holds complex values, you may want to extend the abstract NumericGene class. Every Gene extends the Serializable interface. For normal genes there is no more work to do for using the Java serialization mechanism.
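A minimal sketch of such a gene implementation is shown below. The class name, the allele range and the method bodies are made up for illustration; note that the Gene interface additionally requires a newInstance(allele) factory method.

// Sketch: a custom gene holding an integer weight in the (illustrative)
// range [0, 100). The three methods named above (getAllele, newInstance,
// isValid) are the essential parts.
public final class WeightGene implements Gene<Integer, WeightGene> {
    private final Integer _allele;

    private WeightGene(final Integer allele) {
        _allele = allele;
    }

    @Override
    public Integer getAllele() {
        return _allele;
    }

    @Override
    public WeightGene newInstance() {
        // New random instance with the same constraints.
        final Random random = RandomRegistry.getRandom();
        return new WeightGene(random.nextInt(100));
    }

    @Override
    public WeightGene newInstance(final Integer value) {
        return new WeightGene(value);
    }

    @Override
    public boolean isValid() {
        return _allele >= 0 && _allele < 100;
    }
}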


The custom Genes and Chromosomes implementations must use the Random engine available via the RandomRegistry.getRandom method when implementing their factory methods. Otherwise it is not possible to seamlessly change the Random engine by using the RandomRegistry.setRandom method.

If you want to support your own allele type, but want to avoid the effort of
implementing the Gene interface, you can alternatively use the AnyGene class. It
can be created with AnyGene.of(Supplier, Predicate). The given Supplier
is responsible for creating new random alleles, similar to the newInstance
method in the Gene interface. Additional validity checks are performed by the
given Predicate.
class LastMonday {
    // Creates new random 'LocalDate' objects.
    private static LocalDate nextMonday() {
        final Random random = RandomRegistry.getRandom();
        return LocalDate
            .of(2015, 1, 5)
            .plusWeeks(random.nextInt(1000));
    }

    // Do some additional validity check.
    private static boolean isValid(final LocalDate date) { ... }

    // Create a new gene from the random 'Supplier' and
    // validation 'Predicate'.
    private final AnyGene<LocalDate> gene = AnyGene
        .of(LastMonday::nextMonday, LastMonday::isValid);
}

Listing 2.1: AnyGene example
Example listing 2.1 shows the (almost) minimal setup for creating user defined Gene allele types. By convention, the Random engine, used for creating the new LocalDate objects, must be requested from the RandomRegistry. With the optional validation function, isValid, it is possible to reject Genes whose alleles don't conform to some criteria.
The simple usage of the AnyGene also has its downsides. Since the AnyGene instances are created from function objects, serialization is not supported by the AnyGene class. It is also not possible to use some Alterer implementations with the AnyGene, like:
• GaussianMutator,
• MeanAlterer and
• PartiallyMatchedCrossover

2.1.2 Chromosomes

A new gene type normally needs a corresponding chromosome implementation. The most important part of a chromosome is the factory method newInstance, which lets the evolution Engine create a new Chromosome instance from a sequence of Genes. This method is used by the Alterers when creating new, combined Chromosomes. It is allowed that the newly created chromosome has a different length than the original one. The other methods should be self-explanatory. The chromosome has the same serialization mechanism as the gene. For the minimal case it is enough to extend the Serializable interface.1

Just implementing the Serializable interface is sometimes not enough. You
might also need to implement the readObject and writeObject methods
for a more concise serialization result. Consider using the serialization proxy
pattern, item 90, described in Effective Java [6].
Corresponding to the AnyGene, it is possible to create chromosomes with
arbitrary allele types with the AnyChromosome.
public class LastMonday {
    // The used problem Codec.
    private static final Codec<LocalDate, AnyGene<LocalDate>>
        CODEC = Codec.of(
            Genotype.of(AnyChromosome.of(LastMonday::nextMonday)),
            gt -> gt.getGene().getAllele()
        );

    // Creates new random 'LocalDate' objects.
    private static LocalDate nextMonday() {
        final Random random = RandomRegistry.getRandom();
        return LocalDate
            .of(2015, 1, 5)
            .plusWeeks(random.nextInt(1000));
    }

    // The fitness function: find a monday at the end of the month.
    private static int fitness(final LocalDate date) {
        return date.getDayOfMonth();
    }

    public static void main(final String[] args) {
        final Engine<AnyGene<LocalDate>, Integer> engine = Engine
            .builder(LastMonday::fitness, CODEC)
            .offspringSelector(new RouletteWheelSelector<>())
            .build();

        final Phenotype<AnyGene<LocalDate>, Integer> best =
            engine.stream()
                .limit(50)
                .collect(EvolutionResult.toBestPhenotype());

        System.out.println(best);
    }
}

Listing 2.2: AnyChromosome example
Listing 2.2 shows a full usage example of the AnyGene and AnyChromosome class. The example tries to find a Monday with a maximal day of month. An interesting detail is that a Codec2 definition is used for creating new Genotypes and for converting them back to LocalDate alleles. The convenient usage of the AnyChromosome has to be paid for with the same restrictions as for the AnyGene: no serialization support for the chromosome and not usable for all Alterer implementations.

1 http://www.oracle.com/technetwork/articles/java/javaserial-1536170.html
2 See section 2.3 on page 51 for a more detailed Codec description.

2.1.3 Selectors

If you want to implement your own selection strategy you only have to implement
the Selector interface with the select method.
@FunctionalInterface
public interface Selector<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
> {
    public ISeq<Phenotype<G, C>> select(
        Seq<Phenotype<G, C>> population,
        int count,
        Optimize opt
    );
}

Listing 2.3: Selector interface
The first parameter is the original population from which the sub-population is selected. The second parameter, count, is the number of individuals of the returned sub-population. Depending on the selection algorithm, it is possible that the sub-population contains more elements than the original one. The last parameter, opt, determines the optimization strategy which must be used by the selector. This is exactly the point where it is decided whether the GA minimizes or maximizes the fitness function.
Before implementing a selector from scratch, consider extending your selector from the ProbabilitySelector (or any other available Selector implementation). It is worth the effort to try to express your selection strategy in terms of the selection probability P(i). Another way of re-using existing Selector implementations is by composition.
public class EliteSelector<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
>
    implements Selector<G, C>
{
    private final TruncationSelector<G, C>
        _elite = new TruncationSelector<>();

    private final TournamentSelector<G, C>
        _rest = new TournamentSelector<>(3);

    public EliteSelector() {
    }

    @Override
    public ISeq<Phenotype<G, C>> select(
        final Seq<Phenotype<G, C>> population,
        final int count,
        final Optimize opt
    ) {
        ISeq<Phenotype<G, C>> result;
        if (population.isEmpty() || count <= 0) {
            result = ISeq.empty();
        } else {
            final int ec = min(count, _eliteCount);
            result = _elite.select(population, ec, opt);
            result = result.append(
                _rest.select(population, max(0, count - ec), opt)
            );
        }
        return result;
    }
}

Listing 2.4: Elite selector
Listing 2.4 shows how an elite selector could be implemented by using the existing Truncation- and TournamentSelector. With elite selection, the quality of the best solution in each generation monotonically increases over time.[3] This is not strictly necessary, though, since the evolution Engine/Stream doesn't throw away the best solution found during the evolution process.

2.1.4 Alterers

For implementing a new alterer class it is necessary to implement the Alterer
interface. You might do this if your new Gene type needs a special kind of
alterer not available in the Jenetics project.
@FunctionalInterface
public interface Alterer<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
> {
    public AltererResult<G, C> alter(
        Seq<Phenotype<G, C>> population,
        long generation
    );
}

Listing 2.5: Alterer interface
The first parameter of the alter method is the population which has to be altered and the second parameter is the generation of the newly created individuals. The returned AltererResult contains the altered population together with the number of genes that have been altered.

To maximize the range of application of an Alterer, it is recommended
that they can handle Genotypes and Chromosomes with variable length.
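As a minimal sketch, a pass-through Alterer could look like the following; the AltererResult.of factory (taking the resulting population and the number of altered genes) and ISeq.of(Iterable) are assumptions here, not confirmed by the listing above.

// Sketch: a pass-through alterer which leaves the population unchanged.
// AltererResult.of(population, alterations) is an assumed factory;
// ISeq.of(Iterable) copies the given population into an ISeq.
final Alterer<DoubleGene, Double> passThrough = (population, generation) ->
    AltererResult.of(ISeq.of(population), 0);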

2.1.5 Statistics

During the development phase of an application which uses the Jenetics library, additional statistical data about the evolution process is crucial. Such data can help to optimize the parametrization of the evolution Engine. A good starting point is to use the EvolutionStatistics class in the io.jenetics.engine package (see listing 1.10 on page 27). If the data in the EvolutionStatistics class doesn't fit your needs, you simply have to write your own statistics class. It is not possible to derive from the existing EvolutionStatistics class. This is not a real restriction, since you still can use the class by delegation. Just implement the Java Consumer<EvolutionResult<G, C>> interface.
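A minimal sketch of such a hand-written statistics consumer is shown below; the class name and the tracked values are made up for illustration.

// Sketch: a custom statistics consumer which counts generations and
// remembers the best fitness seen so far.
final class MyStatistics implements Consumer<EvolutionResult<DoubleGene, Double>> {
    private long _generations = 0;
    private Double _bestFitness = null;

    @Override
    public void accept(final EvolutionResult<DoubleGene, Double> result) {
        _generations = result.getGeneration();
        final Double fitness = result.getBestFitness();
        if (_bestFitness == null || fitness.compareTo(_bestFitness) > 0) {
            _bestFitness = fitness;
        }
    }

    long generations() { return _generations; }
    Double bestFitness() { return _bestFitness; }
}

Like the EvolutionStatistics class, such a consumer can be attached to the stream with the peek method.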

2.1.6 Engine

The evolution Engine itself can't be extended, but it is still possible to create an EvolutionStream without using the Engine class.3 Because the EvolutionStream has no direct dependency on the Engine, it is possible to use a different, special evolution Function.
public final class SpecialEngine {
    // The Genotype factory.
    private static final Factory<Genotype<DoubleGene>> GTF =
        Genotype.of(DoubleChromosome.of(0, 1));

    // The fitness function.
    private static Double fitness(final Genotype<DoubleGene> gt) {
        return gt.getGene().getAllele();
    }

    // Create new evolution start object.
    private static EvolutionStart<DoubleGene, Double>
    start(final int populationSize, final long generation) {
        final ISeq<Phenotype<DoubleGene, Double>> population = GTF
            .instances()
            .map(gt -> Phenotype
                .of(gt, generation, SpecialEngine::fitness))
            .limit(populationSize)
            .collect(ISeq.toISeq());

        return EvolutionStart.of(population, generation);
    }

    // The special evolution function.
    private static EvolutionResult<DoubleGene, Double>
    evolve(final EvolutionStart<DoubleGene, Double> start) {
        return ...; // Add implementation!
    }

    public static void main(final String[] args) {
        final Genotype<DoubleGene> best = EvolutionStream
            .of(() -> start(50, 0), SpecialEngine::evolve)
            .limit(Limits.bySteadyFitness(10))
            .limit(100)
            .collect(EvolutionResult.toBestGenotype());

        System.out.println("Best Genotype: " + best);
    }
}

Listing 2.6: Special evolution engine
Listing 2.6 shows a complete implementation stub for using your own special evolution Function.
3 Also refer to section 1.3.3.3 on page 25 on how to create an EvolutionStream from an
evolution Function.

2.2 Encoding

This section presents some encoding examples for common problems. The
encoding should be a complete and minimal expression of a solution to the
problem. An encoding is complete if it contains enough information to represent
every solution to the problem. An minimal encoding contains only the information
needed to represent a solution to the problem. If an encoding contains more
information than is needed to uniquely identify solutions to the problem, the
search space will be larger than necessary.
Whenever possible, the encoding should not be able to represent infeasible
solutions. If a genotype can represent an infeasible solution, care must be taken
in the fitness function to give partial credit to the genotype for its »good« genetic
material while sufficiently penalizing it for being infeasible. Implementing a
specialized Chromosome, which won't create invalid encodings, can be a solution
to this problem. In general, it is much more desirable to design a representation
that can only represent valid solutions so that the fitness function measures only
fitness, not validity. An encoding that includes invalid individuals enlarges the
search space and makes the search more costly. A deeper analysis of how to
create encodings can be found in [27] and [26].
Some of the encodings presented in the following sections have been implemented by Jenetics, using the Codec4 interface, and are available through static factory methods of the io.jenetics.engine.Codecs class.

2.2.1 Real function

Jenetics contains three different numeric gene and chromosome implementations,
which can be used to encode a real function, f : R → R:
• IntegerGene/Chromosome,
• LongGene/Chromosome and
• DoubleGene/Chromosome.
It is quite easy to encode a real function. Only the minimum and maximum
value of the function domain must be defined. The DoubleChromosome of length
1 is then wrapped into a Genotype.
Genotype.of(
    DoubleChromosome.of(min, max, 1)
);

Decoding the double value from the Genotype is also straightforward. Just get the first gene from the first chromosome, with the getGene() method, and convert it to a double.
static double toDouble(final Genotype<DoubleGene> gt) {
    return gt.getGene().doubleValue();
}

When the Genotype contains only scalar chromosomes5, it should be clear that it can't be altered by every Alterer. That means that none of the Crossover alterers will be able to create modified Genotypes. For scalars the appropriate alterers would be the MeanAlterer, GaussianAlterer and Mutator.

4 See section 2.3 on page 51.
5 Scalar chromosomes contain only one gene.

Scalar Chromosomes and/or Genotypes can only be altered by
MeanAlterer, GaussianAlterer and Mutator classes. Other alterers
are allowed, but will have no effect on the Chromosomes.
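To make this concrete, here is a minimal sketch of an Engine configured for the scalar encoding above with alterers that actually work on single-gene chromosomes. The fitness function eval and the domain bounds are illustrative assumptions of this sketch.

import io.jenetics.DoubleChromosome;
import io.jenetics.DoubleGene;
import io.jenetics.Genotype;
import io.jenetics.MeanAlterer;
import io.jenetics.Mutator;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionResult;

public class RealFunctionExample {
    // Hypothetical real function to maximize; any f: R -> R works here.
    static double eval(final double x) {
        return Math.sin(x)*x;
    }

    public static void main(final String[] args) {
        final double min = 0.0;
        final double max = 10.0;

        final Engine<DoubleGene, Double> engine = Engine
            .builder(
                gt -> eval(gt.getGene().doubleValue()),
                Genotype.of(DoubleChromosome.of(min, max, 1)))
            // Only alterers which have an effect on single-gene chromosomes.
            .alterers(
                new Mutator<>(0.1),
                new MeanAlterer<>(0.3))
            .build();

        final Genotype<DoubleGene> best = engine.stream()
            .limit(100)
            .collect(EvolutionResult.toBestGenotype());

        System.out.println("Best x: " + best.getGene().doubleValue());
    }
}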

2.2.2 Scalar function

Optimizing a function f(x1, ..., xn) of one or more variables whose range is one-dimensional, we have two possibilities for the Genotype encoding.[31] For the first encoding we expect that all variables, xi, have the same minimum and maximum value. In this case we can simply create a Genotype with a Numeric Chromosome of the desired length n.
Genotype.of(
    DoubleChromosome.of(min, max, n)
);

The decoding of the Genotype requires a cast of the first Chromosome to a
DoubleChromosome. With a call to the DoubleChromosome.toArray() method
we return the variables (x1 , ..., xn ) as double[] array.
static double[] toScalars(final Genotype<DoubleGene> gt) {
    return gt.getChromosome().as(DoubleChromosome.class).toArray();
}

With the first encoding you have the possibility to use all available alterers,
including all Crossover alterer classes.
The second encoding must be used if the minimum and maximum value of
the variables xi can’t be the same for all i. For the different domains, each
variable xi is represented by a Numeric Chromosome with length one. The final
Genotype will consist of n Chromosomes with length one.
Genotype.of(
    DoubleChromosome.of(min1, max1, 1),
    DoubleChromosome.of(min2, max2, 1),
    ...
    DoubleChromosome.of(minn, maxn, 1)
);

With the help of the Java Stream API, the decoding of the Genotype can be done in a few lines. The DoubleChromosome stream, which is created from the chromosome Seq, is first mapped to double values and then collected into an array.
static double[] toScalars(final Genotype<DoubleGene> gt) {
    return gt.stream()
        .mapToDouble(c -> c.getGene().doubleValue())
        .toArray();
}

As already mentioned, with the use of scalar chromosomes we can only use the MeanAlterer, GaussianAlterer or Mutator alterer classes.
If there are performance issues in converting the Genotype into a double[] array, or any other numeric array, you can access the Genes directly via the Genotype.get(i, j) method and then convert them to the desired numeric value, by calling intValue(), longValue() or doubleValue().
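A sketch of such direct access is shown below; the sum helper is purely illustrative and assumes the n-chromosome encoding from above, where each chromosome holds a single gene.

// Sum the variables directly from the genotype, without creating an
// intermediate double[] array.
static double sum(final Genotype<DoubleGene> gt) {
    double sum = 0;
    for (int i = 0; i < gt.length(); ++i) {
        sum += gt.get(i, 0).doubleValue();
    }
    return sum;
}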

2.2.3 Vector function

A function f(X1, ..., Xn), of one to n variables whose range is m-dimensional, is encoded by m DoubleChromosomes of length n.[32] The domain (minimum and maximum values) of one variable Xi is the same in this encoding.
Genotype.of(
    DoubleChromosome.of(min1, max1, m),
    DoubleChromosome.of(min2, max2, m),
    ...
    DoubleChromosome.of(minn, maxn, m)
);

The decoding of the vectors is quite easy with the help of the Java Stream API. The map call casts each Chromosome object to the actual DoubleChromosome and converts it to a double[] array; the results are then collected into a two-dimensional double[n][m] array.
static double[][] toVectors(final Genotype<DoubleGene> gt) {
    return gt.stream()
        .map(dc -> dc.as(DoubleChromosome.class).toArray())
        .toArray(double[][]::new);
}

For the special case of n = 1, the decoding of the Genotype can be simplified to
the decoding we introduced for scalar functions in section 2.2.2.
static double[] toVector(final Genotype<DoubleGene> gt) {
    return gt.getChromosome().as(DoubleChromosome.class).toArray();
}

2.2.4 Affine transformation

An affine transformation6 7 is usually performed by a matrix multiplication with a transformation matrix in a homogeneous coordinate system8. For a transformation in $\mathbb{R}^2$, we can define the matrix A9:


$$
A =
\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
0 & 0 & 1
\end{pmatrix}.
\qquad (2.2.1)
$$
A simple representation can be done by creating a Genotype which contains
two DoubleChromosomes with a length of 3.
Genotype.of(
    DoubleChromosome.of(min, max, 3),
    DoubleChromosome.of(min, max, 3)
);
6 https://en.wikipedia.org/wiki/Affine_transformation
7 http://mathworld.wolfram.com/AffineTransformation.html
8 https://en.wikipedia.org/wiki/Homogeneous_coordinates
9 https://en.wikipedia.org/wiki/Transformation_matrix


The drawback with this kind of encoding is that we will create a lot of invalid (non-affine) transformation matrices during the evolution process, which must be detected and discarded. It is also difficult to find the right parameters for the min and max values of the DoubleChromosomes.
A better approach is to encode the transformation parameters instead of the transformation matrix. The affine transformation can be expressed by the following parameters:
• sx – the scale factor in x direction
• sy – the scale factor in y direction
• tx – the offset in x direction
• ty – the offset in y direction
• θ – the rotation angle clockwise around origin
• kx – shearing parallel to x axis
• ky – shearing parallel to y axis
These parameters can then be represented by the following Genotype.
Genotype.of(
    // Scale
    DoubleChromosome.of(sxMin, sxMax),
    DoubleChromosome.of(syMin, syMax),
    // Translation
    DoubleChromosome.of(txMin, txMax),
    DoubleChromosome.of(tyMin, tyMax),
    // Rotation
    DoubleChromosome.of(thMin, thMax),
    // Shear
    DoubleChromosome.of(kxMin, kxMax),
    DoubleChromosome.of(kyMin, kyMax)
)

This encoding ensures that no invalid Genotype will be created during the evolution process, since the crossover will only be performed on the same kind of chromosome (same chromosome index). To convert the Genotype back to the transformation matrix A, the following equations can be used [14]:
$$
\begin{aligned}
a_{11} &= s_x \cos\theta + k_x s_y \sin\theta &
a_{12} &= s_y k_x \cos\theta - s_x \sin\theta &
a_{13} &= t_x \\
a_{21} &= k_y s_x \cos\theta + s_y \sin\theta &
a_{22} &= s_y \cos\theta - s_x k_y \sin\theta &
a_{23} &= t_y
\end{aligned}
\qquad (2.2.2)
$$

This corresponds to a transformation order of $T \cdot Sh \cdot Sc \cdot R$:

$$
A =
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} 1 & k_x & 0 \\ k_y & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.
$$

In Java code, the conversion from the Genotype to the transformation matrix will look like this:
static double[][] toMatrix(final Genotype<DoubleGene> gt) {
    final double sx = gt.get(0, 0).doubleValue();
    final double sy = gt.get(1, 0).doubleValue();
    final double tx = gt.get(2, 0).doubleValue();
    final double ty = gt.get(3, 0).doubleValue();
    final double th = gt.get(4, 0).doubleValue();
    final double kx = gt.get(5, 0).doubleValue();
    final double ky = gt.get(6, 0).doubleValue();

    final double cos_th = cos(th);
    final double sin_th = sin(th);
    final double a11 = cos_th*sx + kx*sy*sin_th;
    final double a12 = cos_th*kx*sy - sx*sin_th;
    final double a21 = cos_th*ky*sx + sy*sin_th;
    final double a22 = cos_th*sy - ky*sx*sin_th;

    return new double[][] {
        {a11, a12, tx},
        {a21, a22, ty},
        {0.0, 0.0, 1.0}
    };
}

For the introduced encoding all kinds of alterers can be used. Since we have one scalar DoubleChromosome, the rotation angle θ, it is recommended to also add a MeanAlterer or GaussianAlterer to the list of alterers.
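To make the purpose of the decoded matrix concrete, a small plain helper (for illustration only, not part of the library) applies the 3x3 homogeneous transformation matrix to a 2D point:

// Apply the decoded transformation matrix to the point (x, y).
static double[] transform(final double[][] a, final double x, final double y) {
    return new double[] {
        a[0][0]*x + a[0][1]*y + a[0][2],
        a[1][0]*x + a[1][1]*y + a[1][2]
    };
}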

2.2.5 Graph

A graph can be represented in many different ways. The best-known graph representation is the adjacency matrix. The following encoding examples use adjacency matrices with different characteristics.
Undirected graph In an undirected graph the edges between the vertices have no direction. If there is a path between nodes i and j, it is assumed that there is also a path from j to i.

Figure 2.2.1: Undirected graph and adjacency matrix
Figure 2.2.1 shows an undirected graph and its corresponding matrix representation. Since the edges between the nodes have no direction, the values of the lower triangular part of the matrix are not taken into account. An application which optimizes an undirected graph has to ignore this part of the matrix.10
final int n = 6;
final Genotype<BitGene> gt = Genotype.of(BitChromosome.of(n), n);

The code snippet above shows how to create an adjacency matrix for a graph with n = 6 nodes. It creates a genotype which consists of n BitChromosomes of length n each. Whether node i is connected to node j can be easily checked by calling gt.get(i-1, j-1).booleanValue(). For extracting the whole matrix as an int[][] array, the following code can be used.
final int[][] array = gt.toSeq().stream()
    .map(c -> c.toSeq().stream()
        .mapToInt(BitGene::ordinal)
        .toArray())
    .toArray(int[][]::new);
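If only the upper triangle is relevant, a fitness function can iterate over it directly. The edgeCount helper below is a sketch for illustration, not a library method; it assumes the n x n BitChromosome encoding from above.

// Count the edges of the undirected graph by reading only the upper
// triangle of the adjacency matrix (the lower triangle is ignored).
static int edgeCount(final Genotype<BitGene> gt) {
    int count = 0;
    for (int i = 0; i < gt.length(); ++i) {
        for (int j = i + 1; j < gt.length(); ++j) {
            if (gt.get(i, j).booleanValue()) {
                ++count;
            }
        }
    }
    return count;
}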

Directed graph A directed graph (digraph) is a graph where the paths between the nodes have a direction associated with them. The encoding of a directed graph looks exactly like the encoding of an undirected graph. This time the whole matrix is used and the lower triangular part is no longer ignored.

Figure 2.2.2: Directed graph and adjacency matrix
Figure 2.2.2 shows the adjacency matrix of a digraph. This time the whole
matrix is used for representing the graph.
Weighted directed graph A weighted graph associates a weight (label) with every edge in the graph. Weights are usually real numbers. They may be restricted to rational numbers or integers.
The following code snippet shows how the Genotype of the matrix is created.
final int n = 6;
final double min = -1;
final double max = 20;
final Genotype<DoubleGene> gt = Genotype
    .of(DoubleChromosome.of(min, max, n), n);
10 This property violates the minimal encoding requirement we mentioned at the beginning of section 2.2 on page 45. For simplicity reasons this will be ignored for the undirected graph encoding.


Figure 2.2.3: Weighted graph and adjacency matrix
For accessing the single matrix elements, you can simply call Genotype.get(i, j).doubleValue(). If the interaction with another library requires a double[][] array, the following code can be used.
final double[][] array = gt.stream()
    .map(dc -> dc.as(DoubleChromosome.class).toArray())
    .toArray(double[][]::new);

2.3 Codec

The Codec interface—located in the io.jenetics.engine package—narrows the gap between the fitness Function, which should be maximized/minimized, and the Genotype representation, which can be understood by the evolution Engine. With the Codec interface it is possible to implement the encodings of section 2.2 on page 45 in a more formalized way.
Normally, the Engine expects a fitness function which takes a Genotype as input. This Genotype has then to be transformed into an object of the problem domain. The usage of the Codec interface allows a tighter coupling of the Genotype definition and the transformation code.11
public interface Codec<T, G extends Gene<?, G>> {
    public Factory<Genotype<G>> encoding();
    public Function<Genotype<G>, T> decoder();
    public default T decode(final Genotype<G> gt) { ... }
}

Listing 2.7: Codec interface
Listing 2.7 shows the Codec interface. The encoding() method returns the Genotype factory, which is used by the Engine for creating new Genotypes. The decoder Function, which is returned by the decoder() method, transforms the Genotype to the argument type of the fitness Function. Without the Codec interface, the implementation of the fitness Function is polluted with code that transforms the Genotype into the argument type of the actual fitness Function.
static double eval(final Genotype<DoubleGene> gt) {
    final double x = gt.getGene().doubleValue();
    // Do some calculation with 'x'.
    return ...
}

11 Section 2.2 on page 45 describes some possible encodings for common optimization problems.

The Codec for the example above is quite simple and is shown below. It is not necessary to implement the Codec interface; instead you can use the Codec.of factory method for creating new Codec instances.
final DoubleRange domain = DoubleRange.of(0, 2*PI);
final Codec<Double, DoubleGene> codec = Codec.of(
    Genotype.of(DoubleChromosome.of(domain)),
    gt -> gt.getChromosome().getGene().getAllele()
);

When using a Codec instance, the fitness Function solely contains code from
your actual problem domain—no dependencies to classes of the Jenetics library.
static double eval(final double x) {
    // Do some calculation with 'x'.
    return ...
}

Jenetics comes with a set of standard encodings, which are created via static factory methods of the io.jenetics.engine.Codecs class. The following subsections show some of the implementations of these methods.

2.3.1 Scalar codec

Listing 2.8 shows the implementation of the Codecs.ofScalar factory method—for
Integer scalars.
static Codec<Integer, IntegerGene> ofScalar(IntRange domain) {
    return Codec.of(
        Genotype.of(IntegerChromosome.of(domain)),
        gt -> gt.getChromosome().getGene().getAllele()
    );
}

Listing 2.8: Codec factory method: ofScalar

The usage of the Codec created by this factory method simplifies the implementation of the fitness Function and the creation of the evolution Engine. For scalar types, the saving in complexity and lines of code is not that big, but using the factory method is still quite handy. The following listing demonstrates the interaction between Codec, fitness Function and evolution Engine.
class Main {
    // Fitness function directly takes an 'int' value.
    static double fitness(int arg) {
        return ...;
    }
    public static void main(String[] args) {
        final Engine<IntegerGene, Double> engine = Engine
            .builder(Main::fitness, ofScalar(IntRange.of(0, 100)))
            .build();
        ...
    }
}

2.3.2 Vector codec

In listing 2.9, the ofVector factory method returns a Codec for an int[] array. The domain parameter defines the allowed range of the int values and the length parameter defines the length of the encoded int array.
static Codec<int[], IntegerGene> ofVector(
    IntRange domain,
    int length
) {
    return Codec.of(
        Genotype.of(IntegerChromosome.of(domain, length)),
        gt -> gt.getChromosome()
            .as(IntegerChromosome.class)
            .toArray()
    );
}

Listing 2.9: Codec factory method: ofVector

The usage example of the vector Codec is almost the same as for the scalar Codec. As an additional parameter, we need to define the length of the desired array, and the fitness function now takes an int[] array.
class Main {
    // Fitness function directly takes an 'int[]' array.
    static double fitness(int[] args) {
        return ...;
    }
    public static void main(String[] args) {
        final Engine<IntegerGene, Double> engine = Engine
            .builder(
                Main::fitness,
                ofVector(IntRange.of(0, 100), 10))
            .build();
        ...
    }
}

2.3.3 Subset codec

There are currently two kinds of subset codecs you can choose from: finding
subsets with variable size and with fixed size.
Variable-sized subsets A Codec for variable-sized subsets can be easily
implemented with the use of a BitChromosome, as shown in listing 2.10.
static <T> Codec<ISeq<T>, BitGene> ofSubSet(ISeq<T> basicSet) {
    return Codec.of(
        Genotype.of(BitChromosome.of(basicSet.length())),
        gt -> gt.getChromosome()
            .as(BitChromosome.class).ones()
            .mapToObj(basicSet)
            .collect(ISeq.toISeq())
    );
}

Listing 2.10: Codec factory method: ofSubSet


The following usage example of the subset Codec shows a simplified version of the Knapsack problem (see section 5.4 on page 110). We try to find a subset, from the given basic SET, where the sum of the values is as large as possible, but smaller than or equal to 20.
class Main {
    // The basic set from where to choose an 'optimal' subset.
    final static ISeq<Integer> SET =
        ISeq.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

    // Fitness function directly takes the 'ISeq<Integer>' subset.
    static int fitness(ISeq<Integer> subset) {
        assert(subset.size() <= SET.size());
        final int size = subset.stream().collect(
            Collectors.summingInt(Integer::intValue));
        return size <= 20 ? size : 0;
    }
    public static void main(String[] args) {
        final Engine<BitGene, Integer> engine = Engine
            .builder(Main::fitness, ofSubSet(SET))
            .build();
        ...
    }
}

Fixed-size subsets The second kind of subset codec allows you to find the
best subset of a given, fixed size. A classical usage for this encoding is the Subset
sum problem12 :
Given a set (or multi-set) of integers, is there a non-empty subset whose sum is
zero? For example, given the set {−7, −3, −2, 5, 8}, the answer is yes because
the subset {−3, -2, 5} sums to zero. The problem is NP-complete.13
public class SubsetSum
    implements Problem<ISeq<Integer>, EnumGene<Integer>, Integer>
{
    private final ISeq<Integer> _basicSet;
    private final int _size;

    public SubsetSum(ISeq<Integer> basicSet, int size) {
        _basicSet = basicSet;
        _size = size;
    }

    @Override
    public Function<ISeq<Integer>, Integer> fitness() {
        return subset -> abs(
            subset.stream().mapToInt(Integer::intValue).sum());
    }

    @Override
    public Codec<ISeq<Integer>, EnumGene<Integer>> codec() {
        return Codecs.ofSubSet(_basicSet, _size);
    }
}

12 https://en.wikipedia.org/wiki/Subset_sum_problem
13 https://en.wikipedia.org/wiki/NP-completeness

2.3.4 Permutation codec

This kind of codec can be used for problems where the optimal solution depends on the order of the input elements. A classical example for such problems is the Traveling Salesman Problem (chapter 5.5 on page 112).
static <T> Codec<T[], EnumGene<T>> ofPermutation(T... alleles) {
    return Codec.of(
        Genotype.of(PermutationChromosome.of(alleles)),
        gt -> gt.getChromosome().stream()
            .map(EnumGene::getAllele)
            .toArray(length -> (T[])Array.newInstance(
                alleles[0].getClass(), length))
    );
}

Listing 2.11: Codec factory method: ofPermutation

Listing 2.11 shows the implementation of a permutation codec, where the order of the given alleles influences the value of the fitness function. An alternative formulation of the traveling salesman problem is shown in the following listing. It uses the permutation codec of listing 2.11 and uses io.jenetics.jpx.WayPoint objects, from the JPX14 project, for representing the city locations.
public class TSM {
    // The locations to visit.
    static final ISeq<WayPoint> POINTS = ISeq.of(...);

    // The permutation codec.
    static final Codec<ISeq<WayPoint>, EnumGene<WayPoint>>
        CODEC = Codecs.ofPermutation(POINTS);

    // The fitness function (in the problem domain).
    static double dist(final ISeq<WayPoint> path) {
        return path.stream()
            .collect(Geoid.DEFAULT.toTourLength())
            .to(Length.Unit.METER);
    }

    // The evolution engine.
    static final Engine<EnumGene<WayPoint>, Double> ENGINE = Engine
        .builder(TSM::dist, CODEC)
        .optimize(Optimize.MINIMUM)
        .build();

    // Find the solution.
    public static void main(final String[] args) {
        final ISeq<WayPoint> result = CODEC.decode(
            ENGINE.stream()
                .limit(10)
                .collect(EvolutionResult.toBestGenotype())
        );

        System.out.println(result);
    }
}

14 https://github.com/jenetics/jpx

2.3.5 Mapping codec

This codec is a variation of the permutation codec. Instead of permuting the elements of a given array, it permutes the mapping of elements of a source set to the elements of a target set. The code snippet below shows the method of the Codecs class, which creates a mapping codec from a given source and target set.
public static <A, B> Codec<Map<A, B>, EnumGene<Integer>>
ofMapping(ISeq<? extends A> source, ISeq<? extends B> target);

It is not necessary that the source and target set are of the same size. If |source| > |target|, the returned mapping function is surjective; if |source| < |target|, the mapping is injective; and if |source| = |target|, the created mapping is bijective. In every case the size of the encoded Map is |target|. Figure 2.3.1 shows the different mapping types in graphical form.

Figure 2.3.1: Mapping codec types
With |source| = |target|, you will create a codec for the assignment problem. The problem is defined by a number of workers and a number of jobs. Every worker can be assigned to perform any job. The cost for a worker may vary depending on the worker-job assignment. It is required to perform all jobs by assigning exactly one worker to each job and exactly one job to each worker, in such a way that the total assignment costs are optimized.15 The costs for such worker-job assignments are usually given by a matrix. Such an example matrix is shown in table 2.3.1.
             Job 1   Job 2   Job 3   Job 4
Worker 1       13       4       7       6
Worker 2        1      11       5       4
Worker 3        6       7       3       8
Worker 4        1       3       5       9

Table 2.3.1: Worker-job cost
If your worker-job cost can be expressed by a matrix, the Hungarian algorithm16 can find an optimal solution in O(n^3) time. You should consider this deterministic algorithm before using a GA.
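The following sketch shows how such an assignment problem could be wired up with the mapping codec. The worker/job names, the cost function and the EnumGene<Integer> gene type (taken from the ofMapping signature shown above) are assumptions of this sketch, not prescribed by the library.

import java.util.Map;

import io.jenetics.EnumGene;
import io.jenetics.Optimize;
import io.jenetics.engine.Codec;
import io.jenetics.engine.Codecs;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionResult;
import io.jenetics.util.ISeq;

public class Assignment {
    static final ISeq<String> WORKERS = ISeq.of("W1", "W2", "W3", "W4");
    static final ISeq<String> JOBS = ISeq.of("J1", "J2", "J3", "J4");

    // Cost matrix from table 2.3.1: COST[worker][job].
    static final int[][] COST = {
        {13,  4, 7, 6},
        { 1, 11, 5, 4},
        { 6,  7, 3, 8},
        { 1,  3, 5, 9}
    };

    // The codec maps a permutation onto a worker -> job assignment.
    static final Codec<Map<String, String>, EnumGene<Integer>> CODEC =
        Codecs.ofMapping(WORKERS, JOBS);

    // Total cost of a given worker-to-job assignment.
    static int cost(final Map<String, String> assignment) {
        return assignment.entrySet().stream()
            .mapToInt(e -> COST[WORKERS.indexOf(e.getKey())]
                               [JOBS.indexOf(e.getValue())])
            .sum();
    }

    public static void main(final String[] args) {
        final Engine<EnumGene<Integer>, Integer> engine = Engine
            .builder(Assignment::cost, CODEC)
            .optimize(Optimize.MINIMUM)
            .build();

        final Map<String, String> best = CODEC.decode(
            engine.stream()
                .limit(100)
                .collect(EvolutionResult.toBestGenotype()));

        System.out.println(best);
    }
}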

2.3.6 Composite codec

The composite Codec factory method allows you to combine two or more Codecs into one. Listing 2.12 shows the signature of the factory method, which is implemented directly in the Codec interface.

15 https://en.wikipedia.org/wiki/Assignment_problem
16 https://en.wikipedia.org/wiki/Hungarian_algorithm
static <G extends Gene<?, G>, A, B, T> Codec<T, G> of(
    final Codec<A, G> codec1,
    final Codec<B, G> codec2,
    final BiFunction<A, B, T> decoder
);

Listing 2.12: Composite Codec factory method
As you can see from the method definition, the combined Codecs and the resulting Codec have the same Gene type.

Only Codecs with the same Gene type can be composed by the combining factory methods of the Codec class.
The following listing shows a full example which uses a combined Codec. It
uses the subset Codec, introduced in section 2.3.3 on page 53, and combines it
into a Tuple of subsets.
class Main {
    static final ISeq<Integer> SET =
        ISeq.of(1, 2, 3, 4, 5, 6, 7, 8, 9);

    // Result type of the combined 'Codec'.
    static final class Tuple<A, B> {
        final A first;
        final B second;
        Tuple(final A first, final B second) {
            this.first = first;
            this.second = second;
        }
    }

    static int fitness(Tuple<ISeq<Integer>, ISeq<Integer>> args) {
        return args.first.stream()
                .mapToInt(Integer::intValue)
                .sum() -
            args.second.stream()
                .mapToInt(Integer::intValue)
                .sum();
    }

    public static void main(String[] args) {
        // Combined 'Codec'.
        final Codec<Tuple<ISeq<Integer>, ISeq<Integer>>, BitGene>
            codec = Codec.of(
                Codecs.ofSubSet(SET),
                Codecs.ofSubSet(SET),
                Tuple::new
            );

        final Engine<BitGene, Integer> engine = Engine
            .builder(Main::fitness, codec)
            .build();

        final Phenotype<BitGene, Integer> pt = engine.stream()
            .limit(100)
            .collect(EvolutionResult.toBestPhenotype());

        // Use the codec for converting the result 'Genotype'.
        final Tuple<ISeq<Integer>, ISeq<Integer>> result =
            codec.decoder().apply(pt.getGenotype());
    }
}

If you have to combine more than two Codecs into one, you have to use the second, more general combining function: Codec.of(ISeq<Codec<?, G>>, Function<Object[], T>). The following example shows how to use this general combining function. It is just a little bit more verbose and requires explicit casts for the sub-codec types.
final Codec<Triple<Long, Long, Long>, LongGene>
    codec = Codec.of(ISeq.of(
            Codecs.ofScalar(LongRange.of(0, 100)),
            Codecs.ofScalar(LongRange.of(0, 1000)),
            Codecs.ofScalar(LongRange.of(0, 10000))),
        values -> {
            final Long first = (Long)values[0];
            final Long second = (Long)values[1];
            final Long third = (Long)values[2];
            return new Triple<>(first, second, third);
        }
    );

2.4 Problem

The Problem interface is a further abstraction level, which allows you to bind the problem encoding and the fitness function into one data structure.
public interface Problem<
    T,
    G extends Gene<?, G>,
    C extends Comparable<? super C>
> {
    public Function<T, C> fitness();
    public Codec<T, G> codec();
}

Listing 2.13: Problem interface
Listing 2.13 shows the Problem interface. The generic type T represents the
native argument type of the fitness function and C the Comparable result of the
fitness function. G is the Gene type, which is used by the evolution Engine.
// Definition of the Ones counting problem.
final Problem<ISeq<BitGene>, BitGene, Integer> ONES_COUNTING =
    Problem.of(
        // Fitness Function<ISeq<BitGene>, Integer>
        genes -> (int)genes.stream()
            .filter(BitGene::getBit).count(),
        Codec.of(
            // Genotype Factory<Genotype<BitGene>>
            Genotype.of(BitChromosome.of(20, 0.15)),
            // Genotype conversion
            // Function<Genotype<BitGene>, ISeq<BitGene>>
            gt -> gt.getChromosome().toSeq()
        )
    );

// Engine creation for Problem solving.
final Engine<BitGene, Integer> engine = Engine
    .builder(ONES_COUNTING)
    .populationSize(150)
    .survivorsSelector(new TournamentSelector<>(5))
    .offspringSelector(new RouletteWheelSelector<>())
    .alterers(
        new Mutator<>(0.03),
        new SinglePointCrossover<>(0.125))
    .build();

The listing above shows how a new Engine is created by using a predefined
Problem instance. This allows the complete decoupling of problem and Engine
definition.

2.5 Validation

A given problem should usually be encoded in a way that it is not possible for the evolution Engine to create invalid individuals (Genotypes). Some possible encodings for common data structures are described in section 2.2 on page 45. The Engine creates new individuals in the altering step, by rearranging (or creating new) Genes within a Chromosome. Since a Genotype is treated as valid if every single Gene in every Chromosome is valid, the validity property of the Genes determines the validity of the whole Genotype.
The Engine tries to create only valid individuals when creating the initial population and when it replaces Genotypes which have been destroyed by the altering step. Individuals which have exceeded their lifetime are also replaced by new valid ones. To guarantee the termination of the Genotype creation, the Engine is parameterized with the maximal number of retries (individualCreationRetries)17. If the described validation mechanism doesn't fulfill your needs, you can override it by creating the Engine with an external Genotype validator.
final Predicate<? super Genotype<DoubleGene>> validator = gt -> {
    // Implement advanced Genotype check.
    boolean valid = ...;
    return valid;
};
final Engine<DoubleGene, Double> engine = Engine.builder(gtf, ff)
    .limit(100)
    .genotypeValidator(validator)
    .individualCreationRetries(15)
    .build();

Having the possibility to replace the default validation check is a nice thing, but
it is better to not create invalid individuals in the first place. For achieving this
goal, you have two possibilities:
1. Creating an explicit Genotype factory and
2. implementing new Gene/Chromosome/Alterer classes.
17 See section 1.3.3.2 on page 22.


Genotype factory The usual mechanism for defining an encoding is to create a Genotype prototype18. Since the Genotype implements the Factory interface, a prototype instance can easily be passed to the Engine.builder method. For a more advanced Genotype creation, you only have to create an explicit Genotype factory.
final Factory<Genotype<DoubleGene>> gtf = () -> {
    // Implement your advanced Genotype factory.
    Genotype<DoubleGene> genotype = ...;
    return genotype;
};
final Engine<DoubleGene, Double> engine = Engine.builder(gtf, ff)
    .limit(100)
    .individualCreationRetries(15)
    .build();

With this method you can avoid that the Engine creates invalid individuals in the first place, but it is still possible that the alterer step will destroy your Genotypes.
Gene/Chromosome/Alterer Creating your own Gene, Chromosome and Alterer classes is the most heavy-weight approach for solving the validity problem. Refer to section 2.1 on page 39 for a more detailed description on how to implement these classes.

2.6 Termination

Termination is the criterion by which the evolution stream decides whether to continue or truncate the stream. This section gives a deeper insight into the different ways of terminating or truncating the evolution stream, respectively.
The EvolutionStream of the Jenetics library offers an additional method for limiting the evolution. With the limit(Predicate<? super EvolutionResult<G, C>>) method it is possible to use more advanced termination strategies. If the predicate given to the limit function returns false, the evolution stream is truncated. EvolutionStream.limit(r -> true) will create an infinite evolution stream. All termination strategies described in the following sub-sections are part of the library and can be created by factory methods of the io.jenetics.engine.Limits class. The termination strategies were tested by solving the Knapsack problem19 (see section 5.4 on page 110) with 250 items. This makes it a real problem with a search-space size of $2^{250} \approx 10^{75}$ elements.

The predicate given to the EvolutionStream.limit function must return
false for truncating the evolution stream. If it returns true, the evolution is
continued.
Table 2.6.1 shows the evolution parameters used for the termination tests. To make the tests comparable, all test runs use the same evolution parameters and the very same set of knapsack items. Each termination test was repeated 1,000 times, which gives us enough data to draw the given candlestick diagrams.

    Population size:       150
    Survivors selector:    TournamentSelector<>(5)
    Offspring selector:    RouletteWheelSelector<>()
    Alterers:              Mutator<>(0.03) and SinglePointCrossover<>(0.125)
    Fitness scaler:        Identity function

Table 2.6.1: Knapsack evolution parameters

18 https://en.wikipedia.org/wiki/Prototype_pattern
19 The actual implementation used for the termination tests can be found in the Github repository: https://github.com/jenetics/jenetics/blob/master/jenetics.example/src/main/java/io/jenetics/example/Knapsack.java
Some of the implemented termination strategies need to maintain an internal state. These strategies can't be re-used in different evolution streams. To be on the safe side, it is recommended to always create a new Predicate instance for each stream. Calling Stream.limit(Limits.byTerminationStrategy) will always work as expected.

2.6.1 Fixed generation

The simplest way of terminating the evolution process is to define a maximal number of generations on the EvolutionStream. It just uses the existing limit method of the Java Stream interface.
final long MAX_GENERATIONS = 100;
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(MAX_GENERATIONS);

This kind of termination method should always be applied, usually in addition to other evolution terminators, to guarantee the truncation of the evolution stream and to define an upper limit for the executed generations.
Figure 2.6.1 shows the best fitness values of the used Knapsack problem after a given number of generations, whereas the candlestick points represent the min, 25th percentile, median, 75th percentile and max fitness after 250 repetitions per generation. The solid line shows the mean of the best fitness values. For a small increase of the fitness value, the number of needed generations grows exponentially. This is especially the case when the fitness is approaching its maximal value.

2.6.2 Steady fitness

The steady fitness strategy truncates the evolution stream if its best fitness hasn't changed after a given number of generations. The predicate maintains an internal state, the number of generations with non-increasing fitness, and must be newly created for every evolution stream.
Figure 2.6.1: Fixed generation termination (best fitness vs. generation)

final class SteadyFitnessLimit<C extends Comparable<? super C>>
    implements Predicate<EvolutionResult<?, C>>
{
    private final int _generations;
    private boolean _proceed = true;
    private int _stable = 0;
    private C _fitness;

    public SteadyFitnessLimit(final int generations) {
        _generations = generations;
    }

    @Override
    public boolean test(final EvolutionResult<?, C> er) {
        if (!_proceed) return false;
        if (_fitness == null) {
            _fitness = er.getBestFitness();
            _stable = 1;
        } else {
            final Optimize opt = er.getOptimize();
            if (opt.compare(_fitness, er.getBestFitness()) >= 0) {
                _proceed = ++_stable <= _generations;
            } else {
                _fitness = er.getBestFitness();
                _stable = 1;
            }
        }
        return _proceed;
    }
}

Listing 2.14: Steady fitness
Listing 2.14 shows the implementation of Limits.bySteadyFitness(int) in the io.jenetics.engine package. It should give you an impression of how to implement your own termination strategies, which possibly hold an internal state.
Engine<DoubleGene, Double> engine = ...
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(Limits.bySteadyFitness(15));

The steady fitness terminator can be created by the bySteadyFitness factory
method of the io.jenetics.engine.Limits class. In the example above, the
evolution stream is terminated after 15 stable generations.
Figure 2.6.2: Steady fitness termination (total generations and best fitness vs. number of steady generations)
Figure 2.6.2 shows the actual total number of executed generations depending on the desired number of steady fitness generations. The variation of the total generations is quite big, as shown by the candlesticks. Though the variation can be quite big—the termination test has been repeated 250 times for each data point—the tests showed that the steady fitness termination strategy always terminated, at least for the given test setup. The lower diagram gives an overview of the fitness progression. Only the mean values of the maximal fitness are shown.

2.6.3 Evolution time

This termination strategy stops the evolution when the elapsed evolution time exceeds a user-specified maximal value. The evolution stream is only truncated at the end of a generation and will not interrupt the current evolution step. A maximal evolution time of zero ms will evaluate at least one generation. In a time-critical environment, where a solution must be found within a maximal time period, this terminator lets you define the desired guarantees.
Engine<DoubleGene, Double> engine = ...
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(Limits.byExecutionTime(Duration.ofMillis(500)));

In the code example above, the byExecutionTime(Duration) method is used for creating the termination object. Another method, byExecutionTime(Duration, Clock), lets you define the java.time.Clock which is used for measuring the execution time. Jenetics uses the nano-precision clock io.jenetics.util.NanoClock for measuring the time. The possibility to define a different Clock implementation is especially useful for testing purposes.
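A short sketch of the Clock overload is shown below; the engine variable, the 500 ms budget and the additional generation limit are assumptions of this sketch, and the standard java.time system clock is used instead of the NanoClock.

// Terminate after 500 ms wall-clock time, measured with an explicitly
// given java.time.Clock; a fixed generation limit serves as safety net.
final Genotype<DoubleGene> best = engine.stream()
    .limit(Limits.byExecutionTime(
        Duration.ofMillis(500),
        Clock.systemUTC()))
    .limit(10_000)
    .collect(EvolutionResult.toBestGenotype());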
Figure 2.6.3: Execution time termination (total generations and best fitness vs. execution time [ms])
Figure 2.6.3 shows the evaluated generations depending on the execution time. Except for very small execution times, the number of evaluated generations per time unit stays quite stable.20 That means that doubling the execution time will double the number of evolved generations.

2.6.4 Fitness threshold

This termination method stops the evolution when the best fitness in the current population becomes less than the user-specified fitness threshold and the objective is set to minimize the fitness. It also stops the evolution when the best fitness in the current population becomes greater than the user-specified fitness threshold and the objective is to maximize the fitness.
Engine<DoubleGene, Double> engine = ...
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(Limits.byFitnessThreshold(10.5))
    .limit(5000);

20 While running the tests, all other CPU-intensive processes have been stopped. The measuring started after a warm-up phase.

When limiting the evolution stream by a fitness threshold, you have to have knowledge of the expected maximal fitness. If there is no such knowledge, it is advisable to add an additional fixed-size generation limit as a safety net.
Figure 2.6.4: Fitness threshold termination (total generations and best fitness vs. fitness threshold)
Figure 2.6.4 shows the executed generations depending on the fitness threshold value. The total generation count grows exponentially with the desired fitness value. This means that this termination strategy will (practically) not terminate if the value for the fitness threshold is chosen too high. And it will definitely not terminate if the fitness threshold is higher than the global maximum of the fitness function. It is a perfect strategy if you can define some good-enough fitness value which can be easily achieved.

2.6.5 Fitness convergence

In this termination strategy, the evolution stops when the fitness is deemed as converged. Two filters of different lengths are used to smooth the best fitness across the generations. When the best smoothed fitness of the long filter is less than a specified percentage away from the best smoothed fitness of the short filter, the fitness is deemed as converged. Jenetics offers a generic version of the fitness-convergence predicate and a version where the smoothed fitness is the moving average of the used filters.
public static <N extends Number & Comparable<? super N>>
Predicate<EvolutionResult<?, N>> byFitnessConvergence(
    final int shortFilterSize,
    final int longFilterSize,
    final BiPredicate<DoubleMoments, DoubleMoments> proceed
);

Listing 2.15: General fitness convergence
Listing 2.15 shows the factory method which creates the generic fitness convergence predicate. This method allows you to define the evolution termination according to the statistical moments of the short and long fitness filter.
public static <N extends Number & Comparable<? super N>>
Predicate<EvolutionResult<?, N>> byFitnessConvergence(
    final int shortFilterSize,
    final int longFilterSize,
    final double epsilon
);

Listing 2.16: Mean fitness convergence
The second factory method (shown in listing 2.16) creates a fitness convergence
predicate, which uses the moving average21 for the two filters. The smoothed
fitness value is calculated as follows:
$$\sigma_F(N) = \frac{1}{N} \sum_{i=0}^{N-1} F_{[G-i]} \qquad (2.6.1)$$

where N is the length of the filter, F[i] the fitness value at generation i and G
the current generation. If the condition
$$\frac{\left|\sigma_F(N_S) - \sigma_F(N_L)\right|}{\delta} < \epsilon \qquad (2.6.2)$$

is fulfilled, the evolution stream is truncated, where δ is defined as follows:

$$\delta = \begin{cases}
\max\bigl(\left|\sigma_F(N_S)\right|,\,\left|\sigma_F(N_L)\right|\bigr) & \text{if } \max\bigl(\left|\sigma_F(N_S)\right|,\,\left|\sigma_F(N_L)\right|\bigr) \neq 0 \\
1 & \text{otherwise.}
\end{cases} \qquad (2.6.3)$$
Engine<DoubleGene, Double> engine = ...
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(Limits.byFitnessConvergence(10, 30, 10E-4));

For using the fitness convergence strategy you have to specify three parameters: the length of the short filter, N_S, the length of the long filter, N_L, and the relative difference between the smoothed fitness values, ε.
Figure 2.6.5 shows the termination behavior of the fitness convergence termination strategy. It can be seen that the minimum number of evolved generations is the length of the long filter, N_L.
Figure 2.6.6 shows the generations needed for terminating the evolution for higher values of the N_S and N_L parameters.

2.6.6 Population convergence

Figure 2.6.5: Fitness convergence termination: N_S = 10, N_L = 30 (total generations and fitness vs. epsilon)

21 https://en.wikipedia.org/wiki/Moving_average

This termination method stops the evolution when the population is deemed as converged. A population is deemed as converged when the average fitness across the current population is less than a user-specified percentage away from the best fitness of the current population. The evolution stream is truncated if

$$\frac{f_{max} - \bar{f}}{\delta} < \epsilon, \qquad (2.6.4)$$

where

$$\bar{f} = \frac{1}{N} \sum_{i=0}^{N-1} f_i, \qquad (2.6.5)$$

$$f_{max} = \max_{i \in [0, N)} \{f_i\} \qquad (2.6.6)$$

and

$$\delta = \begin{cases}
\max\bigl(|f_{max}|,\,|\bar{f}|\bigr) & \text{if } \max\bigl(|f_{max}|,\,|\bar{f}|\bigr) \neq 0 \\
1 & \text{otherwise.}
\end{cases} \qquad (2.6.7)$$

N denotes the number of individuals of the population.
Engine<DoubleGene, Double> engine = ...
EvolutionStream<DoubleGene, Double> stream = engine.stream()
    .limit(Limits.byPopulationConvergence(0.1));

The evolution stream in the example above will terminate if the difference between the population's mean fitness value and the maximal fitness value of the population is less than 10%.
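As a small numeric illustration with assumed values $f_{max} = 10$ and $\bar{f} = 9.5$:

$$\frac{f_{max} - \bar{f}}{\delta} = \frac{10 - 9.5}{\max(|10|,\,|9.5|)} = \frac{0.5}{10} = 0.05 < 0.1,$$

so the stream from the example above would be truncated at that generation.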

2.6.7 Gene convergence

Figure 2.6.6: Fitness convergence termination: N_S = 50, N_L = 150 (total generations and fitness vs. epsilon)

This termination strategy is different in the sense that it takes the genes or alleles, respectively, for terminating the evolution stream. In the gene convergence termination strategy the evolution stops when a specified percentage of the genes of a genotype are deemed as converged. A gene is treated as converged when the average value of that gene across all of the genotypes in the current population is less than a given percentage away from the maximum allele value across the genotypes.

2.7 Reproducibility

Some problems can be defined with different kinds of fitness functions or encodings. Which combination works best can't usually be decided a priori. To choose one, some testing is needed. Jenetics allows you to set up an evolution Engine in a way that will produce the very same result on every run.
final Engine<DoubleGene, Double> engine =
    Engine.builder(fitnessFunction, codec)
        .executor(Runnable::run)
        .build();
final EvolutionResult<DoubleGene, Double> result =
    RandomRegistry.with(new Random(456), r ->
        engine.stream(population)
            .limit(100)
            .collect(EvolutionResult.toBestEvolutionResult())
    );

Listing 2.17: Reproducible evolution Engine
Listing 2.17 shows the basic setup of such a reproducible evolution Engine. Firstly, you have to make sure that all evolution steps are executed serially. This is done by configuring a single-threaded executor. In the simplest case the evolution is performed solely on the main thread—Runnable::run. The second step configures the random engine the evolution Engine is working with. Just wrap the evolution stream execution in a RandomRegistry::with block. Additionally, you can start the evolution stream with a predefined initial population. Once you have set up the Engine, you can vary the fitness function and the codec and compare the results.

2.8 Evolution performance

This section contains an empirical proof that evolutionary selectors deliver significantly better fitness results than a random search. The MonteCarloSelector is used for creating the comparison (random search) fitness values.
Figure 2.8.1: Selector performance (Knapsack); best fitness vs. generation for the MonteCarloSelector and the evolutionary selectors
Figure 2.8.1 shows the evolution performance of the Selectors22 used by the examples in section 2.6 on page 60. The lower blue line shows the (mean) fitness values of the Knapsack problem when using the MonteCarloSelector for selecting the survivors and offspring population. It can easily be seen that the performance of the real evolutionary Selectors is much better than a random search.

22 The termination tests are using a TournamentSelector, with tournament-size 5, for selecting the survivors, and a RouletteWheelSelector for selecting the offspring.

2.9 Evolution strategies

Evolution Strategies (ES) were developed by Ingo Rechenberg and Hans-Paul Schwefel at the Technical University of Berlin in the mid 1960s.[30] ES is a global optimization algorithm for continuous search spaces and is an instance of an Evolutionary Algorithm from the field of Evolutionary Computation. ES uses truncation selection23 for selecting the individuals and usually mutation24 for changing the next generation. This section describes how to configure the evolution Engine of the library for the (µ, λ)- and (µ + λ)-ES.

2.9.1 (µ, λ) evolution strategy

The (µ, λ) algorithm starts by generating λ individuals randomly. After evaluating the fitness of all the individuals, all but the µ fittest ones are deleted. Each of the µ fittest individuals gets to produce λ/µ children through an ordinary mutation. The newly created children just replace the discarded parents.[20]
To summarize: µ is the number of parents which survive, and λ is the number of offspring created by the µ parents. The value of λ should be a multiple of µ. ES practitioners usually refer to their algorithm by the choice of µ and λ. If we set µ = 5 and λ = 20, then we have a (5, 20)-ES.
final Engine<DoubleGene, Double> engine =
    Engine.builder(fitness, codec)
        .populationSize(lambda)
        .survivorsSize(0)
        .offspringSelector(new TruncationSelector<>(mu))
        .alterers(new Mutator<>(p))
        .build();

Listing 2.18: (µ, λ) Engine configuration
Listing 2.18 shows how to configure the evolution Engine for (µ, λ)-ES. The
population size is set to λ and the survivors size to zero, since the best parents are
not part of the final population. Step three is configured by setting the offspring
selector to the TruncationSelector. Additionally, the TruncationSelector
is parameterized with µ. This lets the TruncationSelector only select the µ
best individuals, which corresponds to step two of the ES.
There are mainly three levers for the (µ, λ)-ES where we can adjust exploration versus exploitation:[20]
• Population size λ: This parameter controls the sample size for each population. For the extreme case, as λ approaches ∞, the algorithm would perform a simple random search.
• Survivors size µ: This parameter controls how selective the ES is. Relatively low µ values push the algorithm towards exploitative search, because only the best individuals are used for reproduction.25
• Mutation probability p: A high mutation probability pushes the algorithm toward a fairly random search, regardless of the selectivity of µ.

23 See section 1.3.2.1 on page 13.
24 See section 1.3.2.2 on page 17.
25 As you can see in listing 2.18, the survivors size (reproduction pool size) for the (µ, λ)-ES must be set indirectly via the TruncationSelector parameter. This is necessary, since for the (µ, λ)-ES, the selected best µ individuals are not part of the population of the next generation.

2.9.2 (µ + λ) evolution strategy

In the (µ + λ)-ES, the next generation consists of the selected best µ parents
and the λ new children. This is also the main difference to (µ, λ), where the µ
parents are not part of the next generation. Thus the next and all successive
generations are µ + λ in size.[20] Jenetics works with a constant population
size and it is therefore not possible to implement an increasing population size.
Besides this restriction, the Engine configuration for the (µ + λ)-ES is shown
in listing 2.19.
final Engine<DoubleGene, Double> engine =
    Engine.builder(fitness, codec)
        .populationSize(lambda)
        .survivorsSize(mu)
        .selector(new TruncationSelector<>(mu))
        .alterers(new Mutator<>(p))
        .build();

Listing 2.19: (µ + λ) Engine configuration
Since the selected µ parents are part of the next generation, the survivorsSize property must be set to µ. This also requires setting the survivors selector to the TruncationSelector. With the selector(Selector) method, both selectors (for the survivors and for the offspring) can be set at once. Because the best parents are also part of the next generation, the (µ + λ)-ES may be more exploitative than the (µ, λ)-ES. This carries the risk that very fit parents can defeat other individuals over and over again, which leads to premature convergence to a local optimum.

2.10 Evolution interception

Once the EvolutionStream is created, it will continuously create EvolutionResult objects, one for every generation. It is not possible to alter the results, although it is tempting to use the Stream.map method for this purpose. The problem with the map method is that the altered EvolutionResult will not be fed back to the Engine when evolving the next generation.
private EvolutionResult<DoubleGene, Double>
mapping(EvolutionResult<DoubleGene, Double> result) { ... }

final Genotype<DoubleGene> result = engine.stream()
    .map(this::mapping)
    .limit(100)
    .collect(toBestGenotype());

Doing the EvolutionResult mapping as shown in the code snippet above will only change the results for the operations after the mapper definition. The evolution processing of the Engine is not affected. If we want to intercept the evolution process, the mapping must be defined when the Engine is created.
final Engine<DoubleGene, Double> engine = Engine.builder(problem)
    .mapping(this::mapping)
    .build();

The code snippet above shows the correct way of intercepting the evolution stream. The mapper given to the Engine will change the stream of EvolutionResults and will also feed the altered result back to the evolution Engine.
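A minimal pass-through interceptor is sketched below; the gene type and the logging behavior are illustrative assumptions of this sketch.

// Log the best fitness of every generation and return the result
// unchanged, so the evolution process itself is not modified.
private EvolutionResult<DoubleGene, Double>
mapping(final EvolutionResult<DoubleGene, Double> result) {
    System.out.println(
        result.getGeneration() + ": " + result.getBestFitness());
    return result;
}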
Distinct population This kind of intercepting the evolution process is very flexible. Jenetics comes with one predefined stream interception method, which allows you to remove duplicate individuals from the resulting population.
final Engine<DoubleGene, Double> engine = Engine.builder(problem)
    .mapping(EvolutionResult.toUniquePopulation())
    .build();

Despite the de-duplication, it is still possible to have duplicate individuals. This will be the case when the domain of the possible Genotypes is not big enough and the same individual is created by chance. You can control the number of Genotype creation retries using the EvolutionResult.toUniquePopulation(int) method, which allows you to define the maximal number of retries if an individual already exists.


Chapter 3

Internals
This section contains internal implementation details which don't fit into one of the previous sections. They are not essential for using the library, but give the user a deeper insight into some design decisions made when implementing the library. It also introduces tools and classes which were developed for testing purposes. These classes are not exported and not part of the official API.

3.1 PRNG testing

Jenetics uses the dieharder1 (command line) tool for testing the randomness
of the used PRNGs. dieharder is a random number generator (RNG) testing
suite. It is intended to test generators, not files of possibly random numbers.
Since dieharder needs a huge amount of random data for testing the quality
of an RNG, it is usually advisable to pipe the random numbers to the dieharder
process:
$ cat /dev/urandom | dieharder -g 200 -a
The example above demonstrates how to stream a raw binary stream of bits to
the stdin (raw) interface of dieharder. With the DieHarder class, which is
part of the io.jenetics.prngine.internal package, it is easily possible to
test PRNGs extending the java.util.Random class. The only requirement is
that the PRNG must be default-constructible and part of the classpath.
$ java -cp io.jenetics.prngine-1.0.1.jar \
io.jenetics.prngine.internal.DieHarder \
<random-engine-name> -a
Calling the command above will create an instance of the given random engine
and stream the random data (bytes) to the raw interface of the dieharder process.
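For example, the standard java.util.Random class is default-constructible and always on the classpath, so it can be tested the same way (a hedged example, not taken from the manual; the report shown next refers to a generic <random-engine-name>):

$ java -cp io.jenetics.prngine-1.0.1.jar \
io.jenetics.prngine.internal.DieHarder \
java.util.Random -a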
#=============================================================================#
# Testing: <random-engine-name> (2015-07-11 23:48)                            #
#=============================================================================#
#=============================================================================#
# Linux 3.19.0-22-generic (amd64)                                             #
# java version "1.8.0_45"                                                     #
# Java(TM) SE Runtime Environment (build 1.8.0_45-b14)                        #
# Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02)                         #
#=============================================================================#
#=============================================================================#
#          dieharder version 3.31.1 Copyright 2003 Robert G. Brown            #
#=============================================================================#
     rng_name       |rands/second|   Seed   |
     stdin_input_raw|  1.36e+07  |1583496496|
#=============================================================================#
        test_name   |ntup| tsamples |psamples|  p-value |Assessment
#=============================================================================#
   diehard_birthdays|   0|       100|     100|0.63372078|  PASSED
      diehard_operm5|   0|   1000000|     100|0.42965082|  PASSED
  diehard_rank_32x32|   0|     40000|     100|0.95159380|  PASSED
    diehard_rank_6x8|   0|    100000|     100|0.70376799|  PASSED
...
Preparing to run test 209.  ntuple = 0
        dab_monobit2|  12|  65000000|       1|0.76563780|  PASSED
#=============================================================================#
# Summary: PASSED=112, WEAK=2, FAILED=0                                       #
# 235,031.492 MB of random data created with 41.394 MB/sec                    #
#=============================================================================#
#=============================================================================#
# Runtime: 1:34:37                                                            #
#=============================================================================#

1 From Robert G. Brown: http://www.phy.duke.edu/~rgb/General/dieharder.php

In the listing above, a part of the created dieharder report is shown. For testing the LCG64ShiftRandom class, which is part of the io.jenetics.prngine
module, the following command can be called:
$ java -cp io.jenetics.prngine-1.0.1.jar \
io.jenetics.prngine.internal.DieHarder \
io.jenetics.prngine.LCG64ShiftRandom -a
Table 3.1.1 shows the summary of the dieharder tests. The full report is part
of the source file of the LCG64ShiftRandom class.2

Passed tests    Weak tests    Failed tests
110             4             0

Table 3.1.1: LCG64ShiftRandom quality

2 https://github.com/jenetics/prngine/blob/master/prngine/src/main/java/io/jenetics/prngine/LCG64ShiftRandom.java

3.2 Random seeding

The PRNGs3, used by the Jenetics library, need to be initialized with a proper
seed value before they can be used. The usual way of doing this is to take the
current time stamp.

public static long seed() {
    return System.nanoTime();
}

3 See section 1.4.2 on page 32.

Before applying this method throughout the whole library, I decided to perform
some statistical tests. For this purpose I treated the seed() method itself as a
PRNG and analyzed the created long values with the DieHarder class. The
seed() method has been wrapped into the io.jenetics.prngine.internal.NanoTimeRandom class. Assuming that the dieharder tool is in the search path,
calling
$ java -cp io.jenetics.prngine-1.0.1.jar \
io.jenetics.prngine.internal.DieHarder \
io.jenetics.prngine.internal.NanoTimeRandom -a
will perform the statistical tests for the nano time random engine. The statistical
quality is rather bad: every single test failed. Table 3.2.1 shows the summary of
the dieharder report.4
Passed tests    Weak tests    Failed tests
0               0             114

Table 3.2.1: Nano time seeding quality
An alternative source of entropy for generating seed values would be the
/dev/random or /dev/urandom file. But this approach is not portable, and
portability was a prerequisite for the Jenetics library.
The next attempt fetches the seeds from the JVM via the Object.hashCode() method, since the hash code of an Object is available on every
operating system and is most likely »randomly« distributed.
public static long seed() {
    return ((long)new Object().hashCode() << 32) |
        new Object().hashCode();
}

This seed method has been wrapped into the ObjectHashRandom class and
tested as well with
$ java -cp io.jenetics.prngine-1.0.1.jar \
io.jenetics.prngine.internal.DieHarder \
io.jenetics.prngine.internal.ObjectHashRandom -a
Table 3.2.2 shows the summary of the dieharder report5, which looks better
than the nano time seeding, but 86 failing tests were still not very satisfying.
Passed tests    Weak tests    Failed tests
28              0             86

Table 3.2.2: Object hash seeding quality
After additional experimentation, a combination of the nano time seed and
the object hash seeding seems to be the right solution. The rationale behind this
was that the PRNG seed shouldn't rely on a single source of entropy.

4 The detailed test report can be found in the source of the NanoTimeRandom class: https://github.com/jenetics/prngine/blob/master/prngine/src/main/java/io/jenetics/prngine/internal/NanoTimeRandom.java
5 Full report: https://github.com/jenetics/prngine/blob/master/prngine/src/main/java/io/jenetics/prngine/internal/ObjectHashRandom.java


public static long seed() {
    return mix(System.nanoTime(), objectHashSeed());
}

private static long mix(final long a, final long b) {
    long c = a^b;
    c ^= c << 17;
    c ^= c >>> 31;
    c ^= c << 8;
    return c;
}

private static long objectHashSeed() {
    return ((long)new Object().hashCode() << 32) |
        new Object().hashCode();
}

Listing 3.1: Random seeding
The code in listing 3.1 shows how the nano time seed is mixed with the object
seed. The mix method was inspired by the mixing step of the lcg64_shift6
random engine, which has been reimplemented in the LCG64ShiftRandom class.
Running the tests with
$ java -cp io.jenetics.prngine-1.0.1.jar \
io.jenetics.prngine.internal.DieHarder \
io.jenetics.prngine.internal.SeedRandom -a
leads to the statistics summary7 , which is shown in table 3.2.3.
Passed tests    Weak tests    Failed tests
112             2             0

Table 3.2.3: Combined random seeding quality
The statistical performance of this seeding is better, according to the dieharder test suite, than some of the real random engines, including the default
Java Random engine. Using the proposed seed() method is in any case preferable to the simple System.nanoTime() call.
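A trivial usage sketch, assuming the seed() method from listing 3.1 is in scope:

// Initialize a standard PRNG with the mixed nano-time/object-hash seed.
final Random random = new Random(seed());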
Open questions
• How does this method perform on operating systems other than Linux?
• How does this method perform on other JVM implementations?

6 This class is part of the TRNG library: https://github.com/rabauke/trng4/blob/master/src/lcg64_shift.hpp
7 Full report: https://github.com/jenetics/prngine/blob/master/prngine/src/main/java/io/jenetics/prngine/internal/SeedRandom.java


Chapter 4

Modules
The Jenetics library has been split up into several modules, which allows keeping
the base EA module as small as possible. It currently consists of the modules
shown in table 4.0.1, including the Jenetics base module.1
Module                 Artifact
io.jenetics.base       io.jenetics:jenetics:4.3.0
io.jenetics.ext        io.jenetics:jenetics.ext:4.3.0
io.jenetics.prog       io.jenetics:jenetics.prog:4.3.0
io.jenetics.xml        io.jenetics:jenetics.xml:4.3.0
io.jenetics.prngine    io.jenetics:prngine:1.0.1

Table 4.0.1: Jenetics modules
With this module split the code is easier to maintain and doesn't force the user
to use parts of the library he or she isn't using, which keeps the io.jenetics.base module as small as possible. The additional Jenetics modules will be
described in this chapter. Figure 4.0.1 shows the dependency graph of the
Jenetics modules.

Figure 4.0.1: Module graph
1 The used module names follow the recommended naming scheme for the JPMS automatic
modules: http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html.


4.1 io.jenetics.ext

The io.jenetics.ext module implements additional non-standard genes and
evolutionary operations. It also contains data structures which are used by these
additional genes and operations.

4.1.1 Data structures

4.1.1.1 Tree

The Tree interface defines a general tree data type, where each tree node can
have an arbitrary number of children.
public interface Tree<V, T extends Tree<V, T>> {
    public V getValue();
    public Optional<T> getParent();
    public T getChild(int index);
    public int childCount();
}

Listing 4.1: Tree interface
Listing 4.1 shows the Tree interface with its basic abstract tree methods. All
other needed tree methods, e.g. for node traversal and search, are implemented
with default methods, which are derived from these four abstract tree methods. A
mutable default implementation of the Tree interface is given by the TreeNode
class.
Figure 4.1.1: Example tree
To illustrate the usage of the TreeNode class, we will create a TreeNode
instance from the tree shown in figure 4.1.1. The example tree consists of 12
nodes with a maximal depth of three and a varying child count from one to
three.
final TreeNode<Integer> tree = TreeNode.of(0)
    .attach(TreeNode.of(1)
        .attach(4, 5))
    .attach(TreeNode.of(2)
        .attach(6))
    .attach(TreeNode.of(3)
        .attach(TreeNode.of(7)
            .attach(10, 11))
        .attach(8)
        .attach(9));

Listing 4.2: Example TreeNode
Listing 4.2 on the previous page shows the TreeNode representation of the given
example tree. New children are added by using the attach method. For the full
list of Tree methods, have a look at the Javadoc documentation.
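A few assertions, assuming the tree built in listing 4.2, illustrate how the basic Tree methods from listing 4.1 behave:

// The root node 0 has the three children 1, 2 and 3; its first child holds the
// value 1 and itself has the two children 4 and 5.
assert tree.getValue() == 0;
assert tree.childCount() == 3;
assert tree.getChild(0).getValue() == 1;
assert tree.getChild(0).childCount() == 2;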
4.1.1.2 Parentheses tree

A parentheses tree2 is a serialized representation of a tree and is a simplified
form of the Newick tree format3. The parentheses tree representation of the tree
in figure 4.1.1 on the preceding page will look like the following string:
0(1(4,5),2(6),3(7(10,11),8,9))
As you can see, nodes on the same tree level are separated by a comma, ','. New
tree levels are created with an opening parenthesis '(' and closed with a closing
parenthesis ')'. No additional spaces are inserted between the separator character
and the node value. Any spaces in the parentheses tree string will be part of the
node value. Figure 4.1.2 shows the syntax diagram of the parentheses tree. The
NodeValue in the diagram is the string representation of the Tree.getValue()
object.

Figure 4.1.2: Parentheses tree syntax diagram
To get the parentheses tree representation, you just have to call Tree.toParenthesesString(). This method uses the Object.toString() method for
serializing the tree node value. If you need a different string representation,
you can use the overloaded Tree.toParenthesesString(Function) method. A
simple example of how to use this method is shown in the code snippet below.
final Tree tree = ...;
final String string = tree.toParenthesesString(Path::getFileName);
If the string representation of the tree node value contains one of the protected
characters, ’,’, ’(’ or ’)’, they will be escaped with a ’\’ character.
final Tree tree = TreeNode.of("(root)")
    .attach(",", "(", ")");
The tree in the code snippet above will be represented as the following parentheses
string:

\(root\)(\,,\(,\))

2 https://www.i-programmer.info/programming/theory/3458-parentheses-are-trees.html
3 http://evolution.genetics.washington.edu/phylip/newicktree.html
Serializing a tree into parentheses form is just one part of the story. It is also
possible to read the parentheses string back as a tree object. The TreeNode.parse(String) method allows you to parse a tree string back to a TreeNode object. If you need to create a tree with the original node type, you
can call the parse method with an additional string mapper function. How you
can parse a given parentheses tree string is shown in the code below.
final Tree tree = TreeNode.parse(
    "0(1(4,5),2(6),3(7(10,11),8,9))",
    Integer::parseInt
);
The TreeNode.parse method will throw an IllegalArgumentException if it
is called with an invalid tree string.
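Serializing and parsing are inverse operations, so a round trip should reproduce the original tree. The sketch below assumes the integer tree from listing 4.2 and the no-argument toParenthesesString() method:

// Serialize the tree and parse it back; the copy is structurally equal to the
// original (compared with the static Tree.equals method).
final String repr = tree.toParenthesesString();
final TreeNode<Integer> parsed = TreeNode.parse(repr, Integer::parseInt);
assert Tree.equals(tree, parsed);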
4.1.1.3 Flat tree

The main purpose for the Tree data type in the io.jenetics.ext module is
to support hierarchical TreeGenes, which are needed for genetic programming
(see section 4.2 on page 92). Since the chromosome type is essentially an array,
a mapping from the hierarchical tree structure to a 1-dimensional array is
needed.4 For general trees with arbitrary child count, additional information
needs to be stored for a bijective mapping between tree and array. The FlatTree
interface extends the Tree node with a childOffset() method, which returns
the absolute start index of the tree’s children.
public interface FlatTree<V, T extends FlatTree<V, T>>
    extends Tree<V, T>
{
    public int childOffset();
    public default ISeq<T> flattenedNodes() { ... };
}

Listing 4.3: FlatTree interface
Listing 4.3 shows the additional child offset needed for reconstructing the tree
from the flattened array version. When flattening an existing tree, the nodes are
traversed in breadth-first order.5 For each node, the absolute array offset of the
first child is stored, together with the child count of the node. If the node has
no children, the child offset is set to −1.
Figure 4.1.3 on the following page illustrates the flattened example tree shown
in figure 4.1.1 on page 78. The curved arrows denote the child offset of a given
parent node and the curly braces denote the child count of a given parent node.
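Spelled out for the example tree, the breadth-first flattening yields the following parallel arrays (a worked illustration, not library output):

// index:                      0   1   2   3   4   5   6   7   8   9  10  11
final int[] values       = {  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11 };
final int[] childOffsets = {  1,  4,  6,  7, -1, -1, -1, 10, -1, -1, -1, -1 };
final int[] childCounts  = {  3,  2,  1,  3,  0,  0,  0,  2,  0,  0,  0,  0 };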
final TreeNode tree = ...;
final ISeq nodes = FlatTreeNode.of(tree)
    .flattenedNodes();
assert Tree.equals(tree, nodes.get(0));

final TreeNode unflattened = TreeNode.of(nodes.get(0));
assert tree.equals(unflattened);
assert unflattened.equals(tree);

4 There exist mapping schemes for perfect binary trees, which allow a bijective mapping from tree to array without additional storage: https://en.wikipedia.org/wiki/Binary_tree#Arrays. For general trees with arbitrary child count, such a simple mapping doesn't exist.
5 https://en.wikipedia.org/wiki/Breadth-first_search

Figure 4.1.3: Example FlatTree
The code snippet above shows how to flatten a given integer tree and convert it
back to a regular tree. The first element of the flattened tree node sequence is
always the root node.

Since the TreeGene and the ProgramGene implement the FlatTree
interface, it is helpful to know and understand the tree-to-array mapping
used.

4.1.2 Genes

4.1.2.1 BigInteger gene

The BigIntegerGene implements the NumericGene interface and can be used
when the range of the existing LongGene or DoubleGene is not enough. Its
allele type is a BigInteger, which can store arbitrary-precision integers. There
also exists a corresponding BigIntegerChromosome.
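A usage sketch; the (min, max, length) factory shown here is assumed to mirror the bounded numeric chromosomes of the base module and is not quoted from the manual:

// A genotype with one BigIntegerChromosome of length 3, whose alleles lie in
// the very large range [0, 10^50).
final Genotype<BigIntegerGene> gt = Genotype.of(
    BigIntegerChromosome.of(
        BigInteger.ZERO,
        BigInteger.TEN.pow(50),
        3
    )
);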
4.1.2.2 Tree gene

The TreeGene interface extends the FlatTree interface and serves as basis for
the ProgramGene, used for genetic programming. Its tree nodes are stored in
the corresponding TreeChromosome. How the tree hierarchy is flattened and
mapped to an array is described in section 4.1.1.3 on the preceding page.

4.1.3 Operators

Simulated binary crossover The SimulatedBinaryCrossover performs the
simulated binary crossover (SBX) on NumericChromosomes such that each position is either crossed, contracted, or expanded with a certain probability. The
probability distribution is designed such that the children will lie closer to their
parents than is the case with the single-point binary crossover. It is implemented
as described in [11].
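A configuration sketch; the fitness, codec and the crossover/mutation probabilities are placeholders, and the probability-only constructor is assumed to mirror the other Crossover classes:

// Use simulated binary crossover, together with a mutator, on a double-encoded
// problem.
final Engine<DoubleGene, Double> engine = Engine.builder(fitness, codec)
    .alterers(
        new SimulatedBinaryCrossover<>(0.2),
        new Mutator<>(0.15))
    .build();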
Single-node crossover The SingleNodeCrossover class works on TreeChromosomes. It swaps two randomly chosen nodes of two tree chromosomes.
Figure 4.1.4 on the next page shows how the single-node crossover works. In
this example, node 3 of the first tree is swapped with node h of the second tree.
Figure 4.1.4: Single-node crossover

4.1.4 Weasel program

The Weasel program6 is a thought experiment by Richard Dawkins, in which
he tries to illustrate the function of genetic mutation and selection.7 For this
reason he chooses the well-known example of typewriting monkeys.
I don’t know who it was first pointed out that, given enough time, a
monkey bashing away at random on a typewriter could produce all
the works of Shakespeare. The operative phrase is, of course, given
enough time. Let us limit the task facing our monkey somewhat.
Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence »Methinks it is like a weasel«, and
we shall make it relatively easy by giving him a typewriter with a
restricted keyboard, one with just the 26 (uppercase) letters, and a
space bar. How long will he take to write this one little sentence?[9]
The search space of the 28-character-long target string is 27^28 ≈ 10^40. If the
monkey writes 1,000,000 different sentences per second, it would take about 10^26
years (on average) to write the correct one. Although Dawkins did not provide
the source code for his program, a »Weasel« style algorithm could run as follows:
1. Start with a random string of 28 characters.

2. Make n copies of the string (reproduce).

3. Mutate the characters with a mutation probability of 5%.

4. Compare each new string with the target string »METHINKS IT IS LIKE
A WEASEL«, and give each a score (the number of letters in the string
that are correct and in the correct position).

5. If any of the new strings has a perfect score (28), halt. Otherwise, take
the highest scoring string and go to step 2.

6 https://en.wikipedia.org/wiki/Weasel_program
7 The classes are located in the io.jenetics.ext module.
Richard Dawkins was also very careful to point out the limitations of this
simulation:
Although the monkey/Shakespeare model is useful for explaining the
distinction between single-step selection and cumulative selection, it is
misleading in important ways. One of these is that, in each generation
of selective »breeding«, the mutant »progeny« phrases were judged
according to the criterion of resemblance to a distant ideal target,
the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that.
Evolution has no long-term goal. There is no long-distance target, no
final perfection to serve as a criterion for selection, although human
vanity cherishes the absurd notion that our species is the final goal of
evolution. In real life, the criterion for selection is always short-term,
either simple survival or, more generally, reproductive success.[9]
If you want to write a Weasel program with the Jenetics library, you need to
use the special WeaselSelector and WeaselMutator.
public class WeaselProgram {
    private static final String TARGET =
        "METHINKS IT IS LIKE A WEASEL";

    private static int score(final Genotype<CharacterGene> gt) {
        final CharSequence source =
            (CharSequence)gt.getChromosome();
        return IntStream.range(0, TARGET.length())
            .map(i -> source.charAt(i) == TARGET.charAt(i) ? 1 : 0)
            .sum();
    }

    public static void main(final String[] args) {
        final CharSeq chars = CharSeq.of("A-Z ");
        final Factory<Genotype<CharacterGene>> gtf = Genotype.of(
            new CharacterChromosome(chars, TARGET.length())
        );
        final Engine<CharacterGene, Integer> engine = Engine
            .builder(WeaselProgram::score, gtf)
            .populationSize(150)
            .selector(new WeaselSelector<>())
            .offspringFraction(1)
            .alterers(new WeaselMutator<>(0.05))
            .build();
        final Phenotype<CharacterGene, Integer> result = engine
            .stream()
            .limit(byFitnessThreshold(TARGET.length() - 1))
            .peek(r -> System.out.println(
                r.getTotalGenerations() + ": " +
                r.getBestPhenotype()))
            .collect(toBestPhenotype());
        System.out.println(result);
    }
}

Listing 4.4: Weasel program
Listing 4.4 on the preceding page shows how to implement the WeaselProgram
with Jenetics. Steps (1) and (2) of the algorithm are done implicitly when the
initial population is created. The third step is done by the WeaselMutator, with
a mutation probability of 0.05. Step (4) is done by the WeaselSelector together
with the configured offspring-fraction of one. The evolution stream is limited by
Limits.byFitnessThreshold, which is set to score_max − 1. In the current
example this value is set to TARGET.length() - 1 = 27.
1: [ UBNHLJUS RCOXR LFIYLAWRDCCNY ] --> 6
2: [ UBNHLJUS RCOXR LFIYLAWWDCCNY ] --> 7
3: [ UBQHLJUS RCOXR LFIYLAWWECCNY ] --> 8
5: [ UBQHLJUS RCOXR LFICLAWWECCNL ] --> 9
6: [ W QHLJUS RCOXR LFICLA WEGCNL ] --> 10
7: [ W QHLJKS RCOXR LFIHLA WEGCNL ] --> 11
8: [ W QHLJKS RCOXR LFIHLA WEGSNL ] --> 12
9: [ W QHLJKS RCOXR LFIS A WEGSNL ] --> 13
10: [ M QHLJKS RCOXR LFIS A WEGSNL ] --> 14
11: [ MEQHLJKS RCOXR LFIS A WEGSNL ] --> 15
12: [ MEQHIJKS ICOXR LFIN A WEGSNL ] --> 17
14: [ MEQHINKS ICOXR LFIN A WEGSNL ] --> 18
16: [ METHINKS ICOXR LFIN A WEGSNL ] --> 19
18: [ METHINKS IMOXR LFKN A WEGSNL ] --> 20
19: [ METHINKS IMOXR LIKN A WEGSNL ] --> 21
20: [ METHINKS IMOIR LIKN A WEGSNL ] --> 22
23: [ METHINKS IMOIR LIKN A WEGSEL ] --> 23
26: [ METHINKS IMOIS LIKN A WEGSEL ] --> 24
27: [ METHINKS IM IS LIKN A WEHSEL ] --> 25
32: [ METHINKS IT IS LIKN A WEHSEL ] --> 26
42: [ METHINKS IT IS LIKN A WEASEL ] --> 27
46: [ METHINKS IT IS LIKE A WEASEL ] --> 28

The (shortened) output of the Weasel program (listing 4.4 on the previous
page) shows that the optimal solution is reached in generation 46.

4.1.5 Modifying Engine

The current design of the Engine allows creating multiple independent evolution
streams from a single Engine instance. One drawback of this approach is that
the evolution stream runs with the same evolution parameters until the stream is
truncated. It is not possible to change the stream's Engine configuration during
the evolution process. For this purpose, the EvolutionStreamable interface has
been introduced. It is similar to the Java Iterable interface and abstracts the
EvolutionStream creation.
public interface EvolutionStreamable<
    G extends Gene<?, G>,
    C extends Comparable<? super C>
> {
    EvolutionStream<G, C>
    stream(Supplier<EvolutionStart<G, C>> start);

    EvolutionStream<G, C> stream(EvolutionInit<G> init);

    EvolutionStreamable<G, C>
    limit(Supplier<Predicate<? super EvolutionResult<G, C>>> proceed);
}

Listing 4.5: EvolutionStreamable interface

Listing 4.5 on the preceding page shows the main methods of the EvolutionStreamable interface. The existing stream methods take an initial value, which allows concatenating different engines. With the limit method it is possible to limit the size of the created EvolutionStream instances. The io.jenetics.ext module contains additional classes which allow concatenating evolution Engines with different configurations, which will then create one varying EvolutionStream. These additional Engine classes are:

1. ConcatEngine,

2. CyclicEngine and

3. AdaptiveEngine.

4.1.5.1 ConcatEngine

The ConcatEngine class allows creating more than one Engine, with different configurations, and combining them into one EvolutionStreamable (Engine).

Figure 4.1.5: Engine concatenation

Figure 4.1.5 shows how the EvolutionStream of two concatenated Engines works. You can create the first partial EvolutionStream with an optional start value. If the first EvolutionStream stops, its final EvolutionResult is used as the start value of the second evolution stream, created by the second evolution Engine. It is important that the evolution Engines used for concatenation are limited. Otherwise the created EvolutionStream will only use the first Engine, since it is not limited.

The concatenated evolution Engines must be limited (by calling Engine.limit), otherwise only the first Engine is used for executing the resulting EvolutionStream.

The following code sample shows how to create an EvolutionStream from two concatenated Engines. As you can see, the two Engines are limited.

final Engine engine1 = ...;
final Engine engine2 = ...;

final Genotype result = ConcatEngine.of(
        engine1.limit(50),
        engine2.limit(() -> Limits.bySteadyFitness(30)))
    .stream()
    .collect(EvolutionResult.toBestGenotype());

A practical use case for the Engine concatenation is when you want to do a broader exploration of the search space at the beginning and narrow it with the following Engines. In such a setup, the first Engine would be configured with a Mutator with a relatively big mutation probability. The mutation probabilities of the following Engines would then be gradually reduced.

4.1.5.2 CyclicEngine

The CyclicEngine is similar to the ConcatEngine. Where the ConcatEngine stops the evolution when the stream of the last engine terminates, the CyclicEngine continues with a new stream from the first Engine. The evolution flow of the CyclicEngine is shown in figure 4.1.6.

Figure 4.1.6: Cyclic Engine

Since the CyclicEngine creates unlimited streams, although the participating Engines are all creating limited streams, the resulting EvolutionStream must be limited as well. The code snippet below shows the creation and execution of a cyclic EvolutionStream.

final Genotype result = CyclicEngine.of(
        engine1.limit(50),
        engine2.limit(() -> Limits.bySteadyFitness(15)))
    .stream()
    .limit(Limits.bySteadyFitness(50))
    .collect(EvolutionResult.toBestGenotype());

The reason for using a cyclic EvolutionStream is similar to the reason for using a concatenated EvolutionStream. It allows you to do a broad search, followed by a narrowed exploration.
This cycle is then repeated until the limiting predicate of the outer stream terminates the evolution process. 4.1.5.3 AdaptiveEngine The AdaptiveEngine is the most flexible method for creating EvolutionStreams. Instead of defining a fixed set Engines, which then creates the EvolutionStream, you define a function which creates an Engine, depending on last EvolutionResult of the previous EvolutionStream. You can see the evolution flow of the AdaptiveEngine in figure 4.1.7 on the next page. From the implementation perspective, the AdaptiveEngine requires a little bit more code. The additional flexibility isn’t for free, as you can see in the code example below. 1 2 public s t a t i c void main ( f i n a l S t r i n g [ ] a r g s ) { f i n a l Problem problem = . . . ; 86 4.1. IO.JENETICS.EXT CHAPTER 4. MODULES Figure 4.1.7: Adaptive Engine // Engine . B u i l d e r t e m p l a t e . f i n a l Engine . B u i l d e r b l d = Engine . b u i l d e r ( problem ) . minimizing ( ) ; 3 4 5 6 7 8 9 10 11 12 13 } f i n a l Genotype r e s u l t = AdaptiveEngine .o f ( r −> e n g i n e ( r , b l d ) ) . stream ( ) . l i m i t ( Limits . bySteadyFitness (50) ) . c o l l e c t ( E v o l u t i o n R e s u l t . toBestGenotype ( ) ) ; 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 s t a t i c E v o l u t i o n S t r e a m a b l e e n g i n e ( f i n a l E v o l u t i o n R e s u l t r e s u l t , f i n a l Engine . B u i l d e r b u i l d e r ) { return v a r ( r e s u l t ) < 0 . 2 ? b u i l d e r . copy ( ) . a l t e r e r s (new Mutator < >(0.75) ) . build () . limit (5) : b u i l d e r . copy ( ) . alterers ( new Mutator < >(0.05) , new MeanAlterer <>() ) . s e l e c t o r (new R o u l e t t e W h e e l S e l e c t o r <>() ) . build () . l i m i t ( Limits . bySteadyFitness (25) ) ; } 32 33 34 35 36 37 38 39 40 s t a t i c double v a r ( f i n a l E v o l u t i o n R e s u l t e r ) { return e r != n u l l ? e r . g e t P o p u l a t i o n ( ) . stream ( ) . map( Phenotype : : g e t F i t n e s s ) . c o l l e c t ( toDoubleMoments ( ) ) . getVariance () : 0.0; } The example tries to broaden the search, once the variance of the population’s fitness values are below a given threshold. When implementing the Engine creation function, you have to be aware, that the EvolutionResult for the first Engine is null. 87 4.1. IO.JENETICS.EXT 4.1.6 CHAPTER 4. MODULES Multi-objective optimization A Multi-objective Optimization Problem (MOP) can be defined as the problem of finding a vector of decision variables which satisfies constraints and optimizes a vector function whose elements represent the objective functions. These functions form a mathematical description of performance criteria which are usually in conflict with each other. Hence, the term »optimize« means finding such a solution which would give the values of all the objective functions acceptable to the decision maker. [25] There are several ways for solving multiobjective problems. An excellent theoretical foundation is given in [7]. The algorithms implemented by Jenetics are based in therms of Pareto optimality as described in [13], [10] and [17]. 4.1.6.1 Pareto efficiency Pareto efficiency is named after the Italian economist and political scientist Vilfredo Pareto8 . He used the concept in his studies of economic efficiency and income distribution. The concept has been applied in different academic fields such as economics, engineering, and the life sciences. 
Pareto efficiency says that an allocation is efficient if an action makes some individual better off and no individual worse off. In contrast to single-objective optimization, where usually only one optimal solution exits, the multi-objective optimization creates a set of optimal solutions. The optimal solutions are also known as the Pareto front or Pareto set. Definition. (Pareto efficiency [7]): A solution x is said to be Pareto optimal iff there is no x0 for which v = (f1 (x0 ) , ..., fk (x0 )) dominates u = (f1 (x) , ..., fk (x)). The definition says that x∗ is Pareto optimal if there exists no feasible vector x which would decrease some criterion without causing a simultaneous increase in at least one other criterion. Definition. (Pareto dominance [7]): A vector u = (u1 , ..., uk ) is said to dominate another vector v = (v1 , ..., vk ) (denoted by u  v) iff u is partially less than v, i.e., ∀i ∈ {1, ..., k}, ui ≥ vi ∧ ∃i ∈ {1, ..., k} : ui > vi . After this two basic definitions, lets have a look at a simple example. Figure 4.1.8 on the following page shows some points of a two-dimensional solution space. For simplicity, the points will all lie within a circle with radius 1 and center point of (1, 1). Figure 4.1.9 on page 90 shows the Pareto front of a maximization problem. This means we are searching for solutions tries to maximize the x and y coordinate at the same time. Figure 4.1.10 on page 91 shows the Pareto front if we try to minimize the x and y coordinate at the same time. 8 https://en.wikipedia.org/wiki/Vilfredo_Pareto 88 4.1. IO.JENETICS.EXT CHAPTER 4. MODULES 2 1.8 1.6 1.4 1.2 1 0.8 0.6 0.4 0.2 0 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 Figure 4.1.8: Circle points 4.1.6.2 Implementing classes The classes, used for solving multi-objective problems, reside in the io.jenetics.ext.moea package. Originally, the Jenetics library focuses on solving singleobjective problems. This drives the design decision to force the return value of the fitness function to be Comparable. If the result type of the fitness function is a vector, it is no longer clear how to make the results comparable. Jenetics chooses to use the Pareto dominance relation (see section 4.1.6.1 on the previous page). The Pareto dominance relation, , defines a strict partial order, which means  is 1. irreflexive: u  u, 2. transitive: u  v ∧ v  w ⇒ u  w and 3. asymmetric: u  v ⇒ v  u . The io.jenetics.ext.moea package contains the classes needed for doing multiobjective optimization. One of the central types is the Vec interface, which allows you wrap a vector of any element type into a Comparable. 1 2 3 4 5 6 7 public i n t e r f a c e Vec extends Comparable> { public T data ( ) ; public i n t l e n g t h ( ) ; public ElementComparator comparator ( ) ; public E l e m e n t D i s t a n c e d i s t a n c e ( ) ; public Comparator dominance ( ) ; } Listing 4.6: Vec interface Listing 4.6 shows the necessary methods of the Vec interface. This methods are sufficient to do all the optimization calculations. The data() method returns the underlying vector type, like double[] or int[]. With the ElementComparator, which is returned by the comparator() method, it is possible to compare single elements of the vector type T. This is similar to the ElementDistance function, 89 4.1. IO.JENETICS.EXT CHAPTER 4. MODULES 2 1.8 1.6 1.4 1.2 1 0.8 0.8 1 1.2 1.4 1.6 1.8 2 Figure 4.1.9: Maximizing Pareto front returned by the distance() method, which calculates the distance of two vector elements. 
The last method, dominance(), returns the Pareto dominance comparator, . Since it is quite a bothersome to implement all this needed methods, the Vec interface comes with a set of factory methods, which allows to create Vec instance for some primitive array types. 1 2 3 f i n a l Vec i v e c = Vec . o f ( 1 , 2 , 3 ) ; f i n a l Vec l v e c = Vec . o f ( 1 L , 2L , 3L) ; f i n a l Vec dvec = Vec . o f ( 1 . 0 , 2 . 0 , 3 . 0 ) ; For efficiency reason, the primitive arrays are not copied, when the Vec instance is created. This lets you, theoretically, change the value of a created Vec instance, which will lead to unexpected results. The second difference to the single-objective setup is the EvolutionResult collector. In the single-objective case, we will only get one best result, which is different in the multi-object optimization. As we have seen in section 4.1.6.1 on page 88, we no longer have only one result, we have a set of Pareto optimal solutions. There is a predefined collector in the io.jenetics.ext.moea package, MOEA.toParetoSet(IntRange), which collects the Pareto optimal Phenotypes into an ISeq. 1 2 3 4 f i n a l ISeq>> p a r e t o S e t = e n g i n e . stream ( ) . l i m i t (100) . c o l l e c t (MOEA. t o P a r e t o S e t ( IntRange . o f ( 3 0 , 5 0 ) ) ) ; Since there exists a potential infinite number of Pareto optimal solutions, you have to define desired number set elements. This is done with an IntRange object, where you can specify the minimal and maximal set size. The example above will return a Pareto size which size in the range of [30, 50). For reducing the Pareto set size, the distance between two vector elements is taken into account. Points which lie very close to each other are removed. This leads to a result, where the Pareto optimal solutions are, more or less, evenly distributed over the 90 4.1. IO.JENETICS.EXT CHAPTER 4. MODULES 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Figure 4.1.10: Minimizing Pareto front whole Pareto front. The crowding-distance 9 measure is used for calculating the proximity of two points and it is described in [7] and [13]. Till now we have described the multi-objective result type (Vec) and the final collecting of the Pareto optimal solution. So lets create a simple multi-objective problem and an appropriate Engine. 1 2 3 4 5 6 7 8 f i n a l Problem> problem = Problem . o f ( v −> Vec . o f ( v [ 0 ] ∗ c o s ( v [ 1 ] ) + 1 , v [ 0 ] ∗ s i n ( v [ 1 ] ) + 1 ) , Codecs . o f V e c t o r ( DoubleRange . o f ( 0 , 1 ) , DoubleRange . o f ( 0 , 2∗ PI ) ) ); 9 10 11 12 13 14 f i n a l Engine> e n g i n e = Engine . b u i l d e r ( problem ) . o f f s p r i n g S e l e c t o r (new T o u r n a m e n t S e l e c t o r <>(4) ) . s u r v i v o r s S e l e c t o r ( UFTournamentSelector . ofVec ( ) ) . build () ; The fitness function in the example problem above will create 2D-points which will all lies within a circle with a center of (1, 1). In figure 4.1.9 on the preceding page you can see how the resulting solution will look like. There is almost no difference in creating an evolution Engine for single- or multi-objective optimization. You only have to take care to choose the right Selector. Not all Selectors will work for multi-objective optimization. This include all Selectors which needs a Number fitness type and where the population needs to be sorted10 . The Selector which works fine in a multi-objective setup is the TournamentSelector. 
9 The crowding distance value of a solution provides an estimate of the density of solutions surrounding that solution. The crowding distance value of a particular solution is the average distance of its two neighboring solutions. https://www.igi-global.com/dictionary/ crowding-distance/42740. 10 Since the  relation doesn’t define a total order, sorting the population will lead to an IllegalArgumentException at runtime. 91 4.2. IO.JENETICS.PROG CHAPTER 4. MODULES Additionally you can use one of the special MO selectors: NSGA2Selector and UFTournamentSelector. NSGA2 selector This selector selects the first elements of the population, which has been sorted by the Crowded-comparison operator (equation 4.1.1), , n as described in [10] ij n if (irank < jrank ) ∨ ((irank = jrank ) ∧ idist > jdist ) , (4.1.1) where irank denotes the non-domination rank of individual i and idist the crowding distance of individual i. Unique fitness tournament selector The selection of unique fitnesses lifts the selection bias towards over-represented fitnesses by reducing multiple solutions sharing the same fitness to a single point in the objective space. It is therefore no longer required to assign a crowding distance of zero to individual of equal fitness as the selection operator correctly enforces diversity preservation by picking unique points in the objective space. [13] Since the MOO classes are an extensions to the existing evolution Engine, the implementation doesn’t exactly follow an established algorithm, like NSGA2 or SPEA2. The results and performance, described in the relevant papers, are therefore not directly comparable. See listing 1.2 on page 3 for comparing the Jenetics evolution flavor. 4.1.6.3 Termination Most of the existing termination strategies, implemented in the Limits class, presumes a total order of the fitness values. This assumption holds for singleobjective optimization problems, but not for multi-objective problems. Only termination strategies which doesn’t rely on the total order of the fitness value, can be safely used. The following termination strategies can be used for multiobjective problems: • Limits.byFixedGeneration, • Limits.byExecutionTime and • Limits.byGeneConvergence. All other strategies doesn’t have a well defined termination behavior. 4.2 io.jenetics.prog In artificial intelligence, genetic programming (GP) is a technique whereby computer programs are encoded as a set of genes that are then modified (evolved) using an evolutionary algorithm (often a genetic algorithm).11 The io.jenetics.prog module contains classes which enables the Jenetics library doing GP. It 11 https://en.wikipedia.org/wiki/Genetic_programming 92 4.2. IO.JENETICS.PROG CHAPTER 4. MODULES introduces a ProgramGene and ProgramChromosome pair, which serves as the main data-structure for genetic programs. A ProgramGene is essentially a tree (AST12 ) of operations (Op) stored in a ProgramChromosome.13 4.2.1 Operations When creating own genetic programs, it is not necessary to derive own classes from the ProgramGene or ProgramChromosome. The intended extension point is the Op interface. The extension point for own GP implementations is the Op interface. There is in general no need for extending the ProgramChromosome class. 
1 2 3 4 5 public i n t e r f a c e Op { public S t r i n g name ( ) ; public i n t a r i t y ( ) ; public T a p p l y (T [ ] a r g s ) ; } Listing 4.7: GP Op interface The generic type of the Op interface (see listing 4.7) enforces the data-type constraints for the created program tree and makes the implementation a strongly typed GP. Using the Op.of factory method, a new operation is created by defining the desired operation function. 1 2 f i n a l Op add = Op . o f ( "+" , 2 , v −> v [ 0 ] + v [ 1 ] ) ; f i n a l Op c o n c a t = Op . o f ( "+" , 2 , v −> v [ 0 ] + v [ 1 ] ) ; A new ProgramChromosome is created with the operations suitable for our problem. When creating a new ProgramChromosome, we must distinguish two different kind of operations: 1. Non-terminal operations have an arity greater than zero, which means they take at least one argument. This operations need to have child nodes, where the number of children must be equal to the arity of the operation of the parent node. Non-terminal operations will be abbreviated to operations. 2. Terminal operations have an arity of zero and from the leaves of the program tree. Terminal operations will be abbreviated to terminals. The io.jenetics.prog module comes with three predefined terminal operations: Var, Const and EphemeralConst. Var The Var operation defines a variable of a program, which is set from outside when it is evaluated. 12 https://en.wikipedia.org/wiki/Abstract_syntax_tree 13 When implementing the GP module, the emphasis was to not create a parallel world of genes and chromosomes. It was an requirement, that the existing Alterer and Selector classes could also be used for the new GP classes. This has been achieved by flattening the AST of a genetic program to fit into the 1-dimensional (flat) structure of a chromosome. 93 4.2. IO.JENETICS.PROG 1 2 3 4 final final final final Var x = Var . o f ( " x " Var y = Var . o f ( " y " Var z = Var . o f ( " z " ISeq> t e r m i n a l s CHAPTER 4. MODULES , 0) ; , 1) ; , 2) ; = ISeq . of (x , y , z ) ; The terminal operations defined in the listing above can be used for defining a program which takes a 3-dimensional vector as input parameters, x, y, and z, with the argument indices 0, 1, and 2. If you have again a look at the apply method of the operation interface, you can see that this method takes an object array of type T. The variable x will return the first element of the input arguments, because it has been created with index 0. Const The Const operation will always return the same, constant, value when evaluated. 1 2 f i n a l Const one = Const . o f ( 1 . 0 ) ; f i n a l Const p i = Const . o f ( " PI " , Math . PI ) ; We can create a constant operation in to flavors: with a value only and with a dedicated name. If a constant has a name, the symbolic name is used, instead of the value, when the program tree is printed. EphemeralConst An ephemeral constant is a terminal operation, which encapsulates a value that is generated at run time from the Supplier it is created from. Ephemeral constants allows you to have terminals that don’t have all the same values. To create an ephemeral constant that takes its random value in [0, 1) you will write the following code. 1 2 3 4 f i n a l Op rand1 = EphemeralConst . o f ( RandomRegistry . getRandom ( ) : : nextDouble ) ; f i n a l Op rand2 = EphemeralConst . o f ( "R" , RandomRegistry . 
getRandom ( ) : : nextDouble ) ; The ephemeral constant value is determined when it is inserted in the tree and never changes until it is replaced by another ephemeral constant. 4.2.2 Program creation The ProgramChromosome comes with some factory methods, which lets you easily create program trees with a given depth and a given set of operations and terminals. 1 2 3 4 5 f i n a l i n t depth = 5 ; f i n a l Op o p e r a t i o n s = I S e q . o f ( . . . ) ; f i n a l Op t e r m i n a l s = I S e q . o f ( . . . ) ; f i n a l ProgramChromosome program = ProgramChromosome . o f ( depth , o p e r a t i o n s , t e r m i n a l s ) ; The code snippet above will create a perfect program tree14 of depth 5. All nonleaf nodes will contain operations, randomly selected from the given operations, whereas all leaf nodes are filled with operations from the terminals. 14 All leafs of a perfect tree have the same depth and all internal nodes have degree Op.arity. 94 4.2. IO.JENETICS.PROG CHAPTER 4. MODULES The created program tree is perfect, which means that all leaf nodes have the same depth. If new trees needs to be created during evolution, they will be created with the depth, operations and terminals defined by the template program tree. The evolution Engine used for solving GP problems is created the same way as for normal GA problems. Also the execution of the EvolutionStream stays the same. The first Gene of the collected final Genotype represents the evolved program, which can be used to calculate function values from arbitrary arguments. 1 2 3 4 5 6 7 f i n a l Engine, Double> e n g i n e = Engine . b u i l d e r ( Main : : e r r o r , program ) . minimizing ( ) . alterers ( new S i n g l e N o d e C r o s s o v e r <>() , new Mutator <>() ) . build () ; 8 9 10 11 12 13 f i n a l ProgramGene program = e n g i n e . stream ( ) . l i m i t (300) . c o l l e c t ( E v o l u t i o n R e s u l t . toBestGenotype ( ) ) . getGene ( ) ; f i n a l double r e s u l t = program . e v a l ( 3 . 4 ) ; For a complete GP example have a look at the examples in chapter 5.7 on page 118. 4.2.3 Program repair The specialized crossover class, SingleNodeCrossover, for a TreeGene guarantees that the program tree after the alter operation is still valid. It obeys the tree structure of the gene. General alterers, not written for ProgramGene of TreeGene classes, will most likely destroy the tree property of the altered chromosome. There are essentially two possibility for handling invalid tree chromosomes: 1. Marking the chromosome as invalid. This possibility is easier to achieve, but would also to lead to a large number of invalid chromosomes, which must be recreated. When recreating invalid chromosomes we will also loose possible solutions. 2. Trying to repair the invalid chromosome. This is the approach the Jenetics library has chosen. The repair process reuses the operations in a ProgramChromosome and rebuilds the tree property by using the operation arity. Jenetics allows the usage of arbitrary Alterer implementations. Even alterers not implemented for ProgramGenes. Genes destroyed by such alterer are repaired. 95 4.3. IO.JENETICS.XML 4.2.4 CHAPTER 4. MODULES Program pruning When you are solving symbolic regression problems, the mathematical expression trees, created during the evolution process, can become quite big. From the diversity point of view, this might be not that bad, but it comes with additional computation cost. 
With the MathTreePruneAlterer you are able to simplify some portion of the population in each generation. 1 2 3 4 5 6 7 8 f i n a l Engine, Double> e n g i n e = Engine . b u i l d e r ( Main : : e r r o r , program ) . minimizing ( ) . alterers ( new S i n g l e N o d e C r o s s o v e r <>() , new Mutator <>() , new MathTreePruneAlterer < >(0.5) ) . build () ; In the example above, half of the expression trees are simplified in each generation. If you want to prune the final result, you can do this with the MathExpr.simplify method. 1 2 3 4 f i n a l ProgramGene program = e n g i n e . stream ( ) . l i m i t (3000) . c o l l e c t ( E v o l u t i o n R e s u l t . toBestGenotype ( ) ) . getGene ( ) ; 5 6 f i n a l TreeNode> e x p r = MathExpr . s i m p l i f y ( program ) ; The algorithm used for pruning the expression tree, currently only uses some basic mathematical identities, like x + 0 = x, x · 1 = x or x · 0 = 0. More advanced simplification algorithms may be implemented in the future. The MathExpr helper class can also be used for creating mathematical expression trees from the usual textual representation. 1 2 3 f i n a l MathExpr e x p r = MathExpr . p a r s e ( " 5∗ z + 6∗ x + s i n ( y ) ^3 + ( 1 + s i n ( z ∗ 5 ) / 4 ) /6 " ) ; f i n a l double v a l u e = e x p r . e v a l ( 5 . 5 , 4 , 2 . 3 ) ; The variables in an expression string are sorted alphabetically. This means, that the expression is evaluated with x = 5.5, y = 4 and z = 2.3, which leads to a result value of 44.19673085074048. 4.3 io.jenetics.xml The io.jenetics.xml module allows to write and read chromosomes and genotypes to and from XML. Since the existing JAXB marshaling is part of the deprecated javax.xml.bind module the io.jenetics.xml module is now the recommended for XML marshalling of the Jenetics classes. The XML marshalling, implemented in this module, is based on the Java XMLStreamWriter and XMLStreamReader classes of the java.xml module. 4.3.1 XML writer The main entry point for writing XML files is the typed XMLWriter interface. Listing 4.8 on the next page shows the interface of the XMLWriter. 96 4.3. IO.JENETICS.XML 1 2 3 4 CHAPTER 4. MODULES @FunctionalInterface public i n t e r f a c e Writer { public void w r i t e ( XMLStreamWriter xml , T data ) throws XMLStreamException ; 5 public s t a t i c Writer a t t r ( S t r i n g name ) ; public s t a t i c Writer a t t r ( S t r i n g name , O b j e c t v a l u e ) ; public s t a t i c Writer t e x t ( ) ; 6 7 8 9 public s t a t i c Writer elem ( S t r i n g name , Writer . . . c h i l d r e n ) ; 10 11 12 13 14 15 } public s t a t i c Writer> e l e m s ( Writer w r i t e r ) ; Listing 4.8: XMLWriter interface Together with the static Writer factory method, it is possible to define arbitrary writers through composition. There is no need for implementing the Writer interface. A simple example will show you how to create (compose) a Writer class for the IntegerChromosome. The created XML should look like the given example above. 1 2 3 4 5 6 7 8 9 < min > -2147483648 < max > 2147483647 < alleles > < allele > -1878762439 < allele > -957346595 < allele > -88668137 The following writer will create the desired XML from an integer chromosome. As the example shows, the structure of the XML can easily be grasp from the XML writer definition and vice versa. 1 2 3 4 5 6 7 8 9 10 f i n a l Writer w r i t e r = elem ( " i n t −chromosome " , a t t r ( " l e n g t h " ) . map( ch −> ch . 
l e n g t h ( ) ) , elem ( " min " , W r i t e r .< I n t e g e r >t e x t ( ) . map( ch −> ch . getMin ( ) ) ) , elem ( " max " , W r i t e r .< I n t e g e r >t e x t ( ) . map( ch −> ch . getMax ( ) ) ) , elem ( " a l l e l e s " , e l e m s ( " a l l e l e " , W r i t e r .< I n t e g e r >t e x t ( ) ) . map( ch −> ch . t o S e q ( ) . map( g −> g . g e t A l l e l e ( ) ) ) ) ); 4.3.2 XML reader Reading and writing XML files uses the same concepts. For reading XML there is an abstract Reader class, which can be easily composed. The main method of the Reader class can be seen in listing 4.9. 1 2 3 4 public abstract c l a s s Reader { public abstract T r e a d ( f i n a l XMLStreamReader xml ) throws XMLStreamException ; } Listing 4.9: XMLReader class 97 4.3. IO.JENETICS.XML CHAPTER 4. MODULES When creating a XMLReader, the structure of the XML must be defined in a similar way as for the XMLWriter. Additionally, a factory function, which will create the desired object from the extracted XML data, is needed. A Reader, which will read the XML representation of an IntegerChromosome can be seen in the following code snippet below. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 f i n a l Reader r e a d e r = elem ( ( O b j e c t [ ] v ) −> { f i n a l int length = ( int ) v [ 0 ] ; f i n a l i n t min = ( i n t ) v [ 1 ] ; f i n a l i n t max = ( i n t ) v [ 2 ] ; f i n a l L i s t a l l e l e s = ( L i s t )v [ 3 ] ; a s s e r t a l l e l e s . s i z e ( ) == l e n g t h ; return IntegerChromosome . o f ( a l l e l e s . stream ( ) . map( v a l u e −> I n t e g e r G e n e . o f ( v a l u e , min , max) . t o A r r a y ( I n t e g e r G e n e [ ] : : new) ); }, " i n t −chromosome " , a t t r ( " l e n g t h " ) . map( I n t e g e r : : p a r s e I n t ) , elem ( " min " , t e x t ( ) . map( I n t e g e r : : p a r s e I n t ) ) , elem ( " max " , t e x t ( ) . map( I n t e g e r : : p a r s e I n t ) ) , elem ( " a l l e l e s " , e l e m s ( elem ( " a l l e l e " , t e x t ( ) . map( I n t e g e r : : p a r s e I n t ) ) ) ) ); 4.3.3 Marshalling performance Another important aspect when doing marshalling, is the space needed for the marshaled objects and the time needed for doing the marshalling. For the performance tests a genotype with a varying chromosome count is used. The used genotype template can be seen in the code snippet below. 1 2 3 4 f i n a l Genotype g e n o t y p e = Genotype . o f ( DoubleChromosome . o f ( 0 . 0 , 1 . 0 , 1 0 0 ) , chromosomeCount ); Table 4.3.1 shows the required space of the marshaled genotypes for different marshalling methods: (a) Java serialization, (b) JAXB15 serialization and (c) XMLWriter. Chromosome count 1 10 100 1000 10000 100000 Java serialization 0.0017 MiB 0.0090 MiB 0.0812 MiB 0.8039 MiB 8.0309 MiB 80.3003 MiB JAXB 0.0045 MiB 0.0439 MiB 0.4379 MiB 4.3772 MiB 43.7730 MiB 437.7283 MiB XML writer 0.0035 MiB 0.0346 MiB 0.3459 MiB 3.4578 MiB 34.5795 MiB 345.7940 MiB Table 4.3.1: Marshaled object size 15 The JAXB marshalling has been removed in version 4.0. It is still part of the table for comparison with the new XML marshalling. 98 4.4. IO.JENETICS.PRNGINE CHAPTER 4. MODULES Using the Java serialization will create the smallest files and the XMLWriter of the io.jenetics.xml module will create files roughly 75% the size of the JAXB serialized genotypes. The size of the marshaled also influences the write performance. 
As you can see in diagram 4.3.1 the Java serialization is the fastest marshalling method, followed by the JAXB marshalling. The XMLWriter is the slowest one, but still comparable to the JAXB method. 107 Marshalling time [µs] 106 105 104 103 102 JAXB Java serialization XML writer 101 100 100 101 102 103 104 105 Chromosome count Figure 4.3.1: Genotype write performance For reading the serialized genotypes, we will see similar results (see diagram 4.3.2 on the next page). Reading Java serialized genotypes has the best read performance, followed by JAXB and the XML Reader. This time the difference between JAXB and the XML Reader is hardly visible. 4.4 io.jenetics.prngine The prngine16 module contains pseudo-random number generators for sequential and parallel Monte Carlo simulations17 . It has been designed to work smoothly with the Jenetics GA library, but it has no dependency to it. All PRNG implementations of this library extends the Java Random class, which makes it easily usable in other projects. 16 This module is not part of the Jenetics project directly. Since it has no dependency to any of the Jenetics modules, it has been extracted to a separate GitHub repository (https: //github.com/jenetics/prngine) with an independent versioning. 17 https://de.wikipedia.org/wiki/Monte-Carlo-Simulation 99 4.4. IO.JENETICS.PRNGINE CHAPTER 4. MODULES 108 Marshalling time [µs] 107 106 105 104 10 3 102 10 JAXB Java serialization XML reader 1 100 100 101 102 103 104 105 Chromosome count Figure 4.3.2: Genotype read performance The pseudo random number generators of the io.jenetics.prngine module are not cryptographically strong PRNGs. The io.jenetics.prngine module consists of the following PRNG implementations: KISS32Random Implementation of an simple PRNG as proposed in Good Practice in (Pseudo) Random Number Generation for Bioinformatics Applications (JKISS32, page 3) David Jones, UCL Bioinformatics Group.[16] The period of this PRNG is ≈ 2.6 · 1036 . KISS64Random Implementation of an simple PRNG as proposed in Good Practice in (Pseudo) Random Number Generation for Bioinformatics Applications (JKISS64, page 10) David Jones, UCL Bioinformatics Group.[16] The PRNG has a period of ≈ 1.8 · 1075 . LCG64ShiftRandom This class implements a linear congruential PRNG with additional bit-shift transition. It is a port of the trng::lcg64_shift PRNG class of the TRNG library created by Heiko Bauke.18 MT19937_32Random This is a 32-bit version of Mersenne Twister pseudo random number generator.19 18 https://github.com/jenetics/trng4 19 https://en.wikipedia.org/wiki/Mersenne_Twister 100 4.4. IO.JENETICS.PRNGINE CHAPTER 4. MODULES MT19937_64Random This is a 64-bit version of Mersenne Twister pseudo random number generator. XOR32ShiftRandom This generator was discovered and characterized by George Marsaglia [Xorshift RNGs]. In just three XORs and three shifts (generally fast operations) it produces a full period of 232 − 1 on 32 bits. (The missing value is zero, which perpetuates itself and must be avoided.)20 XOR64ShiftRandom This generator was discovered and characterized by George Marsaglia [Xorshift RNGs]. In just three XORs and three shifts (generally fast operations) it produces a full period of 264 − 1 on 64 bits. (The missing value is zero, which perpetuates itself and must be avoided.) All implemented PRNGs has been tested with the dieharder test suite. Table 4.4.1 shows the statistical performance of the implemented PRNGs, including the Java Random implementation. 
All implemented PRNGs have been tested with the Dieharder test suite. Table 4.4.1 shows the statistical quality of the implemented PRNGs, including the Java Random implementation. Apart from the XOR32ShiftRandom class, the j.u.Random implementation shows the poorest statistical performance.

PRNG             | Passed | Weak | Failed
KISS32Random     |    108 |    6 |      0
KISS64Random     |    109 |    5 |      0
LCG64ShiftRandom |    110 |    4 |      0
MT19937_32Random |    113 |    1 |      0
MT19937_64Random |    111 |    3 |      0
XOR32ShiftRandom |    101 |    4 |      9
XOR64ShiftRandom |    107 |    7 |      0
j.u.Random       |    106 |    4 |      4

Table 4.4.1: Dieharder results

The second important performance measure for PRNGs is the number of random numbers they are able to create per second. Table 4.4.2 shows the PRN creation speed for all implemented generators, measured on an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz with Java(TM) SE Runtime Environment (build 1.8.0_102-b14), Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode), using the JMH micro-benchmark library. The slowest random engine is the j.u.Random class, which is caused by its synchronized implementation. When only the creation speed counts, the j.u.c.ThreadLocalRandom is the random engine to use.

PRNG                    | 10^6 int/s | 10^6 float/s | 10^6 long/s | 10^6 double/s
KISS32Random            |        189 |          143 |         129 |           108
KISS64Random            |        128 |          124 |         115 |           124
LCG64ShiftRandom        |        258 |          185 |         261 |           191
MT19937_32Random        |        140 |          115 |          92 |            82
MT19937_64Random        |        148 |          120 |         148 |           120
XOR32ShiftRandom        |        227 |          161 |         140 |           120
XOR64ShiftRandom        |        225 |          166 |         235 |           166
j.u.Random              |         91 |           89 |          46 |            46
j.u.c.ThreadLocalRandom |        264 |          224 |         268 |           216

Table 4.4.2: PRNG speed
Appendix

Chapter 5: Examples

This section contains some coding examples which should give you a feeling of how to use the Jenetics library. The given examples are complete, in the sense that they will compile, run and produce the shown example output. The examples delivered with the Jenetics library can be started with the run-examples.sh script.

$ ./jenetics.example/src/main/scripts/run-examples.sh

Since the script uses JARs located in the build directory, you have to build them with the jar Gradle target first; see chapter 6.

5.1 Ones counting

Ones counting is one of the simplest model problems. It uses a binary chromosome and forms a classic genetic algorithm. (In the classic genetic algorithm the problem is a maximization problem and the fitness function is positive; the domain of the fitness function is a bit chromosome.) The fitness of a Genotype is proportional to the number of ones.

import static io.jenetics.engine.EvolutionResult.toBestPhenotype;
import static io.jenetics.engine.Limits.bySteadyFitness;

import io.jenetics.BitChromosome;
import io.jenetics.BitGene;
import io.jenetics.Genotype;
import io.jenetics.Mutator;
import io.jenetics.Phenotype;
import io.jenetics.RouletteWheelSelector;
import io.jenetics.SinglePointCrossover;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionStatistics;

public class OnesCounting {

    // This method calculates the fitness for a given genotype.
    private static Integer count(final Genotype<BitGene> gt) {
        return gt.getChromosome()
            .as(BitChromosome.class)
            .bitCount();
    }

    public static void main(String[] args) {
        // Configure and build the evolution engine.
        final Engine<BitGene, Integer> engine = Engine
            .builder(
                OnesCounting::count,
                BitChromosome.of(20, 0.15))
            .populationSize(500)
            .selector(new RouletteWheelSelector<>())
            .alterers(
                new Mutator<>(0.55),
                new SinglePointCrossover<>(0.06))
            .build();

        // Create evolution statistics consumer.
        final EvolutionStatistics<Integer, ?>
            statistics = EvolutionStatistics.ofNumber();

        final Phenotype<BitGene, Integer> best = engine.stream()
            // Truncate the evolution stream after 7 "steady"
            // generations.
            .limit(bySteadyFitness(7))
            // The evolution will stop after maximal 100
            // generations.
            .limit(100)
            // Update the evaluation statistics after
            // each generation.
            .peek(statistics)
            // Collect (reduce) the evolution stream to
            // its best phenotype.
            .collect(toBestPhenotype());

        System.out.println(statistics);
        System.out.println(best);
    }
}

The genotype in this example consists of one BitChromosome with a ones probability of 0.15. The altering of the offspring population is performed by mutation, with a mutation probability of 0.55, and then by a single-point crossover, with a crossover probability of 0.06. The evolution stream is truncated after 7 steady generations, or after at most 100 generations. The RouletteWheelSelector, set via the selector method, is used for both the offspring and the survivor selection; if no selector is set explicitly, the TournamentSelector is used by default. (For the other default values, like population size and maximal phenotype age, have a look at the Javadoc: http://jenetics.io/javadoc/jenetics/4.3/index.html.)

+---------------------------------------------------------------------------+
| Time statistics                                                           |
+---------------------------------------------------------------------------+
| Selection:           sum=0.016580144000 s; mean=0.001381678667 s          |
| Altering:            sum=0.096904159000 s; mean=0.008075346583 s          |
| Fitness calculation: sum=0.022894318000 s; mean=0.001907859833 s          |
| Overall execution:   sum=0.136575323000 s; mean=0.011381276917 s          |
+---------------------------------------------------------------------------+
| Evolution statistics                                                      |
+---------------------------------------------------------------------------+
| Generations: 12                                                           |
| Altered:     sum=40,487; mean=3373.916666667                              |
| Killed:      sum=0; mean=0.000000000                                      |
| Invalids:    sum=0; mean=0.000000000                                      |
+---------------------------------------------------------------------------+
| Population statistics                                                     |
+---------------------------------------------------------------------------+
| Age:     max=9; mean=0.808667; var=1.446299                               |
| Fitness:                                                                  |
|     min  = 1.000000000000                                                 |
|     max  = 18.000000000000                                                |
|     mean = 10.050833333333                                                |
|     var  = 7.839555898205                                                 |
|     std  = 2.799920694985                                                 |
+---------------------------------------------------------------------------+
[00001101|11110111|11111111] --> 18
The given example will print the overall timing statistics onto the console. In the Evolution statistics section you can see that the shown run took only 12 generations to fulfill the termination criterion: no better result was found for 7 consecutive generations.

5.2 Real function

In this example we try to find the minimum value of the function

    f(x) = cos(1/2 + sin(x)) · cos(x).    (5.2.1)

Figure 5.2.1: Real function (graph of f in the range [0, 2π])

The graph of function 5.2.1, in the range of [0, 2π], is shown in figure 5.2.1, and the listing beneath shows the GA implementation which will minimize the function.

import static java.lang.Math.PI;
import static java.lang.Math.cos;
import static java.lang.Math.sin;
import static io.jenetics.engine.EvolutionResult.toBestPhenotype;
import static io.jenetics.engine.Limits.bySteadyFitness;

import io.jenetics.DoubleGene;
import io.jenetics.MeanAlterer;
import io.jenetics.Mutator;
import io.jenetics.Optimize;
import io.jenetics.Phenotype;
import io.jenetics.engine.Codecs;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionStatistics;
import io.jenetics.util.DoubleRange;

public class RealFunction {

    // The fitness function.
    private static double fitness(final double x) {
        return cos(0.5 + sin(x))*cos(x);
    }

    public static void main(final String[] args) {
        final Engine<DoubleGene, Double> engine = Engine
            // Create a new builder with the given fitness
            // function and chromosome.
            .builder(
                RealFunction::fitness,
                Codecs.ofScalar(DoubleRange.of(0.0, 2.0*PI)))
            .populationSize(500)
            .optimize(Optimize.MINIMUM)
            .alterers(
                new Mutator<>(0.03),
                new MeanAlterer<>(0.6))
            // Build an evolution engine with the
            // defined parameters.
            .build();

        // Create evolution statistics consumer.
        final EvolutionStatistics<Double, ?>
            statistics = EvolutionStatistics.ofNumber();

        final Phenotype<DoubleGene, Double> best = engine.stream()
            // Truncate the evolution stream after 7 "steady"
            // generations.
            .limit(bySteadyFitness(7))
            // The evolution will stop after maximal 100
            // generations.
            .limit(100)
            // Update the evaluation statistics after
            // each generation.
            .peek(statistics)
            // Collect (reduce) the evolution stream to
            // its best phenotype.
            .collect(toBestPhenotype());

        System.out.println(statistics);
        System.out.println(best);
    }
}
The GA works with 1×1 DoubleChromosomes whose values are restricted to the range [0, 2π].

+---------------------------------------------------------------------------+
| Time statistics                                                           |
+---------------------------------------------------------------------------+
| Selection:           sum=0.064406456000 s; mean=0.003066974095 s          |
| Altering:            sum=0.070158382000 s; mean=0.003340875333 s          |
| Fitness calculation: sum=0.050452647000 s; mean=0.002402507000 s          |
| Overall execution:   sum=0.169835154000 s; mean=0.008087388286 s          |
+---------------------------------------------------------------------------+
| Evolution statistics                                                      |
+---------------------------------------------------------------------------+
| Generations: 21                                                           |
| Altered:     sum=3,897; mean=185.571428571                                |
| Killed:      sum=0; mean=0.000000000                                      |
| Invalids:    sum=0; mean=0.000000000                                      |
+---------------------------------------------------------------------------+
| Population statistics                                                     |
+---------------------------------------------------------------------------+
| Age:     max=9; mean=1.104381; var=1.962625                               |
| Fitness:                                                                  |
|     min  = -0.938171897696                                                |
|     max  = 0.936310125279                                                 |
|     mean = -0.897856583665                                                |
|     var  = 0.027246274838                                                 |
|     std  = 0.165064456617                                                 |
+---------------------------------------------------------------------------+
[[[3.389125782657314]]] --> -0.9381718976956661

The GA generates a console output like the one above. The exact minimizer of the function, for the given range, is x = 3.3891257828907939... You can also see that the evolution stream was truncated after 21 generations, which means that the best value was already found at generation 14.
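As a quick sanity check, added here for clarity and not part of the original example, the reported minimizer can be verified with the first-order condition. Differentiating equation 5.2.1 gives

    f'(x) = −sin(1/2 + sin(x))·cos²(x) − cos(1/2 + sin(x))·sin(x).

At x ≈ 3.38913 we have sin(x) ≈ −0.245 and cos(x) ≈ −0.970 (rounded values), hence

    f'(x) ≈ −(0.2522·0.9400 − 0.9677·0.2450) ≈ 0,
    f(x)  ≈ 0.9677·(−0.970) ≈ −0.938,

which matches the fitness of the best phenotype shown above.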
5.3 Rastrigin function

The Rastrigin function (https://en.wikipedia.org/wiki/Rastrigin_function) is often used to test the optimization performance of genetic algorithms:

    f(x) = A·n + Σ_{i=1..n} [x_i² − A·cos(2π·x_i)].    (5.3.1)

As the plot in figure 5.3.1 shows, the Rastrigin function has many local minima, which makes it difficult for standard, gradient-based methods to find the global minimum. If A = 10 and x_i ∈ [−5.12, 5.12], the function has only one global minimum, at x = 0 with f(x) = 0.

Figure 5.3.1: Rastrigin function

The following listing shows the Engine setup for solving the Rastrigin function, which is very similar to the setup for the real function in section 5.2. Beside the different fitness function, the Codec for double vectors is used instead of the double scalar Codec.

import static java.lang.Math.PI;
import static java.lang.Math.cos;
import static io.jenetics.engine.EvolutionResult.toBestPhenotype;
import static io.jenetics.engine.Limits.bySteadyFitness;

import io.jenetics.DoubleGene;
import io.jenetics.MeanAlterer;
import io.jenetics.Mutator;
import io.jenetics.Optimize;
import io.jenetics.Phenotype;
import io.jenetics.engine.Codecs;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionStatistics;
import io.jenetics.util.DoubleRange;

public class RastriginFunction {
    private static final double A = 10;
    private static final double R = 5.12;
    private static final int N = 2;

    private static double fitness(final double[] x) {
        double value = A*N;
        for (int i = 0; i < N; ++i) {
            value += x[i]*x[i] - A*cos(2.0*PI*x[i]);
        }
        return value;
    }

    public static void main(final String[] args) {
        final Engine<DoubleGene, Double> engine = Engine
            .builder(
                RastriginFunction::fitness,
                // Codec for 'x' vector.
                Codecs.ofVector(DoubleRange.of(-R, R), N))
            .populationSize(500)
            .optimize(Optimize.MINIMUM)
            .alterers(
                new Mutator<>(0.03),
                new MeanAlterer<>(0.6))
            .build();

        final EvolutionStatistics<Double, ?>
            statistics = EvolutionStatistics.ofNumber();

        final Phenotype<DoubleGene, Double> best = engine.stream()
            .limit(bySteadyFitness(7))
            .peek(statistics)
            .collect(toBestPhenotype());

        System.out.println(statistics);
        System.out.println(best);
    }
}

The console output of the program shows that Jenetics finds the optimal solution after 38 generations.

+---------------------------------------------------------------------------+
| Time statistics                                                           |
+---------------------------------------------------------------------------+
| Selection:           sum=0.209185134000 s; mean=0.005504871947 s          |
| Altering:            sum=0.295102044000 s; mean=0.007765843263 s          |
| Fitness calculation: sum=0.176879937000 s; mean=0.004654735184 s          |
| Overall execution:   sum=0.664517256000 s; mean=0.017487296211 s          |
+---------------------------------------------------------------------------+
| Evolution statistics                                                      |
+---------------------------------------------------------------------------+
| Generations: 38                                                           |
| Altered:     sum=7,549; mean=198.657894737                                |
| Killed:      sum=0; mean=0.000000000                                      |
| Invalids:    sum=0; mean=0.000000000                                      |
+---------------------------------------------------------------------------+
| Population statistics                                                     |
+---------------------------------------------------------------------------+
| Age:     max=8; mean=1.100211; var=1.814053                               |
| Fitness:                                                                  |
|     min  = 0.000000000000                                                 |
|     max  = 63.672604047475                                                |
|     mean = 3.484157452128                                                 |
|     var  = 71.047475139018                                                |
|     std  = 8.428966433616                                                 |
+---------------------------------------------------------------------------+
[[[-1.3226168588424143E-9],[-1.096964971404292E-9]]] --> 0.0
5.4 0/1 Knapsack

In the knapsack problem (https://en.wikipedia.org/wiki/Knapsack_problem) a set of items, together with their sizes and values, is given. The task is to select a subset of the items so that the total size does not exceed the knapsack size, while the total value is as large as possible. For solving the 0/1 knapsack problem we define a BitChromosome with one bit for each item. If the i-th bit is set to one, the i-th item is selected.

import static io.jenetics.engine.EvolutionResult.toBestPhenotype;
import static io.jenetics.engine.Limits.bySteadyFitness;

import java.util.Random;
import java.util.function.Function;
import java.util.stream.Collector;
import java.util.stream.Stream;

import io.jenetics.BitGene;
import io.jenetics.Mutator;
import io.jenetics.Phenotype;
import io.jenetics.RouletteWheelSelector;
import io.jenetics.SinglePointCrossover;
import io.jenetics.TournamentSelector;
import io.jenetics.engine.Codecs;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionStatistics;
import io.jenetics.util.ISeq;
import io.jenetics.util.RandomRegistry;

// The main class.
public class Knapsack {

    // This class represents a knapsack item, with a specific
    // "size" and "value".
    final static class Item {
        public final double size;
        public final double value;

        Item(final double size, final double value) {
            this.size = size;
            this.value = value;
        }

        // Create a new random knapsack item.
        static Item random() {
            final Random r = RandomRegistry.getRandom();
            return new Item(
                r.nextDouble()*100,
                r.nextDouble()*100
            );
        }

        // Collector for summing up the knapsack items.
        static Collector<Item, ?, Item> toSum() {
            return Collector.of(
                () -> new double[2],
                (a, b) -> {a[0] += b.size; a[1] += b.value;},
                (a, b) -> {a[0] += b[0]; a[1] += b[1]; return a;},
                r -> new Item(r[0], r[1])
            );
        }
    }

    // Creating the fitness function.
    static Function<ISeq<Item>, Double> fitness(final double size) {
        return items -> {
            final Item sum = items.stream().collect(Item.toSum());
            return sum.size <= size ? sum.value : 0;
        };
    }

    public static void main(final String[] args) {
        final int nitems = 15;
        final double kssize = nitems*100.0/3.0;

        final ISeq<Item> items = Stream.generate(Item::random)
            .limit(nitems)
            .collect(ISeq.toISeq());

        // Configure and build the evolution engine.
        final Engine<BitGene, Double> engine = Engine
            .builder(fitness(kssize), Codecs.ofSubSet(items))
            .populationSize(500)
            .survivorsSelector(new TournamentSelector<>(5))
            .offspringSelector(new RouletteWheelSelector<>())
            .alterers(
                new Mutator<>(0.115),
                new SinglePointCrossover<>(0.16))
            .build();

        // Create evolution statistics consumer.
        final EvolutionStatistics<Double, ?>
            statistics = EvolutionStatistics.ofNumber();

        final Phenotype<BitGene, Double> best = engine.stream()
            // Truncate the evolution stream after 7 "steady"
            // generations.
            .limit(bySteadyFitness(7))
            // The evolution will stop after maximal 100
            // generations.
            .limit(100)
            // Update the evaluation statistics after
            // each generation.
            .peek(statistics)
            // Collect (reduce) the evolution stream to
            // its best phenotype.
            .collect(toBestPhenotype());

        System.out.println(statistics);
        System.out.println(best);
    }
}
The console output for the Knapsack GA will look like the listing beneath.

+---------------------------------------------------------------------------+
| Time statistics                                                           |
+---------------------------------------------------------------------------+
| Selection:           sum=0.044465978000 s; mean=0.005558247250 s          |
| Altering:            sum=0.067385211000 s; mean=0.008423151375 s          |
| Fitness calculation: sum=0.037208189000 s; mean=0.004651023625 s          |
| Overall execution:   sum=0.126468539000 s; mean=0.015808567375 s          |
+---------------------------------------------------------------------------+
| Evolution statistics                                                      |
+---------------------------------------------------------------------------+
| Generations: 8                                                            |
| Altered:     sum=4,842; mean=605.250000000                                |
| Killed:      sum=0; mean=0.000000000                                      |
| Invalids:    sum=0; mean=0.000000000                                      |
+---------------------------------------------------------------------------+
| Population statistics                                                     |
+---------------------------------------------------------------------------+
| Age:     max=7; mean=1.387500; var=2.780039                               |
| Fitness:                                                                  |
|     min  = 0.000000000000                                                 |
|     max  = 542.363235999342                                               |
|     mean = 436.098248628661                                               |
|     var  = 11431.801291812390                                             |
|     std  = 106.919601999878                                               |
+---------------------------------------------------------------------------+
[01111011|10111101] --> 542.3632359993417
5.5 Traveling salesman

The Traveling Salesman problem (https://en.wikipedia.org/wiki/Travelling_salesman_problem) is one of the classical problems in computational mathematics and the most notorious NP-complete problem. The goal is to find the shortest distance, or the path with the least cost, between N different cities. Testing all possible paths for N cities would lead to N! checks to find the shortest one. The following example uses a set of cities lying on a circle. That means the optimal path will be a polygon, which makes it easier to check the quality of the found solution.
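Because the cities lie evenly spaced on a circle of radius R, the length of the optimal tour, used as minPathLength in the listing below, is simply the perimeter of the regular N-gon inscribed in that circle. This short derivation is added here for clarity: each of the N edges subtends an angle of 2π/N at the circle center, and the chord length for that angle is 2R·sin(π/N), hence

    minimal tour length = N · 2R·sin(π/N) = 2·N·R·sin(π/N).

With N = 20 stops and R = 10 this evaluates to about 62.5738, which is exactly the "Known min path length" printed at the end of the example.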
import static java.lang.Math.PI;
import static java.lang.Math.cos;
import static java.lang.Math.hypot;
import static java.lang.Math.sin;
import static java.lang.System.out;
import static java.util.Objects.requireNonNull;
import static io.jenetics.engine.EvolutionResult.toBestPhenotype;
import static io.jenetics.engine.Limits.bySteadyFitness;

import java.util.Random;
import java.util.function.Function;
import java.util.stream.IntStream;

import io.jenetics.EnumGene;
import io.jenetics.Optimize;
import io.jenetics.PartiallyMatchedCrossover;
import io.jenetics.Phenotype;
import io.jenetics.SwapMutator;
import io.jenetics.engine.Codec;
import io.jenetics.engine.Codecs;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionStatistics;
import io.jenetics.engine.Problem;
import io.jenetics.util.ISeq;
import io.jenetics.util.MSeq;
import io.jenetics.util.RandomRegistry;

public class TravelingSalesman
    implements Problem<ISeq<double[]>, EnumGene<double[]>, Double>
{
    private final ISeq<double[]> _points;

    // Create new TSP problem instance with given way points.
    public TravelingSalesman(ISeq<double[]> points) {
        _points = requireNonNull(points);
    }

    @Override
    public Function<ISeq<double[]>, Double> fitness() {
        return p -> IntStream.range(0, p.length())
            .mapToDouble(i -> {
                final double[] p1 = p.get(i);
                final double[] p2 = p.get((i + 1)%p.size());
                return hypot(p1[0] - p2[0], p1[1] - p2[1]);
            }).sum();
    }

    @Override
    public Codec<ISeq<double[]>, EnumGene<double[]>> codec() {
        return Codecs.ofPermutation(_points);
    }

    // Create a new TSM example problem with the given number
    // of stops. All stops lie on a circle with the given radius.
    public static TravelingSalesman of(int stops, double radius) {
        final MSeq<double[]> points = MSeq.ofLength(stops);
        final double delta = 2.0*PI/stops;

        for (int i = 0; i < stops; ++i) {
            final double alpha = delta*i;
            final double x = cos(alpha)*radius + radius;
            final double y = sin(alpha)*radius + radius;
            points.set(i, new double[]{x, y});
        }

        // Shuffling of the created points.
        final Random random = RandomRegistry.getRandom();
        for (int j = points.length() - 1; j > 0; --j) {
            final int i = random.nextInt(j + 1);
            final double[] tmp = points.get(i);
            points.set(i, points.get(j));
            points.set(j, tmp);
        }

        return new TravelingSalesman(points.toISeq());
    }

    public static void main(String[] args) {
        int stops = 20;
        double R = 10;
        double minPathLength = 2.0*stops*R*sin(PI/stops);

        TravelingSalesman tsm = TravelingSalesman.of(stops, R);
        Engine<EnumGene<double[]>, Double> engine = Engine
            .builder(tsm)
            .optimize(Optimize.MINIMUM)
            .maximalPhenotypeAge(11)
            .populationSize(500)
            .alterers(
                new SwapMutator<>(0.2),
                new PartiallyMatchedCrossover<>(0.35))
            .build();

        // Create evolution statistics consumer.
        EvolutionStatistics<Double, ?>
            statistics = EvolutionStatistics.ofNumber();

        Phenotype<EnumGene<double[]>, Double> best = engine.stream()
            // Truncate the evolution stream after 25 "steady"
            // generations.
            .limit(bySteadyFitness(25))
            // The evolution will stop after maximal 250
            // generations.
            .limit(250)
            // Update the evaluation statistics after
            // each generation.
            .peek(statistics)
            // Collect (reduce) the evolution stream to
            // its best phenotype.
            .collect(toBestPhenotype());

        out.println(statistics);
        out.println("Known min path length: " + minPathLength);
        out.println("Found min path length: " + best.getFitness());
    }
}

The Traveling Salesman problem is a very good example which shows you how to solve combinatorial problems with a GA. Jenetics contains several classes which work very well with this kind of problem. Wrapping the base type into an EnumGene is the first thing to do. In our example, every city is represented by its way point, a double[] array, which is wrapped into an EnumGene. Creating a genotype for integer values is very easy with the factory method of the PermutationChromosome; for other data types you have to use one of the other factory methods of the permutation chromosome. As alterers, we are using a swap mutator and a partially-matched crossover. These alterers guarantee that no invalid solutions are created: every city exists exactly once in the altered chromosomes.
+---------------------------------------------------------------------------+
| Time statistics                                                           |
+---------------------------------------------------------------------------+
| Selection:           sum=0.077451297000 s; mean=0.000619610376 s          |
| Altering:            sum=0.205351688000 s; mean=0.001642813504 s          |
| Fitness calculation: sum=0.097127225000 s; mean=0.000777017800 s          |
| Overall execution:   sum=0.371304464000 s; mean=0.002970435712 s          |
+---------------------------------------------------------------------------+
| Evolution statistics                                                      |
+---------------------------------------------------------------------------+
| Generations: 125                                                          |
| Altered:     sum=177,200; mean=1417.600000000                             |
| Killed:      sum=173; mean=1.384000000                                    |
| Invalids:    sum=0; mean=0.000000000                                      |
+---------------------------------------------------------------------------+
| Population statistics                                                     |
+---------------------------------------------------------------------------+
| Age:     max=11; mean=1.677872; var=5.617299                              |
| Fitness:                                                                  |
|     min  = 62.573786016092                                                |
|     max  = 344.248763720487                                               |
|     mean = 144.636749974591                                               |
|     var  = 5082.947247878953                                              |
|     std  = 71.294791169334                                                |
+---------------------------------------------------------------------------+
Known min path length: 62.57378601609235
Found min path length: 62.57378601609235

The listing above shows the output generated by our example. The last two lines compare the known minimal path length with the one found by the GA. Both values are identical, so the GA has found an optimal tour.
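As a small illustration of the permutation encoding used above, added here as a sketch and not part of the original example, a tour over cities that are simply identified by integer indexes can be created directly with the PermutationChromosome factory method; every index then appears exactly once in the resulting chromosome. The factory and class names are used as published in the io.jenetics base module.

import io.jenetics.EnumGene;
import io.jenetics.Genotype;
import io.jenetics.PermutationChromosome;

public class PermutationEncoding {
    public static void main(final String[] args) {
        // A random permutation of the indexes 0..19, i.e. a random
        // tour over 20 cities identified by their index.
        final PermutationChromosome<Integer> tour =
            PermutationChromosome.ofInteger(20);

        final Genotype<EnumGene<Integer>> gt = Genotype.of(tour);
        System.out.println(gt);
    }
}

For arbitrary city types, as in the example above, the same encoding is obtained via Codecs.ofPermutation, which decodes the chromosome back into an ISeq of the original objects.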
5.6 Evolving images

The following example tries to approximate a given image by semi-transparent polygons. (Original idea by Roger Johansson, http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa.) It comes with a Swing UI, where you can immediately start your own experiments. After compiling the sources with

$ ./gradlew jar

you can start the example by calling

$ ./jrun io.jenetics.example.image.EvolvingImages

Figure 5.6.1 shows the GUI after evolving the default image for about 4,000 generations.

Figure 5.6.1: Evolving images UI

With the »Open« button it is possible to load other images for polygonization. The »Save« button allows you to store polygonized images in PNG format to disk. At the bottom of the UI, you can change some of the GA parameters of the example:

Population size: The number of individuals of the population.

Tournament size: The example uses a TournamentSelector for selecting the offspring population. This parameter lets you set the number of individuals used for the tournament step.

Mutation rate: The probability that a polygon component (color or vertex position) is altered.

Mutation magnitude: In case a polygon component is going to be mutated, its value will be randomly modified in the uniform range of [−m, +m].

Polygon length: The number of edges (or vertices) of the created polygons.

Polygon count: The number of polygons of one individual (Genotype).

Reference image size: To improve the processing speed, the fitness of a given polygon set (individual) is not calculated with the full-sized image. Instead, a scaled reference image with the given size is used. A smaller reference image will speed up the calculation, but will also reduce the accuracy.

It is also possible to run and configure the Evolving Images example from the command line. This allows you to do long-running evolution experiments and save polygon images every n generations, specified with the --image-generation parameter.

$ ./jrun io.jenetics.example.image.EvolvingImages evolve \
    --engine-properties engine.properties \
    --input-image monalisa.png \
    --output-dir evolving-images \
    --generations 10000 \
    --image-generation 100

Every command line argument has proper default values, so that it is possible to start it without parameters. Listing 5.1 shows the default values for the GA engine if the --engine-properties parameter is not specified.

population_size=50
tournament_size=3
mutation_rate=0.025
mutation_multitude=0.15
polygon_length=4
polygon_count=250
reference_image_width=60
reference_image_height=60

Listing 5.1: Default engine.properties

For a quick start, you can simply call

$ ./jrun io.jenetics.example.image.EvolvingImages evolve

The images in figure 5.6.2 show the resulting polygon images after the given number of generations: a) 10^0, b) 10^2, c) 10^3, d) 10^4, e) 10^5 and f) 10^6 generations. They were created with the command line version of the program using the default engine.properties file (listing 5.1):

Figure 5.6.2: Evolving Mona Lisa images

$ ./jrun io.jenetics.example.image.EvolvingImages evolve \
    --generations 1000000 \
    --image-generation 100

5.7 Symbolic regression

Symbolic regression is a classical example in genetic programming and tries to find a mathematical expression for a given set of values.

"Symbolic regression involves finding a mathematical expression, in symbolic form, that provides a good, best, or perfect fit between a given finite sampling of values of the independent variables and the associated values of the dependent variables." [18]

The following example shows how to solve the GP problem with Jenetics. We are trying to find the polynomial 4x³ − 3x² + x, which fits a given data set. The sample data were created with the polynomial we are searching for. This makes it easy to check the quality of the approximation found by the GP.
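For clarity, the error measure implemented in the error method of the listing below can be written out as

    error(P) = Σ_i [ (y_i − P(x_i))² + 10⁻⁵ · size(P) ],

where (x_i, y_i) are the sample points and size(P) is assumed to be the value returned by program.size(), i.e. roughly the number of nodes in the program tree. The first term is the usual squared deviation from the samples, while the second term adds a small penalty for large programs, which biases the search towards compact expressions.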
import static java.lang.Math.pow;

import java.util.Arrays;

import io.jenetics.Genotype;
import io.jenetics.Mutator;
import io.jenetics.engine.Codec;
import io.jenetics.engine.Engine;
import io.jenetics.engine.EvolutionResult;
import io.jenetics.util.ISeq;
import io.jenetics.util.RandomRegistry;

import io.jenetics.ext.SingleNodeCrossover;

import io.jenetics.prog.ProgramChromosome;
import io.jenetics.prog.ProgramGene;
import io.jenetics.prog.op.EphemeralConst;
import io.jenetics.prog.op.MathOp;
import io.jenetics.prog.op.Op;
import io.jenetics.prog.op.Var;

public class SymbolicRegression {

    // Sample data created with 4*x^3 - 3*x^2 + x
    static final double[][] SAMPLES = new double[][] {
        {-1.0, -8.0000},
        {-0.9, -6.2460},
        {-0.8, -4.7680},
        {-0.7, -3.5420},
        {-0.6, -2.5440},
        {-0.5, -1.7500},
        {-0.4, -1.1360},
        {-0.3, -0.6780},
        {-0.2, -0.3520},
        {-0.1, -0.1340},
        {0.0, 0.0000},
        {0.1, 0.0740},
        {0.2, 0.1120},
        {0.3, 0.1380},
        {0.4, 0.1760},
        {0.5, 0.2500},
        {0.6, 0.3840},
        {0.7, 0.6020},
        {0.8, 0.9280},
        {0.9, 1.3860},
        {1.0, 2.0000}
    };

    // Definition of the operations.
    static final ISeq<Op<Double>> OPERATIONS = ISeq.of(
        MathOp.ADD, MathOp.SUB, MathOp.MUL
    );

    // Definition of the terminals.
    static final ISeq<Op<Double>> TERMINALS = ISeq.of(
        Var.of("x", 0),
        EphemeralConst.of(() ->
            (double)RandomRegistry.getRandom().nextInt(10))
    );

    static double error(final ProgramGene<Double> program) {
        return Arrays.stream(SAMPLES)
            .mapToDouble(sample ->
                pow(sample[1] - program.eval(sample[0]), 2) +
                program.size()*0.00001)
            .sum();
    }

    static final Codec<ProgramGene<Double>, ProgramGene<Double>> CODEC =
        Codec.of(
            Genotype.of(ProgramChromosome.of(
                5,
                ch -> ch.getRoot().size() <= 50,
                OPERATIONS,
                TERMINALS
            )),
            Genotype::getGene
        );

    public static void main(final String[] args) {
        final Engine<ProgramGene<Double>, Double> engine = Engine
            .builder(SymbolicRegression::error, CODEC)
            .minimizing()
            .alterers(
                new SingleNodeCrossover<>(),
                new Mutator<>())
            .build();

        final ProgramGene<Double> program = engine.stream()
            .limit(100)
            .collect(EvolutionResult.toBestGenotype())
            .getGene();

        System.out.println(program.toParenthesesString());
    }
}

One output of a GP run is shown in figure 5.7.1. If we simplify this program tree, we get exactly the polynomial which created the sample data.

Figure 5.7.1: Symbolic regression polynomial (program tree of one evolved solution, built from add, sub, mul and x nodes)

5.8 DTLZ1

Deb, Thiele, Laumanns and Zitzler have proposed a set of generational MOPs for testing and comparing MOEAs. This suite of benchmarks attempts to define generic MOEA test problems that are scalable to a user-defined number of objectives. Because of the last names of its creators, this test suite is known as DTLZ (Deb-Thiele-Laumanns-Zitzler). [7]

DTLZ1 is an M-objective problem with a linear Pareto-optimal front: [12]

    f_1(x)     = 1/2 · x_1 x_2 ··· x_{M−1} (1 + g(x_M)),
    f_2(x)     = 1/2 · x_1 x_2 ··· (1 − x_{M−1}) (1 + g(x_M)),
    ...
    f_{M−1}(x) = 1/2 · x_1 (1 − x_2) (1 + g(x_M)),
    f_M(x)     = 1/2 · (1 − x_1) (1 + g(x_M)),

    with 0 ≤ x_i ≤ 1 for all i ∈ {1, ..., n}.

The function g(x_M) requires |x_M| = k variables and can be any function with g ≥ 0. Typically g is defined as

    g(x_M) = 100 · [ |x_M| + Σ_{x_i ∈ x_M} ( (x_i − 1/2)² − cos(20π(x_i − 1/2)) ) ].

In the above problem, the total number of variables is n = M + k − 1. The search space contains 11^k − 1 local Pareto-optimal fronts, each of which can attract an MOEA.
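The example below uses M = 3 objectives and k = 2, hence n = 4 decision variables. Written out for this configuration (added here for clarity, to make the loop in the f method easier to follow), the objectives are

    f_1(x) = 1/2 · x_1 x_2 (1 + g),
    f_2(x) = 1/2 · x_1 (1 − x_2) (1 + g),
    f_3(x) = 1/2 · (1 − x_1) (1 + g),

    with g = 100 · [ 2 + Σ_{i=3..4} ( (x_i − 1/2)² − cos(20π(x_i − 1/2)) ) ].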
import static java.lang.Math.PI;
import static java.lang.Math.cos;
import static java.lang.Math.pow;

import io.jenetics.DoubleGene;
import io.jenetics.Mutator;
import io.jenetics.Phenotype;
import io.jenetics.TournamentSelector;
import io.jenetics.engine.Codecs;
import io.jenetics.engine.Engine;
import io.jenetics.engine.Problem;
import io.jenetics.util.DoubleRange;
import io.jenetics.util.ISeq;
import io.jenetics.util.IntRange;

import io.jenetics.ext.SimulatedBinaryCrossover;
import io.jenetics.ext.moea.MOEA;
import io.jenetics.ext.moea.NSGA2Selector;
import io.jenetics.ext.moea.Vec;

public class DTLZ1 {
    private static final int VARIABLES = 4;
    private static final int OBJECTIVES = 3;
    private static final int K = VARIABLES - OBJECTIVES + 1;

    static final Problem<double[], DoubleGene, Vec<double[]>>
    PROBLEM = Problem.of(
        DTLZ1::f,
        Codecs.ofVector(DoubleRange.of(0, 1.0), VARIABLES)
    );

    static Vec<double[]> f(final double[] x) {
        double g = 0.0;
        for (int i = VARIABLES - K; i < VARIABLES; i++) {
            g += pow(x[i] - 0.5, 2.0) - cos(20.0*PI*(x[i] - 0.5));
        }
        g = 100.0*(K + g);

        final double[] f = new double[OBJECTIVES];
        for (int i = 0; i < OBJECTIVES; ++i) {
            f[i] = 0.5*(1.0 + g);
            for (int j = 0; j < OBJECTIVES - i - 1; ++j) {
                f[i] *= x[j];
            }
            if (i != 0) {
                f[i] *= 1 - x[OBJECTIVES - i - 1];
            }
        }

        return Vec.of(f);
    }

    static final Engine<DoubleGene, Vec<double[]>> ENGINE =
        Engine.builder(PROBLEM)
            .populationSize(100)
            .alterers(
                new SimulatedBinaryCrossover<>(1),
                new Mutator<>(1.0/VARIABLES))
            .offspringSelector(new TournamentSelector<>(5))
            .survivorsSelector(NSGA2Selector.ofVec())
            .minimizing()
            .build();

    public static void main(final String[] args) {
        final ISeq<Vec<double[]>> front = ENGINE.stream()
            .limit(2500)
            .collect(MOEA.toParetoSet(IntRange.of(1000, 1100)))
            .map(Phenotype::getFitness);
    }
}

The listing above shows the encoding of the DTLZ1 problem with the Jenetics library. Figure 5.8.1 shows the Pareto front produced by the DTLZ1 optimization.

Figure 5.8.1: Pareto front DTLZ1 (objectives f1, f2 and f3)

Chapter 6: Build

For building the Jenetics library from source, download the most recent, stable package version from https://github.com/jenetics/jenetics/releases and extract it to some build directory.

$ unzip jenetics-<version>.zip -d <build-dir>

<version> denotes the actual Jenetics version and <build-dir> the actual build directory. Alternatively you can check out the latest version from the Git master branch.

$ git clone https://github.com/jenetics/jenetics.git <build-dir>

Jenetics uses Gradle (http://gradle.org/downloads) as build system and organizes the source into sub-projects (modules). (If you are calling the gradlew script instead of gradle, which is part of the downloaded package, the proper Gradle version is automatically downloaded and you don't have to install Gradle explicitly.) Each sub-project is located in its own sub-directory.
Published projects

• jenetics: This project contains the source code and tests for the Jenetics base module.
• jenetics.ext: This module contains additional non-standard GA operations and data types. It also contains classes for solving multi-objective problems (MOEA).
• jenetics.prog: This module contains classes which allow to do genetic programming (GP). It seamlessly works with the existing EvolutionStream and evolution Engine.
• jenetics.xml: XML marshalling module for the Jenetics base data structures.
• prngine: PRNGine is a pseudo-random number generator library for sequential and parallel Monte Carlo simulations. Since this library has no dependencies on any of the other projects, it has its own repository (https://github.com/jenetics/prngine) with independent versioning.

Non-published projects

• jenetics.example: This project contains example code for the base module.
• jenetics.doc: Contains the code of the web site and this manual.
• jenetics.tool: This module contains classes used for doing integration testing and algorithmic performance testing. It is also used for creating GA performance measures and creating diagrams from the performance measures.

For building the library, change into the <build-dir> directory (or one of the module directories) and call one of the available tasks:

• compileJava: Compiles the Jenetics sources and copies the class files to the <build-dir>/<module-dir>/build/classes/main directory.
• jar: Compiles the sources and creates the JAR files. The artifacts are copied to the <build-dir>/<module-dir>/build/libs directory.
• test: Compiles and executes the unit tests. The test results are printed onto the console and a test report, created by TestNG, is written to the <build-dir>/<module-dir> directory.
• javadoc: Generates the API documentation. The Javadoc is stored in the <build-dir>/<module-dir>/build/docs directory.
• clean: Deletes the <build-dir>/build/* directories and removes all generated artifacts.

For building the library from the source, call

$ cd <build-dir>
$ gradle jar

or

$ ./gradlew jar

if you don't have the Gradle build system installed; calling the Gradle wrapper script will download all needed files and trigger the build task afterwards.

External library dependencies

The following external projects are used for running and/or building the Jenetics library.

• TestNG
  Version: 6.14.3
  Homepage: http://testng.org/doc/index.html
  License: Apache License, Version 2.0
  Scope: test
• Apache Commons Math
  Version: 3.6.1
  Homepage: http://commons.apache.org/proper/commons-math/
  Download: http://tweedo.com/mirror/apache/commons/math/binaries/commons-math3-3.6.1-bin.zip
  License: Apache License, Version 2.0
  Scope: test
• EqualsVerifier
  Version: 2.5.2
  Homepage: http://jqno.nl/equalsverifier/
  Download: https://github.com/jqno/equalsverifier/releases
  License: Apache License, Version 2.0
  Scope: test
• Java2Html
  Version: 5.0
  Homepage: http://www.java2html.de/
  Download: http://www.java2html.de/java2html_50.zip
  License: GPL or CPL 1.0
  Scope: javadoc
• Gradle
  Version: 4.10
  Homepage: http://gradle.org/
  Download: http://services.gradle.org/distributions/gradle-4.10-bin.zip
  License: Apache License, Version 2.0
  Scope: build
Maven Central

The whole Jenetics package can also be downloaded from the Maven Central repository, http://repo.maven.apache.org/maven2.

pom.xml snippet for Maven:

<dependency>
    <groupId>io.jenetics</groupId>
    <artifactId>module-name</artifactId>
    <version>4.3.0</version>
</dependency>

Gradle:

'io.jenetics:module-name:4.3.0'

License

The library itself is licensed under the Apache License, Version 2.0.

Copyright 2007-2018 Franz Wilhelmstötter

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Bibliography

[1] Thomas Bäck. Evolutionary Algorithms in Theory and Practice. Oxford University Press, 1996.
[2] James E. Baker. Reducing bias and inefficiency in the selection algorithm. Proceedings of the Second International Conference on Genetic Algorithms and their Application, pages 14-21, 1987.
[3] Shumeet Baluja and Rich Caruana. Removing the genetics from the standard genetic algorithm. Pages 38-46. Morgan Kaufmann Publishers, 1995.
[4] Heiko Bauke. Tina's random number generator library. https://github.com/rabauke/trng4/blob/master/doc/trng.pdf, 2011.
[5] Tobias Blickle and Lothar Thiele. A comparison of selection schemes used in evolutionary algorithms. Evolutionary Computation, 4:361-394, 1997.
[6] Joshua Bloch. Effective Java. Addison-Wesley Professional, 3rd edition, 2018.
[7] Carlos A. Coello Coello, Gary B. Lamont, and David A. Van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Genetic and Evolutionary Computation. Springer, Berlin, Heidelberg, 2nd edition, 2007.
[8] P.K. Chawdhry, R. Roy, and R.K. Pant. Soft Computing in Engineering Design and Manufacturing. Springer London, 1998.
[9] Richard Dawkins. The Blind Watchmaker. New York: W. W. Norton & Company, 1986.
[10] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multi-objective genetic algorithm: NSGA-II. Trans. Evol. Comp, 6(2):182-197, April 2002.
[11] Kalyanmoy Deb and Hans-Georg Beyer. Self-adaptive genetic algorithms with simulated binary crossover. Complex Systems, 9:431-454, 1999.
[12] Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable test problems for evolutionary multi-objective optimization. Number 112 in TIK-Technical Report. ETH-Zentrum, Switzerland, July 2001.
[13] Félix-Antoine Fortin and Marc Parizeau. Revisiting the NSGA-II crowding-distance computation. In Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation, GECCO '13, pages 623-630, New York, NY, USA, 2013. ACM.
[14] J.F. Hughes and J.D. Foley. Computer Graphics: Principles and Practice. The Systems Programming Series. Addison-Wesley, 2014.
[15] Raj Jain and Imrich Chlamtac. The P² algorithm for dynamic calculation of quantiles and histograms without storing observations. Commun. ACM, 28(10):1076-1085, October 1985.
[16] David Jones. Good practice in (pseudo) random number generation for bioinformatics applications, May 2010.
[17] Abdullah Konak, David W. Coit, and Alice E. Smith. Multi-objective optimization using genetic algorithms: A tutorial. Rel. Eng. & Sys. Safety, 91(9):992-1007, 2006.
[18] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, USA, 1992.
[19] John R. Koza. Introduction to genetic programming: Tutorial. In Proceedings of the 10th Annual Conference Companion on Genetic and Evolutionary Computation, GECCO '08, pages 2299-2338, New York, NY, USA, 2008. ACM.
[20] Sean Luke. Essentials of Metaheuristics. Lulu, second edition, 2013. Available for free at http://cs.gmu.edu/~sean/book/metaheuristics/.
[21] Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer, 1996.
[22] Melanie Mitchell. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, USA, 1998.
[23] Heinz Mühlenbein and Dirk Schlierkamp-Voosen. Predictive models for the breeder genetic algorithm I: Continuous parameter optimization. 1(1):25-49.
[24] Oracle. Value-based classes. https://docs.oracle.com/javase/8/docs/api/java/lang/doc-files/ValueBased.html, 2014.
[25] A. Osyczka. Multicriteria optimization for engineering design. Design Optimization, pages 193-227, 1985.
[26] Charles C. Palmer and Aaron Kershenbaum. An approach to a problem in network design using genetic algorithms. Networks, 26(3):151-163, 1995.
[27] Franz Rothlauf. Representations for Genetic and Evolutionary Algorithms. Springer, 2nd edition, 2006.
[28] Daniel Shiffman. The Nature of Code. The Nature of Code, 1st edition, December 2012.
[29] S. N. Sivanandam and S. N. Deepa. Introduction to Genetic Algorithms. Springer, 2010.
[30] W. Vent. Rechenberg, Ingo: Evolutionsstrategie. Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. 170 S. mit 36 Abb. Frommann-Holzboog-Verlag, Stuttgart 1973. Broschiert. Feddes Repertorium, 86(5):337-337, 1975.
[31] Eric W. Weisstein. Scalar function. http://mathworld.wolfram.com/ScalarFunction.html, 2015.
[32] Eric W. Weisstein. Vector function. http://mathworld.wolfram.com/VectorFunction.html, 2015.
[33] Darrell Whitley. A genetic algorithm tutorial. Statistics and Computing, 4:65-85, 1994.

