REDUCE
User’s Manual
Free Version
Anthony C. Hearn and Rainer Schöpf

March 9, 2019

Copyright © 2004–2019 Anthony C. Hearn, Rainer Schöpf and contributors to the Reduce project. All rights reserved.
Reproduction of this manual is allowed, provided that the source of the material is clearly acknowledged, and the copyright notice is retained.




Contents

1 Introductory Information

2 Structure of Programs
2.1 The REDUCE Standard Character Set
2.2 Numbers
2.3 Identifiers
2.4 Variables
2.5 Strings
2.6 Comments
2.7 Operators

3 Expressions

3.1 Scalar Expressions
3.2 Integer Expressions
3.3 Boolean Expressions
3.4 Equations
3.5 Proper Statements as Expressions

4 Lists
4.1 Operations on Lists
4.1.1 LIST
4.1.2 FIRST
4.1.3 SECOND
4.1.4 THIRD
4.1.5 REST
4.1.6 . (Cons) Operator
4.1.7 APPEND
4.1.8 REVERSE
4.1.9 List Arguments of Other Operators
4.1.10 Caveats and Examples





5 Statements
5.1 Assignment Statements
5.1.1 Set and Unset Statements
5.2 Group Statements
5.3 Conditional Statements
5.4 FOR Statements
5.5 WHILE ... DO
5.6 REPEAT ... UNTIL
5.7 Compound Statements
5.7.1 Compound Statements with GO TO
5.7.2 Labels and GO TO Statements
5.7.3 RETURN Statements


6 Commands and Declarations
6.1 Array Declarations
6.2 Mode Handling Declarations
6.3 END
6.4 BYE Command
6.5 SHOWTIME Command
6.6 DEFINE Command


7 Built-in Prefix Operators
7.1 Numerical Operators
7.1.1 ABS
7.1.2 CEILING
7.1.3 CONJ
7.1.4 FACTORIAL
7.1.5 FIX
7.1.6 FLOOR
7.1.7 IMPART
7.1.8 MAX/MIN
7.1.9 NEXTPRIME
7.1.10 RANDOM
7.1.11 RANDOM_NEW_SEED
7.1.12 REPART
7.1.13 ROUND
7.1.14 SIGN



7.2 Mathematical Functions
7.3 Bernoulli Numbers and Euler Numbers
7.4 Fibonacci Numbers and Fibonacci Polynomials
7.5 Motzkin numbers
7.6 CHANGEVAR operator
7.6.1 CHANGEVAR example: The 2-dim. Laplace Equation
7.6.2 Another CHANGEVAR example: An Euler Equation
7.7 CONTINUED_FRACTION Operator
7.8 DF Operator
7.8.1 Switches influencing differentiation
7.8.2 Adding Differentiation Rules
7.9 INT Operator
7.9.1 Options
7.9.2 Advanced Use
7.9.3 References


7.10 LENGTH Operator
7.11 MAP Operator
7.12 MKID Operator
7.13 The Pochhammer Notation
7.14 PF Operator
7.15 SELECT Operator
7.16 SOLVE Operator
7.16.1 Handling of Undetermined Solutions
7.16.2 Solutions of Equations Involving Cubics and Quartics
7.16.3 Other Options
7.16.4 Parameters and Variable Dependency
7.17 Even and Odd Operators
7.18 Linear Operators
7.19 Non-Commuting Operators
7.20 Symmetric and Antisymmetric Operators
7.21 Declaring New Prefix Operators
7.22 Declaring New Infix Operators
7.23 Creating/Removing Variable Dependency

8 Display and Structuring of Expressions
8.1 Kernels
8.2 The Expression Workspace
8.3 Output of Expressions
8.3.1 LINELENGTH Operator
8.3.2 Output Declarations
8.3.3 Output Control Switches
8.3.4 WRITE Command
8.3.5 Suppression of Zeros
8.3.6 FORTRAN Style Output Of Expressions
8.3.7 Saving Expressions for Later Use as Input
8.3.8 Displaying Expression Structure
8.4 Changing the Internal Order of Variables
8.5 Obtaining Parts of Algebraic Expressions
8.5.1 COEFF Operator
8.5.2 COEFFN Operator
8.5.3 PART Operator
8.5.4 Substituting for Parts of Expressions

9 Polynomials and Rationals
9.1 Controlling the Expansion of Expressions
9.2 Factorization of Polynomials
9.3 Cancellation of Common Factors
9.3.1 Determining the GCD of Two Polynomials
9.4 Working with Least Common Multiples
9.5 Controlling Use of Common Denominators
9.6 divide and mod/remainder Operators
9.7 Polynomial Pseudo-Division
9.8 RESULTANT Operator
9.9 DECOMPOSE Operator

9.10 INTERPOL operator
9.11 Obtaining Parts of Polynomials and Rationals
9.11.1 DEG Operator
9.11.2 DEN Operator
9.11.3 LCOF Operator
9.11.4 LPOWER Operator
9.11.5 LTERM Operator
9.11.6 MAINVAR Operator
9.11.7 NUM Operator
9.11.8 REDUCT Operator
9.11.9 TOTALDEG Operator
9.12 Polynomial Coefficient Arithmetic
9.12.1 Rational Coefficients in Polynomials
9.12.2 Real Coefficients in Polynomials
9.12.3 Modular Number Coefficients in Polynomials
9.12.4 Complex Number Coefficients in Polynomials
9.13 ROOT_VAL Operator

10 Assigning and Testing Algebraic Properties
10.1 REALVALUED Declaration and Check
10.2 SELFCONJUGATE Declaration
10.3 Declaring Expressions Positive or Negative

11 Substitution Commands
11.1 SUB Operator
11.2 LET Rules
11.2.1 FOR ALL ... LET
11.2.2 FOR ALL ... SUCH THAT ... LET
11.2.3 Removing Assignments and Substitution Rules
11.2.4 Overlapping LET Rules
11.2.5 Substitutions for General Expressions
11.3 Rule Lists
11.4 Asymptotic Commands
12 File Handling Commands
12.1 IN Command
12.2 OUT Command
12.3 SHUT Command
12.4 REDUCE startup file

13 Commands for Interactive Use
13.1 Referencing Previous Results
13.2 Interactive Editing
13.3 Interactive File Control

14 Matrix Calculations
14.1 MAT Operator
14.2 Matrix Variables
14.3 Matrix Expressions
14.4 Operators with Matrix Arguments
14.4.1 DET Operator
14.4.2 MATEIGEN Operator
14.4.3 TP Operator
14.4.4 Trace Operator
14.4.5 Matrix Cofactors
14.4.6 NULLSPACE Operator
14.4.7 RANK Operator
14.5 Matrix Assignments
14.6 Evaluating Matrix Elements
15 Procedures
15.1 Procedure Heading
15.2 Procedure Body
15.3 Matrix-valued Procedures
15.4 Using LET Inside Procedures
15.5 LET Rules as Procedures
15.6 REMEMBER Statement
16 User Contributed Packages
16.1 ALGINT: Integration of square roots
16.2 APPLYSYM: Infinitesimal symmetries of differential equations
16.2.1 Introduction and overview of the symmetry method
16.2.2 Applying symmetries with APPLYSYM
16.2.3 Solving quasilinear PDEs
16.2.4 Transformation of DEs
Bibliography
16.3 ARNUM: An algebraic number package
Bibliography
16.4 ASSERT: Dynamic Verification of Assertions on Function Types
16.4.1 Loading and Using
16.4.2 Type Definitions
16.4.3 Assertions
16.4.4 Dynamic Checking of Assertions
16.4.5 Switches
16.4.6 Efficiency
16.4.7 Possible Extensions
16.5 ASSIST: Useful utilities for various applications
16.5.1 Introduction
16.5.2 Survey of the Available New Facilities
16.5.3 Control of Switches
16.5.4 Manipulation of the List Structure
16.5.5 The Bag Structure and its Associated Functions
16.5.6 Sets and their Manipulation Functions
16.5.7 General Purpose Utility Functions
16.5.8 Properties and Flags
16.5.9 Control Functions
16.5.10 Handling of Polynomials
16.5.11 Handling of Transcendental Functions
16.5.12 Handling of n-dimensional Vectors
16.5.13 Handling of Grassmann Operators
16.5.14 Handling of Matrices
16.6 AVECTOR: A vector algebra and calculus package
16.6.1 Introduction
16.6.2 Vector declaration and initialisation
16.6.3 Vector algebra
16.6.4 Vector calculus
16.6.5 Volume and Line Integration
16.6.6 Defining new functions and procedures
16.6.7 Acknowledgements
16.7 BIBASIS: A Package for Calculating Boolean Involutive Bases
16.7.1 Introduction
16.7.2 Boolean Ring
16.7.3 Pommaret Involutive Algorithm
16.7.4 BIBASIS Package
16.7.5 Examples
Bibliography
16.8 BOOLEAN: A package for boolean algebra
16.8.1 Introduction
16.8.2 Entering boolean expressions
16.8.3 Normal forms
16.8.4 Evaluation of a boolean expression
16.9 CALI: A package for computational commutative algebra
16.10 CAMAL: Calculations in celestial mechanics
16.10.1 Introduction
16.10.2 How CAMAL Worked
16.10.3 Towards a CAMAL Module
16.10.4 Integration with REDUCE
16.10.5 The Simple Experiments
16.10.6 A Medium-Sized Problem
16.10.7 Conclusion
Bibliography
16.11 CANTENS: A Package for Manipulations and Simplifications of Indexed Objects
16.11.1 Introduction
16.11.2 Handling of space(s)
16.11.3 Generic tensors and their manipulation
16.11.4 Specific tensors
16.11.5 The simplification function CANONICAL
16.12 CDE: A package for integrability of PDEs
16.12.1 Introduction: why CDE?
16.12.2 Jet space of even and odd variables, and total derivatives
16.12.3 Differential equations in even and odd variables
16.12.4 Calculus of variations
16.12.5 C-differential operators
16.12.6 C-differential operators as superfunctions
16.12.7 The Schouten bracket
16.12.8 Computing linearization and its adjoint
16.12.9 Higher symmetries
16.12.10 Setting up the jet space and the differential equation
16.12.11 Solving the problem via dimensional analysis
16.12.12 Solving the problem using CRACK
16.12.13 Local conservation laws
16.12.14 Local Hamiltonian operators
16.12.15 Korteweg–de Vries equation
16.12.16 Boussinesq equation
16.12.17 Kadomtsev–Petviashvili equation
16.12.18 Examples of Schouten bracket of local Hamiltonian operators
16.12.19 Bi-Hamiltonian structure of the KdV equation
16.12.20 Bi-Hamiltonian structure of the WDVV equation
16.12.21 Schouten bracket of multidimensional operators
16.12.22 Non-local operators
16.12.23 Non-local Hamiltonian operators for the Korteweg–de Vries equation
16.12.24 Non-local recursion operator for the Korteweg–de Vries equation
16.12.25 Non-local Hamiltonian-recursion operators for Plebanski equation
16.12.26 Appendix: old versions of CDE
Bibliography
16.13 CDIFF: A package for computations in geometry of Differential Equations
16.13.1 Introduction
16.13.2 Computing with CDIFF
Bibliography
16.14 CGB: Computing Comprehensive Gröbner Bases
16.14.1 Introduction
16.14.2 Using the REDLOG Package
16.14.3 Term Ordering Mode
16.14.4 CGB: Comprehensive Gröbner Basis
16.14.5 GSYS: Gröbner System
16.14.6 GSYS2CGB: Gröbner System to CGB
16.14.7 Switch CGBREAL: Computing over the Real Numbers
16.14.8 Switches
Bibliography
16.15 COMPACT: Package for compacting expressions
16.16 CRACK: Solving overdetermined systems of PDEs or ODEs
16.17 CVIT: Fast calculation of Dirac gamma matrix traces
16.18 DEFINT: A definite integration interface
16.18.1 Introduction
16.18.2 Integration between zero and infinity
16.18.3 Integration over other ranges
16.18.4 Using the definite integration package
16.18.5 Integral Transforms
16.18.6 Additional Meijer G-function Definitions
16.18.7 The print_conditions function
16.18.8 Tracing
16.18.9 Acknowledgements
Bibliography
16.19 DESIR: Differential linear homogeneous equation solutions in the neighborhood of irregular and regular singular points
16.19.1 INTRODUCTION
16.19.2 FORMS OF SOLUTIONS
16.19.3 INTERACTIVE USE
16.19.4 DIRECT USE
16.19.5 USEFUL FUNCTIONS
16.19.6 LIMITATIONS
16.20 DFPART: Derivatives of generic functions
16.20.1 Generic Functions
16.20.2 Partial Derivatives
16.20.3 Substitutions
16.21 DUMMY: Canonical form of expressions with dummy variables
16.21.1 Introduction
16.21.2 Dummy variables and dummy summations
16.21.3 The Operators and their Properties
16.21.4 The Function CANONICAL
16.21.5 Bibliography
16.22 EXCALC: A differential geometry package
16.22.1 Introduction
16.22.2 Declarations
16.22.3 Exterior Multiplication
16.22.4 Partial Differentiation
16.22.5 Exterior Differentiation
16.22.6 Inner Product
16.22.7 Lie Derivative
16.22.8 Hodge-* Duality Operator
16.22.9 Variational Derivative
16.22.10 Handling of Indices
16.22.11 Metric Structures
16.22.12 Riemannian Connections
16.22.13 Killing Vectors
16.22.14 Ordering and Structuring
16.22.15 Summary of Operators and Commands
16.22.16 Examples
16.23 FIDE: Finite difference method for partial differential equations
16.23.1 Abstract
16.23.2 EXPRES
16.23.3 IIMET
16.23.4 APPROX
16.23.5 CHARPOL
16.23.6 HURWP
16.23.7 LINBAND
16.24 FPS: Automatic calculation of formal power series
16.24.1 Introduction
16.24.2 REDUCE operator FPS
16.24.3 REDUCE operator SimpleDE
16.24.4 Problems in the current version
Bibliography
16.25 GCREF: A Graph Cross Referencer
16.25.1 Basic Usage
16.25.2 Shell Script "gcref"
16.25.3 Rendering with yEd
16.26 GENTRAN: A code generation package
16.27 GNUPLOT: Display of functions and surfaces
16.27.1 Introduction
16.27.2 Command plot
16.27.3 Paper output
16.27.4 Mesh generation for implicit curves
16.27.5 Mesh generation for surfaces
16.27.6 GNUPLOT operation
16.27.7 Saving GNUPLOT command sequences
16.27.8 Direct Call of GNUPLOT
16.27.9 Examples
16.28 GROEBNER: A Gröbner basis package
16.28.1 Background
16.28.2 Loading of the Package
16.28.3 The Basic Operators
16.28.4 Ideal Decomposition & Equation System Solving
16.28.5 Calculations “by Hand”
Bibliography
16.29GUARDIAN: Guarded Expressions in Practice . . . . . . . . . . 572
16.29.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 572
16.29.2 An outline of our method . . . . . . . . . . . . . . . . . . 573
16.29.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 582
16.29.4 Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
16.29.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 587
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
16.30IDEALS: Arithmetic for polynomial ideals . . . . . . . . . . . . 589
16.30.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 589
16.30.2 Initialization . . . . . . . . . . . . . . . . . . . . . . . . 589
16.30.3 Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
16.30.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 590
16.30.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 591
16.31INEQ: Support for solving inequalities . . . . . . . . . . . . . . . 592
16.32INVBASE: A package for computing involutive bases . . . . . . . 594
16.32.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 594
16.32.2 The Basic Operators . . . . . . . . . . . . . . . . . . . . 595
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
16.33LALR: A parser generator . . . . . . . . . . . . . . . . . . . . . 598
16.33.1 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . 599
16.33.2 An example . . . . . . . . . . . . . . . . . . . . . . . . . 600
16.34LAPLACE: Laplace transforms . . . . . . . . . . . . . . . . . . . 601
16.35LIE: Functions for the classification of real n-dimensional Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603



Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
16.36LIMITS: A package for finding limits . . . . . . . . . . . . . . . 607
16.36.1 Normal entry points . . . . . . . . . . . . . . . . . . . . 607
16.36.2 Direction-dependent limits . . . . . . . . . . . . . . . . . 607
16.37LINALG: Linear algebra package . . . . . . . . . . . . . . . . . 608
16.37.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 608
16.37.2 Getting started . . . . . . . . . . . . . . . . . . . . . . . 609
16.37.3 What’s available . . . . . . . . . . . . . . . . . . . . . . 610
16.37.4 Fast Linear Algebra . . . . . . . . . . . . . . . . . . . . . 634
16.37.5 Acknowledgments . . . . . . . . . . . . . . . . . . . . . 635
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
16.38LISTVECOPS: Vector operations on lists . . . . . . . . . . . . . 636
16.39LPDO: Linear Partial Differential Operators . . . . . . . . . . . . 639
16.39.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 639
16.39.2 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . 640
16.39.3 Shapes of F-elements . . . . . . . . . . . . . . . . . . . . 641
16.39.4 Commands . . . . . . . . . . . . . . . . . . . . . . . . . 642
16.40MODSR: Modular solve and roots . . . . . . . . . . . . . . . . . 649
16.41MRVLIMIT: A new exp-log limits package . . . . . . . . . . . . 650
16.41.1 The Exp-Log Limits package . . . . . . . . . . . . . . . . 650
16.41.2 The Algorithm . . . . . . . . . . . . . . . . . . . . . . . 651
16.41.3 The tracing facility . . . . . . . . . . . . . . . . . . . . . 653
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
16.42NCPOLY: Non–commutative polynomial ideals . . . . . . . . . . 656
16.42.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 656
16.42.2 Setup, Cleanup . . . . . . . . . . . . . . . . . . . . . . . 656
16.42.3 Left and right ideals . . . . . . . . . . . . . . . . . . . . 658
16.42.4 Gröbner bases . . . . . . . . . . . . . . . . . . . . . . . . 658
16.42.5 Left or right polynomial division . . . . . . . . . . . . . . 659
16.42.6 Left or right polynomial reduction . . . . . . . . . . . . . 660


16.42.7 Factorization . . . . . . . . . . . . . . . . . . . . . . . . 660
16.42.8 Output of expressions . . . . . . . . . . . . . . . . . . . 661
16.43NORMFORM: Computation of matrix normal forms . . . . . . . 663
16.43.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 663
16.43.2 Smith normal form . . . . . . . . . . . . . . . . . . . . . 664
16.43.3 smithex_int . . . . . . . . . . . . . . . . . . . . . . . . . 665
16.43.4 frobenius . . . . . . . . . . . . . . . . . . . . . . . . . . 666
16.43.5 ratjordan . . . . . . . . . . . . . . . . . . . . . . . . . . 667
16.43.6 jordansymbolic . . . . . . . . . . . . . . . . . . . . . . . 668
16.43.7 jordan . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
16.43.8 Algebraic extensions: Using the ARNUM package . . . . . 671
16.43.9 Modular arithmetic . . . . . . . . . . . . . . . . . . . . . 672
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
16.44 NUMERIC: Solving numerical problems . . . . . . . . . . . . 674
16.44.1 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
16.44.2 Minima . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
16.44.3 Roots of Functions/ Solutions of Equations . . . . . . . . 676
16.44.4 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
16.44.5 Ordinary Differential Equations . . . . . . . . . . . . . . 678
16.44.6 Bounds of a Function . . . . . . . . . . . . . . . . . . . . 680
16.44.7 Chebyshev Curve Fitting . . . . . . . . . . . . . . . . . . 681
16.44.8 General Curve Fitting . . . . . . . . . . . . . . . . . . . 682
16.44.9 Function Bases . . . . . . . . . . . . . . . . . . . . . . . 683
16.45 ODESOLVE: Ordinary differential equations solver . . . . . . . 685
16.45.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 685
16.45.2 Installation . . . . . . . . . . . . . . . . . . . . . . . . . 686
16.45.3 User interface . . . . . . . . . . . . . . . . . . . . . . . . 687
16.45.4 Output syntax . . . . . . . . . . . . . . . . . . . . . . . . 693
16.45.5 Solution techniques . . . . . . . . . . . . . . . . . . . . . 693
16.45.6 Extension interface . . . . . . . . . . . . . . . . . . . . . 698
16.45.7 Change log . . . . . . . . . . . . . . . . . . . . . . . . . 701
16.45.8 Planned developments . . . . . . . . . . . . . . . . . . . 701
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
16.46 ORTHOVEC: Manipulation of scalars and vectors . . . . . . . . 704
16.46.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 704
16.46.2 Initialisation . . . . . . . . . . . . . . . . . . . . . . . . 705
16.46.3 Input-Output . . . . . . . . . . . . . . . . . . . . . . . . 705
16.46.4 Algebraic Operations . . . . . . . . . . . . . . . . . . . . 706
16.46.5 Differential Operations . . . . . . . . . . . . . . . . . . . 708
16.46.6 Integral Operations . . . . . . . . . . . . . . . . . . . . . 710
16.46.7 Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . 710
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
16.47 PHYSOP: Operator calculus in quantum theory . . . . . . . . . 714
16.47.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 714
16.47.2 The NONCOM2 Package . . . . . . . . . . . . . . . . . 714
16.47.3 The PHYSOP package . . . . . . . . . . . . . . . . . . . 715
16.47.4 Known problems in the current release of PHYSOP . . . . 723
16.47.5 Final remarks . . . . . . . . . . . . . . . . . . . . . . . . 723
16.47.6 Appendix: List of error and warning messages . . . . . . 724
16.48 PM: A REDUCE pattern matcher . . . . . . . . . . . . . . . . 726
16.48.1 M(exp,temp) . . . . . . . . . . . . . . . . . . . . . . 727
16.48.2 temp _= logical_exp . . . . . . . . . . . . . . . . . . . . 728
16.48.3 S(exp,{temp1 -> sub1, temp2 -> sub2, . . . }, rept, depth) . 729
16.48.4 temp :- exp and temp ::- exp . . . . . . . . . . . . . . . . 730
16.48.5 Arep({rep1,rep2,. . . }) . . . . . . . . . . . . . . . . . . . 731
16.48.6 Drep({rep1,rep2,..}) . . . . . . . . . . . . . . . . . . . . 731
16.48.7 Switches . . . . . . . . . . . . . . . . . . . . . . . . . . 731
16.49 QSUM: Indefinite and Definite Summation of q-hypergeometric
Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
16.49.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 733
16.49.2 Elementary q-Functions . . . . . . . . . . . . . . . . . . 733
16.49.3 q-Gosper Algorithm . . . . . . . . . . . . . . . . . . . . 734
16.49.4 q-Zeilberger Algorithm . . . . . . . . . . . . . . . . . . . 735
16.49.5 REDUCE operator QGOSPER . . . . . . . . . . . . . . . 736
16.49.6 REDUCE operator QSUMRECURSION . . . . . . . . . . 738
16.49.7 Simplification Operators . . . . . . . . . . . . . . . . . . 743
16.49.8 Global Variables and Switches . . . . . . . . . . . . . . . 744
16.49.9 Messages . . . . . . . . . . . . . . . . . . . . . . . . . . 745
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
16.50 RANDPOLY: A random polynomial generator . . . . . . . . . . 748
16.50.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 748
16.50.2 Basic use of randpoly . . . . . . . . . . . . . . . . . . 749
16.50.3 Advanced use of randpoly . . . . . . . . . . . . . . . . 750
16.50.4 Subsidiary functions: rand, proc, random . . . . . . . . . 751
16.50.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 753
16.50.6 Appendix: Algorithmic background . . . . . . . . . . . . 754
16.51 RATAPRX: Rational Approximations Package for REDUCE . . 758
16.51.1 Periodic Decimal Representation . . . . . . . . . . . . . . 758
16.51.2 Continued Fractions . . . . . . . . . . . . . . . . . . . . 760
16.51.3 Padé Approximation . . . . . . . . . . . . . . . . . . . . 766
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
16.52 RATINT: Integrate Rational Functions using the Minimal Algebraic Extension to the Constant Field . . . . . . . . . . . . . . . . 770
16.52.1 Rational Integration . . . . . . . . . . . . . . . . . . . . 770
16.52.2 The Algorithm . . . . . . . . . . . . . . . . . . . . . . . 772
16.52.3 The log_sum operator . . . . . . . . . . . . . . . . . . . 774
16.52.4 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
16.52.5 Hermite’s method . . . . . . . . . . . . . . . . . . . . . . 778
16.52.6 Tracing the ratint program . . . . . . . . . . . . . . . . 779
16.52.7 Bugs, suggestions and comments . . . . . . . . . . . . . 780
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
16.53 REACTEQN: Support for chemical reaction equation systems . . 780
16.54 REDLOG: Extend REDUCE to a computer logic system . . . . . 785
16.55 RESET: Code to reset REDUCE to its initial state . . . . . . . . 785
16.56 RESIDUE: A residue package . . . . . . . . . . . . . . . . . . 786
16.57 RLFI: REDUCE LaTeX formula interface . . . . . . . . . . . . 790
16.57.1 APPENDIX: Summary and syntax . . . . . . . . . . . . . 792
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
16.58 ROOTS: A REDUCE root finding package . . . . . . . . . . . . 796
16.58.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 796
16.58.2 Root Finding Strategies . . . . . . . . . . . . . . . . . . . 796
16.58.3 Top Level Functions . . . . . . . . . . . . . . . . . . . . 797
16.58.4 Switches Used in Input . . . . . . . . . . . . . . . . . . . 800
16.58.5 Internal and Output Use of Switches . . . . . . . . . . . . 801
16.58.6 Root Package Switches . . . . . . . . . . . . . . . . . . . 801
16.58.7 Operational Parameters and Parameter Setting. . . . . . . 802
16.58.8 Avoiding truncation of polynomials on input . . . . . . . 803
16.59 RSOLVE: Rational/integer polynomial solvers . . . . . . . . . . 804
16.59.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 804
16.59.2 The user interface . . . . . . . . . . . . . . . . . . . . . . 804
16.59.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 805
16.59.4 Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
16.60 RTRACE: Tracing in REDUCE . . . . . . . . . . . . . . . . . 807
16.60.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 807
16.60.2 RTrace versus RDebug . . . . . . . . . . . . . . . . . . . 807
16.60.3 Procedure tracing: RTR, UNRTR . . . . . . . . . . . . . 808
16.60.4 Assignment tracing: RTRST, UNRTRST . . . . . . . . . 810
16.60.5 Tracing active rules: TRRL, UNTRRL . . . . . . . . . . 812
16.60.6 Tracing inactive rules: TRRLID, UNTRRLID . . . . . . . 813
16.60.7 Output control: RTROUT . . . . . . . . . . . . . . . . . 814
16.61 SCOPE: REDUCE source code optimization package . . . . . . 815
16.62 SETS: A basic set theory package . . . . . . . . . . . . . . . . 816
16.62.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 816
16.62.2 Infix operator precedence . . . . . . . . . . . . . . . . . . 817
16.62.3 Explicit set representation and mkset . . . . . . . . . . . 817
16.62.4 Union and intersection . . . . . . . . . . . . . . . . . . . 818
16.62.5 Symbolic set expressions . . . . . . . . . . . . . . . . . . 818
16.62.6 Set difference . . . . . . . . . . . . . . . . . . . . . . . . 819
16.62.7 Predicates on sets . . . . . . . . . . . . . . . . . . . . . . 820
16.62.8 Possible future developments . . . . . . . . . . . . . . . . 824
16.63 SPARSE: Sparse Matrix Calculations . . . . . . . . . . . . . . 825
16.63.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 825
16.63.2 Sparse Matrix Calculations . . . . . . . . . . . . . . . . . 825
16.63.3 Sparse Matrix Expressions . . . . . . . . . . . . . . . . . 826
16.63.4 Operators with Sparse Matrix Arguments . . . . . . . . . 826
16.63.5 The Linear Algebra Package for Sparse Matrices . . . . . 828
16.63.6 Available Functions . . . . . . . . . . . . . . . . . . . . . 829
16.63.7 Fast Linear Algebra . . . . . . . . . . . . . . . . . . . . . 851
16.63.8 Acknowledgments . . . . . . . . . . . . . . . . . . . . . 851
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
16.64 SPDE: Finding symmetry groups of PDE’s . . . . . . . . . . . . 852
16.64.1 Description of the System Functions and Variables . . . . 852
16.64.2 How to Use the Package . . . . . . . . . . . . . . . . . . 855
16.64.3 Test File . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
16.65 SPECFN: Package for special functions . . . . . . . . . . . . . 864
16.66 SPECFN2: Package for special special functions . . . . . . . . 865
16.66.1 REDUCE operator HYPERGEOMETRIC . . . . . . . . . 866
16.66.2 Extending the HYPERGEOMETRIC operator . . . . . . 866
16.66.3 REDUCE operator meijerg . . . . . . . . . . . . . . . 867
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
16.67 SSTOOLS: Computations with supersymmetric algebraic and differential expressions . . . . . . . . . . . . . . . . . . . . . . . . 869
16.67.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 869
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
16.68 SUM: A package for series summation . . . . . . . . . . . . . . 871
16.69 SYMMETRY: Operations on symmetric matrices . . . . . . . . 873
16.69.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 873
16.69.2 Operators for linear representations . . . . . . . . . . . . 873
16.69.3 Display Operators . . . . . . . . . . . . . . . . . . . . . 875
16.69.4 Storing a new group . . . . . . . . . . . . . . . . . . . . 875
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
16.70 TAYLOR: Manipulation of Taylor series . . . . . . . . . . . . . 878
16.70.1 Basic Use . . . . . . . . . . . . . . . . . . . . . . . . . . 878
16.70.2 Caveats . . . . . . . . . . . . . . . . . . . . . . . . . . . 882
16.70.3 Warning messages . . . . . . . . . . . . . . . . . . . . . 883
16.70.4 Error messages . . . . . . . . . . . . . . . . . . . . . . . 883
16.70.5 Comparison to other packages . . . . . . . . . . . . . . . 885
16.71 TPS: A truncated power series package . . . . . . . . . . . . . 887
16.71.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 887
16.71.2 PS Operator . . . . . . . . . . . . . . . . . . . . . . . . . 887
16.71.3 PSEXPLIM Operator . . . . . . . . . . . . . . . . . . . . 889
16.71.4 PSPRINTORDER Switch . . . . . . . . . . . . . . . . . 889
16.71.5 PSORDLIM Operator . . . . . . . . . . . . . . . . . . . 889
16.71.6 PSTERM Operator . . . . . . . . . . . . . . . . . . . . . 890
16.71.7 PSORDER Operator . . . . . . . . . . . . . . . . . . . . 890
16.71.8 PSSETORDER Operator . . . . . . . . . . . . . . . . . . 890
16.71.9 PSDEPVAR Operator . . . . . . . . . . . . . . . . . . . 891
16.71.10 PSEXPANSIONPT operator . . . . . . . . . . . . . . 891
16.71.11 PSFUNCTION Operator . . . . . . . . . . . . . . . . 891
16.71.12 PSCHANGEVAR Operator . . . . . . . . . . . . . . 892
16.71.13 PSREVERSE Operator . . . . . . . . . . . . . . . . . 892
16.71.14 PSCOMPOSE Operator . . . . . . . . . . . . . . . . 893
16.71.15 PSSUM Operator . . . . . . . . . . . . . . . . . . . . 894
16.71.16 PSTAYLOR Operator . . . . . . . . . . . . . . . . . 895
16.71.17 PSCOPY Operator . . . . . . . . . . . . . . . . . . . 895
16.71.18 PSTRUNCATE Operator . . . . . . . . . . . . . . . . 896
16.71.19 Arithmetic Operations . . . . . . . . . . . . . . . . . 896
16.71.20 Differentiation . . . . . . . . . . . . . . . . . . . . . 897
16.71.21 Restrictions and Known Bugs . . . . . . . . . . . . . 897
16.72 TRI: TeX REDUCE interface . . . . . . . . . . . . . . . . . . . 899
16.73 TRIGINT: Weierstrass substitution in REDUCE . . . . . . . . . 900
16.73.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 900
16.73.2 Statement of the Algorithm . . . . . . . . . . . . . . . . . 901
16.73.3 REDUCE implementation . . . . . . . . . . . . . . . . . 901
16.73.4 Definite Integration . . . . . . . . . . . . . . . . . . . . . 903
16.73.5 Tracing the trigint function . . . . . . . . . . . . . . . . 904
16.73.6 Bugs, comments, suggestions . . . . . . . . . . . . . . . 904
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
16.74 TRIGSIMP: Simplification and factorization of trigonometric and
hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . . . 905
16.74.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 905
16.74.2 Simplifying trigonometric expressions . . . . . . . . . . . 905
16.74.3 Factorizing trigonometric expressions . . . . . . . . . . . 909
16.74.4 GCDs of trigonometric expressions . . . . . . . . . . . . 910
16.74.5 Further Examples . . . . . . . . . . . . . . . . . . . . . . 910
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
16.75 TURTLE: Turtle Graphics Interface for REDUCE . . . . . . . . 915
16.75.1 Turtle Graphics . . . . . . . . . . . . . . . . . . . . . . . 915
16.75.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . 915
16.75.3 Turtle Functions . . . . . . . . . . . . . . . . . . . . . . 916
16.75.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 921
16.75.5 References . . . . . . . . . . . . . . . . . . . . . . . . . 927
16.76 WU: Wu algorithm for polynomial systems . . . . . . . . . . . 929
16.77 XCOLOR: Color factor in some field theories . . . . . . . . . . 931
16.78 XIDEAL: Gröbner Bases for exterior algebra . . . . . . . . . . 933
16.78.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . 933
16.78.2 Declarations . . . . . . . . . . . . . . . . . . . . . . . . 934
16.78.3 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . 935
16.78.4 Switches . . . . . . . . . . . . . . . . . . . . . . . . . . 937
16.78.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 937
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
16.79 ZEILBERG: Indefinite and definite summation . . . . . . . . . 941
16.79.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 941
16.79.2 Gosper Algorithm . . . . . . . . . . . . . . . . . . . . . 941
16.79.3 Zeilberger Algorithm . . . . . . . . . . . . . . . . . . . . 942
16.79.4 REDUCE operator GOSPER . . . . . . . . . . . . . . . . 943
16.79.5 REDUCE operator EXTENDED_GOSPER . . . . . . . . . 946
16.79.6 REDUCE operator SUMRECURSION . . . . . . . . . . . 946
16.79.7 REDUCE operator EXTENDED_SUMRECURSION . . . . 949
16.79.8 REDUCE operator HYPERRECURSION . . . . . . . . . . 950
16.79.9 REDUCE operator HYPERSUM . . . . . . . . . . . . . . 952
16.79.10 REDUCE operator SUMTOHYPER . . . . . . . . . . . 954
16.79.11 Simplification Operators . . . . . . . . . . . . . . . . 954
16.79.12 Tracing . . . . . . . . . . . . . . . . . . . . . . . . . 956
16.79.13 Global Variables and Switches . . . . . . . . . . . . . 958
16.79.14 Messages . . . . . . . . . . . . . . . . . . . . . . . . 959
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
16.80 ZTRANS: Z-transform package . . . . . . . . . . . . . . . . . 962
16.80.1 Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . 962
16.80.2 Inverse Z-Transform . . . . . . . . . . . . . . . . . . . . 962
16.80.3 Input for the Z-Transform . . . . . . . . . . . . . . . . . 962
16.80.4 Input for the Inverse Z-Transform . . . . . . . . . . . . . 963
16.80.5 Application of the Z-Transform . . . . . . . . . . . . . . 964
16.80.6 EXAMPLES . . . . . . . . . . . . . . . . . . . . . . . . 964
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970

17 Symbolic Mode


17.1 Symbolic Infix Operators . . . . . . . . . . . . . . . . . . . . . . 973
17.2 Symbolic Expressions . . . . . . . . . . . . . . . . . . . . . . . . 973
17.3 Quoted Expressions . . . . . . . . . . . . . . . . . . . . . . . . . 973
17.4 Lambda Expressions . . . . . . . . . . . . . . . . . . . . . . . . 973
17.5 Symbolic Assignment Statements . . . . . . . . . . . . . . . . . 974
17.6 FOR EACH Statement . . . . . . . . . . . . . . . . . . . . . . . 975
17.7 Symbolic Procedures . . . . . . . . . . . . . . . . . . . . . . . . 975
17.8 Standard Lisp Equivalent of Reduce Input . . . . . . . . . . . . . 976
17.9 Communicating with Algebraic Mode . . . . . . . . . . . . . . . 976
17.9.1 Passing Algebraic Mode Values to Symbolic Mode . . . . 977
17.9.2 Passing Symbolic Mode Values to Algebraic Mode . . . . 980
17.9.3 Complete Example . . . . . . . . . . . . . . . . . . . . . 980
17.9.4 Defining Procedures for Intermode Communication . . . . 981
17.10 Rlisp ’88 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
17.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
18 Calculations in High Energy Physics


18.1 High Energy Physics Operators . . . . . . . . . . . . . . . . . . . 983
18.1.1 . (Cons) Operator . . . . . . . . . . . . . . . . . . . . . . 983
18.1.2 G Operator for Gamma Matrices . . . . . . . . . . . . . . 984
18.1.3 EPS Operator . . . . . . . . . . . . . . . . . . . . . . . . 985
18.2 Vector Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
18.3 Additional Expression Types . . . . . . . . . . . . . . . . . . . . 986
18.3.1 Vector Expressions . . . . . . . . . . . . . . . . . . . . . 986
18.3.2 Dirac Expressions . . . . . . . . . . . . . . . . . . . . . 986
18.4 Trace Calculations . . . . . . . . . . . . . . . . . . . . . . . . . 987
18.5 Mass Declarations . . . . . . . . . . . . . . . . . . . . . . . . . . 987
18.6 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988



18.7 Extensions to More Than Four Dimensions . . . . . . . . . . . . 989
19 REDUCE and Rlisp Utilities


19.1 The Standard Lisp Compiler . . . . . . . . . . . . . . . . . . . . 991
19.2 Fast Loading Code Generation Program . . . . . . . . . . . . . . 992
19.3 The Standard Lisp Cross Reference Program . . . . . . . . . . . . 993
19.3.1 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . 994
19.3.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
19.3.3 Options . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
19.4 Prettyprinting REDUCE Expressions . . . . . . . . . . . . . . . . 994
19.5 Prettyprinting Standard Lisp S-Expressions . . . . . . . . . . . . 995
20 Maintaining REDUCE


A Reserved Identifiers


B Bibliography


C Changes since Version 3.8




This document provides the user with a description of the algebraic programming
system REDUCE. The capabilities of this system include:
1. expansion and ordering of polynomials and rational functions,
2. substitutions and pattern matching in a wide variety of forms,
3. automatic and user controlled simplification of expressions,
4. calculations with symbolic matrices,
5. arbitrary precision integer and real arithmetic,
6. facilities for defining new functions and extending program syntax,
7. analytic differentiation and integration,
8. factorization of polynomials,
9. facilities for the solution of a variety of algebraic equations,
10. facilities for the output of expressions in a variety of formats,
11. facilities for generating numerical programs from symbolic input,
12. Dirac matrix calculations of interest to high energy physicists.




The production of this version of the manual has been the result of the contributions of a large number of individuals who have taken the time and effort to suggest
improvements to previous versions, and to draft new sections. Particular thanks
are due to Gerry Rayna, who provided a draft rewrite of most of the first half of
the manual. Other people who have made significant contributions have included
John Fitch, Martin Griss, Stan Kameny, Jed Marti, Herbert Melenk, Don Morrison, Arthur Norman, Eberhard Schrüfer, Larry Seward and Walter Tietze. Finally,
Richard Hitt produced a TeX version of the REDUCE 3.3 manual, which has been
a useful guide for the production of the LaTeX version of this manual.




Chapter 1

Introductory Information
REDUCE is a system for carrying out algebraic operations accurately, no matter
how complicated the expressions become. It can manipulate polynomials in a variety of forms, both expanding and factoring them, and extract various parts of
them as required. REDUCE can also do differentiation and integration, but we
shall only show trivial examples of this in this introduction. Other topics not considered include the use of arrays, the definition of procedures and operators, the
specific routines for high energy physics calculations, the use of files to eliminate
repetitious typing and for saving results, and the editing of the input text.
Also not considered in any detail in this introduction are the many options that
are available for varying computational procedures, output forms, number systems
used, and so on.
REDUCE is designed to be an interactive system, so that the user can input an algebraic expression and see its value before moving on to the next calculation. For
those systems that do not support interactive use, or for those calculations, especially long ones, for which a standard script can be defined, REDUCE can also be
used in batch mode. In this case, a sequence of commands can be given to REDUCE and results obtained without any user interaction during the computation.
In this introduction, we shall limit ourselves to the interactive use of REDUCE,
since this illustrates most completely the capabilities of the system. When REDUCE is called, it begins by printing a banner message like:
Reduce (Free CSL version), 25-Oct-14 ...
where the version number and the system release date will change from time to
time. It proceeds to execute the commands in the user’s startup (reducerc) file, if
such a file is present, then prompts the user for input by:

1:

You can now type a REDUCE statement, terminated by a semicolon to indicate the
end of the expression, for example:

(x+y+z)^2;

This expression would normally be followed by another character (a Return on
an ASCII keyboard) to “wake up” the system, which would then input the expression, evaluate it, and return the result:

X^2 + 2*X*Y + 2*X*Z + Y^2 + 2*Y*Z + Z^2

Let us review this simple example to learn a little more about the way that REDUCE works. First, we note that REDUCE deals with variables and constants
like other computer languages, but that in evaluating the former, a variable can
stand for itself. Expression evaluation normally follows the rules of high school
algebra, so the only surprise in the above example might be that the expression was
expanded. REDUCE normally expands expressions where possible, collecting like
terms and ordering the variables in a specific manner. However, expansion, ordering of variables, format of output and so on is under control of the user, and various
declarations are available to manipulate these.
Another characteristic of the above example is the use of lower case on input and
upper case on output. In fact, input may be in either mode, but output is usually in
lower case. To make the difference between input and output more distinct in this
manual, all expressions intended for input will be shown in lower case and output
in upper case. However, for stylistic reasons, we represent all single identifiers in
the text in upper case.
Finally, the numerical prompt can be used to reference the result in a later computation.
As a further illustration of the system features, the user should try:
for i:= 1:40 product i;
The result in this case is the value of 40!:
815915283247897734345611269596115894272000000000
You can also get the same result by saying
factorial 40;
Since we want exact results in algebraic calculations, it is essential that integer
arithmetic be performed to arbitrary precision, as in the above example. Further-

more, the FOR statement in the above is illustrative of a whole range of combining
forms that REDUCE supports for the convenience of the user.
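As a small illustrative sketch (this particular statement is not one of the manual’s own examples), a sum over a range is written analogously to the product above:

```
% Illustrative sketch: sum the squares of the integers 1 through 5.
for i := 1:5 sum i^2;
% The result is 55.
```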
Among the many options in REDUCE is the use of other number systems, such as
multiple precision floating point with any specified number of digits — of use if
roundoff in, say, the 100th digit is all that can be tolerated.
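For instance, the following short sketch (an assumed illustration; the switch and command are described later in this manual) turns on 50-digit floating point arithmetic:

```
% Assumed illustration: enable rounded (floating point) mode
% and request 50 significant digits.
on rounded;
precision 50;
```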
In many cases, it is necessary to use the results of one calculation in succeeding
calculations. One way to do this is via an assignment for a variable, such as
u := (x+y+z)^2;
If we now use U in later calculations, the value of the right-hand side of the above
will be used.
The results of a given calculation are also saved in the variable WS (for WorkSpace),
so this can be used in the next calculation for further processing.
For example, the expression
df(ws,x);
following the previous evaluation will calculate the derivative of (x+y+z)^2 with
respect to X. Alternatively,
int(ws,y);
would calculate the integral of the same expression with respect to y.
REDUCE is also capable of handling symbolic matrices. For example,
matrix m(2,2);
declares m to be a two by two matrix, and
m := mat((a,b),(c,d));
gives its elements values. Expressions that include M and make algebraic sense
may now be evaluated, such as 1/m to give the inverse, 2*m - u*m^2 to give us
another matrix and det(m) to give us the determinant of M.
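Putting these pieces together, a short hypothetical session might run:

```
% Hypothetical session combining the matrix operations above.
matrix m(2,2);
m := mat((a,b),(c,d));
det(m);    % the determinant: a*d - b*c
1/m;       % the inverse of m, with a*d - b*c in the denominators
```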
REDUCE has a wide range of substitution capabilities. The system knows about
elementary functions, but does not automatically invoke many of their well-known
properties. For example, products of trigonometrical functions are not converted
automatically into multiple angle expressions, but if the user wants this, he can say,
for example:


where cos(~x)*cos(~y) = (cos(x+y)+cos(x-y))/2,
cos(~x)*sin(~y) = (sin(x+y)-sin(x-y))/2,
sin(~x)*sin(~y) = (cos(x-y)-cos(x+y))/2;

where the tilde in front of the variables X and Y indicates that the rules apply for
all values of those variables. The result of this calculation is
-(COS(2*A) + SIN(2*B))
See also the user-contributed packages ASSIST (chapter 16.5), CAMAL (chapter 16.10) and TRIGSIMP (chapter 16.74).
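As a hedged sketch of the same mechanism (this example is not from the manual’s text), a single rule of the kind shown above can be attached to an expression with a where clause:

```
% Illustrative use of a where clause with one multiple-angle rule.
cos(a)*cos(b) where cos(~x)*cos(~y) = (cos(x+y) + cos(x-y))/2;
% The result is (cos(a+b) + cos(a-b))/2.
```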
Another very commonly used capability of the system, and an illustration of one of
the many output modes of REDUCE, is the ability to output results in a FORTRAN
compatible form. Such results can then be used in a FORTRAN based numerical
calculation. This is particularly useful as a way of generating algebraic formulas
to be used as the basis of extensive numerical calculations.
For example, after the statement
on fort;
subsequent results are printed in a FORTRAN-compatible form such as:
. LOG(X)*COS(X)-4.*LOG(X)*SIN(X)*X**2+4.*LOG(X)*
. SIN(X)*X+3.*LOG(X)*SIN(X)+8.*COS(X)*X-8.*COS(X)-8.
. *SIN(X)*X-8.*SIN(X))/(4.*SQRT(X)*X**2)

These algebraic manipulations illustrate the algebraic mode of REDUCE. REDUCE is based on Standard Lisp. A symbolic mode is also available for executing
Lisp statements. These statements follow the syntax of Lisp, e.g.
symbolic car '(a);
Communication between the two modes is possible.
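Another small sketch in the same vein (an assumed illustration, not from the manual’s text):

```
% Illustrative sketch: evaluate a Lisp form in symbolic mode.
symbolic cdr '(a b c);
% The result is the list (b c).
```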
With this simple introduction, you are now in a position to study the material in the
full REDUCE manual in order to learn just how extensive the range of facilities
really is. If further tutorial material is desired, the seven REDUCE Interactive
Lessons by David R. Stoutemyer are recommended. These are normally distributed
with the system.

Chapter 2

Structure of Programs
A REDUCE program consists of a set of functional commands which are evaluated
sequentially by the computer. These commands are built up from declarations,
statements and expressions. Such entities are composed of sequences of numbers,
variables, operators, strings, reserved words and delimiters (such as commas and
parentheses), which in turn are sequences of basic characters.


The REDUCE Standard Character Set

The basic characters which are used to build REDUCE symbols are the following:
1. The 26 letters a through z
2. The 10 decimal digits 0 through 9
3. The special characters
_ ! " $ % ' ( ) * + , - . / : ; < > = { } ⟨blank⟩

With the exception of strings and characters preceded by an exclamation mark, the
case of characters is ignored: depending on the underlying LISP they will all be
converted internally into lower case or upper case: ALPHA, Alpha and alpha
represent the same symbol. Most implementations allow you to switch this conversion off. The operating instructions for a particular implementation should be
consulted on this point. For portability, we shall limit ourselves to the standard
character set in this exposition.


Numbers

There are several different types of numbers available in REDUCE. Integers consist
of a signed or unsigned sequence of decimal digits written without a decimal point,
for example:
-2, 5396, +32
In principle, there is no practical limit on the number of digits permitted as exact
arithmetic is used in most implementations. (You should however check the specific instructions for your particular system implementation to make sure that this
is true.) For example, if you ask for the value of 2^2000 you get it displayed as a
number of 603 decimal digits, taking up several lines of output on an interactive
display. It should be borne in mind of course that computations with such long
numbers can be quite slow.
Numbers that aren’t integers are usually represented as the quotient of two integers,
in lowest terms: that is, as rational numbers.
In essentially all versions of REDUCE it is also possible (but not always desirable!)
to ask REDUCE to work with floating point approximations to numbers again, to
any precision. Such numbers are called real. They can be input in two ways:
1. as a signed or unsigned sequence of any number of decimal digits with an
embedded or trailing decimal point.
2. as in 1. followed by a decimal exponent which is written as the letter E
followed by a signed or unsigned integer.
e.g. 32.   +32.0   0.32E2 and 320.E-1 are all representations of 32.

The declaration SCIENTIFIC_NOTATION controls the output format of floating point numbers. At the default settings, any number with five or less digits before the decimal point is printed in a fixed-point notation, e.g., 12345.6.
Numbers with more than five digits are printed in scientific notation, e.g.,
1.234567E+5. Similarly, by default, any number with eleven or more zeros
after the decimal point is printed in scientific notation. To change these defaults,
SCIENTIFIC_NOTATION can be used in one of two ways. The first,
scientific_notation m;
where m is a positive integer, sets the printing format so that a number with more
than m digits before the decimal point, or m or more zeros after the decimal point,
is printed in scientific notation. The second,
scientific_notation {m,n};
with m and n both positive integers, sets the format so that a number with more



than m digits before the decimal point, or n or more zeros after the decimal point
is printed in scientific notation.
CAUTION: The unsigned part of any number may not begin with a decimal point,
as this causes confusion with the CONS (.) operator, i.e., NOT ALLOWED ARE:
.5 -.23 +.12; use 0.5 -0.23 +0.12 instead.
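To illustrate (a sketch, not from the original text):

```
% Entering one half: a leading digit is required.
0.5;      % allowed, evaluates to 0.5
% .5;    not allowed: the leading dot would be read as the CONS operator
```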

Identifiers

Identifiers in REDUCE consist of one or more alphanumeric characters (i.e. alphabetic letters or decimal digits) the first of which must be alphabetic. The maximum
number of characters allowed is implementation dependent, although twenty-four
is permitted in most implementations. In addition, the underscore character (_) is
considered a letter if it is within an identifier. For example,
a az p1 q23p


are all identifiers, whereas
_a
is not.
A sequence of alphanumeric characters in which the first is a digit is interpreted as
a product. For example, 2ab3c is interpreted as 2*ab3c. There is one exception
to this: If the first letter after a digit is E, the system will try to interpret that part of
the sequence as a real number, which may fail in some cases. For example, 2E12
is the real number 2.0 * 10^12, 2e3c is 2000.0*C, and 2ebc gives an error.
Special characters, such as -, *, and blank, may be used in identifiers too, even as
the first character, but each must be preceded by an exclamation mark in input. For
example:
good! morning

CAUTION: Many system identifiers have such special characters in their names
(especially * and =). If the user accidentally picks the name of one of them for his
own purposes it may have catastrophic consequences for his REDUCE run. Users
are therefore advised to avoid such names.
Identifiers are used as variables, labels and to name arrays, operators and procedures.



The reserved words listed in Appendix A may not be used as identifiers. No spaces
may appear within an identifier, and an identifier may not extend over a line of text.



Variables

Every variable is named by an identifier, and is given a specific type. The type is
of no concern to the ordinary user. Most variables are allowed to have the default
type, called scalar. These can receive, as values, the representation of any ordinary
algebraic expression. In the absence of such a value, they stand for themselves.

Reserved Variables
Several variables in REDUCE have particular properties which should not be
changed by the user. These variables include:

CATALAN Catalan’s constant, defined as ∑_{n=0}^{∞} (−1)^n/(2n + 1)^2.

E Intended to represent the base of the natural logarithms. log(e),
if it occurs in an expression, is automatically replaced by 1. If
ROUNDED is on, E is replaced by the value of E to the current degree
of floating point precision.

EULER_GAMMA Euler’s constant, also available as −ψ(1).

GOLDEN_RATIO The number (1 + √5)/2.

I Intended to represent the square
root of −1. i^2 is replaced by −1, and appropriately for higher
powers of I. This applies only to the symbol I used on the top level,
not as a formal parameter in a procedure, a local variable, nor in the
context for i:= ....


INFINITY Intended to represent ∞
in limit and power series calculations for example, as well as in definite integration. Note however that the current system does not do
proper arithmetic on ∞. For example, infinity + infinity
is 2*infinity.



KHINCHIN Khinchin’s constant, defined as ∏_{n=1}^{∞} (1 + 1/(n(n + 2)))^{log_2 n}.

NEGATIVE Used in the Roots package.

NIL In REDUCE (algebraic mode only) taken as a synonym for zero.
Therefore NIL cannot be used as a variable.


PI Intended to represent the circular constant π. With ROUNDED on, it
is replaced by the value of π to the current degree of floating point
precision.
POSITIVE Used in the Roots package.

T Must not be used as a formal parameter or local variable in procedures, since conflict arises with the symbolic mode meaning of T as
true.
Other reserved variables, such as LOW_POW, described in other sections, are listed
in Appendix A.
Using these reserved variables inappropriately will lead to errors.
There are also internal variables used by REDUCE that have similar restrictions.
These usually have an asterisk in their names, so it is unlikely a casual user would
use one. An example of such a variable is K!* used in the asymptotic command package.
Certain words are reserved in REDUCE. They may only be used in the manner
intended. A list of these is given in the section “Reserved Identifiers”. There are,
of course, an impossibly large number of such names to keep in mind. The reader
may therefore want to make himself a copy of the list, deleting the names he doesn’t
think he is likely to use by mistake.



Strings

Strings are used in WRITE statements, in other output statements (such as error
messages), and to name files. A string consists of any number of characters enclosed in double quotes. For example:
"A String".



Lower case characters within a string are not converted to upper case.
The string "" represents the empty string. A double quote may be included in a
string by preceding it by another double quote. Thus "a""b" is the string a"b,
and """" is the string consisting of the single character ".



Comments

Text can be included in program listings for the convenience of human readers, in
such a way that REDUCE pays no attention to it. There are two ways to do this:
1. Everything from the word COMMENT to the next statement terminator, normally ; or $, is ignored. Such comments can be placed anywhere a blank
could properly appear. (Note that END and >> are not treated as COMMENT
delimiters!)
2. Everything from the symbol % to the end of the line on which it appears is
ignored. Such comments can be placed as the last part of any line. Statement
terminators have no special meaning in such comments. Remember to put
a semicolon before the % if the earlier part of the line is intended to be so
terminated. Remember also to begin each line of a multi-line % comment
with a % sign.
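Both comment styles together, in a short sketch:

```reduce
comment This text, up to the next statement
        terminator, is ignored;
x := a + b;   % everything after the percent sign is ignored
% each line of a multi-line remark needs
% its own percent sign
```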



Operators

Operators in REDUCE are specified by name and type. There are two types, infix and prefix. Operators can be purely abstract, just symbols with no properties;
they can have values assigned (using := or simple LET declarations) for specific
arguments; they can have properties declared for some collection of arguments
(using more general LET declarations); or they can be fully defined (usually by a
procedure declaration).
Infix operators have a definite precedence with respect to one another, and normally
occur between their arguments. For example:
a + b - c
⟨infix operator⟩ −→ where | := | or | and | member | memq |
= | neq | eq | >= | > | <= | < |
+ | - | * | / | ^ | ** | .



These operators may be further divided into the following subclasses:
⟨assignment operator⟩ −→ :=
⟨logical operator⟩ −→ or | and | member | memq
⟨relational operator⟩ −→ = | neq | eq | >= | > | <= | <
⟨substitution operator⟩ −→ where
⟨arithmetic operator⟩ −→ + | - | * | / | ^ | **
⟨construction operator⟩ −→ .

MEMQ and EQ are not used in the algebraic mode of REDUCE. They are explained
in the section on symbolic mode. WHERE is described in the section on substitutions.
In previous versions of REDUCE, not was also defined as an infix operator. In the
present version it is a regular prefix operator, and interchangeable with null.
For compatibility with the intermediate language used by REDUCE, each special
character infix operator has an alternative alphanumeric identifier associated with
it. These identifiers may be used interchangeably with the corresponding special
character names on input. This correspondence is as follows:
:=       setq        (the assignment operator)
=        equal
>=       geq
>        greaterp
<=       leq
<        lessp
+        plus
-        difference  (if unary, minus)
*        times
/        quotient    (if unary, recip)
^ or **  expt        (raising to a power)
Note: NEQ is used to mean not equal. There is no special symbol provided for it.
The above operators are binary, except NOT which is unary and + and * which
are nary (i.e., taking an arbitrary number of arguments). In addition, - and / may
be used as unary operators, e.g., /2 means the same as 1/2. Any other operator is
parsed as a binary operator using a left association rule. Thus a/b/c is interpreted
as (a/b)/c. There are two exceptions to this rule: := and . are right associative. Example: a:=b:=c is interpreted as a:=(b:=c). Unlike ALGOL and
PASCAL, ^ is left associative. In other words, a^b^c is interpreted as (a^b)^c.
The operators <, <=, >, >= can only be used for making comparisons between
numbers. No meaning is currently assigned to this kind of comparison between
general expressions.



Parentheses may be used to specify the order of combination. If parentheses are
omitted then this order is by the ordering of the precedence list defined by the
right-hand side of the ⟨infix operator⟩ table at the beginning of this section, from
lowest to highest. In other words, WHERE has the lowest precedence, and . (the
dot operator) the highest.
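For example, combining the precedence and associativity rules above:

```reduce
a + b*c;          % * binds tighter than +: a + (b*c)
a/b/c;            % left associative: (a/b)/c
a^b^c;            % left associative: (a^b)^c
x := y := a + b;  % := is right associative: x := (y := a + b)
```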



Chapter 3

Expressions

REDUCE expressions may be of several types and consist of sequences of numbers, variables, operators, left and right parentheses and commas. The most common types are as follows:


Scalar Expressions

Using the arithmetic operations + - * / ^
(power) and parentheses, scalar
expressions are composed from numbers, ordinary “scalar” variables (identifiers),
array names with subscripts, operator or procedure names with arguments and
statement expressions.
x^3 - 2*y/(2*z^2 - df(x,z))
(p^2 + m^2)^(1/2)*log (y/m)
a(5) + b(i,q)
The symbol ** may be used as an alternative to the caret symbol (^) for forming
powers, particularly in those systems that do not support a caret symbol.
Statement expressions, usually in parentheses, can also form part of a scalar expression, as in the example
w + (c:=x+y) + z .
When the algebraic value of an expression is needed, REDUCE determines it, starting with the algebraic values of the parts, roughly as follows:
Variables and operator symbols with an argument list have the algebraic values



they were last assigned, or if never assigned stand for themselves. However, array
elements have the algebraic values they were last assigned, or, if never assigned,
are taken to be 0.
Procedures are evaluated with the values of their actual parameters.
In evaluating expressions, the standard rules of algebra are applied. Unfortunately,
this algebraic evaluation of an expression is not as unambiguous as is numerical
evaluation. This process is generally referred to as “simplification” in the sense that
the evaluation usually but not always produces a simplified form for the expression.
There are many options available to the user for carrying out such simplification.
If the user doesn’t specify any method, the default method is used. The default
evaluation of an expression involves expansion of the expression and collection
of like terms, ordering of the terms, evaluation of derivatives and other functions
and substitution for any expressions which have values assigned or declared (see
assignments and LET statements). In many cases, this is all that the user needs.
The declarations by which the user can exercise some control over the way in which
the evaluation is performed are explained in other sections. For example, if a real
(floating point) number is encountered during evaluation, the system will normally
convert it into a ratio of two integers. If the user wants to use real arithmetic,
he can effect this by the command on rounded;. Other modes for coefficient
arithmetic are described elsewhere.
If an illegal action occurs during evaluation (such as division by zero) or functions
are called with the wrong number of arguments, and so on, an appropriate error
message is generated.


Integer Expressions

These are expressions which, because of the values of the constants and variables
in them, evaluate to whole numbers.

37 * 999,

(x + 3)^2 - x^2 - 6*x

are obviously integer expressions.
j + k - 2 * j^2
is an integer expression when J and K have values that are integers, or if not integers
are such that “the variables and fractions cancel out”, as in
k - 7/3 - j + 2/3 + 2*j^2.




Boolean Expressions

A boolean expression returns a truth value. In the algebraic mode of REDUCE,
boolean expressions have the syntactical form:
⟨expression⟩ ⟨relational operator⟩ ⟨expression⟩
⟨boolean operator⟩(⟨arguments⟩)
⟨boolean expression⟩ ⟨logical operator⟩ ⟨boolean expression⟩.
Parentheses can also be used to control the precedence of expressions.
In addition to the logical and relational operators defined earlier as infix operators,
the following boolean operators are also defined:

EVENP(U)    determines if the number U is even or not;

FIXP(U)     determines if the expression U is integer or not;

FREEOF(U,V) determines if the expression U does not contain the kernel
            V anywhere in its structure;

NUMBERP(U)  determines if U is a number or not;

ORDP(U,V)   determines if U is ordered ahead of V by some canonical
            ordering (based on the expression structure and an internal
            ordering of identifiers);

PRIMEP(U)   true if U is a prime object, i.e., any object other than 0 and
            plus or minus 1 which is only exactly divisible by itself or
            a unit.

Examples of boolean expressions are:
x>0 or x=-2
numberp x
fixp x and evenp x
numberp x and x neq 0



Boolean expressions can only appear directly within IF, FOR, WHILE, and UNTIL
statements, as described in other sections. Such expressions cannot be used in place
of ordinary algebraic expressions, or assigned to a variable.
NB: For those familiar with symbolic mode, the meaning of some of these operators is different in that mode. For example, NUMBERP is true only for integers and
reals in symbolic mode.
When two or more boolean expressions are combined with AND, they are evaluated
one by one until a false expression is found. The rest are not evaluated. Thus
numberp x and numberp y and x>y
does not attempt to make the x>y comparison unless X and Y are both verified to
be numbers.
Similarly, evaluation of a sequence of boolean expressions connected by OR stops
as soon as a true expression is found.
NB: In a boolean expression, and in a place where a boolean expression is expected,
the algebraic value 0 is interpreted as false, while all other algebraic values are
converted to true. So in algebraic mode a procedure can be written for direct usage
in boolean expressions, returning say 1 or 0 as its value as in
procedure polynomialp(u,x);
if den(u)=1 and deg(u,x)>=1 then 1 else 0;
One can then use this in a boolean construct, such as
if polynomialp(q,z) and not polynomialp(q,y) then ...
In addition, any procedure that does not have a defined return value (for example,
a block without a RETURN statement in it) has the boolean value false.



Equations

Equations are a particular type of expression with the syntax
⟨expression⟩ = ⟨expression⟩
In addition to their role as boolean expressions, they can also be used as arguments
to several operators (e.g., SOLVE), and can be returned as values.
Under normal circumstances, the right-hand-side of the equation is evaluated but
not the left-hand-side. This also applies to any substitutions made by the SUB



operator. If both sides are to be evaluated, the switch EVALLHSEQP should be
turned on.
To facilitate the handling of equations, two selectors, LHS and RHS, which return the left- and right-hand sides of an equation respectively, are provided. For
example,
lhs(a+b=c) -> a+b
rhs(a+b=c) -> c.


Proper Statements as Expressions

Several kinds of proper statements deliver an algebraic or numerical result of some
kind, which can in turn be used as an expression or part of an expression. For
example, an assignment statement itself has a value, namely the value assigned. So
2 * (x := a+b)
is equal to 2*(a+b), as well as having the “side-effect” of assigning the value
a+b to X. In context,
y := 2 * (x := a+b);
sets X to a+b and Y to 2*(a+b).
The sections on the various proper statement types indicate which of these statements are also useful as expressions.



Chapter 4

Lists

A list is an object consisting of a sequence of other objects (including lists themselves), separated by commas and surrounded by braces. Examples of lists are:
{a,x*y}
{x,{z^2,w}}
The empty list is represented as
{}


Operations on Lists

Several operators in the system return their results as lists, and a user can create
new lists using braces and commas. Alternatively, one can use the operator LIST
to construct a list. An important class of operations on lists are MAP and SELECT
operations. For details, please refer to the chapters on MAP, SELECT and the FOR
command. See also the documentation on the ASSIST (chapter 16.5) package.
To facilitate the use of lists, a number of operators are also available for manipulating them. PART(⟨list⟩,n) for example will return the nth element of a
list. LENGTH will return the length of a list. Several operators are also defined
uniquely for lists. For those familiar with them, these operators in fact mirror the
operations defined for Lisp lists. These operators are as follows:






LIST

The operator LIST is an alternative to the usage of curly brackets. LIST accepts an
arbitrary number of arguments and returns a list of its arguments. This operator is
useful in cases where operators have to be passed as arguments.



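For instance (calling list with no arguments is assumed here to give the empty list):

```reduce
list(a,b,c);   % same as {a,b,c}
list();        % the empty list {}
```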


FIRST

This operator returns the first member of a list. An error occurs if the argument is
not a list, or the list is empty.



SECOND

SECOND returns the second member of a list. An error occurs if the argument is
not a list or has no second element.



THIRD

This operator returns the third member of a list. An error occurs if the argument is
not a list or has no third element.



REST

REST returns its argument with the first element removed. An error occurs if the
argument is not a list, or is empty.
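The selectors above, applied to a three-element list:

```reduce
u := {a,b,c};
first u;    % a
second u;   % b
third u;    % c
rest u;     % {b,c}
```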


. (Cons) Operator

This operator adds (“conses”) an expression to the front of a list. For example:
a . {b,c}  ->  {a,b,c}





APPEND

This operator appends its first argument to its second to form a new list. Examples:


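For instance:

```reduce
append({a,b},{c,d});    % {a,b,c,d}
append({{a,b}},{c,d});  % {{a,b},c,d}
```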





REVERSE

The operator REVERSE returns its argument with the elements in the reverse order. It only applies to the top level list, not any lower level lists that may occur.
Examples are:



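For instance:

```reduce
reverse {a,b,c};      % {c,b,a}
reverse {a,{b,c},d};  % {d,{b,c},a} : the inner list is untouched
```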

List Arguments of Other Operators

If an operator other than those specifically defined for lists is given a single argument that is a list, then the result of this operation will be a list in which that
operator is applied to each element of the list. For example, the result of evaluating
log{a,b,c} is the expression {LOG(A),LOG(B),LOG(C)}.
There are two ways to inhibit this operator distribution. Firstly, the switch
LISTARGS, if on, will globally inhibit such distribution. Secondly, one can inhibit this distribution for a specific operator by the declaration LISTARGP. For
example, with the declaration listargp log, log{a,b,c} would evaluate to
LOG({A,B,C}).
If an operator has more than one argument, no such distribution occurs.


Caveats and Examples

Some of the natural list operations such as member or delete are available only
after loading the package ASSIST (chapter 16.5).
Please note that a non-list as second argument to CONS (a "dotted pair" in LISP
terms) is not allowed and causes an "invalid as list" error.
a := 17 . 4;
***** 17 4 invalid as list
Also, the initialization of a scalar variable is not the empty list – one has to set list
type variables explicitly, as in the following example:
load_package assist;
procedure lotto (n,m);
begin scalar list_1_n, luckies, hit;
list_1_n := {};


luckies := {};
for k:=1:n do list_1_n := k . list_1_n;
for k:=1:m do
<< hit := part(list_1_n,random(n-k+1) + 1);
list_1_n := delete(hit,list_1_n);
luckies := hit . luckies >>;
return luckies;
end;
% In Germany, try lotto (49,6);

Another example: Find all coefficients of a multivariate polynomial with respect to
a list of variables:
procedure allcoeffs(q,lis);
% q : polynomial, lis: list of vars
allcoeffs1 (list q,lis);
procedure allcoeffs1(q,lis);
if lis={} then q else
allcoeffs1(foreach qq in q join coeff(qq,first lis),
rest lis);

Chapter 5

Statements

A statement is any combination of reserved words and expressions, and has the
syntax
⟨statement⟩ −→ ⟨expression⟩ | ⟨proper statement⟩
A REDUCE program consists of a series of commands which are statements followed by a terminator:
⟨terminator⟩ −→ ; | $
The division of the program into lines is arbitrary. Several statements can be on
one line, or one statement can be freely broken onto several lines. If the program
is run interactively, statements ending with ; or $ are not processed until an end-of-line character is encountered. This character can vary from system to system, but
is normally the Return key on an ASCII terminal. Specific systems may also use
additional keys as statement terminators.
If a statement is a proper statement, the appropriate action takes place.
Depending on the nature of the proper statement some result or response may or
may not be printed out, and the response may or may not depend on the terminator used.
If a statement is an expression, it is evaluated. If the terminator is a semicolon, the
result is printed. If the terminator is a dollar sign, the result is not printed. Because
it is not usually possible to know in advance how large an expression will be, no
explicit format statements are offered to the user. However, a variety of output
declarations are available so that the output can be produced in different forms.
These output declarations are explained in Section 8.3.3.
The following sub-sections describe the types of proper statements in REDUCE.




Assignment Statements

These statements have the syntax
⟨assignment statement⟩ −→ ⟨expression⟩ := ⟨expression⟩
The ⟨expression⟩ on the left side is normally the name of a variable, an operator
symbol with its list of arguments filled in, or an array name with the proper number
of integer subscript values within the array bounds. For example:
a1 := b + c
h(l,m) := x-2*y
k(3,5) := x-2*y

(where h is an operator)
(where k is a 2-dim. array)

More general assignments such as a+b := c are also allowed. The effect of these
is explained in Section 11.2.5.
An assignment statement causes the expression on the right-hand-side to be evaluated. If the left-hand-side is a variable, the value of the right-hand-side is assigned
to that unevaluated variable. If the left-hand-side is an operator or array expression,
the arguments of that operator or array are evaluated, but no other simplification
done. The evaluated right-hand-side is then assigned to the resulting expression.
For example, if a is a single-dimensional array, a(1+1) := b assigns the value
b to the array element a(2).
If a semicolon is used as the terminator when an assignment is issued as a command
(i.e. not as a part of a group statement or procedure or other similar construct), the
left-hand side symbol of the assignment statement is printed out, followed by a
“:=”, followed by the value of the expression on the right.
It is also possible to write a multiple assignment statement:
⟨expression⟩ := . . . := ⟨expression⟩ := ⟨expression⟩
In this form, each ⟨expression⟩ but the last is set to the value of the last ⟨expression⟩.
If a semicolon is used as a terminator, each expression except the last is printed
followed by a “:=” ending with the value of the last expression.
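For example:

```reduce
x := y := a + b;   % both x and y are set to a + b;
                   % with a semicolon terminator this prints
                   % x := y := a + b
```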


Set and Unset Statements

In some cases, it is desirable to perform an assignment in which both the left- and
right-hand sides of an assignment are evaluated. In this case, the SET statement
can be used with the syntax:
SET(⟨expression⟩,⟨expression⟩)
For example, the statements
j := 23;
set(mkid(a,j),x);
assign the value X to A23.
To remove a value from such a variable, the UNSET statement can be used with the
syntax:
UNSET(⟨expression⟩)
For example, the statements
j := 23;
unset(mkid(a,j));
clear the value of A23.


Group Statements

The group statement is a construct used where REDUCE expects a single statement, but a series of actions needs to be performed. It is formed by enclosing one
or more statements (of any kind) between the symbols << and >>, separated by
semicolons or dollar signs – it doesn’t matter which. The statements are executed
one after another.
Examples will be given in the sections on IF and other types of statements in which
the << . . . >> construct is useful.
If the last statement in the enclosed group has a value, then that is also the value
of the group statement. Care must be taken not to have a semicolon or dollar sign
after the last grouped statement, if the value of the group is relevant: such an extra
terminator causes the group to have the value NIL or zero.
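For example, the value of a group is the value of its last enclosed statement:

```reduce
y := <<a := 2; a + 3>>;   % y is set to 5
y := <<a := 2; a + 3;>>;  % extra terminator: the group's value is
                          % lost, and y is set to 0
```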


Conditional Statements

The conditional statement has the following syntax:
⟨conditional statement⟩ −→ IF ⟨boolean expression⟩ THEN ⟨statement⟩
[ELSE ⟨statement⟩]
The boolean expression is evaluated. If this is true, the first ⟨statement⟩ is executed.
If it is false, the second is.



if x=5 then a:=b+c else d:=e+f
if x=5 and numberp y
then <<ff := q1; a := b+c>>
else <<ff := q2; d := e+f>>
Note the use of the group statement.
Conditional statements associate to the right; i.e.,
IF ⟨a⟩ THEN ⟨b⟩ ELSE IF ⟨c⟩ THEN ⟨d⟩ ELSE ⟨e⟩
is equivalent to:
IF ⟨a⟩ THEN ⟨b⟩ ELSE (IF ⟨c⟩ THEN ⟨d⟩ ELSE ⟨e⟩)
In addition, the construction
IF ⟨a⟩ THEN IF ⟨b⟩ THEN ⟨c⟩ ELSE ⟨d⟩
parses as
IF ⟨a⟩ THEN (IF ⟨b⟩ THEN ⟨c⟩ ELSE ⟨d⟩)
If the value of the conditional statement is of primary interest, it is often called a
conditional expression instead. Its value is the value of whichever statement was
executed. (If the executed statement has no value, the conditional expression has
no value or the value 0, depending on how it is used.)
a:=if x<5 then 123 else 456;
b := u + v^(if numberp z then 10*z else 1) + w;

If the value is of no concern, the ELSE clause may be omitted if no action is
required in the false case.
if x=5 then a:=b+c;
Note: As explained in Section 3.3, if a scalar or numerical expression is used in
place of the boolean expression – for example, a variable is written there – the true
alternative is followed unless the expression has the value 0.




FOR Statements

The FOR statement is used to define a variety of program loops. Its general syntax
is as follows:

FOR { ⟨var⟩ := ⟨number⟩ { STEP ⟨number⟩ UNTIL | : } ⟨number⟩
    | EACH ⟨var⟩ { IN | ON } ⟨list⟩ }
⟨action⟩ ⟨exprn⟩
where
⟨action⟩ −→ do | product | sum | collect | join.
The assignment form of the FOR statement defines an iteration over the indicated
numerical range. If expressions that do not evaluate to numbers are used in the
designated places, an error will result.
The FOR EACH form of the FOR statement is designed to iterate down a list.
Again, an error will occur if a list is not used.
The action DO means that ⟨exprn⟩ is simply evaluated and no value kept; the statement returning 0 in this case (or no value at the top level). COLLECT means that
the results of evaluating ⟨exprn⟩ each time are linked together to make a list, and
JOIN means that the values of ⟨exprn⟩ are themselves lists that are joined to make
one list (similar to CONC in Lisp). Finally, PRODUCT and SUM form the respective
combined value out of the values of ⟨exprn⟩.
In all cases, ⟨exprn⟩ is evaluated algebraically within the scope of the current value
of ⟨var⟩. If ⟨action⟩ is DO, then nothing else happens. In other cases, ⟨action⟩ is
a binary operator that causes a result to be built up and returned by FOR. In those
cases, the loop is initialized to a default value (0 for SUM, 1 for PRODUCT, and an
empty list for the other actions). The test for the end condition is made before any
action is taken. As in Pascal, if the variable is out of range in the assignment case,
or the ⟨list⟩ is empty in the FOR EACH case, ⟨exprn⟩ is not evaluated at all.
1. If A, B have been declared to be arrays, the following stores 5^2 through 10^2
in A(5) through A(10), and at the same time stores the cubes in the B array:
for i := 5 step 1 until 10 do <<a(i):=i^2; b(i):=i^3>>
2. As a convenience, the common construction
step 1 until


may be abbreviated to a colon. Thus, instead of the above we could write:
for i := 5:10 do <<a(i):=i^2; b(i):=i^3>>
3. The following sets C to the sum of the squares of 1,3,5,7,9; and D to the
expression x*(x+1)*(x+2)*(x+3)*(x+4):
c := for j:=1 step 2 until 9 sum j^2;
d := for k:=0 step 1 until 4 product (x+k);
4. The following forms a list of the squares of the elements of the list
{a,b,c} (i.e., {A^2,B^2,C^2}):
for each x in {a,b,c} collect x^2;
5. The following forms a list of the listed squares of the elements of the list
{a,b,c} (i.e., {{A^2},{B^2},{C^2}}):
for each x in {a,b,c} collect {x^2};
6. The following also forms a list of the squares of the elements of the list
{a,b,c}, since the JOIN operation joins the individual lists into one list:
for each x in {a,b,c} join {x^2};

The control variable used in the FOR statement is actually a new variable, not
related to the variable of the same name outside the FOR statement. In other words,
executing a statement for i:= . . . doesn’t change the system’s assumption that
i^2 = −1. Furthermore, in algebraic mode, the value of the control variable is
substituted in hexprni only if it occurs explicitly in that expression. It will not
replace a variable of the same name in the value of that expression. For example:
b := a; for a := 1:2 do write b;
prints A twice, not 1 followed by 2.


WHILE . . . DO

The FOR . . . DO feature allows easy coding of a repeated operation in which the
number of repetitions is known in advance. If the criterion for repetition is more
complicated, WHILE . . . DO can often be used. Its syntax is:
WHILE ⟨boolean expression⟩ DO ⟨statement⟩



The WHILE . . . DO controls the single statement following DO. If several statements are to be repeated, as is almost always the case, they must be grouped using
the << . . . >> or BEGIN . . . END as in the example below.
The WHILE condition is tested each time before the action following the DO is
attempted. If the condition is false to begin with, the action is not performed at all.
Make sure that what is to be tested has an appropriate value initially.
Suppose we want to add up a series of terms, generated one by one, until we reach
a term which is less than 1/1000 in value. For our simple example, let us suppose
the first term equals 1 and each term is obtained from the one before by taking one
third of it and adding one third its square. We would write:
ex:=0; term:=1;
while num(term - 1/1000) >= 0 do
<<ex := ex+term; term := (term + term^2)/3>>;
As long as TERM is greater than or equal to (>=) 1/1000 it will be added to EX and
the next TERM calculated. As soon as TERM becomes less than 1/1000 the WHILE
test fails and the TERM will not be added.



REPEAT . . . UNTIL

REPEAT . . . UNTIL is very similar in purpose to WHILE . . . DO. Its syntax is:
REPEAT ⟨statement⟩ UNTIL ⟨boolean expression⟩
(PASCAL users note: Only a single statement – usually a group statement – is
allowed between the REPEAT and the UNTIL.)
There are two essential differences:
1. The test is performed after the controlled statement (or group of statements)
is executed, so the controlled statement is always executed at least once.
2. The test is a test for when to stop rather than when to continue, so its “polarity” is the opposite of that in WHILE . . . DO.



As an example, we rewrite the example from the WHILE ...DO section:
ex:=0; term:=1;
repeat <<ex := ex+term; term := (term + term^2)/3>>
until num(term - 1/1000) < 0;
In this case, the answer will be the same as before, because in neither case is a term
added to EX which is less than 1/1000.


Compound Statements

Often the desired process can best (or only) be described as a series of steps to be
carried out one after the other. In many cases, this can be achieved by use of the
group statement. However, each step often provides some intermediate result, until
at the end we have the final result wanted. Alternatively, iterations on the steps are
needed that are not possible with constructs such as WHILE or REPEAT statements.
In such cases the steps of the process must be enclosed between the words BEGIN
and END forming what is technically called a block or compound statement. Such a
compound statement can in fact be used wherever a group statement appears. The
converse is not true: BEGIN ...END can be used in ways that << . . . >> cannot.
If intermediate results must be formed, local variables must be provided in which
to store them. Local means that their values are deleted as soon as the block’s
operations are complete, and there is no conflict with variables outside the block
that happen to have the same name. Local variables are created by a SCALAR
declaration immediately after the BEGIN:
scalar a,b,c,z;
If more convenient, several SCALAR declarations can be given one after another:
scalar a,b,c;
scalar z;
In place of SCALAR one can also use the declarations INTEGER or REAL. In the
present version of REDUCE variables declared INTEGER are expected to have
only integer values, and are initialized to 0. REAL variables on the other hand are
currently treated as algebraic mode SCALARs.
CAUTION: INTEGER, REAL and SCALAR declarations can only be given immediately after a BEGIN. An error will result if they are used after other statements
in a block (including ARRAY and OPERATOR declarations, which are global in
scope), or outside the top-most block (e.g., at the top level). All variables declared



SCALAR are automatically initialized to zero in algebraic mode (NIL in symbolic mode).
Any symbols not declared as local variables in a block refer to the variables of
the same name in the current calling environment. In particular, if they are not so
declared at a higher level (e.g., in a surrounding block or as parameters in a calling
procedure), their values can be permanently changed.
Following the SCALAR declaration(s), if any, write the statements to be executed, one after the other, separated by delimiters (; or $; it doesn’t matter which, although ; is stylistically preferred).
The last statement in the body, just before END, need not have a terminator (since the BEGIN . . . END are in a sense brackets confining the block statements). The last statement will normally be the command RETURN followed by the variable or expression whose value is to be the value returned by the procedure. If the RETURN is omitted (or nothing is written after the word RETURN) the procedure will have no value or the value zero, depending on how it is used (and NIL in symbolic mode). Remember to put a terminator after the END.
Given a previously assigned integer value for N, the following block will compute
the Legendre polynomial of degree N in the variable X:
begin scalar seed,deriv,top,fact;
   seed:=1/(y^2 - 2*x*y + 1)^(1/2);
   deriv:=df(seed,y,n);
   top:=sub(y=0,deriv);
   fact:=for i:=1:n product i;
   return top/fact
end;


Compound Statements with GO TO

It is possible to have more complicated structures inside the BEGIN . . . END brackets than indicated in the previous example. The individual lines of the program need not be assignment statements; they can be almost any other kind of statement or command. For example, conditional statements, and WHILE and REPEAT constructions, have an obvious role in defining more intricate blocks.
If these structured constructs don’t suffice, it is possible to use labels and GO TOs
within a compound statement, and also to use RETURN in places within the block
other than just before the END. The following subsections discuss these matters in
detail. For many readers the following example, presenting one possible definition



of a process to calculate the factorial of N for preassigned N will suffice:
begin scalar m;
   m:=1;
l: if n=0 then return m;
   m:=m*n;
   n:=n-1;
   go to l
end;


Labels and GO TO Statements

Within a BEGIN ...END compound statement it is possible to label statements,
and transfer to them out of sequence using GO TO statements. Only statements on
the top level inside compound statements can be labeled, not ones inside subsidiary
constructions like << . . . >>, IF . . . THEN . . . , WHILE . . . DO . . . , etc.
Labels and GO TO statements have the syntax:
⟨go to statement⟩ −→ GO TO ⟨label⟩ | GOTO ⟨label⟩
⟨label⟩ −→ ⟨identifier⟩
⟨labeled statement⟩ −→ ⟨label⟩:⟨statement⟩
Note that statement names cannot be used as labels.
While GO TO is an unconditional transfer, it is frequently used in conditional statements such as
if x>5 then go to abcd;
giving the effect of a conditional transfer.
Transfers using GO TOs can only occur within the block in which the GO TO is
used. In other words, you cannot transfer from an inner block to an outer block using a GO TO. However, if a group statement occurs within a compound statement,
it is possible to jump out of that group statement to a point within the compound
statement using a GO TO.
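As an illustrative sketch (not taken from an existing REDUCE program), the following block uses a label as a loop target and jumps out of a group statement to a label in the enclosing compound statement:

```reduce
begin scalar i;
   i := 0;
loop: i := i + 1;
   << write i;
      if i >= 3 then go to done >>;  % jumps out of the group statement
   go to loop;
done: return i
end;
```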


RETURN Statements

The value corresponding to a BEGIN . . . END compound statement, such as a
procedure body, is normally 0 (NIL in symbolic mode). By executing a RETURN
statement in the compound statement a different value can be returned. After a



RETURN statement is executed, no further statements within the compound statement are executed. Examples:
return x+y;
return m;
return;
Note that parentheses are not required around the x+y, although they are permitted. The last example is equivalent to return 0 or return nil, depending on whether the block is used as part of an expression or not.
Since RETURN actually moves up only one block level, a subtlety the casual user is not expected to understand, we tabulate some cautions concerning its use.
1. RETURN can be used on the top level inside the compound statement, i.e. as one of the statements bracketed together by the BEGIN . . . END.
2. RETURN can be used within a top level << . . . >> construction within the
compound statement. In this case, the RETURN transfers control out of both
the group statement and the compound statement.
3. RETURN can be used within an IF . . . THEN . . . ELSE . . . on the top level
within the compound statement.
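For example, a RETURN inside a top level IF terminates the whole block (a minimal sketch; the variable x is purely illustrative):

```reduce
begin scalar x;
   x := 4;
   if x > 3 then return x^2;  % control leaves the block here
   return 0
end;                           % the block's value is 16
```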
NOTE: At present, there is no construct provided to permit early termination of
a FOR, WHILE, or REPEAT statement. In particular, the use of RETURN in such
cases results in a syntax error. For example,
begin scalar y;
y := for i:=0:99 do if a(i)=x then return b(i);
will lead to an error.
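One hedged rewriting of the erroneous loop above, assuming (as in that example) that a and b are arrays or operators and x is the value sought, replaces the early RETURN by a WHILE condition with a flag:

```reduce
begin scalar i, found;
   i := 0; found := 0;
   while i <= 99 and found = 0 do
      << if a(i) = x then found := b(i);
         i := i + 1 >>;
   return found
end;
```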



Chapter 6

Commands and Declarations
A command is an order to the system to do something. Some commands cause
visible results (such as calling for input or output); others, usually called declarations, set options, define properties of variables, or define procedures. Commands
are formally defined as a statement followed by a terminator:
⟨command⟩ −→ ⟨statement⟩⟨terminator⟩
⟨terminator⟩ −→ ; | $
Some REDUCE commands and declarations are described in the following subsections.


Array Declarations

Array declarations in REDUCE are similar to FORTRAN dimension statements.
For example:
array a(10),b(2,3,4);
Array indices each range from 0 to the value declared. An element of an array is
referred to in standard FORTRAN notation, e.g. A(2).
We can also use an expression for defining an array bound, provided the value of
the expression is a positive integer. For example, if X has the value 10 and Y the
value 7 then array c(5*x+y) is the same as array c(57).
If an array is referenced by an index outside its range, an error occurs. If the array
is to be one-dimensional, and the bound a number or a variable (not a more general
expression) the parentheses may be omitted:
array a 10, c 57;



The operator LENGTH applied to an array name returns a list of its dimensions.
All array elements are initialized to 0 at declaration time. In other words, an array
element has an instant evaluation property and cannot stand for itself. If this is
required, then an operator should be used instead.
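A short illustrative session (the form of the LENGTH result follows from the 0-based index ranges described above):

```reduce
array a(10), b(2,3,4);
a(2) := x^2 + 1;   % standard FORTRAN-style element reference
a(3);              % -> 0, elements are initialized to zero
length a;          % the list of dimensions; with indices 0..10
                   % this list contains 11
```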
Array declarations can appear anywhere in a program. Once a symbol is declared
to name an array, it can not also be used as a variable, or to name an operator or
a procedure. It can however be re-declared to be an array, and its size may be
changed at that time. An array name can also continue to be used as a parameter in
a procedure, or a local variable in a compound statement, although this use is not
recommended, since it can lead to user confusion over the type of the variable.
Arrays once declared are global in scope, and so can then be referenced anywhere in the program. In other words, unlike arrays in most other languages, a declaration within a block (or a procedure) does not limit the scope of the array to that block, nor does the array go away on exiting the block (use CLEAR instead for this purpose).

Mode Handling Declarations

The ON and OFF declarations are available to the user for controlling various system options. Each option is represented by a switch name. ON and OFF take a list
of switch names as argument and turn them on and off respectively, e.g.,
on time;
causes the system to print a message after each command giving the elapsed CPU
time since the last command, or since TIME was last turned off, or the session began. Another useful switch for interactive use is DEMO, which causes the system
to pause after each command in a file (with the exception of comments) until a
Return is typed on the terminal. This enables a user to set up a demonstration
file and step through it command by command.
As with most declarations, arguments to ON and OFF may be strung together separated by commas. For example,
off time,demo;
will turn off both the time messages and the demonstration switch.
We note here that while most ON and OFF commands are obeyed almost instantaneously, some trigger time-consuming actions such as reading in necessary modules from secondary storage.
A diagnostic message is printed if ON or OFF are used with a switch that is not known to the system. For example, if you misspell DEMO and type
on demq;
you will get the message
***** DEMQ not defined as switch.



END

The identifier END has two separate uses.
1) Its use in a BEGIN . . . END bracket has been discussed in connection with
compound statements.
2) Files to be read using IN should end with an extra END; command. The reason
for this is explained in the section on the IN command. This use of END does not
allow an immediately preceding END (such as the END of a procedure definition),
so we advise using ;END; there.


BYE Command

The command BYE; (or alternatively QUIT;) stops the execution of REDUCE,
closes all open output files, and returns you to the calling program (usually the
operating system). Your REDUCE session is normally destroyed.



SHOWTIME Command

SHOWTIME; prints the elapsed time since the last call of this command or, on its
first call, since the current REDUCE session began. The time is normally given
in milliseconds and gives the time as measured by a system clock. The operations
covered by this measure are system dependent.


DEFINE Command

The command DEFINE allows a user to supply a new name for any identifier or
replace it by any well-formed expression. Its argument is a list of expressions of
the form


⟨identifier⟩ = ⟨number⟩ | ⟨identifier⟩ | ⟨operator⟩ |
⟨reserved word⟩ | ⟨expression⟩

define be==,x=y+z;
means that BE will be interpreted as an equal sign, and X as the expression y+z
from then on. This renaming is done at parse time, and therefore takes precedence
over any other replacement declared for the same identifier. It stays in effect until
the end of the REDUCE run.
The identifiers ALGEBRAIC and SYMBOLIC have properties which prevent
DEFINE from being used on them. To define ALG to be a synonym for
ALGEBRAIC, use the more complicated construction

Chapter 7

Built-in Prefix Operators
In the following subsections are descriptions of the most useful prefix operators
built into REDUCE that are not defined in other sections (such as substitution
operators). Some are fully defined internally as procedures; others are more nearly
abstract operators, with only some of their properties known to the system.
In many cases, an operator is described by a prototypical header line as follows.
Each formal parameter is given a name and followed by its allowed type. The
names of classes referred to in the definition are printed in lower case, and parameter names in upper case. If a parameter type is not commonly used, it may be
a specific set enclosed in brackets { . . . }. Operators that accept formal parameter lists of arbitrary length have the parameter and type class enclosed in square
brackets indicating that zero or more occurrences of that argument are permitted.
Optional parameters and their type classes are enclosed in angle brackets.


Numerical Operators

REDUCE includes a number of functions that are analogs of those found in most
numerical systems. With numerical arguments, such functions return the expected
result. However, they may also be called with non-numerical arguments. In such
cases, except where noted, the system attempts to simplify the expression as far as
it can. In such cases, a residual expression involving the original operator usually
remains. These operators are as follows:



ABS returns the absolute value of its single argument, if that argument has a numerical value. A non-numerical argument is returned as an absolute value, with an




overall numerical coefficient taken outside the absolute value operator. For example:
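A few sample evaluations, following the coefficient rule just described:

```reduce
abs(-3/4);   % -> 3/4
abs(-a);     % -> abs(a), the coefficient -1 is taken outside
abs(2*a);    % -> 2*abs(a)
```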





CEILING returns the ceiling (i.e., the least integer greater than or equal to the given argument) if its single argument has a numerical value. A non-numerical argument is returned as an expression in the original operator. For example:
ceiling(-5/4) -> -1




CONJ returns the complex conjugate of an expression, if that argument has a numerical value. By default the complex conjugate of a non-numerical argument is returned as an expression in the operators REPART and IMPART. For example:
conj(1+i) -> 1-I

However, if rules have been previously defined for the complex conjugate(s) of one
or more non-numerical terms appearing in the argument, these rules are applied and
the expansion in terms of the operators REPART and IMPART is suppressed.
For example:
realvalued a,b;
let conj z => z!*, conj c => c!*;
a+b*z*z* + c*z*
conj atan z
Note that in defining the rule conj z => z!*, the rule conj z!* => z
is (in effect) automatically defined. Note also that the standard elementary
functions and their inverses (where appropriate) are automatically defined to be
SELFCONJUGATE so that conj(f(z)) => f(conj(z)).





If the single argument of FACTORIAL evaluates to a non-negative integer, its factorial is returned. Otherwise an expression involving FACTORIAL is returned. For example:
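For instance:

```reduce
factorial 5;     % -> 120
factorial(-5);   % left as an expression in FACTORIAL, since the
                 % argument is not a non-negative integer
```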





FIX returns the fixed value (i.e., the integer part of the given argument, truncating towards zero) if its single argument has a numerical value. A non-numerical argument is returned as an expression in the original operator. For example:





FLOOR returns the floor (i.e., the greatest integer less than or equal to the given argument) if its single argument has a numerical value. A non-numerical argument is returned as an expression in the original operator. For example:





IMPART returns the imaginary part of an expression, if that argument has a numerical value. A non-numerical argument is returned as an expression in the operators REPART and IMPART. For example:
impart(1+i) -> 1
impart(a+i*b) -> impart(a) + repart(b)





MAX and MIN can take an arbitrary number of expressions as their arguments.
If all arguments evaluate to numerical values, the maximum or minimum of the
argument list is returned. If any argument is non-numeric, an appropriately reduced
expression is returned. For example:
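A plausible session (the exact printed form of the reduced expression may vary):

```reduce
max(3,-5,7);    % -> 7
min(1/2,2/3);   % -> 1/2
max(a,3,5);     % reduces to an expression such as max(5,a),
                % since nothing is known about a
```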



MAX or MIN of an empty list returns 0.



NEXTPRIME returns the next prime greater than its integer argument, using a probabilistic algorithm. A type error occurs if the value of the argument is not an integer. For example:
nextprime 5 -> 7
nextprime(-2) -> 2
nextprime(-7) -> -5
nextprime 1000000 -> 1000003
whereas nextprime(a) gives a type error.



random(n) returns a random number r in the range 0 ≤ r < n. A type error
occurs if the value of the argument is not a positive integer in algebraic mode, or
positive number in symbolic mode. For example:



whereas random(a) gives a type error.



random_new_seed(n) reseeds the random number generator to a sequence determined by the integer argument n. It can be used to ensure that a repeatable pseudo-random sequence will be delivered regardless of any previous use of RANDOM, or can be called early in a run with an argument derived from something variable (such as the time of day) to arrange that different runs of a REDUCE program will use different random sequences. When a fresh copy of REDUCE is first created it is as if random_new_seed(1) has been obeyed.
A type error occurs if the value of the argument is not a positive integer.
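For instance, reseeding with the same value replays the same sequence (the particular numbers drawn are implementation dependent; the seed 1729 is arbitrary):

```reduce
random_new_seed(1729);
r1 := random(10^6);
random_new_seed(1729);
r2 := random(10^6);
r1 - r2;   % -> 0, both draws come from the same reseeded sequence
```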



REPART returns the real part of an expression, if that argument has a numerical value. A non-numerical argument is returned as an expression in the operators REPART and IMPART. For example:
repart(1+i) -> 1
repart(sin(3+4*i)) -> cosh(4)*sin(3)
repart(log(2+i)) -> log(5)/2
repart(a+i*b) -> - impart(b) + repart(a)


ROUND returns the rounded value (i.e., the nearest integer) of its single argument if that argument has a numerical value. A non-numeric argument is returned as an expression in the original operator. For example:
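The four integer-valued operators FIX, FLOOR, CEILING and ROUND differ only in rounding direction, as these hand-checkable values illustrate:

```reduce
fix(-7/4);       % -> -1   (truncate toward zero)
floor(-7/4);     % -> -2   (round down)
ceiling(-7/4);   % -> -1   (round up)
round(-7/4);     % -> -2   (nearest integer)
```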





SIGN tries to evaluate the sign of its argument. If this is possible SIGN returns
one of 1, 0 or -1. Otherwise, the result is the original form or a simplified variant.
For example:



Note that even powers of formal expressions are assumed to be positive only as
long as the switch COMPLEX is off.
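Some sample evaluations (the third line assumes COMPLEX is off, per the note above; the form of an unevaluated result may vary):

```reduce
sign(-5);    % -> -1
sign(0);     % -> 0
sign(a^2);   % -> 1, even powers of formal expressions are
             %    assumed positive while COMPLEX is off
sign(a);     % remains unevaluated when nothing is known about a
```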




Mathematical Functions

REDUCE knows that a standard set of mathematical functions can take arbitrary scalar expressions as their argument(s), including the trigonometric and hyperbolic functions and their inverses, EXP, LOG, LOG10 and LOGB, where LOG is the natural logarithm, and LOGB has two arguments of which the second is the logarithmic base.
The derivatives of all these functions are also known to the system.
REDUCE knows various elementary identities and properties of these functions.
For example:
cos(-x) = cos(x)
cos(n*pi) = (-1)^n
log(e) = 1
log(1) = 0
log(e^x) = x

sin(-x) = - sin (x)
sin(n*pi) = 0
e^(i*pi/2) = i
e^(i*pi) = -1
e^(3*i*pi/2) = -i

Beside these identities, there are a lot of simplifications for elementary functions defined in the REDUCE system as rulelists. In order to view these, the SHOWRULES operator can be used, e.g. showrules tan; yields (in part)
{tan(~n*arbint(~i)*pi + ~~x) => tan(x) when fixp(n),
 tan(~x) => trigquot(sin(x),cos(x)) when knowledge_about(sin,x,tan),
 ...
 tan(atan(~x)) => x,
 df(tan(~x),~x) => 1 + tan(x)^2}

For further simplification, especially of expressions involving trigonometric functions, see the TRIGSIMP package (chapter 16.74) documentation.
Functions not listed above may be defined in the special functions package
The user can add further rules for the reduction of expressions involving these
operators by using the LET command.
In many cases it is desirable to expand product arguments of logarithms, or collect
a sum of logarithms into a single logarithm. Since these are inverse operations, it
is not possible to provide rules for doing both at the same time and preserve the
REDUCE concept of idempotent evaluation. As an alternative, REDUCE provides
two switches EXPANDLOGS and COMBINELOGS to carry out these operations.
Both are off by default, and are subject to the value of the switch PRECISE. This switch is on by default and prevents modifications that may be false in a complex domain. Thus to expand LOG(3*Y) into a sum of logs, one can say
on expandlogs; log(3*y);
whereas to expand LOG(X*Y) into a sum of logs, one needs to say
off precise; on expandlogs; log(x*y);
To combine this sum into a single log:
off precise; on combinelogs; log(x) + log(y);
These switches affect the logarithmic functions LOG10 (base 10) and LOGB (arbitrary base) as well.
At the present time, it is possible to have both switches on at once, which could



lead to infinite recursion. However, an expression is switched from one form to the
other in this case. Users should not rely on this behavior, since it may change in
the next release.
The current version of REDUCE does a poor job of simplifying surds. In particular,
expressions involving the product of variables raised to non-integer powers do not
usually have their powers combined internally, even though they are printed as if
those powers were combined. For example, the expression
will print as
but will have an internal form containing the two exponentiated terms. If you
now subtract sqrt(x) from this expression, you will not get zero. Instead, the
confusing form
will result. To combine such exponentiated terms, the switch COMBINEEXPT
should be turned on.
The square root function can be input using the name SQRT, or the power operation ^(1/2). On output, unsimplified square roots are normally represented by
the operator SQRT rather than a fractional power. With the default system switch
settings, the argument of a square root is first simplified, and any divisors of the
expression that are perfect squares taken outside the square root argument. The
remaining expression is left under the square root. Thus the expression sqrt(-8*a^2*b) would become 2*a*sqrt(-2*b).

Note that such simplifications can cause trouble if A is eventually given a value
that is a negative number. If it is important that the positive property of the square
root and higher even roots always be preserved, the switch PRECISE should be
set on (the default value). This causes any non-numerical factors taken out of surds
to be represented by their absolute value form. With PRECISE on then, the above example would become
2*abs(a)*sqrt( - 2*b)



However, this is incorrect in the complex domain, where sqrt(x^2) is not identical
to |x|. To avoid the above simplification, the switch PRECISE_COMPLEX should
be set on (default is off). For example:
on precise_complex; sqrt(-8a^2*b);
yields the output
2*sqrt( - 2*a^2*b)
The statement that REDUCE knows very little about these functions applies only in the mathematically exact off rounded mode. If ROUNDED is on, any of these functions, when given a numerical argument, has its value calculated to the current degree of floating point precision. In addition, real (non-integer valued) powers of numbers will also be evaluated.
If the COMPLEX switch is turned on in addition to ROUNDED, these functions will also calculate a real or complex result, again to the current degree of floating point precision, if given complex arguments. For example, with on rounded,complex;
2.3^(5.6i); -> -0.0480793490914 - 0.998843519372*I
cos(2+3i); -> -4.18962569097 - 9.10922789376*I

For log and the inverse trig. and hyperbolic functions which are multi-valued,
the principal value is returned. The branch cuts chosen (except for acot) are now
those recommended by W. Kahan (Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing’s Sign Bit, in The State of the Art in Numerical
Analysis, A. Iserles, M.J.D. Powell Eds., Clarendon Press, Oxford, 1987).
The exception for acot is necessary as elsewhere in Reduce acot(−z) is taken to
be π − acot(z) rather than − acot(z). The branch cuts are:



asin, acos:
acsc, asec:
atan, acot:


{r | r ∈ R ∧ r < 0}
{r | r ∈ R ∧ (r > 1 ∨ r < −1)}
{r | r ∈ R ∧ r ≠ 0 ∧ r > −1 ∧ r < 1}
{r ∗ i | r ∈ R ∧ (r > 1 ∨ r < −1)}
{r ∗ i | r ∈ R ∧ (r ≥ 1 ∨ r ≤ −1)}
{r ∗ i | r ∈ R ∧ r ≠ 0 ∧ r ≥ −1 ∧ r ≤ 1}
{r | r ∈ R ∧ r < 1}
{r | r ∈ R ∧ (r > 1 ∨ r < 0)}
{r | r ∈ R ∧ (r > 1 ∨ r < −1)}
{r | r ∈ R ∧ r > −1 ∧ r < 1}

Bernoulli Numbers and Euler Numbers

The unary operator Bernoulli provides notation and computation for Bernoulli
numbers. Bernoulli(n) evaluates to the nth Bernoulli number; all of the odd
Bernoulli numbers, except Bernoulli(1), are zero.
The algorithms are based upon those by Herbert Wilf, presented by Sandra Fillebrown. If the ROUNDED switch is off, the algorithms are exactly those;
if it is on, some further rounding may be done to prevent computation of redundant
digits. Hence, these functions are particularly fast when used to approximate the
Bernoulli numbers in rounded mode.
Euler numbers are computed by the unary operator Euler, which returns the nth Euler number. The computation is derived directly from Pascal’s triangle of binomial coefficients.


Fibonacci Numbers and Fibonacci Polynomials

The unary operator Fibonacci provides notation and computation for Fibonacci
numbers. Fibonacci(n) evaluates to the nth Fibonacci number. If n is a positive or negative integer, it will be evaluated following the definition:
F0 = 0; F1 = 1; Fn = Fn−1 + Fn−2
Fibonacci Polynomials are computed by the binary operator FibonacciP. FibonacciP(n,x) returns the nth Fibonacci polynomial in the variable x. If n is a positive
or negative integer, it will be evaluated following the definition:
F0 (x) = 0; F1 (x) = 1; Fn (x) = xFn−1 (x) + Fn−2 (x)
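For example (both values follow directly from the recurrences above):

```reduce
fibonacci(10);     % -> 55
fibonaccip(5,x);   % -> x^4 + 3*x^2 + 1
```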




Motzkin numbers

A Motzkin number Mn (named after Theodore Motzkin) is the number of different
ways of drawing non-intersecting chords on a circle between n points. For a nonnegative integer n, the operator Motzkin(n) returns the nth Motzkin number,
according to the recursion formula
M(0) = 1;   M(1) = 1;   M(n+1) = ((2*n + 3)*M(n) + 3*n*M(n-1))/(n + 3).
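For example, the Motzkin numbers begin 1, 1, 2, 4, 9, 21, . . . , so:

```reduce
motzkin(4);   % -> 9
motzkin(5);   % -> 21
```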

CHANGEVAR operator

Author: G. Üçoluk.
The operator CHANGEVAR does a variable transformation in a set of differential
equations. Syntax:
changevar(⟨depvars⟩, ⟨newvars⟩, ⟨eqlist⟩, ⟨diffeq⟩)
⟨diffeq⟩ is either a single differential equation or a list of differential equations, ⟨depvars⟩ are the dependent variables to be substituted, ⟨newvars⟩ are the new independent variables, and ⟨eqlist⟩ is a list of equations of the form ⟨depvar⟩ = ⟨expression⟩ where ⟨expression⟩ is some function of the new independent variables.
The three lists hdepvarsi, hnewvarsi, and heqlisti must be of the same length. If
there is only one variable to be substituted, then it can be given instead of the
list. The same applies to the list of differential equations, i.e., the following two
commands are equivalent
changevar(u,y,x=e^y,df(u(x),x) - log(x));
changevar({u},{y},{x=e^y},{df(u(x),x) - log(x)});
except for one difference: the first command returns the transformed differential
equation, the second one a list with a single element.
The switch DISPJACOBIAN governs the display of the entries of the inverse Jacobian; it is OFF by default.
The mathematics behind the change of independent variable(s) in differential
equations is quite straightforward. It is basically the application of the chain rule.
If the dependent variable of the differential equation is F, the independent variables are x_i and the new independent variables are u_i (where i = 1 . . . n) then the first derivatives are:
∂F/∂x_i = (∂F/∂u_j) (∂u_j/∂x_i)



We assumed Einstein’s summation convention. Here the problem is to calculate
the ∂uj /∂xi terms if the change of variables is given by
xi = fi (u1 , . . . , un )
The first thought might be solving the above given equations for uj and then differentiating them with respect to xi , then again making use of the equations above,
substituting new variables for the old ones in the calculated derivatives. This is
not always a preferable way to proceed. Mainly because the functions fi may not
always be easily invertible. Another approach that makes use of the Jacobian is
better. Consider the above given equations which relate the old variables to the new ones. Let us differentiate them:
δ_ij = (∂f_j/∂u_k) (∂u_k/∂x_i)
The first derivative is nothing but the (j, k)th entry of the Jacobian matrix.
So if we speak in matrix language
1 = J D
where we defined the Jacobian
J_ij = ∂f_j/∂u_i
and the matrix of the derivatives we wanted to obtain as
D_ij = ∂u_j/∂x_i
If the Jacobian has a non-vanishing determinant then it is invertible and we are able to write from the matrix equation above:
D = J^(-1)
so finally we have what we want
∂u_j/∂x_i = (J^(-1))_ij
The higher derivatives are obtained by the successive application of the chain rule
and using the definitions of the old variables in terms of the new ones. It can be
easily verified that the only derivatives that are needed to be calculated are the first
order ones which are obtained above.



7.6.1 CHANGEVAR example: The 2-dim. Laplace Equation
The 2-dimensional Laplace equation in cartesian coordinates is:
∂²u/∂x² + ∂²u/∂y² = 0
Now assume we want to obtain the polar coordinate form of Laplace equation. The
change of variables is:
x = r cos θ,

y = r sin θ

The solution using CHANGEVAR is as follows
changevar({u},{r,theta},{x=r*cos theta,y=r*sin theta},
{df(u(x,y),x,2)+df(u(x,y),y,2)} );
Here we could omit the curly braces in the first and last arguments (because those
lists have only one member) and the curly braces in the third argument (because
they are optional), but you cannot leave off the curly braces in the second argument.
So one could equivalently write
changevar(u,{r,theta},x=r*cos theta,y=r*sin theta,
df(u(x,y),x,2)+df(u(x,y),y,2) );
If you have tried out the above example, you will notice that the denominator contains a cos²θ + sin²θ which is actually equal to 1. This has of course nothing to do with CHANGEVAR. One has to overcome these pattern matching problems by the conventional methods REDUCE provides (a rule, for example, will fix it). Secondly you will notice that your u(x,y) operator has changed to u(r,theta) in the result. Nothing magical about this. That is just what we do with pencil and paper. u(r,theta) represents the transformed dependent variable.


Another CHANGEVAR example: An Euler Equation

Consider a differential equation which is of Euler type, for instance:
x3 y 000 − 3x2 y 00 + 6xy 0 − 6y = 0
where prime denotes differentiation with respect to x. As is well known, Euler
type of equations are solved by a change of variable:
x = eu
So our call to CHANGEVAR reads as follows:


changevar(y, u, x=e**u, x**3*df(y(x),x,3) -
          3*x**2*df(y(x),x,2)+6*x*df(y(x),x)-6*y(x));

and returns the result
df(y(u),u,3) - 6*df(y(u),u,2) + 11*df(y(u),u) - 6*y(u)



CONTINUED_FRACTION Operator

The operator CONTINUED_FRACTION generates the continued fraction expansion of a rational number argument. For irrational or rounded arguments it approximates the real number as a rational number (to the current system precision) and
generates the continued fraction expansion. CONTINUED_FRACTION has one or
two arguments, the number to be converted and an optional precision:
The result is a list of two elements: the first is the rational value of the approximation (final convergent) and the second is the list of terms of the continued fraction which represents the same value according to the definition t0 + 1/(t1 + 1/(t2 + ...)). Precision: the second optional parameter ⟨size⟩ is an upper bound for the absolute value of the result denominator. If omitted, the expansion performed is exact for rational number arguments and for irrational or rounded arguments it is up to the current system precision.
continued_fraction pi;
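Since the output for pi depends on the current precision, a small rational example is easier to check by hand; by the definition above, 59/27 = 2 + 1/(5 + 1/(2 + 1/2)):

```reduce
continued_fraction(59/27);
% Following the two-element result format described above, this
% should give the final convergent and the term list:
%   {59/27,{2,5,2,2}}
```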





DF Operator

The operator DF is used to represent partial differentiation with respect to one or
more variables. It is used with the syntax:
DF(⟨EXPRN:algebraic⟩[, ⟨VAR:kernel⟩ <, ⟨NUM:integer⟩ >]) : algebraic.
The first argument is the expression to be differentiated. The remaining arguments
specify the differentiation variables and the number of times they are applied.
The number NUM may be omitted if it is 1. For example,
df(y,x) = ∂y/∂x
df(y,x,2) = ∂²y/∂x²
df(y,x1,2,x2,x3,2) = ∂⁵y/∂x₁² ∂x₂ ∂x₃².
The evaluation of df(y,x) proceeds as follows: first, the values of Y and X are
found. Let us assume that X has no assigned value, so its value is X. Each term
or other part of the value of Y that contains the variable X is differentiated by the
standard rules. If Z is another variable, not X itself, then its derivative with respect
to X is taken to be 0, unless Z has previously been declared to DEPEND on X, in
which case the derivative is reported as the symbol df(z,x).
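For example, using a DEPEND declaration to introduce such a dependence:

```reduce
depend z,x;    % declare that z depends on x
df(z^2,x);     % -> 2*df(z,x)*z
df(sin z,x);   % -> cos(z)*df(z,x)
df(y,x);       % -> 0, since y has not been declared to depend on x
```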


Switches influencing differentiation

Consider df(u,x,y,z), assuming u depends on each of x,y,z in some way.
If none of x,y,z is equal to u then the order of differentiation is commuted into a
canonical form, unless the switch NOCOMMUTEDF is turned on (default is off). If at
least one of x,y,z is equal to u then the order of differentiation is not fully commuted and the derivative is not simplified to zero, unless the switch COMMUTEDF
is turned on. It is off by default.
If COMMUTEDF is off and the switch SIMPNONCOMDF is on then REDUCE simplifies as follows:
df(u,x,u) -> df(u,x,2) / df(u,x)
df(u,x,n,u) -> df(u,x,n+1) / df(u,x)


provided U depends only on the one variable X. This simplification removes the
non-commutative aspect of the derivative.
If the switch EXPANDDF is turned on then REDUCE uses the chain rule to expand symbolic derivatives of indirectly dependent variables provided the result is unambiguous, i.e. provided there is no direct dependence. It is off by default. Thus, for example, given
depend f,u,v; depend u,x; depend v,x;
then
df(f,x) -> DF(F,U)*DF(U,X) + DF(F,V)*DF(V,X)
whereas after
depend u,v;
DF(F,X) does not expand at all (since the result would be ambiguous and the algorithm would loop).
Turning on the switch ALLOWDFINT allows “differentiation under the integral
sign”, i.e.
DF(INT(Y, X), V) -> INT(DF(Y, V), X)
if this results in a simplification. If the switch DFINT is also turned on then this
happens regardless of whether the result simplifies. Both switches are off by default.


Adding Differentiation Rules

The LET statement can be used to introduce rules for differentiation of user-defined
operators. Its general form is
FOR ALL ⟨var1⟩, . . . , ⟨varn⟩
LET DF(⟨operator⟩⟨varlist⟩, ⟨vari⟩) = ⟨expression⟩
where
⟨varlist⟩ −→ (⟨var1⟩, . . . , ⟨varn⟩),
and ⟨var1⟩, . . . , ⟨varn⟩ are the dummy variable arguments of ⟨operator⟩.
An analogous form applies to infix operators.
for all x let df(tan x,x)= 1 + tan(x)^2;

(This is how the tan differentiation rule appears in the REDUCE source.)

for all x,y let df(f(x,y),x)=2*f(x,y), df(f(x,y),y)=x*f(x,y);



Notice that all dummy arguments of the relevant operator must be declared arbitrary by the FOR ALL command, and that rules may be supplied for operators with
any number of arguments. If no differentiation rule appears for an argument in an
operator, the differentiation routines will return as result an expression in terms
of DF. For example, if the rule for the differentiation with respect to the second
argument of F is not supplied, the evaluation of df(f(x,z),z) would leave this
expression unchanged. (No DEPEND declaration is needed here, since f(x,z)
obviously “depends on” Z.)
Once such a rule has been defined for a given operator, any future differentiation
rules for that operator must be defined with the same number of arguments for that
operator, otherwise we get the error message

Incompatible DF rule argument length for <operator>
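A short sketch of such a user-supplied rule (the operator myop and its derivative are hypothetical):

```reduce
operator myop;
for all x let df(myop(x),x) = 1/(1 + x^2);
df(myop(a),a);   % the rule applies, giving 1/(a^2 + 1)
df(myop(a),b);   % no rule and no dependence on b, so the result is 0
```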


INT Operator

INT is an operator in REDUCE for indefinite integration using a combination of
the Risch-Norman algorithm and pattern matching. It is used with the syntax:
INT(hEXPRN:algebraici,hVAR:kerneli) : algebraic.
This will correctly return the indefinite integral for expressions comprising polynomials, log functions, exponential functions, and tan and atan. The arbitrary constant is not represented. If the integral cannot be done in closed terms, it returns a
formal integral for the answer in one of two ways:
1. It returns the input, INT(...,...) unchanged.
2. It returns an expression involving INTs of some other functions (sometimes
more complicated than the original one, unfortunately).
Rational functions can be integrated when the denominator is factorizable by the
program. In addition it will attempt to integrate expressions involving error functions, dilogarithms and other trigonometric expressions. In these cases it might not
always succeed in finding the solution, even if one exists.
int(log(x),x) ->  X*(LOG(X) - 1)

The program checks that the second argument is a variable and gives an error if it
is not.
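A few further sketches of typical INT calls (the comments give the mathematical values, not the exact output layout):

```reduce
int(x^2,x);      % x^3/3
int(cos(x),x);   % sin(x)
int(1/x,x);      % log(x)
```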



Note: If the int operator is called with 4 arguments, REDUCE will implicitly call
the definite integration package (DEFINT) and this package will interpret the third
and fourth arguments as the lower and upper limit of integration, respectively. For
details, consult the documentation on the DEFINT package.



The switch TRINT when on will trace the operation of the algorithm. It produces
a great deal of output in a somewhat illegible form, and is not of much interest to
the general user. It is normally off.
The switch TRINTSUBST when on will trace the heuristic attempts to solve the
integral by substitution. It is normally off.
If the switch FAILHARD is on the algorithm will terminate with an error if the
integral cannot be done in closed terms, rather than return a formal integration
form. FAILHARD is normally off.
The switch NOLNR suppresses the use of the linear properties of integration in
cases when the integral cannot be found in closed terms. It is normally off.
The switch NOINTSUBST disables the heuristic attempts to solve the integral by
substitution. It is normally off.
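For instance, FAILHARD can be used to force an error instead of a formal result (a sketch; e^(x^2) has no elementary antiderivative):

```reduce
on failhard;
int(e^(x^2),x);   % terminates with an error rather than
                  % returning a formal INT(...) form
off failhard;
```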


Advanced Use

If a function appears in the integrand that is not one of the functions EXP, ERF,
TAN, ATAN, LOG, DILOG then the algorithm will make an attempt to integrate the argument if it can, differentiate it and reach a known function. However
the answer cannot be guaranteed in this case. If a function is known to be algebraically independent of this set, it can be flagged transcendental by

flag('(trilog),'transcendental);

in which case this function will be added to the permitted field descriptors for a
genuine decision procedure. If this is done, the user is responsible for the mathematical correctness of his actions.
The standard version does not deal with algebraic extensions. Thus integration
of expressions involving square roots and other like things can lead to trouble. A
contributed package that supports integration of functions involving square roots is
available, however (ALGINT, chapter 16.1). In addition there is a definite integration package, DEFINT (chapter 16.18).





A. C. Norman & P. M. A. Moore, “Implementing the New Risch Algorithm”,
Proc. 4th International Symposium on Advanced Comp. Methods in Theor. Phys.,
CNRS, Marseilles, 1977.
S. J. Harrington, “A New Symbolic Integration System in Reduce”, Comp. Journ.
22 (1979) 2.
A. C. Norman & J. H. Davenport, “Symbolic Integration — The Dust Settles?”,
Proc. EUROSAM 79, Lecture Notes in Computer Science 72, Springer-Verlag,
Berlin Heidelberg New York (1979) 398-407.


LENGTH Operator

LENGTH is a generic operator for finding the length of various objects in the system. The meaning depends on the type of the object. In particular, the length of an algebraic expression is the number of additive top-level terms in its expanded representation.



Other objects that support a length operator include arrays, lists and matrices. The
explicit meaning in these cases is included in the description of these objects.
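For example (a sketch of typical LENGTH results):

```reduce
length((x + y)^2);   % 3: the expanded form x^2 + 2*x*y + y^2
                     % has three top-level terms
length({a,b,c});     % 3: the number of list elements
```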


MAP Operator

The MAP operator applies a uniform evaluation pattern to all members of a composite structure: a matrix, a list, or the arguments of an operator expression. The
evaluation pattern can be a unary procedure, an operator, or an algebraic expression
with one free variable.
It is used with the syntax:

MAP(FNC:function, OBJ:object)

Here OBJ is a list, a matrix or an operator expression. FNC can be one of the following:
1. the name of an operator with a single argument: the operator is evaluated
once with each element of OBJ as its single argument;



2. an algebraic expression with exactly one free variable, i.e. a variable preceded by the tilde symbol. The expression is evaluated for each element of
OBJ, with the element substituted for the free variable;
3. a replacement rule of the form var => rep where var is a variable (a
kernel without a subscript) and rep is an expression that contains var. The
replacement expression rep is evaluated for each element of OBJ with the
element substituted for var. The variable var may be optionally preceded
by a tilde.
The rule form for FNC is needed when more than one free variable occurs.
map(abs,{1,-2,a,-a}) -> {1,2,ABS(A),ABS(A)}
map(int(~w,x), mat((x^2,x^5),(x^4,x^5))) ->

    [ x^3/3   x^6/6 ]
    [               ]
    [ x^5/5   x^6/6 ]


map(~w*6, x^2/3 = y^3/2 -1) -> 2*X^2=3*(Y^3-2)
You can use MAP in nested expressions. However, you cannot apply MAP to a
non-composite object, e.g. an identifier or a number.
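A further sketch using the replacement-rule form of FNC:

```reduce
map(~w => w^2, {1,2,3});   % rule form: gives {1,4,9}
```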


MKID Operator

In many applications, it is useful to create a set of identifiers for naming objects in
a consistent manner. In most cases, it is sufficient to create such names from two
components. The operator MKID is provided for this purpose. Its syntax is:
MKID(U:id,V:id|non-negative integer):id
for example

mkid(a,3) -> A3



while mkid(a+b,2) gives an error.
The SET statement can be used to give a value to the identifiers created by MKID,
for example

set(mkid(a,3),2);

will give A3 the value 2. Similarly, the UNSET statement can be used to remove
the value from these identifiers, for example

unset(mkid(a,3));
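A sketch of generating a whole family of such names in a loop (the identifiers a1, a2, a3 are hypothetical):

```reduce
for i := 1:3 do set(mkid(a,i), i^2);   % creates a1, a2, a3 with values 1, 4, 9
a2;                                    % 4
unset(mkid(a,2));                      % a2 is again an unbound identifier
```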


The Pochhammer Notation

The Pochhammer notation (a)k (also called Pochhammer’s symbol) is supported
by the binary operator POCHHAMMER(A,K). For a non-negative integer k, it is
defined as

(a)_0 = 1,
(a)_k = a(a + 1)(a + 2) ... (a + k - 1).

For a ≠ 0, -1, -2, ..., this is equivalent to

(a)_k = Gamma(a + k)/Gamma(a).

With ROUNDED off, this expression is evaluated numerically if a and k are both
integral, and otherwise may be simplified where appropriate. The simplification
rules are based upon algorithms supplied by Wolfram Koepf.
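For example (numeric evaluation with integral arguments):

```reduce
pochhammer(3,4);   % 360, since 3*4*5*6 = 360
```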


PF Operator

PF(hexpi,hvari) transforms the expression hexpi into a list of partial fractions
with respect to the main variable, hvari. PF does a complete partial fraction decomposition, and as the algorithms used are fairly unsophisticated (factorization and the
extended Euclidean algorithm), the code may be unacceptably slow in complicated cases.


Example: Given 2/((x+1)^2*(x+2)) in the workspace, pf(ws,x); gives
the result
{2/(x + 2), - 2/(x + 1), 2/(x^2 + 2*x + 1)} .
If you want the denominators in factored form, use off exp;. Thus, with
2/((x+1)^2*(x+2)) in the workspace, the commands off exp; pf(ws,x);
give the result
{2/(x + 2), - 2/(x + 1), 2/(x + 1)^2} .
To recombine the terms, FOR EACH . . . SUM can be used. So with the above list
in the workspace, for each j in ws sum j; returns the result
2/((x + 2)*(x + 1)^2)
Alternatively, one can use the operations on lists to extract any desired term.


SELECT Operator

The SELECT operator extracts from a list, or from the arguments of an n–ary
operator, elements corresponding to a boolean predicate. It is used with the syntax:
SELECT(hFNC:functioni, hLST:listi)
FNC can be one of the following forms:
1. the name of an operator with a single argument: the operator is evaluated
once on each element of LST;
2. an algebraic expression with exactly one free variable, i.e. a variable preceded by the tilde symbol. The expression is evaluated for each element of
hLSTi, with the element substituted for the free variable;


3. a replacement rule of the form hvari => hrepi where hvari is a variable (a
kernel without subscript) and hrepi is an expression that contains hvari. hrepi
is evaluated for each element of LST with the element substituted for hvari.
hvari may be optionally preceded by a tilde.

The rule form for FNC is needed when more than one free variable occurs.
The result of evaluating FNC is interpreted as a boolean value corresponding to the
conventions of REDUCE. These values are composed with the leading operator of
the input expression.
select( ~w>0 , {1,-1,2,-3,3}) -> {1,2,3}
select(evenp deg(~w,y),part((x+y)^5,0):=list)
-> {x^5, 10*x^3*y^2, 5*x*y^4}

select(evenp deg(~w,x),2x^2+3x^3+4x^4) -> 4*x^4 + 2*x^2




SOLVE Operator

SOLVE is an operator for solving one or more simultaneous algebraic equations.
It is used with the syntax:
SOLVE(hEXPRN:algebraici[, hVAR:kerneli | , hVARLIST:list of kernelsi]) : list.
EXPRN is of the form hexpressioni or { hexpression1i,hexpression2i, . . . }. Each
expression is an algebraic equation, or is the difference of the two sides of the
equation. The second argument is either a kernel or a list of kernels representing
the unknowns in the system. This argument may be omitted if the number of
distinct, non-constant, top-level kernels equals the number of unknowns, in which
case these kernels are presumed to be the unknowns.
For one equation, SOLVE recursively uses factorization and decomposition, together with the known inverses of LOG, SIN, COS, ^, ACOS, ASIN, and linear,
quadratic, cubic, quartic, or binomial factors. Solutions of equations built with
exponentials or logarithms are often expressed in terms of Lambert’s W function.
This function is (partially) implemented in the special functions package.
Linear equations are solved by the multi-step elimination method due to Bareiss,
unless the switch CRAMER is on, in which case Cramer’s method is used. The
Bareiss method is usually more efficient unless the system is large and dense.
Non-linear equations are solved using the Groebner basis package (chapter 16.28).
Users should note that this can be quite a time consuming process.
Examples:

solve(log(sin(x+3))^5 = 8,x);
solve(a*log(sin(x+3))^5 - b, sin(x+3));
SOLVE returns a list of solutions. If there is one unknown, each solution is an
equation for the unknown. If a complete solution was found, the unknown will
appear by itself on the left-hand side of the equation. On the other hand, if the
solve package could not find a solution, the “solution” will be an equation for the
unknown in terms of the operator ROOT_OF. If there are several unknowns, each
solution will be a list of equations for the unknowns. For example,

solve(x^2=1,x)  ->  {X=-1,X=1}

solve(x^7-x^6+x^2=1,x)
   ->  {X=ROOT_OF(X_^6 + X_ + 1,X_,TAG_1),X=1}


solve({x+3y=7,y-x=1},{x,y}) -> {{X=1,Y=2}}.
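A further sketch of solving a small linear system:

```reduce
solve({x + y = 7, x - y = 1}, {x,y});   % {{x=4,y=3}}
```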

The TAG argument is used to uniquely identify those particular solutions. Solution
multiplicities are stored in the global variable ROOT_MULTIPLICITIES rather
than the solution list. The value of this variable is a list of the multiplicities of the
solutions for the last call of SOLVE. For example,
solve(x^2=2x-1,x); root_multiplicities;
gives the results

{X=1}   and   {2}
If you want the multiplicities explicitly displayed, the switch MULTIPLICITIES
can be turned on. For example
on multiplicities; solve(x^2=2x-1,x);
yields the result

{X=1,X=1}

Handling of Undetermined Solutions

When SOLVE cannot find a solution to an equation, it normally returns an equation
for the relevant indeterminates in terms of the operator ROOT_OF. For example, the call

solve(cos(x) + log(x),x);
returns the result
{X=ROOT_OF(COS(X_) + LOG(X_),X_,TAG_1)} .
An expression with a top-level ROOT_OF operator is implicitly a list with an unknown number of elements (since we don’t always know how many solutions an
equation has). If a substitution is made into such an expression, closed form solutions can emerge. If this occurs, the ROOT_OF construct is replaced by an operator
ONE_OF. At this point it is of course possible to transform the result of the original
SOLVE operator expression into a standard SOLVE solution. To effect this, the
operator EXPAND_CASES can be used.



The following example shows the use of these facilities:
{X=ROOT_OF(A*X_ - X_ + 4*X_ + 4,X_,TAG_2),X=1}
expand_cases ws;


Solutions of Equations Involving Cubics and Quartics

Since roots of cubics and quartics can often be very messy, a switch FULLROOTS
is available, that, when off (the default), will prevent the production of a result in
closed form. The ROOT_OF construct will be used in this case instead.
In constructing the solutions of cubics and quartics, trigonometrical forms are used
where appropriate. This option is under the control of a switch TRIGFORM, which
is normally on.
The following example illustrates the use of these facilities:
let xx = solve(x^3+x+1,x);

{x=root_of(x_^3 + x_ + 1,x_)}

on fullroots;

xx;

(The three roots are now given in closed trigonometric form, as lengthy
expressions involving SIN, COS, SQRT(3) and SQRT(31)*I.)

off trigform;

xx;

(The roots are now given in surd form, as lengthy expressions involving
cube roots of SQRT(31) - 3*SQRT(3).)


Other Options

If SOLVESINGULAR is on (the default setting), degenerate systems such as
x+y=0, 2x+2y=0 will be solved by introducing appropriate arbitrary constants.
The consistent singular equation 0=0, or equations involving functions with multiple inverses, may introduce unique new indeterminate kernels ARBCOMPLEX(j) or ARBINT(j), (j=1,2,...), representing arbitrary complex or integer numbers respectively. To automatically select the principal branches, do OFF ALLBRANCH;. To avoid the introduction of new indeterminate kernels do OFF ARBVARS – then no equations are generated for the free variables and their original names are used to express the solution forms. To suppress solutions of consistent singular equations do OFF SOLVESINGULAR.
To incorporate additional inverse functions do, for example:

put('sinh,'inverse,'asinh);
put('asinh,'inverse,'sinh);

together with any desired simplification rules such as

for all x let sinh(asinh(x))=x, asinh(sinh(x))=x;
For completeness, functions with non-unique inverses should be treated as ^, SIN,
and COS are in the SOLVE module source.
Arguments of ASIN and ACOS are not checked to ensure that the absolute value
of the real part does not exceed 1; and arguments of LOG are not checked to ensure



that the absolute value of the imaginary part does not exceed π; but checks (perhaps
involving user response for non-numerical arguments) could be introduced using
LET statements for these operators.


Parameters and Variable Dependency

The proper design of a variable sequence supplied as a second argument to SOLVE
is important for the structure of the solution of an equation system. Any unknown
in the system not in this list is considered totally free. E.g. a call asking only for the unknown z in a pair of equations that also involve x and y produces an empty list as a result because there is no function z = z(x, y) which
fulfills both equations for arbitrary x and y values. In such a case the share variable
REQUIREMENTS displays a set of restrictions for the parameters of the system:
{x - 4*y}
The non-existence of a formal solution is caused by a contradiction which disappears only if the parameters of the initial system are set such that all members of
the requirements list take the value zero. For a linear system the set is complete:
a solution of the requirements list makes the initial system solvable. E.g. in the
above case a substitution x = 4y makes the equation set consistent. For a nonlinear system only one inconsistency is detected. If such a system has more than
one inconsistency, you must reduce them one after the other. 1 The set shows you
also the dependency among the parameters: here one of x and y is free and a formal
solution of the system can be computed by adding it to the variable list of solve.
The requirement set is not unique – there may be other such sets.
A system with parameters may have a formal solution, e.g.

solve({x=a*z+1,y=b*z},{z,x});

{{x=(a*y + b)/b, z=y/b}}

which is not valid for all possible values of the parameters.

The variable ASSUMPTIONS then contains a list of restrictions: the solutions are valid only as long as none of these expressions vanishes. Any zero of one of them represents a special case that is not covered by the formal solution. In the above case the value is {b}, which formally excludes the case b = 0; obviously this special parameter value makes the system singular. The set of assumptions is complete for both linear and non-linear systems.

1 The difference between linear and non-linear inconsistent systems is based on the algorithms which produce this information as a side effect when attempting to find a formal solution; example: solve({x = a, x = b, y = c, y = d}, {x, y}) gives the set {a - b, c - d}, while solve({x^2 = a, x^2 = b, y^2 = c, y^2 = d}, {x, y}) leads to {a - b}.
SOLVE rearranges the variable sequence to reduce the (expected) computing time.
This behavior is controlled by the switch VAROPT, which is on by default. If it is
turned off, the supplied variable sequence is used or the system kernel ordering is
taken if the variable list is omitted. The effect is demonstrated by an example:
s:= {y^3+3x=0,x^2+y^2=1};

solve(s,{x,y});

{{y=root_of(y_^6 + 9*y_^2 - 9,y_,tag_1), x=( - y^3)/3}}

off varopt; solve(s,{y,x});

{{x=root_of(x_^6 - 3*x_^4 + 12*x_^2 - 1,x_,tag_2),
  y=(x*( - x^4 + 2*x^2 - 10))/3}}

In the first case, solve forms the solution as a set of pairs (yi , x(yi )) because the
degree of x is higher – such a rearrangement makes the internal computation of the
Gröbner basis generally faster. For the second case the explicitly given variable



sequence is used such that the solution has now the form (xi , y(xi )). Controlling
the variable sequence is especially important if the system has one or more free
variables. As an alternative to turning off varopt, a partial dependency among
the variables can be declared using the depend statement: solve then rearranges
the variable sequence but keeps any variable ahead of those on which it depends.
on varopt; s:={a^3+b, b-c}$ solve(s,{a,b,c});

{{a=arbcomplex(1), b= - a^3, c= - a^3}}

depend a,c; depend b,c; solve(s,{a,b,c});

{{c=arbcomplex(2),
  a=root_of(a_^3 + c,a_),
  b= - a^3}}
Here solve is forced to put c after a and after b, but there is no obstacle to interchanging a and b.


Even and Odd Operators

An operator can be declared to be even or odd in its first argument by the declarations EVEN and ODD respectively. Expressions involving an operator declared in
this manner are transformed if the first argument contains a minus sign. Any other
arguments are not affected. In addition, if say F is declared odd, then f(0) is
replaced by zero unless F is also declared non zero by the declaration NONZERO.
For example, the declarations
even f1; odd f2;
mean that

f1(-a)    ->    F1(A)
f2(-a)    ->   -F2(A)
f1(-a,b)  ->    F1(A,B)
f2(0)     ->    0

To inhibit the last transformation, say nonzero f2;.


Linear Operators

An operator can be declared to be linear in its first argument over powers of its
second argument. If an operator F is so declared, F of any sum is broken up into
sums of Fs, and any factors that are not powers of the variable are taken outside.
This means that F must have (at least) two arguments. In addition, the second
argument must be an identifier (or more generally a kernel), not an expression.
If F were declared linear, then

f(a*x^5+b*x+c,x) ->  F(X^5,X)*A + F(X,X)*B + F(1,X)*C

More precisely, not only will the variable and its powers remain within the scope
of the F operator, but so will any variable and its powers that had been declared
to DEPEND on the prescribed variable; and so would any expression that contains
that variable or a dependent variable on any level, e.g. cos(sin(x)).
To declare operators F and G to be linear operators, use:
linear f,g;
The analysis is done of the first argument with respect to the second; any other
arguments are ignored. It uses the following rules of evaluation:
f(0)     ->  0
f(-y,x)  ->  -F(Y,X)
f(y+z,x) ->  F(Y,X)+F(Z,X)
f(y*z,x) ->  Z*F(Y,X)    if Z does not depend on X
f(y/z,x) ->  F(Y,X)/Z    if Z does not depend on X
To summarize, Y “depends” on the indeterminate X in the above if either of the
following hold:
1. Y is an expression that contains X at any level as a variable, e.g.: cos(sin(x))
2. Any variable in the expression Y has been declared dependent on X by use
of the declaration DEPEND.
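A sketch of declaring and applying a linear operator (f here is a hypothetical operator with no other properties):

```reduce
operator f; linear f;
f(3*x^2 + 2*x + 1, x);   % 3*f(x^2,x) + 2*f(x,x) + f(1,x)
```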
The use of such linear operators can be seen in the paper Fox, J.A. and A. C. Hearn,
“Analytic Computation of Some Integrals in Fourth Order Quantum Electrodynamics” Journ. Comp. Phys. 14 (1974) 301-317, which contains a complete listing of



a program for definite integration of some expressions that arise in fourth order
quantum electrodynamics.


Non-Commuting Operators

An operator can be declared to be non-commutative under multiplication by the
declaration NONCOM.
After the declaration
noncom u,v;
the expressions u(x)*u(y)-u(y)*u(x) and u(x)*v(y)-v(y)*u(x) will
remain unchanged on simplification, and in particular will not simplify to zero.
Note that it is the operator (U and V in the above example) and not the variable that
has the non-commutative property.
The LET statement may be used to introduce rules of evaluation for such operators.
In particular, the boolean operator ORDP is useful for introducing an ordering on
such expressions.
The rule
for all x,y such that x neq y and ordp(x,y)
let u(x)*u(y)= u(y)*u(x)+comm(x,y);
would introduce the commutator of u(x) and u(y) for all X and Y. Note that
since ordp(x,x) is true, the equality check is necessary in the degenerate case
to avoid a circular loop in the rule.


Symmetric and Antisymmetric Operators

An operator can be declared to be symmetric with respect to its arguments by the
declaration SYMMETRIC. For example
symmetric u,v;
means that any expression involving the top level operators U or V will have its
arguments reordered to conform to the internal order used by REDUCE. The user
can change this order for kernels by the command KORDER.



For example, u(x,v(1,2)) would become u(v(2,1),x), since numbers are ordered in decreasing order, and expressions are ordered in decreasing order of their kernels.
Similarly the declaration ANTISYMMETRIC declares an operator antisymmetric.
For example,
antisymmetric l,m;
means that any expression involving the top level operators L or M will have its
arguments reordered to conform to the internal order of the system, and the sign
of the expression changed if there are an odd number of argument interchanges
necessary to bring about the new order.
For example, l(x,m(1,2)) would become -l(-m(2,1),x) since one interchange occurs with each operator. An expression like l(x,x) would also be
replaced by 0.


Declaring New Prefix Operators

The user may add new prefix operators to the system by using the declaration
OPERATOR. For example:
operator h,g1,arctan;
adds the prefix operators H, G1 and ARCTAN to the system.
This allows symbols like h(w), h(x,y,z), g1(p+q), arctan(u/v) to
be used in expressions, but no meaning or properties of the operator are implied.
The same operator symbol can be used equally well as a 0-, 1-, 2-, 3-, etc.-place operator.
To give a meaning to an operator symbol, or express some of its properties, LET
statements can be used, or the operator can be given a definition as a procedure.
If the user forgets to declare an identifier as an operator, the system will prompt the
user to do so in interactive mode, or do it automatically in non-interactive mode.
A diagnostic message will also be printed if an identifier is declared OPERATOR
more than once.
Operators once declared are global in scope, and so can then be referenced anywhere in the program. In other words, a declaration within a block (or a procedure)
does not limit the scope of the operator to that block, nor does the operator go away
on exiting the block (use CLEAR instead for this purpose).
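A small sketch of declaring an operator and then giving it meaning (the operator h is hypothetical):

```reduce
operator h;
h(a) + h(a,b);             % h may be used with any number of arguments
for all x let h(x) = x^2;  % give the one-argument form a meaning
h(3);                      % 9
```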




Declaring New Infix Operators

Users can add new infix operators by using the declarations INFIX and PRECEDENCE.
For example,
infix mm;
precedence mm,-;
The declaration infix mm; would allow one to use the symbol MM as an infix operator:

a mm b   instead of   mm(a,b)

The declaration precedence mm,-; says that MM should be inserted into the
infix operator precedence list just after the − operator. This gives it higher precedence than − and lower precedence than * . Thus

a - b mm c - d   means   a - (b mm c) - d,

a * b mm c * d   means   (a * b) mm (c * d).

Both infix and prefix operators have no transformation properties unless LET statements or procedure declarations are used to assign a meaning.
We should note here that infix operators so defined are always binary:

a mm b mm c   means   (a mm b) mm c.

Creating/Removing Variable Dependency

There are several facilities in REDUCE, such as the differentiation operator and
the linear operator facility, that can utilize knowledge of the dependency between
various variables, or kernels. Such dependency may be expressed by the command
DEPEND. This takes an arbitrary number of arguments and sets up a dependency
of the first argument on the remaining arguments. For example,
depend x,y,z;
says that X is dependent on both Y and Z.
depend z,cos(x),y;



says that Z is dependent on COS(X) and Y.
Dependencies introduced by DEPEND can be removed by NODEPEND. The arguments of this are the same as for DEPEND. For example, given the above dependencies,
nodepend z,cos(x);
says that Z is no longer dependent on COS(X), although it remains dependent on Y.
As a convenience, one or more dependent variables can be specified together in a
list for both the DEPEND and NODEPEND commands, i.e.
(no)depend {y1 , y2 , . . .}, x1 , x2 , . . .
is equivalent to
(no)depend y1 , x1 , x2 , . . .; (no)depend y2 , x1 , x2 , . . .; . . .
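A sketch of how a declared dependency affects differentiation:

```reduce
depend y,x;
df(y^2,x);    % 2*df(y,x)*y
nodepend y,x;
df(y^2,x);    % 0
```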
Both commands also accept a sequence of “dependence sequences”, where the
beginning of each new dependence sequence is indicated by a list of one or more
dependent variables. For example,
depend {x,y,z},u,v,{theta},time;

is equivalent to

depend {x,y,z},u,v;  depend theta,time;

i.e. x, y and z depend on u and v, while theta depends on time.



Chapter 8

Display and Structuring of Expressions
In this section, we consider a variety of commands and operators that permit the
user to obtain various parts of algebraic expressions and also display their structure
in a variety of forms. Also presented are some additional concepts in the REDUCE
design that help the user gain a better understanding of the structure of the system.


Kernels

REDUCE is designed so that each operator in the system has an evaluation (or
simplification) function associated with it that transforms the expression into an
internal canonical form. This form, which bears little resemblance to the original
expression, is described in detail in Hearn, A. C., “REDUCE 2: A System and Language for Algebraic Manipulation,” Proc. of the Second Symposium on Symbolic
and Algebraic Manipulation, ACM, New York (1971) 128-133.
The evaluation function may transform its arguments in one of two alternative
ways. First, it may convert the expression into other operators in the system, leaving no functions of the original operator for further manipulation. This is in a sense
true of the evaluation functions associated with the operators +, * and / , for example, because the canonical form does not include these operators explicitly. It
is also true of an operator such as the determinant operator DET because the relevant evaluation function calculates the appropriate determinant, and the operator
DET no longer appears. On the other hand, the evaluation process may leave some
residual functions of the relevant operator. For example, with the operator COS,
a residual expression like COS(X) may remain after evaluation unless a rule for
the reduction of cosines into exponentials, for example, were introduced. These
residual functions of an operator are termed kernels and are stored uniquely like



variables. Subsequently, the kernel is carried through the calculation as a variable
unless transformations are introduced for the operator at a later stage.
In those cases where the evaluation process leaves an operator expression with
non-trivial arguments, the form of the argument can vary depending on the state
of the system at the point of evaluation. Such arguments are normally produced in
expanded form with no terms factored or grouped in any way. For example, the
expression cos(2*x+2*y) will normally be returned in the same form. If the
argument 2*x+2*y were evaluated at the top level, however, it would be printed
as 2*(X+Y). If it is desirable to have the arguments themselves in a similar form,
the switch INTSTR (for “internal structure”), if on, will cause this to happen.
In cases where the arguments of the kernel operators may be reordered, the system puts them in a canonical order, based on an internal intrinsic ordering of the
variables. However, some commands allow arguments in the form of kernels, and
the user has no way of telling what internal order the system will assign to these
arguments. To resolve this difficulty, we introduce the notion of a kernel form as
an expression that transforms to a kernel on evaluation.
Examples of kernel forms are:

a
cos(x*y)
log(sin(x))

whereas

a*b
(a+b)^4

are not.
We see that kernel forms can usually be used as generalized variables, and most
algebraic properties associated with variables may also be associated with kernels.


The Expression Workspace

Several mechanisms are available for saving and retrieving previously evaluated
expressions. The simplest of these refers to the last algebraic expression simplified. When an assignment of an algebraic expression is made, or an expression is
evaluated at the top level, (i.e., not inside a compound statement or procedure) the
results of the evaluation are automatically saved in a variable WS that we shall refer
to as the workspace. (More precisely, the expression is assigned to the variable WS
that is then available for further manipulation.)



If we evaluate the expression (x+y)^2 at the top level and next wish to differentiate it with respect to Y, we can simply say

df(ws,y);

to get the desired answer.
If the user wishes to assign the workspace to a variable or expression for later use,
the SAVEAS statement can be used. It has the syntax

SAVEAS <expression>
For example, after the differentiation in the last example, the workspace holds the
expression 2*x+2*y. If we wish to assign this to the variable Z we can now say
saveas z;
If the user wishes to save the expression in a form that allows him to use some of
its variables as arbitrary parameters, the FOR ALL command can be used.
for all x saveas h(x);
with the above expression would mean that h(z) evaluates to 2*Y+2*Z.
A further method for referencing more than the last expression is described in
chapter 13 on interactive use of REDUCE.
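A short sketch of the workspace in use:

```reduce
(x + y)^2;   % the workspace WS now holds the expanded square
df(ws,y);    % 2*x + 2*y; this result becomes the new WS
saveas z;
z;           % 2*x + 2*y
```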


Output of Expressions

A considerable degree of flexibility is available in REDUCE in the printing of
expressions generated during calculations. No explicit format statements are supplied, as these are in most cases of little use in algebraic calculations, where the size
of output or its composition is not generally known in advance. Instead, REDUCE
provides a series of mode options to the user that should enable him to produce his
output in a comprehensible and possibly pleasing form.
The most extreme option offered is to suppress the output entirely from any top
level evaluation. This is accomplished by turning off the switch OUTPUT which is
normally on. It is useful for limiting output when loading large files or producing
“clean” output from the prettyprint programs.
In most circumstances, however, we wish to view the output, so we need to know
how to format it appropriately. As we mentioned earlier, an algebraic expression



is normally printed in an expanded form, filling the whole output line with terms.
Certain output declarations, however, can be used to affect this format. To begin
with, we look at an operator for changing the length of the output line.



LINELENGTH Operator

This operator is used with the syntax

LINELENGTH(NUM:integer):integer

and sets the output line length to the integer NUM. It returns the previous output line length (so that it can be stored for later resetting of the output line if needed).
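For instance, a minimal sketch of saving and restoring the line length (the variable name old is illustrative):

```reduce
old := linelength 60;  % narrow the output line to 60 characters
(x+y+z)^4;             % printed within the new width
linelength old;        % restore the previous line length
```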


Output Declarations

We now describe a number of switches and declarations that are available for controlling output formats. It should be noted, however, that the transformation of
large expressions to produce these varied output formats can take a lot of computing time and space. If a user wishes to speed up the printing of the output in such
cases, he can turn off the switch PRI. If this is done, then output is produced in
one fixed format, which basically reflects the internal form of the expression, and
none of the options below apply. PRI is normally on.
With PRI on, the output declarations and switches available are as follows:
ORDER Declaration
The declaration ORDER may be used to order variables on output. The syntax is:
order v1,...,vn;
where the vi are kernels. Thus,
order x,y,z;
orders X ahead of Y, Y ahead of Z and all three ahead of other variables not given
an order. order nil; resets the output order to the system default. The order
of variables may be changed by further calls of ORDER, but then the reordered
variables would have an order lower than those in earlier ORDER calls. Thus,
order x,y,z;
order y,x;



would order Z ahead of Y and X. The default ordering is usually alphabetic.
FACTOR Declaration
This declaration takes a list of identifiers or kernels as argument. FACTOR is not
a factoring command (use FACTORIZE or the FACTOR switch for this purpose);
rather it is a separation command. All terms involving fixed powers of the declared
expressions are printed as a product of the fixed powers and a sum of the rest of the terms.
For example, after the declaration
factor x;
the polynomial (x+y+1)^2 will be printed as

 2                   2
X  + 2*X*(Y + 1) + Y  + 2*Y + 1

All expressions involving a given prefix operator may also be factored by putting
the operator name in the list of factored identifiers. For example:
factor x,cos,sin(x);
causes all powers of X and SIN(X) and all functions of COS to be factored.
Note that FACTOR does not affect the order of its arguments. You should also use
ORDER if this is important.
The declaration remfac v1,...,vn; removes the factoring flag from the expressions v1 through vn.
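A minimal sketch combining the two declarations (the output shapes are as described above):

```reduce
factor x;     % collect terms by powers of x on output
(x+y+1)^2;    % printed as fixed powers of x times cofactors
remfac x;     % remove the factoring flag again
```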


Output Control Switches

In addition to these declarations, the form of the output can be modified by switching various output control switches using the declarations ON and OFF. We shall
illustrate the use of these switches by an example, namely the printing of the expression
x^2*(y^2+2*y)+x*(y^2+z)/(2*a) .
The relevant switches are as follows:



ALLFAC Switch
This switch will cause the system to search the whole expression, or any subexpression enclosed in parentheses, for simple multiplicative factors and print them outside the parentheses. Thus our expression with ALLFAC off will print as

    2  2        2          2
(2*X *Y *A + 4*X *Y*A + X*Y  + X*Z)/(2*A)

and with ALLFAC on as

        2                2
X*(2*X*Y *A + 4*X*Y*A + Y  + Z)/(2*A) .

ALLFAC is normally on, and is on in the following examples, except where otherwise stated.
DIV Switch
This switch makes the system search the denominator of an expression for simple
factors that it divides into the numerator, so that rational fractions and negative
powers appear in the output. With DIV on, our expression would print as

      2                2  (-1)        (-1)
X*(X*Y  + 2*X*Y + 1/2*Y *A     + 1/2*A    *Z) .

DIV is normally off.
HORNER Switch
This switch causes the system to print polynomials according to Horner’s rule.
With HORNER on, our expression prints as

    2
X*(Y  + Z + 2*(Y + 2)*A*X*Y)/(2*A) .

HORNER is normally off.
LIST Switch
This switch causes the system to print each term in any sum on a separate line.
With LIST on, our expression prints as



        2
X*(2*X*Y *A
 + 4*X*Y*A
    2
 + Y
 + Z)/(2*A) .
LIST is normally off.
NOSPLIT Switch
Under normal circumstances, the printing routines try to break an expression across
lines at a natural point. This is a fairly expensive process. If you are not overly
concerned about where the end-of-line breaks come, you can speed up the printing
of expressions by turning off the switch NOSPLIT. This switch is normally on.
RAT Switch
This switch is only useful with expressions in which variables are factored with
FACTOR. With this mode, the overall denominator of the expression is printed
with each factored sub-expression. We assume a prior declaration factor x; in
the following output. We first print the expression with RAT set to off:
    2                   2
(2*X *Y*A*(Y + 2) + X*(Y  + Z))/(2*A) .
With RAT on the output becomes:
 2                 2
X *Y*(Y + 2) + X*(Y  + Z)/(2*A) .
RAT is normally off.
Next, if we leave X factored, and turn on both DIV and RAT, the result becomes
 2                    (-1)   2
X *Y*(Y + 2) + 1/2*X*A    *(Y  + Z) .
Finally, with X factored, RAT on and ALLFAC off we retrieve the original structure


 2   2              2
X *(Y  + 2*Y) + X*(Y  + Z)/(2*A) .

RATPRI Switch
If the numerator and denominator of an expression can each be printed in one line, the output routines will print them in a two dimensional notation, with numerator and denominator on separate lines and a line of dashes in between. For example, (a+b)/2 will print as

A + B
-----
  2

Turning this switch off causes such expressions to be output in a linear form.
REVPRI Switch
The normal ordering of terms in output is from highest to lowest power. In some situations (e.g., when a power series is output), the opposite ordering is more convenient. The switch REVPRI if on causes such a reverse ordering of terms. For example, the expression y*(x+1)^2+(y+3)^2 will normally print as

 2              2
X *Y + 2*X*Y + Y  + 7*Y + 9

whereas with REVPRI on, it will print as

           2            2
9 + 7*Y + Y  + 2*X*Y + X *Y .
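REVPRI is typically useful with series-like results; a small sketch (the truncated series is illustrative):

```reduce
on revpri;
1 + x + x^2/2 + x^3/6;   % terms appear in ascending powers of x
off revpri;
```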


WRITE Command

In simple cases no explicit output command is necessary in REDUCE, since the
value of any expression is automatically printed if a semicolon is used as a delimiter. There are, however, several situations in which such a command is useful.
In a FOR, WHILE, or REPEAT statement it may be desired to output something
each time the statement within the loop construct is repeated.
It may be desired for a procedure to output intermediate results or other information
while it is running. It may be desired to have results labeled in special ways,
especially if the output is directed to a file or device other than the terminal.



The WRITE command consists of the word WRITE followed by one or more items
separated by commas, and followed by a terminator. There are three kinds of items
that can be used:
1. Expressions (including variables and constants). The expression is evaluated, and the result is printed out.
2. Assignments. The expression on the right side of the := operator is evaluated, and is assigned to the variable on the left; then the symbol on the left is
printed, followed by a “:=”, followed by the value of the expression on the
right – almost exactly the way an assignment followed by a semicolon prints
out normally. (The difference is that if the WRITE is in a FOR statement and
the left-hand side of the assignment is an array position or something similar
containing the variable of the FOR iteration, then the value of that variable is
inserted in the printout.)
3. Arbitrary strings of characters, preceded and followed by double-quote
marks (e.g., "string").
The items specified by a single WRITE statement print side by side on one line.
(The line is broken automatically if it is too long.) Strings print exactly as quoted.
The WRITE command itself however does not return a value.
The print line is closed at the end of a WRITE command evaluation. Therefore the
command WRITE ""; (specifying nothing to be printed except the empty string)
causes a line to be skipped.
Examples:
1. If A is X+5, B is itself, C is 123, M is an array, and Q=3, then
write m(q):=a," ",b/c," THANK YOU";
will set M(3) to x+5 and print
M(Q) := X + 5 B/123 THANK YOU
The blanks between the 5 and B, and the 3 and T, come from the blanks in
the quoted strings.
2. To print a table of the squares of the integers from 1 to 20:
for i:=1:20 do write i," ",i^2;
3. To print a table of the squares of the integers from 1 to 20, and at the same
time store them in positions 1 to 20 of an array A:


for i:=1:20 do <<a(i):=i^2; write i," ",a(i)>>;
This will give us two columns of numbers. If we had used
for i:=1:20 do write i," ",a(i):=i^2;
we would also get A(i) := repeated on each line.

4. The following more complete example calculates the famous f and g series, first reported in Sconzo, P., LeSchack, A. R., and Tobey, R., “Symbolic Computation of f and g Series by Computer”, Astronomical Journal 70 (May 1965). The program is:
x1:= -sig*(mu+2*eps)$
x2:= eps - 2*sig^2$
x3:= -3*mu*sig$
f:= 1$
g:= 0$
for i:= 1 step 1 until 10 do begin
f1:= -mu*g+x1*df(f,eps)+x2*df(f,sig)+x3*df(f,mu);
write "f(",i,") := ",f1;
g1:= f+x1*df(g,eps)+x2*df(g,sig)+x3*df(g,mu);
write "g(",i,") := ",g1;
f:=f1$
g:=g1$
end;
A portion of the output, to illustrate the printout from the WRITE command,
is as follows:
...  ...
                          2
F(4) := MU*(3*EPS - 15*SIG  + MU)
G(4) := 6*SIG*MU
                                   2
F(5) := 15*SIG*MU*( - 3*EPS + 7*SIG  - MU)
                          2
G(5) := MU*(9*EPS - 45*SIG  + MU)
...  ...




Suppression of Zeros

It is sometimes annoying to have zero assignments (i.e. assignments of the form ⟨expression⟩ := 0) printed, especially in printing large arrays with many
zero elements. The output from such assignments can be suppressed by turning on
the switch NERO.
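For example (a sketch; the array name is illustrative):

```reduce
on nero;
array a(2);
for i := 0:2 do write a(i) := i - 1;  % the zero assignment a(1) := 0 is not printed
off nero;
```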


FORTRAN Style Output Of Expressions

It is naturally possible to evaluate expressions numerically in REDUCE by giving all variables and sub-expressions numerical values, although, as we pointed out elsewhere, the user must enable real arithmetic by turning on the switch ROUNDED. It should be remembered, however, that arithmetic in REDUCE is not particularly fast, since results are interpreted rather than evaluated in a compiled
form. The user with a large amount of numerical computation after all necessary
algebraic manipulations have been performed is therefore well advised to perform
these calculations in a FORTRAN or similar system. For this purpose, REDUCE
offers facilities for users to produce FORTRAN compatible files for numerical processing.
First, when the switch FORT is on, the system will print expressions in a FORTRAN notation. Expressions begin in column seven. If an expression extends over
one line, a continuation mark (.) followed by a blank appears on subsequent cards.
After a certain number of lines have been produced (according to the value of the
variable CARD_NO), a new expression is started. If the expression printed arises
from an assignment to a variable, the variable is printed as the name of the expression. Otherwise the expression is given the default name ANS. An error occurs if
identifiers or numbers are outside the bounds permitted by FORTRAN.
A second option is to use the WRITE command to produce other programs.
The following REDUCE statements
on fort;
out "forfil";
write "C     this is a fortran program";
write " 1    format(e13.5)";
write "      u=1.23";
write "      v=2.17";
write "      w=5.2";
x:=(u+v+w)^11;
write "C     it was foolish to expand this expression";
write "      print 1,x";
write "      end";
shut "forfil";
off fort;
will generate a file forfil that contains:
c this is a fortran program
. **2*v**8*w+1980.*u**2*v**7*w**2+4620.*u**2*v**6*w**3+
. 6930.*u**2*v**5*w**4+6930.*u**2*v**4*w**5+4620.*u**2*v**3*
. w**6+1980.*u**2*v**2*w**7+495.*u**2*v*w**8+55.*u**2*w**9+
. 11.*u*v**10+110.*u*v**9*w+495.*u*v**8*w**2+1320.*u*v**7*w
. **3+2310.*u*v**6*w**4+2772.*u*v**5*w**5+2310.*u*v**4*w**6
. +1320.*u*v**3*w**7+495.*u*v**2*w**8+110.*u*v*w**9+11.*u*w
. **10+v**11+11.*v**10*w+55.*v**9*w**2+165.*v**8*w**3+330.*
. v**7*w**4+462.*v**6*w**5+462.*v**5*w**6+330.*v**4*w**7+
. 165.*v**3*w**8+55.*v**2*w**9+11.*v*w**10+w**11
. w+55.*u**9*w**2+165.*u**8*v**3+495.*u**8*v**2*w+495.*u**8
. *v*w**2+165.*u**8*w**3+330.*u**7*v**4+1320.*u**7*v**3*w+
. 1980.*u**7*v**2*w**2+1320.*u**7*v*w**3+330.*u**7*w**4+462.
. *u**6*v**5+2310.*u**6*v**4*w+4620.*u**6*v**3*w**2+4620.*u
. **6*v**2*w**3+2310.*u**6*v*w**4+462.*u**6*w**5+462.*u**5*
. v**6+2772.*u**5*v**5*w+6930.*u**5*v**4*w**2+9240.*u**5*v
. **3*w**3+6930.*u**5*v**2*w**4+2772.*u**5*v*w**5+462.*u**5
. *w**6+330.*u**4*v**7+2310.*u**4*v**6*w+6930.*u**4*v**5*w
. **2+11550.*u**4*v**4*w**3+11550.*u**4*v**3*w**4+6930.*u**
. 4*v**2*w**5+2310.*u**4*v*w**6+330.*u**4*w**7+165.*u**3*v
. **8+1320.*u**3*v**7*w+4620.*u**3*v**6*w**2+9240.*u**3*v**
. 5*w**3+11550.*u**3*v**4*w**4+9240.*u**3*v**3*w**5+4620.*u
. **3*v**2*w**6+ans1
it was foolish to expand this expression
print 1,x

If the arguments of a WRITE statement include an expression that requires continuation records, the output will need editing, since the output routine prints the
arguments of WRITE sequentially, and the continuation mechanism therefore generates its auxiliary variables after the preceding expression has been printed.
Finally, since there is no direct analog of list in FORTRAN, a comment line of the form
c ***** invalid fortran construct (list) not printed



will be printed if you try to print a list with FORT on.
FORTRAN Output Options
There are a number of methods available to change the default format of the FORTRAN output.
The breakup of the expression into subparts is such that the number of continuation lines produced is less than a given number. This number can be modified by the assignment
card_no := ⟨number⟩;
where ⟨number⟩ is the total number of cards allowed in a statement. The default
value of CARD_NO is 20.
The width of the output expression is also adjustable by the assignment
fort_width := ⟨integer⟩;
which sets the total width of a given line to ⟨integer⟩. The initial FORTRAN output width is 70.
REDUCE automatically inserts a decimal point after each isolated integer coefficient in a FORTRAN expression (so that, for example, 4 becomes 4. ). To prevent
this, set the PERIOD mode switch to OFF.
FORTRAN output is normally produced in lower case. If upper case is desired, the
switch FORTUPPER should be turned on.
Finally, the default name ANS assigned to an unnamed expression and its subparts
can be changed by the operator VARNAME. This takes a single identifier as argument, which then replaces ANS as the expression name. The value of VARNAME is
its argument.
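The options above might be combined as in the following sketch (the numerical values and the name res are illustrative):

```reduce
on fort;
card_no := 10;     % allow at most 10 cards per statement
fort_width := 72;  % standard FORTRAN line width
off period;        % print integer coefficients without a trailing "."
varname res;       % unnamed expressions print as res, res1, ...
(x+y)^5;
off fort;
```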
Further facilities for the production of FORTRAN and other language output are
provided by the SCOPE and GENTRAN packages, described in their respective chapters.


Saving Expressions for Later Use as Input

It is often useful to save an expression on an external file for use later as input
in further calculations. The commands for opening and closing output files are
explained elsewhere. However, we see in the examples on output of expressions
that the standard “natural” method of printing expressions is not compatible with
the input syntax. So to print the expression in an input compatible form we must



inhibit this natural style by turning off the switch NAT. If this is done, a dollar sign
will also be printed at the end of the expression.
The following sequence of commands
off nat; out "out"; x := (y+z)^2; write "end";
shut "out"; on nat;
will generate a file out that contains
X := Y**2 + 2*Y*Z + Z**2$


Displaying Expression Structure

In those cases where the final result has a complicated form, it is often convenient
to display the skeletal structure of the answer. The operator STRUCTR, which takes a single expression as argument, will do this for you. Its syntax is:

STRUCTR(EXPRN:algebraic[,ID1:identifier[,ID2:identifier]]);
The structure is printed effectively as a tree, in which the subparts are laid out with
auxiliary names. If the optional ID1 is absent, the auxiliary names are prefixed by
the root ANS. This root may be changed by the operator VARNAME. If the optional
ID1 is present, and is an array name, the subparts are named as elements of that
array, otherwise ID1 is used as the root prefix. (The second optional argument
ID2 is explained later.)
The EXPRN can be either a scalar or a matrix expression. Use of any other will
result in an error.
Let us suppose that the workspace contains ((A+B)^2+C)^3+D. Then the input
STRUCTR WS; will (with EXP off) result in the output:
            3
ANS3 := ANS2  + D

            2
ANS2 := ANS1  + C

ANS1 := A + B
The workspace remains unchanged after this operation, since STRUCTR in the default situation returns no value (if STRUCTR is used as a sub-expression, its value
is taken to be 0). In addition, the sub-expressions are normally only displayed and
not retained. If you wish to access the sub-expressions with their displayed names,
the switch SAVESTRUCTR should be turned on. In this case, STRUCTR returns a
list whose first element is a representation for the expression, and subsequent elements are the sub-expression relations. Thus, with SAVESTRUCTR on, STRUCTR
WS in the above example would return


     3              2
{ANS2  + D,ANS2=ANS1  + C,ANS1=A + B}

The PART operator can be used to retrieve the required parts of the expression. For example, to get the value of ANS2 in the above, one could say:

part(structr ws,2,2);
If FORT is on, then the results are printed in the reverse order; the algorithm in fact
guaranteeing that no sub-expression will be referenced before it is defined. The
second optional argument ID2 may also be used in this case to name the actual
expression (or expressions in the case of a matrix argument).
Let us suppose that M, a 2 by 1 matrix, contains the elements ((a+b)^2 + c)^3
+ d and (a + b)*(c + d) respectively, and that V has been declared to be an
array. With EXP off and FORT on, the statement structr(2*m,v,k); will result in the output

        V(1)=A+B
        V(2)=V(1)**2+C
        V(3)=V(2)**3+D
        K(1,1)=2.*V(3)
        V(4)=C+D
        K(2,1)=2.*(A+B)*V(4)

Changing the Internal Order of Variables

The internal ordering of variables (more specifically kernels) can have a significant
effect on the space and time associated with a calculation. In its default state, RE-



DUCE uses a specific order for this which may vary between sessions. However,
it is possible for the user to change this internal order by means of the declaration
KORDER. The syntax for this is:
korder v1,...,vn;
where the Vi are kernels. With this declaration, the Vi are ordered internally ahead
of any other kernels in the system. V1 has the highest order, V2 the next highest,
and so on. A further call of KORDER replaces a previous one. KORDER NIL;
resets the internal order to the system default.
Unlike the ORDER declaration, that has a purely cosmetic effect on the way results
are printed, the use of KORDER can have a significant effect on computation time.
In critical cases, then, the user can experiment with the ordering of the variables to determine the optimum set for a given problem.
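For example, a brief sketch of such an experiment:

```reduce
korder y,x;   % make y the leading kernel internally
% ... run the critical part of the calculation ...
korder nil;   % restore the system default order
```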


Obtaining Parts of Algebraic Expressions

There are many occasions where it is desirable to obtain a specific part of an expression, or even change such a part to another expression. A number of operators
are available in REDUCE for this purpose, and will be described in this section. In
addition, operators for obtaining specific parts of polynomials and rational functions (such as a denominator) are described in another section.


COEFF Operator

COEFF is an operator that partitions EXPRN into its various coefficients with respect to VAR and returns them as a list, with the coefficient independent of VAR first. It is used with the syntax

COEFF(EXPRN:polynomial,VAR:kernel)

Under normal circumstances, an error results if EXPRN is not a polynomial in VAR, although the coefficients themselves can be rational as long as they do not depend on VAR. However, if the switch RATARG is on, denominators are not checked for dependence on VAR, and are taken to be part of the coefficients.
For example,
coeff((y^2+z)^3/z,y);
returns the result

  2
{Z ,0,3*Z,0,3,0,1/Z}.

whereas
coeff((y^2+z)^3/y,y);
gives an error if RATARG is off, and the result

  3        2
{Z /Y,0,3*Z /Y,0,3*Z/Y,0,1/Y}

if RATARG is on.
The length of the result of COEFF is the highest power of VAR encountered plus
1. In the above examples it is 7. In addition, the variable HIGH_POW is set to
the highest non-zero power found in EXPRN during the evaluation, and LOW_POW
to the lowest non-zero power, or zero if there is a constant term. If EXPRN is a
constant, then HIGH_POW and LOW_POW are both set to zero.


COEFFN Operator

The COEFFN operator is designed to give the user a particular coefficient of a variable in a polynomial, as opposed to COEFF, which returns all coefficients. COEFFN is used with the syntax

COEFFN(EXPRN:polynomial,VAR:kernel,N:integer)

It returns the nth coefficient of VAR in the polynomial EXPRN.
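Continuing the COEFF example above, a sketch of COEFFN usage:

```reduce
coeffn((y^2+z)^3/z,y,6);   % the coefficient of y^6, namely 1/z
coeffn((y^2+z)^3/z,y,1);   % 0, since no odd powers of y occur
```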


PART Operator

This operator works on the form of the expression as printed or as it would have
been printed at that point in the calculation bearing in mind all the relevant switch
settings at that point. The reader therefore needs some familiarity with the way
that expressions are represented in prefix form in REDUCE to use these operators
effectively. Furthermore, it is assumed that PRI is ON at that point in the calculation. The reason for this is that with PRI off, an expression is printed by walking
the tree representing the expression internally. To save space, it is never actually



transformed into the equivalent prefix expression as occurs when PRI is on. However, the operations on polynomials described elsewhere can be equally well used
in this case to obtain the relevant parts.
The syntax of the operator is

PART(EXPRN:algebraic[,INTEXP1:integer[,INTEXP2:integer ...]]):algebraic

The evaluation proceeds recursively down the integer expression list. In other words,

PART(⟨expression⟩,⟨integer1⟩,⟨integer2⟩)
→ PART(PART(⟨expression⟩,⟨integer1⟩),⟨integer2⟩)

and so on, and

PART(⟨expression⟩) → ⟨expression⟩.
INTEXP can be any expression that evaluates to an integer. If the integer is positive, then that term of the expression is found. If the integer is 0, the operator
is returned. Finally, if the integer is negative, the counting is from the tail of the
expression rather than the head.
For example, if the expression a+b is printed as A+B (i.e., the ordering of the
variables is alphabetical), then
part(a+b,2)  ->  B
part(a+b,-1) ->  B





An operator ARGLENGTH is available to determine the number of arguments of the
top level operator in an expression. If the expression does not contain a top level
operator, then −1 is returned. For example,
arglength(a+b+c) ->  3



Substituting for Parts of Expressions

PART may also be used to substitute for a given part of an expression. In this case,
the PART construct appears on the left-hand side of an assignment statement, and
the expression to replace the given part on the right-hand side.
For example, with the normal settings of the REDUCE switches:
xx := a+b;
part(xx,2) := c;   ->  A + C
part(c+d,0) := -;  ->  C - D



Note that xx in the above is not changed by this substitution. In addition, unlike expressions such as array and matrix elements that have an instant evaluation
property, the values of part(xx,2) and part(c+d,0) are also not changed.



Chapter 9

Polynomials and Rationals
Many operations in computer algebra are concerned with polynomials and rational
functions. In this section, we review some of the switches and operators available
for this purpose. These are in addition to those that work on general expressions
(such as DF and INT) described elsewhere. In the case of operators, the arguments
are first simplified before the operations are applied. In addition, they operate
only on arguments of prescribed types, and produce a type mismatch error if given
arguments which cannot be interpreted in the required mode with the current switch
settings. For example, if an argument is required to be a kernel and a/2 is used
(with no other rules for A), an error
A/2 invalid as kernel
will result.
With the exception of those that select various parts of a polynomial or rational
function, these operations have potentially significant effects on the space and time
associated with a given calculation. The user should therefore experiment with
their use in a given calculation in order to determine the optimum set for a given problem.
One such operation provided by the system is an operator LENGTH which returns
the number of top level terms in the numerator of its argument. For example,
length ((a+b+c)^3/(c+d));
has the value 10. To get the number of terms in the denominator, one would first
select the denominator by the operator DEN and then call LENGTH, as in
length den ((a+b+c)^3/(c+d));
Other operations currently supported, the relevant switches and operators, and the



required argument and value modes of the latter, follow.


Controlling the Expansion of Expressions

The switch EXP controls the expansion of expressions. If it is off, no expansion of
powers or products of expressions occurs. Users should note however that in this
case results come out in a normal but not necessarily canonical form. This means
that zero expressions simplify to zero, but that two equivalent expressions need not
necessarily simplify to the same form.
Example: With EXP on, the two expressions

(a+b)^2  and  a^2 + 2*a*b + b^2

will both simplify to the latter form. With EXP off, they would remain unchanged,
unless the complete factoring (ALLFAC) option were in force. EXP is normally on.
Several operators that expect a polynomial as an argument behave differently when EXP is off, since there is often only one term at the top level. For example, with EXP off

length((a+b)^2);

returns the value 1.


Factorization of Polynomials

REDUCE is capable of factorizing univariate and multivariate polynomials that
have integer coefficients, finding all factors that also have integer coefficients. The
package for doing this was written by Dr. Arthur C. Norman and Ms. P. Mary Ann
Moore at The University of Cambridge. It is described in P. M. A. Moore and A.
C. Norman, “Implementing a Polynomial Factorization and GCD Package”, Proc.
SYMSAC ’81, ACM (New York) (1981), 109-116.
The easiest way to use this facility is to turn on the switch FACTOR, which causes
all expressions to be output in a factored form. For example, with FACTOR on, the
expression A^2-B^2 is returned as (A+B)*(A-B).
It is also possible to factorize a given expression explicitly. The operator FACTORIZE that invokes this facility is used with the syntax

FACTORIZE(EXPRN:polynomial[,INTEXP:prime integer]):list,

the optional argument of which will be described later. Thus to find and display all factors of the cyclotomic polynomial x^105 − 1, one could write:

factorize(x^105-1);
The result is a list of factor,exponent pairs. In the above example, there is no overall
numerical factor in the result, so the results will consist only of polynomials in x.
The number of such polynomials can be found by using the operator LENGTH. If
there is a numerical factor, as in factorizing 12x^2 − 12, that factor will appear as
the first member of the result. It will however not be factored further. Prime factors
of such numbers can be found, using a probabilistic algorithm, by turning on the
switch IFACTOR. For example,
on ifactor; factorize(12x^2-12);
would result in the output
{{2,2},{3,1},{X + 1,1},{X - 1,1}}.
If the first argument of FACTORIZE is an integer, it will be decomposed into its
prime components, whether or not IFACTOR is on.
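For example (the integer is illustrative):

```reduce
factorize 102;   % -> {{2,1},{3,1},{17,1}}, since 102 = 2*3*17
```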
Note that the IFACTOR switch only affects the result of FACTORIZE. It has no
effect if the FACTOR switch is also on.
The order in which the factors occur in the result (with the exception of a possible overall numerical coefficient which comes first) can be system dependent and
should not be relied on. Similarly it should be noted that any pair of individual factors can be negated without altering their product, and that REDUCE may
sometimes do that.
The factorizer works by first reducing multivariate problems to univariate ones and
then solving the univariate ones modulo small primes. It normally selects both
evaluation points and primes using a random number generator that should lead
to different detailed behavior each time any particular problem is tackled. If, for
some reason, it is known that a certain (probably univariate) factorization can be
performed effectively with a known prime, P say, this value of P can be handed to
FACTORIZE as a second argument. An error will occur if a non-prime is provided
to FACTORIZE in this manner. It is also an error to specify a prime that divides
the discriminant of the polynomial being factored, but users should note that this condition is not checked by the program, so this capability should be used with care.
Factorization can be performed over a number of polynomial coefficient domains



in addition to integers. The particular description of the relevant domain should
be consulted to see if factorization is supported. For example, the following statements will factorize x^4 + 1 modulo 7:
setmod 7;
on modular;
factorize(x^4+1);
The factorization module is provided with a trace facility that may be useful as a
way of monitoring progress on large problems, and of satisfying curiosity about the
internal workings of the package. The most simple use of this is enabled by issuing
the REDUCE command on trfac; . Following this, all calls to the factorizer
will generate informative messages reporting on such things as the reduction of
multivariate to univariate cases, the choice of a prime and the reconstruction of
full factors from their images. Further levels of detail in the trace are intended
mainly for system tuners and for the investigation of suspected bugs. For example,
TRALLFAC gives tracing information at all levels of detail. The switch that can
be set by on timings; makes it possible for one who is familiar with the algorithms used to determine what part of the factorization code is consuming the most
resources. on overview; reduces the amount of detail presented in other forms
of trace. Other forms of trace output are enabled by directives of the form
symbolic set!-trace!-factor(⟨number⟩,⟨filename⟩);
where useful numbers are 1, 2, 3 and 100, 101, ... . This facility is intended to make
it possible to discover in fairly great detail what just some small part of the code has
been doing — the numbers refer mainly to depths of recursion when the factorizer
calls itself, and to the split between its work forming and factorizing images and
reconstructing full factors from these. If NIL is used in place of a filename the
trace output requested is directed to the standard output stream. After use of this
trace facility the generated trace files should be closed by calling
symbolic close!-trace!-files();
NOTE: Using the factorizer with MCD off will result in an error.


Cancellation of Common Factors

Facilities are available in REDUCE for cancelling common factors in the numerators and denominators of expressions, at the option of the user. The system will
perform this greatest common divisor computation if the switch GCD is on. (GCD
is normally off.)



A check is automatically made, however, for common variable and numerical products in the numerators and denominators of expressions, and the appropriate cancellations made.
When GCD is on, and EXP is off, a check is made for square free factors in an
expression. This includes separating out and independently checking the content
of a given polynomial where appropriate. (For an explanation of these terms, see
Anthony C. Hearn, “Non-Modular Computation of Polynomial GCDs Using Trial
Division”, Proc. EUROSAM 79, published as Lecture Notes on Comp. Science,
Springer-Verlag, Berlin, No 72 (1979) 227-239.)
Example: With EXP off and GCD on, the polynomial a*c+a*d+b*c+b*d would
be returned as (A+B)*(C+D).
Under normal circumstances, GCDs are computed using an algorithm described in
the above paper. It is also possible in REDUCE to compute GCDs using an alternative algorithm, called the EZGCD Algorithm, which uses modular arithmetic.
The switch EZGCD, if on in addition to GCD, makes this happen.
In non-trivial cases, the EZGCD algorithm is almost always better than the basic algorithm, often by orders of magnitude. We therefore strongly advise users to use the EZGCD switch where they have the resources available for supporting the package.
For a description of the EZGCD algorithm, see J. Moses and D.Y.Y. Yun, “The EZ
GCD Algorithm”, Proc. ACM 1973, ACM, New York (1973) 159-166.
NOTE: This package shares code with the factorizer, so a certain amount of trace
information can be produced using the factorizer trace switches.
An implementation of the heuristic GCD algorithm, first introduced by B.W. Char,
K.O. Geddes and G.H. Gonnet, as described in J.H. Davenport and J. Padget,
“HEUGCD: How Elementary Upperbounds Generate Cheaper Data”, Proc. of EUROCAL ’85, Vol 2, 18-28, published as Lecture Notes in Computer Science, No. 204,
Springer-Verlag, Berlin, 1985, is also available on an experimental basis. To use
this algorithm, the switch HEUGCD should be on in addition to GCD. Note that if
both EZGCD and HEUGCD are on, the former takes precedence.
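As a sketch of the switch usage (default settings otherwise assumed):

```reduce
on gcd, ezgcd;   % use the modular EZ GCD algorithm for cancellation
(x^3 - y^3)/(x^2 - y^2);
   % the common factor x - y cancels, leaving
   % (x**2 + x*y + y**2)/(x + y)
off gcd, ezgcd;
```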


Determining the GCD of Two Polynomials

This operator, used with the syntax
GCD(EXPRN1:polynomial,EXPRN2:polynomial):polynomial
returns the greatest common divisor of the two polynomials EXPRN1 and EXPRN2.
Examples:
gcd(x^2+2*x+1,x^2+3*x+2) ->  X + 1
gcd(2*x^2-2*y^2,4*x+4*y)  ->  2*X + 2*Y



Working with Least Common Multiples

Greatest common divisor calculations can often become expensive if extensive
work with large rational expressions is required. However, in many cases, the only
significant cancellations arise from the fact that there are often common factors
in the various denominators which are combined when two rationals are added.
Since these denominators tend to be smaller and more regular in structure than the
numerators, considerable savings in both time and space can occur if a full GCD
check is made when the denominators are combined and only a partial check when
numerators are constructed. In other words, the true least common multiple of
the denominators is computed at each step. The switch LCM is available for this
purpose, and is normally on.
In addition, the operator LCM, used with the syntax
LCM(EXPRN1:polynomial,EXPRN2:polynomial):polynomial
returns the least common multiple of the two polynomials EXPRN1 and EXPRN2.
Examples:
lcm(x^2+2*x+1,x^2+3*x+2) ->  X**3 + 4*X**2 + 5*X + 2
lcm(2*x^2-2*y^2,4*x+4*y)  ->  4*(X**2 - Y**2)

Controlling Use of Common Denominators

When two rational functions are added, REDUCE normally produces an expression
over a common denominator. However, if the user does not want denominators
combined, he or she can turn off the switch MCD which controls this process. The
latter switch is particularly useful if no greatest common divisor calculations are
desired, or excessive differentiation of rational functions is required.
CAUTION: With MCD off, results are not guaranteed to come out in either normal
or canonical form. In other words, an expression equivalent to zero may in fact not
be simplified to zero. This option is therefore most useful for avoiding expression
swell during intermediate parts of a calculation.
MCD is normally on.
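The effect of the switch can be sketched as follows (output shapes indicative):

```reduce
off mcd;
a/b + c/d;   % denominators kept separate: a/b + c/d
on mcd;
a/b + c/d;   % combined over a common denominator:
             % (a*d + b*c)/(b*d)
```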




divide and mod / remainder Operators

The operators divide and mod / remainder implement Euclidean division of
polynomials. The remainder operator is used with the syntax
REMAINDER(EXPRN1:polynomial,EXPRN2:polynomial):polynomial
It returns the remainder when EXPRN1 is divided by EXPRN2. This is the true
remainder based on the internal ordering of the variables, and not the pseudo-remainder. For example:
remainder((x+y)*(x+2*y),x+3*y) ->  2*Y**2


CAUTION: In the default case, remainders are calculated over the integers. If you
need the remainder with respect to another domain, it must be declared explicitly.
remainder(x^2-2,x+sqrt(2)); -> X^2 - 2
load_package arnum;
defpoly sqrt2**2-2;
remainder(x^2-2,x+sqrt2); -> 0
The infix operator mod is an alias for remainder, e.g.
(x^2 + y^2) mod (x - y);
2*y^2
The Euclidean division operator divide is used with the syntax
DIVIDE(EXPRN1:polynomial,EXPRN2:polynomial):list
and returns both the quotient and the remainder together as the first and second
elements of a list, e.g.
divide(x^2 + y^2, x - y);
{x + y,2*y^2}



It can also be used as an infix operator:
(x^2 + y^2) divide (x - y);
{x + y,2*y^2}
All Euclidean division operators (when used in prefix form, and including the
standard remainder operator) accept an optional third argument, which specifies the main variable to be used during the division. The default is the leading
kernel in the current global ordering. Specifying the main variable does not change
the ordering of any other variables involved, nor does it change the global environment. For example
remainder(x^2 + y^2, x - y, y);
2*x^2
divide(x^2 + y^2, x - y, y);
{ - (x + y),2*x^2}
Specifying x as main variable gives the same behaviour as the default shown earlier, i.e.
divide(x^2 + y^2, x - y, x);
{x + y,2*y^2}
remainder(x^2 + y^2, x - y, x);
2*y^2


Polynomial Pseudo-Division

The polynomial division discussed above is normally most useful for a univariate
polynomial over a field, otherwise the division is likely to fail giving trivially a zero
quotient and a remainder equal to the dividend. (A ring of univariate polynomials
is a Euclidean domain only if the coefficient ring is a field.) For example, over the
integers:
divide(x^2 + y^2, 2(x - y));
{0,x^2 + y^2}

The division of a polynomial u(x) of degree m by a polynomial v(x) of degree n ≤ m can be performed over any commutative ring with identity (such
as the integers, or any polynomial ring) if the polynomial u(x) is first multiplied by lc(v, x)^(m-n+1) (where lc denotes the leading coefficient). This is called
pseudo-division. The polynomial pseudo-division operators pseudo_divide,
pseudo_quotient (or pseudo_div) and pseudo_remainder are implemented as prefix operators (only). When multivariate polynomials are pseudo-divided it is important which variable is taken as the main variable, because the
leading coefficient of the divisor is computed with respect to this variable. Therefore, if this is allowed to default and there is any ambiguity, i.e. the polynomials are
multivariate or contain more than one kernel, the pseudo-division operators output
a warning message to indicate which kernel has been selected as the main variable
– it is the first kernel found in the internal forms of the dividend and divisor. (As
usual, the warning can be turned off by setting the switch msg to off.) For example
pseudo_divide(x^2 + y^2, x - y);
*** Main division variable selected is x
{x + y,2*y^2}
pseudo_divide(x^2 + y^2, x - y, x);
{x + y,2*y^2}
pseudo_divide(x^2 + y^2, x - y, y);
{ - (x + y),2*x^2}
If the leading coefficient of the divisor is a unit (invertible element) of the coefficient ring then division and pseudo-division should be identical, otherwise they are
not, e.g.
divide(x^2 + y^2, 2(x - y));
{0,x^2 + y^2}
pseudo_divide(x^2 + y^2, 2(x - y));
*** Main division variable selected is x
{2*(x + y),8*y^2}
The pseudo-division gives essentially the same result as would division over the
field of fractions of the coefficient ring (apart from the overall factors [contents] of
the quotient and remainder), e.g.
on rational;
divide(x^2 + y^2, 2(x - y));
{1/2*(x + y),2*y^2}
pseudo_divide(x^2 + y^2, 2(x - y));
*** Main division variable selected is x
{2*(x + y),8*y^2}
Polynomial division and pseudo-division can only be applied to what REDUCE
regards as polynomials, i.e. rational expressions with denominator 1, e.g.
off rational;
pseudo_divide((x^2 + y^2)/2, x - y);

***** (x^2 + y^2)/2 invalid as polynomial
Pseudo-division is implemented in the polydiv package using an algorithm (D.
E. Knuth 1981, Seminumerical Algorithms, Algorithm R, page 407) that does not



perform any actual division at all (which proves that it applies over a ring). It is
more efficient than the naive algorithm, and it also has the advantage that it works
over coefficient domains in which REDUCE may not be able to perform in practice divisions that are possible mathematically. An example of this is coefficient
domains involving algebraic numbers, such as the integers extended by sqrt(2), as
illustrated in the file polydiv.tst.
The implementation attempts to be reasonably efficient, except that it always computes the quotient internally even when only the remainder is required (as does the
standard remainder operator).
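The defining relation lc(v,x)^(m-n+1) * u = q*v + r can be checked directly in REDUCE (a sketch; first and second select the list elements returned by pseudo_divide):

```reduce
u := x^2 + y^2;  v := 2*(x - y);
pd := pseudo_divide(u, v, x);   % {2*(x + y),8*y^2}
lcof(v,x)^(deg(u,x)-deg(v,x)+1)*u - (first(pd)*v + second(pd));
   % -> 0, confirming the pseudo-division identity
```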



RESULTANT Operator

This is used with the syntax
RESULTANT(EXPRN1:polynomial,EXPRN2:polynomial,VAR:kernel):polynomial
It computes the resultant of the two given polynomials with respect to the given
variable; the coefficients of the polynomials can be taken from any domain. The
result can be identified as the determinant of a Sylvester matrix, but can often
also be thought of informally as the result obtained when the given variable is
eliminated between the two input polynomials. If the two input polynomials have
a non-trivial GCD their resultant vanishes.
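For instance (a sketch; the second value follows by evaluating the quadratic at the root of the linear factor):

```reduce
resultant((x-1)*(x-2), (x-1)*(x-3), x);
   % -> 0, since the inputs share the factor x - 1
resultant(x^2 - 2, x - 3, x);
   % -> 7, the value 3^2 - 2
```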
The switch BEZOUT controls the computation of the resultants. It is off by default.
In this case a subresultant algorithm is used. If the switch Bezout is turned on,
the resultant is computed via the Bezout Matrix. However, in the latter case, only
polynomial coefficients are permitted.
The sign conventions used by the resultant function follow those in R. Loos, “Computing in Algebraic Extensions” in “Computer Algebra — Symbolic and Algebraic
Computation”, Second Ed., Edited by B. Buchberger, G.E. Collins and R. Loos,
Springer-Verlag, 1983. Namely, with A and B not dependent on X:

resultant(p(x),q(x),x) = (-1)^(deg(p)*deg(q)) * resultant(q(x),p(x),x)

resultant(a,p(x),x) = a^deg(p)

resultant(a,b,x) = 1

Example of a calculation in an algebraic extension:
load arnum;
defpoly sqrt2**2 - 2;
resultant(x + sqrt2,sqrt2 * x +1,x) -> -1
or in a modular domain:
setmod 17;
on modular;


-> 5


The DECOMPOSE operator takes a multivariate polynomial as argument, and returns an expression and a list of equations from which the original polynomial can
be found by composition. Its syntax is:
DECOMPOSE(EXPRN:polynomial):list
For example:
decompose(x^8-88*x^7+2924*x^6-43912*x^5+263431*x^4
          -218900*x^3+65690*x^2-7700*x+234)
-> {U^2 + 35*U + 234, U=V^2 + 10*V, V=X^2 - 22*X}
decompose(u^2+v^2+2u*v+1) -> {W^2 + 1, W=U + V}
Users should note however that, unlike factorization, this decomposition is not unique.




INTERPOL operator

INTERPOL(⟨values⟩,⟨variable⟩,⟨points⟩);
where ⟨values⟩ and ⟨points⟩ are lists of equal length and ⟨variable⟩ is an algebraic
expression (preferably a kernel).
INTERPOL generates an interpolation polynomial f in the given variable of degree
length(⟨values⟩)-1. The unique polynomial f is defined by the property that for
corresponding elements v of ⟨values⟩ and p of ⟨points⟩ the relation f(p) = v holds.
The Aitken-Neville interpolation algorithm is used which guarantees a stable result
even with rounded numbers and an ill-conditioned problem.
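For example, interpolating the values 2, 5, 10 at the points 1, 2, 3 recovers the quadratic x^2 + 1 (a sketch):

```reduce
interpol({2,5,10}, x, {1,2,3});
   % -> x**2 + 1, since 1^2+1=2, 2^2+1=5, 3^2+1=10
```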


Obtaining Parts of Polynomials and Rationals

These operators select various parts of a polynomial or rational function structure.
Except for the cost of rearrangement of the structure, these operations take very
little time to perform.
For those operators in this section that take a kernel VAR as their second argument,
an error results if the first expression is not a polynomial in VAR, although the coefficients themselves can be rational as long as they do not depend on VAR. However,
if the switch RATARG is on, denominators are not checked for dependence on VAR,
and are taken to be part of the coefficients.


DEG Operator

This operator is used with the syntax
DEG(EXPRN:polynomial,VAR:kernel):integer
It returns the leading degree of the polynomial EXPRN in the variable VAR. If VAR
does not occur as a variable in EXPRN, 0 is returned. Examples:
deg((a+b)*(c+2*d)^2,a) ->  1
deg((a+b)*(c+2*d)^2,d) ->  2
deg((a+b)*(c+2*d)^2,e) ->  0
Note also that if RATARG is on,
deg((a+b)^3/a,a) ->  3
since in this case, the denominator A is considered part of the coefficients of the
numerator in A. With RATARG off, however, an error would result in this case.


DEN Operator

This is used with the syntax:
DEN(EXPRN:rational):polynomial
It returns the denominator of the rational expression EXPRN. If EXPRN is a polynomial, 1 is returned. Examples:
den(x/y^2)  ->  Y**2
den(100/6)  ->  3
[since 100/6 is first simplified to 50/3]
den(a/4+b/6)  ->  12
den(a+b)  ->  1


LCOF Operator

LCOF is used with the syntax
LCOF(EXPRN:polynomial,VAR:kernel):polynomial
It returns the leading coefficient of the polynomial EXPRN in the variable VAR. If
VAR does not occur as a variable in EXPRN, EXPRN is returned. Examples:
lcof((a+b)*(c+2*d)^2,a) ->  C**2 + 4*C*D + 4*D**2
lcof((a+b)*(c+2*d)^2,d) ->  4*(A + B)



LPOWER Operator

LPOWER is used with the syntax
LPOWER(EXPRN:polynomial,VAR:kernel):polynomial
It returns the leading power of EXPRN with respect to VAR. If EXPRN does not
depend on VAR, 1 is returned. Examples:
lpower((a+b)*(c+2*d)^2,a) ->  A
lpower((a+b)*(c+2*d)^2,d) ->  D**2



LTERM Operator

LTERM is used with the syntax
LTERM(EXPRN:polynomial,VAR:kernel):polynomial
It returns the leading term of EXPRN with respect to VAR. If EXPRN does
not depend on VAR, EXPRN is returned. Examples:
lterm((a+b)*(c+2*d)^2,a) ->  A*(C**2 + 4*C*D + 4*D**2)
lterm((a+b)*(c+2*d)^2,d) ->  4*D**2*(A + B)

Compatibility Note: In some earlier versions of REDUCE, LTERM returned 0 if
EXPRN did not depend on VAR. In the present version, EXPRN is always equal
to LTERM(EXPRN, VAR) + REDUCT(EXPRN, VAR).


MAINVAR Operator

This is used with the syntax
MAINVAR(EXPRN:polynomial):expression
Returns the main variable (based on the internal polynomial representation) of
EXPRN. If EXPRN is a domain element, 0 is returned.
Examples, assuming A has higher kernel order than B, C, or D:
mainvar((a+b)*(c+2*d)^2) ->  A
mainvar(2) ->  0


NUM Operator




This is used with the syntax
NUM(EXPRN:rational):polynomial
Returns the numerator of the rational expression EXPRN. If EXPRN is a polynomial, that polynomial is returned. Examples:
num(x/y^2)  ->  X
num(100/6)  ->  50
num(a/4+b/6)  ->  3*A+2*B
num(a+b)  ->  A+B


REDUCT Operator

This is used with the syntax
REDUCT(EXPRN:polynomial,VAR:kernel):polynomial
Returns the reductum of EXPRN with respect to VAR (i.e., the part of EXPRN left
after the leading term is removed). If EXPRN does not depend on the variable VAR,
0 is returned. Examples:
reduct((a+b)*(c+2*d),a) ->  B*(C + 2*D)
reduct((a+b)*(c+2*d),d) ->  C*(A + B)
reduct((a+b)*(c+2*d),e) ->  0

Compatibility Note: In some earlier versions of REDUCE, REDUCT returned
EXPRN if it did not depend on VAR. In the present version, EXPRN is always equal
to LTERM(EXPRN, VAR) + REDUCT(EXPRN, VAR).
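The part-selection operators fit together: the leading term plus the reductum reconstructs the polynomial, and the leading term is the leading coefficient times the leading power. A sketch:

```reduce
p := (a+b)*(c+2*d)^2;
lterm(p,d) + reduct(p,d) - p;        % -> 0
lcof(p,d)*lpower(p,d) - lterm(p,d);  % -> 0
```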




TOTALDEG Operator

totaldeg(a*x^2+b*x+c, x)        => 2
totaldeg(a*x^2+b*x+c, {a,b,c})  => 1
totaldeg(a*x^2+b*x+c, {x, a})   => 3
totaldeg(a*x^2+b*x+c, {x,b})    => 2
totaldeg(a*x^2+b*x+c, {p,q,r})  => 0

totaldeg(u, kernlist) finds the total degree of the polynomial u in the
variables in kernlist. If kernlist is not a list it is treated as a simple single



variable. The denominator of u is ignored, and "degree" here does not pay attention
to fractional powers. Mentions of a kernel within the argument to any operator or
function (eg sin, cos, log, sqrt) are ignored. Really u is expected to be just a
polynomial in the given variables.


Polynomial Coefficient Arithmetic

REDUCE allows for a variety of numerical domains for the numerical coefficients
of polynomials used in calculations. The default mode is integer arithmetic, although the possibility of using real coefficients has been discussed elsewhere. Rational coefficients have also been available by using integer coefficients in both the
numerator and denominator of an expression, using the ON DIV option to print the
coefficients as rationals. However, REDUCE includes several other coefficient options in its basic version which we shall describe in this section. All such coefficient
modes are supported in a table-driven manner so that it is straightforward to extend
the range of possibilities. A description of how to do this is given in R.J. Bradford, A.C. Hearn, J.A. Padget and E. Schrüfer, “Enlarging the REDUCE Domain
of Computation,” Proc. of SYMSAC ’86, ACM, New York (1986), 100–106.


Rational Coefficients in Polynomials

Instead of treating rational numbers as the numerator and denominator of a rational
expression, it is also possible to use them as polynomial coefficients directly. This
is accomplished by turning on the switch RATIONAL.
Example: With RATIONAL off, the input expression a/2 would be converted
into a rational expression, whose numerator was A and denominator 2. With
RATIONAL on, the same input would become a rational expression with numerator
1/2*A and denominator 1. Thus the latter can be used in operations that require
polynomial input whereas the former could not.


Real Coefficients in Polynomials

The switch ROUNDED permits the use of arbitrary sized real coefficients in polynomial expressions. The actual precision of these coefficients can be set by the
operator PRECISION. For example, precision 50; sets the precision to fifty
decimal digits. The default precision is system dependent and can be found by
precision 0;. In this mode, denominators are automatically made monic, and
an appropriate adjustment is made to the numerator.
Example: With ROUNDED on, the input expression a/2 would be converted into a
rational expression whose numerator is 0.5*A and denominator 1.
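A short session sketch:

```reduce
on rounded;
precision 6;
1/3;          % -> 0.333333
a/2;          % -> 0.5*a
off rounded;
```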



Internally, REDUCE uses floating point numbers up to the precision supported by
the underlying machine hardware, and so-called bigfloats for higher precision or
whenever necessary to represent numbers whose value cannot be represented in
floating point. The internal precision is two decimal digits greater than the external
precision to guard against roundoff inaccuracies. Bigfloats represent the fraction
and exponent parts of a floating-point number by means of (arbitrary precision)
integers, which is a more precise representation in many cases than the machine
floating point arithmetic, but not as efficient. If a case arises where use of the
machine arithmetic leads to problems, a user can force REDUCE to use the bigfloat
representation at all precisions by turning on the switch ROUNDBF. In rare cases,
this switch is turned on by the system, and the user informed by the message
ROUNDBF turned on to increase accuracy
Rounded numbers are normally printed to the specified precision. However, if the
user wishes to print such numbers with less precision, the printing precision can be
set by the command PRINT_PRECISION. For example, print_precision
5; will cause such numbers to be printed with five digits maximum.
Under normal circumstances when ROUNDED is on, REDUCE converts the number
1.0 to the integer 1. If this is not desired, the switch NOCONVERT can be turned on.
Numbers that are stored internally as bigfloats are normally printed with a space
between every five digits to improve readability. If this feature is not required, it
can be suppressed by turning off the switch BFSPACE.
Further information on the bigfloat arithmetic may be found in T. Sasaki, “Manual for Arbitrary Precision Real Arithmetic System in REDUCE”, Department of
Computer Science, University of Utah, Technical Note No. TR-8 (1979).
When a real number is input, it is normally truncated to the precision in effect
at the time the number is read. If it is desired to keep the full precision of all
numbers input, the switch ADJPREC (for adjust precision) can be turned on. While
on, ADJPREC will automatically increase the precision, when necessary, to match
that of any integer or real input, and a message printed to inform the user of the
precision increase.
When ROUNDED is on, rational numbers are normally converted to rounded representation. However, if a user wishes to keep such numbers in a rational form
until used in an operation that returns a real number, the switch ROUNDALL can be
turned off. This switch is normally on.
Results from rounded calculations are returned in rounded form with two exceptions: if the result is recognized as 0 or 1 to the current precision, the integer result
is returned.




Modular Number Coefficients in Polynomials

REDUCE includes facilities for manipulating polynomials whose coefficients are
computed modulo a given base. To use this option, two commands must be used;
SETMOD ⟨integer⟩, to set the prime modulus, and ON MODULAR to cause the
actual modular calculations to occur. For example, with setmod 3; and on
modular;, the polynomial (a+2*b)^3 would become A^3+2*B^3.
The argument of SETMOD is evaluated algebraically, except that non-modular (integer) arithmetic is used. Thus the sequence
setmod 3; on modular; setmod 7;
will correctly set the modulus to 7.
Modular numbers are by default represented by integers in the interval [0,p-1]
where p is the current modulus. Sometimes it is more convenient to use an equivalent symmetric representation in the interval [-p/2+1,p/2], or more precisely
[-floor((p-1)/2), ceiling((p-1)/2)], especially if the modular numbers map objects that
include negative quantities. The switch BALANCED_MOD allows you to select the
symmetric representation for output.
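For example (a sketch; printed forms indicative):

```reduce
setmod 7;  on modular;
(x + 5)^2;     % -> x^2 + 3*x + 4   (10 = 3 and 25 = 4 mod 7)
on balanced_mod;
ws;            % -> x^2 + 3*x - 3   (4 = -3 in the symmetric range)
```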
Users should note that the modular calculations are on the polynomial coefficients
only. It is not currently possible to reduce the exponents since no check for a prime
modulus is made (which would allow x^(p-1) to be reduced to 1 mod p). Note also
that any division by a number not co-prime with the modulus will result in the error
“Invalid modular division”.


Complex Number Coefficients in Polynomials

Although REDUCE routinely treats the square of the variable i as equivalent to −1,
this is not sufficient to reduce expressions involving i to lowest terms, or to factor
such expressions over the complex numbers. For example, in the default case,
factorize(a^2+1);
gives the result
{{A**2 + 1,1}}
and the expression
(a^2+b^2)/(a+i*b)
is not reduced further. However, if the switch COMPLEX is turned on, full complex



arithmetic is then carried out. In other words, the above factorization will give the result
{{A + I,1},{A - I,1}}
and the quotient will be reduced to A-I*B.
The switch COMPLEX may be combined with ROUNDED to give complex real numbers; the appropriate arithmetic is performed in this case.
Complex conjugation is used to remove complex numbers from denominators
of expressions. To do this if COMPLEX is off, you must turn the switch RATIONALIZE on.
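For example, with the RATIONALIZE switch on (independently of COMPLEX):

```reduce
on rationalize;
1/(a + i*b);    % -> (a - i*b)/(a^2 + b^2)
off rationalize;
```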


ROOT_VAL Operator

The ROOT_VAL operator takes a single univariate polynomial as argument, and
returns a list of root values at system precision (or greater if required to separate
roots). It is used with the syntax
ROOT_VAL(EXPRN:univariate polynomial):list.
For example, the sequence
on rounded; root_val(x^3-x-1);
gives the result
{0.562279512062*I - 0.662358978622, - 0.562279512062*I
- 0.662358978622,1.32471795724}

Chapter 10

Assigning and Testing Algebraic Properties

Sometimes algebraic expressions can be further simplified if there is additional
information about the value ranges of their components. The following section describes how to inform REDUCE of such assumptions.


REALVALUED Declaration and Check

The declaration REALVALUED may be used to restrict variables to the real numbers. The syntax is:
realvalued v1,...vn;
For such variables the operator IMPART gives the result zero. Thus, with
realvalued x,y;
the expression impart(x+sin(y)) is evaluated as zero. You may also declare
an operator as real valued with the meaning, that this operator maps real arguments
always to real values. Example:
operator h; realvalued h,x;
impart h(x);  ->  0
impart h(w);  ->  impart(h(w))


Such declarations are not needed for the standard elementary functions.
To remove the property from a variable or an operator use the declaration
NOTREALVALUED with the syntax:
notrealvalued v1,...vn;
The boolean operator REALVALUEDP allows you to check if a variable, an operator, or an operator expression is known as real valued. Thus,
realvalued x;
write if realvaluedp(sin x) then "yes" else "no";
write if realvaluedp(sin z) then "yes" else "no";
would print first yes and then no. For general expressions, test the IMPART to
check the value range:
realvalued x,y; w:=(x+i*y); w1:=conj w;
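For a variable built from declared-real parts, the product with its conjugate is real, so its IMPART vanishes (a sketch):

```reduce
realvalued x,y;
w := x + i*y;  w1 := conj w;
impart(w*w1);    % -> 0, since w*w1 = x^2 + y^2
```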



The declaration SELFCONJUGATE may be used to declare an operator to be self-conjugate in the sense that conj(f(z)) = f(conj(z)). The syntax is:
selfconjugate f1,...fn;
Such declarations are not needed for the standard elementary functions nor for
the inverses atan, acot, asinh, acsch. The remaining inverse functions
log, asin, acos, atanh, acosh etc. and sqrt fail to be self-conjugate
on their branch cuts (which are all subsets of the real axis).
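A sketch of the effect (output indicative):

```reduce
operator f;  selfconjugate f;
realvalued x,y;
conj f(x + i*y);    % the conjugation moves inside: f(x - i*y)
```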




Declaring Expressions Positive or Negative

Detailed knowledge about the sign of expressions allows REDUCE to simplify expressions involving exponentials or ABS. You can express assumptions about the
positivity or negativity of expressions by rules for the operator SIGN. Examples:
let sign(a)=>1,sign(b)=>1; abs(a*b*c);
a*b*abs(c)
on precise; sqrt(x^2-2x+1);
abs(x - 1)
ws where sign(x-1)=>1;
x - 1
Here factors with known sign are factored out of an ABS expression.
on precise; on factor;
((x - 2)*q)^w;
ws where sign(x-2)=>1;
q^w*(x - 2)^w
In this case the factor (x - 2)^w may be extracted from the base of the exponential
because it is known to be positive.
Note that REDUCE knows a lot about sign propagation. For example, with x and y
positive, the expressions x + y, x + y + π and (x + e)/y^2 are also known to be positive. Nevertheless, it is often
necessary to declare additionally the sign of a combined expression. E.g. at present
a positivity declaration of x − 2 does not automatically lead to sign evaluation for
x − 1 or for x.
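Sign assumptions are stated as rules for the SIGN operator, e.g. (a sketch, mirroring the ABS example above):

```reduce
let sign(u)=>1, sign(v)=>1;
abs(u*v*w);    % -> u*v*abs(w): factors of known sign
               % are moved outside the ABS
```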

Chapter 11

Substitution Commands
An important class of commands in REDUCE define substitutions for variables and
expressions to be made during the evaluation of expressions. Such substitutions use
the prefix operator SUB, various forms of the command LET, and rule sets.


SUB Operator

SUB(⟨substitution_list⟩, ⟨EXPRN1:algebraic⟩) : algebraic
where ⟨substitution_list⟩ is a list of one or more equations of the form
⟨VAR:kernel⟩ = ⟨EXPRN:algebraic⟩
or a kernel that evaluates to such a list.
The SUB operator gives the algebraic result of replacing every occurrence of the
variable VAR in the expression EXPRN1 by the expression EXPRN. Specifically,
EXPRN1 is first evaluated using all available rules. Next the substitutions are made,
and finally the substituted expression is reevaluated. When more than one variable
occurs in the substitution list, the substitution is performed by recursively walking
down the tree representing EXPRN1, and replacing every VAR found by the appropriate EXPRN. The EXPRN are not themselves searched for any occurrences of
the various VARs. The trivial case SUB(EXPRN1) returns the algebraic value of EXPRN1.
Example:
sub({x=a+y,y=y+1},x^2+y^2) -> A^2 + 2*A*Y + 2*Y^2 + 2*Y + 1



and with s := {x=a+y,y=y+1},

sub(s,x^2+y^2) -> A^2 + 2*A*Y + 2*Y^2 + 2*Y + 1

Note that the global assignments x:=a+y, etc., do not take place.
EXPRN1 can be any valid algebraic expression whose type is such that a substitution process is defined for it (e.g., scalar expressions, lists and matrices). An
error will occur if an expression of an invalid type for substitution occurs either in
EXPRN1 or the substitution list.
The braces around the substitution list may also be omitted, as in:
sub(x=a+y,y=y+1,x^2+y^2) -> A^2 + 2*A*Y + 2*Y^2 + 2*Y + 1

LET Rules

Unlike substitutions introduced via SUB, LET rules are global in scope and stay in
effect until replaced or CLEARed.
The simplest use of the LET statement is in the form
LET ⟨substitution list⟩
where ⟨substitution list⟩ is a list of rules separated by commas, each of the form:
⟨variable⟩ = ⟨expression⟩
⟨prefix operator⟩(⟨argument⟩, . . . , ⟨argument⟩) = ⟨expression⟩
⟨argument⟩⟨infix operator⟩ . . . ⟨argument⟩ = ⟨expression⟩
For example,
let {x => y^2,
h(u,v) => u - v,
cos(pi/3) => 1/2,
a*b => c,

l+m => n,
w^3 => 2*z - 3,
z^10 => 0}

The list brackets can be left out if preferred. The above rules could also have been
entered as seven separate LET statements.
After such LET rules have been input, X will always be evaluated as the square of
Y, and so on. This is so even if at the time the LET rule was input, the variable Y
had a value other than Y. (In contrast, the assignment x:=y^2 will set X equal to
the square of the current value of Y, which could be quite different.)
The rule let a*b=c means that whenever A and B are both factors in an expression their product will be replaced by C. For example, a^5*b^7*w would be
replaced by c^5*b^2*w.
The rule for l+m will not only replace all occurrences of l+m by N, but will also
normally replace L by n-m, but not M by n-l. A more complete description of this
case is given in Section 11.2.5.
The rule pertaining to w^3 will apply to any power of W greater than or equal to
the third.
Note especially the last example, let z^10=0. This declaration means, in effect:
ignore the tenth or any higher power of Z. Such declarations, when appropriate,
often speed up a computation to a considerable degree. (See Section 11.4 for more details.)
Any new operators occurring in such LET rules will be automatically declared
OPERATOR by the system, if the rules are being read from a file. If they are being
entered interactively, the system will ask DECLARE ... OPERATOR? Answer Y
or N and hit Return.
In each of these examples, substitutions are only made for the explicit expressions
given; i.e., none of the variables may be considered arbitrary in any sense. For
example, the command
let h(u,v) = u - v;
will cause h(u,v) to evaluate to U - V, but will not affect h(u,z) or H with
any arguments other than precisely the symbols U,V.
These simple LET rules are on the same logical level as assignments made with
the := operator. An assignment x := p+q cancels a rule let x = y^2 made
earlier, and vice versa.
CAUTION: A recursive rule such as
let x = x + 1;



is erroneous, since any subsequent evaluation of X would lead to a non-terminating
chain of substitutions:
x -> x + 1 -> (x + 1) + 1 -> ((x + 1) + 1) + 1 -> ...
Similarly, coupled substitutions such as
let l = m + n, n = l + r;
would lead to the same error. As a result, if you try to evaluate an X, L or N defined
as above, you will get an error such as
X improperly defined in terms of itself
Array and matrix elements can appear on the left-hand side of a LET statement.
However, because of their instant evaluation property, it is the value of the element
that is substituted for, rather than the element itself. E.g.,
array a(5);
a(2) := b;
let a(2) = c;
results in B being substituted by C; the assignment for a(2) does not change.
Finally, if an error occurs in any equation in a LET statement (including generalized
statements involving FOR ALL and SUCH THAT), the remaining rules are not evaluated.



If a substitution for all possible values of a given argument of an operator is required, the declaration FOR ALL may be used. The syntax of such a command is
FOR ALL ⟨variable⟩, . . . , ⟨variable⟩
⟨LET statement⟩ ⟨terminator⟩
e.g.
for all x,y let h(x,y) = x-y;
for all x let k(x,y) = x^y;
The first of these declarations would cause h(a,b) to be evaluated as A-B,
h(u+v,u+w) to be V-W, etc. If the operator symbol H is used with more or
fewer argument places, not two, the LET would have no effect, and no error would result.
The second declaration would cause k(a,y) to be evaluated as a^y, but would
have no effect on k(a,z) since the rule didn’t say FOR ALL Y . . . .
Where we used X and Y in the examples, any variables could have been used. This
use of a variable doesn’t affect the value it may have outside the LET statement.
However, you should remember what variables you actually used. If you want
to delete the rule subsequently, you must use the same variables in the CLEAR command.
It is possible to use more complicated expressions as a template for a LET statement, as explained in the section on substitutions for general expressions. In nearly
all cases, the rule will be accepted, and a consistent application made by the system. However, if there is a sole constant or a sole free variable on the left-hand side
of a rule (e.g., let 2=3 or for all x let x=2), then the system is unable
to handle the rule, and the error message
Substitution for ... not allowed
will be issued. Any variable listed in the FOR ALL part will have its symbol
preceded by an equal sign: X in the above example will appear as =X. An error will
also occur if a variable in the FOR ALL part is not properly matched on both sides
of the LET equation.



If a substitution is desired for more than a single value of a variable in an operator
or other expression, but not all values, a conditional form of the FOR ALL ...
LET declaration can be used.
for all x such that numberp x and x<0 let h(x)=0;
will cause h(-5) to be evaluated as 0, but H of a positive integer, or of an argument
that is not an integer at all, would not be affected. Any boolean expression can
follow the SUCH THAT keywords.


Removing Assignments and Substitution Rules

The user may remove all assignments and substitution rules from any expression
by the command CLEAR, in the form



CLEAR ⟨expression⟩, . . . , ⟨expression⟩ ⟨terminator⟩
clear x, h(x,y);
Because of their instant evaluation property, array and matrix elements cannot be
cleared with CLEAR. For example, if A is an array, you must say
a(3) := 0;
rather than
clear a(3);
to “clear” element a(3).
On the other hand, a whole array (or matrix) A can be cleared by the command
clear a; This means much more than resetting to 0 all the elements of A. The
fact that A is an array, and what its dimensions are, are forgotten, so A can be
redefined as another type of object, for example an operator.
If you need to clear a variable whose name must be computed, see the UNSET statement.
The more general types of LET declarations can also be deleted by using CLEAR.
Simply repeat the LET rule to be deleted, using CLEAR in place of LET, and omitting the equal sign and right-hand part. The same dummy variables must be used
in the FOR ALL part, and the boolean expression in the SUCH THAT part must be
written the same way. (The placing of blanks doesn’t have to be identical.)
Example: The LET rule
for all x such that numberp x and x<0 let h(x)=0;
can be erased by the command
for all x such that numberp x and x<0 clear h(x);


Overlapping LET Rules

CLEAR is not the only way to delete a LET rule. A new LET rule identical to
the first, but with a different expression after the equal sign, replaces the first.
Replacements are also made in other cases where the existing rule would be in
conflict with the new rule. For example, a rule for x^4 would replace a rule for
x^5. The user should however be cautioned against having several LET rules in
effect that relate to the same expression. No guarantee can be given as to which
rules will be applied by REDUCE or in what order. It is best to CLEAR an old rule
before entering a new related LET rule.
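For example, a second rule for the same power silently replaces the first:

```reduce
let x^4 = p;
let x^4 = q;   % replaces the previous rule for x^4
x^4;           % now evaluates to q
clear x^4;     % safest practice before entering a related rule
```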


Substitutions for General Expressions

The examples of substitutions discussed in other sections have involved very simple rules. However, the substitution mechanism used in REDUCE is very general,
and can handle arbitrarily complicated rules without difficulty.
The general substitution mechanism used in REDUCE is discussed in Hearn, A.
C., “REDUCE, A User-Oriented Interactive System for Algebraic Simplification,”
Interactive Systems for Experimental Applied Mathematics, (edited by M. Klerer
and J. Reinfelds), Academic Press, New York (1968), 79-90, and Hearn. A. C.,
“The Problem of Substitution,” Proc. 1968 Summer Institute on Symbolic Mathematical Computation, IBM Programming Laboratory Report FSC 69-0312 (1969).
For the reasons given in these references, REDUCE does not attempt to implement a general pattern matching algorithm. However, the present system uses far
more sophisticated techniques than those discussed in the above papers. It is now
possible for the rules appearing in arguments of LET to have the form
hsubstitution expressioni = hexpressioni
where any rule to which a sensible meaning can be assigned is permitted. However, this meaning can vary according to the form of hsubstitution expressioni. The
semantic rules associated with the application of the substitution are completely
consistent, but somewhat complicated by the pragmatic need to perform such substitutions as efficiently as possible. The following rules explain how the majority
of the cases are handled.
To begin with, the hsubstitution expressioni is first partly simplified by collecting
like terms and putting identifiers (and kernels) in the system order. However, no
substitutions are performed on any part of the expression with the exception of
expressions with the instant evaluation property, such as array and matrix elements,
whose actual values are used. It should also be noted that the system order used is
not changeable by the user, even with the KORDER command. Specific cases are
then handled as follows:
1. If the resulting simplified rule has a left-hand side that is an identifier, an
expression with a top-level algebraic operator or a power, then the rule is
added without further change to the appropriate table.
2. If the operator * appears at the top level of the simplified left-hand side, then
any constant arguments in that expression are moved to the right-hand side


of the rule. The remaining left-hand side is then added to the appropriate
table. For example,
let 2*x*y=3
becomes
let x*y=3/2
so that x*y is added to the product substitution table, and when this rule is
applied, the expression x*y becomes 3/2, but X or Y by themselves are not replaced.

3. If the operators +, - or / appear at the top level of the simplified left-hand
side, all but the first term is moved to the right-hand side of the rule. Thus
the rules
let l+m=n, x/2=y, a-b=c
become
let l=n-m, x=2*y, a=c+b.
One problem that can occur in this case is that if a quantified expression is moved
to the right-hand side, a given free variable might no longer appear on the left-hand
side, resulting in an error because of the unmatched free variable. E.g.,
for all x,y let f(x)+f(y)=x*y
would become
for all x,y let f(x)=x*y-f(y)
which no longer has Y on both sides.
The fact that array and matrix elements are evaluated in the left-hand side of rules
can lead to confusion at times. Consider for example the statements
array a(5); let x+a(2)=3; let a(3)=4;
The left-hand side of the first rule will become X, and that of the second 0. Thus the first
rule will be instantiated as a substitution for X, and the second will result in an error.
The order in which a list of rules is applied is not easily understandable without
a detailed knowledge of the system simplification protocol. It is also possible for



this order to change from release to release, as improved substitution techniques
are implemented. Users should therefore assume that the order of application of
rules is arbitrary, and program accordingly.
After a substitution has been made, the expression being evaluated is reexamined
in case a new allowed substitution has been generated. This process is continued
until no more substitutions can be made.
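This repeated re-examination can be seen in a simple chain of rules:

```reduce
let x = y + 1;
let y = 2;
x;   % x -> y + 1 -> 2 + 1, i.e. 3
```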
As mentioned elsewhere, when a substitution expression appears in a product, the
substitution is made if that expression divides the product. For example, the rule
let a^2*c = 3*z;
would cause a^2*c*x to be replaced by 3*Z*X and a^2*c^2 by 3*Z*C. If the
substitution is desired only when the substitution expression appears in a product
with the explicit powers supplied in the rule, the command MATCH should be used instead.
For example,
match a^2*c = 3*z;
would cause a^2*c*x to be replaced by 3*Z*X, but a^2*c^2 would not be
replaced. MATCH can also be used with the FOR ALL constructions described above.
To remove substitution rules of the type discussed in this section, the CLEAR command can be used, combined, if necessary, with the same FOR ALL clause with
which the rule was defined, for example:
for all x clear log(e^x),e^log(x),cos(w*t+theta(x));
Note, however, that the arbitrary variable names in this case must be the same as
those used in defining the substitution.


Rule Lists

Rule lists offer an alternative approach to defining substitutions that is different
from either SUB or LET. In fact, they provide the best features of both, since they
have all the capabilities of LET, but the rules can also be applied locally as is
possible with SUB. In time, they will be used more and more in REDUCE. However,
since they are relatively new, much of the REDUCE code you see uses the older constructs.
A rule list is a list of rules that have the syntax


⟨expression⟩ => ⟨expression⟩ (WHEN ⟨boolean expression⟩)

For example,
{cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,
 cos(~n*pi) => (-1)^n when remainder(n,2)=0}
The tilde preceding a variable marks that variable as free for that rule, much as a
variable in a FOR ALL clause in a LET statement. The first occurrence of that
variable in each relevant rule must be so marked on input, otherwise inconsistent
results can occur. For example, the rule list
{cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2,
 cos(~x)^2 => (1+cos(2x))/2}
designed to replace products of cosines, would not be correct, since the second
rule would only apply to the explicit argument X. Later occurrences in the same
rule may also be marked, but this is optional (internally, all such rules are stored
with each relevant variable explicitly marked). The optional WHEN clause allows
constraints to be placed on the application of the rule, much as the SUCH THAT
clause in a LET statement.
A rule list may be named, for example
trig1 := {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2};



Such named rule lists may be inspected as needed. E.g., the command trig1;
would cause the above list to be printed.
Rule lists may be used in two ways. They can be globally instantiated by means of
the command LET. For example,
let trig1;
would cause the above list of rules to be globally active from then on until cancelled
by the command CLEARRULES, as in
clearrules trig1;
CLEARRULES has the syntax



CLEARRULES ⟨rule list⟩|⟨name of rule list⟩(, ...)
The second way to use rule lists is to invoke them locally by means of a WHERE
clause. For example
cos(a)*cos(b) where {cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2};
or
cos(a)*sin(b) where trigrules;
The syntax of an expression with a WHERE clause is:

⟨expression⟩ WHERE ⟨rule⟩|⟨rule list⟩(,⟨rule⟩|⟨rule list⟩ ...)
so the first example above could also be written
cos(a)*cos(b) where cos(~x)*cos(~y) => (cos(x+y)+cos(x-y))/2;
The effect of this construct is that the rule list(s) in the WHERE clause only apply to
the expression on the left of WHERE. They have no effect outside the expression. In
particular, they do not affect previously defined WHERE clauses or LET statements.
For example, the sequence
let a=2;
a where a=>4;
would result in the output 4.
Although WHERE has a precedence less than any other infix operator, it still binds
higher than keywords such as ELSE, THEN, DO, REPEAT and so on. Thus the command
if a=2 then 3 else a+2 where a=3
will parse as


if a=2 then 3 else (a+2 where a=3)

WHERE may be used to introduce auxiliary variables in symbolic mode expressions, as described in Section 17.4. However, the symbolic mode use has different
semantics, so expressions do not carry from one mode to the other.
Compatibility Note: In order to provide compatibility with older versions of rule
lists released through the Network Library, it is currently possible to use an equal
sign interchangeably with the replacement sign => in rules and LET statements.
However, since this will change in future versions, the replacement sign is preferable in rules and the equal sign in non-rule-based LET statements.

Advanced Use of Rule Lists
Some advanced features of the rule list mechanism make it possible to write more
complicated rules than those discussed so far, and in many cases to write more
compact rule lists. These features are:
• Free operators
• Double slash operator
• Double tilde variables.
A free operator in the left hand side of a pattern will match any operator with the
same number of arguments. The free operator is written in the same style as a
variable. For example, the implementation of the product rule of differentiation
can be written as:
operator diff, !~f, !~g;
prule := {diff(~f(~x) * ~g(~x),x) =>
diff(f(x),x) * g(x) + diff(g(x),x) * f(x)};
let prule;
diff(sin(z)*cos(z),z);
cos(z)*diff(sin(z),z) + diff(cos(z),z)*sin(z)
The double slash operator may be used as an alternative to a single slash (quotient)
in order to match quotients properly. E.g., in the example of the Gamma function
above, one can use:



gammarule :=
{gamma(~z)//(~c*gamma(~zz)) => gamma(z)/(c*gamma(zz-1)*zz)
when fixp(zz -z) and (zz -z) >0,
gamma(~z)//gamma(~zz) => gamma(z)/(gamma(zz-1)*zz)
when fixp(zz -z) and (zz -z) >0};
let gammarule;
z^3 + 6*z^2 + 11*z + 6
The above example suffers from the fact that two rules had to be written in order
to perform the required operation. This can be simplified by the use of double tilde
variables. E.g. the rule list
GGrule := {
gamma(~z)//(~~c*gamma(~zz)) => gamma(z)/(c*gamma(zz-1)*zz)
when fixp(zz -z) and (zz -z) >0};
will implement the same operation in a much more compact way. In general, double tilde variables are bound to the neutral element with respect to the operation in
which they are used.
Pattern given      Argument used      Binding

~z + ~~y           x                  z=x; y=0
~z + ~~y           x+3                z=x; y=3 or z=3; y=x
~z * ~~y           x                  z=x; y=1
~z * ~~y           x*3                z=x; y=3 or z=3; y=x
~z / ~~y           x                  z=x; y=1
~z / ~~y           x/3                z=x; y=3

Remarks: A double tilde variable as the numerator of a pattern is not allowed.
Also, using double tilde variables may lead to recursion errors when the zero case
is not handled properly.
let f(~~a * ~x,x)
   => a * f(x,x) when freeof (a,x);
f(z,z);

***** f(z,z) improperly defined in terms of itself

% BUT:
let ff(~~a * ~x,x)
   => a * ff(x,x) when freeof (a,x) and a neq 1;
ff(z,z);

ff(z,z)

Displaying Rules Associated with an Operator
The operator SHOWRULES takes a single identifier as argument, and returns in
rule-list form the operator rules associated with that argument. For example:
showrules log;
{LOG(E) => 1,
 LOG(1) => 0,
 LOG(E^~X) => ~X,
 DF(LOG(~X),~X) => 1/~X}
Such rules can then be manipulated further as with any list. For example rhs
first ws; has the value 1. Note that an operator may have other properties that
cannot be displayed in such a form, such as the fact it is an odd function, or has a
definition defined as a procedure.

Order of Application of Rules
If rules have overlapping domains, their order of application is important. In general, it is very difficult to specify this order precisely, so that it is best to assume



that the order is arbitrary. However, if only one operator is involved, the order of
application of the rules for this operator can be determined from the following:
1. Rules containing at least one free variable apply before all rules without free variables.
2. Rules activated in the most recent LET command are applied first.
3. LET with several entries generate the same order of application as a corresponding sequence of commands with one rule or rule set each.
4. Within a rule set, the rules containing at least one free variable are applied in
their given order. In other words, the first member of the list is applied first.
5. Consistent with the first item, any rule in a rule list that contains no free
variables is applied after all rules containing free variables.
Example: The following rule set enables the computation of exact values of the
Gamma function:
operator gamma,gamma_error;
gamma_rules :=
{gamma(~x)=>sqrt(pi)/2 when x=1/2,
gamma(~n)=>factorial(n-1) when fixp n and n>0,
gamma(~n)=>gamma_error(n) when fixp n,
gamma(~x)=>(x-1)*gamma(x-1) when fixp(2*x) and x>1,
gamma(~x)=>gamma(x+1)/x when fixp(2*x)};
Here, rule by rule, cases of known or definitely uncomputable values are sorted out;
e.g. the rule leading to the error expression will be applied for negative integers
only, since the positive integers are caught by the preceding rule, and the last rule
will apply for negative odd multiples of 1/2 only. Alternatively the first rule could
have been written as
gamma(1/2) => sqrt(pi)/2,
but then the case x = 1/2 should be excluded in the WHEN part of the last rule
explicitly because a rule without free variables cannot take precedence over the
other rules.
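With these rules active, a few sample evaluations (assuming the rule list above has been entered) might run as follows:

```reduce
let gamma_rules;
gamma(4);      % fixp 4 and 4>0:    factorial(3) = 6
gamma(7/2);    % fixp(2*x) and x>1: (5/2)*(3/2)*(1/2)*gamma(1/2)
               %                    = 15*sqrt(pi)/8
gamma(-2);     % fixp -2, not >0:   gamma_error(-2)
```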


Asymptotic Commands

In expansions of polynomials involving variables that are known to be small, it is
often desirable to throw away all powers of these variables beyond a certain point



to avoid unnecessary computation. The command LET may be used to do this. For
example, if only powers of X up to x^7 are needed, the command
let x^8 = 0;
will cause the system to delete all powers of X higher than 7.
CAUTION: This particular simplification works differently from most substitution mechanisms in REDUCE in that it is applied during polynomial manipulation
rather than to the whole evaluated expression. Thus, with the above rule in effect,
x^10/x^5 would give the result zero, since the numerator would simplify to zero.
Similarly x^20/x^10 would give a Zero divisor error message, since both
numerator and denominator would first simplify to zero.
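A short sketch of the truncation behaviour, including the caveat just mentioned:

```reduce
let x^8 = 0;
(1+x)^10;    % all terms in x^8, x^9, x^10 are dropped
x^10/x^5;    % gives 0: the numerator simplifies to 0 first
```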
The method just described is not adequate when expressions involve several variables having different degrees of smallness. In this case, it is necessary to supply
an asymptotic weight to each variable and count up the total weight of each product
in an expanded expression before deciding whether to keep the term or not. There
are two associated commands in the system to permit this type of asymptotic constraint. The command WEIGHT takes a list of equations of the form
⟨kernel form⟩ = ⟨number⟩
where ⟨number⟩ must be a positive integer (not just evaluate to a positive integer).
This command assigns the weight ⟨number⟩ to the relevant kernel form. A check
is then made in all algebraic evaluations to see if the total weight of the term is
greater than the weight level assigned to the calculation. If it is, the term is deleted.
To compute the total weight of a product, the individual weights of each kernel
form are multiplied by their corresponding powers and then added.
The weight level of the system is initially set to 1. The user may change this setting
by the command
wtlevel ⟨number⟩;
which sets ⟨number⟩ as the new weight level of the system. ⟨number⟩ must evaluate to
a positive integer. WTLEVEL will also allow NIL as an argument, in which case
the current weight level is returned.
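The interplay of WEIGHT and WTLEVEL might be sketched as follows:

```reduce
weight x = 2, y = 3;   % assign asymptotic weights
wtlevel 6;
(x + y)^2;   % x^2 (weight 4), 2*x*y (5) and y^2 (6) all kept
wtlevel 5;
(x + y)^2;   % the y^2 term, of weight 6, is now dropped
```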

Chapter 12

File Handling Commands
In many applications, it is desirable to load previously prepared REDUCE files
into the system, or to write output on other files. REDUCE offers five commands
for this purpose, namely, IN, OUT, SHUT, LOAD, and LOAD_PACKAGE. The first
three operators are described here; LOAD and LOAD_PACKAGE are discussed in
Section 19.2.


IN Command

This command takes a list of file names as argument and directs the system to
input each file (that should contain REDUCE statements and commands) into the
system. File names can either be an identifier or a string. The explicit format of
these will be system dependent and, in many cases, site dependent. The explicit
instructions for the implementation being used should therefore be consulted for
further details. For example:
in f1,"ggg.rr.s";
will first load file f1, then ggg.rr.s. When a semicolon is used as the terminator
of the IN statement, the statements in the file are echoed on the terminal or written
on the current output file. If $ is used as the terminator, the input is not shown.
Echoing of all or part of the input file can be prevented, even if a semicolon was
used, by placing an off echo; command in the input file.
Files to be read using IN should end with ;END;. Note the two semicolons! First
of all, this is protection against obscure difficulties the user will have if there are,
by mistake, more BEGINs than ENDs on the file. Secondly, it triggers some file
control book-keeping which may improve system efficiency. If END is omitted, an
error message "End-of-file read" will occur.




While a file is being loaded, the special identifier !__LINE__ is replaced by the
number of the current line in the file currently being read. Similarly, !__FILE__
is replaced by the name of the file currently being read.
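For example, a diagnostic line such as the following could be placed in an input file (the file name here is of course hypothetical):

```reduce
% somewhere inside mycalc.red:
write "now reading line ", !__LINE__, " of ", !__FILE__;
```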


OUT Command

This command takes a single file name as argument, and directs output to that
file from then on, until another OUT changes the output file, or SHUT closes it.
Output can go to only one file at a time, although many can be open. If the file
has previously been used for output during the current job, and not SHUT, the new
output is appended to the end of the file. Any existing file is erased before its first
use for output in a job, or if it had been SHUT before the new OUT.
To output on the terminal without closing the output file, the reserved file name T
(for terminal) may be used. For example, out ofile; will direct output to the
file OFILE and out t; will direct output to the user’s terminal.
The output sent to the file will be in the same form that it would have on the
terminal. In particular x^2 would appear on two lines, an X on the lower line and
a 2 on the line above. If the purpose of the output file is to save results to be read
in later, this is not an appropriate form. We first must turn off the NAT switch that
specifies that output should be in standard mathematical notation.
Example: To create a file ABCD from which it will be possible to read – using IN
– the value of the expression XYZ:
off echo$       % needed if your input is from a file.
off nat$        % output in IN-readable form. Each expression
                %   printed will end with a $ .
out abcd$       % output to new file
linelength 72$  % for systems with fixed input line length.
xyz := xyz;     % will output "XYZ := " followed by the value
                %   of XYZ
write ";end"$   % standard for ending files for IN
shut abcd$      % save ABCD, return to terminal output
on nat$         % restore usual output form

SHUT Command

This command takes a list of names of files that have been previously opened via
an OUT statement and closes them. Most systems require this action by the user
before he ends the REDUCE job (if not sooner), otherwise the output may be lost.
If a file is shut and a further OUT command issued for the same file, the file is



erased before the new output is written.
If it is the current output file that is shut, output will switch to the terminal. Attempts to shut files that have not been opened by OUT, or an input file, will lead to errors.


REDUCE startup file

At the start of a REDUCE session, the system checks for the existence of a user’s
startup file, and executes the REDUCE statements in it. This is equivalent to inputting the file with the IN command.
To find the directory/folder where the file resides, the system checks the existence
of the following environment variables:
1. HOME,
2. HOMEDRIVE and HOMEPATH together (Windows).
If none of these are set, the current directory is used. The file itself must be named
either .reducerc or reduce.rc.

If none of these exist, the system checks for a file called reduce.INI in the current
directory. This is historical and may be removed in future.



Chapter 13

Commands for Interactive Use
REDUCE is designed as an interactive system, but naturally it can also operate in
a batch processing or background mode by taking its input command by command
from the relevant input stream. There is a basic difference, however, between interactive and batch use of the system. In the former case, whenever the system
discovers an ambiguity at some point in a calculation, such as a forgotten type
assignment for instance, it asks the user for the correct interpretation. In batch
operation, it is not practical to terminate the calculation at such points and require
resubmission of the job, so the system makes the most obvious guess of the user’s
intentions and continues the calculation.
There is also a difference in the handling of errors. In the former case, the computation can continue since the user has the opportunity to correct the mistake. In batch
mode, the error may lead to consequent erroneous (and possibly time consuming)
computations. So in the default case, no further evaluation occurs, although the
remainder of the input is checked for syntax errors. A message "Continuing
with parsing only" informs the user that this is happening. On the other
hand, the switch ERRCONT, if on, will cause the system to continue evaluating
expressions after such errors occur.
When a syntactical error occurs, the place where the system detected the error is
marked with three dollar signs ($$$). In interactive mode, the user can then use ED
to correct the error, or retype the command. When a non-syntactical error occurs in
interactive mode, the command being evaluated at the time the last error occurred
is saved, and may later be reevaluated by the command RETRY.


Referencing Previous Results

It is often useful to be able to reference results of previous computations during a
REDUCE session. For this purpose, REDUCE maintains a history of all interactive



inputs and the results of all interactive computations during a given session. These
results are referenced by the command number that REDUCE prints automatically
in interactive mode. To use an input expression in a new computation, one writes
input(n), where n is the command number. To use an output expression, one
writes WS(n). WS references the previous command. E.g., if command number 1
was INT(X-1,X); and the result of command number 7 was X-1, then
2*input(1) - ws(7)^2;
would give the result -1, whereas
2*ws(1) - ws(7)^2;
would yield the same result, but without a recomputation of the integral.
The operator DISPLAY is available to display previous inputs. If its argument
is a positive integer, n say, then the previous n inputs are displayed. If its argument is ALL (or in fact any non-numerical expression), then all previous inputs are


Interactive Editing

It is possible when working interactively to edit any REDUCE input that comes
from the user’s terminal, and also some user-defined procedure definitions. At the
top level, one can access any previous command string by the command ed(n),
where n is the desired command number as prompted by the system in interactive
mode. ED; (i.e. no argument) accesses the previous command.
After ED has been called, you can now edit the displayed string using a string editor
with the following commands:
B                   move pointer to beginning
C⟨character⟩        replace next character by ⟨character⟩
D                   delete next character
E                   end editing and reread text
F⟨character⟩        move pointer to next occurrence of ⟨character⟩
I⟨string⟩⟨escape⟩   insert ⟨string⟩ in front of pointer
K⟨character⟩        delete all characters until ⟨character⟩
P                   print string from current pointer
Q                   give up with error exit
S⟨string⟩⟨escape⟩   search for first occurrence of ⟨string⟩,
                    positioning pointer just before it
space or X          move pointer right one character.



The above table can be displayed online by typing a question mark followed by a
carriage return to the editor. The editor prompts with an angle bracket. Commands
can be combined on a single line, and all command sequences must be followed by
a carriage return to become effective.
Thus, to change the command x := a+1; to x := a+2; and cause it to be
executed, the following edit command sequence could be used:
f1c2e⟨return⟩
The interactive editor may also be used to edit a user-defined procedure that has
not been compiled. To do this, one says:
editdef hidi;
where hidi is the name of the procedure. The procedure definition will then be
displayed in editing mode, and may then be edited and redefined on exiting from
the editor.
Some versions of REDUCE now include input editing that uses the capabilities of
modern window systems. Please consult your system dependent documentation to
see if this is possible. Such editing techniques are usually much easier to use than the ED editor described above.


Interactive File Control

If input is coming from an external file, the system treats it as a batch processed
calculation. If the user desires interactive response in this case, he can include the
command ON INT; in the file. Likewise, he can issue the command off int;
in the main program if he does not desire continual questioning from the system.
Regardless of the setting of INT, input commands from a file are not kept in the
system, and so cannot be edited using ED. However, many implementations of REDUCE provide a link to an external system editor that can be used for such editing.
The specific instructions for the particular implementation should be consulted for
information on this.
Two commands are available in REDUCE for interactive use of files. PAUSE; may
be inserted at any point in an input file. When this command is encountered on
input, the system prints the message CONT? on the user’s terminal and halts. If the
user responds Y (for yes), the calculation continues from that point in the file. If the
user responds N (for no), control is returned to the terminal, and the user can input
further statements and commands. Later on he can use the command cont; to
transfer control back to the point in the file following the last PAUSE encountered.
A top-level pause; from the user’s terminal has no effect.



Chapter 14

Matrix Calculations
A very powerful feature of REDUCE is the ease with which matrix calculations
can be performed. To extend our syntax to this class of calculations we need to
add another prefix operator, MAT, and a further variable and expression type as follows.


MAT Operator

This prefix operator is used to represent n × m matrices. MAT has n arguments
interpreted as rows of the matrix, each of which is a list of m expressions representing elements in that row. For example, the matrix

a b c
d e f
would be written as mat((a,b,c),(d,e,f)).
Note that the single column matrix
 x
 y
becomes mat((x),(y)). The inside parentheses are required to distinguish it
from the single row matrix

x y
that would be written as mat((x,y)).


Matrix Variables

An identifier may be declared a matrix variable by the declaration MATRIX. The
size of the matrix may be declared explicitly in the matrix declaration, or by default



in assigning such a variable to a matrix expression. For example,
matrix x(2,1),y(3,4),z;
declares X to be a 2 x 1 (column) matrix, Y to be a 3 x 4 matrix and Z a matrix
whose size is to be declared later.
Matrix declarations can appear anywhere in a program. Once a symbol is declared
to name a matrix, it cannot also be used to name an array, operator or procedure,
or used as an ordinary variable. It can however be redeclared to be a matrix, and
its size may be changed at that time. Note however that matrices once declared
are global in scope, and so can then be referenced anywhere in the program. In
other words, a declaration within a block (or a procedure) does not limit the scope
of the matrix to that block, nor does the matrix go away on exiting the block (use
CLEAR instead for this purpose). An element of a matrix is referred to in the
expected manner; thus x(1,1) gives the first element of the matrix X defined
above. References to elements of a matrix whose size has not yet been declared
leads to an error. All elements of a matrix whose size is declared are initialized to
0. As a result, a matrix element has an instant evaluation property and cannot stand
for itself. If this is required, then an operator should be used to name the matrix
elements as in:
matrix m; operator x;


m := mat((x(1,1),x(1,2)));

Matrix Expressions

These follow the normal rules of matrix algebra as defined by the following syntax:
⟨matrix expression⟩ −→ MAT⟨matrix description⟩ | ⟨matrix variable⟩ |
⟨scalar expression⟩*⟨matrix expression⟩ |
⟨matrix expression⟩*⟨matrix expression⟩ |
⟨matrix expression⟩+⟨matrix expression⟩ |
⟨matrix expression⟩^⟨integer⟩ |
⟨matrix expression⟩/⟨matrix expression⟩
Sums and products of matrix expressions must be of compatible size; otherwise an
error will result during their evaluation. Similarly, only square matrices may be
raised to a power. A negative power is computed as the inverse of the matrix raised
to the corresponding positive power. a/b is interpreted as a*b^(-1).
Assuming X and Y have been declared as matrices, the following is an example of a matrix expression:
y + mat((1,a),(b,c))/2
The computation of the quotient of two matrices normally uses a two-step elimination method due to Bareiss. An alternative method using Cramer’s method is also
available. This is usually less efficient than the Bareiss method unless the matrices
are large and dense, although we have no solid statistics on this as yet. To use
Cramer’s method instead, the switch CRAMER should be turned on.


Operators with Matrix Arguments

The operator LENGTH applied to a matrix returns a list of the number of rows and
columns in the matrix. Other operators useful in matrix calculations are defined in
the following subsections. Attention is also drawn to the LINALG (section 16.37)
and NORMFORM (section 16.43) packages.


DET Operator

The operator DET is used to represent the determinant of a square matrix expression. E.g.,
det(y^2);
is a scalar expression whose value is the determinant of the square of the matrix Y, and
det mat((a,b,c),(d,e,f),(g,h,j));
is a scalar expression whose value is the determinant of the matrix
a b c
 d e f 
g h j
Determinant expressions have the instant evaluation property. In other words, the statement
let det mat((a,b),(c,d)) = 2;
sets the value of the determinant to 2, and does not set up a rule for the determinant itself.



MATEIGEN Operator

MATEIGEN calculates the eigenvalue equation and the corresponding eigenvectors
of a matrix, using the variable ID to denote the eigenvalue. A square free decomposition of the characteristic polynomial is carried out. The result is a list of lists
of 3 elements, where the first element is a square free factor of the characteristic
polynomial, the second its multiplicity and the third the corresponding eigenvector
(as an n by 1 matrix). If the square free decomposition was successful, the product
of the first elements in the lists is the minimal polynomial. In the case of degeneracy, several eigenvectors can exist for the same eigenvalue, which manifests itself
in the appearance of more than one arbitrary variable in the eigenvector. To extract
the various parts of the result use the operations defined on lists.
Example: The command
mateigen(mat((2,-1,1),(0,1,1),(-1,1,1)),eta);
gives the output
{{ETA - 1,2,

  [ARB(1)]
  [ARB(1)]
  [  0   ]},

 {ETA - 2,1,

  [  0   ]
  [ARB(2)]
  [ARB(2)]}}

TP Operator

This operator takes a single matrix argument and returns its transpose.
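For example, transposing a 2 by 3 matrix:

tp mat((1,2,3),(4,5,6));

returns the 3 by 2 matrix

    [1  4]
    [2  5]
    [3  6]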


Trace Operator

The operator TRACE is used to represent the trace of a square matrix.
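For example,

trace mat((1,2),(3,4));

returns 5, the sum of the diagonal elements 1 and 4.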


Matrix Cofactors

The operator COFACTOR returns the cofactor of the element in row ROW and column COLUMN of the matrix MATRIX. Errors occur if ROW or COLUMN do not
simplify to integer expressions or if MATRIX is not square.
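As an illustration (the matrix is arbitrary):

cofactor(mat((1,2),(3,4)),1,2);

returns -3, since the minor obtained by deleting row 1 and column 2 is 3 and the sign factor (-1)^(1+2) is -1.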



NULLSPACE calculates for a matrix A a list of linearly independent vectors (a basis) whose linear combinations satisfy the equation Ax = 0. The basis is provided in a form such that as many upper components as possible are isolated.
Note that with b := nullspace a the expression length b is the nullity of
A, and that second length a - length b calculates the rank of A. The



rank of a matrix expression can also be found more directly by the RANK operator
described below.
Example: The command
nullspace mat((1,2,3,4),(5,6,7,8));
gives the output

{
 [ 1  ]
 [ 0  ]
 [ - 3]
 [ 2  ]
 ,
 [ 0  ]
 [ 1  ]
 [ - 2]
 [ 1  ]
}

In addition to the REDUCE matrix form, NULLSPACE accepts as input a matrix given as a list of lists, which is interpreted as a row matrix. If that form of input is chosen, the vectors in the result will be represented by lists as well. This additional input syntax facilitates the use of NULLSPACE in applications different from classical linear algebra.


RANK Operator

RANK calculates the rank of its argument which, like that of NULLSPACE, can be either a standard matrix expression or a list of lists, interpreted either as a row matrix or as a set of equations.



For example,
rank mat((a,b,c),(d,e,f));
returns the value 2.


Matrix Assignments

Matrix expressions may appear in the right-hand side of assignment statements. If
the left-hand side of the assignment, which must be a variable, has not already been
declared a matrix, it is declared by default to the size of the right-hand side. The
variable is then set to the value of the right-hand side.
Such an assignment may be used very conveniently to find the solution of a set of
linear equations. For example, to find the solution of the following set of equations
a11*x(1) + a12*x(2) = y1
a21*x(1) + a22*x(2) = y2
we simply write
x := 1/mat((a11,a12),(a21,a22))*mat((y1),(y2));
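With numeric coefficients the solution appears immediately. For instance, for x(1) + x(2) = 4 and x(1) - x(2) = 2:

x := 1/mat((1,1),(1,-1))*mat((4),(2));

after which x(1,1) is 3 and x(2,1) is 1.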


Evaluating Matrix Elements

Once an element of a matrix has been assigned, it may be referred to in standard
array element notation. Thus y(2,1) refers to the element in the second row and
first column of the matrix Y.
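For example (an illustrative session; the matrix and values are arbitrary):

y := mat((a,b),(c,d))$
y(2,1);        % evaluates to c
y(1,1) := 5;   % elements may also be reassigned in this notation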



Chapter 15

Procedures
It is often useful to name a statement for repeated use in calculations with varying
parameters, or to define a complete evaluation procedure for an operator. REDUCE
offers a procedural declaration for this purpose. Its general syntax is:
[⟨procedural type⟩] PROCEDURE ⟨name⟩ [⟨varlist⟩]; ⟨statement⟩;
⟨varlist⟩ −→ (⟨variable⟩, . . . , ⟨variable⟩)
This will be explained more fully in the following sections.
In the algebraic mode of REDUCE the hprocedural typei can be omitted, since the
default is ALGEBRAIC. Procedures of type INTEGER or REAL may also be used.
In the former case, the system checks that the value of the procedure is an integer.
At present, such checking is not done for a real procedure, although this will change
in the future when a more complete type checking mechanism is installed. Users
should therefore only use these types when appropriate. An empty variable list
may also be omitted.
All user-defined procedures are automatically declared to be operators.
In order to allow users relatively easy access to the whole REDUCE source program, system procedures are not protected against user redefinition. If a procedure is redefined, a warning message is printed. If this occurs, and the user is not redefining his own procedure, he is well advised to rename it, and possibly start over (because he has already redefined some internal procedure whose correct functioning may be required for his job!)



All required procedures should be defined at the top level, since they have global
scope throughout a program. In particular, an attempt to define a procedure within
a procedure will cause an error to occur.


Procedure Heading

Each procedure has a heading consisting of the word PROCEDURE (optionally preceded by the word ALGEBRAIC), followed by the name of the procedure to be defined, and followed by its formal parameters, the symbols that will be used in the body of the definition to illustrate what is to be done. There are three cases:
1. No parameters. Simply follow the procedure name with a terminator (semicolon or dollar sign).
procedure abc;
When such a procedure is used in an expression or command, abc(), with
empty parentheses, must be written.
2. One parameter. Enclose it in parentheses or just leave at least one space,
then follow with a terminator.
procedure abc(x);
procedure abc x;
3. More than one parameter. Enclose them in parentheses, separated by commas, then follow with a terminator.
procedure abc(x,y,z);
Referring to the last example, if later in some expression being evaluated the symbols abc(u,p*q,123) appear, the operations of the procedure body will be
carried out as if X had the same value as U does, Y the same value as p*q does,
and Z the value 123. The values of X, Y, Z, after the procedure body operations are
completed are unchanged. So, normally, are the values of U, P, Q, and (of course)
123. (This is technically referred to as call by value.)
The reader will have noted the word normally a few lines earlier. The call by value
protections can be bypassed if necessary, as described elsewhere.




Procedure Body

Following the delimiter that ends the procedure heading must be a single statement
defining the action to be performed or the value to be delivered. A terminator must
follow the statement. If it is a semicolon, the name of the procedure just defined is
printed. It is not printed if a dollar sign is used.
If the result wanted is given by a formula of some kind, the body is just that formula, using the variables in the procedure heading.
Simple Example:
If f(x) is to mean (x+5)*(x+6)/(x+7), the entire procedure definition could be
procedure f x; (x+5)*(x+6)/(x+7);
Then f(10) would evaluate to 240/17, f(a-6) to A*(A-1)/(A+1), and so on.
More Complicated Example:
Suppose we need a function p(n,x) that, for any positive integer N, is the Legendre polynomial of order n. We can define this operator using the textbook formula
defining these functions:
    p_n(x) = (1/n!) [ d^n/dy^n (y^2 - 2xy + 1)^(-1/2) ] evaluated at y = 0

Put into words, the Legendre polynomial pn (x) is the result of substituting y = 0
in the nth partial derivative with respect to y of a certain fraction involving x and
y, then dividing that by n!.
This verbal formula can easily be written in REDUCE:
procedure p(n,x);
   sub(y=0, df(1/(y^2 - 2*x*y + 1)^(1/2), y, n))
      /(for i:=1:n product i);
Having input this definition, the expression evaluation
p(2,x);
would result in the output

   2
3*x  - 1
--------
   2


If the desired process is best described as a series of steps, then a group or compound statement can be used.
The above Legendre polynomial example can be rewritten as a series of steps instead of a single formula as follows:
procedure p(n,x);
   begin scalar seed,deriv,top,fact;
      seed := 1/(y^2 - 2*x*y + 1)^(1/2);
      deriv := df(seed,y,n);
      top := sub(y=0,deriv);
      fact := for i:=1:n product i;
      return top/fact
   end;
Procedures may also be defined recursively. In other words, the procedure body can
include references to the procedure name itself, or to other procedures that themselves reference the given procedure. As an example, we can define the Legendre
polynomial through its standard recurrence relation:
procedure p(n,x);
if n<0 then rederr "Invalid argument to P(N,X)"
else if n=0 then 1
else if n=1 then x
else ((2*n-1)*x*p(n-1,x)-(n-1)*p(n-2,x))/n;
The operator REDERR in the above example provides for a simple error exit from
an algebraic procedure (and also a block). It can take a string as argument.
It should be noted however that all the above definitions of p(n,x) are quite
inefficient if extensive use is to be made of such polynomials, since each call effectively recomputes all lower order polynomials. It would be better to store these
expressions in an array, and then use say the recurrence relation to compute only
those polynomials that have not already been derived. We leave it as an exercise
for the reader to write such a definition.
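One possible approach, given here only as a sketch (the array name LEGP, its size 50 and the auxiliary variable Y are arbitrary choices, and no claim is made that this is the manual's intended solution): cache the polynomials in the formal variable y and substitute the actual argument on demand.

array legp(50)$               % cache of Legendre polynomials in the formal variable y
legp(0) := 1$ legp(1) := y$
lastleg := 1$                 % highest order computed so far
procedure p(n,x);
   begin
      if n > 50 then rederr "order too large for the cache";
      for i := lastleg+1 : n do
         legp(i) := ((2*i-1)*y*legp(i-1) - (i-1)*legp(i-2))/i;
      if n > lastleg then lastleg := n;
      return sub(y=x, legp(n))
   end;

Each polynomial is then computed only once, however often P is called.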


Matrix-valued Procedures

Normally, procedures can only return scalar values. In order for a procedure to
return a matrix, it has to be declared of type MATRIXPROC:
matrixproc SkewSym1 (w);
   mat((0,       -w(3,1),  w(2,1)),
       (w(3,1),   0,      -w(1,1)),
       (-w(2,1),  w(1,1),  0));
Following this declaration, the call to SkewSym1 can be used as a matrix, e.g.
X := SkewSym1(mat((qx),(qy),(qz)));

     [  0      - qz     qy ]
     [                     ]
x := [  qz      0     - qx ]
     [                     ]
     [ - qy     qx      0  ]
X * mat((rx),(ry),(rz));

[ qy*rz - qz*ry ]
[ - qx*rz + qz*rx]
[ qx*ry - qy*rx ]


Using LET Inside Procedures

By using LET instead of an assignment in the procedure body it is possible to
bypass the call-by-value protection. If X is a formal parameter or local variable
of the procedure (i.e. is in the heading or in a local declaration), and LET is used
instead of := to make an assignment to X, e.g.
let x = 123;
then it is the variable that is the value of X that is changed. This effect also occurs
with local variables defined in a block. If the value of X is not a variable, but a
more general expression, then it is that expression that is used on the left-hand side
of the LET statement. For example, if X had the value p*q, it is as if let p*q =
123 had been executed.




LET Rules as Procedures

The LET statement offers an alternative syntax and semantics for procedure definition.
In place of
procedure abc(x,y,z); ⟨procedure body⟩;
one can write
for all x,y,z let abc(x,y,z) = ⟨procedure body⟩;
There are several differences to note.
If the procedure body contains an assignment to one of the formal parameters, e.g.
x := 123;
in the PROCEDURE case it is a variable holding a copy of the first actual argument
that is changed. The actual argument is not changed.
In the LET case, the actual argument is changed. Thus, if ABC is defined using
LET, and abc(u,v,w) is evaluated, the value of U changes to 123. That is, the
LET form of definition allows the user to bypass the protections that are enforced
by the call by value conventions of standard PROCEDURE definitions.
Example: We take our earlier FACTORIAL procedure and write it as a LET statement.
for all n let factorial n =
   begin scalar m,s;
      m:=1; s:=n;
 l1:  if s=0 then return m;
      m:=m*s;
      s:=s-1;
      go to l1
   end;
The reader will notice that we introduced a new local variable, S, and set it equal
to N. The original form of the procedure contained the statement n:=n-1;. If the
user asked for the value of factorial(5) then N would correspond to, not just
have the value of, 5, and REDUCE would object to trying to execute the statement
5 := 5 − 1.
If PQR is a procedure with no parameters,



procedure pqr; ⟨procedure body⟩;
it can be written as a LET statement quite simply:
let pqr = ⟨procedure body⟩;
To call procedure PQR, if defined in the latter form, the empty parentheses would
not be used: use PQR not PQR() where a call on the procedure is needed.
The two notations for a procedure with no arguments can be combined. PQR can
be defined in the standard PROCEDURE form. Then a LET statement
let pqr = pqr();
would allow a user to use PQR instead of PQR() in calling the procedure.
A feature available with LET-defined procedures and not with procedures defined
in the standard way is the possibility of defining partial functions.
for all x such that numberp x let uvw(x) = ⟨procedure body⟩;
Now UVW of an integer would be calculated as prescribed by the procedure body,
while UVW of a general argument, such as Z or p+q (assuming these evaluate to
themselves) would simply stay uvw(z) or uvw(p+q) as the case may be.
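For illustration (the body x^2 is an arbitrary choice), one could define

for all x such that numberp x let uvw(x) = x^2;

after which uvw(3) evaluates to 9, while uvw(z) remains unevaluated as uvw(z).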


REMEMBER Statement

Setting the remember option for an algebraic procedure by a declaration of the form
remember ⟨procedure name⟩;
saves all intermediate results of such procedure evaluations, including recursive calls. Subsequent calls to the procedure can then be determined from the saved results, and thus the number of evaluations (or the complexity) can be reduced. This mode of evaluation costs extra memory, of course. In addition, the procedure must be free of side-effects.
The following examples show the effect of the remember statement on two well-known examples.
procedure H(n);
% Hofstadter’s function
if numberp n then
<< cnn := cnn +1;
% counts the calls
if n < 3 then 1 else H(n-H(n-1))+H(n-H(n-2))>>;
remember h;
<< cnn := 0; H(100); cnn>>;
% H has been called 100 times only.
procedure A(m,n);              % Ackermann function
   if m=0 then n+1 else
   if n=0 then A(m-1,1) else
   A(m-1,A(m,n-1));
remember a;

Chapter 16

User Contributed Packages
The complete REDUCE system includes a number of packages contributed by
users that are provided as a service to the user community. Questions regarding
these packages should be directed to their individual authors.
All such packages have been precompiled as part of the installation process. However, many must be specifically loaded before they can be used. (Those that are
loaded automatically are so noted in their description.) You should also consult the
user notes for your particular implementation for further information on whether
this is necessary. If it is, the relevant command is LOAD_PACKAGE, which takes a
list of one or more package names as argument, for example:
load_package algint;
although this syntax may vary from implementation to implementation.
Nearly all these packages come with separate documentation and test files (except
those noted here that have no additional documentation), which is included, along
with the source of the package, in the REDUCE system distribution. These items
should be studied for any additional details on the use of a particular package.
The packages available in the current release of REDUCE are as follows:





ALGINT: Integration of square roots

This package, which is an extension of the basic integration package distributed
with REDUCE, will analytically integrate a wide range of expressions involving
square roots where the answer exists in that class of functions. It is an implementation of the work described in J.H. Davenport, “On the Integration of Algebraic Functions”, LNCS 102, Springer-Verlag, 1981. Both this and the source code
should be consulted for a more detailed description of this work.
The ALGINT package is loaded automatically when the switch ALGINT is turned on. One enters an expression for integration as with the regular integrator. If one later wishes to integrate expressions without using the facilities of this package, the switch ALGINT should be turned off.
The switches supported by the standard integrator (e.g., TRINT) are also supported by this package. In addition, the switch TRA, if on, will give further tracing information about the specific functioning of the algebraic integrator.
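A session might therefore look as follows (the integrand is merely illustrative):

on algint;          % loads the ALGINT package automatically
int(sqrt(x),x);     % integration now uses the algebraic integrator
off algint;         % back to the standard integrator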
There is no additional documentation for this package.
Author: James H. Davenport.



APPLYSYM: Infinitesimal symmetries of differential equations

This package provides programs APPLYSYM, QUASILINPDE and DETRAFO
for applying infinitesimal symmetries of differential equations, the generalization
of special solutions and the calculation of symmetry and similarity variables.
Author: Thomas Wolf.
In this paper the programs APPLYSYM, QUASILINPDE and DETRAFO are described which aim at the utilization of infinitesimal symmetries of differential equations. The purpose of QUASILINPDE is the general solution of quasilinear PDEs. This procedure is used by APPLYSYM for the application of point symmetries for either
• calculating similarity variables to perform a point transformation which lowers the order of an ODE or effectively reduces the number of explicitly occurring independent variables in a PDE(-system), or for
• generalizing given special solutions of ODEs / PDEs with new constant parameters.
The program DETRAFO performs arbitrary point- and contact transformations of
ODEs / PDEs and is applied if similarity and symmetry variables have been found.
The program APPLYSYM is used in connection with the program LIEPDE for
formulating and solving the conditions for point- and contact symmetries which is
described in [4]. The actual problem solving is done in all these programs through
a call to the package CRACK for solving overdetermined PDE-systems.


Introduction and overview of the symmetry method

The investigation of infinitesimal symmetries of differential equations (DEs) with
computer algebra programs has attracted considerable attention in recent years.
Corresponding programs are available in all major computer algebra systems. In
a review article by W. Hereman [1] about 200 references are given, many of them
describing related software.
One reason for the popularity of the symmetry method is the fact that Sophus Lie’s
method [2],[3] is the most widely used method for computing exact solutions of
non-linear DEs. Another reason is that the first step in this method, the formulation
of the determining equation for the generators of the symmetries, can already be
very cumbersome, especially in the case of PDEs of higher order and/or in case of
many dependent and independent variables. Also, the formulation of the conditions is a straightforward task involving only differentiations and basic algebra, an ideal task for computer algebra systems. Less straightforward is the automatic solution



of the symmetry conditions which is the strength of the program LIEPDE (for a
comparison with another program see [4]).
The novelty described in this paper is a set of programs aiming at the final third step: applying symmetries for
• calculating similarity variables to perform a point transformation which lowers the order of an ODE or effectively reduces the number of explicitly occurring independent variables of a PDE(-system), or for
• generalizing given special solutions of ODEs/PDEs with new constant parameters.
Programs which run on their own but also allow interactive user control are indispensable for these calculations. On the one hand the calculations can become quite lengthy, like variable transformations of PDEs (of higher order, with many variables). On the other hand the freedom of choosing the right linear combination of symmetries and choosing the optimal new symmetry and similarity variables makes it necessary to ‘play’ with the problem interactively.
The focus in this paper is on questions of implementation and efficiency; no fundamentally new mathematics is presented.
In the following subsections a review of the first two steps of the symmetry method is given, and the third step, the application step, is outlined. Each of the remaining sections is devoted to one procedure.
The first step: Formulating the symmetry conditions
To obey classical Lie-symmetries, differential equations

    H_A = 0                                                  (16.1)

for unknown functions y^α, 1 ≤ α ≤ p, of independent variables x^i, 1 ≤ i ≤ q, must be form-invariant against infinitesimal transformations

    x̃^i = x^i + ε ξ^i,    ỹ^α = y^α + ε η^α                 (16.2)
in first order of ε. To transform the equations (16.1) by (16.2), derivatives of y^α must be transformed, i.e. the part linear in ε must be determined. The corresponding formulas are (see e.g. [10], [20])

    ỹ^α_{j1...jk} = y^α_{j1...jk} + ε η^α_{j1...jk} + O(ε²)

    η^α_{j1...jk-1 jk} = D η^α_{j1...jk-1}/Dx^{jk} - y^α_{i j1...jk-1} D ξ^i/Dx^{jk}    (16.3)

where D/Dx^k means total differentiation w.r.t. x^k and from now on lower latin indices of functions y^α (and later u^α) denote partial differentiation w.r.t. the independent variables x^i (and later v^i). The complete symmetry condition then takes the form

    X H_A = 0    mod H_A = 0                                 (16.4)

    X = ξ^i ∂/∂x^i + η^α ∂/∂y^α + η^α_m ∂/∂y^α_m + η^α_{mn} ∂/∂y^α_{mn} + ... + η^α_{mn...p} ∂/∂y^α_{mn...p}    (16.5)

where mod HA = 0 means that the original PDE-system is used to replace some
partial derivatives of y α to reduce the number of independent variables, because
the symmetry condition (16.4) must be fulfilled identically in xi , y α and all partial
derivatives of y α .
For point symmetries, ξ i , η α are functions of xj , y β and for contact symmetries
they depend on x^j, y^β and y^β_k. We restrict ourselves to point symmetries as those are
the only ones that can be applied by the current version of the program APPLYSYM
(see below). For literature about generalized symmetries see [1].
Though the formulation of the symmetry conditions (16.4), (16.5), (16.3) is
straightforward and handled in principle by all related programs [1], the computational effort to formulate the conditions (16.4) may cause problems if the number
of xi and y α is high. This can partially be avoided if at first only a few conditions are formulated and solved such that the remaining ones are much shorter and
quicker to formulate.
A first step in this direction is to investigate one PDE HA = 0 after another, as done
in [22]. Two methods to partition the conditions for a single PDE are described by
Bocharov/Bronstein [9] and Stephani [20].
In the first method only those terms of the symmetry condition XHA = 0 are
calculated which contain at least a derivative of y α of a minimal order m. Setting
coefficients of these u-derivatives to zero provides symmetry conditions. Lowering
the minimal order m successively then gradually provides all symmetry conditions.
The second method is even more selective. If H_A is of order n then only those terms of the symmetry condition XH_A = 0 are generated which contain nth order derivatives of y^α. Furthermore these derivatives must not occur in H_A itself. They can therefore occur in the symmetry condition (16.4) only in η^α_{j1...jn}, i.e. in the terms

    η^α_{j1...jn} ∂/∂y^α_{j1...jn}.

If only coefficients of nth order derivatives of y^α need to be accurate to formulate preliminary conditions, then of the total derivatives to be taken in (16.3) only that part is performed which differentiates w.r.t. the highest y^α-derivatives. This means, for example, forming only y^α_{mnk} ∂/∂y^α_{mn} if the expression which is to be differentiated totally w.r.t. x^k contains at most second order derivatives of y^α.



The second method is applied in LIEPDE. Already the formulation of the remaining conditions is speeded up considerably through this iteration process. These
methods can be applied if systems of DEs or single PDEs of at least second order
are investigated concerning symmetries.
The second step: Solving the symmetry conditions
The second step in applying the whole method consists in solving the determining
conditions (16.4), (16.5), (16.3) which are linear homogeneous PDEs for ξ i , η α .
The complete solution of this system is not algorithmic any more because the solution of a general linear PDE-system is as difficult as the solution of its non-linear
characteristic ODE-system which is not covered by algorithms so far.
Still algorithms are used successfully to simplify the PDE-system by calculating
its standard normal form and by integrating exact PDEs if they turn up in this simplification process [4]. One problem in this respect, for example, concerns the
optimization of the symbiosis of both algorithms. By that we mean the ranking of
priorities between integrating, adding integrability conditions and doing simplifications by substitutions - all depending on the length of expressions and the overall
structure of the PDE-system. Also the extension of the class of PDEs which can be integrated exactly is a problem to be pursued further.
The program LIEPDE which formulates the symmetry conditions calls the program CRACK to solve them. This is done in a number of successive calls in order
to formulate and solve some first order PDEs of the overdetermined system first and
use their solution to formulate and solve the next subset of conditions as described
in the previous subsection. Also, LIEPDE can work on DEs that contain parametric constants and parametric functions. An ansatz for the symmetry generators can
be formulated. For more details see [4] or [17].
The procedure LIEPDE is called through
LIEPDE(problem, symtype, flist, inequ);
All parameters are lists.
The first parameter specifies the DEs to be investigated:
problem has the form {equations, ulist, xlist} where
equations is a list of equations, each has the form df(ui,..)=... where
the LHS (left hand side) df(ui,..) is selected such that
- The RHS (right hand side) of an equation must not include
the derivative on the LHS nor a derivative of it.
- Neither the LHS nor any derivative of it of any equation
may occur in any other equation.
- Each of the unknown functions occurs on the LHS of
exactly one equation.


ulist    is a list of function names, which can be chosen freely
xlist    is a list of variable names, which can be chosen freely

Equations can be given as a list of single differential expressions and then the
program will try to bring them into the ‘solved form’ df(ui,..)=... automatically. If equations are given in the solved form then the above conditions are
checked and execution is stopped if they are not satisfied. An easy way to get the
equations in the desired form is to use
FIRST SOLVE({eq1,eq2,...},{one highest derivative for each function})
(see the example of the Karpman equations in LIEPDE.TST). The example of the
Burgers equation in LIEPDE.TST demonstrates that the number of symmetries
for a given maximal order of the infinitesimal generators depends on the derivative
chosen for the LHS.
The second parameter symtype of LIEPDE is a list that specifies the symmetry to be calculated. symtype can have the following values and meanings:

{"point"}           Point symmetries with ξ^i = ξ^i(x^j, u^β), η^α = η^α(x^j, u^β) are determined.

{"contact"}         Contact symmetries with ξ^i = 0, η = η(x^j, u, u_k) are determined (u_k = ∂u/∂x^k), which is only applicable if a single equation (16.1) with an order > 1 for a single function u is to be investigated. (The symtype {"contact"} is equivalent to {"general",1} (see below) apart from the additional checks done for {"contact"}.)

{"general", order}  where order is an integer > 0. Generalized symmetries ξ^i = 0, η^α = η^α(x^j, u^β, ..., u^β_K) of the specified order are determined (where K is a multiple index representing order many indices.)
NOTE: Characteristic functions of generalized symmetries
(= η α if ξ i = 0) are equivalent if they are equal on
the solution manifold. Therefore, all dependences of
characteristic functions on the substituted derivatives
and their derivatives are dropped. For example, if the heat
equation is given as ut = uxx (i.e. ut is substituted by uxx )
then {"general",2} would not include characteristic
functions depending on utx or uxxx .
If you want to find all symmetries up to a given order then either
- avoid using HA = 0 to substitute lower order
derivatives by expressions involving higher derivatives, or
- increase the order specified in symtype.
For an illustration of this effect see the two symmetry
determinations of the Burgers equation in the file



{xi!_x1 = ..., ..., eta!_u1 = ..., ...}
It is possible to specify an ansatz for the symmetry. Such
an ansatz must specify all ξ i for all independent variables and
all η α for all dependent variables in terms of differential
expressions which may involve unknown functions/constants.
The dependences of the unknown functions have to be declared
in advance by using the DEPEND command. For example,
DEPEND f, t, x, u$
specifies f to be a function of t, x, u. If one wants to have f as
a function of derivatives of u(t, x), say f depending on utxx ,
then one cannot write
DEPEND f, df(u,t,x,2)$
but instead must write
DEPEND f, u!‘1!‘2!‘2$
assuming xlist has been specified as {t,x}. Because t is the first variable and x is the second variable in xlist, and u is differentiated once w.r.t. t and twice w.r.t. x, we therefore use u!‘1!‘2!‘2. The character ! is the escape character to allow special characters like ‘ to occur in an identifier.
For generalized symmetries one usually sets all ξ i = 0.
Then the η α are equal to the characteristic functions.

The third parameter flist of LIEPDE is a list that includes
• all parameters and functions in the equations which are to be determined
such that symmetries exist (if any such parameters/functions are specified in
flist then the symmetry conditions formulated in LIEPDE become non-linear
conditions which may be much harder for CRACK to solve with many cases
and subcases to be considered.)
• all unknown functions and constants in the ansatz xi!_.. and eta!_..
if that has been specified in symtype.
The fourth parameter inequ of LIEPDE is a list that includes all non-vanishing
expressions which represent inequalities for the functions in flist.
The result of LIEPDE is a list with 3 elements, each of which is a list:
{{con_1, con_2, ...}, {xi_... = ..., ..., eta_... = ..., ...}, {flist}}.
The first list contains remaining unsolved symmetry conditions con_i. It is the empty list {} if all conditions have been solved. The second list gives the symmetry generators, i.e. expressions for the ξ^i and η^α. The last list contains all free constants and functions occurring in the first and second list.
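As a sketch of a complete call (the Burgers-type equation, its sign conventions and the empty flist and inequ are illustrative only):

depend u,t,x$
liepde({{df(u,t) = -u*df(u,x) + df(u,x,2)}, {u}, {t,x}},
       {"point"}, {}, {});

The three sublists of the result can then be extracted with FIRST, SECOND and THIRD.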

The third step: Application of infinitesimal symmetries
If infinitesimal symmetries have been found then the program APPLYSYM can use
them for the following purposes:
1. Calculation of one symmetry variable and further similarity variables. After
transforming the DE(-system) to these variables, the symmetry variable will
not occur explicitly any more. For ODEs this has the consequence that their
order has effectively been reduced.
2. Generalization of a special solution by one or more constants of integration.
Both methods are described in the following section.


Applying symmetries with APPLYSYM

The first mode: Calculation of similarity and symmetry variables
In the following we assume that a symmetry generator X, given in (16.5), is known
such that ODE(s)/PDE(s) HA = 0 satisfy the symmetry condition (16.4). The aim
is to find new dependent functions uα = uα (xj , y β ) and new independent variables
v i = v i (xj , y β ), 1 ≤ α, β ≤ p, 1 ≤ i, j ≤ q such that the symmetry generator
X = ξ^i(x^j, y^β) ∂_{x^i} + η^α(x^j, y^β) ∂_{y^α} transforms to

    X = ∂_{v^1}.                                             (16.6)


Inverting the above transformation to xi = xi (v j , uβ ), y α = y α (v j , uβ ) and setting
HA (xi (v j , uβ ), y α (v j , uβ ), . . .) = hA (v j , uβ , . . .) this means that
0 = XHA (xi , y α , yjβ , . . .) mod HA = 0
= XhA (v i , uα , uβj , . . .) mod hA = 0
= ∂v1 hA (v i , uα , uβj , . . .) mod hA = 0.
Consequently, the variable v^1 does not occur explicitly in h_A. In the case of an ODE(-system) (v^1 = v) the new equations 0 = h_A(v, u^α, du^β/dv, ...) are then of lower total order after the transformation z = z(u^1) = du^1/dv, with now z, u^2, ..., u^p as unknown functions and u^1 as independent variable.
The new form (16.6) of X leads directly to conditions for the symmetry variable v^1 and the similarity variables v^i|_{i≠1}, u^α (all functions of x^k, y^γ):
    X v^1 = 1 = ξ^i(x^k, y^γ) ∂v^1/∂x^i + η^α(x^k, y^γ) ∂v^1/∂y^α                    (16.7)

    X v^j|_{j≠1} = X u^α = 0 = ξ^i(x^k, y^γ) ∂u^α/∂x^i + η^β(x^k, y^γ) ∂u^α/∂y^β     (16.8)
The general solutions of (16.7), (16.8) involve free functions of p+q−1 arguments.
From the general solution of equation (16.8), p + q − 1 functionally independent
special solutions have to be selected (v 2 , . . . , v p and u1 , . . . , uq ), whereas from
(16.7) only one solution v 1 is needed. Together, the expressions for the symmetry
and similarity variables must define a non-singular transformation x, y → u, v.
Different special solutions selected at this stage will result in different resulting DEs which are equivalent under point transformations but may look quite different. A transformation that is more complicated than another one will in general only complicate the new DE(s) compared with the simpler transformation. We therefore
seek the simplest possible special solutions of (16.7), (16.8). They also have to be
simple because the transformation has to be inverted to solve for the old variables
in order to do the transformations.
The following steps are performed in the corresponding mode of the program APPLYSYM:
• The user is asked to specify a symmetry by selecting one symmetry from all
the known symmetries or by specifying a linear combination of them.
• Through a call of the procedure QUASILINPDE (described in a later section) the two linear first order PDEs (16.7), (16.8) are investigated and, if
possible, solved.
• From the general solution of (16.7) one special solution is selected, and from
(16.8) p + q − 1 special solutions are selected, which should be as simple as possible.
• The user is asked whether the symmetry variable should be one of the independent variables (as it has been assumed so far) or one of the new functions
(then only derivatives of this function and not the function itself turn up in
the new DE(s)).
• Through a call of the procedure DETRAFO the transformation xi , y α →
v j , uβ of the DE(s) HA = 0 is finally done.
• The program returns to the starting menu.
The second mode: Generalization of special solutions
A second application of infinitesimal symmetries is the generalization of a known
special solution given in implicit form through 0 = F (xi , y α ). If one knows a
symmetry variable v 1 and similarity variables v r , uα , 2 ≤ r ≤ p then v 1 can
be shifted by a constant c because of ∂v1 HA = 0 and therefore the DEs 0 =
HA (v r , uα , uβj , . . .) are unaffected by the shift. Hence from
0 = F (xi , y α ) = F (xi (v j , uβ ), y α (v j , uβ )) = F̄ (v j , uβ )

follows that
0 = F̄ (v 1 + c, v r , uβ ) = F̄ (v 1 (xi , y α ) + c, v r (xi , y α ), uβ (xi , y α ))
defines implicitly a generalized solution y α = y α (xi , c).
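The shift mechanism can be illustrated outside REDUCE with a small numeric sketch in Python (the ODE u'' = u is an assumed toy example, not from the package): since the equation does not contain its independent variable explicitly, any solution shifted by a constant is again a solution.

```python
import math

def residual(u, v, h=1e-4):
    # Residual of the toy ODE 0 = u'' - u, whose independent variable v
    # does not occur explicitly; u'' is taken by a central difference.
    upp = (u(v + h) - 2*u(v) + u(v - h)) / h**2
    return upp - u(v)

# u = exp(v) solves the ODE, and so does any shifted copy u = exp(v + c):
print(abs(residual(math.exp, 0.3)) < 1e-5)                      # True
print(abs(residual(lambda v: math.exp(v + 1.7), 0.3)) < 1e-4)   # True
```

This is exactly the property exploited above: shifting the symmetry variable v^1 by c leaves the transformed DEs unchanged.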
This generalization works only if ∂_v1 F̄ ≠ 0 and if F̄ does not already contain
an additive constant in v^1.
The method above requires knowing x^i = x^i(u^β, v^j), y^α = y^α(u^β, v^j) and
u^α = u^α(x^j, y^β), v^i = v^i(x^j, y^β), which may be practically impossible.
It is better to integrate x^i, y^α along X:
dx̄^i/dε = ξ^i(x̄^j(ε), ȳ^β(ε)),    dȳ^α/dε = η^α(x̄^j(ε), ȳ^β(ε))          (16.9)
with initial values x̄^i = x^i, ȳ^α = y^α for ε = 0. (This ODE-system is the characteristic system of (16.7).)
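When no closed-form integration is at hand, this ODE-system can also be integrated numerically. A minimal sketch in Python (not part of REDUCE; the scaling symmetry ξ = x, η = y is an assumed toy example) integrates the flow with a classical Runge-Kutta scheme and compares it with the exact finite transformation x̄ = x e^ε, ȳ = y e^ε:

```python
import math

def flow(xi, eta, x0, y0, eps, steps=1000):
    """Integrate dx/de = xi(x, y), dy/de = eta(x, y) from e = 0 to e = eps
    with the classical fourth-order Runge-Kutta scheme."""
    h = eps / steps
    x, y = x0, y0
    for _ in range(steps):
        k1x, k1y = xi(x, y), eta(x, y)
        k2x, k2y = xi(x + h*k1x/2, y + h*k1y/2), eta(x + h*k1x/2, y + h*k1y/2)
        k3x, k3y = xi(x + h*k2x/2, y + h*k2y/2), eta(x + h*k2x/2, y + h*k2y/2)
        k4x, k4y = xi(x + h*k3x, y + h*k3y), eta(x + h*k3x, y + h*k3y)
        x += h*(k1x + 2*k2x + 2*k3x + k4x)/6
        y += h*(k1y + 2*k2y + 2*k3y + k4y)/6
    return x, y

# Toy scaling symmetry X = x d/dx + y d/dy, i.e. xi = x, eta = y;
# the exact finite transformation is xbar = x*e**eps, ybar = y*e**eps.
xb, yb = flow(lambda x, y: x, lambda x, y: y, 2.0, 3.0, 0.5)
print(abs(xb - 2.0*math.exp(0.5)) < 1e-8)   # True
```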
Knowing only the finite transformations

x̄^i = x̄^i(x^j, y^β, ε),    ȳ^α = ȳ^α(x^j, y^β, ε)

gives immediately the inverse transformation x^i = x^i(x̄^j, ȳ^β, ε),
y^α = y^α(x̄^j, ȳ^β, ε) just by ε → −ε and renaming x^i, y^α ↔ x̄^i, ȳ^α.
The special solution 0 = F(x^i, y^α) is generalized by the new constant ε through

0 = F(x^i, y^α) = F(x^i(x̄^j, ȳ^β, ε), y^α(x̄^j, ȳ^β, ε))

after dropping the bars.
The steps performed in the corresponding mode of the program APPLYSYM show
features of both techniques:
• The user is asked to specify a symmetry by selecting one symmetry from all
the known symmetries or by specifying a linear combination of them.
• The special solution to be generalized and the name of the new constant have
to be put in.
• Through a call of the procedure QUASILINPDE, the PDE (16.7) is solved
which amounts to a solution of its characteristic ODE system (16.9) where
v 1 = ε.
• QUASILINPDE returns a list of constant expressions
ci = ci (xk , y β , ε), 1 ≤ i ≤ p + q


which are solved for xj = xj (ci , ε), y α = y α (ci , ε) to obtain the generalized solution through
0 = F (xj , y α ) = F (xj (ci (xk , y β , 0), ε), y α (ci (xk , y β , 0), ε)).


• The new solution is available for further generalizations w.r.t. other symmetries.

If one would like to generalize a given special solution with m new constants because m symmetries are known, then one could either run the whole program m times, each time with a different symmetry, or run the program once with a linear combination of m symmetry generators, which again is a symmetry generator.
Running the program once adds one constant, but in addition we have m − 1 arbitrary constants in the linear combination of the symmetries, so m new constants are added. Usually one will generalize the solution gradually, since solving (16.9) becomes gradually more difficult.
The call of APPLYSYM is APPLYSYM({de, fun, var}, {sym, cons});
• de is a single DE or a list of DEs in the form of a vanishing expression or in
the form . . . = . . . .
• fun is the single function or the list of functions occurring in de.
• var is the single variable or the list of variables in de.
• sym is a linear combination of all symmetries, each with a different constant
coefficient, in the form of a list of the ξ^i and η^α: {xi_. . . =. . . ,. . . ,eta_. . . =. . . ,. . . },
where the indices after ‘xi_’ are the variable names and after ‘eta_’ the function names.
• cons is the list of constants in sym, one constant for each symmetry.
The list that is the first argument of APPLYSYM is the same as the first argument of
LIEPDE and the second argument is the list that LIEPDE returns without its first
element (the unsolved conditions). An example is given below.
What APPLYSYM returns depends on the mode last performed. After mode 1
the return is
{{newde, newfun, newvar}, trafo}
• newde lists the transformed equation(s)
• newfun lists the new function name(s)
• newvar lists the new variable name(s)
• trafo lists the transformations xi = xi (v j , uβ ), y α = y α (v j , uβ )
After mode 2, APPLYSYM returns the generalized special solution.

Example: A second order ODE
Weyl’s class of solutions of Einstein’s field equations consists of axially symmetric
time-independent metrics of the form

ds² = e^(−2U) [e^(2k) (dρ² + dz²) + ρ² dϕ²] − e^(2U) dt²,

where U and k are functions of ρ and z. If one is interested in generalizing these
solutions to have a time dependence, then the resulting DEs can be transformed such
that a single (lengthy) third order ODE for U results which contains only ρ-derivatives
[23]. Because U does not appear itself but only as a derivative, a substitution

g = dU/dρ                                                               (16.13)

lowers the order, and the introduction of a function

h = ρg − 1                                                              (16.14)

simplifies the ODE to

0 = 3ρ²h h′′ − 5ρ²h′² + 5ρ h h′ − 20ρ h³h′ − 20h⁴ + 16h⁶ + 4h²          (16.15)

where ′ = d/dρ. Calling LIEPDE through
depend h,r;
prob:={{3*df(h,r,2)*h*r**2 - 5*df(h,r)**2*r**2 - 20*df(h,r)*h**3*r
        + 5*df(h,r)*h*r + 16*h**6 - 20*h**4 + 4*h**2},
       {h}, {r}};
sym:=liepde(prob, {"point"},{},{});

gives

sym := {{}, {xi_r= - c10*r**3 - c11*r, eta_h=c10*h*r**2}, {c10,c11}}.

All conditions have been solved because the first element of sym is {}. The two
existing symmetries are therefore
−ρ³∂_ρ + hρ²∂_h    and    ρ∂_ρ .


Corresponding finite transformations can be calculated with APPLYSYM through
newde:=applysym(prob,rest sym);

The interactive session is given below with the user input following the prompt
‘Input:3:’ or following ‘?’. (Empty lines have been deleted.)



Do you want to find similarity and symmetry variables (enter ‘1;’)
or generalize a special solution with new parameters (enter ‘2;’)
or exit the program
(enter ‘;’)
Input:3: 1;

We enter ‘1;’ because we want to reduce dependencies by finding similarity variables and one symmetry variable and then doing the transformation such that the
symmetry variable does not explicitly occur in the DE.
The 1. symmetry is:

xi_r= - r**3
eta_h=h*r**2

----------------------

The 2. symmetry is:

xi_r= - r

----------------------

Which single symmetry or linear combination of symmetries
do you want to apply?
Enter an expression with ‘sy_(i)’ for the i’th symmetry.
Input:3: sy_(1);

We could have entered ‘sy_(2);’ or a combination of both as well, with the calculation then running differently.
The symmetry to be applied in the following is
{xi_r= - r**3, eta_h=h*r**2}
Enter the name of the new dependent variables:
Input:3: u;
Enter the name of the new independent variables:
Input:3: v;

This was the input part, now the real calculation starts.
The ODE/PDE (-system) under investigation is :
0 = 3*df(h,r,2)*h*r**2 - 5*df(h,r)**2*r**2 - 20*df(h,r)*h**3*r
    + 5*df(h,r)*h*r + 16*h**6 - 20*h**4 + 4*h**2
for the function(s) : h.
It will be looked for a new dependent variable u
and an independent variable v such that the transformed
de(-system) does not depend on u or v.
1. Determination of the similarity variable
The quasilinear PDE: 0 = r**2*(df(u_,h)*h - df(u_,r)*r).
The equivalent characteristic system:

0= - df(u_,r)*r
0= - r *(df(h,r)*r + h)
for the functions: h(r)


The PDE is equation (16.8).
The general solution of the PDE is given through
0 = ff(u_,h*r)
with arbitrary function ff(..).
A suggestion for this function ff provides:
0 = - h*r + u_
Do you like this choice? (Y or N)

For the following calculation only a single special solution of the PDE is necessary and this has to be specified from the general solution by choosing a special
function ff. (This function is called ff to prevent a clash with names of user
variables/functions.) In principle any choice of ff would work, if it defines a nonsingular coordinate transformation, i.e. here r must be a function of u_. If we have
q independent variables and p functions of them then ff has p + q arguments.
Because of the condition 0 =ff one has essentially the freedom of choosing a
function of p + q − 1 arguments freely. This freedom is also necessary to select
p + q − 1 different functions ff and to find as many functionally independent solutions u_ which all become the new similarity variables. q of them become the
new functions uα and p − 1 of them the new variables v 2 , . . . , v p . Here we have
p = q = 1 (one single ODE).
Though the program could have done that alone, once the general solution ff(..)
is known, the user can interfere here to enter a simpler solution, if possible.
2. Determination of the symmetry variable
The quasilinear PDE: 0 = df(u_,h)*h*r**2 - df(u_,r)*r**3 - 1.
The equivalent characteristic system:
0=df(r,u_) + r**3
0=df(h,u_) - h*r**2
for the functions: r(u_) h(u_) .
New attempt with a different independent variable
The equivalent characteristic system:
0=df(u_,h)*h*r**2 - 1
0=r**2*(df(r,h)*h + r)
for the functions: r(h) u_(h) .
The general solution of the PDE is given through

0 = ff(h*r, ( - 2*h**2*r**2*u_ + h**2)/2)

with arbitrary function ff(..).
A suggestion for this function ff(..) yields:

0 = h**2*( - 2*r**2*u_ + 1)/2

Do you like this choice? (Y or N)

Similar to above.
The suggested solution of the algebraic system which will
do the transformation is:
Is the solution ok? (Y or N)
In the intended transformation shown above the dependent
variable is u and the independent variable is v.
The symmetry variable is v, i.e. the transformed expression
will be free of v.
Is this selection of dependent and independent variables ok? (Y or N)

We so far assumed that the symmetry variable is one of the new variables but, of
course, we could also choose it to be one of the new functions. If it is one of the
functions then only derivatives of this function occur in the new DE, not the
function itself. If it is one of the variables then this variable will not occur
explicitly in the new DE(s).
In our case we prefer (without strong reason) to have the function as symmetry variable. We therefore answered with ‘no’. As a consequence, u and v will exchange
names such that still all new functions have the name u and the new variables have
name v:
Please enter a list of substitutions. For example, to
make the variable, which is so far call u1, to an
independent variable v2 and the variable, which is
so far called v2, to an dependent variable u1,
enter: ‘{u1=v2, v2=u1};’
Input:3: {u=v,v=u};
The transformed equation which should be free of u:
0=3*df(u,v,2)*v - 16*df(u,v)**3*v**6 - 20*df(u,v)**2*v**3 + 5*df(u,v)
Do you want to find similarity and symmetry variables (enter ‘1;’)
or generalize a special solution with new parameters (enter ‘2;’)
or exit the program
(enter ‘;’)
Input:3: ;
We stop here. The following is returned from our APPLYSYM call:
{{{3*df(u,v,2)*v - 16*df(u,v)**3*v**6 - 20*df(u,v)**2*v**3 + 5*df(u,v)},
  {u}, {v}},
 {r=1/(sqrt(u)*sqrt(2)), h=sqrt(u)*sqrt(2)*v }}
The use of APPLYSYM effectively provided us with the finite transformation

ρ = (2u)^(−1/2) ,    h = (2u)^(1/2) v                                   (16.17)


and the new ODE
0 = 3u′′v − 16u′³v⁶ − 20u′²v³ + 5u′                                     (16.18)


where u = u(v) and ′ = d/dv. Using one symmetry we reduced the second order ODE (16.15)
to a first order ODE (16.18) for u′ plus one integration. The second symmetry can be used
to reduce the remaining ODE to an integration too by introducing a variable w through
v³ d/dv = d/dw, i.e. w = −1/(2v²). With

p = du/dw                                                               (16.19)

the remaining ODE is

0 = 3w dp/dw + 2p(p + 1)(4p + 1)

with solution

c̃ w⁻²/4 = c̃ v⁴ = p³(p + 1)/(4p + 1)⁴ ,    c̃ = const.
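The quadrature can be checked numerically. Solving the last relation for w gives w(p) = C p^(-3/2)(p + 1)^(-1/2)(4p + 1)², with C absorbing c̃; a short Python sketch (an illustration added here, not part of the package) verifies that this satisfies the first order ODE 0 = 3w dp/dw + 2p(p + 1)(4p + 1), written equivalently as 0 = 3w(p) + 2p(p + 1)(4p + 1) w′(p):

```python
def w(p, C=1.0):
    # The implicit solution solved for w: c*w**(-2)/4 = p**3*(p+1)/(4*p+1)**4
    # gives w = C * p**(-3/2) * (p+1)**(-1/2) * (4*p+1)**2, C absorbing c.
    return C * p**-1.5 * (p + 1)**-0.5 * (4*p + 1)**2

def residual(p, h=1e-6):
    # 0 = 3*w*dp/dw + 2*p*(p+1)*(4*p+1) is equivalent to
    # 0 = 3*w(p) + 2*p*(p+1)*(4*p+1)*w'(p); w' by central difference.
    wp = (w(p + h) - w(p - h)) / (2*h)
    return 3*w(p) + 2*p*(p + 1)*(4*p + 1)*wp

print(all(abs(residual(p)) < 1e-4 for p in (0.5, 1.0, 2.0)))   # True
```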

Writing (16.19) as p = v³(du/dp)/(dv/dp) we get u by integration and with (16.17)
further a parametric solution for ρ, h:

ρ = 3c1²(2p − 1)/(p^(1/2)(p + 1)^(1/2)) + c2

h = (c2 p^(1/2)(p + 1)^(1/2) + 6c1²p − 3c1²)^(1/2) p^(1/2) / (c1(4p + 1))

where c1, c2 = const. and c1 = c̃^(1/4). Finally, the metric function U(p) is obtained as an
integral from (16.13), (16.14).



Limitations of APPLYSYM
Restrictions of the applicability of the program APPLYSYM result from limitations of the
program QUASILINPDE described in a section below. Essentially this means that symmetry generators may only be polynomially non-linear in x^i, y^α. Though even then
solvability cannot be guaranteed, the generators of Lie symmetries are mostly very simple,
so that the resulting PDE (16.22) and the corresponding characteristic ODE-system have
a good chance of being solvable.
Apart from these limitations, implied by the solution of differential equations with
CRACK and algebraic equations with SOLVE, the program APPLYSYM itself is free
of restrictions, i.e. if new versions of CRACK and SOLVE became available,
APPLYSYM would not have to be changed.
Currently, whenever a computational step could not be performed the user is informed and
has the possibility of entering interactively the solution of the unsolved algebraic system
or the unsolved linear PDE.


Solving quasilinear PDEs

The content of QUASILINPDE
The generalization of special solutions of DEs as well as the computation of similarity
and symmetry variables involve the general solution of single first order linear PDEs. The
procedure QUASILINPDE is a general procedure aiming at the general solution of PDEs
a1(wi, φ)φw1 + a2(wi, φ)φw2 + . . . + an(wi, φ)φwn = b(wi, φ)            (16.22)


in n independent variables wi , i = 1 . . . n for one unknown function φ = φ(wi ).
1. The first step in solving a quasilinear PDE (16.22) is the formulation of the corresponding characteristic ODE-system
dwi/dε = ai(wj, φ)                                                       (16.23)
dφ/dε = b(wj, φ)                                                         (16.24)
for φ, wi regarded now as functions of one variable ε.
Because the ai and b do not depend explicitly on ε, one of the equations
(16.23), (16.24) with non-vanishing right hand side can be used to divide all
the others through it, thereby obtaining a system with one ODE less to solve.
If the equation to divide through is one of (16.23) then the remaining system is
dwi/dwk = ai/ak ,   i = 1, 2, . . . , k − 1, k + 1, . . . , n            (16.25)
dφ/dwk = b/ak                                                            (16.26)
with the independent variable wk instead of ε. If instead we divide through equation
(16.24) then the remaining system would be



dwi/dφ = ai/b ,   i = 1, 2, . . . , n                                    (16.27)

with the independent variable φ instead of ε.
The equation to divide through is chosen by a subroutine with a heuristic to find the
“simplest” non-zero right hand side (ak or b), i.e. one which
• is constant or
• depends only on one variable or
• is a product of factors, each of which depends only on one variable.
One purpose of this division is to reduce the number of ODEs by one. Secondly,
the general solution of (16.23), (16.24) involves an additive constant in ε which is
not relevant and would have to be set to zero. By dividing through one ODE we
eliminate ε and avoid the problem of having to identify this constant in the general
solution and set it to zero.
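The effect of the division can be seen on a small assumed example (a Python illustration, outside REDUCE): for the PDE 0 = w1·φ_w1 + w2·φ_w2 the characteristic system is dw1/dε = w1, dw2/dε = w2; dividing through the first equation gives dw2/dw1 = w2/w1 with constant of integration c = w2/w1, so φ = F(w2/w1) for an arbitrary F. A finite-difference check for the choice F(c) = c²:

```python
def phi(x, y):
    # phi = F(w2/w1) with F(c) = c**2 (here x stands for w1, y for w2);
    # c = y/x is the constant of integration of dy/dx = y/x, obtained
    # after dividing the characteristic system dx/de = x, dy/de = y
    # through its first equation, which eliminates the parameter e.
    return (y / x)**2

def pde_residual(x, y, h=1e-6):
    # x*phi_x + y*phi_y, the left hand side of the PDE, by central differences
    phix = (phi(x + h, y) - phi(x - h, y)) / (2*h)
    phiy = (phi(x, y + h) - phi(x, y - h)) / (2*h)
    return x*phix + y*phiy

print(abs(pde_residual(1.3, 2.7)) < 1e-6)   # True: F(y/x) solves the PDE
```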
2. To solve the system (16.25), (16.26) or (16.27), the procedure CRACK is called.
Although designed primarily for the solution of overdetermined PDE-systems,
CRACK can also be used to solve simple non-overdetermined ODE-systems. This
solution process is not completely algorithmic. Improved versions of CRACK could
be used without any changes to QUASILINPDE being necessary.
If the characteristic ODE-system cannot be solved in the form (16.25), (16.26) or
(16.27), then successively all other ODEs of (16.23), (16.24) with non-vanishing
right hand side are used for division until one is found such that the resulting
ODE-system can be solved completely. Otherwise the PDE cannot be solved by
QUASILINPDE.
3. If the characteristic ODE-system (16.23), (16.24) has been integrated completely
and in full generality to the implicit solution
0 = Gi(φ, wj, ck, ε),   i, k = 1, . . . , n + 1,  j = 1, . . . , n       (16.28)


then according to the general theory for solving first order PDEs, ε has to be
eliminated from one of the equations and substituted into the others, leaving n
equations. Also, the constant that turns up additively to ε is to be set to zero. Both
tasks are automatically fulfilled if, as described above, ε is eliminated from
the beginning by dividing all equations of (16.23), (16.24) through one of them.
Either way, one ends up with n equations
0 = gi(φ, wj, ck),   i, j, k = 1, . . . , n                              (16.29)


involving n constants ck .
The final step is to solve (16.29) for the ci to obtain

ci = ci(φ, w1, . . . , wn),   i = 1, . . . , n.


The final solution φ = φ(wi ) of the PDE (16.22) is then given implicitly through
0 = F (c1 (φ, wi ), c2 (φ, wi ), . . . , cn (φ, wi ))
where F is an arbitrary function with n arguments.



The call of QUASILINPDE is
QUASILINPDE(de, fun, varlist);
• de is the differential expression which vanishes due to the PDE de = 0 or, de may
be the differential equation itself in the form . . . = . . . .
• fun is the unknown function.
• varlist is the list of variables of fun.
The result of QUASILINPDE is a list of general solutions
{sol 1 , sol 2 , . . .}.
If QUASILINPDE cannot solve the PDE then it returns {}. Each solution sol i is a list of expressions
{ex 1 , ex 2 , . . .}
such that the dependent function (φ in (16.22)) is determined implicitly through an arbitrary
function F and the algebraic equation
0 = F (ex 1 , ex 2 , . . .).

Example 1:
To solve the quasilinear first order PDE
1 = xu,x +uu,y −zu,z
for the function u = u(x, y, z), the input would be
depend u,x,y,z;
de:=x*df(u,x)+u*df(u,y)-z*df(u,z) - 1;
QUASILINPDE(de, u, {x,y,z});

In this example the procedure returns
{{x/eu , zeu , u2 − 2y}},
i.e. there is one general solution (because the outer list has only one element which
itself is a list) and u is given implicitly through the algebraic equation
0 = F (x/eu , zeu , u2 − 2y)
with arbitrary function F.
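The three arguments of F can be checked numerically (a Python illustration, not part of REDUCE): they must be constant along the characteristics dx/dε = x, dy/dε = u, dz/dε = −z, du/dε = 1 of the PDE:

```python
import math

def rk4(f, state, eps, steps=2000):
    # Classical Runge-Kutta integration of the characteristic system.
    h = eps / steps
    s = list(state)
    for _ in range(steps):
        k1 = f(s)
        k2 = f([si + h/2*ki for si, ki in zip(s, k1)])
        k3 = f([si + h/2*ki for si, ki in zip(s, k2)])
        k4 = f([si + h*ki for si, ki in zip(s, k3)])
        s = [si + h*(a + 2*b + 2*c + d)/6
             for si, a, b, c, d in zip(s, k1, k2, k3, k4)]
    return s

# Characteristic system of 1 = x*u_x + u*u_y - z*u_z with state (x, y, z, u):
f = lambda s: [s[0], s[3], -s[2], 1.0]
x0, y0, z0, u0 = 1.0, 2.0, 3.0, 0.5
x, y, z, u = rk4(f, [x0, y0, z0, u0], 1.0)

# The three arguments of F: x/e**u, z*e**u, u**2 - 2*y.
inv = lambda x, y, z, u: (x/math.exp(u), z*math.exp(u), u*u - 2*y)
print(all(abs(a - b) < 1e-8
          for a, b in zip(inv(x, y, z, u), inv(x0, y0, z0, u0))))   # True
```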
Example 2:
For the linear inhomogeneous PDE
0 = yz,x +xz,y −1,


z = z(x, y)

QUASILINPDE returns the result that for an arbitrary function F, the equation

0 = F( (x + y)/e^z , e^z (x − y) )
defines the general solution for z.
Example 3:
For the linear inhomogeneous PDE (3.8) from [15]
0 = xw,x +(y + z)(w,y −w,z ),


w = w(x, y, z)

QUASILINPDE returns the result that for an arbitrary function F, the equation
0 = F (w, y + z, ln(x)(y + z) − y)
defines the general solution for w, i.e. for any function f
w = f (y + z, ln(x)(y + z) − y)
solves the PDE.
Limitations of QUASILINPDE
One restriction on the applicability of QUASILINPDE results from the program
CRACK which tries to solve the characteristic ODE-system of the PDE. So far
CRACK can be applied only to polynomially non-linear DEs, i.e. the characteristic
ODE-system (16.25), (16.26) or (16.27) may only be polynomially non-linear; in
the PDE (16.22) this means that the expressions ai and b may only be rational in wj, φ.
The task of CRACK is simplified as (16.28) does not have to be solved for wj, φ. On
the other hand (16.29) has to be solved for the ci. This gives a second restriction
coming from the REDUCE function SOLVE. Though SOLVE can be applied to
polynomial and transcendental equations, again no guarantee of solvability can
be given.


Transformation of DEs

The content of DETRAFO
Finally, after having found the finite transformations, the program APPLYSYM calls
the procedure DETRAFO to perform them. DETRAFO can also be used alone to do
point or higher order transformations, which involve a considerable computational
effort if the differential order of the expression to be transformed is high and if
many dependent and independent variables are involved. This might be especially
useful if one wants to experiment and try out different coordinate transformations
interactively, using DETRAFO as a standalone procedure.



To run DETRAFO, the old functions y α and old variables xi must be known explicitly in terms of algebraic or differential expressions of the new functions uβ and
new variables v j . Then for point transformations the identity

dy^α = (y^α,vi + y^α,uβ u^β,vi) dv^i
     = y^α,xj dx^j
     = y^α,xj (x^j,vi + x^j,uβ u^β,vi) dv^i


provides the transformation

y^α,xj = (dy^α/dv^i) / (dx^j/dv^i)

with det(dx^j/dv^i) ≠ 0 because of the regularity of the transformation, which is
checked by DETRAFO. Non-regular transformations are not performed.
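For a single independent variable the quotient formula reduces to y,x = (dy/dv)/(dx/dv), which is easy to check numerically. A Python sketch (the transformation x = e^v, y = u e^v with u = sin v is an assumed toy example, not produced by DETRAFO):

```python
import math

# Assumed toy point transformation: the old variables x = exp(v),
# y = u*exp(v) are given in terms of the new ones, with u = sin(v).
u, du = math.sin, math.cos          # u and its derivative u,v

def y_x_via_quotient(v):
    dy_dv = du(v)*math.exp(v) + u(v)*math.exp(v)   # y,v + y,u * u,v
    dx_dv = math.exp(v)                            # x,v
    return dy_dv / dx_dv

def y_x_direct(v, h=1e-6):
    # Eliminate v: x = exp(v) gives v = ln(x), so y(x) = sin(ln(x))*x;
    # differentiate y(x) numerically as a cross-check.
    x = math.exp(v)
    y = lambda t: math.sin(math.log(t))*t
    return (y(x + h) - y(x - h)) / (2*h)

print(abs(y_x_via_quotient(0.5) - y_x_direct(0.5)) < 1e-6)   # True
```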
DETRAFO is not restricted to point transformations. In the case of contact or
higher order transformations, the total derivatives dy^α/dv^i and dx^j/dv^i then only
include all v^i-derivatives of u^β which occur in
y α = y α (v i , uβ , uβ ,vj , . . .)
xk = xk (v i , uβ , uβ ,vj , . . .).
The call of DETRAFO is
DETRAFO({ex1 , ex2 , . . . , exm },
{ofun1 =fex1 , ofun2 =fex2 , . . . ,ofunp =fexp },
{ovar1 =vex1 , ovar2 =vex2 , . . . , ovarq =vexq },
{nfun1 , nfun2 , . . . , nfunp },
{nvar1 , nvar2 , . . . , nvarq });
where m, p, q are arbitrary.
• The exi are differential expressions to be transformed.
• The second list is the list of old functions ofun expressed as expressions fex
in terms of new functions nfun and new independent variables nvar.
• Similarly, the third list expresses the old independent variables ovar as expressions vex in terms of new functions nfun and new independent variables nvar.
• The last two lists include the new functions nfun and new independent variables nvar.
Names for ofun, ovar, nfun and nvar can be arbitrarily chosen.
As the result DETRAFO returns the first argument of its input, i.e. the list
{ex 1 , ex 2 , . . . , ex m }
where all ex i are transformed.
Limitations of DETRAFO
The only requirement is that the old independent variables xi and old functions
y α must be given explicitly in terms of new variables v j and new functions uβ
as indicated in the syntax. Then all calculations involve only differentiations and
basic algebra.

[1] W. Hereman, Chapter 13 in vol 3 of the CRC Handbook of Lie Group Analysis of Differential Equations, Ed.: N.H. Ibragimov, CRC Press, Boca Raton,
Florida (1995). Systems described in this paper are among others:
DELiA (Alexei Bocharov) Pascal
DIFFGROB2 (Liz Mansfield) Maple
DIMSYM (James Sherring and Geoff Prince) REDUCE
HSYM (Vladimir Gerdt) Reduce
LIE (V. Eliseev, R.N. Fedorova and V.V. Kornyak) Reduce
LIE (Alan Head) muMath
Lie (Gerd Baumann) Mathematica
LIEDF/INFSYM (Peter Gragert and Paul Kersten) Reduce
Liesymm (John Carminati, John Devitt and Greg Fee) Maple
MathSym (Scott Herod) Mathematica
NUSY (Clara Nucci) Reduce
PDELIE (Peter Vafeades) Macsyma
SPDE (Fritz Schwarz) Reduce and Axiom
SYM_DE (Stanly Steinberg) Macsyma
Symmgroup.c (Dominique Berube and Marc de Montigny) Mathematica
STANDARD FORM (Gregory Reid and Alan Wittkopf) Maple
SYMCAL (Gregory Reid) Macsyma and Maple
SYMMGRP.MAX (Benoit Champagne, Willy Hereman and Pavel Winternitz) Macsyma
LIE package (Khai Vu) Maple


Toolbox for symmetries (Mark Hickman) Maple
Lie symmetries (Jeffrey Ondich and Nick Coult) Mathematica.

[2] S. Lie, Sophus Lie’s 1880 Transformation Group Paper, Translated by M.
Ackerman, comments by R. Hermann, Mathematical Sciences Press, Brookline, (1975).
[3] S. Lie, Differentialgleichungen, Chelsea Publishing Company, New York,
[4] T. Wolf, An efficiency improved program LIEPDE for determining Lie - symmetries of PDEs, Proceedings of the workshop on Modern group theory methods in Acireale (Sicily) Nov. (1992)
[5] C. Riquier, Les systèmes d’équations aux dérivées partielles, Gauthier–
Villars, Paris (1910).
[6] J. Thomas, Differential Systems, AMS, Colloquium publications, v. 21,
N.Y. (1937).
[7] M. Janet, Leçons sur les systèmes d’équations aux dérivées, Gauthier–Villars,
Paris (1929).
[8] V.L. Topunov, Reducing Systems of Linear Differential Equations to a Passive
Form, Acta Appl. Math. 16 (1989) 191–206.
[9] A.V. Bocharov and M.L. Bronstein, Efficiently Implementing Two Methods
of the Geometrical Theory of Differential Equations: An Experience in Algorithm and Software Design, Acta. Appl. Math. 16 (1989) 143–166.
[10] P.J. Olver, Applications of Lie Groups to Differential Equations, SpringerVerlag New York (1986).
[11] G.J. Reid, A triangularization algorithm which determines the Lie symmetry
algebra of any system of PDEs, J.Phys. A: Math. Gen. 23 (1990) L853-L859.
[12] F. Schwarz, Automatically Determining Symmetries of Partial Differential
Equations, Computing 34, (1985) 91-106.
[13] W.I. Fushchich and V.V. Kornyak, Computer Algebra Application for Determining Lie and Lie–Bäcklund Symmetries of Differential Equations,
J. Symb. Comp. 7 (1989) 611–619.
[14] E. Kamke, Differentialgleichungen, Lösungsmethoden und Lösungen, Band
1, Gewöhnliche Differentialgleichungen, Chelsea Publishing Company, New
York, 1959.
[15] E. Kamke, Differentialgleichungen, Lösungsmethoden und Lösungen, Band
2, Partielle Differentialgleichungen, 6. Aufl., Teubner, Stuttgart.

[16] T. Wolf, An Analytic Algorithm for Decoupling and Integrating systems of
Nonlinear Partial Differential Equations, J. Comp. Phys., no. 3, 60 (1985)
437-446 and, Zur analytischen Untersuchung und exakten Lösung von Differentialgleichungen mit Computeralgebrasystemen, Dissertation B, Jena
[17] T. Wolf, A. Brand, The Computer Algebra Package CRACK for Investigating
PDEs, Manual for the package CRACK in the REDUCE network library and
in Proceedings of ERCIM School on Partial Differential Equations and Group
Theory, April 1992 in Bonn, GMD Bonn.
[18] M.A.H. MacCallum, F.J. Wright, Algebraic Computing with REDUCE,
Clarendon Press, Oxford (1991).
[19] M.A.H. MacCallum, An Ordinary Differential Equation Solver for REDUCE, Proc. ISSAC’88, Springer Lect. Notes in Comp. Sci. 358, 196–205.
[20] H. Stephani, Differential equations, Their solution using symmetries, Cambridge University Press (1989).
[21] V.I. Karpman, Phys. Lett. A 136, 216 (1989)
[22] B. Champagne, W. Hereman and P. Winternitz, The computer calculation of Lie point symmetries of large systems of differential equations,
Comp. Phys. Comm. 66, 319-340 (1991)
[23] M. Kubitza, private communication




ARNUM: An algebraic number package

This package provides facilities for handling algebraic numbers as polynomial coefficients in REDUCE calculations. It includes facilities for introducing indeterminates to represent algebraic numbers, for calculating splitting fields, and for factoring and finding greatest common divisors in such domains.
Author: Eberhard Schrüfer.
Algebraic numbers are the solutions of an irreducible polynomial over some
ground domain. The algebraic number i (the imaginary unit), for example, would
be defined by the polynomial i² + 1. The arithmetic of algebraic numbers can be
viewed as polynomial arithmetic modulo the defining polynomial.
Given a defining polynomial for an algebraic number a
an + pn−1 an−1 + ... + p0
All algebraic numbers which can be built up from a are then of the form:
rn−1 an−1 + rn−2 an−2 + ... + r0
where the rj ’s are rational numbers.
The operation of addition is defined by
(rn−1 an−1 + rn−2 an−2 + ...) + (sn−1 an−1 + sn−2 an−2 + ...) =
(rn−1 + sn−1 )an−1 + (rn−2 + sn−2 )an−2 + ...
Multiplication of two algebraic numbers can be performed by normal polynomial
multiplication followed by a reduction of the result with the help of the defining
polynomial:

(rn−1 an−1 + rn−2 an−2 + ...) × (sn−1 an−1 + sn−2 an−2 + ...)
  = rn−1 sn−1 a2n−2 + ...   modulo an + pn−1 an−1 + ... + p0
  = qn−1 an−1 + qn−2 an−2 + ...

Division of two algebraic numbers r and s yields another algebraic number q:

r/s = q,  or  r = q s.

The last equation written out explicitly reads
(rn−1 an−1 + rn−2 an−2 + . . .)
= (qn−1 an−1 + qn−2 an−2 + . . .) × (sn−1 an−1 + sn−2 an−2 + . . .)
modulo(an + pn−1 an−1 + . . .)
= (tn−1 an−1 + tn−2 an−2 + . . .)

The ti are linear in the qj . Equating equal powers of a yields a linear system for
the quotient coefficients qj .
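For the quadratic extension defined by a² − 2 these rules can be written down in a few lines. The following Python sketch (an illustration of the arithmetic only, not the ARNUM implementation or its standard-form data structure) represents an algebraic number r0 + r1·a as a pair and solves the 2-by-2 linear system for the quotient coefficients:

```python
from fractions import Fraction as F

# Elements of Q(sqrt(2)) as pairs (r0, r1) meaning r0 + r1*a with a**2 = 2.
def add(r, s):
    return (r[0] + s[0], r[1] + s[1])

def mul(r, s):
    # polynomial product (r0 + r1*a)*(s0 + s1*a), reduced with a**2 -> 2
    return (r[0]*s[0] + 2*r[1]*s[1], r[0]*s[1] + r[1]*s[0])

def div(r, s):
    # r = q*s: equating the coefficients of 1 and a gives the linear system
    #   s0*q0 + 2*s1*q1 = r0
    #   s1*q0 +   s0*q1 = r1
    # whose determinant s0**2 - 2*s1**2 is nonzero for rational s != 0.
    det = s[0]*s[0] - 2*s[1]*s[1]
    return ((r[0]*s[0] - 2*r[1]*s[1]) / det,
            (r[1]*s[0] - r[0]*s[1]) / det)

one_plus, one_minus = (F(1), F(1)), (F(1), F(-1))   # 1 + sqrt2, 1 - sqrt2
print(mul(one_plus, one_minus))      # (1+a)*(1-a) = 1 - a**2 = -1
print(div((F(1), F(0)), one_plus))   # 1/(1+sqrt2) = sqrt2 - 1, i.e. (-1, 1)
```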
With this, all field operations for the algebraic numbers are available. The translation into algorithms is straightforward. For an implementation we have to decide
on a data structure for an algebraic number. We have chosen the representation
REDUCE normally uses for polynomials, the so-called standard form. Since our
polynomials have in general rational coefficients, we must allow for a rational number domain inside the algebraic number.
< algebraic number > ::=
:ar: . < univariate polynomial over the rationals >
< univariate polynomial over the rationals > ::=
< variable > .** < ldeg > .* < rational > .+ < reductum >
< ldeg > ::= integer
< rational > ::=
:rn: . < integer numerator > . < integer denominator > : integer
< reductum > ::= < univariate polynomial > : < rational > : nil
This representation allows us to use the REDUCE functions for adding and multiplying polynomials on the tail of the tagged algebraic number. Also, the routines
for solving linear equations can easily be used for the calculation of quotients.
We are still left with the problem of introducing a particular algebraic number. In
the current version this is done by giving the defining polynomial to the statement
defpoly. The algebraic number sqrt(2), for example, can be introduced by
defpoly sqrt2**2 - 2;
This statement associates a simplification function for the translation of the variable in the defining polynomial into its tagged internal form and also generates a
power reduction rule used by the operations times and quotient for the reduction
of their result modulo the defining polynomial. A basis for the representation of
an algebraic number is also set up by the statement. In the working version, the
basis is a list of powers of the indeterminate of the defining polynomial up to one
less than its degree. Experiments with integral bases, however, have been very
encouraging, and these bases might be available in a later version. If the defining
polynomial is not monic, it will be made so by an appropriate substitution.
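The power reduction rule can be illustrated outside REDUCE as well. The following Python sketch (a sketch of the reduction idea, not ARNUM's implementation) multiplies two elements given as coefficient lists and reduces the product modulo a monic defining polynomial by repeatedly replacing a**n with minus the lower-order part; the cbrt2 example with defining polynomial a**3 − 2 is an assumed one:

```python
def polymul_mod(r, s, m):
    """Multiply two algebraic numbers given as coefficient lists (lowest
    degree first) and reduce the product modulo the monic defining
    polynomial whose non-leading coefficients are listed in m."""
    prod = [0]*(len(r) + len(s) - 1)
    for i, ri in enumerate(r):
        for j, sj in enumerate(s):
            prod[i + j] += ri*sj
    n = len(m)                       # degree of the defining polynomial
    while len(prod) > n:
        c = prod.pop()               # leading coefficient of degree len(prod)
        k = len(prod) - n
        for i in range(n):           # a**n -> -(m[0] + m[1]*a + ...)
            prod[k + i] -= c*m[i]
    return prod

# cbrt2 with defining polynomial a**3 - 2 (m = [-2, 0, 0], so a**3 -> 2):
print(polymul_mod([1, 1, 1], [1, 1], [-2, 0, 0]))   # [3, 2, 2]
# sqrt2 with defining polynomial a**2 - 2: a*a reduces to 2.
print(polymul_mod([0, 1], [0, 1], [-2, 0]))         # [2, 0]
```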
Example 1

defpoly sqrt2**2-2;

1/(sqrt2+1);

sqrt2 - 1

(x**2+2*sqrt2*x+2)/(x+sqrt2);

x + sqrt2

on gcd;

(x**3+(sqrt2-2)*x**2-(2*sqrt2+3)*x-3*sqrt2)/(x**2-2);

(x**2 - 2*x - 3)/(x - sqrt2)

off gcd;

sqrt(x**2-2*sqrt2*x*y+2*y**2);

abs(x - sqrt2*y)
Until now we have dealt with only a single algebraic number. In practice this is not
sufficient as very often several algebraic numbers appear in an expression. There
are two possibilities for handling this: one can use multivariate extensions [2] or
one can construct a defining polynomial that contains all specified extensions. This
package implements the latter case (the so-called primitive representation). The
algorithm we use for the construction of the primitive element is the same as given
by Trager [3]. In the implementation, multiple extensions can be given as a list
of equations to the statement defpoly, which, among other things, adds the new
extension to the previously defined one. All algebraic numbers are then expressed
in terms of the primitive element.
Example 2

defpoly sqrt2**2-2, cbrt5**3-5;

*** defining polynomial for primitive element:

a1**6 - 6*a1**4 - 10*a1**3 + 12*a1**2 - 60*a1 + 17

sqrt2;

48/1187*a1**5 + 45/1187*a1**4 - 320/1187*a1**3 - 780/1187*a1**2
 + 735/1187*a1 - 1820/1187




We can provide factorization of polynomials over the algebraic number domain by
using Trager’s algorithm. The polynomial to be factored is first mapped to a polynomial over the integers by computing the norm of the polynomial, which is the
resultant with respect to the primitive element of the polynomial and the defining
polynomial. After factoring over the integers, the factors over the algebraic number
field are recovered by GCD calculations.
Example 3
defpoly a**2-5;
on factor;
x**2 + x - 1;
(x + (1/2*a + 1/2))*(x - (1/2*a - 1/2))
We have also incorporated a function split_field for the calculation of a primitive
element of minimal degree for which a given polynomial splits into linear factors.
The algorithm as described in Trager’s article is essentially a repeated primitive
element calculation.
Example 4

split_field(x**3-3*x+7);

*** Splitting field is generated by:

a2**6 - 18*a2**4 + 81*a2**2 + 1215

{1/126*a2**4 - 5/42*a2**2 - 1/2*a2 + 2/7,

 - (1/63*a2**4 - 5/21*a2**2 + 4/7),

 1/126*a2**4 - 5/42*a2**2 + 1/2*a2 + 2/7}

for each j in ws product (x-j);

x**3 - 3*x + 7

A more complete description can be found in [1].

[1] R. J. Bradford, A. C. Hearn, J. A. Padget, and E. Schrüfer. Enlarging the
REDUCE domain of computation. In Proceedings of SYMSAC ’86, pages
100–106, 1986.
[2] James Harold Davenport. On the integration of algebraic functions. In Lecture
Notes in Computer Science, volume 102. Springer Verlag, 1981.
[3] B. M. Trager. Algebraic factoring and rational function integration. In Proceedings of SYMSAC ’76, pages 196–208, 1976.




ASSERT: Dynamic Verification of Assertions on Function Types

ASSERT allows one to add to symbolic mode RLISP code assertions that (partly)
specify the types of the arguments and results of RLISP expr procedures. These
types can be associated with functions testing the validity of the respective
arguments and results during the actual computation.
Author: Thomas Sturm.


Loading and Using

The package is loaded using load_package or load!-package in algebraic
or symbolic mode, respectively. There is a central switch assert, which is off by
default. With assert off, all type definitions and assertions described in the sequel
are ignored and have the status of comments. For verification of the assertions it
must be turned on (dynamically) before the first relevant type definition or assertion.
ASSERT aims at the dynamic analysis of RLISP expr procedures in symbolic mode.
All uses of typedef and assert discussed in the following have to take place
in symbolic mode. There is, in contrast, a final print routine assert_analyze
that is available in both symbolic and algebraic mode.


Type Definitions

Here are some examples of type definitions:

typedef any;
typedef number checked by numberp;
typedef sf checked by sfpx;
typedef sq checked by sqp;

The first one defines a type any, which cannot be checked by any function.
This is useful, e.g., for functions which admit any argument at one position but at
others rely on certain types or guarantee certain result types, e.g.,
procedure cellcnt(a);
% a is any, returns a number.
if not pairp a then 0 else cellcnt car a + cellcnt cdr a + 1;
The other ones define a type number, which can be checked by the RLISP function numberp, a type sf for standard forms, which can be checked by the function
sfpx provided by ASSERT, and similarly a type for standard quotients.

All type checking functions take one argument and return extended Boolean, i.e.,
non-nil iff their argument is of the corresponding type.



Having defined types, we can formulate assertions on expr procedures in terms of
these types:
assert cellcnt: (any) -> number;
assert addsq: (sq,sq) -> sq;
Note that on the argument side parentheses are mandatory even with only one argument. This notation is inspired by Haskell but avoids the intuition of currying.1
Assertions can be dynamically checked only for expr procedures. When making
assertions for other types of procedures, a warning is issued and the assertion has
the status of a comment.
It is important that assertions via assert come after the definitions of the used types
via typedef and also after the definitions of the procedures they make assertions about.
A natural order for adding type definitions and assertions to the source code files
would be to have all typedefs at the beginning of a module and assertions immediately after the respective functions. Fig. 16.1 illustrates this. Note that for dynamic
checking of the assertions the switch assert has to be on during the translation
of the module; i.e., either when reading it with in or during compilation. For compilation this can be achieved by commenting in the on assert at the beginning
or by parameterizing the Lisp-specific compilation scripts in a suitable way.
An alternative option is to have type definitions and assertions for specific packages
right after load_package in batch files as illustrated in Fig. 16.2.


Dynamic Checking of Assertions

Recall that with the switch assert off at translation time, all type definitions and
assertions have the status of comments. We are now going to discuss how these
statements are processed with assert on.
typedef marks the type identifier as a valid type and possibly associates the given
typechecking function with it. Technically, the property list of the type identifier is
used for both purposes.
assert encapsulates the procedure that it asserts on into another one, which

This notation has been suggested by C. Zengler



module sizetools;
load!-package ’assert;
% on assert;
typedef any;
typedef number checked by numberp;
procedure cellcnt(a);
% a is any, returns a number.
if not pairp a then 0 else cellcnt car a + cellcnt cdr a + 1;
assert cellcnt: (any) -> number;
% ...

% of file

Figure 16.1: Assertions in the source code.

load_package sizetools;
load_package assert;
on assert;
lisp <<
typedef any;
typedef number checked by numberp;
assert cellcnt: (any) -> number
>>;
% ... computations ...

% of file

Figure 16.2: Assertions in a batch file.

checks the types of the arguments and of the result to the extent that there are
typechecking functions given. Whenever some argument does not pass the test by
the typechecking function, there is a warning message issued. Furthermore, the
following numbers are counted for each asserted function:
1. The number of overall calls,
2. the number of calls with at least one assertion violation,
3. the number of assertion violations.
These numbers can be printed at any time in either symbolic or algebraic mode using
the command assert_analyze(). This command at the same time resets all
the counters.
Fig. 16.3 shows an interactive sample session.



As discussed above, the switch assert controls at translation time whether or not
assertions are dynamically checked.
There is a switch assertbreak, which is off by default. When it is on, assertion
violations do not only issue warnings but interrupt the computation with a
corresponding error.
The statistical counting of procedure calls and assertion violations is toggled by
the switch assertstatistics, which is on by default.



The encapsulating functions introduced with assertions are automatically compiled.
We have experimentally checked assertions on the standard quotient arithmetic
addsq, multsq, quotsq, invsq, negsq for the test file taylor.tst of the
TAYLOR package. For CSL we observe a slowdown of factor 3.2, and for PSL
we observe a slowdown of factor 1.8 in this particular example, where there are
323 750 function calls checked altogether.
The ASSERT package is considered an analysis and debugging tool. Production
systems should, as a rule, not run with dynamic assertion checking. For critical
applications, however, the slowdown might even be acceptable.



1: symbolic$
2* load_package assert$
3* on assert$
4* typedef sq checked by sqp;
5* assert negsq: (sq) -> sq;
+++ negsq compiled, 13 + 20 bytes
6* assert addsq: (sq,sq) -> sq;
+++ addsq compiled, 14 + 20 bytes
7* addsq(simp ’x,negsq simp ’y);
((((x . 1) . 1) ((y . 1) . -1)) . 1)
8* addsq(simp ’x,negsq numr simp ’y);
*** assertion negsq: (sq) -> sq violated by arg1 (((y . 1) . 1))
*** assertion negsq: (sq) -> sq violated by result (((y . -1) . -1))
*** assertion addsq: (sq,sq) -> sq violated by arg2 (((y . -1) . -1))
*** assertion addsq: (sq,sq) -> sq violated by result (((y . -1) . -1))
(((y . -1) . -1))
9* assert_analyze()$
function        #calls    #bad calls    #assertion violations
negsq                2             1                        2
addsq                2             1                        2

Figure 16.3: An interactive sample session.



Possible Extensions

Our assertions could also be used for a static type analysis of source code. In
that case, the type checking functions become irrelevant. On the other hand, the
introduction of various unchecked types becomes meaningful.
In a model where the source code is systematically annotated with assertions, it
is technically no problem to generalize the specification of procedure definitions
such that assertions become implicit. For instance, one could optionally admit
procedure definitions like the following:
procedure cellcnt(a:any):number;
if not pairp a then 0 else cellcnt car a + cellcnt cdr a + 1;




ASSIST: Useful utilities for various applications

ASSIST contains a large number of additional general purpose functions that allow
a user to better adapt REDUCE to various calculational strategies and to make the
programming task more straightforward and more efficient.
Author: Hubert Caprasse.



The package ASSIST contains an appreciable number of additional general purpose functions which allow one to better adapt REDUCE to various calculational
strategies, to make the programming task more straightforward and, sometimes,
more efficient.
In contrast with all other packages, ASSIST does not aim to provide either a new
facility to compute a definite class of mathematical objects or to extend the base of
mathematical knowledge of REDUCE . The functions it contains should be useful
independently of the nature of the application which is considered. They were initially written while applying REDUCE to specific problems in theoretical physics.
Most of them were designed in such a way that their applicability range is broad.
Though it was not the primary goal, efficiency has been sought whenever possible.
The source code in ASSIST contains many comments concerning the meaning
and use of the supplementary functions available in the algebraic mode. These
comments, hopefully, make the code transparent and allow a thorough exploitation
of the package. The present documentation contains a non–technical description
of it and describes the various new facilities it provides.


Survey of the Available New Facilities

An elementary help facility is available both within the MS-DOS and Windows
environments. It is independent of the help facility of REDUCE itself. It includes
two functions:
ASSIST is a function which takes no argument. If entered, it returns the information required for a proper use of ASSISTHELP.
ASSISTHELP takes one argument.
i. If the argument is the identifier assist, the function returns the information
necessary to retrieve the names of all the available functions.
ii. If the argument is an integer equal to one of the section numbers of the
present documentation, the names of the functions described in that section
are obtained.

There is, presently, no possibility to retrieve the number and the type of the
arguments of a given function.
The package contains several modules. Their content reflects closely the various
categories of facilities listed below. Some functions do already exist inside the
KERNEL of REDUCE. However, their range of applicability is extended.
• Control of Switches:
• Operations on Lists and Bags:
• Operations on Sets:
• General Purpose Utility Functions:
• Properties and Flags:
• Control Statements, Control of Environment:


• Handling of Polynomials:
• Handling of Transcendental Functions:
• Coercion from Lists to Arrays and converse:
• Handling of n-dimensional Vectors:
• Handling of Grassmann Operators:
• Handling of Matrices:
• Control of the HEPHYS package:

In the following all these functions are described.


Control of Switches

The two available functions, SWITCHES and SWITCHORG, have no arguments
and are called as if they were mere identifiers.
SWITCHES displays the current status of the most frequently used switches when
manipulating rational functions. The chosen switches are

The selection is somewhat arbitrary but it may be changed in a trivial fashion by
the user.
The new switch DISTRIBUTE allows one to put polynomials in a distributed form
(see the description below of the new functions for manipulating them).
Most of the symbolic variables !*EXP, !*DIV, . . . which have either the value
T or the value NIL are made available in the algebraic mode so that it becomes
possible to write conditional statements of the kind

IF !*EXP THEN DO ......

SWITCHORG resets the switches enumerated above to the status they had when
starting REDUCE .


Manipulation of the List Structure

Additional functions for list manipulations are provided and some already defined
functions in the kernel of REDUCE are modified to properly generalize them to
the available new structure BAG.
i. Generation of a list of length n with all its elements initialized to 0, with the
possibility to append to a list l a certain number of zeros to make it of length
n:

MKLIST n;        n is an INTEGER
MKLIST(l,n);     l is List-like, n is an INTEGER

ii. Generation of a list of sublists of length n containing p elements equal to 0
and q elements equal to 1 such that
p + q = n.
The function SEQUENCES works both in algebraic and symbolic modes.
Here is an example in the algebraic mode:

SEQUENCES 2 ; ==> {{0,0},{0,1},{1,0},{1,1}}


iii. An arbitrary splitting of a list can be done. The function SPLIT generates a
list which contains the split parts of the original list.
SPLIT({a,b,c,d},{1,1,2}); ==> {{a},{b},{c,d}}
The function ALGNLIST constructs a list which contains n copies of a list
bound to its first argument.
ALGNLIST({a,b,c,d},2); ==> {{a,b,c,d},{a,b,c,d}}
The function KERNLIST transforms any kernel into the list prefix. The
output list is a copy:
KERNLIST (<kernel>); ==> {<kernel arguments>}
Four functions to delete elements are DELETE, REMOVE, DELETE_ALL
and DELPAIR. The first two act as in symbolic mode, and the third eliminates from a given list all elements equal to its first argument. The fourth
acts on a list of pairs and eliminates from it the first pair whose first element
is equal to its first argument :
DELETE(x,{a,b,x,f,x}); ==> {a,b,f,x}
REMOVE({a,b,x,f,x},3); ==> {a,b,f,x}
DELETE_ALL(x,{a,b,x,f,x}); ==> {a,b,f}
DELPAIR(a,{{a,1},{b,2},{c,3}}); ==> {{b,2},{c,3}}

iv. The function ELMULT returns an integer which is the multiplicity of its
first argument inside the list which is its second argument. The function
FREQUENCY gives a list of pairs whose second element indicates the number of times the first element appears inside the original list:
ELMULT(x,{a,b,x,f,x}) ==> 2
FREQUENCY({a,b,c,a}); ==> {{a,2},{b,1},{c,1}}

v. The function INSERT allows one to insert a given object into a list at the
desired position.
The functions INSERT_KEEP_ORDER and MERGE_LIST allow one to
keep a given ordering when inserting one element inside a list or when merging two lists. Both have 3 arguments. The last one is the name of a binary
boolean ordering function:

ll:={1,2,3}$
INSERT(x,ll,3); ==> {1,2,x,3}
INSERT_KEEP_ORDER(5,ll,lessp); ==> {1,2,3,5}
MERGE_LIST(ll,ll,lessp); ==> {1,1,2,2,3,3}

Notice that MERGE_LIST will act correctly only if the two lists are well
ordered themselves.
vi. Algebraic lists can be read from right to left or left to right. They look symmetrical. One would like to have manipulation functions which reflect
this. So, to the already defined functions FIRST and REST are added the
functions LAST and BELAST. LAST gives the last element of the list while
BELAST gives the list without its last element.
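From these definitions one should obtain, for instance:

```
ll := {a,b,c}$
LAST ll;    ==> c
BELAST ll;  ==> {a,b}
```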
Various additional functions are provided.
The token “dot” needs a special comment. It corresponds to several different
operations.
1. If one applies it on the left of a list, it acts as the CONS function. Note
however that blank spaces are required around the dot:
4 . {a,b}; ==> {4,a,b}
2. If one applies it on the right of a list, it has the same effect as the PART
operator:
{a,b,c}.2; ==> b


3. If one applies it to 4–dimensional vectors, it acts as in the HEPHYS package.
POSITION returns the POSITION of the first occurrence of x in a list or a
message if x is not present in it.
DEPTH returns an integer equal to the number of levels where a list is found
if and only if this number is the same for each element of the list; otherwise
it returns a message telling the user that the list is of unequal depth. The
function MKDEPTH_ONE allows one to transform any list into a list of depth
equal to 1.
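Given that description, one expects MKDEPTH_ONE to flatten a nested list completely, for instance:

```
MKDEPTH_ONE {a,{b,{c}}};  ==> {a,b,c}
```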
PAIR has two arguments which must be lists. It returns a list whose elements are lists of two elements. The nth sublist contains the nth element of
the first list and the nth element of the second list. These types of lists are
called association lists, or ALISTS, in the following. To test for this type of
list a boolean function ABAGLISTP is provided. It will be discussed below.
APPENDN has any fixed number of lists as arguments. It generalizes the already existing function APPEND which accepts only two lists as arguments.
It may also be used for arbitrary kernels but, in that case, it is important to
notice that the concatenated object is always a list.
REPFIRST has two arguments. The first one is any object, the second one
is a list. It replaces the first element of the list by the object. It works like the
symbolic function REPLACA except that the original list is not destroyed.
REPREST has also two arguments. It replaces the rest of the list by its first
argument and returns the new list without destroying the original list. It is
analogous to the symbolic function REPLACD. Here are examples:

ll:={{a,b}}$
ll1:=ll.1; ==> {a,b}
ll.0; ==> list
0 . ll; ==> {0,{a,b}}
DEPTH ll; ==> 2
PAIR(ll1,ll1); ==> {{a,a},{b,b}}
REPFIRST(new,ll); ==> {new}
ll3:=APPENDN(ll1,ll1,ll1); ==> {a,b,a,b,a,b}
POSITION(b,ll3); ==> 2
REPREST(new,ll3); ==> {a,new}

The functions ASFIRST, ASLAST, ASREST, RESTASLIST, ASFLIST and
ASSLIST act on ALISTS or on lists of lists of well defined depths and
have two arguments. The first is the key object which one seeks to associate
in some way with an element of the association list which is the second argument.
ASFIRST returns the pair whose first element is equal to the first argument.
ASLAST returns the pair whose last element is equal to the first argument.
ASREST needs a list as its first argument. The function seeks the first sublist
of a list of lists (which is its second argument) equal to its first argument and
returns it.
RESTASLIST has a list of keys as its first argument. It returns the collection
of pairs which meet the criterium of ASREST.
ASFLIST returns a list containing all pairs which satisfy the criteria of the
function ASFIRST. So the output is also an association list.
ASSLIST returns a list which contains all pairs which have their second
element equal to the first argument.
Here are a few examples:

lp:={{a,1},{b,2},{c,3}}$

ASFIRST(a,lp); ==> {a,1}
ASLAST(1,lp); ==> {a,1}
ASREST({1},lp); ==> {a,1}
RESTASLIST({a,b},lp); ==> {{1},{2}}

lpp:={{a,1},{b,2},{a,1}}$

ASFLIST(a,lpp); ==> {{a,1},{a,1}}
ASSLIST(1,lpp); ==> {{a,1},{a,1}}

vii. The function SUBSTITUTE has three arguments. The first is the object to
be substituted, the second is the object which must be replaced by the first,
and the third is the list in which the substitution must be made. Substitution
is made at all levels. It is a more elementary function than SUB but its
capabilities are less. When dealing with algebraic quantities, it is important
to make sure that all objects involved in the function have either the prefix
lisp or the standard quotient representation, otherwise it will not work properly.




The Bag Structure and its Associated Functions

The LIST structure of REDUCE is very convenient for manipulating groups of objects which are, a priori, unknown. This structure is endowed with other properties
such as “mapping” i.e. the fact that if OP is an operator one gets, by default,

OP({x,y}); ==> {OP(x),OP(y)}

It is not permitted to submit lists to the operations valid on rings so that, for example, lists cannot be indeterminates of polynomials.
Very frequently too, procedure arguments cannot be lists. At the other extreme,
so to say, one has the KERNEL structure associated with the algebraic declaration
operator. This structure behaves as an “unbreakable” one and, for that reason,
behaves like an ordinary identifier. It may generally be bound to all non-numeric
procedure parameters and it may appear as an ordinary indeterminate inside polynomials.
The BAG structure is intermediate between a list and an operator. From the operator
it borrows the property of being a KERNEL and, therefore, may be an indeterminate of a polynomial. From the list structure it borrows the property of being a
composite object.
A bag is an object endowed with the following properties:
1. It is a KERNEL i.e. it is composed of an atomic prefix (its envelope) and its
content (miscellaneous objects).
2. Its content may be handled in an analogous way as the content of a list. The
important difference is that during these manipulations the name of the bag
is kept.
3. Properties may be given to the envelope. For instance, one may declare it
NONCOM or SYMMETRIC, etc.
Available Functions:
i. A default bag envelope BAG is defined. It is a reserved identifier. Any identifier other than LIST which is not already associated with a boolean
function may be defined as a bag envelope through the command PUTBAG.
In particular, any operator may also be declared to be a bag. When and only
when the identifier is not an already defined function does PUTBAG put on
it the property of an OPERATOR PREFIX. The command:


PUTBAG id1,id2,....idn;

declares id1,.....,idn as bag envelopes. Analogously, the command

CLEARBAG id1,...idn;

eliminates the bag property on id1,...,idn.
ii. The boolean function BAGP detects the bag property. Here is an example:

if BAGP aa then "ok"; ==> ok

iii. The functions listed below may act both on lists or bags. Moreover, functions
subsequently defined for SETS also work for a bag when its content is a set.
Here is a list of the main ones:
However, since they keep track of the envelope, they act somewhat differently. Remember that
the NAME of the ENVELOPE is KEPT by the functions
Here are a few examples (more examples are given inside the test file):

PUTBAG op; ==> T
aa:=op(x,y,z)$
FIRST op(x,y,z); ==> op(x)


REST op(x,y,z); ==> op(y,z)
BELAST op(x,y,z); ==> op(x,y)
APPEND(aa,aa); ==> op(x,y,z,x,y,z)
APPENDN(aa,aa,aa); ==> {x,y,z,x,y,z,x,y,z}
LENGTH aa; ==> 3
DEPTH aa; ==> 1
MEMBER(y,aa); ==> op(y,z)

When “appending” two bags with different envelopes, the resulting bag
gets the name of the one bound to the first parameter of APPEND. When
APPENDN is used, the output is always a list.
The function LENGTH gives the number of objects contained in the bag.
iv. The connection between the list and the bag structures is made easy thanks
to KERNLIST which transforms a bag into a list and thanks to the coercion
function LISTBAG which transforms a list into a bag. This function has 2
arguments and is used as follows:

LISTBAG(<list>,<id>); ==> <id>(<arg_list>)

The identifier <id>, if allowed, is automatically declared as a bag envelope;
otherwise an error message is generated.
Finally, two boolean functions which work both for bags and lists are provided. They are BAGLISTP and ABAGLISTP. They return t or nil (in a
conditional statement) if their argument is a bag or a list for the first one, or
if their argument is a list of sublists or a bag containing bags for the second one.


Sets and their Manipulation Functions

Functions for sets exist at the level of symbolic mode. The package makes them
available in algebraic mode but also generalizes them so that they can be applied
to bag-like objects as well.

i. The constructor MKSET transforms a list or bag into a set by eliminating
duplicate elements:

MKSET({1,a,a}); ==> {1,a}
MKSET bag(1,a,1,a); ==> bag(1,a)

SETP is a boolean function which recognizes set–like objects.

if SETP {1,2,3} then ... ;

ii. The available functions are UNION, INTERSECT, DIFFSET and SYMDIFF.
They have two arguments which must be sets, otherwise an error message
is issued. Their meaning is transparent from their names. They respectively
give the union, the intersection, the difference and the symmetric difference
of two sets.
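Assuming the four functions are named UNION, INTERSECT, DIFFSET and SYMDIFF, one would have, for instance (the ordering of elements in the results may differ):

```
s1 := MKSET {1,2,3}$
s2 := MKSET {2,3,4}$
UNION(s1,s2);      ==> {1,2,3,4}
INTERSECT(s1,s2);  ==> {2,3}
DIFFSET(s1,s2);    ==> {1}
SYMDIFF(s1,s2);    ==> {1,4}
```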


General Purpose Utility Functions

Functions in this section have various purposes. They have all been used many
times in applications in some form or another. The form given to them in this
package is adjusted to maximize their range of applications.
i. Functions to handle identifiers.
MKIDNEW has either 0 or 1 argument. It generates an identifier which has
not yet been used before.

MKIDNEW(); ==> g0001
MKIDNEW(a); ==> ag0002

DELLASTDIGIT takes an integer as argument and strips it of its last digit.
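For example:

```
DELLASTDIGIT 45; ==> 4
```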



DETIDNUM deletes the last digit from an identifier. It is a very convenient
function when one wants to make a do loop starting from a set of indices
a1, . . . , an.

DETIDNUM a23; ==> 23

LIST_TO_IDS generalizes the function MKID to a list of atoms. It creates
and interns an identifier from the concatenation of the atoms. The first atom
cannot be an integer.

LIST_TO_IDS {a,1,id,10}; ==> a1id10

The function ODDP detects odd integers.
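It is typically used in a conditional statement, e.g.:

```
if ODDP 3 then "ok"; ==> ok
```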
The function FOLLOWLINE is convenient when using the function PRIN2.
It allows one to format output text in a much more flexible way than with the
function WRITE.

The function == is a short and convenient notation for the SET function. In
fact it is a generalization of it to allow one to deal also with KERNELS:

operator op;
op(x) == x; ==> x
op(x); ==> x
abs(x); ==> x

The function RANDOMLIST generates a list of random numbers. It takes two
arguments which are both integers. The first one indicates the range inside
which the random numbers are chosen. The second one indicates how many
numbers are to be generated. Its output is the list of generated numbers.

RANDOMLIST(10,5); ==> {2,1,3,9,6}

MKRANDTABL generates a table of random numbers. This table is either a
one or two dimensional array. The base of random numbers may be either an
integer or a decimal number. In this last case, to work properly, the switch
rounded must be on. It has three arguments. The first is either a one-integer
or a two-integer list. The second is the base chosen to generate the
random numbers. The third is the chosen name for the generated array. In
the example below a two-dimensional table of random integers is generated
as array elements of the identifier ar.

MKRANDTABL({3,4},10,ar); ==>
*** array ar redefined

The output is the dimension of the constructed array.
PERMUTATIONS gives the list of permutations of n objects. Each permutation is itself a list. CYCLICPERMLIST gives the list of cyclic permutations.
For both functions, the argument may also be a bag.

PERMUTATIONS {1,2} ==> {{1,2},{2,1}}
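Similarly, for CYCLICPERMLIST one expects:

```
CYCLICPERMLIST {1,2,3}; ==> {{1,2,3},{2,3,1},{3,1,2}}
```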

PERM_TO_NUM and NUM_TO_PERM allow one to associate to a given permutation of n numbers or identifiers a number between 0 and n! − 1. The first
function has the two permuted lists as its arguments and it returns an integer. The second one has an integer as its first argument and a list as its
second argument. It returns the list of permuted objects.



PERM_TO_NUM({4,3,2,1},{1,2,3,4}) ==> 23
NUM_TO_PERM(23,{1,2,3,4}); ==> {4,3,2,1}

COMBNUM gives the number of combinations of n objects taken p at a time.
It has the two integer arguments n and p.
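For example, choosing 2 objects among 6:

```
COMBNUM(6,2); ==> 15
```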
COMBINATIONS gives a list of the combinations of n objects taken p at a time.
It has two arguments. The first one is a list (or a bag) and the second one is
the integer p.

COMBINATIONS({1,2,3},2) ==> {{2,3},{1,3},{1,2}}

REMSYM is a command that suppresses the effect of the REDUCE commands symmetric or antisymmetric.
SYMMETRIZE is a powerful function which generates a symmetric expression. It has 3 arguments. The first is a list (or a list of lists) containing the
expressions which will appear as variables for a kernel. The second argument is the kernel-name and the third is a permutation function which exists
either in algebraic or symbolic mode. This function may be constructed
by the user. Within this package the two functions PERMUTATIONS and
CYCLICPERMLIST may be used. Examples:

ll:={a,b,c}$
SYMMETRIZE(ll,op,cyclicpermlist); ==>
OP(A,B,C) + OP(B,C,A) + OP(C,A,B)
SYMMETRIZE(list ll,op,cyclicpermlist); ==>
OP({A,B,C}) + OP({B,C,A}) + OP({C,A,B})

Notice that, taking for the first argument a list of lists gives rise to an expression where each kernel has a list as argument. Another peculiarity of
this function is the fact that, unless a pattern matching is made on the operator OP, it needs to be reevaluated. This peculiarity is convenient when OP
is an abstract operator if one wants to control the subsequent simplification
process. Here is an illustration:


SYMMETRIZE(ll,op,cyclicpermlist); ==>
OP(A,B,C) + OP(B,C,A) + OP(C,A,B)
REVAL ws; ==>
OP(B,C,A) + OP(C,A,B) + A*B*C
for all x let op(x,a,b)=sin(x*a*b);
SYMMETRIZE(ll,op,cyclicpermlist); ==>
OP(B,C,A) + SIN(A*B*C) + OP(A,B,C)
The functions SORTNUMLIST and SORTLIST sort lists. They use the
bubblesort and the quicksort algorithms.
SORTNUMLIST takes as argument a list of numbers. It sorts it in increasing
order.
SORTLIST is a generalization of the above function. It sorts the list according to any well defined ordering. Its first argument is the list and its second
argument is the ordering function. The content of the list need not necessarily be numbers but must be such that the ordering function has a meaning.
ALGSORT exploits the PSL SORT function. It is intended to replace the two
functions above.


SORTNUMLIST l; ==> {0,1,3,4}

ll:={1,a,tt,z}$ SORTLIST(ll,ordp); ==> {a,z,tt,1}

ALGSORT(l,>); ==> {4,3,0,-1}

It is important to realise that using these functions for kernels or bags may
be dangerous since they are destructive. If it is necessary, it is recommended
to first apply KERNLIST to them to act on a copy.
The function EXTREMUM is a generalization of the already defined functions
MIN, MAX to include general orderings. It is a 2 argument function. The
first is the list and the second is the ordering function. With the list ll
defined in the last example, one gets



EXTREMUM(ll,ordp); ==> 1

GCDNL takes a list of integers as argument and returns their gcd.
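For instance:

```
GCDNL {6,24,30}; ==> 6
```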
iii. There are four functions to identify dependencies. FUNCVAR takes any expression as argument and returns the set of variables on which it depends.
Constants are eliminated.

FUNCVAR(e+pi+sin(log(y))); ==> {y}

DEPATOM has an atom as argument. It returns it if it is a number or if no
dependency has previously been declared. Otherwise, it returns the list of
variables which the previous DEPEND declarations imply.

depend a,x,y;
DEPATOM a; ==> {x,y}

The functions EXPLICIT and IMPLICIT make explicit or implicit the dependencies. This example shows how they work:

depend a,x; depend x,y,z;
EXPLICIT a; ==> a(x(y,z))
IMPLICIT ws; ==> a

These are useful when one wants to trace the names of the independent variables and (or) the nature of the dependencies.
KORDERLIST is a zero argument function which displays the current kernel ordering.

korder x,y,z;
KORDERLIST; ==> (x,y,z)

iv. A command REMNONCOM to remove the non-commutativity of operators
previously declared non-commutative is available. It is used in the same
way as the command NONCOM.
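A typical sequence would look like this (no particular output is expected from either command):

```
noncom op1;     % op1 is now non-commutative
REMNONCOM op1;  % op1 commutes again
```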
v. Filtering functions for lists.
CHECKPROPLIST is a boolean function which checks if the elements of a
list have a definite property. Its first argument is the list, its second argument
is a boolean function (FIXP, NUMBERP, . . . ) or an ordering function (such as ORDP).
EXTRACTLIST extracts from the list given as its first argument the elements
which satisfy the boolean function given as its second argument. For example:

if CHECKPROPLIST({1,2},fixp) then "ok"; ==> ok
EXTRACTLIST(l,fixp); ==> {1}
EXTRACTLIST(l,stringp); ==> {st}

vi. Coercion.
Since lists and arrays have quite distinct behaviour and storage properties,
it is interesting to coerce lists into arrays and vice-versa in order to fully
exploit the advantages of both datatypes. The functions ARRAY_TO_LIST
and LIST_TO_ARRAY are provided to do that easily. The first function has
the array identifier as its unique argument. The second function has three
arguments. The first is the list, the second is the dimension of the array
and the third is the identifier which defines it. If the chosen dimension is
not compatible with the list depth, an error message is issued. As an
illustration, suppose that ar is an array whose components are 1,2,3,4. Then

ARRAY_TO_LIST ar; ==> {1,2,3,4}
LIST_TO_ARRAY({1,2,3,4},1,arr); ==>

generates the array arr with the components 1,2,3,4.
vii. Control of the HEPHYS package.


The commands REMVECTOR and REMINDEX remove the property of being
a 4-vector or a 4-index respectively.
The function MKGAM allows one to assign to any identifier the property of a Dirac
gamma matrix and, if desired, to suppress it. Its interest lies in the fact that,
during a calculation, it is often useful to transform a gamma matrix into an
abstract operator and vice-versa. Moreover, in many applications in basic
physics, it is interesting to use the identifier g for other purposes. It takes
two arguments. The first is the identifier. The second must be chosen equal
to t if one wants to transform it into a gamma matrix. Any other binding for
this second argument suppresses the property of being a gamma matrix the
identifier is supposed to have.


Properties and Flags

Although many facets of the handling of property lists are easily accessible in algebraic mode, it is useful to provide analogous functions native to the
algebraic mode, because altering the property lists of objects may easily
destroy the integrity of the system. The functions described here
ignore the properties and flags already defined by the system itself: they generate and track only the additional properties and flags that the user introduces through them.
They offer the user the possibility to work on property lists so that he can design a
programming style of the “conceptual” type.
i. We first consider “flags”.
To a given identifier, one may associate another one linked to it “in
the background”. The three functions PUTFLAG, DISPLAYFLAG and
CLEARFLAG handle them.
PUTFLAG has 3 arguments. The first one is an identifier or a list of identifiers, the second one is the name of the flag, and the third one is T (true)
or 0 (zero). When the third argument is T, it creates the flag; when it is 0, it
destroys it. In this last case, the function returns nil (not seen inside the
algebraic mode).

PUTFLAG(z1,flag_name,t); ==> flag_name
PUTFLAG({z1,z2},flag1_name,t); ==> t
PUTFLAG(z2,flag1_name,0); ==>

DISPLAYFLAG allows one to extract flags. The previous actions give:


DISPLAYFLAG z1; ==>{flag_name,flag1_name}
DISPLAYFLAG z2 ; ==> {}

CLEARFLAG is a command which clears all flags associated with the identifiers id1, . . . , idn given as arguments.
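For instance, continuing the PUTFLAG example above (the empty-list output shown is inferred from the behaviour of DISPLAYFLAG described below):

CLEARFLAG z1, z2;

DISPLAYFLAG z1; ==> {}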
ii. Properties are handled by similar functions. PUTPROP has four arguments.
The first argument is again an identifier or a list of identifiers. The second argument is the indicator of the property. The third argument may be any valid expression. The fourth one is again T or 0.

PUTPROP(z1,property,x^2,t); ==> z1

In general, one enters

PUTPROP({id1,...,idn},<property name>,<value>,T);

To display a specific property, one uses DISPLAYPROP which takes two
arguments. The first is the name of the identifier, the second is the indicator
of the property.

DISPLAYPROP(z1,property); ==> {property,x^2}

Finally, CLEARPROP is an n-ary command which clears all properties of the
identifiers which appear as arguments.
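As an illustration, continuing the PUTPROP example above (the empty-list output is an assumption by analogy with DISPLAYFLAG):

CLEARPROP z1;
DISPLAYPROP(z1,property); ==> {}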


Control Functions

Here we describe additional functions which improve user control over the environment.
i. The first set of functions is composed of unary and binary boolean functions.
They are:




ALATOMP(x);   x is anything.
ALKERNP(x);   x is anything.
DEPVARP(x,v); x is anything
              (v is an atom or a kernel).

ALATOMP has the value T iff x is an integer or an identifier after it has been
evaluated down to the bottom.
ALKERNP has the value T iff x is a kernel after it has been evaluated down
to the bottom.
DEPVARP returns T iff the expression x depends on v at any level.
The above functions, together with PRECP, have been declared operator functions to ease the verification of their values.
NORDP is equal to NOT ORDP.
ii. The next functions allow one to analyze and to clean the environment of
REDUCE created by the user while working interactively. Two functions
are provided:
SHOW allows the user to list the various identifiers already assigned and to
see their type. SUPPRESS selectively clears the assigned identifiers or clears
them all. It is to be stressed that identifiers assigned from the input of files
are ignored. Both functions take one argument, and the options for this
argument are the same for both:


SHOW (SUPPRESS) all
SHOW (SUPPRESS) scalars
SHOW (SUPPRESS) lists
SHOW (SUPPRESS) saveids    (for saved expressions)
SHOW (SUPPRESS) matrices
SHOW (SUPPRESS) arrays
SHOW (SUPPRESS) vectors    (contains vector, index and tvector)

The option all is the most convenient for SHOW but, with it, it may take
some time to get the answer after one has worked for several hours. When entering REDUCE, the option all for SHOW gives:

SHOW all; ==>
scalars are: NIL
arrays are: NIL
lists are: NIL
matrices are: NIL
vectors are: NIL
forms are: NIL

This is a convenient way to recall the various options. Here is an example
which is valid when one starts from a fresh environment:

SHOW scalars; ==>

scalars are: (A B)

SUPPRESS scalars; ==> t
SHOW scalars; ==>

scalars are: NIL

iii. The CLEAR function of the system does not do a complete cleaning of
OPERATORS and FUNCTIONS. The following two functions do a more
complete cleaning and also automatically take into account the user flags
and properties that the functions PUTFLAG and PUTPROP may have introduced.
Their names are CLEAROP and CLEARFUNCTIONS. CLEAROP takes one
operator as its argument.
CLEARFUNCTIONS is an n-ary command. If one issues

CLEARFUNCTIONS a1,a2, ... , an $

the functions with names a1, a2, ..., an are cleared. One should
be careful when using this facility, since the only functions which cannot be
erased are those which are protected with the lose flag.
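A hedged sketch of their use (op, f1 and f2 are hypothetical names introduced here purely for illustration):

operator op;
CLEAROP op;

CLEARFUNCTIONS f1, f2 $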




Handling of Polynomials

The module contains some utility functions to handle standard quotients and several new facilities to manipulate polynomials.
i. Two functions ALG_TO_SYMB and SYMB_TO_ALG allow one to change
an expression which is in the algebraic standard quotient form into a prefix
lisp form and vice versa. This is done in such a way that the symbol list
which appears in the algebraic mode disappears in the symbolic form (there
it becomes a parenthesis “()”), and it is reintroduced in the translation from
a symbolic prefix lisp expression to an algebraic one. Here is an example showing how the well-known lisp function FLATTEN can be trivially
transposed into the algebraic mode:
algebraic procedure ecrase x;
   lisp symb_to_alg flattens1 alg_to_symb algebraic x;

symbolic procedure flattens1 x;
   % ll; ==> ((A B) ((C D) E))
   % flattens1 ll; ==> (A B C D E)
   if atom x then list x
    else if cdr x then
       append(flattens1 car x, flattens1 cdr x)
    else flattens1 car x;
gives, for instance,

ECRASE ll; ==> {A, B, C, D, E, Z}

The function MKDEPTH_ONE described above implements that functionality.
ii. LEADTERM and REDEXPR are the algebraic equivalent of the symbolic
functions LT and RED. They give, respectively, the leading term and the
reductum of a polynomial. They also work for rational functions. Their interest lies in the fact that they do not require one to extract the main variable.
They work according to the current ordering of the system:


pol := x + y + z$

LEADTERM pol; ==> x
korder y,x,z;
LEADTERM pol; ==> y
REDEXPR pol; ==> x + z

By default, the representation of multivariate polynomials is recursive. This is
justified since it is the representation which takes the least memory. With such a
representation, the function LEADTERM does not necessarily extract a true
monom: it extracts a monom in the leading indeterminate multiplied by a
polynomial in the other indeterminates. However, very often one needs to
handle true monoms separately. In that case, one needs the polynomial in distributive form. Such a form is provided by the package GROEBNER (H.
Melenk et al.). The facility there is, however, much too involved for many
applications, and the necessity to load the package makes it worthwhile to
construct an elementary facility to handle the distributive representation of
polynomials. A new switch has been created for that purpose. It is called
DISTRIBUTE, and a new function DISTRIBUTE puts a polynomial in distributive form. With that switch on, LEADTERM gives true monoms.
MONOM transforms a polynomial into a list of monoms. It works regardless of
the setting of the switch DISTRIBUTE.
SPLITTERMS is analogous to MONOM except that it gives a list of two lists.
The first sublist contains the positive terms while the second sublist contains
the negative terms.
SPLITPLUSMINUS gives a list whose first element is the positive part of
the polynomial and its second element is its negative part.
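A hedged illustration of these three functions (the polynomial is arbitrary, and the exact ordering of the monoms inside the output lists depends on the current ordering of the system):

pol := x^2 - y + 3$

MONOM pol; ==> {x^2, - y,3}
SPLITTERMS pol; ==> {{x^2,3},{ - y}}
SPLITPLUSMINUS pol; ==> {x^2 + 3, - y}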
iii. Two complementary functions LOWESTDEG and DIVPOL are provided. The
first takes a polynomial as its first argument and the name of an indeterminate
as its second argument. It returns the lowest degree in that indeterminate.
The second function takes two polynomials and returns both the quotient
and its remainder.
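A hedged sketch of both functions (the polynomials are arbitrary; the list layout of the DIVPOL result is an assumption based on the description above):

LOWESTDEG(x^3*y + x^2,x); ==> 2
DIVPOL(x^2 + 2*x + 1,x + 1); ==> {x + 1,0}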


Handling of Transcendental Functions

The functions TRIGREDUCE and TRIGEXPAND and the equivalent ones for hyperbolic functions HYPREDUCE and HYPEXPAND make the transformations to
multiple arguments and from multiple arguments to elementary arguments. Here
is a simple example:


When a trigonometric or hyperbolic expression is symmetric with respect to the interchange of SIN (SINH) and COS (COSH), the application of
TRIG(HYP)-REDUCE may often lead to great simplifications. However, if it is
highly asymmetric, the repeated application of TRIG(HYP)-REDUCE followed
by the use of TRIG(HYP)-EXPAND will lead to more complicated but more symmetric expressions:

aa := 1 + sin(x)**3$

TRIGREDUCE aa; ==> ( - SIN(3*X) + 3*SIN(X) + 4)/4

TRIGEXPAND ws; ==> (SIN(X)^3 - 3*SIN(X)*COS(X)^2 + 3*SIN(X) + 4)/4



Handling of n–dimensional Vectors

Explicit vectors in Euclidean space may be represented by list-like or bag-like
objects of depth 1. The components may be bags but may not be lists. Functions are provided to form the sum, the difference and the scalar product. When the
space dimension is three there are also functions for the cross and mixed products. SUMVECT, MINVECT, SCALVECT and CROSSVECT have two arguments.
MPVECT has three arguments. The following example, in which l = {a,b,c} and
ll = {1,2,3}, is sufficient to explain how they work:

SUMVECT(l,ll); ==> {A + 1,B + 2,C + 3}
MINVECT(l,ll); ==> { - A + 1, - B + 2, - C + 3}
SCALVECT(l,ll); ==> A + 2*B + 3*C
CROSSVECT(l,ll); ==> { - 3*B + 2*C,3*A - C, - 2*A + B}
MPVECT(l,ll,l); ==> 0


Handling of Grassmann Operators

Grassmann variables are often used in physics. For them the multiplication operation is associative and distributive but anticommutative. The KERNEL of REDUCE
does not provide it. However, implementing it in full generality would almost certainly decrease the overall efficiency of the system. This small module, together
with the declaration of antisymmetry for operators, is enough to deal with most calculations. The reason is that a product of similar anticommuting kernels can easily
be transformed into an antisymmetric operator with as many indices as the number
of these kernels. Moreover, one may also issue pattern matching rules to implement the anticommutativity of the product. The functions in this module represent
the minimum functionality required to identify such kernels and to handle their specific features.
PUTGRASS is an n-ary command which gives identifiers the property of being the
names of Grassmann kernels. REMGRASS removes this property.
GRASSP is a boolean function which detects Grassmann kernels.



GRASSPARITY takes a monom as argument and gives its parity. If the monom is
a simple Grassmann kernel it returns 1.
GHOSTFACTOR has two arguments, each of which is a monom. It is equal to

(-1)**(GRASSPARITY u * GRASSPARITY v)

Here is an illustration to show how the above functions work:

PUTGRASS eta; ==> t
if GRASSP eta(1) then "grassmann kernel"; ==>
grassmann kernel
aa:=eta(1)*eta(2)-eta(2)*eta(1); ==>
AA :=

- ETA(2)*ETA(1) + ETA(1)*ETA(2)

GRASSPARITY eta(1); ==> 1
GRASSPARITY (eta(1)*eta(2)); ==> 0
GHOSTFACTOR(eta(1),eta(2)); ==> -1
grasskernel :=
{eta(~x)*eta(~y) => -eta y * eta x when nordp(x,y),
(~x)*(~x) => 0 when grassp x};
exp where grasskernel; ==> 0
aa where grasskernel; ==>


- 2*ETA(2)*ETA(1)

Handling of Matrices

This module provides functions for handling matrices more comfortably.
i. Often one needs to construct a unit matrix of a given dimension. This

construction is done by the system thanks to the function UNITMAT. It is an
n-ary function. The command is

UNITMAT M1(n1), M2(n2), ..., Mi(ni);

where M1, ..., Mi are names of matrices and n1, n2, ..., ni are
integers.
MKIDM is a generalization of MKID. It allows one to connect two or several
matrices. If u and u1 are two matrices, one can go from one to the other:

matrix u(2,2);$

unitmat u1(2)$

u1; ==>

[1  0]
[    ]
[0  1]

mkidm(u,1); ==>

[1  0]
[    ]
[0  1]

This function allows one to make loops on matrices like in the following
illustration. If U, U1, U2,.., U5 are matrices:

FOR I:=1:5 DO U:=U-MKIDM(U,I);

can be issued.
ii. The next functions map matrices onto bag-like or list-like objects and, conversely, generate matrices from bags or lists.
COERCEMAT transforms the matrix U into a list of lists. The entry is

COERCEMAT(U,id);

When id is equal to list it transforms the matrix into a list of lists; otherwise it transforms it into a bag of bags whose
envelope is equal to id.
BAGLMAT does the opposite job. The first argument is the bag-like or list-like object while the second argument is the matrix identifier. The entry is

BAGLMAT(bgl,U);

bgl becomes the matrix U. The transformation is not done if U is already
the name of a previously defined matrix. This is to avoid accidental
redefinition of that matrix.
iii. The functions SUBMAT, MATEXTR and MATEXTC take parts of a given matrix.
SUBMAT has three arguments. The entry is

SUBMAT(U,nr,nc);

The first is the matrix name, and the other two are the row and column numbers. It gives the submatrix obtained from U by deleting the row nr and the
column nc. When one of them is equal to zero, only the column nc or the row nr
is deleted.
MATEXTR and MATEXTC extract a row or a column and place it into a list-like or bag-like object. The entries are

MATEXTR(U,VN,nr);
MATEXTC(U,VN,nc);

where U is the matrix, VN is the “vector name”, and nr and nc are integers. If
VN is equal to list, the vector is given as a list; otherwise it is given as a bag.
iv. Functions which manipulate matrices: MATSUBR and MATSUBC substitute rows and columns respectively. They have three arguments. The entries are

MATSUBR(U,bgl,nr);
MATSUBC(U,bgl,nc);

The meaning of the variables U, nr, nc is the same as above, while bgl
is a list-like or bag-like vector. Its length should be compatible with the
dimensions of the matrix.
HCONCMAT and VCONCMAT concatenate two matrices. The entries are

HCONCMAT(U,V);
VCONCMAT(U,V);

The first function concatenates horizontally, the second one concatenates
vertically. The dimensions must match.
TPMAT makes the tensor product of two matrices. It is also an infix function.
The entry is

TPMAT(U,V);   or   U TPMAT V;

HERMAT takes the hermitian conjugate of a matrix. The entry is

HERMAT(U,HU);

where HU is the identifier for the hermitian matrix of U. It should be unassigned for this function to work successfully. This is done on purpose to
prevent accidental redefinition of an already used identifier.
v. SETELMAT and GETELMAT are functions of two integers. The first one resets
the element (i,j) while the second one extracts the element identified by
(i,j). They may be useful when dealing with matrices inside procedures.
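A hedged sketch (the argument orders GETELMAT(U,i,j) and SETELMAT(U,value,i,j) are assumptions inferred from the description above, not confirmed by it; check the package documentation for the exact signatures):

matrix w(2,2)$
unitmat w(2)$

GETELMAT(w,1,1);        % extracts the (1,1) element of w
SETELMAT(w,0,1,2)$      % resets the (1,2) element of w to 0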




AVECTOR: A vector algebra and calculus package

This package provides REDUCE with the ability to perform vector algebra using
the same notation as scalar algebra. The basic algebraic operations are supported,
as are differentiation and integration of vectors with respect to scalar variables,
cross product and dot product, component manipulation and application of scalar
functions (e.g. cosine) to a vector to yield a vector result.
Author: David Harper.



This package is written in RLISP (the LISP meta-language) and is intended for
use with REDUCE 3.4. It provides REDUCE with the ability to perform vector
algebra using the same notation as scalar algebra. The basic algebraic operations
are supported, as are differentiation and integration of vectors with respect to scalar
variables, cross product and dot product, component manipulation and application
of scalar functions (e.g. cosine) to a vector to yield a vector result.
A set of vector calculus operators are provided for use with any orthogonal curvilinear coordinate system. These operators are gradient, divergence, curl and delsquared (Laplacian). The Laplacian operator can take scalar or vector arguments.
Several important coordinate systems are pre-defined and can be invoked by name.
It is also possible to create new coordinate systems by specifying the names of the
coordinates and the values of the scale factors.


Vector declaration and initialisation

Any name may be declared to be a vector, provided that it has not previously been
declared as a matrix or an array. To declare a list of names to be vectors, use the
VEC command:

VEC A, B, C;

declares the variables A, B and C to be vectors. If they have already been assigned
(scalar) values, these will be lost.
When a vector is declared using the VEC command, it does not have an initial
value.
If a vector value is assigned to a scalar variable, then that variable will automatically be declared as a vector and the user will be notified that this has happened.

Reference: Computer Physics Communications, 54, 295-305 (1989)

A vector may be initialised using the AVEC function which takes three scalar arguments and returns a vector made up from those scalars. For example
A := AVEC(A1, A2, A3);
sets the components of the vector A to A1, A2 and A3.


Vector algebra

(In the examples which follow, V, V1, V2 etc are assumed to be vectors while S,
S1, S2 etc are scalars.)
The scalar algebra operators +, -, * and / may be used with vector operands according to the rules of vector algebra. Thus multiplication and division of a vector by
a scalar are both allowed, but it is an error to multiply or divide one vector by
another.

V1 + V2 - V3;      Addition and subtraction
S1*V1;             Scalar multiplication
V1/S1;             Scalar division

Vector multiplication is carried out using the infix operators DOT and CROSS.
These are defined to have higher precedence than scalar multiplication and division.


V1 DOT V2;           Dot product
V1 CROSS V2 + V3;    Cross product
(V1 CROSS V2) + V3;

The last two expressions are equivalent due to the precedence of the CROSS operator.
The modulus of a vector may be calculated using the VMOD operator.
S := VMOD V;
A unit vector may be generated from any vector using the VMOD operator.
V1 := V/(VMOD V);
Components may be extracted from any vector using index notation in the same
way as an array.



V := AVEC(AX, AY, AZ);

V(0);    yields AX
V(1);    yields AY
V(2);    yields AZ

It is also possible to set values of individual components. Following from above:
V(1) := B;
The vector V now has components AX, B, AZ.
Vectors may be used as arguments in the differentiation and integration routines in
place of the dependent expression.
V := AVEC(X**2, SIN(X), Y);

DF(V,X);     yields (2*X, COS(X), 0)
INT(V,X);    yields (X**3/3, -COS(X), Y*X)

Vectors may be given as arguments to monomial functions such as SIN, LOG and
TAN. The result is a vector obtained by applying the function component-wise to
the argument vector.
V := AVEC(A1, A2, A3);

SIN(V);    yields (SIN(A1), SIN(A2), SIN(A3))

Vector calculus

The vector calculus operators div, grad and curl are recognised. The Laplacian
operator is also available and may be applied to scalar and vector arguments.



GRAD S;     Gradient of a scalar field
DIV V;      Divergence of a vector field
CURL V;     Curl of a vector field
DELSQ S;    Laplacian of a scalar field
DELSQ V;    Laplacian of a vector field

These operators may be used in any orthogonal curvilinear coordinate system. The
user may alter the names of the coordinates and the values of the scale factors.
Initially the coordinates are X, Y and Z and the scale factors are all unity.
There are two special vectors : COORDS contains the names of the coordinates in
the current system and HFACTORS contains the values of the scale factors.
The coordinate names may be changed using the COORDINATES operator, for example

COORDINATES R, THETA, PHI;

This command changes the coordinate names to R, THETA and PHI.

The scale factors may be altered using the SCALEFACTORS operator, for example

SCALEFACTORS(1, R, R*SIN(THETA));

This command changes the scale factors to 1, R and R*SIN(THETA).
Note that the arguments of SCALEFACTORS must be enclosed in parentheses.
This is not necessary with COORDINATES.
When vector differential operators are applied to an expression, the current set of
coordinates are used as the independent variables and the scale factors are employed in the calculation. (See, for example, Batchelor G.K. ’An Introduction to
Fluid Mechanics’, Appendix 2.)
Several coordinate systems are pre-defined and may be invoked by name. To see a
list of valid names enter

SYMBOLIC !*CSYSTEMS;

and REDUCE will respond with something like

(CARTESIAN SPHERICAL CYLINDRICAL)

To choose a coordinate system by name, use the command GETCSYSTEM.
To choose the Cartesian coordinate system :

GETCSYSTEM 'CARTESIAN;

Note the quote which prefixes the name of the coordinate system. This is required
because GETCSYSTEM (and its complement PUTCSYSTEM) is a SYMBOLIC procedure which requires a literal argument.
REDUCE responds by typing a list of the coordinate names in that coordinate
system. The example above would produce the response

(X Y Z)

whilst

GETCSYSTEM 'SPHERICAL;

would produce

(R THETA PHI)
Note that any attempt to invoke a coordinate system is subject to the same restrictions as the implied calls to COORDINATES and SCALEFACTORS. In particular,
GETCSYSTEM fails if any of the coordinate names has been assigned a value and
the previous coordinate system remains in effect.
A user-defined coordinate system can be assigned a name using the command
PUTCSYSTEM. It may then be re-invoked at a later stage using GETCSYSTEM.
Example 5
We define a general coordinate system with coordinate names X, Y, Z and scale factors H1, H2, H3 :

COORDINATES X, Y, Z;
SCALEFACTORS(H1, H2, H3);
PUTCSYSTEM 'GENERAL;

This system may later be invoked by entering

GETCSYSTEM 'GENERAL;


Volume and Line Integration

Several functions are provided to perform volume and line integrals. These operate
in any orthogonal curvilinear coordinate system and make use of the scale factors
described in the previous section.
Definite integrals of scalar and vector expressions may be calculated using the
DEFINT function.
Example 6
To calculate the definite integral of sin(x)^2 between 0 and 2π we enter

DEFINT(SIN(X)**2, 0, 2*PI);

This function is a simple extension of the INT function, taking two extra arguments,
the lower and upper bounds of integration respectively.
Definite volume integrals may be calculated using the VOLINTEGRAL function
whose syntax is as follows :
VOLINTEGRAL(integrand, vector lower-bound, vector upper-bound);
Example 7
In spherical polar coordinates we may calculate the volume of a sphere by integrating unity over the range r=0 to RR, θ=0 to PI, φ=0 to 2*π as follows :

VLB := AVEC(0,0,0);           Lower bound
VUB := AVEC(RR,PI,2*PI);      Upper bound in r, θ, φ respectively
VOLINTORDER := AVEC(0,1,2);   The order of integration
VOLINTEGRAL(1,VLB,VUB);

Note the use of the special vector VOLINTORDER which controls the order in
which the integrations are carried out. This vector should be set to contain the
number 0, 1 and 2 in the required order. The first component of VOLINTORDER
contains the index of the first integration variable, the second component is the
index of the second integration variable and the third component is the index of the
third integration variable.
Example 8
Suppose we wish to calculate the volume of a right circular cone. This is equivalent
to integrating unity over a conical region with the bounds:
z = 0 to H
r = 0 to p*Z
phi = 0 to 2*PI

(H = the height of the cone)
(p = ratio of base diameter to height)

We evaluate the volume by integrating a series of infinitesimally thin circular disks
of constant z-value. The integration is thus performed in the order : d(φ) from 0 to
2π, dr from 0 to p*Z, dz from 0 to H. The order of the indices is thus 2, 0, 1.
VLB := AVEC(0,0,0);
VUB := AVEC(P*Z,H,2*PI);
VOLINTORDER := AVEC(2,0,1);
VOLINTEGRAL(1,VLB,VUB);

(At this stage, we replace P*H by RR, the base radius of the cone, to obtain the
result in its more familiar form.)
Line integrals may be calculated using the LINEINT and DEFLINEINT functions. Their general syntax is
LINEINT(vector-function, vector-curve, variable);
DEFLINEINT(vector-function, vector-curve, variable, lower-bound, upper-bound);
vector-function is any vector-valued expression;
vector-curve is a vector expression which describes the path of integration in
terms of the independent variable;
variable is the independent variable;



lower-bound and upper-bound are the bounds of integration in terms of the independent variable.
Example 9
In spherical polar coordinates, we may integrate round a line of constant theta
(‘latitude’) to find the length of such a line. The vector function is thus the tangent
to the ‘line of latitude’, (0,0,1) and the path is (0,LAT,PHI) where PHI is the
independent variable. We show how to obtain the definite integral i.e. from φ = 0
to 2π :

DEFLINEINT(AVEC(0,0,1), AVEC(0,LAT,PHI), PHI, 0, 2*PI);


Defining new functions and procedures

Most of the procedures in this package are defined in symbolic mode and are invoked by the REDUCE expression evaluator when a vector expression is encountered. It is not generally possible to define procedures which accept or return vector
values in algebraic mode. This is a consequence of the way in which the REDUCE
interpreter operates, and it affects other non-scalar data types as well: arrays cannot
be passed as algebraic procedure arguments, for example.



This package was written whilst the author was the U.K. Computer Algebra Support Officer at the University of Liverpool Computer Laboratory.



BIBASIS: A Package for Calculating Boolean Involutive Bases

Authors: Yuri A. Blinkov and Mikhail V. Zinin



Involutive polynomial bases are redundant Gröbner bases of special structure with
some additional useful features in comparison with reduced Gröbner bases [1].
Apart from numerous applications of involutive bases [2] the involutive algorithms [3] provide an efficient method for computing reduced Gröbner bases. A
reduced Gröbner basis is a well-determined subset of an involutive basis and can
be easily extracted from the latter without any extra reductions. All this takes place
not only in rings of commutative polynomials but also in Boolean rings.
Boolean Gröbner bases have already revealed their value and usability in
practice. The first impressive demonstration of the practicability of Boolean Gröbner
bases was breaking the first HFE (Hidden Fields Equations) challenge in public-key cryptography, done in [4] by computing a Boolean Gröbner basis for a system
of quadratic polynomials in 80 variables. Since that time the Boolean Gröbner
bases application area has widened drastically, and nowadays there are also a number
of quite successful examples of using Gröbner bases for solving SAT problems.
During our research we developed [5, 6, 7] Boolean involutive algorithms
based on Janet and Pommaret divisions and applied them to the computation of
Boolean Gröbner bases. Our implementation of both divisions has experimentally
demonstrated the computational superiority of the Pommaret division.
This package BIBASIS is the result of our thorough research in the field of Boolean
Gröbner bases. BIBASIS implements the involutive algorithm based on Pommaret
division in a multivariate Boolean ring.
In section 2 the Boolean ring and its peculiarities are briefly introduced. In section
3 we briefly argue why the involutive algorithm and Pommaret division are well suited
to the Boolean ring while Buchberger's algorithm is not. And finally in section 4
we give a full description of the capabilities of the BIBASIS package and illustrate them with examples.


Boolean Ring

The Boolean ring perfectly goes with its name: it is the ring of Boolean functions of n
variables, i.e. mappings from {0, 1}^n to {0, 1}. Assuming these variables are
X := {x1 , . . . , xn } and F2 is the finite field of two elements {0, 1}, the Boolean ring



can be regarded as the quotient ring

B[X] := F2[X] / < x1^2 + x1 , . . . , xn^2 + xn > .

Multiplication in B[X] is idempotent and addition is nilpotent:

∀ b ∈ B[X] :  b^2 = b ,  b + b = 0.

Elements of B[X] are Boolean polynomials and can be represented as finite sums

∑_j ∏_{x ∈ Ω_j ⊆ X} x

of Boolean monomials. Each monomial is a conjunction. If the set Ω_j is empty, then
the corresponding monomial is the unit Boolean function 1. The sum of zero
monomials corresponds to the zero polynomial, i.e. the zero Boolean function 0.


Pommaret Involutive Algorithm

A detailed description of the involutive algorithm can be found in [3]. Here we note that
the result of both the involutive and Buchberger's algorithms depends on the chosen monomial
ordering. Moreover, the ordering must be admissible, i.e.

m ≠ 1 ⟺ m ≻ 1 ,    m1 ≻ m2 ⟺ m1*m ≻ m2*m    ∀ m, m1 , m2 .

But as one can easily check, the second condition of admissibility does not hold for
any monomial ordering in a Boolean ring:

x1 ≻ x2  ⟹  x1*x1 ≻ x2*x1 , i.e. x1 ≻ x1*x2 , whereas in fact x1 ≺ x1*x2 .

Though B[X] is a principal ideal ring, a Boolean singleton {p} is not necessarily a
Gröbner basis of the ideal < p >, for example:

x1 , x2 ∈ < x1*x2 + x1 + x2 > ⊂ B[x1 , x2 ].

That is the reason why one cannot apply Buchberger's algorithm directly in a
Boolean ring; one uses instead the ring F2[X] together with the field binomials x1^2 + x1 , . . . , xn^2 + xn .
The involutive algorithm based on Janet division has the same disadvantage, unlike
the algorithm based on Pommaret division, as shown in [5]. The Pommaret division algorithm can be applied directly in a Boolean ring and admits effective data structures
for monomial representation.




The package BIBASIS implements the Pommaret division algorithm in a Boolean
ring. The first step to using the package is to load it:
1: load_package bibasis;
The current version of the BIBASIS user interface consists of only 2 functions:
bibasis and bibasis_print_statistics.
bibasis is the function that performs all the computation; it has the following syntax:
bibasis(initial_polynomial_list, variables_list,
monomial_ordering, reduce_to_groebner);
• initial_polynomial_list is the list of polynomials containing the
known basis of initial Boolean ideal. All given polynomials are treated modulo 2. See Example 1.
• variables_list is the list of independent variables in decreasing order.
• monomial_ordering is a chosen monomial ordering and the supported
ones are:
lex – pure lexicographical ordering;
deglex – degree lexicographic ordering;
degrevlex – degree reverse lexicographic.
See Examples 2–4 to check that the Gröbner (as well as the involutive) basis depends on the monomial ordering.
• reduce_to_groebner is a Boolean value: if it is t the output is the
reduced Boolean Gröbner basis; if nil, then the reduced Boolean Pommaret
basis. Examples 5 and 6 show the distinctions between these two outputs.
The returned value is the list of polynomials which constitute the reduced Boolean Gröbner or
Pommaret basis.
The syntax of bibasis_print_statistics is simple:

bibasis_print_statistics();

This function prints out brief statistics for the last invocation of the bibasis function. See Example 7.



Example 1:
1: load_package bibasis;
2: bibasis({x+2*y}, {x,y}, lex, t);
{x}

Example 2:
1: load_package bibasis;
2: variables :={x0,x1,x2,x3,x4}$
3: polynomials := {x0*x3+x1*x2,x2*x4+x0}$
4: bibasis(polynomials, variables, lex, t);
{x0 + x2*x4,x2*(x1 + x3*x4)}

Example 3:
1: load_package bibasis;
2: variables :={x0,x1,x2,x3,x4}$
3: polynomials := {x0*x3+x1*x2,x2*x4+x0}$
4: bibasis(polynomials, variables, deglex, t);
{x1*x2*(x3 + 1),
x1*(x0 + x2),
x0*(x2 + 1),
x0*x3 + x1*x2,
x0*(x4 + 1),
x2*x4 + x0}

Example 4:
1: load_package bibasis;
2: variables :={x0,x1,x2,x3,x4}$
3: polynomials := {x0*x3+x1*x2,x2*x4+x0}$
4: bibasis(polynomials, variables, degrevlex, t);
{x0*(x1 + x3),
x0*(x2 + 1),

x1*x2 + x0*x3,
x0*(x4 + 1),
x2*x4 + x0}



Example 5:
1: load_package bibasis;
2: variables :={x,y,z}$
3: polinomials := {x, z}$
4: bibasis(polinomials, variables, degrevlex, t);

Example 6:
1: load_package bibasis;
2: variables :={x,y,z}$
3: polinomials := {x, z}$
4: bibasis(polinomials, variables, degrevlex, nil);

Example 7:

1: load_package bibasis;
2: variables :={u0,u1,u2,u3,u4,u5,u6,u7,u8,u9}$
3: polinomials := {u0*u1+u1*u2+u1+u2*u3+u3*u4+u4*u5+u5*u6+u6*u7+u7*u8+
4: bibasis(polinomials, variables, degrevlex, t);
u7*(u6 + 1),
u6*u8 + u6 + u7,
u3*(u9 + 1),
u6*u9 + u7,
u7*(u9 + 1),
u8*u9 + u6 + u7 + u8,
u0 + u3 + u6 + u9 + 1,
u1 + u7,

u2 + u7 + u8,
u4 + u6 + u8,
u5 + u6 + u7 + u8}
5: bibasis_print_statistics();
Variables order = u0 > u1 > u2 > u3 > u4 > u5 > u6 > u7 > u8 > u9
Normal forms calculated = 216
Non-zero normal forms = 85
Reductions made = 4488
Time: 270 ms
GC time: 0 ms

[1] V. P. Gerdt and Yu. A. Blinkov. Involutive Bases of Polynomial Ideals. Mathematics and Computers in Simulation, 45, 519–542, 1998; Minimal Involutive Bases, ibid. 543–560.
[2] W. M. Seiler. Involution: The Formal Theory of Differential Equations and its Applications in Computer Algebra. Algorithms and Computation in Mathematics, 24, Springer, 2010. arXiv:math.AC/0501111
[3] V. P. Gerdt. Involutive Algorithms for Computing Gröbner Bases. Computational Commutative and Non-Commutative Algebraic Geometry. IOS Press, Amsterdam, 2005, pp. 199–225.
[4] J.-C. Faugère and A. Joux. Algebraic Cryptanalysis of Hidden Field Equations (HFE) Using Gröbner Bases. LNCS 2729, Springer-Verlag, 2003, pp. 44–60.
[5] V. P. Gerdt and M. V. Zinin. A Pommaret Division Algorithm for Computing Gröbner Bases in Boolean Rings. Proceedings of ISSAC 2008, ACM Press, 2008, pp. 95–102.
[6] V. P. Gerdt and M. V. Zinin. Involutive Method for Computing Gröbner Bases over F2. Programming and Computer Software, Vol. 34, No. 4, 2008, 191–
[7] V. P. Gerdt, M. V. Zinin and Yu. A. Blinkov. On computation of Boolean involutive bases. Proceedings of International Conference Polynomial Computer Algebra 2009, pp. 17–24 (International Euler Institute, April 7–12, 2009, St. Petersburg, Russia)




BOOLEAN: A package for boolean algebra

This package supports computation with boolean expressions in the propositional calculus. The data objects are composed from algebraic expressions connected by the infix boolean operators and, or, implies, equiv, and the unary prefix operator not. Boolean allows you to simplify expressions built from these operators, and to test properties like equivalence, the subset property, etc.
Author: Herbert Melenk.



The package Boolean supports computation with boolean expressions in the propositional calculus. The data objects are composed from algebraic expressions (“atomic parts”, “leaves”) connected by the infix boolean operators and, or, implies, equiv, and the unary prefix operator not. Boolean allows you to simplify expressions built from these operators, and to test properties like equivalence, the subset property, etc. The reduction of a boolean expression by partial evaluation and combination of its atomic parts is also supported.


Entering boolean expressions

In order to distinguish boolean data expressions from boolean expressions in the
REDUCE programming language (e.g. in an if statement), each expression must
be tagged explicitly by an operator boolean. Otherwise the boolean operators
are not accepted in the REDUCE algebraic mode input. The first argument of
boolean can be any boolean expression, which may contain references to other
boolean values.
boolean (a and b or c);
q := boolean(a and b implies c);
boolean(q or not c);
Brackets are used to override the operator precedence as usual. The leaves or atoms of a boolean expression are those parts which do not contain a leading boolean operator. These are considered as constants during the boolean evaluation. There are two pre-defined values:
• true, t or 1
• false, nil or 0
These represent the boolean constants. In a result form they are used only as 1 and 0.

By default, a boolean expression is converted to a disjunctive normal form, that is, a form where terms are connected by or on the top level, and each term is a set of leaf expressions, possibly preceded by not and connected by and. An operator or or and is omitted if it would have only a single operand. The result of the transformation is again an expression with leading operator boolean, so that boolean expressions remain separated from other algebraic data. Only the boolean constants 0 and 1 are returned untagged.
On output, the operators and and or are represented as /\ and \/, respectively.
boolean(true and false);
boolean(a or not(b and c)); -> boolean(not(b) \/ not(c) \/ a)
boolean(a equiv not c);
-> boolean(not(a)/\c \/ a/\not(c))


Normal forms

The disjunctive normal form is used by default. It represents the “natural” view and allows any form to be represented free of parentheses. Alternatively a conjunctive normal form can be selected as simplification target; this is a form with leading operator and. To produce that form add the keyword and as an additional argument to a call of boolean.
boolean (a or b implies c);
boolean(not(a)/\not(b) \/ c)
boolean (a or b implies c, and);
boolean((not(a) \/ c)/\(not(b) \/ c))
Usually the result is a fully reduced disjunctive or conjunctive normal form, where all redundant elements have been eliminated following the rules
a ∧ b ∨ ¬a ∧ b ←→ b
(a ∨ b) ∧ (¬a ∨ b) ←→ b
Internally the full normal forms are computed as intermediate result; in these forms
each term contains all leaf expressions, each one exactly once. This unreduced
form is returned when you set the additional keyword full:
boolean (a or b implies c, full);
boolean(a/\b/\c \/ a/\not(b)/\c \/ not(a)/\b/\c \/ not(a)/\not(b)/\c


\/ not(a)/\not(b)/\not(c))

The keywords full and and may be combined.
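The full disjunctive normal form described above — one term per satisfying assignment, each term mentioning every leaf exactly once — can be modelled in Python. This is an illustrative sketch of the idea only, not the package's implementation; the function full_dnf and its lambda-based leaf evaluation are assumptions of the sketch.

```python
from itertools import product

def full_dnf(expr, names):
    """Full disjunctive normal form of a Boolean function: one term per
    satisfying assignment, each term mentioning every leaf exactly once.
    Output uses the manual's /\\ and \\/ notation."""
    terms = []
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if expr(env):
            terms.append(" /\\ ".join(
                n if env[n] else f"not({n})" for n in names))
    return " \\/ ".join(terms)

# a or not(b and c) is false only for a=0, b=1, c=1, so 7 minterms appear
f = lambda e: e["a"] or not (e["b"] and e["c"])
print(full_dnf(f, ["a", "b", "c"]))
```

The reduced forms shown earlier are then obtained by merging minterms with the rules of the previous section.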


Evaluation of a boolean expression

If the leaves of the boolean expression are algebraic expressions which may evaluate to logical values because the environment has changed (e.g. variables have been bound), you can re-investigate the expression using the operator testbool with the boolean expression as argument. This operator tries to evaluate all leaf expressions in REDUCE boolean style. As many terms as possible are replaced by their boolean values; the others remain unchanged. The resulting expression is contracted to a minimal form. The result 1 (= true) or 0 (= false) signals that the complete expression could be evaluated.
In the following example the leaves are numeric greater-than tests. To use > in the expressions, the greater-than sign must first be declared an operator. The error messages are meaningless and can be ignored.
operator >;
fm:=boolean(x>v or not (u>v));
fm := boolean(not(u>v) \/ x>v)
testbool fm;
***** u - 10 invalid as number
***** x - 10 invalid as number
boolean(not(u>10) \/ x>10)
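The behaviour of testbool — evaluate the leaves that can be decided, keep the rest symbolic, and contract the result — can be modelled in Python on a disjunctive normal form held as a list of terms. The list-of-(leaf, negated)-pairs representation is an assumption of this sketch, not the package's internal form.

```python
def partial_eval(dnf, env):
    """Model of testbool: dnf is a list of terms, each a list of
    (leaf, negated) pairs; env gives truth values for some leaves.
    Decided leaves are removed; undecidable ones stay symbolic."""
    out = []
    for term in dnf:
        keep, dead = [], False
        for leaf, neg in term:
            if leaf in env:
                if not (env[leaf] ^ neg):
                    dead = True       # one false factor kills the term
                    break
                # a true factor simply disappears from the conjunction
            else:
                keep.append((leaf, neg))
        if not dead:
            if not keep:              # a fully true term decides the form
                return True
            out.append(keep)
    return out or False

# not(u>v) \/ x>v with u>v known false: the whole form is true
print(partial_eval([[("u>v", True)], [("x>v", False)]], {"u>v": False}))
```

With u>v true instead, only the undecided leaf x>v survives, mirroring the partially evaluated output above.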




CALI: A package for computational commutative algebra

This package contains algorithms for computations in commutative algebra closely
related to the Gröbner algorithm for ideals and modules. Its heart is a new implementation of the Gröbner algorithm that also allows for the computation of syzygies. This implementation is also applicable to submodules of free modules with
generators represented as rows of a matrix.
Author: Hans-Gert Gräbe.



CAMAL: Calculations in celestial mechanics

This package implements in REDUCE the Fourier transform procedures of the
CAMAL package for celestial mechanics.
Author: John P. Fitch.
It is generally accepted that special purpose algebraic systems are more efficient
than general purpose ones, but as machines get faster this does not matter. An
experiment has been performed to see if using the ideas of the special purpose
algebra system CAMAL(F) it is possible to make the general purpose system REDUCE perform calculations in celestial mechanics as efficiently as CAMAL did
twenty years ago. To this end a prototype Fourier module is created for REDUCE,
and it is tested on some small and medium-sized problems taken from the CAMAL
test suite. The largest calculation is the determination of the Lunar Disturbing
Function to the sixth order. An assessment is made as to the progress, or lack of it, which computer algebra has made, and how efficiently we are using modern hardware.



A number of years ago there emerged the divide between general-purpose algebra systems and special-purpose ones. Here we investigate how far the improvements in software and, more predominantly, hardware have enabled the general systems to perform as well as the earlier special ones. It is similar in some respects to the Poisson program for MACSYMA [8], which was written in response to a similar challenge.
The particular subject for investigation is the Fourier series manipulator which had
its origins in the Cambridge University Institute for Theoretical Astronomy, and
later became the F subsystem of CAMAL [3, 10]. In the late 1960s this system
was used for both the Delaunay Lunar Theory [7, 2] and the Hill Lunar Theory
[5], as well as other related calculations. Its particular area of application had a
number of peculiar operations on which the general speed depended. These are
outlined below in the section describing how CAMAL worked. There have been a
number of subsequent special systems for celestial mechanics, but these tend to be
restricted to the group of the originator.
The main body of the paper describes an experiment to create within the REDUCE
system a sub-system for the efficient manipulation of Fourier series. This prototype
program is then assessed against both the normal (general) REDUCE and the extant
CAMAL results. The tests are run on a number of small problems typical of those
for which CAMAL was used, and one medium-sized problem, the calculation of
the Lunar Disturbing Function. The mathematical background to this problem is
also presented for completeness. It is important as a problem as it is the first stage



in the development of a Delaunay Lunar Theory.
The paper ends with an assessment of how close the performance of a modern
REDUCE on modern equipment is to the (almost) defunct CAMAL of eighteen
years ago.


How CAMAL Worked

The Cambridge Algebra System was initially written in assembler for the Titan
computer, but later was rewritten a number of times, and matured in BCPL, a version which was ported to IBM mainframes and a number of microcomputers. In
this section a brief review of the main data structures and special algorithms is given.
CAMAL Data Structures
CAMAL is a hierarchical system, with the representation of polynomials being
completely independent of the representations of the angular parts.
The angular part had to represent a polynomial coefficient, either a sine or cosine
function and a linear sum of angles. In the problems for which CAMAL was
designed there are only 6 angles, and so the design restricted the number, initially to six on the 24-bit-halfword TITAN, and later to eight angles on the 32-bit IBM 370, each with fixed names (usually u through z). All that is needed is to remember the coefficients of the linear sum. As typical problems are perturbations, it was reasonable to restrict the coefficients to small integers, as could be represented in a byte with a guard bit. This allowed the representation to pack everything into four words:
[ NextTerm, Coefficient, Angles0-3, Angles4-7 ]
The function was coded by a single bit in the Coefficient field. This gives a
particularly compact representation. For example the Fourier term sin(u − 2v +
w − 3x) would be represented as
[ NULL, "1"|0x1, 0x017e017d, 0x00000000 ]
[ NULL, "1"|0x1, 1:-2:1:-3, 0:0:0:0 ]
where "1" is a pointer to the representation of the polynomial 1. In all, this representation of the term took 48 bytes. As the complexity of a term increases the store requirements do not grow much; the expression (7/4)ae³f⁵ cos(u − 2v + 3w − 4x + 5y + 6z) also takes 48 bytes. There is a canonicalisation operation to ensure
that the leading angle is positive, and sin(0) gets removed. It should be noted that

cos(0) is a valid and necessary representation.
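The packed angle word can be reproduced in Python. Assuming, as the text suggests, a 7-bit two's-complement field per angle with the top bit of each byte as the guard bit (this encoding is inferred, not documented), the multipliers 1, −2, 1, −3 of sin(u − 2v + w − 3x) pack exactly to the 0x017e017d shown above.

```python
def pack_angles(multipliers):
    """Pack four angle multipliers into one 32-bit word, one byte each,
    using 7-bit two's complement (top bit of each byte is a guard bit)."""
    word = 0
    for m in multipliers:
        assert -64 <= m < 64, "multiplier must fit in 7 bits"
        word = (word << 8) | (m & 0x7F)
    return word

# sin(u - 2v + w - 3x): multipliers 1, -2, 1, -3
print(hex(pack_angles([1, -2, 1, -3])))   # 0x17e017d
```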
The polynomial part was similarly represented, as a chain of terms with packed
exponents for a fixed number of variables. There is no particular significance in this
except that the terms were held in increasing total order, rather than the decreasing
order which is normal in general purpose systems. This had a number of important
effects on the efficiency of polynomial multiplication in the presence of a truncation
to a certain order. We will return to this point later. Full details of the representation
can be found in [9].
The space administration system was based on explicit return rather than garbage
collection. This meant that the system was sometimes harder to write, but it did
mean that much attention was focussed on efficient reuse of space. It was possible
for the user to assist in this by marking when an expression was needed no longer,
and the compiler then arranged to recycle the space as part of the actual operation. This degree of control was another assistance in running of large problems on
relatively small machines.
Automatic Linearisation
In order to maintain Fourier series in a canonical form it is necessary to apply the
transformations for linearising products of sine and cosines. These will be familiar
to readers of the REDUCE test program as
cos θ cos φ ⇒ (cos(θ + φ) + cos(θ − φ))/2,
cos θ sin φ ⇒ (sin(θ + φ) − sin(θ − φ))/2,
sin θ sin φ ⇒ (cos(θ − φ) − cos(θ + φ))/2,
cos² θ ⇒ (1 + cos(2θ))/2,
sin² θ ⇒ (1 − cos(2θ))/2.

In CAMAL these transformations are coded directly into the multiplication routines, and no action is necessary on the part of the user to invoke them. Of course
they cannot be turned off either.
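The effect of building these identities into the product routine can be sketched in Python. A term is modelled here as a (coefficient, function, angle-multiplier tuple) triple — an assumed representation for the sketch, not CAMAL's packed layout. Note that a sin·cos product comes back with the difference angle negated; the canonicalisation step mentioned above (leading angle positive) would normalise this.

```python
from fractions import Fraction

def linearise_product(t1, t2):
    """Product of two Fourier terms, linearised on the fly via the
    identities above; the result is always a list of two terms."""
    c1, f1, a1 = t1
    c2, f2, a2 = t2
    c = Fraction(c1) * Fraction(c2) / 2
    plus  = tuple(x + y for x, y in zip(a1, a2))
    minus = tuple(x - y for x, y in zip(a1, a2))
    if f1 == "cos" and f2 == "cos":
        return [(c, "cos", plus), (c, "cos", minus)]
    if f1 == "sin" and f2 == "sin":
        return [(c, "cos", minus), (-c, "cos", plus)]
    if f1 == "cos" and f2 == "sin":
        return [(c, "sin", plus), (-c, "sin", minus)]
    # sin * cos: swap the factors so the cos*sin rule applies
    return linearise_product(t2, t1)

# cos(u) * cos(v) -> cos(u+v)/2 + cos(u-v)/2
print(linearise_product((1, "cos", (1, 0)), (1, "cos", (0, 1))))
```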
Differentiation and Integration
The differentiation of a Fourier series with respect to an angle is particularly simple. The integration of a Fourier series is a little more interesting. The terms like
cos(nu + . . .) are easily integrated with respect to u, but the treatment of terms
independent of the angle would normally introduce a secular term. By convention
in Fourier series these secular terms are ignored, and the constant of integration is
taken as just the terms independent of the angle in the integrand. This is equivalent



to the substitution rules
sin(nθ) ⇒ −(1/n) cos(nθ)
cos(nθ) ⇒ (1/n) sin(nθ)
In CAMAL these operations were coded directly, and independently of the differentiation and integration of the polynomial coefficients.
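Applied term by term, the substitution rules amount to the following sketch in Python (the term representation is assumed for illustration, not CAMAL's):

```python
from fractions import Fraction

def hint_term(coeff, func, n):
    """Integrate coeff*sin(n*theta) or coeff*cos(n*theta) with respect
    to theta (n nonzero), following the substitution rules above."""
    if func == "sin":
        return (-Fraction(coeff, n), "cos")
    return (Fraction(coeff, n), "sin")

# integral of sin(3*theta) is -(1/3) cos(3*theta)
print(hint_term(1, "sin", 3))
```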
Harmonic Substitution
An operation which is of great importance in Fourier operations is the harmonic
substitution. This is the substitution of the sum of some angles and a general expression for an angle. In order to preserve the format, the mechanism uses the usual addition formulae
sin(θ + A) ⇒ sin(θ) cos(A) + cos(θ) sin(A)
cos(θ + A) ⇒ cos(θ) cos(A) − sin(θ) sin(A)

and then assuming that the value A is small it can be replaced by its expansion:
sin(θ + A) ⇒ sin(θ){1 − A²/2! + A⁴/4! − . . .} + cos(θ){A − A³/3! + A⁵/5! − . . .}
cos(θ + A) ⇒ cos(θ){1 − A²/2! + A⁴/4! − . . .} − sin(θ){A − A³/3! + A⁵/5! − . . .}

If a truncation is set for large powers of the polynomial variables then the series
will terminate. In CAMAL the HSUB operation took five arguments; the original
expression, the angle for which there is a substitution, the new angular part, the
expression part (A in the above), and the number of terms required.
The actual coding of the operation was not as expressed above, but by the use of
Taylor’s theorem. As has been noted above the differentiation of a harmonic series
is particularly easy.
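Numerically the truncated expansion behaves as described; a small floating-point check in Python (illustrative only, not the module's symbolic computation):

```python
import math

def hsub_sin(theta, A, nterms):
    """sin(theta + A) for small A, expanded as in the text:
    sin(theta) * cosine series in A + cos(theta) * sine series in A."""
    ceven = sum((-1)**k * A**(2*k)   / math.factorial(2*k)
                for k in range(nterms))
    codd  = sum((-1)**k * A**(2*k+1) / math.factorial(2*k+1)
                for k in range(nterms))
    return math.sin(theta) * ceven + math.cos(theta) * codd

# with A small, a few terms already match sin(theta + A) closely
print(abs(hsub_sin(0.7, 0.01, 4) - math.sin(0.71)))
```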
Truncation of Series
The main use of Fourier series systems is in generating perturbation expansions,
and this implies that the calculations are performed to some degree of the small
quantities. In the original CAMAL all variables were assumed to be equally small
(a restriction removed in later versions). By maintaining polynomials in increasing maximum order it is possible to truncate the multiplication of two polynomials.
Assume that we are multiplying the two polynomials
A = a₀ + a₁ + a₂ + . . .
B = b₀ + b₁ + b₂ + . . .
If we are generating the partial answer
aᵢ(b₀ + b₁ + b₂ + . . .)
then if for some j the product aᵢbⱼ vanishes, so will all products aᵢbₖ for k > j. This means that the later terms need not be generated. In the product of 1 + x + x² + . . . + x¹⁰ and 1 + y + y² + . . . + y¹⁰ to a total order of 10, instead of generating 100 term products only 55 are needed. The ordering can also
make the merging of the new terms into the answer easier.
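The saving can be demonstrated in Python. With ten terms in each factor (degrees 0 to 9 here — the exact exponent ranges are an assumption of this sketch) and truncation at total degree 9, the early exit forms 55 of the 100 possible term products, matching the figures quoted above.

```python
def truncated_product_count(degs_a, degs_b, maxorder):
    """Multiply two polynomials held in increasing order, stopping the
    inner loop at the first product exceeding the truncation order;
    returns the number of term products actually formed."""
    formed = 0
    for da in degs_a:
        for db in degs_b:
            if da + db > maxorder:
                break              # all later products vanish too
            formed += 1
    return formed

# ten terms each: 100 possible products, only 55 survive truncation
print(truncated_product_count(range(10), range(10), 9))
```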


Towards a CAMAL Module

For the purposes of this work it was necessary to reproduce as many of the ideas
of CAMAL as feasible within the REDUCE framework and philosophy. It was not
intended at this stage to produce a complete product, and so for simplicity a number
of compromises were made with the “no restrictions” principle in REDUCE and
the space and time efficiency of CAMAL. This section describes the basic design decisions.
Data Structures
In a fashion similar to CAMAL a two level data representation is used. The coefficients are the standard quotients of REDUCE, and their representation need not
concern us further. The angular part is similar to that of CAMAL, but the ability to
pack angle multipliers and use a single bit for the function are not readily available
in Standard LISP, so instead a longer vector is used. Two versions were written.
One used a balanced tree rather than a linear list for the Fourier terms, this being a
feature of CAMAL which was considered but never coded. The other uses a simple
linear representation for sums. The angle multipliers are held in a separate vector
in order to allow for future flexibility. This leads to a representation as a vector of
length 6 or 4:

[ BalanceBits, Coeff, Function, Angles, LeftTree, RightTree ]
[ Coeff, Function, Angles, Next ]

where the Angles field is a vector of length 8, for the multipliers. It was decided
to forego packing as for portability we do not know how many to pack into a small



integer. The tree system used is AVL, which needs 2 bits to maintain balance information, but these are coded as a complete integer field in the vector. We can expect
the improvements implicit in a binary tree to be advantageous for large expressions,
but the additional overhead may reduce its utility for smaller expressions.
A separate vector is kept relating the position of an angle to its print name, and
on the property list of each angle the allocation of its position is kept. So long as
the user declares which variables are to be treated as angles this mechanism gives
flexibility which was lacking in CAMAL.
As in the CAMAL system the linearisation of products of sines and cosines is done
not by pattern matching but by direct calculation at the heart of the product function, where the transformations (1) through (3) are made in the product of terms
function. A side effect of this is that there are no simple relations which can be used
from within the Fourier multiplication, and so a full addition of partial products is
required. There is no need to apply linearisations elsewhere as a special case. Addition, differentiation and integration cannot generate such products, and where
they can occur in substitution the natural algorithm uses the internal multiplication
function anyway.
Substitution is the main operation of Fourier series. It is useful to consider three
different cases of substitutions.
1. Angle Expression for Angle:
2. Angle Expression + Fourier Expression for Angle:
3. Fourier Expression for Polynomial Variable.
The first of these is straightforward, and does not require any further comment.
The second substitution requires a little more care, but is not significantly difficult
to implement. The method follows the algorithm used in CAMAL, using TAYLOR
series. Indeed this is the main special case for substitution.
The problem is the last case. Typically many variables used in a Fourier series
program have had a WEIGHT assigned to them. This means that substitution must
take account of any possible WEIGHTs for variables. The standard code in REDUCE does this in effect by translating the expression to prefix form, and recalculating the value. A Fourier series has a large number of coefficients, and so this
operation is repeated rather too often. At present this is the largest problem area

with the internal code, as will be seen in the discussion of the Disturbing Function calculation.


Integration with REDUCE

The Fourier module needs to be seen as part of REDUCE rather than as a separate
language. This can be seen as having internal and external parts.
Internal Interface
The Fourier expressions need to co-exist with the normal REDUCE syntax and
semantics. The prototype version does this by (ab)using the module method, based
in part on the TPS code [1]. Of course Fourier series are not constant, and so are
not really domain elements. However by asserting that Fourier series form a ring
of constants REDUCE can arrange to direct basic operations to the Fourier code
for addition, subtraction, multiplication and the like.
The main interface which needs to be provided is a simplification function for
Fourier expressions. This needs to provide compilation for linear sums of angles,
as well as constructing sine and cosine functions, and creating canonical forms.
User Interface
The creation of HDIFF and HINT functions for differentiation and integration disguises this. An
unsatisfactory aspect of the interface is that the tokens SIN and COS are already in
use. The prototype uses the operator form
fourier sin(u)
to introduce harmonically represented sine functions. An alternative of using the
tokens F_SIN and F_COS is also available.
It is necessary to declare the names of the angles, which is achieved with the declaration
harmonic theta, phi;
At present there is no protection against using a variable as both an angle and a
polynomial variable. This will need to be done in a user-oriented version.




The Simple Experiments

The REDUCE test file contains a simple example of a Fourier calculation, determining the value of (a₁ cos(wt) + a₃ cos(3wt) + b₁ sin(wt) + b₃ sin(3wt))³. For the purposes of this system this is too trivial to do more than confirm correct operation.
The simplest non-trivial calculation for a Fourier series manipulator is to solve Kepler’s equation for the eccentric anomaly E in terms of the mean anomaly u, and the eccentricity of an orbit e, considered as a small quantity:
E = u + e sin E
The solution proceeds by repeated approximation. Clearly the initial approximation is E₀ = u. The nth approximation can be written as u + Aₙ, and so Aₙ can
be calculated by
Aₖ = e sin(u + Aₖ₋₁)
This is of course precisely the case for which the HSUB operation is designed, and
so in order to calculate En − u all one requires is the code
bige := fourier 0;
for k:=1:n do <<
   wtlevel k;
   bige := fourier e * hsub(fourier(sin u), u, u, bige, k) >>;
write "Kepler Eqn solution:", bige$
It is possible to create a regular REDUCE program to simulate this (as is done
for example in Barton and Fitch[4], page 254). Comparing these two programs
indicates substantial advantages to the Fourier module, as could be expected.
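The iteration itself is easy to check numerically in Python; this is a floating-point model of the fixed-point scheme, not the Fourier module's series computation:

```python
import math

def solve_kepler(u, e, n):
    """Fixed-point iteration A_k = e*sin(u + A_{k-1}) from the text;
    returns E = u + A_n."""
    A = 0.0
    for _ in range(n):
        A = e * math.sin(u + A)
    return u + A

E = solve_kepler(1.0, 0.1, 12)
# E should satisfy Kepler's equation E - e*sin(E) = u
print(abs(E - 0.1 * math.sin(E) - 1.0))
```

Because each sweep contracts the error by roughly a factor of e, n iterations give a solution correct to order eⁿ, which is why wtlevel k suffices on the kth pass of the REDUCE program above.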

Solving Kepler’s Equation
Order REDUCE Fourier Module
13459.80 569.26
These results were with the linear representation of Fourier series. The tree representation was slightly slower. The ten-fold speed-up for the 13th order is most


A Medium-Sized Problem

Fourier series manipulators are primarily designed for large-scale calculations, but
for the demonstration purposes of this project a medium problem is considered.
The first stage in calculating the orbit of the Moon using the Delaunay theory (of
perturbed elliptic motion for the restricted 3-body problem) is to calculate the energy of the Moon’s motion about the Earth — the Hamiltonian of the system. This
is the calculation we use for comparisons.
Mathematical Background
The full calculation is described in detail in [6], but a brief description is given here
for completeness, and to grasp the extent of the calculation.



Referring to figure 1, which gives the coordinate system, the basic equations are
S = (1 − γ²) cos(f + g + h − f′ − g′ − h′) + γ² cos(f + g − h + f′ + g′ + h′)   (16.40)
r = a(1 − e cos E)   (16.41)
l = E − e sin E   (16.42)
a/r = (1 + e cos f)/(1 − e²)   (16.43)
r² df/dl = a²(1 − e²)^{1/2}   (16.44)
R = (m′a²/a′³) [ (r/a)²(a′/r′)³ P₂(S) + (a/a′)(r/a)³(a′/r′)⁴ P₃(S) + . . . ]   (16.45)
There are similar equations to (7) to (10) for the quantities r′, a′, e′, l′, E′ and f′ which refer to the position of the Sun rather than the Moon. The problem is to calculate the expression R as an expansion in terms of the quantities e, e′, γ, a/a′, l, g, h, l′, g′ and h′. The first three quantities are small quantities of the first order, and a/a′ is of the second order.
The steps required are
1. Solve the Kepler equation (8)
2. Substitute into (7) to give r/a in terms of e and l.
3. Calculate a/r from (9) and f from (10)
4. Substitute for f and f 0 into S using (6)
5. Calculate R from S, a0 /r0 and r/a
The program is given in the Appendix.
The Lunar Disturbing function was calculated by a direct coding of the previous
sections’ mathematics. The program was taken from Barton and Fitch [4] with
just small changes to generalise it for any order, and to make it acceptable for
Reduce3.4. The Fourier program followed the same pattern, but obviously used
the HSUB operation as appropriate and the harmonic integration. It is very similar
to the CAMAL program in [4].
The disturbing function was calculated to orders 2, 4 and 6 using Cambridge LISP
on an HLH Orion 1/05 (Intergraph Clipper), with the three programs α) Reduce3.4,
β) Reduce3.4 + Camal Linear Module and γ) Reduce3.4 + Camal AVL Module.

The timings in CPU seconds (excluding garbage collection time) are summarised in the following table:
[Table: CPU times by order of DDF for Reduce 3.4, Camal Linear and Camal Tree]

If these numbers are normalised so REDUCE calculating the DDF is 100 units for
each order the table becomes
[Table: times normalised to REDUCE = 100 for each order of DDF, for Camal Linear and Camal Tree]

From this we conclude that a doubling of speed is about correct, and although the
balanced tree system is slower as the problem size increases the gap between it and
the simpler linear system is narrowing.
It is disappointing that the ratio is not better, nor the absolute time less. It is worth
noting in this context that Jefferys claimed that the sixth order DDF took 30s on a
CDC6600 with TRIGMAN in 1970 [11], and Barton and Fitch took about 1s for
the second order DDF on TITAN with CAMAL [4]. A closer look at the relative
times for individual sections of the program shows that the substitution case of
replacing a polynomial variable by a Fourier series is only marginally faster than
the simple REDUCE program. In the DDF program this operation is only used
once in a major form, substituting into the Legendre polynomials, which have been previously calculated by Rodrigues’ formula. This suggests that we replace this with the recurrence relationship.
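The recurrence in question is presumably the standard three-term relation (n+1)Pₙ₊₁(x) = (2n+1)xPₙ(x) − nPₙ₋₁(x), which avoids the repeated differentiation of Rodrigues' formula. A Python sketch, with polynomials as coefficient lists:

```python
from fractions import Fraction

def legendre(nmax):
    """P_0 .. P_nmax via the three-term recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),
    each polynomial held as a coefficient list, lowest degree first."""
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]
    for n in range(1, nmax):
        xPn = [Fraction(0)] + P[n]                 # multiply P_n by x
        nxt = [(2*n + 1) * c for c in xPn]
        for i, c in enumerate(P[n-1]):
            nxt[i] -= n * c
        P.append([c / (n + 1) for c in nxt])
    return P

# P_2(x) = (3x^2 - 1)/2 and P_3(x) = (5x^3 - 3x)/2
print(legendre(3)[2], legendre(3)[3])
```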
Making this change actually slows down the normal REDUCE by a small amount
but makes a significant change to the Fourier module; it reduces the run time for
the 6th order DDF from 3084.62s to 2002.02s. This gives some indication of the
problems with benchmarks. What is clear is that the current implementation of
substitution of a Fourier series for a polynomial variable is inadequate.



The Fourier module is far from complete. The operations necessary for the solution
of Duffing’s and Hill’s equations are not yet written, although they should not
cause much problem. The main deficiency is the treatment of series truncation; at present it relies on the REDUCE WTLEVEL mechanism, and this seems too coarse for efficient truncation. It would be possible to re-write the polynomial
manipulator as well, while retaining the REDUCE syntax, but that seems rather more work than one would hope to need.
The real failure so far is the large time lag between the REDUCE-based system on a
modern workstation against a mainframe of 25 years ago running a special system.
The CAMAL Disturbing function program could calculate the tenth order with a
maximum of 32K words (about 192Kbytes) whereas this system failed to calculate
the eighth order in 4 Mbytes (taking 2000s before failing). I have in my archives
the output from the standard CAMAL test suite, which includes a sixth order DDF
on an IBM 370/165 run on 2 June 1978, taking 22.50s and using a maximum of
15459 words of memory for heap — or about 62Kbytes. A rough estimate is that
the Orion 1/05 is comparable in speed to the 370/165, but with more real memory
and virtual memory.
However, a simple Fourier manipulator has been created for REDUCE which performs between twice and three times the speed of REDUCE using pattern matching. It has been shown that this system is capable of performing the calculations of
celestial mechanics, but it still seriously lags behind the efficiency of the specialist
systems of twenty years before. It is perhaps fortunate that it has not been possible
to compare it with a modern specialist system.
There is still work to do to provide a convenient user interface, but it is intended to
develop the system in this direction. It would be pleasant to have again a system of
the efficiency of CAMAL(F).
I would like to thank Codemist Ltd for the provision of computing resources for this project, and David Barton who taught me so much about Fourier series and celestial mechanics. Thanks are also due to the National Health Service, without whom this work and paper could not have been produced.

Appendix: The DDF Function
array p(n/2+2);
harmonic u,v,w,x,y,z;
weight e=1, b=1, d=1, a=1;
%% Generate Legendre Polynomials to sufficient order
for i:=2:n/2+2 do <<
   p(i):=(h*h-1)^i;
   for j:=1:i do p(i):=df(p(i),h)/(2*j) >>;
%%%%%%%%%%%%%%%% Step1: Solve Kepler equation
bige := fourier 0;

for k:=1:n do <<
   wtlevel k;
   bige:=fourier e * hsub(fourier(sin u), u, u, bige, k) >>;
%% Ensure we do not calculate things of too high an order
wtlevel n;
%%%%%%%%%%%%%%%% Step 2: Calculate r/a in terms of e and l
dd:=-e*e; hh:=3/2; j:=1; cc := 1;
for i:=1:n/2 do <<
   j:=i*j; hh:=hh-1; cc:=cc+hh*(dd^i)/j >>;
bb:=hsub(fourier(1-e*cos u), u, u, bige, n);
aa:=fourier 1+hdiff(bige,u); ff:=hint(aa*aa*fourier cc,u);
%%%%%%%%%%%%%%%% Step 3: a/r and f
uu := hsub(bb,u,v); uu:=hsub(uu,e,b);
vv := hsub(aa,u,v); vv:=hsub(vv,e,b);
ww := hsub(ff,u,v); ww:=hsub(ww,e,b);
%%%%%%%%%%%%%%%% Step 4: Substitute f and f’ into S
yy:=ff-ww; zz:=ff+ww;
%%%%%%%%%%%%%%%% Step 5: Calculate R
zz:=bb*vv; yy:=zz*zz*vv;
on fourier;
for i := 2:n/2+2 do <<
wtlevel n+4-2*i; p(i) := hsub(p(i), h, xx) >>;
wtlevel n;
for i:=n/2+2 step -1 until 3 do



[1] A. Barnes and J. A. Padget. Univariate power series expansions in Reduce. In
S. Watanabe and M. Nagata, editors, Proceedings of ISSAC’90, pages 82–7.
ACM, Addison-Wesley, 1990.
[2] D. Barton. Astronomical Journal, 72:1281–7, 1967.
[3] D. Barton. A scheme for manipulative algebra on a computer. Computer
Journal, 9:340–4, 1967.
[4] D. Barton and J. P. Fitch. The application of symbolic algebra systems to physics. Reports on Progress in Physics, 35:235–314, 1972.
[5] Stephen R. Bourne. Literal expressions for the co-ordinates of the moon. I.
the first degree terms. Celestial Mechanics, 6:167–186, 1972.
[6] E. W. Brown. An Introductory Treatise on the Lunar Theory. Cambridge
University Press, 1896.
[7] C. Delaunay. Théorie du Mouvement de la Lune. (Extraits des Mém. Acad.
Sci.). Mallet-Bachelier, Paris, 1860.
[8] Richard J. Fateman. On the multiplication of Poisson series. Celestial Mechanics, 10(2):243–249, October 1974.
[9] J. P. Fitch. Syllabus for algebraic manipulation lectures in Cambridge. SIGSAM Bulletin, 32:15, 1975.
[10] J. P. Fitch. CAMAL User’s Manual. University of Cambridge Computer
Laboratory, 2nd edition, 1983.
[11] W. H. Jefferys. Celestial Mechanics, 2:474–80, 1970.



CANTENS: A Package for Manipulations and Simplifications of Indexed Objects

This package creates an environment which allows the user to manipulate and simplify expressions containing various indexed objects like tensors, spinors, fields
and quantum fields.
Author: Hubert Caprasse.



CANTENS is a package that creates an environment inside REDUCE which allows the user to manipulate and simplify expressions containing various indexed objects like tensors, spinors, fields and quantum fields. Briefly, it allows the user
- to define generic indexed quantities which can possibly depend, implicitly or explicitly, on any number of variables;
- to define one or several affine or metric (sub-)spaces, and to work within them without difficulty;
- to handle dummy indices and adequately simplify expressions which contain them.
Beside the above features, it offers the user:
1. Several invariant elementary tensors which are always used in the applications involving the use of indexed objects like delta, epsilon, eta
and the generalized delta function.
2. The possibility to define any metric and to make it block-diagonal if desired.
3. The capability to symmetrize or antisymmetrize any expression.
4. The possibility to introduce any kind of symmetry (even partial symmetries)
for the indexed objects.
5. The choice to work with commutative, non-commutative or anticommutative
indexed objects.
In this package, one cannot find algorithms or specific objects (e.g. the covariant derivative or the SU(3) group structure constants) which are used in nuclear and particle physics. The objective of the package is simply to allow the user to easily formulate his algorithms in the notation he likes most. The package



is also conceived so as to minimize the number of new commands. However, the
large number of new capabilities inherently implies that quite a substantial number
of new functions and commands must be used. On the other hand, in order to
avoid too many error or warning messages, the package assumes, in many cases,
that the user is responsible for the consistency of his inputs. The author is aware that
the package can still be improved, and he will be grateful to all people who spare
some time to communicate bugs or suggest improvements.
The documentation below is separated into four sections. In the first one, the
space(s) properties and definitions are described.
In the second one, the commands to generate and handle generic indexed quantities
(abusively called tensors) are illustrated. The manipulation and control of free and
dummy indices is discussed.
In the third one, the special tensors are introduced and their properties discussed
especially with respect to their ability to work simultaneously within several subspaces.
The last section, which is also the most important, is devoted entirely to the simplification function CANONICAL. This function originates from the package DUMMY
and has been substantially extended. It takes account of all symmetries, performs
dummy summations and seeks a “canonical” form for any tensorial expression.
Without it, the present package would be much less useful.
Finally, an index has been created. It contains numerous references to the text.
Different typefaces have been adopted to make a clear distinction between the various kinds of keywords.
The conventions are the following:
• Procedure keywords are typed in capital roman letters.
• Package keywords are typed in typewriter capital letters.
• Cantens package keywords are in small typewriter letters.
• All other keywords are typed in small roman letters.
When CANTENS is loaded, the packages ASSIST and DUMMY are also loaded.


Handling of space(s)

One can work either in a single space environment or in a multiple space environment. After the package is loaded, the single space environment is set and a unique
space is defined. It is euclidian, and has a symbolic dimension equal to dim. The
single space environment is determined by the switch ONESPACE which is turned
on. One can verify the above assertions as follows:

onespace ?; => yes
wholespace_dim ?; => dim
signature ?; => 0
One can introduce a pseudoeuclidian metric for the above space by the command
SIGNATURE and verify that the signature is indeed 1:
signature 1;
signature ?; => 1
In principle the signature may be set to any positive integer. However, presently,
the package cannot handle signatures larger than 1. One gets the Minkowski-like
space metric
     1  0  0  0
     0 -1  0  0
     0  0 -1  0
     0  0  0 -1
which corresponds to the convention of high energy physicists. It is possible to
change it into the astrophysicists’ convention using the command GLOBAL_SIGN:
global_sign ?; => 1
global_sign (-1);
global_sign ?; => -1
This means that the actual metric is now (−1, 1, 1, 1). The space dimension may,
of course, be assigned at will using the function WHOLESPACE_DIM. Below, it is
assigned to 4:
wholespace_dim 4; ==> 4
When the switch ONESPACE is turned off, the system assumes that this default
space is non-existent and, therefore, that the user is going to define the space(s) in
which he wants to work. Unexpected error messages will occur if this is not done.
Once the switch is turned off many more functions become active. A few of them
are available in algebraic mode to allow the user to properly construct and
control the properties of the various (sub-)spaces he is going to define and, also, to
assign symbolic indices to some of them.
DEFINE_SPACES is the space constructor and wholespace is a reserved identifier which is meant to be the name of the global space if subspaces are introduced.
Suppose we want to define a unique space: we can choose any name for it, but
choosing wholespace will be more efficient. On the other hand, it leaves open
the possibility to introduce subspaces in a more transparent way. So one writes, for instance,
define_spaces wholespace=
{6,signature=1,indexrange=0 .. 5}; ==>t
The arguments inside the list assign respectively the dimension, the signature and
the allowed range of the numeric indices. Notice that the range starts from
0 and not from 1. This is done to conform with the usual convention for spaces of
signature equal to 1. However, this is not compulsory. Notice that the declaration
of the indexrange may be omitted if this is the only defined space. There are two
other options which may replace the signature option, namely euclidian and
affine. They both have an obvious significance.
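For instance, assuming a fresh multi-space environment, a purely affine three-dimensional space could be declared as follows (a hypothetical illustration patterned on the define_spaces calls shown in this section; the returned t merely acknowledges the definition):

off onespace;
define_spaces aff={3,affine,indexrange=1 .. 3}; ==> t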
In the subsequent example, an eleven-dimensional global space is defined and two
subspaces of this space are specified. Notice that no indexrange has been declared
for the entire space. However, the indexrange declaration is compulsory for subspaces, otherwise the package will work improperly when dealing with numeric
indices.
define_spaces wholespace={11,signature=1}; ==> t
define_spaces mink=
{4,signature=1,indexrange=0 .. 3}; ==> t
define_spaces eucl=
{6,euclidian,indexrange=4 .. 9}; ==> t
To be reminded of the space context in which one is working, the function SHOW_SPACES should be used. Its output is an algebraic value from which the
user can retrieve all the information displayed. After the declarations above, this
function gives:
show_spaces(); ==>



If an input error is made or if one wants to change the space framework, one cannot
directly redefine the relevant space(s). For instance, the input
define_spaces eucl=
{7,euclidian,indexrange=4 .. 9}; ==>
*** Warning: eucl cannot be (or is already)
defined as space identifier
which aims to fill all dimensions present in wholespace, tells us that the space eucl
cannot be redefined. To redefine it effectively, one must first remove the existing definition using the function REM_SPACES, which takes any number of space-names
as its argument. Here is the illustration:
rem_spaces eucl; ==> t
show_spaces(); ==>
define_spaces eucl=
{7,euclidian,indexrange=4 .. 10}; ==> t
show_spaces(); ==>
Here, the user is entirely responsible for the coherence of his construction. The
system does NOT verify it, and will run incorrectly if there is a mistake at this level.
When two spaces form a direct product (as the color and Minkowski



spaces in quantum chromodynamics), it is not necessary to introduce the global
space wholespace.
“Tensors” and symbolic indices can be declared to belong to a specific space or
subspace. This is in fact an essential ingredient of the package and makes it able
to handle expressions which involve quantities belonging to several (sub-)spaces
or to handle bloc-diagonal “tensors”. This will be discussed in the next section.
Here, we just mention how to declare that some set of symbolic indices belongs to
a specific (sub-)space, or how to declare that they belong to any space. The relevant
command is MK_IDS_BELONG_SPACE, whose syntax is
mk_ids_belong_space({<id1>,...,<idn>}, <(sub-)space identifier>);
For example, within the above declared spaces one could write:
mk_ids_belong_space({a0,a1,a2,a3},mink); ==> t
mk_ids_belong_space({x,y,z,u,v},eucl); ==> t
The command MK_IDS_BELONG_ANYSPACE makes them usable again, either in wholespace if it is defined or in any one of the defined spaces. For
instance, the declaration:
mk_ids_belong_anyspace a1,a2; ==> t
tells that a1 and a2 belong either to mink or to eucl or to wholespace.


Generic tensors and their manipulation

The generic tensors handled by CANTENS are objects much more general than
usual tensors, because they are not supposed to obey well-defined transformation properties under a change of coordinates. They are simply indexed quantities. The indices are either contravariant (upper indices) or covariant (lower
indices). They can be symbolic or numeric. When a given index is found
both in one upper and in one lower place, it is summed over all the
space-coordinates it belongs to, viz. it is a dummy index and is automatically recognized as such. So these objects obey the summation rules of tensor calculus. This is why, and only why, they are called tensors. Moreover, aside from indices
they may also depend implicitly or explicitly on any number of variables. Within
this definition, tensors may also be spinors, they can be non-commutative or anticommutative, and they may also be algebra generators and represent fields or quantum
fields.

Implications of TENSOR declaration
The procedure TENSOR which takes an arbitrary number of identifiers as argument
defines them as operator-like objects which admit an arbitrary number of indices.
Each component has a formal character and may or may not belong to a specific
(sub-)space. Numeric indices are also allowed. The way to distinguish upper and
lower indices is the same as in the package EXCALC, e.g. -a is a lower
index and a is an upper index. A special printing function has been created so as
to mimic as much as possible the way of writing such objects on a sheet of paper.
Let us illustrate the use of TENSOR:
tensor te; ==> t
te(3,a,-4,b,-c,7); ==>
    3 a   b   7
 te
        4   c

te(3,a,{x,y},-4,b,-c,7); ==>
    3 a   b   7
 te(x,y)
        4   c

te(3,a,-4,b,{u,v},-c,7); ==>
    3 a   b   7
 te(u,v)
        4   c

te({x,y}); ==> te(x,y)
Notice that the system distinguishes indices from variables on input solely on the
basis that the user puts variables inside a list.
The dependence can also be declared implicitly through the REDUCE command
DEPEND, which is generalized so as to allow one to declare a tensor to depend on



another tensor, irrespective of its components. This means that only one declaration
is enough to express the dependence with respect to all its components. A simple
illustration:
tensor te,x;
depend te,x;
df(te(a,-b),x(c)); ==>
        a   c
 df(te    ,x  )
        b
Therefore, when all objects are tensors, the dependence declaration is valid for all
their components. One can also avoid the trouble of placing the explicit variables inside a list if one
declares them as variables through the command MAKE_VARIABLES. This property
can also be removed using REMOVE_VARIABLES:
make_variables x,y; ==> t
te(x,y); ==> te(x,y)

te(x,y,a); ==>
    a
 te(x,y)

remove_variables x; ==> t

te(x,y,a); ==>
    x a
 te(y)
If one does this, one must be careful not to substitute a number for such declared
variables, because this number would be considered as an index and no longer as a
variable. So it is only useful for formal variables.
(One important feature of this package is its reversibility, viz. it gives the user the means to undo
his previous operations at any time. Most functions described below possess a “removing” counterpart.)
A tensor can be easily eliminated using the function REM_TENSOR. Its syntax is
rem_tensor t1,t2,t3 ...;
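For instance, the tensor te used in the runs above could be eliminated (and later redeclared) with the call below; the return value t is assumed here by analogy with the other removing functions such as REM_SPACES:

rem_tensor te; ==> t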
Dummy indices recognition For all individual tensors met by the evaluator, the
system will analyse the written indices and will detect those which must be considered dummy according to the usual rules of tensor calculus. Those indices
will be given the dummy property and will no longer be allowed to play the role
of free indices unless the user removes this dummy property. In that way, the
system checks immediately the consistency of an input. Three functions are at
the disposal of the user to control dummy indices. They are DUMMY_INDICES,
REM_DUMMY_INDICES and REM_DUMMY_IDS. The following illustrates their
use as well as the behaviour of the system:
dummy_indices(); ==> {} % In a fresh environment
te(a,b,-c,-a); ==>
    a b
 te
        c a
dummy_indices(); ==> {a}
te(a,b,-c,a); ==>
***** ((c)(a b a)) are inconsistent lists of indices
% a cannot be found twice as an upper index
te(a,b,-b,-a); ==>
    a b
 te
        b a
dummy_indices(); ==> {b,a}
te(d,-d,d); ==>


***** ((d)(d d)) are inconsistent lists of indices
dummy_indices(); ==> {d,b,a}
rem_dummy_ids d; ==> t
dummy_indices(); ==> {b,a}
te(d,d); ==>
    d d
 te

This is allowed again.

dummy_indices(); ==> {b,a}
rem_dummy_indices(); ==> t
dummy_indices(); ==> {}
Other verifications of coherence are made when space specifications are introduced
both in the ON and OFF onespace environment. We shall discuss them later.
Substitutions, assignments and rewriting rules The user must be able to manipulate and give specific characteristics to the generic tensors he has introduced.
Since tensors are essentially REDUCE operators, the usual commands of the system are available. However, some limitations are implied by the fact that indices,
and especially numeric indices, must always be properly recognized before any
substitution or manipulation is done. We have gathered below a set of examples
which illustrate all the “delicate” points. First, the substitutions:
sub(a=-c,te(a,b)); ==>
      b
 te
   c
sub(a=-1,te(a,b)); ==>
      b
 te
   1
sub(a=-0,te(a,b)); ==>
    0 b
 te
% sub has replaced -0 by 0. wrong!
sub(a=-!0,te(a,b)); ==>
      b
 te
   0
% right
The substitution of an index by -0 is the only case where there is a problem.
The function SUB replaces -0 by 0 because, of course, it does not recognize 0 as an
index. Such a recognition is context dependent and would imply a modification of SUB
for this single exceptional case. Therefore, we have opted not to do so, and to write
the lower index 0 as -!0 instead of -0.
Second, the assignments. Here, we advise the user to rely on the operator ==
instead of the operator :=. Again, the reason is to avoid the problem raised above
in the case of substitutions. := does not evaluate its left-hand side, so that -0 is not
recognized as an index and is simplified to 0, while the == operator evaluates both
its left and right-hand sides and does recognize it. The disadvantage of == is that
it demands that a second assignment on a given component be made only after
having explicitly suppressed the first assignment. This is done by the function
REM_VALUE_TENS, which can be applied to any component. We stress, however,
that if one is willing to use -!0 instead of -0 as the lower 0 index, the use of := is
perfectly legitimate:
te({x,y},a,-0)==x*y*te(a,-0); ==>
       a
 x*y*te
       0

te({x,y},a,-0); ==>
       a
 x*y*te
       0

te({x,y},a,0); ==>
    a 0
 te(x,y)

(See the ASSIST documentation for the description of ==.)

te({x,y},a,-0)==x*y*te(a,-0); ==>
          a
 ***** te(x,y)*x*y invalid as setvalue kernel
          0

rem_value_tens te({x,y},a,-0);

te({x,y},a,-0); ==>
    a
 te(x,y)
    0

te({x,y},a,-0)==(x+y)*te(a,-0); ==>
    a
 te   *(x + y)
    0

In the elementary application below, the use of a tensor avoids the introduction of
two different operators and makes the calculation more readable.
te(1)==sin th * cos phi; ==> cos(phi)*sin(th)
te(-1)==sin th * cos phi; ==> cos(phi)*sin(th)
te(2)==sin th * sin phi; ==> sin(phi)*sin(th)
te(-2)==sin th * sin phi; ==> sin(phi)*sin(th)
te(3)==cos th ; ==> cos(th)
te(-3)==cos th ; ==> cos(th)
for i:=1:3 sum te(i)*te(-i); ==>
 cos(phi)^2*sin(th)^2 + cos(th)^2 + sin(phi)^2*sin(th)^2

rem_value_tens te;
te(2); ==>
    2
 te
There is no difference in the manipulation of numeric indices and numeric tensor
indices. The function REM_VALUE_TENS, when applied to a tensor prefix, suppresses the values of all its components. Finally, there is no “interference” between i as
a dummy index and i as a numeric index in a loop.
Third, rewriting rules. They are either global or local and can be used as usual in REDUCE. Again, the -0 index problem arises each time a substitution by the
index -0 must be made in a template.
% LET:
let te({x,y},-0)=x*y;
te({x,y},-0); ==> x*y

te({x,y},+0); ==>
    0
 te(x,y)

te({x,u},-0); ==>
 te(x,u)
    0
for all x,a let te({x},a,-b)=x*te(a,-b);
te({u},1,-b); ==>
       1
 u*te
       b

te({u},c,-b); ==>
       c
 u*te
       b



te({u},b,-b); ==>


te({u},a,-a); ==>


for all x,a clear te({x},a,-b);
te({u},c,-b); ==>
    c
 te(u)
    b
for all a,b let te({x},a,-b)=x*te(a,-b);
te({x},c,-b); ==>
       c
 x*te
       b
te({x},a,-a); ==>


% The index -0 problem:
te({x},a,-0); ==>
    a
 te(x)
    0
% -0 becomes +0 in the template,
% so the rule does not apply.

te({x},0,-!0); ==>
       0
 x*te
       0
% here it applies.

rul:={te(~a) => sin a}; ==>
           ~a
 rul := {te   => sin(a)}

te(1) where rul; ==> sin(1)

te(1); ==>
    1
 te
% with variables:
rul1:={te(~a,{~x,~y}) => x*y*sin(a)}; ==>
            ~a
 rul1 := {te  (~x,~y) => x*y*sin(a)}

te(a,{x,y}) where rul1; ==> sin(a)*x*y

te({x,y},a) where rul1; ==> sin(a)*x*y

rul2:={te(-~a,{~x,~y}) => x*y*sin(-a)}; ==>
 rul2 := {te   (~x,~y) => x*y*sin(-a)}
            ~a

te(-a,{x,y}) where rul2; ==> -sin(a)*x*y

te({x,y},-a) where rul2; ==> -sin(a)*x*y
Notice that the position of the list of variables inside the rule may be chosen at will:
it is an irrelevant feature of the template. This may be confusing, so we advise
writing the rules not as above but placing the list of variables in front of all indices,
since it is in that canonical form that individual tensors are written by the simplification
function.
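Following this advice, a rule with the behaviour of rul1 above would rather be entered with the variable list placed first; since the position of the list is irrelevant in the template, the effect is assumed identical:

rul1:={te({~x,~y},~a) => x*y*sin(a)};
te({x,y},a) where rul1; ==> sin(a)*x*y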
Behaviour under space specifications
The characteristics and behaviour of the generic tensors described up to now are
independent of all space specifications. They are complete as long as we confine
ourselves to the default space which is active when starting CANTENS. However, as soon as
some space specification is introduced, it has consequences for the generic
tensor properties. This is true both when ONESPACE is ON and when it is OFF. Here
we shall describe how to deal with these features.
When onespace is ON and the space dimension is set to an integer, numeric indices of any generic tensor are forced to be less than or equal to that integer if the signature is 0, and strictly less than that integer if the signature is equal to 1. The following
illustrates what happens:
on onespace;
wholespace_dim 4; ==> 4
signature 0; ==> 0
te(3,a,-b,7); ==> ***** numeric indices out of range
te(3,a,-b,3); ==>
    3 a   3
 te
        b

te(4,a,-b,4); ==>
    4 a   4
 te
        b

te(5,a,-b,4); ==> ***** numeric indices out of range

signature 1; ==> 1
% Now indices range from 0 to 3:
te(4,a,-b,4); ==> ***** numeric indices out of range

te(0,a,-b,3); ==>
    0 a   3
 te
        b
When onespace is OFF, many more possibilities to control the input or to give
specific properties to tensors are open. For instance, it is possible to declare that
a tensor belongs to one of the defined spaces. It is also possible to declare that some indices
belong to one of them. This can even be done for numeric indices, thanks
to the declaration indexrange optionally included in the space definition generated
by DEFINE_SPACES. First, when onespace is OFF, the run equivalent to the
previous one looks like the following:
off onespace;
define_spaces wholespace={6,signature=1}; ==> t
show_spaces(); ==> {{wholespace,6,signature=1}}



make_tensor_belong_space(te,wholespace); ==> wholespace

te(4,a,-b,6); ==>
***** numeric indices out of range

te(4,a,-b,5); ==>
    4 a   5
 te
        b
rem_spaces wholespace;
define_spaces wholespace={4,euclidean}; ==> t
te(a,5,-b); ==> ***** numeric indices out of range
te(a,4,-b); ==>
    a 4
 te
        b
define_spaces eucl={1,signature=0}; ==> t
show_spaces(); ==>
make_tensor_belong_space(te,eucl); ==> eucl
te(1); ==>
    1
 te
te(2); ==> ***** numeric indices out of range

te(0); ==>
    0
 te
In the run, the new function MAKE_TENSOR_BELONG_SPACE has been used.
One may be surprised that te(0) is allowed at the end of the previous run; indeed, it is incorrect that the system allows two different components of te in a one-dimensional space. This
is due to an incomplete definition of the space. When one deals with spaces of integer dimension, if one wants to control numeric indices correctly when onespace
is switched off, one must also give the indexrange. So the previous run must be corrected to
define_spaces eucl=
{1,signature=0,indexrange=1 .. 1}; ==> t
make_tensor_belong_space(te,eucl); ==> eucl
te(0); ==>
***** numeric indices do not belong to (sub)-space

te(1); ==>
    1
 te

te(2); ==>
***** numeric indices do not belong to (sub)-space

Notice that the error message has also changed accordingly. So, now, one can even
constrain the 0 component to belong to a euclidian space.
Let us go back to symbolic indices. By default, any symbolic index belongs
to the global space or to all defined partial spaces. In many cases, this is, of
course, not consistent. So, the possibility exists to declare that one or several
indices belong to a specific (sub-)space. To this end, one uses the function
MK_IDS_BELONG_SPACE. Its syntax is
mk_ids_belong_space({<id1>,...,<idn>}, <(sub-)space identifier>);



The function MK_IDS_BELONG_ANYSPACE, whose syntax is the same, does the
reverse operation.
Combined with the declaration MAKE_TENSOR_BELONG_SPACE, it allows one to
express all problems which involve tensors belonging to different spaces and to perform
the dummy summations correctly. One can also define a tensor which has a “bloc-diagonal” structure. All these features are illustrated in the next sections, which
describe specific tensors and the properties of the extended function CANONICAL.


Specific tensors

The means provided in the two previous subsections to handle generic tensors already allow one to construct any specific tensor one may need. That the package contains a certain number of them is already justified on the level of convenience. However, a more important justification is that some basic tensors are so universally and
frequently used that a careful programming of these improves considerably the robustness and the efficiency of most calculations. The choice of the set of specific
tensors is not clear-cut. We have tried to keep their number to a minimum, but experience may lead us to extend it without difficulty. So, up to now, the list of specific
tensors is:

delta tensor,
eta Minkowski tensor,
epsilon tensor,
del generalised delta tensor,
metric generic metric tensor.

It is important to realize that the typewriter font names in the list are keywords for
the corresponding tensors and do not necessarily correspond to their actual names.
Indeed, the choice of the names of particular tensors is left to the user. When
starting CANTENS, specific tensors are NOT available. They must be activated by
the user using the function MAKE_PARTIC_TENS, whose syntax is:
make_partic_tens(<tensor name>,<keyword>);
The name chosen may be the same as the keyword. As we shall see, it is never
needed to define more than one delta tensor but it is often needed to define
several epsilon tensors. Hereunder, we describe each of the above tensors especially their behaviour in a multi-space environment.
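For instance, the activations used in the sample runs of this section read:

make_partic_tens(delt,delta); ==> t
make_partic_tens(eps,epsilon); ==> t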

DELTA tensor
This is the simplest example of the bloc-diagonal tensors mentioned in the previous
section. It can also work in a space which is a direct product of two spaces. Therefore, one never needs to introduce more than one such tensor. If one is working
in a graphic environment, it is advantageous to choose the keyword as its name.
Here we choose DELT. We illustrate how it works when the switch onespace is
successively ON and OFF.
on onespace;
make_partic_tens(delt,delta); ==> t
delt(a,b); ==>
***** bad choice of indices for DELTA tensor

% order of upper and lower indices irrelevant:
delt(a,-b); ==>
      a
 delt
      b
delt(-b,a); ==>
      a
 delt
      b
delt(-a,b); ==>
      b
 delt
      a

wholespace_dim ?; ==> dim
delt(1,-5); ==> 0
% dummy summation done:
delt(-a,a); ==> dim

wholespace_dim 4; ==> 4
delt(1,-5); ==> ***** numeric indices out of range

wholespace_dim 3; ==> 3
delt(-a,a); ==> 3
There is a peculiarity of this tensor, viz. it can serve to represent the Dirac delta
function when it has no indices and an explicit variable dependency, as hereunder:
delt({x-y}); ==> delt(x-y)
Next we work in the context of several spaces:
off onespace;
define_spaces wholespace={5,signature=1}; ==> t
% we need to assign delt to wholespace when it exists:
make_tensor_belong_space(delt,wholespace); ==> wholespace
delt(a,-a); ==> 5
delt(0,-0); ==> 1
rem_spaces wholespace; ==> t
define_spaces wholespace={5,signature=0}; ==> t
delt(a,-a); ==> 5
delt(0,-a); ==>
***** bad value of indices for DELTA tensor

The checking of the consistency of the chosen indices is made in the same way as for
a generic tensor. In fact, all the previous functions which act on generic tensors may
also affect, in the same way, a specific tensor. For instance, it was compulsory to
tell explicitly that we want DELT to belong to wholespace; otherwise, DELT
would remain defined on the default space. In the next sample run, we display the
bloc-diagonal property of the delta tensor.
onespace ?; ==> no
rem_spaces wholespace; ==> t
define_spaces wholespace={10,signature=1}$
define_spaces d1={5,euclidian}$
define_spaces d2={2,euclidian}$

mk_ids_belong_space({a},d1); ==> t
mk_ids_belong_space({b},d2); ==> t
% c belongs to wholespace so:
delt(c,-b); ==> 0
delt(c,-c); ==> 10

delt(b,-b); ==> 2
delt(a,-a); ==> 5
% this is especially important:
delt(a,-b); ==> 0
The bloc-diagonal property of delt is made active under two conditions: the first
is that the system knows to which space it belongs; the second is that the indices must
be declared to belong to a specific space. To enforce the same property on a generic
tensor, we have to make the MAKE_BLOC_DIAGONAL declaration:


make_bloc_diagonal t1,t2, ...;

and to make it active, one proceeds as in the above run. Starting from a fresh
environment, the following sample run is illustrative:
off onespace;
define_spaces wholespace={6,signature=1}$
define_spaces mink={4,signature=1,indexrange=0 .. 3}$
define_spaces eucl={3,euclidian,indexrange=4 .. 6}$
tensor te;
make_tensor_belong_space(te,eucl); ==> eucl

% the key declaration:
make_bloc_diagonal te; ==> t

% bloc-diagonal property activation:
mk_ids_belong_space({a,b,c},eucl); ==> t
mk_ids_belong_space({m1,m2},mink); ==> t

te(a,b,m1); ==> 0
te(a,b,m2); ==> 0
% bloc-diagonal property suppression:
mk_ids_belong_anyspace a,b,c,m1,m2; ==> t

te(a,b,m2); ==>
    a b m2
 te

ETA Minkowski tensor
The use of MAKE_PARTIC_TENS with the keyword eta allows one to create a
diagonal Minkowski metric tensor in a one- or multi-space context, either with the
convention of high energy physicists or with the convention of astrophysicists. Any
eta-like tensor is assumed to work within a space of signature 1. Therefore, if the
space whose metric it is supposed to describe has signature 0, an error message
follows when one is working in an ON onespace context, and a warning when in an
OFF onespace context. Illustration:
on onespace;
make_partic_tens(et,eta); ==> t
signature 0; ==> 0
et(-b,-a); ==>
***** signature must be equal to 1 for ETA tensor

off onespace;
et(a,b); ==>
*** Warning: ETA tensor not properly assigned to a space
% it is then evaluated to zero:
0

on onespace;
signature 1; ==> 1
et(-b,-a); ==>
 et
    a b
Since et(a,-a) is evaluated to the corresponding delta tensor, one cannot
properly define an eta tensor without simultaneously introducing a delta
tensor. Otherwise one gets the following message:


et(a,-a); ==>
***** no name found for (delta)

So we need to issue, for instance,
make_partic_tens(delta,delta); ==> t
The value of its diagonal elements depends on the chosen global sign. The next
run illustrates this:
global_sign ?; ==> 1
et(0,0); ==> 1
et(3,3); ==> - 1

global_sign(-1); ==> -1

et(0,0); ==> - 1

et(3,3); ==> 1
The tensor is of course symmetric. Its indices are checked in the same way as for
a generic tensor. In a multi-space context, the eta tensor must belong to a well-defined
space of signature 1:
off onespace;
define_spaces wholespace={4,signature=1}$
et(a,-a); ==> 4
If the space to which et belongs is a subspace, one must also take care to give
a space identity to dummy indices which may appear inside it. In the following
run, the index a belongs to wholespace as long as the system is not told that it is a
dummy index of the space mink:
make_tensor_belong_anyspace et; ==> t
rem_spaces wholespace; ==> t
define_spaces wholespace={8,signature=1}; ==> t

define_spaces mink={5,signature=1}; ==> t
make_tensor_belong_space(et,mink); ==> mink
% a sits in wholespace:
et(a,-a); ==> 8
mk_ids_belong_space({a},mink); ==> t
% a sits in mink:
et(a,-a); ==> 5

EPSILON tensors
The epsilon tensor is the antisymmetric invariant tensor of the unitary group
transformations in n-dimensional complex space which are continuously connected
to the identity transformation. The number of its indices is always strictly equal
to the number of space dimensions. So, to each specific space is associated a
specific epsilon tensor. Its properties also depend on the signature of the
space. We describe how to define and manipulate it in the context of a unique space
and, next, in a multi-space context.
ONESPACE is ON The use of MAKE_PARTIC_TENS places it, by default, in
a euclidian space if the signature is 0 and in a Minkowski-type space if the signature is 1. For higher signatures it is not constructed. For a space of symbolic
dimension, the number of its indices is not constrained. When it appears inside
an expression, its indices normally all have the same variance, upper or lower. However, the
system allows for mixed positions of the indices. In that case, the output of the
system differs from the input only by placing all contravariant indices to
the left of the covariant ones.
make_partic_tens(eps,epsilon); ==> t
eps(a,d,b,-g,e,-f); ==>
       a d b e
 - eps
            g f
eps(a,d,b,-f,e,-f); ==> 0

% indices have all the same variance:
eps(-b,-a); ==>
 - eps
      a b

signature ?; ==> 0
eps(1,2,3,4); ==> 1
eps(-1,-2,-3,-4); ==> 1
wholespace_dim 3; ==> 3
eps(-1,-2,-3); ==> 1
eps(-1,-2,-3,-4); ==>
***** numeric indices out of range
eps(-1,-2,-3,-3); ==>
***** bad number of indices for (eps) tensor
eps(a,b); ==>
***** bad number of indices for (eps) tensor
eps(a,b,c); ==>
     a b c
 eps
eps(a,b,b); ==> 0

When the signature is equal to 1, it is known that there exist two conventions,
which are linked to the chosen value 1 or -1 of the (0, 1, . . . , n) component. So,
the system evaluates all components in terms of the (0, 1, . . . , n) upper index
component. It is left to the user to assign it to 1 or -1.

signature 1; ==> 1
eps(0,1,2); ==>
     0 1 2
 eps
eps(-0,-1,-2); ==>
     0 1 2
 eps
wholespace_dim 4; ==> 4
eps(0,1,2,3); ==>
     0 1 2 3
 eps
eps(-0,-1,-2,-3); ==>
       0 1 2 3
 - eps
% change of the global_sign convention:
global_sign(-1); ==> -1
wholespace_dim 3; ==> 3
% compare with second input:
eps(-0,-1,-2); ==>
       0 1 2
 - eps
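One conceivable way to fix the convention is to assign the reference component explicitly, for instance with the == operator introduced earlier (a hypothetical sketch; whether 1 or -1 is chosen is entirely up to the user):

eps(0,1,2)==1;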

ONESPACE is OFF As already said, several epsilon tensors may be defined.
They must be assigned to a well-defined (sub-)space, otherwise the simplifying function CANONICAL will not work properly. The set of defined epsilon tensors, associated with their space-names, may be retrieved using the function
SHOW_EPSILONS. An important word of caution here: the output of this function
does NOT show the epsilon tensor one may have defined in the ON onespace
context. This is so because the default space has NO name. Starting from a fresh
environment, the following run illustrates this point:
show_epsilons(); ==> {}
onespace ?; ==> yes
make_partic_tens(eps,epsilon); ==> t
show_epsilons(); ==> {}
To make the epsilon tensor defined in the single space environment visible in
the multi-space environment, one needs to associate it to a space. For example:
off onespace;
define_spaces wholespace={7,signature=1}; ==> t
show_epsilons(); ==> {}

% still invisible

make_tensor_belong_space(eps,wholespace); ==> wholespace
show_epsilons(); ==> {{eps,wholespace}}
Next, let us define an additional epsilon-type tensor:
define_spaces eucl={3,euclidian}; ==> t
make_partic_tens(ep,epsilon); ==>
*** Warning: ep MUST belong to a space
make_tensor_belong_space(ep,eucl); ==> eucl

show_epsilons(); ==> {{ep,eucl},{eps,wholespace}}
% We show that it is indeed working inside eucl:
ep(-1,-2,-3); ==> 1


ep(1,2,3); ==> 1
ep(a,b,c,d); ==>
***** bad number of indices for (ep) tensor
ep(1,2,4); ==>
***** numeric indices out of range
As previously, the discrimination between symbolic indices may be introduced by
assigning them to one or another space:
rem_spaces wholespace;
define_spaces wholespace={dim,signature=1}; ==> t
mk_ids_belong_space({e1,e2,e3},eucl); ==> t
mk_ids_belong_space({a,b,c},wholespace); ==> t
ep(e1,e2,e3); ==>
    e1 e2 e3
 ep
% accepted

ep(e1,e2,z); ==>
    e1 e2 z
 ep
% accepted because z is
% not attached to a space.

ep(e1,e2,a); ==>
***** some indices are not in the space of ep

eps(a,b,c); ==>
     a b c
 eps
% accepted because of the *symbolic*
% space dimension.


epsilon-like tensors can also be defined on disjoint spaces. The subsequent
sample run starts from the environment of the previous one. It suppresses the space
wholespace as well as the space-assignment of the indices a,b,c. It defines
the new space mink. Next, the previously defined eps tensor is attached to this
space. ep remains unchanged and e1,e2,e3 still belong to the space eucl.
rem_spaces wholespace; ==> t
make_tensor_belong_anyspace eps; ==> t
show_epsilons(); ==> {{ep,eucl}}
show_spaces(); ==> {{eucl,3,signature=0}}
mk_ids_belong_anyspace a,b,c; ==> t
define_spaces mink={4,signature=1}; ==> t
show_spaces(); ==> {{eucl,3,signature=0},{mink,4,signature=1}}
make_tensor_belong_space(eps,mink); ==> mink
show_epsilons(); ==> {{eps,mink},{ep,eucl}}
eps(a,b,c,d); ==>
     a b c d
  eps
eps(e1,b,c,d); ==>
***** some indices are not in the space of eps
ep(e1,b,c,d); ==>
***** bad number of indices for ep
ep(e1,b,c); ==>
    b c e1
  ep
ep(e1,e2,e3); ==>
    e1 e2 e3
  ep
DEL generalized delta tensor
The generalized delta function comes from the contraction of two epsilons. It is
totally antisymmetric. Suppose its name has been chosen to be gd, that the space
to which it is attached has dimension n, and that the name of the chosen delta
tensor is delta; then one can define gd as the n x n determinant

  gd(a1,...,an,-b1,...,-bn) = det M,   where  M(i,j) = delta(ai,-bj),

i.e. the determinant whose first row is delta(a1,-b1) ... delta(a1,-bn) and
whose last row is delta(an,-b1) ... delta(an,-bn).
It is, in general, uneconomical to write that determinant explicitly, except for
particular numeric values of the indices or when almost all upper and lower indices
are recognized as dummy indices. In the sample run below, gd is defined as the
generalized delta function in the default space. The main automatic evaluations are
illustrated. The indices which are summed over are always simplified:
onespace ?; ==> yes
make_partic_tens(delta,delta); ==> t
make_partic_tens(gd,del); ==> t
% immediate simplifications:
gd(1,2,-3,-4); ==> 0
gd(1,2,-1,-2); ==> 1
gd(1,2,-2,-1); ==> -1                      % antisymmetric
gd(a,b,-a,-b); ==> dim*(dim - 1)           % summed over dummy indices
gd(a,b,c,-a,-d,-e); ==>
    b c
  gd    *(dim - 2)
    d e
gd(a,b,c,-a,-d,-c); ==>
    b
  gd  *(dim^2 - 3*dim + 2)
    d
% no simplification:
gd(a,b,c,-d,-e,-f); ==>
    a b c
  gd
    d e f
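The determinant definition can be checked outside REDUCE. The following Python sketch (a plain illustration of the mathematics, not CANTENS code) builds the generalized delta as a determinant of ordinary Kronecker deltas and reproduces the evaluations shown above, including the full contraction dim*(dim - 1):

```python
from itertools import permutations, product

def delta(a, b):
    # ordinary Kronecker delta
    return 1 if a == b else 0

def parity(perm):
    # sign of a permutation, by counting inversions
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def gdelta(upper, lower):
    # generalized delta = det of the matrix delta(upper[i], lower[j]),
    # expanded over permutations (Leibniz formula)
    n = len(upper)
    total = 0
    for p in permutations(range(n)):
        term = parity(p)
        for i in range(n):
            term *= delta(upper[i], lower[p[i]])
        total += term
    return total

print(gdelta((1, 2), (1, 2)))   # 1
print(gdelta((1, 2), (2, 1)))   # -1   (antisymmetric)
print(gdelta((1, 2), (3, 4)))   # 0

dim = 5                          # full contraction gives dim*(dim - 1)
print(sum(gdelta((a, b), (a, b))
          for a, b in product(range(dim), repeat=2)))   # 20
```

The same code evaluates gd for any number of index pairs, at the cost of the factorial growth that makes the explicit determinant "uneconomical" in the symbolic case.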
One can force evaluation in terms of the determinant in all cases. To this end, the
switch EXDELT is provided. It is initially OFF. Switching it ON will most often
give inconveniently large outputs:
on exdelt;
gd(a,b,c,-d,-e,-f); ==>
   delta(a,-d)*delta(b,-e)*delta(c,-f) - delta(a,-d)*delta(b,-f)*delta(c,-e)
 - delta(a,-e)*delta(b,-d)*delta(c,-f) + delta(a,-e)*delta(b,-f)*delta(c,-d)
 + delta(a,-f)*delta(b,-d)*delta(c,-e) - delta(a,-f)*delta(b,-e)*delta(c,-d)
In a multi-space environment, it is never necessary to define several such tensors.
The reason is that CANONICAL always generates it from the contraction of a pair of
epsilon-like tensors. Therefore the control of the indices has already been done,
and the space dimension in which del is working is also well defined.


METRIC tensors
Very often, one has to define a specific metric. Metric-type tensors include all
the generic properties: the first is their symmetry, the second is their equality
to the delta tensor when they carry mixed indices, the third is their optional
block-diagonality. A generic metric tensor is generated by MAKE_PARTIC_TENS with
the keyword metric. By default, when one is working in a multi-space environment,
it is defined in wholespace. One uses the usual means of REDUCE to give it
specific values. In particular, the metric 'delta' tensor of the Euclidean space
can be defined that way. Implicit or explicit dependences on variables are
allowed. Here is an illustration in the single space environment:
make_partic_tens(g,metric); ==> t
make_partic_tens(delt,delta); ==> t
onespace ?; ==> yes
g(a,b); ==>
   a b
  g
g(b,a); ==>
   a b
  g
g(a,b,c); ==>
***** bad choice of indices for a METRIC tensor
g(a,b,{x,y}); ==>
   a b
  g   (x,y)
g(a,-b,{x,z}); ==>
      a
  delt
      b
let g({x,y},1,1)=1/2*(x+y);
g({x,y},1,1); ==>
 x + y
-------
   2
rem_value_tens g({x,y},1,1);
g({x,y},1,1); ==>
   1 1
  g   (x,y)
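Numerically, the second generic property (mixed indices reduce to the delta tensor) simply says that the matrices of contravariant and covariant components are inverse to each other. A small Python illustration (ours, not CANTENS code), using a concrete 2 x 2 symmetric metric with determinant 1:

```python
# A concrete symmetric metric g_{ab}, its inverse g^{ab}, and the check
# that the mixed components g^a_c = g^{ab} g_{bc} equal delta^a_c.
g_lower = [[2, 1],
           [1, 1]]          # symmetric, det = 1

# inverse of a 2x2 matrix [[a,b],[c,d]] with det 1 is [[d,-b],[-c,a]]
g_upper = [[1, -1],
           [-1, 2]]

mixed = [[sum(g_upper[a][b] * g_lower[b][c] for b in range(2))
          for c in range(2)] for a in range(2)]
print(mixed)   # [[1, 0], [0, 1]], i.e. the identity (delta) matrix
```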


The simplification function CANONICAL

Tensor expressions
Up to now, we have described the behaviour of individual tensors and how they
simplify themselves whenever possible. However, this is far from being sufficient.
In general, one has to deal with objects which involve several tensors together
with various dummy summations between them. We define a tensor expression as an
arbitrary multivariate polynomial. The indeterminates of such a polynomial may
be either an indexed object, an operator, a variable or a rational number. A
tensor-type indeterminate cannot appear to a degree larger than one, except if
it is a trace. The following is a tensor expression:

aa:= delt({x - y})*delt(a, - g)*delt(d, - g)*delt(g, -r)
*eps( - d, - e, - f)*eps(a,b,c)*op(x,y) + 1; ==>

aa := delt(x - y)*delt(a,-g)*delt(d,-g)*delt(g,-r)
      *eps(-d,-e,-f)*eps(a,b,c)*op(x,y) + 1

In the above expression, delt and eps are, respectively, the delta and the
epsilon tensors, op is an operator, and delt(x-y) is the Dirac delta function.
Notice that the above expression is not coherent, since the first term has a
variance while the second term is a scalar. Moreover, the dummy index g appears
three times in the first term. In fact, on input, each factor is simplified and
each factor is checked for coherence, but no more. Therefore, if a dummy
summation appears inside one factor, it will be done whenever possible. Below,
delt(a,-a) is summed over:
sub(g=a,aa); ==>
delt(x - y)*delt(d,-a)*delt(a,-r)*eps(-d,-e,-f)
*eps(a,b,c)*op(x,y)*dim + 1
The use of CANONICAL
CANONICAL is an offspring of the function with the same name in the package
DUMMY. It applies to tensor expressions as defined above. When it acts, this
function has several features which are worth realising:
1. It tracks the free indices in each term and checks their identity. It
identifies and verifies the coherence of the various dummy index summations.
2. Dummy index summations are done on tensor products whenever possible,
since it recognises the particular tensors defined above or defined by the user.
3. It seeks a canonical form for the various simplified terms and compares
them. In that way it maximises simplifications and generates a canonical form
for the output polynomial.
Its capabilities have been extended in four directions:
• It is able to work within several spaces.


• It correctly manages expressions in which formal tensor derivatives are present.
• It takes into account all symmetries, even partial ones.
• Like its parent function, it can deal with non-commutative and anticommutative indexed objects; so indexed objects may be spinors or quantum fields.

We describe most of these features in the rest of this documentation.
Check of tensor indices
Dummy indices for individual tensors are kept in the memory of the system. If
they are badly distributed over several tensors, it is CANONICAL which gives an
error message:
tensor te,tf; ==> t
bb:=te(a,b,b)*te(-b); ==>
         a b b
bb := te      *te
                  b
canonical bb; ==>
***** ((b)(a b b)) are inconsistent lists of indices

aa:=te(b,-c)*tf(b,-c); ==>       % b and c are free.
         b      b
aa := te   *tf
           c      c
canonical aa; ==>
    b      b
  te  *tf                        % In DUMMY, such free indices
      c      c                   % are not taken into account.

delt(a,-a); ==> dim              % a is now a dummy index

canonical bb; ==>
***** wrong use of indices (a)
The message of CANONICAL is clear: the first sublist contains the list of all
lower indices and the second one the list of all upper indices; the index b is
repeated three times. In the second example, b and c are considered as free
indices, so they may be repeated. The last example shows the interference
between the check on individual tensors and the one done by CANONICAL: once a
is used as a dummy index inside delt, it can no longer be used as a free index
in the expression bb. To make it usable again, one must explicitly remove it
from the dummy indices using REM_DUMMY_INDICES. In the fourth case there is no
problem, since b and c are both free indices. CANONICAL also checks that in a
tensor polynomial all terms possess the same variance:
aa:=te(a,c)+x^2; ==>
         a c    2
aa := te     + x
canonical aa; ==>
***** scalar added with tensor(s)
aa:=te(a,b)+tf(a,c); ==>
         a b      a c
aa := te     + tf
canonical aa; ==>
***** mismatch in free indices :  ((a c) (a b))



In the message, the first two lists of incompatible indices are explicitly
indicated. So it is not an exhaustive message, and a more complete correction
may be needed. Of course, no message of that kind appears if the indices are
inside ordinary operators:
dummy_names b; ==> t
cc:=op(b)*op(a,b,b); ==> cc := op(a,b,b)*op(b)
canonical cc; ==> op(a,b,b)*op(b)
clear_dummy_names; ==> t
Position and renaming of dummy indices
For a specific tensor, contravariant dummy indices are placed in front of
covariant ones. This already leads to some useful simplifications. For instance:
pp:=te(a,-a)+te(-a,a)+1; ==>
         a       a
pp := te   + te    + 1
           a       a
canonical pp; ==>
      a
 2*te   + 1
        a
% (This placement is already the case inside the DUMMY package.)

pp:=te(a,-a)+te(-b,b); ==>
         a       b
pp := te   + te
           a       b
canonical pp; ==>
      a
 2*te
        a

pp:=te(r,a,c,d,-a,f)+te(r,-b,c,d,b,f); ==>
         r a c d   f      r   c d b f
pp := te            + te
                 a          b
canonical pp; ==>
        r a c d   f
   2*te
                a
In the second and third examples, there is also a renaming of the dummy
variable b, which becomes a. There is a loophole at this point: for some
expressions one will never reach a stable expression. This is the case for the
following very simple example:
tensor nt; ==> t
a1:=nt(-a,d)*nt(-c,a); ==>
          d      a
a1 := nt    *nt
        a      c
a12:=a1-canonical a1;      % a12 is not zero
canonical a12;             % gives -a12: it merely changes the sign.
In the above example, no canonical form can be reached: when applied twice to
the tensor monomial a1, CANONICAL gives back a1!
No change of dummy index position is allowed if a tensor belongs to an AFFINE
space. With the tensor polynomial pp introduced above, one has:
off onespace;
define_spaces aff={dd,affine}; ==> t
make_tensor_belong_space(te,aff); ==> aff
mk_ids_belong_space({a,b},aff); ==> t
canonical pp; ==>
    r a c d   f      r   c d a f
  te           + te
            a          a
The renaming of b has, however, been made.
Contractions and summations with particular tensors
This is a central part of the extension of CANONICAL. The required contractions
and summations can be done in a multi-space environment as well as in a
single-space one.
The case of DELTA
Dummy indices are recognized, contracted and summed over whenever possible:
aa:=delt(a,-b)*delt(b,-c)*delt(c,-a) + 1; ==>
          a        b        c
aa := delt  *delt   *delt    + 1
            b        c        a
canonical aa; ==> dim + 1
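The chain contraction can be checked numerically for any concrete dimension. A quick Python sketch (an illustration, not CANTENS code), with dim = 4:

```python
# delta chain: sum over a,b,c of delta(a,b)*delta(b,c)*delta(c,a)
# collapses to a single trace, i.e. the dimension.
dim = 4
delta = lambda i, j: 1 if i == j else 0
s = sum(delta(a, b) * delta(b, c) * delta(c, a)
        for a in range(dim) for b in range(dim) for c in range(dim))
print(s + 1)   # dim + 1 = 5
```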
CANONICAL will not attempt to make contractions with dummy indices included
inside ordinary operators:
operator op;
aa:=delt(a,-e)*op(b,b)$
canonical aa; ==>
     a
 delt  *op(b,b)
       e
dummy_names b; ==> t
canonical aa; ==>
     a
 delt  *op(b,b)
       e
The case of ETA
First, we introduce ETA:
make_partic_tens(eta,eta); ==> t
signature 1; ==> 1               % necessary
aa:=delt(a,-b)*eta(b,c); ==>
          a     b c
aa := delt *eta
          b
canonical aa; ==>
    a c
 eta
canonical(eta(a,b)*eta(-b,c)); ==>
    a c
 eta
canonical(eta(a,b)*eta(-b,-c)); ==>
     a
 delt
     c
canonical(eta(a,b)*eta(-b,-a)); ==> dim
canonical (eta(-a,-b)*te(d,-e,f,b)); ==>
    d   f
  te
      e   a
aa:=eta(a,b)*eta(-b,-c)*te(-a,c)+1; ==>
         a b              c
aa := eta   *eta    *te     + 1
                b c    a
canonical aa; ==>
    a
  te   + 1
      a
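For a concrete signature-1 metric, these eta rules are easy to verify numerically. A Python sketch (our illustration, with eta taken as diag(1,-1,-1,-1) in dimension 4):

```python
# eta = diag(1,-1,...,-1): raising then lowering an index gives delta,
# and the full contraction eta^{ab} eta_{ba} gives dim.
dim = 4
eta = [[(1 if a == 0 else -1) if a == b else 0 for b in range(dim)]
       for a in range(dim)]
# for this diagonal eta, the upper- and lower-index components coincide
mixed = [[sum(eta[a][b] * eta[b][c] for b in range(dim))
          for c in range(dim)] for a in range(dim)]
identity = [[1 if a == c else 0 for c in range(dim)] for a in range(dim)]
print(mixed == identity)   # True
trace = sum(eta[a][b] * eta[b][a] for a in range(dim) for b in range(dim))
print(trace)   # 4, i.e. dim
```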

Let us add a generic metric tensor:
aa:=g(a,b)*g(-b,-d); ==>
        a b
aa := g    *g
             b d
canonical aa; ==>
     a
 delt
     d
aa:=g(a,b)*g(c,d)*eta(-c,-e)*eta(e,f)*te(-f,g); ==>
              e f   a b   c d      g
aa := eta *eta   *g    *g    *te
        c e                    f
canonical aa; ==>
   a b   c d     g
  g    *g    *te
                  c
The case of EPSILON
The epsilon tensor plays an important role in many contexts. CANONICAL realises
the contraction of two epsilons if and only if they belong to the same space.
The proper use of CANONICAL on expressions which contain it requires a
preliminary definition of the tensor DEL. When the signature is 0, the
contraction of two epsilons gives a DEL-like tensor. When the signature is
equal to 1, it is equal to minus a DEL-like tensor. Here we choose 1 for the
signature and we work in a single space. We define the DEL tensor:


on onespace;
wholespace_dim dim; ==> dim
make_partic_tens(gd,del); ==> t
signature 1; ==> 1

We define the EPSILON tensor and show how CANONICAL contracts expressions
containing two of them. (No contractions are done on expressions containing
three or more epsilons which sit in the same space; we are not sure whether it
would be useful to be more general.)
aa:=eps(a,b)*eps(-c,-d); ==>
         a b
aa := eps    *eps
                 c d
canonical aa; ==>
      a b
 - gd
      c d

aa:=eps(a,b)*eps(-a,-b); ==>
         a b
aa := eps    *eps
                 a b
canonical aa; ==> dim*( - dim + 1)
on exdelt;
gd(-a,-b,a,b); ==> dim*(dim - 1)
canonical aa; ==> dim*( - dim + 1)
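The two-index identity behind these contractions can be checked numerically in the Euclidean (signature 0) case, where no overall minus sign appears. A Python sketch (ours, not CANTENS code):

```python
from itertools import product

# Levi-Civita symbol: sign of the permutation, 0 on repeated indices
def eps(idx):
    n = len(idx)
    if len(set(idx)) != n:
        return 0
    sign = 1
    for i in range(n):
        for j in range(i + 1, n):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

dim = 2
delta = lambda a, b: 1 if a == b else 0
# eps^{ab} eps_{cd} = delta(a,c)*delta(b,d) - delta(a,d)*delta(b,c)
for a, b, c, d in product(range(dim), repeat=4):
    lhs = eps((a, b)) * eps((c, d))
    rhs = delta(a, c) * delta(b, d) - delta(a, d) * delta(b, c)
    assert lhs == rhs
# full contraction: dim*(dim - 1) terms of value 1
full = sum(eps((a, b)) * eps((a, b)) for a, b in product(range(dim), repeat=2))
print(full)   # 2 = dim*(dim - 1) for dim = 2
```

With signature 1, the manual's convention flips the sign of the contraction, which is why the REDUCE run above shows dim*( - dim + 1).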

Several expressions which contain the epsilon tensor together with other special
tensors are given below, as examples to treat with CANONICAL:
aa:=eps( - b, - c)*eta(a,b)*eta(a,c); ==>
         a b     a c
aa := eta    *eta    *eps
                         b c
canonical aa; ==> 0
aa:=eps(a,b,c)*te(-a)*te(-b); ==>        % te is generic.
         a b c
aa := eps      *te  *te
                  a    b
canonical aa; ==> 0
tensor tf,tg;
aa:=eps(a,b,c)*te(-a)*tf(-b)*tg(-c)
    + eps(d,e,f)*te(-d)*tf(-e)*tg(-f)$
canonical aa; ==>
        a b c
   2*eps     *te  *tf  *tg
                 a    b    c
aa:=-eps(a,b,c)*te(-a)*tf(-b)*tg(-c)
    + eps(d,e,f)*te(-d)*tf(-e)*tg(-f)$
canonical aa; ==> 0



Since CANONICAL is able to work inside several spaces, we can also introduce
several epsilons and make the relevant simplifications in each (sub)space. This
is the goal of the next illustration.
off onespace;
define_spaces wholespace={dim,signature=1}; ==> t
define_spaces subspace={3,signature=0}; ==> t
show_spaces(); ==> {{wholespace,dim,signature=1},{subspace,3,signature=0}}
make_partic_tens(eps,epsilon); ==> t
make_partic_tens(kap,epsilon); ==> t
make_tensor_belong_space(eps,wholespace); ==> wholespace
make_tensor_belong_space(kap,subspace); ==> subspace
show_epsilons(); ==> {{kap,subspace},{eps,wholespace}}
off exdelt;
aa:=eps(a,b,c)*eps(-d,-e,-f)*kap(m,i,j)*kap(-m,-k,-l)$
canonical aa; ==>
      a b c     i j
 - gd       *gd
      d e f     k l

If there are no index summations, as in the expression above, one can develop
both terms into the delta tensor with EXDELT switched ON. In fact, the previous
calculation is correct only if there is no dummy index inside the two gd's. If
some of the indices are dummy, then we must take care of the respective spaces
in which the two gd tensors are considered. Since the tensors themselves do not
belong to a given space, the space identification can only be made through the
indices. This is enough since the DELTA-like tensor is block-diagonal. With aa
the result of the above illustration, one gets, for example:
sub(d=a,e=b,k=i,aa); ==>
        c      j
  2*delt *delt  *( - dim^2 + 3*dim - 2)
        f      l
sub(k=i,l=j,aa); ==>
        a b c
  - 6*gd
        d e f
CANONICAL and symmetries
Most of the time, indexed objects have some symmetry property. When this
property is either full symmetry or full antisymmetry, there is no difficulty
in implementing it using the declarations SYMMETRIC or ANTISYMMETRIC of REDUCE.
However, most often, indexed objects are neither fully symmetric nor fully
antisymmetric: they have partial or mixed symmetries. In the DUMMY package,
the declaration SYMTREE allows one to impose such symmetries on operators. This
command has been improved and extended to apply to tensors. To illustrate it,
we shall take the example of the well-known Riemann tensor of general
relativity. Let us remind the reader that this tensor has four indices. It is
separately antisymmetric with respect to the interchange of the first two
indices and with respect to the interchange of the last two indices. It is
symmetric with respect to the interchange of the first pair and the last pair
of indices. In the illustration below, we show how to express this and how
CANONICAL is able to recognize mixed symmetries:


tensor r; ==> t
rem_dummy_indices a,b,c,d;       % free indices
symtree (r,{!+,{!-,1,2},{!-,3,4}});
ra:=r(b,a,c,d); ==>
        b a c d
ra := r
canonical ra; ==>
      a b c d
  - r
ra:=r(c,d,a,b); ==>
        c d a b
ra := r
canonical ra; ==>
    a b c d
  r
canonical r(-c,-d,a,b); ==>
   a b
  r
       c d
r(-c,-c,a,b); ==> 0
ra:=r(-c,-d,c,b); ==>
        c b
ra := r
      c d
canonical ra; ==>
      b c
  - r
        c d
In the last illustration, contravariant indices are placed in front of covariant
indices and the contravariant indices are transposed. The superposition of the
two partial symmetries gives a minus sign.
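The sign bookkeeping that CANONICAL performs for this mixed symmetry can be mimicked outside REDUCE. The following toy Python canonicalizer (ours, not the CANTENS algorithm) implements exactly the Riemann-type structure (antisymmetric inside each index pair, symmetric under exchange of the pairs) and reproduces the signs of the runs above:

```python
# Canonicalize four indices under the symmetry {!+,{!-,1,2},{!-,3,4}}:
# sort inside each antisymmetric pair (tracking the sign), then sort the
# two pairs, which are symmetric under exchange.  Returns (sign, indices).
def canonical_riemann(idx):
    sign = 1
    p1, p2 = list(idx[:2]), list(idx[2:])
    for p in (p1, p2):
        if p[0] == p[1]:
            return 0, tuple(idx)     # antisymmetric pair repeated: zero
        if p[0] > p[1]:
            p[0], p[1] = p[1], p[0]  # swap inside a pair: sign flips
            sign = -sign
    if p1 > p2:                      # exchange the pairs: no sign change
        p1, p2 = p2, p1
    return sign, tuple(p1 + p2)

print(canonical_riemann(('b', 'a', 'c', 'd')))  # (-1, ('a','b','c','d'))
print(canonical_riemann(('c', 'd', 'a', 'b')))  # (1, ('a','b','c','d'))
print(canonical_riemann(('c', 'c', 'a', 'b')))  # (0, ...): vanishes
```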
There exists an important (though natural) restriction on the use of SYMTREE
which is linked to the algorithm itself: the integers used to locate the indices
must start from 1, be contiguous and monotonically increasing. For instance,
one is not allowed to introduce
symtree (r,{!+,{!-,3,4},{!-,5,6}});
but the subsequent declarations are allowed:
symtree (r,{!-,1,2});
symtree (r,{!+,{!-,1,2},{!-,3,4,5}});
The first declaration endows r with a partial (anti)symmetry with respect to
the first two indices.
A side effect of SYMTREE is to restrict the number of indices of a generic
tensor. For instance, the second declaration in the above illustration makes r
depend on 5 indices, as illustrated below:
canonical r(-b,-a,d,c); ==>
***** Index ‘5’ out of range for
((minus b) (minus a) d c) in nth
canonical r(-b,-a,d,c,e); ==>
    d c e
  r               % correct
    a b
canonical r(-b,-a,d,c,e,g); ==>
    d c e
  r               % The sixth index is forgotten!
    a b
Finally, the function REMSYM applied to any tensor identifier removes all
symmetry properties.
Another related question is the frequent need to symmetrize a tensor
polynomial. To meet it, the function SYMMETRIZE of the package ASSIST has been
improved and generalised. For any kernel (which may be either an operator or a
tensor) that function generates
- the sum over the cyclic permutations of the indices,
- the symmetric or antisymmetric sums over all permutations of the indices.
Moreover, if it is given a list of indices, it generates a new list which
contains sublists with the relevant permutations of these indices:
symmetrize(te(x,y,z,{v}),te,cyclicpermlist); ==>
    x y z        y z x        z x y
  te     (v) + te     (v) + te     (v)

symmetrize(te(x,y),te,permutations); ==>
    x y     y x
  te    + te

symmetrize(te(x,y),te,permutations,perm_sign); ==>
    x y     y x
  te    - te

symmetrize(te(y,x),te,permutations,perm_sign); ==>
      x y     y x
  - te    + te
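The three SYMMETRIZE modes have simple plain-Python analogues (our illustrations, not the ASSIST code): a list of cyclic rotations, and a signed sum over all permutations where the sign is the parity of the permutation:

```python
from itertools import permutations

def cyclicpermlist(ids):
    # all cyclic rotations of the index list
    n = len(ids)
    return [tuple(ids[i:]) + tuple(ids[:i]) for i in range(n)]

def perm_parity(p, ref):
    # sign of the permutation carrying ref onto p (selection-sort swaps)
    p = list(p)
    sign = 1
    for i in range(len(p)):
        j = p.index(ref[i])
        if j != i:
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

ids = ('b', 'c', 'd', 'e')
print(cyclicpermlist(ids))
# [('b','c','d','e'), ('c','d','e','b'), ('d','e','b','c'), ('e','b','c','d')]

# signed (perm_sign) sum over all permutations of ('x','y'):
terms = [(perm_parity(p, ('x', 'y')), p) for p in permutations(('x', 'y'))]
print(terms)   # [(1, ('x', 'y')), (-1, ('y', 'x'))]
```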

If one wants to symmetrise an expression which is not a kernel, one can also use
SYMMETRIZE to obtain the desired result, as the next example shows:
ex:=te(a,-b,c)*te1(-a,-d,-e); ==>
        a   c
ex := te     *te1
          b       a d e
ll:=list(b,c,d,e)$ % the chosen relevant indices
lls:=symmetrize(ll,list,cyclicpermlist); ==>
lls := {{b,c,d,e},{c,d,e,b},{d,e,b,c},{e,b,c,d}}
% The sum over the cyclic permutations is obtained by substituting,
% in ex, the indices of each sublist of lls in turn:
excyc:=for each i in lls sum ...

excyc := te(a,-b,c)*te1(-a,-d,-e) + te(a,-c,d)*te1(-a,-e,-b)
       + te(a,-d,e)*te1(-a,-b,-c) + te(a,-e,b)*te1(-a,-c,-d)
CANONICAL and tensor derivatives
Only ordinary (partial) derivatives are fully correctly handled by CANONICAL.
This is enough to explicitly construct covariant derivatives. We recognize here
that extensions should still be made. The subsequent illustrations show how
CANONICAL does indeed manage to find the canonical form and simplify
expressions which contain derivatives. Notice the use of the (modified) DEPEND
declaration:
on onespace;
tensor te,x; ==> t
depend te,x;
make_partic_tens(eta,eta); ==> t
signature 1; ==> 1
aa:=df(te(-a,-b),x(-d))*eta(b,d); ==>
                        b d
aa := df(te    ,x )*eta
           a b   d
canonical aa; ==>
            b
  df(te   ,x )
       a b

In the last example, after contraction, a covariant dummy index has been changed
into a contravariant one. This is allowed since the space is metric.



CDE: A package for integrability of PDEs

Author: Raffaele Vitolo
We describe CDE, a REDUCE package devoted to differential-geometric
computations on Differential Equations (DEs, for short).
We will give concrete recipes for computations in the geometry of differential
equations: higher symmetries, conservation laws, Hamiltonian operators and their
Schouten bracket, recursion operators. All programs discussed here are shipped
together with the CDE sources, inside the REDUCE sources. The mathematical
theory on which the computations are based can be found in refs. [2, 12]. We
invite the interested reader to have a look at the website [1], which contains
useful resources in the above mathematical area. A book on integrable systems
and CDE is currently being written [17], with more examples and more detailed
explanations about the mathematical part.


Introduction: why CDE?

CDE is a REDUCE package for differential-geometric computations for DEs. The
package aims at defining differential operators in total derivatives and computing
with them. Such operators are called C-differential operators (see [2]).
CDE depends on the REDUCE package CDIFF for constructing total derivatives.
CDIFF was developed by Gragert and Kersten for symmetry computations in DEs,
and later extended by Roelofs and Post.
There are many software packages that can compute symmetries and conservation
laws; many of them run on Mathematica or Maple. Those which run on REDUCE were
written by M.C. Nucci [22, 23], F. Oliveri (ReLie, [24]), F. Schwartz (SPDE, in
the official REDUCE distribution) and T. Wolf (APPLYSYM and CONLAW, in the
official REDUCE distribution, [28, 29, 30, 31]).
The development of CDE started from the idea that a computer algebra tool for
the investigation of integrability-related structures of PDEs still does not exist in
the public domain. We are only aware of a Mathematica package that may find
recursion operators under quite restrictive hypotheses [3].
CDE is especially designed for computations of integrability-related structures
(such as Hamiltonian, symplectic and recursion operators) for systems of differential equations with an arbitrary number of independent or dependent variables.
On the other hand CDE is also capable of (generalized) symmetry and conservation
laws computations. The aim of this guide is to introduce the reader to computations
of integrability related structures using CDE.
The current version of CDE, 2.0, has the following features:



1. It is able to do standard computations in integrable systems like determining
systems for generalized symmetries and conservation laws. However, CDE
has not been programmed with this purpose in mind.
2. CDE is able to compute linear overdetermined systems of partial differential
equations whose solutions are Hamiltonian, symplectic or recursion operators. Such equations may be solved by different techniques; one of the possibilities is to use CRACK, a REDUCE package for solving overdetermined
systems of PDEs [32].
3. CDE can compute linearization (or Fréchet derivatives) of vector functions
and adjoints of differential operators.
4. CDE is able to compute Schouten brackets between multivectors. This can
be used, e.g., to check the Hamiltonian property of an operator or to check the
compatibility of two Hamiltonian operators.
At the moment the papers [8, 9, 14, 16, 26, 27] have been written using CDE, and
more research with CDE on integrable systems is in progress.
The readers are warmly invited to send questions, comments, etc., both on the
computations and on the technical aspects of installation and configuration of REDUCE, to the author of this document.
Acknowledgements. I’d like to thank Paul H.M. Kersten, who explained to me
how to use the original CDIFF package for several computations of interest in the
Geometry of Differential Equations. When I started writing CDE I was substantially helped by A.C. Norman in understanding many features of Reduce which
were deeply hidden in the source code and not well documented. This also led to
writing a manual of Reduce’s internals for programmers [21]. Moreover, I’d like
to thank the developers of the REDUCE mailing list for their prompt replies with
solutions to my problems. On the mathematical side, I would like to thank J.S.
Krasil’shchik and A.M. Verbovetsky for constant support and stimulating discussions which led me to write the software. Thanks are also due to B.A. Dubrovin,
M. Casati, E.V. Ferapontov, P. Lorenzoni, M. Marvan, V. Novikov, A. Savoldi, A.
Sergyeyev, M.V. Pavlov for many interesting discussions.


Jet space of even and odd variables, and total derivatives

The mathematical theory for jets of even (i.e. standard) variables and total derivatives can be found in [2, 25].
Let us consider the space Rn × Rm , with coordinates (xλ , ui ), 1 ≤ λ ≤ n, 1 ≤ i ≤
m. We say xλ to be independent variables and ui to be dependent variables. Let
us introduce the jet space J r (n, m). This is the space with coordinates (xλ , uiσ ),

where uiσ is defined as follows. If s : Rn → Rm is a differentiable function, then

   uiσ ◦ s (x) = ∂|σ| (ui ◦ s) / ((∂x1)σ1 · · · (∂xn)σn) (x).

Here σ = (σ1 , . . . , σn ) ∈ Nn is a multiindex. We set |σ| = σ1 + · · · + σn . If
σ = (0, . . . , 0), we set uiσ = ui .
CDE is first of all a program which is able to create a finite order jet space inside
REDUCE. To this aim, issue the command
load_package cde;
Then, CDE needs to know the variables and the maximal order of derivatives. The
input can be organized as in the following example:
• indep_var is the list of independent variables;
• dep_var is the list of dependent variables;
• total_order is the maximal order of derivatives.
Two more parameters can be set for convenience: the name of the output file for
recording the internal state of the program (and for debugging purposes), and
the name of the file containing the results of the computation.
The main routine is called as follows:
Here the two empty lists are placeholders; they are of interest for computations with
odd variables/differential equations. The function cde defines derivative symbols
of the type:



Note that a symbol like v_tx does not exist in the jet space: only one ordering
of the independent variables in the indices is allowed. Indeed, introducing all
possible permutations of independent variables in indices would increase the
complexity and slow down every computation.
Two lists generated by CDE can be useful: all_der_id and all_odd_id,
which are, respectively, the lists of identifiers of all even and odd variables.
Other lists are generated by CDE, but they are accessible in REDUCE symbolic
mode only. Please check the file global.txt to know the names of the lists.
It can be useful to inspect the output generated by the function cde and the above
lists in particular. All that data can be saved by the function:
CDE has a few procedures involving the jet space, namely:
• jet_fiber_dim(jorder) returns the number of derivative coordinates
uiσ with |σ| equal to jorder;
• jet_dim(jorder) returns the number of derivative coordinates uiσ with
0 ≤ |σ| ≤ jorder;
• selectvars(par,orderofder,depvars,vars) returns all derivative coordinates (even if par=0, odd if par=1) of order orderofder of
the list of dependent variables depvars which belong to the set of derivative coordinates vars.
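The two counting functions follow from standard combinatorics: a multiindex of order k in n independent variables is a weak composition of k, so there are C(n + k - 1, k) of them per dependent variable. The closed forms below are our reading of the definitions (illustration, not CDE code), checked against brute-force enumeration:

```python
from itertools import product
from math import comb

def fiber_dim(n, m, k):
    # number of derivative coordinates u^i_sigma with |sigma| = k
    return m * comb(n + k - 1, k)

def jet_dim(n, m, r):
    # number of derivative coordinates with 0 <= |sigma| <= r
    return sum(fiber_dim(n, m, k) for k in range(r + 1))

def count_multiindices(n, k):
    # brute-force: tuples (s1,...,sn) with s1 + ... + sn = k
    return sum(1 for sigma in product(range(k + 1), repeat=n)
               if sum(sigma) == k)

assert all(count_multiindices(2, k) == comb(2 + k - 1, k) for k in range(6))
print(fiber_dim(2, 1, 2))   # 3: u_xx, u_xy, u_yy
print(jet_dim(2, 1, 2))     # 6: u, u_x, u_y, u_xx, u_xy, u_yy
```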
The function cde defines total derivatives truncated at the order total_order.
Their coordinate expressions are of the form

   Dλ = ∂/∂xλ + uiσλ ∂/∂uiσ ,

where σ is a multiindex (a sum over i and σ is understood).
The total derivative of an argument ϕ is invoked with a syntax that closely
follows REDUCE's syntax for standard derivatives df; for instance, an
expression like td(phi,x,2) translates to Dx Dx ϕ, or D{2,0} ϕ in multiindex
notation.
When a coefficient of order higher than the maximal one appears in a total
derivative, it is replaced by the identifier letop, which is a function that
depends on the independent variables. If such a function (or its derivatives)
appears during computations, it is likely that we went too close to the
highest-order variables that we defined. All results of computations are
scanned for the presence of such variables by default, and if the presence of
letop is detected the computation is stopped with an error message. This
usually means that we need to extend the order of the jet space, simply by
increasing the number total_order.
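The truncation-plus-letop mechanism can be modelled in a few lines. The following Python sketch (a miniature model of ours, not CDE's implementation) represents a polynomial in the jet variables u, u_x, u_2x as a dict from exponent tuples to coefficients, and applies a truncated total derivative that returns the marker 'letop' when it would need a variable beyond the truncation order:

```python
TOTAL_ORDER = 2          # jet variables kept: u, u_x, u_2x

def total_x(poly):
    # D_x by the product rule: differentiating the k-th slot shifts one
    # power from u_kx to u_(k+1)x, multiplied by the exponent.
    out = {}
    for expo, coeff in poly.items():
        for slot, e in enumerate(expo):
            if e == 0:
                continue
            if slot + 1 > TOTAL_ORDER:
                return 'letop'       # went past the truncation order
            new = list(expo)
            new[slot] -= 1
            new[slot + 1] += 1
            new = tuple(new)
            out[new] = out.get(new, 0) + coeff * e
    return out

u_times_ux = {(1, 1, 0): 1}          # the expression u*u_x
print(total_x(u_times_ux))           # {(0, 2, 0): 1, (1, 0, 1): 1},
                                     # i.e. u_x^2 + u*u_2x
print(total_x({(0, 0, 1): 1}))       # 'letop': u_2x is already top order
```

Raising TOTAL_ORDER is the analogue of extending the jet space when CDE reports letop.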
Note that in the folder containing all examples there is also a shell script
(it works only under bash, a GNU/Linux command interpreter) which can be used
to run REDUCE on a given CDE program. When an error message about letop is
issued, the script reruns the computation with a value of total_order one unit
higher than the previous one.
The function that checks an expression for the presence of letop is check_letop.
If you wish to switch off this kind of check in order to increase the speed, the switch
checkord must be set off:
off checkord;
The computation of total derivatives of a huge expression can be extremely
time- and resource-consuming. In some cases it is a good idea to disable the
expansion of the total derivative and leave an expression of the type Dσ ϕ as
indicated; a dedicated command achieves this, and the default behaviour can be
restored afterwards.
CDE can also compute on jets of supermanifolds. The theory can be found in
[11, 12, 15]. The input can be organized as follows:
Here odd_var is the list of odd variables. The call will create the jet space
of the supermanifold described by the independent variables and the even and
odd dependent variables, up to the order total_order. Total derivatives
truncated at the order total_order will also include odd variables:

   Dλ = ∂/∂xλ + uiσλ ∂/∂uiσ + piσλ ∂/∂piσ ,

where σ is a multiindex. The considerations on expansion and letop apply in
this case too.
Odd variables can appear in anticommuting products; such a product is
represented by the operator ext, where ext(p_2xt,p) = - ext(p,p_2xt) and the
variables are arranged in a unique way in terms of an internal ordering.
Indeed, in the internal representation of odd variables and their products
(not intended for normal users!) all odd variables and their derivatives are
indexed by integers. Note that p and ext(p) are just the same. The odd product
of two expressions ϕ and ψ, as well as the derivative of an expression ϕ with
respect to an odd variable p, are achieved by dedicated CDIFF functions.


Differential equations in even and odd variables

We now give the equation in the form of one or more derivatives equated to
right-hand side expressions. The left-hand side derivatives are called
principal, and the remaining derivatives are called parametric. Parametric
coordinates are coordinates on the equation manifold and its differential
consequences, and principal coordinates are determined by the differential
equation and its differential consequences. For scalar evolutionary equations
with two independent variables, parametric derivatives are of the type
(u, ux , uxx , . . .). Note that the system must be in passive orthonomic form;
this also means that there will be no nontrivial integrability conditions
between parametric derivatives. (Lines beginning with % are comments for
REDUCE.) The input is formed as follows (Burgers' equation).
% left-hand side of the differential equation
% right-hand side of the differential equation
Systems of PDEs are input in the same way: of course, the above two lists must
have the same length. See 16.12.16 for an example.

(This terminology dates back to Riquier, see [19].)

The main routine is called as follows:
Here the three empty lists are placeholders; they are important for computations
with odd variables. The function cde computes principal and parametric derivatives of even and odd variables, they are stored in the lists all_parametric_der,
all_principal_der, all_parametric_odd, all_principal_odd.
The function cde also defines total derivatives truncated at the order total_order
and restricted on the (even and odd) equation; this means that total derivatives are
tangent to the equation manifold. Their coordinate expressions are of the form
   Dλ = ∂/∂xλ +    Σ    uiσλ ∂/∂uiσ  +    Σ    piσλ ∂/∂piσ ,
               uσ parametric          pσ parametric
where σ is a multiindex. It can happen that uiσλ (or piσλ ) is principal and must be
replaced with differential consequences of the equation. Such differential consequences are called primary differential consequences, and are computed; in general
they will depend on other, possibly new, differential consequences, and so on. Such
newly appearing differential consequences are called secondary differential consequences. If the equation is in passive orthonomic form, the system of all differential
consequences (up to the maximal order total_order) must be solvable in terms
of parametric derivatives only. The function cde automatically computes all necessary and sufficient differential consequences which are needed to solve the system.
The solved system is available in the form of REDUCE let-rules in the variables
repprincparam_der and repprincparam_odd.
The syntax and properties (expansion and letop) of total derivatives remain the
same.
It is possible to deal with mixed systems of even and odd variables. For
example, in the case of Burgers' equation we can input the linearized equation
as a PDE in a new odd variable as follows (of course, in addition to what has
been defined above):
de_odd:={q_2x + 2*u_x*q + 2*u*q_x}$



The main routine in is called as follows:


Calculus of variations

CDE can compute variational derivatives of any function (usually a Lagrangian
density) or superfunction L. We have the coordinate expressions

   δL/δui = (−1)|σ| Dσ (∂L/∂uiσ),     δL/δpi = (−1)|σ| Dσ (∂L/∂piσ),

with a sum over σ understood,
which translate into CDE commands whose arguments are as follows:
• the first argument can be 0 or 1 and is the parity of the variable ui or pi;
• lagrangian_dens is L;
• ui or pi is the given dependent variable.
The Euler operator computes variational derivatives with respect to all even and
odd variables in the jet space, and arranges them in a list of two lists, the list of even
variational derivatives and the list of odd variational derivatives. The command is
All the above is used in the definition of Schouten brackets, as we will see in
Subsection 16.12.6.


C-differential operators

Linearizing (or taking the Fréchet derivative) of a vector function that defines a differential equation yields a differential operator in total derivatives. This operator
can be restricted to the differential equation, which may be regarded as a differential constraint; the kernel of the restricted operator is the space of all symmetries
(including higher or generalized symmetries) [2, 25].
The formal adjoint of the linearization operator yields by restriction to the corresponding differential equation a differential operator whose kernel contains all
characteristic vectors or generating functions of conservation laws [2, 25].

Such operators are examples of C-differential operators. The (still incomplete)
REDUCE implementation of the calculus of C-differential operators is the subject
of this section.
C-differential operators
Let us consider the spaces
P = {ϕ : J r (n, m) → Rk },

Q = {ψ : J r (n, m) → Rs }.

A C-differential operator ∆ : P → Q is defined to be a map of the type
∆(ϕ) = ( Σ_σ a_i^{σj} D_σ ϕ^i ),    (16.50)

where the a_i^{σj} are differentiable functions on J^r(n, m), 1 ≤ i ≤ k, 1 ≤ j ≤ s. The order of ∆ is the highest length of σ in the above formula.

We may consider a generalization to k-C-differential operators of the type

∆ : P_1 × · · · × P_h → Q,
∆(ϕ_1, . . . , ϕ_h) = ( Σ_{σ_1,...,σ_h, i_1,...,i_h} a_{i_1...i_h}^{σ_1...σ_h, j} D_{σ_1}ϕ_1^{i_1} · · · D_{σ_h}ϕ_h^{i_h} ),    (16.51)

where the enclosing parentheses mean that the value of the operator is a vector
function in Q.
A C-differential operator in CDE must be declared as follows:
• opname is the name of the operator;
• num_arg is the number of arguments, e.g. h in (16.51);
• length_arg is the list of lengths of the arguments: e.g., the length of the single argument of ∆ in (16.50) is k, and the corresponding list is {k}, while in (16.51) one needs a list of h items {k_1,...,k_h}, each corresponding to the number of components of the vector function to which the operator is applied;
• length_target is the number of components of the image vector function.
The syntax for one component of the operator opname is
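The syntax block is missing from this extraction; by analogy with the lbou2 example of Section 16.12.8, the components of a declared operator are defined through let-rules (a sketch with a hypothetical operator name):

```reduce
% sketch: the (j, i1, ..., ih) component of opname acts on the
% arguments phi1, ..., phih through a let-rule; e.g. for h = 1:
operator opname;
for all phi let opname(1,1,phi) = td(phi,x) - u*phi;
```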



The above operator will compute
∆(ϕ_1, . . . , ϕ_h) = Σ_{σ_1,...,σ_h} a_{i_1···i_h}^{σ_1,...,σ_h, j} D_{σ_1}ϕ_1^{i_1} · · · D_{σ_h}ϕ_h^{i_h},

for fixed integer indices i1 ,. . . ,ih and j.
There are several operations which involve differential operators. Obviously they
can be summed and multiplied by scalars.
An important example of C-differential operator is that of linearization, or Fréchet
derivative, of a vector function
F : J^r(n, m) → R^k.

This is the operator

ℓ_F : κ → P,    ϕ ↦ ( Σ_σ (∂F^k/∂u^i_σ) D_σ ϕ^i ),
where κ = {ϕ : J r (n, m) → Rm } is the space of generalized vector fields on jets
[2, 25].
Linearization can be extended to an operation that, starting from a k-C-differential
operator, generates a k + 1-C-differential operator as follows:

ℓ_∆(p_1, . . . , p_k, ϕ) = ( Σ_{σ,σ_1,...,σ_k, i,i_1,...,i_k} (∂a_{i_1...i_k}^{σ_1...σ_k, j}/∂u^i_σ) D_σ ϕ^i D_{σ_1}p_1^{i_1} · · · D_{σ_k}p_k^{i_k} ).

(The above operation is also denoted by `∆,p1 ,...,pk (ϕ).)
At the moment, CDE is only able to compute the linearization of a vector function
(Section 16.12.8).
Given a C-differential operator ∆ like in (16.50) we can define its adjoint as
∆*((q_j)) = ( Σ_σ (−1)^{|σ|} D_σ (a_i^{σj} q_j) ).

Note that the matrix of coefficients is transposed. Again, the coefficients of the adjoint operator can be found by computing ∆*(x^σ e_j) for every basis vector e_j and every monomial x^σ with |σ| ≤ r, where r is the order of the operator. This operation
can be generalized to C-differential operators with h arguments.
At the moment, CDE can compute the adjoint of an operator with one argument
(Section 16.12.8).

Now, consider two operators ∆ : P → Q and ∇ : Q → R. Then the composition
∇ ◦ ∆ is again a C-differential operator. In particular, if
∆(p) = ( Σ_σ a_i^{σj} D_σ p^i ),    ∇(q) = ( Σ_τ b_j^{τk} D_τ q^j ),

then

∇ ∘ ∆(p) = ( Σ_τ b_j^{τk} D_τ ( Σ_σ a_i^{σj} D_σ p^i ) ).

This operation can be generalized to C-differential operators with h arguments.
There is another important operation between C-differential operators with h arguments: the Schouten bracket [2]. We will discuss it in the next subsection, in the context of another formalism, where it takes an easier form [12].


C-differential operators as superfunctions

In the papers [11, 12] (and independently in [10]) a scheme for dealing with (skewadjoint) variational multivectors was devised. The idea was that operators of the
type (16.51) could be represented by homogeneous vector superfunctions on a
supermanifold, where odd coordinates qσi would correspond to total derivatives
D σ ϕi .
The isomorphism between the two languages is given by

( Σ_{σ_1,...,σ_h, i_1,...,i_h} a_{i_1...i_h}^{σ_1...σ_h, j} D_{σ_1}ϕ_1^{i_1} · · · D_{σ_h}ϕ_h^{i_h} )  ←→  Σ_{σ_1,...,σ_h, i_1,...,i_h} a_{i_1...i_h}^{σ_1...σ_h, j} q^{i_1}_{σ_1} · · · q^{i_h}_{σ_h},    (16.54)

where qσi is the derivative of an odd dependent variable (and an odd variable itself).
A superfunction in CDE must be declared as follows:
• sfname is the name of the superfunction;
• num_arg is the degree of the superfunction, e.g. h in (16.54);
• length_arg is the list of lengths of the arguments: e.g., the length of the single argument of ∆ in (16.50) is k, and the corresponding list is {k}, while in (16.51) one needs a list of h items {k_1,...,k_h}, each corresponding to the number of components of the vector function to which the operator is applied;
• length_target is the number of components of the image vector function.

The above parameters of the operator opname are stored in the property list of the identifier opname. This means that if one would like to know how many arguments the operator opname has, the answer will be the output of the corresponding command, and the same for the other parameters.
The syntax for one component of the superfunction sfname is
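The syntax block is missing from this extraction; judging from the examples later in this section (lbou_sf, sym1), the i-th component of a superfunction is accessed as sfname(i), so a sketch of a component definition is:

```reduce
% sketch: components of a superfunction are indexed expressions
% in the odd variables, e.g.
sfname(1) := p_x;
```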
CDE is able to deal with C-differential operators in both formalisms, and provides
conversion utilities:
• conv_cdiff2superfun(cdop,superfun)
• conv_superfun2cdiff(superfun,cdop)
where in the first case a C-differential operator cdop is converted into a vector
superfunction superfun with the same properties, and conversely.


The Schouten bracket

We are interested in the operation of Schouten bracket between variational multivectors [11]. These are differential operators with h arguments in κ with values in
densities, and whose image is defined up to total divergencies:
∆ : κ × · · · × κ → {J^r(n, m) → Λ^n T*R^n} / d̄({J^r(n, m) → Λ^{n−1} T*R^n})    (16.55)
It is known [10, 12] that the Schouten bracket between two variational multivectors A_1, A_2 can be computed in terms of their corresponding superfunctions by the formula

[A_1, A_2] = [ (δA_1/δu^j)(δA_2/δp_j) + (δA_2/δu^j)(δA_1/δp_j) ],    (16.56)

where δ/δu^j, δ/δp_j are the variational derivatives and the square brackets at the right-hand side should be understood as the equivalence class up to total divergencies.

(Footnote: the property list is a Lisp concept; see [21] for details.)

If the operators A_1, A_2 are compatible, i.e. [A_1, A_2] = 0, the expression (16.56) must be a total divergence. This means that:

[A_1, A_2] = 0  ⇔  E( (δA_1/δu^j)(δA_2/δp_j) + (δA_2/δu^j)(δA_1/δp_j) ) = 0.
If A_1 is an h-vector and A_2 is a k-vector, the formula (16.56) produces an (h + k − 1)-vector, or a C-differential operator with h + k − 1 arguments. If we would like to check that this multivector is indeed a total divergence, we should apply the Euler operator and check that the result is zero. This procedure is considerably simpler than the analogous formula with operators (see for example [12]). All this is computed by
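(The command is missing from this extraction; assuming a hypothetical name in the style of the conversion utilities above:)

```reduce
% hypothetical command name: Schouten bracket of two bivectors,
% with the result stored in the three-vector tv12
schouten_bracket(biv1,biv2,tv12);
```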
where biv1 and biv2 are bivectors, or C-differential operators with 2 arguments,
and tv12 is the result of the computation, which is a three-vector (it is automatically declared to be a superfunction). Examples of this computation are given in
Section 16.12.18.


Computing linearization and its adjoint

Currently, CDE supports linearization of a vector function, or a C-differential operator with 0 arguments. The computation is performed in odd coordinates.
Suppose that we would like to linearize the vector function that defines the (dispersionless) Boussinesq equation [13]:

ut − ux v − u vx − σ vxxx = 0
vt − ux − v vx = 0        (16.58)
where σ is a constant. Then a jet space with independent variables x,t, dependent
variables u,v and odd variables in the same number as dependent variables p,q
must be created:
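The commands creating the jet space are missing from this extraction. A sketch of the variable lists involved, following the conventions used elsewhere in this section (the exact call that consumes these lists is in the CDE sources):

```reduce
indep_var:={x,t}$    % independent variables
dep_var:={u,v}$      % dependent variables
odd_var:={p,q}$      % odd variables, one for each dependent variable
total_order:=10$     % hypothetical choice of maximal jet order
```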



The linearization of the above system and its adjoint are, respectively,

ℓ_Bou = ( Dt − vDx − vx      −ux − uDx − σDxxx )
        ( −Dx                Dt − vDx − vx     )

ℓ*_Bou = ( −Dt + vDx         Dx        )
         ( uDx + σDxxx       −Dt + vDx )
Let us introduce the vector function whose zeros define the Boussinesq equation:
f_bou:={u_t - (u_x*v + u*v_x + sig*v_3x),
v_t - (u_x + v*v_x)};
The following command assigns to the identifier lbou the linearization C-differential operator ℓ_Bou of the vector function f_bou;
moreover, a superfunction lbou_sf is also defined as the vector superfunction
corresponding to `Bou . Indeed, the following sequence of commands:
2: lbou_sf(1);
- p*v_x + p_t - p_x*v - q*u_x - q_3x*sig - q_x*u
3: lbou_sf(2);
- p_x - q*v_x + q_t - q_x*v
shows the vector superfunction corresponding to `Bou . To compute the value of the
(1, 1) component of the matrix `Bou applied to an argument psi do
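The command is missing from this extraction; since operator components are invoked as opname(j,i,argument) (cf. the lbou2 rules in this subsection), it is presumably:

```reduce
lbou(1,1,psi);
```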
In order to check that the result is correct one could define the linearization as a
C-differential operator and then check that the corresponding superfunctions are
the same:
for all phi let lbou2(1,1,phi)
= td(phi,t) - v*td(phi,x) - v_x*phi;
for all phi let lbou2(1,2,phi)
= - u_x*phi - u*td(phi,x) - sig*td(phi,x,3);
for all phi let lbou2(2,1,phi)
= - td(phi,x);
for all phi let lbou2(2,2,phi)
= td(phi,t) - v*td(phi,x) - v_x*phi;
lbou2_sf(1) - lbou_sf(1);
lbou2_sf(2) - lbou_sf(2);
The result of the last two commands must be zero.
The formal adjoint of lbou can be computed and assigned to the identifier
lbou_star by the command
Again, the associated vector superfunction lbou_star_sf is computed, with
4: lbou_star_sf(1);
- p_t + p_x*v + q_x
5: lbou_star_sf(2);
p_3x*sig + p_x*u - q_t + q_x*v
Again, the above operator can be checked for correctness.
Once the linearization and its adjoint are computed, in order to do computations with symmetries and conservation laws such operators must be restricted to the corresponding equation. This can be achieved with the following steps:
1. compute linearization of a PDE of the form F = 0 and its adjoint, and save
them in the form of a vector superfunction;
2. start a new computation with the given even PDE as a constraint on the (even)
jet space;
3. load the superfunctions of item 1;
4. restrict them to the even PDE.
Only the last step needs to be explained. If we are considering, e.g., the Boussinesq equation, then ut and its differential consequences (i.e. the principal derivatives) are
not automatically expanded to the right-hand side of the equation and its differential consequences. At the moment this step is not fully automatic. More precisely,
only principal derivatives which appear as coefficients in total derivatives can be
replaced by their expression. The lists of such derivatives with the corresponding



expressions are repprincparam_der and repprincparam_odd (see Section 16.12.3). They are in the format of REDUCE’s replacement list and can be
used in let-rules. If the linearization or its adjoint happen to depend on another
principal derivative, this must be computed separately. A forthcoming release of REDUCE will automate this procedure.
However, note that for evolutionary equations this step is trivial, as the restriction
of linearization and its adjoint on the given PDE will only affect total derivatives
which are restricted by CDE to the PDE.


Higher symmetries

In this section we show the computation of (some) higher [2] (or generalized [25]) symmetries of Burgers’ equation B = ut − uxx − 2uux = 0.
We provide two ways to solve the equations for higher symmetries. The first possibility is to use dimensional analysis. The idea is that one can use the scale symmetries of Burgers’ equation to assign “gradings” to each variable appearing in the equation (in other words, one can use dimensional analysis). As a consequence, one can try different ansätze for symmetries with polynomial generating functions. For example, it is possible to require that they are sums of monomials of given degrees. This ansatz yields a simplification of the equations for symmetries, because it is possible to solve them in a “graded” way, i.e., it is possible to split them into several equations made of the homogeneous components of the equation for symmetries with respect to gradings.
In particular, Burgers’ equation translates into the following dimensional equation:
[ut ] = [uxx ],

[uxx ] = [2uux ].

By the rules [uz ] = [u] − [z] and [uv] = [u] + [v], and choosing [x] = −1, we
have [u] = 1 and [t] = −2. This will be used to generate the list of homogeneous
monomials of given grading to be used in the ansatz about the structure of the
generating function of the symmetries.
The file for the above computation is and the results of the computation are in results/
Another possibility to solve the equation for higher symmetries is to use a PDE
solver that is especially devoted to overdetermined systems, which is the distinguishing feature of systems coming from the symmetry analysis of PDEs. This approach is described below. The file for the above computation is
and the results of the computation are in results/



Setting up the jet space and the differential equation.

After loading CDE:
Here the new lists are scale degrees:
• deg_indep_var is the list of scale degrees of the independent variables;
• deg_dep_var is the list of scale degrees of the dependent variables;
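With the gradings [x] = −1, [t] = −2, [u] = 1 computed for Burgers’ equation above, these lists would read (a sketch; the order of entries must match the order of the corresponding variables):

```reduce
deg_indep_var:={-1,-2}$   % scale degrees of x and t
deg_dep_var:={1}$         % scale degree of u
```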
We now give the equation and call CDE:


Solving the problem via dimensional analysis.

Higher symmetries of the given equation are functions sym depending on parametric coordinates up to some jet space order. We assume that they are graded polynomials of all parametric derivatives. In practice, we generate a linear combination
of graded monomials with arbitrary coefficients, then we plug it in the equation of
the problem and find conditions on the coefficients that fulfill the equation. To construct a good ansatz, it is required to make several attempts with different gradings,
possibly including independent variables, etc.. For this reason, ansatz-constructing
functions are especially verbose. In order to use such functions they must be initialized with the following command:
Note the empty list at the end; it plays a role only for computations involving odd variables.
We need one operator equ whose components will be the equation of higher symmetries and its consequences. Moreover, we need an operator c which will play
the role of a vector of constants, indexed by a counter ctel:



operator c,equ;
We prepare a list of variables ordered by scale degree:
The function der_deg_ordering is defined in the CDE package. It produces the
given list using the list all_parametric_der of all parametric derivatives of
the given equation up to the order total_order. The first two parameters can
assume the values 0 or 1 and say that we are considering even variables and that
the variables are of parametric type.
Then, since all parametric variables have positive scale degree, we prepare the list ansatz of all graded monomials of scale degree from 0 to 5:
gradmon:=graded_mon(1,5,l_grad_var)$
gradmon:={1} . gradmon$
ansatz:=for each el in gradmon join el$
More precisely, the command graded_mon produces a list of monomials of degrees from i to j, formed from the list of graded variables l_grad_var; the
second command adds the zero-degree monomial; and the last command produces
a single list of all monomials.
Finally, we assume that the higher symmetry is a graded polynomial obtained from
the above monomials (so, it is independent of x and t!)
sym:=(for each el in ansatz sum (c(ctel:=ctel+1)*el))$
Next, we define the equation ℓ_B(sym) = 0. Here, ℓ_B stands for the linearization (Section 16.12.8). A function sym that fulfills the above equation, on account of B = 0, is a higher symmetry.
We cannot define the linearization as a C-differential operator in this way:
bur:={u_t - (2*u*u_x+u_2x)};
as the linearization is performed with respect to parametric derivatives only! This
means that the linearization has to be computed beforehand in a free jet space, then
it may be used here.
So, the right way to go is
for all phi let lbur(1,1,phi)

= td(phi,t)-td(phi,x,2)-2*u*td(phi,x)-2*u_x*phi;
Note that for evolutionary equations the restriction of the linearization to the equation is equivalent to just restricting total derivatives, which is automatic in CDE.
The equation becomes
equ 1:=lbur(1,1,sym);
At this point we initialize the equation solver, which is part of the CDIFF package (see the original documentation inside the folder packages/cdiff in REDUCE’s source code). In our case the package will solve a large sparse linear system of algebraic equations on the coefficients of sym.
The list of variables, to be passed to the equation solver:
The number of initial equation(s):
Next command initializes the equation solver. It passes
• the equation vector equ together with its length tel (i.e., the total number
of equations);
• the list of variables with respect to which the system must not split the equations, i.e., variables with respect to which the unknowns are not polynomial.
In this case this list is just {};
• the constants’ vector c, its length ctel, and the number of negative indices,
if any; just 0 in our example;
• the vector of free functions f that may appear in computations. Note that in
{f,0,0 } the second 0 stands for the length of the vector of free functions.
In this example there are no free functions, but the command needs the presence of at least a dummy argument, f in this case. There is also a last zero
which is the negative length of the vector f , just as for constants.

Run the procedure splitvars_opequ on the first component of equ in order to obtain equations on the coefficients of each monomial.



Note that splitvars_opequ needs to know the indices of the first and the last equation in equ; here we have only one equation, equ(1). The output tel is the final number of split equations, numbered starting just after the initial equation.
The next command tells the solver the total number of equations obtained after the splitting:
put_equations_used tel;
This command solves the equations for the coefficients. Note that we have to skip
the initial equations!
for i:=2:tel do integrate_equation i;
The output is written in the result file by the commands
off echo$
off nat$
out <>;
write ";end;";
shut <>;
on nat$
on echo$
The command off nat turns off output in natural notation; results in this form are better only for visualization, not for writing to a file or for input into another computation. The command «resname» forces the evaluation of the variable resname to its string value. The commands out and shut open and close the output file. The command sym:=sym is evaluated only on the right-hand side.
One more example file is available; it concerns higher symmetries of the KdV equation. In order to deal with symmetries explicitly depending on x and t it is possible to use REDUCE and CDE commands to build sym = x*(something of degree 3) + t*(something of degree 5) + (something of degree 2); this yields scale symmetries. Or we could use sym = x*(something of degree 1) + t*(something of degree 3) + (something of degree 0); this yields Galilean symmetries.



Solving the problem using CRACK

CRACK is a PDE solver devoted mostly to the solution of overdetermined PDE systems [30, 32]. Several mathematical problems have been solved with the help of CRACK, like finding symmetries [29, 31] and conservation laws [28].
The aim of CDE is to provide a tool for computations with total derivatives, but it
can be used to compute symmetries too. In this subsection we show how to interface CDE with CRACK in order to find higher (or generalized) symmetries for the
Burgers’ equation. To do that, after loading CDE and introducing the equation, we
define the linearization of the equation lbur.
We introduce the new unknown function ‘ansatz’. We assume that the function
depends on parametric variables of order not higher than 3. The variables are
selected by the function selectvars of CDE as follows:
even_vars:=for i:=0:3 join selectvars(0,i,dep_var,all_parametric_der)$
In the arguments of selectvars, 0 means that we want even variables, i
stands for the order of variables, dep_var stands for the dependent variables to be selected by the command (here we use all dependent variables),
all_parametric_der is the set of variables where the function will extract
the variables with the required properties. In the current example we wish to get
all higher symmetries depending on parametric variables of order not higher than 3.
The dependency of ansatz on the variables is given with the standard REDUCE command depend:
for each el in even_vars do depend(ansatz,el)$
The equation to be solved is the equation lbur(ansatz)=0; hence we give the corresponding command.
The above command will issue an error if the list {total_eq} depends on the
flag variable letop. In this case the computation has to be redone within a jet
space of higher order.
The equation ell_b(ansatz)=0 is polynomial with respect to the variables of
order higher than those appearing in ansatz. For this reason, its coefficients can
be put to zero independently. This is the reason why the PDEs that determine
symmetries are overdetermined. To tell this to CRACK, we issue the command



The list split_vars contains variables which are in the current CDE jet space
but not in even_vars.
Then, we load the package CRACK and get results.
load_package crack;
The results are in the variable crack_results:
{ansatz=(2*c_12*u_x + 2*c_13*u*u_x + c_13*u_2x
+ 6*c_8*u**2*u_x + 6*c_8*u*u_2x + 2*c_8*u_3x
+ 6*c_8*u_x**2)/2},{c_8,c_13,c_12},
So, we have three symmetries; of course, the generalized symmetry corresponds to c_8. Always remember to check the output of CRACK to see whether any of the symbols c_n is a free function depending on some of the variables rather than just a constant.


Local conservation laws

In this section we will find (some) local conservation laws for the KdV equation F = ut − uxxx − uux = 0. Concretely, we have to find non-trivial 1-forms f = fx dx + ft dt on F = 0 such that d̄f = 0 on F = 0. “Triviality” of conservation laws is a delicate matter, for which we invite the reader to have a look in [2].
The files containing this example are kdv_lcl1,kdv_lcl2 and the corresponding results and debug files.
We suppose that the conservation law has the form ω = fx dx + ft dt. Using the
same ansatz as in the previous example we assume
fx:=(for each el in ansatz sum (c(ctel:=ctel+1)*el))$
ft:=(for each el in ansatz sum (c(ctel:=ctel+1)*el))$
Next we define the equation d̄(ω) = 0, where d̄ is the total exterior derivative restricted to the equation:
equ 1:=td(fx,t)-td(ft,x)$
After solving the equation as in the above example we get
fx := c(3)*u_x + c(2)*u + c(1)$

ft := (2*c(8) + 2*c(3)*u*u_x + 2*c(3)*u_3x + c(2)*u**2 +
Unfortunately it is clear that the conservation law corresponding to c(3) is trivial,
because it is just the KdV equation. Here this fact is evident; how to get rid of less
evident trivialities by an ‘automatic’ mechanism? We considered this problem in
the file kdv_lcl2, where we solved the equation
equ 1:=fx-td(f0,x);
equ 2:=ft-td(f0,t);
after having loaded the values fx and ft found by the previous program. In order
to do that we have to introduce two new counters:
operator cc,equ;
We make the following ansatz on f0:
f0:=(for each el in ansatz sum (cc(cctel:=cctel+1)*el))$
After solving the system, issuing the commands
fxnontriv := fx-td(f0,x);
ftnontriv := ft-td(f0,t);
we obtain
fxnontriv := c(2)*u$
ftnontriv := (c(2)*(u**2 + 2*u_2x))/2$
This mechanism can be easily generalized to situations in which the conservation
laws which are found by the program are difficult to treat by pen and paper. However, we will present another approach to the computation of conservation laws in
subsection 16.12.25.


Local Hamiltonian operators

In this section we will show how to compute local Hamiltonian operators for the Korteweg–de Vries, Boussinesq and Kadomtsev–Petviashvili equations. It is interesting to note that we will adopt the same computational scheme for all equations, even when the equation at hand is not in evolutionary form or has more than two independent variables. This comes from a new mathematical theory which started in [12] for evolution equations and was later extended to general differential equations in [14].




Korteweg–de Vries equation

Here we will find local Hamiltonian operators for the KdV equation ut = uxxx +
uux. A necessary condition for an operator to be Hamiltonian is that it sends generating functions (or characteristics, according to [25]) of conservation laws to higher (or generalized) symmetries. As proved in [12], this amounts to solving ℓ̄_KdV(phi) = 0 over the equation

ut = uxxx + uux
pt = pxxx + upx
or, in geometric terminology, finding the shadows of symmetries on the ℓ*-covering
of the KdV equation, with the further condition that the shadows must be linear in
the p-variables. Note that the second equation (in odd variables!) is just the adjoint
of the linearization of the KdV equation applied to an odd variable.
The file containing this example is kdv_lho1.
We stress that the linearization ℓ̄_KdV(phi) = 0 is the same equation as before, but the total derivatives are lifted to the ℓ*-covering, hence they also contain derivatives with respect to the p’s. We can define a linearization operator lkdv as usual.
In order to produce an ansatz which is a superfunction of one odd variable (or a
linear function in odd variables) we produce two lists: the list l_grad_var of all
even variables collected by their gradings and a similar list l_grad_odd for odd
l_grad_odd:={1} . der_deg_ordering(1,all_parametric_odd)$
gradmon:={1} . gradmon$
We need a list of graded monomials which are linear in odd variables. The function mkalllinodd produces all monomials which are linear with respect to the
variables from l_grad_odd, have (monomial) coefficients from the variables in
l_grad_var, and have total scale degrees from 1 to 6. Such monomials are then
converted to the internal representation of odd variables.
Note that all odd variables have positive scale degrees thanks to our initial choice
deg_odd_var:=1;. Finally, the ansatz for local Hamiltonian operators:
sym:=(for each el in linext sum (c(ctel:=ctel+1)*el))$

After having set
equ 1:=lkdv(1,1,sym);
and having initialized the equation solver as before, we do splitext in order to split the polynomial equation with respect to the ext variables, and then a further splitting in order to split the resulting polynomial equation into a list of equations on the coefficients of all monomials.
Now we are ready to solve all equations:
put_equations_used tel;
for i:=2:tel do integrate_equation i;
Note that we want all equations to be solved!
The results are the two well-known Hamiltonian operators for the KdV. After integration the function sym becomes
sym := (c(5)*p*u_x + 2*c(5)*p_x*u +
3*c(5)*p_3x + 3*c(2)*p_x)/3$
Of course, the results correspond to the operators

p_x → D_x,    (1/3)(3p_3x + 2up_x + u_x p) → (1/3)(3D_xxx + 2uD_x + u_x).

Note that each operator is multiplied by one arbitrary real constant, c(5) and c(2) respectively.
The same problem can be approached using CRACK, as follows (file
An ansatz is constructed by the following instructions:
even_vars:=for i:=0:3 join selectvars(0,i,dep_var,all_parametric_der)$
odd_vars:=for i:=0:3 join selectvars(1,i,odd_var,all_parametric_odd)$



ansatz:=for each el in ext_vars sum
Note that we have
ansatz := p*s1 + p_2x*s3 + p_3x*s4 + p_x*s2$
Indeed, we are looking for a third-order operator whose coefficients depend on
variables of order not higher than 3. This last property has to be introduced by
unk:=for i:=1:ctemp collect mkid(s,i)$
for each ell in unk do
for each el in even_vars do depend ell,el$
Then, we introduce the linearization (lifted on the cotangent covering)
operator ell_f$
for all sym let ell_f(sym)=
td(sym,t) - u*td(sym,x) - u_x*sym - td(sym,x,3)$
and the equation to be solved, together with the usual test that checks for the need to enlarge the jet space:
Finally, we split the above equation by collecting all coefficients of the odd variables:
and we feed CRACK with the equations that require the above coefficients to be zero:
load_package crack;
The results are the same as in the previous section:
crack_results := {{{},
{s4=(3*c_17)/2,s3=0,s2=c_16 + c_17*u,s1=(c_17*u_x)/2},



Boussinesq equation

There is no conceptual difference between computations for systems of PDEs and the previous computations for scalar equations. We will look for Hamiltonian structures for the dispersionless Boussinesq equation (16.58).
We will proceed by dimensional analysis. Gradings can be taken as

[t] = −2,  [x] = −1,  [v] = 1,  [u] = 2,  [p] = 1,  [q] = 2,
where p, q are the two odd coordinates. We have the ℓ*_Bou covering equation

−pt + v px + qx = 0
u px + σ pxxx − qt + v qx = 0
ut − ux v − u vx − σ vxxx = 0
vt − ux − v vx = 0
We have to find Hamiltonian operators as shadows of symmetries on the above
covering. At the level of source file (bou_lho1) the input data is:
The ansatz for the components of the Hamiltonian operator, of scale degree between 1 and 6, is
phi1:=(for each el in linodd sum (c(ctel:=ctel+1)*el))$
phi2:=(for each el in linodd sum (c(ctel:=ctel+1)*el))$
and the equation for shadows of symmetries is (lbou2 is taken from Section 16.12.8)
equ 1:=lbou2(1,1,phi1) + lbou2(1,2,phi2);
equ 2:=lbou2(2,1,phi1) + lbou2(2,2,phi2);



After the usual procedures for decomposing polynomials we obtain three local
Hamiltonian operators:
phi1_odd := (2*c(31)*p*sig*v_3x + 2*c(31)*p*u*v_x
+ 2*c(31)*p*u_x*v + 6*c(31)*p_2x*sig*v_x
+ 4*c(31)*p_3x*sig*v + 6*c(31)*p_x*sig*v_2x
+ 4*c(31)*p_x*u*v + 2*c(31)*q*u_x + 4*c(31)*q_3x*sig
+ 4*c(31)*q_x*u + c(31)*q_x*v**2 + 2*c(16)*p*u_x
+ 4*c(16)*p_3x*sig + 4*c(16)*p_x*u
+ 2*c(16)*q_x*v + 2*c(10)*q_x)/2$
phi2_odd := (2*c(31)*p*u_x + 2*c(31)*p*v*v_x
+ 4*c(31)*p_3x*sig + 4*c(31)*p_x*u
+ c(31)*p_x*v**2 + 2*c(31)*q*v_x + 4*c(31)*q_x*v
+ 2*c(16)*p*v_x + 2*c(16)*p_x*v
+ 4*c(16)*q_x + 2*c(10)*p_x)/2$

There is a whole hierarchy of nonlocal Hamiltonian operators [12].


Kadomtsev–Petviashvili equation

There is no conceptual difference in the symbolic computation of Hamiltonian operators for PDEs in two independent variables and in more than two independent variables, regardless of whether the equation at hand is written in evolutionary form. As a model example, we consider the KP equation

uyy = utx − (u_x)² − u uxx − (1/12) uxxxx.

Proceeding as in the above examples we input the following data:
and look for Hamiltonian operators of scale degree between 1 and 5:


phi:=(for each el in linodd sum (c(ctel:=ctel+1)*el))$
After solving the equation for shadows of symmetries in the cotangent covering
equ 1:=td(phi,y,2) - td(phi,x,t) + 2*u_x*td(phi,x)
+ u_2x*phi + u*td(phi,x,2) + (1/12)*td(phi,x,4);
we get the only local Hamiltonian operator
phi := c(13)*p_2x$
As far as we know there are no further local Hamiltonian operators.
Remark: the above Hamiltonian operator is already known in an evolutionary presentation of the KP equation [18]. Our mathematical theory of Hamiltonian operators for general differential equations [14] allows us to formulate and solve the
problem for any presentation of the KP equation. Change of coordinate formulae
could also be provided.


Examples of Schouten bracket of local Hamiltonian operators

Let F = 0 be a system of PDEs. Here F ∈ P , where P is the module (in the
algebraic sense) of vector functions P = {J r (n, m) → Rk }.
The Hamiltonian operators which have been computed in the previous Section are
differential operators sending generating functions of conservation laws into generating functions of symmetries for the above system of PDEs:
H : P̂ → κ    (16.60)


• P̂ = {J r (n, m) → (Rk )∗ ⊗ ∧n T ∗ Rn } is the space of covector-valued densities,
• κ = {J r (n, m) → Rm } is the space of generalized vector fields on jets;
generating functions of higher symmetries of the system of PDEs are elements of this space.
As the operators are mainly used to define a bracket operation and a Lie algebra structure on conservation laws, two properties are required: skew-adjointness H* = −H (corresponding to the skew-symmetry of the bracket) and [H, H] = 0 (corresponding to the Jacobi property of the bracket).
In order to compute the two properties we proceed as follows. Skew-adjointness
is checked by computing the adjoint and verifying that the sum with the initial
operator is zero.
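In practice this check is a one-line loop. A minimal sketch, assuming (hypothetical names) that the superfunction components of $H$ and of its adjoint $H^*$ have been stored in the operators ham_sf and ham_star_sf, in the style of the checks performed later in this section:

```reduce
% Hypothetical names: ham_sf(i) is the i-th superfunction
% component of H, ham_star_sf(i) that of the adjoint H^*.
% H is skew-adjoint iff all components of H + H^* vanish:
for i:=1:ncomp do write ham_sf(i) + ham_star_sf(i);
```

If every printed component is zero, the operator is skew-adjoint.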



In the case of evolutionary equations, $P = \kappa$, and Hamiltonian operators (16.60) can also be interpreted as variational bivectors, i.e.,
\[ \hat{H}\colon \hat\kappa \times \hat\kappa \to \Lambda^n T^*\mathbb{R}^n, \]
where the correspondence is given by
\[ H(\psi) = (a^{ij\sigma} D_\sigma \psi_j), \qquad \hat{H}(\psi_1,\psi_2) = a^{ij\sigma} (D_\sigma \psi_{1\,j})\,\psi_{2\,i}. \]


In terms of the corresponding superfunctions:
\[ H = a^{ik\sigma} p_{k\,\sigma}, \qquad \hat{H} = a^{ik\sigma} p_{k\,\sigma}\, p_i. \]
Note that the product $p_{k\,\sigma}\, p_i$ is anticommutative, since the $p$'s are odd variables.
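For instance, for the operator $H = D_x$ (superfunction $H = p_x$), the associated bivector is, up to sign,

```latex
\[
\hat{H} = p_x\, p = -\, p\, p_x ,
\]
```

so reordering the odd factors only changes the overall sign; bivectors are therefore determined only up to such reorderings.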
After a C-differential operator of the type of $H$ has been converted into a bivector, it is possible to apply the formulae (16.56) and (16.57) in order to compute the Schouten bracket. This is what we will see in the next section.


Bi-Hamiltonian structure of the KdV equation

We can do the above computations using the KdV equation as a test case (see the corresponding example file). Let us load the above operators:
operator ham1;
for all psi1 let ham1(psi1)=td(psi1,x);
operator ham2;
for all psi2 let ham2(psi2)=
(1/3)*u_x*psi2 + td(psi2,x,3) + (2/3)*u*td(psi2,x);
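In operator form, the two assignments above encode the well-known Hamiltonian pair of the KdV equation $u_t = u_{xxx} + uu_x$:

```latex
\[
H_1 = D_x, \qquad
H_2 = D_x^3 + \tfrac{2}{3}\, u\, D_x + \tfrac{1}{3}\, u_x .
\]
```

Indeed $H_2(u) = u_{xxx} + \tfrac{2}{3}uu_x + \tfrac{1}{3}u_x u = u_{xxx} + uu_x$, the right-hand side of the equation.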
We may convert the two operators into the corresponding superfunctions
The result of the conversion is
sym1(1) := {p_x};
sym2(1) := {(1/3)*p*u_x + p_3x + (2/3)*p_x*u};
Skew-adjointness is checked at once:

and the result of the last two commands is zero.
Then we shall convert the two superfunctions into bivectors:
The output is:
biv1(1) := - ext(p,p_x);
biv2(1) := - (1/3)*( - 3*ext(p,p_3x) - 2*ext(p,p_x)*u);
Finally, the three Schouten brackets [Ĥi , Ĥj ] are computed, with i, j = 1, 2:
the results are, as expected, lists of zeros.


Bi-Hamiltonian structure of the WDVV equation

This subsection refers to the corresponding example file. The simplest nontrivial case of the WDVV equations is the third-order Monge–Ampère equation $f_{ttt} = f_{xxt}^2 - f_{xxx}f_{xtt}$ [4]. This PDE can be transformed into the hydrodynamic-type form
\[ a_t = b_x, \qquad b_t = c_x, \qquad c_t = (b^2 - ac)_x, \]
via the change of variables a = fxxx , b = fxxt , c = fxtt . This system possesses
two Hamiltonian formulations [7]:
 
\[ \begin{pmatrix} a \\ b \\ c \end{pmatrix}_{\!t} = A_i \begin{pmatrix} \delta H_i/\delta a \\ \delta H_i/\delta b \\ \delta H_i/\delta c \end{pmatrix}, \qquad i = 1,2, \]
with the homogeneous first-order Hamiltonian operator
\[ A_1 = \begin{pmatrix}
-\frac{3}{2}D_x & \frac{1}{2}D_x a & D_x b \\[2pt]
\frac{1}{2}a D_x & \frac{1}{2}(D_x b + b D_x) & \frac{3}{2}c D_x + c_x \\[2pt]
b D_x & \frac{3}{2}D_x c - c_x & (b^2-ac)D_x + D_x(b^2-ac)
\end{pmatrix} \]
with the Hamiltonian $H_1 = \int c\,dx$, and the homogeneous third-order Hamiltonian operator
\[ A_2 = D_x \begin{pmatrix}
0 & 0 & D_x \\
0 & D_x & -D_x a \\
D_x & -a D_x & D_x b + b D_x + a D_x a
\end{pmatrix} D_x \]
with the nonlocal Hamiltonian
\[ H_2 = -\int \Bigl( \tfrac{1}{2}\,a\,(D_x^{-1}b)^2 + (D_x^{-1}b)(D_x^{-1}c) \Bigr)\,dx. \]
Both operators are of Dubrovin–Novikov type [5, 6]. This means that the operators
are homogeneous with respect to the grading |Dx | = 1. It follows that the operators
are form-invariant under point transformations of the dependent variables, ui =
ui (ũj ). Here and in what follows we will use the letters ui to denote the dependent
variables (a, b, c). Under such transformations, the coefficients of the operators
transform as differential-geometric objects.
The operator $A_1$ has the general structure
\[ A_1 = g_1^{ij} D_x + \Gamma^{ij}_k u^k_x, \]
where the covariant metric $g_{1\,ij}$ is flat, $\Gamma^{ij}_k = g_1^{is}\Gamma^j_{sk}$ (here $g_1^{is}$ is the inverse matrix representing the contravariant metric induced by $g_{1\,ij}$), and $\Gamma^j_{sk}$ are the usual Christoffel symbols of $g_{1\,ij}$.

The operator $A_2$ has the general structure
\[ A_2 = D_x\,\bigl(g_2^{ij} D_x + c^{ij}_k u^k_x\bigr)\,D_x, \]


where the inverse g2 ij of the leading term transforms as a covariant pseudoRiemannian metric. From now on we drop the subscript 2 for the metric of A2 .
It was proved in [8] that, if we set $c_{ijk} = g_{iq}g_{jp}c^{pq}_k$, then
\[ c_{ijk} = \tfrac{1}{3}\,(g_{ik,j} - g_{ij,k}) \]
and the metric fulfills the following identity:
\[ g_{mk,n} + g_{kn,m} + g_{mn,k} = 0. \]


This means that the metric is a Monge metric [8]. In particular, its coefficients are quadratic in the variables $u^i$. It is easy to input the two operators in CDE. Let us start with $A_1$: we may define its entries one by one as follows
operator a1;
for all psi let a1(1,1,psi) = - (3/2)*td(psi,x);
for all psi let a1(1,2,psi) = (1/2)*td(a*psi,x);

We could also use a specialized REDUCE package for the computation of the Christoffel symbols, like RedTen or GRG. Assuming that the operators gamma_hi(i,j,k) have been defined equal to $\Gamma^{ij}_k$ and computed in the system using the inverse matrix $g_{ij}$ of the leading coefficient contravariant metric
\[ g^{ij} = \begin{pmatrix}
-\frac{3}{2} & \frac{1}{2}a & b \\[2pt]
\frac{1}{2}a & b & \frac{3}{2}c \\[2pt]
b & \frac{3}{2}c & 2(b^2-ac)
\end{pmatrix}, \]
then, provided we defined a list dep_var of the dependent variables, we could set
operator gamma_hi_con;
for all i,j let gamma_hi_con(i,j) =
 for k:=1:3 sum gamma_hi(i,j,k)*mkid(part(dep_var,k),!_x);
operator a1$
for all i,j,psi let a1(i,j,psi) =
 gu1(i,j)*td(psi,x) + gamma_hi_con(i,j)*psi;
The third order operator can be reconstructed as follows. Observe that the leading contravariant metric is
\[ g^{ij} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & -a \\ 1 & -a & 2b+a^2 \end{pmatrix}. \]
Introduce the above matrix in REDUCE as gu3. Then set
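The elided step can be sketched with REDUCE's built-in matrix algebra (the names gu3, gl3 follow the text; the explicit inverse below is easily checked by hand):

```reduce
% Contravariant metric of the third-order operator A2;
% unset entries of a REDUCE matrix default to 0.
matrix gu3(3,3),gl3(3,3);
gu3(1,3):=1$ gu3(2,2):=1$ gu3(2,3):=-a$
gu3(3,1):=1$ gu3(3,2):=-a$ gu3(3,3):=2*b+a**2$
% Covariant (Monge) metric: the matrix inverse,
% gl3 = mat((-2*b,a,1),(a,1,0),(1,0,0))
gl3:=1/gu3;
```

Note that the entries of gl3 are at most quadratic in the dependent variables, in agreement with the Monge metric property.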
and define $c_{ijk}$ as
operator c_lo$
for i:=1:3 do
 for j:=1:3 do
  for k:=1:3 do
   c_lo(i,j,k):=(1/3)*(df(gl3(k,i),part(dep_var,j))
    - df(gl3(j,i),part(dep_var,k)))$
(The example file contains procedures for computing all these objects.)
Then define $c^{ij}_k$ as
operator c_hi$
for i:=1:ncomp do
for j:=1:ncomp do
for k:=1:ncomp do
for m:=1:ncomp join
for n:=1:ncomp collect
Introduce the contracted operator
operator c_hi_con$
for i:=1:ncomp do
for j:=1:ncomp do
templist:=for k:=1:ncomp collect
Finally, define the operator A2
operator aa2$
for all i,j,psi let aa2(i,j,psi) =
Now, we can test the Hamiltonian property of A1 , A2 and their compatibility:


Needless to say, the result of the last three commands is a list of zeros.
We observe that the same software can be used to prove the bi-Hamiltonian property of a 6-component WDVV system [26].


Schouten bracket of multidimensional operators

The formulae (16.56), (16.57) also hold in the case of multidimensional operators, i.e., operators with total derivatives with respect to more than one independent variable. Here we consider one Hamiltonian operator $H$ and two more variational bivectors $P_1$, $P_2$; all operators are of Dubrovin–Novikov type (homogeneous). We check the compatibility by computing $[H, P_1]$ and $[H, P_2]$. Such computations are standard for the problem of computing the Hamiltonian cohomology of $H$.
This example has been provided by M. Casati; the computation is contained in the corresponding example file. The dependent variables are $p^1$, $p^2$.
Let us set


\[ H = \begin{pmatrix} D_x & 0 \\ 0 & D_y \end{pmatrix}, \qquad
P_1 = \begin{pmatrix} P_1^{11} & P_1^{12} \\ P_1^{21} & P_1^{22} \end{pmatrix}, \]
where $f$ and $g$ are functions of $p^1$, $p^2$ and
\begin{align*}
P_1^{11} ={}& 2\frac{\partial g}{\partial p^1}\,p^2_y D_x + \frac{\partial g}{\partial p^1}\,p^2_{xy} + \frac{\partial^2 g}{\partial p^1\partial p^2}\,p^2_x p^2_y + \frac{\partial^2 g}{\partial (p^1)^2}\,p^1_x p^2_y;\\
P_1^{12} ={}& f D_x^2 - g D_y^2 + \frac{\partial f}{\partial p^1}\,p^1_x D_x - \Bigl(\frac{\partial g}{\partial p^2}\,p^2_y + 2\frac{\partial g}{\partial p^1}\,p^1_y\Bigr) D_y\\
& - \frac{\partial^2 g}{\partial (p^1)^2}\,p^1_y p^1_y - \frac{\partial^2 g}{\partial p^1\partial p^2}\,p^1_y p^2_y - \frac{\partial g}{\partial p^1}\,p^1_{2y};\\
P_1^{21} ={}& -f D_x^2 + g D_y^2 + \frac{\partial g}{\partial p^2}\,p^2_y D_y - \Bigl(\frac{\partial f}{\partial p^1}\,p^1_x + 2\frac{\partial f}{\partial p^2}\,p^2_x\Bigr) D_x\\
& - \frac{\partial^2 f}{\partial (p^2)^2}\,p^2_x p^2_x - \frac{\partial^2 f}{\partial p^1\partial p^2}\,p^1_x p^2_x - \frac{\partial f}{\partial p^2}\,p^2_{2x};\\
P_1^{22} ={}& 2\frac{\partial f}{\partial p^2}\,p^1_x D_y + \frac{\partial f}{\partial p^2}\,p^1_{xy} + \frac{\partial^2 f}{\partial p^1\partial p^2}\,p^1_x p^1_y + \frac{\partial^2 f}{\partial (p^2)^2}\,p^1_x p^2_y;
\end{align*}

and let $P_2 = P_1^T$. This is implemented as follows:
for all psi let aa2(1,1,psi) =
2*df(g,p1)*p2_y*td(psi,x) + df(g,p1)*p2_xy*psi
+ df(g,p1,p2)*p2_x*p2_y*psi + df(g,p1,2)*p1_x*p2_y*psi;
for all psi let aa2(1,2,psi) =
f*td(psi,x,2) - g*td(psi,y,2) + df(f,p1)*p1_x*td(psi,x)
- (df(g,p2)*p2_y + 2*df(g,p1)*p1_y)*td(psi,y)
- df(g,p1,2)*p1_y*p1_y*psi - df(g,p1,p2)*p1_y*p2_y*psi
- df(g,p1)*p1_2y*psi;
for all psi let aa2(2,1,psi) =
- f*td(psi,x,2) + g*td(psi,y,2)
+ df(g,p2)*p2_y*td(psi,y)
- (df(f,p1)*p1_x+2*df(f,p2)*p2_x)*td(psi,x)
- df(f,p2,2)*p2_x*p2_x*psi - df(f,p1,p2)*p1_x*p2_x*psi
- df(f,p2)*p2_2x*psi;
for all psi let aa2(2,2,psi) =
 2*df(f,p2)*p1_x*td(psi,y)
 + df(f,p2)*p1_xy*psi + df(f,p1,p2)*p1_x*p1_y*psi
 + df(f,p2,2)*p1_x*p2_y*psi;
for all psi let aa3(1,1,psi) = aa2(1,1,psi);
for all psi let aa3(1,2,psi) = aa2(2,1,psi);
for all psi let aa3(2,1,psi) = aa2(1,2,psi);
for all psi let aa3(2,2,psi) = aa2(2,2,psi);



Let us check the skew-adjointness of the above bivectors:
for i:=1:2 do write sym1(i) + aa1_star_sf(i);
for i:=1:2 do write sym2(i) + aa2_star_sf(i);
for i:=1:2 do write sym3(i) + aa3_star_sf(i);

Of course the last three commands produce two zeros each.
Let us compute Schouten brackets.
sb11(1) is trivially a list of zeros, while sb12(1) is nonzero and sb13(1) is
again zero.
More formulae are currently being implemented in the system, such as symplecticity and the Nijenhuis condition for recursion operators [13]. Interested readers are warmly invited to contact R. Vitolo with questions and feature requests.


Non-local operators

In this section we will show an experimental way to find nonlocal operators. The word ‘experimental’ refers to the lack of a comprehensive mathematical theory of nonlocal operators; in particular, a theoretical framework for Schouten brackets of nonlocal operators in the odd-variable language is still missing.
In any case, we will achieve the results by means of a covering of the cotangent covering. Indeed, it can be proved that there is a one-to-one correspondence between (higher) symmetries of the initial equation and conservation laws on the cotangent covering. Such conservation laws provide new potential variables, hence a covering (see [2] for theoretical details on coverings).
In Section 16.12.25 we will also discuss a procedure for finding conservation laws
from their generating functions that is of independent interest.


Non-local Hamiltonian operators for the Korteweg–de Vries

Here we will compute some nonlocal Hamiltonian operators for the KdV equation.
The result of the computation (without the details below) has been published in



We have to solve equations of the type ddx(ct)-ddt(cx) as in 16.12.13. The
main difference is that we will attempt a solution on the `∗ -covering (see Subsection 16.12.14). For this reason, first of all we have to determine covering variables
with the usual mechanism of introducing them through conservation laws, this time
on the `∗ -covering.
As a first step, let us compute conservation laws on the `∗ -covering whose components are linear in the p’s. This computation can be found in the file kdv_nlcl1
and related results and debug files.
The conservation laws that we are looking for are in one-to-one correspondence with symmetries of the initial equation [12]. We will look for conservation laws which correspond to the Galilean boost, the x-translation, and the t-translation at the same time. In the case of 2 independent variables and 1 dependent variable, one can prove that one component of such conservation laws can always be written as sym*p, as follows:
c1x:=(t*u_x+1)*p$ % degree 1
c2x:=u_x*p$ % degree 4
c3x:=(u*u_x+u_3x)*p$ % degree 6
The second component must be found by solving an equation. To this aim we
produce the ansatz
% degree 6
c2t:=(for each el in linodd6 sum (c(ctel:=ctel+1)*el))$
% degree 8
c3t:=(for each el in linodd8 sum (c(ctel:=ctel+1)*el))$
where we already introduced the sets linodd6 and linodd8 of 6-th and 8-th
degree monomials which are linear in odd variables (see the source code). For the
first conservation law solutions of the equation
equ 1:=td(c1t,x) - td(c1x,t);
are found by hand due to the presence of ‘t’ in the symmetry:
We also have the equations
equ 2:=td(c2t,x)-td(c2x,t);

equ 3:=td(c3t,x)-td(c3x,t);
They are solved in the usual way (see the source code of the example and the results
file kdv_nlcl1_res).
Now, we solve the equation for shadows of nonlocal symmetries in a covering of the ℓ∗-covering (source file kdv_nlho1). We can produce such a covering by introducing three new nonlocal (potential) variables r1, r2, r3. We are going to look for nonlocal Hamiltonian operators depending linearly on one of these variables. To this aim we modify the odd part of the equation to include the components of the above conservation laws as the derivatives of the new nonlocal variables r1, r2, r3:
p*(t*u_x + 1),
p*t*u*u_x + p*t*u_3x + p*u + p_2x*t*u_x + p_2x
- p_x*t*u_2x,
p*u*u_x + p*u_3x + p_2x*u_x - p_x*u_2x,
p*(u*u_x + u_3x),
p*u**2*u_x + 2*p*u*u_3x + 3*p*u_2x*u_x + p*u_5x
+ p_2x*u*u_x + p_2x*u_3x - p_x*u*u_2x
- p_x*u_4x - p_x*u_x**2}$
The scale degree analysis of the local Hamiltonian operators of the KdV equation leads to the formulation of the ansatz
phi:=(for each el in linodd sum (c(ctel:=ctel+1)*el))$
where linodd is the list of graded monomials which are linear in odd variables and have degree 7 (see the source file). The equation for shadows of nonlocal symmetries in the ℓ∗-covering
equ 1:=td(phi,t)-u*td(phi,x)-u_x*phi-td(phi,x,3);
is solved in the usual way, obtaining (in odd variables notation):
phi := (c(5)*(4*p*u*u_x + 3*p*u_3x + 18*p_2x*u_x
+ 12*p_3x*u + 9*p_5x + 4*p_x*u**2
+ 12*p_x*u_2x - r2*u_x))/4$
Higher non-local Hamiltonian operators could also be found [12]. The CRACK
approach also holds for non-local computations.




Non-local recursion operator for the Korteweg–de Vries

Following the ideas in [12], a differential operator that sends symmetries into symmetries can be found as a shadow of symmetry on the ℓ-covering of the KdV equation, with the further condition that the shadows must be linear in the covering q-variables. The tangent covering of the KdV equation is
\[ u_t = u_{xxx} + uu_x, \qquad q_t = u_x q + u q_x + q_{xxx}, \]
and we have to solve the equation $\bar\ell_{KdV}(\phi) = 0$, where $\bar\ell_{KdV}$ means that the linearization of the KdV equation is lifted over the tangent covering.
This example closely follows the computational scheme presented in [16].
Usually, recursion operators are nonlocal: operators of the form $D_x^{-1}$ appear in their expression. Geometrically, we interpret this kind of operator as follows. We introduce a conservation law on the cotangent covering of the form
\[ \omega = r^x\,dx + r^t\,dt, \qquad r^x = q, \quad r^t = uq + q_{xx}. \]
It has the remarkable feature of being linear with respect to the q-variables. A nonlocal variable $r$ can be introduced as a potential of $\omega$, via $r_x = r^x$, $r_t = r^t$. A computation of shadows of symmetries on the system of PDEs
\[ u_t = u_{xxx} + uu_x, \qquad q_t = u_x q + u q_x + q_{xxx}, \qquad r_t = uq + q_{xx}, \qquad r_x = q \]
yields, analogously to the previous computations,
2*c(5)*q*u + 3*c(5)*q_2x + c(5)*r*u_x + c(2)*q.
The summand q stands for the identity operator, which is (and must be!) always a solution; the other solution corresponds to the Lenard–Magri recursion operator
\[ 3D_x^2 + 2u + u_x D_x^{-1}. \]
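As a consistency check, applying this operator to the generating function $u_x$ of $x$-translations reproduces, up to the factor 3, the right-hand side of the KdV equation in the form used here:

```latex
\[
\bigl(3D_x^2 + 2u + u_x D_x^{-1}\bigr)(u_x)
  = 3u_{xxx} + 2uu_x + u_x\,u
  = 3\,(u_{xxx} + uu_x),
\]
```

since $D_x^{-1}(u_x) = u$.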


Non-local Hamiltonian-recursion operators for Plebanski

The Plebanski (or second Heavenly) equation
\[ F = u_{tt}u_{xx} - u_{tx}^2 + u_{xz} + u_{ty} = 0 \]


is Lagrangian. This means that its linearization is self-adjoint: $\ell_F = \ell_F^*$, so that the tangent and cotangent coverings coincide, the odd equation being
\[ \ell_F(p) = p_{xz} + p_{ty} - 2u_{tx}p_{tx} + u_{xx}p_{tt} + u_{tt}p_{xx} = 0. \]


It is not difficult to realize that the above equation can be written in explicitly conservative form:
\begin{align*}
& p_{xz} + p_{ty} + u_{tt}p_{xx} + u_{xx}p_{tt} - 2u_{tx}p_{tx}\\
&\qquad = D_x(p_z + u_{tt}p_x - u_{tx}p_t) + D_t(p_y + u_{xx}p_t - u_{tx}p_x) = 0,
\end{align*}
thus the corresponding conservation law is
\[ \upsilon(1) = (p_y + u_{xx}p_t - u_{tx}p_x)\,dx\wedge dy\wedge dz + (u_{tx}p_t - p_z - u_{tt}p_x)\,dt\wedge dy\wedge dz. \tag{16.69} \]
We can introduce a potential r for the above 2-component conservation law.
Namely, we can assume that
\[ r_x = p_y + u_{xx}p_t - u_{tx}p_x, \qquad r_t = u_{tx}p_t - p_z - u_{tt}p_x. \]
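The compatibility of these two relations is guaranteed precisely by the odd equation: a direct computation gives

```latex
\[
D_t(r_x) - D_x(r_t)
  = p_{xz} + p_{ty} - 2u_{tx}p_{tx} + u_{xx}p_{tt} + u_{tt}p_{xx}
  = \ell_F(p) = 0 .
\]
```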


This is a new nonlocal variable for the (co)tangent covering of the Plebanski equation. We can load the Plebanski equation together with its nonlocal variable r as
% rhs of the equations that define the nonlocal variable
rt:= - p_z - u_2t*p_x + u_tx*p_t$
rx:= p_y + u_2x*p_t - u_tx*p_x$
% We add conservation laws as new nonlocal odd variables;
We can easily verify the integrability condition for the new nonlocal variable:
td(r,t,x) - td(r,x,t);
the result is 0.
Now, we look for nonlocal recursion operators in the tangent covering using the
new nonlocal odd variable r. We can load the equation exactly as before. We look
for recursion operators which depend on r (which has scale degree 4); we produce
the following ansatz for phi:
phi:=(for each el in linodd sum (c(ctel:=ctel+1)*el))$
then we solve the equation of shadows of symmetries:
equ 1:=td(phi,x,z)+td(phi,t,y)-2*u_tx*td(phi,t,x)
 + u_2x*td(phi,t,2) + u_2t*td(phi,x,2);
The solution is
phi := c(28)*r + c(1)*p
hence we obtain the identity operator p and the new nonlocal operator r. It can be
proved that changing coordinates to the evolutionary presentation yields the local
operator (which has a much more complex expression than the identity operator)
and one of the nonlocal operators of [?]. More details on this computation can be
found in [?].


Appendix: old versions of CDE

A short version history is provided here.
CDE 1.0 This version was published in October 2014. It was programmed in
REDUCE’s algebraic mode, so its capabilities were limited, and its speed was
severely affected by the systematic use of the package assist for manipulating
algebraic lists. Its features were:
1. CDE 1.0 is able to do standard computations in integrable systems like determining systems for generalized symmetries and conservation laws.
2. CDE 1.0 is able to compute linear overdetermined systems of partial differential equations whose solutions are Hamiltonian operators.
3. CDE is able to compute Schouten brackets between bivectors. This can be used, e.g., to check the Hamiltonian property of an operator, or the compatibility of two Hamiltonian operators.

CDE 1.0 has never been included in the official REDUCE distribution, and it is still available at [1].

[1] Geometry of Differential Equations web site:
[2] A.M. Verbovetsky and A.M. Vinogradov: Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, I.S. Krasil′shchik and A.M. Vinogradov eds., Translations of Math. Monographs 182, Amer. Math. Soc. (1999).
[3] D. Baldwin, W. Hereman, A symbolic algorithm for computing recursion operators of nonlinear partial differential equations, International Journal of Computer Mathematics, vol. 87 (5), pp. 1094–1119 (2010).
[4] B.A. Dubrovin, Geometry of 2D topological field theories, Lecture Notes in Math. 1620, Springer-Verlag (1996) 120–348.
[5] B.A. Dubrovin and S.P. Novikov, Hamiltonian formalism of one-dimensional systems of hydrodynamic type and the Bogolyubov–Whitham averaging method, Soviet Math. Dokl. 27 No. 3 (1983) 665–669.
[6] B.A. Dubrovin and S.P. Novikov, Poisson brackets of hydrodynamic type, Soviet Math. Dokl. 30 No. 3 (1984), 651–654.
[7] E.V. Ferapontov, C.A.P. Galvao, O. Mokhov, Y. Nutku, Bi-Hamiltonian structure of equations of associativity in 2-d topological field theory, Comm. Math. Phys. 186 (1997) 649–669.
[8] E.V. Ferapontov, M.V. Pavlov, R.F. Vitolo, Projective-geometric aspects of homogeneous third-order Hamiltonian operators, J. Geom. Phys. 85 (2014) 16–28, DOI: 10.1016/j.geomphys.2014.05.027.
[9] E.V. Ferapontov, M.V. Pavlov, R.F. Vitolo, Towards the classification of homogeneous third-order Hamiltonian operators, http://arxiv.
[10] E. Getzler, A Darboux theorem for Hamiltonian operators in the formal calculus of variations, Duke J. Math. 111 (2002), 535–560.
[11] S. Igonin, A. Verbovetsky, R. Vitolo: Variational Multivectors and Brackets in the Geometry of Jet Spaces, V Int. Conf. on Symmetry in Nonlinear Mathematical Physics, Kyiv 2003; Part 3 of Volume 50 of Proceedings of Institute of Mathematics of NAS of Ukraine, Editors A.G. Nikitin, V.M. Boyko, R.O. Popovych and I.A. Yehorchenko (2004), 1335–1342.
[12] P.H.M. Kersten, I.S. Krasil′shchik, A.M. Verbovetsky, Hamiltonian operators and ℓ∗-covering, Journal of Geometry and Physics 50 (2004).
[13] P.H.M. Kersten, I.S. Krasil′shchik, A.M. Verbovetsky, A geometric study of the dispersionless Boussinesq equation, Acta Appl. Math. 90 (2006), 143–178.
[14] P.H.M. Kersten, I.S. Krasil′shchik, A.M. Verbovetsky, R. Vitolo, Hamiltonian structures for general PDEs, Differential equations: Geometry, Symmetries and Integrability. The Abel Symposium 2008 (B. Kruglikov, V.V. Lychagin, and E. Straume, eds.), Springer-Verlag, 2009, pp. 187–198.
[15] I.S. Krasil′shchik, A.M. Verbovetsky, Geometry of jet spaces and integrable systems, J. Geom. Phys. (2011), doi:10.1016/j.geomphys.2010.10.012, arXiv:1002.0077.
[16] I. Krasil′shchik, A. Verbovetsky, R. Vitolo, A unified approach to computation of integrable structures, Acta Appl. Math. (2012).
[17] I. Krasil′shchik, A. Verbovetsky, R. Vitolo, The symbolic computation of integrability structures for partial differential equations, book, to appear in the Springer series “Texts and Monographs in Symbolic Computation” (2017).
[18] B. Kuperschmidt: Geometric Hamiltonian forms for the Kadomtsev–Petviashvili and Zabolotskaya–Khokhlov equations, in Geometry in Partial Differential Equations, A. Prastaro, Th.M. Rassias eds., World Scientific (1994), 155–172.
[19] M. Marvan, Sufficient set of integrability conditions of an orthonomic system, Foundations of Computational Mathematics 9 (2009), 651–674.
[20] F. Neyzi, Y. Nutku, and M.B. Sheftel, Multi-Hamiltonian structure of Plebanski's second heavenly equation, J. Phys. A: Math. Gen. 38 (2005), 8473. arXiv:nlin/0505030v2.
[21] A.C. Norman, R. Vitolo, Inside Reduce, part of the official REDUCE documentation included in the source code, see below.
[22] M.C. Nucci, Interactive REDUCE programs for calculating classical, nonclassical, and approximate symmetries of differential equations, in Computational and Applied Mathematics II. Differential Equations, W.F. Ames and P.J. Van der Houwen, Eds., Elsevier, Amsterdam (1992), pp. 345–350.
[23] M.C. Nucci, Interactive REDUCE programs for calculating Lie point, non-classical, Lie–Bäcklund, and approximate symmetries of differential equations: manual and floppy disk, in CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 3, N.H. Ibragimov, Ed., CRC Press, Boca Raton (1996), pp. 415–481.
[24] F. Oliveri, ReLie, REDUCE software and user guide.
[25] P. Olver, Applications of Lie Groups to Partial Differential Equations, 2nd ed., GTM Springer, 1992.
[26] M.V. Pavlov, R.F. Vitolo: On the bi-Hamiltonian geometry of the WDVV equations.
[27] G. Saccomandi, R. Vitolo: On the Mathematical and Geometrical Structure of the Determining Equations for Shear Waves in Nonlinear Isotropic Incompressible Elastodynamics, J. Math. Phys. 55 (2014), 081502.
[28] T. Wolf, A comparison of four approaches to the calculation of conservation laws, Euro. Jnl of Applied Mathematics 13 part 2 (2002), 129–152.
[29] T. Wolf, APPLYSYM - a package for the application of Lie-symmetries, software distributed together with the computer algebra system REDUCE.
[30] T. Wolf, A. Brand, Investigating DEs with CRACK and Related Programs, SIGSAM Bulletin, Special Issue (June 1995), pp. 1–8.
[31] T. Wolf, An efficiency improved program LIEPDE for determining Lie-symmetries of PDEs, Proc. of Modern Group Analysis: advanced analytical and computational methods in mathematical physics, Catania, Italy, Oct. 1992, Kluwer Acad. Publ. (1993), 377–385.
[32] T. Wolf, A. Brand: CRACK, user guide, examples and documentation. For applications, see also the publications of T. Wolf.




CDIFF: A package for computations in geometry of
Differential Equations

Authors: P. Gragert, P.H.M. Kersten, G. Post and G. Roelofs.
Author of this Section: R. Vitolo.
We describe CDIFF, a Reduce package for computations in geometry of Differential Equations (DEs, for short) developed by P. Gragert, P.H.M. Kersten, G. Post
and G. Roelofs from the University of Twente, The Netherlands.
The package is part of the official REDUCE distribution at Sourceforge [1], but it is also distributed on the Geometry of Differential Equations web site (GDEQ for short) [2].
We start from an installation guide for Linux and Windows. Then we focus on concrete usage recipes for the computation of higher symmetries, conservation laws,
Hamiltonian and recursion operators for polynomial differential equations. All
programs discussed here are shipped together with this manual and can be found
at the GDEQ website. The mathematical theory on which computations are based
can be found in refs. [11, 12].
NOTE: The new REDUCE package CDE [14], also distributed on the GDEQ web site, simplifies the use of CDIFF and extends its capabilities. Interested users may read the manual of CDE, where the same computations described here for CDIFF are done in a simpler way, and further capabilities allow CDE to solve a greater variety of problems.



This brief guide refers to using CDIFF, a set of symbolic computation programs
devoted to computations in geometry of DEs and developed by P. Gragert, P.H.M.
Kersten, G. Post and G. Roelofs at the University of Twente, The Netherlands.
Initially, the development of the CDIFF packages was started by Gragert and Kersten for symmetry computations in DEs; later they were partly rewritten and extended by Roelofs and Post. The CDIFF packages consist of 3 program files plus
a utility file; only the main three files are documented [7, 8, 9]. The CDIFF packages, as well as a copy of the documentation (including this manual) and several
example programs, can be found both at Sourceforge in the sources of REDUCE
[1] and in the Geometry of Differential Equations (GDEQ for short) web site [2].
The name of the packages, CDIFF, comes from the fact that the package is aimed at defining differential operators in total derivatives and at doing computations involving them. Such operators are called C-differential operators (see [11]).
The main motivation for writing this manual was that REDUCE 3.8 recently became free software, and can be downloaded at [1]. For this reason, we are able
to make our computations accessible to a wider public, also thanks to the inclusion
of CDIFF in the official REDUCE distribution. The readers are warmly invited
to send questions, comments, etc., both on the computations and on the technical
aspects of installation and configuration of REDUCE, to the author of the present section.
Acknowledgements. My warmest thanks go to Paul H.M. Kersten, who explained to me how to use the CDIFF packages for several computations of interest in the Geometry of Differential Equations. I would also like to thank I.S. Krasil′shchik and A.M. Verbovetsky for constant support and stimulating discussions which led me to write this text.


Computing with CDIFF

In order to use CDIFF it is necessary to load the package by the command
All programs that we will discuss in this manual can be found inside the subfolder
examples in the folder which contains this manual. In order to run them just do
in "";
at the REDUCE command prompt.
There are some conventions that I adopted in writing programs which use CDIFF.
• Program files have the extension .red. This will load automatically the
reduce-ide mode in emacs (provided you made the installation steps described in the reduce-ide guides).
• Program files have names of the following form:
where equationname stands for the shortened name of the equation (e.g. Korteweg–de Vries is always indicated by KdV), typeofcomputation stands for the type of geometric object which is computed with the given file (for example symmetries, Hamiltonian operators, etc.), and version is a version number.
• More specific information, like the date and more details on the computation done in each version, is included as comment lines at the very beginning of each file.



Now we describe some examples of computations with CDIFF. The parts which are shared between all examples are described only once. We stress that
all computations presented in this document are included in the official REDUCE
distribution and can be also downloaded at the GDEQ website [2]. The examples
can be run with REDUCE by typing in ""; at the REDUCE
prompt, as explained above.
Remark. The mathematical theories on which the computations are based can be
found in [11, 12].
Higher symmetries
In this section we show the computation of (some) higher symmetries of Burgers' equation $B = u_t - u_{xx} + 2uu_x = 0$. The corresponding program file and the results of the computation are included among the example files.
The idea underlying this computation is that one can use the scale symmetries of
Burgers’ equation to assign “gradings” to each variable appearing in the equation.
As a consequence, one can try different ansätze for symmetries with polynomial generating functions. For example, it is possible to require that they are sums of monomials of given degrees. This ansatz yields a simplification of the equations for symmetries, because it is possible to solve them in a “graded” way, i.e., it is possible to split them into several equations made of the homogeneous components of the equation for symmetries with respect to gradings.
In particular, Burgers' equation translates into the following dimensional equation:
\[ [u_t] = [u_{xx}] = [2uu_x]. \]

By the rules [uz ] = [u] − [z] and [uv] = [u] + [v], and choosing [x] = −1, we
have [u] = 1 and [t] = −2. This will be used to generate the list of homogeneous
monomials of given grading to be used in the ansatz about the structure of the
generating function of the symmetries.
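For instance, with these choices the first jet variables have degrees $[u]=1$, $[u_x]=2$, $[u_{xx}]=3$, so that the homogeneous monomials of degree 3 are

```latex
\[
\{\, u_{xx},\; u\,u_x,\; u^3 \,\},
\]
```

which is the kind of list produced below by mkvarlist1.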
The following instructions initialize the total derivatives. The first string is the
name of the vector field, the second item is the list of even variables (note that
u1, u2, ... are ux , uxx , . . . ), the third item is the list of odd (non-commuting)
variables (‘ext’ stands for ‘external’ like in external (wedge) product). Note that in
this example odd variables are not strictly needed, but it is better to insert some of
them for syntax reasons.
{ext 1,ext 2,ext 3,ext 4,ext 5,ext 6,ext 7,ext 8,ext 9,
ext 10,ext 11,ext 12,ext 13,ext 14,ext 15,ext 16,ext 17,
ext 18,ext 19,ext 20,ext 21,ext 22,ext 23,ext 24,ext 25,









{ext 1,ext 2,ext 3,ext 4,ext 5,ext 6,ext 7,ext 8,ext 9,
ext 10,ext 11,ext 12,ext 13,ext 14,ext 15,ext 16,ext 17,
ext 18,ext 19,ext 20,ext 21,ext 22,ext 23,ext 24,ext 25,
ext 26,ext 27,ext 28,ext 29,ext 30,ext 31,ext 32,ext 33,
ext 34,ext 35,ext 36,ext 37,ext 38,ext 39,ext 40,ext 41,
ext 42,ext 43,ext 44,ext 45,ext 46,ext 47,ext 48,ext 49,
ext 50,ext 51,ext 52,ext 53,ext 54,ext 55,ext 56,ext 57,
ext 58,ext 59,ext 60,ext 61,ext 62,ext 63,ext 64,ext 65,
ext 66,ext 67,ext 68,ext 69,ext 70,ext 71,ext 72,ext 73,
ext 74,ext 75,ext 76,ext 77,ext 78,ext 79,ext 80}



Specification of the vectorfield ddx. The meaning of the first index is the parity of variables; in particular, here we have just even variables. The second index parametrizes the second item (list) in the super_vectorfield declaration. More precisely, ddx(0,1) stands for ∂/∂x, ddx(0,2) stands for ∂/∂t, ddx(0,3) stands for ∂/∂u, ddx(0,4) stands for ∂/∂ux, . . . , and all coordinates x, t, u, ux, . . . , are treated as even coordinates. Note that ‘$’ suppresses the output.



The string letop is treated as a variable; if it appears during computations it is likely that we went too close to the highest order variables that we defined in the file. This could mean that we need to extend the operators and the variable list. In case of large output, one can search it for the string letop to check whether errors occurred.
Specification of the vectorfield ddt. In the evolutionary case we never have more than one time derivative; the other derivatives are utxxx···.
We now give the equation in the form of one of the derivatives equated to a right-hand side expression. The left-hand side derivative is called principal, and the remaining derivatives are called parametric. For scalar evolutionary equations with two independent variables, the internal variables are of the type (t, x, u, ux, uxx, . . .).

This terminology dates back to Riquier, see [13]

ut1:=ddx ut;
ut2:=ddx ut1;
ut3:=ddx ut2;
ut4:=ddx ut3;
ut5:=ddx ut4;
ut6:=ddx ut5;
ut7:=ddx ut6;
ut8:=ddx ut7;
ut9:=ddx ut8;
ut10:=ddx ut9;
ut11:=ddx ut10;
ut12:=ddx ut11;
ut13:=ddx ut12;
ut14:=ddx ut13;
Test for verifying the commutation of total derivatives. Highest order defined terms
may yield some letop.
operator ev;
for i:=1:17 do write ev(0,i):=ddt(ddx(0,i))-ddx(ddt(0,i));
This is the list of variables with respect to their grading, starting from degree one.
This is the list of all monomials of degree 0, 1, 2, . . . which can be constructed from
the above list of elementary variables with their grading.
grd1:= mkvarlist1(1,1)$
grd2:= mkvarlist1(2,2)$
grd3:= mkvarlist1(3,3)$
grd4:= mkvarlist1(4,4)$
grd5:= mkvarlist1(5,5)$
grd6:= mkvarlist1(6,6)$
grd7:= mkvarlist1(7,7)$
grd8:= mkvarlist1(8,8)$
grd9:= mkvarlist1(9,9)$
grd10:= mkvarlist1(10,10)$



Initialize a counter ctel for arbitrary constants c; initialize equations:
operator c,equ;
We assume a generating function sym, independent of x and t, of degree ≤ 5.
sym:=(for each el in grd1 sum (c(ctel:=ctel+1)*el)) +
 (for each el in grd2 sum (c(ctel:=ctel+1)*el)) +
 (for each el in grd3 sum (c(ctel:=ctel+1)*el)) +
 (for each el in grd4 sum (c(ctel:=ctel+1)*el)) +
 (for each el in grd5 sum (c(ctel:=ctel+1)*el))$






This is the equation ℓ̄B(sym) = 0, where B = 0 is Burgers’ equation and sym is the generating function. From now on all equations are arranged in a single vector whose name is equ.
equ 1:=ddt(sym)-ddx(ddx(sym))-2*u*ddx(sym)-2*u1*sym ;
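Both the commutation test and this linearization can be reproduced outside REDUCE. The following Python/SymPy sketch (our own illustration, not part of CDE; all names are ours) uses Burgers' equation in the form ut = uxx + 2uux, whose linearization is the operator appearing in equ 1:

```python
import sympy as sp

N = 12                       # truncation order of the jet variables
u = sp.symbols('u0:12')      # u0 = u, u1 = u_x, u2 = u_xx, ...

def Dx(e):
    """Truncated total x-derivative in internal coordinates."""
    return sp.expand(sum(sp.diff(e, u[i]) * u[i + 1] for i in range(N - 1)))

# Burgers' equation in the form used here: u_t = u2 + 2*u0*u1,
# so Dt(u_i) = Dx^i(u_t)
f = u[2] + 2 * u[0] * u[1]
ut = [f]
for i in range(1, 8):
    ut.append(Dx(ut[-1]))

def Dt(e):
    """Truncated total t-derivative; valid for low-order expressions."""
    return sp.expand(sum(sp.diff(e, u[i]) * ut[i] for i in range(8)))

# analogue of ev(0,i) := ddt(ddx(0,i)) - ddx(ddt(0,i)): the total
# derivatives commute below the truncation order
for e in (u[0], u[1], u[0] * u[2] + u[1] ** 2):
    assert sp.expand(Dt(Dx(e)) - Dx(Dt(e))) == 0

def lin_burgers(sym):
    """equ 1 of the text: ddt(sym)-ddx(ddx(sym))-2*u*ddx(sym)-2*u1*sym."""
    return sp.expand(Dt(sym) - Dx(Dx(sym)) - 2 * u[0] * Dx(sym)
                     - 2 * u[1] * sym)

# x-translation (sym = u1) and t-translation (sym = u_t) are symmetries
assert lin_burgers(u[1]) == 0
assert lin_burgers(f) == 0
```

The truncation must always stay well above the orders occurring in the expressions, which is exactly the role of letop in CDE.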
This is the list of variables, to be passed to the equation solver.
This is the number of initial equations.
The following procedure uses multi_coeff (from the package tools). It gets
all coefficients of monomials appearing in the initial equation(s). The coefficients
are put into the vector equ after the initial equations.
procedure splitvars i;
begin
ll:=multi_coeff(equ i,vars);
equ(tel:=tel+1):=first ll;
ll:=rest ll;
for each el in ll do equ(tel:=tel+1):=second el;
end;
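The effect of splitvars (and of multi_coeff behind it), namely splitting one polynomial equation into one equation per monomial coefficient, can be imitated in a few lines of SymPy; this is a toy illustration with made-up coefficients, not the actual tools code:

```python
import sympy as sp

u, u1, c1, c2, c3 = sp.symbols('u u1 c1 c2 c3')

# a toy 'equ 1' that must vanish identically in the jet variables u, u1
equ = (c1 - 2 * c2) * u * u1 + (c2 + c3) * u1 ** 2 + (c1 - 1)

# splitvars: one equation per monomial in the chosen variables
coeff_eqs = sp.Poly(equ, u, u1).coeffs()
assert set(coeff_eqs) == {c1 - 2 * c2, c2 + c3, c1 - 1}

# the equation solver then solves the coefficient equations
sol = sp.solve(coeff_eqs, [c1, c2, c3], dict=True)[0]
assert sol == {c1: 1, c2: sp.Rational(1, 2), c3: sp.Rational(-1, 2)}
```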
This command initializes the equation solver. It passes
• the equation vector equ together with its length tel (i.e., the total number
of equations);
• the list of variables with respect to which the system must not split the equations, i.e., variables with respect to which the unknowns are not polynomial.
In this case this list is just {};
• the constants’ vector c, its length ctel, and the number of negative indices,
if any (just 0 in our example);
• the vector of free functions f that may appear in computations. Note that in
{f,0,0 } the second 0 stands for the length of the vector of free functions.
In this example there are no free functions, but the command needs the presence of at least a dummy argument, f in this case. There is also a last zero
which is the negative length of the vector f , just as for constants.
Run the procedure splitvars in order to obtain the equations for the coefficients of each monomial:
splitvars 1;
The next command tells the solver the total number of equations obtained after running splitvars:
put_equations_used tel;
It is worthwhile writing down the equations for the coefficients:
for i:=2:tel do write equ i;
This command solves the equations for the coefficients. Note that we have to skip
the initial equations!
for i:=2:tel do integrate_equation i;



In the folder computations/NewTests/Higher_symmetries it is possible to find the following files:
• the above file, together with its results file;
• higher symmetries of KdV, with the ansatz deg(sym) ≤ 5;
• higher symmetries of KdV, with the ansatz
sym = x*(something of degree 3) + t*(something of degree 5)
+ (something of degree 2).
This yields scale symmetries;
• higher symmetries of KdV, with the ansatz
sym = x*(something of degree 1) + t*(something of degree 3)
+ (something of degree 0).
This yields Galilean boosts.
Local conservation laws
In this section we will find (some) local conservation laws for the KdV equation
F = ut − uxxx − uux = 0. Concretely, we have to find non-trivial 1-forms
f = fx dx + ft dt on F = 0 such that d̄f = 0 on F = 0. “Triviality” of conservation
laws is a delicate matter, for which we invite the reader to have a look in [11].
The files containing this example are the program file and the corresponding results file.
We make use of ddx and ddt, which in the even part are the same as in the previous example (subsection 16.13.2). After defining the total derivatives we prepare
the list of graded variables (recall that in KdV u is of degree 2):
We make the ansatz














for the components of the conservation law. We have to solve the equation
equ 1:=ddt(fx)-ddx(ft);
Note that, since ddx and ddt are expressed in internal coordinates on the equation, the objects that we consider are already restricted to the equation.
We shall split the equation in its graded summands with the procedure splitvars,
then solve it
splitvars 1;
pte tel;
for i:=2:tel do es i;
As a result we get
fx := c(3)*u1 + c(2)*u + c(1)$
ft := (2*c(3)*u*u1 + 2*c(3)*u3 + c(2)*u**2 + 2*c(2)*u2)/2$
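The result can be double-checked outside REDUCE. The following SymPy sketch (our own verification, not part of CDE) confirms that Dt(fx) − Dx(ft) vanishes identically on the KdV equation written as ut = uxxx + uux:

```python
import sympy as sp

N = 10
u = sp.symbols('u0:10')            # u0 = u, u1 = u_x, ...
c1, c2, c3 = sp.symbols('c1 c2 c3')

def Dx(e):
    """Truncated total x-derivative in internal coordinates."""
    return sp.expand(sum(sp.diff(e, u[i]) * u[i + 1] for i in range(N - 1)))

# KdV flow u_t = u3 + u0*u1; Dt(u_i) = Dx^i(u_t)
ut = [u[3] + u[0] * u[1]]
for i in range(1, 6):
    ut.append(Dx(ut[-1]))

def Dt(e):
    return sp.expand(sum(sp.diff(e, u[i]) * ut[i] for i in range(6)))

# the components found by the program
fx = c3 * u[1] + c2 * u[0] + c1
ft = (2 * c3 * u[0] * u[1] + 2 * c3 * u[3]
      + c2 * u[0] ** 2 + 2 * c2 * u[2]) / 2

# equ 1 := ddt(fx) - ddx(ft) must vanish on the equation
assert sp.expand(Dt(fx) - Dx(ft)) == 0
```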
Unfortunately, it is clear that the conservation law corresponding to c(3) is trivial, because it is the total x-derivative of F; its restriction to the infinite prolongation of the KdV equation is zero. Here this fact is evident, but how can one get rid of less evident trivialities by an ‘automatic’ mechanism? We considered this problem in a separate file, where we solved the equations
equ 1:=fx-ddx(f0);
equ 2:=ft-ddt(f0);
after having loaded the values fx and ft found by the previous program. We make
the following ansatz on f0:









Note that this gives a grading which is compatible with the gradings of fx and ft.
After solving the system
for i:=1:2 do begin splitvars i;end;
pte tel;
for i:=3:tel do es i;
issuing the commands
fxnontriv := fx-ddx(f0);
ftnontriv := ft-ddt(f0);
we obtain
fxnontriv := c(2)*u + c(1)$
ftnontriv := (c(2)*(u**2 + 2*u2))/2$
This mechanism can easily be generalized to situations in which the conservation laws found by the program are difficult to treat by pen and paper.
Local Hamiltonian operators
In this section we will find local Hamiltonian operators for the KdV equation ut =
uxxx + uux . Concretely, we have to solve ℓ̄KdV(phi) = 0 over the equation

ut = uxxx + uux
pt = pxxx + upx
or, in geometric terminology, find the shadows of symmetries on the ℓ∗-covering
of the KdV equation. The reference paper for this type of computation is [12].
The file containing this example is
We make use of ddx and ddt, which in the even part are the same as in the previous example (subsection 16.13.2). We stress that the linearization ℓ̄KdV(phi) = 0 is the equation

Dt(phi) − uDx(phi) − ux phi − Dxxx(phi) = 0,

but the total derivatives are lifted to the ℓ∗-covering, hence they must also contain derivatives with respect to the p’s. This is achieved by treating the p variables as odd and introducing the odd parts of ddx and ddt,

ddx(1,3):=ext 4$
ddx(1,4):=ext 5$
ddx(1,5):=ext 6$
ddx(1,6):=ext 7$
ddx(1,7):=ext 8$
ddx(1,8):=ext 9$
ddx(1,9):=ext 10$
ddx(1,10):=ext 11$
ddx(1,11):=ext 12$
ddx(1,12):=ext 13$
ddx(1,13):=ext 14$
ddx(1,14):=ext 15$
ddx(1,15):=ext 16$
ddx(1,16):=ext 17$
ddx(1,17):=ext 18$
ddx(1,18):=ext 19$
ddx(1,19):=ext 20$
In the above definition the first index ‘1’ says that we are dealing with odd variables; ext indicates anticommuting variables. Here, ext 3 is p0 , ext 4 is px ,
ext 5 is pxx , . . . , so ddx(1,3):=ext 4 indicates px ∂/∂p, etc.
Now, remembering that the additional equation is again evolutionary, we can get
rid of pt by letting it be equal to ext 6 + u*ext 4, as follows:
ddt(1,3):=ext 6 + u*ext 4$



Let us make the following ansatz for the Hamiltonian operators:
phi:=
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 3+
(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 3+
(for each el in grd2 sum (c(ctel:=ctel+1)*el))*ext 3+
(for each el in grd3 sum (c(ctel:=ctel+1)*el))*ext 3+
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 4+
(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 4+
(for each el in grd2 sum (c(ctel:=ctel+1)*el))*ext 4+
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 5+
(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 5+
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 6$
Note that we are looking for generating functions of shadows which are linear
with respect to p’s. Moreover, having set [p] = −2 we will look for solutions of
maximal possible degree +1.
After having set
equ 1:=ddt(phi)-u*ddx(phi)-u1*phi-ddx(ddx(ddx(phi)));
we define the procedures splitvars as in subsection 16.13.2 and splitext
as follows:
procedure splitext i;
begin
ll:=operator_coeff(equ i,ext);
equ(tel:=tel+1):=first ll;
ll:=rest ll;
for each el in ll do equ(tel:=tel+1):=second el;
end;

Then we initialize the equations, first with splitext:
splitext 1;
then with splitvars:
for i:=2:tel1 do begin splitvars i;equ i:=0;end;
Now we are ready to solve all equations:
put_equations_used tel;
for i:=2:tel do write equ i:=equ i;
for i:=2:tel do integrate_equation i;
Note that we want all equations to be solved!
The results are the two well-known Hamiltonian operators for the KdV:
phi := c(4)*ext(4) + 3*c(3)*ext(6) + 2*c(3)*ext(4)*u
+ c(3)*ext(3)*u1$
Of course, the results correspond to the operators
ext(4) → Dx ,
3*c(3)*ext(6) + 2*c(3)*ext(4)*u + c(3)*ext(3)*u1 →
3Dxxx + 2uDx + ux
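Both generating functions can be verified independently of REDUCE. The SymPy sketch below (our own check, not part of CDE) implements the total derivatives lifted to the ℓ∗-covering; since phi is linear in the p's, ordinary commuting symbols suffice here:

```python
import sympy as sp

N = 12
u = sp.symbols('u0:12')   # even variables: u0 = u, u1 = u_x, ...
p = sp.symbols('p0:12')   # covering variables: p0 = p, p1 = p_x, ...

def Dx(e):
    """Total x-derivative lifted to the covering (truncated)."""
    return sp.expand(sum(sp.diff(e, u[i]) * u[i + 1] +
                         sp.diff(e, p[i]) * p[i + 1] for i in range(N - 1)))

# lifted flows: u_t = u3 + u0*u1, and p_t = p3 + u0*p1 as in ddt(1,3)
ut = [u[3] + u[0] * u[1]]
pt = [p[3] + u[0] * p[1]]
for i in range(1, 8):
    ut.append(Dx(ut[-1]))
    pt.append(Dx(pt[-1]))

def Dt(e):
    return sp.expand(sum(sp.diff(e, u[i]) * ut[i] +
                         sp.diff(e, p[i]) * pt[i] for i in range(8)))

def lin_kdv(phi):
    """equ 1 of the text: ddt(phi)-u*ddx(phi)-u1*phi-ddx(ddx(ddx(phi)))."""
    return sp.expand(Dt(phi) - u[0] * Dx(phi) - u[1] * phi
                     - Dx(Dx(Dx(phi))))

# ext 3, ext 4, ext 6 of the text correspond to p0, p1, p3 here
assert lin_kdv(p[1]) == 0                                      # Dx
assert lin_kdv(3 * p[3] + 2 * u[0] * p[1] + u[1] * p[0]) == 0  # 3Dxxx+2uDx+ux
```

The anticommuting nature of the p's only matters for expressions that are nonlinear in them, which is why this commutative check is adequate for shadows linear in the p's.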
Note that each operator is multiplied by an arbitrary real constant, c(4) and c(3) respectively.
Non-local Hamiltonian operators
In this section we will show an experimental way to find nonlocal Hamiltonian
operators for the KdV equation. The word ‘experimental’ comes from the lack of a
consistent mathematical theory. The result of the computation (without the details
below) has been published in [12].
We have to solve equations of the type ddx(ft)-ddt(fx) as in 16.13.2. The



main difference is that we will attempt a solution on the ℓ∗-covering (see subsection 16.13.2). For this reason, first of all we have to determine covering variables with the usual mechanism of introducing them through conservation laws, this time on the ℓ∗-covering.
As a first step, let us compute conservation laws on the ℓ∗-covering whose components are linear in the p’s. This computation can be found in the corresponding file and its related results file. When specifying odd variables in ddx and ddt, we have something like
ddx(1,3):=ext 4$
ddx(1,4):=ext 5$
ddx(1,5):=ext 6$
ddx(1,6):=ext 7$
ddx(1,7):=ext 8$
ddx(1,8):=ext 9$
ddx(1,9):=ext 10$
ddx(1,10):=ext 11$
ddx(1,11):=ext 12$
ddx(1,12):=ext 13$
ddx(1,13):=ext 14$
ddx(1,14):=ext 15$
ddx(1,15):=ext 16$
ddx(1,16):=ext 17$
ddx(1,17):=ext 18$
ddx(1,18):=ext 19$
ddx(1,19):=ext 20$
ddx(1,50):=(t*u1+1)*ext 3$ % degree -2
ddx(1,51):=u1*ext 3$ % degree +1
ddx(1,52):=(u*u1+u3)*ext 3$ % degree +3
ddt(1,3):=ext 6 + u*ext 4$

ddt(1,50):=f1*ext 3+f2*ext 4+f3*ext 5$
ddt(1,51):=f4*ext 3+f5*ext 4+f6*ext 5$
ddt(1,52):=f7*ext 3+f8*ext 4+f9*ext 5$
The variables corresponding to the numbers 50, 51, 52 here play a dummy role; the coefficients of the corresponding vector are the unknown generating functions of conservation laws on the ℓ∗-covering. More precisely, we look for conservation laws of the form
fx= phi*ext 3
ft= f1*ext 3+f2*ext 4+f3*ext 5
The ansatz is chosen because, first of all, ext 4 and ext 5 can be removed from
fx by adding a suitable total divergence (trivial conservation law); moreover it can
be proved that phi is a symmetry of KdV. We can write down the equations
equ 1:=ddx(ddt(1,50))-ddt(ddx(1,50));
equ 2:=ddx(ddt(1,51))-ddt(ddx(1,51));
equ 3:=ddx(ddt(1,52))-ddt(ddx(1,52));
However, the above choices make use of a symmetry which contains ‘t’ in the
generator. This would make automatic computations more tricky, but still possible.
In this case the solution of equ 1 has been found by hand and passed to the program,
together with the ansatz on the coefficients for the other equations:
f4:=(for each el in grd5 sum (c(ctel:=ctel+1)*el))$



f5:=(for each el in grd4 sum (c(ctel:=ctel+1)*el))$
f6:=(for each el in grd3 sum (c(ctel:=ctel+1)*el))$
f7:=(for each el in grd7 sum (c(ctel:=ctel+1)*el))$
f8:=(for each el in grd6 sum (c(ctel:=ctel+1)*el))$
f9:=(for each el in grd5 sum (c(ctel:=ctel+1)*el))$
The previous ansatz takes into account the grading of the starting symmetry in
phi*ext 3. The resulting equations are solved in the usual way (see the previous examples).
Now, we solve the equation for shadows of nonlocal symmetries in a covering of
the ℓ∗-covering. We can choose between three new nonlocal variables ra, rb, rc.
We are going to look for non-local Hamiltonian operators depending linearly on
one of these variables. Higher non-local Hamiltonian operators could be found by
introducing total derivatives of the r’s. As usual, the new variables are specified
through the components of the previously found conservation laws, according to the rule
ra_x=fx, ra_t=ft,
and analogously for the others. We define
ddx(1,50):=(t*u1+1)*ext 3$ % degree -2
ddx(1,51):=u1*ext 3$ % degree +1
ddx(1,52):=(u*u1+u3)*ext 3$ % degree +3
ddt(1,50) := ext(5)*t*u1 + ext(5) - ext(4)*t*u2
+ ext(3)*t*u*u1 + ext(3)*t*u3 + ext(3)*u$
ddt(1,51) := ext(5)*u1 - ext(4)*u2 + ext(3)*u*u1
+ ext(3)*u3$
ddt(1,52) := ext(5)*u*u1 + ext(5)*u3 - ext(4)*u*u2
- ext(4)*u1**2 - ext(4)*u4 + ext(3)*u**2*u1
+ 2*ext(3)*u*u3 + 3*ext(3)*u1*u2 + ext(3)*u5$
as results from the computation of the conservation laws. The following ansatz
for the nonlocal Hamiltonian operator comes from the fact that the local Hamiltonian
operators have gradings −1 and +1 when written in terms of the p’s. So we are looking
for a nonlocal Hamiltonian operator of degree 3.
(for each el in grd6 sum (c(ctel:=ctel+1)*el))*ext 50+
(for each el in grd3 sum (c(ctel:=ctel+1)*el))*ext 51+

(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 52+








As a solution, we obtain
phi := c(1)*(ext(51)*u1 - 9*ext(8) - 12*ext(6)*u
- 18*ext(5)*u1 - 4*ext(4)*u**2 - 12*ext(4)*u2
- 4*ext(3)*u*u1 - 3*ext(3)*u3)$
where ext 51 stands for the nonlocal variable rb fulfilling
rb_x:=u1*ext 3$
rb_t:=ext(5)*u1 - ext(4)*u2 + ext(3)*u*u1 + ext(3)*u3$
Remark. In the file it is possible to find another ansatz
for a non-local Hamiltonian operator of degree +5.
Computations for systems of PDEs
There is no conceptual difference when computing with systems of PDEs. We will
look for Hamiltonian structures for the following Boussinesq equation:

ut − ux v − uvx − σvxxx = 0
vt − ux − vvx = 0
where σ is a constant. This example also shows how to deal with jet spaces with
more than one dependent variable. Here gradings can be taken as
[t] = −2, [x] = −1, [v] = 1, [u] = 2, [p] = −2, [q] = −1

where p, q are the two coordinates in the space of generating functions of conservation laws.
The linearization of the above system and its adjoint are, respectively,

ℓBou =
( Dt − vDx − vx    −ux − uDx − σDxxx )
( −Dx              Dt − vx − vDx     )

ℓ∗Bou =
( −Dt + vDx        Dx                )
( uDx + σDxxx      −Dt + vDx         )


and lead to the ℓ∗Bou covering equation

−pt + vpx + qx = 0
upx + σpxxx − qt + vqx = 0
ut − ux v − uvx − σvxxx = 0
vt − ux − vvx = 0
We have to find shadows of symmetries on the above covering. Total derivatives
must be defined as follows:
{ext 1,ext 2,ext 3,ext 4,ext 5,ext 6,ext 7,ext 8,ext 9,
ext 10,ext 11,ext 12,ext 13,ext 14,ext 15,ext 16,ext 17,
ext 18,ext 19,ext 20,ext 21,ext 22,ext 23,ext 24,ext 25,
ext 26,ext 27,ext 28,ext 29,ext 30,ext 31,ext 32,ext 33,
ext 34,ext 35,ext 36,ext 37,ext 38,ext 39,ext 40,ext 41,
ext 42,ext 43,ext 44,ext 45,ext 46,ext 47,ext 48,ext 49,
ext 50,ext 51,ext 52,ext 53,ext 54,ext 55,ext 56,ext 57,
ext 58,ext 59,ext 60,ext 61,ext 62,ext 63,ext 64,ext 65,
ext 66,ext 67,ext 68,ext 69,ext 70,ext 71,ext 72,ext 73,
ext 74,ext 75,ext 76,ext 77,ext 78,ext 79,ext 80
{ext 1,ext 2,ext 3,ext 4,ext 5,ext 6,ext 7,ext 8,ext 9,
ext 10,ext 11,ext 12,ext 13,ext 14,ext 15,ext 16,ext 17,
ext 18,ext 19,ext 20,ext 21,ext 22,ext 23,ext 24,ext 25,
ext 26,ext 27,ext 28,ext 29,ext 30,ext 31,ext 32,ext 33,
ext 34,ext 35,ext 36,ext 37,ext 38,ext 39,ext 40,ext 41,
ext 42,ext 43,ext 44,ext 45,ext 46,ext 47,ext 48,ext 49,
ext 50,ext 51,ext 52,ext 53,ext 54,ext 55,ext 56,ext 57,
ext 58,ext 59,ext 60,ext 61,ext 62,ext 63,ext 64,ext 65,
ext 66,ext 67,ext 68,ext 69,ext 70,ext 71,ext 72,ext 73,
ext 74,ext 75,ext 76,ext 77,ext 78,ext 79,ext 80
In the list of coordinates we alternate derivatives of u and derivatives of v. The
same must be done for coefficients; for example,

After specifying the equation
we define the (already introduced) time derivatives:


up to the required order (here the order can stop at 15). The odd variables p and q
must be specified with an appropriate length (here it is sufficient to stop at ddx(1,36)).
Remember to replace pt , qt with the internal coordinates of the covering:
ddt(1,3):=+v*ext 5+ext 6$
ddt(1,4):=u*ext 5+sig*ext 9+v*ext 6$
The list of graded variables:
The ansatz for the components of the Hamiltonian operator is
(for each el in grd2 sum (c(ctel:=ctel+1)*el))*ext 3+



(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 5+
(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 4+
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 6
(for each el in grd1 sum (c(ctel:=ctel+1)*el))*ext 3+
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 5+
(for each el in grd0 sum (c(ctel:=ctel+1)*el))*ext 4
and the equation for shadows of symmetries is
equ 1:=ddt(phi1)-v*ddx(phi1)-v1*phi1-u1*phi2-u*ddx(phi2)
       -sig*ddx(ddx(ddx(phi2)));
equ 2:=-ddx(phi1)-v*ddx(phi2)-v1*phi2+ddt(phi2);
After the usual procedures for decomposing polynomials we obtain the following result:
phi1 := c(6)*ext(6)$
phi2 := c(6)*ext(5)$
which corresponds to the vector (Dx , Dx ). Extending the ansatz to
(for each
(for each
(for each
(for each
(for each
(for each
(for each







(for each
(for each
(for each
(for each
(for each







allows us to find a second (local) Hamiltonian operator

phi1 := (c(3)*(2*ext(9)*sig + ext(6)*v + 2*ext(5)*u
+ ext(3)*u1))/2$
phi2 := (c(3)*(2*ext(6) + ext(5)*v + ext(3)*v1))/2$
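Both Boussinesq operators can be checked independently of REDUCE with a SymPy sketch of the covering total derivatives (our own verification, with the sig term of equ 1 included; recall ext 3, . . . , ext 9 = p0, q0, p1, q1, p2, q2, p3):

```python
import sympy as sp

N = 10
u = sp.symbols('u0:10'); v = sp.symbols('v0:10')
p = sp.symbols('p0:10'); q = sp.symbols('q0:10')
sig = sp.symbols('sig')

def Dx(e):
    return sp.expand(sum(sp.diff(e, w[i]) * w[i + 1]
                         for w in (u, v, p, q) for i in range(N - 1)))

# flows of Boussinesq and of its l*-covering (pt, qt as in the text)
flows = {u: [v[0]*u[1] + u[0]*v[1] + sig*v[3]],
         v: [u[1] + v[0]*v[1]],
         p: [v[0]*p[1] + q[1]],
         q: [u[0]*p[1] + sig*p[3] + v[0]*q[1]]}
for w in flows:
    for i in range(1, 6):
        flows[w].append(Dx(flows[w][-1]))

def Dt(e):
    return sp.expand(sum(sp.diff(e, w[i]) * flows[w][i]
                         for w in (u, v, p, q) for i in range(6)))

def shadow_eqs(phi1, phi2):
    """equ 1 and equ 2 of the text."""
    e1 = sp.expand(Dt(phi1) - v[0]*Dx(phi1) - v[1]*phi1 - u[1]*phi2
                   - u[0]*Dx(phi2) - sig*Dx(Dx(Dx(phi2))))
    e2 = sp.expand(Dt(phi2) - Dx(phi1) - v[0]*Dx(phi2) - v[1]*phi2)
    return e1, e2

# first operator: (phi1, phi2) = (q1, p1), i.e. the vector (Dx, Dx)
assert shadow_eqs(q[1], p[1]) == (0, 0)

# second (local) operator as found above
phi1 = (2*sig*p[3] + v[0]*q[1] + 2*u[0]*p[1] + u[1]*p[0]) / 2
phi2 = (2*q[1] + v[0]*p[1] + v[1]*p[0]) / 2
assert shadow_eqs(phi1, phi2) == (0, 0)
```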
There is one more higher local Hamiltonian operator, and a whole hierarchy of
nonlocal Hamiltonian operators [12].
Explosion of denominators and how to avoid it
Here we propose the computation of the repeated total derivative of a denominator.
This computation fills up the whole memory after some time, and can be used as a
kind of speed test for the system; it is contained in a separate file.
After having defined total derivatives on the KdV equation, run the following iteration:
for i:=1:100 do begin
write i;
The program shows the iteration number. At the 18th iteration the program uses
about 600MB of RAM, as shown by top run from another shell, and 100% of one CPU core.
There is a simple way to avoid denominator explosion; it is illustrated in a separate file.
After having defined total derivatives with respect to x (on the KdV equation, for
example), consider in the same ddx a component with a sufficiently high index
immediately after ‘letop’ (otherwise super_vectorfield does not work!),
say ddx(0,21), and think of it as being the coefficient of a vector of the type
In this case, its coefficient must be
More particularly, here follows the detailed definition of ddx



Now, suppose that we want to compute the 5th total derivative of phi. Write the
following code:
for i:=1:5 do begin
write i;
The result is then a polynomial in the additional ‘denominator’ variable aa21:

phi := aa21**2*( - 120*aa21**4*u**5*u2**5
- 600*aa21**4*u**4*u1**2*u2**4 - 600*aa21**4*u**4*u2**4*u4
- 1200*aa21**4*u**3*u1**4*u2**3 - 2400*aa21**4*u**3*u1**2*u2**3*u4
- 1200*aa21**4*u**3*u2**3*u4**2 - 1200*aa21**4*u**2*u1**6*u2**2
- 3600*aa21**4*u**2*u1**4*u2**2*u4 - 3600*aa21**4*u**2*u1**2*u2**2*u4
- 1200*aa21**4*u**2*u2**2*u4**3 - 600*aa21**4*u*u1**8*u2
- 2400*aa21**4*u*u1**6*u2*u4 - 3600*aa21**4*u*u1**4*u2*u4**2
- 2400*aa21**4*u*u1**2*u2*u4**3 - 600*aa21**4*u*u2*u4**4
- 120*aa21**4*u1**10 - 600*aa21**4*u1**8*u4
- 1200*aa21**4*u1**6*u4**2 - 1200*aa21**4*u1**4*u4**3
- 600*aa21**4*u1**2*u4**4 - 120*aa21**4*u4**5
+ 240*aa21**3*u**4*u2**3*u3
+ 720*aa21**3*u**3*u1**2*u2**2*u3 + 720*aa21**3*u**3*u1*u2**4
+ 240*aa21**3*u**3*u2**3*u5 + 720*aa21**3*u**3*u2**2*u3*u4
+ 720*aa21**3*u**2*u1**4*u2*u3 + 2160*aa21**3*u**2*u1**3*u2**3


720*aa21**3*u**2*u1**2*u2**2*u5 + 1440*aa21**3*u**2*u1**2*u2*u3*u4
2160*aa21**3*u**2*u1*u2**3*u4 + 720*aa21**3*u**2*u2**2*u4*u5
720*aa21**3*u**2*u2*u3*u4**2 + 240*aa21**3*u*u1**6*u3
2160*aa21**3*u*u1**5*u2**2 + 720*aa21**3*u*u1**4*u2*u5
720*aa21**3*u*u1**4*u3*u4 + 4320*aa21**3*u*u1**3*u2**2*u4
1440*aa21**3*u*u1**2*u2*u4*u5 + 720*aa21**3*u*u1**2*u3*u4**2
2160*aa21**3*u*u1*u2**2*u4**2 + 720*aa21**3*u*u2*u4**2*u5
240*aa21**3*u*u3*u4**3 + 720*aa21**3*u1**7*u2
2160*aa21**3*u1**5*u2*u4 + 720*aa21**3*u1**4*u4*u5
2160*aa21**3*u1**3*u2*u4**2 + 720*aa21**3*u1**2*u4**2*u5
720*aa21**3*u1*u2*u4**3 + 240*aa21**3*u4**3*u5
60*aa21**2*u**3*u2**2*u4 - 90*aa21**2*u**3*u2*u3**2
120*aa21**2*u**2*u1**2*u2*u4 - 90*aa21**2*u**2*u1**2*u3**2
780*aa21**2*u**2*u1*u2**2*u3 - 180*aa21**2*u**2*u2**4
60*aa21**2*u**2*u2**2*u6 - 180*aa21**2*u**2*u2*u3*u5
120*aa21**2*u**2*u2*u4**2 - 90*aa21**2*u**2*u3**2*u4
60*aa21**2*u*u1**4*u4 - 1020*aa21**2*u*u1**3*u2*u3
1170*aa21**2*u*u1**2*u2**3 - 120*aa21**2*u*u1**2*u2*u6
180*aa21**2*u*u1**2*u3*u5 - 120*aa21**2*u*u1**2*u4**2
540*aa21**2*u*u1*u2**2*u5 - 1020*aa21**2*u*u1*u2*u3*u4
360*aa21**2*u*u2**3*u4 - 120*aa21**2*u*u2*u4*u6
90*aa21**2*u*u2*u5**2 - 180*aa21**2*u*u3*u4*u5
60*aa21**2*u*u4**3 - 240*aa21**2*u1**5*u3
990*aa21**2*u1**4*u2**2 - 60*aa21**2*u1**4*u6
540*aa21**2*u1**3*u2*u5 - 480*aa21**2*u1**3*u3*u4
1170*aa21**2*u1**2*u2**2*u4 - 120*aa21**2*u1**2*u4*u6
90*aa21**2*u1**2*u5**2 - 540*aa21**2*u1*u2*u4*u5
240*aa21**2*u1*u3*u4**2 - 180*aa21**2*u2**2*u4**2
60*aa21**2*u4**2*u6 - 90*aa21**2*u4*u5**2
10*aa21*u**2*u2*u5 + 20*aa21*u**2*u3*u4 + 10*aa21*u*u1**2*u5
110*aa21*u*u1*u2*u4 + 80*aa21*u*u1*u3**2 + 160*aa21*u*u2**2*u3
10*aa21*u*u2*u7 + 20*aa21*u*u3*u6 + 30*aa21*u*u4*u5
50*aa21*u1**3*u4 + 340*aa21*u1**2*u2*u3 + 10*aa21*u1**2*u7
180*aa21*u1*u2**3 + 60*aa21*u1*u2*u6 + 80*aa21*u1*u3*u5
50*aa21*u1*u4**2 + 60*aa21*u2**2*u5 + 100*aa21*u2*u3*u4
10*aa21*u4*u7 + 20*aa21*u5*u6 - u*u6 - 6*u1*u5 - 15*u2*u4
10*u3**2 - u8)$

where the value of aa21 can be replaced back in the expression.
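The trick can be illustrated independently of REDUCE. In the SymPy sketch below (our own illustration; aa plays the role of aa21 and encodes 1/u, so that Dx(aa) = -aa**2*u1), the naive computation produces growing denominators while the modified total derivative stays polynomial:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

# naive approach: repeated differentiation of 1/u makes denominators grow
expr = 1 / u(x)
for _ in range(5):
    expr = sp.diff(expr, x)
den = sp.denom(sp.together(expr))
assert sp.simplify(den - u(x) ** 6) == 0   # denominator is already u**6

# trick: introduce aa = 1/u as a new 'denominator' variable with
# Dx(aa) = -aa**2 * u1; everything then stays polynomial
N = 10
uu = sp.symbols('u0:10')
aa = sp.symbols('aa')                      # plays the role of aa21

def Dx(e):
    r = sum(sp.diff(e, uu[i]) * uu[i + 1] for i in range(N - 1))
    r += sp.diff(e, aa) * (-aa ** 2 * uu[1])
    return sp.expand(r)

phi = aa
for _ in range(5):
    phi = Dx(phi)
assert sp.denom(sp.together(phi)) == 1     # polynomial in aa, u1, ..., u5
# substituting aa -> 1/u0 recovers the rational expression
```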



[1] Obtaining REDUCE: http://reduce-algebra.sourceforge.
[2] Geometry of Differential Equations web site:
[3] notepad++:
[4] List of text editors:
[5] How to install emacs in Windows:
[6] How to install REDUCE in Windows: http://reduce-algebra.
[7] G.H.M. ROELOFS, The SUPER VECTORFIELD package for REDUCE. Version 1.0, Memorandum 1099, Dept. Appl. Math., University of Twente, 1992. Available at
[8] G.H.M. ROELOFS, The INTEGRATOR package for REDUCE. Version 1.0,
Memorandum 1100, Dept. Appl. Math., University of Twente, 1992. Available at
[9] G.F. POST, A manual for the package TOOLS 2.1, Memorandum 1331, Dept. Appl. Math., University of Twente, 1996. Available at
[10] REDUCE IDE for emacs:
[11] A.M. VERBOVETSKY AND A.M. VINOGRADOV: Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, I.S. Krasil’shchik and A.M. Vinogradov eds., Translations of Math. Monographs 182, Amer. Math. Soc. (1999).
[12] P.H.M. KERSTEN, I.S. KRASIL’SHCHIK, A.M. VERBOVETSKY, Hamiltonian operators and ℓ∗-coverings, Journal of Geometry and Physics 50 (2004), 273–302.
[13] M. MARVAN, Sufficient set of integrability conditions of an orthonomic system. Foundations of Computational Mathematics 9 (2009), 651–674.
[14] R. VITOLO, CDE: a REDUCE package for computations in geometry of Differential Equations. Available at



CGB: Computing Comprehensive Gröbner Bases

Authors: Andreas Dolzmann, Thomas Sturm, and Winfried Neun



Consider the ideal basis F = {ax, x + y}. Treating a as a parameter, calling the function groebner yields {x, y} as reduced Gröbner basis. This is, however, not correct under the specialization a = 0: the reduced Gröbner basis would then be {x + y}. Taking these
results together, we obtain C = {x + y, ax, ay}, which is correct wrt. all specializations for a including zero specializations. We call this set C a comprehensive
Gröbner basis (CGB).
The notion of a CGB and a corresponding algorithm were introduced by
Weispfenning [Wei92]. This algorithm works by performing case distinctions
wrt. parametric coefficient polynomials in order to find out what the head monomials are under all possible specializations. It thus not only determines a CGB, but
also classifies the contained polynomials wrt. the specializations they are relevant
for. If we keep the Gröbner bases for all cases separate and associate information
on the respective specializations with them, we obtain a Gröbner system. For our
example, the Gröbner system is the following:

a ≠ 0 : {x + y, ax, ay}
a = 0 : {x + y}
A CGB is obtained as the union of the single Gröbner bases in a Gröbner system.
It has also been shown that, on the other hand, a Gröbner system can easily be
reconstructed from a given CGB [Wei92].
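The defining property of a CGB can be cross-checked for this example in any system with Gröbner bases; the following SymPy sketch (independent of the CGB package) tests a few sample specializations of a:

```python
from sympy import symbols, groebner, S

x, y, a = symbols('x y a')
F = [a * x, x + y]               # the parametric ideal basis
C = [x + y, a * x, a * y]        # the comprehensive Groebner basis from the text

# for every specialization of a, C specializes to a Groebner basis of
# the specialized ideal (here we test a few sample values of a)
for val in (S(0), S(1), S(5), S(-3)):
    Cspec = [f.subs(a, val) for f in C if f.subs(a, val) != 0]
    Fspec = [f.subs(a, val) for f in F if f.subs(a, val) != 0]
    G = groebner(Cspec, x, y, order='lex')   # reduced basis from the CGB
    H = groebner(Fspec, x, y, order='lex')   # reduced basis from F itself
    assert set(G.exprs) == set(H.exprs)
```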
The CGB package provides functions for computing both CGB’s and Gröbner systems, and for turning Gröbner systems into CGB’s.


Using the REDLOG Package

For managing the conditions occurring in the CGB computations, the CGB
package uses the package REDLOG, which implements first-order formulas [DS97a,
DS99] and is also part of the REDUCE distribution.




Term Ordering Mode

The CGB package uses the settings made with the function torder of the
GROEBNER package. This includes in particular the choice of the main variables. All variables not mentioned in the variable list argument of torder are
parameters. The only term ordering modes recognized by CGB are lex and
revgradlex.

CGB: Comprehensive Gröbner Basis

The function cgb expects a list F of expressions. It returns a CGB of F wrt. the
current torder setting.
cgb {a*x+y,x+b*y};
{x + b*y,a*x + y,(a*b - 1)*y}
ws;
{b*y + x,
a*x + y,
y*(a*b - 1)}
Note that the basis returned by the cgb call has not undergone the standard evaluation process: The returned polynomials are ordered wrt. the chosen term order.
Reevaluation changes this as can be seen with the output of ws.


GSYS: Gröbner System

The function gsys follows the same calling conventions as cgb. It returns the
complete Gröbner system represented as a nested list

{{c1 , {g11 , . . . , g1n1 }}, . . . , {cm , {gm1 , . . . , gmnm }}}.

The ci are conditions in the parameters represented as quantifier-free REDLOG
formulas. Each choice of parameters will obey at least one of the ci . Whenever a

choice of parameters obeys some ci , the corresponding {gi1 , . . . , gini } is a Gröbner
basis for this choice.
gsys {a*x+y,x+b*y};
{{a*b - 1 <> 0 and a <> 0,
{a*x + y,x + b*y,(a*b - 1)*y}},
{a <> 0 and a*b - 1 = 0,
{a*x + y,x + b*y}},
{a = 0,{a*x + y,x + b*y}}}
As with the function cgb, the contained polynomials remain unevaluated.
Computing a Gröbner system is not harder than computing a CGB. In fact, cgb
also computes a Gröbner system and then turns it into a CGB.
Switch CGBGEN: Only the Generic Case
If the switch cgbgen is turned on, both gsys and cgb will assume all parametric
coefficients to be non-zero ignoring the other cases. For cgb this means that the result equals—up to auto-reduction—that of groebner. A call to gsys will return
this result as a single case including the assumptions made during the computation:
on cgbgen;
{{a*b - 1 <> 0 and a <> 0,
{a*x + y,x + b*y,(a*b - 1)*y}}}
off cgbgen;




GSYS2CGB: Gröbner System to CGB

The call gsys2cgb turns a given Gröbner system into a CGB by constructing the
union of the Gröbner bases of the single cases.
gsys2cgb ws;
{x + b*y,a*x + y,(a*b - 1)*y}


Switch CGBREAL: Computing over the Real Numbers

All computations considered so far have taken place over the complex numbers,
more precisely, over algebraically closed fields. Over the real numbers, certain
branches of the CGB computation can become inconsistent though they are not inconsistent over the complex numbers. Consider, e.g., a condition a^2 + 1 = 0.
When turning on the switch cgbreal, all simplifications of conditions are performed over the real numbers. The methods used for this are described in [DS97b].
off cgbreal;
gsys {a*x+y,x-a*y};
{{a**2 + 1 <> 0 and a <> 0,
{a*x + y,x - a*y,(a**2 + 1)*y}},
{a <> 0 and a**2 + 1 = 0,{a*x + y,x - a*y}},
{a = 0,{a*x + y,x - a*y}}}
on cgbreal;


{{a <> 0,
{a*x + y,x - a*y,(a**2 + 1)*y}},
{a = 0,{a*x + y,x - a*y}}}



cgbreal Compute over the real numbers. See Section 16.14.7 for details.
cgbgs Gröbner simplification of the condition. The switch cgbgs can be turned
on for applying advanced algebraic simplification techniques to the conditions. This will, in general, slow down the computation, but lead to a simpler
Gröbner system.
cgbstat Statistics of the CGB run. The switch cgbstat toggles the creation and
output of statistical information on the CGB run. The statistical information
is printed at the end of the run.
cgbfullred Full reduction. By default, the CGB functions perform full reductions
in contrast to pure top reductions. By turning off the switch cgbfullred,
reduction can be restricted to top reductions.

[DS97a] Andreas Dolzmann and Thomas Sturm. Redlog: Computer algebra
meets computer logic. ACM SIGSAM Bulletin, 31(2):2–9, June 1997.
[DS97b] Andreas Dolzmann and Thomas Sturm. Simplification of quantifierfree formulae over ordered fields. Journal of Symbolic Computation,
24(2):209–231, August 1997.

[DS99] Andreas Dolzmann and Thomas Sturm. Redlog User Manual. FMI,
Universität Passau, D-94030 Passau, Germany, April 1999. Edition 2.0
for Version 2.0.

[Wei92] Volker Weispfenning. Comprehensive Gröbner bases. Journal of Symbolic Computation, 14:1–29, July 1992.




COMPACT: Package for compacting expressions

COMPACT is a package of functions for the reduction of a polynomial in the presence of side relations. COMPACT applies the side relations to the polynomial so
that an equivalent expression results with as few terms as possible. For example,
the evaluation of
compact(s*(1-sin x^2)+c*(1-cos x^2)+sin x^2+cos x^2,
{cos x^2+sin x^2=1});
yields the result
SIN(X)**2*C + COS(X)**2*S + 1 .
The switch TRCOMPACT can be used to trace the operation.
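The effect of compact on this example can be imitated by polynomial reduction modulo the side relation; the following SymPy sketch is our own analogue, not COMPACT's algorithm:

```python
import sympy as sp

s, c, si, co = sp.symbols('s c si co')    # si = sin(x), co = cos(x)

expr = s * (1 - si ** 2) + c * (1 - co ** 2) + si ** 2 + co ** 2
relation = si ** 2 + co ** 2 - 1          # side relation = 0

# reduce expr modulo the side relation (polynomial division)
_, r = sp.reduced(expr, [relation], si, co)

# COMPACT's answer, reduced the same way, must agree with it
_, r_expected = sp.reduced(si ** 2 * c + co ** 2 * s + 1,
                           [relation], si, co)
assert sp.expand(r - r_expected) == 0
```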
Author: Anthony C. Hearn.



CRACK: Solving overdetermined systems of PDEs
or ODEs

CRACK is a package for solving overdetermined systems of partial or ordinary
differential equations (PDEs, ODEs). Examples of programs which make use
of CRACK (finding symmetries of ODEs/PDEs, first integrals, an equivalent Lagrangian or a "differential factorization" of ODEs) are included. The application
of symmetries is also possible by using the APPLYSYM package.
Authors: Andreas Brand, Thomas Wolf.




CVIT: Fast calculation of Dirac gamma matrix traces

This package provides an alternative method for computing traces of Dirac gamma
matrices, based on an algorithm by Cvitanović that treats gamma matrices as 3-j symbols.
Authors: V.Ilyin, A.Kryukov, A.Rodionov, A.Taranov.

In modern high energy physics the calculation of Feynman diagrams is still very
important. One of the difficulties of these calculations is the computation of traces,
so the calculation of traces of Dirac’s γ-matrices was one of the first tasks of computer
algebra systems. All available algorithms are based on the fact that the gamma matrices
constitute a basis of a Clifford algebra:
{Gm,Gn} = 2gmn.
We present the implementation of an alternative algorithm based on treating the
gamma matrices as 3-j symbols (details may be found in [1,2]).
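For comparison, the classical trace algorithm that follows directly from the Clifford relation (not the Cvitanović 3-j method implemented by CVIT) fits in a few lines of Python; here g is a hypothetical symbolic metric and the conventional normalization Tr(1) = 4 is assumed:

```python
import sympy as sp

def gamma_trace(idx, g):
    """Trace of a product of gamma matrices with the given indices."""
    if len(idx) % 2 == 1:
        return sp.Integer(0)          # odd number of gammas traces to zero
    if not idx:
        return sp.Integer(4)          # conventional normalization Tr(1) = 4
    a, rest = idx[0], idx[1:]
    # recursion from {Gm,Gn} = 2 g(m,n):
    # Tr(Ga Gb1...Gbn) = sum_k (-1)^(k+1) g(a,bk) Tr(...without bk...)
    total = sp.Integer(0)
    for k, b in enumerate(rest):
        total += (-1) ** k * g(a, b) * gamma_trace(rest[:k] + rest[k + 1:], g)
    return total

g = sp.Function('g')                  # symbolic metric tensor
m, n, r, s = sp.symbols('m n r s')

assert gamma_trace((m, n), g) == 4 * g(m, n)
assert gamma_trace((m, n, r), g) == 0
assert sp.expand(gamma_trace((m, n, r, s), g)
                 - 4 * (g(m, n) * g(r, s) - g(m, r) * g(n, s)
                        + g(m, s) * g(n, r))) == 0
```

This naive recursion has factorially many terms, which is precisely the cost that the Cvitanović approach avoids.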
The program consists of 5 modules described below.



Required REDUCE version: 3.2, 3.3.

Author: A.P. Kryukov
Purpose: interface between REDUCE and the CVIT package



The RED_TO_CVIT_INTERFACE module is intended for connecting REDUCE
with the main module of the CVIT package. The main idea is to preserve the standard
REDUCE syntax for high energy physics calculations. To realize this, the
SYMBOLIC PROCEDURE ISIMP1 from the HEPhys module of the REDUCE system is redefined.
After loading the CVIT package the user may use the switch CVIT, which is ON by default.
If the switch CVIT is OFF, traces of Dirac matrices are calculated using
the standard REDUCE facilities; if it is ON, the CVIT package is active.
The RED_TO_CVIT_INTERFACE module also performs some primitive simplification
and control of the input data independently. For example, it removes products Gm Gm , checks the parity
of the number of Dirac matrices in each trace, etc. There is one principal restriction
concerning the G5 matrix: there is no closed form for a trace in the non-integer dimension
case when the trace includes G5. The next restriction is that if the space-time
dimension is integer then it must be even (2,4,6,...). If these or other restrictions
are violated, the user gets a corresponding error message.

Author: A.Ya. Rodionov
Purpose: graph reduction
The CVITMAPPING module is intended for diagram calculation according to the Cvitanović-Kennedy algorithm. The top function of this module, CALC_SPUR,
is called from the RED_TO_CVIT_INTERFACE interface module. The main idea
of the algorithm consists in diagram simplification according to rules (1.9’) and
(1.14) from [1]. The input data, a trace of Dirac gamma matrices (G-matrices),
has the form of a list of identifier lists with cyclic order. Some of the identifiers may
be identical; in this case we assume summation over dummy indices. So the trace
Sp(GbGr).Sp(GwGbGcGwGcGr) is represented as the list ((b r) (w b c w c r)).
The first step is to transform the input data to a “map” structure and then to reduce
the map to a “simple” one. This transformation is made by the (top) function
TRANSFORM_MAP_ in three steps. At the first step the input data are transformed
to the internal form, a map (by the function PREPARE_MAP_). At the second step
the map is subjected to Fierz transformations (1.14) (function MK_SIMPLE_MAP_).
At this step an optimization can be made (if the switch CVITOP is on) by the function
MK_FIRZ_OP; in this case Fierzing starts with the linked vertices with minimal
distance (number of vertices) between them. After the Fierz transformations the map
is further reduced by the vertex simplification routine MK_SIMPLE_VERTEX using (1.9’).
Vertices are reduced to primitive ones, that is, to vertices with three or fewer edges.
This is the last (third) step in the transformation from input to internal data.
The next step is optional. If the switch CVITBTR is on, factorisation of bubble (function FIND_BUBBLES1) and triangle (function FIND_TRIANGLES1) submaps
is performed. This factorisation is very efficient for "wheel" diagrams and unnecessary for "lattice" diagrams. Factorisation is made recursively by substituting composed edges for bubbles and composed vertices for triangles. So a check (function
SORT_ATLAS) must be done to test the possibility of the future marking procedure. If the
check fails then a new attempt is made to reorganize the atlas (this is our name for the complicated structure which consists of MAP, COEFFicient and DENOMinator). This causes
backtracking (but very seldom). Backtracking can be traced by turning on the switch
CVITRACE. FIND_BUBLTR is the top function of this branch of the program.
Then atlases must be prepared (top function WORLD_FROM_ATLAS) for the final
algebraic calculations. The resulting object, called a "world", consists of a list of
edge names (EDGELIST), their marking variants (VARIANTS) and a WORLD1 structure.
The WORLD1 structure differs from the WORLD structure in one point: it contains a MAP2
structure instead of a MAP structure. MAP2 is a very complicated structure and consists of VARIANTS, a marking plan and a GSTRAND. (The GSTRAND is constructed by
PRE!-CALC!-MAP_ from the INTERFIERZ module.) By marking we understand
marking of edges with numbers according to the Cvitanovic-Kennedy algorithm.
The last step is performed by the function CALC_WORLD. At this step the algebraic
calculations are done. Two functions, CALC_MAP_TAR and CALC_DENTAR
from the INTERFIERZ module, build algebraic expressions in prefix form. These
expressions are further simplified by the function REVAL, the general REDUCE
function for simplification of algebraic expressions. REVAL and SIMP!* are
the only REDUCE functions used in this module.



There are also some functions for printing several internal structures: PRINT_ATLAS,
PRINT_VERTEX, PRINT_EDGE, PRINT_COEFF and PRINT_DENOM. These functions can be used for debugging.
If an error occurs in the module CVITMAPPING the error message "ERROR IN
MAP CREATING ROUTINES" is displayed. The error has number 55. The switch
CVITERROR allows full information about the error to be given: the name of the function where
the error occurred and the names and values of the function's arguments. If the CVITERROR switch
is on and backtracking fails, a message about an error in the SORT_ATLAS function is
printed. The result of the computation will nevertheless be correct, because in this case
the factorized structure is not used. This happens extremely seldom.



Data structures:
• list of VERTICES (unordered)
• list of WORLDS (unordered)
• list of EDGEs (with cyclic order)
(Defined in the module MAP!-TO!-STRAND.)

Author: A.Taranov
Purpose: evaluate single Map
The module INTERFIERZ exports to the module CVITMAPPING three functions: PRE-CALC-MAP_, CALC-MAP_TAR and CALC-DENTAR.
The function PRE-CALC-MAP_ is used for preliminary processing of a map. It returns
a strand structure as described in the MAP-TO-STRAND module. NEWMAP is a map
structure without "tadpoles" and "deltas". A "tadpole" is a loop connected with the
map by only one line (edge). A "delta" is a single line disconnected from the map.
TADEPOLES is a list of "tadpole" submaps. DELTAS is a list (CONS E1 E2)
where E1 and E2 are
The function CALC_MAP_TAR takes a list of the same form as returned by PRE-CALC-MAP_ and an a-list of the form (... edge . weight ...), and returns a prefix form
of the algebraic expression corresponding to the map numerator.
The function CALC-DENTAR returns a prefix form of the algebraic expression corresponding to the map denominator.
The module EVAL-MAP exports to the module INTERFIERZ the functions MK-NUMR and STRAND-ALG-TOP.
The function MK-NUMR returns a prefix form for a combinatorial coefficient (Pochhammer symbol).
The function STRAND-ALG-TOP performs the actual computation of a prefix form
of the algebraic expression corresponding to the map numerator. This computation is
based on a "strand" structure constructed from the "map" structure.
The module MAP-TO-STRAND exports the functions MAP-TO-STRAND, INCIDENT1,
DELETEZ1, CONTRACT-STRAND and COLOR-STRAND.
The function INCIDENT1 is a selector in the "strand" structure. DELETEZ1 performs
auxiliary optimization of the "strand". MAP-TO-STRAND transforms a "map" to a
"strand" structure. The latter is described in the program module.



CONTRACT-STRAND does vertex simplifications of the "strand" and COLOR-STRAND finishes the strand generation.


The STRAND data structure is described in the program module.
The following error messages may be issued:
• ⟨dimension of space-time⟩ HAS NON-UNIT DENOMINATOR: the given dimension of space-time
has a non-unit denominator.
• THREE INDICES HAVE NAME ⟨name⟩: there are three indices with
equal names in the evaluated expression.
List of switches:
• CVIT: if on, use the Cvitanovic-Kennedy algorithm, else use the standard facilities.
• CVITOP: Fierz optimization switch.
• CVITBTR: bubble and triangle factorisation switch.
• CVITRACE: backtracking tracing switch.


1. Ilyin V.A., Kryukov A.P., Rodionov A.Ya., Taranov A.Yu., "Fast algorithm
for calculation of Dirac gamma-matrices traces", SIGSAM Bull., 1989, v. 23,
no. 4, pp. 15-24.
2. Kennedy A.D., Phys. Rev., 1982, D26, p. 1936.




DEFINT: A definite integration interface

This package finds the definite integral of an expression over a stated interval. It
uses several techniques, including an innovative approach based on the Meijer G-function, and contour integration.
Authors: Kerry Gaskell, Stanley M. Kameny, Winfried Neun.



This documentation describes the part of REDUCE's definite integration package that
is able to calculate the definite integrals of many functions, including several special functions. There are other parts of this package, such as Stan Kameny's code
for contour integration, that are not included here. The integration process described here is not the usual approach of first calculating the indefinite
integral, but is instead the rather unusual idea of representing each function as a
Meijer G-function (a formal definition of the Meijer G-function can be found in
[1]), and then calculating the integral by using the following Meijer G integration formula:






\[
\int_0^\infty G\!\left(x \,\middle|\, \begin{matrix} (c_u) \\ (d_v) \end{matrix} \right)
G\!\left(x \,\middle|\, \begin{matrix} (a_p) \\ (b_q) \end{matrix} \right) dx
\;=\;
G\!\left(\cdot \,\middle|\, \begin{matrix} (g_k) \\ (h_l) \end{matrix} \right)
\qquad (1)
\]

The resulting Meijer G-function is then retransformed, either directly or via a
hypergeometric function simplification, to give the answer. A more detailed account of this theory can be found in [2].


Integration between zero and infinity

As an example, if one wishes to calculate the integral
\[ \int_0^\infty x^{-1} e^{-x} \sin(x)\, dx \]
then initially the correct Meijer G-functions are found, via a pattern matching process, and are substituted into eq. 16.72.













The cases for validity of the integral are then checked. If these are found to be
satisfactory then the formula is calculated and we obtain the resulting Meijer G-function.
This is reduced to the hypergeometric function
\[ {}_2F_1\!\left(\tfrac{1}{2},\, 1;\, \tfrac{3}{2};\, -1\right) \]
which is then calculated to give the correct answer of \(\pi/4\).
The above formula (1) is also true for the integration of a single Meijer G-function,
by replacing the second Meijer G-function with a trivial Meijer G-function.
A list of numerous particular Meijer G-functions is available in [1].


Integration over other ranges

Although the description so far has been limited to the computation of definite integrals between 0 and infinity, it can also be extended to integrals between
0 and some specific upper bound and, by further extension, to integrals between any
two bounds. One approach is to use the Heaviside function, i.e.
\[ \int_0^\infty x^2 e^{-x}\, H(1-x)\, dx = \int_0^1 x^2 e^{-x}\, dx \]
Another approach, again not involving the normal indefinite integration process,
again uses Meijer G-functions, this time by means of the following formula






\[
\int_0^y x^{\alpha}\,
G\!\left(x \,\middle|\, \begin{matrix} (a_u) \\ (b_v) \end{matrix} \right) dx
\;=\; y\,
G^{\,m\;n+1}_{\,p+1\;q+1}\!\left(y \,\middle|\,
\begin{matrix} a_1,\; 1-\alpha,\; a_{n+1} \ldots a_p \\ b_1,\; -\alpha,\; b_{m+1} \ldots b_q \end{matrix} \right)
\]

For a more detailed look at the theory behind this see [2].
For example, if one wishes to calculate the following integral
\[ \int_0^y \sin(2\sqrt{x})\, dx \]
then initially the correct Meijer G-function is found, by a pattern matching process,
and is substituted into eq. 16.73; the resulting Meijer G-function is evaluated
and returns the result
\[ \frac{\sqrt{\pi}\; J_{3/2}(2\sqrt{y})\; y}{y^{1/4}} \]


Using the definite integration package

To use this package, you must first load it by the command
load_package defint;
Definite integration is then possible using the int command with the syntax
int(EXPRN, VAR, LOW, UP);
where LOW and UP are the lower and upper bounds respectively for the definite
integration of EXPRN with respect to VAR.
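For instance, a hedged sketch (printed output omitted, as it may vary between versions):

```reduce
load_package defint;

int(e^(-x), x, 0, infinity);   % definite integral from 0 to infinity
int(x*e^(-x/2), x, 0, 1);      % finite upper bound
```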


\[ \int e^{-x}\, dx \]
\[ \int x \sin(1/x)\, dx \]
\[ \int x^2 \cos(x)\, e^{-2x}\, dx \]
\[ \int x\, e^{-x/2}\, H(1-x)\, dx \]
2*(2*SQRT(E) - 3)
\[ \int x \log(1+x)\, dx \]
\[ \int \cos(2x)\, dx \]
SIN(4*Y) - SIN(2*Y)



Integral Transforms

A useful application of the definite integration package is in the calculation of
various integral transforms. The transforms available are as follows:
• Laplace transform
• Hankel transform
• Y-transform
• K-transform
• StruveH transform
• Fourier sine transform
• Fourier cosine transform
Laplace transform
The Laplace transform
\[ f(s) = \mathcal{L}\{F(t)\} = \int_0^\infty e^{-st} F(t)\, dt \]

can be calculated by using the laplace_transform command.
This requires as parameters
• the function to be integrated
• the integration variable.
For example
\[ \mathcal{L}\{e^{-at}\} \]
is entered as
laplace_transform(e^(-a*t), t);
and returns the result
\[ \frac{1}{a+s} \]

Hankel transform
The Hankel transform
\[ f(\omega) = \int_0^\infty F(t)\, J_\nu\!\left(2\sqrt{\omega t}\right) dt \]

can be calculated by using the hankel_transform command e.g.
This is used in the same way as the laplace_transform command.
The Y-transform

\[ f(\omega) = \int_0^\infty F(t)\, Y_\nu\!\left(2\sqrt{\omega t}\right) dt \]

can be calculated by using the Y_transform command e.g.
This is used in the same way as the laplace_transform command.
The K-transform
\[ f(\omega) = \int_0^\infty F(t)\, K_\nu\!\left(2\sqrt{\omega t}\right) dt \]


can be calculated by using the K_transform command e.g.
This is used in the same way as the laplace_transform command.
StruveH transform
The StruveH transform
\[ f(\omega) = \int_0^\infty F(t)\, \mathrm{StruveH}\!\left(\nu,\, 2\sqrt{\omega t}\right) dt \]


can be calculated by using the struveh_transform command e.g.
This is used in the same way as the laplace_transform command.
Fourier sine transform
The Fourier sine transform
\[ f(s) = \int_0^\infty F(t)\, \sin(st)\, dt \]

can be calculated by using the fourier_sin command e.g.
This is used in the same way as the laplace_transform command.
Fourier cosine transform
The Fourier cosine transform
\[ f(s) = \int_0^\infty F(t)\, \cos(st)\, dt \]

can be calculated by using the fourier_cos command e.g.
This is used in the same way as the laplace_transform command.
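All of these transform commands follow the same calling convention as laplace_transform; a hedged sketch:

```reduce
load_package defint;

laplace_transform(e^(-a*t), t);   % Laplace transform of e^(-a*t)
hankel_transform(e^(-t), t);      % Hankel transform
fourier_sin(e^(-t), t);           % Fourier sine transform
fourier_cos(e^(-t), t);           % Fourier cosine transform
```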


Additional Meijer G-function Definitions

The relevant Meijer G representation for any function is found by a pattern-matching process which is carried out on a list of Meijer G-function definitions.
This list, although extensive, can never hope to be complete and therefore the user
may wish to add more definitions. Definitions can be added by adding the following lines:

defint_choose(f(~x),~var) => f1(n,x);
symbolic putv(mellin!-transforms!*,n,'
(() (m n p q) (ai) (bj) (C) (var)));

where f(x) is the new function, i = 1..p, j = 1..q, C is a constant, var is the variable and n is
an indexing number.
For example, when considering cos(x) we have the Meijer G representation
\[ \sqrt{\pi}\; G^{1\,0}_{0\,2}\!\left( \frac{x^2}{4} \,\middle|\; 0,\, \tfrac{1}{2} \right) \]
and the internal definite integration package representation
defint_choose(cos(~x),~var) => f1(3,x);

where 3 is the indexing number corresponding to the 3 in the following formula
symbolic putv(mellin!-transforms!*,3,'
(() (1 0 0 2) () (nil (quotient 1 2))
(sqrt pi) (quotient (expt x 2) 4)));
or the more interesting example of J_n(x), with the Meijer G representation
\[ G^{1\,0}_{0\,2}\!\left( \frac{x^2}{4} \,\middle|\; \frac{n}{2},\, -\frac{n}{2} \right) \]
and the internal representation
defint_choose(besselj(~n,~x),~var) => f1(50,x,n);
symbolic putv(mellin!-transforms!*,50,'
((n) (1 0 0 2) () ((quotient n 2)
(minus (quotient n 2))) 1
(quotient (expt x 2) 4)));


The print_conditions function

The required conditions for the validity of the transform integrals can be viewed
using the print_conditions command.
For example, after calculating a Laplace transform, the print_conditions command would produce

repart(sum(ai) - sum(bj)) + 1/2 (q + 1 - p) > (q - p) repart(s)

Finally this procedure RETURNS the complete result of carrying the solution over into the equation.
This procedure cannot be used if the solution number solk is linked to a condition.
Writing of different forms of results
procedure standsol(solutions);
This procedure enables the simplified form of each solution to be obtained from
the list "solutions", {lcoeff,{...,{general_solution},...}}, which is one of the elements of the list returned by DESIR, or {lcoeff, sol} where sol is the list returned.
This procedure RETURNS a list of 3 elements: {lcoeff, solstand, solcond}
lcoeff = list of differential equation coefficients
solstand = list of solutions written in standard form
solcond = list of conditional solutions that have not been written in
standard form; these solutions remain in general form.
This procedure has no meaning for "conditional" solutions. In case a value has
to be given to the parameters, that can be done either by calling the procedure
SORPARAM, which displays and returns these solutions in standard form, or
by calling the procedure SOLPARAM, which returns these solutions in general form.
procedure sorsol(sol);
This procedure is called by DESIR to write the solution sol, given in general form,
in standard form with enumeration of different conditions (if there are any).
It can be used independently.
Writing of solutions after the choice of parameters
procedure sorparam(solutions, param);
This is an interactive procedure which displays the evaluated solutions: the value
of the parameters is requested.
solutions : {lcoeff,{...,{general_solution},...}}
param : list of parameters.
It returns the list formed of 2 elements :


• list of evaluated coefficients of the equation
• list of standard solutions evaluated for the value of parameters.

procedure solparam(solutions, param, valparam);
This procedure evaluates the general solutions for the values of the parameters given by
valparam and returns these solutions in general form.
solutions : {lcoeff,{...,{general_solution},...}}
param : list of parameters
valparam : list of parameter values
It returns the list formed of 2 elements :
• list of evaluated coefficients of the equation
• list of solutions in general form, evaluated for the value of parameters.
procedure changehom(lcoeff, x, secmember, id);
Differentiation of an equation with a right-hand side.
lcoeff : list of coefficients of the equation
x : variable
secmember : right-hand side
id : order of the differentiation.
It returns the list of coefficients of the differentiated equation. It enables an equation with a polynomial right-hand side to be transformed into a homogeneous equation by differentiating id times, id = degree(secmember) + 1.
procedure changevar(lcoeff, x, v, fct);
Change of variable in the homogeneous equation defined by the list lcoeff of its
coefficients: the old variable x and the new variable v are linked by the relation
x = fct(v).
It returns the list of coefficients with respect to the variable v of the new equation.
Examples of use:
- a translation enabling a rational singularity to be brought back to zero;
- x = 1/v brings the point at infinity to 0.
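A hedged sketch of such a change of variable (the coefficient list {x**2, 1} and its ordering are hypothetical):

```reduce
% bring the singularity at infinity back to 0 via x = 1/v
lcoeff := {x**2, 1};                       % hypothetical equation coefficients
newcoeff := changevar(lcoeff, x, v, 1/v);
```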
procedure changefonc(lcoeff, x, q, fct);
Change of unknown function in the homogeneous equation defined by the list
lcoeff of its coefficients:

lcoeff : list of coefficients of the initial equation
x : variable
q : new unknown function
fct : y being the unknown function, y = fct(q)
It returns the list of coefficients of the new equation.
Example of use: this procedure enables the computation, in the neighbourhood of an irregular singularity, of the "reduced" equation associated to one of the slopes (the Newton
polygon having a null slope of non-null length). This equation gives much information on the associated divergent series.
Optional writing of intermediary results
switch trdesir: when it is ON, at each step of the Newton algorithm a description
of the Newton polygon is displayed (it is possible to follow the breaking of the slopes), and
at each call of the FROBENIUS procedure (case of a null slope) the corresponding
indicial equation is displayed.
By default, this switch is OFF.



1. This DESIR version is limited to differential equations leading to indicial
equations of degree <= 3. To pass beyond this limit, a further version written in the D5 environment for computation with algebraic numbers has to
be used.
2. The computation of a basis of solutions for an equation depending on parameters is guaranteed only when the indicial equations are of degree <= 2.




DFPART: Derivatives of generic functions

This package supports computations with total and partial derivatives of formal
function objects. Such computations can be useful in the context of differential
equations or power series expansions.
Author: Herbert Melenk.
The package DFPART supports computations with total and partial derivatives of
formal function objects. Such computations can be useful in the context of differential equations or power series expansions.


Generic Functions

A generic function is a symbol which represents a mathematical function. The
minimal information about a generic function is the number of its arguments. In order to facilitate the programming, and for more readable output, this
package assumes that the arguments of a generic function have default names such
as f(x,y), q(rho,phi). A generic function is declared by a prototype form in a statement
GENERIC_FUNCTION ⟨fname⟩(⟨arg1⟩, ⟨arg2⟩ ... ⟨argn⟩);
where fname is the (new) name of a function and argi are symbols for its formal arguments. In the following, fname is referred to as the "generic function",
arg1, arg2 ... argn as the "generic arguments" and fname(arg1, arg2 ... argn) as
the "generic form". Examples:
generic_function f(x,y);
generic_function g(z);
After this declaration REDUCE knows that
• there are formal partial derivatives ∂f/∂x, ∂f/∂y, ∂g/∂z and higher ones, while partial
derivatives of f and g with respect to other variables are assumed to be zero,
• expressions of the type f(), g() are abbreviations for f(x,y), g(z),
• expressions of the type f(u,v) are abbreviations for
sub(x = u, y = v, f(x,y)),
• a total derivative df(u,v)/dw has to be computed as
(∂f/∂x)(du/dw) + (∂f/∂y)(dv/dw).


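These conventions can be sketched as follows (dfp is the package's partial-derivative operator; the printed forms are indicative only):

```reduce
load_package dfpart;

generic_function f(x,y);
generic_function g(z);

f();          % abbreviation for f(x,y)
f(u,v);       % abbreviation for sub(x=u, y=v, f(x,y))
dfp(f(),x);   % formal partial derivative of f with respect to x
dfp(g(),x);   % zero: g does not depend on x
```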

Partial Derivatives

The operator DFP represents a partial derivative:
DFP(⟨expr⟩, ⟨dfarg1⟩, ⟨dfarg2⟩ ... ⟨dfargn⟩);
where expr is a function expression and dfargi are the differentiation variables.
For example, DFP(f(u,v),{x,y}) stands for ∂²f/∂x∂y (u,v). For compatibility with the DF operator the differentiation
variables need not be entered in list form; instead the syntax of DF can be used,
where the function expression is followed by the differentiation variables, possibly
with repetition numbers. Such forms are internally converted to the above
form with a list as second parameter.

The expression expr can be a generic function with or without arguments, or an
arithmetic expression built from generic functions and other algebraic parts. In the
second case the standard differentiation rules are applied in order to reduce each
derivative expression to a minimal form.
When the switch NAT is on, partial derivatives of generic functions are printed in
standard index notation, that is f_xy for ∂²f/∂x∂y and f_xy(u,v) for ∂²f/∂x∂y (u,v). Therefore single characters should be used for the arguments whenever possible. Examples:

generic_function f(x,y);
generic_function g(y);
dfp(f()*g(),x);

F_X()*G()

dfp(f()*g(),y);

F_Y()*G() + F()*G_Y()

The difference between partial and total derivatives is illustrated by the following example:
generic_function h(x);
dfp(f(x,h(x))*g(h(x)),x);

F_X(X,H(X))*G(H(X))

df(f(x,h(x))*g(h(x)),x);

F_X(X,H(X))*G(H(X)) + F_Y(X,H(X))*H_X(X)*G(H(X)) + G_Y(H(X))*H_X(X)*F(X,H(X))
Cooperation of partial derivatives and Taylor series under a differential side relation
dq/dx = f(x, q):
load_package taylor;
operator q;
let df(q(~x),x) => f(x,q(x));
F (X0,Q(X0)) + F (X0,Q(X0))*F(X0,Q(X0))
Q(X0) + F(X0,Q(X0))*H + -----------------------------------------*H
+ (F

(X0,Q(X0)) + F (X0,Q(X0))*F(X0,Q(X0))
+ F (X0,Q(X0))*F (X0,Q(X0)) + F (X0,Q(X0))*F(X0,Q(X0))

+ F

(X0,Q(X0))*F(X0,Q(X0)) + F (X0,Q(X0)) *F(X0,Q(X0)))/6*H



+ O(H )

Normally partial derivatives are assumed to be non-commutative.
However, a generic function can be declared to have globally interchangeable partial derivatives using the declaration DFP_COMMUTE, which takes the name of a
generic function or a generic function form as argument. For such a function the differentiation variables are rearranged corresponding to the sequence of the generic arguments:
generic_function q(x,y);
dfp_commute q(x,y);
dfp(q(),{x,y,y}) + dfp(q(),{y,x,y}) + dfp(q(),{y,y,x});
If only a part of the derivatives commute, this has to be declared using the standard
REDUCE rule mechanism. Please note that in that case the derivative variables must be
written as a list.



When a generic form or a DFP expression takes part in a substitution the following
steps are performed:
1. The substitutions are performed for the arguments. If the argument list is
empty the substitution is applied to the generic arguments of the function; if
these change, the resulting forms are used as new actual arguments. If the
generic function itself is not affected by the substitution, the process stops here.
2. If the function name or the generic function form occurs as a left-hand side
in the substitution list, it is replaced by the corresponding right-hand side.
3. The new form is partially differentiated according to the list of partial derivative variables.
4. The (possibly modified) actual parameters are substituted into the form for
their corresponding generic variables. This substitution is done by name.
generic_function f(x,y);





generic_function ff(y,z);
The dataset dfpart.tst contains more examples, including a complete application for computing the coefficient equations for Runge-Kutta ODE solvers.



DUMMY: Canonical form of expressions with dummy variables

This package allows a user to find the canonical form of expressions involving
dummy variables. In that way, the simplification of polynomial expressions can be
fully done. The indeterminates are general operator objects endowed with as few
properties as possible. In that way the package may be used in a large spectrum of
applications.
Author: Alain Dresse.



The possibility to handle dummy variables and to manipulate dummy summations
are important features in many applications. In particular, in theoretical physics,
the possibility to represent complicated expressions concisely and to realize simplifications efficiently depends on both capabilities. However, when dummy variables are used, there are many more ways to express a given mathematical object
since the names of the dummy variables may be chosen almost arbitrarily. Therefore,
from the point of view of computer algebra the simplification problem is much
more difficult. Given a definite ordering, one must, at least, find a representation
which is independent of the names chosen for the dummy variables, otherwise
simplifications are impossible. The package does handle any number of dummy
variables and summations present in expressions which are arbitrary multivariate
polynomials and which have operator objects possibly dependent on one (or several) dummy variable(s) as some of their indeterminates. These operators have the
same generality as those existing in REDUCE. They can be noncommutative,
anticommutative or commutative. They can have any kind of symmetry property.
Such polynomials will be called in the following dummy polynomials. Any monomial of this kind will be called a dummy monomial. For any such object, the package
allows one to find a well-defined normal form in one-to-one correspondence with it.
In section 2, the convention for writing dummy summations is explained and the
available declarations to introduce or suppress dummy variables are given.
In section 3, the commands allowing to give various algebraic properties to the
operators are described.
In section 4, the use of the function CANONICAL is explained and illustrated.
In section 5, a fairly complete set of references is given.
The use of DUMMY requires that the package ASSIST version 2.2 be available;
this is the case when REDUCE 3.6 is used. When DUMMY is loaded, ASSIST is automatically loaded.




Dummy variables and dummy summations

A dummy variable (let us name it dv) is an identifier which runs from the integer
i1 to another integer i2. To the extent that no definite space is defined, i1 and i2 are
assumed to be some integers which are the same for all dummy variables.
If f is any REDUCE operator, then the simplest dummy summation associated to
dv is the sum
\[ \sum_{dv=i_1}^{i_2} f(dv) \]
and is simply written as f(dv).
No other rules govern the implicit summations. dv can appear as many times as we
want since the operator f may depend on an arbitrary number of variables. So the
package is potentially applicable in many contexts. For instance, it is possible to
add rules of the kind one encounters in tensor calculus.
Obviously, there are as many ways as we want to express the same quantity. If the
name of another dummy variable is dum then the previous expression can be written as
f(dum), and the computer algebra system should be able to find that the expression
f(dv) - f(dum);
is equal to 0. A very special case which is allowed is when f is the identity operator.
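A minimal session sketching this behaviour (the operator name f is arbitrary):

```reduce
load_package dummy;

dummy_names dv,dum;    % declare dv and dum as dummy variables
operator f;
rr := f(dv) - f(dum);
canonical rr;          % both terms denote the same summation: result 0
```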
So, a generic dummy polynomial will be a sum of dummy monomials of the kind
\[ c_i \, f_i(dv_1, \ldots, dv_{k_i}, fr_1, \ldots, fr_{l_i}) \]
where dv1, ..., are dummy variables while fr1, ..., are ordinary or free variables.
To declare dummy variables, two commands are available:
• i.
dummy_base ⟨idp⟩;
where idp is the name of any unassigned identifier.
• ii.
dummy_names ⟨d1⟩,⟨d2⟩, ...;

The first one declares idp1, idp2, ..., as dummy variables, i.e. all variables of
the form idpxxx, where xxx is a number, will be dummy variables, such as
idp1, idp2, ..., idp23. The second one gives special names to the dummy variables.
All other identifiers which may appear are assumed to be free. However, there is a
restriction: named and base dummy variables cannot be declared simultaneously.
The above declarations are mutually exclusive. Here is an example showing that:

dummy_base dv; ==> dv
% dummy indices are dv1, dv2, dv3, ...
dummy_names i,j,k; ==>
***** The created dummy base dv must be cleared
When this is done, an expression involving dv1 and dv2 means a sum over dv1 and dv2. To clear the dummy base and create the dummy
names i, j, k one does

clear_dummy_base; ==> t
dummy_names i,j,k; ==> t
% dummy indices are i,j,k.
When this is done, an expression involving i means a sum over i. One should keep in mind that every application of the above
commands erases the previous ones. It is also possible to display the declared
dummy names using SHOW_DUMMY_NAMES:
show_dummy_names(); ==> {i,j,k}
To suppress all dummy variables one can enter
clear_dummy_names; clear_dummy_base;




The Operators and their Properties

All dummy variables should appear at the first level as arguments of operators. For
instance, if i and j are dummy variables, the expression

is allowed but the expression
op(i,op(j)) - op(j,op(j))
is not allowed. This is because dummy variables are not detected if they appear
at a level deeper than 1. Apart from that there are no restrictions. Operators may
be commutative, noncommutative or even anticommutative. Therefore they may
be elements of an algebra, they may be tensors, spinors, Grassmann variables, etc.
By default they are assumed to be commutative and without symmetry properties. The REDUCE command NONCOM is taken into account and, in addition, the command
anticom at1, at2;
makes the operators at1 and at2 anticommutative.
One can also give symmetry properties to them. The usual declarations SYMMETRIC
and ANTISYMMETRIC are taken into account. Moreover, and most important,
operators can be endowed with a partial symmetry through the command SYMTREE.
Here are three illustrative examples for the r operator:
symtree (r,{!+, 1, 2, 3, 4});
symtree (r,{!*, 1, {!-, 2, 3, 4}});
symtree (r, {!+, {!-, 1, 2}, {!-, 3, 4}});
The first one makes the operator (fully) symmetric. The second one declares it
antisymmetric with respect to the three last indices. The symbols !*, !+ and !- at
the beginning of each list mean that the operator has no symmetry, is symmetric or
is antisymmetric with respect to the indices inside the list. Notice that the indices
are not denoted by their names but merely by their natural order of appearance: 1
means the first written argument of r, 2 its second argument, etc. The first command
is equivalent to the declaration SYMMETRIC except that the number of indices of
r is restricted to 4, i.e. to the number declared in SYMTREE. In the second example
r is stated to have no symmetry with respect to the first index and is declared to
be antisymmetric with respect to the three last indices. In the third example, r is
made symmetric with respect to the interchange of the pairs of indices 1,2 and 3,4
respectively, and is made antisymmetric separately within the pairs (1,2) and (3,4).
This is the symmetry of the Riemann tensor. The anticommutation property and the
various symmetry properties may be suppressed by the commands REMANTICOM
and REMSYM. To eliminate partial symmetry properties one can also use SYMTREE
itself. For example, assuming that r has the Riemann symmetry, to eliminate it do
symtree (r,{!*, 1, 2, 3, 4});
However, notice that the number of indices remains fixed and equal to 4, while with
REMSYM it becomes arbitrary again.


The Function CANONICAL

CANONICAL is the most important function of the package. It can be applied
to any polynomial, whether it is a dummy polynomial or not. It returns a normal
form uniquely determined from the current ordering of the system. If the polynomial does not contain any dummy index, it is rewritten taking into account the
various operator properties or symmetries described above. For instance,
symtree (r, {!+, {!-, 1, 2}, {!-, 3, 4}});
canonical aa; ==>

- r(x1,x2,x3,x4).

If it contains dummy indices, CANONICAL also takes into account the various
dummy summations, makes the relevant simplifications, possibly renames the
dummy indices and returns the resulting normal form. Here is a simple example:
operator at1,at2;
anticom at1,at2;
dummy_names i,j,k; ==> t
show_dummy_names(); ==> {i,j,k}
rr:=at1(i)*at2(k) -at2(k)*at1(i)$

canonical rr; => 2*at1(i)*at2(j)
It is important to notice, in the above example, that in addition to the summations over the indices i and k, and to the anticommutativity property of the operators,
canonical has replaced the index k by the index j. This substitution is essential to get full simplification. Several other examples are given in the test file and,



there, the output of CANONICAL is explained.
As stated in the previous section, the dependence of operators on dummy indices
is limited to the first level. An erroneous result will be generated if this is not the case, as
the following example illustrates:
operator op;
dummy_names i,j;
canonical rr; ==> 0
Zero is obtained because, in the second term, CANONICAL has replaced j by i
but has left op(j) unchanged because it does not see the index j which is inside.
This fact also has the consequence that it is unable to simplify correctly (or at
all) expressions which contain some derivatives. For instance (i and j are dummy indices):
aa:=df(op(x,i),x) -df(op(x,j),x)$
canonical aa; ==> df(op(x,i),x) - df(op(x,j),x)
instead of zero. A second limitation is that CANONICAL does not add anything
to the problem of simplification when side relations (like Bianchi identities) are present.







EXCALC: A differential geometry package

EXCALC is designed for easy use by all who are familiar with the calculus of Modern Differential Geometry. The program is currently able to handle scalar-valued
exterior forms, vectors and operations between them, as well as non-scalar valued
forms (indexed forms). It is thus an ideal tool for studying differential equations,
doing calculations in general relativity and field theories, or doing simple things
such as calculating the Laplacian of a tensor field for an arbitrary given frame.
Author: Eberhard Schrüfer.

This program was developed over several years. I would like to express my deep
gratitude to Dr. Anthony Hearn for his continuous interest in this work, and especially for his hospitality and support during a visit in 1984/85 at the RAND
Corporation, where substantial progress on this package could be achieved. The
Heinrich Hertz-Stiftung supported this visit. Many thanks are also due to Drs.
F.W. Hehl, University of Cologne, and J.D. McCrea, University College Dublin,
for their suggestions and work on testing this program.



EXCALC is designed for easy use by all who are familiar with the calculus of
Modern Differential Geometry. Its syntax is kept as close as possible to standard
textbook notations. Therefore, no great experience in writing computer algebra
programs is required. In most cases, one can input nearly the same expressions that
would be written down in a hand calculation. For example, the statement
f*x^y + u _| (y^z^x)
would be recognized by the program as a formula involving exterior products and
an inner product. The program is currently able to handle scalar-valued exterior
forms, vectors and operations between them, as well as non-scalar valued forms
(indexed forms). With this, it should be an ideal tool for studying differential
equations, doing calculations in general relativity and field theories, or doing such
simple things as calculating the Laplacian of a tensor field for an arbitrary given
frame. With the increasing popularity of this calculus, this program should have an
application in almost any field of physics and mathematics.
Since the program is completely embedded in REDUCE, all features and facilities
of REDUCE are available in a calculation. Even for those who are not quite comfortable in this calculus, there is a good chance of learning it by just playing with

the program.
This is the last release of version 2. A much extended differential geometry package (which includes complete symbolic index simplification, tensors, mappings,
bundles and others) is under development.
Complaints and comments are appreciated and should be sent to the author. If the
use of this program leads to a publication, this document should be cited, and a
copy of the article sent to the above address would be welcome.



Geometrical objects like exterior forms or vectors are introduced to the system by
declaration commands. The declarations can appear anywhere in a program, but
must, of course, be made prior to the use of the object. Everything that has no
declaration is treated as a constant; therefore zero-forms must also be declared.
An exterior form is introduced by
PFORM < declaration1 >, < declaration2 >, . . . ;
< declaration > ::= < name > | < list of names > = < number > | < identifier > | < expression >
< name > ::= < identifier > | < identifier >(< arguments >)
For example
pform u=k,v=4,f=0,w=dim-1;
declares U to be an exterior form of degree K, V to be a form of degree 4, F to be a
form of degree 0 (a function), and W to be a form of degree DIM-1.
If the exterior form should have indices, the declaration would be
pform curv(a,b)=2,chris(a,b)=1;
The names of the indices are arbitrary.
Exterior forms of the same degree can be grouped into lists to save typing.
pform {x,y,z}=0,{rho(k,l),u,v(k)}=1;
The declaration of vectors is similar. The command TVECTOR takes a list of names:
TVECTOR < name1 >, < name2 >, . . . ;



For example, to declare X as a vector and COMM as a vector with two indices, one
would say
tvector x,comm(a,b);
If a declaration of an already existing name is made, the old declaration is removed,
and the new one is used.
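A minimal sketch of this redeclaration behaviour (the names are chosen for illustration):

```reduce
pform u=1;       % U is declared a one-form
exdegree u;      % 1
pform u=2;       % redeclaration: the old declaration of U is discarded
exdegree u;      % 2
```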
The exterior degree of a symbol or a general expression can be obtained with the function EXDEGREE:
EXDEGREE < expression >;
exdegree(u + 3*chris(k,-k));
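As a further illustration, with freshly declared forms one would expect (a sketch; declarations chosen here for illustration):

```reduce
pform a=0,b=1,c=2;
exdegree(a);      % 0: a zero-form
exdegree(b^c);    % 3: degrees add under the exterior product
exdegree(d b);    % 2: exterior differentiation raises the degree by one
```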


Exterior Multiplication

Exterior multiplication between exterior forms is carried out with the n-ary infix operator ^ (wedge). Factors are ordered according to the usual ordering in REDUCE,
using the commutation rule for exterior products.
Example 10
pform u=1,v=1,w=k;
v^u;
- U^V
w^u^v;
( - 1)**(2*K)*U^V^W


A*(5*U^V^W - U^W^W)
It is possible to declare the dimension of the underlying space by
SPACEDIM < number > | < identifier >;
If an exterior product has a degree higher than the dimension of the space, it is
replaced by 0:
spacedim 4;
pform u=2,v=3;
u^v;
0


Partial Differentiation

Partial differentiation is denoted by the operator @. Its capabilities are the same as
those of the REDUCE DF operator.
Example 11
@(sin x,x);
An identifier can be declared to be a function of certain variables. This is done
with the command FDOMAIN. The following would tell the partial differentiation
operator that F is a function of the variables X and Y and that H is a function of X.
fdomain f=f(x,y),h=h(x);
Applying @ to expressions involving F and H then takes these dependencies into account.
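For instance, a short session illustrating FDOMAIN with the partial differentiation operator (a sketch; the input lines are assumptions chosen to match the declared dependencies):

```reduce
pform x=0,y=0,f=0,h=0;
fdomain f=f(x,y),h=h(x);
@(x*f,x);     % product rule: F + X*@(F,X)
@(h,y);       % 0, since H was declared to depend on X only
```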


The partial derivative symbol can also be used as an operator with a single argument. It
then represents a natural basis tangent vector.
Example 12
a*@ x + b*@ y;
A*@ X + B*@ Y


Exterior Differentiation

Exterior differentiation of exterior forms is carried out by the operator d. Products
are normally differentiated out, i.e.
pform x=0,y=k,z=m;
d(x * y);
X*d Y + d X^Y
d(r*y);
R*d Y
d(x*y^z);
( - 1)**K*X*Y^d Z + X*d Y^Z + d X^Y^Z

This expansion can be suppressed by the command NOXPND D.
noxpnd d;


To obtain a canonical form for an exterior product when the expansion is switched
off, the operator d is shifted to the right if it appears in the leftmost place:
d y ^ z;
- ( - 1)**K*Y^d Z + d(Y^Z)
Expansion is performed again when the command XPND D is executed.
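A small session sketching the effect of the two commands (declarations chosen for illustration):

```reduce
pform x=0,y=k;
noxpnd d;
d(x*y);      % kept as d(X*Y)
xpnd d;
d(x*y);      % differentiated out again: X*d Y + d X^Y
```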
Functions which are implicitly defined by the FDOMAIN command are expanded
into partial derivatives:
pform x=0,y=0,z=0,f=0;
fdomain f=f(x,y);
d f;
@(F,X)*d X + @(F,Y)*d Y
If an argument of an implicitly defined function has further dependencies, the chain
rule is applied, e.g.
fdomain y=y(z);
d f;
@(F,X)*d X + @(F,Y)*@(Y,Z)*d Z
Expansion into partial derivatives can be inhibited by NOXPND @ and enabled
again by XPND @.
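A sketch of these switches in use (declarations as in the preceding example):

```reduce
pform x=0,y=0,f=0;
fdomain f=f(x,y);
noxpnd @;
d f;         % left unexpanded as d F
xpnd @;
d f;         % expanded into @(F,X)*d X + @(F,Y)*d Y
```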
The operator is of course aware of the rules that a repeated application always leads
to zero and that there is no exterior form of higher degree than the dimension of
the space.
d d x;
0
pform u=k;
spacedim k;
d u;
0


Inner Product

The inner product between a vector and an exterior form is represented by the
diphthong _| (underscore or-bar), which is the notation of many textbooks. If the
exterior form is an exterior product, the inner product is carried through any factor.
Example 13
pform x=0,y=k,z=m;
tvector u,v;
u _| (x*y^z);
X*(( - 1)**K*Y^U _| Z + U _| Y^Z)
In repeated applications of the inner product to the same exterior form the vector
arguments are ordered e.g.
(u+x*v) _| (u _| (3*z));
- 3*U _| V _| Z
The duality of natural base elements is also known by the system, i.e.
pform {x,y}=0;
(a*@ x+b*@(y)) _| (3*d x-d y);
3*A - B



Lie Derivative

The Lie derivative can be taken between a vector and an exterior form or between
two vectors. It is represented by the infix operator |_ . In the case of Lie differentiating
an exterior form by a vector, the Lie derivative is expressed through inner
products and exterior differentiations, i.e.
pform z=k;
tvector u;
u |_ z;
U _| d Z + d(U _| Z)
If the arguments of the Lie derivative are vectors, the vectors are ordered using the
anticommutativity property, and functions (zero-forms) are differentiated out.
Example 14
tvector u,v;
v |_ u;
- U |_ V
pform x=0,y=0;
(x*u) |_ (y*v);
- U*Y*V _| d X + V*X*U _| d Y + X*Y*U |_ V


Hodge-* Duality Operator

The Hodge-* duality operator maps an exterior form of degree K to an exterior form
of degree N-K, where N is the dimension of the space. The double application
of the operator must lead back to the original exterior form up to a factor. The
following example shows how the factor is chosen here
spacedim n;
pform x=k;
# # x;



( - 1)**(K + K*N)*SGN*X

The indeterminate SGN in the above example denotes the sign of the determinant
of the metric. It can be assigned a value or will be automatically set if more of
the metric structure is specified (via COFRAME), i.e. it is then set to g/|g|, where
g is the determinant of the metric. If the Hodge-* operator appears in an exterior
product of maximal degree as the leftmost factor, the Hodge-* is shifted to the right
according to
pform {x,y}=k;
# x ^ y;
( - 1)**(K + K*N)*X^# Y

More simplifications are performed if a coframe is defined.


Variational Derivative

The function VARDF returns as its value the variation of a given Lagrangian n-form
with respect to a specified exterior form (a field of the Lagrangian). The expression
that has to yield zero if integrated over the boundary is stored in the shared
variable BNDEQ!*.
VARDF(< Lagrangian n-form >,< exterior form >)
Example 15
spacedim 4;
pform l=4,a=1,j=3;
l:=-1/2*d a ^ # d a - a^# j$   %Lagrangian of the e.m. field
vardf(l,a);
- (# J + d # d A)              %Maxwell's equations
bndeq!*;
- 'A^# d A                     %Equation at the boundary

In the current implementation, the Lagrangian must be built up by the fields and
the operations d, #, and @. Variation with respect to indexed quantities is currently
not allowed.
For the calculation of the conserved currents induced by symmetry operators (vector fields), the function NOETHER is provided. It has the syntax:
NOETHER(< Lagrangian n-form >,< field >,< symmetry generator >)
Example 16
pform l=4,a=1,f=2;
spacedim 4;
l := -1/2*d a^#d a;    %Free Maxwell field;
tvector x;             %An unspecified generator;
noether(l,a,x);
- 2*d(x _| a)^# d a + d a^x _| # d a - x _| d a^# d a

The above expression would be the canonical energy-momentum 3-forms of the
Maxwell field, if X is interpreted as a translation.


Handling of Indices

Exterior forms and vectors may have indices. On input, the indices are given as
arguments of the object. A positive argument denotes a superscript and a negative
argument a subscript. On output, the indexed quantity is displayed two dimensionally if NAT is on. Indices may be identifiers or numbers.
Example 17


pform om(k,l)=m,e(k)=1;
e(k)^e(l);
E(K)^E(L)

In the current release, full simplification is performed only if an index range is
specified. It is hoped that this restriction can be removed soon. If the index range
(the values that the indices can obtain) is specified, the given expression is evaluated for all possible index values, and the summation convention is understood.
Example 18
indexrange t,r,ph,z;
pform e(k)=1,s(k,l)=2;
w := e(k)*e(-k);
W := E(T)*E(-T) + E(R)*E(-R) + E(PH)*E(-PH) + E(Z)*E(-Z)


If the expression to be evaluated is not an assignment, the values of the expression
are displayed as an assignment to an indexed variable with name NS. This is done
only on output, i.e. no actual binding to the variable NS occurs.


It should be noted, however, that the index positions on the variable NS can sometimes not be uniquely determined by the system (because of possible reorderings in
the expression). Generally, it is therefore advisable to use assignments to display complicated expressions.
A range can also be assigned to individual index names. For example, the declaration
indexrange {k,l}={x,y,z},{u,v,w}={1,2};
would assign to the index identifiers k,l the range values x,y,z and to the index
identifiers u,v,w the range values 1,2. The use of an index identifier not listed in
previous indexrange statements has the range of the union of all given index ranges.
With the above example of an indexrange statement, the following index evaluations would take place
pform w(n)=0;
w(k)*w(-k);
W(X)*W(-X) + W(Y)*W(-Y) + W(Z)*W(-Z)
w(u)*w(-u);
W(1)*W(-1) + W(2)*W(-2)
w(n)*w(-n);
W(X)*W(-X) + W(Y)*W(-Y) + W(Z)*W(-Z) + W(1)*W(-1) + W(2)*W(-2)

In certain cases, one would like to inhibit the summation over specified index
names, or over all indices. For this, the command
NOSUM < indexname1 >, . . . ;
and the switch NOSUM are available. The command NOSUM has the effect that
summation is not performed over those indices which are listed. The command RENOSUM enables summation again. The switch NOSUM, if on, inhibits any summation.
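For example (a sketch using the index range declared earlier in this section; RENOSUM is assumed here to take no arguments):

```reduce
indexrange t,r,ph,z;
pform e(k)=1;
nosum k;
e(k)*e(-k);    % no summation over k is performed
renosum;
e(k)*e(-k);    % summed over t,r,ph,z again
```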
It is possible to declare symmetry properties for an indexed quantity by the command INDEX_SYMMETRIES. A prototypical example is as follows

index_symmetries u(k,l,m,n): symmetric in {k,l},{m,n}
                             antisymmetric in {{k,l},{m,n}},
                 g(k,l),h(k,l): symmetric;

It declares the object u symmetric in the first two and last two indices, and antisymmetric with respect to commutation of the given index pairs. If an object is
completely symmetric or antisymmetric, the indices need not be given after the
corresponding keyword, as shown above for g and h.
If applicable, this command should be issued, since it results in great savings of memory and
execution time. Also, only the strictly independent components are printed.
The commands symmetric and antisymmetric of earlier releases have no effect.


Metric Structures

A metric structure is defined in EXCALC by specifying a set of basis one-forms
(the coframe) together with the metric.

COFRAME < identifier >< (index1 ) >=< expression1 >,
        < identifier >< (index2 ) >=< expression2 >,
        ...
        < identifier >< (indexn ) >=< expressionn >
   WITH METRIC < name >=< expression >;

This statement automatically sets the dimension of the space and the index range.
The clause WITH METRIC can be omitted if the metric is Euclidean, and the shorthand WITH SIGNATURE < diagonal elements > can be used in the case
of a pseudo-Euclidean metric. The splitting of a metric structure into its metric tensor coefficients and basis one-forms is completely arbitrary, including the extremes
of an orthonormal frame and a coordinate frame.
Example 19
coframe e r=d r, e(ph)=r*d ph
   with metric g=e(r)*e(r)+e(ph)*e(ph);        %Polar coframe

coframe e(r)=d r,e(ph)=r*d(ph);                %Same as before

coframe o(t)=d t, o x=d x
   with signature -1,1;                        %A Lorentz coframe

coframe b(xi)=d xi, b(eta)=d eta               %A lightcone coframe
   with metric w=-1/2*(b(xi)*b(eta)+b(eta)*b(xi));

coframe e r=d r, e ph=d ph                     %Polar coordinate basis
   with metric g=e r*e r+r**2*e ph*e ph;

Individual elements of the metric can be accessed just by calling them with the
desired indices. The value of the determinant of the covariant metric is stored in
the variable DETM!*. The metric is not needed for lowering or raising of indices
as the system performs this automatically, i.e. no matter in what index position
values were assigned to an indexed quantity, the values can be retrieved for any
index position just by writing the indexed quantity with the desired indices.
Example 20


coframe e t=d t,e x=d x,e y=d y
with signature -1,1,1;
pform f(k,l)=0;
index_symmetries f(k,l): antisymmetric;
f(k,l) := 0$
f(-t,-x):=ex$ f(-x,-y):=b$
on nero;

:= - EX


:= - EX


:= - B


:= B

Any expression containing differentials of the coordinate functions will be transformed into an expression of the basis one-forms. The system also knows how to
take the exterior derivative of the basis one-forms.
Example 21 (Spherical coordinates)
coframe e(r)=d(r), e(th)=r*d(th), e(ph)=r*sin(th)*d(ph);
d r^d th;
(E(R)^E(TH))/R

pform f=0;
fdomain f=f(r,th,ph);
factor e;
on rat;
d f;

%The "gradient" of F in spherical coordinates;

E *@

F + (E *@

F)/R + (E *@


The frame dual to the frame defined by the COFRAME command can be introduced
by the FRAME command.
FRAME < identifier >;
This command causes the dual property to be recognized, and the tangent vectors
of the coordinate functions are replaced by the frame basis vectors.
Example 22
coframe b r=d r,b ph=r*d ph,b z=d z; %Cylindrical coframe;
frame x;
on nero;
x(-k) _| b(l);

:= 1


:= 1



:= 1

x(-k) |_ x(-l);


:= X


%The commutator of the dual frame;


:= ( - X


%i.e. it is not a coordinate base;

As a convenience, the frames can be displayed at any point in a program by the
command DISPLAYFRAME;.
The Hodge-* duality operator returns the explicitly constructed dual element if
applied to coframe base elements. The metric is properly taken into account.
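For example, in a two-dimensional Euclidean coframe (a sketch; the coframe names are chosen for illustration):

```reduce
coframe e(x)=d x, e(y)=d y;   % no WITH METRIC clause: Euclidean metric implied
# e(x);                       % yields the dual basis element (proportional to E(Y))
```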
The totally antisymmetric Levi-Civita tensor EPS is also available. The value of
EPS with an even permutation of the indices in a covariant position is taken to be +1.


Riemannian Connections

The command RIEMANNCONX is provided for calculating the connection 1-forms.
The values are stored on the name given to RIEMANNCONX. This command is far
more efficient than calculating the connection from the differential of the basis
one-forms and using inner products.
Example 23 (Calculate the connection 1-form and curvature 2-form on S(2))
coframe e th=r*d th,e ph=r*sin(th)*d ph;
riemannconx om;

%Display the connection forms;


:= 0



:= (E



:= ( - E



:= 0

pform curv(k,l)=2;

curv(k,-l):=d om(k,-l) + om(k,-m)^om(m,-l);
%The curvature forms

:= 0


:= ( - E


:= (E

^E )/R
%Of course it was a sphere with
%radius R.

^E )/R


:= 0


Killing Vectors

The command KILLING_VECTOR is provided for calculating the determining
system of partial differential equations of Killing vectors for a given metric structure provided by the coframe statement. The result is a list where the first entry is
a vector constructed from the identifier given to the command and the second entry
consists of a list of partial differential equations for the coefficients of this vector.
Example 24 (Calculate the determining pde's for a Killing vector of S(2))



coframe e th = d th,e ph = sin th*d ph;
killing_vector u;








)*sin(th) + @



+ @ (u )*sin(th)}}


Ordering and Structuring

The ordering of an exterior form or vector can be changed by the command
FORDER. In an expression, the first identifier or kernel in the argument list of
FORDER is ordered ahead of the second, and so on, and all of them are ordered ahead
of kernels not appearing as arguments. This ordering is done on the internal level, not only on
output. The execution of this statement can therefore have a tremendous effect on
computation time and memory requirements. REMFORDER restores the standard
ordering for those elements that are listed as arguments.
An expression can be put in a more structured form by renaming a subexpression.
This is done with the command KEEP, which has the syntax
KEEP < name1 >=< expression1 >,< name2 >=< expression2 >, . . .
The effect is that rules are set up for simplifying < name > without introducing its
definition into an expression. The system also tries, by reordering, to
generate as many instances of < name > in an expression as possible.
Example 25
pform x=0,y=0,z=0,f=0,j=3;


keep j=d x^d y^d z;
d j;
j^d x;
fdomain f=f(x);
d f^d y^d z;
@ F*J
The capabilities of KEEP are currently very limited. Only exterior products should
occur as right-hand sides in KEEP.




Summary of Operators and Commands

Table 16.1 summarizes EXCALC commands and the page numbers on which they are defined.


^                 Exterior Multiplication
@                 Partial Differentiation
@                 Tangent Vector
#                 Hodge-* Operator
_|                Inner Product
|_                Lie Derivative
COFRAME           Declaration of a coframe
d                 Exterior differentiation
DISPLAYFRAME      Displays the frame
EPS               Levi-Civita tensor
EXDEGREE          Calculates the exterior degree of an expression
FDOMAIN           Declaration of implicit dependencies
FORDER            Ordering command
FRAME             Declares the frame dual to the coframe
INDEXRANGE        Declaration of indices
INDEX_SYMMETRIES  Declares arbitrary index symmetry properties
KEEP              Structuring command
METRIC            Clause of COFRAME to specify a metric
NOETHER           Calculates the Noether current
NOSUM             Inhibits summation convention
NOXPND d          Inhibits the use of product rule for d
NOXPND @          Inhibits expansion into partial derivatives
PFORM             Declaration of exterior forms
REMFORDER         Clears ordering
RENOSUM           Enables summation convention
RIEMANNCONX       Calculation of a Riemannian Connection
SIGNATURE         Clause of COFRAME to specify a pseudo-Euclidean metric
SPACEDIM          Command to set the dimension of a space
TVECTOR           Declaration of vectors
VARDF             Variational derivative
XPND d            Enables the use of product rule for d
XPND @            Enables expansion into partial derivatives

Table 16.1: EXCALC Command Summary





The following examples should illustrate the use of EXCALC. It is not intended
to show the most efficient or most elegant way of stating the problems; rather,
the variety of syntactic constructs is exemplified. The examples are in a test file
distributed with EXCALC.

% Problem: Calculate the PDE's for the isovector of the heat equation.
% --------
% (c.f. B. K. Harrison, F. B. Estabrook, "Geometric Approach...",
%  J. Math. Phys. 12, 653, 1971)
% The heat equation @(psi,x,x) = @(psi,t) is equivalent to the set of
% exterior equations (with u=@(psi,t), y=@(psi,x)):

pform {psi,u,x,y,t}=0,a=1,{da,b}=2;
a := d psi - u*d t - y*d x;
da := - d u^d t - d y^d x;
b := u*d x^d t - d y^d t;

% Now calculate the PDE’s for the isovector.
tvector v;
pform {vpsi,vt,vu,vx,vy}=0;
fdomain vpsi=vpsi(psi,t,u,x,y),vt=vt(psi,t,u,x,y),vu=vu(psi,t,u,x,y),
        vx=vx(psi,t,u,x,y),vy=vy(psi,t,u,x,y);
v := vpsi*@ psi + vt*@ t + vu*@ u + vx*@ x + vy*@ y;

factor d;
on rat;
i1 := v |_ a - l*a;
pform o=1;
o := ot*d t + ox*d x + ou*d u + oy*d y;



fdomain f=f(psi,t,u,x,y);
i11 := v _| d a - l*a + d f;
let vx=-@(f,y),vt=-@(f,u),vu=@(f,t)+u*@(f,psi),vy=@(f,x)+y*@(f,psi),
factor ^;
i2 := v |_ b - xi*b - o^a + zeta*da;
let ou=0,oy=@(f,u,psi),ox=-u*@(f,u,psi),
let zeta=-@(f,u,x)-@(f,u,y)*u-@(f,u,psi)*y;
let xi=-@(f,t,u)-u*@(f,u,psi)+@(f,x,y)+u*@(f,y,y)+y*@(f,y,psi)+@(f,psi);
let @(f,u,u)=0;

% These PDE’s have to be solved.

clear a,da,b,v,i1,i11,o,i2,xi,t;
remfdomain f,vpsi,vt,vu,vx,vy;
clear @(f,u,u);


% Problem:
% -------
% Calculate the integrability conditions for the system of PDE's:
% (c.f. B. F. Schutz, "Geometrical Methods of Mathematical Physics",
%  Cambridge University Press, 1984, p. 156)

% @ z(1)/@ x + a1*z(1) + b1*z(2) = c1
% @ z(1)/@ y + a2*z(1) + b2*z(2) = c2
% @ z(2)/@ x + f1*z(1) + g1*z(2) = h1
% @ z(2)/@ y + f2*z(1) + g2*z(2) = h2

pform w(k)=1,integ(k)=4,{z(k),x,y}=0,{a,b,c,f,g,h}=1,





% The equivalent exterior system:
factor d;
w(1) := d z(-1) + z(-1)*a + z(-2)*b - c;
w(2) := d z(-2) + z(-1)*f + z(-2)*g - h;
indexrange 1,2;
factor z;
% The integrability conditions:
integ(k) := d w(k) ^ w(1) ^ w(2);
clear a,b,c,f,g,h,x,y,w(k),integ(k),z(k);
remfdomain a1,a2,b1,c1,c2,f1,f2,g1,g2,h1,h2;

% Problem:
% -------
% Calculate the PDE's for the generators of the d-theta symmetries of
% the Lagrangian system of the planar Kepler problem.
% c.f. W. Sarlet, F. Cantrijn, SIAM Review 23, 467, 1981
% Verify that time translation is a d-theta symmetry and calculate the
% corresponding integral.

pform {t,q(k),v(k),lam(k),tau,xi(k),eta(k)}=0,theta=1,f=0,
tvector gam,y;
indexrange 1,2;
fdomain tau=tau(t,q(k),v(k)),xi=xi(t,q(k),v(k)),f=f(t,q(k),v(k));



l := 1/2*(v(1)**2 + v(2)**2) + m/r$

% The Lagrangian.

pform r=0;
fdomain r=r(q(k));
let @(r,q 1)=q(1)/r,@(r,q 2)=q(2)/r,q(1)**2+q(2)**2=r**2;
lam(k) := -m*q(k)/r;

% The force.

gam := @ t + v(k)*@(q(k)) + lam(k)*@(v(k))$
eta(k) := gam _| d xi(k) - v(k)*gam _| d tau$

y := tau*@ t + xi(k)*@(q(k)) + eta(k)*@(v(k))$

% Symmetry generator.

theta := l*d t + @(l,v(k))*(d q(k) - v(k)*d t)$
factor @;
s := y |_ theta - d f$
glq(k) := @(q k) _| s;
glv(k) := @(v k) _| s;
glt := @(t) _| s;
% Translation in time must generate a symmetry.
xi(k) := 0;
tau := 1;
glq k := glq k;
glv k := glv k;
% The corresponding integral is of course the energy.
integ := - y _| theta;

clear l,lam k,gam,eta k,y,theta,s,glq k,glv k,glt,t,q k,v k,tau,xi k;
remfdomain r,f,tau,xi;

-------Calculate the "gradient" and "Laplacian" of a function and the "curl"
and "divergence" of a one-form in elliptic coordinates.

coframe e u = sqrt(cosh(v)**2 - sin(u)**2)*d u,
e v = sqrt(cosh(v)**2 - sin(u)**2)*d v,
e phi = cos u*sinh v*d phi;


pform f=0;
fdomain f=f(u,v,phi);
factor e,^;
on rat,gcd;
order cosh v, sin u;
% The gradient:
d f;
factor @;
% The Laplacian:
# d # d f;
% Another way of calculating the Laplacian:
-#vardf(1/2*d f^#d f,f);
remfac @;
% Now calculate the "curl" and the "divergence" of a one-form.
pform w=1,a(k)=0;
fdomain a=a(u,v,phi);
w := a(-k)*e k;
% The curl:
x := # d w;
factor @;
% The divergence:
y := # d # w;

remfac @;
clear x,y,w,u,v,phi,e k,a k;
remfdomain a,f;

% Problem:
% -------
% Calculate in a spherical coordinate system the Navier-Stokes equations.
coframe e r=d r, e theta =r*d theta, e phi = r*sin theta *d phi;
frame x;
fdomain v=v(t,r,theta,phi),p=p(r,theta,phi);



pform v(k)=0,p=0,w=1;
% We first calculate the convective derivative.
w := v(-k)*e(k)$
factor e; on rat;
cdv := @(w,t) + (v(k)*x(-k)) |_ w - 1/2*d(v(k)*v(-k));
%next we calculate the viscous terms;
visc := nu*(d#d# w - #d#d w) + mu*d#d# w;
% Finally we add the pressure term and print the components of the
% whole equation.
pform nasteq=1,nast(k)=0;
nasteq := cdv - visc + 1/rho*d p$
factor @;
nast(-k) := x(-k) _| nasteq;
remfac @,e;
clear v k,x k,nast k,cdv,visc,p,w,nasteq,e k;
remfdomain p,v;


% Problem:
% -------
% Calculate from the Lagrangian of a vibrating rod the equation of
% motion and show that the invariance under time translation leads
% to a conserved current.

pform {y,x,t,q,j}=0,lagr=2;
fdomain y=y(x,t),q=q(x),j=j(x);
factor ^;
lagr := 1/2*(rho*q*@(y,t)**2 - e*j*@(y,x,x)**2)*d x^d t;
% The Lagrangian does not explicitly depend on time; therefore the
% vector field @ t generates a symmetry. The conserved current is


pform c=1;
factor d;
c := noether(lagr,y,@ t);
% The exterior derivative of this must be zero or a multiple of the
% equation of motion (weak conservation law) to be a conserved current.
remfac d;
d c;
% i.e. it is a multiple of the equation of motion.
clear lagr,c,j,y,q;
remfdomain y,q,j;

% Problem:
% -------
% Show that the metric structure given by Eguchi and Hanson induces a
% self-dual curvature.
% c.f. T. Eguchi, P.B. Gilkey, A.J. Hanson, "Gravitation, Gauge Theories
% and Differential Geometry", Physics Reports 66, 213, 1980

for all x let cos(x)**2=1-sin(x)**2;
pform f=0,g=0;
fdomain f=f(r), g=g(r);


f*d r,
(r/2)*(sin(psi)*d theta - sin(theta)*cos(psi)*d phi),
(r/2)*(-cos(psi)*d theta - sin(theta)*sin(psi)*d phi),
(r/2)*g*(d psi + cos(theta)*d phi);

frame e;

pform gamma(a,b)=1,curv2(a,b)=2;
index_symmetries gamma(a,b),curv2(a,b): antisymmetric;
factor o;
gamma(-a,-b) := -(1/2)*( e(-a) _| (e(-c) _| (d o(-b)))
-e(-b) _| (e(-a) _| (d o(-c)))
+e(-c) _| (e(-b) _| (d o(-a))) )*o(c)$

curv2(-a,b) := d gamma(-a,b) + gamma(-c,b)^gamma(-a,c)$



let f=1/g,g=sqrt(1-(a/r)**4);
pform chck(k,l)=2;
index_symmetries chck(k,l): antisymmetric;
% The following has to be zero for a self-dual curvature.
chck(k,l) := 1/2*eps(k,l,m,n)*curv2(-m,-n) + curv2(k,l);
clear gamma(a,b),curv2(a,b),f,g,chck(a,b),o(k),e(k),r,phi,psi;
remfdomain f,g;

% Example: 6-dimensional FRW model with quadratic curvature terms in
% -------
% the Lagrangian (Lanczos and Gauss-Bonnet terms).
% cf. Henriques, Nuclear Physics, B277, 621 (1986)

for all x let cos(x)**2+sin(x)**2=1;
pform {r,s}=0;
fdomain r=r(t),s=s(t);
coframe o(t)
with metric


d t,
r*d u/(1 + k*(u**2)/4),
r*u*d theta/(1 + k*(u**2)/4),
r*u*sin(theta)*d phi/(1 + k*(u**2)/4),
s*d v1,
s*sin(v1)*d v2

frame e;
on nero; factor o,^;
riemannconx om;
pform curv(k,l)=2,{riemann(a,b,c,d),ricci(a,b),riccisc}=0;
index_symmetries curv(k,l): antisymmetric,
riemann(k,l,m,n): antisymmetric in {k,l},{m,n}
symmetric in {{k,l},{m,n}},
ricci(k,l): symmetric;
curv(k,l) := d om(k,l) + om(k,-m)^om(m,l);
riemann(a,b,c,d) := e(d) _| (e (c) _| curv(a,b));

% The rest is done in the Ricci calculus language,
ricci(-a,-b) := riemann(c,-a,-d,-b)*g(-c,d);
riccisc := ricci(-a,-b)*g(a,b);
pform {laglanc,inv1,inv2} = 0;
index_symmetries riemc3(k,l),riemri(k,l),
hlang(k,l),einst(k,l): symmetric;
pform {riemc3(i,j),riemri(i,j)}=0;
riemc3(-i,-j) := riemann(-i,-k,-l,-m)*riemann(-j,k,l,m)$
inv1 := riemc3(-i,-j)*g(i,j);
riemri(-i,-j) := 2*riemann(-i,-k,-j,-l)*ricci(k,l)$
inv2 := ricci(-a,-b)*ricci(a,b);
laglanc := (1/2)*(inv1 - 4*inv2 + riccisc**2);

pform {einst(a,b),hlang(a,b)}=0;
hlang(-i,-j) := 2*(riemc3(-i,-j) - riemri(-i,-j) - 2*ricci(-i,-k)*ricci(-j,k) +
                   riccisc*ricci(-i,-j) - (1/2)*laglanc*g(-i,-j));
% The complete Einstein tensor:
einst(-i,-j) := (ricci(-i,-j) - (1/2)*riccisc*g(-i,-j))*alp1 +
                alp2*hlang(-i,-j)$
alp1 := 1$
factor alp2;
einst(-i,-j) := einst(-i,-j);
clear o(k),e(k),riemc3(i,j),riemri(i,j),curv(k,l),riemann(a,b,c,d),
remfdomain r,s;

% Problem:
% -------
% Calculate for a given coframe and given torsion the Riemannian part and
% the torsion induced part of the connection. Calculate the curvature.

% For a more elaborate example see E.Schruefer, F.W. Hehl, J.D. McCrea,
% "Application of the REDUCE package EXCALC to the Poincare gauge field



% theory of gravity", GRG Journal, vol. 19, (1988)


pform {ff, gg}=0;
fdomain ff=ff(r), gg=gg(r);
coframe o(4) = d u + 2*b0*cos(theta)*d phi,
        o(1) = ff*(d u + 2*b0*cos(theta)*d phi) + d r,
        o(2) = gg*d theta,
        o(3) = gg*sin(theta)*d phi
with metric g = -o(4)*o(1)-o(1)*o(4)+o(2)*o(2)+o(3)*o(3);

frame e;
pform {tor(a),gwt(a)}=2,gamma(a,b)=1,
index_symmetries gamma(a,b): antisymmetric;
fdomain u1=u1(r),u3=u3(r),u5=u5(r);
tor(4) := 0$
tor(1) := -u5*o(4)^o(1) - 2*u3*o(2)^o(3)$
tor(2) := u1*o(4)^o(2) + u3*o(4)^o(3)$
tor(3) := u1*o(4)^o(3) - u3*o(4)^o(2)$
gwt(-a) := d o(-a) - tor(-a)$
% The following is the combined connection.
% The Riemannian part could have equally well been calculated by the
% RIEMANNCONX statement.
gamma(-a,-b) := (1/2)*( e(-b) _| (e(-c) _| gwt(-a))
+e(-c) _| (e(-a) _| gwt(-b))
-e(-a) _| (e(-b) _| gwt(-c)) )*o(c);
pform curv(a,b)=2;
index_symmetries curv(a,b): antisymmetric;
factor ^;
curv(-a,b) := d gamma(-a,b) + gamma(-c,b)^gamma(-a,c);
clear o(k),e(k),curv(a,b),gamma(a,b),theta,phi,x,y,z,r,s,t,u,v,p,q,c,cs;
remfdomain u1,u3,u5,ff,gg;





FIDE: Finite difference method for partial differential equations

This package automates the process of numerically solving systems of partial
differential equations (PDES) by means of computer algebra. The finite difference
method is applied for solving the PDES. The computer algebra system REDUCE and the numerical programming language FORTRAN are used in the presented
methodology. Its main aim is to speed up the process
of preparing numerical programs for solving PDES, which, especially for
complicated systems, is quite often a tedious and time-consuming task.
Documentation for this package is in plain text.
Author: Richard Liska.



The FIDE package automates the process of numerically solving systems of partial differential equations (PDES) by means of computer algebra. For
solving PDES, the finite difference method is applied. The computer algebra system
REDUCE and the numerical programming language FORTRAN are used in the
presented methodology. The main aim of this methodology is to speed up the
process of preparing numerical programs for solving PDES. This process is quite
often, especially for complicated systems, a tedious and time consuming task. In
the process one can find several stages in which computer algebra can be used
for performing routine analytical calculations, namely: transforming differential
equations into different coordinate systems, discretization of differential equations,
analysis of difference schemes and generation of numerical programs. The FIDE
package consists of the following modules:
EXPRES for transforming PDES into any orthogonal coordinate system.
IIMET for discretization of PDES by integro-interpolation method.
APPROX for determining the order of approximation of difference scheme.
CHARPOL for calculation of amplification matrix and characteristic polynomial
of difference scheme, which are needed in Fourier stability analysis.
HURWP for polynomial roots locating necessary in verifying the von Neumann
stability condition.
LINBAND for generating the block of FORTRAN code, which solves a system
of linear algebraic equations with band matrix appearing quite often in difference schemes.

Version 1.1 of the FIDE package is the result of porting the FIDE package to REDUCE 3.4. In comparison with Version 1.0, some features have been changed in the
LINBAND module (possibility to interface several numerical libraries).
References
[1] R. Liska, L. Drska: FIDE: A REDUCE package for automation of FInite difference method for solving PDE. In ISSAC '90, Proceedings of the International Symposium on Symbolic and Algebraic Computation, Ed. S. Watanabe, M. Nagata, p. 169-176, ACM Press, Addison Wesley, New York 1990.



A Module for Transforming Differential Operators and Equations into an Arbitrary
Orthogonal Coordinate System
This module makes it possible to express various scalar, vector, and tensor differential equations in any orthogonal coordinate system. All transformations needed
are executed automatically according to the coordinate system given by the user.
The module was implemented according to the similar MACSYMA module from [1].
The specification of the coordinate system
The coordinate system is specified using the following statement:
SCALEFACTORS <dim>,<x1>,...,<xdim>,<c1>,...,<cdim>;
 <dim> ::= 2 | 3            - coordinate system dimension
 <xi>  ::= "algebraic expression"
                            - the expression of the i-th
                              Cartesian coordinate in new
                              coordinates
 <ci>  ::= "identifier"     - the i-th new coordinate
All evaluated quantities are transformed into the coordinate system set by the last
SCALEFACTORS statement. By default, if this statement is not applied, the three-dimensional Cartesian coordinate system is employed. During the evaluation of
SCALEFACTORS statement the metric coefficients, i.e. scale factors SF(i), of a
defined coordinate system are computed and printed. If the WRCHRI switch is
ON, then the nonzero Christoffel symbols of the coordinate system are printed too.
By default the WRCHRI switch is OFF.
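As an illustration (the coordinate names R and PHI are the user's choice), a two-dimensional polar coordinate system, with x = r cos(phi) and y = r sin(phi), could be specified as follows:

```reduce
% Polar coordinates: dimension 2, Cartesian x and y expressed
% through the new coordinates R and PHI.
SCALEFACTORS 2,R*COS(PHI),R*SIN(PHI),R,PHI;
```

The scale factors of this system, which the statement computes and prints, should be SF(1)=1 and SF(2)=R.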



The declaration of tensor quantities
Tensor quantities are represented by identifiers. The VECTORS declaration declares the identifiers as vectors, the DYADS declaration declares the identifiers as
dyads, i.e. two-dimensional tensors, and the TENSOR declaration declares the
identifiers as tensor variables. The declarations have the following syntax:
 VECTORS <id>{,<id>};
 DYADS <id>{,<id>};
 TENSOR <id>{,<id>};
 <id> ::= "identifier"
The value of the identifier V declared as vector in the two-dimensional coordinate
system is (V(1), V(2)), where V(i) are the components of vector V. The value of
the identifier T declared as a dyad is ((T(1,1), T(1,2)), (T(2,1), T(2,2))). The value
of the tensor variable can be any tensor (see below). Tensor variables can be used
only for a single coordinate system; after the coordinate system is redefined by a
new SCALEFACTORS statement, the tensor variables have to be re-defined using
the assigning statement.
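For example (the identifier names are illustrative), one declaration of each kind might read:

```reduce
VECTORS U,V;   % U gets the value (U(1),...,U(dim)), likewise V
DYADS   T,W;   % T gets the value ((T(1,1),...),...,(...,T(dim,dim)))
TENSOR  Q;     % Q may later be assigned any tensor value
```

After these declarations the components U(i), T(i,j) can be used directly in expressions.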
New infix operators
For four different products between the tensor quantities, new infix operators have
been introduced (in the explaining examples, a two-dimensional coordinate system,
vectors U, V, and dyads T, W are considered):

 .  - scalar product          U.V = U(1)*V(1)+U(2)*V(2)
 ?  - vector product          U?V = U(1)*V(2)-U(2)*V(1)
 &  - outer product           U&V = ((U(1)*V(1),U(1)*V(2)),
                                     (U(2)*V(1),U(2)*V(2)))
 #  - double scalar product   T#W = T(1,1)*W(1,1)+T(1,2)*W(1,2)
                                   +T(2,1)*W(2,1)+T(2,2)*W(2,2)

The other usual arithmetic infix operators +, -, *, ** can be used in all situations
that make sense (e.g. vector addition, a multiplication of a tensor by a scalar, etc.).
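Assuming a two-dimensional coordinate system and the declarations shown (U and V illustrative), the products can be entered directly:

```reduce
VECTORS U,V;
U . V;    % scalar product, U(1)*V(1)+U(2)*V(2)
U ? V;    % vector product in 2-D, a scalar
U & V;    % outer product, a dyad
```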
New prefix operators
New prefix operators have been introduced to express tensor quantities in their components and the differential operators over the tensor quantities:
VECT - the explicit expression of a vector in its components
DYAD - the explicit expression of a dyad in its components

GRAD - differential operator of gradient
DIV - differential operator of divergence
LAPL - Laplace’s differential operator
CURL - differential operator of curl
DIRDF - differential operator of the derivative in direction (1st argument is the
directional vector)
The results of the differential operators are written using the DIFF operator.
DIFF(<scal>,<coordinate>) expresses the derivative of <scal> with respect to the
coordinate <coordinate>. This operator is not further simplified. If the user wants
it to be simplified like common derivatives, he performs the following declaration:


Then, however, one must realize that if the scalar or tensor quantities do not explicitly depend on the coordinates, their dependences have to be declared
using the DEPEND statements, otherwise the derivatives will be evaluated to zero.
The dependence of all vector or dyadic components (as dependence of the name of
vector or dyad) has to appear before VECTORS or DYADS declarations, otherwise
after these declarations one has to declare the dependencies of all components. For
formulating the explicit derivatives of tensor expressions, the differentiation operator DF can be used (e.g. the differentiation of a vector in its components).
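A minimal sketch of the point above (the scalar name F is illustrative): without a DEPEND declaration the derivatives of F vanish; with it the results are expressed through the DIFF operator.

```reduce
DEPEND F,R;    % F depends on the coordinate R
GRAD F;        % gradient, expressed through DIFF(F,R)
DIV GRAD F;    % should agree with LAPL F
```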
Tensor expressions
Tensor expressions are the input into the EXPRES module and can have a variety
of forms. The output is then the formulation of the given tensor expression in
the specified coordinate system. The most general form of a tensor expression
<tens> is described as follows (the conditions (d=i) represent the limitation on
the dimension of the coordinate system equalling i):
 <tens> ::= <scal> | <vect> | <dyad>
 <scal> ::= "algebraic expression, can contain <scal>" |
            "tensor variable with scalar value" |
            <vect> . <vect> | <dyad> # <dyad> |
            (d=2) <vect> ? <vect> | DIV <vect> |
            LAPL <scal> | (d=2) ROT <vect> |
            DIRDF(<vect>,<scal>) | DF(<scal>,"usual further arguments")
 <vect> ::= "identifier declared by VECTORS statement" |
            "tensor variable with vector value" |
            VECT(<scal>,...,<scal>) | - <vect> |
            <vect> + <vect> | <vect> - <vect> |
            <scal> * <vect> | <vect> / <scal> |
            <vect> . <dyad> | <dyad> . <vect> | (d=3)
            <vect> ? <vect> | (d=2) <scal> ? <vect> |
            (d=2) <vect> ? <scal> | GRAD <scal> |
            DIV <dyad> | LAPL <vect> | (d=3) ROT <vect> |
            DIRDF(<vect>,<vect>) | DF(<vect>,"usual
            further arguments")
 <dyad> ::= "identifier declared by DYADS statement" |
            "tensor variable with dyadic value" |
            DYAD((<scal>,...,<scal>),...,(<scal>,
            ...,<scal>)) | - <dyad> | <dyad> + <dyad> |
            <dyad> - <dyad> | <scal> * <dyad> | <dyad> / <scal>
            | <dyad> . <dyad> | <vect> & <vect> |
            (d=3) <dyad> ? <vect> | (d=3) <vect> ? <dyad> |
            GRAD <vect> | DF(<dyad>,"usual further arguments")

Assigning statement
The assigning statement for tensor variables has the usual syntax, namely:
 <tens var> := <tens>;
 <tens var> ::= "identifier declared TENSOR"
The assigning statement assigns the tensor variable the value of the given tensor
expression, formulated in the given coordinate system. After a change of the coordinate system, the tensor variables have to be redefined.
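For example, a tensor variable can hold an intermediate result for reuse (names illustrative; note that the dependence of V is declared before the VECTORS declaration, as required above):

```reduce
DEPEND V,R;        % all components of V will depend on R
VECTORS V;
TENSOR Q;
Q := GRAD (V . V); % Q gets a vector value in the current system
DIV Q;             % Q can now be reused in further expressions
```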
References
[1] M. C. Wirth, On the Automation of Computational Physics. PhD Thesis. Report UCRL-52996, Lawrence Livermore National Laboratory, Livermore, 1980.



A Module for Discretizing the Systems of Partial Differential Equations
This program module makes it possible to discretize the specified system of partial differential equations using the integro-interpolation method, minimizing the
number of the used interpolations in each independent variable. It can be used
for non-linear systems and vector or tensor variables as well. The user specifies
the way of discretizing individual terms of differential equations, controls the discretization and obtains various difference schemes according to his own wish.

Specification of the coordinates and the indices corresponding to them
The independent variables of differential equations will be called coordinates. The
names of the coordinates and the indices that will correspond to the particular coordinates in the difference scheme are defined using the COORDINATES statement:
 COORDINATES <coordinate>{,<coordinate>} [ INTO <index>{,<index>} ];
 <coordinate> ::= "identifier" - the name of the coordinate
 <index>      ::= "identifier" - the name of the index
This statement specifies that the <index> will correspond to the <coordinate>.
A new COORDINATES statement cancels the definitions given by the preceding
COORDINATES statement. If the part [ INTO ... ] is not included in the statement,
the statement assigns the coordinates the indices I, J, K, L, M, N, respectively. If
it is included, the number of coordinates and the number of indices should be the
same.
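For example, a problem in one space coordinate and time (all names illustrative) might declare:

```reduce
% X will carry the index I, T the index J in the difference scheme
COORDINATES X,T INTO I,J;
```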
2.2 Difference grids
In the discretization, orthogonal difference grids are employed. In addition to the
basic grid, called the integer one, there is another, the half-integer grid in each coordinate, whose cellular boundary points lie in the centers of the cells of the integer
grid. The designation of the cellular separating points and centers is determined by
the CENTERGRID switch: if it is ON and the index in the given coordinate is I,
the centers of the grid cells are designated by indices I, I + 1,..., and the boundary
points of the cells by indices I + 1/2,..., if, on the contrary, the switch is OFF, the
cellular centers are designated by indices I + 1/2,..., and the boundary points by
indices I, I + 1,... (see Fig. 2.1).
Figure 2.1 Types of grid
In the case of ON CENTERGRID, the indices i,i+1,i-1... thus designate the centers
of the cells of the integer grid and the boundary points of the cells of the half-integer
grid, and, similarly, in the case of OFF CENTERGRID, the boundaries of the cells
of the integer grid and the central points of the half-integer grid. The meaning
of the integer and half-integer grids depends on the CENTERGRID switch in the



described way. After the package is loaded, the CENTERGRID is ON. Obviously,
this switch is significant only for non-uniform grids with a variable size of each
cell. The grids can be uniform, i.e. with a constant cell size - the step of the grid.
The following statement:
 GRID UNIFORM <coordinate>{,<coordinate>};
defines uniform grids in all coordinates occurring in it. Those coordinates that do
not occur in the GRID UNIFORM statement are supposed to have non-uniform
grids. In the outputs, the grid step is designated by the identifier that is made by
putting the character H before the name of the coordinate. For a uniform grid,
this identifier (e.g. for the coordinate X the grid step HX) has the meaning of a
step of an integer or half-integer grids that are identical. For a non-uniform grid,
this identifier is an operator and has the meaning of a step of an integer grid, i.e.
the length of a cell whose center (in the case of ON CENTERGRID) or beginning
(in the case of OFF CENTERGRID) is designated by a single argument of this
operator. For each coordinate s designated by the identifier i, this step of the integer
non-uniform grid is defined as follows:
 Hs(i+j) = s(i+j+1/2) - s(i+j-1/2)    (ON CENTERGRID)
 Hs(i+j) = s(i+j+1) - s(i+j)          (OFF CENTERGRID)


for all integers j (s(k) designates the value of the coordinate s in the cellular boundary point subscripted with the index k). The steps of the half-integer non-uniform
grid are not applied in outputs.
Declaring the dependence of functions on coordinates
In the system of partial differential equations, two types of functions, in other
words dependent variables can occur: namely, the given functions, whose values
are known before the given system is solved, and the sought functions, whose values are not available until the system of equations is solved. The functions can be
scalar, vector, or tensor; for vector or tensor functions the EXPRES module has to
be applied at the same time. The names of the functions employed in the given
system and their dependence on the coordinates are specified using the DEPENDENCE statement:
 DEPENDENCE <dependence>{,<dependence>};
 <dependence> ::= <function>([<order>],<coordinate>{,<coordinate>})
 <function>   ::= "identifier" - the name of the function
 <order>      ::= 1 | 2  - tensor order of the function (the value of
                  the function is 1 - vector, 2 - dyad (two-
                  dimensional tensor))
Every <dependence> in the statement determines on which <coordinate>s the
<function> depends. If the tensor <order> of the function occurs in the <dependence>, the <function> is declared as a vector or a dyad. If, however, the
<function> has been declared by the VECTORS and DYADS statements of the
EXPRES module, the user need not present the tensor <order>. By default, a function without any declaration is regarded as scalar. In the discretization, all scalar
components of tensor functions are replaced by identifiers that arise by putting successively the function name and the individual indices of the given component (e.g.
the tensor component T(1,2), written in the EXPRES module as T(1,2), is represented by the identifier T12). Before the DEPENDENCE statement is executed,
the coordinates have to be defined using the COORDINATES statement. There
may be several DEPENDENCE statements. The DEPENDENCE statement cancels all preceding determinations of which grids are to be used for differentiating
the function or the equation for this function. These determinations can be either
defined by the ISGRID or GRIDEQ statements, or computed in the evaluation of
the IIM statement. The GIVEN statement:
GIVEN <function>{,<function>};
declares all functions included in it as given functions whose values are known to
the user or can be computed. The CLEARGIVEN statement:
 CLEARGIVEN;
cancels all preceding GIVEN declarations. If the TWOGRID switch is ON, the
given functions can be differentiated both on the integer and the half-integer grids.
If the TWOGRID switch is OFF, any given function can be differentiated only on
one grid. After the package is loaded, the TWOGRID is ON.
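A short sketch tying these statements together (the function names are illustrative): a sought function U and a given coefficient RO, both depending on X and T:

```reduce
COORDINATES X,T INTO I,J;
DEPENDENCE U(X,T),RO(X,T);
GIVEN RO;   % RO is known in advance; U is to be solved for
```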
Functions and difference grids
Every scalar function or scalar component of a vector or a dyadic function occurring in the discretized system can be discretized in any of the coordinates either
on the integer or half-integer grid. One of the tasks of the IIMET module is to
find the optimum distribution of each of these dependent variables of the system
on the integer and half-integer grids in all variables so that the number of the performed interpolations in the integro-interpolation method will be minimal. Using
the statement
SAME <function>{,<function>};



all functions given in one of these declarations will be discretized on the same
grids in all coordinates. At least one of the functions in each SAME statement
must be a sought one. If a given function occurs in
the SAME statement, it will be discretized only on one grid, regardless of the state
of the TWOGRID switch. If a vector or a dyadic function occurs in the SAME
statement, what has been said above relates to all its scalar components. Several
SAME statements can be presented. All SAME statements can be
canceled by the following statement:
The SAME statement can be successfully used, for example, when the given function depends on the function sought in a complicated manner that cannot be included either in the differential equation or in the difference scheme explicitly, and
when both the functions are desired to be discretized in the same points so that
the user will not be forced to execute the interpolation during the evaluation of the
given function. In some cases, it is convenient too to specify directly which variable is to be discretized on which grid, for which case the ISGRID statement is used:
 ISGRID <grid spec>{,<grid spec>};
 <grid spec>  ::= <function>([<component>,]<coordinate> .. <grid>
                  {,<coordinate> .. <grid>})
 <grid>       ::= ONE | HALF
                  designation of the integer
                  (ONE) and half-integer (HALF) grid
 <component>  ::= <number> | <number>,<number>
                  one <number> for the vector <function>,
                  <number>,<number> for the dyadic <function>,
                  it is not presented for the scalar <function>
 <number>     ::= * | "natural number from 1 to the space dimension"
                  the space dimension is specified in the EXPRES
                  module by the SCALEFACTORS statement, * means all
                  components
The statement defines that the given functions or their components will be discretized in the specified coordinates on the specified grids. For example, the statement
 ISGRID U (X..ONE,Y..HALF), V(1,Z..ONE), T(*,1,X..HALF);
defines that the scalar U will be discretized on the integer grid in the coordinate X
and on the half-integer one in the coordinate Y, the first component of the vector V
will be on the integer grid in the coordinate Z, and the first column of the tensor T
will be on the half-integer grid in the coordinate X. The ISGRID statement can be
applied several times. The functions used in this statement have to be declared
beforehand by the DEPENDENCE statement.

Equations and difference grids
Every equation of the system of partial differential equations is an equation for
some sought function (specified in the IIM statement). The correspondence between the sought functions and the equations is mutually unambiguous. The
GRIDEQ statement makes it possible to determine on which grid an individual
equation will be discretized in some or all coordinates:
 GRIDEQ <eq spec>{,<eq spec>};
 <eq spec> ::= <function>(<coordinate> .. <grid>{,<coordinate> .. <grid>})
Every equation can be discretized in any coordinate either on the integer or half-integer grid. This statement determines the discretization of the equations given by
the functions included in it in given coordinates, on given grids. The meaning of
the fact that an equation is discretized on a certain grid is as follows: index I used
in the DIFMATCH statements (discussed in the following section), specifying the
discretization of the basic terms, will be located in the center of the cell of this
grid, and indices I+1/2, I-1/2 from the DIFMATCH statement on the boundaries
of the cell of this grid. The actual name of the index in the given coordinate is
determined using the COORDINATES statement, and its location on the grid is set
by the CENTERGRID switch.
Discretization of basic terms
The discretization of a system of partial differential equations is executed successively in the individual coordinates. In the discretization of an equation in one coordinate, the equation is first linearized into its basic terms, which are then discretized
independently. If D is the designation for the discretization operator in the
coordinate x, this linearization obeys the following rules:
 D(a + b) = D(a) + D(b)
 D(p*a)   = p*D(a)        (p does not depend on the coordinate x)
The linearization lasts as long as some of these rules can be applied. The basic
terms that must be discretized after the linearization have then the forms of the
following quantities:
1. The actual coordinate in which the discretization is performed.
2. The sought function.
3. The given function.



4. The product of the quantities 1 - 7.
5. The quotient of the quantities 1 - 7.
6. The natural power of the quantities 1 - 7.
7. The derivative of the quantities 1 - 7 with respect to the actual coordinate.
The way of discretizing these basic terms, while the functions are on integer and
half-integer grids, is determined using the DIFMATCH statement:
 DIFMATCH <coordinate>,<pattern term>,{<grid specification>,
          <number of interpolations>,<discretized term>,};
 <coordinate> ::= ALL | "identifier" - the coordinate name from
          the COORDINATES statement
 <pattern term> ::= <pattern function> |
          <pattern term> * <pattern term> |
          <pattern term> / <pattern term> |
          <pattern term> ** <pattern exponent> |
          DIFF(<pattern term>,X[,<order>])
 <pattern coordinate> ::= X
 <pattern sought function> ::= U | V | W
 <pattern given function> ::= F | G
 <pattern exponent> ::= N | "integer greater than 1"
 <order> ::= "integer greater than 2"
 <grid specification> ::= <pattern function> = <grid>
          {,<pattern function> = <grid>}
 <grid> ::= ONE | HALF
 <number of interpolations> ::= "non-negative integer"
 <discretized term> ::= <function>(<index expression>) |
          "natural number" | DI|DIM1|DIP1|DIM2|DIP2 |
          <difference scheme constant> |
          <discretized term> - <discretized term> |
          <discretized term> + <discretized term> |
          <discretized term> * <discretized term> |
          <discretized term> / <discretized term> |
          (<discretized term>) |
          <prefix operator>(<discretized term>)
 <function> ::= X | U | V | W | F | G
 <index expression> ::= <pattern index> |
          <pattern index> + <rational number> |
          <pattern index> - <rational number>
 <pattern index> ::= I
 <rational number> ::= "rational number"
 <difference scheme constant> ::= "identifier" - the constant parameter of
          the difference scheme.
 <prefix operator> ::= "identifier" - prefix operator, that can
          appear in discretized equations (e.g. SIN).
The first parameter of the DIFMATCH statement determines the coordinate for
which the discretization defined in it is valid. If ALL is used, the discretization
will be valid for all coordinates, and this discretization is accepted when it has
been checked whether there has been no other discretization defined for the given
coordinate and the given pattern term. Each pattern sought function, occurring in
the pattern term, must be included in the specification of the grids. The pattern
given functions from the pattern term can occur in the grid specification, but in
some cases (see below) need not. In the grid specification the maximum number
of 3 pattern functions may occur. The discretization of each pattern term has to
be specified in all combinations of the pattern functions occurring in the grid specification, on the integer and half-integer grids, that is 2**n variants for the grid
specification with n pattern functions (n=0,1,2,3). The discretized term is the discretization of the pattern term in the pattern coordinate X in the point X(I) on the
pattern grid (see Fig. 2.2), and the pattern functions occurring in the grid specification are in the discretized term on the respective grids from this specification (to
the discretized term corresponds the grid specification preceding it).
Figure 2.2 Pattern grid (integer and half-integer grid)
The pattern grid steps, defined as
 DIM2 = X(I - 1/2) - X(I - 3/2)
 DIM1 = X(I) - X(I - 1)
 DI   = X(I + 1/2) - X(I - 1/2)
 DIP1 = X(I + 1) - X(I)
 DIP2 = X(I + 3/2) - X(I + 1/2)
can occur in the discretized term. In the integro-interpolation method, the discretized term is specified by an integral, written by means of the operator of definite
integration DINT(from, to, function, variable).
The number of interpolations determines how many interpolations were needed for
calculating this integral in the given discrete form (the function on the integer or
half-integer grid). If the integro-interpolation method is not used, the user chooses
a smaller number for the more convenient distribution of the functions on the
half-integer and integer grids. The parameters of the difference scheme
defined by the DIFCONST statement can occur in the discretized expression too
(for example, the implicit-explicit scheme on the implicit layer multiplied by the
constant C and on the explicit one by (1-C)). As a matter of fact, all DIFMATCH
statements create a base of pattern terms with the rules of how to discretize these
terms in individual coordinates under the assumption that the functions occurring
in the pattern terms are on the grids determined in the grid specification (all combinations must be included). The DIFMATCH statement does not check whether the
discretized term is actually the discretization of the pattern term or whether in the
discretized term occur the functions from the grid specification on the grids given
by this specification. An example can be the following definition of the discretization of the first and second derivatives of the sought function in the coordinate R
on a uniform grid:
DIFMATCH R,DIFF(U,X),U=ONE,2,(U(I+1)-U(I-1))/(2*DI);
DIFMATCH R,DIFF(U,X,2),U=ONE,0,(U(I+1)-2*U(I)+U(I-1))/DI**2,
All DIFMATCH statements can be cleared by the statement
After this statement the user has to supply his own DIFMATCH statements. But now
back to the discretization of the basic terms obtained by the linearization of the partial differential equation, as mentioned at the beginning of this section. Using the
method of pattern matching, for each basic term a term representing its pattern is
found in the base of pattern terms (specified by the DIFMATCH statements). The
pattern matching obeys the following rules:
1. The pattern for the coordinate in which the discretization is executed is the
pattern coordinate X.
2. The pattern for the sought function is some pattern sought function, and this

correspondence is mutually unambiguous.
3. The pattern for the given function is some pattern given function, or, in case
the EQFU switch is ON, some pattern sought function, and, again, the correspondence of the pattern with the given function is mutually unambiguous
(after loading the EQFU switch is ON).
4. The pattern for the products of quantities is the product of the patterns of
these quantities, irrespective of their sequence.
5. The pattern for the quotient of quantities is the quotient of the patterns of
these quantities.
6. The pattern for the natural power of a quantity is the same power of the
pattern of this quantity or the power of this quantity with the pattern exponent N.
7. The pattern for the derivative of a quantity with respect to the coordinate in
which the discretization is executed is the derivative of the pattern of this
quantity with respect to the pattern coordinate X of the same order of differentiation.
8. The pattern for the sum of the quantities that have the same pattern with the
identical correspondence of functions and pattern functions is this common
pattern (so that it will not be necessary to multiply the parentheses during
discretizing the products in the second and further coordinates).
When matching the pattern of one basic term, the program finds the pattern term
and the functions corresponding to the pattern functions, maybe also the exponent
corresponding to the pattern exponent N. After determining on which grids the individual functions and the individual equations will be discretized, which will be
discussed in the next section, the program finds in the pattern term base the discretized term either with pattern functions on the same grids as are the functions
from the basic term corresponding to them in case that the given equation is differentiated on the integer grid, or with pattern functions on inverse grids (an inverse
integer grid is a half-integer grid, and vice versa) compared with those used for
the functions from the basic term corresponding to them in case the given equation
is differentiated on the half-integer grid (the discretized term in the DIFMATCH
statement is expressed in the point X(I), i.e. on the integer grid, and holds for the
discretizing of the equation on the integer grid; with regard to the substitutions for
the pattern index I mentioned later, it is possible to proceed in this way and not necessary to define the discretization in the points X(I+1/2) too, i.e. on the half-integer
grid). The program replaces in the thus obtained discretized term:
1. The pattern coordinate X with the particular coordinate s in which the discretization is actually performed.



2. The pattern index I and the grid steps DIM2, DIM1, DI, DIP1, DIP2 with
the expression given in table 2.1 according to the state of the CENTERGRID
switch and to the fact whether the given equation is discretized on the integer
or half-integer grid (i is the index corresponding to the coordinate s according
to the COORDINATES statement, the grid steps were defined in section 2.2)
3. The pattern functions with the corresponding functions from the basic term
and, possibly, the pattern exponent with the corresponding exponent from
the basic term.
Table 2.1  Values of the pattern index I and of the pattern grid steps
DIM2, DIM1, DI, DIP1, DIP2, expressed through the non-uniform grid
steps Hs(i) (e.g. (Hs(i-1)+Hs(i))/2), for an equation discretized on
the integer and on the half-integer grid.

More details will be given now to the discretization of the given functions and its
specification. The given function may occur in the SAME statement, which makes
it bound with some sought function, in other words it can be discretized only on one
grid. This means that all basic terms, in which this function occurs, must have their
pattern terms in whose discretization definitions by the DIFMATCH statement the
pattern function corresponding to the mentioned given function has to occur in the
grid specification. If the given function does not occur in the SAME statement and
the TWOGRID switch is OFF, i.e. it can be discretized only on one grid again,
the same holds true. If, however, the given function does not occur in the SAME
statement and the TWOGRID switch is ON, i.e. it can be discretized simultaneously on the integer and the half-integer grids, then the basic terms of the equations
including this function have their pattern terms in whose discretization definitions
the pattern function corresponding to the mentioned given function need not occur
in the grid specification. If, however, in spite of all, this pattern function in the discretization definition does occur in the grid specification, it is the alternative with
a smaller number of interpolations occurring in the DIFMATCH statement that

is selected for each particular basic term with a corresponding pattern (the given
function can be on the integer or half-integer grid). Before the discretization is executed, it is necessary to define using the DIFMATCH statements the discretization
of all pattern terms that are the patterns of all basic terms of all equations appearing
in the discretized system in all coordinates. The fact that the pattern terms of the
basic terms of partial equations occur repeatedly in individual systems has made
it possible to create a library of the discretizations of the basic types of pattern
terms using the integro-interpolation method. This library is a component part of
the IIMET module (in its end) and makes work easier for those users who find
the pattern matching mechanism described here too difficult. New DIFMATCH
statements have to be created by those whose equations will contain a basic term
having no pattern in this library, or those who need another method to perform
the discretization. The described implemented algorithm of discretizing the basic
terms is sufficiently general to enable the use of a nearly arbitrary discretization on
orthogonal grids.
Discretization of a system of equations
All statements influencing the run of the discretization that one wants to use in this
run have to be executed before the discretization is initiated. The COORDINATES, DEPENDENCE, and DIFMATCH statements have to occur in all applications. Further, if necessary, the GRID UNIFORM, GIVEN, ISGRID, GRIDEQ,
SAME, and DIFCONST statements can be used, or some of the CENTERGRID,
TWOGRID, EQFU, and FULLEQ switches can be set. Only then can the discretization
of a system of partial differential equations be started, using the IIM statement:
 IIM <array>,<function>,<equation>{,<function>,<equation>};
 <array>      ::= "identifier" - the name of the array for storing
                  the result
 <function>   ::= "identifier" - the name of the function
                  whose behavior is described by the <equation>
 <equation>   ::= <left side> = <right side>
 <left side>  ::= "algebraic expression" , the derivatives are
                  designated by the DIFF operator
 <right side> ::= "algebraic expression"
Hence, in the IIM statement the name of the array in which the resulting difference
schemes will be stored, and the pair sought function - equation, which describes
this function, are specified. The meaning of the relation between the sought function and its equation during the discretization lies in the fact that the sought function
is preferred in its equation so that the interpolation is not, if possible, used in discretizing the terms of this equation that contain it. In the equations, the functions



and the coordinates appear as identifiers. The identifiers that have not been declared as functions by the DEPENDENCE statement or as coordinates by the COORDINATES statement are considered constants independent of the coordinates.
The partial derivatives are expressed by the DIFF operator that has the same syntax
as the standard differentiation operator DF. The functions and the equations can
also have the vector or tensor character. If these non-scalar quantities are applied,
the EXPRES module has to be used together with the IIMET module, and also
non-scalar differential operators such as GRAD, DIV, etc. can be employed. The
sequence performed by the program in the discretization can be briefly summed up
in the following items:
1. If there are non-scalar functions or equations in a system of equations, they
are automatically converted into scalar quantities by means of the EXPRES
module.
2. In each equation, the terms containing derivatives are transferred to the left
side, and the other terms to the right side of the equation.
3. For each coordinate, with respect to the sequence in which they occur in the
COORDINATES statement, the following is executed:
a) It is determined on which grids all functions and all equations will be discretized in the current coordinate, while the constraints resulting from the ISGRID, GRIDEQ, and SAME statements (if they were used) are respected.
Among all possible variants, that distribution of functions and equations on the
grids is selected which minimizes the total number of
interpolations of the basic terms (specified by the DIFMATCH statement) of
all equations if the FULLEQ switch is ON, or of all left sides of the equations if the FULLEQ switch is OFF (the FULLEQ switch is given a default setting when the package is loaded).
b) The discretization itself is executed, as specified by the DIFMATCH statements.
4. Suppose the array name is A. If there is only one scalar equation in the IIM
statement, the discretized left side of this equation is stored in A(0) and the
discretized right side in A(1) (after the transfer mentioned in item 2). If there
is more than one scalar equation in the IIM statement, the discretization of
the left side of the i-th scalar equation is stored in A(i,0) and the discretization of its right side in A(i,1).
The IIM statement can be used several times during one program run, and between its
calls the discretization process can be altered using the other statements of this module.

Error messages
The IIMET module reports errors made by the user. As in the REDUCE system, error reports are marked with five stars
("*****") at the start of the line. Some error messages are identical with those of the
REDUCE system. Listed here are other error messages that require a more
detailed explanation:
***** Matching of X term not found
- the discretization of the pattern term that is the pattern of
the basic term printed in place of X has not been
defined (using the DIFMATCH statement)
***** Functions of type F not defined on grids in DIFMATCH
- in the definition of the discretizing of the pattern term
the given functions were not used in the grid
specification and are needed now
***** X Free vars not yet implemented
- in the grid specification in the DIFMATCH statement
more than 3 pattern functions were used
***** All grids not given for term X
- in the definition of the discretization of the pattern of
the basic term printed in place of X, not all
necessary combinations of the grid specifications
of the pattern functions were given



16.23.4 APPROX
A Module for Determining the Precision Order of the Difference Scheme
This module makes it possible to determine the differential equation that is solved
by the given difference scheme, and to determine the order of accuracy of the
solution of this scheme in the grid steps in the individual coordinates. The discrete
function values are expanded into Taylor series at the specified point.
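The principle can be previewed numerically (a Python sketch, purely illustrative; the module itself works symbolically): for a difference quotient whose Taylor expansion leaves an O(h^p) remainder, halving the step reduces the error by a factor of about 2^p. The central difference quotient, for example, is of second order:

```python
import math

def central_diff(f, x, h):
    # (f(x+h) - f(x-h)) / (2h): the Taylor expansions of f(x+h) and f(x-h)
    # about x cancel the even-order terms, leaving an O(h^2) error
    return (f(x + h) - f(x - h)) / (2.0 * h)

def observed_order(f, df, x, h):
    # for a scheme of order p, error(h)/error(h/2) is close to 2**p
    e1 = abs(central_diff(f, x, h) - df(x))
    e2 = abs(central_diff(f, x, h / 2) - df(x))
    return math.log(e1 / e2, 2)

# the observed order for f = sin at x = 1 is close to 2
print(observed_order(math.sin, math.cos, 1.0, 1e-2))
```

The symbolic machinery described below determines the same order exactly, without sampling.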
Specification of the coordinates and the indices corresponding to them
The COORDINATES statement, described in the IIMET module documentation, which specifies the coordinates and the indices corresponding to them, is used in this
program module as well, with the same meaning and syntax. The present module
version assumes a uniform grid in all coordinates. The grid step in the input difference schemes has to be designated by an identifier consisting of the character H
followed by the name of the coordinate; e.g. the step of the coordinate X is HX.



Specification of the Taylor expansion
In the determining of the approximation order, all discrete values of the functions
are expanded into the Taylor series in all coordinates. In order to determine the
Taylor expansion, the program needs to know the point in which it performs this
expansion, and the number of terms in the Taylor series in individual coordinates.
The center of the Taylor expansion is specified by the CENTER statement and the
number of terms in the Taylor series in individual coordinates by the MAXORDER
statement:

CENTER <center spec>{,<center spec>};
<center spec> ::= <coordinate> = <increment>
<increment> ::= "rational number"

MAXORDER <order spec>{,<order spec>};
<order spec> ::= <coordinate> = <order>
<order> ::= "natural number"

The increment in the CENTER statement determines that the center of the Taylor expansion in the given coordinate will be at the point specified by the index I + <increment>, where I is the index corresponding to this coordinate, defined using the COORDINATES statement. For example,

COORDINATES T,X INTO N,J;
CENTER T = 1/2, X = 1;
MAXORDER T = 2, X = 3;

specifies that the center of the Taylor expansion will be at the point (t(n+1/2),x(j+1)) and that the expansion will be performed up to the second derivatives with respect to t (second powers of ht) and up to the third derivatives with respect to x (third powers of hx). The CENTER and MAXORDER statements can be placed only after the COORDINATES statement. If the center of the Taylor expansion is not defined in some coordinate, it is assumed to be at the point given by the index of this coordinate (i.e. zero increment). If the number of terms of the Taylor expansion is not defined in some coordinate, the expansion is performed up to the third derivatives with respect to this coordinate.

Function declaration

All functions whose discrete values are to be expanded into the Taylor series must be declared using the FUNCTIONS statement:

FUNCTIONS <function>{,<function>};
<function> ::= "identifier"

In the specification of the difference scheme, the functions are used as operators with one or more arguments designating the discrete values of the functions. Each argument is the sum of the coordinate index (from the COORDINATES statement) and a rational number. If some index is omitted in the arguments of a function, this functional value is assumed to lie at the point in which the Taylor expansion is performed, as specified by the CENTER statement. In other words, if the COORDINATES and CENTER statements shown in the example above are valid, then it holds that U(N+1) = U(N+1,J+1) and U(J-1) = U(N+1/2,J-1).
The FUNCTIONS statement can declare both the sought and the known functions for the expansion.

Order of accuracy determination

The order of accuracy of the difference scheme is determined by the APPROX statement:

APPROX (<scheme>);
<scheme> ::= <left side> = <right side>
<left side> ::= "algebraic expression"
<right side> ::= "algebraic expression"

The difference scheme contains the functions in the form described in the preceding section, the coordinate indices and the grid steps described in section 3.1, and the other symbolic parameters of the difference scheme. The APPROX statement expands all discrete values of the functions declared in the FUNCTIONS statement into Taylor series in all coordinates (the point at which the Taylor expansion is performed is specified by the CENTER statement, and the number of expansion terms by the MAXORDER statement) and substitutes the expansions into the difference scheme, which gives a modified differential equation. The modified differential equation, which also contains the grid steps, is the equation that is really solved by the difference scheme (up to the given orders in the grid steps). The partial differential equation whose solution is approximated by the difference scheme is determined by replacing the grid steps by zeros, and is displayed after the following message:

"Difference scheme approximates differential equation"

Then the message

"with orders of approximation:"

is displayed, followed by the lowest powers (other than zero) of the grid steps in all coordinates occurring in the modified differential equation. If the PRAPPROX switch is ON, the rest of the modified differential equation is printed as well. If this rest is added to the left-hand side of the approximated differential equation, one obtains the modified equation. By default the PRAPPROX switch is OFF. If the grid steps are found in some denominator in the modified equation, i.e.
with a negative exponent, the following message is written, preceding the approximated differential equation:

"Reformulate difference scheme, grid steps remain in denominator"

and the approximated differential equation is not correctly determined (one of its sides is zero). Generally, this message means that there is a term in the difference scheme that is not a difference replacement of a derivative, i.e. a ratio of differences of the discrete function values and the discrete values of the coordinates (the steps of the difference grid). The user should realize, however, that in some cases such a term occurs purposefully in the difference scheme (e.g. on the grid boundary, to keep the scheme conservative).

16.23.5 CHARPOL

A Module for Calculating the Amplification Matrix and the Characteristic Polynomial of the Difference Scheme

This program module is used for the first step of the stability analysis of a difference scheme by the Fourier method. It substitutes the Fourier components into the difference scheme, calculates the amplification matrix of the scheme for the transition from one time layer to another, and computes the characteristic polynomial of this matrix.

Commands common with the IIMET module

The COORDINATES and GRID UNIFORM statements, described in the IIMET module documentation, are used in this module as well, with the same meaning and syntax. The time coordinate is assumed to be designated by the identifier T. The present module version requires all coordinates to have uniform grids, i.e. to be declared in the GRID UNIFORM statement. The grid step in the input difference schemes has to be designated by the identifier consisting of the character H followed by the name of the coordinate; e.g. the step of the time coordinate T is HT.
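For a single scalar scheme the amplification matrix reduces to a scalar amplification factor, and the Fourier analysis this module performs symbolically can be previewed numerically. The following Python sketch (illustrative only; it is not part of the package) substitutes the Fourier mode u(n,j) = g^n * exp(i*j*ax) into the explicit scheme for the heat equation u_t = u_xx, giving g = 1 - 4*r*sin(ax/2)^2 with r = ht/hx^2, which satisfies |g| <= 1 for all modes exactly when r <= 1/2:

```python
import cmath, math

def amp_factor(r, ax):
    # FTCS scheme for u_t = u_xx:
    #   u(n+1,j) = u(n,j) + r*(u(n,j+1) - 2*u(n,j) + u(n,j-1)), r = ht/hx**2
    # substituting u(n,j) = g**n * exp(1j*j*ax) and dividing by the common
    # factor leaves the scalar amplification factor g (here ax = kx*hx)
    return 1 + r * (cmath.exp(1j * ax) - 2 + cmath.exp(-1j * ax))

def stable(r, modes=200):
    # von Neumann condition: |g(ax)| <= 1 for every sampled wave number
    return all(abs(amp_factor(r, 2 * math.pi * m / modes)) <= 1 + 1e-12
               for m in range(modes))

print(stable(0.4), stable(0.6))
```

The module's statements below derive such amplification factors, and their matrix generalization, symbolically.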
Function declaration

The UNFUNC statement declares the names of the sought functions used in the difference scheme:

UNFUNC <function>{,<function>};
<function> ::= "identifier" - the name of the sought function

The functions are used in the difference schemes as operators with one or more arguments designating the discrete function values. Each argument is the sum of an index (from the COORDINATES statement) and a rational number. If some index is omitted in the function arguments, this function value is assumed to lie at the point specified only by this index; with the indices N and J and the function U, this means that U(N+1) = U(N+1,J) and U(J-1) = U(N,J-1). As only two-step (in time) difference schemes may be used, the time index may occur in the arguments either entirely alone, or in a sum with the number one.

Amplification matrix

The AMPMAT matrix operator computes the amplification matrix of a two-step difference scheme. Its argument is a one-column matrix of dimension (1,k), where k is the number of equations of the difference scheme, containing the difference equations of this scheme as algebraic expressions equal to the difference of the right and left sides of the difference equations. The value of the AMPMAT matrix operator is the square amplification matrix of dimension (k,k). During the computation of the amplification matrix, two new identifiers are created for each spatial coordinate. The identifier made up of the character K and the name of the coordinate represents the wave number in this coordinate, and the identifier made up of the character A and the name of the coordinate represents the product of this wave number and the grid step in this coordinate, divided by the least common multiple of all denominators occurring in the scheme in the function arguments containing the index of this coordinate. On the output an equation is displayed defining the latter identifier.
For example, if in the case of the function U and index J in the coordinate X the expression U(J+1/2) has been used in the scheme (and, simultaneously, no denominator greater than 2 has occurred in the arguments containing J), the following equation is displayed: AX := (KX*HX)/2. The definition of these quantities As makes it possible to express every sum occurring in the arguments of the exponentials as a sum of these quantities multiplied by integers, so that after a transformation the amplification matrix will contain only sin(As) and cos(As) (for all spatial coordinates s). The AMPMAT operator performs these transformations automatically. If the PRFOURMAT switch is ON (after loading it is ON), the matrices H0 and H1 (the amplification matrix is equal to -H1**(-1)*H0) are displayed during the evaluation of the AMPMAT operator. These matrices can be used for finding a suitable substitution for the trigonometric functions in the next run, for a greater simplification.

The TCON matrix operator transforms a square matrix into its Hermitian conjugate, i.e. the transposed and complex-conjugated matrix. Its argument is a square matrix and its value is the Hermitian conjugate of the argument. The Hermitian conjugate matrix is used for testing the normality and unitarity of the amplification matrix when determining the sufficient stability condition.

Characteristic polynomial

The CHARPOL operator calculates the characteristic polynomial of a given square matrix. The variable of the characteristic polynomial is designated by the identifier LAM. The operator has one argument, the square matrix, and its value is the characteristic polynomial of this matrix in LAM.

Automatic denotation

Several statements and procedures are designed for the automatic denotation of some parts of algebraic expressions by identifiers. This denotation is especially useful when we obtain very large expressions that cannot fit into the available memory.
We can denote subparts of an expression from the previous step of the calculation by identifiers, replace these subparts by the identifiers, and continue the analytic calculation with the identifiers only. Every time we use this technique, we have to explicitly preserve in the processed expressions those algebraic quantities that will be necessary in the following steps of the calculation. The process of denotation and replacement is performed automatically, and the algebraic values denoted by the new identifiers can be written out at any time. We now describe how this automatic denotation can be used.

The statement DENOTID defines the beginning letters of the newly created identifiers. Its syntax is

DENOTID <id>;
<id> ::= "identifier"

After this statement, the new identifiers created by the operators DENOTEPOL and DENOTEMAT will begin with the letters of the identifier used in this statement. Without any DENOTID statement, all new identifiers begin with the single letter A. We suggest using this statement each time before using the operators DENOTEPOL or DENOTEMAT with some new identifier, and choosing the identifiers in such a way that the newly created identifiers do not coincide with any identifiers used in the expressions you are working with.

The operator DENOTEPOL has one argument, a polynomial in LAM, and denotes the real and imaginary parts of its coefficients by new identifiers. The real part of the coefficient of the j-th power of LAM is denoted by the identifier <id>R0j and the imaginary part by <id>I0j, where <id> is the identifier used in the last DENOTID statement. The denotation is done only for non-numeric coefficients. The value of this operator is the polynomial in LAM with coefficients constructed from the new identifiers. The algebraic expressions denoted by these identifiers are stored as the LISP data structure standard quotient in the LISP variable DENOTATION!* (an assoc list).
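The bookkeeping behind this denotation can be mimicked in a few lines. The following Python sketch is purely illustrative (the class and method names are ours, and the real operators work on REDUCE standard quotients): subexpressions are replaced by freshly generated identifiers, and an association list, analogous to DENOTATION!*, records the definitions in entry order.

```python
# illustrative sketch of automatic denotation (hypothetical names, not the
# package's internals): replace subexpressions by fresh identifiers and keep
# an association list of definitions, analogous to DENOTATION!*

class Denoter:
    def __init__(self, prefix="A"):        # DENOTID analogue: identifier prefix
        self.prefix = prefix
        self.denotation = []               # assoc list of (identifier, expression)
        self.count = 0

    def denote(self, expr):
        self.count += 1
        name = "%sR0%d" % (self.prefix, self.count)
        self.denotation.append((name, expr))
        return name

    def prdenot(self):                     # PRDENOT analogue: definitions in entry order
        return ["%s := %s" % (n, e) for n, e in self.denotation]

d = Denoter("B")
poly = "%s*lam + %s" % (d.denote("kx*hx/2"), d.denote("sin(ax)**2"))
print(poly)
print(d.prdenot())
```

Because the definitions are kept in entry order, later denotations may refer to earlier ones, which is exactly what makes the printed definitions usable as a numerical program.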
The operator DENOTEMAT has one argument, a matrix, and denotes the real and imaginary parts of its elements. The real part of the (j,k) matrix element is denoted by the identifier <id>Rjk and the imaginary part by <id>Ijk. The returned value of the operator is the original matrix with the non-numeric elements replaced by <id>Rjk + I*<id>Ijk. Other matters are the same as for the DENOTEPOL operator.

The statement PRDENOT has the syntax

PRDENOT;

and writes, from the variable DENOTATION!*, the definitions of all new identifiers introduced by the DENOTEPOL and DENOTEMAT operators since the last call of the CLEARDENOT statement (or since the program start), in the format defined by the present setting of the output control declarations and switches. The definitions are written in the same order in which they were entered, so that the definitions from the first DENOTEPOL or DENOTEMAT operators are written first. This order guarantees that the statement can be used directly to generate a semantically correct numerical program (the identifiers from the first denotation can appear in the second one, etc.).

The statement CLEARDENOT, with the syntax

CLEARDENOT;

clears the variable DENOTATION!*, so that all denotations saved earlier by the DENOTEPOL and DENOTEMAT operators in this variable are lost. A PRDENOT statement following this statement writes nothing.

16.23.6 HURWP

A Module for Locating Polynomial Roots

This module is used for verifying the stability of a polynomial, i.e. for verifying whether all roots of a polynomial lie in the unit circle centered at the origin. By investigating the characteristic polynomial of a difference scheme, the user can determine the conditions for the stability of this scheme.

Conformal mapping

The HURW operator transforms a polynomial using the conformal mapping LAM=(z+1)/(z-1). Its argument is a polynomial in LAM and its value is the transformed polynomial in LAM (LAM=z).
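The transformation can be sketched on coefficient lists. For P of degree n, the mapping effectively computes Q(z) = (z-1)^n * P((z+1)/(z-1)), again a polynomial of degree at most n; each coefficient a_k of P contributes a_k*(z+1)^k*(z-1)^(n-k). A Python illustration (helper names are ours, not the package's):

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_pow(p, k):
    r = [1]
    for _ in range(k):
        r = poly_mul(r, p)
    return r

def hurw(p):
    # Q(z) = (z-1)^n * P((z+1)/(z-1)) with n = deg P: coefficient a_k of P
    # contributes a_k * (z+1)^k * (z-1)^(n-k)
    n = len(p) - 1
    q = [0] * (n + 1)
    for k, a in enumerate(p):
        term = poly_mul(poly_pow([1, 1], k), poly_pow([-1, 1], n - k))
        for i, c in enumerate(term):
            q[i] += a * c
    return q

# P(lam) = lam has its only root (0) inside the unit circle; the transform
# z + 1 has its only root (-1) in the left half-plane
print(hurw([0, 1]))
```

This makes the root correspondence stated next concrete: roots of P inside the unit circle map to roots of the transform with negative real part.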
If P is a polynomial in LAM, then the following holds: all roots LAM1i of the polynomial P are smaller than one in absolute value, i.e. |LAM1i| < 1, iff the real parts of all roots LAM2i of the polynomial HURW(P) are negative, i.e. Re(LAM2i) < 0.

The elimination of the unit polynomial roots (LAM=1), which has to take place before the conformal transformation is performed, is done by the TROOT1 operator. The argument of this operator is a polynomial in LAM, and its value is a polynomial in LAM that no longer has a root equal to one. Usually the investigated polynomial has some additional parameters. For some special values of those parameters, the polynomial may have a unit root. During the evaluation of the TROOT1 operator, the condition concerning the polynomial parameters is displayed, and if it is fulfilled, the resulting polynomial has a unit root.

Investigation of polynomial roots

The HURWITZP operator checks whether a polynomial is a Hurwitz polynomial, i.e. whether all its roots have negative real parts. The argument of the HURWITZP operator is a polynomial in LAM with real or complex coefficients, and its value is YES if the argument is a Hurwitz polynomial. It is NO if the argument is not a Hurwitz polynomial, and COND if it is a Hurwitz polynomial only when the conditions displayed by the HURWITZP operator during its analysis are fulfilled. These conditions have the form of inequalities and contain algebraic expressions made up of the polynomial coefficients. Either the conditions have to be valid simultaneously, or they are labelled and a proposition is created from them with the AND and OR logical operators that has to be fulfilled for the polynomial to be a Hurwitz one (it is the condition on the parameters occurring in the polynomial coefficients). This proposition is the sufficient condition; the necessary condition is the fulfillment of all the displayed inequalities.
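For a polynomial with explicit real coefficients, the Hurwitz property can also be checked numerically via the Routh array. The following Python sketch is illustrative only (our own helper, with no handling of zero pivots; HURWITZP itself works symbolically and also covers complex and parametric coefficients):

```python
def is_hurwitz(coeffs):
    # Routh array for real coefficients, highest degree first, leading
    # coefficient positive; assumes no zero pivot appears in the first column
    rows = [coeffs[0::2], coeffs[1::2]]
    while rows[-1] and any(rows[-1]):
        a, b = rows[-2], rows[-1] + [0]
        # standard Routh recurrence for the next row
        new = [(b[0] * a[i + 1] - a[0] * b[i + 1]) / b[0]
               for i in range(len(a) - 1)]
        if not new:
            break
        rows.append(new)
    # all first-column entries positive <=> all roots in the left half-plane
    return all(r[0] > 0 for r in rows if r)

# lam^2 + 3*lam + 2 has roots -1, -2 (Hurwitz); lam^2 - lam + 2 does not
print(is_hurwitz([1, 3, 2]), is_hurwitz([1, -1, 2]))
```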
If the HURWITZP operator is called interactively, the user is asked directly whether the inequalities are valid or not. The user responds "Y" if the displayed inequality is valid, "N" if it is not, and "?" if it is not known whether the inequality is true or not.

16.23.7 LINBAND

A Module for Generating a Numerical Program for Solving a System of Linear Algebraic Equations with a Band Matrix

The LINBAND module generates a numerical program in the FORTRAN language, which solves a system of linear algebraic equations with a band matrix using a routine from the LINPACK, NAG, IMSL, or ESSL program library. Only the system of equations is given to the program as input data. The statements of the FORTRAN language are generated automatically that fill the band matrix of the system in the memory mode corresponding to the chosen library, call the solving routine, and assign the chosen variables to the solution of the system. The module can be used for solving linear difference schemes, which often have a band matrix.

Program generation

The program in the FORTRAN language is generated by the GENLINBANDSOL statement (the braces in this syntax definition occur directly in the program and do not have the usual meaning of the possibility of repetition; they designate REDUCE lists):

GENLINBANDSOL (<lower diag>,<upper diag>,{<system>});
<lower diag> ::= "natural number"
<upper diag> ::= "natural number"
<system> ::= <part> | <part>,<system>
<part> ::= {<variable>,<equation>} | <loop>
<variable> ::= "kernel"
<equation> ::= <left side> = <right side>
<left side> ::= "algebraic expression"
<right side> ::= "algebraic expression"
<loop> ::= {DO,{<parameter>,<from>,<to>,<step>},<loop system>}
<parameter> ::= "identifier"
<from> ::= <natural expression>
<to> ::= <natural expression>
<step> ::= <natural expression>
<natural expression> ::= "algebraic expression" with natural value
                         (evaluated in FORTRAN)
<loop system> ::= <loop part> | <loop part>,<loop system>
<loop part> ::= {<variable>,<equation>}

The first and second arguments of the GENLINBANDSOL statement specify the numbers of lower (below the main diagonal) and upper diagonals of the band matrix of the system. The system of linear algebraic equations is specified by means of lists, expressed by braces in the REDUCE system.
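The generated FORTRAN code ultimately performs a banded elimination; for the tridiagonal special case (one lower and one upper diagonal) the essence is the Thomas algorithm, sketched here in Python for illustration only (the generated programs instead call routines from the numerical libraries):

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    # Thomas algorithm (no pivoting): forward elimination, back substitution.
    # lower[i] multiplies x[i-1] in row i, upper[i] multiplies x[i+1];
    # lower[0] and upper[-1] are unused.
    n = len(diag)
    d, r = list(diag), list(rhs)
    for i in range(1, n):
        m = lower[i] / d[i - 1]
        d[i] -= m * upper[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
    return x

# -x[i-1] + 2*x[i] - x[i+1] = rhs[i]: a classic tridiagonal band system
print(solve_tridiagonal([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))
```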
The variables of the equation system can be identifiers, but most probably they are operators with one or more arguments, analogous to arrays in FORTRAN. The left side of each equation has to be a linear combination of the system variables; the right side, on the contrary, must not contain any variables of the system. The sequence of the band matrix rows is given by the sequence of the equations, and the sequence of the columns by the sequence of the variables in the list describing the equation system. The meaning of a loop in the system list is similar to that of the DO loop of the FORTRAN language. The individual variables and equations described by the loop are obtained as follows:

1. <parameter> = <from>.
2. The value of <parameter> is substituted into the variables and equations of the loop, by which further variables and equations of the system are obtained.
3. <parameter> is increased by <step>.
4. If <parameter> is less than or equal to <to>, go to step 2; otherwise all variables and equations described by the loop have been obtained.

The variables and equations of the system included in the loop usually contain the loop parameter, which mostly occurs in the operator arguments in the REDUCE language, or in the array indices in the FORTRAN language. If NL = <lower diag>, NU = <upper diag>, and for some loop F = <from>, T = <to>, S = <step>, and N is the number of equations in the loop, then it has to be true that

UP(NL/N) + UP(NU/N) < DOWN((T-F)/S)

where UP represents rounding up to the nearest natural number, and DOWN rounding down to the nearest natural number.
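Steps 1-4 above amount to a simple unrolling, which can be sketched as follows (Python, illustrative; the names are ours):

```python
def unroll(from_, to, step, template):
    # steps 1-4: set the parameter to <from>, instantiate the template,
    # increase the parameter by <step>, repeat while it does not exceed <to>
    items, p = [], from_
    while p <= to:
        items.append(template(p))
        p += step
    return items

# a loop with parameter J running from 2 to 6 by 2 yields u(2), u(4), u(6)
print(unroll(2, 6, 2, lambda j: "u(%d)" % j))
```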
Since, for example, the last variable before the loop is not required to equal the last variable of the loop system with the loop parameter F-S substituted, when the band matrix is being constructed from the FORTRAN loop that corresponds to the loop in the specification of the equation system, at least the first NL variables-equations have to be moved before the FORTRAN loop, and at least the last NU variables-equations have to be moved after this loop, in order to secure the correspondence of the system variables in this loop with the system variables before and after it. This move requires the above-mentioned condition to be fulfilled. As, in most cases, NL/N and NU/N are small with respect to (T-F)/S, this condition does not represent any considerable constraint. The loop parameters <from>, <to>, and <step> can be natural numbers or expressions that must have natural values during the run of the FORTRAN program.

Choosing the numerical library

The user can choose which numerical library's routines will be used in the generated FORTRAN code. The supported numerical libraries are LINPACK, NAG, IMSL, and ESSL (IBM Engineering and Scientific Subroutine Library). The routines DGBFA, DGBSL (band solver) and DGTSL (tridiagonal solver) are used from the LINPACK library; the routines F01LBF, F04LDF (band solver) and F01LEF, F04LEF (tridiagonal solver) are used from the NAG library; the routine LEQT1B is used from the IMSL library; and the routines DGBF, DGBS (band solver) and DGTF, DGTS (tridiagonal solver) are used from the ESSL library. By default the LINPACK library routines are used. The use of the other libraries is controlled by the switches NAG, IMSL, and ESSL. All these switches are OFF by default. If the switch IMSL is ON, then the IMSL library routine is used. If the switch IMSL is OFF and the switch NAG is ON, then the NAG library routines are used.
If the switches IMSL and NAG are OFF and the switch ESSL is ON, then the ESSL library is used. When generating code using the LINPACK, NAG, or ESSL libraries, special routines are used for systems with tridiagonal matrices, because the tridiagonal solvers are faster than the band matrix solvers.

Completion of the generated code

The GENLINBANDSOL statement generates a block of FORTRAN code (a block of statements of the FORTRAN language) that performs the solution of the given system of linear algebraic equations. In order to be used, this block of code has to be completed with some declarations and statements, thus getting a certain envelope that enables it to be integrated into the main program. In order to work, the generated block of code has to be preceded by:

1. The declaration of arrays, as described by the comments generated into the FORTRAN code (near the calls of the library routines).
2. The assignment of values to the integer variables describing the real dimensions of the used arrays (again as described in the generated FORTRAN comments).
3. The filling of the variables that can occur in the loop parameters.
4. The filling or declaration of all variables and arrays occurring in the system equations, except for the variables of the system of linear equations.
5. The definition of the subroutine ERROUT, the call to which is generated in case some routine finds that the matrix is algorithmically singular.

The mentioned envelope for the generated block can be created manually, or directly using the GENTRAN program package for generating numerical programs. The LINBAND module itself uses the GENTRAN package, and the GENLINBANDSOL statement can be applied directly in the input files of the GENTRAN package (template processing). The GENTRAN package has to be loaded prior to the loading of the LINBAND module. The generated block of FORTRAN code has to be linked with the routines from the chosen numerical library.

References

[1] R.
Liska: Numerical Code Generation for Finite Difference Schemes Solving. In: IMACS World Congress on Computation and Applied Mathematics, Dublin, July 22-26, 1991 (in press).

16.24 FPS: Automatic calculation of formal power series

This package can expand a specific class of functions into their corresponding Laurent-Puiseux series.

Authors: Wolfram Koepf and Winfried Neun.

16.24.1 Introduction

This package can expand functions of a certain type into their corresponding Laurent-Puiseux series as a sum of terms of the form

\sum_{k=0}^{\infty} a_k (x - x_0)^{mk/n + s}

where m is the 'symmetry number', s is the 'shift number', n is the 'Puiseux number', and x0 is the 'point of development'. The following types are supported:

• functions of 'rational type', which are either rational or have a rational derivative of some order;
• functions of 'hypergeometric type', where a(k+m)/a(k) is a rational function for some integer m;
• functions of 'explike type', which satisfy a linear homogeneous differential equation with constant coefficients.

The FPS package is an implementation of the method presented in [2]. The implementations of this package for MAPLE (by D. Gruntz) and MATHEMATICA (by W. Koepf) served as guidelines for this one. Numerous examples can be found in [3]-[4], most of which are contained in the test file fps.tst. Many more examples can be found in the extensive bibliography of Hansen [1].

16.24.2 REDUCE operator FPS

FPS(f,x,x0) tries to find a formal power series expansion for f with respect to the variable x at the point of development x0. It also works for formal Laurent series (negative exponents) and Puiseux series (fractional exponents). If the third argument is omitted, then x0:=0 is assumed.
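Behind the scenes, the 'hypergeometric type' is what makes such closed forms computable: once the term ratio a(k+1)/a(k) is known as a rational function of k, every coefficient follows from a(0) by recurrence. A numerical Python illustration (not part of the package, which works symbolically): for exp(x) the ratio is 1/(k+1), reproducing a(k) = 1/k!.

```python
import math

def series_from_ratio(a0, ratio, x, terms):
    # hypergeometric type: a(k+1) = ratio(k) * a(k); sum a(k) * x**k
    total, a = 0.0, a0
    for k in range(terms):
        total += a * x ** k
        a *= ratio(k)
    return total

# exp(x): a(k+1)/a(k) = 1/(k+1) and a(0) = 1, so a(k) = 1/k!
approx = series_from_ratio(1.0, lambda k: 1.0 / (k + 1), 1.0, 20)
print(abs(approx - math.e))
```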
Examples: FPS(asin(x)^2,x) results in

infsum(2^(2*k)*factorial(k)^2*x^(2*k)*x^2
       /(factorial(2*k + 1)*(k + 1)),k,0,infinity)

FPS(sin x,x,pi) gives

infsum(( - pi + x)^(2*k)*( - 1)^k*( - pi + x)
       /factorial(2*k + 1),k,0,infinity)

and FPS(sqrt(2-x^2),x) yields

infsum( - x^(2*k)*sqrt(2)*factorial(2*k)
       /(8^k*factorial(k)^2*(2*k - 1)),k,0,infinity)

Note: the result contains one or more infsum terms so that it does not interfere with the REDUCE operator sum. In graphically oriented REDUCE interfaces this operator is rendered in the usual sigma notation. If possible, the output is given using factorials. In some cases, the use of the Pochhammer symbol pochhammer(a,k) := a(a+1)···(a+k−1) is necessary.

The operator FPS uses the operator SimpleDE of the next section. If an error message of the type

Could not find the limit of:

occurs, you can set the corresponding limit yourself and try a recalculation. In the computation of FPS(atan(cot(x)),x,0), REDUCE is not able to find the value of the limit limit(atan(cot(x)),x,0), since the atan function is multi-valued. One can choose the branch of atan such that this limit equals pi/2, so we may set

let limit(atan(cot(~x)),x,0)=>pi/2;

and a recalculation of FPS(atan(cot(x)),x,0) yields the output

pi/2 - x

which is the correct local series representation.

16.24.3 REDUCE operator SimpleDE

SimpleDE(f,x) tries to find a homogeneous linear differential equation with polynomial coefficients for f with respect to x. Make sure that y is not a variable already in use. The setting factor df; is recommended to obtain a nicer output form.
Examples: SimpleDE(asin(x)^2,x) then results in

df(y,x,3)*(x^2 - 1) + 3*df(y,x,2)*x + df(y,x)

SimpleDE(exp(x^(1/3)),x) gives

27*df(y,x,3)*x^2 + 54*df(y,x,2)*x + 6*df(y,x) - y

and SimpleDE(sqrt(2-x^2),x) yields

df(y,x)*(x^2 - 2) - x*y

The depth of the search for a differential equation for f is controlled by the variable fps_search_depth; higher values of fps_search_depth increase the chance of finding the solution, but increase the complexity as well. The default value of fps_search_depth is 5. For FPS(sin(x^(1/3)),x) or SimpleDE(sin(x^(1/3)),x), e.g., the setting fps_search_depth:=6 is necessary.

The output of the FPS package can be influenced by the switch tracefps. Setting on tracefps causes various prints of intermediate results.

16.24.4 Problems in the current version

The handling of logarithmic singularities is not yet implemented. The rational type implementation is not yet complete. The support of special functions [5] will be part of the next version.

Bibliography

[1] E. R. Hansen, A Table of Series and Products. Prentice-Hall, Englewood Cliffs, NJ, 1975.
[2] Wolfram Koepf, Power series in computer algebra. J. Symbolic Computation 13 (1992).
[3] Wolfram Koepf, Examples for the algorithmic calculation of formal Puiseux, Laurent and power series. SIGSAM Bulletin 27, 1993, 20-32.
[4] Wolfram Koepf, Algorithmic development of power series. In: Artificial Intelligence and Symbolic Mathematical Computing, ed. by J. Calmet and J. A. Campbell, International Conference AISMC-1, Karlsruhe, Germany, August 1992, Proceedings, Lecture Notes in Computer Science 737, Springer-Verlag, Berlin-Heidelberg, 1993, 195-213.
[5] Wolfram Koepf, Algorithmic work with orthogonal polynomials and special functions. Konrad-Zuse-Zentrum Berlin (ZIB), Preprint SC 94-5, 1994.

16.25 GCREF: A Graph Cross Referencer

This package reuses the code of the RCREF package to create a graph displaying the interdependency of procedures in a Reduce source code file.

Authors: A. Dolzmann, T. Sturm.

16.25.1 Basic Usage

Similarly to the Reduce cross referencer, it is used via switches as follows:

load_package gcref;
on gcref;
in "<file>.red";
off gcref;

At off gcref; the graph is printed to the screen in TGF format. To redirect this output to a file, use the following:

load_package gcref;
on gcref;
in "<file>.red";
out "<file>.tgf";
off gcref;
shut "<file>.tgf";

16.25.2 Shell Script "gcref"

There is a shell script "gcref" in this directory automating this as

./gcref <file>

"gcref" is configured to use CSL Reduce. To use PSL Reduce instead, set $REDUCE in the environment. To use PSL by default, define REDUCE=redpsl in line 3 of "gcref".

16.25.3 Rendering with yEd

The obtained TGF file can be viewed with a graph editor. I recommend the free software yEd, which is written in Java and available for many platforms. Note that TGF is not suitable for storing rendering information. After opening the TGF file with yEd, the graph has to be rendered explicitly as follows:

* From menu "Layout" choose "Hierarchical Layout".

To resize the nodes to the procedure names,

* from menu "Tools" choose "Fit Node to Label".

Feel free to experiment with yEd and use other layouts and layout options that might be suitable for your particular software. For saving your particular layout at the end, use the GRAPHML format instead of TGF.

16.26 GENTRAN: A code generation package

GENTRAN is an automatic code GENerator and TRANslator. It constructs complete numerical programs based on sets of algorithmic specifications and symbolic expressions. Formatted FORTRAN, RATFOR, PASCAL or C code can be generated through a series of interactive commands or under the control of a template processing routine.
Large expressions can be automatically segmented into subexpressions of manageable size, and a special file-handling mechanism maintains stacks of open I/O channels to allow output to be sent to any number of files simultaneously and to facilitate recursive invocation of the whole code generation process.

Author: Barbara L. Gates.

16.27 GNUPLOT: Display of functions and surfaces

This package is an interface to the popular GNUPLOT package. It allows you to display functions in 2D and surfaces in 3D on a variety of output devices including X terminals, PC monitors, and postscript and LaTeX printer files.

NOTE: The GNUPLOT package may not be included in all versions of REDUCE.

Author: Herbert Melenk.

16.27.1 Introduction

The GNUPLOT system provides easy to use graphics output for curves or surfaces which are defined by formulas and/or data sets. GNUPLOT supports a variety of output devices such as VGA screen, postscript, PicTeX, MS Windows. The REDUCE GNUPLOT package lets one use the GNUPLOT graphical output directly from inside REDUCE, either for the interactive display of curves/surfaces or for the production of pictures on paper.

16.27.2 Command plot

Under REDUCE, GNUPLOT is used as a graphical output server, invoked by the command plot(...). This command can have a variable number of parameters:

• A function to plot; a function can be
  – an expression with one unknown, e.g. u*sin(u)^2.
  – a list of expressions with one (identical) unknown, e.g. {sin(u), cos(u)}.
  – an expression with two unknowns, e.g. u*sin(u)^2+sqrt(v).
  – a list of expressions with two (identical) unknowns, e.g. {x^2+y^2, x^2-y^2}.
  – a parametric expression of the form point(u,v) or point(u,v,w) where u,v,w are expressions which depend on one or two parameters; if there is one parameter, the object describes a curve in the plane (only u and v) or in 3D space; if there are two parameters, the object describes a surface in 3D. The parameters are treated as independent variables.
Example: point(sin t, cos t, t/10).
  – an equation with a symbol on the left-hand side and an expression with one or two unknowns on the right-hand side, e.g. dome = 1/(x^2+y^2).
  – an equation with an expression on the left-hand side and a zero on the right-hand side describing implicitly a one-dimensional variety in the plane (implicitly given curve), e.g. x^3 + x*y^2 - 9x = 0, or a two-dimensional surface in 3-dimensional Euclidean space.
  – an equation with an expression in two variables on the left-hand side and a list of numbers on the right-hand side; the contour lines corresponding to the given values are drawn, e.g. x^3 - y^2 + x*y = {-2,-1,0,1,2}.
  – a list of points in 2 or 3 dimensions, e.g. {{0,0},{0,1},{1,1}} representing a curve.
  – a list of lists of points in 2 or 3 dimensions, e.g. {{{0,0},{0,1},{1,1}}, {{0,0},{0,1},{1,1}}} representing a family of curves.

• A range for a variable; this has the form variable=(lower_bound .. upper_bound) where lower_bound and upper_bound must be expressions which evaluate to numbers. If no range is specified, the default range for independent variables is (-10 .. 10) and the range for the dependent variable is set to the maximum number of the GNUPLOT executable (using double floats on most IEEE machines). Additionally, the number of interval subdivisions can be assigned as a formal quotient variable=(lower_bound .. upper_bound)/n, where n is a positive integer. E.g. (1 .. 5)/30 means the interval from 1 to 5 subdivided into 30 pieces of equal size. A subdivision parameter overrides the value of the variable points for this variable.

• A plot option, either as a fixed keyword, e.g. hidden3d, or as an equation, e.g. term=pictex; free texts such as titles and labels should be enclosed in string quotes. Please note that a blank has to be inserted between a number and a dot, otherwise the REDUCE translator will be misled.
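As a quick illustration of the range and subdivision syntax described above, the following sketch (the function chosen here is illustrative, not from the examples of this section) plots one curve over an explicit range with 100 subdivisions:

```reduce
% plot sin(x)/x on the interval (-10 .. 10);
% the /100 subdivides the range into 100 pieces of equal size
% and overrides the points option for the variable x
plot(sin(x)/x, x=(-10 .. 10)/100);
```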
If a function is given as an equation, the left-hand side is mainly used as a label for the axis of the dependent variable. In two dimensions, plot can be called with more than one explicit function; all curves are drawn in one picture. However, all these must use the same independent variable name. One of the functions can be a point set or a point set list. Normally all functions and point sets are plotted by lines. A point set is drawn by points only if functions and the point set are drawn in one picture. The same applies to three dimensions with explicit functions. However, an implicitly given curve must be the sole object for one picture. The functional expressions are evaluated in rounded mode. This is done automatically; it is not necessary to turn on rounded mode explicitly.

Examples:

plot(cos x);
plot(s=sin phi, phi=(-3 .. 3));
plot(sin phi, cos phi, phi=(-3 .. 3));
plot(cos sqrt(x^2 + y^2), x=(-3 .. 3), y=(-3 .. 3), hidden3d);
plot {{0,0},{0,1},{1,1},{0,0},{1,0},{0,1},{0.5,1.5},{1,1},{1,0}};

% parametric: screw
on rounded;
w := for j := 1:200 collect {1/j*sin j, 1/j*cos j, j/200}$
plot w;

% parametric: globe
dd := pi/15$
w := for u := dd step dd until pi-dd collect
       for v := 0 step dd until 2pi collect
         {sin(u)*cos(v), sin(u)*sin(v), cos(u)}$
plot w;

% implicit: superposition of polynomials
plot((x^2+y^2-9)*x*y = 0);

Piecewise-defined functions

A composed graph can be defined by a rule-based operator. In that case each rule must contain a clause which restricts the rule application to numeric arguments, e.g.

operator my_step1;
let {my_step1(~x) => -1 when numberp x and x<-pi/2,
     my_step1(~x) => 1 when numberp x and x>pi/2,
     my_step1(~x) => sin x when numberp x and -pi/2<=x and x<=pi/2};
plot(my_step1(x));

Of course, such a rule may call a procedure:

procedure my_step3(x);
  if x<-1 then -1 else if x>1 then 1 else x;
operator my_step2;
let my_step2(~x) => my_step3(x) when numberp x;
plot(my_step2(x));

The direct use of a procedure with a numeric if clause is impossible.

Plot options

The following plot options are supported in the plot command:

• points=<n>: the number of unconditionally computed data points; for a grid, points^2 grid points are used. The default value is 20. The value of points is used only for variables for which no individual interval subdivision has been specified in the range specification.

• refine=<d>: the maximum depth of adaptive interval intersections. The default is 8. A value 0 switches any refinement off. Note that a high value may increase the computing time significantly.

Additional options

The following additional GNUPLOT options are supported in the plot command:

• title=name: the title (string) is put at the top of the picture.

• axes labels: xlabel="text1", ylabel="text2", and for surfaces zlabel="text3". If omitted, the axes are labeled by the independent and dependent variable names from the expression. Note that xlabel, ylabel, and zlabel here are used in the usual sense, x for the horizontal and y for the vertical axis in 2-d and z for the perpendicular axis in 3-d – these names do not refer to the variable names used in the expressions.

plot(1, x, (4*x^2-1)/2, (x*(12*x^2-5))/3, x=(-1 .. 1),
     ylabel="L(x,n)", title="Legendre Polynomials");

• terminal=name: prepare output for device type name. Every installation uses a default terminal as output device; some installations support additional devices such as printers; consult the original GNUPLOT documentation or the GNUPLOT Help for details.

• output="filename": redirect the output to a file.

• size="s_x,s_y": rescale the graph (not the window), where s_x and s_y are scaling factors for the x- and y-sizes. Defaults are s_x = 1, s_y = 1. Note that scaling factors greater than 1 will often cause the picture to be too big for the window.

plot(1/(x^2+y^2), x=(0.1 .. 5), y=(0.1 ..
5), size="0.7,1");

• view="r_x,r_z": set the viewpoint in 3 dimensions by turning the object around the x or z axis; the values are degrees (integers). Defaults are r_x = 60, r_z = 30.

plot(1/(x^2+y^2), x=(0.1 .. 5), y=(0.1 .. 5), view="30,130");

• contour resp. nocontour: in 3 dimensions an additional contour map is drawn (default: nocontour). Note that contour is an option which is executed by GNUPLOT by interpolating the precomputed function values. If you want to draw contour lines of a delicate formula, you had better use the contour form of the REDUCE plot command.

• surface resp. nosurface: in 3 dimensions the surface is drawn, resp. suppressed (default: surface).

• hidden3d: hidden line removal in 3 dimensions.

16.27.3 Paper output

The following example works for a PostScript printer. If your printer uses a different communication, please find the correct setting for the terminal variable in the GNUPLOT documentation. For a PostScript printer, add the options terminal=postscript and output="filename" to your plot command, e.g.

plot(sin x, x=(0 .. 10), terminal=postscript, output="");

16.27.4 Mesh generation for implicit curves

For the basic mesh for finding an implicitly-given curve, the x, y plane is subdivided into an initial set of triangles. Those triangles which have an explicit zero point or which have two points with different signs are refined by subdivision. A further refinement is performed for triangles which do not have exactly two zero neighbours, because such places may represent crossings, bifurcations, turning points or other difficulties. The initial subdivision and the refinements are controlled by the option points, which is initially set to 20: the initial grid is refined unconditionally until approximately points * points equally-distributed points in the x, y plane have been generated. The final mesh can be visualized in the picture by setting

on show_grid;

16.27.5
Mesh generation for surfaces

By default the functions are computed at predefined mesh points: the ranges are divided by the number associated with the option points in both directions. For two dimensions, the given mesh is adaptively smoothed when the curves are too coarse, especially if singularities are present. On the other hand, refinement can be rather time-consuming if used with complicated expressions. You can control it with the option refine. At singularities the graph is interrupted.

In three dimensions no refinement is possible, as GNUPLOT supports surfaces only with a fixed regular grid. In the case of a singularity the near neighborhood is tested; if a point there allows a function evaluation, its clipped value is used instead, otherwise a zero is inserted.

When plotting surfaces in three dimensions you have the option of hidden line removal. Because of an error in Gnuplot 3.2 the axes cannot be labeled correctly when hidden3d is used; therefore they aren't labeled at all. Hidden line removal is not available with point lists.

16.27.6 GNUPLOT operation

The command plotreset; deletes the current GNUPLOT output window. The next call to plot will then open a new one. If GNUPLOT is invoked directly by an output pipe (UNIX and Windows), an eventual error in the GNUPLOT data transmission might cause GNUPLOT to quit. As REDUCE is unable to detect the broken pipe, you have to reset the plot system by calling the command plotreset; explicitly. Afterwards new graphics output can be produced.

Under Windows 3.1 and Windows NT, GNUPLOT has a text and a graph window. If you don't want to see the text window, iconify it and activate the option update wgnuplot.ini from the graph window system menu – then the present screen layout (including the graph window size) will be saved and the text window will come up iconified in future. You can also select some more features there and so tailor the graphic output.
Before you terminate REDUCE you should terminate the graphics window by calling plotreset;. If you terminate REDUCE without deleting the GNUPLOT windows, use the command button from the GNUPLOT text window – it offers an exit function.

16.27.7 Saving GNUPLOT command sequences

If you want to use the internal GNUPLOT command sequence more than once (e.g. for producing a picture for a publication), you may set

on trplot, plotkeep;

trplot causes all GNUPLOT commands to be written additionally to the actual REDUCE output. Normally the data files are erased after calling GNUPLOT; however, with plotkeep on the files are not erased.

16.27.8 Direct Call of GNUPLOT

GNUPLOT has a lot of facilities which are not accessed by the operators and parameters described above. Therefore genuine GNUPLOT commands can be sent by REDUCE. Please consult the GNUPLOT manual for the available commands and parameters. The general syntax for a GNUPLOT call inside REDUCE is

gnuplot(cmd, p1, p2, ...)

where cmd is a command name and p1, p2, ... are the parameters, separated by commas inside REDUCE. The parameters are evaluated by REDUCE and then transmitted to GNUPLOT in GNUPLOT syntax. Usually a drawing is built by a sequence of commands, which are buffered by REDUCE or the operating system. For terminating and activating them use the REDUCE command plotshow. Example:

gnuplot(set, polar);
gnuplot(set, noparametric);
gnuplot(plot, x*sin x);
plotshow;

In this example the function expression is transferred literally to GNUPLOT, while REDUCE is responsible for computing the function values when plot is called. Note that GNUPLOT restrictions with respect to variable and function names have to be taken into account when using this type of operation. Important: String quotes are not transferred to the GNUPLOT executable; if the GNUPLOT syntax needs string quotes, you must add doubled string quotes inside the argument string, e.g.
gnuplot(plot, """mydata""", "using 2:1");

16.27.9 Examples

The following are taken from a collection of sample plots (gnuplot.tst) and a set of tests for plotting special functions. The pictures are made using the qt GNUPLOT device and using the menu of the graphics window to export to PDF or PNG.

A simple plot for sin(1/x):

plot(sin(1/x), x=(-1 .. 1), y=(-3 .. 3));

[Figure: REDUCE Plot]

Some implicitly-defined curves:

plot(x^3 + y^3 - 3*x*y = {0,1,2,3}, x=(-2.5 .. 2), y=(-5 .. 5));

[Figure: REDUCE Plot]

A test for hidden surfaces:

plot(cos sqrt(x^2 + y^2), x=(-3 .. 3), y=(-3 .. 3), hidden3d);

[Figure: REDUCE Plot]

This may be slow on some machines because of a delicate evaluation context:

plot(sinh(x*y)/sinh(2*x*y), hidden3d);

[Figure: REDUCE Plot]

on rounded;
w := {for j:=1 step 0.1 until 20 collect
        {1/j*sin j, 1/j*cos j, j},
      for j:=1 step 0.1 until 20 collect
        {(0.1+1/j)*sin j, (0.1+1/j)*cos j, j}}$
plot w;

[Figure: REDUCE Plot]

An example taken from: Cox, Little, O'Shea, Ideals, Varieties and Algorithms:

plot(point(3u+3u*v^2-u^3, 3v+3u^2*v-v^3, 3u^2-3v^2),
     hidden3d, title="Enneper Surface");

[Figure: Enneper Surface]

The following examples use the specfn package to draw a collection of Chebyshev T polynomials and Bessel Y functions. The special function package has to be loaded explicitly to make the operators ChebyshevT and BesselY available.
load_package specfn;
plot(chebyshevt(1,x), chebyshevt(2,x), chebyshevt(3,x),
     chebyshevt(4,x), chebyshevt(5,x), x=(-1 .. 1),
     title="Chebyshev t Polynomials");

[Figure: Chebyshev t Polynomials]

plot(bessely(0,x), bessely(1,x), bessely(2,x),
     x=(0.1 .. 10), y=(-1 .. 1),
     title="Bessel functions of 2nd kind");

[Figure: Bessel functions of 2nd kind]

16.28 GROEBNER: A Gröbner basis package

GROEBNER is a package for the computation of Gröbner Bases using the Buchberger algorithm and related methods for polynomial ideals and modules. It can be used over a variety of different coefficient domains, and for different variable and term orderings. Gröbner Bases can be used for various purposes in commutative algebra, e.g. for elimination of variables, converting surd expressions to implicit polynomial form, computation of dimensions, solution of polynomial equation systems etc. The package is also used internally by the SOLVE operator.

Authors: Herbert Melenk, H.M. Möller and Winfried Neun.

Gröbner bases are a valuable tool for solving problems in connection with multivariate polynomials, such as solving systems of algebraic equations and analyzing polynomial ideals. For a definition of Gröbner bases, a survey of possible applications and further references, see [6]. Examples are given in [5], in [7] and also in the test file for this package.

The groebner package calculates Gröbner bases using the Buchberger algorithm. It can be used over a variety of different coefficient domains, and for different variable and term orderings. The current version of the package uses parts of a previous version, written by R. Gebauer, A.C. Hearn, H. Kredel and H. M. Möller. The algorithms implemented in the current version are documented in [10], [11], [15] and [12]. The operator saturation was implemented in July 2000 (Herbert Melenk).
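As a minimal sketch of the basic workflow (the polynomials here are illustrative, not taken from this package's documentation), a Gröbner basis computation typically looks like:

```reduce
load_package groebner;

% declare the variable sequence and the term order mode,
% then compute a basis for the ideal generated by the inputs
torder({x,y}, lex)$
groebner {x**2 + y**2 - 1, x - y};
```

With this ordering one would expect a basis such as {x - y, 2*y**2 - 1}, from which the common zeros of the circle and the line can be read off by back-substitution.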
16.28.1 Background

Variables, Domains and Polynomials

The various functions of the groebner package manipulate equations and/or polynomials; equations are internally transformed into polynomials by forming the difference of left-hand side and right-hand side, if equations are given.

All manipulations take place in a ring of polynomials in some variables x1, ..., xn over a coefficient domain d: d[x1, ..., xn], where d is a field or at least a ring without zero divisors. The set of variables x1, ..., xn can be given explicitly by the user or it is extracted automatically from the input expressions. All REDUCE kernels can play the role of "variables" in this context; examples are

x  y  z22  sin(alpha)  cos(alpha)  c(1,2,3)  c(1,3,2)  farina4711

The domain d is the current REDUCE domain with those kernels adjoined that are not members of the list of variables. So the elements of d may be complicated polynomials themselves over kernels not in the list of variables; if, however, the variables are extracted automatically from the input expressions, d is identical with the current REDUCE domain. It is useful to regard kernels not being members of the list of variables as "parameters", e.g.

a*x + (a - b)*y**2

with "variables" {x, y} and "parameters" a and b.

The exponents of groebner variables must be positive integers. A groebner variable may not occur as a parameter (or part of a parameter) of a coefficient function. This condition is tested at the beginning of the groebner calculation; if it is violated, an error message occurs (with the variable name), and the calculation is aborted. When the groebner package is called by solve, the test is switched off internally.

The current version of the Buchberger algorithm has two internal modes, a field mode and a ring mode. In the starting phase the algorithm analyzes the domain type; if it recognizes d as being a ring it uses the ring mode, otherwise the field mode is needed.
Normally field calculations occur only if all coefficients are numbers and if the current REDUCE domain is a field (e.g. rational numbers, modular numbers modulo a prime). In general, the ring mode is faster. When no specific REDUCE domain is selected, the ring mode is used, even if the input formulas contain fractional coefficients: they are multiplied by their common denominators so that they become integer polynomials. Zeroes of the denominators are included in the result list.

Term Ordering

In the theory of Gröbner bases, the terms of polynomials are considered as ordered. Several order modes are available in the current package, including the basic modes:

lex, gradlex, revgradlex

All orderings are based on an ordering among the variables. For each pair of variables (a, b) an order relation must be defined, e.g. "a > b". The greater sign > does not represent a numerical relation among the variables; it can be interpreted only in terms of formula representation: "a" will be placed in front of "b" or "a" is more complicated than "b".

The sequence of variables constitutes this order base. So the notion of {x1, x2, x3} as a list of variables at the same time means x1 > x2 > x3 with respect to the term order.

If terms (products of powers of variables) are compared with lex, that term is chosen which has a greater variable or a higher degree if the greatest variable is the first in both. With gradlex the sum of all exponents (the total degree) is compared first, and if that does not lead to a decision, the lex method is taken for the final decision. The revgradlex method also compares the total degree first, but afterward it uses the lex method in the reverse direction; this is the method originally used by Buchberger.
Example 26 with {x, y, z}:

lex:
  x*y**3 > y**48            (heavier variable)
  x**4*y**2 > x**3*y**10    (higher degree in 1st variable)

gradlex:
  y**3*z**4 > x**3*y**3     (higher total degree)
  x*z > y**2                (equal total degree)

revgradlex:
  y**3*z**4 > x**3*y**3     (higher total degree)
  x*z < y**2                (equal total degree, so reverse order of lex)

The formal description of the term order modes is similar to [14]; this description regards only the exponents of a term, which are written as vectors of integers, with 0 for exponents of a variable which does not occur:

  (e) = (e1, ..., en) representing x1**e1 x2**e2 ... xn**en.
  deg(e) is the sum over all elements of (e).
  (e) > (l)  <=>  (e) - (l) > (0) = (0, ..., 0)

lex:
  (e) >lex (0)  =>  ek > 0 and ej = 0 for j = 1, ..., k - 1

gradlex:
  (e) >gl (0)  =>  deg(e) > 0 or (e) >lex (0)

revgradlex:
  (e) >rgl (0)  =>  deg(e) > 0 or (e) <lex (0)

Note that the lex ordering is identical to the standard REDUCE kernel ordering, when korder is set explicitly to the sequence of variables. lex is the default term order mode in the groebner package.

It is beyond the scope of this manual to discuss the functionality of the term order modes. See [7].

The list of variables is declared as an optional parameter of the torder statement (see below). If this declaration is missing or if the empty list has been used, the variables are extracted from the expressions automatically and the REDUCE system order defines their sequence; this can be influenced by setting an explicit order via the korder statement.

The result of a Gröbner calculation is algebraically correct only with respect to the term order mode and the variable sequence which was in effect during the calculation. This is important if several calls to the groebner package are done with the result of the first being the input of the second call. Therefore we recommend that you declare the variable list and the order mode explicitly.
Once declared, it remains valid until you enter a new torder statement. The operator gvars helps you extract the variables from a given set of polynomials, if an automatic reordering has been selected.

The Buchberger Algorithm

The Buchberger algorithm of the package is based on GEBAUER/MÖLLER [11]. Extensions are documented in [16] and [12].

16.28.2 Loading of the Package

The following command loads the package into REDUCE (this syntax may vary according to the implementation):

load_package groebner;

The package contains various operators, and switches for control over the reduction process. These are discussed in the following.

16.28.3 The Basic Operators

Term Ordering Mode

torder(vl, m, [p1, p2, ...]);

where vl is a variable list (or the empty list if no variables are declared explicitly), m is the name of a term ordering mode lex, gradlex, revgradlex (or another implemented mode), and [p1, p2, ...] are additional parameters for the term ordering mode (not needed for the basic modes).

torder sets the variable set and the term ordering mode. The default mode is lex. The previous setting is returned as a list with corresponding elements. Such a list can alternatively be passed as sole argument to torder. If the variable list is empty or if the torder declaration is omitted, the automatic variable extraction is activated.

gvars({exp1, exp2, ..., expn});

where {exp1, exp2, ..., expn} is a list of expressions or equations. gvars extracts from the expressions {exp1, exp2, ..., expn} the kernels which can play the role of variables for a Gröbner calculation. This can be used e.g. in a torder declaration.

groebner: Calculation of a Gröbner Basis

groebner {exp1, exp2, ..., expm};

where {exp1, exp2, ..., expm} is a list of expressions or equations. groebner calculates the Gröbner basis of the given set of expressions with respect to the current torder setting.
The Gröbner basis {1} means that the ideal generated by the input polynomials is the whole polynomial ring, or equivalently, that the input polynomials have no zeroes in common.

As a side effect, the sequence of variables is stored as a REDUCE list in the shared variable gvarslast. This is important if the variables are reordered because of optimization: you must set them afterwards explicitly as the current variable sequence if you want to use the Gröbner basis in the sequel, e.g. for a preduce call. A basis has the property "Gröbner" only with respect to the variable sequence which had been active during its computation.

Example 27

torder({},lex)$
groebner{3*x**2*y + 2*x*y + y + 9*x**2 + 5*x - 3,
         2*x**3*y - x*y - y + 6*x**3 - 2*x**2 - 3*x + 3,
         x**3*y + x**2*y + 3*x**3 + 2*x**2};

          2
{8*x - 2*y  + 5*y + 3,

    3      2
 2*y  - 3*y  - 16*y + 21}

This example used the default system variable ordering, which was {x, y}. With the other variable ordering, a different basis results:

torder({y,x},lex)$
groebner{3*x**2*y + 2*x*y + y + 9*x**2 + 5*x - 3,
         2*x**3*y - x*y - y + 6*x**3 - 2*x**2 - 3*x + 3,
         x**3*y + x**2*y + 3*x**3 + 2*x**2};

          2
{2*y + 2*x  - 3*x - 6,

    3      2
 2*x  - 5*x  - 5*x}

Another basis yet again results with a different term ordering:

torder({x,y},revgradlex)$
groebner{3*x**2*y + 2*x*y + y + 9*x**2 + 5*x - 3,
         2*x**3*y - x*y - y + 6*x**3 - 2*x**2 - 3*x + 3,
         x**3*y + x**2*y + 3*x**3 + 2*x**2};

    2
{2*y  - 5*y - 8*x - 3,

 y*x - y + x + 3,

    2
 2*x  + 2*y - 3*x - 6}

The operation of groebner can be controlled by the following switches:

groebopt – If set on, the sequence of variables is optimized with respect to execution speed; the algorithm involved is described in [5]; note that the final list of variables is available in gvarslast. An explicitly declared dependency supersedes the variable optimization. For example

depend a, x, y;

guarantees that a will be placed in front of x and y.
So groebopt can be used even in cases where elimination of variables is desired. By default groebopt is off, conserving the original variable sequence.

groebfullreduction – If set off, the reduction steps during the groebner operation are limited to the pure head term reduction; subsequent terms are reduced otherwise. By default groebfullreduction is on.

gltbasis – If set on, the leading terms of the result basis are extracted. They are collected in a basis of monomials, which is available as value of the global variable with the name gltb.

glterms – If {exp1, ..., expm} contain parameters (symbols which are not members of the variable list), the share variable glterms contains a list of expressions which during the calculation were assumed to be nonzero. A Gröbner basis is valid only under the assumption that all these expressions do not vanish.

The following switches control the print output of groebner; by default all these switches are set off and nothing is printed.

groebstat – A summary of the computation is printed, including the computing time, the number of intermediate h-polynomials and the counters for the hits of the criteria.

trgroeb – Includes groebstat and the printing of the intermediate h-polynomials.

trgroebs – Includes trgroeb and the printing of intermediate s-polynomials.

trgroeb1 – The internal pairlist is printed when modified.

gzerodim!?: Test of dim = 0

gzerodim!? bas

where bas is a Gröbner basis in the current setting. The result is nil, if bas is the basis of an ideal of polynomials with infinitely many common zeros. If the ideal is zero dimensional, i.e. the polynomials of the ideal have only finitely many zeros in common, the result is an integer k which is the number of these common zeros (counted with multiplicities).
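A short sketch of how gzerodim!? might be used (the example ideal is illustrative, not taken from this package's documentation):

```reduce
load_package groebner;

torder({x,y}, lex)$
g := groebner {x**2 + y**2 - 1, x - y}$

% the circle x^2+y^2=1 meets the line x=y in two points,
% so this ideal is zero dimensional
gzerodim!? g;
```

Since the circle and the line intersect in exactly two points, one would expect the result 2 here; for an ideal with infinitely many common zeros the operator returns nil instead.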
gdimension, gindependent_sets: compute dimension and independent variables

The following operators can be used to compute the dimension and the independent variable sets of an ideal which has the Gröbner basis bas with arbitrary term order:

gdimension bas
gindependent_sets bas

gindependent_sets computes the maximal left independent variable sets of the ideal, that is the variable sets which play the role of free parameters in the current ideal basis. Each set is a list which is a subset of the variable list. The result is a list of these sets. For an ideal with dimension zero the list is empty. gdimension computes the dimension of the ideal, which is the maximum length of the independent sets.

The switch groebopt plays no role in the algorithms gdimension and gindependent_sets. It is set off during the processing even if it is set on before. Its state is saved during the processing. The "Kredel-Weispfenning" algorithm is used (see [15], extended to general ordering in [4]).

Conversion of a Gröbner Basis

glexconvert: Conversion of an Arbitrary Gröbner Basis of a Zero Dimensional Ideal into a Lexical One

glexconvert({exp1, ..., expm} [, {var1, ..., varn}] [, maxdeg = mx] [, newvars = {nv1, ..., nvk}])

where {exp1, ..., expm} is a Gröbner basis with {var1, ..., varn} as variables in the current term order mode, mx is an integer, and {nv1, ..., nvk} is a subset of the basis variables. For this operator the source and target variable sets must be specified explicitly.

glexconvert converts a basis of a zero-dimensional ideal (finite number of isolated solutions) from arbitrary ordering into a basis under lex ordering. During the call of glexconvert the original ordering of the input basis must still be active!

newvars defines the new variable sequence. If omitted, the original variable sequence is used. If only a subset of variables is specified here, the partial ideal basis is evaluated. For the calculation of a univariate polynomial, newvars should be a list with one element. maxdeg is an upper limit for the degrees. The algorithm stops with an error message if this limit is reached.
For the calculation of a univariate polynomial, newvars should be a list with one element. maxdeg is an upper limit for the degrees. The algorithm stops with an error message, if this limit is reached. 548 CHAPTER 16. USER CONTRIBUTED PACKAGES A warning occurs if the ideal is not zero dimensional. glexconvert is an implementation of the FLGM algorithm by FAUGÈRE, G IANNI, L AZARD and M ORA [10]. Often, the calculation of a Gröbner basis with a graded ordering and subsequent conversion to lex is faster than a direct lex calculation. Additionally, glexconvert can be used to transform a lex basis into one with different variable sequence, and it supports the calculation of a univariate polynomial. If the latter exists, the algorithm is even applicable in the non zero-dimensional case, if such a polynomial exists. If the polynomial does not exist, the algorithm computes until maxdeg has been reached. torder({{w,p,z,t,s,b},gradlex) g := groebner { f1 := 45*p + 35*s -165*b -36, 35*p + 40*z + 25*t - 27*s, 15*w + 25*p*s +30*z -18*t -165*b**2, -9*w + 15*p*t + 20*z*s, w*p + 2*z*t - 11*b**3, 99*w - 11*s*b +3*b**2, b**2 + 33/50*b + 2673/10000}; g := {60000*w + 9500*b + 3969, 1800*p - 3100*b - 1377, 18000*z + 24500*b + 10287, 750*t - 1850*b + 81, 200*s - 500*b - 9, 2 10000*b + 6600*b + 2673} glexconvert(g,{w,p,z,t,s,b},maxdeg=5,newvars={w}); 2 100000000*w + 2780000*w + 416421 glexconvert(g,{w,p,z,t,s,b},maxdeg=5,newvars={p}); 2 6000*p - 2360*p + 3051 groebner_walk: Conversion of a (General) Total Degree Basis into a Lex One The algorithm groebner_walk convertes from an arbitrary polynomial system a 549 graduated basis of the given variable sequence to a lex one of the same sequence. The job is done by computing a sequence of Gröbner bases of correspondig monomial ideals, lifting the original system each time. The algorithm has been described (more generally) by [2],[3],[1] and [8]. 
groebner_walk should only be called if the direct calculation of a lex Gröbner basis does not work. The computation of groebner_walk includes some overhead (e.g. the computation divides polynomials). Normally torder must be called beforehand to define the variables and the variable ordering. The reordering of variables makes no sense with groebner_walk, so do not call groebner_walk with groebopt on!

groebner_walk g

where g is a polynomial ideal basis computed under gradlex, or under weighted with a non-zero weight vector consisting of a single value repeated for each variable. The result is a corresponding lex basis (if that is computable), independent of the degree of the ideal (even for non-zero degree ideals). The variable gvarslast is not set.

groebnerf: Factorizing Gröbner Bases

Background

If Gröbner bases are computed in order to solve systems of equations or to find the common roots of systems of polynomials, the factorizing version of the Buchberger algorithm can be used. The theoretical background is simple: if a polynomial h can be represented as a product of two (or more) polynomials, e.g. h = f*g, then h vanishes if and only if one of the factors vanishes. So if, during the calculation of a Gröbner basis, an h of the above form is detected, the whole problem can be split into two (or more) disjoint branches. Each of the branches is simpler than the complete problem; this saves computing time and space. The result of this type of computation is a list of (partial) Gröbner bases; the solution set of the original problem is the union of the solutions of the partial problems, ignoring the multiplicity of an individual solution. If a branch results in a basis {1}, then there is no common zero, i.e. no additional solution for the original problem, contributed by this branch.

groebnerf Call

The syntax of groebnerf is the same as for groebner.

groebnerf({exp1, exp2, . . . , expm} [, {}, {nz1, . . . , nzk}]);

where {exp1, exp2, . . .
, expm} is a given list of expressions or equations, and {nz1, . . . , nzk} is an optional list of polynomials known to be non-zero.

groebnerf tries to separate polynomials into individual factors and to branch the computation in a recursive manner (factorization tree). The result is a list of partial Gröbner bases. If no factorization can be found, or if all branches but one lead to the trivial basis {1}, the result has only one basis; nevertheless it is a list of lists of polynomials. If no solution is found, the result will be {{1}}. Multiplicities (one factor with a higher power, the same partial basis twice) are deleted as early as possible in order to speed up the calculation. The factorizing is controlled by some switches.

As a side effect, the sequence of variables is stored as a REDUCE list in the shared variable gvarslast. If gltbasis is on, a corresponding list of leading term bases is also produced and is available in the variable gltb.

The third parameter of groebnerf allows one to declare some polynomials nonzero. If any of these is found in a branch of the calculation, the branch is cancelled. This can be used to save a substantial amount of computing time. The second parameter must be included as an empty list if the third parameter is to be used.

torder({x,y},lex)$

groebnerf { 3*x**2*y + 2*x*y + y + 9*x**2 + 5*x = 3,
            2*x**3*y - x*y - y + 6*x**3 - 2*x**2 - 3*x = -3,
            x**3*y + x**2*y + 3*x**3 + 2*x**2 };

{{y - 3, x},
 {2*y + 2*x - 1, 2*x^2 - 5*x - 5}}

It is obvious here that the solutions of the equations can be read off immediately.

All switches from groebner are valid for groebnerf as well:

groebopt
gltbasis
groebfullreduction
groebstat
trgroeb
trgroebs
trgroeb1

Additional switches for groebnerf:

trgroebr – All intermediate partial bases are printed when detected. By default trgroebr is off.

groebmonfac
groebresmax
groebrestriction

These variables are described in the following paragraphs.
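The splitting idea behind groebnerf can be modelled in a few lines of Python. This is only an illustrative sketch, not part of REDUCE: polynomials and their factors are represented here simply as strings, and each choice of one factor per factored polynomial yields one branch of the factorization tree.

```python
from itertools import product

def branch_systems(factored_polys):
    """Each input polynomial is given as the list of its factors.
    Since h = f*g vanishes iff f or g vanishes, every choice of one
    factor per polynomial yields one branch of the factorization
    tree; duplicate branches are removed, as groebnerf does."""
    branches = set()
    for choice in product(*factored_polys):
        branches.add(frozenset(choice))   # repeated factors collapse
    return sorted(sorted(b) for b in branches)

# x*y = 0 and x*(x-1) = 0 split into four distinct branch systems:
# {x}, {x, x-1}, {x, y}, {x-1, y}
print(branch_systems([["x", "y"], ["x", "x-1"]]))
```

Note that the branch containing only x arises twice (once from each polynomial) but is kept only once, mirroring the early deletion of multiplicities described above.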
Suppression of Monomial Factors

The factorization in groebnerf is controlled by the following switches and variables. The variable groebmonfac is connected to the handling of “monomial factors”. A monomial factor is a product of variable powers occurring as a factor, e.g. x**2*y in x**3*y - 2*x**2*y**2. A monomial factor represents a solution of the type “x = 0 or y = 0” with a certain multiplicity. With groebnerf the multiplicity of monomial factors is lowered to the value of the shared variable groebmonfac, which by default is 1 (= monomial factors remain present, but their multiplicity is brought down). With groebmonfac := 0 the monomial factors are suppressed completely.

Limitation on the Number of Results

The shared variable groebresmax controls the number of partial results. Its default value is 300. If groebresmax partial results are calculated, the calculation is terminated. groebresmax counts all branches, including those which are terminated (have been computed already), give no contribution to the result (partial basis {1}), or are unified in the result with other (partial) bases. So the resulting number may be much smaller. When the limit of groebresmax is reached, a warning

GROEBRESMAX limit reached

is issued; this warning must in any case be taken seriously. For “normal” calculations the groebresmax limit is not reached. groebresmax is a shared variable (with an integer value); it can be set in algebraic mode to a different (positive integer) value.

Restriction of the Solution Space

In some applications only a subset of the complete solution set of a given set of equations is relevant, e.g. only nonnegative values or positive values for the variables. A significant amount of computing time can be saved if nonrelevant computation branches can be terminated early.
Positivity: If a polynomial has no (strictly) positive zero, then every system containing it has no nonnegative or strictly positive solution. Therefore, the Buchberger algorithm tests the coefficients of the polynomials for equal sign if requested. For example, 13*x + 15*y*z can be zero with real nonnegative values for x, y and z only if x = 0 and y = 0 or z = 0; this is a sort of “factorization by restriction”. A polynomial 13*x + 15*y*z + 20 can never vanish with nonnegative real variable values.

Zero point: If any polynomial in an ideal has an absolute term, the ideal cannot have the origin as a common solution.

By setting the shared variable groebrestriction, groebnerf is informed of the type of restriction the user wants to impose on the solutions:

groebrestriction:=nonnegative;  only nonnegative real solutions are of interest

groebrestriction:=positive;  only nonnegative and nonzero solutions are of interest

groebrestriction:=zeropoint;  only solution sets which contain the point {0, 0, . . . , 0} are of interest

If groebnerf detects a polynomial which formally conflicts with the restriction, it either splits the calculation into separate branches, or, if a violation of the restriction is determined, it cancels the current calculation branch.

greduce, preduce: Reduction of Polynomials

Background

Reduction of a polynomial “p” modulo a given set of polynomials “B” is done by the reduction algorithm incorporated in the Buchberger algorithm.
Informally it can be described for polynomials over a field as follows:

loop1: % head term elimination
   if there is a polynomial b in B such that the leading term
   of p is a multiple of the leading term of b
      do p := p - lt(p)/lt(b) * b   (the leading term vanishes)
   do this loop as long as possible;

loop2: % elimination of subsequent terms
   for each term s in p do
      if there is a polynomial b in B such that s is a multiple
      of the leading term of b
         do p := p - s/lt(b) * b    (the term s vanishes)
      do this loop as long as possible;

If the coefficients are taken from a ring without zero divisors, we cannot divide by every possible number as in the field case. But using the fact that in the field case c*p reduces to c*q whenever p reduces to q, for an arbitrary number c, the reduction in the ring case uses the least c which makes the (field) reduction of c*p integral. The result of this reduction is returned as the (ring) reduction of p, eventually after removing the content, i.e. the greatest common divisor of the coefficients. The result of this type of reduction is also called a pseudo reduction of p.

Reduction via Gröbner Basis Calculation

greduce(exp, {exp1, exp2, . . . , expm});

where exp is an expression, and {exp1, exp2, . . . , expm} is a list of any number of expressions or equations. greduce first converts the list of expressions {exp1, . . . , expm} to a Gröbner basis, and then reduces the given expression modulo that basis. An error results if the list of expressions is inconsistent. The returned value is an expression representing the reduced polynomial. As a side effect, greduce sets the variable gvarslast in the same manner as groebner does.

Reduction with Respect to Arbitrary Polynomials

preduce(exp, {exp1, exp2, . . . , expm});

where exp is an expression, and {exp1, exp2, . . . , expm} is a list of any number of expressions or equations. preduce reduces the given expression modulo the set {exp1, . . . , expm}.
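The field-case reduction described above (head term elimination, then elimination of the subsequent terms) can be modelled directly. The following Python sketch is an illustration of the algorithm only, not the package's implementation: polynomials are dicts mapping exponent tuples to rational coefficients, compared under lex order.

```python
from fractions import Fraction

def lt(p):
    """leading monomial and coefficient of p under lex order."""
    m = max(p)
    return m, p[m]

def reduce_poly(p, basis):
    """Full reduction of p modulo basis over a field: repeatedly
    eliminate any term divisible by some leading term of the basis,
    head term first, then the subsequent terms."""
    p = dict(p)
    changed = True
    while changed and p:
        changed = False
        for s in sorted(p, reverse=True):       # terms, head first
            for b in basis:
                mb, cb = lt(b)
                if all(e <= f for e, f in zip(mb, s)):
                    c = Fraction(p[s]) / cb     # cancels the term s
                    q = tuple(f - e for e, f in zip(mb, s))
                    for m, cm in b.items():     # p := p - c * x^q * b
                        mm = tuple(a + d for a, d in zip(m, q))
                        p[mm] = p.get(mm, Fraction(0)) - c * cm
                        if p[mm] == 0:
                            del p[mm]
                    changed = True
                    break
            if changed:
                break                           # rescan from the head
    return p

# reduce x^2*y modulo {x^2 - 1, y^2 - 1} (lex, x > y): the result is y
b1 = {(2, 0): 1, (0, 0): -1}
b2 = {(0, 2): 1, (0, 0): -1}
print(reduce_poly({(2, 1): 1}, [b1, b2]))
```

Because each elimination replaces a term by strictly smaller ones, the loop terminates; over a ring, the pseudo reduction described above would additionally scale p by the least suitable factor before each step.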
If this set is a Gröbner basis, the resulting reduced expression is uniquely determined. If not, then it depends on the sequence of the single reduction steps (see 27). preduce does not check whether {exp1, exp2, . . . , expm} is a Gröbner basis under the current term order. Therefore, if the expressions are a Gröbner basis calculated earlier with a variable sequence given explicitly or modified by optimization, the proper variable sequence and term order must be activated first.

Example 28 (preduce called with a Gröbner basis):

torder({x,y},lex);
gb:=groebner{3*x**2*y + 2*x*y + y + 9*x**2 + 5*x - 3,
             2*x**3*y - x*y - y + 6*x**3 - 2*x**2 - 3*x + 3,
             x**3*y + x**2*y + 3*x**3 + 2*x**2}$
preduce (5*y**2 + 2*x**2*y + 5/2*x*y + 3/2*y + 8*x**2
         + 3/2*x - 9/2, gb);

y^2

greduce_orders: Reduction with several term orders

The shortest polynomial reachable with different polynomial term orders is computed with the operator greduce_orders:

greduce_orders(exp, {exp1, exp2, . . . , expm} [, {v1, v2, . . . , vn}]);

where exp is an expression and {exp1, exp2, . . . , expm} is a list of any number of expressions or equations. The list of variables v1, v2, . . . , vn may be omitted; if given, the variables must form a list. The expression exp is reduced by greduce with the term orders in the shared variable gorders, which (if set) must be a list of term orders. By default it is set to

{revgradlex, gradlex, lex}

The shortest polynomial is the result. The order yielding the shortest polynomial is assigned to the shared variable gorder. A Gröbner basis of the system {exp1, exp2, . . . , expm} is computed for each element of gorders. With the default setting, gorder will in most cases be set to revgradlex. If the variable set is given, these variables are taken; otherwise all variables of the system {exp1, exp2, . . . , expm} are extracted.
The Gröbner basis computations can take some time; if interrupted, the intermediate result of the reduction, if one has been computed already, is assigned to the shared variable greduce_result. However, this is not necessarily the minimal form. If the variable gorders is to be set to orders with a parameter, the term order has to be replaced by a list; the first element is the term order selected, followed by its parameter(s), e.g.

gorders := {{gradlexgradlex, 2}, {lexgradlex, 2}}

Reduction Tree

In some cases not only are the results produced by greduce and preduce of interest, but the reduction process is of some value too. If the switch groebprot is set on, groebner, greduce and preduce produce as a side effect a trace of their work as a REDUCE list of equations in the shared variable groebprotfile. Its value is a list of equations with a variable “candidate” playing the role of the object to be reduced. The polynomials are cited as “poly1”, “poly2”, . . . . If read as assignments, these equations form a program which leads from the reduction input to its result. Note that, due to the pseudo reduction with a ring as the coefficient domain, the input coefficients may be changed by global factors.
Example 29:

on groebprot $
preduce (5*y**2 + 2*x**2*y + 5/2*x*y + 3/2*y + 8*x**2
         + 3/2*x - 9/2, gb);

y^2

groebprotfile;

{candidate=4*x^2*y + 16*x^2 + 5*x*y + 3*x + 10*y^2 + 3*y - 9,
 poly1=8*x^2 - 2*y^2 + 5*y + 3,
 poly2=2*y^3 - 3*y^2 - 16*y + 21,
 candidate=2*candidate,
 candidate= - x*y*poly1 + candidate,
 candidate= - 4*x*poly1 + candidate,
 candidate=4*candidate,
 candidate= - y^3*poly1 + candidate,
 candidate=2*candidate,
 candidate= - 3*y^2*poly1 + candidate,
 candidate=13*y*poly1 + candidate,
 candidate=candidate + 6*poly1,
 candidate= - 2*y^2*poly2 + candidate,
 candidate= - y*poly2 + candidate,
 candidate=candidate + 6*poly2}

This means

16*(5*y^2 + 2*x^2*y + 5/2*x*y + 3/2*y + 8*x^2 + 3/2*x - 9/2)
   = ( - 8*x*y - 32*x - 2*y^3 - 3*y^2 + 13*y + 6)*poly1
     + ( - 2*y^2 - 2*y + 6)*poly2 + y^2.

Tracing with groebnert and preducet

Given a set of polynomials {f1, . . . , fk} and their Gröbner basis {g1, . . . , gl}, it is well known that there are matrices of polynomials Cij and Dji such that

fi = Σj Cij gj    and    gj = Σi Dji fi

and these relations are sometimes needed explicitly. In BUCHBERGER [6], such cases are described in the context of linear polynomial equations. The standard technique for computing the above formulae is to perform Gröbner reductions, keeping track of the computation in terms of the input data. In the current package such calculations are performed with an (internally hidden) cofactor technique: the user has to assign unique names to the input expressions, and the arithmetic combinations are done with the expressions and with their names simultaneously. So the result is accompanied by an expression which relates it algebraically to the input values. There are two complementary operators with this feature: groebnert and preducet; functionally they correspond to groebner and preduce.
However, the sets of expressions here must be equations with unique single identifiers on their left sides, and the lhs are interpreted as the names of the expressions. The results are sets of equations (groebnert) or single equations (preducet), where a lhs is the computed value, while the rhs is its equivalent in terms of the input names.

Example 30

We calculate the Gröbner basis for an ellipse (named “p1”) and a line (named “p2”); p2 is a member of the basis immediately, and so the corresponding first result element is of a very simple form; the second member is a combination of p1 and p2, as shown on the rhs of this equation:

gb1:=groebnert {p1=2*x**2+4*y**2-100, p2=2*x-y+1};

gb1 := {2*x - y + 1=p2,
        9*y^2 - 2*y - 199= - 2*x*p2 - y*p2 + 2*p1 + p2}

Example 31

We want to reduce the polynomial x**2 wrt the above Gröbner basis and need knowledge of the reduction formula. We therefore extract the basis polynomials from gb1, assign unique names to them (here g1, g2) and call preducet. The polynomial to be reduced here is introduced with the name q, which then appears on the rhs of the result. If the name for the polynomial is omitted, its formal value is used on the right side too.

gb2 := for k := 1:length gb1 collect mkid(g,k) = lhs part(gb1,k)$

preducet (q=x**2,gb2);

 - 16*y + 208= - 18*x*g1 - 9*y*g1 + 36*q + 9*g1 - g2

This output means

x^2 = (1/2*x + 1/4*y - 1/4)*g1 + 1/36*g2 + ( - 4/9*y + 52/9).

Example 32

If we reduce a polynomial which is a member of the ideal, we consequently get a result with lhs zero:

preducet(q=2*x**2+4*y**2-100,gb2);

0= - 2*x*g1 - y*g1 + 2*q + g1 - g2

This means

q = (x + 1/2*y - 1/2)*g1 + 1/2*g2.

With these operators the matrices Cij and Dji are available implicitly: Dji as a side effect of groebnert, Cij by calls of preducet of the fi wrt {gj}. The latter by definition will have lhs zero and a rhs linear in the fi.
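Since the rhs of a traced equation is an ordinary polynomial identity, it can be spot-checked numerically. The following Python snippet (an independent sanity check, not a REDUCE feature) verifies the relation from Example 30 at random integer points:

```python
import random

# polynomials from Example 30 and the relation reported by groebnert:
#   9*y^2 - 2*y - 199 = -2*x*p2 - y*p2 + 2*p1 + p2
def p1(x, y):
    return 2*x**2 + 4*y**2 - 100

def p2(x, y):
    return 2*x - y + 1

random.seed(1)
for _ in range(100):
    x, y = random.randint(-50, 50), random.randint(-50, 50)
    lhs = 9*y**2 - 2*y - 199
    rhs = -2*x*p2(x, y) - y*p2(x, y) + 2*p1(x, y) + p2(x, y)
    assert lhs == rhs    # the traced combination reproduces the lhs
print("relation verified at 100 random points")
```

Agreement at more points than the degree of the identity would already prove it; expanding the rhs symbolically gives the same conclusion.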
If {1} is the Gröbner basis, the groebnert calculation gives a “proof”, showing how 1 can be computed as a combination of the input polynomials.

Remark: Compared to the non-tracing algorithms, these operators are much more time consuming. So they are applicable only to small sized problems.

Gröbner Bases for Modules

Given a polynomial ring, e.g. r = z[x1, . . . , xk], and an integer n > 1: the vectors with n elements of r form a module under vector addition (= componentwise addition) and multiplication with elements of r. For a submodule given by a finite basis a Gröbner basis can be computed, and the facilities of the groebner package can be used, except the operators groebnerf and groesolve. The vectors are encoded using auxiliary variables which represent the unit vectors in the module. E.g. using v1, v2, v3, the module element [x1^2, 0, x1 - x2] is represented as x1^2*v1 + x1*v3 - x2*v3. The use of v1, v2, v3 as unit vectors is set up by assigning the set of auxiliary variables to the shared variable gmodule, e.g.

gmodule := {v1,v2,v3};

After this declaration all monomials built from these variables are considered as an algebraically independent basis of a vector space. However, you had best use them only linearly. Once gmodule has been set, the auxiliary variables are automatically added to the end of each variable list (if they are not yet members there).

Example:

torder({x,y,v1,v2,v3},lex)$
gmodule := {v1,v2,v3}$
g:=groebner{x^2*v1 + y*v2, x*y*v1 - v3, 2*y*v1 + y*v3};

g := {x^2*v1 + y*v2,
      x*v3 + y^2*v2,
      y^3*v2 - 2*v3,
      2*y*v1 + y*v3}

preduce((x+y)^3*v1,g);

 - x*y*v2 - 1/2*y^3*v3 - 3*y^2*v2 + 3*y*v3

In many cases a total degree oriented term order will be adequate for computations in modules, e.g. for all cases where the submodule membership is investigated. However, arranging the auxiliary variables in an elimination oriented term order can give interesting results. E.g.
p1:=(x-1)*(x^2-x+3)$
p2:=(x-1)*(x^2+x-5)$
gmodule := {v1,v2,v3};
torder({v1,x,v2,v3},lex)$
gb:=groebner {p1*v1+v2,p2*v1+v3};

gb := {30*v1*x - 30*v1 + x*v2 - x*v3 + 5*v2 - 3*v3,
       x^2*v2 - x^2*v3 + x*v2 + x*v3 - 5*v2 - 3*v3}

g:=coeffn(first gb,v1,1);

g := 30*(x - 1)

c1:=coeffn(first gb,v2,1);

c1 := x + 5

c2:=coeffn(first gb,v3,1);

c2 := - x - 3

c1*p1 + c2*p2;

30*(x - 1)

Here two polynomials are entered as vectors [p1, 1, 0] and [p2, 0, 1]. Using a term ordering such that the first dimension ranges highest and the other components lowest, a classical cofactor computation is executed just as in the extended Euclidean algorithm. Consequently the leading polynomial in the resulting basis shows the greatest common divisor of p1 and p2, found as the coefficient of v1, while the coefficients of v2 and v3 are the cofactors c1 and c2 of the polynomials p1 and p2 with the relation gcd(p1, p2) = c1*p1 + c2*p2.

Additional Orderings

Besides the basic orderings, there are ordering options that are used for special purposes.

Separating the Variables into Groups

It is often desirable to separate variables and formal parameters in a system of polynomials. This can be done with a lex Gröbner basis. That, however, may be hard to compute, as it does more separation than necessary. The following orderings group the variables into two (or more) sets, where inside each set a classical ordering acts, while the sets are handled via their total degrees, which are compared in elimination style. So the Gröbner basis will eliminate the members of the first set, if algebraically possible. torder here gets an additional parameter which describes the grouping:

torder (vl,gradlexgradlex, n)
torder (vl,gradlexrevgradlex,n)
torder (vl,lexgradlex, n)
torder (vl,lexrevgradlex, n)

Here the integer n is the number of variables in the first group, and the names combine the local orderings for the first and second group, e.g. lexgradlex, 3 for {x1, x2, x3, x4, x5}:

x1^i1 * . . . * x5^i5 >> x1^j1 * . . . * x5^j5
   if (i1, i2, i3) >lex (j1, j2, j3),
   or (i1, i2, i3) = (j1, j2, j3) and (i4, i5) >gradlex (j4, j5)

Note that in the second place there is no lex ordering available; that would not make sense.

Weighted Ordering

The statement

torder (vl,weighted, {n1, n2, n3 . . .}) ;

establishes a graduated ordering, where the exponents are first multiplied by the given weights. If there are fewer weight values than variables, the weight 1 is added automatically. If the weighted degree calculation is not decidable, a lex comparison follows.

Graded Ordering

The statement

torder (vl,graded, {n1, n2, n3 . . .}, order2) ;

establishes a graduated ordering, where the exponents are first multiplied by the given weights. If there are fewer weight values than variables, the weight 1 is added automatically. If the weighted degree calculation is not decidable, the term order order2 specified in the following argument(s) is used. The ordering graded is designed primarily for use with the operator dd_groebner.

Matrix Ordering

The statement

torder (vl,matrix, m) ;

where m is a matrix with integer elements whose row length corresponds to the number of variables. The exponents of each monomial form a vector; two monomials are compared by first multiplying their exponent vectors by m and then comparing the resulting vectors lexicographically. E.g. the unit matrix establishes the classical lex term order mode; a matrix with a first row of ones followed by the rows of a unit matrix corresponds to the gradlex ordering.

The matrix m must have at least as many rows as columns; a non-square matrix contains redundant rows. The matrix must have full rank, and the top non-zero element of each column must be positive.

The generality of the matrix based term order has its price: the computing time spent in the term sorting is significantly higher than with the specialized term orders.
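The matrix comparison rule is easy to model. The following Python sketch (illustrative only, not the package's implementation) also shows how the unit matrix reproduces lex and how a first row of ones followed by the unit matrix reproduces gradlex:

```python
def matrix_compare(m, e1, e2):
    """Compare two exponent vectors under the matrix term order m:
    multiply each vector by m, then compare the image vectors
    lexicographically. Returns 1 if e1 > e2, -1 if e1 < e2, 0 if equal."""
    v1 = [sum(row[i] * e1[i] for i in range(len(e1))) for row in m]
    v2 = [sum(row[i] * e2[i] for i in range(len(e2))) for row in m]
    return (v1 > v2) - (v1 < v2)

lex_matrix = [[1, 0],
              [0, 1]]              # unit matrix: classical lex
gradlex_matrix = [[1, 1],          # first row of ones: total degree
                  [1, 0],
                  [0, 1]]          # then the unit matrix rows

# x*y^2 (exponents (1,2)) versus x^2 (exponents (2,0)):
# lex prefers x^2, gradlex prefers the higher total degree of x*y^2
print(matrix_compare(lex_matrix, (1, 2), (2, 0)),
      matrix_compare(gradlex_matrix, (1, 2), (2, 0)))
```

The full-rank condition mentioned above guarantees that distinct monomials never map to the same image vector, so the comparison is a genuine total order.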
To overcome this overhead, you can compile a matrix term order; the compilation reduces the computing time overhead significantly. If you set the switch comp on, any new order matrix is compiled when an operator of the groebner package accesses it for the first time. Alternatively you can compile a matrix explicitly:

torder_compile(<n>,<m>);

where <n> is a name (an identifier) and <m> is a term order matrix. torder_compile transforms the matrix into a LISP program, which is compiled by the LISP compiler when comp is on or when you generate a fast loadable module. Later you can activate the new term order by using the name <n> in a torder statement as the term ordering mode.

Gröbner Bases for Graded Homogeneous Systems

For a homogeneous system of polynomials under a term order graded, gradlex, revgradlex or weighted, a Gröbner basis can be computed with a limit on the grade of the intermediate s-polynomials:

dd_groebner (d1,d2,{p1, p2, . . .});

where d1 is a non-negative integer and d2 is an integer > d1 or “infinity”. A pair of polynomials is considered only if the grade of the lcm of their head terms is between d1 and d2. See [4] for the mathematical background. For the term orders graded or weighted the (first) weight vector is used for the grade computation. Otherwise the total degree of a term is used.

16.28.4 Ideal Decomposition & Equation System Solving

Based on the elementary Gröbner operations, the groebner package offers additional operators which allow the decomposition of an ideal or of a system of equations down to the individual solutions.
Solutions Based on Lex Type Gröbner Bases

groesolve: Solution of a Set of Polynomial Equations

The groesolve operator incorporates a macro algorithm: lexical Gröbner bases are computed by groebnerf and decomposed into simpler ones by ideal decomposition techniques; if algebraically possible, the problem is reduced to univariate polynomials which are solved by solve; if rounded is on, numerical approximations are computed for the roots of the univariate polynomials.

groesolve({exp1, exp2, . . . , expm} [, {var1, var2, . . . , varn}]);

where {exp1, exp2, . . . , expm} is a list of any number of expressions or equations, and {var1, var2, . . . , varn} is an optional list of variables.

The result is a set of subsets. The subsets contain the solutions of the polynomial equations. If there are only finitely many solutions, then each subset is a set of expressions of triangular type {exp1, exp2, . . . , expn}, where exp1 depends only on var1, exp2 depends only on var1 and var2, etc., up to expn, which depends on var1, . . . , varn. This allows a successive determination of the solution components. If there are infinitely many solutions, some subsets consist of fewer than n expressions. By considering some of the variables as “free parameters”, these subsets are usually again of triangular type.

Example 33 (Intersections of a line with a circle):

groesolve({x**2 - y**2 - a, p*x + q*y + s}, {x, y});

{{x=(sqrt( - a*p^2 + a*q^2 + s^2)*q - p*s)/(p^2 - q^2),
  y= - (sqrt( - a*p^2 + a*q^2 + s^2)*p - q*s)/(p^2 - q^2)},
 {x= - (sqrt( - a*p^2 + a*q^2 + s^2)*q + p*s)/(p^2 - q^2),
  y=(sqrt( - a*p^2 + a*q^2 + s^2)*p + q*s)/(p^2 - q^2)}}

If the system is zero-dimensional (has a finite number of isolated solutions), the algorithm described in [13] is used if the decomposition leaves a polynomial with a mixed leading term. Hillebrand wrote this article, and Möller was the tutor of this work.
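The closed-form solutions above can be back-substituted numerically. A small Python check (not part of the package; the parameter values are arbitrary samples, chosen so that the radicand is nonnegative and p^2 - q^2 is nonzero):

```python
from math import sqrt, isclose

a, p, q, s = 1.0, 1.0, 2.0, 1.0          # sample parameter values
d = sqrt(-a*p**2 + a*q**2 + s**2)        # the common radical

# the two solution branches returned by groesolve
solutions = [
    ((d*q - p*s) / (p**2 - q**2),   -(d*p - q*s) / (p**2 - q**2)),
    (-(d*q + p*s) / (p**2 - q**2),   (d*p + q*s) / (p**2 - q**2)),
]
for x, y in solutions:
    # both equations of the input system must vanish
    assert isclose(x**2 - y**2 - a, 0.0, abs_tol=1e-9)
    assert isclose(p*x + q*y + s, 0.0, abs_tol=1e-9)
print("both branches satisfy the system")
```

For these values the branches come out as (x, y) = (-1, 0) and (5/3, -4/3), and both satisfy the original equations exactly up to rounding.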
The reordering of the groesolve variables is controlled by the REDUCE switch varopt. If varopt is on (the default), the variable sequence is optimized (the variables are reordered). If varopt is off, the given variable sequence is taken (if no variables are given, the order of the REDUCE system is taken instead). In general, the reordering of the variables makes the Gröbner basis computation significantly faster. A variable dependency, declared by one (or several) depend statements, is respected (if varopt is on). The switch groebopt has no meaning for groesolve; its state is saved during the processing.

groepostproc: Postprocessing of a Gröbner Basis

In many cases it is difficult to do the general Gröbner processing. If a Gröbner basis with a lex ordering has already been calculated (e.g., by very individual parameter settings), the solutions can be derived from it by a call to groepostproc. groesolve is functionally equivalent to a call to groebnerf and subsequent calls to groepostproc for each partial basis.

groepostproc({exp1, exp2, . . . , expm} [, {var1, var2, . . . , varn}]);

where {exp1, exp2, . . . , expm} is a list of any number of expressions, and {var1, var2, . . . , varn} is an optional list of variables. The expressions must be a lex Gröbner basis with the given variables; the ordering must still be active.

The result is the same as with groesolve.
groepostproc({x3**2 + x3 + x2 - 1,
              x2*x3 + x1*x3 + x3 + x1*x2 + x1 + 2,
              x2**2 + 2*x2 - 1,
              x1**2 - 2}, {x3,x2,x1});

{{x3= - sqrt(2), x2=sqrt(2) - 1, x1=sqrt(2)},

 {x3=sqrt(2), x2= - (sqrt(2) + 1), x1= - sqrt(2)},

 {x3=(sqrt(4*sqrt(2) + 9) - 1)/2,
  x2= - (sqrt(2) + 1), x1=sqrt(2)},

 {x3= - (sqrt(4*sqrt(2) + 9) + 1)/2,
  x2= - (sqrt(2) + 1), x1=sqrt(2)},

 {x3=(sqrt( - 4*sqrt(2) + 9) - 1)/2,
  x2=sqrt(2) - 1, x1= - sqrt(2)},

 {x3= - (sqrt( - 4*sqrt(2) + 9) + 1)/2,
  x2=sqrt(2) - 1, x1= - sqrt(2)}}

Idealquotient: Quotient of an Ideal and an Expression

Let i be an ideal and f be a polynomial in the same variables. Then the algebraic quotient is defined by

i : f = {p | p*f member of i}.

The ideal quotient i : f contains i and is obviously part of the whole polynomial ring, i.e. contained in {1}. The case i : f = {1} is equivalent to f being a member of i. The other extremal case, i : f = i, occurs when f does not vanish at any general zero of i. The explanation of the notion “general zero”, introduced by van der Waerden, is however beyond the aim of this manual. The operation of groesolve/groepostproc is based on nested ideal quotient calculations.

If i is given by a basis and f is given as an expression, the quotient can be calculated by

idealquotient({exp1, . . . , expm}, exp);

where {exp1, exp2, . . . , expm} is a list of any number of expressions or equations, and exp is a single expression or equation. idealquotient calculates the algebraic quotient of the ideal i with the basis {exp1, exp2, . . . , expm} and exp with respect to the variables given or extracted. {exp1, exp2, . . . , expm} is not necessarily a Gröbner basis. The result is the Gröbner basis of the quotient.
Saturation: Saturation of an Ideal and an Expression

The saturation operator computes the quotient of an ideal by an arbitrary power exp**n of an expression, with arbitrary n. The call is

saturation({exp1, . . . , expm}, exp);

where {exp1, exp2, . . . , expm} is a list of any number of expressions or equations, and exp is a single expression or equation. saturation calls idealquotient several times, until the result is stable, and returns it.

Operators for Gröbner Bases in all Term Orderings

In some cases where no Gröbner basis with lexical ordering can be calculated, a calculation with a total degree ordering is still possible. Then the Hilbert polynomial gives information about the dimension of the solution space, and for finite sets of solutions univariate polynomials can be calculated. The solution of the equation system is then contained in the cross product of all solutions of all univariate polynomials.

Hilbertpolynomial: Hilbert Polynomial of an Ideal

This algorithm was contributed by JOACHIM HOLLMAN, Royal Institute of Technology, Stockholm (private communication).

hilbertpolynomial({exp1, . . . , expm}) ;

where {exp1, . . . , expm} is a list of any number of expressions or equations. hilbertpolynomial calculates the Hilbert polynomial of the ideal with basis {exp1, . . . , expm} with respect to the variables given or extracted, provided the given term ordering is compatible with the degree, such as the gradlex- or revgradlex-ordering. The term ordering of the basis must be active, and {exp1, . . . , expm} should be a Gröbner basis with respect to this ordering.

The Hilbert polynomial gives information about the cardinality of the solutions of the system {exp1, . . . , expm}: if the Hilbert polynomial is an integer, the system has only a discrete set of solutions and the polynomial is identical with the number of solutions counted with their multiplicities. Otherwise the degree of the Hilbert polynomial is the dimension of the solution space.
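The constant case can be illustrated by counting "standard monomials", i.e. monomials divisible by no leading term of the basis. The following Python sketch (illustrative only, not the package's implementation) assumes the zero-dimensional situation in which a pure power of every variable occurs among the leading terms, so the count is finite:

```python
from itertools import product

def count_standard_monomials(leading_exponents):
    """Number of monomials divisible by no leading term of the basis;
    for a zero-dimensional ideal this equals the (constant) Hilbert
    polynomial, i.e. the number of solutions with multiplicity."""
    n = len(leading_exponents[0])
    # a pure power of each variable occurs among the leading terms
    # (zero-dimensional case), so every standard monomial lies below
    # these componentwise bounds
    bound = [max(e[i] for e in leading_exponents) for i in range(n)]
    count = 0
    for mono in product(*(range(b + 1) for b in bound)):
        divisible = any(all(mono[i] >= e[i] for i in range(n))
                        for e in leading_exponents)
        if not divisible:
            count += 1
    return count

# leading terms x^2 and y^3: the standard monomials are
# 1, x, y, x*y, y^2, x*y^2, so there are 6 solutions with multiplicity
print(count_standard_monomials([(2, 0), (0, 3)]))
```

This also makes the remark below concrete: the count depends only on the leading terms, which is why the gltb value suffices for a subsequent Hilbert calculation.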
If the Hilbert polynomial is not a constant, it is constructed with the variable “x” regardless of whether x is a member of {var1, . . . , varn} or not. The value of this polynomial at sufficiently large numbers “x” is the difference of the dimension of the linear vector space of all polynomials of degree ≤ x and the dimension of the subspace of all polynomials of degree ≤ x which also belong to the ideal. x must be an undefined variable, or the value of x must be an undefined variable; otherwise a warning is given and a new (generated) variable is taken instead.

Remark: The number of zeros in an ideal and the Hilbert polynomial depend only on the leading terms of the Gröbner basis. So if a subsequent Hilbert calculation is planned, the Gröbner calculation should be performed with on gltbasis, and the value of gltb (or its elements, in a groebnerf context) should be given to hilbertpolynomial. In this manner a lot of computing time can be saved in the case of long calculations.

16.28.5 Calculations “by Hand”

The following operators support explicit calculations with polynomials in a distributive representation at the REDUCE top level. So they allow one to do Gröbner type evaluations stepwise by separate calls. Note that the normal REDUCE arithmetic can be used for arithmetic combinations of monomials and polynomials.

Representing Polynomials in Distributive Form

gsort p;

where p is a polynomial or a list of polynomials.

If p is a single polynomial, the result is a reordered version of p in the distributive representation according to the variables and the current term order mode; if p is a list, its members are converted into distributive representation and the result is the list sorted by the term ordering of the leading terms; zero polynomials are eliminated from the result.

torder({alpha,beta,gamma},lex);
dip := gsort(gamma*(alpha-1)**2*(beta+1)**2);
dip := alpha**2*beta**2*gamma + 2*alpha**2*beta*gamma + alpha**2*gamma - 2*alpha*beta**2*gamma - 4*alpha*beta*gamma - 2*alpha*gamma + beta**2*gamma + 2*beta*gamma + gamma Splitting of a Polynomial into Leading Term and Reductum gsplit p; where p is a polynomial. gsplit converts the polynomial p into distributive representation and splits it into leading monomial and reductum. The result is a list with two elements, the leading monomial and the reductum. gsplit dip; {alpha**2*beta**2*gamma, 2*alpha**2*beta*gamma + alpha**2*gamma - 2*alpha*beta**2*gamma - 4*alpha*beta*gamma - 2*alpha*gamma + beta**2*gamma + 2*beta*gamma + gamma} Calculation of Buchberger’s S-polynomial gspoly(p1, p2); where p1 and p2 are polynomials. gspoly calculates the S-polynomial from p1 and p2. Example of a complete calculation (taken from Davenport et al. [9]): torder({x,y,z},lex)$ g1 := x**3*y*z - x*z**2; g2 := x*y**2*z - x*y*z; g3 := x**2*y**2 - z$ % first S-polynomial g4 := gspoly(g2,g3)$ g4 := x**2*y*z - z**2 % next S-polynomial p := gspoly(g2,g4)$ p := x**2*y*z - y*z**2 % and reducing, here only by g4 g5 := preduce(p,{g4}); g5 := - y*z**2 + z**2 % last S-polynomial g6 := gspoly(g4,g5); g6 := x**2*z**2 - z**3 % and the final basis sorted descending gsort{g2,g3,g4,g5,g6}; {x**2*y**2 - z, x**2*y*z - z**2, x**2*z**2 - z**3, x*y**2*z - x*y*z, - y*z**2 + z**2} Bibliography [1] Beatrice Amrhein and Oliver Gloor. The fractal walk. In Bruno Buchberger and Franz Winkler, editors, Gröbner Bases and Applications, volume 251 of LMS, pages 305–322. Cambridge University Press, February 1998. [2] Beatrice Amrhein, Oliver Gloor, and Wolfgang Kuechlin. How fast does the walk run? In Alain Carriere and Louis Remy Oudin, editors, 5th Rhine Workshop on Computer Algebra, volume PR 801/96, pages 8.1–8.9. Institut Franco–Allemand de Recherches de Saint–Louis, January 1996. [3] Beatrice Amrhein, Oliver Gloor, and Wolfgang Kuechlin.
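The S-polynomial computed by gspoly can be reproduced for the session above with a small Python sketch. Polynomials are dicts from (x, y, z) exponent tuples to coefficients; normalizing the leading coefficient to +1 is an assumption made here so the result matches the printed output:

```python
from fractions import Fraction

def lt(p):
    """Leading (exponent, coefficient) in lex order with x > y > z."""
    e = max(p)                      # tuple comparison is lex comparison
    return e, p[e]

def add_scaled(acc, p, shift, scale):
    """acc += scale * x**shift * p, dropping cancelled terms."""
    for e, c in p.items():
        e2 = tuple(a + b for a, b in zip(e, shift))
        acc[e2] = acc.get(e2, Fraction(0)) + scale * c
        if acc[e2] == 0:
            del acc[e2]

def spoly(f, g):
    """Buchberger's S-polynomial of f and g."""
    (ef, cf), (eg, cg) = lt(f), lt(g)
    lcm = tuple(max(a, b) for a, b in zip(ef, eg))
    s = {}
    add_scaled(s, f, tuple(l - a for l, a in zip(lcm, ef)), Fraction(1) / cf)
    add_scaled(s, g, tuple(l - a for l, a in zip(lcm, eg)), Fraction(-1) / cg)
    if s and s[max(s)] < 0:         # normalize the leading coefficient
        s = {e: -c for e, c in s.items()}
    return s

g2 = {(1, 2, 1): Fraction(1), (1, 1, 1): Fraction(-1)}  # x*y**2*z - x*y*z
g3 = {(2, 2, 0): Fraction(1), (0, 0, 1): Fraction(-1)}  # x**2*y**2 - z
g4 = spoly(g2, g3)   # x**2*y*z - z**2, as in the session above
```

Here the leading monomials x*y**2*z and x**2*y**2 have lcm x**2*y**2*z; the generic leading terms cancel in the difference, leaving the first S-polynomial of the session.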
Walking faster. In J. Calmet and C. Limongelli, editors, Design and Implementation of Symbolic Computation Systems, volume 1128 of Lecture Notes in Computer Science, pages 150–161. Springer, 1996. [4] Thomas Becker and Volker Weispfenning. Gröbner Bases. Springer, 1993. [5] W. Boege, R. Gebauer, and H. Kredel. Some examples for solving systems of algebraic equations by calculating Gröbner bases. J. Symbolic Computation, 2(1):83–98, March 1986. [6] Bruno Buchberger. Gröbner bases: An algorithmic method in polynomial ideal theory. In N. K. Bose, editor, Progress, directions and open problems in multidimensional systems theory, pages 184–232. Dordrecht: Reidel, 1985. [7] Bruno Buchberger. Applications of Gröbner bases in non-linear computational geometry. In R. Janssen, editor, Trends in Computer Algebra, pages 52–80. Berlin, Heidelberg, 1988. [8] S. Collart, M. Kalkbrener, and D. Mall. Converting bases with the Gröbner walk. J. Symbolic Computation, 24:465–469, 1997. [9] James H. Davenport, Yves Siret, and Evelyne Tournier. Computer Algebra, Systems and Algorithms for Algebraic Computation. Academic Press, 1989. [10] J. C. Faugère, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero-dimensional Gröbner bases by change of ordering. Technical report, 1989. [11] Rüdiger Gebauer and H. Michael Möller. On an installation of Buchberger’s algorithm. J. Symbolic Computation, 6(2 and 3):275–286, 1988. [12] A. Giovini, T. Mora, G. Niesi, L. Robbiano, and C. Traverso. One sugar cube, please, or selection strategies in the Buchberger algorithm. In Proc. of ISSAC ’91, pages 49–55, 1991. [13] Dietmar Hillebrand. Triangulierung nulldimensionaler Ideale – Implementierung und Vergleich zweier Algorithmen (in German). Diplomarbeit im Studiengang Mathematik der Universität Dortmund. Betreuer: Prof. Dr. H. M. Möller. Technical report, 1999. [14] Heinz Kredel. Admissible termorderings used in computer algebra systems. SIGSAM Bulletin, 22(1):28–31, January 1988.
[15] Heinz Kredel and Volker Weispfenning. Computing dimension and independent sets for polynomial ideals. J. Symbolic Computation, 6(1):231–247, November 1988. [16] Herbert Melenk, H. Michael Möller, and Winfried Neun. On Gröbner bases computation on a supercomputer using REDUCE. Preprint SC 88-2, Konrad-Zuse-Zentrum für Informationstechnik Berlin, January 1988. 16.29 GUARDIAN: Guarded Expressions in Practice Computer algebra systems typically drop some degenerate cases when evaluating expressions, e.g., x/x becomes 1, dropping the case x = 0. We claim that it is feasible in practice to also compute the degenerate cases, yielding guarded expressions. We work over real closed fields, but our ideas about handling guarded expressions can be easily transferred to other situations. Using formulas as guards provides a powerful tool for heuristically reducing the combinatorial explosion of cases: equivalent, redundant, tautological, and contradictive cases can be detected by simplification and quantifier elimination. Our approach makes it possible to simplify the expressions on the basis of simplification knowledge on the logical side. The method described in this paper is implemented in the REDUCE package GUARDIAN. Authors: Andreas Dolzmann and Thomas Sturm. 16.29.1 Introduction It is by now a well-known fact that evaluations obtained with the interactive use of computer algebra systems (CAS) are not entirely correct in general. Typically, some degenerate cases are dropped. Consider for instance the evaluation x²/x = x, which is correct only if x ≠ 0. The problem here is that CAS consider variables to be transcendental elements. The user, in contrast, has in mind variables in the sense of logic. In other words: the user does not think of rational functions but of terms. Next consider the valid expression (√x + √(−x))/x. It is meaningless over the reals.
CAS often offer no choice but to interpret surds over the complex numbers even if they distinguish between a real and a complex mode. Corless and Jeffrey [4] have examined the behavior of a number of CAS with such input data. They come to the conclusion that simultaneous computation of all cases is exemplary but not feasible due to the combinatorial explosion of cases to be considered. Therefore, they suggest ignoring the degenerate cases but providing the assumptions to the user on request. We claim, in contrast, that it is in fact feasible to compute all possible cases. Our setting is as follows: expressions are evaluated to guarded expressions consisting of possibly several conventional expressions guarded by quantifier-free formulas. For the above examples, we would obtain

[ x ≠ 0 : x ],   [ F : (√x + √(−x))/x ].

As the second example illustrates, we are working in ordered fields, more precisely in real closed fields. The handling of guarded expressions as described in this paper can, however, be easily transferred to other situations. Our approach can also deal with redundant guarded expressions, such as

[ T     : |x| − x
  x ≥ 0 : 0
  x < 0 : −2x ]

which leads to algebraic simplification techniques based on logical simplification as proposed by Davenport and Faure [5]. We use formulas over the language of ordered rings as guards. This provides powerful tools for heuristically reducing the combinatorial explosion of cases: equivalent, redundant, tautological, and contradictive cases can be detected by simplification [6] and quantifier elimination [17, 3, 18, 15, 21, 20]. In certain situations, we will also allow the formulas to contain extra functions such as √· or |·|. Then we take care that no quantifier elimination is applied. Simultaneous computation of several cases concerning certain expressions being zero or not has been extensively investigated as dynamic evaluation [12, 10, 11, 2]. It has also been extended to real closed fields [9].
The idea behind the development of these methods is of a more theoretical nature than overcoming the problems with the interactive usage of CAS sketched above: one wishes to compute in algebraic (or real) extension fields of the rationals. Guarded expressions occur naturally when solving problems parametrically. Consider, e.g., the Gröbner systems used during the computation of comprehensive Gröbner bases [19]. The algorithms described in this paper are implemented in the REDUCE package GUARDIAN. It is based on the REDUCE [13, 16] package REDLOG [7, 8] implementing a formula data type with corresponding algorithms, in particular including simplification and quantifier elimination. 16.29.2 An outline of our method Guarded expressions A guarded expression is a scheme

[ γ0 : t0
  γ1 : t1
  ...
  γn : tn ]

where each γi is a quantifier-free formula, the guard, and each ti is an associated conventional expression. The idea is that some ti is a valid interpretation iff γi holds. Each pair (γi, ti) is called a case. The first case (γ0, t0) is the generic case: t0 is the expression the system would compute without our package, and γ0 is the corresponding guard. The guards γi need neither exclude one another, nor do we require that they form a complete case distinction. We shall, however, assume that all cases covered by a guarded expression are already covered by the generic case; in other words:

⋀_{i=1}^{n} (γi → γ0).   (16.74)

Consider the following evaluation of |x| to a guarded expression:

[ T     : |x|
  x ≥ 0 : x
  x < 0 : −x ]

Here the non-generic cases already cover the whole domain. The generic case is in some way redundant; it is just present for keeping track of the system’s default behavior. Formally we have

⋁_{i=1}^{n} γi ↔ γ0.   (16.75)

As an example of a non-redundant, i.e., necessary generic case we have the evaluation of the reciprocal 1/x:

[ x ≠ 0 : 1/x ].
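A guarded expression is thus just a list of guard/term pairs with a distinguished first case. A minimal Python model, with guards as predicates over a variable assignment (purely illustrative, not the GUARDIAN data structure):

```python
# A guarded expression: list of (guard, term) cases, index 0 generic.
# Guards are predicates over an environment mapping names to values.
def interpret(gexpr, env):
    """Terms of all cases whose guard holds under env; any of them
    is a valid interpretation of the guarded expression there."""
    return [term for guard, term in gexpr if guard(env)]

abs_x = [
    (lambda e: True,        "abs(x)"),   # generic case, guard T
    (lambda e: e["x"] >= 0, "x"),
    (lambda e: e["x"] < 0,  "-x"),
]

print(interpret(abs_x, {"x": -2}))   # ['abs(x)', '-x']
```

For every x at least the generic case fires, which is exactly condition (16.74): every non-generic guard implies the generic one.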
In every guarded expression, the generic case is explicitly marked as either necessary or redundant. The corresponding tag is inherited during the evaluation process. Unfortunately it can happen that guarded expressions satisfy (16.75) without being tagged redundant, e.g., specialization of

[ T     : sin x
  x = 0 : 0 ]

to x = 0 if the system cannot evaluate sin(0). This does not happen if one requires necessary generic cases to have, like the reciprocal above, no alternative cases at all. Otherwise, in the sequel “redundant generic case” has to be read as “tagged redundant.” With guarded expressions, the evaluation splits into two independent parts: algebraic evaluation and a subsequent simplification of the guarded expression obtained. Guarding schemes In the introduction we have seen that certain operators introduce case distinctions. For this, with each operator f there is a guarding scheme associated, providing information on how to map f(t1, . . . , tm) to a guarded expression provided that one does not have to care about the argument expressions t1, . . . , tm. In the easiest case, this is a rewrite rule f(a1, . . . , am) → G(a1, . . . , am). The actual terms t1, . . . , tm are simply substituted for the formal symbols a1, . . . , am into the generic guarded expression G(a1, . . . , am). We give some examples:

a1/a2 → [ a2 ≠ 0 : a1/a2 ]

√a1 → [ a1 ≥ 0 : √a1 ]

sign(a1) → [ T      : sign(a1)
             a1 > 0 : 1
             a1 = 0 : 0
             a1 < 0 : −1 ]

|a1| → [ T      : |a1|
         a1 ≥ 0 : a1
         a1 < 0 : −a1 ]   (16.76)

For functions of arbitrary arity, e.g., min or max, we formally assume infinitely many operators of the same name. Technically, we associate a procedure parameterized with the number of arguments m that generates the corresponding rewrite rule. As min_scheme(2) we obtain, e.g.,

min(a1, a2) → [ T       : min(a1, a2)
                a1 ≤ a2 : a1
                a2 ≤ a1 : a2 ]   (16.77)

while for higher arities more case distinctions are necessary.
For later complexity analysis, we state the concept of a guarding scheme formally: a guarding scheme for an m-ary operator f is a map gscheme_f : E^m → GE, where E is the set of expressions and GE is the set of guarded expressions. This makes it possible to split f(t1, . . . , tm) depending on the form of the parameter expressions t1, . . . , tm. Algebraic evaluation Evaluating conventional expressions The evaluation of conventional expressions into guarded expressions is performed recursively: constants c evaluate to

[ T : c ].

For the evaluation of f(e1, . . . , em) the argument expressions e1, . . . , em are recursively evaluated to guarded expressions

e′i = [ γi0 : ti0
        γi1 : ti1
        ...
        γini : tini ]   for 1 ≤ i ≤ m.   (16.78)

Then the operator f is “moved inside” the e′i by combining all cases, technically a simultaneous Cartesian product computation of both the sets of guards and the sets of terms:

Γ = ∏_{i=1}^{m} {γi0, . . . , γini},   T = ∏_{i=1}^{m} {ti0, . . . , tini}.   (16.79)

This leads to the intermediate result

[ γ10 ∧ · · · ∧ γm0   : f(t10, . . . , tm0)
  ...
  γ1n1 ∧ · · · ∧ γm0  : f(t1n1, . . . , tm0)
  ...
  γ1n1 ∧ · · · ∧ γmnm : f(t1n1, . . . , tmnm) ]   (16.80)

The new generic case is exactly the combination of the generic cases of the e′i. It is redundant if at least one of these combined cases is redundant. Next, all non-generic cases containing at least one redundant generic constituent γi0 in their guard are deleted. The reason for this is that generic cases are only used to keep track of the system default behavior. All other cases get the status of a non-generic case even if they contain necessary generic constituents in their guard. At this point, we apply the guarding scheme of f to all remaining expressions f(t1i1, . . . , tmim) in the form (16.80), yielding a nested guarded expression

[ Γ0 : [ δ00 : u00
         ...
         δ0k0 : u0k0 ]
  ...
  ΓN : [ δN0 : uN0
         ...
         δNkN : uNkN ] ]   (16.81)
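The simultaneous Cartesian product of cases in (16.79)/(16.80) is a plain product over the argument case lists. A Python sketch with guards and terms as strings (illustrative only, not GUARDIAN internals):

```python
from itertools import product

def combine(op, args):
    """Move op inside guarded-expression arguments by forming the
    simultaneous Cartesian product of their cases: each result case
    conjoins one guard per argument and applies op to the terms."""
    result = []
    for chosen in product(*args):        # one (guard, term) per argument
        guard = " and ".join(g for g, _ in chosen)
        term = op + "(" + ", ".join(t for _, t in chosen) + ")"
        result.append((guard, term))
    return result

e1 = [("T", "x")]                                        # x
e2 = [("T", "abs(x)"), ("x >= 0", "x"), ("x < 0", "-x")] # |x|
cases = combine("min", [e1, e2])
# [('T and T', 'min(x, abs(x))'), ('T and x >= 0', 'min(x, x)'),
#  ('T and x < 0', 'min(x, -x)')]
```

The number of combined cases is the product of the argument case counts, which is the source of the exponential growth analyzed in the worst-case complexity section below.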
which can be straightforwardly resolved to a guarded expression

[ Γ0 ∧ δ00  : u00
  ...
  Γ0 ∧ δ0k0 : u0k0
  ...
  ΓN ∧ δN0  : uN0
  ...
  ΓN ∧ δNkN : uNkN ]

This form is treated analogously to the form (16.80): the new generic case (Γ0 ∧ δ00, u00) is redundant if at least one of (Γ0, f(t10, . . . , tm0)) and (δ00, u00) is redundant. Among the non-generic cases, all those containing redundant generic constituents in their guard are deleted, and all those containing necessary generic constituents in their guard get the status of an ordinary non-generic case. Finally, the standard evaluator of the system—reval in the case of REDUCE—is applied to all contained expressions, which completes the algebraic part of the evaluation. Evaluating guarded expressions The previous section was concerned with the evaluation of pure conventional expressions into guarded expressions. Our system currently combines both conventional and guarded expressions. We are thus faced with the problem of treating guarded subexpressions during evaluation. When a guarded subexpression ei is detected during evaluation, all contained expressions are recursively evaluated to guarded expressions, yielding a nested guarded expression of the form (16.81). This is resolved as described above, yielding the evaluation subresult e′i. As a special case, this explains how guarded expressions are (re)evaluated to guarded expressions. Example We describe the evaluation of the expression min(x, |x|). The first argument e1 = x evaluates recursively to

e′1 = [ T : x ]   (16.82)

with a necessary generic case. The nested x inside e2 = |x| evaluates to the same form (16.82). For obtaining e′2, we apply the guarding scheme (16.76) of the absolute value to the only term of (16.82), yielding

[ T : [ T     : |x|
        x ≥ 0 : x
        x < 0 : −x ] ]
where the inner generic case is redundant. This form is resolved to

e′2 = [ T ∧ T     : |x|
        T ∧ x ≥ 0 : x
        T ∧ x < 0 : −x ]

with a redundant generic case. The next step is the combination of cases by Cartesian product computation. We obtain

[ T ∧ (T ∧ T)     : min(x, |x|)
  T ∧ (T ∧ x ≥ 0) : min(x, x)
  T ∧ (T ∧ x < 0) : min(x, −x) ]

which corresponds to (16.80) above. For the outer min, we apply the guarding scheme (16.77) to all terms, yielding the nested guarded expression

[ T ∧ (T ∧ T)     : [ T : min(x, |x|) ; x ≤ |x| : x ; |x| ≤ x : |x| ]
  T ∧ (T ∧ x ≥ 0) : [ T : min(x, x) ; x ≤ x : x ; x ≤ x : x ]
  T ∧ (T ∧ x < 0) : [ T : min(x, −x) ; x ≤ −x : x ; −x ≤ x : −x ] ]

which is in turn resolved to

[ (T ∧ (T ∧ T)) ∧ T         : min(x, |x|)
  (T ∧ (T ∧ T)) ∧ x ≤ |x|   : x
  (T ∧ (T ∧ T)) ∧ |x| ≤ x   : |x|
  (T ∧ (T ∧ x ≥ 0)) ∧ T     : min(x, x)
  (T ∧ (T ∧ x ≥ 0)) ∧ x ≤ x : x
  (T ∧ (T ∧ x ≥ 0)) ∧ x ≤ x : x
  (T ∧ (T ∧ x < 0)) ∧ T     : min(x, −x)
  (T ∧ (T ∧ x < 0)) ∧ x ≤ −x : x
  (T ∧ (T ∧ x < 0)) ∧ −x ≤ x : −x ]

From this, we delete the two non-generic cases obtained by combination with the redundant generic case of the min. The final result of the algebraic evaluation step is the following:

[ (T ∧ (T ∧ T)) ∧ T         : min(x, |x|)
  (T ∧ (T ∧ T)) ∧ x ≤ |x|   : x
  (T ∧ (T ∧ T)) ∧ |x| ≤ x   : |x|
  (T ∧ (T ∧ x ≥ 0)) ∧ x ≤ x : x
  (T ∧ (T ∧ x ≥ 0)) ∧ x ≤ x : x
  (T ∧ (T ∧ x < 0)) ∧ x ≤ −x : x
  (T ∧ (T ∧ x < 0)) ∧ −x ≤ x : −x ]   (16.83)

Worst-case complexity Our measure of complexity |G| for guarded expressions G is the number of contained cases:

| [ γ0 : t0
    γ1 : t1
    ...
    γn : tn ] | = n + 1.

As in Section 16.29.2, consider an m-ary operator f, guarded expression arguments e′1, . . . , e′m as in equation (16.78), and the Cartesian product T as in equation (16.79). Then
|f(e′1, . . . , e′m)| ≤ Σ_{(t1,...,tm)∈T} |gscheme_f(t1, . . . , tm)|
                     ≤ max_{(t1,...,tm)∈T} |gscheme_f(t1, . . . , tm)| · #T
                     = max_{(t1,...,tm)∈T} |gscheme_f(t1, . . . , tm)| · ∏_{j=1}^{m} |e′j|
                     ≤ max_{(t1,...,tm)∈T} |gscheme_f(t1, . . . , tm)| · (max_{1≤j≤m} |e′j|)^m.

In the important special case that the guarding scheme of f is a rewrite rule f(a1, . . . , am) → G, the above complexity estimation simplifies to

|f(e′1, . . . , e′m)| ≤ |G| · ∏_{j=1}^{m} |e′j| ≤ |G| · (max_{1≤j≤m} |e′j|)^m.

In other words: |G| plays the role of a factor which, however, depends on f, and |f(e′1, . . . , e′m)| is polynomial in the size of the e′j but exponential in the arity of f. Simplification In view of the increasing size of the guarded expressions coming into existence with subsequent computations, it is indispensable to apply simplification strategies. There are two different algorithms involved in the simplification of guarded expressions: 1. A formula simplifier mapping quantifier-free formulas to equivalent simpler ones. 2. Effective quantifier elimination for real closed fields over the language of ordered rings. It is not relevant which simplifier and which quantifier elimination procedure is actually used. We use the formula simplifier described in [6]. Our quantifier elimination uses test point methods developed by Weispfenning [18, 15, 21]. It is restricted to formulas obeying certain degree restrictions wrt. the quantified variables. As an alternative, REDLOG provides an interface to Hong’s QEPCAD quantifier elimination package [14]. Compared to the simplification, the quantifier elimination is more time consuming. It can be turned off by a switch. The following simplification steps are applied in the given order: Contraction of cases This is restricted to the non-generic cases of the considered guarded expression. We contract different cases containing the same terms:

[ γ0 : t0            [ γ0 : t0
  ...                  ...
  γi : ti   becomes    γi ∨ γj : ti
  ...                  ... ]
  γj : ti
  ... ]

Simplification of the guards The simplifier is applied to all guards, replacing them by simplified equivalents. Since our simplifier maps γ ∨ γ to γ, this together with the contraction of cases takes care of the deletion of duplicate cases. Keep one tautological case If the guard of some non-generic case becomes “T,” we delete all other non-generic cases. Otherwise, if quantifier elimination is turned on, we try to detect a tautology by eliminating the universal closures ∀γ of the guards γ. This quantifier elimination is also applied to the guards of generic cases. These are, in case of success, simply replaced by “T” without deleting the case. Remove contradictive cases A non-generic case is deleted if its guard has become “F.” If quantifier elimination is turned on, we try to detect further contradictive cases by eliminating the existential closure ∃γ for each guard γ. This quantifier elimination is also applied to generic cases. In case of success they are not deleted, but their guards are replaced by “F.” Our assumption (16.74) then allows us to delete all non-generic cases. Example revisited We turn back to the form (16.83) of our example min(x, |x|). Contraction of cases with subsequent simplification automatically yields

[ T           : min(x, |x|)
  T           : x
  |x| − x ≤ 0 : |x|
  F           : −x ]

of which only the tautological non-generic case survives:

[ T : min(x, |x|)
  T : x ]   (16.84)

Output modes An output mode determines which part of the information contained in the guarded expressions is provided to the user. GUARDIAN knows the following output modes: Matrix Output matrices in the style used throughout this paper. We have already seen that these can become very large in general. Generic case Output only the generic case. Generic term Output only the generic term. Thus the output is exactly the same as without the guardian package.
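The contraction step can be sketched in a few lines (guards and terms as strings; a stand-in for the GUARDIAN implementation, with illustrative names):

```python
def contract(cases):
    """Disjoin the guards of non-generic cases sharing the same term;
    the generic case (index 0) is never contracted."""
    generic, rest = cases[0], cases[1:]
    guard_of, order = {}, []
    for guard, term in rest:
        if term in guard_of:
            guard_of[term] += " or " + guard
        else:
            guard_of[term] = guard
            order.append(term)
    return [generic] + [(guard_of[t], t) for t in order]

cases = [("T", "min(x, abs(x))"),
         ("x <= abs(x)", "x"),
         ("x >= 0 and x <= x", "x"),
         ("abs(x) <= x", "abs(x)")]
contracted = contract(cases)
# [('T', 'min(x, abs(x))'),
#  ('x <= abs(x) or x >= 0 and x <= x', 'x'),
#  ('abs(x) <= x', 'abs(x)')]
```

The disjoined guard is then handed to the formula simplifier, which is where, e.g., "x <= abs(x) or x >= 0 and x <= x" collapses to "T" in the example revisited above.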
If the condition of the generic case becomes “F,” a warning “contradictive situation” is given. The computation can, however, be continued. Note that output modes are restrictions concerning only the output; internally the system still computes with the complete guarded expressions. A smart mode Consider the evaluation result (16.84) of min(x, |x|). The generic term output mode would output min(x, |x|), although more precise information could be given, namely x. The problem is caused by the fact that generic cases are used to keep track of the system’s default behavior. In this section we will describe an optional smart mode with a different notion of generic case. To begin with, we show why the problem cannot be overcome by a “smart output mode.” Assume that there is an output mode which outputs x for (16.84). As the next computation involving (16.84), consider division by y. This would result in

[ y ≠ 0 : min(x, |x|)/y
  y ≠ 0 : x/y ]

Again, there are identical conditions for the generic case and some non-generic case, and, again, the term belonging to the latter is simpler. Our mode would output x/y. Next, we apply the absolute value once more, yielding

[ y ≠ 0           : |min(x, |x|)|/|y|
  x/y ≥ 0 ∧ y ≠ 0 : x/y
  x/y < 0 ∧ y ≠ 0 : −x/y ]

Here, the condition of the generic case differs from all other conditions. We thus have to output the generic term. For the user, the evaluation of |x/y| results in |min(x, |x|)|/|y|. The smart mode can turn a non-generic case into a necessary generic one, dropping the original generic case and all other non-generic cases. Consider, e.g., (16.84), where the conditions are equal and the non-generic term is “simpler.” In fact, the relevant relationship between the conditions is that the generic condition implies the non-generic one. In other words: some non-generic condition is not more restrictive than the generic condition, and thus covers the whole domain of the guarded expression.
Note that from the implication and (16.74) we may conclude that the cases are even equivalent. Implication is heuristically checked by simplification. If this fails, quantifier elimination provides a decision procedure. Note that our test point methods are incomplete in this regard due to the degree restrictions. Also they cannot be applied straightforwardly to guards containing operators that do not belong to the language of ordered rings. Whenever we happen to detect a relevant implication, we actually turn the corresponding non-generic case into the generic one. From our motivation of non-generic cases, we may expect that non-generic expressions are generally more convenient than generic ones. 16.29.3 Examples We give the results for the following computations as they are printed in the output mode matrix, providing the full information on the computation result. The reader can derive for themselves what the output in the mode generic case or generic term would be.

• Smart mode or not: 1/(x^2 + 2x + 1) = [ x + 1 ≠ 0 : 1/(x^2 + 2x + 1) ]. The simplifier recognizes that the denominator is a square.

• Smart mode or not: 1/(x^2 + 2x + 2) = [ T : 1/(x^2 + 2x + 2) ]. Quantifier elimination recognizes the positive definiteness of the denominator.

• Smart mode: |x| − √x = [ x ≥ 0 : −√x + x ]. The square root allows us to forget about the negative branch of the absolute value.

• Smart mode: |x^2 + 2x + 1| = [ T : x^2 + 2x + 1 ]. The simplifier recognizes the positive semidefiniteness of the argument. REDUCE itself recognizes squares within absolute values only in very special cases such as |x^2|.

• Smart mode: min(x, max(x, y)) = [ T : x ]. Note that REDUCE does not know any rules about nested minima and maxima.

• Smart mode: min(sign(x), −1) = [ T : −1 ].

• Smart mode or not:

|x| − x = [ T     : |x| − x
            x ≥ 0 : 0
            x < 0 : −2x ]

This example is taken from [5].
• Smart mode or not: √(1 + x^2 y^2 (x^2 + y^2 − 3)) = [ T : √(x^4 y^2 + x^2 y^4 − 3x^2 y^2 + 1) ]. The Motzkin polynomial is recognized to be positive semidefinite by quantifier elimination. The evaluation time for the last example is 119 ms on a SUN SPARC-4. This illustrates that efficiency is no problem with such interactive examples. 16.29.4 Outlook This section describes possible extensions of the GUARDIAN. The extensions proposed in Section 16.29.4 on simplification of terms and Section 16.29.4 on a background theory are clear from a theoretical point of view but not yet implemented. Section 16.29.4 collects some ideas on the application of our ideas to the REDUCE integrator. In this field, some more theoretical work is necessary. Simplification of terms Consider the expression sign(x)x − |x|. It evaluates to the following guarded expression:

[ T     : −|x| + sign(x)x
  x ≠ 0 : 0
  x = 0 : −x ]

This suggests substituting 0 for −x in the third case, which would in turn allow us to contract the two non-generic cases, yielding

[ T : −|x| + sign(x)x
  T : 0 ]

In smart mode the second case would then become the only generic case. Generally, one would proceed as follows: if the guard is a conjunction containing equations t1 = 0, . . . , tk = 0 at the top level, reduce the corresponding expression modulo the set of univariate linear polynomials among t1, . . . , tk. A more general approach would reduce the expression modulo a Gröbner basis of all the t1, . . . , tk. This leads, however, to larger expressions in general. One can also imagine making use of non-conjunctive guards in the following way: 1. Compute a DNF of the guard. 2. Split the case into several cases corresponding to the conjunctions in the DNF. 3. Simplify the terms. 4. Apply the standard simplification procedure to the resulting guarded expression. Note that it includes contraction of cases.
According to experiences with similar ideas in the “Gröbner simplifier” described in [6], this should work well. Background theory In practice one often computes with quantities guaranteed to lie in a certain range. For instance, when computing an electrical resistance, one knows in advance that it will not be negative. For such cases one would like to have some facility to provide external information to the system. This can then be used to reduce the complexity of the guarded expressions. One would provide a function assert(ϕ), which asserts the formula ϕ to hold. Successive applications of assert establish a background theory, which is a set of formulas considered conjunctively. The information contained in the background theory can be used in the guarded expression computation. The user must not, however, rely on all of the background information actually being used. Technically, denote by Φ the (conjunctive) background theory. For the simplification of the guards, we can make use of the fact that our simplifier is designed to simplify wrt. a theory, cf. [6]. For proving that some guard γ is tautological, we try to prove ∀(Φ → γ) instead of ∀γ. Similarly, for proving that γ is contradictive, we try to disprove ∃(Φ ∧ γ). Instead of proving ∀(γ1 → γ2) in smart mode, we try to prove ∀((Φ ∧ γ1) → γ2). Independently, one can imagine using a background theory for reducing the output with the matrix output mode. For this, one simplifies each guard wrt. the theory at the output stage, treating contradictions and tautologies appropriately. Using the theory for replacing all cases by one at the output stage in a smart mode manner leads once more to the problem of expressions or even guarded expressions “mysteriously” getting more complicated. Applying the theory only at the output stage makes it possible to implement a procedure unassert(ϕ) in a reasonable way. Integration CAS integrators make “mistakes” similar to those we have examined.
Consider, e.g., the typical result ∫ x^a dx = x^(a+1)/(a+1). It does not cover the case a = −1, for which one wishes to obtain ∫ x^(−1) dx = ln x. This problem can also be solved by using guarded expressions for integration results. Within the framework of this paper, we would have to associate a guarding scheme with the integrator int. It is not hard to see that this cannot be done in a reasonable way without putting as much knowledge into the scheme as into the integrator itself. Thus for treating integration, one has to modify the integrator to provide guarded expressions. Next, we have to clarify what the guarded expression for the above integral would look like. Since we know that the integral is defined for all interpretations of the variables, our assumption (16.74) implies that the generic condition be “T.” We obtain the guarded expression

[ T      : ∫ x^a dx
  a ≠ −1 : x^(a+1)/(a+1)
  a = −1 : ln x ]

Note that the redundant generic case does not model the system’s current behavior. Combining algebra with logic Our method, in the described form, uses an already implemented algebraic evaluator. In the previous section, we have seen that this point of view is not sufficient for treating integration appropriately. Our approach also runs into trouble with built-in knowledge such as

√(x^2) = |x|,   (16.85)
sign(|x|) = 1.   (16.86)

Equation (16.85) introduces an absolute value operator within a non-generic term without making a case distinction. Equation (16.86) is wrong when x is not considered transcendental. In contrast to the situation with reciprocals, our technique cannot be used to avoid this “mistake.” We obtain

sign(|x|) = [ T     : 1
              x ≠ 0 : 1
              x = 0 : 0 ]

yielding two different answers for x = 0. We have already seen in the examples of Section 16.29.3 that the implementation of knowledge such as (16.85) and (16.86) is usually quite ad hoc, and can be mostly covered by using guarded expressions.
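The guarded antiderivative of x^a discussed above amounts to a simple case split on a. As a numeric Python sketch (a stand-in for the proposed guarded integrator output, valid for x > 0; names are illustrative):

```python
import math

def guarded_power_integral(a):
    """Antiderivative of x**a on x > 0 as a guarded expression:
    the non-generic case a = -1 yields ln x, otherwise
    x**(a + 1)/(a + 1)."""
    if a == -1:
        return lambda x: math.log(x)
    return lambda x: x ** (a + 1) / (a + 1)

F = guarded_power_integral(2)     # x**3/3
G = guarded_power_integral(-1)    # ln x
```

A conventional integrator corresponds to returning only the generic branch, silently losing the a = -1 case.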
This observation gives rise to the following question: when designing a new CAS based on guarded expressions, how should the knowledge be distributed between the algebraic side and the logic side? 16.29.5 Conclusions Guarded expressions can be used to overcome well-known problems with interpreting expressions as terms. We have explained in detail how to compute with guarded expressions, including several simplification techniques. Moreover, we gain algebraic simplification power from the logical simplifications. Numerous examples illustrate the power of our simplification methods. The largest part of our ideas is efficiently implemented, and the software is published. The outlook on background theories and on the treatment of integration by guarded expressions points to interesting future extensions. Bibliography [1] Bradford, R. Algebraic simplification of multiple valued functions. In Design and Implementation of Symbolic Computation Systems (1992), J. Fitch, Ed., vol. 721 of Lecture Notes in Computer Science, Springer-Verlag, pp. 13–21. Proceedings of DISCO 92. [2] Broadberry, P., Gómez-Díaz, T., and Watt, S. On the implementation of dynamic evaluation. In Proceedings of the International Symposium on Symbolic and Algebraic Manipulation (ISSAC 95) (New York, N.Y., 1995), A. Levelt, Ed., ACM Press, pp. 77–89. [3] Collins, G. E. Quantifier elimination for the elementary theory of real closed fields by cylindrical algebraic decomposition. In Automata Theory and Formal Languages. 2nd GI Conference (Berlin, Heidelberg, New York, May 1975), H. Brakhage, Ed., vol. 33 of Lecture Notes in Computer Science, Gesellschaft für Informatik, Springer-Verlag, pp. 134–183. [4] Corless, R. M., and Jeffrey, D. J. Well . . . it isn’t quite that simple. ACM SIGSAM Bulletin 26, 3 (Aug. 1992), 2–6. Feature. [5] Davenport, J. H., and Faure, C. The “unknown” in computer algebra. Programmirovanie 1, 1 (1994). [6] Dolzmann, A., and Sturm, T.
Simplification of quantifier-free formulas over ordered fields. Technical Report MIP-9517, FMI, Universität Passau, D-94030 Passau, Germany, Oct. 1995. To appear in the Journal of Symbolic Computation.

[7] Dolzmann, A., and Sturm, T. Redlog—computer algebra meets computer logic. Technical Report MIP-9603, FMI, Universität Passau, D-94030 Passau, Germany, Feb. 1996.

[8] Dolzmann, A., and Sturm, T. Redlog user manual. Technical Report MIP-9616, FMI, Universität Passau, D-94030 Passau, Germany, Oct. 1996. Edition 1.0 for Version 1.0.

[9] Duval, D., and González-Vega, L. Dynamic evaluation and real closure. In Proceedings of the IMACS Symposium on Symbolic Computation (1993).

[10] Duval, D., and Reynaud, J.-C. Sketches and computation I: Basic definitions and static evaluation. Mathematical Structures in Computer Science 4, 2 (1994), 185–238.

[11] Duval, D., and Reynaud, J.-C. Sketches and computation II: Dynamic evaluation and applications. Mathematical Structures in Computer Science 4, 2 (1994), 239–271.

[12] Gómez-Díaz, T. Examples of using dynamic constructible closure. In Proceedings of the IMACS Symposium on Symbolic Computation (1993).

[13] Hearn, A. C., and Fitch, J. P. Reduce User’s Manual for Version 3.6. RAND, Santa Monica, CA 90407-2138, July 1995. RAND Publication CP78.

[14] Hong, H., Collins, G. E., Johnson, J. R., and Encarnacion, M. J. QEPCAD interactive version 12. Kindly communicated to us by Hoon Hong, Sept. 1993.

[15] Loos, R., and Weispfenning, V. Applying linear quantifier elimination. The Computer Journal 36, 5 (1993), 450–462. Special issue on computational quantifier elimination.

[16] Melenk, H. Reduce symbolic mode primer. In REDUCE 3.6 User’s Guide for UNIX. Konrad-Zuse-Institut, Berlin, 1995.

[17] Tarski, A. A decision method for elementary algebra and geometry. Tech. rep., University of California, 1948. Second edn., rev. 1951.

[18] Weispfenning, V. The complexity of linear problems in fields.
Journal of Symbolic Computation 5, 1 (Feb. 1988), 3–27.

[19] Weispfenning, V. Comprehensive Gröbner bases. Journal of Symbolic Computation 14 (July 1992), 1–29.

[20] Weispfenning, V. Quantifier elimination for real algebra—the cubic case. In Proceedings of the International Symposium on Symbolic and Algebraic Computation in Oxford (New York, July 1994), ACM Press, pp. 258–263.

[21] Weispfenning, V. Quantifier elimination for real algebra—the quadratic case and beyond. To appear in AAECC.

16.30 IDEALS: Arithmetic for polynomial ideals

This package implements the basic arithmetic for polynomial ideals by exploiting the Gröbner bases package of REDUCE. In order to save computing time, all intermediate Gröbner bases are stored internally so that time-consuming repetitions are avoided.

Author: Herbert Melenk.

16.30.1 Introduction

This package implements the basic arithmetic for polynomial ideals by exploiting the Gröbner bases package of REDUCE. In order to save computing time, all intermediate Gröbner bases are stored internally so that time-consuming repetitions are avoided. A uniform setting facilitates the access.

16.30.2 Initialization

Prior to any computation the set of variables has to be declared by calling the operator I_setting. E.g. in order to initiate computations in the polynomial ring Q[x, y, z] call

    I_setting(x,y,z);

A subsequent call to I_setting allows one to select another set of variables; at the same time the internal data structures are cleared in order to free memory resources.

16.30.3 Bases

An ideal is represented by a basis (set of polynomials) tagged with the symbol I, e.g.

    u := I(x*z-y**2, x**3-y*z);

Alternatively a list of polynomials can be used as input basis; however, all arithmetic results will be presented in the above form. The operator ideal2list allows one to convert an ideal basis into a conventional REDUCE list.
Operators

Because of syntactical restrictions in REDUCE, special operators have to be used for ideal arithmetic:

    .+            ideal sum (infix)
    .*            ideal product (infix)
    .:            ideal quotient (infix)
    ./            ideal quotient (infix)
    .=            ideal equality test (infix)
    subset        ideal inclusion test (infix)
    intersection  ideal intersection (prefix, binary)
    member        test for membership in an ideal (infix: polynomial and ideal)
    gb            Groebner basis of an ideal (prefix, unary)
    ideal2list    convert ideal basis to polynomial list (prefix, unary)

Example:

    I(x+y,x^2) .* I(x-z);

    I(X^2 + X*Y - X*Z - Y*Z, X*Y^2 - Y^2*Z)

The test operators return the values 1 (= true) or 0 (= false) such that they can be used in REDUCE if-then-else statements directly. The results of sum, product, quotient, intersection are ideals represented by their Gröbner basis in the current setting and term order. The term order can be modified using the operator torder from the Gröbner package. Note that ideal equality cannot be tested with the REDUCE equal sign:

    I(x,y) = I(y,x)     is false
    I(x,y) .= I(y,x)    is true

16.30.4 Algorithms

The operators groebner, preduce and idealquotient of the REDUCE Gröbner package support the basic algorithms:

    GB(I(u1, u2 ...))  →  groebner({u1, u2 ...}, {x, ...})
    p ∈ I1             →  p = 0 mod I1
    I1 : I(p)          →  (I1 ∩ I(p))/p   elementwise

On top of these the Ideals package implements the following operations:

    I(u1, u2 ...) + I(v1, v2 ...)  →  GB(I(u1, u2 ..., v1, v2 ...))
    I(u1, u2 ...) * I(v1, v2 ...)  →  GB(I(u1*v1, u1*v2, ..., u2*v1, u2*v2 ...))
    I1 ∩ I2                        →  Q[x, ...] ∩ GB_lex(t*I1 + (1 − t)*I2, {t, x, ...})
    I1 : I(p1, p2, ...)            →  I1 : I(p1) ∩ I1 : I(p2) ∩ ...
    I1 = I2                        →  GB(I1) = GB(I2)
    I1 ⊆ I2                        →  ui ∈ I2 for all ui ∈ I1 = I(u1, u2 ...)

16.30.5 Examples

Please consult the file ideals.tst.
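The membership test above is reduction-based: p lies in an ideal exactly when p reduces to 0 modulo a Gröbner basis of that ideal. The following Python sketch (illustrative only, not part of the package) shows the idea in the simplest setting, the univariate case, where every ideal of Q[x] is principal and its Gröbner basis is a single polynomial:

```python
from fractions import Fraction

def trim(p):
    # drop zero leading coefficients
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_rem(p, g):
    # Remainder of p modulo a nonzero g; coefficient lists, constant term
    # first, exact rational arithmetic.
    p = trim([Fraction(c) for c in p])
    g = trim([Fraction(c) for c in g])
    while len(p) >= len(g):
        factor = p[-1] / g[-1]
        shift = len(p) - len(g)
        for i, c in enumerate(g):
            p[i + shift] -= factor * c
        p = trim(p)
    return p

def member(p, g):
    # p is in the ideal I(g) of Q[x] iff p reduces to 0 modulo g.
    return poly_rem(p, g) == []

# x^2 - 1 is in I(x - 1); x^2 + 1 is not.
print(member([-1, 0, 1], [-1, 1]))  # True
print(member([1, 0, 1], [-1, 1]))   # False
```

In several variables the same reduction step runs against every basis polynomial of a Gröbner basis, which is what preduce does inside REDUCE.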
16.31 INEQ: Support for solving inequalities

This package supports the operator ineq_solve that tries to solve single inequalities and sets of coupled inequalities.

Author: Herbert Melenk.

The following types of systems are supported:

• only numeric coefficients (no parametric system),
• a linear system of mixed equations and <= / >= inequalities, applying the method of Fourier and Motzkin¹²,
• a univariate inequality with <=, >=, > or < operator and polynomial or rational left-hand and right-hand sides, or a system of such inequalities with only one variable.

For linear optimization problems please use the operator simplex of the LINALG package (cf. section 16.37).

Syntax:

    INEQ_SOLVE(⟨expr⟩ [,⟨vl⟩])

where ⟨expr⟩ is an inequality or a list of coupled inequalities and equations, and the optional argument ⟨vl⟩ is a single variable (kernel) or a list of variables (kernels). If not specified, they are extracted automatically from ⟨expr⟩. For multivariate input an explicit variable list specifies the elimination sequence: the last member is the most specific one. An error message occurs if the input cannot be processed by the currently implemented algorithms.

The result is a list. It is empty if the system has no feasible solution. Otherwise the result presents the admissible ranges as a set of equations where each variable is equated to one expression or to an interval. The most specific variable is the first one in the result list and each form contains only preceding variables (resolved form). The interval limits can be formal max or min expressions. Algebraic numbers are encoded as rounded number approximations.

Examples:

    ineq_solve({(2*x^2+x-1)/(x-1) >= (x+1/2)^2, x>0});

    {x=(0 .. 0.326583),x=(1 .. 2.56777)}

¹² described by G.B. Dantzig in Linear Programming and Extensions.
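The Fourier–Motzkin method referenced above eliminates one variable at a time: every inequality in which the variable has a positive coefficient is combined with every one in which it is negative, so that the variable cancels. A minimal Python sketch of one elimination step (illustrative only; INEQ implements this inside REDUCE, and all names here are hypothetical):

```python
from fractions import Fraction

def fm_eliminate(rows, j):
    """One Fourier-Motzkin step: eliminate variable j from constraints
    sum(a[i]*x[i]) <= b, each given as a pair (a, b) of rational entries."""
    pos, neg, rest = [], [], []
    for a, b in rows:
        if a[j] > 0:
            pos.append((a, b))
        elif a[j] < 0:
            neg.append((a, b))
        else:
            rest.append((a, b))
    out = list(rest)
    for ap, bp in pos:
        for an, bn in neg:
            # positive combination chosen so the coefficient of x_j cancels
            lam, mu = -an[j], ap[j]
            a = [lam * x + mu * y for x, y in zip(ap, an)]
            b = lam * bp + mu * bn
            out.append((a, b))
    return out

# x <= 4, -x <= -1 (i.e. x >= 1), x + y <= 6: eliminating x leaves
# the trivial constraint 0 <= 3 and the bound y <= 5.
F = Fraction
rows = [([F(1), F(0)], F(4)), ([F(-1), F(0)], F(-1)), ([F(1), F(1)], F(6))]
for a, b in fm_eliminate(rows, 0):
    print(a, "<=", b)
```

Repeating the step for each variable in the elimination sequence yields the resolved form described above; the pairwise combinations are also why the method is restricted to systems of moderate size.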
    reg:= {a + b - c>=0, a - b + c>=0, - a + b + c>=0, 0>=0,
           2>=0, 2*c - 2>=0, a - b + c>=0, a + b - c>=0,
           - a + b + c - 2>=0, 2>=0, 0>=0, 2*b - 2>=0,
           k + 1>=0, - a - b - c + k>=0, - a - b - c + k + 2>=0,
           - 2*b + k>=0, - 2*c + k>=0, a + b + c - k>=0,
           2*b + 2*c - k - 2>=0, a + b + c - k>=0}$

    ineq_solve (reg,{k,a,b,c});

    {c=(1 .. infinity),
     b=(1 .. infinity),
     a=(max( - b + c,b - c) .. b + c - 2),
     k=a + b + c}

16.32 INVBASE: A package for computing involutive bases

Involutive bases are a new tool for solving problems in connection with multivariate polynomials, such as solving systems of polynomial equations and analyzing polynomial ideals. An involutive basis of a polynomial ideal is nothing but a special form of a redundant Gröbner basis. The construction of involutive bases reduces the problem of solving polynomial systems to simple linear algebra.

Authors: A.Yu. Zharkov and Yu.A. Blinkov.

16.32.1 Introduction

Involutive bases are a new tool for solving problems in connection with multivariate polynomials, such as solving systems of polynomial equations and analyzing polynomial ideals, see [1]. An involutive basis of a polynomial ideal is nothing but a special form of a redundant Gröbner basis. The construction of involutive bases reduces the problem of solving polynomial systems to simple linear algebra.

The INVBASE package¹³ calculates involutive bases of polynomial ideals using an algorithm described in [1] which may be considered as an alternative to the well-known Buchberger algorithm [2]. The package can be used over a variety of different coefficient domains, and for different variable and term orderings. The algorithm implemented in the INVBASE package is proved to be valid for any zero-dimensional ideal (finite number of solutions) as well as for positive-dimensional ideals in generic form. However, the algorithm does not terminate for “sparse” positive-dimensional systems.
In order to stop the process we use the maximum degree bound for the Gröbner bases of generic ideals in the total-degree term ordering established in [3]. In this case, it is reasonable to call the GROEBNER package with the answer of INVBASE as input information in order to compute the reduced Gröbner basis under the same variable and term ordering.

Though the INVBASE package supports computing involutive bases in any admissible term ordering, it is reasonable to compute them only for the total-degree term orderings. The package includes a special algorithm for conversion of total-degree involutive bases into the triangular bases in the lexicographical term ordering that is desirable for finding solutions. Normally the sum of timings for these two computations is much less than the timing for direct computation of the lexicographical involutive bases. As a rule, the result of the conversion algorithm is a reduced Gröbner basis in the lexicographical term ordering. However, because of some gaps in the current version of the algorithm, there may be rare situations when the resulting triangular set does not possess the formal property of Gröbner bases. Anyway, we recommend using the GROEBNER package with the result of the conversion algorithm as input in order either to check the Gröbner bases property or to transform the result into a lexicographical Gröbner basis.

¹³ The REDUCE implementation has been supported by the Konrad-Zuse-Zentrum Berlin.

16.32.2 The Basic Operators

Term Ordering

The following term order modes are available:

    REVGRADLEX, GRADLEX, LEX

These modes have the same meaning as for the GROEBNER package. All orderings are based on an ordering among the variables. For each pair of variables an order relation > must be defined, e.g. x > y. The term ordering mode as well as the order of variables are set by the operator

    INVTORDER ⟨mode⟩, {x1, ..., xn}

where ⟨mode⟩ is one of the term order modes listed above.
The notion of {x1, ..., xn} as a list of variables at the same time means x1 > ... > xn.

Example 1.

    INVTORDER REVGRADLEX, {x, y, z}

sets the reverse graduated term ordering based on the variable order x > y > z. The operator INVTORDER may be omitted. The default term order mode is REVGRADLEX and the default decreasing variable order is alphabetical (or, more generally, the default REDUCE kernel order). Furthermore, the list of variables in the INVTORDER may be omitted. In this case the default variable order is used.

Computing Involutive Bases

To compute the involutive basis of the ideal generated by the set of polynomials {p1, ..., pm} one should type the command

    INVBASE {p1, ..., pm}

where pi are polynomials in variables listed in the INVTORDER operator. If some kernels in pi were not listed previously in the INVTORDER operator they are considered as parameters, i.e. they are considered part of the coefficients of polynomials. If INVTORDER was omitted, all the kernels in pi are considered as variables with the default REDUCE kernel order.

The coefficients of polynomials pi may be integers as well as rational numbers (or, accordingly, polynomials and rational functions in the parametric case). The computations modulo prime numbers are also available. For this purpose one should type the REDUCE commands

    ON MODULAR; SETMOD p;

where p is a prime number. The value of the INVBASE function is a list of integer polynomials {g1, ..., gn} representing an involutive basis of a given ideal.

Example 2.
    INVTORDER REVGRADLEX, {x, y, z};
    g := INVBASE {4*x**2 + x*y**2 - z + 1/4,
                  2*x + y**2*z + 1/2,
                  x**2*z - 1/2*x - y**2};

The resulting involutive basis in the reverse graduated ordering is

    g := {8*x*y*z^3 - 2*x*y*z^2 + 4*y^3 - 4*y*z^2 + 16*x*y + 17*y*z - 4*y,
          8*y^4 - 8*x*z^2 - 256*y^2 + 2*x*z + 64*z^2 - 96*x + 20*z - 9,
          2*y^3*z + 4*x*y + y,
          8*x*z^3 - 2*x*z^2 + 4*y^2 - 4*z^2 + 16*x + 17*z - 4,
          -4*y*z^3 - 8*y^3 + 6*x*y*z + y*z^2 - 36*x*y - 8*y,
          4*x*y^2 + 32*y^2 - 8*z^2 + 12*x - 2*z + 1,
          2*y^2*z + 4*x + 1,
          -4*z^3 - 8*y^2 + 6*x*z + z^2 - 36*x - 8,
          8*x^2 - 16*y^2 + 4*z^2 - 6*x - z}

To convert it into a lexicographical Gröbner basis one should type

    h := INVLEX g;

The result is

    h := {3976*x + 37104*z^6 - 600*z^5 + 2111*z^4 + 122062*z^3
              + 232833*z^2 - 680336*z + 288814,
          1988*y^2 - 76752*z^6 + 1272*z^5 - 4197*z^4 - 251555*z^3
              - 481837*z^2 + 1407741*z - 595666,
          16*z^7 - 8*z^6 + z^5 + 52*z^4 + 75*z^3 - 342*z^2 + 266*z - 60}

In the case of a “sparse” positive-dimensional system when the involutive basis in the sense of [1] does not exist, you get the error message

    ***** MAXIMUM DEGREE BOUND EXCEEDED

The resulting list of polynomials which is not an involutive basis is stored in the share variable INVTEMPBASIS. In this case it is reasonable to call the GROEBNER package with the value of INVTEMPBASIS as input under the same variable and term ordering.

Bibliography

[1] Zharkov A.Yu., Blinkov Yu.A. Involution Approach to Solving Systems of Algebraic Equations. Proceedings of the IMACS ’93, 1993, 11-16.

[2] Buchberger B. Gröbner bases: an Algorithmic Method in Polynomial Ideal Theory. In: (Bose N.K., ed.) Recent Trends in Multidimensional System Theory, Reidel, 1985.

[3] Lazard D. Gröbner Bases, Gaussian Elimination and Resolution of Systems of Algebraic Equations.
Proceedings of EUROCAL ’83. Lecture Notes in Computer Science 162, Springer 1983, 146-157.

16.33 LALR: A parser generator

Author: Arthur Norman

This package provides a parser-generator, somewhat styled after yacc or the many programs available for use with other languages. You present it with a phrase structure grammar and it generates a set of tables that can then be used by the function yyparse to read in material in the syntax that you specified. Internally it uses a very well established technique known as “LALR” which takes the grammar and derives the description of a stack automaton that can accept it. Details of the procedure can be found in standard books on compiler construction, such as the one by Aho, Lam, Sethi and Ullman. At the time of writing this explanation the code is not in its final form, so this will describe the current state and include a few notes on what might change in the future.

Building a parser is done in Reduce symbolic mode, so say "symbolic;" or "lisp;" before starting your work. To use the code here you use a function lalr_create_parser, giving it two arguments. The first indicates precedence information and will be described later: for now just pass the value nil. The second argument is a list of productions, and the first one of these is taken to be the top-level target for the whole grammar. Each production is in the form

    (LHS ((rhs1.1 rhs1.2 ...) a1.1 a1.2 ...)
         ((rhs2.1 rhs2.2 ...) a2.1 a2.2 ...)
         ...)

which in regular publication style for grammars might be interpreted as meaning

    LHS ⇒ rhs1,1 rhs1,2 ... {a1,1 a1,2 ...}
        | rhs2,1 rhs2,2 ... {a2,1 a2,2 ...}
        | ... ;

The various lines specify different options for what the left hand side (non-terminal symbol) might correspond to, while the items within the braces are semantic actions that get obeyed or evaluated when the production rule is used. Each LHS is treated as a non-terminal symbol and is specified as a simple name.
Note that by default the Reduce parser will be folding characters within names to lower case and so it will be best to choose names for non-terminals that are unambiguous even when case-folded, but I would like to establish a convention that in source code they are written in capitals.

The RHS items may be either non-terminals (identified because they are present in the left hand side of some production) or terminals. Terminal symbols can be specified in two different ways. The lexer has built-in recipes that decode certain sequences of characters and return the special markers !:symbol, !:number, !:string, !:list for commonly used cases. In these cases the variable yylval gets left set to associated data, so for instance in the case of !:symbol it gets set to the particular symbol concerned. The token type !:list is used for Lisp or rlisp-like notation where the input contains 'expression or `expression, so for instance the input `(a b c) leads to the lexer returning !:list and yylval being set to (backquote (a b c)). This treatment is specialised for handling rlisp-like syntax.

Other terminals are indicated by writing a string. That may either consist of characters that would otherwise form a symbol (i.e. a letter followed by letters, digits and underscores) or a sequence of non-alphanumeric characters. In the latter case if a sequence of three or more punctuation marks makes up a terminal then all the shorter prefixes of it will also be grouped to form single entities. So if "<-->" is a terminal then '<', '<-' and '<--' will each be parsed as single tokens, and any of them that are not used as terminals will be classified as !:symbol.

As well as terminals and non-terminals (which are written as symbols or strings) it is possible to write one of

    (OPT s1 s2 ...)        0 or 1 instances of the sequence s1, ...
    (STAR s1 s2 ...)       0, 1, 2, ... instances.
    (PLUS s1 s2 ...)       1, 2, 3, ... instances.
    (LIST sep s1 s2 ...)   like (STAR s1 s2 ...) but with the single item sep
                           between each instance.
    (LISTPLUS sep s1 ...)  like (PLUS s1 ...) but with sep interleaved.
    (OR s1 s2 ...)         one or other of the tokens shown.

When the lexer processes input it will return a numeric code that identifies the type of the item seen, so in a production one might write

    (!:symbol ":=" EXPRESSION)

and as it recognises the first two tokens the lexer will return a numeric code for !:symbol (and set yylval to the actual symbol as seen) and then a numeric code that it allocates for ":=". In the latter case it will also set yylval to the symbol !:!= in case that is useful.

Precedence can be set using lalr_precedence. See examples below.

16.33.1 Limitations

1. Grammar rules and semantic actions are specified in fairly raw Lisp.
2. The lexer is hand-written and can not readily be reconfigured for use with languages other than rlisp. For instance it has use of "!" as a character escape built into it.

16.33.2 An example

    % Here I set up a sample grammar
    %   S' -> S
    %   S  -> C C    { }
    %   C  -> "c" C  { }
    %       | "d"    { }
    % This is example 4.42 from Aho, Sethi and Ullman's Red Dragon book.
    % It is example 4.54 in the more recent Purple book.

    grammar := '(
      (s  ((cc cc))                  % Use default semantic action here
      )
      (cc (("c" cc) (list 'c !$2))   % First production for C
          (("d") 'd)                 % Second production for C
      ))$

    parsertables := lalr_create_parser(nil, grammar)$

    << lex_init(); yyparse() >>;

    c c c d c d ;

16.34 LAPLACE: Laplace transforms

This package can calculate ordinary and inverse Laplace transforms of expressions. Documentation is in plain text.

Authors: C. Kazasov, M. Spiridonova, V. Tomov.

Reference: Christomir Kazasov, Laplace Transformations in REDUCE 3, Proc. Eurocal ’87, Lecture Notes in Comp. Sci., Springer-Verlag (1987) 132-133.
Some hints on how to use this package:

Syntax:

    LAPLACE(⟨exp⟩, ⟨var-s⟩, ⟨var-t⟩)
    INVLAP(⟨exp⟩, ⟨var-s⟩, ⟨var-t⟩)

where ⟨exp⟩ is the expression to be transformed, ⟨var-s⟩ is the source variable (in most cases ⟨exp⟩ depends explicitly on this variable) and ⟨var-t⟩ is the target variable. If ⟨var-t⟩ is omitted, the package uses an internal variable lp!& or il!&, respectively.

The following switches can be used to control the transformations:

    lmon:  If on, sin, cos, sinh and cosh are converted by LAPLACE into
           exponentials,
    lhyp:  If on, exponential expressions are converted by INVLAP into the
           hyperbolic functions sinh and cosh,
    ltrig: If on, exponential expressions are converted by INVLAP into the
           trigonometric functions sin and cos.

The system can be extended by adding Laplace transformation rules for single functions by rules or rule sets. In such a rule the source variable MUST be free, the target variable MUST be il!& for LAPLACE and lp!& for INVLAP and the third parameter should be omitted. Also rules for transforming derivatives are entered in such a form.

Examples:

    let {laplace(log(~x),x) => -log(gam * il!&)/il!&,
         invlap(log(gam * ~x)/x,x) => -log(lp!&)};

    operator f;
    let{
      laplace(df(f(~x),x),x) => il!&*laplace(f(x),x) - sub(x=0,f(x)),
      laplace(df(f(~x),x,~n),x) => il!&**n*laplace(f(x),x) -
          for i:=n-1 step -1 until 0 sum
              sub(x=0, df(f(x),x,n-1-i)) * il!&**i
          when fixp n,
      laplace(f(~x),x) = f(il!&)
    };

Remarks about some functions: The DELTA and GAMMA functions are known. ONE is the name of the unit step function. INTL is a parametrized integral function

    intl(⟨expr⟩, ⟨var⟩, 0, ⟨obj.var⟩)

which means "integral of ⟨expr⟩ w.r.t. ⟨var⟩ taken from 0 to ⟨obj.var⟩", e.g. intl(2*y^2, y, 0, x), which is formally a function in x.

We recommend reading the file LAPLACE.TST for a further introduction.
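What LAPLACE computes symbolically is the integral L{f}(s) = ∫₀^∞ e^(−s t) f(t) dt. A known transform pair such as L{sin t} = 1/(s² + 1) can be checked numerically with a short Python sketch (illustrative only, not part of the package; the function name and step sizes are arbitrary choices):

```python
import math

def laplace_num(f, s, T=40.0, n=100000):
    # Crude trapezoidal approximation of the Laplace integral on [0, T];
    # the tail beyond T is negligible for s well above 0.
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 2.0
approx = laplace_num(math.sin, s)
exact = 1.0 / (s * s + 1.0)   # the known transform of sin t, evaluated at s = 2
print(approx, exact)          # the two values agree closely
```

Such a numerical spot check is a convenient way to validate user-supplied transformation rules before adding them via let.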
16.35 LIE: Functions for the classification of real n-dimensional Lie algebras

LIE is a package of functions for the classification of real n-dimensional Lie algebras. It consists of two modules: liendmc1 and lie1234. With the help of the functions in the liendmc1 module, real n-dimensional Lie algebras L with a derived algebra L^(1) of dimension 1 can be classified.

Authors: Carsten and Franziska Schöbel.

LIE is a package of functions for the classification of real n-dimensional Lie algebras. It consists of two modules: liendmc1 and lie1234.

liendmc1

With the help of the functions in this module real n-dimensional Lie algebras L with a derived algebra L^(1) of dimension 1 can be classified. L has to be defined by its structure constants c^k_ij in the basis {X1, ..., Xn} with [Xi, Xj] = c^k_ij Xk. The user must define an ARRAY LIENSTRUCIN(n, n, n) with n being the dimension of the Lie algebra L. The structure constants LIENSTRUCIN(i, j, k) := c^k_ij for i < j should be given. Then the procedure LIENDIMCOM1 can be called. Its syntax is:

    LIENDIMCOM1(⟨number⟩).

⟨number⟩ corresponds to the dimension n. The procedure simplifies the structure of L performing real linear transformations. The returned value is a list of the form

    (i)  {LIE_ALGEBRA(2),COMMUTATIVE(n-2)}  or
    (ii) {HEISENBERG(k),COMMUTATIVE(n-k)}   with 3 ≤ k ≤ n, k odd.

The concepts correspond to the following theorem (LIE_ALGEBRA(2) → L2, HEISENBERG(k) → Hk and COMMUTATIVE(n-k) → C(n−k)):

Theorem. Every real n-dimensional Lie algebra L with a 1-dimensional derived algebra can be decomposed into one of the following forms:

    (i)  C(L) ∩ L^(1) = {0}:    L2 ⊕ C(n−2)  or
    (ii) C(L) ∩ L^(1) = L^(1):  Hk ⊕ C(n−k)  (k = 2r − 1, r ≥ 2),

with

1. C(L) = Cj ⊕ (L^(1) ∩ C(L)) and dim Cj = j,
2. L2 is generated by Y1, Y2 with [Y1, Y2] = Y1,
3. Hk is generated by {Y1, ..., Yk} with [Y2, Y3] = ... = [Y(k−1), Yk] = Y1.

(cf. [2])

The returned list is also stored as LIE_LIST.
The matrix LIENTRANS gives the transformation from the given basis {X1, ..., Xn} into the standard basis {Y1, ..., Yn}: Yj = (LIENTRANS)^k_j Xk. A more detailed output can be obtained by turning on the switch TR_LIE:

    ON TR_LIE;

before the procedure LIENDIMCOM1 is called. The returned list could be an input for a data bank in which mathematically relevant properties of the obtained Lie algebras are stored.

lie1234

This part of the package classifies real low-dimensional Lie algebras L of the dimension n := dim L = 1, 2, 3, 4. L is also given by its structure constants c^k_ij in the basis {X1, ..., Xn} with [Xi, Xj] = c^k_ij Xk. An ARRAY LIESTRIN(n, n, n) has to be defined and LIESTRIN(i, j, k) := c^k_ij for i < j should be given. Then the procedure LIECLASS can be performed whose syntax is:

    LIECLASS(⟨number⟩).

⟨number⟩ should be the dimension of the Lie algebra L. The procedure stepwise simplifies the commutator relations of L using properties of invariance like the dimension of the centre, of the derived algebra, unimodularity etc. The returned value has the form:

    {LIEALG(n),COMTAB(m)},

where m corresponds to the number of the standard form (basis: {Y1, ..., Yn}) in an enumeration scheme. The corresponding enumeration schemes are listed below (cf. [3],[1]). In case that the standard form in the enumeration scheme depends on one (or two) parameter(s) p1 (and p2) the list is expanded to:

    {LIEALG(n),COMTAB(m),p1,p2}.

This returned value is also stored as LIE_CLASS. The linear transformation from the basis {X1, ..., Xn} into the basis of the standard form {Y1, ..., Yn} is given by the matrix LIEMAT: Yj = (LIEMAT)^k_j Xk.
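Both modules take their input as a table of structure constants c^k_ij. Before filling LIESTRIN it can be useful to check that a candidate table actually defines a Lie algebra, i.e. that it is antisymmetric and satisfies the Jacobi identity. The following Python sketch (illustrative only, not part of the package) performs this check, using the standard form LIEALG(2),COMTAB(1), i.e. [Y1, Y2] = Y2, as an example:

```python
def is_lie_algebra(c, n):
    """Check antisymmetry and the Jacobi identity for structure constants
    c[i][j][k] = c^k_ij (0-based), where [X_i, X_j] = sum_k c^k_ij X_k."""
    # antisymmetry: c^k_ij = -c^k_ji
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if c[i][j][k] != -c[j][i][k]:
                    return False
    # Jacobi identity: [[Xi,Xj],Xk] + [[Xj,Xk],Xi] + [[Xk,Xi],Xj] = 0,
    # i.e. sum_m (c^m_ij c^l_mk + c^m_jk c^l_mi + c^m_ki c^l_mj) = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    s = sum(c[i][j][m] * c[m][k][l]
                            + c[j][k][m] * c[m][i][l]
                            + c[k][i][m] * c[m][j][l] for m in range(n))
                    if s != 0:
                        return False
    return True

# [Y1, Y2] = Y2, the standard form LIEALG(2),COMTAB(1):
n = 2
c = [[[0] * n for _ in range(n)] for _ in range(n)]
c[0][1][1] = 1   # c^2_12 = 1
c[1][0][1] = -1  # forced by antisymmetry
print(is_lie_algebra(c, n))  # True
```

Inconsistent input (e.g. structure constants violating antisymmetry) would otherwise only surface as unexpected behaviour of the classification procedures.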
By the value m and the parameters further examinations of the Lie algebra are possible, especially if in a data bank mathematical relevant properties of the enumerated standard forms are stored. Enumeration schemes for lie1234 returned list LIE_CLASS the corresponding commutator relations LIEALG(1),COMTAB(0) commutative case LIEALG(2),COMTAB(0) commutative case LIEALG(2),COMTAB(1) [Y1 , Y2 ] = Y2 LIEALG(3),COMTAB(0) commutative case LIEALG(3),COMTAB(1) [Y1 , Y2 ] = Y3 LIEALG(3),COMTAB(2) [Y1 , Y3 ] = Y3 LIEALG(3),COMTAB(3) [Y1 , Y3 ] = Y1 , [Y2 , Y3 ] = Y2 LIEALG(3),COMTAB(4) [Y1 , Y3 ] = Y2 , [Y2 , Y3 ] = Y1 LIEALG(3),COMTAB(5) [Y1 , Y3 ] = −Y2 , [Y2 , Y3 ] = Y1 LIEALG(3),COMTAB(6) [Y1 , Y3 ] = −Y1 + p1 Y2 , [Y2 , Y3 ] = Y1 , p1 6= 0 LIEALG(3),COMTAB(7) [Y1 , Y2 ] = Y3 , [Y1 , Y3 ] = −Y2 , [Y2 , Y3 ] = Y1 LIEALG(3),COMTAB(8) [Y1 , Y2 ] = Y3 , [Y1 , Y3 ] = Y2 , [Y2 , Y3 ] = Y1 LIEALG(4),COMTAB(0) commutative case LIEALG(4),COMTAB(1) [Y1 , Y4 ] = Y1 LIEALG(4),COMTAB(2) [Y2 , Y4 ] = Y1 LIEALG(4),COMTAB(3) [Y1 , Y3 ] = Y1 , [Y2 , Y4 ] = Y2 LIEALG(4),COMTAB(4) [Y1 , Y3 ] = −Y2 , [Y2 , Y4 ] = Y2 , [Y1 , Y4 ] = [Y2 , Y3 ] = Y1 LIEALG(4),COMTAB(5) [Y2 , Y4 ] = Y2 , [Y1 , Y4 ] = [Y2 , Y3 ] = Y1 LIEALG(4),COMTAB(6) [Y2 , Y4 ] = Y1 , [Y3 , Y4 ] = Y2 LIEALG(4),COMTAB(7) [Y2 , Y4 ] = Y2 , [Y3 , Y4 ] = Y1 LIEALG(4),COMTAB(8) [Y1 , Y4 ] = −Y2 , [Y2 , Y4 ] = Y1 LIEALG(4),COMTAB(9) [Y1 , Y4 ] = −Y1 + p1 Y2 , [Y2 , Y4 ] = Y1 , p1 6= 0 LIEALG(4),COMTAB(10) [Y1 , Y4 ] = Y1 , [Y2 , Y4 ] = Y2 LIEALG(4),COMTAB(11) [Y1 , Y4 ] = Y2 , [Y2 , Y4 ] = Y1 606 CHAPTER 16. 
    LIEALG(4),COMTAB(12)       [Y1,Y4] = Y1 + Y2, [Y2,Y4] = Y2 + Y3, [Y3,Y4] = Y3
    LIEALG(4),COMTAB(13)       [Y1,Y4] = Y1, [Y2,Y4] = p1*Y2, [Y3,Y4] = p2*Y3,
                               p1, p2 ≠ 0
    LIEALG(4),COMTAB(14)       [Y1,Y4] = p1*Y1 + Y2, [Y2,Y4] = -Y1 + p1*Y2,
                               [Y3,Y4] = p2*Y3, p2 ≠ 0
    LIEALG(4),COMTAB(15)       [Y1,Y4] = p1*Y1 + Y2, [Y2,Y4] = p1*Y2,
                               [Y3,Y4] = Y3, p1 ≠ 0
    LIEALG(4),COMTAB(16)       [Y1,Y4] = 2*Y1, [Y2,Y3] = Y1, [Y2,Y4] = (1 + p1)*Y2,
                               [Y3,Y4] = (1 - p1)*Y3, p1 ≥ 0
    LIEALG(4),COMTAB(17)       [Y1,Y4] = 2*Y1, [Y2,Y3] = Y1, [Y2,Y4] = Y2 - p1*Y3,
                               [Y3,Y4] = p1*Y2 + Y3, p1 ≠ 0
    LIEALG(4),COMTAB(18)       [Y1,Y4] = 2*Y1, [Y2,Y3] = Y1, [Y2,Y4] = Y2 + Y3,
                               [Y3,Y4] = Y3
    LIEALG(4),COMTAB(19)       [Y2,Y3] = Y1, [Y2,Y4] = Y3, [Y3,Y4] = Y2
    LIEALG(4),COMTAB(20)       [Y2,Y3] = Y1, [Y2,Y4] = -Y3, [Y3,Y4] = Y2
    LIEALG(4),COMTAB(21)       [Y1,Y2] = Y3, [Y1,Y3] = -Y2, [Y2,Y3] = Y1
    LIEALG(4),COMTAB(22)       [Y1,Y2] = Y3, [Y1,Y3] = Y2, [Y2,Y3] = Y1

Bibliography

[1] M.A.H. MacCallum. On the classification of the real four-dimensional Lie algebras. 1979.

[2] C. Schoebel. Classification of real n-dimensional Lie algebras with a low-dimensional derived algebra. In Proc. Symposium on Mathematical Physics ’92, 1993.

[3] F. Schoebel. The symbolic classification of real four-dimensional Lie algebras. 1992.

16.36 LIMITS: A package for finding limits

This package loads automatically.

Author: Stanley L. Kameny.

LIMITS is a fast limit package for REDUCE for functions which are continuous except for computable poles and singularities, based on some earlier work by Ian Cohen and John P. Fitch. The Truncated Power Series package is used for non-critical points, at which the value of the function is the constant term in the expansion around that point.
l’Hôpital’s rule is used in critical cases, with preprocessing of ∞ − ∞ forms and reformatting of product forms in order to apply l’Hôpital’s rule. A limited amount of bounded arithmetic is also employed where applicable.

16.36.1 Normal entry points

LIMIT(⟨EXPRN:algebraic⟩, ⟨VAR:kernel⟩, ⟨LIMPOINT:algebraic⟩) : algebraic

This is the standard way of calling limit, applying all of the methods. The result is the limit of EXPRN as VAR approaches LIMPOINT.

16.36.2 Direction-dependent limits

LIMIT!+(⟨EXPRN:algebraic⟩, ⟨VAR:kernel⟩, ⟨LIMPOINT:algebraic⟩) : algebraic
LIMIT!-(⟨EXPRN:algebraic⟩, ⟨VAR:kernel⟩, ⟨LIMPOINT:algebraic⟩) : algebraic

If the limit depends upon the direction of approach to the LIMPOINT, the functions LIMIT!+ and LIMIT!- may be used. They are defined by:

    LIMIT!+ (LIMIT!-) (EXP,VAR,LIMPOINT) → LIMIT(EXP*, ε, 0),
    EXP* = sub(VAR=VAR+(-)ε^2, EXP)

16.37 LINALG: Linear algebra package

This package provides a selection of functions that are useful in the world of linear algebra.

Author: Matt Rebbeck.

16.37.1 Introduction

This package provides a selection of functions that are useful in the world of linear algebra. These functions are described alphabetically in subsection 16.37.3. They can be classified into four sections. Contributions to this package have been made by Walter Tietze (ZIB).

Basic matrix handling

    add_columns      add_rows         add_to_columns   add_to_rows
    augment_columns  char_poly        column_dim       copy_into
    diagonal         extend           find_companion   get_columns
    get_rows         hermitian_tp     matrix_augment   matrix_stack
    minor            mult_columns     mult_rows        pivot
    remove_columns   remove_rows      row_dim          rows_pivot
    stack_rows       sub_matrix       swap_columns     swap_entries
    swap_rows

Constructors

Functions that create matrices.
    band_matrix      block_matrix     char_matrix      coeff_matrix
    companion        hessian          hilbert          mat_jacobian
    jordan_block     make_identity    random_matrix    toeplitz
    Vandermonde      Kronecker_Product

High level algorithms

    char_poly        cholesky         gram_schmidt     lu_decom
    pseudo_inverse   simplex          svd              triang_adjoint

There is a separate NORMFORM[1] package for computing the following matrix normal forms in REDUCE: smithex, smithex_int, frobenius, ratjordan, jordansymbolic, jordan.

Predicates

    matrixp          squarep          symmetricp

Note on examples: In the examples the matrix A will be

        [1 2 3]
    A = [4 5 6]
        [7 8 9]

Notation: Throughout, I is used to indicate the identity matrix and A^T to indicate the transpose of the matrix A.

16.37.2 Getting started

If you have not used matrices within REDUCE before then the following may be helpful.

Creating matrices

Initialisation of matrices takes the following syntax:

    mat1 := mat((a,b,c),(d,e,f),(g,h,i));

will produce

            [a b c]
    mat1 := [d e f]
            [g h i]

Getting at the entries

The (i, j)th entry can be accessed by:

    mat1(i,j);

Loading the linear_algebra package

The package is loaded by:

    load_package linalg;

16.37.3 What’s available

add_columns, add_rows

Syntax: add_columns(A,c1,c2,expr);

    A      :- a matrix.
    c1, c2 :- positive integers.
    expr   :- a scalar expression.

Synopsis: add_columns replaces column c2 of A by expr * column(A,c1) + column(A,c2). add_rows performs the equivalent task on the rows of A.

Examples:

                              [1  x+2    3]
    add_columns(A, 1, 2, x) = [4  4*x+5  6]
                              [7  7*x+8  9]

                           [ 1  2  3]
    add_rows(A, 2, 3, 5) = [ 4  5  6]
                           [27 33 39]

Related functions: add_to_columns, add_to_rows, mult_columns, mult_rows.

add_rows

See: add_columns.

add_to_columns, add_to_rows

Syntax: add_to_columns(A,column_list,expr);

    A           :- a matrix.
    column_list :- a positive integer or a list of positive integers.
    expr        :- a scalar expression.
Synopsis: add_to_columns adds expr to each column specified in column_list of A. add_to_rows performs the equivalent task on the rows of A. Examples:   11 12 3 add_to_columns(A, {1, 2}, 10) = 14 15 6 17 18 9   1 2 3 add_to_rows(A, 2, −x) = −x + 4 −x + 5 −x + 6 7 8 9 Related functions: add_columns, add_rows, mult_rows, mult_columns. 612 CHAPTER 16. USER CONTRIBUTED PACKAGES add_to_rows See: add_to_columns. augment_columns, stack_rows Syntax: augment_columns(A,column_list); A column_list ::- a matrix. either a positive integer or a list of positive integers. Synopsis: augment_columns gets hold of the columns of A specified in column_list and sticks them together. stack_rows performs the same task on rows of A. Examples:   cc1 2 augment_columns(A, {1, 2}) =  4 5 7 8   1 2 3 stack_rows(A, {1, 3}) = 7 8 9 Related functions: get_columns, get_rows, sub_matrix. band_matrix Syntax: band_matrix(expr_list,square_size); expr_list :- square_size :- either a single scalar expression or a list of an odd number of scalar expressions. a positive integer. Synopsis: band_matrix creates a square matrix of dimension square_size. The diagonal consists of the middle expr of the expr_list. The expressions to the left of this fill the required number of sub-diagonals and the expressions to the right the super-diagonals. 613  y x  0 Examples: band_matrix({x, y, z}, 6) =  0  0 0 z y x 0 0 0 0 z y x 0 0 0 0 z y x 0 0 0 0 z y x  0 0  0  0  z y Related functions: diagonal. block_matrix Syntax: block_matrix(r,c,matrix_list); r, c matrix_list ::- positive integers. a list of matrices. Synopsis: block_matrix creates a matrix that consists of r × c matrices filled from the matrix_list row-wise. Examples:       22 33 5 1 0 , D= , C= B= 44 55 5 0 1  1 0 5 22 33  0 1 5 44 55  block_matrix(2, 3, {B, C, D, D, C, B}) =  22 33 5 1 0  44 55 5 0 1  char_matrix Syntax: char_matrix(A, λ); A λ ::- a square matrix. a symbol or algebraic expression. 
Synopsis: char_matrix creates the characteristic matrix C of A. This is C = λI − A.

Examples:

char_matrix(A, x) = mat((x-1, -2, -3), (-4, x-5, -6), (-7, -8, x-9))

Related functions: char_poly.

char_poly

Syntax: char_poly(A, λ);

A :- a square matrix.
λ :- a symbol or algebraic expression.

Synopsis: char_poly finds the characteristic polynomial of A. This is the determinant of λI − A.

Examples: char_poly(A, x) = x^3 − 15*x^2 − 18*x

Related functions: char_matrix.

cholesky

Syntax: cholesky(A);

A :- a positive definite matrix containing numeric entries.

Synopsis: cholesky computes the cholesky decomposition of A. It returns {L, U} where L is a lower matrix, U is an upper matrix, A = LU, and U = L^T.

Examples:

F = mat((1,1,0),(1,3,1),(0,1,1))

cholesky(F) = {mat((1, 0, 0), (1, sqrt(2), 0), (0, 1/sqrt(2), 1/sqrt(2))),
               mat((1, 1, 0), (0, sqrt(2), 1/sqrt(2)), (0, 0, 1/sqrt(2)))}

Related functions: lu_decom.

coeff_matrix

Syntax: coeff_matrix({lin_eqn1, lin_eqn2, ..., lin_eqnn}); (If you’re feeling lazy then the {}’s can be omitted.)

lin_eqn1, lin_eqn2, ..., lin_eqnn :- linear equations. Can be of the form equation = number or just equation, which is equivalent to equation = 0.

Synopsis: coeff_matrix creates the coefficient matrix C of the linear equations. It returns {C, X, B} such that CX = B.

Examples:

coeff_matrix({x + y + 4*z = 10, y + x − z = 20, x + y + 4}) =
  {mat((4,1,1),(-1,1,1),(0,1,1)), mat((z),(y),(x)), mat((10),(20),(-4))}

column_dim, row_dim

Syntax: column_dim(A);

A :- a matrix.

Synopsis: column_dim finds the column dimension of A. row_dim finds the row dimension of A.

Examples: column_dim(A) = 3

companion

Syntax: companion(poly,x);

poly :- a monic univariate polynomial in x.
x :- the variable.

Synopsis: companion creates the companion matrix C of poly. This is the square matrix of dimension n, where n is the degree of poly w.r.t. x. The entries of C are: C(i, n) = −coeffn(poly, x, i − 1) for i = 1, . . .
, n, C(i, i − 1) = 1 for i = 2, . . . , n and the rest are 0.   0 0 0 −11 1 0 0 0   Examples: companion(x4 + 17 ∗ x3 − 9 ∗ x2 + 11, x) =  0 1 0 9  0 0 1 −17 Related functions: find_companion. copy_into Syntax: copy_into(A, B,r,c); A, B r, c ::- matrices. positive integers. Synopsis: copy_into copies matrix A into B with A(1, 1) at B(r, c). Examples:  0 0 G= 0 0 0 0 0 0 0 0 0 0  0 0  0 0  0 0 copy_into(A, G, 1, 2) =  0 0 1 4 7 0 2 5 8 0  3 6  9 0 Related functions: augment_columns, extend, matrix_augment, matrix_stack, stack_rows, sub_matrix. diagonal Syntax: diagonal({mat1 ,mat2 , ...,matn });15 15 If you’re feeling lazy then the {}’s can be omitted. 617 mat1 ,mat2 , . . . ,matn :- each can be either a scalar expr or a square matrix. Synopsis: diagonal creates a matrix that contains the input on the diagonal. Examples:  H= 66 77 88 99   1 4  7 diagonal({A, x, H}) =  0  0 0 2 5 8 0 0 0 3 6 9 0 0 0  0 0 0 0 0 0  0 0 0  x 0 0  0 66 77 0 88 99 Related functions: jordan_block. extend Syntax: extend(A,r,c,expr); A :- a matrix. r, c :- positive integers. expr :- algebraic expression or symbol. Synopsis: extend returns a copy of A that has been extended by r rows and c columns. The new entries are made equal to expr.   1 2 3 x x  4 5 6 x x  Examples: extend(A, 1, 2, x) =   7 8 9 x x x x x x x Related functions: copy_into, matrix_augment, matrix_stack, remove_columns, remove_rows. find_companion Syntax: find_companion(A,x); 618 CHAPTER 16. USER CONTRIBUTED PACKAGES A x ::- a matrix. the variable. Synopsis: Given a companion matrix, find_companion finds the polynomial from which it was made. Examples:  0 1 C= 0 0 0 0 1 0  0 −11 0 0   0 9  1 −17 find_companion(C, x) = x4 + 17 ∗ x3 − 9 ∗ x2 + 11 Related functions: companion. get_columns, get_rows Syntax: get_columns(A,column_list); A c ::- a matrix. either a positive integer or a list of positive integers. 
Synopsis: get_columns removes the columns of A specified in column_list and returns them as a list of column matrices. get_rows performs the same task on the rows of A.

Examples:

get_columns(A, {1, 3}) = {mat((1),(4),(7)), mat((3),(6),(9))}

get_rows(A, 2) = {mat((4,5,6))}

Related functions: augment_columns, stack_rows, sub_matrix.

get_rows

See: get_columns.

gram_schmidt

Syntax: gram_schmidt({vec1, vec2, ..., vecn}); (If you’re feeling lazy then the {}’s can be omitted.)

vec1, vec2, ..., vecn :- linearly-independent vectors. Each vector must be written as a list, eg: {1,0,0}.

Synopsis: gram_schmidt performs the Gram-Schmidt orthonormalisation on the input vectors. It returns a list of orthogonal normalised vectors.

Examples:

gram_schmidt({{1,0,0},{1,1,0},{1,1,1}}) = {{1,0,0},{0,1,0},{0,0,1}}

gram_schmidt({{1,2},{3,4}}) = {{1/sqrt(5), 2/sqrt(5)}, {2*sqrt(5)/5, -sqrt(5)/5}}

hermitian_tp

Syntax: hermitian_tp(A);

A :- a matrix.

Synopsis: hermitian_tp computes the hermitian transpose of A. This is a matrix in which the (i, j)th entry is the conjugate of the (j, i)th entry of A.

Examples:

J = mat((i+1, i+2, i+3), (4, 5, 2), (1, i, 0))

hermitian_tp(J) = mat((-i+1, 4, 1), (-i+2, 5, -i), (-i+3, 2, 0))

Related functions: tp (standard REDUCE call for the transpose of a matrix; see section 14.4).

hessian

Syntax: hessian(expr,variable_list);

expr :- a scalar expression.
variable_list :- either a single variable or a list of variables.

Synopsis: hessian computes the hessian matrix of expr w.r.t. the variables in variable_list. This is an n × n matrix where n is the number of variables and the (i, j)th entry is df(expr,variable_list(i),variable_list(j)).

Examples:

hessian(x*y*z + x^2, {w, x, y, z}) = mat((0,0,0,0), (0,2,z,y), (0,z,0,x), (0,y,x,0))

Related functions: df (standard REDUCE call for differentiation; see section 7.8).

hilbert

Syntax: hilbert(square_size,expr);

square_size :- a positive integer.
expr :- an algebraic expression.

Synopsis: hilbert computes the square hilbert matrix of dimension square_size.
This is the symmetric matrix in which the (i, j)th entry is 1/(i + j − expr).

Examples:

hilbert(3, y + x) = mat((-1/(x+y-2), -1/(x+y-3), -1/(x+y-4)),
                        (-1/(x+y-3), -1/(x+y-4), -1/(x+y-5)),
                        (-1/(x+y-4), -1/(x+y-5), -1/(x+y-6)))

mat_jacobian

Syntax: mat_jacobian(expr_list,variable_list);

expr_list :- either a single algebraic expression or a list of algebraic expressions.
variable_list :- either a single variable or a list of variables.

Synopsis: mat_jacobian computes the jacobian matrix of expr_list w.r.t. variable_list. This is a matrix whose (i, j)th entry is df(expr_list(i),variable_list(j)). The matrix is m × n, where m is the number of expressions and n the number of variables.

Examples:

mat_jacobian({x^4, x*y^2, x*y*z^3}, {w, x, y, z}) =
  mat((0, 4*x^3, 0, 0), (0, y^2, 2*x*y, 0), (0, y*z^3, x*z^3, 3*x*y*z^2))

Related functions: hessian, df (standard REDUCE call for differentiation; see section 7.8).

NOTE: The function mat_jacobian used to be called just "jacobian"; however, use of that name was in conflict with another REDUCE package.

jordan_block

Syntax: jordan_block(expr,square_size);

expr :- an algebraic expression or symbol.
square_size :- a positive integer.

Synopsis: jordan_block computes the square jordan block matrix J of dimension square_size. The entries of J are: J(i, i) = expr for i = 1, . . . , n, J(i, i + 1) = 1 for i = 1, . . . , n − 1, and all other entries are 0.

Examples:

jordan_block(x,5) = mat((x,1,0,0,0), (0,x,1,0,0), (0,0,x,1,0), (0,0,0,x,1), (0,0,0,0,x))

Related functions: diagonal, companion.

lu_decom

Syntax: lu_decom(A);

A :- a matrix containing either numeric entries or imaginary entries with numeric coefficients.

Synopsis: lu_decom performs LU decomposition on A, ie: it returns {L, U} where L is a lower diagonal matrix, U an upper diagonal matrix and A = LU.

Caution: The algorithm used can swap the rows of A during the calculation.
This means that LU does not equal A but a row equivalent of it. Due to this, lu_decom returns {L, U,vec}. The call convert(A,vec) will return the matrix that has been decomposed, ie: LU = convert(A,vec).   1 3 5 Examples: K = −4 3 7 8 6 4      1 0.75 0.5 0 0   8 1 1.5 , [ 3 2 3 ] 6 0  , 0 lu := lu_decom(K) = −4   0 0 1 1 2.25 1.1251  8 first lu * second lu = −4 1  8  convert(K,third lu) = −4 1  6 4 3 7 3 5  6 4 3 7 3 5   i+1 i+2 i+3 5 2  P= 4 1 i 0   1 0 0  , −4 ∗ i + 5 0 lu := lu_decom(P) =  4  i+1 3 0.41463 ∗ i + 2.26829    1 i 0  0 1 0.19512 ∗ i + 0.24390 , [ 3 2 3 ]  0 0 1 623   1 i 0 5 2  first lu * second lu =  4 i+1 i+2 i+3   1 i 0 5 2  convert(P, thirdlu) =  4 i+1 i+2 i+3 Related functions: cholesky. make_identity Syntax: make_identity(square_size); square_size :- a positive integer. Synopsis: make_identity creates the identity matrix of dimension square_size.   1 0 0 0 0 1 0 0  Examples: make_identity(4) =  0 0 1 0 0 0 0 1 Related functions: diagonal. matrix_augment, matrix_stack Syntax: matrix_augment({mat1 ,mat2 , ...,matn });20 mat1 ,mat2 , . . . ,matn :- matrices. Synopsis: matrix_augment sticks the matrices in matrix_list together horizontally. matrix_stack sticks the matrices in matrix_list together vertically. 20 If you’re feeling lazy then the {}’s can be omitted. 624 CHAPTER 16. USER CONTRIBUTED PACKAGES Examples:  1  matrix_augment({A, A}) = 4 7  1 2 4 5  7 8 matrix_stack({A, A}) =  1 2  4 5 7 8  2 3 1 2 3 4 6 4 5 6 8 9 7 8 9  3 6  9  3  6 9 Related functions: augment_columns, stack_rows, sub_matrix. matrixp Syntax: matrixp(test_input); test_input :- anything you like. Synopsis: matrixp is a boolean function that returns t if the input is a matrix and nil otherwise. Examples: matrixp(A) = t matrixp(doodlesackbanana) = nil Related functions: squarep, symmetricp. matrix_stack See: matrix_augment. minor Syntax: minor(A,r,c); A r, c ::- a matrix. positive integers. 
Synopsis: minor computes the (r, c)th minor of A. This is created by removing the rth row and the cth column from A.

Examples:

minor(A, 1, 3) = mat((4,5),(7,8))

Related functions: remove_columns, remove_rows.

mult_columns, mult_rows

Syntax: mult_columns(A,column_list,expr);

A :- a matrix.
column_list :- a positive integer or a list of positive integers.
expr :- an algebraic expression.

Synopsis: mult_columns returns a copy of A in which the columns specified in column_list have been multiplied by expr. mult_rows performs the same task on the rows of A.

Examples:

mult_columns(A, {1, 3}, x) = mat((x, 2, 3*x), (4*x, 5, 6*x), (7*x, 8, 9*x))

mult_rows(A, 2, 10) = mat((1,2,3), (40,50,60), (7,8,9))

Related functions: add_to_columns, add_to_rows.

mult_rows

See: mult_columns.

pivot

Syntax: pivot(A,r,c);

A :- a matrix.
r, c :- positive integers such that A(r, c) ≠ 0.

Synopsis: pivot pivots A about its (r, c)th entry. To do this, multiples of the rth row are added to every other row in the matrix. This means that the cth column will be 0 except for the (r, c)th entry.

Examples:

pivot(A, 2, 3) = mat((-1, -0.5, 0), (4, 5, 6), (1, 0.5, 0))

Related functions: rows_pivot.

pseudo_inverse

Syntax: pseudo_inverse(A);

A :- a matrix containing only real numeric entries.

Synopsis: pseudo_inverse, also known as the Moore-Penrose inverse, computes the pseudo inverse of A. Given the singular value decomposition of A, i.e: A = UΣV^T, then the pseudo inverse A† is defined by A† = VΣ†U^T. For the diagonal matrix Σ, the pseudoinverse Σ† is computed by taking the reciprocal of only the nonzero diagonal elements. If A is square and non-singular, then A† = A^(-1). In general, however, AA†A = A, and A†AA† = A†. Perhaps more importantly, A† solves the following least-squares problem: given a rectangular matrix A and a vector b, find the x minimizing ||Ax − b||_2 which, in addition, has minimum ℓ_2 (euclidean) norm ||x||_2. This x is A†b.
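When A has full row rank, the pseudo inverse admits the closed form A† = A^T (A A^T)^(-1), which is easy to cross-check outside REDUCE. Below is a pure-Python sketch of that special case (the function name is ours and the full-row-rank restriction is an assumption of this sketch; the package itself works via the SVD, which also covers rank-deficient input):

```python
from fractions import Fraction

def pinv_full_row_rank(A):
    """Moore-Penrose pseudo inverse A+ = A^T (A A^T)^(-1).

    Valid only when A has full row rank (illustrative sketch)."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    # G = A A^T, the m x m Gram matrix
    G = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
         for i in range(m)]
    # Invert G by Gauss-Jordan elimination on [G | I]
    Ginv = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if G[r][col] != 0)
        G[col], G[piv] = G[piv], G[col]
        Ginv[col], Ginv[piv] = Ginv[piv], Ginv[col]
        d = G[col][col]
        G[col] = [x / d for x in G[col]]
        Ginv[col] = [x / d for x in Ginv[col]]
        for r in range(m):
            if r != col and G[r][col] != 0:
                f = G[r][col]
                G[r] = [x - f * y for x, y in zip(G[r], G[col])]
                Ginv[r] = [x - f * y for x, y in zip(Ginv[r], Ginv[col])]
    # A+ = A^T G^(-1)
    return [[sum(A[i][r] * Ginv[i][c] for i in range(m)) for c in range(m)]
            for r in range(n)]
```

With exact rational arithmetic this reproduces the pseudo_inverse(R) example shown below for R = mat((1,2,3,4),(9,8,7,6)).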
Examples:    1 2 3 4 R= , 9 8 7 6 Related functions: svd.  −0.2 0.1 −0.05 0.05   pseudo_inverse(R) =   0.1 0  0.25 −0.05 627 random_matrix Syntax: random_matrix(r,c,limit); r, c, limit :- positive integers. Synopsis: random_matrix creates an r × c matrix with random entries in the range −limit < entry < limit. Switches: imaginary :- not_negative :- only_integer :- symmetric upper_matrix lower_matrix :::- if on, then matrix entries are x + iy where −limit < x, y < limit. if on then 0 < entry < limit. In the imaginary case we have 0 < x, y < limit. if on then each entry is an integer. In the imaginary case x, y are integers. if on then the matrix is symmetric. if on then the matrix is upper triangular. if on then the matrix is lower triangular. Examples:   −4.729721 6.987047 7.521383 random_matrix(3, 3, 10) = −5.224177 5.797709 −4.321952 −9.418455 −9.94318 −0.730980 on only_integer, not_negative, upper_matrix, imaginary;   2∗i+5 3∗i+7 7∗i+3 6  0 2 ∗ i + 5 5 ∗ i + 1 2 ∗ i + 1  random_matrix(4, 4, 10) =    0 0 8 i 0 0 0 5∗i+9 remove_columns, remove_rows Syntax: remove_columns(A,column_list); A column_list ::- a matrix. either a positive integer or a list of positive integers. Synopsis: remove_columns removes the columns specified in column_list from A. remove_rows performs the same task on the rows of A. 628 CHAPTER 16. USER CONTRIBUTED PACKAGES Examples:   1 3 remove_columns(A, 2) = 4 6 7 9 remove_rows(A, {1, 3}) = 4 5 6  Related functions: minor. remove_rows See: remove_columns. row_dim See: column_dim. rows_pivot Syntax: rows_pivot(A,r,c,{row_list}); A r,c row_list :::- a matrix. positive integers such that A(r,c) neq 0. positive integer or a list of positive integers. Synopsis: rows_pivot performs the same task as pivot but applies the pivot only to the rows specified in row_list. 
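Both pivot and rows_pivot implement the same elementary row operation described above: subtract the right multiple of the pivot row from each target row so that the pivot column is cleared. A pure-Python sketch of that arithmetic, using 1-based indices as in the manual (the code is illustrative, not the package's implementation):

```python
def pivot(A, r, c):
    """Pivot A about entry (r, c): add multiples of row r to every
    other row so that column c becomes 0 except at (r, c)."""
    if A[r - 1][c - 1] == 0:
        raise ValueError("pivot entry must be nonzero")
    out = [row[:] for row in A]
    for i in range(len(A)):
        if i != r - 1:
            f = A[i][c - 1] / A[r - 1][c - 1]
            out[i] = [x - f * y for x, y in zip(A[i], A[r - 1])]
    return out
```

For A = mat((1,2,3),(4,5,6),(7,8,9)) this reproduces the pivot(A, 2, 3) example given earlier.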
Examples:  1 4  N = 7 1 4 2 5 8 2 5  3 6  9  3 6  rows_pivot(N , 2, 3, {4, 5}) 1  4  =  7  −0.75 −0.375  2 3 5 6   8 9   0 0.75  0 0.375 629 Related functions: pivot. simplex Syntax: simplex(max/min,objective function,{linear inequalities},[{bounds}]); max/min :- objective function linear inequalities ::- bounds :- either max or min (signifying maximise and minimise). the function you are maximising or minimising. the constraint inequalities. Each one must be of the form sum of variables (<=, =, >=) number. bounds on the variables as specified for the LP file format. Each bound is of one of the forms l ≤ v, v ≤ u, or l ≤ v ≤ u, where v is a variable and l, u are numbers or infinity or -infinity Synopsis: simplex applies the revised simplex algorithm to find the optimal(either maximum or minimum) value of the objective function under the linear inequality constraints. It returns {optimal value,{ values of variables at this optimal}}. The {bounds} argument is optional and admissible only when the switch fastsimplex is on, which is the default. Without a {bounds} argument, the algorithm implies that all the variables are non-negative. Examples: simplex(max,x+y,{x>=10,y>=20,x+y<=25}); ***** Error in simplex: Problem has no feasible solution. simplex(max,10x+5y+5.5z,{5x+3z<=200,x+0.1y+0.5z<=12, 0.1x+0.2y+0.3z<=9, 30x+10y+50z<=1500}); {525.0,{x=40.0,y=25.0,z=0}} squarep Syntax: squarep(A); 630 CHAPTER 16. USER CONTRIBUTED PACKAGES A :- a matrix. Synopsis: squarep is a boolean function that returns t if the matrix is square and nil otherwise. Examples: L= 1 3 5  squarep(A) = t squarep(L) = nil Related functions: matrixp, symmetricp. stack_rows See: augment_columns. sub_matrix Syntax: sub_matrix(A,row_list,column_list); A row_list, column_list ::- a matrix. either a positive integer or a list of positive integers. 
Synopsis: sub_matrix produces the matrix consisting of the intersection of the rows specified in row_list and the columns specified in column_list.   2 3 Examples: sub_matrix(A, {1, 3}, {2, 3}) = 8 9 Related functions: augment_columns, stack_rows. svd (singular value decomposition) Syntax: svd(A); A :- a matrix containing only real numeric entries. 631 Synopsis: svd computes the singular value decomposition of A. If A is an m × n real matrix of (column) rank r, svd returns the 3-element list {U, Σ, V} where A = UΣV T . Let k = min(m, n). Then U is m × k, V is n × k, and and Σ = diag(σ1 , . . . , σk ), where σi ≥ 0 are the singular values of A; only r of these are non-zero. The singular values are the non-negative square roots of the eigenvalues of AT A. U and V are such that UU T = VV T = V T V = Ik . Note: there are a number of different definitions of SVD in the literature, in some of which Σ is square and U and V rectangular, as here, but in others U and V are square, and Σ is rectangular. Examples:  1 3 Q = −4 3 3 6   0.0236042 svd(Q) = −0.969049  0.245739  0.959473 −0.281799  0.959473 svd(TP(Q)) = −0.281799  0.0236042 −0.969049 0.245739    0.419897  4.83288 0  0.232684 , , 0 7.52618 0.877237  0.281799 0.959473    4.83288 0 0.281799 , , 0 7.52618 0.959473  0.419897  0.232684  0.877237 swap_columns, swap_rows Syntax: swap_columns(A,c1,c2); A c1,c1 ::- a matrix. positive integers. Synopsis: swap_columns swaps column c1 of A with column c2. swap_rows performs the same task on 2 rows of A. 632 CHAPTER 16. USER CONTRIBUTED PACKAGES   1 3 2 Examples: swap_columns(A, 2, 3) = 4 6 5 7 9 8 Related functions: swap_entries. swap_entries Syntax: swap_entries(A,{r1,c1},{r2,c2}); A r1,c1,r2,c2 ::- a matrix. positive integers. Synopsis: swap_entries swaps A(r1,c1) with A(r2,c2).   9 2 3 Examples: swap_entries(A, {1, 1}, {3, 3}) = 4 5 6 7 8 1 Related functions: swap_columns, swap_rows. swap_rows See: swap_columns. 
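The svd synopsis above notes that the singular values are the non-negative square roots of the eigenvalues of A^T A. For a matrix with two columns, A^T A is 2 × 2 and its eigenvalues come from the quadratic formula, so the singular values can be cross-checked in a few lines of pure Python (the helper name is ours; this computes only the singular values, not the full decomposition):

```python
import math

def singular_values_2col(A):
    """Singular values of a real matrix with exactly two columns,
    via the eigenvalues of the 2 x 2 matrix A^T A."""
    a = sum(r[0] * r[0] for r in A)   # (A^T A)[0][0]
    b = sum(r[0] * r[1] for r in A)   # (A^T A)[0][1]
    d = sum(r[1] * r[1] for r in A)   # (A^T A)[1][1]
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    lam_hi = (a + d + disc) / 2
    lam_lo = (a + d - disc) / 2
    return sorted([math.sqrt(lam_hi), math.sqrt(max(lam_lo, 0.0))])
```

For Q = mat((1,3),(-4,3),(3,6)) this recovers the diagonal of Σ in the svd(Q) example, 4.83288 and 7.52618.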
symmetricp Syntax: symmetricp(A); A :- a matrix. Synopsis: symmetricp is a boolean function that returns t if the matrix is symmetric and nil otherwise. Examples:  M= 1 2 2 1  symmetricp(A) = nil symmetricp(M) = t 633 Related functions: matrixp, squarep. toeplitz Syntax: toeplitz({expr1 ,expr2 , ...,exprn }); 21 expr1 ,expr2 , . . . ,exprn :- algebraic expressions. Synopsis: toeplitz creates the toeplitz matrix from the expression list. This is a square symmetric matrix in which the first expression is placed on the diagonal and the i’th expression is placed on the (i-1)’th sub and super diagonals. It has dimension n where n is the number of expressions.   w x y z x w x y   Examples: toeplitz({w, x, y, z}) =  y x w x z y x w triang_adjoint Syntax: triang_adjoint(A); A :- a matrix. Synopsis: triang_adjoint computes the triangularizing adjoint F of matrix A due to the algorithm of Arne Storjohann. F is lower triangular matrix and the resulting matrix T of F ∗ A = T is upper triangular with the property that the i-th entry in the diagonal of T is the determinant of the principal i-th submatrix of the matrix A. Examples:   1 0 0 triang_adjoint(A) = −4 1 0  −3 6 −3   1 2 3 F ∗ A = 0 −3 −6 0 0 0 21 If you’re feeling lazy then the {}’s can be omitted. 634 CHAPTER 16. USER CONTRIBUTED PACKAGES Vandermonde Syntax: vandermonde({expr1 ,expr2 , . . . ,exprn }); 22 expr1 ,expr2 , . . . ,exprn :- algebraic expressions. Synopsis: Vandermonde creates the Vandermonde matrix from the expression list. (j−1) This is the square matrix in which the (i, j)th entry is expri . It has dimension n, where n is the number of expressions.   1 x x2 Examples: vandermonde({x, 2 ∗ y, 3 ∗ z}) = 1 2 ∗ y 4 ∗ y 2  1 3 ∗ z 9 ∗ z2 kronecker_product Syntax: kronecker_product(M1 , M2 ) M1 , M2 :- Matrices Synopsis: kronecker_product creates a matrix containing the Kronecker product (also called direct product or tensor product) of its arguments. 
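The Kronecker product has a simple index description: the result is a block matrix whose (i, j) block is A(i, j) times B. A pure-Python sketch of exactly that rule (illustrative, and purely numeric, so a symbolic entry such as z must be replaced by a number):

```python
def kron(A, B):
    """Kronecker product: the (i, j) block of the result is A[i][j] * B."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]
```

With the manual's a1 and a2 (taking z = 7 for the numeric check), the rows of kron(a1, a2) match the block structure of the kronecker_product example below.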
Examples:

a1 := mat((1,2),(3,4),(5,6))$
a2 := mat((1,1,1),(2,z,2),(3,3,3))$
kronecker_product(a1,a2);

mat((1, 1, 1, 2, 2, 2),
    (2, z, 2, 4, 2*z, 4),
    (3, 3, 3, 6, 6, 6),
    (3, 3, 3, 4, 4, 4),
    (6, 3*z, 6, 8, 4*z, 8),
    (9, 9, 9, 12, 12, 12),
    (5, 5, 5, 6, 6, 6),
    (10, 5*z, 10, 12, 6*z, 12),
    (15, 15, 15, 18, 18, 18))

16.37.4 Fast Linear Algebra

By turning the fast_la switch on, the speed of the following functions will be increased:

add_columns, add_rows, augment_columns, column_dim, copy_into, make_identity, matrix_augment, matrix_stack, minor, mult_column, mult_row, pivot, remove_columns, remove_rows, rows_pivot, squarep, stack_rows, sub_matrix, swap_columns, swap_entries, swap_rows, symmetricp

The increase in speed will be insignificant unless you are making a significant number (i.e. thousands) of calls. When using this switch, error checking is minimised. This means that illegal input may give strange error messages. Beware.

16.37.5 Acknowledgments

Many of the ideas for this package came from the Maple[3] Linalg package[4]. The algorithms for cholesky, lu_decom, and svd are taken from the book Linear Algebra by J. H. Wilkinson & C. Reinsch[5]. The gram_schmidt code comes from Karin Gatermann’s Symmetry package[6] for REDUCE.

Bibliography

[1] Matt Rebbeck: NORMFORM: A REDUCE package for the computation of various matrix normal forms. ZIB, Berlin (1993).
[2] Anthony C. Hearn: REDUCE User’s Manual 3.6. RAND (1995).
[3] Bruce W. Char et al.: Maple (Computer Program). Springer-Verlag (1991).
[4] Linalg: a linear algebra package for Maple[3].
[5] J. H. Wilkinson & C. Reinsch: Linear Algebra (volume II). Springer-Verlag (1971).
[6] Karin Gatermann: Symmetry: A REDUCE package for the computation of linear representations of groups. ZIB, Berlin (1992).

16.38 LISTVECOPS: Vector operations on lists

Author: Eberhard Schrüfer

This package implements vector operations on lists.
Addition, multiplication, division, and exponentiation work elementwise. For example, after A := {a1,a2,a3,a4}; B := {b1,b2,b3,b4}; c*A will simplify to {c*a1,..,c*a4}, A + B to {a1+b1,...,a4+b4}, and A*B to {a1*b1,...,a4*b4}. Linear operations work as expected: c1*A + c2*B; {a1*c1 + b1*c2, a2*c1 + b2*c2, a3*c1 + b3*c2, a4*c1 + b4*c2} A division and an exponentation example: {a,b,c}/{3,g,5}; a b c {---,---,---} 3 g 5 ws^3; 3 3 3 a b c {----,----,-----} 27 3 125 g The new operator *. (ldot) implements the dot product: {a,b,c,d} *. {5,7,9,11/d}; 5*a + 7*b + 9*c + 11 637 For accessing list elements, the new operator _ (lnth) can be used instead of the PART operator: l := {1,{2,3},4}$ lnth(l,3); 4 l _2*3; {6,9} l _2 _2; 3 It can also be used to modify a list (unlike PART, which returns a modified list): part(l,2,2):=three; {1,{2,three},4} l; {1,{2,3},4} l _ 2 _2 :=three; three l; {1,{2,three},4} Operators are distributed over lists: a *. log b; log(b1)*a1 + log(b2)*a2 + log(b3)*a3 + log(b4)*a4 df({sin x*y,x^3*cos y},x,2,y); { - sin(x), - 6*sin(y)*x} 638 CHAPTER 16. USER CONTRIBUTED PACKAGES int({sin x,cos x},x); { - cos(x),sin(x)} By using the keyword listproc, an algebraic procedure can be declared to return a list: listproc spat3(u,v,w); begin scalar x,y; x := u *. w; y := u *. v; return v*x - w*y end; 639 16.39 LPDO: Linear Partial Differential Operators Author: Thomas Sturm 16.39.1 Introduction Consider the field F = Q(x1 , . . . , xn ) of rational functions and a set ∆ = {∂x1 ,. . . , ∂xn } of commuting derivations acting on F . That is, for all ∂xi , ∂xj ∈ ∆ and all f , g ∈ F the following properties are satisfied: ∂xi (f + g) = ∂xi (f ) + ∂xi (g), ∂xi (f · g) = f · ∂xi (g) + ∂xi (f ) · g, ∂xi (∂xj (f )) = ∂xj (∂xi (f )). (16.87) (16.88) Consider now the set F [∂x1 , . . . , ∂xn ], where the derivations are used as variables. 
This set forms a non-commutative linear partial differential operator ring with pointwise addition, and multiplication defined as follows: For f ∈ F and ∂xi , ∂xj ∈ ∆ we have for any g ∈ F that (f ∂xi )(g) = f · ∂xi (g), (∂xi f )(g) = ∂xi (f · g), (16.89) (∂xi ∂xj )(g) = ∂xi (∂xj (g)). (16.90) Here “ · ” denotes the multiplication in F . From (16.90) and (16.88) it follows that ∂xi ∂xj = ∂xj ∂xi , and using (16.89) and (16.87) the following commutator can be proved: ∂xi f = f ∂xi + ∂xi (f ). A linear partial differential operator (LPDO) of order k is an element X D = aj ∂ j ∈ F [∂x1 , . . . , ∂xn ] |j|≤k in canonical form. Here the expression of the Pn |j| ≤ k specifies the set jof all jtuples 1 n form j = (j1 , . . . , jn ) ∈ N with i=1 ji ≤ k, and we define ∂ = ∂x1 · · · ∂xjnn . A factorization of D is a non-trivial decomposition D = D1 · · · Dr ∈ F [∂x1 , . . . , ∂xn ] into multiplicative factors, each of which is an LPDO Di of order greater than 0 and less than k. If such a factorization exists, then D is called reducible or factorable, else irreducible. 640 CHAPTER 16. USER CONTRIBUTED PACKAGES For the purpose of factorization it is helpful to temporarily consider as regular commutative polynomials certain summands of the LPDO under consideration. Consider a commutative polynomial ring over F in new indeterminates y1 , . . . , yn . Adopting the notational conventions above, for m ≤ k the symbol of D of order m is defined as X aj y j ∈ F [y1 , . . . , yn ]. Symm (D) = |j|=m For m = k we obtain as a special case the symbol Sym(D) of D. 16.39.2 Operators partial There is a unary operator partial(·) denoting ∂. hpartial-termi → partial ( hidi ) *** There is a binary operator *** for the non-commutative multiplication involving partials ∂x . 
All expressions involving *** are implicitly transformed into LPDOs, i.e., into the following normal form: hnormalized-lpdoi hnormalized-moni hpartial-termprodi → → → hnormalized-moni [ + hnormalized-lpdoi ] hF-elementi [ *** hpartial-termprodi ] hpartial-termi [ *** hpartial-termprodi ] The summands of the normalized-lpdo are ordered in some canonical way. As an example consider input: a()***partial(y)***b()***partial(x); (a()*b()) *** partial(x) *** partial(y) + (a()*diff(b(),y,1)) *** partial(x) Here the F-elements are polynomials, where the unknowns are of the type constantoperator denoting functions from F : hconstant-operatori → hidi ( ) We do not admit division of such constant operators since we cannot exclude that such a constant operator denotes 0. 641 The operator notation on the one hand emphasizes the fact that the denoted elements are functions. On the other hand it distinguishes a() from the variable a of a rational function, which specifically denotes the corresponding projection. Consider e.g. input: (x+y)***partial(y)***(x-y)***partial(x); 2 2 (x - y ) *** partial(x) *** partial(y) + ( - x - y) *** partial(x) Here we use as F-elements specific elements from F = Q(x, y). diff In our example with constant operators, the transformation into normal form introduces a formal derivative operation diff(·,·,·), which cannot be evaluated. Notice that we do not use the Reduce operator df(·,·,·) here, which for technical reasons cannot smoothly handle our constant operators. In our second example with rational functions as F-elements, derivative occurring with commutation can be computed such that diff does not occur in the output. 16.39.3 Shapes of F-elements Besides the generic computations with constant operators, we provide a mechanism to globally fix a certain shape for F-elements and to expand constant operators according to that shape. 
lpdoset We give an example for a shape that fixes all constant operators to denote generic bivariate affine linear functions: input: d := (a()+b())***partial(x1)***partial(x2)**2; 2 d := (a() + b()) *** partial(x1) *** partial(x2) input: lpdoset {!#10*x1+!#01*x2+!#00,x1,x2}; {-1} input: d; 2 642 CHAPTER 16. USER CONTRIBUTED PACKAGES (a00 + a01*x2 + a10*x1 + b00 + b01*x2 + b10*x1) *** partial(x1) *** partial(x2) Notice that the placeholder # must be escaped with !, which is a general convention for Rlisp/Reduce. Notice that lpdoset returns the old shape and that {-1} denotes the default state that there is no shape selected. lpdoweyl The command lpdoweyl {n,x1,x2,...} creates a shape for generic polynomials of total degree n in variables x1, x2, . . . . input: lpdoweyl(2,x1,x2); 2 2 {#_00_ + #_01_*x2 + #_02_*x2 + #_10_*x1 + #_11_*x1*x2 + #_20_*x1 ,x1,x2} input: lpdoset ws; {#10*x1 + #01*x2 + #00,x1,x2} input: d; 2 2 (a_00_ + a_01_*x2 + a_02_*x2 + a_10_*x1 + a_11_*x1*x2 + a_20_*x1 + b_00_ 2 2 + b_01_*x2 + b_02_*x2 + b_10_*x1 + b_11_*x1*x2 + b_20_*x1 ) *** partial(x1) 2 partial(x2) *** 16.39.4 Commands General lpdoord The order of an lpdo: input: lpdoord((a()+b())***partial(x1)***partial(x2)**2+3***partial(x1)); 3 lpdoptl Returns the list of derivations (partials) occurring in its argument LPDO d. input: lpdoptl(a()***partial(x1)***partial(x2)+partial(x4)+diff(a(),x3,1)); {partial(x1),partial(x2),partial(x4)} 643 That is the smallest set {. . . , ∂xi , . . . } such that d is defined in F [. . . , ∂xi , . . . ]. Notice that formal derivatives are not derivations in that sense. 
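The commutator quoted in the introduction, ∂xi f = f ∂xi + ∂xi(f), is just the product rule read as an operator identity. It can be checked mechanically in the univariate case with polynomials represented as coefficient lists; all names below are ours, not part of the LPDO package:

```python
def p_add(a, b):
    """Add two polynomials given as coefficient lists (low degree first)."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def p_mul(a, b):
    """Multiply two coefficient-list polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def p_diff(a):
    """Formal derivative of a coefficient-list polynomial."""
    return [i * x for i, x in enumerate(a)][1:] or [0]

def lhs(f, g):
    """(partial *** f) applied to g, i.e. d/dx (f * g)."""
    return p_diff(p_mul(f, g))

def rhs(f, g):
    """(f *** partial + diff(f)) applied to g, i.e. f*g' + f'*g."""
    return p_add(p_mul(f, p_diff(g)), p_mul(p_diff(f), g))
```

For every f and g the two sides agree, e.g. with f = x^2 and g = 1 + x^3 both give 2x + 5x^4.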
lpdogp Given a starting symbol a, a list of variables l, and a degree n, lpdogp(a,l,n) generates a generic (commutative) polynomial of degree n in variables l with coefficients generated from the starting symbol a: input: lpdogp(a,{x1,x2},2); 2 a_00_ + a_01_*x2 + a_02_*x2 2 + a_10_*x1 + a_11_*x1*x2 + a_20_*x1 lpdogdp Given a starting symbol a, a list of variables l, and a degree n, lpdogp(a,l,n) generates a generic differential polynomial of degree n in variables l with coefficients generated from the starting symbol a: input: lpdogdp(a,{x1,x2},2); 2 2 a_20_ *** partial(x1) + a_02_ *** partial(x2) + a_11_ *** partial(x1) *** partial(x2) + a_10_ *** partial(x1) + a_01_ *** partial(x2) + a_00_ Symbols lpdosym The symbol of an lpdo. That is the differential monomial of highest order with the partials replaced by corresponding commutative variables: input: lpdosym((a()+b())***partial(x1)***partial(x2)**2+3***partial(x1)); 2 y_x1_*y_x2_ *(a() + b()) More generally, one can use a second optional arguments to specify a the order of a different differential monomial to form the symbol of: input: lpdosym((a()+b())***partial(x1)***partial(x2)**2+3***partial(x1),1); 3*y_x1_ Finally, a third optional argument can be used to specify an alternative starting symbol for the commutative variable, which is y by default. Altogether, the optional arguments default like lpdosym(·)=lpdosym(·,lpdoord(·),y). 644 CHAPTER 16. USER CONTRIBUTED PACKAGES lpdosym2dp This converts a symbol obtained via lpdosym back into an LPDO resulting in the corresponding differential monomial of the original LPDO. input: d := a()***partial(x1)***partial(x2)+partial(x3)$ input: s := lpdosym d; s := a()*y_x1_*y_x2_ input: lpdosym2dp s; a() *** partial(x1) *** partial(x2) In analogy to lpdosym there is an optional argument for specifying an alternative starting symbol for the commutative variable, which is y by default. 
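The symbol map used by lpdosym keeps only the coefficients of total order m and replaces each ∂^j by the commutative monomial y^j. With an LPDO stored as a dictionary from multi-indices to (here numeric) coefficients, the map is a few lines; this representation is ours and purely illustrative:

```python
def order(D):
    """Order of an LPDO given as {multi-index: coefficient}."""
    return max(sum(j) for j in D)

def symbol(D, m=None):
    """Sym_m(D): the commutative polynomial built from the order-m terms,
    returned as {multi-index of y-exponents: coefficient}.
    Defaults to the full symbol Sym(D), i.e. m = order(D)."""
    if m is None:
        m = order(D)
    return {j: c for j, c in D.items() if sum(j) == m}
```

Taking D to stand for c*partial(x1)***partial(x2)**2 + 3***partial(x1) with a numeric coefficient c = 5, the full symbol keeps only the order-3 term, and symbol(D, 1) keeps the order-1 term, mirroring the two lpdosym examples above.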
lpdos Given LPDOs p, q and m ∈ N, the function lpdos(p,q,m) computes the commutative polynomial

S_m = Σ_{|j|=m} ( Σ_i p_i ∂_i(q_j) + p_0 q_j ) y^j .

{{epsilon - 1 >= 0,{partial(x1),partial(x2)}},
 {epsilon - 1 >= 0,{partial(x2),partial(x1)}}}

If the result is the empty list, then this guarantees that there is no approximate factorization possible. In our example we happen to obtain two possible factorizations. Note, however, that the result in general does not provide a complete list of factorizations with a left factor of order 1, but only at least one such sample factorization. Furthermore, the procedure might fail due to polynomial degrees exceeding certain bounds for the extended quantifier elimination by virtual substitution used internally. If this happens, the corresponding Ai will contain existential quantifiers ex, and Li will be meaningless. (It would be better to return failed in this case ...)

The first of the two subresults above has the semantics that ∂x1 ∂x2 is an approximate factorization of f2 for all ε ≥ 1. Formally, ||f2 − ∂x1 ∂x2 || ≤ ε for all ε ≥ 1, which is equivalent to ||f2 − ∂x1 ∂x2 || ≤ 1. That is, 1 is an upper bound for the approximation error over R2. There are two possible choices for the seminorm || · ||:

1. ...
2. ... explain switch lpdocoeffnorm ...

Besides 1. the LPDO d, lpdofactorizex accepts several optional arguments:

2. A Boolean combination ψ of equations, negated equations, and (possibly strict) ordering constraints. This ψ describes a (semialgebraic) region over which to factorize approximately. The default is true, specifying the entire Rn. It is possible to choose ψ parametrically. Then the parameters will in general occur in the conditions Ai in the result.

3., 4. An LPDO of order 1, which serves as a template for the left (linear) factor, and an LPDO of order ord(d) − 1, which serves as a template for the right factor. See the documentation of lpdofactorize for defaults and details.

5.
A bound ε for describing the desired precision for approximate factorization. The default is the symbol epsilon, i.e., a symbolic choice such that the optimal choice (with respect to parameters in ψ) is obtained during factorization. It is possible to fix ε ∈ Q. This does, however, not considerably simplify the factorization process in most cases.

input: f3 := partial(x1) *** partial(x2) + x1$
input: psi1 := 0<=x1<=1 and 0<=x2<=1$
input: lpdofactorizex(f3,psi1,a()***partial(x1),b()***partial(x2));
{{epsilon - 1 >= 0,{partial(x1),partial(x2)}}}

lpdofacx This is a low-level entry point to the factorization lpdofactorizex. It is analogous to lpdofac for lpdofactorize; see the documentation there for details.

lpdohrect

lpdohcirc

16.40 MODSR: Modular solve and roots

This package supports solve (M_SOLVE) and roots (M_ROOTS) operators for modular polynomials and modular polynomial systems. The moduli need not be primes. M_SOLVE requires a modulus to be set. M_ROOTS takes the modulus as a second argument. For example:

on modular; setmod 8;
m_solve(2x=4);           -> {{X=2},{X=6}}
m_solve({x^2-y^3=3});    -> {{X=0,Y=5}, {X=2,Y=1}, {X=4,Y=5}, {X=6,Y=1}}
m_solve({x=2,x^2-y^3=3}); -> {{X=2,Y=1}}
off modular;
m_roots(x^2-1,8);        -> {1,3,5,7}
m_roots(x^3-x,7);        -> {0,1,6}

The operator legendre_symbol(a,p) denotes the Legendre symbol (a/p) ≡ a^((p-1)/2) (mod p), which by its very definition can only have one of the values {−1, 0, 1}.

There is no further documentation for this package.

Author: Herbert Melenk.

16.41 MRVLIMIT: A new exp-log limits package

Author: Neil Langmead

This package was written when the author was a placement student at ZIB Berlin.

16.41.1 The Exp-Log Limits package

This package arises from the PhD thesis of Dominik Gruntz, of the ETH Zürich. He developed a new algorithm to compute limits of "exp-log" functions.
Many of the examples he gave could not be computed by the present limits package in REDUCE, the simplest being the following, whose limit is obviously 0:

load limits;
limit(x^7/e^x,x,infinity);
limit(x^7/e^x,x,infinity)

This particular problem arises because L'Hopital's rule for the computation of indeterminate forms (such as 0/0 or ∞/∞) can only be applied in a CAS a finite number of times, and in REDUCE this number is 3. Applied 7 times to the above problem, the rule would have yielded the correct answer 0. The new algorithm solves this particular problem, and enables the computation of many more limits in REDUCE. We first define the domain in which we work, and then give a statement of the main algorithm that is used in this package.

Definition: Let R[x] be the ring of polynomials in x with real coefficients, and let f be an element of this ring. The field which is obtained from R[x] by closing it under the operations f → exp(f) and f → log |f| is called the L-field (or logarithmico-exponential field, or field of exp-log functions for short). Hardy proved that every L-function is ultimately continuous, of constant sign, monotonic, and tends to ±∞ or to a finite real constant as x → +∞. Here are some examples of exp-log functions which the package is able to deal with:

f(x) = e^x * log(log(x))
f(x) = log(log(x + e^(-x)))
f(x) = (e^(x^2) + log(log(x)))/log(x)^log(x)
f(x) = e^(x*log(x))

16.41.2 The Algorithm

A complete statement of the algorithm now follows: Let f be an exp-log function in x, whose limit we wish to compute as x → x0. The main steps of the algorithm are as follows:

• Determine the set Ω of the most rapidly varying subexpressions of f(x). Limits may have to be computed recursively at this stage.

• Choose an expression ω such that ω > 0, lim_{x→∞} ω = 0, and ω is in the same comparability class as any element of Ω.
• Rewrite the other expressions in Ω as A(x)ω^c, where A(x) only contains subexpressions in lower comparability classes than Ω.

• Let f(ω) be the function obtained from f(x) by replacing all elements of Ω by their representation in terms of ω. Consider all expressions independent of ω as constants and compute the leading term of the power series of f(ω) around ω = 0+.

• If the leading exponent e0 > 0, then the limit is 0, and we stop. If the leading exponent e0 < 0, then the limit is ±∞; the sign is determined by the sign of the leading coefficient c0. If the leading exponent e0 = 0, then the limit is the limit of the leading coefficient c0. If c0 ∉ C, where C = Const(L) is the set of exp-log constants, we apply the same algorithm recursively to c0.

The algorithm to compute the most rapidly varying subset (the mrv set) of a function f is given below:

procedure mrv(f)
  if not depend(f,x)  → return({})
  else if f = x       → return({x})
  else if f = g*h     → return(max(mrv(g),mrv(h)))
  else if f = g + h   → return(max(mrv(g),mrv(h)))
  else if f = g^c and c ∈ C → return(mrv(g))
  else if f = log(g)  → return(mrv(g))
  else if f = e^g     → if lim_{x→∞} g = ±∞ → return(max({e^g}, mrv(g)))
                        else → return(mrv(g))
end

The function max() computes the maximum of the two sets of expressions: it compares elements of its argument sets and returns the set which is in the higher comparability class, or the union of both if they have the same order of variation. For further details, proofs and explanations of the algorithm, please consult [Grn96].
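The behaviour that the algorithm establishes symbolically can at least be spot-checked numerically with nothing but the standard library. This is a crude check, not part of the package, and the sampling points are chosen arbitrarily:

```python
import math

def f(x):
    # x^7 / e^x: the simplest example the old limits package left unevaluated
    return x**7 * math.exp(-x)

# Gruntz's algorithm yields 0 for x -> infinity; numerically the function
# has already collapsed to essentially zero at moderate arguments
samples = [f(50.0), f(100.0), f(200.0)]

def b(x):
    # e^x*(e^(1/x - e^(-x)) - e^(1/x)), a classic Gruntz example whose limit is -1
    return math.exp(x) * (math.exp(1/x - math.exp(-x)) - math.exp(1/x))

# heavy floating-point cancellation limits the attainable accuracy here,
# but the sampled value sits close to -1
val = b(30.0)
```

Note how the second function is numerically treacherous: the two exponentials agree to about thirteen decimal digits before the difference is amplified by e^x, which is precisely the kind of expression where a symbolic limit algorithm earns its keep.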
For example, we have

mrv(e^x) = {e^x}
mrv(log(log(log(x + x^2 + x^3)))) = {x}
mrv(x) = {x}
mrv(e^x + e^(-x) + x^2 + x*log(x)) = {e^x, e^(-x)}
mrv(e^(e^(-x))) = {e^(-x)}

Mrv_limit Examples

Consider the following in REDUCE:

mrv_limit(e^x,x,infinity);
infinity

mrv_limit(1/log(x),x,infinity);
0

b := e^x*(e^(1/x-e^-x)-e^(1/x));
b := e^(1/x + x)*(e^(-e^(-x)) - 1)
mrv_limit(b,x,infinity);
-1

ex := (- log(x)*(log(log(x)) - log(log(log(x)) + log(x))))/log(log(log(log(x))) + log(x));
off mcd;
mrv_limit(ex,x,infinity);
1

(log(x+e^-x)+log(1/x))/(log(x)*e^x);
e^(-x)*(log(x^(-1)) + log(e^(-x) + x))/log(x)
mrv_limit(ws,x,infinity);
0

mrv_limit((log(x)*e^-x)/e^(log(x)+e^(x^2)),x,infinity);
0

16.41.3 The tracing facility

The package provides a means of tracing the mrv_limit function at its main steps, and is intended to help the user if he encounters problems. Messages are displayed informing the user which Taylor expansion is being computed, all recursive calls are listed, and the value returned by the mrv function is given. This information is displayed when the switch tracelimit is on. This is off by default, but can be switched on with the command

on tracelimit;

For a more complete examination of the workings of the algorithm, the user could also try the command tr mrv_limit; This is not recommended, as the amount of information returned is often huge and difficult to wade through. Here is a simple example in REDUCE:

Loading image file: /silo/cons/reduce35/Alpha/binary/redu37a.img
REDUCE Development Version, 4-Nov-96 ...
1: load mrvlimit;
2: on tracelimit;
3: mrv_limit(e^x,x,infinity);

mrv_f is {x}
After move_up, f is e^x
performing taylor on: ww^(-1)
series expansion is ww^(-1)
series is ww^(-1)
exponent list is {expt,-1}
leading exponent e0 is {expt,-1}
mrv_f is {e^x}
h is x
mrv_f is {x}
After move_up, f is e^x
performing taylor on: ww^(-1)
series expansion is ww^(-1)
series is ww^(-1)
exponent list is {expt,-1}
leading exponent e0 is {expt,-1}
small has been changed to e^(-x)
After substitution to ww, f is ww^(-1)
performing taylor on: ww^(-1)
series expansion is ww^(-1)
series is ww^(-1)
exponent list is {expt,-1}
leading exponent e0 is {expt,-1}

infinity

Note that, due to the recursiveness of the functions mrv and mrv_limit, many calls to each function are made, and information is given on all calls when the tracelimit switch is on.

Bibliography

[Grn96] Gruntz, Dominik, On Computing Limits in a Symbolic Manipulation System, PhD Thesis, ETH Zürich.

[Red36] Hearn, Anthony C. and Fitch, John F., REDUCE User's Manual 3.6, RAND Corporation, 1995.

16.42 NCPOLY: Non–commutative polynomial ideals

This package allows the user to set up automatically a consistent environment for computing in an algebra where the non–commutativity is defined by Lie-bracket commutators. The package uses the REDUCE noncom mechanism for elementary polynomial arithmetic; the commutator rules are automatically computed from the Lie brackets.

Authors: Herbert Melenk and Joachim Apel.

16.42.1 Introduction

REDUCE supports a very general mechanism for computing with objects under a non–commutative multiplication, where commutator relations must be introduced explicitly by rule sets when needed. The package NCPOLY allows you to set up automatically a consistent environment for computing in an algebra where the non–commutativity is defined by Lie-bracket commutators.
The package uses the REDUCE noncom mechanism for elementary polynomial arithmetic; the commutator rules are automatically computed from the Lie brackets. You can perform polynomial arithmetic directly, including division and factorization. Additionally NCPOLY supports computations in a one-sided ideal (left or right), especially one-sided Gröbner bases and polynomial reduction.

16.42.2 Setup, Cleanup

Before the computations can start, the environment for a non–commutative computation must be defined by a call to nc_setup:

nc_setup(<vars>[,<comms>][,<dir>]);

where <vars> is a list of variables; these must include the non–commutative quantities. <comms> is a list of equations

<u>*<v> - <v>*<u> = <rh>

where <u> and <v> are members of <vars>, and <rh> is a polynomial. <dir> is either left or right, selecting a left or a right one-sided ideal. The initial direction is left.

nc_setup generates from <comms> the necessary rules to support an algebra where all monomials are ordered corresponding to the given variable sequence. All pairs of variables which are not explicitly covered in the commutator set are considered as commutative and the corresponding rules are also activated. The second parameter in nc_setup may be omitted if the operator is called for the second time, e.g. with a reordered variable sequence. In such a case the last commutator set is used again.

Remarks:

• The variables need not be declared noncom: nc_setup performs all necessary declarations.

• The variables need not be formal operator expressions; nc_setup encapsulates a variable x internally as nc!*(!_x) expressions anyway, where the operator nc!* keeps the noncom property.

• The commands order and korder should be avoided because nc_setup sets these such that the computation results are printed in the correct term order.
Example:

nc_setup({KK,NN,k,n}, {NN*n-n*NN= NN, KK*k-k*KK= KK});

NN*n;        ->  NN*n
n*NN;        ->  NN*n - NN

nc_setup({k,n,KK,NN});

NN*n - NN;   ->  n*NN

Here KK, NN, k, n are non–commutative variables where the commutators are described as [NN, n] = NN, [KK, k] = KK.

The current term order must be compatible with the commutators: the product <u>*<v> must precede all terms on the right-hand side <rh> under the current term order. Consequently

• the maximal degree of <u> or <v> in <rh> is 1,

• in a total degree ordering the total degree of <rh> may not be higher than 1,

• in an elimination degree order (e.g. lex) all variables in <rh> must be below the minimum of <u> and <v>,

• if <rh> does not contain any variables, or contains at most <u> or <v>, any term order can be selected.

If you want to use the non–commutative variables or results from non–commutative computations later in commutative operations, it might be necessary to switch off the non–commutative evaluation mode, because not all operators in REDUCE are prepared for that environment. In such a case use the command

nc_cleanup;

without parameters. It removes all internal rules and definitions which nc_setup had introduced. To reactivate the non–commutative environment, call nc_setup again.

16.42.3 Left and right ideals

A (polynomial) left ideal L is defined by the axioms

u ∈ L, v ∈ L =⇒ u + v ∈ L
u ∈ L =⇒ k*u ∈ L for an arbitrary polynomial k

where "*" is the non–commutative multiplication. Correspondingly, a right ideal R is defined by

u ∈ R, v ∈ R =⇒ u + v ∈ R
u ∈ R =⇒ u*k ∈ R for an arbitrary polynomial k

16.42.4 Gröbner bases

When a non–commutative environment has been set up by nc_setup, a basis for a left or right polynomial ideal can be transformed into a Gröbner basis by the operator nc_groebner:

nc_groebner(<plist>);

Note that the variable set and variable sequence must be defined before in the nc_setup call.
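The rewriting that nc_setup installs can be imitated in a few lines. The following Python sketch (all names hypothetical, unrelated to REDUCE internals) normal-orders words of generators using the relation u*v = v*u + [u,v], exactly as in the NN, n example above:

```python
def normal_order(poly, order, bracket):
    """Normal-order a noncommutative polynomial.

    poly:    dict mapping words (tuples of generator names) to coefficients
    order:   canonical generator sequence, e.g. ['KK', 'NN', 'k', 'n']
    bracket: dict mapping a pair (u, v) that is out of canonical order to the
             commutator u*v - v*u as a polynomial dict
    """
    pos = {g: i for i, g in enumerate(order)}
    result = {}
    work = list(poly.items())
    while work:
        word, coeff = work.pop()
        for i in range(len(word) - 1):
            u, v = word[i], word[i + 1]
            if pos[u] > pos[v]:          # out of order: u*v = v*u + [u,v]
                work.append((word[:i] + (v, u) + word[i+2:], coeff))
                for w2, c2 in bracket.get((u, v), {}).items():
                    work.append((word[:i] + w2 + word[i+2:], coeff * c2))
                break
        else:                             # word already in canonical order
            result[word] = result.get(word, 0) + coeff
    return {w: c for w, c in result.items() if c}

# [NN, n] = NN with canonical order NN before n, so n*NN - NN*n = -NN;
# normal-ordering the word n*NN then gives NN*n - NN, as in the session above
rules = {('n', 'NN'): {('NN',): -1}}
canon = normal_order({('n', 'NN'): 1}, ['KK', 'NN', 'k', 'n'], rules)
```

Each swap strictly reduces the number of out-of-order pairs and each commutator term is a shorter word, so the loop terminates; this mirrors why nc_setup requires the term order to be compatible with the commutators.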
The term order for the Gröbner calculation can be set by using the torder declaration. The internal steps of the Gröbner calculation can be watched by setting the switches trgroeb (= list all internal basis polynomials) or trgroebs (= list additionally the S-polynomials; the command lisp(!*trgroebfull:=t); causes additionally all elementary polynomial operations to be printed). For details about torder, trgroeb and trgroebs see the REDUCE GROEBNER manual.

2: nc_setup({k,n,NN,KK},{NN*n-n*NN=NN,KK*k-k*KK=KK},left);
3: p1 := (n-k+1)*NN - (n+1);
p1 := - k*nn + n*nn - n + nn - 1
4: p2 := (k+1)*KK -(n-k);
p2 := k*kk + k - n + kk
5: nc_groebner ({p1,p2});
{k*nn - n*nn + n - nn + 1,
 k*kk + k - n + kk,
 n*nn*kk - n*kk - n + nn*kk - kk - 1}

Important: Do not use the operators of the GROEBNER package directly, as they would not consider the non–commutative multiplication.

16.42.5 Left or right polynomial division

The operator nc_divide computes the one-sided quotient and remainder of two polynomials:

nc_divide(<p1>,<p2>);

The result is a list with quotient and remainder. The division is performed as a pseudo-division, multiplying <p1> by coefficients if necessary. The result {<q>,<r>} is defined by the relation

<c>*<p1> = <q>*<p2> + <r> for direction left, and
<c>*<p1> = <p2>*<q> + <r> for direction right,

where <c> is an expression that does not contain any of the ideal variables, and the leading term of <r> is lower than the leading term of <p2> according to the actual term order.

16.42.6 Left or right polynomial reduction

For the computation of the one-sided remainder of a polynomial modulo a given set of other polynomials, the operator nc_preduce may be used:

nc_preduce(<polynomial>,<plist>);

The result of the reduction is unique (canonical) if and only if <plist> is a one-sided Gröbner basis.
Then the computation is at the same time an ideal membership test: if the result is zero, the polynomial is a member of the ideal, otherwise not.

16.42.7 Factorization

Technique

Polynomials in a non–commutative ring cannot be factored using the ordinary factorize command of REDUCE. Instead one of the operators of this section must be used:

nc_factorize(<polynomial>);

The result is a list of factors of <polynomial>. A list with the input expression is returned if it is irreducible.

As non–commutative factorization is not unique, there is an additional operator which computes all possible factorizations:

nc_factorize_all(<polynomial>);

The result is a list of factor decompositions of <polynomial>. If there are no factors at all, the result list has only one member, which is a list containing the input polynomial.

Control of the factorization

In contrast to factoring in commutative polynomial rings, non–commutative factorization is rather time consuming. Therefore two additional operators allow you to reduce the amount of computing time when you look only for isolated factors in a special context, e.g. factors with a limited degree or factors which contain only explicitly specified variables:

left_factor(<polynomial>[,<vars>[,<deg>]])
right_factor(<polynomial>[,<vars>[,<deg>]])
left_factors(<polynomial>[,<vars>[,<deg>]])
right_factors(<polynomial>[,<vars>[,<deg>]])

where <polynomial> is the form under investigation, <vars> is an optional list of variables which must appear in the factor, and <deg> is an optional integer degree bound for the total degree of the factor, a zero for an unbounded search, or a monomial (product of powers of the variables) where each exponent is an individual degree bound for its base variable; unmentioned variables are allowed in arbitrary degree. The operators *_factor stop when they have found one factor, while the operators *_factors select all one-sided factors within the given range. If there is no factor of the desired type, an empty list is returned by *_factors, while the routines *_factor return the input polynomial.
Time of the factorization

The share variable nc_factor_time sets an upper limit for the time to be spent for a call to the non–commutative factorizer. If the value is a positive integer, a factorization is terminated with an error message as soon as the time limit is reached. The time units are milliseconds.

Usage of SOLVE

The factorizer internally uses solve, which is controlled by the REDUCE switch varopt. This switch (which is on by default) allows the variable sequence to be reordered, which is favourable for the normal system. Setting varopt off should be avoided when using the non–commutative factorizer, unless very small polynomials are used.

16.42.8 Output of expressions

It is often desirable to have the commutative parts (coefficients) in a non–commutative operation condensed by factorization. The operator nc_compact collects the coefficients to the powers of the lowest possible non-commutative variable:

load ncpoly;
nc_setup({n,NN},{NN*n-n*NN=NN})$
p1 := n**4 + n**2*nn + 4*n**2 + 4*n*nn + 4*nn + 4;
p1 := n**4 + n**2*nn + 4*n**2 + 4*n*nn + 4*nn + 4
nc_compact p1;
(n**2 + 2)**2 + (n + 2)**2*nn

16.43 NORMFORM: Computation of matrix normal forms

This package contains routines for computing the following normal forms of matrices:

• smithex_int
• smithex
• frobenius
• ratjordan
• jordansymbolic
• jordan.

Author: Matt Rebbeck.

16.43.1 Introduction

When are two given matrices similar? Similar matrices have the same trace, determinant, characteristic polynomial, and eigenvalues, but the matrices

U = [[0, 1], [0, 0]]   and   V = [[0, 0], [0, 0]]

agree in all four of the above and yet are not similar: otherwise there would exist a nonsingular N ∈ M2 (the set of all 2 × 2 matrices) such that U = N V N^(-1) = N 0 N^(-1) = 0, which is a contradiction since U ≠ 0. Two matrices can look very different but still be similar. One approach to determining whether two given matrices are similar is to compute their normal forms.
If both matrices reduce to the same normal form they must be similar. NORMFORM is a package for computing the following normal forms of matrices: - smithex smithex_int frobenius ratjordan jordansymbolic jordan The package is loaded by load_package normform; 664 CHAPTER 16. USER CONTRIBUTED PACKAGES By default all calculations are carried out in Q (the rational numbers). For smithex, frobenius, ratjordan, jordansymbolic, and jordan, this field can be extended. Details are given in the respective sections. The frobenius, ratjordan, and jordansymbolic normal forms can also be computed in a modular base. Again, details are given in the respective sections. The algorithms for each routine are contained in the source code. NORMFORM has been converted from the normform and Normform packages written by T.M.L. Mulders and A.H.M. Levelt. These have been implemented in Maple [4]. 16.43.2 Smith normal form Function smithex(A, x) computes the Smith normal form S of the matrix A. It returns {S, P, P −1 } where S, P, and P −1 are such that PSP −1 = A. A is a rectangular matrix of univariate polynomials in x. x is the variable name. Field extensions Calculations are performed in Q. To extend this field the ARNUM package can be used. For details see subsection 16.43.8. Synopsis: • The Smith normal form S of an n by m matrix A with univariate polynomial entries in x over a field F is computed. That is, the polynomials are then regarded as elements of the Euclidean domain F(x). • The Smith normal form is a diagonal matrix S where: – – – – rank(A) = number of nonzero rows (columns) of S. S(i, i) is a monic polynomial for 0 < i ≤ rank(A). S(i, i) divides S(i + 1, i + 1) for 0 < i < rank(A). S(i, i) is the greatest common divisor of all i by i minors of A. Hence, if we have the case that n = m, as well as rank(A) = n, then n Y i=1 S(i, i) = det(A) . lcoeff(det(A), x) • The Smith normal form is obtained by doing elementary row and column operations. 
This includes interchanging rows (columns), multiplying through a row (column) by −1, and adding integral multiples of one row (column) to another. 665 • Although the rank and determinant can be easily obtained from S, this is not an efficient method for computing these quantities except that this may yield a partial factorization of det(A) without doing any explicit factorizations. Example: load_package normform;   x x+1 A= 0 3 ∗ x2       1 0 1 0 x x+1 smithex(A, x) = , , 0 x3 3 ∗ x2 1 −3 −3 16.43.3 smithex_int Function Given an n by m rectangular matrix A that contains only integer entries, smithex_int(A) computes the Smith normal form S of A. It returns {S, P, P −1 } where S, P, and P −1 are such that PSP −1 = A. Synopsis • The Smith normal form S of an n by m matrix A with integer entries is computed. • The Smith normal form is a diagonal matrix S where: – – – – rank(A) = number of nonzero rows (columns) of S. sign(S(i, i)) = 1 for 0 < i ≤ rank(A). S(i, i) divides S(i + 1, i + 1) for 0 < i < rank(A). S(i, i) is the greatest common divisor of all i by i minors of A. Hence, if we have the case that n = m, as well as rank(A) = n, then |det(A)| = n Y S(i, i). i=1 • The Smith normal form is obtained by doing elementary row and column operations. This includes interchanging rows (columns), multiplying through a row (column) by −1, and adding integral multiples of one row (column) to another. Example 666 CHAPTER 16. USER CONTRIBUTED PACKAGES load_package normform;   9 −36 30 A = −36 192 −180 30 −180 180       −17 −5 −4 1 −24 30   3 0 0 19 15  , −1 25 −30 smithex_int(A) = 0 12 0  ,  64   0 0 60 −50 −15 −12 0 −1 1 16.43.4 frobenius Function frobenius(A) computes the Frobenius normal form F of the matrix A. It returns {F, P, P −1 } where F, P, and P −1 are such that PF P −1 = A. A is a square matrix. Field extensions Calculations are performed in Q. To extend this field the ARNUM package can be used. 
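The integer Smith form computed by smithex_int can be cross-checked with a short sketch of the same elementary row and column operations. The helper smith_diagonal below is hypothetical and not part of REDUCE; it returns only the diagonal of S, not the transformation matrices:

```python
def smith_diagonal(M):
    """Diagonal of the Smith normal form of an integer matrix,
    via elementary (unimodular) row and column operations."""
    A = [list(row) for row in M]
    n, m = len(A), len(A[0])
    diag = []
    for t in range(min(n, m)):
        while True:
            entries = [(i, j) for i in range(t, n) for j in range(t, m) if A[i][j]]
            if not entries:
                return diag + [0] * (min(n, m) - t)   # rank-deficient: rest is zero
            i, j = min(entries, key=lambda p: abs(A[p[0]][p[1]]))
            A[t], A[i] = A[i], A[t]                   # move pivot to (t, t)
            for row in A:
                row[t], row[j] = row[j], row[t]
            p = A[t][t]
            for i in range(t + 1, n):                 # clear column t
                q = A[i][t] // p
                if q:
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
            for j in range(t + 1, m):                 # clear row t
                q = A[t][j] // p
                if q:
                    for i in range(t, n):
                        A[i][j] -= q * A[i][t]
            if any(A[i][t] for i in range(t + 1, n)) or \
               any(A[t][j] for j in range(t + 1, m)):
                continue                              # leftover remainders: retry, pivot shrank
            bad = next(((i, j) for i in range(t + 1, n)
                        for j in range(t + 1, m) if A[i][j] % p != 0), None)
            if bad is None:
                diag.append(abs(p))
                break
            A[t] = [a + b for a, b in zip(A[t], A[bad[0]])]  # force divisibility, redo
    return diag
```

On the 3-by-3 matrix of the smithex_int example this reproduces the diagonal 3, 12, 60, whose product 2160 equals |det(A)| as the synopsis requires.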
For details see subsection 16.43.8 Modular arithmetic frobenius can be calculated in a modular base. For details see subsection 16.43.9. Synopsis • F has the following structure:  Cp1  Cp2  F =   ..     . Cpk where the C(pi )’s are companion matrices associated with polynomials p1 , p2 , . . . , pk , with the property that pi divides pi+1 for i = 1 . . . k −1. All unmarked entries are zero. • The Frobenius normal form defined in this way is unique (ie: if we require that pi divides pi+1 as above). Example 667 load_package normform; A= frobenius(A) = ( 0 1 16.43.5 −x2 +y 2 +y y −x2 −x+y 2 +y y x∗(x2 −x−y 2 +y) y −2∗x2 +x+2∗y 2 y −x2 +x+y 2 −y y −x2 +x+y 2 −y y ! , 1 0 ! −x2 +y 2 +y y −x2 −x+y 2 +y y ! , 1 0 −x2 +y 2 +y x2 +x−y 2 −y −y x2 +x−y 2 −y ratjordan Function ratjordan(A) computes the rational Jordan normal form R of the matrix A. It returns {R, P, P −1 } where R, P, and P −1 are such that PRP −1 = A. A is a square matrix. Field extensions Calculations are performed in Q. To extend this field the ARNUM package can be used. For details see subsection 16.43.8. Modular arithmetic ratjordan can be calculated in a modular base. For details see subsection 16.43.9. Synopsis • R has the following structure:  r11  r12   ..  . R=  r21   r22  The rij ’s have the following shape:  C(p) I  C(p) I   . .. ... rij =    C(p)  ..          .       I  C(p) !) 668 CHAPTER 16. USER CONTRIBUTED PACKAGES where there are eij times C(p) blocks along the diagonal and C(p) is the companion matrix associated with the irreducible polynomial p. All unmarked entries are zero. Example load_package normform;   x+y 5 A= y x2 ratjordan(A) = ( 0 −x3 − x2 ∗ y + 5 ∗ y 1 16.43.6 x2 + x + y ! , 1 x+y 0 y ! , 1 0 −(x+y) y 1 y jordansymbolic Function jordansymbolic(A) computes the Jordan normal form J of the matrix A. It returns {J , L, P, P −1 }, where J , P, and P −1 are such that PJ P −1 = A. 
L = {ll, ξ}, where ξ is a name and ll is a list of irreducible factors of p(ξ). A is a square matrix. Field extensions Calculations are performed in Q. To extend this field the ARNUM package can be used. For details see subsection 16.43.8. Modular arithmetic jordansymbolic can be calculated in a modular base. For details see subsection 16.43.9. Extras If using xr, the X interface for REDUCE, the appearance of the output can be improved by setting the switch looking_good to on. This converts all lambda to ξ and improves the indexing, e.g., lambda12 ⇒ ξ12 . The example below shows the output when this switch is on. Synopsis • A Jordan block k (λ) is a k by form:  λ 1  λ   k (λ) =    k upper triangular matrix of the  1 .. .    ..  .  λ 1 λ !) 669 There are k − 1 terms “+1” in the superdiagonal; the scalar λ appears k times on the main diagonal. All other matrix entries are zero, and 1 (λ) = (λ). • A Jordan matrix J ∈ Mn (the set of all n by n matrices) is a direct sum of jordan blocks   n1 (λ1 )   n2 (λ2 )   J =  , n1 + n2 + · · · + nk = n ..   . nk (λk ) in which the orders ni may not be distinct and the values λi need not be distinct. • Here λ is a zero of the characteristic polynomial p of A. If p does not split completely, symbolic names are chosen for the missing zeroes of p. If, by some means, one knows such missing zeroes, they can be substituted for the symbolic names. For this, jordansymbolic actually returns {J , L, P, P −1 }. J is the Jordan normal form of A (using symbolic names if necessary). L = {ll , ξ}, where ξ is a name and ll is a list of irreducible factors of p(ξ). If symbolic names are used then ξij is a zero of lli . P and P −1 are as above. Example load_package normform; on looking_good;  A= 1 y y2 3  jordansymbolic(A) =    ξ11 0 , −y 3 + ξ 2 − 4 ∗ ξ + 3 , ξ , 0 ξ12 ! !) 
ξ11 −2 ξ11 +y 3 −1 ξ11 − 3 ξ12 − 3 2∗(y 3 −1) 2∗y 2 ∗(y 3 +1) , ξ12 −2 ξ12 +y 3 −1 y2 y2 2∗(y 3 −1) 2∗y 2 ∗(y 3 +1) solve(-yˆ3+xiˆ2-4*xi+3,xi); p p {ξ = y 3 + 1 + 2, ξ = − y 3 + 1 + 2} J = sub({xi(1,1)=sqrt(yˆ3+1)+2, xi(1,2)=-sqrt(yˆ3+1)+2}, first jordansymbolic (A)) ! p y3 + 1 + 2 0 p J = 0 − y3 + 1 + 2 For a similar example ot this in standard REDUCE (ie: not using xr), see the normform.rlg file. 670 CHAPTER 16. USER CONTRIBUTED PACKAGES 16.43.7 jordan Function jordan(A) computes the Jordan normal form J of the matrix A. It returns {J , P, P −1 }, where J , P, and P −1 are such that PJ P −1 = A. A is a square matrix. Field extensions Calculations are performed in Q. To extend this field the ARNUM package can be used. For details see subsection 16.43.8. Note In certain polynomial cases the switch fullroots is turned on to compute the zeroes. This can lead to the calculation taking a long time, as well as the output being very large. In this case a message ***** WARNING: fullroots turned on. May take a while. will be printed. It may be better to kill the calculation and compute jordansymbolic instead. Synopsis • The Jordan normal form J with entries in an algebraic extension of Q is computed. • A Jordan block k (λ) is a k by k upper triangular matrix of the form:   λ 1  λ 1      .. .. k (λ) =   . .    λ 1 λ There are k − 1 terms “+1” in the superdiagonal; the scalar λ appears k times on the main diagonal. All other matrix entries are zero, and 1 (λ) = (λ). • A Jordan matrix J ∈ Mn (the set of all n by n matrices) is a direct sum of jordan blocks.   n1 (λ1 )   n2 (λ2 )   J =  , n1 + n2 + · · · + nk = n ..   . nk (λk ) in which the orders ni may not be distinct and the values λi need not be distinct. • Here λ is a zero of the characteristic polynomial p of A. The zeroes of the characteristic polynomial are computed exactly, if possible. Otherwise they are approximated by floating point numbers. 
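The block structure described above is mechanical enough to build directly. A small Python sketch (the helper names jordan_block and direct_sum are hypothetical, unrelated to REDUCE) constructs a Jordan block and the block-diagonal direct sum:

```python
def jordan_block(lam, k):
    """k-by-k upper triangular Jordan block: lam on the main diagonal,
    k-1 ones on the superdiagonal, zeros elsewhere."""
    return [[lam if i == j else 1 if j == i + 1 else 0 for j in range(k)]
            for i in range(k)]

def direct_sum(*blocks):
    """Direct sum of square blocks: a block-diagonal Jordan matrix."""
    n = sum(len(b) for b in blocks)
    J = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            J[off + i][off:off + len(b)] = row
        off += len(b)
    return J

# orders and eigenvalues need not be distinct, e.g. J_2(3) + J_1(5)
J = direct_sum(jordan_block(3, 2), jordan_block(5, 1))
```

Here J is the 3-by-3 matrix with diagonal 3, 3, 5 and a single 1 in the superdiagonal of the first block, matching the definition of a Jordan matrix given above.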
671 Example load_package normform;   −9 −21 −15 4 2 0 −10 21 −14 4 2 0    −8  16 −11 4 2 0  A=  −6 12 −9 3 3 0    −4 8 −6 0 5 0 −2 4 −3 0 1 3 J = first jordan(A);  3 0  0 J = 0  0 0 16.43.8 0 3 0 0 0 0 0 0 1 0 0 0  0 0 0 0 0 0   1 0 0   1 0 0   0 i+2 0  0 0 −i + 2 Algebraic extensions: Using the ARNUM package The package is loaded by the command load_package arnum;. The algebraic field Q can now √ be extended. For example, defpoly sqrt2**2-2; will extend it to include 2 (defined here by sqrt2). The ARNUM package was written by Eberhard Schrüfer and is described in section 16.3. Example load_package normform; load_package arnum; defpoly sqrt2**2-2; 672 CHAPTER 16. USER CONTRIBUTED PACKAGES √ (sqrt2 now changed to 2 for looks!) √ √  √  4 ∗ √2 − 6 −4 ∗ √2 + 7 −3 ∗ √2 + 6 A = 3 ∗ 2√− 6 −3 ∗ 2√ + 7 −3 ∗ 2√+ 6 3∗ 2 1−3∗ 2 −2 ∗ 2 √  2 √0 0  , ratjordan(A) =  0 2 0 √  0 0 −3 ∗ 2 + 1 √ √   √ −21∗ 2+18 2−49 7 ∗ 2 − 6 2∗ 31 31 √ √   √ 3 ∗ 2 − 6 21∗ 2−18 −21∗ 2+18  ,   31 √31 √ √ 3∗ 2−24 −3∗ 2+24 3∗ 2+1 31 31 √   0 2+1 1√  √ −1 4 ∗ 2 + 9 4 ∗ 2 √  −1 − 16 ∗ 2 + 1 1 16.43.9 Modular arithmetic Calculations can be performed in a modular base by setting the switch modular to on. The base can then be set by setmod p; (p a prime). The normal form will then have entries in Z/pZ. By also switching on balanced_mod the output will be shown using a symmetric modular representation. Information on this modular manipulation can be found in chapter 9. Example load_package normform; on modular; setmod 23;   10 18 A= 17 20 jordansymbolic(A) =       18 0 15 9 1 14 , {{λ + 5, λ + 11} , λ} , , 0 12 22 1 1 15 673 on balanced_mod; jordansymbolic(A) =       −5 0 −8 9 1 −9 , {{λ + 5, λ + 11} , λ} , , 0 −11 −1 1 1 −8 Bibliography [1] T.M.L.Mulders and A.H.M. Levelt: The Maple normform and Normform packages. (1993) [2] T.M.L.Mulders: Algoritmen in De Algebra, A Seminar on Algebraic Algorithms, Nigmegen. (1993) [3] Roger A. Horn and Charles A. 
Johnson: Matrix Analysis. Cambridge University Press (1990)

[4] Bruce W. Char [et al.]: Maple (Computer Program). Springer-Verlag (1991)

[5] Anthony C. Hearn: REDUCE User's Manual 3.6. RAND (1995)

16.44 NUMERIC: Solving numerical problems

This package implements basic algorithms of numerical analysis. These include:

• solution of algebraic equations by Newton's method
  num_solve({sin x=cos y, x + y = 1},{x=1,y=2})

• solution of ordinary differential equations
  num_odesolve(df(y,x)=y,y=1,x=(0 .. 1), iterations=5)

• bounds of a function over an interval
  bounds(sin x+x,x=(1 .. 2));

• minimizing a function (Fletcher-Reeves steepest descent)
  num_min(sin(x)+x/5, x);

• Chebyshev curve fitting
  chebyshev_fit(sin x/x,x=(1 .. 3),5);

• numerical quadrature
  num_int(sin x,x=(0 .. pi));

Author: Herbert Melenk.

The NUMERIC package implements some numerical (approximate) algorithms for REDUCE, based on the REDUCE rounded mode arithmetic. These algorithms are implemented for standard cases. They should not be called for ill-conditioned problems; please use standard mathematical libraries for these instead.

16.44.1 Syntax

Intervals, Starting Points

Intervals are generally coded as a lower bound and an upper bound connected by the operator '..', usually associated with a variable in an equation. E.g.

x = (2.5 .. 3.5)

means that the variable x is taken in the range from 2.5 up to 3.5. Note that the bounds can be algebraic expressions, which, however, must evaluate to numeric results. In cases where an interval is returned as the result, the lower and upper bounds can be extracted by the PART operator as the first and second part respectively. A starting point is specified by an equation with a numeric right-hand side, e.g.
x = 3.0

If for multivariate applications several coordinates must be specified by intervals or as a starting point, these specifications can be collected in one parameter (which is then a list) or they can alternatively be given as separate parameters. The list form is more appropriate when the parameters are built from other REDUCE calculations in an automatic style, while the flat form is more convenient for direct interactive input.

Accuracy Control

The keyword parameters accuracy = a and iterations = i, where a and i must be positive integers, control the iterative algorithms: the iteration is continued until the local error is below 10⁻ᵃ; if that is impossible within i steps, the iteration is terminated with an error message. The values reached so far are then returned as the result.

Tracing

Normally the algorithms produce only a minimum of printed output during their operation. In case of an unsuccessful or unexpectedly long operation a trace of the iteration can be printed by setting

on trnumeric;

16.44.2 Minima

The Fletcher-Reeves version of the steepest descent algorithm is used to find the minimum of a function of one or more variables. The function must have continuous partial derivatives with respect to all variables. The starting point of the search can be specified; if not, random values are taken instead. The steepest descent algorithm in general finds only local minima.

Syntax:

NUM_MIN (exp, var1 [= val1][, var2 [= val2] ...] [, accuracy = a][, iterations = i])

or

NUM_MIN (exp, {var1 [= val1][, var2 [= val2] ...]} [, accuracy = a][, iterations = i])

where exp is a function expression, var1, var2, ... are the variables in exp and val1, val2, ... are the (optional) start values. NUM_MIN tries to find the next local minimum along the descending path starting at the given point.
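The search strategy just described (descend from a start value until a local minimum is reached) can be sketched in Python. This is a plain gradient descent with a backtracking line search, not the package's actual Fletcher-Reeves implementation; the name num_min_sketch and its signature are hypothetical:

```python
import math

def num_min_sketch(f, grad, x0, accuracy=6, iterations=200):
    """Crude one-dimensional steepest-descent sketch: step along
    -grad, halving the step until the function value decreases."""
    x, tol = x0, 10.0 ** (-accuracy)
    for _ in range(iterations):
        g = grad(x)
        step = 1.0
        while f(x - step * g) >= f(x) and step > tol:
            step *= 0.5                     # backtracking line search
        x_new = x - step * g
        if abs(x_new - x) < tol:
            return f(x_new), x_new
        x = x_new
    return f(x), x

# minimum of sin(x) + x/5 near x = 0; compare num_min(sin(x)+x/5, x=0)
fmin, xmin = num_min_sketch(lambda x: math.sin(x) + x / 5,
                            lambda x: math.cos(x) + 0.2, 0.0)
```

Starting from 0, the sketch converges to the same local minimum the manual's example reports (x ≈ −1.772, f ≈ −1.334).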
The result is a list with the minimum function value as first element, followed by a list of equations, where the variables are equated to the coordinates of the result point.

Examples:

num_min(sin(x)+x/5, x);

{4.9489585606,{X=29.643767785}}

num_min(sin(x)+x/5, x=0);

{ - 1.3342267466,{X= - 1.7721582671}}

% Rosenbrock function (well known as hard to minimize).
fktn := 100*(x1**2-x2)**2 + (1-x1)**2;
num_min(fktn, x1=-1.2, x2=1, iterations=200);

{0.00000021870228295,{X1=0.99953284494,X2=0.99906807238}}

16.44.3 Roots of Functions / Solutions of Equations

An adaptively damped Newton iteration is used to find an approximate zero of a function, a function vector or the solution of an equation or an equation system. Equations are internally converted to the difference of lhs and rhs such that the Newton method (= zero detection) can be applied. The expressions must have continuous derivatives for all variables. A starting point for the iteration can be given. If not given, random values are taken instead. If the number of forms is not equal to the number of variables, the Newton method cannot be applied. Then the minimum of the sum of absolute squares is located instead. With ON COMPLEX, solutions with imaginary parts can be found if either the expression(s) or the starting point contain a nonzero imaginary part.

Syntax:

NUM_SOLVE (exp1, var1 [= val1][, accuracy = a][, iterations = i])

or

NUM_SOLVE ({exp1, ..., expn}, var1 [= val1], ..., varn [= valn] [, accuracy = a][, iterations = i])

or

NUM_SOLVE ({exp1, ..., expn}, {var1 [= val1], ..., varn [= valn]} [, accuracy = a][, iterations = i])

where exp1, ..., expn are function expressions, var1, ..., varn are the variables and val1, ..., valn are optional start values. NUM_SOLVE tries to find a zero/solution of the expression(s). The result is a list of equations, where the variables are equated to the coordinates of the result point.
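The adaptively damped Newton iteration described above can be sketched in Python for the one-dimensional case (an illustration only; num_solve_sketch is a hypothetical name, and the test equation cos x = x is not taken from the manual):

```python
import math

def num_solve_sketch(f, fprime, x0, accuracy=6, iterations=50):
    """Damped Newton iteration sketch: halve the Newton step until
    the residual |f(x)| actually decreases."""
    x, tol = x0, 10.0 ** (-accuracy)
    for _ in range(iterations):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dx = -fx / fprime(x)                # full Newton step
        damping = 1.0
        while abs(f(x + damping * dx)) >= abs(fx) and damping > tol:
            damping *= 0.5                  # adaptive damping
        x += damping * dx
    return x

# zero of cos(x) - x, i.e. the solution of cos x = x near x = 1
root = num_solve_sketch(lambda x: math.cos(x) - x,
                        lambda x: -math.sin(x) - 1, 1.0)
```

The damping loop is what keeps the iteration from overshooting when the full Newton step would increase the residual.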
The Jacobian matrix is stored as a side effect in the shared variable JACOBIAN.

Example:

num_solve({sin x=cos y, x + y = 1},{x=1,y=2});

{X= - 1.8561957251,Y=2.856195584}

jacobian;

[COS(X)  SIN(Y)]
[              ]
[  1       1   ]

16.44.4 Integrals

For the numerical evaluation of univariate integrals over a finite interval the following strategy is used:

1. If the function has an antiderivative in closed form which is bounded in the integration interval, this is used.

2. Otherwise a Chebyshev approximation is computed, starting with order 20, if necessary up to order 80. If that is recognized as sufficiently convergent it is used for computing the integral by directly integrating the coefficient sequence.

3. If none of these methods is successful, an adaptive multilevel quadrature algorithm is used.

For multivariate integrals only the adaptive quadrature is used. This algorithm tolerates isolated singularities. The value of iterations here limits the number of local interval intersection levels. Accuracy is a measure for the relative total discretization error (comparison of order 1 and order 2 approximations).

Syntax:

NUM_INT (exp, var1 = (l1..u1)[, var2 = (l2..u2) ...] [, accuracy = a][, iterations = i])

where exp is the function to be integrated, var1, var2, ... are the integration variables, l1, l2, ... are the lower bounds and u1, u2, ... are the upper bounds. The result is the value of the integral.

Example:

num_int(sin x,x=(0 .. pi));

2.0000010334

16.44.5 Ordinary Differential Equations

A Runge-Kutta method of order 3 finds an approximate graph for the solution of a real initial value problem for an ordinary differential equation.
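A fixed-step third-order Runge-Kutta scheme of the kind named here can be sketched in Python. The classical Kutta coefficients are an assumption, since the manual does not list the tableau, and the automatic step refinement that the package performs is omitted:

```python
def num_odesolve_sketch(f, y0, x0, x1, iterations=5):
    """Fixed-step third-order Runge-Kutta (classical Kutta
    coefficients, assumed). Returns a list of [x, y] pairs, one per
    step, like the point list NUM_ODESOLVE returns."""
    h = (x1 - x0) / iterations
    x, y, points = x0, y0, [[x0, y0]]
    for _ in range(iterations):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h, y - h * k1 + 2 * h * k2)
        y += h * (k1 + 4 * k2 + k3) / 6
        x += h
        points.append([x, y])
    return points

# df(y,x) = y, y(0) = 1 on (0 .. 1), as in the manual's example below
pts = num_odesolve_sketch(lambda x, y: y, 1.0, 0.0, 1.0, iterations=5)
```

With only five coarse steps this sketch reaches y(1) ≈ 2.7175 against the exact e ≈ 2.71828; the package gets closer because it refines the step size internally.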
Syntax:

NUM_ODESOLVE (exp, depvar = dv, indepvar = (from .. to) [, accuracy = a][, iterations = i])

where

exp is the differential expression/equation,
depvar is an identifier representing the dependent variable (the function to be found),
indepvar is an identifier representing the independent variable,
exp is an equation (or an expression implicitly set to zero) which contains the first derivative of depvar wrt indepvar,
from is the starting point of integration,
to is the endpoint of integration (allowed to be below from),
dv is the initial value of depvar in the point indepvar = from.

The ODE exp is converted into an explicit form, which then is used for a Runge-Kutta iteration over the given range. The number of steps is controlled by the value of i (default: 20). If the steps are too coarse to reach the desired accuracy in the neighborhood of the starting point, the number is increased automatically.

The result is a list of pairs, each representing a point of the approximate solution of the ODE problem.

Example:

num_odesolve(df(y,x)=y,y=1,x=(0 .. 1), iterations=5);

{{0.0,1.0},{0.2,1.2214},{0.4,1.49181796},{0.6,1.8221064563},
{0.8,2.2255208258},{1.0,2.7182511366}}

Remarks:

– If in exp the differential is not isolated on the left-hand side, please ensure that the dependent variable is explicitly declared using a DEPEND statement, e.g.

depend y,x;

otherwise the formal derivative will be computed to zero by REDUCE.

– The REDUCE package SOLVE is used to convert the form into an explicit ODE. If that process fails or has no unique result, the evaluation is stopped with an error message.

16.44.6 Bounds of a Function

Upper and lower bounds of a real valued function over an interval or a rectangular multivariate domain are computed by the operator BOUNDS.
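The idea behind such bound computations can be sketched with toy interval arithmetic in Python (all helper names here are hypothetical; the package's actual rule set for special functions is considerably sharper than this sketch):

```python
import math

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    # a product bound must consider all four endpoint combinations
    ps = [x * y for x in a for y in b]
    return (min(ps), max(ps))

def interval_sin(a):
    """Toy bound for sin on an interval: exact endpoints when sin is
    monotone on the interval, otherwise crudely widen to [-1, 1]."""
    lo, hi = sorted((math.sin(a[0]), math.sin(a[1])))
    # smallest critical point pi/2 + k*pi that is >= the left endpoint
    k = math.floor((a[0] - math.pi / 2) / math.pi) + 1
    if a[0] <= math.pi / 2 + k * math.pi <= a[1]:
        return (-1.0, 1.0)
    return (lo, hi)

# toy analogue of bounds(sin x + x, x=(1 .. 2))
x = (1.0, 2.0)
b = interval_add(interval_sin(x), x)
```

The result is a valid but wide enclosure (0 .. 3); the refined one-sided widening that the package applies gives tighter bounds, as its example output shows.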
The algorithmic basis is the computation with inequalities: starting from the interval(s) of the variables, the bounds are propagated in the expression using the rules for inequality computation. Some knowledge about the behavior of special functions like ABS, SIN, COS, EXP, LOG, fractional exponentials etc. is integrated and can be evaluated if the operator BOUNDS is called with rounded mode on (otherwise only algebraic evaluation rules are available).

If BOUNDS finds a singularity within an interval, the evaluation is stopped with an error message indicating the problem part of the expression.

Syntax:

BOUNDS (exp, var1 = (l1..u1)[, var2 = (l2..u2) ...])
BOUNDS (exp, {var1 = (l1..u1)[, var2 = (l2..u2) ...]})

where exp is the function to be investigated, var1, var2, ... are the variables of exp, and l1, l2, ... and u1, u2, ... specify the area (intervals). BOUNDS computes upper and lower bounds for the expression in the given area. An interval is returned.

Example:

bounds(sin x,x=(1 .. 2));

{-1,1}

on rounded;
bounds(sin x,x=(1 .. 2));

0.84147098481 .. 1

bounds(x**2+x,x=(-0.5 .. 0.5));

 - 0.25 .. 0.75

16.44.7 Chebyshev Curve Fitting

The operator family Chebyshev_... implements approximation and evaluation of functions by the Chebyshev method. Let T_n^{(a,b)}(x) be the Chebyshev polynomial of order n transformed to the interval (a, b). Then a function f(x) can be approximated in (a, b) by a series

$$f(x) \approx \sum_{i=0}^{N} c_i\, T_i^{(a,b)}(x)$$

The operator Chebyshev_fit computes this approximation and returns a list which has as first element the sum expressed as a polynomial and as second element the sequence of Chebyshev coefficients c_i. Chebyshev_df and Chebyshev_int transform a Chebyshev coefficient list into the coefficients of the corresponding derivative or integral respectively. For evaluating a Chebyshev approximation at a given point in the basic interval the operator Chebyshev_eval can be used.
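A comparable fit can be reproduced in Python with NumPy's Chebyshev class (assuming NumPy is available; NumPy fits by discrete least squares on sample points rather than by the package's construction, so the coefficients differ slightly):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit sin(x)/x on (1, 3) by a degree-5 Chebyshev series, mirroring
# chebyshev_fit(sin x/x, x=(1 .. 3), 5).
xs = np.linspace(1.0, 3.0, 200)
fit = C.Chebyshev.fit(xs, np.sin(xs) / xs, deg=5, domain=[1.0, 3.0])

# evaluate at 2.1, mirroring chebyshev_eval(..., x=2.1)
value = fit(2.1)
```

For a function this smooth, a degree-5 series is accurate to a few units in the fifth decimal, consistent with the coefficient decay shown in the example below.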
Note that Chebyshev_eval is based on a recurrence relation which is in general more stable than a direct evaluation of the complete polynomial.

Syntax:

CHEBYSHEV_FIT (fcn, var = (lo..hi), n)
CHEBYSHEV_EVAL (coeffs, var = (lo..hi), var = pt)
CHEBYSHEV_DF (coeffs, var = (lo..hi))
CHEBYSHEV_INT (coeffs, var = (lo..hi))

where fcn is an algebraic expression (the function to be fitted), var is the variable of fcn, lo and hi are numerical real values which describe an interval (lo < hi), n is the approximation order, an integer > 0, set to 20 if missing, pt is a numerical value in the interval and coeffs is a series of Chebyshev coefficients, computed by one of CHEBYSHEV_COEFF, _DF or _INT.

Example:

on rounded;

w:=chebyshev_fit(sin x/x,x=(1 .. 3),5);

              3           2
w := {0.03824*x  - 0.2398*x  + 0.06514*x + 0.9778,
      {0.8991,-0.4066,-0.005198,0.009464,-0.00009511}}

chebyshev_eval(second w, x=(1 .. 3), x=2.1);

0.4111

16.44.8 General Curve Fitting

The operator NUM_FIT finds, for a set of points, the linear combination of a given set of functions (function basis) which approximates the points best under the objective of the least squares criterion (minimum of the sum of the squares of the deviations). The solution is found as a zero of the gradient vector of the sum of squared errors.

Syntax:

NUM_FIT (vals, basis, var = pts)

where vals is a list of numeric values, var is a variable used for the approximation, pts is a list of coordinate values which correspond to var, and basis is a set of functions varying in var which is used for the approximation.

The result is a list containing as first element the function which approximates the given values, and as second element a list of coefficients which were used to build this function from the basis.
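The least-squares construction described above amounts to solving an overdetermined linear system for the basis coefficients. A sketch in Python with NumPy, reproducing the factorial example below (the column layout of the basis matrix is a choice of this sketch, not something the package prescribes):

```python
import numpy as np

# Fit the factorials 1!, ..., 5! by a quadratic, mirroring
# num_fit(vals, {1, x, x**2}, x=pts).
pts = np.arange(1, 6, dtype=float)
vals = np.array([1.0, 2.0, 6.0, 24.0, 120.0])      # i! for i = 1..5

# one column per basis function: 1, x, x**2
basis = np.vstack([pts**0, pts, pts**2]).T
coeffs, *_ = np.linalg.lstsq(basis, vals, rcond=None)
```

The computed coefficients match the manual's result list {54.6, -61.428571429, 14.571428571} (constant term first).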
Example:

% approximate a set of factorials by a polynomial
pts:=for i:=1 step 1 until 5 collect i$
vals:=for i:=1 step 1 until 5 collect
         for j:=1:i product j$

num_fit(vals,{1,x,x**2},x=pts);

              2
{14.571428571*X  - 61.428571429*X + 54.6,
 {54.6, - 61.428571429,14.571428571}}

num_fit(vals,{1,x,x**2,x**3,x**4},x=pts);

             4               3               2
{2.2083333234*X  - 20.249999879*X  + 67.791666154*X  - 93.749999133*X
 + 44.999999525,
 {44.999999525, - 93.749999133,67.791666154, - 20.249999879,2.2083333234}}

16.44.9 Function Bases

The following procedures compute sets of functions, e.g. to be used for approximation. All procedures have two parameters, the expression to be used as variable (an identifier in most cases) and the order of the desired system. The functions are not scaled to a specific interval, but the variable can be accompanied by a scale factor and/or a translation in order to map the generic interval of orthogonality to another one (e.g. (x − 1/2) * 2pi). The result is a function list in ascending order, such that the first element is the function of order zero and (for the polynomial systems) the function of order n is the (n + 1)-th element.

monomial_base(x,n)        {1,x,...,x**n}
trigonometric_base(x,n)   {1,sin x,cos x,sin(2x),cos(2x),...}
Bernstein_base(x,n)       Bernstein polynomials
Legendre_base(x,n)        Legendre polynomials
Laguerre_base(x,n)        Laguerre polynomials
Hermite_base(x,n)         Hermite polynomials
Chebyshev_base_T(x,n)     Chebyshev polynomials first kind
Chebyshev_base_U(x,n)     Chebyshev polynomials second kind

Example:

Bernstein_base(x,5);

    5      4       3       2
{ - X  + 5*X  - 10*X  + 10*X  - 5*X + 1,

       4      3      2
 5*X*(X  - 4*X  + 6*X  - 4*X + 1),

     2       3      2
 10*X *( - X  + 3*X  - 3*X + 1),

     3   2
 10*X *(X  - 2*X + 1),

    4
 5*X *( - X + 1),

  5
 X }

16.45 ODESOLVE: Ordinary differential equations solver

The ODESOLVE package is a solver for ordinary differential equations. At the present time it has very limited capabilities.
It can handle only a single scalar equation presented as an algebraic expression or equation, and it can solve only first-order equations of simple types, linear equations with constant coefficients and Euler equations. These solvable types are exactly those for which Lie symmetry techniques give no useful information. For example, the evaluation of

depend(y,x);
odesolve(df(y,x)=x**2+e**x,y,x);

yields the result

       X                    3
    3*E  + 3*ARBCONST(1) + X
{Y=--------------------------}
               3

Main Author: Malcolm A.H. MacCallum.
Other contributors: Francis Wright, Alan Barnes.

16.45.1 Introduction

ODESolve 1+ is an experimental project to update and enhance the ordinary differential equation (ODE) solver (odesolve) that has been distributed as a standard component of REDUCE [2, 4, 3] for about 10 years. ODESolve 1+ is intended to provide a strict superset of the facilities provided by odesolve. This document describes a substantial re-implementation of previous versions of ODESolve 1+ that now includes almost none of the original odesolve code. This version is targeted at REDUCE 3.7 or later, and will not run in earlier versions. This project is being conducted partly under the auspices of the European CATHODE project [1]. Various test files, including three versions based on a published review of ODE solvers [7], are included in the ODESolve 1+ distribution. For further background see [10], which describes version 1.03. See also [11].

ODESolve 1+ is intended to implement some solution techniques itself (i.e. most of the simple and well known techniques [12]) and to provide an automatic interface to other more sophisticated solvers, such as PSODE [5, 6, 8] and CRACK [9], to handle cases where simple techniques fail. It is also intended to provide a unified interface to other special solvers, such as Laplace transforms, series solutions and numerical methods, on user request. Although none of these extensions is explicitly implemented yet, a general extension interface is implemented (see §16.45.6).

The main motivation behind ODESolve 1+ is pragmatic. It is intended to meet user expectations, to have an easy user interface that normally does the right thing automatically, and to return solutions in the form that the user wants and expects. Quite a lot of development effort has been directed toward this aim. Hence, ODESolve 1+ solves common text-book special cases in preference to esoteric pathological special cases, and it endeavours to simplify solutions into convenient forms.

16.45.2 Installation

The file inputs the full set of source files that are required to implement ODESolve 1+, assuming that the current directory is the ODESolve 1+ source directory. Hence, ODESolve 1+ can be run without compiling it in any implementation of REDUCE 3.7 by starting REDUCE in the ODESolve 1+ source directory and entering the statement

1: in ""$

However, the recommended procedure is to compile it by starting REDUCE in the ODESolve 1+ source directory and entering the statements

1: faslout odesolve;
2: in ""$
3: faslend;

In CSL-REDUCE, this will work only if you have write access to the REDUCE image file (reduce.img), so you may need to set up a private copy first. In PSL-REDUCE, you may need to move the compiled image file odesolve.b to a directory in your PSL load path, such as the main fasl directory. Please refer to the documentation for your implementation of REDUCE for details.

Once a compiled version of ODESolve 1+ has been correctly installed, it can be loaded by entering the REDUCE statement

1: load_package odesolve;

A string describing the current version of ODESolve 1+ is assigned to the algebraic-mode variable odesolve_version, which can be evaluated to check what version is actually in use.
In versions of REDUCE derived from the development source after 22 September 2000, use of the normal algebraic-mode odesolve operator causes the package to autoload. However, the ODESolve 1+ global switches are not declared, and the symbolic-mode interface provided for backward compatibility with the previous version is not defined, until after the package has loaded. The former is not a huge problem because all ODESolve switches can be accessed as optional arguments, and the backward compatibility interface should probably not be used in new code anyway.

16.45.3 User interface

The principal interface is via the operator odesolve. (It also has a synonym called dsolve to make porting of examples from Maple easier, but it does not accept general Maple syntax!) For purposes of description I will refer to