PEETER JOOT

ELECTROMAGNETIC ENGINEERING WITH GEOMETRIC ALGEBRA

A modern approach to electromagnetism. December 2017 – version v.0
COPYRIGHT

Copyright © 2017 Peeter Joot. All Rights Reserved.

This book may be reproduced and distributed in whole or in part, without fee, subject to the following conditions:

• The copyright notice above and this permission notice must be preserved complete on all complete or partial copies.
• Any translation or derived work must be approved by the author in writing before distribution.
• If you distribute this work in part, instructions for obtaining the complete version of this document must be included, and a means for obtaining a complete version provided.
• Small portions may be reproduced as illustrations for reviews or quotes in other works without this permission notice if proper citation is given.

Exceptions to these rules may be granted for academic purposes: write to the author and ask.

Disclaimer: I confess to violating somebody’s copyright when I copied this copyright statement.
DOCUMENT VERSION
Version 0.280

Sources for this notes compilation can be found in the github repository

[email protected]:peeterjoot/GAelectrodynamics.git

The last commit (Dec/16/2017) associated with this pdf was c260e74ddc4db93e86be28069d2fc6362515bbce.
Dedicated to: Aurora and Lance, my awesome kids, and Sofia, who not only tolerates and encourages my studies, but is also awesome enough to think that math is sexy.
PREFACE
A book on Geometric Algebra applications to electromagnetism. The target audience is undergraduate students with sufficient background in electromagnetism that knowledge of Maxwell’s equations can be presumed. Alternatives to the usual 3D vector or coordinate based derivations of various problems will be presented that highlight some of the ways that Geometric Algebra can be used to tackle electromagnetic problems.

These notes contain:

• An introduction to Geometric Algebra (GA).
• Application of Geometric Algebra to electromagnetism, with a focus on engineering applications.

There are two potential audiences for these notes. The first is the student new to GA, faced with the learning curve of both GA itself and the notational changes needed to apply it to electromagnetism. The second audience is the individual who already has some knowledge of GA, and may want to skim yet another boilerplate “Introduction to Geometric Algebra” to get an idea of the notation in use, and move on to the electromagnetism applications. To serve both potential audiences, much of the substance of the introductory GA material has been deferred to the problems. Students new to GA should attempt all these problems before falling back to just reading the solutions.

It will be assumed that the reader is familiar with rotation matrices, complex numbers, dot and vector products, coordinate representation of vector spaces, linear transformations, and determinants.

Peeter Joot
[email protected]
CONTENTS

Preface

I  GEOMETRIC ALGEBRA

1  Geometric algebra.
   1.1  Prerequisites.
        1.1.1  Vector space.
        1.1.2  Basis, span and dimension.
        1.1.3  Standard basis, length and normality.
   1.2  Definitions.
        1.2.1  Multivector space.
        1.2.2  Nomenclature.
   1.3  Analysis.
        1.3.1  Colinear vectors.
        1.3.2  Normal vectors.
        1.3.3  2D multiplication table.
        1.3.4  Plane rotations.
        1.3.5  Vector product, dot product and wedge product.
        1.3.6  Reverse.
        1.3.7  Complex representations.
        1.3.8  Multivector dot product.
        1.3.9  Permutation within scalar selection.
        1.3.10 Multivector wedge product.
        1.3.11 Duality.
        1.3.12 Projection and rejection.
        1.3.13 Normal factorization of the wedge product.
        1.3.14 The wedge product as an oriented area.
        1.3.15 General rotation.
        1.3.16 Symmetric and antisymmetric vector sums.
        1.3.17 Reflection.
        1.3.18 Linear systems.
   1.4  A summary comparison.
   1.5  Problem solutions.

2  Multivector calculus.
   2.1  Reciprocal frames.
        2.1.1  Problems.
   2.2  Curvilinear coordinates.
        2.2.1  Cylindrical coordinates.
        2.2.2  Spherical coordinates.
        2.2.3  Toroidal coordinates.
        2.2.4  Problems.
   2.3  Green’s theorem.
   2.4  Stokes’ theorem.
        2.4.1  Statement.
        2.4.2  One parameter specialization of Stokes’ theorem.
        2.4.3  Two parameter specialization of Stokes’ theorem.
        2.4.4  Three parameter specialization of Stokes’ theorem.
        2.4.5  Using scalar volume elements.
        2.4.6  Problems.
   2.5  Fundamental theorem of geometric calculus.
        2.5.1  Fundamental Theorem of Geometric Calculus.
        2.5.2  Green’s function for the gradient in Euclidean spaces.
        2.5.3  Helmholtz theorem.
   2.6  Problem solutions.

II ELECTROMAGNETISM

3  Electromagnetism.
   3.1  Maxwell and Lorentz equations.
        3.1.1  Problems.
   3.2  Electrostatics.
        3.2.1  Enclosed charge.
        3.2.2  Electric potential.
        3.2.3  Inverting the gradient equations.
        3.2.4  Poisson equation solution.
        3.2.5  Example: Straight line charge.
        3.2.6  Example: Circular line charge.
        3.2.7  Problems.
   3.3  Magnetostatics.
        3.3.1  Vector potential.
        3.3.2  Enclosed current density.
        3.3.3  Enclosed current.
        3.3.4  Biot-Savart law.
        3.3.5  Example: Ampere’s law for magnetic field between two current sources.
   3.4  Maxwell’s equation (GA).
   3.5  Wave equation.
   3.6  Statics.
        3.6.1  Inverting the Maxwell statics equation.
        3.6.2  Example: Infinite line charge and current.
        3.6.3  Example: Infinite planar charge and current.
        3.6.4  Example: Field of a ring of charge or current density.
   3.7  Poynting vector.
        3.7.1  Field energy and momentum density and the stress energy tensor.
        3.7.2  Poynting’s theorem.
        3.7.3  Example: Energy density and Poynting vectors for static field solutions.
        3.7.4  Complex energy and power.
   3.8  Plane waves.
   3.9  Polarization.
        3.9.1  Plane wave.
        3.9.2  Circular polarization basis.
        3.9.3  Linear polarization.
        3.9.4  Other phase dependence and energy momentum.
        3.9.5  Elliptical parameterization.
        3.9.6  Pseudoscalar imaginary.
        3.9.7  Problems.
   3.10 Transverse fields in a waveguide.
   3.11 Boundary value conditions.
   3.12 Multivector potential.
        3.12.1 General potential representation.
        3.12.2 Electric sources.
        3.12.3 Magnetic sources.
        3.12.4 Far field.
        3.12.5 Gauge transformations.
        3.12.6 Lorenz gauge.
   3.13 Lorentz force.
        3.13.1 GA statement.
        3.13.2 Constant magnetic field.
   3.14 Dielectric and magnetic media.

III BACKMATTER

A  Justifying the contraction axiom.
B  Distribution theorems.
C  GA electrodynamics in the literature.

Index
Bibliography
LIST OF FIGURES

Figure 1.1   Scalar orientation.
Figure 1.2   Vector orientation.
Figure 1.3   Graphical vector addition.
Figure 1.4   Oriented unit areas in the x-y plane.
Figure 1.5   Circular representation of two bivectors.
Figure 1.6   Bivector addition.
Figure 1.7   Oriented volume.
Figure 1.8   e1 + e2.
Figure 1.9   Multiplication by e1 e2.
Figure 1.10  π/2 rotation using pseudoscalar multiplication.
Figure 1.11  Rotation in a plane.
Figure 1.12  Two vectors in a plane.
Figure 1.13  Projection and rejection illustrated.
Figure 1.14  Parallelogram representations of wedge products.
Figure 1.15  Different shape representations of a wedge product.
Figure 1.16  Rotation with respect to the plane of a pseudoscalar.
Figure 1.17  Reflection.
Figure 1.18  Intersection of two lines.
Figure 2.1   Oblique and reciprocal bases.
Figure 2.2   Toroidal parameterization.
Figure 2.3   Infinitesimal loop integral.
Figure 2.4   Sum of infinitesimal loops.
Figure 2.5   One parameter manifold.
Figure 2.6   Two parameter manifold differentials.
Figure 2.7   Contour for two parameter surface boundary.
Figure 2.8   Three parameter volume element.
Figure 3.1   Line charge density.
Figure 3.2   Circular line charge.
Figure 3.3   Magnetic field between two current sources.
Figure 3.4   Field due to a circular distribution.
Figure 3.5   (a) A(z̃, ρ̃). (b) B(z̃, ρ̃).
Figure 3.6   Electric field direction for circular charge density distribution near z = 0.
Figure 3.7   Magnetic field direction for circular current density distribution near z = 0.
Figure 3.8   Magnetic field for larger z.
Figure 3.9   Linear polarization.
Figure 3.10  Electric field with elliptical polarization.
Figure 3.11  Pillbox integration volume.
Figure 3.12  Vertical infinitesimal dipole and selected propagation direction.
Figure A.1   Equivalent vectors in R¹ and on a number line.
Part I GEOMETRIC ALGEBRA.
GEOMETRIC ALGEBRA.
1.1 Prerequisites.
Geometric algebra (GA for short) generalizes and extends vector algebra. The following section contains a lightning review of the concepts of vector space, basis, orthonormality, and metric, all foundational concepts for GA. If you are inclined to skip this, please at least examine the stated dot product definition, since the conventional positive definite property is not assumed.

1.1.1 Vector space.
Vectors have many generalizations in mathematics, where a number of disparate mathematical objects can all be considered vectors. A vector space is an enumeration of the properties and operations that are common to a set of vector-like objects, allowing them to be treated in a unified fashion, regardless of their representation and application. Definition 1.1: Vector space. A (real) vector space is a set V = {x, y, z, · · ·}, the elements of which are called vectors, which has an addition operation designated + and a scalar multiplication operation designated by juxtaposition, where the following axioms are satisfied for all vectors x, y, z ∈ V and scalars a, b ∈ R
Vector space axioms.

  Addition is closed:                               x + y ∈ V
  (Scalar) multiplication is closed:                ax ∈ V
  Addition is associative:                          (x + y) + z = x + (y + z)
  Addition is commutative:                          y + x = x + y
  There exists a zero element 0 ∈ V:                x + 0 = x
  For any x ∈ V there exists a negative
  additive inverse −x ∈ V:                          x + (−x) = 0
  (Scalar) multiplication is distributive:          a(x + y) = ax + ay, (a + b)x = ax + bx
  (Scalar) multiplication is associative:           (ab)x = a(bx)
  There exists a multiplicative identity 1:         1x = x
Despite the generality of this definition, the vector spaces used in GA are fairly restricted. In particular, electrodynamic applications of GA require only two, three or four dimensional real vector spaces. No vector spaces with matrix, polynomial, or complex tuple elements will be required, nor will any infinite dimensional vector spaces. The only unconventional vector space of interest will be a “space-time” vector space containing a time like “direction”, 1-3 spatial directions, and a generalized length operation that can be negative.

Exercise 1.1: R^N

Define R^N as the set of tuples {(x1, x2, · · ·) | xi ∈ R}. Show that R^N is a vector space when the addition operation is defined as x + y ≡ (x1 + y1, x2 + y2, · · ·), and scalar multiplication is defined as ax ≡ (ax1, ax2, · · ·) for any x = (x1, x2, · · ·) ∈ R^N, y = (y1, y2, · · ·) ∈ R^N, and a ∈ R.

Exercise 1.2

1.1.2 Basis, span and dimension.

Definition 1.2: Linear combination

Let S = {x1, x2, · · · , xk} be a subset of a vector space V. A linear combination of vectors in S is any sum a1 x1 + a2 x2 + · · · + ak xk.
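The tuple operations of exercise 1.1 are easy to spot-check numerically. The following sketch (not part of the text; the helper names `add` and `smul` are mine) verifies several of the vector space axioms on random integer tuples in R³, where the arithmetic is exact:

```python
import random

# Tuple vector operations for R^3, per exercise 1.1.
def add(x, y):  return tuple(p + q for p, q in zip(x, y))
def smul(a, x): return tuple(a * p for p in x)

random.seed(0)
for _ in range(100):
    x, y, z = (tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(3))
    a, b = random.randint(-9, 9), random.randint(-9, 9)
    assert add(x, y) == add(y, x)                              # commutativity
    assert add(add(x, y), z) == add(x, add(y, z))              # associativity
    assert smul(a, add(x, y)) == add(smul(a, x), smul(a, y))   # distributivity
    assert smul(a + b, x) == add(smul(a, x), smul(b, x))
    assert smul(a, smul(b, x)) == smul(a * b, x)               # scalar associativity
    assert add(x, smul(-1, x)) == (0, 0, 0)                    # additive inverse
```

Such a check is no substitute for the proof the exercise asks for, but it catches definitional mistakes quickly.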
Definition 1.3: Linear dependence.

Let S = {x1, x2, · · · , xk} be a subset of a vector space V. This set S is linearly dependent if any equation

0 = a1 x1 + a2 x2 + · · · + ak xk,

can be constructed for which not all of the coefficients ai are zero.

Definition 1.4: Linear independence.

Let S = {x1, x2, · · · , xk} be a subset of a vector space V. This set is linearly independent if there are no equations with ai ≠ 0 such that

0 = a1 x1 + a2 x2 + · · · + ak xk.

Definition 1.5: Span.

Let S = {x1, x2, · · · , xk} be a subset of a vector space V. The span of this set is the set of all linear combinations of these vectors, denoted

span(S) = {a1 x1 + a2 x2 + · · · + ak xk}.
Definition 1.6: Subspace. Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. This subset is a subspace if S is a vector space under the multiplication and addition operations of the vector space V.
Definition 1.7: Basis and dimension Let S = {x1 , x2 , · · · , xn } be a linearly independent subset of V. This set is a basis if span(S ) = V. The number of vectors n in this set is called the dimension of the space.
1.1.3 Standard basis, length and normality.
Definition 1.8: Dot product.

Let x, y be vectors from a vector space V. A dot product x · y is a mapping V × V → R with the following properties:

  Symmetric in both arguments:   x · y = y · x
  Bilinear:                      (ax + by) · (a′x′ + b′y′) = aa′(x · x′) + bb′(y · y′) + ab′(x · y′) + ba′(y · x′)
  (Optional) Positive definite:  x · x ≥ 0
Because the dot product is bilinear, it is specified completely by the dot products of a set of basis elements for the space. For example, given a basis {e1, e2, · · · , eN}, and two vectors

x = \sum_{i=1}^N x_i e_i, \qquad y = \sum_{i=1}^N y_i e_i,    (1.1)

the dot product of the two is

x · y = \left( \sum_{i=1}^N x_i e_i \right) · \left( \sum_{j=1}^N y_j e_j \right) = \sum_{i,j=1}^N x_i y_j (e_i · e_j).    (1.2)
Such an expansion in coordinates can be written in matrix form as

x · y = x^T G y,    (1.3)

where G is the symmetric matrix with elements g_ij = e_i · e_j. This matrix G, or its elements g_ij, is also called the metric for the space. In this book the metric is always diagonal, with all diagonal values having an absolute value of one. The positive definite property x^T G x ≥ 0 is not required of the metric or its associated dot product. This omission has specific relevance in electrodynamics, since Maxwell’s equations take their simplest form when expressed in terms of four-vector (relativistic) vector spaces, where some of the metric matrix elements are negative.
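With a diagonal metric the coordinate expansion of eq. (1.3) reduces to a weighted sum of products. A small sketch (the helper name `dot` is mine, not the book’s), contrasting a Euclidean metric with a (+, −, −, −) “spacetime” signature where squared lengths can be negative:

```python
# Coordinate dot product x . y = x^T G y for a diagonal metric G,
# following eq. (1.3); g holds the diagonal entries of G.
def dot(x, y, g):
    return sum(gi * xi * yi for gi, xi, yi in zip(g, x, y))

euclidean = [1, 1, 1]
assert dot([1, 2, 3], [4, 5, 6], euclidean) == 32

# A "spacetime" signature (+,-,-,-): the squared length of a vector
# with a large spatial part comes out negative.
spacetime = [1, -1, -1, -1]
assert dot([1, 2, 0, 0], [1, 2, 0, 0], spacetime) == -3
```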
Definition 1.9: Length

The squared norm of a vector x is defined as

‖x‖² = x · x,

a quantity that need not be positive. The length of a vector x is defined as

‖x‖ = √|x · x|.
Definition 1.10: Unit vector A vector x is called a unit vector if its absolute squared norm is one (|x · x| = 1).
Definition 1.11: Normal

Two vectors x, y from a vector space V are normal, or orthogonal, if their dot product is zero, x · y = 0.
Definition 1.12: Orthonormal Two vectors x, y are orthonormal if they are both unit vectors and normal to each other (x · y = 0, |x · x| = |y · y| = 1). A set of vectors {x, y, · · · , z} is an orthonormal set if all pairs of vectors in that set are orthonormal.
Definition 1.13: Standard basis. A basis {e1 , e2 , · · · , eN } is called a standard basis if that set is orthonormal.
Definition 1.14: Euclidean space.
A vector space with basis {e1, e2, · · · , eN} is called Euclidean if the basis is not only orthonormal, but the dot product is positive definite. That is, e_i · e_j = δ_ij.

1.2 Definitions.
1.2.1 Multivector space. Geometric algebra takes a vector space and adds two additional operations, a vector multiplication operation, and a generalized addition operation that extends vector addition to include addition of scalars and products of vectors. Multiplication of vectors is indicated by juxtaposition, for example, if x, y, e1 , e2 , e3 , · · · are vectors, then some vector products are xy, xyx, xyxy, e1 e2 , e2 e1 , e2 e3 , e3 e2 , e3 e1 , e1 e3 , e1 e2 e3 , e3 e1 e2 , e2 e3 e1 , e3 e2 e1 , e2 e1 e3 , e1 e3 e2 ,
(1.4)
e1 e2 e3 e1 , e1 e2 e3 e1 e3 e2 , · · ·

Vector multiplication is constrained by a rule, the contraction axiom, which specifies that the square of a vector is the squared length of that vector (i.e. a scalar). In a sum of scalars, vectors, and vector products, such as

1 + 2e1 + 3e1 e2 + 4e1 e2 e3 ,
(1.5)
the value 1 is a scalar (or 0-vector), 2e1 is a vector (or 1-vector), 3e1 e2 is called a bivector (or 2-vector), 4e1 e2 e3 is called a trivector (or 3-vector), and the sum itself is called a multivector. Geometric algebra uses vector multiplication to build up a hierarchy of geometrical objects, representing oriented points, lines, planes, volumes and, in higher dimensional spaces, oriented hypervolumes.

Scalar. A scalar (or 0-vector) can be represented graphically as an arrow with a head and a tail pointing into the paper (or chalkboard.) The sign of a scalar, represented by a “head” or “tail”, can be thought of as an orientation, as illustrated in fig. 1.1 with a crossed circle for the tail, and a solid dot as the head. We do not usually attempt to graphically represent a quantity that has the apparent dimensions of a point.

Vector. A vector can be represented graphically as an arrow with a head and a tail, as illustrated in fig. 1.2. Here the orientation has two aspects, one is the direction in space of the vector, and the other is the relative placement of the head vs. tail of the vector, which is toggled by
Figure 1.1: Scalar orientation.
Figure 1.2: Vector orientation.
changing the sign of the vector. Graphically we can add vectors by connecting them head to tail, and connecting the end points, as illustrated in fig. 1.3. Bivector.
Assuming a vector product, a bivector can be defined as
Definition 1.15: Bivector.

A bivector, or 2-vector, is a sum of products of pairs of normal vectors. Given an N dimensional vector space with an orthonormal basis {e1, e2, · · ·}, a general bivector can be expressed as

\sum_{1 \le i, j \le N} B_{ij} e_i e_j,

where B_{ij} is a scalar.

The set {e_i e_j | i ≠ j} could be considered a basis for the space of bivectors, generated from the underlying vector space and the vector product. We will see later that this is a linearly dependent set, and that the bivector space can be defined as span {e_i e_j | i < j}. The bivectors that we will encounter in physics, such as torque and angular momentum, can all be represented as oriented plane segments. Like vectors, this orientation includes a “sidedness” as well as directions in
Figure 1.3: Graphical vector addition.
space. In three dimensions this sidedness can be represented using a vector normal to the plane. In two and N dimensions this orientation can be represented using a cyclic direction on the surface of the plane segment as illustrated in fig. 1.4. Other than having a boundary that defines the total area, a graphical bivector representation as a plane segment need not have any specific geometry. Two bivectors represented graphically as oriented circles in three dimensional space can be found in fig. 1.5. With vectors, addition is performed by connecting vectors head to tail,
Figure 1.4: Oriented unit areas in the x-y plane.
which maintains the orientation. The same can be done with bivectors, where the bivectors are also connected with compatible orientation to construct a sum. This is illustrated graphically in fig. 1.6, where a blue bivector with a right handed orientation is added to a red bivector with right handed orientation, to form a green bivector also with right handed orientation, where all orientations are with respect to the exterior of the bounding surface formed by the three bivectors.
Figure 1.5: Circular representation of two bivectors.
Figure 1.6: Bivector addition.
Trivector.
Again, assuming a vector product
Definition 1.16: Trivector.

A trivector, or 3-vector, is a sum of products of triplets of mutually normal vectors. Given an N dimensional vector space with an orthonormal basis {e1, e2, · · ·}, a general trivector can be expressed as

\sum_{1 \le i, j, k \le N} T_{ijk} e_i e_j e_k,

where T_{ijk} is a scalar.

In three dimensional space, we will see that all trivectors are scalar multiples of e1 e2 e3, and can represent an oriented volume segment such as the differential form in a volume integral. This orientation can be visualized with a normal pointing into or out of the volume, or, like bivectors, with a cyclic direction on the surface of the volume as illustrated with the spherical volume of fig. 1.7.
Figure 1.7: Oriented Volume
K-vector. Definition 1.17: K-vector and grade.
A k-vector is a sum of products of k mutually normal vectors. Given an N dimensional vector space with an orthonormal basis {e1, e2, · · ·}, a general k-vector can be expressed as

\sum_{1 \le i_1, i_2, \cdots, i_k \le N} K_{i_1 i_2 \cdots i_k} e_{i_1} e_{i_2} \cdots e_{i_k},

where K_{i_1 i_2 \cdots i_k} is a scalar. The number k of normal vectors that generate a k-vector is called the grade. A 1-vector is defined as a vector, and a 0-vector is defined as a scalar.

We will see that the highest grade for a k-vector in an N dimensional vector space is N.

Multivector space.

Definition 1.18: Multivector space.

Given an N dimensional (generating) vector space V with an orthonormal basis {e1, e2, · · · , eN}, and a vector multiplication operation represented by juxtaposition, a multivector is a sum of k-vectors, k ∈ [0, N], such as

a_0 + \sum_i a_i e_i + \sum_{i,j} a_{ij} e_i e_j + \sum_{i,j,k} a_{ijk} e_i e_j e_k + \cdots,

where a_0, a_i, a_{ij}, · · · are scalars. A multivector space is a set M = {x, y, z, · · ·} of multivectors, where the following axioms are satisfied.

Multivector space axioms.

  Contraction:                                        x² = x · x, ∀x ∈ V
  Addition is closed:                                 x + y ∈ M
  Multiplication is closed:                           xy ∈ M
  Addition is associative:                            (x + y) + z = x + (y + z)
  Addition is commutative:                            y + x = x + y
  There exists a zero element 0 ∈ M:                  x + 0 = x
  There exists a negative additive inverse −x ∈ M:    x + (−x) = 0
  Multiplication is distributive:                     x(y + z) = xy + xz, (x + y)z = xz + yz
  Multiplication is associative:                      (xy)z = x(yz)
  There exists a multiplicative identity 1:           1x = x

Compared to the vector space, def’n. 1.1, the multivector space
• presumes a vector multiplication operation, which is not assumed to be commutative (order matters),
• generalizes vector addition to multivector addition,
• generalizes scalar multiplication to multivector multiplication (of which scalar multiplication and vector multiplication are special cases),
• and most importantly, specifies a rule providing the meaning of a squared vector (the contraction axiom).

The contraction axiom is arguably the most important of the multivector space axioms, as it allows for multiplicative closure without an infinite dimensional multivector space. The remaining set of non-contraction axioms of a multivector space are almost that of a field¹ (as encountered in the study of complex inner products), as they describe most of the properties one would expect of a “well behaved” set of number-like quantities. However, a field also requires a multiplicative inverse element for all elements of the space, which exists for some multivector subspaces, but not in general.

1.2.2 Nomenclature.

The workhorse operator of geometric algebra is called grade selection, defined as

Definition 1.19: Grade selection operator

Given a set of k-vectors M_k, k ∈ [0, N], and any multivector of their sum

M = \sum_{i=0}^N M_i,

the grade selection operator is defined as

⟨M⟩_k ≡ M_k.

Due to its importance, selection of the (scalar) zero grade is given the shorthand

⟨M⟩ ≡ ⟨M⟩_0 = M_0.

¹ A mathematician would call a multivector space a non-commutative ring with identity [17], and could state the multivector space definition much more compactly without listing all the properties of a ring explicitly as done above.
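Since a k-vector’s blades are products of k distinct basis vectors, grade selection becomes mechanical once a multivector is encoded by basis blades. A sketch (my own encoding, for illustration only), applied to the multivector of eq. (1.5):

```python
# A multivector encoded as a dict from basis blade index tuples to
# coefficients: () is the scalar blade, (1,) is e1, (1, 2) is e1 e2,
# and so on. The grade of a blade is the number of indices in its key.
def grade(M, k):
    return {idx: c for idx, c in M.items() if len(idx) == k}

# The multivector of eq. (1.5): 1 + 2 e1 + 3 e1 e2 + 4 e1 e2 e3.
M = {(): 1, (1,): 2, (1, 2): 3, (1, 2, 3): 4}

assert grade(M, 0) == {(): 1}           # <M>   = 1
assert grade(M, 1) == {(1,): 2}         # <M>_1 = 2 e1
assert grade(M, 2) == {(1, 2): 3}       # <M>_2 = 3 e1 e2
assert grade(M, 3) == {(1, 2, 3): 4}    # <M>_3 = 4 e1 e2 e3
assert grade(M, 4) == {}                # no grade 4 part
```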
The grade selection operator will be used to define a generalized dot product between multivectors, and the wedge product, which generalizes the cross product (and is related to the cross product in R³). To illustrate grade selection by example, given a multivector M = 3 − e3 + 2 e1 e2, then

⟨M⟩ = 3
⟨M⟩_1 = −e3
⟨M⟩_2 = 2 e1 e2    (1.6)
⟨M⟩_3 = 0.

Definition 1.20: Orthonormal product shorthand.

Given an orthonormal basis {e1, e2, · · ·}, a multiple indexed quantity e_{ij···k} should be interpreted as the product (in the same order) of the basis elements with those indexes

e_{ij···k} = e_i e_j · · · e_k.

For example,

e12 = e1 e2
e123 = e1 e2 e3    (1.7)
e23121 = e2 e3 e1 e2 e1.

Definition 1.21: Pseudoscalar.

If {x1, x2, · · · , xk} is a normal basis for a k-dimensional (sub)space, then the product x1 x2 · · · xk is called a pseudoscalar for that (sub)space. A pseudoscalar that squares to ±1 is called a unit pseudoscalar.

In a two dimensional space e1 e2 is a pseudoscalar, as is 3 e2 e1. We will see shortly that these are related by a constant factor. In a three dimensional space the trivector e3 e1 e2 is a pseudoscalar, as is −7 e3 e1 e2. Both of these can be related by a constant factor. It is conventional to refer to
e12 = e1 e2 ,
(1.8)
as “the pseudoscalar” for a two dimensional space, and to
e123 = e1 e2 e3 ,
(1.9)
as “the pseudoscalar” for a three dimensional space. We will see that geometric algebra allows for many quantities that have a complex imaginary nature, and that the pseudoscalars of eq. (1.8) and eq. (1.9) both square to −1. For this reason, it is often convenient to use an imaginary notation for the R² and R³ pseudoscalars

i = e12
I = e123.
(1.10)
In three dimensional problems these notes often use i as a pseudoscalar for whatever planar subspace is convenient for the problem (i = e31, e23, · · ·), and not just the x-y plane. For example, the bivector that describes the transverse plane for a plane wave propagating along a k̂ direction may be designated by i, even if i does not lie in the x-y plane.

1.3 Analysis.
Unless otherwise stated, a Euclidean vector space with an orthonormal basis {e1 , e2 , · · ·} is assumed for the remainder of this chapter. Generalizations required for non-Euclidean spaces will be discussed when (if?) spacetime vectors are introduced. 1.3.1 Colinear vectors. It was pointed out that the vector multiplication operation was not assumed to be commutative (order matters). The only condition for which the product of two vectors is order independent, is when those vectors are colinear. Theorem 1.1: Commutation A product of factors that commute is unchanged by interchange of those factors. If u, and v are non-zero colinear vectors, then they commute uv = vu. The proof is simple. Because these vectors are colinear there exists some α for which v = αu, so
vu = (αu)u = αuu = uαu = u(αu) = uv.
(1.11)
The contraction axiom ensures that the product of two colinear vectors is a scalar. In particular, the square of a unit vector is unity. This should be highlighted explicitly, because this property will be used again and again

x̂² = 1.
(1.12)
For example, the squares of any orthonormal basis vectors are unity, e1² = e2² = e3² = 1. A corollary of eq. (1.12) is that we can factor 1 into the square of any unit vector

1 = x̂ x̂.
(1.13)
This has also been highlighted explicitly, because this factorization trick will be used repeatedly.

1.3.2 Normal vectors.
An interchange of the order of the products of two normal factors results in a change of sign, for example e2 e1 = −e1 e2. This is a consequence of the contraction axiom, and can be demonstrated by squaring the vector e1 + e2 (fig. 1.8). By the contraction axiom, the square of this vector is 2, the squared length of the vector

(e1 + e2)² = (e1 + e2) · (e1 + e2) = e1 · e1 + e1 · e2 + e2 · e1 + e2 · e2 = 1 + 0 + 0 + 1 = 2.
(1.14)
On the other hand, deferring the application of the contraction axiom until after the vector products have been distributed gives

(e1 + e2)² = (e1 + e2)(e1 + e2) = e1² + e2 e1 + e1 e2 + e2² = 1 + e2 e1 + e1 e2 + 1 = 2 + e2 e1 + e1 e2.
(1.15)
Figure 1.8: e1 + e2.
The right hand side of eq. (1.15), a mixed grade multivector with grades zero and two, must also equal eq. (1.14). A solution requires that the grade two components all sum to zero e2 e1 + e1 e2 = 0,
(1.16)
or

e1 e2 = −e2 e1.
(1.17)
The same computation could have been performed for any two orthonormal vectors, so we conclude that any interchange of two orthonormal vectors changes the sign. In general this is true of any normal vectors. Theorem 1.2: Anticommutation Let u, and v be two normal vectors, the product of which uv is a bivector. Changing the order of these products toggles the sign of the bivector. uv = −vu. This sign change on interchange is called anticommutation.
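Anticommutation of normal vectors plus the contraction axiom completely determine products of orthonormal basis vectors, which makes results like eq. (1.17) easy to verify mechanically. Below is a minimal sketch of such a product (my own dict-of-blades encoding, Euclidean metric assumed; not from the text):

```python
# A minimal Euclidean geometric product over orthonormal basis blades.
# A multivector is a dict mapping basis index tuples to coefficients:
# {(): s} is a scalar, {(1,): c} is c e1, {(1, 2): c} is c e1 e2, etc.
def gp(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx, sign = list(ia + ib), 1
            # Sort indices; each adjacent swap of two distinct normal
            # vectors flips the sign (theorem 1.2, anticommutation).
            i = 0
            while i + 1 < len(idx):
                if idx[i] > idx[i + 1]:
                    idx[i], idx[i + 1] = idx[i + 1], idx[i]
                    sign, i = -sign, 0
                else:
                    i += 1
            # Contraction axiom with a Euclidean metric: e_i e_i = 1.
            i = 0
            while i + 1 < len(idx):
                if idx[i] == idx[i + 1]:
                    del idx[i:i + 2]
                    i = 0
                else:
                    i += 1
            k = tuple(idx)
            out[k] = out.get(k, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

e1, e2 = {(1,): 1}, {(2,): 1}
assert gp(e1, e1) == {(): 1}        # contraction: e1^2 = 1
assert gp(e1, e2) == {(1, 2): 1}    # the bivector e1 e2
assert gp(e2, e1) == {(1, 2): -1}   # e2 e1 = -e1 e2, eq. (1.17)
```

Sorting the indices with a sign flip per swap is just repeated application of theorem 1.2; deleting repeated adjacent indices applies the contraction axiom.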
1.3.3 2D multiplication table.
The multiplication table for the R² geometric algebra can be computed with relative ease. Many of the interesting products involve i = e1 e2, the unit pseudoscalar. Using eq. (1.17) the imaginary nature of the pseudoscalar, mentioned earlier, can now be demonstrated explicitly

(e1 e2)² = (e1 e2)(e1 e2) = −(e1 e2)(e2 e1) = −e1 (e2²) e1 = −e1² = −1.
(1.18)
Like the (scalar) complex imaginary, this bivector also squares to −1. The only non-trivial products left to fill in the R² multiplication table are those of the unit vectors with i, products that are order dependent

e1 i = e1 (e1 e2) = (e1 e1) e2 = e2
i e1 = (e1 e2) e1 = (−e2 e1) e1 = −e2 (e1 e1) = −e2
e2 i = e2 (e1 e2) = e2 (−e2 e1) = −(e2 e2) e1 = −e1
i e2 = (e1 e2) e2 = e1 (e2 e2) = e1.    (1.19)

The multiplication table for the R² multivector basis can now be tabulated
2D Multiplication table.

            1        e1        e2        e1 e2
  1         1        e1        e2        e1 e2
  e1        e1       1         e1 e2     e2
  e2        e2       −e1 e2    1         −e1
  e1 e2     e1 e2    −e2       e1        −1
It is important to point out that the pseudoscalar i does not commute with either basis vector, but anticommutes with both, since i e1 = −e1 i, and i e2 = −e2 i. By superposition i anticommutes with any vector in the x-y plane. More generally, if û and v̂ are orthonormal, and x ∈ span {û, v̂}, then the bivector û v̂ anticommutes with x, or any other vector in this plane.

1.3.4 Plane rotations.

Plotting eq. (1.19), as in fig. 1.9, shows that multiplication by i rotates the R² basis vectors by π/2 radians, with the rotation direction dependent on the order of multiplication.
Figure 1.9: Multiplication by e1 e2 .
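The whole 2D multiplication table can also be generated programmatically. A sketch (same dict-of-blades encoding and Euclidean geometric product as described in the text; the helper names are mine), spot-checking entries against eq. (1.19):

```python
# Minimal Euclidean geometric product over orthonormal basis blades,
# with multivectors as dicts from index tuples to coefficients.
def gp(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx, sign = list(ia + ib), 1
            i = 0
            while i + 1 < len(idx):          # anticommutation: sign flip per swap
                if idx[i] > idx[i + 1]:
                    idx[i], idx[i + 1] = idx[i + 1], idx[i]
                    sign, i = -sign, 0
                else:
                    i += 1
            i = 0
            while i + 1 < len(idx):          # contraction: e_i e_i = 1
                if idx[i] == idx[i + 1]:
                    del idx[i:i + 2]
                    i = 0
                else:
                    i += 1
            k = tuple(idx)
            out[k] = out.get(k, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

basis = [(), (1,), (2,), (1, 2)]             # 1, e1, e2, e1 e2
table = {(r, c): gp({r: 1}, {c: 1}) for r in basis for c in basis}

assert table[((1,), (2,))] == {(1, 2): 1}    # e1 e2
assert table[((2,), (1,))] == {(1, 2): -1}   # e2 e1 = -e1 e2
assert table[((1,), (1, 2))] == {(2,): 1}    # e1 i = e2
assert table[((1, 2), (1,))] == {(2,): -1}   # i e1 = -e2
assert table[((1, 2), (1, 2))] == {(): -1}   # i^2 = -1
```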
Using a polar vector representation x = ρ (e1 cos θ + e2 sin θ) ,
(1.20)
it can be demonstrated directly that unit pseudoscalar multiplication of an arbitrary vector will induce a π/2 rotation. Right multiplication by the pseudoscalar gives xi = xe1 e2 = ρ (e1 cos θ + e2 sin θ) e1 e2 = ρ (e2 cos θ − e1 sin θ) ,
(1.21)
whereas multiplication from the left gives

ix = e1 e2 x = ρ e1 e2 (e1 cos θ + e2 sin θ) = ρ (−e2 cos θ + e1 sin θ).
(1.22)
These rotations, illustrated in fig. 1.10, are the counterclockwise and clockwise rotations through π/2 radians of eq. (1.20) (exercise 1.3).
Figure 1.10: π/2 rotation using pseudoscalar multiplication.
In complex number theory the complex exponential eiθ can be used as a rotation operator. Geometric algebra puts this rotation operator into the vector algebra toolbox, by utilizing Euler’s formula eiθ = cos θ + i sin θ,
(1.23)
which holds for this pseudoscalar imaginary representation too (exercise 1.4). By writing e2 = e1 e1 e2 , a complex exponential can be factored directly out of the polar vector representation eq. (1.20)
x = ρ (e1 cos θ + e2 sin θ) = ρ (e1 cos θ + (e1 e1 )e2 sin θ) = ρe1 (cos θ + e1 e2 sin θ) = ρe1 (cos θ + i sin θ) = ρe1 eiθ .
(1.24)
We end up with a complex exponential multivector factor on the right. Alternatively, since e2 = e2 e1 e1 , a complex exponential can be factored out on the left x = ρ (e1 cos θ + e2 sin θ) = ρ (e1 cos θ + e2 (e1 e1 ) sin θ) = ρ (cos θ − e1 e2 sin θ) e1 = ρ (cos θ − i sin θ) e1 = ρe−iθ e1 .
(1.25)
Left and right exponential expressions have now been found for the polar representation ρ (e1 cos θ + e2 sin θ) = ρe−iθ e1 = ρe1 eiθ .
(1.26)
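Eq. (1.26) can be spot checked with the same kind of coefficient arithmetic. A small sketch, using an ad hoc 4-coefficient representation on the basis {1, e1, e2, i = e1 e2} (the helper gp is an illustrative construction, not from the text):

```python
import math

def gp(a, b):
    # 2D geometric product on coefficient tuples (1, e1, e2, e1e2).
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

rho, theta = 1.5, 0.7
e1 = (0.0, 1.0, 0.0, 0.0)
exp_pos = (math.cos(theta), 0.0, 0.0, math.sin(theta))    # e^{i theta}
exp_neg = (math.cos(theta), 0.0, 0.0, -math.sin(theta))   # e^{-i theta}

right = tuple(rho*c for c in gp(e1, exp_pos))   # rho e1 e^{i theta}
left = tuple(rho*c for c in gp(exp_neg, e1))    # rho e^{-i theta} e1
polar = (0.0, rho*math.cos(theta), rho*math.sin(theta), 0.0)
```

Both the right exponential factorization and the left one reproduce the polar vector of eq. (1.20).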
This is essentially a recipe for rotation of a vector in the x-y plane. The rotational sense is that of the rotation taking e1 to e2. Such rotations are illustrated in fig. 1.11.
Figure 1.11: Rotation in a plane.
This generalizes to rotations of RN vectors constrained to a plane. Given orthonormal vectors û, v̂ and any vector in the plane of these two vectors (x ∈ span{û, v̂}), this vector is rotated θ radians in the direction of rotation that takes û to v̂ by

x′ = x e^{ûv̂θ} = e^{−ûv̂θ} x.
(1.27)
The sense of rotation for the rotation e^{ûv̂θ} is opposite that of e^{v̂ûθ}, which provides a first hint that bivectors can be characterized as having an orientation, somewhat akin to thinking of a vector as having a head and a tail.
Exercise 1.3
R2 rotations.
Using familiar methods, such as rotation matrices, show that the counterclockwise and clockwise rotations of eq. (1.20) are given by eq. (1.21) and eq. (1.22) respectively. Exercise 1.4
Euler’s formula.
For a multivector x assume an infinite series representation of the exponential, sine and cosine functions

exp x = Σ_{k=0}^∞ x^k/k!
cos x = Σ_{k=0}^∞ (−1)^k x^{2k}/(2k)!
sin x = Σ_{k=0}^∞ (−1)^k x^{2k+1}/(2k+1)!
a. Show that for scalar θ Euler's formula e^{Jθ} = cos θ + J sin θ holds for any multivector J that satisfies J² = −1.
b. Given multivectors x, y, show that splitting a multivector exponential into factors of the form e^{x+y} = e^x e^y requires x and y to commute.

1.3.5
Vector product, dot product and wedge product.
The product of two collinear vectors is a scalar, and the product of two orthogonal vectors is a bivector. The product of two general vectors is a multivector with structure to be determined.
A powerful way to examine this structure is to compute the product of two vectors in a polar representation with respect to the plane that they span. Let û and v̂ be an orthonormal pair of vectors in the plane of a and b, oriented in a positive rotational sense as illustrated in fig. 1.12.
Figure 1.12: Two vectors in a plane.
With respect to the plane basis û and v̂, a polar representation of a, b is

a = ‖a‖ û e^{iab θa} = ‖a‖ e^{−iab θa} û
b = ‖b‖ û e^{iab θb} = ‖b‖ e^{−iab θb} û,
(1.28)
where iab = ûv̂ is a unit pseudoscalar for the planar subspace spanned by a and b. The vector product of these two vectors is

ab = ‖a‖ e^{−iab θa} û ‖b‖ û e^{iab θb}
 = ‖a‖‖b‖ e^{−iab θa} (ûû) e^{iab θb}
 = ‖a‖‖b‖ e^{iab(θb − θa)}
 = ‖a‖‖b‖ (cos(θb − θa) + iab sin(θb − θa)).
(1.29)
We see that the product of two vectors is a multivector that has only grades 0 and 2. This can be expressed symbolically as

ab = ⟨ab⟩ + ⟨ab⟩_2.
(1.30)
We recognize the scalar grade of the vector product as the RN dot product, but the grade 2 component of the vector product is something new that requires a name. We respectively identify and define operators for these vector grade selection operations
Definition 1.22: Dot and wedge products of two vectors. Given two vectors a, b ∈ RN the dot product is identified as the scalar grade of their product ⟨ab⟩ = a · b. A wedge product of the vectors is defined as a ∧ b ≡ ⟨ab⟩_2. Given this notation, the product of two vectors can be written ab = a · b + a ∧ b.

Scalar grade selection of a product of two vectors is an important new tool. There will be many circumstances where the easiest way to compute a dot product is using scalar grade selection. The split of a vector product into dot and wedge product components is also important. However, to utilize it, the properties of the wedge product have to be determined. We also want to determine exactly how the wedge product is related to the cross product, as they clearly have a similar structure. Summarizing eq. (1.29) with our new operators, we write
ab = ‖a‖‖b‖ exp(iab(θb − θa))
a · b = ‖a‖‖b‖ cos(θb − θa)
a ∧ b = iab ‖a‖‖b‖ sin(θb − θa),

(1.31)

Two wedge product properties can be immediately deduced from this polar representation

• b ∧ a = −a ∧ b.
• a ∧ (αa) = 0, ∀α ∈ R.
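Both properties, along with the dot product as scalar grade selection, can be spot checked numerically. A sketch reusing the coefficient-tuple product on {1, e1, e2, e1 e2} (gp and wedge are illustrative helper names, not notation from the text):

```python
def gp(a, b):
    # 2D geometric product on coefficient tuples (1, e1, e2, e1e2).
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a0*b1 + a1*b0 - a2*b3 + a3*b2,
            a0*b2 + a2*b0 + a1*b3 - a3*b1,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def wedge(a, b):
    """a ^ b = <ab>_2: the grade 2 (e1 e2) coefficient of the product."""
    return gp(a, b)[3]

a = (0.0, 1.0, 2.0, 0.0)     # a = e1 + 2 e2
b = (0.0, 3.0, -1.0, 0.0)    # b = 3 e1 - e2
```

The checks below confirm the antisymmetry b ∧ a = −a ∧ b, that a ∧ (αa) vanishes, and that the scalar grade of ab is the usual dot product.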
The cross product is also bilinear, so one can reasonably expect this of the wedge product. This is much easier to demonstrate using a coordinate expansion. To do so let

a = Σ_i ai ei
b = Σ_i bi ei.

(1.32)
The product of these vectors is

ab = Σ_i ai ei Σ_j bj ej
 = Σ_{ij} ai bj ei ej
 = Σ_{i=j} ai bj ei ej + Σ_{i≠j} ai bj ei ej.

(1.33)

Since ei ei = 1, we see again that the scalar component of the product is the dot product Σ_i ai bi. The remaining grade 2 components are the wedge product, for which the coordinate expansion can be simplified further
a ∧ b = Σ_{i≠j} ai bj ei ej
 = Σ_{i<j} ai bj ei ej + Σ_{j<i} ai bj ei ej
 = Σ_{i<j} ai bj ei ej − Σ_{i<j} aj bi ei ej
 = Σ_{i<j} (ai bj − aj bi) ei ej.
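Written out, the grade 2 coefficients are ai bj − aj bi for i < j. In R3 these are, component by component, the same numbers the cross product produces, just attached to basis bivectors ei ej instead of to a normal vector — a first concrete look at the wedge/cross relationship raised above. A numpy spot check (the dictionary keying and 0-based indices are illustrative choices):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

# wedge coefficients on e_i e_j for i < j (0-based indices)
w = {(i, j): a[i]*b[j] - a[j]*b[i]
     for i in range(3) for j in range(i + 1, 3)}

c = np.cross(a, b)
# e2e3 carries c_x, e1e3 carries -c_y, e1e2 carries c_z
```

The sign pattern in the correspondence reflects the orientation of each basis bivector relative to the right-handed cross product convention.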
(a > b), and a = a e1 e^{iψ} is the vectoral representation of the semi-major axis (not necessarily placed along e1), and e = √(1 − (b/a)²) is the eccentricity of the ellipse, then it can be shown ([6]) that an elliptic parameterization can be written in the compact form

r(φ) = e a cosh(tanh⁻¹(b/a) + iφ).
(3.164)
3.9 polarization.
When the bivector imaginary i = e12 is used then this parameterization is real and has only vector grades, so the electromagnetic field for a general elliptic wave has the form

F = e Ea (1 + e3) e1 e^{iψ} cosh(m + iφ)
m = tanh⁻¹(Eb/Ea)
e = √(1 − (Eb/Ea)²),
(3.165)
where Ea (Eb) are the magnitudes of the electric field components lying along the semi-major (semi-minor) axes, and the propagation direction e3 is normal to both the major and minor axis directions. An elliptic electric field polarization is illustrated in fig. 3.10, where the vectors representing the major and minor axes are Ea = Ea e1 e^{iψ}, Eb = Eb e2 e^{iψ}. Observe that setting Eb = 0 results in the linearly polarized field of eq. (3.157).
Figure 3.10: Electric field with elliptical polarization.
Following the procedure of eq. (3.159), the energy-momentum of an elliptically polarized field is

E + S/v = ½ F F†
 = ½ e² Ea² (1 + e3) e1 e^{iψ} cosh(m + iφ) cosh(m − iφ) e^{−iψ} e1 (1 + e3)
 = ½ e² Ea² (1 + e3)(cosh(2m) + cos(2φ))
 = (1 + e3)(Eb² + (Ea² − Eb²) cos² φ).
(3.166)
The simplification above made use of the identity (1 − (b/a)2 ) cosh(2 atanh(b/a)) = 1 + (b/a)2 .
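That identity follows from cosh(2u) = (1 + tanh²u)/(1 − tanh²u), and is simple to confirm numerically for a few axis ratios (the sample ratios below are arbitrary):

```python
import math

def identity_gap(r):
    """Difference between the two sides of
    (1 - r^2) cosh(2 atanh(r)) = 1 + r^2, valid for 0 <= r < 1."""
    return (1 - r*r) * math.cosh(2 * math.atanh(r)) - (1 + r*r)

gaps = [identity_gap(r) for r in (0.0, 0.25, 0.5, 0.75, 0.9)]
```

Every gap is zero to floating point precision.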
electromagnetism.
3.9.6 Pseudoscalar imaginary.

The multivector 1 + e3 acts as a projector, consuming any factors of e3

(1 + e3) e3 = e3 + e3² = 1 + e3.
(3.167)
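This projector property can be checked in the standard Pauli matrix representation of R3 geometric algebra, where ek maps to σk and the geometric product becomes the matrix product (this representation is a common device for numeric checks, and an assumption of this sketch rather than machinery used by the text):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

P = one + s3            # represents the multivector 1 + e3
I = s1 @ s2 @ s3        # pseudoscalar I = e123; equals 1j * identity here
```

Besides (1 + e3) e3 = 1 + e3, the same representation confirms that (1 + e3)/2 is idempotent and that e12 = e3 I, the duality used in the next step.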
This property allows all the bivector imaginaries i = e12 = e3 I in eq. (3.153) to be reexpressed in terms of the R3 pseudoscalar I = e123. To illustrate this consider just the left circular polarized wave

FL = (1 + e3) e1 αL e^{iφ}
 = (1 + e3) e1 αL (cos φ + e3 I sin φ)
 = (1 + e3) e1 αL cos φ − (1 + e3) e3 e1 αL I sin φ
 = (1 + e3) e1 αL e^{−Iφ}
 = (1 + e3) e1 (αL1 + e3 I αL2) e^{−Iφ}
 = (1 + e3) e1 (αL1 − I αL2) e^{−Iφ}.
(3.168)
This shows that the coefficients for the circular polarized states can be redefined using the pseudoscalar as an imaginary (in contrast to the bivector imaginary used in eq. (3.154))

α′L = αL1 − I αL2
α′R = αR1 − I αR2,

(3.169)

so that the plane wave is

F = (1 + e3) e1 (α′L e^{−Iφ} + α′R e^{Iφ}).

(3.170)
Like eq. (3.153) this plane wave representation does not require taking any real parts. The transverse plane in which the electric and magnetic fields lie is defined by the duality relation i = Ie3. The energy momentum multivector for a wave described in terms of the pseudoscalar circular polarization states of eq. (3.170) is just

E + S/v = (1 + e3)(|α′L|² + |α′R|²),  (3.171)

where the absolute value is computed using the reverse as the conjugation operation |z|² = z z†.
3.9.7
Problems.
Exercise 3.7
Circular polarization coefficients relationship to the Jones vector.
By substituting eq. (3.154) into eq. (3.153), and comparing to eq. (3.150), show that the circular state coefficients have the following relationship to the Jones vector coordinates

αL = (α1 + β2)/2 + i(−α2 + β1)/2
αR = (α1 − β2)/2 + i(−α2 − β1)/2,

and use this to prove eq. (3.155).

Exercise 3.8
Pseudoscalar Jones vector.
With the Jones vector defined in terms of the R3 pseudoscalar

c1 = α1 + I β1
c2 = α2 + I β2,

calculate the values α′L, α′R of eq. (3.169) in terms of this Jones vector.

3.10
transverse fields in a waveguide.
Under source free conditions, Maxwell's equation in GA form is

F = E + IηH
0 = (∇ + (1/v) ∂/∂t) F.
(3.172)
Maxwell’s equation allows for components of the electric and magnetic field along the propagation direction and the transverse plane, however, it is possible to relate the transverse and propagating field components. Assume that the propagation direction is along the z-axis (either forward or backwards), with angular frequency ω, with the field represented by the real part of F(x, y, z, t) = F(x, y)e jωt∓ jkz .
(3.173)
We split the gradient into transverse and z-axis components ∇ = ∇t + e3 ∂z ,
(3.174)
so that Maxwell’s equation becomes ω ∇t + j ∓ ke3 F(x, y) = 0. v With F = F(x, y) ω ∇t F = − j ∓ ke3 F. v
(3.175)
(3.176)
We require a way of expressing the components of the field that lie in the propagation direction and transverse planes. Let the propagation component be designated Fz so that

Fz = (E · e3) e3 + Iη (H · e3) e3
 = ½ (E e3 + e3 E) e3 + ½ Iη (H e3 + e3 H) e3
 = ½ (E + e3 E e3) + ½ Iη (H + e3 H e3),
(3.177)
showing that the propagation component Fz and transverse component Ft = F − Fz of the total field are

Fz = ½ (F + e3 F e3)
Ft = ½ (F − e3 F e3).

(3.178)
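The split of eq. (3.178) can be verified numerically with the Pauli matrix representation of R3 GA (ek → σk, with the matrix product standing in for the geometric product — an assumption of this sketch, not machinery from the text):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

F = 1.0*s1 + 2.0*s2 + 3.0*s3      # a sample vector field value

Fz = 0.5 * (F + s3 @ F @ s3)      # propagation component
Ft = 0.5 * (F - s3 @ F @ s3)      # transverse component
```

Sandwiching with e3 flips the sign of the e1, e2 components and preserves the e3 component, so the symmetric and antisymmetric halves pick off the two pieces exactly.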
Since ∇t has only x̂, ŷ components, e3 anticommutes with the transverse gradient

e3 ∇t = −∇t e3,
(3.179)
but commutes with 1 ∓ e3. This means that

½ (∇t F ± e3 (∇t F) e3) = ½ (∇t F ∓ ∇t e3 F e3)
 = ½ ∇t (F ∓ e3 F e3),
(3.180)
or

½ (∇t F + e3 (∇t F) e3) = ∇t Ft
½ (∇t F − e3 (∇t F) e3) = ∇t Fz,

(3.181)

so Maxwell's equation eq. (3.176) becomes
∇t Ft = −j (ω/v ∓ k e3) Fz
∇t Fz = −j (ω/v ∓ k e3) Ft.

(3.182)

Provided ω² ≠ (kv)², these can be inverted. Such an inversion allows an application of the transverse gradient to whichever one of Fz, Ft is known, to compute the other.
Fz = j (ω/v ± k e3)/((ω/v)² − k²) ∇t Ft
Ft = j (ω/v ± k e3)/((ω/v)² − k²) ∇t Fz.

(3.183)
The relation for Ft in eq. (3.183) is usually stated in terms of the electric and magnetic fields. To compute that split we need to expand most of the terms in the numerator

(ω/v ± k e3) ∇t Fz = (−(ω/v) e3 ± k) ∇t e3 Fz
 = (±k − (ω/v) e3) ∇t (Ez + Iη Hz)
 = ±k ∇t Ez + (ωη/v) e3 × ∇t Hz + I (±kη ∇t Hz − (ω/v) e3 × ∇t Ez),  (3.184)

which means the transverse electric and magnetic fields are

Et = j/((ω/v)² − k²) (±k ∇t Ez + (ωη/v) e3 × ∇t Hz)
η Ht = j/((ω/v)² − k²) (±kη ∇t Hz − (ω/v) e3 × ∇t Ez).  (3.185)

There is considerably more complexity required to express the transverse field in terms of separate electric and magnetic components compared to the equivalent total transverse field expression of eq. (3.183).

Exercise 3.9
Transverse electric and magnetic field components.
Fill in the missing details in the steps of eq. (3.184). 3.11
boundary value conditions.
The difference between the normal and tangential components of the electromagnetic field spanning a surface carrying a surface current or surface charge density can be related to those surface sources.
These relationships can be determined by integrating Maxwell’s equation over the pillbox configuration illustrated in fig. 3.11.
Figure 3.11: Pillbox integration volume.
An assumption that the sources are primarily constrained to the surface can be written as J = J s δ(y),
(3.186)
where the y coordinate is locally normal to the surface at any given point. In terms of the constituent electric and magnetic sources, such a surface source model is J = (η (vρs − Js) + I (vρms − Ms)) δ(y).
(3.187)
It will be simplest to demonstrate the boundary relationships in the frequency domain, where Maxwell's equation can be written as either

∇F = −jkF + J,  (3.188a)

or

∇(IF) = −jk(IF) + IJ.  (3.188b)
Application of contraction operations gives

∇ · F = ⟨−jkF + J⟩_{0,1} = −jkE + η(vρs − Js)δ(y)  (3.189a)
∇ · (IF) = ⟨−jkIF + IJ⟩_{0,1} = jkηH − (vρms − Ms)δ(y).  (3.189b)
Each of these contraction operations can be evaluated over the pillbox volume above using the divergence theorem, however, the delta function integrals are problematic. Those integrals are dependent on η and v, which vary across the surface, but are also dependent on the delta function surface contribution, which is valid only at the surface. Consider the current term for electric sources as an example, where the volume integral of that term is

−∫ dV η Js δ(y) = −∫_{y=0}^{h/2} dy ∫ dA η2 Js δ(y) − ∫_{y=−h/2}^{0} dy ∫ dA η1 Js δ(y).  (3.190)
The delta function is only well defined when integrated across the y = 0 point. This problem can be overcome by applying grade selection operations to each of the components of eq. (3.189), and then rearranging so that all the medium specific contributions to the integrals are factored away from the delta functions

⟨∇ · (εF)⟩ = ρs δ(y)  (3.191a)
⟨∇ · ((1/η) F)⟩_1 = −j(k/η)E − Js δ(y)  (3.191b)
⟨∇ · (I (1/v) F)⟩ = −ρms δ(y)  (3.191c)
⟨∇ · (IF)⟩_1 = jkηH + Ms δ(y).  (3.191d)
Each of the grade selections picks off one of D, B, H or E, so this could have been obtained directly from the conventional set of individual Maxwell equations, however, it is instructional to see how to work with the complete electromagnetic field F. This also provides a method of evaluating the boundary conditions that is both coordinate free, and uses the same integral form for all the boundary conditions. Application of the multivector R3 divergence theorem, as stated informally in eq. (2.105), gives

⟨∫ dV n̂ · (εF)⟩ = ΔA ρs  (3.192a)
⟨∫ dV n̂ · ((1/η) F)⟩_1 = −jω ∫ dy ∫ dA D − ΔA Js  (3.192b)
⟨∫ dV n̂ · (I (1/v) F)⟩ = −ΔA ρms  (3.192c)
⟨∫ dV n̂ · (IF)⟩_1 = jω ∫ dy ∫ dA B + ΔA Ms.  (3.192d)
The y (normal) integral components of the volume integrals are all assumed to vanish as Δy → 0, leaving

⟨n̂ (ε2 F2 − ε1 F1)⟩ = ρs
⟨n̂ ((1/η2) F2 − (1/η1) F1)⟩_1 = −Js
⟨n̂ I ((1/v2) F2 − (1/v1) F1)⟩ = −ρms
⟨n̂ I (F2 − F1)⟩_1 = Ms.

(3.193)

These can, of course, each be written in terms of the constituent fields if desired

n̂ · (D2 − D1) = ρs
n̂ × (H2 − H1) = Js
n̂ · (B2 − B1) = ρms
n̂ × (E2 − E1) = −Ms.

(3.194)

The crazy jumble of dot products, cross products and field components in this conventional statement of the boundary conditions is seen to follow systematically from Maxwell's equation eq. (3.188a), and reflects the fact that the components of Maxwell's equation have to be treated individually by grade when evaluating the boundary integrals.

3.12 multivector potential.

3.12.1 General potential representation.
For both electrostatics and magnetostatics, where Maxwell's equations are each a pair of gradient equations, we were able to represent the fields in terms of respective scalar and vector potentials. For electrodynamics, where Maxwell's equation is

(∇ + (1/v) ∂/∂t) F = J,  (3.195)

it seems more reasonable to demand a different structure of the potential, say

F = (∇ − (1/v) ∂/∂t) A,

(3.196)
where A is a multivector potential that may contain all grades, with structure to be determined. If such a multivector potential can be found, then Maxwell's equation is reduced to a single wave equation

(∇² − (1/v²) ∂²/∂t²) A = J,  (3.197)

which can be thought of as one wave equation for each multivector grade of the multivector source J. Some thought shows that the guess eq. (3.196) is not quite right, as it allows for the invalid possibility that F has scalar or pseudoscalar grades. While it is possible to impose constraints (a gauge choice) on potential A that ensure F has only the vector and bivector grades, in general, a grade selection filter must be imposed

F = ⟨(∇ − (1/v) ∂/∂t) A⟩_{1,2}.

(3.198)
We will find that the desired representation of the multivector potential is A = −φ + vA + ηI (−φm + vF) .
(3.199)
Here

1. φ is the scalar potential V (Volts).
2. A is the vector potential W/m (Webers/meter).
3. φm is the scalar potential for (fictitious) magnetic current sources A (Amperes).
4. F is the vector potential for (fictitious) magnetic current sources C (Coulombs).

This specific breakdown of A into scalar and vector potentials, and dual (pseudoscalar and bivector) potentials has been chosen to match existing SI conventions, specifically those of [3].

3.12.2
Electric sources.
For a multivector current with only electric sources J = η (vρ − J) ,
(3.200)
we can construct a multivector potential with only scalar and vector grades A = −φ + vA.
(3.201)
The resulting field is

F = E + IηH = ⟨(∇ − (1/v) ∂/∂t)(−φ + vA)⟩_{1,2},  (3.202)

which expands to

F = −∇φ − ∂A/∂t + v∇ ∧ A.

(3.203)
The respective electric and magnetic fields can be extracted using a duality transformation for the bivector curl

F = −∇φ − ∂A/∂t + Iv ∇ × A,

(3.204)
from which we can read off the field components

E = −∇φ − ∂A/∂t
µH = ∇ × A.

(3.205)

Observe that the grade selection encodes the precise recipe required to produce the desired combination of gradients, curls and time partials. The potential representation of the field eq. (3.202) is only a solution if Maxwell's equation is also satisfied, or

(∇² − (1/v²) ∂²/∂t²)(−φ + vA) = η(vρ − J) + (∇ + (1/v) ∂/∂t) ⟨(∇ − (1/v) ∂/∂t)(−φ + vA)⟩_{0,3}
 = η(vρ − J) + (∇ + (1/v) ∂/∂t)(v∇ · A + (1/v) ∂φ/∂t).

(3.206)
Imposing a constraint on the potential grades

∇ · A + (1/v²) ∂φ/∂t = 0,

(3.207)
the Lorenz gauge condition, is clearly an expedient way to simplify this relationship. In particular, in the frequency domain ∂/∂t ↔ jω = jkv, this gauge choice allows the scalar potential to be entirely eliminated, since

φ = (jv²/ω) ∇ · A.

(3.208)
so the multivector potential is completely determined by a single vector potential

A = −(jv²/ω) ∇ · A + vA,  (3.209)

Maxwell's equation is reduced to a Helmholtz equation

(∇² + k²) A = J,  (3.210)
and the field is simply F = (∇ − jk) A.
(3.211)
The conventional electric and magnetic field expressions can be found by substituting eq. (3.208) into eq. (3.204) and switching to the frequency domain

F = −(jv²/ω) ∇(∇ · A) − jωA + Iv ∇ × A,

(3.212)
so

E = −jωA − (jv/k) ∇(∇ · A)
µH = ∇ × A.

(3.213)

3.12.3
Magnetic sources.
For a multivector current with only magnetic sources J = I (vρm − M) ,
(3.214)
we can construct a multivector potential with only pseudoscalar and bivector grades A = ηI (−φm + vF) .
(3.215)
The resulting field is

F = E + IηH = ⟨(∇ − (1/v) ∂/∂t)(−Iηφm + IηvF)⟩_{1,2},  (3.216)

which simplifies to

F = Iη (v∇ ∧ F − ∂F/∂t − ∇φm).

(3.217)
The separate electric and magnetic field contributions can be read off from

F = −ηv ∇ × F + ηI (−∇φm − ∂F/∂t),

(3.218)

yielding

E = −(1/ε) ∇ × F
H = −∇φm − ∂F/∂t.

(3.219)
The potential representation of the field eq. (3.219) is only a solution if Maxwell's equation is also satisfied, or

(∇² − (1/v²) ∂²/∂t²) ηI(−φm + vF) = I(vρm − M) + (∇ + (1/v) ∂/∂t) ⟨(∇ − (1/v) ∂/∂t) ηI(−φm + vF)⟩_{0,3}
 = I(vρm − M) + (∇ + (1/v) ∂/∂t)((ηI/v) ∂φm/∂t + ηvI ∇ · F).  (3.220)
Again, imposing a constraint on the potential grades

∇ · F + (1/v²) ∂φm/∂t = 0,

(3.221)
the Lorenz gauge condition for the magnetic potentials, is clearly an expedient way to simplify this relationship. As before, in the frequency domain the scalar potential can be entirely eliminated

φm = (jv²/ω) ∇ · F.  (3.222)

In this case the multivector potential is

A = ηI (−(jv²/ω) ∇ · F + vF),  (3.223)
and Maxwell’s equation and the field are given by eq. (3.210) and eq. (3.211) respectively. In the frequency domain, the electric and magnetic fields can be found from eq. (3.222) and eq. (3.219) 1 E=− ∇×F jv H = − jωF − ∇ (∇ · F) . k
(3.224)
3.12.4
Far field.
Given a vector potential with a radial spherical wave representation

A = (e^{−jkr}/r) A(θ, φ),  (3.225)

we can compute the far field (r ≫ 1) electrodynamic field F. The spherical representation of the gradient is

∇ = r̂ ∂r + ∇⊥
∇⊥ = (θ̂/r) ∂θ + (φ̂/(r sin θ)) ∂φ.

(3.226)

The gradient of the vector potential is

∇A = (r̂ ∂r + ∇⊥) A
 = r̂ (−jk − 1/r)(e^{−jkr}/r) A(θ, φ) + (e^{−jkr}/r) ∇⊥ A(θ, φ)
 = −(jk + 1/r) r̂ A + O(1/r²)
 ≈ −jk r̂ A.

(3.227)
Here, all the O(1/r²) terms, including the action of the non-radial component of the gradient on the 1/r potential, have been neglected. From eq. (3.227) the far field divergence and the (bivector) curl of A are

∇ · A = −jk r̂ · A
∇ ∧ A = −jk r̂ ∧ A.

(3.228)
Finally, the far field gradient of the divergence of A is

∇(∇ · A) = (r̂ ∂r + ∇⊥)(−jk r̂ · A)
 ≈ −jk r̂ ∂r (r̂ · A)
 = −jk r̂ (−jk − 1/r)(r̂ · A)
 ≈ −k² r̂ (r̂ · A),

(3.229)
again neglecting any O(1/r²) terms. The field is

F = −jωA − (jv²/ω) ∇(∇ · A) + v∇ ∧ A
 = −jωA + jω r̂ (r̂ · A) − jkv r̂ ∧ A
 = −jω (A − r̂ (r̂ · A)) − jω r̂ ∧ A
 = −jω r̂ (r̂ ∧ A) − jω r̂ ∧ A,

(3.230)
or F = − jω (rˆ + 1) (rˆ ∧ A) .
(3.231)
One interpretation of this is that the (bivector) magnetic field is represented by the plane perpendicular to the direction of propagation, and the electric field by a vector in that plane. These electric and magnetic fields can be extracted by inspection. Let A⊥ = r̂ (r̂ ∧ A) represent the non-radial component of the potential, so these respective fields are

E = −jω A⊥
H = (1/η) r̂ × E.

(3.232)
Having calculated the far field approximation for the electrodynamic field for a vector potential A, the field for the magnetic source vector potential F is
F = − jωηI (rˆ + 1) (rˆ ∧ F) ,
(3.233)
for which the electric and magnetic field components are

E = jωη r̂ × F
H = −jω F⊥.

(3.234)
Example 3.1: Vertical dipole potential.

We will calculate the far field along the propagation direction vector k̂ in the z-y plane

k̂ = e3 e^{iθ}, i = e32,  (3.235)

for the infinitesimal dipole potential

A = (µ0 I0 l/(4πr)) e3 e^{−jkr},  (3.236)

as illustrated in fig. 3.12.
Figure 3.12: Vertical infinitesimal dipole and selected propagation direction.
The wedge of k̂ with A is proportional to

k̂ ∧ e3 = ⟨k̂ e3⟩_2
 = ⟨e3 e^{iθ} e3⟩_2
 = ⟨e3² e^{−iθ}⟩_2
 = −i sin θ,

(3.237)
so from eq. (3.231) the field is

F = jω (1 + e3 e^{iθ}) i sin θ (µ0 I0 l/(4πr)) e^{−jkr}.

(3.238)
The electric and magnetic fields can be found from the respective vector and bivector grades of eq. (3.238)

E = (jωµ0 I0 l/(4πr)) e^{−jkr} e3 e^{iθ} i sin θ
 = (jωµ0 I0 l/(4πr)) e^{−jkr} e2 e^{iθ} sin θ
 = (jkη0 I0 l sin θ/(4πr)) e^{−jkr} (e2 cos θ − e3 sin θ),

(3.239)
and

H = (1/(Iη0)) jω i sin θ (µ0 I0 l/(4πr)) e^{−jkr}
 = (1/η0) e321 e32 jω sin θ (µ0 I0 l/(4πr)) e^{−jkr}
 = −e1 (jk I0 l sin θ/(4πr)) e^{−jkr}.

(3.240)
The multivector electrodynamic field expression eq. (3.238) for F is more algebraically compact than the separate electric and magnetic field expressions, but this comes with the complexity of dealing with different types of imaginaries. There are two explicit unit imaginaries in eq. (3.238), the scalar imaginary j used to encode the time harmonic nature of the field, and i = e32 used to represent the plane that the far field propagation direction vector lies in. Additionally, when the magnetic field component was extracted, the pseudoscalar I = e123 entered into the mix. Care is required to keep these all separate, especially since I, j commute with all grades, but i does not.
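The directions appearing in eq. (3.239) and eq. (3.240) can be sanity checked with ordinary coordinate vectors: writing k̂ = e3 cos θ + e2 sin θ, E ∝ e2 cos θ − e3 sin θ and H ∝ −e1, and ignoring the common scalar factors (a numeric spot check, not a derivation):

```python
import numpy as np

theta = 0.6
k_hat = np.array([0.0, np.sin(theta), np.cos(theta)])   # e3 e^{i theta}
E_dir = np.array([0.0, np.cos(theta), -np.sin(theta)])  # from eq. (3.239)
H_dir = np.array([-1.0, 0.0, 0.0])                      # from eq. (3.240)

transversality = np.dot(E_dir, k_hat)   # should vanish
poynting_dir = np.cross(E_dir, H_dir)   # should point along k_hat
```

The electric field is transverse to the propagation direction, and E × H points along k̂, as expected for a radiating far field.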
Gauge transformations
Because the potential representation of the field is expressed as a grade 1,2 selection, the addition of scalar or pseudoscalar components to the grade selection will not alter the field. In particular, it is possible to alter the multivector potential

A → A + (∇ + (1/v) ∂/∂t) ψ,  (3.241)

where ψ is any multivector field with scalar and pseudoscalar grades, without changing the field

F → ⟨(∇ − (1/v) ∂/∂t)(A + (∇ + (1/v) ∂/∂t) ψ)⟩_{1,2}
 = F + ⟨(∇² − (1/v²) ∂²/∂t²) ψ⟩_{1,2}.

(3.242)
That last grade selection is zero, since ψ has no vector or bivector grades, demonstrating that the electromagnetic field is invariant with respect to this multivector potential transformation. It is worth looking at how such a transformation impacts each grade of the potential. Let ψ = vψ^(e) + ηvIψ^(m), where ψ^(e) and ψ^(m) are both scalar fields. The gauge transformation provides the mapping
−φ → −φ + ∂ψ^(e)/∂t  (3.243a)
vA → vA + v∇ψ^(e)  (3.243b)
IvF → IvF + Iv∇ψ^(m)  (3.243c)
−Iηφm → −Iηφm + Iη ∂ψ^(m)/∂t,  (3.243d)

or

φ → φ − ∂ψ^(e)/∂t  (3.244a)
A → A + ∇ψ^(e)  (3.244b)
F → F + ∇ψ^(m)  (3.244c)
φm → φm − ∂ψ^(m)/∂t.  (3.244d)
These have the alternation of sign that is found in the usual recipe for gauge transformation of the scalar and vector potentials. In conventional electromagnetism, the first two relations are usually found by observing that it is possible to add any gradient to the vector potential, and then finding the transformation consequences that that choice imposes on the electric field. With the grade selection formulation of the electromagnetic field, this special coupling of the field potentials comes for free without having to consider the curl of a specific field component. Note that the latter two dual transformation relationships are for magnetic sources, and are usually expressed in the frequency domain, where the gauge transformations take the form

φ → φ − jωψ^(e)  (3.245a)
A → A + ∇ψ^(e)  (3.245b)
F → F + ∇ψ^(m)  (3.245c)
φm → φm − jωψ^(m).  (3.245d)
3.12.6
Lorenz gauge
With the flexibility to make a gauge transformation of the potential, it is useful to examine the conditions for which it is possible to express the electromagnetic field without any grade selection operation. That is

F = (∇ − (1/v) ∂/∂t)(−φ + vA + ηI(−φm + vF)).  (3.246)

There should be no a priori assumption that such a field representation has no scalar, nor pseudoscalar components, which can be seen by the explicit expansion in grades

F = (∇ − (1/v) ∂t) A
 = (∇ − (1/v) ∂t)(−φ + vA + ηI(−φm + vF))
 = (1/v) ∂t φ + v∇ · A
 − ∇φ + ηvI ∇ ∧ F − ∂t A
 + v∇ ∧ A − ηI ∇φm − ηI ∂t F
 + (ηI/v) ∂t φm + ηvI ∇ · F,  (3.247)

so if this potential representation has only vector and bivector grades, it must be true that

(1/v) ∂t φ + v∇ · A = 0
(1/v) ∂t φm + v∇ · F = 0.

(3.248)

The first is the well known Lorenz gauge condition, whereas the second is the dual of that condition for magnetic sources. Should one of these conditions, say the Lorenz condition for the electric source potentials, be non-zero, then it is possible to make a potential transformation for which this condition is zero

0 ≠ (1/v) ∂t φ + v∇ · A
 = (1/v) ∂t (φ′ − ∂t ψ) + v∇ · (A′ + ∇ψ)
 = (1/v) ∂t φ′ + v∇ · A′ + v (∇² − (1/v²) ∂tt) ψ,

(3.249)

so if (1/v) ∂t φ′ + v∇ · A′ is zero, ψ must be found such that

(1/v) ∂t φ + v∇ · A = v (∇² − (1/v²) ∂tt) ψ.

(3.250)
Such a gauge transformation requires a non-homogeneous wave equation solution, or equivalently in the frequency domain, the solution of a Helmholtz equation

(jω/v) φ + v∇ · A = v (∇² + k²) ψ.

(3.251)
A similar transformation is also clearly possible to eliminate any pseudoscalar grades in eq. (3.246). Such a potential representation is desirable since Maxwell's equations for such a potential are completely decoupled

(∇² − (1/v²) ∂²/∂t²) A = J,  (3.252)

which is equivalent to precisely one non-homogeneous wave equation for each grade of source and potential

(∇² − (1/v²) ∂²/∂t²) φ = −(1/ε) ρ
(∇² − (1/v²) ∂²/∂t²) A = −µJ
(∇² − (1/v²) ∂²/∂t²) φm = −(1/µ) ρm
(∇² − (1/v²) ∂²/∂t²) F = −εM,  (3.253)

or equivalently, in the frequency domain, a forced Helmholtz equation for each grade

(∇² + k²) φ = −(1/ε) ρ
(∇² + k²) A = −µJ
(∇² + k²) φm = −(1/µ) ρm
(∇² + k²) F = −εM.  (3.254)

3.13 lorentz force.

3.13.1 GA statement.
In free space, the Lorentz force equation eq. (3.7a) can be restated in terms of F = E + Iη0 H = E + IcB as
dp/dt = q ⟨F (1 + v/c)⟩_1,

(3.255)
which puts the electric and magnetic fields on equal footing. This can be demonstrated by splitting the F(1 + v/c) multivector into its constituent grades

qF(1 + v/c) = q(E + IcB)(1 + v/c)
 = qE + (q/c) Ev + qcIB + qIBv
 = (q/c) E · v + q (E + v × B) + q (cIB + (1/c) E ∧ v) + q (IB) ∧ v.

(3.256)
The grade 0 component of this product hints at eq. (3.7b), and substitution into the vector grade selection operation of eq. (3.255) recovers eq. (3.7a) as desired.

3.13.2
Constant magnetic field.
The Lorentz force equation that determines the dynamics of a charged particle in an external field F has been restated as a multivector differential equation, but how to solve such an equation is probably not obvious. Given a constant external magnetic field, the Lorentz force equation reduces to

m dv/dt = q(IB) · v,

(3.257)
or

Ω = −qIB/m
dv/dt = v · Ω,

(3.258)
where Ω is a bivector containing all the constant factors. This can be solved by introducing a multivector integrating factor R and its reverse R† on the left and right respectively

R (dv/dt) R† = R (v · Ω) R†
 = ½ R (vΩ − Ωv) R†
 = ½ R v Ω R† − ½ R Ω v R†,

(3.259)
or

0 = R (dv/dt) R† + ½ R Ω v R† − ½ R v Ω R†.

(3.260)
Let

Ṙ = RΩ/2.

(3.261)

Since Ω is a bivector, Ṙ† = −ΩR†/2, so by the chain rule

0 = (d/dt)(R v R†).

(3.262)
The integrating factor has solution

R = e^{Ωt/2},

(3.263)

a "complex exponential", so the solution of eq. (3.257) is

v(t) = e^{−Ωt/2} v(0) e^{Ωt/2}.

(3.264)
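The claim that eq. (3.264) solves eq. (3.257) can be verified numerically, using the Pauli matrix representation of R3 GA (ek → σk, I → the imaginary unit times the identity, reversion → Hermitian conjugation); the charge, mass and field strength below are arbitrary illustrative values:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2, dtype=complex)

q, m, B = 1.0, 2.0, 3.0
Omega = -(q / m) * (1j * B * s3)      # Omega = -q I B / m, with B = B e3
w = q * B / m                         # cyclotron angular frequency

def velocity(t, v0):
    # R = exp(Omega t / 2); since Omega = -w (i s3) and (i s3)^2 = -1,
    # the exponential has the closed form cos - (i s3) sin.
    R = np.cos(w * t / 2) * one - 1j * np.sin(w * t / 2) * s3
    return R.conj().T @ v0 @ R        # v(t) = e^{-Omega t/2} v(0) e^{Omega t/2}

v0 = 1.0*s1 + 0.5*s2 + 2.0*s3        # initial velocity, in-plane and e3 parts
t, h = 0.4, 1e-6
dvdt = (velocity(t + h, v0) - velocity(t - h, v0)) / (2 * h)
vt = velocity(t, v0)
rhs = 0.5 * (vt @ Omega - Omega @ vt)  # v . Omega for vector v, bivector Omega
```

The finite difference derivative matches v · Ω, and the e3 component of the velocity (perpendicular to the Ω plane) stays fixed, consistent with the helical motion described next.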
The velocity of the charged particle traces out a helical path. Any component of the initial velocity v(0)⊥ perpendicular to the Ω plane is untouched by this rotation operation, whereas components of the initial velocity v(0)∥ that lie in the Ω plane will trace out a circular path. If Ω̂ is the unit bivector for this plane, that velocity is

v(0)∥ = (v(0) · Ω̂) Ω̂⁻¹
v(0)⊥ = (v(0) ∧ Ω̂) Ω̂⁻¹
v(t) = v(0)∥ e^{Ωt} + v(0)⊥.

(3.265)

A multivector integration factor method for solving the Lorentz force equation in constant external electric and magnetic fields can be found in [6]. Other examples, solved using a relativistic formulation of GA, can be found in [5], [8], and [9].

3.14
dielectric and magnetic media.
The GA formulation of Maxwell’s equation has only been applied in media where it has been assumed throughout that linear constitutive relationships D = E B = µH,
(3.266)
have been available. Without such assumptions the GA formalism for Maxwell's equations cannot be written as a single equation with one multivector field, but requires two equations and two multivector fields. The two multivector fields are

F = E + IvB
G = D + (I/v) H,

(3.267)

for which Maxwell's equations are

⟨(∇ + (1/v) ∂/∂t) G⟩_{0,1} = ρ − J/v
⟨(∇ + (1/v) ∂/∂t) F⟩_{2,3} = I(vρm − M).

(3.268)
Here v is a non-dimensionalizing constant with dimensions [L/T], but is otherwise unspecified. Direct expansion can be used to show that eq. (3.268) is equivalent to Maxwell's equations. Doing so for each of the grades in turn, we have

ρ = ⟨(∇ + (1/v) ∂/∂t) G⟩
 = ⟨(∇ + (1/v) ∂/∂t)(D + (I/v) H)⟩
 = ∇ · D  (3.269a)

−J/v = ⟨(∇ + (1/v) ∂/∂t) G⟩_1
 = ⟨(∇ + (1/v) ∂/∂t)(D + (I/v) H)⟩_1
 = (1/v) ∂D/∂t + (I/v) ∇ ∧ H
 = (1/v) ∂D/∂t − (1/v) ∇ × H  (3.269b)

−IM = ⟨(∇ + (1/v) ∂/∂t) F⟩_2
 = ⟨(∇ + (1/v) ∂/∂t)(E + IvB)⟩_2
 = ∇ ∧ E + I ∂B/∂t  (3.269c)

Ivρm = ⟨(∇ + (1/v) ∂/∂t) F⟩_3
 = ⟨(∇ + (1/v) ∂/∂t)(E + IvB)⟩_3
 = vI ∇ · B.  (3.269d)
After rearranging and cancelling common factors of v, I Maxwell's equations are recovered

∇ · D = ρ
∇ × H = J + ∂D/∂t
∇ × E = −M − ∂B/∂t
∇ · B = ρm.

(3.270)
One possible strategy for solving these equations is to impose an additional set of constraints on the grades in question

⟨(∇ + (1/v) ∂/∂t) G⟩_{2,3} = 0
⟨(∇ + (1/v) ∂/∂t) F⟩_{0,1} = 0,  (3.271)

so that all the grade selection filters can be cleared

(∇ + (1/v) ∂/∂t) G = ρ − J/v
(∇ + (1/v) ∂/∂t) F = I(vρm − M).

(3.272)
Each of these now separately has the form of Maxwell's equation, and could be solved separately, subject to the constraint equations. Only if F, G can be related by a constant factor, say G = εF, can these be summed directly (after non-dimensional scaling) to form Maxwell's equation. Other non-constraint strategies for solving eq. (3.268) would require additional thought and study.
Part III: BACK MATTER
A  JUSTIFYING THE CONTRACTION AXIOM.
The contraction axiom is arguably the most important of the multivector space axioms. The use of this axiom was not justified or motivated in any way. It could be argued that the subsequent use of the axiom provides justification, but that may be an unsatisfactory argument. Here I make an argument that the contraction axiom is consistent with the rules for multiplying numbers, in particular, with the rule for squaring a number. If that is the case, and the rules for multiplying numbers should be consistent with the rules for multiplying vectors in a one dimensional vector space, or with the rules for multiplying vectors in any one dimensional vector subspace, then we have a justification for the contraction axiom for multiplication of vectors from arbitrary vector spaces. Consider R¹, a one dimensional vector space, spanned by a single unit vector e₁. A point x on a real number line can be associated with the vector xe₁ in this space. This establishes an isomorphism x ↔ xe₁ between the real number line {x : x ∈ R} and this one dimensional vector space {x = xe₁ : x ∈ R}. To illustrate this isomorphism, the vectors −3e₁ and 7e₁ are plotted in fig. A.1 for both R¹ and the real number line.
Figure A.1: Equivalent vectors in R1 and on a number line.
We know how to multiply real numbers, so can we use the same implicit rules to determine how we should multiply one dimensional vectors? In particular, the rules for real numbers require that for any point x distant from the origin, we have

   (±x)² = |x|² = x².                                          (A.1)

This is the familiar rule for real number multiplication: the square of a number (positive or negative) equals the squared distance of that number from zero (i.e. numbers squared are positive).
An equivalent statement for the square of a vector x = xe₁ in R¹ is

   (±x)² = |x|² = x².                                          (A.2)

Observe that this is identical to the contraction axiom for a one dimensional Euclidean vector space. If this is the desired behaviour of a vector square in R¹, then it should also be the rule for squaring any vector lying in a one dimensional vector subspace, therefore providing a justification of the contraction axiom in general. In this sense the contraction axiom can be conceptualized as the vector space equivalent of the numeric multiplication rule "the square of a number is positive".
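The isomorphism can be made concrete with a minimal numeric sketch (my addition, not from the text; the representation and names are illustrative). A Cl(1) multivector a + b e₁ is stored as the pair (a, b), and the geometric product follows from e₁² = 1:

```python
# Minimal model of Cl(1): a multivector a + b*e1 stored as the pair (a, b).
# The geometric product follows from e1*e1 = 1 (the 1D contraction axiom):
#   (a + b e1)(c + d e1) = (ac + bd) + (ad + bc) e1

def gp(x, y):
    """Geometric product of two Cl(1) multivectors (a, b) ~ a + b*e1."""
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c)

def vector(x):
    """The 1D vector x*e1 associated with the real number x."""
    return (0.0, x)

# The square of any vector +/- x e1 is the scalar |x|^2, matching the
# real-number rule (+/- x)^2 = |x|^2.
for x in (-3.0, 7.0):
    assert gp(vector(x), vector(x)) == (x * x, 0.0)

print(gp(vector(-3.0), vector(-3.0)))  # (9.0, 0.0): a pure, positive scalar
```

The assertions exercise exactly the two plotted vectors, −3e₁ and 7e₁.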
B  DISTRIBUTION THEOREMS.
Theorem B.1: Vector-blade dot and wedge product relations.

Given a k-blade B and a vector a, the dot and wedge products have the following commutation relationships

   a · B = (−1)^{k−1} B · a
   a ∧ B = (−1)^k B ∧ a,                                       (B.1)

and can be expressed as symmetric and antisymmetric sums depending on the grade of the blade

   B ∧ a = (1/2)(Ba + (−1)^k aB)
   B · a = (1/2)(Ba − (−1)^k aB).                               (B.2)
Fixme: Prove these.

Theorem B.2: Vector-trivector dot product.

Given a vector a and a blade b ∧ c ∧ d formed by wedging three vectors, the dot product of the two can be expanded as bivectors like

   a · (b ∧ c ∧ d) = (b ∧ c ∧ d) · a
                   = (a · b)(c ∧ d) − (a · c)(b ∧ d) + (a · d)(b ∧ c).   (B.3)

The proof follows by expansion in coordinates

   a · (b ∧ c ∧ d) = Σ_{i; j≠k≠l} a_i b_j c_k d_l ⟨e_i e_j e_k e_l⟩_2.   (B.4)
The products within the grade two selection operator can be of either grade two or grade four, so only the terms where one of i = j, i = k, or i = l holds contribute. Repeated anticommutation of the orthonormal unit vectors can put each such pair adjacent, where they square to unity. Those are respectively

   ⟨e_i e_i e_k e_l⟩_2 = e_k e_l
   ⟨e_i e_j e_i e_l⟩_2 = −⟨e_j e_i e_i e_l⟩_2 = −e_j e_l
   ⟨e_i e_j e_k e_i⟩_2 = −⟨e_j e_i e_k e_i⟩_2 = +⟨e_j e_k e_i e_i⟩_2 = e_j e_k.   (B.5)
Substitution back into eq. (B.4) gives

   a · (b ∧ c ∧ d) = Σ a_i b_j c_k d_l ( (e_i · e_j)(e_k e_l) − (e_i · e_k)(e_j e_l) + (e_i · e_l)(e_j e_k) )
                   = (a · b)(c ∧ d) − (a · c)(b ∧ d) + (a · d)(b ∧ c).   (B.6)

Theorem B.2 is a specific case of the more general identity

Theorem B.3: Vector-blade dot product distribution.

A vector dotted with an n-blade distributes as
   x · (y_1 ∧ y_2 ∧ ··· ∧ y_n) = Σ_{i=1}^n (−1)^{i−1} (x · y_i) (y_1 ∧ ··· ∧ y_{i−1} ∧ y_{i+1} ∧ ··· ∧ y_n).

This dot product is symmetric (antisymmetric) when the grade of the blade the vector is dotted with is odd (even). For a proof of theorem B.3 (valid for all metrics) see [5].

Theorem B.4: Distribution of inner products.

Given two blades A_s, B_r with grades subject to s > r > 0, and a vector b, the inner product distributes according to

   A_s · (b ∧ B_r) = (A_s · b) · B_r.

The proof is straightforward, but also mechanical. Start by expanding the wedge and dot products within a grade selection operator

   A_s · (b ∧ B_r) = ⟨A_s (b ∧ B_r)⟩_{s−(r+1)}
                   = (1/2) ⟨A_s (b B_r + (−1)^r B_r b)⟩_{s−(r+1)}.       (B.7)
Solving for B_r b in

   2 b · B_r = b B_r − (−1)^r B_r b,                           (B.8)

we have

   A_s · (b ∧ B_r) = (1/2) ⟨A_s (b B_r + b B_r − 2 b · B_r)⟩_{s−(r+1)}
                   = ⟨A_s b B_r⟩_{s−(r+1)} − ⟨A_s (b · B_r)⟩_{s−(r+1)}.  (B.9)

The last term above is zero since we are selecting the s − r − 1 grade element of a multivector with grades s − r + 1 through s + r − 1, which contains no such term for r > 0. Now we can expand the A_s b multivector product, for

   A_s · (b ∧ B_r) = ⟨(A_s · b + A_s ∧ b) B_r⟩_{s−(r+1)}.               (B.10)

The latter multivector (with the wedge product factor) above has grades s + 1 − r through s + 1 + r, so this selection operator finds nothing. This leaves

   A_s · (b ∧ B_r) = ⟨(A_s · b) · B_r + (A_s · b) ∧ B_r⟩_{s−(r+1)}.     (B.11)

The first dot product term has grade s − 1 − r and is selected, whereas the wedge term has grade s − 1 + r ≠ s − r − 1 (for r > 0).
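These identities are easy to spot-check numerically. The following sketch is my addition (not from the text; all names are illustrative): it implements the Euclidean R³ geometric product on 8-component multivectors indexed by basis-blade bitmask, then verifies theorems B.1, B.2, and B.4 on random inputs:

```python
import random

# R^3 multivectors: 8 coefficients indexed by blade bitmask
# (bit 0 = e1, bit 1 = e2, bit 2 = e3; index 3 = e12, index 7 = e123).

def reorder_sign(a, b):
    """Sign from sorting the product of basis blades a, b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    """Geometric product (Euclidean metric: every e_i squares to +1)."""
    out = [0.0] * 8
    for i in range(8):
        for j in range(8):
            out[i ^ j] += reorder_sign(i, j) * x[i] * y[j]
    return out

def grades(x, ks):
    """Zero all but the listed grades of a multivector."""
    return [c if bin(i).count("1") in ks else 0.0 for i, c in enumerate(x)]

def gdot(x, kx, y, ky):
    """Dot of homogeneous grade-kx, grade-ky blades: <xy>_|kx-ky|."""
    return grades(gp(x, y), {abs(kx - ky)})

def wedge(x, kx, y, ky):
    """Wedge of homogeneous grade-kx, grade-ky blades: <xy>_(kx+ky)."""
    return grades(gp(x, y), {kx + ky})

def vec(a1, a2, a3):
    return [0.0, a1, a2, 0.0, a3, 0.0, 0.0, 0.0]

def close(x, y):
    return all(abs(p - q) < 1e-9 for p, q in zip(x, y))

def scale(s, x):
    return [s * c for c in x]

def add(*xs):
    return [sum(cs) for cs in zip(*xs)]

random.seed(1)
a, b, c, d = (vec(*(random.uniform(-1, 1) for _ in range(3))) for _ in range(4))

# Theorem B.1 for a bivector (k = 2): a . B = -B . a, and a ^ B = B ^ a.
B = wedge(b, 1, c, 1)
assert close(gdot(a, 1, B, 2), scale(-1.0, gdot(B, 2, a, 1)))
assert close(wedge(a, 1, B, 2), wedge(B, 2, a, 1))

# Theorem B.2: a . (b ^ c ^ d) = (a.b)(c^d) - (a.c)(b^d) + (a.d)(b^c).
T = wedge(b, 1, wedge(c, 1, d, 1), 2)
lhs = gdot(a, 1, T, 3)
rhs = add(scale(gp(a, b)[0], wedge(c, 1, d, 1)),
          scale(-gp(a, c)[0], wedge(b, 1, d, 1)),
          scale(gp(a, d)[0], wedge(b, 1, c, 1)))
assert close(lhs, rhs)

# Theorem B.4 with s = 2, r = 1: A_2 . (a ^ b) = (A_2 . a) . b.
A2 = wedge(c, 1, d, 1)
assert close(gdot(A2, 2, wedge(a, 1, b, 1), 2),
             gdot(gdot(A2, 2, a, 1), 1, b, 1))

print("theorems B.1, B.2, B.4 verified")
```

Theorem B.3 in R³ has at most n = 3, so the B.2 check above already covers its largest case.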
C  GA ELECTRODYNAMICS IN THE LITERATURE.
The notation and nomenclature used to express Maxwell's equation in the GA literature, much of which has a relativistic focus, has not been standardized. Here is an overview of some of the variations that will be encountered in readings.

Space Time Algebra (STA). [5] Maxwell's equation is written

   ∇F = J
   F = E + IB
   I = γ_0 γ_1 γ_2 γ_3
   J = γ_µ J^µ = γ_0 (ρ − J)
   ∇ = γ^µ ∂_µ = γ_0 (∂_t + ∇).                                (C.1)

STA uses a relativistic basis {γ_µ} and its dual {γ^µ}, for which γ_0² = −γ_k² = 1, k ∈ {1, 2, 3}, and γ_µ · γ^ν = δ_µ^ν. Spatial vectors are expressed in terms of the Pauli basis σ_i = γ_i γ_0, which are bivectors that behave as Euclidean basis vectors (squaring to unity, and all mutually anticommutative). F is called the electromagnetic field strength (and is not a 1,2 multivector, but a bivector), ∇ is called the (spacetime) vector derivative operator, ∇ (bold) the three-dimensional vector derivative operator, and J is called the spacetime current (and is a vector, not a multivector). The physicist's "natural units" c = ε_0 = µ_0 = 1 are typically used in STA. The d'Alembertian in STA is □ = ∇² = ∂_t² − ∇², although the earliest formulation of STA [7] used □ for the vector derivative. Only spatial vectors are in bold, and all other multivectors are non-bold. STA is inherently relativistic, and can be used to obtain many of the results in this book more directly. STA can easily be related to the tensor formulation of electrodynamics. Maxwell's equations as expressed in eq. (3.74) can be converted to their STA form (in SI units) by setting e_i = γ_i γ_0 and by left multiplying both sides by γ_0.

Algebra of Physical Space (APS). [4]
Maxwell's equation is written as

   ∂̄ F = (1/(ε_0 c)) j̄
   F = E + icB
   i = e_123
   ∂̄ = (1/c) ∂_t − ∇
   j̄ = ρc + j.                                                 (C.2)

F is called the Faraday, ∂̄ the gradient, j̄ the current density, and 0,1 multivectors are called paravectors. A Euclidean spatial basis {e_1, e_2, e_3} is used, and e_0 = 1 is used as the time-like basis "vector". In APS, where e_0 = 1 is not a vector grade object, a standard GA dot product for which e_µ · e_ν = δ_µν cannot be used to express proper length. APS instead uses inner products based on grade selection from the multivector z z̄, where z̄ is the Clifford conjugate of z, the operation that changes the sign of any vector and bivector grades of a multivector z. This conjugation operation is also used to express Lorentz transformations, and is seen in Maxwell's equation, operating on the current density and gradient. The d'Alembertian is written as □ = ∂̄ ∂ = (1/c²) ∂_t² − ∇². While APS is only cosmetically different from eq. (3.74), the treatment in [4] is inherently relativistic.
Jancewicz. [11] Maxwell's equation in linear isotropic media is written as

   D f + e D ln √ε + b̂ D ln √µ = j̃
   D = ∇ + √(εµ) ∂/∂t
   f = e + b̂
   e = √ε E
   b̂ = (1/√µ) IB
   I = e_123
   j̃ = (1/√ε) ρ − √µ j.                                        (C.3)

Jancewicz works with fields that have been re-dimensionalized to the same units, and uses an overhat bold notation for bivectors (which are sometimes called volutors). D is called the cliffor differential operator, f the electromagnetic cliffor, and j̃ the density of electric sources. In media for which µ, ε are constant in space and time, his Maxwell equation reduces to D f = j̃. The d'Alembertian is written as □ = D* D = ∇² − εµ ∂_t², where D* = ∇ − √(εµ) ∂_t.

Unlike Baylis, who uses a "paravector" approach extensively for his relativistic treatment, this book ends with a relativistic treatment using STA.
INDEX
e_ij, e_ijk, ..., 15
R^N, 4
0-vector, 8
1-vector, 8
2-vector, 9
3-vector, 10
anticommutation, 18
antisymmetric sum, 49
area element, 77, 80
basis, 5
Biot-Savart law, 115
bivector, 9
blade, 29
boundary values, 145
circular polarization, 138
colinear vectors
  wedge, 28
commutation, 16, 47
complex exponential, 21
complex imaginary, 19, 30
complex plane, 109
complex power, 133
conjugation, 47
convolution, 92, 106
cross product, 26
curvilinear coordinates, 62, 66, 68
cylindrical coordinates, 66
delta function, 92
determinant
  wedge product, 26
differential form, 70, 76, 77, 80
dimension, 5
divergence theorem, 84
dot product, 5, 24, 35
dual, 38
electric charge density, 101
electric current density, 101
electrostatics, 104
enclosed charge, 105
enclosed current, 114
enclosed current density, 113
energy density, 127
energy flux, 127
Euclidean space, 7
Euler's formula, 21
far field, 153
Fundamental Theorem of Geometric Calculus, 84
gauge transformation, 156
grade, 12, 29
grade selection, 14, 25
gradient, 66
  Green's function, 88
  spherical, 69
Green's function, 88, 106
  gradient, 115
Green's theorem, 72, 84
Helmholtz's theorem, 92
Jones vector, 137
k-vector, 12
Laplacian, 88, 92, 107
left circular polarization, 138
length, 6
line charge, 108, 109
linear combination, 4
linear dependence, 4
linear independence, 5
linear system, 52
magnetic charge density, 101
magnetic current density, 101
magnetostatics, 112
Maxwell's equation, 118
momentum density, 127
multivector, 13
multivector dot product, 32
multivector potential, 148
multivector space, 13
multivector wedge product, 36
normal, 7
number line, 167
oriented volume element, 75
orthonormal, 7
parallelogram, 45
plane wave, 135, 137
Poisson equation, 107
polar representation, 25
polarization, 137
potential, 148
Poynting theorem, 131
Poynting vector, 127
projection, 39
pseudoscalar, 15, 20, 21, 30
  spherical, 136
reciprocal basis, 66
reciprocal frame, 57, 61
reflection, 51
rejection, 39
reverse, 28
right circular polarization, 138
rotation, 20, 49
scalar, 8
scalar selection, 35
source free, 135
span, 5
spherical coordinates, 68, 72, 109
standard basis, 7
Stokes' theorem, 75, 77, 84
stress tensor, 127
subspace, 5
symmetric sum, 49
time harmonic, 103, 135
time independence, 104, 112
toroid, 70
trivector, 10
unit pseudoscalar, 15
unit vector, 7
vector, 8
vector inverse, 42
vector potential, 113
vector product, 23
vector space, 3
volume parameterization, 80
wave equation, 120
wedge factorization, 44
wedge product, 24
  linear solution, 52
BIBLIOGRAPHY
[1] Rafal Ablamowicz and Garret Sobczyk. Lectures on Clifford (geometric) algebras and applications, chapter Introduction to Clifford Algebras. Springer, 2004. (Cited on page 44.)

[2] Constantine A Balanis. Advanced engineering electromagnetics. Wiley New York, 1989. (Cited on page 138.)

[3] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005. (Cited on page 149.)

[4] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science & Business Media, 2004. (Cited on pages 31, 173, and 174.)

[5] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003. (Cited on pages 85, 88, 161, 170, and 173.)

[6] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999. (Cited on pages 131, 140, and 161.)

[7] David Hestenes. Space-time algebra, volume 1. Springer, 1966. (Cited on page 173.)

[8] David Hestenes. Proper dynamics of a rigid point particle. Journal of Mathematical Physics, 15(10):1778–1786, 1974. (Cited on page 161.)

[9] David Hestenes. Proper particle mechanics. Journal of Mathematical Physics, 15(10):1768–1777, 1974. (Cited on page 161.)

[10] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975. (Cited on pages 34 and 130.)

[11] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988. (Cited on page 174.)

[12] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics, chapter Appendix I. World Scientific, 1988. (Cited on page 121.)

[13] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689. (Cited on page 104.)
[14] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012. (Cited on pages 65, 76, and 85.)

[15] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987. (Cited on page 72.)

[16] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221–231, 2011. URL http://arxiv.org/abs/0809.4526. (Cited on page 85.)

[17] Bartel Leendert Van Der Waerden, E. Artin, and E. Noether. Modern algebra. Frederick Ungar, 1943. (Cited on page 14.)