Appendix A Vectors and Vector Spaces

A vector can be defined in many ways: it can be interpreted as a collection of real or complex numbers arranged in a row or in a column; it can represent directed, i.e., geometrical, quantities such as forces, moments, velocities, etc.; or it can be seen as an element of a linear vector space, with many algebraic properties that can be put into correspondence with geometrical properties. In the present notes we will introduce some of the above definitions, without losing sight of our aim, that is, to empower the students to use vectors as a basic tool for dynamical systems modelling; so, while we will avoid any superfluous information, we will try to be formally correct, omitting any theorem or proof apart from those that give an insight into the underlying physical interpretation.

Let us start with the most common definition: a vector is a collection of n real or complex quantities arranged in a single column

v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}

So a vector can also be seen as a particular matrix, with dimension n × 1. As anticipated in Section 1.3, if we need to reduce the printed space, we can write the vector as a transposed column, i.e., as a row

v = [ v1 v2 · · · vn ]T

In broader terms, a vector is an element of a vector space; the following Section will review some concepts and definitions on vector spaces.


164 Basilio Bona - Dynamic Modelling

A.1 Vector spaces

A vector space is a mathematical structure whose elements, denoted by v, obey a number of rules that qualify the properties of vector spaces. These properties are well suited to describe a large number of phenomena characterizing many sectors of physics and engineering; for further details see [8].

Definition Given a generic field¹ F, a vector space is a set of elements, called vectors, that satisfy the following axiomatic properties:

1. A vector sum operation, denoted by +, exists, such that {V(F); +} is a commutative (or abelian) group; the identity element is denoted by 0.

2. For any α ∈ F and any v ∈ V(F), a vector αv ∈ V(F) exists; moreover, for any α, β ∈ F and any v, w ∈ V(F) the following are true:

   (a) associative property with respect to the product by a scalar: α(βv) = (αβ)v
   (b) existence of the identity with respect to the product by a scalar: 1(v) = v, ∀v
   (c) distributive property with respect to the vector sum: α(v + w) = αv + αw
   (d) distributive property with respect to the product by a scalar: (α + β)v = αv + βv

A vector space is said to be real if F = R, and complex if F = C. A classical example of a real vector space is that whose elements are collections of n real numbers, Vn(R) = Rn; in this case an element, i.e., a vector, is represented by its components

v = [ v1 v2 · · · vn ]T,  v ∈ Rn,  vi ∈ R

1See Appendix C for this and other abstract algebra structures.
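As a quick sanity check, the axioms (a)–(d) can be verified numerically on R³; the following sketch uses plain Python lists for vectors (all values are illustrative):

```python
# Vectors in R^3 as plain Python lists; the vector sum and the product
# by a scalar are the two operations required by the vector space axioms.
def vsum(v, w):
    return [vi + wi for vi, wi in zip(v, w)]

def smul(alpha, v):
    return [alpha * vi for vi in v]

alpha, beta = 2.0, -3.0
v, w = [1.0, 2.0, 3.0], [4.0, -5.0, 6.0]

# (a) associativity w.r.t. the product by a scalar: alpha*(beta*v) = (alpha*beta)*v
assert smul(alpha, smul(beta, v)) == smul(alpha * beta, v)
# (b) identity w.r.t. the product by a scalar: 1*v = v
assert smul(1.0, v) == v
# (c) distributivity over the vector sum
assert smul(alpha, vsum(v, w)) == vsum(smul(alpha, v), smul(alpha, w))
# (d) distributivity over the scalar sum
assert smul(alpha + beta, v) == vsum(smul(alpha, v), smul(beta, v))
```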


Since the properties (a)–(d) induce a linear structure on the space V, this is also termed a linear vector space or, more simply, a linear space. The term "vector" therefore takes a more general meaning and can denote abstract mathematical entities beyond the simple collection of n numbers: for instance, as shown in [30], infinite sequences of real or complex numbers, continuous functions taking their values in the interval [a, b], polynomials with complex coefficients defined in [a, b], oriented segments ⃗v, and many more.

As one can notice, among the vector space axioms no product operator is defined. For this reason the vector space structure, i.e., the set of properties deriving from the above axioms, does not allow one to define such geometrical concepts as angle or magnitude, which are implicit in a purely geometrical definition of the directed quantity ⃗v. To allow a definition of these geometrical concepts it is necessary to endow the vector space with a quadratic structure, also known as a metric structure or simply a metric. The introduction of a metric structure on a vector space generates an algebra that makes it possible to perform a number of geometric computations involving the vectors. The most common metric is that induced by the scalar product definition. Before defining this product, it is convenient to review some properties of linear functions.

A.2 Linear functions and operators

Given two vector spaces U(F) and V(F), that for simplicity we assume defined on the same field F, a function L : U → V that transforms elements of U into elements of V is said to be linear if for any couple of vectors a, b ∈ U and any scalar λ ∈ F the following axioms hold true:

L(a + b) = L(a) + L(b) = La + Lb
L(λa) = λL(a) = λLa    (A.1)

A function L : U → U is also called a linear operator, linear transformation, linear application or endomorphism. The set of all linear functions L : U → V is a linear vector space L(F). Moreover, the set of all linear functions L : U → U is a ring² denoted by the symbol End(U). We also point out that any linear transformation from U to V can be represented by a matrix A ∈ Rm×n, and vice versa, where m and n are the dimensions of V and U.

2See Appendix ??.

TO BE COMPLETED
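A minimal sketch of the matrix representation of a linear map, assuming an arbitrary 3 × 2 matrix A as the representation of some L : R² → R³ (the values of A are illustrative); the linearity axioms (A.1) then hold by construction:

```python
# A linear map L : R^2 -> R^3 represented by a 3x2 matrix A (illustrative values).
A = [[1.0, 2.0],
     [0.0, -1.0],
     [3.0, 4.0]]

def L(u):
    # matrix-vector product: (A u)_i = sum_j A[i][j] * u[j]
    return [sum(Ai[j] * u[j] for j in range(len(u))) for Ai in A]

a, b, lam = [1.0, 2.0], [-3.0, 0.5], 4.0

# L(a + b) = L(a) + L(b)
lhs = L([ai + bi for ai, bi in zip(a, b)])
rhs = [x + y for x, y in zip(L(a), L(b))]
assert lhs == rhs
# L(lam * a) = lam * L(a)
assert L([lam * ai for ai in a]) == [lam * x for x in L(a)]
```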


Linear independence – Base – Dimension

Given n vectors ai ∈ V(F), a vector v ∈ V(F) is said to be a linear combination of {a1, a2, . . . , an} if it can be written as

v = v1a1 + v2a2 + · · · + vnan    (A.2)

with vi ∈ F. The vector set {a1, a2, . . . , an} is said to be linearly independent if no ai can be written as a linear combination of the remaining ones aj, j ≠ i. In other words, the only solution to the equation

v1a1 + v2a2 + · · · + vnan = 0

is that with all v1 = v2 = · · · = vn = 0. If this is not the case, we say that ai is linearly dependent on the other vectors. In the linear combination v = v1a1 + v2a2 + · · · + vnan, if all vectors ai are linearly independent, then the scalars vi are unique and take the name of coordinates or components of v.

The linear combinations of k ≤ n linearly independent vectors {a1, a2, . . . , ak} form a subspace S(F) ⊆ V(F); this subspace is said to be spanned by {a1, a2, . . . , ak}. Any set of n linearly independent vectors {a1, a2, . . . , an} forms a basis in V. All bases in V have the same number of elements (in our case n); this number is the dimension of the space and is denoted as dim(V).

A note on notations

Often, in modern physics textbooks, such as those on relativity, the relation in (A.2),

v = v1a1 + v2a2 + · · · + vnan    (A.3)

is simply written as

v = vi ai

which makes use of the Einstein convention for the sum:

vi ai def= Σᵢ₌₁ⁿ vi ai

Indeed, in the tensorial representation a vector a can be expressed in two different ways, adopting the Einstein convention:

a = aⁱ eᵢ   or   a = aᵢ eⁱ


where aⁱ are the so-called contravariant components of a and aᵢ are the so-called covariant components of a, while eᵢ are the covariant basis vectors and eⁱ are the contravariant basis vectors. In these notes we will adopt the column vector notation introduced previously, but we suggest that the reader become familiar with the different notations adopted in different contexts.
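Returning to linear independence: the condition above can be tested mechanically, since {a1, . . . , an} is independent exactly when the matrix built from the vectors has full rank. A minimal sketch in Python (the rank routine and the example vectors are illustrative, not part of the original text):

```python
def rank(rows, tol=1e-12):
    # Rank of a matrix (list of rows) via Gaussian elimination with partial pivoting.
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = max(range(r, len(m)), key=lambda i: abs(m[i][c]), default=None)
        if pivot is None or abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

a1, a2, a3 = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0]
# a3 = a1 + a2, so {a1, a2, a3} is linearly dependent: rank < 3
assert rank([a1, a2, a3]) == 2
# replacing a3 with a vector outside span{a1, a2} gives a basis of R^3
assert rank([a1, a2, [0.0, 0.0, 5.0]]) == 3
```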

A.3 Vectors and their interpretation

Once the concept of vector space has been defined, it is important to briefly discuss the interpretation associated with vectors, since this can be useful to represent both geometrical and physical entities for modelling purposes.

Geometrical vectors

A geometrical vector p represents a point P in space. The point P is an abstraction that often, but not always, requires a representation. Vector representations are given with respect to a reference frame. If P ∈ R3 then its representation is a vector

p^a = \begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix}_a \equiv \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}_a

where p_i is the i-th coordinate of the vector with respect to a chosen reference frame Ra.

Affine geometry

To treat points as vectors in linear vector spaces implies the definition of a zero point (the origin), i.e., a point with particular privileged characteristics. Since, in many applications, this is not required, a particular geometry that is "origin-free" has been set up. It is called affine geometry and is defined on affine spaces. Affine geometry is at the base of projective geometry and perspective transforms, as well as homogeneous vectors and homogeneous transformations. It will not be considered in the present context; the interested reader can find additional details in [14].

Geometric approach

TO BE COMPLETED



According to the geometric approach, a vector v is given by an oriented segment ⃗v def= AB⃗ in the reference frame R(O, x, y, z). Orthogonally projecting the oriented segment AB⃗ along the three coordinate axes x, y, z, and comparing the lengths of the projections with the unit segments OX, OY and OZ, one obtains three real numbers vx, vy and vz, called the coordinates or components³ of the vector v in R. A component is positive if the projection of AB⃗ agrees with the positive direction of the axes, defined by the segments OX⃗, OY⃗, OZ⃗. The vector v is therefore an oriented quantity, endowed with a magnitude (called the modulus or norm of the vector), a direction (the direction of the line on which the segment AB⃗ lies) and a sense (from A to B).

The geometric approach requires a priori the concept of length, which in turn requires a measurement procedure by comparison between parallel segments, and the concept of angle; in general it is founded on the axioms of Euclidean geometry, which we report here for completeness:

1. From any point to any other point it is possible to draw a straight line. Euclid does not explicitly postulate that a unique straight line passes through two points, but tacitly assumes that this is so.

2. A straight line segment can be extended indefinitely in a straight line.

3. Around any chosen centre it is possible to draw a circle with any chosen radius.

4. All right angles are equal. Euclid gave the following definition of a right angle: if a straight line r standing on another straight line s forms with it adjacent angles equal to each other, each of the two angles is a right angle. Postulate 4 is necessary to guarantee that the angles obtained with another construction of this kind, involving lines r′ and s′, are equal to the previous ones. Postulate 4 shows a remarkable logical refinement on Euclid's part and essentially states that the plane is uniform (in the sense that the above construction always yields the same angles, in whatever part of the plane it is performed). This fact does not hold in the non-Euclidean geometries (spherical, elliptic, hyperbolic geometry).

5. In a plane, through a point outside a straight line one and only one parallel to the given line can be drawn (hence two lines are said to be parallel when they do not intersect). Dropping Euclid's fifth postulate gave rise to the non-Euclidean geometries: if through a point there exist no lines that do not intersect a given line, we have spherical geometry; if instead there exist infinitely many lines that do not intersect a given line, we have hyperbolic geometry.

³Some authors prefer to distinguish the two terms: coordinates of a point, components of a vector; but this is a subtlety that we will not follow in the text, using the two terms interchangeably.


Analytic approach

According to the analytic approach, a real vector v is an abstract element belonging to the vector space V. For the definitions related to vector spaces, see Chapter ??. In the following we will refer to the most common vector space, namely that formed by n-tuples of real numbers, Vn(R) = Rn, and in particular to the case n = 3, in order to describe geometric points and oriented segments of three-dimensional Cartesian space. We adopt the usual convention to indicate a vector:

v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix};  v ∈ R3;  v1, v2, v3 ∈ R

In the following we implicitly assume the use of column vectors only,

v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}  or  v = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix}

When it is necessary to save space, we may write them in a row, using the transposition operator T, i.e., vT = (v1 v2 v3) or v = (v1 v2 v3)T.

A.4 Vector products

We have seen in Section A.1 that the definition of a vector space does not include a product operator; nevertheless it is often necessary to endow the space with a product operator and a quadratic structure, or metric, able to define the "magnitude" or the "measure" of a vector. One of the most common metrics is that derived from a scalar (or inner) product between two vectors.

A.4.1 Scalar product and vector norm

Given two real vectors a, b ∈ V(R), the scalar product, also called inner product or dot product, a · b is a real quantity that can be defined either in a geometrical or in an algebraic way (componentwise):

Geometrical definition:  a · b def= ∥a∥ ∥b∥ cos θ    (A.4)

Algebraic definition:  a · b def= Σ_k a_k b_k = aTb    (A.5)


where ∥a∥ is the length of the oriented segment associated with the vector a and θ (0° ≤ θ ≤ 180°) is the angle between the two oriented segments a and b. Some textbooks use the symbol ⟨a, b⟩ for the scalar product. The dot product has the following properties:

  • symmetry or commutativity:
    a · b = b · a    (A.6)

  • distributivity with respect to vector addition:
    (a + b) · c = a · c + b · c    (A.7)

  • associativity with respect to multiplication by a scalar:
    (sa) · b = s(a · b) = a · (sb)    (A.8)

  • positive definiteness:
    a · a > 0 for a ≠ 0    (A.9)

The dot product is not associative with respect to itself, i.e.,

(a · b) · c ≠ a · (b · c)    (A.10)

Indeed, relation (A.10) has no meaning, as the dot product a · b is a scalar, and the dot product of a scalar by a vector is undefined. The introduction of the dot product allows one to derive the vector norm as

∥v∥ = (v · v)^{1/2} = (vTv)^{1/2} = √(v1² + v2² + v3²)    (A.11)

and also provides a definition of orthogonality: two vectors a and b are said to be orthogonal if and only if their dot product is zero:

a · b = 0    (A.12)

The angle between two vectors can be derived from the dot product: given two nonzero vectors a and b, the angle θ between them is defined as

θ = cos⁻¹( (a · b) / (∥a∥ ∥b∥) )    (A.13)

The norm defined in (A.11) is called the Euclidean norm, and the spaces endowed with the Euclidean norm are called Euclidean spaces and denoted with the symbol En.
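The agreement between the geometrical definition (A.4) and the algebraic one (A.5), together with (A.11)–(A.13), can be checked numerically; a minimal sketch with illustrative vectors:

```python
import math

def dot(a, b):
    # algebraic dot product (A.5)
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(v):
    # Euclidean norm (A.11)
    return math.sqrt(dot(v, v))

a, b = [3.0, 0.0, 0.0], [1.0, 1.0, 0.0]

assert dot(a, b) == 3.0
assert norm(a) == 3.0
# angle between a and b from (A.13): 45 degrees
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
assert abs(theta - math.pi / 4) < 1e-12
# geometrical definition (A.4) agrees with the algebraic one
assert abs(norm(a) * norm(b) * math.cos(theta) - dot(a, b)) < 1e-12
# orthogonality (A.12)
assert dot([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]) == 0.0
```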


Other norms

Apart from the Euclidean norm, other norms have been defined and are used for different applications; a norm must fulfil the following properties, called the norm axioms:

1. ∥v∥ > 0 for any v ≠ 0; ∥v∥ = 0 if and only if v = 0;
2. ∥a + b∥ ≤ ∥a∥ + ∥b∥ (triangle inequality);
3. ∥αv∥ = |α| ∥v∥ for any scalar α and any vector v.

Generalizing the Euclidean norm, we can define the p-norm ∥·∥p as

∥x∥p def= ( Σ_k |x_k|^p )^{1/p}

Among the most used p-norms, we recall:

  • 2-norm or Euclidean norm (p = 2): ∥x∥2 def= √(xTx)
  • 1-norm or absolute value norm (p = 1): ∥x∥1 def= Σ_k |x_k|
  • ∞-norm or max-norm (p = ∞): ∥x∥∞ def= max_k |x_k|

A unit vector is a vector having unit (Euclidean) norm:

u def= x / ∥x∥,  ∥u∥ = 1.
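The p-norms above can be sketched in a few lines of Python (the helper `p_norm` and the test vector are illustrative):

```python
import math

def p_norm(x, p):
    # p-norm as defined above; p = float("inf") gives the max-norm
    if p == float("inf"):
        return max(abs(xk) for xk in x)
    return sum(abs(xk) ** p for xk in x) ** (1.0 / p)

x = [3.0, -4.0, 0.0]
assert p_norm(x, 2) == 5.0              # Euclidean norm
assert p_norm(x, 1) == 7.0              # absolute value norm
assert p_norm(x, float("inf")) == 4.0   # max-norm

# the unit vector along x has unit Euclidean norm
u = [xk / p_norm(x, 2) for xk in x]
assert abs(p_norm(u, 2) - 1.0) < 1e-12
```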

A.4.2 Other vector products

The scalar product acts on two vectors producing a scalar value, while in general we may be interested in some other product operator that acts on two vectors and produces another vector. We will limit our study of vector products to 3D vectors, i.e., v ∈ R3.

Cross product

Given two generic vectors x, y ∈ R3,

x = [ x1 x2 x3 ]T  and  y = [ y1 y2 y3 ]T,

the vector product (or cross product or, sometimes, outer product) is defined as

z = x × y def= \begin{bmatrix} x_2 y_3 - x_3 y_2 \\ x_3 y_1 - x_1 y_3 \\ x_1 y_2 - x_2 y_1 \end{bmatrix}    (A.14)


Considering the skew-symmetric⁴ matrix S(x), this definition can also be written as

z = x × y = \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix} y = S(x) y

The cross product norm is

∥z∥ = ∥x∥ ∥y∥ sin θ    (A.15)

where θ is the minimum angle between x and y measured on the plane defined by the two vectors; z is orthogonal to this plane and its positive direction is defined by the Right Hand Rule⁵ (RHR, see Figure A.1).

Figure A.1: Right Hand Rule.

The cross product fulfils the following properties:

  • The product of a vector by itself is always zero:
    x × x = 0

  • The product is anticommutative:
    x × y = −(y × x)

  • The product is distributive with respect to the sum:
    x × (y + z) = (x × y) + (x × z)

  • The product is distributive with respect to the product with a scalar:
    α(x × y) = (αx) × y = x × (αy)

  • The product is non-associative:
    x × (y × z) ≠ (x × y) × z

⁴Skew-symmetric matrix properties are detailed in Section ??.
⁵RHR: if the four fingers of the right hand are curled so as to simulate a rotation bringing x on y, then the thumb is directed along x × y.
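The definition (A.14), its skew-symmetric matrix form and the properties above can be verified numerically; a minimal sketch with illustrative vectors:

```python
def cross(x, y):
    # cross product as in (A.14)
    return [x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0]]

def S(x):
    # skew-symmetric matrix such that S(x) y = x × y
    return [[0.0, -x[2], x[1]],
            [x[2], 0.0, -x[0]],
            [-x[1], x[0], 0.0]]

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(3)) for Mi in M]

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
z = cross(x, y)
assert z == [-3.0, 6.0, -3.0]
# S(x) y reproduces the cross product
assert matvec(S(x), y) == z
# anticommutativity and x × x = 0
assert cross(y, x) == [3.0, -6.0, 3.0]
assert cross(x, x) == [0.0, 0.0, 0.0]
# z is orthogonal to both factors
assert sum(zi * xi for zi, xi in zip(z, x)) == 0.0
```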

Scalar triple product

Given three 3D vectors x, y, z, the scalar triple product is defined as

p def= z · (x × y)

where p is a real number. A geometrical interpretation of this product is possible: its absolute value |p| = |x · (y × z)| is equal to the volume of the parallelepiped generated by the three vectors, since ∥y × z∥ is the area of the base and ∥x∥ |cos θ| is the height, as shown in Figure A.2.

Figure A.2: A geometrical interpretation of the scalar triple product.

Another property is the cyclicity rule

z · (x × y) = x · (y × z) = y · (z × x)    (A.16)

This property is easy to demonstrate given the volume interpretation (base area × height). Furthermore, the following relation holds

x · (y × z) = (x × y) · z

which follows from the dot product commutativity; notice the position of the parentheses: indeed, writing (x · y) × z makes no sense, since it would represent a cross product between a scalar and a vector.

Vectorial triple product

Given three 3D vectors x, y, z, the vectorial triple product satisfies

x × (y × z) = (x · z) y − (x · y) z
(x × y) × z = (x · z) y − (y · z) x    (A.17)

Given four vectors x, y, z, w, the following relations hold

(x × y) · z = −(z × y) · x
(x × y) · (z × w) = (x · z)(y · w) − (x · w)(y · z)
x × (y × (z × w)) = y (x · (z × w)) − (x · y)(z × w)    (A.18)
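The identities (A.16)–(A.18) lend themselves to a direct numerical check; a minimal sketch with illustrative vectors:

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(x, y):
    return [x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0]]

x, y, z = [1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [2.0, -1.0, 0.0]

# cyclicity rule (A.16)
p = dot(z, cross(x, y))
assert p == dot(x, cross(y, z)) == dot(y, cross(z, x))

# vectorial triple product (A.17): x × (y × z) = (x·z) y − (x·y) z
lhs = cross(x, cross(y, z))
rhs = [dot(x, z)*yi - dot(x, y)*zi for yi, zi in zip(y, z)]
assert lhs == rhs

# Lagrange identity from (A.18)
w = [1.0, 1.0, -2.0]
assert dot(cross(x, y), cross(z, w)) == dot(x, z)*dot(y, w) - dot(x, w)*dot(y, z)
```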


The cross product, according to (A.14), can be defined only in R3; generalizing it to n-dimensional vector spaces with n > 3 requires the introduction of Clifford algebras [29], which is beyond the scope of these notes.

A.4.3 Dyadic product

Given two vectors x, y ∈ Rn, we can define the dyadic product

x ⊗ y def= x yT = D(x, y) = \begin{bmatrix} x_1 y_1 & \cdots & x_1 y_n \\ \vdots & \ddots & \vdots \\ x_n y_1 & \cdots & x_n y_n \end{bmatrix}

The dyadic product has the following properties:

(αx) ⊗ y = x ⊗ (αy) = α(x ⊗ y)
x ⊗ (y + z) = x ⊗ y + x ⊗ z
(x + y) ⊗ z = x ⊗ z + y ⊗ z
(x ⊗ y) z = x (y · z)
xT(y ⊗ z) = (x · y) zT

Some texts call this product the external product, adding "noise" to the nomenclature, since the external product is another type of product, introduced by the German mathematician H.G. Grassmann. The dyadic product is non-commutative: since D(x, y) = DT(y, x), it follows that x ⊗ y ≠ y ⊗ x in general. The matrix D obtained from the dyadic product always has rank ρ(D) = 1 for nonzero x and y, whatever the dimension n of the vectors involved. A useful property linking the dyadic product to the vectorial triple product is the following:

x × (y × z) = [(x · z) I − z ⊗ x] y
(x × y) × z = [(x · z) I − x ⊗ z] y

Notice that, while the cross products in the left-hand terms are defined only in R3, the products in the right-hand terms can be computed independently of the dimension n of the space.
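A short numerical sketch of the dyadic product and its rank-one character (vectors are illustrative; rank 1 is checked through the vanishing of all 2 × 2 minors):

```python
def outer(x, y):
    # dyadic product D(x, y) = x y^T
    return [[xi * yj for yj in y] for xi in x]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def matvec(M, v):
    return [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]

x, y, z = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [-1.0, 0.0, 2.0]

D = outer(x, y)
# (x ⊗ y) z = x (y · z)
assert matvec(D, z) == [xi * dot(y, z) for xi in x]
# D(x, y) = D(y, x)^T, hence x ⊗ y != y ⊗ x in general
Dt = [[D[j][i] for j in range(3)] for i in range(3)]
assert Dt == outer(y, x)

# every 2x2 minor of a dyad vanishes, i.e. rank(D) = 1 for nonzero x, y
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                assert D[i][k] * D[j][l] - D[i][l] * D[j][k] == 0.0
```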

A.4.4 Other vector products

Since the scalar product does not produce a vector, and the cross product is non-commutative but also non-associative, it became necessary to define a product ab between vectors possessing the majority of the properties of an ordinary product, i.e., the associative and distributive properties, while commutativity was not deemed essential.


Moreover, the norm equality must be true, i.e., ∥ab∥ = ∥a∥ ∥b∥. Many products were defined with these properties, but the two most important, with applications in computer vision, kinematics, and quantum physics, are the Hamilton product and the Clifford product.

Hamilton product

The Hamilton product finds its justification in the context of the quaternion product. The product c = ab is axiomatically defined as

c = ab def= −a · b + a × b

At present this product has only a historical significance, since it has the unpleasant characteristic of producing a negative number for the product of a vector by itself:

aa = −a · a + a × a = −∥a∥²

Clifford product

As reported in [43], a vector product satisfying the same axioms as the product between real numbers (distributivity, associativity and commutativity) does not exist for vector spaces with dimension n ≥ 3. Leaving aside the commutativity axiom, it is possible to define the Clifford product (from William Clifford, 1845–1879). It allows the extension of the cross (external) product to vector spaces Rn, n > 3. Indeed, the Clifford product had already been introduced some years before by Hermann G. Grassmann, the inventor of the exterior algebra. To define it, it is necessary to introduce the concept of bivector.

Figure A.3: W. Clifford and H.G. Grassmann.
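The Hamilton product above can be sketched by representing the result as a (scalar part, vector part) pair; a minimal illustration with arbitrary vectors:

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def hamilton(a, b):
    # c = ab = -a.b + a x b, returned as a (scalar part, vector part) pair
    return (-dot(a, b), cross(a, b))

a, b = [1.0, 2.0, 2.0], [0.0, 3.0, -4.0]

s, v = hamilton(a, b)
assert s == 2.0            # -a.b = -(0 + 6 - 8)
assert v == cross(a, b)

# the product of a vector by itself is minus its squared norm
s_aa, v_aa = hamilton(a, a)
assert s_aa == -9.0        # -(1 + 4 + 4) = -||a||^2
assert v_aa == [0.0, 0.0, 0.0]
```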


Bivectors

Let us start in R2, with two vectors a = a1 i + a2 j and b = b1 i + b2 j; the Clifford product is defined as

ab = a1b1 + a2b2 + (a1b2 − a2b1) e12 = a · b + (a1b2 − a2b1) e12

where e12 is a bivector and shall be understood as the signed unit area of the parallelogram with sides i and j. This is analogous to the cross product i × j, except that the latter is an axial vector orthogonal to the i, j plane, while the former is a so-called patch of that same plane, as shown in Figure A.4.

Figure A.4: A graphical representation of the bivector e12 in R2 compared with the cross product i × j ∈ R3.

The Clifford product is often written using the symbols introduced by Grassmann, i.e.,

ab = a · b + a ∧ b

where a ∧ b = (a1b2 − a2b1) i ∧ j is the so-called wedge product or exterior product, which shall not be confused with the cross product a × b. As said before, a ∧ b is a directed area, while a × b is an axial vector. Moreover, while a × b is not defined except in R3, the wedge product can be defined for any n-dimensional space Rn, where it can be interpreted as an area patch, a volume patch, a higher-dimensional patch, etc. In R3, if one assumes that the following identity holds,

cc = c² = c · c

where · is the scalar product,


assuming c = a + b and the commutativity of the dot product, then:

(a + b)(a + b) = (a + b) · (a + b)
aa + ab + ba + bb = a · a + a · b + b · a + b · b
a² + ab + ba + b² = a² + 2 a · b + b²

hence

ab + ba = 2 a · b

and finally

ab = 2 a · b − ba

The interested reader can find further details in [29].

Applications – Differential geometry

One of the principal applications of the exterior algebra is in differential geometry, where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric. Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a number of important applications in Riemannian geometry. Perhaps more important is the link to a spin manifold, its associated spinor bundle and spinor manifolds.

Applications – Physics

Clifford algebras have several important applications in physics. Physicists usually consider a Clifford algebra to be an algebra spanned by the so-called Dirac matrices γ1, . . . , γ4, where

γiγj + γjγi = 2 ηij I

and η = [ηij] is the matrix of a quadratic form having signature (p, q), typically (1, 3) when working in Minkowski space (i.e., (+, −, −, −)). The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron; they provide an explicit isomorphism from the Clifford algebra to the algebra of complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form of Dirac field bilinears.
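The R² Clifford product of two vectors introduced above, together with the identity ab + ba = 2 a · b just derived, can be checked numerically; a minimal sketch representing the product as a (scalar, bivector coefficient) pair:

```python
def clifford_vv(a, b):
    # Clifford product of two vectors of R^2, returned as the pair
    # (scalar part a.b, coefficient of the bivector e12)
    scalar = a[0]*b[0] + a[1]*b[1]
    bivector = a[0]*b[1] - a[1]*b[0]
    return (scalar, bivector)

a, b = [2.0, 1.0], [1.0, 3.0]

s, w = clifford_vv(a, b)
assert s == 5.0      # a . b
assert w == 5.0      # signed area of the parallelogram (wedge coefficient)

# aa = a.a: the wedge part of a vector with itself vanishes
assert clifford_vv(a, a) == (5.0, 0.0)

# ab + ba = 2 a.b, the identity derived above
s2, w2 = clifford_vv(b, a)
assert (s + s2, w + w2) == (2 * 5.0, 0.0)
```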


In mathematical physics, the gamma matrices γ1, γ2, γ3, γ4, also known as the Dirac matrices, are a set of conventional matrices with specific anticommutation relations that ensure they generate a matrix representation of the Clifford algebra Cl1,3. When interpreted as the matrices of the action of a set of orthogonal basis vectors for contravariant vectors in Minkowski space, the column vectors on which the matrices act become a space of spinors, on which the Clifford algebra of space-time acts.

Spinors

Spinors ease space-time computations in general, and in particular are fundamental to the Dirac equation for relativistic spin-1/2 particles. In the Dirac representation, the four contravariant gamma matrices are

γ1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix},  γ2 = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{bmatrix},

γ3 = \begin{bmatrix} 0 & 0 & 0 & -j \\ 0 & 0 & j & 0 \\ 0 & j & 0 & 0 \\ -j & 0 & 0 & 0 \end{bmatrix},  γ4 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}

Applications – Computer Vision

Recently, Clifford algebras have been applied to the problem of action recognition and classification in computer vision. Some authors propose a Clifford embedding to generalize traditional MACH filters to video (3D spatiotemporal volume) and to vector-valued data such as optical flow (see [49]). Vector-valued data is analyzed using the Clifford Fourier transform. Based on these vectors, action filters are synthesized in the Clifford Fourier domain and recognition of actions is performed using Clifford correlation. The authors demonstrate the effectiveness of the Clifford embedding by recognizing actions typically performed in classic feature films and sports broadcast television.

A.5 Vector derivatives

In general, vectors representing geometrical points or physical quantities are functions of time t. The derivative of the vector with respect to time is itself a vector, defined as

ẋ(t) = d x(t)/dt = [ ẋ1(t) ẋ2(t) ẋ3(t) ]T    (A.19)


Higher order derivatives are expressed in a similar way, starting from the second order time derivative

ẍ(t) = d ẋ(t)/dt = [ ẍ1(t) ẍ2(t) ẍ3(t) ]T    (A.20)

The derivatives of a scalar or cross product obey the usual laws of the derivative of a product:

d(x · y)/dt = ẋ · y + x · ẏ
d(x × y)/dt = (ẋ × y) + (x × ẏ)    (A.21)

We recall that the time derivative of a generic vector x^b represented in Rb is itself a vector, indicated in one of the following ways:

ẋ^b = d(x^b)/dt = [ ẋ^b_1 ẋ^b_2 ẋ^b_3 ]T    (A.22)

The relation between the time derivative with respect to frame Rb and the time derivative with respect to frame Ra is obtained, as seen in Section 2.13.1, computing the derivative of

x^a = R^a_b x^b

which gives

ẋ^a = R^a_b ẋ^b + Ṙ^a_b x^b    (A.23)

Recalling (2.110),

Ṙ(x, θ(t)) = S(ω(t)) R(x, θ(t))

in addition to (B.7), which we recall here for completeness,

S(Rx)R = R S(x)

and the usual transformation of a vector between two reference frames,

ω^a_ab = R^a_b ω^b_ab

we can write

S(ω^a_ab) R^a_b = S(R^a_b ω^b_ab) R^a_b = R^a_b S(ω^b_ab)

which gives

Ṙ^a_b = S(ω^a_ab) R^a_b = R^a_b S(ω^b_ab)

Replacing it in (A.23), we obtain the final relation that gives ẋ^a as a function of ẋ^b and x^b:

ẋ^a = R^a_b ( ẋ^b + S(ω^b_ab) x^b ) = R^a_b ( ẋ^b + ω^b_ab × x^b )    (A.24)
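Relation (A.24) can be verified numerically against a finite-difference derivative; the sketch below assumes a rotation R^a_b(t) about the z axis with constant rate w and an illustrative vector x^b(t) (all values are arbitrary):

```python
import math

w = 0.7  # constant angular rate about z (illustrative value)

def Rab(t):
    # R^a_b(t): orientation of frame b w.r.t. frame a, rotation about z by theta = w*t
    c, s = math.cos(w * t), math.sin(w * t)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def xb(t):
    # an illustrative vector varying in frame b
    return [1.0 + t, 2.0 * t, -1.0]

def xb_dot(t):
    # its exact derivative in frame b
    return [1.0, 2.0, 0.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def xa(t):
    # x^a(t) = R^a_b(t) x^b(t)
    return matvec(Rab(t), xb(t))

t, h = 0.3, 1e-6
# central finite difference of x^a at t
num = [(p - m) / (2 * h) for p, m in zip(xa(t + h), xa(t - h))]
# formula (A.24): xdot^a = R^a_b (xdot^b + w^b_ab x x^b);
# for a rotation about z, w^b_ab = [0, 0, w]
wb_ab = [0.0, 0.0, w]
ana = matvec(Rab(t), [d + c for d, c in zip(xb_dot(t), cross(wb_ab, xb(t)))])
assert all(abs(ni - ai) < 1e-6 for ni, ai in zip(num, ana))
```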


or

ẋ^a = R^a_b ẋ^b + S(R^a_b ω^b_ab) R^a_b x^b = R^a_b ẋ^b + ω^a_ab × R^a_b x^b    (A.25)

where S(ω^b_ab) represents the cross product ω^b_ab ×.

Relation (A.24) contains in its right-hand terms only vectors represented in Rb, which are then transformed into Ra. A similar relation is obtained for ẋ^b as a function of ẋ^a and x^a:

ẋ^b = R^b_a ( ẋ^a + S(ω^a_ba) x^a ) = R^b_a ( ẋ^a + ω^a_ba × x^a )    (A.26)

where the vector ω^a_ba is, this time, the angular velocity of the reference frame Ra with respect to Rb, represented in Ra. We must therefore conclude that the time derivative of a vector with respect to a reference frame is generally not the same as the time derivative with respect to another frame; this fact is commonly written as

(d/dt)_Ra x^ℓ = (d/dt)_Rb x^ℓ + ω^ℓ_ab × x^ℓ

In particular, if the reference frame Ra coincides with the inertial frame R0, we can write the following relation, which provides the so-called total derivative:

(d/dt) x^ℓ = (d/dt)_b x^ℓ + ω^ℓ_0b × x^ℓ    (A.27)

If, in addition, all vectors are represented in R0, we have

(d/dt) x^0 = (d/dt)_b x^0 + ω^0_0b × x^0    (A.28)

which is another expression of the total derivative. To conclude this paragraph, we recall some useful relations involving the angular velocities and the skew-symmetric matrices:

ω^b_ab ≠ ω^a_ab    ∀ a, b
ω^ℓ_aa = 0    ∀ a, ℓ
ω^ℓ_ab + ω^ℓ_bc = ω^ℓ_ac    ∀ a, b, c, ℓ
ω^ℓ_ab = −ω^ℓ_ba    ∀ a, b, ℓ
S(−ω) = −S(ω) = S^T(ω)    ∀ ω
S(λ1 ω1 + λ2 ω2) = λ1 S(ω1) + λ2 S(ω2)    ∀ λ1, λ2, ω1, ω2
S(ω) x = ω × x    ∀ ω, x
x^T S(ω) x ≡ 0    ∀ ω, x
R^a_b S(ω^b_ab) = S(R^a_b ω^b_ab) R^a_b = S(ω^a_ab) R^a_b    (A.29)
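Several of the skew-symmetric matrix relations in (A.29) can be checked directly; a minimal sketch with illustrative ω and x:

```python
def S(w):
    # skew-symmetric matrix of w, so that S(w) x = w × x
    return [[0.0, -w[2], w[1]],
            [w[2], 0.0, -w[0]],
            [-w[1], w[0], 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

w, x = [1.0, -2.0, 0.5], [3.0, 0.0, -1.0]

# S(w) x = w × x
assert matvec(S(w), x) == cross(w, x)
# S(-w) = -S(w) = S^T(w)
Sw = S(w)
assert S([-wi for wi in w]) == [[-e for e in row] for row in Sw]
assert [[Sw[j][i] for j in range(3)] for i in range(3)] == [[-e for e in row] for row in Sw]
# x^T S(w) x = 0 for every w, x (quadratic form of a skew matrix vanishes)
assert sum(xi * yi for xi, yi in zip(x, matvec(Sw, x))) == 0.0
```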