
Linear algebra (Osnabrück 2024-2025)/Part I/Lecture 6




Vector spaces
Figure: The addition of two arrows $v$ and $w$, a typical example for vectors.

The central concept of linear algebra is a vector space.


Let $K$ denote a field, and $V$ a set with a distinguished element $0 \in V$, and with two mappings

$$+ \colon V \times V \longrightarrow V, \quad (u,v) \longmapsto u + v,$$

and

$$\cdot \colon K \times V \longrightarrow V, \quad (s,v) \longmapsto sv.$$

Then $V$ is called a $K$-vector space (or a vector space over $K$), if the following axioms hold[1] (where $s,t \in K$ and $u,v,w \in V$ are arbitrary).[2]

  1. $u + v = v + u$,
  2. $(u + v) + w = u + (v + w)$,
  3. $v + 0 = v$,
  4. For every $v$, there exists a $z$ such that $v + z = 0$,
  5. $1 \cdot v = v$,
  6. $s(tv) = (st)v$,
  7. $s(u + v) = su + sv$,
  8. $(s + t)v = sv + tv$.

The binary operation in $V$ is called (vector-)addition, and the operation $K \times V \rightarrow V$ is called scalar multiplication. The elements in a vector space $V$ are called vectors, and the elements $s \in K$ are called scalars. The null element $0 \in V$ is called null vector, and for $v \in V$, the inverse element, with respect to the addition, is called the negative of $v$, denoted by $-v$.

The field that occurs in the definition of a vector space is called the base field. All the concepts of linear algebra refer to such a base field. In case $K = \mathbb{R}$, we talk about a real vector space, and in case $K = \mathbb{C}$, we talk about a complex vector space. For real and complex vector spaces, there exist further structures such as length, angle, and inner product. But first we develop the algebraic theory of vector spaces over an arbitrary field.



Let $K$ denote a field, and let $n \in \mathbb{N}$. Then the product set

$$K^n = \underbrace{K \times \cdots \times K}_{n\text{ times}} = \{ (x_1, \ldots, x_n) \mid x_i \in K \},$$

with componentwise addition and with scalar multiplication given by

$$s(x_1, \ldots, x_n) = (sx_1, \ldots, sx_n),$$

is a vector space. This space is called the $n$-dimensional standard space. In particular, $K^1 = K$ is a vector space.
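For instance, in the standard space $\mathbb{R}^3$ these operations read

$$(1,2,3) + (4,5,6) = (5,7,9) \quad \text{and} \quad 2 \cdot (1,2,3) = (2,4,6).$$

Each vector space axiom can be checked componentwise; for axiom (7), say, we have $s\big((x_1, \ldots, x_n) + (y_1, \ldots, y_n)\big) = (s(x_1 + y_1), \ldots, s(x_n + y_n)) = (sx_1 + sy_1, \ldots, sx_n + sy_n) = s(x_1, \ldots, x_n) + s(y_1, \ldots, y_n)$, using the distributive law in $K$.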

The null space $0$, consisting of just one element $0$, is a vector space. It might be considered as $K^0$.

The vectors in the standard space $K^n$ can be written as row vectors

$$(a_1, a_2, \ldots, a_n)$$

or as column vectors

$$\begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$

The vector

$$e_i := \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},$$

where the $1$ is at the $i$-th position, is called the $i$-th standard vector.


Let be a "plane“ with a fixed "origin“ . We identify a point with the connecting vector (the arrow from to ). In this situation, we can introduce an intuitive coordinate-free vector addition and a coordinate-free scalar multiplication. Two vectors and are added together by constructing the parallelogram of these vectors. The result of the addition is the corner of the parallelogram which lies in opposition to . In this construction, we have to draw a line parallel to through and a line parallel to through . The intersection point is the point sought after. An accompanying idea is that we move the vector in a parallel way so that the new starting point of becomes the ending point of .

In order to describe the multiplication of a vector $v$ with a scalar $s \in \mathbb{R}$, this number has to be given on a line $G$ that is also marked with a zero point $0$ and a unit point $1$. This line lies somewhere in the plane. We move this line (by translating and rotating) in such a way that $0$ becomes $Q$, and we avoid that the line is identical with the line given by $v$ (which we call $H$).

Now we connect $1$ and the endpoint of $v$ with a line $L$, and we draw the line $L'$ parallel to $L$ through $s$. The intersection point of $L'$ and $H$ is $sv$.

These considerations can also be carried out in higher dimensions, but everything takes place already in the plane spanned by the vectors involved.
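For a plausibility check with coordinates (which the construction itself does not need), let $Q$ be the origin of $\mathbb{R}^2$, and take $P = (3,1)$ and $R = (1,2)$. The corner of the parallelogram opposite to $Q$ is then

$$(3,1) + (1,2) = (4,3),$$

and the scaling of $\overrightarrow{QP}$ with $s = 2$ ends at $2 \cdot (3,1) = (6,2)$, in accordance with the componentwise operations on the standard space $\mathbb{R}^2$.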


The complex numbers $\mathbb{C}$ form a field, and therefore they also form a vector space over the field $\mathbb{C}$ itself. However, the set of complex numbers equals $\mathbb{R}^2$ as an additive group. The multiplication of a complex number $a + b\mathrm{i}$ with a real number $s$ is componentwise, that is, $s(a + b\mathrm{i}) = sa + sb\mathrm{i}$, so this multiplication coincides with the scalar multiplication on $\mathbb{R}^2$. Hence, the set of complex numbers is also a real vector space.
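For instance, under the identification $a + b\mathrm{i} \leftrightarrow (a,b)$, the real scalar multiplication $2 \cdot (3 + 4\mathrm{i}) = 6 + 8\mathrm{i}$ corresponds to $2 \cdot (3,4) = (6,8)$. Multiplication with a non-real scalar, such as $\mathrm{i} \cdot (3 + 4\mathrm{i}) = -4 + 3\mathrm{i}$, is of course also possible, but it belongs to the structure of $\mathbb{C}$ as a complex vector space, not to its structure as a real vector space.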


For a field $K$, and given natural numbers $m, n \in \mathbb{N}$, the set

$$\operatorname{Mat}_{m \times n}(K)$$

of all $m \times n$-matrices, endowed with componentwise addition and componentwise scalar multiplication, is a $K$-vector space. The null element in this vector space is the null matrix

$$0 = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{pmatrix}.$$
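For instance, for $2 \times 2$-matrices over $\mathbb{Q}$, these operations are

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 4 & 4 \end{pmatrix} \quad \text{and} \quad 3 \cdot \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 6 \\ 9 & 12 \end{pmatrix}.$$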

We will introduce polynomials later; they are probably known from school.


Let $K[X]$ be the polynomial ring in one variable over the field $K$, consisting of all polynomials, that is, expressions of the form

$$a_nX^n + a_{n-1}X^{n-1} + \cdots + a_1X + a_0$$

with $a_i \in K$. Using componentwise addition and componentwise multiplication with a scalar $s \in K$ (this is also multiplication with the constant polynomial $s$), the polynomial ring $K[X]$ is a $K$-vector space.
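For instance, in $\mathbb{R}[X]$ we have

$$(X^2 + 2X + 1) + (3X - 4) = X^2 + 5X - 3 \quad \text{and} \quad 3 \cdot (X^2 + 2X + 1) = 3X^2 + 6X + 3.$$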


We consider the inclusion $\mathbb{Q} \subseteq \mathbb{R}$ of the rational numbers inside the real numbers. Using the real addition and the multiplication of real numbers with rational numbers, we see that $\mathbb{R}$ is a $\mathbb{Q}$-vector space, as follows directly from the field axioms. This is a quite crazy vector space.


Let $K$ be a field, and let $V$ be a $K$-vector space. Then the following properties hold (for $v \in V$ and $s \in K$).
  1. We have $0v = 0$.
  2. We have $s0 = 0$.
  3. We have $(-1)v = -v$.
  4. If $s \neq 0$ and $v \neq 0$, then $sv \neq 0$.

Proof
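A typical argument for parts (1) and (3), using only the axioms: applying axiom (8) to $0 = 0 + 0$ gives

$$0v = (0 + 0)v = 0v + 0v,$$

and adding the negative of $0v$ on both sides yields $0v = 0$. From this,

$$v + (-1)v = 1v + (-1)v = (1 + (-1))v = 0v = 0,$$

which says precisely that $(-1)v$ is the negative of $v$.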



Linear subspaces

Let $K$ be a field, and let $V$ be a $K$-vector space. A subset $U \subseteq V$ is called a linear subspace if the following properties hold.

  1. $0 \in U$.
  2. If $u, v \in U$, then also $u + v \in U$.
  3. If $u \in U$ and $s \in K$, then $su \in U$ also holds.

Addition and scalar multiplication can be restricted to such a linear subspace. Hence, the linear subspace is itself a vector space, see Exercise 6.10. The simplest linear subspaces in a vector space $V$ are the null space $0$ and the whole vector space $V$.
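For instance, the subset $U = \{ (x,y) \in K^2 \mid x + y = 0 \}$ is a linear subspace of $K^2$: the null vector fulfills $0 + 0 = 0$; if $x + y = 0$ and $x' + y' = 0$, then $(x + x') + (y + y') = 0$; and if $x + y = 0$, then $sx + sy = s(x + y) = 0$. In contrast, the subset $\{ (x,y) \in K^2 \mid x + y = 1 \}$ already violates the first condition.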


Let $K$ be a field, and let

$$\begin{matrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n & = & 0 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n & = & 0 \\ \vdots & & \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n & = & 0 \end{matrix}$$

be a homogeneous system of linear equations over $K$. Then the set of all solutions to the system is a linear subspace of the standard space $K^n$.

Proof


Therefore, we talk about the solution space of the linear system. In particular, the sum of two solutions of a system of linear equations is again a solution. The solution set of an inhomogeneous linear system is not a vector space. However, adding a solution of the corresponding homogeneous system to a solution of the inhomogeneous system yields again a solution of the inhomogeneous system.
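The closure under addition can be seen directly: if $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ both fulfill the $i$-th equation, then

$$\sum_{j=1}^n a_{ij}(x_j + y_j) = \sum_{j=1}^n a_{ij}x_j + \sum_{j=1}^n a_{ij}y_j = 0 + 0 = 0,$$

and similarly $\sum_{j=1}^n a_{ij}(sx_j) = s \sum_{j=1}^n a_{ij}x_j = 0$ for the scalar multiplication. For an equation with right-hand side $c \neq 0$, the same computation would give $c + c \neq c$ for the sum of two solutions, which is why the inhomogeneous solution set is not a linear subspace.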


We take a look at the homogeneous version of Example 5.1, so we consider the corresponding homogeneous linear system over $\mathbb{R}$. Due to Lemma 6.11, the solution set $L$ is a linear subspace of the standard space. In Example 5.1, we have described it explicitly as the set of all linear combinations of two basic solutions. This description also shows that the solution set is a vector space. Moreover, with this description, it is clear that $L$ is in bijection with $\mathbb{R}^2$, and this bijection respects the addition and also the scalar multiplication (the solution set of the inhomogeneous system is also in bijection with $\mathbb{R}^2$, but there is no reasonable addition nor scalar multiplication on it). However, this bijection depends heavily on the chosen "basic solutions", which in turn depend on the order of elimination. There are several equally good basic solutions for $L$.

This example also shows the following: the solution space of a homogeneous linear system over $K$ is, "in a natural way", that is, independently of any choice, a linear subspace of $K^n$ (where $n$ is the number of variables). For this solution space, there always exists a "linear bijection" (an "isomorphism") to some $K^d$ (with $d \leq n$), but there is no natural choice for such a bijection. This is one of the main reasons to work with abstract vector spaces, instead of just $K^n$.



Generating systems

The solution set of a homogeneous linear system in $n$ variables over a field $K$ is a linear subspace of $K^n$. This solution space is often described as the set of all "linear combinations" of finitely many (simple) solutions. In this and the next lecture, we will develop the concepts to make this precise.

Figure: The plane generated by two vectors $v_1$ and $v_2$ consists of all linear combinations $s_1v_1 + s_2v_2$.

Let $K$ be a field, and let $V$ be a $K$-vector space. Let $v_1, \ldots, v_n$ denote a family of vectors in $V$. Then the vector

$$s_1v_1 + s_2v_2 + \cdots + s_nv_n \quad \text{with } s_i \in K$$

is called a linear combination of these vectors (for the coefficient tuple $(s_1, \ldots, s_n)$).

Two different coefficient tuples can define the same vector.
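For instance, in $K^2$ with $v_1 = (1,0)$, $v_2 = (0,1)$, and $v_3 = (1,1)$, the coefficient tuples $(1,1,0)$ and $(0,0,1)$ define the same vector, since

$$1 \cdot (1,0) + 1 \cdot (0,1) + 0 \cdot (1,1) = (1,1) = 0 \cdot (1,0) + 0 \cdot (0,1) + 1 \cdot (1,1).$$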


Let $K$ be a field, and let $V$ be a $K$-vector space. A family $v_i \in V$, $i \in I$, is called a generating system (or spanning system) of $V$, if every vector $v \in V$ can be written as

$$v = \sum_{j \in J} s_jv_j$$

with a finite subfamily $J \subseteq I$, and with $s_j \in K$.

In $K^n$, the standard vectors $e_i$, $1 \leq i \leq n$, form a generating system. In the polynomial ring $K[X]$, the powers $X^n$, $n \in \mathbb{N}$, form an (infinite) generating system.
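Indeed, every vector can be written directly in terms of the standard vectors,

$$(x_1, x_2, \ldots, x_n) = x_1e_1 + x_2e_2 + \cdots + x_ne_n,$$

and every polynomial is, by definition, a linear combination of finitely many powers $X^0 = 1, X^1, X^2, \ldots$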


Let $K$ be a field, and let $V$ be a $K$-vector space. For a family $v_i$, $i \in I$, we set

$$\langle v_i ,\, i \in I \rangle := \Big\{ \sum_{j \in J} s_jv_j \,\Big|\, s_j \in K ,\, J \subseteq I \text{ finite} \Big\}$$

and call this the linear span of the family, or the generated linear subspace.

The empty set generates the null space.[3] The null space is also generated by the element $0$. A single vector $v$ spans the space $Kv = \{ sv \mid s \in K \}$. For $v \neq 0$, this is a line, a term we will make more precise in the framework of dimension theory. For two vectors $v$ and $w$, the "form" of the spanned space depends on how the two vectors are related to each other. If they both lie on a line, say $w = sv$, then $w$ is superfluous, and the linear subspace generated by the two vectors equals the linear subspace generated by $v$. If this is not the case (and $v$ and $w$ are not $0$), then the two vectors span a "plane".
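For instance, in $\mathbb{R}^2$, the vectors $v = (1,2)$ and $w = (2,4) = 2v$ span only the line $\mathbb{R}v$, since every linear combination fulfills $sv + tw = (s + 2t)v$. The vectors $(1,2)$ and $(0,1)$, in contrast, do not lie on a common line, and they span all of $\mathbb{R}^2$, because $(x,y) = x(1,2) + (y - 2x)(0,1)$.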

We list some simple properties for generating systems and linear subspaces.


Let $K$ be a field, and let $V$ be a $K$-vector space. Then the following hold.

  1. Let $U_j$, $j \in J$, be a family of linear subspaces of $V$. Then the intersection

     $$U = \bigcap_{j \in J} U_j$$

     is a linear subspace.
  2. Let $v_i$, $i \in I$, be a family of elements of $V$, and consider the subset $U$ of $V$ which is given by all linear combinations of these elements. Then $U$ is a linear subspace of $V$.
  3. The family $v_i$, $i \in I$, is a system of generators of $V$ if and only if

     $$\langle v_i ,\, i \in I \rangle = V.$$
Proof



Footnotes
  1. The first four axioms, which are independent of $K$, mean that $(V, +, 0)$ is a commutative group.
  2. Also for vector spaces, there is the convention that multiplication binds stronger than addition.
  3. This follows from the definition, if we use the convention that the empty sum equals $0$.


<< | Linear algebra (Osnabrück 2024-2025)/Part I | >>
PDF-version of this lecture
Exercise sheet for this lecture (PDF)