Find out if the vectors are linearly dependent. Linear dependence of the system of vectors. Collinear vectors. Vectors, their properties and actions with them

Definition 1. A system of vectors is called linearly dependent if one of the system's vectors can be represented as a linear combination of the rest of the system's vectors, and linearly independent otherwise.

Definition 1´. A system of vectors is called linearly dependent if there are numbers c 1 , c 2 , …, c k , not all equal to zero, such that the linear combination of the vectors with these coefficients is equal to the zero vector: c 1 a 1 + c 2 a 2 + … + c k a k = 0 ; otherwise the system is called linearly independent.

Let us show that these definitions are equivalent.

Let Definition 1 be satisfied, i.e., one of the vectors of the system is equal to a linear combination of the rest, say a 1 = c 2 a 2 + … + c k a k . Then

( - 1 ) a 1 + c 2 a 2 + … + c k a k = 0

A linear combination of the system of vectors is equal to the zero vector, and not all coefficients of this combination are equal to zero (the coefficient of a 1 is - 1), i.e. Definition 1´ holds.

Let Definition 1´ be satisfied: c 1 a 1 + c 2 a 2 + … + c k a k = 0 , and not all coefficients of the combination are equal to zero; for example, c 1 ≠ 0. Then

a 1 = ( - c 2 / c 1 ) a 2 + … + ( - c k / c 1 ) a k

We have represented one of the vectors of the system as a linear combination of the rest, i.e. Definition 1 is fulfilled.

Definition 2. A unit coordinate vector (ort) e i is the n-dimensional vector whose i-th coordinate is equal to one and whose remaining coordinates are zero:

e 1 = (1, 0, 0, …, 0),

e 2 = (0, 1, 0, …, 0),

…,

e n = (0, 0, 0, …, 1).

Theorem 1. The unit coordinate vectors of n-dimensional space are linearly independent.

Proof. Let a linear combination of these vectors with arbitrary coefficients be equal to the zero vector:

c 1 e 1 + c 2 e 2 + … + c n e n = 0

The left side equals the vector (c 1 , c 2 , …, c n ), so it follows from this equality that all coefficients are equal to zero. Hence only the trivial combination gives the zero vector, and the system is linearly independent.

Each vector ā = (a 1 , a 2 , …, a n ) of n-dimensional space can be represented as a linear combination of the unit coordinate vectors with coefficients equal to the coordinates of the vector: ā = a 1 e 1 + a 2 e 2 + … + a n e n .
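This decomposition is easy to check numerically. The following sketch (NumPy, not part of the original text; the example vector is our own choice) rebuilds a vector from the unit coordinate vectors:

```python
import numpy as np

# Any n-dimensional vector equals the linear combination of the unit
# coordinate vectors e_1..e_n with coefficients given by its own coordinates.
a = np.array([4.0, -2.0, 7.0])           # an example vector (our choice)
n = len(a)
e = np.eye(n)                             # rows are the unit vectors e_1..e_n

combination = sum(a[i] * e[i] for i in range(n))
print(combination)                        # coincides with a itself
```

Running this prints the same coordinates as `a`, illustrating that the coefficients of the decomposition in the standard basis are exactly the coordinates.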

Theorem 2. If the system of vectors contains a zero vector, then it is linearly dependent.

Proof. Let a system of vectors be given and let one of the vectors be zero, for example a 1 = 0. Then with the vectors of this system it is possible to compose a linear combination equal to the zero vector in which not all coefficients are zero:

1 · a 1 + 0 · a 2 + … + 0 · a k = 0

Therefore, the system is linearly dependent.

Theorem 3. If some subsystem of a system of vectors is linearly dependent, then the entire system is linearly dependent.

Proof. Let a system of vectors a 1 , …, a r , …, a k be given, and let the subsystem a 1 , …, a r be linearly dependent, i.e. there are numbers c 1 , c 2 , …, c r , not all equal to zero, such that c 1 a 1 + … + c r a r = 0. Then

c 1 a 1 + … + c r a r + 0 · a r+1 + … + 0 · a k = 0

A linear combination of the vectors of the entire system is equal to the zero vector, and not all coefficients of this combination are equal to zero. Therefore, the system of vectors is linearly dependent.

Corollary. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

Proof.

Assume the opposite, i.e. some subsystem is linearly dependent. It follows from the theorem that the entire system is linearly dependent. We have come to a contradiction.

Theorem 4 (Steinitz's theorem). If each of the vectors b 1 , …, b m is a linear combination of the vectors a 1 , …, a n and m > n, then the system of vectors b 1 , …, b m is linearly dependent.

Corollary. In any system of n-dimensional vectors there cannot be more than n linearly independent vectors.

Proof. Each n-dimensional vector is expressed as a linear combination of n unit vectors. Therefore, if the system contains m vectors and m>n, then, by the theorem, this system is linearly dependent.

In this article, we will cover:

  • what are collinear vectors;
  • what are the conditions for collinear vectors;
  • what are the properties of collinear vectors;
  • what is the linear dependence of collinear vectors.
Definition 1

Collinear vectors are vectors that are parallel to the same line or lie on the same line.

Conditions for collinear vectors

Two vectors are collinear if any of the following conditions are true:

  • condition 1 . Vectors a and b are collinear if there is a number λ such that a = λ b ;
  • condition 2 . Vectors a and b are collinear if their coordinates are proportional:

a = (a 1 ; a 2 ) , b = (b 1 ; b 2 ) ⇒ a ∥ b ⇔ a 1 / b 1 = a 2 / b 2

  • condition 3 . Vectors a and b are collinear provided that their vector product equals the zero vector:

a ∥ b ⇔ [ a , b ] = 0

Remark 1

Condition 2 is not applicable if one of the vector's coordinates is zero.

Remark 2

Condition 3 is applicable only to vectors given in space.
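The cross-product condition sidesteps both remarks: embedding a planar vector in space with a zero third coordinate makes condition 3 usable even when a coordinate is zero. A small sketch (NumPy; the helper name `collinear` and the test vectors are our own, not from the original):

```python
import numpy as np

def collinear(a, b):
    """Check collinearity via the cross product: a ∥ b ⇔ [a, b] = 0.
    Works for 2D vectors (padded with a zero z-coordinate) and 3D vectors,
    and copes with zero coordinates, where the ratio test breaks down."""
    a3 = np.append(np.asarray(a, dtype=float), [0.0] * (3 - len(a)))
    b3 = np.append(np.asarray(b, dtype=float), [0.0] * (3 - len(b)))
    return bool(np.allclose(np.cross(a3, b3), 0.0))

print(collinear((1, 3), (2, 1)))     # False: 1/2 != 3/1
print(collinear((1, 2), (-1, -2)))   # True: b = -1 * a
print(collinear((0, 5), (0, -3)))    # True, despite the zero coordinates
```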

Examples of problems for the study of the collinearity of vectors

Example 1

We examine the vectors a = (1; 3) and b = (2; 1) for collinearity.

Solution

In this case, it is necessary to use the 2nd condition of collinearity. For the given vectors it looks like this:

1 / 2 = 3 / 1

The equality is false. From this we can conclude that the vectors a and b are not collinear.

Answer: a ∦ b

Example 2

For what value of m are the vectors a = (1 ; 2) and b = (- 1 ; m) collinear?

Solution

By the second collinearity condition, the vectors are collinear if their coordinates are proportional:

1 / ( - 1 ) = 2 / m

This shows that m = - 2 .

Answer: m = - 2 .

Criteria for linear dependence and linear independence of systems of vectors

Theorem

A system of vectors in a vector space is linearly dependent if and only if one of the system's vectors can be expressed in terms of the rest of the system's vectors.

Proof

Let the system e 1 , e 2 , . . . , e n be linearly dependent. Let us write down a linear combination of this system equal to the zero vector:

a 1 e 1 + a 2 e 2 + . . . + a n e n = 0

in which at least one of the coefficients of the combination is not equal to zero.

Let a k ≠ 0 , k ∈ { 1 , 2 , . . . , n } .

We divide both sides of the equality by this non-zero coefficient a k :

( a 1 / a k ) e 1 + . . . + ( a k - 1 / a k ) e k - 1 + e k + ( a k + 1 / a k ) e k + 1 + . . . + ( a n / a k ) e n = 0

Denote:

β m = a m / a k , where m ∈ { 1 , 2 , . . . , k - 1 , k + 1 , . . . , n }

In this case:

β 1 e 1 + . . . + β k - 1 e k - 1 + e k + β k + 1 e k + 1 + . . . + β n e n = 0

or e k = ( - β 1 ) e 1 + . . . + ( - β k - 1 ) e k - 1 + ( - β k + 1 ) e k + 1 + . . . + ( - β n ) e n

It follows that one of the vectors of the system is expressed in terms of all the other vectors of the system, which is what was to be proved.

Sufficiency

Let one of the vectors be linearly expressed in terms of all other vectors of the system:

e k = γ 1 e 1 + . . . + γ k - 1 e k - 1 + γ k + 1 e k + 1 + . . . + γ n e n

We move the vector e k to the other side of this equality:

0 = γ 1 e 1 + . . . + γ k - 1 e k - 1 - e k + γ k + 1 e k + 1 + . . . + γ n e n

Since the coefficient of the vector e k is equal to - 1 ≠ 0 , we get a non-trivial representation of zero by the system of vectors e 1 , e 2 , . . . , e n , and this, in turn, means that the given system of vectors is linearly dependent, which is what was to be proved.

Corollaries:

  • A system of vectors is linearly independent when none of its vectors can be expressed in terms of all other vectors of the system.
  • A vector system that contains a null vector or two equal vectors is linearly dependent.

Properties of linearly dependent vectors

  1. For 2- and 3-dimensional vectors, the condition is fulfilled: two linearly dependent vectors are collinear. Two collinear vectors are linearly dependent.
  2. For 3-dimensional vectors, the condition is fulfilled: three linearly dependent vectors are coplanar. (3 coplanar vectors - linearly dependent).
  3. For n-dimensional vectors, the condition is fulfilled: n + 1 vectors are always linearly dependent.

Examples of solving problems for linear dependence or linear independence of vectors

Example 3

Let's check the vectors a = (3, 4, 5) , b = (- 3, 0, 5) , c = (4, 4, 4) , d = (3, 4, 0) for linear independence.

Solution. Vectors are linearly dependent because the dimension of the vectors is less than the number of vectors.
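This conclusion can be confirmed numerically: four vectors form at most a rank-3 matrix in three-dimensional space. A NumPy sketch (added here, not in the original):

```python
import numpy as np

# Example 3: four 3-dimensional vectors are always linearly dependent,
# since the rank of the matrix they form cannot exceed the dimension 3.
a = np.array([3, 4, 5])
b = np.array([-3, 0, 5])
c = np.array([4, 4, 4])
d = np.array([3, 4, 0])

M = np.stack([a, b, c, d])        # 4 x 3 matrix, one vector per row
rank = np.linalg.matrix_rank(M)
print(rank)                       # at most 3 < 4, so the system is dependent
```

Since the rank is less than the number of vectors, some non-trivial linear combination of them vanishes.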

Example 4

Let's check the vectors a = (1, 1, 1) , b = (1, 2, 0) , c = (0, - 1, 1) for linear independence.

Solution. We find the values of the coefficients at which the linear combination equals the zero vector:

x 1 a + x 2 b + x 3 c = 0

We write the vector equation coordinate-wise as a linear system:

x 1 + x 2 = 0
x 1 + 2 x 2 - x 3 = 0
x 1 + x 3 = 0

We solve this system using the Gauss method. The augmented matrix is:

1 1 0 | 0
1 2 -1 | 0
1 0 1 | 0

From the 2nd row we subtract the 1st, from the 3rd the 1st:

1 1 0 | 0
0 1 -1 | 0
0 -1 1 | 0

We subtract the 2nd row from the 1st and add the 2nd to the 3rd:

1 0 1 | 0
0 1 -1 | 0
0 0 0 | 0

It follows from the solution that the system has infinitely many solutions. This means that there is a non-zero set of numbers x 1 , x 2 , x 3 for which the linear combination of a , b , c equals the zero vector. Hence the vectors a , b , c are linearly dependent.
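The result of this elimination can be double-checked with NumPy (a sketch added here, not in the original). From the reduced system x 1 + x 3 = 0, x 2 - x 3 = 0 with x 3 free, taking x 3 = 1 gives x = (-1, 1, 1):

```python
import numpy as np

# Example 4: x1*a + x2*b + x3*c = 0 has non-trivial solutions.
a = np.array([1, 1, 1])
b = np.array([1, 2, 0])
c = np.array([0, -1, 1])

M = np.column_stack([a, b, c])     # columns are the vectors
rank = np.linalg.matrix_rank(M)
print(rank)                        # 2 < 3  =>  linearly dependent

# A non-trivial solution read off from the reduced matrix (x3 = 1):
x = np.array([-1, 1, 1])
print(M @ x)                       # the zero vector: -a + b + c = 0
```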


Linear dependence and independence of vectors

Definitions of linearly dependent and independent systems of vectors

Definition 22

Let a system of vectors a 1 , a 2 , … , a n and a set of numbers λ 1 , λ 2 , … , λ n be given. Then

λ 1 a 1 + λ 2 a 2 + … + λ n a n (11)

is called a linear combination of the given system of vectors with the given set of coefficients.

Definition 23

A system of vectors a 1 , a 2 , … , a n is called linearly dependent if there is a set of coefficients λ 1 , λ 2 , … , λ n , of which at least one is not equal to zero, such that the linear combination of the given system of vectors with this set of coefficients is equal to the zero vector:

λ 1 a 1 + λ 2 a 2 + … + λ n a n = 0 (12)

Definition 24 (through the representation of one vector of the system as a linear combination of the others)

A system of vectors a 1 , a 2 , … , a n is called linearly dependent if at least one of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Statement 3

Definitions 23 and 24 are equivalent.

Definition 25 (via the zero linear combination)

A system of vectors a 1 , a 2 , … , a n is called linearly independent if a zero linear combination of this system is possible only with all the coefficients λ 1 , λ 2 , … , λ n equal to zero.

Definition 26 (through the impossibility of representing one vector of the system as a linear combination of the rest)

A system of vectors a 1 , a 2 , … , a n is called linearly independent if none of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Properties of linearly dependent and independent systems of vectors

Theorem 2 (zero vector in the system of vectors)

If there is a zero vector in the system of vectors, then the system is linearly dependent.

Proof. Let a k = 0 for some k. Then

1 · a k + 0 · a 1 + … + 0 · a k-1 + 0 · a k+1 + … + 0 · a n = 0

We obtain a non-trivial zero linear combination; therefore, by the definition of a linearly dependent system of vectors via a zero linear combination (12), the system is linearly dependent. ∎

Theorem 3 (dependent subsystem in the system of vectors)

If a system of vectors has a linearly dependent subsystem, then the entire system is linearly dependent.

Proof. Let a 1 , … , a k be a linearly dependent subsystem of the system a 1 , … , a n , i.e. λ 1 a 1 + … + λ k a k = 0 , where at least one of the coefficients is not equal to zero. Then

λ 1 a 1 + … + λ k a k + 0 · a k+1 + … + 0 · a n = 0

Hence, by Definition 23, the entire system is linearly dependent. ∎

Theorem 4

Any subsystem of a linearly independent system is linearly independent.

Proof (by contradiction). Let the system be linearly independent and have a linearly dependent subsystem. But then, by Theorem 3, the entire system would also be linearly dependent. Contradiction. Therefore, a subsystem of a linearly independent system cannot be linearly dependent. ∎

Geometric meaning of linear dependence and independence of a system of vectors

Theorem 5

Two vectors a and b are linearly dependent if and only if a ∥ b .

Necessity. Let a and b be linearly dependent: λ 1 a + λ 2 b = 0 with, for definiteness, λ 1 ≠ 0 . Then a = ( - λ 2 / λ 1 ) b , i.e. a ∥ b .

Sufficiency. Let a ∥ b . Then a = λ b for some number λ , so a - λ b = 0 is a non-trivial zero linear combination and the vectors are linearly dependent. ∎

Corollary 5.1

The zero vector is collinear to any vector.

Corollary 5.2

For two vectors a and b to be linearly independent it is necessary and sufficient that they not be collinear.

Theorem 6

In order for a system of three vectors to be linearly dependent, it is necessary and sufficient that these vectors be coplanar .

Necessity. Let a , b , c be linearly dependent; then one of the vectors, say c , can be represented as a linear combination of the other two:

c = α a + β b , (13)

where α and β are numbers. According to the parallelogram rule, c is the diagonal of the parallelogram with sides α a and β b ; but a parallelogram is a flat figure, so the vectors a , b and c are coplanar.

Sufficiency. Let a , b , c be coplanar. Apply the three vectors at a common point O (excluding the trivial case of a collinear pair, which is already linearly dependent). Drawing through the end of c lines parallel to a and b represents c as the diagonal of a parallelogram with sides α a and β b , i.e. c = α a + β b , so the vectors are linearly dependent. ∎

Corollary 6.1

The zero vector is coplanar to any pair of vectors.

Corollary 6.2

Vectors a , b and c are linearly independent if and only if they are not coplanar.

Corollary 6.3

Any plane vector can be represented as a linear combination of any two non-collinear vectors of the same plane.

Theorem 7

Any four vectors in space are linearly dependent .

Proof sketch. If some pair of the vectors is collinear or some triple is coplanar, that subsystem is already linearly dependent, and with it the whole system. Otherwise, apply the vectors a , b , c , d at a common point O. Draw a plane through the vectors a , b , a plane through a , c and a plane through b , c . Then draw the planes passing through the end point D of the vector d parallel to the pairs of vectors ( b , c ), ( a , c ), ( a , b ) respectively. On the lines of intersection of these planes we build a parallelepiped with d as its diagonal. By the parallelepiped rule (applying the parallelogram rule twice, using the parallelogram faces of the parallelepiped) there are numbers α , β , γ such that

d = α a + β b + γ c ,

and by Definition 24 the system of vectors is linearly dependent. ∎

Corollary 7.1

The sum of three non-coplanar vectors in space is a vector that coincides with the diagonal of the parallelepiped built on these three vectors attached to a common origin, and the beginning of the sum vector coincides with the common origin of these three vectors.

Corollary 7.2

If we take 3 non-coplanar vectors in a space, then any vector of this space can be decomposed into a linear combination of these three vectors.

Vectors, their properties and actions with them

Vectors, actions with vectors, linear vector space.

A vector is an ordered collection of a finite number of real numbers.

Operations: 1. Multiplication of a vector by a number: λ · x = (λx 1 , λx 2 , …, λx n ). For example, 3 · (3, 4, 0.7) = (9, 12, 2.1).

2. Addition of vectors (belonging to the same vector space): x + y = (x 1 + y 1 , x 2 + y 2 , …, x n + y n ).

3. The zero vector 0 = (0, 0, …, 0) of the n-dimensional linear space E n satisfies x + 0 = x for every vector x.
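The three operations above can be sketched in a few lines of plain Python (the function names `scale` and `add` are our own, not from the original text):

```python
# A minimal sketch of the listed operations on n-dimensional vectors,
# represented as plain tuples of numbers.
def scale(lam, x):
    """Multiply a vector by a number: lam * x = (lam*x1, ..., lam*xn)."""
    return tuple(lam * xi for xi in x)

def add(x, y):
    """Add two vectors of the same space componentwise."""
    assert len(x) == len(y), "vectors must belong to the same space"
    return tuple(xi + yi for xi, yi in zip(x, y))

zero = (0, 0, 0)                    # the zero vector of E^3

print(scale(3, (3, 4, 0.7)))        # approximately (9, 12, 2.1)
print(add((1, 2, 3), zero))         # x + 0 = x
```

Note the floating-point caveat: `3 * 0.7` is only approximately `2.1` in binary floating point.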

Theorem. For a system of n vectors in an n-dimensional linear space to be linearly dependent, it is necessary and sufficient that one of the vectors be a linear combination of the others.

Theorem. Any set of n + 1 vectors of an n-dimensional linear space is linearly dependent.

Addition of vectors, multiplication of vectors by numbers. Subtraction of vectors.

The sum of two vectors a and b is the vector directed from the beginning of a to the end of b , provided that the beginning of b coincides with the end of a . If the vectors are given by their expansions in basis vectors, then adding the vectors adds their corresponding coordinates.

Let us consider this using the example of a Cartesian coordinate system: if a = (a x ; a y ) and b = (b x ; b y ), then, as Figure 3 shows,

a + b = (a x + b x ; a y + b y ).

The sum of any finite number of vectors can be found using the polygon rule (Fig. 4): to construct the sum of a finite number of vectors, it is enough to match the beginning of each subsequent vector with the end of the previous one and construct a vector connecting the beginning of the first vector with the end of the last one.

Properties of the vector addition operation:

In these expressions m, n are numbers.

The difference of vectors a and b is the vector a + ( - b ). The second term, - b , is a vector opposite to b in direction but equal to it in length.

Thus, the vector subtraction operation is replaced by the addition operation: a - b = a + ( - b ).

The vector, the beginning of which is at the origin of coordinates, and the end at the point A (x1, y1, z1), is called the radius vector of the point A and is denoted or simply. Since its coordinates coincide with the coordinates of the point A, its expansion in terms of vectors has the form

A vector starting at point A(x1, y1, z1) and ending at point B(x2, y2, z2) can be written as

AB = r 2 - r 1 ,

where r 2 is the radius vector of the point B and r 1 is the radius vector of the point A.

Therefore, the expansion of the vector in terms of the orts has the form

AB = (x2 - x1) i + (y2 - y1) j + (z2 - z1) k.

Its length is equal to the distance between the points A and B:

|AB| = √((x2 - x1)² + (y2 - y1)² + (z2 - z1)²).

MULTIPLICATION

In the case of a planar problem, the product of a vector a = (a x ; a y ) and a number b is found by the formula

a · b = (a x · b ; a y · b)

Example 1. Find the product of the vector a = (1; 2) by 3.

3 · a = (3 · 1 ; 3 · 2) = (3 ; 6)

In the case of a spatial problem, the product of a vector a = (a x ; a y ; a z ) and a number b is found by the formula

a · b = (a x · b ; a y · b ; a z · b)

Example 2. Find the product of the vector a = (1; 2; -5) by 2.

2 · a = (2 · 1 ; 2 · 2 ; 2 · (-5)) = (2 ; 4 ; -10)

The dot product of vectors a and b is a · b = | a | · | b | · cos φ , where φ is the angle between a and b ; if a = 0 or b = 0 , then a · b = 0.

From the definition of the scalar product it follows that

a · b = | a | · pr a b = | b | · pr b a ,

where, for example, pr a b is the value of the projection of the vector b onto the direction of the vector a .

The scalar square of a vector: a · a = | a |².

Dot product properties: commutativity ( a · b = b · a ) and linearity in each argument.

Dot product in coordinates: if a = (a x ; a y ; a z ) and b = (b x ; b y ; b z ), then

a · b = a x b x + a y b y + a z b z

Angle between vectors

The angle between vectors is the angle between the directions of these vectors (the smallest such angle).
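The two dot product formulas combine into a recipe for the angle: cos φ = (a · b) / (|a| |b|). A NumPy sketch (the example vectors are our own, not from the original):

```python
import numpy as np

# Dot product in coordinates and the angle between vectors:
# a·b = |a||b|cos(phi)  =>  phi = arccos(a·b / (|a||b|)).
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

dot = a @ b                                       # = ax*bx + ay*by + az*bz
cos_phi = dot / (np.linalg.norm(a) * np.linalg.norm(b))
phi = np.degrees(np.arccos(cos_phi))
print(phi)                                        # 45 degrees for this pair
```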

Vector product (the vector product of two vectors) is a pseudovector perpendicular to the plane spanned by the two factors; it is the result of the binary operation "vector multiplication" on vectors in three-dimensional Euclidean space. The product is neither commutative nor associative (it is anticommutative) and differs from the dot product of vectors. In many engineering and physics problems one needs to build a vector perpendicular to two existing ones; the vector product provides this possibility. The cross product is also useful for "measuring" the perpendicularity of vectors: the length of the cross product of two vectors equals the product of their lengths if they are perpendicular, and decreases to zero if the vectors are parallel or antiparallel.

Vector product is defined only in three-dimensional and seven-dimensional spaces. The result of the vector product, like the scalar product, depends on the metric of the Euclidean space.

Unlike the formula for calculating the scalar product from the coordinates of the vectors in a three-dimensional rectangular coordinate system, the formula for the vector product depends on the orientation of the rectangular coordinate system or, in other words, on its "chirality".
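The perpendicularity and parallelism properties just described can be seen directly (a NumPy sketch with vectors of our own choosing):

```python
import numpy as np

# The cross product is perpendicular to both factors, and its length
# |a x b| = |a||b|sin(phi) vanishes for parallel (or antiparallel) vectors.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.cross(a, b)
print(c)                                 # [0. 0. 1.] in a right-handed system

print(c @ a, c @ b)                      # both zero: c is perpendicular to a, b
print(np.cross(a, 2 * a))                # zero vector: parallel factors
```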

Collinearity of vectors.

Two non-zero vectors are called collinear if they lie on parallel lines or on the same line. A synonym, "parallel" vectors, is allowed but not recommended. Collinear vectors can be directed the same way ("co-directed") or oppositely (in the latter case they are sometimes called "anticollinear" or "antiparallel").

The mixed product of vectors ( a , b , c ) is the scalar product of the vector a and the vector product of the vectors b and c :

(a,b,c)=a ⋅(b×c)

sometimes it is called the triple scalar product of vectors, apparently due to the fact that the result is a scalar (more precisely, a pseudoscalar).

Geometric meaning: The modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors (a,b,c) .

Properties

The mixed product is skew-symmetric with respect to all its arguments: i.e., a permutation of any two factors changes the sign of the product. It follows that the mixed product in a right-handed Cartesian coordinate system (in an orthonormal basis) is equal to the determinant of the matrix composed of the vectors a , b , c .

The mixed product in a left-handed Cartesian coordinate system (in an orthonormal basis) is equal to the same determinant taken with a minus sign.

In particular,

If any two vectors are parallel, then with any third vector they form a mixed product equal to zero.

If three vectors are linearly dependent (i.e., coplanar, lie in the same plane), then their mixed product is zero.

Geometric meaning: the mixed product in absolute value is equal to the volume of the parallelepiped (see figure) formed by the vectors a , b and c ; the sign depends on whether this triple of vectors is right- or left-handed.
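Both the determinant identity and the volume interpretation are easy to verify numerically (a NumPy sketch; the axis-aligned vectors are our own choice, giving a box of volume 1·2·3 = 6):

```python
import numpy as np

# Mixed product (a, b, c) = a · (b x c); its absolute value equals the
# volume of the parallelepiped built on a, b, c, and in a right-handed
# basis it equals the determinant of the matrix with rows a, b, c.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 3.0])

mixed = a @ np.cross(b, c)
det = np.linalg.det(np.stack([a, b, c]))
print(mixed, det)                        # both 6: the volume of the box

# Coplanar (linearly dependent) vectors give a zero mixed product:
print(a @ np.cross(b, a + b))            # 0: a, b, a+b lie in one plane
```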

Coplanarity of vectors.

Three vectors (or more) are called coplanar if, being reduced to a common origin, they lie in the same plane.

Coplanarity properties

If at least one of three vectors is the zero vector, then the three vectors are also considered coplanar.

A triple of vectors containing a pair of collinear vectors is coplanar.

The mixed product of coplanar vectors is equal to zero. This is a criterion for the coplanarity of three vectors.

Coplanar vectors are linearly dependent. This is also a criterion for coplanarity.

In 3-dimensional space, 3 non-coplanar vectors form a basis

Linearly dependent and linearly independent vectors.

Linearly dependent and independent systems of vectors. Definition. A system of vectors is called linearly dependent if there is at least one non-trivial linear combination of these vectors equal to the zero vector. Otherwise, i.e. if only the trivial linear combination of the given vectors equals the zero vector, the vectors are called linearly independent.

Theorem (linear dependence criterion). For a system of vectors in a linear space to be linearly dependent, it is necessary and sufficient that at least one of these vectors be a linear combination of the others.

1) If there is at least one zero vector among the vectors, then the entire system of vectors is linearly dependent.

Indeed, if, for example, a 1 = 0 , then, taking the coefficient of a 1 equal to 1 and the remaining coefficients equal to 0, we have a non-trivial linear combination equal to the zero vector. ▲

2) If some of the vectors form a linearly dependent system, then the entire system is linearly dependent.

Indeed, let some of the vectors, say a 1 , … , a k , be linearly dependent. Hence there exists a non-trivial linear combination of them equal to the zero vector. But then, taking zero coefficients for the remaining vectors, we also obtain a non-trivial linear combination of the entire system equal to the zero vector.

2. Basis and dimension. Definition. A system of linearly independent vectors of a vector space is called a basis of this space if any vector of the space can be represented as a linear combination of the vectors of this system, i.e. for each vector there are real numbers such that the corresponding equality holds. This equality is called the decomposition of the vector in the basis, and the numbers are called the coordinates of the vector relative to the basis (or in the basis).

Theorem (on the uniqueness of the expansion in the basis). Each vector of the space can be expanded in the basis in a unique way, i.e. the coordinates of each vector in the basis are determined unambiguously.

Definition 1. A linear combination of vectors a 1 , … , a n is the sum of the products of these vectors by scalars λ 1 , … , λ n :

λ 1 a 1 + … + λ n a n (2.8)

Definition 2. A system of vectors a 1 , … , a n is called linearly dependent if their linear combination (2.8) vanishes:

λ 1 a 1 + … + λ n a n = 0 , (2.9)

and among the numbers λ 1 , … , λ n there is at least one different from zero.

Definition 3. Vectors a 1 , … , a n are called linearly independent if their linear combination (2.8) vanishes only when all the numbers λ 1 , … , λ n are zero.

From these definitions, the following corollaries can be obtained.

Corollary 1. In a linearly dependent vector system, at least one vector can be expressed as a linear combination of the others.

Proof. Let (2.9) hold and let, for definiteness, the coefficient λ 1 ≠ 0 . We then have:

a 1 = ( - λ 2 / λ 1 ) a 2 + … + ( - λ n / λ 1 ) a n .

Note that the converse is also true.

Corollary 2. If a system of vectors contains the zero vector, then this system is (necessarily) linearly dependent; the proof is obvious.

Corollary 3. If among n vectors any k ( k < n ) vectors are linearly dependent, then all n vectors are linearly dependent (we omit the proof).

2°. Linear combinations of two, three and four vectors. Let us consider questions of linear dependence and independence of vectors on a line, in a plane and in space. Let us present the corresponding theorems.

Theorem 1. For two vectors to be linearly dependent, it is necessary and sufficient that they be collinear.

Necessity. Let the vectors a and b be linearly dependent. This means that some linear combination α a + β b = 0 with, for the sake of definiteness, α ≠ 0 . This implies the equality a = ( - β / α ) b , and (by the definition of multiplication of a vector by a number) the vectors a and b are collinear.

Sufficiency. Let the vectors a and b be collinear ( a ∥ b ); we assume that they are different from the zero vector, otherwise their linear dependence is obvious.

By Theorem (2.7) (see §2.1, item 2°) there is a number λ such that a = λ b , or a - λ b = 0 : the linear combination is equal to zero and the coefficient of a equals 1, so the vectors a and b are linearly dependent.

The following corollary follows from this theorem.

Corollary. If the vectors a and b are not collinear, then they are linearly independent.

Theorem 2. For three vectors to be linearly dependent, it is necessary and sufficient that they be coplanar.

Necessity. Let the vectors a , b and c be linearly dependent. Let us show that they are coplanar.

The definition of linear dependence implies the existence of numbers α , β and γ , not all zero, such that the linear combination α a + β b + γ c = 0 , with (for definiteness) γ ≠ 0 . Then from this equality we can express the vector c :

c = ( - α / γ ) a + ( - β / γ ) b ,

that is, the vector c equals the diagonal of the parallelogram built on the vectors on the right side of this equality (Fig. 2.6). This means that the vectors a , b and c lie in the same plane.

Sufficiency. Let the vectors a , b and c be coplanar. Let us show that they are linearly dependent.

Let us exclude the case of collinearity of any pair of vectors (then that pair is linearly dependent and, by Corollary 3 (see item 1°), all three vectors are linearly dependent). Note that this assumption also excludes the existence of the zero vector among the three.

Let us move the three coplanar vectors to one plane and bring them to a common origin. Through the end of the vector c draw lines parallel to the vectors a and b ; we obtain vectors α a and β b (Fig. 2.7), whose existence is ensured by the fact that the vectors a and b are not collinear by assumption. It follows that c = α a + β b . Rewriting this equality as ( - 1 ) c + α a + β b = 0 , we conclude that the vectors a , b and c are linearly dependent.

Two corollaries follow from the proved theorem.

Corollary 1. Let a and b be non-collinear vectors, and let c be an arbitrary vector lying in the plane defined by a and b . Then there are numbers α and β such that

c = α a + β b . (2.10)

Corollary 2. If the vectors a , b and c are not coplanar, then they are linearly independent.

Theorem 3. Any four vectors are linearly dependent.

We omit the proof; with some modifications, it copies the proof of Theorem 2. Let us present a corollary of this theorem.

Corollary. For any non-coplanar vectors a , b , c and any vector d there are numbers α , β and γ such that

d = α a + β b + γ c . (2.11)

Comment. For vectors in a (three-dimensional) space, the concepts of linear dependence and independence have, as follows from Theorems 1-3 above, a simple geometric meaning.

Let there be two linearly dependent vectors a and b . In this case one of them is a linear combination of the second, that is, it simply differs from it by a numerical factor (for example, a = λ b ). Geometrically this means that both vectors lie on a common line; they can have the same or opposite directions (Fig. 2.8).

If two vectors are located at an angle to each other (Fig. 2.9), then one of them cannot be obtained by multiplying the other by a number; such vectors are linearly independent. Therefore, the linear independence of two vectors a and b means that these vectors cannot be laid on the same straight line.

Let us find out the geometric meaning of the linear dependence and independence of three vectors.

Let the vectors a , b and c be linearly dependent and let (for definiteness) the vector c be a linear combination of the vectors a and b , that is, located in the plane containing a and b . This means that the vectors a , b and c lie in the same plane. The converse is also true: if the vectors a , b and c lie in the same plane, then they are linearly dependent.

So the vectors a , b and c are linearly independent if and only if they do not lie in the same plane.

3°. The concept of a basis. One of the most important concepts of linear and vector algebra is the concept of a basis. We introduce definitions.

Definition 1. A pair of vectors is called ordered if it is specified which vector of this pair is considered the first and which is the second.

Definition 2. An ordered pair a , b of non-collinear vectors is called a basis in the plane defined by the given vectors.

Theorem 1. Any vector c in the plane can be represented as a linear combination of the basis system a , b :

c = α a + β b (2.12)

and this representation is unique.

Proof. Let the vectors a and b form a basis. Then, by Corollary 1 above, any vector c can be represented as c = α a + β b .

To prove uniqueness, suppose that there is one more decomposition c = α′ a + β′ b . We then have ( α - α′ ) a + ( β - β′ ) b = 0 , and at least one of the differences is non-zero. The latter means that the vectors a and b are linearly dependent, that is, collinear; this contradicts the assertion that they form a basis.

But then the decomposition is unique.

Definition 3. A triple of vectors is called ordered if it is indicated which vector is considered the first, which is the second, and which is the third.

Definition 4. An ordered triple of non-coplanar vectors is called a basis in space.

The decomposition and uniqueness theorem also holds here.

Theorem 2. Any vector d can be represented as a linear combination of the basis system a , b , c :

d = α a + β b + γ c (2.13)

and this representation is unique (we omit the proof of the theorem).

In expansions (2.12) and (2.13), the quantities α , β , γ are called the coordinates of the vector in the given basis (more precisely, affine coordinates).

For a fixed basis, specifying the coordinates of a vector in this basis completely determines the vector, so one can write the vector as the ordered set of its coordinates. For example, if a basis is given and the coordinates of a vector in it are specified, this means that the corresponding representation (decomposition) of the vector in the basis exists.

4°. Linear operations on vectors in coordinate form. The introduction of a basis allows linear operations on vectors to be replaced by ordinary linear operations on numbers, the coordinates of these vectors.

Let some basis a , b , c be given. Obviously, specifying the coordinates of a vector in this basis completely determines the vector itself. The following propositions hold:

a) two vectors x = ( x 1 ; x 2 ; x 3 ) and y = ( y 1 ; y 2 ; y 3 ) are equal if and only if their respective coordinates are equal: x 1 = y 1 , x 2 = y 2 , x 3 = y 3 ;

b) when a vector x = ( x 1 ; x 2 ; x 3 ) is multiplied by a number λ , its coordinates are multiplied by this number:

λ x = ( λ x 1 ; λ x 2 ; λ x 3 ) ; (2.15)

c) when vectors are added, their respective coordinates are added:

x + y = ( x 1 + y 1 ; x 2 + y 2 ; x 3 + y 3 ) .

We omit the proofs of these properties; let us prove property b) only, as an example. We have

λ x = λ ( x 1 a + x 2 b + x 3 c ) = ( λ x 1 ) a + ( λ x 2 ) b + ( λ x 3 ) c = ( λ x 1 ; λ x 2 ; λ x 3 )

Comment. In space (on the plane) one can choose infinitely many bases.

We give an example of the transition from one basis to another, establish the relationship between the coordinates of the vector in different bases.

Example 1. In the base system
three vectors are given:
,
and
. in basis ,,vector has a decomposition. Find vector coordinates in basis
.

Solution. We have expansions:
,
,
; Consequently,
=
+2
+
= =
, that is
in basis
.

Example 2. Let four vectors ā₁, ā₂, ā₃ and b̄ be given by their coordinates in some basis.

Find out whether the vectors ā₁, ā₂, ā₃ form a basis; in the case of a positive answer, find the decomposition of the vector b̄ in this basis.

Solution. 1) The vectors form a basis if they are linearly independent. Compose a linear combination of the vectors, α₁ā₁ + α₂ā₂ + α₃ā₃, and find out for what α₁, α₂, α₃ it vanishes: α₁ā₁ + α₂ā₂ + α₃ā₃ = 0̄.

By the definition of equality of vectors in coordinate form, we obtain a system of linear homogeneous algebraic equations in α₁, α₂, α₃, whose determinant Δ = 1 ≠ 0; hence the system has only the trivial solution α₁ = α₂ = α₃ = 0. This means that the vectors ā₁, ā₂, ā₃ are linearly independent and therefore form a basis.

2) Expand the vector b̄ in this basis: b̄ = β₁ā₁ + β₂ā₂ + β₃ā₃.

Passing to the equality of vectors in coordinate form, we obtain a system of linear nonhomogeneous algebraic equations in β₁, β₂, β₃. Solving it (for example, by Cramer's rule), we find β₁, β₂, β₃ and obtain the decomposition of b̄ in the basis ā₁, ā₂, ā₃.
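The method of Example 2 can be sketched as follows. The coordinates of ā₁, ā₂, ā₃ and b̄ are hypothetical stand-ins (the example's actual data are not reproduced here); the steps, a determinant test followed by Cramer's rule, follow the text:

```python
# Basis test via the determinant, then expansion of b by Cramer's rule.
# All coordinates below are illustrative.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

a1, a2, a3 = (1, 0, 0), (1, 1, 0), (1, 1, 1)  # hypothetical vectors
b = (3, 2, 1)                                  # hypothetical vector

# The columns of the system matrix are the coordinates of a1, a2, a3.
M = [[a1[i], a2[i], a3[i]] for i in range(3)]
D = det3(M)
assert D != 0  # nonzero determinant -> linearly independent -> a basis

def replace_col(m, j, col):
    """Matrix m with column j replaced by col (for Cramer's rule)."""
    return [[col[i] if k == j else m[i][k] for k in range(3)]
            for i in range(3)]

betas = tuple(det3(replace_col(M, j, b)) / D for j in range(3))
print(betas)  # (1.0, 1.0, 1.0): here b = a1 + a2 + a3
```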

5°. Projection of a vector onto an axis. Projection properties. Let there be some axis l, that is, a straight line with a chosen direction on it, and let some vector ā be given. We define the concept of the projection of the vector ā onto the axis l.

Definition. The projection of the vector ā onto the axis l is the product of the modulus of this vector and the cosine of the angle φ between the axis l and the vector ā (Fig. 2.10):

pr_l ā = |ā|·cos φ. (2.17)

A consequence of this definition is the statement that equal vectors have equal projections (onto the same axis).

Let us note the properties of projections.

1) The projection of a sum of vectors onto some axis l is equal to the sum of the projections of the summand vectors onto the same axis:

pr_l(ā + b̄) = pr_l ā + pr_l b̄. (2.18)

2) The projection of the product of a scalar and a vector is equal to the product of this scalar and the projection of the vector onto the same axis:

pr_l(λā) = λ·pr_l ā. (2.19)

Corollary. The projection of a linear combination of vectors onto an axis is equal to the same linear combination of their projections:

pr_l(λ₁ā₁ + λ₂ā₂ + … + λ_k ā_k) = λ₁·pr_l ā₁ + λ₂·pr_l ā₂ + … + λ_k·pr_l ā_k.

We omit the proofs of these properties.
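The definition (2.17) and the linearity of projections can be checked numerically. This sketch represents the axis l by a direction vector and uses the fact that |ā|·cos φ equals the dot product of ā with the unit vector of the axis; all vectors are illustrative:

```python
import math

def proj(axis, a):
    """pr_l(a): dot product of a with the unit vector of the axis,
    which equals |a| * cos(angle between a and l)."""
    n = math.sqrt(sum(c * c for c in axis))
    return sum(ai * ci / n for ai, ci in zip(a, axis))

l = (0, 0, 2)  # an axis along Oz (any nonzero direction vector works)
a = (3, 4, 1)
b = (-1, 2, 5)
s = tuple(x + y for x, y in zip(a, b))

print(proj(l, a))                                        # 1.0
print(proj(l, s) - (proj(l, a) + proj(l, b)))            # ~0: property 1
print(proj(l, tuple(3 * x for x in a)) - 3 * proj(l, a)) # ~0: property 2
```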

6°. Rectangular Cartesian coordinate system in space. Decomposition of a vector in the unit vectors of the axes. Let three mutually perpendicular unit vectors be chosen as a basis; we introduce the special notation ī, j̄, k̄ for them. Placing their origins at a point O, we direct the coordinate axes Ox, Oy and Oz along them (according to the unit vectors ī, j̄, k̄); recall that an axis with a chosen positive direction, a reference point and a unit of length is called a coordinate axis.

Definition. An ordered system of three mutually perpendicular coordinate axes with a common origin and a common unit of length is called a rectangular Cartesian coordinate system in space.

The axis Ox is called the abscissa axis, Oy the ordinate axis, and Oz the applicate axis.

Let us turn to the expansion of an arbitrary vector ā in the basis ī, j̄, k̄. From the theorem (see §2.2, item 3°, (2.13)) it follows that ā can be uniquely expanded in the basis ī, j̄, k̄ (here, instead of the coordinate notation a₁, a₂, a₃, we use a_x, a_y, a_z):

ā = a_x·ī + a_y·j̄ + a_z·k̄. (2.21)

In (2.21), a_x, a_y, a_z are the (Cartesian rectangular) coordinates of the vector ā. The meaning of the Cartesian coordinates is established by the following theorem.

Theorem. The Cartesian coordinates a_x, a_y, a_z of the vector ā are the projections of this vector onto the axes Ox, Oy and Oz, respectively.

Proof. Let us place the vector ā at the origin of the coordinate system, the point O. Then its end coincides with some point M.

Let us pass through the point M three planes parallel to the coordinate planes Oyz, Oxz and Oxy (Fig. 2.11). We then obtain:

ā = ā_x + ā_y + ā_z. (2.22)

In (2.22) the vectors ā_x, ā_y and ā_z are called the components of the vector ā along the axes Ox, Oy and Oz.

Let α, β and γ denote the angles formed by the vector ā with the unit vectors ī, j̄, k̄, respectively. Then for the components we obtain the following formulas:

ā_x = a_x·ī = |ā|·cos α·ī, ā_y = a_y·j̄ = |ā|·cos β·j̄, ā_z = a_z·k̄ = |ā|·cos γ·k̄. (2.23)

From (2.21), (2.22) and (2.23) we find:

a_x = |ā|·cos α = pr_Ox ā; a_y = |ā|·cos β = pr_Oy ā; a_z = |ā|·cos γ = pr_Oz ā (2.23′)

- the coordinates a_x, a_y, a_z of the vector ā are the projections of this vector onto the coordinate axes Ox, Oy and Oz, respectively.

Comment. The numbers cos α, cos β, cos γ are called the direction cosines of the vector ā.

The modulus of the vector ā (the diagonal of the rectangular parallelepiped) is calculated by the formula

|ā| = √(a_x² + a_y² + a_z²). (2.24)

From formulas (2.23) and (2.24) it follows that the direction cosines can be calculated by the formulas

cos α = a_x/|ā|; cos β = a_y/|ā|; cos γ = a_z/|ā|. (2.25)

Squaring both sides of each of the equalities in (2.25) and adding the left-hand and right-hand sides of the resulting equalities term by term, we arrive at the formula

cos²α + cos²β + cos²γ = 1 (2.26)

- not any three angles form a certain direction in space, but only those whose cosines are related by relation (2.26).
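Formulas (2.24)-(2.26) can be checked on a concrete vector. This is a minimal sketch with an illustrative vector whose modulus comes out to a whole number:

```python
import math

a = (2, -3, 6)                              # illustrative vector
mod = math.sqrt(sum(c * c for c in a))      # modulus, formula (2.24)
cos_a, cos_b, cos_g = (c / mod for c in a)  # direction cosines, (2.25)

print(mod)                                  # 7.0
print(cos_a**2 + cos_b**2 + cos_g**2)       # ~1, identity (2.26)
```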

7°. Radius vector and point coordinates. Determining a vector by its beginning and end. Let us introduce a definition.

Definition. The radius vector of a point M (denoted r̄) is the vector connecting the origin O with this point (Fig. 2.12):

r̄ = OM. (2.27)

Any point in space corresponds to a certain radius vector (and vice versa). Thus, points in space are represented in vector algebra by their radius vectors.

Obviously, the coordinates x, y, z of the point M are the projections of its radius vector r̄ onto the coordinate axes:

x = pr_Ox r̄; y = pr_Oy r̄; z = pr_Oz r̄, (2.28′)

and thus

r̄ = x·ī + y·j̄ + z·k̄ (2.28)

- the radius vector of a point is the vector whose projections onto the coordinate axes are equal to the coordinates of this point. From this follow two notations: M(x; y; z) and r̄ = (x; y; z).

Let us obtain formulas for calculating the projections of a vector from the coordinates of its beginning, the point A(x₁; y₁; z₁), and its end, the point B(x₂; y₂; z₂).

Draw the radius vectors r̄_A, r̄_B and the vector AB (Fig. 2.13). We obtain that

AB = r̄_B − r̄_A = (x₂ − x₁)·ī + (y₂ − y₁)·j̄ + (z₂ − z₁)·k̄ (2.29)

- the projections of the vector onto the coordinate axes are equal to the differences of the corresponding coordinates of the end and the beginning of the vector.

8°. Some problems on Cartesian coordinates.

1) Conditions of collinearity of vectors. From the theorem (see §2.1, item 2°, formula (2.7)) it follows that for the vectors ā and b̄ to be collinear it is necessary and sufficient that the relation b̄ = λā hold. From this vector equality we obtain three equalities in coordinate form, b_x = λa_x, b_y = λa_y, b_z = λa_z, from which the condition of collinearity of vectors in coordinate form follows:

b_x/a_x = b_y/a_y = b_z/a_z (2.30)

- for the vectors ā and b̄ to be collinear it is necessary and sufficient that their respective coordinates be proportional.
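Condition (2.30) can be tested in code. To avoid dividing by a zero coordinate, this sketch checks the equivalent cross-multiplied form a_i·b_j = a_j·b_i for each pair of coordinates; the vectors are illustrative:

```python
# Collinearity test (2.30) via cross-multiplication: b = lam * a for some
# number lam iff corresponding coordinates are proportional.

def collinear(a, b):
    return all(a[i] * b[j] == a[j] * b[i]
               for i in range(3) for j in range(i + 1, 3))

print(collinear((1, -2, 3), (2, -4, 6)))  # True:  b = 2*a
print(collinear((1, -2, 3), (2, -4, 5)))  # False: third coordinates break it
```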

2) Distance between points. From representation (2.29) it follows that the distance d between the points A(x₁; y₁; z₁) and B(x₂; y₂; z₂) is determined by the formula

d = |AB| = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²). (2.31)
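Formula (2.31) translates directly into code; the points below are illustrative:

```python
import math

def distance(p, q):
    """Distance between two points, formula (2.31)."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

A = (1, 2, 3)
B = (4, 6, 3)
print(distance(A, B))  # 5.0 (a 3-4-5 right triangle in the plane z = 3)
```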

3) Division of a segment in a given ratio. Let points A(x₁; y₁; z₁) and B(x₂; y₂; z₂) and a ratio λ = AM : MB be given. It is required to find x, y, z, the coordinates of the point M (Fig. 2.14).

From the condition of collinearity of the vectors we have AM = λ·MB, whence r̄ − r̄_A = λ(r̄_B − r̄) and

r̄ = (r̄_A + λ·r̄_B)/(1 + λ). (2.32)

From (2.32) we obtain in coordinate form:

x = (x₁ + λx₂)/(1 + λ); y = (y₁ + λy₂)/(1 + λ); z = (z₁ + λz₂)/(1 + λ). (2.32′)

From formulas (2.32′) one can obtain formulas for calculating the coordinates of the midpoint of the segment AB by setting λ = 1:

x = (x₁ + x₂)/2; y = (y₁ + y₂)/2; z = (z₁ + z₂)/2. (2.32″)
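Formulas (2.32′) and the midpoint case can be sketched as follows, with illustrative endpoints:

```python
def divide(A, B, lam):
    """Coordinates of M dividing AB in the ratio lam = AM : MB (2.32')."""
    return tuple((a + lam * b) / (1 + lam) for a, b in zip(A, B))

A = (0, 0, 0)
B = (3, 6, 9)
print(divide(A, B, 2))  # (2.0, 4.0, 6.0): AM is twice as long as MB
print(divide(A, B, 1))  # (1.5, 3.0, 4.5): the midpoint, case (2.32'')
```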

Comment. Let us consider the segments AM and MB positive or negative depending on whether their direction coincides with the direction from the beginning A of the segment to its end B, or does not coincide. Then, using formulas (2.32)–(2.32″), one can also find the coordinates of a point dividing the segment externally, that is, so that the dividing point M lies on the extension of the segment AB rather than inside it. In this case, of course, λ ≠ −1.

4) Equation of a spherical surface. Let us compose the equation of a spherical surface, the locus of points M(x; y; z) lying at a given distance R from some fixed center, a point M₀(x₀; y₀; z₀). Obviously, in this case |M₀M| = R, and taking formula (2.31) into account,

(x − x₀)² + (y − y₀)² + (z − z₀)² = R². (2.33)

Equation (2.33) is the equation of the required spherical surface.
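Equation (2.33) gives an immediate membership test for points; the center, radius and test points below are illustrative:

```python
def on_sphere(M, M0, R):
    """True iff M satisfies (x-x0)^2 + (y-y0)^2 + (z-z0)^2 = R^2 (2.33)."""
    return sum((m - m0) ** 2 for m, m0 in zip(M, M0)) == R ** 2

M0 = (1, 2, 3)                       # center
print(on_sphere((4, 2, 3), M0, 3))   # True:  |M0M| = 3 = R
print(on_sphere((1, 2, 0), M0, 2))   # False: |M0M| = 3, not 2
```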