revised June 28, 2021
Note that because [4, −8, 3]^T ∈ ker(A), this means that

A [4, −8, 3]^T = [v1 v2 v3] [4, −8, 3]^T = 4v1 − 8v2 + 3v3 = 0
, so there's
a
linear dependency
among these vectors. In particular, this means that we can solve for any one of these
vectors in terms of the remaining vectors. For example,
v3 = −(4/3)v1 + (8/3)v2. This means that v3 ∈ span{v1, v2}, so
we don't need it in our spanning set for the image. We could, of course, have eliminated any one of these three
vectors in this manner, but it's good standard practice to solve for the later vectors in terms of their predecessors.
There are no other linear interdependencies that can be used to eliminate redundancy, so this is the best we can
do. That is,
im(A) = span{v1, v2}.
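This excerpt doesn't restate the matrix A itself, only its kernel, so as a sketch we can use any made-up matrix whose columns satisfy 4v1 − 8v2 + 3v3 = 0 and check the bookkeeping with SymPy (an assumption; the notes themselves use hand row reduction):

```python
from sympy import Matrix

# Hypothetical A = [v1 v2 v3]; the columns are chosen so that
# 4*v1 - 8*v2 + 3*v3 = 0, i.e. ker(A) = span{(4, -8, 3)}.
A = Matrix([[3, 0, -4],
            [6, 3,  0]])

R, pivots = A.rref()       # reduced row-echelon form and pivot column indices
print(pivots)              # (0, 1): leading 1's in the first two columns,
                           # so {v1, v2} is our spanning set for im(A)
print(A.nullspace()[0].T)  # kernel generator, proportional to (4, -8, 3)
```

The pivot columns reported by `rref()` are exactly the columns of the original A to keep, matching the observation above.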
Note, in particular, that we retained in our spanning set for the image exactly those column vectors of the
original matrix
A
that eventually yielded Leading 1's in the reduced row-echelon form of this matrix. We can
always choose our spanning set for the image such that this is the case. In order to understand this better, we
need one more very important definition.
Definition: A set of vectors {v1, v2, …, vk} ∈ R^n is called linearly independent if given any linear combination of the form c1v1 + ⋯ + ckvk = 0, this implies that c1 = ⋯ = ck = 0. That is, there is no nontrivial way to combine these vectors to yield the zero vector.
Note that this definition also means that it's impossible to solve for any one of these vectors in terms of the
others. This is the essential quality of linear independence - there is no redundancy among a linearly
independent set of vectors.
Test for linear independence
Given a collection of vectors {v1, v2, …, vk} ∈ R^n, if we write A = [v1 ⋯ vk] (the matrix with these vectors as its columns), then the expression c1v1 + ⋯ + ckvk = 0 can be expressed as Ac = 0, where c = [c1, …, ck]^T. So the statement that c1v1 + ⋯ + ckvk = 0 implies c1 = ⋯ = ck = 0 can be restated very succinctly as: Ac = 0 implies c = 0. That is, a collection of vectors {v1, v2, …, vk} will be linearly independent if and only if ker(A) = {0}.
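This test is easy to mechanize: stack the vectors as the columns of A and check whether Ac = 0 has only the trivial solution, equivalently whether A has rank k. A small sketch (the example vectors are made up, not from the notes):

```python
from sympy import Matrix

def linearly_independent(vectors):
    """v1, ..., vk are independent iff ker(A) = {0} for A = [v1 ... vk],
    i.e. iff A has rank k (no free variables in Ac = 0)."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    return A.rank() == len(vectors)

print(linearly_independent([(1, 0, 2), (0, 1, 1)]))    # True
print(linearly_independent([(1, 0), (0, 1), (2, 3)]))  # False: 3 vectors in R^2
```

The second call must return False for any choice of three vectors in R^2, since Ac = 0 then has more unknowns than equations and so a nontrivial solution.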
In our example above, we found that for A = [v1 v2 v3], ker(A) = span{[4, −8, 3]^T}. Therefore these column vectors were not linearly independent.
Definition: Given a subspace V of R^n, a collection of vectors {v1, v2, …, vk} ∈ V is called a basis of V if Span{v1, v2, …, vk} = V and {v1, v2, …, vk} are linearly independent.
A basis is a minimal spanning set, and it's important to note that any given subspace can have many different
bases.
Note: R^n is itself a subspace of R^n (the whole space). The standard basis E = {e1, e2, …, en} is familiar to us, but, in fact, any linearly independent collection of n vectors {v1, v2, …, vn} in R^n would provide an alternate basis for R^n.
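For n vectors in R^n in particular, independence (and hence the basis property) amounts to the square matrix [v1 ⋯ vn] being invertible. A minimal sketch with made-up vectors:

```python
from sympy import Matrix

# Three made-up vectors in R^3: they form a basis of R^3 exactly when the
# square matrix with them as columns has nonzero determinant, i.e. when
# ker(A) = {0}, which for a square matrix also forces the columns to span R^3.
v1, v2, v3 = (1, 0, 0), (1, 1, 0), (1, 1, 1)
A = Matrix.hstack(Matrix(v1), Matrix(v2), Matrix(v3))

print(A.det())  # nonzero, so {v1, v2, v3} is a basis of R^3
```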