Mathematics 581   Problem Set 7   Due Wednesday, May 29

1. Let V be a vector space over a field F.  Let W be a subspace of V and U be a subspace of W. Prove that U is a subspace of  V.
***
U  is nonempty, since it contains the zero vector of  W  (which is also the zero vector of  V).  Next, suppose  u  and  v  are in  U  and  a  and  b  are in  F.  Because  U  is a subspace of  W,  au + bv  is in  U, using the vector addition and scalar multiplication of  W.  But these are the same operations as the vector addition and scalar multiplication of  V.  Therefore,  au + bv  is in  U  using  V's operations; so  U  is a subspace of  V.
***

2.  Let  F  be any field.  Here is a system of m  linear equations  in  n  unknowns in  F.

a11x1 + a12x2 + ... + a1nxn = b1,
a21x1 + a22x2 + ... + a2nxn = b2,
......
am1x1 + am2x2 + ... + amnxn = bm.

All of the coefficients  aij  and all of the constants  bi  are in  F.   A solution of the system is an n-tuple  (r1, r2, ... , rn)  in  Fn which satisfies all of the equations when each  ri  is substituted for  the corresponding  xi.  (Each  ri  is an element of  F.)

The system of equations is called homogeneous if every  bi = 0.

(a)  Show that if every  bi = 0, then the set of all solutions is a subspace of  Fn.
***
Let  (x1, x2, ... , xn)  and  (y1, y2, ... , yn)  both be solutions, and let  a  and  b  be scalars.  Since every  bi = 0,  for each  i,

Σ_{j=1}^{n} aij(axj + byj)  =  a[Σ_{j=1}^{n} aijxj] + b[Σ_{j=1}^{n} aijyj]  =  a·0 + b·0  =  0,

so that  a(x1, x2, ... , xn) + b(y1, y2, ... , yn) = (ax1 + by1, ... , axn + byn)  is a solution, too.  The zero tuple  (0, 0, ..., 0)  is a solution, so the set of solutions is nonempty.  Therefore, the set of all solutions is a subspace.
***
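The closure argument above can be checked numerically.  Here is a minimal Python sketch; the particular homogeneous system and the solutions u, v are made up for illustration (they are not from the problem set), with Fraction standing in for the field Q.

```python
from fractions import Fraction

# An illustrative homogeneous system over Q (an assumption, not from the text):
#   2x1 -  x2 +  x3 = 0
#    x1 +  x2 - 3x3 = 0
A = [[Fraction(2), Fraction(-1), Fraction(1)],
     [Fraction(1), Fraction(1), Fraction(-3)]]

def is_solution(x):
    """True when every equation sum_j aij*xj = 0 is satisfied."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) == 0 for row in A)

# Two solutions found by hand; the solution space here is spanned by (2, 7, 3).
u = [Fraction(2), Fraction(7), Fraction(3)]
v = [Fraction(-4), Fraction(-14), Fraction(-6)]

# Any linear combination au + bv is again a solution, as the proof shows.
a, b = Fraction(5), Fraction(-3, 2)
w = [a * ui + b * vi for ui, vi in zip(u, v)]
assert is_solution(u) and is_solution(v) and is_solution(w)
```

Fraction is used rather than floats so that every arithmetic step is exact field arithmetic in Q.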

(b)  Show that if the set of all solutions is a subspace of  Fn, then every  bi = 0.
***
If the set of solutions is a subspace, then  (0, 0, ..., 0)  is a solution, because the zero vector belongs to every subspace.  So for each  i,

0  =  Σ_{j=1}^{n} aij·0  =  bi.
***

3.  Let  F  be a field.  We define a nonempty subset  E  of  F  to be a subfield of  F  if (i) the additive identity element  0  of  F  and the multiplicative identity element  1  of  F  are in  E, and (ii) under the operations of addition and multiplication given by  F,  E  itself is a field.  For example, the real numbers form a subfield of the field of complex numbers, and the rational numbers form a subfield of the field of real numbers.

If  E  is a subfield of  F, we can make  F  into a vector space over the field  E by (i) declaring that the vector addition in  F  is the addition that already exists in  F  because it is a field,
(ii) declaring that if  a  is an element of  E (now playing the role of the field of scalars) and  v  is an element of  F (now playing the role of vector space) then the scalar product  av  is the product given by multiplication in the field  F.

Prove that  F is a vector space over  E  by carefully checking all of the eight or so axioms for a vector space.
***
The addition in any field  F  makes  F  into an Abelian group.  This means that with vector addition being the field addition, the first four axioms of a vector space are satisfied.  The next four axioms concern multiplication between elements of  E  and elements of  F.  They are
(a + b)v = av + bv,
a(v + w) = av + aw,
a(bv) = (ab)v,
1v = v.

The first two are incarnations of the distributive laws in the field  F,  the third is the associative law of multiplication in  F,  and the last expresses the fact that  1  is the multiplicative identity in  F.  (Note that  1  lies in  E  by the definition of a subfield, so it is available as a scalar.)
***
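The four scalar-multiplication axioms can be illustrated in the concrete case  E = R,  F = C.  In this sketch, floats play the scalars and Python complex numbers play the vectors; the particular values are chosen so that binary floating point represents every intermediate result exactly.

```python
# Concrete instance of problem 3: F = C as a vector space over E = R.
# Scalars are floats, "vectors" are complex numbers; the scalar product av
# is just multiplication in the field C.
a, b = 2.0, -0.5            # scalars in E = R (exact in binary floating point)
v, w = 1 + 2j, 3 - 1j       # vectors in F = C

assert (a + b) * v == a * v + b * v   # (a + b)v = av + bv
assert a * (v + w) == a * v + a * w   # a(v + w) = av + aw
assert a * (b * v) == (a * b) * v     # a(bv) = (ab)v
assert 1.0 * v == v                   # 1v = v
```

Of course a numeric check is not a proof; it only illustrates that the axioms reduce to field identities in C, exactly as the solution argues.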

4. Here is an infinite sequence of polynomials in R[x] which is a bit like the sequence of ordinary powers of x.

(x)0 = 1, (x)1 = x, (x)2 = x(x - 1), (x)3 = x(x - 1)(x - 2),
..., (x)n = x(x - 1) ...(x - n + 1), ...

Notice that each  (x)n  has degree  n and  leading coefficient  1.

(a)  Viewing  R[x]  as a vector space over the field  R of real numbers, express each of the powers  x0, x1, x2, x3, and  x4, explicitly as a linear combination of  (x)0, (x)1, (x)2, (x)3, and  (x)4. (This gets to be a lot of work.)
***
x0 = 1(x)0,
x1 = 1(x)1,
x2 = 1(x)1 + 1(x)2,
x3 = 1(x)1 + 3(x)2 + 1(x)3,
x4 = 1(x)1 + 7(x)2 + 6(x)3 + 1(x)4.
***
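The five expansions above can be verified with a short Python sketch that represents polynomials as integer coefficient lists (constant term first); the helper names poly_mul, falling, and combine are ad hoc for this check.

```python
def poly_mul(p, q):
    """Multiply polynomials stored as coefficient lists, constant term first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def falling(n):
    """(x)n = x(x - 1)...(x - n + 1) as a coefficient list; (x)0 = 1."""
    p = [1]
    for k in range(n):
        p = poly_mul(p, [-k, 1])   # multiply by (x - k)
    return p

def combine(coeffs):
    """Expand the linear combination sum_r coeffs[r]*(x)r into one list."""
    out = [0] * len(coeffs)
    for r, c in enumerate(coeffs):
        for i, a in enumerate(falling(r)):
            out[i] += c * a
    return out

# The expansions from part (a); e.g. [0, 1, 7, 6, 1] encodes
# x^4 = 0(x)0 + 1(x)1 + 7(x)2 + 6(x)3 + 1(x)4.
assert combine([1]) == [1]                          # x^0
assert combine([0, 1]) == [0, 1]                    # x^1
assert combine([0, 1, 1]) == [0, 0, 1]              # x^2
assert combine([0, 1, 3, 1]) == [0, 0, 0, 1]        # x^3
assert combine([0, 1, 7, 6, 1]) == [0, 0, 0, 0, 1]  # x^4
```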

(b)  Let  T  be the set  {(x)n:  n greater than or equal to 0}.  Prove that  T  is a linearly independent set.
***
Suppose that some linear combination of elements of  T  is equal to 0, the zero polynomial:

(*)                  a0(x)0 + a1(x)1 + a2(x)2 + ... + an-1(x)n-1 + an(x)n = 0,

where  a0, a1, a2, ..., an-1, an  are real numbers.  We argue by induction on  n  that all of the coefficients are 0.  If  n = 0, then  a0(x)0 = 0,  that is,  a0·1 = 0.  So  a0 = 0.  If  n > 0,  assume the result is true for  n - 1.  The only term in equation (*) which contains the power  x^n  is  an(x)n,  so the coefficient of  x^n  on the left side is  an.  Since the right side is the zero polynomial,  an = 0.  Then  (*)  becomes  a0(x)0 + a1(x)1 + a2(x)2 + ... + an-1(x)n-1 = 0,  and by induction  a0 = a1 = a2 = ... = an-1 = 0  as well.  Q.E.D.
***

(c)  Prove by induction on  n  that every polynomial  p(x)  of degree  n  in  R[x]  is a linear combination of  (x)0, (x)1, (x)2, ... , (x)n.  (Notice that parts (b) and (c) together prove that  T  is a basis for  R[x].)
***
To begin the induction, let  n = 0.  If  p(x)  is a polynomial of degree 0, then  p(x) = a,  a constant.  Then  p(x) = a(x)0.  Now for the induction step.  Assume the result is true for all polynomials of degree less than or equal to  k - 1.  Let  p(x)  have degree  k,  and let the leading coefficient, the coefficient of  x^k,  be  c.  When you multiply out  (x)k  in terms of powers of  x,  its leading coefficient is  1.  So  p(x) - c(x)k  has degree less than or equal to  k - 1,  and by induction it is a linear combination of  (x)0, (x)1, (x)2, ... , (x)k-1.  Just add  c(x)k  to that linear combination, and you get a linear combination for  p(x).
***

(d)  Prove, again by induction on  n,  if the polynomial p(x) in part (c) has integer coefficients then all of the coefficients in the linear combination in (c) are integers.
***
The proof is almost exactly the same as for part (c).
To begin the induction, let  n = 0.  If  p(x)  is a polynomial of degree 0 with integer coefficients, then  p(x) = a,  where  a  is an integer.  Then  p(x) = a(x)0.  Now for the induction step.  Assume the result is true for all polynomials of degree less than or equal to  k - 1.  Let  p(x)  have degree  k,  and let the leading coefficient, the coefficient of  x^k,  be  c,  an integer.  When you multiply out  (x)k  in terms of powers of  x,  its leading coefficient is  1  and all of its coefficients are integers.  So  p(x) - c(x)k  has integer coefficients and degree less than or equal to  k - 1,  and by induction it is a linear combination of  (x)0, (x)1, (x)2, ... , (x)k-1  with integer coefficients.  Just add  c(x)k  to that linear combination, and you get a linear combination for  p(x)  with all of the coefficients being integers.
***
(e)  From part (d) you now know that  x^n = Σ_{r=0}^{n} S(n, r)(x)r,  where each coefficient  S(n, r)  is an integer.  Show that

(i)  for every  n,  S(n, n) = 1,
(ii)  for every  n > 0,  S(n, 0) = 0,
(iii) if  r > n, then the convention is that  S(n, r) = 0 (nothing to prove).
***
(i)  Written in powers of  x,  (x)n  has degree  n  and leading coefficient  1,  while each  (x)r  with  r < n  has degree less than  n.  So the coefficient of  x^n  on the right side of the equation in part (e) is  S(n, n),  and comparing leading coefficients gives  S(n, n) = 1.
(ii)  When you substitute  x = 0  in  (x)r,  for  r > 0,  you get  (0)r = 0,  because each such  (x)r  has a factor of  x.  Now substitute  x = 0  into the equation in part (e)  (here  n > 0,  so  0^n = 0),  and you get

0  =  0^n  =  Σ_{r=0}^{n} S(n, r)(0)r  =  S(n, 0)·1  =  S(n, 0).
***

(f)  Starting from the fact that  (x)_{n+1} = (x)n(x - n),  show that for  r  less than or equal to  n  and greater than  0,

S(n + 1, r) = S(n, r - 1) + rS(n, r).

(Hint: Start with  x^n = Σ_{j=0}^{n} S(n, j)(x)j,  and multiply each side by  x.  When you distribute the factor  x  through the sum on the right, write each  x  as  [(x - j) + j],  according to the term that it is in.  You also have  x^{n+1} = Σ_{k=0}^{n+1} S(n + 1, k)(x)k.  Now you can equate the coefficients of  (x)r  on the two sides of the equation, because the set of polynomials  T  is a linearly independent set.)

(Terminology:  the numbers  S(n, r)  are commonly called Stirling numbers of the second kind.  The Stirling numbers of the first kind are the coefficients that you get when you multiply out each  (x)n = x(x - 1) ...(x - n + 1)  to express it as a sum of powers of  x.)
***
x^{n+1}  =  (x^n)x  =  [Σ_{j=0}^{n} S(n, j)(x)j]x  =  Σ_{j=0}^{n} S(n, j)(x)j[(x - j) + j]

=  Σ_{j=0}^{n} S(n, j)(x)_{j+1}  +  Σ_{j=0}^{n} jS(n, j)(x)j,

using  (x)j(x - j) = (x)_{j+1}.  Written out, this is

S(n, 0)(x)1 + S(n, 1)(x)2 + ... + S(n, n-1)(x)n + S(n, n)(x)_{n+1}
+ 0S(n, 0)(x)0 + 1S(n, 1)(x)1 + 2S(n, 2)(x)2 + ... + nS(n, n)(x)n

=  0S(n, 0)(x)0 + [S(n, 0) + 1S(n, 1)](x)1 + [S(n, 1) + 2S(n, 2)](x)2 + ...
+ [S(n, n-1) + nS(n, n)](x)n + S(n, n)(x)_{n+1}.

On the other hand,  x^{n+1} = Σ_{k=0}^{n+1} S(n + 1, k)(x)k.  Now, for  r  less than or equal to  n  and greater than  0,  equate the coefficients of  (x)r  to get  S(n + 1, r) = S(n, r - 1) + rS(n, r).  You can equate coefficients like this because  T  is a linearly independent set.
***

(g)  Use the results in  (e)  and  (f)  to calculate recursively the coefficients  0, 1, 7, 6, 1  that you found for  x^4  in part (a) above.

S(n, r)        r = 0    r = 1    r = 2    r = 3    r = 4    r = 5

n = 0            1

n = 1            0          1

n = 2            0          1         1

n = 3            0          1         3         1

n = 4            0          1         7         6         1

n = 5            0          1        15       25       10        1  (not required)
***
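The recursive calculation in part (g) can also be sketched in Python: seed the table with S(0, 0) = 1, use S(n, 0) = 0 and S(n, n) = 1 from part (e), and fill the rest of each row with the recurrence from part (f).  The helper name stirling_table is ad hoc for this sketch.

```python
def stirling_table(nmax):
    """Rows S(n, 0..n) for n = 0..nmax, built from S(0, 0) = 1 using
    S(n+1, r) = S(n, r-1) + r*S(n, r), with S(n, r) = 0 for r > n.
    The padding zero also gives S(n+1, n+1) = S(n, n) = 1, part (e)(i)."""
    rows = [[1]]
    for n in range(nmax):
        prev = rows[-1] + [0]   # pad so prev[r] exists up to r = n + 1
        row = [prev[r - 1] + r * prev[r] if r > 0 else 0
               for r in range(n + 2)]
        rows.append(row)
    return rows

rows = stirling_table(5)
assert rows[4] == [0, 1, 7, 6, 1]        # the coefficients of x^4 from part (a)
assert rows[5] == [0, 1, 15, 25, 10, 1]  # the extra row in the table above
assert all(r[-1] == 1 for r in rows)     # S(n, n) = 1, part (e)(i)
assert all(r[0] == 0 for r in rows[1:])  # S(n, 0) = 0 for n > 0, part (e)(ii)
```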