Systems of Differential Equations

1 Matrices and Systems of Linear Equations

An n × m matrix is an array A = (aij) of the form

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix}

where each aij is a real or complex number.
The matrix has n rows and m columns.
For 1 ≤ j ≤ m, 1 ≤ i ≤ n, the n × 1 matrix

\begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{pmatrix}

is called the j−th column of A and the 1 × m matrix

\begin{pmatrix} a_{i1} & a_{i2} & \cdots & a_{im} \end{pmatrix}

is called the i−th row of A.
We can add n × m matrices as follows. If A = (aij) and B = (bij), then C = A + B is the matrix (cij) defined by

c_{ij} = a_{ij} + b_{ij}.
We can multiply an n × m matrix A = (aij) by an m × p matrix B = (bjk) to get an n × p matrix C = (cik) defined by

c_{ik} = \sum_{j=1}^{m} a_{ij} b_{jk}.

Thus the element cik is the dot product of the i−th row of A and the k−th column of B.
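
As a quick numerical illustration (a sketch in Python with NumPy; the matrices are arbitrary examples, not taken from the notes):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])          # 2 x 3
    B = np.array([[7, 8],
                  [9, 10],
                  [11, 12]])           # 3 x 2

    C = A @ B                          # 2 x 2 product

    # Each entry c_ik is the dot product of row i of A with column k of B.
    i, k = 0, 1
    assert C[i, k] == np.dot(A[i, :], B[:, k])
    print(C)                           # [[ 58  64] [139 154]]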
Both the operations of matrix addition and matrix
multiplication are associative. That is,

(A + B) + C = A + (B + C), (AB)C = A(BC).

Multiplication of n × n matrices is not always commutative. For instance, if

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},

then

AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA.
We will write vectors x = (x1, . . . , xn) in Rn both as
row vectors and column vectors.
Matrices are useful for dealing with systems of linear
equations.

Since our interest here is in treating systems of differential
equations, we will only consider linear systems of
n equations in n unknowns.
We can write the system

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n

as a single vector equation

Ax = b

where A is the n × n matrix (aij), x is an unknown n−vector, and b is a known n−vector.
Let ei be the n−vector with zeroes everywhere except
in the i−th position and a 1 there.
The n×n matrix I whose i−th row is ei is called the
n × n identity matrix.
For any n × n matrix A we have

AI = IA = A.

An n × n matrix A is invertible if there is another
n × n matrix B such that AB = BA = I. We also
call A non-singular. A singular matrix is one that is not
invertible.

The matrix B is unique and called the inverse of A.
It is usually written A−1.
Let 0 denote the n−vector all of whose entries are 0.
A collection u1, u2, . . . , uk of vectors in Rn is called a linearly independent set of vectors in Rn if, whenever we have a linear combination

c_1 u_1 + c_2 u_2 + \cdots + c_k u_k = 0

with constants (scalars) c_1, . . . , c_k, we must have c_i = 0 for every i.
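
In practice one can test linear independence numerically. A minimal NumPy sketch (the vectors are my own example), using the fact that k vectors are linearly independent exactly when the matrix having them as columns has rank k:

    import numpy as np

    u1 = np.array([1.0, 0.0, 1.0])
    u2 = np.array([0.0, 1.0, 1.0])
    u3 = np.array([1.0, 1.0, 2.0])     # u3 = u1 + u2, so the set is dependent

    U = np.column_stack([u1, u2, u3])
    print(np.linalg.matrix_rank(U))    # 2 < 3, hence linearly dependent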


Fact. The following conditions on an n × n matrix A are equivalent to A being invertible.

1. the rows of A form a linearly independent set of vectors
2. the columns of A form a linearly independent set of
vectors
3. for every vector b, the system
Ax = b
has a unique solution
4. det(A) ≠ 0
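
As a concrete check of conditions 3 and 4 (an illustrative NumPy sketch with an arbitrary example matrix):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    # Condition 4: a nonzero determinant means A is invertible.
    print(np.linalg.det(A))            # 5.0 (up to rounding), nonzero

    # Condition 3: the system Ax = b then has a unique solution.
    x = np.linalg.solve(A, b)
    assert np.allclose(A @ x, b)
    print(x)                           # [1. 3.]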

We define the number det(A) inductively by

\det(A) = \sum_{i=1}^{n} (-1)^{i+1} a_{i1} \det(A[i \mid 1])

where A[i | 1] is the (n−1) × (n−1) matrix obtained by deleting the first column and i−th row of A.
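Since the notes call for examples at this point, here is a small Python sketch of the inductive definition (my own illustration), expanding along the first column:

    def det(A):
        """Determinant by cofactor expansion along the first column."""
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for i in range(n):                  # 0-based i, so the sign is (-1)**i
            # A[i | 1]: delete row i and the first column.
            minor = [row[1:] for k, row in enumerate(A) if k != i]
            total += (-1) ** i * A[i][0] * det(minor)
        return total

    print(det([[2, 1], [1, 3]]))                     # 5
    print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3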

2 Systems of Differential Equations

Let U be an open subset of Rn, let I be an open interval in R, and let f : I × Rn → Rn be a function from I × Rn to Rn.
The equation

x' = f(t, x)     (1)

is called a first order ordinary differential equation in Rn. We emphasize here that x is an n−dimensional vector in Rn. We also consider the initial value problem

x' = f(t, x),  x(t0) = x0     (2)

where t0 ∈ I and x0 ∈ U.
A solution to the IVP (2) is a differentiable function x(t) defined on an open subinterval J ⊂ I containing t0 such that

x'(t) = f(t, x(t)),  x(t0) = x0

for t ∈ J.
The general solution to (1) is an expression

x(t, c) (3)

where c is an n−dimensional constant vector in Rn
such that every solution of (1) can be written in the form
(3) for some choice of c.

If we write out the D.E. (1) in coordinates, we get a system of first order differential equations as follows.

x1' = f1(t, x1, . . . , xn)
x2' = f2(t, x1, . . . , xn)
\vdots
xn' = fn(t, x1, . . . , xn)     (4)
Fact: The n−th order scalar D.E. is equivalent to a simple first order n−dimensional system.

Consider

y^{(n)} = g(t, y, y', . . . , y^{(n−1)}).     (5)

Letting x1 = y, x2 = y', . . . , xn = y^{(n−1)}, we get

x1' = x2
x2' = x3
\vdots
x_{n−1}' = xn
xn' = g(t, x1, . . . , xn).     (6)

If we have a solution y(t) to (5), and set x1(t) = y(t), x2(t) = y'(t), . . . , xn(t) = y^{(n−1)}(t), then x(t) = (x1(t), . . . , xn(t)) is a solution to the system (6). Conversely, if we have a solution x(t) = (x1(t), . . . , xn(t)) to the system (6), then putting y(t) = x1(t) gives a solution to (5).
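
To make the reduction concrete, here is a short SciPy sketch (my own example, not from the notes): the scalar equation y'' = −y becomes the system x1' = x2, x2' = −x1, and with y(0) = 0, y'(0) = 1 the exact solution is y(t) = sin t.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, x):
        x1, x2 = x                    # x1 = y, x2 = y'
        return [x2, -x1]              # x1' = x2, x2' = -x1

    sol = solve_ivp(f, (0.0, np.pi), [0.0, 1.0], dense_output=True)
    t = np.linspace(0.0, np.pi, 5)
    print(sol.sol(t)[0])              # approximately sin(t)
    print(np.sin(t))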

The following existence and uniqueness theorem is proved
in more advanced courses.

Theorem (Existence-Uniqueness Theorem for systems). Let U be an open set in Rn, and let I be an open interval in R. Let f(t, x) be a C1 function of the variables (t, x) defined in I × U with values in Rn. Then, for each t0 ∈ I and x0 ∈ U, there is a unique solution x(t) to the initial value problem

x' = f(t, x),  x(t0) = x0.
If the right side of the system f (t, x) does not depend
on time, then one calls the system autonomous
(or time-independent). Otherwise, one calls the system
non-autonomous or time-dependent.
There is a simple geometric description of autonomous systems in Rn. In that case, we consider

x' = f(x)     (7)

where f is a C1 function defined in an open subset U in Rn. We think of f as a vector field in U and solutions x(t) of (7) as curves in U which are everywhere tangent to f.

2.1 Linear Systems of Differential Equations: General
Properties

The system

x' = A(t)x + g(t)     (8)

in which A(t) is a continuous n × n matrix valued function of t and g(t) is a continuous n−vector valued function of t is called a linear system of differential equations (or a linear differential equation) in Rn.

As in the case of scalar equations, one gets the general solution to (8) in two steps. First, one finds the general solution xh(t) to the associated homogeneous system

x' = A(t)x.     (9)

Then, one finds a particular solution xp(t) to (8) and gets the general solution to (8) as the sum

x(t) = xh(t) + xp(t).
Accordingly, we will examine ways of doing both tasks.
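
For example (a standard illustration, not worked in these notes): if A is a constant invertible matrix and g(t) ≡ b is a constant vector, then the constant function xp(t) = −A−1 b is a particular solution of (8), since xp' = 0 = A(−A−1 b) + b, and the general solution is x(t) = xh(t) − A−1 b.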

Let yi(t) be a collection of Rn−valued functions for 1 ≤ i ≤ k. We say that they form a linearly independent set of functions if, whenever c1, . . . , ck are k scalars such that

c1 y1(t) + c2 y2(t) + \cdots + ck yk(t) = 0

for all t, we have that ci = 0 for each i.
An n × n matrix Φ(t) whose columns are n linearly independent solutions to the homogeneous linear system (9) is called a fundamental matrix for (9).

A necessary and sufficient condition for a matrix Φ(t) of solutions to be a fundamental matrix is that det(Φ(t)) ≠ 0 for some (or, equivalently, any) t.

If y1(t), . . . , yn(t) are n solutions to (9), and Φ(t) is the matrix whose columns are the functions yi(t), then the function

W(t) = det(Φ(t))

is called the Wronskian of the collection {y1(t), . . . , yn(t)} of solutions. It is then a fact that W(t) vanishes at some point t0 if and only if it vanishes at every point t.
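
For instance (a worked example of my own, not from the notes): for the constant coefficient system x' = Ax with

A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},

the functions y1(t) = (cos t, −sin t) and y2(t) = (sin t, cos t) are solutions, and

W(t) = \det \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix} = \cos^2 t + \sin^2 t = 1

for all t, so the matrix Φ(t) with columns y1(t), y2(t) is a fundamental matrix.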

The general solution to (9) has the form

x(t) = Φ(t)c

where Φ(t) is any fundamental matrix for (9) and c is a constant vector.

Thus, we have to find fundamental matrices and particular
solutions. We will do this explicitly below for
n = 2, 3 and constant matrices A.

To close this section, we observe an analogy between the system

x' = Ax

and the scalar equation x' = ax.
One can define the matrix exp(A) = e^A by the power series

e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots
It can be shown that the matrix series on the right
side of this equation converges for any A. The series
represents a matrix with many properties analogous to
the usual exponential function of a real variable.
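
As a numerical illustration (a sketch assuming SciPy is available; the matrix is an arbitrary example), a truncated partial sum of the series can be compared against scipy.linalg.expm:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])

    # Partial sum of the exponential series (first 20 terms).
    S = np.zeros_like(A)
    term = np.eye(2)
    for k in range(1, 21):
        S = S + term
        term = term @ A / k

    print(S)
    print(expm(A))                    # the two agree to high accuracy
    assert np.allclose(S, expm(A))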

In particular, for a real number t, the matrix function e^{tA} is a differentiable matrix valued function and its derivative, computed by differentiating the series

e^{tA} = \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}

term by term, satisfies

\frac{d}{dt} e^{tA} = A e^{tA}.
It follows that, for each vector x0, the vector function

x(t) = e^{tA} x0

is the unique solution to the IVP

x' = Ax,  x(0) = x0.

Hence, the matrix e^{tA} is a fundamental matrix for the system

x' = Ax.
This observation is useful in certain circumstances, but, in general, it is hard to compute e^{tA} directly. In practice, the methods involving eigenvalues described in the next section are easier to use to find the general solution to

x' = Ax.