Summary of Chapter 3 in
Finite Mathematics /
Finite Mathematics & Applied Calculus
Topic: Matrix Algebra



Basic Definitions

An m×n matrix A is a rectangular array of real numbers with m rows and n columns. (Rows are horizontal and columns are vertical.) The numbers m and n are the dimensions of A.

The real numbers in the matrix are called its entries. The entry in row i and column j is called aij or Aij.

Example

Following is a 4×5 matrix with the entry A23 = 10 highlighted (shown in brackets).

    A =
        [  0     1      2     0     3 ]
        [ 1/3   -1    [10]   1/3    2 ]
        [  3     1      0     1    -3 ]
        [  2     1      0     0     1 ]

Operations with Matrices

Transpose
The transpose, AT, of a matrix A is the matrix obtained from A by writing its rows as columns. If A is an m×n matrix and B = AT, then B is the n×m matrix with bij = aji.

Sum, Difference
If A and B have the same dimensions, then their sum, A+B, is obtained by adding corresponding entries. In symbols, (A+B)ij = Aij + Bij. If A and B have the same dimensions, then their difference, A - B, is obtained by subtracting corresponding entries. In symbols, (A-B)ij = Aij - Bij.

Scalar Multiple
If A is a matrix and c is a number (sometimes called a scalar in this context), then the scalar multiple, cA, is obtained by multiplying every entry in A by c. In symbols, (cA)ij = c(Aij).

Product
If A has dimensions m×n and B has dimensions n×p, then the product AB is defined, and has dimensions m×p. The entry (AB)ij is obtained by multiplying row i of A by column j of B, which is done by multiplying corresponding entries together and then adding the results.

Examples

Transpose

    [  0    1    2 ]T     [ 0   1/3 ]
    [ 1/3  -1   10 ]   =  [ 1   -1  ]
                          [ 2   10  ]

Sum & Scalar Multiple

    [  0    1 ]       [  1   -1 ]     [  2   -1 ]
    [ 1/3  -1 ]  + 2  [ 2/3  -2 ]  =  [ 5/3  -5 ]

Product

    [  0    1 ]   [  1   -1 ]     [  2/3   -2  ]
    [ 1/3  -1 ]   [ 2/3  -2 ]  =  [ -1/3   5/3 ]

Visit our Matrix Algebra Tool for on-line matrix algebra computations.
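
These operations can also be checked numerically. The following is a brief sketch using Python with NumPy (an addition to this summary, not part of the original text; any matrix software will do) that reproduces the transpose, sum, scalar multiple, and product computed above.

    import numpy as np

    M = np.array([[0, 1, 2], [1/3, -1, 10]])   # the 2x3 matrix from the transpose example
    A = np.array([[0, 1], [1/3, -1]])
    B = np.array([[1, -1], [2/3, -2]])

    print(M.T)         # transpose: a 3x2 matrix
    print(A + 2*B)     # sum and scalar multiple: [[2, -1], [5/3, -5]]
    print(A @ B)       # matrix product: [[2/3, -2], [-1/3, 5/3]]
    print(B @ A)       # note that AB and BA need not agree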

Algebra of Matrices

The n×n identity matrix is the matrix I that has 1's down the main diagonal and 0's everywhere else. In symbols, Iij = 1 if i = j and 0 if i ≠ j.

A zero matrix is one whose entries are all 0.

The various matrix operations, addition, subtraction, scalar multiplication and matrix multiplication, have the following properties.
A+(B+C) = (A+B)+C Additive associative law
A+B = B+A Additive commutative law
A+O = O+A = A Additive identity law
A+( - A) = O = ( - A)+A Additive inverse law
c(A+B) = cA+cB Distributive law
(c+d)A = cA+dA Distributive law
1A = A Scalar unit
0A = O Scalar zero
A(BC) = (AB)C Multiplicative associative law
AI = IA = A Multiplicative identity law
A(B+C) = AB + AC Distributive law
(A+B)C = AC + BC Distributive law
OA = AO = O Multiplication by zero matrix
(A+B)T = AT + BT Transpose of a sum
(cA)T = c(AT) Transpose of a scalar multiple
(AB)T = BTAT Transpose of a matrix product
The one rule that is conspicuously absent from this list is commutativity of the matrix product. In general, matrix multiplication is not commutative: AB need not equal BA.

Examples

Following is the 4×4 identity matrix.

    I =
        [ 1  0  0  0 ]
        [ 0  1  0  0 ]
        [ 0  0  1  0 ]
        [ 0  0  0  1 ]

The following illustrates the failure of the commutative law for matrix multiplication.
    A  =  [  0    1 ]         B  =  [  1    -1 ]
          [ 1/3  -1 ]               [ 2/3   -2 ]

    AB  =  [  2/3   -2  ]     BA  =  [ -1/3    2  ]
           [ -1/3   5/3 ]            [ -2/3   8/3 ]

Matrix Form of a System of Linear Equations

An important application of matrix multiplication is this: The system of linear equations

  a11x1 + a12x2 + a13x3 + . . . + a1nxn=b1
  a21x1 + a22x2 + a23x3 + . . . + a2nxn=b2
   . . . . . . . . . . . . . .
  am1x1 + am2x2 + am3x3 + . . . + amnxn=bm

can be rewritten as the matrix equation

AX = B

where
    A  =  [ a11  a12  a13  . . .  a1n ]
          [ a21  a22  a23  . . .  a2n ]
          [  .    .    .           .  ]
          [ am1  am2  am3  . . .  amn ]

    X = [x1, x2, x3, . . . , xn]T

and

    B = [b1, b2, b3, . . . , bm]T
Example

The system

    x+y-z=4
    3x+y-z=6
    x +y-2z=4
    3x+2y-z=9

has matrix form

    [ 1  1  -1 ]             [ 4 ]
    [ 3  1  -1 ]  [ x ]      [ 6 ]
    [ 1  1  -2 ]  [ y ]   =  [ 4 ]
    [ 3  2  -1 ]  [ z ]      [ 9 ]

Matrix Inverse

If A is a square matrix, one that has the same number of rows and columns, it is sometimes possible to take a matrix equation such as AX = B and solve for X by "dividing by A." Precisely, a square matrix A may have an inverse, written A-1, with the property that

AA-1 = A-1A = I.
If A has an inverse we say that A is invertible, otherwise we say that A is singular.

When A is invertible we can solve the equation

AX = B
by multiplying both sides by A-1, which gives us
X = A-1B.
Example

The system of equations

    [ 1  2  4 ]  [ x ]     [  1 ]
    [ 2  4  6 ]  [ y ]  =  [  1 ]
    [ 4  6  8 ]  [ z ]     [ -1 ]

has solution

    [ x ]     [ 1  2  4 ]-1 [  1 ]     [  1   -2    1  ]  [  1 ]     [ -2  ]
    [ y ]  =  [ 2  4  6 ]   [  1 ]  =  [ -2    2  -1/2 ]  [  1 ]  =  [ 1/2 ]
    [ z ]     [ 4  6  8 ]   [ -1 ]     [  1  -1/2   0  ]  [ -1 ]     [ 1/2 ]
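
The same solution can be checked with a short computation. Here is a sketch in Python with NumPy (assumed, as in the earlier sketch), which forms A-1B and also shows the usual shortcut of solving AX = B directly.

    import numpy as np

    A = np.array([[1, 2, 4], [2, 4, 6], [4, 6, 8]])
    B = np.array([1, 1, -1])

    X = np.linalg.inv(A) @ B       # X = A^-1 B  ->  [-2, 0.5, 0.5]
    print(X)
    print(np.linalg.solve(A, B))   # same answer without forming the inverse explicitly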

Determining Whether a Matrix is Invertible

In order to determine whether an n×n matrix A is invertible or not, and to find A-1 if it does exist, write down the n×(2n) matrix [A | I] (this is A with the n×n identity matrix set next to it).

Row reduce this matrix.

If the reduced form is [I | B] (i.e., has the identity matrix in the left part), then A is invertible and B = A-1. If you cannot obtain I in the left part, then A is singular.

Examples

The matrix

    A =
        [ 1  2  4 ]
        [ 2  4  6 ]
        [ 4  6  8 ]

is invertible. The matrix

    B =
        [ 1  2  4 ]
        [ 2  4  6 ]
        [ 2  4  7 ]

is not.
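
The row-reduction test itself can be sketched in code. The following Python/NumPy function (a rough sketch added here, not part of the original summary) reduces [A | I] and returns the right half as A-1 when the identity appears on the left, or None when it cannot.

    import numpy as np

    def inverse_by_row_reduction(A, tol=1e-10):
        """Row-reduce [A | I]; return A^-1 if it exists, otherwise None."""
        n = len(A)
        M = np.hstack([np.array(A, dtype=float), np.eye(n)])
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r, col]))
            if abs(M[pivot, col]) < tol:
                return None                      # no pivot available: A is singular
            M[[col, pivot]] = M[[pivot, col]]    # move the pivot row into place
            M[col] /= M[col, col]                # scale the pivot to 1
            for r in range(n):
                if r != col:
                    M[r] -= M[r, col] * M[col]   # clear the rest of the column
        return M[:, n:]                          # left half is now I; right half is A^-1

    print(inverse_by_row_reduction([[1, 2, 4], [2, 4, 6], [4, 6, 8]]))   # invertible
    print(inverse_by_row_reduction([[1, 2, 4], [2, 4, 6], [2, 4, 7]]))   # None: singular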

Inverse of a 2×2 Matrix

The 2×2 matrix

    A =
        [ a  b ]
        [ c  d ]
is invertible if ad - bc is nonzero and is singular if ad - bc = 0. The number ad - bc is called the determinant of the matrix. When the matrix is invertible its inverse is given by the formula
    A-1  =     1      [  d  -b ]
            -------   [ -c   a ]  .
            ad - bc
Example

    [ 1  2 ]-1           1           [  4  -2 ]
    [ 3  4 ]     =  ---------------  [ -3   1 ]
                    (1)(4) - (2)(3)

                 =  [ -2     1  ]
                    [ 3/2  -1/2 ]  .
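
Since the 2×2 case is a single formula, it is easy to package. A small Python sketch (again an addition, not from the text):

    def inverse_2x2(a, b, c, d):
        """Inverse of [[a, b], [c, d]] by the determinant formula; None if singular."""
        det = a*d - b*c
        if det == 0:
            return None
        return [[d/det, -b/det], [-c/det, a/det]]

    print(inverse_2x2(1, 2, 3, 4))   # [[-2.0, 1.0], [1.5, -0.5]]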

Two-Person Zero Sum Game

In a two-person zero sum game, each of the two players is given a choice between several prescribed strategies at each turn, and each player's loss is equal to the other player's gain.

The payoff matrix of a two-person zero sum game has rows labeled by the row player's strategies and columns labeled by the column player's strategies. The ij entry of the matrix is the payoff that accrues to the row player in the event that the row player uses strategy i and the column player uses strategy j.

Example

Paper, Scissors, Rock
Rock beats (crushes) scissors; scissors beat (cut) paper, and paper beats (wraps) rock.
Each +1 entry indicates a win for the row player, -1 indicates a loss, and 0 indicates a tie.


                          Column Strategy
                       paper   scissors   rock
    Row       paper      0       -1         1
    Strategy  scissors   1        0        -1
              rock      -1        1         0


Mixed Strategy, Expected Value

A player uses a pure strategy if he or she uses the same strategy at each round of the game. A mixed strategy is a method of playing a game where the rows or columns are played at random so that each is used a given fraction of the time.

We represent a mixed (or pure) strategy for the row player by a row matrix (probability vector)

S = [a   b   c  . . . ]
with the same number of entries as there are rows, where each entry represents the fraction of times the corresponding row is played (or the probability of using that strategy) and where a + b + . . . = 1.

A mixed strategy for the column player is represented by a similar column matrix T. For both the row and column players, a pure strategy is represented by a vector with a single 1 and the rest zeros.

The expected value of the game with payoff matrix P corresponding to the mixed strategies S and T is given by

e = SPT
The expected value of the game is the average payoff per round if each player uses the associated mixed strategies for a large number of rounds.

Example

Here is a variant of "paper, scissors, rock" in which "paper/paper" and "rock/rock" are no longer draws.

               paper   scissors   rock
    paper        2       -1         1
    scissors     1        0        -1
    rock        -1        1        -2

Suppose the row player uses the mixed strategy

S = [0.75   0   0.25]
(play paper 75% of the time, scissors 0% of the time, and rock 25% of the time) and the column player plays scissors and rock each 50% of the time:

    T  =  [  0  ]
          [ 0.5 ]
          [ 0.5 ]  .
Then the expected value of the game is
    e  =  SPT

       =  [ 0.75   0   0.25 ]  [  2  -1   1 ]  [  0  ]
                               [  1   0  -1 ]  [ 0.5 ]
                               [ -1   1  -2 ]  [ 0.5 ]

       =  -0.125
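
Since S is 1×3, P is 3×3, and T is 3×1, the product SPT is a 1×1 matrix, so the whole computation is a one-liner. A quick check in Python with NumPy (an assumed tool, as in the earlier sketches):

    import numpy as np

    P = np.array([[2, -1, 1], [1, 0, -1], [-1, 1, -2]])   # the payoff matrix above
    S = np.array([[0.75, 0, 0.25]])                       # row player's mixed strategy
    T = np.array([[0], [0.5], [0.5]])                     # column player's mixed strategy

    print(S @ P @ T)    # [[-0.125]]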

Minimax Criterion, Fundamental Principles of Game Theory

Minimax Criterion
A player using the minimax criterion chooses a strategy that, among all possible strategies, minimizes the effect of the other player's best counter-strategy. That is, an optimal (best) strategy according to the minimax criterion is one that minimizes the maximum damage the opponent can cause.

Finding the minimax strategy is called solving the game. In the textbook we show a graphical method for solving 2×2 games. For general games, one uses the simplex method (see the Chapter 4 Summary). However, one can frequently simplify a game, and sometimes solve it, by "reducing by dominance" and/or checking whether it is "strictly determined" (see below).

Fundamental Principles of Game Theory
When analyzing any game, we make the following assumptions about both players:

  1. Each player makes the best possible move.
  2. Each player knows that his or her opponent is also making the best possible move.

Example

Consider the following game.

                 Column Strategy
                  A     B     C
    Row       1   0    -1     1
    Strategy  2   0     0     2
              3  -1    -2    -3

If the row player follows Principle 1, (s)he should never play Strategy 1, since Strategy 2 gives better payoffs no matter what strategy the column player chooses. (The payoffs in Row 2 are all at least as high as the corresponding ones in Row 1.)

Further, following Principle 2, the row player expects that the column player will never play Strategy A, since Strategy B gives better payoffs as far as the column player is concerned. (The payoffs in Column B are all at least as low as the corresponding ones in Column A.)

Reducing by Dominance

One pure strategy dominates another if all its payoffs are more advantageous to the player than the corresponding ones in the other strategy. In terms of the payoff matrix, we can say it this way:

  1. Row r in the payoff matrix dominates row s if each payoff in row r is ≥ the corresponding payoff in row s.
  2. Column r in the payoff matrix dominates column s if each payoff in column r is ≤ the corresponding payoff in column s.
Example

Consider the above game once again.

                 Column Strategy
                  A     B     C
    Row       1   0    -1     1
    Strategy  2   0     0     2
              3  -1    -2    -3

Since the entries in Row 2 are ≥ the corresponding ones in Row 1, Row 2 dominates Row 1.

Since the entries in Column B are ≤ the corresponding ones in Column A, Column B dominates Column A.
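
Reducing by dominance is mechanical, so it can be automated. The sketch below (Python with NumPy, an addition to this summary) repeatedly deletes a dominated row or column until none remains; applied to the game above it removes Row 1 and Column A first, and in this particular game it keeps going until only a single entry is left.

    import numpy as np

    def reduce_by_dominance(P):
        """Repeatedly delete dominated rows and columns of the payoff matrix P."""
        P = np.array(P, dtype=float)
        changed = True
        while changed:
            changed = False
            for s in range(P.shape[0]):          # row s is dominated by some row r >= it
                if any(r != s and np.all(P[r] >= P[s]) for r in range(P.shape[0])):
                    P = np.delete(P, s, axis=0)
                    changed = True
                    break
            for s in range(P.shape[1]):          # column s is dominated by some column r <= it
                if any(r != s and np.all(P[:, r] <= P[:, s]) for r in range(P.shape[1])):
                    P = np.delete(P, s, axis=1)
                    changed = True
                    break
        return P

    P = [[0, -1, 1], [0, 0, 2], [-1, -2, -3]]
    print(reduce_by_dominance(P))    # [[0.]]  -- this game reduces all the way down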

Saddle Point, Strictly Determined Game

A saddle point is a payoff that is simultaneously a row minimum and a column maximum. To locate saddle points, circle the row minima and box the column maxima. The saddle points are those entries that are both circled and boxed.

A game is strictly determined if it has at least one saddle point. The following statements are true about strictly determined games.

  1. All saddle points in a game have the same payoff value.
  2. Choosing the row and column through any saddle point gives minimax strategies for both players. In other words, the game is solved via the use of these (pure) strategies.

The value of a strictly determined game is the value of the saddle point entry. A fair game has a value of zero; otherwise it is unfair, or biased.

Example

In the above game there are two saddle points, marked with asterisks below.

          A      B      C
    1     0     -1      1
    2     0*     0*     2
    3    -1     -2     -3

Since the saddle point entries are zero, this is a fair game.
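
Saddle points can also be found by direct search: compute each row minimum and each column maximum and keep the entries that are both. A short sketch in Python with NumPy (an addition, not from the text), applied to the game above:

    import numpy as np

    def saddle_points(P):
        """Positions (i, j) where P[i, j] is both a row minimum and a column maximum."""
        P = np.array(P, dtype=float)
        row_min = P.min(axis=1, keepdims=True)   # the "circled" entries
        col_max = P.max(axis=0, keepdims=True)   # the "boxed" entries
        hits = np.where((P == row_min) & (P == col_max))
        return [(int(i), int(j)) for i, j in zip(*hits)]

    P = [[0, -1, 1], [0, 0, 2], [-1, -2, -3]]
    print(saddle_points(P))    # [(1, 0), (1, 1)]: Row 2 with Columns A and B, payoff 0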

Our on-line game theory utility can be used to check any game (up to 5×5) for saddle points. Try it out.

Input-Output Economic Models

An input-output matrix for an economy gives, as its jth column, the amounts (in dollars or other appropriate currency) of outputs of each sector used as input by sector j (for one year or other appropriate period of time). It also gives the total production of each sector of the economy for a year (called the production vector when written as a column).

The technology matrix is the matrix obtained by dividing each column by the total production of the corresponding sector. Its ijth entry, the ijth technology coefficient, gives the input from sector i necessary to produce one unit of output from sector j. A demand vector is a column vector giving the total demand from outside the economy for the products of each sector. If A is the technology matrix, X is the production vector, and D is the demand vector, then

(I - A)X = D,
or
X = (I - A)-1D.

These same equations hold if D is a vector representing change in demand, and X is a vector representing change in production. The entries in a column of (I - A)-1 represent the change in production in each sector necessary to meet a unit change of demand in the sector corresponding to that column, taking into account all direct and indirect effects.
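
As a numerical illustration only (the two-sector technology matrix and demand below are made up, not taken from the text), the relation X = (I - A)-1D can be evaluated in a couple of lines of Python with NumPy:

    import numpy as np

    # Hypothetical two-sector economy: producing $1 of sector j's output
    # consumes A[i, j] dollars' worth of sector i's output.
    A = np.array([[0.2, 0.1],
                  [0.4, 0.3]])
    D = np.array([1000, 2000])               # external demand for each sector

    X = np.linalg.solve(np.eye(2) - A, D)    # production vector X = (I - A)^-1 D
    print(X)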


Last Updated: March 2006
Copyright © 2000-2006 Stefan Waner