# Definition Review: Sets, Groups, Fields, Vector Spaces, and Matrices

**Sets:**

A set S is an unordered collection of distinct, well-defined elements.

**Subset:**

C is a subset of set S iff every element in C is also in S.

**Ordered Pair:**

An ordered pair, denoted (a,b), is two elements with a fixed order: the first element of the pair is a, the second is b.

**Cartesian Product:**

The Cartesian product of two sets *X* and *Y*, denoted *X* × *Y*, is the set of all possible ordered pairs whose first component is a member of *X* and whose second component is a member of *Y.*
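As a small illustration (a sketch using the standard-library `itertools.product`; the sets X and Y here are just examples):

```python
# Cartesian product of two small example sets via itertools.product.
from itertools import product

X = {1, 2}
Y = {"a", "b"}
XY = set(product(X, Y))  # every ordered pair (x, y) with x in X, y in Y
print(len(XY))  # → 4, i.e. |X| * |Y|
```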

**Binary Relation:**

A binary relation *R* is usually defined as an ordered triple (*X*, *Y*, *G*) where *X* and *Y* are arbitrary sets, and *G* is a subset of the Cartesian product *X* × *Y*.

**Binary Operation:**

A binary operation **op** on a set S is a function from the Cartesian product of S with itself to S. **op**: S X S --> S.

**Closure:**

A binary operation **op** is closed in S iff (s1 op s2) is in S for all s1,s2 in S.

**Associativity:**

A binary operation **op** is associative in S iff

(s1 op s2) op s3 = s1 op (s2 op s3) for all s1,s2,s3 in S.

**Commutativity:**

A binary operation **op** is commutative in S iff

s1 op s2 = s2 op s1 for all s1,s2 in S.

**Identity Element:**

An identity element 0 of a set S with operation **op** is an element such that for all s in S,

s op 0 = 0 op s = s.

**Invertible:**

An element s of S is invertible under **op** if there exists an element s^-1 in S such that s op s^-1 = s^-1 op s = 0, where 0 is the identity element in S.

**Distributivity:**

If S is a set and op_1 and op_2 are operations on S, then op_2 distributes over op_1 in S iff for all s1,s2,s3 in S,

s1 op_2 (s2 op_1 s3) = (s1 op_2 s2) op_1 (s1 op_2 s3).

**For any set S and any operation f:**

If S is closed under f, S is associative under f, S contains a unique identity element 0, and each element of S is uniquely invertible then S is a **group** under f.
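For instance, the integers mod 5 form a group under addition mod 5. The four group axioms can be checked by brute force (a sketch; the set and operation here are illustrative, not from the text above):

```python
# Brute-force verification that Z_5 = {0,1,2,3,4} is a group under addition mod 5.
S = list(range(5))

def op(a, b):
    return (a + b) % 5

closed = all(op(a, b) in S for a in S for b in S)
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in S for b in S for c in S)
# Find every identity element e with e op s = s op e = s for all s.
identities = [e for e in S if all(op(e, s) == s == op(s, e) for s in S)]
# Every element must have an inverse with respect to that identity.
invertible = all(any(op(s, t) == identities[0] == op(t, s) for t in S)
                 for s in S)
print(closed, associative, identities, invertible)  # → True True [0] True
```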

If S is a group under f, and S is commutative under f, then S is an **abelian group** under f.

If S is an abelian group under f, S is only closed and associative under g, and g distributes over f in S, then {S,f,g} is called a **ring**.

If S is abelian under f, S excluding the identity of f is abelian under g, and g distributes over f in S, then {S,f,g} is called a **field**.

The real numbers with the operations multiplication and addition as commonly defined form a field.

**Define vector space:**

If {V,+} is an abelian group.

and

{R,+,*} is a field.

and

for all r in R and all v in V r*v is in V.

and

for all r in R and all v1,v2 in V, r*(v1 + v2) = r*v1 + r*v2.

and

for all r1,r2 in R and v in V, (r1 + r2)*v = r1*v + r2*v.

and

for all r1,r2 in R and v in V, (r1*r2)*v = r1*(r2*v), and 1*v = v, where 1 is the multiplicative identity of R.

Then {(V,+),(R,+,*)} is a vector space.
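A quick numerical sanity check of the two distributivity axioms in R^2 (a sketch; vectors are modeled as tuples and scalars as exact Fractions, with example values):

```python
from fractions import Fraction

def vadd(u, v):   # vector addition in R^2
    return (u[0] + v[0], u[1] + v[1])

def smul(r, v):   # scalar multiplication
    return (r * v[0], r * v[1])

r1, r2 = Fraction(2, 3), Fraction(-1, 4)
v1, v2 = (Fraction(1), Fraction(5)), (Fraction(-2), Fraction(3))

# r*(v1 + v2) = r*v1 + r*v2
lhs1 = smul(r1, vadd(v1, v2))
rhs1 = vadd(smul(r1, v1), smul(r1, v2))

# (r1 + r2)*v = r1*v + r2*v
lhs2 = smul(r1 + r2, v1)
rhs2 = vadd(smul(r1, v1), smul(r2, v1))
print(lhs1 == rhs1, lhs2 == rhs2)  # → True True
```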

**Define subspace:**

A non-empty subset W of a vector space V that is closed under addition and scalar multiplication (therefore containing the 0 vector) is called a subspace of V.

**Define Basis:**

Suppose B={V_1,...,V_n} is a finite subset of vector space V over a field F, then B is a basis if it satisfies the following properties:

1) For all a_1,...,a_n in F, if a_1*V_1+...+a_n*V_n = 0 then a_1=...=a_n=0.

2) For every x in V there exist a_1,...,a_n in F such that x=a_1*V_1+...+a_n*V_n.

By (1) a_1,...,a_n are uniquely determined for each x in V.
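For example, B = {(1,0), (1,1)} is a basis of R^2, and the coordinates of any x in that basis are uniquely determined (a sketch; the basis and target vector are illustrative):

```python
from fractions import Fraction

# Basis B = {V1, V2} of R^2 and a target vector x.
V1, V2 = (1, 0), (1, 1)
x = (3, 5)

# Solve a1*V1 + a2*V2 = x by hand: the second coordinate forces a2 = 5,
# and then the first coordinate forces a1 = 3 - 5 = -2.
a2 = Fraction(x[1])
a1 = Fraction(x[0]) - a2
combo = (a1 * V1[0] + a2 * V2[0], a1 * V1[1] + a2 * V2[1])
print(a1, a2, combo == x)  # → -2 5 True
```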

Let V and W be vector spaces over field F.

A function T: V-->W is called a linear transformation if for all x,y in V and all c in F:

(a) T(x + y) = T(x) + T(y)

(b) T(cx) = cT(x)

**Define Linear Map:**

Let V and W be two vector spaces over the same field K. A function f:V-->W is a **linear map** if for any two vectors x and y in V and any scalar a in K the following conditions are satisfied:

f(x+y) = f(x) + f(y)

f(a*x) = a*f(x).
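As a concrete check, the map f(x) = (2*x1 + x2, x1) from R^2 to R^2 satisfies both conditions (a sketch; f and the test vectors are example values, not from the text):

```python
def f(x):  # an example linear map R^2 -> R^2
    return (2 * x[0] + x[1], x[0])

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(a, v):
    return (a * v[0], a * v[1])

x, y, a = (1, 4), (-3, 2), 7
additive = f(vadd(x, y)) == vadd(f(x), f(y))
homogeneous = f(smul(a, x)) == smul(a, f(x))
print(additive, homogeneous)  # → True True
```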

**Define array:**

An array is an ordered sequence of elements.

**Define matrix:**

An mxn matrix is a 2 dimensional array of elements. Each element of the array is indexed by a tuple (i,j) where i is one of 0,...,m-1, and j is one of 0,...,n-1 which specifies its location in the array.

**Define matrix representation of linear map:**

Suppose V_1,...,V_n is a basis for V.

Every vector v in V is uniquely determined by coefficients c_1...c_n in the expression v=c_1*V_1+...+c_n*V_n

If f:V-->W is a linear map, f(c_1*V_1+...+c_n*V_n)=c_1*f(V_1)+...+c_n*f(V_n)

If W_1,...,W_m is a basis for W,

then f(V_j) = A_(1,j)*W_1+...+A_(m,j)*W_m, for some values of A_(i,j).

The value of f at any vector is completely determined by the values A_(i,j), by the definition of linear map.

If we put the values A_(i,j) in an mxn matrix A, then we can use it to compute the value of f for any vector v in V.

If we place the values c_1,...,c_n in an nx1 matrix C, then A*C will be the mx1 matrix whose i'th element is the i'th coordinate of f(v) with respect to the basis W_1,...,W_m, under the usual definition of matrix multiplication.

In this way matrix A represents linear function f.
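The computation described above, applying the matrix to the coordinate column, can be sketched as follows (A here is the matrix of the example map f(x) = (2*x1 + x2, x1) in the standard bases of R^2, an assumed example):

```python
# A represents f(x) = (2*x1 + x2, x1) in the standard bases:
# column j of A holds the coordinates of f(e_j).
A = [[2, 1],
     [1, 0]]

def matVec(A, c):
    # The i'th output entry is the dot product of row i of A with c.
    return [sum(A[i][j] * c[j] for j in range(len(c))) for i in range(len(A))]

c = [3, 5]           # coordinates of v in the basis of V
print(matVec(A, c))  # → [11, 3], i.e. f(3, 5) = (2*3 + 5, 3)
```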

**Define Identity Matrix**:

An identity matrix In of size n is a matrix of n rows and n columns, indexed (i,j) with i and j ranging from 0 to n-1, whose diagonal entries (i,i) are all 1 and whose remaining entries are all 0.

**Define Inverse:**

An nxn matrix A has an inverse if there exists an nxn matrix B such that A*B=B*A=In where In is the identity matrix of size n.

B is then the inverse of A and is denoted A^-1. * is the matrix product operation.

An nxm matrix A where n!=m does not have an inverse, however such a matrix may have a left and/or right inverse.

Matrix A has a left inverse if there exists a matrix B such that BA=I. Matrix A has a right inverse if there exists a matrix B such that AB=I.
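For a 2x2 example, A = [[1,2],[3,4]] has the inverse B built from its entries divided by Det(A) = -2, and both products give I2 (a sketch using exact Fraction arithmetic; the matrix is an example value):

```python
from fractions import Fraction

A = [[1, 2], [3, 4]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # = -2
# Standard 2x2 inverse: (1/det) * [[d, -b], [-c, a]].
B = [[Fraction(A[1][1], det), Fraction(-A[0][1], det)],
     [Fraction(-A[1][0], det), Fraction(A[0][0], det)]]

def matMul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I2 = [[1, 0], [0, 1]]
print(matMul(A, B) == I2, matMul(B, A) == I2)  # → True True
```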

**Calculating an Inverse A^-1 of Matrix A on a Computer:**

**Gaussian Elimination:**

The inverse of a square nxn matrix A may be calculated on a computer via Gaussian elimination.

Gaussian elimination entails performing elementary row operations on Matrix A until it becomes an identity matrix of size n, or until it becomes clear that it cannot be changed into the identity matrix In via elementary row operations.

If it is found that matrix A cannot be changed into the identity matrix In, then matrix A has no inverse.

If it is found that a series {f_1,f_2,...,f_x} of elementary row operations transforms matrix A into In, then perform these exact operations, in the same order, on an identity matrix In. After performing these operations, that matrix will have become the inverse matrix of A, A^-1.

An elementary row operation is one of: multiplying some row i of A by a nonzero scalar x from field F, adding a multiple of one row of A to another, or swapping two rows of A.

**Cofactor Expansion:**

Another way of calculating the inverse of A is to compute its cofactor matrix X, transpose that matrix (the transpose of the cofactor matrix is called the adjugate of A), and divide every entry of the result by the determinant of A.

Then X will have become the inverse of A.

The cofactor matrix is the matrix whose (i,j) entry is the cofactor of element A_(i,j).

The cofactor at element (i,j) of A is (-1)^(i+j) * Det(M), where M is the matrix A with the i'th row and the j'th column removed (M has dimension (n-1) x (n-1)).
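A direct implementation of the minor M and the cofactor at (i,j) (a sketch; `det` here is computed by recursive cofactor expansion along the first row, and the 3x3 matrix is an example value):

```python
def minor(A, i, j):
    # A with row i and column j removed.
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    # Recursive cofactor expansion along row 0.
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def cofactor(A, i, j):
    return (-1) ** (i + j) * det(minor(A, i, j))

A = [[3, 5, 8], [3, 9, 7], [2, 6, 8]]
print(det(A), cofactor(A, 0, 0))  # → 40 30
```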

The determinant of an n by n matrix A, Det(A), can be calculated using the Leibniz formula:

Det(A) = sum over all sigma in S_n of sgn(sigma) * A_(1,sigma(1)) * A_(2,sigma(2)) * ... * A_(n,sigma(n)),

where S_n is the set of all permutations sigma of the set {1,...,n}, and sgn(sigma) evaluates to +1 if sigma is even and to -1 if sigma is odd. sigma is even if the sequence differs from the base sequence 1,2,...,n by an even number of 2-swaps, and is odd otherwise.

The cofactor algorithm will find the matrix inverse in n^2 + 1 determinant calculations: one (n-1) x (n-1) determinant per cofactor, plus Det(A) itself.

Each determinant is calculated as a sum of n! terms (one per permutation of the base sequence); each term is a product of n factors, and a sign calculation is also necessary for each term.
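The Leibniz formula can be written out directly with the standard-library `itertools.permutations`, computing sgn(sigma) by counting inversions (a sketch; it agrees with cofactor expansion but likewise sums n! terms, and the test matrix is an example value):

```python
from itertools import permutations

def sign(sigma):
    # +1 for an even permutation, -1 for an odd one, via the inversion count.
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def detLeibniz(A):
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[3, 5, 8], [3, 9, 7], [2, 6, 8]]
print(detLeibniz(A))  # → 40
```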

The code below calculates the inverse of a matrix A in the Python programming language using the Gaussian elimination method. It works for any square matrix A (though it may get slow for large ones). If at the end of the algorithm matrix B is not the identity matrix of size n, then matrix A has no inverse; otherwise matrix C will be the inverse of matrix A.

The code uses the fractions module from the standard library, which represents each number exactly as a numerator and a denominator.

**Inverse Via Gaussian elimination in python:**

```python
# All functions only work on square matrices.
from fractions import Fraction
import sys

def vecDot(X, Y):
    if len(X) != len(Y):
        print("Size error")
        sys.exit(1)
    accum = 0
    for i in range(len(X)):
        accum += X[i] * Y[i]
    return accum

def matDot(X, Y):
    Z = []
    for i in range(len(X)):
        resRow = []
        for j in range(len(Y)):
            column = [Y[k][j] for k in range(len(Y))]
            resRow.append(vecDot(X[i], column))
        Z.append(resRow)
    return Z

def vecSum(X, Y):
    if len(X) != len(Y):
        print("Size error")
        sys.exit(1)
    return [X[i] + Y[i] for i in range(len(X))]

def scaleProd(a, X):
    return [a * X[i] for i in range(len(X))]

def lowLeft(B):
    # Forward elimination: reduce B to an upper-triangular matrix with 1s on
    # the diagonal, applying every row operation to C (initially In) as well.
    n = len(B)
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        C[i][i] = Fraction(1)
    for i in range(n):
        # If the pivot is zero, swap in a lower row with a nonzero entry in
        # column i (a row swap is also an elementary row operation).
        if B[i][i] == 0:
            for k in range(i + 1, n):
                if B[k][i] != 0:
                    B[i], B[k] = B[k], B[i]
                    C[i], C[k] = C[k], C[i]
                    break
        if B[i][i] != 0:
            scale = Fraction(1, 1) / B[i][i]
            C[i] = scaleProd(scale, C[i])
            B[i] = scaleProd(scale, B[i])
            for j in range(i + 1, n):
                C[j] = vecSum(scaleProd(-1 * B[j][i], C[i]), C[j])
                B[j] = vecSum(scaleProd(-1 * B[j][i], B[i]), B[j])
    return C

def highRight(B, C):
    # Back substitution: clear the entries above each pivot.
    for i in reversed(range(len(B))):
        for j in range(i):
            C[j] = vecSum(scaleProd(-1 * B[j][i], C[i]), C[j])
            B[j] = vecSum(scaleProd(-1 * B[j][i], B[i]), B[j])
    return C

def printMat(A):
    for i in range(len(A)):
        row = ""
        for j in range(len(A[0])):
            row += str(A[i][j]) + "\t"
        print(row)

def identityTest(C):
    for i in range(len(C)):
        for j in range(len(C)):
            if C[i][j] != (1 if i == j else 0):
                return False
    return True

#A=[[3,5,8],[3,9,7],[2,6,8]]
#A=[[3,5,8,1],[3,9,7,7],[2,6,8,3],[6,2,8,4]]
#A=[[1,1,1,1],[1,1,1,1],[2,6,8,3],[6,2,8,4]]
A = [[5,4,2,8,1,6],[9,4,6,2,8,1],[2,6,8,3,0,3],[6,2,8,4,5,1],[1,5,3,7,2,7],[5,3,2,7,1,0]]

B = [row[:] for row in A]  # copy each row so elimination never touches A
C = lowLeft(B)
C = highRight(B, C)

if identityTest(B):
    printMat(A)
    print("*")
    printMat(C)
    print("=")
    printMat(matDot(C, A))
else:
    printMat(A)
    print("Is not invertible.")
```

Results from the calculation of the last 6x6 matrix:

```
5  4  2  8  1  6
9  4  6  2  8  1
2  6  8  3  0  3
6  2  8  4  5  1
1  5  3  7  2  7
5  3  2  7  1  0
*
3387/14350   759/14350    662/21525    -1387/43050   -4702/21525  -439/7175
-1957/14350  1951/14350   2543/21525   -10693/43050  1772/21525   904/7175
-18/7175     -576/7175    1264/21525   3293/21525    -719/21525   -383/7175
-1159/14350  -1213/14350  -1559/21525  3559/43050    2164/21525   1023/7175
-2879/14350  1147/14350   -2554/21525  929/43050     4484/21525   263/7175
71/350       -3/350       -4/525       29/1050       -16/525      -37/175
=
1  0  0  0  0  0
0  1  0  0  0  0
0  0  1  0  0  0
0  0  0  1  0  0
0  0  0  0  1  0
0  0  0  0  0  1
```

As you can see, the inverse matrix was calculated correctly.