TRUE

A set is orthogonal only if every dot product between two distinct elements is 0, and a linearly independent set need not satisfy that.

Not every linearly independent set in Rn is an orthogonal set.

TRUE

Use the projection formula: each weight is cj = (y·uj)/(uj·uj), so only dot products are needed.

If y is a linear combination of nonzero vectors from an orthogonal set, then the weights in the linear combination can be computed without row operations on a matrix.
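For example (toy vectors chosen here for illustration, not taken from the card), the weights come straight from dot products:

```python
# Illustrative sketch with assumed example data: for an orthogonal basis,
# the weight on each u_j is c_j = (y.u_j)/(u_j.u_j) -- dot products only,
# no row reduction.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def weights(y, basis):
    # requires `basis` to be an orthogonal set of nonzero vectors
    return [dot(y, u) / dot(u, u) for u in basis]

basis = [(1, 1, 0), (1, -1, 0), (0, 0, 1)]  # pairwise orthogonal
y = (4, 2, 5)
print(weights(y, basis))  # -> [3.0, 1.0, 5.0]
```

Checking: 3(1,1,0) + 1(1,-1,0) + 5(0,0,1) = (4,2,5) recovers y.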

FALSE

Normalizing just changes the magnitude of each vector; it doesn't affect orthogonality.

If the vectors in an orthogonal set of nonzero vectors are normalized, then some of the new vectors may not be orthogonal.

FALSE

An orthogonal matrix must also be square; a non-square matrix can have orthonormal columns without being an orthogonal matrix.

A matrix with orthonormal columns is an orthogonal matrix.

FALSE

The distance is || y − ŷ ||, not || ŷ ||.

If L is a line through 0 and if ŷ is the orthogonal projection of y onto L, then ||ŷ|| gives the distance from y to L.

TRUE

An orthogonal set containing the zero vector is linearly dependent; only an orthogonal set of nonzero vectors is guaranteed to be linearly independent.

Not every orthogonal set in Rn is linearly independent.

FALSE

The vectors don't have to be unit vectors, so the set is orthogonal but not necessarily orthonormal.

If a set S = {u1, …, un} has the property that ui·uj = 0 whenever i does not equal j, then S is an orthonormal set.

TRUE

Theorem 7: if the columns of A are orthonormal, then ||Ax|| = ||x|| for all x.

If the columns of an mxn matrix A are orthonormal, then the linear mapping x-> Ax preserves length.

TRUE

The orthogonal projection of y onto v is the same as the orthogonal projection of y onto cv whenever c does not equal zero.

TRUE

The columns are linearly independent since they are orthonormal, so the square matrix is invertible by the Invertible Matrix Theorem; in fact A^(-1) = A^T.

An orthogonal matrix is invertible.

TRUE

z will be orthogonal to any linear combination of u1 and u2, and hence to every vector in W.

If z is orthogonal to u1 and u2 and if W = span{u1,u2} , then z must be in W-perp.

TRUE

For each y and each subspace W, the vector y − proj_W(y) is orthogonal to W.

FALSE

It is always independent of the choice of orthogonal basis.

The orthogonal projection ŷ of y onto a subspace W can sometimes depend on the orthogonal basis for W used to compute ŷ.

TRUE

If y is in a subspace W, then the orthogonal projection of y onto W is y itself.

TRUE

If the columns of an nxp matrix U are orthonormal, then UU^T y is the orthogonal projection of y onto the column space of U.
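A small numeric sketch (example vectors assumed): with orthonormal columns u1 and u2, the product UU^T y can be computed column-wise as (u1·y)u1 + (u2·y)u2, the sum of the projections onto each column.

```python
import math

# Sketch with assumed toy data: for U with orthonormal columns u1, u2,
# U U^T y = (u1.y) u1 + (u2.y) u2, the projection of y onto Col U.
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

s = 1 / math.sqrt(2)
u1, u2 = (s, s, 0.0), (0.0, 0.0, 1.0)  # orthonormal pair
y = (1.0, 3.0, 5.0)

# U U^T y computed column-wise
proj = tuple(dot(u1, y) * a + dot(u2, y) * b for a, b in zip(u1, u2))
print([round(v, 6) for v in proj])  # -> [2.0, 2.0, 5.0]
```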

TRUE

If W is a subspace of Rn and if v is in both W and W-perp, then v must be the zero vector.

TRUE

In the Orthogonal Decomposition Theorem, each term in formula (2) for ŷ is itself an orthogonal projection of y onto a subspace of W.

TRUE

If y = z1 + z2, where z1 is in a subspace W and z2 is in W-perp, then z1 must be the orthogonal projection of y onto W.

FALSE

It is proj_W(y).

(|| y − proj_W(y) || is the distance.)

The best approximation to y by elements of a subspace W is given by the vector y − proj_W(y).

FALSE

This only holds if U is square. Otherwise UU^T x is the orthogonal projection of x onto Col(U), which need not equal x.

If an nxp matrix U has orthonormal columns, then UU^T x = x for all x in Rn.

FALSE

We don't want c = 0: then cv3 = 0 and {v1, v2, cv3} is no longer a basis.

If {v1, v2, v3} is an orthogonal basis for W, then multiplying v3 by a scalar c gives a new orthogonal basis {v1, v2, cv3}.

TRUE

The Gram–Schmidt process produces, from a linearly independent set {x1, …, xp}, an orthogonal set {v1, …, vp} with the property that for each k, the vectors v1, …, vk span the same subspace as that spanned by x1, …, xk.
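A minimal sketch of the process (function name and vectors are illustrative, not from the card): each new v is the corresponding x minus its projections onto the earlier v's.

```python
# Minimal Gram-Schmidt sketch (no normalization): turns a linearly
# independent list x1..xp into an orthogonal list v1..vp.
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def gram_schmidt(xs):
    vs = []
    for x in xs:
        v = list(x)
        for u in vs:
            c = dot(x, u) / dot(u, u)  # projection weight onto u
            v = [vi - c * ui for vi, ui in zip(v, u)]
        vs.append(v)
    return vs

v1, v2 = gram_schmidt([(1, 1, 0), (1, 0, 1)])
print(v1, v2, dot(v1, v2))  # -> [1, 1, 0] [0.5, -0.5, 1.0] 0.0
```

Note span{v1} = span{x1} (v1 is x1 unchanged), and v1·v2 = 0 as the card states.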

TRUE

If A = QR, where Q has orthonormal columns, then R = Q^T A.
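A tiny numeric check (Q and R here are made-up example data): since Q^T Q = I when the columns of Q are orthonormal, Q^T A = Q^T QR = R.

```python
# Assumed toy data: Q has orthonormal columns (0.6, 0.8) and (-0.8, 0.6);
# R is upper triangular. Build A = QR, then recover R as Q^T A.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

Q = [[0.6, -0.8],
     [0.8, 0.6]]
R = [[2.0, 1.0],
     [0.0, 3.0]]
A = matmul(Q, R)
recovered = matmul(transpose(Q), A)
print([[round(v, 6) for v in row] for row in recovered])
# recovered matches R up to floating-point rounding
```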

TRUE

The general least-squares problem is to find an x that makes Ax as close as possible to b.

TRUE

The projection of b onto Col(A) gives the best approximation to b by vectors of the form Ax.

A LSS of Ax = b is a vector x̂ that satisfies Ax̂ = b̂, where b̂ is the orthogonal projection of b onto Col(A).

FALSE

The inequality is facing the wrong way; a LSS x̂ satisfies || b − Ax̂ || ≤ || b − Ax || for all x.

A LSS of Ax = b is a vector x̂ such that || b − Ax || ≤ || b − Ax̂ || for all x in Rn.

TRUE

These are the normal equations; every solution of them is a LSS.

Any solution of A^T A x = A^T b is a LSS of Ax = b.
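A worked sketch (the 3x2 matrix and b are illustrative toy data): form the normal equations and solve the resulting 2x2 system by Cramer's rule.

```python
# Hedged sketch with assumed data: solve A^T A x = A^T b for a
# 2-column A. Here A has columns a1 = (1,1,1), a2 = (0,1,2).
A = [(1, 0), (1, 1), (1, 2)]
b = (6, 0, 0)

def col(M, j):
    return [row[j] for row in M]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

a1, a2 = col(A, 0), col(A, 1)
# normal equations: [[a1.a1, a1.a2], [a2.a1, a2.a2]] x = [a1.b, a2.b]
g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
r1, r2 = dot(a1, b), dot(a2, b)
det = g11 * g22 - g12 * g12
x = ((r1 * g22 - r2 * g12) / det, (g11 * r2 - g12 * r1) / det)
print(x)  # -> (5.0, -3.0)
```

The residual b − Ax̂ = (1, −2, 1) is orthogonal to both columns of A, as the theory requires.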

TRUE

Then A^T A is invertible, so we can solve A^T A x = A^T b by taking the inverse: x = (A^T A)^(-1) A^T b.

If the columns of A are linearly independent, the equation Ax=b has exactly one LSS.

TRUE

A row replacement adds a multiple of one row to another without scaling the row being replaced, so the determinant is unchanged.

Adding a multiple of one row to another does not affect the determinant of a matrix.
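A quick check on a made-up 2x2 matrix: the determinant is the same before and after a row replacement.

```python
# Sketch with assumed toy data: a row replacement R2 <- R2 + 3*R1
# leaves det unchanged.
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

A = [[2, 3],
     [1, 4]]
print(det2(A))  # -> 5
A[1] = [A[1][j] + 3 * A[0][j] for j in range(2)]  # R2 <- R2 + 3*R1
print(A[1], det2(A))  # -> [7, 13] 5
```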

FALSE

If we scale rows while reducing to echelon form, then we change the determinant; the formula holds only when no scaling operations are used.

The determinant of A is the product of the pivots in any echelon form U of A, multiplied by (-1)^r, where r is the number of row interchanges made during row reduction from A to U.

TRUE

If the columns are dependent, then some column of an echelon form lacks a pivot, so there is a row of zeros and det A = 0.

If the columns of A are linearly dependent, then det A = 0.

FALSE

This holds for products: det(AB) = (det A)(det B). It fails for sums.

det(A + B) = det A + det B
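A one-line counterexample (identity matrices chosen for convenience): det(I + I) = det(2I) = 4, but det I + det I = 2.

```python
# Counterexample sketch: determinants multiply but do not add.
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

A = [[1, 0], [0, 1]]
B = [[1, 0], [0, 1]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]  # A + B = 2I
print(det2(S), det2(A) + det2(B))  # -> 4 2
```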

TRUE

Each row interchange multiplies the determinant by −1, and (−1)(−1) = 1.

If two row interchanges are made in succession, then the new determinant equals the old determinant.

FALSE

A has to be triangular for this to hold.

The determinant of A is the product of the diagonal entries in A.

FALSE

The implication goes the other way: if two rows or two columns are the same, or a row or a column is 0, then det A is 0. But det A can be 0 without any of these.

If det A is 0, then two rows or two columns are the same, or a row or a column is zero.

FALSE

det(A^T) = det A when A is nxn; the sign does not change.

det(A^T) = (−1)det A.