I'm currently working on implementing a matrix inversion method for a Matrix class in C++. One thing that isn't implemented yet is a check for when the pivot element is 0, in which case I will need to swap rows (likely with the row just beneath it).

Here is what I have so far for the inversion method:

Matrix Matrix::invert()
{
	if (rows != cols) {              // Check for square matrix
		cout << "Matrix must be square." << endl;
		exit(1);
	}

	double ratio, sum;
	int i, j, k;
	Matrix T1(rows, cols*2);         // New temp augmented matrix
	Matrix F(rows, cols);            // Final inverted matrix

	// Add identity matrix to 2nd half of temp matrix
	for (i = 1; i <= rows; i++) {
		for (j = cols+1; j <= 2*cols; j++) {
			if (i == (j - cols))
				T1.el[i][j] = 1.0;
			else
				T1.el[i][j] = 0.0;
		}
	}

	// Add original matrix to 1st half of temp matrix
	for (i = 1; i <= rows; i++) {
		for (j = 1; j <= rows; j++) {
			T1.el[i][j] = el[i][j];
		}
	}

	cout << "\n\nOriginal matrix with added identity matrix" << endl;
	for (i = 1; i <= rows; i++) {
		cout << setw(5);
		for (j = 1; j <= rows*2; j++) {
			cout << T1.el[i][j] << setw(6);
		}
		cout << endl;
	}

	double temp;
	// Row operations
	for (i = 1; i <= rows; i++) {
		for (j = 1; j <= rows; j++) {
			if (i != j) {
				ratio = T1.el[j][i] / T1.el[i][i];
				for (k = 1; k <= 2*rows; k++) {
					T1.el[j][k] -= ratio * T1.el[i][k];
				}
			}
		}
	}

	// Backwards substitution (normalize each row by its pivot)
	for (i = 1; i <= rows; i++) {
		sum = T1.el[i][i];
		for (j = i+1; j <= 2*rows; j++) {
			T1.el[i][j] /= sum;
		}
	}

	// Copy right half to return matrix
	for (i = 1; i <= rows; i++) {
		for (j = 1; j <= rows; j++) {
			F.el[i][j] = T1.el[i][j + cols];
		}
	}

	return F;
}

How or where would I go about adding this functionality/check?

Thanks.

Brock

First of all, I don't know if the implementation of the "Matrix" class is special, but normally in C/C++ the indices (i, j) go from 0 to N-1, not from 1 to N (as in Matlab or Fortran). If this Matrix class is implemented to handle 1-based indices, then I highly recommend changing it: you should stick with the established convention of the language you are using, which is 0-based indexing.

Second, this inner loop:

for (j = 1; j <= rows; j++) {
	if (i != j) {
		ratio = T1.el[j][i] / T1.el[i][i];
		for (k = 1; k <= 2*rows; k++) {
			T1.el[j][k] -= ratio * T1.el[i][k];
		}
	}
}

should be replaced by this, because you only need to obtain a triangular form for the back-substitution, i.e., you only need to eliminate the rows below the diagonal (note that your final normalization loop then has to become a true back-substitution that also clears the entries above the diagonal):

for (j = i+1; j <= rows; j++) {
	ratio = T1.el[j][i] / T1.el[i][i];
	for (k = 1; k <= 2*rows; k++) {
		T1.el[j][k] -= ratio * T1.el[i][k];
	}
}

Finally, Gaussian elimination without pivoting is really unstable; pivoting is not optional, and it cannot just be "take the next row if the pivot element is 0". First, a floating-point number is rarely, if ever, exactly zero. Second, just picking the first row that meets some minimal requirement (absolute value greater than some tolerance) is not good enough in general. The standard approach is to always select the row whose pivot element is largest in absolute value, swap the current row with the selected row, and then perform the elimination as usual.

Just go by the Wikipedia pseudo-code and you'll be fine.


N.B.: I hope you don't think that Gaussian elimination is actually a good method for inverting a matrix... right? It's a good programming exercise to implement, but it is not really useful in practice: it's slow and unstable. For the cases where Gaussian elimination works OK, the LU decomposition (or PLU decomposition) works better. And for the majority of cases where the accuracy/stability of the LU decomposition would be really questionable, other methods are preferable, such as QR, Cholesky, or Jacobi.
