Do you have any code to solve a system of two linear equations with two variables?


Lots. Do you have any code, or do you at least know how to do this on paper?

If I understand your question, you would like to solve a system with two unknowns, like x and y, for example.
One of my favorite ways to solve it, if you don't need to worry about the error, is determinants.
But if you want to solve it properly, consider some iterative methods.

One question for you, Paul.Esson: do you need to take care of the situation where D is close to zero?
For example,
Dx / D = x and Dy / D = y.
Well, I meant something like fixed-point iterations.

It would be a lie if I said I knew. What do you get when D is close to 0?

What do you get when D is close to 0?

When D is close to zero, you get close to an under-determined case and you have to switch to a minimum-norm solution. When the determinant is zero, the matrix is called rank-deficient (row-rank) in mathematical analysis. In numerical analysis, however, we generally talk about the numerical rank or the condition number (the ratio of the highest to the lowest eigen-value, in absolute values).

The condition number has, in general, a direct effect on the amplification of the numerical round-off error. Any operation inflates the round-off error; it's just a matter of how much. In a linear system solver, the round-off error on the input is, at best, multiplied by a factor proportional to the condition number of the matrix, e.g., if the condition number is 1000 and the error on the input is 1e-6, then the error on the output (solution) is roughly 1e-3. So, if the condition number is too high (e.g., the determinant is near zero), the whole calculation is meaningless because the error is larger than the value, meaning that the solution is garbage. This is why it is important to worry about this.

Also, some algorithms are worse than others in terms of round-off error amplification. For instance, the determinant method produces terrible errors (the amplification factor is the square of the condition number, if memory serves me right), while a more decent method like QR or RRQR achieves the minimum factor of amplification.
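To put numbers on the digits-lost argument, here is a minimal C++ sketch (my own function name, restricted to the 2x2 case) that computes the 2-norm condition number of a 2x2 matrix from its two singular values, which are the safer quantity for a general non-symmetric matrix, and estimates how many decimal digits of accuracy are at risk:

#include <algorithm>
#include <cmath>
#include <cstdio>

// 2-norm condition number of the 2x2 matrix [[a, b], [c, d]],
// computed from its two singular values (illustrative sketch only).
double cond2x2(double a, double b, double c, double d)
{
    double s    = a*a + b*b + c*c + d*d;     // sum of the squared singular values
    double det  = a*d - b*c;                 // their product, up to sign
    double disc = std::sqrt(std::max(0.0, s*s - 4.0*det*det));
    double sigma_max = std::sqrt((s + disc) / 2.0);
    double sigma_min = std::sqrt((s - disc) / 2.0);
    return sigma_max / sigma_min;            // infinite when the matrix is singular
}

int main()
{
    // Two nearly parallel rows: [1  2] and [1  2.0001].
    double kappa = cond2x2(1.0, 2.0, 1.0, 2.0001);
    std::printf("condition number       ~ %g\n", kappa);
    std::printf("decimal digits at risk ~ %g\n", std::log10(kappa));
}

For these nearly parallel rows the condition number comes out on the order of 1e5, i.e. roughly five of the fifteen-or-so significant digits of a double are already gone before the solver does anything clever.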

To mitigate this problem, there are, in general, two approaches: (1) balancing and (2) dampening. Balancing means that you scale the matrix's rows and columns in such a way that you bring the condition number down closer to 1 (usually between 1 and 2), solve the scaled problem, and then scale the solution back to the original values. The trick here is that you can do the scaling in perfect arithmetic (using frexp and ldexp), which does not affect the round-off error (it's integer arithmetic on the binary exponents). Dampening means that you add a small value to the diagonal of the matrix so that the smallest eigen-value is guaranteed not to be so small as to create a very large condition number, meaning that you can effectively put an upper bound on the condition number without affecting the solution too much (the worse the real condition number is, the more approximate the solution will be, but the round-off error is contained). Which approach you choose depends on whether you need an exact solution or can live with an approximate one.
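To illustrate the balancing part (a rough sketch with my own names, restricted to the 2x2 case and not taken from any library): scale each row and then each column by a power of two chosen with frexp, solve the balanced system, and undo the column scaling on the result.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Returns the power-of-two factor that maps 'v' into [0.5, 1).
// Powers of two only touch the exponent bits of a double, so this
// scaling is exact and adds no round-off error of its own.
static double pow2_scale(double v)
{
    int e = 0;
    std::frexp(v, &e);            // v = m * 2^e with m in [0.5, 1)
    return std::ldexp(1.0, -e);   // 2^(-e)
}

int main()
{
    // A badly scaled system A x = b: the rows differ by ~18 orders of magnitude.
    double A[2][2] = { { 2.0e+9, 3.0e+9 },
                       { 4.0e-9, 1.0e-9 } };
    double b[2]    = { 5.0e+9, 6.0e-9 };

    // Row scaling: bring the largest entry of each row into [0.5, 1).
    double r[2], c[2];
    for (int i = 0; i < 2; ++i) {
        r[i] = pow2_scale(std::max(std::fabs(A[i][0]), std::fabs(A[i][1])));
        A[i][0] *= r[i];  A[i][1] *= r[i];  b[i] *= r[i];
    }
    // Column scaling of the row-scaled matrix.
    for (int j = 0; j < 2; ++j) {
        c[j] = pow2_scale(std::max(std::fabs(A[0][j]), std::fabs(A[1][j])));
        A[0][j] *= c[j];  A[1][j] *= c[j];
    }

    // Solve the balanced system (R*A*C) y = R*b; Cramer's rule is fine for 2x2.
    double D  = A[0][0]*A[1][1] - A[0][1]*A[1][0];
    double y0 = (b[0]*A[1][1] - A[0][1]*b[1]) / D;
    double y1 = (A[0][0]*b[1] - b[0]*A[1][0]) / D;

    // The original unknowns are x = C*y, i.e. undo the column scaling.
    std::printf("x = %g, y = %g\n", c[0]*y0, c[1]*y1);   // expect x = 1.3, y = 0.8
}

Dampening would be even shorter: add the same small constant to A[0][0] and A[1][1] before solving, at the cost of a slightly perturbed solution.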

Another approach is to use a rank-revealing method which will be able to detect if there is a rank-deficiency. Checking if the determinant is near zero is one such method (albeit a crude and very expensive one). Generally, you use a pivoting strategy that re-orders the eigen-values by magnitude, such that you can detect a loss of rank in the rows or columns of the matrix. Once rank-deficiency is detected (within a near-zero tolerance value), you must reduce the system to eliminate linearly dependent rows or columns. Doing so will turn the system into an under-determined or over-determined system, in which case, you have to use either a minimum-norm solution or a least-square approximation, respectively.
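And as a toy version of the rank-revealing idea (again a sketch of my own, 2x2 only): elimination with partial pivoting, where a pivot falling below a tolerance flags rank deficiency; a consistent rank-1 system falls back to the minimum-norm point of the one surviving equation, while the inconsistent case (which would call for a least-squares fit) is only reported.

#include <cmath>
#include <cstdio>
#include <utility>

// Solve  a1*x + b1*y = c1,  a2*x + b2*y = c2  by elimination with
// partial pivoting.  Returns true only for a unique solution; when the
// system is rank-deficient but consistent, (x, y) is set to the
// minimum-norm point of the remaining equation instead.
bool solve2x2(double a1, double b1, double c1,
              double a2, double b2, double c2,
              double& x, double& y)
{
    const double tol = 1e-12;     // crude absolute tolerance, for illustration
    x = y = 0.0;                  // defaults in case we bail out early

    // Partial pivoting: put the row with the larger |leading coefficient| first.
    if (std::fabs(a2) > std::fabs(a1)) {
        std::swap(a1, a2); std::swap(b1, b2); std::swap(c1, c2);
    }

    if (std::fabs(a1) > tol) {
        double m = a2 / a1;            // eliminate x from the second row
        double p = b2 - m * b1;        // second pivot
        double q = c2 - m * c1;
        if (std::fabs(p) > tol) {      // full rank: back-substitute
            y = q / p;
            x = (c1 - b1 * y) / a1;
            return true;
        }
        if (std::fabs(q) > tol)        // rank 1 but inconsistent: least-squares territory
            return false;
    } else {
        // The whole first column is (numerically) zero; pivot on the second column.
        if (std::fabs(b2) > std::fabs(b1)) {
            std::swap(a1, a2); std::swap(b1, b2); std::swap(c1, c2);
        }
        if (std::fabs(b1) <= tol) return false;                   // zero matrix
        if (std::fabs(c2 - (b2 / b1) * c1) > tol) return false;   // inconsistent
        a1 = 0.0;   // treat the tiny leading coefficient as exactly zero
    }

    // Consistent but rank-deficient: closest point to the origin on a1*x + b1*y = c1.
    double n2 = a1 * a1 + b1 * b1;
    x = a1 * c1 / n2;
    y = b1 * c1 / n2;
    return false;                      // a solution exists, but it is not unique
}

int main()
{
    double x, y;
    bool unique = solve2x2(1, 2, 3,  2, 4, 6, x, y);            // second equation = 2 * first
    std::printf("unique = %d, x = %g, y = %g\n", unique, x, y); // 0, 0.6, 1.2
}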

Do you have any code to solve a system of two linear equations with two variables?

I have code to solve a system of thousands of variables, but I don't think I have one for only two variables. The truth is, for two variables, there is a very simple closed-form solution for it. So, you don't really need to worry too much about this. Just work it out by pen-and-paper and implement that solution. And for extra points, check for divisions by zero and see what you can do to fix that (which is also simple to derive by hand).
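For what it's worth, the pen-and-paper closed form could look like this in C++ (a minimal sketch with made-up names; the fixed 1e-12 tolerance is only a placeholder, and a test relative to the size of the coefficients would be more robust):

#include <cmath>
#include <cstdio>

// Closed-form (Cramer's rule) solution of:
//   a1*x + b1*y = c1
//   a2*x + b2*y = c2
// Returns false when the determinant is (numerically) zero, i.e. the two
// lines are parallel or identical and there is no unique solution.
bool solve(double a1, double b1, double c1,
           double a2, double b2, double c2,
           double& x, double& y)
{
    double D = a1 * b2 - a2 * b1;            // main determinant
    if (std::fabs(D) < 1e-12) return false;  // guard the division by zero
    x = (c1 * b2 - c2 * b1) / D;             // Dx / D
    y = (a1 * c2 - a2 * c1) / D;             // Dy / D
    return true;
}

int main()
{
    double x, y;
    if (solve(2, 3, 5,  4, 1, 6, x, y))          // 2x + 3y = 5,  4x + y = 6
        std::printf("x = %g, y = %g\n", x, y);   // expected: x = 1.3, y = 0.8
    else
        std::printf("no unique solution\n");
}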


OK, these things are deep. For God's sake, people get PhDs for contributing to these things, and they are advancing very fast.
I am not in that field, so it would be best to get an expert in that field for the newest info on the subject, so you can get a front-end app that is fast.
Well, I will say this: I know of a sequence that converges to the square root of a number, and a few years ago I even saw people use that algorithm in C++, so when you compute a square root, that thing gets called. I also know that you could compute it with Chebyshev polynomials, and that way it is much faster.
My advice is to get an expert from that field.

so it would be best to get an expert in that field for the newest info on the subject

You are pretty much talking to one, in all modesty.

I know of a sequence that converges to the square root of a number, and a few years ago I even saw people use that algorithm in C++, so when you compute a square root, that thing gets called. I also know that you could compute it with Chebyshev polynomials, and that way it is much faster.

You're half right. Those converging series are used to compute square-roots, and in fact, any transcendental function (exponents, logs, roots, trigonometric, etc.). But those series are implemented in hardware, not in software. Basically, if you ask for the square-root of a number, there is a module on the CPU that will compute about 40 terms of the Taylor series all at once and add them up to obtain the result. That will take many more clock cycles than a typical addition or multiplication, but overall, it is very fast, as fast as can be without sacrificing precision.

However, in some cases, most notably in 3D computer game programming, precision is not important, but speed is. In those cases, people can create functions that compute a truncated (10 terms or so) series instead to spare a few clock cycles at the expense of precision in the results. These functions are usually written in assembly directly.
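As a small illustration of the "converging sequence" idea mentioned earlier, here is Newton's method for the square root in plain C++; this is a typical software approach, not a claim about how any particular CPU implements it, and std::sqrt remains both faster and correctly rounded.

#include <cmath>
#include <cstdio>

// Newton's method applied to f(x) = x*x - a converges to sqrt(a).
// Each step roughly doubles the number of correct digits, so a handful
// of iterations is enough for double precision (illustration only).
double my_sqrt(double a, int iterations = 8)
{
    if (a <= 0.0) return 0.0;
    double x = a;                      // crude starting guess
    for (int i = 0; i < iterations; ++i)
        x = 0.5 * (x + a / x);         // x_{k+1} = (x_k + a / x_k) / 2
    return x;
}

int main()
{
    std::printf("my_sqrt(2)   = %.15f\n", my_sqrt(2.0));
    std::printf("std::sqrt(2) = %.15f\n", std::sqrt(2.0));
}

With a better starting guess (for example one derived from frexp) only three or four steps are needed; a truncated polynomial or Chebyshev fit trades some of that accuracy for fewer operations, which is exactly the speed/precision trade-off described above.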


Sorry to spoil the PhD party, but Cramer's rule, anyone? (paul esson mentioned this before)...

There's a nice example on another, cough cough, forum; you just need to be able to use Google.

Well, I know you would probably say that my suggestion is useless and has no practical applications, but let's consider this.
How about solving this with big numbers, bigger than long long int or long double or whatever is possible?
To be specific: how do they get those big prime numbers? So you have been halfway right as well.

Well, anyone on the big-numbers question?

*Second of all, when I use asm, I like to use it from C++, like _asm{...}, and very often mix it with C++. If you like this approach, to make things fun there is a project on the MS site, something about Intel and AMD processors.
*Then, when it comes to implementation, no matter how you implement it, there will be people saying: well, you have Google.
*Then I still stand by my statement: I have seen people implement it in a .h file in some C++ code.
*About that hardware implementation, I heard of it a long time ago, so would you explain it in more detail? Is it like a lookup table or something else entirely? You seem to be an expert in that field.
