Well, I'm pretty much a newbie to programming, mainly because I'm just 15, but I really like informatics (especially programming). I'm not that wealthy, so I can't pay for private teaching or anything like that.

But if you want to, you can, that's what I say. I've been learning C++ for a while now (a few months), but every time I see some math operation I see the "for (int i<..." part. For example:

int lower_limit = 1, upper_limit = 100;   // example values; identifiers can't contain spaces
int range = upper_limit - lower_limit;
int number;
for (int i = 0; i < 20; i++)
{
    number = lower_limit + (rand() % range);   // random number between the two limits
    cout << number << endl;
}

Why the hell is there an i? Does it mean something, is it just an easy way to remember something, or what?

Sorry for the dumb question, but I'm really an autodidact, so it's kind of hard to get someone to explain this to me.

SORRY! AND GREETINGS!

All 18 Replies

You could always try for( i=0;i<20;i++) and observe the error messages

Maybe then try

int i;
for(i=0;i<20;i++)
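
To make that concrete, here's a minimal sketch of both versions (the loop body just prints the counter, which is my own assumption for illustration):

#include <iostream>
using namespace std;

int main()
{
    // Using i without declaring it first will not compile:
    // for (i = 0; i < 20; i++)      // error: 'i' was not declared in this scope
    //     cout << i << endl;

    // Declaring i first works:
    int i;
    for (i = 0; i < 20; i++)
        cout << i << endl;

    return 0;
}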


Well, yeah, that answers one question, but why i? Why not a, or u?
Does that i mean something?


Relax and read some Wikipedia -> loop counter

FYI: if you don't like (too) short variable names, you are free to pick your poison. I think the C++ standard recommends that identifiers be allowed to be at least 1024 characters long.

i is generally used for loop counters.. but no one is forcing you.
Especially if the body (the part between { and }) of the for loop is big, you might consider using a more meaningful name for the loop counter.

There are a lot of 'naming conventions'.. each with people who like it and people who protest it.

I generally use i if the body is small, else something like iLoopCount or similar. In any case... consider what your personal preference is and stick to it (be consistent).
I personally have a strict naming convention.. e.g. pointers start with p, a 'char' starts with c, strings with sz, DWORDs with dw (or ul), etc.
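
As a rough sketch of what that can look like (the names and values here are just made up for illustration):

#include <iostream>
using namespace std;

int main()
{
    // Short counter name for a short body
    for (int i = 0; i < 3; i++)
        cout << i << endl;

    // A more descriptive counter name when the body grows,
    // plus the kind of prefixes mentioned above
    char cGrade = 'A';              // 'c' prefix for a char
    const char* szName = "Bob";     // 'sz' prefix for a zero-terminated string

    for (int iLoopCount = 0; iLoopCount < 3; iLoopCount++)
    {
        cout << szName << " got grade " << cGrade
             << " in round " << iLoopCount << endl;
    }

    return 0;
}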

It is a variable; you can name it whatever you want.

int bacon = 0;

for (bacon = 0; bacon != 20; ++bacon)

is a favourite of mine. However, "i" is typically used in for loops. I hate it when people declare the integer in the for loop, though:

for (int i = 0; i !=20; ++i)

that's just awful; declare ALL your variables at the top, please.

Absolutely not.. there is more wrong with declaring the variable at the top than inside the for statement.

It's all to do with scope: if you declare the variable above the for loop, it is still in scope after the for loop exits, which is usually not what you want.

Declaring it in the for statement means the variable will go out of scope when the loop ends.

So declaring it at the top without using it after the loop creates confusion for other people reading your code.
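
A small sketch of the difference (both loops print the same numbers; only the counter's scope differs):

#include <iostream>
using namespace std;

int main()
{
    // Counter declared before the loop: still in scope afterwards
    int i;
    for (i = 0; i < 3; i++)
        cout << i << endl;
    cout << i << endl;       // still legal here, prints 3 -- but was that intended?

    // Counter declared in the for statement: gone once the loop ends
    for (int j = 0; j < 3; j++)
        cout << j << endl;
    // cout << j << endl;    // error: 'j' was not declared in this scope

    return 0;
}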


Well, thanks, hehe. I came in with one question and got out with a lot of different answers that just generated even more doubts, but that's the way it's supposed to be, isn't it?

Anyway, thanks, and don't worry, you'll probably see me again really soon. :D

I would say that i has its origin in iterating, because you use it to iterate through all its values.

I believe that 'i' is used in accordance with the rules of Hungarian Notation; a coding style used to self-identify the type of variables (int-type vars use 'i').

Check out Hungarian Notation.


Nope. i has been used long before Hungarian Notation.

From Wikipedia:
"The original Hungarian notation, which would now be called Apps Hungarian, was invented by Charles Simonyi, a programmer who worked at Xerox PARC circa 1972-1981, and who later became Chief Architect at Microsoft. It may have been derived from the earlier principle of using the first letter of a variable name to set its type — for example, variables whose names started with letters I through N in FORTRAN were integers by default."


i for iterator is my guess. Second guess: it's from the i, j, k used for vector directions.

Actually, it predates computer programming entirely; it comes from conventions in mathematical notation, where i, j and k are traditionally used to indicate the indices of arrays and the iterative variables of summation (big-sigma) operations.
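
For example, the summation [tex]\sum_{i=1}^{n} x_i[/tex] says "add up the x's for i running from 1 to n", which is basically a for loop written in math. A rough C++ equivalent (the names and values here are assumed for illustration):

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<double> x = { 1.5, 2.5, 3.0 };

    // i plays the same role as the summation index in the sigma notation
    double total = 0.0;
    for (size_t i = 0; i < x.size(); i++)
        total += x[i];

    cout << total << endl;    // prints 7
    return 0;
}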

Index, in fact. It's a long-standing math tradition.
From math it was adopted by Fortran (which stands for Formula Translator), where any variable starting with I, J, K, L, M, or N was implicitly an integer. Then it became a programming tradition.

The world is so inclined towards Apple. That's why every first one (phones, tablets & even loops) starts with an i.... Just joking....

One of the first real Fortran programs ever written was about the gamma function, and in typical mathematical notation it evaluated [tex]\sum_{i=1}^{6} \frac{\gamma_i}{1+\lambda_i \tau}[/tex]. Unfortunately I haven't seen the original code for it. But the use of i, j, ... etc. is extremely common for subscripts in maths notation, so given that it was mathematicians who were the originators of computing, their notation became standard where it is applicable.

However, the first Fortran compiler manual [IBM 704] (published in 1956) uses i as a loop variable in the very first program (finding the largest number in a set).

Tradition.

Hey. Look, it's just a convention.
All you're saying is that I am creating a variable named i which has a data type of int.
The fact that it is so widely used as an example as well as in actual production code just means that it is universally accepted.
As the other contributors say, you don't have to use i.
The fact is that you cannot use int because that is a reserved word, so i is the next best thing.
i keeps it short but it's not very meaningful.


i is used probably because it stands for iteration. People choose to use it because i is short and it looks clean when you use it as an array index (compare array[i] to array[numloops]).
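
For instance (the array name and contents here are just assumed):

#include <iostream>
using namespace std;

int main()
{
    int scores[5] = { 10, 20, 30, 40, 50 };

    // A short counter keeps the indexing easy to read
    for (int i = 0; i < 5; i++)
        cout << scores[i] << endl;

    return 0;
}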
