For the general case, essentially, you break the exponent down into a series of "square" and "square then multiply by x" operations. If you imagine the exponent in binary, then for each bit position (ignoring the highest bit), you square the number, and in addition, if that bit is 1, you also multiply by x. So for example, to raise to the power 18, that's n = 10010 in binary. Chop off the leading 1 (leaving you with x), then for the next two "0"s square the number each time ((x^2)^2 = x^4), then for the "1" square and multiply by x ((x^4)^2 . x = x^9), then for the final "0" square again ((x^9)^2 = x^18).
Thus, the complexity in terms of the number of multiplications is 1 multiplication for each binary digit (bar the first), plus an additional multiplication for each bit that is "1". Assuming some average fraction of the bits set to 1, and taking the cost of finding the number of leading zeroes to be negligible, you can analyse the algorithm as being of O(log n) complexity if you consider the number of multiplications to be the rate-determining factor. (Or if you consider the worst case of 2 multiplications per bit, you can still analyse it as O(log n), since the number of multiplications per bit is bounded by a constant.)
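The walk through the bits described above can be sketched as follows (a minimal illustration, not production code -- names are mine):

```python
def power(x, n):
    """Left-to-right binary exponentiation: scan the bits of n from
    the highest down, squaring at every step and multiplying by x
    whenever the bit is 1."""
    assert n >= 1
    bits = bin(n)[2:]          # e.g. 18 -> "10010"
    result = x                 # accounts for the chopped-off leading 1-bit
    for bit in bits[1:]:       # remaining bits, high to low
        result *= result       # square for every bit position
        if bit == "1":
            result *= x        # extra multiply when the bit is set
    return result

print(power(2, 18))            # 262144, i.e. 2^18, via the 10010 walk
```

For n = 18 this performs 4 squarings plus 1 extra multiply, matching the count of one multiplication per bit (bar the first) plus one per set bit.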
If n is known in advance, you can sometimes do better by finding factorisations of the exponent that lead to fewer multiplications (see Knuth 4.6.3, pp. 463ff).
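As a concrete instance of the factorisation idea: for n = 15 (binary 1111) the binary method costs 3 squarings plus 3 extra multiplies, i.e. 6 multiplications, but computing x^3 first and then raising that to the 5th power needs only 5. A sketch (the function name is mine):

```python
def power_15_factored(x):
    """x^15 via the factorisation 15 = 3 * 5: five multiplications,
    versus six for the plain binary method on 1111."""
    y = x * x * x        # y  = x^3   (2 multiplications)
    y2 = y * y           # y^2 = x^6  (1)
    y4 = y2 * y2         # y^4 = x^12 (1)
    return y4 * y        # y^5 = x^15 (1) -- 5 in total
```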