This is because floating point types (float, double, long double) hold only an approximation of the value you think they hold, because of the way the value is encoded into the available memory (normally 4 or 8 bytes). e.g. you assign 22.7 to d, but what it actually stores is the nearest representable double, something slightly below 22.7 (roughly 22.699999999999999).
That some approximation is needed is self-evident, particularly for a float, because it (normally) occupies the same amount of memory as a long yet must cover a vastly larger range of numbers. Also, floating point constants have type double by definition, so 22.7 has type double; to get a float constant you have to suffix an F: 22.7F has type float.
So in your code you take a double constant (which is an approximation of 22.7), convert it to a float in the assignment to f, and then promote it back to a double for the comparison with d, which is itself an approximation of 22.7.
With all that approximating, converting and promoting, it is not surprising that the two values end up not exactly the same. In fact, what is more surprising is that in your first example they don't end up different.
The rule of thumb usually used when comparing floating point types is that == always returns false and != always returns true. Your first code example shows that this is not absolutely always the case, but it is pretty much always the case in anything but the most trivial examples.
Given that you should never use == or != to compare floating point types, the approach normally taken is to check that the values are within a predetermined tolerance, i.e. that the values are close rather than equal, e.g. fabs(d1 - d2) < 0.0001.
In fact 22.5 does have an exact representation as a double (and as a float), because it is a sum of powers of two (16 + 4 + 2 + 0.5), while 22.7 doesn't. However that is not the point: you don't need to know why one works and the other doesn't, you need to know that using == is a mistake.
To summarise: it's impossible to represent most decimal fractions exactly in binary floating point, so all you'll ever get is an approximation.
How good an approximation you get depends on the hardware, the compiler, the optimisation settings and, it sometimes seems, the phases of the moon (not really; there's no actual randomness involved, the results are just hard to predict by eye).
That's why you should never compare floating point numbers for exact equality.
If you need to compare them at all, check that they are equal within a specific accuracy range, say +/- 0.001 (or whatever you require; but note that the tighter you make the tolerance, the bigger the chance that you'll reject values you shouldn't).
And that's why, in any serious calculation where precision is required, floating point numbers are not used.
Instead, use integer mathematics or a fixed precision numerical computation library, and only convert to floating point (if at all) for the final presentation.
For example, an amount of money can be represented in dollars and cents as a floating point number, but it is far better represented as an integer number of cents (since $1 is 100 cents).
You're more precise, and as an added bonus your calculations are faster as well.