So, I'm playing around with an idea: converting primitives to and from a byte array for file logging. I've got all the details worked out, but there's one thing I don't understand. Take the following case, for example:

```
public static byte[] ToByta(short data) {
    return new byte[]{
        (byte) (data & 0xff),           // low byte
        (byte) ((data >>> 8) & 0xff)    // high byte
    };
}
```

The above converts a `short` value to a two-byte array using bitwise operators, low byte first (little-endian).
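To make the masking and shifting concrete, here's a quick trace for `1234` (hex `0x04D2`); the class name is just for illustration:

```
public class TraceToByta {
    public static void main(String[] args) {
        short data = 1234;               // binary 00000100 11010010, hex 0x04D2
        int low  = data & 0xff;          // keeps bits 0-7:  0xD2 = 210
        int high = (data >>> 8) & 0xff;  // keeps bits 8-15: 0x04 = 4
        // Narrowing 210 to a byte reinterprets it as -46
        System.out.println((byte) low + ", " + (byte) high);  // prints: -46, 4
    }
}
```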

Let's test it out:

- `0` yields `{0, 0}`
- `10` yields `{10, 0}`
- `-10` yields `{-10, -1}`
- `127` yields `{127, 0}`
- `128` yields `{-128, 0}`
- `1234` yields `{-46, 4}`
- `-1234` yields `{46, -5}`

Everything looks normal.
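Those numbers come from a small harness along these lines (a minimal sketch; `ToByta` is copied in so the snippet is self-contained, and the class name is arbitrary):

```
import java.util.Arrays;

public class ToBytaDemo {
    public static void main(String[] args) {
        short[] inputs = {0, 10, -10, 127, 128, 1234, -1234};
        for (short s : inputs) {
            System.out.println(s + " yields " + Arrays.toString(ToByta(s)));
        }
    }

    public static byte[] ToByta(short data) {
        return new byte[]{
            (byte) (data & 0xff),
            (byte) ((data >>> 8) & 0xff)
        };
    }
}
```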

Now, let's pass those arrays through the function that converts them back:

```
public static short ToShort(byte[] data) {
    if (data == null) return 0x0;
    if (data.length != 2) return 0x0;
    return (short) (
        data[0] |
        data[1] << 8
    );
}
```

Can you see anything wrong with that? Maybe.

Let's test it out:

- `0` yields `0`
- `10` yields `10`
- `-10` yields `-10`
- `127` yields `127`
- `128` yields `-128`
- `1234` yields `-46`
- `-1234` yields `-1234`
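These results can be reproduced by round-tripping each value through both methods (same sketch-style harness as before, with the unmasked `ToShort`):

```
public class RoundTripDemo {
    public static void main(String[] args) {
        short[] inputs = {0, 10, -10, 127, 128, 1234, -1234};
        for (short s : inputs) {
            System.out.println(s + " yields " + ToShort(ToByta(s)));
        }
    }

    public static byte[] ToByta(short data) {
        return new byte[]{(byte) (data & 0xff), (byte) ((data >>> 8) & 0xff)};
    }

    // The version WITHOUT the 0xff mask
    public static short ToShort(byte[] data) {
        if (data == null) return 0x0;
        if (data.length != 2) return 0x0;
        return (short) (data[0] | data[1] << 8);
    }
}
```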

As you can see, there are a few issues. I knew it had something to do with the bits, so I played around a bit and came up with this:

```
public static short ToShort(byte[] data) {
    if (data == null) return 0x0;
    if (data.length != 2) return 0x0;
    return (short) (
        0xff & data[0] |   // the only change: mask the low byte
        data[1] << 8
    );
}
```

Which seems to work perfectly (a full round-trip check is at the end of this post). All I added was the `0xff &` in the combination sequence, and I don't know why it works with the `0xff &` but fails without it.

Can someone please explain that to me?

`10101010` does not equal `11111111 & 10101010`??? :-|
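For completeness, here's the round-trip check mentioned above, using the fixed `ToShort`; every value comes back unchanged, including the extremes (class name is arbitrary):

```
public class RoundTripFixed {
    public static void main(String[] args) {
        short[] inputs = {0, 10, -10, 127, 128, 1234, -1234,
                          Short.MIN_VALUE, Short.MAX_VALUE};
        for (short s : inputs) {
            short back = ToShort(ToByta(s));
            System.out.println(s + " -> " + back + (back == s ? " (ok)" : " (MISMATCH)"));
        }
    }

    public static byte[] ToByta(short data) {
        return new byte[]{(byte) (data & 0xff), (byte) ((data >>> 8) & 0xff)};
    }

    // The fixed version, with the 0xff mask on the low byte
    public static short ToShort(byte[] data) {
        if (data == null) return 0x0;
        if (data.length != 2) return 0x0;
        return (short) (0xff & data[0] | data[1] << 8);
    }
}
```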