I know this is probably overly simple, but I am terrible with math.

This is a section from a game I'm attempting with PyGame. Instead of using pixels for coordinates, I have my own render function that uses blocks (it actually just converts the block value to pixels), which are 32x32 pixels, so it's easier to make levels. At least I think it will be. :S

Like the title says, if I accidentally put in a pixel value that is not a multiple of 32, my code raises a ValueError. Instead, I want it to round the pixel value to the nearest multiple of 32 and just warn me that it was incorrect.

```python
class graphics():
    def __init__(self):
        pass

    def render(self, sprites):
        for sprite, coord in sprites:
            surface.blit(sprite, (coord[0], coord[1]))
        pygame.display.update()

    def PTB(self, pixels):  # Pixels to blocks
        if pixels == 1024: return 1024/32

        if pixels not in range(0, 1024, 32):
            raise ValueError("PTB: Pixels not a multiple of 32.")
            # instead of an error, round to nearest multiple
        else:
            return pixels / 32

    def BTP(self, blocks):  # Blocks to pixels
        return blocks * 32

gfx = graphics()
```
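A minimal sketch of what the asker describes: warn and round to the nearest multiple of 32 instead of raising. The function name `ptb` and the warning text are mine, not from the thread; it uses integer arithmetic so it works the same in Python 2 and 3.

```python
import warnings

def ptb(pixels, block=32):
    """Convert a pixel value to blocks, rounding to the nearest
    multiple of the block size with a warning instead of raising."""
    if pixels % block:
        # add half a block, then floor-divide: rounds to the nearest multiple
        nearest = ((pixels + block // 2) // block) * block
        warnings.warn("PTB: %d is not a multiple of %d, using %d instead"
                      % (pixels, block, nearest))
        pixels = nearest
    return pixels // block
```

Values already on the grid pass through silently; anything else triggers a single warning and is snapped to the nearest block.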



`number >> 5 << 5`

Shift the five lowest bits out and shift zero bits back in their place. That truncates, however, so you may want to add 16 before shifting.
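A quick check of the shift idea (the function name is mine): since 32 is 2^5, shifting right then left by 5 clears the five lowest bits, and adding 16 beforehand turns the truncation into round-to-nearest.

```python
def round32_shift(n):
    # add half a block (16) so the truncating shifts round to nearest
    return (n + 16) >> 5 << 5
```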


I suggest `def round32(x): return (x+16) & ~31`. As tonyjv said, we must choose whether a midpoint of 16 rounds to 0 or to 32. This rounds it to 32; to round it down instead, add 15 instead of 16.
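Both variants side by side (the two function names are mine), showing how the choice of 16 versus 15 only changes what happens exactly at the midpoint:

```python
def round32_half_up(x):
    # midpoint 16 goes up to 32
    return (x + 16) & ~31

def round32_half_down(x):
    # midpoint 16 goes down to 0
    return (x + 15) & ~31
```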


@Gribouillis: I was thinking of using AND, but was unsure how to get the other bits complemented (the NOT operator, `~`). For some people it might be clearer to write the 'magic numbers' in binary or hexadecimal to make their meaning easier to see:

`def round32(x): return (x + 0b10000) & ~0b11111`

I was not referring to midpoint rounding; I meant the difference between truncating, like `int()` with no addition (`int(1.6) == 1`), and rounding to nearest by adding the midpoint first, like `round()` (`round(1.6) == 2`).
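The same truncate-versus-round distinction, applied to the bitmask trick (variable names are mine for illustration):

```python
x = 50                      # between 32 and 64, above the midpoint 48
truncated = x & ~31         # like int(): just drop the five low bits
rounded = (x + 16) & ~31    # like round(): add the midpoint first
```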


`def round32(x): return (x+16) & ~31`

This seems to work right. Could you please explain what the '&' and '~31' are used for?


When working with bitwise operations (~, &, |, ^), it's better to think of an integer as a semi-infinite sequence of bits. See the examples below

```python
# python 2 and 3
from __future__ import print_function

def tobit(n, width=50):
    m = (1 << (width - 3))
    n = (n & (m-1)) if n >= 0 else ((n & (m-1))+m) & (m-1)
    z = "...{n:0>{w}}".format(n=bin(n)[2:], w=width-3)
    return z

L = [354777, 4191, -24522, 774, -32]
print("various numbers", L, ":")
for n in L:
    print(tobit(n))

print("")

x, y = 4577822, 768887

print("x, y, x & y:")
for s in (x, y, x & y):
    print(tobit(s))
print("the bits set in x & y are the common bits set in x and in y")

print("")
print("x, ~x:")
for s in (x, ~x):
    print(tobit(x))
print("the bits set in ~x are the bits unset in x")

print("")
print("x, ~31, x & ~31:")
y = ~31
for s in (x, y, x & y):
    print(tobit(s))
print("in ~31, only the five lowest bits are unset")
print("x & ~31 is x with the five lowest bits forced to 0")

""" my output --->
various numbers [354777, 4191, -24522, 774, -32] :
...00000000000000000000000000001010110100111011001
...00000000000000000000000000000000001000001011111
...11111111111111111111111111111111010000000110110
...00000000000000000000000000000000000001100000110
...11111111111111111111111111111111111111111100000

x, y, x & y:
...00000000000000000000000010001011101101000011110
...00000000000000000000000000010111011101101110111
...00000000000000000000000000000011001101000010110
the bits set in x & y are the common bits set in x and in y

x, ~x:
...00000000000000000000000010001011101101000011110
...00000000000000000000000010001011101101000011110
the bits set in ~x are the bits unset in x

x, ~31, x & ~31:
...00000000000000000000000010001011101101000011110
...11111111111111111111111111111111111111111100000
...00000000000000000000000010001011101101000000000
in ~31, only the five lowest bits are unset
x & ~31 is x with the five lowest bits forced to 0
"""
```


Oops! There was an error in the function tobit() above. Replace it with

```python
def tobit(n, width=50, reverse=False):
    m = (1 << (width - 3))
    n = (n % m) & (m-1)
    z = "...{n:0>{w}}".format(n=bin(n)[2:], w=width-3)
    return z[::-1] if reverse else z
```

The output is now

```
"""
various numbers [354777, 4191, -24522, 774, -32] :
...00000000000000000000000000001010110100111011001
...00000000000000000000000000000000001000001011111
...11111111111111111111111111111111010000000110110
...00000000000000000000000000000000000001100000110
...11111111111111111111111111111111111111111100000

x, y, x & y:
...00000000000000000000000010001011101101000011110
...00000000000000000000000000010111011101101110111
...00000000000000000000000000000011001101000010110
the bits set in x & y are the common bits set in x and in y

x, ~x:
...00000000000000000000000010001011101101000011110
...11111111111111111111111101110100010010111100001
the bits set in ~x are the bits unset in x

x, ~31, x & ~31:
...00000000000000000000000010001011101101000011110
...11111111111111111111111111111111111111111100000
...00000000000000000000000010001011101101000000000
in ~31, only the five lowest bits are unset
x & ~31 is x with the five lowest bits forced to 0
"""
```

