I have an automated API with a secure frontend/backend structure where backend object attributes are hidden behind frontend proxy classes.

basically, when accessing an object from the frontend, you get a proxy object disguised as the backend object, with all of its explicitly "public" functionality.

what I'm dealing with right now seems to be a design flaw with python's descriptor implementation:

Instance.Attr[item] = value

what this equates to in Python is:

Instance.__class__.Attr.__get__( Instance, Instance.__class__ ).__setitem__( item, value )

the insecurity here is that __get__ gets called instead of it just being __setitem__, meaning I have to pass a private object into my frontend to be indexed from.
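
roughly, the situation looks something like this (a toy sketch; none of these are my real class names):

class BackendList(list):
    secret = "private state"            # not meant to reach the frontend

class AttrDescriptor(object):
    def __get__(self, instance, owner):
        # __get__ has to hand back something indexable, so the whole
        # private object escapes into the frontend here
        return instance._backend

class Frontend(object):
    Attr = AttrDescriptor()
    def __init__(self):
        self._backend = BackendList([0])

f = Frontend()
f.Attr[0] = 1            # works, but...
print(f.Attr.secret)     # ...the private object is now fully reachable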

or to put it another way, what I expect to be doing is:

Instance.__class__.Attr.__setitem__( Instance, item, value )

is there a way to get this expected behavior??

Hi Tcll, it's been a while indeed!

I don't think there is a flaw in python's descriptor implementation. Also I don't understand why you think there is a security issue, nor which 'private object' is 'passed into your frontend'.

If Attr is a descriptor in the class, then indeed Instance.Attr is translated into Instance.__class__.Attr.__get__(Instance, Instance.__class__).

It can't just be Instance.__class__.Attr, because the value of Instance.__class__ is simply the class and does not remember which instance it was accessed from. That means the instance has to be passed in again to retrieve the value of the attribute, which is what __get__ does.

Perhaps you want to implement an alternative to python's descriptors: an object to put in the class that would have __getitem__ and __setitem__ methods instead of __get__ and __set__ methods. But why would this be better? Can you explain in a more detailed way?

I see, god, when will I ever learn to provide the proper details XD

so basically, yes, I have a backend class with private and public attributes.
when the class is created, it uses __new__ to create an associated proxy class with only the public attributes.
(these public attributes are descriptors that link to the backend object's attributes)

so that is where the security comes in, since you can't access the private attributes through the proxy.
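
to give a rough idea, minus the __new__ automation, it's something along these lines (a simplified sketch, not my actual code; the WeakKeyDictionary lookup and all the names here are just for illustration):

import weakref

# proxy -> backend mapping, kept outside the proxy so the frontend
# never holds the backend object directly
_backends = weakref.WeakKeyDictionary()

class PublicAttr(object):
    def __init__(self, name):
        self.name = name
    def __get__(self, proxy, owner):
        if proxy is None:
            return self
        return getattr(_backends[proxy], self.name)
    def __set__(self, proxy, value):
        setattr(_backends[proxy], self.name, value)

class Backend(object):
    def __init__(self):
        self.Vertice = (0.0, 0.0, 0.0)   # "public"
        self._internal = "hidden"        # "private", never given a descriptor

class BackendProxy(object):
    Vertice = PublicAttr("Vertice")      # only the public attributes are exposed

def make_proxy(backend):
    proxy = BackendProxy()
    _backends[proxy] = backend
    return proxy

p = make_proxy(Backend())
print(p.Vertice)    # forwarded to the backend; p has no _internal at all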

but what I'm trying to do is automate access to particular instances at indices of my collection type, which is why I need __getitem__, but I don't want to make an internal object accessible to the frontend.

basically, I want something like:

Facepoint.UV

to behave like

Facepoint.UV[0]

for cases where a channel isn't needed.

well, after pondering for a good while, the best idea I've got is basically a placebo object that presents itself as the object meant to be returned.

one thing I failed to note earlier (it slipped my mind) was a particular attribute only accessible via Facepoint:

Facepoint.Vertice = ( 1.0, 1.0, 1.0 )
Facepoint.Vertice.Index

Facepoint.UV.Index
Facepoint.UV[1].Index = 438

I should note that what's stored in Facepoint.Vertice is converted to a vector instance.

EDIT: I should also note that Facepoint is not static, but comes from the iterated result of Primitive.Facepoints

for Object in Scene.Objects:
    Vertices = Object.Vertices

    for MeshName, Primitives in Object.Primitives.items():
        for Primitive in Primitives:
            for Facepoint in Primitive.Facepoints:

it continues even further.

It's impossible for facepoint.UV to behave like facepoint.UV[0], because then the result of facepoint.UV[1] would be the same as facepoint.UV[0][1]: in facepoint.UV[1], the expression facepoint.UV is computed first. There is no way out of this in python.

What you could do is have facepoint.UV return a proxy of the real UV. I'm thinking of a symbolic reference that stores nothing but the facepoint. Something along the lines of

class UVdescriptor(object):
    def __get__(self, obj, cls):
        if obj is not None:
            return UVproxy(obj)

class Facepoint(object):
    UV = UVdescriptor()

def _getuvitem(facepoint, index):
    # backend_peer() stands in for whatever lookup maps the frontend
    # facepoint to its hidden backend object
    return backend_peer(facepoint).UV[index]

def _setuvitem(facepoint, index, value):
    backend_peer(facepoint).UV[index] = value

class UVproxy(object):
    def __init__(self, facepoint):
        self.facepoint = facepoint

    def __getitem__(self, index):
        return _getuvitem(self.facepoint, index)

    def __setitem__(self, index, value):
        _setuvitem(self.facepoint, index, value)
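
For illustration, here is how that sketch could be driven, with a toy backend_peer() standing in for the real backend lookup (everything below is made up):

# toy backend and lookup so the sketch above can actually be exercised
class _ToyBackend(object):
    def __init__(self):
        self.UV = [(0.0, 0.0), (0.0, 0.0)]

_peers = {}

def backend_peer(facepoint):
    return _peers.setdefault(id(facepoint), _ToyBackend())

fp = Facepoint()
fp.UV[0] = (0.25, 0.75)   # UVdescriptor.__get__ -> UVproxy.__setitem__
print(fp.UV[0])           # UVproxy.__getitem__ -> (0.25, 0.75)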

However there is a good chance that you achieve only an illusory feeling of security with your proxies. You know that python code has very powerful introspection capabilities, and it is a real challenge to ensure that the user of some python code cannot completely expose the inside.

there goes my bad habit again :P
nothing I illustrated above actually works; it's how I wanted it to work :P

but thanks for working with me ;)

and yeah, I know nothing in python is truly secure, however, there is a difficulty level to hacking my API which I'm comfortable with.
(I do want my API to be hackable, just not easily, given the cost of internals changing between updates)

if you understand __closure__ well, then it'll be relatively easy to gain backend access provided you can gain access to the metaclass lookup and avoid the traps of importing outside the restricted namespace.
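
just to illustrate the kind of introspection I mean (a toy example, nothing from my actual backend):

def make_getter(backend):
    def getter():
        return backend.public_value
    return getter

class Backend(object):
    public_value = 1
    _secret = "hidden"

g = make_getter(Backend())
# the closed-over backend is sitting right on the function object:
leaked = g.__closure__[0].cell_contents
print(leaked._secret)    # "hidden"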

Heck if that's not enough, I even have a pre-processor which analyzes the script looking for hacking attempts that might break the backend structure.

so yeah, you gotta be good to hack my API ;)

A remark: if you create a lot of proxies such as UVproxy above, it is a good idea to make them flyweight by using the __slots__ attribute. Basically, they are nothing more than a smart pointer.
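
For example, a rough size comparison (exact numbers vary between Python versions):

import sys

class FatProxy(object):
    def __init__(self, peer):
        self.peer = peer

class SlimProxy(object):
    __slots__ = ("peer",)        # no per-instance __dict__
    def __init__(self, peer):
        self.peer = peer

fat, slim = FatProxy(None), SlimProxy(None)
print(sys.getsizeof(fat) + sys.getsizeof(fat.__dict__))   # instance + its dict
print(sys.getsizeof(slim))                                 # just the instance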

haha, I've fallen in love with, married, divorced, and re-married __slots__ =D
best magic ever when it comes to memory use =)

but yeah, I'm even careful to track the class instances created.
one reason for that being the debugger in SIDE, since deleting instances seemed to cause problems with array() if you remember that ;)

I'm as anal about memory as I am about performance and security ;)

so after watching a recent video https://www.youtube.com/watch?v=QM1iUe6IofM
I've changed my outlook on how things should work.

basically, the new idea is to make any attribute with a channel index plural and return the descriptor itself as an iterable object:

Colors = Facepoint.Colors
UVs = Facepoint.UVs

with this new approach, people should be more inclined to index, or otherwise use a for loop.

and since the returned object is the descriptor itself (since methods track the instance),
__getitem__ (wrapped with a vanilla method wrapper in __new__) should effectively be the same as __get__ on non-channel descriptors...
basically making it possible to do:

Facepoint.Vertice.Index
Facepoint.UVs[0].Index

since what's returned would be a proxy object representing the vector.

thanks for helping me understand, Grib ;)

Why not return iterables indeed? It all depends on what the user wants to do with it. The risk is to store a whole list of all instances when one only needs a few of them. I like the UVproxy solution because I already used such objects in the past in connection with databases, and they turned out to be very flexible: it is easy to fine-tune the behaviour of the proxies.

Concerning the video, it is a modern trend to say that OOP is bad. I think it is pure ideology. Consider how OOP spontaneously appeared in the 70's. At that time, good C programmers who used C structures (struct) wrote bunches of helper functions to manipulate these structures. The object appeared from packing up the structure and the helper functions in a single entity.

This original version of OOP is fundamentally GOOD; it was great progress in programming. Big OOP theories and constructions with a lot of subclassing are probably bad. Some people now think they will gain flexibility by unpacking the whole thing, and in a way it is true, but it comes at the cost of efficient program organization, and I think there are other ways. I tried to develop small entities centered around an algorithm instead of a piece of memory. I call them algons, grains of algorithms. There are other paths to flexibility than simply throwing away OOP.

So, if you want my advice, don't rush to follow this trend because it may bring you no benefit.

heh, don't worry about me following trends ;)
in the evolution of my programming practices, I've tended to just do my own thing that works best for both the program and use case.
funny enough, this tends to follow along with procedural programming, so I called it as such in my comment on the video.

anyways, with the security of my frontend, and to prevent noobs from doing stupid stuff and creating bad trends that would likely break scripts once the program has been updated,
I can't just return a plain iterable like a list.

I went for returning the descriptor itself as the iterable because it's basically an empty class with set and get attributes alone, so I figured I could extend its functionality while reducing the memory footprint. ;)

I can't quite stop there though: again, with the security restrictions in place, I need a proxy attribute which returns a proxy of the descriptor, to make sure the objects returned are also proxies.

basically, once a proxy, always a proxy.

it's a complicated system, but a very easy-to-grasp security concept.

if I had designed my API in C, I'd've had my work done for me, since the C-API does exactly that when handling python objects, but I'm stuck with python until I release... heh
going back now would add years of work, and I'm just too close to a release right now.

hmm, I think this new concept idea actually works better than I thought :D

Facepoint.Weights["Bone"] = 1.0
Facepoint.Materials[0] = "Material"

Note that Material retrieval will return the Material proxy, from which you can obtain the Name or Index.

oh... ok... did not think that one through...

using a custom descriptor to track the collection based on the instance only works well for the single attribute.
it does NOT work well with __getitem__ for channels, because there actually isn't a good way of tracking the instance.

it almost works, but globals break it, since they can get out of sync and you end up with a different instance than the one you expect.
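
a toy version of the failure, to show what I mean (not my actual wrapper code):

class ChannelDescriptor(object):
    def __get__(self, instance, owner):
        self._last = instance          # shared state: ONE descriptor per class
        return self
    def __getitem__(self, index):
        return self._last.data[index]  # may point at a different instance by now

class Facepoint(object):
    UVs = ChannelDescriptor()
    def __init__(self, data):
        self.data = data

a = Facepoint(["a0", "a1"])
b = Facepoint(["b0", "b1"])

uvs_a = a.UVs
_ = b.UVs          # __get__ runs again and clobbers the tracked instance
print(uvs_a[0])    # prints "b0", not "a0"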

so that went out the window, however, the idea isn't scrapped, just needs a bit of tweaking.

I think I'm starting to understand why you create separate proxy instances in your above code, and I hate it... lol

I really don't want to rely on GC for this >_>

There is something interesting in your last post. You could indeed create a single UVproxy for each facepoint, only if and when it is needed. Just like a cached property. It would work very well.
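
Something along these lines, reusing the names from the earlier sketch (assuming each facepoint has somewhere to stash its proxy, for example an instance __dict__ or a dedicated slot):

class UVdescriptor(object):
    def __get__(self, facepoint, cls):
        if facepoint is None:
            return self
        try:
            return facepoint._uvproxy              # reuse the cached proxy
        except AttributeError:
            proxy = facepoint._uvproxy = UVproxy(facepoint)
            return proxy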

ah ok, I was just overthinking it with the GC, alright thanks :)
