First off, I know this question doesn't exactly belong here...
I'm asking here because this is the lowest common ground between binary opcodes and:

# python 2.7
class A(object):
    def __init__(this):
        print 'I am initialized!'

instance = A()

My question is: what's the equivalent of that in either PPC or x86_64 binary opcodes?
(not python byte-code or ASM, though the low-level ASM range would be legit)

I'm trying to design a common interface language for recompilation purposes between opcodes and code.
but I need to know how a class/object would work on a CPU before I can start finalizing the designs.

thanks :)
___

for your concerns as to why I'd need this:

GCN/Wii: DOL/REL (PPC)
Windows: EXE/DLL (x86_64)

imagine being able to convert between those.

My language is designed for identifying large structures of ASM or binary opcodes and simplifying them into basic code keywords and statements, which can then be parsed into higher level languages such as C++, Java, or even Python (including byte-code).
(and vice-versa)
___

what exactly is my language?
you'll see it when I release my open-source program ;)

I'm not too willing to share right now because it's still very much incomplete.
my only knowledge of the CPU comes from partially building one in Minecraft (Redstone)
(you might want to look into RedGame for something more functional than what I've built)

if you still want to know, I'll share this:
https://picasaweb.google.com/110263988688421174390/UMCSL
^ some older attempts at designs and a partially working interpreter (now lost)

I'm designing a better interpreter than the horrid methods I was using in that one.

I'm not a computer genius, but that doesn't mean I'm incapable of building something that works. ;)

what's the equivalent of that in either PPC or x86_64 binary opcodes?

Are you asking us to compile this Python code? You can use a Python-to-C/C++ conversion tool like Cython and then compile the result with a C/C++ compiler. If you specify PPC as the target, you'll get PPC machine code; if you specify x86_64 as the target, you'll get that machine code. The question doesn't really make sense when asked of human beings; it is the reason compilers exist, that's their job.

how a class/object would work on a CPU before I can start finalizing the designs.

A class or object does not work any differently on a CPU than plain old C code. After compilation / interpretation / JIT'ing / whatever, the code is just plain function calls, jumps and operations on registers. The concept of a "class" or an "object" doesn't really survive past the first pass of the compiler (the "semantic analyser").

I would recommend that you start by learning how to write object-oriented code in C. C is pretty much the closest-to-hardware language that is still human-readable, and it has a very straightforward, obvious (and standardized) mapping to machine code. If you wonder how anything could be done in actuality, just try to write the exact equivalent code in C (which is always possible, but sometimes hard). Then, if you really need to know what it looks like in assembly, just use a C compiler and ask it to generate the assembly listings (the option is -S on most compilers). As for getting the PPC or x86_64 machine code, well... that's the executable / object files you get after compilation; they are in machine code, that's the point.
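
To make that concrete, here is a minimal sketch of how the Python class at the top of the thread could be written as plain C-style code (the names A_init and self are made up for illustration); this is what "object-oriented code in C" boils down to, and it compiles as both C and C++:

#include <stdio.h>

/* the "class" is just a struct; with no data members it's only a placeholder */
struct A { int dummy; };

/* the "method" is just a function taking a pointer to the struct,
   playing the role of Python's __init__ and its "this"/"self" argument */
void A_init(struct A *self) {
    (void)self;  /* nothing is stored in the object in this example */
    printf("I am initialized!\n");
}

int main(void) {
    struct A instance;   /* the object lives on the stack here */
    A_init(&instance);   /* "instance = A()" ends up as a plain function call */
    return 0;
}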

And if you want to know what some Python code would look like in ASM, then you're out of luck, my friend. Python is not a standardized language; there is absolutely no specified or guaranteed behavior. You write Python code, feed it to CPython, and something happens. You can reasonably expect a certain behavior, based on what the CPython docs tell you, but there is no formal specification that guarantees any kind of behavior. In other words, a simple "hello world" program that completely ignores the Python file you specify as a command-line argument would be a perfectly valid Python interpreter, because "nothing ever happens" is a valid behavior for any Python program.

Jokes aside, my point is that you will never get a straight answer to the question of how a Python class or function is realized in actual native code. You can dig into the CPython implementation if you want to see how they do it, but that is neither a generally-applicable nor a definitive answer.

One place you could look for an answer that is closer to the "Python-class-to-ASM" question is the Itanium C++ ABI specification. This is the informal standard C++ ABI that most major compilers follow (except Microsoft... sigh), and it specifies exactly how classes of all kinds (in C++) translate to actual implementations in memory.
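
As a rough illustration of the kind of thing such an ABI pins down, here is a hand-written approximation of how a class with one virtual function is typically laid out: each object carries a hidden pointer to a per-class table of function pointers. The names (Shape, Shape_vtable, and so on) are made up, and real ABIs add much more (RTTI, offsets, multiple inheritance), so treat this as a sketch only:

#include <cstdio>

struct Shape_vtable {
    void (*draw)(void *self);        // one slot per virtual function
};

struct Shape {
    const Shape_vtable *vptr;        // hidden member the compiler adds for you
    int x, y;                        // ordinary data members follow
};

static void Shape_draw(void *self) {
    Shape *s = static_cast<Shape *>(self);
    std::printf("shape at %d,%d\n", s->x, s->y);
}

static const Shape_vtable shape_vtable = { Shape_draw };

int main() {
    Shape s = { &shape_vtable, 1, 2 };  // "constructing" the object installs the vptr
    s.vptr->draw(&s);                   // a virtual call: load the vptr, index the table, call through
    return 0;
}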

I'm trying to design a common interface language for recompilation purposes between opcodes and code.

It sounds like you are trying to implement something like LLVM's Code Generator, used by the Clang compilers (used on Macs by default for C/C++/Obj-C, and optionally on other systems too). Most compilers have a back-end that compiles from some intermediate language (or opcodes, bytecodes, etc.) to machine code.

It also sounds like you want to implement something along the lines of any one of Microsoft's attempts to create the "one model to rule them all", like COM (Component Object Model) or .NET CLR.

imagine being able to convert between those.

That's called an emulator, like qemu. AFAIK, qemu only does dynamic translation (on the fly) from any architecture to the host architecture. I guess it wouldn't be impossible to do a batch translation of the whole thing. There is also a qemu back-end that produces LLVM bitcode (which can then be optimized and compiled into any target architecture that LLVM supports, which is most of them).

To be honest, translating the machine code from one architecture to another is not really the issue. The main problem is all the links to the outside world, like system calls, library calls, etc... which you cannot easily port.

My language is designed for identifying large structures of ASM or binary opcodes and simplifying them into basic code keywords and statements, which can then be parsed into higher level languages such as C++, Java, or even Python (including byte-code).

That is really the most insane part of your post. You're not much of a fan of information theory, are you? Most of the information conveyed by source code is useless noise as far as compilers and code generators are concerned; they wash away nearly everything. So, from our perspective (human beings), all that nice and valuable information about how the code works that is represented in the way the source code is structured, well, all of that is gone, long gone, by the time it hits bytecode, IR, IL or, of course, machine code. There is literally no way to reconstruct it.

There is one exception, which is the .NET CLR/CIL: a really high-level IR that actually preserves nearly everything of the original source code, which means that (1) it is really slow and (2) it is super easy to hack, exploit, attack and all that nasty stuff (but it certainly makes for interesting reading to find out just how ridiculously easy .NET code is to hack).

I'm pretty sure that the most you could hope for in most cases is to generate some barely human-readable C code.

And, to answer your question about what that program would look like in ASM, I wrote this equivalent C++ program:

#include <iostream>

class A {
public:
  A() {  // the constructor, playing the role of Python's __init__
    std::cout << "I am initialized!" << std::endl;
  }
};

int main() {
  auto instance = A();  // equivalent to "instance = A()"
}

Which gave this ASM:

    .file   "class_to_asm.cpp"
    .section    .rodata.str1.1,"aMS",@progbits,1
.LC0:
    .string "I am initialized!"
    .section    .text.startup,"ax",@progbits
    .p2align 4,,15
    .globl  main
    .type   main, @function
main:
.LFB1269:
    .cfi_startproc
    subq    $8, %rsp
    .cfi_def_cfa_offset 16
    movl    $17, %edx
    movl    $.LC0, %esi
    movl    $_ZSt4cout, %edi
    call    _ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l
    movl    $_ZSt4cout, %edi
    call    _ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_
    xorl    %eax, %eax
    addq    $8, %rsp
    .cfi_def_cfa_offset 8
    ret
    .cfi_endproc
.LFE1269:
    .size   main, .-main
    .p2align 4,,15
    .type   _GLOBAL__sub_I_main, @function
_GLOBAL__sub_I_main:
.LFB1425:
    .cfi_startproc
    subq    $8, %rsp
    .cfi_def_cfa_offset 16
    movl    $_ZStL8__ioinit, %edi
    call    _ZNSt8ios_base4InitC1Ev
    movl    $__dso_handle, %edx
    movl    $_ZStL8__ioinit, %esi
    movl    $_ZNSt8ios_base4InitD1Ev, %edi
    addq    $8, %rsp
    .cfi_def_cfa_offset 8
    jmp __cxa_atexit
    .cfi_endproc
.LFE1425:
    .size   _GLOBAL__sub_I_main, .-_GLOBAL__sub_I_main
    .section    .init_array,"aw"
    .align 8
    .quad   _GLOBAL__sub_I_main
    .local  _ZStL8__ioinit
    .comm   _ZStL8__ioinit,1,1
    .hidden __dso_handle
    .ident  "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
    .section    .note.GNU-stack,"",@progbits

Which is pretty much self-explanatory if you are the least bit used to reading ASM listings (as all programmers should be). And here is the kicker... the most you could deduce (of the main function) from the above assembly is some code like this:

namespace std {
  extern basic_ostream<char, char_traits<char> > cout;
  basic_ostream<char, char_traits<char> >& __ostream_insert<char, char_traits<char> >(basic_ostream<char, char_traits<char> >&, char const*, long);
  basic_ostream<char, char_traits<char> >& endl<char, char_traits<char> >(basic_ostream<char, char_traits<char> >&);
};

int main() {
  std::__ostream_insert(std::cout, "I am initialized!", 17);
  std::endl(std::cout);
}

Not exactly super interesting if you are looking to resurrect the higher-level original C++ program. The "A" class doesn't even exist anymore; as I explained already, it's washed away, as it would be in any language that respects itself (and no, .NET languages are not among them). And btw, I omitted the static init code, which only adds more noise. And finally, you are lucky that this particular example doesn't really contain many goto instructions, as ASM is usually riddled with gotos, which are very hard to translate back into any kind of code that isn't meaningless, unreadable spaghetti crap.

Are you asking us to compile this Python code?

no, I'm asking for a binary representation that would work similarly.
(ASM doesn't quite tell me how the CPU works)
^especially high-level ASM such as MASM or FASM, which seems to be what you posted

I will mention, I'm hardly educated in ASM, though I'm learning through working with stuff dealing with it...

the code is just plain function calls, jumps and operations on registers

so I've been told...

I have an assumption that a class works similar to a struct but with function pointers.
(credit to APott aka NardCake for this idea)

It sounds like you are trying to implement something like LLVM's Code Generator

I guess it could be similar >.>

though looking at this, it seems to be direct ASM -> code...
that approach is a lot more complex to have to deal with.

my approach is a language designed slightly above ASM level...
I'm not worrying about the code (C++, Java, etc) just yet.

I need to perfect what my language can handle, and add a few features that keep things just below C.

I haven't shown it in any of the images I supplied earlier, but I'm already supporting pointers and functions using the CPU stack.
so yes, the language IS functional, and like I said, I've built an interpreter shown in a few of the supplied images.

I did say those images are really old... heh
(about 3 to 4 years to be exact)

That's called an emulator

no, an emulator is really nothing more than an interpreter, substituting the commands in the given executable with the equivalent commands for the host machine at emulation runtime.
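
to illustrate what I mean, a toy dispatch loop like this (the opcodes and the little one-register machine are completely made up) is basically all an emulator's core is doing:

#include <cstddef>
#include <cstdio>
#include <vector>

// toy opcodes for a made-up one-register machine, purely for illustration
enum Op : unsigned char { OP_HALT, OP_LOAD, OP_ADD, OP_PRINT };

// the whole "emulator": fetch a guest opcode, substitute the equivalent host action
void run(const std::vector<unsigned char> &program) {
    unsigned acc = 0;                              // the one and only "register"
    std::size_t pc = 0;                            // the "program counter"
    while (pc < program.size()) {
        switch (program[pc]) {                     // fetch + decode
        case OP_LOAD:  acc  = program[pc + 1]; pc += 2; break;   // execute on the host
        case OP_ADD:   acc += program[pc + 1]; pc += 2; break;
        case OP_PRINT: std::printf("%u\n", acc); pc += 1; break;
        default:       return;                     // OP_HALT or anything unknown
        }
    }
}

int main() {
    run({OP_LOAD, 40, OP_ADD, 2, OP_PRINT, OP_HALT});   // prints 42
    return 0;
}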

the correct term is recompiler.
(a Wii emulator can't reconstruct a DOL into an EXE)

There is literally no way to reconstruct it.

I can argue... it's been done already...
IDA Pro can be, and has been, used (with a plugin) to convert PPC executables into x86_64 executables.
https://github.com/kakaroto/ps3ida/tree/master/plugins/PPCAltivec
the process is extremely complex, as can be imagined.
and it still follows an overly complex direct approach.

what I'm trying to do falls more along the lines of that.
but the language I'm designing is more of a bottleneck to use for converting binary into HL code.

the program that will be able to do this (when I get up to this level) will be called:
Universal Game Converter

you can use it to convert something like Minecraft into a Wii executable.

this project is an upgrade from my current project:
Universal Model Converter

this is the reason my language is currently called UMCSL...
when I release the language, it will be named after my final project, UGESL.

before you get confused however, this is NOT a "Scripting Language" as the title portrays, it's a recompilation language made easy to understand.

scripting with it is only an additional feature, so you can expect poor performance during testing.
(and even poorer since the backend is Python-based to handle the dynamic interface)

btw, I forgot to thank you for doing that work of gathering an ASM output :)

but yea, as for the stuff being washed away...
I'll most likely need separate backends between my language and ASM, and my language and code.

I figured that out a while back in a Skype chat with a friend... heh
___

when importing/decompiling:

a script will decompile the executable into ASM and pass it to my interface

the backend for ASM will aid in identifying large reused structures such as class instances for my code.

<--UMCSL-->

the backend for code will restructure my language into more complex structures for higher level languages.

a script will write the (C++, Java, etc.) code using the structures obtained.

note: when I say "a script"
my interface is fully extendable, so scripts add new support for my interface.

so yea, I've already got the main idea... it's really not hard. :)

oh yea... after the <--UMCSL-->
that's an export operation, where you choose what to export (ASM or code)

Not exactly super interesting if you are looking to resurrect the higher-level original C++ program. The "A" class doesn't even exist anymore; as I explained already, it's washed away, as it would be in any language that respects itself (and no, .NET languages are not among them). And btw, I omitted the static init code, which only adds more noise. And finally, you are lucky that this particular example doesn't really contain many goto instructions, as ASM is usually riddled with gotos, which are very hard to translate back into any kind of code that isn't meaningless, unreadable spaghetti crap.

I just reread this and relooked at that deduced code...
I see what you mean...
but yea, like I said with the direct approach vs the bottleneck approach.

in my example, all I need to worry about is the very basic instructions.
the backends take care of everything between:
ASM <backend> UMCSL <backend> Code

UMCSL is of course standalone, so everything the backends deal with is direct towards a common interface language.

the .NET CLI is meaningless unless a script supports it.
otherwise I can set function names as IDs as the CPU identifies them.

reading through that ASM again, yea, that definitely looks like MASM...

can I get a lower level output like I originally requested :)
(x86_64 preferred)

thanks ;)

You can easily translate the assembly that I posted into x86_64 machine code using as and objdump. Let's say you put the assembly listing in "foo.s", just do this:

$ as -o foo.o foo.s
$ objdump -d foo.o

And you get the following:

foo.o:     file format elf64-x86-64


Disassembly of section .text.startup:

0000000000000000 <main>:
   0:   48 83 ec 08             sub    $0x8,%rsp
   4:   ba 11 00 00 00          mov    $0x11,%edx
   9:   be 00 00 00 00          mov    $0x0,%esi
   e:   bf 00 00 00 00          mov    $0x0,%edi
  13:   e8 00 00 00 00          callq  18 <main+0x18>
  18:   bf 00 00 00 00          mov    $0x0,%edi
  1d:   e8 00 00 00 00          callq  22 <main+0x22>
  22:   31 c0                   xor    %eax,%eax
  24:   48 83 c4 08             add    $0x8,%rsp
  28:   c3                      retq   
  29:   0f 1f 80 00 00 00 00    nopl   0x0(%rax)

0000000000000030 <_GLOBAL__sub_I_main>:
  30:   48 83 ec 08             sub    $0x8,%rsp
  34:   bf 00 00 00 00          mov    $0x0,%edi
  39:   e8 00 00 00 00          callq  3e <_GLOBAL__sub_I_main+0xe>
  3e:   ba 00 00 00 00          mov    $0x0,%edx
  43:   be 00 00 00 00          mov    $0x0,%esi
  48:   bf 00 00 00 00          mov    $0x0,%edi
  4d:   48 83 c4 08             add    $0x8,%rsp
  51:   e9 00 00 00 00          jmpq   56 <_GLOBAL__sub_I_main+0x26>

And if you want it in PPC or any other architecture that is different from your host architecture, then you just need to specify the target architecture options for the assembly and disassembly, and you'll need to have them installed (basically, install the GNU cross-compilers for your desired target).

But the details of the specific instruction set used are not very relevant at this stage, since there's really nothing left of the original object-oriented code; it's just a couple of plain function calls and raw memory. I don't know what you expect to see different between the architectures, or between the assembly listing and the x86-64 instructions. After all, this is just a simple C-style function call (mov, mov, mov, call), another simple function call (mov, call), a return of zero (xor, ret), a stack frame push / pop (sub rsp, add rsp), and a bit of padding (nopl). It's quite literally the most trivial piece of assembly code you could imagine (it's just a "hello world" program, after all!).

thanks :)

like I stated earlier, I'm not a computer genius...
I don't know all that much about the CPU other than what I've designed in Minecraft, which really isn't much more than a functional ALU with some RAM and storage...
nothing too grand... heh

so yea, I'm using ASM to both structure my language against my own ideas of how a CPU works, as well as learn from it.

all my language will do is clarify the logic given by the supplied opcodes.
of course though, it'll need much more code, and multiple instances of similar function structures, in order to assume a class.
___

basically, it'll do the same thing you do when you start out as a noob and write a program out of a bunch of functions...
once you gain knowledge about classes, you start organizing your functions more properly and creating objects.

^that was me a few years ago, I originally wrote UMC3.0a (dev3) knowing nothing but Python functions.
now I know classes, decorators, and even meta-programming and am completely redesigning everything with 2 versions of UMC (3.0a (dev5) and 3.0).

with that experience, I can identify classes from slews of functions, and I want to write that functionality into my language.

it has to do with my ability to visualize extremely large structures of logic.
(I have autism, and I believe this ability is a gift)

I know the process looks impossibly complex to you, but it looks simple enough to me. ;)
___

so about learning from the ASM you posted...
as long as I can see the basic logic on the CPU level, I can understand how it works.

I think I need another instance created of class A to understand it better. :P

# python 2.7
class A(object):
    def __init__(this):
        print 'I am initialized!'

instance = A()
instance2 = A()
instance3 = A() # just to make sure

I think it has to do with these:

4:   ba 11 00 00 00          mov    $0x11,%edx
9:   be 00 00 00 00          mov    $0x0,%esi
e:   bf 00 00 00 00          mov    $0x0,%edi

of course, not knowing a lot, I could very much be wrong. :P

Those three move operations pass the parameters to the function, according to the calling convention. It is only a bit special here in that the function signature is such that all the parameters can be passed via registers instead of on the stack.

Here is a basic explanation (with a small self-contained sketch of an equivalent call right after the list):

  • movl $17, %edx : Passes the integer value 17 as the last argument to the function call (__ostream_insert), by placing the value in the EDX register (a general-purpose 32-bit integer register that the calling convention uses for the third integer argument). Btw, the value 17 is the length of the string "I am initialized!", which is the required third parameter of the __ostream_insert function.
  • movl $.LC0, %esi : Passes a pointer to the string constant (marked by the label .LC0, as you can see in the .rodata read-only data section) as the second parameter, through the ESI register.
  • movl $_ZSt4cout, %edi : Passes a pointer to the std::cout object, marked by the mangled external symbol _ZSt4cout (which will be resolved by the linker later), as the first parameter to the __ostream_insert function, which takes a reference to an ostream object (C++ references are, of course, implemented as pointers). That pointer goes in the EDI register.
  • call ...__ostream_insert... : Calls the function, which means pushing the return address and jumping to the specified address, in this case a mangled external symbol for the function __ostream_insert (with the demangled signature that I posted earlier) that will be resolved later by the linker (the actual function probably resides in libstdc++.so).
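
Here is the small sketch I mentioned: a stand-in function show (the name and its trivial body are made up, just so the example is self-contained) called with the same shape of arguments, so you can see how the three registers line up under the x86-64 System V calling convention:

#include <cstdio>

// "show" stands in for __ostream_insert: (stream, pointer-to-text, length)
void show(void *stream, const char *text, long length) {
    std::fwrite(text, 1, static_cast<std::size_t>(length), static_cast<std::FILE *>(stream));
}

int main() {
    // Under the x86-64 System V convention, the compiler lowers this call by placing:
    //   stdout (1st argument, a pointer)           -> RDI
    //   the address of the string (2nd argument)   -> RSI
    //   17, the string's length (3rd argument)     -> EDX  (the "movl $17, %edx" above)
    // and then emitting a single call instruction.
    show(stdout, "I am initialized!", 17);
    return 0;
}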

The other function call is similar, but even simpler:

movl    $_ZSt4cout, %edi   ; pass '&std::cout' through EDI register
call    ...endl...         ; jump to 'std::endl' function

Once you understand those bits of assembly, you'll quickly see that there is almost nothing left in the main function. And here is the explanation of the remaining code:

subq    $8, %rsp    ; allocate 8 bytes on the stack
... the two function calls ...
xorl    %eax, %eax  ; assign zero to register EAX (return value)
addq    $8, %rsp    ; deallocate the 8 bytes from the stack
ret                 ; return from the main function

The allocation / deallocation of stack memory is simply done by moving the stack pointer (the RSP register) down by 8 and then moving it back up at the end (on Linux, the stack grows backwards, from higher memory addresses to lower ones, so you "grow" it by moving the stack pointer backwards). And the assignment of zero to the EAX register (used for returning simple result values, in this case the result of the main function, which is zero) is done with an XOR trick: XOR'ing a register with itself always gives zero, and it happens to be at least as fast as, and more compact than, loading the value zero any other way.
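
If you want to see that idiom in isolation, compile something like this with -O2 -S (the function name is made up for the example); GCC and Clang will typically emit exactly an xorl %eax, %eax followed by ret for it:

// returns_zero() typically compiles (at -O2) down to:
//     xorl %eax, %eax   (EAX := 0, the return value)
//     ret
int returns_zero() {
    return 0;
}

int main() {
    return returns_zero();
}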

The only bit that might look odd is the stack allocation, since the main function never stores anything there (it has no local variables). Most likely it is just there to keep the stack pointer 16-byte aligned at the call instructions, as the x86-64 ABI requires: the call into main itself pushed an 8-byte return address, so subtracting another 8 restores the alignment.

with that experience, I can identify classes from slews of functions

Decompilers, like Hex-Rays, do exist, but they generally don't go much beyond reconstituting basic procedural code. And they produce very ugly code (even when linker symbols are available, which is rarely the case for "distributed" software). Unless, of course, you have debug information, but that's like having the source code, which makes the whole exercise pointless.

it has to do with my ability to visualize extremely large structures of logic.
(I have autism, and I believe this ability is a gift)

Well, maybe you are gifted in that way, but writing a computer program that can do the same is a whole other ball game. You could argue that any experienced programmer could take assembly code and reconstruct a reasonable approximation of what the source code that produced it probably looked like. But the point is that such a reverse-engineering task involves a lot of drawing from your own experience and intuitions about the code. That's not something that's easy to replicate in a computer program; it's called artificial intelligence / cognition.

well, I'm intrigued, thanks for that clarification ;)
I'll surely be re-reading this thread multiple times over... heh

sorry for the late response btw, I'm multitasking on various other forums and skype chats between 2 other areas of my program.

it's called artificial intelligence / cognition.

haha, funny you should mention that. :)

I've been working on plans for AII (Artificial Interactive Intelligence):
a standard for my game-system/computer that's designed to give natural interaction (not just human-like) to bots in video games.

imagine playing CoD (IK, kill me now), and physically pointing to an area on the screen while telling your bot comrade to go to that area.

depending on how the bot was programmed to act (its attitude), and the standards given to the bot to follow (such as army training), it may or may not follow your direction.
___

of course, this requires sheer power to perform, which is why I'm redesigning electronics to perform the way electrons were naturally meant to perform.
(binary and quaternary are only fixed systems and aren't exactly natural development)

if you were to plug wires into your brain, how much do you wanna bet the data you'd receive wouldn't be binary or quaternary.

take a look into Analog Computing...
(yes, taking a step back to take a leap forward)
I'm taking that a step further to achieve my power. :)

Well, maybe you are gifted in that way, but writing a computer program that can do the same is a whole other ball-game.

I did say it was extremely complex. =P

I can't be overwhelmed by it though as I'm already beyond drowned in my sea of overwhelming projects... heh

I can see just how big the logic is I need to write...
I just wish I could connect my brain to my compy so the slow method of typing wouldn't keep me from writing out the full code in seconds... heh

feel free to call me crazy XD
at least I'm not insane :3

I just have too much fun designing this sort of stuff =P
