hi,

I am a Python beginner, at the moment just playing with, testing, and learning the language, and I must say that I love it. I intend to use it for a new project, but I have a question about ... performance. For my project CPython is absolutely fine, but I am still curious.

I imported an inventory transaction table from an ERP into a Python data structure. The structure is a dict of row_id: {'field_k': value_k, ...}, with row_id between 1 and 1 million and k between 0 and 15. A row looks like this: 5555: {'f10': '', 'f13': '33434892', 'f9': 1.0, 'f8': 1732357.8, 'f1': '01/17/03', 'f12': 'euro', 'f3': '', 'f2': 'ord-so', 'f11': 'crisp', 'f0': 'GBGA007ZXEA', 'f7': 15301.2487, 'f6': 'id_client', 'f5': 'each', 'f4': 0.0}, where f9 is the quantity, f8 is the unit price, and so on.
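For reference, something like the following builds a synthetic stand-in for that structure (the values here are made up and only a few of the fields are shown, just to give an idea of the shape):

import random

NUM_ROWS = 1000000

# build {row_id: {field: value, ...}} with the same shape as the ERP import
table = {}
for row_id in range(1, NUM_ROWS + 1):
    table[row_id] = {
        'f0': 'GBGA007ZXEA',                    # item code
        'f1': '01/17/03',                       # date
        'f8': random.uniform(1.0, 2000000.0),   # unit price
        'f9': float(random.randint(1, 100)),    # quantity
        # ... f2-f7 and f10-f13 would be filled in the same way
    }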

I have a small program that, in a single thread, runs "transactions": each transaction chooses a random set of 100 rows and reads or changes some data in those rows, then discards the results (no waiting for disk, network, etc.). I run 12000 transactions and calculate transactions/second.
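The test loop is roughly along these lines (a simplified sketch using the table dict built above; the exact reads and writes per row are just examples, not my real program):

import random
import time

NUM_TRANSACTIONS = 12000
ROWS_PER_TRANSACTION = 100

keys = list(table)                  # all row_ids from the dict built above
total = 0.0

start = time.time()
for _ in range(NUM_TRANSACTIONS):
    for row_id in random.sample(keys, ROWS_PER_TRANSACTION):
        row = table[row_id]
        total += row['f8'] * row['f9']      # a "read": line value = price * quantity
        row['f9'] = row['f9'] + 1.0         # a "write": bump the quantity
elapsed = time.time() - start

print(NUM_TRANSACTIONS / elapsed, "transactions/second")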

I have run the test program on a few computers. An HP nx9030 (Intel Centrino 1.6 GHz, 6 years old, 2 GB DDR1, cpubenchmark.net score 450, RAM speed 900 MB/s) gets about 50% of the transactions/second obtained with an HP with an Intel i5 (M460, 3 GB DDR3 RAM, cpubenchmark.net score 2500, RAM speed 6000 MB/s).

This i5 is a lot more powerful than the Centrino M 1600 (in CPU power, CPU cache and RAM speed), so I was expecting the performance increase on the i5 to be a lot higher than 100%.

Why this "discrepancy"? Is there a bottleneck other than CPU and memory? I understand the constraints imposed by the GIL, but I figured that with much better hardware we could obtain much better performance even when using a single core. My little test says this is false...

I would appreciate an explanation, or a link to some documents where I can find one.

Thank you for your patience.


All 3 Replies

The i5 is rated at 2.66 GHz (according to HP's website), so a doubling of speed is reasonable compared to a 1.6 GHz machine. The i5 is also a dual-core processor, so you can use Parallel Python to speed things up. For a simple function, it would be something along these lines:

import pp

ppservers = ()                       # no remote servers, use local workers only
job_server = pp.Server(ppservers=ppservers)
# process_this is your worker function; note the trailing comma, submit() expects a tuple of arguments
f1 = job_server.submit(process_this, (range(0, 6000),), (), ())
f2 = job_server.submit(process_this, (range(6000, 12000),), (), ())
results = f1(), f2()                 # calling the job objects waits for and returns their results

In my application the processes would all need access to one big Python data structure.

I looked over parallelpython.com and I didn't see a mechanism for sharing big data structures between processes.

I looked at the multiprocessing module and I understand that I can use shared memory to share data, but not complicated objects (nested lists, dicts, etc.). With Manager(), once I do:

from multiprocessing import Manager

manager = Manager()
my_data_structure = manager.dict(my_data_structure)

the speed of access to the data drops by a factor of about 20...
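The slowdown is easy to see with a quick comparison along these lines (a rough sketch with a much smaller dict; the absolute numbers will vary from machine to machine):

import time
from multiprocessing import Manager

def time_reads(d, n=10000):
    start = time.time()
    for i in range(n):
        d[i]                     # with a proxy, every access is a round trip to the manager process
    return time.time() - start

if __name__ == '__main__':
    plain = dict((i, i * 2) for i in range(10000))
    manager = Manager()
    shared = manager.dict(plain)
    print("plain dict  :", time_reads(plain))
    print("manager dict:", time_reads(shared))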

Do you know of other solutions for using multiple cores while sharing a big Python data structure between the processes?

Another idea that would help me is if anyone has heard about emulating Python data structures on top of an RDBMS... but the simplest idea was the first one: to get more performance by giving Python faster resources. And I don't think the most relevant factor is the CPU's frequency... by that logic an AMD XP-M 2800+ at 2.12 GHz would be as fast as one core of an i7-2630QM at 2 GHz...

Is there something that I don't understand?

Thank you.

There is not enough info here to really answer your question. The natural response is "don't use big data structures". If you want to choose 100 random records from one million, then SQL would probably be faster than a dictionary (and when adding records, committing every 1000 records or so is faster than committing after every record). Next, you should try one of the profiling packages so you know where the bottlenecks are; it could be that creating the dictionary takes the time, as opposed to looking up the 100 random records. PyPy is also something to look into.
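For the SQL route, a rough SQLite sketch of the same access pattern might look like this (the table and column names are invented, only two of the fields are shown, and commits are batched every 1000 inserts as suggested):

import random
import sqlite3

conn = sqlite3.connect(":memory:")          # or a file path for an on-disk database
conn.execute("CREATE TABLE inv (row_id INTEGER PRIMARY KEY, f8 REAL, f9 REAL)")

# load the table, committing every 1000 records instead of after every record
for row_id in range(1, 1000001):
    conn.execute("INSERT INTO inv VALUES (?, ?, ?)",
                 (row_id, random.uniform(1.0, 2000000.0), float(random.randint(1, 100))))
    if row_id % 1000 == 0:
        conn.commit()
conn.commit()

# one "transaction": read 100 random rows and compute price * quantity
ids = random.sample(range(1, 1000001), 100)
placeholders = ",".join("?" * len(ids))
rows = conn.execute("SELECT row_id, f8 * f9 FROM inv WHERE row_id IN (%s)" % placeholders,
                    ids).fetchall()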

I looked over parallelpython.com and I didn't see a mechanism for sharing big data structures between processes.

Why do you __have__ to share big data structures between processes?
