Hi,

I am a Python beginner, at the moment just playing with, testing, and learning the language, and I must say that I love it. I intend to use it for a new project, but I have a question about... performance. For my project CPython is absolutely OK but, still, I have a question.
I imported an inventory transaction table from an ERP into a Python data structure: a dict of row_id: {'field_k': value_k, ...}, with row_id between 1 and 1 million and k between 0 and 15. A row is something like this: 5555: {'f10': '', 'f13': '33434892', 'f9': 1.0, 'f8': 1732357.8, 'f1': '01/17/03', 'f12': 'euro', 'f3': '', 'f2': 'ord-so', 'f11': 'crisp', 'f0': 'GBGA007ZXEA', 'f7': 15301.2487, 'f6': 'id_client', 'f5': 'each', 'f4': 0.0}, where f9 is the quantity, f8 is the unit price, etc.
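In code, the structure looks roughly like this (a simplified sketch: the real rows come from the ERP import, so the values below are just the example row repeated):

# Simplified sketch of the data structure; placeholder values,
# the real data comes from the ERP import.
NUM_ROWS = 1_000_000

def make_row():
    return {'f0': 'GBGA007ZXEA', 'f1': '01/17/03', 'f2': 'ord-so',
            'f3': '', 'f4': 0.0, 'f5': 'each', 'f6': 'id_client',
            'f7': 15301.2487, 'f8': 1732357.8, 'f9': 1.0,
            'f10': '', 'f11': 'crisp', 'f12': 'euro', 'f13': '33434892'}

table = {row_id: make_row() for row_id in range(1, NUM_ROWS + 1)}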
I have a small program that, in a single thread, as a "transaction", chooses a random set of 100 rows and reads or changes some data in those rows, then discards the results (so there is no waiting on HDD, network, etc.). I run 12,000 transactions and calculate transactions/second.
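Schematically, the test loop is something like this (a simplified sketch, not my exact program; it uses the `table` dict built above):

import random
import time

NUM_TRANSACTIONS = 12_000
ROWS_PER_TXN = 100
row_ids = list(table)  # all keys, so random.sample() can pick from them

start = time.perf_counter()
for _ in range(NUM_TRANSACTIONS):
    # one "transaction": touch 100 randomly chosen rows
    for row_id in random.sample(row_ids, ROWS_PER_TXN):
        row = table[row_id]
        value = row['f9'] * row['f8']  # read: quantity * unit price
        row['f7'] = value              # write: update a field
elapsed = time.perf_counter() - start
print('%.0f transactions/second' % (NUM_TRANSACTIONS / elapsed))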
I have run the test program on a few computers. On an HP nx9030 (Intel Centrino 1.6 GHz, 6 years old, 2 GB DDR1, cpubenchmark.net score = 450, RAM speed 900 MB/s) I get about 50% of the transactions/second obtained on an HP with an Intel i5 M460 (3 GB DDR3 RAM, cpubenchmark.net score = 2500, RAM speed 6000 MB/s).
This i5 is a lot more powerful than the Centrino M 1600 (in CPU power, CPU cache, and RAM speed), so I was expecting the performance increase on the i5 to be a lot higher than 100%.
Why this "discrepancy"? Is there a bottleneck other than CPU and memory? I understand the constraints imposed by the GIL, but I figured that with much better hardware we could obtain much better performance, even using a single core. My little test says this is false...
I would appreciate an explanation, or a link to some documents where I can find one.
Thank you for your patience.