This looks really slick, but it's a bit unfortunate that it's tied to the CPython API specifically, given that there are Python-VM-neutral methods of low-level interop available (ctypes, CFFI, etc.). My general sense (and certainly my own experience) is that many people doing speed-sensitive Python work are already running on PyPy, and that writing lower-level extensions has become a way to squeeze out an extra level of performance beyond that. Having to go back to CPython to take advantage of this infrastructure would be kind of a bummer, and almost definitely a non-starter in my projects, which leaves me stuck with C.
Right, I definitely don't think PyPy+[ctypes|cffi] is universal ("many," not "all"). Lots of people are doing Cython. Also, anyone who has a performance-sensitive workload that depends on SciPy is out of luck on PyPy, at least for the time being. All I'm saying is that PyPy as a performant-Python strategy is common enough now that it's worth supporting in new performant-Python projects.
My thoughts exactly. Great concept and I will definitely follow it as it develops, but I'd love to see an abstraction layer that takes away the pain-in-the-a$$ that is the Python C API. I left that world years ago and I'm not going back.
I've been using the Python C API recently and I find it the most straightforward way of getting performance out of Python. If you write a few helper classes in C++ using RAII, the Python C API actually becomes pretty pleasant to use. Roughly what I mean is sketched below.
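(A minimal sketch only; the PyRef name and the sum_list example are made up for illustration, and a real wrapper would want move assignment, GIL handling, error translation, etc.)

    #include <Python.h>
    #include <stdexcept>

    // Minimal owning handle for a PyObject*; the destructor drops the reference,
    // so early returns and exceptions can't leak.
    class PyRef {
    public:
        explicit PyRef(PyObject* obj = nullptr) : obj_(obj) {}  // takes ownership of a new ref
        PyRef(const PyRef&) = delete;
        PyRef& operator=(const PyRef&) = delete;
        PyRef(PyRef&& other) noexcept : obj_(other.obj_) { other.obj_ = nullptr; }
        ~PyRef() { Py_XDECREF(obj_); }
        PyObject* get() const { return obj_; }
        explicit operator bool() const { return obj_ != nullptr; }
    private:
        PyObject* obj_;
    };

    // Example use: no manual Py_DECREF bookkeeping in the loop body.
    double sum_list(PyObject* list) {
        double total = 0.0;
        const Py_ssize_t n = PyList_Size(list);
        for (Py_ssize_t i = 0; i < n; ++i) {
            // PyList_GetItem returns a borrowed ref; PyNumber_Float returns a new
            // ref that PyRef owns and releases at the end of each iteration.
            PyRef value(PyNumber_Float(PyList_GetItem(list, i)));
            if (!value) throw std::runtime_error("non-numeric item in list");
            total += PyFloat_AsDouble(value.get());
        }
        return total;
    }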
I'm jealous. Granted, I learned the API before I re-learned C, so I admittedly shot myself in the foot. Now that I'm back up to speed in C (I was in a Java shop for 5 years) and doing more work in Go for my own start-up, I shy away from using Python for anything other than one-off numerical analyses.