You can do parallel computations in Python. For example, many NumPy functions, such as the linear-algebra routines backed by a multithreaded BLAS library, can utilize more than one CPU:
import numpy as np
N = 5000
# create a couple of NxN matrices with random elements
a = np.random.rand(N, N)
b = np.random.rand(N, N)
# perform matrix multiplication (the underlying BLAS routine may run on several cores)
c = np.dot(a, b)
Cython makes it easy to write C extensions for Python if necessary.
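As a minimal sketch of what such an extension can look like (the module name fastloop.pyx and the function are hypothetical; it assumes the standard Cython toolchain), a typed loop compiled to C might be written as:

# fastloop.pyx -- hypothetical example module; build e.g. with: cythonize -i fastloop.pyx
def mean_of_squares(double[:] data):
    # typed memoryview and C-level loop variables avoid Python object overhead
    cdef double total = 0.0
    cdef Py_ssize_t i
    for i in range(data.shape[0]):
        total += data[i] * data[i]
    return total / data.shape[0]

After compiling, the function can be called from Python with any contiguous buffer of doubles, e.g. fastloop.mean_of_squares(np.random.rand(10**6)).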
You can also do compute-intensive work in parallel using multiple processes:
import random
from multiprocessing import Pool

def fire(nshots, rand=random.random):
    # count random points in the unit square that land inside the quarter circle
    return sum(1 for _ in range(nshots) if (rand()**2 + rand()**2) < 1)

def main():
    pool = Pool()  # use all available CPUs
    nshots, nslices = 10**6, 10
    nhits = sum(pool.imap(fire, [nshots // nslices] * nslices))
    print("pi = {pi:.5}".format(pi=4.0 * nhits / nshots))

if __name__ == '__main__':
    main()
The code estimates pi using the Monte Carlo method (http://demonstrations.wolfram.com/MonteCarloEstimateForPi/): each call to fire() draws random points in the unit square and counts those that fall inside the quarter circle of radius 1. The fraction of hits approaches pi/4, so pi is approximately 4 * nhits / nshots.