ghPython – New component and parallel modules
Just in time for Christmas… ghPython 0.6.0.3 was released this week and it has two new features that I’m really excited about.
A little background
David Rutten was visiting the McNeel Seattle office in November to discuss future work on Grasshopper and Rhino. When David is in town, it always gives me a chance to brainstorm with him and tackle some of the features that users ask for. Two questions we commonly hear are "how can I do what X component does, but through RhinoCommon/code?" and "how can I improve performance on my computer with many CPUs?"
Out of those chats came the two major new features in ghPython 0.6.0.3: the ability to call components from Python, and an easy way to do this using multiple threads. ghPython 0.6.0.3 ships with a new package (ghpythonlib) that supports both of these features.
Components As Functions (node-in-code)
There is a module in ghpythonlib called components which attempts to make every Grasshopper component available in Python in the form of an easy-to-call function. Here's a sample to help paint the picture.
```python
import ghpythonlib.components as ghcomp

# call Voronoi component with input points
curves = ghcomp.Voronoi(points)
# call Area component with curves from Voronoi
centroids = ghcomp.Area(curves).centroid
```
Notice that the above sample is just three lines of script (plus two lines of comments describing what is happening).
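If you're wondering how a single function call can expose several component outputs (`.centroid` above, `.area` below), the returned object behaves like Python's named tuples: each output is reachable by attribute name or by position. Here is a minimal pure-Python sketch of that behavior; the `AreaResult` class and its literal values are illustrative stand-ins, not part of ghpythonlib.

```python
from collections import namedtuple

# Hypothetical stand-in for the object a multi-output component returns.
# The real Area component exposes outputs such as .area and .centroid.
AreaResult = namedtuple("AreaResult", ["area", "centroid"])

result = AreaResult(area=12.5, centroid=(1.0, 2.0, 0.0))

# outputs can be read by name...
print(result.area)      # 12.5
# ...or unpacked positionally
area, centroid = result
print(centroid)         # (1.0, 2.0, 0.0)
```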
Here is a sample gh file
Of course you can mix in other python to perform calculations on the results of the component function calls. I tweaked the above example to find the curve generated from Voronoi that has the largest area.
```python
import ghpythonlib.components as ghcomp

curves = ghcomp.Voronoi(points)
areas = ghcomp.Area(curves).area

# find the biggest curve in the set
max_area = -1
max_curve = None
for i, curve in enumerate(curves):
    if areas[i] > max_area:
        max_area = areas[i]
        max_curve = curve
```
Remember, this can be done for almost every component in Grasshopper (including every installed add-on component). I say "almost" because there are cases where a function call doesn't make sense: components like Kangaroo or timers, where the state of the component matters between iterations. Fortunately, these cases are pretty rare.
Along with the new functionality this provides, I also found myself simplifying existing gh definition files by lumping a bunch of related components together into a single Python script.
Use those CPUs
Along with components, there is another module in ghpythonlib called parallel. This module has a single function called "run", which takes a list of data as input and a single function that should be called for each item in the list. The run function calls your function on as many threads as there are processors in your computer, then properly collects the results so you get a list of return values in the same order as the input list. Each return value is whatever your custom function returns. I could show how this is done with the previous samples, but those already run so fast that there is no need to multithread them. Instead I put together a sample that typically takes around a second to complete on my computer: slicing a brep with 100 planes.
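As a rough mental model (this is an illustrative sketch, not the actual ghpythonlib source), a run-style helper can be written with the standard threading module: worker threads pull items off a shared cursor and write each result into the slot matching its input index, which is what keeps the output list in the same order as the input list. Note that in CPython the GIL limits CPU-bound speedups from threads; ghPython runs on IronPython, which has no GIL, so threads there can give real parallelism.

```python
import threading

def run(function, data_list, num_threads=4):
    """Call `function` on every item of `data_list` using several threads.

    Results come back in the same order as the input list, mirroring the
    behavior described for ghpythonlib.parallel.run. (Sketch only.)
    """
    results = [None] * len(data_list)
    lock = threading.Lock()
    next_index = [0]  # shared cursor so threads pull work items in turn

    def worker():
        while True:
            with lock:
                i = next_index[0]
                if i >= len(data_list):
                    return
                next_index[0] += 1
            # do the real work outside the lock, store at the item's slot
            results[i] = function(data_list[i])

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# usage: square each number, results stay in input order
print(run(lambda x: x * x, [1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```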
```python
import ghpythonlib.components as ghcomp
import ghpythonlib.parallel

# custom function that is executed by parallel.run
def slice_at_angle(plane):
    result = ghcomp.BrepXPlane(brep, plane)
    if result:
        return result.curves

if parallel:
    slices = ghpythonlib.parallel.run(slice_at_angle, planes, True)
else:
    slices = ghcomp.BrepXPlane(brep, planes).curves
```
In the above image I'm passing a variable called parallel into the Python script with a value of False. This makes the code execute on a single thread, and as you can see from the profiler, the performance of the Python script is the same as just using the BrepXPlane component (which is expected).
Now when I toggle the parallel input to True, the parallel.run function is executed. It calls my custom slice_at_angle function 105 times, passing in a single plane each time, all on multiple threads. On my computer with 4 CPUs the execution time drops from one second to 313 milliseconds: a 3X speed boost from just adding a couple of lines of script.
Give this new build of ghPython a try. I’m sure there will be questions and probably a bug or two to fix, but it gets fun pretty fast once you get the hang of it.