llvmlite originated to fulfill the needs of the Numba project, and it is still mostly maintained by the Numba team. As such, we tend to prioritize the needs and constraints of Numba over other conflicting desires. However, we welcome any contributions, in the form of bug reports or pull requests.
For now, we use the Numba public mailing list, which you can e-mail at firstname.lastname@example.org. If you have questions about contributing to llvmlite, it is fine to ask them on this mailing list. You can subscribe and read the archives on Google Groups, and there is also a Gmane mirror that allows NNTP access.
5.1.2. Bug tracker
We use the GitHub issue tracker to track both bug reports and feature requests. When reporting an issue, please include specifics:
- what you are trying to do;
- which operating system you have and which version of llvmlite you are running;
- how llvmlite is misbehaving, e.g. the full error traceback, or the unexpected results you are getting;
- as far as possible, a code snippet that allows full reproduction of your problem.
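A minimal, self-contained reproducer is much easier for maintainers to act on than a fragment of a larger application. As an illustrative sketch (the module and function names here are arbitrary, not part of any real bug report), a snippet using the llvmlite.ir layer might look like:

```python
from llvmlite import ir

# Build a trivial module containing a single function that adds
# two 32-bit integers, then print the generated LLVM IR.
i32 = ir.IntType(32)
module = ir.Module(name="repro")
fnty = ir.FunctionType(i32, [i32, i32])
fn = ir.Function(module, fnty, name="add")

block = fn.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
a, b = fn.args
result = builder.add(a, b, name="res")
builder.ret(result)

print(module)
```

Running the snippet on its own should reproduce the misbehavior you are reporting, with no dependencies beyond llvmlite itself.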
5.2. Pull requests
If you want to contribute code, we recommend that you fork our GitHub repository and create a branch for your work. When your work is ready, submit it as a pull request through the GitHub interface.
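As a sketch of that workflow (the fork URL and branch name below are placeholders for your own fork and topic branch, not real repositories):

```shell
# Clone your fork (replace <yourname> with your GitHub username).
git clone https://github.com/<yourname>/llvmlite.git
cd llvmlite

# Create a topic branch for your work.
git checkout -b my-fix

# ... edit files, then commit your changes ...
git commit -am "Describe the change"

# Push the branch to your fork, then open a pull request on GitHub.
git push origin my-fix
```

Keeping each pull request on its own branch makes it easy to update in response to review comments without disturbing other work.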
5.3. Development rules
5.3.1. Coding conventions
All Python code should follow PEP 8. Our C++ code does not have a well-defined coding style, although following PEP 7 would be a reasonable choice. Code and documentation should generally fit within 80 columns, for maximum readability with all existing tools, such as code review UIs.
5.3.2. Platform support
llvmlite must remain compatible with Python 2.7, and with 3.4 and later, under at least Linux, OS X and Windows. It only needs to be compatible with the currently supported LLVM version (the 3.8 series).
We don’t expect contributors to test their code on all platforms. Pull requests are automatically built and tested on Travis CI, which takes care of Linux compatibility. Other operating systems are tested on an internal continuous integration platform at Continuum Analytics.