
Serving OpenERP 6.1 on multicore systems with Gunicorn


OpenERP 6.1 comes with a load of new features, and one of them provides a much greater ability to scale up on modern hardware. Until now, OpenERP offered only one option: a multi-threaded HTTP layer, with a limited ability to use available computing resources. For the 6.1 release, one of the goals was to make it easy to run the OpenERP server in multiple processes, harvesting big performance gains. Doing so opens up nice deployment choices and development opportunities.

By running a Python application such as OpenERP 6.1 in multiple processes instead of multiple threads, one can avoid Python's Global Interpreter Lock (GIL[0]) and take advantage of the multiple cores of today's machines.

This (rather technical) post explains how the upcoming OpenERP version runs more efficiently on multi-core systems by using the excellent Gunicorn[1] HTTP server.

This subject will also be covered during the 2012 OpenDays. The slides for the talk are already available at http://bit.ly/IkOYyq

Dance around the GIL

To create and manage processes, our first thought was to use the `multiprocessing`[3] module. But when the time finally came to implement a multi-process solution, we quickly realized it was better to handle the added complexity in the Unix way: use a specific piece of code to manage processes. As it happens, such a piece of code already exists, so we didn't write anything (and thus didn't use `multiprocessing`): we simply turned the server into a WSGI-compliant application, leaving the responsibility of managing it to someone else. That someone else is Gunicorn[1].
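
For readers unfamiliar with WSGI, the contract is tiny: a WSGI application is just a callable taking the request environment and a `start_response` function. The minimal sketch below (a toy, not OpenERP's actual entry point) shows the shape of what the server exposes:

    # A minimal WSGI application -- the same shape as the `application`
    # callable exposed by the server. This toy version returns a
    # plain-text body; the real entry point dispatches requests to the
    # RPC and web layers.
    def application(environ, start_response):
        # `environ` is a dict describing the HTTP request (path, headers, ...).
        status = '200 OK'
        headers = [('Content-Type', 'text/plain')]
        start_response(status, headers)
        # The return value is an iterable of body chunks.
        return ['Hello from a WSGI application\n']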


Gunicorn is a Python HTTP server with support for WSGI[2]. It uses the pre-fork model to serve a WSGI-compliant application from multiple processes. In our case, the WSGI application is the OpenERP server. The server's WSGI entry point is located in the `openerp.wsgi.core` module and is simply named `application`. In our repository, we also provide a sample `gunicorn.conf.py` configuration file. Putting the pieces together, launching the server with multiple processes is as simple as:

   > gunicorn openerp:wsgi.core.application -c /path/to/gunicorn.conf.py 

You can modify the configuration to your liking. Gunicorn is well documented and the comments in the sample configuration file should prove enough to get you started. Just note that it is not possible to pass arguments to OpenERP on the command line (i.e. the way you would do it with `openerp-server`). Instead, you can directly set OpenERP's configuration values from within the Gunicorn configuration file (as is done in the example file).
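
As an illustration, here is a minimal sketch of what such a configuration file can look like. The Gunicorn settings are standard; the OpenERP keys set through `openerp.tools.config` are placeholders to adapt to your setup, and the sample file in our repository remains the reference:

    # gunicorn.conf.py -- a minimal sketch, to be adapted to your setup.
    import openerp

    # Standard Gunicorn settings.
    bind = '127.0.0.1:8069'
    workers = 4            # number of pre-forked worker processes
    timeout = 240          # kill a worker silent for more than 240 seconds
    max_requests = 2000    # recycle a worker after 2000 requests

    # OpenERP cannot receive command-line arguments through Gunicorn, so
    # its configuration values are set directly on its config object.
    conf = openerp.tools.config
    conf['addons_path'] = '/path/to/addons'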

Awesomeness provided by the beast

It is still possible to start the server with the regular `openerp-server` script. Doing so uses a multi-threaded HTTP layer (this is not the 6.0 HTTP layer: it also uses the WSGI entry point, this time served with `werkzeug`[4]). But serving OpenERP with Gunicorn is great! When handling two concurrent CPU-bound requests with two workers (on at least two cores), you can expect a nearly 2x speed-up[5]. Of course, if the two requests lock the same rows in the database and don't spend much of their time running Python code, you might see no speed-up at all.

Besides taking advantage of a multi-core setup, Gunicorn provides a few hooks that we use to limit the resources made available to each request. It is also possible to automatically kill and restart processes after they have served a few thousand requests, to mitigate memory waste, if any. We have added three new options; although they are documented as command-line options, they are really only used with Gunicorn (a sketch of the underlying mechanism follows the list):

* `virtual-memory-limit` limits how much memory a process can allocate. When the limit is reached, a `MemoryError` is raised.

* `virtual-memory-reset` is a similar limit: when the amount of memory used exceeds it, the process will gracefully exit after the current request, and Gunicorn will spawn a new process. This is again a safety net against memory leaks.

* `cpu-time-limit` limits the amount of CPU time a request can use, also raising an exception when the limit is reached.
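
As a rough illustration of the mechanism (not the actual OpenERP implementation), such limits can be enforced from Gunicorn's `pre_request` hook with the standard `resource` module on a Unix system; the numbers below are arbitrary:

    # A sketch of per-request resource limiting via a Gunicorn hook.
    import resource
    import signal

    VIRTUAL_MEMORY_LIMIT = 768 * 1024 * 1024  # bytes
    CPU_TIME_LIMIT = 60                       # seconds of CPU time

    def pre_request(worker, req):
        # Cap the address space: allocations beyond the soft limit
        # raise MemoryError inside the worker.
        soft, hard = resource.getrlimit(resource.RLIMIT_AS)
        resource.setrlimit(resource.RLIMIT_AS, (VIRTUAL_MEMORY_LIMIT, hard))
        # Cap CPU time: the kernel sends SIGXCPU past the soft limit.
        usage = resource.getrusage(resource.RUSAGE_SELF)
        cpu_soft = int(usage.ru_utime + usage.ru_stime) + CPU_TIME_LIMIT
        soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_soft, hard))

    def cpu_time_exceeded(signum, frame):
        raise Exception('CPU time limit exceeded.')

    signal.signal(signal.SIGXCPU, cpu_time_exceeded)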

WSGI and statelessness

To ensure we could run multiple OpenERP processes safely, we had to modify the server to make it stateless, because any request can be handled by any process. For this reason, we changed the implementation (and the name) of the `osv_memory` class. Instead of being held in memory, a `TransientModel` is stored in the database, just like a regular `Model` (the new name for `osv`). The difference with a `Model` is that `TransientModel` rows are automatically deleted after a while.
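
For module authors, nothing special is required: a wizard declared as a `TransientModel` is written like any other model. A minimal sketch against the 6.1 ORM (the field definitions are illustrative, and exact import paths may vary with your version):

    # A minimal TransientModel declaration.
    from openerp.osv import orm, fields

    class my_wizard(orm.TransientModel):
        _name = 'my.wizard'
        # Rows are stored in the database like a regular Model, so any
        # worker process can handle any step of the wizard; they are
        # deleted automatically after a while.
        _columns = {
            'name': fields.char('Name', size=64),
        }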

Server-side caching is another issue. It's useful for improving performance in some situations, but it makes the server partially stateful and thus requires synchronization. Fortunately, most of OpenERP's caches are of minor importance and read-only, so the relatively fast process recycling will take care of refreshing them. The only cache that really required an update is the login cache: because an authentication check is done for each request, if you change your password (causing only one process's cache to be updated) you would immediately be locked out. The trivial way we fixed it was to ignore the login cache whenever an authentication fails, causing a refresh of the cache on that process. After a change of password, all caches will thus be refreshed transparently, one by one.
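
In pseudo-Python, the idea looks like this (the cache layout and the helper are hypothetical, not the actual OpenERP internals):

    # A sketch of the "ignore the cache on failure" idea.
    _login_cache = {}

    def check_security(db, uid, password):
        if _login_cache.get((db, uid)) == password:
            return True
        # Cache miss or stale entry (e.g. the password was changed by
        # another process): fall back to the database and refresh this
        # process's cache.
        if check_password_in_db(db, uid, password):  # hypothetical helper
            _login_cache[(db, uid)] = password
            return True
        raise Exception('Access denied.')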

Still, for situations that need it (i.e. when it is really necessary to run multiple processes while still allowing configuration changes), we implemented a signalling scheme using the PostgreSQL database. Whenever caches are invalidated on a process, or a new module is installed, the process signals the change to the other processes (managed by the same Gunicorn instance, or running on a different machine). The solution will be part of an upcoming 6.1 release.
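
To give an idea of what such a scheme can look like, PostgreSQL's LISTEN/NOTIFY mechanism lends itself well to it. The sketch below is only an assumption about the approach, not the actual implementation, and `invalidate_local_caches` is a hypothetical helper:

    # A sketch of cross-process signalling through PostgreSQL.
    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect('dbname=postgres')
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cursor = conn.cursor()
    cursor.execute('LISTEN cache_invalidation;')

    def signal_change():
        # Called by the process that invalidated its caches or installed
        # a module; every listening process receives the notification.
        cursor.execute('NOTIFY cache_invalidation')

    def poll_changes(timeout=1.0):
        # Called periodically by every process, e.g. between requests.
        if not select.select([conn], [], [], timeout)[0]:
            return
        conn.poll()
        while conn.notifies:
            conn.notifies.pop()
            invalidate_local_caches()  # hypothetical helper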

As mentioned above, the OpenERP server is now a library exposing a WSGI entry point. It is also a kind of WSGI middleware, as it can dispatch requests to other, registered entry points. This is indeed how we now embed the web client in the server: the `openerp-web` project provides its own addons directory, which is put in the server's addons path. The server loads the web addons at startup because they are part of the default value of the new `server_wide_modules`[6] option (exposed on the command line as `--load`). When loaded, the web addons register themselves as a WSGI entry point: the server serves XML-RPC and regular browser requests on the same port (8069 by default). Of course, you can use the same principle for your own modules.
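
For example, a module can register its own handler when it is loaded. The sketch below assumes the registration helper is `openerp.wsgi.core.register_wsgi_handler` (the name used in the 6.1 code base) and follows the convention that a handler returns `None` to decline a request:

    # A sketch of a module-provided WSGI entry point.
    from openerp.wsgi import core

    def my_handler(environ, start_response):
        # Only claim requests under our own path prefix.
        if not environ['PATH_INFO'].startswith('/my_module'):
            return None  # decline: let the next registered handler try
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello from my_module\n']

    core.register_wsgi_handler(my_handler)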

Please note that the web client stores its sessions on disk. If you plan to deploy multiple web clients, embedded in the server or not, you have to make sure the sessions can be accessed by all of them.

Wrapping up  

Embracing existing (and great) tools allows us to be leaner and meaner. This is true with WSGI and Gunicorn, and we hope to continue in this direction. One important question is left unanswered: how many processes should Gunicorn spawn on a given machine to be as efficient as possible? We don't know the answer yet, but we should have it quite soon: we are assembling benchmarks in the `openerp-command` repository.

[0] http://dabeaz.blogspot.com/2010/01/python-gil-visualized.html 

[1] http://gunicorn.org/ 

[2] http://en.wikipedia.org/wiki/Web_Server_Gateway_Interface 

[3] http://docs.python.org/library/multiprocessing.html 

[4] http://werkzeug.pocoo.org/ 

[5]  My use of the word 'speed-up' may not be completely appropriate:  speed-up is normally used for parallel computation. In this post a 2x  speed-up means you can run a second request with no impact on another  one. 

[6]  Server-wide modules are not tied to a particular database. For  instance, the web client can serve a page to create a new database;  obviously the web client has to run even if a database is not yet  loaded.

Date: 9th April, 2012