Celery v2.3.0 has a new setting for the CELERY_RESULT_BACKEND that allows you to store the results of your apply_async() and delay() calls in something other than an AMQP-based backend. In previous versions, Celery (without ignore_result=True) would store each result as a message in a separate queue named after the task ID (a UUID). If you produced a lot of results without ever consuming them (i.e. checking the result), you would eventually exhaust the broker's memory.
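If you know you will never check a task's result, marking the task with ignore_result keeps the backend from creating that per-task queue in the first place. A minimal sketch, assuming a Celery 2.3-era project; the task name and body here are made up for illustration:

```python
from celery.task import task

# ignore_result=True tells Celery not to store this task's
# return value in the result backend at all, so no result
# queue is created for it on the broker.
@task(ignore_result=True)
def log_event(message):
    # hypothetical task body; any return value is discarded
    print("event: %s" % message)
```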
The problem is well-described here. One of the issues was using an older version of RabbitMQ, whose persister tried to keep everything in memory and would crash. With recent changes in RabbitMQ, which allow task results to be expired, the problem is largely mitigated. Nonetheless, setting ignore_result=True also helps in this respect. With the recent Celery v2.3.0 release you can also use a different backend (e.g. Redis) to store these task results!
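Switching the result store to Redis is just a configuration change. A sketch of what this looked like in the Celery 2.3-era settings module; the setting names below are from that generation of the docs, so check the configuration reference for your installed version:

```python
# celeryconfig.py -- result storage settings (Celery 2.3 era).
# The broker (BROKER_HOST etc.) is configured separately; this
# only changes where task results go.
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0

# Expire stored results so stale ones don't accumulate
# (value in seconds; the default is one day).
CELERY_TASK_RESULT_EXPIRES = 3600
```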
http://packages.python.org/celery/userguide/tasks.html#result-backends
Note: Celery is still highly dependent on an AMQP host. Just because you can change the CELERY_RESULT_BACKEND doesn't mean you can use a completely different messaging system.