Saturday, October 15, 2011

Celery and the big instance refactor

One of the strange parts of Celery is that if you want a logger that writes to the celery.task.default namespace instead of one named after your own task, you can do:

from celery.task import Task
logger = Task.get_logger()

The Task class appears to be a global instantiation within Celery. Normally, the task logger is set up via the get_logger() method, which then calls setup_task_logger(), which in turn calls get_task_logger(). If you invoke get_logger() from within a Task class, the task's name is used:

def setup_task_logger(self, loglevel=None, logfile=None, format=None,
                      colorize=None, task_name=None, task_id=None,
                      propagate=False, app=None, **kwargs):
    logger = self._setup_logger(self.get_task_logger(loglevel, task_name),
                                logfile, format, colorize, **kwargs)

If you use Task.get_logger(), no task name is passed, so the logger namespace falls back to celery.task.default:

def get_task_logger(self, loglevel=None, name=None):
    logger = logging.getLogger(name or "celery.task.default")
    if loglevel is not None:
        logger.setLevel(loglevel)
    return logger
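The naming behavior above can be reproduced with the standard library's logging module alone. This is a sketch that mimics get_task_logger as a free function; the only Celery-specific detail is the fallback name:

```python
import logging

def get_task_logger(loglevel=None, name=None):
    # Fall back to the shared "celery.task.default" namespace
    # when no task name is supplied (mirrors the Celery snippet above).
    logger = logging.getLogger(name or "celery.task.default")
    if loglevel is not None:
        logger.setLevel(loglevel)
    return logger

# With a task name: the logger lives under that name.
named = get_task_logger(logging.INFO, "myapp.tasks.add")
print(named.name)  # myapp.tasks.add

# Without a name: every caller shares the same default logger,
# because logging.getLogger returns the same object per name.
default_a = get_task_logger()
default_b = get_task_logger()
print(default_a is default_b)  # True
print(default_a.name)          # celery.task.default
```

So any code path that reaches get_task_logger without a task name ends up writing to the one shared celery.task.default logger.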

This Task appears to be part of “The Big Instance” refactor: the plan seems to be to allow multiple Celery application objects to be instantiated, rather than relying on a single global instance.

Also, one thing to note:

http://ask.github.com/celery/userguide/tasks.html#logging

Instantiation
A task is not instantiated for every request, but is registered in the task registry as a global instance.

This means that the __init__ constructor will only be called once per process, and that the task class is semantically closer to an Actor.

If you have a task that stores state on the instance (for example, in __init__), that state will persist across every request handled by the same worker process.
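The per-process instantiation described above can be sketched in plain Python. The registry and task class here are stand-ins for illustration, not Celery's real API:

```python
class Registry(dict):
    """Stand-in for Celery's task registry: one instance per task class."""
    def register(self, task_cls):
        # The task is instantiated once, at registration time --
        # not once per request.
        self[task_cls.__name__] = task_cls()

registry = Registry()

class CounterTask:
    def __init__(self):
        # Runs once per process, so this state survives across requests.
        self.calls = 0

    def run(self):
        self.calls += 1
        return self.calls

registry.register(CounterTask)

# Every "request" dispatches to the same global instance.
task = registry["CounterTask"]
print(task.run())  # 1
print(task.run())  # 2 -- state kept between requests
```

This is why the docs compare a task class to an Actor: the instance is long-lived, and anything stored on self is shared by all invocations in that process.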
