Welcome to Pysllo’s documentation!¶
Pysllo is a set of useful Python logging extensions that make it possible to bind additional data to logs, to raise all logs when an error occurs, and to track flow with tools like the Elastic Stack or other monitoring tools based on document databases.
The most important benefit of using Pysllo is that it is just an extension of the standard Python logging library. It does not require you to change the whole logging implementation in your application. You can easily change just part of the logging configuration and use this tool in that part of the code. It's really simple to start working with Pysllo.
Quick start¶
pip install pysllo
Features¶
pysllo.loggers.StructuredLogger
Logger class that makes it possible to bind additional data to logs
pysllo.loggers.PropagationLogger
Logger class that makes it possible to propagate the log level between code blocks
pysllo.loggers.TrackingLogger
Logger that tracks logs on all levels and pushes them if an error occurs, ignoring the normal level configuration
pysllo.utils.factory.LoggersFactory
Class that creates a Logger class with whichever of the above functionalities you want
pysllo.formatters.JsonFormatter
Formatter class that converts your log records into JSON objects
pysllo.handlers.ElasticSearchUDPHandler
Handler class that sends your logs to an Elastic cluster
Usage example¶
from pysllo.handlers import ElasticSearchUDPHandler
from pysllo.formatters import JsonFormatter
from pysllo.utils import LoggersFactory
# configuration
host, port = 'localhost', 9000
handler = ElasticSearchUDPHandler([(host, port)])
formatter = JsonFormatter()
handler.setFormatter(formatter)
MixedLogger = LoggersFactory.make(
tracking_logger=True,
propagation_logger=True,
structured_logger=True
)
logger = MixedLogger('test')
logger.addHandler(handler)
# example usage
msg = "TEST"
logger.bind(ip='127.0.0.1')
logger.debug(msg, user=request.user)
Loggers¶
class pysllo.loggers.StructuredLogger(name, level=0)¶
Bases: logging.Logger
StructuredLogger is a class that makes it possible to add and bind additional information to logs as named parameters, extending the functionality of the extra parameter of the default Logger class.
To use it:
>>> import logging
>>> from pysllo.loggers import StructuredLogger
>>> logging.setLoggerClass(StructuredLogger)
>>> log = logging.getLogger('name')
The most important feature is the ability to bind values to logs using simple named parameters or the bind function.
Example:
>>> log.bind(user=request.user, ip=request.get('IP_ADDR'))
>>> log.info('User login correctly')
>>> log.debug('Request keywords %s', request, headers=request.headers, cookies=session.cookies)
>>> request.logout()
>>> log.unbind('user', 'ip')
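The standard library offers a rougher version of this binding idea through logging.LoggerAdapter, which merges a fixed context dict into every record. The following stdlib-only sketch illustrates the pattern (it is not pysllo's implementation; the logger name and format string are made up for the example):

```python
import io
import logging

# Capture log output in a string buffer so the example is self-contained
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s ip=%(ip)s user=%(user)s"))

base = logging.getLogger("bind_demo")
base.setLevel(logging.DEBUG)
base.addHandler(handler)

# LoggerAdapter plays the role of bind(): its extra dict is injected
# into every record emitted through the adapter
log = logging.LoggerAdapter(base, {"ip": "127.0.0.1", "user": "alice"})
log.info("User login correctly")

print(stream.getvalue().strip())
# → User login correctly ip=127.0.0.1 user=alice
```

Unlike pysllo's bind/unbind, the adapter's context is fixed at construction time, which is why a dedicated StructuredLogger is more convenient for request-scoped data.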
get(item, default=None)¶
Return the value of item from the context, if it exists.
>>> log.get('ip')
127.0.0.1
Parameters: - item – (str) name of the element to get from the context
- default – (object) default value if the element is not in the context
Returns: (object)
static bind(**kwargs)¶
Bind parameters as context to the logger.
>>> log.bind(ip='127.0.0.1')
Parameters: kwargs – (dict) named parameters with values to bind
static unbind(*args)¶
Remove parameters from the context by name; a list of names may be given.
>>> log.unbind('ip')
Parameters: args – (list) names of context elements to remove
class pysllo.loggers.PropagationLogger(name, level=0, propagation=False)¶
Bases: logging.Logger
PropagationLogger is a Logger class that makes it possible to propagate the logging level across a block of code: a function, a context, or manually one of your own.
To use it:
>>> import logging
>>> from pysllo.loggers import PropagationLogger
>>> logging.setLoggerClass(PropagationLogger)
>>> log = logging.getLogger('name')
The most popular use of this logger is to propagate the level between functions used in some scope. For example:
>>> logger.set_propagation(True)
>>> logger.setLevel(logging.INFO)
>>>
>>> def test_second_level():
>>>     logger.debug("msg2")
>>>
>>> @logger.level_propagation(logging.DEBUG)
>>> def test_first_level():
>>>     logger.debug("msg1")
>>>     test_second_level()
>>>
>>> test_first_level()
>>> test_second_level()
In this case, although the globally configured level is INFO, when test_second_level runs in a scope where propagation is enabled, its DEBUG log is pushed to the handler, ignoring the global configuration. In the second run of this function, without propagation, the DEBUG log from that function is dropped because the normal configuration applies.
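The core of the decorator above can be approximated with the standard library alone: temporarily lower the logger's level for the duration of a call and restore it afterwards. A minimal sketch of the idea (the decorator name and structure are illustrative, not pysllo's internals):

```python
import functools
import io
import logging

def level_propagation(logger, level):
    """Run the decorated function, and anything it calls that uses the
    same logger, with `level` in force; restore the old level after."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            old = logger.level
            logger.setLevel(level)
            try:
                return func(*args, **kwargs)
            finally:
                logger.setLevel(old)
        return wrapper
    return decorator

stream = io.StringIO()
logger = logging.getLogger("prop_demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream))

def second_level():
    logger.debug("msg2")

@level_propagation(logger, logging.DEBUG)
def first_level():
    logger.debug("msg1")
    second_level()

first_level()   # both DEBUG messages pass: the level was propagated
second_level()  # dropped: the logger is back at INFO
print(stream.getvalue().splitlines())
# → ['msg1', 'msg2']
```

This mirrors the behaviour described above: second_level only emits DEBUG output when it runs inside the propagated scope.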
__init__(name, level=0, propagation=False)¶
Used automatically by getLogger, but if you use a config file you can configure propagation from the start, which is a fine option.
Parameters: - name – (str) logger name
- level – (int) logging level
- propagation – (bool) enable/disable propagation from the start
set_propagation(propagation=True)¶
Enable or disable the level-propagation functionality.
Parameters: propagation – (bool) turn propagation on/off
static reset_level()¶
Reset the propagation level.
static level_propagation(level)¶
Decorator that gives propagation functionality to the decorated function.
Parameters: level – (int) logging level
static force_level(*args, **kwargs)¶
Force a level value for specific loggers.
Parameters: - args – (str or dict) level name, or a configuration for more levels
- kwargs – (dict) logger names and level values as elements
class pysllo.loggers.TrackingLogger(name, level=0, propagation=False)¶
Bases: pysllo.loggers.propagation_logger.PropagationLogger
TrackingLogger is a Logger class that makes it possible to trace logging activity on all levels and, if an exception occurs, to push all logs from the specific context regardless of the logging level.
To use it:
>>> import logging
>>> from pysllo.loggers import TrackingLogger
>>> logging.setLoggerClass(TrackingLogger)
>>> log = logging.getLogger('name')
The tracking logger can be used in two ways: as a context manager and as a decorator.
Context example:
>>> logger.setLevel(logging.INFO)
>>> try:
>>>     with logger.trace:
>>>         logger.debug(msg1)
>>>         logger.info(msg2)
>>>         raise Exception
>>> except Exception:
>>>     pass
In this case, after the exception occurs, logs on all levels are pushed to the handler.
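The standard library implements the same buffer-and-flush-on-error idea in logging.handlers.MemoryHandler: records are buffered, and a record at flushLevel or above flushes the whole buffer to the target handler. A stdlib-only sketch of the behaviour described above:

```python
import io
import logging
from logging.handlers import MemoryHandler

stream = io.StringIO()
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))

# Buffer up to 100 records; flush them all to `target` only when a
# record at ERROR or above arrives
memory = MemoryHandler(capacity=100, flushLevel=logging.ERROR, target=target)

logger = logging.getLogger("track_demo")
logger.setLevel(logging.DEBUG)   # let every record into the buffer
logger.addHandler(memory)

logger.debug("msg1")
logger.info("msg2")
assert stream.getvalue() == ""   # nothing has reached the target yet
logger.error("boom")             # triggers a flush of the whole buffer
print(stream.getvalue().splitlines())
# → ['DEBUG:msg1', 'INFO:msg2', 'ERROR:boom']
```

TrackingLogger adds convenience on top of this pattern: the trace object scopes the buffering to a context or a decorated function instead of the logger's whole lifetime.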
Decorator example:
>>> logger.setLevel(logging.CRITICAL)
>>> @logger.trace()
>>> def trace_func():
>>>     logger.debug(msg1)
>>>     logger.info(msg2)
>>>     raise Exception
>>>
>>> try:
>>>     trace_func()
>>> except Exception:
>>>     pass
This is the same case as the previous one, but using the tracer object as a decorator.
__init__(name, level=0, propagation=False)¶
Used automatically by getLogger, but if you use a config file you can configure propagation from the start, which is a fine option.
Parameters: - name – (str) logger name
- level – (int) logging level
- propagation – (bool) enable/disable propagation from the start
trace¶
Return the tracer object, which makes it possible to track logs as a context manager or as a decorator.
Returns: (Tracer) special object that works as a context manager or a decorator for tracking logs
enable_tracking(force_level=10)¶
Enable tracking for all logging. If force_level is configured, then after an exception the buffered logs are pushed out at that level.
Parameters: force_level – (int) logging level
disable_tracking()¶
Disable the tracking functionality.
exit_with_exc()¶
Helper to use after an exception occurs. If you use the trace object, it is not required to call this manually.
class pysllo.utils.factory.LoggersFactory¶
LoggersFactory is a static helper that builds a combined Logger class with the features you want to use.
To use it:
>>> MixedLogger = LoggersFactory.make(
>>>     tracking_logger=True,
>>>     structured_logger=True
>>> )
>>> logging.setLoggerClass(MixedLogger)
>>> log = logging.getLogger('test')
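A factory like this can be implemented with dynamic multiple inheritance: build a new class whose bases are the selected logger classes. The sketch below shows the general pattern with illustrative stand-in mixins (these are not pysllo's actual classes):

```python
import logging

class StructuredMixin(logging.Logger):
    """Stand-in for a structured-logging feature."""
    structured = True

class TrackingMixin(logging.Logger):
    """Stand-in for a tracking feature."""
    tracking = True

def make(structured=False, tracking=False):
    """Build a Logger subclass combining the selected features."""
    bases = []
    if tracking:
        bases.append(TrackingMixin)
    if structured:
        bases.append(StructuredMixin)
    bases.append(logging.Logger)
    # type() creates a new class on the fly from the chosen bases
    return type("MixedLogger", (*bases,), {})

MixedLogger = make(structured=True, tracking=True)
logging.setLoggerClass(MixedLogger)
log = logging.getLogger("factory_demo")
print(isinstance(log, StructuredMixin), isinstance(log, TrackingMixin))
# → True True
```

Because the combined class is a real Logger subclass, it can be installed globally with logging.setLoggerClass, exactly as in the pysllo example above.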
static make(structured_logger=False, propagation_logger=False, tracking_logger=False)¶
Based on your choices, create one class that supports all the chosen features.
Parameters: - structured_logger – (bool) enable/disable structured logger features
- propagation_logger – (bool) enable/disable propagation
- tracking_logger – (bool) enable/disable tracking
Returns: (MixedLogger class) combined logger class
Handlers¶
class pysllo.handlers.ElasticSearchUDPHandler(connections, level=0, name='logs', limit=9000, backup=False)¶
Bases: logging.Handler
ElasticSearchUDPHandler is a logging handler that makes it possible to send your logs to an ElasticSearch cluster.
Because a UDP connection is used, you must configure UDP bulk insert in your Elastic cluster. Another requirement is to use pysllo.formatters.JsonFormatter (or another JSON formatter) with this handler, because the default ElasticSearch bulk format is a list of JSON objects.
For more information about this configuration, see the Elastic documentation about UDP messages.
To use this handler, just set it up:
>>> host, port = 'localhost', 9000
>>> handler = ElasticSearchUDPHandler([(host, port)])
>>> formatter = JsonFormatter()
>>> handler.setFormatter(formatter)
>>> log = logging.getLogger('test')
>>> log.setLevel(logging.DEBUG)
>>> log.addHandler(handler)
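The mechanics can be illustrated without an Elastic cluster: a handler that serializes each record to JSON and fires it over UDP, with a local UDP socket standing in for the bulk endpoint. This is a stdlib-only sketch (the class name and wire format are assumptions for the example, not pysllo's actual implementation):

```python
import json
import logging
import socket

class JsonUDPHandler(logging.Handler):
    """Illustrative handler: one JSON document per UDP datagram."""
    def __init__(self, host, port):
        super().__init__()
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def emit(self, record):
        doc = {"level": record.levelname, "message": record.getMessage()}
        self.sock.sendto(json.dumps(doc).encode("utf-8"), self.addr)

# Local UDP listener standing in for the Elastic bulk endpoint
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
listener.settimeout(5)
port = listener.getsockname()[1]

log = logging.getLogger("udp_demo")
log.setLevel(logging.DEBUG)
log.addHandler(JsonUDPHandler("127.0.0.1", port))
log.info("hello elastic")

payload = json.loads(listener.recvfrom(4096)[0])
print(payload)
# → {'level': 'INFO', 'message': 'hello elastic'}
```

The real handler additionally buffers records up to a byte limit and supports round-robin over several connections, as described below.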
__init__(connections, level=0, name='logs', limit=9000, backup=False)¶
Configure the most important settings of this handler. The list of connections is required; if you set more than one, a round-robin algorithm is used to pick the next connection.
Parameters: - connections – (tuple or list) list of tuples with server address and port
- level – (int) logging level
- name – (str) logger name
- limit – (int) byte size of the buffer; when this limit is reached, the buffer is pushed to the Elastic cluster
- backup – (bool) enable/disable backup
static set_backup_path(path)¶
Set the path for backup files.
Parameters: path – (str) Unix path
static enable_backup()¶
Enable the backup functionality, which keeps log sending safe if the connection is lost.
static disable_backup()¶
Disable the backup functionality.
static set_limit(limit)¶
Set the limit value. The limit is the size of the buffer used to store messages; once the buffer is full, all messages are sent. It is important to choose a good number here, so that you neither open too many connections to the DB nor send batches of messages so big that they cause delays on real-time dashboards.
Parameters: limit – (int) number of bytes
emit(record)¶
Standard logging Handler method that sends a message to the receiver; in this case the message is saved in the buffer.
Parameters: record – (LogRecord) record to send
Formatters¶
class pysllo.formatters.JsonFormatter(name='logs', limit=9000)¶
Bases: logging.Formatter
JsonFormatter gives logging the ability to convert your logs into JSON records, which makes it possible to save them in document-based databases.
To use it, simply add this formatter to a handler that supports JSON messages.
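A JSON formatter of this kind is straightforward to sketch on top of the standard library: override format() and serialize the record's attributes with json.dumps. The field selection below is an assumption for illustration, not pysllo's exact output:

```python
import io
import json
import logging

class SimpleJsonFormatter(logging.Formatter):
    """Illustrative formatter: emit each record as one JSON object."""
    def format(self, record):
        return json.dumps({
            "name": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
        })

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(SimpleJsonFormatter())

log = logging.getLogger("json_demo")
log.setLevel(logging.DEBUG)
log.addHandler(handler)
log.warning("disk almost full")

doc = json.loads(stream.getvalue())
print(doc)
# → {'name': 'json_demo', 'level': 'WARNING', 'message': 'disk almost full'}
```

One JSON object per line is exactly the shape that document stores and bulk-insert endpoints expect, which is why this formatter pairs naturally with the UDP handler above.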
__init__(name='logs', limit=9000)¶
Configure the byte limit of a message and the name of the document store.
Parameters: - name – (str) name of the DB in the store
- limit – (int) maximum number of bytes in a message
format(record)¶
Standard logging method that formats a record as JSON.
Parameters: record – (LogRecord) object to be serialized