Python, want logging with log rotation and compression
Can anyone suggest a way to do logging in Python with the following requirements:
- Rotate the log daily
- Compress the log when it is rotated
- Optional - delete the oldest log files to keep X MB of free space
- Optional - transfer the log files to a server via SFTP
Thanks for any responses,
Fred
11 Answers
15
In addition to unutbu's answer: here's how to modify the TimedRotatingFileHandler to compress using zip files.
import logging
import logging.handlers
import zipfile
import codecs
import sys
import os
import time
import glob


class TimedCompressedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    """
    Extended version of TimedRotatingFileHandler that compresses logs on rollover.
    """
    def doRollover(self):
        """
        do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens. However, you want the file to be named for the
        start of the interval, not the current time. If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.
        """
        self.stream.close()
        # get the time that this sequence started at and make it a TimeTuple
        t = self.rolloverAt - self.interval
        timeTuple = time.localtime(t)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        if os.path.exists(dfn):
            os.remove(dfn)
        os.rename(self.baseFilename, dfn)
        if self.backupCount > 0:
            # find the oldest log file and delete it
            s = glob.glob(self.baseFilename + ".20*")
            if len(s) > self.backupCount:
                s.sort()
                os.remove(s[0])
        # print("%s -> %s" % (self.baseFilename, dfn))
        if self.encoding:
            self.stream = codecs.open(self.baseFilename, 'w', self.encoding)
        else:
            self.stream = open(self.baseFilename, 'w')
        self.rolloverAt = self.rolloverAt + self.interval
        # zip up the file we just rotated out, then delete the uncompressed copy
        if os.path.exists(dfn + ".zip"):
            os.remove(dfn + ".zip")
        zf = zipfile.ZipFile(dfn + ".zip", "w")
        zf.write(dfn, os.path.basename(dfn), zipfile.ZIP_DEFLATED)
        zf.close()
        os.remove(dfn)


if __name__ == '__main__':
    ## Demo of using TimedCompressedRotatingFileHandler() to log every 5 seconds,
    ## to one uncompressed file and five rotated and compressed files
    os.nice(19)  # I always nice test code
    logHandler = TimedCompressedRotatingFileHandler("mylog", when="S",
        interval=5, backupCount=5)  # Total of six rotated log files, rotating every 5 secs
    logFormatter = logging.Formatter(
        fmt='%(asctime)s.%(msecs)03d %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'
    )
    logHandler.setFormatter(logFormatter)
    mylogger = logging.getLogger('MyLogRef')
    mylogger.addHandler(logHandler)
    mylogger.setLevel(logging.DEBUG)

    # Write lines non-stop into the logger and rotate every 5 seconds
    ii = 0
    while True:
        mylogger.debug("Test {0}".format(ii))
        ii += 1
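The zip step in doRollover above can be sanity-checked in isolation. Here is a minimal round-trip sketch (the file names are made up for the demo): write a small stand-in for a rotated log file, compress it with ZIP_DEFLATED the same way doRollover does, then read the archive back.

```python
import os
import zipfile

# Create a tiny stand-in for a rotated log file (name is arbitrary).
with open("sample.log", "w") as f:
    f.write("hello rollover\n")

# Compress it the way doRollover does: store the file under its
# basename with ZIP_DEFLATED, then delete the uncompressed copy.
with zipfile.ZipFile("sample.log.zip", "w") as zf:
    zf.write("sample.log", os.path.basename("sample.log"), zipfile.ZIP_DEFLATED)
os.remove("sample.log")

# Read the archive back to confirm the contents survived compression.
with zipfile.ZipFile("sample.log.zip") as zf:
    print(zf.read("sample.log").decode().strip())  # hello rollover
```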
32
Since Python 3.3, another way to compress log files on rotation is to use the rotator attribute of BaseRotatingHandler (and all classes that inherit from it). For example:
import gzip
import os
import logging
import logging.handlers


class GZipRotator:
    def __call__(self, source, dest):
        os.rename(source, dest)
        f_in = open(dest, 'rb')
        f_out = gzip.open("%s.gz" % dest, 'wb')
        f_out.writelines(f_in)
        f_out.close()
        f_in.close()
        os.remove(dest)


logformatter = logging.Formatter('%(asctime)s;%(levelname)s;%(message)s')
log = logging.handlers.TimedRotatingFileHandler('debug.log', 'midnight', 1, backupCount=5)
log.setLevel(logging.DEBUG)
log.setFormatter(logformatter)
log.rotator = GZipRotator()

logger = logging.getLogger('main')
logger.addHandler(log)
logger.setLevel(logging.DEBUG)

....
You can see more information here.
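Since Python 3.3 the handler also exposes a namer attribute alongside rotator, and the two can be combined so the rotated file is written straight to its compressed name instead of being renamed first. A minimal sketch along those lines (file names are illustrative):

```python
import gzip
import logging
import logging.handlers
import os


def namer(default_name):
    # Called via rotation_filename(): append .gz to the rotated name.
    return default_name + ".gz"


def rotator(source, dest):
    # Called via rotate(): compress the live file straight into dest.
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        f_out.writelines(f_in)
    os.remove(source)


handler = logging.handlers.TimedRotatingFileHandler(
    "debug.log", when="midnight", backupCount=5)
handler.namer = namer
handler.rotator = rotator
```

Note that with a namer in play, backupCount pruning depends on getFilesToDelete() recognising the renamed files, and that behaviour has varied between Python versions, so test on the version you deploy.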
85
- Rotate logs daily: use a TimedRotatingFileHandler.
- Compress logs: set the encoding='bz2' parameter. (Note that this "trick" only works for Python 2; 'bz2' is no longer treated as an encoding in Python 3.)
- Optional - delete the oldest log files to keep X MB of free space: you can arrange this (indirectly) with a RotatingFileHandler. By setting the maxBytes parameter, the log file rolls over when it reaches a certain size, and by setting the backupCount parameter you control how many rotated files are kept; together, the two parameters bound the maximum space the log files can occupy. You could also subclass TimedRotatingFileHandler to incorporate this behaviour.
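The first bullet needs no subclassing at all; a minimal daily-rotation setup (names are illustrative, no compression) looks like this:

```python
import logging
import logging.handlers

# Roll the file over at midnight and keep the last 7 rotated files.
logger = logging.getLogger("daily")
logger.setLevel(logging.INFO)

handler = logging.handlers.TimedRotatingFileHandler(
    "app.log", when="midnight", backupCount=7)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("this line lands in app.log until midnight")
```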
Just for fun, here is an example of how you can subclass TimedRotatingFileHandler. When you run the script below, it will write log files to /tmp/log_rotate*.

With a small value for time.sleep (such as 0.1), the log files fill up quickly, reach the maxBytes limit, and are rolled over.

With a large time.sleep (such as 1.0), the log files fill up slowly; the maxBytes limit is not reached, but they still roll over when the timed interval (of 10 seconds) is reached.

All the code below comes from logging/handlers.py. I simply meshed TimedRotatingFileHandler and RotatingFileHandler together in the most straightforward way.
import time
import re
import os
import stat
import logging
import logging.handlers as handlers


class SizedTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):
    """
    Handler for logging to a set of files, which switches from one file
    to the next when the current file reaches a certain size, or at certain
    timed intervals
    """
    def __init__(self, filename, maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):
        handlers.TimedRotatingFileHandler.__init__(
            self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have.
        """
        if self.stream is None:  # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:  # are we rolling over?
            msg = "%s\n" % self.format(record)
            # due to non-posix-compliant Windows feature
            self.stream.seek(0, 2)
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        return 0


def demo_SizedTimedRotatingFileHandler():
    log_filename = '/tmp/log_rotate'
    logger = logging.getLogger('MyLogger')
    logger.setLevel(logging.DEBUG)
    handler = SizedTimedRotatingFileHandler(
        log_filename, maxBytes=100, backupCount=5,
        when='s', interval=10,
        # encoding='bz2',  # uncomment for bz2 compression (Python 2 only)
    )
    logger.addHandler(handler)
    for i in range(10000):
        time.sleep(0.1)
        logger.debug('i=%d' % i)


demo_SizedTimedRotatingFileHandler()