Logging handlers: how to roll over after a time interval or a maximum size in bytes?
I'm having some trouble with logging. I want the log file to roll over automatically both after a certain amount of time and once it reaches a certain size.
Rolling over after a certain time is done with TimedRotatingFileHandler, and rolling over once the log reaches a certain size is done with RotatingFileHandler.
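For reference, setting each one up on its own looks roughly like this (the file name, thresholds and logger name here are just placeholders):

import logging
import logging.handlers

logger = logging.getLogger("myapp")  # hypothetical logger name

# Time-based rotation: roll over every hour, keep 5 old files.
timed = logging.handlers.TimedRotatingFileHandler(
    "app.log", when="h", interval=1, backupCount=5)

# Size-based rotation: roll over once the file reaches ~1 MB, keep 5 old files.
sized = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1048576, backupCount=5)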
However, TimedRotatingFileHandler has no maxBytes attribute, and RotatingFileHandler cannot roll over after a given amount of time. I also tried adding both handlers to the same logger, but that resulted in duplicated log records. Am I missing something?
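The two-handler attempt looked roughly like this (a sketch with placeholder names); since every record is passed to every handler attached to the logger, each line ends up written twice:

import logging
import logging.handlers

logger = logging.getLogger("myapp")  # hypothetical logger name
logger.addHandler(logging.handlers.TimedRotatingFileHandler("app.log", when="h"))
logger.addHandler(logging.handlers.RotatingFileHandler("app.log", maxBytes=1048576))
logger.warning("hello")  # emitted once per handler, so the record is duplicated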
I also looked through the source of logging.handlers. I tried subclassing TimedRotatingFileHandler and overriding its shouldRollover() method to create a class with both capabilities:
import logging.handlers
import time

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0, maxBytes=0):
        """ This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler) """
        # super(self). # It's an old-style class, so super doesn't work.
        logging.handlers.TimedRotatingFileHandler.__init__(self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have. We are also comparing times.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        #print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0
But with this, the log creates only a single backup, which then gets overwritten. It looks like I also need to override the doRollover() method, and that is not trivial.
Is there any other way to create a logger that rolls the file over both after a certain time and once it reaches a certain size?
5 Answers
This is what I use:
import logging.handlers

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler, logging.handlers.RotatingFileHandler):
    '''
    cf http://stackoverflow.com/questions/29602352/how-to-mix-logging-handlers-file-timed-and-compress-log-in-the-same-config-f

    Spec:
    Log files limited in size & date, i.e. when the size or the date is overtaken, there is a file rollover.
    '''

    ########################################
    def __init__(self, filename, mode='a', maxBytes=0, backupCount=0, encoding=None,
                 delay=0, when='h', interval=1, utc=False):
        logging.handlers.TimedRotatingFileHandler.__init__(
            self, filename, when, interval, backupCount, encoding, delay, utc)
        logging.handlers.RotatingFileHandler.__init__(
            self, filename, mode, maxBytes, backupCount, encoding, delay)

    ########################################
    def computeRollover(self, currentTime):
        return logging.handlers.TimedRotatingFileHandler.computeRollover(self, currentTime)

    ########################################
    def getFilesToDelete(self):
        return logging.handlers.TimedRotatingFileHandler.getFilesToDelete(self)

    ########################################
    def doRollover(self):
        return logging.handlers.TimedRotatingFileHandler.doRollover(self)

    ########################################
    def shouldRollover(self, record):
        """ Determine if rollover should occur. """
        return (logging.handlers.TimedRotatingFileHandler.shouldRollover(self, record) or
                logging.handlers.RotatingFileHandler.shouldRollover(self, record))
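A usage sketch for the class above (the file name, thresholds and logger name are placeholders):

import logging

logger = logging.getLogger("myapp")  # hypothetical logger name
handler = EnhancedRotatingFileHandler("app.log", maxBytes=10 * 1024 * 1024,
                                      backupCount=5, when="midnight")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("rolls over at midnight or at 10 MB, whichever comes first")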
If you really need this, write your own handler based on TimedRotatingFileHandler that rolls over primarily on time, and add the size-based rollover logic on top. You have already tried this, but you need to override at least two methods: shouldRollover() and doRollover(). The first decides when a rollover is due; the second closes the current log file, renames the existing files, deletes the expired ones and opens a new file.
The logic of doRollover() can get a bit involved, but it is certainly doable.
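A rough skeleton of that approach (the class name is made up, and the method bodies are only placeholders, not a working implementation):

import logging.handlers

class SizeAndTimeRotatingHandler(logging.handlers.TimedRotatingFileHandler):
    """Hypothetical subclass: time-based rotation plus a size cap."""

    def shouldRollover(self, record):
        # Return a true value if either the size limit or the time limit is hit.
        ...

    def doRollover(self):
        # Close the stream, rename/prune the old files (avoiding name collisions
        # when several size-based rollovers happen within one time interval),
        # reopen the stream and compute the next time-based rollover point.
        ...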
I modified TimedRotatingFileHandler slightly so that it rotates on both time and file size. To do that I changed __init__, shouldRollover, doRollover and getFilesToDelete (see the code below). This is the result with when='M' (rotate by minute), interval=2 (every two minutes), backupCount=20 (keep 20 backups) and maxBytes=1048576 (maximum file size 1 MB):
-rw-r--r-- 1 user group 185164 Jun 10 00:54 sumid.log
-rw-r--r-- 1 user group 1048462 Jun 10 00:48 sumid.log.2011-06-10_00-48.001
-rw-r--r-- 1 user group 1048464 Jun 10 00:48 sumid.log.2011-06-10_00-48.002
-rw-r--r-- 1 user group 1048533 Jun 10 00:49 sumid.log.2011-06-10_00-48.003
-rw-r--r-- 1 user group 1048544 Jun 10 00:50 sumid.log.2011-06-10_00-49.001
-rw-r--r-- 1 user group 574362 Jun 10 00:52 sumid.log.2011-06-10_00-50.001
You can see that the first four logs were rotated after reaching 1 MB, while the last rotation happened after two minutes. I have not tested the deletion of old log files yet, so it probably does not work. The code certainly will not work for backupCount >= 1000, since I only append three digits to the file name.
Here is the modified code:
import logging.handlers
import os
import time

class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0, maxBytes=0):
        """ This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler) """
        logging.handlers.TimedRotatingFileHandler.__init__(self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have. We are also comparing times.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        #print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0

    def doRollover(self):
        """
        Do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens. However, you want the file to be named for the
        start of the interval, not the current time. If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.
        """
        if self.stream:
            self.stream.close()
        # get the time that this sequence started at and make it a TimeTuple
        currentTime = int(time.time())
        dstNow = time.localtime(currentTime)[-1]
        t = self.rolloverAt - self.interval
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
            dstThen = timeTuple[-1]
            if dstNow != dstThen:
                if dstNow:
                    addend = 3600
                else:
                    addend = -3600
                timeTuple = time.localtime(t + addend)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        if self.backupCount > 0:
            # append a three-digit counter so several rollovers within the
            # same time interval do not overwrite each other
            cnt = 1
            dfn2 = "%s.%03d" % (dfn, cnt)
            while os.path.exists(dfn2):
                cnt += 1
                dfn2 = "%s.%03d" % (dfn, cnt)
            os.rename(self.baseFilename, dfn2)
            for s in self.getFilesToDelete():
                os.remove(s)
        else:
            if os.path.exists(dfn):
                os.remove(dfn)
            os.rename(self.baseFilename, dfn)
        #print "%s -> %s" % (self.baseFilename, dfn)
        self.mode = 'w'
        self.stream = self._open()
        newRolloverAt = self.computeRollover(currentTime)
        while newRolloverAt <= currentTime:
            newRolloverAt = newRolloverAt + self.interval
        # If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == 'MIDNIGHT' or self.when.startswith('W')) and not self.utc:
            dstAtRollover = time.localtime(newRolloverAt)[-1]
            if dstNow != dstAtRollover:
                if not dstNow:  # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:           # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                newRolloverAt += addend
        self.rolloverAt = newRolloverAt

    def getFilesToDelete(self):
        """
        Determine the files to delete when rolling over.

        More specific than the earlier method, which just used glob.glob().
        """
        dirName, baseName = os.path.split(self.baseFilename)
        fileNames = os.listdir(dirName)
        result = []
        prefix = baseName + "."
        plen = len(prefix)
        for fileName in fileNames:
            if fileName[:plen] == prefix:
                suffix = fileName[plen:-4]  # strip the ".NNN" counter before matching
                if self.extMatch.match(suffix):
                    result.append(os.path.join(dirName, fileName))
        result.sort()
        if len(result) < self.backupCount:
            result = []
        else:
            result = result[:len(result) - self.backupCount]
        return result
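A usage sketch matching the settings described above (the logger name is arbitrary):

import logging

logger = logging.getLogger("sumid")  # arbitrary logger name
handler = EnhancedRotatingFileHandler("sumid.log", when='M', interval=2,
                                      backupCount=20, maxBytes=1048576)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug("rotates every two minutes or at 1 MB, whichever comes first")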