How can I improve this Python class that merges sorted files?
Background:
I'm working with files so large that they can't fit into memory all at once. While cleaning the input files, I build up a list in memory; when it reaches 1,000,000 entries (about 1 GB of memory), I sort it (using the default key described below) and write the list out to a file. This class's job is to stitch those sorted files back together. So far it has worked on every file I've thrown at it. The largest case I've handled was merging 66 sorted files.
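For context, here is a minimal sketch of that chunk-sort-write step (the function names, chunk handling, and file naming here are hypothetical, not my actual cleaning code):

def write_sorted_chunks(lines, keyfunc, chunk_size=1000000, prefix='sorted'):
    """ Sort fixed-size chunks of 'lines' in memory with 'keyfunc' and
        write each chunk to its own file. Returns the chunk file paths.
        A sketch of the step described above, not production code.
    """
    paths = []
    chunk = []
    for line in lines:
        chunk.append(line)
        if len(chunk) >= chunk_size:
            paths.append(_dump_chunk(chunk, keyfunc, prefix, len(paths)))
            chunk = []
    if chunk:
        paths.append(_dump_chunk(chunk, keyfunc, prefix, len(paths)))
    return paths

def _dump_chunk(chunk, keyfunc, prefix, n):
    chunk.sort(key=keyfunc)  # must match the key used when merging
    path = '%s_%03d.txt' % (prefix, n)
    f = open(path, 'w')
    f.writelines(chunk)
    f.close()
    return path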
Questions:
- Are there holes in my logic (where might it break)?
- Is my merge-sort algorithm implemented correctly?
- Are there any obvious improvements I could make?
Example data:
This is an abstract representation of one line in these files:
'hash_of_SomeStringId\tSome String Id\t\t\twww.somelink.com\t\tOtherData\t\n'
The key point is that I use 'SomeStringId'.lower().replace(' ', '') as my sort key.
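Applied to the example line above, that key expression picks out the second tab-separated field and normalizes it:

line = 'hash_of_SomeStringId\tSome String Id\t\t\twww.somelink.com\t\tOtherData\t\n'
key = line.split('\t')[1].lower().replace(' ', '')
print key  # -> 'somestringid'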
Original code:
import os

class SortedFileMerger():
    """ A one-time use object that merges any number of smaller sorted
        files into one large sorted file.
        ARGS:
            paths - list of paths to sorted files
            output_path - string path to desired output file
            dedup - (boolean) remove lines with duplicate keys, default = True
            key - use to override sort key,
                  default = "line.split('\t')[1].lower().replace(' ', '')"
                  will be prepended by "lambda line: ". This should be the same
                  key that was used to sort the files being merged!
    """
    def __init__(self, paths, output_path, dedup=True,
                 key="line.split('\t')[1].lower().replace(' ', '')"):
        self.key = eval("lambda line: %s" % key)
        self.dedup = dedup
        self.handles = [open(path, 'r') for path in paths]
        # holds one line from each file
        self.lines = [file_handle.readline() for file_handle in self.handles]
        self.output_file = open(output_path, 'w')
        self.lines_written = 0
        self._mergeSortedFiles()  # call the main method

    def __del__(self):
        """ Clean up file handles.
        """
        for handle in self.handles:
            if not handle.closed:
                handle.close()
        if self.output_file and (not self.output_file.closed):
            self.output_file.close()

    def _mergeSortedFiles(self):
        """ Merge the small sorted files into 'self.output_file'. This can
            and should only be called once.
            Called from __init__().
        """
        previous_comparable = ''
        min_line = self._getNextMin()
        while min_line:
            index = self.lines.index(min_line)
            comparable = self.key(min_line)
            if not self.dedup:
                # not removing duplicates
                self._writeLine(index)
            elif comparable != previous_comparable:
                # removing duplicates and this isn't one
                self._writeLine(index)
            else:
                # removing duplicates and this is one
                self._readNextLine(index)
            previous_comparable = comparable
            min_line = self._getNextMin()
        # finished merging
        self.output_file.close()

    def _getNextMin(self):
        """ Returns the next "smallest" line in sorted order.
            Returns None when there are no more values to get.
        """
        while '' in self.lines:
            index = self.lines.index('')
            if self._isLastLine(index):
                # file.readline() is returning '' because
                # it has reached the end of a file.
                self._closeFile(index)
            else:
                # an empty line got mixed in
                self._readNextLine(index)
        if len(self.lines) == 0:
            return None
        return min(self.lines, key=self.key)

    def _writeLine(self, index):
        """ Write lines[index] to the output file and advance that file.
        """
        self.output_file.write(self.lines[index])
        self.lines_written += 1
        self._readNextLine(index)

    def _readNextLine(self, index):
        """ Read the next line from handles[index] into lines[index].
        """
        self.lines[index] = self.handles[index].readline()

    def _closeFile(self, index):
        """ If there are no more lines to get from a file, it
            needs to be closed and removed from 'self.handles'.
            Its entry in 'self.lines' also needs to be removed.
        """
        handle = self.handles.pop(index)
        if not handle.closed:
            handle.close()
        # remove entry from self.lines to preserve order
        _ = self.lines.pop(index)

    def _isLastLine(self, index):
        """ Check whether handles[index] is at EOF.
        """
        handle = self.handles[index]
        if handle.tell() == os.path.getsize(handle.name):
            return True
        return False
Edit: Following Brian's suggestion, I came up with the solution below.
Second edit: Updated the code per John Machin's suggestions:
import heapq26  # heapq.py copied from a Python 2.6 install, as suggested in the answer below

def decorated_file(f, key):
    """ Yields an easily sortable tuple.
    """
    for line in f:
        yield (key(line), line)

def standard_keyfunc(line):
    """ The standard key function in my application.
    """
    return line.split('\t', 2)[1].replace(' ', '').lower()

def mergeSortedFiles(paths, output_path, dedup=True, keyfunc=standard_keyfunc):
    """ Does the same thing the SortedFileMerger class does.
    """
    files = map(open, paths)  # open defaults to mode='r'
    output_file = open(output_path, 'w')
    lines_written = 0
    previous_comparable = ''
    for line in heapq26.merge(*[decorated_file(f, keyfunc) for f in files]):
        comparable = line[0]
        # only skip a line when dedup is on and the key repeats
        if not dedup or previous_comparable != comparable:
            output_file.write(line[1])
            lines_written += 1
        previous_comparable = comparable
    for f in files:
        f.close()
    output_file.close()
    return lines_written
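It is called the same way (hypothetical paths again):

count = mergeSortedFiles(['sorted_000.txt', 'sorted_001.txt'], 'merged.txt')
print '%d lines written' % count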
Rough benchmark
Using the same input files (2.2 GB of data):
- The SortedFileMerger class took 51 minutes (3068.4 seconds)
- Brian's solution took 40 minutes (2408.5 seconds)
- After adding John Machin's suggestions, the solution code took 36 minutes (2214.0 seconds)
2 Answers
<< This "answer" is a comment on the results of the questioner's code >>
Suggestion: using eval() like this is rather dubious, and it restricts the caller to a one-line lambda expression; extracting the key may well take more than one line. Besides, don't you need the very same function for the preliminary sorting step anyway?
So replace this:

def mergeSortedFiles(paths, output_path, dedup=True, key="line.split('\t')[1].lower().replace(' ', '')"):
    keyfunc = eval("lambda line: %s" % key)

with this:

def my_keyfunc(line):
    return line.split('\t', 2)[1].replace(' ', '').lower()
    # minor tweaks may speed it up a little

def mergeSortedFiles(paths, output_path, keyfunc, dedup=True):
Note that in Python 2.6, the heapq module gained a new merge function that will do this for you.
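For illustration, heapq.merge consumes any number of already sorted iterables and yields their contents in sorted order:

import heapq
print list(heapq.merge([1, 3, 5], [2, 4, 6]))
# [1, 2, 3, 4, 5, 6]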
If you want to handle a custom key function, you can wrap the file iterators with a decorating generator so that they compare by that key, and strip the decoration off at the end:
import heapq

def decorated_file(f, key):
    for line in f:
        yield (key(line), line)

filenames = ['file1.txt', 'file2.txt', 'file3.txt']
files = map(open, filenames)
outfile = open('merged.txt', 'w')

for line in heapq.merge(*[decorated_file(f, my_keyfunc) for f in files]):
    outfile.write(line[1])
[Edit] Even on earlier versions of Python, it is probably worthwhile simply to take the implementation of merge from the later heapq module. It is written in pure Python, runs unmodified on Python 2.5, and, since it uses a heap to obtain the next minimum, should be very efficient when merging a large number of files.
You can simply copy heapq.py from a Python 2.6 installation, rename it "heapq26.py", and import it with "from heapq26 import merge" — there are no 2.6-specific features used in it. Alternatively, you could just copy the merge function out of it (rewriting the heappop etc. calls to reference the Python 2.5 heapq module).