Tool for high-performance binary-match search across directories of thousands of files on OSX


I'm merging two large (1000s of files) photo collections with different directory structures, and many photos already exist in both sets. I want to write a script that does the following:

For a given photo in set B,
Check if a binary match for it exists in set A.
If there's a match, delete the file.

After checking every file in set B, I'll merge the remainder of set B (now all unique) into set A.

Binary matches may exist under different filenames, so the test should ignore filenames entirely.
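For a single pairwise check, Python's standard library can already compare two files by content while ignoring their names; a minimal sketch (the paths are hypothetical):

import filecmp

# shallow=False forces a byte-by-byte comparison of file contents,
# so filenames and stat metadata play no role in the result
same = filecmp.cmp("setA/IMG_0001.jpg", "setB/renamed.jpg", shallow=False)

The hard part is doing this efficiently across thousands of files, which is where an index comes in.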

Also, I'll be running a set A search once for every file in set B, so I'd prefer a tool that builds an index of set A as part of the initial scan. Conveniently, this index only needs to be built once and never updated.

I was planning to use an OSX shell script, but Python would work fine too.


1 Answer

Following Mark's suggestion, I solved my problem by writing a pair of Python scripts.

md5index.py:

#given a folder path, makes a hash index of every file, recursively
import sys, os, hashlib, io

#some files need to be hashed incrementally as they may be too big to fit in memory
#http://stackoverflow.com/a/40961519/2518451
def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5

#this project was done on macOS; other platforms may have their own junk files worth ignoring
ignore_files = [".DS_Store"]

def index(source, index_output):

    index_output_f = open(index_output, "wt")
    index_count = 0

    for root, dirs, filenames in os.walk(source):

        for f in filenames:
            if f in ignore_files:
                continue

            #print f
            fullpath = os.path.join(root, f)
            #print fullpath

            md5 = md5sum(fullpath)
            md5string = md5.hexdigest()
            line = md5string + ":" + fullpath
            index_output_f.write(line + "\n")
            print(line)
            index_count += 1

    index_output_f.close()
    print("Index Count: " + str(index_count))


if __name__ == "__main__":
    index_output = "index_output.txt"

    if len(sys.argv) < 2:
        print("Usage: md5index [path]")
    else:
        index_path = sys.argv[1]
        print("Indexing... " + index_path)
        index(index_path, index_output)
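For reference, each line of index_output.txt pairs a hex digest with the path that produced it; for example (digests and paths here are hypothetical):

9e107d9d372bb6826bd81d3542a419d6:/Users/me/folderA/vacation/IMG_0001.jpg
c157a79031e1c40f85931829bc5fc552:/Users/me/folderA/misc/photo.jpg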

And uniquemerge.py:

#given an index_output.txt in the same directory and an input path,
#remove all files that already have a hash in index_output.txt

import sys, os
from md5index import md5sum
from send2trash import send2trash
SENDING_TO_TRASH = True

def load_index():
    index_output = "index_output.txt"
    #use a set so the per-file membership test in traverse_merge_path is O(1)
    index = set()
    with open(index_output, "rt") as index_output_f:
        for line in index_output_f:
            #split on the first colon only, in case a path contains ':'
            md5 = line.split(':', 1)[0]
            index.add(md5)
    return index

#traverse merge_path, comparing each file's hash against the index
def traverse_merge_path(merge_path, index):
    found = 0
    not_found = 0

    for root, dirs, filenames in os.walk(merge_path):
        for f in filenames:
            #print f
            fullpath = os.path.join(root, f)
            #print fullpath

            md5 = md5sum(fullpath)
            md5string = md5.hexdigest()

            if md5string in index:
                if SENDING_TO_TRASH:
                    send2trash(fullpath)

                found += 1
            else:
                print("\t NON-DUPLICATE ORIGINAL: " + fullpath)
                not_found += 1


    print("Found Duplicates: " + str(found) + " Originals: " + str(not_found))


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: uniquemerge [path]")
        sys.exit(1)

    index = load_index()
    print("Loaded index with item count: " + str(len(index)))

    print("SENDING_TO_TRASH: " + str(SENDING_TO_TRASH))

    merge_path = sys.argv[1]
    print("Merging To: " + merge_path)

    traverse_merge_path(merge_path, index)
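Two notes on the design: send2trash is a third-party package (pip install Send2Trash) that moves files to the macOS Trash instead of deleting them outright, so mistakes are recoverable. And setting SENDING_TO_TRASH = False turns a run into a dry run that only counts and reports duplicates without touching anything.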

Assuming I want to merge folderB into folderA, I would run:

python md5index.py folderA
# creates index_output.txt with hashes of everything from folderA

python uniquemerge.py folderB
# deletes all files in folderB that already existed in folderA
# I can now manually merge folderB into folderA
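MD5 collisions between distinct photos are astronomically unlikely, but if you want extra safety before trashing anything, the index already records a matching folderA path for each hash, so a byte-for-byte check can confirm each hit first. A minimal sketch, not part of the original scripts (the function names here are hypothetical):

import filecmp

def load_index_paths(index_output="index_output.txt"):
    #map each md5 hex digest to one folderA path that produced it
    by_hash = {}
    with open(index_output, "rt") as f:
        for line in f:
            md5hex, path = line.rstrip("\n").split(":", 1)
            by_hash.setdefault(md5hex, path)
    return by_hash

def is_true_duplicate(candidate_path, md5hex, by_hash):
    original = by_hash.get(md5hex)
    #shallow=False compares actual bytes rather than os.stat() signatures
    return original is not None and filecmp.cmp(candidate_path, original, shallow=False)

Only files for which is_true_duplicate returns True would then be sent to the trash.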
