hashlib: killed while computing the signature hash
I am trying to run Android's check_ota_package_signature.py script on a low-spec machine; the script checks the signature of an Android OTA update. Unfortunately, the process gets killed while it is running. I believe the relevant part is:
with open(package, 'rb') as package_file:
    package_bytes = package_file.read()
length = len(package_bytes)
footer = bytearray(package_bytes[-6:])
signature_start_from_end = (footer[1] << 8) + footer[0]
signature_start = length - signature_start_from_end
comment_len = (footer[5] << 8) + footer[4]
signed_len = length - comment_len - 2
print('Package length: %d' % (length,))
print('Comment length: %d' % (comment_len,))
print('Signed data length: %d' % (signed_len,))
print('Signature start: %d' % (signature_start,))
use_sha256 = CertUsesSha256(cert)
print('Use SHA-256: %s' % (use_sha256,))
h = sha256() if use_sha256 else sha1()
h.update(package_bytes[:signed_len])
package_digest = h.hexdigest().lower()
It is the line h.update(package_bytes[:signed_len]) that gets killed. My understanding is that this is an out-of-memory problem, because the roughly 1.1 GB file is read into memory all at once. How can I change the program to read the file in chunks so that this does not happen? (I have sketched what I am thinking of below the output.)
I wanted to verify the signature, but instead I got the following output:
Package length: 1182903653
Comment length: 1983
Signed data length: 1182901668
Signature start: 1182901688
Use SHA-256: True
Killed
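What I have in mind is roughly the loop below, though I have not checked that it produces the same digest as the slice-based original (hash_signed_data and the 1 MiB block size are just names/values I picked for the sketch):

import hashlib

def hash_signed_data(path, signed_len, use_sha256, block_size=1024 * 1024):
    # Hash only the first signed_len bytes of the package,
    # reading block_size bytes at a time instead of the whole file.
    h = hashlib.sha256() if use_sha256 else hashlib.sha1()
    remaining = signed_len
    with open(path, 'rb') as f:
        while remaining > 0:
            chunk = f.read(min(block_size, remaining))
            if not chunk:
                break  # file ended before signed_len bytes were read
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest().lower()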
1 Answer
One way to reduce the memory requirement is to use mmap() to map the file into virtual memory and let the operating system page that memory in and out of physical RAM as needed. A single large read(), by contrast, copies the whole file into freshly allocated memory that is not backed by the file on disk, so it cannot be paged out unless you have enough swap space.
#!/usr/bin/env python3
import gc
import mmap
import sys
from hashlib import sha256, sha1
with open(sys.argv[1], 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        length = len(mm)
        # last 6 bytes: signature start (counted from the end) and archive comment length
        footer = bytearray(mm[-6:])
        signature_start_from_end = (footer[1] << 8) + footer[0]
        signature_start = length - signature_start_from_end
        comment_len = (footer[5] << 8) + footer[4]
        signed_len = length - comment_len - 2
        print('Package length: %d' % (length,))
        print('Comment length: %d' % (comment_len,))
        print('Signed data length: %d' % (signed_len,))
        print('Signature start: %d' % (signature_start,))
        h = sha256()
        ## slower, but more explicit about only dealing with small chunks at a time
        current_pos = 0
        block_size = 4096
        while current_pos < signed_len:
            block_end = min(current_pos + block_size, signed_len)
            h.update(mm[current_pos:block_end])
            current_pos = block_end
            gc.collect()
        ## faster but depends on CPython implementation details
        # h.update(mm[:signed_len])
        print(h.hexdigest().lower())
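One caveat about the commented-out fast path: mm[:signed_len] slices the mmap object into a new bytes object, i.e. it copies the whole signed region back into memory, which is exactly what you want to avoid with a 1.1 GB package. A memoryview slice shares the mapped buffer instead of copying it, and hashlib accepts any bytes-like object, so something like the following should give you the one-shot update without the copy (untested sketch):

## zero-copy one-shot variant (would replace the while loop above);
## both views are released before the mmap is closed at the end of the with block
with memoryview(mm) as mv, mv[:signed_len] as signed_view:
    h.update(signed_view)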