We have ETL jobs in Python (Luigi). They all connect to the Hive metastore to fetch partition information.
Code:

```python
from hive_metastore import ThriftHiveMetastore

# `protocol` is an open thrift protocol bound to the metastore socket (setup omitted)
client = ThriftHiveMetastore.Client(protocol)
# -1 = max_parts: no limit on the number of partition names returned
partitions = client.get_partition_names('sales', 'salesdetail', -1)
```
It times out at random like this:

```
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/luigi/contrib/hive.py", line 210, in _existing_partitions
    partition_strings = client.get_partition_names(database, table, -1)
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/hive_metastore/ThriftHiveMetastore.py", line 1703, in get_partition_names
    return self.recv_get_partition_names()
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/hive_metastore/ThriftHiveMetastore.py", line 1716, in recv_get_partition_names
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 126, in readMessageBegin
    sz = self.readI32()
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 206, in readI32
    buff = self.trans.readAll(4)
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 58, in readAll
    chunk = self.read(sz - have)
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 159, in read
    self.__rbuf = StringIO(self.__trans.read(max(sz, self.__rbuf_size)))
  File "/opt/conda/envs/etl/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 105, in read
    buff = self.handle.recv(sz)
timeout: timed out
```
The error only happens occasionally.
There is a 15-minute timeout on the Hive metastore.
When I run get_partition_names on its own, it returns data within a few seconds.
Even with socket.timeout set to 1 or 2 seconds, the query completes.
There are no socket-closed-connection messages in the Hive metastore logs (cat /var/log/hive/..log.out).
The tables it usually times out on have a large number of partitions (~10K+), but as mentioned, the timeouts are random; when this part of the code is tested in isolation, it returns the partition metadata quickly.
Any idea why it times out randomly, how to capture these timeout errors in the metastore logs, or how to fix them?
The problem was overlapping threads in Luigi.
We were using a singleton as a poor man's connection pool, but Luigi's different worker threads were stepping on each other: when one thread's get_partition_names call collided with another thread's on the same connection, strange behavior resulted.
We fixed it by making sure each thread's connection object gets its own key in the connection pool, instead of all threads sharing a process-id key.
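The fix can be sketched with a pure-stdlib per-thread pool. The class name `PerThreadConnectionPool` and the `factory` callable are illustrative, not our actual code; the essential change is keying by `threading.get_ident()` (in Python 2.7, `threading.current_thread().ident`) rather than by `os.getpid()`:

```python
import threading

class PerThreadConnectionPool(object):
    """Poor man's connection pool that keys each connection by thread id,
    so concurrent Luigi workers never share a thrift client/socket."""

    def __init__(self, factory):
        self._factory = factory          # callable that builds a new connection
        self._lock = threading.Lock()
        self._conns = {}                 # thread id -> connection

    def get(self):
        key = threading.get_ident()      # one key per worker thread, not per process
        with self._lock:
            conn = self._conns.get(key)
            if conn is None:
                conn = self._factory()   # each thread lazily builds its own connection
                self._conns[key] = conn
            return conn
```

With a process-id key, every worker thread got the same thrift client, so two in-flight get_partition_names calls could interleave their frames on one socket; with a thread-id key, each thread reads only its own replies.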