I tried using prepared statements as described in the official Cassandra and Scylla documentation, but inserting 100000 messages still takes around 30 seconds. Is there any way to improve this?
query = "INSERT INTO message (id, message) VALUES (?, ?)"
prepared = session.prepare(query)
for key in range(100000):
    try:
        session.execute_async(prepared, (0, "my example message"))
    except Exception as e:
        print("An error occurred : " + str(e))
Update
I found that batching is strongly recommended for improving performance, so I combined prepared statements with batches as the official documentation suggests. My code now looks like this:
^{pr2}$
Do you know why performance is still so slow? After running this code, the output looks like this:
test 0: 2018-06-19 11:10:13.990691
0
1
...
41
An error occurred : Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out for messages.message - received only 1 responses from 2 CL=QUORUM." info={'write_type': 'BATCH', 'required_responses': 2, 'consistency': 'QUORUM', 'received_responses': 1}
42
...
52 An error occurred : errors={'....0.3': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=.....0.3
53
An error occurred : Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out for messages.message - received only 1 responses from 2 CL=QUORUM." info={'write_type': 'BATCH', 'required_responses': 2, 'consistency': 'QUORUM', 'received_responses': 1}
54
...
59
An error occurred : Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out for messages.message - received only 1 responses from 2 CL=QUORUM." info={'write_type': 'BATCH', 'required_responses': 2, 'consistency': 'QUORUM', 'received_responses': 1}
60
61
62
...
69
70
71
An error occurred : errors={'.....0.2': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=.....0.2
72
An error occurred : errors={'....0.1': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=....0.1
73
74
...
98
99
test 1: 2018-06-19 11:11:03.494957
On my machine, even against a local cluster, I get sub-second execution times for this kind of problem by heavily parallelizing the inserts.
I'm afraid I don't know how to do this in Python, but in Go it could look something like this:
^{pr2}$
Adjust the connection settings as needed.
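The Go code is elided above, but the fan-out pattern it alludes to can be sketched in Python as well: a fixed pool of worker threads draining a shared queue of keys. In this sketch, `insert_one` is a hypothetical stand-in for `session.execute(prepared, (key, message))`; everything else is standard library.

```python
import queue
import threading

def insert_one(key, message):
    """Hypothetical stand-in for session.execute(prepared, (key, message))."""
    return key

def parallel_insert(num_keys, num_workers=16):
    tasks = queue.Queue()
    done = []                      # keys successfully "inserted"
    lock = threading.Lock()

    def worker():
        while True:
            key = tasks.get()
            if key is None:        # sentinel: shut this worker down
                return
            insert_one(key, "my example message")
            with lock:
                done.append(key)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for key in range(num_keys):
        tasks.put(key)
    for _ in threads:              # one sentinel per worker
        tasks.put(None)
    for t in threads:
        t.join()
    return len(done)
```

In a real version each worker would share (or open) a `Session` and execute the prepared statement directly, one row per request. Note that the timeouts in the output above are what one would expect from large multi-partition batches overloading the coordinator; individual parallel inserts avoid that.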