<p><sub>(<a href="http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-micro-optimization-theater.html" rel="noreferrer">Sometimes</a> our host is wrong; nanoseconds matter ;)</sub></p>
<p>I have a Python Twisted server that talks to some Java servers, and profiling shows it spends roughly 30% of its runtime in the JSON encoder/decoder; its job is handling thousands of messages per second.</p>
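<p>For reference, a breakdown like that 30% figure can be reproduced with the stdlib <code>cProfile</code>; a minimal self-contained sketch (the payload shape here is invented, not my real messages), in Python 3 syntax:</p>

```python
import cProfile, io, json, pstats

# Invented stand-in data; substitute real captured messages for meaningful numbers
data = [{"longs": list(range(500)), "s": "x" * 1000} for _ in range(200)]

prof = cProfile.Profile()
prof.enable()
blobs = [json.dumps(d) for d in data]   # encode
out = [json.loads(b) for b in blobs]    # decode
prof.disable()

# Show the five most expensive call sites by cumulative time
s = io.StringIO()
pstats.Stats(prof, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```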
<p><a href="http://highscalability.com/blog/2012/3/26/7-years-of-youtube-scalability-lessons-in-30-minutes.html" rel="noreferrer">This talk</a> on YouTube makes some interesting points that apply here:</p>
<ul>
<li><p>Serialization formats - no matter which one you use, they are
all expensive. Measure. Don't use pickle. Not a good choice. They found
protocol buffers slow. They wrote their own BSON implementation that is
10-15 times faster than the one you can download.</p></li>
<li><p>You have to measure. Vitess swapped out one of its protocols for an
HTTP implementation. Even though it was in C it was slow. So they ripped
out HTTP and did a direct socket call using Python, and that was 8%
cheaper on global CPU. The enveloping for HTTP is really expensive.</p></li>
<li><p>Measure. In Python, measurement is like reading tea leaves.
There are a lot of things in Python that are counter-intuitive, like
the cost of garbage collection. Most chunks of their apps spend
their time serializing. Profiling serialization depends heavily on
what you are putting in. Serializing ints is very different from
serializing big blobs.</p></li>
</ul>
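<p>The recurring point in that list is to measure with your own payloads; a minimal sketch using the stdlib <code>timeit</code> (the payload shape and iteration count here are arbitrary), in Python 3 syntax:</p>

```python
import json, marshal, timeit

# Hypothetical payload roughly shaped like the messages described below
payload = {"longs": list(range(1000)), "str1": "x" * 2000, "str2": "y" * 2000}

for name, enc in [("json", json.dumps), ("marshal", marshal.dumps)]:
    blob = enc(payload)
    secs = timeit.timeit(lambda: enc(payload), number=200)
    print("%-8s %6d bytes, %.4fs for 200 encodes" % (name, len(blob), secs))
```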
<p>Anyway, I control both the Python and Java ends of my message-passing API and can pick a serialization other than JSON.</p>
<p>My messages look like:</p>
<ul>
<li>a variable number of longs; between 1 and 10K of them</li>
<li>and two already-UTF8 text strings; both between 1 and 3KB</li>
</ul>
<p>Because I'm reading them from a socket, I want libraries that gracefully handle streams - it's irritating if a library doesn't tell me how much of a buffer it consumed, for example.</p>
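<p>What I mean by stream handling: with a length-prefixed frame, the reader always knows exactly how many bytes each message consumed, even when the socket delivers a partial message. A sketch of that framing (the helper names <code>frame</code> and <code>consume</code> are hypothetical, Python 3 syntax):</p>

```python
import struct

LEN = struct.Struct(">I")  # 4-byte big-endian payload length prefix

def frame(payload):
    """Prefix a payload with its length for writing to a socket."""
    return LEN.pack(len(payload)) + payload

def consume(buf):
    """Return (complete_messages, bytes_consumed) for a partially received buffer."""
    msgs, pos = [], 0
    while pos + 4 <= len(buf):
        (n,) = LEN.unpack_from(buf, pos)
        if pos + 4 + n > len(buf):
            break  # incomplete message; wait for more data
        msgs.append(bytes(buf[pos + 4:pos + 4 + n]))
        pos += 4 + n
    return msgs, pos
```

<p>The caller keeps the unconsumed tail (<code>buf[pos:]</code>) and prepends the next socket read to it.</p>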
<p>Of course, the other end of this stream is a Java server; I don't want to pick something that is great for the Python end but just moves the problem to the Java end, e.g. in performance, pain, or a flaky API.</p>
<p>Obviously I'll be doing my own profiling. I ask here in the hope you'll describe approaches I wouldn't think of, e.g. using <a href="http://docs.python.org/library/struct.html" rel="noreferrer"><code>struct</code></a> and what the fastest kinds of strings/buffers are.</p>
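<p>For concreteness, the kind of <code>struct</code> usage I mean (the format strings here are examples only):</p>

```python
import struct

# ">" = big-endian (network order); "i" = 32-bit int, "q" = 64-bit int
packed = struct.pack(">iq", 3, 1 << 40)
assert len(packed) == 4 + 8
assert struct.unpack(">iq", packed) == (3, 1 << 40)

# struct.Struct precompiles the format string, saving repeated parsing
# when the same layout is packed millions of times
hdr = struct.Struct(">iii")
assert hdr.unpack(hdr.pack(1, 2, 3)) == (1, 2, 3)
```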
<p>Some simple test code gives surprising results:</p>
<pre><code>import time, random, struct, json, sys, pickle, cPickle, marshal, array
def encode_json_1(*args):
return json.dumps(args)
def encode_json_2(longs,str1,str2):
return json.dumps({"longs":longs,"str1":str1,"str2":str2})
def encode_pickle(*args):
return pickle.dumps(args)
def encode_cPickle(*args):
return cPickle.dumps(args)
def encode_marshal(*args):
return marshal.dumps(args)
def encode_struct_1(longs,str1,str2):
return struct.pack(">iii%dq"%len(longs),len(longs),len(str1),len(str2),*longs)+str1+str2
def decode_struct_1(s):
i, j, k = struct.unpack(">iii",s[:12])
assert len(s) == 3*4 + 8*i + j + k, (len(s),3*4 + 8*i + j + k)
longs = struct.unpack(">%dq"%i,s[12:12+i*8])
str1 = s[12+i*8:12+i*8+j]
str2 = s[12+i*8+j:]
return (longs,str1,str2)
struct_header_2 = struct.Struct(">iii")
def encode_struct_2(longs,str1,str2):
return "".join((
struct_header_2.pack(len(longs),len(str1),len(str2)),
array.array("L",longs).tostring(),
str1,
str2))
def decode_struct_2(s):
i, j, k = struct_header_2.unpack(s[:12])
assert len(s) == 3*4 + 8*i + j + k, (len(s),3*4 + 8*i + j + k)
longs = array.array("L")
longs.fromstring(s[12:12+i*8])
str1 = s[12+i*8:12+i*8+j]
str2 = s[12+i*8+j:]
return (longs,str1,str2)
def encode_ujson(*args):
return ujson.dumps(args)
def encode_msgpack(*args):
return msgpacker.pack(args)
def decode_msgpack(s):
msgunpacker.feed(s)
return msgunpacker.unpack()
def encode_bson(longs,str1,str2):
return bson.dumps({"longs":longs,"str1":str1,"str2":str2})
def from_dict(d):
return [d["longs"],d["str1"],d["str2"]]
tests = [ #(encode,decode,massage_for_check)
(encode_struct_1,decode_struct_1,None),
(encode_struct_2,decode_struct_2,None),
(encode_json_1,json.loads,None),
(encode_json_2,json.loads,from_dict),
(encode_pickle,pickle.loads,None),
(encode_cPickle,cPickle.loads,None),
(encode_marshal,marshal.loads,None)]
try:
import ujson
tests.<a href="https://www.cnpython.com/list/append" class="inner-link">append</a>((encode_ujson,ujson.loads,None))
except ImportError:
print "no ujson support installed"
try:
import msgpack
msgpacker = msgpack.Packer()
msgunpacker = msgpack.Unpacker()
tests.append((encode_msgpack,decode_msgpack,None))
except ImportError:
print "no msgpack support installed"
try:
import bson
tests.append((encode_bson,bson.loads,from_dict))
except ImportError:
print "no BSON support installed"
longs = [i for i in xrange(10000)]
str1 = "1"*5000
str2 = "2"*5000
random.seed(1)
encode_data = [[
longs[:random.randint(2,len(longs))],
str1[:random.randint(2,len(str1))],
str2[:random.randint(2,len(str2))]] for i in xrange(1000)]
for encoder,decoder,massage_before_check in tests:
# do the encoding
start = time.time()
encoded = [encoder(i,j,k) for i,j,k in encode_data]
encoding = time.time()
print encoder.__name__, "encoding took %0.4f,"%(encoding-start),
sys.stdout.flush()
# do the decoding
decoded = [decoder(e) for e in encoded]
decoding = time.time()
print "decoding %0.4f"%(decoding-encoding)
sys.stdout.flush()
# check it
if massage_before_check:
decoded = [massage_before_check(d) for d in decoded]
for i,((longs_a,str1_a,str2_a),(longs_b,str1_b,str2_b)) in enumerate(zip(encode_data,decoded)):
assert longs_a == list(longs_b), (i,longs_a,longs_b)
assert str1_a == str1_b, (i,str1_a,str1_b)
assert str2_a == str2_b, (i,str2_a,str2_b)
</code></pre>
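<p>One further idea along the lines of <code>encode_struct_2</code>: avoid the copies made by string slicing on decode. A sketch in Python 3 syntax (the function names are mine, not from the code above) that keeps a big-endian wire format, uses <code>memoryview</code> for zero-copy slicing, and uses <code>array.byteswap</code> to handle endianness portably:</p>

```python
import struct, array, sys

header = struct.Struct(">iii")  # counts: n longs, len(str1), len(str2)

def encode(longs, str1, str2):
    a = array.array("q", longs)         # "q" = signed 64-bit, native order
    if sys.byteorder == "little":
        a.byteswap()                    # store big-endian on the wire
    return header.pack(len(longs), len(str1), len(str2)) + a.tobytes() + str1 + str2

def decode(s):
    view = memoryview(s)                # slicing a memoryview copies nothing
    i, j, k = header.unpack(view[:12])
    longs = array.array("q")
    longs.frombytes(view[12:12 + 8 * i])
    if sys.byteorder == "little":
        longs.byteswap()                # wire is big-endian; convert to native
    str1 = bytes(view[12 + 8 * i:12 + 8 * i + j])
    str2 = bytes(view[12 + 8 * i + j:12 + 8 * i + j + k])
    return list(longs), str1, str2
```

<p>Whether the <code>memoryview</code> actually wins depends on message size; for small messages the bookkeeping can cost more than the copies it avoids, so this too needs measuring.</p>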
<p>which gives:</p>
<pre><code>encode_struct_1 encoding took 0.4486, decoding 0.3313
encode_struct_2 encoding took 0.3202, decoding 0.1082
encode_json_1 encoding took 0.6333, decoding 0.6718
encode_json_2 encoding took 0.5740, decoding 0.8362
encode_pickle encoding took 8.1587, decoding 9.5980
encode_cPickle encoding took 1.1246, decoding 1.4436
encode_marshal encoding took 0.1144, decoding 0.3541
encode_ujson encoding took 0.2768, decoding 0.4773
encode_msgpack encoding took 0.1386, decoding 0.2374
encode_bson encoding took 55.5861, decoding 29.3953
</code></pre>
<p><a href="http://pypi.python.org/pypi/bson/0.3.3" rel="noreferrer">bson</a>, <a href="http://msgpack.org/" rel="noreferrer">msgpack</a>, and <a href="http://pypi.python.org/pypi/ujson/" rel="noreferrer">ujson</a> were all installed via easy_install.</p>
<p>I'd love to be shown that I'm doing it wrong; that I should be using cStringIO interfaces, or however else you'd speed this all up!</p>
<p>Surely there must be a way to serialize this data that is an order of magnitude faster?</p>