I am running PySpark on AWS EMR (4 r5.xlarge instances as 4 workers, each worker with one executor and 4 cores), and I get AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'. Below is the code snippet that raises the error:
import sqlite3
import numpy as np
import pandas as pd
from pyspark.sql.functions import col, udf
from uszipcode import SearchEngine

search = SearchEngine(db_file_dir="/tmp/db")
conn = sqlite3.connect("/tmp/db/simple_db.sqlite")
pdf_ = pd.read_sql_query('''select zipcode, lat, lng,
                            bounds_west, bounds_east, bounds_north, bounds_south from
                            simple_zipcode''', conn)
# broadcast the zipcode lookup table to all executors
brd_pdf = spark.sparkContext.broadcast(pdf_)
conn.close()

@udf('string')
def get_zip_b(lat, lng):
    pdf = brd_pdf.value
    out = pdf[(np.array(pdf["bounds_north"]) >= lat) &
              (np.array(pdf["bounds_south"]) <= lat) &
              (np.array(pdf['bounds_west']) <= lng) &
              (np.array(pdf['bounds_east']) >= lng)]
    if len(out):
        min_index = np.argmin((np.array(out["lat"]) - lat)**2 + (np.array(out["lng"]) - lng)**2)
        zip_ = str(out["zipcode"].iloc[min_index])
    else:
        zip_ = 'bad'
    return zip_

df = df.withColumn('zipcode', get_zip_b(col("latitude"), col("longitude")))
Below is the traceback; line 102 inside get_zip_b refers to pdf = brd_pdf.value:
21/08/02 06:18:19 WARN TaskSetManager: Lost task 12.0 in stage 7.0 (TID 1814, ip-10-22-17-94.pclc0.merkle.local, executor 6): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 605, in main
process()
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 597, in process
serializer.dump_stream(out_iter, outfile)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 223, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 141, in dump_stream
for obj in iterator:
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 212, in _batched
for item in iterator:
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in mapper
result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in <genexpr>
result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 90, in <lambda>
return lambda *a: f(*a)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/util.py", line 121, in wrapper
return f(*args, **kwargs)
File "/mnt/var/lib/hadoop/steps/s-1IBFS0SYWA19Z/Mobile_ID_process_center.py", line 102, in get_zip_b
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 146, in value
self._value = self.load_from_path(self._path)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 123, in load_from_path
return self.load(f)
File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 129, in load
return pickle.load(file)
AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/mnt/miniconda/lib/python3.9/site-packages/pandas/core/internals/blocks.py'>
Some observations and my thought process:
1. From searching online, an AttributeError in PySpark usually points to a version mismatch between the driver and the workers (a quick way to check this is sketched after these notes).
2. But I ran the same code on two different datasets: one finished without any error while the other raised this error. That seems strange and makes me unsure the error is really caused by mismatched versions; if the versions were mismatched, neither dataset should have succeeded.
3. I then re-ran the same code on the dataset that had succeeded, but with a different Spark configuration (spark.driver.memory raised from 2048M to 4192M), and it threw the AttributeError.
In summary, I think the AttributeError is somehow related to the driver, but I cannot tell from the error message how they are connected or how to fix it: AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'
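To check the version-mismatch hypothesis directly, you can compare the pandas version seen by the driver with the one seen inside a task on the executors. A minimal sketch, assuming an active SparkSession named spark (the helper name is only for illustration):

import pandas as pd

# pandas version seen by the driver
print("driver pandas:", pd.__version__)

# pandas version seen by the executors: run a trivial task on each partition
def executor_pandas_version(_):
    import pandas as pd
    return pd.__version__

print("executor pandas:",
      set(spark.sparkContext.parallelize(range(8), 8)
              .map(executor_pandas_version)
              .collect()))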
I got the same error using pandas 1.3.2 on the server while pandas 1.2 on my client. Downgrading pandas to 1.2 solved the problem.
Solution

Short version
The pandas version used to dump the pickle (dump_version, probably 1.3.x) is not compatible with the pandas version used to load the pickle (load_version, probably 1.2.x). To solve this, either upgrade pandas in the loading environment (load_version) to 1.3.x and load the pickle again, or downgrade pandas on the dumping side (dump_version) to 1.2.x and re-dump a new pickle; the new pickle can then be loaded with pandas 1.2.x.
This has nothing to do with PySpark itself.
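In the PySpark job above, the dump side is the driver (it pickles the broadcast DataFrame) and the load side is the executors (they unpickle it inside the UDF), so the practical fix is to install one and the same pandas version on the master node and on every core/task node before re-running the job. A rough sketch, assuming you standardize on 1.3.4 and can run pip on each node (for example via an EMR bootstrap action; that mechanism is an assumption, not something stated in the post):

# run on the master node and on every worker node, e.g. from a bootstrap action:
#   sudo python3 -m pip install "pandas==1.3.4"
# then verify on both sides before re-submitting the job:
import pandas as pd
print(pd.__version__)   # should print 1.3.4 everywhere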
Long version
This issue comes from a backward incompatibility between pandas versions 1.2.x and 1.3.x. In version 1.2.5 and earlier, pandas used the name new_blocks in the module pandas.core.internals.blocks (cf. source code v1.2.5). On July 2, 2021, pandas released version 1.3.0. In that update, pandas changed this API: the name new_blocks in pandas.core.internals.blocks was renamed to new_block (cf. source code v1.3.0). This API change leads to two incompatibility errors; the one that matches your case is:
AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '.../site-packages/pandas/core/internals/blocks.py'>
Python throws this error to complain that it cannot find the attribute new_block on the pandas.core.internals.blocks module currently installed, because in order to load a pickled object it must use exactly the same class that was used to dump it. This is exactly your situation: the pickle was dumped with pandas v1.3.x and you are trying to load it with pandas v1.2.x.
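You can see this dependency in the pickle stream itself: pickling a DataFrame records a global reference to that helper in pandas.core.internals.blocks, and unpickling has to look the same name up again. A small sketch (the exact opcodes shown depend on the pandas version doing the dumping):

import pickle
import pickletools
import pandas as pd

blob = pickle.dumps(pd.DataFrame({"a": [1, 2, 3]}))

# With pandas 1.3.x the disassembly contains a global reference to
# 'pandas.core.internals.blocks new_block'; an interpreter running
# pandas 1.2.x has no such attribute, hence the AttributeError on load.
pickletools.dis(blob)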
To reproduce the error:
pip install --upgrade pandas==1.3.4
pip install --upgrade pandas==1.2.5
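The two pip commands switch between the incompatible versions; the steps in between are dumping a DataFrame under 1.3.x and then loading it under 1.2.x. A sketch of the full round trip (the file name and DataFrame contents are made up for illustration):

# step 1: with pandas 1.3.4 installed
import pickle
import pandas as pd

with open("/tmp/df_dumped_with_13x.pkl", "wb") as f:
    pickle.dump(pd.DataFrame({"zipcode": ["10001"], "lat": [40.75], "lng": [-73.99]}), f)

# step 2: downgrade to pandas 1.2.5, restart Python, then
import pickle

with open("/tmp/df_dumped_with_13x.pkl", "rb") as f:
    df = pickle.load(f)   # AttributeError: Can't get attribute 'new_block' ...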