Spark AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>

Posted 2024-05-14 17:18:50


I'm running pyspark on AWS EMR (4 r5.xlarge instances as 4 workers, each with one executor and 4 cores), and I get AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>. Below is the code snippet that raises the error:

import sqlite3

import numpy as np
import pandas as pd
from pyspark.sql.functions import col, udf
from uszipcode import SearchEngine

search = SearchEngine(db_file_dir="/tmp/db")
conn = sqlite3.connect("/tmp/db/simple_db.sqlite")
pdf_ = pd.read_sql_query('''select zipcode, lat, lng,
                        bounds_west, bounds_east, bounds_north, bounds_south from
                        simple_zipcode''', conn)
brd_pdf = spark.sparkContext.broadcast(pdf_)
conn.close()


@udf('string')
def get_zip_b(lat, lng):
    # runs on the executors: look up the broadcast DataFrame and keep
    # the rows whose bounding box contains (lat, lng)
    pdf = brd_pdf.value
    out = pdf[(np.array(pdf["bounds_north"]) >= lat) &
              (np.array(pdf["bounds_south"]) <= lat) &
              (np.array(pdf['bounds_west']) <= lng) &
              (np.array(pdf['bounds_east']) >= lng)]
    if len(out):
        # among the matches, return the zipcode whose centroid is closest
        min_index = np.argmin((np.array(out["lat"]) - lat)**2 +
                              (np.array(out["lng"]) - lng)**2)
        zip_ = str(out["zipcode"].iloc[min_index])
    else:
        zip_ = 'bad'
    return zip_

df = df.withColumn('zipcode', get_zip_b(col("latitude"), col("longitude")))

Below is the traceback, where line 102 in get_zip_b corresponds to pdf = brd_pdf.value:

21/08/02 06:18:19 WARN TaskSetManager: Lost task 12.0 in stage 7.0 (TID 1814, ip-10-22-17-94.pclc0.merkle.local, executor 6): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 605, in main
    process()
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 597, in process
    serializer.dump_stream(out_iter, outfile)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 223, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 141, in dump_stream
    for obj in iterator:
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 212, in _batched
    for item in iterator:
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in mapper
    result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in <genexpr>
    result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 90, in <lambda>
    return lambda *a: f(*a)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/util.py", line 121, in wrapper
    return f(*args, **kwargs)
  File "/mnt/var/lib/hadoop/steps/s-1IBFS0SYWA19Z/Mobile_ID_process_center.py", line 102, in get_zip_b
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 146, in value
    self._value = self.load_from_path(self._path)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 123, in load_from_path
    return self.load(f)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 129, in load
    return pickle.load(file)
AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/mnt/miniconda/lib/python3.9/site-packages/pandas/core/internals/blocks.py'>

Some observations and my thought process:

1. After searching online, it seems an AttributeError in pyspark is usually caused by a version mismatch between the driver and the workers.

2. But I ran the same code on two different datasets, and one finished without any error while the other hit this one. That seems odd and makes me doubt the version-mismatch explanation: if the versions really were mismatched, neither dataset should have succeeded.

3. I then ran the same code again on the dataset that had succeeded, but with a different Spark configuration, raising spark.driver.memory from 2048M to 4192M, and it threw the AttributeError.

In summary, I suspect the AttributeError is related to the driver, but I can't tell from the error message how the two are connected or how to fix it: AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>.
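
One way to check whether the driver and the executors actually disagree on the pandas version is a small probe job like the one below (a sketch, assuming an active SparkSession named spark, as in the snippet above):

import pandas as pd

# pandas version seen by the driver process
print("driver pandas:", pd.__version__)

def worker_pandas_version(_):
    # runs inside an executor; reports the pandas it imports there
    import pandas
    return pandas.__version__

# spread a few tasks across the executors and collect distinct versions
versions = (spark.sparkContext
            .parallelize(range(16), 16)
            .map(worker_pandas_version)
            .distinct()
            .collect())
print("executor pandas:", versions)

If the two sides report different versions (for example 1.3.x on the driver and 1.2.x on the executors), the broadcast DataFrame is pickled with one pandas and unpickled with another, which is consistent with the traceback above.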


2 Answers

I got the same error with pandas 1.3.2 on the server and pandas 1.2 on the client. Downgrading pandas to 1.2 fixed the problem.

Solution

  • Keep the pickle file as it is, upgrade pandas to 1.3.x in the loading environment, then load the pickle.

  • Keep your current pandas version, downgrade pandas on the dumping side to 1.2.x, and dump a new pickle with v1.2.x. Then load the new pickle with your pandas v1.2.x.

In summary

The pandas version used to dump the pickle (dump_version, probably 1.3.x) is incompatible with the pandas version used to load it (load_version, probably 1.2.x). To fix this, either upgrade pandas in the loading environment (load_version) to 1.3.x and then load the pickle, or downgrade pandas in the dumping environment (dump_version) to 1.2.x and dump a new pickle, which your pandas 1.2.x can then load.

This has nothing to do with PySpark.
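
Before the long version: a small defensive habit (my own addition, not part of this answer) is to record the dumping pandas version next to the pickle and compare it at load time, so mismatches surface as a clear warning instead of an opaque AttributeError:

import json
import pickle

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})  # stand-in for the real data

# dump side: write the pickle plus a small sidecar with the pandas version
with open("df.pickle", "wb") as f:
    pickle.dump(df, f)
with open("df.pickle.meta", "w") as f:
    json.dump({"pandas": pd.__version__}, f)

# load side: warn when the local pandas differs from the dump version
with open("df.pickle.meta") as f:
    dumped = json.load(f)["pandas"]
if dumped.split(".")[:2] != pd.__version__.split(".")[:2]:
    print(f"warning: pickle dumped with pandas {dumped}, "
          f"loading with {pd.__version__}")
with open("df.pickle", "rb") as f:
    df = pickle.load(f)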

Long answer

This issue comes from a backwards incompatibility between pandas versions 1.2.x and 1.3.x. In version 1.2.5 and earlier, pandas used the name new_blocks in the module pandas.core.internals.blocks (cf. source code v1.2.5). On July 2, 2021, pandas released version 1.3.0, which changed this internal API: the name new_blocks in pandas.core.internals.blocks became new_block (cf. source code v1.3.0).

This API change causes incompatibility errors when pickles cross the 1.2.x/1.3.x boundary. In particular:

  • If you dumped a pickle with pandas v1.3.x and try to load it with pandas v1.2.x, you will get the following error:

AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '.../site-packages/pandas/core/internals/blocks.py'>

Python throws this error complaining that it cannot find the attribute new_block on the current pandas.core.internals.blocks module, because in order for pickle to reconstruct an object, it must import exactly the same classes and functions that were referenced when the pickle was dumped.

This is exactly your situation: the pickle was dumped with pandas v1.3.x and is being loaded with pandas v1.2.x.
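
You can see that dependency inside the pickle itself: the byte stream stores the qualified name of every module-level callable it needs at load time. A quick way to inspect it (a sketch, using the dump file produced in the reproduction below):

import pickletools

# Disassemble the pickle opcodes. For a DataFrame dumped with pandas
# 1.3.x, the output should include a global reference resolving to
# 'pandas.core.internals.blocks' / 'new_block', which pandas 1.2.x
# cannot supply.
with open("dump_from_v1.3.4.pickle", "rb") as f:
    pickletools.dis(f.read())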

Reproducing the error

pip install --upgrade pandas==1.3.4

import pickle

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(3, 6))

with open("dump_from_v1.3.4.pickle", "wb") as f:
    pickle.dump(df, f)

quit()

pip install --upgrade pandas==1.2.5

import pickle

with open("dump_from_v1.3.4.pickle", "rb") as f: 
    df = pickle.load(f) 


---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-ff5c218eca92> in <module>
      1 with open("dump_from_v1.3.4.pickle", "rb") as f:
----> 2     df = pickle.load(f)
      3 

AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/blocks.py'>
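
Back in the Spark job from the question, if aligning pandas across driver and executors is inconvenient, one workaround (a sketch of mine, not part of this answer) is to broadcast plain NumPy arrays instead of a DataFrame, so the broadcast pickle never references pandas internals:

import numpy as np
from pyspark.sql.functions import udf

# broadcast a dict of plain numpy arrays built from the question's pdf_;
# pickling these does not touch pandas.core.internals.blocks at all
cols = ["zipcode", "lat", "lng",
        "bounds_west", "bounds_east", "bounds_north", "bounds_south"]
brd_arrays = spark.sparkContext.broadcast(
    {c: pdf_[c].to_numpy() for c in cols})

@udf('string')
def get_zip_b(lat, lng):
    d = brd_arrays.value
    mask = ((d["bounds_north"] >= lat) & (d["bounds_south"] <= lat) &
            (d["bounds_west"] <= lng) & (d["bounds_east"] >= lng))
    idx = np.where(mask)[0]
    if idx.size:
        best = idx[np.argmin((d["lat"][idx] - lat) ** 2 +
                             (d["lng"][idx] - lng) ** 2)]
        return str(d["zipcode"][best])
    return 'bad'

The trade-off is that the zipcode table becomes a plain dict of arrays, so pandas conveniences inside the UDF are gone, but the version coupling between the dumping and loading environments disappears.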
