TypeError when converting a Pandas DataFrame to Spark

Published 2024-06-02 18:26:41


So I looked this question up on here, but the earlier solutions don't work for me. I have a dataframe in this format:

mdf.head()
    dbn       boro       bus
0   17K548  Brooklyn    B41, B43, B44-SBS, B45, B48, B49, B69
1   09X543  Bronx       Bx13, Bx15, Bx17, Bx21, Bx35, Bx4, Bx41, Bx4A,...
4   28Q680  Queens      Q25, Q46, Q65
6   14K474  Brooklyn    B24, B43, B48, B60, Q54, Q59

There are a few more columns, but I've left them out (subway lines and test scores). When I try to convert this dataframe into a Spark dataframe, I get the error below.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-30-1721be5c2987> in <module>()
----> 1 sparkdf = sqlc.createDataFrame(mdf)

/usr/local/Cellar/apache-spark/1.6.2/libexec/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio)
    423             rdd, schema = self._createFromRDD(data, schema, samplingRatio)
    424         else:
--> 425             rdd, schema = self._createFromLocal(data, schema)
    426         jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
    427         jdf = self._ssql_ctx.applySchemaToPythonRDD(jrdd.rdd(), schema.json())

/usr/local/Cellar/apache-spark/1.6.2/libexec/python/pyspark/sql/context.pyc in _createFromLocal(self, data, schema)
    339 
    340         if schema is None or isinstance(schema, (list, tuple)):
--> 341             struct = self._inferSchemaFromList(data)
    342             if isinstance(schema, (list, tuple)):
    343                 for i, name in enumerate(schema):

/usr/local/Cellar/apache-spark/1.6.2/libexec/python/pyspark/sql/context.pyc in _inferSchemaFromList(self, data)
    239             warnings.warn("inferring schema from dict is deprecated,"
    240                           "please use pyspark.sql.Row instead")
--> 241         schema = reduce(_merge_type, map(_infer_schema, data))
    242         if _has_nulltype(schema):
    243             raise ValueError("Some of types cannot be determined after inferring")

/usr/local/Cellar/apache-spark/1.6.2/libexec/python/pyspark/sql/types.pyc in _merge_type(a, b)
    860         nfs = dict((f.name, f.dataType) for f in b.fields)
    861         fields = [StructField(f.name, _merge_type(f.dataType, nfs.get(f.name, NullType())))
--> 862                   for f in a.fields]
    863         names = set([f.name for f in fields])
    864         for n in nfs:

/usr/local/Cellar/apache-spark/1.6.2/libexec/python/pyspark/sql/types.pyc in _merge_type(a, b)
    854     elif type(a) is not type(b):
    855         # TODO: type cast (such as int -> long)
--> 856         raise TypeError("Can not merge type %s and %s" % (type(a), type(b)))
    857 
    858     # same type

TypeError: Can not merge type <class 'pyspark.sql.types.StringType'> and <class 'pyspark.sql.types.DoubleType'>

From what I've read, this could be an issue with the header being treated as data. My understanding is that you can't remove the header from a dataframe, so how do I resolve this error and convert this dataframe to a Spark one?

Edit: Here is the code I used to create the pandas DF, and the way I worked around the problem:

import pandas as pd
from pyspark.sql import SQLContext

sqlc = SQLContext(sc)
df = pd.read_csv('hsdir.csv', encoding='utf_8_sig')
df = df[['dbn', 'boro', 'bus', 'subway', 'total_students']]
df1 = pd.read_csv('sat_r.csv', encoding='utf_8_sig')
df1 = df1.rename(columns={'Num of SAT Test Takers': 'num_test_takers',
                          'SAT Critical Reading Avg. Score': 'read_avg',
                          'SAT Math Avg. Score': 'math_avg',
                          'SAT Writing Avg. Score': 'write_avg'})
mdf = pd.merge(df, df1, left_on='dbn', right_on='DBN', how='left')
mdf = mdf[pd.notnull(mdf['DBN'])]
mdf.to_csv('merged.csv', encoding='utf-8')
ndf = sqlc.read.format("com.databricks.spark.csv") \
    .option("header", "true").option("inferSchema", "true").load("merged.csv")

The last line of this code, which loads the file from my local machine, finally let me convert the CSV into a Spark dataframe correctly, but my question still stands: why didn't it work in the first place?


3 Answers

You can use reflection to infer a schema from an RDD of Row objects, e.g.

from pyspark.sql import Row

# mdf is a pandas DataFrame, so build an RDD of Row objects from its rows first
mdfRows = sc.parallelize(mdf.values.tolist()).map(lambda p: Row(dbn=p[0], boro=p[1], bus=p[2]))
dfOut = sqlContext.createDataFrame(mdfRows)

Does that produce the expected result?

You could also try this:

import numpy as np
import pandas as pd

def create_spark_dataframe(file_name):
    """
    Return a Spark dataframe built from the input CSV file.
    """
    pandas_data_frame = pd.read_csv(file_name, converters={"PRODUCT": str})
    # Fill missing values in non-numeric columns so Spark can infer one type per column
    for col in pandas_data_frame.columns:
        if ((pandas_data_frame[col].dtypes != np.int64) &
                (pandas_data_frame[col].dtypes != np.float64)):
            pandas_data_frame[col] = pandas_data_frame[col].fillna('')

    spark_data_frame = sqlContext.createDataFrame(pandas_data_frame)
    return spark_data_frame

That should solve your problem.
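A quick usage sketch, assuming the merged.csv produced in the question:

sparkdf = create_spark_dataframe('merged.csv')
sparkdf.printSchema()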

I had the same problem and was able to trace it down to a single entry that had a value of 0 (or was empty). The _inferSchema routine runs on every row of the dataframe and determines the types. By default it assumes an empty value is a Double, while the values in other rows are Strings, and these two types cannot be merged by _merge_type. The issue has been filed as https://issues.apache.org/jira/browse/SPARK-18178, but the best workaround is probably to supply a schema to the createDataFrame command.
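For example, here is a minimal sketch of that schema workaround, using the column names from the question (the types are an assumption, and NaN cells are swapped for None first so Spark reads them as nulls):

import pandas as pd
from pyspark.sql.types import StructType, StructField, StringType

# Replace NaN (a float) with None so the string columns hold proper nulls
mdf = mdf.where(pd.notnull(mdf), None)

# An explicit schema skips the per-row inference that raised the TypeError
schema = StructType([
    StructField("dbn", StringType(), True),
    StructField("boro", StringType(), True),
    StructField("bus", StringType(), True),
])
sparkdf = sqlc.createDataFrame(mdf[['dbn', 'boro', 'bus']], schema)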

The following code reproduces the problem in PySpark 2.0:

import pandas as pd
from io import StringIO
test_df = pd.read_csv(StringIO(',Scan Options\n15,SAT2\n16,\n'))
sqlContext.createDataFrame(test_df).registerTempTable('Test')
o_qry = sqlContext.sql("SELECT * FROM Test LIMIT 1")
o_qry.first()
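And a possible fix, sketched under the same setup: replacing the NaN with None before converting lets the empty cell be inferred as a null rather than a Double, so the types merge cleanly:

# NaN infers as Double; None infers as a null type that merges with String
test_df = test_df.where(pd.notnull(test_df), None)
sqlContext.createDataFrame(test_df).first()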
