Pandas df.to_csv("file.csv", encoding="utf-8") still gives garbled characters for the minus sign
I have seen some discussion about Python 2's limitations when using Pandas' to_csv(...). Have I run into that problem? I'm running Python 2.7.3.
When characters such as ≥ and − appear in a string, the output turns into garbled characters. Everything else exports fine.
df.to_csv("file.csv", encoding="utf-8")
Is there some workaround?
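(For illustration only, not from the original post: a minimal sketch of the kind of garbling described above, where the UTF-8 bytes for ≥ get re-decoded as CP1252.)

# Hypothetical illustration: UTF-8 bytes misread as CP1252
text = u"Adults \u226549 yrs"          # contains the ≥ character
utf8_bytes = text.encode("utf-8")
print(utf8_bytes.decode("cp1252"))     # prints something like: Adults â‰¥49 yrs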
The output of df.head() looks like this:
demography  Adults ≥49 yrs  Adults 18−49 yrs at high risk||  \
state
Alabama               32.7                             38.6
Alaska                31.2                             33.2
Arizona               22.9                             38.8
Arkansas              31.2                             34.0
California            29.8                             38.8
and the generated csv file looks like this:
state, Adults ≥49 yrs, Adults 18−49 yrs at high risk||
0, Alabama, 32.7, 38.6
1, Alaska, 31.2, 33.2
2, Arizona, 22.9, 38.8
3, Arkansas,31.2, 34
4, California,29.8, 38.8
The full code is:
import pandas
import xlrd
import csv
import json
df = pandas.DataFrame()
dy = pandas.DataFrame()
# first merge all this xls together
workbook = xlrd.open_workbook('csv_merger/vaccoverage.xls')
worksheets = workbook.sheet_names()
for i in range(3,len(worksheets)):
    dy = pandas.io.excel.read_excel(workbook, i, engine='xlrd', index=None)
    i = i+1
    df = df.append(dy)
df.index.name = "index"
df.columns = ['demography', 'area','state', 'month', 'rate', 'moe']
#Then just grab month = 'May'
may_mask = df['month'] == "May"
may_df = (df[may_mask])
#then delete some columns we dont need
may_df = may_df.drop('area', 1)
may_df = may_df.drop('month', 1)
may_df = may_df.drop('moe', 1)
print may_df.dtypes #uh oh, it sees 'rate' as type 'object', not 'float'. Better change that.
may_df = may_df.convert_objects('rate', convert_numeric=True)
print may_df.dtypes #that's better
res = may_df.pivot_table('rate', 'state', 'demography')
print res.head()
#and this is going to spit out an array of Objects, each Object a state containing its demographics
res.reset_index().to_json("thejson.json", orient='records')
#and a .csv for good measure
res.reset_index().to_csv("thecsv.csv", orient='records', encoding="utf-8")
2 Answers
Score: 1
I tried encoding='utf-8-sig', but it did not quite work for me: Excel then read the special characters correctly, but the tab separators were gone! encoding='utf-16', however, works fine: the special characters come through and the tabs are kept. That was the solution for me.
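A minimal sketch of that suggestion (the DataFrame and the filename here are made up for illustration): write a tab-separated file as UTF-16 so Excel opens it with both the special characters and the tab delimiters intact.

import pandas as pd

# Hypothetical data containing the problematic characters
df = pd.DataFrame({u"demography": [u"Adults \u226549 yrs", u"Adults 18\u221249 yrs"],
                   u"rate": [32.7, 38.6]})

# sep='\t' keeps Excel-friendly tab delimiters; encoding='utf-16' writes a BOM
# that lets Excel pick the right encoding.
df.to_csv("utf16_example.csv", sep="\t", encoding="utf-16", index=False)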
Score: 365
The "trash" you are seeing is UTF-8 output being displayed as if it were CP1252.
On Windows, many editors assume the default ANSI encoding (CP1252 on US Windows) instead of UTF-8 when there is no byte order mark (BOM) at the start of the file. A BOM is meaningless for UTF-8, but its presence acts as a signature that some programs use to detect the encoding; Microsoft Excel, for example, needs it even on non-Windows platforms. Try:
df.to_csv('file.csv',encoding='utf-8-sig')
That encoder will add the BOM.
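As a quick check (the DataFrame and the filename below are hypothetical), you can verify that utf-8-sig really prepends the three BOM bytes that Excel looks for:

import pandas as pd

df = pd.DataFrame({u"demography": [u"Adults \u226549 yrs"], u"rate": [32.7]})
df.to_csv("bom_example.csv", encoding="utf-8-sig", index=False)

# The file should now start with the UTF-8 BOM: 0xEF 0xBB 0xBF
with open("bom_example.csv", "rb") as f:
    print(f.read(3) == b"\xef\xbb\xbf")   # True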