I thought I had this working, but then everything fell apart. I'm building a scraper that pulls data from a Chinese website. When I isolate and print the elements, everything looks fine (`print element` and `print text`). But when I add those elements to a dictionary and then print the dictionary (`print holder`), all I get is escape sequences like "\x85\xe6\xb0". Trying to tack `.encode('utf-8')` onto the append step only raises new errors. This may not matter in the end, since it's all just getting dumped into a CSV, but it makes troubleshooting very difficult. What am I doing when I add the elements to the dictionary that breaks the encoding?
Thanks!
from bs4 import BeautifulSoup
import urllib
# csv is for the csv writer
import csv

# intended data structure is a list of dictionaries:
# holder = [{'headline': TheHeadline, 'url': TheURL, 'date1': Date1, 'date2': Date2, 'date3': Date3}, ...]

# initiates the list that holds the output
holder = []

txt_contents = "http://sousuo.gov.cn/s.htm?q=&n=80&p=&t=paper&advance=true&title=&content=&puborg=&pcodeJiguan=%E5%9B%BD%E5%8F%91&pcodeYear=2016&pcodeNum=&childtype=&subchildtype=&filetype=&timetype=timeqb&mintime=&maxtime=&sort=pubtime&nocorrect=&sortType=1"

# opens the output doc
output_txt = open("output.txt", "w")

def headliner(url):
    # opens the url for read access
    this_url = urllib.urlopen(url).read()
    # creates a new BeautifulSoup object based on the URL
    soup = BeautifulSoup(this_url, 'lxml')
    # accumulates the headline text
    headline_text = ''
    # this bundles all of the headlines
    headline = soup.find_all('h3')
    # for each individual headline...
    for element in headline:
        # findAll(text=True) is one way to turn the tag's contents into text
        headline_text += ''.join(element.findAll(text=True)).encode('utf-8').strip()
        print element
        text = element.text.encode('utf-8')
        # prints each headline
        print text
        print "*******"
        # creates the dictionary for just that headline
        temp_dict = {}
        # puts the headline in the dictionary
        temp_dict['headline'] = text
        # appends the temp_dict to the main list
        holder.append(temp_dict)
        output_txt.write(text + "\n")
        #output_txt.write(holder)

headliner(txt_contents)
print holder
output_txt.close()
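The "\x85\xe6\xb0" output can be reproduced without any scraping at all. A minimal sketch (written in Python 3 syntax; the asker's code is Python 2, but byte strings behave the same way there): printing a container displays each element's `repr()`, so UTF-8-encoded bytes show up as `\x` escape sequences even though the underlying data is intact.

```python
# -*- coding: utf-8 -*-
# "国发" is a stand-in for a scraped headline; it is not from the real page.
text = "国发".encode("utf-8")      # the UTF-8 bytes e5 9b bd e5 8f 91

# Inside a container, print() falls back to repr() for each element,
# so the bytes are displayed as escape sequences:
holder = [{"headline": text}]
print(holder)                      # [{'headline': b'\xe5\x9b\xbd\xe5\x8f\x91'}]

# The data is not corrupted; decoding recovers the characters:
print(text.decode("utf-8"))        # 国发
```

So the dictionary is not "breaking" the encoding; it is only changing how the same bytes are displayed.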
Nothing has gone wrong with the encoding. They are just different ways of expressing the same thing: `str()` gives a readable form, `repr()` gives an unambiguous one. The last piece to know is that when you put objects into a container, the container's representation is built from the `repr` of each object inside it. Maybe it's clearer if we define a custom object of our own: