Removing duplicates while parsing

Published 2024-04-26 12:40:16


I wrote a parser in Python that does its job well, except that some results come out duplicated. Also, when I open the csv file, I can see that every result is wrapped in square brackets. Is there a way to remove the duplicates and the brackets on the fly? Here is what I tried:

import csv
import requests
from lxml import html
def parsingdata(mpg):
    outfile = open('RealYP.csv', 'w', newline='')
    writer = csv.writer(outfile)
    writer.writerow(["Name", "Address", "Phone"])
    pg = 1
    while pg <= mpg:
        url = "https://www.yellowpages.com/search?search_terms=Coffee%20Shops&geo_location_terms=Los%20Angeles%2C%20CA&page=" + str(pg)
        page = requests.get(url)
        tree = html.fromstring(page.text)
        titles = tree.xpath('//div[@class="info"]')
        items = []
        for title in titles:
            comb = []
            Name = title.xpath('.//span[@itemprop="name"]/text()')
            Address = title.xpath('.//span[@itemprop="streetAddress" and @class="street-address"]/text()')
            Phone = title.xpath('.//div[@itemprop="telephone" and @class="phones phone primary"]/text()')
            try:
                comb.append(Name[0])
                comb.append(Address[0])
                comb.append(Phone[0])
            except IndexError:  # skip listings missing any of the three fields
                continue
            items.append(comb)

        pg += 1
        for item in items:
            writer.writerow(item)
    outfile.close()
parsingdata(3)

Everything works fine now. EDIT: the correction is taken from bjpreisler's answer.
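The duplicates themselves can also be filtered on the fly: a minimal sketch (the sample rows and the output file name below are made up for illustration) that keeps a set of already-seen tuples and skips any repeat before it is written. Tuples are used instead of lists because only hashable values can go into a set:

```python
import csv

# Hypothetical parsed rows, including one exact duplicate.
rows = [
    ("Blu Jam Cafe", "5520 Sunset Blvd", "(323) 306-4955"),
    ("The Coffee Bean", "6255 W Sunset Blvd", "(323) 466-1871"),
    ("Blu Jam Cafe", "5520 Sunset Blvd", "(323) 306-4955"),  # duplicate
]

seen = set()          # tuples are hashable, so they can be set members
unique_rows = []
for row in rows:
    if row in seen:   # already written once, skip it
        continue
    seen.add(row)
    unique_rows.append(row)

with open('RealYP_dedup.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["Name", "Address", "Phone"])
    writer.writerows(unique_rows)
```

The same `seen` check can be dropped straight into the parsing loop above, right before `items.append(comb)`, with `comb` converted via `tuple(comb)`.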


Tags: csv, text, name, import, title, address, page, phone
3 Answers

A concise version of this scraper I came up with recently:

import csv
import requests
from lxml import html

url = "https://www.yellowpages.com/search?search_terms=Coffee%20Shops&geo_location_terms=Los%20Angeles%2C%20CA&page={0}"

def parsingdata(link):
    outfile = open('YellowPage.csv', 'w', newline='')
    writer = csv.writer(outfile)
    writer.writerow(["Name", "Address", "Phone"])

    # fill the page number into the url template for pages 1-3
    for page_link in [link.format(i) for i in range(1, 4)]:
        page = requests.get(page_link).text
        tree = html.fromstring(page)

        for title in tree.xpath('//div[@class="info"]'):
            # findtext returns the text of the first match (or None),
            # so no list unpacking is needed
            Name = title.findtext('.//span[@itemprop="name"]')
            Address = title.findtext('.//span[@itemprop="streetAddress"]')
            Phone = title.findtext('.//div[@itemprop="telephone"]')
            print([Name, Address, Phone])
            writer.writerow([Name, Address, Phone])

    outfile.close()

parsingdata(url)

You are currently writing lists (the items) to the csv, which is why they appear in brackets. To avoid that, use a for loop like this instead:

for title in titles:
    comb = []
    Name = title.xpath('.//span[@itemprop="name"]/text()')
    Address = title.xpath('.//span[@itemprop="streetAddress" and @class="street-address"]/text()')
    Phone = title.xpath('.//div[@itemprop="telephone" and @class="phones phone primary"]/text()')
    # xpath() returns a list; unpack the first match so each cell is a string
    if Name:
        Name = Name[0]
    if Address:
        Address = Address[0]
    if Phone:
        Phone = Phone[0]
    comb.append(Name)
    comb.append(Address)
    comb.append(Phone)
    print(comb)
    items.append(comb)

pg += 1
for item in items:
    writer.writerow(item)

parsingdata(3)

This should write each item to the csv separately. The values you were appending to comb were themselves lists, so this unpacks them.
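The bracket effect is easy to reproduce in isolation: `csv.writer` stringifies each cell, so a cell that is itself a list is written with its brackets intact. A small demo (the values are made up):

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)

# Cells that are lists: each cell is str()-ified, brackets and all.
writer.writerow([["Blu Jam Cafe"], ["5520 Sunset Blvd"]])

# Cells that are plain strings: clean output.
writer.writerow(["Blu Jam Cafe", "5520 Sunset Blvd"])

print(buf.getvalue())
```

The first row comes out as `['Blu Jam Cafe'],['5520 Sunset Blvd']`, the second as `Blu Jam Cafe,5520 Sunset Blvd` — exactly the difference the unpacking above makes.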

This script removed duplicates for me when I was working with a .csv file. Check whether it works for you :)

with open(file_out, 'w') as f_out, open(file_in, 'r') as f_in:
    # write rows from in-file to out-file until all the data is written
    checkDups = set() # set for removing duplicates
    for line in f_in:
        if line in checkDups: continue # skip duplicate
        checkDups.add(line)
        f_out.write(line)
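Note that this filter only catches byte-identical lines. A self-contained way to try it (the file names and rows here are invented for the demo):

```python
# Illustrative file names, not from the original post.
file_in, file_out = 'demo_in.csv', 'demo_out.csv'

# Build a small input file containing one exact duplicate line.
with open(file_in, 'w') as f:
    f.write("Name,Address,Phone\n")
    f.write("A,1 Main St,555-0001\n")
    f.write("A,1 Main St,555-0001\n")   # duplicate
    f.write("B,2 Oak Ave,555-0002\n")

# Same line-level dedup as the answer above.
with open(file_out, 'w') as f_out, open(file_in) as f_in:
    checkDups = set()                    # lines already written
    for line in f_in:
        if line in checkDups:
            continue                     # skip duplicate
        checkDups.add(line)
        f_out.write(line)
```

If two rows differ even by trailing whitespace or field order they will both survive, so for structured data the tuple-based dedup during parsing is more robust.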
