Python: writing scraped data to CSV

Posted 2024-05-31 23:41:17


I wrote a simple script to scrape data from a website, but I'm struggling to save all the rows to a CSV file. The finished script saves only a single row -- the last iteration of the loop.

def get_single_item_data(item_url):
    f = csv.writer(open("scrpe.csv", "wb"))
    f.writerow(["Title", "Company", "Price_netto"])

    source_code = requests.get(item_url)
    soup = BeautifulSoup(source_code.content, "html.parser")

    for item_name in soup.find_all('div', attrs={"id": 'main-container'}):
        title = item_name.find('h1').text
        prodDesc_class = item_name.find('div', class_='productDesc')
        company = prodDesc_class.find('p').text
        company = company.strip()

        price_netto = item_name.find('div', class_="netto").text
        price_netto = price_netto.strip()

        #print title, company, price_netto

        f.writerow([title.encode("utf-8"), company, price_netto])

It's important that the data is saved into separate columns.
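On the column layout: `csv.writer.writerow` already puts each element of the list into its own column, so the `[title, company, price_netto]` call above yields three columns per row. A tiny stdlib-only illustration (the sample values are made up):

```python
import csv
import io

# Write two rows into an in-memory buffer instead of a file.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Title", "Company", "Price_netto"])
writer.writerow(["Item A", "Acme", "9.99"])

# Each list element lands in its own comma-separated column.
print(buf.getvalue())
```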


Tags: csv, data, text, name, div, url, get, title
2 Answers

The problem is that you open the output file inside get_single_item_data, so each call reopens (and truncates) it, and when the function returns and f goes out of scope the file is closed. You want to open the file once and pass it (or the writer) into get_single_item_data so that multiple rows accumulate in the same file.
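A minimal sketch of that fix. The requests/BeautifulSoup parsing is replaced here by a prepared tuple so that only the file-handling pattern is shown; the names mirror the question's script:

```python
import csv

def get_single_item_data(writer, row):
    # In the real script this is where requests + BeautifulSoup would
    # extract title, company and price_netto from item_url; a ready-made
    # row stands in so only the I/O pattern is demonstrated.
    writer.writerow(row)

# Open the output file once, before looping over items, and pass the
# writer in -- every call now appends to the same open file.
with open("scrpe.csv", "w", newline="") as out:  # on Python 2: open("scrpe.csv", "wb")
    writer = csv.writer(out)
    writer.writerow(["Title", "Company", "Price_netto"])
    for row in [("Item A", "Acme", "9.99"), ("Item B", "Beta", "4.50")]:
        get_single_item_data(writer, row)
```

The `with` block also guarantees the file is flushed and closed after the whole crawl, rather than after each item.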

@PadraicCunningham here is my full script:

import requests
from bs4 import BeautifulSoup
import csv

url_klocki = "http://selgros24.pl/Dla-dzieci/Zabawki/Klocki-pc1121.html"
r = requests.get(url_klocki)
soup = BeautifulSoup(r.content, "html.parser")

def main_spider(max_page):
    page = 1
    while page <= max_page:
        url = "http://selgros24.pl/Dla-dzieci/Zabawki/Klocki-pc1121.html"
        source_code = requests.get(url)
        soup = BeautifulSoup(source_code.content, "html.parser")

        for link in soup.find_all('article', class_='small-product'):
            url = "http://www.selgros24.pl"
            a = link.findAll('a')[0].get('href')
            href = url + a
            #print href

            get_single_item_data(href)

        page +=1

def get_single_item_data(item_url):
    f = csv.writer(open("scrpe.csv", "wb"))
    f.writerow(["Title", "Company", "Price_netto"])

    source_code = requests.get(item_url)
    soup = BeautifulSoup(source_code.content, "html.parser")

    for item_name in soup.find_all('div', attrs={"id" :'main-container'}):
        title = item_name.find('h1').text
        prodDesc_class = item_name.find('div', class_='productDesc')
        company = prodDesc_class.find('p').text
        company = company.strip()

        price_netto = item_name.find('div', class_="netto").text
        price_netto = price_netto.strip()


        print title, company, price_netto

        f.writerow([title.encode("utf-8"), company, price_netto])


main_spider(1)
