How do I scrape a website with Selenium?

Posted 2024-03-29 09:34:09


I want to scrape a website with Selenium; there are 10 pages in total. My code is below, but why do I only get the results of the first page:

# -*- coding: utf-8 -*-
from selenium import webdriver
from scrapy.selector import Selector


MAX_PAGE_NUM = 10
MAX_PAGE_DIG = 3

driver = webdriver.Chrome(r'C:\Users\zhang\Downloads\chromedriver_win32\chromedriver.exe')
with open('results.csv', 'w') as f:
    f.write("Buyer, Price \n")

for i in range(1, MAX_PAGE_NUM + 1):
    page_num = (MAX_PAGE_DIG - len(str(i))) * "0" + str(i)
    url = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page=" + page_num

    driver.get(url)
    # Build a Scrapy Selector from the page source so the XPath calls below work
    sel = Selector(text=driver.page_source)

    names = sel.xpath('//*[@class="fontsubsection nomarginpadding lmargin opensans"]/text()').extract()
    Countries = sel.xpath('//td[text()="Country:"]/following-sibling::td/text()').extract()
    websites = sel.xpath('//td[text()="Website:"]/following-sibling::td/a/@href').extract()

driver.close()
print(len(names), len(Countries), len(websites))

Tags: text, from, import, len, driver, page, extract, xpath
2 Answers

My guess is that it's related to the odd thing you're doing in the page-number assignment. To debug, try printing the current URL after calling driver.get(url):

print(driver.current_url)

If it returns the URL you expect, then the problem is most likely in your XPath.
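As an aside, the zero-padding in the question can be written more simply with str.zfill; a minimal sketch (the URL is the one from the question):

base = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page="
for i in range(1, 11):
    # str.zfill pads with leading zeros: 1 -> "001", 10 -> "010",
    # the same result as the manual (MAX_PAGE_DIG - len(str(i))) * "0" trick
    page_num = str(i).zfill(3)
    print(base + page_num)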

Here I first use find_elements_by_xpath to get the names, countries, and websites on each page and store them in lists of WebElements. The text is then extracted from each element in those lists and the values are appended to new lists.

from selenium import webdriver

MAX_PAGE_NUM = 10

driver = webdriver.Chrome('C:\\Users...\\chromedriver.exe')

names_list = list()
Countries_list = list()
websites_list = list()

# The for loop is to get each of the 10 pages
for i in range(1, MAX_PAGE_NUM + 1):
    page_num = str(i)
    url = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page=" + page_num

    driver.get(url)

    # Use "driver.find_elements" instead of "driver.find_element" to get all of them. You get a list of WebElements of each page
    names = driver.find_elements_by_xpath("//*[@class='fontsubsection nomarginpadding lmargin opensans']")

    # To get the value of each WebElement in the list. You have to iterate on the list 
    for i in range(0, len(names)):
    # Now you add each value into a new list 
        names_list.append(names[i].text)


    Countries = driver.find_elements_by_xpath("//td[text()='Country:']/following-sibling::td")
    for country in Countries:
        Countries_list.append(country.text)

    websites = driver.find_elements_by_xpath("//td[text()='Website:']/following-sibling::td")
    for website in websites:
        websites_list.append(website.text)

print(names_list)
print(Countries_list)               
print(websites_list)

driver.close()

I hope this works for you.
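If you want the results in a CSV file like the original code started to do, you can zip the three lists together after scraping; a minimal sketch using the standard csv module (it assumes the three lists have equal lengths and correspond row for row, which this page may or may not guarantee):

import csv

# Assumption: names_list, Countries_list and websites_list line up row for row
with open('results.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Country', 'Website'])
    for row in zip(names_list, Countries_list, websites_list):
        writer.writerow(row)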

Option: get all of the data contained in each section's <div class="border fontcontentdet">.

from selenium import webdriver

MAX_PAGE_NUM = 10

driver = webdriver.Chrome('C:\\Users\\LVARGAS\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts\\chromedriver.exe')

data_list = list()

# The for loop is to get each of the 10 pages
for i in range(1, MAX_PAGE_NUM + 1):
    page_num = str(i)
    url = "https://www.oilandgasnewsworldwide.com/Directory1/DREQ/Drilling_Equipment_Suppliers_?page=" + page_num
    driver.get(url)

    rows = driver.find_elements_by_xpath("//*[@class='border fontcontentdet']")

    for row in rows:
        print(row.text)
        data_list.append(row.text)
        print(' -')

driver.close()
print(data_list)
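Each entry in data_list is the raw text of one supplier block. If the lines inside a block follow a "Label: value" pattern (an assumption about this page's layout, not something verified here), the blocks can be split into dictionaries like this:

# Hypothetical post-processing; assumes each useful line inside a block
# looks like "Country: Germany", which is an assumption about the layout
records = []
for block in data_list:
    fields = {}
    for line in block.splitlines():
        if ': ' in line:
            key, value = line.split(': ', 1)
            fields[key.strip()] = value.strip()
    records.append(fields)

print(records)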
