"Beautifulsoup 4 filtering Python 3 problem"

Posted 2024-03-28 13:54:14


I've been at this for six hours and can't figure it out. I want to use BeautifulSoup to filter data out of a web page, but I can't get .contents or get_text() to work, I don't know where I'm going wrong, and I don't know how to apply a second filter after the first pass. I can find the "fieldset" tags, but I can't narrow things down to the inner tags to pull out the data. Sorry if this is a simple question and I'm just doing it wrong; I only started Python yesterday and started (at least trying) web scraping this morning.

Full code:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from openpyxl import Workbook
import bs4 as bs
import math


book = Workbook()
sheet = book.active

i=0

#Change this value to your starting tracking number
StartingTrackingNumber=231029883

#Change this value to increase or decrease the number of tracking numbers you want to search overall
TrackingNumberCount = 4

#Number of Tracking Numbers Searched at One Time
QtySearch = 4


#TrackingNumbers=["Test","Test 2"]


for i in range(0,TrackingNumberCount):
    g=i+StartingTrackingNumber
    sheet.cell(row=i+1,column=1).value = 'RN' + str(g) + 'CA,'


TrackingNumbers = []
for col in sheet['A']:
     TrackingNumbers.append(col.value)

MaxRow = sheet.max_row
MaxIterations = math.ceil(MaxRow / QtySearch)
#print(MaxIterations)

RowCount = 0
LastTrackingThisPass = QtySearch

for RowCount in range (0,MaxIterations): #range(1,MaxRow):
    FirstTrackingThisPass = (RowCount)*QtySearch
    x = TrackingNumbers[FirstTrackingThisPass:LastTrackingThisPass]
    LastTrackingThisPass+=QtySearch
    driver = webdriver.Safari()
    driver.set_page_load_timeout(20)
    driver.get("https://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?execution=e1s1")

    driver.find_element_by_xpath('//*[contains(@id, "trackNumbers")]').send_keys(x)
    driver.find_element_by_xpath('//*[contains(@id, "submit_button")]').send_keys(chr(13))
    driver.set_page_load_timeout(3000)
    WebDriverWait(driver,30).until(EC.presence_of_element_located((By.ID, "noResults_modal")))
    SourceCodeTest = driver.page_source

#print(SourceCodeTest)

Soup = bs.BeautifulSoup(SourceCodeTest, "lxml")  # or "html.parser"


z = 3

#for z in range (1,5):
#    t = str(z)
#    NameCheck = "trackingNumber" + t
##FindTrackingNumbers = Soup.find_all("div", {"id": "trackingNumber3"})
#    FindTrackingNumbers = Soup.find_all("div", {"id": NameCheck})
#    print(FindTrackingNumbers)

Info = Soup.find_all("fieldset", {"class": "trackhistoryitem"}, "strong")

print(Info.get_text())

Desired output:

RN231029885CA N/A

RN231029884CA N/A

RN231029883CA 2017/04/04

Sample of the HTML I'm trying to parse:

<fieldset class="trackhistoryitem">

                    <p><strong>Tracking No. </strong><br><input type="hidden" name="ID_RN231029885CA" value="false">RN231029885CA
                </p>




                   <p><strong>Date / Time   </strong><br>


                            <!--h:outputText value="N/A" rendered="true"/>
                            <h:outputText value="N/A - N/A" rendered="false"/>

                            <h:outputText value="N/A" rendered="false"/-->N/A
                    </p>



                <p><strong>Description  </strong><br><span id="tapListResultForm:tapResultsItems:1:trk_rl_div_1">

1 Answer

Using .get_text() I got back this long ugly string:

'\nTracking No. RN231029885CA\n                \nDate / Time   \nN/A\n                    \nDescription  '
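As an aside, this is also why `print(Info.get_text())` in the question raises an error: `find_all` returns a `ResultSet` (essentially a list of tags), which has no `.get_text()` of its own, so you have to call it on each element. A minimal sketch with stand-in HTML (the markup here is invented for illustration, reusing the question's class name):

```python
from bs4 import BeautifulSoup

# Stand-in HTML using the same fieldset class as the question (an assumption).
html = ('<fieldset class="trackhistoryitem"><p>A</p></fieldset>'
        '<fieldset class="trackhistoryitem"><p>B</p></fieldset>')
soup = BeautifulSoup(html, "html.parser")

fieldsets = soup.find_all("fieldset", {"class": "trackhistoryitem"})
# fieldsets.get_text()  # AttributeError: ResultSet has no get_text()
texts = [f.get_text() for f in fieldsets]  # call it per element instead
print(texts)  # ['A', 'B']
```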

With some of Python's string functions:

objects = []
for each in soup.find_all("fieldset"): 
    each = each.get_text().split("\n") #split the ugly string up
    each = [each[1][-13:], each[4]] #grab the parts you want, rmv extra words
    objects.append(each)

Note: this assumes all the tracking numbers are 13 characters long; if they aren't, you'll need a regex or some other creative way to extract them.
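For example, a regex sketch along those lines, run against the string shown above (the two-letters/nine-digits/two-letters pattern is an assumption based on the numbers in the question):

```python
import re

# The string get_text() produced for one fieldset (copied from above).
text = ('\nTracking No. RN231029885CA\n                '
        '\nDate / Time   \nN/A\n                    \nDescription  ')

# Assumed format: two letters, nine digits, two letters.
tracking = re.search(r"[A-Z]{2}\d{9}[A-Z]{2}", text).group(0)
# The date (or "N/A") is whatever follows the "Date / Time" label.
date = re.search(r"Date / Time\s+(\S+)", text).group(1)

print(tracking, date)  # RN231029885CA N/A
```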
