Python web scraper sometimes returns half the source code, sometimes all of it... from the same website

Posted 2024-04-24 12:13:24


I have a spreadsheet of patent numbers, and I'm pulling extra data on them from Google Patents, the USPTO website, and a few others. Most of it is up and running, but one thing has had me stuck all day. When I fetch the source code from the USPTO site, it sometimes gives me the whole thing and works beautifully, but other times it only gives me roughly the second half (and what I'm looking for is in the first half). Any ideas?
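Until the root cause turns up, one defensive pattern is simply to re-request the page whenever the response looks incomplete. A minimal sketch (the marker string and retry count are assumptions; pick a string you know appears in the half of the page that goes missing):

```python
def fetch_complete(fetch, url, marker, retries=3):
    """Call fetch(url) up to `retries` times and return the first body
    that contains `marker`, a string expected in the part of the page
    that sometimes goes missing. Falls back to the last attempt."""
    text = ""
    for _ in range(retries):
        text = fetch(url)
        if marker in text:
            break
    return text

# Usage with requests (timeout chosen arbitrarily):
# source = fetch_complete(lambda u: requests.get(u, timeout=30).text,
#                         ptourl, "Primary Examiner")
```

Passing the fetch function in as a parameter keeps the helper easy to test and lets you swap `requests` for `urllib` without touching the retry logic.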

I've searched around quite a bit but haven't seen anyone with this problem. Here's the relevant piece of code (it has some redundancy since I've been trying things for a while, but I'm sure that's the least of its problems):

from bs4 import BeautifulSoup
import html5lib
import re
import csv
import urllib.request
import requests

# Base URLs for Google Patents and the USPTO full-text search
gpatbase = "https://www.google.com/patents/US"
ptobase = "http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=/netahtml/PTO/search-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN/"

# Bring in the patent numbers; read them all up front so the file is closed
# before the scraping loop starts (note: the original opened a csv.writer on
# this same read-mode handle, which would fail on write -- send the new data
# to a separate output file instead)
with open(r'C:\Users\Filepathblahblahblah\Patent Data\scrapeThese.csv', newline='') as csvfile:
    patreader = list(csv.reader(csvfile))

for row in patreader:
    patnum = row[0]
    #print(row)

    print(patnum)
    # Take each patent and append it to the base URL to get the actual one
    gpaturl = gpatbase + patnum
    ptourl = ptobase + patnum


    gpatreq = requests.get(gpaturl)
    gpatsource = gpatreq.text
    soup = BeautifulSoup(gpatsource, "html5lib")

    # Find the number of academic citations on that patent

    # From the Google Patents page, find the link labeled USPTO and extract the URL
    uspto_link = None
    for tag in soup.find_all("a"):
        if tag.next_element == "USPTO":
            uspto_link = tag.get('href')
            break

    #uspto_link = ptourl
    requested = urllib.request.urlopen(uspto_link)
    source = requested.read()

    pto_soup = BeautifulSoup(source, "html5lib")

    print(uspto_link)
    # From the USPTO page, find the examiner's name and save it
    # (set the default once, outside the loop -- the original reset prim to
    # "Not found" on every non-matching <i>, wiping out earlier matches)
    prim = "Not found"
    for italics in pto_soup.find_all("i"):
        if italics.next_element == "Primary Examiner:":
            prim = italics.next_element
            break

    if prim != "Not found":
        examiner = prim.next_element
    else:
        examiner = "Not found"

    print(examiner)
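As an aside, walking `next_element` chains is fragile when the markup varies from page to page; a regex over the raw source is sometimes sturdier. A sketch (the tag pattern is an assumption about how the USPTO page marks up the examiner line):

```python
import re

def find_examiner(html_text):
    # Look for "Primary Examiner:" inside an <i> tag and capture the
    # text that follows the closing tag, up to the next tag
    m = re.search(r"Primary Examiner:\s*</i>\s*([^<]+)", html_text)
    return m.group(1).strip() if m else "Not found"
```

If the pattern doesn't match, the function returns "Not found" rather than raising, matching the convention used above.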

So far it's been roughly fifty-fifty on whether I get the examiner's name or "Not found", and I can't see anything the two groups have in common, so I'm out of ideas. Any suggestions?
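One way to narrow down where the truncation happens is to log, for every fetch, what the server claimed to send versus what actually arrived. A small helper along those lines (the field names are my own):

```python
def response_summary(status, headers, body):
    """Compare the server-advertised Content-Length against the number
    of bytes actually received; a mismatch points at a transfer-level
    truncation rather than a parsing problem."""
    advertised = headers.get("Content-Length")
    received = len(body)
    truncated = advertised is not None and int(advertised) != received
    return {"status": status, "advertised": advertised,
            "received": received, "truncated": truncated}

# e.g. with requests:  r = requests.get(url)
# print(response_summary(r.status_code, r.headers, r.content))
```

Note that servers using chunked transfer encoding may omit Content-Length entirely, in which case this check can't tell you anything and the marker-based check is the fallback.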


1 Answer

I still don't know what's causing the problem, but in case anyone runs into something similar, I was able to find a workaround. If you send the source code to a text file rather than working with it directly, it won't be cut off. My guess is that the problem happens after the data is downloaded but before it's loaded into the "workspace". Here's a piece of code I wrote into the scraper:

if examiner == "Examiner not found":
    # Assumes `import sys` and `console_out = sys.stdout` earlier in the script
    filename = r'C:\Users\pathblahblahblah\Code and Output\Scraped Source Code\scraper_errors_' + patnum + '.html'
    sys.stdout = open(filename, 'w')
    print(patnum)
    print(pto_soup.prettify())
    sys.stdout = console_out

    # Take that logged code and find the examiner name
    # (reopen the same file we just wrote -- the original wrote a .html
    # file but then tried to read back a .txt file)
    sec = "Not found"
    prim = "Not found"
    with open(filename) as scraped_code:
        scrapedsoup = BeautifulSoup(scraped_code.read(), 'html5lib')

    # Find all italics (<i>) tags
    for italics in scrapedsoup.find_all("i"):
        for desc in italics.descendants:
            # Check whether any of them contain the words "Primary Examiner"
            if "Primary Examiner:" in desc:
                prim = desc.next_element.strip()
            # Same for "Assistant Examiner"
            if "Assistant Examiner:" in desc:
                sec = desc.next_element.strip()

    # If there is an assistant examiner, use that name;
    # otherwise fall back to the primary examiner
    if sec != "Not found":
        examiner = sec
    elif prim != "Not found":
        examiner = prim
    else:
        examiner = "Examiner not found"
    # Show the new result in the console
    print(examiner)
