Correct div/class combination for soup.select()



I'm working on some scraping code that keeps returning errors, and I'm hoping someone else can help me sort them out.

First I run this snippet:

import pandas as pd
from urllib.parse import urljoin
from bs4 import BeautifulSoup as BShtml
import requests

base = "http://www.reed.co.uk/jobs"

url = "http://www.reed.co.uk/jobs?datecreatedoffset=Today&pagesize=100"
r = requests.get(url).content
soup = BShtml(r, "html.parser")

df = pd.DataFrame(columns=["links"], data=[urljoin(base, a["href"]) for a in soup.select("div.pages a.page")])
df

I run this on the first page of today's job postings. I pull the URLs at the bottom of the page so I can work out how many pages of listings exist at this point in time. The regex below takes care of that for me:

^{pr2}$
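The regex snippet itself is missing from the post; as a rough sketch of the idea, assuming the pagination links collected in df above and a pageno query parameter like the one used in the loop URLs further down, the page count could be pulled out like this:

import re

# the second-to-last pagination link carries the highest page number
last_page_url = df["links"].iloc[-2]
# pull the pageno=N value out of the URL (parameter name assumed)
pagenum = int(re.search(r"pageno=(\d+)", last_page_url).group(1))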

Note that the total page count is always contained in the second-to-last of the five URLs in that list of pagination links. I'm sure there is a more elegant way of doing this, but it does the job. I then feed the number taken from that URL into a loop:

result_set = []

loopbasepref = 'http://www.reed.co.uk/jobs?cached=True&pageno='
loopbasesuf = '&datecreatedoffset=Today&pagesize=100'
for pnum in range(1,pagenum):
    url = loopbasepref + str(pnum) + loopbasesuf
    r = requests.get(url).content
    soup = BShtml(r, "html.parser")
    df2 = pd.DataFrame(columns=["links"], data=[urljoin(base, a["href"]) for a in soup.select("div", class_="results col-xs-12 col-md-10")])
    result_set.append(df2)
    print(df2)

This is where I go wrong. What I want to do is loop over all of the pages that list jobs, from page 1 up to page N (where N = pagenum), extract the URLs that link to each individual job page, and store them in a dataframe. I have tried various combinations of soup.select("div", class_=""), but every time I get the error: TypeError: select() got an unexpected keyword argument 'class_'.

If anyone has any thoughts on this and can see a good way forward, I'd appreciate the help!

Cheers

Chris


Tags: import, http, url, base, www, error, page, requests
1 Answer

You can just keep looping until there is no next page:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base = "http://www.reed.co.uk"
url = "http://www.reed.co.uk/jobs?datecreatedoffset=Today&pagesize=100"

def all_urls():
    r = requests.get(url).content
    soup = BeautifulSoup(r, "html.parser")
    # get the urls from the first page
    yield [urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')]
    nxt = soup.find("a", title="Go to next page")
    # title="Go to next page" is missing when there are no more pages
    while nxt:
        # wash/repeat until no more pages
        r = requests.get(urljoin(base, nxt["href"])).content
        soup = BeautifulSoup(r, "html.parser")
        yield [urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')]
        nxt = soup.find("a", title="Go to next page")

Just loop over the generator function to get the URLs from every page:

^{pr2}$
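That snippet is also missing here; a minimal sketch of iterating the generator and collecting every page's links into one list might look like:

all_links = []
for page_links in all_urls():
    # each yielded item is the list of job URLs scraped from one page
    all_links.extend(page_links)
print(len(all_links))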

I also use a[href^="/jobs"] in the selector because there are other matching tags, so it makes sure we only pull the job paths.

In your own code, the correct way to use the selector would be:

soup.select("div.results.col-xs-12.col-md-10")

Your syntax is the one for find/find_all, where class_=... is used for CSS classes:

soup.find_all("div", class_="results col-xs-12 col-md-10")

But that is not the correct div to be selecting anyway.
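For illustration, a small self-contained example (with made-up HTML) showing the two syntaxes side by side:

from bs4 import BeautifulSoup

html = '<div class="results col-xs-12 col-md-10"><a href="/jobs/example">job</a></div>'
soup = BeautifulSoup(html, "html.parser")

# select() takes a CSS selector: classes are chained with dots
print(soup.select("div.results.col-xs-12.col-md-10"))

# find_all() takes class_=...: here the full class string must match exactly
print(soup.find_all("div", class_="results col-xs-12 col-md-10"))

Both calls return the same div in this example; the difference is only in the syntax each method expects.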

I'm not sure why you are creating multiple dfs, but if that's what you want:

import pandas as pd

def all_urls():
    r = requests.get(url).content
    soup = BeautifulSoup(r, "html.parser")
    yield pd.DataFrame([urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')],
                       columns=["Links"])
    nxt = soup.find("a", title="Go to next page")
    while nxt:
        r = requests.get(urljoin(base, nxt["href"])).content
        soup = BeautifulSoup(r, "html.parser")
        yield pd.DataFrame([urljoin(base, a["href"]) for a in soup.select('div.details h3.title a[href^="/jobs"]')],
                           columns=["Links"])
        nxt = soup.find("a", title="Go to next page")


dfs = list(all_urls())

That gives you a list of dfs:

In [4]: dfs = list(all_urls())

In [5]: dfs[0].head(10)
Out[5]: 
                                               Links
0  http://www.reed.co.uk/jobs/tufting-manager/308...
1  http://www.reed.co.uk/jobs/financial-services-...
2  http://www.reed.co.uk/jobs/head-of-finance-mul...
3  http://www.reed.co.uk/jobs/class-1-drivers-req...
4  http://www.reed.co.uk/jobs/freelance-middlewei...
5  http://www.reed.co.uk/jobs/sage-200-consultant...
6  http://www.reed.co.uk/jobs/bereavement-support...
7  http://www.reed.co.uk/jobs/property-letting-ma...
8  http://www.reed.co.uk/jobs/graduate-recruitmen...
9  http://www.reed.co.uk/jobs/solutions-delivery-...

But if you only want a single one, then use your original approach with itertools.chain:

from itertools import chain
df = pd.DataFrame(columns=["links"], data=list(chain.from_iterable(all_urls())))

That gives you all the links in one df:

In [7]: from itertools import chain
   ...: df = pd.DataFrame(columns=["links"], data=list(chain.from_iterable(all_urls())))

In [8]: df.size
Out[8]: 675
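As a usage note, the list of per-page dataframes from the version above could also be collapsed into a single dataframe with pd.concat rather than chain:

import pandas as pd

# assumes dfs = list(all_urls()) from the dataframe-yielding version above
df_all = pd.concat(dfs, ignore_index=True)
print(df_all.shape)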
