How to loop over a list of URLs and print each page's &lt;p&gt; with BeautifulSoup

Posted 2024-04-20 00:49:10


I just discovered BeautifulSoup (4). I have a lot of links and I want to print the &lt;p&gt; tag of several websites in one go, but I don't know how, since I'm a beginner, and I couldn't find anything on Stack Overflow that fits my case.
Something like this doesn't work:

from bs4 import BeautifulSoup
import requests
import warnings

warnings.filterwarnings("ignore", category=UserWarning, module='bs4')
url = ["http://fc.lc/api?api=9053290fd05b5e5eb091b550078fa1e30935c92c&url=https://wow-ht.ml?s=https://cutlinks.pro/api?api=e6a8809e51daedcf30d9d6270fd0bfeba73c1dcb&url=https://google.com=text&format=text", "http://fc.lc/api?api=9053290fd05b5e5eb091b550078fa1e30935c92c&url=https://wow-ht.ml?s=https://cutlinks.pro/api?api=e6a8809e51daedcf30d9d6270fd0bfeba73c1dcb&url=https://example.com&format=text&format=text"]

# add header
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36'}
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, "lxml")
print( soup.find('p').text )

The error I get is this (I didn't expect the code to work as-is; marking this as a duplicate of a question about the error won't help me — please read the question in the title first):

Traceback (most recent call last):
  File "C:\Users\Gebruiker\Desktop\apitoshortened.py", line 10, in <module>
    r = requests.get(url, headers=headers)
  File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 640, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\Users\Gebruiker\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\sessions.py", line 731, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for '['http://fc.lc/api?api=9053290fd05b5e5eb091b550078fa1e30935c92c&url=https://wow-ht.ml?s=https://cutlinks.pro/api?api=e6a8809e51daedcf30d9d6270fd0bfeba73c1dcb&url=https://google.com=text&format=text', 'http://fc.lc/api?api=9053290fd05b5e5eb091b550078fa1e30935c92c&url=https://wow-ht.ml?s=https://cutlinks.pro/api?api=e6a8809e51daedcf30d9d6270fd0bfeba73c1dcb&url=https://example.com&format=text&format=text']'

I really didn't expect something this simple to be this hard; any help would be appreciated!
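For reference, the traceback is reproducible offline: requests converts whatever it is given into a string, and the stringified list matches no connection adapter, so InvalidSchema is raised before any request is sent. A minimal sketch with placeholder URLs:

```python
import requests

urls = ["http://example.com/a", "http://example.com/b"]  # placeholder list

try:
    # Passing a list instead of a single URL string: requests stringifies
    # it, finds no adapter for "['http...", and raises before any network I/O.
    requests.get(urls)
except requests.exceptions.InvalidSchema as exc:
    print("InvalidSchema:", exc)
```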


2 Answers

If you have a list, then use a for loop:

for item in url:
    r = requests.get(item, headers=headers)
    soup = BeautifulSoup(r.content, "lxml")
    print(soup.find('p').text)

By the way: your URLs don't return any HTML, only some plain text containing a link — so the code can't find a &lt;p&gt; tag.

To see the text that is returned:

for item in url:
    r = requests.get(item, headers=headers)
    print(r.text)    

Result:

https://fc.lc/C4FNiXbY
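Since the endpoints return plain text (a shortened link) rather than HTML, one option is to gather the responses into a list instead of parsing them. This is a sketch: `collect_links` and its `fetch` parameter are my own names, and in real use `fetch` would wrap `requests.get` as shown in the comment.

```python
import requests

def collect_links(urls, fetch):
    """Apply fetch() to each URL and return the stripped text responses."""
    return [fetch(u).strip() for u in urls]

# Real use over the network (assuming the url list and headers from above):
# links = collect_links(url, lambda u: requests.get(u, headers=headers).text)

# Offline demonstration with a stand-in fetcher:
print(collect_links(["a", "b"], lambda u: f" https://fc.lc/{u} \n"))
# → ['https://fc.lc/a', 'https://fc.lc/b']
```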

Use a for loop, then check that a &lt;p&gt; tag is present; if so, print its text.

from bs4 import BeautifulSoup
import requests
import warnings

warnings.filterwarnings("ignore", category=UserWarning, module='bs4')
urls = ["http://fc.lc/api?api=9053290fd05b5e5eb091b550078fa1e30935c92c&url=https://wow-ht.ml?s=https://cutlinks.pro/api?api=e6a8809e51daedcf30d9d6270fd0bfeba73c1dcb&url=https://google.com=text&format=text", "http://fc.lc/api?api=9053290fd05b5e5eb091b550078fa1e30935c92c&url=https://wow-ht.ml?s=https://cutlinks.pro/api?api=e6a8809e51daedcf30d9d6270fd0bfeba73c1dcb&url=https://example.com&format=text&format=text"]

# add header
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36'}

for url in urls:
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.content, "lxml")
    if soup.find('p'):
        print(soup.find('p').text)
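The guard above can also be wrapped in a small helper so that a page without a &lt;p&gt; tag yields None instead of raising AttributeError. A sketch using bs4's built-in "html.parser" (the function name `first_p_text` is my own):

```python
from bs4 import BeautifulSoup

def first_p_text(html):
    """Return the text of the first <p> tag, or None if there is none."""
    soup = BeautifulSoup(html, "html.parser")
    p = soup.find("p")
    return p.get_text() if p is not None else None

print(first_p_text("<html><p>hello</p></html>"))  # hello
print(first_p_text("just plain text, no tags"))   # None
```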
