How to get a specific href URL with Beautiful Soup and Python

Posted 2024-05-13 22:34:42


I'm trying to get the download URL inside this td tag:

<a href="https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF" target="DIBBSDocuments" title="Link To Delivery Order Document"><img alt="PDF Document" border="0" height="16" hspace="2" src="https://www.dibbs.bsm.dla.mil/app_themes/images/icons/IconPdf.gif" width="16"/></a>, <a href="https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF" target="DIBBSDocuments" title="Link To Delivery Order Document">SPEFA519F0433</a>

The markup above is the output produced by my code:

downloandurl = batch.select('a[href*="https://dibbs2.bsm.dla.mil/Downloads/Awards/"]')
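
Note that select returns a list of Tag objects rather than plain strings; printing that list is what gives the markup shown above. A rough illustration (assuming batch is the BeautifulSoup object for the page):

downloandurl = batch.select('a[href*="https://dibbs2.bsm.dla.mil/Downloads/Awards/"]')
print(type(downloandurl[0]))  # <class 'bs4.element.Tag'>
print(downloandurl)           # prints the anchor markup shown above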

How do I get the href URL out of the tag?

This is what I'm trying to retrieve:

https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF


Tags: https, target, pdf, title, downloads, link, document, href
2 Answers

Getting the href from an anchor tag

Use any of the following:

  • tag['href']

  • tag.get('href')

  • tag.attrs.get('href')
from bs4 import BeautifulSoup
data='''<a href="https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF" target="DIBBSDocuments" title="Link To Delivery Order Document"><img alt="PDF Document" border="0" height="16" hspace="2" src="https://www.dibbs.bsm.dla.mil/app_themes/images/icons/IconPdf.gif" width="16"/></a>, <a href="https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF" target="DIBBSDocuments" title="Link To Delivery Order Document">SPEFA519F0433</a>'''
soup=BeautifulSoup(data,'html.parser')
for item in soup.select('a'):
    print(item['href'])
    print(item.get('href'))
    print(item.attrs.get('href'))
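
With the sample markup above, all three forms print the same value for each of the two anchors: https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF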


If you are looking for one particular anchor tag, add more conditions to the selector, for example its target attribute:

for item in soup.select('a[target="DIBBSDocuments"]'):
    print(item['href'])
    print(item.get('href'))
    print(item.attrs.get('href'))

Or match on the start of the href URL:

for item in soup.select('a[href^="https://dibbs2.bsm.dla.mil/Downloads/Awards"]'):
    print(item['href'])
    print(item.get('href'))
    print(item.attrs.get('href'))
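
Both anchors in the question carry the same href, so if you only need the distinct download URLs, a set comprehension is one small way to deduplicate (this sketch reuses the soup object built above):

urls = {a['href'] for a in soup.select('a[href^="https://dibbs2.bsm.dla.mil/Downloads/Awards"]')}
print(urls)  # {'https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF'}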

Please use appropriate tags for your question and share the code you have tried, so we can see how much you have done rather than just handing you a complete answer. Thanks.

Try this:

from bs4 import BeautifulSoup
import requests, re
# Don't forget to install/set up the 'lxml' parser package

url = "http://www.github.com"
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data, 'lxml')

tags = soup.find_all('a')

# This will print every available link
for tag in tags:
    print(tag.get('href'))

# This will print only the links whose href starts with the given prefix
for link in soup.find_all('a', attrs={'href': re.compile("^https://github")}):
    print(link.get('href'))
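
The same re.compile prefix filter can be pointed at the award links from the question. Here is a small sketch that runs it against the sample markup instead of a live request, since the source page URL is not given in the question:

from bs4 import BeautifulSoup
import re

sample = '''<a href="https://dibbs2.bsm.dla.mil/Downloads/Awards/18SEP19/GS07F5933RSPEFA519F0433.PDF" target="DIBBSDocuments" title="Link To Delivery Order Document">SPEFA519F0433</a>'''
soup = BeautifulSoup(sample, 'html.parser')

# keep only anchors whose href starts with the DIBBS awards prefix
for link in soup.find_all('a', attrs={'href': re.compile(r"^https://dibbs2\.bsm\.dla\.mil/Downloads/Awards")}):
    print(link.get('href'))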
