Extracting multiple URLs without "a" or "href" tags from a web page with BS4

Posted 2024-04-20 05:32:17


I'm writing a simple program with Selenium that searches Flickr.com for a term the user enters and then prints out the URLs of all the resulting images.

I'm struggling with the last part, getting just the image URLs. I've been using a class_= search to grab the part of the HTML where the URLs live. Searching for "apples" returns the following block many times over:

<div class="view photo-list-photo-view requiredToShowOnServer awake" 
   data-view-signature="photo-list-photo-view__engagementModelName_photo-lite-
   models__excludePeople_false__id_6246270647__interactionViewName_photo-list-
   photo-interaction-    view__isOwner_false__layoutItem_1__measureAFT_true__model_1__modelParams_1_    _parentContainer_1__parentSignature_photolist-
   479__requiredToShowOnClient_true__requiredToShowOnServer_true__rowHeightMod    _1__searchTerm_apples__searchType_1__showAdvanced_true__showSort_true__show    Tools_true__sortMenuItems_1__unifiedSubviewParams_1__viewType_jst"
   style="transform: translate(823px, 970px); -webkit-transform:     translate(823px, 970px); -ms-transform: translate(823px, 970px); width:
   237px; height: 178px; background-image:
   url(//c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg)">
<div class="interaction-view"></div>

I only want the URL of each image, like this:

c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg

Since there is no a or href tag, I'm struggling to filter the URLs out.

Finally, I also tried some regular expressions, for example:

print(soup.find_all(re.compile(r'^url\.jpg$')))

But that didn't work.
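One reason that attempt fails is that a bare regex passed to find_all is matched against tag names, not attribute values. A minimal sketch of matching on the style attribute instead, using an abridged, hypothetical version of the HTML block above:

```python
import re
import bs4

# Hypothetical snippet standing in for the real Flickr page source.
html = '''<div class="view photo-list-photo-view requiredToShowOnServer awake"
   style="width: 237px; background-image:
   url(//c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg)"></div>'''

soup = bs4.BeautifulSoup(html, "html.parser")

urls = []
# find_all accepts a regex for an attribute value, so match tags whose
# style attribute mentions a background-image, then pull out the url(...).
for div in soup.find_all(style=re.compile(r'background-image')):
    m = re.search(r'url\(//([^)]+)\)', div['style'])
    if m:
        urls.append(m.group(1))

print(urls)  # ['c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg']
```

This keeps the match scoped to each tag's style attribute rather than regexing the whole document.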

Below is my full code, thanks.

import os
import re
import urllib.request as urllib2
import bs4
from selenium import webdriver
from selenium.webdriver.common.keys import Keys 

os.makedirs('My_images', exist_ok=True)

browser = webdriver.Chrome()
browser.implicitly_wait(10)

print("Opening Flickr.com")

siteChoice = 'http://www.flickr.com'

browser.get(siteChoice)

print("Enter your search term: ")

term = input("> ")

searchField = browser.find_element_by_id('search-field')
searchField.send_keys(term)
searchField.submit()

url = siteChoice + '/search/?text=' + term

html = urllib2.urlopen(url)

soup = bs4.BeautifulSoup(html, "html.parser")

print(soup.find_all(class_='view photo-list-photo-view requiredToShowOnServer awake', style = re.compile('staticflickr')))

My updated code:

p = re.compile(r'url\(\/\/([^\)]+)\)')

test_str = str(soup)

all_urls = re.findall(p, test_str)


print('Exporting to file')


with open('flickr_urls.txt', 'w') as f:
    for i in all_urls:
        f.writelines("%s\n" % i)

print('Done')
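The My_images folder created at the top of the script is never actually used. A hedged sketch of one way to finish the job and save each collected URL into it (local_name and download_all are hypothetical helpers; the scraped URLs are scheme-relative, so an explicit scheme has to be prepended before downloading):

```python
import os
import urllib.request

def local_name(url, dest='My_images'):
    # Derive a local filename from the URL's last path segment.
    return os.path.join(dest, url.rsplit('/', 1)[-1])

def download_all(urls, dest='My_images'):
    os.makedirs(dest, exist_ok=True)
    for u in urls:
        # The scraped URLs lack a scheme, so add one explicitly.
        urllib.request.urlretrieve('https://' + u, local_name(u, dest))

print(local_name('c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg'))
```

download_all would be called with the all_urls list collected above.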

2 Answers

Try this regex:

url\(\/\/([^\)]+)\)

Demo:

import re
p = re.compile(r'url\(\/\/([^\)]+)\)')
test_str = "<div class=\"view photo-list-photo-view requiredToShowOnServer awake\" \ndata-view-signature=\"photo-list-photo-view__engagementModelName_photo-lite-\nmodels__excludePeople_false__id_6246270647__interactionViewName_photo-list-\nphoto-interaction-    view__isOwner_false__layoutItem_1__measureAFT_true__model_1__modelParams_1_    _parentContainer_1__parentSignature_photolist-\n479__requiredToShowOnClient_true__requiredToShowOnServer_true__rowHeightMod    _1__searchTerm_apples__searchType_1__showAdvanced_true__showSort_true__show    Tools_true__sortMenuItems_1__unifiedSubviewParams_1__viewType_jst\"\n style=\"transform: translate(823px, 970px); -webkit-transform:     translate(823px, 970px); -ms-transform: translate(823px, 970px); width:\n 237px; height: 178px; background-image:\n url(//c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg)\">\n<div class=\"interaction-view\"></div>"

m = re.search(p, test_str)
print(m.group(1))

Output:

c3.staticflickr.com/7/6114/6246270647_edc7387cfc_m.jpg

To grab all png/jpg links from a page with Selenium:

from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://www.flickr.com/")
links = driver.execute_script("return document.body.innerHTML.match(" \
  "/https?:\/\/[a-z_\/0-9\-\#=&.\@]+\.(jpg|png)/gi)")
print(links)
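The same match can also be done in Python against driver.page_source rather than in the browser via JavaScript. A sketch with the equivalent regex, using a hypothetical fragment standing in for the real page source:

```python
import re

# Hypothetical fragment standing in for driver.page_source.
page_source = ('<img src="https://c3.staticflickr.com/7/6114/'
               '6246270647_edc7387cfc_m.jpg"> <a href="https://flickr.com/about">')

# Same character class as the in-browser match, case-insensitive.
links = re.findall(r'https?://[a-z_/0-9\-#=&.@]+\.(?:jpg|png)', page_source, re.I)
print(links)
```

Doing the match in Python avoids serializing the result back out of the browser and keeps all the scraping logic in one language.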
