Getting duplicate links when scraping
I'm trying to collect all the "a" tags that are inside class="featured" on the site http://www.pakistanfashionmagazine.com. I wrote this code and it runs without errors, but it prints the links twice. How can I fix this duplication?
from bs4 import BeautifulSoup
import requests

url = raw_input("Enter a website to extract the URL's from: ")
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data)

results = soup.findAll('div', attrs={"class": 'featured'})
for div in results:
    links = div.findAll('a')
    for a in links:
        print "http://www.pakistanfashionmagazine.com/" + a['href']
1 Answer
The actual HTML page contains two links per item <div>: one on the image, and one on the <h4> heading:
<div class="item">
<div class="image">
<a href="/dress/casual-dresses/bella-embroidered-lawn-collection-3-stitched-suits-pkr-14000-only.html" title="BELLA Embroidered Lawn Collection*3 STITCHED SUITS@PKR 14000 ONLY"><img src="/siteimages/upload/BELLA-Embroidered-Lawn-Collection3-STITCHED-SUITSPKR-14000-ONLY_1529IM1-thumb.jpg" alt="Featured Product" /></a> </div>
<div class="detail">
<h4><a href="/dress/casual-dresses/bella-embroidered-lawn-collection-3-stitched-suits-pkr-14000-only.html">BELLA Embroidered Lawn Collection*3 STITCHED SUITS@PKR 14000 ONLY</a></h4>
<em>updated: 2013-06-03</em>
<p>BELLA Embroidered Lawn Collection*3 STITCHED SUITS@PKR 14000 ONLY</p>
</div>
</div>
Limit your links to just one of the two; I'd use a CSS selector here:
links = soup.select('div.featured .detail a[href]')
for link in links:
    print "http://www.pakistanfashionmagazine.com/" + link['href']
Now 32 links are printed, instead of 64.
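As a side note, instead of hard-coding the site prefix you could build absolute URLs with urljoin from the standard library. A minimal sketch of that variation (the base URL here is just the one from your question):

from urlparse import urljoin  # on Python 3 this is urllib.parse.urljoin

base = "http://www.pakistanfashionmagazine.com/"
for link in soup.select('div.featured .detail a[href]'):
    # urljoin resolves relative hrefs like "/dress/..." against the base URL
    print urljoin(base, link['href'])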
If you only meant to get the second featured section (the beauty tips), then do so: select the featured divs, pick the second one from the list, and search within it:
links = soup.select('div.featured')[1].select('.detail a[href]')
That gives you just the 8 links from that section.
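Alternatively, if you'd rather keep your original findAll approach and simply drop the repeated hrefs, a "seen" set works too. This is only a sketch of that alternative, not what the selector above does:

seen = set()
for div in soup.findAll('div', attrs={"class": 'featured'}):
    for a in div.findAll('a', href=True):
        href = a['href']
        if href not in seen:
            # skip the second anchor that points at the same product page
            seen.add(href)
            print "http://www.pakistanfashionmagazine.com/" + href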