Removing links that contain a full URL

Published 2024-04-25 12:40:35


I wrote a Python script that extracts the href value from every link on a given web page:

from BeautifulSoup import BeautifulSoup
import urllib2
import re

html_page = urllib2.urlopen("http://kteq.in/services")
soup = BeautifulSoup(html_page)
for link in soup.findAll('a'):
    print link.get('href')

When I run the code above, I get the following output, which includes both external and internal links:

index
index
#
solutions#internet-of-things
solutions#online-billing-and-payment-solutions
solutions#customer-relationship-management
solutions#enterprise-mobility
solutions#enterprise-content-management
solutions#artificial-intelligence
solutions#b2b-and-b2c-web-portals
solutions#robotics
solutions#augement-reality-virtual-reality
solutions#azure
solutions#omnichannel-commerce
solutions#document-management
solutions#enterprise-extranets-and-intranets
solutions#business-intelligence
solutions#enterprise-resource-planning
services
clients
contact
#
#
#
https://www.facebook.com/KTeqSolutions/
#
#
#
#
#contactform
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
index
services
#
contact
#
iOSDevelopmentServices
AndroidAppDevelopment
WindowsAppDevelopment
HybridSoftwareSolutions
CloudServices
HTML5Development
iPadAppDevelopment
services
services
services
services
services
services
contact
contact
contact
contact
contact
None
https://www.facebook.com/KTeqSolutions/
#
#
#
#

I want to remove external links that have a full URL, like https://www.facebook.com/KTeqSolutions/, while keeping links like solutions#internet-of-things. How can I do this efficiently?


2 Answers

If I've understood you correctly, you can try:

links = []
for link in soup.findAll('a'):
    print link.get('href')
    links.append(link.get('href'))
# drop None entries (from <a> tags without an href) before the substring test
links = [x for x in links if x is not None and "www" not in x]  # or filter on 'https'
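Note that a substring test on "www" is fragile: it misses external links that do not contain www and would discard an internal path that happens to include it. A more robust sketch (Python 3 standard library only; the hrefs list is a made-up sample of values as returned by link.get('href')) keeps only hrefs that parse without a network location:

```python
from urllib.parse import urlparse

# Hypothetical sample of extracted href values
hrefs = ["solutions#internet-of-things",
         "https://www.facebook.com/KTeqSolutions/",
         "#contactform",
         None]

# An href is internal if it is present and parses with no host (netloc)
internal = [h for h in hrefs if h and not urlparse(h).netloc]
print(internal)  # ['solutions#internet-of-things', '#contactform']
```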

You can use parse_url from the requests module:

import requests

url = 'https://www.facebook.com/KTeqSolutions/'

requests.urllib3.util.parse_url(url)

which gives you:

Url(scheme='https', auth=None, host='www.facebook.com', port=None, path='/KTeqSolutions/', query=None, fragment=None)
