I wrote a simple scraper that visits a site's home page and the internal links found on it. In other words, it goes one level deep into the site structure, starting from the home page, searching for strings that match a regex. It executes JS, so it works for emails, phone numbers, or any well-formatted data. Here is the code:
import re
from urllib.error import HTTPError, URLError

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

pages = set()

def getPage(startUrl):
    user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'
    dcap = dict(DesiredCapabilities.PHANTOMJS)
    dcap["phantomjs.page.settings.userAgent"] = user_agent
    driver = webdriver.PhantomJS(
        executable_path="/Users/mainuser/Downloads/phantomjs-2.1.1-macosx/bin/phantomjs",
        desired_capabilities=dcap)
    try:
        driver.set_page_load_timeout(10)
        driver.get(startUrl)
        return BeautifulSoup(driver.page_source, "html.parser")
    except Exception as e:
        print("returning None:", e)
        return None
    finally:
        driver.quit()  # always release the PhantomJS process
def traverseHomePage(startUrl):
    if startUrl.endswith("/"):
        startUrl = startUrl[:-1]
    try:
        driver = webdriver.PhantomJS(
            executable_path="/Users/mainuser/Downloads/phantomjs-2.1.1-macosx/bin/phantomjs")
        driver.get(startUrl)
    except HTTPError as e:
        print(e)
    except URLError as e:
        print(e)
    else:
        bsObj = BeautifulSoup(driver.page_source, "html.parser")
        text = str(bsObj)
        listRegex = re.findall(r'someregexhere', text)
        print(listRegex, "do something with data")
        # skip anchors, javascript: links and images
        for link in bsObj.findAll("a", href=re.compile(r"^((?!#|javascript|\.png|\.jpg|\.gif).)*$")):
            if 'href' not in link.attrs:
                continue
            href = link.attrs['href']
            if ("http://" in href or "https://" in href) and startUrl in href:
                # absolute internal link
                print("internal absolute: " + startUrl + " is in " + href)
                if href not in pages:
                    # we have encountered a new page
                    pages.add(href)
                    oneLevelDeep(href)
            elif ("http://" in href or "https://" in href or "mailto" in href) and startUrl not in href:
                # external link: skip it
                print("outside link: " + href)
                continue
            else:
                # relative internal link
                print("internal relative: " + href)
                if href not in pages:
                    # we have encountered a new page
                    newPage = href if href.startswith("/") else "/" + href
                    pages.add(newPage)
                    oneLevelDeep(startUrl + newPage)
    finally:
        driver.quit()
def oneLevelDeep(startUrl):
    if startUrl.endswith("/"):
        startUrl = startUrl[:-1]
    bsObj = getPage(startUrl)
    if bsObj is not None:  # the original compared against the string "None", which is always true
        text = str(bsObj)
        listRegex = re.findall(r'someregexhere', text)
        print(listRegex, "do something with data")
Example usage: traverseHomePage("http://homepage.com")
I've been running this scraper for a while and it is incredibly slow. I duplicated my project eight times in Eclipse, and it still crawled only 1000 pages in 12 hours. What can I do to improve its speed? I seriously doubt Googlebot indexes only 250 pages a day.
I think the bottleneck is the number of page requests per minute: the bot makes one every few seconds. I've read about bots making 50 requests per second (and that you shouldn't do that), but that's not what is happening here.
How can I improve the scraping speed? I'm running the code from Eclipse on localhost. Would moving to a server help? Should I tell the server not to send me images, so I use less bandwidth? Is it possible to make asynchronous requests? Would multiple scripts running concurrently help? Any ideas are welcome.
The problem is that you are loading each web page the way a browser would. If you open homepage.com, bring up the developer tools and go to the Network tab (in Chrome, at least), you'll see the page takes a long time to load fully. In my case it took 7 seconds in total, and the last file was a Google Maps authentication file.
Google can parse things quickly because it has servers upon servers doing the parsing, and because it only looks at a few files: starting from the root, it visits every link on that page, then every link on each of those pages. It doesn't wait for the whole page to load; it only needs the raw HTML of each site.
Waiting for JavaScript, and downloading the entire site (CSS and everything, not just a single HTML file), is what slows the search down. I would use requests to fetch the plain HTML and work from there.
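Following that suggestion, here is a minimal sketch of a requests-based crawler, with a thread pool for the concurrency the question asks about. The function names, the placeholder regex `someregexhere`, and the worker count are illustrative choices, not part of the original code:

```python
import re
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup

def fetch_html(url, timeout=10):
    """Fetch raw HTML only: no JS execution, no images or CSS."""
    try:
        resp = requests.get(url, timeout=timeout,
                            headers={"User-Agent": "Mozilla/5.0"})
        resp.raise_for_status()
        return resp.text
    except requests.RequestException:
        return None

def extract_matches(html, pattern=r"someregexhere"):
    """Run the regex over the page text, as the original scraper does."""
    return re.findall(pattern, html) if html else []

def internal_links(html, base):
    """Collect same-site absolute URLs from the anchor tags."""
    soup = BeautifulSoup(html, "html.parser")
    links = set()
    for a in soup.find_all("a", href=True):
        href = a["href"]
        if href.startswith("/"):
            links.add(base + href)      # relative link: prepend the base URL
        elif href.startswith(base):
            links.add(href)             # absolute internal link
    return links

def crawl_one_level(start_url, workers=8):
    """Fetch the home page, then fetch its internal links concurrently."""
    home = fetch_html(start_url)
    if home is None:
        return []
    results = extract_matches(home)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for html in pool.map(fetch_html, internal_links(home, start_url)):
            results.extend(extract_matches(html))
    return results
```

With 8 threads each page fetch overlaps the others instead of running one every few seconds, and since only the HTML document is downloaded, each fetch is far cheaper than a full PhantomJS page load. The trade-off: nothing injected by JavaScript will appear in the HTML.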