Extracting Google search results
I want to periodically check which subdomains Google has indexed.
To get a list of subdomains, I type 'site:example.com' into the Google search box, which returns all the subdomain results (more than 20 pages for our domain).
Is there a good way to extract only the URLs from the 'site:example.com' search results?
I'm considering writing a small Python script that performs the search and pulls the URLs out of the results with a regular expression (repeating this across all result pages). Is that a reasonable approach? Is there a better way?
Thanks!
3 Answers
0
An alternative approach is to use the requests and bs4 libraries:
import requests, lxml
from bs4 import BeautifulSoup

# A real browser User-Agent helps avoid Google's basic bot detection
headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3538.102 Safari/537.36 Edge/18.19582"
}
params = {'q': 'site:minecraft.fandom.com'}

# Pass the query via params; the base URL should not carry a dangling '?q='
html = requests.get('https://www.google.com/search',
                    headers=headers,
                    params=params).text
soup = BeautifulSoup(html, 'lxml')

# 'tF2Cxc' is the container class Google uses for organic results
for container in soup.find_all('div', class_='tF2Cxc'):
    link = container.find('a')['href']
    print(link)
Output:
https://minecraft.fandom.com/wiki/Podzol
https://minecraft.fandom.com/wiki/Pumpkin
https://minecraft.fandom.com/wiki/Swimming
https://minecraft.fandom.com/wiki/Polished_Blackstone
https://minecraft.fandom.com/wiki/Nether_Quartz_Ore
https://minecraft.fandom.com/wiki/Blacksmith
https://minecraft.fandom.com/wiki/Grindstone
https://minecraft.fandom.com/wiki/Spider
https://minecraft.fandom.com/wiki/Crash
https://minecraft.fandom.com/wiki/Tuff
To get these results from every page, you can paginate:
from bs4 import BeautifulSoup
import requests, urllib.parse
import lxml


def print_extracted_data_from_url(url):
    headers = {
        "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
    }
    response = requests.get(url, headers=headers).text
    soup = BeautifulSoup(response, 'lxml')

    # '.YyVfkd' is the highlighted page number in Google's pagination bar
    print(f'Current page: {int(soup.select_one(".YyVfkd").text)}')
    print(f'Current URL: {url}')
    print()

    for container in soup.find_all('div', class_='tF2Cxc'):
        head_link = container.a['href']
        print(head_link)

    # 'a#pnnext' is the "Next" link; it is absent (None) on the last page
    return soup.select_one('a#pnnext')


def scrape():
    next_page_node = print_extracted_data_from_url(
        'https://www.google.com/search?hl=en-US&q=site:minecraft.fandom.com')

    while next_page_node is not None:
        next_page_url = urllib.parse.urljoin('https://www.google.com',
                                             next_page_node['href'])
        next_page_node = print_extracted_data_from_url(next_page_url)

scrape()
Part of the output:
Results via beautifulsoup
Current page: 1
Current URL: https://www.google.com/search?hl=en-US&q=site:minecraft.fandom.com
https://minecraft.fandom.com/wiki/Podzol
https://minecraft.fandom.com/wiki/Pumpkin
https://minecraft.fandom.com/wiki/Swimming
https://minecraft.fandom.com/wiki/Polished_Blackstone
https://minecraft.fandom.com/wiki/Nether_Quartz_Ore
https://minecraft.fandom.com/wiki/Blacksmith
https://minecraft.fandom.com/wiki/Grindstone
https://minecraft.fandom.com/wiki/Spider
https://minecraft.fandom.com/wiki/Crash
https://minecraft.fandom.com/wiki/Tuff
Alternatively, you can use the Google Search Engine Results API from SerpApi. It is a paid API with a free trial of 5,000 searches.
Code to integrate:
from serpapi import GoogleSearch
import os

params = {
    "engine": "google",
    "q": "site:minecraft.fandom.com",
    "api_key": os.getenv('API_KEY')
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results['organic_results']:
    link = result['link']
    print(link)
Output:
https://minecraft.fandom.com/wiki/Podzol
https://minecraft.fandom.com/wiki/Pumpkin
https://minecraft.fandom.com/wiki/Swimming
https://minecraft.fandom.com/wiki/Polished_Blackstone
https://minecraft.fandom.com/wiki/Nether_Quartz_Ore
https://minecraft.fandom.com/wiki/Blacksmith
https://minecraft.fandom.com/wiki/Grindstone
https://minecraft.fandom.com/wiki/Spider
https://minecraft.fandom.com/wiki/Crash
https://minecraft.fandom.com/wiki/Tuff
Using pagination:
import os
from serpapi import GoogleSearch


def scrape():
    params = {
        "engine": "google",
        "q": "site:minecraft.fandom.com",
        "api_key": os.getenv("API_KEY"),
    }

    search = GoogleSearch(params)
    results = search.get_dict()

    print(f"Current page: {results['serpapi_pagination']['current']}")

    for result in results["organic_results"]:
        print(f"Title: {result['title']}\nLink: {result['link']}\n")

    # Each Google page holds 10 results, so 'start' advances in steps of 10
    while 'next' in results['serpapi_pagination']:
        search.params_dict["start"] = results['serpapi_pagination']['current'] * 10
        results = search.get_dict()

        print(f"Current page: {results['serpapi_pagination']['current']}")

        for result in results["organic_results"]:
            print(f"Title: {result['title']}\nLink: {result['link']}\n")

scrape()
Disclaimer, I work for SerpApi.
3
The Google Custom Search API can deliver search results in Atom XML format.
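For reference, a minimal sketch of calling that API with requests (shown with the JSON response, since Atom output depends on the API version; API_KEY and SEARCH_ENGINE_ID are placeholders for credentials you would create in Google's console, not values from the answer above):

# Hypothetical sketch of a Custom Search API call; not from the original answer.
import requests

params = {
    "key": "API_KEY",            # placeholder: API key from Google
    "cx": "SEARCH_ENGINE_ID",    # placeholder: ID of your custom search engine
    "q": "site:example.com",
}
response = requests.get("https://www.googleapis.com/customsearch/v1",
                        params=params)

# Each item in the JSON response carries the result URL under 'link'
for item in response.json().get("items", []):
    print(item["link"])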
16
Parsing HTML with regular expressions is not a good idea: it is hard to read, and it relies on the HTML being well-formed.
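As a contrived illustration of that brittleness (not from the original answer), a regex that assumes one attribute order silently misses markup that an HTML parser such as BeautifulSoup (recommended below) handles fine:

import re
from bs4 import BeautifulSoup

html = '<a class="result" href="http://example.com/">link</a>'

# A regex that assumes href comes first finds nothing here...
print(re.findall(r'<a href="([^"]+)"', html))  # -> []

# ...while an HTML parser is indifferent to attribute order
soup = BeautifulSoup(html, 'html.parser')
print([a['href'] for a in soup.find_all('a')])  # -> ['http://example.com/']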
Try the BeautifulSoup library, which is made for Python. Below is an example script that extracts URLs from the first 10 pages of Google results for the site domain.com.
import sys     # Used to add the BeautifulSoup folder to the import path
import urllib2 # Used to read the html document

if __name__ == "__main__":
    ### Import Beautiful Soup
    ### Here, I have the BeautifulSoup folder in the level of this Python script
    ### So I need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from BeautifulSoup import BeautifulSoup

    ### Create opener with Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open page & generate soup
    ### the "start" variable will be used to iterate through 10 pages.
    for start in range(0, 10):
        url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Looks like google contains URLs in <cite> tags.
        ### So for each cite tag on each page (10), print its contents (url)
        for cite in soup.findAll('cite'):
            print cite.text
Output:
stackoverflow.com/
stackoverflow.com/questions
stackoverflow.com/unanswered
stackoverflow.com/users
meta.stackoverflow.com/
blog.stackoverflow.com/
chat.meta.stackoverflow.com/
...
Of course, you could append each result to a list and parse it further for subdomains. I only got into Python and web scraping a few days ago, but this should get you started.
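For instance, a minimal sketch of that idea (in Python 3; it assumes the scraped results were collected into a list, and the sample data below is copied from the output above) could use urllib.parse to keep only the unique subdomains:

from urllib.parse import urlparse

# Sample data taken from the output above; the <cite> text omits the scheme
results = [
    "stackoverflow.com/",
    "stackoverflow.com/questions",
    "meta.stackoverflow.com/",
    "blog.stackoverflow.com/",
    "chat.meta.stackoverflow.com/",
]

subdomains = set()
for url in results:
    if "://" not in url:
        url = "http://" + url  # urlparse only fills netloc when a scheme is present
    subdomains.add(urlparse(url).netloc)

for sub in sorted(subdomains):
    print(sub)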