Python: requesting an shtml page, trying to load all the JavaScript-rendered information

Published 2024-03-28 16:40:08


Rather than browsing this site that offers free proxies by hand, I want to scrape the information and then filter it. I tried to do this with requests-html, but so far, after reading tutorials and the library docs, it has not worked: when I run it, it only prints []. Here is my code so far; I am trying to grab the parts of the page that contain the IPs.

import requests
from bs4 import BeautifulSoup
from requests_html import HTMLSession

# create an HTML Session object
session = HTMLSession()

# use the object above to connect to the target webpage
resp = session.get("https://advanced.name/freeproxy")

# run the JavaScript code on the webpage
resp.html.render()

port = resp.html.find("data-ip")
print(port)

2 Answers

You need to add a sleep inside render():

from requests_html import HTMLSession

session = HTMLSession()
url = "https://advanced.name/freeproxy"

r = session.get(url)
r.html.render(sleep=2)

ips = r.html.find('tr > td:nth-child(2)')
ports = r.html.find('tr > td:nth-child(3)')

for ip, port in zip(ips, ports):
    print(ip.text + ":" + port.text)

Output:

186.96.117.28:9991
181.209.106.196:3128
181.209.86.210:999
115.77.191.25:9090
177.52.221.166:999
49.232.118.212:3128
45.235.110.66:53281
177.155.215.89:8080
191.242.230.135:8080
45.167.95.184:8085
170.83.76.73:999
142.44.148.56:8080
103.139.194.69:8080
102.134.123.167:8080
45.167.23.30:999
45.224.150.155:999
103.138.41.132:8080
170.239.180.58:999
103.160.56.16:8080
210.18.133.71:8080
185.179.30.130:8080
190.61.90.141:999
187.188.200.2:999
42.194.212.250:8081
88.157.181.42:8080
31.40.135.67:31113
218.60.8.99:3129
104.238.195.10:80
45.189.252.40:999
190.52.129.39:8080
103.151.226.133:8080
178.205.254.106:8080
186.233.186.60:8080
201.222.44.58:999
175.103.35.2:3888
177.21.237.100:8080
113.20.31.24:8080
190.108.93.82:999
158.140.162.70:80
36.75.246.41:80
190.120.252.245:999
167.172.180.46:42580
188.133.137.9:8081
191.234.166.244:80
47.101.59.76:8888
178.32.129.31:3128
202.142.189.21:8080
185.190.38.14:8080
203.75.190.21:80
222.74.202.229:80
223.82.106.253:3128
3.221.105.1:80
3.219.153.200:80
62.33.207.196:80
178.63.17.151:3128
111.90.179.74:8080
14.97.2.108:80
120.197.179.166:8080
68.15.147.8:48678
183.215.206.39:55443
221.6.201.74:9999
18.224.59.63:3128
61.153.251.150:22222
184.180.90.226:8080
162.243.161.166:80
103.148.195.37:4153
18.236.151.253:80
81.19.0.134:3128
78.47.104.35:3128
71.172.1.52:8080
65.184.156.234:52981
199.192.126.211:8080
125.99.106.250:3128
69.163.162.222:37926
173.236.176.67:17838
184.155.36.194:8080
216.75.113.182:39602
107.150.37.82:3128
159.65.171.69:80
45.43.19.140:33533
104.215.127.197:80
124.41.211.211:57258
103.216.82.20:6667
74.143.245.221:80
124.71.162.246:808
103.79.96.173:4153
47.57.188.208:80
69.163.166.126:37926
41.33.66.241:1080
131.0.87.225:52017
50.242.100.89:32100
103.78.27.49:4145
220.163.129.150:808
193.164.94.244:4153
203.215.181.219:36342
202.168.147.189:34493
106.52.10.171:9999
51.222.21.95:32768
122.155.165.191:3128
218.16.62.152:3128
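As for why the code in the question prints []: find() in requests-html takes a CSS selector, so "data-ip" looks for a tag literally named <data-ip>, while the attribute selector "[data-ip]" is what matches the table cells. The same CSS-selector semantics can be checked offline with BeautifulSoup (already imported in the question); the fragment below is a made-up stand-in for one rendered table row:

```python
from bs4 import BeautifulSoup

# hypothetical fragment imitating one row of the rendered proxy table
fragment = BeautifulSoup(
    '<table><tr><td data-ip="186.96.117.28">186.96.117.28</td>'
    '<td data-port="9991">9991</td></tr></table>', "html.parser")

# a bare tag name matches nothing: there is no <data-ip> element
print(fragment.select("data-ip"))        # []

# the CSS attribute selector [data-ip] matches the <td> carrying that attribute
print([td["data-ip"] for td in fragment.select("[data-ip]")])   # ['186.96.117.28']
```

With requests-html the equivalent call would be resp.html.find("[data-ip]"), though on this site the render() sleep (or Selenium) is still needed before the cells exist at all.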

This page uses JavaScript to detect bots/scripts, and it seems to work, because it blocks your code. You may need something more.

If you check the requests-html repo, you will see it has not been updated in over a year.

I would use Selenium:

from selenium import webdriver
from selenium.webdriver.common.by import By

url = "https://advanced.name/freeproxy"

#driver = webdriver.Firefox()
driver = webdriver.Chrome()
driver.get(url)

all_ips = driver.find_elements(By.XPATH, '//td[@data-ip]')
all_ports = driver.find_elements(By.XPATH, '//td[@data-port]')
for ip, port in zip(all_ips, all_ports):
    print(ip.text, port.text)
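
The question mentions wanting to filter the collected proxies afterwards. A quick liveness check can be sketched with requests; httpbin.org is just an example echo endpoint, and any given free proxy is likely to be dead, so expect mostly False:

```python
import requests

def proxy_works(ip, port, timeout=5):
    """Return True if the proxy answers a simple request within the timeout."""
    proxy = f"http://{ip}:{port}"
    try:
        r = requests.get("https://httpbin.org/ip",
                         proxies={"http": proxy, "https": proxy},
                         timeout=timeout)
        return r.ok
    except requests.RequestException:
        # covers timeouts, refused connections, proxy errors, etc.
        return False

# an unroutable address should fail fast
print(proxy_works("10.255.255.1", "9999", timeout=1))   # False
```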

EDIT:

Reading the next pages:

  • Use a for-loop with a page number in the URL, but this requires knowing how many pages there are

      from selenium import webdriver
      from selenium.webdriver.common.by import By

      #driver = webdriver.Firefox()
      driver = webdriver.Chrome()

      url = "https://advanced.name/freeproxy?ddexp4attempt=1&page="

      for page in range(15):
          print(' - page', page, ' -')

          driver.get(url + str(page))

          all_ips = driver.find_elements(By.XPATH, '//td[@data-ip]')
          all_ports = driver.find_elements(By.XPATH, '//td[@data-port]')
          for ip, port in zip(all_ips, all_ports):
              print(ip.text, port.text)
    
  • Use a while loop and click the link to the next page; then you don't have to know how many pages there are

      from selenium import webdriver
      from selenium.webdriver.common.by import By
      from selenium.common.exceptions import NoSuchElementException

      #driver = webdriver.Firefox()
      driver = webdriver.Chrome()

      url = "https://advanced.name/freeproxy"
      driver.get(url)

      while True:

          print(' - page  -')

          all_ips = driver.find_elements(By.XPATH, '//td[@data-ip]')
          all_ports = driver.find_elements(By.XPATH, '//td[@data-port]')
          for ip, port in zip(all_ips, all_ports):
              print(ip.text, port.text)

          try:
              # go to the next page
              link_to_next_page = driver.find_element(By.LINK_TEXT, '»')
              link_to_next_page.click()
          except NoSuchElementException:
              # exit the loop when there are no more pages
              break
    
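Looping over pages like this can yield duplicate rows, and the question's stated goal of filtering the collected list is then plain Python. A minimal sketch with made-up sample data, assuming you want unique proxies on a few common ports:

```python
# hypothetical (ip, port) pairs collected from the pages above
raw = [
    ("186.96.117.28", "9991"),
    ("181.209.106.196", "3128"),
    ("178.63.17.151", "3128"),
    ("186.96.117.28", "9991"),   # duplicate row from a repeated page
]

wanted_ports = {"80", "3128", "8080"}

# keep the first occurrence of each ip:port, then filter by port
seen = set()
filtered = []
for ip, port in raw:
    if (ip, port) in seen or port not in wanted_ports:
        continue
    seen.add((ip, port))
    filtered.append(f"{ip}:{port}")

print(filtered)   # ['181.209.106.196:3128', '178.63.17.151:3128']
```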
