The script runs but times out after roughly 75-100 URLs have been processed.
I've tried adding delays, but got the same result, and I don't know what else to try. Any advice would be appreciated.
import requests
import urllib.request
from bs4 import BeautifulSoup

urls = [line.strip() for line in inf]  # inf: the already-opened file of URLs
for url in urls:
    sourceCode = requests.get(url)
    plainText = sourceCode.text
    soup = BeautifulSoup(plainText, "html.parser")
    # each card-img-container div holds one image to download
    irock = soup.find_all('div', class_="card-img-container")
    for img in irock:
        imageElement = img.find("img")
        bingo = imageElement.get("data-src")   # image URL
        imgName = imageElement.get("title")    # used as the filename
        fullName = str(imgName) + ".jpg"
        r = urllib.request.urlretrieve(bingo, fullName)
    print(url)
print("Done")
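Since the run dies partway through, one common approach (not in the original code) is to give each request an explicit `timeout` and retry failed requests with a short backoff instead of letting one hung connection stall the whole loop. A minimal retry sketch, where `retry` is a hypothetical helper and the timeout value is an assumption:

```python
import time

def retry(fn, attempts=3, delay=0.5):
    """Call fn(); on exception, wait and retry, up to `attempts` tries."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(delay * (2 ** i))  # exponential backoff between tries

# In the scraping loop it might be used like this (timeout keeps one
# slow server from hanging the whole run):
#   sourceCode = retry(lambda: requests.get(url, timeout=10))
```

Reusing a single `requests.Session()` for all the URLs is also worth trying, since it keeps connections alive rather than opening a fresh one per request.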