How do I iterate over HREFs with Selenium?

Posted 2024-04-25 00:14:30


I've been trying to get all the HREFs from the home page of a news site. Eventually I want to build something that gives me the n most frequently used words across all the news articles. To do that, I figured I first need the HREFs, and then I need to click them one by one.

With a lot of help from another user on this platform, I've now got the following code:

from bs4 import BeautifulSoup
from selenium import webdriver

url = 'https://ad.nl'

# launch Chrome with the url above
# note that you could swap this for some other webdriver (e.g. Firefox)
driver = webdriver.Chrome()
driver.get(url)

# click the "accept cookies" button
btn = driver.find_element_by_name('action')
btn.click()

# grab the html. It'll wait here until the page is finished loading
html = driver.page_source

# parse the html soup
soup = BeautifulSoup(html.lower(), "html.parser")
articles = soup.findAll("article")

for i in articles:
    article = driver.find_element_by_class_name('ankeiler')
    hrefs = article.find_element_by_css_selector('a').get_attribute('href')
    print(hrefs)
driver.quit()

It gives me what I believe is the first href, but it doesn't move on to the next one. It just prints the first href as many times as the loop iterates. Does anyone know how I can get it to advance to the next href instead of staying stuck on the first one?

If anyone has suggestions on how to take my little project further, please feel free to share them, as I still have a lot to learn about Python and programming in general.


Tags: the, from, url, by, html, driver, article
3 Answers

Instead of using Beautiful Soup, how about this:

# find every <article> element directly with Selenium,
# then grab the first link inside each one
articles = driver.find_elements_by_css_selector('article')

for article in articles:
    href = article.find_element_by_css_selector('a').get_attribute('href')
    print(href)
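As a side note, the find_element_by_* helpers used throughout this thread come from the older Selenium API and were removed in Selenium 4. On a recent version, a sketch of the same logic would look like this:

from selenium.webdriver.common.by import By

# Selenium 4 style: pass the locator strategy explicitly
articles = driver.find_elements(By.CSS_SELECTOR, 'article')

for article in articles:
    href = article.find_element(By.CSS_SELECTOR, 'a').get_attribute('href')
    print(href)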

To get all the HREFs within an article, you can do the following:

hrefs = article.find_elements_by_xpath('.//a')
# OR: article.find_elements_by_css_selector('a')

for href in hrefs:
    print(href.get_attribute('href'))
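One thing to watch with the XPath: when find_elements_by_xpath is called on an element, an expression that starts with '//' still searches the entire page; the leading dot is what scopes the search to that element. A quick illustration:

all_links = article.find_elements_by_xpath('//a')       # every <a> on the whole page
article_links = article.find_elements_by_xpath('.//a')  # only <a> tags inside this article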

To move the project forward, though, the following might help:

hrefs = article.find_elements_by_xpath('.//a')
links = [href.get_attribute("href") for href in hrefs]

for link in links:
    driver.get(link)
    # Add every word in the article to a dictionary, with the word as the key
    # and the number of times it occurs as the value
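The counting step that the comment describes could look like the sketch below. It uses collections.Counter from the standard library instead of a hand-rolled dict, and get_article_text() is a hypothetical placeholder for however you extract the body text:

from collections import Counter

word_counts = Counter()

for link in links:
    driver.get(link)
    # get_article_text() is a placeholder, not a real helper from this thread
    text = get_article_text(driver)
    word_counts.update(text.lower().split())

# the ten most frequent words across all articles
print(word_counts.most_common(10))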

To improve on my previous answer, I've written a complete solution to your problem:

from selenium import webdriver

url = 'https://ad.nl'

#Set up selenium driver
driver = webdriver.Chrome()
driver.get(url)

#Click the accept cookies button
btn = driver.find_element_by_name('action')
btn.click()

#Get the links of all articles
article_elements = driver.find_elements_by_xpath('//a[@class="ankeiler__link"]')
links = [link.get_attribute('href') for link in article_elements]

#Create a dictionary for every word in the articles
words = dict()

#Iterate through every article
for link in links:
    #Get the article
    driver.get(link)

    #get the elements that are the body of the article
    article_elements = driver.find_elements_by_xpath('//*[@class="article__paragraph"]')

    #Initialise an empty string
    article_text = ''

    #Add all the text from the elements to one string
    for element in article_elements:
        article_text += element.text + " "

    #Convert all characters to lower case
    article_text = article_text.lower()

    #Remove every character that is not a lowercase a-z letter or a space
    #(note that this also strips digits and accented letters)
    for char in article_text:
        if (ord(char) < 97 or ord(char) > 122) and ord(char) != 32:
            article_text = article_text.replace(char, "")

    #Split the article into words
    for word in article_text.split(" "):
        #If the word is already in the dictionary, update the count
        if word in words:
            words[word] += 1
        #Otherwise make a new entry
        else:
            words[word] = 1

#The raw dictionary is very large, so it is sorted below and only the top 10 shown
#print(words)

#Sort words by most used
most_used = sorted(words.items(), key=lambda x: x[1],reverse=True)

#Print the top 10 most used words (slicing avoids an IndexError if there are fewer than 10)
print("TOP 10 MOST USED: ")
for word_count in most_used[:10]:
    print(word_count)

driver.quit()

This works fine for me; if you run into any errors, let me know.
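One caveat: the ord() based filter above also strips accented letters, which occur often in Dutch text on ad.nl. A regex-based cleanup would keep them; a minimal sketch, assuming only the standard-library re module:

import re

# Keep lowercase letters (including the Latin-1 accented ones) and spaces,
# and drop everything else; the two accented ranges skip the ÷ sign (U+00F7)
article_text = re.sub(r'[^a-zà-öø-ÿ ]', '', article_text.lower())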
