Scraping RSS/XML from multiple URLs with namespaces using Python ElementTree

Posted 2024-05-12 19:53:02


I'm building a scraper that loops through a list of URLs with RSS feeds and retrieves the title of each blog post, the date of the post, and the link to the post. The results are later submitted to PostgreSQL. I based this on another script I wrote that retrieves XML without namespaces. I think my biggest conceptual problem is that I have so many sites, and many of them use different namespaces, so I don't understand how to retrieve these different feeds in an elegant way.

Here are some examples of the XML I'm struggling with:

    <rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
    <rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <rss xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <rss xmlns:sy="http://purl.org/rss/1.0/modules/syndication/">
    <rss xmlns:georss="http://www.georss.org/georss">
    <rss xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
    <rss xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" version="2.0">
    <rss xmlns:wfw="http://wellformedweb.org/CommentAPI/">

In fact, one of the feeds declares all of these XML namespaces in a single rss tag:

    <rss xmlns:content="http://purl.org/rss/1.0/modules/content/" 
    xmlns:wfw="http://wellformedweb.org/CommentAPI/" 
    xmlns:dc="http://purl.org/dc/elements/1.1/" 
    xmlns:atom="http://www.w3.org/2005/Atom" 
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" 
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/" 
    xmlns:georss="http://www.georss.org/georss" 
    xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" version="2.0"> 
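For reference, my understanding of why the comparisons fail: ElementTree reports namespaced tags in Clark notation (`{uri}localname`), and in RSS 2.0 the direct child of `<rss>` is `<channel>` rather than the items themselves. A minimal sketch with made-up feed content, not one of my real URLs:

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document (made-up content, for illustration only).
rss_sample = """<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Example feed</title>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
    </item>
  </channel>
</rss>"""

rss_root = ET.fromstring(rss_sample)

# The direct child of <rss> is <channel>, so iterating over the root
# never yields "entry" or "items" -- which is why LINKS comes back empty.
print([child.tag for child in rss_root])         # ['channel']

# Unprefixed RSS 2.0 elements carry no namespace, so iter("item") finds them.
print([el.tag for el in rss_root.iter("item")])  # ['item']

# An Atom feed declares a default namespace, so every tag is expanded
# to '{uri}localname' and a plain comparison like x.tag == "entry" fails.
atom_sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Hello</title></entry>
</feed>"""
atom_root = ET.fromstring(atom_sample)
print([child.tag for child in atom_root])
# ['{http://www.w3.org/2005/Atom}entry']
```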

Here is the code for my scraper. The problem starts at the second try block:

import asyncio
import httpx
import xml.etree.ElementTree as ET
import psycopg2
import pdb

#open initial connection
conn = psycopg2.connect("")

#open initial cursor
cur = conn.cursor()

URLS =  [
        "https://www.sitename1.com/feed.xml",
        "https://www.sitename2.com/atom.xml",
        "https://www.sitename3.com/feed.xml",
        "https://www.sitename4.com/index.xml",
        "https://www.sitename5.com/links/rss.xml",
        "https://www.sitename6.com/blog/rss.xml",
        "http://www.sitename7.com/rss.xml" ] 
        
        # there's at least 10 more sites after this, but you get the picture.

async def main():

    async with httpx.AsyncClient() as client:
        for url in URLS:
            response = await client.get(url)
            
            try:
                root = ET.fromstring(response.text)
                print("ROOT:", root)
                    
            except:
                continue
            try:
                links = [x for x in root if x.tag in ("entry", "items")]
                #links = [x for x in root if x.tag in ("entry", "items")]
                print("LINKS:", links)
            except:
                print("URL {} is rejected".format(url))
                continue

            for link in links:
                title = [x.text for x in link if x.tag == "title"]
                link_url = [x.attrib["href"] for x in link if x.tag == "link"]
                if title and link_url:
                    print("Found {} with HREF {}".format(title, link_url))
                    #cur.execute("INSERT INTO posts (host_title, post_url) VALUES (%s, %s)", 
                            #(title[0], link_url[0]))
                    #conn.commit()
                    print("committed")
                    print(f"{title} and {link_url} submitted to database.")
                    
    cur.execute("SELECT * FROM posts;")
    rows = cur.fetchall()
    for r in rows:
        print(f"{r[0]} and {r[1]}")
    cur.close()
    conn.close()  

if __name__ == '__main__':
    asyncio.run(main())

Here is the current output. I've stopped it from throwing exceptions, but now it just prints an empty list — no data is retrieved:

    ROOT: <Element 'rss' at 0x7f4b249a7220>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b2528ab30>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b249707c0>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b24978130>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b249990e0>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b249a7d60>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b249bb860>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b249c3720>
    LINKS: []
    ROOT: <Element 'rss' at 0x7f4b2497d270>
    LINKS: []

I believe the problem stems from this line of code:

 try:
      links = [x for x in root if x.tag[1] in ("entry", "items")]
      #links = [x for x in root if x.tag in ("channel", "items")]
      print("LINKS:", links)
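For reference, the kind of namespace-agnostic matching I'm aiming for would compare only the local part of each tag. A minimal sketch of that idea — the `localname` helper and the Atom snippet are made up for illustration:

```python
import xml.etree.ElementTree as ET

def localname(tag: str) -> str:
    # ElementTree encodes namespaces as '{uri}localname'; keep only the localname.
    return tag.rsplit("}", 1)[-1]

# A made-up Atom snippet standing in for one of the real feeds.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Hello</title>
    <link href="https://example.com/hello"/>
  </entry>
</feed>"""

root = ET.fromstring(sample)

# Collect RSS <item> and Atom <entry> elements anywhere in the tree,
# regardless of which namespaces the feed declares.
posts = [el for el in root.iter() if localname(el.tag) in ("item", "entry")]
print(len(posts))  # 1

for post in posts:
    title = next((c.text for c in post if localname(c.tag) == "title"), None)
    # Atom puts the URL in link/@href; RSS 2.0 puts it in the <link> text.
    href = next((c.attrib.get("href") or c.text
                 for c in post if localname(c.tag) == "link"), None)
    print(title, href)  # Hello https://example.com/hello
```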

How would one go about this? I've gone through the Python docs, read Stack Overflow threads, browsed plenty of blog posts, and tried all sorts of solutions, to no avail. Any help would be greatly appreciated.

