Create a blog summary in Python?

2 votes
3 answers
866 views
Asked 2025-04-16 13:16

Are there any good libraries or regular expressions for converting a blog post into a blog summary? I'd like the summary to show the first four sentences, the first paragraph, or the first few characters... I'm not really sure which would be best. Ideally, it would keep some HTML formatting tags such as <a>, <b>, <u>, and <i>, but strip every other HTML tag, along with any JavaScript and CSS.

More specifically, the input would be a single HTML string representing an entire blog post. The output should be an HTML string containing the first few sentences, paragraphs, or characters, with all potentially unsafe HTML tags removed. In Python, please.

3 Answers

0

You will need to parse the HTML. A great tool for that is BeautifulSoup. It lets you strip specific tags and extract the content (i.e., the text) between them. From there, cutting the text down to four sentences is fairly straightforward, though I'd suggest capping it at a fixed number of characters instead, since sentence length can vary a lot.
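
For reference, a minimal sketch of that approach with BeautifulSoup 4 (the tag whitelist and the 400-character cutoff here are illustrative choices, not something the answer prescribes):

from bs4 import BeautifulSoup

ALLOWED_TAGS = {"a", "b", "u", "i"}

def summarize(html, max_chars=400):
    soup = BeautifulSoup(html, "html.parser")
    # Drop script/style elements entirely, contents included
    for tag in soup(["script", "style"]):
        tag.decompose()
    # Unwrap every other disallowed tag, keeping its inner text
    for tag in soup.find_all(True):
        if tag.name not in ALLOWED_TAGS:
            tag.unwrap()
    text = str(soup)
    # A naive character cut can split a tag in half; acceptable for a sketch
    return text[:max_chars] + ("..." if len(text) > max_chars else "")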

1

I ended up writing my own blog summarizer on top of the gdata library; it fetches a Blogspot blog on Google App Engine (porting it to other platforms wouldn't be hard). The code is below. Before using it, set the constant blog_id_constant, then call get_blog_info, which returns a dictionary containing the blog summaries.

I would not recommend using this code to summarize arbitrary blogs from around the internet, because it may not strip all unsafe HTML from the content. For a simple blog that you write yourself, though, the code below should work.
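
If you need stricter sanitization than the parser below provides, a whitelist-based cleaner such as the bleach library is a safer bet than hand-rolled stripping. A minimal sketch (post_html is a placeholder for the raw post HTML):

import bleach

safe_html = bleach.clean(
    post_html,                    # raw blog-post HTML (placeholder variable)
    tags=["a", "b", "u", "i"],    # keep only these tags
    attributes={"a": ["href"]},   # and only href on links
    strip=True,                   # drop disallowed tags instead of escaping them
)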

Feel free to copy it, and if you spot any bugs or want improvements, let me know in the comments. (Sorry about the semicolons.)

import sys
import os
import logging
import time
import urllib
from HTMLParser import HTMLParser
from django.core.cache import cache
# DownloadError is raised by App Engine's urlfetch when a fetch fails
from google.appengine.api.urlfetch import DownloadError
# Import the Blogger API
sys.path.insert(0, 'gdata.zip')
from gdata import service

Months = ["Jan.", "Feb.", "Mar.", "Apr.", "May", "June", "July", "Aug.", "Sept.", "Oct.", "Nov.", "Dec."];
blog_id_constant = -1 # YOUR BLOG ID HERE
blog_pages_at_once = 5

# -----------------------------------------------------------------------------
#   Blogger 
class BlogHTMLSummarizer(HTMLParser):
    '''
    An HTML parser which only grabs X number of words and removes
    all tags except for certain safe ones.
    '''

    def __init__(self, max_words = 80):
        HTMLParser.__init__(self)  # initialize the base parser state
        self.max_words = max_words
        self.allowed_tags = ["a", "b", "u", "i", "br", "div", "p", "img", "li", "ul", "ol"]
        if self.max_words < 80:
            # If it's really short, don't include layout tags
            self.allowed_tags = ["a", "b", "u", "i"]
        self.out_html = ""
        self.num_words = 0
        self.no_more_data = False
        self.no_more_tags = False
        self.tag_stack = []

    def handle_starttag(self, tag, attrs):
        if not self.no_more_data and tag in self.allowed_tags:
            # Rebuild the tag with its attributes (no stray space when attrs is empty)
            val = "<%s%s>" % (tag,
                "".join(" %s='%s'" % (a, b) for (a, b) in attrs))
            self.tag_stack.append(tag)
            self.out_html += val

    def handle_data(self, data):
        if self.no_more_data:
            return
        words = data.split(" ")
        if self.num_words + len(words) >= self.max_words:
            # Hit the word limit: truncate, add an ellipsis, and stop collecting
            words = words[:self.max_words - self.num_words]
            words.append("...")
            self.no_more_data = True
        self.out_html += " ".join(words)
        self.num_words += len(words)

    def handle_endtag(self, tag):
        if self.no_more_data and not self.tag_stack:
            self.no_more_tags = True
        if not self.no_more_tags and self.tag_stack:
            if tag != self.tag_stack[-1]:
                logging.warning("mixed up blogger tags")
            else:
                self.out_html += "</%s>" % tag
                self.tag_stack.pop()

def get_blog_info(short_summary = False, page = 1, year = "", month = "", day = "", post = None):
    '''
    Returns summaries of several recent blog posts to be displayed on the front page
        page: which page of blog posts to get. Starts at 1.
    '''
    blogger_service = service.GDataService()
    blogger_service.source = 'exampleCo-exampleApp-1.0'
    blogger_service.service = 'blogger'
    blogger_service.account_type = 'GOOGLE'
    blogger_service.server = 'www.blogger.com'
    blog_dict = {}

    # Do the common stuff first
    query = service.Query()
    query.feed = '/feeds/' + str(blog_id_constant) + '/posts/default'
    query.order_by = "published"
    blog_dict['entries'] = []

    def get_common_entry_data(entry, summarize_len = None):
        '''
        Convert an entry to a dictionary object.
        '''
        content = entry.content.text
        if summarize_len != None:
            parser = BlogHTMLSummarizer(summarize_len)
            parser.feed(entry.content.text)
            content = parser.out_html
        # entry.published.text looks like 2010-01-01T12:00:00.000-08:00;
        # drop the trailing ".000-08:00" before parsing
        pubstr = time.strptime(entry.published.text[:-10], '%Y-%m-%dT%H:%M:%S')
        safe_title = entry.title.text.replace(" ","_")
        for c in ":,.<>!@#$%^&*()+-=?/'[]{}\\\"":
            # remove nasty characters
            safe_title = safe_title.replace(c, "")
        link = "%d/%d/%d/%s/"%(pubstr.tm_year, pubstr.tm_mon, pubstr.tm_mday, 
            urllib.quote_plus(safe_title))
        return {
                'title':entry.title.text,
                'alllinks':[x.href for x in entry.link] + [link], #including blogger links
                'link':link,
                'content':content,
                'day':pubstr.tm_mday,
                'month':Months[pubstr.tm_mon-1],
                'summary': True if summarize_len != None else False,
            }

    def get_blogger_feed(query):
        feed = cache.get(query.ToUri())
        if not feed:
            logging.info("GET Blogger Page: " + query.ToUri())
            try:
                feed = blogger_service.Get(query.ToUri())
            except DownloadError:
                logging.error("Can't download blog, rate limited? %s" % str(query.ToUri()))
                return None
            except Exception, e:
                # web_exception is an app-specific error reporter defined elsewhere
                web_exception('get_blogger_feed', e)
                return None
            cache.set(query.ToUri(), feed, 3600)
        return feed

    def _in_one(a, allBs):
        # Return true if a is in one of allBs
        for b in allBs:
            if a in b:
                return True
        return False

    def _get_int(i):
        try:
            return int(i)
        except ValueError:
            return None
    (year, month, day) = (_get_int(year), _get_int(month), _get_int(day))

    if not short_summary and year and month and day:
        # Get one more than we need so we can see if we have more
        query.published_min = "%d-%02d-%02dT00:00:00-08:00"%(year, month, day)
        query.published_max = "%d-%02d-%02dT23:59:59-08:00"%(year, month, day)
        feed = get_blogger_feed(query)
        if not feed:
            return {}
        blog_dict['detail_view'] = True
        blog_dict['entries'] = map(lambda e: get_common_entry_data(e, None), feed.entry)
    elif not short_summary and year and month and not day:
        # Get one more than we need so we can see if we have more
        query.published_min = "%d-%02d-%02dT00:00:00-08:00"%(year, month, 1)
        query.published_max = "%d-%02d-%02dT23:59:59-08:00"%(year, month, 31)
        feed = get_blogger_feed(query)
        if not feed:
            return {}
        blog_dict['detail_view'] = True
        blog_dict['entries'] = map(lambda e: get_common_entry_data(e, None), feed.entry)
        if post:
            blog_dict['entries'] = filter(lambda f: _in_one(post, f['alllinks']), blog_dict['entries'])
    elif short_summary:
        # Get a summary of all posts
        query.max_results = str(3) 
        query.start_index = str(1)
        feed = get_blogger_feed(query)
        if not feed:
            return {}
        feed.entry = feed.entry[:3]
        blog_dict['entries'] = map(lambda e: get_common_entry_data(e, 18), feed.entry)
    else:
        # Get a summary of all posts
        try:
            page = int(page)
        except ValueError:
            page = 1

        # Get one more than we need so we can see if we have more
        query.max_results = str(blog_pages_at_once + 1) 
        query.start_index = str((page - 1)* blog_pages_at_once + 1)
        logging.info("GET Blogger Page: " + query.ToUri())
        feed = blogger_service.Get(query.ToUri())

        has_older = len(feed.entry) > blog_pages_at_once
        feed.entry = feed.entry[:blog_pages_at_once]
        if page > 1:
            blog_dict['newer_page'] = str(page-1)
        if has_older:
            blog_dict['older_page'] = str(page+1)       
        blog_dict['entries'] = map(lambda e: get_common_entry_data(e, 80), feed.entry)

    return blog_dict
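
For context, a hypothetical usage sketch (it assumes blog_id_constant has been set to a real Blogger ID; the dictionary keys are the ones built in get_common_entry_data above):

# Page 1 of front-page posts, summarized to 80 words each
blog_info = get_blog_info(page=1)
for entry in blog_info.get('entries', []):
    print entry['title'], entry['link']
    print entry['content']

# Or a three-post sidebar block, summarized to 18 words each
sidebar = get_blog_info(short_summary=True)
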
1

If you are going to look at the HTML, you will need to parse it. Besides the previously mentioned BeautifulSoup, lxml.html also has some nice HTML-handling tools.
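
For instance, a minimal lxml sketch (Cleaner lives in lxml.html.clean; allow_tags requires remove_unknown_tags=False, and raw_html is a placeholder for the post):

import lxml.html
from lxml.html.clean import Cleaner

cleaner = Cleaner(
    allow_tags=["a", "b", "u", "i"],  # whitelist, as in the question
    remove_unknown_tags=False,        # must be False when allow_tags is given
    scripts=True,                     # strip <script> elements
    style=True,                       # strip style tags and attributes
)
doc = lxml.html.fromstring(raw_html)  # raw_html: the post's HTML (placeholder)
summary_html = lxml.html.tostring(cleaner.clean_html(doc))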

If you are dealing with blogs, though, you will probably find it easier to work with the RSS or Atom feeds. Feedparser is excellent and makes handling feeds easy. You gain compatibility and durability (because RSS is better defined and changes far less than HTML), but if the feed doesn't contain the information you need, you're out of luck.
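
A minimal feedparser sketch (the feed URL is a placeholder; many feeds already ship a ready-made excerpt in entry.summary):

import feedparser

feed = feedparser.parse("http://example.blogspot.com/feeds/posts/default")
for entry in feed.entries[:4]:
    print entry.title
    print entry.summary   # often an excerpt the publisher already prepared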
