How do I remove &lt;p&gt; tags with BeautifulSoup and return only the text?
I want to replace every &lt;p&gt; tag with its contents, as part of some other processing I'm doing with BeautifulSoup.
This is slightly different from a similar question about extracting text.
Example input:
... </p> ... <p>Here is some text</p> ... and some more
Desired output:
... ... Here is some text ... and some more
How would I do this only inside divs whose class is "content"?
I don't think I've quite got the hang of BeautifulSoup yet!
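To be concrete, the kind of call I'm imagining (I'm guessing at BeautifulSoup 4's unwrap() here, which is documented as replacing a tag with its contents) is roughly:

from bs4 import BeautifulSoup

html = '<div class="content"><p>Here is some text</p> and some more</div>'
soup = BeautifulSoup(html, 'html.parser')

# unwrap() should replace each <p> tag with whatever it contains
for div in soup.find_all('div', class_='content'):
    for p in div.find_all('p'):
        p.unwrap()

print(soup)   # <div class="content">Here is some text and some more</div>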
1 Answer
I haven't used BeautifulSoup, but it should work much like the built-in HTMLParser library. This is a class I wrote myself to parse incoming HTML and convert the tags into a different markup format I needed.
# Python 3: HTMLParser lives in html.parser, htmlentitydefs is now html.entities
from html.parser import HTMLParser
import html.entities

class BaseHTMLProcessor(HTMLParser):
    def __init__(self):
        # convert_charrefs=False so handle_charref/handle_entityref still fire
        HTMLParser.__init__(self, convert_charrefs=False)

    def reset(self):
        # extend (called by HTMLParser.__init__)
        self.pieces = []
        HTMLParser.reset(self)

    def handle_starttag(self, tag, attrs):
        # called for each start tag
        # attrs is a list of (attr, value) tuples
        # e.g. for <pre class="screen">, tag="pre", attrs=[("class", "screen")]
        # Rather than reconstructing the original tag, map the tags we care
        # about to the output markup we need and drop everything else.
        # Note that improperly embedded non-HTML code (like client-side Javascript)
        # may be parsed incorrectly by the ancestor, causing runtime script errors.
        # All non-HTML code must be enclosed in HTML comment tags (<!-- code -->)
        # to ensure that it will pass through this parser unaltered (in handle_comment).
        if tag == 'b':
            v = r'%b[1]'
        elif tag == 'li':
            v = r'%f[1]'
        elif tag == 'strong':
            v = r'%b[1]%i[1]'
        elif tag == 'u':
            v = r'%u[1]'
        elif tag == 'ul':
            v = r'%n%'
        else:
            v = ''
        self.pieces.append(v)

    def handle_endtag(self, tag):
        # called for each end tag, e.g. for </pre>, tag will be "pre"
        # Emit the closing marker for the tags we map.
        if tag == 'li':
            v = r'%f[0]'
        elif tag == 'b':
            v = r'%b[0]'
        elif tag == 'strong':
            v = r'%b[0]%i[0]'
        elif tag == 'u':
            v = r'%u[0]'
        elif tag == 'ul':
            v = ''
        elif tag == 'br':
            v = r'%n%'
        else:
            v = ''  # it matched but we don't know what it is! assume it's invalid html and strip it
        self.pieces.append(v)

    def handle_charref(self, ref):
        # called for each character reference, e.g. for "&#160;", ref will be "160"
        # Reconstruct the original character reference.
        self.pieces.append("&#%(ref)s;" % locals())

    def handle_entityref(self, ref):
        # called for each entity reference, e.g. for "&copy;", ref will be "copy"
        # Reconstruct the original entity reference.
        self.pieces.append("&%(ref)s" % locals())
        # standard HTML entities are closed with a semicolon; other entities are not
        if ref in html.entities.entitydefs:
            self.pieces.append(";")

    def handle_data(self, text):
        # called for each block of plain text, i.e. outside of any tag and
        # not containing any character or entity references
        # Normalise curly apostrophes and strip per-line whitespace.
        output = text.replace("\u2019", "'").split('\r\n')
        for count, item in enumerate(output):
            output[count] = item.strip()
        self.pieces.append(''.join(output))

    def handle_comment(self, text):
        # called for each HTML comment, e.g. <!-- insert Javascript code here -->
        # Reconstruct the original comment.
        # It is especially important that the source document enclose client-side
        # code (like Javascript) within comments so it can pass through this
        # processor undisturbed; see comments in handle_starttag for details.
        self.pieces.append("<!--%(text)s-->" % locals())

    def handle_pi(self, text):
        # called for each processing instruction, e.g. <?instruction>
        # Reconstruct original processing instruction.
        self.pieces.append("<?%(text)s>" % locals())

    def handle_decl(self, text):
        # called for the DOCTYPE, if present, e.g.
        # <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
        # "http://www.w3.org/TR/html4/loose.dtd">
        # Reconstruct original DOCTYPE
        self.pieces.append("<!%(text)s>" % locals())

    def output(self):
        """Return processed HTML as a single string"""
        return "".join(self.pieces)
To use the class, just import it and then add the following lines to your code:
parser = BaseHTMLProcessor()
for line in input:          # 'input' here is your iterable of HTML lines, e.g. an open file
    parser.feed(line)
parser.close()
output = parser.output()
parser.reset()
print(output)
It works by feeding the input to the parser in chunks. Each time a piece of HTML is encountered, the appropriate handler method is called, so something like <p><b>Here is some bold text!</b></p> triggers handle_starttag twice, then handle_data once, and finally handle_endtag twice. At the end, calling the output method joins all the pieces back together.
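To tie this back to the original question, here is a minimal sketch of the same handler flow aimed at the <p>-stripping task. It is not part of the class above: the name PStripper is made up, it assumes reasonably well-formed HTML with an exact class="content" attribute, and it drops comments, doctypes, etc. for brevity.

from html.parser import HTMLParser

class PStripper(HTMLParser):
    """Hypothetical sketch: echo the document, but drop <p>/</p> tags
    (keeping their text) while inside a <div class="content">."""

    def __init__(self):
        HTMLParser.__init__(self, convert_charrefs=False)
        self.pieces = []
        self.div_stack = []   # one bool per open <div>: is it class="content"?

    def handle_starttag(self, tag, attrs):
        if tag == 'div':
            self.div_stack.append(('class', 'content') in attrs)
        if tag == 'p' and any(self.div_stack):
            return            # swallow the tag itself, keep its contents
        attr_text = ''.join(' %s="%s"' % (k, v if v is not None else '') for k, v in attrs)
        self.pieces.append('<%s%s>' % (tag, attr_text))

    def handle_endtag(self, tag):
        in_content = any(self.div_stack)
        if tag == 'div' and self.div_stack:
            self.div_stack.pop()
        if tag == 'p' and in_content:
            return
        self.pieces.append('</%s>' % tag)

    def handle_data(self, data):
        self.pieces.append(data)

    def handle_entityref(self, ref):
        self.pieces.append('&%s;' % ref)

    def handle_charref(self, ref):
        self.pieces.append('&#%s;' % ref)

    def output(self):
        return ''.join(self.pieces)

Used on the example from the question, it would look like this:

stripper = PStripper()
stripper.feed('<div class="content"><p>Here is some text</p> and some more</div>')
stripper.close()
print(stripper.output())   # <div class="content">Here is some text and some more</div>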