Extracting text between two <p> elements



I am trying to extract the data between the two elements "Executives" and "Analysts", but I don't know how to proceed. My HTML is:

<div class="content_part hid" id="article_participants">
<p>Wabash National Corporation (NYSE:<a title="" href="http://seekingalpha.com/symbol/wnc">WNC</a>)</p><p>Q4 2014 <span class="transcript-search-span" style="background-color: yellow;">Earnings</span> Conference <span class="transcript-search-span" style="background-color: rgb(243, 134, 134);">Call</span></p><p>February 04, 2015 10:00 AM ET</p>
<p><strong>Executives</strong></p>
<p>Mike Pettit - Vice President of Finance and Investor Relations</p>
<p>Richard Giromini - President and Chief Executive Officer</p>
<p>Jeffery Taylor - Senior Vice President and Chief Financial Officer</p>
<p><strong>Analysts</strong></p>

I want to do this for a whole bunch of files; my code so far is:

from bs4 import BeautifulSoup
import requests
import textwrap
import os
from lxml import html
import csv

directory ='C:/Research syntheses - Meta analysis/SeekingAlpha'
for filename in os.listdir(directory):
    if filename.endswith('.html'):
        fname = os.path.join(directory,filename)
        with open(fname, 'r') as f:
            page = f.read()
            soup = BeautifulSoup(page, 'html.parser')  # parse the file contents once
            match = soup.find('div', class_='content_part hid', id='article_participants')
            print(match)
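
For reference, here is a minimal sketch (just an illustration, not necessarily the best approach) of how the extraction could continue with BeautifulSoup once match has been found, assuming the participant names are the <p> siblings sitting between the <p><strong>Executives</strong></p> and <p><strong>Analysts</strong></p> markers shown above:

# continuing from the loop above, where 'match' is the participants <div>
executives = []
if match is not None:
    start = match.find('strong', string='Executives')  # the Executives marker
    if start is not None:
        # walk the <p> siblings after the marker until the Analysts marker appears
        for p in start.find_parent('p').find_next_siblings('p'):
            if p.find('strong', string='Analysts'):
                break
            executives.append(p.get_text(strip=True))
print(executives)  # e.g. ['Mike Pettit - Vice President of Finance and Investor Relations', ...]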

I am new to Python, so please bear with me.

The output I would like is: Example output

The headline can be found in the following HTML:

<div class="page_header_email_alerts" id="page_header">
      <h1>
        <span itemprop="headline">Wabash National's (WNC) CEO Richard Giromini on Q4 2014 Results - Earnings Call Transcript</span>
              </h1>

      <div id="article_info">
        <div class="article_info_pos">
          <span itemprop="datePublished" content="2015-02-04T21:48:03Z">Feb.  4, 2015  4:48 PM ET</span>
          <span id="title_article_comments"></span>
          <span class="print_hide"><span class="print_hide">&nbsp;|&nbsp;</span> <span>About:</span> <span id="about_primary_stocks"><a title="Wabash National Corporation" href="/symbol/WNC" sasource="article_primary_about_trc">Wabash National Corporation (WNC)</a></span></span>
          <span class="author_name_for_print">by: SA Transcripts</span>
            <span id="second_line_wrapper"></span>
        </div>
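
A minimal BeautifulSoup sketch for pulling that headline out (assuming the structure above and reusing the soup object from the loop) might be:

headline_span = soup.find('span', itemprop='headline')  # the headline <span>
headline = headline_span.get_text(strip=True) if headline_span else None
print(headline)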




3 Answers

This is not the most efficient way, but you can try:

file = open(File_Path, 'r')  # open the file (be careful with the encoding)
text = file.readlines()      # read the content of the file line by line
file.close()                 # close the file
Goal = []  # will hold all the lines between Executives and Analysts
for indice, line in enumerate(text):
    if "<p><strong>Executives</strong></p>" in line:
        """
        Once the line containing "<p><strong>Executives</strong></p>" is found,
        append every following line to Goal until a line containing
        "<p><strong>Analysts</strong></p>" appears.
        """
        i = 1
        while not ("<p><strong>Analysts</strong></p>" in text[indice + i]):
            Goal.append(text[indice + i])
            i += 1
        break
print(Goal)

The most important part is the main loop, so you can adapt it to your own program.

If you know the number of lines between Executives and Analysts, you can replace the while loop with:

Goal = text[indice+1:indice+<number_of_line + 1>]

and remove the line: i = 1

This way you keep the tags (e.g. <p>..</p>) and the "\n" on every line.

You can remove all the "\n" in a line with the built-in replace:

line = line.replace("\n","")

There are a few ways to get the data between the tags, for example handle_data in HTMLParser, or you can use the findall function from re:

data_in_line = re.findall(r'>(.*?)<',line)

data_in_line will be a list of everything matched by the pattern r'>(.*?)<', i.e. all the data between '>' and '<'.

As an example, with '<p>atest</p>'

it will return ['atest'].
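
Putting the pieces together, a small sketch (reusing the Goal list built above and keeping only the name before the dash) could look like this:

import re

names = []
for line in Goal:
    # strip the tags from each collected line, then keep only the part before the dash
    for data in re.findall(r'>(.*?)<', line):
        if data.strip():
            names.append(data.split('-')[0].strip())
print(names)  # e.g. ['Mike Pettit', 'Richard Giromini', 'Jeffery Taylor']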

Does this help you?

@dabinsou has a good solution, but here is a very simple way that doesn't need any complicated libraries:

import re
from bs4 import BeautifulSoup as bs

html = """<div class="content_part hid" id="article_participants">
<p>Wabash National Corporation (NYSE:<a title="" href="http://seekingalpha.com/symbol/wnc">WNC</a>)</p><p>Q4 2014 <span class="transcript-search-span" style="background-color: yellow;">Earnings</span> Conference <span class="transcript-search-span" style="background-color: rgb(243, 134, 134);">Call</span></p><p>February 04, 2015 10:00 AM ET</p>
<p><strong>Executives</strong></p>
<p>Mike Pettit - Vice President of Finance and Investor Relations</p>
<p>Richard Giromini - President and Chief Executive Officer</p>
<p>Jeffery Taylor - Senior Vice President and Chief Financial Officer</p>
<p><strong>Analysts</strong></p>"""

soup = re.search(r"(<strong>Executives(.+))<strong>", html, re.DOTALL)
print(soup.group(1))

Result (HTML):

<strong>Executives</strong></p>
<p>Mike Pettit - Vice President of Finance and Investor Relations</p>
<p>Richard Giromini - President and Chief Executive Officer</p>
<p>Jeffery Taylor - Senior Vice President and Chief Financial Officer</p>
<p>

Result (text):

print ( bs(soup.group(1), "lxml").get_text() )

Executives
Mike Pettit - Vice President of Finance and Investor Relations
Richard Giromini - President and Chief Executive Officer
Jeffery Taylor - Senior Vice President and Chief Financial Officer
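
To run this over the whole directory as in the question, a rough sketch (assuming the folder layout from the original loop, and tightening the pattern so it stops at the Analysts marker) could be:

import os
import re
from bs4 import BeautifulSoup as bs

directory = 'C:/Research syntheses - Meta analysis/SeekingAlpha'
for filename in os.listdir(directory):
    if not filename.endswith('.html'):
        continue
    with open(os.path.join(directory, filename), 'r') as f:
        page = f.read()
    m = re.search(r"<strong>Executives(.+?)<strong>Analysts", page, re.DOTALL)
    if m:
        print(filename)
        print(bs(m.group(1), "lxml").get_text())  # plain-text names and titles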

Combining with your code:

import os
from simplified_scrapy.simplified_doc import SimplifiedDoc
directory ='C:/Research syntheses - Meta analysis/SeekingAlpha'
for filename in os.listdir(directory):
  if filename.endswith('.html'):
    fname = os.path.join(directory,filename)
    with open(fname, 'r') as f:
      page=f.read()
      doc = SimplifiedDoc(page)
      # headline text from the header block
      headline = doc.select('div#article_info>span#about_primary_stocks>a>text()')
      # the participants block
      div = doc.select('div#article_participants')
      if not div: continue
      # <p> elements between the Executives and Analysts markers
      ps = div.getElements('p',start='<strong>Executives</strong>',end='<strong>Analysts</strong>')
      Executives = [p.text.split('-')[0].strip() for p in ps]
      # <p> elements after the Analysts marker
      ps = div.getElements('p',start='<strong>Analysts</strong>')
      Analysts = [p.text.split('-')[0].strip() for p in ps]
      print (headline)
      print (Executives)
      print (Analysts)

The code below is an example:

from simplified_scrapy.simplified_doc import SimplifiedDoc
html = '''
<div class="page_header_email_alerts" id="page_header">
  <h1>
    <span itemprop="headline">Wabash National's (WNC) CEO Richard Giromini on Q4 2014 Results - Earnings Call Transcript</span>
  </h1>
  <div id="article_info">
    <div class="article_info_pos">
      <span itemprop="datePublished" content="2015-02-04T21:48:03Z">Feb.  4, 2015  4:48 PM ET</span>
      <span id="title_article_comments"></span>
      <span class="print_hide"><span class="print_hide">&nbsp;|&nbsp;</span> <span>About:</span> <span id="about_primary_stocks"><a title="Wabash National Corporation" href="/symbol/WNC" sasource="article_primary_about_trc">Wabash National Corporation (WNC)</a></span></span>
      <span class="author_name_for_print">by: SA Transcripts</span>
        <span id="second_line_wrapper"></span>
    </div>
  </div>
</div>
<div class="content_part hid" id="article_participants">
<p>Wabash National Corporation (NYSE:<a title="" href="http://seekingalpha.com/symbol/wnc">WNC</a>)</p><p>Q4 2014 <span class="transcript-search-span" style="background-color: yellow;">Earnings</span> Conference <span class="transcript-search-span" style="background-color: rgb(243, 134, 134);">Call</span></p><p>February 04, 2015 10:00 AM ET</p>
<p><strong>Executives</strong></p>
<p>Mike Pettit - Vice President of Finance and Investor Relations</p>
<p>Richard Giromini - President and Chief Executive Officer</p>
<p>Jeffery Taylor - Senior Vice President and Chief Financial Officer</p>
<p><strong>Analysts</strong></p>
<p>Jeffery Taylor - Senior Vice President and Chief Financial Officer</p>
</div>
'''
doc = SimplifiedDoc(html)
headline = doc.select('div#article_info>span#about_primary_stocks>a>text()')
div = doc.select('div#article_participants')
ps = div.getElements('p',start='<strong>Executives</strong>',end='<strong>Analysts</strong>')
Executives = [p.text.split('-')[0].strip() for p in ps]
ps = div.getElements('p',start='<strong>Analysts</strong>')
Analysts = [p.text.split('-')[0].strip() for p in ps]

print (headline)
print (Executives)
print (Analysts)

Result:

Wabash National Corporation (WNC)
[u'Mike Pettit', u'Richard Giromini', u'Jeffery Taylor']
[u'Jeffery Taylor']

There are more examples here: https://github.com/yiyedata/simplified-scrapy-demo/tree/master/doc_examples
