I can successfully chain requests/callbacks across multiple sites, but when I write the CSV I get duplicate items

Posted 2024-04-25 04:00:56


My code does the following:

  1. Parses the source site, Finviz.com, scraping a few fields such as P/E.
  2. Issues follow-up requests with callbacks to two separate Yahoo Finance URLs, parsing each and pulling more information.
  3. Returns the requested items as clean dictionary values that combine the Finviz and the Yahoo data.

That part seems to work. The problem is the output: the spider writes both the Finviz-only rows (P/E, market cap, etc.) and the newly fetched rows that combine Finviz + Yahoo (only the latter is what I want). I can't see why both get emitted, and it leaves my CSV full of duplicates.

class FinvizSpider(CrawlSpider):
    name = "finviz"
    allowed_domains = ["finviz.com", "finance.yahoo.com"]
    start_urls = ["http://finviz.com/screener.ashx?v=152&f=cap_smallover&ft=4&c=0,1,2,6,7,10,11,13,14,45,65"]

    rules = (
        Rule(LxmlLinkExtractor(allow=(r'r=\d+',), restrict_xpaths='//a[@class="tab-link"]'),
             callback="parse_items", follow=True),
    )

    def parse_start_url(self, response):
        return self.parse_items(response)


    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        trs = hxs.select('//table[@bgcolor="#d3d3d3"]/tr')
        items = []
        for tr in trs[1:]:
            item = StockfundamentalsItem()
            item['ticker'] = tr.select('td[2]/a/text()').extract()
            item["marketcap"] = tr.select("td[4]//text()").extract()
            item["pEarnings"] = tr.select("td[5]//text()").extract()
            item["pSales"] = tr.select("td[6]//text()").extract()
            item["pBook"] = tr.select("td[7]//text()").extract()
            item["pFCF"] = tr.select("td[8]//text()").extract()
            item["Div"] = tr.select("td[9]//text()").extract()

            newurl = "http://finance.yahoo.com/q/ks?s=" + item['ticker'][0] + "+Key+Statistics"
            newurl2 = "http://finance.yahoo.com/q/cf?s=" + item['ticker'][0] + "&ql=1"

            yield Request(newurl, meta={'item': item}, callback=self.LinkParse)
            yield Request(newurl2, meta={'item': item}, callback=self.LinkParse2)

            items.append(item)
        return items



    def LinkParse(self, response):
        hxs = HtmlXPathSelector(response)
        enterprise = hxs.select('//table[@class="yfnc_datamodoutline1"]//tr[9]/td[2]/text()').extract()
        item = response.meta['item']
        item['Enterprise'] = [enterprise[0]] 
        return item


    def LinkParse2(self, response):
        hxs = HtmlXPathSelector(response)
        stockpurchases = hxs.select('//table[@class="yfnc_tabledata1"]//tr[23]')
        runningtot = 0 

        tds = (stockpurchases.select("./td/text()")).extract()
        for elements in tds[1:]:
            val = float(elements.strip().replace('-','0').replace(',','').replace('(','-').replace(')',''))
            runningtot = runningtot + val

        item = response.meta['item']

        item['BBY'] = [runningtot] 

        return item
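The duplication can be reproduced without Scrapy at all: every callback that returns an item hands it to the exporter, so the same shared dict is written once per callback that returns it. A minimal stand-in (plain Python; the field values and the `exported` list are illustrative, standing in for the CSV feed exporter):

```python
# Plain-Python stand-in for the spider's flow. Every value a callback
# returns is "exported", mirroring how Scrapy feeds each returned item
# to the feed exporter.
exported = []

def parse_items():
    item = {"ticker": "AAPL", "marketcap": "2.1B"}   # Finviz fields
    return [item]                                     # exported once here...

def link_parse(item):
    item["Enterprise"] = "1.9B"                       # Yahoo page 1 fields
    return item                                       # ...again here...

def link_parse2(item):
    item["BBY"] = 12.5                                # Yahoo page 2 fields
    return item                                       # ...and a third time

for item in parse_items():
    exported.append(dict(item))              # snapshot, as the CSV writer would
    exported.append(dict(link_parse(item)))
    exported.append(dict(link_parse2(item)))

print(len(exported))  # 3 rows, all for the same ticker
```

Three rows come out for one ticker: a Finviz-only row, a partially enriched row, and the fully enriched row, which matches the duplicate pattern described above.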

For example, my output looks like the following (note the pre-Yahoo rows alongside the post-Yahoo rows):

^{pr2}$

Even that would be acceptable. It's messy (I don't mind the mess); I just don't want the duplicates.


1 answer

Answered 2024-04-25 04:00:56

I found the workaround was to issue the second request from inside the first request's callback.

Something like this:

class FinvizSpider(CrawlSpider):
    name = "finviz"
    allowed_domains = ["finviz.com", "finance.yahoo.com"]
    start_urls = ["http://finviz.com/screener.ashx?v=152&f=cap_smallover&ft=4&c=0,1,2,6,7,10,11,13,14,45,65"]

    rules = (
        Rule(LxmlLinkExtractor(allow=(r'r=\d+',), restrict_xpaths='//a[@class="tab-link"]'),
             callback="parse_items", follow=True),
    )

    def parse_start_url(self, response):
        return self.parse_items(response)


    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        trs = hxs.select('//table[@bgcolor="#d3d3d3"]/tr')
        for tr in trs[1:]:
            item = StockfundamentalsItem()
            item['ticker'] = tr.select('td[2]/a/text()').extract()
            item["marketcap"] = tr.select("td[4]//text()").extract()
            item["pEarnings"] = tr.select("td[5]//text()").extract()
            item["pSales"] = tr.select("td[6]//text()").extract()
            item["pBook"] = tr.select("td[7]//text()").extract()
            item["pFCF"] = tr.select("td[8]//text()").extract()
            item["Div"] = tr.select("td[9]//text()").extract()

            # Only the request is yielded here; the item itself is emitted
            # at the end of the callback chain, so nothing is written twice.
            newurl = "http://finance.yahoo.com/q/ks?s=" + item['ticker'][0] + "+Key+Statistics"
            yield Request(newurl, meta={'item': item}, callback=self.LinkParse)



    def LinkParse(self, response):
        hxs = HtmlXPathSelector(response)
        enterprise = hxs.select('//table[@class="yfnc_datamodoutline1"]//tr[9]/td[2]/text()').extract()
        item = response.meta['item']
        item['Enterprise'] = [enterprise[0]]
        # Chain the second Yahoo request instead of returning the item.
        newurl2 = "http://finance.yahoo.com/q/cf?s=" + item['ticker'][0] + "&ql=1"
        yield Request(newurl2, meta={'item': item}, callback=self.LinkParse2)


    def LinkParse2(self, response):
        hxs = HtmlXPathSelector(response)
        stockpurchases = hxs.select('//table[@class="yfnc_tabledata1"]//tr[23]')
        runningtot = 0

        tds = stockpurchases.select("./td/text()").extract()
        for element in tds[1:]:
            # Normalize accounting formatting: '-' becomes zero,
            # parentheses become a negative sign, commas are dropped.
            val = float(element.strip().replace('-', '0').replace(',', '').replace('(', '-').replace(')', ''))
            runningtot += val

        item = response.meta['item']
        item['BBY'] = [runningtot]
        # Terminal callback: the fully enriched item is returned exactly once.
        return item

However, this doesn't seem like the right way to solve it... Is there a proper way to chain multiple requests?
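The chaining flow above can be checked with a Scrapy-free sketch: each callback yields either follow-up "requests" or, in the terminal callback only, the finished item, so exactly one row per ticker reaches the exporter. All names here are illustrative, and a tiny driver loop plays the role of the Scrapy engine:

```python
# Scrapy-free sketch of request chaining. Only the terminal callback
# emits an item; intermediate callbacks yield further "requests".
def parse_items(tickers):
    for t in tickers:
        item = {"ticker": t}
        yield ("request", link_parse, item)    # like Request(url, meta={'item': item})

def link_parse(item):
    item["Enterprise"] = "n/a"                 # fields from the first Yahoo page
    yield ("request", link_parse2, item)       # chain the second request

def link_parse2(item):
    item["BBY"] = 0.0                          # fields from the second Yahoo page
    yield ("item", item)                       # the ONLY place an item is emitted

def run(spider_output):
    """Minimal engine: schedule requests, collect emitted items."""
    exported, queue = [], list(spider_output)
    while queue:
        kind, *payload = queue.pop(0)
        if kind == "request":
            callback, item = payload
            queue.extend(callback(item))       # invoke the chained callback
        else:
            exported.append(payload[0])
    return exported

rows = run(parse_items(["AAPL", "MSFT"]))
print(len(rows))  # one fully enriched row per ticker
```

Because the item travels through the chain and is emitted only at the end, there is no Finviz-only row and no duplicate, which is why the chained version above fixes the CSV.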
