Fetching multiple pages with POST and cookies in Python

0 votes
1 answer
1151 views
Asked 2025-04-15 17:16

I have a test site that I want to scrape. The site uses the POST method and also involves cookies. (I'm not certain the cookies are actually required, but I suspect they are...)

The application displays a page with a "next" button that generates the subsequent pages. I used LiveHttpHeaders/Firefox to determine what POST data the query should contain and how the cookies get set. I also verified that the pages stop working if cookies are disabled in the browser.

I'm trying to figure out what I'm missing or getting wrong in my test. The sample code shows the query/POST data for pages 1 and 2, which are the pages I'm trying to scrape.

I've searched the net and tried a number of different approaches, so I'm fairly sure I'm missing something simple...

Any thoughts or comments are welcome...

#!/usr/bin/python
# Test script: fetch successive result pages from the class-search app,
# carrying cookies between POST requests.
import os
import sys
import urllib2
import cookielib

COOKIEFILE = 'cookies.lwp'

# Build a cookie jar and install an opener that uses it, so every
# subsequent urllib2.urlopen() call shares the same cookies.
cj = cookielib.LWPCookieJar()
if os.path.isfile(COOKIEFILE):
    cj.load(COOKIEFILE)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}


if __name__ == "__main__":
    baseurl = "https://pisa.ucsc.edu/class_search/index.php"

    # POST data for page 1 of the results (captured with LiveHttpHeaders)
    query = "action=results&binds%5B%3Aterm%5D=2100&binds%5B%3Areg_status%5D=O&binds%5B%3Asubject%5D=&binds%5B%3Acatalog_nbr_op%5D=%3D&binds%5B%3Acatalog_nbr%5D=&binds%5B%3Atitle%5D=&binds%5B%3Ainstr_name_op%5D=%3D&binds%5B%3Ainstructor%5D=&binds%5B%3Age%5D=&binds%5B%3Acrse_units_op%5D=%3D&binds%5B%3Acrse_units_from%5D=&binds%5B%3Acrse_units_to%5D=&binds%5B%3Acrse_units_exact%5D=&binds%5B%3Adays%5D=&binds%5B%3Atimes%5D=&binds%5B%3Aacad_career%5D="

    request = urllib2.Request(baseurl, query, headers)
    response = urllib2.urlopen(request)
    print response.read()

    cj.save(COOKIEFILE)    # save any cookies the server set
    print 'These are the cookies we have received so far:'
    for index, cookie in enumerate(cj):
        print index, ':', cookie

    # POST data for page 2 (what the "next" button submits)
    query = "action=next&Rec_Dur=100&sel_col%5Bclass_nbr%5D=1&sel_col%5Bclass_id%5D=1&sel_col%5Bclass_title%5D=1&sel_col%5Btype%5D=1&sel_col%5Bdays%5D=1&sel_col%5Btimes%5D=1&sel_col%5Binstr_name%5D=1&sel_col%5Bstatus%5D=1&sel_col%5Benrl_cap%5D=1&sel_col%5Benrl_tot%5D=1&sel_col%5Bseats_avail%5D=1&sel_col%5Blocation%5D=1"

    request = urllib2.Request(baseurl, query, headers)
    response = urllib2.urlopen(request)
    print response.read()
    sys.exit()
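(Editor's note: the long strings in the script above are ordinary form fields run through URL encoding, so a field name like `binds[:term]` becomes `binds%5B%3Aterm%5D`. A minimal sketch of rebuilding such a body from a dict, shown here with the Python 3 spelling `urllib.parse.urlencode`; under Python 2, the same function is `urllib.urlencode`. The field values are taken from the captured page-1 query.)

```python
# The POST bodies in the script are URL-encoded form fields:
# '[' -> %5B, ':' -> %3A, ']' -> %5D.
from urllib.parse import urlencode

fields = {
    'action': 'results',
    'binds[:term]': '2100',        # term code from the captured query
    'binds[:reg_status]': 'O',
}
body = urlencode(fields)
print(body)
# -> action=results&binds%5B%3Aterm%5D=2100&binds%5B%3Areg_status%5D=O
```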

Thanks for any thoughts or suggestions...

And yes, I know... this script/test really is ugly!

-tom
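(Editor's note for later readers: `urllib2` and `cookielib` are Python 2 modules; in Python 3 they became `urllib.request` and `http.cookiejar`. A sketch of the same cookie-carrying setup under Python 3, with the actual network calls left commented out since they depend on the live site:)

```python
# Python 3 equivalent of the cookie setup in the question:
# urllib2 -> urllib.request, cookielib -> http.cookiejar.
import http.cookiejar
import urllib.request

cj = http.cookiejar.LWPCookieJar('cookies.lwp')
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
urllib.request.install_opener(opener)  # urlopen() now shares the jar

baseurl = 'https://pisa.ucsc.edu/class_search/index.php'
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}

# In Python 3 the POST body must be bytes, e.g.:
# data = query.encode('ascii')
# req = urllib.request.Request(baseurl, data, headers)
# response = urllib.request.urlopen(req)  # cookies carried via cj
```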

1 Answer

0

I'm really not sure what's going wrong here. It may not help, but you could try a different approach:

Each time the first page is submitted, open a table row, store only that row's ID in a cookie, and append to that row on subsequent visits.

tf.
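(Editor's note: the server-side half of that suggestion might look roughly like the sketch below, using the standard-library `http.cookies` module (Python 3; the Python 2 equivalent is the `Cookie` module). The cookie name `row_id` and its value are hypothetical, not from the original app.)

```python
# Sketch of the answer's idea: store only a row identifier in a
# cookie, and read it back on later requests.
from http.cookies import SimpleCookie

# First submission: send the new row's ID back to the browser.
out = SimpleCookie()
out['row_id'] = '12345'  # hypothetical database row ID
set_cookie_header = out.output(header='Set-Cookie:')
# -> Set-Cookie: row_id=12345

# Later request: parse the Cookie header the browser sends back.
incoming = SimpleCookie('row_id=12345')
row_id = incoming['row_id'].value  # -> '12345'
```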
