How to download files from a web-based file server using Python mechanize
I have a series of files on a private FTP file server that I want to download using mechanize.
The mechanize link objects look like this:
Link(base_url='http://myfileserver.com/cgi-bin/index.cgi', url='index.cgi?page=download&file=%2Fhome%2Fjmyfileserver%2Fpublic_html%2Fuser_data%2Fmycompany%2F.ftpquota', text='Download [IMG]', tag='a', attrs=[('href', 'index.cgi?page=download&file=%2Fhome%2Fjmyfileserver%2Fpublic_html%2Fuser_data%2Fmycompany%2F.ftpquota'), ('class', 'ar')])
This is essentially a file icon that links to the actual file.
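As an aside, the %2F sequences in the file parameter are just percent-encoded slashes; decoding them recovers the real server-side path. A standalone sketch (shown with Python 3's urllib.parse for illustration; under Python 2, which mechanize targets, the same function is urllib.unquote):

```python
from urllib.parse import unquote  # Python 2: from urllib import unquote
import posixpath

encoded = "%2Fhome%2Fjmyfileserver%2Fpublic_html%2Fuser_data%2Fmycompany%2F.ftpquota"

# Decode the percent-encoded slashes back into a normal POSIX path.
path = unquote(encoded)
print(path)  # /home/jmyfileserver/public_html/user_data/mycompany/.ftpquota

# basename gives a reasonable local filename to save the download under.
print(posixpath.basename(path))  # .ftpquota
```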
I'm still fairly new to mechanize.
But how do I download the file behind this link? The absolute URL can be obtained with:

urlparse.urljoin(base_url, url)

which, combining the two, gives:

http://myfileserver.com/cgi-bin/index.cgi?page=download&file=%2Fhome%2Fjmyfileserver%2Fpublic_html%2Fuser_data%2Fmycompany%2F.ftpquota
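That join step can be checked in isolation (Python 3's urllib.parse shown; in Python 2 the same function lives in the urlparse module):

```python
from urllib.parse import urljoin  # Python 2: from urlparse import urljoin

base_url = "http://myfileserver.com/cgi-bin/index.cgi"
url = "index.cgi?page=download&file=%2Fhome%2Fjmyfileserver%2Fpublic_html%2Fuser_data%2Fmycompany%2F.ftpquota"

# The relative URL replaces the last path segment of the base URL,
# so joining the two yields the absolute download URL.
full_url = urljoin(base_url, url)
print(full_url)
```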
I don't know how to proceed from there.
Here is the original code I wrote:
import mechanize
import subprocess
import urlparse

br = mechanize.Browser()
br.open("http://myfileserver.com/cgi-bin/index.cgi")
br.select_form(nr=0)
br['login'] = "mylogin"
br['password'] = "mypassword"
br.submit()
#print dir(br)

myfiles = []
for alink in br.links():
    print alink
    myfiles.append(alink)

def downloadlink(l):
    print " Trying to download", l.url.split("%2F")[-1]
    f = open(l.url.split("%2F")[-1], "w")
    myurl = urlparse.urljoin(l.base_url, l.url)
    print myurl
    # Dont know how to proceed

for linkobj in myfiles:
    if "sca" in linkobj.url:
        #br.follow_link(text='[IMG]', nr=0)
        downloadlink(linkobj)
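For what it's worth, here is one way the missing step might look (a sketch, not a confirmed solution): fetch the joined URL through the already-logged-in browser and write the response bytes to disk. The only assumption about the browser object is that its open() returns a response with a read() method, which is true of mechanize.Browser; the URL handling below uses Python 3 names for illustration (Python 2: urlparse.urljoin and urllib.unquote):

```python
from urllib.parse import urljoin, unquote  # Python 2: urlparse.urljoin, urllib.unquote
import posixpath

def downloadlink(br, link):
    """Fetch a mechanize Link through the logged-in browser and save it locally."""
    full_url = urljoin(link.base_url, link.url)
    # Recover a local filename from the percent-encoded file= query parameter.
    encoded_path = link.url.split("file=")[-1]
    filename = posixpath.basename(unquote(encoded_path))
    # Binary mode, so the downloaded bytes are written to disk verbatim.
    with open(filename, "wb") as f:
        f.write(br.open(full_url).read())
    return filename
```

In the loop above, the call would then become downloadlink(br, linkobj), reusing the browser that already holds the login session.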