<p>I solved this, but I had to make some significant changes to your code. Revised code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import re

main_page = requests.get('http://www.midiworld.com/classic.htm')
parsed_page = BeautifulSoup(main_page.content, 'html.parser')

# Collect every anchor whose href ends in "mid".
links = parsed_page.find_all('a', href=re.compile('mid$'))

def getFileName(link):
    # Use the last path component of the URL as the local filename.
    return link['href'].split('/')[-1]

def downloadFile(link, filename):
    mid_file = requests.get(link['href'], stream=True)
    with open(filename, 'wb') as saveMidFile:
        saveMidFile.write(mid_file.content)
    print('Downloaded {} successfully.'.format(filename))

for link in links:
    filename = getFileName(link)
    downloadFile(link, filename)
</code></pre>
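<p>As a side note, the filename extraction above assumes the href is a plain path. If a link ever carries a query string or fragment, a slightly more robust sketch (the example URL below is made up for illustration) parses the URL first with the standard library:</p>
<pre><code>from urllib.parse import urlparse
import os

def get_filename(url):
    # Parse the URL, then take the last path component,
    # so query strings or fragments never end up in the filename.
    return os.path.basename(urlparse(url).path)

print(get_filename('http://www.midiworld.com/midis/example.mid'))  # example.mid
</code></pre>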
<p>This seems to download the files quickly and easily. None of them are corrupted, and I can play them just fine.
Thanks for stuffing my folder full of classical music.</p>