Check an endless number of self-generated URLs for validity and save the response if valid (HTTP 200)
I want to check an endless number of self-generated URLs for validity and, if a URL is valid, save its response to a file. The URLs all have the form https://mydomain.com/ followed by a short random string (e.g. https://mydomain.com/ake3t), and I want to generate them from the alphabet "abcdefghijklmnopqrstuvwxyz0123456789_-", brute-forcing every possible combination.
I wrote a Python script for this, but since I am a complete beginner it runs very slowly! I need a really, really fast solution, so I tried Scrapy, because it seemed built for exactly this kind of job.
The problem now is that I cannot find a way to generate the URLs on the fly; I cannot generate them all beforehand, because their number is not fixed.
Can anyone show me how to do this, or recommend another tool or library better suited to the job?
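As for the Scrapy part of the question: start_requests() may be written as a generator, and Scrapy consumes it lazily, pulling new requests only as download slots free up, so the unbounded URL space never has to exist in advance. A minimal sketch under that assumption (the spider name is illustrative, and by default Scrapy's HttpError middleware drops non-2xx responses before they reach parse()):

import itertools
import scrapy

ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789_-'

class BruteSpider(scrapy.Spider):
    name = 'brute'  # illustrative spider name

    def start_requests(self):
        # a generator: Scrapy drains it lazily, so the unbounded URL
        # space is never materialized in memory
        for length in itertools.count(1):  # 1, 2, 3, ... characters
            for chars in itertools.product(ALPHABET, repeat=length):
                yield scrapy.Request('https://mydomain.com/' + ''.join(chars))

    def parse(self, response):
        # only responses that made it past the HttpError middleware
        # (2xx by default) arrive here
        self.logger.info('valid: %s', response.url)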
UPDATE: This is the script I used, but I think it is slow. What worries me most is that it gets even slower when I use more than one thread (set via threadsNr).
import threading, os
import urllib.request, urllib.parse, urllib.error

threadsNr = 1
dumpFolder = '/tmp/urls/'
charSet = 'abcdefghijklmnopqrstuvwxyz0123456789_-'
Url_pre = 'http://vorratsraum.com/'
Url_post = 'alwaysTheSameTail'

# class that generates the words
class wordGenerator():
    def __init__(self, word, charSet):
        self.currentWord = word
        self.charSet = charSet

    # generate the next word, set it as currentWord and return it
    def nextWord(self):
        self.currentWord = self._incWord(self.currentWord)
        return self.currentWord

    # generate the next word (odometer-style increment over the char set)
    def _incWord(self, word):
        word = str(word)                                  # convert to string
        if word == '':                                    # if word is empty
            return self.charSet[0]                        # return first char from the char set
        wordLastChar = word[len(word) - 1]                # get the last char
        wordLeftSide = word[0:len(word) - 1]              # get word without the last char
        lastCharPos = self.charSet.find(wordLastChar)     # position of last char in the char set
        if (lastCharPos + 1) < len(self.charSet):         # if last char is not the end of the char set
            wordLastChar = self.charSet[lastCharPos + 1]  # get next char from the char set
        else:                                             # it is the last char
            wordLastChar = self.charSet[0]                # reset last char to the first char of the set
            wordLeftSide = self._incWord(wordLeftSide)    # carry: increase the left side
        return wordLeftSide + wordLastChar                # return the next word

class newThread(threading.Thread):
    def run(self):
        global exitThread
        global wordsTried
        global newWord
        while not exitThread:
            part = newWord.nextWord()    # generate the next word to try (not thread-safe, see below)
            url = Url_pre + part + Url_post
            wordsTried = wordsTried + 1  # unsynchronized counter shared by all threads
            if wordsTried == 1000:       # just for testing how fast it is
                exitThread = True
            print('trying ' + part)      # display the word
            print('At URL ' + url)
            try:
                req = urllib.request.Request(url)
                req.add_header('User-agent', 'Mozilla/5.0')
                resp = urllib.request.urlopen(req)
                result = resp.read()
                found(part, result)
            except urllib.error.HTTPError as err:
                if err.code == 404:
                    print('Page not found!')
                elif err.code == 403:
                    print('Access denied!')
                else:
                    print('Something happened! Error code', err.code)
            except urllib.error.URLError as err:
                print('Some other error happened:', err.reason)

def found(part, result):
    resultFile.write(part + "\n")
    if not os.path.isdir(dumpFolder + part):
        os.makedirs(dumpFolder + part)
    print('Found Part = ' + part)

wordsTried = 0
exitThread = False                    # flag to kill all threads
newWord = wordGenerator('', charSet)  # word generator

if not os.path.isdir(dumpFolder):
    os.makedirs(dumpFolder)
resultFile = open(dumpFolder + 'parts.txt', 'a')  # open file for append

threads = [newThread() for i in range(threadsNr)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # wait for all threads before closing the shared file
resultFile.close()
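One concrete defect worth noting in the script above: all threads share newWord and wordsTried without any synchronization, so two threads can interleave inside _incWord and produce duplicate or skipped words. A minimal sketch of a lock-guarded run loop (the threading.Lock usage is standard; only the changed part of the loop is shown, the fetching stays as above):

import threading

genLock = threading.Lock()  # guards the shared generator and counter

class newThread(threading.Thread):
    def run(self):
        global exitThread, wordsTried
        while not exitThread:
            with genLock:
                # only one thread at a time may advance the generator
                # and bump the counter, so no word is produced twice
                part = newWord.nextWord()
                wordsTried += 1
                if wordsTried == 1000:
                    exitThread = True
            url = Url_pre + part + Url_post
            # ... fetch url exactly as in the original run() ...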
2 Answers
Do you want brute force or random probing? Below is a straightforward brute-force method that allows repeated characters. How fast it runs is determined almost entirely by your server's response time. Also be warned: an approach like this can render the service unusable rather quickly.
import itertools
import urllib.request, urllib.error

pageChars = 5
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789_-"

# Iterate over the product of the alphabet with <pageChars> elements.
# This assumes repeated characters are allowed.
# Beware: this generates len(alphabet)**pageChars possible strings
# (38**5 = 79,235,168 for pageChars = 5).
for chars in itertools.product(alphabet, repeat=pageChars):
    pageString = ''.join(chars)
    urlString = 'https://mydomain.com/' + pageString
    try:
        page = urllib.request.urlopen(urlString)
    except urllib.error.HTTPError:
        print('No page at: %s' % urlString)
        continue
    pageData = page.read()
    # do something with the page data
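Since the per-URL cost here is dominated by the server round-trip, the usual remedy is to overlap many requests. A sketch of the same brute force with concurrent.futures from the standard library (probe, the worker count, and the batch size are illustrative; the batching exists because Executor.map would otherwise drain the entire 79-million-string generator up front):

import itertools
import urllib.request, urllib.error
from concurrent.futures import ThreadPoolExecutor

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789_-"
candidates = (''.join(c) for c in itertools.product(alphabet, repeat=5))

def probe(pageString):
    # return the page body, or None if the request failed
    urlString = 'https://mydomain.com/' + pageString
    try:
        with urllib.request.urlopen(urlString, timeout=10) as resp:
            return pageString, resp.read()
    except urllib.error.URLError:
        return pageString, None

with ThreadPoolExecutor(max_workers=20) as pool:
    while True:
        # submit in modest batches instead of handing pool.map the
        # whole generator, which it would consume eagerly
        batch = list(itertools.islice(candidates, 1000))
        if not batch:
            break
        for pageString, data in pool.map(probe, batch):
            if data is not None:
                print('Found page at:', pageString)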
You cannot check an "endless number of URLs" without it being "very slow", beginner or not.
The time your scraper takes is almost certainly dominated by the response time of the server you are accessing, not by the efficiency of your script.
What are you actually trying to do?