The short script below parses a Scrapy crawl log and prints the crawled pages as an indented tree, so the structure of the crawl is visible at a glance; running it is also very simple.
#!/usr/bin/env python
import fileinput, re
from collections import defaultdict

def print_urls(allurls, referer, indent=0):
    # Print every URL crawled from this referer, then recurse into
    # URLs that are themselves referers, indenting one level deeper
    urls = allurls[referer]
    for url in urls:
        print(' ' * indent + url)
        if url in allurls:
            print_urls(allurls, url, indent + 2)

def main():
    # Matches log lines like: Crawled (200) <GET http://...> (referer: http://...)
    log_re = re.compile(r'<GET (.*?)> \(referer: (.*?)\)')
    allurls = defaultdict(list)
    for l in fileinput.input():
        m = log_re.search(l)
        if m:
            url, ref = m.groups()
            allurls[ref] += [url]
    # Start URLs are logged with "(referer: None)", so 'None' is the tree root
    print_urls(allurls, 'None')

main()
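To illustrate, here is a minimal sketch of a run; the example.com URLs and the log file name spider.log are made up for demonstration. Suppose the crawl log contains lines such as:

DEBUG: Crawled (200) <GET http://example.com/> (referer: None)
DEBUG: Crawled (200) <GET http://example.com/a> (referer: http://example.com/)
DEBUG: Crawled (200) <GET http://example.com/b> (referer: http://example.com/)

Saving the script as ctree.py and running python ctree.py spider.log would then print the crawl as a tree:

http://example.com/
  http://example.com/a
  http://example.com/b

Since the script reads input via fileinput, it also accepts the log on stdin, e.g. scrapy crawl myspider 2>&1 | python ctree.py.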
Hopefully what is described in this article is helpful for your Python programming.