Skipfish (by Google)
Posted on 2012-10-11 23:51:16

Skipfish is a web application security scanner developed by Google.

Installation
  root@Dis9Team:/pen/web# wget http://skipfish.googlecode.com/files/skipfish-2.09b.tgz
  root@Dis9Team:/pen/web# tar xf skipfish-2.09b.tgz
  root@Dis9Team:/pen/web# cd skipfish-2.09b
  root@Dis9Team:/pen/web/skipfish-2.09b# apt-get install libpcre3 libpcre3-dev
  root@Dis9Team:/pen/web/skipfish-2.09b# make
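The tarball also ships with sample wordlists under dictionaries/, which the scan examples below can reuse. The listing is a sketch from memory; the exact file names may differ in your copy:

  root@Dis9Team:/pen/web/skipfish-2.09b# ls dictionaries/
  complete.wl  extensions-only.wl  medium.wl  minimal.wl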
Help options
  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -h
  skipfish version 2.09b by <lcamtuf@google.com>
  Usage: ./skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]

  Authentication and access options:

    -A user:pass      - use specified HTTP authentication credentials
    -F host=IP        - pretend that 'host' resolves to 'IP'
    -C name=val       - append a custom cookie to all requests
    -H name=val       - append a custom HTTP header to all requests
    -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
    -N                - do not accept any new cookies
    --auth-form url   - form authentication URL
    --auth-user user  - form authentication user
    --auth-pass pass  - form authentication password
    --auth-verify-url -  URL for in-session detection

  Crawl scope options:

    -d max_depth     - maximum crawl tree depth (16)
    -c max_child     - maximum children to index per node (512)
    -x max_desc      - maximum descendants to index per branch (8192)
    -r r_limit       - max total number of requests to send (100000000)
    -p crawl%        - node and link crawl probability (100%)
    -q hex           - repeat probabilistic scan with given seed
    -I string        - only follow URLs matching 'string'
    -X string        - exclude URLs matching 'string'
    -K string        - do not fuzz parameters named 'string'
    -D domain        - crawl cross-site links to another domain
    -B domain        - trust, but do not crawl, another domain
    -Z               - do not descend into 5xx locations
    -O               - do not submit any forms
    -P               - do not parse HTML, etc, to find new links

  Reporting options:

    -o dir          - write output to specified directory (required)
    -M              - log warnings about mixed content / non-SSL passwords
    -E              - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
    -U              - log all external URLs and e-mails seen
    -Q              - completely suppress duplicate nodes in reports
    -u              - be quiet, disable realtime progress stats
    -v              - enable runtime logging (to stderr)

  Dictionary management options:

    -W wordlist     - use a specified read-write wordlist (required)
    -S wordlist     - load a supplemental read-only wordlist
    -L              - do not auto-learn new keywords for the site
    -Y              - do not fuzz extensions in directory brute-force
    -R age          - purge words hit more than 'age' scans ago
    -T name=val     - add new form auto-fill rule
    -G max_guess    - maximum number of keyword guesses to keep (256)

    -z sigfile      - load signatures from this file

  Performance settings:

    -g max_conn     - max simultaneous TCP connections, global (40)
    -m host_conn    - max simultaneous connections, per target IP (10)
    -f max_fail     - max number of consecutive HTTP errors (100)
    -t req_tmout    - total request response timeout (20 s)
    -w rw_tmout     - individual network I/O timeout (10 s)
    -i idle_tmout   - timeout on idle HTTP connections (10 s)
    -s s_limit      - response size limit (400000 B)
    -e              - do not keep binary responses for reporting

  Safety settings:

    -l max_req      - max requests per second (0.000000)
    -k duration     - stop scanning after the given duration h:m:s

  Send comments and complaints to <heinenn@google.com>.
  root@Dis9Team:/pen/web/skipfish-2.09b#
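For example, a target behind HTTP basic authentication can be scanned with the -A, -C and -b flags listed above, presenting Firefox-style headers and a fixed session cookie. A minimal sketch; the credentials, cookie value and /admin/ path are placeholders, only the 5.5.5.8 test host comes from the scans below:

  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -A admin:password -C PHPSESSID=deadbeef -b f -o auth_scan http://5.5.5.8/admin/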
Default scan

  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -o 1 http://5.5.5.8
  [+] Copying static resources...
  [+] Sorting and annotating crawl nodes: 911
  [+] Looking for duplicate entries: 911
  [+] Counting unique nodes: 148
  [+] Saving pivot data for third-party tools...
  [+] Writing scan description...
  [+] Writing crawl tree: 911
  [+] Generating summary views...
  [+] Report saved to '1/index.html' [0xb47c9783].
  [+] This was a great day for science!


-o specifies the directory where the report is saved.
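The usage line in the help output also mentions a read-write wordlist (-W) for dictionary-based brute force. A hedged sketch, assuming a bundled dictionary is copied first so the original file stays untouched (the wordlist file name is an assumption):

  root@Dis9Team:/pen/web/skipfish-2.09b# cp dictionaries/minimal.wl my.wl
  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -W my.wl -o 1 http://5.5.5.8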




View the results:
  root@Dis9Team:/pen/web/skipfish-2.09b# firefox 1/index.html




Crawl domain
  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -D .qq.com -o 1 http://www.qq.com


This crawls every x.qq.com site linked from qq.com and saves the report to directory 1.
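The other crawl scope flags from the help output work the same way: -I only follows URLs containing a given string, and -X excludes matches such as logout links. A sketch against the same test host; the /bbs/ path and the 'logout' string are placeholders:

  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -I /bbs/ -X logout -o 2 http://5.5.5.8/bbs/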

Maximum connections

-g sets the global limit on simultaneous TCP connections; the default is 40.
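-m likewise caps connections per target IP (default 10), and the safety flags -l and -k limit the request rate and total runtime. A hedged example that throttles a scan of the test host to 50 requests per second for at most one hour; the numbers are arbitrary:

  root@Dis9Team:/pen/web/skipfish-2.09b# ./skipfish -g 20 -m 5 -l 50 -k 1:00:00 -o 3 http://5.5.5.8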




