How to specify the URLs that I want to scan?
Dear Tasos:
I'm currently using Arachni to examine a web application that has only around 15 pages. However, many of them can only be reached by triggering JavaScript events. As far as I know, Arachni is not yet able to process JavaScript, so I'm wondering whether I can specify those links manually in a single scan (otherwise I could run 15 separate scans, but that doesn't help very much). Can you point me to the command-line and API documentation that explains this? A few small examples would also suffice. Thanks.
Support Staff 1 Posted by Tasos Laskos on 07 May, 2013 05:50 PM
Sure thing: use the extend_paths option. Feel free to re-open the discussion if you need further clarifications.
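A minimal sketch of such a paths file, assuming the usual one-URL-per-line format (the URLs below are placeholders, not from this thread):

    http://example.com/page-behind-js
    http://example.com/ajax/view?id=2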
Tasos Laskos closed this discussion on 07 May, 2013 05:50 PM.
Shang Wang re-opened this discussion on 07 May, 2013 08:09 PM
2 Posted by Shang Wang on 07 May, 2013 08:09 PM
Dear Tasos:
Thanks for the response. I created a file, put another URL in it, and passed the file name with the --extend_paths parameter; is this the right method? I couldn't find a test site with the same situation as my client's (links driven by JavaScript), and since today is not a good day for testing, I'm not sure how to verify it.
Another concern is that I cannot hard-code any links in our product (which uses Arachni as its engine); I need to write something that dynamically detects all JavaScript links. My question is: if a link is out of scope (not on the same domain as the original target), will the --extend_paths parameter also add that domain as a target, or will it simply be excluded? Thanks.
Support Staff 3 Posted by Tasos Laskos on 07 May, 2013 08:17 PM
No, the CLI argument is --extend-paths; the API option is extend_paths.
You could use an external crawler that can interpret JS and then pass that sitemap to Arachni -- or better yet, fire up the proxy plugin and configure the external crawler to use Arachni's proxy; that way AJAX requests will be seen and analyzed by Arachni as well.
And no, if a path matches an exclusion criterion it will be ignored.
Cheers
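To tie the two suggestions together, a minimal command-line sketch of both approaches; example.com and the file name are placeholders, and exact flags can vary between Arachni versions:

    # Approach 1: seed the scan with extra URLs collected by an external,
    # JS-capable crawler (one URL per line in extra_paths.txt):
    arachni http://example.com/ --extend-paths=extra_paths.txt

    # Approach 2: load the proxy plugin so the external crawler's traffic
    # is routed through Arachni and AJAX requests are analyzed too:
    arachni http://example.com/ --plugin=proxy
    # Then configure the external crawler to use the proxy address the
    # plugin reports at startup as its HTTP proxy.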
Tasos Laskos closed this discussion on 07 May, 2013 08:17 PM.