Arachni for REST-based websites
Hi Guys,
I am using Arachni to test our application. The problem I am facing is that Arachni is not able to crawl all the links (URLs) of the website.
My website uses REST services to get data from the server, so even if I visit different pages manually the URL remains the same. I guess this could be the reason why Arachni is not able to crawl all the different URLs possible.
If I manually put them in extended-paths, Arachni crawls them. Is there any other way to do this?
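(For reference, feeding a list of known URLs to the crawler from a file looks roughly like the line below; this is a sketch assuming Arachni 1.x's --scope-extend-paths option, and the file path and its contents are hypothetical.)
# /tmp/paths.txt lists one URL or path per line that the crawler cannot discover on its own
arachni https://10.219.170.155/webui/login --scope-extend-paths=/tmp/paths.txt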
Support Staff 1 Posted by Tasos Laskos on 25 May, 2015 07:47 AM
I'm afraid I'll need to check this out for myself in order to see what's going on.
I could make this discussion private and you can send me the site info etc. or you can provide that info via e-mail: [email blocked]
Cheers
2 Posted by Gaurang Shah on 25 May, 2015 08:25 AM
Hi Tasos,
I think that would be a good idea. Right now I am following the steps mentioned in the following post:
http://support.arachni-scanner.com/kb/general-use/service-scanning
So far I have done the following things.
After this step, step 2 hangs, saying the server is being shut down.
If I use the following command, where I don't mention the login script, it works fine; however, it shows 0 pages audited.
arachni https://10.219.170.155/webui/login --scope-page-limit=0 --checks=*,-common_*,-backup*,-backdoors,-directory_listing --plugin=proxy --audit-jsons --audit-xmls
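(If the login script is the suspect, attaching it typically looks like the line below; this is a sketch assuming Arachni 1.x's login_script plugin, and the option name and script path are assumptions.)
arachni https://10.219.170.155/webui/login --plugin=proxy --plugin=login_script:script=/some/path/login.rb --audit-jsons --audit-xmls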
Support Staff 3 Posted by Tasos Laskos on 25 May, 2015 12:41 PM
Is this application live somewhere? I can't know if there's a configuration issue unless I try it.
4 Posted by Gaurang Shah on 25 May, 2015 12:54 PM
Hi Tasos,
No, this application is not live. Let me know what exactly you want me to check and I will check that for you.
However, would you please let me know whether the steps I am performing are correct or not?
Support Staff 5 Posted by Tasos Laskos on 25 May, 2015 01:00 PM
The idea when using the proxy is to train Arachni via your interactions with the browser.
Whatever JSON traffic passes through the proxy will be audited once the scan starts, after you shut down the proxy.
So step 3 is mandatory.
In your case you don't need to export the http_proxy variable.
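(Any request that actually goes through the proxy gets recorded, whether it comes from the browser or from the shell. A quick check might look like the line below; this is a sketch assuming the proxy plugin's default port of 8282, and the JSON endpoint is hypothetical.)
curl --insecure --proxy http://127.0.0.1:8282 -H 'Content-Type: application/json' -d '{"q":"test"}' https://10.219.170.155/webui/api/search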
6 Posted by Gaurang Shah on 25 May, 2015 01:07 PM
And what about the command: which one should I use, the one with the login script or the one without?
Would you let me know how exactly it works? Should I do something like this:
1. Start Arachni with the proxy plugin (not sure about the command, with or without the login script)
2. Set the proxy in the browser and visit pages
3. Stop the proxy server
Now what should happen, or what should I do? Should I run the command to scan the website for attacks? If yes, from where will it pick up the REST URLs? Or will the scan start automatically?
Support Staff 7 Posted by Tasos Laskos on 25 May, 2015 01:14 PM
If the script is giving you problems you can remove it and log in via your browser, but be sure to exclude resources that may log you out.
Once you've finished browsing, you can shut down the proxy via the panel you'll see at the top of your window or with:
Once you shut down the proxy the scan will start from the URL you provided at the command line and will include any resources that were made visible via the proxy.
I'm not sure how to explain it better, I'm sorry.
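(Excluding a logout link might look like the line below; this is a sketch assuming Arachni 1.x's --scope-exclude-pattern option, and the "logout" pattern is hypothetical for this application.)
arachni https://10.219.170.155/webui/login --plugin=proxy --scope-exclude-pattern=logout --audit-jsons --audit-xmls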
8 Posted by Gaurang Shah on 27 May, 2015 08:23 AM
Hi Tasos,
I am able to generate the report by following the steps mentioned below, and the report shows the REST URLs as well.
However, the problem is that I am using Arachni with gauntlt and I am not able to understand how I would automate this process with gauntlt.
Does Arachni save all the URLs somewhere? If it does, I can create a new file and provide that as extended_paths.
Steps:
1. Run the following command:
arachni https://10.219.170.155/webui/login --scope-page-limit=0 --checks=*,-common_*,-backup*,-backdoors,-directory_listing --plugin=proxy --audit-jsons --audit-xmls
2. Set the proxy in the browser and visit the URLs.
3. Stop the proxy server.
Support Staff 9 Posted by Tasos Laskos on 27 May, 2015 01:17 PM
I don't know how gauntlt works so I can't help you with that.
Also, it's not the URLs that are important but the JSON traffic. There is something you can use to get the result you want, but I'm afraid I forgot to update that plugin to handle JSON and XML.
I'll update it and let you know once a nightly is up.
Cheers
Support Staff 10 Posted by Tasos Laskos on 28 May, 2015 04:05 PM
Nightlies are up.
After you've finished training the system via the proxy, you'll be able to grab the input vectors via: http://arachni.proxy/panel/vectors.yml
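(Since arachni.proxy is a virtual host served by the proxy itself, the file has to be fetched through the proxy, for example with curl. This is a sketch assuming the proxy plugin's default port of 8282, which may differ in your setup.)
curl --proxy http://127.0.0.1:8282 http://arachni.proxy/panel/vectors.yml -o vectors.yml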
Then, instead of using the proxy plugin, you'll be able to load them using the vector_feed one, like so:
--plugin=vector_feed:yaml_file=/some/path/vectors.yml
The rest of the options should be the same as before, but you will need to set a valid session by either using the login_script plugin or providing a --http-cookie-jar.
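(Putting it together, the follow-up scan might look like the line below; this is a sketch reusing the earlier options, and the cookie-jar path is hypothetical.)
arachni https://10.219.170.155/webui/login --scope-page-limit=0 --checks=*,-common_*,-backup*,-backdoors,-directory_listing --audit-jsons --audit-xmls --http-cookie-jar=/some/path/cookies.txt --plugin=vector_feed:yaml_file=/some/path/vectors.yml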
11 Posted by Gaurang Shah on 01 Jun, 2015 06:55 AM
Hi Tasos,
Thanks for generating the nightly build. I have installed it and am now ready to test my web services. However, I am still unclear about the steps.
I am not able to understand how to generate vectors.yml. Do I need to create it manually, and if yes, how? Or is there some way the proxy plugin creates vectors.yml?
Support Staff 12 Posted by Tasos Laskos on 01 Jun, 2015 11:45 AM
I get the feeling that you're not paying attention to what I'm saying; it's the second thing I mentioned in my last post.
Tasos Laskos closed this discussion on 30 Sep, 2015 02:57 PM.