Normal Arachni progress
Hi Tasos,
I was wondering if you could explain what "normal" scan progress looks like.
Let me explain: I ran a scan, and after 16 hours I saw it was scanning a page. Now, more than 20 hours in (4 hours later), it is scanning the same web page.
I thought maybe Arachni crawled the whole website first to build a map, and scanned all the web pages afterwards.
Could you please tell me if I'm wrong?
One more question: I'm seeing 1315 page snapshots during the scan, but when it is interrupted I only see between 100 and 400 in the report. Is that normal?
Regards,
Support Staff 1 Posted by Tasos Laskos on 29 Dec, 2017 07:40 PM
The two are different; let's start by defining a couple of things:
You may see the same "page" being audited, but it's actually a different snapshot (DOM state) of the page. For example, the list of transitions will probably be different -- not necessarily, but usually new states come after DOM events, which are recorded as page DOM transitions.
Now, regarding your issue of the same page being audited for so long: as I said, it's probably different states, and it seems like there might be a lot of them, or the server could be very slow, resulting in long scan durations. Have you tried this? http://support.arachni-scanner.com/kb/general-use/optimizing-for-fa...
Also, the system doesn't crawl first and keep a list (or map or tree) of resources to audit later; it's all on the fly and, in a way, interconnected in a feedback loop -- the crawl and audit are basically complementary processes.
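To make the optimization suggestion a bit more concrete, the tuning in that guide typically translates to command-line options like the following. This is only a sketch: example.com is a placeholder, the numbers are illustrative, and you should verify each flag against `arachni --help` for your version.

```shell
# Sketch of scan-duration tuning (placeholder target, illustrative values):
#  - fewer concurrent requests eases the load on a slow server
#  - a smaller browser cluster reduces DOM-analysis overhead
#  - a DOM depth limit caps how far state exploration can go
#  - a timeout aborts the scan after a fixed wall-clock duration
arachni https://example.com/ \
    --http-request-concurrency 10 \
    --browser-cluster-pool-size 4 \
    --scope-dom-depth-limit 5 \
    --timeout 12:00:00
```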
Not sure if the above makes sense, I haven't had my coffee yet and the first draft of my reply really did not make any.
2 Posted by Ranus on 02 Jan, 2018 12:36 PM
Hi,
Thank you for your answer. I understand much better now.
Yes, I followed the guide but haven't been able to finish a scan yet :'(
I've tried checking only allowed methods or CSRF, but couldn't finish it either way.
A weird thing is that it gets stuck on a different page every time.
The website is really big, but it gets stuck after roughly 10-20 hours.
I tried with browser clusters of 1 to 50 browsers.
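For reference, the restricted runs I mention looked roughly like this. It's only a sketch with a placeholder URL; the available check names can be listed with `arachni --checks-list`, and flags should be verified against `arachni --help` for your version.

```shell
# Sketch of a run restricted to the allowed_methods and csrf checks,
# using a 50-browser cluster (placeholder target URL).
arachni https://our-site.example/ \
    --checks allowed_methods,csrf \
    --browser-cluster-pool-size 50
```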
I think my problem is similar to this one: http://support.arachni-scanner.com/discussions/problems/4922-worklo...
If you want, I've sent a capture of the problem to your support email address (06 December 2017, 17:09).
Thank you for your response before your coffee, and happy new year.
3 Posted by Ranus on 04 Jan, 2018 12:31 PM
Hi,
I tried with --output-debug=5
Here is the last console screen from when it got stuck:
Here is the command:
I hope it helps solve the problem.
edit: We investigated and found a common-port problem. We are currently trying to replace PhantomJS as a test. We'll know more in a few days.
Thank you again for all your time.
4 Posted by Ranus on 15 Jan, 2018 10:52 AM
Hi Tasos, we finally succeeded in scanning the entire website (2.5 million lines of code) :) .
We found a workaround by downgrading the PhantomJS version. From what we understood, it was an issue with a port shared by PhantomJS and Selenium.
Unfortunately, we had to turn off the checks xss_dom* and unvalidated_redirect*.
From what I saw, with DOM redirects it stayed stuck in a loop.
I can't give you access to our code/website, but if you like we could help you with debugging (beta testing).
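In case it helps anyone hitting the same hang: disabling those two check families while keeping everything else can be expressed as a glob exclusion in the checks list. This is a sketch with a placeholder URL; from what I understand of Arachni's --checks option, a leading '-' excludes a glob, but verify the syntax against `arachni --help` for your version.

```shell
# Run all checks except the DOM XSS and unvalidated-redirect families
# (placeholder target; the leading '-' excludes the matching globs).
arachni https://example.com/ --checks '*,-xss_dom*,-unvalidated_redirect*'
```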
Today's issue is in protocol.rb:158:in 'rescue in rbuff_fill'.
We'll update if we find a solution for it.
Tasos Laskos closed this discussion on 04 May, 2018 09:14 AM.