Rethinking the way audit works
Starting from your reply, Tasos: "Hm, I saw that before with the path_traversal module, the increased responses required for more accuracy/coverage end up killing the server. (I may have to reduce the default HTTP request concurrency to keep the servers responsive.)"
An older idea came back to mind that I wanted to share with you. Basically, it killed the server because it requested a specific vector too many times and too fast, right? Even with AutoThrottle and the default HTTP request limit, which is 60 (I don't know why I thought it was 25).

Anyway, what if, instead of changing the default HTTP request concurrency, we made some changes to the core so it audits multiple vectors at the same time? More specifically, if we choose to use path_traversal alone with --audit-links and --audit-forms, it would audit the link vectors and the forms at the same time; that way the server might handle the audit much better.

This is the way another scanner works (whose name also starts with an A): instead of focusing on one vector at a time, it opens around 10 scripts or so, auditing multiple things in an efficient way. After using both of these good scanners for a long time, with the same HTTP request limit and similar HTTP timeouts, I noticed one thing: during the audit, on some servers, the other scanner is much less likely to hit timeouts, and network bandwidth usage is smoother.
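The interleaving idea above could be sketched roughly like this (a toy illustration; the queues and scheduler here are made up for the example, not Arachni's actual audit logic): instead of exhausting one element type before moving to the next, the audit rotates across vectors round-robin, spreading the load on the server.

```ruby
# Hypothetical sketch of interleaving audit work across element types,
# rather than hammering one vector with every payload before moving on.
queues = {
  links: ['/page?id=1', '/page?id=2'],
  forms: ['login form', 'search form']
}

schedule = []
until queues.values.all?(&:empty?)
  # Take one pending element from each non-empty vector per round.
  queues.each do |vector, pending|
    schedule << [vector, pending.shift] unless pending.empty?
  end
end

schedule.each { |vector, element| puts "audit #{vector}: #{element}" }
# Work alternates links/forms/links/forms instead of links/links/forms/forms.
```

The point of the round-robin order is that consecutive requests hit different parts of the application, so no single handler gets flooded.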
Support Staff 1 Posted by Tasos Laskos on 08 Aug, 2013 01:19 PM
Why do you think it's 60? It's actually 20.
However, that doesn't make a difference to the server as it only sees requests. What matters is how many requests you've got running at any given time and how soon you make new requests after the current ones have finished.
Arachni is fast which can be a problem for small servers, which is why the HTTP request concurrency is adjustable, so that you can configure it to work best for your server.
That other scanner I imagine isn't as aggressive as Arachni by default so the audit goes smoother for less powerful servers.
The only way to do that would be to change the default HTTP request concurrency of Arachni to a less aggressive setting as well.
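For a small server, tuning that setting down might look something like this (the `--http-req-limit` flag name is taken from the wiki page linked later in this thread; defaults and exact syntax differ between versions, so check `arachni --help` for your install):

```shell
# Hypothetical invocation: lower the concurrent request limit so a weak
# server isn't overwhelmed during the audit.
arachni http://test.example.com --http-req-limit=5 --audit-links --audit-forms
```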
Tasos Laskos closed this discussion on 08 Aug, 2013 01:19 PM.
user021 re-opened this discussion on 08 Aug, 2013 01:31 PM
2 Posted by user021 on 08 Aug, 2013 01:31 PM
https://github.com/Arachni/arachni/wiki/Command-line-user-interface#wiki-http-req-limit
says the default is 60.
I had the request limit set the same on both scanners and I'd dare say the other scanner is faster compared to a normal Arachni instance (while on the Grid, Arachni with two Instances is faster), and I think that could be improved -- but of course that's just my opinion.
Support Staff 3 Posted by Tasos Laskos on 08 Aug, 2013 02:03 PM
I had forgotten to update the doc, it's actually been 20 for a long time now, my bad.
About the performance thing, that could be for a lot of reasons:
The other scanner's path_traversal coverage was pretty poor, and if that's any indication of the coverage of its other tests then it has far fewer requests to perform -- again, leading to a less stressed server and a much smaller amount of requests overall.
4 Posted by user021 on 08 Aug, 2013 02:09 PM
Alright, hmm. Too bad Ruby doesn't handle multiple real threads.
user021 closed this discussion on 08 Aug, 2013 02:09 PM.
Tasos Laskos re-opened this discussion on 08 Aug, 2013 02:18 PM
Support Staff 5 Posted by Tasos Laskos on 08 Aug, 2013 02:18 PM
Well, technically it does, but in a really daft way.
As of Ruby 1.9, Ruby threads are real OS threads, but only one of them runs at a time while the others wait on some IO operation to complete. Since all the scanner does is wait on IO, threads wouldn't have been very efficient; that's why I'm using a single thread and non-blocking IO for concurrency -- it's faster and has less overhead since it doesn't have to initialize and tear down threads. That kind of leaves the server no time to breathe, because the HTTP request efficiency is really high.
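The single-thread, cooperative model described above can be sketched roughly like this (a toy illustration using Ruby Fibers as a stand-in for the non-blocking IO loop; the names and structure here are invented for the example, not Arachni's actual code):

```ruby
# Toy sketch: one thread interleaves many "requests" by yielding whenever
# a request would block on IO, instead of parking an OS thread per request.
log = []

requests = (1..3).map do |i|
  Fiber.new do
    log << "start #{i}"
    Fiber.yield          # pretend we're now waiting on non-blocking IO
    log << "finish #{i}"
  end
end

requests.each(&:resume)  # kick off all three; none blocks the others
requests.each(&:resume)  # "IO completed": each finishes in turn
```

All three requests are "in flight" before any of them finishes, yet everything runs on a single thread with no thread setup/teardown cost.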
However, there were more reasons for the multi-Instance scans and I'd have done it that way anyhow, so it's not all bad.
Tasos Laskos closed this discussion on 08 Aug, 2013 02:18 PM.