Rethinking the way audit works

user021

08 Aug, 2013 09:41 AM

Starting from your reply, Tasos: "Hm, I saw that before with the path_traversal module, the increased responses required for more accuracy/coverage end up killing the server. (I may have to reduce the default HTTP request concurrency to keep the servers responsive.)"

An older idea came back to mind that I wanted to share with you. So basically it killed the server because it requested a specific vector too many times and too fast, right? Even with the AutoThrottle plugin and the default HTTP request limit, which defaults to 60 (I don't know why I thought it was 25).

Anyway, what if, instead of changing the default HTTP request concurrency, we made some changes to the core so that it audits multiple vectors at the same time? More specifically, if we chose to use path_traversal alone with --audit-links and --audit-forms, it would audit the link vectors and the forms at the same time; that way the server might handle the audit much better. This is the way another scanner works (whose name also starts with A): instead of focusing on one vector at a time, it opens around 10 scripts or so and audits multiple things in an efficient way.

After using those two good scanners for a long time, with the same HTTP request limit and similar HTTP time-outs, I noticed one thing: during the audit, on some servers, I'm much less likely to get time-outs with the other scanner and the network bandwidth is smoother.
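
To make the idea a bit more concrete, here is a rough Ruby sketch of it (not Arachni's actual internals; the Typhoeus client, target URLs, inputs and payloads are only placeholders for illustration). Checks for links and forms are queued into the same non-blocking HTTP client, so their requests interleave instead of one element type being exhausted before the next one starts:

    require 'typhoeus'

    # Illustrative path traversal payloads -- not Arachni's real payload set.
    PAYLOADS = ['../../../../etc/passwd', '..%2f..%2f..%2fetc%2fpasswd']

    # One shared client; 20 matches Arachni's default HTTP request concurrency.
    hydra = Typhoeus::Hydra.new(max_concurrency: 20)

    links = [{ url: 'http://example.com/view',  params: { 'file' => 'index' } }]
    forms = [{ url: 'http://example.com/login', params: { 'user' => '', 'pass' => '' } }]

    # Both element types go into the same queue, so their audits run side by side.
    (links + forms).each do |element|
      element[:params].each_key do |input|
        PAYLOADS.each do |payload|
          request = Typhoeus::Request.new(
            element[:url],
            params: element[:params].merge(input => payload)
          )
          request.on_complete do |response|
            puts "possible hit: #{response.effective_url}" if response.body =~ /root:x:0:0/
          end
          hydra.queue(request) # queued, not yet sent
        end
      end
    end

    hydra.run # sends all queued requests, at most 20 in flight at a time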

  1. Posted by Tasos Laskos (Support Staff) on 08 Aug, 2013 01:19 PM

    Why do you think it's 60? It's actually 20.
    However, that doesn't make a difference to the server as it only sees requests. What matters is how many requests you've got running at any given time and how soon you make new requests after the current ones have finished.

    Arachni is fast, which can be a problem for small servers; that's why the HTTP request concurrency is adjustable, so you can configure it to work best with your server.

    That other scanner, I imagine, isn't as aggressive as Arachni by default, so the audit goes smoother on less powerful servers.
    The only way to get the same behavior would be to change Arachni's default HTTP request concurrency to a less aggressive setting as well.
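
    For example, assuming the --http-req-limit switch documented on the wiki (example.com and the value 10 are just placeholders), a gentler scan of a small server could look something like:

        arachni http://example.com/ --http-req-limit=10 --audit-links --audit-forms

    Lower the value until the server stays responsive during the audit.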

  2. Tasos Laskos closed this discussion on 08 Aug, 2013 01:19 PM.

  3. user021 re-opened this discussion on 08 Aug, 2013 01:31 PM.

  4. Posted by user021 on 08 Aug, 2013 01:31 PM

    https://github.com/Arachni/arachni/wiki/Command-line-user-interface#wiki-http-req-limit

    It says the default is 60.
    I had the request limit set the same on both scanners and I'd dare to say the other scanner is faster compared to a normal Arachni Instance (while on the Grid, Arachni with two Instances is faster), and I think that could be improved, but of course that's just my opinion.

  5. Posted by Tasos Laskos (Support Staff) on 08 Aug, 2013 02:03 PM

    I had forgotten to update the doc, it's actually been 20 for a long time now, my bad.

    About the performance thing, that could be for a lot of reasons:

    1. The scan was faster because the other scanner was slower and didn't stress the server as much, which left the server more responsive overall -- just because you configured them the same doesn't mean the implementations have the same efficiency. Try lowering Arachni's HTTP request concurrency to the point where the server can handle it more easily and see if that makes any difference.
    2. The other scanner can use real threads, which isn't possible with Ruby, and that's the reason I've implemented multi-Instance scans.
    3. Judging from the HTTP logs you sent me of that other scanner, its path_traversal coverage was pretty poor, and if that's any indication of the coverage of its other tests then it has a lot fewer requests to perform -- again leading to a less stressed server and a much lower total number of requests.
  6. Posted by user021 on 08 Aug, 2013 02:09 PM

    Alright, hmm. Too bad Ruby doesn't handle multiple real threads.

  7. user021 closed this discussion on 08 Aug, 2013 02:09 PM.

  8. Tasos Laskos re-opened this discussion on 08 Aug, 2013 02:18 PM.

  9. Posted by Tasos Laskos (Support Staff) on 08 Aug, 2013 02:18 PM

    Well, technically it does but in a really daft way.

    As of Ruby 1.9, Ruby threads are real OS threads, but only one of them runs at a time while the others wait on some IO operation to complete. Since waiting on IO is pretty much all a scanner does, threads wouldn't have been very efficient, which is why I'm using a single thread and non-blocking IO for concurrency. That's faster and has less overhead, since there are no threads to initialize and tear down, which kind of leaves the server no time to breathe because the HTTP request efficiency is really high.
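
    A tiny illustration of that point (not Arachni code; the URL and the amount of work are made up): under MRI's global lock, threads doing CPU-bound work gain nothing, while threads that only wait on IO do overlap their waiting.

        require 'uri'
        require 'net/http'
        require 'benchmark'

        # CPU-bound: only one thread executes Ruby code at a time, so four
        # threads take roughly as long as doing the work serially.
        cpu_time = Benchmark.realtime do
          4.times.map { Thread.new { 2_000_000.times { Math.sqrt(rand) } } }.each(&:join)
        end

        # IO-bound: the lock is released while waiting on the socket, so the
        # requests overlap and wall-clock time drops.
        io_time = Benchmark.realtime do
          10.times.map { Thread.new { Net::HTTP.get_response(URI('http://example.com/')) } }.each(&:join)
        end

        puts "cpu-bound: #{cpu_time.round(2)}s, io-bound: #{io_time.round(2)}s"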

    However, there were more reasons for the multi-Instance scans and I'd have done it that way anyhow, so it's not all bad.

  10. Tasos Laskos closed this discussion on 08 Aug, 2013 02:18 PM.
