High memory when scanning Mutillidae or orangeHRM
I'm trying to scan a few web applications, some real, some for testing. I ran into a lot of trouble with the web UI, so I switched to the command line (inside "screen"). Most scans went flawlessly (although slowly), but I've hit some problems:
* Mutillidae: still running after 3 days, consumed 2.8 GB of RAM.
* orangeHRM-3.2.1 (a couple of flaws were published recently): still running after 1 day, consumed 6 GB.
I don't know if it's relevant, but in both cases I used the "proxy" module to authenticate on the application and crawl it.
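For reference, the proxy-assisted scans were started along these lines (a sketch: the URL is a placeholder and the plugin invocation is from memory, so check arachni --help for the exact syntax):
# Load the proxy plugin; Arachni then waits while you log in and
# browse the app through its proxy before the scan proper starts.
arachni http://target.example/ --plugin=proxy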
What am I doing wrong? How can I debug / fix this?
Support Staff 1 Posted by Tasos Laskos on Apr 14, 2015 @ 12:05 PM
A Mutillidae scan shouldn't take more than a few minutes. How have you set it up?
Is it part of those VMs with lots of vulnerable webapps?
And could you please answer the same question for orangeHRM too?
2 Posted by Michel Arboi on Apr 14, 2015 @ 12:20 PM
There are several vulnerable applications on this VM but they are on different ports or in different directories. As far as I can see, Arachni does not catch the other applications.
I ran:
Support Staff 3 Posted by Tasos Laskos on Apr 14, 2015 @ 12:25 PM
How certain are you? Because with your current config I'd bet good money that Arachni is scanning everything on that machine.
Try --scope-include-path-pattern=mutillidae and reset Mutillidae before rescanning, because if I remember correctly it has a comments section with XSS, and that page can reach MBs in size due to form submissions, which can result in increased memory usage. It may also be a good idea to set --http-response-max-size=500000.
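Something along these lines (the URL is a placeholder for your VM's actual address):
# Restrict the crawl to Mutillidae and cap responses at ~500 KB
arachni http://vm.example/mutillidae/ --scope-include-path-pattern=mutillidae --http-response-max-size=500000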
4 Posted by Michel Arboi on Apr 14, 2015 @ 12:33 PM
If I remember correctly, I reset the DB before launching the scan, so I guess I should rather limit the response size.
Would --scope-auto-redundant=3 (for example) help too?
Support Staff 5 Posted by Tasos Laskos on Apr 14, 2015 @ 12:35 PM
I don't know if it's necessary but it couldn't hurt.
6 Posted by Michel Arboi on Apr 14, 2015 @ 03:43 PM
With --scope-auto-redundant=3, the scan of Mutillidae ends in about ten minutes (but some flaws were not found). Without it (default value = 10, IIRC), it is still running after three hours and the main Ruby process is already eating 1.5 GB.
I'm trying to find a better compromise, maybe --scope-auto-redundant=5.
Support Staff 7 Posted by Tasos Laskos on Apr 14, 2015 @ 03:56 PM
By default there's no redundancy limit; the default limit if you don't specify a value (--auto-redundant) is 10.
Either way, I don't like the memory consumption; I'll need to have a look at that.
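To illustrate what the limit does (hypothetical URLs, differing only in parameter values):
# With --scope-auto-redundant=3, only the first 3 of these are audited:
#   /index.php?page=credits.php
#   /index.php?page=home.php
#   /index.php?page=login.php
#   /index.php?page=...   (skipped once the limit is hit)
arachni http://vm.example/mutillidae/ --scope-auto-redundant=3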
Cheers
8 Posted by Michel Arboi on Apr 14, 2015 @ 04:25 PM
FYI, with --scope-auto-redundant=5, it took 1 hour and 13 minutes.
Support Staff 9 Posted by Tasos Laskos on Apr 14, 2015 @ 04:26 PM
That's crazy, what's your average response time?
10 Posted by Michel Arboi on Apr 14, 2015 @ 04:29 PM
And it still did not catch the SQL injections. I'm definitely doing something wrong.
Support Staff 11 Posted by Tasos Laskos on Apr 14, 2015 @ 04:32 PM
This VM is really slow; you should be getting at least 100 req/s from a LAN. What resources does it have?
12 Posted by Michel Arboi on Apr 15, 2015 @ 08:11 AM
2 virtual processors and 4 GB of RAM, but I had several scans running at the same time.
PHP looks OK:
Without --scope-auto-redundant, I had to stop the scan this morning and got:
I'll double-check the VM status and retry.
Support Staff 13 Posted by Tasos Laskos on Apr 15, 2015 @ 08:14 AM
It's not about RAM so much as it is about processing power. You're essentially DoSing an underpowered server.
You are stressing it so much that the average response time is 8 seconds.
There's not much I can do about that.
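If you have to keep the VM as-is, throttling the scan might ease the load; something like this (placeholder URL, and --http-request-concurrency is from memory, so double-check the option name in arachni --help):
# Limit the number of concurrent requests to reduce server load
arachni http://vm.example/mutillidae/ --http-request-concurrency=10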
14 Posted by Michel Arboi on Apr 15, 2015 @ 09:54 AM
I got much better results after adding one processor and running just one scan. Odd...
Anyway, my initial problem (high memory) is solved, probably by --http-response-max-size=500000.
Restricting the scope is not a good idea: the scan is much quicker, but Arachni misses the SQL injections.
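For anyone finding this later, the setup that finally worked looks roughly like this (placeholder URL; the proxy plugin invocation is the same assumption as above):
# Keep the full scope (so the SQL injections are reachable) and cap
# response sizes to keep memory under control
arachni http://vm.example/mutillidae/ --http-response-max-size=500000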
Tasos Laskos closed this discussion on Apr 15, 2015 @ 09:59 AM.