Memory management

Posted by mailinglist on 19 Sep, 2013 07:22 AM


I'm running Arachni on a CentOS OpenVZ container. This VM is a low-resource machine.
I'm having some problems with your scans. I've observed that the WebUI disappears when memory runs out. I've also observed that the Instances stay alive and obviously keep consuming resources.

Wouldn't it be a good idea to create Instances progressively and to have some kind of monitoring of local resources?
I know that my needs are quite far from the main purpose of your framework, but if security is to be a priority, it shouldn't always require a lot of resources. Moreover, since one scan can last many hours, being able to run it on a small server seems quite fitting, don't you think?

Thanks for reading,


  1. Support Staff Posted by Tasos Laskos on 19 Sep, 2013 12:03 PM

    Is there any way I can get some numbers? Like how much RAM does the VM have, how many Instances are you running, and how much RAM do they consume?

    Also, what do you mean by "the instances are staying alive"? Are their processes still there after the scan finishes?
    If you're talking about running scans not dying when you close the WebUI, that's the intended behavior: if the WebUI crashes for whatever reason you won't lose your progress, and you'll be able to grab the report once you fire up the WebUI again.

    Or are you using a Dispatcher? In that case that's the Dispatcher's job: to maintain a pool of Instances. If you don't want that, you can perform direct scans, where Instances are spawned as needed.

    So let's start by you giving me that info and we'll figure this out.

  2. Posted by mailinglist on 29 Sep, 2013 07:26 PM

    Hi Tasos,

    Sorry for my late answer.
    I found this in my /var/log/messages:

    Sep 16 15:03:39 openvas kernel: OOM killed process 7152 (ruby) vm:1849208kB, rss:194448kB, swap:407644kB
    Sep 29 17:59:44 openvas kernel: : OOM killed process 2003 (ruby) vm:6109920kB, rss:1832860kB, swap:2991764kB
    Sep 29 20:49:26 openvas kernel: : OOM killed process 8196 (ruby) vm:1692656kB, rss:42328kB, swap:409276kB
    Sep 29 20:52:03 openvas kernel: : OOM killed process 8200 (ruby) vm:1418896kB, rss:167280kB, swap:64720kB
    Sep 29 20:52:19 openvas kernel: : OOM killed process 8169 (ruby) vm:7557696kB, rss:1969348kB, swap:3945872kB

    I think this is a virtualization issue.

    By the way, is it normal that no dates appear in the Arachni logs?
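(As an aside for anyone triaging similar logs: a quick Python sketch, purely illustrative and not part of Arachni, that pulls the OOM-killed processes out of lines like the ones above and sorts them by resident memory, which is how one spots the worst offender:)

```python
import re

# Matches OOM-killer lines in the format shown in the excerpt above, e.g.:
#   ... kernel: : OOM killed process 2003 (ruby) vm:6109920kB, rss:1832860kB, swap:2991764kB
OOM_RE = re.compile(
    r"OOM killed process (?P<pid>\d+) \((?P<name>\S+)\) "
    r"vm:(?P<vm>\d+)kB, rss:(?P<rss>\d+)kB, swap:(?P<swap>\d+)kB"
)

def worst_oom_kills(lines):
    """Return (pid, name, rss_mb) tuples sorted by resident memory, largest first."""
    hits = []
    for line in lines:
        m = OOM_RE.search(line)
        if m:
            hits.append((int(m.group("pid")), m.group("name"),
                         int(m.group("rss")) / 1024.0))
    return sorted(hits, key=lambda h: h[2], reverse=True)
```

Run against the excerpt above, this immediately surfaces PID 2003 with roughly 1.8 GB resident.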


  3. Support Staff Posted by Tasos Laskos on 29 Sep, 2013 07:54 PM

    Yeah, your VM is killing the processes because they're eating a lot of its memory. The one that worries me is PID 2003, which hit 1.8GB of RAM usage.

    Do you happen to know what that process was doing? Was it performing a scan? If so, can you try performing the scan again and checking if the RAM usage is similar?
    If I can reproduce this I'm pretty sure I'll be able to fix it.

    Also, which logs are you referring to? I'm pretty sure all of Arachni's log files have timestamps for each logged message.

  4. Posted by mailinglist on 29 Sep, 2013 08:16 PM

    Thanks for your answer, Tasos.

    In my production.log, in the WebUI part, I have entries like this:

    DispatcherManager#refresh Dispatcher Load (0.2ms) SELECT "dispatchers".* FROM "dispatchers"

    No date appears.
    Moreover, I didn't find any logs for the Arachni framework; the folder is empty.

    Regarding the memory issue, I've relaunched a scan and will keep an eye on the processes. I'll let you know.

  5. Support Staff Posted by Tasos Laskos on 29 Sep, 2013 08:19 PM

    Ah, those logs are generated by Ruby on Rails and they're just for the WebUI; you won't find any scan-related info in there.

  6. Posted by mailinglist on 29 Sep, 2013 08:23 PM

    So where are the Arachni logs stored?


  7. Support Staff Posted by Tasos Laskos on 29 Sep, 2013 08:37 PM

    It depends on how you run the scan. If you were using a Dispatcher to get an Instance for the scan, you'd find logs for the Dispatcher under the "framework" directory; otherwise there's nothing to log. If you were using a Dispatcher you'd also be able to log all output of the scanners (as if you were running them from the CLI) by using --reroute-to-logfile.

    None of the logs would help debug this though. If you can reproduce the RAM consumption issue I'll then have to try it for myself and give it a very close look.

  8. Posted by mailinglist on 29 Sep, 2013 08:58 PM

    For the moment I launch the process like this:
    bin/arachni_web -D --host
    Do you mean that launching it with bin/arachni_web -D --host --reroute-to-logfile /path/to/log/file would be enough?
    Sorry if I didn't understand correctly.


  9. Support Staff Posted by Tasos Laskos on 29 Sep, 2013 09:07 PM

    No, you'll need to:

    • Start a Dispatcher using bin/arachni_rpcd --reroute-to-logfile.
    • Add that Dispatcher to the WebUI.
    • Select "Remote" scan type using that Dispatcher via the advanced options when starting a new scan.

    You'll then find two types of logs in the "framework" directory: one from the Dispatcher and a few from the Instances the Dispatcher keeps in its pool. One of those logfiles will belong to the Instance you're using for the scan.

  10. Posted by mailinglist on 29 Sep, 2013 09:20 PM

    Hi Tasos,

    To be sure: do the Dispatchers have to be started from the CLI?
    Then, in the WebUI, what port do I have to set?
    Does the Dispatcher need to be on a different machine?
    Can a Dispatcher be started at boot?


  11. Support Staff Posted by Tasos Laskos on 29 Sep, 2013 09:25 PM

    1. Yep, it's like a server which keeps track of scanner Instances. You generally don't need it in simple-ish deployments, but it provides improved logging, among other things.
    2. Default address is localhost:7331.
    3. Nope.
    4. Sure, you just have bin/arachni_rpcd --reroute-to-logfile run at boot.
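For point 4, one concrete way to run that command at boot (my suggestion, not something specified in the thread) is a cron @reboot entry; the install path below is a placeholder:

```shell
# crontab -e   (as the user that runs Arachni)
# /path/to/arachni is a placeholder for your actual install directory.
@reboot cd /path/to/arachni && bin/arachni_rpcd --reroute-to-logfile
```

A systemd or init script would work just as well; cron is simply the smallest moving part.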
  12. Posted by mailinglist on 29 Sep, 2013 09:50 PM

    so ....

    Let's log all that stuff ;-)

  13. Posted by mailinglist on 30 Sep, 2013 05:59 AM

    So I think I have another issue.
    This morning the UI was no longer available.
    The Ruby processes were still running.
    No OOM kills appear in the logs that could have killed the processes.
    I restarted the UI from the CLI and the scan is still running.
    The VM has enough resources for this scan.

    I'll try everything in a fully virtualized VM. Maybe that will work better.
    What amazes me is that I still haven't managed to scan a site. My configuration must be weird!


  14. Posted by user021 on 30 Sep, 2013 11:34 AM

    @mailinglist, as for the memory usage increasing on long scans: I proposed to Tasos to add a disk buffer file for temporarily storing data, to free up more RAM, but the idea was not accepted (maybe in the future, who knows) and I respect his decision.

    Currently open issues related to this:

    A few points that might help reduce memory usage, though:

    • use a filter for binary content, so the crawler doesn't store all that data, like exclude='.a3c|.ace|.aif|.aifc|.aiff|.arj|.asf|.asx|.attach|.au|.avi|.avi|.bin|.bmp|.cab|.cache|.class|.djv|.djvu|.dwg|.es|.esl|.exe|.fif|.fvi|.gif|.gz|.hqx|.ice|.ico|.ief|.ifs|.iso|.jar|.jpe|.jpeg|.jpg|.kar|.mdb|.mid|.midi|.mov|.movie|.mp|.mp2|.mp3|.mp4|.mpeg|.mpeg2|.mpg|.mpg2|.mpga|.msi|.pac|.pdf|.png|.ppt|.psd|.qt|.ra|.ram|.rar|.rm|.rpm|.snd|.svf|.tar|.tgz|.tif|.tiff|.tpl|.uff|.wav|.wma|.wmv|.zip'

    • in case you're using the Trainer module, that also has a big impact
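A filter list like the one above is easier to maintain as a plain extension list turned into one anchored regex. A Python sketch (illustrative only; Arachni's actual --exclude pattern semantics may differ, so treat this as a way to prototype and test the filter, not as the exact CLI format):

```python
import re

# A subset of the binary-ish extensions listed above; extend as needed.
EXTENSIONS = ["gif", "jpg", "jpeg", "png", "pdf", "zip",
              "exe", "mp3", "mp4", "avi", "iso"]

# One anchored, case-insensitive pattern instead of dozens of substrings.
BINARY_RE = re.compile(r"\.(?:" + "|".join(EXTENSIONS) + r")$", re.IGNORECASE)

def is_binary_url(url: str) -> bool:
    """True if the URL path ends in one of the filtered extensions."""
    path = url.split("?", 1)[0]  # ignore the query string
    return bool(BINARY_RE.search(path))
```

Anchoring on the end of the path avoids false positives such as a ".zip" appearing inside a query string.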

  15. Support Staff Posted by Tasos Laskos on 30 Sep, 2013 11:55 AM

    @user021 Like I've already explained, it is very unlikely that this has anything to do with data the system stores. If you experience high RAM consumption then something's leaking memory, it's a bug that needs to be fixed. A disk buffer generally wouldn't do you any good.

    Also, the crawler doesn't store response bodies; that filter will prevent the crawler from following links that match it, which will save you time and bandwidth, but it has little to no effect on RAM consumption.

    You may be onto something with the Trainer though. The Trainer does store new elements in RAM, and those elements are part of a page, so if a lot of new elements appear during the audit, those pages will stay in RAM until they're audited. This is probably the only data structure in the system that can cause such issues, assuming a memory leak has been ruled out. In that case, a disk buffer would indeed help.

    However, this is all conjecture until I have a reproducible case.

  16. Support Staff Posted by Tasos Laskos on 10 Oct, 2013 07:28 PM

    Guys, any news on this? I'd really like to get this sorted.

  17. Posted by mailinglist on 10 Oct, 2013 07:43 PM

    Hi Tasos,

    Thanks for following up. My schedule is a bit overloaded. I should be able to have a look at the VM that hosts your software in the next 10 days. I'll inform you immediately.


  18. Support Staff Posted by Tasos Laskos on 10 Oct, 2013 11:15 PM

    I just pushed an optimization that leads to pages being consumed ASAP instead of being stored in RAM for extended periods of time.

    If the problem was caused by the Trainer, this should take care of it.

  19. Support Staff Posted by Tasos Laskos on 10 Oct, 2013 11:54 PM

    Well, there's one more case that I may have missed, so I'll see if I can do more.

  20. Support Staff Posted by Tasos Laskos on 11 Oct, 2013 01:36 AM

    OK, the page queue is now offloaded to disk. If that was indeed the problem it should now be fixed.

  21. Support Staff Posted by Tasos Laskos on 11 Oct, 2013 03:14 AM

    More good news, I tweaked the part of the HTTP library that has to do with how many requests can be queued at a time and this has greatly reduced RAM consumption.

    For example, a very simple scan which requires 94MB of RAM with the current stable version, requires 64MB with the code in the experimental branch.

  22. Support Staff Posted by Tasos Laskos on 30 Oct, 2013 03:38 AM

    Hey folks,

    I've optimized the hell out of Arachni and the changes can be found in the nightlies.
    Differences for a sample scan follow.


    RAM usage:

    • After crawl: 48.068 MB
    • After audit: 65.116 MB
    • After plugins: 82.7 MB
    • After reports: 86.824 MB

      [~] Sent 53861 requests.
      [~] Received and analyzed 53861 responses.
      [~] In 00:20:59
      [~] Average: 42 requests/second.


    RAM usage:

    • After crawl: 48.068 MB
    • After audit: 85.12 MB
    • After plugins: 102.732 MB
    • After reports: 105.088 MB

      [~] Sent 78657 requests.
      [~] Received and analyzed 78657 responses.
      [~] In 00:29:32
      [~] Average: 44 requests/second.
  23. Tasos Laskos closed this discussion on 30 Oct, 2013 03:38 AM.
