High Performance Grid shuts down
After crawling 218 pages using 3 dispatchers on localhost
./arachni_rpc --server localhost:7331 --audit-link --audit-forms --audit-cookies --audit-headers --modules=sqli --auto-redundant=1 --grid --spawns=3 'http://www.fao.com'
I get:
[-] undefined method `any?' for true:TrueClass
[-] /home/r/Desktop/ad/system/gems/bundler/gems/arachni-55dc50b44fc6/lib/arachni/ui/cli/rpc/rpc.rb:237:in `refresh_progress'
[-] /home/r/Desktop/ad/system/gems/bundler/gems/arachni-55dc50b44fc6/lib/arachni/ui/cli/rpc/rpc.rb:167:in `run'
[-] /home/r/Desktop/ad/system/gems/bundler/gems/arachni-55dc50b44fc6/bin/arachni_rpc:23:in `'
[-] /home/r/Desktop/ad/bin/../system/arachni-ui-web/bin/arachni_rpc:16:in `load'
[-] /home/r/Desktop/ad/bin/../system/arachni-ui-web/bin/arachni_rpc:16:in `'
[*] Shutting down and retrieving the report, please wait...
[*] Dumping audit results in '2013-04-15 19.26.49 -0400.afr'.
[*] Done!
I can confirm that it crawled the whole website using a single Instance with no errors whatsoever; I also tried with two dispatchers for a while and it was OK.
31 Posted by user021 on 17 Apr, 2013 05:16 PM
And like I said, it happens with a normal single-Instance scan too, which would explain why the grid was crawling for 6 hours with no audit.
Support Staff 32 Posted by Tasos Laskos on 17 Apr, 2013 05:20 PM
I'm not sure why you're fixated on that message, it has nothing to do with the crawl being finished or anything like that. What you are seeing is just a big (and somewhat slow) website being crawled.
I'm having a little trouble understanding your expectations.
33 Posted by user021 on 17 Apr, 2013 05:30 PM
So basically what you are telling me is that it doesn't work like it did before: now it needs to crawl the whole website and, once that's done, the audit process starts?
Support Staff 34 Posted by Tasos Laskos on 17 Apr, 2013 05:36 PM
I'm saying it never worked like that and that you may have confused the auto-redundant option with the link-count one.
35 Posted by user021 on 17 Apr, 2013 09:42 PM
Those two options are pretty hard to confuse, even for me. I don't know why I thought the audit should start after that HTTP queue message, my bad. About the exception from my first post, you can check it against them; I don't think they mind being scanned. I limited the link count so it doesn't take so long:
./arachni_rpc --server localhost:9184 --audit-link --audit-forms --audit-cookies --audit-headers --modules=sqli --auto-redundant=1 --grid --spawns=3 --link-count=300 'http://www.hackthissite.org'
PS: On the first test it generated the error and I think the link limit worked; now I'm running a second test and it shows "Discovered 609 pages."
Support Staff 36 Posted by Tasos Laskos on 17 Apr, 2013 09:47 PM
No worries. When using multiple Instances, the options you specify apply to each Instance individually; I think I'd better change that...
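To illustrate the planned change (a hypothetical sketch, not Arachni's actual code), splitting would presumably divide scan-wide limits such as link-count and http-req-limit by the Instance count before handing them out, instead of letting every Instance apply the full value on its own:

# Hypothetical sketch of splitting scan-wide limits across Instances --
# illustrative names and numbers, not Arachni's actual implementation.
def split_limits( link_count, http_req_limit, instance_count )
    {
        link_count:     link_count     / instance_count,
        http_req_limit: http_req_limit / instance_count
    }
end

# Before the change, each Instance applied the full --link-count on its own,
# which is how a --link-count=300 scan with spawned Instances could end up
# reporting "Discovered 609 pages."
split_limits( 300, 60, 3 ) # => { link_count: 100, http_req_limit: 20 }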
Support Staff 37 Posted by Tasos Laskos on 17 Apr, 2013 10:52 PM
Yay, I reproduced it! Fixing it shouldn't be that hard now that I can trigger it.
Support Staff 38 Posted by Tasos Laskos on 17 Apr, 2013 11:13 PM
OK I just fixed it, was a type casting issue. Pushing the nightlies again for you now, will let you know when they're available.
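For context, here is a minimal Ruby sketch of that class of type-casting bug, under the assumption (suggested by the backtrace, not by the actual patch) that a field the CLI expects to respond to #any? sometimes arrives over RPC as a plain boolean; all names below are illustrative:

# Hypothetical illustration -- not the actual Arachni fix.
# Calling #any? on a boolean raises exactly:
#   undefined method `any?' for true:TrueClass
def refresh_progress( progress )
    issues = progress[:issues]

    # Guard/cast before calling #any? so a boolean doesn't blow up the UI.
    issues = [] unless issues.respond_to?( :any? )

    puts "#{issues.size} issue(s) so far." if issues.any?
end

refresh_progress( { issues: true } )                # no longer raises
refresh_progress( { issues: [ 'SQL injection' ] } ) # prints "1 issue(s) so far."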
Support Staff 39 Posted by Tasos Laskos on 18 Apr, 2013 01:24 AM
The nightly packages have been updated, and tomorrow's will split the link-count and http-req-limit options by the Instance count.
Tasos Laskos closed this discussion on 18 Apr, 2013 01:24 AM.
user021 re-opened this discussion on 18 Apr, 2013 10:14 AM
40 Posted by user021 on 18 Apr, 2013 10:14 AM
Thanks, it is fixed. One more thing: when I start scanning a large website with the grid and without the auto-redundant option, what exactly does Arachni do while the status is Preparing? The logs show that it's crawling; if so, is it using only one Instance? I see no progress on the dispatchers. Also, is there any difference in crawling performance with the grid versus without it?
Support Staff 41 Posted by Tasos Laskos on 18 Apr, 2013 12:48 PM
It seems that the processes are too busy crawling to reply to progress request calls so you're left with the first one shown. Eventually (as the crawl seeds lighten up) the processes become less busy and start responding properly again.
This is more of an annoyance than a problem but I'll investigate it further because it just bugs me.
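For illustration, the polling behavior described above could look roughly like the following sketch, assuming a generic RPC client object rather than Arachni's actual interface:

require 'timeout'

# Hypothetical polling loop -- illustrative only, not Arachni's RPC client API.
# `instance` is assumed to expose a #progress call that can be slow to answer
# while the Instance is busy crawling.
def watch_progress( instance, interval = 5 )
    last_progress = nil

    loop do
        begin
            # A busy Instance may not reply in time; keep the last snapshot.
            last_progress = Timeout.timeout( interval ) { instance.progress }
        rescue Timeout::Error
            # The display appears "stuck" because stale data keeps being shown.
        end

        puts "Status: #{last_progress[:status]}" if last_progress
        sleep interval
    end
end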
Tasos Laskos closed this discussion on 18 Apr, 2013 12:48 PM.
user021 re-opened this discussion on 18 Apr, 2013 01:35 PM
42 Posted by user021 on 18 Apr, 2013 01:35 PM
I know you probably have many things on your mind as a developer, and I wouldn't be happy to be in your place with a guy bothering me about more and more bugs in my product, but in a way I think it's a necessary evil xD
I'm not sure, but I think I found something else. Using the autologin plugin, I noticed in the dispatchers' logs:
on one:
"AutoLogin: Found log-in form with name: login
AutoLogin: Form submitted successfully."
however, on the other two:
"AutoLogin: Could not find a form suiting the provided params at: http://www.x.com"
Support Staff 43 Posted by Tasos Laskos on 18 Apr, 2013 01:44 PM
I actually prefer that people keep me informed, if there are bugs they need to be fixed, nothing wrong with that.
But, I'd rather you report bugs at the issue tracker as this portal is here for support purposes and preferably keep discussions on topic and create new ones if need be.
I hadn't considered this behavior though... that plugin should only be run by the master Instance, which should then transmit the cookies and login sequence to the slaves.
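As a rough sketch of that intended flow (hypothetical method names, not Arachni's actual plugin API), the master would log in once and push the resulting session cookies to each slave:

# Hypothetical sketch -- illustrative names, not Arachni's actual code.
def distribute_session( master, slaves )
    # Master runs the autologin once and keeps the resulting session cookies.
    cookies = master.login_and_collect_cookies

    # Each slave reuses the master's session instead of logging in itself.
    slaves.each { |slave| slave.set_cookies( cookies ) }
end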
Unfortunately, I won't have time to fix this for the upcoming v0.4.2 release as this is about an experimental feature. I did however put it in my TODO list.
Thanks for playing with the grid man, it needed some thorough testing, let me know if anything else comes up.
Tasos Laskos closed this discussion on 18 Apr, 2013 01:44 PM.
Tasos Laskos re-opened this discussion on 18 Apr, 2013 01:51 PM
Support Staff 44 Posted by Tasos Laskos on 18 Apr, 2013 01:51 PM
I changed my mind, there's nothing wrong with each instance having its own session, for now. Looking into this.
Support Staff 45 Posted by Tasos Laskos on 18 Apr, 2013 05:38 PM
Hm, works fine for me, would you mind sending me the details privately via e-mail?
46 Posted by user021 on 18 Apr, 2013 06:22 PM
I didn't say it doesn't work, just what I saw in the logs. If you say the cookies and login sequence are transmitted to the slaves then I guess it's OK. Which details do you mean exactly, the logs? I deleted them but I can repro it and send them; still, there's nothing much of interest in them except for that message at the beginning and then the crawling and audit taking place. Or do you want to check whether one of the dispatchers somehow did not get the cookies?
Support Staff 47 Posted by Tasos Laskos on 18 Apr, 2013 06:25 PM
No, I meant that the autologin plugin worked for me from both Instances; it logged in fine without complaining that it didn't find the login form.
By details I meant the complete command line you used to run the scan including the credentials -- if at all possible.
Support Staff 48 Posted by Tasos Laskos on 18 Apr, 2013 10:13 PM
Fixed, and the nightlies have been updated. Also, it now splits the link-count and http-req-limit values depending on the number of Instances.
Tasos Laskos closed this discussion on 18 Apr, 2013 10:13 PM.