I've never experienced the login form not showing when viewing it in a web browser. One annoyance I have noted: the developers have a timeout on the login page. If you do not log in within a set time, the login session expires and you have to click to get back to the login screen. I don't think this would affect Arachni, but I thought I'd mention it.
The scan does seem to complete, but I see these errors periodically; it throws about three of them and then it's fine for an hour. I set up Arachni on a Linux box, ran it from the command line, and the scan ran much faster: it had not completed in 21 hours when run from a Windows 10 system, but finished in 2 hours 45 minutes on the Linux box. Since I switched both the OS and from the WebUI to the CLI at the same time, I don't know which change made the difference. I'm running a scan from the GUI now as a comparison.
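For reference, the CLI run was along these lines. This is a sketch, not my exact command: the target URL, credentials, and check string are placeholders, though the flags themselves are standard Arachni v1.x options.

```shell
# Hypothetical reconstruction of the CLI scan; URL, credentials and the
# "check" string (text proving a logged-in session) are placeholders.
arachni http://internal-test-site.example \
  --plugin='autologin:url=http://internal-test-site.example/login,parameters=username=user&password=pass,check=Logout' \
  --report-save-path=scan.afr

# Convert the binary .afr report to a readable HTML report afterwards:
arachni_reporter scan.afr --reporter=html:outfile=report.html.zip
```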
The report I received from running Arachni from the CLI has a sitemap of 25 pages and a total of 25 issues, 10 of them High Severity (Cross-Site Request Forgery). The scan being done by the WebUI, still not complete after 22 hours, shows 62 pages discovered and 72 issues, 39 of them High Severity (Cross-Site Request Forgery). Why the discrepancy between the CLI and the WebUI?
The scan from the WebUI is still running at 52 hours now: 82 pages discovered, 71 cross-site request forgery issues. I still don't understand why the WebUI is finding and following paths that the CLI did not. Any thoughts?
With the WebUI I made a copy of the default profile and enabled the AutoLogin plugin, passing the same information as in the CLI. I did not initially see a place in the WebUI to exclude 'Logout' from the scan; going back, I did find it and am adding the exclusion now.
Would the missing exclusion have caused the scanner to find more pages? At 94 hours the scan still had not completed and the web interface was very slow to unresponsive. I'm starting over with the exclusion in place to see what happens.
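For anyone comparing the two interfaces, the CLI exposes the same logout exclusion as a scope option. A sketch, with a placeholder URL and a pattern assumed to match the site's actual logout path:

```shell
# Exclude any URL matching "logout" from the scan scope, so the crawler
# doesn't log itself out mid-scan; the pattern is an assumption and
# should be adjusted to the site's real logout link.
arachni http://internal-test-site.example \
  --scope-exclude-pattern=logout \
  --plugin='autologin:url=http://internal-test-site.example/login,parameters=username=user&password=pass,check=Logout'
```

Without an exclusion like this, every crawl of the logout link kills the session, and the autologin plugin has to re-authenticate, which could plausibly change which pages get discovered.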
So, to ensure there was no configuration difference between the CLI and WebUI scans, I exported the profile built in the WebUI and used it in the CLI. I had similar results to before: the WebUI scan ran over a three-day weekend and did not complete, so I cancelled it this morning. When I attempt to view that scan I'm told an error has occurred; its summary says it found 76 issues. The CLI scan found 18 issues and completed in about an hour.
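In case it helps anyone reproduce this comparison, the CLI can save and load profiles directly (filenames and URL below are placeholders):

```shell
# Save the options of a CLI run to a profile file for later reuse:
arachni http://internal-test-site.example --profile-save-filepath=site.afp

# Load that profile for a subsequent run:
arachni http://internal-test-site.example --profile-load-filepath=site.afp
```

The profile exported from the WebUI should be usable the same way via --profile-load-filepath.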
Again, there seems to be a large discrepancy between the CLI and WebUI results.
I tried running it from the WebUI on a Windows box before moving it to a Linux system, hoping the performance would be better, but it behaved about as I've described. Unfortunately this is an internal test site that contains PII. I'd be happy to provide any logs that may be useful; I can scrub them of any private data first.
It seems the WebUI finds more links than the CLI, but it grinds away until it finally becomes unresponsive.