Search Engine, MetaCrawler, Data Extraction Software in one
The original web crawler software behind AutoMapIt has already been modified in a number of ways for custom spidering jobs. Contact us with your project specifications for a quote. The spider has been adapted to copy HTML from dynamic webpages and convert an entire site to static HTML while creating clean URLs. Our crawler also measures various SEO statistics for the pages it spiders, so it can run SEO reports on an entire website in one pass, and it can be trained to store search engine data on the websites it visits. What do you need it to do?
The spider comes as a core application capable of extracting URLs from a page, then spidering the next URL in its list. Our crawler honors the robots.txt exclusion standard and uses polite extraction methods to avoid swamping your server. Anything you need it to do beyond that is open to your imagination.
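The core loop described above, extracting links from a page, checking robots.txt, then moving to the next URL politely, can be sketched roughly as follows. AutoMapIt itself is written in PHP; this is an illustrative Python analogue using only standard-library modules, and the class and function names are our own, not part of the product:

```python
import urllib.robotparser
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from the href attributes of <a> tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def allowed_by_robots(robots_txt, agent, url):
    """Check a URL against robots.txt rules before fetching it."""
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# Extract the next URLs to spider from a sample page.
extractor = LinkExtractor("http://example.com/")
extractor.feed('<a href="/about">About</a> <a href="contact.html">Contact</a>')
print(extractor.links)

# Honor a robots.txt exclusion rule before queuing a URL.
robots = "User-agent: *\nDisallow: /private/"
print(allowed_by_robots(robots, "AutoMapIt", "http://example.com/private/x"))
print(allowed_by_robots(robots, "AutoMapIt", "http://example.com/about"))
```

In a real crawl loop, "polite extraction" also means pausing between requests (for example `time.sleep()` between fetches) so the spider never hammers the target server.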
The AutoMapIt spider runs on Linux using PHP5 and MySQL. Larger or longer-running projects may require hosting in addition to the spider itself. While every effort is made to optimize the script, when it is handling and processing large amounts of data, the system resources it uses exceed what is available on shared hosting plans. I offer fully managed hosting for your spidering application, but you may host it wherever you please.
Where would you like to crawl?