<span>Click on this API button to see documentation of the POST request parameters for crawl starts.</span>
</div>
@@ -215,7 +215,7 @@
You can define URLs as start points for Web page crawling and start crawling here.
"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links.
This is repeated recursively up to the depth specified under "Crawling Depth".
-A crawl can also be started using wget and the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API#Managing_crawl_jobs" target="_blank">post arguments</a> for this web page.
+A crawl can also be started using wget and the <a href="http://www.yacy-websearch.net/wiki/index.php/Dev:APICrawler" target="_blank">post arguments</a> for this web page.