#%env/templates/metas.template%#
#%env/templates/header.template%#
#%env/templates/submenuIndexCreate.template%#
Start Crawling Job:
You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it, and then download the content behind these links. This is repeated recursively until the depth given under "Crawling Depth" is reached.
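The paragraph above describes a depth-limited, breadth-first traversal: fetch a page, extract its links, and enqueue them one level deeper until "Crawling Depth" is reached. The following Java sketch illustrates only that idea; it is not YaCy's actual crawler (which additionally honors robots.txt, URL filters, and the "Accept '?' URLs" setting shown in the tables below), and all class, record, and method names here are invented for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal sketch of a depth-limited, breadth-first crawl (illustrative only). */
public class DepthLimitedCrawl {

    // Naive absolute-href extraction; a real crawler uses an HTML parser.
    private static final Pattern HREF = Pattern.compile("href=\"(https?://[^\"]+)\"");

    private record Task(String url, int depth) {}

    public static void crawl(String startUrl, int maxDepth) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Set<String> seen = new HashSet<>();
        Queue<Task> queue = new ArrayDeque<>();
        queue.add(new Task(startUrl, 0));
        seen.add(startUrl);

        while (!queue.isEmpty()) {
            Task task = queue.poll();
            HttpRequest request = HttpRequest.newBuilder(URI.create(task.url())).GET().build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("fetched at depth " + task.depth() + ": " + task.url());

            if (task.depth() >= maxDepth) continue; // the "Crawling Depth" limit

            Matcher m = HREF.matcher(body);
            while (m.find()) {
                String link = m.group(1);
                if (seen.add(link)) {                 // download each URL only once
                    queue.add(new Task(link, task.depth() + 1));
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        crawl("https://example.org/", 2); // start page, its links, and their links
    }
}
```

With a depth of 0 only the start page is fetched; each increment follows one more layer of links, which is what the "Depth" column in the tables below reports.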
#(info)# :: Crawling paused successfully. :: Continue crawling. #(/info)#
#(refreshbutton)# :: #(/refreshbutton)#
Recently started remote crawls in progress:
| Start Time | Peer Name | Start URL | Intention/Description | Depth | Accept '?' URLs |
|---|---|---|---|---|---|
| #[cre]# | #[peername]# | #[startURL]# | #[intention]# | #[generalDepth]# | #(crawlingQ)#no::yes#(/crawlingQ)# |
Recently started remote crawls, finished:
| Start Time | Peer Name | Start URL | Intention/Description | Depth | Accept '?' URLs |
|---|---|---|---|---|---|
| #[cre]# | #[peername]# | #[startURL]# | #[intention]# | #[generalDepth]# | #(crawlingQ)#no::yes#(/crawlingQ)# |
Remote Crawling Peers:
#(remoteCrawlPeers)#No remote crawl peers available.
::#[num]# peers available for remote crawling.
| Peer State | Peers |
|---|---|
| Idle Peers | #{available}##[name]# (#[due]# seconds due) #{/available}# |
| Busy Peers | #{busy}##[name]# (#[due]# seconds due) #{/busy}# |
#(/remoteCrawlPeers)#
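Throughout this page, `#[key]#` marks a value substitution, `#(key)#...::...#(/key)#` a branch chosen by the servlet, and `#{key}#...#{/key}#` a repeated block. As a rough illustration of how the two simplest forms could be resolved (a toy sketch, not YaCy's actual template engine, which also supports multi-branch and nested forms), in Java:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Toy resolver for the two simplest placeholder forms (illustrative only). */
public class TemplateSketch {

    // #[key]# : straight substitution of the value stored under "key"
    private static final Pattern FIELD = Pattern.compile("#\\[(\\w+)\\]#");
    // #(key)#first::second#(/key)# : pick a branch; only the two-branch case is handled here
    private static final Pattern CHOICE = Pattern.compile("#\\((\\w+)\\)#(.*?)::(.*?)#\\(/\\1\\)#");

    public static String render(String template, Map<String, String> values) {
        // Resolve branches first, then plain fields.
        Matcher c = CHOICE.matcher(template);
        StringBuilder tmp = new StringBuilder();
        while (c.find()) {
            boolean second = "1".equals(values.getOrDefault(c.group(1), "0"));
            c.appendReplacement(tmp, Matcher.quoteReplacement(second ? c.group(3) : c.group(2)));
        }
        c.appendTail(tmp);

        Matcher f = FIELD.matcher(tmp.toString());
        StringBuilder out = new StringBuilder();
        while (f.find()) {
            f.appendReplacement(out, Matcher.quoteReplacement(values.getOrDefault(f.group(1), "")));
        }
        f.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String row = "| #[peername]# | #(crawlingQ)#no::yes#(/crawlingQ)# |";
        // Prints: | examplePeer | yes |
        System.out.println(render(row, Map.of("peername", "examplePeer", "crawlingQ", "1")));
    }
}
```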