Fetch new URLs to crawl
The newly added URLs will be crawled without any filter restrictions except for the static stop-words.
The Re-Crawl option isn't used and the sites won't be stored in the Proxy Cache. Text and media types will be indexed.
Since these URLs will be requested explicitly from another peer, they won't be distributed for remote indexing.
Fetch from URL:
#(hostError)#:: Malformed URL #(/hostError)#
#(saved)#::
Or select a previously entered URL:
#{urls}#
#[url]# #{/urls}#
#(/saved)#
#(peersKnown)#::
Fetch from Peer:
Choose a random peer #{peers}#
#[name]# #{/peers}#
Number of URLs to request:
#(peerError)#::
Error fetching URL list from #[hash]#:#[name]# ::
Peer with hash #[hash]# doesn't seem to be online anymore #(/peerError)#
#(/peersKnown)#
Frequency:
Fetch only once
Fetch when queue is empty
Fetch at a specified interval:
every
Days
Hours
Minutes
#(freqError)#:: Invalid interval; fetching only once #(/freqError)#