#%env/templates/metas.template%# #%env/templates/header.template%#

URL-Fetcher

Fetch new URLs to crawl

The newly added URLs will be crawled without any filter restrictions except for the static stop-words. The Re-Crawl option isn't used and the sites won't be stored in the Proxy Cache. Text and media types will be indexed. Since these URLs are explicitly requested from another peer, they won't be distributed for remote indexing.

:
#(saved)#::
Or select a previously entered URL: #(/saved)# #(hostError)#:: Malformed URL#(/hostError)#
#(peersKnown)#::
:
#(peerError)#::  Error fetching URL list from #[hash]#:#[name]#::  Peer with hash #[hash]# doesn't seem to be online anymore#(/peerError)#
#(/peersKnown)#
Frequency:


:   #(freqError)#:: Invalid period, fetching only once#(/freqError)#
#(threadError)#:: Error stopping thread: it is not alive anymore:: Error restarting thread: it is not alive anymore#(/threadError)# #(runs)#::
Thread to fetch URLs is #(status)#running::stopped::paused#(/status)#
Total runs:
#[totalRuns]#
Last run duration:
#[lastRun]#
Last server response:
#[lastServerResponse]#
Total fetched URLs:
#[totalFetchedURLs]#
Total failed URLs:
#[totalFailedURLs]#
Last fetched URLs:
#[lastFetchedURLs]#
Failed URLs:
#[error]#
    #{error}#
  • #[reason]#: #[url]#
  • #{/error}#
#(status)# :: :: #(/status)#
#(/runs)#
#%env/templates/footer.template%#