#%env/templates/metas.template%# #%env/templates/header.template%# #%env/templates/submenuConfig.template%#

Heuristics Configuration

A heuristic is an 'experience-based technique that helps in problem solving, learning and discovery' (Wikipedia). The search heuristics that can be switched on here are techniques that help to discover possible search results through link guessing, in-search crawling and requests to other search engines. When a search heuristic is used, the resulting links are not used directly as search results; instead, the loaded pages are indexed and stored like other content. This ensures that blacklists can be applied and that the searched word actually appears on the page that was discovered by the heuristic.

The success of a heuristic is marked with an image (heuristic:<name> (redundant)/heuristic:<name> (new link)) below the favicon to the left of the search result entry:
heuristic:<name> (redundant)
The search result was discovered by a heuristic, but the link was already known to YaCy
heuristic:<name> (new link)
The search result was discovered by a heuristic and was not previously known to YaCy

When a search is made using a 'site'-operator (like: 'download site:yacy.net'), the host given in the site-operator is instantly crawled with a host-restricted depth-1 crawl. That means: right after the search request, the portal page of the host is loaded, together with every page linked from it that points to a page on the same host. Because this 'instant crawl' must obey the robots.txt and a minimum access time between two consecutive pages, this heuristic is rather slow, but it may discover all wanted search results when the search is repeated after a pause of a few seconds.
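The host restriction described above can be sketched as a simple link filter: of all links found on the portal page, only those pointing to the same host are queued. This is an illustrative Python sketch, not YaCy's actual (Java) crawler code; the real crawl additionally checks robots.txt and enforces a minimum delay between two consecutive fetches.

```python
from urllib.parse import urlparse

def same_host_links(start_url, linked_urls):
    """Keep only the links from the portal page that stay on the
    same host, as the site-operator's host-restricted depth-1 crawl
    does. Purely illustrative; robots.txt handling and fetch-delay
    enforcement are omitted here."""
    host = urlparse(start_url).hostname
    return [u for u in linked_urls if urlparse(u).hostname == host]

# Example: only the yacy.net links would be crawled.
links = ["http://yacy.net/download.html",
         "http://yacy.net/en/index.html",
         "http://example.org/other.html"]
print(same_host_links("http://yacy.net/", links))
```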

When a search is made, all displayed result links are crawled with a depth-1 crawl. This means: right after the search request, every result page is loaded, together with every page linked from it. If you check 'add as global crawl job', the pages to be crawled are added to the global crawl queue (remote peers can pick up pages to be crawled). The default is to add the links to the local crawl queue (your peer crawls the linked pages).
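The local/global choice above can be sketched as a routing decision when the result links are queued. The function and queue names below are hypothetical, chosen only to illustrate the behavior of the 'add as global crawl job' checkbox.

```python
def enqueue_result_links(result_urls, add_as_global=False):
    """Sketch of the search-result heuristic: each displayed result
    URL is queued for a depth-1 crawl. With add_as_global=True the
    links go to the global queue (remote peers may fetch them);
    the default is the local queue (this peer crawls them itself).
    Names are illustrative, not YaCy internals."""
    queue = "global" if add_as_global else "local"
    return [(url, 1, queue) for url in result_urls]  # (url, depth, queue)
```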

When this heuristic is used, every search request line triggers a call to blekko. 20 results are taken from blekko and loaded simultaneously, then parsed and indexed immediately.
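The external-engine heuristic boils down to: forward the query, cap the result list at 20 URLs, then load and index each page like any other content (so blacklists still apply). The hooks `fetch_results` and `index_page` below are hypothetical placeholders for the remote query and the local indexer.

```python
def external_engine_heuristic(query, fetch_results, index_page, max_results=20):
    """Sketch of the external search-engine heuristic: the query is
    sent to another engine (blekko on this page), at most max_results
    result URLs are taken, and each page is loaded, parsed and indexed
    like other content. fetch_results and index_page are hypothetical
    hooks, not real YaCy APIs."""
    urls = fetch_results(query)[:max_results]
    for url in urls:
        index_page(url)  # loads, parses and indexes the page
    return urls
```

Because the pages pass through the normal indexing path, a result only appears in later searches if the search word really occurs on the fetched page.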

#%env/templates/footer.template%#