diff --git a/htroot/Blacklist_p.html b/htroot/Blacklist_p.html index 33c7cf7cd..9cd9b6b71 100644 --- a/htroot/Blacklist_p.html +++ b/htroot/Blacklist_p.html @@ -13,103 +13,144 @@ from being loaded. You can define several blacklists and activate them separately. You may also provide your blacklists to other peers by sharing them; in return you may collect blacklist entries from other peers.

-
+   +
-Edit list: - -
-
-New list: - - -
-
-
-
- -
+ Edit list: + +
+
+
+
+
+
+
+ New list: + +
+
+
-

Active list: #[filename]#

-
+ + +
+

Active list: #[filename]#

+
-
-These are the domain name / path patterns in this blacklist:
-You can select them here for deletion
-
- - -
- -

-Enter new domain name / path pattern in the form:
-"<domain>/<path-regexpr>":
-
- - -

- -Import blacklist items from other YaCy peers:
-
- -Host: -
- -
- -

Import blacklist items from URL:
-

- -URL: -
- -
- -

Import blacklist items from file:
-

- -File: -
- -
- -
+ These are the domain name / path patterns in this blacklist:
+ You can select them here for deletion +
+
+ + +

+ +

+ Enter new domain name / path pattern in the form:
+ "<domain>/<path-regexpr>": +
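For instance (an illustrative entry; example.com is a placeholder), the pattern

    ads.example.com/.*

blocks every path on the host ads.example.com: the part before the first slash names the domain, and the part after it is a regular expression matched against the URL path.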
+

+

+ +
+ +
  + Import blacklist items from other YaCy peers:
+
+ + Host: +

+ +

+
+ Import blacklist items from URL:
+
+ + URL: +

+ +

+
+

Import blacklist items from file:

+
+ + File: +

+ +

+

#(status)# :: diff --git a/htroot/EditProfile_p.html b/htroot/EditProfile_p.html index ded879c5d..6f7c38718 100644 --- a/htroot/EditProfile_p.html +++ b/htroot/EditProfile_p.html @@ -15,56 +15,60 @@ You do not need to provide any personal data here, but if you want to distribute

- +
 
NameName
Nick NameNick Name
HomepageHomepage
eMaileMail
  
ICQICQ
JabberJabber
Yahoo!Yahoo!
MSNMSN
  
CommentComment
+
diff --git a/htroot/Help.html b/htroot/Help.html index 71306d3ee..10eaf66f1 100644 --- a/htroot/Help.html +++ b/htroot/Help.html @@ -1,54 +1,86 @@ - -YaCy: Help -#%env/templates/metas.template%# - - -#%env/templates/header.template%# -

-

Help

+ + YaCy: Help + #%env/templates/metas.template%# + + + #%env/templates/header.template%# +
+
+

Help

-

+

This is a distributed web crawler and also a caching HTTP proxy. You are using the online interface of the application. You can use this interface to configure your personal settings, proxy settings, access control and crawling properties. You can also use this interface to start crawls, send messages to other peers and monitor your index, cache status and crawling processes. Most importantly, you can use the search page to search either your own or the global index.
-

+

-

+

For more detailed information, visit the YaCy homepage. -

+

-

Local and Global Search: Options and Functions

+

Local and Global Search: Options and Functions

The proxy provides a search interface that accesses your local index, created from web pages that passed the proxy. The search can also be applied globally by searching other peers. You can use the following options to enhance your search results:
+

+ -
Search Word List +
+ Search Word List + You can search for several words simultaneously. Words must be separated by a single space. The words are combined conjunctively, which means that every word must occur in the result, not just any of them; for example, a search for 'yacy proxy' returns only pages containing both words. If you do a global search (see below) you may get different results each time you search. -
Maximum Number of Results +
+ Maximum Number of Results + You can select the maximum number of result links you want. We do not yet support multiple result pages covering every possible link. Instead, we encourage you to refine the search results by submitting more search words. -
Result Order Options +
+ Result Order Options + The search engine provides an experimental 'Quality' ranking. In contrast to other known search engines, we also provide a result order by date. If you change the order to 'Date-Quality', the most recently updated page from the search results is listed first. For pages that have the same date, the secondary order, 'Quality', is applied. -
Resource Domain +
+ Resource Domain + This search engine is constructed to search the web pages that pass the proxy. But the search index is distributed to other peers as well, so you can also search globally: this function is currently only rudimentary, but can be chosen for test cases. Future releases will automatically distribute index information before a search happens, forming a high-performance distributed hash table -- a very fast global search. -
Maximum Search Time +
+ Maximum Search Time + Searching the local index is extremely fast; it happens within milliseconds, even for a large number (millions) of pages. But searching the global index needs more time to find the remote peers that contain the best search results. This is especially the case while the distributed index is in test mode. Search results get more stable (repeated global searches produce more similar results) the longer the search time is. -
+ + + +


You may want to use accesskeys to navigate through the YaCy web interface:
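In HTML, such a shortcut is declared with the accesskey attribute on a link; a minimal sketch (illustrative markup, not a line taken from the YaCy templates):

    <!-- in many browsers, pressing Alt+S (or a similar modifier) follows this link -->
    <a href="index.html" accesskey="s">Search Page</a>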

diff --git a/htroot/IndexCreate_p.html b/htroot/IndexCreate_p.html index 5d5b348d8..ef21cf9d8 100644 --- a/htroot/IndexCreate_p.html +++ b/htroot/IndexCreate_p.html @@ -12,29 +12,30 @@

Start Crawling Job:  -You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under "Crawling Depth". +You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under "Crawling Depth".

- + +
@@ -107,7 +116,7 @@ You can define URLs as start points for Web page crawling and start crawling her
@@ -132,17 +141,19 @@ You can define URLs as start points for Web page crawling and start crawling her

-

Distributed Indexing: +
+Distributed Indexing: Crawling and indexing can be done by remote peers. -Your peer can search and index for other peers and they can search for you.
-
Crawling Depth: + This defines how many levels deep the Crawler will follow links embedded in websites.
A minimum of 1 is recommended and means that the page you enter under "Starting Point" will be added to the index, but no linked content is indexed. 2-4 is good for normal indexing. Be careful with the depth: assuming an average branching factor of 20, a prefetch depth of 8 would index 25,600,000,000 pages, which may well be the whole WWW.
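As a quick sanity check of that figure: assuming every page links to about 20 previously unseen pages, the frontier grows by a factor of 20 per level, so the number of pages reached at depth 8 is about

    20^8 = 2^8 * 10^8 = 256 * 10^8 = 25,600,000,000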
Crawling Filter: + This is an emacs-like regular expression that must match the URLs that are to be crawled. Use this, for example, to restrict the crawl to a single domain. If you set this filter it makes sense to increase the crawling depth. @@ -43,15 +44,15 @@ You can define URLs as start points for Web page crawling and start crawling her
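For illustration, a filter that keeps the crawl on a single host might look like this (www.example.com is a placeholder; the exact pattern depends on the regular-expression dialect mentioned above):

    http://www\.example\.com/.*

Each candidate URL is matched against the expression, so links leading to other hosts are skipped.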
Accept URLs with '?' / dynamic URLs: + A question mark is usually a hint for a dynamic page. URLs pointing to dynamic content should usually not be crawled. However, there are sometimes web pages with static content that are accessed with URLs containing question marks. If you are unsure, leave this unchecked to avoid crawl loops.
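For example, a URL such as

    http://www.example.com/forum.php?topic=42&page=3

carries its parameters after the question mark; one script can generate an unbounded number of such URLs, which is how crawl loops arise.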
Store to Proxy Cache: + This option is used by default for proxy prefetch, but is not needed for explicit crawling. We recommend leaving this switched off unless you want to inspect the crawl results with the Cache Monitor. @@ -60,19 +61,27 @@ You can define URLs as start points for Web page crawling and start crawling her
Do Local Indexing: + This enables indexing of the web pages the crawler will download. This should be switched on by default, unless you want to crawl only to fill the Proxy Cache without indexing.
Do Remote Indexing:
- Describe your intention to start this global crawl (optional):
-
- This message will appear in the 'Other Peer Crawl Start' table of other peers. -
+ + + Describe your intention to start this global crawl (optional):

+

+ This message will appear in the 'Other Peer Crawl Start' table of other peers. +
+
If checked, the crawler will contact other peers and use them as remote indexers for your crawl. If you need your crawling results locally, you should switch this off. Only senior and principal peers can initiate or receive remote crawls. @@ -82,7 +91,7 @@ You can define URLs as start points for Web page crawling and start crawling her
Exclude static Stop-Words + This can be useful to prevent extremely common words, e.g. "the", "he", "she", "it"..., from being added to the database. To exclude all words given in the file yacy.stopwords from indexing, check this box.
Starting Point: + @@ -119,12 +128,12 @@ You can define URLs as start points for Web page crawling and start crawling her
From File:
Existing start URLs are re-crawled. + Existing start URLs are re-crawled. Other already visited URLs are sorted out as "double". A complete re-crawl will be available soon.
+Your peer can search and index for other peers and they can search for you.

+ +
- + - - + + + +
Accept remote crawling requests and perform crawl at maximum load
@@ -155,9 +166,13 @@ Your peer can search and index for other peers and they can search for you. Do not accept remote crawling requests (please set this only if crawling just one page per minute is not acceptable to you; see the option above)
-
+ + +

diff --git a/htroot/Language_p.html b/htroot/Language_p.html index 7dfb29a01..8273b842e 100644 --- a/htroot/Language_p.html +++ b/htroot/Language_p.html @@ -1,40 +1,70 @@ - -YaCy '#[clientname]#': Language selection -#%env/templates/metas.template%# - - -#%env/templates/header.template%# -

-

Language selection

-

-You can change the language of the YaCy-webinterface with translation files. -
-
-Current language: default(english)
-Languagefile Author:
-Send additions to maintainer:
-

- + + YaCy '#[clientname]#': Language selection + #%env/templates/metas.template%# + + + #%env/templates/header.template%# +

+

Language selection

+

+ You can change the language of the YaCy web interface with translation files. +

 
+ Current language: + default (English) +
+ Language file author: +
+ Send additions to maintainer: + +
+ Languages: +
-Languages:
-
+

- +
-Install new language from URL:
+Install new language from URL: +

Use this language
+
#(status)# :: diff --git a/htroot/ProxyIndexingMonitor_p.html b/htroot/ProxyIndexingMonitor_p.html index bbdaa9d08..2f7d559a0 100644 --- a/htroot/ProxyIndexingMonitor_p.html +++ b/htroot/ProxyIndexingMonitor_p.html @@ -19,14 +19,14 @@ and automatically excluded from indexing.

- - +
+ - + embedded URLs, but since embedded image links are loaded by the browser this means that only embedded href-anchors are prefetched additionally. - + - + - + - + - + diff --git a/htroot/Skins_p.html b/htroot/Skins_p.html index e7e7b5d1a..ef3a3ba48 100644 --- a/htroot/Skins_p.html +++ b/htroot/Skins_p.html @@ -10,12 +10,25 @@

Skin selection

You can change the appearance of YaCy with skins. Select one of the default skins, download new skins, or create your own skin.

- -Current skin: #[currentskin]#

+

Proxy pre-fetch setting: this is an automated HTML page loading procedure that takes actually proxy-requested URLs as start points for crawling.
Prefetch Depth @@ -34,28 +34,28 @@ URLs as crawling start points for crawling.
Store to Cache It is almost always recommended to set this on. The only exception is if you have another caching proxy running as a secondary proxy and YaCy is configured to use that proxy in proxy-proxy mode.
Proxy generally
Path The path where the pages are stored (max. length 300)
Size The size in MB of the cache.
 
 
+Current skin: + #[currentskin]# +
+Skins: + -Skins:

- +
-Install new skin from URL:
+Install new skin from URL: +

Use this skin

- +
#(status)# :: Unable to get URL: #[url]# :: Error saving the skin. #(status)# - +
#%env/templates/footer.template%# diff --git a/htroot/index.html b/htroot/index.html index 05576f47f..93a2c564e 100644 --- a/htroot/index.html +++ b/htroot/index.html @@ -1,78 +1,113 @@ - -YaCy '#[clientname]#': Search Page -#%env/templates/metas.template%# - - - + + YaCy '#[clientname]#': Search Page + #%env/templates/metas.template%# + + + #%env/templates/header.template%# - + -
-

YaCy logo Kaskelix
P2P WEB SEARCH


-
#[promoteSearchPageGreeting]#

+
+

YaCy logo Kaskelix
P2P WEB SEARCH


+
#[promoteSearchPageGreeting]#

-
-
+
+ +

+

-
-Max. number of results: - -  order by: - -
-Resource: - -     Max. search time (seconds): - -
-URL mask: -#(urlmaskoptions)# - -:: - restrict on - show all -#(/urlmaskoptions)# -
- -

+

+ Max. number of results: + + +
+ order by: + + +
+ Resource: + + +
+ Max. search time (seconds): + + +
+ URL mask: + + #(urlmaskoptions)# + + :: + restrict on + show all + #(/urlmaskoptions)# +
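Assuming the URL mask is, like the crawl filter above, a regular expression matched against each result URL (with the default mask .* showing all results), a restricting mask might look like this illustrative pattern:

    http://www\.example\.com/.*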
+
+ +

#(excluded)# :: The following words are stop-words and have been excluded from the search: #[stopwords]#.