The right \'\*\', after the \'\/\', can be replaced by a <a href=\"http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html\">regex</a>.==在 '/' 后边的 '*' 可用<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html">正则表达式</a>替换.
domain.net\/fullpath<==domain.net/完整路径<
>domain.net\/\*<==>domain.net/*<
\*.domain.net\/\*<==*.domain.net/*<
\*.sub.domain.net\/\*<==*.sub.domain.net/*<
#sub.domain.\*\/\*<==sub.domain.*/*<
#domain.\*\/\*<==domain.*/*<
a complete <a href=\"http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html\">regex</a> \(slow\)==一个完整的<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html">正则表达式</a> (慢)
#was removed from blacklist==wurde aus Blacklist entfernt
#was added to the blacklist==wurde zur Blacklist hinzugefügt
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==获取所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
Access to your peer from your own computer \(localhost access\) is granted. No need to configure an administration account.==允许从您自己的电脑访问您的peer(本地访问). 无需设置管理员账户.
#and the default icons and links on the search page can be replaced with you own.==und die standard Grafiken und Links auf der Suchseite durch Ihre eigenen ersetzen.
Skin Selection==选择皮肤
Select one of the default skins, download new skins, or create your own skin.==选择一个默认皮肤, 下载新皮肤或者创建属于您自己的皮肤.
Your YaCy installation behaves independently from other peers and you define your own web index by starting your own web crawl. This can be used to search your own web pages or to define a topic-oriented search portal.==本机YaCy独立于其他peer运行, 您可以通过启动自己的网页crawl来建立自己的网页索引. 此功能可用于搜索您自己的网页, 或者建立一个面向特定主题的搜索门户.
Files may also be shared with the YaCy server, assign a path here:==文件也可以与YaCy服务器共享, 在此指定路径:
Your peer name has not been customized; please set your own peer name==您的peer尚未命名, 请命名它
You may change your peer name==您可以改变您的peer名称
Peer Name:==Peer名称:
Your peer cannot be reached from outside==外部无法访问您的peer
which is not fatal, but would be good for the YaCy network==这并非致命问题, 但开放访问对YaCy网络有益
please open your firewall for this port and/or set a virtual server option in your router to allow connections on this port==请在防火墙中开放此端口, 并/或在路由器中设置虚拟服务器选项, 以允许此端口上的连接
Your peer can be reached by other peers==其他peer能够访问您的peer
monitor at the network page</a> what the other peers are doing==在网络页面监视</a>其他peer的活动
Your Peer name is a default name; please set an individual peer name.==您的peer名称为系统默认, 请设置另外一个名称.
You did not set a user name and/or a password.==您未设置用户名和/或密码.
Some pages are protected by passwords.==一些页面受密码保护.
You should set a password at the <a href="ConfigAccounts_p.html">Accounts Menu</a> to secure your YaCy peer.</p>::==您可以在 <a href="ConfigAccounts_p.html">账户菜单</a> 设置密码, 从而加强您的YaCy peer安全性.</p>::
You did not open a port in your firewall or your router does not forward the server port to your peer.==您未在防火墙中打开端口, 或者您的路由器未将服务端口转发至您的peer.
This is needed if you want to fully participate in the YaCy network.==如果您要完全加入YaCy网络, 此项是必须的.
You can also use your peer without opening it, but this is not recomended.==不开放端口您也可以使用您的peer, 但不推荐这样做.
A <a href=\"http://en.wikipedia.org/wiki/Heuristic\">heuristic</a> is an \'experience-based technique that help in problem solving, learning and discovery\' \(wikipedia\).==<a href="http://en.wikipedia.org/wiki/Heuristic">启发式</a>是'一种基于经验, 有助于解决问题, 学习与发现的技术' (Wikipedia).
The search heuristics that can be switched on here are techniques that help the discovery of possible search results based on link guessing, in-search crawling and requests to other search engines.==在此开启的搜索启发式方法, 是基于链接猜测, 搜索中crawl以及向其他搜索引擎发送请求, 来帮助发现可能搜索结果的技术.
When a search heuristic is used, the resulting links are not used directly as search result but the loaded pages are indexed and stored like other content.==使用搜索启发式方法时, 得到的链接不会直接作为搜索结果使用, 而是像其他内容一样, 先对载入的页面进行索引和存储.
This ensures that blacklists can be used and that the searched word actually appears on the page that was discovered by the heuristic.==这保证了黑名单依然有效, 并且搜索词确实出现在启发式方法所发现的页面中.
The success of heuristics are marked with an image==启发式搜索找到的结果会被特定图标标记
When a search is made using a \'site\'-operator \(like: \'download site:yacy.net\'\) then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'site'操作符搜索时(比如: 'download site:yacy.net'), 会立即对该操作符指定的主机进行一次仅限该主机, 深度为1的crawl.
That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==意即: 搜索请求发出后, 会立即载入该主机的门户页面, 以及此页面上所有指向同一主机页面的链接.
Because this \'instant crawl\' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search \(after a small pause of some seconds\).==由于'立即crawl'必须遵守robots.txt以及连续两个页面间的最小访问间隔, 此启发式方法相当慢, 但在第二次搜索时(等待几秒钟后)可能会发现所有想要的搜索结果.
scroogle: load external search result list==scroogle: 载入外部搜索引擎结果
When using this heuristic, then every search request line is used for a call to scroogle.==开启这个选项时, 每一次搜索都会引入scroogle的结果.
20 results are taken from scroogle and loaded simultanously, parsed and indexed immediately.==同时读取并索引从scroogle获得的20个结果.
The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==HTCache存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存.
The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是循环缓存: 缓存满时, 会删除最旧的条目, 以便腾出空间给新条目.
Integration of a Search Field for Live Search==搜索栏集成: 即时搜索
A \'Live-Search\' input field that reacts as search-as-you-type in a pop-up window can easily be integrated in any web page=='即时搜索'输入栏可以轻松集成到任何网页中: 当您键入关键字时, 它会在弹出窗口中即时显示搜索结果
This is the same function as can be seen on all pages of the YaCy online-interface \(look at the window in the upper right corner\)==此功能与YaCy在线界面所有页面上的搜索功能相同(见页面右上角的窗口)
Just use the code snippet below to integrate that in your own web pages==将以下代码添加到您的网页中
Please check if the address, as given in the example \'\#\[ip\]\#\:\#\[port\]\#\' here is correct and replace it with more appropriate values if necessary==请检查示例中给出的地址 '#[ip]#:#[port]#' 是否正确, 必要时用更合适的值替换
Code Snippet:==代码片段:
YaCy Portal Search==YaCy门户搜索
"Search"=="搜索"
Configuration options and defaults for \'yconf\':==配置设置和默认的'yconf':
is a mandatory property - no default<==是必需属性 - 无默认值<
YaCy P2P Web Search==YaCy P2P 网页搜索
Size and position \(width \| height \| position\)==尺寸和位置(宽度 | 高度 | 位置)
Specifies where the dialog should be displayed. Possible values for position: \'center\', \'left\', \'right\', \'top\', \'bottom\', or an array containing a coordinate pair \(in pixel offset from top left of viewport\) or the possible string values \(e.g. \[\'right\',\'top\'\] for top right corner\)==指定对话框的显示位置. position的可用值: 'center', 'left', 'right', 'top', 'bottom', 一个坐标对数组(相对视口左上角的像素偏移), 或者上述字符串值组成的数组(例如 ['right','top'] 表示右上角)
If modal is set to true, the dialog will have modal behavior; other items on the page will be disabled \(i.e. cannot be interacted with\).==如果modal设为true, 对话框将具有模态行为; 页面上的其他元素会被禁用(即无法与之交互).
Modal dialogs create an overlay below the dialog but above other page elements.==模态对话框会在对话框下方, 其他页面元素上方创建一个覆盖层.
If resizable is set to true, the dialog will be resizeable.==如果resizable设为true, 对话框的大小将可以调整.
Load JavaScript load_js==载入JavaScript load_js
If load_js is set to false, you have to manually load the needed JavaScript on your portal page.==如果load_js设为false, 您必须在门户页面中手动加载所需的JavaScript.
This can help to avoid timing problems or double loading.==这有助于避免时序问题或重复加载.
Load Stylesheets load_css==载入样式表 load_css
If load_css is set to false, you have to manually load the needed CSS on your portal page.==如果load_css设为false, 您必须在门户页面中手动加载所需的CSS.
download</a> ready made themes or <a href=\"http://jqueryui.com/themeroller/\" target=\"_blank\">create</a>==下载</a>现成的主题, 或者<a href="http://jqueryui.com/themeroller/" target="_blank">创建</a>
your own custom theme. <br/>Themes are installed into: DATA/HTDOCS/yacy/ui/css/themes/==一个您自己的主题. <br/>主题文件安装在: DATA/HTDOCS/yacy/ui/css/themes/
#P2P operation can run without remote indexing, but runs better with remote indexing switched on. Please switch 'Accept Remote Crawl Requests' on==P2P-Tätigkeit läuft ohne Remote-Indexierung, aber funktioniert besser, wenn diese eingeschaltet ist. Bitte aktivieren Sie 'Remote Crawling akzeptieren'
For P2P operation, at least DHT distribution or DHT receive \(or both\) must be set. You have thus defined a Robinson configuration==P2P操作需要开启DHT分发或DHT接收(或两者). 您当前的设置因此相当于一个Robinson配置.
Global Search in P2P configuration is only allowed, if index receive is switched on. You have a P2P configuration, but are not allowed to search other peers.==仅当开启索引接收时, 才允许在P2P配置中进行全球搜索. 您使用的是P2P配置, 但不允许搜索其他peer.
For Robinson Mode, index distribution and receive is switched off==在Robinson模式中, 索引分发和接收是关闭的.
#This Robinson Mode switches remote indexing on, but limits targets to peers within the same cluster. Remote indexing requests from peers within the same cluster are accepted==Dieser Robinson-Modus aktiviert Remote-Indexierung, aber beschränkt die Anfragen auf Peers des selben Clusters. Nur Remote-Indexierungsanfragen von Peers des selben Clusters werden akzeptiert
#This Robinson Mode does not allow any remote indexing \(neither requests remote indexing, nor accepts it\)==Dieser Robinson-Modus erlaubt keinerlei Remote-Indexierung (es wird weder Remote-Indexierung angefragt, noch akzeptiert)
# With this configuration it is not allowed to authentify automatically from localhost!==Diese Konfiguration erlaubt keine automatische Authentifikation von localhost!
# Please open the <a href=\"ConfigAccounts_p.html\">Account Configuration</a> and set a new password.==Bitte in der <a href="ConfigAccounts_p.html">Benutzerverwaltung</a> ein neues Passwort festlegen.
If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果您的peer运行在'Robinson模式', YaCy就是您自己搜索门户的搜索引擎, 不与其他peer交换数据
There is no index receive and no index distribution between your peer and any other peer==您不会与其他peer进行索引传递
In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==在Robinson集群模式下, 可以接受来自该集群内peer的远程crawl请求
>Private Peer==>私有Peer
Your search engine will not contact any other peer, and will reject every request==您的搜索引擎不会与其他peer联系, 并会拒绝每一个外部请求
When you allow access from the YaCy network, your data is recognized using keywords==当您允许来自YaCy网络的访问时, 您的数据会通过关键字被识别
Please describe your search portal with some keywords \(comma-separated\)==请用关键字描述您的搜索门户 (以逗号隔开)
If you leave the field empty, no peer asks your peer. If you fill in a \'\*\', your peer is always asked.==如果此栏留空, 其他peer不会查询您的peer. 如果填入 '*', 您的peer总是会被查询.
If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果您想将YaCy作为您的网站搜索门户, 您可能需要在这改变搜索页面的图标和信息.
The search page may be customized.==搜索页面可以自由定制.
You can change the \'corporate identity\'-images, the greeting line==您可以改变 'Corporate Identity' 图像, 问候语
and a link to a home page that is reached when the \'corporate identity\'-images are clicked.==以及点击 'Corporate Identity' 图像时所指向的主页链接.
You can create a personal profile here, which can be seen by other YaCy-members==您可以在这创建个人资料, 而且对其他YaCy成员可见
or <a href="ViewProfile.html\?hash=localhash">in the public</a> using a <a href="ViewProfile.rdf\?hash=localhash">FOAF RDF file</a>.==或者<a href="ViewProfile.html?hash=localhash">公开地</a>通过<a href="ViewProfile.rdf?hash=localhash">FOAF RDF文件</a>查看.
Homepage \(appears on every <a href="/Supporter.html">Supporter Page</a> as long as your peer is online\)==首页(显示在每个<a href="/Supporter.html">支持者</a> 页面中, 前提是您的peer在线).
Here are all configuration options from YaCy.==这里显示YaCy所有设置.
You can change anything, but some options need a restart, and some options can crash YaCy, if wrong values are used.==您可以更改任何设置, 但有些选项需要重启才能生效, 而有些选项如果设置了错误的值会导致YaCy崩溃.
For explanation please look into defaults/yacy.init==详细内容请参考defaults/yacy.init
We give information how to integrate a search box on any web page that==如何将一个搜索框集成到任意
calls the normal YaCy search window.==调用YaCy搜索的页面.
Simply use the following code:==使用以下代码:
MySearch== 我的搜索
"Search"=="搜索"
This would look like:==示例:
This does not use a style sheet file to make the integration into another web page with a different style sheet easier.==在这里并没有使用样式文件, 因为这样会比较容易将其嵌入到不同样式的页面里.
You would need to change the following items:==您需要修改以下条目:
Replace the given colors \#eeeeee \(box background\) and \#cccccc \(box border\)==替换已给颜色 #eeeeee (框架背景) 和 #cccccc (框架边框)
Replace the word \"MySearch\" with your own message==用您想显示的信息替换"我的搜索"
#Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==Crawl Profile enthalten Informationen über eine spezifische URL, welche intern genutzt wird, um nachzuvollziehen, wozu der Crawl gehört.
#The profiles for remote crawls, <a href="/ProxyIndexingMonitor_p.html">indexing via proxy</a> and snippet fetches==Die Profile für Remote Crawl, <a href="/ProxyIndexingMonitor_p.html">Indexierung per Proxy</a> und Snippet Abrufe
#cannot be altered here as they are hard-coded.==können nicht verändert werden, weil sie "hard-coded" sind.
These are monitoring pages for the different indexing queues.==索引队列监视页面.
YaCy knows 5 different ways to acquire web indexes. The details of these processes \(1-5\) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 进程(1-5)的细节在子菜单中显示
above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息应该视为隐私,
so you need to log-in with your administration password.==所以您需要使用管理员密码登录后查看.
Case \(6\) is a monitor of the local receipt-generator, the opposed case of \(1\). It contains also an indexing result monitor but is not considered private==事件(6)与事件(1)相反, 显示本地回执. 它也包含索引结果, 但不属于隐私
since it shows crawl requests from other peers.==因为它含有来自其他peer的请求.
These web pages had been crawled by your own crawl task.==这些网页已由您自己的crawl任务抓取.
<em>Use Case:</em> start a crawl by setting a crawl start point on the 'Index Create' page.==<em>用法:</em>在'索引创建'页面设置crawl起始点以开始crawl.
\(6\) Results for Global Crawling==(6) 全球crawl结果
These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已经被您的peer索引, 但是它们是被远端peer crawl的.
This is the 'mirror'-case of process \(1\).==这是进程(1)的'镜像'实例.
<em>Use Case:</em> This list may fill if you check the 'Accept remote crawling requests'-flag on the 'Index Crate' page==<em>用法:</em>如果您选中了'索引创建'页面的'接受远端crawl请求', 则会在此列表中显示.
The stack is empty.==栈为空.
Statistics about \#\[domains\]\# domains in this stack:==此栈中 #[domains]# 个域的统计信息:
<em>Use Case:</em> place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==<em>用法:</em>将包含Dublin核心元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式
You can define URLs as start points for Web page crawling and start crawling here. \"Crawling\" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under \"Crawling Depth\".==您可以将指定URL作为网页crawling的起始点. "Crawling"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容, 其深度由"Crawling深度"指定.
Attribute<==属性<
Value<==值<
Description<==描述<
>Starting Point:==>起始点:
>From URL==>来自URL
From Sitemap==来自站点地图
From File==来自文件
Existing start URLs are always re-crawled.==已存在的起始链接将会被重新crawl.
Other already visited URLs are sorted out as \"double\", if they are not allowed using the re-crawl option.==其他已访问过的链接, 如果重新crawl选项不允许, 则会作为"重复"被排除.
Create Bookmark==创建书签
\(works with "Starting Point: From URL" only\)==(仅适用于"起始点: 来自URL")
Title<==标题<
Folder<==目录<
This option lets you create a bookmark from your crawl start URL.==此选项会将起始链接设为书签.
Crawling Depth</label>==Crawling深度</label>
This defines how often the Crawler will follow links \(of links..\) embedded in websites.==此选项定义crawler跟踪网页内嵌链接(以及链接的链接..)的层数.
0 means that only the page you enter under \"Starting Point\" will be added==设置为 0 代表仅将"起始点"下输入的页面
to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==添加到索引. 一般情况下设置为2-4即可. 大于8的值没有意义, 因为深度为8的crawl会
index approximately 25.600.000.000 pages, maybe this is the whole WWW.==索引大约25,600,000,000个页面, 这可能已是整个互联网的内容.
Scheduled re-crawl<==定期重新crawl<
>no doubles<==>无重复<
run this crawl once and never load any page that is already known, only the start-url may be loaded again.==仅运行一次此crawl, 并且从不载入任何已知页面, 只有起始链接可能被再次载入.
>re-load<==>重载<
run this crawl once, but treat urls that are known since==仅运行一次此crawl, 但将已知时间超过
>years<==>年<
>months<==>月<
>days<==>日<
>hours<==>时<
not as double and load them again. No scheduled re-crawl.==的链接不视为重复并重新载入. 不做定期重新crawl.
>scheduled<==>定期<
after starting this crawl, repeat the crawl every==运行此crawl后, 每隔
> automatically.==> 自动运行.
A web crawl performs a double-check on all links found in the internet against the internal database. If the same url is found again,==网页crawl参照自身数据库, 对所有找到的链接进行重复性检查. 如果链接重复,
then the url is treated as double when you check the \'no doubles\' option. A url may be loaded again when it has reached a specific age,==并且'无重复'选项打开, 则被以重复链接对待. 如果链接存在时间超过一定时间,
to use that check the \'re-load\' option. When you want that this web crawl is repeated automatically, then check the \'scheduled\' option.==并且'重载'选项打开, 则此链接会被重新读取. 当您想这些crawl自动运行时, 请选中'定期'选项.
In this case the crawl is repeated after the given time and no url from the previous crawl is omitted as double.==此种情况下, crawl会每隔一定时间自动运行并且不会重复寻找前一次crawl中的链接.
#The filter is an emacs-like regular expression that must match with the URLs which are used to be crawled;==Dieser Filter ist ein emacs-ähnlicher regulärer Ausdruck, der mit den zu crawlenden URLs übereinstimmen muss;
The filter is a <a href=\"http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html\">regular expression</a>==过滤器是一个<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html">正则表达式</a>
that must match with the URLs which are used to be crawled; default is \'catch all\'.==, 它必须与要抓取的链接匹配; 默认是'全部匹配'.
Example: to allow only urls that contain the word \'science\', set the filter to \'.*science.*\'.==例如: 仅允许含有'science'一词的链接, 可将过滤器设置为 '.*science.*'.
You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用域限制来抓取整个域.
#It depends on the age of the last crawl if this is done or not: if the last crawl is older than the given==Es hängt vom Alter des letzten Crawls ab, ob dies getan oder nicht getan wird: wenn der letzte Crawl älter als das angegebene
#Auto-Dom-Filter:==Auto-Dom-Filter:
#This option will automatically create a domain-filter which limits the crawl on domains the crawler==Diese Option erzeugt automatisch einen Domain-Filter der den Crawl auf die Domains beschränkt ,
#will find on the given depth. You can use this option i.e. to crawl a page with bookmarks while==die auf der angegebenen Tiefe gefunden werden. Diese Option kann man beispielsweise benutzen, um eine Seite mit Bookmarks zu crawlen
#restricting the crawl on only those domains that appear on the bookmark-page. The adequate depth==und dann den folgenden Crawl automatisch auf die Domains zu beschränken, die in der Bookmarkliste vorkamen. Die einzustellende Tiefe für
#for this example would be 1.==dieses Beispiel wäre 1.
#The default value 0 gives no restrictions.==Der Vorgabewert 0 bedeutet, dass nichts eingeschränkt wird.
You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==您可以将从单个域中抓取和索引的页面数目限制为此值.
You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域.
the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域会被自动忽略.
Accept URLs with==接受含有以下内容的链接
dynamic URLs==动态URL
A questionmark is usually a hint for a dynamic page. URLs pointing to dynamic content should usually not be crawled. However, there are sometimes web pages with static content that==问号通常表示动态页面. 通常不应抓取指向动态内容的链接. 然而, 有时含有静态内容的页面
is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==也通过含问号的链接访问. 如果您不确定, 请不要选中此项, 以避免抓取陷入死循环.
Store to Web Cache==存储到网页缓存
This option is used by default for proxy prefetch, but is not needed for explicit crawling.==此选项默认用于代理预取, 但对于显式crawl并不需要.
This enables indexing of the wepages the crawler will download. This should be switched on by default, unless you want to crawl only to fill the==此选项开启对crawler所下载网页的索引. 默认应该打开, 除非您仅想填充
Document Cache without indexing.==文件缓存而不进行索引.
Do Remote Indexing==远程索引
Describe your intention to start this global crawl \(optional\)==在这填入您要进行全球crawl的目的(可选)
This message will appear in the 'Other Peer Crawl Start' table of other peers.==此消息会显示在其他peer的'其他peer crawl起始'列表中.
If checked, the crawler will contact other peers and use them as remote indexers for your crawl.==如果选中, crawler会联系其他peer, 并将其作为此次crawl的远程索引器.
If you need your crawling results locally, you should switch this off.==如果您仅想crawl本地内容, 请关闭此设置.
Only senior and principal peers can initiate or receive remote crawls.==仅高级peer和主peer能初始化或者接收远程crawl.
A YaCyNews message will be created to inform all peers about a global crawl==YaCy新闻消息中会通知其他peer这个全球crawl,
so they can omit starting a crawl with the same start point.==以便他们能避免从相同的起始点开始crawl.
This can be useful to circumvent that extremely common words are added to the database, i.e. \"the\", \"he\", \"she\", \"it\"... To exclude all words given in the file <tt>yacy.stopwords</tt> from indexing,==此项用于规避极常用字, 比如 "个", "他", "她", "它"等. 当要在索引时排除所有在<tt>yacy.stopwords</tt>文件中的字词时,
When an index domain is configured to contain intranet links,==当索引域中包含局域网链接时,
the intranet may be scanned for available servers.==可以扫描局域网以寻找可用服务器.
Please select below the servers in your intranet that you want to fetch into the search index.==请在下面选择您局域网中想要抓取到搜索索引中的服务器.
This network definition does not allow intranet links.==当前网络定义不允许局域网链接.
A list of intranet servers is only available if you confiugure YaCy to index intranet targets.==仅当您将YaCy配置为索引局域网目标时, 局域网服务器列表才可用.
To do so, open the <a href=\"ConfigBasic.html\">Basic Configuration</a> servlet and select the \'Intranet Indexing\' use case.==为此, 请打开<a href="ConfigBasic.html">基本设置</a>页面, 并选择'局域网索引'用例.
No more that two pages are loaded from the same host in one second \(not more that 120 document per minute\) to limit the load on the target server.==每秒最多从同一主机中载入两个页面(每分钟不超过120个文件)以限制目标主机负载.
>Target Balancer<==>目标平衡器<
A second crawl for a different host increases the throughput to a maximum of 240 documents per minute since the crawler balances the load over all hosts.==对另一个主机同时进行第二个crawl, 可将吞吐量提高到每分钟最多240个文件, 因为crawler会将负载分摊到所有主机上.
>High Speed Crawling<==>高速crawl<
A \'shallow crawl\' which is not limited to a single host \(or site\)==不限于单个主机(或站点)的'浅层crawl',
can extend the pages per minute \(ppm\) rate to unlimited documents per minute when the number of target hosts is high.==在目标主机数量很多时, 可将每分钟页面数(ppm)提高到不限数量.
This can be done using the <a href=\"CrawlStartExpert_p.html\">Expert Crawl Start</a> servlet.==这可以通过<a href="CrawlStartExpert_p.html">专家模式crawl起始</a>页面实现.
The scheduler on crawls can be changed or removed using the <a href=\"Table_API_p.html\">API Steering</a>.==可以使用<a href="Table_API_p.html">API向导</a>改变或删除crawl定时器.
add search results from scroogle==添加来自scroogle的搜索结果
add search results from blekko==添加来自blekko的搜索结果
Search Navigation==搜索导航
keyboard shotcuts==快捷键
tab or page-up==Tab或者Page Up
next result page==下一页
page-down==Page Down
previous result page==上一页
automatic result retrieval==自动结果检索
browser integration==浏览器集成
after searching, click-open on the default search engine in the upper right search field of your browser and select 'Add "YaCy Search.."'==搜索后, 点击打开浏览器右上方搜索栏中的默认搜索引擎, 并选择'添加"YaCy Search.."'
search as rss feed==作为RSS-Feed搜索
click on the red icon in the upper right after a search. this works good in combination with the '/date' ranking modifier. See an==搜索后点击右上方的红色图标. 它与'/date'排名修饰符配合使用效果很好. 参见
>example==>例
json search results==JSON搜索结果
for ajax developers: get the search rss feed and replace the '.rss' extension in the search result url with '.json'==对AJAX开发者: 获取搜索结果页的RSS-Feed, 并用'.json'替换'.rss'搜索结果链接中的扩展名
this may produce unresolved references at other word indexes but they do not harm==这可能会在其他关键字索引中产生未解析的关联, 但它们没有危害
"Delete URL and remove all references from words"=="删除URL并从关键字中移除所有关联"
delete the reference to this url at every other word where the reference exists \(very extensive, but prevents unresolved references\)==在所有存在关联的关键字处删除指向此链接的关联(非常耗时, 但可防止产生未解析的关联)
Content Integration: Retrieval from phpBB3 Databases==内容集成: 从phpBB3数据库中导入
It is possible to extract texts directly from mySQL and postgreSQL databases.==可以直接从mySQL和postgreSQL数据库中提取文本.
Each extraction is specific to the data that is hosted in the database.==每次提取都针对数据库中存储的特定数据.
This interface gives you access to the phpBB3 forums software content.==通过此接口能访问phpBB3论坛软件内容.
If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:==如果您要读取导入的数据库, 以下建议有助于避免在phpMyAdmin中导入备份时出现问题:
before importing large database dumps, set==在导入大型数据库备份前, 请设置
in phpmyadmin/config.inc.php and place your dump file in /tmp \(Otherwise it is not possible to upload files larger than 2MB\)==(位于phpmyadmin/config.inc.php中), 并将备份文件放到 /tmp 目录下(否则无法上传大于2MB的文件)
When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.==导出过程开始时, 在 DATA/SURROGATE/in 目录下自动生成备份文件, 并且会被索引器自动抓取.
All indexed surrogate files are then moved to DATA/SURROGATES/out and can be re-cycled when an index is deleted.==所有已索引的备份文件随后会被移动到 DATA/SURROGATES/out 目录, 当索引被删除时可以重新利用.
>With this file it is possible to find locations in Germany using the location \(city\) name, a zip code, a car sign or a telephone pre-dial number.<==>使用此插件, 则能通过查询城市名, 邮编, 车牌号或者电话区号得到德国任何地点的位置信息.<
This queue stores the urls that shall be sent to other peers to perform a remote crawl.==此队列存储着需要发送到其他peer进行crawl的链接.
If there is no peer for remote crawling available, the links are crawled locally.==如果没有可用于远程crawl的peer, 这些链接会在本地crawl.
The global crawler queue is empty==全球crawl队列为空.
"clear global crawl queue"=="清空全球crawl队列"
There are <strong>\#\[num\]\#</strong> entries in the global crawler queue. Showing <strong>\#\[show-num\]\#</strong> most recent entries.==全球crawler队列中有 <strong>#[num]#</strong> 个条目. 显示最近的 <strong>#[show-num]#</strong> 个.
This queue stores the urls that shall be crawled localy by this peer.==此队列存储着本地peer要crawl的链接.
It may also contain urls that are computed by the proxy-prefetch.==此队列中也包含通过代理预取的链接.
The local crawler queue is empty==本地crawl队列为空.
There are <strong>\#\[num\]\#</strong> entries in the local crawler queue. Showing <strong>\#\[show-num\]\#</strong> most recent entries.==本地crawl队列中有 <strong>#[num]#</strong> 个条目. 显示最近的 <strong>#[show-num]#</strong> 个.
The local index currently consists of \(at least\) \#\[wcount\]\# reverse word indexes and \#\[ucount\]\# URL references.==本地索引当前至少有 #[wcount]# 个关键字索引和 #[ucount]# 个URL关联.
Import Job with the same path already started.==含有相同路径的导入任务已存在.
Starting new Job==开始新任务
Import Type:==导入类型:
Cache Size==缓存大小
Usage Examples==使用示例
"Path to the PLASMADB directory of the foreign peer"=="其他peer的PLASMADB目录路径"
Import Path:==导入路径:
"Start Import"=="开始导入"
Attention:==注意:
Always do a backup of your source and destination database before starting to use this import function.==在使用此导入功能之前, 一定要备份您的源数据库和目的数据库.
You can import MediaWiki dumps here. An example is the file==您可以在此导入MediaWiki备份. 示例文件如
Dumps must be in XML format and may be compressed in gz or bz2. Place the file in the YaCy folder or in one of its sub-folders.==备份必须是XML格式, 可以用gz或bz2压缩. 请将文件放进YaCy目录或其子目录中.
"Import MediaWiki Dump"=="导入MediaWiki备份"
When the import is started, the following happens:==开始导入时, 会进行以下工作:
The dump is extracted on the fly and wiki entries are translated into Dublin Core data format. The output looks like this:==备份文件即时解压, wiki条目被转换为Dublin核心元数据格式. 输出形如:
Each 10000 wiki records are combined in one output file which is written to /DATA/SURROGATES/in into a temporary file.==每10000条wiki记录合并为一个输出文件, 作为临时文件写入 /DATA/SURROGATES/in 目录.
When each of the generated output file is finished, it is renamed to a .xml file==每个输出文件生成完毕后, 会被重命名为 .xml 文件
Each time a xml surrogate file appears in /DATA/SURROGATES/in, the YaCy indexer fetches the file and indexes the record entries.==只要 /DATA/SURROGATES/in 中含有 xml文件, YaCy索引器就会读取它们并为其中的条目制作索引.
When a surrogate file is finished with indexing, it is moved to /DATA/SURROGATES/out==当索引完成时, xml文件会被移动到 /DATA/SURROGATES/out
You can recycle processed surrogate files by moving them from /DATA/SURROGATES/out to /DATA/SURROGATES/in==您可以将文件从/DATA/SURROGATES/out 移动到 /DATA/SURROGATES/in 以重复索引.
Results from the import can be monitored in the <a href=\"\/CrawlResults.html\?process=7\">indexing results for surrogates==导入结果可在<a href="/CrawlResults.html?process=7">备份文件的索引结果
Single request import==单个导入请求
This will submit only a single request as given here to a OAI-PMH server and imports records into the index==这只会向OAI-PMH服务器提交此处给出的单个请求, 并将返回的记录导入索引
It is possible to insert forum pages into the YaCy index using a database import of forum postings.==通过导入论坛帖子数据库, 可以将论坛页面加入YaCy索引.
This guide helps you to insert a search window in your phpBB3 pages.==此向导能帮助您在您的phpBB3论坛页面中添加搜索框.
Retrieval of phpBB3 Forum Pages using a database export==使用数据库导出获取phpBB3论坛页面
Forum posting contain rich information about the topic, the time, the subject and the author.==论坛帖子中含有话题, 时间, 主题和作者等丰富信息.
This information is in an bad annotated form in web pages delivered by the forum software.==而在论坛软件生成的网页中, 这些信息的标注形式很糟糕.
It is much better to retrieve the forum postings directly from the database.==所以, 直接从数据库中获取帖子内容效果更好.
This will cause that YaCy is able to offer nice navigation features after searches.==这使得YaCy能够在搜索后提供良好的导航功能.
YaCy has a phpBB3 extraction feature, please go to the <a href="ContentIntegrationPHPBB3_p.html">phpBB3 content integration</a> servlet for direct database imports.==YaCy具有phpBB3提取功能, 请前往<a href="ContentIntegrationPHPBB3_p.html">phpBB3内容集成</a>页面直接导入数据库.
Retrieval of phpBB3 Forum Pages using a web crawl==使用网页crawl获取phpBB3论坛页面
The following form is a simplified crawl start that uses the proper values for a phpbb3 forum crawl.==以下表单是一个简化的crawl起始设置, 使用了适合phpBB3论坛crawl的参数值.
Just insert the front page URL of your forum. After you started the crawl you may want to get back==只需填入您论坛的首页URL. 开始crawl后,
to this page to read the integration hints below.==您可能需要返回此页面阅读以下提示.
URL of the phpBB3 forum main page==phpBB3论坛主页
This is a crawl start point==这是crawl起始点
"Get content of phpBB3: crawl forum pages"=="获取phpBB3内容: crawl论坛页面"
Inserting a Search Window to phpBB3==在phpBB3中添加搜索框
To integrate a search window into phpBB3, you must insert some code into a forum template.==在论坛模板中添加以下代码以将搜索框集成到phpBB3中.
There are several templates that can be used for phpBB3, but in this guide we consider that==phpBB3可以使用多种模板, 但在本向导中我们假定
you are using the default template, \'prosilver\'==您使用的是默认模板 'prosilver'.
open styles/prosilver/template/overall_header.html==打开 styles/prosilver/template/overall_header.html
find the line where the default search window is displayed, thats right behind the <pre>\<div id=\"search-box\"\></pre> statement==找到显示默认搜索框的那一行, 它就在 <pre><div id="search-box"></pre> 语句后面
Insert the following code right behind the div tag==在div标签后插入以下代码
YaCy Forum Search==YaCy论坛搜索
;YaCy Search==;YaCy搜索
Check all appearances of static IPs given in the code snippet and replace it with your own IP, or your host name==检查代码中出现的所有静态IP, 并用您自己的IP或主机名替换
You may want to change the default text elements in the code snippet==您可以更改代码中的文本元素
To see all options for the search widget, look at the more generic description of search widgets at==要查看搜索挂件的所有选项, 请参见更通用的搜索挂件说明:
the <a href=\"ConfigLiveSearch.html\">configuration for live search</a>.==<a href="ConfigLiveSearch.html">即时搜索设置</a>页面.
RSS feeds can be loaded into the YaCy search index.==YaCy能够读取RSS feed.
This does not load the rss file as such into the index but all the messages inside the RSS feeds as individual documents.==这不会将RSS文件本身载入索引, 而是将RSS feed中的每条消息作为单独的文档载入.
URL of the RSS feed==RSS feed链接
>Preview<==>预览<
"Show RSS Items"=="显示RSS条目"
Available after successful loading of rss feed in preview==在预览中成功读取RSS feed后可用
"Add All Items to Index \(full content of url\)"=="添加所有条目到索引中(URL中的全部内容)"
>once<==>一次<
>load this feed once now<==>立即读取此feed一次<
>scheduled<==>定时<
>repeat the feed loading every<==>读取此feed每隔<
>minutes<==>分钟<
>hours<==>小时<
>days<==>天<
> automatically.==> 自动进行.
>List of Scheduled RSS Feed Load Targets<==>定时RSS feed读取目标列表<
The peer does not respond. It was now removed from the peer-list.==远端peer未响应, 现已从peer列表中删除.
The peer <b>==peer <b>
is alive and responded:==处于活动状态并已响应:
You are allowed to send me a message==您现在可以给我发送消息
kb and an==kb和一个
attachment ≤==附件 ≤
Your Message==您的短消息
Subject:==主题:
Text:==内容:
"Enter"=="发送"
"Preview"=="预览"
You can use==您可以在这使用
Wiki Code</a> here.==Wiki代码</a>.
Preview message==预览消息
The message has not been sent yet!==短消息未发送!
The peer is alive but cannot respond. Sorry.==抱歉, peer处于活动状态但无法响应.
Your message has been sent. The target peer responded:==您的短消息已发送. 接收peer返回:
The target peer is alive but did not receive your message. Sorry.==抱歉, 接收peer属于活动状态但是没有接收到您的消息.
Here is a copy of your message, so you can copy it to save it for further attempts:==这是您的消息副本, 您可以复制保存, 以备再次尝试:
You cannot call this page directly. Instead, use a link on the <a href="Network.html">Network</a> page.==您不能直接使用此页面. 请使用 <a href="Network.html">网络</a> 页面的对应功能.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==获取所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
#You are in online mode, but probably no internet resource is available.==Sie befinden sich im Online-Modus, aber zur Zeit besteht keine Internetverbindung.
#Please check your internet connection.==Bitte überprüfen Sie Ihre Internetverbindung.
#You are not in online mode. To get online, press this button:==Sie sind nicht im Online-Modus. Um Online zu gehen, drücken Sie diesen Knopf:
Other peers may use this information to prevent double-crawls from the same start point.==其他的peer能利用此信息以防止相同起始点的二次crawl.
A table with recently started crawls is presented on the Index Create - page==最近启动的crawl列表会显示在"索引创建"页面上.
A change in the personal profile will create a news entry. You can see recently made changes of==个人资料的改变会创建一个新闻条目. 您可以在网络页面上查看最近的
profile entries on the Network page, where that profile change is visualized with a '\*' beside the 'P' \(profile\) - selector.==资料变更, 它们以'P'(资料)选择器旁的'*'标出.
More news services will follow.==接下来会有更多的新闻服务.
Above you can see four menues:==上面四个菜单选项分别为:
<strong>Incoming News \(\#\[insize\]\#\)</strong>: latest news that arrived your peer.==<strong>已接收新闻(#[insize]#)</strong>: 发送至您peer的新闻.
Only these news will be used to display specific news services as explained above.==只有这些新闻会用于显示上述特定新闻服务.
You can process these news with a button on the page to remove their appearance from the IndexCreate and Network page==您可以使用页面上的按钮处理这些新闻, 使其不再显示在"索引创建"和"网络"页面中
<strong>Processed News \(\#\[prsize\]\#\)</strong>: this is simply an archive of incoming news that you removed by processing.==<strong>已处理新闻(#[prsize]#)</strong>: 这只是您通过处理移除的已接收新闻的存档.
<strong>Outgoing News \(\#\[ousize\]\#\)</strong>: here your can see news entries that you have created. These news are currently broadcasted to other peers.==<strong>已生成新闻(#[ousize]#)</strong>: 此页面显示您的peer创建的新闻条目, 它们目前正在向其他peer发布.
you can stop the broadcast if you want.==您也可以选择停止发布.
<strong>Published News \(\#\[pusize\]\#\)</strong>: your news that have been broadcasted sufficiently or that you have removed from the broadcast list.==<strong>已发布新闻(#[pusize]#)</strong>: 显示已经完全发布出去的新闻或者已经从发布列表删除的新闻.
"\#\(page\)\#::Process All News::Delete All News::Abort Publication of All News::Delete All News\#\(/page\)\#"=="#(page)#::处理所有新闻::删除所有新闻::停止发布所有新闻::删除所有新闻#(/page)#"
This is the time delta between accessing of the same domain during a crawl.==这是在crawl期间, 访问同一域名的间歇值.
The crawl balancer tries to avoid that domains are==crawl平衡器会尽量避免过于频繁地访问同一域名,
accessed too often, but if the balancer fails \(i.e. if there are only links left from the same domain\), then these minimum==但如果平衡器无法起作用(比如只剩下来自同一域名的链接), 则会使用此最小间歇值
# This is the control page for web pages that your peer has indexed during the current application run-time==Dies ist die Kontrollseite für Internetseiten, die Ihr Peer während der aktuellen Sitzung
# as result of proxy fetch/prefetch.==durch Besuchen einer Seite indexiert.
# No personal or protected page is indexed==Persönliche Seiten und geschütze Seiten werden nicht indexiert
those pages are detected by properties in the HTTP header \(like Cookie-Use, or HTTP Authorization\)==通过检测HTTP头部属性(比如cookie用途或者http认证)
or by POST-Parameters \(either in URL or as HTTP protocol\)==或者POST参数(在URL中或HTTP协议中)
and automatically excluded from indexing.==能够检测出此类网页并在索引时排除.
Proxy Auto Config:==自动配置代理:
this controls the proxy auto configuration script for browsers at http://localhost:8090/autoconfig.pac==这会影响浏览器代理自动配置脚本 http://localhost:8090/autoconfig.pac
.yacy-domains only==仅 .yacy 域名
whether the proxy should only be used for .yacy-Domains==代理是否只对 .yacy 域名有效.
Proxy pre-fetch setting:==代理预读设置:
this is an automated html page loading procedure that takes actual proxy-requested==这是一个自动预读网页的过程
URLs as crawling start points for crawling.==期间会将请求代理的URL作为crawl起始点.
Prefetch Depth==预读深度
A prefetch of 0 means no prefetch; a prefetch of 1 means to prefetch all==设置为0则不预读; 设置为1预读所有嵌入链接,
embedded URLs, but since embedded image links are loaded by the browser==但是嵌入图像链接是由浏览器读取,
this means that only embedded href-anchors are prefetched additionally.==这意味着只会额外预读嵌入的href锚点链接.
Store to Cache==存储至缓存
It is almost always recommended to set this on. The only exception is that you have another caching proxy running as secondary proxy and YaCy is configured to used that proxy in proxy-proxy - mode.==推荐打开此项设置. 唯一的例外是您有另一个缓存代理作为二级代理并且YaCy设置为使用'代理到代理'模式.
Do Local Text-Indexing==进行本地文本索引
If this is on, all pages \(except private content\) that passes the proxy is indexed.==如果打开此项设置, 所有通过代理的网页(除了私有内容)都会被索引.
Do Local Media-Indexing==进行本地媒体索引
This is the same as for Local Text-Indexing, but switches only the indexing of media content on.==与本地文本索引相同, 但只开启对媒体内容的索引.
Do Remote Indexing==进行远程索引
If checked, the crawler will contact other peers and use them as remote indexers for your crawl.==如果被选中, crawler会联系其他peer并将之作为远程索引器.
If you need your crawling results locally, you should switch this off.==如果仅需要本地索引结果, 可以关闭此项.
Only senior and principal peers can initiate or receive remote crawls.==只有高级peer和主要peer能初始化和接收远端crawl.
Please note that this setting only take effect for a prefetch depth greater than 0.==请注意, 此设置仅在预读深度大于0时有效.
Proxy generally==代理常规设置
Path==路径
The path where the pages are stored \(max. length 300\)==存储页面的路径(最大300个字符长度)
Size</label>==大小</label>
The size in MB of the cache.==缓存大小(MB).
"Set proxy profile"=="保存设置"
The file DATA/PLASMADB/crawlProfiles0.db is missing or corrupted.==文件 DATA/PLASMADB/crawlProfiles0.db 丢失或者损坏.
Please delete that file and restart.==请删除此文件并重启.
Pre-fetch is now set to depth==预读深度现为
Caching is now \#\(caching\)\#off\:\:on\#\(/caching\)\#.==缓存现已 #(caching)#关闭::打开#(/caching)#.
Local Text Indexing is now \#\(indexingLocalText\)\#off::on==本地文本索引现已 #(indexingLocalText)#关闭::打开
Local Media Indexing is now \#\(indexingLocalMedia\)\#off::on==本地媒体索引现已 #(indexingLocalMedia)#关闭::打开
Remote Indexing is now \#\(indexingRemote\)\#off::on==远程索引现已 #(indexingRemote)#关闭::打开
Cachepath is now set to \'\#\[return\]\#\'.</strong> Please move the old data in the new directory.==缓存路径现为 '#[return]#'.</strong> 请将旧文件移至此目录.
Cachesize is now set to \#\[return\]\#MB.==缓存大小现为 #[return]#MB.
Changes will take effect after restart only.==改变仅在重启后生效.
An error has occurred:==发生错误:
You can see a snapshot of recently indexed pages==您可以查看最近索引页面的快照
Simply drag and drop the link shown below to your Browsers Toolbar/Link-Bar.==仅需拖动以下链接至浏览器工具栏/书签栏.
If you click on it while browsing, the currently viewed website will be inserted into the YaCy crawling queue for indexing.==浏览时点击它, 当前查看的网页就会被加入YaCy crawl队列以进行索引.
Crawl with YaCy==用YaCy进行crawl
Title:==标题:
Link:==链接:
Status:==状态:
URL successfully added to Crawler Queue==已成功添加链接到crawl队列.
Malformed URL==异常链接
Unable to create new crawling profile for URL:==无法为此URL创建新的crawl配置:
Unable to add URL to crawler queue:==添加链接到crawl队列失败:
The document ranking influences the order of the search result entities.==排名影响到搜索结果的排列顺序.
A ranking is computed using a number of attributes from the documents that match with the search word.==通过计算所有符合搜索关键字的文件属性, 从而得到排名.
The attributes are first normalized over all search results and then the normalized attribut is multiplied with the ranking coefficient computed from this list.==这些属性首先在所有搜索结果上归一化, 然后将归一化后的属性乘以由此列表计算出的排名系数.
The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数根据下表排名级别呈指数增长.
If you increase a single value by one, then the strength of the parameter doubles.==如果值加1, 则参数影响强度加倍.
#The age of a document is measured using the date submitted by the remote server as document date==Das Alter eines Dokuments wird gemessen anhand des Dokument Datums das der Remote Server übermittelt
first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==首先使用预排名对所有结果排序, 然后对所得列表中的文档再进行一次后排名.
The two stages are separated because they need statistical information from the result of the pre-ranking.==这两个阶段是分开的, 因为后排名需要预排名结果的统计信息.
#Application Of Prefer Pattern==Anwendung eines bevorzugten Musters
#a higher ranking level prefers documents where the url matches the prefer pattern given in a search request.==Ein höherer Ranking Level bevorzugt Dokumente deren URL auf das bevorzugte Muster einer Suchanfrage passt.
Maximum allowed file size in bytes that should be downloaded==允许下载的最大文件尺寸(字节)
Larger files will be skipped==超出此限制的文件将被忽略
Please note that if the crawler uses content compression, this limit is used to check the compressed content size==请注意, 如果crawler使用内容压缩, 则此限制对压缩后文件大小有效.
With this you can specify if YaCy can be used as transparent proxy.==在此可以指定是否将YaCy用作透明代理.
Hint: On linux you can configure your firewall to transparently redirect all http traffic through yacy using this iptables rule==提示: 在Linux系统中, 您可以用如下iptables规则配置防火墙, 将所有http流量透明地重定向经过yacy
Connection Keep-Alive==保持连接
With this you can specify if YaCy should support the HTTP connection keep-alive feature.==在此可以指定YaCy是否支持HTTP连接保持(keep-alive)特性.
Send "Via" Header==发送"Via"头
Specifies if the proxy should send the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.45">Via</a>==选此指定代理是否发送<a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.45">"Via"HTTP头</a>
http header according to RFC 2616 Sect 14.45.==HTTP头(依据RFC 2616第14.45节).
The sshd port of the host, like '22'==ssh端口, 比如'22'
Path</label>==路径</label>
The remote path on the server, like '~/yacy/seed.txt'. Missing sub-directories are NOT created automatically.==ssh服务器上传路径, 比如'~/yacy/seed.txt'. 不会自动创建缺少的子目录.
The password redundancy check failed. You have probably misstyped your password.==密码重复输入检查失败. 您可能输错了密码.
Shutting down.</strong><br />Application will terminate after working off all crawling tasks.==正在关闭</strong><br />所有crawl任务完成后程序会关闭.
Your administration account setting has been made.==已创建管理账户设置.
Your new administration account name is \#\[user\]\#. The password has been accepted.<br />If you go back to the Settings page, you must log-in again.==新帐户名是 #[user]#. 密码输入正确.<br />如果返回设置页面, 需要再次输入密码.
Your proxy access setting has been changed.==代理访问设置已改变.
Your proxy account check has been disabled, since you did not supply a password.==代理账户检查已关闭, 因为您未提供密码.
The new proxy IP filter is set to==代理IP过滤设置为
The proxy port is:==代理端口号:
Port rebinding will be done in a few seconds.==端口重新绑定将在几秒内完成.
You can reach your YaCy server under the new location==可以通过新位置访问YaCy服务器:
Your proxy access setting has been changed.==代理访问设置已改变.
Your server access filter is now set to==服务器访问过滤为
Auto pop-up of the Status page is now <strong>disabled</strong>==自动弹出状态页面<strong>关闭.</strong>
Auto pop-up of the Status page is now <strong>enabled</strong>==自动弹出状态页面<strong>打开.</strong>
You are now permanently <strong>online</strong>.==您现在处于永久<strong>在线状态</strong>.
After a short while you should see the effect on the==一会儿后可以在
status</a> page.==状态</a>页面看到变化.
The Peer Name is:==peer名:
Your static Ip\(or DynDns\) is:==静态IP(或DynDns)为:
Seed Settings changed.\#\(success\)\#::You are now a principal peer.==seed设置已改变.#(success)#::本地peer已成为主要peer.
Seed Settings changed, but something is wrong.==seed设置已改变, 但存在问题.
Seed Uploading was deactivated automatically.==seed上传自动关闭.
Please return to the settings page and modify the data.==请返回设置页面修改参数.
The remote-proxy setting has been changed==远程代理设置已改变.
The new setting is effective immediately, you don't need to re-start.==新设置立即生效.
The submitted peer name is already used by another peer. Please choose a different name.</strong> The Peer name has not been changed.==提交的peer名已存在, 请更改.</strong> peer名未改变.
Log-in as administrator to see full status==登录管理用户以查看完整状态
Welcome to YaCy!==欢迎使用YaCy!
Your settings are _not_ protected!</strong>==您的设置未受保护!</strong>
Please open the <a href="ConfigAccounts_p.html">accounts configuration</a> page <strong>immediately</strong>==请<strong>立即</strong>打开<a href="ConfigAccounts_p.html">账户设置</a>页面
and set an administration password.==并设置管理密码.
You have not published your peer seed yet. This happens automatically, just wait.==尚未发布您的peer seed. 将会自动发布, 请稍候.
The peer must go online to get a peer address.==peer必须上线获得peer地址.
You cannot be reached from outside.==外部不能访问您的peer.
A possible reason is that you are behind a firewall, NAT or Router.==很可能是您在防火墙, NAT或者路由的后面.
But you can <a href="index.html">search the internet</a> using the other peers'==但是您仍然可以利用其他peer的全球索引<a href="index.html">搜索互联网</a>
global index on your own search page.==, 就在您自己的搜索页面上.
"bad"=="坏"
"idea"="主意"
"good"="好"
"Follow YaCy on Twitter"=="在Twitter上关注YaCy"
We encourage you to open your firewall for the port you configured \(usually: 8090\),==我们建议您在防火墙中开放您配置的端口(通常是: 8090),
or to set up a 'virtual server' in your router settings \(often called DMZ\).==或者在路由器设置中建立一个'虚拟服务器'(通常称为DMZ).
Please be fair, contribute your own index to the global index.==请公平地贡献您的索引给全球索引.
Free disk space is lower than \#\[minSpace\]\#. Crawling has been disabled. Please fix==空闲磁盘空间低于 #[minSpace]#. crawl已被关闭,
it as soon as possible and restart YaCy.==请尽快修复并重启YaCy.
Free memory is lower than \#\[minSpace\]\#. DHT has been disabled. Please fix==空闲内存低于 #[minSpace]#. DHT已被关闭,
Latest public version is==最新版本为
You can download a more recent version of YaCy. Click here to install this update and restart YaCy:==您可以下载最新版本YaCy, 点此进行升级并重启:
You have a principal peer because you publish your seed-list to a public accessible server==您拥有一个主要peer, 因为您向公共服务器公布了您的seed列表,
where it can be retrieved using the URL==可使用此URL进行接收:
Your Web Page Indexer is idle. You can start your own web crawl <a href="CrawlStartSite_p.html">here</a>==网页索引器当前空闲. 可以点击<a href="CrawlStartSite_p.html">这里</a>开始网页crawl
Your Web Page Indexer is busy. You can <a href="Crawler_p.html">monitor your web crawl</a> here.==网页索引器当前忙碌. 点击<a href="Crawler_p.html">这里</a>查看状态.
Used for YaCy -> YaCy communication:==用于YaCy -> YaCy通信:
WARNING:==警告:
You do this on your own risk.==您需自行承担此操作的风险.
If you do this without YaCy running on a desktop-pc or without Java 6 installed, this will possibly break startup.==如果YaCy不是在台式机上运行, 或者未安装Java 6, 此操作可能会破坏启动过程.
In this case, you will have to edit the configuration manually in DATA/SETTINGS/yacy.conf==在此情况下, 您需要手动修改配置文件 DATA/SETTINGS/yacy.conf
Peer is online again, forwarding to status page...==peer已重新上线, 正在跳转到状态页面...
Peer is not online yet, will check again in a few seconds...==peer尚未上线, 几秒后重新检测...
No action submitted==未提交动作
Go back to the <a href="Settings_p.html">Settings</a> page==返回<a href="Settings_p.html">设置</a>页面
Your system is not protected by a password==您的系统未受密码保护
Please go to the <a href="/ConfigAccounts_p.html">User Administration</a> page and set an administration password.==请在<a href="/ConfigAccounts_p.html">用户管理</a>页面设置管理密码.
You don't have the correct access right to perform this task.==无执行此任务权限.
Please log in.==请登录.
You can now go back to the <a href="Settings_p.html">Settings</a> page if you want to make more changes.==您现在可以返回<a href="Settings_p.html">设置</a>页面进行详细设置.
See you soon!==再见!
Just a moment, please!==请稍候.
Application will terminate after working off all scheduled tasks.==程序将在完成所有已安排任务后终止,
Then YaCy will restart.==然后YaCy会重新启动.
If you can't reach YaCy's interface after 5 minutes restart failed.==如果5分钟后仍无法访问YaCy界面, 则说明重启失败.
Installing release==正在安装
YaCy will be restarted after installation==YaCy在安装完成后会重新启动
Scheduled actions are executed after the next execution date has arrived within a time frame of \#\[tfminutes\]\# minutes.==已安排的动作会在下一个执行时间到达后, 在 #[tfminutes]# 分钟的时间窗口内执行.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
it may take some seconds until the first result appears there.==在出现第一个搜索结果前需要几秒钟时间.
If you crawl any un-wanted pages, you can delete them <a href="IndexCreateWWWLocalQueue_p.html">here</a>.==如果您crawl了不需要的页面, 您可以 <a href="IndexCreateWWWLocalQueue_p.html">点这</a> 删除它们.
The data that is visualized here can also be retrieved in a XML file, which lists the reference relation between the domains.==此处可视化的数据也能以XML文件形式获取, 其中列出了域之间的关联关系.
With a GET-property 'about' you get only reference relations about the host that you give in the argument field for 'about'.==使用GET参数'about', 只会获得与'about'参数中所指定主机相关的关联关系.
With a GET-property 'latest' you get a list of references that had been computed during the current run-time of YaCy, and with each next call only an update to the next list of references.==使用GET参数'latest', 会获得YaCy本次运行期间计算出的关联关系列表, 之后的每次调用只返回相对上一列表的更新.
Click the API icon to see the XML file.==点击API图标查看XML文件.
To see a list of all APIs, please visit the <a href=\"http://www.yacy-websuche.de/wiki/index.php/Dev:API\">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
These tags create headlines. If a page has three or more headlines, a table of content will be created automatically.==这些标记用于创建标题. 如果页面有三个或更多标题, 会自动生成目录.
Headlines of level 1 will be ignored in the table of content.==目录中会忽略一级标题.
#The escape tags will cause all tags in the text between the starting and the closing tag to not be treated as wiki-code.==Durch diesen Tag wird der Text, der zwischen den Klammern steht, nicht interpretiert und unformatiert als normaler Text ausgegeben.
A text between these tags will keep all the spaces and linebreaks in it. Great for ASCII-art and program code.==此标记之间的文本会保留所有空格和换行, 主要用于ASCII艺术图片和编程代码.
If a line starts with a space, it will be displayed in a non-proportional font.==如果一行以空格开头, 则会以等宽字体显示.
url description==URL描述
This tag creates links to external websites.==此标记创建外部网站链接.
This search result can also be retrieved as RSS/<a href=\"http://www.opensearch.org\">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org">opensearch</a>形式表示.
The query format is similar to <a href=\"http://www.loc.gov/standards/sru/\">SRU</a>.==请求的格式与<a href="http://www.loc.gov/standards/sru/">SRU</a>相似.
Click the API icon to see an example call to the search rss API.==点击API图标查看示例.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
This search result can also be retrieved as RSS/<a href="http://www.opensearch.org">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org">opensearch</a>形式表示.
The query format is similar to <a href="http://www.loc.gov/standards/sru/">SRU</a>.==请求的格式与<a href="http://www.loc.gov/standards/sru/">SRU</a>相似.
Click the API icon to see an example call to the search rss API.==点击API图标查看示例.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
YaCy-UI is going to be a JavaScript based client for YaCy based on the existing XML and JSON API.==YaCy-UI 是基于JavaScript的YaCy客户端, 它使用当前的XML和JSON API.
YaCy-UI is at most alpha status, as there is still problems with retriving the search results.==YaCy-UI 尚处于alpha阶段, 获取搜索结果时仍有问题.
I am currently changing the backend to a more application friendly format and getting good results with it \(I will check that in some time after the stable release 0.7\).==目前我正在将后端改为对应用程序更友好的格式, 并已取得不错的效果(我会在稳定版0.7发布后的某个时间将其签入).
For now have a look at the bookmarks, performance has increased significantly, due to the use of JSON and Flexigrid!==目前可以先看看书签功能, 由于使用了JSON和Flexigrid, 性能已显著提升!