This images shows incoming connections to your YaCy peer and outgoing connections from your peer to other peers and web servers==这幅图显示了到您节点的传入连接,以及从您节点到其他节点或网站服务器的传出连接
With this settings you can activate or deactivate URL proxy.==使用此设置,您可以激活或停用URL代理.
Service call: ==服务调用:
, where parameter is the url of an external web page.==, 其中参数是外部网页的地址.
>URL proxy:<==>URL代理:<
>Enabled<==>开启<
Globally enables or disables URL proxy via ==全局启用或禁用URL代理, 通过
Show search results via URL proxy:==通过URL代理显示搜索结果:
Enables or disables URL proxy for all search results. If enabled, all search results will be tunneled through URL proxy.==为所有搜索结果启用或禁用URL代理. 如果启用, 所有搜索结果都将通过URL代理传送.
Alternatively you may add this javascript to your browser favorites/short-cuts, which will reload the current browser address==或者, 您可以将此javascript添加到浏览器的收藏夹/快捷方式中, 它会重新加载当前浏览器地址
via the YaCy proxy servlet.==, 通过YaCy代理servlet.
or right-click this link and add to favorites:==或者右键点击此链接并添加到收藏夹:
Define URL substitution rules which allow navigating in proxy environment. Possible values: all, domainlist. Default: domainlist.==定义允许在代理环境中导航的URL替换规则. 可能的值: all, domainlist. 默认: domainlist.
This function provides an URL filter to the proxy; any blacklisted URL is blocked==此功能为代理提供地址过滤; 任何列入黑名单的地址都会被阻止
from being loaded. You can define several blacklists and activate them separately.==加载. 您可以定义多个黑名单并分别激活它们.
You may also provide your blacklist to other peers by sharing them; in return you may==您也可以通过共享将您的黑名单提供给其他节点; 作为回报, 您可以
collect blacklist entries from other peers==收集来自其他节点的黑名单条目
Select list to edit:==选择列表进行编辑:
Add URL pattern==添加地址规则
Edit list==编辑列表
The right '*', after the '/', can be replaced by a==在'/'之后的右边'*'可以被替换为
>regular expression<==>正则表达式<
#(slow)==(慢)
"set"=="集合"
The right '*'==右边的'*'
Used Blacklist engine:==使用的黑名单引擎:
Active list:==激活列表:
No blacklist selected==未选中黑名单
Select list:==选中黑名单:
not shared::shared==未共享::已共享
"select"=="选择"
Create new list:==创建新列表:
"create"=="创建"
Settings for this list==此列表的设置
"Save"=="保存"
Share/don't share this list==共享/不共享此名单
Delete this list==删除
Edit this list==编辑
These are the domain name/path patterns in==这些域名/路径规则来自
Blacklist Pattern==黑名单规则
Edit selected pattern(s)==编辑选中规则
Delete selected pattern(s)==删除选中规则
Move selected pattern(s) to==移动选中规则
#You can select them here for deletion==您可以从这里选择要删除的项
Add new pattern:==添加新规则:
"Add URL pattern"=="添加地址规则"
The right '*', after the '/', can be replaced by a <a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">regular expression</a>.== 在 '/' 后边的 '*' ,可用<a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">正则表达式</a>表示.
#domain.net/fullpath<==domain.net/绝对路径<
#>domain.net/*<==>domain.net/*<
#*.domain.net/*<==*.domain.net/*<
#*.sub.domain.net/*<==*.sub.domain.net/*<
#sub.domain.*/*<==sub.domain.*/*<
#domain.*/*<==domain.*/*<
#was removed from blacklist==已从黑名单中移除
#was added to the blacklist==已添加到黑名单
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>.==要查看所有API的列表,请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>。
To see a list of all APIs==要查看所有API的列表,请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>。
The bookmarks list can also be retrieved as RSS feed. This can also be done when you select a specific tag.==书签列表也能用作RSS订阅.当您选择某个标签时您也可执行这个操作.
Click the API icon to load the RSS from the current selection.==点击API图标以从当前选择书签中载入RSS.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==获取所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
Access to your peer from your own computer (localhost access) is granted with administrator rights. No need to configure an administration account.==从您自己的计算机访问您的节点(localhost访问)会被授予管理员权限. 无需配置管理帐户.
This setting is convenient but less secure than using a qualified admin account.==此设置很方便,但比使用合格的管理员帐户安全性低.
Please use with care, notably when you browse untrusted and potentially malicious websites while running your YaCy peer on the same computer.==请谨慎使用,尤其是在计算机上运行YaCy节点并浏览不受信任和可能有恶意的网站时.
This is required if you want a remote access to your peer, but it also hardens access controls on administration operations of your peer.==如果您希望远端访问你的节点,则这是必需的,但它也会加强节点管理操作的访问控制.
#and the default icons and links on the search page can be replaced with you own.==并且搜索页面上的默认图标和链接可以替换为您自己的.
Skin Selection==选择皮肤
Select one of the default skins, download new skins, or create your own skin.==选择一个默认皮肤, 下载新皮肤或者创建属于您自己的皮肤.
Current skin==当前皮肤
Available Skins==可用皮肤
"Use"=="使用"
"Delete"=="删除"
>Skin Color Definition<==>改变皮肤颜色<
The generic skin 'generic_pd' can be configured here with custom colors:==能在这里修改皮肤'generic_pd'的颜色:
Your port has changed. Please wait 10 seconds.==您的端口已更改. 请等待10秒.
Your browser will be redirected to the new <a href="http://#[host]#:#[port]#/ConfigBasic.html">location</a> in 5 seconds.==您的浏览器将在5秒内重定向到新的<a href="http://#[host]#:#[port]#/ConfigBasic.html">位置</a>.
The peer port was changed successfully.==节点端口已成功更改.
Opening a router port is <i>not</i> a YaCy-specific task;==打开路由器端口<i>不是</i>YaCy特有的任务;
However: if you fail to open a router port, you can nevertheless use YaCy with full functionality, the only function that is missing is on the side of the other YaCy users because they cannot see your peer.==但是,如果您无法打开路由器端口,则仍然可以使用YaCy的全部功能,缺少的唯一功能是在其他YaCy用户侧,因为他们无法看到您的YaCy节点.
Configure your router for YaCy using UPnP:==使用UPnP为您的路由器配置YaCy:
on port==在端口
you can see instruction videos everywhere in the internet, just search for <a href="http://www.youtube.com/results?search_query=Open+Ports+on+a+Router">Open Ports on a <our-router-type> Router</a> and add your router type as search term.==您可以在互联网上随处找到说明视频, 只需搜索<a href="http://www.youtube.com/results?search_query=Open+Ports+on+a+Router">Open Ports on a <our-router-type> Router</a>, 并添加您的路由器型号作为搜索词.
you can see instruction videos everywhere in the internet==您可以在互联网上随处找到说明视频, 只需搜索<a href="http://www.youtube.com/results?search_query=Open+Ports+on+a+Router">Open Ports on a
Use Case: what do you want to do with YaCy:==用法: 您想将YaCy当作:
Community-based web search==基于社区的网络搜索
Join and support the global network 'freeworld', search the web with an uncensored user-owned search network==加入并支持全球网络'freeworld', 使用不受审查的用户自有搜索网络搜索网页.
Search portal for your own web pages==个人网页搜索门户
Your YaCy installation behaves independently from other peers and you define your own web index by starting your own web crawl. This can be used to search your own web pages or to define a topic-oriented search portal.==您的YaCy安装独立于其他节点,您可以通过开始自己的网络爬虫来创建自己的网络索引。 这可用于搜索您自己的网页或创建具有特定主题的搜索门户。
which is not fatal, but would be good for the YaCy network==这不是致命问题, 但有利于YaCy网络
please open your firewall for this port and/or set a virtual server option in your router to allow connections on this port==请在防火墙中为此端口放行, 和/或在路由器中设置虚拟服务器选项, 以允许此端口上的连接
You should set a password at the <a href="ConfigAccounts_p.html">Accounts Menu</a> to secure your YaCy peer.</p>::==您可以在 <a href="ConfigAccounts_p.html">账户菜单</a> 设置密码, 从而加强您的YaCy节点安全性.</p>::
The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==超文本缓存存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存.
The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是队列式的: 队列满时, 会删除旧内容, 从而加入新内容.
A <a href="http://en.wikipedia.org/wiki/Heuristic" target="_blank">heuristic</a> is an 'experience-based technique that help in problem solving, learning and discovery' (wikipedia).==<a href="http://en.wikipedia.org/wiki/Heuristic" target="_blank">启发式</a>是一种“基于经验、有助于解决问题、学习和发现的技术”(维基百科).
search-result: shallow crawl on all displayed search results==搜索结果:浅度爬取所有显示的搜索结果
When a search is made then all displayed result links are crawled with a depth-1 crawl.==当进行搜索时, 所有显示的结果链接都会以深度1进行爬取.
When using this heuristic, then every new search request line is used for a call to listed opensearch systems.==使用这种启发式时,每个新的搜索请求行都用于调用列出的opensearch系统。
This means: right after the search request every page is loaded and every page that is linked on this page.==这意味着:在搜索请求之后,就开始加载结果的每个页面及每个页面上的链接。
If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中'添加为全球爬取作业',则要爬取的页面将被添加到全球爬取队列中(其他远端YaCy节点可能会帮助爬取这些页面)。
start background task, depending on index size this may run a long time==开始后台任务,这取决于索引的大小,这可能会运行很长一段时间
With the button "discover from index" you can search within the metadata of your local index (Web Structure Index) to find systems which support the Opensearch specification.==使用“从索引发现”按钮,您可以在本地索引(Web结构索引)的元数据中搜索,以查找支持Opensearch规范的系统。
The task is started in the background. It may take some minutes before new entries appear (after refreshing the page).==任务在后台启动。 出现新条目可能需要几分钟时间(在刷新页面之后)。
located in <i>defaults/heuristicopensearch.conf</i> to the DATA/SETTINGS directory.==位于<i>defaults/heuristicopensearch.conf</i>的文件到DATA/SETTINGS目录.
For the discover function the <i>web graph</i> option of the web structure index and the fields <i>target_rel_s, target_protocol_s, target_urlstub_s</i> have to be switched on in the <a href="IndexSchema_p.html?core=webgraph">webgraph Solr schema</a>.==对于发现功能, 必须在<a href="IndexSchema_p.html?core=webgraph">webgraph Solr模式</a>中开启网页结构索引的<i>网页图谱</i>选项和字段<i>target_rel_s, target_protocol_s, target_urlstub_s</i>.
When using this heuristic==使用这种启发式时,每个新的搜索请求行都用于调用列出的opensearch系统。
For the discover function the <i>web graph</i> option of the web structure index and the fields <i>target_rel_s==对于发现功能, 必须在<a href="IndexSchema_p.html?core=webgraph">webgraph Solr模式</a>中开启网页结构索引的<i>网页图谱</i>选项和字段<i>target_rel_s, target_protocol_s, target_urlstub_s</i>.
The search heuristics that can be switched on here are techniques that help the discovery of possible search results based on link guessing, in-search crawling and requests to other search engines.==您可以在这里开启启发式搜索, 通过猜测链接, 嵌套搜索和访问其他搜索引擎, 从而找到更多符合您期望的结果.
When a search heuristic is used, the resulting links are not used directly as search result but the loaded pages are indexed and stored like other content.==使用启发式搜索时, 得到的链接不会直接作为搜索结果, 而是像其他内容一样将加载的页面编入索引并存储.
This ensures that blacklists can be used and that the searched word actually appears on the page that was discovered by the heuristic.==这确保了黑名单可以发挥作用, 并且搜索词确实出现在由启发式发现的页面上.
The success of heuristics are marked with an image==启发式搜索找到的结果会被特定图标标记
heuristic:<name>==启发式:<名称>
#(redundant)==(redundant)
(new link)==(新链接)
below the favicon left from the search result entry:==显示在搜索结果条目左侧的网站图标下方:
The search result was discovered by a heuristic, but the link was already known by YaCy==搜索结果通过启发式搜索, 且链接已知
The search result was discovered by a heuristic, not previously known by YaCy==搜索结果通过启发式搜索, 且链接未知
When a search is made using a 'site'-operator (like: 'download site:yacy.net') then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'site'操作符搜索时(例如: 'download site:yacy.net'), site操作符指定的主机会立即以限定主机、深度为1的方式被爬取.
That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==意即: 在搜索请求发出后, 会立即加载该主机的门户页面, 以及此页面上链接到同一主机的每个页面.
Because this 'instant crawl' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search (after a small pause of some seconds).==因为'立即爬取'必须遵守robots.txt以及连续两个页面之间的最小访问间隔, 所以这个启发式选项相当慢, 但在第二次搜索时(稍等几秒后)可能会发现所有想要的搜索结果.
For P2P operation, at least DHT distribution or DHT receive (or both) must be set. You have thus defined a Robinson configuration==对于P2P操作, 必须至少勾选DHT分发或DHT接收(或两者). 因此您目前定义的是一个漂流(Robinson)配置
Global Search in P2P configuration is only allowed, if index receive is switched on. You have a P2P configuration, but are not allowed to search other peers.==P2P配置中的全局搜索仅在打开接受索引时才被允许。您已有P2P配置,但不被允许搜索其他节点。
If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果您的节点运行在'漂流模式', 您就是将YaCy作为自己搜索门户的搜索引擎来运行, 而不与其他节点交换数据
There is no index receive and no index distribution between your peer and any other peer==您不会与其他节点进行索引传递
If you leave the field empty, no peer asks your peer. If you fill in a '*', your peer is always asked.==如果此部分留空, 那么您的节点不会被其他节点访问. 如果内容是 '*' 则标示您的节点永远被允许访问.
it should be used to encrypt outgoing communications with them (for operations such as network presence, index transfer, remote crawl==它应该被用来加密与它们的传出通信(操作:网络存在、索引传输、远端爬行
This is an experimental setting which makes it possible to split PDF documents into individual index entries==这是一个实验设置,可以将PDF文档拆分为单独的索引条目
Every page will become a single index hit and the url is artifically extended with a post/get attribute value containing the page number as value==每个页面都将成为单个索引匹配,并且使用包含页码作为值的post/get属性值人为扩展url
If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果您想将YaCy作为您的网站搜索门户, 您可能需要在这改变搜索页面的图标和信息.
To change also colours and styles use the <a href="ConfigAppearance_p.html">Appearance Servlet</a> for different skins and languages.==若要改变颜色和风格,请到<a href="ConfigAppearance_p.html">外观选项</a>选择您喜欢的皮肤和语言.
Control whether media search results are as default strictly limited to indexed documents matching exactly the desired content domain==控制媒体搜索结果是否默认严格限制为与所需内容域完全匹配的索引文档
Prefer https for search queries on remote peers.==首选https用于远端节点上的搜索查询.
When SSL/TLS is enabled on remote peers, https should be used to encrypt data exchanged with them when performing peer-to-peer searches.==在远端节点上启用SSL/TLS时,应使用https来加密在执行P2P搜索时与它们交换的数据.
Please note that contrary to strict TLS, certificates are not validated against trusted certificate authorities (CA), thus allowing YaCy peers to use self-signed certificates.==请注意,与严格TLS相反,证书不会针对受信任的证书颁发机构(CA)进行验证,因此允许YaCy节点使用自签名证书.
or <a href="ViewProfile.html?hash=localhash">in the public</a> using a <a href="ViewProfile.rdf?hash=localhash">FOAF RDF file</a>.==或者<a href="ViewProfile.html?hash=localhash">在公共场所时</a>使用<a href="ViewProfile.rdf?hash=localhash">FOAF RDF 文件</a>.
Homepage (appears on every <a href="Supporter.html">Supporter Page</a> as long as your peer is online)==首页(显示在每个<a href="Supporter.html">支持者</a> 页面中, 前提是您的节点在线).
Here are all configuration options from YaCy.==这里显示YaCy所有设置.
You can change anything, but some options need a restart, and some options can crash YaCy, if wrong values are used.==您可以改变任何这里的设置, 当然, 有的需要重启才能生效, 有的甚至能引起YaCy崩溃.
For explanation please look into defaults/yacy.init==详细内容请参考defaults/yacy.init
We give information how to integrate a search box on any web page that==如何将一个搜索框集成到任意
calls the normal YaCy search window.==调用YaCy搜索的页面.
Simply use the following code:==使用以下代码:
MySearch== 我的搜索
"Search"=="搜索"
This would look like:==示例:
This does not use a style sheet file to make the integration into another web page with a different style sheet easier.==在这里并没有使用样式文件, 因为这样会比较容易将其嵌入到不同样式的页面里.
You would need to change the following items:==您可能需要以下条目:
Replace the given colors #eeeeee (box background) and #cccccc (box border)==替换已给颜色 #eeeeee (框架背景) 和 #cccccc (框架边框)
Replace the word "MySearch" with your own message==用您想显示的信息替换"我的搜索"
>Search Result Page Layout Configuration<==>搜索结果页面布局配置<
Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板.选中您希望显示的功能复选框.
Other portal settings can be adjusted in <a href="ConfigPortal_p.html">Generic Search Portal</a> menu.==其他门户网站设置可以在<a href="ConfigPortal_p.html">通用搜索门户</a>菜单中调整.
Maximum days number in the histogram. Beware that a large value may trigger high CPU loads both on the server and on the browser with large result sets.==直方图中的最大天数. 请注意, 较大的值可能会在服务器和具有大结果集的浏览器上触发高CPU负载.
Showing #[numActiveRunning]# active connections from a max. of #[numMax]# allowed incoming connections==正在显示 #[numActiveRunning]# 活动连接,最大允许传入连接 #[numMax]#
Double-Content detection is done using a ranking on a 'unique'-Field, named 'fuzzy_signature_unique_b'.==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。
This is the minimum length of a word which shall be considered as element of the signature. Should be either 2 or 3.==这是一个应被视为签名的元素单词的最小长度。 应该是2或3。
The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number, the less==quantRate是参与签名计算的单词数量的度量。 数字越高,越少
words are used for the signature==单词用于签名
For minTokenLen = 2 the quantRate value should not be below 0.24; for minTokenLen = 3 the quantRate value must be not below 0.5.==对于minTokenLen = 2,quantRate值不应低于0.24; 对于minTokenLen = 3,quantRate值必须不低于0.5。
"Re-Set to default"=="重置为默认"
"Set"=="设置"
Double-Content detection is done using a ranking on a 'unique'-Field==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。
The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number==quantRate是参与签名计算的单词数量的度量。 数字越高,越少
#-----------------------------
#File: ContentControl_p.html
#---------------------------
Content Control<==内容控制<
Peer Content Control URL Filter==节点内容控制地址过滤器
With this settings you can activate or deactivate content control on this peer==使用此设置,您可以激活或取消激活此YaCy节点上的内容控制
Use content control filtering:==使用内容控制过滤:
>Enabled<==>已启用<
Enables or disables content control==启用或禁用内容控制
Use this table to create filter:==使用此表创建过滤器:
Define a table. Default:==定义一个表格. 默认:
Content Control SMW Import Settings==内容控制SMW导入设置
With this settings you can define the content control import settings. You can define a==使用此设置,您可以定义内容控制导入设置. 你可以定义一个
Semantic Media Wiki with the appropriate extensions==语义媒体百科与适当的扩展
SMW import to content control list:==SMW导入到内容控制列表:
Enable or disable constant background synchronization of content control list from SMW (Semantic Mediawiki). Requires restart!==启用或禁用来自SMW(Semantic Mediawiki)的内容控制列表的恒定后台同步。 需要重启!
SMW import base URL:==SMW导入基URL:
Define base URL for SMW special page "Ask". Example: ==为SMW特殊页面“Ask”定义基础地址.例:
Purge content control list on initial sync:==在初始同步时清除内容控制列表:
Purge content control list on initial synchronisation after startup.==重启后,清除初始同步的内容控制列表.
"Submit"=="提交"
Define base URL for SMW special page "Ask". Example:==为SMW特殊页面“Ask”定义基础地址.例:
#-----------------------------
#File: ContentIntegrationPHPBB3_p.html
#---------------------------
Content Integration: Retrieval from phpBB3 Databases==内容集成: 从phpBB3数据库中导入
It is possible to extract texts directly from mySQL and postgreSQL databases.==能直接从mySQL和postgreSQL数据库中提取文本.
Each extraction is specific to the data that is hosted in the database.==每次提取都针对数据库中存储的数据.
This interface gives you access to the phpBB3 forums software content.==通过此接口能访问phpBB3论坛软件内容.
If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:==如果从使用phpMyAdmin读取数据库内容, 您可能会用到以下建议:
before importing large database dumps, set==在导入尺寸较大的数据库时,
in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB)==设置phpmyadmin/config.inc.php的内容, 并将您的数据库文件放到 /tmp 目录下(否则不能上传大于2MB的文件)
deselect the partial import flag==取消部分导入
When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.==导出过程开始时, 在 DATA/SURROGATE/in 目录下自动生成备份文件, 并且会被索引器自动爬取.
All indexed surrogate files are then moved to DATA/SURROGATE/out and can be re-cycled when an index is deleted.==所有被索引的备份文件都在 DATA/SURROGATE/out 目录下, 并被索引器循环利用.
#Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==爬取配置保存着特定地址的信息, 内部用于执行其所属的爬取.
#The profiles for remote crawls, <a href="ProxyIndexingMonitor_p.html">indexing via proxy</a> and snippet fetches==远端爬取、<a href="ProxyIndexingMonitor_p.html">通过代理索引</a>和摘要抓取的配置
#cannot be altered here as they are hard-coded.==不能在此更改, 因为它们是硬编码的.
These are monitoring pages for the different indexing queues.==这是索引创建队列的监视页面.
YaCy knows 5 different ways to acquire web indexes. The details of these processes (1-5) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 详细描述显示在子菜单的进程(1-5)中,
above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息是私有的,
so you need to log-in with your administration password.==所以您需要以管理员账户来查看.
Case (6) is a monitor of the local receipt-generator, the opposed case of (1). It contains also an indexing result monitor but is not considered private==事件(6)是本地回执生成器的监视器, (1)的相反事件. 它也包含一个索引结果监视器, 但不是私有的.
since it shows crawl requests from other peers.==因为它显示了来自其他节点的爬取请求.
Case (7) occurs if surrogate files are imported==事件(7)发生在导入备份文件时
The image above illustrates the data flow initiated by web index acquisition.==上图解释了由网页索引查询发起的数据流.
Some processes occur double to document the complex index migration structure.==某些进程发生了两次以记录复杂的索引迁移结构.
(1) Results of Remote Crawl Receipts==(1) 远端爬取回执的结果
This is the list of web pages that this peer initiated to crawl,==这是此节点发起爬取的网页列表,
but had been crawled by <em>other</em> peers.==但实际由<em>其他</em>节点完成爬取.
This is the 'mirror'-case of process (6).==这是进程(6)的'镜像'事件.
<em>Use Case:</em> You get entries here, if you start a local crawl on the '<a href="CrawlStartExpert.html">Advanced Crawler</a>' page and check the==<em>用法:</em> 你可在此获得条目, 当你在 '<a href="CrawlStartExpert.html">高级爬虫页面</a> 上启动本地爬取并勾选
'Do Remote Indexing'-flag, and if you checked the 'Accept Remote Crawl Requests'-flag on the '<a href="RemoteCrawl_p.html">Remote Crawling</a>' page.=='执行远端索引'-标志时, 这需要你确保在 '<a href="RemoteCrawl_p.html">远端爬取</a>' 页面中勾选了'接受远端爬取请求'-标志.
<em>Use Case:</em> This list may fill if you check the 'Index Receive'-flag on the 'Index Control' page==<em>用法:</em> 如果您在'索引控制'页面上选中'索引接收'-标志, 则此列表会填写
These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已被您的节点创建了索引, 但它们是被远端节点爬取的.
This is the 'mirror'-case of process (1).==这是进程(1)的'镜像'事件.
<em>Use Case:</em> This list may fill if you check the 'Accept Remote Crawl Requests'-flag on the '<a href="RemoteCrawl_p.html">Remote Crawling</a>' page==<em>用法:</em> 如果你在 '<a href="RemoteCrawl_p.html">远端爬取</a>' 页面勾选'接受远端爬取请求'-标记,此列表会填写
<em>Use Case:</em> place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==将包含Dublin核心元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式
You can define URLs as start points for Web page crawling and start crawling here==您可以将指定地址作为爬取网页的起始点, 并在此开始爬取
"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links== "爬取中"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容
This is repeated as long as specified under "Crawling Depth"==它将一直重复至到满足指定的"爬取深度"
A crawl can also be started using wget and the==爬取也可以将wget和
This defines how often the Crawler will follow links (of links..) embedded in websites.==此选项为爬虫跟踪网站嵌入链接的深度.
0 means that only the page you enter under "Starting Point" will be added==设置为0代表仅将"起始点"
to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==添加到索引.建议设置为2-4.由于设置为8会索引将近256亿个页面,所以不建议设置大于8的值,
index approximately 25.600.000.000 pages, maybe this is the whole WWW.==这可能是整个互联网的内容.
You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域名.
the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域名会被自动忽略.
>misc. Constraints<==>其余约束<
A questionmark is usually a hint for a dynamic page.==动态页面常用问号标记.
Example: to allow only urls that contain the word 'science', set the must-match filter to '.*science.*'.==例如: 只允许包含'science'的地址, 就在'必须匹配过滤器'中输入'.*science.*'.
In this case the crawl is repeated after the given time and no url from the previous crawl is omitted as double.==此种情况下, 爬取会每隔一定时间自动运行并且不会重复寻找前一次crawl中的链接.
This enables indexing of the wepages the crawler will download. This should be switched on by default, unless you want to crawl only to fill the==此选项开启时, 将对爬虫下载的网页建立索引. 默认应打开, 除非您仅想通过爬取来填充
This can be useful to circumvent that extremely common words are added to the database, i.e. "the", "he", "she", "it"... To exclude all words given in the file <tt>yacy.stopwords</tt> from indexing,==此项用于规避极常用字, 比如 "个", "他", "她", "它"等. 当要在索引时排除所有在<tt>yacy.stopwords</tt>文件中的字词时,
No more that two pages are loaded from the same host in one second (not more that 120 document per minute) to limit the load on the target server.==每秒最多从同一主机中载入两个页面(每分钟不超过120个文件)以限制目标主机负载.
>Target Balancer<==>目标平衡器<
A second crawl for a different host increases the throughput to a maximum of 240 documents per minute since the crawler balances the load over all hosts.==对于不同主机的第二次爬取, 会上升到每分钟最多240个文件, 因为爬虫会自动平衡所有主机的负载.
>High Speed Crawling<==>高速爬取<
A 'shallow crawl' which is not limited to a single host (or site)==不限于单个主机(或站点)的'浅爬取',
This can be done using the <a href="CrawlStartExpert.html">Expert Crawl Start</a> servlet.==对应设置<a href="CrawlStartExpert.html">专家模式起始爬取</a>选项.
>Scheduler Steering<==>定时器向导<
The scheduler on crawls can be changed or removed using the <a href="Table_API_p.html">API Steering</a>.==可以使用<a href="Table_API_p.html">API向导</a>改变或删除爬取定时器.
#-----------------------------
#File: Crawler_p.html
#---------------------------
Crawler Queues==爬虫队列
RWI RAM (Word Cache)==RWI RAM (关键字缓存)
Error with profile management. Please stop YaCy, delete the file DATA/PLASMADB/crawlProfiles0.db==资料管理出错. 请关闭YaCy, 并删除文件 DATA/PLASMADB/crawlProfiles0.db
and restart.==后重启.
Error:==错误:
Application not yet initialized. Sorry. Please wait some seconds and repeat==抱歉, 程序未初始化, 请稍候并重复
ERROR: Crawl filter==错误: 爬取过滤
does not match with==不匹配
crawl root==爬取根
Please try again with different==请使用不同的过滤字再试一次
filter. ::==. ::
Crawling of==在爬取
failed. Reason:==失败. 原因:
Error with URL input==网址输入错误
Error with file input==文件输入错误
started.==已开始.
pause reason: resource observer: not enough memory space==暂停原因: 资源观察器:没有足够内存空间
Please wait some seconds,==请稍等,
it may take some seconds until the first result appears there.==在出现第一个搜索结果前需要几秒钟时间.
If you crawl any un-wanted pages, you can delete them <a href="IndexCreateQueues_p.html?stack=LOCAL">here</a>.==如果您爬取了不需要的页面, 您可以 <a href="IndexCreateQueues_p.html?stack=LOCAL">点这</a> 删除它们.
Crawl Queue:==爬取队列:
Running Crawls==运行中的爬取
>Name<==>名字<
>Status<==>状态<
>Crawled Pages<==>爬取到的页面<
Queue</th>==队列</th>
Profile</th>==资料</th>
Initiator==发起者
Depth</th>==深度</th>
Modified Date==修改日期
Anchor Name==锚点名称
#URL==URL
Delete==删除
Next update in==下次更新将在
/> seconds.==/> 秒后.
See a access timing <a href="api/latency_p.xml">here</a>==<a href="api/latency_p.xml">点这</a> 查看访问时间
>With this file it is possible to find locations in Germany using the location (city) name, a zip code, a car sign or a telephone pre-dial number.<==>使用此插件, 则能通过查询城市名, 邮编, 车牌号或者电话区号得到德国任何地点的位置信息.<
this may produce unresolved references at other word indexes but they do not harm==这可能和其他关键字产生未解析关联, 但是这并不影响系统性能
"Delete URL and remove all references from words"=="删除地址并从关键字中删除所有关联"
delete the reference to this url at every other word where the reference exists (very extensive, but prevents unresolved references)==在存在该关联的所有其他关键字处删除指向此地址的关联(开销很大, 但可防止产生未解析的关联)
The search index contains #[doccount]# documents. You can delete them here.==搜索索引包含了 #[doccount]# 篇文档. 你可以在这儿删除它们.
Deletions are made concurrently which can cause that recently deleted documents are not yet reflected in the document count.==删除是同时进行的,这可能导致最近删除的文档还没有反映在文档计数中.
Delete by URL Matching<==通过URL匹配删除<
Delete all documents within a sub-path of the given urls. That means all documents must start with one of the url stubs as given here.==删除给定网址的子路径中的所有文档. 这意味着所有文档必须以此处给出的其中一个url存根开头.
One URL stub, a list of URL stubs<br/>or a regular expression==一个URL存根, 一个URL存根列表<br/> 或一条正则表达式
Matching Method<==匹配方法<
sub-path of given URLs==给定URL的子路径
matching with regular expression==与正则表达式匹配
"Simulate Deletion"=="模拟删除"
"no actual deletion, generates only a deletion count"=="没有实际删除,只生成删除计数"
"Engage Deletion"=="真正删除"
"simulate a deletion first to calculate the deletion count"=="首先请模拟删除以计算删除数量"
"engaged"=="删除了"
selected #[count]# documents for deletion==选择 #[count]# 篇文档以删除
deleted #[count]# documents==删除了 #[count]# 篇文档
Delete by Age<==按年龄删除<
Delete all documents which are older than a given time period.==删除所有超过给定时间段的文档.
Time Period<==时间段<
All documents older than==所有文件年龄超过
years<==年<
months<==月<
days<==日<
hours<==小时<
Age Identification<==年龄识别<
>load date==>加载日期
>last-modified==>上次修改
Delete Collections<==删除集合<
Delete all documents which are inside specific collections.==删除特定集合中的所有文档.
Not Assigned<==未分配<
Delete all documents which are not assigned to any collection==删除未分配给任何集合的所有文档
, separated by ',' (comma) or '|' (vertical bar); or==, 分隔按','(逗号)或'|'(垂直条); 或
>generate the collection list...==>生成集合列表...
Assigned<==分配的<
Delete all documents which are assigned to the following collection(s)==删除分配给以下集合的所有文档
Delete by Solr Query<==通过Solr查询删除<
This is the most generic option: select a set of documents using a solr query.==这是最通用的选项: 使用solr查询选择一组文档.
As an internal indexing database a deep-embedded multi-core Solr is used and it is possible to attach also a remote Solr.==内部索引数据库使用了深度嵌入式多核Solr,并且还可以附加远端Solr.
You can import <a href="https://dumps.wikimedia.org/backup-index-bydb.html" target="_blank">MediaWiki dumps</a> here. An example is the file==您可以在这里导入<a href="https://dumps.wikimedia.org/backup-index-bydb.html" target="_blank">MediaWiki转储</a>. 示例文件是
Dumps must be in XML format and may be compressed in gz or bz2. Place the file in the YaCy folder or in one of its sub-folders.==转储文件必须是XML格式, 可以用gz或bz2压缩. 将其放入YaCy文件夹或其子文件夹中.
When the import is started, the following happens:==开始导入时, 会进行以下工作:
The dump is extracted on the fly and wiki entries are translated into Dublin Core data format. The output looks like this:==转储文件会被即时解压, 维基条目被转换为Dublin Core元数据格式. 输出内容如下:
Each 10000 wiki records are combined in one output file which is written to /DATA/SURROGATES/in into a temporary file.==每10000条维基记录会合并为一个输出文件, 写入 /DATA/SURROGATES/in 中的临时文件.
When each of the generated output file is finished, it is renamed to a .xml file==每个生成的输出文件完成后, 会被重命名为.xml文件
Each time a xml surrogate file appears in /DATA/SURROGATES/in, the YaCy indexer fetches the file and indexes the record entries.==只要 /DATA/SURROGATES/in 中含有 xml文件, YaCy索引器就会读取它们并为其中的条目制作索引.
When a surrogate file is finished with indexing, it is moved to /DATA/SURROGATES/out==当索引完成时, xml文件会被移动到 /DATA/SURROGATES/out
You can recycle processed surrogate files by moving them from /DATA/SURROGATES/out to /DATA/SURROGATES/in==您可以将文件从/DATA/SURROGATES/out 移动到 /DATA/SURROGATES/in 以重复索引.
Results from the import can be monitored in the <a href="CrawlResults.html?process=7">indexing results for surrogates==导入结果可以在<a href="CrawlResults.html?process=7">备份文件的索引结果中监视
Single request import==单个导入请求
This will submit only a single request as given here to a OAI-PMH server and imports records into the index==向OAI-PMH服务器提交如下导入请求, 并将返回记录导入索引
"Import OAI-PMH source"=="导入OAI-PMH源"
Source:==源:
Processed:==已处理:
records<==返回记录<
#ResumptionToken:==ResumptionToken:
Import failed:==导入失败:
Import all Records from a server==从服务器导入全部记录
Import all records that follow according to resumption elements into index==根据恢复(resumption)元素将后续所有记录导入索引
In case that an index schema of the embedded/local index has changed, all documents with missing field entries can be indexed again with a reindex job.==如果嵌入式/本地索引的索引模式发生了更改, 可以通过重新索引任务再次索引所有缺少字段条目的文档.
"refresh page"=="刷新页面"
Documents in current queue<==当前队列中的文档<
Documents processed<==已处理的文档<
current select query==当前选择查询
"start reindex job now"=="立即开始重新索引任务"
"stop reindexing"=="停止重新索引"
Remaining field list==剩余字段列表
reindex documents containing these fields:==重新索引包含这些字段的文档:
Re-Crawl Index Documents==重新爬取索引文档
Searches the local index and selects documents to add to the crawler (recrawl the document).==搜索本地索引并选择要添加到爬虫的文档(重新爬取该文档).
This runs transparent as background job.==此操作作为后台任务透明运行.
Documents are added to the crawler only if no other crawls are active==仅当没有其他活动的爬取时, 文档才会被添加到爬虫
and are added in small chunks.==并且以小块方式添加.
"start recrawl job now"=="立即开始重新爬取任务"
"stop recrawl job"=="停止重新爬取任务"
Re-Crawl Query Details==重新爬取查询详情
Documents to process==要处理的文档
Current Query==当前查询
Edit Solr Query==编辑Solr查询
update==更新
to re-crawl documents selected with the given query.==以重新爬取由给定查询选中的文档.
Include failed URLs==包括失败的地址
>Field<==>字段<
>count<==>计数<
Re-crawl works only with an embedded local Solr index!==重新爬取仅适用于嵌入式本地Solr索引!
Simulate==模拟
Check only how many documents would be selected for recrawl==仅检查将有多少文档被选中重新爬取
"Browse metadata of the #[rows]# first selected documents"=="浏览前#[rows]#篇选中文档的元数据"
document(s)</a>#(/showSelectLink)# selected for recrawl.==篇文档</a>#(/showSelectLink)#被选中重新爬取.
>Solr query <==>Solr查询 <
Set defaults==设为默认值
"Reset to default values"=="重置为默认值"
Last #(/jobStatus)#Re-Crawl job report==上次#(/jobStatus)#重新爬取任务报告
An error occurred while trying to refresh automatically==尝试自动刷新时发生错误
The job terminated early due to an error when requesting the Solr index.==由于请求Solr索引时出错, 任务提前终止.
>Status<==>状态<
"Running"=="运行中"
"Shutdown in progress"=="正在关闭"
"Terminated"=="已终止"
Running::Shutdown in progress::Terminated==运行中::正在关闭::已终止
>Query<==>查询<
>Start time<==>开始时间<
>End time<==>结束时间<
URLs added to the crawler queue for recrawl==已添加到爬虫队列等待重新爬取的地址
>Recrawled URLs<==>已重新爬取的地址<
URLs rejected for some reason by the crawl stacker or the crawler queue. Please check the logs for more details.==由于某种原因被爬取堆栈器或爬虫队列拒绝的地址. 详情请查看日志.
>Rejected URLs<==>被拒绝的地址<
>Malformed URLs<==>格式错误的地址<
"#[malformedUrlsDeletedCount]# deleted from the index"=="已从索引中删除#[malformedUrlsDeletedCount]#个"
> Refresh<==> 刷新<
#-----------------------------
#File: IndexSchema_p.html
#---------------------------
Solr Schema Editor==Solr模式编辑器
If you use a custom Solr schema you may enter a different field name in the column 'Custom Solr Field Name' of the YaCy default attribute name==如果您使用自定义Solr模式, 可以在YaCy默认属性名的'自定义Solr字段名'列中输入不同的字段名
Select a core:==选择一个核心:
the core can be searched at==该核心可在此搜索
Active==激活
Attribute==属性
Custom Solr Field Name==自定义Solr字段名
Comment==注释
show active==显示激活的
show all available==显示所有可用的
show disabled==显示禁用的
"Set"=="设置"
"reset selection to default"=="将选择重置为默认"
>Reindex documents<==>重新索引文档<
If you unselected some fields, old documents in the index still contain the unselected fields.==如果您取消选择了某些字段, 索引中的旧文档仍包含这些未选择的字段.
To physically remove them from the index you need to reindex the documents.==要将它们从索引中物理删除, 您需要重新索引这些文档.
Here you can reindex all documents with inactive fields.==在这里您可以重新索引所有包含非激活字段的文档.
"reindex Solr"=="重新索引Solr"
You may monitor progress (or stop the job) under <a href="IndexReIndexMonitor_p.html">IndexReIndexMonitor_p.html</a>==您可以在<a href="IndexReIndexMonitor_p.html">IndexReIndexMonitor_p.html</a>下监视进度(或停止任务)
It is much better to retrieve the forum postings directly from the database.==所以, 直接从数据库中获取帖子内容效果更好.
This will cause that YaCy is able to offer nice navigation features after searches.==这将使YaCy能够在搜索后提供良好的导航功能.
YaCy has a phpBB3 extraction feature, please go to the <a href="ContentIntegrationPHPBB3_p.html">phpBB3 content integration</a> servlet for direct database imports.==YaCy能够解析phpBB3关键字, 参见 <a href="ContentIntegrationPHPBB3_p.html">phpBB3内容集成</a> 直接导入数据库方法.
Inserting a Search Window to phpBB3==在phpBB3中添加搜索框
To integrate a search window into phpBB3, you must insert some code into a forum template.==在论坛模板中添加以下代码以将搜索框集成到phpBB3中.
There are several templates that can be used for phpBB3, but in this guide we consider that==phpBB3中有多种模板,
you are using the default template, 'prosilver'==在此我们使用默认模板 'prosilver'.
open styles/prosilver/template/overall_header.html==打开 styles/prosilver/template/overall_header.html
find the line where the default search window is displayed, thats right behind the <pre><div id="search-box"></pre> statement==找到搜索框显示代码部分, 它们在 <pre><div id="search-box"></pre> 下面
Insert the following code right behind the div tag==在div标签后插入以下代码
YaCy Forum Search==YaCy论坛搜索
;YaCy Search==;YaCy搜索
Check all appearances of static IPs given in the code snippet and replace it with your own IP, or your host name==用您自己的IP或者主机名替代代码中给出的IP地址
You may want to change the default text elements in the code snippet==您可以更改代码中的文本元素
To see all options for the search widget, look at the more generic description of search widgets at==搜索框详细设置, 请参见
the <a href="ConfigLiveSearch.html">configuration for live search</a>.==der Seite <a href="ConfigLiveSearch.html">搜索栏集成: 即时搜索</a>.
RSS feeds can be loaded into the YaCy search index.==RSS源可以加载到YaCy搜索索引中.
This does not load the rss file as such into the index but all the messages inside the RSS feeds as individual documents.==这不会将RSS文件本身加载到索引中, 而是将RSS源中的所有消息作为单独的文档加载.
Here is a copy of your message, so you can copy it to save it for further attempts:==这是您的消息副本, 可被保存已备用:
You cannot call this page directly. Instead, use a link on the <a href="Network.html">Network</a> page.==您不能直接使用此页面. 请使用 <a href="Network.html">网络</a> 页面的对应功能.
<strong>Processed News (#[prsize]#)</strong>: this is simply an archive of incoming news that you removed by processing.==<strong>处理的新闻(#[prsize]#)</strong>: 此页面显示您已删除的传入新闻存档.
<strong>Outgoing News (#[ousize]#)</strong>: here your can see news entries that you have created. These news are currently broadcasted to other peers.==<strong>传出的新闻(#[ousize]#)</strong>: 此页面显示您节点创建的新闻条目, 正在发布给其他节点.
<strong>Published News (#[pusize]#)</strong>: your news that have been broadcasted sufficiently or that you have removed from the broadcast list.==<strong>发布的新闻(#[pusize]#)</strong>: 显示已经完全发布出去的新闻或者从传出列表中删除的新闻.
"#(page)#::Process All News::Delete All News::Abort Publication of All News::Delete All News#(/page)#"=="#(page)#::处理所有新闻::删除所有新闻::停止发布所有新闻::删除所有新闻#(/page)#"
The crawl balancer tries to avoid that domains are==爬取平衡器会尽量避免域名
accessed too often, but if the balancer fails (i.e. if there are only links left from the same domain), then these minimum==被访问得过于频繁, 但如果平衡器失效(即只剩下来自同一域名的链接), 则会应用这些最小
Enough memory is available for proper operation.==有足够内存保证正常运行.
Within the last eleven minutes, at least four operations have tried to request memory that would have reduced free space within the minimum required.==在过去的11分钟内, 至少有四次操作尝试请求的内存会使可用空间低于所需的最低值.
Minimum required==最低要求
Amount of memory (in Mebibytes) that should at least be free for proper operation==为保证正常运行的最低内存量(以MB为单位)
YaCy can be used to 'scrape' content from pages that pass the integrated caching HTTP proxy.==YaCy可以从经过集成缓存HTTP代理的页面中'抓取'内容.
When scraping proxy pages then <strong>no personal or protected page is indexed</strong>;==抓取代理页面时, <strong>不会索引个人或受保护的页面</strong>;
# This is the control page for web pages that your peer has indexed during the current application run-time==这是您的节点在当前应用程序运行期间索引的网页的控制页面
# as result of proxy fetch/prefetch.==作为代理抓取/预读的结果.
# No personal or protected page is indexed==不会索引个人或受保护的页面
those pages are detected by properties in the HTTP header (like Cookie-Use, or HTTP Authorization)==通过检测HTTP头部属性(比如cookie用途或者http认证)
or by POST-Parameters (either in URL or as HTTP protocol)==或者提交参数(链接或者http协议)
and automatically excluded from indexing.==能够检测出此类网页并在索引时排除.
Proxy Auto Config:==自动配置代理:
this controls the proxy auto configuration script for browsers at http://localhost:8090/autoconfig.pac==这会影响浏览器代理自动配置脚本 http://localhost:8090/autoconfig.pac
.yacy-domains only==仅 .yacy 域名
whether the proxy should only be used for .yacy-Domains==代理是否只对 .yacy 域名有效.
Proxy pre-fetch setting:==代理预读设置:
this is an automated html page loading procedure that takes actual proxy-requested==这是一个自动预读网页的过程
A prefetch of 0 means no prefetch; a prefetch of 1 means to prefetch all==设置为0则不预读; 设置为1预读所有嵌入链接,
embedded URLs, but since embedded image links are loaded by the browser==但是嵌入图像链接是由浏览器读取,
this means that only embedded href-anchors are prefetched additionally.==这意味着只会额外预读嵌入的href锚点链接.
Store to Cache==存储至缓存
It is almost always recommended to set this on. The only exception is that you have another caching proxy running as secondary proxy and YaCy is configured to used that proxy in proxy-proxy - mode.==推荐打开此项设置. 唯一的例外是您有另一个缓存代理作为二级代理并且YaCy设置为使用'代理到代理'模式.
Do Local Text-Indexing==进行本地文本索引
If this is on, all pages (except private content) that passes the proxy is indexed.==如果打开此项设置, 所有通过代理的网页(除了私有内容)都会被索引.
Do Local Media-Indexing==进行本地媒体索引
This is the same as for Local Text-Indexing, but switches only the indexing of media content on.==与本地文本索引类似, 但是仅当'索引媒体内容'打开时有效.
Simply drag and drop the link shown below to your Browsers Toolbar/Link-Bar.==仅需拖动以下链接至浏览器工具栏/书签栏.
If you click on it while browsing, the currently viewed website will be inserted into the YaCy crawling queue for indexing.==如果在浏览网页时点击它, 当前查看的网站会被加入YaCy爬取队列以便索引.
The document ranking influences the order of the search result entities.==文档排名会影响实际搜索结果的顺序.
A ranking is computed using a number of attributes from the documents that match with the search word.==排名计算使用到与搜索词匹配的文档中的多个属性.
The attributes are first normalized over all search results and then the normalized attribute is multiplied with the ranking coefficient computed from this list.==在所有搜索结果基础上,先对属性进行归一化,然后将归一化的属性与相应的排名系数相乘.
The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数随着下表中给出的排名水平呈指数增长.
If you increase a single value by one, then the strength of the parameter doubles.==如果将单个值增加1,则参数的影响效果加倍.
</body>==<script>window.onload = function () {$("label:contains('Appearance In Emphasized Text')").text('出现在强调的文本中');$("label:contains('Appearance In URL')").text('出现在地址中'); $("label:contains('Appearance In Author')").text('出现在作者中'); $("label:contains('Appearance In Reference/Anchor Name')").text('出现在参考/锚点名称中'); $("label:contains('Appearance In Tags')").text('出现在标签中'); $("label:contains('Appearance In Title')").text('出现在标题中'); $("label:contains('Authority of Domain')").text('域名权威'); $("label:contains('Category App, Appearance')").text('类别:出现在应用中'); $("label:contains('Category Audio Appearance')").text('类别:出现在音频中'); $("label:contains('Category Image Appearance')").text('类别:出现在图片中'); $("label:contains('Category Video Appearance')").text('类别:出现在视频中'); $("label:contains('Category Index Page')").text('类别:索引页面'); $("label:contains('Date')").text('日期'); $("label:contains('Domain Length')").text('域名长度'); $("label:contains('Hit Count')").text('命中数'); $("label:contains('Preferred Language')").text('倾向的语言'); $("label:contains('Links To Local Domain')").text('本地域名链接'); $("label:contains('Links To Other Domain')").text('其他域名链接'); $("label:contains('Phrases In Text')").text('文本中短语');$("label:contains('Position In Phrase')").text('在短语中位置');$("label:contains('Position In Text')").text('在文本中位置');$("label:contains('Position Of Phrase')").text('短语的位置'); $("label:contains('Term Frequency')").text('术语频率'); $("label:contains('URL Components')").text('地址组件'); $("label:contains('Term Frequency')").text('术语频率'); $("label:contains('URL Length')").text('地址长度'); $("label:contains('Word Distance')").text('词汇距离'); $("label:contains('Words In Text')").text('文本词汇'); $("label:contains('Words In Title')").text('标题词汇');}</script></body>
first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==首先对搜索结果进行一次排名, 然后再对首次排名结果进行二次排名.
The two stages are separated because they need statistical information from the result of the pre-ranking.==两个结果是分开的, 因为它们都需要上次排名的统计结果.
The password redundancy check failed. You have probably misstyped your password.==密码冗余检查失败. 您可能输错了密码.
Shutting down.</strong><br />Application will terminate after working off all crawling tasks.==正在关闭.</strong><br />所有爬取任务完成后程序将终止.
Your administration account setting has been made.==已创建管理账户设置.
Your new administration account name is #[user]#. The password has been accepted.<br />If you go back to the Settings page, you must log-in again.==新帐户名是 #[user]#. 密码输入正确.<br />如果返回设置页面, 需要再次输入密码.
Your proxy access setting has been changed.==代理访问设置已改变.
Your proxy account check has been disabled, since you did not supply a password.==由于您未提供密码, 代理账户检查已被禁用.
The new proxy IP filter is set to==代理IP过滤设置为
The proxy port is:==代理端口号:
Port rebinding will be done in a few seconds.==端口在几秒后绑定完成.
You can reach your YaCy server under the new location==可以通过新位置访问YaCy服务器:
Your proxy access setting has been changed.==代理访问设置已改变.
Your server access filter is now set to==服务器访问过滤为
Auto pop-up of the Status page is now <strong>disabled</strong>==自动弹出状态页面<strong>关闭.</strong>
Auto pop-up of the Status page is now <strong>enabled</strong>==自动弹出状态页面<strong>打开.</strong>
You are now permanently <strong>online</strong>.==您现在处于永久<strong>在线状态</strong>.
After a short while you should see the effect on the==稍后您可以在
status</a> page.==状态</a>页面看到效果.
The Peer Name is:==节点名:
Your static Ip(or DynDns) is:==静态IP(或DynDns)为:
Seed Settings changed.#(success)#::You are now a principal peer.==seed设置已改变.#(success)#::本地节点已成为主要节点.
Seed Settings changed, but something is wrong.==seed设置已改变, 但是未完全成功.
Seed Uploading was deactivated automatically.==seed上传自动关闭.
Please return to the settings page and modify the data.==请返回设置页面修改参数.
The new setting is effective immediately, you don't need to re-start.==新设置立即生效.
The submitted peer name is already used by another peer. Please choose a different name.</strong> The Peer name has not been changed.==提交的节点名已存在, 请更改.</strong> 节点名未改变.
Your Peer Language is:==节点语言:
The submitted peer name is not well-formed. Please choose a different name.</strong> The Peer name has not been changed.==提交的节点名格式不正确, 请选择其他名字.</strong> 节点名未改变.
Peer names must not contain characters other than (a-z, A-Z, 0-9, '-', '_') and must not be longer than 80 characters.==节点名只能包含字符(a-z, A-Z, 0-9, '-', '_'), 且长度不能超过80个字符.
#The new parser settings where changed successfully.==新的解析器设置已成功保存.
Parsing of the following mime-types was enabled:==已启用以下mime类型的解析:
Seed Upload method was changed successfully.==seed上传方式改变成功.
You are now a principal peer.==本地节点已成为主要节点.
Seed Upload Method:==seed上传方式:
Seed File URL:==seed文件URL:
Your proxy networking settings have been changed.==代理网络设置已改变.
Transparent Proxy Support is:==透明代理支持:
Connection Keep-Alive Support is:==连接保持支持:
Your message forwarding settings have been changed.==消息发送设置已改变.
Message Forwarding Support is:==消息发送支持:
Message Forwarding Command:==消息转发命令:
Recipient Address:==收件人地址:
Please return to the settings page and modify the data.==请返回设置页面修改参数.
You are now <strong>event-based online</strong>.==您现在处于<strong>事件驱动在线</strong>.
After a short while you should see the effect on the==稍后您可以在
You are now in <strong>Cache Mode</strong>.==您现在处于<strong>Cache模式</strong>.
Only Proxy-cache ist available in this mode.==此模式下仅代理缓存可用.
After a short while you should see the effect on the==稍后您可以在
You can now go back to the==现在可返回
Settings</a> page if you want to make more changes.==设置</a> 页面, 如果需要更改更多参数的话.
You can reach your YaCy server under the new location==现在可以通过新位置访问YaCy服务器:
but only if there had been changes to the seed-list.==前提是seed列表有变更.
The host where you have a FTP account, like==ftp服务器, 比如
Path</label>==路径</label>
The remote path on the FTP server, like==ftp服务器上传路径, 比如
Missing sub-directories are NOT created automatically.==不会自动创建缺少的子目录.
Username==用户名
Your log-in at the FTP server==ftp服务器用户名
Password</label>==密码</label>
The password==用户密码
"Submit"=="提交"
#-----------------------------
#File: Settings_Seed_UploadScp.inc
#---------------------------
Uploading via SCP:==通过SCP上传:
This is the account for a server where you are able to login via ssh.==设置通过ssh访问服务器的账户.
#Server==Server
The host where you have an account, like 'my.host.net'==主机, 比如'my.host.net'
#Server Port==Server Port
The sshd port of the host, like '22'==ssh端口, 比如'22'
Path</label>==路径</label>
The remote path on the server, like '~/yacy/seed.txt'. Missing sub-directories are NOT created automatically.==ssh服务器上传路径, 比如'~/yacy/seed.txt'. 不会自动创建缺少的子目录.
Username==用户名
Your log-in at the server==ssh服务器用户名
Password</label>==密码</label>
The password==用户密码
"Submit"=="提交"
#-----------------------------
#File: Settings_ServerAccess.inc
#---------------------------
Server Access Settings==服务器访问设置
IP-Number filter:==IP地址过滤:
Here you can restrict access to the server.==通过此限制访问服务器的IP.
By default, the access is not limited,==默认情况下, 不对访问作限制,
because this function is needed to spawn the p2p index-sharing function.==否则会影响p2p索引共享功能.
If you block access to your server (setting anything else than '*'), then you will also be blocked==如果您限制了对服务器的访问(设置为'*'以外的值), 那么您自己也会被阻止
Please open the <a href="ConfigAccounts_p.html">accounts configuration</a> page <strong>immediately</strong>==请<strong>立即</strong>打开<a href="ConfigAccounts_p.html">账户设置</a>页面
Access is unrestricted from localhost (this includes administration features).==访问权限在localhost不受限制(这包括管理功能)。
Please check the <a href="ConfigAccounts_p.html">accounts configuration</a> page to ensure that the settings match the security level you need.==请检查<a href="ConfigAccounts_p.html">帐户配置</a>页面,确保设置符合您所需的安全级别。
Your Web Page Indexer is idle. You can start your own web crawl <a href="CrawlStartSite.html">here</a>==网页索引器当前空闲. 可以点击<a href="CrawlStartSite.html">这里</a>开始爬取网页
If you do this without YaCy running on a desktop-pc or without Java 6 installed, this will possibly break startup.==如果您不是在台式机上运行YaCy, 或者未安装Java 6, 这样做可能会导致无法启动.
In this case, you will have to edit the configuration manually in DATA/SETTINGS/yacy.conf==在此情况下, 您需要手动修改配置文件 DATA/SETTINGS/yacy.conf
Go back to the <a href="Settings_p.html">Settings</a> page==将返回<a href="Settings_p.html">设置</a>页面
Your system is not protected by a password==您的系统未受密码保护
Please go to the <a href="ConfigAccounts_p.html">User Administration</a> page and set an administration password.==请在<a href="ConfigAccounts_p.html">用户管理</a>页面设置管理密码.
You don't have the correct access right to perform this task.==无执行此任务权限.
Please log in.==请登录.
You can now go back to the <a href="Settings_p.html">Settings</a> page if you want to make more changes.==您现在可以返回<a href="Settings_p.html">设置</a>页面进行详细设置.
Application will terminate after working off all scheduled tasks.==程序在所有任务完成后将停止.
Please send us feed-back!==请给我们反馈!
We don't track YaCy users, YaCy does not send 'home-pings', we do not even know how many people use YaCy as their private search engine.==我们不跟踪YaCy用户, YaCy不会发送'home-ping', 我们甚至不知道有多少人将YaCy用作他们的私人搜索引擎.
Therefore we like to ask you: do you like YaCy?==所以我们想问你:你喜欢YaCy吗?
Will you use it again... if not, why?==你会再次使用它吗?如果不是,为什么?
Is it possible that we change a bit to suit your needs==我们有可能改变一下以满足您的需求吗
Please send us feed-back about your experience with an==请向我们发送有关您的体验的回馈
Professional Support==专业级支持
If you are a professional user and you would like to use YaCy in your company in combination with consulting services by YaCy specialists, please see==如果您是专业用户,并且希望在公司中使用YaCy并获得YaCy专家的咨询服务,请参阅
The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML.
Click the API icon to see the XML.==点击API图标查看XML.
To see a list of all APIs, please visit the ==查看所有API, 请访问
The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML.
Click the API icon to see the XML.==点击API图标查看XML.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
Vocabularies can be used to produce a search navigation.==词汇表可用于生成搜索导航.
A vocabulary must be created before content is indexed.==必须在索引内容之前创建词汇.
The vocabulary is used to annotate the indexed content with a reference to the object that is denoted by the term of the vocabulary.==词汇用于通过引用由词汇的术语表示的对象来注释索引的内容.
The object can be denoted by a url stub that, combined with the term, becomes the url for the object.==该对象可以用地址存根表示,该存根与该术语一起成为该对象的地址.
>Vocabulary Selection<==>词汇选择<
>Vocabulary Name<==>词汇名<
"View"=="查看"
>Vocabulary Production<==>词汇生成<
Empty Vocabulary== 空词汇
>Auto-Discover<==>自动发现<
> from file name==> 来自文件名
> from page title (splitted)==> 来自页面标题(拆分)
> from page title==> 来自页面标题
> from page author==> 来自页面作者
>Objectspace<==>对象空间<
It is possible to produce a vocabulary out of the existing search index.==可以从现有搜索索引中生成词汇表.
This is done using a given 'objectspace' which you can enter as a URL Stub.==这是使用给定的“对象空间”完成的,您可以将其作为地址存根输入.
This stub is used to find all matching URLs.==此存根用于查找所有匹配的地址.
If the remaining path from the matching URLs then denotes a single file, the file name is used as vocabulary term.==如果来自匹配地址的剩余路径表示单个文件,则文件名用作词汇表术语.
This works best with wikis.==这适用于百科.
Try to use a wiki url as objectspace path.==尝试使用百科地址作为对象空间路径
Import from a csv file==从csv文件导入
>File Path or==>文件路径或者
>Start line<==>起始行<
>Column for Literals<==>文本列<
>Synonyms<==>同义词<
>no Synonyms<==>无同义词<
>Auto-Enrich with Synonyms from Stemming Library<==>使用词干库中的同义词自动丰富<
The data that is visualized here can also be retrieved in a XML file, which lists the reference relation between the domains.==此页面数据显示域之间的关联关系, 能以XML文件形式查看.
With a GET-property 'about' you get only reference relations about the host that you give in the argument field for 'about'.==使用GET属性'about'仅能获得带有'about'参数的域关联关系.
With a GET-property 'latest' you get a list of references that had been computed during the current run-time of YaCy, and with each next call only an update to the next list of references.==使用GET属性'latest'能获得当前的关联关系列表, 并且每一次调用都只能更新下一级关联关系列表.
Click the API icon to see the XML file.==点击API图标查看XML文件.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
Web Structure==网页结构
host<==主机<
depth<==深度<
nodes<==节点<
time<==时间<
size<==大小<
>Background<==>背景<
>Line<==>线<
>Dot<==>点<
>Dot-end<==>末点<
>Color <==>颜色<
"change"=="改变"
#-----------------------------
#File: Wiki.html
#---------------------------
YaCyWiki page:==YaCyWiki:
last edited by==最后编辑由
change date==改变日期
Edit<==编辑<
only granted to admin==只授权给管理员
Grant Write Access to==授予写权限
# !!! Do not translate the input buttons because that breaks the function to switch rights !!!
Text will be displayed <span class="underline">underlined</span>.==文本将显示为<span class="underline">下划线</span>.
Code==代码
This tag displays a Youtube or Vimeo video with the id specified and fixed width 425 pixels and height 350 pixels.==这个标签显示一个指定id、固定宽度425像素、高度350像素的Youtube或Vimeo视频.
i.e. use==比如用
Wiki Help==Wiki帮助
Wiki-Code==Wiki代码
This table contains a short description of the tags that can be used in the Wiki and several other servlets==此表简要描述了可以在Wiki和其他几个页面中使用的标签,
of YaCy. For a more detailed description visit the==详情请见
#YaCy Wiki==YaCy Wiki
Description==描述
#=headline===headline
These tags create headlines. If a page has three or more headlines, a table of content will be created automatically.==这些标签用于创建标题. 如果页面有三个或更多标题, 则会自动创建目录.
Headlines of level 1 will be ignored in the table of content.==一级标题不会出现在目录中.
#text==Text
These tags create stressed texts. The first pair emphasizes the text (most browsers will display it in italics),==这些标签用于创建强调文本. 第一对用于强调文本(多数浏览器以斜体显示),
the second one emphazises it more strongly (i.e. bold) and the last tags create a combination of both.==第二对强调程度更强(即粗体), 最后一对则是两者的组合.
Text will be displayed <span class="strike">stricken through</span>.==文本内容以<span class="strike">删除线</span>表示.
Lines will be indented. This tag is supposed to mark citations, but may as well be used for styling purposes.==缩进内容, 此标记主要用于引用, 也能用于标识样式.
#point==point
These tags create a numbered list.==此标记用于有序列表.
#something<==something<
#another thing==another thing
#and yet another==and yet another
#something else==something else
These tags create an unnumbered list.==用于创建无序列表.
#word==word
#:definition==:definition
These tags create a definition list.==用于创建定义列表.
This tag creates a horizontal line.==创建水平线.
#pagename==pagename
#description]]==description]]
This tag creates links to other pages of the wiki.==创建到其他wiki页面的链接.
This tag displays an image, it can be aligned left, right or center.==显示图片, 可设置左对齐, 右对齐和居中.
These tags create a table, whereas the first marks the beginning of the table, the second starts==这些标签用于创建表格: 第一个标记表格的开头, 第二个开始
a new line, the third and fourth each create a new cell in the line. The last displayed tag==新的一行, 第三个和第四个分别在该行中创建一个新单元格. 最后显示的标签
closes the table.==用于结束表格.
#The escape tags will cause all tags in the text between the starting and the closing tag to not be treated as wiki-code.==Durch diesen Tag wird der Text, der zwischen den Klammern steht, nicht interpretiert und unformatiert als normaler Text ausgegeben.
A text between these tags will keep all the spaces and linebreaks in it. Great for ASCII-art and program code.==此标记之间的文本会保留所有空格和换行, 主要用于ASCII艺术图片和编程代码.
If a line starts with a space, it will be displayed in a non-proportional font.==如果一行以空格开头, 则会以等宽字体显示.
url description==URL描述
This tag creates links to external websites.==此标记创建外部网站链接.
"As a first-time-user you see only basic functions. Set a use case or name your peer to see more options. Start a first web crawl to see all monitoring options."=="作为初次使用者,您只能看到基本的功能. 请命名您的Yacy节点来看更多的选项. 开始第一个网页爬取, 查看所有监视选项."
"You do not see all monitoring options here, because some belong to crawl result monitoring. Start a web crawl to see that!"=="您不会在这里看到所有的监控选项,因为有些属于爬取结果监控. 开始网络爬取看看!"
As a first-time-user you see only basic functions. Set a use case or name your peer to see more options. Start a first web crawl to see all monitoring options.==作为初次使用者, 您只能看到基本的功能. 请命名您的Yacy节点来看更多的选项. 开始第一个网页爬取, 查看所有监视选项.
You do not see all monitoring options here, because some belong to crawl result monitoring. Start a web crawl to see that!==您不会在这里看到所有的监控选项,因为有些属于爬取结果监控. 开始网络爬取看看!
click on the red icon in the upper right after a search. this works good in combination with the==搜索后点击右上角的红色图标. 这可以很好地配合
add search results from external opensearch systems==添加外部opensearch系统的搜索结果
only pages with <date> in content==仅内容包含<date>的页面
add search results from ==添加来自以下来源的搜索结果: 
this works good in combination with the '/date' ranking modifier.==这与'/date'排名修饰符结合使用效果很好.
click on the red icon in the upper right after a search.==搜索后点击右上角的红色图标.
only pages with ==仅内容包含
add search results from==添加来自以下来源的搜索结果:
"Search"=="搜索"
advanced parameters==高级参数
Max. number of results==最大结果数
Results per page==每页结果数
Resource==资源
global==全球
>local==>本地
Global search is disabled because==全球搜索被禁用, 因为
DHT Distribution</a> is==DHT分发</a>被
Index Receive</a> is==索引接收</a>被
DHT Distribution and Index Receive</a> are==DHT分发和索引接受</a>被
disabled.#(==禁用.#(
URL mask==URL过滤
restrict on==限制
show all==显示所有
Prefer mask==首选过滤
Constraints==约束
only index pages==仅索引页面
"authentication required"=="需要认证"
Disable search function for users without authorization==禁止未授权用户搜索
Enable web search to everyone==允许所有人搜索
the peer-to-peer network==P2P网络
only the local index==仅本地索引
Query Operators==查询操作符
restrictions==限制
only urls with the <phrase> in the url==仅包含<phrase>的URL
only urls with extension==仅带扩展名的地址
only urls from host==仅来自主机的地址
only pages with as-author-anotated==仅限标注了指定作者的页面
only pages from top-level-domains==仅来自顶级域名的页面
only resources from http or https servers==仅来自http/https服务器的资源
only resources from ftp servers==仅来自ftp服务器的资源
they are rare==它们很少见
crawl them yourself==您需要自己爬取它们
only resources from smb servers==仅来自smb服务器的资源
Intranet Indexing</a> must be selected==局域网索引</a>必须被选中
only files from a local file system==仅来自本机文件系统的文件
ranking modifier==排名修饰符
sort by date==按日期排序
latest first==最新者居首
multiple words shall appear near==多个词应相邻出现
doublequotes==双引号
prefer given language==首选语言
an <a href="http://www.loc.gov/standards/iso639-2/php/English_list.php" title="Reference alpha-2 language codes list">ISO 639-1</a> 2-letter code==<a href="http://www.loc.gov/standards/iso639-2/php/English_list.php" title="Reference alpha-2 language codes list">ISO 639-1</a> 标准的双字母代码
heuristics==启发式
add search results from blekko==添加来自blekko的搜索结果
Search Navigation==搜索导航
keyboard shortcuts==快捷键
<a href="https://en.wikipedia.org/wiki/Access_key">Access key</a> modifier + n==<a href="https://zh.wikipedia.org/wiki/%E8%AE%BF%E9%97%AE%E9%94%AE">访问键</a> modifier + n
next result page==下一页
<a href="https://en.wikipedia.org/wiki/Access_key">Access key</a> modifier + p==<a href="https://zh.wikipedia.org/wiki/%E8%AE%BF%E9%97%AE%E9%94%AE">访问键</a> modifier + p
previous result page==上一页
automatic result retrieval==自动结果检索
browser integration==浏览器集成
after searching, click-open on the default search engine in the upper right search field of your browser and select 'Add "YaCy Search.."'==搜索后, 点击打开浏览器右上方搜索框中的默认搜索引擎, 并选择'添加"YaCy Search.."'
search as rss feed==作为RSS-Feed搜索
click on the red icon in the upper right after a search. this works good in combination with the '/date' ranking modifier. See an==搜索后点击右上角的红色图标. 这与'/date'排名修饰符结合使用效果很好. 参见
>example==>示例
json search results==json搜索结果
for ajax developers: get the search rss feed and replace the '.rss' extension in the search result url with '.json'==对于AJAX开发者: 获取搜索结果的RSS feed, 并将搜索结果地址中的'.rss'扩展名替换为'.json'
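# Note (not part of the translation): a small Python sketch of the '.rss' to '.json' swap described
# above. The servlet name 'yacysearch.rss' and port 8090 are assumptions; take the feed URL from the
# red RSS icon after a search and only change the extension.
#
#   import json
#   from urllib.request import urlopen
#
#   rss_url = "http://localhost:8090/yacysearch.rss?query=yacy"
#   json_url = rss_url.replace(".rss?", ".json?")
#   with urlopen(json_url) as response:
#       results = json.load(response)
#   # inspect the returned structure (it mirrors the opensearch result fields)
#   print(type(results))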
YaCy-UI is going to be a JavaScript based client for YaCy based on the existing XML and JSON API.==YaCy-UI 是基于JavaScript的YaCy客户端, 它使用当前的XML和JSON API.
YaCy-UI is at most alpha status, as there is still problems with retriving the search results.==YaCy-UI目前最多处于alpha状态, 因为检索搜索结果时仍存在问题.
I am currently changing the backend to a more application friendly format and getting good results with it (I will check that in some time after the stable release 0.7).==我目前正在将后端改为对应用程序更友好的格式, 并取得了不错的效果(我会在稳定版0.7发布后一段时间内提交).
For now have a look at the bookmarks, performance has increased significantly, due to the use of JSON and Flexigrid!==现在可以先看看书签功能, 由于使用了JSON和Flexigrid, 性能已显著提升!
#-----------------------------
#File: yacyinteractive.html
#---------------------------
YaCy Interactive Search==YaCy交互搜索
This search result can also be retrieved as RSS/<a href="http://www.opensearch.org">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org">opensearch</a>形式表示.
The query format is similar to <a href="http://www.loc.gov/standards/sru/">SRU</a>.==请求的格式与<a href="http://www.loc.gov/standards/sru/">SRU</a>相似.
Click the API icon to see an example call to the search rss API.==点击API图标查看调用搜索RSS API的示例.
This search result can also be retrieved as RSS/<a href="http://www.opensearch.org" target="_blank">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org" target="_blank">opensearch</a>形式表示.
Your search is done using peers in the YaCy P2P network.==您的搜索是靠YaCy P2P网络中的节点完成的。
You can switch to 'Stealth Mode' which will switch off P2P, giving you full privacy. Expect less results then, because then only your own search index is used.==您可以切换到'隐身模式',这将关闭P2P,给您完全的隐私。此时结果会较少,因为只会使用您自己的搜索索引。
Your search is done using only your own peer, locally.==您的搜索仅由您本地的节点完成。
You can switch to 'Peer-to-Peer Mode' which will cause that your search is done using the other peers in the YaCy network.==您可以切换到'P2P模式',这将使您的搜索由YaCy网络中的其他节点共同完成。