
# zh.lng
# English-->Chinese
# -----------------------
# This is a part of YaCy, a peer-to-peer based web search engine
#
# (C) by Michael Peter Christen; mc@anomic.de
# first published on http://www.anomic.de
# Frankfurt, Germany, 2005
#
#
# This file is maintained by lofyer<lofyer@gmail.com>
# This file is written by lofyer
# If you find any mistakes or untranslated strings in this file please don't hesitate to email them to the maintainer.
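# Note (illustrative, not a translation entry): every active line in this file
# follows the pattern
#   Original English string==中文翻译
# The English side must match the source HTML text exactly, so source-side
# typos (e.g. "Autocralwer" below) are kept on purpose; lines starting with
# '#' are comments or disabled entries.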
#File: AccessGrid_p.html
#---------------------------
YaCy Network Access==YaCy网络访问
Server Access Grid==服务器访问网格
This images shows incoming connections to your YaCy peer and outgoing connections from your peer to other peers and web servers==这幅图显示了到你节点的传入连接,以及从你节点到其他节点或网站服务器的传出连接
#-----------------------------
#File: AccessTracker_p.html
#---------------------------
YaCy '#[clientname]#': Access Tracker==YaCy '#[clientname]#': 访问跟踪器
Server Access Overview==服务器访问概况
This is a list of #[num]# requests to the local http server within the last hour.==最近一小时内有 #[num]# 个到本地的访问请求。
Showing #[num]# requests==显示 #[num]# 个请求
>Host<==>服务器<
>Path<==>路径<
Date<==日期<
Access Count During==访问计数
last Second==最近1秒
last Minute==最近1分
last 10 Minutes==最近10分
last Hour==最近1小时
The following hosts are registered as source for brute-force requests to protected pages==以下服务器被记录为对受保护页面进行暴力破解请求的来源
#>Host==>Host
Access Times==访问时间
Server Access Details==服务器访问细节
Local Search Log==本地搜索日志
Local Search Host Tracker==本地搜索服务器跟踪器
Remote Search Log==远端搜索日志
#Total:==Total:
Success:==成功:
Remote Search Host Tracker==远端搜索服务器跟踪器
This is a list of searches that had been requested from this' peer search interface==此列表显示来自本节点搜索界面发出请求的搜索
Showing #[num]# entries from a total of #[total]# requests.==显示 #[num]# 词条,共 #[total]# 个请求。
Requesting Host==请求服务器
Peer Name==节点名称
Offset==偏移量
Expected Results==期望结果
Returned Results==返回结果
Known Results==已知结果
Used Time (ms)==消耗时间(毫秒)
URL fetch (ms)==获取地址(毫秒)
Snippet comp (ms)==摘要计算(毫秒)
Query==查询字符
>User Agent<==>用户代理<
Top Search Words (last 7 Days)==热门搜索词汇(最近7天)
Search Word Hashes==搜索字哈希值
Count</td>==计数</td>
Queries Per Last Hour==查询/小时
Access Dates==访问日期
This is a list of searches that had been requested from remote peer search interface==此列表显示来自远端节点搜索界面发出请求的搜索.
This is a list of requests (max. 1000) to the local http server within the last hour==这是最近一小时内本地http服务器的请求列表(最多1000个)
#-----------------------------
#File: Autocrawl_p.html
#---------------------------
>Autocrawler<==>自动爬虫<
Autocrawler automatically selects and adds tasks to the local crawl queue==自动爬虫自动选择任务并将其添加到本地爬网队列
This will work best when there are already quite a few domains in the index==如果索引中已经有一些域名,这将会工作得最好
Autocralwer Configuration==自动爬虫配置
You need to restart for some settings to be applied==你需要重新启动才能应用一些设置
Enable Autocrawler:==启用自动爬虫:
Deep crawl every:==深入爬取:
Warning: if this is bigger than "Rows to fetch" only shallow crawls will run==警告:如果这大于“取回行”,只有浅爬取将运行
Rows to fetch at once:==一次取回行:
Recrawl only older than # days:==只重新爬取超过 # 天的页面:
Get hosts by query:==通过查询获取服务器:
Can be any valid Solr query.==可以是任何有效的Solr查询。
Shallow crawl depth (0 to 2):==浅爬取深度(0至2):
Deep crawl depth (1 to 5):==深爬取深度(1至5):
Index text:==索引文本:
Index media:==索引媒体:
"Save"=="保存"
#-----------------------------
#File: BlacklistCleaner_p.html
#---------------------------
Blacklist Cleaner==黑名单整理
Here you can remove or edit illegal or double blacklist-entries==在这里你可以删除或者编辑一个非法或者重复的黑名单词条
Check list==校验名单
"Check"=="校验"
Allow regular expressions in host part of blacklist entries==允许黑名单中服务器部分的正则表达式
The blacklist-cleaner only works for the following blacklist-engines up to now:==此整理目前只对以下黑名单引擎有效:
Illegal Entries in #[blList]# for==非法词条在 #[blList]#
Deleted #[delCount]# entries==已删除 #[delCount]# 个词条
Altered #[alterCount]# entries==已修改 #[alterCount]# 个词条
Two wildcards in host-part==服务器部分中的两个通配符
Either subdomain <u>or</u> wildcard==子域名<u>或者</u>通配符
Path is invalid Regex==无效正则表达式
Wildcard not on begin or end==通配符未在开头或者结尾处
Host contains illegal chars==服务器名包含非法字符
Double==重复
"Change Selected"=="改变选中"
"Delete Selected"=="删除选中"
No Blacklist selected==未选中黑名单
#-----------------------------
#File: BlacklistImpExp_p.html
#---------------------------
Blacklist Import==黑名单导入
Used Blacklist engine:==使用的黑名单引擎:
Import blacklist items from...==导入黑名单词条从...
other YaCy peers:==其他YaCy节点:
"Load new blacklist items"=="载入黑名单词条"
#URL:==URL:
plain text file:<==文本文件:<
XML file:==XML文件:
Upload a regular text file which contains one blacklist entry per line.==上传一个每行都有一个黑名单词条的文本文件.
Upload an XML file which contains one or more blacklists.==上传一个包含一个或多个黑名单的XML文件.
Export blacklist items to==导出黑名单到
Here you can export a blacklist as an XML file. This file will contain additional==你可以将黑名单导出为一个XML文件。此文件含有
information about which cases a blacklist is activated for==激活黑名单所具备条件的详细信息
"Export list as XML"=="导出名单到XML"
Here you can export a blacklist as a regular text file with one blacklist entry per line==你可以导出黑名单到一个文本文件中,且每行都仅有一个黑名单词条
This file will not contain any additional information==此文件不会包含详细信息
"Export list as text"=="导出名单到文本"
#-----------------------------
#File: BlacklistTest_p.html
#---------------------------
Blacklist Test==黑名单测试
Used Blacklist engine:==使用的黑名单引擎:
Test list:==测试黑名单:
"Test"=="测试"
The tested URL was==被测试的地址是
It is blocked for the following cases:==在下列情况下,它会被阻止:
Crawling==爬取中
#DHT==DHT
News==新闻
Proxy==代理
Search==搜索
Surftips==上网技巧
#-----------------------------
#File: Blacklist_p.html
#---------------------------
Blacklist Administration==黑名单管理
This function provides an URL filter to the proxy; any blacklisted URL is blocked==此功能为代理提供地址过滤;任何列入黑名单的地址都会被阻止
from being loaded. You can define several blacklists and activate them separately.==加载。你可以定义多个黑名单并分别激活它们。
You may also provide your blacklist to other peers by sharing them; in return you may==你也可以提供你自己的黑名单列表给其他人;
collect blacklist entries from other peers==同样,其他人也能将黑名单列表共享给你
Select list to edit:==选择列表进行编辑:
Add URL pattern==添加地址规则
Edit list==编辑列表
The right '*', after the '/', can be replaced by a==在'/'之后的右边'*'可以被替换为
>regular expression<==>正则表达式<
#(slow)==(慢)
"set"=="收集"
The right '*'==右边的'*'
Used Blacklist engine:==使用的黑名单引擎:
Active list:==激活列表:
No blacklist selected==未选中黑名单
Select list:==选中黑名单:
not shared::shared==未共享::已共享
"select"=="选择"
Create new list:==创建:
"create"=="创建"
Settings for this list==设置
"Save"=="保存"
Share/don't share this list==共享/不共享此名单
Delete this list==删除
Edit this list==编辑
These are the domain name/path patterns in==这些域名/路径规则来自
Blacklist Pattern==黑名单规则
Edit selected pattern(s)==编辑选中规则
Delete selected pattern(s)==删除选中规则
Move selected pattern(s) to==移动选中规则
#You can select them here for deletion==你可以从这里选择要删除的项
Add new pattern:==添加新规则:
"Add URL pattern"=="添加地址规则"
The right '*', after the '/', can be replaced by a <a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">regular expression</a>.== 在 '/' 后边的 '*' ,可用<a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">正则表达式</a>表示.
#domain.net/fullpath<==domain.net/绝对路径<
#>domain.net/*<==>domain.net/*<
#*.domain.net/*<==*.domain.net/*<
#*.sub.domain.net/*<==*.sub.domain.net/*<
#sub.domain.*/*<==sub.domain.*/*<
#domain.*/*<==domain.*/*<
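# Illustrative note (not a translation entry): a blacklist pattern is
# host-part/path-part, as in the examples above; with the regular-expression
# option, an assumed pattern like
#   *.example.com/.*\.gif
# would block every GIF path on example.com and its subdomains.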
#was removed from blacklist==已从黑名单中移除
#was added to the blacklist==已添加到黑名单
Activate this list for==为以下词条激活此名单
Show entries:==显示词条:
Entries per page:==页面词条:
Edit existing pattern(s):==编辑现有规则:
"Save URL pattern(s)"=="保存地址规则"
#-----------------------------
#File: Blog.html
#---------------------------
by==由
Comments</a>==评论</a>
>edit==>编辑
>delete==>删除
Edit<==编辑<
previous entries==前一个词条
next entries==下一个词条
new entry==新词条
import XML-File==导入XML文件
export as XML==导出到XML文件
Comments</a>==评论</a>
Blog-Home==博客主页
Author:==作者:
Subject:==标题:
Text:==文本:
You can use==你可以用
Yacy-Wiki Code==YaCy-百科代码
here.==这儿.
Comments:==评论:
deactivated==无效
>activated==>有效
moderated==已审核
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
>Preview==>预览
No changes have been submitted so far==未作出任何改变
Access denied==拒绝访问
To edit or create blog-entries you need to be logged in as Admin or User who has Blog rights.==要编辑或创建博客词条,你需要以管理员或拥有博客权限的用户身份登录.
Are you sure==确定
that you want to delete==要删除:
Confirm deletion==确定删除
Yes, delete it.==是, 删除.
No, leave it.==不, 保留.
Import was successful!==导入成功!
Import failed, maybe the supplied file was no valid blog-backup?==导入失败, 可能提供的文件不是有效的博客备份?
Please select the XML-file you want to import:==请选择你想导入的XML文件:
#-----------------------------
#File: BlogComments.html
#---------------------------
by==由
Comments</a>==评论</a>
Login==登录
Blog-Home==博客主页
delete</a>==删除</a>
allow</a>==允许</a>
Author:==作者:
Subject:==标题:
#Text:==Text:
You can use==你可以用
Yacy-Wiki Code==YaCy-百科代码
here.==在这里.
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
#-----------------------------
#File: Bookmarks.html
#---------------------------
start autosearch of new bookmarks==开始自动搜索新书签
This starts a search of new or modified bookmarks since startup==开始搜索自从启动以来新的或修改的书签
Every peer online will be ask for results.==每个在线的节点都会被索要结果。
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>.==要查看所有API的列表请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>。
To see a list of all APIs==要查看所有API的列表请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>。
YaCy '#[clientname]#': Bookmarks==YaCy '#[clientname]#': 书签
The bookmarks list can also be retrieved as RSS feed. This can also be done when you select a specific tag.==书签列表也能用作RSS订阅.当你选择某个标签时你也可执行这个操作.
Click the API icon to load the RSS from the current selection.==点击API图标以从当前选择书签中载入RSS.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==获取所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
<h3>Bookmarks==<h3>书签
Bookmarks (==书签(
Login==登录
List Bookmarks==显示书签
Add Bookmark==添加书签
Import Bookmarks==导入书签
Import XML Bookmarks==导入XML书签
Import HTML Bookmarks==导入HTML书签
"import"=="导入"
Default Tags:==默认标签:
imported==已导入
Edit Bookmark==编辑书签
#URL:==URL:
Title:==标题:
Description:==描述:
Folder (/folder/subfolder):==目录(/目录/子目录):
Tags (comma separated):==标签(以逗号隔开):
>Public:==>公共的:
yes==是
no==否
Bookmark is a newsfeed==书签是新闻订阅点
"create"=="创建"
"edit"=="编辑"
File:==文件:
import as Public==导入为公有
"private bookmark"=="私有书签"
"public bookmark"=="公共书签"
Tagged with==关键词:
'Confirm deletion'=='确认删除'
Edit==编辑
Delete==删除
Folders==目录
Bookmark Folder==书签目录
Tags==标签
Bookmark List==书签列表
previous page==前一页
next page==后一页
All==所有
Show==显示
Bookmarks per page==书签/每页
#unsorted==默认排序
#-----------------------------
#File: Collage.html
#---------------------------
Image Collage==图像拼贴
Private Queue==私有队列
Public Queue==公共队列
#-----------------------------
#File: compare_yacy.html
#---------------------------
Websearch Comparison==网页搜索对比
Left Search Engine==左侧引擎
Right Search Engine==右侧引擎
Query==查询
"Compare"=="比较"
Search Result==结果
#-----------------------------
#File: ConfigAccountList_p.html
#---------------------------
User List==用户列表
User Accounts==用户账户
User==用户
First name==名字
Last name==姓氏
Address==地址
Last Access==最近访问
Rights==权限
Time==时间
Traffic==流量
#-----------------------------
#File: ConfigAccounts_p.html
#---------------------------
Username too short. Username must be &gt;= 4 Characters.==用户名太短。 用户名必须&gt;= 4 个字符.
Username already used (not allowed).==用户名已被使用(不允许).
Username too short. Username must be ==用户名太短. 用户名必须
User Administration==用户管理
User created:==用户已创建:
User changed:==用户已改变:
Generic error==一般错误
Passwords do not match==密码不匹配
Username too short. Username must be >= 4 Characters==用户名太短, 至少为4个字符
No password is set for the administration account==管理员账户未设置密码
Please define a password for the admin account==请设置一个管理员密码
#Admin Account
Admin Account==管理员
Access from localhost without account==本地匿名访问
Access to your peer from your own computer (localhost access) is granted with administrator rights. No need to configure an administration account.==从你自己的计算机访问你的节点(localhost访问)将被授予管理员权限。无需配置管理员帐户。
This setting is convenient but less secure than using a qualified admin account.==此设置很方便,但比使用合格的管理员帐户安全性低.
Please use with care, notably when you browse untrusted and potentially malicious websites while running your YaCy peer on the same computer.==请谨慎使用,尤其是当你在同一台计算机上运行YaCy节点并浏览不受信任和可能有恶意的网站时.
Access only with qualified account==只允许授权用户访问
This is required if you want a remote access to your peer, but it also hardens access controls on administration operations of your peer.==如果你希望远端访问你的节点,则这是必需的,但它也会加强节点管理操作的访问控制.
Peer User:==节点用户:
New Peer Password:==新节点密码:
Repeat Peer Password:==重复节点密码:
"Define Administrator"=="设置管理员账户"
#Access Rules
>Access Rules<==>访问规则<
Protection of all pages: if set to on==保护所有页面:如果设置为开启
access to all pages need authorization==访问所有页面需要授权
if off, only pages with "_p" extension are protected==如果关闭只有扩展名为“_p”的页面才受保护
Set Access Rules==设置访问规则
#User Accounts
User Accounts==用户账户
Select user==选择用户
New user==新用户
or goto user==或转到用户
>account list<==>账户列表<
Edit User==编辑用户
Delete User==删除用户
Edit current user:==编辑当前用户:
Username</label>==用户名</label>
Password</label>==密码</label>
Repeat password==重复密码
First name==名
Last name==姓
Address==地址
Rights==权限
</body>==<script>window.onload = function () {$("label:contains('Wiki Admin right')").text('百科管理权'); $("label:contains('Upload right')").text('上传权'); $("label:contains('Download right')").text('下载权'); $("label:contains('Admin right')").text('管理权'); $("label:contains('Proxy usage right')").text('代理使用权'); $("label:contains('Blog right')").text('博客权'); $("label:contains('Bookmark right')").text('书签权'); $("label:contains('Extended Search right')").text('拓展搜索权');}</script></body>
Timelimit==时限
Time used==已用时
Save User==保存用户
#-----------------------------
#File: ConfigAppearance_p.html
#---------------------------
Appearance and Integration==外观整合
You can change the appearance of the YaCy interface with skins.==你可以在这里修改YaCy的外观界面.
The selected skin and language also affects the appearance of the search page.==选择的皮肤和语言也会影响到搜索页面的外观.
If you <a href="ConfigPortal_p.html">create a search portal with YaCy</a> then you can==如果你<a href="ConfigPortal_p.html">创建YaCy门户</a>,
change the appearance of the search page here.==那么你能在<a href="ConfigPortal_p.html">这里</a> 改变搜索页面的外观.
Skin Selection==选择皮肤
Select one of the default skins. <b>After selection it might be required to reload the web page while holding the shift key to refresh cached style files.</b>==选择一个默认皮肤。<b>选择后重新加载网页可能需要在按住shift键的同时刷新缓存的样式文件。</b>
Select one of the default skins, download new skins, or create your own skin.==选择一个默认皮肤, 下载新皮肤或者创建属于你自己的皮肤.
Current skin==当前皮肤
Available Skins==可用皮肤
"Use"=="使用"
"Delete"=="删除"
>Skin Color Definition<==>改变皮肤颜色<
The generic skin 'generic_pd' can be configured here with custom colors:==能在这里修改皮肤'generic_pd'的颜色:
>Background<==>背景<
>Text<==>文本<
>Legend<==>说明<
>Table&nbsp;Header<==>表格&nbsp;头部<
>Table&nbsp;Item<==>表格&nbsp;词条<
>Table&nbsp;Item&nbsp;2<==>表格&nbsp;词条&nbsp;2<
>Table&nbsp;Bottom<==>表格&nbsp;底部<
>Border&nbsp;Line<==>边界&nbsp;线<
>Sign&nbsp;'bad'<==>符号&nbsp;'坏'<
>Sign&nbsp;'good'<==>符号&nbsp;'好'<
>Sign&nbsp;'other'<==>符号&nbsp;'其他'<
>Search&nbsp;Headline<==>搜索&nbsp;标题<
>Search&nbsp;URL==>搜索&nbsp;地址
hover==悬浮
"Set Colors"=="设置颜色"
>Skin Download<==>下载皮肤<
Skins can be installed from download locations==皮肤可以从下载位置安装
Install new skin from URL==从URL安装皮肤
Use this skin==使用这个皮肤
"Install"=="安装"
Make sure that you only download data from trustworthy sources. The new Skin file==确保你的皮肤文件是从可靠源获得. 如果存在相同文件
might overwrite existing data if a file of the same name exists already.==, 新皮肤会覆盖旧的.
>Unable to get URL:==>无法打开链接:
Error saving the skin.==保存皮肤时出错.
#-----------------------------
#File: ConfigBasic.html
#---------------------------
Your port has changed. Please wait 10 seconds.==你的端口已更改。 请等待10秒。
Your browser will be redirected to the new <a href="http://#[host]#:#[port]#/ConfigBasic.html">location</a> in 5 seconds.==你的浏览器将在5秒内重定向到新的<a href="http://#[host]#:#[port]#/ConfigBasic.html">位置</a>。
The peer port was changed successfully.==节点端口已经成功修改。
Set by system property==由系统属性设置
https enabled==https启用
Configure your router for YaCy using UPnP:==使用UPnP为你的路由器配置YaCy:
on port==在端口
Access Configuration==访问设置
Basic Configuration==基本设置
Your YaCy Peer needs some basic information to operate properly==你的YaCy节点需要一些基本信息才能有效工作
Select a language for the interface==选择界面语言
Browser==浏览器
Use the browser preferred language if available==如果可用就使用浏览器偏好的语言
Use Case: what do you want to do with YaCy:==用例:你想用YaCy做什么:
Community-based web search==基于社区的网络搜索
Join and support the global network 'freeworld', search the web with an uncensored user-owned search network==加入并支持全球网络'freeworld',使用不受审查、用户自有的搜索网络搜索网页
Search portal for your own web pages==个人网站的搜索门户
Your YaCy installation behaves independently from other peers and you define your own web index by starting your own web crawl. This can be used to search your own web pages or to define a topic-oriented search portal.==你的YaCy安装独立于其他节点,你可以通过开始自己的网络爬取来创建自己的网络索引。这可用于搜索你的个人网站或创建专题搜索门户。
Files may also be shared with the YaCy server, assign a path here:==你也能与YaCy服务器共享内容, 在这里指定路径:
This path can be accessed at ==可以通过以下链接访问
Use that path as crawl start point.==将此路径作为爬取起点。
Intranet Indexing==内网索引
Create a search portal for your intranet or web pages or your (shared) file system.==为内网或网页或(共享)文件系统创建搜索门户。
URLs may be used with http/https/ftp and a local domain name or IP, or with an URL of the form==URL可以是http/https/ftp加上本地域名或IP,也可以是下面形式的URL
or smb:==或者smb:
Your peer name has not been customized; please set your own peer name==你的节点尚未命名, 请命名它
You may change your peer name==你可以改变你的节点名称
Peer Name:==节点名称:
Your peer can be reached by other peers==外部能访问你的节点
Your peer cannot be reached from outside==外部不能访问你的节点
which is not fatal, but would be good for the YaCy network==此举不是强制的但有利于YaCy网络
please open your firewall for this port and/or set a virtual server option in your router to allow connections on this port.==请在防火墙中为此端口放行,和/或在路由器中设置虚拟服务器选项,以允许此端口上的连接。
Opening a router port is <i>not</i> a YaCy-specific task;==打开一个路由器端口不是一个YaCy特定的任务;
you can see instruction videos everywhere in the internet, just search for <a href="http://www.youtube.com/results?search_query=Open+Ports+on+a+Router">Open Ports on a &lt;our-router-type&gt; Router</a> and add your router type as search term.==你可以在互联网上随处找到说明视频,只需搜索<a href="http://www.youtube.com/results?search_query=Open+Ports+on+a+Router">在&lt;我的路由器类型&gt;路由器上打开端口</a>,并添加你的路由器类型作为搜索词。
However: if you fail to open a router port, you can nevertheless use YaCy with full functionality, the only function that is missing is on the side of the other YaCy users because they cannot see your peer.==但是:如果你无法打开路由器端口,你仍然可以使用YaCy的全部功能;唯一缺失的功能是对其他YaCy用户而言的,因为他们无法看到你的节点。
Peer Port:==节点端口:
Configure your router for YaCy:==设置本机路由:
Configuration was not successful. This may take a moment.==配置未成功。这可能需要一点时间。
Set Configuration==保存设置
What you should do next:==下一步你该做的:
Your basic configuration is complete! You can now (for example)==配置成功, 你现在可以
just <==开始<
start an uncensored search==自由地搜索了
start your own crawl</a> and contribute to the global index, or create your own private web index==开始你的索引</a>并将其贡献给全球索引, 或者创建你的私有索引
set a personal peer profile</a> (optional settings)==设置个人节点资料</a> (可选项)
monitor at the network page</a> what the other peers are doing==监控网络页面</a>, 以及其他节点的活动
Your Peer name is a default name; please set an individual peer name.==你的节点名称为系统默认,请另外设置一个名称。
You did not set a user name and/or a password.==你未设置用户名和/或密码。
Some pages are protected by passwords.==一些页面受密码保护。
You should set a password at the <a href="ConfigAccounts_p.html">Accounts Menu</a> to secure your YaCy peer.</p>::==你可以在 <a href="ConfigAccounts_p.html">账户菜单</a> 设置密码, 从而加强你的YaCy节点安全性。</p>::
You did not open a port in your firewall or your router does not forward the server port to your peer.==你未在防火墙中打开端口,或者你的路由器不能与服务器端口建立有效链接。
This is needed if you want to fully participate in the YaCy network.==如果你想完全加入YaCy网络, 此项是必须的。
You can also use your peer without opening it, but this is not recomended.==不开放端口你也能使用你的节点, 但是不推荐。
#-----------------------------
#File: ConfigHeuristics_p.html
#---------------------------
Heuristics Configuration==启发式配置
A <a href="http://en.wikipedia.org/wiki/Heuristic" target="_blank">heuristic</a> is an 'experience-based technique that help in problem solving, learning and discovery' (wikipedia).==<a href="http://en.wikipedia.org/wiki/Heuristic" target="_blank">启发式</a>是一种“基于经验的技术,有助于解决问题,学习和发现”
search-result: shallow crawl on all displayed search results==搜索结果:浅度爬取所有显示的搜索结果
When a search is made then all displayed result links are crawled with a depth-1 crawl.==当进行搜索时,所有显示的结果链接都会以深度1进行爬取。
"Save"=="保存"
"add"=="添加"
>new<==>新建<
>delete<==>删除<
>Comment<==>评论<
>Title<==>标题<
>Active<==>激活<
>copy &amp; paste a example config file<==>复制&amp; 粘贴一个示例配置文件<
Alternatively you may==或者你可以
To find out more about OpenSearch see==要了解关于OpenSearch的更多信息请参阅
20 results are taken from remote system and loaded simultanously, parsed and indexed immediately.==20个结果从远端系统中获取并同时加载,立即解析并创建索引.
When using this heuristic, then every new search request line is used for a call to listed opensearch systems.==使用这种启发式时每个新的搜索请求行都用于调用列出的opensearch系统。
This means: right after the search request every page is loaded and every page that is linked on this page.==这意味着:在搜索请求之后,就开始加载结果的每个页面及每个页面上的链接。
If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中'添加为全球爬取作业'则要爬取的页面将被添加到全球爬取队列中其他远端YaCy节点可能会帮助爬取这些页面
Default is to add the links to the local crawl queue (your peer crawls the linked pages).==默认是将链接添加到本地爬网队列你的YaCy爬取链接的页面
add as global crawl job==添加为全球爬取作业
opensearch load external search result list from active systems below==opensearch从下面的活动系统加载外部搜索结果列表
Available/Active Opensearch System==可用/激活Opensearch系统
Url <small>(format opensearch==Url <small>(格式为opensearch
Url template syntax==网址模板语法
"reset to default list"=="重置为默认列表"
"discover from index"=="从索引中发现"
start background task, depending on index size this may run a long time==开始后台任务,这取决于索引的大小,这可能会运行很长一段时间
With the button "discover from index" you can search within the metadata of your local index (Web Structure Index) to find systems which support the Opensearch specification.==使用“从索引发现”按钮你可以在本地索引Web结构索引的元数据中搜索以查找支持Opensearch规范的系统。
The task is started in the background. It may take some minutes before new entries appear (after refreshing the page).==任务在后台启动。 出现新词条可能需要几分钟时间(在刷新页面之后)。
"switch Solr fields on"=="开关Solr字段"
('modify Solr Schema')=='修改Solr模式'
located in <i>defaults/heuristicopensearch.conf</i> to the DATA/SETTINGS directory.==位于DATA / SETTINGS目录的<i> defaults / heuristicopensearch.conf </i>中。
For the discover function the <i>web graph</i> option of the web structure index and the fields <i>target_rel_s, target_protocol_s, target_urlstub_s</i> have to be switched on in the <a href="IndexSchema_p.html?core=webgraph">webgraph Solr schema</a>.==对于发现功能,必须在<a href="IndexSchema_p.html?core=webgraph">webgraph Solr模式</a>中开启Web结构索引的<i>网图</i>选项以及字段<i>target_rel_s, target_protocol_s, target_urlstub_s</i>。
20 results are taken from remote system and loaded simultanously==20个结果从远端系统中获取并同时加载立即解析并索引
When using this heuristic==使用这种启发式时每个新的搜索请求行都用于调用列出的opensearch系统。
start background task==开始后台任务,这取决于索引的大小,这可能会运行很长一段时间
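# Illustrative note (not a translation entry): an opensearch URL template as
# referenced above uses placeholders from the OpenSearch specification, e.g.
# (assumed endpoint):
#   https://example.com/search?q={searchTerms}&start={startIndex?}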
The search heuristics that can be switched on here are techniques that help the discovery of possible search results based on link guessing, in-search crawling and requests to other search engines.==你可以在这里开启启发式搜索, 通过猜测链接, 嵌套搜索和访问其他搜索引擎, 从而找到更多符合你期望的结果.
When a search heuristic is used, the resulting links are not used directly as search result but the loaded pages are indexed and stored like other content.==使用启发式搜索时,结果链接不会被直接用作搜索结果,而是像其他内容一样对加载的页面进行索引和存储。
This ensures that blacklists can be used and that the searched word actually appears on the page that was discovered by the heuristic.==这确保了黑名单可用,并且搜索词确实出现在启发式发现的页面上。
The success of heuristics are marked with an image==启发式搜索找到的结果会被特定图标标记
heuristic:&lt;name&gt;==启发式:&lt;名称&gt;
#(redundant)==(redundant)
(new link)==(新链接)
below the favicon left from the search result entry:==位于搜索结果词条左侧的网站图标下方:
The search result was discovered by a heuristic, but the link was already known by YaCy==搜索结果通过启发式搜索, 且链接已知
The search result was discovered by a heuristic, not previously known by YaCy==搜索结果通过启发式搜索, 且链接未知
'site'-operator: instant shallow crawl=='站点'-操作符: 即时浅爬取
When a search is made using a 'site'-operator (like: 'download site:yacy.net') then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'站点'操作符搜索时(比如:'download site:yacy.net'),该操作符指定的服务器会立即被爬取,爬取限制在该服务器内且深度为1。
That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==意即:在搜索请求发出后,该服务器的门户页面会被立即载入,同时载入此页面上链接到同一服务器页面的每个链接。
Because this 'instant crawl' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search (after a small pause of some seconds).==因为'即时爬取'必须遵守robots.txt以及相邻两次页面访问的最小间隔时间,所以这个启发式选项相当慢,但在第二次搜索时(稍等几秒后)可能发现所有想要的搜索结果。
#-----------------------------
#File: ConfigHTCache_p.html
#---------------------------
Hypertext Cache Configuration==超文本缓存配置
The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==超文本缓存存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存.
The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是队列式的: 队列满时, 会删除旧内容, 从而加入新内容.
#HTCache Configuration
HTCache Configuration==超文本缓存配置
Cache hits==缓存命中
The path where the cache is stored==缓存存储路径
The current size of the cache==当前缓存容量
>#[actualCacheSize]# MB for #[actualCacheDocCount]# files, #[docSizeAverage]# KB / file in average==>#[actualCacheSize]# MB,共 #[actualCacheDocCount]# 个文件,平均每个文件 #[docSizeAverage]# KB
The maximum size of the cache==缓存最大容量
Compression level==压缩级别
Concurrent access timeout==并行存取超时
milliseconds==毫秒
"Set"=="设置"
#Cleanup
Cleanup==清除
Cache Deletion==删除缓存
Delete HTTP &amp; FTP Cache==删除HTTP &amp; FTP 缓存
Delete robots.txt Cache==删除robots.txt缓存
"Delete"=="删除"
#-----------------------------
#File: ConfigLanguage_p.html
#---------------------------
Simple Editor==简单编辑器
Download Language File==下载语言文件
to add untranslated text==用于添加仍未翻译文本
Supported formats are the internal language file (extension .lng) or XLIFF (extension .xlf) format.==支持的格式是内部语言文件(扩展名.lng)或XLIFF(扩展名.xlf)格式.
Language selection==语言选择
You can change the language of the YaCy-webinterface with translation files.==你可以使用翻译文件来改变YaCy操作界面的语言.
Current language</label>==当前语言</label>
Author(s) (chronological)</label>==作者(按时间排序)</label>
Send additions to maintainer</em>==向维护者提交补丁</em>
Available Languages</label>==可用语言</label>
Install new language from URL==从URL安装新语言
Use this language==使用此语言
"Use"=="使用"
"Delete"=="删除"
"Install"=="安装"
Unable to get URL:==打开链接失败:
Error saving the language file.==保存语言文件时发生错误.
Make sure that you only download data from trustworthy sources. The new language file==确保你的数据是从可靠源下载. 如果存在相同文件名
might overwrite existing data if a file of the same name exists already.==, 旧文件将被覆盖.
#-----------------------------
#File: ConfigNetwork_p.html
#---------------------------
Network Configuration==网络设置
No changes were made!==未作出任何改变!
Accepted Changes==接受改变
Inapplicable Setting Combination==设置未被应用
For P2P operation, at least DHT distribution or DHT receive (or both) must be set. You have thus defined a Robinson configuration==关于P2P操作,必须至少勾选DHT分发或DHT接收(或两者)。因此,你已定义了一个漂流配置
Global Search in P2P configuration is only allowed, if index receive is switched on. You have a P2P configuration, but are not allowed to search other peers.==P2P配置中的全局搜索仅在打开接受索引时才被允许。你已有P2P配置但不被允许搜索其他节点。
#Network and Domain Specification
Network and Domain Specification==确定网络和域
YaCy can operate a computing grid of YaCy peers or as a stand-alone node.==YaCy可以作为由多个YaCy节点组成的计算网格运行,也可以作为独立节点运行.
To control that all participants within a web indexing domain have access to the same domain,==要控制Web索引域中的所有参与者都可以访问同一个域
this network definition must be equal to all members of the same YaCy network.==此网络定义必须对同一YaCy网络的所有成员都相同。
>Network Definition<==>网络定义<
Enter custom URL...==输入自定义网址...
Remote Network Definition URL==远端网络定义地址
Network Nick==网络别名
Long Description==详细描述
Indexing Domain==索引域
#DHT==DHT
"Change Network"=="改变网络"
#Distributed Computing Network for Domain
Distributed Computing Network for Domain==域内分布式计算网络
Enable Peer-to-Peer Mode to participate in the global YaCy network==开启点对点模式从而加入全球搜索网
or if you want your own separate search cluster with or without connection to the global network.==或者不论加不加入全球YaCy网你都可以打造个人搜索群。
Enable 'Robinson Mode' for a completely independent search engine instance,==开启漂流模式获得完全独立的搜索引擎实例,
without any data exchange between your peer and other peers.==且不会与其他节点有任何数据交换。
#Peer-to-Peer Mode
Peer-to-Peer Mode==P2P模式
>Index Distribution==>索引分发
This enables automated, DHT-ruled Index Transmission to other peers==自动向其他节点传递服从DHT规则的索引
>enabled==>开启
disabled during crawling==关闭 (在爬取时)
disabled during indexing==关闭 (在索引时)
>Index Receive==>接收索引
Accept remote Index Transmissions==允许远端索引输入
This works only if you have a senior peer. The DHT-rules do not work without this function==仅当你是高级节点时有效。 如果没有勾选, 服从DHT规则的索引不会输入
>reject==>拒绝
accept transmitted URLs that match your blacklist==允许 (你黑名单中的地址)
>allow==>允许
deny remote search==拒绝 (远端搜索)
#Robinson Mode
>Robinson Mode==>漂流模式
If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果你的节点运行在'漂流模式', 你能在不与其他节点交换数据的情况下进行搜索
There is no index receive and no index distribution between your peer and any other peer==你不会与其他节点进行索引传递
In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==对于漂流群模式,一样会应答那个群内远端节点的爬取请求
>Private Peer==>私有节点
Your search engine will not contact any other peer, and will reject every request==你的搜索引擎不会与其他节点联系, 并会拒绝每一个外部请求
>Public Peer==>公共节点
You are visible to other peers and contact them to distribute your presence==对于其他节点你是可见的, 可以与他们进行通信以分发你的索引
Your peer does not accept any outside index data, but responds on all remote search requests==你的节点不接受任何外部索引数据, 但是会回应所有外部搜索请求
>Public Cluster==>公共群
Your peer is part of a public cluster within the YaCy network==你的节点属于YaCy网络内的一个公共群
Index data is not distributed, but remote crawl requests are distributed and accepted==索引数据不会被分发, 但是外部的爬取请求会被分发和接受
Search requests are spread over all peers of the cluster, and answered from all peers of the cluster==搜索请求在当前群内的所有节点中传播, 并且这些节点同样会作出回应
List of .yacy or .yacyh - domains of the cluster: (comma-separated)==群内.yacy 或者.yacyh 的域名列表: (以逗号隔开)
>Peer Tags==>节点标签
When you allow access from the YaCy network, your data is recognized using keywords==当你允许YaCy网络的访问时, 你的数据会以关键字形式表示
Please describe your search portal with some keywords (comma-separated)==请用关键字描述你的搜索门户 (以逗号隔开)
If you leave the field empty, no peer asks your peer. If you fill in a '*', your peer is always asked.==如果此部分留空, 那么你的节点不会被其他节点访问. 如果内容是 '*' 则标示你的节点永远被允许访问.
"Save"=="保存"
#Outgoing communications encryption
Outgoing communications encryption==传出通信加密
Protocol operations encryption==协议操作加密
Prefer HTTPS for outgoing connexions to remote peers==到远端节点的传出连接首选HTTPS
When==当
is enabled on remote peers==在远端节点开启时
it should be used to encrypt outgoing communications with them (for operations such as network presence, index transfer, remote crawl==它应该被用来加密与它们的传出通信(操作:网络存在、索引传输、远端爬行
Please note that contrary to strict TLS==请注意与严格的TLS相反
certificates are not validated against trusted certificate authorities==证书不会由受信任的证书颁发机构进行验证
thus allowing YaCy peers to use self-signed certificates==从而允许YaCy节点使用自签名证书
Note also that encryption of remote search queries is configured with a dedicated setting in the <a href="ConfigPortal_p.html">Config Portal</a> page.==另请注意,请在<a href="ConfigPortal_p.html">门户配置</a>页面中设置远端搜索加密功能。
#-----------------------------
#File: ConfigParser_p.html
#---------------------------
Parser Configuration==解析器配置
Content Parser Settings==内容解析器设置
With this settings you can activate or deactivate parsing of additional content-types based on their MIME-types.==此设置能根据文件类型(MIME)开启/关闭额外的内容解析.
For a detailed description of the various MIME-types take a look at==关于MIME的详细描述请参考
If you want to test a specific parser you can do so using the==如果要测试特定的解析器,可以使用
>File Viewer<==>文件查看器<
>Extension<==>拓展名<
>Mime-Type<==>Mime-类型<
"Submit"=="提交"
PDF Parser Attributes==PDF解析器属性
This is an experimental setting which makes it possible to split PDF documents into individual index entries==这是一个实验设置可以将PDF文档拆分为单独的索引词条
Every page will become a single index hit and the url is artifically extended with a post/get attribute value containing the page number as value==每个页面都将成为单个索引命中,并且地址会被人为扩展一个以页码为值的post/get属性
Split PDF==分割PDF
Property Name==属性名
#-----------------------------
#File: ConfigPortal_p.html
#---------------------------
Integration of a Search Portal==搜索门户设置
If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果你想将YaCy作为你的网站搜索门户, 你可能需要在这改变搜索页面的图标和信息。
The search page may be customized.==搜索页面可以自由定制。
You can change the 'corporate identity'-images, the greeting line==你可以改变'企业标志'图片,问候语
and a link to a home page that is reached when the 'corporate identity'-images are clicked.==和一个点击'企业标志'图像后转到主页的超链接。
To change also colours and styles use the <a href="ConfigAppearance_p.html">Appearance Servlet</a> for different skins and languages.==若要改变颜色和风格,请到<a href="ConfigAppearance_p.html">外观选项</a>选择你喜欢的皮肤和语言。
Greeting Line<==问候语<
URL of Home Page<==主页链接<
URL of a Small Corporate Image<==企业形象小图地址<
URL of a Large Corporate Image<==企业形象大图地址<
Alternative text for Corporate Images<==企业形象代替文字<
Enable Search for Everyone==对任何人开启搜索
Search is available for everyone==任何人可用搜索
Only the administator is allowed to search==只有管理员可以搜索
Show Navigation Bar on Search Page==在搜索页显示导航栏
Show Navigation Top-Menu==显示顶级导航菜单
no link to YaCy Menu (admin must navigate to /Status.html manually)==没有到YaCy菜单的链接(管理员必须手动访问 /Status.html)
Show Advanced Search Options on Search Page==在搜索页显示高级搜索选项
Show Advanced Search Options on index.html&nbsp;==在index.html显示高级搜索选项?
do not show Advanced Search==不显示高级搜索
Media Search==媒体搜索
>Extended==>拓展
>Strict==>严格
Control whether media search results are as default strictly limited to indexed documents matching exactly the desired content domain==控制媒体搜索结果是否默认严格限制为与所需内容域完全匹配的索引文档
(images, videos or applications specific)==(图片,视频或具体应用)
or extended to pages including such medias (provide generally more results, but eventually less relevant).==或扩展到包括此类媒体的网页(通常提供更多结果,但相关性更弱)。
Remote results resorting==远端结果重新排序
>On demand, server-side==>根据需要, 服务器侧
Automated, with JavaScript in the browser==自动化, 基于嵌入浏览器的JavaScript代码
Automated results resorting with JavaScript makes the browser load the full result set of each search request.==基于JavaScript的自动结果重新排序使浏览器加载每个搜索请求的完整结果集。
This may lead to high system loads on the server.==这可能会导致服务器上的系统负载过高。
Please check the 'Peer-to-peer search with JavaScript results resorting' section in the <a href="SearchAccessRate_p.html">Local Search access rate</a> configuration page to set up proper limitations on this mode by unauthenticated users.==请查看<a href="SearchAccessRate_p.html">本地搜索访问率</a> 配置页面中的“使用JavaScript对P2P搜索结果重排”部分对未经身份验证的用户使用该模式加以适当限制。
Remote search encryption==远端搜索加密
Prefer https for search queries on remote peers.==首选https用于远端节点上的搜索查询。
When SSL/TLS is enabled on remote peers, https should be used to encrypt data exchanged with them when performing peer-to-peer searches.==在远端节点上启用SSL/TLS时,应使用https来加密在执行P2P搜索时与它们交换的数据。
Please note that contrary to strict TLS, certificates are not validated against trusted certificate authorities (CA), thus allowing YaCy peers to use self-signed certificates.==请注意,与严格TLS相反,证书不会针对受信任的证书颁发机构(CA)进行验证,因此允许YaCy节点使用自签名证书。
>Snippet Fetch Strategy==>摘要提取策略
Speed up search results with this option! (use CACHEONLY or FALSE to switch off verification)==使用此选项加速搜索结果!(使用CACHEONLY或FALSE来关闭验证)
Statistics on text snippets generation can be enabled in the <a href="Settings_p.html?page=debug">Debug/Analysis Settings</a> page.==可以在<a href="Settings_p.html?page=debug">调试/分析设置</a>页面中启用文本摘录生成的统计信息。
NOCACHE: no use of web cache, load all snippets online==NOCACHE:不使用网络缓存,在线加载所有网页摘要
IFFRESH: use the cache if the cache exists and is fresh otherwise load online==IFFRESH:如果缓存存在则使用最新的缓存,否则在线加载
IFEXIST: use the cache if the cache exist or load online==IFEXIST:如果缓存存在则使用缓存,或在线加载
If verification fails, delete index reference==如果验证失败,删除索引参考
CACHEONLY: never go online, use all content from cache.==CACHEONLY:永远不上网,内容只来自缓存。
If no cache entry exist, consider content nevertheless as available and show result without snippet==如果不存在缓存词条,将内容视为可用,并显示没有摘要的结果
FALSE: no link verification and not snippet generation: all search results are valid without verification==FALSE:没有链接验证且没有摘要生成:所有搜索结果在没有验证情况下有效
Link Verification<==链接验证<
Greedy Learning Mode==贪心学习模式
load documents linked in search results,==加载搜索结果中链接的文档,
will be deactivated automatically when index size==将自动停用当索引大小
(see==(见
>Heuristics: search-result<==>启发式:搜索结果<
to use this permanent)==使得它永久性)
Index remote results==索引远端结果
add remote search results to the local index==将远端搜索结果添加到本地索引
( default=on, it is recommended to enable this option ! )==(默认=开启,建议启用此选项!)
Limit size of indexed remote results==限制远端索引结果容量
maximum allowed size in kbytes for each remote search result to be added to the local index==每个远端搜索结果的最大允许大小(以KB为单位)添加到本地索引
for example, a 1000kbytes limit might be useful if you are running YaCy with a low memory setup==例如,如果运行具有低内存设置的YaCy,则1000KB限制可能很有用
Default Pop-Up Page<==默认弹出页面<
>Status Page&nbsp;==>状态页面&nbsp;
>Search Front Page==>搜索首页
>Search Page (small header)==>搜索页面(二级标题)
>Interactive Search Page==>交互搜索页面
Default maximum number of results per page==默认每页最大结果数
Default index.html Page (by forwarder)==默认index.html页面(通过转发器)
Target for Click on Search Results==点击搜索结果时的目标窗口
"_blank" (new window)=="_blank" (新窗口)
"_self" (same window)=="_self" (同一窗口)
"_parent" (the parent frame of a frameset)=="_parent" (父级窗口)
"_top" (top of all frames)=="_top" (置顶)
Special Target as Exception for an URL-Pattern==对某个URL模式例外的特殊目标
&nbsp;Pattern:<==&nbsp;模式:<
Exclude Hosts==排除的服务器
List of hosts that shall be excluded from search results by default but can be included using the site:&lt;host&gt; operator:==默认情况下将被排除在搜索结果之外的服务器列表但可以使用site:&lt;host&gt;操作符包括进来
'About' Column<=='关于'栏<
shown in a column alongside==显示在
with the search result page==搜索结果页侧栏
(Headline)==(标题)
(Content)==(内容)
>You have to==>你必须
>set a remote user/password<==>设置一个远端用户/密码<
to change this options.<==来改变设置。<
Show Information Links for each Search Result Entry==显示搜索结果的链接信息
"searchresult" (a default custom page name for search results)=="搜索结果" (搜索结果页面名称)
"Change Search Page"=="改变搜索页"
"Set to Default Values"=="设为默认值"
The search page can be integrated in your own web pages with an iframe. Simply use the following code:==使用以下代码,将搜索页集成在你的网站中:
This would look like:==示例:
For a search page with a small header, use this code:==对于一个拥有二级标题的页面, 可使用以下代码:
A third option is the interactive search. Use this code:==交互搜索代码:
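# Illustrative note (not a translation entry): a minimal sketch of the iframe
# integration mentioned above, assuming a peer reachable at localhost:8090:
#   <iframe src="http://localhost:8090/yacysearch.html" width="100%" height="600"></iframe>
# The small-header and interactive variants are separate pages on the peer;
# their exact snippets are shown on the ConfigPortal_p.html page itself.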
#-----------------------------
#File: ConfigProfile_p.html
#---------------------------
Your Personal Profile==你的个人资料
You can create a personal profile here, which can be seen by other YaCy-members==你可以在这创建个人资料, 而且对其他YaCy节点可见
or <a href="ViewProfile.html?hash=localhash">in the public</a> using a <a href="ViewProfile.rdf?hash=localhash">FOAF RDF file</a>.==或者<a href="ViewProfile.html?hash=localhash">在公共场所时</a>使用<a href="ViewProfile.rdf?hash=localhash">FOAF RDF 文件</a>.
>Name<==>名字<
Nick Name==昵称
Homepage (appears on every <a href="Supporter.html">Supporter Page</a> as long as your peer is online)==首页(显示在每个<a href="Supporter.html">支持者</a> 页面中, 前提是你的节点在线).
eMail==邮箱
Comment==注释
"Save"=="保存"
You can use <==在这里你可以用<
> here.==>.
#-----------------------------
#File: ConfigProperties_p.html
#---------------------------
Advanced Config==高级设置
Here are all configuration options from YaCy.==这里显示YaCy所有设置.
You can change anything, but some options need a restart, and some options can crash YaCy, if wrong values are used.==你可以改变这里的任何设置,但有些选项需要重启才能生效,有些选项在设置错误时甚至会让YaCy崩溃.
For explanation please look into defaults/yacy.init==详细内容请参考defaults/yacy.init
"Save"=="保存"
"Clear"=="清除"
#-----------------------------
#File: ConfigRobotsTxt_p.html
#---------------------------
Exclude Web-Spiders==拒绝网络爬虫
Here you can set up a robots.txt for all webcrawlers that try to access the webinterface of your peer.==在这里你可以创建一个爬虫协议, 以阻止试图访问你节点网络接口的网络爬虫.
is a voluntary agreement most search-engines (including YaCy) follow.==是一个大多数搜索引擎(包括YaCy)都遵守的协议.
It disallows crawlers to access webpages or even entire domains.==它会阻止网络爬虫进入网页甚至是整个域.
Deny access to==禁止访问
Entire Peer==整个节点
Status page==状态页面
Network pages==网络页面
Surftips==上网技巧
News pages==新闻页面
Blog==博客
Public bookmarks==公共书签
Home Page==首页
File Share==文件共享
Impressum==公司信息
"Save restrictions"=="保存"
Wiki==维基
#-----------------------------
#File: ConfigSearchBox.html
#---------------------------
Integration of a Search Box==搜索框设置
We give information how to integrate a search box on any web page that==如何将一个搜索框集成到任意
calls the normal YaCy search window.==调用YaCy搜索的页面.
Simply use the following code:==使用以下代码:
MySearch== 我的搜索
"Search"=="搜索"
This would look like:==示例:
This does not use a style sheet file to make the integration into another web page with a different style sheet easier.==在这里并没有使用样式文件, 因为这样会比较容易将其嵌入到不同样式的页面里.
You would need to change the following items:==你需要修改以下各项:
Replace the given colors #eeeeee (box background) and #cccccc (box border)==替换已给颜色 #eeeeee (框架背景) 和 #cccccc (框架边框)
Replace the word "MySearch" with your own message==用你想显示的信息替换"我的搜索"
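# Illustrative note (not a translation entry): the search box code referenced
# above is a plain HTML form submitting to the peer's yacysearch.html; a
# minimal sketch, assuming a peer at localhost:8090:
#   <form method="get" action="http://localhost:8090/yacysearch.html">
#     <input type="text" name="query" />
#     <input type="submit" value="Search" />
#   </form>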
#-----------------------------
#File: ConfigSearchPage_p.html
#---------------------------
Search Page<==搜索页<
>Search Result Page Layout Configuration<==>搜索结果页面布局配置<
Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板.选中你希望显示的功能复选框.
To change colors and styles use the <a href="ConfigAppearance_p.html">Appearance</a> menu for different skins.==要改变颜色和样式,使用<a href="ConfigAppearance_p.html">外观</a>菜单以改变皮肤。
Other portal settings can be adjusted in <a href="ConfigPortal_p.html">Generic Search Portal</a> menu.==其他门户网站设置可以在<a href="ConfigPortal_p.html">通用搜索门户</a>菜单中调整.
>Page Template<==>页面模板<
>Toggle navigation<==>切换导航<
>Log in<==>登录<
>userName<==>用户名<
>Search Interfaces<==>搜索界面<
> Administration &raquo;<==> 管理 &raquo;<
>Tag<==>标签<
>Topics<==>主题<
>Cloud<==>云<
>Location<==>位置<
show search results on map==在地图上显示搜索结果
Sorted by descending counts==按计数递减排序
Sorted by ascending counts==按计数递增排序
Sorted by descending labels==按降序标签排序
Sorted by ascending labels==按升序标签排序
>Sort by==>排序
>Descending counts<==>降序计数<
>Ascending counts<==>升序计数<
>Descending labels<==>降序标签<
>Ascending labels<==>升序标签<
>Vocabulary <==>词汇<
>search<==>搜索<
>Text<==>文本<
>Images<==>图片<
>Audio<==>音频<
>Video<==>视频<
>Applications<==>应用<
>more options<==>更多选项<
> Date Navigation<==> 日期导航<
Maximum range (in days)==最大范围 (按照天算)
Maximum days number in the histogram. Beware that a large value may trigger high CPU loads both on the server and on the browser with large result sets.==直方图中的最大天数. 请注意, 较大的值可能会在服务器和具有大结果集的浏览器上触发高CPU负载.
Show websites favicon==显示网站图标
Not showing websites favicon can help you save some CPU time and network bandwidth.==不显示网站图标可以帮助你节省一些CPU时间和网络带宽。
>Title of Result<==>结果标题<
Description and text snippet of the search result==搜索结果的描述和文本摘录
>Tags<==>标签<
>keyword<==>关键词<
>subject<==>主题<
>keyword2<==>关键词2<
>keyword3<==>关键词3<
Max. tags initially displayed==初始显示的最大标签数
(remaining can then be expanded)==(剩下的可以扩展)
42 kbyte<==42kb<
>Metadata<==>元数据<
>Parser<==>解析器<
>Citation<==>引用<
>Pictures<==>图片<
>Cache<==>缓存<
>View via Proxy<==>通过代理查看<
>JPG Snapshot<==>JPG快照<
For this option URL proxy must be enabled.==对于这个选项必须启用URL代理。
menu: System Administration > Advanced Settings==菜单:系统管理>高级设置
Ranking score value, mainly for debug/analysis purpose, configured in <a href="Settings_p.html?page=debug">Debug/Analysis Settings</a>==排名分数值,主要用于调试/分析目的,在<a href="Settings_p.html?page=debug">调试/分析</a>设置中配置
>Add Navigators<==>添加导航器<
Save Settings==保存设置
Set Default Values==重置默认值
#-----------------------------
#File: ConfigUpdate_p.html
#---------------------------
>System Update<==>系统更新<
>changelog<==>更新日志<
> and <==>和<
> RSS feed<==> RSS订阅<
(unsigned)==(未签名)
(signed)==(签名)
add the following line to==将以下行添加到
Manual System Update==系统手动升级
Current installed Release==当前版本
Available Releases==可用版本
"Download Release"=="下载更新"
"Check for new Release"=="检查更新"
Downloaded Releases==已下载
No downloaded releases available for deployment.==无可用更新.
no&nbsp;automated installation on development environments==开发环境中不会自动安装
"Install Release"=="安装更新"
"Delete Release"=="删除更新"
Automatic Update==自动更新
check for new releases, download if available and restart with downloaded release==检查新版本, 如果有则下载, 并用下载的版本重启
"Check + Download + Install Release Now"=="检查 + 下载 + 现在安装"
Download of release #[downloadedRelease]# finished. Restart Initiated.==版本 #[downloadedRelease]# 下载完成. 已开始重启.
No more recent release found.==无最近更新.
Release will be installed. Please wait.==准备安装更新. 请稍等.
You installed YaCy with a package manager.==你使用包管理器安装的YaCy.
To update YaCy, use the package manager:==用包管理器以升级YaCy:
Omitting update because this is a development environment.==因当前为开发环境, 忽略安装升级.
Omitting update because download of release #[downloadedRelease]# failed.==下载 #[downloadedRelease]# 失败, 忽略安装升级.
Automated System Update==系统自动升级
manual update==手动升级
no automatic look-up, updates can be made manually using this interface (see options above)==不自动检查更新, 可以使用此界面手动安装更新(参见上面的选项).
automatic update==自动更新
updates are made within fixed cycles:==每隔一定时间自动检查更新:
Time between lookup==检查周期
hours==小时
Release blacklist==版本黑名单
regex on release number strings==版本号正则表达式
Release type==版本类型
only main releases==仅正式版本
any release including developer releases==任何版本, 包括测试版
Signed autoupdate:==签名升级:
only accept signed files==仅接受签名文件
"Submit"=="提交"
Accepted Changes.==已接受改变.
System Update Statistics==系统升级状况
Last System Lookup==上一次查找更新
never==从未
Last Release Download==最近一次下载更新
Last Deploy==最近一次应用更新
#-----------------------------
#File: ConfigUser_p.html
#---------------------------
User Account Editor==用户账户编辑器
User created: #[username]#==用户已创建: #[username]#
User changed: #[username]#==用户已改变: #[username]#
Generic error.==一般性错误。
Passwords do not match.==密码不匹配。
Username too short. Username must be &gt;= 4 Characters.==用户名太短。用户名至少 &gt;= 4 字符。
Username already used (not allowed).==用户名已存在(不允许)。
Edit current user: #[username]#==编辑当前用户: #[username]#
Username==用户名
Password==密码
Repeat password==重复密码
First name==名字
Last name==姓氏
Address==地址
Rights:==权限:
Timelimit==时间限制
Time used==已用时间
Save User==保存用户
Delete User==删除用户
back to user list==返回用户列表
#-----------------------------
#File: Connections_p.html
#---------------------------
Server Connection Tracking==服务器连接跟踪
Up-Bytes==上行字节
Showing #[numActiveRunning]# active connections from a max. of #[numMax]# allowed incoming connections==正在显示 #[numActiveRunning]# 活动连接,最大允许传入连接 #[numMax]#
Connection Tracking==连接跟踪
Incoming Connections==进入连接
Showing #[numActiveRunning]# active, #[numActivePending]# pending connections from a max. of #[numMax]# allowed incoming connections.==显示 #[numActiveRunning]# 活动, #[numActivePending]# 挂起连接, 最大允许 #[numMax]# 个进入连接.
Protocol</td>==协议</td>
Duration==持续时间
Source IP[:Port]==来源IP[:端口]
Dest. IP[:Port]==目标IP[:端口]
Command</td>==命令</td>
Used==使用的
Close==关闭
Waiting for new request nr.==等待新请求数.
Outgoing Connections==外出连接
Showing #[clientActive]# pooled outgoing connections used as:==显示 #[clientActive]# 个外出链接, 用作:
Duration==持续时间
#ID==ID
#-----------------------------
#File: ContentAnalysis_p.html
#---------------------------
Content Analysis<==内容分析<
These are document analysis attributes.==这些是文档分析属性。
>Double Content Detection<==>重复内容检测<
Double-Content detection is done using a ranking on a 'unique'-Field, named 'fuzzy_signature_unique_b'.==重复内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。
This field is set during parsing and is influenced by two attributes for the <a href="https://lucene.apache.org/solr/5_5_2/solr-core/org/apache/solr/update/processor/TextProfileSignature.html" target="_blank">TextProfileSignature</a> class.==此字段在解析期间设置,并受<a href="https://lucene.apache.org/solr/5_5_2/solr-core/org/apache/solr/update/processor/TextProfileSignature.html" target="_blank">TextProfileSignature</a>类的两个属性影响。
>minTokenLen<==>最小令牌长度<
This is the minimum length of a word which shall be considered as element of the signature. Should be either 2 or 3.==这是一个单词要被视为签名元素所需的最小长度。应为2或3。
>quantRate<==>量化率<
The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number, the less==量化率是参与签名计算的单词数量的度量。
words are used for the signature==数字越大,用于签名的单词就越少。
For minTokenLen = 2 the quantRate value should not be below 0.24; for minTokenLen = 3 the quantRate value must be not below 0.5.==对于最小令牌长度=2,量化率值不应低于0.24;对于最小令牌长度=3,量化率值必须不低于0.5。
"Re-Set to default"=="重置为默认"
"Set"=="设置"
The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number==量化率是参与签名计算的单词数量的度量。数字越高
#-----------------------------
#File: ContentControl_p.html
#---------------------------
Content Control<==内容控制<
Peer Content Control URL Filter==节点内容控制地址过滤器
With this settings you can activate or deactivate content control on this peer==使用此设置你可以激活或取消激活此YaCy节点上的内容控制
Use content control filtering:==使用内容控制过滤:
>Enabled<==>已启用<
Enables or disables content control==启用或禁用内容控制
Use this table to create filter:==使用此表创建过滤器:
Define a table. Default:==定义一个表格. 默认:
Content Control SMW Import Settings==内容控制SMW导入设置
With this settings you can define the content control import settings. You can define a==使用此设置,你可以定义内容控制导入设置. 你可以定义一个
Semantic Media Wiki with the appropriate extensions==语义媒体百科与适当的扩展
SMW import to content control list:==SMW导入到内容控制列表:
Enable or disable constant background synchronization of content control list from SMW (Semantic Mediawiki). Requires restart!==启用或禁用来自SMWSemantic Mediawiki的内容控制列表的恒定后台同步。 需要重启!
SMW import base URL:==SMW导入基URL:
Define base URL for SMW special page "Ask". Example: ==为SMW特殊页面“Ask”定义基础地址.例:
SMW import target table:==SMW导入目标表:
Define import target table. Default: contentcontrol==定义导入目标表. 默认值:contentcontrol
Purge content control list on initial sync:==在初始同步时清除内容控制列表:
Purge content control list on initial synchronisation after startup.==重启后,清除初始同步的内容控制列表.
"Submit"=="提交"
Define base URL for SMW special page "Ask". Example:==为SMW特殊页面“Ask”定义基础地址.例:
#-----------------------------
#File: ContentIntegrationPHPBB3_p.html
#---------------------------
Content Integration: Retrieval from phpBB3 Databases==内容集成: 从phpBB3数据库中导入
It is possible to extract texts directly from mySQL and postgreSQL databases.==可以直接从mySQL和postgreSQL数据库中提取文本.
Each extraction is specific to the data that is hosted in the database.==每次提取都针对数据库中存储的数据.
This interface gives you access to the phpBB3 forums software content.==通过此接口能访问phpBB3论坛软件内容.
If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:==如果从使用phpMyAdmin读取数据库内容, 你可能会用到以下建议:
before importing large database dumps, set==在导入尺寸较大的数据库时,
in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB)==设置phpmyadmin/config.inc.php的内容, 并将你的数据库文件放到 /tmp 目录下(否则不能上传大于2MB的文件)
deselect the partial import flag==取消部分导入
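# Illustrative note (not a translation entry): the phpmyadmin/config.inc.php
# setting alluded to above is typically the upload directory, e.g. (assumed):
#   $cfg['UploadDir'] = '/tmp';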
When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.==导出开始时, 备份文件会生成到 DATA/SURROGATE/in 目录下, 并被索引线程自动获取.
All indexed surrogate files are then moved to DATA/SURROGATE/out and can be re-cycled when an index is deleted.==所有被索引的备份文件都在 DATA/SURROGATE/out 目录下, 并被索引器循环利用.
The URL stub==URL根域名
like https://searchlab.eu==比如链接 https://searchlab.eu
this must be the path right in front of '/viewtopic.php?'==必须在'/viewtopic.php?'前面
Type==数据库
> of database<==> 类型<
use either 'mysql' or 'pgsql'==使用'mysql'或者'pgsql'
Host==数据库
> of the database<==> 服务器名<
of database service==数据库服务
usually 3306 for mySQL==MySQL中通常是3306
Name of the database==服务器
on the host==数据库
Table prefix string==table
for table names==前缀
User==数据库
that can access the database==用户名
Password==给定用户名的
for the account of that user given above==访问密码
Posts per file==导出备份中
in exported surrogates==每个文件拥有的最多帖子数
Check database connection==检查数据库连接
Export Content to Surrogates==导出到备份
Import a database dump==导入数据库
Import Dump==导入
Posts in database==数据库中帖子
first entry==第一个
last entry==最后一个
Info failed:==获取信息失败:
Export successful! Wrote #[files]# files in DATA/SURROGATES/in==导出成功! #[files]# 已写入到 DATA/SURROGATES/in 目录
Export failed:==导出失败:
Import successful!==导入成功!
Import failed:==导入失败:
#-----------------------------
#File: CookieMonitorIncoming_p.html
#---------------------------
Incoming Cookies Monitor==进入Cookies监控器
Cookie Monitor: Incoming Cookies==Cookies监控器: 进入Cookies
This is a list of Cookies that a web server has sent to clients of the YaCy Proxy:==Web服务器已向YaCy代理客户端发送的Cookie:
Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个词条, 总共 #[total]# 条Cookies.
Sending Host==发送中的服务器
Date</td>==日期</td>
Receiving Client==接收中的客户端
>Cookie<==>Cookie<
"Enable Cookie Monitoring"=="开启Cookie监控"
"Disable Cookie Monitoring"=="关闭Cookie监控"
#-----------------------------
#File: CookieMonitorOutgoing_p.html
#---------------------------
Outgoing Cookies Monitor==外出Cookie监控器
Cookie Monitor: Outgoing Cookies==Cookie监控器: 外出Cookie
This is a list of cookies that browsers using the YaCy proxy sent to webservers:==这是使用YaCy代理的浏览器发送给Web服务器的Cookie列表:
Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个词条, 总共 #[total]# 条Cookie.
Receiving Host==接收中的服务器
Date</td>==日期</td>
Sending Client==发送中的客户端
>Cookie<==>Cookie<
"Enable Cookie Monitoring"=="开启Cookie监控"
"Disable Cookie Monitoring"=="关闭Cookie监控"
#-----------------------------
#File: CookieTest_p.html
#---------------------------
Cookie - Test Page==Cookie - 测试页
Here is a cookie test page.==这是一个Cookie测试页.
Just clean it==直接清除
Name:==名称:
Value:==值:
Dear server, set this cookie for me!==亲爱的服务器,请为我设置这个cookie!
Cookies at this browser:==此浏览器中的Cookie:
Cookies coming to server:==发送到服务器的Cookie:
Cookies server sent:==服务器发出的Cookie:
YaCy is a GPL'ed project==YaCy是一个遵循GPL协议的项目
with the target of implementing a P2P-based global search engine.==目标是实现一个基于P2P的全球搜索引擎.
Architecture (C) by==架构 (C) 作者
#-----------------------------
#File: CrawlCheck_p.html
#---------------------------
Crawl Check==爬取检查
This pages gives you an analysis about the possible success for a web crawl on given addresses.==通过本页面,你可以分析在特定地址上进行网络爬取的可能性。
List of possible crawl start URLs==可能的起始爬取地址列表
"Check given urls"=="检查给定的网址"
>Analysis<==>分析<
>Access<==>访问<
>Robots<==>爬虫协议<
>Crawl-Delay<==>爬取延时<
>Sitemap<==>网站地图<
#-----------------------------
#File: Crawler_p.html
#---------------------------
YaCy '#[clientname]#': Crawler==YaCy '#[clientname]#': 爬虫
Click on this API button to see an XML with information about the crawler status==单击此API按钮可查看包含有关爬虫状态信息的 XML
>Crawler<==>爬虫<
(Please enable JavaScript to automatically update this page!)==(请启用JavaScript以自动更新此页面)
>Queues<==>队列<
>Queue<==>队列<
>Size<==>大小<
Local Crawler==本地爬虫
Limit Crawler==受限爬虫
Remote Crawler==远端爬虫
No-Load Crawler==未加载爬虫
>Loader<==>加载器<
Terminate All==全部终止
>Index Size<==>索引大小<
>Database<==>数据库<
>Entries<==>词条数<
Seg-<br/>ments==分<br/>段
>Documents<==>文档<
>solr search api<==>solr搜索api<
>Webgraph Edges<==>网图边缘<
>Citations<==>引用<
(reverse link index)==(反向链接索引)
>RWIs<==>反向词<
(P2P Chunks)==(P2P块)
>Progress<==>进度<
>Indicator<==>指标<
>Level<==>等级<
Speed / PPM==速度/PPM
Pages Per Minute==每分钟页数
Latency Factor==延迟因子
Max same Host in queue==队列同一服务器最大数量
"set" =="设置"
>min<==>最小<
>max<==>最大<
Set PPM to the default minimum value==设置PPM为默认最小值
Set PPM to the default maximum value==设置PPM为默认最大值
Crawler PPM==爬虫PPM
Postprocessing Progress==后处理进度
>pending:<==>待定:<
>collection=<==>收集=<
>webgraph=<==>网图=<
Traffic (Crawler)==流量 (爬虫)
>Load<==>负荷<
Error with profile management. Please stop YaCy, delete the file DATA/PLASMADB/crawlProfiles0.db==资料管理出错. 请关闭YaCy, 并删除文件 DATA/PLASMADB/crawlProfiles0.db
and restart. ::==后重启。::
Error:==错误:
Application not yet initialized. Sorry. Please wait some seconds and repeat==抱歉, 程序未初始化, 请稍候并重复
ERROR: Crawl filter==错误: 爬取过滤
does not match with==不匹配
crawl root==爬取根
Please try again with different==请使用不同的过滤字再试一次
filter. ::==. ::
Crawling of==在爬取
failed. Reason:==失败. 原因:
Error with URL input==网址输入错误
Error with file input==文件输入错误
started.==已开始.
pause reason: resource observer: not enough memory space==暂停原因: 资源检测器:没有足够内存空间
Please wait some seconds,==请稍等几秒钟,
it may take some seconds until the first result appears there.==在出现第一个搜索结果前需要几秒钟时间.
If you crawl any un-wanted pages, you can delete them <a href="IndexCreateQueues_p.html?stack=LOCAL">here</a>.==如果你爬取了不需要的页面, 你可以 <a href="IndexCreateQueues_p.html?stack=LOCAL">点这</a> 删除它们.
>Running Crawls==>运行中的爬取
>Name<==>名字<
>Count<==>计数<
>Status<==>状态<
>Running<==>运行中<
"Terminate"=="终止"
"show link structure"=="显示链接结构"
"hide graphic"=="隐藏图形"
>Crawled Pages<==>抓取到的网页<
#-----------------------------
#File: CrawlMonitorRemoteStart.html
#---------------------------
Recently started remote crawls in progress==最近启动的远端爬虫
Remote crawl start points, crawl is ongoing==远端爬虫开启点,爬虫运行中
Start Time==开启时间
Peer Name==节点名称
Start URL==起始地址
Intention/Description==意图/描述
Depth==深度
Accept '?' URLs==接受'?'地址
Remote crawl start points, finished:==远端爬虫开启点,已完成:
#-----------------------------
#File: CrawlProfileEditor_p.html
#---------------------------
Crawl Profile Editor==爬取配置文件编辑器
>Crawl Profile Editor<==>爬取文件编辑<
>Crawler Steering<==>爬虫控制<
>Crawl Scheduler<==>爬取调度器<
>Scheduled Crawls can be modified in this table<==>请在下表中修改已安排的爬取<
Crawl profiles hold information about a crawl process that is currently ongoing.==爬取文件里保存有正在运行的爬取进程信息.
#Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==Crawl Profile enthalten Informationen über eine spezifische URL, welche intern genutzt wird, um nachzuvollziehen, wozu der Crawl gehört.
#The profiles for remote crawls, <a href="ProxyIndexingMonitor_p.html">indexing via proxy</a> and snippet fetches==Die Profile für Remote Crawl, <a href="ProxyIndexingMonitor_p.html">Indexierung per Proxy</a> und Snippet Abrufe
#cannot be altered here as they are hard-coded.==können nicht verändert werden, weil sie "hard-coded" sind.
#Crawl Profile List
Crawl Profile List==爬取文件列表
Crawl Thread<==爬取线程<
>Collections<==>搜集<
>Status<==>状态<
>Depth<==>深度<
Must Match<==必须匹配<
>Must Not Match<==>必须不符<
>Recrawl if older than<==>重新爬取早于此时间的页面<
>Domain Counter Content<==>域计数器内容<
>Max Page Per Domain<==>每个域中拥有最大页面<
>Accept==>接受
URLs<==地址<
>Fill Proxy Cache<==>填充代理缓存<
>Local Text Indexing<==>本地文本索引<
>Local Media Indexing<==>本地媒体索引<
>Remote Indexing<==>远端索引<
MaxAge<==最长寿命<
no::yes==否::是
Running==运行中
"Terminate"=="终结"
Finished==已完成
"Delete"=="删除"
"Delete finished crawls"=="删除已完成的爬取进程"
Select the profile to edit==选择要修改的文件
"Edit profile"=="修改文件"
An error occurred during editing the crawl profile:==修改爬取文件时发生错误:
Edit Profile==修改文件
"Submit changes"=="提交改变"
#-----------------------------
#File: CrawlResults.html
#---------------------------
Crawl Results<==爬取结果<
>Crawl Results Overview<==>爬取结果概况<
These are monitoring pages for the different indexing queues.==这些是不同索引队列的监控页面.
YaCy knows 5 different ways to acquire web indexes. The details of these processes (1-5) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 详细描述显示在子菜单的进程(1-5)中,
above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息是私有的,
so you need to log-in with your administration password.==所以你需要以管理员账户来查看.
Case (6) is a monitor of the local receipt-generator, the opposed case of (1). It contains also an indexing result monitor but is not considered private==事件(6)是本地回执生成器的监控器, (1)的相反事件. 它也包含一个索引结果监控器, 但不是私有的.
since it shows crawl requests from other peers.==因为它显示了来自其他节点的爬取请求.
Case (7) occurs if surrogate files are imported==事件(7)发生在导入备份文件时
The image above illustrates the data flow initiated by web index acquisition.==上图解释了由网页索引查询发起的数据流.
Some processes occur double to document the complex index migration structure.==某些进程发生了两次以记录复杂的索引迁移结构.
(1) Results of Remote Crawl Receipts==(1) 远端爬取回执的结果
This is the list of web pages that this peer initiated to crawl,==这是此节点发起爬取的网页列表,
but had been crawled by <em>other</em> peers.==但它们早已被 <em>其他</em> 节点爬取了.
This is the 'mirror'-case of process (6).==这是进程(6)的'镜像'事件.
<em>Use Case:</em> You get entries here, if you start a local crawl on the '<a href="CrawlStartExpert.html">Advanced Crawler</a>' page and check the==<em>用法:</em> 你可在此获得词条, 当你在 '<a href="CrawlStartExpert.html">高级爬虫页面</a> 上启动本地爬取并勾选
'Do Remote Indexing'-flag, and if you checked the 'Accept Remote Crawl Requests'-flag on the '<a href="RemoteCrawl_p.html">Remote Crawling</a>' page.=='执行远端索引'-标志时, 这需要你确保在 '<a href="RemoteCrawl_p.html">远端爬取</a>' 页面中勾选了'接受远端爬取请求'-标志.
Every page that a remote peer indexes upon this peer's request is reported back and can be monitored here.==远端节点根据此节点的请求编制索引的每个页面都会被报告回来,并且可以在此处进行监控.
(2) Results for Result of Search Queries==(2) 搜索查询结果报告页
This index transfer was initiated by your peer by doing a search query.==通过搜索, 此索引转移能被发起.
The index was crawled and contributed by other peers.==这个索引是被其他节点贡献与爬取的.
<em>Use Case:</em> This list fills up if you do a search query on the 'Search Page'==<em>用法:</em> 如果你在'搜索页面'上执行搜索查询,此列表将填满
(3) Results for Index Transfer==(3) 索引转移结果
The url fetch was initiated and executed by other peers.==这些取回本地的地址是被其他节点发起并爬取.
These links here have been transmitted to you because your peer is the most appropriate for storage according to==程序已将这些地址传递给你, 因为根据全球分布哈希表的逻辑,
the logic of the Global Distributed Hash Table.==你的节点是最适合存储它们的.
<em>Use Case:</em> This list may fill if you check the 'Index Receive'-flag on the 'Index Control' page==<em>用法:</em> 如果你在'索引控制'页面上选中'索引接收'-标志, 则此列表会填写
(4) Results for Proxy Indexing==(4) 代理索引结果
These web pages had been indexed as result of your proxy usage.==以下是由于使用代理而索引的网页.
No personal or protected page is indexed==不包括私有或受保护网页
such pages are detected by Cookie-Use or POST-Parameters (either in URL or as HTTP protocol)==通过检测cookie用途和提交参数(链接或者HTTP协议)能够识别出此类网页,
and automatically excluded from indexing.==并在索引时自动排除.
<em>Use Case:</em> You must use YaCy as proxy to fill up this table.==<em>用法:</em> 必须把YaCy用作代理才能填充此表格.
Set the proxy settings of your browser to the same port as given==将浏览器代理端口设置为
on the 'Settings'-page in the 'Proxy and Administration Port' field.=='设置'页面'代理和管理端口'选项中的端口.
(5) Results for Local Crawling==(5)本地爬取结果
These web pages had been crawled by your own crawl task.==这些网页按照你的爬虫任务已被爬取.
<em>Use Case:</em> start a crawl by setting a crawl start point on the 'Index Create' page.==<em>用法:</em> 在'索引创建'页面设置爬取起始点以开始爬取.
(6) Results for Global Crawling==(6)全球爬取结果
These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已被你的节点创建了索引, 但它们是被远端节点爬取的.
This is the 'mirror'-case of process (1).==这是进程(1)的'镜像'事件.
<em>Use Case:</em> This list may fill if you check the 'Accept Remote Crawl Requests'-flag on the '<a href="RemoteCrawl_p.html">Remote Crawling</a>' page==<em>用法:</em> 如果你在 '<a href="RemoteCrawl_p.html">远端爬取</a>' 页面勾选'接受远端爬取请求'-标记,此列表会填写
The stack is empty.==此栈为空.
Statistics about #[domains]# domains in this stack:==此栈显示有关 #[domains]# 域的数据:
(7) Results from surrogates import==(7) 备份导入结果
These records had been imported from surrogate files in DATA/SURROGATES/in==这些记录从 DATA/SURROGATES/in 中的备份文件中导入
<em>Use Case:</em> place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==<em>用法:</em> 将包含Dublin Core元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式
(i.e. <a href="IndexImportMediawiki_p.html">MediaWiki import</a>, <a href="IndexImportOAIPMH_p.html">OAI-PMH retrieval</a>)==(例如 <a href="IndexImportMediawiki_p.html">MediaWiki 导入</a>, <a href="IndexImportOAIPMH_p.html">OAI-PMH 导入</a>)
>Domain==>域名
"delete all"=="全部删除"
Showing all #[all]# entries in this stack.==显示栈中所有 #[all]# 词条.
Showing latest #[count]# lines from a stack of #[all]# entries.==显示栈中 #[all]# 个词条的最近 #[count]# 行.
"clear list"=="清除列表"
>Executor==>执行者
>Modified==>已修改
>Words==>单词
>Title==>标题
"delete"=="删除"
>Collection==>收集
Blacklist to use==使用的黑名单
"del & blacklist"=="删除并拉黑"
#-----------------------------
#File: CrawlStartExpert.html
#---------------------------
YaCy '#[clientname]#': Crawl Start==YaCy '#[clientname]#': 爬取开启
Click on this API button to see a documentation of the POST request parameter for crawl starts.==单击此API按钮查看爬取启动的POST请求参数的文档。
Expert Crawl Start==高级爬取开启
Start Crawling Job:==开启爬取任务:
You can define URLs as start points for Web page crawling and start crawling here.==你可以在此指定网页爬取起始点的网址和开启爬取。
"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links.== "爬取中"意即YaCy会下载指定的网站, 并提取出其中的链接,接着下载链接中的全部内容。
This is repeated as long as specified under "Crawling Depth".==它将一直重复上述步骤,直到满足指定的"爬取深度"。
A crawl can also be started using wget and the <a href="http://www.yacy-websearch.net/wiki/index.php/Dev:APICrawler" target="_blank">post arguments</a> for this web page.==也可以使用此网页的wget和<a href="http://www.yacy-websearch.net/wiki/index.php/Dev:APICrawler" target="_blank">post参数</a>开启爬取。
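# A hedged wget sketch of such a POST-driven crawl start (the parameter names
# here are assumptions; consult the API documentation linked above, and note
# that admin credentials may be required):
#   wget --post-data "crawlingMode=url&crawlingURL=http://example.org/&crawlingDepth=2" \
#        "http://localhost:8090/Crawler_p.html"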
>Crawl Job<==>爬取任务<
A Crawl Job consist of one or more start point, crawl limitations and document freshness rules.==爬取任务由一个或多个起始点、爬取限制和文档更新规则构成。
>Start Point==>起始点
One Start URL or a list of URLs:<br/>(must start with http:// https:// ftp:// smb:// file://)==起始网址或网址列表:<br/>(必须以http:// https:// ftp:// smb:// file://开头)
Define the start-url(s) here. You can submit more than one URL, each line one URL please.==在此给定起始网址。你可以提交多个网址,请一个网址一行。
Each of these URLs are the root for a crawl start, existing start URLs are always re-loaded.==这些网址中每个都是爬取开始的起点,已存在的起始网址总是会被重新加载。
Other already visited URLs are sorted out as "double", if they are not allowed using the re-crawl option.==对其他已访问过的网址,如果基于重爬选项它们不被允许,则被标记为'重复'。
>From Link-List of URL<==>来自网址的链接列表<
From Sitemap==来自网站地图
From File (enter a path<br/>within your local file system)==来自文件<br/>(输入一个本地文件系统路径)
>Crawler Filter==>爬虫过滤器
These are limitations on the crawl stacker. The filters will be applied before a web page is loaded.==这些是爬取堆栈器的限制。这些过滤器将在网页加载前被应用。
>Crawling Depth<==>爬取深度<
This defines how often the Crawler will follow links (of links..) embedded in websites.==此选项决定了爬虫将跟随嵌入网址中链接的深度。
0 means that only the page you enter under "Starting Point" will be added==0代表仅将"起始点"网址添加到索引。
to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==2-4是常规索引用的值。超过8的值没有用因为深度为8的爬取将
index approximately 25.600.000.000 pages, maybe this is the whole WWW.==索引接近256亿个网页这可能是整个互联网的内容。
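# A worked example behind the depth-8 estimate above, assuming an illustrative
# average of 20 links per page: 20^8 = 25,600,000,000 pages.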
also all linked non-parsable documents==包括全部链接中不可解析的文档
>Unlimited crawl depth for URLs matching with<==>对与此匹配的网址不限制爬取深度<
>Maximum Pages per Domain<==>每个域名下最大网页数<
You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==使用此选项,你可以限制单个域名下爬取和索引的页面数。
You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==你可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域名。
the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域名会被自动忽略。
>Use<==>使用<
Page-Count<==页面数<
>misc. Constraints<==>其它限制<
A questionmark is usually a hint for a dynamic page. URLs pointing to dynamic content should usually not be crawled.==问号标记常用作动态网页的提示。指向动态内容的地址通常不应该被爬取。
However, there are sometimes web pages with static content that==然而,也有些含有静态网页地址也包含问号标记。
is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果你不确定,不要勾选此项以防爬取陷入循环。
Following frames is NOT done by Gxxg1e, but we do by default to have a richer content. 'nofollow' in robots metadata can be overridden; this does not affect obeying of the robots.txt which is never ignored.==Gxxg1e不会跟随框架(frame)链接,但我们默认会跟随,以获得更丰富的内容。robots元数据中的'nofollow'可以被忽略;但这不影响对robots.txt的遵守,后者绝不会被忽略。
Accept URLs with query-part ('?'): ==接受包含问号标记('?')的地址:
Obey html-robots-noindex:==遵守html-robots-noindex
Obey html-robots-nofollow:==遵守html-robots-nofollow
Media Type detection==媒体类型探测
Not loading URLs with unsupported file extension is faster but less accurate.==不加载包含不受支持文件扩展名的网址速度更快,但准确性更低。
Indeed, for some web resources the actual Media Type is not consistent with the URL file extension. Here are some examples:==实际上,对于某些网络资源,实际的媒体类型与网址中文件扩展名不一致。以下是一些例子:
: the .de extension is unknown, but the actual Media Type of this page is text/html==: 这个.de扩展名未知但此页面的实际媒体类型为text/html
: the .com extension is not supported (executable file format), but the actual Media Type of this page is text/html==: 这个.com扩展名不受支持可执行文件格式但此页面的实际媒体类型为text/html
: the .png extension is a supported image format, but the actual Media Type of this page is text/html==: 这个.png扩展名是一种受支持的图像格式但该页面的实际媒体类型是text/html
Do not load URLs with an unsupported file extension==不加载具有不支持文件拓展名的地址
Always cross check file extension against Content-Type header==始终针对Content-Type标头交叉检查文件扩展名
>Load Filter on URLs<==>对地址加载过滤器<
The filter is a <b><a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">regular expression</a></b>.==这个过滤器是一个<b><a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">正则表达式</a></b>。
Example: to allow only urls that contain the word 'science', set the must-match filter to '.*science.*'. ==示例要仅允许包含单词“science”的网址请将“必须匹配”筛选器设置为'.*science.*'。
You can also use an automatic domain-restriction to fully crawl a single domain.==你还可以使用自动域名限制来完全爬取单个域名。
Attention: you can test the functionality of your regular expressions using the <a href="RegexTest.html">Regular Expression Tester</a> within YaCy.==注意你可以使用YaCy中的<a href="RegexTest.html">正则表达式测试仪</a>测试正则表达式的功能。
> must-match<==>必须匹配<
Restrict to start domain==限制起始域
Restrict to sub-path==限制子路径
Use filter==使用过滤器
(must not be empty)==(不能为空)
> must-not-match<==>必须排除<
>Load Filter on URL origin of links<==>对链接来源地址加载过滤器<
The filter is a <b><a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">regular expression</a></b>==这个过滤器是一个<b><a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">正则表达式</a></b>
Example: to allow loading only links from pages on example.org domain, set the must-match filter to '.*example.org.*'.==示例为只允许加载域名example.org网页中链接将“必须匹配”筛选器设置为'.*example.org.*'。
>Load Filter on IPs<==>对IP加载过滤器<
>Must-Match List for Country Codes<==>国家代码必须匹配列表<
Crawls can be restricted to specific countries. This uses the country code that can be computed from==爬取可以限制在特定的国家。它使用的国家代码可以从存放网页的服务器的IP计算得出。
the IP of the server that hosts the page. The filter is not a regular expressions but a list of country codes, separated by comma.==过滤器不是正则表达式,而是国家代码列表,用逗号分隔。
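# Illustrative example of a value for this filter (any ISO 3166-1 alpha-2
# codes work, separated by comma): de,fr,cn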
>no country code restriction<==>没有国家代码限制<
>Use filter&nbsp;&nbsp;==>使用过滤器&nbsp;&nbsp;
>Document Filter==>文档过滤器
These are limitations on index feeder. The filters will be applied after a web page was loaded.==这些是对索引供给器的限制。加载网页后过滤器才会被应用。
>Filter on URLs<==>地址过滤器<
The filter is a <b><a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">regular expression</a></b>==这个过滤器是一个<b><a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">正则表达式</a></b>
that <b>must not match</b> with the URLs to allow that the content of the url is indexed.==其<b>必须不匹配</b>网址, 这样该网址的内容才会被索引。
Filter on Content of Document<br/>(all visible text, including camel-case-tokenized url and title)==文档内容过滤器<br/>(所有可见文本,包括驼峰大小写标记的网址和标题)
Filter on Document Media Type (aka MIME type)==文档媒体类型过滤器又名MIME类型
that <b>must match</b> with the document Media Type (also known as MIME Type) to allow the URL to be indexed. ==其<b>必须匹配</b>文档媒体类型(也称为MIME类型), 这样该网址才会被索引。
Standard Media Types are described at the <a href="https://www.iana.org/assignments/media-types/media-types.xhtml" target="_blank">IANA registry</a>.==<a href="https://www.iana.org/assignments/media-types/media-types.xhtml" target="_blank">IANA注册表</a>中描述了标准媒体类型。
Solr query filter on any active <a href="IndexSchema_p.html" target="_blank">indexed</a> field(s)==任何<a href="IndexSchema_p.html" target="_blank">激活索引</a>字段上的Solr查询过滤器
Each parsed document is checked against the given Solr query before being added to the index.==在添加到索引之前将根据给定的Solr查询检查每个已解析的文档。
The query must be written in respect to the <a href="https://lucene.apache.org/solr/guide/6_6/the-standard-query-parser.html#the-standard-query-parser" target="_blank">standard</a> Solr query syntax.==必须按照<a href="https://lucene.apache.org/solr/guide/6_6/the-standard-query-parser.html#the-standard-query-parser" target="_blank">标准</a>Solr查询语法编写查询。
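# A minimal sketch of a filter query in standard Solr syntax; the field names
# are assumptions based on a typical collection1 schema:
#   text_t:yacy AND host_s:www.example.org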
The embedded local Solr index must be connected to use this kind of filter.==要使用这种过滤器必须连接嵌入式本地Solr索引。
You can configure this with the <a href="IndexFederated_p.html">Index Sources &amp; targets</a> page.==你可以使用<a href="IndexFederated_p.html">索引源目标</a>页面对此进行配置。
>Content Filter==>内容过滤器
These are limitations on parts of a document. The filter will be applied after a web page was loaded.==这些是文档部分的限制.加载网页后将应用过滤器.
>Filter div or nav class names<==>div或nav类名过滤器<
>set of CSS class names<==>CSS类名收集<
comma-separated list of &lt;div&gt; or &lt;nav&gt; element class names which should be filtered out==应过滤掉的&lt;div&gt;元素或&lt;nav&gt;类名的逗号分隔列表
>Clean-Up before Crawl Start==>爬取前清理
Clean up search events cache==清理搜索事件缓存
Check this option to be sure to get fresh search results including newly crawled documents. Beware that it will also interrupt any refreshing/resorting of search results currently requested from browser-side.==选中此选项以确保获得新包括新爬取文档的搜索结果.请注意,它也会中断当前从浏览器端请求的搜索结果的刷新/排序.
>No Deletion<==>不删除<
After a crawl was done in the past, document may become stale and eventually they are also deleted on the target host.==在过去完成爬取后,文档可能会过时,最终它们也会在目标服务器上被删除。
To remove old files from the search index it is not sufficient to just consider them for re-load but it may be necessary==若要从搜索索引中删除旧文件,仅考虑重新加载它们是不够的。
to delete them because they simply do not exist any more. Use this in combination with re-crawl while this time should be longer.==但可能有必要删除它们,因为它们已经不存在了。与重新爬取组合使用,而这一时间应该更长。
Do not delete any document before the crawl is started.==在爬取前不删除任何文档.
>Delete sub-path<==>删除子路径<
For each host in the start url list, delete all documents (in the given subpath) from that host.==对于启动URL列表中的每个服务器,从这些服务器中删除所有文档(在给定的子路径中).
>Delete only old<==>删除旧文件<
Treat documents that are loaded==认为加载于
ago as stale and delete them before the crawl is started==前的文档是旧文档,在爬取前删除它们.
>Double-Check Rules==>重复检查规则
>No&nbsp;Doubles<==>无重复检查<
A web crawl performs a double-check on all links found in the internet against the internal database. If the same url is found again,==网页爬取参照自身数据库,对所有找到的链接进行重复性检查.如果链接重复,
then the url is treated as double when you check the 'no doubles' option. A url may be loaded again when it has reached a specific age,==并且'无重复'选项打开, 则被以重复链接对待.如果地址存在时间超过一定时间,
to use that check the 're-load' option.==并且'重加载'选项打开,则此地址会被重新读取.
Never load any page that is already known. Only the start-url may be loaded again.==切勿加载任何已知的页面.只有起始地址可能会被重新加载.
>Re-load<==>重加载<
Treat documents that are loaded==认为加载于
ago as stale and load them again. If they are younger, they are ignored.==前的文档是旧文档并重新加载它们.如果它们是新文档,不需要重新加载.
>Document Cache==>文档缓存
Store to Web Cache==存储到网页缓存
This option is used by default for proxy prefetch, but is not needed for explicit crawling.==这个选项默认打开, 并用于预爬取, 但对于精确爬取此选项无效.
Policy for usage of Web Cache==网页缓存使用策略
The caching policy states when to use the cache during crawling:==缓存策略即表示爬取时何时使用缓存:
no&nbsp;cache==无缓存
never use the cache, all content from fresh internet source;==从不使用缓存内容, 全部从因特网资源即时爬取;
if&nbsp;fresh==如果最新
use the cache if the cache exists and is fresh using the proxy-fresh rules;==如果缓存存在, 且按照代理刷新规则是最新的, 则使用缓存;
if&nbsp;exist==如果存在
use the cache if the cache exist. Do no check freshness. Otherwise use online source;==如果缓存存在则使用缓存, 不检查是否最新, 否则使用在线源;
cache&nbsp;only==仅缓存
never go online, use all content from cache. If no cache exist, treat content as unavailable==从不联网, 全部使用缓存内容. 如果缓存不存在, 则视内容为不可用
>Robot Behaviour<==>机器人行为<
Use Special User Agent and robot identification==使用特殊的用户代理和机器人识别
Because YaCy can be used as replacement for commercial search appliances==因为YaCy可以替代商业搜索设备
(like the Google Search Appliance aka GSA) the user must be able to crawl all web pages that are granted to such commercial platforms.==像谷歌搜索设备又名GSA用户必须能够抓取所有授予此类商业平台的网页。
Not having this option would be a strong handicap for professional usage of this software. Therefore you are able to select==没有这个选项将是专业使用该软件的一大障碍。
alternative user agents here which have different crawl timings and also identify itself with another user agent and obey the corresponding robots rule.==因此,你可以在此处选择替代用户代理,它具有不同爬取时间,还可以伪装成另一个用户代理标识,并遵守相应的机器人规则。
>Enrich Vocabulary<==>丰富词汇<
>Scraping Fields<==>抓取字段<
You can use class names to enrich the terms of a vocabulary based on the text content that appears on web pages. Please write the names of classes into the matrix.==你可以根据网页上显示的文本内容,使用类名丰富词汇表中的术语。请把类名写进表格。
>Snapshot Creation==>创建快照
>Max Depth for Snapshots<==>快照最大深度<
Snapshots are xml metadata and pictures of web pages that can be created during crawling time.==快照是可以在爬取期间创建的xml元数据和网页图片。
The xml data is stored in the same way as a Solr search result with one hit and the pictures will be stored as pdf into subdirectories==xml数据以与Solr搜索结果相同的方式存储只需点击一次图片将以pdf格式存储到HTCACHE/snapshots/的子目录中。
of HTCACHE/snapshots/. From the pdfs the jpg thumbnails are computed. Snapshot generation can be controlled using a depth parameter; that==根据PDF计算jpg缩略图。可以使用深度参数控制快照生成
means a snapshot is only be generated if the crawl depth of a document is smaller or equal to the given number here. If the number is set to -1,==这意味着只有当文档的爬取深度小于或等于此处给定的数字时,才会生成快照。
no snapshots are generated.==如果该数字设置为-1则不会生成快照。
>Multiple Snapshot Versions<==>多个快照版本<
replace old snapshots with new one==用新快照代替老快照
add new versions for each crawl==每次爬取添加新版本
>must-not-match filter for snapshot generation<==>快照产生排除过滤器<
>Image Creation<==>图像生成<
Only XML snapshots can be generated. as the <a href="https://wkhtmltopdf.org/" target="_blank">wkhtmltopdf</a> util is not found by YaCy on your system.==只能生成XML快照。因为YaCy在你的系统上找不到<a href="https://wkhtmltopdf.org/" target="_blank">wkhtmltopdf</a>工具。
It is required to generate PDF snapshots from crawled pages that can then be converted to images.==需要从爬取的页面中生成PDF快照然后将其转换为图像。
>Index Attributes==>索引属性
>Indexing<==>创建索引<
This enables indexing of the webpages the crawler will download. This should be switched on by default, unless you want to crawl only to fill the==这样就可以对爬虫将下载的网页进行索引。
Document Cache without indexing.==默认情况下,应该打开该选项,除非你只想爬取以填充文档缓存而不建立索引。
>index text<==>索引文本<
>index media<==>索引媒体<
>Do Remote Indexing<==>远端索引<
If checked, the crawler will contact other peers and use them as remote indexers for your crawl.==如果选中, 爬虫会联系其他节点, 并将其作为此次爬取的远端索引器.
If you need your crawling results locally, you should switch this off.==如果你仅想爬取本地内容, 请关闭此设置.
Only senior and principal peers can initiate or receive remote crawls.==仅高级节点和主节点能发起或者接收远端爬取.
A YaCyNews message will be created to inform all peers about a global crawl==YaCy新闻消息中会将这个全球爬取通知其他节点,
so they can omit starting a crawl with the same start point.==然后他们才能以相同起始点进行爬取.
Remote crawl results won't be added to the local index as the remote crawler is disabled on this peer.==远程爬取结果不会添加到本地索引中,因为远程爬取程序在此节点上被禁用。
You can activate it in the <a href="RemoteCrawl_p.html">Remote Crawl Configuration</a> page.==你可以在<a href="RemoteCrawl_p.html">远程爬取配置</a>页面中激活它。
Describe your intention to start this global crawl (optional)==在这填入你要进行全球爬取的目的(可选)
This message will appear in the 'Other Peer Crawl Start' table of other peers.==此消息会显示在其他节点的'其他节点爬取起始列表'中.
>Add Crawl result to collection(s)<==>添加爬取结果到收集<
A crawl result can be tagged with names which are candidates for a collection request.==爬取结果可以标记为收集请求的候选名称。
These tags can be selected with the <a href="gsa/search?q=www&site=#[collection]#">GSA interface</a> using the 'site' operator.==这些标签可以通过<a href="gsa/search?q=www&site=#[collection]#">GSA界面</a>使用'site'运算符进行选择。
To use this option, the 'collection_sxt'-field must be switched on in the <a href="IndexFederated_p.html">Solr Schema</a>==要使用此选项,必须在<a href="IndexFederated_p.html">Solr模式</a>中打开“collection_sxt”字段
>Time Zone Offset<==>时区偏移<
The time zone is required when the parser detects a date in the crawled web page. Content can be searched with the on: - modifier which==当解析器在已爬取的网页中检测到日期时,需要时区。
requires also a time zone when a query is made. To normalize all given dates, the date is stored in UTC time zone. To get the right offset==可以使用on:-修饰符搜索内容在进行查询时该修饰符还需要一个时区。为了规范化所有给定的日期该日期存储在UTC时区中。
from dates without time zones to UTC, this offset must be given here. The offset is given in minutes;==要获得从没有时区的日期到UTC的正确偏移量必须在此处给出该偏移量。偏移量以分钟为单位
Time zone offsets for locations east of UTC must be negative; offsets for zones west of UTC must be positve.==UTC以东位置的时区偏移必须为负值; UTC以西位置的偏移量必须为正值。
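# Example offset arithmetic (illustrative): Beijing at UTC+8 (east of UTC)
# gives an offset of -480 minutes; New York at UTC-5 (west) gives +300.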
Start New Crawl Job==开始新爬取任务
#-----------------------------
#File: CrawlStartScanner_p.html
#---------------------------
Network Scanner==网络扫描器
YaCy can scan a network segment for available http, ftp and smb server.==YaCy可以扫描一个网段以查找可用的http、ftp和smb服务器。
You must first select a IP range and then, after this range is scanned,==你须先指定IP范围,此后该范围将被扫描,
it is possible to select servers that had been found for a full-site crawl.==也可以选择已找到的服务器作全站点爬取。
No servers had been detected in the given IP range #[iprange]#. Please enter a different IP range for another scan.==在给定IP范围内#[iprange]#未检测到可用服务器请重新指定IP范围。
Please wait...==请稍候...
>Scan the network<==>扫描网络<
Scan Range==扫描范围
Scan sub-range with given host==扫描给定服务器所在的子网范围
Full Intranet Scan:==局域网完全扫描:
Do not use intranet scan results, you are not in an intranet environment!==由于你当前不处于局域网环境, 请不要使用局域网扫描结果!
All known hosts in the search index (/31 subnet recommended!)==搜索索引中的所有已知服务器(推荐/31子网
>Subnet<==>子网<
>/31 (only the given host(s)) <==>/31 (仅限给定服务器) <
>/24 (254 addresses) <==>/24 (254个地址) <
>/20 (4064 addresses) <==>/20 (4064个地址) <
>/16 (65024 addresses)==>/16 (65024个地址)
>Time-Out<==>超时<
>Scan Cache<==>扫描缓存<
accumulate scan results with access type "granted" into scan cache (do not delete old scan result)==将访问类型为"已授权"的扫描结果累积到扫描缓存中(不删除旧的扫描结果)
>Service Type<==>服务类型<
>Scheduler<==>定期扫描<
run only a scan==运行一次扫描
scan and add all sites with granted access automatically. This disables the scan cache accumulation.==扫描并自动添加所有已授权访问的站点. 此选项会禁用扫描缓存累积.
Look every==每隔
>minutes<==>分<
>hours<==>时<
>days<==>天<
again and add new sites automatically to indexer.==再次检视, 并自动添加新站点到索引器中.
Sites that do not appear during a scheduled scan period will be excluded from search results.==周期扫描中未上线的站点会被自动排除.
"Scan"=="扫描"
#-----------------------------
#File: CrawlStartSite.html
#---------------------------
YaCy '#[clientname]#': Crawl Start==YaCy '#[clientname]#': 爬取开启
Site Crawling==站点爬取
Site Crawler:==站点爬虫:
Download all web pages from a given domain or base URL.==从给定域名或者网址中下载所有网页。
Site Crawl Start==开始爬取站点
>Site<==>站点<
Start URL&nbsp;(must start with<br/>http:// https:// ftp:// smb:// file://)==起始地址&nbsp;(必须以<br/>http:// https:// ftp:// smb:// file://开头)
Link-List of URL==网址列表
Sitemap URL==网站地图地址
>Path<==>路径<
load all files in domain==载入域名下全部文件
load only files in a sub-path of given url==仅载入给定域名子路径中的文件
>Limitation<==>限制<
not more than <==不超过<
>documents<==>文件<
>Collection<==>收集<
>Start<==>开启<
"Start New Crawl"=="开启新的爬取"
Hints<==提示<
>Crawl Speed Limitation<==>爬取速度限制<
No more that four pages are loaded from the same host in one second (not more that 120 document per minute) to limit the load on the target server.==每秒最多从同一服务器中载入4个页面(每分钟不超过120个文件)以减少对目标服务器影响。
>Target Balancer<==>目标平衡器<
A second crawl for a different host increases the throughput to a maximum of 240 documents per minute since the crawler balances the load over all hosts.==因爬虫会平衡全部服务器的负载,对于不同服务器的二次爬取, 生产量会上升到每分钟最多240个文件。
>High Speed Crawling<==>高速爬取<
A 'shallow crawl' which is not limited to a single host (or site)==当目标服务器数量很多时, 不局限于单个服务器(或站点)的'浅爬取'模式
can extend the pages per minute (ppm) rate to unlimited documents per minute when the number of target hosts is high.==会将生产量上升到每分钟无限页面数(ppm)。
This can be done using the <a href="CrawlStartExpert.html">Expert Crawl Start</a> servlet.==可在<a href="CrawlStartExpert.html">专家爬虫</a>中开启。
>Scheduler Steering<==>调度器控制<
The scheduler on crawls can be changed or removed using the <a href="Table_API_p.html">API Steering</a>.==可以使用<a href="Table_API_p.html">API控制</a>改变或删除爬虫调度器。
#-----------------------------
#File: DictionaryLoader_p.html
#---------------------------
>Knowledge Loader<==>知识加载器<
YaCy can use external libraries to enable or enhance some functions. These libraries are not==你可以使用外部插件来增强一些功能. 考虑到程序大小问题,
included in the main release of YaCy because they would increase the application file too much.==这些插件并未被包含在主程序中.
You can download additional files here.==你可以在这下载扩展文件.
#Geolocalization
>Geolocalization<==>位置定位<
Geolocalization will enable YaCy to present locations from OpenStreetMap according to given search words.==位置定位功能使YaCy能根据给定的搜索词展示来自OpenStreetMap的位置信息.
>GeoNames<==>GeoNames<
#Suggestions
>Suggestions<==>建议<
#Synonyms
>Synonyms<==>同义词<
Dictionary Loader==字典加载器
With this file it is possible to find cities with a population > 1000 all over the world.==使用此文件能够找到全世界人口大于1000的城市.
>Download from<==>下载来源<
>Storage location<==>存储位置<
#>Status<==>Status<
>not loaded<==>未加载<
>loaded<==>已加载<
:deactivated==:已停用
>Action<==>动作<
>Result<==>结果<
"Load"=="加载"
"Deactivate"=="停用"
"Remove"=="卸载"
"Activate"=="启用"
>loaded and activated dictionary file<==>加载并启用插件<
>loading of dictionary file failed: #[error]#<==>读取插件失败: #[error]#<
>deactivated and removed dictionary file<==>停用并卸载插件<
>cannot remove dictionary file: #[error]#<==>卸载插件失败: #[error]#<
>deactivated dictionary file<==>停用插件<
>cannot deactivate dictionary file: #[error]#<==>停用插件失败: #[error]#<
>activated dictionary file<==>已启用插件<
>cannot activate dictionary file: #[error]#<==>启用插件失败: #[error]#<
#>OpenGeoDB<==>OpenGeoDB<
>With this file it is possible to find locations in Germany using the location (city) name, a zip code, a car sign or a telephone pre-dial number.<==>使用此插件, 则能通过查询城市名, 邮编, 车牌号或者电话区号得到德国任何地点的位置信息.<
#-----------------------------
#File: Help.html
#---------------------------
>Tutorial==>教学
twitter this video==推特这个视频
Download from Vimeo==从Vimeo下载
More Tutorials==更多教学
Please see the tutorials on==请参阅教程
YaCy: Tutorial==YaCy: 教程
YaCy: Help==YaCy: 帮助
Tutorial==新手教程
You are using the administration interface of your own search engine==你正在使用你自己搜索引擎的管理界面
You can create your own search index with YaCy==你可以用YaCy创建属于自己的搜索索引
To learn how to do that, watch one of the demonstration videos below==观看以下demo视频以了解更多
#-----------------------------
#File: index.html
#---------------------------
YaCy '#[clientname]#': Search Page==YaCy '#[clientname]#':搜索页面
>Search<==>搜索<
&nbsp;Text&nbsp;&nbsp;==&nbsp;文本&nbsp;&nbsp;
&nbsp;Images&nbsp;&nbsp;==&nbsp;图片&nbsp;&nbsp;
&nbsp;Audio&nbsp;&nbsp;==&nbsp;音频&nbsp;&nbsp;
&nbsp;Video&nbsp;&nbsp;==&nbsp;视频&nbsp;&nbsp;
&nbsp;Applications==&nbsp;应用
more options...==更多选项...
>Results per page<==>每页显示结果<
>Resource<==>来源<
>the peer-to-peer network<==>P2P网络<
>only the local index<==>仅本地索引<
>Prefer mask<==>偏好过滤<
Constraints:==限制:
>only index pages<==>仅索引页<
>Media search<==>媒体搜索<
Extend media search results (images, videos or applications specific) to pages including such medias (provides generally more results, but eventually less relevant).==将媒体搜索结果(特定于图像、视频或应用程序)扩展到包含此类媒体的页面(通常提供更多结果,但最终相关性较低)。
> Extended==> 拓展
Strictly limit media search results (images, videos or applications specific) to indexed documents matching exactly the desired content domain.==严格将媒体搜索结果(特定于图像、视频或应用程序)限制为与所需内容域完全匹配的索引文档。
> Strict==> 严格
>Query Operators<==>查询运算符<
>restrictions<==>限制<
only urls with the &lt;phrase&gt; in the url==仅包含词组&lt;phrase&gt;的网址的结果
only urls with the &lt;phrase&gt; within outbound links of the document==仅在文档的出站链接中包含带有词组&lt;phrase&gt;的网址
only urls with extension &lt;ext&gt;==仅包含拓展名为&lt;ext&gt;的网址
only urls from host &lt;host&gt;==仅服务器为&lt;host&gt;的网址
only pages with as-author-anotated &lt;author&gt;==仅包含作者为&lt;author&gt;的页面
only pages from top-level-domains &lt;tld&gt;==仅来自顶级域&lt;tld&gt;的页面
only pages with &lt;date&gt; in content==仅内容包含&lt;date&gt;的页面
only pages with a date between &lt;date1&gt; and &lt;date2&gt; in content==内容中只有日期介于&lt;date1&gt;和&lt;date2&gt;之间的页面
only pages with keyword anotation containing &lt;phrase&gt;==仅包含包含&lt;phrase&gt;的关键字注释的页面
only resources from http or https servers==仅限来自http或https服务器的资源
only resources from ftp servers (they are rare, <a href="CrawlStartSite.html">crawl them yourself</a>==只有来自ftp服务器的资源它们很少见请<a href="CrawlStartSite.html">自己抓取</a>
only resources from smb servers (<a href="ConfigBasic.html">Intranet Indexing</a> must be selected)==仅限来自smb服务器的资源必须选择<a href="ConfigBasic.html">内网索引</a>
only files from a local file system (<a href="ConfigBasic.html">Intranet Indexing</a> must be selected)==仅来自本地文件系统的文件(必须选择<a href="ConfigBasic.html">内网索引</a>
>spatial restrictions<==>空间限制<
only documents having location metadata (geographical coordinates)==仅包含位置元数据(地理坐标)的文档
only documents within a square zone embracing a circle of given radius (in decimal degrees) around the specified latitude and longitude (in decimal degrees)==仅限于包含指定经纬度(十进制度数)周围给定半径(十进制度数)圆圈的正方形区域内的文档
>ranking modifier<==>排名修饰符<
sort by date (latest first)==按日期排序(最新优先)
multiple words shall appear near==多个单词应出现在附近
"" (doublequotes)=="" (双引号)
/language/&lt;lang&gt;==/language/&lt;语言&gt;
prefer given language (an <a href="http://www.loc.gov/standards/iso639-2/php/English_list.php" title="Reference alpha-2 language codes list">ISO 639-1</a> 2-letter code)==首选给定语言(<a href="http://www.loc.gov/standards/iso639-2/php/English_list.php" title="Reference alpha-2 language codes list">ISO 639-1</a>的2字母代码
>heuristics<==>启发式<
>add search results from external opensearch systems<==>从外部开放搜索系统添加搜索结果<
>Search Navigation<==>搜索导航<
>keyboard shortcuts<==>键盘快捷键<
>Access key<==>访问键<
> modifier + n<==> 修饰键 + n<
>next result page<==>下页结果<
> modifier + p<==> 修饰键 + p<
>previous result page<==>上页结果<
>automatic result retrieval<==>自动结果检索<
>browser integration<==>浏览器集成<
after searching, click-open on the default search engine in the upper right search field of your browser and select 'Add "YaCy Search.."'==搜索完成后,单击浏览器右上角搜索字段中默认搜索引擎上的“打开”,然后选择'添加YaCy搜索..'
>search as rss feed<==>作为rss源搜索<
click on the red icon in the upper right after a search. this works good in combination with the '/date' ranking modifier. See an <a href="yacysearch.rss?query=news+%2Fdate&Enter=Search&verify=cacheonly&contentdom=text&nav=hosts%2Cauthors%2Cnamespace%2Ctopics%2Cfiletype%2Cprotocol&startRecord=0&indexof=off&meanCount=5&maximumRecords=10&resource=global&prefermaskfilter=">example</a>.==搜索后点击右上角的红色图标。这与“/date”排名修饰符结合使用效果很好。看一个<a href="yacysearch.rss?query=news+%2Fdate&Enter=Search&verify=cacheonly&contentdom=text&nav=hosts%2Cauthors%2Cnamespace%2Ctopics%2Cfiletype%2Cprotocol&startRecord=0&indexof=off&meanCount=5&maximumRecords=10&resource=global&prefermaskfilter=">例子</a>。
>json search results<==>json搜索结果<
for ajax developers: get the search rss feed and replace the '.rss' extension in the search result url with '.json'==对于ajax开发人员获取搜索rss提要并替换搜索结果地址'.rss'扩展名为'.json'
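# Illustrative example: a feed URL such as yacysearch.rss?query=news can be
# requested as yacysearch.json?query=news to obtain the same results as JSON.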
#-----------------------------
#File: IndexBrowser_p.html
#---------------------------
Index Browser==索引浏览器
Browse the index of #[ucount]# documents.==浏览 #[ucount]# 篇文档的索引.
Enter a host or an URL for a file list or view a list of==输入服务器或者地址来查看文件列表,它们来自
>all hosts<==>全部服务器<
>only hosts with urls pending in the crawler<==>仅有地址在爬虫中待处理的服务器<
> or <==> 或 <
>only with load errors<==>只有加载错误的服务器<
Host/URL==服务器/地址
Browse Host==浏览服务器
"Delete Subpath"=="删除子路径"
Browser for==浏览
"Re-load load-failure docs (404s etc)"=="重新加载具有错误的文档(404s 等)"
Confirm Deletion==确认删除
>Host List<==>服务器列表<
>Count Colors:<==>计数颜色:<
Documents without Errors==没有错误的文档
Pending in Crawler==在爬虫中待处理
Crawler Excludes<==爬虫排除<
Load Errors<==加载错误<
documents stored for host: #[hostsize]#==该服务器储存的文档: #[hostsize]#
documents stored for subpath: #[subpathloadsize]#==该子路径储存的文档: #[subpathloadsize]#
unloaded documents detected in subpath: #[subpathdetectedsize]#==子路径中探测到但未加载的文档: #[subpathdetectedsize]#
>Path<==>路径<
>stored<==>储存的<
>linked<==>链接的<
>pending<==>待处理的<
>excluded<==>排除的<
>failed<==>失败的<
Show Metadata==显示元数据
link, detected from context==从上下文中探测到的链接
>indexed<==>索引的<
>loading<==>加载中<
Outbound Links, outgoing from #[host]# - Host List==出站链接,从#[host]#中传出 - 服务器列表
Inbound Links, incoming to #[host]# - Host List==入站链接,传入#[host]# - 服务器列表
'number of documents about this date'=='在这个日期的文件数量'
"show link structure graph"=="展示连接结构图"
Host has load error(s)==服务器有加载错误项
Administration Options==管理选项
Delete all==全部删除
>Load Errors<==>加载错误<
from index==来自索引
"Delete Load Errors"=="删除加载错误项"
#-----------------------------
#File: IndexControlRWIs_p.html
#---------------------------
Reverse Word Index Administration==反向词索引管理
The local index currently contains #[wcount]# reverse word indexes==本地索引包含#[wcount]#个反向词索引
RWI Retrieval (= search for a single word)==反向词检索(=搜索单个词)
Retrieve by Word:<==按单词检索:<
"Show URL Entries for Word"=="显示单词相关的地址"
Retrieve by Word-Hash==按单词Hash值检索
"Show URL Entries for Word-Hash"=="显示单词Hash值相关的地址"
>Limitations<==>限制<
>Index Reference Size<==>反向词索引大小<
No reference size limitation (this may cause strong CPU load when words are searched that appear very often)==没有索引大小限制当搜索经常出现的单词时这可能会导致CPU负载过大
Limitation of number of references per word:==每个单词的索引数量限制:
(this causes that old references are deleted if that limit is reached)==(这会导致如果达到该限制,旧的索引将被删除)
>Set References Limit<==>设置索引限制<
Select Segment:==选择分段:
"Generate List"=="生成列表"
Cleanup==清理
>Index Deletion<==>删除索引<
>Delete Search Index<==>删除搜索索引<
Stop Crawler and delete Crawl Queues==停止爬虫并删除crawl队列
Delete HTTP &amp; FTP Cache==删除HTTP &amp; FTP缓存
Delete robots.txt Cache==删除robots.txt缓存
Delete cached snippet-fetching failures during search==删除搜索时缓存的摘要抓取失败记录
"Delete"=="删除"
No entry for word '#[word]#'==无'#[word]#'的对应词条
No entry for word hash==无词条对应
Search result==搜索结果
total URLs</td>==全部URL</td>
appearance in</td>==出现在</td>
in link type</td>==链接类型</td>
document type</td>==文件类型</td>
<td>description</td>==<td>描述</td>
<td>title</td>==<td>标题</td>
<td>creator</td>==<td>创建者</td>
<td>subject</td>==<td>主题</td>
<td>url</td>==<td>URL</td>
<td>emphasized</td>==<td>高亮</td>
<td>image</td>==<td>图片</td>
<td>audio</td>==<td>音频</td>
<td>video</td>==<td>视频</td>
<td>app</td>==<td>应用</td>
index of</td>==索引</td>
>Selection</td>==>选择</td>
Display URL List==显示URL列表
Number of lines==行数
all lines==全部
"List Selected URLs"=="列出选中URL"
Transfer RWI to other Peer==传递RWI给其他节点
Transfer by Word-Hash==按字Hash值传递
"Transfer to other peer"=="传递"
to Peer==指定节点
<dd>select==<dd>选择
or enter a hash==或者输入节点的Hash值
Sequential List of Word-Hashes==字Hash值的顺序列表
No URL entries related to this word hash==无入口地址与此字Hash相关
>#[count]# URL entries related to this word hash==>#[count]# 个入口地址与此字Hash相关
Resource</td>==资源</td>
Negative Ranking Factors==负向排名因素
Positive Ranking Factors==正向排名因素
Reverse Normalized Weighted Ranking Sum==反向归一化加权排名和
hash</td>==Hash</td>
dom length</td>==域长度</td>
ybr</td>==YBR</td>
#url comps</td>
url length</td>==URL长度</td>
pos in text</td>==文中位置</td>
pos of phrase</td>==短语位置</td>
pos in phrase</td>==在短语中位置</td>
word distance</td>==字间距离</td>
<td>authority</td>==<td>权威度</td>
<td>date</td>==<td>日期</td>
words in title</td>==标题字数</td>
words in text</td>==内容字数</td>
local links</td>==本地链接</td>
remote links</td>==远端链接</td>
hitcount</td>==命中数</td>
#props</td>==</td>
unresolved URL Hash==未解析URL Hash值
Word Deletion==删除关键字
Deletion of selected URLs==删除选中URL
delete also the referenced URL (recommended, may produce unresolved references==同时删除关联URL (推荐, 虽然在索引时
at other word indexes but they do not harm)==会产生未解析关联, 但是不影响系统性能)
for every resolvable and deleted URL reference, delete the same reference at every other word where==对于已解析并已删除的URL关联来说, 则会删除它与其他关键字的关联
the reference exists (very extensive, but prevents further unresolved references)==(开销很大, 但可阻止产生更多未解析关联)
"Delete reference to selected URLs"=="删除与选中URL的关联"
"Delete Word"=="删除关键字"
Blacklist Extension==黑名单扩展
"Add selected URLs to blacklist"=="添加选中URL到黑名单"
"Add selected domains to blacklist"=="添加选中域到黑名单"
#-----------------------------
#File: IndexControlURLs_p.html
#---------------------------
URL Database Administration<==地址数据库管理<
The local index currently contains #[ucount]# URL references==目前本地索引含有#[ucount]#个地址索引
#URL Retrieval
URL Retrieval<==地址检索<
Retrieve by URL:<==按地址检索:<
Retrieve by URL-Hash==按地址Hash值检索
"Show Details for URL"=="显示地址细节"
"Show Details for URL-Hash"=="显示地址Hash细节"
#Cleanup
>Cleanup<==>清理<
>Index Deletion<==>删除索引<
> Delete local search index (embedded Solr and old Metadata)<==> 删除本地搜索索引(嵌入 Solr 和旧元数据)<
> Delete remote solr index<==> 删除远程solr索引<
> Delete RWI Index (DHT transmission words)<==> 删除反向词索引DHT传输词<
> Delete Citation Index (linking between URLs)<==> 删除引文索引(地址之间的链接)<
> Delete First-Seen Date Table<==> 删除首次出现日期表<
> Delete HTTP &amp; FTP Cache<==> 删除HTTP &amp; FTP缓存<
> Stop Crawler and delete Crawl Queues<==> 停止爬虫并删除爬虫队列<
> Delete robots.txt Cache<==> 删除robots.txt缓存<
value="Delete"==value="删除"
>Optimize Solr<==>优化Solr<
merge to max. <input type="text" name="optimizemax" value="#[optimizemax]#" size="6" maxlength="6" /> segments==合并到最大<input type="text" name="optimizemax" value="#[optimizemax]#" size="6" maxlength="6" /> 个分段
"Optimize Solr"=="优化Solr"
>Reboot Solr Core<==>重启Solr核<
"Shut Down and Re-Start Solr"=="关闭并重启Solr"
Select Segment:==选择分段:
"Generate List"=="生成列表"
Statistics about top-domains in URL Database==地址数据库中顶级域数据
Show top==显示全部URL中的前
domains from all URLs.==个域.
"Generate Statistics"=="生成数据"
Statistics about the top-#[domains]# domains in the database:==数据库中前 #[domains]# 个域的统计数据:
"delete all"=="全部删除"
Domain==域名
URLs==地址
Sequential List of URL-Hashes==地址Hash顺序列表
Loaded URL Export==导出已加载地址
Export File==导出文件
URL Filter==地址过滤器
Export Format==导出格式
Only Domain:==仅域名:
Full URL List:==完整地址列表:
Plain Text List (domains only)==文本文件(仅域名)
HTML (domains as URLs, no title)==HTML (超链接格式的域名, 不包括标题)
#Full URL List <i>(high IO)==Vollständige URL Liste <i>(hoher IO)
Plain Text List (URLs only)==文本文件(仅地址)
HTML (URLs with title)==HTML (带标题的地址)
#XML (RSS)==XML (RSS)
"Export URLs"=="导出地址"
Export to file #[exportfile]# is running .. #[urlcount]# URLs so far==正在导出到 #[exportfile]# .. 已经导出 #[urlcount]# 个URL
Finished export of #[urlcount]# URLs to file==已完成导出 #[urlcount]# 个地址到文件
Export to file #[exportfile]# failed:==导出到文件 #[exportfile]# 失败:
No entry found for URL-hash==未找到合适词条对应地址Hash
"Show Content"=="显示内容"
"Delete URL"=="删除地址"
this may produce unresolved references at other word indexes but they do not harm==这可能和其他关键字产生未解析关联, 但是这并不影响系统性能
"Delete URL and remove all references from words"=="删除地址并从关键字中删除所有关联"
delete the reference to this url at every other word where the reference exists (very extensive, but prevents unresolved references)==删除指向此链接的关联字,(很多, 但是会阻止未解析关联的产生)
#-----------------------------
#File: IndexCreateLoaderQueue_p.html
#---------------------------
Loader Queue==加载器队列
The loader set is empty==该加载器集合为空
There are #[num]# entries in the loader set:==加载器中有 #[num]# 个词条:
>Initiator<==>发起者<
>Depth<==>深度<
>Status<==>状态<
>URL<==>地址<
#-----------------------------
#File: IndexCleaner_p.html
#---------------------------
Index Cleaner==索引整理
>URL-DB-Cleaner==>URL-DB-清理
#ThreadAlive:
#ThreadToString:
Total URLs searched:==搜索到的全部地址:
Blacklisted URLs found:==搜索到的黑名单地址:
Percentage blacklisted:==黑名单占百分比:
last searched URL:==最近搜索到的地址:
last blacklisted URL found:==最近搜索到的黑名单地址:
>RWI-DB-Cleaner==>RWI-DB-清理
RWIs at Start:==启动时反向词索引:
RWIs now:==当前反向词索引:
wordHash in Progress:==处理中的词Hash值:
last wordHash with deleted URLs:==最近含有已删除地址的词Hash值:
Number of deleted URLs in on this Hash:==此Hash中已删除的地址数:
URL-DB-Cleaner - Clean up the database by deletion of blacklisted urls:==URL-DB-清理 - 清理数据库, 会删除黑名单地址:
Start/Resume==开始/继续
Stop==停止
Pause==暂停
RWI-DB-Cleaner - Clean up the database by deletion of words with reference to blacklisted urls:==RWI-数据库-清理 - 清理数据库, 会删除与黑名单URL相关的信息:
#-----------------------------
#File: IndexCreateParserErrors_p.html
#---------------------------
>Rejected URLs<==>被拒绝地址<
Parser Errors==解析错误
Rejected URL List:==被拒绝地址列表:
There are #[num]# entries in the rejected-urls list.==在被拒绝地址列表中有 #[num]# 个词条.
Showing latest #[num]# entries.==显示最近的 #[num]# 个词条.
"show more"=="更多"
"clear list"=="清除列表"
There are #[num]# entries in the rejected-queue:==被拒绝队列中有 #[num]# 个词条:
Executor==执行器
>Time<==>时间<
>URL<==>地址<
Fail-Reason==错误原因
#-----------------------------
#File: IndexCreateQueues_p.html
#---------------------------
Crawl Queue<==爬取队列<
Click on this API button to see an XML with information about the crawler latency and other statistics.==单击此API按钮以查看包含有关爬虫程序延迟和其他统计信息的XML。
This crawler queue is empty==爬取队列为空
Delete Entries:==删除词条:
>Initiator<==>发起者<
>Profile<==>资料<
>Depth<==>深度<
Modified Date==修改日期
Anchor Name==锚点名
>URL<==>地址<
"Delete"=="删除"
>Count<==>计数<
Delta/ms==延迟/ms
>Host<==>服务器<
#-----------------------------
#File: IndexDeletion_p.html
#---------------------------
Index Deletion<==索引删除<
The search index contains #[doccount]# documents. You can delete them here.==搜索索引包含#[doccount]#篇文档。你可以在这儿删除它们。
Deletions are made concurrently which can cause that recently deleted documents are not yet reflected in the document count.==删除是并发进行的,这可能导致最近删除的文档尚未反映在文档计数中。
Index deletion will not immediately reduce the storage size on disk because entries are only marked as deleted in a first step.==索引删除不会立即减少磁盘上的存储大小,因为条目仅在第一步中被标记为已删除。
The storage size will later on shrink by itself if new documents are indexed or you can force a shrinking by <a href="/IndexControlURLs_p.html">performing an "Optimize Solr" procedure.</a>==如果新文档被索引,存储大小将在稍后自行缩小,或者你可以通过执行<a href="/IndexControlURLs_p.html">优化Solr</a>过程强制缩小。
Delete by URL Matching<==通过URL匹配删除<
Delete all documents within a sub-path of the given urls. That means all documents must start with one of the url stubs as given here.==删除给定网址的子路径中的所有文档. 这意味着所有文档必须以此处给出的其中一个url存根开头.
One URL stub, a list of URL stubs<br/>or a regular expression==一个URL存根, 一个URL存根列表<br/> 或一条正则表达式
Matching Method<==匹配方法<
sub-path of given URLs==给定URL的子路径
matching with regular expression==与正则表达式匹配
"Simulate Deletion"=="模拟删除"
"no actual deletion, generates only a deletion count"=="没有实际删除,只生成删除计数"
"Engage Deletion"=="真正删除"
"simulate a deletion first to calculate the deletion count"=="首先请模拟删除以计算删除数量"
"engaged"=="删除了"
selected #[count]# documents for deletion==选择 #[count]# 篇文档以删除
deleted #[count]# documents==删除了 #[count]# 篇文档
Delete by Age<==按年龄删除<
Delete all documents which are older than a given time period.==删除所有超过给定时间段的文档.
Time Period<==时间段<
All documents older than==所有文件年龄超过
years<==年<
months<==月<
days<==日<
hours<==小时<
Age Identification<==年龄识别<
>load date==>加载日期
>last-modified==>上次修改
Delete Collections<==删除收集<
Delete all documents which are inside specific collections.==删除特定收集中的所有文档.
Not Assigned<==未分配<
Delete all documents which are not assigned to any collection==删除未分配给任何收集的所有文档
, separated by ',' (comma) or '|' (vertical bar); or==, 以','(逗号)或'|'(竖线)分隔; 或
>generate the collection list...==>生成收集列表...
Assigned<==分配的<
Delete all documents which are assigned to the following collection(s)==删除分配给以下收集的所有文档
Delete by Solr Query<==通过Solr查询删除<
This is the most generic option: select a set of documents using a solr query.==这是最通用的选项: 使用solr查询选择一组文档.
#-----------------------------
#File: IndexExport_p.html
#---------------------------
>Index Export<==>索引导出<
>The local index currently contains==> 本地索引目前包含
documents.<==篇文档.<
#Loaded URL Export
>Loaded URL Export<==>加载的地址导出<
>Export Path<==>导出路径<
>URL Filter<==>地址过滤器<
>query<==>查询<
>maximum age (seconds, -1 = unlimited)<==>最大年龄(秒, -1=无限制)<
>Export Format<==>导出格式<
>Full Data Records:<==>完整数据记录:<
>Full URL List:<==>完整地址列表:<
>Only Domain:<==>仅仅域名:<
>Only Text:<==>仅仅文本:<
"Export"=="导出"
#Dump and Restore of Solr Index
>Dump and Restore of Solr Index<==>Solr索引的转储和恢复<
"Create Dump"=="创建转储"
>Dump File<==>转储文件<
"Restore Dump"=="恢复转储"
#-----------------------------
#File: IndexFederated_p.html
#---------------------------
Index Sources &amp; Targets==索引来源&目标
YaCy supports multiple index storage locations.==YaCy支持多地索引储存。
As an internal indexing database a deep-embedded multi-core Solr is used and it is possible to attach also a remote Solr.==内部索引数据库使用了深度嵌入式多核Solr并且还可以附加远端Solr。
>Solr Search Index<==>Solr搜索索引<
Solr stores the main search index. It is the home of two cores, the default 'collection1' core for documents and the 'webgraph' core for a web structure graph. Detailed information about the used Solr fields can be edited in the <a href="IndexSchema_p.html">Schema Editor</a>.==Solr存储主搜索索引。它是两个核心的所在地默认的'collection1'核心用于文档,'webgraph'核心用于网络结构图。可以在<a href="IndexSchema_p.html">模式编辑器</a>中编辑有关已用Solr字段的详细信息。
>Lazy Value Initialization&nbsp;<==>惰性值初始化&nbsp;<
If checked, only non-zero values and non-empty strings are written to Solr fields.==如果选中,则仅将非零值和非空字符串写入 Solr 字段。
>Use deep-embedded local Solr&nbsp;<==>使用深度嵌入的本地Solr&nbsp;<
This will write the YaCy-embedded Solr index which stored within the YaCy DATA directory.==这将写入存储在YaCy的DATA目录下的 YaCy嵌入式Solr索引。
>Use remote Solr server(s)&nbsp;<==>使用远程Solr服务器&nbsp;<
>Allow self-signed certificates <==>允许自签名证书 <
write-enabled (if unchecked, the remote server(s) will only be used as search peers)==启用写入(如果未选中,远程服务器将仅用作搜索节点)
value="Set"==value="设置"
Web Structure Index==网络结构图索引
The web structure index is used for host browsing (to discover the internal file/folder structure), ranking (counting the number of references) and file search (there are about fourty times more links from loaded pages as in documents of the main search index). ==网页结构索引用于服务器浏览(发现内部文件/文件夹结构、排名计算引用次数和文件搜索加载页面的链接大约是主搜索索引的文档中的40倍
use citation reference index (lightweight and fast)==使用引文参考索引(轻量且快速)
use webgraph search index (rich information in second Solr core)==使用网图搜索索引第二个Solr核心中的丰富信息
Peer-to-Peer Operation==P2P运行
The 'RWI' (Reverse Word Index) is necessary for index transmission in distributed mode. For portal or intranet mode this must be switched off.=='RWI'(反向词索引)对于分布式模式下的索引传输是必需的。在门户或内网模式下,必须将其关闭。
support peer-to-peer index transmission (DHT RWI index)==支持点对点索引传输DHT RWI索引
#-----------------------------
#File: IndexImportMediawiki_p.html
#---------------------------
MediaWiki Dump Import==MediaWiki转储导入
No import thread is running, you can start a new thread here==当前无运行导入任务, 不过你可以在这开始
Bad input data:==损坏数据:
MediaWiki Dump File Selection: select a 'bz2' file==MediaWiki转储文件选择: 选择一个'bz2'文件
You can import <a href="https://dumps.wikimedia.org/backup-index-bydb.html" target="_blank">MediaWiki dumps</a> here. An example is the file==你可以在这里导入<a href="https://dumps.wikimedia.org/backup-index-bydb.html" target="_blank">MediaWiki转储</a>. 示例文件如
Dumps must be in XML format and may be compressed in gz or bz2. Place the file in the YaCy folder or in one of its sub-folders.==转储文件必须是XML格式, 可以用gz或bz2压缩. 将该文件放进YaCy目录或其子目录中.
"Import MediaWiki Dump"=="导入MediaWiki转储"
When the import is started, the following happens:==开始导入时, 会进行以下工作:
The dump is extracted on the fly and wiki entries are translated into Dublin Core data format. The output looks like this:==转储被即时解压, 百科词条被转换为Dublin Core数据格式. 输出如下所示:
Each 10000 wiki records are combined in one output file which is written to /DATA/SURROGATES/in into a temporary file.==每10000个百科记录被合并到一个输出文件中, 以临时文件形式写入 /DATA/SURROGATES/in.
When each of the generated output file is finished, it is renamed to a .xml file==每个生成的输出文件完成后, 会被重命名为.xml文件
Each time a xml surrogate file appears in /DATA/SURROGATES/in, the YaCy indexer fetches the file and indexes the record entries.==只要 /DATA/SURROGATES/in 中含有 xml文件, YaCy索引器就会读取它们并为其中的词条制作索引.
When a surrogate file is finished with indexing, it is moved to /DATA/SURROGATES/out==当索引完成时, xml文件会被移动到 /DATA/SURROGATES/out
You can recycle processed surrogate files by moving them from /DATA/SURROGATES/out to /DATA/SURROGATES/in==你可以将文件从/DATA/SURROGATES/out 移动到 /DATA/SURROGATES/in 以重复索引.
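# A minimal shell sketch of the recycling step described above (paths as given
# on this page, relative to the YaCy installation directory):
#   mv DATA/SURROGATES/out/*.xml DATA/SURROGATES/in/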
Import Process==导入进程
Thread:==线程:
Processed:==已完成:
Wiki Entries==百科词条
Speed:==速度:
articles per second<==文章/秒<
Running Time:==运行时间:
hours,==小时,
minutes<==分<
Remaining Time:==剩余时间:
#-----------------------------
#File: IndexImportOAIPMH_p.html
#---------------------------
OAI-PMH Import==OAI-PMH导入
Results from the import can be monitored in the <a href="CrawlResults.html?process=7">indexing results for surrogates==导入结果可以在此监控: <a href="CrawlResults.html?process=7">备份的索引结果
Single request import==单个导入请求
This will submit only a single request as given here to a OAI-PMH server and imports records into the index==向OAI-PMH服务器提交如下导入请求, 并将返回记录导入索引
"Import OAI-PMH source"=="导入OAI-PMH源"
Source:==源:
Processed:==已处理:
records<==返回记录<
#ResumptionToken:==ResumptionToken:
Import failed:==导入失败:
Import all Records from a server==从服务器导入全部记录
Import all records that follow according to resumption elements into index==根据恢复元素将所有后续记录导入索引
"import this source"=="导入此源"
::or&nbsp;==::或&nbsp;
"import from a list"=="从列表导入"
Import started!==已开始导入!
Bad input data:==损坏数据:
#-----------------------------
#File: IndexImportOAIPMHList_p.html
#---------------------------
List of #[num]# OAI-PMH Servers==#[num]# 个OAI-PMH服务器
"Load Selected Sources"=="加载选中源"
OAI-PMH source import list==导入OAI-PMH源
#OAI Source List==OAI Quellen Liste
>Source<==>源<
Import List==导入列表
#>Thread<==>Thread<
#>Source<==>Quelle<
>Processed<br />Chunks<==>已处理<br />块<
>Imported<br />Records<==>已导入<br />记录<
>Speed<br />(records/second)==>速度<br />(记录/秒)
#-----------------------------
#File: IndexImportWarc_p.html
#---------------------------
Warc Import==Warc 导入
Web Archive File Import==Web存档文件导入
No import thread is running, you can start a new thread here==没有正在运行的导入线程,你可以在此处启动新线程
Warc File Selection: select an warc file (which may be gz compressed)==Warc文件选择: 选择一个warc文件(可能是gz压缩的)
You can download warc archives for example here==你可以在此处下载warc档案
Internet Archive==互联网档案
Import Warc File==导入Warc文件
Import Process==导入流程
Thread:==线程:
Warc File:==Warc文件:
Processed:==处理好的:
Entries==词条
Speed:==速度:
pages per second==页/秒
Running Time:==运行时间:
hours,==小时,
minutes<==分钟<
Remaining Time:==剩余时间:
#-----------------------------
#File: IndexReIndexMonitor_p.html
#---------------------------
Field Re-Indexing<==字段重新索引<
In case that an index schema of the embedded/local index has changed, all documents with missing field entries can be indexed again with a reindex job.==如果嵌入式/本地索引的索引架构发生更改,则可以使用重新索引作业再次索引所有缺少字段条目的文档。
"refresh page"=="刷新页面"
Documents in current queue<==当前队列中的文档<
Documents processed<==已处理的文档<
current select query==当前选择查询
"start reindex job now"=="立即开始重新索引作业"
"stop reindexing"=="停止重新索引"
Remaining field list==剩余字段列表
reindex documents containing these fields:==重新索引包含这些字段的文档:
Re-Crawl Index Documents==重新抓取索引文档
Searches the local index and selects documents to add to the crawler (recrawl the document).==搜索本地索引并选择要添加到爬虫的文档(重新爬取文档)。
This runs transparent as background job.==这作为后台作业透明运行。
Documents are added to the crawler only if no other crawls are active==仅当没有其他爬取处于活动状态时,才会将文档添加到爬虫中
and are added in small chunks.==并以小块添加。
"start recrawl job now"=="立即开始重新抓取作业"
"stop recrawl job"=="停止重新抓取作业"
Re-Crawl Query Details==重新抓取查询详情
Documents to process==待处理的文档
Current Query==当前查询
Edit Solr Query==编辑Solr查询
update==更新
to re-crawl documents selected with the given query.==重新抓取使用给定查询选择的文档。
Include failed URLs==包含失败的地址
>Field<==>字段<
>count<==>计数<
Re-crawl works only with an embedded local Solr index!==重新抓取仅适用于嵌入的本地Solr索引
Simulate==模拟
Check only how many documents would be selected for recrawl==仅检查将选择多少文档进行重新抓取
"Browse metadata of the #[rows]# first selected documents"=="浏览 #[rows]# 个第一个选定文档的元数据"
document(s)</a>#(/showSelectLink)# selected for recrawl.==篇文档</a>#(/showSelectLink)#被选中以重新爬取。
>Solr query <==>Solr查询 <
Set defaults==设置默认值
"Reset to default values"=="重置为默认值"
Last #(/jobStatus)#Re-Crawl job report==最近的#(/jobStatus)#重新抓取作业报告
Automatically refreshing==自动刷新
An error occurred while trying to refresh automatically==尝试自动刷新时出错
The job terminated early due to an error when requesting the Solr index.==由于请求Solr索引时出错作业提前终止。
>Status<==>状态<
"Running"=="运行中"
"Shutdown in progress"=="正在关闭"
"Terminated"=="已终止"
Running::Shutdown in progress::Terminated==运行中::正在关闭::已终止
>Query<==>查询<
>Start time<==>开启时间<
>End time<==>结束时间<
URLs added to the crawler queue for recrawl==添加到爬虫队列以进行重新爬取的地址
>Recrawled URLs<==>已重新爬取的地址<
URLs rejected for some reason by the crawl stacker or the crawler queue. Please check the logs for more details.==由于某种原因在抓取堆栈器或抓取器队列中被拒绝的地址。请检查日志以获取更多详细信息。
>Rejected URLs<==>已被拒绝的地址<
>Malformed URLs<==>格式错误的地址<
"#[malformedUrlsDeletedCount]# deleted from the index"=="#[malformedUrlsDeletedCount]# deleted from the index"
> Refresh<==> 刷新<
#-----------------------------
#File: IndexSchema_p.html
#---------------------------
Solr Schema Editor==Solr模式编辑器
If you use a custom Solr schema you may enter a different field name in the column 'Custom Solr Field Name' of the YaCy default attribute name==如果您使用自定义 Solr 架构您可以在YaCy默认属性名称的'自定义Solr字段名称'列中输入不同的字段名称
Select a core:==选择核心:
the core can be searched at==核心可以在以下位置搜索
Active==激活
Attribute==属性
Custom Solr Field Name==自定义Solr字段名称
Comment==注释
show active==显示激活
show all available==显示全部可用
show disabled==显示未激活
"Set"=="设置"
"reset selection to default"=="将选择值重置为默认值"
>Reindex documents<==>重新索引文档<
If you unselected some fields, old documents in the index still contain the unselected fields.==如果您取消选择了某些字段, 索引中的旧文档仍会包含这些字段。
To physically remove them from the index you need to reindex the documents.==要从索引中实际删除它们,您需要重新索引文档。
Here you can reindex all documents with inactive fields.==在这里,您可以重新索引所有具有非活动字段的文档。
"reindex Solr"=="重新索引Solr"
You may monitor progress (or stop the job) under <a href="IndexReIndexMonitor_p.html">IndexReIndexMonitor_p.html</a>==您可以在<a href="IndexReIndexMonitor_p.html">IndexReIndexMonitor_p.html</a>下监控进度(或停止工作)
#-----------------------------
#File: IndexShare_p.html
#---------------------------
Index Sharing==索引共享
The local index currently consists of (at least) #[wcount]# reverse word indexes and #[ucount]# URL references==本地索引目前包含(至少) #[wcount]# 反向词索引和 #[ucount]# 地址引用
Index:&nbsp;==索引:&nbsp;
distribute&nbsp;==分发&nbsp;
&nbsp;&nbsp;&nbsp;receive grant default: ==&nbsp;&nbsp;&nbsp;接受准许默认值:
receive==接受
&nbsp;&nbsp;&nbsp;for each remote peer&nbsp;==&nbsp;&nbsp;&nbsp;对每个远端节点&nbsp;
&nbsp;links/minute&nbsp;==&nbsp;链接/分钟&nbsp;
&nbsp;words/minute&nbsp;==&nbsp;反向词/分钟&nbsp;
Set==设置
#-----------------------------
#File: Load_MediawikiWiki.html
#---------------------------
YaCy '#[clientname]#': Configuration of a Wiki Search==YaCy'#[clientname]#':Wiki搜索配置
Integration in MediaWiki==MediaWiki整合
It is possible to insert wiki pages into the YaCy index using a web crawl on that pages.==通过对百科页面进行网页爬取, 能将其添加到YaCy索引中.
This guide helps you to crawl your wiki and to insert a search window in your wiki pages.==此向导帮助你爬取你的百科网页并在其中添加一个搜索框.
Retrieval of Wiki Pages==接收百科网页
The following form is a simplified crawl start that uses the proper values for a wiki crawl.==下栏是简化的爬取起始设置, 使用了适合百科爬取的参数值.
Just insert the front page URL of your wiki.==请填入百科的地址.
After you started the crawl you may want to get back==爬取开始后,
to this page to read the integration hints below.==你可能需要返回此页面阅读以下提示.
URL of the wiki main page==百科主页地址
This is a crawl start point==将作为爬取起始点
"Get content of Wiki: crawl wiki pages"=="获取百科内容: 爬取百科页面"
Inserting a Search Window to MediaWiki==在MediaWiki中添加搜索框
To integrate a search window into a MediaWiki, you must insert some code into the wiki template.==在百科模板中添加以下代码以将搜索框集成到MediaWiki中.
There are several templates that can be used for MediaWiki, but in this guide we consider that==MediaWiki中有多种模板,
you are using the default template, 'MonoBook.php':==在此我们使用默认模板 'MonoBook.php':
open skins/MonoBook.php==打开skins/MonoBook.php
find the line where the default search window is displayed, there are the following statements:==找到搜索框显示部分代码, 如下:
Remove that code or set it in comments using '&lt;!--' and '--&gt;'==删除以上代码或者用 '&lt;!--' '--&gt;' 将其注释掉
Insert the following code:==插入以下代码:
Search with YaCy in this Wiki:==在此百科中使用YaCy搜索:
value="Search"==value="搜索"
Check all appearances of static IPs given in the code snippet and replace it with your own IP, or your host name==用你自己的IP或者服务器名替代代码中给出的IP地址
You may want to change the default text elements in the code snippet==你可以更改代码中的文本元素
To see all options for the search widget, look at the more generic description of search widgets at==搜索框详细设置, 请参见
the <a href="ConfigLiveSearch.html">configuration for live search</a>.==<a href="ConfigLiveSearch.html">搜索栏集成: 即时搜索</a>.
#-----------------------------
#File: Load_PHPBB3.html
#---------------------------
Configuration of a phpBB3 Search==phpBB3搜索配置
Integration in phpBB3==phpBB3整合
It is possible to insert forum pages into the YaCy index using a database import of forum postings.==通过数据库导入论坛帖子, 能将论坛页面添加到YaCy索引中.
This guide helps you to insert a search window in your phpBB3 pages.==此向导能帮助你在你的phpBB3论坛页面中添加搜索框.
Retrieval of phpBB3 Forum Pages using a database export==phpBB3论坛页面需使用数据库导出
Forum posting contain rich information about the topic, the time, the subject and the author.==论坛帖子中含有话题、时间、主题和作者等丰富信息.
This information is in an bad annotated form in web pages delivered by the forum software.==此类信息往往由论坛散播,并且对于搜索引擎来说,它们的标注很费解.
It is much better to retrieve the forum postings directly from the database.==所以, 直接从数据库中获取帖子内容效果更好.
This will cause that YaCy is able to offer nice navigation features after searches.==这会使YaCy在搜索后提供良好的导航功能.
YaCy has a phpBB3 extraction feature, please go to the <a href="ContentIntegrationPHPBB3_p.html">phpBB3 content integration</a> servlet for direct database imports.==YaCy能够解析phpBB3关键字, 参见 <a href="ContentIntegrationPHPBB3_p.html">phpBB3内容集成</a> 直接导入数据库方法.
Retrieval of phpBB3 Forum Pages using a web crawl==接受phpBB3论坛页面的网页爬取
The following form is a simplified crawl start that uses the proper values for a phpbb3 forum crawl.==下栏是简化的爬取起始设置, 使用了适合phpBB3论坛爬取的参数值.
Just insert the front page URL of your forum. After you started the crawl you may want to get back==将论坛首页填入表格. 开始爬取后,
to this page to read the integration hints below.==你可能需要返回此页面阅读以下提示.
URL of the phpBB3 forum main page==phpBB3论坛主页
This is a crawl start point==这是爬取起始点
"Get content of phpBB3: crawl forum pages"=="获取phpBB3内容: 爬取论坛页面"
Inserting a Search Window to phpBB3==在phpBB3中添加搜索框
To integrate a search window into phpBB3, you must insert some code into a forum template.==在论坛模板中添加以下代码以将搜索框集成到phpBB3中.
There are several templates that can be used for phpBB3, but in this guide we consider that==phpBB3中有多种模板,
you are using the default template, 'prosilver'==在此我们使用默认模板 'prosilver'.
open styles/prosilver/template/overall_header.html==打开 styles/prosilver/template/overall_header.html
find the line where the default search window is displayed, thats right behind the <pre>&lt;div id="search-box"&gt;</pre> statement==找到搜索框显示代码部分, 它们在 <pre>&lt;div id="search-box"&gt;</pre> 下面
Insert the following code right behind the div tag==在div标签后插入以下代码
YaCy Forum Search==YaCy论坛搜索
;YaCy Search==;YaCy搜索
Check all appearances of static IPs given in the code snippet and replace it with your own IP, or your host name==用你自己的IP或者服务器名替代代码中给出的IP地址
You may want to change the default text elements in the code snippet==你可以更改代码中的文本元素
To see all options for the search widget, look at the more generic description of search widgets at==搜索框详细设置, 请参见
the <a href="ConfigLiveSearch.html">configuration for live search</a>.==der Seite <a href="ConfigLiveSearch.html">搜索栏集成: 即时搜索</a>.
#-----------------------------
#File: Load_RSS_p.html
#---------------------------
Configuration of a RSS Search==RSS搜索配置
Loading of RSS Feeds<==加载RSS订阅源<
RSS feeds can be loaded into the YaCy search index.==RSS订阅源可以被加载到YaCy搜索索引中.
This does not load the rss file as such into the index but all the messages inside the RSS feeds as individual documents.==这不会将rss文件本身加载到索引中, 而是将RSS订阅源内的所有消息作为单独的文档加载.
URL of the RSS feed==RSS订阅源地址
>Preview<==>预览<
"Show RSS Items"=="显示RSS词条"
>Indexing<==>创建索引<
Available after successful loading of rss feed in preview==在预览中成功加载rss订阅源后可用
"Add All Items to Index (full content of url)"=="将所有词条添加到索引(地址中的全部内容)"
>once<==>一次<
>load this feed once now<==>立即读取一次此订阅源<
>scheduled<==>定时<
>repeat the feed loading every<==>读取此订阅源每隔<
>minutes<==>分钟<
>hours<==>小时<
>days<==>天<
>collection<==>收集<
> automatically.==>.
>List of Scheduled RSS Feed Load Targets<==>定时RSS订阅源读取目标列表<
>Title<==>标题<
>URL/Referrer<==>地址/参照网址<
>Recording<==>正在记录<
>Last Load<==>上次读取<
>Next Load<==>将要读取<
>Last Count<==>目前计数<
>All Count<==>全部计数<
>Avg. Update/Day<==>每天平均更新次数<
"Remove Selected Feeds from Scheduler"=="删除选中饲料"
"Remove All Feeds from Scheduler"=="删除所有饲料"
>Available RSS Feed List<==>可用RSS饲料列表<
"Remove Selected Feeds from Feed List"=="删除选中饲料"
"Remove All Feeds from Feed List"=="删除所有饲料"
"Add Selected Feeds to Scheduler"=="添加选中饲料到定时任务"
>new<==>新<
>enqueued<==>已加入队列<
>indexed<==>已索引<
>RSS Feed of==>RSS订阅源
>Author<==>作者<
>Description<==>描述<
>Language<==>语言<
>Date<==>日期<
>Time-to-live<==>TTL<
>Docs<==>文件<
>State<==>状态<
#>URL<==>URL<
"Add Selected Items to Index (full content of url)"=="添加选中词条到索引(地址中全部内容)"
#-----------------------------
#File: Messages_p.html
#---------------------------
>Messages==>短消息
Date</td>==日期</td>
From</td>==来自</td>
To</td>==发送至</td>
>Subject==>主题
Action==动作
From:==来自:
To:==发送至:
Date:==日期:
#Subject:==Betreff:
>view==>查看
reply==回复
>delete==>删除
Compose Message==撰写短消息
Send message to peer==发送消息至节点
"Compose"=="撰写"
Message:==短消息:
inbox==收件箱
#-----------------------------
#File: MessageSend_p.html
#---------------------------
Send message==发送短消息
You cannot send a message to==不能发送消息至
The peer does not respond. It was now removed from the peer-list.==远端节点未响应, 将从节点列表中删除.
The peer <b>==节点 <b>
is alive and responded:==在线并已响应:
You are allowed to send me a message==你现在可以给我发送消息
kb and an==kb和一个
attachment &le;==附件 &le;
Your Message==你的短消息
Subject:==主题:
Text:==内容:
"Enter"=="发送"
"Preview"=="预览"
You can use==你可以在这使用
Wiki Code</a> here.==Wiki代码</a>.
Preview message==预览消息
The message has not been sent yet!==短消息未发送!
The peer is alive but cannot respond. Sorry.==节点属于活动状态但是无响应.
Your message has been sent. The target peer responded:==你的短消息已发送. 接收节点返回:
The target peer is alive but did not receive your message. Sorry.==抱歉, 接收节点属于活动状态但是没有接收到你的消息.
Here is a copy of your message, so you can copy it to save it for further attempts:==这是你的消息副本, 可将其保存以备用:
You cannot call this page directly. Instead, use a link on the <a href="Network.html">Network</a> page.==你不能直接使用此页面. 请使用 <a href="Network.html">网络</a> 页面的对应功能.
#-----------------------------
#File: Network.html
#---------------------------
YaCy Search Network==YaCy搜索网络
YaCy Network<==YaCy网络<
The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML.
Click the API icon to see the XML.==点击API图标查看XML.
To see a list of all APIs==获取所有API
please visit the==请访问
API wiki page==API百科页面
Network Overview==网络一览
Active&nbsp;Principal&nbsp;and&nbsp;Senior&nbsp;Peers==主动&nbsp;骨干&nbsp;和&nbsp;高级&nbsp;节点
Passive&nbsp;Senior&nbsp;Peers==被动&nbsp;高级&nbsp;节点
Junior&nbsp;(fragment)&nbsp;Peers==初级&nbsp;(碎片)&nbsp;节点
Network History==网络历史
<b>Count of Connected Senior Peers</b> in the last two days, scale = 1h==<b>过去两天连接的高级节点数</b>, 尺度 = 1小时
<b>Count of all Active Peers Per Day</b> in the last week, scale = 1d==<b>过去1周内每天所有主动节点数</b>, 尺度 = 1天
<b>Count of all Active Peers Per Week</b> in the last 30d, scale = 7d==<b>过去30天内每周所有主动节点数</b>, 尺度 = 7天
<b>Count of all Active Peers Per Month</b> in the last 365d, scale = 30d==<b>过去365天中每月所有主动节点数</b>, 尺度 = 30天
Active Principal and Senior Peers in '#[networkName]#' Network== '#[networkName]#' 网络中的主动骨干高级节点
Passive Senior Peers in '#[networkName]#' Network== '#[networkName]#' 网络中的被动高级节点
Junior Peers (a fragment) in '#[networkName]#' Network=='#[networkName]#' 网络中的初级(碎片)节点
Manually contacting Peer==手动联系节点
Active Senior==主动高级
Passive Senior==被动高级
Junior (fragment)==初级(碎片)
>Network<==>网络<
>Online Peers<==>在线节点<
>Number of<br/>Documents<==>文件<br/>数目<
Indexing Speed:==索引速度:
Pages Per Minute (PPM)==页面/分钟(PPM)
Query Frequency:==请求频率:
Queries Per Hour (QPH)==请求/小时(QPH)
>Today<==>今天<
>Last Hour<==>1小时前<
>Last&nbsp;Week<==>最近一周<
>Last&nbsp;Month<==>最近一月<
>Now<==>现在<
>Active<==>活动<
>Passive<==>被动<
>Potential<==>潜在<
>This Peer<==>本机节点<
no remote #[peertype]# peer for this list known==当前列表中无远端 #[peertype]# 节点.
Showing #[num]# entries from a total of #[total]# peers.==显示全部 #[total]# 个节点中的 #[num]# 个.
send&nbsp;<strong>M</strong>essage/<br/>show&nbsp;<strong>P</strong>rofile/<br/>edit&nbsp;<strong>W</strong>iki/<br/>browse&nbsp;<strong>B</strong>log==发送消息(<strong>m</strong>)/<br/>显示资料(<strong>p</strong>)/<br/>编辑百科(<strong>w</strong>)/<br/>浏览博客(<strong>b</strong>)
Search for a peername (RegExp allowed)==搜索节点名称(允许正则表达式)
"Search"=="搜索"
Name==名称
Address==地址
#Hash==Hash
Type==类型
Release/<br/>SVN==YaCy版本/<br/>SVN
Last<br/>Seen==最后<br/>上线
Location==位置
>URLs for<br/>Remote<br/>Crawl<==>用于<br/>远端<br/>爬取的URL<
Offset==偏移
Send message to peer==发送消息至节点
View profile of peer==查看节点资料
Read and edit wiki on peer==查看并编辑百科
Browse blog of peer==查看博客
"DHT Receive: yes"=="接收DHT: 是"
"DHT receive enabled"=="打开DHT接收"
"DHT Receive: no; #[peertags]#"=="接收DHT: 否; #[peertags]#"
"DHT Receive: no"=="接收DHT: 否"
"no DHT receive"=="无接收DHT"
"Accept Crawl: no"=="接受爬取: 否"
"no crawl"=="无爬取"
"Accept Crawl: yes"=="接受爬取: 是"
"crawl possible"=="可以爬取"
Contact: passive==通信: 被动
Contact: direct==通信: 直接
Seed download: possible==种子下载: 可用
runtime:==运行时间:
Peers==节点
URLs for<br/>Remote Crawl==远端<br/>爬取的地址
"The YaCy Network"=="YaCy网络"
Indexing<br/>PPM==索引<br/>PPM
(public&nbsp;local)==(公共/本地)
(remote)==(远端)
Your Peer:==你的节点:
>Name<==>名称<
>Info<==>信息<
>Version<==>版本<
>Release<==>版本<
>Age<==>年龄(天)<
>UTC<==>时区<
>Uptime<==>运行时间<
>Links<==>链接<
>RWIs<==>反向词索引<
>Sent DHT<==>已发送DHT<
>Received DHT<==>已接受DHT<
>Word Chunks<==>词汇块<
>Sent==>已发送
>Received==>已接受
>DHT Word Chunks<==>DHT词汇块<
Sent<br/>URLs==已发送网址
Received<br/>URLs==已接收网址
Known<br/>Seeds==已知种子
Sent<br/>Words==已发送词语
Received<br/>Words==已接收词语
Connects<br/>per hour==连接/小时
>dark green font<==>深绿色字<
senior/principal peers==高级/主要节点
>light green font<==>浅绿色字<
>passive peers<==>被动节点<
>pink font<==>粉色字<
junior peers==初级节点
red point==红点
this peer==本机节点
>grey waves<==>灰色波浪<
>crawling activity<==>爬取活动<
>green radiation<==>绿色辐射圆<
>strong query activity<==>强烈请求活动<
>red lines<==>红线<
>DHT-out<==>DHT输出<
>green lines<==>绿线<
>DHT-in<==>DHT输入<
#-----------------------------
#File: News.html
#---------------------------
Overview==概况
>Incoming&nbsp;News<==>传入的新闻<
>Processed&nbsp;News<==>处理的新闻<
>Outgoing&nbsp;News<==>传出的新闻<
>Published&nbsp;News<==>发布的新闻<
This is the YaCyNews system (currently under testing).==这是YaCy新闻系统(测试中).
The news service is controlled by several entry points:==新闻服务会因为下面的操作产生:
A crawl start with activated remote indexing will automatically create a news entry.==由远端创建索引激活的一次爬取会自动创建一个新闻词条.
Other peers may use this information to prevent double-crawls from the same start point.==其他的节点能利用此信息以防止相同起始点的二次爬取.
A table with recently started crawls is presented on the Index Create - page=="索引创建"-页面会显示最近启动的爬取.
A change in the personal profile will create a news entry. You can see recently made changes of==个人信息的改变会创建一个新闻词条, 可以在网络个人信息页面查看,
profile entries on the Network page, where that profile change is visualized with a '*' beside the 'P' (profile) - selector.==以带有 '*' 的 'P' (资料)标记出.
Publishing of added or modified translation for the user interface.==发布用户界面翻译的添加或者修改信息。
Other peers may include it in their local translation list.==其他节点可能会接受这些翻译。
To publish a translation, use the integrated==要发布新的翻译,请用
translation editor==翻译编辑器
to add a translation and publish it afterwards.==来添加翻译并发布。
More news services will follow.==接下来会有更多的新闻服务.
Above you can see four menues:==上面四个菜单选项分别为:
<strong>Incoming News (#[insize]#)</strong>: latest news that arrived your peer.==<strong>传入的新闻(#[insize]#)</strong>: 发送至你节点的新闻.
Only these news will be used to display specific news services as explained above.==只有这些新闻会被用于显示上述的特定新闻服务.
You can process these news with a button on the page to remove their appearance from the IndexCreate and Network page==你可以用页面上的按钮处理这些新闻, 使它们不再出现在'索引创建'和'网络'页面中
<strong>Processed News (#[prsize]#)</strong>: this is simply an archive of incoming news that you removed by processing.==<strong>处理的新闻(#[prsize]#)</strong>: 此页面显示你已删除的传入新闻存档.
<strong>Outgoing News (#[ousize]#)</strong>: here your can see news entries that you have created. These news are currently broadcasted to other peers.==<strong>传出的新闻(#[ousize]#)</strong>: 此页面显示你节点创建的新闻词条, 正在发布给其他节点.
you can stop the broadcast if you want.==你也可以选择停止发布.
<strong>Published News (#[pusize]#)</strong>: your news that have been broadcasted sufficiently or that you have removed from the broadcast list.==<strong>发布的新闻(#[pusize]#)</strong>: 显示已经完全发布出去的新闻或者从传出列表中删除的新闻.
Originator==发起者
Created==创建时间
Category==分类
Received==接收时间
Distributed==已发布
Attributes==属性
"#(page)#::Process Selected News::Delete Selected News::Abort Publication of Selected News::Delete Selected News#(/page)#"=="#(page)#::处理选中新闻::删除选中新闻::停止发布选中新闻::删除选中新闻#(/page)#"
"#(page)#::Process All News::Delete All News::Abort Publication of All News::Delete All News#(/page)#"=="#(page)#::处理所有新闻::删除所有新闻::停止发布所有新闻::删除所有新闻#(/page)#"
#-----------------------------
#File: Performance_p.html
#---------------------------
Performance Settings==性能设置
Online Caution Settings:==在线警告设置:
refresh graph==刷新图表
#Memory Settings
Memory Settings==内存设置
Memory reserved for <abbr title="Java Virtual Machine">JVM</abbr>==为<abbr title="Java Virtual Machine">JVM</abbr>保留的内存
"Set"=="设置"
#Resource Observer
Resource Observer==资源查看器
Memory state==内存状态
>proper<==>合适<
>exhausted<==>耗尽<
Reset state==重置状态
Manually reset to 'proper' state==手动设置到'合适'状态
Enough memory is available for proper operation.==有足够内存保证正常运行.
Within the last eleven minutes, at least four operations have tried to request memory that would have reduced free space within the minimum required.==在过去的11分钟内至少有四次操作尝试请求内存这将减少所需的最低可用空间。
Minimum required==最低要求
Amount of memory (in Mebibytes) that should at least be free for proper operation==为保证正常运行的最低内存量以MB为单位
Disable <abbr title="Distributed Hash Table">DHT</abbr>-in below.==当低于其值时,关闭<abbr title="Distributed Hash Table">DHT</abbr>输入.
Free space disk==空闲硬盘空间
Steady-state minimum==稳态最小值
Amount of space (in Mebibytes) that should be kept free as steady state==为保持稳定状态所需的空闲硬盘空间以MB为单位
Disable crawls when free space is below.==当空闲硬盘低于其值时,停止爬取。
Absolute minimum==绝对最小值
Amount of space (in Mebibytes) that should at least be kept free as hard limit==最小限制空闲硬盘空间以MB为单位
Disable <abbr title="Distributed Hash Table">DHT</abbr>-in when free space is below.==当空闲硬盘低于其值时,关闭<abbr title="Distributed Hash Table">DHT</abbr>输入。
>Autoregulate<==>自动调节<
when absolute minimum limit has been reached==当达到绝对最小限制值时
The autoregulation task performs the following sequence of operations, stopping once free space disk is over the steady-state value :==自动调节任务执行以下操作序列,一旦硬盘可用空间超过稳态值就停止:
>delete old releases<==>删除旧发行版<
>delete logs<==>删除日志<
>delete robots.txt table<==>删除robots.txt表<
>delete news<==>删除新闻<
>clear HTCACHE<==>清除HTCACHE<
>clear citations<==>清除引用<
>throw away large crawl queues<==>扔掉大爬取队列<
>cut away too large RWIs<==>切除过大的反向词<
Used space disk==已用硬盘空间
Steady-state maximum==稳态最大值
Maximum amount of space (in Mebibytes) that should be used as steady state==为保持稳定状态最大可用的硬盘空间以MB为单位
Disable crawls when used space is over.==当使用硬盘高于其值时,停止爬取。
Absolute maximum==绝对最大值
Maximum amount of space (in Mebibytes) that should be used as hard limit==最大限制已用硬盘空间以MB为单位
Disable <abbr title="Distributed Hash Table">DHT</abbr>-in when used space is over.==当已用硬盘空间超过其值时,关闭<abbr title="Distributed Hash Table">DHT</abbr>输入。
when absolute maximum limit has been reached.==当达到绝对最大限制值时。
The autoregulation task performs the following sequence of operations, stopping once used space disk is below the steady-state value:==自动调节任务执行以下操作序列,一旦使用的硬盘空间低于稳态值就停止:
RAM==内存
free space==空闲空间
Accepted change. This will take effect after <strong>restart</strong> of YaCy==已接受改变. 在YaCy<strong>重启</strong>后生效
restart now</a>==立即重启</a>
Confirm Restart==确定重启
Use Default Profile:==使用默认配置:
and use==并使用
of the defined performance.==中的默认性能设置.
Save==保存
Changes take effect immediately==改变立即生效
YaCy Priority Settings==YaCy优先级设置
YaCy Process Priority==YaCy进程优先级
#Normal==Normal
Below normal==低于普通
Idle</option>==空闲</option>
"Set new Priority"=="置为新优先级"
Changes take effect after <strong>restart</strong> of YaCy==在YaCy<strong>重启</strong>后生效.
#Online Caution Settings
Online Caution Settings==在线警告设置
This is the time that the crawler idles when the proxy is accessed, or a local or remote search is done.==这是代理被访问或者搜索完成后的一段爬取空闲时间.
The delay is extended by this time each time the proxy is accessed afterwards.==在访问代理后, 会触发此延时,
This shall improve performance of the affected process (proxy or search).==从而提高相关进程(代理或者搜索)的性能.
(current delta is==(当前设置为
seconds since last proxy/local-search/remote-search access.)==秒.)
Online Caution Case==触发事件
indexer delay (milliseconds) after case occurency==事件触发后的索引延时(毫秒)
Proxy:==代理:
Local Search:==本地搜索:
Remote Search:==远端搜索:
"Enter New Parameters"=="使用新参数"
#-----------------------------
#File: PerformanceConcurrency_p.html
#---------------------------
Performance of Concurrent Processes==并行进程性能
serverProcessor Objects==服务器处理器对象
>Thread<==>线程<
Queue Size<br />Current==队列大小<br />当前
Queue Size<br />Maximum==队列大小<br />最大
Executors:<br />Current Number of Threads==执行者:<br />当前线程数
Concurrency:<br />Maximum Number of Threads==并发:<br />最大线程数
>Childs<==>子线程<
Average<br />Block Time<br />Reading==平均<br />阻塞时间<br />读取
Average<br />Exec Time==平均<br />运行时间
Average<br />Block Time<br />Writing==平均<br />阻塞时间<br />写入
Concurrency:<br />Number of Threads==并行:<br />线程数
Total<br />Cycles==总<br />循环
Full Description==完整描述
#-----------------------------
#File: PerformanceMemory_p.html
#---------------------------
Performance Settings for Memory==内存性能设置
refresh graph==刷新图表
>simulate short memory status<==>模拟短期内存状态<
>use Standard Memory Strategy<==>使用标准内存策略<
(current==(当前
Memory Usage==内存使用
After Startup==启动后
After Initializations==初始化后
before GC==GC前
after GC==GC后
>Now==>现在
before <==未<
Description==描述
maximum memory that the JVM will attempt to use==JVM使用的最大内存
>Available<==>可用<
total available memory including free for the JVM within maximum==当前JVM可用剩余内存
>Total<==>全部<
total memory taken from the OS==操作系统分配内存
>Free<==>空闲<
free memory in the JVM within total amount==JVM空闲内存
>Used<==>已用<
used memory in the JVM within total amount==JVM已用内存
Table RAM Index==Table使用内存
>Size==>大小
>Key==>关键字
>Value==>值
Chunk Size<==块大小<
Used Memory<==已用内存<
Object Index Caches==Object索引缓存
Needed Memory==所需内存大小
Object Read Caches==Object读缓存
>Read Hit Cache<==>读命中缓存<
>Read Miss Cache<==>读丢失缓存<
>Read Hit<==>读命中<
>Read Miss<==>读丢失<
Write Unique<==唯一写入<
Write Double<==重复写入<
Deletes<==删除<
Flushes<==清理<
Total Mem==全部内存
MB (hit)==MB (命中)
MB (miss)==MB (丢失)
Stop Grow when less than #[objectCacheStopGrow]# MB available left==可用内存低于 #[objectCacheStopGrow]# MB时停止增长
Start Shrink when less than #[objectCacheStartShrink]# MB availabe left==可用内存低于 #[objectCacheStartShrink]# MB开始减少
Other Caching Structures==其他缓存结构
Type</td>==类型</td>
>Hit<==>命中<
>Miss<==>丢失<
Insert<==插入<
Delete<==删除<
#DNSCache</td>==DNSCache</td>
#DNSNoCache</td>==DNSNoCache</td>
#HashBlacklistedCache==HashBlacklistedCache
Search Event Cache<==搜索事件缓存<
#-----------------------------
#File: PerformanceQueues_p.html
#---------------------------
Performance Settings of Queues and Processes==队列和进程性能设置
Scheduled tasks overview and waiting time settings:==定时任务一览与等待时间设置:
>Thread<==>线程<
Queue Size==队列大小
>Total==>全部
Block Time==阻塞时间
Sleep Time==睡眠时间
Exec Time==执行时间
<td>Idle==<td>空闲
>Busy==>忙碌
Short Mem<br />Cycles==小内存<br />周期
>per Cycle==>每周期
>per Busy-Cycle==>每次忙碌周期
>Memory Use==>内存<br />使用
>Delay between==>延时
>idle loops==>空闲循环
>busy loops==>忙碌循环
Minimum of<br />Required Memory==最小<br />需要内存
Full Description==完整描述
Submit New Delay Values==提交新延时值
Changes take effect immediately==改变立即生效
Cache Settings:==缓存设置:
#RAM Cache==RAM Cache
<td>Description==<td>描述
URLs in RAM buffer:==缓存中URL:
This is the size of the URL write buffer. Its purpose is to buffer incoming URLs==这是URL写缓冲的大小.作用是缓冲接收URL,
in case of search result transmission and during DHT transfer.==以利于结果转移和DHT传递.
Words in RAM cache:==缓存中关键字:
This is the current size of the word caches.==这是当前关键字缓存的大小.
The indexing cache speeds up the indexing process, the DHT cache holds indexes temporary for approval.==索引缓存能加速索引进程, DHT缓存则临时保存待确认的索引.
The maximum of this caches can be set below.==此缓存最大值能从下面设置.
Maximum URLs currently assigned<br />to one cached word:==关键字拥有最大URL数:
This is the maximum size of URLs assigned to a single word cache entry.==这是单个关键字缓存词条所能分配的最多URL数目.
If this is a big number, it shows that the caching works efficiently.==如果此数值较大, 则表示缓存效率很高.
Maximum age of a word:==关键字最长寿命:
This is the maximum age of a word in an index in minutes.==这是索引内关键字所能存在的最长时间.
Minimum age of a word:==关键字最短寿命:
This is the minimum age of a word in an index in minutes.==这是索引内关键字所能存在的最短时间.
Maximum number of words in cache:==缓存中关键字最大数目:
This is is the number of word indexes that shall be held in the==这是索引时缓存中存在的最大关键字索引数目.
ram cache during indexing. When YaCy is shut down, this cache must be==当YaCy停止时,
flushed to disc; this may last some minutes.==它们会被冲刷到硬盘中, 可能会花费数分钟.
#Initial space of words in cache:==Anfangs Freiraum im Word Cache:
#This is is the init size of space for words in cache.==Dies ist die Anfangsgröße von Wörtern im Cache.
Enter New Cache Size==使用新缓存大小
Balancer Settings==平衡器设置
This is the time delta between accessing of the same domain during a crawl.==这是在爬取期间, 访问同一域名的间歇值.
The crawl balancer tries to avoid that domains are==爬取平衡器会尽量避免过于频繁地
accessed too often, but if the balancer fails (i.e. if there are only links left from the same domain), then these minimum==访问同一域名, 但如果平衡器失效(比如只剩下来自同一域名的链接), 则这些最小
delta times are ensured.==间歇值仍会得到保证.
>Crawler Domain<==>爬虫域名<
>Minimum Access Time Delta<==>最小访问间歇<
>local (intranet) crawls<==>本地(局域网)爬取<
>global (internet) crawls<==>全球(广域网)爬取<
"Enter New Parameters"=="使用新参数"
Thread Pool Settings:==线程池设置:
maximum Active==最大活动
current Active==当前活动
Enter new Threadpool Configuration==使用新配置
#-----------------------------
#File: PerformanceSearch_p.html
#---------------------------
Performance Settings of Search Sequence==搜索时间性能设置
Search Sequence Timing==搜索时间测量
Timing results of latest search request:==最近一次搜索请求时间测量结果:
Query==请求
Event<==事件<
Comment<==注释<
Time<==时间<
Delta (ms)==间隔(毫秒)
Duration (ms)==耗时(毫秒)
Result-Count==结果数目
The network picture below shows how the latest search query was solved by asking corresponding peers in the DHT:==下图显示了通过询问DHT中节点解析的最近搜索请求情况:
red -> request list alive==红色 -> 活动请求列表
green -> request has terminated==绿色 -> 请求已终止
grey -> the search target hash order position(s) (more targets if a dht partition is used)<==灰色 -> 搜索目标hash序列位置(如果使用dht会产生更多目标)<
"Search event picture"=="搜索时间图况"
#-----------------------------
#File: ProxyIndexingMonitor_p.html
#---------------------------
Indexing with Proxy==代理索引
YaCy can be used to 'scrape' content from pages that pass the integrated caching HTTP proxy.==YaCy可以从经过集成缓存HTTP代理的页面中'抓取'内容.
When scraping proxy pages then <strong>no personal or protected page is indexed</strong>;==当通过代理进行搜索时不会索引<strong>私有或者受保护页面</strong>;
# This is the control page for web pages that your peer has indexed during the current application run-time==Dies ist die Kontrollseite für Internetseiten, die Ihr Peer während der aktuellen Sitzung
# as result of proxy fetch/prefetch.==durch Besuchen einer Seite indexiert.
# No personal or protected page is indexed==Persönliche Seiten und geschütze Seiten werden nicht indexiert
those pages are detected by properties in the HTTP header (like Cookie-Use, or HTTP Authorization)==通过检测HTTP头部属性(比如cookie用途或者http认证)
or by POST-Parameters (either in URL or as HTTP protocol) and automatically excluded from indexing.==或者提交参数(链接或者http协议)能够检测出此类网页并在索引时排除.
You have to <a href="Settings_p.html?page=ProxyAccess">setup the proxy</a> before use.==您必须在使用前<a href="Settings_p.html?page=ProxyAccess">设置代理</a>。
Proxy Auto Config:==自动配置代理:
this controls the proxy auto configuration script for browsers at http://localhost:8090/autoconfig.pac==这会影响浏览器代理自动配置脚本 http://localhost:8090/autoconfig.pac
.yacy-domains only==仅 .yacy 域名
whether the proxy should only be used for .yacy-Domains==代理是否只对 .yacy 域名有效.
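# A proxy auto-config (PAC) script is a small JavaScript file built around a single
# FindProxyForURL function; a minimal sketch of the kind of rule the autoconfig.pac
# above expresses (assuming the default port 8090 and the .yacy-domains restriction):
# function FindProxyForURL(url, host) {
#     if (shExpMatch(host, "*.yacy")) { return "PROXY localhost:8090"; }
#     return "DIRECT";
# }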
Proxy pre-fetch setting:==代理预读设置:
this is an automated html page loading procedure that takes actual proxy-requested==这是一个自动预读网页的过程
URLs as crawling start points for crawling.==期间会将请求代理的URL作为爬取起始点.
Prefetch Depth==预读深度
A prefetch of 0 means no prefetch; a prefetch of 1 means to prefetch all==设置为0则不预读; 设置为1预读所有嵌入链接,
embedded URLs, but since embedded image links are loaded by the browser==但是嵌入图像链接是由浏览器读取,
this means that only embedded href-anchors are prefetched additionally.==这意味着只会额外预读嵌入的href锚点链接.
Store to Cache==存储至缓存
It is almost always recommended to set this on. The only exception is that you have another caching proxy running as secondary proxy and YaCy is configured to used that proxy in proxy-proxy - mode.==推荐打开此项设置. 唯一的例外是你有另一个缓存代理作为二级代理并且YaCy设置为使用'代理到代理'模式.
Do Local Text-Indexing==进行本地文本索引
If this is on, all pages (except private content) that passes the proxy is indexed.==如果打开此项设置, 所有通过代理的网页(除了私有内容)都会被索引.
Do Local Media-Indexing==进行本地媒体索引
This is the same as for Local Text-Indexing, but switches only the indexing of media content on.==与本地文本索引类似, 但是仅当'索引媒体内容'打开时有效.
Do Remote Indexing==进行远端索引
If checked, the crawler will contact other peers and use them as remote indexers for your crawl.==如果被选中, 爬虫会联系其他节点并将之作为远端索引器.
If you need your crawling results locally, you should switch this off.==如果仅需要本地索引结果, 可以关闭此项.
Only senior and principal peers can initiate or receive remote crawls.==只有高级节点和主要节点能发起和接收远端爬取.
Please note that this setting only take effect for a prefetch depth greater than 0.==请注意, 此设置仅在预读深度大于0时有效.
Proxy generally==代理杂项设置
Path==路径
The path where the pages are stored (max. length 300)==存储页面的路径(最大300个字符长度)
Size</label>==大小</label>
The size in MB of the cache.==缓存大小(MB).
"Set proxy profile"=="保存设置"
The file DATA/PLASMADB/crawlProfiles0.db is missing or corrupted.==文件 DATA/PLASMADB/crawlProfiles0.db 丢失或者损坏.
Please delete that file and restart.==请删除此文件并重启.
Pre-fetch is now set to depth==预读深度现为
Caching is now #(caching)#off::on#(/caching)#.==缓存现已 #(caching)#关闭::打开#(/caching)#.
Local Text Indexing is now #(indexingLocalText)#off::on==本地文本索引现已 #(indexingLocalText)#关闭::打开
Local Media Indexing is now #(indexingLocalMedia)#off::on==本地媒体索引现已 #(indexingLocalMedia)#关闭::打开
Remote Indexing is now #(indexingRemote)#off::on==远端索引现已 #(indexingRemote)#关闭::打开
Cachepath is now set to '#[return]#'.</strong> Please move the old data in the new directory.==缓存路径现为 '#[return]#'.</strong> 请将旧文件移至此目录.
Cachesize is now set to #[return]#MB.==缓存大小现为 #[return]#MB.
Changes will take effect after restart only.==改变仅在重启后生效.
An error has occurred:==发生错误:
You can see a snapshot of recently indexed pages==你可以在
on the==
Page.==页面查看最近索引页面快照.
#-----------------------------
#File: QuickCrawlLink_p.html
#---------------------------
Quick Crawl Link==快速爬取链接
Quickly adding Bookmarks:==快速添加书签:
Simply drag and drop the link shown below to your Browsers Toolbar/Link-Bar.==仅需拖动以下链接至浏览器工具栏/书签栏.
If you click on it while browsing, the currently viewed website will be inserted into the YaCy crawling queue for indexing.==如果在浏览网页时点击它, 当前查看的网页会被插入到YaCy爬取队列以用于索引.
Crawl with YaCy==用YaCy爬取
Title:==标题:
Link:==链接:
Status:==状态:
URL successfully added to Crawler Queue==已成功添加网址到爬虫队列.
Malformed URL==格式错误的链接
Unable to create new crawling profile for URL:==创建链接爬取信息失败:
Unable to add URL to crawler queue:==添加链接到爬取队列失败:
#-----------------------------
#File: RankingRWI_p.html
#---------------------------
RWI Ranking Configuration<==反向词排名配置<
The document ranking influences the order of the search result entities.==文档排名会影响实际搜索结果的顺序。
A ranking is computed using a number of attributes from the documents that match with the search word.==排名计算使用到与搜索词匹配的文档中的多个属性。
The attributes are first normalized over all search results and then the normalized attribute is multiplied with the ranking coefficient computed from this list.==在所有搜索结果基础上,先对属性进行归一化,然后将归一化的属性与相应的排名系数相乘。
The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数随着下表中给出的排名水平呈指数增长。
If you increase a single value by one, then the strength of the parameter doubles.==如果将单个值增加1,则参数的影响效果加倍。
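# A quick worked example of the exponential coefficients described above: since raising
# a level by one doubles a parameter's strength, the coefficients behave like powers of
# two -- a ranking level of 3 weighs an attribute by 2^3 = 8, a level of 4 by 2^4 = 16.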
>Pre-Ranking<==>预排名<
</body>==<script>window.onload = function () {$("label:contains('Appearance In Emphasized Text')").text('出现在强调的文本中');$("label:contains('Appearance In URL')").text('出现在地址中'); $("label:contains('Appearance In Author')").text('出现在作者中'); $("label:contains('Appearance In Reference/Anchor Name')").text('出现在参考/锚点名称中'); $("label:contains('Appearance In Tags')").text('出现在标签中'); $("label:contains('Appearance In Title')").text('出现在标题中'); $("label:contains('Authority of Domain')").text('域名权威'); $("label:contains('Category App, Appearance')").text('类别:出现在应用中'); $("label:contains('Category Audio Appearance')").text('类别:出现在音频中'); $("label:contains('Category Image Appearance')").text('类别:出现在图片中'); $("label:contains('Category Video Appearance')").text('类别:出现在视频中'); $("label:contains('Category Index Page')").text('类别:索引页面'); $("label:contains('Date')").text('日期'); $("label:contains('Domain Length')").text('域名长度'); $("label:contains('Hit Count')").text('命中数'); $("label:contains('Preferred Language')").text('倾向的语言'); $("label:contains('Links To Local Domain')").text('本地域名链接'); $("label:contains('Links To Other Domain')").text('其他域名链接'); $("label:contains('Phrases In Text')").text('文本中短语');$("label:contains('Position In Phrase')").text('在短语中位置');$("label:contains('Position In Text')").text('在文本中位置');$("label:contains('Position Of Phrase')").text('短语的位置'); $("label:contains('Term Frequency')").text('术语频率'); $("label:contains('URL Components')").text('地址组件'); $("label:contains('Term Frequency')").text('术语频率'); $("label:contains('URL Length')").text('地址长度'); $("label:contains('Word Distance')").text('词汇距离'); $("label:contains('Words In Text')").text('文本词汇'); $("label:contains('Words In Title')").text('标题词汇');}</script></body>
There are two ranking stages: first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==有两个排名阶段:首先对搜索结果进行一次排名, 然后再对首次排名结果进行二次排名。
The two stages are separated because they need statistical information from the result of the pre-ranking.==两个阶段是分开的, 因为后排名需要来自预排名结果的统计信息.
>Post-Ranking<==>后排名<
"Set as Default Ranking"=="保存为默认排名"
"Re-Set to Built-In Ranking"=="重置排名设置"
#-----------------------------
#File: RankingSolr_p.html
#---------------------------
Solr Ranking Configuration<==Solr排名配置<
These are ranking attributes for Solr. This ranking applies for internal and remote (P2P or shard) Solr access.==这些是 Solr 的排名属性。 此排名适用于内部和远端P2P或分片的Solr访问。
Select a profile:==选择配置文件:
>Boost Function<==>提升函数<
A Boost Function can combine numeric values from the result document to produce a number which is multiplied with the score value from the query result.==提升函数可以组合结果文档中的数值以生成一个数字,该数字与查询结果中的得分值相乘。
To see all available fields, see the <a href="IndexSchema_p.html">YaCy Solr Schema</a> and look for numeric values (these are names with suffix '_i').==要查看所有可用字段,请参阅<a href="IndexSchema_p.html">YaCy Solr架构</a>并查找数值它们都是带有后缀“_i”的名称
To find out which kind of operations are possible, see the <a href="https://lucene.apache.org/solr/guide/6_6/function-queries.html" target="_blank">Solr Function Query</a> documentation.==要了解可能的操作类型,请参阅<a href="https://lucene.apache.org/solr/guide/6_6/function-queries.html" target="_blank">Solr函数查询</a>文档。
Example: to order by date, use "recip(ms(NOW,last_modified),3.16e-11,1,1)", to order by crawldepth, use "div(100,add(crawldepth_i,1))".==示例:要按日期排序,使用"recip(ms(NOW,last_modified),3.16e-11,1,1)";要按爬虫深度排序,使用"div(100,add(crawldepth_i,1))"。
You can boost with vocabularies, use the occurrence counters #[vocabulariesvoccount]# and #[vocabulariesvoclogcount]#.==你可以使用出现次数计数器#[vocabulariesvoccount]#和#[vocabulariesvoclogcount]#来提升词汇量。
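# A sketch combining the example functions above into a single boost (field names are
# from the YaCy Solr schema; Solr's product() multiplies its arguments), favouring both
# fresh documents and documents close to the crawl root:
# product(recip(ms(NOW,last_modified),3.16e-11,1,1),div(100,add(crawldepth_i,1)))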
>Boost Query<==>提升查询<
The Boost Query is attached to every query. Use this to statically boost specific content in the index.==提升查询附加到每个查询。使用它来静态提升索引中的特定内容。
Example: "fuzzy_signature_unique_b:true^100000.0f" means that documents, identified as 'double' are ranked very bad and appended to the end of all results (because the unique are ranked high).==示例“fuzzy_signature_unique_b:true^100000.0f”表示被标识为“double”的文档排名很差并附加到所有结果的末尾因为唯一的排名很高
To find appropriate fields for this query, see the <a href="IndexSchema_p.html">YaCy Solr Schema</a> and look for boolean values (with suffix '_b') or tags inside string fields (with suffix '_s' or '_sxt').==要为此查询找到适当的字段,请参阅<a href="IndexSchema_p.html">YaCy Solr架构</a>并查找布尔值带有后缀“_b”或字符串字段中的标签带有后缀“_s”或“_sxt”
You can boost with vocabularies, use the field '#[vocabulariesfield]#' with values #[vocabulariesavailable]#. You can also boost on logarithmic occurrence counters of the fields #[vocabulariesvoclogcounts]#.==你可以使用词汇表进行提升,使用值为#[vocabulariesavailable]#的字段'#[vocabulariesfield]#'。你还可以提高字段#[vocabulariesvoclogcounts]#的对数出现计数器。
>Filter Query<==>过滤器查询<
The Filter Query is attached to every query. Use this to statically add a selection criteria to reduce the set of results.==过滤器查询附加到每个查询。使用它静态添加选择标准以减少结果集。
Example: "http_unique_b:true AND www_unique_b:true" will filter out all results where urls appear also with/without http(s) and/or with/without 'www.' prefix.==示例:"http_unique_b:true AND www_unique_b:true"将过滤掉URL包含/不包含http(s) 和/或 包含/不包含“www”的结果。
To find appropriate fields for this query, see the <a href="IndexSchema_p.html">YaCy Solr Schema</a>. Warning: bad expressions here will cause that you don't have any search result!==要寻找此查询的适当字段,请参阅<a href="IndexSchema_p.html">YaCy Solr架构</a>。警告:此处的错误表达式将导致你没有任何搜索结果!
>Solr Boosts<==>Solr提升<
This is the set of searchable fields (see <a href="IndexSchema_p.html">YaCy Solr Schema</a>). Entries without a boost value are not searched. Boost values make hits inside the corresponding field more important.==这是一组可搜索字段(请参阅 <a href="IndexSchema_p.html">YaCy Solr架构</a>)。没有提升值的条目不会被搜索。提升值使相应字段内的命中更加重要。
"Set Boost Function"=="设置提升函数"
"Set Boost Query"=="设置提升查询"
"Set Filter Query"=="设置过滤器查询"
"Set Field Boosts"=="设置字段提升"
"Re-Set to default"=="重置为默认值"
#-----------------------------
#File: RegexTest.html
#---------------------------
YaCy '#[clientname]#': Regex Test==YaCy '#[clientname]#':正则表达式测试
>Regex Test<==>正则表达式测试<
>Test String<==>测试字符串<
>Regular Expression<==>正则表达式<
This is a <a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">Java Pattern</a>==这是一种<a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" target="_blank">Java模式</a>
>Result<==>结果<
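# A hypothetical test for this servlet: enter "https://www.example.org/index.html" as
# the test string and a Java Pattern such as https?://([a-z0-9.-]+)/.* as the regular
# expression; the result field then shows whether the pattern matches the string.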
#-----------------------------
#File: RemoteCrawl_p.html
#---------------------------
Remote Crawl Configuration==远端爬取配置
>Remote Crawler<==>远端爬虫<
The remote crawler is a process that requests urls from other peers.==远端爬虫是一个进程, 该进程可用于处理来自其他节点的地址.
Peers offer remote-crawl urls if the flag 'Do Remote Indexing'==如果在开始爬取时选中了'进行远端索引'标志,
is switched on when a crawl is started.==则节点会提供远端爬取地址.
Remote Crawler Configuration==远端爬虫配置
Your peer cannot accept remote crawls because you need senior or principal peer status for that!==你的节点无法接受远端爬取, 因为你需要高级或骨干节点状态!
>Accept Remote Crawl Requests<==>接受远端爬取请求<
Perform web indexing upon request of another peer.==收到另一节点请求时进行网页索引.
Load with a maximum of==最多帮助爬取
pages per minute==页面/分钟
"Save"=="保存"
Crawl results will appear in the==爬取结果会出现在
>Crawl Result Monitor<==>爬取结果监控器<
Peers offering remote crawl URLs==提供远端爬取地址的节点
If the remote crawl option is switched on, then this peer will load URLs from the following remote peers:==如果勾选了远端爬取选项, 则本机节点会帮助爬取来自远端节点提供的链接:
>Name<==>名字<
URLs for<br/>Remote<br/>Crawl==用于<br/>远端<br/>爬取的地址
>Release<==>版本号<
>PPM<==>PPM<
>QPH<==>QPH<
>Last<br/>Seen<==>上次<br/>出现<
>UTC</strong><br/>Offset<==>UTC</strong><br/>时区<
>Uptime<==>在线时长<
>Links<==>链接<
>Age<==>年龄<
#-----------------------------
#File: SearchAccessRate_p.html
#------------------------------
Local Search access rate limitations==本地搜索访问率限制
You can configure here limitations on access rate to this peer search interface by unauthenticated users and users without extended search right==你可以在此处配置未经验证的用户和没有扩展搜索权限的用户对该节点搜索界面的访问速率限制
(see the <a href="ConfigAccounts_p.html">Accounts</a> configuration page for details on users rights).==(有关用户权限详情请参见<a href="ConfigAccounts_p.html">账户</a>配置页面)。
YaCy search==YaCy搜索
Access rate limitations to this peer search interface.==本节点搜索界面访问率限制。
When a user with limited rights (unauthenticated or without extended search right) exceeds a limit, the search is blocked.==当具有有限权限的用户(未经验证或没有扩展搜索权限)超过限制时,搜索阻塞。
Max searches in 3s==3秒内最大搜索次数
Max searches in 1mn==1分钟内最大搜索次数
Max searches in 10mn==10分钟内最大搜索次数
>Peer-to-peer search<==>P2P搜索<
Access rate limitations to the peer-to-peer search mode.==P2P搜索模式下访问率限制。
When a user with limited rights (unauthenticated or without extended search right) exceeds a limit, the search scope falls back to only this local peer index.==当具有有限权限的用户(未经验证或没有扩展搜索权限)超过限制时,搜索范围缩小为本地索引。
Peer-to-peer search with JavaScript results resorting==带有JavaScript结果重排序的P2P搜索
Access rate limitations to the peer-to-peer search mode with browser-side JavaScript results resorting enabled==对启用了浏览器端JavaScript结果重新排序的P2P搜索模式的访问率限制
(check the 'Remote results resorting' section in the <a href="ConfigPortal_p.html">Search Portal</a> configuration page).==(在<a href="ConfigPortal_p.html">搜索门户</a>配置页面勾选'远端结果重排序')。
When a user with limited rights (unauthenticated or without extended search right) exceeds a limit, results resorting becomes only applicable on demand, server-side.==当具有有限权限的用户(未经验证或没有扩展搜索权限)超过限制时,结果重排序将仅可按需在服务器端进行。
Remote snippet load==远端摘录加载
Limitations on snippet loading from remote websites.==对从远程网站加载摘录的限制。
When a user with limited rights (unauthenticated or without extended search right) exceeds a limit, the snippets fetch strategy falls back to 'CACHEONLY'==当具有有限权限的用户(未经验证或没有扩展搜索权限)超过限制时,摘录获取策略缩小为'CACHEONLY'。
(check the default Snippet Fetch Strategy on the <a href="ConfigPortal_p.html">Search Portal</a> configuration page).==(在<a href="ConfigPortal_p.html">搜索门户</a>配置页面勾选默认的摘录获取策略)。
Submit==提交
Set defaults==设定为默认值
Changes will take effect immediately.==改变将会立即生效。
#-----------------------------
#File: ServerScannerList.html
#------------------------------
>URL<==>网址<
>inaccessible<==>不可访问<
#Network Scanner Monitor==Network Scanner Monitor
The following servers had been detected:==已检测到以下服务器:
The following servers can be searched:==可以搜索以下服务器:
Available server within the given IP range==指定IP范围内的可用服务器
>Protocol<==>协议<
#>IP<==>IP<
#>URL<==>URL<
>Access<==>权限<
>Process<==>状态<
>unknown<==>未知<
>empty<==>空<
>granted<==>已授权<
>denied<==>拒绝<
>not in index<==>未在索引中<
>indexed<==>已被索引<
"Add Selected Servers to Crawler"=="添加选中服务器到爬虫"
#------------------------------
#File: Settings_Crawler.inc
#---------------------------
>Crawler Settings<==>爬虫设置<
Generic Crawler Settings==普通爬虫设置
>Timeout:<==>超时:<
Connection timeout in ms==连接超时(毫秒)
means unlimited.==表示没有限制。
HTTP Crawler Settings:==HTTP 爬虫设置:
Maximum Filesize==最大文件大小
FTP Crawler Settings==FTP 爬虫设置
SMB Crawler Settings==SMB 爬虫设置
Local File Crawler Settings==本地文件爬虫设置
Maximum allowed file size in bytes that should be downloaded==允许下载的最大文件大小(字节)
Larger files will be skipped==超出此限制的文件将被忽略
Please note that if the crawler uses content compression, this limit is used to check the compressed content size==请注意, 如果爬虫使用内容压缩, 则此限制对压缩后文件大小有效.
Submit==提交
Changes will take effect immediately==改变立即生效
#-----------------------------
#File: Settings_Debug.inc
#---------------------------
#-----------------------------
#File: Settings_HttpClient.inc
#---------------------------
#-----------------------------
#File: Settings_MessageForwarding.inc
#---------------------------
Message Forwarding==消息转发
With this settings you can activate or deactivate forwarding of yacy-messages via email.==此设置能打开或关闭通过电邮转发yacy消息.
Enable message forwarding==打开消息转发
Enabling/Disabling message forwarding via email.==打开/关闭通过电邮转发消息.
Forwarding Command==转发命令
The command-line program that should be used to forward the message.<br />==用于转发消息的命令行程序.<br />
Forwarding To==转发给
The recipient email-address.<br />==收件人email地址.<br />
e.g.:==比如:
"Submit"=="提交"
Changes will take effect immediately.==改变立即生效.
#-----------------------------
#File: Settings_p.html
#---------------------------
Advanced Settings==高级设置
If you want to restore all settings to the default values,==你如果要恢复所有设置到默认值,
but <strong>forgot your administration password</strong>, you must stop the proxy,==但是<strong>忘记了管理员密码</strong> 则你首先必须停止代理,
delete the file 'DATA/SETTINGS/yacy.conf' in the YaCy application root folder and start YaCy again.==然后删除YaCy应用根目录下的 'DATA/SETTINGS/yacy.conf' 文件最后再次启动YaCy。
>Server Access Settings<==>服务器访问设置<
>Referrer Policy Settings<==>参考策略设置<
>Crawler Settings<==>爬虫设置<
>Seed Upload Settings<==>种子上传设置<
>Message Forwarding (optional)<==>消息传输(可选)<
>Transparent Proxy Access Settings<==>透明代理访问设置<
>URL/Web Proxy Access Settings<==>网址代理访问设置<
>Remote Proxy (optional)<==>远端代理(可选)<
>Debug/Analysis Settings<==>调试/分析设置<
>HTTP client Settings<==>HTTP客户端设置<
#-----------------------------
#File: Settings_Proxy.inc
#---------------------------
Remote Proxy (optional)==远端代理(可选)
YaCy can use another proxy to connect to the internet. You can enter the address for the remote proxy here:==YaCy能够通过第二代理连接到网络, 在此输入远端代理地址.
Use remote proxy</label>==使用远端代理</label>
Enables the usage of the remote proxy by yacy==打开以支持远端代理
Use remote proxy for yacy &lt;-&gt; yacy communication==为YaCy &lt;-&gt; YaCy 通信使用代理
Specifies if the remote proxy should be used for the communication of this peer to other yacy peers.==选此指定远端代理是否支持YaCy节点间通信.
<em>Hint:</em> Enabling this option could cause this peer to remain in junior status.==<em>提示:</em> 打开此选项后本地节点会被置为初级节点.
Use remote proxy for HTTPS==为HTTPS使用远端代理
Specifies if YaCy should forward ssl connections to the remote proxy.==选此指定YaCy是否使用SSL代理.
Remote proxy host==远端代理服务器
The ip address or domain name of the remote proxy==远端代理的IP地址或者域名
Remote proxy port==远端代理端口
the port of the remote proxy==远端代理使用的端口
Remote proxy user==远端代理用户
Remote proxy password==远端代理用户密码
No-proxy adresses==无代理地址
IP addresses for which the remote proxy should not be used==指定不使用代理的IP地址
"Submit"=="提交"
Changes will take effect immediately.==改变立即生效.
#-----------------------------
#File: Settings_ProxyAccess.inc
#---------------------------
Proxy Access Settings==代理访问设置
These settings configure the access method to your own http proxy and server.==设定http代理和服务器的访问方式.
All traffic is routed throug one single port, for both proxy and server.==代理和服务器流量均从同一端口流过.
Server/Proxy Port Configuration==服务器/代理 端口设置
The socket addresses where YaCy should listen for incoming connections from other YaCy peers or http clients.==指定YaCy需要监听的socket地址.
You have four possibilities to specify the address:==可以设置以下四个地址:
defining a port only==仅指定一个端口
<em>e.g. 8090</em>==<em>比如 8090</em>
defining IP address and port==指定IP地址和端口
<em>e.g. 192.168.0.1:8090</em>==<em>比如 192.168.0.1:8090</em>
defining host name and port==指定域名和端口
<em>e.g. home:8090</em>==<em>比如 home:8090</em>
defining interface name and port==指定网络接口和端口
<em>e.g. #eth0:8090</em>==<em>比如 #eth0:8090</em>
Hint: Dont forget to change your firewall configuration after you have changed the port.==提示: 改变端口后请更改对应防火墙设置.
Proxy and http-Server Administration Port==代理和http服务器管理端口
Changes will take effect in 5-10 seconds==改变在5-10秒后生效
Server Access Restrictions==服务器访问限制
You can restrict the access to this proxy/server using a two-stage security barrier:==使用两层安全屏障限制到此代理/服务器的访问:
define an <em>access domain</em> with a list of granted client IP-numbers or with wildcards==定义一个带有授权IP名单或者通配符的<em>访问域</em>
define an <em>user account</em> with an user:password - pair==创建一个需要密码的<em>用户账户</em>
This is the account that restricts access to the proxy function.==这是一个限制代理访问功能的账户.
You probably don't want to share the proxy to the internet, so you should set the==如果不想在互联网上共享代理,
IP-Number Access Domain to a pattern that corresponds to you local intranet.==请定义一个对应本地局域网的IP访问域表达式.
The default setting should be right in most cases. If you want, you can also set a proxy account==默认设置适用于大多数情况. 如果需要共享代理,
so that every proxy user must authenticate first, but this is rather unusual.==请先设置需要授权的代理账户.
IP-Number filter==IP地址过滤
Use <a==使用 <a
#-----------------------------
#File: Settings_Referrer.inc
#---------------------------
#-----------------------------
#File: Settings_Seed_UploadFile.inc
#---------------------------
Store into filesystem:==存储至文件系统:
You must configure this if you want to store the seed-list file onto the file system.==如果要将种子列表文件存储至文件系统, 请先配置此选项.
File Location==文件位置
Here you can specify the path within the filesystem where the seed-list file should be stored.==在此指定文件系统内保存种子列表文件的路径.
"Submit"=="提交"
#-----------------------------
#File: Settings_Seed_UploadFtp.inc
#---------------------------
Uploading via FTP:==通过FTP上传:
This is the account for a FTP server where you can host a seed-list file.==此账户用于访问存放种子列表文件的FTP服务器.
If you set this, you will become a principal peer.==如果设置了此选项, 本地节点会成为主要节点.
Your peer will then upload the seed-bootstrap information periodically,==你的节点会定期上传种子启动信息,
but only if there had been changes to the seed-list.==前提是种子列表有变更.
The host where you have a FTP account, like==ftp服务器, 比如
Path</label>==路径</label>
The remote path on the FTP server, like==ftp服务器上传路径, 比如
Missing sub-directories are NOT created automatically.==不会自动创建缺少的子目录.
Username==用户名
Your log-in at the FTP server==ftp服务器用户名
Password</label>==密码</label>
The password==用户密码
"Submit"=="提交"
#-----------------------------
#File: Settings_Seed_UploadScp.inc
#---------------------------
Uploading via SCP:==通过SCP上传:
This is the account for a server where you are able to login via ssh.==设置通过ssh访问服务器的账户.
#Server==Server
The host where you have an account, like 'my.host.net'==服务器, 比如'my.host.net'
#Server&nbsp;Port==Server&nbsp;Port
The sshd port of the host, like '22'==ssh端口, 比如'22'
Path</label>==路径</label>
The remote path on the server, like '~/yacy/seed.txt'. Missing sub-directories are NOT created automatically.==ssh服务器上传路径, 比如'~/yacy/seed.txt'. 不会自动创建缺少的子目录.
Username==用户名
Your log-in at the server==ssh服务器用户名
Password</label>==密码</label>
The password==用户密码
"Submit"=="提交"
#-----------------------------
#File: Settings_Seed.inc
#---------------------------
Seed Upload Settings==种子上传设置
With these settings you can configure if you have an account on a public accessible==如果你有一个公共服务器的账户, 可在此设置
server where you can host a seed-list file.==种子列表文件相关选项.
General Settings:==通用设置:
If you enable one of the available uploading methods, you will become a principal peer.==如果节点使用了以下某种上传方式, 则本机节点会成为主要节点.
Your peer will then upload the seed-bootstrap information periodically,==你的节点会定期上传种子启动信息,
but only if there have been changes to the seed-list.==前提是种子列表有变更.
Upload Method==上传方式
"Submit"=="提交"
Retry Uploading==重试上传
Here you can specify which upload method should be used.==在此指定上传方式.
Select 'none' to deactivate uploading.==选择'none'关闭上传
The URL that can be used to retrieve the uploaded seed file, like==能够上传种子文件的链接, 比如
#-----------------------------
#File: Settings_ServerAccess.inc
#---------------------------
Server Access Settings==服务器访问设置
IP-Number filter:==IP地址过滤:
(requires restart)==(要求重启)
Here you can restrict access to the server.==在此可以限制对服务器的访问。
By default, the access is not limited,==默认情况下, 不对访问作限制,
because this function is needed to spawn the p2p index-sharing function.==否则会影响p2p索引共享功能。
If you block access to your server (setting anything else than '*'), then you will also be blocked==如果作了访问限制(设置了不是'*'的任何值),
from using other peers' indexes for search service.==你也将在搜索服务中不能使用其他节点的索引。
However, blocking access may be correct in enterprise environments where you only want to index your==然而, 在企业环境中, 如果仅需要索引公司内部网页,
company's own web pages.==作相应限制则是正确的选项。
Filter have to be entered as IP, IP range or using CIDR notation separated by comma (e.g. 192.168.1.1,2001:db8::ff00:42:8329,192.168.1.10-192.168.1.20,192.168.1.30-40,192.168.2.0/24)==过滤器必须输入使用逗号分隔的IP、IP范围或CIDR符号 (比如 192.168.1.1,2001:db8::ff00:42:8329,192.168.1.10-192.168.1.20,192.168.1.30-40,192.168.2.0/24)
further details on format see Jetty==关于格式的进一步细节参见Jetty
<a href="http://download.eclipse.org/jetty/stable-9/apidocs/org/eclipse/jetty/util/InetAddressSet.html" target="_blank">InetAddressSet</a> documentation.==<a href="http://download.eclipse.org/jetty/stable-9/apidocs/org/eclipse/jetty/util/InetAddressSet.html" target="_blank">InetAddressSet</a>文档。
staticIP (optional):==静态IP (可选):
The staticIP can help that your peer can be reached by other peers in case that your==如果你在防火墙或者代理后,
peer is behind a firewall or proxy.</strong> You can create a tunnel through the firewall/proxy==静态IP设置能够确保其他节点能够找到你.</strong> 你可以创建一个穿过防火墙/代理的通道,
(look out for 'tunneling through https proxy with connect command') and create==(请搜索"通过链接命令创建https代理通道"了解更多)
an access point for incoming connections.==以给输入链接提供访问点。
This access address can be set here (either as IP number or domain name).==在此设置访问地址(IP地址或者域名)。
If the address of outgoing connections is equal to the address of incoming connections,==如果流出链接的地址和流入链接的相同,
you don't need to set anything here, please leave it blank.==请留空此栏.
ATTENTION: Your current IP is recognized as "#[clientIP]#".==注意: 当前你的IP被识别为"#[clientIP]#".
If the value you enter here does not match with this IP,==如果你输入的IP与此IP不符,
you will not be able to access the server pages anymore.==那么你就不能访问服务器页面了.
>fileHost:<==>文件服务器:<
Set this to avoid error-messages like 'proxy use not allowed / granted' on accessing your Peer by its hostname.==设置此选项可避免在通过服务器名访问对等服务器时出现‘代理使用不允许/已授权’等错误消息。
Virtual host for httpdFileServlet access for example http://FILEHOST/ shall access the file servlet and==用于 httpdFileServlet 访问的虚拟主机,
return the defaultFile at rootPath either way, http://FILEHOST/ denotes the same as http://localhost:&lt;port&gt;/==例如 http://FILEHOST/ 应访问文件服务器并以任一方式返回根路径下的默认文件,对预值'localpeer'而言http://FILEHOST/ 与 http://localhost:&lt;port&gt;/表示相同,
for the preconfigured value 'localpeer', the URL is: http://localpeer/.==地址为http://localpeer/。
"Submit"=="提交"
>Server Port Settings<==>服务器端口设置<
>Server port:<==>服务器端口:<
This is the main port for all http communication (default is 8090). A change requires a restart.==这是所有http通信的主端口默认值为8090。更改需要重新启动。
>Server ssl port:<==>服务器ssl端口:<
This is the port to connect via https (default is 8443). A change requires a restart.==这是通过https连接的端口默认为8443。更改需要重新启动。
>Shutdown port:<==>关机端口:<
This is the local port on the loopback address (127.0.0.1 or :1) to listen for a shutdown signal to stop the YaCy server (-1 disables the shutdown port, recommended default is 8005). A change requires a restart.==这是环地址127.0.0.1 或:1上的本地端口用于侦听关闭信号以停止YaCy服务器-1禁用关闭端口推荐默认值为8005。更改需要重新启动。
>Compression settings<==>压缩设置<
Compress responses with gzip==用gzip压缩响应
When checked (default), HTTP responses can be compressed using gzip.==选中时默认可以使用gzip压缩HTTP响应。
The requesting user-agent (a web browser, another YaCy peer or any other tool) uses the header 'Accept-Encoding' to tell whether it accepts gzip compression or not.==请求用户代理网页浏览器、另一个YaCy节点或任何其他工具使用标头'Accept-Encoding'来判断它是否接受gzip压缩。
This adds some processing overhead, but can significantly reduce the amount of bytes transmitted over the network.==这增加了一些处理开销,但可以显着减少通过网络传输的字节量。
>Changes need a server restart.<==>需重启服务器才能让改变生效。<
#-----------------------------
#File: Settings_UrlProxyAccess.inc
#---------------------------
URL Proxy Settings<==网址代理设置<
With this settings you can activate or deactivate URL proxy.==此设置可以打开或关闭网址代理.
Service call: ==服务调用: 
, where parameter is the url of an external web page.==, 其中参数是外部网页的地址.
>URL proxy:<==>网址代理:<
>Enabled<==>开启<
Globally enables or disables URL proxy via ==全局打开或关闭网址代理, 通过 
Show search results via URL proxy:==通过网址代理显示搜索结果:
Enables or disables URL proxy for all search results. If enabled, all search results will be tunneled through URL proxy.==为所有搜索结果打开或关闭网址代理. 如果开启, 所有搜索结果将通过网址代理传输.
Alternatively you may add this javascript to your browser favorites/short-cuts, which will reload the current browser address==或者你可以将此javascript添加到浏览器收藏夹/快捷方式中, 它将重新加载当前浏览器地址,
via the YaCy proxy servlet.==途径是YaCy代理servlet.
or right-click this link and add to favorites:==或者右键点击此链接并添加到收藏夹:
Restrict URL proxy use:==限制网址代理使用:
Define client filter. Default: ==定义客户端过滤器. 默认: 
URL substitution:==网址替换:
Define URL substitution rules which allow navigating in proxy environment. Possible values: all, domainlist. Default: domainlist.==定义允许在代理环境中导航的网址替换规则. 可能的值: all, domainlist. 默认: domainlist.
"Submit"=="提交"
#-----------------------------
#File: SettingsAck_p.html
#---------------------------
YaCy '#[clientname]#': Settings Acknowledge==YaCy '#[clientname]#': 设置确认
Settings Receipt:==设置回执:
No information has been submitted==未提交信息.
Error with submitted information.==提交信息发生错误.
Nothing changed.</p>==无任何改变.</p>
The user name must be given.==必须给出用户名.
Your request cannot be processed.==无法处理你的请求.
The password redundancy check failed. You have probably misstyped your password.==密码重复检查失败. 你可能输错了密码.
Shutting down.</strong><br />Application will terminate after working off all crawling tasks.==正在关闭.</strong><br />所有爬取任务完成后程序将终止.
Your administration account setting has been made.==已创建管理账户设置.
Your new administration account name is #[user]#. The password has been accepted.<br />If you go back to the Settings page, you must log-in again.==新帐户名是 #[user]#. 密码输入正确.<br />如果返回设置页面, 需要再次输入密码.
Your proxy access setting has been changed.==代理访问设置已改变.
Your proxy account check has been disabled, since you did not supply a password.==由于你未提供密码, 代理账户检查已被禁用.
The new proxy IP filter is set to==代理IP过滤设置为
The proxy port is:==代理端口号:
Port rebinding will be done in a few seconds.==端口在几秒后绑定完成.
You can reach your YaCy server under the new location==可以通过新位置访问YaCy服务器:
Your server access filter is now set to==服务器访问过滤为
Auto pop-up of the Status page is now <strong>disabled</strong>==自动弹出状态页面<strong>关闭.</strong>
Auto pop-up of the Status page is now <strong>enabled</strong>==自动弹出状态页面<strong>打开.</strong>
You are now permanently <strong>online</strong>.==你现在处于永久<strong>在线状态</strong>.
After a short while you should see the effect on the==稍后你可以在
status</a> page.==状态</a>页面看到变化.
The Peer Name is:==节点名:
Your static Ip(or DynDns) is:==静态IP(或DynDns)为:
Seed Settings changed.#(success)#::You are now a principal peer.==seed设置已改变.#(success)#::本地节点已成为主要节点.
Seed Settings changed, but something is wrong.==seed设置已改变, 但是未完全成功.
Seed Uploading was deactivated automatically.==seed上传自动关闭.
Please return to the settings page and modify the data.==请返回设置页面修改参数.
The remote-proxy setting has been changed==远端代理设置已改变.
The new setting is effective immediately, you don't need to re-start.==新设置立即生效.
The submitted peer name is already used by another peer. Please choose a different name.</strong> The Peer name has not been changed.==提交的节点名已存在, 请更改.</strong> 节点名未改变.
Your Peer Language is:==节点语言:
The submitted peer name is not well-formed. Please choose a different name.</strong> The Peer name has not been changed.==提交的节点名格式不正确, 请更改.</strong> 节点名未改变.
Peer names must not contain characters other than (a-z, A-Z, 0-9, '-', '_') and must not be longer than 80 characters.==节点名只能包含字符(a-z, A-Z, 0-9, '-', '_'), 且长度不得超过80个字符.
#The new parser settings where changed successfully.==新的解析器设置已成功保存.
Parsing of the following mime-types was enabled:==已启用以下mime类型的解析:
Seed Upload method was changed successfully.==seed上传方式改变成功.
You are now a principal peer.==本地节点已成为主要节点.
Seed Upload Method:==seed上传方式:
Seed File URL:==seed文件URL:
Your proxy networking settings have been changed.==代理网络设置已改变.
Transparent Proxy Support is:==透明代理支持:
Connection Keep-Alive Support is:==连接保持支持:
Your message forwarding settings have been changed.==消息发送设置已改变.
Message Forwarding Support is:==消息发送支持:
Message Forwarding Command:==消息转发命令:
Recipient Address:==收件人地址:
You are now <strong>event-based online</strong>.==你现在处于<strong>事件驱动在线</strong>.
You are now in <strong>Cache Mode</strong>.==你现在处于<strong>缓存模式</strong>.
Only Proxy-cache ist available in this mode.==此模式下仅代理缓存可用.
You can now go back to the==现在可返回
Settings</a> page if you want to make more changes.==设置</a> 页面, 如果需要更改更多参数的话.
#-----------------------------
#File: sharedBlacklist_p.html
#---------------------------
Shared Blacklist==共享黑名单
Add Items to Blacklist==添加词条到黑名单
Unable to store the items into the blacklist file:==不能存储词条到黑名单文件:
#File Error! Wrong Path?==文件错误! 路径错误?
YaCy-Peer &quot;<span class="settingsValue">#[name]#</span>&quot; not found.==YaCy peer&quot;<span class="settingsValue">#[name]#</span>&quot; 未找到.
not found or empty list.==未找到或者列表为空.
Wrong Invocation! Please invoke with==调用错误! 请使用以下方式调用
Blacklist source:==黑名单源:
Blacklist target:==黑名单目标:
Blacklist item==黑名单词条
"select all"=="全部选择"
"deselect all"=="全部反选"
value="add"==value="添加"
#-----------------------------
#File: Status_p.inc
#---------------------------
System Status==系统状态
System==系统
YaCy version==YaCy版本
Unknown==未知
Uptime:==运行时间:
Processors:==处理器:
Load:==负载:
Threads:==线程:
peak:==峰值:
total:==全部:
Protection==保护
Password is missing==无密码
password-protected==受密码保护
Unrestricted access from localhost==本地无限制访问
Address</dt>==地址</dt>
peer address not assigned==未分配节点地址
Host:==服务器:
Public Address:==公共地址:
YaCy Address:==YaCy地址:
Proxy</dt>==代理</dt>
Transparent ==透明代理
not used==未使用
broken::connected==断开::连接
broken==已断开
connected==已连接
Used for YaCy -> YaCy communication:==用于YaCy -> YaCy通信:
WARNING:==警告:
You do this on your own risk.==此操作风险自负.
If you do this without YaCy running on a desktop-pc or without Java 6 installed, this will possibly break startup.==如果你的YaCy不是运行在台式机上, 或者未安装Java 6, 此操作可能会破坏启动过程.
In this case, you will have to edit the configuration manually in DATA/SETTINGS/yacy.conf==在此情况下, 你需要手动修改配置文件 DATA/SETTINGS/yacy.conf
Remote:==远端:
Tray-Icon==任务栏图标
Experimental<==实验性的<
Yes==是
No==否
Auto-popup on start-up==启动时自动弹出
Disabled==关闭
Enable]==打开]
Enabled==开启
Disable]==关闭]
Memory Usage==内存使用
RAM used:==占用内存:
RAM max:==最大内存:
DISK used:==占用硬盘:
(approx.)==(大约)
DISK free:==可用硬盘:
on::off==开::关
Configure==配置
max:==最大:
Traffic ==流量
>Reset==>重置
Proxy:==代理:
Crawler:==爬虫:
Incoming Connections==流入连接
Active:==活动:
Max:==最大:
Loader Queue==加载器队列
paused==已暂停
>Queues<==>队列<
Local Crawl==本地爬取
Remote triggered Crawl==远端触发的爬取
Pre-Queueing==预排序
Seed server==种子服务器
Enabled: Updating to server==开启: 与服务器同步
Last upload: #[lastUpload]# ago.==最后上传: #[lastUpload]# 以前.
Enabled: Updating to file==开启: 与文件同步
YaCy version:==YaCy版本:
Java version:==Java版本:
>Experimental<==>实验性的<
Enabled <a==开启 <a
Reset</a>==重置</a>
#-----------------------------
#File: Status.html
#---------------------------
Console Status==控制台状态
Log-in as administrator to see full status==登录管理用户以查看完整状态
Welcome to YaCy!==欢迎使用YaCy!
Your settings are _not_ protected!</strong>==你的设置 _未_ 受保护!</strong>
Please open the <a href="ConfigAccounts_p.html">accounts configuration</a> page <strong>immediately</strong>==请<strong>立即</strong>打开<a href="ConfigAccounts_p.html">账户配置</a>页面
and set an administration password.==并设置管理密码.
Access is unrestricted from localhost (this includes administration features).==访问权限在localhost不受限制这包括管理功能
Please check the <a href="ConfigAccounts_p.html">accounts configuration</a> page to ensure that the settings match the security level you need.==请检查<a href="ConfigAccounts_p.html">帐户配置</a>页面,确保设置符合你所需的安全级别。
You have not published your peer seed yet. This happens automatically, just wait.==尚未发布你的节点种子. 将会自动发布, 请稍候。
The peer must go online to get a peer address.==节点必须上线以获得节点地址。
You cannot be reached from outside.==外部不能访问你的节点。
A possible reason is that you are behind a firewall, NAT or Router.==很可能是因为你被防火墙, NAT或者路由器阻挡在后面。
But you can <a href="index.html">search the internet</a> using the other peers'==但是你依然能在<a href="index.html">你的搜索页面</a>
global index on your own search page.==通过其他节点的全球索引进行搜索。
"bad"=="坏"
"idea"=="主意"
"good"=="好"
"Follow YaCy on Twitter"=="在Twitter上关注YaCy"
We encourage you to open your firewall for the port you configured (usually: 8090),==我们建议你为所配置的端口(通常是:8090)开放防火墙,
or to set up a 'virtual server' in your router settings (often called DMZ).==或者在路由器中建立一个'虚拟服务器'(常叫做DMZ)。
Please be fair, contribute your own index to the global index.==请公平地贡献你的索引给全球索引。
Free disk space is lower than #[minSpace]#. Crawling has been disabled. Please fix==空闲硬盘空间低于 #[minSpace]#. 爬取已被关闭,
it as soon as possible and restart YaCy.==请尽快修复并重启YaCy.
Free memory is lower than #[minSpace]#. DHT-in has been disabled. Please fix==空闲内存低于 #[minSpace]#. DHT-in已被关闭,
Crawling is paused! If the crawling was paused automatically, please check your disk space.==爬取暂停! 如果这是自动暂停的,请检查你的硬盘空间。
Latest public version is==最新版本为
You can download a more recent version of YaCy. Click here to install this update and restart YaCy:==你可以下载最新版本YaCy, 点此进行升级并重启:
Install YaCy==安装YaCy
You can download the latest releases here:==你可以在此处下载最新版本:
You are running a server in senior mode and you support the global internet index,==服务器运行在高级模式, 并支持全球索引,
which you can also <a href="index.html">search yourself</a>.==你也能进行<a href="index.html">本地搜索</a>.
You have a principal peer because you publish your seed-list to a public accessible server==你是一个主要节点, 因为你把种子列表发布到了一个可公开访问的服务器,
where it can be retrieved using the URL==可使用此URL获取:
Your Web Page Indexer is idle. You can start your own web crawl <a href="CrawlStartSite.html">here</a>==网页索引器当前空闲. 可以点击<a href="CrawlStartSite.html">这里</a>开始爬取网页
Your Web Page Indexer is busy. You can <a href="Crawler_p.html">monitor your web crawl</a> here==网页索引器当前忙碌. 点击<a href="Crawler_p.html">这里</a>查看状态
If you need professional support, please write to==如果你需要专业级支持, 请写信至
For community support, please visit our==如果只是社区支持, 请访问我们的
>forum<==>论坛<
#-----------------------------
#File: Steering.html
#---------------------------
Steering</title>==控制</title>
Checking peer status...==正在检查节点状态...
Peer is online again, forwarding to status page...==节点再次上线, 正在传输状态...
Peer is not online yet, will check again in a few seconds...==节点尚未上线, 几秒后重新检测...
No action submitted==未提交动作
Go back to the <a href="Settings_p.html">Settings</a> page==将返回<a href="Settings_p.html">设置</a>页面
Your system is not protected by a password==你的系统未受密码保护
Please go to the <a href="ConfigAccounts_p.html">User Administration</a> page and set an administration password.==请在<a href="ConfigAccounts_p.html">用户管理</a>页面设置管理密码.
You don't have the correct access right to perform this task.==无执行此任务权限.
Please log in.==请登录.
You can now go back to the <a href="Settings_p.html">Settings</a> page if you want to make more changes.==你现在可以返回<a href="Settings_p.html">设置</a>页面进行详细设置.
See you soon!==下次再见!
Just a moment, please!==请稍候.
Application will terminate after working off all scheduled tasks.==程序在所有任务完成后将停止.
Please send us feed-back!==请给我们反馈!
We don't track YaCy users, YaCy does not send 'home-pings', we do not even know how many people use YaCy as their private search engine.==我们不跟踪YaCy用户, YaCy不会发送'home-ping', 我们甚至不知道有多少人将YaCy用作他们的私人搜索引擎。
Therefore we like to ask you: do you like YaCy?==所以我们想问你你喜欢YaCy吗
Will you use it again... if not, why?==你会再次使用它吗?如果不是,为什么?
Is it possible that we change a bit to suit your needs==我们有可能改变一下以满足你的需求吗
Please send us feed-back about your experience with an==请向我们发送有关你的体验的回馈
Professional Support==专业级支持
If you are a professional user and you would like to use YaCy in your company in combination with consulting services by YaCy specialists, please see==如果你是专业用户,并且希望在公司中使用YaCy并获得YaCy专家的咨询服务,请参阅
Then YaCy will restart.==然后YaCy会重新启动.
If you can't reach YaCy's interface after 5 minutes restart failed.==如果5分钟后不能访问此页面说明重启失败.
Installing release==正在安装
YaCy will be restarted after installation==YaCy在安装完成后会重新启动
#-----------------------------
#File: Supporter.html
#---------------------------
Supporter<==参与者<
"Please enter a comment to your link recommendation. (Your Vote is also considered without a comment.)"
Supporter are switched off for users without authorization==未授权用户不属于参与者范畴
"bookmark"=="书签"
"Add to bookmarks"=="添加到书签"
"positive vote"=="好评"
"Give positive vote"=="给予好评"
"negative vote"=="差评"
"Give negative vote"=="给予差评"
provided by YaCy peers with an URL in their profile. This shows only URLs from peers that are currently online.==由资料中含有地址的YaCy节点提供. 仅显示当前在线节点的地址.
#-----------------------------
#File: Surftips.html
#---------------------------
Surftips</title>==建议</title>
Surftips</h2>==建议</h2>
Surftips are switched off==建议已关闭
title="bookmark"==title="书签"
alt="Add to bookmarks"==alt="添加到书签"
title="positive vote"==title=="好评"
alt="Give positive vote"==alt="给予好评"
title="negative vote"==title=="差评"
alt="Give negative vote"==alt="给予差评"
YaCy Supporters<==YaCy参与者<
>a list of home pages of yacy users<==>YaCy用户的主页列表<
provided by YaCy peers using public bookmarks, link votes and crawl start points==由使用公共书签, 网址评价和爬取起始点的节点提供
"Please enter a comment to your link recommendation. (Your Vote is also considered without a comment.)"=="输入推荐链接备注. (可留空.)"
"authentication required"=="需要认证"
Hide surftips for users without autorization==隐藏非认证用户的建议功能
Show surftips to everyone==所有人均可使用建议
#-----------------------------
#File: Table_API_p.html
#---------------------------
: Peer Steering==: 节点控制
The information that is presented on this page can also be retrieved as XML.==此页面上显示的信息也可以作为XML检索。
Click the API icon to see the XML.==点击API图标查看XML。
To see a list of all APIs, please visit the ==要查看所有API的列表, 请访问
API wiki page==API百科页面
>Process Scheduler<==>进程调度器<
This table shows actions that had been issued on the YaCy interface==此表显示通过YaCy界面发出的动作,
to change the configuration or to request crawl actions.==这些动作用于更改配置或请求爬取.
These recorded actions can be used to repeat specific actions and to send them==这些已记录的动作可用于重复特定动作,
to a scheduler for a periodic execution.==也可将其发送给调度器以便周期性执行.
>Recorded Actions<==>已记录的动作<
"next page"=="下一页"
"previous page"=="上一页"
of #[of]#== 共 #[of]#
>Type==>类型
>Comment==>注释
Call Count<==调用次数<
Recording&nbsp;Date==记录的日期
Last&nbsp;Exec&nbsp;Date==上次执行日期
Next&nbsp;Exec&nbsp;Date==下次执行日期
>Event Trigger<==>事件触发器<
"clone"=="clone"
>Scheduler<==>调度器<
>no event<==>无事件<
>activate event<==>激活事件<
>no repetition<==>不重复<
>activate scheduler<==>激活调度器<
>off<==>关闭<
>run once<==>执行一次<
>run regular<==>定期执行<
>after start-up<==>在启动后<
"Execute Selected Actions"=="执行选中的行为"
"Delete Selected Actions"=="删除选中的行为"
"Delete all Actions which had been created before "=="删除创建于之前的全部行为"
day<==天<
days<==天<
week<==周<
weeks<==周<
month<==月<
months<==月<
year<==年<
years<==年<
>Result of API execution==>API执行结果
>minutes<==>分钟<
>hours<==>小时<
Scheduled actions are executed after the next execution date has arrived within a time frame of #[tfminutes]# minutes.==已安排的动作会在下次执行日期到达后的 #[tfminutes]# 分钟时间窗口内执行。
To see a list of all APIs, please visit the==要查看所有API的列表, 请访问
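# 检索本页XML版本的示例调用(假设节点运行在默认端口8090, 且该servlet提供同名的 .xml 输出; 此为假设性示意):
# http://localhost:8090/Table_API_p.xml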
#-----------------------------
#File: Table_RobotsTxt_p.html
#---------------------------
Table Viewer==表格查看器
The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML.
Click the API icon to see the XML.==点击API图标查看XML.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
API wiki page==API百科页面
To see a list of all APIs, please visit the==要查看所有API的列表请访问
>robots.txt table<==>robots.txt列表<
#-----------------------------
#File: Table_YMark_p.html
#---------------------------
Table Viewer==表格查看
YMark Table Administration==YMark表格管理
Table Editor: showing table==表格编辑器: 显示表格
"Edit Selected Row"=="编辑选中行"
"Add a new Row"=="添加新行"
"Delete Selected Rows"=="删除选中行"
"Delete Table"=="删除表格"
"Rebuild Index"=="重建索引"
Primary Key==主键
>Row Editor<==>行编辑器<
"Commit"=="备注"
Table Selection==选择表格
Select Table:==选择表格:
show max. entries==显示最多词条
>all<==>所有<
Display columns:==显示列:
"load"=="载入"
Search/Filter Table==搜索/过滤表格
search rows for==搜索
"Search"=="搜索"
#>Tags<==>Tags<
>select a tag<==>选择标签<
>Folders<==>目录<
>select a folder<==>选择目录<
>Import Bookmarks<==>导入书签<
#Importer:==Importer:
#>XBEL Importer<==>XBEL Importer<
#>Netscape HTML Importer<==>Netscape HTML Importer<
"import"=="导入"
#-----------------------------
### This Tables section is removed in current SVN Versions
#File: Tables_p.html
#---------------------------
Table Viewer==表查看器
entries==词条
Table Administration==表格管理
Table Selection==选择表格
Select Table:==选择表格:
#"Show Table"=="Zeige Tabelle"
show max.==显示最多.
>all<==>全部<
entries,==个词条,
search rows for==搜索内容
"Search"=="搜索"
Table Editor: showing table==表格编辑器: 显示表格
#PK==主键
"Edit Selected Row"=="编辑选中行"
"Add a new Row"=="添加新行"
"Delete Selected Rows"=="删除选中行"
"Delete Table"=="删除表格"
Row Editor==行编辑器
Primary Key==主键
"Commit"=="备注"
#-----------------------------
#File: terminal_p.html
#---------------------------
YaCy Peer Live Monitoring Terminal==YaCy节点实时监控终端
YaCy System Terminal Monitor==YaCy系统终端监控器
#YaCy System Monitor==YaCy System Monitor
Search Form==搜索页面
Crawl Start==开始爬取
Status Page==状态页面
Confirm Shutdown==确认关闭
>&lt;Shutdown==>&lt;关闭程序
Event Terminal==事件终端
Image Terminal==图形终端
Domain Monitor==域监控器
"Loading Processing software..."=="正在载入软件..."
This browser does not have a Java Plug-in.==此浏览器没有安装Java插件.
Get the latest Java Plug-in here.==在此获取最新的Java插件.
Resource Monitor==资源监控器
Network Monitor==网络监控器
#-----------------------------
#File: Threaddump_p.html
#---------------------------
YaCy Debugging: Thread Dump==YaCy调试: 线程转储
Threaddump<==线程转储<
"Single Threaddump"=="单次线程转储"
"Multiple Dump Statistic"=="多次转储统计"
#"create Threaddump"=="Threaddump erstellen"
#-----------------------------
#File: TransNews_p.html
#---------------------------
Translation News for Language==语言翻译新闻
Translation News==翻译新闻
You can share your local addition to translations and distribute it to other peers.==你可以分享你的本地翻译,并分发给其他节点。
The remote peer can vote on your translation and add it to the own local translation.==远端节点可以对你的翻译进行投票并将其添加到他们的本地翻译中。
entries available==可用的词条
"Publish"=="发布"
You can check your outgoing messages==你可以检查你的传出消息
>here<==>这儿<
To edit or add local translations you can use==要编辑或添加本地翻译,你可以用
File:==文件:
Translation:==翻译:
>score==>分数
negative vote==反对票
positive vote==赞成票
Vote on this translation==对这个翻译投票
If you vote positive the translation is added to your local translation list==如果你投赞成票,翻译将被添加到你的本地翻译列表中
>Originator<==>发起人<
#-----------------------------
#File: Translator_p.html
#---------------------------
Translation Editor==翻译编辑器
Translate untranslated text of the user interface (current language).==翻译用户界面中未翻译的文本(当前语言)。
UI Translation==界面翻译
Target Language:==目标语言:
activate a different language==激活另一种语言
Source File==源文件
view it==查看
filter untranslated==列出未翻译项
Source Text==源文
Translated Text==翻译
Save translation==保存翻译
The modified translation file is stored in DATA/LOCALE directory.==修改的翻译文件储存在 DATA/LOCALE 目录下
#-----------------------------
#File: User.html
#---------------------------
User Page==用户页面
You are not logged in.<br />==当前未登录.<br />
Username:==用户名:
Password: <input==密码: <input
"login"=="登录"
You are currently logged in as #[username]#.==当前作为 #[username]# 登录.
You have used==你已使用
minutes of your onlinetime limit of==分钟, 你的每日在线时限为
minutes per day.==分钟.
old Password==旧密码
new Password<==新密码<
new Password(repetition)==新密码(重复)
"Change"=="改变"
You are currently logged in as admin.==当前作为管理员登录.
value="logout"==value="注销"
(after logout you will be prompted for your password again. simply click "cancel")==(注销后需要重新输入密码)
Password was changed.==密码已改变.
Old Password is wrong.==旧密码错误.
New Password and its repetition do not match.==新密码两次输入不匹配.
New Password is empty.==新密码为空.
#-----------------------------
#File: ViewFile.html
#---------------------------
See the page info about the url.==查看关于此地址的页面信息。
"Show Metadata"=="显示元数据"
"Browse Host"=="浏览服务器"
Citation Report==引用报告
Collections==收集
MimeType:==Mime类型:
Search in Document:==在文档中搜索:
"Show Snippet"=="显示摘录"
(click this for full metadata)==(点击这个,获得完整的元数据)
View URL Content==查看地址内容
>Get URL Viewer<==>获取地址查看器<
>URL Metadata<==>地址元数据<
URL==地址
#Hash==Hash
Word Count==字数
Description==描述
Size==大小
View as==查看形式
#Original==Original
Plain Text==文本
Parsed Text==解析文本
Parsed Sentences==解析句子
Parsed Tokens/Words==解析令牌/字
Link List==链接列表
"Show"=="显示"
Unable to find URL Entry in DB==无法找到数据库中的链接.
Invalid URL==无效链接
Unable to download resource content.==无法下载资源内容.
Unable to parse resource content.==无法解析资源内容.
Unsupported protocol.==不支持的协议.
>Original Content from Web<==>网页原始内容<
Parsed Content==解析内容
>Original from Web<==>网页原始内容<
>Original from Cache<==>缓存原始内容<
>Parsed Tokens<==>解析令牌<
#-----------------------------
#File: ViewLog_p.html
#---------------------------
Server Log==服务器日志
Lines==行
reversed order==倒序排列
"refresh"=="刷新"
#-----------------------------
#File: ViewProfile.html
#---------------------------
Local Peer Profile:==本地节点资料:
Remote Peer Profile==远端节点资料
Wrong access of this page==页面权限错误
The requested peer is unknown or a potential peer.==所请求节点未知或者是潜在节点.
The profile can't be fetched.==无法获取资料.
The peer==节点
is not online.==当前不在线.
This is the Profile of==资料
#Name==Name
#Nick Name==Nick Name
#Homepage==Homepage
#eMail==eMail
#ICQ==ICQ
#Jabber==Jabber
#Yahoo!==Yahoo!
#MSN==MSN
#Skype==Skype
Comment==注释
View this profile as==查看方式
> or==> 或者
#vCard==vCard
#-----------------------------
#File: Vocabulary_p.html
#---------------------------
>Vocabulary Administration<==>词汇管理<
Vocabularies can be used to produce a search navigation.==词汇表可用于生成搜索导航.
A vocabulary must be created before content is indexed.==必须在索引内容之前创建词汇.
The vocabulary is used to annotate the indexed content with a reference to the object that is denoted by the term of the vocabulary.==词汇用于通过引用由词汇的术语表示的对象来注释索引的内容.
The object can be denoted by a url stub that, combined with the term, becomes the url for the object.==该对象可以用地址存根表示,该存根与该术语一起成为该对象的地址.
>Vocabulary Selection<==>词汇选择<
>Vocabulary Name<==>词汇名<
"View"=="查看"
>Vocabulary Production<==>词汇生成<
Empty Vocabulary== 空词汇
>Auto-Discover<==>自动发现<
> from file name==> 来自文件名
> from page title (splitted)==> 来自页面标题(拆分)
> from page title==> 来自页面标题
> from page author==> 来自页面作者
>Objectspace<==>对象空间<
It is possible to produce a vocabulary out of the existing search index.==可以从现有搜索索引中生成词汇表.
This is done using a given 'objectspace' which you can enter as a URL Stub.==这是使用给定的“对象空间”完成的,你可以将其作为地址存根输入.
This stub is used to find all matching URLs.==此存根用于查找所有匹配的地址.
If the remaining path from the matching URLs then denotes a single file, the file name is used as vocabulary term.==如果来自匹配地址的剩余路径表示单个文件,则文件名用作词汇表术语.
This works best with wikis.==这适用于百科.
Try to use a wiki url as objectspace path.==尝试使用百科地址作为对象空间路径
Import from a csv file==从csv文件导入
>File Path or==>文件路径或者
>Start line<==>起始行<
>Column for Literals<==>文本列<
>Synonyms<==>同义词<
>no Synonyms<==>无同义词<
>Auto-Enrich with Synonyms from Stemming Library<==>使用词干库中的同义词自动丰富<
>Read Column<==>读取列<
>Column for Object Link (optional)<==>对象链接列(可选)<
>Charset of Import File<==>导入文件字符集<
>Column separator<==>列分隔符<
"Create"=="创建"
#-----------------------------
#File: WatchWebStructure_p.html
#---------------------------
>Text<==>文本<
>Pivot Dot<==>枢轴点<
"WebStructurePicture"=="网页结构图"
>Other Dot<==>其他点<
API wiki page==API 百科页面
To see a list of all APIs, please visit the==要查看所有API的列表, 请访问
>Host List<==>服务器列表<
To see a list of all APIs==要查看所有API的列表
The data that is visualized here can also be retrieved in a XML file, which lists the reference relation between the domains.==此页面数据显示域之间的关联关系, 能以XML文件形式查看.
With a GET-property 'about' you get only reference relations about the host that you give in the argument field for 'about'.==使用GET属性'about'仅能获得带有'about'参数的域关联关系.
With a GET-property 'latest' you get a list of references that had been computed during the current run-time of YaCy, and with each next call only an update to the next list of references.==使用GET属性'latest'能获得当前的关联关系列表, 并且每一次调用都只能更新下一级关联关系列表.
Click the API icon to see the XML file.==点击API图标查看XML文件.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
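# 示例调用(假设节点运行在默认端口8090, 且XML接口位于 api/webstructure.xml; 'about' 参数含义见上文说明, 此为示意):
# http://localhost:8090/api/webstructure.xml?about=yacy.net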
Web Structure==网页结构
host<==服务器<
depth<==深度<
nodes<==节点<
time<==时间<
size<==大小<
>Background<==>背景<
>Line<==>线<
>Dot<==>点<
>Dot-end<==>末点<
>Color <==>颜色<
"change"=="改变"
#-----------------------------
#File: Wiki.html
#---------------------------
YaCyWiki page:==YaCyWiki:
last edited by==最后编辑由
change date==改变日期
Edit<==编辑<
only granted to admin==只授权给管理员
Grant Write Access to==将写权限授予
# !!! Do not translate the input buttons because that breaks the function to switch rights !!!
#"all"=="Allen"
#"admin"=="Administrator"
Start Page==开始页面
Index==索引
Versions==版本
Author:==作者:
#Text:==Text:
You can use==你可以在这使用
Wiki Code</a> here.==wiki代码</a>.
"edit"=="编辑"
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
>Preview==>预览
No changes have been submitted so far!==未提交任何改变!
Subject==主题
Change Date==改变日期
Last Author==最后作者
IO Error reading wiki database:==读取wiki数据库时出现IO错误:
Select versions of page==选择页面版本
Compare version from==原始版本
"Show"=="显示"
with version from==对比版本
"current"=="当前"
"Compare"=="对比"
Return to==返回
Changes will be published as announcement on YaCyNews==改变会被发布在YaCy新闻中.
#-----------------------------
#File: WikiHelp.html
#---------------------------
to embed this video:==嵌入此视频:
Text will be displayed <span class="underline">underlined</span>.==文本将以<span class="underline">下划线</span>显示.
Code==代码
This tag displays a Youtube or Vimeo video with the id specified and fixed width 425 pixels and height 350 pixels.==此标签按指定的id显示一个宽425像素、高350像素的Youtube或Vimeo视频.
i.e. use==比如用
Wiki Help==Wiki帮助
Wiki-Code==Wiki代码
This table contains a short description of the tags that can be used in the Wiki and several other servlets==此表简述了可在Wiki和YaCy其他几个servlet中使用的标签,
of YaCy. For a more detailed description visit the==详情请访问
#YaCy Wiki==YaCy Wiki
Description==描述
#=headline===headline
These tags create headlines. If a page has three or more headlines, a table of content will be created automatically.==这些标签创建标题. 如果页面有三个或更多标题, 会自动生成目录.
Headlines of level 1 will be ignored in the table of content.==目录中会忽略一级标题.
#text==Text
These tags create stressed texts. The first pair emphasizes the text (most browsers will display it in italics),==这些标签创建强调文本. 第一对用于强调内容(多数浏览器以斜体显示),
the second one emphazises it more strongly (i.e. bold) and the last tags create a combination of both.==第二对用粗体表示, 第三对为两者的联合.
Text will be displayed <span class="strike">stricken through</span>.==文本内容以<span class="strike">删除线</span>表示.
Lines will be indented. This tag is supposed to mark citations, but may as well be used for styling purposes.==缩进内容, 此标记主要用于引用, 也能用于标识样式.
#point==point
These tags create a numbered list.==此标记用于有序列表.
#something<==something<
#another thing==another thing
#and yet another==and yet another
#something else==something else
These tags create an unnumbered list.==用于创建无序列表.
#word==word
#:definition==:definition
These tags create a definition list.==用于创建定义列表.
This tag creates a horizontal line.==创建水平线.
#pagename==pagename
#description]]==description]]
This tag creates links to other pages of the wiki.==创建到其他wiki页面的链接.
This tag displays an image, it can be aligned left, right or center.==显示图片, 可设置左对齐, 右对齐和居中.
These tags create a table, whereas the first marks the beginning of the table, the second starts==用于创建表格, 第一个标记为表格开头, 第二个为换行,
a new line, the third and fourth each create a new cell in the line. The last displayed tag==第三个与第四个创建列.
closes the table.==最后一个为表格结尾.
The escape tags will cause all tags in the text between the starting and the closing tag to not be treated as wiki-code.==转义标签会使起始与结束标签之间文本中的所有标签不被当作wiki代码处理.
A text between these tags will keep all the spaces and linebreaks in it. Great for ASCII-art and program code.==此标记之间的文本会保留所有空格和换行, 主要用于ASCII艺术图片和编程代码.
If a line starts with a space, it will be displayed in a non-proportional font.==如果一行以空格开头, 则会以等宽字体显示.
url description==URL描述
This tag creates links to external websites.==此标记创建外部网站链接.
alt text==替代文本
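# 依据上表描述拼出的一小段wiki代码草图(具体语法请以你节点上的Wiki帮助页为准, 以下仅为假设性示例):
# [[pagename]]                  指向其他wiki页面的内部链接
# [http://example.com 示例链接]  指向外部网站的链接
#  以空格开头的行会以等宽字体显示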
#-----------------------------
#File: yacyinteractive.html
#---------------------------
YaCy Interactive Search==YaCy交互搜索
This search result can also be retrieved as RSS/<a href="http://www.opensearch.org" target="_blank">opensearch</a> output.==此搜索结果也能以RSS/<a href="http://www.opensearch.org" target="_blank">opensearch</a>形式检索。
The query format is similar to <a href="http://www.loc.gov/standards/sru/" target="_blank">SRU</a>.==请求的格式与<a href="http://www.loc.gov/standards/sru/" target="_blank">SRU</a>相似。
Click the API icon to see an example call to the search rss API.==点击API图标查看调用rss API的示例。
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API百科页面</a>。
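# 调用搜索RSS API的示例(假设节点运行在默认端口8090; query参数为搜索词):
# http://localhost:8090/yacysearch.rss?query=yacy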
>loading from local index...<==>从本地索引加载...<
"Search"=="搜索"
"Search..."=="搜索中..."
#-----------------------------
#File: yacysearch_location.html
#---------------------------
YaCy '#[clientname]#': Location Search==YaCy '#[clientname]#':位置搜索
The information that is presented on this page can also be retrieved as XML==此页面上显示的信息也可以作为XML检索
Click the API icon to see the XML.==单击 API 图标以查看 XML。
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki page</a>.==要查看所有 API 的列表,请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API" target="_blank">API wiki</a>页面。
>search<==>搜索<
#-----------------------------
#File: yacysearch.html
#---------------------------
# Do not translate id="search" and rel="search" which only have technical html semantics
Search Page==搜索页面
This search result can also be retrieved as RSS/<a href="http://www.opensearch.org" target="_blank">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org" target="_blank">opensearch</a>形式表示.
Click the RSS icon to see this search result as RSS message stream.==单击 RSS 图标可将此搜索结果视为 RSS 消息流。
Use the RSS search result format to add static searches to your RSS reader, if you use one.==使用 RSS 搜索结果格式将静态搜索添加到你的 RSS 阅读器(如果你使用的话)。
>search<==>搜索<
"search again"=="再次搜索"
innerHTML = 'search'==innerHTML = '搜索'
Illegal URL mask:==非法网址掩码:
(not a valid regular expression), mask ignored.==(不是一个有效的正则表达式),掩码忽略.
Illegal prefer mask:==非法首选掩码:
Did you mean:==你想搜:
The following words are stop-words and had been excluded from the search:==以下关键词是停用词, 已从搜索中排除:
No Results.==未找到.
length of search words must be at least 1 character==搜索文本最少一个字符
Searching the web with this peer is disabled for unauthorized users. Please==对于未经授权的用户将禁用使用此节点搜索Web。 请
>log in<==>登录<
as administrator to use the search function==作为管理员使用搜索功能
Location -- click on map to enlarge==位置 -- 点击地图放大
Map (c) by <==地图 (c) 由 <
and contributors, CC-BY-SA==和贡献者, CC-BY-SA
>Media<==>媒体<
> of==> 共
> local,==> 本地,
remote from==远端 来自
YaCy peers).==YaCy 节点).
#-----------------------------
#File: yacysearchitem.html
#---------------------------
"bookmark"=="书签"
"recommend"=="推荐"
"delete"=="删除"
Pictures==图片
#-----------------------------
#File: YaCySearchPluginFF.html
#---------------------------
#[clientname]#: Firefox Search Plugin==#[clientname]#: Firefox搜索插件
YaCy Firefox Search-Plugin Installation:==YaCy Firefox 搜索插件安装:
Simply click on the link shown below to integrate the YaCy Firefox Search-Plugin into your browser.==只需点击下面显示的链接即可将YaCy Firefox搜索插件集成到浏览器中。
In Mozilla Firefox, you can the Search-Plugin via the search box on the toolbar.<br />In Mozilla (Seamonkey) you can access the Search-Plugin via the Sidebar or the Location Bar.==在Mozilla Firefox中你可以通过工具栏上的搜索框打开搜索插件。<br />在MozillaSeamonkey你可以通过侧栏或位置栏访问搜索插件。
Install the YaCy search plugin.==安装YaCy搜索插件。
#-----------------------------
#File: yacysearchtrailer.html
#---------------------------
show search results for "#[query]#" on map==在地图上显示 "#[query]#" 的搜索结果
Your search is done using peers in the YaCy P2P network.==你的搜索是靠YaCy P2P网络中的节点完成的。
You can switch to 'Stealth Mode' which will switch off P2P, giving you full privacy. Expect less results then, because then only your own search index is used.==你可以切换到'隐身模式', 它将关闭P2P, 给你完全的隐私。届时搜索结果会变少, 因为只使用你自己的搜索索引。
Your search is done using only your own peer, locally.==你的搜索是靠在本地的YaCy节点完成的。
You can switch to 'Peer-to-Peer Mode' which will cause that your search is done using the other peers in the YaCy network.==你可以切换到'P2P模式', 这样你的搜索将使用YaCy网络中的其他节点完成。
>Provider==>提供者
>Name Space==>命名空间
>Author==>作者
>Filetype==>文件类型
>Language==>语言
>Peer-to-Peer<==>P2P<
Stealth Mode==隐身模式
Privacy==隐私
Context Ranking==按内容排名
Sort by Date==按日期排序
Documents==文件
Images==图片
>Documents==>文件
>Images==>图片
#-----------------------------
#File: YMarks.html
#---------------------------
"Import"=="导入"
documents==文件
days==天
hours==小时
minutes==分钟
for new documents automatically==自动针对新文件
run this crawl once==爬取一次
>Query<==>查询<
Query Type==查询类型
>Import<==>导入<
Tag Manager==标签管理器
Bookmarks (user: #[user]# size: #[size]#)==书签(用户: #[user]# 大小: #[size]#)
"Replace"=="替换"
#-----------------------------
### Subdirectory api ###
#File: api/citation.html
#---------------------------
Document Citations for==文档引用
List of other web pages with citations==带有引用的其他网页列表
Similar documents from different hosts:==来自不同服务器的类似文件:
#-----------------------------
#File: api/table_p.html
#---------------------------
Table Viewer==查看表格
"Edit Table"=="编辑表格"
#-----------------------------
#File: api/yacydoc.html
#---------------------------
>Title<==>标题<
>Author<==>作者<
>Description<==>描述<
>Subject<==>主题<
>Publisher<==>发布者<
>Contributor<==>贡献者<
>Date<==>日期<
>Type<==>类型<
>Identifier<==>标识符<
>Language<==>语言<
>Load Date<==>加载日期<
>Referrer Identifier<==>关联标识符<
#>Referrer URL<==>Referrer URL<
>Document size<==>文件大小<
>Number of Words<==>词数<
#-----------------------------
### Subdirectory env/templates ###
#File: env/templates/header.template
#---------------------------
>&nbsp;Administration<==>&nbsp;管理<
"Search..."=="搜索..."
Re-Start<==重启<
Shutdown<==关闭<
Forum==论坛
Help==帮助
About This Page==关于此页面
JavaScript information==JavaScript信息
external==外部
&nbsp;&nbsp;&nbsp;YaCy Tutorials==&nbsp;&nbsp;&nbsp;YaCy教程
&nbsp;&nbsp;&nbsp;Download YaCy==&nbsp;&nbsp;&nbsp;下载YaCy
&nbsp;&nbsp;&nbsp;Community (Web Forums)==&nbsp;&nbsp;&nbsp;社区(网页论坛)
&nbsp;&nbsp;&nbsp;Git Repository==&nbsp;&nbsp;&nbsp;Git库
> Sponsor<==> 赞助<
YaCy is free software, so we need the help of many to support the development.==YaCy是免费开源软件所以我们需要很多人的帮助来支持开发。
You</b> can help by joining a sponsoring plan:==你</b> 可以通过加入赞助计划来提供帮助:
become a Github Sponsor==成为Github赞助商
become a YaCy Patreon==成为YaCy的Patreon赞助者
Please help! We need financial help to move on with the development!==请帮忙!我们需要资金帮助才能继续发展!
> Search<==>搜索<
### FIRST STEPS ###
First Steps==第一步
Use Case &amp; Account==用法&amp;账户
Load Web Pages, Crawler==加载网页,爬虫
RAM/Disk Usage &amp; Updates==内存/硬盘使用&amp;更新
### MONITORING ###
Monitoring==监控
System Status==系统状态
Peer-to-Peer Network==P2P网络
Index Browser==索引浏览器
Network Access==网络访问
Crawler Monitor==爬虫监控
### Production ###
Production==生产
Advanced Crawler==高级爬虫
Index Export/Import==索引导出/导入
Content Semantic==内容语义
Target Analysis==目标分析
### Administration ###
>Administration<==>管理<
Index Administration==索引管理
System Administration==系统管理
Filter &amp; Blacklists==过滤&amp;黑名单
Process Scheduler==进程调度器
### Search Portal Integration ###
Search Portal Integration==搜索门户整合
Portal Configuration==门户配置
Portal Design==门户设计
Ranking and Heuristics==排名与启发式
#-----------------------------
#File: env/templates/metas.template
#---------------------------
English, Englisch==Chinese, 中文
#-----------------------------
#File: env/templates/simpleheader.template
#---------------------------
Project Wiki==项目百科
Search Interface==搜索界面
About This Page==关于此页
Bugtracker==Bug追踪器
Git Repository==Git存储库
Community (Web Forums)==社区(网络论坛)
Download YaCy==下载YaCy
Google Appliance API==Google设备API
>Web Search<==>网页搜索<
>File Search<==>文件搜索<
>Compare Search<==>比较搜索<
>Index Browser<==>索引浏览器<
>URL Viewer<==>地址查看器<
Example Calls to the Search API:==调用搜索API的示例:
Administration &raquo;==管理 &raquo;
Search Interfaces==搜索界面
Toggle navigation==切换导航
Solr Default Core==Solr默认核心
Solr Webgraph Core==Solr网页图形核心
Administration ==管理
Administration==管理
#Administration<==Administration<
>Search Network<==>搜索网络<
#Peer Owner Profile==节点所有者资料
Help / YaCy Wiki==帮助 / YaCy Wiki
#-----------------------------
#File: env/templates/simpleSearchHeader.template
#---------------------------
Log in==登录
Search Interfaces==搜索界面
Web Search==网页搜索
File Search==文件搜索
Compare Search==比较搜索
URL Viewer==网址查看器
Example Calls to the Search API:==调用搜索API的例子:
About This Page==关于此页面
YaCy Tutorials==YaCy教程
JavaScript information==JavaScript信息
external==外部
&nbsp;&nbsp;&nbsp;Download YaCy==&nbsp;&nbsp;&nbsp;下载YaCy
&nbsp;&nbsp;&nbsp;Community (Web Forums)==&nbsp;&nbsp;&nbsp;社区(网页论坛)
&nbsp;&nbsp;&nbsp;Git Repository==&nbsp;&nbsp;&nbsp;Git库
&nbsp;&nbsp;&nbsp;Bugtracker==&nbsp;&nbsp;&nbsp;Bug追踪器
Administration &raquo;==管理 &raquo;
#-----------------------------
#File: env/templates/submenuAccessTracker.template
#---------------------------
Access Tracker==访问跟踪器
Server Access==服务器访问
Access Grid==访问网格
Incoming Requests Overview==传入请求概况
Incoming Requests Details==传入请求详情
All Connections<==全部连接<
Local Search<==本地搜索<
Log==日志
Host Tracker==服务器跟踪器
Access Rate Limitations==访问率限制
Remote Search<==远端搜索<
Cookie Menu==Cookie菜单
Incoming&nbsp;Cookies==传入&nbsp;Cookies
Outgoing&nbsp;Cookies==传出&nbsp;Cookies
#-----------------------------
#File: env/templates/submenuBlacklist.template
#---------------------------
Content Control==内容控制
Filter &amp; Blacklists==过滤 &amp; 黑名单
Blacklist Administration==黑名单管理
Blacklist Cleaner==黑名单整理
Blacklist Test==黑名单测试
Import/Export==导入/导出
Index Cleaner==索引整理
#-----------------------------
#File: env/templates/submenuComputation.template
#---------------------------
>Application Status<==>应用程序状态<
>Status<==>状态<
System==系统
Thread Dump==线程转储
>Processes<==>流程<
>Server Log<==>服务器日志<
>Concurrent Indexing<==>并发索引<
>Memory Usage<==>内存使用<
>Search Sequence<==>搜索序列<
>Messages<==>消息<
>Overview<==>概况<
>Incoming&nbsp;News<==>传入的新闻<
>Processed&nbsp;News<==>已处理的新闻<
>Outgoing&nbsp;News<==>传出的新闻<
>Published&nbsp;News<==>发布的新闻<
>Community Data<==>社区数据<
>Surftips<==>上网技巧<
>Local Peer Wiki<==>本地节点百科 <
UI Translations==用户界面翻译
>Published==>已发布的
>Processed==>已处理的
>Outgoing==>传出的
>Incoming==>传入的
#-----------------------------
#File: env/templates/submenuConfig.template
#---------------------------
System Administration==系统管理
Viewer and administration for database tables==数据库表的查看与管理
Performance Settings of Busy Queues==繁忙队列的性能设置
#UNUSED HERE
#Peer Administration Console==节点控制台
Status==状态
>Accounts==>账户
Network Configuration==网络设置
>Heuristics<==>启发式<
Dictionary Loader==字典加载器
System Update==系统升级
>Performance==>性能
Advanced Settings==高级设置
Parser Configuration==解析配置
Local robots.txt==本地robots.txt
Advanced Properties==高级属性
#-----------------------------
#File: env/templates/submenuCrawlMonitor.template
#---------------------------
Overview</a>==概况</a>
Receipts</a>==回执</a>
Queries</a>==查询</a>
DHT Transfer==DHT 传输
Proxy Use==代理使用
Local Crawling</a>==本地爬取</a>
Global Crawling</a>==全球爬取</a>
Surrogate Import==代理导入
Crawl Results==爬取结果
Crawler<==爬虫<
Global==全球
robots.txt Monitor==robots.txt监控器
Remote==远端
No-Load==空载
Processing Monitor==进程监控
Crawler Queues==爬虫队列
Loader<==加载器<
Rejected URLs==被拒绝地址
>Queues<==>队列<
Local<==本地<
Crawler Steering==爬取控制
Scheduler and Profile Editor<==调度器与资料编辑器<
#-----------------------------
#File: env/templates/submenuCrawler.template
#---------------------------
Load Web Pages==加载网页
Site Crawling==网站爬取
Parser Configuration==解析器配置
#-----------------------------
#File: env/templates/submenuDesign.template
#---------------------------
>Language<==>语言<
Search Page Layout==搜索页面布局
Design==设计
>Appearance<==>外观<
Customization==自定义
>Appearance==>外观
User Profile==用户资料
>Language==>语言
#-----------------------------
#File: env/templates/submenuIndexControl.template
#---------------------------
Index Administration==索引管理
URL Database Administration==地址数据库管理
Index Deletion==索引删除
Index Sources &amp; Targets==索引来源&目标
Solr Schema Editor==Solr模式编辑器
Field Re-Indexing==字段重新索引
Reverse Word Index==反向词索引
Content Analysis==内容分析
Reverse Word Index Administration==反向词索引管理
URL References Database==地址关联关系数据库
URL Viewer==地址查看器
#-----------------------------
#File: env/templates/submenuIndexCreate.template
#---------------------------
Crawler/Spider<==爬虫/蜘蛛<
Crawl Start (Expert)==爬取开始(专家模式)
Network Scanner==网络扫描仪
Crawling of MediaWikis==MediaWikis爬取
Remote Crawling==远端爬取
Scraping Proxy==抓取代理
>Autocrawl<==>自动爬取<
Advanced Crawler==高级爬虫
>Crawling of phpBB3 Forums<==>phpBB3论坛爬取<
Start a Web Crawl==开启网页爬取
Crawler Queues==爬虫队列
Index Creation==索引创建
Full Site Crawl==全站爬取
Sitemap Loader==网站地图加载
Crawl Start<br/>(Expert)==开始爬取<br/>(专家模式)
Network<br/>Scanner==网络<br/>扫描仪
Crawling of==正在爬取
>phpBB3 Forums<==>phpBB3论坛<
Content Import<==导入内容<
Network Harvesting<==网络采集<
Remote<br/>Crawling==远端<br/>爬取
Scraping<br/>Proxy==抓取<br/>代理
Database Reader<==数据库读取<
for phpBB3 Forums==对于phpBB3论坛
Dump Reader for==Dump阅读器为
#-----------------------------
#File: env/templates/submenuIndexImport.template
#---------------------------
>Content Export / Import<==>内容导出/导入<
>Export<==>导出<
>Internal Index Export<==>内部索引导出<
>Import<==>导入<
RSS Feed Importer==RSS订阅导入器
OAI-PMH Importer==OAI-PMH导入器
>Warc Importer<==>Warc导入器<
>Database Reader<==>数据库阅读器<
Database Reader for phpBB3 Forums==phpBB3论坛的数据库阅读器
Dump Reader for MediaWiki dumps==MediaWiki转储阅读器
#-----------------------------
#File: env/templates/submenuMaintenance.template
#---------------------------
RAM/Disk Usage &amp; Updates==内存/硬盘 使用 &amp; 更新
Web Cache==网页缓存
Download System Update==下载系统更新
>Performance<==>性能<
RAM/Disk Usage==内存/硬盘 使用
#-----------------------------
#File: env/templates/submenuPortalConfiguration.template
#---------------------------
Generic Search Portal==通用搜索门户
User Profile==用户资料
Local robots.txt==本地robots.txt
Portal Configuration==门户配置
Search Box Anywhere==随处搜索框
#-----------------------------
#File: env/templates/submenuPublication.template
#---------------------------
Publication==发布
Wiki==百科
Blog==博客
File Hosting==文件共享
#-----------------------------
#File: env/templates/submenuRanking.template
#---------------------------
Solr Ranking Config==Solr排名配置
>Heuristics<==>启发式<
Ranking and Heuristics==排名与启发式
RWI Ranking Config==反向词排名配置
#-----------------------------
#File: env/templates/submenuSemantic.template
#---------------------------
Content Semantic==内容语义
>Automated Annotation<==>自动注释<
Auto-Annotation Vocabulary Editor==自动注释词汇编辑器
Knowledge Loader==知识加载器
>Augmented Content<==>增强内容<
Augmented Browsing==增强浏览
#-----------------------------
#File: env/templates/submenuTargetAnalysis.template
#---------------------------
Target Analysis==目标分析
Mass Crawl Check==大量爬取检查
Regex Test==正则表达式测试
#-----------------------------
#File: env/templates/submenuUseCaseAccount.template
#---------------------------
Use Case &amp; Accounts==用法 &amp; 账户
Use Case ==用法
Use Case==用法
Basic Configuration==基本设置
>Accounts<==>账户<
Network Configuration==网络设置
#-----------------------------
#File: env/templates/submenuWebStructure.template
#---------------------------
Index Browser==索引浏览器
Web Visualization==网页可视化
Web Structure==网页结构
Image Collage==图像拼贴
#-----------------------------
### Subdirectory js ###
#File: js/Crawler.js
#---------------------------
"Continue this queue"=="继续队列"
"Pause this queue"=="暂停队列"
#-----------------------------
#File: js/yacyinteractive.js
#---------------------------
>total results==>全部结果
&nbsp;topwords:==&nbsp;热门词:
>Name==>名称
>Size==>大小
>Date==>日期
#-----------------------------
### Subdirectory proxymsg ###
#File: proxymsg/authfail.inc
#---------------------------
Your Username/Password is wrong.==用户名/密码输入错误.
Username</label>==用户名</label>
Password</label>==密码</label>
"login"=="登录"
#-----------------------------
#File: proxymsg/error.html
#---------------------------
YaCy: Error Message==YaCy: 错误消息
request:==请求:
unspecified error==未定义错误
not-yet-assigned error==尚未指定的错误
You don't have an active internet connection. Please go online.==你没有可用的网络连接, 请联网.
Could not load resource. The file is not available.==无效文件, 加载资源失败.
Exception occurred==异常发生
Generated #[date]# by==生成日期 #[date]# 由
#-----------------------------
#File: proxymsg/proxylimits.inc
#---------------------------
Your Account is disabled for surfing.==你的账户没有浏览权限.
Your Timelimit (#[timelimit]# Minutes per Day) is reached.==你的账户时限(#[timelimit]# 分钟每天)已到.
#-----------------------------
#File: proxymsg/unknownHost.inc
#---------------------------
The server==服务器
could not be found.==未找到.
Did you mean:==是不是:
#-----------------------------
### Subdirectory yacy ###
#File: yacy/ui/index.html
#---------------------------
About YaCy-UI==关于YaCy-UI
Admin Console==管理控制台
"Bookmarks"=="书签"
>Bookmarks==>书签
Server Log==服务器日志
#-----------------------------
#File: yacy/ui/js/jquery-flexigrid.js
#---------------------------
'Displaying {from} to {to} of {total} items'=='显示 {from} 到 {to}, 总共 {total} 个词条'
'Processing, please wait ...'=='正在处理, 请稍候...'
'No items'=='无词条'
#-----------------------------
#File: yacy/ui/js/jquery-ui-1.7.2.min.js
#---------------------------
Loading&#8230;==正在加载&#8230;
#-----------------------------
#File: yacy/ui/js/jquery.ui.all.min.js
#---------------------------
Loading&#8230;==正在加载&#8230;
#-----------------------------
#File: yacy/ui/sidebar/sidebar_1.html
#---------------------------
YaCy P2P Websearch==YaCy P2P搜索
"Search"=="搜索"
>Text==>文本
>Images==>图片
>Audio==>音频
>Video==>视频
>Applications==>应用
Search term:==搜索词条:
# do not translate class="help" which only has technical html semantics
alt="help"==alt="帮助"
title="help"==title="帮助"
Resource/Network:==资源/网络:
freeworld==自由世界
local peer==本地节点
>bookmarks==>书签
sciencenet==ScienceNet
>Language:==>语言:
any language==任意语言
Bookmark Folders==书签目录
#-----------------------------
#File: yacy/ui/sidebar/sidebar_2.html
#---------------------------
Bookmark Tags<==标签<
Search Options==搜索设置
Constraint:==约束:
all pages==所有页面
index pages==索引页面
URL mask:==URL过滤:
Prefer mask:==首选过滤:
Bookmark TagCloud==标签云
Topwords<==热门词<
alt="help"==alt="帮助"
title="help"==title="帮助"
#-----------------------------
#File: yacy/ui/yacyui-admin.html
#---------------------------
Peer Control==节点控制
"Login"=="登录"
Themes==主题
Messages==消息
Re-Start==重启
Shutdown==关闭
Web Indexing==网页索引
Crawl Start==开始爬取
Monitoring==监控
YaCy Network==YaCy网络
>Settings==>设置
"Basic Settings"=="基本设置"
Basic== 基本
Accounts==账户
"Network"=="网络"
Network== 网络
"Advanced Settings"=="高级设置"
Advanced== 高级
"Update Settings"=="升级设置"
Update== 升级
>YaCy Project==>YaCy项目
"YaCy Project Home"=="YaCy项目主页"
Project== 项目
"YaCy Forum"=="YaCy论坛"
"Help"=="帮助"
#-----------------------------
#File: yacy/ui/yacyui-bookmarks.html
#---------------------------
'Add'=='添加'
'Crawl'=='爬取'
'Edit'=='编辑'
'Delete'=='删除'
'Rename'=='重命名'
'Help'=='帮助'
"YaCy Bookmarks"=="YaCy书签"
'Public'=='公有'
'Title'=='题目'
'Tags'=='标签'
'Folders'=='目录'
'Date'=='日期'
#-----------------------------
#File: yacy/ui/yacyui-welcome.html
#---------------------------
>Overview==>概况
YaCy-UI is going to be a JavaScript based client for YaCy based on the existing XML and JSON API.==YaCy-UI 是基于JavaScript的YaCy客户端, 它使用当前的XML和JSON API.
YaCy-UI is at most alpha status, as there is still problems with retriving the search results.==YaCy-UI 尚处于alpha阶段, 检索搜索结果时仍有问题.
I am currently changing the backend to a more application friendly format and getting good results with it (I will check that in some time after the stable release 0.7).==目前我正在把后端改为对应用更友好的格式, 并已取得不错的效果(我会在稳定版0.7之后的某个时间提交).
For now have a look at the bookmarks, performance has increased significantly, due to the use of JSON and Flexigrid!==目前可以看看书签功能, 由于使用了JSON和Flexigrid, 性能已显著提升!
#-----------------------------
# EOF