diff --git a/locales/zh.lng b/locales/zh.lng index 6c9243f64..e985a02df 100644 --- a/locales/zh.lng +++ b/locales/zh.lng @@ -13,385 +13,18 @@ # If you find any mistakes or untranslated strings in this file please don't hesitate to email them to the maintainer. -#File: ConfigLanguage_p.html -#--------------------------- -# Only part 1. -# Contributors are in chronological order, not how much they did absolutely. -# Thank you for your help! -default(english)==Chinese -==lofyer -==lofyer@gmail.com -#----------------------------- - -#File: env/templates/submenuTargetAnalysis.template -#--------------------------- -Target Analysis==目标分析 -Mass Crawl Check==大量抓取检查 -Regex Test==正则表达式测试 -#----------------------------- - -#File: CrawlCheck_p.html -#--------------------------- -Crawl Check==抓取检查 -This pages gives you an analysis about the possible success for a web crawl on given addresses.==通过本页面,您可以分析在特定地址上进行网络爬取的可能性。 -List of possible crawl start URLs==可行的起始抓行网址列表 -"Check given urls"=="检查给定的网址" ->Analysis<==>分析< ->Access<==>访问< ->Robots<==>机器人< ->Crawl-Delay<==>爬取延时< ->Sitemap<==>网页< -#----------------------------- - -#File: RegexTest.html -#--------------------------- ->Regex Test<==>正则表达式测试< ->Test String<==>测试字符串< ->Regular Expression<==>正则表达式< ->Result<==>结果< -#----------------------------- - -#File: env/templates/submenuRanking.template -#--------------------------- -Solr Ranking Config==Solr排名配置 ->Heuristics<==>启发式< -Ranking and Heuristics==排名与启发式 -RWI Ranking Config==RWI排名配置 -#----------------------------- - -#File: ConfigPortal.html -#--------------------------- -Enable Search for Everyone?==对所有人启用搜索? -Search is available for everyone==搜索适用于所有人 -Only the administator is allowed to search==只有管理员被允许搜索 -Snippet Fetch Strategy & Link Verification==片段获取策略&amp; 链接验证 -Speed up search results with this option! (use CACHEONLY or FALSE to switch off verification)==使用此选项加快搜索结果! (使用CACHEONLY或FALSE关闭验证) -NOCACHE: no use of web cache, load all snippets online==NOCACHE:不使用网页缓存,在网上加载所有片段 -IFFRESH: use the cache if the cache exists and is fresh otherwise load online==IFFRESH:如果高速缓存存在,则使用高速缓存,否则将重新在线载入 -IFEXIST: use the cache if the cache exist or load online==IFEXIST:如果缓存存在或在线加载,则使用缓存 -If verification fails, delete index reference==如果验证失败,请删除索引引用 -CACHEONLY: never go online, use all content from cache. If no cache entry exist, consider content nevertheless as available and show result without snippet==CACHEONLY:永远不联网,使用缓存中的所有内容。 如果不存在缓存条目,则尽管如此,仍然可以将内容视为可用并显示结果,而不是片段 -FALSE: no link verification and not snippet generation: all search results are valid without verification==FALSE:没有链接验证,也没有代码片段生成:所有搜索结果在没有验证的情况下都是有效的 -Greedy Learning Mode==贪婪学习模式 -load documents linked in search results, will be deactivated automatically when index size==加载链接到搜索结果中的文档,将自动停用在索引大小 -Default maximum number of results per page==每页结果的默认最大数量 -Special Target as Exception for an URL-Pattern==特殊目标作为URL模式的例外 -Pattern:<==模式:< -set a remote user/password==设置一个远程用户/密码 -to change this options.==改变这个选项。 -(Headline)==(标题) -(Content)==(内容) -You have to set a remote user/password to change this options.==您必须设置远程用户/密码才能更改此选项。 -You have ==你有 -'About' Column
(shown in a column alongside
with the search result page)==“关于”栏
(与搜索结果页面一起在一栏中显示) ->Exclude Hosts<==>排除主机< -List of hosts that shall be excluded from search results by default but can be included using the site:<host> operator:==默认情况下应从搜索结果中排除但可以使用网站包含的主机列表:&lt; host&gt;运营商: -You have==你有 -NOCACHE: no use of web cache==NOCACHE:不使用网页缓存,在网上加载所有片段 -List of hosts that shall be excluded from search results by default but can be included using the site:==默认情况下应从搜索结果中排除但可以使用网站包含的主机列表:&lt; host&gt;运营商: -Snippet Fetch Strategy ==片段获取策略&amp; 链接验证 -If verification fails==如果验证失败,请删除索引引用 -CACHEONLY: never go online==CACHEONLY:永远不联网,使用缓存中的所有内容。 如果不存在缓存条目,则尽管如此,仍然可以将内容视为可用并显示结果,而不是片段 -load documents linked in search results==加载链接到搜索结果中的文档,将自动停用在索引大小 -#----------------------------- - -#File: env/templates/submenuComputation.template -#--------------------------- ->Application Status<==>应用程序状态< ->Status<==>状态< -System==系统 -Thread Dump==线程转储 ->Processes<==>流程< ->Server Log<==>服务器日志< ->Concurrent Indexing<==>并发索引< ->Memory Usage<==>内存使用< ->Search Sequence<==>搜索序列< ->Messages<==>消息< ->Overview<==>概述< ->Incoming News<==>传入的新闻< ->Processed News<==>处理的新闻< ->Outgoing News<==>传出的新闻< ->Published News<==>发布的新闻< ->Community Data<==>社区数据< ->Surftips<==>冲浪提示< ->Local Peer Wiki<==>本地节点百科 < -UI Translations==用户界面翻译 ->Published==>已发布的 ->Processed==>加工的 ->Outgoing==>传出的 ->Incoming==>传入的 -#----------------------------- - -#File: env/templates/submenuPortalConfiguration.template -#--------------------------- -Generic Search Portal==通用搜索门户 -User Profile==用户资料 -Local robots.txt==本地爬虫协议 -Portal Configuration==门户配置 -Search Box Anywhere==随处搜索框 -#----------------------------- - -#File: Translator_p.html -#--------------------------- -Translation Editor==翻译编辑器 -Translate untranslated text of the user interface (current language).==翻译用户界面中未翻译的文本(当前语言)。 -UI Translation==界面翻译 -Target Language:==目标语言 -activate a different language==激活另一种语言 -Source File==源文件 -view it==查看 -filter untranslated==列出未翻译项 -Source Text==源文 -Translated Text==翻译 -Save translation==保存翻译 -The modified translation file is stored in DATA/LOCALE directory.==修改的翻译文件储存在 DATA/LOCALE 目录下 -#----------------------------- - -#File: api/citation.html -#--------------------------- -Document Citations for==文档引用 -List of other web pages with citations==其他网页与引文列表 -Similar documents from different hosts:==来自不同主机的类似文件: -#----------------------------- - -#File: ConfigSearchPage_p.html -#--------------------------- ->Search Result Page Layout Configuration<==>搜索结果页面布局配置< -Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板.选中您希望显示的功能复选框. -To change colors and styles use the ==要改变颜色和样式使用 ->Appearance<==>外观< -menu for different skins==不同皮肤的菜单 -Other portal settings can be adjusted in==其他门户网站设置可以在这调整 ->Generic Search Portal<==>通用搜索门户< -menu.==菜单. 
- ->Page Template<==>页面模板< ->Tag<==>标签< ->Topics<==>主题< ->Cloud<==>云< ->Text<==>文本< ->Images<==>图片< ->Audio<==>音频< ->Video<==>视频< ->Applications<==>应用< ->more options<==>更多选项< ->Location<==>位置< -Search Page<==搜索页< ->Protocol<==>协议< ->Filetype<==>文件类型< ->Wiki Name Space<==>百科名称空间< ->Language<==>语言< ->Author<==>作者< ->Vocabulary<==>词汇< ->Provider<==>提供商< ->Collection<==>收藏< ->Title of Result<==>结果标题< -Description and text snippet of the search result==搜索结果的描述和文本片段 -42 kbyte<==42千字节< ->Metadata<==>元数据< ->Parser<==>分析器< ->Citation<==>引用< ->Pictures<==>图片< ->Cache<==>高速缓存< ->Augmented Browsing<==>增强浏览< -For this option URL proxy must be enabled==对于这个选项,必须启用URL代理 ->Add Navigators<==>添加导航器< ->append==>附加 -"Save Settings"=="保存设置" -"Set Default Values"=="设置默认值" - -#----------------------------- - -#File: AccessGrid_p.html -#--------------------------- -This images shows incoming connections to your YaCy peer and outgoing connections from your peer to other peers and web servers==这幅图显示了到您节点的传入连接,以及从您节点到其他节点或网站服务器的传出连接 -Server Access Grid==服务器访问网格 -YaCy Network Access==YaCy网络访问 -#----------------------------- - -#File: env/templates/submenuIndexImport.template -#--------------------------- ->Content Export / Import<==>内容导出/导入< ->Export<==>导出< ->Internal Index Export<==>内部索引导出< ->Import<==>导入< -RSS Feed Importer==RSS订阅导入器 -OAI-PMH Importer==OAI-PMH导入器 ->Warc Importer<==>Warc导入器< ->Database Reader<==>数据库阅读器< -Database Reader for phpBB3 Forums==phpBB3论坛的数据库阅读器 -Dump Reader for MediaWiki dumps==MediaWiki转储阅读器 -#----------------------------- - -#File: TransNews_p.html -#--------------------------- -Translation News for Language==语言翻译新闻 -Translation News==翻译新闻 -You can share your local addition to translations and distribute it to other peers.==你可以分享你的本地翻译,并分发给其他节点。 -The remote peer can vote on your translation and add it to the own local translation.==远程节点可以对您的翻译进行投票并将其添加到他们的本地翻译中。 -entries available==可用的条目 -"Publish"=="发布" -You can check your outgoing messages==你可以检查你的传出消息 ->here<==>这儿< -To edit or add local translations you can use==要编辑或添加本地翻译,你可以用 - -File:==文件: -Translation:==翻译: ->score==>分数 -negative vote==反对票 -positive vote==赞成票 -Vote on this translation==对这个翻译投票 -If you vote positive the translation is added to your local translation list==如果您投赞成票,翻译将被添加到您的本地翻译列表中 ->Originator<==>发起人< -#----------------------------- - -#File: Autocrawl_p.html -#--------------------------- ->Autocrawler<==>自动爬虫< -Autocrawler automatically selects and adds tasks to the local crawl queue==自动爬虫自动选择任务并将其添加到本地爬网队列 -This will work best when there are already quite a few domains in the index==如果索引中已经有一些域名,这将会工作得最好 -Autocralwer Configuration==自动爬虫配置 -You need to restart for some settings to be applied==您需要重新启动才能应用一些设置 -Enable Autocrawler:==启用自动爬虫: -Deep crawl every:==深入抓取: -Warning: if this is bigger than "Rows to fetch" only shallow crawls will run==警告:如果这大于“取回行”,只有浅抓取将运行 -Rows to fetch at once:==一次取回行: -Recrawl only older than # days:==重新抓取只有 # 天以前的时间: -Get hosts by query:==通过查询获取主机: -Can be any valid Solr query.==可以是任何有效的Solr查询。 -Shallow crawl depth (0 to 2):==浅抓取深度(0至2): -Deep crawl depth (1 to 5):==深度抓取深度(1至5): -Index text:==索引文本: -Index media:==索引媒体: -"Save"=="保存" -#----------------------------- - -#File: ConfigParser.html -#--------------------------- ->File Viewer<==>文件查看器< ->Extension<==>扩展< -If you want to test a specific parser you can do so using the==如果你想测试一个特定的解析器,你可以使用 ->Mime-Type<==> MIME类型< -#----------------------------- - -#File: env/templates/submenuCrawler.template -#--------------------------- -Load Web 
Pages==加载网页 -Site Crawling==网站抓取 -Parser Configuration==解析器配置 -#----------------------------- - -#File: ContentControl_p.html -#--------------------------- -Content Control<==内容控制< -Peer Content Control URL Filter==节点内容控制地址过滤器 -With this settings you can activate or deactivate content control on this peer==使用此设置,您可以激活或取消激活此YaCy节点上的内容控制 -Use content control filtering:==使用内容控制过滤: ->Enabled<==>已启用< -Enables or disables content control==启用或禁用内容控制 -Use this table to create filter:==使用此表创建过滤器: -Define a table. Default:==定义一个表格. 默认: -Content Control SMW Import Settings==内容控制SMW导入设置 -With this settings you can define the content control import settings. You can define a==使用此设置,您可以定义内容控制导入设置. 你可以定义一个 -Semantic Media Wiki with the appropriate extensions==语义媒体百科与适当的扩展 -SMW import to content control list:==SMW导入到内容控制列表: -Enable or disable constant background synchronization of content control list from SMW (Semantic Mediawiki). Requires restart!==启用或禁用来自SMW(Semantic Mediawiki)的内容控制列表的恒定后台同步。 需要重启! -SMW import base URL:==SMW导入基URL: -Define base URL for SMW special page "Ask". Example: ==为SMW特殊页面“Ask”定义基础地址.例: -SMW import target table:==SMW导入目标表: -Define import target table. Default: contentcontrol==定义导入目标表. 默认值:contentcontrol -Purge content control list on initial sync:==在初始同步时清除内容控制列表: -Purge content control list on initial synchronisation after startup.==重启后,清除初始同步的内容控制列表. -"Submit"=="提交" -Define base URL for SMW special page "Ask". Example:==为SMW特殊页面“Ask”定义基础地址.例: -#----------------------------- - -#File: env/templates/submenuSemantic.template -#--------------------------- -Content Semantic==内容语义 ->Automated Annotation<==>自动注释< -Auto-Annotation Vocabulary Editor==自动注释词汇编辑器 -Knowledge Loader==知识加载器 ->Augmented Content<==>增强内容< -Augmented Browsing==增强浏览 -#----------------------------- - -#File: YMarks.html -#--------------------------- -"Import"=="导入" -documents=="文件" -days==天 -hours==小时 -minutes==分钟 -for new documents automatically==自动地对新文件 -run this crawl once==抓取一次 ->Query<==>查询< -Query Type==查询类型 ->Import<==>导入< -Tag Manager==标签管理器 -Bookmarks (user: #[user]# size: #[size]#)==书签(用户: #[user]# 大小: #[size]#) -"Replace"=="替换" -#----------------------------- - -#File: AugmentedBrowsing_p.html -#--------------------------- -Augmented Browsing<==增强浏览< -, where parameter is the url of an external web page==其中参数是外部网页的网址 -URL Proxy Settings<==URL代理设置< -With this settings you can activate or deactivate URL proxy which is the method used for augmentation.==使用此设置,您可以激活或取消激活用于扩充的URL代理。 -Service call: ==服务电话: ->URL proxy:<==>URL 代理:< -Globally enables or disables URL proxy via==全局启用或禁用URL代理通过 ->Enabled<==>已启用< -Show search results via URL proxy:==通过URL代理显示搜索结果: -Enables or disables URL proxy for all search results. If enabled, all search results will be tunneled through URL proxy.==为所有搜索结果启用或禁用URL代理。 如果启用,所有搜索结果将通过URL代理隧道传输。 -Alternatively you may add this javascript to your browser favorites/short-cuts, which will reload the current browser address==或者,您可以将此JavaScript添加到您的浏览器收藏夹/快捷方式,这将重新加载当前的浏览器地址 -via the YaCy proxy servlet==通过YaCy代理servlet -or right-click this link and add to favorites:==或右键单击此链接并添加到收藏夹: -Restrict URL proxy use:==限制URL代理使用: -Define client filter. Default: ==定义客户端过滤器.默认: -URL substitution:==网址替换: -Define URL substitution rules which allow navigating in proxy environment. Possible values: all, domainlist. 
Default: domainlist.==定义允许在代理环境中导航的URL替换规则。 可能的值:all,domainlist。 默认:domainlist。
-"Submit"=="提交"
-Alternatively you may add this javascript to your browser favorites/short-cuts==或者,您可以将此JavaScript添加到您的浏览器收藏夹/快捷方式,这将重新加载当前的浏览器地址
-Enables or disables URL proxy for all search results. If enabled==为所有搜索结果启用或禁用URL代理。 如果启用,所有搜索结果将通过URL代理隧道传输。
-Service call:==服务电话:
-Define client filter. Default:==定义客户端过滤器.默认:
-Define URL substitution rules which allow navigating in proxy environment. Possible values: all==定义允许在代理环境中导航的URL替换规则.可能的值:all,domainlist.默认:domainlist。
-Globally enables or disables URL proxy via==全局启用或禁用URL代理通过
-#-----------------------------
-
-#File: env/templates/submenuMaintenance.template
-#---------------------------
-RAM/Disk Usage & Updates==内存/硬盘 使用 & 更新
-Web Cache==网页缓存
-Download System Update==下载系统更新
->Performance<==>性能<
-RAM/Disk Usage==内存/硬盘 使用
-#-----------------------------
-
-#File: ContentAnalysis_p.html
-#---------------------------
-Content Analysis==内容分析
-These are document analysis attributes==这些是文档分析属性
-Double Content Detection==双重内容检测
-Double-Content detection is done using a ranking on a 'unique'-Field, named 'fuzzy_signature_unique_b'.==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。
-This is the minimum length of a word which shall be considered as element of the signature. Should be either 2 or 3.==这是一个应被视为签名的元素单词的最小长度。 应该是2或3。
-The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number, the less==quantRate是参与签名计算的单词数量的度量。 数字越高,越少
-words are used for the signature==单词用于签名
-For minTokenLen = 2 the quantRate value should not be below 0.24; for minTokenLen = 3 the quantRate value must be not below 0.5.==对于minTokenLen = 2,quantRate值不应低于0.24; 对于minTokenLen = 3,quantRate值必须不低于0.5。
-"Re-Set to default"=="重置为默认"
-"Set"=="设置"
-Double-Content detection is done using a ranking on a 'unique'-Field==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。
-The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number==quantRate是参与签名计算的单词数量的度量。 数字越高,越少
+#File: AccessGrid_p.html
+#---------------------------
+YaCy Network Access==YaCy网络访问
+Server Access Grid==服务器访问网格
+This images shows incoming connections to your YaCy peer and outgoing connections from your peer to other peers and web servers==这幅图显示了到您节点的传入连接,以及从您节点到其他节点或网站服务器的传出连接
 #-----------------------------
-
 #File: AccessTracker_p.html
 #---------------------------
-Access Tracker==访问跟踪
+Access Tracker==访问跟踪器
 Server Access Overview==网站访问概况
-This is a list of #[num]# requests to the local http server within the last hour==最近一小时内有 #[num]# 个到本地的访问请求
+This is a list of #[num]# requests to the local http server within the last hour.==最近一小时内有 #[num]# 个到本地的访问请求。
 This is a list of requests to the local http server within the last hour==此列表显示最近一小时内到本机的访问请求
 Showing #[num]# requests==显示 #[num]# 个请求
 >Host<==>主机<
@@ -434,57 +67,46 @@ This is a list of searches that had been requested from remote peer search inter
 This is a list of requests (max. 1000) to the local http server within the last hour==这是最近一小时内本地http服务器的请求列表(最多1000个)
 #-----------------------------

+#File: Settings_UrlProxyAccess.inc
+#---------------------------
+URL Proxy Settings<==URL代理设置<
+With this settings you can activate or deactivate URL proxy.==使用此设置,您可以激活或停用URL代理. 
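+# Note: the translator only treats the double '==' as the separator between
+# original text and translation; a line with a single '=' is not picked up.
+# A well-formed entry, taken from elsewhere in this file, looks for example like:
+# Crawl Check==爬取检查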
+Service call: ==服务调用:
+, where parameter is the url of an external web page.==,其中参数是外部网页的网址.
+>URL proxy:<==>URL代理:<
+>Enabled<==>开启<
+Globally enables or disables URL proxy via ==全局启用或禁用URL代理通过
+Show search results via URL proxy:==通过URL代理显示搜索结果:
+Enables or disables URL proxy for all search results. If enabled, all search results will be tunneled through URL proxy.==为所有搜索结果启用或禁用URL代理. 如果启用,所有搜索结果将通过URL代理隧道传输.
+Alternatively you may add this javascript to your browser favorites/short-cuts, which will reload the current browser address==或者,您可以将此JavaScript添加到您的浏览器收藏夹/快捷方式,这将重新加载当前的浏览器地址
+via the YaCy proxy servlet.==通过YaCy代理servlet.
+or right-click this link and add to favorites:==或右键单击此链接并添加到收藏夹:
+Restrict URL proxy use:==限制URL代理使用:
+Define client filter. Default: ==定义客户端过滤器. 默认:
+URL substitution:==网址替换:
+Define URL substitution rules which allow navigating in proxy environment. Possible values: all, domainlist. Default: domainlist.==定义允许在代理环境中导航的URL替换规则. 可能的值:all,domainlist. 默认:domainlist.
+"Submit"=="提交"
+#-----------------------------

-#File: Blacklist_p.html
+#File: Autocrawl_p.html
 #---------------------------
-Blacklist Administration==黑名单管理
-This function provides an URL filter to the proxy; any blacklisted URL is blocked==提供代理地址过滤;过滤掉自载入时加入进黑名单的地址.
-from being loaded. You can define several blacklists and activate them separately.==您可以自定义黑名单并分别激活它们. 
-You may also provide your blacklist to other peers by sharing them; in return you may==您也可以提供你自己的黑名单列表给其他人;
-collect blacklist entries from other peers==同样,其他人也能将黑名单列表共享给您
-Select list to edit:==选择列表进行编辑:
-Add URL pattern==添加地址规则
-Edit list==编辑列表
-The right '*', after the '/', can be replaced by a==在'/'之后的右边'*'可以被替换为
->regular expression<==>正则表达式<
-#(slow)==(慢)
-"set"=="集合"
-The right '*'==右边的'*'
-Used Blacklist engine:==使用的黑名单引擎:
-Active list:==激活列表:
-No blacklist selected==未选中黑名单
-Select list:==选中黑名单:
-not shared::shared==未共享::已共享
-"select"=="选择"
-Create new list:==创建:
-"create"=="创建"
-Settings for this list==设置
+>Autocrawler<==>自动爬虫<
+Autocrawler automatically selects and adds tasks to the local crawl queue==自动爬虫会自动选择任务并将其添加到本地爬取队列
+This will work best when there are already quite a few domains in the index==当索引中已有相当多的域名时,此功能效果最好
+Autocralwer Configuration==自动爬虫配置
+You need to restart for some settings to be applied==您需要重新启动才能应用一些设置
+Enable Autocrawler:==启用自动爬虫:
+Deep crawl every:==每隔多少次执行一次深度爬取:
+Warning: if this is bigger than "Rows to fetch" only shallow crawls will run==警告:如果此值大于“一次取回的行数”,将只执行浅层爬取
+Rows to fetch at once:==一次取回的行数:
+Recrawl only older than # days:==仅重新爬取 # 天前的内容:
+Get hosts by query:==通过查询获取主机:
+Can be any valid Solr query.==可以是任何有效的Solr查询。
+Shallow crawl depth (0 to 2):==浅层爬取深度(0至2):
+Deep crawl depth (1 to 5):==深度爬取深度(1至5):
+Index text:==索引文本:
+Index media:==索引媒体:
 "Save"=="保存"
-Share/don't share this list==共享/不共享此名单
-Delete this list==删除
-Edit this list==编辑
-These are the domain name/path patterns in==这些域名/路径规则来自
-Blacklist Pattern==黑名单规则
-Edit selected pattern(s)==编辑选中规则
-Delete selected pattern(s)==删除选中规则
-Move selected pattern(s) to==移动选中规则
-#You can select them here for deletion==您可以从这里选择要删除的项
-Add new pattern:==添加新规则:
-"Add URL pattern"=="添加地址规则"
-The right '*', after the '/', can be replaced by a regular expression.== 在 '/' 后边的 '*' ,可用正则表达式表示.
-#domain.net/fullpath<==domain.net/绝对路径<
-#>domain.net/*<==>domain.net/*<
-#*.domain.net/*<==*.domain.net/*<
-#*.sub.domain.net/*<==*.sub.domain.net/*<
-#sub.domain.*/*<==sub.domain.*/*<
-#domain.*/*<==domain.*/*<
-#was removed from blacklist==wurde aus Blacklist entfernt
-#was added to the blacklist==wurde zur Blacklist hinzugefügt
-Activate this list for==为以下条目激活此名单
-Show entries:==显示条目:
-Entries per page:==页面条目:
-Edit existing pattern(s):==编辑现有规则:
-"Save URL pattern(s)"=="保存地址规则"
 #-----------------------------

 #File: BlacklistCleaner_p.html
 #---------------------------
@@ -538,7 +160,7 @@ Test list:==测试黑名单:
 "Test"=="测试"
 The tested URL was==此链接
 It is blocked for the following cases:==在下列情况下,它会被阻止:
-Crawling==抓取中
+Crawling==爬取中
 #DHT==DHT
 News==新闻
 Proxy==代理
 Search==搜索
 Surftips==建议
 #-----------------------------

+#File: Blacklist_p.html
+#---------------------------
+Blacklist Administration==黑名单管理
+This function provides an URL filter to the proxy; any blacklisted URL is blocked==提供代理地址过滤;过滤掉自载入时加入进黑名单的地址.
+from being loaded. You can define several blacklists and activate them separately.==您可以自定义黑名单并分别激活它们. 
+You may also provide your blacklist to other peers by sharing them; in return you may==您也可以提供你自己的黑名单列表给其他人; +collect blacklist entries from other peers==同样,其他人也能将黑名单列表共享给您 +Select list to edit:==选择列表进行编辑: +Add URL pattern==添加地址规则 +Edit list==编辑列表 +The right '*', after the '/', can be replaced by a==在'/'之后的右边'*'可以被替换为 +>regular expression<==>正则表达式< +#(slow)==(慢) +"set"=="集合" +The right '*'==右边的'*' +Used Blacklist engine:==使用的黑名单引擎: +Active list:==激活列表: +No blacklist selected==未选中黑名单 +Select list:==选中黑名单: +not shared::shared==未共享::已共享 +"select"=="选择" +Create new list:==创建: +"create"=="创建" +Settings for this list==设置 +"Save"=="保存" +Share/don't share this list==共享/不共享此名单 +Delete this list==删除 +Edit this list==编辑 +These are the domain name/path patterns in==这些域名/路径规则来自 +Blacklist Pattern==黑名单规则 +Edit selected pattern(s)==编辑选中规则 +Delete selected pattern(s)==删除选中规则 +Move selected pattern(s) to==移动选中规则 +#You can select them here for deletion==您可以从这里选择要删除的项 +Add new pattern:==添加新规则: +"Add URL pattern"=="添加地址规则" +The right '*', after the '/', can be replaced by a regular expression.== 在 '/' 后边的 '*' ,可用正则表达式表示. +#domain.net/fullpath<==domain.net/绝对路径< +#>domain.net/*<==>domain.net/*< +#*.domain.net/*<==*.domain.net/*< +#*.sub.domain.net/*<==*.sub.domain.net/*< +#sub.domain.*/*<==sub.domain.*/*< +#domain.*/*<==domain.*/*< +#was removed from blacklist==wurde aus Blacklist entfernt +#was added to the blacklist==wurde zur Blacklist hinzugefügt +Activate this list for==为以下条目激活此名单 +Show entries:==显示条目: +Entries per page:==页面条目: +Edit existing pattern(s):==编辑现有规则: +"Save URL pattern(s)"=="保存地址规则" +#----------------------------- + #File: Blog.html #--------------------------- by==通过 @@ -667,17 +341,6 @@ Private Queue==私有 Public Queue==公共 #----------------------------- - -#File: compare_yacy.html -#--------------------------- -Websearch Comparison==网页搜索对比 -Left Search Engine==左侧引擎 -Right Search Engine==右侧引擎 -Query==查询 -"Compare"=="比较" -Search Result==结果 -#----------------------------- - #File: ConfigAccounts_p.html #--------------------------- Username too short. Username must be >= 4 Characters.==用户名太短。 用户名必须>= 4 个字符. @@ -835,73 +498,30 @@ This is needed if you want to fully participate in the YaCy network.==如果您 You can also use your peer without opening it, but this is not recomended.==不开放您的节点您也能使用, 但是不推荐. #----------------------------- -#File: RankingSolr_p.html -#--------------------------- ->Solr Ranking Configuration<==>Solr排名配置< -These are ranking attributes for Solr.==这些是Solr的排名属性. -This ranking applies for internal and remote (P2P or shard) Solr access.==此排名适用于内部和远程(P2P或分片)Solr访问. -Select a profile:==选择配置文件: ->Boost Function<==>提升功能< ->Boost Query<==>提升查询< ->Filter Query<==>过滤器查询< ->Solr Boosts<==>Solr提升< -"Set Boost Function"=="设置提升功能" -"Set Boost Query"=="设置提升查询" -"Set Filter Query"=="设置过滤器查询" -"Set Field Boosts"=="设置字段提升" -"Re-Set to default"=="重置为默认" -#----------------------------- -#File: RankingRWI_p.html +#File: ConfigHTCache_p.html #--------------------------- ->RWI Ranking Configuration<==>RWI排名配置< -The document ranking influences the order of the search result entities.==文档排名会影响实际搜索结果的顺序. -A ranking is computed using a number of attributes from the documents that match with the search word.==排名计算使用到与搜索词匹配的文档中的多个属性. -The attributes are first normalized over all search results and then the normalized attribute is multiplied with the ranking coefficient computed from this list.==在所有搜索结果基础上,先对属性进行归一化,然后将归一化的属性与相应的排名系数相乘. 
-The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数随着下表中给出的排名水平呈指数增长. -If you increase a single value by one, then the strength of the parameter doubles.==如果将单个值增加1,则参数的影响效果加倍. - -#Pre-Ranking ->Pre-Ranking<==>预排名< -== - -#>Appearance In Emphasized Text<==>出现在强调的文本中< -#a higher ranking level prefers documents where the search word is emphasized==较高的排名级别更倾向强调搜索词的文档 -#>Appearance In URL<==>出现在地址中< -#a higher ranking level prefers documents with urls that match the search word==较高的排名级别更倾向具有与搜索词匹配的地址的文档 -#Appearance In Author==出现在作者中 -#a higher ranking level prefers documents with authors that match the search word==较高的排名级别更倾向与搜索词匹配的作者的文档 -#>Appearance In Reference/Anchor Name<==>出现在参考/锚点名称中< -#a higher ranking level prefers documents where the search word matches in the description text==较高的排名级别更倾向搜索词在描述文本中匹配的文档 -#>Appearance In Tags<==>出现在标签中< -#a higher ranking level prefers documents where the search word is part of subject tags==较高的排名级别更喜欢搜索词是主题标签一部分的文档 -#>Appearance In Title<==>出现在标题中< -#a higher ranking level prefers documents with titles that match the search word==较高的排名级别更喜欢具有与搜索词匹配的标题的文档 -#>Authority of Domain<==>域名权威< -#a higher ranking level prefers documents from domains with a large number of matching documents==较高的排名级别更喜欢来自具有大量匹配文档的域的文档 -#>Category App, Appearance<==>类别:出现在应用中< -#a higher ranking level prefers documents with embedded links to applications==更高的排名级别更喜欢带有嵌入式应用程序链接的文档 -#>Category Audio Appearance<==>类别:出现在音频中< -#a higher ranking level prefers documents with embedded links to audio content==较高的排名级别更喜欢具有嵌入音频内容链接的文档 -#>Category Image Appearance<==>类别:出现在图像中< -#>Category Video Appearance<==>类别:出现在视频中< -#>Category Index Page<==>类别:索引页面< -#a higher ranking level prefers 'index of' (directory listings) pages==较高的排名级别更喜欢(目录列表)页面的索引 -#>Date<==>日期< -#a higher ranking level prefers younger documents.==更高的排名水平更喜欢最新的文件. -#The age of a document is measured using the date submitted by the remote server as document date==使用远程服务器提交的日期作为文档日期来测量文档的年龄 -#>Domain Length<==>域名长度< -#a higher ranking level prefers documents with a short domain name==较高的排名级别更喜欢具有短域名的文档 -#>Hit Count<==>命中数< -#a higher ranking level prefers documents with a large number of matchings for the search word(s)==较高的排名级别更喜欢具有大量匹配搜索词的文档 +Hypertext Cache Configuration==超文本缓存配置 +The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==超文本缓存存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存. +The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是队列式的: 队列满时, 会删除旧内容, 从而加入新内容. -There are two ranking stages:==有两个排名阶段: -first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==首先对搜索结果进行一次排名, 然后再对首次排名结果进行二次排名. -The two stages are separated because they need statistical information from the result of the pre-ranking.==两个结果是分开的, 因为它们都需要上次排名的统计结果. 
-#Post-Ranking ->Post-Ranking<==二次排名 +#HTCache Configuration +HTCache Configuration==超文本缓存配置 +Cache hits==缓存命中率 +The path where the cache is stored==缓存存储路径 +The current size of the cache==当前缓存容量 +>#[actualCacheSize]# MB for #[actualCacheDocCount]# files, #[docSizeAverage]# KB / file in average==>#[actualCacheSize]#MB为#[actualCacheDocCount]#文件, #[docSizeAverage]#平均KB /文件 +The maximum size of the cache==缓存最大容量 +Compression level==压缩级别 +Concurrent access timeout==并行存取超时 +milliseconds==毫秒 +"Set"=="设置" -"Set as Default Ranking"=="保存为默认排名" -"Re-Set to Built-In Ranking"=="重置排名设置" +#Cleanup +Cleanup==清除 +Cache Deletion==删除缓存 +Delete HTTP & FTP Cache==删除HTTP & FTP 缓存 +Delete robots.txt Cache==删除爬虫协议缓存 +"Delete"=="删除" #----------------------------- #File: ConfigHeuristics_p.html @@ -923,9 +543,9 @@ To find out more about OpenSearch see==要了解关于OpenSearch的更多信息 20 results are taken from remote system and loaded simultanously, parsed and indexed immediately.==20个结果从远程系统中获取并同时加载,立即解析并创建索引. When using this heuristic, then every new search request line is used for a call to listed opensearch systems.==使用这种启发式时,每个新的搜索请求行都用于调用列出的opensearch系统。 This means: right after the search request every page is loaded and every page that is linked on this page.==这意味着:在搜索请求之后,就开始加载每个页面上的链接。 -If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中“添加为全局抓取作业”,则要爬网的页面将被添加到全局抓取队列(远程YaCy节点可以抓取要抓取的页面)。 +If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中“添加为全局爬取作业”,则要爬网的页面将被添加到全局爬取队列(远程YaCy节点可以爬取要爬取的页面)。 Default is to add the links to the local crawl queue (your peer crawls the linked pages).==默认是将链接添加到本地爬网队列(您的YaCy爬取链接的页面)。 -add as global crawl job==添加为全球抓取作业 +add as global crawl job==添加为全球爬取作业 opensearch load external search result list from active systems below==opensearch从下面的活动系统加载外部搜索结果列表 Available/Active Opensearch System==可用/激活Opensearch系统 Url (format opensearch==Url (格式为opensearch @@ -955,36 +575,10 @@ heuristic:<name>==启发式:<名称> below the favicon left from the search result entry:==搜索结果中使用的图标: The search result was discovered by a heuristic, but the link was already known by YaCy==搜索结果通过启发式搜索, 且链接已知 The search result was discovered by a heuristic, not previously known by YaCy==搜索结果通过启发式搜索, 且链接未知 -'site'-operator: instant shallow crawl=='站点'-操作符: 即时浅抓取 -When a search is made using a 'site'-operator (like: 'download site:yacy.net') then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'站点'-操作符搜索时(比如: 'download site:yacy.net') ,主机就会立即抓取层数为 最大限制深度-1 的内容. +'site'-operator: instant shallow crawl=='站点'-操作符: 即时浅爬取 +When a search is made using a 'site'-operator (like: 'download site:yacy.net') then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'站点'-操作符搜索时(比如: 'download site:yacy.net') ,主机就会立即爬取层数为 最大限制深度-1 的内容. That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==意即: 在链接请求发出后, 搜索引擎就会载入在同一主机中每一个与此页面相连的网页. -Because this 'instant crawl' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search (after a small pause of some seconds).==因为'立即抓取'依赖于爬虫协议和两个相连页面的最小访问时间, 所以这个启发式选项会相当慢, 但是在第二次搜索时会搜索到更多条目(需要间隔几秒钟). 
-#----------------------------- - -#File: ConfigHTCache_p.html -#--------------------------- -Hypertext Cache Configuration==超文本缓存配置 -The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==超文本缓存存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存. -The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是队列式的: 队列满时, 会删除旧内容, 从而加入新内容. - -#HTCache Configuration -HTCache Configuration==超文本缓存配置 -Cache hits==缓存命中率 -The path where the cache is stored==缓存存储路径 -The current size of the cache==当前缓存容量 ->#[actualCacheSize]# MB for #[actualCacheDocCount]# files, #[docSizeAverage]# KB / file in average==>#[actualCacheSize]#MB为#[actualCacheDocCount]#文件, #[docSizeAverage]#平均KB /文件 -The maximum size of the cache==缓存最大容量 -Compression level==压缩级别 -Concurrent access timeout==并行存取超时 -milliseconds==毫秒 -"Set"=="设置" - -#Cleanup -Cleanup==清除 -Cache Deletion==删除缓存 -Delete HTTP & FTP Cache==删除HTTP & FTP 缓存 -Delete robots.txt Cache==删除爬虫协议缓存 -"Delete"=="删除" +Because this 'instant crawl' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search (after a small pause of some seconds).==因为'立即爬取'依赖于爬虫协议和两个相连页面的最小访问时间, 所以这个启发式选项会相当慢, 但是在第二次搜索时会搜索到更多条目(需要间隔几秒钟). #----------------------------- #File: ConfigLanguage_p.html @@ -1010,45 +604,10 @@ Make sure that you only download data from trustworthy sources. The new language might overwrite existing data if a file of the same name exists already.==, 旧文件将被覆盖. #----------------------------- -#File: ConfigLiveSearch.html -#--------------------------- -Integration of a Search Field for Live Search==搜索栏集成: 即时搜索 -A 'Live-Search' input field that reacts as search-as-you-type in a pop-up window can easily be integrated in any web page=='即时搜索'输入栏: 即当您在搜索栏键入关键字时, 会在网页中弹出搜索对话框按钮 -This is the same function as can be seen on all pages of the YaCy online-interface (look at the window in the upper right corner)==当您在线使用YaCy时, 您会在搜索页面看到相应功能(页面右上角) -Just use the code snippet below to integrate that in your own web pages==将以下代码添加到您的网页中 -Please check if the address, as given in the example '#[ip]#:#[port]#' here is correct and replace it with more appropriate values if necessary==对于形如 '#[ip]#:#[port]#' 的地址, 请用具体值来替换 -Code Snippet:==代码: -YaCy Portal Search==YaCy门户搜索 -"Search"=="搜索" -Configuration options and defaults for 'yconf':==配置设置和默认的'yconf': -Defaults<==默认< -url<==URL< -is a mandatory property - no default<==固有参数 - 非默认< -YaCy P2P Web Search==YaCy P2P 网页搜索 -Size and position (width | height | position)==尺寸和位置(宽度 | 高度 | 位置) -Specifies where the dialog should be displayed. Possible values for position: 'center', 'left', 'right', 'top', 'bottom', or an array containing a coordinate pair (in pixel offset from top left of viewport) or the possible string values (e.g. ['right','top'] for top right corner)==指定对话框位置. 对于位置: 'center', 'left', 'right', 'top', 'bottom' 的值, 或者一个包含对应位置值的数组 (以左上角为参考位置的像素数), 或者字符串值 (e.g. ['right','top'] 对应右上角) -Animation effects (show | hide)==动画效果 (显示 | 隐藏) -The effect to be used. Possible values: 'blind', 'clip', 'drop', 'explode', 'fold', 'puff', 'slide', 'scale', 'size', 'pulsate'.==可用特效: 'blind', 'clip', 'drop', 'explode', 'fold', 'puff', 'slide', 'scale', 'size', 'pulsate'. 
-Interaction (modal | resizable)==对话框 (modal | 可变) -If modal is set to true, the dialog will have modal behavior; other items on the page will be disabled (i.e. cannot be interacted with).==如果选中modal属性, 则对话框会有modal行为; 否则页面上就不具有此特性. (即不能进行交互操作). -Modal dialogs create an overlay below the dialog but above other page elements.==Modal对话框会在页面元素下面而不是其上创建覆盖层. -If resizable is set to true, the dialog will be resizeable.==如果选中可变属性, 对话框大小就是可变的. -Load JavaScript load_js==载入页面JavaScript -If load_js is set to false, you have to manually load the needed JavaScript on your portal page.==如果未选中载入页面JavaScript, 那么您可能需要手动加载页面里的JavaScript. -This can help to avoid timing problems or double loading.==这有助于避免分时或者重载问题. -Load Stylesheets load_css==载入页面样式 -If load_css is set to false, you have to manually load the needed CSS on your portal page.==如果未选中载入页面样式, 您需要手动加载页面里的CSS文件. -#Themes==Themes -You can <==您能够< -download ready made themes or create==下载或者创建 -your own custom theme.
Themes are installed into: DATA/HTDOCS/yacy/ui/css/themes/==一个您自己的主题.
主题文件安装在: DATA/HTDOCS/yacy/ui/css/themes/ -#----------------------------- - #File: ConfigNetwork_p.html #--------------------------- == Network Configuration==网络设置 - #Network and Domain Specification Network and Domain Specification==确定网络和域 YaCy can operate a computing grid of YaCy peers or as a stand-alone node.==您可以操作由YaCy节点组成的计算网格或者一个单独节点. @@ -1062,20 +621,18 @@ Long Description==描述 Indexing Domain==索引域 #DHT==DHT "Change Network"=="改变网络" - #Distributed Computing Network for Domain Distributed Computing Network for Domain==域内分布式计算网络. Enable Peer-to-Peer Mode to participate in the global YaCy network==开启点对点模式从而加入全球YaCy网 or if you want your own separate search cluster with or without connection to the global network==或者不论加不加入全球YaCy网,你都可以打造个人搜索群 Enable 'Robinson Mode' for a completely independent search engine instance==开启漂流模式获得完全独立的搜索引擎 without any data exchange between your peer and other peers==本节点不会与其他节点有任何数据交换 - #Peer-to-Peer Mode Peer-to-Peer Mode==点对点模式 >Index Distribution==>索引分发 This enables automated, DHT-ruled Index Transmission to other peers==自动向其他节点传递DHT规则的索引 >enabled==>开启 -disabled during crawling==关闭 在抓取时 +disabled during crawling==关闭 在爬取时 disabled during indexing==关闭 在索引时 >Index Receive==>接收索引 Accept remote Index Transmissions==接受远程索引传递 @@ -1084,12 +641,11 @@ This works only if you have a senior peer. The DHT-rules do not work without thi accept transmitted URLs that match your blacklist==接受 与您黑名单匹配的传来的地址 >allow==>允许 deny remote search==拒绝 远程搜索 - #Robinson Mode >Robinson Mode==>漂流模式 If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果您的节点运行在'漂流模式', 您能在不与其他节点交换数据的情况下进行搜索 There is no index receive and no index distribution between your peer and any other peer==您不会与其他节点进行索引传递 -In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==对于漂流群模式,一样会应答那个群内远端节点的抓取请求 +In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==对于漂流群模式,一样会应答那个群内远端节点的爬取请求 >Private Peer==>私有节点 Your search engine will not contact any other peer, and will reject every request==您的搜索引擎不会与其他节点联系, 并会拒绝每一个外部请求 >Public Peer==>公共节点 @@ -1097,7 +653,7 @@ You are visible to other peers and contact them to distribute your presence==对 Your peer does not accept any outside index data, but responds on all remote search requests==您的节点不接受任何外部索引数据, 但是会回应所有外部搜索请求 >Public Cluster==>公共群 Your peer is part of a public cluster within the YaCy network==您的节点属于YaCy网络内的一个公共群 -Index data is not distributed, but remote crawl requests are distributed and accepted==索引数据不会被分发, 但是外部的抓取请求会被分发和接受 +Index data is not distributed, but remote crawl requests are distributed and accepted==索引数据不会被分发, 但是外部的爬取请求会被分发和接受 Search requests are spread over all peers of the cluster, and answered from all peers of the cluster==搜索请求在当前群内的所有节点中传播, 并且这些节点同样会作出回应 List of .yacy or .yacyh - domains of the cluster: (comma-separated)==群内.yacy 或者.yacyh 的域名列表: (以逗号隔开) >Peer Tags==>节点标签 @@ -1105,7 +661,6 @@ When you allow access from the YaCy network, your data is recognized using keywo Please describe your search portal with some keywords (comma-separated)==请用关键字描述您的搜索门户 (以逗号隔开) If you leave the field empty, no peer asks your peer. If you fill in a '*', your peer is always asked.==如果此部分留空, 那么您的节点不会被其他节点访问. 如果内容是 '*' 则标示您的节点永远被允许访问. 
"Save"=="保存" - #Outgoing communications encryption Outgoing communications encryption==出色的通信加密 Protocol operations encryption==协议操作加密 @@ -1118,25 +673,20 @@ certificates are not validated against trusted certificate authorities==证书 thus allowing YaCy peers to use self-signed certificates==从而允许YaCy节点使用自签名证书 Note also that encryption of remote search queries is configured with a dedicated setting in the==另请注意,远程搜索查询加密的专用设置配置请使用 page==页面 - No changes were made!==未作出任何改变! Accepted Changes==接受改变 Inapplicable Setting Combination==设置未被应用 - - #----------------------------- #File: ConfigParser_p.html #--------------------------- Parser Configuration==解析配置 - #Content Parser Settings Content Parser Settings==内容解析设置 With this settings you can activate or deactivate parsing of additional content-types based on their MIME-types.==此设置能开启/关闭依据文件类型(MIME)的内容解析. For a detailed description of the various MIME-types take a look at==关于MIME的详细描述请参考 If you want to test a specific parser you can do so using the==如果要测试特定的解析器,可以使用 enable/disable Parser==开启/关闭解析器 - # --- Parser Names are hard-coded BEGIN --- ##Mime-Type==MIME Typ ##Microsoft Powerpoint Parser==Microsoft Powerpoint Parser @@ -1163,7 +713,6 @@ enable/disable Parser==开启/关闭解析器 #BMP Image Parser==BMP Bild Parser # --- Parser Names are hard-coded END --- "Submit"=="提交" - #PDF Parser Attributes PDF Parser Attributes==PDF解析器属性 This is an experimental setting which makes it possible to split PDF documents into individual index entries==这是一个实验设置,可以将PDF文档拆分为单独的索引条目 @@ -1279,11 +828,6 @@ or in the public using a Supporter Page as long as your peer is online)==首页(显示在每个支持者 页面中, 前提是您的节点在线). eMail==邮箱 -#ICQ==ICQ -#Jabber==Jabber -#Yahoo!==Yahoo! -#MSN==MSN -#Skype==Skype Comment==注释 "Save"=="保存" You can use <==在这里您可以用< @@ -1337,6 +881,52 @@ Replace the given colors #eeeeee (box background) and #cccccc (box border)==替 Replace the word "MySearch" with your own message==用您想显示的信息替换"我的搜索" #----------------------------- +#File: ConfigSearchPage_p.html +#--------------------------- +>Search Result Page Layout Configuration<==>搜索结果页面布局配置< +Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板.选中您希望显示的功能复选框. +To change colors and styles use the ==要改变颜色和样式使用 +>Appearance<==>外观< +menu for different skins==不同皮肤的菜单 +Other portal settings can be adjusted in==其他门户网站设置可以在这调整 +>Generic Search Portal<==>通用搜索门户< +menu.==菜单. 
+>Page Template<==>页面模板< +>Tag<==>标签< +>Topics<==>主题< +>Cloud<==>云< +>Text<==>文本< +>Images<==>图片< +>Audio<==>音频< +>Video<==>视频< +>Applications<==>应用< +>more options<==>更多选项< +>Location<==>位置< +Search Page<==搜索页< +>Protocol<==>协议< +>Filetype<==>文件类型< +>Wiki Name Space<==>百科名称空间< +>Language<==>语言< +>Author<==>作者< +>Vocabulary<==>词汇< +>Provider<==>提供商< +>Collection<==>收藏< +>Title of Result<==>结果标题< +Description and text snippet of the search result==搜索结果的描述和文本片段 +42 kbyte<==42千字节< +>Metadata<==>元数据< +>Parser<==>分析器< +>Citation<==>引用< +>Pictures<==>图片< +>Cache<==>高速缓存< +>Augmented Browsing<==>增强浏览< +For this option URL proxy must be enabled==对于这个选项,必须启用URL代理 +>Add Navigators<==>添加导航器< +>append==>附加 +"Save Settings"=="保存设置" +"Set Default Values"=="设置默认值" +#----------------------------- + #File: ConfigUpdate_p.html #--------------------------- >System Update<==>系统更新< @@ -1411,57 +1001,172 @@ Duration==持续时间 #ID==ID #----------------------------- +#File: ContentAnalysis_p.html +#--------------------------- +Content Analysis==内容分析 +These are document analysis attributes==这些是文档分析属性 +Double Content Detection==双重内容检测 +Double-Content detection is done using a ranking on a 'unique'-Field, named 'fuzzy_signature_unique_b'.==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。 +This is the minimum length of a word which shall be considered as element of the signature. Should be either 2 or 3.==这是一个应被视为签名的元素单词的最小长度。 应该是2或3。 +The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number, the less==quantRate是参与签名计算的单词数量的度量。 数字越高,越少 +words are used for the signature==单词用于签名 +For minTokenLen = 2 the quantRate value should not be below 0.24; for minTokenLen = 3 the quantRate value must be not below 0.5.==对于minTokenLen = 2,quantRate值不应低于0.24; 对于minTokenLen = 3,quantRate值必须不低于0.5。 +"Re-Set to default"=="重置为默认" +"Set"=="设置" +Double-Content detection is done using a ranking on a 'unique'-Field==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。 +The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number==quantRate是参与签名计算的单词数量的度量。 数字越高,越少 +#----------------------------- + +#File: ContentControl_p.html +#--------------------------- +Content Control<==内容控制< +Peer Content Control URL Filter==节点内容控制地址过滤器 +With this settings you can activate or deactivate content control on this peer==使用此设置,您可以激活或取消激活此YaCy节点上的内容控制 +Use content control filtering:==使用内容控制过滤: +>Enabled<==>已启用< +Enables or disables content control==启用或禁用内容控制 +Use this table to create filter:==使用此表创建过滤器: +Define a table. Default:==定义一个表格. 默认: +Content Control SMW Import Settings==内容控制SMW导入设置 +With this settings you can define the content control import settings. You can define a==使用此设置,您可以定义内容控制导入设置. 你可以定义一个 +Semantic Media Wiki with the appropriate extensions==语义媒体百科与适当的扩展 +SMW import to content control list:==SMW导入到内容控制列表: +Enable or disable constant background synchronization of content control list from SMW (Semantic Mediawiki). Requires restart!==启用或禁用来自SMW(Semantic Mediawiki)的内容控制列表的恒定后台同步。 需要重启! +SMW import base URL:==SMW导入基URL: +Define base URL for SMW special page "Ask". Example: ==为SMW特殊页面“Ask”定义基础地址.例: +SMW import target table:==SMW导入目标表: +Define import target table. Default: contentcontrol==定义导入目标表. 默认值:contentcontrol +Purge content control list on initial sync:==在初始同步时清除内容控制列表: +Purge content control list on initial synchronisation after startup.==重启后,清除初始同步的内容控制列表. 
+"Submit"=="提交" +Define base URL for SMW special page "Ask". Example:==为SMW特殊页面“Ask”定义基础地址.例: +#----------------------------- + +#File: ContentIntegrationPHPBB3_p.html +#--------------------------- +Content Integration: Retrieval from phpBB3 Databases==内容集成: 从phpBB3数据库中导入 +It is possible to extract texts directly from mySQL and postgreSQL databases.==能直接从mysql或者postgresql中解压出内容. +Each extraction is specific to the data that is hosted in the database.==每次解压都针对主机数据库中的数据. +This interface gives you access to the phpBB3 forums software content.==通过此接口能访问phpBB3论坛软件内容. +If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:==如果从使用phpMyAdmin读取数据库内容, 您可能会用到以下建议: +before importing large database dumps, set==在导入尺寸较大的数据库时, +in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB)==设置phpmyadmin/config.inc.php的内容, 并将您的数据库文件放到 /tmp 目录下(否则不能上传大于2MB的文件) +deselect the partial import flag==取消部分导入 +When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.==导出过程开始时, 在 DATA/SURROGATE/in 目录下自动生成备份文件, 并且会被索引器自动爬取. +All indexed surrogate files are then moved to DATA/SURROGATE/out and can be re-cycled when an index is deleted.==所有被索引的备份文件都在 DATA/SURROGATE/out 目录下, 并被索引器循环利用. +The URL stub==URL根域名 +like http://forum.yacy-websuche.de==比如链接 http://forum.yacy-websuche.de +this must be the path right in front of '/viewtopic.php?'==必须在'/viewtopic.php?'前面 +Type==数据库 +> of database<==> 类型< +use either 'mysql' or 'pgsql'==使用'mysql'或者'pgsql' +Host==数据库 +> of the database<==> 主机名< +of database service==数据库服务 +usually 3306 for mySQL==MySQL中通常是3306 +Name of the database==主机 +on the host==数据库 +Table prefix string==table +for table names==前缀 +User==数据库 +that can access the database==用户名 +Password==给定用户名的 +for the account of that user given above==访问密码 +Posts per file==导出备份中 +in exported surrogates==每个文件拥有的最多帖子数 +Check database connection==检查数据库连接 +Export Content to Surrogates==导出到备份 +Import a database dump==导入数据库 +Import Dump==导入 +Posts in database==数据库中帖子 +first entry==第一个 +last entry==最后一个 +Info failed:==错误信息: +Export successful! Wrote #[files]# files in DATA/SURROGATES/in==导出成功! #[files]# 已写入到 DATA/SURROGATES/in 目录 +Export failed:==导出失败: +Import successful!==导入成功! +Import failed:==导入失败: +#----------------------------- + #File: CookieMonitorIncoming_p.html #--------------------------- -Incoming Cookies Monitor==进入Cookies监视 -Cookie Monitor: Incoming Cookies==Cookie监视: 进入Cookies -This is a list of Cookies that a web server has sent to clients of the YaCy Proxy:==Web服务器已向YaCy代理客户端发送的Cookies: -Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 个Cookies. +Incoming Cookies Monitor==进入缓存监视 +Cookie Monitor: Incoming Cookies==缓存监视: 进入缓存 +This is a list of Cookies that a web server has sent to clients of the YaCy Proxy:==Web服务器已向YaCy代理客户端发送的缓存: +Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 条缓存. 
Sending Host==发送主机
Date==日期
Receiving Client==接收主机
>Cookie<==>缓存<
Cookie==缓存
-"Enable Cookie Monitoring"=="开启Cookie监视"
-"Disable Cookie Monitoring"=="关闭Cookie监视"
+"Enable Cookie Monitoring"=="开启缓存监视"
+"Disable Cookie Monitoring"=="关闭缓存监视"
#-----------------------------

#File: CookieMonitorOutgoing_p.html
#---------------------------
-Outgoing Cookies Monitor==外出Cookie监视
-Cookie Monitor: Outgoing Cookies==Cookie监视: 外出Cookie
-This is a list of cookies that browsers using the YaCy proxy sent to webservers:==YaCy代理以通过浏览器向Web服务器发送的Cookie:
-Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 个Cookies.
+Outgoing Cookies Monitor==外出缓存监视
+Cookie Monitor: Outgoing Cookies==缓存监视: 外出缓存
+This is a list of cookies that browsers using the YaCy proxy sent to webservers:==YaCy代理以通过浏览器向Web服务器发送的缓存:
+Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 条缓存.
Receiving Host==接收主机
Date==日期
Sending Client==发送主机
>Cookie<==>缓存<
Cookie==缓存
-"Enable Cookie Monitoring"=="开启Cookie监视"
-"Disable Cookie Monitoring"=="关闭Cookie监视"
+"Enable Cookie Monitoring"=="开启缓存监视"
+"Disable Cookie Monitoring"=="关闭缓存监视"
#-----------------------------

+#File: CookieTest_p.html
+#---------------------------
+Cookie - Test Page==缓存 - 测试页
+Here is a cookie test page.==这是一个缓存测试页.
+Just clean it==直接清除
+Name:==名称:
+Value:==值:
+Dear server, set this cookie for me!==亲爱的服务器,请为我设置这条缓存!
+Cookies at this browser:==此浏览器中的缓存:
+Cookies coming to server:==发送至服务器的缓存:
+Cookies server sent:==服务器发送的缓存:
+YaCy is a GPL'ed project==YaCy是一个遵循GPL协议的项目
+with the target of implementing a P2P-based global search engine.==目标是实现一个基于P2P的全球搜索引擎.
+Architecture (C) by==架构 (C) 作者
+#-----------------------------

+#File: CrawlCheck_p.html
+#---------------------------
+Crawl Check==爬取检查
+This pages gives you an analysis about the possible success for a web crawl on given addresses.==通过本页面,您可以分析在特定地址上进行网络爬取的可能性。
+List of possible crawl start URLs==可行的爬取起始网址列表
+"Check given urls"=="检查给定的网址"
+>Analysis<==>分析<
+>Access<==>访问<
+>Robots<==>爬虫协议<
+>Crawl-Delay<==>爬取延时<
+>Sitemap<==>站点地图<
+#-----------------------------

 #File: CrawlProfileEditor_p.html
 #---------------------------
Crawl Profile Editor==爬取配置文件编辑器
>Crawl Profile Editor<==>爬取文件编辑<
>Crawler Steering<==>爬虫向导<
>Crawl Scheduler<==>爬取调度器<
>Scheduled Crawls can be modified in this table<==>请在下表中修改已安排的爬取<
Crawl profiles hold information about a crawl process that is currently ongoing.==爬取文件里保存有正在运行的爬取进程信息.
#Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==Crawl Profile enthalten Informationen über eine spezifische URL, welche intern genutzt wird, um nachzuvollziehen, wozu der Crawl gehört.
#The profiles for remote crawls, indexing via proxy and snippet fetches==Die Profile für Remote Crawl, Indexierung per Proxy und Snippet Abrufe
#cannot be altered here as they are hard-coded.==können nicht verändert werden, weil sie "hard-coded" sind. 
-
 #Crawl Profile List
Crawl Profile List==爬取文件列表
Crawl Thread<==爬取线程<
>Collections<==>搜集<
>Status<==>状态<
>Depth<==>深度<
Must Match<==必须匹配<
>Must Not Match<==>必须不符<
>Recrawl if older than<==>若早于以下时间则重新爬取<
>Domain Counter Content<==>域计数器内容<
>Max Page Per Domain<==>每个域中拥有最大页面<
>Accept==>接受
URLs<==地址<
>Local Text Indexing<==>本地文本索引<
>Local Media Indexing<==>本地媒体索引<
>Remote Indexing<==>远程索引<
MaxAge<==最长寿命<
no::yes==否::是
Running==运行中
"Terminate"=="终止"
Finished==已完成
"Delete"=="删除"
"Delete finished crawls"=="删除已完成的爬取进程"
Select the profile to edit==选择要修改的文件
"Edit profile"=="修改文件"
An error occurred during editing the crawl profile:==修改爬取文件时发生错误:
Edit Profile==修改文件
"Submit changes"=="提交改变"
#-----------------------------

#File: CrawlResults.html
#---------------------------
>Collection==>收藏
"del & blacklist"=="删除并拉黑"
"del =="删除
Crawl Results<==爬取结果<
Overview==概况
Receipts==回执
Queries==请求
DHT Transfer==DHT转移
Proxy Use==Proxy使用
Local Crawling==本地爬取
Global Crawling==全球爬取
Surrogate Import==导入备份
>Crawl Results Overview<==>爬取结果概述<
These are monitoring pages for the different indexing queues.==创建索引队列的监视页面.
YaCy knows 5 different ways to acquire web indexes. The details of these processes (1-5) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 进程(1-5)的细节在子菜单中显示
above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息应该视为隐私,
so you need to log-in with your administration password.==所以您需要用管理员密码登录.
Case (6) is a monitor of the local receipt-generator, the opposed case of (1). It contains also an indexing result monitor but the interpretation is reversed.==事件(6)监视本地回执生成, 与事件(1)相反. 它也包含索引结果监视, 但是解释相反.
since it shows crawl requests from other peers.==因为它含有来自其他节点的请求.
Case (7) occurs if surrogate files are imported==如果备份被导入, 则事件(7)发生.
The image above illustrates the data flow initiated by web index acquisition.==上图为网页索引的数据流.
Some processes occur double to document the complex index migration structure.==一些进程可能出现双重文件索引结构混合的情况.
(1) Results of Remote Crawl Receipts==(1) 远程爬取回执结果
This is the list of web pages that this peer initiated to crawl,==这是节点初始化时爬取的网页列表,
but had been crawled by other peers.==但是先前它们已被其他节点爬取.
This is the 'mirror'-case of process (6).==这是进程(6)的'镜像'实例
Use Case:==用法:
You get entries here==你在此得到条目
if you start a local crawl on the 'Index Creation'-Page==如果您在'索引创建'页面上启动本地爬取
and check the==并检查
'Do Remote Indexing'-flag=='执行远程索引'标记
#Every page that a remote peer indexes upon this peer's request is reported back and can be monitored here==远程节点根据此节点的请求编制索引的每个页面都会被报告回来,并且可以在此处进行监视
(2) Results for Result of Search Queries==(2) 搜索查询结果报告页
This index transfer was initiated by your peer by doing a search query.==通过搜索, 此索引转移能被初始化.
The index was crawled and contributed by other peers.==这个索引是被其他节点贡献与爬取的.
This list fills up if you do a search query on the 'Search Page'==当您在'搜索页面'进行搜索时, 此表会被填充.
>Domain<==>域名<
>URLs<==>地址数<
(3) Results for Index Transfer==(3) 索引转移结果
The url fetch was initiated and executed by other peers.==此地址的爬取由其他节点发起并执行. 
+The url fetch was initiated and executed by other peers.==被其他节点初始化并爬取的地址. These links here have been transmitted to you because your peer is the most appropriate for storage according to==这些链接已经被传递给你, 因为根据全球分布哈希表的计算, the logic of the Global Distributed Hash Table.==您的节点是最适合存储它们的. This list may fill if you check the 'Index Receive'-flag on the 'Index Control' page==当您选中了在'索引控制'里的'接收索引'时, 这个表会被填充. - (4) Results for Proxy Indexing==(4) 代理索引结果 These web pages had been indexed as result of your proxy usage.==以下是由于使用代理而索引的网页. No personal or protected page is indexed==不包括私有或受保护网页 @@ -1542,24 +1242,20 @@ and automatically excluded from indexing.==并在索引时自动排除. You must use YaCy as proxy to fill up this table.==必须把YaCy用作代理才能填充此表格. Set the proxy settings of your browser to the same port as given==将浏览器代理端口设置为 on the 'Settings'-page in the 'Proxy and Administration Port' field.=='设置'页面'代理和管理端口'选项中的端口. - -(5) Results for Local Crawling==(5)本地抓取结果 -These web pages had been crawled by your own crawl task.==您的爬虫任务已经抓取了这些网页. -start a crawl by setting a crawl start point on the 'Index Create' page.==在'索引创建'页面设置抓取起始点以开始抓取. - -(6) Results for Global Crawling==(6)全球抓取结果 -These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已经被您的节点索引, 但是它们是被远端节点抓取的. +(5) Results for Local Crawling==(5)本地爬取结果 +These web pages had been crawled by your own crawl task.==您的爬虫任务已经爬取了这些网页. +start a crawl by setting a crawl start point on the 'Index Create' page.==在'索引创建'页面设置爬取起始点以开始爬取. +(6) Results for Global Crawling==(6)全球爬取结果 +These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已经被您的节点索引, 但是它们是被远端节点爬取的. This is the 'mirror'-case of process (1).==这是进程(1)的'镜像'实例. The stack is empty.==栈为空. Statistics about #[domains]# domains in this stack:==此栈显示有关 #[domains]# 域的数据: -This list may fill if you check the 'Accept remote crawling requests'-flag on the==如果您选中“接受远程抓取请求”标记,请填写列表在 +This list may fill if you check the 'Accept remote crawling requests'-flag on the==如果您选中“接受远程爬取请求”标记,请填写列表在 page<==页面< - (7) Results from surrogates import==(7) 备份导入结果 These records had been imported from surrogate files in DATA/SURROGATES/in==这些记录从 DATA/SURROGATES/in 中的备份文件中导入 place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==将包含Dublin核心元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式 (i.e. MediaWiki import, OAI-PMH retrieval)==(例如 MediaWiki 导入, OAI-PMH 导入) - "delete all"=="全部删除" Showing all #[all]# entries in this stack.==显示栈中所有 #[all]# 条目. Showing latest #[count]# lines from a stack of #[all]# entries.==显示栈中 #[all]# 条目的最近 #[count]# 行. 
@@ -1576,58 +1272,56 @@ Showing latest #[count]# lines from a stack of #[all]# entries.==显示栈中 #[ #File: CrawlStartExpert.html #--------------------------- == -Expert Crawl Start==抓取高级设置 -Start Crawling Job:==开始抓取任务: -You can define URLs as start points for Web page crawling and start crawling here==您可以将指定地址作为抓取网页的起始点 -"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links== "抓取中"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容 -This is repeated as long as specified under "Crawling Depth"==它将一直重复至到满足指定的"抓取深度" -A crawl can also be started using wget and the==抓取也可以将wget和 +Expert Crawl Start==爬取高级设置 +Start Crawling Job:==开始爬取任务: +You can define URLs as start points for Web page crawling and start crawling here==您可以将指定地址作为爬取网页的起始点 +"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links== "爬取中"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容 +This is repeated as long as specified under "Crawling Depth"==它将一直重复至到满足指定的"爬取深度" +A crawl can also be started using wget and the==爬取也可以将wget和 for this web page==用于此网页 - #Crawl Job ->Crawl Job<==>抓取工作< -A Crawl Job consist of one or more start point, crawl limitations and document freshness rules==抓取作业由一个或多个起始点、抓取限制和文档新鲜度规则组成 +>Crawl Job<==>爬取工作< +A Crawl Job consist of one or more start point, crawl limitations and document freshness rules==爬取作业由一个或多个起始点、爬取限制和文档新鲜度规则组成 #Start Point >Start Point==>起始点 Define the start-url(s) here.==在这儿确定起始地址. You can submit more than one URL, each line one URL please.==你可以提交多个地址,请一行一个地址. -Each of these URLs are the root for a crawl start, existing start URLs are always re-loaded.==每个地址中都是抓取开始的根,已有的起始地址会被重新加载. -Other already visited URLs are sorted out as "double", if they are not allowed using the re-crawl option.==对已经访问过的地址,如果它们不允许被重新抓取,则被标记为'重复'. +Each of these URLs are the root for a crawl start, existing start URLs are always re-loaded.==每个地址中都是爬取开始的根,已有的起始地址会被重新加载. +Other already visited URLs are sorted out as "double", if they are not allowed using the re-crawl option.==对已经访问过的地址,如果它们不允许被重新爬取,则被标记为'重复'. One Start URL or a list of URLs:==一个起始地址或地址列表: (must start with==(头部必须有 >From Link-List of URL<==>来自地址的链接列表< From Sitemap==来自站点地图 From File (enter a path==来自文件(输入 within your local file system)<==你本地文件系统的地址)< - #Crawler Filter >Crawler Filter==>爬虫过滤器 -These are limitations on the crawl stacker. The filters will be applied before a web page is loaded==这些是抓取堆栈器的限制.将在加载网页之前应用过滤器 +These are limitations on the crawl stacker. The filters will be applied before a web page is loaded==这些是爬取堆栈器的限制.将在加载网页之前应用过滤器 This defines how often the Crawler will follow links (of links..) embedded in websites.==此选项为爬虫跟踪网站嵌入链接的深度. 0 means that only the page you enter under "Starting Point" will be added==设置为0代表仅将"起始点" to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==添加到索引.建议设置为2-4.由于设置为8会索引将近256亿个页面,所以不建议设置大于8的值, index approximately 25.600.000.000 pages, maybe this is the whole WWW.==这可能是整个互联网的内容. ->Crawling Depth<==>抓取深度< +>Crawling Depth<==>爬取深度< also all linked non-parsable documents==还包括所有链接的不可解析文档 ->Unlimited crawl depth for URLs matching with<==>不限抓取深度,对这些匹配的网址< +>Unlimited crawl depth for URLs matching with<==>不限爬取深度,对这些匹配的网址< >Maximum Pages per Domain<==>每个域名最大页面数< Use:==使用: Page-Count==页面数 -You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==使用此选项,您可以限制将从单个域名中抓取和索引的页面数. 
+You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==使用此选项,您可以限制将从单个域名中爬取和索引的页面数. You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域名. the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域名会被自动忽略. >misc. Constraints<==>其余约束< A questionmark is usually a hint for a dynamic page.==动态页面常用问号标记. -URLs pointing to dynamic content should usually not be crawled.==通常不会抓取指向动态页面的地址. +URLs pointing to dynamic content should usually not be crawled.==通常不会爬取指向动态页面的地址. However, there are sometimes web pages with static content that==然而,也有些含有静态内容的页面用问号标记. -is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果您不确定,不要选中此项以防抓取时陷入死循环. +is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果您不确定,不要选中此项以防爬取时陷入死循环. Accept URLs with query-part ('?')==接受具有查询格式('?')的地址 >Load Filter on URLs<==>对地址加载过滤器< > must-match<==>必须匹配< The filter is a <==这个过滤器是一个< >regular expression<==>正则表达式< Example: to allow only urls that contain the word 'science', set the must-match filter to '.*science.*'.==列如:只允许包含'science'的地址,就在'必须匹配过滤器'中输入'.*science.*'. -You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用主动域名限制来完全抓取单个域名. +You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用主动域名限制来完全爬取单个域名. Attention: you can test the functionality of your regular expressions using the==注意:你可测试你的正则表达式功能使用 >Regular Expression Tester<==>正则表达式测试器< within YaCy.==在YaCy中. @@ -1638,13 +1332,12 @@ Use filter==使用过滤器 > must-not-match<==>必须排除< >Load Filter on IPs<==>对IP加载过滤器< >Must-Match List for Country Codes<==>国家代码必须匹配列表< -Crawls can be restricted to specific countries.==可以限制只在某个具体国家抓取. +Crawls can be restricted to specific countries.==可以限制只在某个具体国家爬取. This uses the country code that can be computed from==这会使用国家代码, 它来自 the IP of the server that hosts the page.==该页面所在主机的IP. The filter is not a regular expressions but a list of country codes,==这个过滤器不是正则表达式,而是 separated by comma.==由逗号隔开的国家代码列表. >no country code restriction<==>没有国家代码限制< - #Document Filter >Document Filter==>文档过滤器 These are limitations on index feeder.==这些是索引进料器的限制. @@ -1657,7 +1350,6 @@ that must not match with the URLs to allow that the content of the url is >Solr query filter on any active <==>Solr查询过滤器对任何有效的< >indexed<==>索引的< > field(s)<==>域< - #Content Filter >Content Filter==>内容过滤器 These are limitations on parts of a document.==这些是文档部分的限制. @@ -1666,24 +1358,22 @@ The filter will be applied after a web page was loaded.==加载网页后将应 >set of CSS class names<==>CSS类名集合< #comma-separated list of
or