From 68b5b48335bc766afed01a513e2ad10334a44a92 Mon Sep 17 00:00:00 2001 From: tangdou1 <35254744+tangdou1@users.noreply.github.com> Date: Mon, 16 Jul 2018 18:04:37 +0800 Subject: [PATCH] Update zh.lng --- locales/zh.lng | 545 +++++++++++++++++++++++++++++++++++-------------- 1 file changed, 391 insertions(+), 154 deletions(-) diff --git a/locales/zh.lng b/locales/zh.lng index da223747a..c32dba7a2 100644 --- a/locales/zh.lng +++ b/locales/zh.lng @@ -43,6 +43,14 @@ List of possible crawl start URLs==可行的起始抓行网址列表 >Sitemap<==>网页< #----------------------------- +#File: RegexTest.html +#--------------------------- +>Regex Test<==>正则表达式测试< +>Test String<==>测试字符串< +>Regular Expression<==>正则表达式< +>Result<==>结果< +#----------------------------- + #File: env/templates/submenuRanking.template #--------------------------- Solr Ranking Config==Solr排名配置 @@ -101,9 +109,9 @@ Thread Dump==线程转储 >Messages<==>消息< >Overview<==>概述< >Incoming News<==>传入的新闻< ->Processed News<==>加工的新闻< +>Processed News<==>处理的新闻< >Outgoing News<==>传出的新闻< ->Published News<==>已发布的新闻< +>Published News<==>发布的新闻< >Community Data<==>社区数据< >Surftips<==>冲浪提示< >Local Peer Wiki<==>本地节点百科 < @@ -148,28 +156,30 @@ Similar documents from different hosts:==来自不同主机的类似文件: #File: ConfigSearchPage_p.html #--------------------------- -Search Page<==搜索页< -Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板。 选中您希望显示的功能复选框。 ->Images<==>图片< ->Audio<==>音频< ->Video<==>视频< >Search Result Page Layout Configuration<==>搜索结果页面布局配置< -To change colors and styles use the ==要改变颜色和样式使用 +Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板.选中您希望显示的功能复选框. 
+To change colors and styles use the ==要改变颜色和样式使用 >Appearance<==>外观< +menu for different skins==不同皮肤的菜单 Other portal settings can be adjusted in==其他门户网站设置可以在这调整 >Generic Search Portal<==>通用搜索门户< -menu for different skins==不同皮肤的菜单 -To change colors and styles use the==要改变颜色和样式使用 +menu.==菜单. + >Page Template<==>页面模板< ->Text<==>文本< ->Applications<==>应用< ->more options<==>更多选项< >Tag<==>标签< >Topics<==>主题< >Cloud<==>云< +>Text<==>文本< +>Images<==>图片< +>Audio<==>音频< +>Video<==>视频< +>Applications<==>应用< +>more options<==>更多选项< +>Location<==>位置< +Search Page<==搜索页< >Protocol<==>协议< >Filetype<==>文件类型< ->Wiki Name Space<==> 百科名称空间< +>Wiki Name Space<==>百科名称空间< >Language<==>语言< >Author<==>作者< >Vocabulary<==>词汇< @@ -185,9 +195,11 @@ Description and text snippet of the search result==搜索结果的描述和文 >Cache<==>高速缓存< >Augmented Browsing<==>增强浏览< For this option URL proxy must be enabled==对于这个选项,必须启用URL代理 +>Add Navigators<==>添加导航器< +>append==>附加 "Save Settings"=="保存设置" "Set Default Values"=="设置默认值" -menu==菜单 + #----------------------------- #File: AccessGrid_p.html @@ -598,7 +610,7 @@ here.==在这里. #--------------------------- start autosearch of new bookmarks==开始自动搜索新书签 This starts a serach of new or modified bookmarks since startup==开始搜索自从启动以来新的或修改的书签 -Every peer online will be ask for results.==每个在线的伙伴都会被索要结果。 +Every peer online will be ask for results.==每个在线的节点都会被索要结果。 To see a list of all APIs, please visit the API wiki page.==要查看所有API的列表,请访问API wiki page。 To see a list of all APIs==要查看所有API的列表,请访问API wiki page。 YaCy '#[clientname]#': Bookmarks==YaCy '#[clientname]#': 书签 @@ -823,6 +835,59 @@ This is needed if you want to fully participate in the YaCy network.==如果您 You can also use your peer without opening it, but this is not recomended.==不开放您的节点您也能使用, 但是不推荐. #----------------------------- +#File: RankingRWI_p.html +#--------------------------- +>RWI Ranking Configuration<==>RWI排名配置< +The document ranking influences the order of the search result entities.==文档排名会影响实际搜索结果的顺序. 
+A ranking is computed using a number of attributes from the documents that match with the search word.==排名计算使用到与搜索词匹配的文档中的多个属性. +The attributes are first normalized over all search results and then the normalized attribute is multiplied with the ranking coefficient computed from this list.==在所有搜索结果基础上,先对属性进行归一化,然后将归一化的属性与相应的排名系数相乘. +The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数随着下表中给出的排名水平呈指数增长. +If you increase a single value by one, then the strength of the parameter doubles.==如果将单个值增加1,则参数的影响效果加倍. + +#Pre-Ranking +>Pre-Ranking<==>预排名< +== + +#>Appearance In Emphasized Text<==>出现在强调的文本中< +#a higher ranking level prefers documents where the search word is emphasized==较高的排名级别更倾向强调搜索词的文档 +#>Appearance In URL<==>出现在地址中< +#a higher ranking level prefers documents with urls that match the search word==较高的排名级别更倾向具有与搜索词匹配的地址的文档 +#Appearance In Author==出现在作者中 +#a higher ranking level prefers documents with authors that match the search word==较高的排名级别更倾向与搜索词匹配的作者的文档 +#>Appearance In Reference/Anchor Name<==>出现在参考/锚点名称中< +#a higher ranking level prefers documents where the search word matches in the description text==较高的排名级别更倾向搜索词在描述文本中匹配的文档 +#>Appearance In Tags<==>出现在标签中< +#a higher ranking level prefers documents where the search word is part of subject tags==较高的排名级别更喜欢搜索词是主题标签一部分的文档 +#>Appearance In Title<==>出现在标题中< +#a higher ranking level prefers documents with titles that match the search word==较高的排名级别更喜欢具有与搜索词匹配的标题的文档 +#>Authority of Domain<==>域名权威< +#a higher ranking level prefers documents from domains with a large number of matching documents==较高的排名级别更喜欢来自具有大量匹配文档的域的文档 +#>Category App, Appearance<==>类别:出现在应用中< +#a higher ranking level prefers documents with embedded links to applications==更高的排名级别更喜欢带有嵌入式应用程序链接的文档 +#>Category Audio Appearance<==>类别:出现在音频中< +#a higher ranking level prefers documents with embedded links to audio content==较高的排名级别更喜欢具有嵌入音频内容链接的文档 +#>Category Image Appearance<==>类别:出现在图像中< 
#>Category Video Appearance<==>类别:出现在视频中<
#>Category Index Page<==>类别:索引页面<
#a higher ranking level prefers 'index of' (directory listings) pages==较高的排名级别更喜欢'index of'(目录列表)页面
#>Date<==>日期<
#a higher ranking level prefers younger documents.==更高的排名水平更喜欢最新的文件.
#The age of a document is measured using the date submitted by the remote server as document date==使用远程服务器提交的日期作为文档日期来测量文档的年龄
#>Domain Length<==>域名长度<
#a higher ranking level prefers documents with a short domain name==较高的排名级别更喜欢具有短域名的文档
#>Hit Count<==>命中数<
#a higher ranking level prefers documents with a large number of matchings for the search word(s)==较高的排名级别更喜欢具有大量匹配搜索词的文档

There are two ranking stages:==有两个排名阶段:
first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==首先用预排名对所有结果进行排名, 然后再对所得结果列表进行二次排名.
The two stages are separated because they need statistical information from the result of the pre-ranking.==两个阶段是分开的, 因为二次排名需要预排名结果的统计信息.
#Post-Ranking
>Post-Ranking<==>二次排名<

"Set as Default Ranking"=="保存为默认排名"
"Re-Set to Built-In Ranking"=="重置为内置排名"
#-----------------------------

#File: ConfigHeuristics_p.html
#---------------------------
search-result: shallow crawl on all displayed search results==搜索结果:对所有显示的搜索结果进行浅度爬取
To find out more about OpenSearch see==要了解关于OpenSearch的更多信息
When using this heuristic, then every new search request line is used for a call to listed opensearch systems.==使用这种启发式时,每个新的搜索请求行都用于调用列出的opensearch系统。
A heuristic is an 'experience-based technique that help in problem solving, learning and discovery' (wikipedia).==启发式是一种“基于经验的技术,有助于解决问题,学习和发现”
This means: right after the search request every page is loaded and every page that is linked on this page.==这意味着:在搜索请求之后,就开始加载每个页面上的链接。
-If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中“添加为全局抓取作业”,则要爬网的页面将被添加到全局抓取队列(远程YaCy伙伴可以抓取要抓取的页面)。
+If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中“添加为全局抓取作业”,则要爬网的页面将被添加到全局抓取队列(远程YaCy节点可以抓取要抓取的页面)。
Default is to add the links to the local crawl queue (your peer crawls the linked pages).==默认是将链接添加到本地爬网队列(您的YaCy爬取链接的页面)。
add as global crawl job==添加为全局抓取作业
opensearch load external search result list from active systems below==opensearch从下面的活动系统加载外部搜索结果列表
@@ -1041,7 +1106,7 @@ page==页面
deny remote search==拒绝远程搜索
No changes were made!==未作出任何改变!
-Accepted Changes==应用设置
+Accepted Changes==已接受更改
Inapplicable Setting Combination==设置未被应用
@@ -1098,8 +1163,8 @@ Property Name==属性名
Integration of a Search Portal==搜索门户设置
If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果您想将YaCy作为您的网站搜索门户, 您可能需要在这改变搜索页面的图标和信息.
The search page may be customized.==搜索页面可以自由定制.
-You can change the 'corporate identity'-images, the greeting line==您可以改变 'Corporate Identity' 图像, 问候语
-and a link to a home page that is reached when the 'corporate identity'-images are clicked.==和一个指向首页的 'Corporate Identity' 图像链接.
+You can change the 'corporate identity'-images, the greeting line==您可以改变'企业标志'图像,问候语
+and a link to a home page that is reached when the 'corporate identity'-images are clicked.==和一个指向首页的'企业标志'图像链接.
To change also colours and styles use the Appearance Servlet for different skins and languages.==若要改变颜色和风格,请到外观选项选择您喜欢的皮肤和语言.
Greeting Line<==问候语<
URL of Home Page<==主页链接<
@@ -1116,13 +1181,41 @@ Show Advanced Search Options on Search Page==在搜索页显示高级搜索选项?
Show Advanced Search Options on index.html ==在index.html显示高级搜索选项?
do not show Advanced Search==不显示高级搜索
Media Search==媒体搜索
+>Extended==>扩展
+>Strict==>严格
+Control whether media search results are as default strictly limited to indexed documents matching exactly the desired content domain==控制媒体搜索结果是否默认严格限制为与所需内容域完全匹配的索引文档
+(images, videos or applications specific)==(图像,视频或具体应用)
+or extended to pages including such medias (provide generally more results, but eventually less relevant).==或扩展到包括此类媒体的网页(通常提供更多结果,但相关性更弱).
Remote results resorting==远程搜索结果排序
+>On demand, server-side==>按需,服务器端
+Automated, with JavaScript in the browser==自动化,在浏览器中使用JavaScript
+>for authenticated users only<==>仅限经过身份验证的用户<
Remote search encryption==远程搜索加密
->Snippet Fetch Strategy==>片段提取策略
+Prefer https for search queries on remote peers.==首选https用于远程节点上的搜索查询.
+When SSL/TLS is enabled on remote peers, https should be used to encrypt data exchanged with them when performing peer-to-peer searches.==在远程节点上启用SSL/TLS时,应使用https来加密在执行P2P搜索时与它们交换的数据.
+Please note that contrary to strict TLS, certificates are not validated against trusted certificate authorities (CA), thus allowing YaCy peers to use self-signed certificates.==请注意,与严格TLS相反,证书不会针对受信任的证书颁发机构(CA)进行验证,因此允许YaCy节点使用自签名证书.
+>Snippet Fetch Strategy==>摘要提取策略
+Speed up search results with this option! (use CACHEONLY or FALSE to switch off verification)==使用此选项加快搜索结果!(使用CACHEONLY或FALSE关闭验证)
+NOCACHE: no use of web cache, load all snippets online==NOCACHE:不使用网络缓存,在线加载所有网页摘要
+IFFRESH: use the cache if the cache exists and is fresh otherwise load online==IFFRESH:如果缓存存在且足够新则使用缓存,否则在线加载
+IFEXIST: use the cache if the cache exist or load online==IFEXIST:如果缓存存在则使用缓存,否则在线加载
+If verification fails, delete index reference==如果验证失败,删除索引引用
+CACHEONLY: never go online, use all content from cache.==CACHEONLY:永远不上网,内容只来自缓存.
+If no cache entry exist, consider content nevertheless as available and show result without snippet==如果不存在缓存条目,将内容视为可用,并显示没有摘要的结果
+FALSE: no link verification and not snippet generation: all search results are valid without verification==FALSE:没有链接验证且没有摘要生成:所有搜索结果在没有验证情况下有效
Link Verification<==链接验证<
Greedy Learning Mode==贪心学习模式
+load documents linked in search results,==加载搜索结果中链接的文档,
+will be deactivated automatically when index size==将自动停用当索引大小
+ (see==(见
+>Heuristics: search-result<==>启发式:搜索结果<
+ to use this permanent)==以永久使用此功能)
Index remote results==索引远程结果
+add remote search results to the local index==将远程搜索结果添加到本地索引
+( default=on, it is recommended to enable this option ! )==(默认=开启,建议启用此选项!)
Limit size of indexed remote results==限制远程索引结果大小
+maximum allowed size in kbytes for each remote search result to be added to the local index==添加到本地索引的每个远程搜索结果的最大允许大小(以KB为单位)
+for example, a 1000kbytes limit might be useful if you are running YaCy with a low memory setup==例如,如果运行具有低内存设置的YaCy,则1000KB限制可能很有用
Default Pop-Up Page<==默认弹出页面<
Default maximum number of results per page==默认每页最大结果数
Default index.html Page (by forwarder)==默认index.html页面(由转发器指定)
@@ -1132,13 +1225,18 @@ Target for Click on Search Results==点击搜索结果时
"_parent" (the parent frame of a frameset)=="_parent" (父级窗口)
"_top" (top of all frames)=="_top" (置顶)
Special Target as Exception for an URL-Pattern==作为URL模式的异常的特殊目标
+Pattern:<==模式:<
Exclude Hosts==排除的主机
-#List of hosts that shall be excluded from search results by default but can be included using the site: operator==默认情况下将被排除在搜索结果之外的主机列表,但可以使用site:操作符包括进来
+List of hosts that shall be excluded from search results by default==默认情况下将被排除在搜索结果之外的主机列表
+but can be included using the site: operator==但可以使用site:操作符包括进来
'About' Column<=='关于'栏<
shown in a column alongside==显示在
with the search result page==搜索结果页侧栏
(Headline)==(标题)
(Content)==(内容)
+>You have to==>你必须
+>set a remote user/password<==>设置一个远程用户/密码<
+to change this options.<==来改变设置.<
Show Information Links for each Search Result Entry==显示搜索结果的链接信息
>Date&==>日期&
>Size&==>大小&
@@ -1201,7 +1299,7 @@ Entire Peer==整个节点
Status page==状态页面
Network pages==网络页面
Surftips==建议
-News pages==新闻
+News pages==新闻页面
Blog==博客
Wiki==维基
Public bookmarks==公共书签
@@ -1467,7 +1565,7 @@ Showing latest #[count]# lines from a stack of #[all]# entries.==显示栈中 #[
==
Expert Crawl Start==抓取高级设置
Start Crawling Job:==开始抓取任务:
-You can define URLs as start points for Web page crawling and start crawling here==您可以将指定URL作为抓取网页的起始点
+You can define URLs as start points for Web page crawling and start crawling here==您可以将指定地址作为抓取网页的起始点
"Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links== "抓取中"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容
This is repeated as long as specified under "Crawling Depth"==它将一直重复直到满足指定的"抓取深度"
A crawl can also be started using wget and the==抓取也可以将wget和
@@ -1476,35 +1574,165 @@ for this web page==用于此网页
#Crawl Job
>Crawl Job<==>抓取工作<
A Crawl Job consist of one or more start point, crawl limitations and document freshness rules==抓取作业由一个或多个起始点、抓取限制和文档新鲜度规则组成
+#Start Point
>Start Point==>起始点
+Define the start-url(s) here.==在这儿确定起始地址.
+You can submit more than one URL, each line one URL please.==你可以提交多个地址,请一行一个地址.
+Each of these URLs are the root for a crawl start, existing start URLs are always re-loaded.==每个地址都是抓取的起始根,已有的起始地址总会被重新加载.
+Other already visited URLs are sorted out as "double", if they are not allowed using the re-crawl option.==对已经访问过的地址,如果它们不允许被重新抓取,则被标记为'重复'.
+One Start URL or a list of URLs:==一个起始地址或地址列表:
+(must start with==(头部必须有
+>From Link-List of URL<==>来自地址的链接列表<
+From Sitemap==来自站点地图
+From File (enter a path==来自文件(输入
+within your local file system)<==你本地文件系统中的路径)<
+
+#Crawler Filter
>Crawler Filter==>爬虫过滤器
+These are limitations on the crawl stacker. The filters will be applied before a web page is loaded==这些是抓取堆栈器的限制.将在加载网页之前应用过滤器
+This defines how often the Crawler will follow links (of links..) embedded in websites.==此选项为爬虫跟踪网站嵌入链接的深度.
+0 means that only the page you enter under "Starting Point" will be added==设置为0代表仅将"起始点"
+to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==添加到索引.建议设置为2-4.由于设置为8会索引将近256亿个页面,所以不建议设置大于8的值,
+index approximately 25.600.000.000 pages, maybe this is the whole WWW.==这可能是整个互联网的内容.
+>Crawling Depth<==>抓取深度<
+also all linked non-parsable documents==还包括所有链接的不可解析文档
+>Unlimited crawl depth for URLs matching with<==>不限抓取深度,对这些匹配的网址<
+>Maximum Pages per Domain<==>每个域名最大页面数<
+Use:==使用:
+Page-Count==页面数
+You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==使用此选项,您可以限制将从单个域名中抓取和索引的页面数.
+You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域名.
+the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域名会被自动忽略.
+>misc. Constraints<==>其余约束<
+A questionmark is usually a hint for a dynamic page.==动态页面常用问号标记.
+URLs pointing to dynamic content should usually not be crawled.==通常不会抓取指向动态页面的地址.
+However, there are sometimes web pages with static content that==然而,也有些含有静态内容的页面用问号标记.
+is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果您不确定,不要选中此项以防抓取时陷入死循环.
+Accept URLs with query-part ('?')==接受具有查询格式('?')的地址
+>Load Filter on URLs<==>对地址加载过滤器<
+> must-match<==>必须匹配<
+The filter is a <==这个过滤器是一个<
+>regular expression<==>正则表达式<
+Example: to allow only urls that contain the word 'science', set the must-match filter to '.*science.*'.==例如:只允许包含'science'的地址,就在'必须匹配过滤器'中输入'.*science.*'.
+You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用自动域名限制来完全抓取单个域名.
+Attention: you can test the functionality of your regular expressions using the==注意:你可以测试你的正则表达式功能,使用
+>Regular Expression Tester<==>正则表达式测试器<
+within YaCy.==在YaCy中.
Restrict to start domain==限制起始域
+Restrict to sub-path==限制子路径
+Use filter==使用过滤器
+(must not be empty)==(不能为空)
+> must-not-match<==>必须排除<
+>Load Filter on IPs<==>对IP加载过滤器<
+>Must-Match List for Country Codes<==>国家代码必须匹配列表<
+Crawls can be restricted to specific countries.==可以限制只在某个具体国家抓取.
+This uses the country code that can be computed from==这会使用国家代码, 它来自
+the IP of the server that hosts the page.==该页面所在主机的IP.
+The filter is not a regular expressions but a list of country codes,==这个过滤器不是正则表达式,而是
+separated by comma.==由逗号隔开的国家代码列表.
+>no country code restriction<==>没有国家代码限制<
+
+#Document Filter
>Document Filter==>文档过滤器
+These are limitations on index feeder.==这些是索引馈送器的限制.
+The filters will be applied after a web page was loaded.==加载网页后将应用过滤器.
+that must not match with the URLs to allow that the content of the url is indexed.==它必须与地址不匹配,地址中的内容才会被索引.
+>Filter on URLs<==>地址过滤器<
+>Filter on Content of Document<==>文档内容过滤器<
+>(all visible text, including camel-case-tokenized url and title)<==>(所有可见文本,包括驼峰式分词的网址和标题)<
+>Filter on Document Media Type (aka MIME type)<==>文档媒体类型过滤器(又称MIME类型)<
+>Solr query filter on any active <==>Solr查询过滤器对任何有效的<
+>indexed<==>索引的<
+> field(s)<==>域<
+
+#Content Filter
>Content Filter==>内容过滤器
+These are limitations on parts of a document.==这些是对文档各部分的限制.
+The filter will be applied after a web page was loaded.==加载网页后将应用过滤器.
+>Filter div or nav class names<==>div或nav类名过滤器<
+>set of CSS class names<==>CSS类名集合<
+#comma-separated list of
or