# zh.lng
# English-->Chinese
# -----------------------
# This is a part of YaCy, a peer-to-peer based web search engine
#
# (C) by Michael Peter Christen; mc@anomic.de
# first published on http://www.anomic.de
# Frankfurt, Germany, 2005
#
#
# This file is maintained by lofyer
# This file is written by lofyer
# If you find any mistakes or untranslated strings in this file please don't hesitate to email them to the maintainer.

#File: ConfigLanguage_p.html
#---------------------------
# Only part 1.
# Contributors are in chronological order, not how much they did absolutely.
# Thank you for your help!
default(english)==Chinese
==lofyer
==lofyer@gmail.com
#-----------------------------

#File: env/templates/submenuTargetAnalysis.template
#---------------------------
Target Analysis==目标分析
Mass Crawl Check==大量抓取检查
Regex Test==正则表达式测试
#-----------------------------

#File: CrawlCheck_p.html
#---------------------------
Crawl Check==抓取检查
This pages gives you an analysis about the possible success for a web crawl on given addresses.==通过本页面,您可以分析在给定地址上进行网络抓取的可能性。
List of possible crawl start URLs==可行的抓取起始网址列表
"Check given urls"=="检查给定的网址"
>Analysis<==>分析<
>Access<==>访问<
>Robots<==>机器人协议<
>Crawl-Delay<==>抓取延时<
>Sitemap<==>网站地图<
#-----------------------------

#File: RegexTest.html
#---------------------------
>Regex Test<==>正则表达式测试<
>Test String<==>测试字符串<
>Regular Expression<==>正则表达式<
>Result<==>结果<
#-----------------------------

#File: env/templates/submenuRanking.template
#---------------------------
Solr Ranking Config==Solr排名配置
>Heuristics<==>启发式<
Ranking and Heuristics==排名与启发式
RWI Ranking Config==RWI排名配置
#-----------------------------

#File: ConfigPortal.html
#---------------------------
Enable Search for Everyone?==对所有人启用搜索?
Search is available for everyone==搜索适用于所有人
Only the administator is allowed to search==只有管理员可以搜索
Snippet Fetch Strategy & Link Verification==片段获取策略&amp;链接验证
Speed up search results with this option! (use CACHEONLY or FALSE to switch off verification)==使用此选项加快搜索结果!(使用CACHEONLY或FALSE关闭验证)
NOCACHE: no use of web cache, load all snippets online==NOCACHE:不使用网页缓存,在线加载所有片段
IFFRESH: use the cache if the cache exists and is fresh otherwise load online==IFFRESH:如果缓存存在且是最新的,则使用缓存,否则在线加载
IFEXIST: use the cache if the cache exist or load online==IFEXIST:如果缓存存在则使用缓存,否则在线加载
If verification fails, delete index reference==如果验证失败,删除索引引用
CACHEONLY: never go online, use all content from cache. If no cache entry exist, consider content nevertheless as available and show result without snippet==CACHEONLY:从不联网,使用缓存中的所有内容。如果缓存条目不存在,仍将内容视为可用,并显示没有片段的结果
FALSE: no link verification and not snippet generation: all search results are valid without verification==FALSE:不进行链接验证,也不生成片段:所有搜索结果无需验证即视为有效
Greedy Learning Mode==贪婪学习模式
load documents linked in search results, will be deactivated automatically when index size==加载搜索结果中链接的文档,当索引达到一定大小时将自动停用
Default maximum number of results per page==每页结果的默认最大数量
Special Target as Exception for an URL-Pattern==作为URL模式例外的特殊目标
Pattern:<==模式:<
set a remote user/password==设置远程用户/密码
to change this options.==以更改此选项。
(Headline)==(标题)
(Content)==(内容)
You have to set a remote user/password to change this options.==您必须设置远程用户/密码才能更改此选项。
You have ==你有
'About' Column (shown in a column alongside with the search result page)==“关于”栏(在搜索结果页面旁边的一栏中显示)
>Exclude Hosts<==>排除主机<
List of hosts that shall be excluded from search results by default but can be included using the site:<host> operator:==默认情况下从搜索结果中排除、但可以使用 site:<host> 操作符包含的主机列表:
You have==你有
NOCACHE: no use of web cache==NOCACHE:不使用网页缓存
List of hosts that shall be excluded from search results by default but can be included using the site:==默认情况下从搜索结果中排除、但可以使用 site: 操作符包含的主机列表:
Snippet Fetch Strategy ==片段获取策略
If verification fails==如果验证失败
CACHEONLY: never go online==CACHEONLY:从不联网
load documents linked in search results==加载搜索结果中链接的文档
#-----------------------------

#File: env/templates/submenuComputation.template
#---------------------------
>Application Status<==>应用程序状态<
>Status<==>状态<
System==系统
Thread Dump==线程转储
>Processes<==>进程<
>Server Log<==>服务器日志<
>Concurrent Indexing<==>并发索引<
>Memory Usage<==>内存使用<
>Search Sequence<==>搜索序列<
>Messages<==>消息<
>Overview<==>概述<
>Incoming News<==>传入的新闻<
>Processed News<==>已处理的新闻<
>Outgoing News<==>传出的新闻<
>Published News<==>已发布的新闻<
>Community Data<==>社区数据<
>Surftips<==>冲浪提示<
>Local Peer Wiki<==>本地节点百科<
UI Translations==用户界面翻译
>Published==>已发布
>Processed==>已处理
>Outgoing==>传出
>Incoming==>传入
#-----------------------------

#File: env/templates/submenuPortalConfiguration.template
#---------------------------
Generic Search Portal==通用搜索门户
User Profile==用户资料
Local robots.txt==本地robots.txt
Portal Configuration==门户配置
Search Box Anywhere==随处搜索框
#-----------------------------

#File: Translator_p.html
#---------------------------
Translation Editor==翻译编辑器
Translate untranslated text of the user interface (current language).==翻译用户界面中未翻译的文本(当前语言)。
UI Translation==界面翻译
Target Language:==目标语言:
activate a different language==激活另一种语言
Source File==源文件
view it==查看
filter untranslated==筛选未翻译项
Source Text==源文本
Translated Text==译文
Save translation==保存翻译
The modified translation file is stored in DATA/LOCALE directory.==修改后的翻译文件储存在DATA/LOCALE目录下
#-----------------------------

#File: api/citation.html
#---------------------------
Document Citations for==文档引用
List of other web pages with citations==其他含有引用的网页列表
Similar documents from different hosts:==来自不同主机的类似文档:
#-----------------------------

#File: ConfigSearchPage_p.html
#---------------------------
>Search Result Page Layout Configuration<==>搜索结果页面布局配置<
Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板。选中您希望显示的功能复选框。
To change colors and styles use the ==要改变颜色和样式,请使用
>Appearance<==>外观<
menu for different skins==菜单选择不同皮肤
Other portal settings can be adjusted in==其他门户设置可以调整于
>Generic Search Portal<==>通用搜索门户<
menu.==菜单。
>Page Template<==>页面模板<
>Tag<==>标签<
>Topics<==>主题<
>Cloud<==>云<
>Text<==>文本<
>Images<==>图片<
>Audio<==>音频<
>Video<==>视频<
>Applications<==>应用<
>more options<==>更多选项<
>Location<==>位置<
Search Page<==搜索页<
>Protocol<==>协议<
>Filetype<==>文件类型<
>Wiki Name Space<==>百科名称空间<
>Language<==>语言<
>Author<==>作者<
>Vocabulary<==>词汇<
>Provider<==>提供商<
>Collection<==>收藏<
>Title of Result<==>结果标题<
Description and text snippet of the search result==搜索结果的描述和文本片段
42 kbyte<==42千字节<
>Metadata<==>元数据<
>Parser<==>解析器<
>Citation<==>引用<
>Pictures<==>图片<
>Cache<==>缓存<
>Augmented Browsing<==>增强浏览<
For this option URL proxy must be enabled==对于此选项,必须启用URL代理
>Add Navigators<==>添加导航器<
>append==>附加
"Save Settings"=="保存设置"
"Set Default Values"=="设置默认值"
#-----------------------------

#File: AccessGrid_p.html
#---------------------------
This images shows incoming connections to your YaCy peer and outgoing connections from your peer to other peers and web servers==此图显示了到您节点的传入连接,以及从您节点到其他节点或网站服务器的传出连接
Server Access Grid==服务器访问网格
YaCy Network Access==YaCy网络访问
#-----------------------------

#File: env/templates/submenuIndexImport.template
#---------------------------
>Content Export / Import<==>内容导出/导入<
>Export<==>导出<
>Internal Index Export<==>内部索引导出<
>Import<==>导入<
RSS Feed Importer==RSS订阅导入器
OAI-PMH Importer==OAI-PMH导入器
>Warc Importer<==>Warc导入器<
>Database Reader<==>数据库阅读器<
Database Reader for phpBB3 Forums==phpBB3论坛的数据库阅读器
Dump Reader for MediaWiki dumps==MediaWiki转储阅读器
#-----------------------------

#File: TransNews_p.html
#---------------------------
Translation News for Language==当前语言的翻译新闻
Translation News==翻译新闻
You can share your local addition to translations and distribute it to other peers.==您可以分享您本地新增的翻译,并分发给其他节点。
The remote peer can vote on your translation and add it to the own local translation.==远程节点可以对您的翻译进行投票,并将其添加到自己的本地翻译中。
entries available==个可用条目
"Publish"=="发布"
You can check your outgoing messages==您可以检查您的传出消息
>here<==>这里<
To edit or add local translations you can use==要编辑或添加本地翻译,您可以使用
File:==文件:
Translation:==翻译:
>score==>分数
negative vote==反对票
positive vote==赞成票
Vote on this translation==对此翻译投票
If you vote positive the translation is added to your local translation list==如果您投赞成票,该翻译将被添加到您的本地翻译列表中
>Originator<==>发起人<
#-----------------------------

#File: Autocrawl_p.html
#---------------------------
>Autocrawler<==>自动爬虫<
Autocrawler automatically selects and adds tasks to the local crawl queue==自动爬虫自动选择任务并将其添加到本地抓取队列
This will work best when there are already quite a few domains in the index==如果索引中已经有不少域名,此功能工作得最好
Autocralwer Configuration==自动爬虫配置
You need to restart for some settings to be applied==您需要重新启动才能应用某些设置
Enable Autocrawler:==启用自动爬虫:
Deep crawl every:==深度抓取间隔:
Warning: if this is bigger than "Rows to fetch" only shallow crawls will run==警告:如果此值大于“一次取回行数”,将只运行浅度抓取
Rows to fetch at once:==一次取回行数:
Recrawl only older than # days:==只重新抓取早于 # 天的页面:
Get hosts by query:==通过查询获取主机:
Can be any valid Solr query.==可以是任何有效的Solr查询。
Shallow crawl depth (0 to 2):==浅度抓取深度(0至2):
Deep crawl depth (1 to 5):==深度抓取深度(1至5):
Index text:==索引文本:
Index media:==索引媒体:
"Save"=="保存"
#-----------------------------

#File: ConfigParser.html
#---------------------------
>File Viewer<==>文件查看器<
>Extension<==>扩展名<
If you want to test a specific parser you can do so using the==如果您想测试特定的解析器,可以使用
>Mime-Type<==>MIME类型<
#-----------------------------

#File: env/templates/submenuCrawler.template
#---------------------------
Load Web Pages==加载网页
Site Crawling==网站抓取
Parser Configuration==解析器配置
#-----------------------------

#File: ContentControl_p.html
#---------------------------
Content Control<==内容控制<
Peer Content Control URL Filter==节点内容控制地址过滤器
With this settings you can activate or deactivate content control on this peer==使用此设置,您可以激活或停用此节点上的内容控制
Use content control filtering:==使用内容控制过滤:
>Enabled<==>已启用<
Enables or disables content control==启用或禁用内容控制
Use this table to create filter:==使用此表创建过滤器:
Define a table. Default:==定义一个表格。默认:
Content Control SMW Import Settings==内容控制SMW导入设置
With this settings you can define the content control import settings. You can define a==使用此设置,您可以定义内容控制导入设置。您可以定义一个
Semantic Media Wiki with the appropriate extensions==带有适当扩展的Semantic MediaWiki
SMW import to content control list:==SMW导入到内容控制列表:
Enable or disable constant background synchronization of content control list from SMW (Semantic Mediawiki). Requires restart!==启用或禁用从SMW(Semantic MediaWiki)后台持续同步内容控制列表。需要重启!
SMW import base URL:==SMW导入基础URL:
Define base URL for SMW special page "Ask". Example: ==为SMW特殊页面“Ask”定义基础地址。例如:
SMW import target table:==SMW导入目标表:
Define import target table. Default: contentcontrol==定义导入目标表。默认:contentcontrol
Purge content control list on initial sync:==在初始同步时清除内容控制列表:
Purge content control list on initial synchronisation after startup.==启动后,在初始同步时清除内容控制列表。
"Submit"=="提交"
Define base URL for SMW special page "Ask". Example:==为SMW特殊页面“Ask”定义基础地址。例如:
#-----------------------------

#File: env/templates/submenuSemantic.template
#---------------------------
Content Semantic==内容语义
>Automated Annotation<==>自动注释<
Auto-Annotation Vocabulary Editor==自动注释词汇编辑器
Knowledge Loader==知识加载器
>Augmented Content<==>增强内容<
Augmented Browsing==增强浏览
#-----------------------------

#File: YMarks.html
#---------------------------
"Import"=="导入"
documents==文档
days==天
hours==小时
minutes==分钟
for new documents automatically==自动对新文档
run this crawl once==运行此抓取一次
>Query<==>查询<
Query Type==查询类型
>Import<==>导入<
Tag Manager==标签管理器
Bookmarks (user: #[user]# size: #[size]#)==书签(用户: #[user]# 大小: #[size]#)
"Replace"=="替换"
#-----------------------------

#File: AugmentedBrowsing_p.html
#---------------------------
Augmented Browsing<==增强浏览<
, where parameter is the url of an external web page==,其中参数是外部网页的网址
URL Proxy Settings<==URL代理设置<
With this settings you can activate or deactivate URL proxy which is the method used for augmentation.==使用此设置,您可以激活或停用用于增强的URL代理。
Service call: ==服务调用:
>URL proxy:<==>URL代理:<
Globally enables or disables URL proxy via==全局启用或禁用URL代理,通过
>Enabled<==>已启用<
Show search results via URL proxy:==通过URL代理显示搜索结果:
Enables or disables URL proxy for all search results. If enabled, all search results will be tunneled through URL proxy.==为所有搜索结果启用或禁用URL代理。如果启用,所有搜索结果将通过URL代理隧道传输。
Alternatively you may add this javascript to your browser favorites/short-cuts, which will reload the current browser address==或者,您可以将此JavaScript添加到您的浏览器收藏夹/快捷方式,这将重新加载当前的浏览器地址
via the YaCy proxy servlet==通过YaCy代理servlet
or right-click this link and add to favorites:==或右键单击此链接并添加到收藏夹:
Restrict URL proxy use:==限制URL代理使用:
Define client filter. Default: ==定义客户端过滤器。默认:
URL substitution:==网址替换:
Define URL substitution rules which allow navigating in proxy environment. Possible values: all, domainlist. Default: domainlist.==定义允许在代理环境中导航的URL替换规则。可能的值:all,domainlist。默认:domainlist。
"Submit"=="提交"
Alternatively you may add this javascript to your browser favorites/short-cuts==或者,您可以将此JavaScript添加到您的浏览器收藏夹/快捷方式
Enables or disables URL proxy for all search results. If enabled==为所有搜索结果启用或禁用URL代理。如果启用
Service call:==服务调用:
Define client filter. Default:==定义客户端过滤器。默认:
Define URL substitution rules which allow navigating in proxy environment. Possible values: all==定义允许在代理环境中导航的URL替换规则。可能的值:all
Globally enables or disables URL proxy via==全局启用或禁用URL代理,通过
#-----------------------------

#File: env/templates/submenuMaintenance.template
#---------------------------
RAM/Disk Usage & Updates==内存/硬盘使用 & 更新
Web Cache==网页缓存
Download System Update==下载系统更新
>Performance<==>性能<
RAM/Disk Usage==内存/硬盘使用
#-----------------------------

#File: ContentAnalysis_p.html
#---------------------------
Content Analysis==内容分析
These are document analysis attributes==这些是文档分析属性
Double Content Detection==重复内容检测
Double-Content detection is done using a ranking on a 'unique'-Field, named 'fuzzy_signature_unique_b'.==重复内容检测使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成。
This is the minimum length of a word which shall be considered as element of the signature. Should be either 2 or 3.==这是被视为签名元素的单词的最小长度。应为2或3。
The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number, the less==quantRate是参与签名计算的单词数量的度量。数字越高,越少的
words are used for the signature==单词被用于签名
For minTokenLen = 2 the quantRate value should not be below 0.24; for minTokenLen = 3 the quantRate value must be not below 0.5.==对于minTokenLen = 2,quantRate值不应低于0.24;对于minTokenLen = 3,quantRate值必须不低于0.5。
"Re-Set to default"=="重置为默认"
"Set"=="设置"
Double-Content detection is done using a ranking on a 'unique'-Field==重复内容检测使用'unique'字段上的排名完成
The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number==quantRate是参与签名计算的单词数量的度量。数字越高
#-----------------------------

#File: AccessTracker_p.html
#---------------------------
Access Tracker==访问跟踪
Server Access Overview==服务器访问概况
This is a list of #[num]# requests to the local http server within the last hour==这是最近一小时内到本地http服务器的 #[num]# 个请求的列表
This is a list of requests to the local http server within the last hour==这是最近一小时内到本地http服务器的请求列表
Showing #[num]# requests==显示 #[num]# 个请求
>Host<==>主机<
>Path<==>路径<
Date<==日期<
Access Count During==访问计数于
last Second==最近1秒
last Minute==最近1分钟
last 10 Minutes==最近10分钟
last Hour==最近1小时
The following hosts are registered as source for brute-force requests to protected pages==以下主机被记录为对受保护页面进行暴力破解请求的来源
#>Host==>Host
Access Times==访问时间
Server Access Details==服务器访问细节
Local Search Log==本地搜索日志
Local Search Host Tracker==本地搜索主机跟踪
Remote Search Log==远程搜索日志
#Total:==Total:
Success:==成功:
Remote Search Host Tracker==远程搜索主机跟踪
This is a list of searches that had been requested from this' peer search interface==这是从本节点搜索界面请求的搜索列表
Showing #[num]# entries from a total of #[total]# requests.==显示 #[num]# 个条目,共 #[total]# 个请求。
Requesting Host==请求主机
Peer Name==节点名称
Offset==偏移量
Expected Results==期望结果数
Returned Results==返回结果数
Known Results==已知结果数
Used Time (ms)==消耗时间(毫秒)
URL fetch (ms)==地址获取(毫秒)
Snippet comp (ms)==片段计算(毫秒)
Query==查询词
>User Agent<==>用户代理<
Top Search Words (last 7 Days)==热门搜索词(最近7天)
Search Word Hashes==搜索词哈希值
Count==计数
Queries Per Last Hour==每小时查询数
Access Dates==访问日期
This is a list of searches that had been requested from remote peer search interface==这是从远程节点搜索界面请求的搜索列表
This is a list of requests (max. 1000) to the local http server within the last hour==这是最近一小时内到本地http服务器的请求列表(最多1000个)
#-----------------------------

#File: Blacklist_p.html
#---------------------------
Blacklist Administration==黑名单管理
This function provides an URL filter to the proxy; any blacklisted URL is blocked==此功能为代理提供地址过滤器;任何列入黑名单的地址都会被阻止
from being loaded. You can define several blacklists and activate them separately.==载入。您可以定义多个黑名单并分别激活它们。
You may also provide your blacklist to other peers by sharing them; in return you may==您也可以通过共享将黑名单提供给其他节点;作为回报,您可以
collect blacklist entries from other peers==收集其他节点的黑名单条目
Select list to edit:==选择要编辑的列表:
Add URL pattern==添加地址规则
Edit list==编辑列表
The right '*', after the '/', can be replaced by a==在'/'之后的右边'*'可以被替换为
>regular expression<==>正则表达式<
#(slow)==(慢)
"set"=="设置"
The right '*'==右边的'*'
Used Blacklist engine:==使用的黑名单引擎:
Active list:==激活的列表:
No blacklist selected==未选中黑名单
Select list:==选择列表:
not shared::shared==未共享::已共享
"select"=="选择"
Create new list:==创建新列表:
"create"=="创建"
Settings for this list==此列表的设置
"Save"=="保存"
Share/don't share this list==共享/不共享此名单
Delete this list==删除此列表
Edit this list==编辑此列表
These are the domain name/path patterns in==这些是域名/路径规则,来自
Blacklist Pattern==黑名单规则
Edit selected pattern(s)==编辑选中规则
Delete selected pattern(s)==删除选中规则
Move selected pattern(s) to==移动选中规则到
#You can select them here for deletion==您可以在这里选择要删除的项
Add new pattern:==添加新规则:
"Add URL pattern"=="添加地址规则"
The right '*', after the '/', can be replaced by a regular expression.==在'/'后边的'*'可用正则表达式代替。
#domain.net/fullpath<==domain.net/绝对路径<
#>domain.net/*<==>domain.net/*<
#*.domain.net/*<==*.domain.net/*<
#*.sub.domain.net/*<==*.sub.domain.net/*<
#sub.domain.*/*<==sub.domain.*/*<
#domain.*/*<==domain.*/*<
#was removed from blacklist==wurde aus Blacklist entfernt
#was added to the blacklist==wurde zur Blacklist hinzugefügt
Activate this list for==为以下条目激活此名单
Show entries:==显示条目:
Entries per page:==每页条目数:
Edit existing pattern(s):==编辑现有规则:
"Save URL pattern(s)"=="保存地址规则"
#-----------------------------

#File: BlacklistCleaner_p.html
#---------------------------
Blacklist Cleaner==黑名单清理
Here you can remove or edit illegal or double blacklist-entries==在这里您可以删除或编辑非法或重复的黑名单条目
Check list==检查名单
"Check"=="检查"
Allow regular expressions in host part of blacklist entries==允许黑名单条目的主机部分使用正则表达式
The blacklist-cleaner only works for the following blacklist-engines up to now:==目前黑名单清理只对以下黑名单引擎有效:
Illegal Entries in #[blList]# for==非法条目在 #[blList]# 中,对于
Deleted #[delCount]# entries==已删除 #[delCount]# 个条目
Altered #[alterCount]# entries==已修改 #[alterCount]# 个条目
Two wildcards in host-part==主机部分中有两个通配符
Either subdomain or wildcard==子域名或通配符
Path is invalid Regex==路径是无效的正则表达式
Wildcard not on begin or end==通配符不在开头或结尾处
Host contains illegal chars==主机名包含非法字符
Double==重复
"Change Selected"=="更改选中项"
"Delete Selected"=="删除选中项"
No Blacklist selected==未选中黑名单
#-----------------------------

#File: BlacklistImpExp_p.html
#---------------------------
Blacklist Import==黑名单导入
Used Blacklist engine:==使用的黑名单引擎:
Import blacklist items from...==导入黑名单条目,来自...
other YaCy peers:==其他YaCy节点:
"Load new blacklist items"=="载入新黑名单条目"
#URL:==URL:
plain text file:<==纯文本文件:<
XML file:==XML文件:
Upload a regular text file which contains one blacklist entry per line.==上传一个每行包含一个黑名单条目的文本文件。
Upload an XML file which contains one or more blacklists.==上传一个包含一个或多个黑名单的XML文件。
Export blacklist items to==导出黑名单条目到
Here you can export a blacklist as an XML file. This file will contain additional==在这里您可以将黑名单导出为XML文件。此文件将包含
information about which cases a blacklist is activated for==关于黑名单在哪些情况下被激活的附加信息
"Export list as XML"=="导出名单为XML"
Here you can export a blacklist as a regular text file with one blacklist entry per line==在这里您可以将黑名单导出为每行一个条目的普通文本文件
This file will not contain any additional information==此文件不会包含任何附加信息
"Export list as text"=="导出名单为文本"
#-----------------------------

#File: BlacklistTest_p.html
#---------------------------
Blacklist Test==黑名单测试
Used Blacklist engine:==使用的黑名单引擎:
Test list:==测试列表:
"Test"=="测试"
The tested URL was==被测试的链接
It is blocked for the following cases:==在下列情况下,它会被阻止:
Crawling==抓取
#DHT==DHT
News==新闻
Proxy==代理
Search==搜索
Surftips==冲浪提示
#-----------------------------

#File: Blog.html
#---------------------------
by==由
Comments==评论
>edit==>编辑
>delete==>删除
Edit<==编辑<
previous entries==前一个条目
next entries==下一个条目
new entry==新条目
import XML-File==导入XML文件
export as XML==导出为XML
Comments==评论
Blog-Home==博客主页
Author:==作者:
Subject:==标题:
Text:==正文:
You can use==您可以使用
Yacy-Wiki Code==YaCy百科代码
here.==在这里。
Comments:==评论:
deactivated==已停用
>activated==>已启用
moderated==需审核
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="丢弃"
>Preview==>预览
No changes have been submitted so far==迄今未提交任何更改
Access denied==拒绝访问
To edit or create blog-entries you need to be logged in as Admin or User who has Blog rights.==要编辑或创建博客条目,您需要以管理员或拥有博客权限的用户身份登录。
Are you sure==您确定
that you want to delete==要删除:
Confirm deletion==确认删除
Yes, delete it.==是,删除。
No, leave it.==否,保留。
Import was successful!==导入成功!
Import failed, maybe the supplied file was no valid blog-backup?==导入失败,提供的文件可能不是有效的博客备份?
Please select the XML-file you want to import:==请选择您想导入的XML文件:
#-----------------------------

#File: BlogComments.html
#---------------------------
by==由
Comments==评论
Login==登录
Blog-Home==博客主页
delete==删除
allow==允许
Author:==作者:
Subject:==标题:
#Text:==Text:
You can use==您可以使用
Yacy-Wiki Code==YaCy百科代码
here.==在这里。
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="丢弃"
#-----------------------------

#File: Bookmarks.html
#---------------------------
start autosearch of new bookmarks==开始自动搜索新书签
This starts a serach of new or modified bookmarks since startup==这将开始搜索自启动以来新增或修改的书签
Every peer online will be ask for results.==将向每个在线节点索要结果。
To see a list of all APIs, please visit the API wiki page.==要查看所有API的列表,请访问API wiki页面。
To see a list of all APIs==要查看所有API的列表
YaCy '#[clientname]#': Bookmarks==YaCy '#[clientname]#': 书签
The bookmarks list can also be retrieved as RSS feed. This can also be done when you select a specific tag.==书签列表也可作为RSS订阅获取。当您选择某个特定标签时也可以这样做。
Click the API icon to load the RSS from the current selection.==点击API图标以从当前选择中载入RSS。
Bookmarks==书签
Bookmarks (==书签(
Login==登录
List Bookmarks==列出书签
Add Bookmark==添加书签
Import Bookmarks==导入书签
Import XML Bookmarks==导入XML书签
Import HTML Bookmarks==导入HTML书签
"import"=="导入"
Default Tags:==默认标签:
imported==已导入
Edit Bookmark==编辑书签
#URL:==URL:
Title:==标题:
Description:==描述:
Folder (/folder/subfolder):==目录(/目录/子目录):
Tags (comma separated):==标签(以逗号隔开):
>Public:==>公开:
yes==是
no==否
Bookmark is a newsfeed==书签是新闻订阅源
"create"=="创建"
"edit"=="编辑"
File:==文件:
import as Public==导入为公开
"private bookmark"=="私有书签"
"public bookmark"=="公开书签"
Tagged with==标签为
'Confirm deletion'=='确认删除'
Edit==编辑
Delete==删除
Folders==目录
Bookmark Folder==书签目录
Tags==标签
Bookmark List==书签列表
previous page==上一页
next page==下一页
All==全部
Show==显示
Bookmarks per page==每页书签数
#unsorted==默认排序
#-----------------------------

#File: Collage.html
#---------------------------
Image Collage==图像拼贴
Private Queue==私有队列
Public Queue==公共队列
#-----------------------------

#File: compare_yacy.html
#---------------------------
Websearch Comparison==网页搜索对比
Left Search Engine==左侧搜索引擎
Right Search Engine==右侧搜索引擎
Query==查询
"Compare"=="比较"
Search Result==搜索结果
#-----------------------------

#File: ConfigAccounts_p.html
#---------------------------
Username too short. Username must be >= 4 Characters.==用户名太短。用户名必须>= 4个字符。
Username already used (not allowed).==用户名已被使用(不允许)。
Username too short. Username must be ==用户名太短。用户名必须
User Administration==用户管理
User created:==用户已创建:
User changed:==用户已更改:
Generic error==一般错误
Passwords do not match==密码不匹配
Username too short. Username must be >= 4 Characters==用户名太短,至少为4个字符
No password is set for the administration account==管理员账户未设置密码
Please define a password for the admin account==请为管理员账户设置密码
#Admin Account
Admin Account==管理员账户
Access from localhost without account==本地主机无账户访问
Access to your peer from your own computer (localhost access) is granted with administrator rights. No need to configure an administration account.==从您自己的计算机访问您的节点(localhost访问)将被授予管理员权限。无需配置管理员账户。
This setting is convenient but less secure than using a qualified admin account.==此设置很方便,但比使用正式的管理员账户安全性低。
Please use with care, notably when you browse untrusted and potentially malicious websites while running your YaCy peer on the same computer.==请谨慎使用,尤其是在同一台计算机上运行YaCy节点并浏览不受信任和可能有恶意的网站时。
Access only with qualified account==只允许授权账户访问
This is required if you want a remote access to your peer, but it also hardens access controls on administration operations of your peer.==如果您希望远程访问您的节点,则这是必需的,但它也会加强节点管理操作的访问控制。
Peer User:==节点用户:
New Peer Password:==新节点密码:
Repeat Peer Password:==重复节点密码:
"Define Administrator"=="设置管理员账户"
#Access Rules
>Access Rules<==>访问规则<
Protection of all pages: if set to on==保护所有页面:如果设置为开启
access to all pages need authorization==访问所有页面都需要授权
if off, only pages with "_p" extension are protected==如果关闭,只有扩展名为“_p”的页面受保护
Set Access Rules==设置访问规则
#User Accounts
User Accounts==用户账户
Select user==选择用户
New user==新用户
or goto user==或者转到用户
>account list<==>账户列表<
Edit User==编辑用户
Delete User==删除用户
Edit current user:==编辑当前用户:
Username==用户名
Password==密码
Repeat password==重复密码
First name==名
Last name==姓
Address==地址
Rights==权限
Timelimit==时限
Time used==已用时间
Save User==保存用户
#-----------------------------

#File: ConfigAppearance_p.html
#---------------------------
Appearance and Integration==外观与集成
You can change the appearance of the YaCy interface with skins.==您可以使用皮肤更改YaCy界面的外观。
#You can change the appearance of YaCy with skins==Sie können hier das Erscheinungsbild von YaCy mit Skins ändern
The selected skin and language also affects the appearance of the search page.==所选皮肤和语言也会影响搜索页面的外观。
If you create a search portal with YaCy then you can==如果您使用YaCy创建搜索门户,则可以
change the appearance of the search page here.==在这里更改搜索页面的外观。
#and the default icons and links on the search page can be replaced with you own.==und die standard Grafiken und Links auf der Suchseite durch Ihre eigenen ersetzen.
Skin Selection==皮肤选择
Select one of the default skins, download new skins, or create your own skin.==选择一个默认皮肤,下载新皮肤,或者创建您自己的皮肤。
Current skin==当前皮肤
Available Skins==可用皮肤
"Use"=="使用"
"Delete"=="删除"
>Skin Color Definition<==>皮肤颜色定义<
The generic skin 'generic_pd' can be configured here with custom colors:==可以在这里为通用皮肤'generic_pd'配置自定义颜色:
>Background<==>背景<
>Text<==>文本<
>Legend<==>图例<
>Table Header<==>表头<
>Table Item<==>表格条目1<
>Table Item 2<==>表格条目2<
>Table Bottom<==>表格底部<
>Border Line<==>边界线<
>Sign 'bad'<==>符号'坏'<
>Sign 'good'<==>符号'好'<
>Sign 'other'<==>符号'其他'<
>Search Headline<==>搜索标题<
>Search URL==>搜索地址
hover==悬停
"Set Colors"=="设置颜色"
>Skin Download<==>皮肤下载<
Skins can be installed from download locations==可以从下载位置安装皮肤
Install new skin from URL==从URL安装新皮肤
Use this skin==使用此皮肤
"Install"=="安装"
Make sure that you only download data from trustworthy sources. The new Skin file==请确保您只从可信来源下载数据。如果已存在同名文件,
might overwrite existing data if a file of the same name exists already.==新皮肤文件可能会覆盖现有数据。
>Unable to get URL:==>无法获取链接:
Error saving the skin.==保存皮肤时出错。
#-----------------------------

#File: ConfigBasic.html
#---------------------------
Your port has changed. Please wait 10 seconds.==您的端口已更改。请等待10秒。
Your browser will be redirected to the new location in 5 seconds.==您的浏览器将在5秒内重定向到新的位置。
The peer port was changed successfully.==节点端口修改成功。
Opening a router port is not a YaCy-specific task;==打开路由器端口不是YaCy特有的任务;
However: if you fail to open a router port, you can nevertheless use YaCy with full functionality, the only function that is missing is on the side of the other YaCy users because they cannot see your peer.==但是,如果您无法打开路由器端口,仍然可以使用YaCy的全部功能,唯一缺少的功能在其他YaCy用户一侧,因为他们无法看到您的节点。
Set by system property==由系统属性设置
https enabled==已启用https
Configure your router for YaCy using UPnP:==使用UPnP为YaCy配置您的路由器:
on port==在端口
you can see instruction videos everywhere in the internet, just search for Open Ports on a <our-router-type> Router and add your router type as search term.==您可以在互联网上随处找到说明视频,只需搜索Open Ports on a <our-router-type> Router,并添加您的路由器类型作为搜索词。
However: if you fail to open a router port==但是,如果您无法打开路由器端口
you can see instruction videos everywhere in the internet==您可以在互联网上随处找到说明视频
Access Configuration==访问配置
Basic Configuration==基本配置
Your YaCy Peer needs some basic information to operate properly==您的YaCy节点需要一些基本信息才能正常工作
Select a language for the interface==选择界面语言
汉语/漢語==中文
Use Case: what do you want to do with YaCy:==用途:您想用YaCy做什么:
Community-based web search==基于社区的网络搜索
Join and support the global network 'freeworld', search the web with an uncensored user-owned search network==加入并支持全球网络'freeworld',使用不受审查的用户自有搜索网络搜索网页
Search portal for your own web pages==您自己网页的搜索门户
Your YaCy installation behaves independently from other peers and you define your own web index by starting your own web crawl. This can be used to search your own web pages or to define a topic-oriented search portal.==您的YaCy安装独立于其他节点运行,您可以通过启动自己的网页抓取来定义自己的网页索引。这可用于搜索您自己的网页,或定义面向特定主题的搜索门户。
Files may also be shared with the YaCy server, assign a path here:==文件也可以与YaCy服务器共享,在这里指定路径:
This path can be accessed at ==可以通过以下链接访问此路径
Use that path as crawl start point.==将此路径作为抓取起点。
Intranet Indexing==局域网索引
Create a search portal for your intranet or web pages or your (shared) file system.==为您的局域网、网页或(共享)文件系统创建搜索门户。
URLs may be used with http/https/ftp and a local domain name or IP, or with an URL of the form==可以使用http/https/ftp加本地域名或IP形式的URL,也可以使用如下形式的URL
or smb:==或smb:
Your peer name has not been customized; please set your own peer name==您的节点名称尚未自定义;请设置您自己的节点名称
You may change your peer name==您可以更改您的节点名称
Peer Name:==节点名称:
Your peer cannot be reached from outside==您的节点无法从外部访问
which is not fatal, but would be good for the YaCy network==这不是致命问题,但开放端口对YaCy网络有益
please open your firewall for this port and/or set a virtual server option in your router to allow connections on this port==请在防火墙中开放此端口,和/或在路由器中设置虚拟服务器选项,以允许此端口上的连接
Your peer can be reached by other peers==其他节点可以访问您的节点
Peer Port:==节点端口:
Configure your router for YaCy:==为YaCy配置您的路由器:
Configuration was not successful. This may take a moment.==配置未成功。这可能需要一些时间。
Set Configuration==保存配置
What you should do next:==接下来您该做的:
Your basic configuration is complete! You can now (for example)==您的基本配置已完成!您现在可以(例如)
just <==只需<
start an uncensored search==开始不受审查的搜索
start your own crawl and contribute to the global index, or create your own private web index==开始您自己的抓取并贡献给全球索引,或者创建您自己的私有网页索引
set a personal peer profile (optional settings)==设置个人节点资料(可选设置)
monitor at the network page what the other peers are doing==在网络页面监视其他节点的活动
Your Peer name is a default name; please set an individual peer name.==您的节点名称为默认名称;请设置一个独特的节点名称。
You did not set a user name and/or a password.==您未设置用户名和/或密码。
Some pages are protected by passwords.==一些页面受密码保护。
You should set a password at the Accounts Menu to secure your YaCy peer.::==您应该在账户菜单设置密码,以保护您的YaCy节点。::
You did not open a port in your firewall or your router does not forward the server port to your peer.==您未在防火墙中开放端口,或者您的路由器未将服务器端口转发到您的节点。
This is needed if you want to fully participate in the YaCy network.==如果您想完全参与YaCy网络,此项是必需的。
You can also use your peer without opening it, but this is not recomended.==不开放端口您也可以使用节点,但不推荐这样做。
#-----------------------------

#File: RankingSolr_p.html
#---------------------------
>Solr Ranking Configuration<==>Solr排名配置<
These are ranking attributes for Solr.==这些是Solr的排名属性。
This ranking applies for internal and remote (P2P or shard) Solr access.==此排名适用于内部和远程(P2P或分片)Solr访问。
Select a profile:==选择配置文件:
>Boost Function<==>提升函数<
>Boost Query<==>提升查询<
>Filter Query<==>过滤查询<
>Solr Boosts<==>Solr提升<
"Set Boost Function"=="设置提升函数"
"Set Boost Query"=="设置提升查询"
"Set Filter Query"=="设置过滤查询"
"Set Field Boosts"=="设置字段提升"
"Re-Set to default"=="重置为默认"
#-----------------------------

#File: RankingRWI_p.html
#---------------------------
>RWI Ranking Configuration<==>RWI排名配置<
The document ranking influences the order of the search result entities.==文档排名会影响搜索结果实体的顺序。
A ranking is computed using a number of attributes from the documents that match with the search word.==排名使用与搜索词匹配的文档中的多个属性计算得出。
The attributes are first normalized over all search results and then the normalized attribute is multiplied with the ranking coefficient computed from this list.==首先在所有搜索结果上对属性进行归一化,然后将归一化的属性与根据此列表计算的排名系数相乘。
The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数随下表中给出的排名级别呈指数增长。
If you increase a single value by one, then the strength of the parameter doubles.==如果将单个值增加1,则该参数的影响加倍。
#Pre-Ranking >Pre-Ranking<==>预排名< == #>Appearance In Emphasized Text<==>出现在强调的文本中< #a higher ranking level prefers documents where the search word is emphasized==较高的排名级别更倾向强调搜索词的文档 #>Appearance In URL<==>出现在地址中< #a higher ranking level prefers documents with urls that match the search word==较高的排名级别更倾向具有与搜索词匹配的地址的文档 #Appearance In Author==出现在作者中 #a higher ranking level prefers documents with authors that match the search word==较高的排名级别更倾向与搜索词匹配的作者的文档 #>Appearance In Reference/Anchor Name<==>出现在参考/锚点名称中< #a higher ranking level prefers documents where the search word matches in the description text==较高的排名级别更倾向搜索词在描述文本中匹配的文档 #>Appearance In Tags<==>出现在标签中< #a higher ranking level prefers documents where the search word is part of subject tags==较高的排名级别更喜欢搜索词是主题标签一部分的文档 #>Appearance In Title<==>出现在标题中< #a higher ranking level prefers documents with titles that match the search word==较高的排名级别更喜欢具有与搜索词匹配的标题的文档 #>Authority of Domain<==>域名权威< #a higher ranking level prefers documents from domains with a large number of matching documents==较高的排名级别更喜欢来自具有大量匹配文档的域的文档 #>Category App, Appearance<==>类别:出现在应用中< #a higher ranking level prefers documents with embedded links to applications==更高的排名级别更喜欢带有嵌入式应用程序链接的文档 #>Category Audio Appearance<==>类别:出现在音频中< #a higher ranking level prefers documents with embedded links to audio content==较高的排名级别更喜欢具有嵌入音频内容链接的文档 #>Category Image Appearance<==>类别:出现在图像中< #>Category Video Appearance<==>类别:出现在视频中< #>Category Index Page<==>类别:索引页面< #a higher ranking level prefers 'index of' (directory listings) pages==较高的排名级别更喜欢(目录列表)页面的索引 #>Date<==>日期< #a higher ranking level prefers younger documents.==更高的排名水平更喜欢最新的文件. 
#The age of a document is measured using the date submitted by the remote server as document date==使用远程服务器提交的日期作为文档日期来测量文档的年龄 #>Domain Length<==>域名长度< #a higher ranking level prefers documents with a short domain name==较高的排名级别更倾向具有短域名的文档 #>Hit Count<==>命中数< #a higher ranking level prefers documents with a large number of matchings for the search word(s)==较高的排名级别更倾向具有大量匹配搜索词的文档 There are two ranking stages:==有两个排名阶段: first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==首先使用预排名对所有结果进行排名, 然后对所得列表中的文档再进行二次排名. The two stages are separated because they need statistical information from the result of the pre-ranking.==两个阶段是分开的, 因为二次排名需要预排名结果的统计信息. #Post-Ranking >Post-Ranking<==>二次排名< "Set as Default Ranking"=="保存为默认排名" "Re-Set to Built-In Ranking"=="重置为内置排名" #----------------------------- #File: ConfigHeuristics_p.html #--------------------------- Heuristics Configuration==启发式配置 A heuristic is an 'experience-based technique that help in problem solving, learning and discovery' (wikipedia).==启发式是一种“基于经验的技术,有助于解决问题,学习和发现”(维基百科). search-result: shallow crawl on all displayed search results==搜索结果:浅度爬取所有显示的搜索结果 When a search is made then all displayed result links are crawled with a depth-1 crawl.==当进行搜索时,所有显示的结果链接都会以深度1进行抓取。 "Save"=="保存" "add"=="添加" >new<==>新建< >delete<==>删除< >Comment<==>评论< >Title<==>标题< >Active<==>激活< >copy & paste a example config file<==>复制&粘贴一个示例配置文件< Alternatively you may==或者你可以 To find out more about OpenSearch see==要了解关于OpenSearch的更多信息,请参阅 20 results are taken from remote system and loaded simultanously, parsed and indexed immediately.==20个结果从远程系统中获取并同时加载,立即解析并创建索引.
When using this heuristic, then every new search request line is used for a call to listed opensearch systems.==使用这种启发式时,每个新的搜索请求行都用于调用列出的opensearch系统。 This means: right after the search request every page is loaded and every page that is linked on this page.==这意味着:在搜索请求之后,就开始加载每个页面上的链接。 If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果选中“添加为全局抓取作业”,则要爬网的页面将被添加到全局抓取队列(远程YaCy节点可以抓取要抓取的页面)。 Default is to add the links to the local crawl queue (your peer crawls the linked pages).==默认是将链接添加到本地爬网队列(您的YaCy爬取链接的页面)。 add as global crawl job==添加为全球抓取作业 opensearch load external search result list from active systems below==opensearch从下面的活动系统加载外部搜索结果列表 Available/Active Opensearch System==可用/激活Opensearch系统 Url (format opensearch==Url (格式为opensearch Url template syntax==网址模板语法 "reset to default list"=="重置为默认列表" "discover from index"=="从索引中发现" start background task, depending on index size this may run a long time==开始后台任务,这取决于索引的大小,这可能会运行很长一段时间 With the button "discover from index" you can search within the metadata of your local index (Web Structure Index) to find systems which support the Opensearch specification.==使用“从索引发现”按钮,您可以在本地索引(Web结构索引)的元数据中搜索,以查找支持Opensearch规范的系统。 The task is started in the background. 
It may take some minutes before new entries appear (after refreshing the page).==任务在后台启动。出现新条目可能需要几分钟时间(在刷新页面之后)。 "switch Solr fields on"=="开启Solr字段" ('modify Solr Schema')==('修改Solr模式') located in defaults/heuristicopensearch.conf to the DATA/SETTINGS directory.==位于 defaults/heuristicopensearch.conf,将其复制到 DATA/SETTINGS 目录。 For the discover function the web graph option of the web structure index and the fields target_rel_s, target_protocol_s, target_urlstub_s have to be switched on in the webgraph Solr schema.==对于发现功能,Web结构索引的web图选项和字段 target_rel_s, target_protocol_s, target_urlstub_s 必须在webgraph Solr模式中开启。 20 results are taken from remote system and loaded simultanously==20个结果从远程系统中获取,并同时加载,立即解析并索引 >copy ==>复制&amp;粘贴一个示例配置文件< When using this heuristic==使用这种启发式时,每个新的搜索请求行都用于调用列出的opensearch系统。 For the discover function the web graph option of the web structure index and the fields target_rel_s==对于发现功能,Web结构索引的web图选项和字段 target_rel_s, target_protocol_s, target_urlstub_s 必须在webgraph Solr模式中开启。 start background task==开始后台任务,这取决于索引的大小,这可能会运行很长一段时间 >copy==>复制&amp;粘贴一个示例配置文件< The search heuristics that can be switched on here are techniques that help the discovery of possible search results based on link guessing, in-search crawling and requests to other search engines.==您可以在这里开启启发式搜索, 通过猜测链接, 嵌套搜索和访问其他搜索引擎, 从而找到更多符合您期望的结果. When a search heuristic is used, the resulting links are not used directly as search result but the loaded pages are indexed and stored like other content.==使用启发式搜索时, 结果链接不会被直接用作搜索结果, 而是像其他内容一样对加载的页面进行索引和存储. This ensures that blacklists can be used and that the searched word actually appears on the page that was discovered by the heuristic.==这确保了黑名单可用, 并且搜索词确实出现在启发式发现的页面上.
The success of heuristics are marked with an image==启发式搜索找到的结果会被特定图标标记 heuristic:<name>==启发式:<名称> #(redundant)==(redundant) (new link)==(新链接) below the favicon left from the search result entry:==位于搜索结果条目左侧favicon的下方: The search result was discovered by a heuristic, but the link was already known by YaCy==搜索结果通过启发式搜索发现, 且链接已被YaCy收录 The search result was discovered by a heuristic, not previously known by YaCy==搜索结果通过启发式搜索发现, 且链接未被YaCy收录 'site'-operator: instant shallow crawl=='站点'-操作符: 即时浅抓取 When a search is made using a 'site'-operator (like: 'download site:yacy.net') then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'站点'-操作符搜索时(比如: 'download site:yacy.net'), 该操作符指定的主机会立即被限定在本主机内以深度1抓取. That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==意即: 在搜索请求发出后, 会立即载入该主机的门户页面, 以及此页面上链接到同一主机的每个网页. Because this 'instant crawl' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search (after a small pause of some seconds).==因为'立即抓取'必须遵守爬虫协议(robots.txt)和连续两个页面间的最小访问间隔, 所以这个启发式选项会相当慢, 但是在第二次搜索时(间隔几秒钟后)也许能发现所有想要的结果. #----------------------------- #File: ConfigHTCache_p.html #--------------------------- Hypertext Cache Configuration==超文本缓存配置 The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==超文本缓存存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存. The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是队列式的: 队列满时, 会删除最旧的内容, 从而为新内容腾出空间.
#HTCache Configuration HTCache Configuration==超文本缓存配置 Cache hits==缓存命中 The path where the cache is stored==缓存存储路径 The current size of the cache==当前缓存容量 >#[actualCacheSize]# MB for #[actualCacheDocCount]# files, #[docSizeAverage]# KB / file in average==>#[actualCacheSize]# MB, 共#[actualCacheDocCount]#个文件, 平均每个文件#[docSizeAverage]# KB The maximum size of the cache==缓存最大容量 Compression level==压缩级别 Concurrent access timeout==并发访问超时 milliseconds==毫秒 "Set"=="设置" #Cleanup Cleanup==清除 Cache Deletion==删除缓存 Delete HTTP & FTP Cache==删除HTTP & FTP 缓存 Delete robots.txt Cache==删除爬虫协议(robots.txt)缓存 "Delete"=="删除" #----------------------------- #File: ConfigLanguage_p.html #--------------------------- Simple Editor==简单编辑器 Download Language File==下载语言文件 to add untranslated text==用于添加仍未翻译文本 Supported formats are the internal language file (extension .lng) or XLIFF (extension .xlf) format.==支持的格式是内部语言文件(扩展名.lng)或XLIFF(扩展名.xlf)格式. Language selection==语言选择 You can change the language of the YaCy-webinterface with translation files.==您可以使用翻译文件来改变YaCy操作界面的语言. Current language==当前语言 Author(s) (chronological)==作者(按时间排序) Send additions to maintainer==向维护者提交补丁 Available Languages==可用语言 Install new language from URL==从URL安装新语言 Use this language==使用此语言 "Use"=="使用" "Delete"=="删除" "Install"=="安装" Unable to get URL:==打开链接失败: Error saving the language file.==保存语言文件时发生错误. Make sure that you only download data from trustworthy sources. The new language file==确保您的数据是从可靠源下载. 如果存在相同文件名 might overwrite existing data if a file of the same name exists already.==, 旧文件将被覆盖.
#----------------------------- #File: ConfigLiveSearch.html #--------------------------- Integration of a Search Field for Live Search==搜索栏集成: 即时搜索 A 'Live-Search' input field that reacts as search-as-you-type in a pop-up window can easily be integrated in any web page=='即时搜索'输入栏: 即当您在搜索栏键入关键字时, 会在网页中弹出搜索对话框按钮 This is the same function as can be seen on all pages of the YaCy online-interface (look at the window in the upper right corner)==当您在线使用YaCy时, 您会在搜索页面看到相应功能(页面右上角) Just use the code snippet below to integrate that in your own web pages==将以下代码添加到您的网页中 Please check if the address, as given in the example '#[ip]#:#[port]#' here is correct and replace it with more appropriate values if necessary==请检查形如 '#[ip]#:#[port]#' 的示例地址是否正确, 必要时用更合适的值替换 Code Snippet:==代码: YaCy Portal Search==YaCy门户搜索 "Search"=="搜索" Configuration options and defaults for 'yconf':=='yconf'的配置选项和默认值: Defaults<==默认< url<==URL< is a mandatory property - no default<==是必需属性 - 无默认值< YaCy P2P Web Search==YaCy P2P 网页搜索 Size and position (width | height | position)==尺寸和位置(宽度 | 高度 | 位置) Specifies where the dialog should be displayed. Possible values for position: 'center', 'left', 'right', 'top', 'bottom', or an array containing a coordinate pair (in pixel offset from top left of viewport) or the possible string values (e.g. ['right','top'] for top right corner)==指定对话框位置. position的可用值: 'center', 'left', 'right', 'top', 'bottom', 或者一个包含坐标对的数组(以视口左上角为参考的像素偏移), 或者字符串值(例如 ['right','top'] 对应右上角) Animation effects (show | hide)==动画效果 (显示 | 隐藏) The effect to be used. Possible values: 'blind', 'clip', 'drop', 'explode', 'fold', 'puff', 'slide', 'scale', 'size', 'pulsate'.==可用特效: 'blind', 'clip', 'drop', 'explode', 'fold', 'puff', 'slide', 'scale', 'size', 'pulsate'. Interaction (modal | resizable)==交互 (模态 | 可调整) If modal is set to true, the dialog will have modal behavior; other items on the page will be disabled (i.e. cannot be interacted with).==如果modal设为true, 对话框将具有模态行为; 页面上的其他元素将被禁用(即无法与之交互).
Modal dialogs create an overlay below the dialog but above other page elements.==模态对话框会在对话框下方、其他页面元素上方创建一个覆盖层. If resizable is set to true, the dialog will be resizeable.==如果resizable设为true, 对话框大小将可调整. Load JavaScript load_js==载入页面JavaScript If load_js is set to false, you have to manually load the needed JavaScript on your portal page.==如果load_js设为false, 您必须手动加载门户页面所需的JavaScript. This can help to avoid timing problems or double loading.==这有助于避免时序问题或重复加载. Load Stylesheets load_css==载入页面样式 If load_css is set to false, you have to manually load the needed CSS on your portal page.==如果load_css设为false, 您必须手动加载门户页面所需的CSS文件. #Themes==Themes You can <==您能够< download ready made themes or create==下载现成的主题或者创建 your own custom theme.==您自己的定制主题. Themes are installed into: DATA/HTDOCS/yacy/ui/css/themes/==主题安装在: DATA/HTDOCS/yacy/ui/css/themes/ #----------------------------- #File: ConfigNetwork_p.html #--------------------------- == Network Configuration==网络设置 #Network and Domain Specification Network and Domain Specification==网络与域规范 YaCy can operate a computing grid of YaCy peers or as a stand-alone node.==YaCy可以作为YaCy节点计算网格的一员运行, 也可以作为独立节点运行. To control that all participants within a web indexing domain have access to the same domain,==为了确保网页索引域中的所有参与者都能访问同一个域, this network definition must be equal to all members of the same YaCy network.==此网络定义必须与同一YaCy网络的所有成员相同. >Network Definition<==>网络定义< Enter custom URL...==输入自定义网址... Remote Network Definition URL==远程网络定义地址 Network Nick==网络别名 Long Description==详细描述 Indexing Domain==索引域 #DHT==DHT "Change Network"=="改变网络" #Distributed Computing Network for Domain Distributed Computing Network for Domain==域内分布式计算网络 Enable Peer-to-Peer Mode to participate in the global YaCy network==开启点对点模式从而加入全球YaCy网络 or if you want your own separate search cluster with or without connection to the global network==或者, 不论是否连接全球网络, 您都可以建立自己独立的搜索群 Enable 'Robinson Mode' for a completely independent search engine instance==开启'漂流模式'获得完全独立的搜索引擎实例 without any data exchange between your peer and other peers==本节点不会与其他节点有任何数据交换 #Peer-to-Peer Mode Peer-to-Peer Mode==点对点模式 >Index Distribution==>索引分发 This enables automated, DHT-ruled Index Transmission to other peers==开启遵循DHT规则的自动索引传输到其他节点 >enabled==>开启 disabled during crawling==抓取期间关闭 disabled during indexing==索引期间关闭 >Index Receive==>接收索引 Accept remote Index Transmissions==接受远程索引传输 This works only if you have a senior peer. The DHT-rules do not work without this function==仅当您的节点为高级(senior)节点时有效.
如果未设置此项, DHT规则不生效 >reject==>拒绝 accept transmitted URLs that match your blacklist==接受 与您黑名单匹配的传来的地址 >allow==>允许 deny remote search==拒绝 远程搜索 #Robinson Mode >Robinson Mode==>漂流模式 If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果您的节点运行在'漂流模式', 您能在不与其他节点交换数据的情况下进行搜索 There is no index receive and no index distribution between your peer and any other peer==您不会与其他节点进行索引传递 In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==对于漂流群模式,一样会应答那个群内远端节点的抓取请求 >Private Peer==>私有节点 Your search engine will not contact any other peer, and will reject every request==您的搜索引擎不会与其他节点联系, 并会拒绝每一个外部请求 >Public Peer==>公共节点 You are visible to other peers and contact them to distribute your presence==对于其他节点您是可见的, 可以与他们进行通信以分发你的索引 Your peer does not accept any outside index data, but responds on all remote search requests==您的节点不接受任何外部索引数据, 但是会回应所有外部搜索请求 >Public Cluster==>公共群 Your peer is part of a public cluster within the YaCy network==您的节点属于YaCy网络内的一个公共群 Index data is not distributed, but remote crawl requests are distributed and accepted==索引数据不会被分发, 但是外部的抓取请求会被分发和接受 Search requests are spread over all peers of the cluster, and answered from all peers of the cluster==搜索请求在当前群内的所有节点中传播, 并且这些节点同样会作出回应 List of .yacy or .yacyh - domains of the cluster: (comma-separated)==群内.yacy 或者.yacyh 的域名列表: (以逗号隔开) >Peer Tags==>节点标签 When you allow access from the YaCy network, your data is recognized using keywords==当您允许YaCy网络的访问时, 您的数据会以关键字形式表示 Please describe your search portal with some keywords (comma-separated)==请用关键字描述您的搜索门户 (以逗号隔开) If you leave the field empty, no peer asks your peer. If you fill in a '*', your peer is always asked.==如果此部分留空, 那么您的节点不会被其他节点访问. 如果内容是 '*' 则标示您的节点永远被允许访问. 
"Save"=="保存" #Outgoing communications encryption Outgoing communications encryption==出色的通信加密 Protocol operations encryption==协议操作加密 Prefer HTTPS for outgoing connexions to remote peers==更喜欢以HTTPS作为输出连接到远程节点 When==当 is enabled on remote peers==在远端节点开启时 it should be used to encrypt outgoing communications with them (for operations such as network presence, index transfer, remote crawl==它应该被用来加密与它们的传出通信(操作:网络存在、索引传输、远程爬行 Please note that contrary to strict TLS==请注意,与严格的TLS相反 certificates are not validated against trusted certificate authorities==证书向受信任的证书颁发机构进行验证 thus allowing YaCy peers to use self-signed certificates==从而允许YaCy节点使用自签名证书 Note also that encryption of remote search queries is configured with a dedicated setting in the==另请注意,远程搜索查询加密的专用设置配置请使用 page==页面 No changes were made!==未作出任何改变! Accepted Changes==接受改变 Inapplicable Setting Combination==设置未被应用 #----------------------------- #File: ConfigParser_p.html #--------------------------- Parser Configuration==解析配置 #Content Parser Settings Content Parser Settings==内容解析设置 With this settings you can activate or deactivate parsing of additional content-types based on their MIME-types.==此设置能开启/关闭依据文件类型(MIME)的内容解析. 
For a detailed description of the various MIME-types take a look at==关于MIME的详细描述请参考 If you want to test a specific parser you can do so using the==如果要测试特定的解析器,可以使用 enable/disable Parser==开启/关闭解析器 # --- Parser Names are hard-coded BEGIN --- ##Mime-Type==MIME Typ ##Microsoft Powerpoint Parser==Microsoft Powerpoint Parser #Torrent Metadata Parser==Torrent Metadaten Parser ##HTML Parser==HTML Parser #GNU Zip Compressed Archive Parser==GNU Zip Komprimiertes Archiv Parser ##Adobe Flash Parser==Adobe Flash Parser #Word Document Parser==Word Dokument Parser ##vCard Parser==vCard Parser #Bzip 2 UNIX Compressed File Parser==bzip2 UNIX Komprimierte Datei Parser #OASIS OpenDocument V2 Text Document Parser==OASIS OpenDocument V2 Text Dokument Parser ##Microsoft Excel Parser==Microsoft Excel Parser #ZIP File Parser==ZIP Datei Parser ##Rich Site Summary/Atom Feed Parser==Rich Site Summary / Atom Feed Parser #Comma Separated Value Parser==Comma Separated Value (CSV) Parser ##Microsoft Visio Parser==Microsoft Visio Parser #Tape Archive File Parser==Bandlaufwerk Archiv Datei Parser #7zip Archive Parser==7zip Archiv Parser ##Acrobat Portable Document Parser==Adobe Acrobat Portables Dokument Format Parser ##Rich Text Format Parser==Rich Text Format Parser #Generic Image Parser==Generischer Bild Parser #PostScript Document Parser==PostScript Dokument Parser #Open Office XML Document Parser==Open Office XML Dokument Parser #BMP Image Parser==BMP Bild Parser # --- Parser Names are hard-coded END --- "Submit"=="提交" #PDF Parser Attributes PDF Parser Attributes==PDF解析器属性 This is an experimental setting which makes it possible to split PDF documents into individual index entries==这是一个实验设置,可以将PDF文档拆分为单独的索引条目 Every page will become a single index hit and the url is artifically extended with a post/get attribute value containing the page number as value==每个页面都将成为单个索引匹配,并且使用包含页码作为值的post/get属性值人为扩展url Split PDF==分割PDF Property Name==属性名 #----------------------------- #File: ConfigPortal_p.html 
#--------------------------- Integration of a Search Portal==搜索门户设置 If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果您想将YaCy作为您的网站搜索门户, 您可能需要在这改变搜索页面的图标和信息. The search page may be customized.==搜索页面可以自由定制. You can change the 'corporate identity'-images, the greeting line==您可以改变'企业标志'图像,问候语 and a link to a home page that is reached when the 'corporate identity'-images are clicked.==和一个指向首页的'企业标志'图像链接. To change also colours and styles use the Appearance Servlet for different skins and languages.==若要改变颜色和风格,请到外观选项选择您喜欢的皮肤和语言. Greeting Line<==问候语< URL of Home Page<==主页链接< URL of a Small Corporate Image<==企业形象小图地址< URL of a Large Corporate Image<==企业形象大图地址< Alternative text for Corporate Images<==企业形象代替文字< Enable Search for Everyone==对任何人开启搜索 Search is available for everyone==任何人可用搜索 Only the administator is allowed to search==只有管理员可以搜索 Show Navigation Bar on Search Page==显示导航栏和搜索页 Show Navigation Top-Menu==显示顶级导航菜单 no link to YaCy Menu (admin must navigate to /Status.html manually)==没有到YaCy菜单的链接(管理页面必须手动指向 /Status.html) Show Advanced Search Options on Search Page==在搜索页显示高级搜索选项 Show Advanced Search Options on index.html ==在index.html显示高级搜索选项? do not show Advanced Search==不显示高级搜索 Media Search==媒体搜索 >Extended==>拓展 >Strict==>严格 Control whether media search results are as default strictly limited to indexed documents matching exactly the desired content domain==控制媒体搜索结果是否默认严格限制为与所需内容域完全匹配的索引文档 (images, videos or applications specific)==(图像,视频或具体应用) or extended to pages including such medias (provide generally more results, but eventually less relevant).==或扩展到包括此类媒体的网页(通常提供更多结果,但相关性更弱) Remote results resorting==远程搜索结果排序 >On demand, server-side==>按需,服务器端 Automated, with JavaScript in the browser==自动化,在浏览器中使用JavaScript >for authenticated users only<==>仅限经过身份验证的用户< Remote search encryption==远程搜索加密 Prefer https for search queries on remote peers.==首选https用于远程节点上的搜索查询. 
When SSL/TLS is enabled on remote peers, https should be used to encrypt data exchanged with them when performing peer-to-peer searches.==在远程节点上启用SSL/TLS时,应使用https来加密在执行P2P搜索时与它们交换的数据. Please note that contrary to strict TLS, certificates are not validated against trusted certificate authorities (CA), thus allowing YaCy peers to use self-signed certificates.==请注意,与严格TLS相反,证书不会针对受信任的证书颁发机构(CA)进行验证,因此允许YaCy节点使用自签名证书. >Snippet Fetch Strategy==>摘要提取策略 Speed up search results with this option! (use CACHEONLY or FALSE to switch off verification)==使用此选项加快搜索结果!(使用CACHEONLY或FALSE关闭验证) NOCACHE: no use of web cache, load all snippets online==NOCACHE:不使用网络缓存,在线加载所有网页摘要 IFFRESH: use the cache if the cache exists and is fresh otherwise load online==IFFRESH:如果缓存存在则使用最新的缓存,否则在线加载 IFEXIST: use the cache if the cache exist or load online==IFEXIST:如果缓存存在则使用缓存,或在线加载 If verification fails, delete index reference==如果验证失败,删除索引参考 CACHEONLY: never go online, use all content from cache.==CACHEONLY:永远不上网,内容只来自缓存. If no cache entry exist, consider content nevertheless as available and show result without snippet==如果不存在缓存条目,将内容视为可用,并显示没有摘要的结果 FALSE: no link verification and not snippet generation: all search results are valid without verification==FALSE:没有链接验证且没有摘要生成:所有搜索结果在没有验证情况下有效 Link Verification<==链接验证< Greedy Learning Mode==贪心学习模式 load documents linked in search results,==加载搜索结果中链接的文档, will be deactivated automatically when index size==将自动停用当索引大小 (see==(见 >Heuristics: search-result<==>启发式:搜索结果< to use this permanent)==使得它永久性) Index remote results==索引远程结果 add remote search results to the local index==将远程搜索结果添加到本地索引 ( default=on, it is recommended to enable this option ! )==(默认=开启,建议启用此选项!) 
Limit size of indexed remote results==限制远程索引结果容量 maximum allowed size in kbytes for each remote search result to be added to the local index==每个远程搜索结果添加到本地索引的最大允许大小(以KB为单位) for example, a 1000kbytes limit might be useful if you are running YaCy with a low memory setup==例如,如果运行具有低内存设置的YaCy,则1000KB限制可能很有用 Default Pop-Up Page<==默认弹出页面< Default maximum number of results per page==默认每页最大结果数 Default index.html Page (by forwarder)==默认index.html页面(由转发器指定) Target for Click on Search Results==点击搜索结果时 "_blank" (new window)=="_blank" (新窗口) "_self" (same window)=="_self" (同一窗口) "_parent" (the parent frame of a frameset)=="_parent" (父级窗口) "_top" (top of all frames)=="_top" (置顶) Special Target as Exception for an URL-Pattern==作为URL模式的异常的特殊目标 Pattern:<==模式:< Exclude Hosts==排除的主机 List of hosts that shall be excluded from search results by default==默认情况下将被排除在搜索结果之外的主机列表 but can be included using the site: operator==但可以使用site:操作符包括进来 'About' Column<=='关于'栏< shown in a column alongside==显示在 with the search result page==搜索结果页侧栏 (Headline)==(标题) (Content)==(内容) >You have to==>你必须 >set a remote user/password<==>设置一个远程用户/密码< to change this options.<==来改变设置.< Show Information Links for each Search Result Entry==显示搜索结果的链接信息 >Date&==>日期& >Size&==>大小& >Metadata&==>元数据& >Parser&==>解析器& >Pictures==>图像 >Status Page==>状态页面 >Search Front Page==>搜索首页 >Search Page (small header)==>搜索页面(小标题) >Interactive Search Page==>交互搜索页面 "searchresult" (a default custom page name for search results)=="searchresult" (搜索结果的默认自定义页面名称) "Change Search Page"=="改变搜索页" "Set to Default Values"=="设为默认值" The search page can be integrated in your own web pages with an iframe. Simply use the following code:==使用以下代码,将搜索页集成在你的网站中: This would look like:==示例: For a search page with a small header, use this code:==对于一个小标题的搜索页面, 可使用以下代码: A third option is the interactive search.
Use this code:==交互搜索代码: #----------------------------- #File: ConfigProfile_p.html #--------------------------- Your Personal Profile==您的个人资料 You can create a personal profile here, which can be seen by other YaCy-members==您可以在这创建个人资料, 而且对其他YaCy节点可见 or in the public using a FOAF RDF file.==或者通过FOAF RDF文件公开展示. >Name<==>名字< Nick Name==昵称 Homepage (appears on every Supporter Page as long as your peer is online)==首页(显示在每个支持者页面中, 前提是您的节点在线). eMail==邮箱 #ICQ==ICQ #Jabber==Jabber #Yahoo!==Yahoo! #MSN==MSN #Skype==Skype Comment==注释 "Save"=="保存" You can use <==在这里您可以用< > here.==>. #----------------------------- #File: ConfigProperties_p.html #--------------------------- Advanced Config==高级设置 Here are all configuration options from YaCy.==这里显示YaCy所有设置. You can change anything, but some options need a restart, and some options can crash YaCy, if wrong values are used.==您可以改变任何设置, 但有些选项需要重启才能生效, 而有些选项在设置错误时甚至会导致YaCy崩溃. For explanation please look into defaults/yacy.init==详细内容请参考defaults/yacy.init "Save"=="保存" "Clear"=="清除" #----------------------------- #File: ConfigRobotsTxt_p.html #--------------------------- >Exclude Web-Spiders<==>拒绝网页爬虫< Here you can set up a robots.txt for all webcrawlers that try to access the webinterface of your peer.==在这里您可以创建一个robots.txt, 以阻止试图访问您节点网络界面的网络爬虫. >is a volunteer agreement most search-engines==>是一个大多数搜索引擎 (including YaCy) follow==(包括YaCy)都遵守的自愿协议 It disallows crawlers to access webpages or even entire domains.==它会阻止网络爬虫访问网页甚至是整个域. Deny access to==禁止访问以下页面 Entire Peer==整个节点 Status page==状态页面 Network pages==网络页面 Surftips==冲浪建议 News pages==新闻页面 Blog==博客 Wiki==维基 Public bookmarks==公共书签 Home Page==首页 File Share==共享文件 >Impressum<==>法律声明< "Save restrictions"=="保存限制" #----------------------------- #File: ConfigSearchBox.html #--------------------------- Integration of a Search Box==搜索框设置 We give information how to integrate a search box on any web page that==如何将一个搜索框集成到任意 calls the normal YaCy search window.==调用YaCy搜索的页面.
Simply use the following code:==使用以下代码: MySearch==我的搜索 "Search"=="搜索" This would look like:==示例: This does not use a style sheet file to make the integration into another web page with a different style sheet easier.==这里没有使用样式表文件, 以便更容易集成到具有不同样式表的其他网页中. You would need to change the following items:==您可能需要修改以下条目: Replace the given colors #eeeeee (box background) and #cccccc (box border)==替换给定的颜色 #eeeeee (框背景) 和 #cccccc (框边框) Replace the word "MySearch" with your own message==用您想显示的信息替换"我的搜索" #----------------------------- #File: ConfigUpdate_p.html #--------------------------- >System Update<==>系统更新< >changelog<==>更新日志< > and <==>和< > RSS feed<==> RSS订阅< (unsigned)==(未签名) (signed)==(已签名) add the following line to==将以下行添加到 Manual System Update==手动系统升级 Current installed Release==当前安装版本 Available Releases==可用版本 "Download Release"=="下载更新" "Check for new Release"=="检查更新" Downloaded Releases==已下载版本 No downloaded releases available for deployment.==无可用于部署的已下载版本. no automated installation on development environments==开发环境中不进行自动安装 "Install Release"=="安装更新" "Delete Release"=="删除更新" Automatic Update==自动更新 check for new releases, download if available and restart with downloaded release==检查新版本, 如果有则下载, 并以下载的版本重启 "Check + Download + Install Release Now"=="检查 + 下载 + 现在安装" Download of release #[downloadedRelease]# finished. Restart Initiated.==版本 #[downloadedRelease]# 下载完成. 已开始重启. No more recent release found.==未找到更新版本. Release will be installed. Please wait.==即将安装更新. 请稍等. You installed YaCy with a package manager.==您是使用包管理器安装的YaCy. To update YaCy, use the package manager:==请用包管理器升级YaCy: Omitting update because this is a development environment.==因当前为开发环境, 忽略升级. Omitting update because download of release #[downloadedRelease]# failed.==因下载 #[downloadedRelease]# 失败, 忽略升级. Automated System Update==系统自动升级 manual update==手动升级 no automatic look-up, updates can be made manually using this interface (see options above)==无自动检查, 可通过此界面手动更新(参见上述选项).
automatic update==自动更新 updates are made within fixed cycles:==每隔一定时间自动检查更新: Time between lookup==检查周期 hours==小时 Release blacklist==版本黑名单 regex on release number strings==版本号正则表达式 Release type==版本类型 only main releases==仅主要版本 any release including developer releases==任何版本, 包括开发者版本 Signed autoupdate:==签名自动更新: only accept signed files==仅接受签名文件 "Submit"=="提交" Accepted Changes.==已接受改变. System Update Statistics==系统升级状况 Last System Lookup==上次检查更新 never==从未 Last Release Download==上次下载更新 Last Deploy==上次应用更新 #----------------------------- #File: Connections_p.html #--------------------------- Server Connection Tracking==服务器连接跟踪 Up-Bytes==上传字节 Showing #[numActiveRunning]# active connections from a max. of #[numMax]# allowed incoming connections==正在显示 #[numActiveRunning]# 个活动连接, 最大允许 #[numMax]# 个传入连接 Connection Tracking==连接跟踪 Incoming Connections==传入连接 Showing #[numActiveRunning]# active, #[numActivePending]# pending connections from a max. of #[numMax]# allowed incoming connections.==显示 #[numActiveRunning]# 个活动, #[numActivePending]# 个挂起连接, 最大允许 #[numMax]# 个传入连接. Protocol==协议 Duration==持续时间 Source IP[:Port]==来源IP[:端口] Dest. IP[:Port]==目标IP[:端口] Command==命令 Used==已用 Close==关闭 Waiting for new request nr.==等待新请求编号. Outgoing Connections==传出连接 Showing #[clientActive]# pooled outgoing connections used as:==显示 #[clientActive]# 个池化传出连接, 用作: Duration==持续时间 #ID==ID #----------------------------- #File: CookieMonitorIncoming_p.html #--------------------------- Incoming Cookies Monitor==传入Cookies监视 Cookie Monitor: Incoming Cookies==Cookie监视: 传入Cookies This is a list of Cookies that a web server has sent to clients of the YaCy Proxy:==Web服务器已向YaCy代理客户端发送的Cookies列表: Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 个Cookies.
Sending Host==发送主机 Date==日期 Receiving Client==接收客户端 >Cookie<==>Cookie< Cookie==Cookie "Enable Cookie Monitoring"=="开启Cookie监视" "Disable Cookie Monitoring"=="关闭Cookie监视" #----------------------------- #File: CookieMonitorOutgoing_p.html #--------------------------- Outgoing Cookies Monitor==传出Cookie监视 Cookie Monitor: Outgoing Cookies==Cookie监视: 传出Cookie This is a list of cookies that browsers using the YaCy proxy sent to webservers:==使用YaCy代理的浏览器发送给Web服务器的Cookie列表: Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 个Cookies. Receiving Host==接收主机 Date==日期 Sending Client==发送客户端 >Cookie<==>Cookie< Cookie==Cookie "Enable Cookie Monitoring"=="开启Cookie监视" "Disable Cookie Monitoring"=="关闭Cookie监视" #----------------------------- #File: CrawlProfileEditor_p.html #--------------------------- Crawl Profile Editor==抓取配置编辑器 >Crawl Profile Editor<==>抓取配置编辑器< >Crawler Steering<==>爬虫向导< >Crawl Scheduler<==>抓取调度器< >Scheduled Crawls can be modified in this table<==>可在下表中修改已安排的抓取< Crawl profiles hold information about a crawl process that is currently ongoing.==抓取配置保存着当前正在进行的抓取进程的信息. #Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==Crawl Profile enthalten Informationen über eine spezifische URL, welche intern genutzt wird, um nachzuvollziehen, wozu der Crawl gehört. #The profiles for remote crawls, indexing via proxy and snippet fetches==Die Profile für Remote Crawl, Indexierung per Proxy und Snippet Abrufe #cannot be altered here as they are hard-coded.==können nicht verändert werden, weil sie "hard-coded" sind.
#Crawl Profile List Crawl Profile List==抓取配置列表 Crawl Thread<==抓取线程< >Collections<==>收集< >Status<==>状态< >Depth<==>深度< Must Match<==必须匹配< >Must Not Match<==>必须不匹配< >Recrawl if older than<==>重新抓取, 如果老于< >Domain Counter Content<==>域计数器内容< >Max Page Per Domain<==>每个域的最大页面数< >Accept==>接受 URLs<==地址< >Fill Proxy Cache<==>填充代理缓存< >Local Text Indexing<==>本地文本索引< >Local Media Indexing<==>本地媒体索引< >Remote Indexing<==>远程索引< MaxAge<==最长寿命< no::yes==否::是 Running==运行中 "Terminate"=="终止" Finished==已完成 "Delete"=="删除" "Delete finished crawls"=="删除已完成的抓取进程" Select the profile to edit==选择要修改的配置 "Edit profile"=="修改配置" An error occurred during editing the crawl profile:==修改抓取配置时发生错误: Edit Profile==修改配置 "Submit changes"=="提交改变" #----------------------------- #File: CrawlResults.html #--------------------------- >Collection==>收集 "del & blacklist"=="删除并拉黑" "del =="删除 Crawl Results<==抓取结果< Overview==概况 Receipts==回执 Queries==请求 DHT Transfer==DHT转移 Proxy Use==代理使用 Local Crawling==本地抓取 Global Crawling==全球抓取 Surrogate Import==导入备份 >Crawl Results Overview<==>抓取结果概述< These are monitoring pages for the different indexing queues.==这些是不同索引队列的监视页面. YaCy knows 5 different ways to acquire web indexes. The details of these processes (1-5) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 进程(1-5)的细节在子菜单中显示 above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息应该视为隐私, so you need to log-in with your administration password.==所以您需要用管理员密码登录查看. Case (6) is a monitor of the local receipt-generator, the opposed case of (1). It contains also an indexing result monitor but is not considered private==事件(6)是本地回执生成器的监视, 与事件(1)相反. 它也包含索引结果监视, 但不属于隐私 since it shows crawl requests from other peers.==因为它显示的是来自其他节点的抓取请求. Case (7) occurs if surrogate files are imported==如果导入了备份文件, 则为事件(7) The image above illustrates the data flow initiated by web index acquisition.==上图为网页索引获取的数据流.
Some processes occur double to document the complex index migration structure.==某些进程会重复出现, 以呈现复杂的索引迁移结构. (1) Results of Remote Crawl Receipts==(1) 远程抓取回执结果 This is the list of web pages that this peer initiated to crawl,==这是由本节点发起抓取的网页列表, but had been crawled by other peers.==但实际由其他节点完成抓取. This is the 'mirror'-case of process (6).==这是进程(6)的'镜像'实例. Use Case:==用法: You get entries here==您会在此看到条目 if you start a local crawl on the 'Index Creation'-Page==如果您在'索引创建'页面上启动本地抓取 and check the==并勾选了 'Do Remote Indexing'-flag=='执行远程索引'标记 #Every page that a remote peer indexes upon this peer's request is reported back and can be monitored here==远程节点根据此节点的请求编制索引的每个页面都会被报告回来,并且可以在此处进行监视 (2) Results for Result of Search Queries==(2) 搜索查询结果报告页 This index transfer was initiated by your peer by doing a search query.==此索引传输由您的节点在执行搜索查询时发起. The index was crawled and contributed by other peers.==此索引由其他节点抓取并贡献. This list fills up if you do a search query on the 'Search Page'==当您在'搜索页面'进行搜索时, 此表会被填充. >Domain<==>域名< >URLs<==>地址数< (3) Results for Index Transfer==(3) 索引转移结果 The url fetch was initiated and executed by other peers.==此地址抓取由其他节点发起并执行. These links here have been transmitted to you because your peer is the most appropriate for storage according to==这些链接已经被传递给您, 因为根据全球分布哈希表的计算, the logic of the Global Distributed Hash Table.==您的节点是最适合存储它们的. This list may fill if you check the 'Index Receive'-flag on the 'Index Control' page==当您选中了在'索引控制'里的'接收索引'时, 这个表会被填充. (4) Results for Proxy Indexing==(4) 代理索引结果 These web pages had been indexed as result of your proxy usage.==以下是由于使用代理而索引的网页. No personal or protected page is indexed==不包括私有或受保护网页 such pages are detected by Cookie-Use or POST-Parameters (either in URL or as HTTP protocol)==通过检测Cookie使用或POST参数(在链接中或HTTP协议中)能够识别出此类网页, and automatically excluded from indexing.==并在索引时自动排除. You must use YaCy as proxy to fill up this table.==必须把YaCy用作代理才能填充此表格. 
Set the proxy settings of your browser to the same port as given==将浏览器代理端口设置为 on the 'Settings'-page in the 'Proxy and Administration Port' field.=='设置'页面'代理和管理端口'选项中的端口. (5) Results for Local Crawling==(5)本地抓取结果 These web pages had been crawled by your own crawl task.==您的爬虫任务已经抓取了这些网页. start a crawl by setting a crawl start point on the 'Index Create' page.==在'索引创建'页面设置抓取起始点以开始抓取. (6) Results for Global Crawling==(6)全球抓取结果 These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已由您的节点索引, 但抓取是由远端节点发起的. This is the 'mirror'-case of process (1).==这是进程(1)的'镜像'实例. The stack is empty.==栈为空. Statistics about #[domains]# domains in this stack:==此栈显示有关 #[domains]# 域的数据: This list may fill if you check the 'Accept remote crawling requests'-flag on the==如果您选中'接受远程抓取请求'标记, 此列表会被填充, 位于 page<==页面< (7) Results from surrogates import==(7) 替代文件导入结果 These records had been imported from surrogate files in DATA/SURROGATES/in==这些记录从 DATA/SURROGATES/in 中的替代文件中导入 place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==将包含Dublin Core元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式 (i.e. MediaWiki import, OAI-PMH retrieval)==(例如 MediaWiki 导入, OAI-PMH 检索) "delete all"=="全部删除" Showing all #[all]# entries in this stack.==显示栈中所有 #[all]# 条目. Showing latest #[count]# lines from a stack of #[all]# entries.==显示栈中 #[all]# 条目的最近 #[count]# 行. 
"clear list"=="清除列表" #Initiator==Initiator >Executor==>执行者 >Modified==>已修改 >Words==>单词 >Title==>标题 #URL==URL "delete"=="删除" #----------------------------- #File: CrawlStartExpert.html #--------------------------- == Expert Crawl Start==抓取高级设置 Start Crawling Job:==开始抓取任务: You can define URLs as start points for Web page crawling and start crawling here==您可以将指定地址作为抓取网页的起始点, 并在此开始抓取 "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links=="抓取"意即YaCy会下载指定的网站, 提取其中的所有链接, 然后下载这些链接指向的内容 This is repeated as long as specified under "Crawling Depth"==它将一直重复, 直到满足指定的"抓取深度" A crawl can also be started using wget and the==抓取也可以使用wget和 for this web page==用于此网页的 #Crawl Job >Crawl Job<==>抓取工作< A Crawl Job consist of one or more start point, crawl limitations and document freshness rules==抓取作业由一个或多个起始点、抓取限制和文档新鲜度规则组成 #Start Point >Start Point==>起始点 Define the start-url(s) here.==在这儿确定起始地址. You can submit more than one URL, each line one URL please.==你可以提交多个地址,请一行一个地址. Each of these URLs are the root for a crawl start, existing start URLs are always re-loaded.==每个地址都是一次抓取的起点,已有的起始地址总会被重新加载. Other already visited URLs are sorted out as "double", if they are not allowed using the re-crawl option.==对已经访问过的地址,如果它们不允许被重新抓取,则被标记为'重复'. One Start URL or a list of URLs:==一个起始地址或地址列表: (must start with==(必须开头为 >From Link-List of URL<==>来自地址的链接列表< From Sitemap==来自站点地图 From File (enter a path==来自文件(输入路径 within your local file system)<==位于你本地文件系统内)< #Crawler Filter >Crawler Filter==>爬虫过滤器 These are limitations on the crawl stacker. The filters will be applied before a web page is loaded==这些是对抓取堆栈器的限制. 过滤器将在加载网页之前应用 This defines how often the Crawler will follow links (of links..) embedded in websites.==此选项为爬虫跟踪网站嵌入链接的深度. 0 means that only the page you enter under "Starting Point" will be added==设置为0代表仅将"起始点" to the index. 2-4 is good for normal indexing. 
Values over 8 are not useful, since a depth-8 crawl will==添加到索引.建议设置为2-4.由于设置为8会索引将近256亿个页面,所以不建议设置大于8的值, index approximately 25.600.000.000 pages, maybe this is the whole WWW.==这可能是整个互联网的内容. >Crawling Depth<==>抓取深度< also all linked non-parsable documents==还包括所有链接的不可解析文档 >Unlimited crawl depth for URLs matching with<==>对匹配以下规则的网址不限制抓取深度< >Maximum Pages per Domain<==>每个域名最大页面数< Use:==使用: Page-Count==页面数 You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==使用此选项,您可以限制将从单个域名中抓取和索引的页面数. You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域名. the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域名会被自动忽略. >misc. Constraints<==>其余约束< A questionmark is usually a hint for a dynamic page.==动态页面常用问号标记. URLs pointing to dynamic content should usually not be crawled.==通常不会抓取指向动态页面的地址. However, there are sometimes web pages with static content that==然而,也有些含有静态内容的页面用问号标记. is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果您不确定,不要选中此项以防抓取时陷入死循环. Accept URLs with query-part ('?')==接受具有查询格式('?')的地址 >Load Filter on URLs<==>对地址加载过滤器< > must-match<==>必须匹配< The filter is a <==这个过滤器是一个< >regular expression<==>正则表达式< Example: to allow only urls that contain the word 'science', set the must-match filter to '.*science.*'.==例如:只允许包含'science'的地址,就在'必须匹配过滤器'中输入'.*science.*'. You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用自动域名限制来完全抓取单个域名. Attention: you can test the functionality of your regular expressions using the==注意:您可以测试正则表达式的功能,使用 >Regular Expression Tester<==>正则表达式测试器< within YaCy.==在YaCy中. 
Restrict to start domain==限制起始域 Restrict to sub-path==限制子路径 Use filter==使用过滤器 (must not be empty)==(不能为空) > must-not-match<==>必须排除< >Load Filter on IPs<==>对IP加载过滤器< >Must-Match List for Country Codes<==>国家代码必须匹配列表< Crawls can be restricted to specific countries.==可以限制只在某个具体国家抓取. This uses the country code that can be computed from==这会使用国家代码, 它来自 the IP of the server that hosts the page.==该页面所在主机的IP. The filter is not a regular expressions but a list of country codes,==这个过滤器不是正则表达式,而是 separated by comma.==由逗号隔开的国家代码列表. >no country code restriction<==>没有国家代码限制< #Document Filter >Document Filter==>文档过滤器 These are limitations on index feeder.==这些是对索引供给器的限制. The filters will be applied after a web page was loaded.==加载网页后将应用过滤器. that must not match with the URLs to allow that the content of the url is indexed.==该过滤器必须与地址不匹配, 地址中的内容才会被索引. >Filter on URLs<==>地址过滤器< >Filter on Content of Document<==>文档内容过滤器< >(all visible text, including camel-case-tokenized url and title)<==>(所有可见文本,包括驼峰式分词的网址和标题)< >Filter on Document Media Type (aka MIME type)<==>文档媒体类型过滤器(又称MIME类型)< >Solr query filter on any active <==>Solr查询过滤器对任何有效的< >indexed<==>索引的< > field(s)<==>域< #Content Filter >Content Filter==>内容过滤器 These are limitations on parts of a document.==这些是对文档各部分的限制. The filter will be applied after a web page was loaded.==加载网页后将应用过滤器. >Filter div or nav class names<==>div或nav类名过滤器< >set of CSS class names<==>CSS类名集合< #comma-separated list of
or

==建议 Surftips are switched off==建议已关闭 title="bookmark"==title="书签" alt="Add to bookmarks"==alt="添加到书签" title="positive vote"==title="好评" alt="Give positive vote"==alt="给予好评" title="negative vote"==title="差评" alt="Give negative vote"==alt="给予差评" YaCy Supporters<==YaCy参与者< >a list of home pages of yacy users<==>YaCy用户的主页列表< provided by YaCy peers using public bookmarks, link votes and crawl start points==由使用公共书签, 网址评价和抓取起始点的节点提供 "Please enter a comment to your link recommendation. (Your Vote is also considered without a comment.)"=="输入推荐链接备注. (可留空.)" "authentication required"=="需要认证" Hide surftips for users without autorization==隐藏非认证用户的建议功能 Show surftips to everyone==所有人均可使用建议 #----------------------------- #File: Table_API_p.html #--------------------------- >Process Scheduler<==>进程调度器< This table shows actions that had been issued on the YaCy interface==此表显示已在YaCy界面上发出的动作, to change the configuration or to request crawl actions.==用于更改配置或请求抓取动作. These recorded actions can be used to repeat specific actions and to send them==这些已记录的动作可用于重复执行特定动作, 并将其发送 to a scheduler for a periodic execution.==给定时器以周期性执行. : Peer Steering==: 节点向导 Steering of API Actions<==API动作向导< #Recorded Actions >Recorded Actions<==>已记录动作< >Type==>类型 >Comment==>注释 >Call<==>调用< >Count<==>次数< "next page"=="下一页" "previous page"=="上一页" "next page"=="下一页" "previous page"=="上一页" of #[of]#== 共 #[of]# Recording==记录时间 Last Exec==上次执行 Next Exec==下次执行 >Date==>日期 >Event Trigger<==>事件触发器< >Scheduler<==>定时器< #>URL<==>URL >no repetition<==>不重复< >activate scheduler<==>激活定时器< "Execute Selected Actions"=="执行选中动作" "Delete Selected Actions"=="删除选中动作" "Delete all Actions which had been created before "=="删除在此之前创建的全部动作" >Result of API execution==>API执行结果 #>Status<==>Status> #>URL<==>URL< >minutes<==>分钟< >hours<==>小时< >days<==>天< Scheduled actions are executed after the next execution date has arrived within a time frame of #[tfminutes]# minutes.==已安排的动作将在下次执行日期到达后的 #[tfminutes]# 分钟时间窗内执行. 
#----------------------------- #File: Table_RobotsTxt_p.html #--------------------------- API wiki page==API百科页面 To see a list of all APIs, please visit the==要查看所有API的列表,请访问 To see a list of all APIs==要查看所有API的列表 Table Viewer==表格查看器 The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML. Click the API icon to see the XML.==点击API图标查看XML. To see a list of all APIs, please visit the API wiki page.==查看所有API, 请访问API Wiki. >robots.txt table<==>爬虫协议列表< #----------------------------- ### This Tables section is removed in current SVN Versions #File: Tables_p.html #--------------------------- Table Viewer==表格查看器 entries==条目 Table Administration==表格管理 Table Selection==选择表格 Select Table:==选择表格: #"Show Table"=="Zeige Tabelle" show max.==显示最多. >all<==>全部< entries,==个条目, search rows for==搜索内容 "Search"=="搜索" Table Editor: showing table==表格编辑器: 显示表格 #PK==Primärschlüssel "Edit Selected Row"=="编辑选中行" "Add a new Row"=="添加新行" "Delete Selected Rows"=="删除选中行" "Delete Table"=="删除表格" Row Editor==行编辑器 Primary Key==主键 "Commit"=="提交" #----------------------------- #File: Table_YMark_p.html #--------------------------- Table Viewer==表格查看器 YMark Table Administration==YMark表格管理 Table Editor: showing table==表格编辑器: 显示表格 "Edit Selected Row"=="编辑选中行" "Add a new Row"=="添加新行" "Delete Selected Rows"=="删除选中行" "Delete Table"=="删除表格" "Rebuild Index"=="重建索引" Primary Key==主键 >Row Editor<==>行编辑器< "Commit"=="提交" Table Selection==选择表格 Select Table:==选择表格: show max. 
entries==显示最多条目 >all<==>所有< Display columns:==显示列: "load"=="载入" Search/Filter Table==搜索/过滤表格 search rows for==搜索 "Search"=="搜索" #>Tags<==>Tags< >select a tag<==>选择标签< >Folders<==>目录< >select a folder<==>选择目录< >Import Bookmarks<==>导入书签< #Importer:==Importer: #>XBEL Importer<==>XBEL Importer< #>Netscape HTML Importer<==>Netscape HTML Importer< "import"=="导入" #----------------------------- #File: terminal_p.html #--------------------------- YaCy Peer Live Monitoring Terminal==YaCy节点实时监控终端 YaCy System Terminal Monitor==YaCy系统终端监视器 #YaCy System Monitor==YaCy System Monitor Search Form==搜索页面 Crawl Start==开始抓取 Status Page==状态页面 Confirm Shutdown==确认关闭 ><Shutdown==><关闭程序 Event Terminal==事件终端 Image Terminal==图形终端 Domain Monitor==域监视器 "Loading Processing software..."=="正在载入Processing软件..." This browser does not have a Java Plug-in.==此浏览器没有安装Java插件. Get the latest Java Plug-in here.==在此获取最新的Java插件. Resource Monitor==资源监视器 Network Monitor==网络监视器 #----------------------------- #File: Threaddump_p.html #--------------------------- YaCy Debugging: Thread Dump==YaCy调试: 线程Dump Threaddump<==线程Dump< "Single Threaddump"=="单个线程Dump" "Multiple Dump Statistic"=="多重Dump统计" #"create Threaddump"=="Threaddump erstellen" #----------------------------- #File: User.html #--------------------------- User Page==用户页面 You are not logged in.
==当前未登录.
Username:==用户名: Password: Get URL Viewer<==>获取地址查看器< >URL Metadata<==>地址元数据< URL==地址 #Hash==Hash Word Count==字数 Description==描述 Size==大小 View as==查看形式 #Original==Original Plain Text==文本 Parsed Text==解析文本 Parsed Sentences==解析句子 Parsed Tokens/Words==解析令牌/字 Link List==链接列表 "Show"=="显示" Unable to find URL Entry in DB==无法找到数据库中的链接. Invalid URL==无效链接 Unable to download resource content.==无法下载资源内容. Unable to parse resource content.==无法解析资源内容. Unsupported protocol.==不支持的协议. >Original Content from Web<==>网页原始内容< Parsed Content==解析内容 >Original from Web<==>网页原始内容< >Original from Cache<==>缓存原始内容< >Parsed Tokens<==>解析令牌< #----------------------------- #File: ViewLog_p.html #--------------------------- Server Log==服务器日志 Lines==行 reversed order==倒序排列 "refresh"=="刷新" #----------------------------- #File: ViewProfile.html #--------------------------- Local Peer Profile:==本地节点资料: Remote Peer Profile==远端节点资料 Wrong access of this page==页面权限错误 The requested peer is unknown or a potential peer.==所请求节点未知或者是潜在节点. The profile can't be fetched.==无法获取资料. The peer==节点 is not online.==当前不在线. This is the Profile of==资料 #Name==Name #Nick Name==Nick Name #Homepage==Homepage #eMail==eMail #ICQ==ICQ #Jabber==Jabber #Yahoo!==Yahoo! #MSN==MSN #Skype==Skype Comment==注释 View this profile as==查看方式 > or==> 或者 #vCard==vCard #----------------------------- #File: Crawler_p.html #--------------------------- Crawler Queues==爬虫队列 RWI RAM (Word Cache)==RWI RAM (关键字缓存) Error with profile management. Please stop YaCy, delete the file DATA/PLASMADB/crawlProfiles0.db==资料管理出错. 请关闭YaCy, 并删除文件 DATA/PLASMADB/crawlProfiles0.db and restart.==后重启. Error:==错误: Application not yet initialized. Sorry. Please wait some seconds and repeat==抱歉, 程序未初始化, 请稍候并重复 ERROR: Crawl filter==错误: 抓取过滤 does not match with==不匹配 crawl root==抓取根 Please try again with different==请使用不同的过滤字再试一次 filter. ::==. :: Crawling of==在抓取 failed. Reason:==失败. 原因: Error with URL input==网址输入错误 Error with file input==文件输入错误 started.==已开始. 
pause reason: resource observer: not enough memory space==暂停原因: 资源观察器:没有足够内存空间 Please wait some seconds,==请稍等, it may take some seconds until the first result appears there.==在出现第一个搜索结果前需要几秒钟时间. If you crawl any un-wanted pages, you can delete them here.==如果您抓取了不需要的页面, 可以在此删除它们. Crawl Queue:==抓取队列: Running Crawls==运行中的抓取 >Name<==>名字< >Status<==>状态< >Crawled Pages<==>抓取到的页面< Queue==队列 Profile==配置文件 Initiator==发起者 Depth==深度 Modified Date==修改日期 Anchor Name==锚点名 #URL==URL Delete==删除 Next update in==下次更新将在 /> seconds.==/> 秒后. See a access timing here==在此查看访问时序 unlimited==无限制 #Crawler >Crawler<==>爬虫< Queues==队列 Queue==队列 >Size==>大小 Local Crawler==本地爬虫 Limit Crawler==受限爬虫 Remote Crawler==远端爬虫 No-Load Crawler==空载爬虫 Loader==加载器 Index Size==索引大小 Database==数据库 Entries==条目数 #Seg-
ments==分段
>Documents<==>文档< >Webgraph Edges<==>网页图边< >Citations<==>引用< (reverse link index)==(反向链接索引) (P2P Chunks)==(P2P块) >Progress<==>进度< Indicator==指示器 Level==级别 Speed / PPM==速度/PPM (Pages Per Minute)==(页/分钟) Crawler PPM==爬虫PPM Postprocessing Progress==后处理进度 #idle==空闲 Traffic (Crawler)==流量 (爬虫) >Load<==>负荷< "minimum"=="最小" "custom"=="自定义" "maximum"=="最大" Pages (URLs)==页面(链接) RWIs (Words)==RWIs (词) #----------------------------- #File: HostBrowser.html #--------------------------- >Index Browser<==>索引浏览器< Browse the index of==浏览索引,共 documents.==个文档. Enter a host or an URL for a file list or view a list of==输入主机或地址以查看文件列表, 或者查看以下列表: >all hosts<==>全部主机< >only hosts with urls pending in the crawler<==>仅含有爬虫待处理地址的主机< >only with load errors<==>仅有加载错误的主机< >Host/URL<==>主机/地址< >Browse Host<==>浏览主机< >Host List<==>主机列表< >Count Colors:<==>颜色表示:< Documents without Errors<==没有错误的文档< Pending in Crawler<==在爬虫中待处理< Crawler Excludes<==爬虫排除< Load Errors<==加载错误< #Administration Options==管理选项 #"Delete Load Errors"=="删除加载错误项" #----------------------------- #File: WatchWebStructure_p.html #--------------------------- >Text<==>文本< >Pivot Dot<==>枢轴点< "WebStructurePicture"=="网页结构图" >Other Dot<==>其他点< API wiki page==API 百科页面 To see a list of all APIs, please visit the==要查看所有API的列表, 请访问 >Host List<==>主机列表< To see a list of all APIs==要查看所有API的列表 The data that is visualized here can also be retrieved in a XML file, which lists the reference relation between the domains.==此页面数据显示域之间的关联关系, 能以XML文件形式查看. With a GET-property 'about' you get only reference relations about the host that you give in the argument field for 'about'.==使用GET属性'about'仅能获得带有'about'参数的域关联关系. With a GET-property 'latest' you get a list of references that had been computed during the current run-time of YaCy, and with each next call only an update to the next list of references.==使用GET属性'latest'能获得当前的关联关系列表, 并且每一次调用都只能更新下一级关联关系列表. Click the API icon to see the XML file.==点击API图标查看XML文件. 
To see a list of all APIs, please visit the API wiki page.==查看所有API, 请访问API Wiki. Web Structure==网页结构 host<==主机< depth<==深度< nodes<==节点< time<==时间< size<==大小< >Background<==>背景< >Line<==>线< >Dot<==>点< >Dot-end<==>末点< >Color <==>颜色< "change"=="改变" #----------------------------- #File: Wiki.html #--------------------------- YaCyWiki page:==YaCyWiki: last edited by==最后编辑由 change date==改变日期 Edit<==编辑< only granted to admin==只授权给管理员 Grant Write Access to==授予写权限 # !!! Do not translate the input buttons because that breaks the function to switch rights !!! #"all"=="Allen" #"admin"=="Administrator" Start Page==开始页面 Index==索引 Versions==版本 Author:==作者: #Text:==Text: You can use==您可以在这使用 Wiki Code here.==wiki代码. "edit"=="编辑" "Submit"=="提交" "Preview"=="预览" "Discard"=="取消" >Preview==>预览 No changes have been submitted so far!==未提交任何改变! Subject==主题 Change Date==改变日期 Last Author==最后作者 IO Error reading wiki database:==读取wiki数据库时出现IO错误: Select versions of page==选择页面版本 Compare version from==原始版本 "Show"=="显示" with version from==对比版本 "current"=="当前" "Compare"=="对比" Return to==返回 Changes will be published as announcement on YaCyNews==更改会以公告形式发布在YaCy新闻中 #----------------------------- #File: WikiHelp.html #--------------------------- to embed this video:==嵌入此视频: Text will be displayed underlined.==文本将以下划线显示. Code==代码 This tag displays a Youtube or Vimeo video with the id specified and fixed width 425 pixels and height 350 pixels.==此标签以固定的425像素宽、350像素高显示指定id的Youtube或Vimeo视频. i.e. use==比如用 Wiki Help==Wiki帮助 Wiki-Code==Wiki代码 This table contains a short description of the tags that can be used in the Wiki and several other servlets==此表简述了可用于Wiki及YaCy其他几个页面的标签, of YaCy. For a more detailed description visit the==详情请见 #YaCy Wiki==YaCy Wiki Description==描述 #=headline===headline These tags create headlines. If a page has three or more headlines, a table of content will be created automatically.==此标记用于创建标题. 如果页面有三个或更多标题, 则会自动创建目录. Headlines of level 1 will be ignored in the table of content.==目录中会忽略一级标题. 
#text==Text These tags create stressed texts. The first pair emphasizes the text (most browsers will display it in italics),==这些标记标识文本内容. 第一对中为强调内容(多数浏览器用斜体表示), the second one emphazises it more strongly (i.e. bold) and the last tags create a combination of both.==第二对用粗体表示, 第三对为两者的联合. Text will be displayed stricken through.==文本内容以删除线表示. Lines will be indented. This tag is supposed to mark citations, but may as well be used for styling purposes.==缩进内容, 此标记主要用于引用, 也能用于标识样式. #point==point These tags create a numbered list.==此标记用于有序列表. #something<==something< #another thing==another thing #and yet another==and yet another #something else==something else These tags create an unnumbered list.==用于创建无序列表. #word==word #:definition==:definition These tags create a definition list.==用于创建定义列表. This tag creates a horizontal line.==创建水平线. #pagename==pagename #description]]==description]] This tag creates links to other pages of the wiki.==创建到其他wiki页面的链接. This tag displays an image, it can be aligned left, right or center.==显示图片, 可设置左对齐, 右对齐和居中. These tags create a table, whereas the first marks the beginning of the table, the second starts==用于创建表格, 第一个标记为表格开头, 第二个为换行, a new line, the third and fourth each create a new cell in the line. The last displayed tag==第三个与第四个创建列. closes the table.==最后一个为表格结尾. #The escape tags will cause all tags in the text between the starting and the closing tag to not be treated as wiki-code.==Durch diesen Tag wird der Text, der zwischen den Klammern steht, nicht interpretiert und unformatiert als normaler Text ausgegeben. A text between these tags will keep all the spaces and linebreaks in it. Great for ASCII-art and program code.==此标记之间的文本会保留所有空格和换行, 主要用于ASCII艺术图片和编程代码. If a line starts with a space, it will be displayed in a non-proportional font.==如果一行以空格开头, 则会以非比例形式显示. url description==URL描述 This tag creates links to external websites.==此标记创建外部网站链接. 
alt text==替代文本 #----------------------------- #File: yacyinteractive.html #--------------------------- API wiki page==API 百科页面 "Search..."=="搜索中..." e="Search"==e="搜索" loading from local index...==从本地索引加载... To see a list of all APIs, please visit the==要查看所有API的列表,请访问 The query format is similar to==查询格式类似于 This search result can also be retrieved as RSS/opensearch output.==此搜索结果也可以作为RSS/opensearch输出检索. To see a list of all APIs==要查看所有API的列表 YaCy Interactive Search==YaCy交互搜索 This search result can also be retrieved as RSS/opensearch output.==此搜索结果能以RSS/opensearch形式表示. The query format is similar to SRU.==请求的格式与SRU相似. Click the API icon to see an example call to the search rss API.==点击API图标查看示例. To see a list of all APIs, please visit the API wiki page.==查看所有API, 请访问API Wiki. #----------------------------- #File: yacysearch.html #--------------------------- # Do not translate id="search" and rel="search" which only have technical html semantics >search<==>搜索< Illegal URL mask:==非法网址掩码: >Media<==>媒体< g> remote),==g> 远程), >log in<==>登录< length of search words must be at least 1 character==搜索文本最少一个字符 Search Page==搜索网页 This search result can also be retrieved as RSS/opensearch output.==此搜索结果能以RSS/opensearch形式表示. The query format is similar to SRU.==请求的格式与SRU相似. Click the API icon to see an example call to the search rss API.==点击API图标查看示例. To see a list of all APIs, please visit the API wiki page.==查看所有API, 请访问API Wiki. Did you mean:==是否搜索: "Search"=="搜索" 'Search'=='搜索' "search again"=="再次搜索" more options==更多选项 Text==文本 Images==图片 Audio==音频 Video==视频 Applications==程序 The following words are stop-words and had been excluded from the search:==以下关键字是停用词, 已从搜索中排除: No Results.==未找到. length of search words must be at least 3 characters==搜索关键字至少为3个字符 > of==> 共 > local,==> 本地, remote from==远端 来自 YaCy peers).==YaCy 节点). 
#----------------------------- #File: yacysearchitem.html #--------------------------- "bookmark"=="书签" "recommend"=="推荐" "delete"=="删除" Pictures==图像 #----------------------------- #File: yacysearchtrailer.html #--------------------------- >Provider==>提供者 >Filetype==>文件类型 >Language==>语言     Stealth Mode   ==    隐身 模式       Peer-to-Peer    ==   P2P     "Privacy"=="隐私" Context Ranking==上下文排序 Sort by Date==按日期排序 Documents==文件 Images==图像 >Documents==>文件 >Images==>图像 "Your search is done using peers in the YaCy P2P network."=="您的搜索是靠YaCy P2P网络中的节点完成的。" "You can switch to 'Stealth Mode' which will switch off P2P, giving you full privacy. Expect less results then, because then only your own search index is used."=="您可以切换到'隐身模式',这将关闭P2P,给您完全的隐私。结果可能会较少,因为此时只使用您自己的搜索索引。" "Your search is done using only your own peer, locally."=="您的搜索仅使用您自己的节点在本地完成。" "You can switch to 'Peer-to-Peer Mode' which will cause that your search is done using the other peers in the YaCy network."=="您可以切换到'P2P模式',让您的搜索使用YaCy网络中的其他节点。" "Your search is done using only your own peer=="您的搜索仅使用您自己的节点在本地完成。" "You can switch to 'Stealth Mode' which will switch off P2P=="您可以切换到'隐身模式',这将关闭P2P,给您完全的隐私。结果可能会较少,因为此时只使用您自己的搜索索引。" show search results for "#[query]#" on map==在地图上显示 "#[query]#" 的搜索结果 #>Provider==>Anbieter >Name Space==>命名空间 >Author==>作者 #----------------------------- ### Subdirectory api ### #File: api/table_p.html #--------------------------- Table Viewer==表格查看器 #>PK<==>Primärschlüssel< "Edit Table"=="编辑表格" #----------------------------- #File: api/yacydoc.html #--------------------------- >Title<==>标题< >Author<==>作者< >Description<==>描述< >Subject<==>主题< >Publisher<==>发布者< >Contributor<==>贡献者< >Date<==>日期< >Type<==>类型< >Identifier<==>标识符< >Language<==>语言< >Load Date<==>加载日期< >Referrer Identifier<==>关联标识符< #>Referrer URL<==>Referrer URL< >Document size<==>文件大小< >Number of Words<==>关键字数目< #----------------------------- ### Subdirectory env/templates ### #File: env/templates/header.template 
#--------------------------- ### FIRST STEPS ### First Steps==入门 Use Case & Account==用例 & 账号 Load Web Pages, Crawler==加载网页,爬虫 RAM/Disk Usage & Updates==内存/硬盘使用 & 更新 Load Web Pages==加载网页 ### MONITORING ### Target Analysis==目标分析 Re-Start<==重启< Shutdown<==关闭< Download YaCy==下载YaCy Search Interface==搜索界面 About This Page==关于此页 "Search..."=="搜索中..." Crawler Monitor==爬虫监视 System Status==系统状态 Peer-to-Peer Network==P2P网络 Index Browser==索引浏览器 You did not yet start a web crawl!==您还没开启网络爬虫! Advanced Crawler==高级爬虫 Index Export/Import==索引导出/导入 RAM/Disk Usage ==内存/硬盘使用  Administration== 管理 Toggle navigation==切换导航 Community (Web Forums)==社区(网络论坛) Project Wiki==项目百科 Portal Configuration==门户配置 Portal Design==门户设计 Ranking and Heuristics==排名和启发式 Content Semantic==内容语义 Process Scheduler==进程调度器 Network Access==网络访问 Confirm Re-Start==确认重启 Project Wiki<==项目百科< Git Repository==Git存储库 Bugtracker==错误追踪器 "You just started a YaCy peer!"=="您刚启动了一个YaCy节点!" "As a first-time-user you see only basic functions. Set a use case or name your peer to see more options. Start a first web crawl to see all monitoring options."=="作为初次使用者,您只能看到基本功能. 请设置用例或命名您的节点以查看更多选项. 开始第一次网页抓取即可查看所有监视选项." "You did not yet start a web crawl!"=="您还未启动一个网络爬虫!" "You do not see all monitoring options here, because some belong to crawl result monitoring. Start a web crawl to see that!"=="您不会在这里看到所有的监控选项,因为有些属于抓取结果监控. 开始网络抓取看看!" System Administration==系统管理 Configuration==配置 Production==生产 >Administration<==>管理< Search Portal Integration==搜索门户集成 You just started a YaCy peer!==您刚启动了一个YaCy节点! As a first-time-user you see only basic functions. Set a use case or name your peer to see more options. Start a first web crawl to see all monitoring options.==作为初次使用者, 您只能看到基本功能. 请设置用例或命名您的节点以查看更多选项. 开始第一次网页抓取即可查看所有监视选项. You do not see all monitoring options here, because some belong to crawl result monitoring. Start a web crawl to see that!==您不会在这里看到所有的监控选项,因为有些属于抓取结果监控. 开始网络抓取看看! 
Use Case ==用例 "You do not see all monitoring options here=="您不会在这里看到所有的监控选项, 因为有些属于抓取结果监控. 开始网络抓取看看!" You do not see all monitoring options here==您不会在这里看到所有的监控选项, 因为有些属于抓取结果监控. 开始网络抓取看看! RAM/Disk Usage==内存/硬盘使用 Use Case==用例 YaCy - Distributed Search Engine==YaCy - 分布式搜索引擎 ### SEARCH & BROWSE ### >Search==>搜索 Web Search==搜索网页 File Search==搜索文件 Search & Browse==搜索 & 浏览 Search Page==搜索网页 Rich Client Search==富客户端搜索 Interactive local Search==本地交互搜索 Compare Search==对比搜索 Ranking Config==排名设置 >Surftips==>建议 Local Peer Wiki==本地Wiki >Bookmarks==>书签 >Help==>帮助 ### INDEX CONTROL ### Index Production==索引生产 Index Control==索引控制 Index Creation==索引创建 Crawler Monitor==爬虫监视 Crawl Results==抓取结果 Index Administration==索引管理 Filter & Blacklists==过滤 & 黑名单 ### SEARCH INTEGRATION ### Search Integration==搜索集成 Search Portals==搜索主页 Customization==自定义 ### MONITORING ### Monitoring==监视 YaCy Network==YaCy网络 Web Visualization==网页可视化 Access Tracker==访问跟踪 #Server Log==Server Log >Messages==>消息 >Terminal==>终端 "New Messages"=="新消息" ### PEER CONTROL Peer Control==节点控制 Admin Console==管理控制台 >API Action Steering<==>API动作向导< Confirm Restart==确认重启 Re-Start==重启 Confirm Shutdown==确认关闭 >Shutdown==>关闭 ### THE PROJECT ### The Project==项目 Project Home==项目主页 #Deutsches Forum==Deutsches Forum English Forum==英文论坛 YaCy Project Wiki==YaCy项目Wiki # Development Change Log==Entwicklung Änderungshistorie amp;language=en==amp;language=cn Development Change Log==变更日志 Peer Statistics::YaCy Statistics==节点统计数据::YaCy数据 #----------------------------- #File: env/templates/simpleheader.template #--------------------------- Project Wiki==项目百科 Search Interface==搜索界面 About This Page==关于此页 Bugtracker==Bug追踪器 Git Repository==Git存储库 Community (Web Forums)==社区(网络论坛) Download YaCy==下载YaCy Google Appliance API==Google设备API >Web Search<==>网页搜索< >File Search<==>文件搜索< >Compare Search<==>比较搜索< >Index Browser<==>索引浏览器< >URL Viewer<==>地址查看器< Example Calls to the Search API:==调用搜索API的示例: Administration »==管理 » Search Interfaces==搜索界面 Toggle 
navigation==切换导航 Solr Default Core==Solr默认核心 Solr Webgraph Core==Solr网页图核心 Administration ==管理 Administration==管理 #Administration<==Administration< >Search Network<==>搜索网络< #Peer Owner Profile==节点所有者资料 Help / YaCy Wiki==帮助 / YaCy Wiki #----------------------------- #File: env/templates/submenuAccessTracker.template #--------------------------- Access Grid==访问网格 Incoming Requests Overview==传入请求概述 Incoming Requests Details==传入请求详情 All Connections<==所有连接< Local Search<==本地搜索< Remote Search<==远程搜索< Cookie Menu==Cookie菜单 Incoming Cookies==传入Cookie Outgoing Cookies==传出Cookie Incoming==传入 Outgoing==传出 Access Tracker==访问跟踪 Server Access==服务器访问 Overview==概述 #Details==Details Connections==连接 Local Search==本地搜索 Log==日志 Host Tracker==主机跟踪器 Remote Search==远程搜索 #----------------------------- #File: env/templates/submenuBlacklist.template #--------------------------- Content Control==内容控制 Filter & Blacklists==过滤 & 黑名单 Blacklist Administration==黑名单管理 Blacklist Cleaner==黑名单整理 Blacklist Test==黑名单测试 Import/Export==导入/导出 Index Cleaner==索引整理 #----------------------------- #File: env/templates/submenuConfig.template #--------------------------- System Administration==系统管理 Viewer and administration for database tables==数据库表的查看与管理 Performance Settings of Busy Queues==繁忙队列的性能设置 #UNUSED HERE #Peer Administration Console==节点控制台 Status==状态 >Accounts==>账户 Network Configuration==网络设置 >Heuristics<==>启发式< Dictionary Loader==词典加载器 System Update==系统升级 >Performance==>性能 Advanced Settings==高级设置 Parser Configuration==解析配置 Local robots.txt==本地爬虫协议 Advanced Properties==高级属性 #----------------------------- #File: env/templates/submenuContentIntegration.template #--------------------------- External Content Integration==外部内容集成 Import phpBB3 forum==导入phpBB3论坛内容 Import Mediawiki dumps==导入Mediawiki数据 Import OAI-PMH Sources==导入OAI-PMH源 #----------------------------- #File: env/templates/submenuCookie.template #--------------------------- Cookie Menu==Cookie菜单 Incoming Cookies==传入Cookie Outgoing 
Cookies==传出Cookie #----------------------------- #File: env/templates/submenuCrawlMonitor.template #--------------------------- Overview==概述 Receipts==回执 Queries==查询 DHT Transfer==DHT 传输 Proxy Use==代理使用 Local Crawling==本地抓取 Global Crawling==全球抓取 Surrogate Import==替代文件导入 Crawl Results==抓取结果 Crawler<==爬虫< Global==全球 robots.txt Monitor==爬虫协议监视器 Remote==远程 No-Load==空载 Processing Monitor==进程监视 Crawler Queues==爬虫队列 Loader<==加载器< Rejected URLs==已拒绝地址 >Queues<==>队列< Local<==本地< Crawler Steering==抓取向导 Scheduler and Profile Editor<==定时器与配置文件编辑器< #----------------------------- #File: env/templates/submenuDesign.template #--------------------------- >Language<==>语言< Search Page Layout==搜索页面布局 Design==设计 >Appearance<==>外观< Customization==自定义 >Appearance==>外观 User Profile==用户资料 >Language==>语言 #----------------------------- #File: env/templates/submenuIndexControl.template #--------------------------- Index Administration==索引管理 URL Database Administration==地址数据库管理 Index Deletion==索引删除 Index Sources & Targets==索引来源&目标 Solr Schema Editor==Solr模式编辑器 Field Re-Indexing==字段重新索引 Reverse Word Index==反向词索引 Content Analysis==内容分析 Reverse Word Index Administration==反向词索引管理 URL References Database==地址关联关系数据库 URL Viewer==地址浏览 #----------------------------- #File: env/templates/submenuIndexCreate.template #--------------------------- Crawler/Spider<==爬虫/蜘蛛< Crawl Start (Expert)==抓取开始(专家模式) Network Scanner==网络扫描仪 Crawling of MediaWikis==MediaWikis抓取 Remote Crawling==远程抓取 Scraping Proxy==抓取代理 >Autocrawl<==>自动抓取< Advanced Crawler==高级爬虫 >Crawling of phpBB3 Forums<==>phpBB3论坛抓取< #Web Crawler Control==Web 爬虫 Steuerung Start a Web Crawl==开启网页抓取 #Crawl Start==Crawl starten #Crawl Profile Editor==Crawl Profil Editor Crawler Queues==爬虫队列 #Indexing<==Indexierung< #Loader<==Lader< #URLs to be processed==zu verarbeitende URLs #Processing Queues==Warteschlangen #Local<==Lokal< #Global<==Global< #Remote<==Remote< #Overhang<==Überhang< #Media Crawl Queues==Medien Crawl-Puffer #>Images==>Bilder #>Movies==>Filme 
#>Music==>Musik
#--- New menu items ---
Index Creation==索引创建
Full Site Crawl==全站抓取
Sitemap Loader==网站地图加载
Crawl Start (Expert)==抓取开始(专家模式)
Network Scanner==网络扫描仪
#>Intranet Scanner<==>Intranet Scanner<
Crawling of==抓取
#MediaWikis==MediaWikis
>phpBB3 Forums<==>phpBB3论坛<
Content Import<==内容导入<
Network Harvesting<==网络采集<
Remote Crawling==远程抓取
Scraping Proxy==采集代理
Database Reader<==数据库读取<
for phpBB3 Forums==用于phpBB3论坛
Dump Reader for==Dump读取器,用于
#MediaWiki dumps==MediaWiki dumps
#-----------------------------
#File: env/templates/submenuPortalIntegration.template
#---------------------------
Search Portal Integration==搜索门户集成
Live Search Anywhere==任意位置即时搜索
Generic Search Portal==通用搜索门户
Search Box Anywhere==任意位置搜索框
#-----------------------------
#File: env/templates/submenuPublication.template
#---------------------------
Publication==发布
Wiki==百科
Blog==博客
File Hosting==文件共享
#-----------------------------
#File: env/templates/submenuUseCaseAccount.template
#---------------------------
Use Case & Accounts==用例 & 账户
Use Case ==用例
Use Case==用例
Basic Configuration==基本设置
>Accounts<==>账户<
Network Configuration==网络设置
#-----------------------------
#File: env/templates/submenuViewLog.template
#---------------------------
Server Log Menu==服务器日志菜单
#Server Log==Server Log
#-----------------------------
#File: env/templates/submenuWebStructure.template
#---------------------------
Index Browser==索引浏览器
Web Visualization==网页可视化
Web Structure==网页结构
Image Collage==图像拼贴
#-----------------------------
#File: proxymsg/authfail.inc
#---------------------------
Your Username/Password is wrong.==用户名/密码输入错误.
Username==用户名
Password==密码
"login"=="登录"
#-----------------------------
#File: proxymsg/error.html
#---------------------------
YaCy: Error Message==YaCy: 错误消息
request:==请求:
unspecified error==未定义错误
not-yet-assigned error==尚未指定的错误
You don't have an active internet connection. Please go online.==无可用网络连接, 请联网.
Could not load resource. The file is not available.==无法加载资源, 文件不可用.
Exception occurred==发生异常
Generated #[date]# by==生成日期 #[date]# 由
#-----------------------------
#File: proxymsg/proxylimits.inc
#---------------------------
Your Account is disabled for surfing.==您的账户没有浏览权限.
Your Timelimit (#[timelimit]# Minutes per Day) is reached.==您的时限(每天 #[timelimit]# 分钟)已到.
#-----------------------------
#File: proxymsg/unknownHost.inc
#---------------------------
The server==服务器
could not be found.==未找到.
Did you mean:==您是指:
#-----------------------------
#File: www/welcome.html
#---------------------------
YaCy: Default Page for Individual Peer Content==YaCy: 个人节点内容的默认页面
Individual Web Page==个人网页
Welcome to your own web page in the YaCy Network==欢迎来到YaCy网络中您自己的网页
THIS IS A DEMONSTRATION PAGE FOR YOUR OWN INDIVIDUAL WEB SERVER!==这是您自己的个人网页服务器的演示页面!
PLEASE REPLACE THIS PAGE BY PUTTING A FILE index.html INTO THE PATH==请将一个index.html文件放入以下路径以替换此页面
<YaCy-application-home>#[wwwpath]#==<YaCy程序主目录>#[wwwpath]#
#-----------------------------
#File: js/Crawler.js
#---------------------------
"Continue this queue"=="继续队列"
"Pause this queue"=="暂停队列"
#-----------------------------
#File: js/yacyinteractive.js
#---------------------------
>total results==>全部结果
 topwords:== 热门词:
>Name==>名称
>Size==>大小
>Date==>日期
#-----------------------------
#File: yacy/ui/js/jquery-flexigrid.js
#---------------------------
'Displaying {from} to {to} of {total} items'=='显示 {from} 到 {to}, 总共 {total} 个条目'
'Processing, please wait ...'=='正在处理, 请稍候...'
'No items'=='无条目'
#-----------------------------
#File: yacy/ui/js/jquery-ui-1.7.2.min.js
#---------------------------
Loading…==正在加载…
#-----------------------------
#File: yacy/ui/js/jquery.ui.all.min.js
#---------------------------
Loading…==正在加载…
#-----------------------------
#File: yacy/ui/index.html
#---------------------------
About YaCy-UI==关于YaCy-UI
Admin Console==管理控制台
"Bookmarks"=="书签"
>Bookmarks==>书签
Server Log==服务器日志
#-----------------------------
#File: yacy/ui/yacyui-admin.html
#---------------------------
Peer Control==节点控制
"Login"=="登录"
#Login==Anmelden
Themes==主题
Messages==消息
Re-Start==重启
Shutdown==关闭
Web Indexing==网页索引
Crawl Start==开始抓取
Monitoring==监视
YaCy Network==YaCy网络
>Settings==>设置
"Basic Settings"=="基本设置"
Basic== 基本
Accounts==账户
"Network"=="网络"
Network== 网络
"Advanced Settings"=="高级设置"
Advanced== 高级
"Update Settings"=="升级设置"
Update== 升级
>YaCy Project==>YaCy项目
"YaCy Project Home"=="YaCy项目主页"
Project== 项目
"YaCy Statistics"=="YaCy统计"
Statistics== 统计
"YaCy Forum"=="YaCy论坛"
Forum==论坛
"Help"=="帮助"
#"YaCy Wiki"=="YaCy Wiki"
Wiki==百科
#-----------------------------
#File: yacy/ui/yacyui-bookmarks.html
#---------------------------
'Add'=='添加'
'Crawl'=='抓取'
'Edit'=='编辑'
'Delete'=='删除'
'Rename'=='重命名'
'Help'=='帮助'
#"public bookmark"=="öffentliches Lesezeichen"
#"private bookmark"=="privates Lesezeichen"
"delete bookmark"=="删除书签"
"YaCy Bookmarks"=="YaCy书签"
>Title==>标题
'Tags'=='标签'
#'Hash'=='Hash'
'Public'=='公有'
'Title'=='标题'
'Folders'=='目录'
'Date'=='日期'
#-----------------------------
#File: yacy/ui/sidebar/sidebar_1.html
#---------------------------
YaCy P2P Websearch==YaCy P2P搜索
"Search"=="搜索"
>Text==>文本
>Images==>图像
>Audio==>音频
>Video==>视频
>Applications==>程序
Search term:==搜索词:
# do not translate class="help" which only has technical html semantics
alt="help"==alt="帮助"
title="help"==title="帮助"
Resource/Network:==资源/网络:
freeworld==自由世界
local peer==本地节点
>bookmarks==>书签
sciencenet==ScienceNet
>Language:==>语言:
any language==任意语言
Bookmark Folders==书签目录
#-----------------------------
#File: yacy/ui/sidebar/sidebar_2.html
#---------------------------
Bookmark Tags<==书签标签<
Search Options==搜索选项
Constraint:==约束:
all pages==所有页面
index pages==索引页面
URL mask:==URL过滤:
Prefer mask:==首选过滤:
Bookmark TagCloud==书签标签云
Topwords<==热门词<
alt="help"==alt="帮助"
title="help"==title="帮助"
#-----------------------------
#File: yacy/ui/yacyui-welcome.html
#---------------------------
>Overview==>概述
YaCy-UI is going to be a JavaScript based client for YaCy based on the existing XML and JSON API.==YaCy-UI 将是基于JavaScript的YaCy客户端, 使用现有的XML和JSON API.
YaCy-UI is at most alpha status, as there is still problems with retriving the search results.==YaCy-UI 尚处于alpha阶段, 获取搜索结果时仍存在问题.
I am currently changing the backend to a more application friendly format and getting good results with it (I will check that in some time after the stable release 0.7).==目前我正在将后端改为对应用程序更友好的格式, 并已取得良好效果(我会在0.7稳定版发布后一段时间提交该改动).
For now have a look at the bookmarks, performance has increased significantly, due to the use of JSON and Flexigrid!==现在可以先看看书签功能, 由于使用了JSON和Flexigrid, 性能已显著提升!
#-----------------------------
# EOF