Fixed various spelling mistakes...

git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@144 6c8d7289-2bf4-0310-a012-ef5d649a1542
rramthun 20 years ago
parent bfff0a96a7
commit 76475f9f38

@@ -18,19 +18,19 @@ You do not need to provide any personal data here, but if you want to distribute
 <table border="1">
 <tr>
 <td>Name</td>
-<td><input type="text" name="name" value="#[name]#" style="width:100%"></td>
+<td><input type="text" name="name" maxlength="1000" value="#[name]#" style="width:100%"></td>
 </tr>
 <tr>
 <td>Nick Name</td>
-<td><input type="text" name="nickname" value="#[nickname]#" style="width:100%"></td>
+<td><input type="text" name="nickname" maxlength="1000" value="#[nickname]#" style="width:100%"></td>
 </tr>
 <tr>
 <td>Homepage</td>
-<td><input type="text" name="homepage" value="#[homepage]#" style="width:100%"></td>
+<td><input type="text" name="homepage" maxlength="1000" value="#[homepage]#" style="width:100%"></td>
 </tr>
 <tr>
-<td>EMail</td>
-<td><input type="text" name="email" value="#[email]#" style="width:100%"></td>
+<td>eMail</td>
+<td><input type="text" name="email" maxlength="1000" value="#[email]#" style="width:100%"></td>
 </tr>
 <tr>
@@ -43,15 +43,15 @@ You do not need to provide any personal data here, but if you want to distribute
 </tr>
 <tr>
 <td>Jabber</td>
-<td><input type="text" name="jabber" value="#[jabber]#" style="width:100%"></td>
+<td><input type="text" name="jabber" maxlength="1000" value="#[jabber]#" style="width:100%"></td>
 </tr>
 <tr>
 <td>Yahoo!</td>
-<td><input type="text" name="yahoo" value="#[yahoo]#" style="width:100%"></td>
+<td><input type="text" name="yahoo" maxlength="1000" value="#[yahoo]#" style="width:100%"></td>
 </tr>
 <tr>
 <td>MSN</td>
-<td><input type="text" name="msn" value="#[msn]#" style="width:100%"></td>
+<td><input type="text" name="msn" maxlength="1000" value="#[msn]#" style="width:100%"></td>
 </tr>
 <tr>

@@ -28,15 +28,15 @@ You can define url's as start points for Web page crawling and start that crawli
 <td class=small colspan="3">
 A minimum of 1 is recommended.
 Be careful with the prefetch number. Consider a branching factor of average 20;
-A prefect-depth of 8 would index 25.600.000.000 pages, maybe the whole WWW.
+A prefetch-depth of 8 would index 25.600.000.000 pages, maybe the whole WWW.
 </td>
 </tr>
 <tr valign="top" class="TableCellDark">
 <td class=small>Crawling Filter:</td>
 <td class=small><input name="crawlingFilter" type="text" size="20" maxlength="100" value="#[crawlingFilter]#"></td>
 <td class=small colspan="3">
-This is an emacs-like regular expression that must match with the crawled url.
-Use this i.e. to crawl a single domain. If you set this filter is would make sense to increase
+This is an emacs-like regular expression that must match with the crawled URL.
+Use this i.e. to crawl a single domain. If you set this filter it would make sense to increase
 the crawl depth.
 </td>
 </tr>
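The prefetch-depth warning in the hunk above can be sanity-checked: with an average branching factor of 20, a crawl of depth d reaches roughly 20^d pages at the deepest level. A minimal sketch (the function name is illustrative, not part of YaCy):

```python
def pages_at_depth(branching_factor: int, depth: int) -> int:
    """Pages reached at the deepest level of a crawl tree that
    fans out by `branching_factor` links per page."""
    return branching_factor ** depth

# Depth 8 with an average branching factor of 20 gives
# 25,600,000,000 pages, matching the 25.600.000.000 figure
# quoted in the template text.
print(pages_at_depth(20, 8))  # 25600000000
```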
@@ -45,7 +45,7 @@ You can define url's as start points for Web page crawling and start that crawli
 <td class=small><input type="checkbox" name="crawlingQ" align="top" #(crawlingQChecked)#::checked#(/crawlingQChecked)#></td>
 <td class=small colspan="3">
 URL's pointing to dynamic content should usually not be crawled. However, there are sometimes web pages with static content that
-is accessed with URL's containing question marks. If you are unshure, do not check this to avoid crawl loops.
+is accessed with URL's containing question marks. If you are unsure, do not check this to avoid crawl loops.
 </td>
 </tr>
 <tr valign="top" class="TableCellDark">
@@ -71,7 +71,7 @@ You can define url's as start points for Web page crawling and start that crawli
 <td class=small colspan="3">
 If checked, the crawl will try to assign the leaf nodes of the search tree to remote peers.
 If you need your crawling results locally, you must switch this off.
-Only senior and principal peers can initiate or receive remote crawls.
+Only senior and principal peer's can initiate or receive remote crawls.
 </td>
 </tr>
 <tr valign="top" class="TableCellDark">
@@ -104,8 +104,8 @@ You can define url's as start points for Web page crawling and start that crawli
 <td class=small>Start Point:</td>
 <td class=small colspan="2"><input name="crawlingURL" type="text" size="42" maxlength="256" value="http://"></td>
 <td class=small><input type="submit" name="crawlingstart" value="Start New Crawl"></td>
-<td class=small>Existing start url's are re-crawled.
-Other already visited url's are sorted out as 'double'.
+<td class=small>Existing start URL's are re-crawled.
+Other already visited URL's are sorted out as 'double'.
 A complete re-crawl will be available soon.
 </td>
 </tr>
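The Crawling Filter described in the hunks above is an emacs-like regular expression matched against each crawled URL; Python's `re` syntax is close enough to sketch the single-domain use case the template text mentions. The pattern and URLs below are assumptions for illustration, not YaCy code:

```python
import re

# Hypothetical filter restricting the crawl to one domain,
# mirroring the "crawl a single domain" hint in the template.
single_domain_filter = re.compile(r"http://www\.example\.org/.*")

candidate_urls = [
    "http://www.example.org/index.html",   # matches: stays in the crawl
    "http://other.example.net/page.html",  # no match: filtered out
]
kept = [u for u in candidate_urls if single_domain_filter.match(u)]
print(kept)  # ['http://www.example.org/index.html']
```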

@@ -1,13 +1,13 @@
 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
 <html>
 <head>
-<title>YaCy: Send Message</title>
+<title>YaCy: Send message</title>
 #[metas]#
 </head>
 <body marginheight="0" marginwidth="0" leftmargin="0" topmargin="0">
 #[header]#
 <br><br>
-<h2>Send Message</h2><br>
+<h2>Send message</h2><br>
 #[body]#

@@ -38,7 +38,7 @@
 <tr class="TableHeader" valign="bottom">
 <td class="small">Profile<br>&nbsp;</td>
 <td class="small">Message<br>&nbsp;</td>
-<td class="small">Name*<br>&nbsp;</td>
+<td class="small">Name<br>&nbsp;</td>
 #(complete)#::
 <td class="small">Address<br>&nbsp;</td>
 <td class="small">Hash<br>&nbsp;</td>

@@ -1,7 +1,7 @@
 <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
 <link rel="shortcut icon" href="favicon.ico">
 <meta name="Content-Language" content="English, Englisch">
-<meta name="keywords" content="Anomic HTTP Proxy search engine spider indexer java network open free download Mac Windwos Software development">
+<meta name="keywords" content="Anomic HTTP Proxy search engine spider indexer java network open free download Mac Windows Linux Software development">
 <meta name="description" content="Anomic Software HTTP Proxy Freeware Home Page">
 <meta name="copyright" content="Michael Christen">
 <link rel="stylesheet" media="all" href="/env/style.css">

@@ -38,7 +38,7 @@
 <td>
 <br><br><br><br><br><br><br><br><br><br>
 <p>
-<h2>Error with url '#[url]#':</h2><br><br>
+<h2>Error with URL '#[url]#':</h2><br><br>
 <tt>#[httperror]#</tt><br><br>
 <h3>
 #(errormessage)#
