*) just some typos

git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@2576 6c8d7289-2bf4-0310-a012-ef5d649a1542
pull/1/head
low012 19 years ago
parent e03740c306
commit f4af607b79

@@ -38,7 +38,7 @@
<td><input name="crawlingFilter" type="text" size="20" maxlength="100" value="#[crawlingFilter]#" /></td>
<td>
This is an emacs-like regular expression that must match with the URLs which are used to be crawled.
-Use this i.e. to crawl a single domain. If you set this filter it would make sense to increase
+Use this i.e. to crawl a single domain. If you set this filter it makes sense to increase
the crawling depth.
</td>
</tr>
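For illustration only, a rough sketch of how a filter like this could be matched against candidate URLs; the class name and filter value below are made up, and Java's java.util.regex stands in for an emacs-like regex engine (this is not YaCy's actual crawler code):

    import java.util.regex.Pattern;

    // Hypothetical sketch: restrict a crawl to a single domain via a filter pattern.
    public class CrawlFilterExample {
        public static void main(String[] args) {
            // Example value as it might be entered in the crawlingFilter field.
            Pattern filter = Pattern.compile(".*example\\.org.*");
            String[] candidates = {
                "http://www.example.org/index.html",
                "http://other-site.net/page.html"
            };
            for (String url : candidates) {
                // Only URLs matching the filter would be queued for crawling.
                System.out.println(url + " -> " + (filter.matcher(url).matches() ? "crawl" : "skip"));
            }
        }
    }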
@@ -56,7 +56,7 @@
<td>
If you use this option, web pages that are already existent in your database are crawled and indexed again.
It depends on the age of the last crawl if this is done or not: if the last crawl is older than the given
-date, the page is crawled again, othervise it is treaded as 'double' and not loaded or indexed again.
+date, the page is crawled again, otherwise it is treated as 'double' and not loaded or indexed again.
</td>
</tr>
<tr valign="top" class="TableCellDark">

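A minimal sketch of the re-crawl age check described in the hunk above, assuming a hypothetical last-crawl date and a user-supplied cut-off date (again, not the actual YaCy implementation):

    import java.util.Date;

    // Hypothetical sketch of the 'crawl again vs. treat as double' decision.
    public class RecrawlCheckExample {
        // True if the page should be crawled again, false if it counts as a 'double'.
        static boolean shouldRecrawl(Date lastCrawlDate, Date recrawlIfOlderThan) {
            return lastCrawlDate.before(recrawlIfOlderThan);
        }

        public static void main(String[] args) {
            long dayMillis = 24L * 60 * 60 * 1000;
            Date lastCrawl = new Date(System.currentTimeMillis() - 10 * dayMillis); // last crawled 10 days ago
            Date cutoff = new Date(System.currentTimeMillis() - 7 * dayMillis);     // re-crawl if older than 7 days
            System.out.println(shouldRecrawl(lastCrawl, cutoff) ? "crawl again" : "double, skip");
        }
    }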