Fix duplicate entries of crawl tasks.

git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@3920 6c8d7289-2bf4-0310-a012-ef5d649a1542
orbiter 18 years ago
parent 0e57a8062b
commit 5009695537

@@ -16,7 +16,7 @@
 You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under "Crawling Depth".
 </p>
-<form action="WatchCrawler_p.html" method="get" enctype="multipart/form-data">
+<form action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
 <table border="0" cellpadding="5" cellspacing="1">
 <tr class="TableHeader">
 <td><strong>Attribut</strong></td>

@@ -19,7 +19,7 @@
 This is repeated as long as specified under "Crawling Depth".
 </p>
-<form action="WatchCrawler_p.html" method="get" enctype="multipart/form-data">
+<form action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
 <input type="hidden" name="crawlingFilter" value=".*" />
 <input type="hidden" name="crawlingIfOlderCheck" value="off" />
 <input type="hidden" name="crawlingDomFilterCheck" value="off" />
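Switching the forms from GET to POST is what addresses the duplicate crawl tasks: a GET submission is an idempotent request that browsers may re-issue silently on reload, history navigation, or from a bookmark, each time enqueueing the crawl again, whereas browsers prompt before re-sending a POST. A minimal sketch of the corrected form, assuming a hypothetical submit button label (only the `method` attribute is the relevant change):

```html
<!-- Sketch only; the method="post" is the fix. A POST is treated as a
     state-changing request, so reloading the result page no longer
     re-submits the crawl start silently. -->
<form action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
  <input type="hidden" name="crawlingFilter" value=".*" />
  <!-- further hidden crawl parameters as in the diff above -->
  <input type="submit" value="Start Crawl" /> <!-- hypothetical label -->
</form>
```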
