apfelmaennchen 16 years ago
parent 0f0b4aec75
commit 9ab009b16b

@@ -4,8 +4,8 @@
<title>YaCy '#[clientname]#': Bookmarks</title>
#%env/templates/metas.template%#
-<script src="/js/ajax.js" type="text/javascript"></script>
-#(display)#
+<script src="/js/Bookmarks.js" type="text/javascript"></script>
+#(display)#
<link rel="alternate" type="application/rss+xml" title="RSS" href="Bookmarks.rss" />
::
#(/display)#

@@ -26,7 +26,7 @@
You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under "Crawling Depth".
</p>
-<form id="WatchCrawler" action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
+<form name="WatchCrawler" action="WatchCrawler_p.html" method="post" enctype="multipart/form-data">
<table border="0" cellpadding="5" cellspacing="1">
<tr class="TableHeader">
<td><strong>Attribut</strong></td>
@@ -60,7 +60,7 @@
<td colspan="3" class="commit">
<span id="robotsOK"></span>
<span id="title"><br/></span>
-<img src="/env/grafics/empty.gif" id="ajax" alt="empty" />
+<img src="/env/grafics/empty.gif" name="ajax" alt="empty" />
</td>
</tr>
</table>
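The intro text in this template describes crawling as: download the start page, extract its links, download the pages behind those links, and repeat until the configured "Crawling Depth" is reached. A minimal sketch of that depth-limited breadth-first traversal is below. This is not YaCy's actual crawler; `crawl`, `fetchLinks`, and the `siteMap` data are illustrative stand-ins (a real crawler would fetch pages over HTTP and honour robots.txt, as the `robotsOK` span in this template hints).

```javascript
// Sketch only: depth-limited breadth-first crawl as described in the page text.
// fetchLinks(url) stands in for "download the page and extract all links";
// here it reads from an in-memory site map instead of doing HTTP requests.
function crawl(startUrl, maxDepth, fetchLinks) {
  const seen = new Set([startUrl]);  // every URL is downloaded at most once
  let frontier = [startUrl];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next = [];
    for (const url of frontier) {
      for (const link of fetchLinks(url)) {
        if (!seen.has(link)) {
          seen.add(link);
          next.push(link);         // links found at this depth are crawled next
        }
      }
    }
    frontier = next;
  }
  return [...seen];
}

// Hypothetical link graph used in place of real pages.
const siteMap = {
  "/index.html": ["/a.html", "/b.html"],
  "/a.html": ["/c.html"],
  "/b.html": [],
  "/c.html": ["/d.html"],
};
const fetchLinks = (url) => siteMap[url] || [];
```

With this map, a crawl of depth 1 from `/index.html` visits only the start page and its direct links; depth 2 additionally reaches `/c.html`, one link further.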
