From 70576e88d20cb0a917a031955cd16d457bc0f0ea Mon Sep 17 00:00:00 2001
From: mikeworks
Date: Wed, 6 Oct 2010 00:00:23 +0000
Subject: [PATCH] de.lng: Added some more untranslated strings I found and
uncommented old ones that were removed.
terminal_p.html: Put back the old ID, which was really easy to find.
IndexCreate.js: Because XHTML 1.0 Strict does not allow the name attribute on
some elements, rewrote most element access functions to use getElementById.
Table_API_p.html and all other html pages: Some XHTML 1.0 Strict fixes,
changed the checkAll javascript, and marked the first row with checkboxes as
unsortable where applicable.
Table_API_p.java and all other java pages: URL-encoded lines containing
possible ampersands (& -> &amp;) so the XHTML 1.0 Strict source code validates.
--> All Index Create pages should validate now.
Hope I did not break anything else (too much :-)
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7225 6c8d7289-2bf4-0310-a012-ef5d649a1542
---
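Note: a minimal sketch of the kind of IndexCreate.js rewrite described in the
message above; the element id, table id, and function names here are
illustrative assumptions, not the actual patched code.

// Under XHTML 1.0 Strict the name attribute is not allowed on several elements,
// so name-based lookups such as document.getElementsByName("crawlingURL")[0]
// are replaced by id-based access (the id value is assumed for illustration).
function getCrawlingUrlField() {
    return document.getElementById("crawlingURL");
}

// Sketch of a checkAll helper in the same style: it addresses the table by id
// instead of by name and toggles every checkbox found inside it.
function checkAll(tableId, checked) {
    var inputs = document.getElementById(tableId).getElementsByTagName("input");
    for (var i = 0; i < inputs.length; i++) {
        if (inputs[i].type === "checkbox") {
            inputs[i].checked = checked;
        }
    }
}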
htroot/AccessTracker_p.html | 2 +-
htroot/ContentIntegrationPHPBB3_p.html | 18 +++----
htroot/CrawlStartExpert_p.html | 10 ++--
htroot/CrawlStartSite_p.html | 49 +++++++++--------
htroot/IndexImportOAIPMHList_p.html | 33 ++++++------
htroot/IndexImportOAIPMHList_p.java | 7 +--
htroot/IndexImportOAIPMH_p.html | 9 ++--
htroot/IndexImportWikimedia_p.html | 2 +-
htroot/Load_MediawikiWiki.html | 19 +++----
htroot/Load_PHPBB3.html | 4 +-
htroot/Load_RSS_p.html | 54 +++++++++----------
htroot/Load_RSS_p.java | 12 ++---
htroot/Table_API_p.html | 73 ++++++++++++--------------
htroot/Table_API_p.java | 2 +-
htroot/js/IndexCreate.js | 14 ++---
htroot/terminal_p.html | 2 +-
locales/de.lng | 28 ++++++----
17 files changed, 173 insertions(+), 165 deletions(-)
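Note: the "& -> &amp;" encoding mentioned in the message was done in
Table_API_p.java and the other Java pages; the following JavaScript snippet is
only a sketch of the substitution that XHTML 1.0 Strict validation requires,
not the actual Java code.

// Escape bare ampersands in generated URLs so the markup validates;
// the lookahead avoids double-encoding ampersands that are already "&amp;".
function encodeAmpersands(url) {
    return url.replace(/&(?!amp;)/g, "&amp;");
}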
diff --git a/htroot/AccessTracker_p.html b/htroot/AccessTracker_p.html
index 8183f1181..e191926d3 100644
--- a/htroot/AccessTracker_p.html
+++ b/htroot/AccessTracker_p.html
@@ -1,5 +1,5 @@
-
+
YaCy '#[clientname]#': Access Tracker
#%env/templates/metas.template%#
diff --git a/htroot/ContentIntegrationPHPBB3_p.html b/htroot/ContentIntegrationPHPBB3_p.html
index 6eadd65db..83477c83c 100644
--- a/htroot/ContentIntegrationPHPBB3_p.html
+++ b/htroot/ContentIntegrationPHPBB3_p.html
@@ -15,12 +15,12 @@
If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:
-
-
before importing large database dumps, set the following Line in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB):
-
$cfg['UploadDir'] = '/tmp';
-
deselect the partial import flag
-
-
+
+
+
before importing large database dumps, set the following Line in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB):
+
$cfg['UploadDir'] = '/tmp';
+
deselect the partial import flag
+
When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.
All indexed surrogate files are then moved to DATA/SURROGATE/out and can be re-cycled when an index is deleted.
@@ -56,7 +56,7 @@
Posts per file in exported surrogates
-
+
@@ -67,11 +67,11 @@
Import a database dump,
-
-
+
+
diff --git a/htroot/CrawlStartExpert_p.html b/htroot/CrawlStartExpert_p.html
index a05c05abe..e8a8db683 100644
--- a/htroot/CrawlStartExpert_p.html
+++ b/htroot/CrawlStartExpert_p.html
@@ -1,5 +1,5 @@
-
+
YaCy '#[clientname]#': Crawl Start
#%env/templates/metas.template%#
@@ -26,7 +26,7 @@
You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under "Crawling Depth".
-