If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:
-before importing large database dumps, set the following Line in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB):
-$cfg['UploadDir'] = '/tmp';
-deselect the partial import flag
+before importing large database dumps, set the following line in phpmyadmin/config.inc.php and place your dump file in /tmp (otherwise it is not possible to upload files larger than 2 MB):
+$cfg['UploadDir'] = '/tmp';
+deselect the partial import flag
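For orientation, the relevant excerpt of phpmyadmin/config.inc.php would look roughly like the sketch below; everything apart from the UploadDir line is assumed to already exist in your installation, and the 2 MB ceiling it works around is PHP's default upload_max_filesize:

<?php
// Sketch of the relevant setting in phpmyadmin/config.inc.php (the rest of
// the file is omitted). Dump files placed in this directory can be selected
// directly in phpMyAdmin's Import dialog, so the HTTP upload limit
// (upload_max_filesize in php.ini, 2 MB by default) no longer applies.
$cfg['UploadDir'] = '/tmp';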
When an export is started, surrogate files are generated into DATA/SURROGATE/in, where they are automatically picked up by an indexer thread.
All indexed surrogate files are then moved to DATA/SURROGATE/out and can be recycled when an index is deleted.
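As an illustration of the flow just described (this is not YaCy code; only the two directory names above are taken from the text), re-feeding already-indexed surrogates amounts to moving them back into the input directory, for example after the index has been deleted:

<?php
// Illustration only: move indexed surrogate files from DATA/SURROGATE/out
// back to DATA/SURROGATE/in, where the indexer thread fetches them again.
// Assumes the script is run from the YaCy application directory.
foreach (glob('DATA/SURROGATE/out/*') as $file) {
    rename($file, 'DATA/SURROGATE/in/' . basename($file));
}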
@@ -56,7 +56,7 @@
Posts per file in exported surrogates
@@ -67,11 +67,11 @@
Import a database dump,
diff --git a/htroot/CrawlStartExpert_p.html b/htroot/CrawlStartExpert_p.html
index a05c05abe..e8a8db683 100644
--- a/htroot/CrawlStartExpert_p.html
+++ b/htroot/CrawlStartExpert_p.html
@@ -1,5 +1,5 @@
YaCy '#[clientname]#': Crawl Start
#%env/templates/metas.template%#
@@ -26,7 +26,7 @@
You can define URLs as start points for Web page crawling and start crawling here. "Crawling" means that YaCy will download the given website, extract all links from it and then download the content behind those links. This is repeated recursively up to the depth specified under "Crawling Depth".
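Purely as an illustration of the depth-limited link following described above (this is not YaCy's implementation; the start URL, the regex-based link extraction and the absence of any politeness rules are simplifications for the sketch):

<?php
// Sketch of a depth-limited crawl: download a page, extract its links,
// then repeat for each link until the remaining depth is exhausted.
function crawl(string $url, int $depth, array &$seen = []): void {
    if ($depth < 0 || isset($seen[$url])) {
        return;                                  // depth exhausted or already visited
    }
    $seen[$url] = true;
    $html = @file_get_contents($url);            // download the page (needs allow_url_fopen)
    if ($html === false) {
        return;
    }
    echo "[remaining depth: $depth] $url\n";
    // Follow absolute http(s) links only; relative-URL resolution is omitted.
    preg_match_all('~href="(https?://[^"]+)"~i', $html, $m);
    foreach (array_unique($m[1]) as $link) {
        crawl($link, $depth - 1, $seen);         // each hop uses up one depth level
    }
}

crawl('https://example.org/', 2);                // start point with crawling depth 2

In the sketch, a depth of 0 fetches only the start URL itself; each additional level follows one more ring of links.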