From 70576e88d20cb0a917a031955cd16d457bc0f0ea Mon Sep 17 00:00:00 2001
From: mikeworks
Date: Wed, 6 Oct 2010 00:00:23 +0000
Subject: [PATCH] de.lng: Added some more untranslated strings I found and
 uncommented old ones that were removed

terminal_p.html: Put back the old ID, which was really easy to find.

IndexCreate.js: Because XHTML 1.0 Strict does not allow the name attribute
on some elements, rewrote most element-access functions to use
getElementById.

Table_API_p.html and all other HTML pages: some XHTML 1.0 Strict fixes;
changed the checkAll JavaScript; marked the first row with checkboxes as
unsortable where applicable.

Table_API_p.java and all other Java pages: URL-encoded lines with possible
ampersands (& -> &amp;) so the pages validate as XHTML 1.0 Strict source
code.

--> All Index Create pages should validate now. Hope I did not break
anything else (too much :-)

git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7225 6c8d7289-2bf4-0310-a012-ef5d649a1542
---
 htroot/AccessTracker_p.html            |  2 +-
 htroot/ContentIntegrationPHPBB3_p.html | 18 +++---
 htroot/CrawlStartExpert_p.html         | 10 ++--
 htroot/CrawlStartSite_p.html           | 49 +++++++++--------
 htroot/IndexImportOAIPMHList_p.html    | 33 ++++++------
 htroot/IndexImportOAIPMHList_p.java    |  7 +--
 htroot/IndexImportOAIPMH_p.html        |  9 ++--
 htroot/IndexImportWikimedia_p.html     |  2 +-
 htroot/Load_MediawikiWiki.html         | 19 +++----
 htroot/Load_PHPBB3.html                |  4 +-
 htroot/Load_RSS_p.html                 | 54 +++++++++----------
 htroot/Load_RSS_p.java                 | 12 ++---
 htroot/Table_API_p.html                | 73 ++++++++++++--------
 htroot/Table_API_p.java                |  2 +-
 htroot/js/IndexCreate.js               | 14 ++---
 htroot/terminal_p.html                 |  2 +-
 locales/de.lng                         | 28 ++++++----
 17 files changed, 173 insertions(+), 165 deletions(-)

diff --git a/htroot/AccessTracker_p.html b/htroot/AccessTracker_p.html
index 8183f1181..e191926d3 100644
--- a/htroot/AccessTracker_p.html
+++ b/htroot/AccessTracker_p.html
@@ -1,5 +1,5 @@
-
+
 YaCy '#[clientname]#': Access Tracker
 #%env/templates/metas.template%#

diff --git a/htroot/ContentIntegrationPHPBB3_p.html b/htroot/ContentIntegrationPHPBB3_p.html
index 6eadd65db..83477c83c 100644
--- a/htroot/ContentIntegrationPHPBB3_p.html
+++ b/htroot/ContentIntegrationPHPBB3_p.html
@@ -15,12 +15,12 @@

If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:
-

-

+

+

When an export is started, surrogate files are generated into DATA/SURROGATE/in
which are automatically fetched by an indexer thread. All indexed surrogate files
are then moved to DATA/SURROGATE/out and can be re-cycled when an index is deleted.
@@ -56,7 +56,7 @@

Posts per file
in exported surrogates
-
+
 
   @@ -67,11 +67,11 @@
Import a database dump,
-
-
+
 
+

diff --git a/htroot/CrawlStartExpert_p.html b/htroot/CrawlStartExpert_p.html
index a05c05abe..e8a8db683 100644
--- a/htroot/CrawlStartExpert_p.html
+++ b/htroot/CrawlStartExpert_p.html
@@ -1,5 +1,5 @@
-
+
 YaCy '#[clientname]#': Crawl Start
 #%env/templates/metas.template%#
@@ -26,7 +26,7 @@
 You can define URLs as start points for Web page crawling and start crawling here.
 "Crawling" means that YaCy will download the given website, extract all links in it
 and then download the content behind these links. This is repeated as long as
 specified under "Crawling Depth".
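The paragraph above describes the crawl as a link walk bounded by the "Crawling Depth" setting. A minimal sketch of that idea, where fetchLinks() is a hypothetical stand-in for YaCy's loader and parser, not an actual API:

function crawl(startUrl, maxDepth, fetchLinks) {
    // breadth-first walk: expand links only while below maxDepth
    var seen = {};                              // pages already loaded
    var queue = [{url: startUrl, depth: 0}];
    while (queue.length > 0) {
        var next = queue.shift();
        if (seen[next.url]) continue;           // never load a known page twice
        seen[next.url] = true;
        var links = fetchLinks(next.url);       // download page, extract links
        if (next.depth < maxDepth) {
            for (var i = 0; i < links.length; i++) {
                queue.push({url: links[i], depth: next.depth + 1});
            }
        }
    }
    return seen;
}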

-
+ @@ -41,7 +41,7 @@ @@ -67,7 +67,7 @@
Attribute: - +

- empty + empty
@@ -93,7 +93,7 @@
no doubles
run this crawl once and never load any page that is already known; only the start-url may be loaded again.
-
re-load
+
re-load
run this crawl once, but treat urls that are known since
Start URL -
+
-
 empty
+
+
 empty
 Link-List of URL
@@ -53,8 +54,8 @@
-
- + +
run this crawl once
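The "no doubles" and "re-load" options above differ only in how a known URL is treated: never load it again, or load it again once its last load is older than the given age. A sketch of that decision (all names here are illustrative assumptions, not YaCy's data model):

function shouldLoad(lastLoadDate, mode, maxAgeMillis) {
    if (lastLoadDate === null) return true;     // unknown URL: always load
    if (mode === "nodoubles") return false;     // known URL: never load again
    if (mode === "reload") {                    // known URL: load only when stale
        return (Date.now() - lastLoadDate.getTime()) > maxAgeMillis;
    }
    return false;
}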
@@ -77,30 +78,32 @@
load all files in domain
load only files in a sub-path of given url -
- - - + + + +
-
+
not more than documents
-
allow query-strings (urls with a '?' in the path) +
+ allow query-strings (urls with a '?' in the path) + + + + + + + + + +
- - - - - - - - - -
+
@@ -115,7 +118,7 @@
  • High Speed Crawling

    A 'shallow crawl' which is not limited to a single host (or site) can extend the pages per minute (ppm) rate to unlimited documents per minute when the number of target hosts is high. This can be done using the Expert Crawl Start servlet.
  • -
  • Scheduler Steering

    The scheduler on crawls can be changed or removed using the API Steering. +
  • Scheduler Steering

    The scheduler on crawls can be changed or removed using the API Steering.
  •
 #%env/templates/footer.template%#

diff --git a/htroot/IndexImportOAIPMHList_p.html b/htroot/IndexImportOAIPMHList_p.html
index c067ab712..493376213 100644
--- a/htroot/IndexImportOAIPMHList_p.html
+++ b/htroot/IndexImportOAIPMHList_p.html
@@ -1,23 +1,19 @@
-
-
+
+
+
 YaCy '#[clientname]#': OAI-PMH source import list
 #%env/templates/metas.template%#
 #(refresh)#::#(/refresh)#
-
@@ -26,14 +22,15 @@
 #(source)#::

    List of #[num]# OAI-PMH Servers

    - +

    - + +
    -
    +
 #{table}#

diff --git a/htroot/IndexImportOAIPMHList_p.java b/htroot/IndexImportOAIPMHList_p.java
index 0d431d6c8..60d5dadb5 100644
--- a/htroot/IndexImportOAIPMHList_p.java
+++ b/htroot/IndexImportOAIPMHList_p.java
@@ -26,6 +26,7 @@ import java.util.ArrayList;
 import java.util.Set;
 
 import net.yacy.cora.protocol.RequestHeader;
+import net.yacy.document.parser.html.CharacterCoding;
 import net.yacy.document.importer.OAIListFriendsLoader;
 import net.yacy.document.importer.OAIPMHImporter;
@@ -51,8 +52,8 @@ public class IndexImportOAIPMHList_p {
         for (String root: oaiRoots) {
             prop.put("source_table_" + count + "_dark", (dark) ? "1" : "0");
             prop.put("source_table_" + count + "_count", count);
-            prop.put("source_table_" + count + "_source", root);
-            prop.put("source_table_" + count + "_loadurl", "" + root + "");
+            prop.put("source_table_" + count + "_source", CharacterCoding.unicode2html(root, true));
+            prop.put("source_table_" + count + "_loadurl", "" + CharacterCoding.unicode2html(root, true) + "");
             dark = !dark;
             count++;
         }
@@ -72,7 +73,7 @@ public class IndexImportOAIPMHList_p {
         for (OAIPMHImporter job: jobs) {
             prop.put("import_table_" + count + "_dark", (dark) ? "1" : "0");
             prop.put("import_table_" + count + "_thread", (job.isAlive()) ? "\"running\"" : "finished");
-            prop.put("import_table_" + count + "_source", job.source());
+            prop.putXML("import_table_" + count + "_source", job.source());
             prop.put("import_table_" + count + "_chunkCount", job.chunkCount());
             prop.put("import_table_" + count + "_recordsCount", job.count());
             prop.put("import_table_" + count + "_completeListSize", job.getCompleteListSize());

diff --git a/htroot/IndexImportOAIPMH_p.html b/htroot/IndexImportOAIPMH_p.html
index 713bc42ec..eca22d950 100644
--- a/htroot/IndexImportOAIPMH_p.html
+++ b/htroot/IndexImportOAIPMH_p.html
@@ -1,5 +1,6 @@
-
-
+
+
+
 YaCy '#[clientname]#': OAI-PMH Import
 #%env/templates/metas.template%#
@@ -39,13 +39,13 @@
 #(status)#::

    Import started!

    ::

    Bad input data: #[message]#

    #(/status)#
    -
 #%env/templates/footer.template%#

diff --git a/htroot/IndexImportWikimedia_p.html b/htroot/IndexImportWikimedia_p.html
index 781504625..28fc65848 100644
--- a/htroot/IndexImportWikimedia_p.html
+++ b/htroot/IndexImportWikimedia_p.html
@@ -52,7 +52,7 @@
  • When a surrogate file is finished with indexing, it is moved to /DATA/SURROGATES/out
  • You can recycle processed surrogate files by moving them from /DATA/SURROGATES/out to /DATA/SURROGATES/in
  • -

    +
    ::
    Import Process
diff --git a/htroot/Load_MediawikiWiki.html b/htroot/Load_MediawikiWiki.html
index 5b2860e4b..50a309958 100644
--- a/htroot/Load_MediawikiWiki.html
+++ b/htroot/Load_MediawikiWiki.html
@@ -20,11 +20,12 @@
 to this page to read the integration hints below.
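The integration described here ultimately just hands the wiki's search query over to a YaCy peer. As a sketch, a search box could forward to the peer's yacysearch page like this (the peer address is a placeholder; replace it with your own host and port, just as the hints below say for static IPs):

function yacySearch(query) {
    var peer = "http://localhost:8080";         // placeholder: your YaCy peer
    window.location.href = peer + "/yacysearch.html?query=" + encodeURIComponent(query);
}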

    - +
    URL of the wiki main page
    This is a crawl start point
    -
    +
    + @@ -38,7 +39,7 @@ - + @@ -48,8 +49,8 @@ - -
    + +
     
    @@ -62,7 +63,8 @@ To integrate a search window into a MediaWiki, you must insert some code into the wiki template. There are several templates that can be used for MediaWiki, but in this guide we consider that you are using the default template, 'MonoBook.php': -
      +

      +
      • open skins/MonoBook.php
      • find the line where the default search window is displayed, there are the following statements:
        <form name="searchform" action="<?php $this->text('searchaction') ?>" id="searchform">
        @@ -114,9 +116,8 @@
                 
      • Check all appearances of static IPs given in the code snippet and replace them with your own IP or host name
      • You may want to change the default text elements in the code snippet
      • To see all options for the search widget, look at the more generic description of search widgets at - the configuration for live search. -
      -

      + the configuration for live search. +
    #%env/templates/footer.template%#

diff --git a/htroot/Load_PHPBB3.html b/htroot/Load_PHPBB3.html
index adeac7f41..9af95a1cb 100644
--- a/htroot/Load_PHPBB3.html
+++ b/htroot/Load_PHPBB3.html
@@ -1,5 +1,5 @@
-
+
 YaCy '#[clientname]#': Configuration of a phpBB3 Search
 #%env/templates/metas.template%#
@@ -34,7 +34,7 @@
    URL of the phpBB3 forum main page
    This is a crawl start point
    -
    +
     
diff --git a/htroot/Load_RSS_p.html b/htroot/Load_RSS_p.html
index 9708aafba..fcf599954 100644
--- a/htroot/Load_RSS_p.html
+++ b/htroot/Load_RSS_p.html
@@ -1,17 +1,17 @@
-
+
 YaCy '#[clientname]#': Configuration of a RSS Search
 #%env/templates/metas.template%#
@@ -63,11 +63,11 @@
 #(showscheduledfeeds)#::
-
    - +
    +
    Source
    - + @@ -80,8 +80,8 @@ #{list}# - - + + @@ -93,18 +93,18 @@
    Title URL/Referrer Recording
    #[title]##[rss]#
    #[referrer]#
    #[title]##[rss]#
    #[referrer]#
    #[recording]# #[lastload]# #[nextload]#

    - - + +

    #(/showscheduledfeeds)# #(shownewfeeds)#:: -
    - +
    + - + @@ -112,24 +112,24 @@ #{list}# - - + + #{/list}#
    Title URL/Referrer Recording
    #[title]##[rss]#
    #[referrer]#
    #[title]##[rss]#
    #[referrer]#
    #[recording]#

    - - - + + +

    #(/shownewfeeds)# #(showitems)#:: -
    - +
    +
    Title
    #[title]#
    Author
    #[author]#
    @@ -141,7 +141,7 @@
    - + @@ -166,7 +166,7 @@

    - +

 #(/showitems)#

diff --git a/htroot/Load_RSS_p.java b/htroot/Load_RSS_p.java
index 0612e75c2..d339eacaa 100644
--- a/htroot/Load_RSS_p.java
+++ b/htroot/Load_RSS_p.java
@@ -199,9 +199,9 @@ public class Load_RSS_p {
             Date date_next_exec = r.get(WorkTables.TABLE_API_COL_DATE_NEXT_EXEC, (Date) null);
             prop.put("showscheduledfeeds_list_" + apic + "_pk", new String(row.getPK()));
             prop.put("showscheduledfeeds_list_" + apic + "_count", apic);
-            prop.put("showscheduledfeeds_list_" + apic + "_rss", messageurl);
-            prop.put("showscheduledfeeds_list_" + apic + "_title", row.get("title", ""));
-            prop.put("showscheduledfeeds_list_" + apic + "_referrer", referrer == null ? "" : referrer.toNormalform(true, false));
+            prop.putXML("showscheduledfeeds_list_" + apic + "_rss", messageurl);
+            prop.putXML("showscheduledfeeds_list_" + apic + "_title", row.get("title", ""));
+            prop.putXML("showscheduledfeeds_list_" + apic + "_referrer", referrer == null ? "#" : referrer.toNormalform(true, false));
             prop.put("showscheduledfeeds_list_" + apic + "_recording", DateFormat.getDateTimeInstance().format(row.get("recording_date", new Date())));
             prop.put("showscheduledfeeds_list_" + apic + "_lastload", DateFormat.getDateTimeInstance().format(row.get("last_load_date", new Date())));
             prop.put("showscheduledfeeds_list_" + apic + "_nextload", date_next_exec == null ? "" : DateFormat.getDateTimeInstance().format(date_next_exec));
@@ -213,9 +213,9 @@ public class Load_RSS_p {
             // this is a new entry
             prop.put("shownewfeeds_list_" + newc + "_pk", new String(row.getPK()));
             prop.put("shownewfeeds_list_" + newc + "_count", newc);
-            prop.put("shownewfeeds_list_" + newc + "_rss", messageurl);
-            prop.put("shownewfeeds_list_" + newc + "_title", row.get("title", ""));
-            prop.put("shownewfeeds_list_" + newc + "_referrer", referrer == null ? "" : referrer.toNormalform(true, false));
+            prop.putXML("shownewfeeds_list_" + newc + "_rss", messageurl);
+            prop.putXML("shownewfeeds_list_" + newc + "_title", row.get("title", ""));
+            prop.putXML("shownewfeeds_list_" + newc + "_referrer", referrer == null ? "" : referrer.toNormalform(true, false));
             prop.put("shownewfeeds_list_" + newc + "_recording", DateFormat.getDateTimeInstance().format(row.get("recording_date", new Date())));
             newc++;
         }

diff --git a/htroot/Table_API_p.html b/htroot/Table_API_p.html
index 391ce8a2d..c09ceb629 100644
--- a/htroot/Table_API_p.html
+++ b/htroot/Table_API_p.html
@@ -6,20 +6,15 @@
 #(/showtable)#
 #%env/templates/metas.template%#
-
@@ -36,33 +31,33 @@
 to a scheduler for a periodic execution.
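The prop.putXML() calls introduced above write feed titles, URLs and referrers in XML-escaped form, so that a bare & or < in the data can no longer break the XHTML output. A minimal sketch of that kind of escaping (an illustration, not YaCy's CharacterCoding implementation):

function escapeXml(text) {
    return text
        .replace(/&/g, "&amp;")    // must run first, or entities get double-escaped
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}

// e.g. escapeXml("http://example.net/rss?a=1&b=2")
// -> "http://example.net/rss?a=1&amp;b=2", which validates as XHTML 1.0 Strict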

    ::#(/inline)# #(showtable)#:: - -
    - + + +

    #(navigation)# :: - #(left)#::#(/left)# + #(left)#no previous page::previous page#(/left)# #[startRecord]#-#[to]# of #[of]# - #(right)#::#(/right)# - #(/navigation)# - - - - - + #(right)#no next page::next page#(/right)# + #(/navigation)# + + + + +
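The navigation row above renders a "#[startRecord]#-#[to]# of #[of]#" label plus previous/next links. The arithmetic behind such a pager, as a sketch (names mirror the template placeholders, not the servlet's actual variables):

function pageLabel(startRecord, pageSize, of) {
    var to = Math.min(startRecord + pageSize - 1, of);  // last record on this page
    return {
        label: startRecord + "-" + to + " of " + of,
        hasPrevious: startRecord > 1,   // otherwise "no previous page"
        hasNext: to < of                // otherwise "no next page"
    };
}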

    State Title URL
    - - - - - - - - - #(inline)#::#(/inline)# + + + + + + + + + #(inline)#::#(/inline)# #{list}# @@ -75,7 +70,7 @@
    TypeCommentCall
    Count
    Recording
    Date
    Last Exec
    Date
    Next Exec
    Date
    SchedulerURLTypeCommentCall
    Count
    Recording
    Date
    Last Exec
    Date
    Next Exec
    Date
    SchedulerURL
    #[dateNextExec]# #(scheduler)# - + - + :: -
    + #(inline)#::#(/inline)# @@ -115,7 +110,7 @@

    -
    +
 #(/showtable)#
 #(showexec)#::

diff --git a/htroot/Table_API_p.java b/htroot/Table_API_p.java
index 537f28b8c..e33927f4a 100644
--- a/htroot/Table_API_p.java
+++ b/htroot/Table_API_p.java
@@ -217,7 +217,7 @@ public class Table_API_p {
                 prop.put("showtable_list_" + count + "_repeatTime", time);
                 prop.put("showtable_list_" + count + "_type", row.get(WorkTables.TABLE_API_COL_TYPE));
                 prop.put("showtable_list_" + count + "_comment", row.get(WorkTables.TABLE_API_COL_COMMENT));
-                prop.put("showtable_list_" + count + "_inline_url", "http://" + sb.myPublicIP() + ":" + sb.getConfig("port", "8080") + new String(row.get(WorkTables.TABLE_API_COL_URL)));
+                prop.putHTML("showtable_list_" + count + "_inline_url", "http://" + sb.myPublicIP() + ":" + sb.getConfig("port", "8080") + new String(row.get(WorkTables.TABLE_API_COL_URL)));
 
                 if (time == 0) {
                     prop.put("showtable_list_" + count + "_scheduler", 0);

diff --git a/htroot/js/IndexCreate.js b/htroot/js/IndexCreate.js
index ab7a72333..a33cf505e 100644
--- a/htroot/js/IndexCreate.js
+++ b/htroot/js/IndexCreate.js
@@ -11,8 +11,8 @@ function handleResponse(){
 	if (response.getElementsByTagName("title")[0].firstChild!=null){
 		doctitle=response.getElementsByTagName("title")[0].firstChild.nodeValue;
 	}
-	// document.getElementById("title").innerHTML=doctitle;
-	document.Crawler.bookmarkTitle.value=doctitle
+	//document.getElementById("title").innerHTML=doctitle;
+	document.getElementById("bookmarkTitle").value=doctitle;
 
 	// determine if crawling is allowed by the robots.txt
 	docrobotsOK="";
@@ -28,14 +28,16 @@
 		img.setAttribute("src", "/env/grafics/ok.png");
 		img.setAttribute("width", "32px");
 		img.setAttribute("height", "32px");
+		img.setAttribute("alt", "robots.txt - OK");
 		robotsOKspan.appendChild(img);
 	} else if(docrobotsOK==0){
 		img=document.createElement("img");
 		img.setAttribute("src", "/env/grafics/bad.png");
 		img.setAttribute("width", "32px");
 		img.setAttribute("height", "32px");
+		img.setAttribute("alt", "robots.txt - Bad");
 		robotsOKspan.appendChild(img);
-		robotsOKspan.appendChild(img);
+		// robotsOKspan.appendChild(img);
 	} else {
 		robotsOKspan.appendChild(document.createTextNode(""));
 		document.getElementById("robotsOK").innerHTML="";
@@ -58,7 +60,7 @@ function handleResponse(){
 	if (sitelist) document.getElementById("sitelist").disabled=false;
 
 	// clear the ajax image
-	document.getElementsByName("ajax")[0].setAttribute("src", AJAX_OFF);
+	document.getElementById("ajax").setAttribute("src", AJAX_OFF);
 	}
 }
@@ -69,8 +71,8 @@ function changed() {
 
 function loadInfos() {
 	// displaying ajax image
-	document.getElementsByName("ajax")[0].setAttribute("src",AJAX_ON);
+	document.getElementById("ajax").setAttribute("src",AJAX_ON);
 
-	url=document.getElementsByName("crawlingURL")[0].value;
+	url=document.getElementById("crawlingURL").value;
 	sndReq('/api/util/getpageinfo_p.xml?actions=title,robots&url='+url);
 }

diff --git a/htroot/terminal_p.html b/htroot/terminal_p.html
index 4f4353a46..b0e226777 100755
--- a/htroot/terminal_p.html
+++ b/htroot/terminal_p.html
@@ -135,7 +135,7 @@ function init() {
 The yacy Network
-
+

diff --git a/locales/de.lng b/locales/de.lng
index 7cffea6e3..18adc75d9 100644
--- a/locales/de.lng
+++ b/locales/de.lng
@@ -815,8 +815,8 @@ Crawl Thread==Crawl Art
 Must Match==Muss zutreffen
 Must Not Match==Muss nicht zutreffen
 MaxAge==Max. Alter
-Auto Filter Depth==Auto Filter Tiefe
-Auto Filter Content==Auto Inhalts Filter
+#Auto Filter Depth==Auto Filter Tiefe
+#Auto Filter Content==Auto Inhalts Filter
 Max Page Per Domain==Max. Seiten pro Domain
 Accept==Akzeptiere
 Fill Proxy Cache==Fülle Proxy Cache
@@ -969,12 +969,12 @@ If you don't know what this means, please leave this field empty.==Wenn Sie nich
 #Re-crawl known URLs:==Re-crawl bekannter URLs:
 Use:==Benutzen:
 #It depends on the age of the last crawl if this is done or not: if the last crawl is older than the given==Es hängt vom Alter des letzten Crawls ab, ob dies getan oder nicht getan wird: wenn der letzte Crawl älter als das angegebene
-Auto-Dom-Filter:==Auto-Dom-Filter:
-This option will automatically create a domain-filter which limits the crawl on domains the crawler==Diese Option erzeugt automatisch einen Domain-Filter der den Crawl auf die Domains beschränkt,
-will find on the given depth. You can use this option i.e. to crawl a page with bookmarks while==die auf der angegebenen Tiefe gefunden werden. Diese Option kann man beispielsweise benutzen, um eine Seite mit Bookmarks zu crawlen
-restricting the crawl on only those domains that appear on the bookmark-page. The adequate depth==und dann den folgenden Crawl automatisch auf die Domains zu beschränken, die in der Bookmarkliste vorkamen. Die einzustellende Tiefe für
-for this example would be 1.==dieses Beispiel wäre 1.
-The default value 0 gives no restrictions.==Der Vorgabewert 0 bedeutet, dass nichts eingeschränkt wird.
+#Auto-Dom-Filter:==Auto-Dom-Filter:
+#This option will automatically create a domain-filter which limits the crawl on domains the crawler==Diese Option erzeugt automatisch einen Domain-Filter der den Crawl auf die Domains beschränkt,
+#will find on the given depth. You can use this option i.e. to crawl a page with bookmarks while==die auf der angegebenen Tiefe gefunden werden. Diese Option kann man beispielsweise benutzen, um eine Seite mit Bookmarks zu crawlen
+#restricting the crawl on only those domains that appear on the bookmark-page. The adequate depth==und dann den folgenden Crawl automatisch auf die Domains zu beschränken, die in der Bookmarkliste vorkamen. Die einzustellende Tiefe für
+#for this example would be 1.==dieses Beispiel wäre 1.
+#The default value 0 gives no restrictions.==Der Vorgabewert 0 bedeutet, dass nichts eingeschränkt wird.
 Maximum Pages per Domain:==Maximale Seiten pro Domain:
 Page-Count==Seitenanzahl
 You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==Sie können die maximale Anzahl an Seiten, die von einer einzelnen Domain gefunden und indexiert werden, mit dieser Option begrenzen.
@@ -1019,7 +1019,6 @@ Exclude static Stop-Words==Statische Stop-Words ausschließen
 This can be useful to circumvent that extremely common words are added to the database, i.e. \"the\", \"he\", \"she\", \"it\"... To exclude all words given in the file yacy.stopwords from indexing,==Dies ist sinnvoll, um zu verhindern, dass extrem häufig vorkommende Wörter wie z.B. "der", "die", "das", "und", "er", "sie" etc in die Datenbank aufgenommen werden. Um alle Wörter von der Indexierung auszuschließen, die in der Datei yacy.stopwords enthalten sind,
 check this box.==aktivieren Sie diese Box.
"Start New Crawl"=="Neuen Crawl starten" -Depth:==Tiefe: #----------------------------- #File: CrawlStartIntranet_p.html @@ -1723,7 +1722,7 @@ Available after successful loading of rss feed in preview==Verfügbar nach dem e >List of Scheduled RSS Feed Load Targets<==>Liste aller geplanten RSS Feed Ziele< >Title<==>Titel< #>URL/Referrer<==>URL/Referrer< -#>Recording<==>Recording< +>Recording<==>Eintrag< >Last Load<==>Zuletzt Geladen< >Next Load<==>Nächster Ladevorgang< >Last Count<==>Letzter Zähler< @@ -1735,6 +1734,9 @@ Available after successful loading of rss feed in preview==Verfügbar nach dem e "Remove Selected Feeds from Feed List"=="Entferne ausgewählte Feeds aus der Feed Liste" "Remove All Feeds from Feed List"=="Entferne alle Feeds aus der Feed Liste" "Add Selected Feeds to Scheduler"=="Füge ausgewählte Feeds zur geplanten Liste hinzu" +>new<==>Neu< +>enqueued<==>Geplant< +>indexed<==>Indexiert< >RSS Feed of==>RSS Feed von >Author<==>Autor< >Description<==>Beschreibung< @@ -2780,6 +2782,11 @@ to change the configuration or to request crawl actions.==um Konfigurationen zu These recorded actions can be used to repeat specific actions and to send them==Diese aufgezeichneten Aktionen können dazu verwendet werden, bestimmte Aktionen wiederholt auszuführen und um sie to a scheduler for a periodic execution.==einem Scheduler für periodische Ausführung zu übergeben. >Recorded Actions<==>Aufgezeichnete Aktionen< +"next page"=="Nächste Seite" +"previous page"=="Vorherige Seite" +"next page"=="Keine nächste Seite" +"previous page"=="Keine vorherige Seite" + of \#\[of\]\#== von #[of]# >Date==>Datum >Type==>Typ >Comment==>Kommentar @@ -2799,6 +2806,7 @@ Next Exec==Nächste Ausführung >minutes<==>Minuten< >hours<==>Stunden< >days<==>Tage< +Scheduled actions are executed after the next execution date has arrived within a time frame of \#\[tfminutes\]\# minutes.==Geplante Aktionen werden innerhalb eines #[tfminutes]# Minuten Zeitfensters ausgeführt wenn der nächste Ausführungszeitpunkt erreicht wurde. #----------------------------- #File: Table_RobotsTxt_p.html
    - - + + #(/scheduler)# #[url]#
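The IndexCreate.js hunk above moves every lookup from getElementsByName(...)[0] to getElementById(), because XHTML 1.0 Strict does not allow a name attribute on elements such as img. The pattern, with a hypothetical element id:

// before: relied on name="ajax", which is invalid on <img> in XHTML 1.0 Strict
//   document.getElementsByName("ajax")[0].setAttribute("src", AJAX_OFF);
// after: relies on id="ajax" instead
function setAjaxIndicator(src) {
    var img = document.getElementById("ajax");  // the id is hypothetical here
    if (img) img.setAttribute("src", src);
}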