de.lng: Added a missing translation for 'Show Content' and changed an existing line
--> Index Administration should now validate correctly as XHTML 1.0 Strict / Transitional
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7255 6c8d7289-2bf4-0310-a012-ef5d649a1542
- added a suggest servlet following the OpenSearch and Firefox suggestion standards (see the sketch after this list)
- integrated the suggest servlet into the OpenSearch description file
- integrated an autocomplete plugin for jQuery
- added an autocomplete extension to the YaCy search windows, showing suggested queries while typing
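For reference, the Firefox/OpenSearch suggestion protocol expects a JSON array of the form ["query", ["completion", ...]], served with the content type application/x-suggestions+json, and the servlet is announced in the OpenSearch description file through a Url element of that type. A minimal sketch of such a servlet in Java; the lookupCompletions helper is a hypothetical stand-in for the real index lookup, not YaCy's actual class:

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Arrays;
import java.util.List;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SuggestServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String query = req.getParameter("query") == null ? "" : req.getParameter("query");
        List<String> completions = lookupCompletions(query); // hypothetical index lookup

        // Firefox expects a JSON array: ["<query>", ["<suggestion>", ...]]
        resp.setContentType("application/x-suggestions+json");
        StringBuilder json = new StringBuilder("[\"").append(escape(query)).append("\",[");
        for (int i = 0; i < completions.size(); i++) {
            if (i > 0) json.append(',');
            json.append('"').append(escape(completions.get(i))).append('"');
        }
        json.append("]]");
        PrintWriter out = resp.getWriter();
        out.print(json.toString());
        out.flush();
    }

    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    private List<String> lookupCompletions(String q) {
        // placeholder: the real servlet would ask the word index for completions
        return Arrays.asList(q + " download", q + " tutorial");
    }
}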
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7241 6c8d7289-2bf4-0310-a012-ef5d649a1542
- faster generation of the index abstract compression during remote search
- less synchronization in IO record reading (see the sketch after this list)
- request index abstract generation only when necessary, and a faster time-out in remote search
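One way to get less synchronization in record reading is to hold a lock only around the non-atomic seek-and-read pair on the shared file handle, instead of synchronizing the whole method. A minimal sketch of that idea, not the actual Records code:

import java.io.IOException;
import java.io.RandomAccessFile;

public class RecordReader {
    private final RandomAccessFile file;
    private final int recordSize;

    public RecordReader(RandomAccessFile file, int recordSize) {
        this.file = file;
        this.recordSize = recordSize;
    }

    public byte[] read(long index) throws IOException {
        long offset = index * (long) recordSize; // computed outside the lock
        byte[] record = new byte[recordSize];    // allocated outside the lock
        synchronized (file) { // only the seek+read pair must be atomic
            file.seek(offset);
            file.readFully(record);
        }
        return record;
    }
}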
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7239 6c8d7289-2bf4-0310-a012-ef5d649a1542
- removed the read cache from the Records data structure because it produced no cache hits during search operations
- moved the old read-cache class to CachedRecords; the new Records class no longer has the cache, and a code review confirmed that the data structures and synchronization are clean
- removed unnecessary synchronization from the Table class during get()
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7237 6c8d7289-2bf4-0310-a012-ef5d649a1542
- added a generic skin whose actual color assignments are filled in by a servlet (see the sketch after this list)
- enabled css servlets
- added a generic color scheme to the configuration file
- added configuration inputs to the Customization/Appearance servlet
- added a jQuery color picker widget
- attached the color picker widget to the input fields of the generic color definitions
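Filling a generic skin with actual color assignments can be as simple as replacing placeholders in a CSS template with the configured values. A minimal sketch; the placeholder syntax and configuration keys are made up for illustration:

import java.util.HashMap;
import java.util.Map;

public class SkinFiller {
    // replaces placeholders such as #[color_background]# with configured colors
    public static String fill(String cssTemplate, Map<String, String> colors) {
        String css = cssTemplate;
        for (Map.Entry<String, String> e : colors.entrySet()) {
            css = css.replace("#[" + e.getKey() + "]#", e.getValue());
        }
        return css;
    }

    public static void main(String[] args) {
        Map<String, String> colors = new HashMap<String, String>();
        colors.put("color_background", "#ffffff");
        colors.put("color_text", "#222222");
        String template = "body { background: #[color_background]#; color: #[color_text]#; }";
        System.out.println(fill(template, colors));
        // prints: body { background: #ffffff; color: #222222; }
    }
}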
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7235 6c8d7289-2bf4-0310-a012-ef5d649a1542
this makes YaCy search results VERY fast for all verify=false search cases
and it also enhances search speed for all other snippet-fetch cases.
With this change my peer performed 100 Queries Per Second (!!!) while running 10 queries simultaneously (!!!)
in an intranet index of 20000 URLs on my 16-core Mac
Check this yourself by doing:
cd bin
./searchtestmulti.sh
after finishing the run, divide 1000 by the reported time per query in milliseconds (which gives the qps for one thread)
and then multiply by 10 (because 10 search threads have been started).
For example, 100 ms per query means 1000 / 100 = 10 qps per thread, and 10 threads then give the 100 qps quoted above.
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7231 6c8d7289-2bf4-0310-a012-ef5d649a1542
*) more beautiful and easier-to-understand code (IMO)
*) added display= parameter to a lot of links in Wiki.html
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7226 6c8d7289-2bf4-0310-a012-ef5d649a1542
terminal_p.html: Put back the old ID which was really easy to find
IndexCreate.js: Because XHTML 1.0 Strict does not allow the name attribute on some elements, rewrote most element access functions to use getElementById
Table_API_p.html and all other html pages: some XHTML 1.0 Strict fixes, changed the checkAll JavaScript, marked the first row with checkboxes as unsortable where applicable
Table_API_p.java and all other java pages: URL-encoded lines with possible ampersands (& -> &amp;) so the XHTML 1.0 Strict source code validates (see the sketch below)
--> All Index Create pages should validate now. Hope I did not break anything else (too much :-)
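The ampersand part boils down to escaping bare & characters as &amp; while leaving already-escaped entities alone. A small sketch of such a helper, hypothetical rather than the actual Table_API_p code:

public class XhtmlEscape {
    // escapes bare ampersands; the lookahead skips entities like &amp; or &#38;
    public static String escapeAmpersands(String s) {
        return s.replaceAll("&(?![a-zA-Z]+;|#\\d+;)", "&amp;");
    }

    public static void main(String[] args) {
        System.out.println(escapeAmpersands("a=1&b=2"));            // a=1&amp;b=2
        System.out.println(escapeAmpersands("already &amp; fine")); // unchanged
    }
}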
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7225 6c8d7289-2bf4-0310-a012-ef5d649a1542
- better crawl start for file paths and smb paths
- added a time-out wrapper for dns resolving and reverse resolving to prevent blocking (see the sketch after this list)
- fixed the intranet scanner result list checkboxes
- prevented htcache usage for file and smb crawling (not necessary, the documents are available locally)
- fixed the rss feed loader
- fixed the sitemap loader, which had not been restricted to single files (crawl depth must be zero)
- crawl result lists are now cleared when a network switch is done
- higher maximum file size for the crawler
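A time-out wrapper for dns resolving can be built by running the blocking InetAddress call on an executor and bounding the wait with Future.get. A minimal sketch of that approach, not necessarily how YaCy's resolver is implemented:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class DnsTimeout {
    private static final ExecutorService pool = Executors.newCachedThreadPool();

    // resolves a host name but gives up after timeoutMs milliseconds
    public static InetAddress resolve(final String host, long timeoutMs) throws UnknownHostException {
        Future<InetAddress> f = pool.submit(() -> InetAddress.getByName(host));
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the lookup thread
            throw new UnknownHostException(host + " (time-out after " + timeoutMs + " ms)");
        } catch (Exception e) {
            throw new UnknownHostException(host + " (" + e.getMessage() + ")");
        }
    }
}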
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7214 6c8d7289-2bf4-0310-a012-ef5d649a1542
- nobody understands the auto-dom filter without a lengthy introduction to how a crawler works
- nobody ever used the auto-dom filter with a crawl depth other than 1
- the auto-dom filter was buggy, since the filter did not survive a restart, after which the search index contained waste
- the function of the auto-dom filter was in fact just to load a link list from the given start url and then start separate crawls for all of these urls, each restricted to its own domain (sketched after this list)
- the new Site Link-List option shows the target urls in real time while the start url is being entered (like the robots check) and gives transparent feedback about what it will do before it is used
- the new option also fits into the easy site-crawl start menu
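The described behavior, load a link list from the start url and launch one domain-restricted crawl per target, can be sketched roughly as follows; the regex-based link extraction and the crawl start are illustrative stand-ins for YaCy's parser and crawler API:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SiteLinkList {
    private static final Pattern HREF = Pattern.compile("href=[\"']?(https?://[^\"'\\s>]+)");

    // fetches the start page and collects all absolute href targets
    public static Set<String> extractLinks(String startUrl) throws Exception {
        StringBuilder html = new StringBuilder();
        BufferedReader in = new BufferedReader(new InputStreamReader(new URL(startUrl).openStream()));
        for (String line; (line = in.readLine()) != null; ) html.append(line).append('\n');
        in.close();
        Set<String> links = new LinkedHashSet<String>();
        Matcher m = HREF.matcher(html);
        while (m.find()) links.add(m.group(1));
        return links;
    }

    public static void main(String[] args) throws Exception {
        for (String link : extractLinks(args[0])) {
            String domain = new URL(link).getHost();
            // hypothetical hook: start a crawl of 'link' restricted to 'domain'
            System.out.println("would crawl " + link + " restricted to domain " + domain);
        }
    }
}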
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7213 6c8d7289-2bf4-0310-a012-ef5d649a1542
terminal_p.html: Set the new link for starting a crawl to CrawlStartSite_p.html and replaced the old embed object of the Amung.us Flash widget with their new JS snippet, which takes care of adding the object correctly
de.lng: Moved the translations for the JS part from yacyinteractive.html to the yacyinteractive.js part
--> Terminal page is now valid XHTML 1.0 Transitional
git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@7211 6c8d7289-2bf4-0310-a012-ef5d649a1542