<li>Added exclusion search (prefix a word with '-' to exclude it from the search results)</li>
<li>Added extraction of the sitemap URL from robots.txt, which can be used for crawl starts (see the sketch after this list)</li>
<li>Added a network configuration menu for new cluster configuration functions: a set of peers may now operate as an isle within the YaCy network, without exchanging index data across the border of the isle. Peers within the cluster can trigger internal remote crawls and search only within their own cluster.</li>
<li>Added a PostScript parser</li>
</ul>
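<p>The following is a minimal sketch of the sitemap extraction idea: it scans a robots.txt for the standard 'Sitemap:' directive and prints the referenced sitemap URLs, which could then be offered as crawl start points. The class name and the example host are hypothetical; this is not YaCy's actual parser code.</p>
<pre>
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Illustrative sketch only, not YaCy's actual parser: scans a robots.txt
// for 'Sitemap:' directives, which can then be used as crawl start points.
public class SitemapExtractor {

    public static void printSitemaps(String robotsTxtUrl) throws Exception {
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(robotsTxtUrl).openStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            // the 'Sitemap:' directive is case-insensitive and may appear anywhere in the file
            if (line.regionMatches(true, 0, "Sitemap:", 0, 8)) {
                System.out.println(line.substring(8).trim());
            }
        }
        reader.close();
    }

    public static void main(String[] args) throws Exception {
        printSitemaps("http://example.org/robots.txt"); // hypothetical host
    }
}
</pre>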
<li>Interface Enhancements</li>
<ul>
<li>Redesigned the status page; it now also shows hints and warnings</li>
<li>Better layout for image search results</li>
<li>The peer profile can now be displayed as a vCard, e.g. http://localhost:8080/ViewProfile.vcf?hash=localhash</li>
</ul>
<li>Performance Enhancements</li>
<ul>
<li>Added an option to configure a path to a secondary index location.
This can be used to store a fragment of the index on another physical device,
splitting I/O load and enhancing access speed. The index is split in such a way
that the LURLs are stored at the secondary location and the RWIs at the primary
location.</li>
<li>Optimized memory allocation when accessing the web index (memory throughput is now half of what it was before)</li>
<li>Fixed bugs in the database engine that corrupted data when entries were removed</li>
<li>Higher crawling speed is now possible thanks to better RAM cache flush methods</li>
<li>The crawl balancer now has a security function which prevents remote web servers from being accessed more than twice per second. When crawling a single domain, this restricts the crawling speed to at most 120 pages per minute (a minimal sketch of such per-host throttling follows this list)</li>
<li>The crawl balancer chooses better URLs. Newly added URLs are now prevented from being hidden by masses of links generated by the crawler. As a result, the security function described above is not needed in most cases.</li>
<li>Added a crawling speed button on the crawling monitor page.</li>
<li>Crawl targets are informed about the YaCy bot: a link to http://yacy.net/yacy/bot.html is attached to each crawl request; the page explains YaCy and states that YaCy respects robots.txt</li>
</ul>
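<p>The per-host throttling described in the crawl balancer item above can be sketched as follows: at least 500 ms must pass between two accesses to the same host, which caps single-domain crawling at 2 requests per second, i.e. 120 pages per minute. The class below is an illustrative sketch under that assumption, not YaCy's actual balancer code.</p>
<pre>
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of per-host throttling, not YaCy's balancer code:
// enforce at least 500 ms between two accesses to the same host, which caps
// single-domain crawling at 2 requests per second, i.e. 120 pages per minute.
public class HostThrottle {

    private static final long MIN_DELAY_MILLIS = 500;
    private final Map lastAccess = new HashMap(); // host name -> Long (time of last access)

    // Sleep if necessary so that two accesses to the same host are at least
    // MIN_DELAY_MILLIS apart, then record the access time.
    public synchronized void waitForHost(String host) throws InterruptedException {
        Long last = (Long) lastAccess.get(host);
        if (last != null) {
            long wait = MIN_DELAY_MILLIS - (System.currentTimeMillis() - last.longValue());
            if (wait > 0) {
                Thread.sleep(wait);
            }
        }
        lastAccess.put(host, Long.valueOf(System.currentTimeMillis()));
    }
}
</pre>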
<li>Better Monitoring</li>
<ul>
<li>New page SearchStatistics_p.html shows local and remote search requests; remote requests are anonymized</li>
<li>Added network-wide QPM (queries per minute) computation to show how much the network is used for web search. The statistics are not reported by searching peers but by searched peers; the accumulation therefore preserves the privacy of the searcher (see the sketch after this list)</li>
<li>New page LogStatistics_p.html which shows an evaluation of entries from the log.</li>
<li>New page BlacklistCleaner_p.html to clean up wrong blacklist entries. The page allows categorization of blacklist error cases, correction of errors, and optional deletion of blacklist entries.</li>
<li>Added RSS feed for YaCyNews</li>
</ul>
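<p>The QPM computation mentioned above can be illustrated with a small sketch: a searched peer records the time of each incoming remote search request and reports how many requests arrived within the last 60 seconds. The class below is a hypothetical illustration, not YaCy's actual implementation; since only the counting peer keeps the data, nothing about the searcher needs to be stored.</p>
<pre>
import java.util.LinkedList;

// Illustrative sketch of a queries-per-minute (QPM) counter as a searched
// peer might keep it; not YaCy's actual implementation.
public class QueryFrequencyCounter {

    private final LinkedList requestTimes = new LinkedList(); // Long timestamps of requests

    // called once for each incoming remote search request
    public synchronized void countRequest() {
        requestTimes.add(Long.valueOf(System.currentTimeMillis()));
    }

    // number of requests seen within the last 60 seconds
    public synchronized int queriesPerMinute() {
        long cutoff = System.currentTimeMillis() - 60000L;
        while (requestTimes.size() > 0) {
            Long oldest = (Long) requestTimes.getFirst();
            if (cutoff - oldest.longValue() > 0) {
                requestTimes.removeFirst(); // older than one minute, discard
            } else {
                break;
            }
        }
        return requestTimes.size();
    }
}
</pre>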
<li>Enhanced User Interface</li>
<ul>
<li>Added a robots.txt configuration menu to allow or deny external crawlers access to the YaCy user interface</li>
<li>New wiki-parser</li>
<li>Blog entries may now have user-comments</li>
<li>The network list page now provides links to the users blog pages</li>
<li>The menu items have been rearranged</li>
</ul>
<li>Less Memory Usage and Better Memory Management</li>
<ul>
<li>All caches (node cache, object cache) now have enhanced self-organization and don't need fixed-size assignments</li>
<li>Memory protection by disallowing collection arrays beyond kca-7. Collections larger than those are written to 'common' files.</li>
<li>The network picture uses less memory</li>
</ul>
<li>Bugfixes: a very large number of bugs were fixed.</li>
<li>Added search pages for Images, Audio, Video and Application search.</li>
<li>Added media link presentation during snippet fetch; the Image Search presents search results as image thumbnails.</li>
<li>Better recognition of search hits for text snippet generation.</li>
<li>Media search results are indexed again after remote search results are collected; only media links are used to update the index.</li>
</ul>
<li>Better Result Ranking</li>
<ul>
<li>New ranking parameters and appearance attributes are now considered.</li>
<li>Faster ranking; more references can be ranked and sorted within given search time.</li>
<li>Ranking Parameters can be handed over to remote peers and are applied there.</li>
<li>Adapted the Detailed Search to the new ranking parameters.</li>
<li>Coefficients from detailed search can be set as the default ranking for the search page; this replaces the old ranking alternatives (an illustrative sketch follows this list).</li>
</ul>
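<p>The general idea behind the ranking coefficients can be sketched as a weighted score: each appearance attribute of a search hit is multiplied by a configurable coefficient, and the sum orders the results. All attribute and coefficient names below are hypothetical; this is not YaCy's actual ranking formula.</p>
<pre>
// Illustrative sketch of the general idea behind ranking coefficients, not
// YaCy's actual ranking formula; all attribute and coefficient names are hypothetical.
public class RankingSketch {

    // coefficients as they might be configured on the detailed search page
    private final int coeffWordDistance;      // penalty weight for distance between search words
    private final int coeffAppearanceInTitle; // bonus weight if a search word appears in the title
    private final int coeffUrlLength;         // penalty weight for long URLs

    public RankingSketch(int coeffWordDistance, int coeffAppearanceInTitle, int coeffUrlLength) {
        this.coeffWordDistance = coeffWordDistance;
        this.coeffAppearanceInTitle = coeffAppearanceInTitle;
        this.coeffUrlLength = coeffUrlLength;
    }

    // a higher score ranks the result further up; the attribute values would be
    // taken from the index entry of a search hit
    public long score(int wordDistance, boolean appearsInTitle, int urlLength) {
        long score = 0;
        score -= (long) coeffWordDistance * wordDistance; // close words rank higher
        if (appearsInTitle) {
            score += coeffAppearanceInTitle;              // title hits rank higher
        }
        score -= (long) coeffUrlLength * urlLength;       // short URLs rank higher
        return score;
    }
}
</pre>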
<li>Better Crawl Monitoring</li>
<ul>
<li>After a crawl start is initialized, the Crawler Monitor is shown.</li>
<li>The Crawl Monitor now shows all queue elements in one table.</li>
<li>Monitoring of index size.</li>
<li>The Crawl Profiles are shown; crawls can be interrupted within the profile table.</li>
<li>A crawl may now distinguish between text indexing and media link indexing.</li>
</ul>
<li>Migration to new Database Structure</li>
<ul>
<li>The new Collection Database is now the only database structure that can be used; Assortments are switched off.</li>
<li>Added functions to migrate Assortment databases and WORDS databases to Collection database.</li>
<li>Removed all methods to write Assortment data structures.</li>
<li>Migrated DHT position computation to base64-decoded values; this changes the DHT structure slightly and closes the gaps in the old DHT structure.</li>