The settings in the default configuration file are historic. Many
machines have far more CPU cores today, so auto-scaling these values
to the actual hardware is the better approach.
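As an illustration of the idea only (this is not YaCy's actual code;
the class and variable names are invented), a worker pool can be
sized from the core count that the JVM reports instead of a fixed,
historic number:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AutoScaleExample {
    public static void main(String[] args) {
        // ask the JVM how many CPU cores this machine provides
        int cores = Runtime.getRuntime().availableProcessors();
        // keep a lower bound so small machines still get some parallelism
        int workers = Math.max(2, cores);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        System.out.println("using " + workers + " worker threads");
        pool.shutdown();
    }
}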
Nix flakes allow developers using NixOS or the Nix package manager
to quickly set up a working development environment with the correct
dependencies (typically entered with nix develop).
This is a major step because Solr removed support for embedded Solr
instances in 9.0, and we want to keep that feature because YaCy ships
with an embedded Solr. To make this migration possible, it was
necessary to add parts of the Solr code into YaCy. With Solr 9.1 even
more parts that are required for embedded operation were removed, so
we cannot migrate any further without big changes.
If you are running a YaCy instance with Solr 8.x, the migration
should happen automatically. If not, you first need to upgrade to
YaCy version 1.93 (which still uses Solr 8.x) so that your index is
migrated to the Solr 8 data format.
RAG (Retrieval Augmented Generation) is a method to combine a search
engine with an LLM (Large Language Model). When a new prompt is
submitted, the search engine injects knowledge from a search into the
prompt context. This is done using a reverse proxy between the chat
client and the LLM; a minimal sketch of this injection step follows
at the end of this note. In this case, we used the following
software:
LLM Backend - Ollama:
https://github.com/ollama/ollama
Install Ollama and then load the two required LLM models
with the following commands:
ollama pull phi3:3.8b
ollama pull llama3:8b
Chat Client - susi_chat:
https://github.com/susiai/susi_chat
Just clone the repository and then open the file
susi_chat/chat_terminal/index.html
in your browser. This displays a chat terminal.
In this terminal, run the following command:
host http://localhost:8090
This sets the LLM backend to your YaCy peer.
Then start YaCy. It will provide the LLM endpoint to the client
while using Ollama in the backend. So far, it injects search results
only from the local Solr index, not from the p2p network.
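To make the injection step concrete, here is a standalone sketch of
the idea (this is not YaCy's implementation; the class name, the
snippet list and the prompt wording are invented for illustration).
Retrieved search snippets are prepended to the user question and the
combined prompt is sent to Ollama's chat API on its default port
11434, using the llama3:8b model pulled above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class RagSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical search snippets; in the real setup these would
        // come from the local YaCy/Solr index for the user's question.
        List<String> snippets = List.of(
            "YaCy is a decentralized peer-to-peer search engine.",
            "Each YaCy peer maintains its own Solr index.");
        String question = "What is YaCy?";

        // Inject the retrieved knowledge into the prompt before it
        // reaches the LLM.
        StringBuilder prompt = new StringBuilder();
        prompt.append("Use the following search results to answer the question.\n\n");
        for (String s : snippets) prompt.append("- ").append(s).append('\n');
        prompt.append("\nQuestion: ").append(question);

        // Ollama's chat API on its default port; non-streaming reply.
        String body = "{\"model\":\"llama3:8b\",\"stream\":false,"
            + "\"messages\":[{\"role\":\"user\",\"content\":"
            + jsonString(prompt.toString()) + "}]}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:11434/api/chat"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }

    // Very small JSON string escaper, sufficient for this example.
    private static String jsonString(String s) {
        return "\"" + s.replace("\\", "\\\\")
                       .replace("\"", "\\\"")
                       .replace("\n", "\\n") + "\"";
    }
}

In the real setup, YaCy plays the role of this intermediary and fills
the snippet list from its local Solr index instead of the hard-coded
strings above.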