#context = ModelContext()
#context.set_default()
#store = DocumentStore()
Indexing
For testing, I would start exploring by having a document I want to be able to retrieve information from.
My naive implementation would be an index that stores the embeddings and maps each one to its node index. Let's try that.
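The naive index described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the class name `NaiveEmbeddingIndex` and its methods are assumptions, and cosine similarity is used as a placeholder scoring strategy.

```python
import numpy as np

class NaiveEmbeddingIndex:
    """Hypothetical sketch: store embeddings alongside their node indices."""
    def __init__(self):
        self.embeddings = []  # list of 1-D vectors, normalized on insert
        self.node_ids = []    # parallel list of node indices

    def add(self, node_id, embedding):
        vec = np.asarray(embedding, dtype=float)
        self.embeddings.append(vec / np.linalg.norm(vec))
        self.node_ids.append(node_id)

    def query(self, query_embedding, top_k=3):
        q = np.asarray(query_embedding, dtype=float)
        q = q / np.linalg.norm(q)
        # Since all vectors are unit length, a dot product is cosine similarity.
        sims = np.stack(self.embeddings) @ q
        best = np.argsort(sims)[::-1][:top_k]
        return [(self.node_ids[i], float(sims[i])) for i in best]
```

In practice the embeddings would come from a sentence-transformers model rather than being hand-written vectors.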
Todo:
- Try out different scoring strategies that weight other types of signals, like metadata similarity or options that have been picked by an LLM.
- Docker container to rapidly deploy agents.
- Support custom models beyond sentence-transformers models.
#Won't be used for now, but it serves as a dependency-injection point for the index, to try different retrieval strategies and combine them.
#Will separate the retrieval strategies in the future.
from abc import ABC, abstractmethod

class Retriever(ABC):
    @abstractmethod
    def retrieve(self, query_embedding, embeddings, top_k):
        pass
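To illustrate how a concrete strategy could be injected through this interface, here is a self-contained sketch (it redeclares the ABC so it runs standalone). The `CosineRetriever` name and the array shapes are assumptions, not part of the original design.

```python
from abc import ABC, abstractmethod
import numpy as np

class Retriever(ABC):
    @abstractmethod
    def retrieve(self, query_embedding, embeddings, top_k):
        pass

class CosineRetriever(Retriever):
    """Hypothetical strategy: rank nodes by cosine similarity."""
    def retrieve(self, query_embedding, embeddings, top_k):
        q = np.asarray(query_embedding, dtype=float)
        m = np.asarray(embeddings, dtype=float)  # shape (n_nodes, dim)
        sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q))
        # Indices of the top_k most similar rows, best first.
        return np.argsort(sims)[::-1][:top_k]
```

Combining strategies would then just mean merging the ranked indices returned by several `Retriever` instances.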
VectorNodesIndex
VectorNodesIndex (context=None)
The embeddings stored in here are normalized, which has to be taken into account when doing operations with the vectors.
| | Type | Default | Details |
|---|---|---|---|
| context | NoneType | None | May not be needed in postgres. |
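The practical consequence of the normalization mentioned above: for unit vectors, a plain dot product already equals cosine similarity, so no extra division by norms is needed at query time. A quick check (the vectors here are arbitrary examples):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

# Normalize the way the index does on insert.
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot_of_normalized = a_n @ b_n

# For unit vectors, the dot product IS the cosine similarity.
assert np.isclose(cosine, dot_of_normalized)
assert np.isclose(np.linalg.norm(a_n), 1.0)
```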