Harnessing the Memory Power of the Camelids
Updated Oct 19, 2023 - Python
🗲 A high-performance on-disk dictionary.
Semantic product search on Databricks
Testing LangChain with llama.cpp document embeddings.
The AI Assistant uses OpenAI's GPT models and LangChain for agent management and memory handling. With a Streamlit interface, it offers interactive responses and supports efficient document search with FAISS. Users can upload and search PDF, DOCX, and TXT files, making it a versatile tool for answering questions and retrieving content.
minimem is a minimal implementation of an in-memory vector store using only NumPy.
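A NumPy-only vector store of this kind can be sketched in a few lines: store vectors in one matrix and rank by cosine similarity against a query. The class name and API below are hypothetical, for illustration only, and are not minimem's actual interface.

```python
import numpy as np

class MiniVectorStore:
    """Illustrative in-memory vector store using only NumPy
    (hypothetical API, not minimem's)."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.items: list = []

    def add(self, item, vector) -> None:
        v = np.asarray(vector, dtype=np.float32).reshape(1, self.dim)
        self.vectors = np.vstack([self.vectors, v])
        self.items.append(item)

    def search(self, query, k: int = 3):
        q = np.asarray(query, dtype=np.float32)
        # Cosine similarity between the query and every stored vector.
        sims = self.vectors @ q / (
            np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(q) + 1e-9
        )
        top = np.argsort(-sims)[:k]  # indices of the k most similar items
        return [(self.items[i], float(sims[i])) for i in top]

store = MiniVectorStore(dim=3)
store.add("apple", [1.0, 0.0, 0.0])
store.add("banana", [0.9, 0.1, 0.0])
store.add("car", [0.0, 0.0, 1.0])
results = store.search([1.0, 0.0, 0.0], k=2)
```

Keeping all vectors in one contiguous matrix lets a single matrix-vector product score every item at once, which is the main trick a minimal NumPy store relies on.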
🤖 An intelligent, context-aware chatbot that can answer questions about your own documents.
LLM-powered ChatAI system, with added support for Hugging Face embeddings and models.
An AI-based application leveraging Gemini/OpenAI and JinaAI embeddings with a FastAPI backend and Svelte frontend. The app can read PDFs, maintain webpage memory, and facilitate interactive chat with websites, webpages, and PDFs.
A website that summarizes PDFs into simple paragraphs based on the user's queries, using Streamlit, LangChain, OpenAI, and a ChromaDB Docker image.
Q&A with Multiple PDFs is a Python application that lets you ask questions about uploaded PDFs, using a natural-language model to generate accurate answers to your queries.