Search your files across your devices with natural language
Updated May 4, 2026 - Rust
Build semantic search with S-BERT and fine-tune your model in an unsupervised way.
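At the core of S-BERT-style semantic search is ranking documents by cosine similarity between embedding vectors. A minimal, dependency-free sketch of that ranking step, using toy hand-made vectors in place of real S-BERT embeddings (the `rank` helper and the 2-D vectors are illustrative assumptions, not part of any repo above):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query, best first."""
    scores = [(cosine(query_vec, d), i) for i, d in enumerate(doc_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

# Toy "embeddings": in practice these would come from an S-BERT encoder.
docs = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
query = [0.9, 0.1]
print(rank(query, docs))  # -> [0, 1, 2]
```

With a real model, the only change is that `docs` and `query` are produced by the encoder; the similarity ranking itself stays the same.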
An easy-to-use vector database.
Sovereign embedded vector database: a single-file .nest container with content-addressable citations, reproducible builds, and offline-first model verification.
LLM-assistant that searches PubMed, retrieves abstracts or full-texts, and generates answers using OpenAI ChatGPT. Features a custom RAG pipeline, semantic search, and knowledge graph generation.
Agentic RAG with LangGraph 🔥
The embedded database for local-first JavaScript apps.
A .NET-based AI project leveraging Retrieval-Augmented Generation (RAG) and OpenAI to provide efficient, intelligent search capabilities for team documentation.
Hybrid AI chatbot with semantic memory, intent learning, and LLM fallback used as a teacher. Built with Python, FAISS, and TensorFlow.
🤖 AI-powered e-commerce platform with intelligent chat assistants, semantic product search via RAG, and real-time streaming charts. A proof-of-concept showcasing MCP servers and advanced AI integration patterns.
🌟 Lumiere: Multi-agent RAG system with semantic memory. Combines LangGraph, Qdrant vector search, and OpenAI for intelligent document Q&A, SQL data analysis, and context-aware conversations. Features long-term learning, critic validation, and full observability.
AI-powered developer documentation search engine using RAG (Retrieval-Augmented Generation) with FAISS, Sentence Transformers, and local LLM (Ollama). Enables fast, context-aware answers from Python, Django, Flask, FastAPI, NumPy, and Pandas docs.
Docker image of PostgreSQL with vector database extension **pgvector** on Alpine Linux.
AI-focused news aggregator that ranks, summarizes, and deduplicates articles about artificial intelligence in real time.
RAG Chatbot that turns documents in Google Drive into a conversational AI. Uses OpenAI embeddings, Qdrant vector search, and Google Gemini for context-aware answers. Applied to large document collections, including legal texts, it drastically cuts search time and provides accurate responses grounded in multiple sources.
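A shared step in the RAG chatbots above is assembling retrieved passages into a grounded prompt so the LLM answers from sources rather than from memory. A small sketch of that assembly step (the `build_rag_prompt` helper and its wording are illustrative assumptions, not taken from any listed project):

```python
def build_rag_prompt(question, passages):
    """Assemble a grounded prompt: numbered context passages, then the question."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the sources below; cite them as [n].\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What reduces search time?",
    [
        "Vector search retrieves the most relevant chunks.",
        "Embeddings map queries and documents into the same space.",
    ],
)
print(prompt)
```

The numbered sources let the model cite passages as `[1]`, `[2]`, which is what makes answers auditable against the original documents.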
Persistent memory backend for AI systems (FastAPI + SQLite)
Sematic Cache is a semantic caching library that uses LanceDB for vector storage. It allows caching of natural language queries based on semantic similarity rather than exact string matching.
Developed a semantic search engine as part of the CS-328 Introduction to Data Science course, using word embeddings to retrieve semantically relevant documents. Explored approximate nearest neighbor (ANN) and hashing-based methods to strike a balance between retrieval accuracy and computational efficiency.
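One classic hashing-based ANN method is random-hyperplane locality-sensitive hashing: each hyperplane contributes one sign bit, and vectors with similar directions tend to land in the same bucket, so only that bucket needs an exact scan. A minimal sketch under those assumptions (the `hyperplane_hash` helper and the 4-D vectors are illustrative, not from the course project):

```python
import random

def hyperplane_hash(vec, planes):
    """Bucket key: one sign bit per random hyperplane (dot product >= 0 -> 1)."""
    bits = 0
    for p in planes:
        dot = sum(x * y for x, y in zip(vec, p))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits

random.seed(0)
dim, n_planes = 4, 8
# Random Gaussian hyperplane normals.
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

v = [0.2, 0.9, -0.1, 0.4]
near = [0.21, 0.88, -0.09, 0.41]  # small perturbation of v
print(hyperplane_hash(v, planes))     # bucket key in [0, 256)
print(hyperplane_hash(near, planes))  # usually the same bucket as v
```

More planes mean smaller buckets and fewer false collisions, at the cost of more hash tables needed to keep recall high; that is the accuracy/efficiency trade-off the blurb mentions.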
aicli is a Rust-based terminal application that implements a Retrieval-Augmented Generation (RAG) workflow. It scans and chunks text files, generates embeddings, stores and queries vectors in Qdrant, and retrieves relevant context to produce accurate, context-aware responses through an interactive TUI.
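The first stage of the RAG workflow aicli describes is chunking: splitting files into overlapping pieces so each embedding covers a bounded span while boundary context is preserved. A minimal character-based sketch (the `chunk_text` helper and its default sizes are illustrative assumptions; aicli itself is written in Rust):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks; overlap preserves boundary context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # advance by the non-overlapping stride
    return chunks

parts = chunk_text("a" * 500, chunk_size=200, overlap=50)
print(len(parts))  # -> 4
```

Each chunk is then embedded and upserted into the vector store; at query time the nearest chunks are retrieved and passed to the model as context.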