LLM story writer with a focus on high-quality long output based on a user provided prompt.
Content Engine is a RAG system that analyzes and compares multiple PDF documents, specifically identifying and highlighting their differences. It uses Retrieval-Augmented Generation (RAG) techniques to retrieve, assess, and generate insights from the documents.
Instruct and validate structured outputs from LLMs with Ollama.
Harness LLMs with Multi-Agent Programming
Project Jarvis is a versatile AI assistant that integrates various functionalities.
Run state-of-the-art language models locally. Chat with AI using simple slash commands. Zero cloud, zero cost – just pure, home-brewed AI magic.
PalmHill.BlazorChat is a chat application and API built with Blazor WebAssembly, SignalR, and WebAPI, featuring real-time LLM conversations, markdown support, customizable settings, and a responsive design. This project supports Llama2 models and was tested with Orca2.
This application uses Streamlab and Streamlit to create an interactive web interface for chatting with a locally hosted Llama 3 language model.
'Local Large Language RAG Application', an application for interfacing with a local RAG LLM.
Chrome Extension to Summarize or Chat with Web Pages/Local Documents Using locally running LLMs. Keep all of your data and conversations private. 🔐
Automate the batching and execution of prompts.
Your fully proficient, AI-powered, local chatbot assistant 🤖
Recipes for on-device voice AI and local LLM
A python package for developing AI applications with local LLMs.