# rnaseq-local-chat

Chat with your RNA-seq results using a fully local LLM — no API keys required.
Upload DESeq2 tables, GSEA pathway results, sample metadata, and figures (volcano plots, UMAPs) and ask questions about them in a NotebookLM-style interface.
## Requirements

- Ollama installed and running
- `llama3.2` model: `ollama pull llama3.2`
- `llava` model (for image understanding): `ollama pull llava`
## Quick Start

```bash
pip install -r requirements.txt
streamlit run app.py
```

Then open http://localhost:8501 in your browser.
- Upload files in the sidebar
- Click Build / Update DB
- Ask questions in the chat
Or use the CLI ingestion tool:
```bash
python ingest.py --files data/deseq2_results.csv data/volcano.png --db ./chroma_db
```

## Supported Files

| File | Naming convention | What is stored |
|---|---|---|
| DESeq2 results | contains `deseq`, `deg`, or `differential` | Per-gene: log2FC, padj, baseMean, direction |
| GSEA results | contains `gsea`, `pathway`, `kegg`, or `hallmark` | Per-pathway: NES, padj, direction, size |
| Sample metadata | contains `meta`, `sample`, or `coldata` | Per-sample conditions + summary |
| Images (PNG/JPG) | any filename | LLaVA-generated text description |
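The per-row summaries above are stored as plain-text chunks so they can be embedded and retrieved. The exact chunk format used by `ingest.py` isn't shown in this README; a minimal sketch of the idea, assuming the standard DESeq2 output columns, might look like:

```python
def deseq2_row_to_chunk(gene: str, log2fc: float, padj: float, base_mean: float) -> str:
    """Turn one DESeq2 result row into an embeddable text chunk (illustrative format)."""
    direction = "upregulated" if log2fc > 0 else "downregulated"
    return (
        f"Gene {gene}: log2FoldChange={log2fc:.2f}, padj={padj:.3g}, "
        f"baseMean={base_mean:.1f}, {direction}"
    )

chunk = deseq2_row_to_chunk("CXCL9", 2.41, 1.3e-8, 854.2)
```

One self-contained sentence per gene keeps each chunk meaningful on its own, which is what nearest-neighbor retrieval over ChromaDB needs.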
## Example Questions

- "Which genes are significantly upregulated (padj < 0.05, log2FC > 1)?"
- "What immune-related pathways are activated?"
- "What does the volcano plot show?"
- "How many samples are in each condition?"
- "Which cancer hallmark pathways are enriched?"
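The first question above is ultimately a simple filter on the DESeq2 table. A self-contained sketch with pandas, using toy data and the standard DESeq2 column names:

```python
import io
import pandas as pd

# Toy DESeq2-style results (column names follow DESeq2's standard output).
csv = io.StringIO(
    "gene,baseMean,log2FoldChange,padj\n"
    "CXCL9,854.2,2.41,1.3e-08\n"
    "ACTB,5000.0,0.05,0.9\n"
    "FOXP3,30.0,-1.80,0.002\n"
    "STAT1,1200.0,1.35,0.004\n"
)
df = pd.read_csv(csv)

# "Significantly upregulated": padj < 0.05 AND log2FC > 1.
up = df[(df["padj"] < 0.05) & (df["log2FoldChange"] > 1)]
print(sorted(up["gene"]))  # ['CXCL9', 'STAT1']
```

The chat interface answers this from the retrieved per-gene chunks rather than by executing pandas, so treat its gene lists as a starting point and verify thresholds against the original table.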
## Project Structure

```
rnaseq-local-chat/
├── app.py            # Streamlit UI
├── ingest.py         # CSV + image → ChromaDB (importable + CLI)
├── data/             # Put your RNA-seq files here
├── requirements.txt
├── .env.example
└── LICENSE
```
## Stack

- LLM: llama3.2 via Ollama
- Vision: llava for image-to-text
- Embeddings: all-MiniLM-L6-v2
- Vector store: ChromaDB
- Framework: LangChain LCEL
- UI: Streamlit
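These pieces fit together as a standard retrieval-augmented loop: embed the question, pull the nearest chunks from ChromaDB, and insert them into the prompt sent to llama3.2. A dependency-free sketch of the prompt-assembly step — the template and chunk texts here are illustrative, not the app's actual prompt:

```python
def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a RAG prompt from retrieved context chunks (illustrative template)."""
    context = "\n".join(f"- {c}" for c in retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "Which genes are significantly upregulated?",
    [
        "Gene CXCL9: log2FoldChange=2.41, padj=1.3e-08, upregulated",
        "Gene STAT1: log2FoldChange=1.35, padj=0.004, upregulated",
    ],
)
```

In the app this assembly is expressed as a LangChain LCEL chain (retriever → prompt template → Ollama model), but the underlying string the model sees has this shape.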