Efficient Triton Kernels for LLM Training
RAG-based Telegram assistant bot for freshmen
Code for Paper Submitted in GRADES-NDA 2025
Conversational RAG with PDF and chat history
Example application for applying QLoRA-based Parameter-Efficient Fine-Tuning (PEFT) to a Stance Detection task using Gemma-2-9B-Instruct
Ask LLMs in the terminal
You pick the genre, you make the choices. The AI weaves the story on the fly. From slapstick comedy to psychological horror, impossible heists to football glory – every playthrough is unique, every ending is yours. Ready to write your legend?
Implementing different LLM architectures in single repo
Fine-tuning a local LLM (Gemma 2 2B) with Unsloth on your own custom dataset for custom attribute extraction from unstructured content
⚗️ Gemma 2 9B instruct model repository
🚀 Explore a minimal, extensible LLM inference engine for efficient AI model execution, designed to enhance research and experimentation with novel techniques.
Repository for BabyLM competition on 3 models in Strict and Strict-small tracks
A Streamlit-based intelligent assistant that combines mathematical problem-solving capabilities with data search functionality, powered by Google's Gemma 2 model via Groq API.
A Python script to download DR LYD audio, generate subtitles using Whisper, convert the subtitles to LRC lyrics format, and embed these lyrics into the MP3 file.