# reward-modeling

Here are 28 public repositories matching this topic...

This training offers an intensive exploration of frontier reinforcement learning techniques for large language models (LLMs). We cover advanced topics such as Reinforcement Learning from Human Feedback (RLHF), Reinforcement Learning from AI Feedback (RLAIF), and reasoning LLMs, and demonstrate practical applications such as fine-tuning…

  • Updated Mar 9, 2026
  • Jupyter Notebook

An easy-to-use Python package for running quick, basic QA evaluations. It includes standardized QA evaluation metrics and semantic evaluation metrics: black-box and open-source large language model prompting and evaluation, exact match, F1 score, PEDANT semantic match, and transformer match. The package also supports prompting the OpenAI and Anthropic APIs.

  • Updated Jul 18, 2025
  • Python
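The package's own API isn't shown here, but the exact-match and token-level F1 metrics it lists are standard SQuAD-style QA measures and can be sketched as follows (function names are illustrative, not the package's actual interface):

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> bool:
    """True if the normalized prediction equals the normalized gold answer."""
    return prediction.strip().lower() == reference.strip().lower()

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted answer and a gold answer.

    Counter intersection gives the multiset of overlapping tokens;
    precision and recall are computed over token counts.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the cat sat", "the cat")` yields precision 2/3 and recall 1, so F1 = 0.8, while `exact_match` on the same pair is `False`.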


Add this topic to your repo

To associate your repository with the reward-modeling topic, visit your repo's landing page and select "manage topics."
