interview-eval

LLM-as-an-Interviewer 🎤📄

This is the official GitHub repository for LLM-as-an-Interviewer: Beyond Static Testing Through Dynamic LLM Evaluation.

LLM-as-an-Interviewer is an evaluation framework that assesses the capabilities of LLMs through an interview-style process. In this approach, an LLM acting as the interviewer evaluates other LLMs by providing feedback and asking follow-up questions, enabling a more comprehensive assessment of their capabilities than a single static test.

Our framework includes a flexible pipeline that can be easily adapted to various tasks by incorporating a customized evaluation rubric.
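The interview loop described above can be sketched in plain Python. This is an illustrative sketch, not the package's actual API: `ask_interviewee` and `ask_interviewer` are hypothetical stand-ins for real LLM calls, and the rubric, scoring scale, and stopping rule are assumptions chosen for the example.

```python
# Sketch of an interview-style evaluation loop (hypothetical, not the
# interview-eval package API): the interviewer asks a question, scores the
# answer against a rubric, and follows up until the answer is strong
# enough or the turn budget runs out.

def ask_interviewee(question: str) -> str:
    # Stand-in for the model under evaluation.
    return f"Answer to: {question}"

def ask_interviewer(question: str, answer: str, rubric: str) -> tuple[int, str]:
    # Stand-in for the interviewer model: returns a rubric score (1-5)
    # and a follow-up question probing the weak points of the answer.
    score = 3 if answer else 1
    return score, f"Can you elaborate on: {question}?"

def run_interview(seed_question: str, rubric: str,
                  max_turns: int = 3, pass_score: int = 4) -> list[dict]:
    transcript = []
    question = seed_question
    for turn in range(max_turns):
        answer = ask_interviewee(question)
        score, follow_up = ask_interviewer(question, answer, rubric)
        transcript.append({"turn": turn, "question": question,
                           "answer": answer, "score": score})
        if score >= pass_score:
            break  # answer meets the rubric; no follow-up needed
        question = follow_up  # dig deeper on a weak answer
    return transcript

log = run_interview("Explain quicksort.", rubric="Correctness and clarity")
print(len(log))
```

Swapping in a task-specific rubric and real model calls is what the framework's customizable pipeline automates; the loop structure itself stays the same.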


Code for Paper Replication

The code used to replicate the results reported in our paper is available on GitHub.


Code for the Framework

The framework implementation is available on GitHub.


PyPI

The framework is also available as a Python package on PyPI. Install it using:

pip install interview-eval

Visit the PyPI page


Repositories

  1. interview-eval — Interview-based evaluation of LLMs (Python, 21 stars)
  2. .github
