
How can you easily evaluate open LLMs using LLM as a Judge?

Emad Dehnavi
6 min read · Sep 25, 2024

Google Cloud’s Gen AI Evaluation Service in Vertex AI makes it easy to evaluate large language models (LLMs) such as Meta-Llama-3.1-8B. Using built-in or custom metrics like BLEU, ROUGE, and “LLM as a Judge,” you can quickly analyze model performance. In this guide, we’ll walk through evaluating a model for coherence using Google’s tools.


Setup and Configuration

First, you need to install Google Cloud’s command-line tool, gcloud, and the Python SDK for Vertex AI. Then, log in to your Google Cloud account and set your project. Make sure to enable the necessary APIs like Vertex AI, Compute Engine, and Container Registry. Once done, you can start deploying the Meta-Llama-3.1-8B model on Vertex AI using the L4 accelerator.
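A sketch of those setup steps is below. The project ID `my-gcp-project` is a placeholder you would replace with your own, and the exact set of APIs to enable may vary with your deployment method:

```shell
# Install the Vertex AI Python SDK
pip install --upgrade google-cloud-aiplatform

# Log in to your Google Cloud account
gcloud auth login

# Set the active project (replace my-gcp-project with your project ID)
gcloud config set project my-gcp-project

# Enable the APIs mentioned above: Vertex AI, Compute Engine, Container Registry
gcloud services enable aiplatform.googleapis.com \
    compute.googleapis.com \
    containerregistry.googleapis.com
```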

To install gcloud, follow the steps for your operating system here: https://cloud.google.com/sdk/docs/install

If you are using a Mac, extract the archive somewhere on your machine, such as your home directory. You can rename the folder to gcloud if you like, then run ./gcloud/install.sh from that directory.
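On a Mac, those steps might look like the following. The archive filename is an example and depends on the version and chip (Intel vs. Apple silicon) you downloaded:

```shell
# Extract the downloaded archive into your home directory
tar -xzf google-cloud-cli-darwin-arm.tar.gz -C ~

# Optionally rename the extracted google-cloud-sdk folder to gcloud
mv ~/google-cloud-sdk ~/gcloud

# Run the install script from that directory
cd ~ && ./gcloud/install.sh
```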

Once installed, you will probably need to refresh your shell’s environment. For example, if you use zsh, run source ~/.zshrc, or simply close and reopen your terminal. To verify if…
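The refresh step, together with a common way to check that the CLI is on your PATH, might look like this:

```shell
# Reload your shell configuration (zsh example)
source ~/.zshrc

# Check that the gcloud CLI is installed and on your PATH
gcloud --version
```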



Written by Emad Dehnavi

With 8 years as a software engineer, I write about AI and technology in a simple way. My goal is to make these topics easy and interesting for everyone.
