ragas: Answer Relevancy with Own LLM: Kernel Dies

Hi,

I want to evaluate my RAG application. Computing faithfulness, context_precision and context_recall with my own LLM (Llama based) works. But when I try to compute the answer_relevancy score, I either get an error that the OpenAI key was not found (I don’t want to use OpenAI) or my kernel dies. This is what my DatasetDict looks like:

DatasetDict({
    train: Dataset({
        features: ['question', 'answer', 'contexts', 'ground_truths'],
        num_rows: 2 
    })
})
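
For context, the working metrics are computed the same way I later try for answer_relevancy; a minimal sketch of my setup (llm is the already-loaded Llama-based LangChain model, not shown here):

from ragas.metrics import faithfulness, context_precision, context_recall

# llm is the Llama-based LangChain model I already have loaded (not shown here)
for metric in (faithfulness, context_precision, context_recall):
    metric.llm = llm
    metric.init_model()
    print(metric.score(em_mistral_extracted["train"]))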

I am loading my own embeddings, following the documentation:

from ragas.metrics import AnswerRelevancy
from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name='intfloat/multilingual-e5-large',
    model_kwargs={'device': 'mps'}
)
answer_relevancy = AnswerRelevancy(
    embeddings=embeddings
)

If I try the following, I get an “OpenAIKeyNotFound” error:

#answer_relevancy.llm = llm
# init_model to load models used
answer_relevancy.init_model()

results = answer_relevancy.score(em_mistral_extracted["train"])

When I instead set answer_relevancy.llm to my own LLM first, my kernel crashes:

answer_relevancy.llm = llm
# init_model to load models used
answer_relevancy.init_model()

results = answer_relevancy.score(em_mistral_extracted["train"])

This results in: “Kernel crashed while executing code in the current cell or a previous cell”

At the moment the LLM and the embeddings come from different models. Using the same model for both the LLM and its embeddings leads to the same issue (see the sketch below).
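
To illustrate the “same model” case, this is roughly what I mean. A sketch only; LlamaCpp, LlamaCppEmbeddings and the model path are stand-ins here, not my exact setup:

from langchain.llms import LlamaCpp
from langchain.embeddings import LlamaCppEmbeddings
from ragas.metrics import AnswerRelevancy

# Hypothetical local model path; the point is that the LLM and the
# embeddings come from the same model file.
model_path = "./models/my-llama-based-model.gguf"

llm = LlamaCpp(model_path=model_path, n_ctx=3900)
embeddings = LlamaCppEmbeddings(model_path=model_path)

answer_relevancy = AnswerRelevancy(embeddings=embeddings)
answer_relevancy.llm = llm
answer_relevancy.init_model()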

So any suggestions here?

About this issue

  • State: closed
  • Created 7 months ago
  • Comments: 17 (7 by maintainers)

Most upvoted comments

The kernel also dies when evaluating a custom dataset with a custom LLM and embeddings:

from langchain.llms import LlamaCpp
from langchain.embeddings import HuggingFaceBgeEmbeddings
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_correctness,
    answer_similarity,
    answer_relevancy,
)

eval_llm = LlamaCpp(
    model_path="./model/Mistral-7B-Instruct-v0.2/mistral-7b-instruct-v0.2.Q8_0.gguf",
    n_ctx=3900,
    n_gpu_layers=-1,
    n_batch=512,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=False,
)

eval_embed = HuggingFaceBgeEmbeddings(
    model_name='./model/bge-large-en-v1.5',
    model_kwargs={'device': 'cuda'},
    encode_kwargs={'normalize_embeddings': True},
)

result = evaluate(
    evalsets,
    metrics=[
        faithfulness,
        answer_correctness,
        answer_similarity,
        answer_relevancy,
    ],
    llm=eval_llm,
    embeddings=eval_embed,
)
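
For reference, evalsets is a datasets.Dataset with the same columns as the DatasetDict above; a hypothetical construction (the rows are made-up placeholders, not my data):

from datasets import Dataset

# Hypothetical two-row evaluation set with the columns ragas expects
evalsets = Dataset.from_dict({
    "question": ["What is ragas?", "What does answer_relevancy measure?"],
    "answer": [
        "ragas is a framework for evaluating RAG pipelines.",
        "It measures how relevant the generated answer is to the question.",
    ],
    "contexts": [
        ["ragas evaluates RAG pipelines with LLM-based metrics."],
        ["answer_relevancy scores the answer against the question."],
    ],
    "ground_truths": [
        ["ragas is a RAG evaluation framework."],
        ["It measures answer relevance."],
    ],
})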