Bilingual LLM Evaluator

Remote
Mercor · $19
Part Time · Mid Level

Posted 4 weeks ago

This job has expired

About This Role

Evaluate Large Language Model (LLM)-generated responses for quality, accuracy, and adherence to expected conversational behavior, producing high-quality human evaluation data.

Responsibilities

  • Evaluate LLM-generated responses on their ability to effectively answer user queries
  • Conduct fact-checking using trusted public sources and external tools
  • Generate high-quality human evaluation data by annotating response strengths, areas for improvement, and factual inaccuracies
  • Assess reasoning quality, clarity, tone, and completeness of responses
  • Ensure model responses align with expected conversational behavior and system guidelines
  • Apply consistent annotations by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Requirements

  • Bachelor’s degree
  • Native speaker or ILR 5/primary fluency (C2 on the CEFR scale) in Brazilian Portuguese
  • Significant experience using large language models (LLMs)
  • Excellent writing skills
  • Strong attention to detail
  • Adaptable and comfortable moving across topics
  • Background or experience in domains requiring structured analytical thinking
  • Excellent college-level mathematics skills

Nice to Have

  • Prior experience with RLHF, model evaluation, or data annotation work
  • Experience writing or editing high-quality written content
  • Experience comparing multiple outputs and making fine-grained qualitative judgments
  • Familiarity with evaluation rubrics, benchmarks, or quality scoring systems

Skills

Large Language Models (LLMs) *

* Required skills

About Mercor

Mercor connects elite creative and technical talent with leading AI research labs.
