At Dream, we are redefining cyber defense by combining AI and human expertise to create products that protect nations and critical infrastructure. This is more than a job; it's a Dream job. Dream is where we tackle real-world challenges, redefine AI and security, and make the digital world safer. Let's build something extraordinary together.
Dream's AI cybersecurity platform applies a novel, multi-layered approach that covers the endless, evolving security challenges across the entire infrastructure of the most critical and sensitive networks. At its core are Dream's proprietary Cyber Language Models, innovative technologies that provide contextual intelligence for the future of cybersecurity.
At Dream, our talented team, driven by passion, expertise, and innovative thinking, inspires us daily. We are not just dreamers; we are dream-makers.
The Dream Job:
In this role, you'll be responsible for designing and implementing the evaluation, validation, and optimization of GenAI systems. You will define, design, and develop LLMs as judges to evaluate task and system outputs across multiple applications, create datasets for benchmarking and evaluation, and help design robust, scalable evaluation pipelines for both online and offline GenAI systems.
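To make the LLM-as-a-judge pattern concrete, here is a minimal illustrative sketch in Python. It is not Dream's implementation: the `call_llm` helper, the prompt wording, and the 1-5 rubric are all assumptions standing in for whatever model client and scoring scheme the team actually uses.

```python
import json

JUDGE_PROMPT = """You are an impartial judge. Rate the answer to the question
on a 1-5 scale for correctness and relevance.
Question: {question}
Answer: {answer}
Respond only with JSON: {{"score": <1-5>, "rationale": "<one sentence>"}}"""

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to the judge model, return its text reply."""
    raise NotImplementedError("wire up your model client here")

def judge(question: str, answer: str) -> dict:
    # Ask the judge model for a structured verdict and parse the JSON it returns.
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(reply)
```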
The Dream-Maker Responsibilities:
- Design, develop, and apply state-of-the-art techniques for evaluating and validating AI agents and workflows.
- Develop and implement LLM-as-a-Judge evaluators (or similar) for different tasks and roles across GenAI systems and tools.
- Design and implement evaluation pipelines and benchmark datasets for assessing model quality, relevance, and system consistency across various applications (a minimal pipeline sketch follows this list).
- Optimize and maintain judge LLMs to evaluate outputs across use cases such as chatbots, RAG systems, and cybersecurity expert and investigator agents.
- Define evaluation KPIs and metrics for models, systems, and tools.
- Validate and optimize datasets for various use cases.
- Ensure the reliability, efficiency, and scalability of evaluation tools and pipelines for both online and offline use cases.
- Work closely with AI/ML engineers to make evaluations a part of the production pipelines of GenAI applications.
- Collaborate with cross-functional teams including product, research and data science.
- Stay up to date with the latest developments in AI and machine learning, with a focus on LLMs, and explore how emerging technologies can be applied to improve our evaluation and validation pipelines.
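For illustration, an offline evaluation pipeline over a benchmark dataset might look like the sketch below. It reuses the hypothetical `judge` helper from the earlier sketch; the record fields, the pass threshold, and the KPI names are assumptions, not Dream's actual conventions.

```python
from statistics import mean

def run_offline_eval(benchmark: list[dict], threshold: int = 4) -> dict:
    """Score each benchmark record with the judge and aggregate simple KPIs.

    Assumes `benchmark` is a list of {"question": ..., "answer": ...} records
    and reuses the hypothetical `judge` helper sketched above.
    """
    scores = [judge(rec["question"], rec["answer"])["score"] for rec in benchmark]
    return {
        "mean_score": mean(scores),  # average judge rating across the dataset
        "pass_rate": sum(s >= threshold for s in scores) / len(scores),  # share rated at/above threshold
        "n_examples": len(scores),
    }
```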
The Dream Skill Set:
- Advanced knowledge of and hands-on experience with NLP and LLMs for GenAI applications in production at scale.
- Strong experience designing end-to-end R&D plans for GenAI, including the evaluation and validation lifecycle and benchmarking.
- Strong proficiency in Python.
- Solid understanding of the data science and machine learning lifecycle, and of best practices for evaluating and validating AI systems at scale.
- Excellent problem-solving abilities, coupled with a creative and strategic mindset.
- Proven ability to work effectively in a team setting.
Advantages:
- Experience with evaluation-driven development (EDD) for GenAI applications.
- Familiarity with cybersecurity applications of GenAI.
- Advanced skills in performance optimization for high-throughput systems.
Tech Stack:
Python, LangChain, LangGraph (or other agentic frameworks), Langfuse/LangSmith (or other observability and tracing tools), Hugging Face, MLflow, MongoDB
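As one example of how the listed tools fit together, aggregate KPIs like those sketched above could be tracked with MLflow; a minimal usage sketch follows, where the run and metric names are illustrative placeholders rather than Dream's conventions.

```python
import mlflow

# Log aggregate evaluation metrics to an MLflow run (writes locally by default).
# The run name, metric names, and values are illustrative placeholders.
with mlflow.start_run(run_name="offline-eval"):
    mlflow.log_metric("mean_score", 4.2)
    mlflow.log_metric("pass_rate", 0.87)
```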
Never Stop Dreaming...:
If you feel this role doesn't fully match your skill set but you're eager to grow and break glass ceilings, we'd love to hear from you!