
Charm Security - Lead AI Researcher

Skills
  • Python
  • PyTorch
  • Airflow
  • LangChain
  • LLMs
  • Neural Networks
  • Transformers
  • Hugging Face
  • LangGraph
  • Multi-agent Systems
  • Online Reinforcement Learning
  • Orchestration Frameworks
  • Pinecone
  • Prefect
  • Qdrant
  • RAG Systems
  • Agent Frameworks
  • RLHF
  • DAG-based Workflow Orchestration
  • Scalable Training Pipelines
  • Dagster
  • Feedback-driven Optimization Methods
  • Vector Databases
  • GenAI Infrastructure

About Us

Charm's Scam Defense AI fights the epidemic of human-centric fraud that costs hundreds of billions annually and disrupts millions of lives. Our mission is "to break the scam spell" by protecting people from scammers who manipulate human psychology rather than just exploit technical vulnerabilities. As attackers increasingly use AI to scale their social engineering attacks, Charm responds with cutting-edge AI defense systems.

Using proprietary behavioral science models that combine fraud expertise with psychology and behavioral analysis, Charm has developed a comprehensive portfolio of AI-powered scam defense solutions for financial institutions. Our Prevention Agent proactively analyzes risk signals and intervenes in real-time, our Copilot empowers fraud analysts with conversation analysis and guided actions, and our Customer-Facing Agent provides self-serve protection with emotional support and professional escalation pathways.


About the Role

We are looking for a Lead AI Researcher to join our AI team and drive applied research efforts at the frontier of LLMs, agents, and AI fraud intervention and resolution. This is a high-impact role for someone with a strong academic foundation, real-world experience in production AI systems, and a passion for turning state-of-the-art models into production-ready systems that protect people from scams. You'll work closely with our Head of AI Research as a technical partner, leading hands-on research initiatives while helping shape our AI strategy.


Responsibilities

  • Conduct applied research in deep learning, LLMs, and agentic reasoning focused on human-centric fraud intervention and resolution
  • Train and fine-tune large-scale models for scam prevention; evaluate performance, alignment, robustness, and regulatory compliance
  • Prototype language model orchestration, retrieval pipelines, tool use, and DAG-orchestrated workflows for real-time scam detection
  • Collaborate with infrastructure and engineering teams to scale and deploy models in production-grade financial environments
  • Partner with the Head of AI Research on technical challenges, research direction, and implementation of best practices
  • Track and experiment with cutting-edge research in LLMs, adversarial AI, and behavioral modeling
  • Support cross-team collaboration to identify emerging fraud patterns and develop AI-driven solutions



Requirements

  • M.Sc. or Ph.D. in Computer Science, Physics, Mathematics, or a related field, with a research thesis in ML, DL, or NLP
  • 6+ years of industry experience in AI/ML roles, preferably in startup or high-growth environments
  • Hands-on experience training and evaluating neural networks, transformers, and LLMs
  • Proficiency in Python and ML toolkits (e.g., PyTorch, Hugging Face Transformers, LangChain)
  • Ability to balance research depth with rapid prototyping and experimentation
  • Strong analytical skills and ability to translate fraud prevention challenges into technical solutions
  • Excellent collaboration skills and ability to work as a technical partner to research leadership


Bonus Points

  • Experience with agent frameworks, RLHF, online reinforcement learning, and multi-agent systems
  • Background in fraud detection, behavioral analysis, or adversarial AI applications
  • Experience with DAG-based workflow orchestration (Airflow, Prefect, Dagster)
  • Experience with GenAI infrastructure, including vector databases (e.g., Pinecone, Qdrant), orchestration frameworks (e.g., LangGraph), RAG systems, scalable training pipelines, or feedback-driven optimization methods


People Skills & Traits

  • Mission-driven with genuine passion for protecting people from fraud and social engineering
  • Strong ethical framework and commitment to responsible AI development in sensitive domains
  • Excellent communicator able to bridge technical depth with practical application
  • Proven ability to work effectively in both independent research and collaborative team settings
  • Strong 'get things done' approach, balancing innovation with regulatory requirements and production stability
  • Thrives in high-pressure, mission-critical environments where AI systems directly protect vulnerable populations