
Charm Security - Head of AI Research

Overview
Skills
  • Python
  • Deep learning
  • Airflow
  • LLM fine-tuning
  • Explainable AI
  • Human evaluation protocols
  • LangChain
  • Model interpretability
  • Model validation
  • Multi-modal AI systems
  • Natural language processing
  • Prompt engineering
  • Adversarial AI
  • Retrieval-augmented generation
  • Algorithm development
  • Testing frameworks
  • Automated testing pipelines
  • Transformers
  • Behavioral modeling
  • Bias detection
  • DAG orchestration
  • Dagster
  • Prefect

Charm's Scam Defense AI fights the epidemic of human-centric fraud that costs hundreds of billions annually and disrupts millions of lives. Our mission is "to break the scam spell" by protecting people from scammers who manipulate human psychology rather than just exploit technical vulnerabilities. As attackers increasingly use AI to scale their social engineering attacks, Charm responds with cutting-edge AI defense systems.

Using proprietary behavioral science models that combine fraud expertise with psychology and behavioral analysis, Charm has developed a comprehensive portfolio of AI-powered scam defense solutions for financial institutions. Our Prevention Agent proactively analyzes risk signals and intervenes in real-time, our Copilot empowers fraud analysts with conversation analysis and guided actions, and our Customer-Facing Agent provides self-serve protection with emotional support and professional escalation pathways.

That's where you come into the picture. We are on a quest to scale our AI team with individuals who are not only brilliant and passionate but also driven by our mission to protect people from fraud and restore trust in digital interactions. As Head of AI Research, you will be at the forefront of this transformative mission, pushing the boundaries of AI to combat increasingly sophisticated human-centric attacks.

RESPONSIBILITIES

Team Leadership and Development: Guide and nurture a team of AI researchers specializing in fraud detection, behavioral analysis, and human-AI interaction to cultivate an environment enriched with innovation, collaboration, and outstanding performance in the fight against scams.

AI Technology Deployment: Spearhead the rapid development and deployment of cutting-edge AI technologies, including large language models (LLMs), multi-agent systems, and DAG-orchestrated workflows, for real-time scam prevention in production financial environments. Ensure on-time delivery of robust, compliant, and scalable AI solutions that meet strict regulatory requirements.
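For candidates unfamiliar with DAG orchestration, the dependency ordering that tools like Airflow, Prefect, and Dagster provide can be sketched with Python's standard-library `graphlib`. The stage names below are purely illustrative, not Charm's actual pipeline:

```python
from graphlib import TopologicalSorter

# Hypothetical stages of a real-time scam-prevention workflow,
# mapped to the set of stages each one depends on.
stages = {
    "ingest_signals": set(),
    "behavioral_features": {"ingest_signals"},
    "llm_risk_scoring": {"behavioral_features"},
    "bias_check": {"llm_risk_scoring"},
    "intervention": {"llm_risk_scoring", "bias_check"},
}

def run_pipeline(stages):
    """Execute stages in dependency order, as a DAG orchestrator would."""
    order = list(TopologicalSorter(stages).static_order())
    for stage in order:
        # In production, each stage would dispatch to an
        # Airflow/Prefect/Dagster task rather than run inline.
        pass
    return order

order = run_pipeline(stages)
```

A real orchestrator adds scheduling, retries, and observability on top of this ordering, which is why the role calls for hands-on experience with those platforms rather than ad-hoc scripts.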

Excellence in AI Processes: Promote excellence in AI research and engineering by establishing best practices across all development stages. Lead the implementation of rigorous design methodologies, comprehensive analysis frameworks, and deep research protocols, from initial LLM fine-tuning and behavioral science model creation through behavioral pattern analysis, testing, and deployment via DAG pipelines. Ensure robust verification and testing of all AI outputs, including model validation, output quality assurance, bias detection, and continuous monitoring, while emphasizing explainability, auditability, and regulatory compliance alongside scalability and effectiveness.

Cross-Team Collaboration: Engage actively with product and engineering teams to identify emerging fraud patterns and develop AI-driven solutions that align with financial institution needs while maintaining regulatory compliance.

Trend Monitoring and Innovation: Stay current with the latest developments in AI, particularly in large language models, transformer architectures, DAG-based AI orchestration, adversarial AI techniques, and LLM safety research, integrating cutting-edge approaches into our scam defense portfolio while maintaining focus on fraud detection, natural language processing, and behavioral analysis.

Data Utilization: Transform Charm's extensive fraud pattern datasets and behavioral analysis into powerful, transformative LLM-powered applications and DAG-orchestrated AI workflows that provide real-time protection while maintaining strict privacy and compliance standards.

Quality Assurance & Verification: Implement comprehensive testing frameworks for all AI outputs, including automated testing pipelines, human evaluation protocols, adversarial testing for fraud detection systems, and continuous validation of model performance. Establish best practices for output verification, including accuracy metrics, bias detection, edge case analysis, and regulatory compliance testing for all LLM-based systems.
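As a minimal illustration of the verification work described above, the sketch below computes overall accuracy and a per-group false-positive rate, one simple bias-detection probe for a scam classifier. The labels, predictions, and group attribute are invented toy data:

```python
from collections import defaultdict

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate_by_group(y_true, y_pred, groups):
    """FPR per group: P(flagged as scam | actually legitimate).

    Large gaps between groups suggest the model over-flags one
    population and warrant a deeper fairness review.
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # legitimate (negative) cases per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 0:          # legitimate interaction
            neg[g] += 1
            if p == 1:      # incorrectly flagged as scam
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Toy data: 1 = scam, 0 = legitimate; "a"/"b" is a hypothetical group attribute.
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]

acc = accuracy(y_true, y_pred)
fpr = false_positive_rate_by_group(y_true, y_pred, groups)
```

Production systems would track many more metrics (recall on confirmed scams, calibration, drift), but parity checks of this shape are a common starting point for the bias-detection pipelines the role owns.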

Research & Development Leadership: Establish comprehensive research methodologies and best practices for AI development lifecycle, including systematic design thinking approaches, thorough competitive analysis, deep technical research phases, and rigorous verification protocols. Implement stage-gate processes that ensure proper analysis and validation at each development milestone, from initial concept through production deployment.

Regulatory Compliance & Governance: Ensure all AI research and development activities comply with financial industry regulations, including SR 11-7, Basel III, and other relevant frameworks, working closely with engineering teams to maintain audit trails and model governance standards. Establish verification protocols that validate regulatory compliance at each development stage, from initial design through production deployment.



Requirements:

  • Education: Advanced degree in Computer Science, Mathematics, or a related field; Ph.D. preferred.
  • AI & LLM Expertise: Profound knowledge in deep learning, large language models (LLMs), natural language processing, behavioral modeling, adversarial AI, and algorithm development. Extensive experience with LLM fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and multi-modal AI systems for fraud detection and human-AI interaction.
  • Leadership Experience: Demonstrable success in leading AI initiatives from concept through operational deployment in regulated environments, including minimum 3 years in leadership roles within financial services or fraud prevention.
  • AI & LLM Systems Experience: At least 3 years of direct experience developing and deploying LLM-based systems for real-world applications, with expertise in transformer architectures, attention mechanisms, and directed acyclic graph (DAG) orchestration for complex AI workflows. Preferred background in fraud detection, risk assessment, and conversational AI systems that analyze human behavior patterns.
  • Regulatory Knowledge: Strong experience with financial industry compliance requirements, particularly SR 11-7 model risk management, Basel III operational risk frameworks, and related regulatory guidelines for AI/ML systems in financial services.
  • Advanced AI Architecture: Skilled in modern AI programming languages and frameworks, especially Python, Transformers, and LLM deployment platforms (e.g. LangChain). Extensive experience with DAG-based workflow orchestration tools (Airflow, Prefect, Dagster) for complex AI pipelines, and expertise in explainable AI and model interpretability required for regulatory compliance in LLM applications.
  • Process Excellence: Strong experience establishing and implementing AI development best practices across all stages including systematic design methodologies, comprehensive analysis frameworks, deep research protocols, rigorous testing procedures, and verification systems for AI outputs. Proven track record of creating stage-gate processes that ensure quality and compliance at each development milestone.
  • Analytics & Strategic Design: Strong analytical skills with capability to transform evolving fraud threats and business needs into actionable technical strategies through systematic design thinking, comprehensive analysis, and deep research methodologies while maintaining regulatory compliance. Proven ability to implement verification and testing protocols that ensure solution quality and effectiveness.
  • Mission-Driven: Exceptional passion for protecting people from fraud and social engineering, with genuine commitment to making digital interactions safer and more trustworthy.
  • Process Innovation: Outstanding ability to establish and scale best-practice methodologies for AI development, including systematic design processes, comprehensive analysis frameworks, deep research protocols, and rigorous verification systems that ensure high-quality outputs across all development stages.
  • Communication & Process Leadership: Excellent interpersonal and communication skills with proven ability to explain complex AI concepts, research methodologies, and testing protocols to both technical teams and regulatory stakeholders. Demonstrated success in establishing cross-functional collaboration around rigorous development processes.
  • Leadership: Proven ability to work effectively as both team leader and collaborative member in high-pressure, mission-critical environments.
  • Execution: Robust 'get things done' approach with ability to balance innovation with regulatory requirements and production stability.
  • Ethical Foundation: Strong ethical framework and understanding of the human impact of fraud, with commitment to responsible AI development.
Team8