Charm's Scam Defense AI fights the epidemic of human-centric fraud that costs hundreds of billions annually and disrupts millions of lives. Our mission is "to break the scam spell" by protecting people from scammers who manipulate human psychology rather than merely exploit technical vulnerabilities. As attackers increasingly use AI to scale their social engineering attacks, Charm responds with cutting-edge AI defense systems.
Using proprietary behavioral science models that combine fraud expertise with psychology and behavioral analysis, Charm has developed a comprehensive portfolio of AI-powered scam defense solutions for financial institutions. Our Prevention Agent proactively analyzes risk signals and intervenes in real time, our Copilot empowers fraud analysts with conversation analysis and guided actions, and our Customer-Facing Agent provides self-serve protection with emotional support and professional escalation pathways.
That's where you come into the picture. We are on a quest to scale our AI team with individuals who are not only brilliant and passionate but also driven by our mission to protect people from fraud and restore trust in digital interactions. As Head of AI Research, you will be at the forefront of this transformative mission, pushing the boundaries of AI to combat increasingly sophisticated human-centric attacks.
RESPONSIBILITIES
Team Leadership and Development: Guide and nurture a team of AI researchers specializing in fraud detection, behavioral analysis, and human-AI interaction, fostering an environment of innovation, collaboration, and outstanding performance in the fight against scams.
AI Technology Deployment: Spearhead the rapid development and deployment of cutting-edge AI technologies, including large language models (LLMs), multi-agent systems, and DAG-orchestrated workflows, for real-time scam prevention in production financial environments, ensuring on-time delivery of robust, compliant, and scalable AI solutions that meet strict regulatory requirements.
Excellence in AI Processes: Promote excellence in AI research and engineering processes by establishing best practices across all development stages. Lead the implementation of rigorous design methodologies, comprehensive analysis frameworks, and deep research protocols, spanning initial LLM fine-tuning and behavioral science model creation through behavioral pattern analysis, testing, and deployment via DAG pipelines. Ensure robust verification and testing protocols for all AI outputs, including model validation, output quality assurance, bias detection, and continuous monitoring, while emphasizing explainability, auditability, and regulatory compliance alongside scalability and effectiveness.
Cross-Team Collaboration: Engage actively with product and engineering teams to identify emerging fraud patterns and develop AI-driven solutions that align with financial institution needs while maintaining regulatory compliance.
Trend Monitoring and Innovation: Stay current with the latest developments in AI, particularly in large language models, transformer architectures, DAG-based AI orchestration, adversarial AI techniques, and LLM safety research, integrating cutting-edge approaches into our scam defense portfolio while maintaining focus on fraud detection, natural language processing, and behavioral analysis.
Data Utilization: Turn Charm's extensive fraud-pattern datasets and behavioral analyses into powerful LLM-powered applications and DAG-orchestrated AI workflows that provide real-time protection while maintaining strict privacy and compliance standards.
Quality Assurance & Verification: Implement comprehensive testing frameworks for all AI outputs, including automated testing pipelines, human evaluation protocols, adversarial testing for fraud detection systems, and continuous validation of model performance. Establish best practices for output verification, including accuracy metrics, bias detection, edge case analysis, and regulatory compliance testing for all LLM-based systems.
Research & Development Leadership: Establish comprehensive research methodologies and best practices for the AI development lifecycle, including systematic design thinking approaches, thorough competitive analysis, deep technical research phases, and rigorous verification protocols. Implement stage-gate processes that ensure proper analysis and validation at each development milestone, from initial concept through production deployment.
Regulatory Compliance & Governance: Ensure all AI research and development activities comply with financial industry regulations, including SR 11-7, Basel III, and other relevant frameworks, working closely with engineering teams to maintain audit trails and model governance standards. Establish verification protocols that validate regulatory compliance at each development stage, from initial design through production deployment.