Overview
We are looking for a Staff Data Scientist to join our Generative AI Content Moderation efforts. You will take a key role in developing and maintaining ML models and tools that detect irrelevant and harmful content and enforce regulations and standards. You will work in a dynamic, collaborative company culture with cutting-edge technologies as part of a global Data Science organization for Security, Risk & Fraud Prevention.
What you'll bring
- 5+ years of Data Science experience, preferably with a strong NLP background
- MSc or PhD in a relevant field (Computer Science, Statistics, Applied Math), or equivalent industry research experience
- Expertise in Data Mining algorithms and Statistical Modeling techniques
- Well-versed in Data Science methods, tools, and frameworks, including data processing platforms and distributed computing systems (for example Python, SQL, scikit-learn, NumPy, pandas, Keras)
- Experience working with at least one cloud platform (for example AWS, GCP, Azure)
Advantages:
- Experience with compliance-related modeling
- Experience with unstructured data
How you will lead
- Collaborate with the content moderation policy team to define and track relevant KPIs
- Use our paved paths and infrastructure for building, hosting, and maintaining machine learning models, including cloud-based platforms
- Collaborate with data engineers to establish robust data pipelines for model training and inference under high-throughput and low-latency constraints
- Formalize compliance requirements and prioritize the backlog
- Share knowledge on privacy and compliance issues, both internally and externally