Why Sola?
Because we believe AI should empower, not complicate. At Sola, you’ll work on a platform that simplifies cybersecurity for practitioners everywhere, combining cutting-edge technology with user-first design.
We are looking for a Senior Applied AI Researcher to design, evaluate, and deploy AI systems that operate over complex security data and workflows.
This role focuses on building reliable agent-based systems, developing semantic representations of security data, and enabling AI systems to reason across structured and unstructured sources. The work spans research, experimentation, and production deployment.
You will work on problems such as relational discovery, schema alignment, semantic modeling, and graph-based retrieval, and turn promising approaches into real systems used by security teams.
What You’ll Do
- Design and build AI systems for security analysis and automation.
- Develop methods for relational discovery, schema matching, and semantic modeling across heterogeneous security data.
- Design and run evaluation and benchmarking frameworks for models, agents, and end-to-end systems.
- Experiment with agent architectures, tools, and orchestration strategies.
- Investigate system behavior, analyze failure modes, and improve system reliability.
- Collaborate with engineering teams to bring research prototypes into production systems.
Requirements
- Strong background in applied AI, machine learning, or AI systems.
- MSc or PhD in Computer Science, Machine Learning, AI, or a related field, or equivalent practical experience.
- Hands-on experience with LLMs, AI agents, or complex AI systems.
- Experience designing evaluation and benchmarking methodologies for AI systems.
- Experience working with structured and semi-structured data systems.
- Proficiency in Python and modern AI frameworks.
- Ability to work independently on ambiguous, real-world problems.
Preferred
- Familiarity with knowledge graphs, graph databases, or graph-based reasoning.
- Experience applying AI in security or adversarial environments.
- Experience evaluating system-level behavior of AI systems in production.
- Publications, open-source contributions, or prior research in AI systems, agents, or data reasoning systems.