Join our dynamic team at the forefront of cybersecurity innovation, where cutting-edge generative AI meets real-world challenges.
About the role: As a GenAI Engineer, you'll drive the development of AI solutions built on Large Language Models (LLMs) and intelligent agent systems. Working alongside our AI engineering, R&D, and product teams, you will design, develop, and deploy production-level systems, from initial data collection and model optimization through cloud-based deployment and monitoring.
Key Responsibilities
Generative AI Development & Model Optimization:
- LLM Production Solutions: Develop robust, production-grade solutions built on LLMs.
- Model Evaluation & Tuning: Evaluate, select, and fine-tune models to meet specific performance and security requirements.
- AI Systems & Agent Development: Design, build, and deploy systems, agents, and workflows that enable complex decision-making, using modern frameworks (e.g., LangChain, LangGraph, A2A) to manage and coordinate multiple AI agents.
- RAG Pipeline Enhancement: Build and optimize Retrieval-Augmented Generation (RAG) pipelines to boost accuracy.
- MCP Integration: Implement the Model Context Protocol (MCP) so that AI systems can seamlessly interact with external tools, data sources, and services, creating extensible and interoperable AI architectures.
- Data Engineering & Analysis: Perform data scraping, optimization, and analysis to support robust AI solutions.
Backend & Multi-Service Architecture:
- Microservices Development: Design and implement scalable backend services and microservices architectures that support AI workloads and integrate seamlessly with existing systems.
- API Development: Develop robust, production-ready APIs for AI model serving, inter-service communication, and handling AI inference requests with event-driven, asynchronous patterns.
- Service Integration: Orchestrate multi-service architectures, ensuring efficient communication between AI components, databases, caching layers, and external services.
Cloud Infrastructure & Deployment:
- AWS Service Utilization: Leverage AWS services (ECS, Lambda, S3, Bedrock, Step Functions, SageMaker) to deploy scalable and secure AI applications.
Required Skills
- Proficiency in Python and modern software engineering practices.
- Hands-on experience with LLMs, prompt engineering, RAG implementation, agent development, and fine-tuning methodologies.
- Strong working knowledge of AWS services (specifically Bedrock, Step Functions, ECS, Lambda, and S3) and cloud architecture.
- Proficiency with Git and collaborative development workflows.
Preferred Skills
- Familiarity with cybersecurity principles and best practices.
- Strong product sense and ability to translate business needs into LLM implementations.
- Experience working with multi-cloud platforms.
Qualifications
- Experience: 2+ years of experience in AI development, data science, backend development, or a related field.
- Education: Bachelor’s or Master’s degree in a quantitative discipline or a closely related field.
We recognize that not every candidate will meet every qualification listed above. If you’re passionate about AI innovation and believe your skills align with our vision, we encourage you to apply.
Our Culture: At our startup, we cultivate a collaborative, team-centric environment that values creativity, technical excellence, and continuous learning. We empower our team members to explore new ideas and drive impactful projects from day one.