DevJobs

Full Stack Engineer

Skills
  • TypeScript ꞏ 4y
  • SQL
  • Node.js ꞏ 4y
  • GraphQL
  • Angular
  • PostgreSQL
  • MongoDB
  • Redis
  • RESTful API
  • AWS
  • Kubernetes
  • Helm
  • Docker
  • RabbitMQ
  • Karpenter
  • Prisma
  • S3
  • Streaming parsers
  • CD
  • CI
  • CloudWatch
  • ETL
  • KEDA
  • Meta
  • pgvector
  • pnpm
  • Shopify
  • Similarity search
  • TikTok
  • TurboRepo
  • Vite
  • Vue 3
  • E-commerce feeds
  • WooCommerce
  • Google Merchant Center
About Selectika

Selectika AI helps retailers turn messy product data into clean, structured, channel-ready catalogues. We ingest large catalogues, enrich items, and sync to e-commerce and marketing channels like Google Merchant Center, Meta, and TikTok.

What you will do

●     Own services that power the catalogue pipeline: ingest, transform, enrich, validate, sync, and export

●     Design and ship TypeScript Node.js microservices in a monorepo with shared packages

●     Build and evolve GraphQL and REST APIs for internal teams and external partners

●     Migrate the frontend from Angular to Vue 3 and set solid foundations with Vite and reusable components

●     Optimize large data flows using streaming parsers for CSV, XML, JSON, and API sources, with delta updates and idempotent writes

●     Implement storage layers using Prisma with PostgreSQL (Aurora RDS), plus MongoDB for document workloads and Redis for caching and queue metadata

●     Work with RabbitMQ workers for feed parsing, enrichment steps, and background jobs with backpressure and retries

●     Operate on AWS EKS using Helm and Karpenter, with KEDA for queue-aware autoscaling, CloudWatch for logs and metrics, and S3 for assets

●     Measure and improve performance, reliability, and cost using tracing, structured logging, and SLOs

●     Collaborate closely with product and data teams to turn retailer feeds into validated, channel-ready data
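
The streaming-ingest responsibilities above can be sketched roughly as follows. This is a minimal illustration only, not Selectika's actual code; all names (`ingestCsv`, `store`, the CSV columns) are hypothetical, and an in-memory Map stands in for a Prisma upsert:

```typescript
// Sketch: stream a CSV feed row by row, skip unchanged rows (delta update),
// and apply idempotent upserts keyed by SKU. Illustrative names throughout;
// the Map stands in for a database upsert.
import { createInterface } from "node:readline";
import { Readable } from "node:stream";
import { createHash } from "node:crypto";

type Item = { sku: string; title: string; price: number };

const store = new Map<string, Item>();        // stand-in for the database
const seenHashes = new Map<string, string>(); // last processed row hash per SKU

async function ingestCsv(source: Readable): Promise<number> {
  let applied = 0;
  const rl = createInterface({ input: source, crlfDelay: Infinity });
  let header: string[] | null = null;
  for await (const line of rl) {
    if (!line.trim()) continue;
    if (!header) { header = line.split(","); continue; }
    const cols = line.split(",");
    const row = Object.fromEntries(header.map((h, i) => [h, cols[i]]));
    const hash = createHash("sha256").update(line).digest("hex");
    if (seenHashes.get(row.sku) === hash) continue; // delta: row unchanged, skip
    // Idempotent write: upsert by SKU, so replaying the feed is safe.
    store.set(row.sku, { sku: row.sku, title: row.title, price: Number(row.price) });
    seenHashes.set(row.sku, hash);
    applied++;
  }
  return applied;
}
```

Because rows are read line by line rather than buffering the whole file, memory stays flat regardless of catalogue size, and the per-row hash makes re-running a feed a no-op for unchanged items.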

Why this is interesting day to day

●     You ship fast. Infra is Kubernetes on AWS with Helm charts per service, GitHub Actions pipelines, and Karpenter autoscaling, so deploys are quick and safe

●     You work on real throughput. Large catalogues, high message rates, and memory-sensitive processing

●     You see impact. Changes land across dev and main namespaces with dashboards and alerts, and you can prove wins with metrics

Our stack

●     Language: TypeScript and Node.js in a monorepo. Some Python where useful

●     Data: PostgreSQL (Aurora RDS) with Prisma. MongoDB. Redis

●     Messaging: RabbitMQ

●     APIs: GraphQL and REST

●     Infra: AWS EKS with Karpenter and Helm. KEDA optional. CloudWatch. S3

●     Tooling: GitHub Actions, Docker, pnpm, TurboRepo

●     Product features: tagging, enrichment, similarity, and feed exports

Required experience

●     4+ years building production services with TypeScript and Node.js

●     Strong SQL and data modeling with PostgreSQL and hands-on Prisma

●     Experience with large data feeds, streaming parsers, and memory-efficient ETL

●     Operating services on Kubernetes and AWS with CI/CD and observability

●     Production experience with message queues, preferably RabbitMQ, including dead-lettering and retries

●     Performance tuning for high-throughput APIs and batch workers

●     Ownership mindset and clear communication
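
The retry-and-dead-letter pattern mentioned above can be sketched broker-independently. This is a hedged illustration, not a real RabbitMQ client API; `consumeWithRetry`, `Handler`, and the dead-letter array are all hypothetical names:

```typescript
// Sketch of retry-then-dead-letter: a handler is retried with exponential
// backoff, and a message that keeps failing is parked in a dead-letter list
// for inspection or replay. Illustrative only; a real worker would use a
// broker's dead-letter exchange instead of an array.
type Handler<T> = (msg: T) => Promise<void>;

async function consumeWithRetry<T>(
  msg: T,
  handler: Handler<T>,
  deadLetter: T[],
  maxAttempts = 3,
  baseDelayMs = 10,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await handler(msg);
      return true; // success: ack the message
    } catch {
      if (attempt === maxAttempts) {
        deadLetter.push(msg); // exhausted retries: park for later inspection
        return false;
      }
      // Exponential backoff between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  return false;
}
```

Bounding attempts and parking failures keeps one poison message from blocking the queue, which is the practical point of dead-lettering in a feed-processing worker.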

Nice to have

●     Vue 3 and Vite. Experience migrating from Angular

●     GraphQL schema design and gateway patterns

●     Image pipelines on S3 with presigned URLs, thumbnails, and CDNs

●     E-commerce feeds: Google Merchant Center, Meta, TikTok, Shopify, WooCommerce

●     KEDA-based autoscaling from RabbitMQ metrics

●     pgvector or similar for similarity search
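
For context on the similarity-search bullet: pgvector ranks rows by a vector distance (its cosine distance operator is `1 - cosine similarity`). The metric itself is simple, sketched below in plain TypeScript; real queries would of course run in SQL against pgvector, and this function is only an illustration:

```typescript
// Illustrative cosine similarity between two embedding vectors: the metric
// commonly behind pgvector-style similarity search. Returns a value in
// [-1, 1], where 1 means the vectors point the same way.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```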

What success looks like

● You ship a service or worker into the pipeline with dashboards and alerts

● You move a feeder to streaming with backpressure and reduce p99 latency on a core API

● You lead an improvement across dev and main including queue policies and autoscaling that lowers cost and increases throughput

Interview process

●     Intro call (20 minutes): fit and role context

●     Technical deep dive (60 minutes): architecture and code review on a real service from our domain

●     Practical pair session (90 minutes): build a streaming feed parser and persist via Prisma, plus a small API. No take-home

●     Final culture and product session (30 minutes) with founders and PM

How to apply

Send GitHub or relevant repos plus a short note about a high-throughput system you built to [email protected]