About Placer.ai
One of the fastest-growing tech companies in North America, Placer.ai (“Placer”) reached $100 million ARR “Centaur” status in 2024, within six years of launching the Placer platform, having crossed the $1B valuation “Unicorn” threshold in 2022. Our AI-based SaaS platform transforms how leaders make critical decisions by unlocking unparalleled visibility into markets and locations, creating a blue-ocean market of more than $100B. Best of all, we accomplished this as a privacy-first business: data is stripped of user identifiers, and only aggregated information is provided within the platform. Our platform provides instant visibility into almost any location in the U.S. We have grown exponentially over the past six years and now count more than 4,000 paying customers, including many Fortune 500 companies such as JP Morgan Chase, Wayfair, and Google.
Named one of Forbes America’s Best Startup Employers and a Deloitte Technology Fast 500 company, we are proud of our collaborative, innovative, and inclusive culture.
Summary
We’re looking for a hands-on Individual Contributor Data Engineer to design, build, and operate large-scale data products at Placer.ai. You’ll own mission-critical pipelines and services, balancing pre-computation with on-demand execution to deliver complex, business-critical insights at the right cost, latency, and reliability.
Responsibilities
- Design and run Spark data pipelines, orchestrated with Airflow and governed with Unity Catalog.
- Build scalable batch and on-demand data products, finding the sweet spot between pre-computation and on-demand execution for complex logic, and own the associated SLAs/SLOs, cost, and performance.
- Implement robust data quality, lineage, and observability across pipelines.
- Contribute to the architecture and scaling of our Export Center for off-platform report generation and delivery.
- Partner with Product, Analytics, and Backend to turn requirements into resilient data systems.
Requirements
- BSc in Computer Science or an equivalent degree.
- 5+ years of professional backend/data engineering experience.
- 2+ years of dedicated data engineering experience.
- Production experience with Apache Spark, Airflow, Databricks, and Unity Catalog.
- Strong SQL and at least one of Python or Scala; solid data modeling and performance-tuning chops.
- Proven track record building large-scale (multi-team, multi-tenant) data pipelines and services.
- Pragmatic approach to cost/latency trade-offs, caching, and storage formats.
- Experience shipping reporting/export pipelines and integrating with downstream delivery channels.
- IC mindset: you lead through design, code, and collaboration (no direct reports).
Other Requirements
- Delta Lake, query optimization, and workload management experience.
- Observability stacks (e.g., metrics, logging, data quality frameworks).
- Experience with GCP or another major cloud provider.
- Terraform (IaC) experience.
WHY JOIN PLACER.AI?
- Join a rocketship! We are pioneers of a new market that we are creating
- Take a central and critical role at Placer.ai
- Work with, and learn from, top-notch talent
- Competitive salary
- Excellent benefits
NOTEWORTHY LINKS TO LEARN MORE ABOUT PLACER
- Placer.ai in a nutshell
- Placer.ai's $100M Series C funding (unicorn valuation!)
- Placer.ai's data
- Placer.ai in the news
- COVID-19 Economic Recovery Dashboard
Placer.ai is committed to maintaining a drug-free workplace and promoting a safe, healthy working environment for all employees.
Placer.ai is an equal opportunity employer and has a global remote workforce. Placer.ai’s applicants are considered solely based on their qualifications, without regard to an applicant’s disability or need for accommodation. Any Placer.ai applicant who requires reasonable accommodations during the application process should contact Placer.ai’s Human Resources Department to make the need for an accommodation known.