Think of a startup inside a big company. With funding. At scale.
We’re looking for data engineers in two domains.
One is TeamUp (by AT&T), which helps engineering leaders, managers, and developers improve every day.
TeamUp is building a big-data software intelligence platform that turns data generated throughout the software development lifecycle (code, user stories, pipelines, scanning tools) into insights and optimization tools, helping 1,000+ R&D teams across AT&T continuously improve as they build products used by tens of millions of customers.
The other domain is the AT&T Security Platform, which helps engineering leaders and managers prepare for the post-quantum era. This security big-data platform will run daily security scans on thousands of applications and tools, providing actionable insights and defining the next steps to ensure robust security in a post-quantum world.
We are seeking an experienced Data Engineer to join our dynamic research team.
This role offers the opportunity to own the entire data lifecycle, from extraction to visualization with external tools, while also applying data science methodologies to analyze, classify, and explore data. It is a rare chance to work deeply in both data engineering and data science.
Responsibilities:
- Design, implement, and maintain the complete data flow, from extraction to visualization using external tools (a minimal sketch of this kind of pipeline follows this list).
- Work within a product team where solutions are collaboratively proposed. You will be expected to translate requirements into technical designs and implement them as part of large-scale data engineering solutions.
- Apply various machine learning models to explore data, using cutting-edge big-data tools and techniques.
- Collaborate with R&D, architects, data scientists, analysts, project managers, and DevOps to deliver the right solution.
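
For a flavor of the day-to-day work, here is a minimal PySpark sketch of the kind of extract-transform-load step the first bullet describes. It is illustrative only; the paths, table, and column names (commits, team_id, committed_at) are hypothetical placeholders, not actual TeamUp data.

```python
# Illustrative sketch of a batch ETL step; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("commit-metrics-example").getOrCreate()

# Extract: load raw commit events (hypothetical Parquet source).
commits = spark.read.parquet("/data/raw/commits")

# Transform: aggregate daily commit counts per team.
daily = (
    commits
    .withColumn("day", F.to_date("committed_at"))
    .groupBy("team_id", "day")
    .agg(F.count("*").alias("commit_count"))
)

# Load: write a curated table for downstream BI dashboards.
daily.write.mode("overwrite").parquet("/data/curated/daily_commit_counts")
```

In production, a job like this would typically run as an Airflow task and feed the visualization layer (Power BI, Tableau) mentioned in the requirements below.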
Requirements:
- At least 3 years of experience in Big-Data technologies as a data engineer, including ETL/ELT processes and data exploration
- At least 3 years of experience with Spark, Spark Streaming, Databricks, Airflow, and the other components required to build end-to-end data pipelines
- At least 3 years of experience in Python programming and SQL
- At least 1 year of experience in BI development
- At least 1 year of experience with data visualization tools, such as Power BI or Tableau
- At least 1 year of experience in cloud big data solutions and architectures (Azure)
- Deep knowledge of big-data design principles and guidelines, data warehouse modeling, and data transformations
- Strong experience in JavaScript/Java programming
- Excellent interpersonal and communication skills; a team player
- Excellent self-management and ownership skills, and a solid understanding of the business
- Bachelor's degree in Mathematics, Statistics, or Computer Science