Data Engineer

Skills
  • Python ꞏ 3y
  • C++
  • Rust
  • Spark
  • NumPy
  • pandas
  • Linux
  • CI/CD
  • AWS
  • Docker
  • Kubernetes
  • Airflow
  • Kubeflow
  • Polars
  • Parquet
  • Jupyter Notebook
  • HDF5
  • Delta Lake
  • Avro
  • Databricks

Final is a world leader in the development of trading algorithms and trade execution technologies. Our multi-disciplinary teams have built a unique and highly successful machine-learning-based algorithmic HFT platform that delivers excellent results. In a world increasingly dominated by learning machines and artificial intelligence, we at Final are especially proud of our humans. Our elite team of exceptional people is the soul of our company, and it is our top priority to provide them with a professionally fulfilling environment that supports a healthy work-life balance. Our employees are encouraged to pursue their passions outside of work, and we are proud to offer them a variety of opportunities, ample resources, and an agile work environment that promotes their well-being.

We are searching for an innovative and experienced Data Engineer to join our new data initiatives team within the data group.

As a Data Engineer, you will:

  • Be part of a cross-functional team of data, backend, and DevOps engineers.
  • Ingest large volumes of new data, then inspect it and build a deep understanding of it in close collaboration with data scientists.
  • Lead the architecture, planning, design, and development of mission-critical, diverse, large-scale data pipelines across both public cloud and on-premises environments.

Requirements:

  • At least 3 years of experience working as a Data Engineer.
  • At least 3 years of Python development experience, with an emphasis on data analysis tools such as NumPy, pandas, Polars, and Jupyter notebooks (see the first sketch after this list).
  • Hands-on experience with Spark for large-scale batch processing (a minimal job is sketched below).
  • Hands-on experience with AWS data processing tools and concepts.
  • Proven ability to design, develop, and optimize complex solutions that move and/or manipulate large volumes of data.
  • Sound understanding of partitioning and optimization techniques for big data file formats such as Parquet, Delta Lake, Avro, and HDF5 (see the partitioning sketch below).
  • Experience with Docker, Linux, Kubernetes, and CI/CD tools and concepts.
  • Experience with data pipelining tools such as Airflow, Kubeflow, or similar (a minimal DAG is sketched below).
  • BSc or MSc degree in Computer Science, Engineering, Mathematics, or Statistics.
  • Understanding of ML concepts and processes.
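
To make the Python tooling requirement concrete, here is a minimal exploratory-inspection sketch using pandas and Polars. The file name and column names (symbol, price) are hypothetical, not specifics of Final's data.

```python
# Exploratory inspection of a trade dataset; file and column names
# are illustrative assumptions.
import pandas as pd
import polars as pl

# pandas: quick structural and statistical profile of a sample.
trades = pd.read_parquet("trades_sample.parquet")
trades.info()             # dtypes, non-null counts, memory footprint
print(trades.describe())  # summary statistics for numeric columns

# Polars: a lazy aggregation over the same file, typically faster
# on large frames because only the needed columns are scanned.
summary = (
    pl.scan_parquet("trades_sample.parquet")
    .group_by("symbol")
    .agg(
        pl.len().alias("n_trades"),
        pl.col("price").mean().alias("avg_price"),
    )
    .collect()
)
print(summary)
```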
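
For the Spark requirement, a minimal batch-job sketch might look like the following; the S3 paths, schema, and aggregation are assumptions made for illustration.

```python
# A minimal PySpark batch rollup; paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-trade-rollup").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/trades/")

daily = (
    trades
    .withColumn("trade_date", F.to_date("event_time"))
    .groupBy("trade_date", "symbol")
    .agg(
        F.count("*").alias("n_trades"),
        F.sum("quantity").alias("volume"),
    )
)

# Partition the output by date so downstream readers can prune files.
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/rollups/daily/"
)
spark.stop()
```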
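
The file-format requirement comes down to why partitioning matters: laying data out by the columns you filter on lets readers skip whole files. A small pyarrow sketch with made-up data:

```python
# Hive-style Parquet partitioning with pyarrow; the dataset is made up.
import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table({
    "symbol": ["AAPL", "AAPL", "MSFT"],
    "trade_date": ["2024-01-02", "2024-01-03", "2024-01-02"],
    "price": [185.6, 184.2, 371.0],
})

# One directory per (trade_date, symbol) pair, e.g.
# trades_partitioned/trade_date=2024-01-02/symbol=AAPL/...
ds.write_dataset(
    table,
    "trades_partitioned",
    format="parquet",
    partitioning=["trade_date", "symbol"],
    partitioning_flavor="hive",
    existing_data_behavior="overwrite_or_ignore",
)

# Filtering on a partition column prunes whole directories at read time.
dataset = ds.dataset("trades_partitioned", format="parquet",
                     partitioning="hive")
print(dataset.to_table(filter=ds.field("symbol") == "AAPL").to_pandas())
```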
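
And for the pipelining requirement, a minimal Airflow DAG of the common ingest-then-validate shape, assuming Airflow 2.x; the task callables are placeholders, not a real pipeline.

```python
# A minimal daily Airflow DAG; the ingest/validate bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pulling the day's raw files")          # stand-in for real ingest logic

def validate():
    print("running schema and row-count checks")  # stand-in for real checks

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    ingest_task >> validate_task  # validate runs only after ingest succeeds
```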

Advantage:

  • Hands-on experience with the Databricks platform.
  • Experience working on large-scale, complex on-premises systems.
  • Hands-on experience training and deploying models with ML frameworks.
  • Hands-on experience with lower-level programming languages such as C++ or Rust.