True end-to-end supply chain cost control is in the details!
Fixefy bridges the gap between operations and finance. We supercharge companies that rely on outsourced logistics and transportation services by unlocking a new dimension of savings and insights from enterprises' existing data.
The platform mines, analyzes, validates, and reconciles data scattered across supplier invoices and disparate systems, hidden in free-text communications, and locked in the minds of the enterprise's most experienced people. It empowers companies to harness that data to bring clarity and order back to supply chain expenditure.
As a Data Engineer at Fixefy, you will play a pivotal role in transforming supply chain cost control through data engineering. You will design, implement, and maintain pipelines that extract, transform, and load (ETL) large volumes of heterogeneous data from a variety of sources. Your expertise in data modeling, SQL and NoSQL databases, and data orchestration platforms will be essential for building scalable solutions that support our platform's analytical capabilities. You will collaborate closely with cross-functional teams to understand business requirements, optimize data workflows, and drive data-driven decision-making. This role offers an exciting opportunity to work at the intersection of supply chain management, finance, and technology, advancing our mission to empower companies with actionable insights into their supply chain expenditure.
Responsibilities
- Develop and maintain data pipelines to process and analyze large-scale datasets efficiently.
- Implement data workflows and scheduling tasks to ensure reliable and scalable data processing.
- Design and optimize data models to support analytical queries and reporting requirements, balancing performance and scalability.
- Utilize SQL and NoSQL databases to store and retrieve structured and unstructured data, ensuring data integrity and consistency.
- Write efficient and maintainable Python code to automate data processing tasks, perform data transformations, and integrate with existing systems.
- Collaborate with cross-functional teams to understand business requirements, translate them into technical specifications, and deliver scalable data solutions.
- Communicate effectively with team members and stakeholders to ensure alignment on project goals, timelines, and deliverables.
- Demonstrate strong analytical skills to troubleshoot data-related issues, optimize data workflows, and improve overall system performance.
- Take ownership of projects, work independently, and proactively identify opportunities for process improvements and innovation.
- Stay updated on the latest trends and advancements in data engineering, supply chain management, and FinTech ecosystem to drive continuous learning and improvement.
- Thoroughly document new tools and working processes.
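To give a flavor of the day-to-day work, the extract-transform-load pattern named in the responsibilities above can be sketched in a few lines of standard-library Python. This is a minimal, hypothetical illustration: the table, field names, and invoice data are invented for the example and are not Fixefy's actual schema or pipeline.

```python
# Minimal ETL sketch: parse a raw supplier-invoice export, keep approved
# invoices, and load them into an analytical store (in-memory SQLite here).
# All names and data are illustrative assumptions, not a real Fixefy schema.
import csv
import io
import sqlite3

RAW_INVOICES = """supplier,amount_eur,status
Acme Logistics,1200.50,approved
Beta Freight,980.00,pending
Acme Logistics,300.25,approved
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV text (e.g. a supplier invoice export) into rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Keep approved invoices only and cast amounts to float."""
    return [
        (r["supplier"], float(r["amount_eur"]))
        for r in rows
        if r["status"] == "approved"
    ]

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Write cleaned records into an analytical table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS invoices (supplier TEXT, amount_eur REAL)"
    )
    conn.executemany("INSERT INTO invoices VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_INVOICES)), conn)
total = conn.execute("SELECT SUM(amount_eur) FROM invoices").fetchone()[0]
print(total)  # total approved spend: 1500.75
```

In production, each of these three steps would typically run as a scheduled task in an orchestration platform such as Airflow, Prefect, or Dagster, with a warehouse rather than SQLite as the target.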
Qualifications
- Data Engineering, Data Modeling, and ETL skills
- 1-2 years of experience with data processing technologies such as Apache Spark
- 1-2 years of experience with data orchestration platforms such as Apache Airflow, Prefect, or Dagster
- 1-2 years of experience working with SQL and NoSQL databases
- Experience with containerization technologies such as Docker
- Proficiency in Python
- Proficiency in SQL
- Excellent analytical skills
- Strong communication and interpersonal skills, a team player
- Strong technical writing & documentation skills, attention to detail