Freelance Data Engineer
Our client is undertaking a strategic initiative to modernise and scale their enterprise data platform in the cloud. The project aims to centralise and streamline data flows from multiple internal systems into a unified Azure-based environment that enables reliable analytics, reporting, and data-driven decision-making across the organisation. As part of this initiative, the team is building robust, scalable data pipelines and enrichment processes that transform raw operational data into high-quality, accessible datasets for business intelligence and advanced analytics.
The project focuses on improving how data is ingested, processed, and distributed across the organisation while ensuring strong governance, reusability, and performance. This includes integrating various internal systems, enabling both real-time and batch data processing, and delivering well-structured data models that support reporting platforms such as Power BI. The successful candidate will play a key role in designing and implementing data workflows that are reliable, scalable, and aligned with modern DevOps and data engineering practices.
Role Responsibilities
- Work extensively with SQL Server to manage, transform, and optimise structured data used within the organisation’s data platform.
- Design, build, and maintain data pipelines within the Azure ecosystem, ideally using Azure Databricks, alongside services such as Azure Data Factory, Azure Functions, Azure Stream Analytics, Log Analytics, and Azure DevOps.
- Develop and maintain data workflows and enrichment processes in Python, particularly within a Databricks environment, enabling efficient data transformation and processing at scale.
- Explore and evaluate new technologies within the data ecosystem, such as Redis, RabbitMQ, Neo4j, Apache Arrow, and similar tools, to enhance performance, integration capabilities, and data architecture flexibility.
- Collaborate closely with business analysts and stakeholders to clarify requirements, translate business needs into technical solutions, and design reusable components that integrate seamlessly within the broader data platform.
- Participate actively in testing and validation processes, working within DevOps practices to ensure the reliability, quality, and maintainability of delivered solutions.
- Produce clear and well-structured technical documentation covering data pipelines, workflows, and architectural decisions to support maintainability and knowledge sharing across teams.
- Work with Power BI to ensure data structures and datasets are optimised for reporting and analytics consumption.
- Integrate and process data from a range of sources; experience with SAP data sources is beneficial but not essential.
- Engage in constructive technical discussions, apply logical problem-solving, and provide clear, practical feedback when evaluating solutions or improving existing implementations.
- Take a cross-organisational perspective on data architecture, ensuring solutions promote standardisation, scalability, and reuse across different teams and business domains.