Job Details
Data Engineer
- £65,000 - £75,000 per annum
- West London, London
- Permanent
Data Engineer
London (five days per week on site)
£65-75k (dependent on experience)
The Highlights
This global logistics business, based in central London, is undergoing an exciting data transformation programme as it invests in a new team charged with building and implementing a new Azure Databricks platform. Working five days a week in the central London office, you'll join this growing team and play a key role in building and deploying modern Azure Databricks-based data solutions, enabling the business to make faster, data-driven decisions.
The business is evolving and maturing its approach to data, creating an opportunity to build data solutions that have a significant impact on business performance. Working closely with a multi-skilled team of data science, engineering, and analytics professionals, you will have the opportunity to build your skills in analytics engineering, responding to business and project needs rather than operating in a narrow silo.
You'll work hands-on with Azure Databricks, Azure Data Factory, Delta Lake, and Power BI to design scalable data pipelines, implement efficient data models, and ensure high-quality data delivery. This is a fantastic opportunity to join a newly created team, work on new development projects and contribute to the implementation of a new data strategy.
The Position
Your day-to-day work will focus on building solutions for the business. In addition, you'll be responsible for the following:
- Designing, developing, and optimising end-to-end data pipelines (batch and streaming) using Azure Databricks, Spark, and Delta Lake.
- Implementing medallion architecture and building scalable ETL/ELT processes with Azure Data Factory and PySpark.
- Partnering with the data architecture function to support data governance, using tools such as Azure Purview and Unity Catalog.
- Driving data consistency, accuracy, and reliability across pipelines.
- Working collaboratively with analysts to validate and refine datasets for reporting.
- Applying DevOps and CI/CD best practices (Git, Azure DevOps) for automated testing and deployment.
- Optimising Spark jobs, Delta Lake tables, and SQL queries for performance and cost efficiency.
- Troubleshooting and resolving data pipeline issues proactively.
- Partnering with data architects, analysts, and business teams to deliver end-to-end solutions.
- Staying ahead of emerging data technologies (e.g., streaming with Kafka/Event Hubs, knowledge graphs).
- Advocating for best practices in data engineering across the organisation.
Skills & Experience
To be considered for this position, you'll need to demonstrate commercial experience working as a data engineer or analytics engineer, supported by the following:
- Experience of building data solutions for complex commercial business processes.
- Experience of working within logistics or heavy industry sectors (e.g. oil & gas, engineering, mining, shipping, freight/logistics, utilities, airlines, etc.).
- Hands-on experience with Azure Databricks, Delta Lake, Data Factory, and Synapse.
- Strong understanding of Lakehouse architecture and medallion design patterns.
- Proficient in Python, PySpark, and SQL (advanced query optimization).
- Experience building scalable ETL pipelines and data transformations.
- Knowledge of data quality frameworks and monitoring.
- Experience with Git, CI/CD pipelines, and Agile methodologies.
- Ability to write clean, maintainable, and well-documented code.
- Experience of Power BI or other visualization tools.
- Ideally, knowledge of IoT data pipelines.
McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.