Ref: a0GP900000Gh5Cw.2

Data Engineer - Outside IR35 - Remote

England, London

Job description

Role Overview

We are seeking a skilled Data Engineer with strong expertise in Microsoft Azure and Databricks to design, build, and optimize scalable data pipelines and analytics solutions. You will play a key role in building modern data platforms that support business intelligence, advanced analytics, and machine learning initiatives.

This role requires hands-on experience with cloud-native data architectures, distributed processing, and best practices in data engineering.

Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Factory, Azure Databricks, and related Azure services
  • Build and optimize data workflows using Apache Spark within Databricks
  • Develop and maintain data lakes and lakehouse architectures using Azure Data Lake Storage
  • Implement data modeling solutions for analytics and reporting
  • Ensure data quality, governance, and security best practices
  • Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable data solutions
  • Monitor and optimize performance, cost, and reliability of cloud-based data solutions
  • Implement CI/CD pipelines for data workflows using Azure DevOps or similar tools
  • Troubleshoot and resolve data-related technical issues

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
  • 3+ years of experience in data engineering
  • Strong hands-on experience with Microsoft Azure data services
  • Proficiency in Azure Databricks and Apache Spark (PySpark or Scala)
  • Experience with Azure Data Factory (ADF)
  • Strong SQL skills and experience with relational and non-relational databases
  • Experience with Azure Data Lake Storage (ADLS)
  • Knowledge of data warehousing concepts and dimensional modeling
  • Experience working with large-scale structured and unstructured datasets

Preferred Qualifications

  • Experience implementing lakehouse architectures
  • Familiarity with Delta Lake and data versioning concepts
  • Experience with streaming data pipelines (e.g., Event Hubs, Kafka)
  • Knowledge of Infrastructure as Code (ARM, Bicep, Terraform)
  • Experience with Power BI or other BI tools
  • Azure certifications (e.g., DP-203: Data Engineering on Microsoft Azure)

Technical Skills

  • Python (required)
  • SQL (advanced level)
  • Spark / PySpark
  • Azure Data Factory
  • Azure Databricks
  • Azure Data Lake
  • Git & CI/CD pipelines
  • REST APIs (nice to have)

To apply for this role please submit your CV or contact Dillon Blackburn on 0191 255 1428 or at d.blackburn@tenthrevolution.com.

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, the Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.