A major Belgian insurance group is modernising its legacy Azure SQL and Data Vault platform into a governed Lakehouse built on Databricks, Delta Lake and ADLS. This hands-on role accelerates Databricks adoption by implementing PySpark/Spark pipelines, Unity Catalog governance and Databricks Workflows to replace legacy batch processes and enable reporting with Power BI.
The mission
The organisation is executing a Lakehouse migration to replace its legacy Azure SQL Server and Data Vault models with a scalable Delta Lake architecture on Azure Data Lake Storage. The platform work focuses on Databricks Workspace, Delta Tables, Unity Catalog and orchestration through Azure Data Factory, with attention to performance tuning and cost control across cluster pool and serverless compute options. The outcome is a production-grade environment that supports analytics and reporting at enterprise scale.
Day to day, you will act as a senior, hands-on Databricks engineer and technology lead rather than a pure architect. You will design and build ingestion and transformation frameworks using notebooks, Databricks Workflows and Delta patterns, lead the migration of existing Data Vault models to medallion (Bronze/Silver/Gold) Delta structures, and coach engineering teams on CI/CD and deployment practices for Databricks assets. Early work includes aligning with the data architect, establishing engineering standards, enabling Unity Catalog governance and delivering reusable pipeline templates for ingestion and transformation.
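To give a flavour of the kind of reusable ingestion template this role delivers, here is a minimal PySpark sketch using Databricks Auto Loader to land raw files into a Bronze Delta table governed by Unity Catalog. The storage path and the catalog, schema and table names (such as insurance.bronze.claims) are hypothetical placeholders, not the client's actual environment.

```python
# A minimal sketch of a reusable Bronze ingestion template.
# The storage path and Unity Catalog table name are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def ingest_bronze(source_path: str, target_table: str, fmt: str = "json"):
    """Incrementally land raw files into a Bronze Delta table via Auto Loader."""
    return (
        spark.readStream.format("cloudFiles")          # Databricks Auto Loader
        .option("cloudFiles.format", fmt)
        .option("cloudFiles.schemaLocation", f"{source_path}/_schemas")
        .load(source_path)
        .withColumn("_ingested_at", F.current_timestamp())  # audit column
        .writeStream
        .option("checkpointLocation", f"{source_path}/_checkpoints")
        .trigger(availableNow=True)                    # incremental batch-style run
        .toTable(target_table)
    )

# Example invocation (all names are placeholders):
ingest_bronze(
    "abfss://raw@examplelake.dfs.core.windows.net/claims",
    "insurance.bronze.claims",
)
```

A template like this, parameterised per source and scheduled through Databricks Workflows, is the sort of reusable building block the role is expected to standardise across teams.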
Your responsibilities
- Lead and deliver production Databricks implementations, producing reusable ingestion and transformation frameworks that reduce time to production.
- Optimise Spark/PySpark jobs and Databricks configurations for performance and cost, applying cluster pool and serverless strategies.
- Migrate legacy SQL/Data Vault datasets into Delta Lake models, translating business logic into efficient medallion patterns (a minimal sketch follows this list).
- Implement CI/CD and deployment pipelines for Databricks notebooks, jobs and Unity Catalog artifacts to ensure repeatable releases.
- Coach and mentor internal engineers on Databricks best practices, governance, and observability to raise team maturity.
- Integrate Databricks outputs with enterprise reporting tools, supporting Power BI consumption and query performance tuning.
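To make the medallion migration responsibility concrete, the sketch below shows one common pattern: an incremental Bronze-to-Silver upsert using a Delta MERGE. The table names and the policy_id and updated_at columns are assumptions for illustration; the real keys and change-tracking logic would come from the client's existing Data Vault models.

```python
# Hedged sketch of an incremental Bronze-to-Silver upsert with a Delta MERGE.
# Table names and the policy_id / updated_at columns are illustrative only.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

# Keep the most recent Bronze row per business key.
w = Window.partitionBy("policy_id").orderBy(F.col("updated_at").desc())
updates = (
    spark.read.table("insurance.bronze.policies")
    .withColumn("_rn", F.row_number().over(w))
    .where("_rn = 1")
    .drop("_rn")
)

silver = DeltaTable.forName(spark, "insurance.silver.policies")

(
    silver.alias("s")
    .merge(updates.alias("u"), "s.policy_id = u.policy_id")
    .whenMatchedUpdateAll(condition="u.updated_at > s.updated_at")  # apply only newer changes
    .whenNotMatchedInsertAll()
    .execute()
)
```

Upserting on a business key with a recency condition is one straightforward way to translate Data Vault satellite-style change tracking into a Silver table; the production implementation would follow the standards agreed with the data architect.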
Your profile
Essential skills
- Proven, real-world Databricks implementation experience in enterprise environments, delivering end-to-end pipelines.
- Strong hands-on expertise with Databricks Workspace, Spark / PySpark and SQL for data engineering workloads.
- Deep knowledge of Delta Lake / Delta Tables, Unity Catalog, and Databricks Workflows.
- Practical experience with Azure Data Lake Storage (ADLS) and Azure Data Factory (ADF) for orchestration.
- Demonstrable ability to optimise performance and manage Databricks cost and pricing trade-offs.
- Experience migrating legacy platforms to modern cloud data platforms and defining engineering best practices.
Preferred skills
- Infrastructure as Code experience (Terraform, Bicep or ARM) and DevOps pipelines (Azure DevOps or GitHub Actions).
- Exposure to streaming or event-driven architectures and Data Vault migration scenarios.
- Familiarity with Power BI integration patterns.
Languages
- English, C1
- Dutch, B2
- French, B2