The organisation runs an enterprise Azure data platform used by multiple business units for consolidated reporting and operational analytics. This Technical Support Engineer role exists to enforce engineering standards and deliver robust data pipelines across the Azure Data ecosystem, with a focus on Azure Synapse and Azure Data Factory (ADF). The role is based in Brussels.
The mission
You will work inside the data engineering function that implements the organisation's cloud data warehouse and transformation layers. The platform uses Azure Synapse for warehousing, ADF and Mapping Data Flow (MDF) for orchestration and ELT, and Databricks with PySpark for heavier transformations. Your work shapes how teams build production-grade pipelines, ensure data quality, and keep ingestion and transformation performant and maintainable.
As the technical reference between Data Platform Architects and multiple engineering teams, your day-to-day includes validating pull requests, translating architecture into actionable patterns, and intervening on critical components when needed. You will coach junior engineers, contribute to standards and tooling, and take ownership of complex pipeline incidents and performance tuning across ingestion, transformation and consumption layers.
Your responsibilities
- Lead technical reviews and approve pull requests to ensure compliance with data engineering standards and secure, maintainable code.
- Translate platform architecture into concrete implementation patterns for ADF, MDF, Databricks and Synapse, and document those patterns for engineering teams.
- Design, develop and optimise production data pipelines using Azure Synapse, ADF Mapping Data Flow and Databricks PySpark to improve performance and reliability.
- Mentor and coach junior data engineers, run knowledge-sharing sessions, and provide structured feedback to improve team practices.
- Diagnose and resolve operational incidents affecting ETL/ELT jobs, data quality, and query performance, collaborating with platform architects when required.
- Contribute to dimensional modelling decisions, ensuring star schema design and data warehouse (DWH) concepts are applied to build scalable consumption layers.
Your profile
Essential skills
- Proven senior-level experience implementing and operating pipelines in the Azure Data ecosystem, including Azure Synapse, Azure Data Factory (ADF) and Mapping Data Flow (MDF).
- Strong hands-on experience with Databricks and PySpark for transformation logic and performance tuning.
- Solid SQL skills and practical experience with dimensional modelling, star schema design and general DWH concepts.
- Experience acting as a technical authority: performing PR reviews, defining standards and coaching multiple teams.
- Analytical, structured problem-solver with strong written and verbal communication skills for both technical and non-technical stakeholders.
Preferred skills
- Experience with Power BI and semantic data modelling for the consumption layer.
- Familiarity with CI/CD for data pipelines and infrastructure-as-code patterns applied to data engineering.
Languages
- English, C1 (fluent)
- French, C1 (fluent)
- Dutch, B2 (good)