We are seeking a highly skilled and experienced contract Microsoft Fabric Data Engineer with deep expertise in designing and building advanced data solutions. The role has a strong emphasis on real-time data processing and robust data observability, leveraging the full capabilities of the Microsoft Azure cloud ecosystem, particularly Microsoft Fabric. For the right candidate, we are willing to sponsor a subclass 482 visa.
This is a critical role that will directly impact our ability to deliver immediate, high-quality data to our business stakeholders and power our next generation of analytical and operational applications. You will be instrumental in designing, building, and optimising our real-time data pipelines, ensuring data reliability, and establishing best practices for data quality and system resilience.
Key Responsibilities:
- Microsoft Fabric Expertise: Act as a subject matter expert for Microsoft Fabric, designing and implementing end-to-end data solutions that leverage its capabilities for unified data analytics, data warehousing, and, in particular, real-time processing and data product delivery.
- Design & Build Real-time Data Pipelines: Architect, develop, and implement highly scalable, fault-tolerant real-time data ingestion and processing pipelines using Azure services, with a strong focus on Azure Event Hubs, Azure Stream Analytics, Azure Databricks (Spark Structured Streaming), and Microsoft Fabric's real-time capabilities.
- Implement Data Observability Frameworks: Establish and mature our data observability capabilities by designing and implementing robust monitoring, alerting, and logging solutions for data pipelines and data quality. Utilise tools and techniques for tracking data freshness, volume, schema drift, and lineage (e.g., custom solutions, commercial tools, or open-source tools such as Great Expectations).
- Drive Data-Centric DevOps & CI/CD: Champion and implement DevOps best practices for data engineering, including Infrastructure-as-Code (IaC) using Terraform/Bicep, automated testing, version control (Git), and CI/CD pipelines (Azure DevOps) for deploying data solutions efficiently and reliably.
- Data Quality & Governance: Collaborate with data governance teams to embed data quality checks, validation rules, and security controls directly into data pipelines, ensuring compliance and data integrity.
- Performance Optimisation: Continuously monitor and optimise the performance, scalability, and cost-efficiency of existing and new data solutions, especially those handling high-velocity data.
- Collaboration & Mentorship: Work closely with data scientists, data analysts, and software engineers to understand data needs, integrate data products, and provide technical guidance to junior team members.
- Troubleshooting & Incident Resolution: Proactively identify, diagnose, and resolve complex data pipeline issues, ensuring minimal downtime and data loss.
Required Skills & Experience:
- Proven experience (5+ years) as a Data Engineer, with a strong focus on real-time data processing and cloud-native solutions.
- Deep expertise in Microsoft Azure cloud data services is mandatory, including significant hands-on experience with Azure Event Hubs, Azure Stream Analytics, Azure Databricks (Spark), and Azure Data Factory.
- Demonstrable, hands-on experience with Microsoft Fabric is highly advantageous.
- Strong programming skills in Python or Scala for data manipulation and pipeline development.
- Expertise in SQL, with a focus on performance tuning and complex data transformations.
- Experience in implementing data observability principles and tools (e.g., logging, metrics, tracing, data quality checks).
- Solid understanding and practical application of DevOps principles for data pipelines, including CI/CD automation, Git, and IaC (Terraform/Bicep).
- Familiarity with containerisation (Docker, Kubernetes) concepts for deploying data workloads.
- Experience with monitoring tools (e.g., Azure Monitor, Datadog, Grafana).
- Strong understanding of data warehousing concepts, data modelling, and ETL/ELT methodologies.
- Excellent problem-solving skills and the ability to work independently and as part of a dynamic team.
- Strong communication and interpersonal skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
Desirable (Bonus Points):
- Experience with other stream-processing technologies such as Apache Kafka or Apache Flink.
- Understanding of MLOps concepts and building data pipelines for machine learning models.
- Exposure to data governance frameworks and tools.
- Microsoft data engineering certifications (e.g., DP-203: Azure Data Engineer Associate).
If you are a driven Data Engineer with a passion for real-time data, data observability, and cloud-native excellence, we encourage you to apply!