
Senior Data Engineer (Spark & Python Specialist)

Posted 1 day ago · 15 views
General Details
Advertised By: Agency
Company Name: Executive Placements
Job Type: Full-Time
Description
Role:
Our client is seeking a Senior Cloud Data Engineer to join their Engineering team. In this role, you will be a key technical contributor responsible for building, optimizing, and maintaining high-performance data processing engines.

You will apply your deep expertise in Spark best practices to develop modular, portable PySpark applications, ensuring the company's platform remains provider-agnostic while operating within modern environments like Microsoft Fabric.

As an experienced member of the Data Engineering team, you will focus on refactoring legacy logic into scalable Python-centric solutions, optimizing the company's data lakehouse architecture, and contributing to the overall technical excellence of their data platform.

Key Responsibilities
  • Spark Implementation & Optimization: Act as a senior technical resource for Spark internals, applying best practices in memory management, shuffle tuning, and partitioning to ensure their Spark-based processing is performant and cost-effective.
  • Cloud-Agnostic Development: Develop and maintain data pipelines using Python, PySpark, and Delta Lake/Parquet, adhering to their strategy of decoupling code from specific cloud provider services and reducing reliance on GUI-based tools like ADF.
  • Refactoring & Modernization: Contribute to the ongoing evolution of the company's platform by refactoring complex SQL-based ETL into modular, testable, and maintainable Python libraries.
  • Lakehouse Engineering: Build and optimize Medallion Architecture (Bronze/Silver/Gold) layers using Delta Lake, ensuring efficient data versioning, schema evolution, and storage performance (Z-Ordering, Vacuuming).
  • Code-First Orchestration: Support the transition toward code-centric orchestration patterns (e.g., Airflow, Dagster, or Python-based wrappers) that prioritize portability and reduce dependency on cloud-specific orchestration tools.
  • Technical Excellence: Participate in code reviews, mentor junior engineers in PySpark best practices, and contribute to the development of automated testing frameworks (Pytest) to ensure high-quality, reliable data delivery.
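The refactoring and testing responsibilities above can be sketched in plain Python. This is a minimal illustration only, not part of the role description: the business rule, function names, and thresholds are all hypothetical, standing in for the kind of SQL CASE logic a candidate might migrate into a reusable, Pytest-covered library.

```python
# Hypothetical example: a SQL expression such as
#   CASE WHEN amount >= 1000 THEN 'gold'
#        WHEN amount >= 100  THEN 'silver'
#        ELSE 'bronze' END
# refactored into a small, engine-agnostic Python library.

def tier_for_amount(amount: float) -> str:
    """Classify a transaction amount into a tier (illustrative thresholds)."""
    if amount >= 1000:
        return "gold"
    if amount >= 100:
        return "silver"
    return "bronze"

def add_tier(rows):
    """Apply the tier rule to an iterable of row dicts.

    Keeping the rule in plain Python makes it portable: the same function
    can be wrapped in a PySpark UDF or called from unit tests directly.
    """
    return [{**row, "tier": tier_for_amount(row["amount"])} for row in rows]

# A Pytest-style check (discovered by `pytest`, or runnable directly):
def test_add_tier():
    rows = [{"amount": 50.0}, {"amount": 250.0}, {"amount": 5000.0}]
    assert [r["tier"] for r in add_tier(rows)] == ["bronze", "silver", "gold"]

test_add_tier()
```

Because the logic lives in an ordinary function rather than embedded SQL or a GUI pipeline step, it can be versioned, code-reviewed, and tested without standing up any cloud infrastructure.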

Required Skills and Experience
  • Senior-Level Spark: 6+ years of experience with Spark/PySpark, with the ability to diagnose performance bottlenecks via the Spark UI and optimize complex DAGs.
  • Advanced Python: Strong proficiency in production-grade Python, with experience building reusable libraries and implementing automated testing.
  • SQL Proficiency: Solid T-SQL skills to accurately interpret and migrate existing logic into the new Python-centric environment.
  • Azure Synapse Analytics &
Id Subtitle 1352331629
Apply now:
Executive Placements
Selling for 1 year
Professional Seller
Total Ads: 5.13K
Active Ads: 5.13K
Total Views: 12.97M
Contact Executive Placements