Role: Software Engineer 3 (Senior)
📍 Location: Charlotte, NC (Uptown) – Onsite/Hybrid
⏳ Duration: 12+ Months
🗣 Interview: 2 Rounds
🛂 Work Authorization: USC/GC preferred; H1B transfers will also be considered
Role Overview: We are seeking a Senior Software/Data Engineer with strong experience in Spark, Python, and cloud platforms to work on large-scale, data-intensive systems. This role involves building and optimizing distributed data solutions and supporting modernization initiatives.
Key Responsibilities:
- Develop and optimize data pipelines using Apache Spark/PySpark
- Build scalable applications using Python (Django preferred)
- Work on cloud-based solutions (GCP or similar)
- Support data platform modernization (e.g., Hadoop → Cloud)
- Collaborate with cross-functional teams in high-scale environments
Required Skills
- 5–7 years of software/data engineering experience
- 3+ years with Apache Spark (PySpark/Scala/Java)
- 2+ years of Python development (Django preferred)
- Experience with GCP (GCS, IAM, GKE, Cloud Run) or similar cloud
- Hands-on with Kubernetes/OpenShift/Docker
- Strong understanding of distributed systems & data processing
Technical Stack
- Spark (PySpark/Scala/Java)
- Python (Django)
- GCP / Cloud Platforms
- Hadoop ecosystem & migration
- Kubernetes / OpenShift / Docker
- Kafka / Messaging systems
- CI/CD (GitHub Actions, Helm, Sonar, Harness)
- Monitoring (Prometheus, Grafana)
Nice To Have
- Experience with AI/ML or GenAI
- Financial Services/Banking domain experience
- Experience with large-scale data platforms
Ideal Candidate
- Strong in data engineering + cloud
- Hands-on with Spark + Python + GCP
- Experience in migration/modernization projects
- Self-driven, proactive, and solution-oriented
Skills: Cloud Run, data, Python, software, IAM, GKE, Kubernetes, platforms, banking domain, GCP, GCS, financial services, Apache, Kafka, Django, OpenShift, Docker, Spark, cloud, modernization