Job Summary:
We are seeking an experienced Spark Developer with strong expertise in Java, Spring Boot, and Oracle SQL/PL/SQL to design and implement scalable data solutions. The role involves building high-performance data pipelines, optimizing big-data workloads, and deploying applications in modern cloud environments.
Key Responsibilities:
- Design and develop scalable data pipelines for large-scale data processing.
- Build and maintain Apache Spark batch and streaming applications in Java.
- Tune Spark jobs for performance, scalability, and cost efficiency.
- Integrate data from multiple sources and systems, ensuring data quality and consistency.
- Develop and maintain complex Oracle SQL and PL/SQL scripts, stored procedures, and performance-tuned queries.
- Deploy, manage, and troubleshoot Spark applications on Kubernetes or other cloud-based container orchestration platforms.
- Collaborate with cross-functional teams to deliver robust and efficient data solutions.
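As a rough illustration of the Kubernetes deployment work described above, Spark applications are typically submitted to a cluster with `spark-submit`. This is a minimal sketch only; the API server address, namespace, image, main class, and jar path below are placeholders, not details of any actual environment:

```shell
# Sketch: submit a Spark application to a Kubernetes cluster in cluster mode.
# All names (host, namespace, image, class, jar) are hypothetical placeholders.
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:443 \
  --deploy-mode cluster \
  --name example-pipeline \
  --class com.example.ExamplePipeline \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.namespace=data-pipelines \
  --conf spark.kubernetes.container.image=registry.example.com/spark-app:latest \
  local:///opt/spark/jars/example-pipeline.jar
```

The `local://` scheme indicates the jar is already present inside the container image rather than uploaded at submit time.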
Required Skills & Experience:
- Strong proficiency in Java and Spring Boot for backend development.
- Hands-on experience with Apache Spark (batch and streaming).
- Expertise in Oracle SQL and PL/SQL, including query optimization and stored procedure development.
- Solid understanding of performance tuning for Spark and database queries.
- Experience with Kubernetes, containerization, and cloud platforms (AWS, Azure, or GCP).
- Familiarity with distributed systems and large-scale data processing.
- Strong problem-solving skills and ability to troubleshoot complex data workflows.
Preferred Qualifications:
- Experience with data integration tools and ETL processes.
- Knowledge of CI/CD pipelines for data applications.
- Exposure to data security and compliance best practices.