Data Engineer (SQL, PySpark, Databricks)
Date:
Nov 25, 2024
Location:
Bangalore, KA, IN, 560100
Req ID:
21475
Summary
We’re looking for a high-performing, self-motivated Data Engineer with strong healthcare software solution expertise to join our product development team and drive critical initiatives at Gainwell Technologies. This role requires close collaboration with engineering, infrastructure, and product teams, and involves ongoing interaction with a variety of end users and internal stakeholders.
Your role in our mission
Essential Job Functions
- Participates in client/project meetings for highly complex project definition, needs assessment, and design review. Evaluates users' needs and requirements and provides technical expertise in the development of technical, structural, and organizational specifications. Determines the appropriateness of data for storage and the optimum storage organization.
- Designs, develops, and integrates highly complex database systems for internal and external users/clients.
- Creates, documents and implements standards and/or complex modeling to monitor and enhance the capacity and performance of the database. Codes complex programs and derives logical processes on technical platforms. Designs user interfaces and business application prototypes.
- Performs complex analyses and reviews applications being released into production. Develops and oversees the implementation of test application code in client server environments to ensure that software conforms to build management practices.
- Remains abreast of and analyzes new and emerging technologies and tools for applicability to the field. Prepares recommendations and presentations for management.
- Provides leadership and work guidance to less experienced personnel.
What we're looking for
- Bachelor's degree in computer science or a related field
- Technical skills: SQL, Spark/PySpark, Databricks, Python
- 9+ years of total experience, including 4+ years of experience in ETL processing/data architecture or equivalent
- 3+ years of experience working with big data technologies on AWS/Azure/GCP
- 3+ years of experience with the Apache Spark/Databricks framework (Python/Scala)
- Experience working with different database structures (e.g., transactional vs. data warehouse)
- Databricks and AWS developer/architect certifications are a big plus
What you should expect in this role
- Opportunities to travel through your work (0-10%)