Senior Data Engineer (AWS Glue, Python and PySpark)

Date: Feb 3, 2024
Location: Bangalore, KA, IN, 560100
Req ID: 20289

Summary

As a Senior Data Engineer (AWS Glue, Python and PySpark) at Gainwell, you can contribute your skills as we harness the power of technology to help our clients improve the health and well-being of the members they serve — a community’s most vulnerable. Connect your passion with purpose, teaming with people who thrive on finding innovative solutions to some of healthcare’s biggest challenges. Here are the details on this position.

Your role in our mission


  • At Gainwell Technologies we build innovative technologies and solutions to deliver better health and human services outcomes to our customers.
  • Our offering spans Medicaid Management, Care Quality and Cost Containment, Human Services and Public Health, Pharmacy Solutions, Systems Integration, Interoperability and Analytics.
  • The opportunity is for an Application Developer in Product Development. Our new offering will target the healthcare market and align with our corporate strategy. You will have the opportunity to shape, build and grow a new product offering while influencing the go-to-market strategy and roadmap.
  • Our approach is agile, fast-paced and exploratory: quick proofs of concept, fail fast, learn fast, and an open mindset toward changing tooling, design, processes or methodology. In short, find out what works best and move on to the next step. Regardless of role, we are one team with one goal in mind.
  • Build PySpark applications using Spark DataFrames in Python.
  • Develop and implement strategies to extract data from various sources and APIs. 
  • Perform data cleansing and transformation tasks to standardize and prepare the data for analysis. 
  • Utilize AWS ETL services, such as AWS Glue, to automate and optimize data processing workflows. 
  • Collaborate with stakeholders to understand data requirements and provide solutions for data extraction and transformation.
  • Monitor and troubleshoot data extraction and ETL processes to ensure data quality and integrity. 
  • Stay updated with industry trends and best practices in web scraping, API integration, and AWS ETL. 

What we're looking for


  • 5-7 years of experience in a Data Engineer role, including a minimum of 4 years of AWS Glue, Python and PySpark development experience.
  • Data warehouse experience is required.
  • Experience working with various APIs to extract data from third-party sources. 
  • Proficiency in programming languages such as Python, with experience in data manipulation and transformation.
  • Familiarity with AWS ETL services, especially AWS Glue, for data processing and transformation. 
  • Strong problem-solving skills and the ability to analyze and interpret complex data structures. 
  • Excellent attention to detail and a commitment to delivering high-quality, accurate data. 
  • Experience optimizing Spark jobs that process large volumes of data.
  • Hands-on experience with version control tools such as Git.
  • Ability to work in a dynamic and fast-paced AWS environment while designing and building client-facing products and solutions.
  • Experience in communicating with stakeholders, other technical teams, and management to collect requirements, describe software product features, technical designs, and product strategy.
  • Experience mentoring junior software engineers to improve their skills and make them more effective.
  • Working knowledge of distributed systems and a willingness to jump in and learn what is happening in the back-end code.
  • A solid grasp of fundamental algorithms and their applications.

What you should expect in this role


  • Remote opportunity.
  • Fast-paced, challenging and rewarding work environment.
  • Will require late-evening work to overlap with US working hours.
  • Opportunities to travel through your work (0-10%).