Senior Data Engineer - Apache Spark/AWS - Remote

Date: Nov 11, 2021

Location: Any city, US (remote) in TX, ND, DC, ME, AR, FL, OR, OH, DE, NH, NC, WA, GA, SC, LA, OK, ID, NJ, NY, WV, CA, MA, IL

Company: Gainwell Technologies LLC

Gainwell Technologies is the leading provider of technology solutions for health and human services programs. We are a new company with an exceptional track record of over 50 years of proven experience, thanks to our great employees. We're looking for a dynamic data engineer with Apache Spark and AWS experience to join the product development team at Gainwell Technologies. You will have the opportunity to work as part of a cross-functional team to define, design, and deploy frameworks for data collection, normalization, transformation, storage, and reporting on AWS to support the analytic missions of Gainwell and its clients.


Essential Job Functions

  • Design, develop, and deploy data pipelines, including ETL processes for ingesting, processing, and delivering data using the Apache Spark framework.
  • Monitor, manage, validate and test data extraction, movement, transformation, loading, normalization, cleansing and updating processes. Build complex databases that are useful, accessible, safe and secure.
  • Coordinate with users to understand data needs and deliver data with a focus on data quality, data reuse, consistency, security, and regulatory compliance.
  • Collaborate with team members on data models and schemas in our data warehouse.
  • Collaborate with team members on documenting source-to-target mapping.
  • Conceptualize and visualize data frameworks.
  • Communicate effectively with various internal and external stakeholders.

Basic Qualifications

  • Bachelor's degree or equivalent combination of education and experience
  • Bachelor's degree in computer science or a related field preferred
  • Six or more years of relevant experience in ETL processing/data architecture, or equivalent education
  • 2+ years of experience working with big data technologies on AWS (S3/Glue/EMR/RedShift).
  • 2+ years of experience with the Apache Spark/Databricks framework (Python/Scala)
  • Experience working with different database structures (e.g., transactional vs. data warehouse)

Other Qualifications

  • Strong project planning and estimating skills related to area of expertise
  • Strong communication skills
  • Good leadership skills to guide and mentor less experienced personnel
  • Ability to be a high-impact player on multiple simultaneous engagements
  • Ability to think strategically, balancing long and short-term priorities
  • Willingness to travel

Work Environment

  • Client or office environment / may work remotely
  • Occasional evening and weekend work