
Jr. Cloud Data Engineer (C2H)

  • Location: Madison, Wisconsin
  • Type: Contract To Hire
  • Job #2186

Carex’s partner is an insurance company based in Madison, WI, with an office in Boston, MA.  They are seeking a Junior/Entry-level Cloud Data Engineer to join their team on a contract-to-hire basis.

The Cloud Data Engineer is a specialized role focused on designing and implementing systems on public cloud infrastructure to deliver greater analytical and business value from a wide range of data sources. You will work with the team to design and develop high-performance, resilient, automated data pipelines, streams, and applications, selecting and adapting technologies to ingest, transform, classify, cleanse, and expose data to meet objectives. Your skills and education in data management technologies will enable you to match the right technologies to the required schemas and workloads. The focus is on the AWS and GCP platforms, with a strong serverless bias. The team relies heavily on Python, PySpark, BigQuery, and related technologies, and works in an Agile, DevOps culture. They expect you to bring the specialized skills noted below and to come prepared to learn rapidly, building on the foundation of your skills and education in this field.

What you'll do:

  • Build and maintain serverless data pipelines at terabyte scale using AWS and GCP services, including AWS Glue, PySpark and Python, AWS Redshift, AWS S3, AWS Lambda and Step Functions, AWS Athena, AWS DynamoDB, GCP BigQuery, GCP Cloud Composer, GCP Cloud Functions, Google Cloud Storage, and others
  • Integrate new data sources from enterprise systems and external vendors using a variety of ingestion patterns, including streams, SQL ingestion, files, and APIs
  • Maintain and support the existing data pipelines using the technologies noted above
  • Help develop and enhance the data architecture of the new environment, recommending optimal schemas, storage layers, and database engines (relational, graph, columnar, and document-based) according to requirements
  • Develop real-time/near-real-time data ingestion from a range of data integration sources, including business systems, external vendors and partners, and enterprise sources
  • Provision and use machine-learning-based data wrangling tools such as Trifacta to cleanse and reshape third-party data and make it suitable for use
  • Participate in a DevOps culture by developing deployment code for applications and pipeline services
  • Develop and implement data quality rules and logic across integrated data sources
  • Serve as an internal subject matter expert and coach, training team members in the use of distributed computing frameworks and big-data services and tools, including AWS and GCP services and projects

What you'll bring (experience is expected to include hands-on work and formal education):

  • Bachelor’s degree in Computer Science, Mathematics, Engineering, or equivalent work experience
  • Some exposure to working with datasets containing a very high volume of records or objects
  • Intermediate level programming experience in Python and SQL
  • One year of experience with Spark or other distributed computing frameworks (e.g., Hadoop, Cloudera)
  • Two years of experience with relational databases (typical examples include PostgreSQL, Microsoft SQL Server, MySQL, Oracle)
  • Some exposure to AWS services, including S3, Lambda, and Step Functions, and to one or more AWS database technologies, including Redshift, DynamoDB, or Athena
  • Experience with contemporary data file formats such as Apache Parquet and Avro, preferably with compression codecs such as Snappy and bzip2
  • Experience analyzing data for data quality and supporting the use of data in an enterprise setting

Desired Experience and Skills:

  • Some exposure to machine learning tools and practices, including DataRobot, Amazon SageMaker, or others
  • Some exposure to Google Cloud Platform (GCP) services, which may include any combination of: BigQuery, Cloud Storage, Cloud Functions, Cloud Composer, Pub/Sub and others (this may be via POC or academic study, though practical experience is preferred)
  • Streaming technologies (e.g.: Amazon Kinesis, Kafka)
  • Graph Database experience (e.g.: Neo4j, Neptune)
  • Distributed SQL query engines (e.g.: Athena, Redshift Spectrum, Presto)
  • Experience with caching and search engines (e.g.: Elasticsearch, Redis)
  • ML experience, especially with Amazon SageMaker, DataRobot, or AutoML
  • Infrastructure-as-code (IaC) tools, including CDK, Terraform, CloudFormation, and Cloud Build

