Data Engineer

Chicago, IL 

DATA ENGINEERING – FULL-TIME

Company Description

rMark Bio helps life science companies navigate the complexities of digital transformation by developing end-to-end AI solutions that deliver personalized business intelligence through integrated applications and API-accessible services.

Healthcare innovation is best served when individuals with diverse backgrounds come together with a common purpose and clear objectives to improve patient lives.

We are product strategists, engineers, data scientists, and designers who are experts in our domain and passionate about our mission to accelerate innovation, collaboration, and scientific discovery for life sciences.

Job Description

rMark Bio is searching for an experienced Data Engineer to work within an agile team of data scientists, software architects, and developers to implement and maintain secure, scalable, cloud-based software solutions, build customer integration solutions, and streamline the team’s software delivery tools and processes. Technical aptitude is a must, as is a team-minded attitude and the ability to work interchangeably with others. The core team operates as a small SWAT-style team, with everyone pulling their own weight, playing a variety of roles, and covering each other’s responsibilities when needed. Applicants should be qualified in collecting, cleaning, and maintaining large datasets that are critical to customer and product success.

Job Responsibilities

  • Build automated ETL pipelines for cloud environments including AWS, GCP, Azure, and Heroku.
  • Develop and support data integrity reporting and alerting for ETL pipelines.
  • Work closely with Engineering, Product, and Customer Success teams.
  • Build robust and deployable software in Python, C++, or C#.
  • Build, deploy, and maintain RESTful APIs to access datasets.
  • Parse and extract data from common formats including XML, JSON, CSV, and pipe-delimited files (a brief illustrative sketch follows this list).
  • Integrate with customer provided APIs.
  • Build, organize, and maintain datamarts using SQL, JSON document stores, blob storage, or other databases as needed.
  • Write and maintain excellent documentation of all work.
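
For context, the short Python sketch below illustrates the kind of parsing and loading work these responsibilities describe: reading a pipe-delimited feed and a JSON-lines export, doing light cleaning, and loading the rows into a small SQL table. It is a minimal, hypothetical sketch; the file names, field names, and sqlite3 target are illustrative assumptions, not part of rMark Bio's actual stack.

    # Illustrative sketch only: file names, field names, and the sqlite3 target
    # are hypothetical stand-ins for customer feeds and datamarts.
    import csv
    import json
    import sqlite3

    def parse_pipe_delimited(path):
        """Yield one dict per row from a pipe-delimited file with a header row."""
        with open(path, newline="") as fh:
            yield from csv.DictReader(fh, delimiter="|")

    def parse_json_lines(path):
        """Yield one dict per non-empty line from a JSON-lines export."""
        with open(path) as fh:
            for line in fh:
                if line.strip():
                    yield json.loads(line)

    def load_records(records, db_path="datamart.db"):
        """Lightly clean each record and upsert it into a local SQL table."""
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS contacts (id TEXT PRIMARY KEY, name TEXT, email TEXT)"
        )
        rows = [
            (str(r["id"]).strip(), r.get("name", "").strip(), r.get("email", "").lower())
            for r in records
            if r.get("id")  # drop rows missing a primary key
        ]
        con.executemany("INSERT OR REPLACE INTO contacts VALUES (?, ?, ?)", rows)
        con.commit()
        con.close()

    if __name__ == "__main__":
        load_records(parse_pipe_delimited("customer_feed.psv"))
        load_records(parse_json_lines("crm_export.jsonl"))

In practice, pipelines like this would run as scheduled or serverless jobs, with the data integrity reporting and alerting described above layered on top.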

Experience and Qualifications

  • A minimum of 3-5 years of relevant experience is required.
  • B.S. in Computer Science or closely related field preferred, but not required. Real-world experience and a proven track record count for as much, if not more.
  • Programming Languages: Python and C# (C++ and R are a plus).
  • Experience with Databricks is a plus.
  • Established expertise developing ETL pipelines on serverless cloud solutions (AWS Lambda, Azure App Services, etc.).
  • Linux/Unix
  • Docker
  • Databases: SQL, JSON, object storage, etc. Experience with graph databases like Neo4j and TigerGraph is a plus.
  • Strong technical writing skills are essential.

If you are a recruiter or placement agency, please do not submit resumes to any person or email address at rMark Bio prior to having a signed agreement from rMark Bio’s HR department. rMark Bio is not liable for and will not pay placement fees for candidates submitted by any agency other than its prior-approved recruitment partners. Furthermore, any resumes sent to us without a written, signed agreement in place will be considered your company’s gift to rMark Bio and may be forwarded to our recruiters for their attention. Thank you.

rMark Bio is an equal opportunity employer. All qualified applicants for employment will be considered without regard to race, color, religion, sex, gender identity, sexual orientation, national origin, status as an individual with a disability, veteran status, or any other basis protected by federal, state, or local law.