Staff Data Engineer

IMO

This job is no longer open

Research shows that women and members of underrepresented groups often apply to jobs only if they think they meet 100% of the qualifications on a job description. IMO is committed to considering all candidates, even if you don’t think you meet 100% of the qualifications listed. We look forward to receiving your application!

Work that is meaningful. A job that has impact. Colleagues that inspire. That’s what you’ll find at Intelligent Medical Objects (IMO), a growing health IT company creating clinical terminology and insights solutions that are used by more than 740,000 US physicians and 4,500 US hospitals to power better patient care and support meaningful analytics.

Intelligent Medical Objects (IMO) (Rosemont, IL) seeks a Staff Data Engineer to lead teams in creating and maintaining optimal data pipeline architecture for the IMO Data Platform, building large, complex datasets, and contributing to the infrastructure required to extract, transform, and load data from relevant sources. Specific duties include:

- Serving as the resident Spark and Python expert and guiding the team
- Leading performance optimization work for dataset generation
- Guiding the team on the use of various AWS services
- Leading the team in identifying and designing internal process improvements: automating manual processes, optimizing data delivery and performance, and re-designing data pipeline infrastructure for scalability, availability, and reliability
- Building analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Working with data experts to design and implement better functionality in IMO products
- Ensuring application-specific availability, scalability, and monitoring of resources and costs
- Developing quality source code, including documentation of detail-level designs
- Leveraging automation across testing, integration, and deployment activities
- Working on a Scrum team with stakeholders, including the Executive, Product, Architecture, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
- Creating data tools that help analytics and data science team members build and optimize our product into an innovative industry leader
- Supporting the product owner and team with technical solutions
- Championing the team’s adherence to technical standards and ensuring alignment with architectural direction
- Leading discovery of the team’s tech debt and collaborating with the team to minimize it
- Leading and coordinating incident management, root cause analysis, and capture of preventative actions
- Collaborating with architecture to support technical roadmaps and proofs of concept (POCs) for the team
- Mentoring colleagues’ technical development into generalized specialists

Option to work remotely.

Position requires a Bachelor’s degree, or foreign equivalent, in Computer Science, Computer Engineering, or a closely related engineering field of study, plus 5 years of experience in the job offered, as a Technology Analyst, or in a similar position implementing well-architected data pipelines that are dynamically scalable, highly available, fault-tolerant, and reliable for analytics and platform solutions. Must have 3 years of experience with each of the following: relational SQL; ETL design and implementation; AWS cloud services, such as EC2, EMR, RDS, or Redshift; and object-oriented/functional scripting languages, such as Python, PySpark, or Scala. Specific experience must include: working SQL knowledge; working with relational databases as well as familiarity with a variety of other databases; performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement; building processes supporting data transformation, data structures, metadata, dependency, and workload management; manipulating, processing, and extracting value from disconnected datasets; working in data warehousing environments and with big data; and performance tuning and optimization. Option to work remotely.
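As an illustration of the kind of PySpark ETL work described above, a minimal sketch follows. The S3 paths, dataset, and column names (example-bucket, code, encounter_count) are hypothetical examples, not details taken from the posting, and real IMO pipelines may be structured very differently.

# Minimal, illustrative PySpark ETL sketch; paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw records from a hypothetical S3 location.
raw = spark.read.json("s3://example-bucket/raw/encounters/")

# Transform: normalize a code column and aggregate record counts per code.
clean = (
    raw.withColumn("code", F.upper(F.trim(F.col("code"))))
       .dropna(subset=["code"])
)
summary = clean.groupBy("code").agg(F.count("*").alias("encounter_count"))

# Load: write the curated dataset back out in a columnar format for analytics.
summary.write.mode("overwrite").parquet("s3://example-bucket/curated/encounter_counts/")

spark.stop()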

Full-time position. Apply by submitting your resume at https://www.e-imo.com/careers, Job ID: 519.
