We’re united by a mission: to make the world a safer place. Corvus Insurance uses novel data and artificial intelligence/machine learning to achieve better insights into commercial insurance risk. Our software empowers brokers and policyholders to better predict and prevent complex claims through data-driven tools and Smart Commercial Insurance® policies. This allows us to reduce or eliminate the impact of adverse events, creating a safer world for everyone. Drawing inspiration from the intelligent, tool-building corvid family of birds, we are a team of high-flying collaborative builders. We’re excited to meet you. Spread your wings and soar with us.
Corvus is seeking a Senior Data Engineer to work alongside Product, Data Science, and Engineering to build out and sustain a modern data platform that serves data analytics as a product to our internal stakeholders: Underwriting, Actuarial, Sales, Claims, and Finance.
As a Senior Data Engineer, you’ll lead the technical direction of a team that develops and deploys pipelines and platforms to organize disparate data and make it meaningful. You’ll work with and guide a multi-disciplinary team of data analysts, engineers, and data consumers in a fast, flexible environment, drawing on your experience in analytical exploration and data examination as you manage the assessment, design, building, and maintenance of scalable cloud platforms.
Responsibilities:
Help execute the roadmap for the Data Platform and related data initiatives
Identify use cases and define OKRs for data initiatives as they relate to driving company priorities
Collaborate across teams to understand cross-functional uses of data and its transformation, creating meaningful services, reporting, and insight generation
Develop, deploy, and maintain data pipelines, including ETL processes, for ingesting, processing, and storing structured and unstructured data
Collaborate with the team to provide data-driven insights and support the organization's data analytics needs
Ensure data security, privacy, and compliance by implementing appropriate policies, procedures, and encryption methods
Monitor and optimize the performance, scalability, and cost-efficiency of data storage and processing solutions
What you’ll bring to the flock:
5 years of experience designing, developing, operationalizing, and maintaining data applications in a cloud environment such as AWS, GCP, or Azure
5 years of experience creating software for retrieving, parsing, and processing structured and unstructured data
Experience with modern columnar data warehouses such as Snowflake, Redshift, or BigQuery
Familiarity with AWS Services (Redshift, RDS, EKS, S3, EMR, Glue, Lambda) is preferred
5 years of experience building scalable ETL/ELT workflows for reporting and analytics
Experience with orchestration tools such as Airflow, Dagster, or Prefect, along with ELT tools such as Fivetran and dbt
Advanced working knowledge of Python, SQL, Elixir, or Java
Experience ingesting, processing, and visualizing data sources of varying types, both structured/relational and unstructured
Ability to develop scripts and programs that convert various types of data into usable formats, and to support the project team in scaling, monitoring, and operating data platforms
Solid understanding of the Product Development Lifecycle
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Strong project management and organizational skills
Bachelor’s or Master’s degree in Computer Science, ML, Engineering, or a related discipline, or equivalent experience