Our centralized Data and Analytics team at Recharge delivers critical analytic capabilities and insights that drive the definition and implementation of our business strategies. The Data Engineer opportunity is ideal for someone who is passionate about wrangling multiple large sets of data from disparate sources and building pipelines that provide end-to-end data solutions, empowering the organization to meet key business objectives.
You will own and architect Recharge’s data landscape. You will combine product usage, behavioral, transactional, business systems, and third-party data into the analytics pipeline. You will work closely with our analytics and engineering teams to implement solutions to answer complex questions and drive business decisions.
Live by and champion our values: #day-one, #ownership, #empathy, #humility.
Provide hands-on leadership, influence, and development across all data services.
Develop modern data architectural solutions for relational, dimensional, and NoSQL systems to support business intelligence reporting and analytics, including support for machine learning models and data science, ensuring effectiveness, scalability, and reliability.
Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual, logical, and physical data models.
Design, develop, and implement new ETL processes, and optimize existing ones, that merge data from disparate sources for consumption by data analysts, data scientists, business owners, and decision makers.
Detect data quality issues, identify their root causes, implement fixes, and design data audits to catch future issues.
Influence and communicate with all levels of stakeholders including analysts, developers, business users, and executives.
6+ years of experience in a data engineering-related role (Data Warehouse Architect, Data Modeler, ETL Developer, Business Intelligence Analyst, Software Engineer) with a track record of manipulating, processing, and extracting value from datasets
3+ years of hands-on experience designing and building data models and pipelines for ingesting, transforming, and delivering large amounts of data from multiple sources into a dimensional (star schema) data warehouse or data lake
Experience with a variety of data warehouse, data lake, and enterprise data management platforms (Snowflake (preferred), Redshift, Databricks, MySQL, Postgres, Oracle, RDS, AWS, GCP)
Experience working with a variety of ETL pipelining tools (Fivetran, dbt, Dataflow, Dataproc, Airflow, Python)
Expert proficiency in SQL
Good knowledge of metadata management, data modeling techniques (relational, star schema, Data Vault, etc.), and related tools (Erwin, ER Studio, or others)