Help us Shape the Future of Data
Anaconda is the world’s most popular data science platform. With more than 26 million users, the open-source Anaconda Distribution is the easiest way to do data science and machine learning. We pioneered the use of Python for data science, championed its vibrant community, and continue to steward open-source projects that make tomorrow’s innovations possible. Our enterprise-grade solutions enable corporate, research, and academic institutions around the world to harness the power of open source for competitive advantage and groundbreaking research.
Anaconda is seeking people who want to play a role in shaping the future of enterprise machine learning and data science. Candidates should be knowledgeable and capable, but always eager to learn more and to teach others. Overall, we strive to create a culture of ability and humility and an environment that is both relaxed and focused. We stress empathy and collaboration with our customers, open-source users, and each other.
Here is what people love most about working here: we’re not just a company; we’re part of a movement. Our dedicated employees and user community are democratizing data science and creating and promoting open-source technologies for a better world, and our commercial offerings make it possible for enterprise users to leverage the most innovative output from open source in a secure, governed way.
Summary
Anaconda is seeking a talented Data Engineer to join our rapidly growing company. This is an excellent opportunity for you to leverage your experience and skills and apply them to the world of data science and machine learning.
What You’ll Do:
- Support Anaconda’s legacy data infrastructure and drive the evolution of our pipelines.
- Identify and implement process improvements: designing infrastructure that scales, automating manual processes, etc.
- Drive database design and the underlying information architecture, transformation logic, and efficient query development to support our growing data needs.
- Implement testing and observability across the data infrastructure to ensure data quality from raw sources to downstream models.
- Write documentation that supports code maintainability.
- Take ownership of the various tasks that allow us to maintain high-quality data: ingestion, validation, transformation, enrichment, mapping, storage, etc.
- Work closely with Product teams to anticipate and support changes to the data.
- Work with the Business Insights team and Infrastructure teams to build reliable, scalable tooling for analysis and experimentation.
- Bring a high sense of urgency to delivering projects and to troubleshooting and fixing data queries and issues.
What You Need:
- 5+ years of relevant engineering experience.
- Database experience with relational and non-relational data stores, including BigQuery.
- Deep experience in ETL/ELT design and implementation using tools like Apache Airflow, Prefect, Matillion, Fivetran, Stitch, Kinesis, Lambda, Glue, Athena, etc.
- Experience working with large data sets, and an understanding of how to write code that leverages the parallel capabilities of Python and database platforms.
- Strong knowledge of database performance concepts like indices, segmentation, projections, and partitions.
- To be self-directed and self-motivated, with excellent organizational skills.
- Proficiency in Python.
- Experience executing projects with Data Science and Engineering teams from start to finish.
- To be comfortable with varying degrees of ambiguity. As a rapidly growing startup, our requirements and priorities evolve quickly!
- Experience with cloud infrastructure, containerization, and orchestration.
- A team attitude: “I am not done until WE are done”.
- To embody our core values:
- Ability & Humility
- Innovation & Action
- Empathy & Connection
- To care deeply about fostering an environment where people of all backgrounds and experiences can flourish.
What Will Make You Stand Out:
- Experience working in a fast-paced startup environment.
- Experience working in an open-source or data science-oriented company.
- Experience with Kafka or other event streaming technologies.
- Experience with Dask and Prefect.
- Experience with infrastructure as code (Terraform, CloudFormation, or Ansible).
Why You’ll Like Working Here:
- This is a unique opportunity to translate strong open source adoption and user enthusiasm into commercial product growth.
- We are a dynamic company that rewards high performers.
- We’re on the cutting edge of the enterprise application of data science, machine learning, and AI.
- Collaborative team environment that values multiple perspectives and clear thinking.
- Employee-first culture.
- Remote-first work and flexible working hours.
- Medical, Dental, Vision, HSA, Life, and 401(k).
- Health and Remote working reimbursement.
- Paid parental leave for both mothers and fathers.
- Pre-IPO stock options.
- Open vacation policy.
An Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.