Working at Atlassian
Atlassian can hire people in any country where we have a legal entity. Assuming you have eligible working rights and sufficient time zone overlap with your team, you can choose to work remotely or return to an office as offices reopen (unless your role needs to be performed in an office). Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
About the Role
Atlassian is looking for an experienced Data Engineer to join the Data Engineering team and build data solutions and applications that power business decisions across the organization.
As a data specialist, you will partner with cross-functional teams spanning Information Technology, data platform, product engineering, analytics, and data science across various initiatives. You are an empathetic, structured problem solver who is passionate about building systems at scale. Requirements may be vague, but iterations will be rapid, and you will need to take smart, calculated risks.
What you will do in this role:
- Partner with product managers, analytics, and business teams to gather data, reporting, and analytics requirements, and build trusted, scalable data models, data extraction processes, and data applications that help answer complex questions.
- Design, build, test, and maintain scalable data pipelines and microservices that source both first-party and third-party datasets and store them in distributed (cloud) structures and other applicable storage forms such as graph, relational, and NoSQL databases.
- Build high-volume, distributed data platform capabilities and microservices that enable efficient, scalable data access by applications via APIs.
- Build utilities, user-defined functions, libraries, and frameworks to better enable data flow patterns.
- Develop analytical data models and dashboards to serve insights and analytics needs.
- Continuously optimize data models to provide high-quality, trusted data.
- Utilize and advance continuous integration and deployment frameworks.
- Work with architecture and engineering leads to ensure quality solutions are implemented and engineering best practices are followed.
- Research, evaluate, and utilize new technologies, tools, and frameworks centered on high-volume data processing.
- Leverage artificial intelligence, machine learning, and other big-data techniques to provide a competitive edge to the business.
Skills you need to be successful in the role:
- Empathetic, structured problem solver who is passionate about building systems at scale.
- Solid understanding of disciplines such as database engineering and software development.
- Proficient in database schema design and in analytical and operational data modeling.
- Proven experience working with large datasets and the big data ecosystem: compute (Spark, Kafka, Hive, or similar), orchestration tools (Airflow, Oozie, Luigi), and storage (S3, Hadoop, DBFS).
- Complete understanding of microservices guiding principles; able to design and build RESTful APIs that allow other systems to integrate with and consume data from data repositories (Data Lake, SSOT/MDM).
- Experience with modern databases (Redshift, DynamoDB, MongoDB, Postgres, or similar).
- Proficient in one or more programming languages such as Java, Python, or Scala, with rock-solid SQL skills.
- Champions automated builds and deployments using CI/CD tools like Bitbucket and Git.
- Experience developing, publishing, and maintaining sophisticated reports, dashboards, and visualizations using tools like Tableau, Redash, or D3.js.
- Strong hands-on experience designing and developing curated datasets for data science and machine learning.
- Prior experience working with analytics and data science teams.
- Proven analytical, communication, and organizational skills, and the ability to prioritize multiple tasks at any given time.
- An open mind to try solutions that may seem unconventional at first.
- A BS in Computer Science or equivalent experience.
Skills not required but good to have:
- Experience working for SaaS companies.
- Experience with machine learning.
- Code contributions to open source projects.
- Experience building self-service tooling and platforms.