Working at Atlassian
Atlassian can hire people in any country where we have a legal entity. Assuming you have eligible working rights and sufficient time zone overlap with your team, you can choose to work remotely or return to an office as offices reopen (unless your role needs to be performed in an office). Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
Atlassian is looking for a Senior Data Engineer to join our Go-To-Market Data Engineering (GTM-DE) team, which is responsible for building our data lake, maintaining our big data pipelines and services, and facilitating the movement of billions of messages each day. We work directly with business stakeholders and with many platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.
On a typical day you will help our stakeholder teams ingest data faster into our data lake, find ways to make our data pipelines more efficient, or come up with ideas to help instigate self-serve data engineering within the company. You will also build microservices and architect, design, and enable self-serve capabilities at scale to help Atlassian grow.
You’ll get the opportunity to work on an AWS-based data lake backed by a full suite of open-source projects such as Spark and Airflow. We are a team with little legacy in our tech stack, so you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users’ experience.
More about you
As a Senior Data Engineer on the GTM-DE team, you will have the opportunity to apply your strong technical experience building highly reliable services to managing and orchestrating a multi-petabyte-scale data lake. You enjoy working in a fast-paced environment, and you are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.
On your first day, we'll expect you to have:
- A BS in Computer Science or equivalent experience
- 5+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
- Strong programming skills (Python, Java, or Scala preferred)
- Experience writing SQL, structuring data, and applying sound data storage practices
- Experience with data modeling
- Knowledge of data warehousing concepts
- Experience building data pipelines, platforms, microservices, and REST APIs
- Experience with Spark, Hive, Airflow, and other technologies for processing large volumes of streaming data
- Experience with modern software development practices (Agile, TDD, CI/CD)
- A strong focus on data quality, and experience with internal/external tools and frameworks that automatically detect data issues and anomalies
- A willingness to accept failure, learn and try again
- An open mind to try solutions that may seem crazy at first
- Experience working with Amazon Web Services (in particular EMR, Kinesis, RDS, S3, SQS, and the like)
It's preferred, but not required, that you have:
- Experience building self-service tooling and platforms
- Designed and built Kappa-architecture platforms
- Built pipelines using Databricks and are well versed in its APIs
- Contributed to open-source projects (e.g., operators in Airflow)