Senior Data Engineer

This job is no longer open

As Boldr’s Senior Data Engineer, you will design data pipeline architectures, pull data from a variety of sources, and deliver ready-to-use data sets to our analytics team. You will own the extract-transform-load (ETL) process not only for all of Boldr’s data but for some of our clients’ data as well. You will also ensure that every database you handle meets our standards for performance, integrity, and security, and you will support our growing analytics practice in creating insights for our ever-expanding client base.


WHY DO WE WANT YOU?

We are currently looking for impact-driven individuals who are passionate about helping Boldr grow and achieve our Purpose. We expect our Team to become our ultimate partners in success by always giving 110% in everything, sharing their talents and quirks, and championing our core values: Curious, Dynamic, and Authentic.

WHAT WILL YOU DO

  • Design, build, and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Manage and coordinate the activities and deliverables of the Data Engineering team.
  • Review and manage source code with Git.
  • Help Data Engineers build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources (Zendesk, Freshdesk, QuickBooks, Sprout, Kustomer, etc.); a minimal Airflow sketch of such a pipeline follows this list.
  • Write database documentation, including data standards, procedures and definitions for the data dictionary (metadata).
  • Control access permissions and privileges across all databases you handle.
  • Work with stakeholders and Data Engineers to assist with data-related technical issues and support their data infrastructure needs.
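For illustration, here is a minimal sketch of the kind of pipeline these responsibilities describe, written against Airflow's TaskFlow API (Airflow 2.4+ assumed). The helpdesk endpoint, field names, and schedule are hypothetical placeholders, not Boldr's actual stack.

```python
# A minimal sketch, assuming Airflow 2.4+ and the TaskFlow API. The helpdesk
# endpoint, field names, and schedule below are hypothetical placeholders.
from datetime import datetime

import requests
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def ticket_etl():
    @task
    def extract() -> list[dict]:
        # Pull raw ticket records from a (hypothetical) helpdesk API.
        resp = requests.get("https://example.zendesk.com/api/v2/tickets.json")
        resp.raise_for_status()
        return resp.json()["tickets"]

    @task
    def transform(tickets: list[dict]) -> list[dict]:
        # Keep only the fields the analytics team needs downstream.
        return [
            {"id": t["id"], "status": t["status"], "created_at": t["created_at"]}
            for t in tickets
        ]

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would write to Redshift or RDS here.
        print(f"Loading {len(rows)} rows into the warehouse")

    load(transform(extract()))


ticket_etl()
```

The extract/transform/load split shown here is the pattern that scales: each task can later be swapped for a real connector or warehouse writer without restructuring the DAG.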

QUALIFICATIONS

  • Intermediate SQL skills, including query authoring, hands-on experience with relational databases, and working familiarity with a variety of database systems.
  • At least 4 years of experience building and optimizing data pipelines, architectures, and data sets.
  • Strong analytic skills related to working with unstructured datasets.
  • Intermediate project management and organizational skills.
  • At least 4 years of experience with data pipeline and workflow management tools, particularly using Airflow and AWS tools.
  • Experience with ETL (Extract-Transform-Load), data integration, manipulation, transformation, and cleaning with scripting languages such as Python and Java.
  • Experience with AWS cloud services: Lambda, SNS, RDS, Redshift, API Gateway, S3, VPC, etc. (a minimal loading sketch follows this list).
  • Experience with Google Cloud Platform.
  • Intermediate knowledge of data transfer, backup, recovery, security, integrity, and SQL.
  • At least 4 years of experience with RESTful services and APIs.
  • A general understanding of the Philippine Data Privacy Act.
  • Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, PHP, etc.
  • 2+ years of programming experience.
  • Working knowledge of a version control tool such as GitHub.
  • AWS Solutions Architect (Associate) Certification is a must.
  • AWS Solutions Architect (Professional) Certification is a plus.
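Likewise, here is a minimal sketch of the S3-plus-Redshift loading pattern implied by the AWS items above. The bucket name, cluster endpoint, IAM role ARN, and table name are all hypothetical placeholders, and the password is read from the environment rather than hard-coded.

```python
# A minimal sketch of a daily S3 upload followed by a Redshift COPY.
# Bucket, cluster, IAM role, and table names are hypothetical placeholders.
import os

import boto3
import psycopg2

# Stage the day's extract in the data lake.
s3 = boto3.client("s3")
s3.upload_file("tickets_2024-01-01.csv", "example-data-lake",
               "raw/tickets/2024-01-01.csv")

# Bulk-load the staged file into Redshift with COPY.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl",
    password=os.environ["REDSHIFT_PASSWORD"],
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY analytics.tickets
        FROM 's3://example-data-lake/raw/tickets/2024-01-01.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
        CSV IGNOREHEADER 1;
    """)
```

In production, logic like this would typically run inside an Airflow task or a Lambda function rather than as a standalone script.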

BENEFITS

  • SSS
  • Pag-ibig
  • Philhealth
  • HMO on day one
  • 13th month pay
  • Paid incentive leaves
    • Personal time-offs (PTOs)
    • Sick leave