Senior Data Engineer (AWS Redshift + QuickSight, Python + dbt)
Job Overview
We are looking for a skilled Data Engineer to join our team and help us build a robust, scalable data ecosystem. In this role, you will be the architect of our data pipelines, ensuring that information flows seamlessly from source systems into our warehouse and ultimately into the hands of our decision-makers.
If you enjoy turning messy, raw data into clean, actionable insights and have a passion for modern data stack tooling, we want to hear from you.
Key Responsibilities
● Pipeline Development: Design, build, and maintain scalable ELT/ETL pipelines using Python to ingest data from various internal and external sources.
● Data Modeling: Utilize dbt (data build tool) to transform raw data in our warehouse into well-structured, documented, and tested production-ready tables.
● Warehouse Management: Optimize our Amazon Redshift environment for performance, cost, and reliability, including distribution keys, sort keys, and vacuuming strategies.
● BI Support: Partner with analysts to ensure Amazon QuickSight dashboards are powered by highly available, accurate data sets.
● Data Quality: Implement automated testing and monitoring to ensure high data integrity and observability across the entire lifecycle.
● Infrastructure as Code: Maintain and version-control data models and infrastructure to ensure reproducible environments.
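The data-quality responsibility above can be illustrated with a minimal sketch of automated checks in pandas. The table and column names (`order_id`, `amount`) and the specific rules are hypothetical, chosen only to show the pattern of asserting integrity before data reaches the warehouse:

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return the names of failed checks (an empty list means all pass)."""
    failures = []
    # Primary-key column must be non-null and unique.
    if df["order_id"].isna().any():
        failures.append("order_id_not_null")
    if df["order_id"].duplicated().any():
        failures.append("order_id_unique")
    # Business rule: amounts must be non-negative.
    if (df["amount"] < 0).any():
        failures.append("amount_non_negative")
    return failures

# Example frames: one clean, one violating every rule.
clean = pd.DataFrame({"order_id": [1, 2, 3], "amount": [9.99, 0.0, 25.50]})
dirty = pd.DataFrame({"order_id": [1, 1, None], "amount": [5.0, -2.0, 3.0]})
```

In practice checks like these would run as dbt tests or as a pipeline step that blocks promotion of a bad load; the sketch just shows the shape of the logic.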
Qualifications & Skills
● Python & SQL: Proficiency in Python for data manipulation (pandas, requests) and expert-level SQL for complex analytical queries.
● The dbt Ecosystem: Strong experience with dbt Cloud or dbt Core, including macros, seeds, and snapshots.
● Cloud Warehousing: Hands-on experience with Amazon Redshift, specifically regarding performance tuning and cluster management.
● Visualization & BI: Experience connecting clean data models to Amazon QuickSight for executive dashboards and self-service analytics.
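As a small illustration of the Python-for-data-manipulation skill listed above, here is a hedged sketch of normalizing raw records into a tidy staging table with pandas. The record shape and field names (`id`, `signup_ts`, `plan`) are invented for the example; in a real pipeline the records would arrive from an upstream source such as an API response:

```python
import pandas as pd

# Hypothetical raw records as they might arrive from an upstream API
# (in production these could come from a requests call returning JSON).
raw_records = [
    {"id": "A-1", "signup_ts": "2024-03-01T12:00:00", "plan": "PRO "},
    {"id": "A-2", "signup_ts": "2024-03-02T08:30:00", "plan": " free"},
]

def to_staging_frame(records: list[dict]) -> pd.DataFrame:
    """Normalize raw API records into a typed, cleaned staging table."""
    df = pd.DataFrame(records)
    # Parse timestamps into proper datetime values.
    df["signup_ts"] = pd.to_datetime(df["signup_ts"])
    # Standardize categorical text: trim whitespace, lowercase.
    df["plan"] = df["plan"].str.strip().str.lower()
    return df

staging = to_staging_frame(raw_records)
```

A table in this shape is what dbt models would then pick up and transform into the documented, tested production tables described in the responsibilities.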