Data Engineer (DBT + Snowflake) | Codersbrain
Full-time
Posted on September 13, 2025
Job Description
Data Engineer (DBT + Snowflake)
Company Overview
Company details not specified.
Job Summary
We are seeking a skilled Data Engineer with expertise in DBT (Data Build Tool) and Snowflake to design, build, and optimize modern data pipelines and analytics solutions. The ideal candidate will play a critical role in developing scalable data solutions that enhance the organization’s data analytics capabilities.
Responsibilities
- Design, develop, and maintain ELT pipelines using DBT and Snowflake.
- Build and optimize data models, transformations, and reusable DBT macros to support analytics and reporting.
- Manage Snowflake warehouse architecture, including schema design, role-based access control, and query optimization.
- Implement best practices for performance tuning, clustering keys, micro-partition pruning, and cost optimization in Snowflake.
- Develop CI/CD workflows for DBT projects using Git, dbt Cloud, or other orchestration tools (e.g., Airflow, Dagster, Prefect).
- Collaborate with data analysts, BI developers, and business teams to translate requirements into scalable data solutions.
- Implement data governance, lineage, and testing to ensure high-quality and reliable data delivery.
- Monitor data pipelines, troubleshoot issues, and ensure timely delivery of datasets.
- Integrate Snowflake with upstream and downstream systems (AWS/GCP/Azure, APIs, Kafka, BI tools).
- Document data pipelines, models, and workflows for long-term maintainability.
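As a sketch of the kind of DBT work the responsibilities above describe — the source, model, and column names here are hypothetical, not part of the role:

```sql
-- models/marts/fct_orders.sql: a minimal incremental DBT model (hypothetical names).
-- On incremental runs, only rows newer than the latest already-loaded row are processed.
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    order_total,
    updated_at
from {{ ref('stg_orders') }}
{% if is_incremental() %}
  -- this filter applies only on incremental runs, not on full refreshes
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

A matching `schema.yml` entry would typically declare built-in tests such as `unique` and `not_null` on `order_id`, which `dbt test` then executes against the warehouse.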
Qualifications
- 5+ years of experience in data engineering, including at least 2 years of hands-on experience with DBT and Snowflake.
- Strong expertise in SQL and performance tuning on cloud data warehouses.
- Proficiency in DBT for transformations, testing, and modular data modeling.
- Hands-on experience with Snowflake features including warehouses, roles, security, tasks, streams, and materialized views.
- Familiarity with cloud platforms (AWS, GCP, Azure) and storage systems (S3, GCS, ADLS).
- Experience with version control (Git) and CI/CD pipelines.
- Good understanding of data governance, security, and compliance practices.
- Strong problem-solving skills, attention to detail, and collaboration abilities.
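The Snowflake features named above (streams, tasks) compose into simple change-capture pipelines; as an illustrative sketch with hypothetical table, warehouse, and task names:

```sql
-- Capture inserts and updates on a source table (hypothetical names throughout).
create or replace stream orders_stream on table raw.orders;

-- A scheduled task that loads captured changes,
-- running only when the stream actually has new data.
create or replace task merge_orders
  warehouse = transform_wh
  schedule = '15 minute'
when
  system$stream_has_data('ORDERS_STREAM')
as
  insert into analytics.orders_latest
  select order_id, customer_id, order_total, updated_at
  from orders_stream
  where metadata$action = 'INSERT';

alter task merge_orders resume;  -- tasks are created suspended
```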
Preferred Skills
- Experience with data visualization tools and frameworks.
- Knowledge of ETL/ELT methodologies and architectures.
- Understanding of machine learning concepts and implementation.
Experience
- Minimum of 5 years of relevant experience in data engineering.
Environment
- This is a remote position, allowing for flexibility in location.
Salary
Salary details not specified.
Growth Opportunities
Career advancement opportunities not specified.
Benefits
Benefits details not specified.