Informatica Developer | Codersbrain
Full-time
Posted on September 19, 2025
Job Description
Informatica CDI/ETL Developer
Job Summary
The Informatica CDI/ETL Developer will play a critical role in designing, developing, and maintaining Informatica ETL workflows for large-scale data integration projects. The role involves collaborating with data architects, analysts, and business teams to build robust data solutions, ensuring data quality, governance, and compliance, and supporting production environments to minimize downtime.
Responsibilities
- Design, develop, and maintain Informatica ETL workflows (PowerCenter, IDQ, IICS) for large-scale data integration (a generic ETL sketch follows this list).
- Work with cloud platforms (Azure, AWS, GCP) and integrate data with cloud data warehouses (e.g., Snowflake, Synapse, Redshift, BigQuery).
- Implement data quality, profiling, and cleansing using Informatica IDQ.
- Optimize ETL/ELT pipelines for high performance and scalability.
- Develop real-time and batch data pipelines using tools like Informatica CDC, Kafka, Spark, or other streaming technologies.
- Collaborate with data architects, analysts, and business teams to gather requirements and design robust data solutions.
- Ensure adherence to data governance, security, and compliance practices.
- Support and troubleshoot issues in production environments, ensuring minimal downtime.
- Mentor junior engineers and contribute to best practices for data engineering and DevOps automation.
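For illustration, here is a minimal, tool-agnostic sketch of the extract-transform-load flow described above, written in plain Python with the built-in sqlite3 module standing in for real source and target systems. All table names and the cleansing rules are hypothetical; an Informatica mapping would express the same source-to-target flow graphically rather than in code.

```python
import sqlite3

# Generic ETL sketch: extract from a source table, cleanse, load to a target.
# Table/column names and the cleansing rules are hypothetical placeholders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_customers (id INTEGER, email TEXT, country TEXT);
    INSERT INTO src_customers VALUES
        (1, ' Alice@Example.COM ', 'us'),
        (2, NULL,                  'de'),
        (3, 'bob@example.com',     'US');
    CREATE TABLE tgt_customers (id INTEGER PRIMARY KEY, email TEXT, country TEXT);
""")

# Extract
rows = conn.execute("SELECT id, email, country FROM src_customers").fetchall()

# Transform: trim and lowercase emails, uppercase country codes, drop NULL emails
cleansed = [
    (cid, email.strip().lower(), country.upper())
    for cid, email, country in rows
    if email is not None
]

# Load
conn.executemany("INSERT INTO tgt_customers VALUES (?, ?, ?)", cleansed)
conn.commit()

for row in conn.execute("SELECT * FROM tgt_customers ORDER BY id"):
    print(row)
```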
Qualifications
- Strong hands-on experience with Informatica PowerCenter (ETL/ELT) and Informatica Data Quality (IDQ).
- Expertise in SQL and PL/SQL, with experience across databases such as Oracle, SQL Server, Teradata, and DB2.
- Experience with Informatica Intelligent Cloud Services (IICS) – including Data Integration (DI), Application Integration (AI), and API Management.
- Strong understanding of cloud platforms (Azure, AWS, GCP) and their data services.
- Proficiency in integrating with Cloud Data Warehouses (Snowflake, Synapse, Redshift, BigQuery).
- Hands-on knowledge of data modeling (star and snowflake schemas, OLTP, OLAP); a star-schema sketch follows this list.
- Proven ability to handle large-scale data integration and performance tuning.
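As a minimal sketch of the star-schema modeling called out above, the example below creates hypothetical date and product dimensions around a sales fact table, then runs a typical OLAP-style aggregate join. sqlite3 again stands in for a cloud warehouse such as Snowflake or BigQuery.

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables.
# All table and column names are hypothetical placeholders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT);
    CREATE TABLE fact_sales (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        amount      REAL
    );
    INSERT INTO dim_date    VALUES (20250101, '2025-01-01');
    INSERT INTO dim_product VALUES (1, 'Widget');
    INSERT INTO fact_sales  VALUES (20250101, 1, 3, 29.97);
""")

# Typical OLAP query: join the fact table to its dimensions and aggregate.
query = """
    SELECT d.calendar_date, p.product_name, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date    d ON f.date_key    = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.calendar_date, p.product_name
"""
for row in conn.execute(query):
    print(row)
```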
Preferred Skills
- Programming and automation skills in Python, Java, or shell scripting.
- Familiarity with the big data ecosystem, including Hadoop, Spark, or Databricks, for large-scale data processing.
- Knowledge of DevOps/CI-CD tools such as Jenkins, Git, and Azure DevOps for deployment automation (see the sketch below).
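The sketch below is one minimal take on the deployment automation mentioned above: it records the current Git commit and writes a release manifest that a CI/CD job (Jenkins, Azure DevOps, etc.) could attach to a deployment. The manifest fields and artifact folder names are hypothetical, and the script assumes it runs inside a Git checkout.

```python
import json
import subprocess
from datetime import datetime, timezone

def current_commit() -> str:
    """Return the hash of the current Git commit (requires a Git checkout)."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

def write_manifest(path: str = "release_manifest.json") -> None:
    """Write a release manifest a CI/CD pipeline could publish as a build artifact."""
    manifest = {
        "commit": current_commit(),
        "built_at": datetime.now(timezone.utc).isoformat(),
        # Hypothetical folders holding exported mappings/workflows.
        "artifacts": ["mappings/", "workflows/"],
    }
    with open(path, "w") as fh:
        json.dump(manifest, fh, indent=2)

if __name__ == "__main__":
    write_manifest()
```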