Azure Databricks | Codersbrain
Job Description
Azure Databricks
Job Summary
We are seeking a skilled Azure Databricks professional to join our team, with 7 openings available at our Chennai and Bangalore locations. In this full-time role, you will be central to designing, developing, and maintaining end-to-end data analytics pipelines on the Azure cloud platform. You will optimize our data environment, enhance data-driven decision-making, and drive overall business performance. Applicants must be available to start within 15 days, and the application process will close on May 29, 2025.
Responsibilities
- Develop, maintain, and optimize robust data pipelines using Azure Databricks.
- Utilize Spark SQL and PySpark to execute complex data queries and transformations.
- Implement ETL processes to ensure efficient integration of data from multiple sources.
- Collaborate with cross-functional teams, including Data Science and Engineering, to drive comprehensive data solutions.
- Troubleshoot performance issues and continuously optimize analytical processes.
- Support data analysis initiatives using Python data-analysis techniques and tools.
Qualifications
- A minimum of 5 years of relevant experience in cloud data analytics and big data technologies.
- Strong expertise in Azure Databricks for scalable data processing.
- Proven experience with Spark SQL and PySpark.
- Proficiency in Python for data analysis and automation.
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Excellent problem-solving, communication, and collaboration skills.
- Familiarity with cloud-based data warehousing and ETL processes.
Preferred Skills
- Advanced certifications in Azure or Big Data technologies.
- Experience with additional Big Data frameworks or data visualization tools.
- Knowledge of modern software development practices and agile methodologies.
Experience
- At least 5 years of hands-on experience in data analytics, cloud computing, and big data technologies.
- Prior experience in developing sophisticated data pipelines and implementing cloud-based data solutions.
Environment
- Work location based in Chennai or Bangalore.
- On-site office environment that promotes collaborative teamwork.
- A fast-paced, innovative culture within a technology-driven organization.