Job Description
Responsibilities:
Work in project teams using both agile and traditional project management approaches.
Identify customer needs and develop innovative data integration scenarios in modern cloud and on-premises environments.
Analyze and optimize ETL/ELT processes in hybrid system landscapes.
Develop generative AI, hyper-automation, computer vision, and IoT solutions for a variety of industries.
Design and model data architectures, develop ETL/ELT processes and data pipelines, and create reports, dashboards, and ad-hoc analyses.
Ensure the quality of work results in terms of functionality, code quality, and documentation, and support the operation of solutions through bug fixing, optimization, and monitoring.
Requirements:
Minimum of 5 years of experience in AI development, with strong programming skills in object-oriented Python, SQL, and DAX.
Experience in developing data warehouse and business intelligence projects, knowledge of database systems, and competence with cloud data technologies (Azure, GCP, AWS).
Proficiency with data tool stacks: building data processes with ADF, Synapse pipelines, and Databricks, and creating data visualizations with Power BI.
Experience in designing and implementing efficient data workflows in the cloud and building data pipelines with tools such as Apache Airflow, Argo Workflows, or Databricks.
Python skills specifically for generative AI and prompt engineering.
A focus on code quality, the ability to write testable and maintainable code that scales, and experience building CI/CD pipelines (e.g., GitHub Actions).