Kloudynet is a fast-growing cybersecurity company in Southeast Asia and a Microsoft Advanced Security Partner, specializing in innovative solutions and services for clients across various industries. We are currently looking for a Senior System Analyst for our client, a leader in the insurance industry.
RESPONSIBILITIES
- Build and support data lake, data warehouse and data API-based solutions using suitable technologies.
- Document data solutions, standards, requirement specifications, and business processes.
- Design, develop and implement secure, reliable, cost-effective and high-performing ETL solutions using Python, PySpark, Talend and Kafka (see the sketch after this list).
- Perform unit testing, design and support integration testing strategies, and automate data quality monitoring.
- Develop a good understanding of existing data sets, data models and solutions in order to support and enhance them.
- Establish and follow best practices in code version control and deployment, and help reduce deployment time.
- Monitor scheduled jobs and reduce BAU monitoring effort through automation.
- Take ownership of deliverables and manage their timely completion.
- Maintain good documentation of solutions and effectively communicate the value added to stakeholders.
- Work closely with system vendors and internal stakeholders such as Marketing, Analytics, Finance, Actuarial, Claims, IT Security and IT Infrastructure to ensure on-time project delivery.
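For illustration, a minimal PySpark batch ETL sketch along the lines of the work described above might look as follows; the paths, dataset and column names ("policies", "policy_id", "premium") are hypothetical placeholders, not details from this role.

```python
# Illustrative sketch only: all paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("policy-etl-sketch").getOrCreate()

# Extract: read raw CSV exports from a landing zone.
raw = spark.read.option("header", True).csv("s3://landing-zone/policies/")

# Transform: cast types, drop records missing a key, stamp the load date.
clean = (
    raw.withColumn("premium", F.col("premium").cast("double"))
       .filter(F.col("policy_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet into the curated zone of the data lake.
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://data-lake/curated/policies/"
)

spark.stop()
```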
QUALIFICATIONS / EXPERIENCE
- 4-6 years of professional experience in big data, data warehousing, operational data stores, and large-scale architecture and implementations.
- Good knowledge of SQL, and experience working with SQL/NoSQL database management systems and data warehouses.
- Good hands-on experience with AWS, Azure or Google Cloud services.
- Good understanding of distributed computing frameworks such as Apache Spark, using Python or Scala.
- Experience developing data-processing Python programs based on pandas and related libraries.
- Knowledge of stream-processing frameworks such as Kafka Streams or similar.
- Knowledge of orchestration tools such as Apache Airflow or similar (a minimal sketch follows this list).
- Good experience with ETL tools such as Talend, Informatica or DataStage.
- Experience with change data capture (CDC) systems such as IBM InfoSphere CDC, Debezium, AWS DMS or similar tools.
- Knowledge of shell scripting to automate tasks.
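As a minimal sketch of the orchestration skills listed above, the following Airflow DAG schedules a daily run of a PySpark job like the one shown earlier; it assumes Airflow 2.4+, and the dag_id, schedule and spark-submit command are hypothetical placeholders.

```python
# Illustrative sketch only: dag_id, schedule and command are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_policy_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # run once per day
    catchup=False,      # do not backfill past runs
) as dag:
    # Single task: submit the PySpark ETL job from the earlier sketch.
    run_etl = BashOperator(
        task_id="run_policy_etl",
        bash_command="spark-submit /opt/jobs/policy_etl.py",
    )
```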