The Technology team at RGA Asia utilizes best practice design techniques to deliver business applications, workflow automation, analytics, data management, and more.
We work closely with business teams to translate their requirements into technology solutions, empowering them to work more efficiently and to deliver superior service to our customers.
As a Senior Data Engineer, you are responsible for expanding and improving our data and data pipeline architecture, and for enhancing data flows and data collection to ensure our data delivery architecture remains optimal, safe, compliant and consistent across initiatives.
As the key interface for operationalizing data and analytics, this role requires collaborative skills to promote effective data and analytics practices among business stakeholders, and to help data consumers optimize their models for quality, security and governance.
You are an experienced data wrangler who enjoys optimizing existing data systems or building them from scratch. You are autonomous and comfortable supporting the data needs of multiple teams and products.
Responsibilities
Build data pipelines: architecting, creating, maintaining and optimizing data pipelines is the primary responsibility of the data engineer.
Drive automation through effective metadata management: automate the most common, repeatable and tedious data preparation and integration tasks to minimize manual processes and errors and to improve productivity.
The data engineer also assists with renovating the data management infrastructure to drive automation in data integration and management.
Collaborate across departments: work collaboratively with varied stakeholders (notably data analysts and data scientists) to refine their data consumption requirements.
Educate & train: be knowledgeable about how to address data topics, including using data and domain understanding to address new data requirements, proposing innovative approaches to data ingestion, preparation, integration and operationalization, and training stakeholders in data pipelining and preparation.
Participate in ensuring compliant data use: ensure that data users and consumers use the data provisioned to them responsibly.
Work with data governance teams, and participate in vetting and promoting content to the curated data catalog for governed reuse.
Become a data and analytics evangelist: the data engineer is a blend of analytics evangelist, data guru and fixer. This role promotes the available data and analytics capabilities and expertise to business leaders, helping them leverage these capabilities to achieve business goals.
Education and Experience
6+ years of work experience in data management, including data integration, modeling, optimization and data quality, of which 3+ years supporting data and analytics initiatives for cross-functional teams
Foundational knowledge of data management practices, with strong experience in:
Data management architectures such as data warehouses, data lakes and data hubs, and supporting processes such as data integration, governance and metadata management
Designing, building and managing data pipelines for data structures, encompassing data transformation, data models, schemas, metadata and workload management
Working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures and integrated datasets using data integration technologies
Experience in data governance, notably in moving data pipelines into production, with exposure to:
Data preparation tools (Trifacta, Alteryx, etc.)
Database programming languages (including SQL and PL/SQL for relational databases, and MongoDB or Cassandra for non-relational databases)
SQL-on-Hadoop tools and technologies (Hive, Impala, Presto, Hortonworks Data Flow, Dremio, Informatica, Talend, etc.)
Object-oriented / object function scripting languages for advanced analytics (R, Python, Java, C++, Scala, etc.)
Message queuing technologies (Kafka, JMS, Azure Service Bus, Amazon SQS, etc.)
Stream data integration (Apache NiFi, Apache Beam, Apache Kafka Streams, Amazon Kinesis, etc.) and stream analytics technologies (KSQL, Apache Spark Streaming, Apache Samza, etc.)
Continuous integration tools (e.g., Jenkins)
Ability to automate pipeline development
Experience with DevOps capabilities such as version control, automated builds, testing and release management, using Git, Jenkins, Puppet and Ansible
Adept in Agile, and able to apply DevOps and DataOps principles to data pipelines
Exposure to hybrid deployments (cloud and on-premises), with the ability to work across multiple environments and operating systems through containerization techniques (Docker, Kubernetes, AWS ECS, etc.)
Strong experience with popular data discovery, analytics and BI tools (Power BI, Tableau, Qlik, etc.)
Bachelor’s degree in STEM or a related technical field, or equivalent work experience
Certification on Alteryx, Tableau or similar tools
Strong experience collaborating with a wide range of IT and business stakeholders
Strong verbal and written communication skills, with a demonstrated ability to share information efficiently, influence decisions, negotiate and network
Advanced analytical and problem-solving skills, with a strategic view to conceive solutions that adapt over time or can be reused across different initiatives
Organizational skills, with attention to detail and strong documentation skills
Ability to adapt quickly to new methods and work under tight deadlines
Ability to set goals and handle multiple tasks, stakeholders and projects simultaneously