Data Engineer - Virtual Banking-(2000012463)
Greater China and North Asia-Hong Kong-Hong Kong
Mox is built by and for the ones who aspire to live life to the fullest. We call them Generation Mox.
The name Mox reflects the endless opportunities we can create: Mobile eXperience; Money eXperience; Money X (multiplier); eXponential growth; eXploration. It's all up to us to define together.
We are Generation Mox. Are you?
Do you want to join us on this journey?
Role and Responsibilities
We're looking for a Data Engineer to work on site with our development and data science teams in our offices in Hong Kong.
We work in project-based sprints in small, interdisciplinary teams.
As a Data Engineer, you'd be responsible for the design, creation and maintenance of the analytics infrastructure that enables almost every other function in the data world.
You will be responsible for the development, construction, maintenance and testing of architectures such as data lakes, data warehouses, databases, data pipelines and large-scale processing systems.
As part of the Data Engineering team, you are also responsible for creating the data set processes used in modelling, mining, acquisition and verification.
Collaborate closely with our development and product teams in our fast-paced delivery environment
Design, build and maintain modern, automated, cloud-native analytics infrastructure
Build and manage data warehouses, databases and data pipelines.
Understand and translate business needs into data models supporting long-term solutions. Work with the development team to implement data strategies, build data flows, and develop conceptual, logical and physical data models that ensure high data quality and reduce redundancy
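To illustrate the kind of modelling this involves, here is a toy sketch in Python of a normalized logical model. The `Customer` and `Account` entities and their fields are hypothetical, invented purely for illustration; the point is that referencing customer details by key, rather than copying them onto every account record, is one way redundancy is reduced.

```python
from dataclasses import dataclass

# Toy logical model: two normalized entities. Customer details are stored
# once and referenced by key, so they are never duplicated per account.
@dataclass(frozen=True)
class Customer:
    customer_id: int
    name: str

@dataclass(frozen=True)
class Account:
    account_id: int
    customer_id: int  # foreign key into Customer, not a copied name
    balance_hkd: float

customer = Customer(customer_id=1, name="Ada")
account = Account(account_id=10, customer_id=customer.customer_id, balance_hkd=500.0)
```

In a physical model the same idea would appear as a foreign-key constraint in, say, PostgreSQL.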
Our Ideal Candidate
Knowledge of technology best practices for building a modern data lake, data warehouses and data pipelines
Good understanding of the relevant technologies and experience in building a highly scalable, fault-tolerant cloud data platform
Self-starter, capable of working without direction and able to deliver projects from scratch
Good practical experience and knowledge in building and maintaining data warehousing / big data tools: Hadoop and MapReduce, Apache Spark and Spark SQL, Hive
In-depth database knowledge of RDBMS (PostgreSQL, MySQL) and NoSQL (HBase)
Strong experience in building and maintaining cloud big data and ETL tools: Google Bigtable, BigQuery and Airflow (Google Cloud Composer)
Strong knowledge of and experience with Apache Beam for implementing batch and streaming data processing jobs; strong development background in Python or Java
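A minimal sketch of the batch-versus-streaming distinction this role deals with, in plain standard-library Python. The `word_count` and `windowed_counts` helpers are hypothetical stand-ins; a real pipeline would express the same logic with Apache Beam transforms and windowing rather than bare iterators.

```python
from collections import Counter
from itertools import islice

def word_count(lines):
    """Batch-style: consume a whole bounded collection at once."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return dict(counts)

def windowed_counts(stream, window_size):
    """Streaming-style: emit per-window counts from an (unbounded) iterator."""
    it = iter(stream)
    while True:
        window = list(islice(it, window_size))
        if not window:
            return
        yield word_count(window)

batch = word_count(["mox bank", "mox data"])
# batch == {"mox": 2, "bank": 1, "data": 1}
```

The batch path sees the full input before emitting a result; the streaming path produces partial results per window, which is the shape of problem Beam unifies under a single programming model.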
Strong knowledge of messaging systems such as Kafka, RabbitMQ and Google Pub/Sub
Experience with Agile / Lean projects (Scrum, Kanban, etc.)
Practical knowledge of Git flow, trunk-based and GitHub flow branching strategies
Strong English communication skills
Qualification & Education Requirements
Container management and orchestration experience: Docker, Kubernetes
Monitoring tools: Elastic Stack, Prometheus, Grafana
Breadth of knowledge: operating systems, networking, distributed computing, cloud computing
Familiarity with big data technologies (Amazon Redshift, Panoply), ETL tools (StitchData and Segment), and machine learning technologies and environments