- A Data Engineer who handles a wide range of tasks: designing, building, and maintaining data pipelines
- Collates raw data from a variety of sources and optimises pipeline performance
- Familiar with big data frameworks, databases, data infrastructure, and containers
- Adept at keeping deployed pipelines running smoothly, including preventative measures to minimise unplanned downtime of streams; takes initiative, spots opportunities, and drives positive change within integration streams
- SQL & NoSQL database technologies
- Data transformation, converting raw data into a consumable format
- Distributed event store and stream processing solutions such as Apache Kafka
- Data mining, finding patterns in large data sets and preparing them for analysis, classification, and prediction
- Data warehousing and ETL tools
- Real-time processing frameworks for generating insights quickly enough to act upon
- Data buffering for streaming data
- Machine Learning integration and algorithms
- AWS Cloud computing
- Data visualisation, presenting generated insights and learnings in a consumable format
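As a minimal sketch of the data-transformation bullet above — converting a raw event into a flat, consumable record — here is a small Python example. The input schema (`user_id`, `ts`, `amount_cents`) and field names are hypothetical, chosen only for illustration:

```python
import json
from datetime import datetime, timezone

def transform(raw_line: str) -> dict:
    """Convert one raw JSON event into a flat, consumable record.

    The input schema (user_id, ts, amount_cents) is hypothetical.
    """
    event = json.loads(raw_line)
    return {
        # Coerce the stringly-typed id into an integer
        "user_id": int(event["user_id"]),
        # Normalise epoch seconds to an ISO-8601 UTC timestamp
        "timestamp": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
        # Represent money as a fixed two-decimal string, not a float
        "amount": f"{event['amount_cents'] / 100:.2f}",
    }

raw = '{"user_id": "42", "ts": 1700000000, "amount_cents": 1999}'
record = transform(raw)
print(record["amount"])  # "19.99"
```

In a real pipeline this function would be the per-record step inside an ETL job or stream processor; the point is that type coercion, timestamp normalisation, and unit conversion happen once, up front, so downstream consumers get a consistent schema.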
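The data-buffering bullet can likewise be illustrated with a simple bounded in-memory buffer. A production deployment would use a broker such as Apache Kafka; this is only a sketch of the idea, with a deliberately lossy drop-oldest policy as its assumed back-pressure strategy:

```python
from collections import deque

class StreamBuffer:
    """Bounded FIFO buffer for streaming data.

    When full, the oldest item is evicted so a slow consumer
    cannot stall the producer (a lossy policy, assumed for
    this sketch; a real system might block or spill instead).
    """

    def __init__(self, capacity: int):
        self._items = deque(maxlen=capacity)

    def push(self, item) -> None:
        self._items.append(item)  # deque evicts the oldest item when full

    def drain(self) -> list:
        """Remove and return everything currently buffered."""
        drained = list(self._items)
        self._items.clear()
        return drained

buf = StreamBuffer(capacity=3)
for event in range(5):
    buf.push(event)
print(buf.drain())  # [2, 3, 4] -- the two oldest events were evicted
```

The capacity bound is what makes this a buffer rather than a queue that grows without limit: it decouples producer and consumer rates while keeping memory use predictable.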