Data Engineer

Salary: S$8,000 - S$10,000 per month
Location: Singapore
Type: Permanent
Workplace: On-site
Published: Oct 3, 2024
Ref: BBBH152011_1727933528

You will have the opportunity to contribute to setting up a high-performing internal tech development team that enables the rapid growth of a global organization.

Job Responsibilities
* Design and deliver data solutions using an agile, iterative approach based on Scrum and change management processes.
* Collaborate with the Data Engineering Manager on technical architecture and design.
* Work with Business Analysts or users to understand commercial data usage and identify system requirements.
* Analyse and estimate IT changes, providing input on technical opportunities, constraints, and trade-offs.
* Create documentation and present it to both technical and non-technical audiences.
* Hand over to the L2 team and provide third-line support for short periods after releases.
* Own your learning to remain a technical subject matter expert.
* Collaborate with the Data Engineering Manager, other Data Engineers, the Lead BA, and cross-team SMEs to deliver software to Production with minimal impact.
* Conduct detailed testing for development activities and demonstrate results according to the delivery methodology and coding standards.
* Create and productionize complex data pipelines with quick turnaround and high quality.
* Assist the team with Production code deployment and data platform support.
* Support the L2 team in fixing Production bugs.
* Act as a Release Engineer in sprints as needed.

Job Requirements
* Strong academic background with a degree, equivalent professional qualification, or experience.
* Analytical, flexible, and curious; open to diverse opinions and new ideas.
* Around 5 years' experience in a similar role.
* Extensive cloud experience in Azure and expertise in modern cloud-based data architectures.
* Advanced coding experience in Python and SQL.
* Experience in Databricks, or equivalent experience in Spark.
* Advanced knowledge of Databricks (e.g. Unity Catalog, Delta Live Tables, platform administration), including Databricks internals such as Optimize, Vacuum, and Z-ordering (a short sketch follows this list).
* In-depth knowledge of Azure services, including VNET, Key Vaults, Azure Data Factory, ADLS Gen2, Virtual Machines, App Services, Storage Accounts, and Azure Active Directory.
* Proficient in dimensional modelling (e.g. star and snowflake schema design).
* Experience orchestrating data pipelines using Azure Data Factory or Airflow.
* Knowledge of Python packages such as Pandas, NumPy, and Seaborn.
* Strong understanding of Big Data, MapReduce, Spark, and file formats such as Parquet, Avro, and ORC.
* Familiarity with reporting tools such as Power BI or Tableau.
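For context, the Databricks internals named above can be exercised along the lines of the minimal Python sketch below, assuming a Databricks notebook or job where a `spark` session is already provided and using a hypothetical Delta table named sales.transactions.

    # Minimal sketch of Delta table maintenance on Databricks.
    # Assumes `spark` is supplied by the Databricks runtime and that a Delta
    # table named sales.transactions already exists (hypothetical name).

    # Compact small files and colocate rows by customer_id via Z-ordering,
    # which reduces the amount of data scanned by selective queries.
    spark.sql("OPTIMIZE sales.transactions ZORDER BY (customer_id)")

    # Remove data files no longer referenced by the Delta transaction log,
    # keeping the default 7-day (168-hour) retention window.
    spark.sql("VACUUM sales.transactions RETAIN 168 HOURS")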

Apply
