Description:
You have strong experience building platforms and/or software products across the full stack. You can design, build, and test software that is both large-scale and complex. You understand industry-recognised design patterns and architectures.
You understand how to select and implement the appropriate technologies to deliver resilient, scalable, and future-proof data solutions. You know how to apply DevOps best practices with orchestration tools such as Kubernetes.
You can build and nurture high-performing engineering teams. You value strong relationships with your team and your clients. You are adept at understanding, building for, and speaking to the diverse needs of multiple sectors and departments in a rapidly changing environment while maintaining a culture of trust and transparency.
Job Responsibilities:
Collaboratively design, develop, and implement a large-scale data platform as a product.
Design APIs and data flows to connect a variety of sources such as IoT devices and sensors, operational systems, and third-party APIs (a brief sketch follows this list).
Document source-to-target mappings and maintain relevant documentation for all production code.
Provide support and mentorship for technical staff and clients to build a collaborative and effective work culture.
Develop and deliver data patterns, tools, and practices that can be leveraged by delivery teams.
Build and maintain appropriate metadata repositories for greater understanding of data assets.
Liaise with data analysts, architects, and scientists to design and implement data services.
Design, analyze, map, and model data flow between other platform layers and components.
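As a loose illustration of the API-and-data-flow responsibility above, the sketch below exposes sensor readings through a REST endpoint. It is a minimal sketch only: FastAPI, the SensorReading model, and the in-memory store are illustrative assumptions, not technologies named in this posting.

```python
# Minimal sketch of a REST endpoint exposing sensor readings.
# FastAPI and the in-memory store are illustrative assumptions;
# this posting does not prescribe a specific framework.
from datetime import datetime, timezone

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class SensorReading(BaseModel):
    sensor_id: str
    value: float
    recorded_at: datetime

# Hypothetical in-memory store standing in for an operational system.
_READINGS: dict[str, list[SensorReading]] = {
    "sensor-1": [
        SensorReading(sensor_id="sensor-1", value=21.4,
                      recorded_at=datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ],
}

@app.get("/sensors/{sensor_id}/readings")
def list_readings(sensor_id: str) -> list[SensorReading]:
    """Return all readings for one sensor; 404 if the sensor is unknown."""
    if sensor_id not in _READINGS:
        raise HTTPException(status_code=404, detail="unknown sensor")
    return _READINGS[sensor_id]
```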
Job Requirements:
5 years of relevant industry experience leading teams of engineers to deliver data-focused products.
Experience designing and developing REST & Streaming APIs, gateways, and microservices.
Strong public-speaking, conversational, and written communication skills, including the ability to articulate complex technical concepts to non-technical audiences such as senior stakeholders.
Experience developing with a number of general-purpose programming languages (e.g. Python, Java, Scala, C#, Ruby, PHP, Go, and/or Swift).
Experience with relational databases such as Postgres and MySQL.
Experience with big data technologies such as Hadoop, MongoDB, Spark, and Kafka.
Understanding of modern data architecture approaches and processes, including data lakes, data warehouses, data integration, and data consumption.
Understanding of data pipelines and workflows built with tools such as Apache Airflow, NiFi, Beam, and other open-source options (a minimal sketch follows these requirements). Advanced SQL knowledge is essential.
Experience working with DevOps practices, Git version control, and agile approaches.
Experience developing for on-premises environments or major cloud platforms such as Azure, AWS, and Google Cloud.
Knowledge of Data Mesh and decentralised data architecture patterns would be considered a significant plus.
Experience with GraphQL is desirable.
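To ground the pipeline requirement above, here is a minimal Apache Airflow sketch of a daily extract-then-load workflow. The dag_id, task names, and callables are hypothetical placeholders, and the sketch assumes Airflow 2.4 or later for the schedule parameter.

```python
# Minimal Apache Airflow DAG: a daily extract -> load pipeline.
# dag_id, task names, and callables are hypothetical placeholders;
# assumes Airflow 2.4+ (the `schedule` parameter).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    """Placeholder: pull records from a source system."""
    print("extracting records")

def load() -> None:
    """Placeholder: write records into the warehouse."""
    print("loading records")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # load runs only after extract succeeds
    extract_task >> load_task
```

In a real pipeline the callables would read from and write to actual systems; the point here is only the declarative DAG structure and the explicit task dependency.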