Lead Big Data Engineer

Posted on: 14.10.2019
Mission

As a senior data engineer, you will lead our team of big data engineers, who build data pipelines over the structured and unstructured datasets used by our data scientists. You will champion technical standards for a small team of data engineers and mentor more junior team members while remaining hands-on with your day-to-day data engineering tasks. You will work closely with our DevOps and data science communities.

Our big data engineers identify and evaluate providers of structured and unstructured datasets and design integration solutions. You will help them architect and implement high-performance batch and streaming data pipelines for our data scientists and internal business users, using our big data stack (AWS services, a Kubernetes cluster, an S3 data lake, and a Spark cluster). Furthermore, you will help them assemble large, complex datasets that meet functional business requirements. You will also optimise our current data pipelines and machine-learning workflows.

In addition, our data engineers are responsible for maintaining our internal data products and for rolling out our open data platform. They contribute to an ambitious initiative to modernise our architecture and data landscape with new technologies in an agile, dynamic environment.

Responsibilities

  • Leading and supporting the big data engineering team according to Scrum/Agile principles.

  • Overseeing third-party data engineering teams.

  • Being a champion of technical standards.

  • Managing the introduction of new technologies related to data engineering.

  • Owning the data environment.

  • Being accountable for the deliveries of the data engineering team.

  • Implementing a full data pipeline from raw to structured data.

  • Producing data: monitoring and alerting.

  • Scouting data: contacting data providers, identifying and evaluating datasets.

Profile

Technical Profile

  • Strong background in computer science is mandatory (Master’s or PhD in Computer Science).

  • In-depth experience with data pipelining and related technologies (AWS CloudFormation, Glue, Lambda, S3, serverless).

  • Good understanding of data-platform architecture.

  • Fluent in Python and/or Scala.

  • Experience with relational database programming (SQL).

  • Experience with Agile work environments and tools (Jira).

  • Experience with unstructured data and/or graph databases (NoSQL, Neo4j, etc.).

  • Experience with Big Data stack, distributed computing platforms and query interfaces (MapReduce, Kafka, Hadoop, Spark, Hive, Elasticsearch, etc.).

  • AWS certification (Solutions Architect, Big Data) would be a plus.

Personal Profile

  • You act with humility.

  • You are a team player and comfortable working as part of a global team.

  • You have an interest in developing people through mentoring and coaching.

  • You have a deliver-first mindset.

  • You have a positive attitude towards challenging situations.

  • You have strong integrity and unbiased judgement.

  • You can multi-task and balance conflicting priorities.

  • You think “everything should be made as simple as possible, but no simpler”.

  • You like to bring clarity out of ambiguity.

  • You have a perfect command of English. French would be a plus.

  • You live in Switzerland or are willing to relocate.

Note

We will not accept CVs submitted via agencies.

LBDE/CM/FG
