- Vacancies: 1 open position
- Desired employment type: Freelance / Independent contractor
- Education level: Engineering degree
- Language: English
- Gender: No preference
Job description
This job is for freelancers at a senior level (more than 5 years of experience) in this specific field.
Before you apply, please ensure that you fulfill all the requirements!
We are looking for Data Engineers to join us. You will help internal teams migrate to a new set of canonical datasets produced by the Metadata Distribution squad.

What you'll do
- Implement canonical datasets for metadata entities that are used to fuel hundreds of experiences on our platform
- Support internal teams in migrating their pipelines to the new generation of metadata datasets
- Get hands-on experience with Google Cloud Platform and technologies/languages such as BigQuery, Scala, Scio, Luigi, Styx and Docker
- Operate large batch data pipelines (an illustrative sketch follows this list)
- Work closely with our customers and stakeholders to understand, document, troubleshoot and analyze their data requirements
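For illustration only: since the listing names Scala, Scio and Google Cloud Platform, below is a minimal sketch of the kind of batch job this role involves. The job name, input/output paths, the "entityId,payload" record format and the deduplication rule are hypothetical assumptions, not the squad's actual pipeline.

```scala
import com.spotify.scio._

object CanonicalEntityJob {
  def main(cmdlineArgs: Array[String]): Unit = {
    // Parse standard Scio/Dataflow command-line arguments (--input, --output, ...)
    val (sc, args) = ContextAndArgs(cmdlineArgs)

    sc.textFile(args("input"))                      // e.g. gs://bucket/raw-metadata/*.csv (hypothetical)
      // Toy parse of "entityId,payload" lines; malformed lines are dropped
      .flatMap { line =>
        line.split(",", 2) match {
          case Array(id, payload) => List((id, payload))
          case _                  => Nil
        }
      }
      // Keep a single record per entity id (arbitrary winner, for the sake of the sketch)
      .reduceByKey((first, _) => first)
      .map { case (id, payload) => s"$id,$payload" }
      .saveAsTextFile(args("output"))               // e.g. gs://bucket/canonical-metadata/ (hypothetical)

    sc.run().waitUntilFinish()
  }
}
```

Run like any Scio job, e.g. `sbt "runMain CanonicalEntityJob --input=... --output=..."` locally, or with Dataflow runner options on GCP.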
Job requirements
Who you are
- You have Data Engineering experience and know how to work with high-volume, heterogeneous data, preferably with distributed systems such as Hadoop, BigTable, Cassandra, GCP, AWS or Azure
- You know the Scala language well
- You have experience with one or more higher-level JVM-based data processing frameworks such as Beam, Dataflow, Crunch, Scalding, Storm, Spark or Flink
- You may have worked with Docker as well as Luigi, Airflow, or similar tools
- You are passionate about crafting clean code and have experience in coding and building data pipelines
- You understand the value of collaboration and partnership within a team
Expiration date
16/10/2020