Vacancies:
1 open position
Desired employment type:
Permanent contract (CDI)
Experience:
3 to 5 years
Education level:
DESS, DEA, Master's, Bac+5, Grandes Écoles
Proposed salary:
Confidential
Languages:
French, English
Gender:
No preference

Job description

Context 

At SESAMm, we provide tools for the asset management industry based on our proprietary big data, artificial intelligence, and natural language processing technologies. We analyze huge amounts of unstructured textual data extracted in real time from millions of news articles, blogs, forums, and social networks. We use this alternative data in combination with standard market data to provide innovative analytics on thousands of financial products across all asset classes, and to develop custom investment strategies using our in-house machine learning and statistical expertise. With more than EUR 8M raised since our creation in 2014, major clients across the world, numerous awards, and rapid team growth, we are expanding quickly across Western Europe, the Americas, and Asia.

Job Description 

You will build and scale data components for key SESAMm products, such as the raw data ingestion pipeline, job scheduling, and ETL design/optimization; optimize the migration of the Product Data Platform toward cloud or on-premise solutions; and establish data development best practices for other tech team members.

You will communicate your team's work through weekly updates.

– Design and implement the best data pipelines for our text-based products (ingestion, processing, exposure):

  • Test and design state-of-the-art data ingestion pipelines
  • Implement efficient streaming services

– Lead the acquisition of new data sources

  • For each new data source, assess its feasibility and potential
  • Integrate the new data into the data lake

– Develop data request tooling for Data Scientists and Technical teams

  • Make the new data request engine easier to use
  • Optimize current queries

– Implement and maintain critical data systems

  • Process and integrate data into new databases or the data lake
  • Ensure maintainability and set up update mechanisms

Technologies used: Spark, AWS EMR, Kafka, SQL, MongoDB…
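
As a rough illustration of the day-to-day work behind this stack, here is a minimal PySpark Structured Streaming sketch that ingests raw articles from a Kafka topic and appends them to a data lake; the broker address, topic name, and S3 paths are hypothetical placeholders, not SESAMm's actual configuration.

    from pyspark.sql import SparkSession

    # Requires the Spark-Kafka connector on the classpath, e.g.:
    #   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 ingest.py
    spark = (
        SparkSession.builder
        .appName("article-ingestion-sketch")
        .getOrCreate()
    )

    # Subscribe to a stream of raw articles (hypothetical broker and topic).
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "raw-articles")
        .load()
    )

    # Kafka exposes key/value as binary; cast the payload for downstream parsing.
    articles = raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")

    # Append the stream to the data lake (hypothetical S3 paths), with
    # checkpointing so the job can resume after a failure.
    query = (
        articles.writeStream
        .format("parquet")
        .option("path", "s3a://example-datalake/raw/articles/")
        .option("checkpointLocation", "s3a://example-datalake/checkpoints/articles/")
        .start()
    )

    query.awaitTermination()

Parquet output plus a checkpoint directory is a common starting point for such a pipeline; schema enforcement and table formats can be layered on top as the data lake matures.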

Job requirements

Candidate Profile

  • Degree from an engineering school or university with a specialization in IT, software engineering, or data science. Candidates with other backgrounds are welcome to apply as long as they have significant IT experience.

Work Experience and Skills Requirements:

  • Work experience: 2–5 years in data engineering or any at-scale data processing role
  • Good understanding of different databases and data storage technologies
  • Very good knowledge of distributed computing systems such as Spark, in both stand-alone and cluster deployments
  • Good knowledge of cloud computing systems, such as AWS, GCP, or Azure ML
  • Development: mastery of at least one of Python, Java, or Scala; at a minimum, working knowledge of Python
  • Good communication skills and the ability to explain technical concepts: understand technical teams' needs and issues, and collaborate with several internal teams. Team player.
  • Additional skills: a strong interest in data science / Natural Language Processing

You should be able to work in a product team and show strong motivation. This job requires autonomy, curiosity in a changing environment, and real dedication to solving problems for clients.

Expiration date

24/12/2020
