About the Job:
We are looking for experienced individuals who are passionate about data science and enjoy working in a collaborative environment. You will get the chance to work with one of the most advanced and comprehensive web crawling and scraping infrastructures in the world, leveraging massive data sets with cutting-edge technology.
Due to business requirements, only candidates based in Ireland will be considered.
Job Responsibilities:
You will apply your data science and engineering skills to create products based on machine learning, analyze large volumes of complex data, model challenging problems, and develop algorithms to solve our internal and client needs.
You will work and experiment with state-of-the-art web crawling, machine learning, and data processing technologies. Some of the problems you’ll be working on include object detection, text classification, named entity recognition, and crawling algorithms.
You will work in collaboration with other data scientists and engineers across Scrapinghub to design and build creative solutions to challenging problems.
You will work on projects that span the whole organization, including areas such as Product and Professional Services.
Job Requirements:
Strong machine learning background (natural language processing, computer vision, deep learning, and “classical” methods)
Hands-on experience in Data Science projects (data preparation, target metrics, model evaluation, validation, etc.)
Strong software development skills, ideally in Python.
Experience with any of these tools is a plus: PyTorch, scikit-learn, TensorFlow, pandas, Jupyter, spaCy, Gensim, Vowpal Wabbit, CRFsuite, Scrapy, Spark, AWS, Docker, Kafka.