Job Description:
- Lead, design, develop and implement large-scale, real-time data processing systems by working with large structured and unstructured data from various complex sources.
- Design, implement and deploy ETL to load data into NoSQL / Hadoop.
- Fine-tune the performance of the data processing platform.
- Develop APIs to interact with the front end and other data warehouses.
- Coordinate with web programmers to deliver a stable and highly available reporting platform.
- Coordinate with data scientists to integrate complex data models into the data processing platform.
- Have fun in a highly dynamic team and drive innovation to remain a leader in one of the fastest-growing industries.
Job Requirements:
- Candidate must possess at least a Bachelor’s Degree in Computer Science, Information Systems, or a related discipline. An MSc or PhD is a plus.
- Proficiency in Python
- A strong background in interactive query processing
- Experience with Big Data applications/solutions such as Hadoop, HBase, Hive, Cassandra, Pig, etc.
- Experience with NoSQL and handling large datasets
- Passion for all things distributed: file systems, databases, and computational frameworks
- Passionate, resourceful, self-motivated, and highly committed; a team player able to motivate others
- Strong leadership qualities
- Good verbal and written communication skills.
- Must be willing to work in a highly dynamic and challenging startup environment.
Salary and compensation
$30,000 — $45,000/year
Equity
1.0 – 3.0

