Data Engineer Role at Wave Canada

Last updated: February 9, 2026, 5:06 UTC

Company: Wave HQ

Location: Hybrid

At Wave, we empower small businesses to succeed so our communities can thrive. Our workplace is full of creativity and energy, and no matter where you work or how you work best, you’ll have the tools and support you need. Success here means being bold, learning quickly, and sharing your knowledge openly.

As a Data Engineer reporting to the Manager of Data Engineering, you will build tools and infrastructure that support the Data Products and Insights & Innovation teams, as well as the broader organization.

We’re seeking a curious, driven self-starter who enjoys tackling complex challenges and managing multiple domains and stakeholders. This technically strong individual will work with all levels of the Data & AI team and partner with engineering teams to develop data solutions, expand our data infrastructure, and help move Wave forward in its transformation into a data-focused organization.

This position is ideal for someone with experience working in complex product environments. Strong communication skills are essential to bridge the gap between technical and non-technical audiences at varying levels of data maturity.

Here’s How You Make an Impact:
– You’re a builder: Design, create, and deploy components of a modern data platform, including CDC ingestion with Debezium and Kafka, a centralized Hudi-based data lake, and a variety of batch, incremental, and streaming pipelines (a sketch of this CDC flow follows this list).
– You ensure continuity while modernizing: Maintain and improve the Amazon Redshift warehouse and existing Python ELT pipelines while leading the shift toward a Databricks and dbt-based analytics environment.
– You balance innovation with operational excellence: Build scalable, reliable, and cost-effective data systems while improving observability, performance, and stability across both legacy and new platforms.
– You collaborate to drive results: Work with cross-functional partners to create data infrastructure and pipelines that support analytics, machine learning, and GenAI use cases, ensuring data is accurate and delivered on time.
– You handle ambiguity and take ownership: Work independently to identify and implement improvements to pipelines and workflows under tight deadlines and evolving requirements.
– You keep the platform stable: Respond to PagerDuty alerts, troubleshoot issues, and proactively improve monitoring and alerting to reduce incidents and maintain high availability.
– You communicate well: Offer technical guidance, explain complex concepts clearly, and listen carefully to build trust and resolve issues.
– You think about the customer: Evaluate existing systems, improve data accessibility, and build practical solutions that help internal teams generate insights and enhance the customer experience.
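
To give a flavour of the CDC work described above, here is a minimal PySpark sketch of landing Debezium-style change events in a Hudi table. This is an illustration under stated assumptions, not Wave's actual pipeline code: the bucket paths, table name, and field names are hypothetical placeholders.

```python
# Minimal sketch: upserting Debezium-style CDC events into a Hudi table.
# All paths, table names, and fields below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("cdc-to-hudi")
    # The Hudi Spark bundle must be on the classpath (e.g. via --packages).
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Read a batch of Debezium change events that a connector landed on S3 as JSON.
raw = spark.read.json("s3://example-bucket/cdc/invoices/")  # hypothetical path

# Debezium wraps each row in an envelope; keep the post-image plus metadata.
changes = raw.select(
    "payload.after.*",
    F.col("payload.op").alias("_op"),
    F.col("payload.ts_ms").alias("_ts_ms"),
)

hudi_options = {
    "hoodie.table.name": "invoices",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "_ts_ms",  # latest event wins
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert into the lake; Hudi deduplicates on the record key,
# keeping the row with the highest precombine value.
changes.write.format("hudi").options(**hudi_options).mode("append").save(
    "s3://example-bucket/lake/invoices/"  # hypothetical lake location
)
```

In production, a flow like this would more likely run continuously (for example, Spark Structured Streaming against MSK) rather than re-reading a raw S3 prefix in batch.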

You Thrive Here by Having:
– Data engineering expertise: 3+ years building pipelines and working with modern, secure data stacks, including CDC streaming ingestion (e.g., Debezium) into warehouses supporting AI/ML.
– AWS cloud experience: At least 3 years working with AWS infrastructure such as Kafka (MSK), Spark/AWS Glue, and Terraform for IaC.
– Strong SQL and data modelling skills: Deep knowledge of SQL and data modelling for both OLTP and OLAP systems.
– Databricks experience: Prior experience with a production Databricks environment is a strong asset.
– Strong coding abilities: Proven ability to write and review maintainable, high‑quality code using Python, SQL, and dbt, and to use third-party frameworks when needed (see the incremental-load sketch after this list).
– Data lake development experience: Experience building S3-based data lakes using Apache Hudi and formats like Parquet, Avro, JSON, and CSV.
– CI/CD experience: Skill in developing and deploying data pipelines using CI/CD best practices.
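
As a taste of the incremental pipeline work the role involves, below is a minimal Python sketch of the high-water-mark pattern many batch and incremental loads follow. The connection strings, table name, and cursor column are hypothetical, not anything from Wave's systems:

```python
# Minimal sketch of a high-water-mark incremental load.
# DSNs, table names, and columns are hypothetical placeholders.
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

SOURCE_DSN = "dbname=app host=source.example.com user=etl"       # placeholder
WAREHOUSE_DSN = "dbname=dw host=warehouse.example.com user=etl"  # placeholder

def load_new_rows(table: str, cursor_column: str) -> int:
    """Copy source rows newer than the warehouse's current high-water mark."""
    # NB: identifiers are interpolated for brevity; real code should
    # whitelist table and column names before building SQL with them.
    with psycopg2.connect(WAREHOUSE_DSN) as dw, psycopg2.connect(SOURCE_DSN) as src:
        with dw.cursor() as dw_cur, src.cursor() as src_cur:
            # 1. Find how far the warehouse has already been loaded.
            dw_cur.execute(
                f"SELECT COALESCE(MAX({cursor_column}), '1970-01-01') FROM {table}"
            )
            high_water_mark = dw_cur.fetchone()[0]

            # 2. Pull only rows past that mark from the source system.
            src_cur.execute(
                f"SELECT * FROM {table} WHERE {cursor_column} > %s "
                f"ORDER BY {cursor_column}",
                (high_water_mark,),
            )
            rows = src_cur.fetchall()

            # 3. Append the new rows to the warehouse copy of the table.
            if rows:
                placeholders = ", ".join(["%s"] * len(rows[0]))
                dw_cur.executemany(
                    f"INSERT INTO {table} VALUES ({placeholders})", rows
                )
    return len(rows)

if __name__ == "__main__":
    print(f"loaded {load_new_rows('invoices', 'updated_at')} rows")
```

In practice a job like this would be orchestrated, tested, and deployed through CI/CD, and the final step would typically be a MERGE/upsert rather than a plain append.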

Bonus Points For:
– Knowledge of data governance, including data quality, lineage, privacy, and experience with data catalog tools.
– Familiarity with tools like Stitch and Segment CDP for integrating diverse data sources.
– Experience with analytics and ML tools such as Athena, Redshift, or SageMaker Feature Store.

Salary range: $101,000 to $113,000 USD annually. Final compensation depends on experience, expertise, and role fit. Most hires land mid-range, with the highest offers reserved for candidates with exceptional skills and immediate impact.

We also provide:
– Bonus program
– Employer-paid benefits
– Health & Wellness Flex Account
– Professional Development Account
– Wellness Days
– Holiday Shutdown
– Wave Days (extra summer vacation days)
– Get A-Wave Program (work from anywhere in the world for up to 90 days)

Visit our site to learn more about working at Wave.

We value diverse perspectives, and your unique experience strengthens our organization. We welcome candidates from all backgrounds and look forward to discussing how you can thrive here.

Wave is committed to an inclusive and accessible hiring process. If you need accommodations, email careers@waveapps.com and we’ll work with you to meet your needs.

Please note that interviews may include AI-assisted transcription so interviewers can stay fully engaged.

This posting reflects an active, open role.

To apply for this job, please visit jobs.lever.co
