We are seeking an experienced Data Engineer to join our growing technology team. In this role, you’ll build and maintain the data infrastructure that powers our insurance comparison platform, enabling us to deliver the best solutions to our clients while maintaining our commitment to service excellence.
About the Company
We are a leading independent insurance brokerage dedicated to helping clients find the right coverage at the best possible rates. By partnering with hundreds of top-rated insurance providers, we offer transparent and hassle-free solutions across a wide range of products — including term life, home, auto, business, disability, and identity theft protection.
Our focus on operational excellence, exceptional service, and financial integrity has earned us recognition as one of the top-performing firms nationwide. We take pride in simplifying the insurance process and providing every client with confidence and peace of mind.
Required Qualifications:
- 5+ years of experience in data engineering or a related field;
- Strong proficiency with dbt for data transformation and modeling;
- Hands-on experience with Apache Airflow for workflow orchestration (see the sketch after this list);
- Advanced Python programming skills;
- Demonstrated experience with PySpark/Apache Spark for distributed data processing;
- Strong SQL skills and experience with modern data warehouses (Snowflake, BigQuery, Redshift, or similar);
- Understanding of data modeling concepts (dimensional modeling, slowly changing dimensions, etc.);
- Experience with version control (Git) and CI/CD practices;
- Strong problem-solving skills and attention to detail.
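As a concrete, non-authoritative illustration of the dbt-plus-Airflow combination above, here is a minimal sketch of a daily DAG that runs an extract step, then dbt transformations and tests. The DAG id, the `extract_partners` callable, and the dbt project path are hypothetical placeholders, not references to our actual pipelines.

```python
# Minimal sketch: a daily Airflow DAG that lands raw data, then runs dbt.
# All identifiers (DAG id, paths, callables) are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_partners() -> None:
    """Placeholder extract step; a real task would land partner feeds in the warehouse."""
    print("extracting partner data...")


with DAG(
    dag_id="partner_data_daily",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_partners",
        python_callable=extract_partners,
    )

    # dbt is typically invoked through its CLI; the project dir is assumed.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    extract >> dbt_run >> dbt_test
```

Chaining `dbt test` after `dbt run` is one common way to surface data quality problems at orchestration time rather than in downstream dashboards.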
Preferred Qualifications:
- Experience in the insurance or financial services industry;
- Knowledge of cloud platforms (AWS preferred);
- Familiarity with data visualization tools (Tableau, Looker, or Superset);
- Experience with real-time data processing and streaming technologies (see the streaming sketch after this list);
- Understanding of data governance and compliance requirements.
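For the streaming item above, one widely used stack is Spark Structured Streaming reading from Kafka. The sketch below is illustrative only: the broker address and the `policy_events` topic are invented, and running it requires the Spark-Kafka connector package on the classpath.

```python
# Minimal sketch: Spark Structured Streaming consuming a Kafka topic.
# Broker address and topic name are hypothetical placeholders; the job
# needs the spark-sql-kafka connector package available at submit time.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "policy_events")                 # assumed topic
    .load()
)

# Kafka delivers the payload as binary; cast it to a string for parsing.
decoded = events.select(col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.format("console")  # console sink, for demonstration only
    .outputMode("append")
    .start()
)
query.awaitTermination()
```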
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Apache Airflow to orchestrate complex ETL/ELT workflows;
- Build and optimize data transformation logic using dbt (data build tool) to ensure data quality and consistency across our analytics platform;
- Develop data processing applications using Python and PySpark/Spark to handle large-scale insurance data from hundreds of partner companies (see the sketch after this list);
- Create and maintain data models that support business intelligence, reporting, and analytics needs;
- Collaborate with data analysts, actuaries, and business stakeholders to understand data requirements and deliver solutions;
- Monitor and optimize data pipeline performance, ensuring reliability and efficiency;
- Implement data quality checks and validation processes to maintain data integrity;
- Document data architectures, workflows, and best practices.
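To make the PySpark and data quality responsibilities above concrete, here is a minimal sketch of a batch job that normalizes a partner feed and applies a simple validation gate before publishing. The paths, column names, and the 1% failure threshold are invented for illustration.

```python
# Minimal sketch: a PySpark batch job with a basic data quality gate.
# Paths, column names, and the failure threshold are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partner-feed-sketch").getOrCreate()

# Read a hypothetical partner feed; a real job would parameterize the path.
policies = spark.read.parquet("s3://example-bucket/raw/policies/")

cleaned = (
    policies
    .withColumn("premium", F.col("premium").cast("double"))
    .dropDuplicates(["policy_id"])
    .filter(F.col("policy_id").isNotNull())
)

# Validation gate: abort the run if too many rows lost `premium` in the cast,
# so bad data never reaches the curated layer.
total = cleaned.count()
missing_premium = cleaned.filter(F.col("premium").isNull()).count()
if total > 0 and missing_premium / total > 0.01:
    raise ValueError(
        f"Data quality check failed: {missing_premium}/{total} rows missing premium"
    )

cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/policies/")
```

Failing the pipeline loudly on a quality breach, instead of silently writing suspect data, is the design choice this sketch demonstrates.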
Our Benefits:
- Support for professional and career growth;
- Competitive salary;
- Paid vacation and sick leave;
- Internal Medical Program;
- Veterans program (mentorship, an accessible office for individuals with disabilities, legal support, and additional benefits);
- Flexible working hours;
- Regular corporate social activities;
- Regular technical training at our office;
- English courses;
- Gym membership, and more.