Data Engineer (PySpark, AWS, Databricks) Irvine California

By Dice USA Job Portal


Job Title: Data Engineer (PySpark, AWS, Databricks)
Location: Irvine, California
Duration: 6 Months + Extension
Client: Sapient


Job Overview

We are actively hiring a skilled and motivated Data Engineer to join a high-performing data team supporting enterprise-scale initiatives for Sapient. This role is ideal for professionals with strong hands-on experience in modern data engineering technologies such as PySpark, AWS, SQL, and Databricks who are passionate about building scalable, efficient, and reliable data pipelines.

As a Data Engineer, you will be responsible for designing, developing, and optimizing large-scale data processing systems that support business intelligence, analytics, and data science use cases. The position requires close collaboration with cross-functional teams, including data analysts, architects, and business stakeholders, to deliver high-quality data solutions aligned with business objectives.

This is a hybrid role based in Irvine, California, requiring onsite presence three days a week. Candidates must be local to the area.


Key Responsibilities

  • Design, develop, and maintain robust, scalable data pipelines using PySpark and Python for processing large datasets (a minimal pipeline sketch follows this list).
  • Work with structured and semi-structured data using Hive, SQL, and cloud-based data platforms.
  • Build and optimize ETL/ELT workflows to ensure efficient data ingestion, transformation, and delivery.
  • Utilize AWS cloud services such as S3, Glue, Lambda, and Redshift to implement cloud-native data solutions.
  • Develop and schedule workflows using orchestration tools such as Apache Airflow (see the DAG sketch after this list).
  • Collaborate with stakeholders to gather requirements and translate them into technical solutions.
  • Implement best practices in data modeling, data warehousing, and data governance.
  • Work with MPP data warehouses such as Amazon Redshift or Azure SQL Data Warehouse (SQLDW).
  • Leverage Databricks for advanced data processing and analytics workloads.
  • Maintain version control using tools like GitHub or Bitbucket.
  • Ensure data quality, integrity, and security across all data platforms.
  • Participate in code reviews, testing, and deployment processes.
  • Coordinate effectively with offshore teams to ensure seamless delivery of projects.
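
To give a concrete sense of the day-to-day work, below is a minimal sketch of the kind of PySpark pipeline described in this list. It is illustrative only: the bucket names, paths, and column names are hypothetical, not the client's actual data.

    from pyspark.sql import SparkSession, functions as F

    # Illustrative sketch; bucket names, paths, and columns are hypothetical.
    spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

    # Ingest: semi-structured JSON landed in S3.
    raw = spark.read.json("s3://example-raw-bucket/orders/2025-01-01/")

    # Transform: normalize types, derive a partition column, drop bad rows.
    clean = (
        raw
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .filter(F.col("order_id").isNotNull())
    )

    # Deliver: partitioned Parquet for downstream analytics consumers.
    (clean.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))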
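
Orchestration of such a job could then be expressed as an Airflow DAG. The sketch below assumes Airflow 2.x; the schedule, task id, and spark-submit command are placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Placeholder DAG: schedule, ids, and the job location are assumptions.
    with DAG(
        dag_id="orders_daily_etl",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_pyspark_job",
            bash_command="spark-submit s3://example-bucket/jobs/orders_etl.py",
        )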


Required Skills & Qualifications

  • Strong hands-on experience with PySpark, Python, SQL, and Hive.
  • Proven experience working with AWS cloud ecosystem (S3, Glue, Redshift, etc.).
  • Familiarity with workflow orchestration tools such as Apache Airflow.
  • Experience with version control systems like GitHub or Bitbucket.
  • Solid understanding of data warehousing concepts, including schema design, partitioning, and indexing.
  • Experience working with MPP data warehouse platforms like Redshift or SQLDW (a minimal warehouse-load sketch follows this list).
  • Exposure to Databricks and big data processing frameworks.
  • Strong problem-solving skills and ability to handle large-scale data challenges.
  • Excellent communication and client-facing skills.
  • Experience working in a collaborative, distributed team environment, including offshore coordination.
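
As a point of reference for the Redshift requirement, one common loading pattern is a COPY of curated Parquet from S3 into a warehouse table. This sketch assumes psycopg2 for connectivity; the cluster endpoint, credentials, table, and IAM role are all placeholders.

    import psycopg2

    # Placeholder connection details; never hard-code real credentials.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="...",  # elided; use a secrets manager in practice
    )
    with conn, conn.cursor() as cur:
        # Bulk-load curated Parquet from S3 into the MPP warehouse table.
        cur.execute("""
            COPY analytics.orders
            FROM 's3://example-curated-bucket/orders/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
            FORMAT AS PARQUET;
        """)
    conn.close()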


Preferred / Secondary Skills

  • Experience in enterprise data modeling, particularly within the finance domain.
  • Knowledge of asset management systems, including financial instruments such as fixed income (FI), equities, and derivatives.
  • Familiarity with AWS Data Quality (DQ) tools and frameworks (a generic data-quality check sketch follows this list).
  • Exposure to DevOps practices, including CI/CD pipelines and automation.
  • Understanding of data governance, lineage, and compliance standards in financial services.
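
AWS ships managed data-quality tooling (for example, Glue Data Quality), but basic completeness rules can also be expressed in plain PySpark. The sketch below is a generic illustration, not the AWS DQ framework itself; the path and rules are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    # Generic completeness checks; table path and thresholds are assumptions.
    spark = SparkSession.builder.appName("orders_dq_checks").getOrCreate()
    df = spark.read.parquet("s3://example-curated-bucket/orders/")

    total = df.count()
    null_ids = df.filter(F.col("order_id").isNull()).count()

    # Fail the run loudly if the basic rules are violated.
    assert total > 0, "DQ failure: orders table is empty"
    assert null_ids == 0, f"DQ failure: {null_ids} rows missing order_id"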

What Makes This Role Exciting

This opportunity allows you to work with cutting-edge technologies in a dynamic and fast-paced environment. You will be part of a team that is driving digital transformation initiatives and building modern data platforms that enable data-driven decision-making.

Working with Sapient means exposure to large-scale enterprise projects, collaboration with industry experts, and the opportunity to grow your expertise in cloud-based data engineering and advanced analytics.


Ideal Candidate Profile

The ideal candidate is someone who:

  • Has a strong foundation in data engineering and big data technologies
  • Is comfortable working with cloud platforms, especially AWS
  • Can independently design and implement scalable data solutions
  • Has experience working in client-facing roles and understands business needs
  • Thrives in collaborative environments and can work effectively with offshore teams
  • Has a keen interest in financial data and asset management (preferred but not mandatory)

Work Environment

  • Hybrid model: 3 days onsite in Irvine, CA
  • Collaborative and agile team structure
  • Opportunity to work with global teams
  • Exposure to enterprise-level data architecture

Apply Now: shiva@1pointsys.com

Conclusion

This Data Engineer role is an excellent opportunity for professionals looking to advance their careers in cloud data engineering and big data technologies. With a strong emphasis on PySpark, AWS, Databricks, and data warehousing, this position offers both technical challenges and career growth.

If you are passionate about building scalable data solutions, enjoy solving complex data problems, and want to work with a leading client like Sapient, this role is a perfect fit.

 
