Staff Data Engineer

Salary: Not provided
Related skills: AWS, Python, Databricks, Kafka, PySpark

Description
- Design, build, and operate a Databricks lakehouse with Delta Live Tables.
- Architect and maintain scalable AWS data pipelines (S3/Glue/Lambda/Kinesis).
- Enforce data residency and governance using Unity Catalog and OPA.
- Define data quality, lineage, schema evolution, and audit logging.
- Implement FHIR-to-OMOP mappings for interoperability and analytics.
- Enable near real-time analytics with Kafka-based streaming.

Requirements
- 8+ years of data engineering with Databricks (Delta Lake, Unity Catalog, DLT).
- Strong AWS data services: S3, Glue, Lambda, Kinesis, Redshift, Athena, IAM.
- Advanced Python and PySpark for batch and streaming ETL/ELT.
- Experience integrating Kafka (MSK/Confluent) with a lakehouse.
- Data modeling for analytics and operational use (dimensional modeling, SCD).
- IaC/CD pipelines for data platforms (Terraform, CloudFormation, CDK).
- Familiarity with regulatory/compliance standards (HIPAA, SOC 2, ISO 27001) and data residency.
- Bachelor's degree in CS/DS/Engineering or a related field.

Benefits
- 100% remote within LatAm.
- Payments in USD; 40 hours/week; PTO; holidays; Christmas break.
- Wellness stipend and snack boxes.