(Remote Work) Site Reliability Engineer, Data Platform - Kraken

  Remote

Job Overview

  • Job Title: Site Reliability Engineer, Data Platform
  • Hiring Organization: Kraken
  • Company Website: https://www.kraken.com/
  • Remote Locations: US
  • Job Type: Remote, Full-Time

Building the Future of Crypto 

Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.

What makes us different?

Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.

Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission.

As a fully remote company, we have Krakenites in 60+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Kraken NFT, and Kraken Futures.

Become a Krakenite and build the future of crypto!

Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out the candidates with the right abilities, knowledge, and skills considered the most suitable for the job. We encourage you to apply for roles where you don’t fully meet the listed requirements, especially if you’re passionate or knowledgeable about crypto!

As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status or any other protected characteristic as outlined by federal, state or local laws.

Job Responsibilities

  • Architect and implement self-service data infrastructure solutions that support the needs of 10+ business units and over 100 engineers and data analysts.
  • Utilize Infrastructure as Code (IaC) principles to design, provision, and manage both on-premises and cloud (AWS) infrastructure components using tools such as Terraform.
  • Collaborate with teams to ensure seamless integration of data-related services with existing systems.
  • Develop and maintain automation scripts using bash/shell scripting to automate operational tasks and deployments.
  • Enhance and manage CI/CD pipelines to facilitate consistent software deployments across the data infrastructure.
  • Implement robust data monitoring and alerting solutions to proactively detect anomalies and performance issues.
  • Evangelize and implement security (authentication and authorization), role-based access control (RBAC), and permissions for a multitude of user groups and machine workflows within AWS.
  • Manage and maintain real-time streaming data architecture using technologies like Kafka and Debezium Change Data Capture (CDC).
  • Ensure the timely and accurate processing of streaming data, enabling data analysts and engineers to gain insights from up-to-date information.
  • Utilize Kubernetes to manage containerized applications within the data infrastructure, ensuring efficient deployment, scaling, and orchestration.
  • Implement effective incident response procedures and participate in on-call rotations.
  • Troubleshoot and resolve incidents promptly to minimize downtime and impact.
  • Collaborate with data analysts, engineers, and cross-functional teams to understand requirements and implement appropriate solutions.
  • Document architecture, processes, and best practices to enable knowledge sharing and support continuous improvement.

Job Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
  • Proven experience (5+ years) working as a Site Reliability Engineer, Infrastructure Engineer, or similar roles, with a focus on data infrastructure and security.
  • Experience with real-time data processing technologies, such as Kafka and Debezium.
  • Strong expertise in the AWS ecosystem, including IAM, permission boundaries, serverless, role-based access control with Athena, Lake Formation, EMR, Glue, Lambda, etc.
  • Proficiency in Infrastructure as Code tools such as Terraform and Atlantis.
  • Experience with containerization and orchestration tools, particularly Kubernetes.
  • Solid understanding of bash/shell scripting and proficiency in at least one programming language such as Go.
  • Familiarity with CI/CD deployment pipelines and related tools.
  • Strong problem-solving skills and the ability to troubleshoot complex systems.
  • Experience with data-related technologies (databases, Airflow, data warehousing, data lakes).

Nice to Haves

  • Experience integrating AWS services with HashiCorp stack (Nomad, Consul, Vault)
  • Expertise in zero-trust architecture.
  • Experience with Tableau administration.

How To Apply

Click “Apply” below to fill in the application form!

More Information

  • Remote Job Location: United States
  • Salary Offer: To be discussed
  • Experience Level: Senior Level
  • Education Level: Bachelor's Degree
  • Working Hours: To be arranged (full-time)
  • Job Application: Via Custom Application Page

