DataOps Engineer

Chicago / Remote

About Akuna:  

Akuna Capital is an innovative trading firm with a strong focus on collaboration, cutting-edge technology, data-driven solutions, and automation. We specialize in providing liquidity as an options market-maker, meaning we are committed to providing competitive quotes at which we are willing to both buy and sell. To do this successfully, we design and implement our own low-latency technologies, trading strategies, and mathematical models.

Our Founding Partners first conceptualized Akuna in their hometown of Sydney. They opened the firm's first office in 2011 in Chicago, the heart of the derivatives industry and the options capital of the world. Today, Akuna is proud to operate from additional offices in Sydney, Shanghai, London, Boston, and Austin.

What you will do as a DataOps Engineer on the Data Engineering Team at Akuna:  

We are a data-driven organization and are seeking DataOps Engineers to continuously evolve our data platform. We believe our data provides a competitive advantage and is crucial to the success of our business. The DataOps Team has been entrusted with running our infrastructure and building exceptional management and monitoring tooling in conjunction with our talented Data Engineering teams. Our data platform extends globally and must support ingestion, processing, and access to complex datasets for a wide range of streaming and batch use cases. To support it, we build, deploy, and monitor the platform in efficient and highly automated ways, drawing on the best frameworks and technologies available.

In this role, you will:  

  • Work within the Global Data Team, using your expertise to build highly automated provisioning, monitoring, and operational capabilities for data at Akuna
  • Support the ongoing growth, design, and expansion of our data platform across a wide variety of data sources, enabling large-scale support for an array of streaming, near-real-time, and research workflows
  • Operate the data platform to ensure key SLAs are met across a wide range of producers and consumers
  • Build and run essential monitoring infrastructure supporting many of the most important data pipelines at the firm  
  • Work with the Data Infrastructure Team to coordinate high-quality data platform releases into our hybrid cloud architecture  
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications  
  • Challenge the status quo, help push our organization forward, and define the future of our tech stack

Qualities that make great candidates:  

  • BS/MS/PhD in Computer Science, Engineering, or an equivalent technical field
  • 5+ years of professional experience developing and operating automation, monitoring, and management solutions
  • Prior hands-on experience with data platforms and technologies such as Kafka, Delta Lake, Spark, and Elasticsearch
  • Prior hands-on experience with observability and monitoring tools (Prometheus, Grafana, the ELK stack, etc.)
  • Demonstrated experience with a leading cloud provider (AWS, Azure, Google Cloud Platform)  
  • Demonstrated experience with containerization and container orchestration technologies, like Docker, Kubernetes, and Argo 
  • Knowledge of at least one programming language (Java, Python, C++, Scala, Go, etc.)  
  • Highly motivated and willing to take ownership of high-impact projects from day one
  • Hands-on, collaborative working style, with the ability to build relationships with multiple teams  
  • Demonstrated experience using software engineering best practices such as continuous integration and continuous deployment (CI/CD) to deliver complex software projects

Remote opportunities are available and reviewed on a case-by-case basis. Please note your preference in the application. 

Apply Now