Data Infrastructure Developer - Data Engineering

Chicago | Remote

About Akuna: 

Akuna Capital is an innovative trading firm with a strong focus on collaboration, cutting-edge technology, data-driven solutions, and automation. We specialize in providing liquidity as an options market-maker, meaning we are committed to posting competitive quotes at which we are willing to both buy and sell. To do this successfully, we design and implement our own low-latency technologies, trading strategies, and mathematical models.

Our Founding Partners first conceptualized Akuna in their hometown of Sydney. They opened the firm’s first office in 2011 in the heart of the derivatives industry and the options capital of the world – Chicago. Today, Akuna is proud to operate from additional offices in Sydney, Shanghai, London, Boston, and Austin. 

What you’ll do as a Data Infrastructure Developer on the Data Engineering team at Akuna:

We are a data-driven organization and are seeking Developers to take our data platform to the next level. At Akuna, we believe that our data provides a key competitive advantage and is critical to the success of our business. Our Data Engineering team is composed of world-class talent, and the Data Infrastructure team is entrusted with building and maintaining our data platform. Our platform starts with gathering data from disparate sources across the globe and ends with our users across Trading, Quantitative, and Support staff accessing complex datasets for a wide range of streaming and batch use cases. Along the way, we build tools to capture, transform, monitor, and access the data in efficient and intuitive ways, building on the best tools and technologies available for each step of the pipeline.
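
As a rough illustration of one stage of such a pipeline, the sketch below shows a minimal Spark Structured Streaming job in Scala that reads raw events from Kafka and lands them in a Delta Lake table, using technologies named in the qualifications further down. The broker address, topic name, and paths are illustrative placeholders rather than Akuna's actual configuration, and the job assumes the spark-sql-kafka and delta-spark dependencies are on the classpath.

    import org.apache.spark.sql.SparkSession

    object MarketEventIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("market-event-ingest")
          .getOrCreate()

        // Read a raw event stream from Kafka (broker and topic are placeholders)
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "market-events")
          .load()

        // Keep the payload and ingestion timestamp; a real pipeline would parse and validate here
        val events = raw.selectExpr(
          "CAST(value AS STRING) AS payload",
          "timestamp AS ingested_at")

        // Land the stream in a Delta table for downstream streaming and batch consumers
        events.writeStream
          .format("delta")
          .option("checkpointLocation", "/data/checkpoints/market-events")
          .start("/data/delta/market-events")
          .awaitTermination()
      }
    }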

In this role, you will:

  • Work within a growing global team supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy pipelines to collect and transform our rapidly growing data sets within our hybrid cloud architecture
  • Mentor junior developers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack

Qualities that make great candidates:

  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java, Scala, or Kotlin experience required; Python experience is a plus
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kafka, Presto, etc. is ideal
  • Experience building large-scale ETL pipelines
  • Must possess excellent communication, analytical, and problem-solving skills
  • Demonstrated experience working with diverse data sets and frameworks across multiple domains – financial data experience not required, but a plus
  • Experience with containerization and container orchestration technologies such as Docker, Kubernetes, and Argo
  • Past hands-on experience with AWS or other public cloud providers (GCP, Azure) is strongly preferred
  • Demonstrated experience using software engineering best practices, such as Continuous Integration/Deployment (CI/CD), to deliver complex software projects

Remote opportunities are available and reviewed on a case-by-case basis. Please note your preference in the application.

Apply Now