Workshop - Stream Processing with Apache Flink

YOW! Data 2020 - 29 Apr

Apache Flink is a distributed stream processor that makes it easy to implement stateful stream processing applications and operate them at scale.

In this workshop, you will learn the basics of stream processing with Apache Flink. You will implement a stream processing application that ingests events from Apache Kafka and submit it for execution to a local Flink cluster running on Docker. You will learn how to manage and operate a continuously running application and how to access job and framework metrics.
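
For a first impression, here is a minimal sketch of what such a job can look like with Flink's DataStream API and the Kafka connector. The topic name, consumer group, and broker address below are illustrative placeholders and not part of the workshop material.

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    public class KafkaIngestionJob {

        public static void main(String[] args) throws Exception {
            // Entry point of every DataStream program: the execution environment.
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Kafka connection settings; broker address, group id, and topic are placeholders.
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id", "flink-workshop");

            // Ingest the topic as an unbounded stream of strings.
            DataStream<String> events = env.addSource(
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props));

            // A trivial transformation, just to have an operator in the job graph.
            events
                .map(value -> value.toUpperCase())
                .print();

            // The dataflow is built lazily and runs when execute() is called.
            // Packaged as a JAR, the same job can be submitted to the local
            // Docker cluster with `flink run`.
            env.execute("Kafka ingestion example");
        }
    }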

In the afternoon, we will look at Flink's streaming SQL interface. You will submit SQL queries that are evaluated over unbounded data streams, producing results that are continuously updated as more data is ingested.
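
To give a flavour of this part, here is a small sketch using the Java Table API of the Flink 1.10 era; the table name and schema (an events table with a user_id column, assumed to have been registered beforehand, e.g. on the Kafka topic from the morning) are assumptions for illustration only.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class StreamingSqlExample {

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

            // Assumes a table `events` with a `user_id` column has already been
            // registered (placeholder name and schema). A non-windowed aggregation
            // over an unbounded stream never "finishes": the per-user counts are
            // updated every time a new event arrives.
            Table countsPerUser = tableEnv.sqlQuery(
                "SELECT user_id, COUNT(*) AS cnt FROM events GROUP BY user_id");

            // The continuously updating result can be observed as a retract stream
            // of (add/retract flag, row) pairs.
            tableEnv.toRetractStream(countsPerUser, Row.class).print();

            env.execute("Streaming SQL example");
        }
    }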

Fabian Hueske

Co-Founder, Software Engineer

Ververica

Germany

Fabian Hueske is a committer and PMC member of the Apache Flink project and has been contributing to Flink since its earliest days. Fabian is a co-founder of Ververica, a Berlin-based startup devoted to fostering Flink; the company was acquired by Alibaba in early 2019. He still works as a software engineer at Ververica and contributes to Apache Flink®. Fabian holds a PhD in computer science from TU Berlin and is a co-author of “Stream Processing with Apache Flink”.

Workshop Details

Target Audience: All
Level: Intermediate
Duration: Full day

Prerequisites

  • Internet connection
  • Basic knowledge of Java and SQL
  • Basic knowledge of distributed data processing (MapReduce/Spark/etc.) will be helpful but is not required
  • You will need a laptop with at least 8 GB of RAM and the following software installed: Docker (incl. Docker Compose), Java 8, a Java IDE (preferably IntelliJ IDEA), and Apache Maven