Flink 2.0.0.0 - 2510 (DEP 10.0.0) Release Notes

Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. For more information about the Data Fabric implementation of Flink, see Apache Flink.

The notes below relate specifically to the Data Fabric distribution of Apache Flink. You may also be interested in the Apache Flink home page.

These release notes contain only HPE-specific information and are not necessarily cumulative in nature. For information about how to use the release notes, see Ecosystem Component Release Notes.

Version 2.0.0.0
Release Date October 2025
HPE Version Interoperability See Ecosystem Pack Components and OS Support
GitHub Source
GitHub Release Tags
  • flink-shaded: 19.0.0-dep-1000
  • flink: 2.0.0.0-dep-1000
  • flink-connector-kafka: 4.0.0.0-dep-1000
Maven Artifacts https://repository.mapr.com/maven/
Package Names Navigate to https://package.mapr.hpe.com/releases/MEP/, and select your EEP (MEP) or DEP and OS to view the list of package names.

New in this Release

  • Apache Flink 2.0.0 included in DEP with support for DEP YARN, MapRFS (for checkpoint and HA storage), and DF ZooKeeper.
  • Apache Flink Connector Kafka 4.0.0 with support for Data Fabric Streams is included in DEP 10.0.0.

Known Issues and Limitations

Flink in DEP 10.0.0 has the following known issues and limitations:

Warnings
If you are using the auto-generated self-signed certificates, you may see the following warnings:
  • Warning 1
    If the HADOOP_CREDSTORE_PASSWORD variable is not set, the following warning may be displayed multiple times the first time you run a Flink CLI tool:
    WARNING: You have accepted the use of the default provider password
                by not configuring a password in one of the two following locations:
                       * In the environment variable HADOOP_CREDSTORE_PASSWORD
                       * In a file referred to by the configuration entry 
                         hadoop.security.credstore.java-keystore-provider.password-file.
    Please review the documentation regarding provider passwords in
    the keystore passwords section of the Credential Provider API
    Continuing with the default provider password.
    Workaround: None needed; the warning has no functional effect.
  • Warning 2
    The Job Manager may log the following warning repeatedly:
    2025-09-06 07:02:32,542 WARN  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint   [] - Unhandled exception
    org.apache.flink.shaded.netty4.io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
    Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
            at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
            at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
            at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:370) ~[?:?]
            at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
            at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:209) ~[?:?]
            at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
            at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:309) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1441) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1334) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1383) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            ... 16 more

    Because clients do not trust the default self-signed certificates, this warning can appear many times.

    Workaround: None needed; the warning has no functional effect. However, you can switch to a custom keystore to avoid it.
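For Warning 1, if you prefer to suppress the message rather than accept the default provider password, you can set the environment variable named in the warning before running the Flink CLI tools. This is a sketch; the value shown is a placeholder for the password configured for your credential store:

```shell
# Placeholder value: substitute the password configured for your credential store.
export HADOOP_CREDSTORE_PASSWORD="<your-credstore-password>"
```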

Known Issues
FLINK-10: IllegalStateException after job has been submitted
After you submit a Flink job, the following exception may be displayed:
Exception in thread "Thread-1" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
        at org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:189)
        at org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:219)
        at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2910)
        at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3185)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3144)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3116)
        at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2994)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2976)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1294)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1935)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1912)
        at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
This is also a known issue in Apache Flink, marked as "Not a Priority"; see FLINK-19916 (Hadoop3 ShutdownHookManager visit closed ClassLoader) for details.

Workaround: None needed; the exception has no functional effect. However, you can set classloader.check-leaked-classloader: false to avoid it.
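As a sketch, the property above can be set in the Flink configuration file (config.yaml in Flink 2.0 layouts, flink-conf.yaml in older ones; check which file your installation uses):

```yaml
# Disables the leaked-classloader check that raises the IllegalStateException
# at shutdown. The leak itself is harmless (see FLINK-19916).
classloader.check-leaked-classloader: false
```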

MS-1775: End offset must be exclusive
When the Kafka source is configured in .setBounded(OffsetsInitializer.latest()) mode (read a topic up to the end, then stop), the last record is lost because the end offset is treated as inclusive when it must be exclusive.
Workaround: None.
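For reference, a sketch of the affected configuration using the flink-connector-kafka DataStream API; the topic name and bootstrap server address are placeholders:

```java
// Sketch only: broker address and topic name are placeholders.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("host:9092")
        .setTopics("my-topic")
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .setStartingOffsets(OffsetsInitializer.earliest())
        // Bounded read up to the latest offset. Due to MS-1775, the last
        // record in each partition may be lost in this mode.
        .setBounded(OffsetsInitializer.latest())
        .build();
```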
MS-1776: Order in a single partitioned topic is not guaranteed with null keys
Processing order is not guaranteed, even for a topic with a single partition and null message keys.
Workaround: None.
Limitations:
  1. Only YARN is supported as a resource manager.

  2. Only DataStream API is supported for application development in DEP 10.0.0.

  3. FIPS mode is not supported by Flink in DEP 10.0.0.

  4. Exactly-once delivery guarantees are not supported in KafkaSink.
  5. FLINK-20: User ticket renewal is not supported, and the Flink cluster (a YARN application) might fail if the user ticket expires (by default, after two weeks). See Configuring MapR Ticket Expiration Time for information about configuring the expiration time.

  6. MS-1777: Some Kafka source and sink metrics in Flink are not supported.

Fixes

None