Flink 2.0.0.100 - 2604 (DEP 10.1.0) Release Notes

Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. For more information about the Data Fabric implementation of Flink, see Apache Flink.

The notes below relate specifically to the Data Fabric distribution of Apache Flink. You may also be interested in the Apache Flink home page.

These release notes contain only HPE-specific information and are not necessarily cumulative in nature. For information about how to use the release notes, see Ecosystem Component Release Notes.

Version 2.0.0.100
Release Date April 2026
HPE Version Interoperability See Ecosystem Pack Components and OS Support
GitHub Source
GitHub Release Tag 2.0.0.100-dep-1010
Maven Artifacts https://repository.mapr.com/maven/
Package Names Navigate to https://package.mapr.hpe.com/releases/MEP/, and select your EEP (MEP) or DEP and OS to view the list of package names.

New in this Release

  • FIPS support.
  • SQL & Iceberg integration support.
  • Bug fixes.
  • CVE fixes.

Fixes

This HPE release includes the following fixes on the base release:

GitHub Commit Number Date (YYYY-MM-DD) HPE Fix Number and Description
54c401d8ae2 2026-03-03 FLINK-46 Upgrade monaco-editor to 0.55.1 to get rid of DOMPurify CVEs
7442893e2ca 2026-03-02 [FLINK-39022][security] Set security.ssl.algorithms default value to modern cipher suite
f133f80b3b2 2026-02-26 FLINK-43 CVE-2025-68161 in Flink
bcb8a5aa673 2026-01-08 FLINK-41 Include iceberg-flink-runtime in mapr-flink package
20b47be00c2 2025-12-22 FLINK-38 Use DF ticket as user name source when generating default TLS keystores
7d346d9afd2 2025-12-21 FLINK-37 Fix errors logged in scripts when running in a non-Data-Fabric environment
0b489c200bd 2025-12-20 FLINK-36 DefaultTLSConfigurer is not applied in Standalone cluster entrypoint
bb89f551551 2025-12-20 FLINK-35 cat: /proc/sys/crypto/fips_enabled: No such file or directory - on non-FIPS Ubuntu 20.04
6d473b44870 2025-11-18 FLINK-24 FIPS support in Flink
90b893e9078 2025-11-18 FLINK-34 Iceberg source & sink in SQL

Known Issues and Limitations

Flink in DEP 10.1.0 has the following known issues and limitations:

Warnings
If you are using the auto-generated self-signed certificates, you may see the following warnings:
  • Warning 1
    If the HADOOP_CREDSTORE_PASSWORD variable is not set, the following warning may be displayed many times when you run a Flink CLI tool for the first time:
    WARNING: You have accepted the use of the default provider password
                by not configuring a password in one of the two following locations:
                       * In the environment variable HADOOP_CREDSTORE_PASSWORD
                       * In a file referred to by the configuration entry 
                         hadoop.security.credstore.java-keystore-provider.password-file.
    Please review the documentation regarding provider passwords in
    the keystore passwords section of the Credential Provider API
    Continuing with the default provider password.
    Workaround: None needed, as the warning has no functional effect.
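    Although no workaround is required, the warning can be avoided by setting the environment variable it refers to before invoking the CLI. A minimal sketch, assuming any non-empty value is acceptable (the value below is a placeholder, not a default):

    ```shell
    # Export a credential store password so the Hadoop credential provider does
    # not fall back to its default password (which is what triggers the warning).
    # 'example-password' is a placeholder; choose your own value.
    export HADOOP_CREDSTORE_PASSWORD='example-password'

    # Any Flink CLI invocation in the same shell session now inherits the
    # variable, for example: flink run /path/to/job.jar
    echo "HADOOP_CREDSTORE_PASSWORD is ${HADOOP_CREDSTORE_PASSWORD:+set}"
    ```

    To make the setting persistent, add the export line to the profile of the user that runs the Flink CLI tools.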
  • Warning 2
    The Job Manager may log the following warning many times:
    2025-09-06 07:02:32,542 WARN  org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint   [] - Unhandled exception
    org.apache.flink.shaded.netty4.io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
    Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown
            at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
            at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
            at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:370) ~[?:?]
            at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
            at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:209) ~[?:?]
            at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
            at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
            at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:309) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1441) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1334) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1383) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468) ~[flink-dist-2.0.0.0-dep-1000-SNAPSHOT.jar:2.0.0.0-dep-1000-SNAPSHOT]
            ... 16 more

    Because clients do not trust the default self-signed certificates, this warning may appear many times.

    Workaround: None needed, as there is no functional effect. However, you can switch to a custom keystore to avoid this warning.
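    A custom keystore for the REST endpoint can be configured in the Flink configuration file. The sketch below uses Flink's standard `security.ssl.rest.*` options; the paths and password placeholders are illustrative, not defaults:

    ```yaml
    # Point the Flink REST endpoint at a custom keystore/truststore instead of
    # the auto-generated self-signed certificates.
    security.ssl.rest.enabled: true
    security.ssl.rest.keystore: /path/to/rest.keystore
    security.ssl.rest.keystore-password: <keystore-password>
    security.ssl.rest.key-password: <key-password>
    security.ssl.rest.truststore: /path/to/rest.truststore
    security.ssl.rest.truststore-password: <truststore-password>
    ```

    The certificate in the keystore must be trusted by the clients (for example, signed by an internal CA whose certificate is in the client truststore) for the handshake warning to stop.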

Known issues
FLINK-10: IllegalStateException after job has been submitted
Upon submitting a Flink job, the following exception may be displayed:
Exception in thread "Thread-1" java.lang.IllegalStateException: Trying to access closed classloader. Please check if you store classloaders directly or indirectly in static fields. If the stacktrace suggests that the leak occurs in a third party library and cannot be fixed immediately, you can disable this check with the configuration 'classloader.check-leaked-classloader'.
        at org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.ensureInner(FlinkUserCodeClassLoaders.java:189)
        at org.apache.flink.util.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.getResource(FlinkUserCodeClassLoaders.java:219)
        at org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2910)
        at org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3185)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3144)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:3116)
        at org.apache.hadoop.conf.Configuration.loadProps(Configuration.java:2994)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2976)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1294)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1935)
        at org.apache.hadoop.conf.Configuration.getTimeDuration(Configuration.java:1912)
        at org.apache.hadoop.util.ShutdownHookManager.getShutdownTimeout(ShutdownHookManager.java:183)
This is also a known issue in Apache Flink, marked as Not a Priority; see FLINK-19916 (Hadoop3 ShutdownHookManager visit closed ClassLoader) for details.

Workaround: None needed, as there is no functional effect. However, you can set classloader.check-leaked-classloader: false to avoid this exception.
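In the Flink configuration file, the option the exception message itself suggests looks like this:

```yaml
# Disable the leaked-classloader safety check that raises this
# IllegalStateException during shutdown.
classloader.check-leaked-classloader: false
```

Note that this disables the leak check cluster-wide; the check exists to surface genuine classloader leaks in user code, so re-enable it if you are debugging memory growth.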

MS-1775: End offset must be exclusive
When the Kafka source is configured in .setBounded(OffsetsInitializer.latest()) mode (read a topic up to the end, then stop), the last record is lost due to inclusive end offsets (they must be exclusive).
Workaround: None.
MS-1776: Order in a single partitioned topic is not guaranteed with null keys
Processing order is not guaranteed, even for a topic with a single partition and null message keys.
Workaround: None.
Limitations
  1. Only YARN is supported as a resource manager.
  2. Exactly-once delivery guarantees are not supported in KafkaSink.
  3. FLINK-20: No user ticket renewal is supported, and the Flink cluster (a YARN application) might fail if the user ticket expires (by default, after 2 weeks). See Configuring MapR Ticket Expiration Time for information on how to configure the expiration time.
  4. MS-1777: Some Kafka source and sink metrics in Flink are not supported.