Configuring Flink
Describes how to configure Flink.
Flink configuration files are located in /opt/mapr/flink/flink-<version>/conf. This path can be changed with the FLINK_CONF_DIR environment variable. See:
- Configuration for details on the configuration directory in general; the main configuration file is config.yaml.
- How to use logging for details on the different logging files.
Default Configuration
- TLS/SSL is enabled by default. See Security in Flink for details.
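Because TLS/SSL is enabled by default, you normally do not set these options yourself; the sketch below only shows which standard Flink switches govern that behavior in config.yaml, in case you need to confirm or adjust it. Whether your distribution manages these values for you depends on your environment:

```yaml
# Illustrative only: Flink's SSL toggles for internal and REST endpoints.
# TLS/SSL is enabled by default on this distribution, so these are shown
# for reference rather than as settings you must add.
security:
  ssl:
    internal:
      enabled: true   # TLS for internal connectivity (RPC, blob service)
    rest:
      enabled: true   # TLS for the REST endpoints and web UI
```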
Using MapR-FS as checkpoint storage
To use MapR-FS as checkpoint storage, specify a maprfs:// URL in execution.checkpointing.dir as follows:

  execution:
    checkpointing:
      dir: maprfs:///user/<username>/flink/checkpoints

Integrating Flink with ZooKeeper for High Availability
To integrate Flink with ZooKeeper for high availability, specify high-availability.zookeeper.quorum as well as a maprfs:// URL in high-availability.storageDir as follows:

  high-availability:
    storageDir: maprfs:///user/mapruser1/flink/ha
    zookeeper:
      quorum: <host name>:5181

Configuring Flink for use with Kerberos
To run Flink with Kerberos on DEP, you must add -Dhadoop.login=hybrid to the env.java.opts.all setting in config.yaml, configure the security.kerberos.login section with the correct keytab, principal, and JAAS contexts, restart Flink or start a Flink YARN session in detached mode, confirm in the logs that Kerberos authentication has succeeded, and then verify that your jobs are producing data correctly by consuming from Kerberos-protected Kafka topics with Kerberos-enabled clients.
- Modify the Flink config.yaml file to add -Dhadoop.login=hybrid to env.java.opts.all:

  env:
    java:
      opts:
        all: --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED -Dhadoop.login=hybrid
- In the same config.yaml file, uncomment and modify the Kerberos properties as appropriate for your environment:

  security:
    kerberos:
      login:
        use-ticket-cache: true
        keytab: /home/mapruser1/mapruser1.keytab
        principal: mapruser1/node1.cluster.com@NODE1
        # The configuration below defines which JAAS login contexts to use
        contexts: Client,KafkaClient

  - use-ticket-cache: Set to true to use the existing Kerberos ticket cache (if available).
  - keytab: Path to the keytab file for the service/user that will run Flink.
  - principal: Kerberos principal associated with the keytab.
  - contexts: JAAS login contexts that should use this configuration. In this example, both the generic Client and KafkaClient contexts are enabled.
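As a variant of the keytab-based setup above (an illustration, not a step from this procedure): if the user running Flink already holds a valid ticket obtained with kinit, the ticket cache alone may suffice, in which case the keytab and principal entries can be left unset. A minimal sketch, assuming a valid ticket cache is present:

```yaml
# Illustrative variant: authenticate from the existing Kerberos ticket
# cache (populated by kinit) instead of a keytab. The keytab and
# principal entries from the example above are intentionally omitted.
security:
  kerberos:
    login:
      use-ticket-cache: true
      contexts: Client,KafkaClient
```

Note that tickets from a cache expire, so keytab-based login is generally the more robust choice for long-running jobs.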