Operational Changes (Release 7.9)
Lists the functional changes made to existing commands in HPE Ezmeral Data Fabric release 7.9.0.
Changes to librdkafka Version
Data Fabric 7.9 supports librdkafka version 2.0.2. This version is not available for Windows. The librdkafka 2.0.2 library in core 7.9 is not compatible with HPE Ezmeral Data Fabric Stream clients for Python or C# applications. Nodes running Python or C# applications should not be upgraded to HPE Ezmeral Data Fabric 7.9.
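If you are unsure which librdkafka version is present on a node, a quick check such as the following can help before you upgrade. This is a minimal sketch: the mapr-librdkafka package name and the /opt/mapr/lib path are assumptions and may differ in your installation.

    # Check the installed librdkafka package version (package name assumed)
    rpm -q mapr-librdkafka
    # Or inspect the shared library under the Data Fabric install tree (path assumed)
    ls -l /opt/mapr/lib/librdkafka.so*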
Insight Gathering Changes
Iceberg tables created in Data Fabric 7.8 are incompatible with Iceberg tables in Data Fabric 7.9. All insight data from the Data Fabric 7.8 Iceberg tables is unavailable when you upgrade to Data Fabric 7.9 or later.
Repository Changes
Recent changes to the download repository for HPE Ezmeral Data Fabric core and ecosystem packages might affect your ability to install or upgrade software. For more information, see What's New in Release 7.9.
New Key for Signature Verification for Data Fabric Files
A new key is used for signature verification of the .rpm, .tar.gz, .zip, and .tgz files for the following Data Fabric products:
- HPE Ezmeral Data Fabric core 7.6.1 and later
- HPE Ezmeral Data Fabric clients
- HPE Ezmeral Ecosystem Pack (EEP) 9.2.1 and later
- Installer 1.18.0.5 and later
For more information, see HPE GPG Public Keys for GPG or RPM Signature Verification.
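As a sketch of how GPG and RPM signature verification typically works, the commands below import a public key and then check a downloaded file. The key file name and package names are placeholders, not actual Data Fabric artifact names; obtain the real keys from the linked page.

    # Import the HPE GPG public key into the RPM database (file name is a placeholder)
    rpm --import ./hpe-gpg-public.key
    # Verify the signature and digests of a downloaded package
    rpm -K mapr-core-7.9.0.x.rpm
    # For .tar.gz, .tgz, or .zip files with a detached signature, gpg verifies directly
    gpg --import ./hpe-gpg-public.key
    gpg --verify mapr-client.tar.gz.sig mapr-client.tar.gz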
32-GB Minimum Memory Requirement for Production Nodes
Minimum memory requirements for production nodes changed for releases 7.0.0 and later. Production nodes require at least 32 GB of memory per node. For more information, see Memory and Disk Space.
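To confirm that a node meets the 32-GB minimum before an installation or upgrade, a simple check is sufficient:

    # Report total memory in gigabytes; production nodes require at least 32 GB
    free -g | awk '/^Mem:/ {print $2 " GB total"}'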
Nonsecure Configurations
Beginning with release 7.3.0, the configure.sh script no longer supports the -unsecure parameter. By default, configure.sh implements the -S or -secure parameter even if the parameter is not specified.
This change builds on security enhancements introduced in earlier releases. Installations of releases 7.0.0 and later are secure by default. In addition, Installer 1.18 automatically configures a secure cluster and does not provide an option to configure a nonsecure cluster. Nonsecure installations have not been validated for use with releases 7.0.0 or later.
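As an illustration, a secure-by-default invocation of configure.sh now looks like the following sketch. The host names and cluster name are placeholders, and because -secure is the default, the flag shown here is optional.

    # Configure a node for a secure cluster (host and cluster names are placeholders)
    # -genkeys is typically used only on the first CLDB node when keys do not yet exist
    /opt/mapr/server/configure.sh -C cldb-node1 -Z zk-node1 -N my.cluster.com -secure -genkeys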
Using Custom Certificates with Object Store
Default installations of the HPE Ezmeral Data Fabric use encrypted, self-signed certificates to enable SSL communication. If your environment does not permit self-signed certificates, or if you prefer not to use the default certificates, Data Fabric supports generating your own. See Using Custom Signed Certificates with Object Store.
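Replacing the default certificates generally involves importing your CA-signed certificate and chain into the cluster key store, for example with standard Java keytool commands as in the sketch below. The alias, file name, and store path are assumptions for illustration; follow the linked procedure for the supported steps.

    # Import a CA-signed certificate chain into the cluster key store (paths and alias assumed)
    keytool -importcert -trustcacerts -alias mycluster \
        -file ./my-signed-cert-chain.pem \
        -keystore /opt/mapr/conf/ssl_keystore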
Key Store and Trust Store Changes in Release 7.0.0 and Later
Changes to Cross-Cluster Configuration
- Running configure-crosscluster.sh in releases 7.0.0 and later requires you to specify two additional parameters: localtruststorepassword and remotetruststorepassword (see the sketch below).
- The configure-crosscluster.sh script now returns an error if it is run by a user other than the cluster owner. For example, you cannot run the script as the root user.
- In releases 7.0.0 and later, cross-cluster configuration using the basic configure-crosscluster.sh script options is supported if nodes in the local and remote clusters are either all non-FIPS nodes or all FIPS nodes. See Configuring Cross-Cluster Security for a Mixed (FIPS and Non-FIPS) Configuration for the manual steps to configure mixed clusters consisting of FIPS and non-FIPS nodes using the -localhosts and -remotehosts options.
For more information, see configure-crosscluster.sh.
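The following sketch shows what an invocation with the two new parameters might look like. The subcommand, user names, and host name are placeholders based on typical usage, not a definitive command line; see configure-crosscluster.sh for the supported syntax.

    # Cross-cluster setup now requires both trust store passwords (all values are placeholders)
    /opt/mapr/server/configure-crosscluster.sh create all \
        -localcrossclusteruser mapruser1 -remotecrossclusteruser mapruser2 \
        -remoteip remote-node1.example.com \
        -localtruststorepassword <local-pwd> -remotetruststorepassword <remote-pwd>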
About ssl-server.xml and ssl-client.xml in Releases 7.0.0 and Later
The Hadoop configuration files (ssl-server.xml and ssl-client.xml) contain SSL configuration information for the client and server in XML format. This section describes some changes in the use of these files in releases 7.0.0 and later.
Clear-Text Passwords Are Removed from ssl-server.xml and ssl-client.xml
In release 6.2.0 and earlier releases of the HPE Ezmeral Data Fabric, key and trust store passwords are stored in clear text in the ssl-server.xml and ssl-client.xml configuration files, and the passwords are the same for both key and trust stores. Beginning with release 7.0.0, clear-text passwords are removed from the Hadoop ssl-server.xml and ssl-client.xml configuration files, and distinct passwords are generated: one for the key store and one for the trust store. See Key and Trust Store Password Protection.
For Java applications, key and trust store passwords are now protected in credential stores accessible through the Hadoop Credential Provider API. For non-Java applications, key store passwords are stored in maprkeycreds.conf, and trust store passwords are stored in maprtrustcreds.conf. See Application Development with Encrypted Key and Trust Stores.
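For Java applications, the Hadoop credential CLI can be used to inspect a credential store, as in the sketch below. The provider URI and file name are assumptions for illustration; the actual store locations are described in the linked topic.

    # List the aliases held in a Hadoop credential store (provider URI is a placeholder)
    hadoop credential list -provider localjceks://file/opt/mapr/conf/maprkeycreds.jceks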
For information about what happens to the clear-text passwords during an upgrade, see the Upgrade Notes (Release 7.9) and Removing Clear-Text Passwords After Upgrade.
Do Not Copy These Files When FIPS-Enabled Nodes Are Present
- ssl-client.xml
- ssl-server.xml
- ssl_keystore (symlink)
- ssl_truststore (symlink)
- ssl_userkeystore (symlink)
- ssl_usertruststore (symlink)
In releases 7.0.0 and later, it is a best practice to avoid copying the ssl-client.xml and ssl-server.xml files regardless of the FIPS configuration. In particular, when adding a non-FIPS node to a FIPS cluster, you must not copy the Hadoop ssl*.xml files to the other nodes in the cluster.
To determine whether a node is FIPS enabled, manageSSLKeys.sh reads the trust store type from ssl-client.xml when running in standalone mode instead of indirectly through configure.sh. Copying the Hadoop ssl*.xml files that are set to the BCFKS store type from a FIPS to a non-FIPS node then causes commands such as manageSSLKeys.sh convert to fail.
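Before copying any Hadoop SSL files, you can check which store type a node's ssl-client.xml declares. The property name shown is the standard Hadoop one; the file path assumes a default Data Fabric installation.

    # A BCFKS trust store type indicates a FIPS-enabled node (path assumed)
    grep -A1 'ssl.client.truststore.type' /opt/mapr/conf/ssl-client.xml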
In a FIPS-enabled node, symlink files are provided for the .bcfks versions of the key store, user key store, trust store, and user trust store. These files must not be copied. Copying the files can result in errors later when you run configure.sh.
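To identify which of these files are symlinks before doing any copying, a simple listing works. The conf directory path assumes a default installation.

    # Symlinked stores point at the .bcfks versions on FIPS-enabled nodes (path assumed)
    ls -l /opt/mapr/conf/ssl_keystore /opt/mapr/conf/ssl_truststore \
          /opt/mapr/conf/ssl_userkeystore /opt/mapr/conf/ssl_usertruststore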
For more information about the files to copy when you enable security, see Enabling Security on a Configured Cluster.