Known Issues (Release 7.9.0)

You might encounter the following known issues after upgrading to release 7.9.0. This list is current as of the release date.

IMPORTANT
The "Support notices of known issues" tool is no longer available, but you can obtain the same information by logging on to the HPE Support Center.

Where available, the workaround for an issue is also documented. HPE regularly provides maintenance releases and patches to fix issues. We recommend checking the release notes for subsequent maintenance releases to see whether any of these issues have been fixed.

HPE Ezmeral Data Fabric Streams

MS-1511
When messages are produced on a stream for the second time, CopyStream fails with the exception com.mapr.db.exceptions.DBException: flush() failed with err code = 22.
Workaround: None.

Client Libraries

MFS-20211
Users are unable to use the standalone mapr-client on Ubuntu machines.
Workaround: Install the libcurl3-gnutls package before using mapr-client on Ubuntu machines.
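For example, on an Ubuntu client node the package can typically be installed as follows (a minimal sketch; package availability depends on your Ubuntu release):
  sudo apt-get update
  sudo apt-get install -y libcurl3-gnutls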
MFS-18258
When you add a new cluster to a cluster group, the FUSE-based POSIX client and the loopbacknfs POSIX client take about five minutes to load or list the newly added cluster.
Workaround: None.

Data Fabric UI

Node Removal

DFUI-2751
The Data Fabric UI hangs when a node-removal operation is repeated by adding and removing the same node multiple times.
Workaround: Use the maprcli for this operation.
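For example, a node can be removed with the following command (a sketch; the node name is a placeholder, and the node's services should already be stopped as described in the standard node-removal procedure):
  maprcli node remove -nodes <node-name>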

Sign-in Issues

DFUI-2743
An SSO (Keycloak) user is unable to log in to the Data Fabric UI when the user is assigned a pre-defined role from the Data Fabric UI.
Workaround: Assign the pre-defined role using the Keycloak console.
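If you prefer the Keycloak Admin CLI to the console, the assignment can be scripted roughly as follows (a hedged sketch; the server URL, realm, user, and role names are placeholders, and the kcadm.sh options should be verified against your Keycloak version):
  # Authenticate the Admin CLI against the Keycloak server
  ./kcadm.sh config credentials --server https://<keycloak-host>:<port> --realm master --user admin
  # Assign the pre-defined realm role to the user
  ./kcadm.sh add-roles -r <realm> --uusername <username> --rolename <predefined-role>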
DFUI-2734
A user that is assigned a user-defined role is unable to log in to the Data Fabric UI. A 'no login permission' error is displayed when such a user attempts to log in.
Workaround: None.
DFUI-2701
Even after being assigned full control permission on all fabrics, a user is unable to log in to a non-primary fabric using the Data Fabric UI.
Workaround: None.

DFUI-160
If you sign in to the Data Fabric UI as an SSO user but you do not have fabric-level login permission, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Ezmeral Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL, and retry logging in. For example, change the following URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Try signing in as a user who has fabric-level login permission.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-437
If you sign in to the Data Fabric UI as a non-SSO user and then sign out and try to sign in as an SSO user, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Ezmeral Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL, and retry logging in. For example, change the following URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Dismiss the "Managed Control System" sign-in screen, and retry signing in as a non-SSO user.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-811
If you launch the Data Fabric UI, sign out, wait 5-10 minutes, and then attempt to sign in, a sign-in page for the "Managed Control System" (MCS) is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-826
In a cloud fabric, an empty page is displayed after a session expires and you subsequently click on a fabric name. The browser can display the following URL:
https://<hostname>:8443/oath/login
Workaround: None.
DFUI-874
Sometimes when you attempt to sign in to the Data Fabric UI, the "Managed Control System" (MCS) is displayed, or the Object Store UI is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-897
A user with no assigned role cannot sign in to the Data Fabric UI.
Workaround: Using your SSO provider software, assign a role to the user, and retry the sign-in operation.
DFUI-1123
Attempting to sign in to the Data Fabric UI as a group results in a login error message in the browser. For example:
https://<hostname>:8443/login?error
Workaround: None.

Mirroring Issues

MFS-17538
During PBS validation, a primary cluster with automatic mirroring of a PBS volume to a non-primary cluster might give the following error:
Failed to fetch fabric cluster-151-B
401 Unauthorized: "HTTP ERROR 401 JWT validation failed: null<EOL>URI: /rest/dashboard/info/<EOL>STATUS: 401<EOL>MESSAGE: JWT validation failed: null<EOL>SERVLET: mapr-apiserver<EOL>"
Workaround: Restart the API server.
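For example, the API server can be restarted with maprcli (a sketch; the node name is a placeholder):
  maprcli node services -name apiserver -action restart -nodes <apiserver-node>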
DFUI-1227
If you create a mirror volume with a security policy, an error is generated when you try to remove the security policy.
Workaround: None.
DFUI-1229
Data ACEs on a mirror volume cannot be edited.
Workaround: None.

Display Issues

DFUI-2691
Existing user-defined roles are not visible in the Data Fabric UI when you want to assign roles to users.
Workaround: Use the respective maprcli command to perform the operation.
DFUI-2708
A user that is provided admin permissions by way of IAM policy assignment is unable to log in to the Data Fabric UI.
Workaround: Use the maprcli to log in and perform any actions related to Data Fabric.
DFUI-2719
Permissions assigned to a user by way of an IAM policy are not visible in the Data Fabric UI.
Workaround: None.
DFUI-2703
A user is unable to edit cluster settings and ACLs even though the user has been granted permissions equivalent to fabric manager by way of an IAM policy with fabric management actions.
Workaround: None.
DFUI-2749
User-defined roles assigned to a user are not reflected in the Data Fabric UI.
Workaround: Use the security iam role mapping maprcli command to view the user-defined role assigned to the user.
DFUI-1186
After you complete the SSO setup for a new fabric, fabric resources such as volumes and mirrors are not immediately displayed in the Data Fabric UI.
Workaround: Wait at least 20 minutes for the Data Fabric UI to display the fabric details.
DFUI-1221
If a fabric includes a large number of resources, loading the resources to display in the Resources card on the home page can take a long time.
Workaround: None.
DFUI-2102
When you create a table replica on a primary cluster with the source table on a secondary cluster, the replication operation times out. However, the table replica is successfully created on the primary cluster. The table replica appears in the Replication tab, but does not appear in the Data Fabric UI Graph or Table view for the primary cluster.
The behavior is the same when the source table is on the primary cluster and the replica is on the secondary cluster.
Workaround: None.

External S3

MFS-20148
A Keycloak user is unable to connect to an external S3 server with the access key and secret key by using an S3 client.
Workaround: None.
DFUI-2157
Editing buckets on external S3 servers is not supported.
Workaround: None.

Installation or Fabric Creation

MFS-18972
The RHEL-based default Keycloak that is shipped with Data Fabric cannot be used to configure STS.
Workaround: Set up an external Keycloak instance to run on port 443 so that STS works.
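As an illustration only, a standalone Keycloak (Quarkus distribution) can be started on port 443 roughly as follows; the hostname and certificate paths are placeholders, binding to port 443 usually requires elevated privileges, and the options should be checked against your Keycloak version:
  bin/kc.sh start --hostname=<keycloak-host> --https-port=443 \
    --https-certificate-file=/path/to/server.crt \
    --https-certificate-key-file=/path/to/server.key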

MFS-18734
Release 7.7.0 of the HPE Ezmeral Data Fabric has a dependency on the libssl1.1 package, which is not included in Ubuntu 22.04. As a result, you must apply the package manually to Ubuntu 22.04 nodes before installing Data Fabric software.
Workaround: On every node in the fabric or cluster:
NOTE
The following steps are required for cluster nodes but are not required for client nodes.
  1. Download the libssl1.1 package:
    wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb
  2. Use the following command to install the package:
    sudo dpkg -i libssl1.1_1.1.0g-2ubuntu4_amd64.deb
IN-3482
Fabric creation can fail if host-name resolution takes more than 300 ms.
Workaround: Check your host-name resolution time, and take steps to improve it. See Troubleshoot Fabric Creation. Then retry fabric deployment.
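One quick way to gauge host-name resolution time on a node is to time a lookup (a minimal sketch; the host name is a placeholder, and dig measures only DNS resolution, not /etc/hosts lookups):
  time getent hosts <fabric-host-name>
  dig <fabric-host-name> | grep "Query time"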
DFUI-565, EZINDFAAS-169
Installation or fabric creation can fail if a proxy is used for internet traffic with the HPE Ezmeral Data Fabric.
Workaround: Export the following proxy settings, and retry the operation:
# cat /etc/environment
export http_proxy=http://<proxy_server_hostname_or_IP>:<proxy_port>
export https_proxy=http://<proxy_server_hostname_or_IP>:<proxy_port>
export HTTP_PROXY=http://<proxy_server_hostname_or_IP>:<proxy_port>
export HTTPS_PROXY=http://<proxy_server_hostname_or_IP>:<proxy_port>

Object Store

DFUI-519
An SSO user is unable to create buckets on the Data Fabric UI and the Object Store. This applies to an SSO user with any role, such as infrastructure administrator, fabric manager, or developer.
Workaround: Create an IAM policy with all permissions in the user account. You must do this by using the MinIO Client or the Object Store UI. Assign the IAM policy to the SSO user. Then log in to the Data Fabric UI to create or view buckets.
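For illustration, the policy can be created and assigned with MinIO Client commands along these lines (a hedged sketch; the alias, policy, user, and file names are placeholders, and the mc admin policy subcommands vary by mc version):
  # allperms.json grants all S3 actions, for example:
  # {"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:*"],"Resource":["arn:aws:s3:::*"]}]}
  mc admin policy add <alias> allperms allperms.json
  mc admin policy set <alias> allperms user=<sso-user>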
DFUI-577
Downloading a large file (1 GB or larger) can fail with the following error:
Unable to download file "<filename>": Request failed with status code 500
Workaround: Instead of using the Data Fabric UI to download a large file, use a MinIO Client (mc) command. For more information about mc commands, see MinIO Client (mc) Commands.
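For example (the alias, bucket, and object names are placeholders):
  mc cp <alias>/<bucket>/<large-file> /local/path/<large-file>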

Online Help

DFUI-459
If a proxy is used for internet traffic with the HPE Ezmeral Data Fabric, online help screens can time out or fail to fetch help content.
Workaround: Add the following proxy servers to the /opt/mapr/apiserver/conf/properties.cfg file:
  • http.proxy=<proxyServer>:<proxyPort>
  • https.proxy=<proxyServer>:<proxyPort>

Security Policies

DFUI-2736
A fabric user is able to create a security policy from the command line but is unable to create a security policy from the Data Fabric UI.
Workaround: None.
MFS-18154/EZINDFAAS-674
A security policy created on a cloud-based primary fabric (such as AWS) is not replicated on to a secondary fabric created on another cloud provider (such as GCP).
Workaround: None.

Topics

DFUI-637
A non-LDAP SSO user authenticating to Keycloak cannot create a topic in the Data Fabric UI.
Workaround: None.
DFUI-639
A non-LDAP SSO user authenticating to Keycloak cannot create a volume or stream using the Data Fabric UI.
Workaround: None. Non-LDAP and SSO local users are not currently supported.

Upgrade

COMSECURE-615
Upgrading directly from release 6.1.x to release 7.x.x can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.x.x are not affected by this issue.
The issue does not occur, and the upgrade succeeds, if either of the following conditions is true:
  • The existing password is mapr123 (the default value) when the EEP upgrade is initiated.
  • You upgrade the cluster first to release 6.2.0 and then subsequently to release 7.x.x.
Understanding the Upgrade Process and Workaround: The workaround in this section modifies the release 6.1.x-to-7.x.x upgrade so that it works like the 6.2.0-to-7.x.x upgrade.
Upgrading to core 7.x.x requires installing the mapr-hadoop-util package. Before the upgrade, Hadoop files are stored in a subdirectory such as hadoop-2.7.0. Installation of the mapr-hadoop-util package:
  • Creates a subdirectory to preserve the original .xml files. This subdirectory has the same name as the original Hadoop directory and a timestamp suffix (for example, hadoop-2.7.0.20210324131839.GA).
  • Creates a subdirectory for the new Hadoop version (hadoop-2.7.6).
  • Deletes the original hadoop-2.7.0 directory.
During the upgrade, a special file called /opt/mapr/hadoop/prior_hadoop_dir needs to be created to store the location of the prior Hadoop directory. The configure.sh script uses this location to copy the ssl-server.xml and ssl-client.xml files to the new hadoop-2.7.6 subdirectory.
In a release 6.1.x-to-7.x.x upgrade, the prior_hadoop_dir file does not get created, and configure.sh uses the default ssl-server.xml and ssl-client.xml files provided with Hadoop 2.7.6. In this scenario, any customization in the original .xml files is not applied.
The following workaround restores the missing prior_hadoop_dir file. With the file restored, configure.sh -R consumes the prior_hadoop_dir file and copies the original ssl-server.xml and ssl-client.xml files into the hadoop-2.7.6 directory, replacing the files that contain the default mapr123 password.
Workaround: After upgrading the ecosystem packages, but before running configure.sh -R:
  1. Create a file named prior_hadoop_dir that contains the Hadoop directory path. For example:
    # cat /opt/mapr/hadoop/prior_hadoop_dir
    /opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA
    If multiple directories are present, specify the directory with the most recent timestamp.
  2. Run the configure.sh -R command as instructed to complete the EEP upgrade.
EZINDFAAS-811
Upgrading from release 7.6.1 to 7.7.0 fails if you initiate the upgrade from a Data Fabric UI URL that is not the URL provided by the seed node when you created the fabric. The seed node indicates the API server node that is the primary installer host.
Workaround: Use either of the following workarounds:
  • Initiate the upgrade from the Data Fabric UI URL provided by the seed node when the fabric was created. This URL uses the API server node with the running installer service.
  • If you must use an API server node other than the primary installer host:
    1. Copy the .pem file from the /infrastructure/terraform/ directory of the primary installer host to the /tmp directory of the secondary installer host where you want to initiate the upgrade (see the example command after these steps).
    2. Restart the installer service on the secondary installer host:
      sudo service mapr-installer restart
    3. Initiate the upgrade as described in Upgrading a Data Fabric.
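    For example, the copy in step 1 might look like the following (a sketch; the key-file name and host names are placeholders):
      scp /infrastructure/terraform/<key-file>.pem <user>@<secondary-installer-host>:/tmp/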
MFS-17624
An upgrade from release 7.5.0 or earlier to 7.6.0 or later can terminate with a fatal error detected by the Java Runtime Environment.
Workaround: None.
OTSDB-147
After upgrading OpenTSDB from version 2.4.0 to version 2.4.1, the Crontab on each OpenTSDB node is not updated and continues to point to the previous OpenTSDB version.
Workaround: To fix the Crontab, run the following commands on each OpenTSDB node, replacing $MAPR_USER with the name of the cluster admin (typically mapr):
  • RHEL
    export CRONTAB="/var/spool/cron/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • SLES
    export CRONTAB="/var/spool/cron/tabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • Ubuntu
    export CRONTAB="/var/spool/cron/crontabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
DFUI-2163
SSO authentication is not enabled for the Data Fabric UI after upgrading from HPE Ezmeral Data Fabric release 7.5 to release 7.6.
Workaround: Restart the API server after upgrade.

Volumes

DFUI-638
A non-LDAP SSO user authenticating to Keycloak cannot create a volume in the Data Fabric UI.
Workaround: Create the volume by using the Data Fabric MinIO client.