Known Issues (Release 8.0)

You might encounter the following known issues after upgrading to release 8.0. This list is current as of the release date.

IMPORTANT
The "Support notices of known issues" tool is no longer available, but you can obtain the same information by logging on to the HPE Support Center. See Support Articles in the HPE Support Center.

Where available, the workaround for an issue is also documented. HPE regularly issues maintenance releases and patches to fix issues. We recommend checking the release notes for subsequent maintenance releases to see whether any of these issues have been fixed.

HPE Data Fabric Streams

MS-1511
When messages are produced for the second time on a stream, CopyStream fails with an exception com.mapr.db.exceptions.DBException: flush() failed with err code = 22.
Workaround: None.

Client Libraries

MFS-18258

When you add a new cluster to a cluster group, the FUSE-based POSIX client and the loopbacknfs POSIX client take about five minutes to load or list the newly added cluster.

Workaround: None.

MFS-21119
Mac client 7.10: The Mac client configuration script fails to handle the CLDB hostname.

Workaround: None.

Data Fabric UI

Sign-in Issues

DFUI-437
If you sign in to the Data Fabric UI as a non-SSO user and then sign out and try to sign in as an SSO user, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL, and retry logging in. For example, change the path in the following URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Dismiss the "Managed Control System" sign-in screen, and retry signing in as a non-SSO user.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-811
If you launch the Data Fabric UI, sign out, wait 5-10 minutes, and then attempt to sign in, a sign-in page for the "Managed Control System" (MCS) is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-826
In a cloud fabric, an empty page is displayed after a session expires and you subsequently click on a fabric name. The browser can display the following URL:
https://<hostname>:8443/oath/login
Workaround: None.
DFUI-874
Sometimes when you attempt to sign in to the Data Fabric UI, the "Managed Control System" (MCS) is displayed, or the Object Store UI is displayed.
Workaround: See the workaround for DFUI-437.

Display Issues

DFUI-2749
Custom roles assigned to Keycloak users or groups are not visible on the Data Fabric UI.
Workaround: On the Data Fabric UI, navigate to Security administration > Identity Management. Click the Roles tab. For the custom role in the list of available custom roles, click Actions > View details. You should see the list of users and groups that have been assigned the custom role.
DFUI-1221
If a fabric includes a large number of resources, loading the resources to display in the Resources card on the home page can take a long time.
Workaround: None.
DFUI-2102
When you create a table replica on a primary cluster with the source table on a secondary cluster, the replication operation times out. However, the table replica is successfully created on the primary cluster. The table replica appears in the Replication tab, but does not appear in the Data Fabric UI Graph or Table view for the primary cluster.

This behavior is the same for both a source table on the primary cluster and the replica on the secondary cluster.

Workaround: None.
DFUI-2099
When you delete a table replica from the Data Fabric UI Home page, the table replica remains listed in the Replication tab. When you select the table on the Replication tab, a message returns stating that the requested file does not exist.
Workaround: None.

Installation or Fabric Creation

IN-3655
When libssl openssl 1.1.1 is missing on Ubuntu 22, the installer fails to install Hue and the collectd service does not start.
Workaround: Add the focal-security repository for Ubuntu 22.04 or later to obtain libssl 1.1.1:
echo "deb http://security.ubuntu.com/ubuntu focal-security main" | sudo tee /etc/apt/sources.list.d/focal-security.list
sudo apt update
IN-3592

Installing a patch via mapr-installer-cli with the patch_location option fails.

Workaround: If you are using a stanza for patch installation, use -o environment.patch_version instead of -o environment.patch_location.
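
For example, a hypothetical stanza invocation (the stanza path and patch version string are placeholders; substitute your own values):

```shell
# Placeholder example; substitute your stanza file and the actual patch version.
/opt/mapr/installer/bin/mapr-installer-cli install -n \
  -t /path/to/stanza.yml \
  -o environment.patch_version=<patch-version>
```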

IN-3613

Patch install on the fabric removes Keycloak and data-access-gateway on the cluster.

Recommendation:

If a fabric has been installed using the Data Fabric UI, upgrade the fabric or apply a patch to the fabric by using the Data Fabric UI only. If a fabric has been installed using the core-installer, upgrade the fabric or apply a patch to the fabric using the core-installer only.

Workaround: Run an incremental fabric install via mapr-installer-cli to restore the Keycloak service on the fabric, using the following steps:
  1. Unmount the /opt/mapr/keycloak-ha/data folder, and remove the keycloak-ha folder.
  2. Check that basic-stanza.yml specifies the corresponding core/MEP version, and run:
     /opt/mapr/installer/bin/mapr-installer-cli install -n --force -t /opt/mapr/installer/ezndfaas/src/stanza/basic-stanza.yml -u :@:9443 -o config.cluster_name='NAME' -o config.hosts='["HOST1", "HOST2", ...]' -o config.ssh_method=PASSWORD -o config.ssh_id='root' -o config.ssh_password='SSH_PASSWORD' -o config.cluster_admin_id='' -o config.cluster_admin_password='' -o config.cluster_admin_group='' -o config.sso_keycloak='true'
  3. Restart warden on the Keycloak node, and wait for node services to come up.
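
Step 1 above can be sketched as follows; run it as root on the Keycloak node. The paths come from the workaround text, and the unmount error is ignored in case the folder is not currently mounted:

```shell
# Step 1 sketch: unmount the Keycloak data folder and remove keycloak-ha.
KEYCLOAK_HOME=/opt/mapr/keycloak-ha
umount "$KEYCLOAK_HOME/data" 2>/dev/null || true   # ignore error if not mounted
rm -rf "$KEYCLOAK_HOME"
```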
MFS-18734
Release 7.7.0 of the HPE Data Fabric has a dependency on the libssl1.1 package, which is not included in Ubuntu 22.04. As a result, you must apply the package manually to Ubuntu 22.04 nodes before installing Data Fabric software.
Workaround: On every node in the fabric or cluster:
NOTE
The following steps are required for cluster nodes but are not required for client nodes.
  1. Download the libssl1.1 package:
    wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb
  2. Use the following command to install the package:
    sudo dpkg -i libssl1.1_1.1.0g-2ubuntu4_amd64.deb

Object Store

MFS-22003
Even though a jumbo object repair succeeded in a previous run of the gfsck utility, subsequent runs of gfsck on the same jumbo object return an error stating that the jumbo object repair has failed.
Workaround: None.
MFS-22001
Reconstruction operations on bucket volumes using gfsck do not repair corrupted data for certain objects.
Workaround: None.
DFUI-519
An SSO user is unable to create buckets in the Data Fabric UI and the Object Store. This applies to an SSO user with any role, such as infrastructure administrator, fabric manager, or developer.
Workaround: Create an IAM policy with all permissions in the user account. This must be done via the minIO client or the Object Store UI. Assign the IAM policy to the SSO user. Then log in to the Data Fabric UI and create or view a bucket.
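
The workaround can be sketched with the MinIO client (mc). The alias (myfabric), policy name (allperms), and user name (ssouser) are placeholders, and newer mc releases use policy create/attach where older releases use policy add/set:

```shell
# Hypothetical sketch: grant an SSO user full S3 permissions via an IAM policy.
# "myfabric" (mc alias), "allperms", and "ssouser" are placeholder names.
cat > allperms.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:*"], "Resource": ["arn:aws:s3:::*"] }
  ]
}
EOF

# Apply only if the MinIO client is installed on this machine.
if command -v mc >/dev/null 2>&1; then
  mc admin policy create myfabric allperms allperms.json  # older mc: "policy add"
  mc admin policy attach myfabric allperms --user ssouser # older mc: "policy set ... user=ssouser"
fi
```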

Security Policies

MFS-18154
A security policy created on a cloud-based primary fabric (such as AWS) is not replicated on to a secondary fabric created on another cloud provider (such as GCP).
Workaround: None.

Topics

DFUI-637
A non-LDAP SSO user authenticating to Keycloak cannot create a topic on the Data Fabric UI.
Workaround: None.
DFUI-639
A non-LDAP SSO user authenticating to Keycloak cannot create a volume or stream using the Data Fabric UI.
Workaround: None. Non-LDAP and SSO local users are not currently supported.

Upgrade

COMSECURE-615
Upgrading directly from release 6.1.x to release 7.x.x can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.x.x are not affected by this issue.
The issue does not occur, and the upgrade succeeds, if either of the following conditions is true:
  • The existing password is mapr123 (the default value) when the EEP upgrade is initiated.
  • You upgrade the cluster first to release 6.2.0 and then subsequently to release 7.x.x.
Understanding the Upgrade Process and Workaround: The workaround in this section modifies the release 6.1.x-to-7.x.x upgrade so that it works like the 6.2.0-to-7.x.x upgrade.
Upgrading to core 7.x.x requires installing the mapr-hadoop-util package. Before the upgrade, Hadoop files are stored in a subdirectory such as hadoop-2.7.0. Installation of the mapr-hadoop-util package:
  • Creates a subdirectory to preserve the original .xml files. This subdirectory has the same name as the original Hadoop directory and a timestamp suffix (for example, hadoop-2.7.0.20210324131839.GA).
  • Creates a subdirectory for the new Hadoop version (hadoop-2.7.6).
  • Deletes the original hadoop-2.7.0 directory.
During the upgrade, a special file called /opt/mapr/hadoop/prior_hadoop_dir is created to store the location of the prior Hadoop directory. The configure.sh script uses this location to copy the ssl-server.xml and ssl-client.xml files to the new hadoop-2.7.6 subdirectory.
In a release 6.1.x-to-7.x.x upgrade, the prior_hadoop_dir file does not get created, and configure.sh uses the default ssl-server.xml and ssl-client.xml files provided with Hadoop 2.7.6. In this scenario, any customization in the original .xml files is not applied.
The following workaround restores the missing prior_hadoop_dir file. With the file restored, configure.sh -R consumes the prior_hadoop_dir file and copies the original ssl-server.xml and ssl-client.xml files into the hadoop-2.7.6 directory, replacing the files that contain the default mapr123 password.
Workaround: After upgrading the ecosystem packages, but before running configure.sh -R:
  1. Create a file named prior_hadoop_dir that contains the Hadoop directory path. For example:
    # cat /opt/mapr/hadoop/prior_hadoop_dir
    /opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA
    If multiple directories are present, specify the directory with the most recent timestamp.
  2. Run the configure.sh -R command as instructed to complete the EEP upgrade.
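
Step 1 can be sketched as shown below. The timestamped directory name is the example from the text, so substitute the most recent hadoop-2.7.0.* directory actually present on your node; the configure.sh step is shown as a comment because its full option list depends on your cluster:

```shell
# Sketch of step 1: record the prior Hadoop directory so configure.sh -R can find it.
# The timestamped directory below is the example from the text -- substitute
# the most recent hadoop-2.7.0.* directory on your node.
HADOOP_DIR=/opt/mapr/hadoop
mkdir -p "$HADOOP_DIR"
echo "/opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA" > "$HADOOP_DIR/prior_hadoop_dir"
cat "$HADOOP_DIR/prior_hadoop_dir"   # verify the path was recorded

# Step 2: complete the EEP upgrade, for example:
# /opt/mapr/server/configure.sh -R
```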
DFUI-2163
SSO authentication is not enabled for the Data Fabric UI after upgrading from HPE Ezmeral Data Fabric release 7.5 to release 7.6.
Workaround: Restart the API server after upgrade.
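
A hedged sketch of the restart, assuming the API server runs under warden as the apiserver service:

```shell
# Assumption: the Data Fabric API server is the "apiserver" warden service.
maprcli node services -name apiserver -action restart -nodes "$(hostname -f)"
```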

Volumes

DFUI-638
A non-LDAP SSO user authenticating to Keycloak cannot create a volume on the Data Fabric UI.

Workaround: Create a volume via the Data Fabric minIO client.

DFUI-3111
Volume ACLs cannot be edited in a non-SSO environment because of an incorrect entity-existence check in the Data Fabric UI.

Workaround: Edit volume ACLs by using the volume modify command.
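
As an illustration (the volume name, user name, and permission string below are placeholders), volume ACL entries can also be edited from the command line with maprcli acl edit:

```shell
# Placeholder example: grant user "jdoe" full control (fc) on volume "myvol".
maprcli acl edit -type volume -name myvol -user jdoe:fc
```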

CLDB

EZINDFAAS-1171
The installer package upgrade that runs as part of the clusterupgrade API fails on the first attempt because the Ansible version is upgraded.

Workaround: Retry the operation.

EZINDFAAS-1177
CLDB takes a long time to come up and shuts down intermittently. Also, NFS has to be started manually.

Workaround:

  • Restart warden by using the following command, and wait for about 10 minutes.
    service mapr-warden restart
  • Restart NFS by using the following command.
    maprcli node services -nfs start -nodes `hostname -f` -json