Known Issues (Release 8.0)
You might encounter the following known issues after upgrading to release 8.0. This list is current as of the release date.
Where available, the workaround for an issue is also documented. HPE regularly provides maintenance releases and patches to fix issues. We recommend checking the release notes for subsequent maintenance releases to see whether any of these issues have been fixed.
HPE Data Fabric Streams
- MS-1511
- When messages are produced for the second time on a stream, CopyStream fails with the following exception:
com.mapr.db.exceptions.DBException: flush() failed with err code = 22.
Client Libraries
- MFS-18258
- When you add a new cluster to a cluster group, the FUSE-based POSIX client and the loopbacknfs POSIX client take about five minutes to load or list the newly added cluster.
Workaround: None.
- MFS-21119
- Mac client 7.10: The Mac client configuration script fails to handle the CLDB hostname.
Workaround: None.
Data Fabric UI
Sign-in Issues
- DFUI-437
- If you sign in to the Data Fabric UI as a non-SSO user and then sign out and try to sign in as an SSO user, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Data Fabric.
- DFUI-811
- If you launch the Data Fabric UI, sign out, wait for 5-10 minutes, and then attempt to sign in, a sign-in page for the "Managed Control System" (MCS) is displayed.
- DFUI-826
- In a cloud fabric, an empty page is displayed after a session expires and you subsequently click a fabric name. The browser can display the following URL:
https://<hostname>:8443/oath/login
- DFUI-874
- Sometimes when you attempt to sign in to the Data Fabric UI, the "Managed Control System" (MCS) is displayed, or the Object Store UI is displayed.
Display Issues
- DFUI-2749
- Custom roles assigned to Keycloak users or groups are not visible on the Data Fabric UI.
- DFUI-1221
- If a fabric includes a large number of resources, loading the resources to display in the Resources card on the home page can take a long time.
- DFUI-2102
- When you create a table replica on a primary cluster with the source table on a secondary cluster, the replication operation times out. However, the table replica is successfully created on the primary cluster. The table replica appears in the Replication tab, but does not appear in the Data Fabric UI Graph or Table view for the primary cluster. This behavior is the same for both a source table on the primary cluster and the replica on the secondary cluster.
- DFUI-2099
- When you delete a table replica from the Data Fabric UI Home page, the table replica remains listed in the Replication tab. When you select the table on the Replication tab, a message states that the requested file does not exist.
Installation or Fabric Creation
- IN-3655
- When the libssl (OpenSSL 1.1.1) package is missing on Ubuntu 22, the installer fails to install Hue, and the collectd service does not start.
Workaround: Add the focal-security repository on Ubuntu 22.04 or later to obtain libssl 1.1.1:
echo "deb http://security.ubuntu.com/ubuntu focal-security main" | sudo tee /etc/apt/sources.list.d/focal-security.list
sudo apt update
- IN-3592
- Installing a patch via mapr-installer-cli with the patch_location option is broken.
Workaround: Use -o environment.patch_version instead of -o environment.patch_location if you are using a Stanza for patch installation. See the sketch after this entry.
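The following command line is an illustrative sketch only, not a verified procedure: the Stanza file path and patch version are placeholders, and the exact mapr-installer-cli subcommand and flags for your installer version may differ. Consult the Installer Stanza documentation for your release.
/opt/mapr/installer/bin/mapr-installer-cli install -nv -t /opt/mapr/installer/my_stanza.yaml -o environment.patch_version=<patch-version>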
- IN-3613
- Patch install on the fabric removes Keycloak and data-access-gateway on the cluster.
Recommendation:
If a fabric has been installed using the Data Fabric UI, upgrade the fabric or apply a patch to the fabric by using the Data Fabric UI only. If a fabric has been installed using the core-installer, upgrade the fabric or apply a patch to the fabric using the core-installer only.
- MFS-18734
- Release 7.7.0 of the HPE Data Fabric has a dependency on the libssl1.1 package, which is not included in Ubuntu 22.04. As a result, you must apply the package manually to Ubuntu 22.04 nodes before installing Data Fabric software. One possible approach is sketched after this entry.
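As a hedged sketch of one way to apply the package manually, reusing the focal-security repository shown in the IN-3655 workaround above (verify the package source against your own security requirements before use):
echo "deb http://security.ubuntu.com/ubuntu focal-security main" | sudo tee /etc/apt/sources.list.d/focal-security.list
sudo apt update
sudo apt install libssl1.1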
Object Store
- MFS-22003
- Although a jumbo object repair has been successful in a previous gfsck utility run, during subsequent runs of the gfsck utility on the same jumbo object, a Data Fabric user gets an error stating that the jumbo object repair has failed.
- MFS-22001
- A reconstruction operation on bucket volumes using gfsck does not repair corrupted data for certain objects.
- DFUI-519
- An SSO user is unable to create buckets on the Data Fabric UI and the Object Store. This is applicable to an SSO user with any role such as infrastructure administrator, fabric manager or developer.
Security Policies
- MFS-18154
- A security policy created on a cloud-based primary fabric (such as AWS) is not replicated on to a secondary fabric created on another cloud provider (such as GCP).
Topics
- DFUI-637
- A non-LDAP SSO user authenticating to Keycloak cannot create a topic on the Data Fabric UI.
- DFUI-639
- A non-LDAP SSO user authenticating to Keycloak cannot create a volume or stream using the Data Fabric UI.
Upgrade
- COMSECURE-615
- Upgrading directly from release 6.1.x to release 7.x.x can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.x.x are not affected by this issue.
- DFUI-2163
- SSO authentication is not enabled for the Data Fabric UI after upgrading from HPE Ezmeral Data Fabric release version 7.5 to release version 7.6.
Volumes
- DFUI-638
- A non-LDAP SSO user authenticating to Keycloak cannot create a volume on the Data Fabric UI.
Workaround: Create a volume via the Data Fabric minIO client.
- DFUI-3111
- You are unable to edit volume ACLs in a non-SSO environment because of an incorrect entity existence check on the Data Fabric UI.
Workaround: Edit volume ACLs by using the volume modify command. A CLI sketch follows this entry.
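As an illustrative sketch only: the workaround above references the volume modify command, but volume ACLs can also be edited from the command line with maprcli acl edit, shown here. The volume name, user name, and permission code (fc = full control) are placeholders.
maprcli acl edit -type volume -name <volume-name> -user <user-name>:fc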
CLDB
- EZINDFAAS-1171
- Installer package upgrade as part of the clusterupgrade API fails on the first attempt because the Ansible version is upgraded.
Workaround: Retry the operation.
- EZINDFAAS-1177
- CLDB takes a long time to come up and shuts down intermittently. Also, NFS has to be started manually.
Workaround:
- Restart warden by using the following command, and wait for about 10 minutes.
service mapr-warden restart
- Restart NFS by using the following command.
maprcli node services -nfs start -nodes `hostname -f` -json