HPE Ezmeral Data Fabric Control System (MCS)

The HPE Ezmeral Data Fabric includes the HPE Ezmeral Data Fabric Control System (MCS), a cluster-management tool that you can use to administer HPE Ezmeral Data Fabric clusters. The Control System provides command-line and REST APIs, job monitoring metrics, and helps you troubleshoot cluster issues.

HPE Ezmeral Runtime Enterprise automatically installs and configures AD/LDAP authentication for HPE Ezmeral Data Fabric as part of cluster creation, and also installs and enables the MAST Gateway Service. If desired, you can set up HttpFS as described below. The remaining sections of this article describe implementation differences, setting up HttpFS, accessing the Control System, and commonly used Data Fabric commands.

Implementation Differences

The HPE Ezmeral Data Fabric Control System provides less information in a Kubernetes environment than in a bare-metal HPE Ezmeral Data Fabric environment. In a bare-metal implementation, the Control System enables you to manage all aspects of a Data Fabric cluster and provides node-specific data-management features.

HPE Ezmeral Data Fabric Control System in a Kubernetes environment:

  • Primarily provides Volumes and Services information.
  • Does not display the Overview, Nodes, Data, Data > Streams, or Data > Tables menu options.
  • Provides Volumes information that is equivalent to the Data > Volumes information in a bare-metal environment.
  • Displays services only under the headings Core, Others, and Monitoring. You cannot start, stop, or restart services.
  • Allows you only to remove users on the User Permissions screen.

See 6.2 Administration for additional information about using the Control System (link opens in a new browser tab or window).

Setting up HttpFS for HPE Ezmeral Data Fabric

HPE Ezmeral Data Fabric supports the optional HttpFS package that allows data access via cURL or any other HTTP client. For additional information, please see the HPE Ezmeral Data Fabric article Installation Instructions. Please also see Additional information (link opens an external website in a new browser tab/window).

HttpFS includes the following key features:

  • By default, HttpFS runs in Secure mode and requires basic authentication. You can also configure it to use Kerberos for authentication, as described below.
  • HttpFS impersonates users. For example, if User_A authenticates, then any files are written or read as User_A. All volume and file ACEs are honored (see the cURL sketch after this list).
  • HttpFS provides full access to files in MapR-FS paths on HPE Ezmeral Data Fabric. It is not integrated with DataTaps.
  • When browsing volumes, HttpFS is similar to MapR Hadoop in that it provides access to data within any mounted volume without exposing volume objects.
  • Volume objects are configuration-level structures that are viewed/modified through either the Control System or Data Fabric CLI commands. See Accessing the HPE Ezmeral Data Fabric Control System (MCS) and HPE Ezmeral Data Fabric Commands.
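
For example, the following is a minimal cURL sketch of the WebHDFS-style calls that HttpFS accepts; the host, user, and paths are placeholders, and -k accepts the self-signed certificate:

    # Authenticated as User_A, so this request and any reads/writes are performed
    # as User_A, and all volume and file ACEs apply (paths are placeholders)
    curl -k -u User_A "https://<controller_ip>:14000/webhdfs/v1/<maprfs_path_to_file>?op=GETFILESTATUS"

    # Browse data inside a mounted volume; the volume object itself is not exposed
    curl -k -u User_A "https://<controller_ip>:14000/webhdfs/v1/<path_inside_mounted_volume>?op=LISTSTATUS"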

Setting up HttpFS

To set up HttpFS on HPE Ezmeral Data Fabric:

  1. Log in to the CLDB node by executing the following command:

    bdmapr --root /bin/bash
  2. Verify that yum works.
  3. If your environment requires a web proxy for yum, update the proxy setting by executing the following command (substituting your proxy URL):

    echo "proxy=http://web-proxy.corp.enterprise.com:8080" >> /etc/yum.conf
  4. Update the MapR repository configuration by executing the following commands:
    sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/mapr.repo
    sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/mapr.repo
  5. Install HttpFS by executing the following commands:
    yum install -y mapr-httpfs 
    /opt/mapr/server/configure.sh -R
  6. After HttpFS starts, test it by opening the following URL in a web browser:

    https://<controller_ip>:14000/
  7. Read a file by opening the following URL in the web browser:

    https://<controller_ip>:14000/webhdfs/v1/<maprfs_path_to_file>?op=OPEN
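
If you prefer the command line, the same checks can be made with cURL; this is a sketch assuming basic authentication with a valid cluster user and the same placeholders as in the steps above (-k accepts the self-signed certificate):

    # Confirm the HttpFS service is answering
    curl -k -u <user>:<password> "https://<controller_ip>:14000/"

    # Read a file through the WebHDFS-compatible endpoint
    curl -k -u <user>:<password> "https://<controller_ip>:14000/webhdfs/v1/<maprfs_path_to_file>?op=OPEN"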

You can now use Postman to create directories or write files. For example, to create a new request:

  • Type: PUT
  • URL: https://<controller_ip>:14000/webhdfs/v1/tmp/testdirectory?op=MKDIRS
  • Authorization configuration:
    • Type: Basic Auth
    • Username: Any valid user (could be admin)
    • Password: Password for that user

Enable Insecure mode when prompted. Postman prompts you to enable Insecure mode the first time you send a request, because the server returns a self-signed certificate.
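
A cURL equivalent of the Postman request, plus a file write, might look like the following sketch; the admin credentials, paths, and the local file name localfile.txt are placeholders, and -k accepts the self-signed certificate. HttpFS typically expects the data=true parameter and an octet-stream Content-Type when uploading file data in a single request:

    # Create a directory (same request as the Postman example above)
    curl -k -u admin:<password> -X PUT "https://<controller_ip>:14000/webhdfs/v1/tmp/testdirectory?op=MKDIRS"

    # Write a local file into the new directory as the authenticated user
    curl -k -u admin:<password> -X PUT -T localfile.txt \
        -H "Content-Type: application/octet-stream" \
        "https://<controller_ip>:14000/webhdfs/v1/tmp/testdirectory/localfile.txt?op=CREATE&data=true"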

Accessing the HPE Ezmeral Data Fabric Control System (MCS)

HPE Ezmeral Runtime Enterprise automatically routes any request made to port 8443 to the MCS. Thus, once you have configured AD/LDAP authentication, enabled the MAST Gateway service, and set up HttpFS, you can access the MCS at:

https://<gateway_host_ip_address>:8443

If platform HA is enabled, then the IP address in the URL must be the Primary Controller, Shadow Controller, or Cluster IP address. Do not use the Gateway host IP address.

The default username is admin. The administrator password is stored on the Primary Controller host at /opt/bluedata/mapr/conf/mapr-admin-pass.
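
For scripted access, the stored password can be read and passed to cURL; this is a minimal sketch, run on the Primary Controller host, assuming the Control System exposes the standard Data Fabric REST API under /rest on port 8443:

    # Read the admin password and list volumes through the Control System REST API
    MCS_PASS=$(cat /opt/bluedata/mapr/conf/mapr-admin-pass)
    curl -k -u admin:"${MCS_PASS}" "https://<gateway_host_ip_address>:8443/rest/volume/list"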

The HPE Ezmeral Data Fabric Container Location Database (CLDB) runs on the Primary, Shadow, and Arbiter hosts.

The HPE Ezmeral Data Fabric service runs in a container on the Controller host.

HPE Ezmeral Data Fabric Commands

Run the following commands on the Controller host to get the information needed:

  • List all nodes with the services running on them, along with the topology:
    bdmapr maprcli node list -columns h,svc,racktopo,id
  • List all volumes that are present on a node:
    bdmapr maprcli volume list -filter -nodes FQDN_OF_THE_HOST -columns volumename,minreplicas,numreplicas,mountdir,quota,advisoryquota
  • List license information:
    bdmapr maprcli license list
  • Check ZooKeeper status:
    bdmapr --root /opt/mapr/zookeeper/zookeeper-3.4.11/bin/zookeeper qstatus
  • Identify the CLDB master node:
    bdmapr maprcli node cldbmaster
  • List the CLDB and ZooKeeper nodes:
    bdmapr maprcli node listcldbzks
  • Log on to a shell inside the MapR container:
    bdmapr --root bash
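
Most maprcli commands also accept a -json flag, which is convenient for scripting; the following is a small sketch that assumes jq is available on the Controller host:

    # List nodes and their services as JSON, then extract just the hostnames
    bdmapr maprcli node list -columns h,svc -json | jq -r '.data[].hostname'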