Step 9: Install Metrics Monitoring
Metrics monitoring is one part of cluster monitoring, which also includes log monitoring. Monitoring components are available as part of the Ecosystem Pack (EEP) that you selected for the cluster.
About this task
IMPORTANT: Beginning with release 7.8.0 and EEP 9.3.0, the monitoring components are provided in the Data Fabric (core) repository and not in the EEP repository. For more information, see Installation Notes (Release 7.9).
Complete these steps to install metrics monitoring as the root user or using sudo. Installing metrics monitoring components on a client node or edge node is not supported.
Procedure
-
For metrics monitoring, install the following packages:
- collectd: Install the mapr-collectd package on each node in the HPE Ezmeral Data Fabric cluster.
- OpenTSDB and AsyncHBase: Install the mapr-opentsdb package on one or more nodes. To allow failover of metrics storage when one OpenTSDB node is unavailable, install OpenTSDB on at least three nodes in the cluster. NOTE: mapr-opentsdb depends on mapr-asynchbase, and mapr-asynchbase is installed automatically on the node where you install mapr-opentsdb.
- Grafana (optional): Install the mapr-grafana package on at least one node in the HPE Ezmeral Data Fabric cluster. Grafana is optional for metrics monitoring.
On a three-node cluster, you could run the following commands to install the metrics packages (a verification sketch follows the list):
- For CentOS/RedHat:
- Node A:
yum install mapr-collectd mapr-grafana
- Node B:
yum install mapr-collectd mapr-opentsdb
- Node C:
yum install mapr-collectd
- For Ubuntu:
- Node A:
apt-get install mapr-collectd mapr-grafana
- Node B:
apt-get install mapr-collectd mapr-opentsdb
- Node C:
apt-get install mapr-collectd
- For SLES:
- Node A:
zypper install mapr-collectd mapr-grafana
- Node B:
zypper install mapr-collectd mapr-opentsdb
- Node C:
zypper install mapr-collectd
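To confirm which monitoring packages were installed on a node, you can query the package manager directly. A minimal sketch for the CentOS/RedHat case (on Ubuntu, dpkg -l 'mapr-*' gives a comparable listing):
# List the mapr- monitoring packages installed on this node.
rpm -qa | grep -E 'mapr-(collectd|opentsdb|asynchbase|grafana)'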
-
Release 6.0.1 and later: Configure a password for Grafana:
- For a secured cluster, ensure that the /opt/mapr/conf/ssl_truststore.pem file is present in /opt/mapr/conf on the Grafana nodes. If the /opt/mapr/conf/ssl_truststore.pem file is not present, you must copy it from the CLDB primary node to /opt/mapr/conf on the Grafana nodes, as sketched below. NOTE: In a secure cluster, Grafana uses PAM to authenticate with the cluster administrator login ID (typically the mapr user ID) and password, so no additional information is needed.
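If you do need to copy the truststore, a minimal sketch, assuming the CLDB primary node is reachable as cldb01.example.com (a placeholder host name) and that you run the commands as root on each Grafana node:
# Copy the truststore from the CLDB primary node to this Grafana node.
scp mapr@cldb01.example.com:/opt/mapr/conf/ssl_truststore.pem /opt/mapr/conf/ssl_truststore.pem
# Match the ownership used for the other files in /opt/mapr/conf (typically the mapr user).
chown mapr:mapr /opt/mapr/conf/ssl_truststore.pem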
-
On every cluster node, run configure.sh with the -R and -OT parameters, and any other parameters needed for the Grafana password. Warden must be running when you use configure.sh -R -OT.
/opt/mapr/server/configure.sh -R -OT <comma-separated list of OpenTSDB nodes>
Parameters:
- -R: After initial node configuration, specifies that configure.sh should use the previously configured ZooKeeper and CLDB nodes.
- -OT: Specifies a comma-separated list of host names or IP addresses that identify the OpenTSDB nodes. The OpenTSDB nodes can be part of the current HPE Ezmeral Data Fabric cluster or part of a different HPE Ezmeral Data Fabric cluster. The list has the following format: hostname/IP address[:port_no][,hostname/IP address[:port_no]...]
NOTE: The default OpenTSDB port is 4242. If you want to use a different port, specify the port number when you list the OpenTSDB nodes.
For example, to configure the monitoring components, you can run one of the following commands:
- In this example, default ports are used for the OpenTSDB nodes:
/opt/mapr/server/configure.sh -R -OT NodeB
- In this example, a non-default port is specified for the OpenTSDB node:
/opt/mapr/server/configure.sh -R -OT NodeB:4040
After you run configure.sh -R, if errors are displayed, see Troubleshoot Monitoring Installation Errors.
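To confirm that the monitoring services started after configuration, one option (a sketch; the exact service names and output columns can vary by release) is to list the services that each node reports and check that collectd, opentsdb, and grafana appear on the nodes where you installed them:
# Show the services running on every cluster node.
maprcli node list -columns svc
# Or check a single node (replace NodeB with one of your OpenTSDB nodes).
maprcli service list -node NodeB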
-
To start collecting metrics for the NodeManager and ResourceManager services, restart these services on each node where they are installed.
maprcli node services -name nodemanager -nodes <space-separated list of hostnames/IP addresses> -action restart
maprcli node services -name resourcemanager -nodes <space-separated list of hostnames/IP addresses> -action restart
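After the restarts, you can spot-check that metrics are reaching OpenTSDB. A minimal sketch, assuming an OpenTSDB node named NodeB listening on the default port 4242; the suggest endpoint is part of the standard OpenTSDB HTTP API, and on a secured cluster you may need https and credentials:
# Ask OpenTSDB for up to 10 known metric names; a non-empty JSON array
# indicates that collectd is successfully writing metrics.
curl "http://NodeB:4242/api/suggest?type=metrics&max=10"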