Step 10: Install Log Monitoring
Installing the logging components of the monitoring suite is optional. The logging components enable the collection, storage, and visualization of core logs, system logs, and ecosystem component logs. Monitoring components are available as part of the Ezmeral Ecosystem Pack (EEP) that you selected for the cluster.
About this task
Complete the steps to install the logging components as the root user or by using sudo. Installing logging components on a client node or edge node is not supported.
Procedure
-
For log monitoring, install the following packages:
Component       Requirement
fluentd         Install the mapr-fluentd package on each node in the cluster.
Elasticsearch   Install the mapr-elasticsearch package on at least three nodes in the cluster to allow failover of log storage if one Elasticsearch node is unavailable.
Kibana          Install the mapr-kibana package on at least one node in the cluster.
For example, on a three-node cluster, you can run the following commands to install the log packages:
- For CentOS/RedHat:
  - Node A: yum install mapr-fluentd mapr-elasticsearch
  - Node B: yum install mapr-fluentd mapr-elasticsearch
  - Node C: yum install mapr-fluentd mapr-elasticsearch mapr-kibana
- For Ubuntu:
  - Node A: apt-get install mapr-fluentd mapr-elasticsearch
  - Node B: apt-get install mapr-fluentd mapr-elasticsearch
  - Node C: apt-get install mapr-fluentd mapr-elasticsearch mapr-kibana
- For SLES:
  - Node A: zypper install mapr-fluentd mapr-elasticsearch
  - Node B: zypper install mapr-fluentd mapr-elasticsearch
  - Node C: zypper install mapr-fluentd mapr-elasticsearch mapr-kibana
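The per-distribution commands above differ only in the package manager. As a sketch, the right one can be detected automatically; the non-interactive flags are an assumption, and the command is printed rather than run so it can be reviewed first:

```shell
# Detect the distro's package manager and print the matching install command.
# Non-interactive flags (-y, -n) are assumptions; adjust for your site policy.
if command -v yum >/dev/null 2>&1; then
    PKG_INSTALL="yum install -y"            # CentOS/RedHat
elif command -v apt-get >/dev/null 2>&1; then
    PKG_INSTALL="apt-get install -y"        # Ubuntu
elif command -v zypper >/dev/null 2>&1; then
    PKG_INSTALL="zypper -n install"         # SLES
else
    PKG_INSTALL=""
fi
# Print the command for review rather than executing it directly:
echo "$PKG_INSTALL mapr-fluentd mapr-elasticsearch"
```

On the Kibana node (Node C in the example above), append mapr-kibana to the package list.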
-
For secure HPE Ezmeral Data Fabric clusters, run maprlogin print to verify that you have a user ticket for the HPE Ezmeral Data Fabric user and the root user. These user tickets are required for a successful installation. If you need to generate an HPE Ezmeral Data Fabric user ticket, run maprlogin password. For more information, see Generating a HPE Ezmeral Data Fabric User Ticket.
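As a sketch, the ticket check for the current user can be scripted. The default per-user ticket path /tmp/maprticket_<uid> is an assumption; if MAPR_TICKETFILE_LOCATION is set, it takes precedence:

```shell
# Check for a data-fabric user ticket before installing.
# Default ticket path is an assumption; MAPR_TICKETFILE_LOCATION overrides it.
TICKET="${MAPR_TICKETFILE_LOCATION:-/tmp/maprticket_$(id -u)}"
if [ -f "$TICKET" ]; then
    maprlogin print     # confirm the ticket is valid and not expired
else
    echo "No ticket at $TICKET; run 'maprlogin password' to generate one."
fi
```

Run the same check as root, since both the cluster user and the root user need tickets.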
-
For secure data-fabric clusters, verify that the following keystore, truststore, and pem files are present on all nodes. If the files are not present, you must copy them from the security master node to all other nodes. If the /opt/mapr/conf/ca directory doesn't exist, you must create the directory:
/opt/mapr/conf/ssl_userkeystore
/opt/mapr/conf/ssl_userkeystore.csr
/opt/mapr/conf/ssl_userkeystore.p12
/opt/mapr/conf/ssl_userkeystore.pem
/opt/mapr/conf/ssl_userkeystore-signed.pem
/opt/mapr/conf/ssl_usertruststore
/opt/mapr/conf/ssl_usertruststore.p12
/opt/mapr/conf/ssl_usertruststore.pem
/opt/mapr/conf/ca/root-ca.pem
/opt/mapr/conf/ca/chain-ca.pem
/opt/mapr/conf/ca/signing-ca.pem
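A quick way to audit a node against this list is a sketch like the following; the file list mirrors the step above, and copying any missing files from the security master node is left to scp or your own tooling:

```shell
# Report which of the required security files are missing on this node.
missing=0
for f in /opt/mapr/conf/ssl_userkeystore \
         /opt/mapr/conf/ssl_userkeystore.csr \
         /opt/mapr/conf/ssl_userkeystore.p12 \
         /opt/mapr/conf/ssl_userkeystore.pem \
         /opt/mapr/conf/ssl_userkeystore-signed.pem \
         /opt/mapr/conf/ssl_usertruststore \
         /opt/mapr/conf/ssl_usertruststore.p12 \
         /opt/mapr/conf/ssl_usertruststore.pem \
         /opt/mapr/conf/ca/root-ca.pem \
         /opt/mapr/conf/ca/chain-ca.pem \
         /opt/mapr/conf/ca/signing-ca.pem; do
    if [ ! -f "$f" ]; then
        echo "missing: $f"
        missing=$((missing + 1))
    fi
done
echo "$missing file(s) missing; copy missing files from the security master node."
```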
-
For secure HPE Ezmeral Data Fabric clusters, configure a password for the Elasticsearch admin user to enable authentication for the end user who uses Kibana to search the Elasticsearch log index. This password must be provided at the time of running configure.sh. If no password is specified, the default password admin (the pre-MEP-5.0.0 default) is used. Use one of the following methods to pass the password to Elasticsearch/Kibana:
- On the nodes where Fluentd/Elasticsearch/Kibana is installed, export the password as an environment variable before calling configure.sh:
  export ES_ADMIN_PASSWORD="<newElasticsearchPassword>"
  Then run configure.sh as you normally would run it (go to step 5).
- Add the following options to the configure.sh command in step 5. This method explicitly passes the password on the configure.sh command line:
  -EPelasticsearch '-password <newElasticsearchPassword>' -EPkibana '-password <newElasticsearchPassword>' -EPfluentd '-password <newElasticsearchPassword>'
  Example:
  /opt/mapr/server/configure.sh -R -v -ES mfs74.qa.lab -ESDB /opt/mapr/es_db -OT mfs74.qa.lab -C mfs74.qa.lab -Z mfs74.qa.lab -EPelasticsearch '-password helloMapR' -EPkibana '-password helloMapR' -EPfluentd '-password helloMapR'
- Add the following options to the configure.sh command in step 5. This method passes the password on the configure.sh command line by specifying a file that contains the password:
  -EPelasticsearch '-password <name of local file containing new password>' -EPkibana '-password <name of local file containing new password>' -EPfluentd '-password <name of local file containing new password>'
  Example:
  /opt/mapr/server/configure.sh -R -v -ES mfs74.qa.lab -ESDB /opt/mapr/es_db -OT mfs74.qa.lab -C mfs74.qa.lab -Z mfs74.qa.lab -EPelasticsearch '-password /tmp/es_password' -EPkibana '-password /tmp/es_password' -EPfluentd '-password /tmp/es_password'
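Because the same password must be passed to all three components, a small sketch can build the -EP options once so the three values cannot drift apart; the password value and node names are placeholders, and the final command is printed for review rather than executed:

```shell
# Build the three -EP options from one password value and print the resulting
# configure.sh command; 'helloMapR' and 'nodeA' are placeholder values.
ES_PASS='helloMapR'
EP_OPTS="-EPelasticsearch '-password $ES_PASS' \
-EPkibana '-password $ES_PASS' -EPfluentd '-password $ES_PASS'"
echo "/opt/mapr/server/configure.sh -R -ES nodeA -OT nodeA -C nodeA -Z nodeA $EP_OPTS"
```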
-
Run configure.sh on each node in the HPE Ezmeral Data Fabric cluster with the -R and -ES parameters, adding parameters to configure the Fluentd/Elasticsearch/Kibana password as needed. Optionally, you can include the -ESDB parameter to specify the location for writing index data. A Warden service must be running when you use configure.sh -R.
/opt/mapr/server/configure.sh -R -ES <comma-separated list of Elasticsearch nodes> [-ESDB <filepath>]
Parameter   Description
-ES         Specifies a comma-separated list of host names or IP addresses that identify the Elasticsearch nodes. The Elasticsearch nodes can be part of the current HPE Ezmeral Data Fabric cluster or part of a different HPE Ezmeral Data Fabric cluster. The list uses the following format: hostname/IPaddress[:port_no][,hostname/IPaddress[:port_no]...]
            NOTE: The default Elasticsearch port is 9200. If you want to use a different port, specify the port number when you list the Elasticsearch nodes.
-ESDB       Specifies a non-default location for writing index data on Elasticsearch nodes. To configure an index location, you only need to include this parameter on Elasticsearch nodes. By default, the Elasticsearch index is written to /opt/mapr/elasticsearch/elasticsearch-<version>/var/lib/MaprMonitoring/.
            NOTE: Elasticsearch requires a lot of disk space. Therefore, a separate filesystem for the index is strongly recommended. Storing index data under the / or /var file system is not recommended.
            Upgrading to a new version of monitoring removes the /opt/mapr/elasticsearch/elasticsearch-<version>/var/lib/MaprMonitoring/ directory. If you want to retain Elasticsearch index data through an upgrade, you must use the -ESDB parameter to specify a separate filesystem or back up the default directory before upgrading. The Pre-Upgrade Steps for Monitoring include this step.
-OT         Specifies a comma-separated list of host names or IP addresses that identify the OpenTSDB nodes. The OpenTSDB nodes can be part of the current HPE Ezmeral Data Fabric cluster or part of a different HPE Ezmeral Data Fabric cluster. Do not use this option when you configure a node for the first time. Use this option along with the -R parameter. A Warden service must be running when you use configure.sh -R -OT. The hostname list uses the following format: hostname/IPaddress[:port_no][,hostname/IPaddress[:port_no]...]
            NOTE: The default OpenTSDB port is 4242. If you want to use a different port, specify the port number when you list the OpenTSDB nodes.
-R          After initial node configuration, specifies that configure.sh should use the previously configured ZooKeeper and CLDB nodes.
For example, to configure monitoring components you can run one of the following commands:
- In this example, a location is specified for the Elasticsearch index directory, and default ports are used for the Elasticsearch nodes:
  /opt/mapr/server/configure.sh -R -ES NodeA,NodeB,NodeC -ESDB /opt/mapr/myindexlocation
- In this example, non-default ports are specified for Elasticsearch, and the default location is used for the Elasticsearch index directory:
  /opt/mapr/server/configure.sh -R -ES NodeA:9595,NodeB:9595,NodeC:9595
After you run configure.sh -R, if errors are displayed, see Troubleshoot Monitoring Installation Errors.
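The hostname[:port] list accepted by -ES can be generated rather than typed. A sketch for the non-default-port example (node names and port taken from the example above) that prints the command for review:

```shell
# Build a comma-separated -ES list with a non-default port for each node.
PORT=9595
ES_LIST=""
for n in NodeA NodeB NodeC; do
    # Prepend a comma only when the list is already non-empty.
    ES_LIST="${ES_LIST:+$ES_LIST,}$n:$PORT"
done
echo "/opt/mapr/server/configure.sh -R -ES $ES_LIST"
# ES_LIST is NodeA:9595,NodeB:9595,NodeC:9595
```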
-
If you installed Kibana, perform the following steps: