Configuring Hive Client on a Data Fabric Client Node

This topic describes how to configure the Hive client on a Data Fabric client node.

Installing Hive on a Client Node

Prerequisites

The following instructions use the package manager to download and install Hive from the DEP / EEP repository to a client node. When you install Hive on a client node, you can use the Hive shell from a machine outside the cluster.

Before you begin, verify the following prerequisites:

  • The DEP / EEP repository is set up. To set up the DEP / EEP repository, see Step 11: Install Ecosystem Components Manually.

  • The HPE Data Fabric client must be installed on the same node on which you install the Hive client.

    For HPE Data Fabric client setup instructions, see HPE Data Fabric Client.

  • You must know the IP addresses or hostnames of the ZooKeeper nodes on the cluster.
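If you do not already know the ZooKeeper hosts, one way to list them is to run the following command on any cluster node (this assumes the `maprcli` utility is available there):

```shell
# List the ZooKeeper hosts configured for the cluster
maprcli node listzookeepers
```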

About this task

Run the following commands as root or using sudo.

Procedure

  1. On the client computer, install the mapr-hive package.
    CentOS or Red Hat:
    yum install mapr-hive
    Ubuntu:
    apt-get install mapr-hive
    SLES:
    zypper install mapr-hive
  2. Copy hive-site.xml from a cluster node to the client node. After the installation is complete, replace the default hive-site.xml on the client with the copy from the cluster.
  3. Complete the configuration by running configure.sh with the -R option.
  4. To verify the installation, connect with Beeline. The following is an example of a connection string for a MapR-secure cluster:
    /opt/mapr/hive/hive-3.1.3/bin/beeline -u "jdbc:hive2://<HS2_SERVER_CLUSTER_FQDN>:10000/default;auth=maprsasl;ssl=true"
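Steps 2 and 3 above can be sketched as a shell session. The cluster hostname (`cluster-node1`) is a placeholder, and the Hive version directory (`hive-3.1.3`) and paths are assumptions based on the example above; adjust them to match your environment.

```shell
# Step 2: copy hive-site.xml from a cluster node to the client node
# (cluster-node1 is a placeholder hostname; the Hive path assumes version 3.1.3)
scp cluster-node1:/opt/mapr/hive/hive-3.1.3/conf/hive-site.xml \
    /opt/mapr/hive/hive-3.1.3/conf/hive-site.xml

# Step 3: finish configuration; -R reuses the node's existing configuration
/opt/mapr/server/configure.sh -R
```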