Configuring Hive Client on a Data Fabric Client Node
This topic describes how to configure the Hive client on a Data Fabric client node.
Installing Hive on a Client Node
Prerequisites
The following instructions use the package manager to download and install Hive from the EEP repository to a client node. When you install Hive on a client node, you can use the Hive shell from a machine outside the cluster.
Before you begin, verify the following prerequisites:
- The EEP repository is set up. To set up the EEP repository, see Step 11: Install Ecosystem Components Manually.
- The HPE Data Fabric client is installed on the same node on which you install the Hive client. For HPE Data Fabric client setup instructions, see HPE Data Fabric Client.
- You know the IP addresses or hostnames of the ZooKeeper nodes in the cluster.
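If you are not sure which nodes run ZooKeeper, you can query the cluster from any node that is already configured with Data Fabric software. This is a sketch; the hostnames in the output depend on your cluster.

```shell
# List the ZooKeeper nodes known to the cluster (run on an already-configured node).
maprcli node listzookeepers
# The output is a comma-separated list such as (placeholder hostnames):
# zk1.example.com:5181,zk2.example.com:5181,zk3.example.com:5181
```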
About this task
Run the following commands as root or using sudo.
Procedure
- On the client computer, install the mapr-hive package.

  CentOS or Red Hat: yum install mapr-hive
  Ubuntu: apt-get install mapr-hive
  SLES: zypper install mapr-hive

- On all Hive nodes, run configure.sh with a list of the CLDB nodes and ZooKeeper nodes in the cluster.
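As an illustration, a configure.sh invocation might look like the following. The cluster name, hostnames, and ports are placeholders, and your security options may differ; check the configure.sh reference for the flags that apply to your cluster.

```shell
# Sketch only: substitute your cluster name and the actual CLDB and ZooKeeper hosts.
# 7222 is the default CLDB port; 5181 is the default ZooKeeper port.
/opt/mapr/server/configure.sh -N my.cluster.com \
  -C cldb1.example.com:7222 \
  -Z zk1.example.com:5181,zk2.example.com:5181,zk3.example.com:5181
```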
After a successful run, you can connect to HiveServer2 with a connection string such as the following example for a MapR-secure cluster:
/opt/mapr/hive/hive-3.1.3/bin/beeline -u "jdbc:hive2://<HS2_SERVER_CLUSTER_FQDN>:10000/default;auth=maprsasl;ssl=true"
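To confirm that the client works end to end, you can also pass a query directly to Beeline. This sketch reuses the connection string above and assumes HiveServer2 is running on the cluster; replace <HS2_SERVER_CLUSTER_FQDN> with your HiveServer2 host.

```shell
# Hypothetical smoke test: list databases over the secure JDBC connection.
/opt/mapr/hive/hive-3.1.3/bin/beeline \
  -u "jdbc:hive2://<HS2_SERVER_CLUSTER_FQDN>:10000/default;auth=maprsasl;ssl=true" \
  -e "SHOW DATABASES;"
```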