Configuring Hive Client on a Data Fabric Client Node
This topic describes how to configure the Hive client on a Data Fabric client node.
Installing Hive on a Client Node
Prerequisites
The following instructions use the package manager to download and install Hive from the DEP / EEP repository to a client node. When you install Hive on a client node, you can use the Hive shell from a machine outside the cluster.
Before you begin, verify the following prerequisites:
- The DEP / EEP repository is set up. To set up the DEP / EEP repository, see Step 11: Install Ecosystem Components Manually.
- The HPE Data Fabric client is installed on the same node on which you install the Hive client. For HPE Data Fabric client setup instructions, see HPE Data Fabric Client.
- You know the IP addresses or hostnames of the ZooKeeper nodes in the cluster.
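If you do not have the ZooKeeper addresses at hand, you can query a cluster node for them. A minimal sketch, assuming the maprcli tool is available on a cluster node (the hostnames shown in the comment are placeholders, and output format can vary by release):

```shell
# Run on any cluster node where maprcli is available.
# Prints the ZooKeeper ensemble as a comma-separated list of host:port
# entries, e.g. zk1.example.com:5181,zk2.example.com:5181
maprcli node listzookeepers
```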
About this task
Run the following commands as root or using sudo.
Procedure
- On the client computer, install the mapr-hive package.
  CentOS or Red Hat: yum install mapr-hive
  Ubuntu: apt-get install mapr-hive
  SLES: zypper install mapr-hive
- Copy hive-site.xml from a cluster node to the client node (replace the client's hive-site.xml after the installation is complete).
- Finish the configuration procedure by running configure.sh with the -R flag specified.
- Example of a connection string for a MapR-secure cluster:
  /opt/mapr/hive/hive-3.1.3/bin/beeline -u "jdbc:hive2://<HS2_SERVER_CLUSTER_FQDN>:10000/default;auth=maprsasl;ssl=true"
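Taken together, the copy, configure, and connection steps above might look like the following when run on the client node. This is a hedged sketch: cluster-node.example.com and hs2.example.com are placeholder hostnames, and the conf-directory path assumes the Hive 3.1.3 package layout used in the example above.

```shell
# Copy the cluster's hive-site.xml onto the client
# (cluster-node.example.com is a placeholder for one of your cluster nodes;
# the conf path assumes the standard Hive 3.1.3 package layout)
scp root@cluster-node.example.com:/opt/mapr/hive/hive-3.1.3/conf/hive-site.xml \
    /opt/mapr/hive/hive-3.1.3/conf/hive-site.xml

# Re-run configuration with -R so existing settings are reused
/opt/mapr/server/configure.sh -R

# Verify connectivity to HiveServer2 on a MapR-secure cluster
# (hs2.example.com is a placeholder for your HiveServer2 host)
/opt/mapr/hive/hive-3.1.3/bin/beeline \
  -u "jdbc:hive2://hs2.example.com:10000/default;auth=maprsasl;ssl=true" \
  -e "show databases;"
```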