Component Migration
This section describes how to migrate customized components to Hadoop for the HPE Ezmeral Data Fabric.
Hadoop for the HPE Ezmeral Data Fabric features the complete Hadoop distribution, including components such as Hive. There are a few things to know about migrating Hive, and about migrating custom components that you have patched yourself.
Custom Components
If you have applied your own patches to a component and wish to continue to use that customized component with the Data Fabric distribution, you should keep the following considerations in mind:
- Data Fabric libraries: All Hadoop components must point to Data Fabric software for the Hadoop libraries. Change any absolute paths. Do not hardcode hdfs:// or maprfs:// into your applications. This is also true of Hadoop ecosystem components that are not included in the Data Fabric Hadoop distribution (such as Cascading). For more information, see Working with file system. A sketch of scheme-neutral path handling appears after this list.
- Component compatibility: Before you commit to the migration of a customized component (for example, customized HBase), check the Data Fabric release notes to see whether HPE has issued a patch that satisfies your business requirements. HPE publishes a list of Hadoop common patches and Data Fabric patches with each release and makes those patches available for HPE customers to take, build, and deploy.
- ZooKeeper coordination service: Certain components depend on ZooKeeper. When you migrate your customized component from the HDFS cluster to the Data Fabric cluster, make sure that it points to the Data Fabric ZooKeeper service. See the configuration sketch after this list.
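The following is a minimal sketch of scheme-neutral path handling, assuming that core-site.xml on the classpath sets fs.defaultFS (for example, to the Data Fabric file system). The class name and the path are hypothetical, not part of any Data Fabric API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SchemeNeutralPathExample {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml and related resources from the classpath.
        Configuration conf = new Configuration();

        // FileSystem.get(conf) returns whatever file system fs.defaultFS
        // names, so the same code runs against HDFS or maprfs without
        // recompiling.
        FileSystem fs = FileSystem.get(conf);

        // A scheme-less path is resolved by the default file system;
        // no hdfs:// or maprfs:// prefix is hardcoded into the application.
        Path input = new Path("/user/app/input"); // hypothetical path
        System.out.println("Resolved URI: " + fs.makeQualified(input));
    }
}
```

Because the scheme comes from the cluster configuration rather than the application, repointing the component at Data Fabric becomes a configuration change instead of a code change.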
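For the ZooKeeper consideration, here is a minimal sketch of repointing a ZooKeeper-dependent component (an HBase client in this example) at the Data Fabric ZooKeeper ensemble. The hostnames are placeholders, and port 5181 is the typical Data Fabric ZooKeeper client port; verify both against your own cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ZooKeeperQuorumExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Point the client at the Data Fabric ZooKeeper nodes instead of
        // the old HDFS cluster's ensemble (hostnames are placeholders).
        conf.set("hbase.zookeeper.quorum",
                "zk1.example.com,zk2.example.com,zk3.example.com");

        // Data Fabric ZooKeeper typically listens on 5181 rather than the
        // stock 2181; confirm the port for your cluster.
        conf.set("hbase.zookeeper.property.clientPort", "5181");

        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            System.out.println("Connected via ZooKeeper quorum: "
                    + conf.get("hbase.zookeeper.quorum"));
        }
    }
}
```

In practice these properties usually live in the component's site configuration file rather than in code; the point is that every reference to the old cluster's ZooKeeper ensemble must be updated during migration.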