Node Migration
You can add decommissioned HDFS data nodes to your HPE Ezmeral Data Fabric cluster.
Once you have loaded your data and tested and tuned your applications, you can decommission the HDFS data nodes and add them to the Data Fabric cluster.
This is a three-step process:
- Decommissioning nodes on an Apache Hadoop cluster: The Hadoop decommission feature enables you to gracefully remove a set of existing data nodes from a running cluster without data loss (see the first sketch after this list). For more information, see the Hadoop Wiki FAQ.
- Meeting minimum hardware and software requirements: Ensure that every data node you want to add to the Data Fabric cluster meets the hardware, software, and configuration requirements.
- Adding Nodes to a Data Fabric cluster: Add the decommissioned data nodes to the Data Fabric cluster (see the second sketch after this list). For more information, see Adding Nodes to a Cluster.
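The decommission step is driven by the HDFS exclude file and the `hdfs dfsadmin` command. The following is a minimal sketch of that workflow; the exclude-file path and hostnames are placeholder values, not defaults from this documentation, and it assumes `dfs.hosts.exclude` in `hdfs-site.xml` already points at the file shown.

```sh
# Run on the NameNode host. The path and hostnames are examples.
echo "datanode1.example.com" >> /etc/hadoop/conf/dfs.exclude
echo "datanode2.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists; the listed
# nodes move to "Decommission In Progress" while their block replicas
# are re-created on the remaining data nodes.
hdfs dfsadmin -refreshNodes

# Poll until each node reports "Decommissioned"; only then is it safe
# to stop the DataNode process and repurpose the machine.
hdfs dfsadmin -report
```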
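Once the Data Fabric packages are installed on a decommissioned node, the node is registered with the cluster using `configure.sh`. This is a minimal sketch, not the complete procedure in Adding Nodes to a Cluster; the cluster name, CLDB and ZooKeeper hostnames, and disk-list path are example values.

```sh
# Point the new node at the cluster's CLDB and ZooKeeper nodes
# (cluster name and hostnames here are placeholders).
/opt/mapr/server/configure.sh -N my.cluster.com \
    -C cldb1.example.com \
    -Z zk1.example.com,zk2.example.com,zk3.example.com

# Format the node's raw disks for the Data Fabric file system;
# /tmp/disks.txt lists one disk device per line (e.g. /dev/sdb).
/opt/mapr/server/disksetup -F /tmp/disks.txt

# Start the services, then verify the node has joined the cluster.
service mapr-warden start
maprcli node list -columns hostname,svc
```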