Managing Users and Groups
Provides a brief introduction to user management on an HPE Ezmeral Data Fabric cluster.
The following two users are important when installing and setting up Data Fabric software:
- root is used to install Data Fabric software on each node.
- The “Data Fabric user” is the user that Data Fabric services run as (typically named mapr or hadoop) on each node. The Data Fabric user has full privileges to administer the cluster. Administrative privileges with varying levels of control can be assigned to other users as well.
Before installing Data Fabric, decide on the name, user ID (UID), and group ID (GID) for the Data Fabric user. The Data Fabric user must exist on each node, and the user name, UID, and primary GID must match on all nodes.
- When adding a user to a cluster node, specify the --uid option with the useradd command to guarantee that the user has the same UID on all machines.
- When adding a group to a cluster node, specify the --gid option with the groupadd command to guarantee that the group has the same GID on all machines. Both commands are shown in the example after this list.
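The following is a minimal sketch of both commands, assuming they are run identically on every node; the name mapr and the ID 5000 are placeholder values, not requirements:

    # Create the group first, then the user, with fixed IDs.
    # "mapr" and 5000 are example values; substitute your own.
    groupadd --gid 5000 mapr
    useradd --uid 5000 --gid 5000 --create-home --shell /bin/bash mapr

    # Verify that the IDs match those used on the other nodes.
    id mapr    # expected: uid=5000(mapr) gid=5000(mapr) groups=5000(mapr)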
Data Fabric uses the native operating system configuration of each node to authenticate users and groups for access to the cluster. If you are deploying a large cluster, you should consider configuring all nodes to use LDAP or another user management system.

You can use the Control System to grant specific permissions to particular users and groups. For more information, see Setting User Permissions. Each user can be restricted to a specific amount of disk usage. For more information, see Setting Quotas for Users and Groups.
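Quotas can also be set from the command line with the maprcli entity modify command; a minimal sketch, in which the user name jsmith and the quota sizes are illustrative values:

    # Set an advisory quota and a hard disk-usage quota for a user (-type 0 = user).
    # "jsmith", 10G, and 20G are example values.
    maprcli entity modify -name jsmith -type 0 -advisoryquota 10G -quota 20G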
By default, Data Fabric grants the user root full administrative permissions. If the nodes do not have an explicit root login, grant full permissions to another user after deployment. See Adding Cluster Permissions.
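For example, assuming a user named admin2 already exists on the cluster nodes, full control can be granted with the maprcli acl edit command (login and fc are the login and full-control permission codes):

    # Grant the login and full-control (fc) cluster permissions to an example user.
    # "admin2" is a placeholder user name.
    maprcli acl edit -type cluster -user admin2:login,fc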
On the node where you plan to run the mapr-apiserver (the Control System), install Pluggable Authentication Modules (PAM). See PAM Configuration for more information.
You can perform the following procedures to manage users and groups in a Data Fabric cluster using the Control System and the CLI: