How Impersonation Works
Introduces impersonation functionality, limitations, and core requirements.
If a user attempts to impersonate another user to the file system or HPE Ezmeral Data Fabric Database systems and the configuration parameters for resolving the UID and GIDs on the server (see Resolving Username with UID and GIDs During Impersonation) are disabled:
- The Data Fabric client looks for that user name in the local operating system registry.
- If the user name is:
  - Found, Data Fabric sends the user's UID and GID to the server for impersonation.
  - Not found in the local operating system registry, the user action is not processed.
If a user attempts to impersonate another user to the file system or HPE Ezmeral Data Fabric Database systems and the configuration parameters for resolving the UID and GIDs on the server (see Resolving Username with UID and GIDs During Impersonation) are enabled:
- The Data Fabric client asks CLDB to look up that user name and resolve the UID and GIDs for that user on the server.
- If the user name is:
  - Found on the server, the server allows the user to proceed with the impersonation.
  - Not found, the user action is not processed.
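The two resolution paths above can be sketched as follows. This is an illustrative model only, not Data Fabric code: the localRegistry and serverRegistry maps are hypothetical stand-ins for the local operating system registry and the CLDB lookup.

```java
import java.util.Map;
import java.util.Optional;

public class ImpersonationFlow {
    // Returns the impersonated user's {UID, GID} if the name can be resolved,
    // or an empty Optional if the action would not be processed.
    static Optional<int[]> resolve(String userName,
                                   boolean serverSideResolutionEnabled,
                                   Map<String, int[]> localRegistry,
                                   Map<String, int[]> serverRegistry) {
        if (serverSideResolutionEnabled) {
            // Client asks CLDB to resolve the UID and GIDs on the server.
            return Optional.ofNullable(serverRegistry.get(userName));
        }
        // Otherwise the client looks the user up in the local OS registry;
        // if the name is absent, the user action is not processed.
        return Optional.ofNullable(localRegistry.get(userName));
    }

    public static void main(String[] args) {
        Map<String, int[]> local = Map.of("alice", new int[]{1001, 1001});
        Map<String, int[]> server = Map.of("bob", new int[]{1002, 1002});
        System.out.println(resolve("alice", false, local, server).isPresent()); // true: found locally
        System.out.println(resolve("bob", false, local, server).isPresent());   // false: not in local registry
        System.out.println(resolve("bob", true, local, server).isPresent());    // true: resolved by CLDB
    }
}
```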
NOTE: If the configuration property for resolving the user name is set on the client but not set on CLDB, the operation fails with an error.
Limitations on Impersonation
Service-with-impersonation tickets cannot be used to impersonate the mapr or root users. A scoped service-with-impersonation ticket cannot contain the UID of the root or mapr user (in the impersonated UIDs) or the GID of the root or mapr user (in the impersonated GIDs).
The mapr user can impersonate any user, including root.
Core Requirements for Impersonation
The mapr superuser is allowed to access the file system and HPE Ezmeral Data Fabric Database systems. The following conditions must be met for the mapr superuser to be able to impersonate another Data Fabric user:
- The hadoop.proxyuser.mapr.groups and hadoop.proxyuser.mapr.hosts parameters must be set correctly in the core-site.xml file. See Enabling Impersonation for the mapr Superuser.
  These settings are not always required. The Hadoop proxy-user functionality applies only to ecosystem components included in the Data Fabric distribution for Apache Hadoop. If the Data Fabric client accesses an ecosystem component, such as HiveServer2, these settings may be required. They are never needed if the Data Fabric client accesses the file system or HPE Ezmeral Data Fabric Database directly. Enabling impersonation here ensures that the correct settings are in place if they are needed.
- The name of the Data Fabric user that you want the mapr superuser to be able to impersonate must appear in the local operating system registry where the Data Fabric client is running, if server-side resolution of UID and GIDs is not enabled.
- The UID and GID of the user name under which the Data Fabric client is running must exactly match the UID and GID for that user name on the server.
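For reference, the proxy-user parameters named in the first condition might be set in core-site.xml as in the following fragment. The wildcard values are illustrative only (they allow the mapr superuser to impersonate from any host and on behalf of members of any group); narrow them according to your site's security policy.

```xml
<property>
  <name>hadoop.proxyuser.mapr.hosts</name>
  <!-- Hosts from which the mapr superuser may impersonate; "*" allows all hosts -->
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.mapr.groups</name>
  <!-- Groups whose members the mapr superuser may impersonate; "*" allows all groups -->
  <value>*</value>
</property>
```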
The mapr user can impersonate any user, including root.
For all other users with access to the file system and HPE Ezmeral Data Fabric Database systems, the following conditions must be met for the user to impersonate another user:
- A valid servicewithimpersonation ticket must be present on the system for the user who intends to impersonate.
- The name of the user to impersonate must appear in the local operating system registry where the Data Fabric client is running, if server-side resolution of UID and GIDs is not enabled.
- The UID and GID of the user name under which the Data Fabric client is running must exactly match the UID and GID for that user name on the server.
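As a sketch, such a ticket is typically generated with the maprlogin utility. The user name, output path, and duration below are placeholders, and the exact options available depend on your Data Fabric release; check the maprlogin reference for your version.

```shell
# Generate an impersonation-capable service ticket for a service user
# (run with the necessary privileges; user, path, and duration are examples)
maprlogin generateticket -type servicewithimpersonation \
    -user appuser -out /tmp/impersonation_ticket -duration 30:0:0
```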
Component Requirements for Impersonation
Some Data Fabric ecosystem components have additional requirements to enable impersonation.
The following components must have settings that support impersonation in the configuration files indicated, on each node where the component resides:
- Drill: Edit the drill-env.sh file. See Configuring User Impersonation in the Apache Drill documentation.
- HBase: Edit the hbase-site.xml file. See Impersonation through the HBase REST Gateway.
- HiveServer2: Edit the hive-site.xml file. See Hive User Impersonation.
- Hue: Edit the hue.ini file.
- Spark: No special settings are required for Spark in MapReduce 2 (YARN) mode, because Spark automatically inherits the correct behavior from YARN. In standalone mode, Spark cannot perform impersonation and should not be used if security is important.
Application Development Requirements
You can set up impersonation in an application programmatically.
- C/C++: Use hb_connection_create_as_user(). See Creating C Apps - Binary Tables and Impersonation Example for more information.
- Java: Use UserGroupInformation.doAs(). See Class UserGroupInformation in the Hadoop documentation for more information.
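A minimal Java sketch of the doAs() pattern follows. It requires the Hadoop client libraries on the classpath; the user name "alice" and the file system check inside the PrivilegedExceptionAction are placeholders for your own impersonated work.

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ImpersonationExample {
    public static void main(String[] args) throws Exception {
        // Create a proxy UGI for "alice" on behalf of the currently
        // logged-in user, which must be permitted to impersonate her.
        UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
                "alice", UserGroupInformation.getCurrentUser());

        // All file system calls inside doAs() are performed as "alice".
        boolean exists = proxyUgi.doAs((PrivilegedExceptionAction<Boolean>) () -> {
            FileSystem fs = FileSystem.get(new Configuration());
            return fs.exists(new Path("/user/alice"));
        });
        System.out.println("/user/alice exists: " + exists);
    }
}
```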