Object Store (S3 Gateway) Overview
The object store functionality provided for container-based HPE Ezmeral Data Fabric is similar to the S3 Gateway feature included in the bare-metal HPE Ezmeral Data Fabric, which is described in the HPE Ezmeral Data Fabric documentation in S3 Gateway (link opens in a new browser tab/window).
Deployment
To deploy the object store, you must create a Data Fabric cluster as described in Creating a New Data Fabric Cluster, and then deploy that cluster with the object store applied.
When you create a Data Fabric cluster by using the HPE Ezmeral Runtime Enterprise GUI, a single ObjectStore Zone is created by default.
This example shows a single zone object-store deployment:
apiVersion: hcp.hpe.com/v1
kind: DataPlatform
metadata:
  name: dataplatform
spec:
  baseimagetag: "202103030809C"
  imageregistry: gcr.io/mapr-252711
  environmenttype: hcp
  simpledeploymentdisks:
    - /dev/sdc
    - /dev/sdd
  disableha: true
  core:
    zookeeper:
      failurecount: 0
    cldb:
      failurecount: 0
    webserver:
      count: 1
    admincli:
      count: 1
  gateways:
    objectstore:
      imageregistry: gcr.io/mapr-252711
      image: objectstore-2.0.0:202103030809C
      zones:
        - name: zone1
          count: 1
          size: 10Gi
          fspath: ""
          hostports:
            - hostport: 9000
              nodeport: 31900
      requestcpu: "1000m"
      limitcpu: "4000m"
      requestmemory: 2Gi
      limitmemory: 2Gi
      requestdisk: 20Gi
      limitdisk: 30Gi
      loglevel: INFO
The object-store deployment uses the following fields:
imageregistry – Registry where container images are stored.
image – Image name and tag.
zones – Object store zones.
name – Zone name.
count – Number of instances in the zone.
fspath – Mount folder path, formatted as /mapr/csi-volume/FOLDER_NAME. If this property is not specified, then the path is automatically set to /mapr/csi-volume/objectstore-ZONE_NAME-svc, using the service name for the zone.
hostports – Object store node and service port. This value overwrites the port value from the configmap. The default object store port, 9000, can conflict with Erlang RPC. In such cases, change the object store port to any free port.
nodeport – External port on all cluster nodes, used for forwarding requests to object store instances in this zone. If nodeport is not specified, then forwarding from an external port is not configured for this zone.
size – Size of the Data Fabric volume for the object store.
loglevel – Container logging level. This value overwrites the loglevel value from the configmap.
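For example, a second zone with its own host port and node port could be added to the zones list. The following sketch shows only the zones portion of the spec; the zone name, ports, and counts are illustrative values, not defaults:
      zones:
        - name: zone1
          count: 1
          size: 10Gi
          fspath: ""
          hostports:
            - hostport: 9000
              nodeport: 31900
        - name: zone2            # illustrative second zone
          count: 2               # two instances in this zone
          size: 10Gi
          fspath: ""
          hostports:
            - hostport: 9002     # any free port; avoids the Erlang RPC conflict on 9000 if necessary
              nodeport: 31902    # external port forwarded to instances in this zone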
Configuration
Configure the object store by preparing a configmap. You can edit the configmap using an editor, such as:
KUBE_EDITOR="nano" kubectl edit configmap objectstore-cm -n dataplatform
For example:
minio.json:
----
{
  "fsPath": "/mapr/csi-volume//objectstore-0",
  "deploymentMode": "S3",
  "oldAccessKey": "",
  "oldSecretKey": "",
  "port": "9000",
  "logPath": "/opt/mapr/objectstore-client/objectstore-client-2.0.0/logs/minio.log",
  "logLevel": 4
}
objectstore.sample.logrotate:
----
/opt/mapr/objectstore-client/objectstore-client-2.0.0/logs/minio.log
{
  rotate 7
  daily
  compress
  missingok
  sharedscripts
  postrotate
    /bin/kill -HUP `cat /opt/mapr/pid/objectstore.pid 2> /dev/null` 2> /dev/null || true
  endscript
}
The minio.json section of the configmap maps your configuration to the pod minio.json file. See S3 Gateway (link opens in a new browser tab/window). The configmap applies to all object store pods in all zones. Recreate all object store pods after modifying the configmap.
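One way to re-create the pods is to delete them so that Kubernetes restarts them with the updated configmap. The label selector below is a hypothetical example; replace it with the labels actually set on your object store pods, or delete the pods by name:
# Hypothetical selector: adjust to match the labels on your object store pods.
kubectl delete pod -n dataplatform -l app=objectstore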
The port and logLevel values in the configmap are overwritten by the hostport and loglevel values from the deployment.
Scaling
The number of object-store instances in a zone can be scaled, as described in Upgrading and Patching the Data Fabric Cluster. The required pods are automatically started or terminated as needed after scaling instances up or down or adding a new zone.
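For example, scaling zone1 from one to three instances only requires changing the zone's count in the DataPlatform spec (the values shown are illustrative):
      zones:
        - name: zone1
          count: 3        # previously 1; three object store pods will run in this zone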
HA Support
Objectstore 2.0.0 supports HA mode. Kubernetes provides HA inside zones, and all pods inside one zone are therefore mounted to the same folder. A separate service is created for each zone, and the service FQDN provides access to the instances in that zone. To check the services:
kubectl get svc -n dataplatform
Service FQDNs are formatted as follows:
objectstore-ZONE_NAME-svc.dataplatform.svc.YOUR_CLUSTER_DNS_PREFIX
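For example, assuming the default cluster DNS suffix cluster.local and the zone and port from the deployment example above, an S3 client such as the MinIO client can be pointed at the zone service. The alias name and credentials below are placeholders:
# Placeholder alias and credentials; replace with the access key and secret key for your object store.
mc alias set dfzone1 http://objectstore-zone1-svc.dataplatform.svc.cluster.local:9000 ACCESS_KEY SECRET_KEY
mc ls dfzone1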
If you use the MinIO client to make any administrative change to the object store configuration (such as adding new users, groups, policies, or notifications), then you must manually restart all instances (re-create the pods) to avoid inconsistent behavior across instances.
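For example, adding a user with the MinIO client is such an administrative change; the alias, user name, and password below are placeholders:
# Placeholder values; after the change, re-create all object store pods as described above.
mc admin user add dfzone1 newuser newuserpassword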
Limitations
- All object store zones and pods use one configmap.
- The fspath property overrides the configmap value. If the fspath property is not set, then the default value for the zone overrides the configmap value.
- Zone services provide only HA. They do not provide distributed mode or load balancing.
- The maximum number of object store instances is the same as the number of nodes in the cluster, because each object store instance requires an open port to listen for connections.