How to configure NFS on ECS

In this post we will look at how to configure and use NFS on ECS with multi-protocol access.

Introduction

ECS enables object buckets to be configured for access as NFS filesystems using NFSv3. ECS also supports multi-protocol access, so that files written using NFS can be accessed using the S3, OpenStack Swift and EMC Atmos object protocols. Similarly, objects written using the S3 and OpenStack Swift object protocols can be made available through NFS.

As with the bucket itself, objects and directories created using the object protocols can be accessed by Unix users and group members once the object users and groups have been mapped to Unix IDs.

 

Create a bucket for NFS

  • At the ECS Portal, select Manage > Buckets > New Bucket
  • Enter a name for the bucket – nfsbucket
  • Specify the namespace that the bucket will belong to – ns1
  • Select a Replication Group – RG1
  • Enter the name of the bucket owner – objuser1
  • Do not enable CAS (!)
  • Enable any other bucket features that you require:

    • Quota
    • Server-side Encryption
    • Metadata Search
    • Access During Outage
    • Compliance (read only via NFS)
    • Bucket Retention


  • Select Enabled for the File System

    • must be set at the time of bucket creation and cannot be changed afterwards (!)
    • Specify Default Bucket Group – objgroup
      • These settings control the primary UNIX group and permissions assigned to file-system enabled objects when they are created via S3
      • These settings can be left unset if the bucket will not be accessed via S3
      • Default Bucket Group has to be specified either at bucket creation (recommended) or later from the NFS client.
      • Set the default permissions for files and directories created in the bucket using the object protocol.

        • These settings are used to apply a Unix group and group permissions to objects created via the S3 object protocol.
        • The S3 protocol has no concept of groups, so there is no way to set group permissions in S3 and map them to Unix permissions. Hence, this is a one-off opportunity for a file or directory created using the S3 protocol to be assigned to the specified default group with the permissions specified here. (The bucket creation itself can also be scripted – see the sketch below.)
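
The portal steps above can also be scripted against the ECS S3 endpoint. Below is a minimal sketch using s3curl; the x-emc-file-system-access-enabled header name is taken from the ECS S3 extensions, so verify the exact header and its signing requirements against the ECS Data Access Guide for your release – stock s3curl may need to be adjusted to sign x-emc-* headers.

# Sketch only: create a file-system-enabled bucket through the S3 API.
# Assumes an "ecsid" entry in ~/.s3curl with objuser1's credentials and an
# s3curl that signs x-emc-* headers (see the ECS Data Access Guide).
./s3curl.pl --id=ecsid --createBucket -- \
  -H "x-emc-file-system-access-enabled:true" \
  http://10.76.246.146:9020/nfsbucket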


 

Add an NFS export

  • Select the File > Exports > New Export page
  • From the namespace field, select the namespace that owns the bucket that you want to export – ns1
  • From the bucket field, select the bucket – nfsbucket
  • In the Export Path field, specify the path
    • ECS automatically generates the export path based on the namespace and bucket.
    • You only need to enter a name if you are exporting a directory that already exists within the bucket. So if you enter /ns1/nfsbucket/dir1, you should ensure that dir1 exists. If it does not, mounting the export will fail.
    • I don’t specify a directory here, mounting the whole bucket instead – /ns1/nfsbucket/


 

  • Add the hosts that you want to be able to access the export
    • At least one host is required (!)
    • Choose whether the NFS share allows Read or Read/Write access. Write access allows NFS users to create storage objects in the ECS bucket.
    • NFSv3 allows safe asynchronous writes, which improves performance compared with the synchronous writes that were often a problem in earlier NFS implementations – async
    • You must choose an Authentication option – sys
    • Specify whether subdirectories of the export path will be allowed as mount points.
      If you have exported /ns1/nfsbucket, you will also be able to mount subdirectories, such as /ns1/nfsbucket/dir1, provided the directory exists (similar to the alldirs option on some NFS servers). A rough /etc/exports equivalent of these settings is sketched after this list.
    • Keep AnonUser, AnonGroup and RootSquash empty
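
For readers used to a traditional Linux NFS server, the choices above map roughly onto a classic exports entry. The line below is purely illustrative – you never edit /etc/exports on ECS, the portal generates the export for you:

# Illustrative only: an approximate /etc/exports equivalent of the portal settings
/ns1/nfsbucket   10.76.246.143(rw,async,sec=sys)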


 

Add a user or group mapping

  • Create the user and group
[root@linuxhost ~]# useradd objuser1
[root@linuxhost ~]# passwd objuser1
Changing password for user objuser1.
New password:
Retype new password:

[root@linuxhost ~]# groupadd objgroup
[root@linuxhost ~]# usermod -a -G objgroup objuser1
[root@linuxhost ~]# id objuser1
uid=1000(objuser1) gid=1000(objuser1) groups=1000(objuser1),1001(objgroup)
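
Since the portal mapping below uses numeric IDs, it can be convenient to pin the UID and GID explicitly instead of relying on the distribution defaults. A minimal sketch, using the 1000/1001 values shown above:

# Optional: create the group and user with fixed IDs so they match the
# UID/GID values entered later in the ECS user/group mapping
groupadd -g 1001 objgroup
useradd -u 1000 -G objgroup objuser1
passwd objuser1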

Note: ECS stores the owner and group for the bucket, and the owner and group for files and directories within the bucket, as an ECS object user name and custom group name, respectively. These names must be mapped to Unix IDs (UID or GID) so that NFS users can be given access with the appropriate privileges. The mapping enables ECS to treat an ECS object user and an NFS user as the same user with two sets of credentials: one to access ECS using NFS, and one to access ECS using the object protocols. Because the accounts are mapped, files written by an NFS user will be accessible as objects by the mapped object user, and objects written by the object user will be accessible as files by the NFS user.

  • At the Manage > File page, select the User Group Mapping view
  • In the User/Group field, enter the ECS user name or group name that you want to map – objuser1
  • Specify the namespace that the ECS object user or custom group, to which you are going to map the Unix user or group, belongs – ns1
  • In the UID field, enter the user ID or group ID that you want the ECS user name to map to – 1000
  • Select the Type of mapping: User or Group


  • A default group has been assigned to the bucket. For the default group to show as the associated Linux group when the export is mounted, a mapping between its name and a Linux GID must have been created.
  • Check the mapping

 

Mounting an NFS export

  • Check the NFS export
[root@linuxhost ~]# showmount -e 10.76.246.146
Export list for 10.76.246.146:
/ns1/nfsbucket 10.76.246.143

Note: Check if needed on your Linux distribution – install nfs-utils (yum install nfs-utils)
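
If showmount hangs or errors out, it can help to verify that the RPC services on the ECS node are reachable before attempting the mount. A quick check, assuming the ECS nodes expose the portmapper on the standard port:

# Optional sanity check: list the RPC programs (portmapper, mountd, nfs)
# registered on the ECS data node
rpcinfo -p 10.76.246.146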

  • Create a directory on which to mount the export. The directory should belong to the same owner as the bucket
[root@linuxhost ~]# su - objuser1
[objuser1@linuxhost ~]$ mkdir ~objuser1/mnt
[objuser1@linuxhost ~]$ exit
logout
  • Only root can mount an NFS export
  • As the root user, mount the export in the directory mount point that you created
[root@linuxhost ~]# mount -t nfs -o "vers=3,nolock" 10.76.246.146:/ns1/nfsbucket /home/objuser1/mnt/

[root@linuxhost ~]# mount

10.76.246.146:/ns1/nfsbucket on /home/objuser1/mnt type nfs (rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.76.246.146,mountvers=3,mountport=2049,mountproto=udp,local_lock=all,addr=10.76.246.146)
Note: in case of an error like the one below, check whether the nfs-utils package is installed

mount: wrong fs type, bad option, bad superblock on 10.76.246.146:/ns1/nfsbucket,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
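
To mount the export automatically at boot, the same options can go into /etc/fstab. A sketch – adjust the address and mount point to your environment:

# /etc/fstab entry equivalent to the manual mount above
10.76.246.146:/ns1/nfsbucket  /home/objuser1/mnt  nfs  vers=3,nolock  0 0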

 

Check the NFS and object access

  • Create files and a directory
  • Files’ ownership is objuser1:objgroup 
[root@linuxhost ~]# su - objuser1
Last login: Fri Jun 24 13:03:04 MSK 2016 on pts/0

[objuser1@linuxhost ~]$ cd mnt/
[objuser1@linuxhost mnt]$ touch file1
[objuser1@linuxhost mnt]$ mkdir dir1
[objuser1@linuxhost mnt]$ touch dir1/file2

[objuser1@linuxhost mnt]$ ls -laR
.:
total 1
drwxrwxr-x. 3 objuser1 objgroup 96 Jun 24 13:17 .
drwxrwxr-x. 3 objuser1 objgroup 96 Jun 24 13:17 dir1
-rw-rwxr--. 1 objuser1 objgroup  0 Jun 24 13:17 file1
./dir1:
total 1
drwxrwxr-x. 3 objuser1 objgroup 96 Jun 24 13:17 .
-rw-rwxr--. 1 objuser1 objgroup  0 Jun 24 13:17 file2
  • Check the objects via Cyberduck (a command-line alternative is sketched below)
  • Upload and check a new object
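
If you prefer a command line client over Cyberduck for this check, the AWS CLI pointed at the ECS S3 endpoint works as well. A sketch – it assumes objuser1's object access key and secret key are configured in a CLI profile (here called ecs, just an example name), and older ECS releases may additionally require Signature Version 2:

# List the objects created over NFS through the S3 API ("ecs" is an example
# profile holding objuser1's object credentials)
aws s3 ls s3://nfsbucket --recursive --endpoint-url http://10.76.246.146:9020 --profile ecs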


  • Check the new file in Linux
  • The user and group are mapped correctly
[objuser1@linuxhost mnt]$ ls -l
total 2919
drwxrwxr-x. 3 objuser1 objgroup      96 Jun 24 13:17 dir1
-rwxrwx---. 1 objuser1 objgroup 2988012 Jun 24 13:24 docu70100 Administrator's Guide.pdf
-rw-rwxr--. 1 objuser1 objgroup       0 Jun 24 13:17 file1
  • Delete the file that was created via NFS, using S3 access
  • The object disappeared from the NFS share
[objuser1@linuxhost mnt]$ ls
dir1 docu70100 Administrator's Guide.pdf
  • Delete the object that was created via S3, using NFS
[objuser1@linuxhost mnt]$ rm docu70100\ Administrator\'s\ Guide.pdf
  • The object disappeared from the S3 bucket

 

Check NFS directories

Note: The S3 protocol does not make provision for the creation of directories. To enable multi-protocol operation, ECS support for the S3 protocol formalizes the use of “/” and creates “directory” objects for all intermediate paths in an object name. So an object called “/dir1/file2” will result in the creation of a file object called “file2” and a directory object for “dir1”. This directory object is not exposed to the customer via S3 (!), and is only maintained to provide multi-protocol access and compatibility with filesystem-based APIs. This means that when the bucket is viewed as an NFS/HDFS filesystem, ECS can display files within a directory structure.

 

  • Create a new NFS directory
[objuser1@linuxhost mnt]$ mkdir dir2
[objuser1@linuxhost mnt]$ ls -l
total 1
drwxrwxr-x. 3 objuser1 objgroup 96 Jun 24 13:41 dir1
drwxrwxr-x. 3 objuser1 objgroup 96 Jun 24 14:06 dir2
  • The directory is not visible via S3
  • Create a file inside the new directory
[objuser1@linuxhost mnt]$ touch dir2/file3
[objuser1@linuxhost mnt]$ ls dir2/
file3
  • Check the file via S3
  • The file and directory are visible now

Amazon documentation explains why the directory is visible now.

http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html

“Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.”

So when I placed the new object file3 into the dir2 “directory”, the object was actually just stored with the key name “dir2/file3”, where “dir2/” is the prefix.

 

  • We can confirm this using the s3curl S3 CLI utility
  • The object “dir2” doesn’t exist
  • A single object “dir2/file3” is created
linuxhost:s3curl $ ./s3curl.pl --id=ecsid -- -v -s http://10.76.246.146:9020/nfsbucket |xmllint --format -
*   Trying 10.76.246.146...
* Connected to 10.76.246.146 (10.76.246.146) port 9020 (#0)
> GET /nfsbucket HTTP/1.1
> Host: 10.76.246.146:9020
> User-Agent: curl/7.43.0
> Accept: */*
> Date: Fri, 24 Jun 2016 11:21:53 +0000
> Authorization: AWS objuser1:vbIXMRLDYSMS5mkEiQioGpOsa10=
>
< HTTP/1.1 200 OK
< Date: Fri, 24 Jun 2016 11:21:28 GMT

<Contents>
<Key>dir2/file3</Key>
<LastModified>2016-06-24T11:08:14.967Z</LastModified>
<ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag>
<Size>0</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>objuser1</ID>
<DisplayName>objuser1</DisplayName>
</Owner>
</Contents>
</ListBucketResult>
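
To list only the keys under the dir2/ prefix, the same request can be filtered with the standard prefix query parameter. A sketch using the same s3curl setup as above:

# List only the keys that start with "dir2/" - this is how S3 clients build
# the illusion of folders on top of the flat key namespace
./s3curl.pl --id=ecsid -- -s "http://10.76.246.146:9020/nfsbucket?prefix=dir2/" | xmllint --format -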
