What I need to achieve, and which is almost working apart from the EFS/S3FS sharing:
SFTP pod - used by some microservices that process content and deliver the processed content back. SFTP users are chrooted into tenant-specific paths (e.g. tenant-1, tenant-2); these chrooted paths are backed by separate EFS mount points via the EFS provisioner.
Tenant pods - each mounts an S3 bucket at /var/s3fs via s3fs. In addition, the Kubernetes deployment mounts the per-tenant EFS shares from the SFTP side (as Kubernetes PVCs) at /efs/nfs.
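A sketch of what a tenant pod looks like in my setup (all names, images, and s3fs options below are placeholders, not my real values; /var/s3fs is mounted by s3fs from inside the container at startup, not via a Kubernetes volume):

```yaml
# Sketch of a tenant pod: the EFS PVC from the SFTP side is mounted at
# /efs/nfs by Kubernetes, while /var/s3fs is a FUSE mount created by s3fs
# when the container starts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant-1-processor            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tenant-1-processor
  template:
    metadata:
      labels:
        app: tenant-1-processor
    spec:
      containers:
        - name: s3fs
          image: my-s3fs-image:latest # placeholder image that ships s3fs
          securityContext:
            privileged: true          # FUSE mounting needs this (or /dev/fuse access)
          command: ["/bin/sh", "-c"]
          args:
            # mount the bucket, then keep the container alive
            - s3fs my-tenant-bucket /var/s3fs -o iam_role=auto && exec sleep infinity
          volumeMounts:
            - name: efs-share
              mountPath: /efs/nfs
      volumes:
        - name: efs-share
          persistentVolumeClaim:
            claimName: tenant-1-efs-pvc  # PVC bound to the tenant's EFS mount point
```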
This means that when I upload a file through SFTP, I see it in the s3fs pods under /efs/nfs, and I have a cron job that decrypts/encrypts the content and pushes it to the S3 path.
More specifically, the flow is: customer content from the /efs/nfs path is expected to be encrypted and placed in S3 for retention, so the requirement is to put encrypted content into /var/s3fs and fetch some back; there are inbox/outbox folders for the two directions. All of that works perfectly.
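If it helps, the encryption step looks roughly like this as a Kubernetes CronJob (a sketch with placeholder names, schedule, and script path; in my case the job runs where s3fs is mounted, so the /var/s3fs path is visible to it):

```yaml
# Sketch of the retention cron job: read files from the EFS share,
# encrypt them, and write them into the s3fs-backed path so they land in S3.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: tenant-1-encrypt                  # placeholder
spec:
  schedule: "*/15 * * * *"                # placeholder schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: encrypt
              image: my-crypto-image:latest    # placeholder
              command: ["/bin/sh", "-c"]
              args:
                # placeholder script: encrypt from /efs/nfs, push to the inbox
                - /opt/scripts/encrypt-and-push.sh /efs/nfs /var/s3fs/inbox
              volumeMounts:
                - name: efs-share
                  mountPath: /efs/nfs
          volumes:
            - name: efs-share
              persistentVolumeClaim:
                claimName: tenant-1-efs-pvc    # placeholder
```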
The problem comes from the fact that customers need to use the same SFTP endpoint to access the /var/s3fs content. S3 has inbox/outbox folders, and I see them both in S3 and in the /var/s3fs mounts inside the s3fs pods. I tried a Kubernetes volume/PVC mount onto /var/s3fs (which is already mounted by s3fs), sharing it over EFS with a new PVC, and I also mounted the same EFS location into the user's chroot in the SFTP pod. The result is that I don't see the /var/s3fs content from the SFTP pod, nor vice versa.
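The failing attempt looks roughly like this (pod-spec fragments only, with placeholder claim and path names): the same new EFS PVC is mounted over /var/s3fs in the tenant pod and into the user's chroot in the SFTP pod:

```yaml
# Tenant pod side: a new EFS-backed PVC mounted onto /var/s3fs,
# the same path that s3fs mounts from inside the container.
volumeMounts:
  - name: shared-s3fs
    mountPath: /var/s3fs
volumes:
  - name: shared-s3fs
    persistentVolumeClaim:
      claimName: tenant-1-s3fs-share-pvc   # placeholder
---
# SFTP pod side: the same PVC mounted into the user's chroot.
volumeMounts:
  - name: shared-s3fs
    mountPath: /home/tenant-1/s3fs         # placeholder chroot path
volumes:
  - name: shared-s3fs
    persistentVolumeClaim:
      claimName: tenant-1-s3fs-share-pvc   # placeholder
```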
When I exec into the s3fs pods and run df -h, I only see the s3fs mount; the EFS mount doesn't show up, although mount and cat /etc/mtab do show both mounts. I guess it's something about libraries, permissions, or the fact that I'm mounting over an already-mounted path; please advise. Also please advise whether there is any other reasonable solution for this Kubernetes use case. I tried bind-mounting to another path and mounting that instead, but got the same result.