
AWS just announced Amazon S3 Files, and my first reaction was simple: where does this fit among the shared filesystem options on AWS?
I think that is the right question.
At first glance, S3 Files sounds similar to tools people already know: Mountpoint for Amazon S3, goofys, s3fs-fuse, and even Amazon EFS.
But they are not solving the exact same problem.
If you’re building modern workloads on AWS, whether that means apps, data pipelines, agent workflows, ML jobs, or file-heavy automation, this is the split I would use: S3 Files when S3 is the source of truth and files are the interface, EFS when the file system itself is the product, and Mountpoint when you mostly want fast reads from S3.
That is the short version.
According to the AWS announcement and the S3 Files docs, Amazon S3 Files makes an S3 bucket accessible as a shared file system with file-system semantics, low-latency access for active data, and synchronization between file operations and S3 objects.
The important part is this:
your data still lives in S3, and the synchronization docs are explicit that the linked S3 bucket remains the long-term store and the source of truth in conflict scenarios.
That is what makes it different from EFS, and also different from most of the older “mount S3 like a file system” tools.
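To make that "S3 stays the source of truth" idea concrete, here is a minimal sketch of the relationship, assuming an S3 Files file system mounted at a hypothetical path with a direct path-to-key mapping; the mount path, bucket name, and key layout are made up for illustration, and the timing of when a file write becomes visible as an object is whatever the synchronization docs promise, not something this snippet controls.

```python
from pathlib import Path
import boto3

# Hypothetical names: an S3 Files file system mounted at /mnt/s3files,
# linked to a bucket called "my-team-bucket".
MOUNT = Path("/mnt/s3files")
BUCKET = "my-team-bucket"

# Write through the file interface.
report = MOUNT / "reports" / "summary.txt"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("quarterly rollup\n")

# Once the documented synchronization has happened, the same bytes are an
# ordinary S3 object, readable by anything that already speaks the S3 API.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket=BUCKET, Key="reports/summary.txt")
print(obj["Body"].read().decode())
```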
A lot of modern systems still work through files, even when the storage backend is object storage.
Think about what agents and ML workflows actually do: they read inputs from paths, write intermediate artifacts, and expect ordinary file operations, even when the long-term home for that data is object storage.
That is why S3 Files is interesting.
AI systems are part of this story, but not the whole story.
Before this, teams usually had to do one of these: copy data back and forth between S3 and local disk or EFS, run a client-side FUSE mount like s3fs-fuse or goofys, or rewrite their tools to talk to the S3 API directly.
S3 Files is AWS trying to remove that mess.
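For reference, this is roughly the glue a lot of teams write today: pull a prefix down with boto3, run a file-based step against a temp directory, push the results back. The bucket name, prefixes, and the `process` callable are hypothetical; it is a sketch of the pattern, not anyone's production code.

```python
import tempfile
from pathlib import Path
import boto3

s3 = boto3.client("s3")
BUCKET = "my-data-bucket"       # hypothetical bucket
PREFIX = "pipeline/input/"      # hypothetical prefix

def run_with_local_copy(process):
    """Download a prefix, run a file-based step, upload the results."""
    with tempfile.TemporaryDirectory() as tmp:
        workdir = Path(tmp)
        # 1. Copy everything down.
        pages = s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX)
        for page in pages:
            for item in page.get("Contents", []):
                if item["Key"].endswith("/"):
                    continue  # skip folder placeholder objects
                s3.download_file(BUCKET, item["Key"], str(workdir / Path(item["Key"]).name))
        # 2. Run the tool against plain local files.
        outputs = process(workdir)
        # 3. Copy the results back up.
        for out in outputs:
            s3.upload_file(str(out), BUCKET, f"pipeline/output/{out.name}")
```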
This is where a lot of the discussion goes sideways.
The real comparison is Amazon S3 Files against Mountpoint for S3, goofys, and s3fs-fuse, and then, separately, against Amazon EFS.
Why separately?
Because Amazon EFS is an actual shared NFS storage product. The others are better thought of as ways to access S3 through a file interface, but they do that in very different ways.
Here is the cleanest way to think about it.
**Amazon S3 Files.** A managed shared file system over S3. AWS owns the file-system layer. You get mount targets, access points, and NFS semantics, plus documented synchronization behavior.

**Mountpoint for Amazon S3.** A high-throughput S3 file client. AWS says it is ideal for large-scale read-heavy applications, creating new files, and working with large S3 datasets through file operations.

**goofys.** A POSIX-ish FUSE mount for S3. The project literally describes itself as “performance first and POSIX second”.

**s3fs-fuse.** A more filesystem-like FUSE mount for S3. It supports a larger subset of POSIX, but it still inherits the awkward reality that S3 is not a real local filesystem.

**Amazon EFS.** A real shared file system. With Amazon EFS, the file system is the product. With S3 Files, S3 stays the source of truth.
This is the key architectural difference.
With S3 Files, AWS owns the shared file-system abstraction over S3.
With Mountpoint, goofys, and s3fs-fuse, the client is translating file operations into S3 API calls. That makes them much closer to mount or access approaches than to a standalone shared filesystem product.
That changes things like consistency and caching behavior, how multiple clients see each other's writes, and who owns the failure modes when file operations and S3 objects drift apart.
This is why S3 Files feels more like a platform capability, while the FUSE tools feel more like adapters.
If I had to pick the most relevant comparison for most AWS builders, it would be S3 Files vs Mountpoint for S3.
According to the AWS docs, the official Mountpoint repository, and its semantics guide, Mountpoint is optimized for high-throughput, read-heavy access to large objects, plus sequentially writing new files.
And AWS is also pretty clear about what it is not for on general purpose buckets: appending to or modifying existing objects, renames, symbolic links, file locking, and other POSIX behaviors that do not map cleanly onto S3.
There are some narrower exceptions in the semantics doc, especially for S3 Express One Zone behaviors like append or rename in specific modes, but for normal general-purpose-bucket guidance, the limitations above are the ones that matter most.
That makes Mountpoint very compelling for read-heavy data pipelines, analytics over large S3 datasets, ML training jobs that stream their inputs, and workloads that mostly create new files rather than edit existing ones.
But if your workload needs a shared writable file system abstraction over S3, S3 Files is the more interesting option.
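As a small illustration, here is the kind of access pattern that plays to Mountpoint's strengths, assuming the bucket has already been mounted (for example with `mount-s3 my-training-bucket /mnt/training`); the bucket name, paths, and filenames are made up.

```python
from pathlib import Path

# Assumes the bucket is already mounted at /mnt/training via Mountpoint.
DATASET = Path("/mnt/training/datasets/events-2024.jsonl")

count = 0
# Large, sequential, read-heavy access is the pattern Mountpoint is tuned for.
with DATASET.open("r") as f:
    for _line in f:
        count += 1

# Writing a brand-new file is fine; appending to or renaming existing files
# on a general purpose bucket is not what Mountpoint is for.
out = Path("/mnt/training/reports/line-count.txt")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(f"{count}\n")
```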
I actually appreciate how honest goofys is.
It calls itself a “filey system” instead of a filesystem.
Its own README is upfront about the POSIX behavior it skips, for example that fsync is ignored and files are only flushed to S3 on close.
So if you are trying to make an agent or tool read from S3 with minimal fuss, goofys can still make sense.
But I would not confuse that with a managed shared storage layer.
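If you do write through a goofys mount, the practical consequence of that README note is that durability comes from closing the file, not from calling fsync. A tiny sketch, with a hypothetical mount path:

```python
import os

# Hypothetical goofys mount of "my-bucket" at /mnt/goofys.
path = "/mnt/goofys/exports/snapshot.json"

f = open(path, "w")
f.write('{"status": "ok"}\n')
os.fsync(f.fileno())   # on a goofys mount this is ignored; it does not force an upload
f.close()              # the object is actually written to S3 when the file is closed
```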
s3fs-fuse supports more filesystem-like behavior than goofys, including symlinks, file modes and ownership, and extended attributes.
That sounds attractive until you read the limitations section.
Its own docs call out that random writes and appends require rewriting the entire object, that metadata operations like directory listings can be slow, that renames are not atomic, and that there is no coordination between multiple clients mounting the same bucket.
So yes, it is more flexible than goofys in some ways, but it is still a client-side translation layer over object storage.
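One practical way to live with the rewrite-on-append behavior is to avoid appends entirely and write small new files instead. A sketch, assuming a hypothetical s3fs-fuse mount at /mnt/s3fs:

```python
from datetime import datetime, timezone
from pathlib import Path

LOG_DIR = Path("/mnt/s3fs/logs")   # hypothetical s3fs-fuse mount path

def record(event: str) -> None:
    # Appending to one big log file over s3fs means rewriting the whole
    # object on every append, so write a small new file per event instead.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    (LOG_DIR / f"event-{stamp}.log").write_text(event + "\n")
```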
This is the part that feels genuinely useful.
This became very real with the latest Amazon Bedrock AgentCore release notes.
AgentCore Runtime now supports attaching both Amazon S3 Files and Amazon EFS directly to agent runtimes, as shown in the runtime file system configuration docs.
That means you can now design an agent runtime around this split: S3 Files for data whose long-term home is S3, like datasets and generated artifacts, and EFS for the shared mutable runtime state that needs to behave like a traditional file system.
That is a real design decision, not just an AWS launch-day example.
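Here is a rough sketch of what that split could look like inside an agent step. The mount paths are hypothetical placeholders, not anything AgentCore prescribes; the real paths come from however you configure the runtime's file systems.

```python
from pathlib import Path

# Hypothetical mount paths for illustration only.
EFS_STATE = Path("/mnt/efs/session")           # mutable working state on EFS
S3F_ARTIFACTS = Path("/mnt/s3files/outputs")   # artifacts whose home is S3, via S3 Files

def agent_step(session_id: str, draft: str, final: str | None) -> None:
    workdir = EFS_STATE / session_id
    workdir.mkdir(parents=True, exist_ok=True)
    # Scratch and in-progress state stays on the shared EFS mount.
    (workdir / "draft.md").write_text(draft)
    if final is not None:
        # Finished artifacts land on the S3 Files mount, so they end up
        # in the linked bucket without a separate upload step.
        (S3F_ARTIFACTS / f"{session_id}.md").write_text(final)
```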
If your training data, feature data, or generated artifacts already belong in S3, S3 Files can remove a lot of annoying copy steps.
Instead of downloading from S3 to local or EFS storage, running the job, and uploading results back, you get a mounted path where the job reads and writes ordinary files and the outputs already live in S3.
It is not always the right answer, but it is much cleaner than what a lot of teams do today.
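As a sketch, assuming the dataset and checkpoint prefixes are exposed through a hypothetical S3 Files mount at /mnt/s3files, the pipeline step becomes ordinary file code with no copy phase:

```python
from pathlib import Path

DATA = Path("/mnt/s3files/datasets/train")       # hypothetical S3 Files mount
CHECKPOINTS = Path("/mnt/s3files/checkpoints")   # same file system, different prefix

def train() -> None:
    CHECKPOINTS.mkdir(parents=True, exist_ok=True)
    for shard in sorted(DATA.glob("*.csv")):
        rows = shard.read_text().splitlines()
        # ...actual training logic would go here...
        # Checkpoints are just files; S3 remains their long-term home.
        (CHECKPOINTS / f"{shard.stem}.ckpt").write_text(f"processed {len(rows)} rows\n")
```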
If one step writes artifacts, another step reads them, and the long-term home should still be S3, S3 Files starts to look attractive.
That said, I would still be careful with anything that depends on strong shared mutation patterns or assumptions that feel like a traditional local filesystem.
Even with the new AI angle, Amazon EFS is still the better answer when the file system itself is the product you need.
That includes shared mutable application and runtime storage, content directories that many servers edit in place, and lift-and-shift workloads that assume real POSIX behavior over the same files.
If your first sentence is “I need a real shared file system,” then EFS is still probably the safer answer.
If your first sentence is “My data belongs in S3, but my tools want paths and files,” then S3 Files becomes much more interesting.
For WordPress, I would still choose EFS for the runtime.
Why? Because WordPress treats the filesystem itself as the primary mutable shared store: plugins, themes, and uploads are written in place by multiple web servers, which is exactly the strong shared mutation pattern that belongs on a real shared filesystem rather than on a file interface over S3.
So WordPress is actually a good sanity check. It is a classic shared app-storage workload, and that makes it a good example of where S3 Files is not the main answer, even if S3 Files sounds shiny and new.
If you are building on AWS today, this is the table I would use.
| Option | Best when | Not ideal when |
|---|---|---|
| Mountpoint for S3 | Your workload is mostly read-heavy, your files are large objects, and you want high-throughput access to S3 through a file interface. | You need rich shared filesystem behavior, broad POSIX semantics, or collaborative writable workflows. |
| Amazon S3 Files | Your source of truth should remain in S3, but your agent or pipeline wants normal file operations and a managed shared filesystem abstraction instead of a client-side mount hack. | You need the filesystem itself to behave like the primary mutable shared storage layer. |
| goofys | You want a lightweight client-side mount and performance matters more than POSIX completeness. | You need fuller filesystem semantics or want fewer behavioral surprises. |
| s3fs-fuse | You specifically want a FUSE mount with more POSIX-like behavior than goofys and you are solving a compatibility problem. | You want a clean shared storage design with strong multi-client semantics. |
| Amazon EFS | You need a real shared filesystem first, your tools expect stronger filesystem behavior, or the workload is shared mutable app/runtime storage. | Your data should primarily live in S3 and you mostly want files as an interface over object storage. |
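If the table is easier to read as code, here is a toy decision function that encodes the same split; the inputs are deliberately simplified, and real decisions have more dimensions than three booleans.

```python
def pick_storage(*, s3_is_source_of_truth: bool,
                 needs_shared_writable_fs: bool,
                 read_heavy_large_objects: bool) -> str:
    """Toy encoding of the table above, not a complete decision framework."""
    if not s3_is_source_of_truth:
        return "Amazon EFS"            # the file system itself is the product
    if needs_shared_writable_fs:
        return "Amazon S3 Files"       # S3 is the truth, files are the interface
    if read_heavy_large_objects:
        return "Mountpoint for S3"     # fast reads through a file interface
    return "goofys or s3fs-fuse"       # lightweight client-side mounts
```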
If I had to compress this into one short rule:
Use S3 Files when S3 is the truth and files are the interface. Use EFS when the file system itself is the truth. Use Mountpoint when you mostly want fast reads from S3.
That is the split I would use in practice.
I do not think Amazon S3 Files kills EFS.
I also do not think it kills Mountpoint, goofys, or s3fs-fuse.
What it does kill is a very specific kind of ugly glue: the sync scripts, scheduled copy jobs, and ad hoc client-side mounts whose only job was to shuttle data between S3 and something that looked like a filesystem.
For modern AWS builders, that is a real improvement.
The question is no longer “Can I mount S3 like files?” We already had several answers to that.
The better question is “Do I want a client-side mount, a managed shared file system over S3, or a real standalone shared file system?”
Once you answer that, the choice gets a lot clearer.