
fix(az): correctly remove nested file structures #449

Open · wants to merge 3 commits into base: master
Conversation

@M0dEx M0dEx commented Jul 2, 2024

As described in #448, the current AzureBlobPath.rmtree() can sometimes fail and must be called again to remove all of the file/directory structure.

This PR fixes this issue by first deleting all blobs, then deleting all folders, from the most nested to the least nested ones, and finally removing the root folder. This guarantees that the operation will always succeed.
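The ordering the PR describes can be sketched in plain Python. This is a minimal illustration with hypothetical names (`rmtree_order`, plain string paths), not the actual cloudpathlib implementation:

```python
# Sketch: delete file blobs first, then folders from deepest to shallowest,
# so that no folder is removed while it still has children.

def rmtree_order(entries: list[tuple[str, bool]]) -> list[str]:
    """Given (path, is_dir) pairs, return the order in which to delete them:
    all files first, then directories sorted so children precede parents."""
    files = [path for path, is_dir in entries if not is_dir]
    # Reverse-lexicographic sort puts "root/a/b" before "root/a" before
    # "root", i.e. the most nested folders are deleted first.
    folders = sorted((path for path, is_dir in entries if is_dir), reverse=True)
    return files + folders


order = rmtree_order(
    [("root", True), ("root/a", True), ("root/a/file.txt", False)]
)
# files come first, then folders deepest-first:
# ["root/a/file.txt", "root/a", "root"]
```

The reverse-lexicographic sort works here because a folder path is always a strict prefix of its children's paths, so the children always sort after (and are therefore deleted before) the parent.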

Closes #448.

@pjbull (Member) commented Jul 2, 2024

@M0dEx generally this looks good on initial review. Is there a test case you can add that fails against the live server without this fix and succeeds with it?

@M0dEx (Author) commented Jul 3, 2024

> @M0dEx generally this looks good on initial review. Is there a test case you can add that fails against the live server without this fix and succeeds with it?

Since the root cause might lie in specific container-level settings (retention, deletion policies, etc.), it has been difficult for us to replicate the issue reliably, especially using Azurite, which only provides a subset of the Azure Blob API features.

I might be asking you to take a lot on faith here, but this change has fixed the behaviour of rmtree() in the mentioned use cases for us.


codecov bot commented Jul 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 93.4%. Comparing base (08b018b) to head (fa1025c).

Additional details and impacted files
@@           Coverage Diff            @@
##           master    #449     +/-   ##
========================================
- Coverage    93.7%   93.4%   -0.4%     
========================================
  Files          23      23             
  Lines        1654    1658      +4     
========================================
- Hits         1551    1549      -2     
- Misses        103     109      +6     
Files Coverage Δ
cloudpathlib/azure/azblobclient.py 94.9% <100.0%> (+0.1%) ⬆️

... and 3 files with indirect coverage changes

@pjbull (Member) left a comment

I think the network overhead is not worth it for most Azure configs. See inline note.


# folders need to be deleted from the deepest to the shallowest
folders = sorted(
    (blob.blob for blob, is_dir in blobs if is_dir and blob.exists()), reverse=True
)
@pjbull (Member) commented:

I'm a little worried about perf since .exists adds a network call. This means that for large file trees (broad or deep) we potentially add a lot of calls for "fake" folders that don't actually exist on storage but _list_dir returns to act like a file system.

I believe that only explicitly created folders on blob storage (or maybe with certain parameters set) would still stick around without specific removal. To create these, you need to be using hierarchical namespaces/a Data Lake Gen2 Storage Account.

I think the fix may instead be to use an Azure SDK API that lists all explicit blobs (files or folders), rather than _list_dir, when determining what to remove. I'm not sure if list_blobs does that under accounts with hierarchical namespaces, so I think we'd need to test that first to see if it works for your use cases.

Do you think you could dig in and see if those settings/accounts can repro the issue for you?
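One way to probe this is to compare what the service actually lists against the directory prefixes that are merely implied by blob names. The helpers below are hypothetical (not cloudpathlib code); the `list_blobs` call shown in the comment is from azure-storage-blob, and whether it returns explicit folder entries under hierarchical namespaces is exactly the open question:

```python
# Sketch: split a blob listing into real entries vs "fake" folders that
# exist only as path prefixes of listed blobs.
#
# With azure-storage-blob, the names would come from something like:
#   names = [b.name for b in container_client.list_blobs(name_starts_with=prefix)]
# Under a flat-namespace account this typically returns only file blobs;
# under a hierarchical-namespace (Data Lake Gen2) account, explicit folder
# entries may also appear -- which is what needs verifying.

def implied_prefixes(blob_names: list[str]) -> set[str]:
    """All directory prefixes implied by the listed blob names."""
    prefixes: set[str] = set()
    for name in blob_names:
        parts = name.split("/")[:-1]
        for i in range(1, len(parts) + 1):
            prefixes.add("/".join(parts[:i]))
    return prefixes


def fake_folders(listed_names: list[str]) -> set[str]:
    """Prefixes that never appear as explicit entries in the listing."""
    return implied_prefixes(listed_names) - set(listed_names)
```

If `fake_folders` comes back empty on a hierarchical-namespace account, the listing already contains every folder explicitly and the extra `.exists()` round-trips would be unnecessary.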

@M0dEx (Author) replied:

> I'm a little worried about perf since .exists adds a network call. This means that for large file trees (broad or deep) we potentially add a lot of calls for "fake" folders that don't actually exist on storage but _list_dir returns to act like a file system.

I don't think the call to exists here is strictly necessary, but I might be misremembering.

> I believe that only explicitly created folders on blob storage (or maybe with certain parameters set) would still stick around without specific removal. To create these, you need to be using hierarchical namespaces/a Data Lake Gen2 Storage Account.

Hmm, on further investigation, it seems the "problematic" accounts are using hierarchical namespaces. The directories seem to be "sticky" even without being explicitly created, which is a bit surprising.

@pjbull (Member) replied:

That's very helpful to know, thanks. I'd love to support that feature of blob storage—I'll set one up on our test account and see how it works. We may need more fixes than this to make cloudpathlib play nice with hierarchical namespace enabled storage accounts.

Labels: None yet
Projects: None yet
Development

Successfully merging this pull request may close these issues.

AzureBlobPath.rmtree() sometimes fails and needs to be called again
2 participants