patrickramos committed on
Commit 0a1d273 · verified · 1 Parent(s): 266d653

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .venv/lib/python3.13/site-packages/charset_normalizer-3.4.2.dist-info/licenses/LICENSE +21 -0
  2. .venv/lib/python3.13/site-packages/filelock/__pycache__/__init__.cpython-313.pyc +0 -0
  3. .venv/lib/python3.13/site-packages/filelock/__pycache__/_api.cpython-313.pyc +0 -0
  4. .venv/lib/python3.13/site-packages/filelock/__pycache__/_error.cpython-313.pyc +0 -0
  5. .venv/lib/python3.13/site-packages/filelock/__pycache__/_unix.cpython-313.pyc +0 -0
  6. .venv/lib/python3.13/site-packages/filelock/__pycache__/_util.cpython-313.pyc +0 -0
  7. .venv/lib/python3.13/site-packages/filelock/__pycache__/version.cpython-313.pyc +0 -0
  8. .venv/lib/python3.13/site-packages/fsspec-2025.7.0.dist-info/licenses/LICENSE +29 -0
  9. .venv/lib/python3.13/site-packages/fsspec/implementations/__init__.py +0 -0
  10. .venv/lib/python3.13/site-packages/fsspec/implementations/arrow.py +304 -0
  11. .venv/lib/python3.13/site-packages/fsspec/implementations/asyn_wrapper.py +114 -0
  12. .venv/lib/python3.13/site-packages/fsspec/implementations/cache_mapper.py +75 -0
  13. .venv/lib/python3.13/site-packages/fsspec/implementations/cache_metadata.py +233 -0
  14. .venv/lib/python3.13/site-packages/fsspec/implementations/cached.py +997 -0
  15. .venv/lib/python3.13/site-packages/fsspec/implementations/dask.py +152 -0
  16. .venv/lib/python3.13/site-packages/fsspec/implementations/data.py +58 -0
  17. .venv/lib/python3.13/site-packages/fsspec/implementations/dbfs.py +468 -0
  18. .venv/lib/python3.13/site-packages/fsspec/implementations/dirfs.py +388 -0
  19. .venv/lib/python3.13/site-packages/fsspec/implementations/ftp.py +387 -0
  20. .venv/lib/python3.13/site-packages/fsspec/implementations/gist.py +232 -0
  21. .venv/lib/python3.13/site-packages/fsspec/implementations/git.py +114 -0
  22. .venv/lib/python3.13/site-packages/fsspec/implementations/github.py +333 -0
  23. .venv/lib/python3.13/site-packages/fsspec/implementations/http.py +890 -0
  24. .venv/lib/python3.13/site-packages/fsspec/implementations/http_sync.py +931 -0
  25. .venv/lib/python3.13/site-packages/fsspec/implementations/jupyter.py +124 -0
  26. .venv/lib/python3.13/site-packages/fsspec/implementations/libarchive.py +213 -0
  27. .venv/lib/python3.13/site-packages/fsspec/implementations/local.py +514 -0
  28. .venv/lib/python3.13/site-packages/fsspec/implementations/memory.py +311 -0
  29. .venv/lib/python3.13/site-packages/fsspec/implementations/reference.py +1305 -0
  30. .venv/lib/python3.13/site-packages/fsspec/implementations/sftp.py +180 -0
  31. .venv/lib/python3.13/site-packages/fsspec/implementations/smb.py +416 -0
  32. .venv/lib/python3.13/site-packages/fsspec/implementations/tar.py +124 -0
  33. .venv/lib/python3.13/site-packages/fsspec/implementations/webhdfs.py +485 -0
  34. .venv/lib/python3.13/site-packages/fsspec/implementations/zip.py +177 -0
  35. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/__init__.py +289 -0
  36. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/common.py +175 -0
  37. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/copy.py +557 -0
  38. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/get.py +587 -0
  39. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/mv.py +57 -0
  40. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/open.py +11 -0
  41. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/pipe.py +11 -0
  42. .venv/lib/python3.13/site-packages/fsspec/tests/abstract/put.py +591 -0
  43. .venv/lib/python3.13/site-packages/hf_xet-1.1.5.dist-info/licenses/LICENSE +201 -0
  44. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/__init__.cpython-313.pyc +0 -0
  45. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_commit_api.cpython-313.pyc +0 -0
  46. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_commit_scheduler.cpython-313.pyc +0 -0
  47. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_inference_endpoints.cpython-313.pyc +0 -0
  48. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_local_folder.cpython-313.pyc +0 -0
  49. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_login.cpython-313.pyc +0 -0
  50. .venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_snapshot_download.cpython-313.pyc +0 -0
.venv/lib/python3.13/site-packages/charset_normalizer-3.4.2.dist-info/licenses/LICENSE ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 TAHRI Ahmed R.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
.venv/lib/python3.13/site-packages/filelock/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (1.6 kB)
.venv/lib/python3.13/site-packages/filelock/__pycache__/_api.cpython-313.pyc ADDED
Binary file (16.5 kB)
.venv/lib/python3.13/site-packages/filelock/__pycache__/_error.cpython-313.pyc ADDED
Binary file (1.84 kB)
.venv/lib/python3.13/site-packages/filelock/__pycache__/_unix.cpython-313.pyc ADDED
Binary file (3.64 kB)
.venv/lib/python3.13/site-packages/filelock/__pycache__/_util.cpython-313.pyc ADDED
Binary file (2.02 kB)
.venv/lib/python3.13/site-packages/filelock/__pycache__/version.cpython-313.pyc ADDED
Binary file (671 Bytes)
.venv/lib/python3.13/site-packages/fsspec-2025.7.0.dist-info/licenses/LICENSE ADDED
@@ -0,0 +1,29 @@
+BSD 3-Clause License
+
+Copyright (c) 2018, Martin Durant
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+  list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+  this list of conditions and the following disclaimer in the documentation
+  and/or other materials provided with the distribution.
+
+* Neither the name of the copyright holder nor the names of its
+  contributors may be used to endorse or promote products derived from
+  this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.venv/lib/python3.13/site-packages/fsspec/implementations/__init__.py ADDED
File without changes
.venv/lib/python3.13/site-packages/fsspec/implementations/arrow.py ADDED
@@ -0,0 +1,304 @@
+import errno
+import io
+import os
+import secrets
+import shutil
+from contextlib import suppress
+from functools import cached_property, wraps
+from urllib.parse import parse_qs
+
+from fsspec.spec import AbstractFileSystem
+from fsspec.utils import (
+    get_package_version_without_import,
+    infer_storage_options,
+    mirror_from,
+    tokenize,
+)
+
+
+def wrap_exceptions(func):
+    @wraps(func)
+    def wrapper(*args, **kwargs):
+        try:
+            return func(*args, **kwargs)
+        except OSError as exception:
+            if not exception.args:
+                raise
+
+            message, *args = exception.args
+            if isinstance(message, str) and "does not exist" in message:
+                raise FileNotFoundError(errno.ENOENT, message) from exception
+            else:
+                raise
+
+    return wrapper
+
+
+PYARROW_VERSION = None
+
+
+class ArrowFSWrapper(AbstractFileSystem):
+    """FSSpec-compatible wrapper of pyarrow.fs.FileSystem.
+
+    Parameters
+    ----------
+    fs : pyarrow.fs.FileSystem
+
+    """
+
+    root_marker = "/"
+
+    def __init__(self, fs, **kwargs):
+        global PYARROW_VERSION
+        PYARROW_VERSION = get_package_version_without_import("pyarrow")
+        self.fs = fs
+        super().__init__(**kwargs)
+
+    @property
+    def protocol(self):
+        return self.fs.type_name
+
+    @cached_property
+    def fsid(self):
+        return "hdfs_" + tokenize(self.fs.host, self.fs.port)
+
+    @classmethod
+    def _strip_protocol(cls, path):
+        ops = infer_storage_options(path)
+        path = ops["path"]
+        if path.startswith("//"):
+            # special case for "hdfs://path" (without the triple slash)
+            path = path[1:]
+        return path
+
+    def ls(self, path, detail=False, **kwargs):
+        path = self._strip_protocol(path)
+        from pyarrow.fs import FileSelector
+
+        entries = [
+            self._make_entry(entry)
+            for entry in self.fs.get_file_info(FileSelector(path))
+        ]
+        if detail:
+            return entries
+        else:
+            return [entry["name"] for entry in entries]
+
+    def info(self, path, **kwargs):
+        path = self._strip_protocol(path)
+        [info] = self.fs.get_file_info([path])
+        return self._make_entry(info)
+
+    def exists(self, path):
+        path = self._strip_protocol(path)
+        try:
+            self.info(path)
+        except FileNotFoundError:
+            return False
+        else:
+            return True
+
+    def _make_entry(self, info):
+        from pyarrow.fs import FileType
+
+        if info.type is FileType.Directory:
+            kind = "directory"
+        elif info.type is FileType.File:
+            kind = "file"
+        elif info.type is FileType.NotFound:
+            raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), info.path)
+        else:
+            kind = "other"
+
+        return {
+            "name": info.path,
+            "size": info.size,
+            "type": kind,
+            "mtime": info.mtime,
+        }
+
+    @wrap_exceptions
+    def cp_file(self, path1, path2, **kwargs):
+        path1 = self._strip_protocol(path1).rstrip("/")
+        path2 = self._strip_protocol(path2).rstrip("/")
+
+        with self._open(path1, "rb") as lstream:
+            tmp_fname = f"{path2}.tmp.{secrets.token_hex(6)}"
+            try:
+                with self.open(tmp_fname, "wb") as rstream:
+                    shutil.copyfileobj(lstream, rstream)
+                self.fs.move(tmp_fname, path2)
+            except BaseException:
+                with suppress(FileNotFoundError):
+                    self.fs.delete_file(tmp_fname)
+                raise
+
+    @wrap_exceptions
+    def mv(self, path1, path2, **kwargs):
+        path1 = self._strip_protocol(path1).rstrip("/")
+        path2 = self._strip_protocol(path2).rstrip("/")
+        self.fs.move(path1, path2)
+
+    @wrap_exceptions
+    def rm_file(self, path):
+        path = self._strip_protocol(path)
+        self.fs.delete_file(path)
+
+    @wrap_exceptions
+    def rm(self, path, recursive=False, maxdepth=None):
+        path = self._strip_protocol(path).rstrip("/")
+        if self.isdir(path):
+            if recursive:
+                self.fs.delete_dir(path)
+            else:
+                raise ValueError("Can't delete directories without recursive=False")
+        else:
+            self.fs.delete_file(path)
+
+    @wrap_exceptions
+    def _open(self, path, mode="rb", block_size=None, seekable=True, **kwargs):
+        if mode == "rb":
+            if seekable:
+                method = self.fs.open_input_file
+            else:
+                method = self.fs.open_input_stream
+        elif mode == "wb":
+            method = self.fs.open_output_stream
+        elif mode == "ab":
+            method = self.fs.open_append_stream
+        else:
+            raise ValueError(f"unsupported mode for Arrow filesystem: {mode!r}")
+
+        _kwargs = {}
+        if mode != "rb" or not seekable:
+            if int(PYARROW_VERSION.split(".")[0]) >= 4:
+                # disable compression auto-detection
+                _kwargs["compression"] = None
+        stream = method(path, **_kwargs)
+
+        return ArrowFile(self, stream, path, mode, block_size, **kwargs)
+
+    @wrap_exceptions
+    def mkdir(self, path, create_parents=True, **kwargs):
+        path = self._strip_protocol(path)
+        if create_parents:
+            self.makedirs(path, exist_ok=True)
+        else:
+            self.fs.create_dir(path, recursive=False)
+
+    @wrap_exceptions
+    def makedirs(self, path, exist_ok=False):
+        path = self._strip_protocol(path)
+        self.fs.create_dir(path, recursive=True)
+
+    @wrap_exceptions
+    def rmdir(self, path):
+        path = self._strip_protocol(path)
+        self.fs.delete_dir(path)
+
+    @wrap_exceptions
+    def modified(self, path):
+        path = self._strip_protocol(path)
+        return self.fs.get_file_info(path).mtime
+
+    def cat_file(self, path, start=None, end=None, **kwargs):
+        kwargs["seekable"] = start not in [None, 0]
+        return super().cat_file(path, start=None, end=None, **kwargs)
+
+    def get_file(self, rpath, lpath, **kwargs):
+        kwargs["seekable"] = False
+        super().get_file(rpath, lpath, **kwargs)
+
+
+@mirror_from(
+    "stream",
+    [
+        "read",
+        "seek",
+        "tell",
+        "write",
+        "readable",
+        "writable",
+        "close",
+        "size",
+        "seekable",
+    ],
+)
+class ArrowFile(io.IOBase):
+    def __init__(self, fs, stream, path, mode, block_size=None, **kwargs):
+        self.path = path
+        self.mode = mode
+
+        self.fs = fs
+        self.stream = stream
+
+        self.blocksize = self.block_size = block_size
+        self.kwargs = kwargs
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, *args):
+        return self.close()
+
+
+class HadoopFileSystem(ArrowFSWrapper):
+    """A wrapper on top of the pyarrow.fs.HadoopFileSystem
+    to connect it's interface with fsspec"""
+
+    protocol = "hdfs"
+
+    def __init__(
+        self,
+        host="default",
+        port=0,
+        user=None,
+        kerb_ticket=None,
+        replication=3,
+        extra_conf=None,
+        **kwargs,
+    ):
+        """
+
+        Parameters
+        ----------
+        host: str
+            Hostname, IP or "default" to try to read from Hadoop config
+        port: int
+            Port to connect on, or default from Hadoop config if 0
+        user: str or None
+            If given, connect as this username
+        kerb_ticket: str or None
+            If given, use this ticket for authentication
+        replication: int
+            set replication factor of file for write operations. default value is 3.
+        extra_conf: None or dict
+            Passed on to HadoopFileSystem
+        """
+        from pyarrow.fs import HadoopFileSystem
+
+        fs = HadoopFileSystem(
+            host=host,
+            port=port,
+            user=user,
+            kerb_ticket=kerb_ticket,
+            replication=replication,
+            extra_conf=extra_conf,
+        )
+        super().__init__(fs=fs, **kwargs)
+
+    @staticmethod
+    def _get_kwargs_from_urls(path):
+        ops = infer_storage_options(path)
+        out = {}
+        if ops.get("host", None):
+            out["host"] = ops["host"]
+        if ops.get("username", None):
+            out["user"] = ops["username"]
+        if ops.get("port", None):
+            out["port"] = ops["port"]
+        if ops.get("url_query", None):
+            queries = parse_qs(ops["url_query"])
+            if queries.get("replication", None):
+                out["replication"] = int(queries["replication"][0])
+        return out
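For context, ArrowFSWrapper above exposes any pyarrow.fs.FileSystem through the standard fsspec API. A minimal usage sketch, not part of the commit, with pyarrow's LocalFileSystem standing in for HDFS purely for illustration:

from pyarrow.fs import LocalFileSystem
from fsspec.implementations.arrow import ArrowFSWrapper

# Wrap a pyarrow filesystem; LocalFileSystem is illustrative only.
fs = ArrowFSWrapper(LocalFileSystem())

fs.makedirs("/tmp/arrow-demo", exist_ok=True)
with fs.open("/tmp/arrow-demo/hello.txt", "wb") as f:
    f.write(b"hello")
print(fs.ls("/tmp/arrow-demo"))                      # ['/tmp/arrow-demo/hello.txt']
print(fs.info("/tmp/arrow-demo/hello.txt")["type"])  # 'file'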
.venv/lib/python3.13/site-packages/fsspec/implementations/asyn_wrapper.py ADDED
@@ -0,0 +1,114 @@
+import asyncio
+import functools
+import inspect
+
+import fsspec
+from fsspec.asyn import AsyncFileSystem, running_async
+
+
+def async_wrapper(func, obj=None):
+    """
+    Wraps a synchronous function to make it awaitable.
+
+    Parameters
+    ----------
+    func : callable
+        The synchronous function to wrap.
+    obj : object, optional
+        The instance to bind the function to, if applicable.
+
+    Returns
+    -------
+    coroutine
+        An awaitable version of the function.
+    """
+
+    @functools.wraps(func)
+    async def wrapper(*args, **kwargs):
+        return await asyncio.to_thread(func, *args, **kwargs)
+
+    return wrapper
+
+
+class AsyncFileSystemWrapper(AsyncFileSystem):
+    """
+    A wrapper class to convert a synchronous filesystem into an asynchronous one.
+
+    This class takes an existing synchronous filesystem implementation and wraps all
+    its methods to provide an asynchronous interface.
+
+    Parameters
+    ----------
+    sync_fs : AbstractFileSystem
+        The synchronous filesystem instance to wrap.
+    """
+
+    protocol = "asyncwrapper", "async_wrapper"
+    cachable = False
+
+    def __init__(
+        self,
+        fs=None,
+        asynchronous=None,
+        target_protocol=None,
+        target_options=None,
+        **kwargs,
+    ):
+        if asynchronous is None:
+            asynchronous = running_async()
+        super().__init__(asynchronous=asynchronous, **kwargs)
+        if fs is not None:
+            self.sync_fs = fs
+        else:
+            self.sync_fs = fsspec.filesystem(target_protocol, **target_options)
+        self.protocol = self.sync_fs.protocol
+        self._wrap_all_sync_methods()
+
+    @property
+    def fsid(self):
+        return f"async_{self.sync_fs.fsid}"
+
+    def _wrap_all_sync_methods(self):
+        """
+        Wrap all synchronous methods of the underlying filesystem with asynchronous versions.
+        """
+        excluded_methods = {"open"}
+        for method_name in dir(self.sync_fs):
+            if method_name.startswith("_") or method_name in excluded_methods:
+                continue
+
+            attr = inspect.getattr_static(self.sync_fs, method_name)
+            if isinstance(attr, property):
+                continue
+
+            method = getattr(self.sync_fs, method_name)
+            if callable(method) and not inspect.iscoroutinefunction(method):
+                async_method = async_wrapper(method, obj=self)
+                setattr(self, f"_{method_name}", async_method)
+
+    @classmethod
+    def wrap_class(cls, sync_fs_class):
+        """
+        Create a new class that can be used to instantiate an AsyncFileSystemWrapper
+        with lazy instantiation of the underlying synchronous filesystem.
+
+        Parameters
+        ----------
+        sync_fs_class : type
+            The class of the synchronous filesystem to wrap.
+
+        Returns
+        -------
+        type
+            A new class that wraps the provided synchronous filesystem class.
+        """
+
+        class GeneratedAsyncFileSystemWrapper(cls):
+            def __init__(self, *args, **kwargs):
+                sync_fs = sync_fs_class(*args, **kwargs)
+                super().__init__(sync_fs)
+
+        GeneratedAsyncFileSystemWrapper.__name__ = (
+            f"Async{sync_fs_class.__name__}Wrapper"
+        )
+        return GeneratedAsyncFileSystemWrapper
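AsyncFileSystemWrapper turns every public synchronous method of the wrapped filesystem into a coroutine (dispatched to a worker thread via asyncio.to_thread) and registers it under the underscored name the async machinery expects. A small sketch of the idea, using fsspec's in-memory filesystem:

import asyncio

import fsspec
from fsspec.implementations.asyn_wrapper import AsyncFileSystemWrapper


async def main():
    sync_fs = fsspec.filesystem("memory")
    sync_fs.pipe_file("/demo.txt", b"hello")

    afs = AsyncFileSystemWrapper(sync_fs, asynchronous=True)
    # The synchronous cat_file was re-registered as the coroutine _cat_file.
    print(await afs._cat_file("/demo.txt"))  # b'hello'


asyncio.run(main())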
.venv/lib/python3.13/site-packages/fsspec/implementations/cache_mapper.py ADDED
@@ -0,0 +1,75 @@
+from __future__ import annotations
+
+import abc
+import hashlib
+
+from fsspec.implementations.local import make_path_posix
+
+
+class AbstractCacheMapper(abc.ABC):
+    """Abstract super-class for mappers from remote URLs to local cached
+    basenames.
+    """
+
+    @abc.abstractmethod
+    def __call__(self, path: str) -> str: ...
+
+    def __eq__(self, other: object) -> bool:
+        # Identity only depends on class. When derived classes have attributes
+        # they will need to be included.
+        return isinstance(other, type(self))
+
+    def __hash__(self) -> int:
+        # Identity only depends on class. When derived classes have attributes
+        # they will need to be included.
+        return hash(type(self))
+
+
+class BasenameCacheMapper(AbstractCacheMapper):
+    """Cache mapper that uses the basename of the remote URL and a fixed number
+    of directory levels above this.
+
+    The default is zero directory levels, meaning different paths with the same
+    basename will have the same cached basename.
+    """
+
+    def __init__(self, directory_levels: int = 0):
+        if directory_levels < 0:
+            raise ValueError(
+                "BasenameCacheMapper requires zero or positive directory_levels"
+            )
+        self.directory_levels = directory_levels
+
+        # Separator for directories when encoded as strings.
+        self._separator = "_@_"
+
+    def __call__(self, path: str) -> str:
+        path = make_path_posix(path)
+        prefix, *bits = path.rsplit("/", self.directory_levels + 1)
+        if bits:
+            return self._separator.join(bits)
+        else:
+            return prefix  # No separator found, simple filename
+
+    def __eq__(self, other: object) -> bool:
+        return super().__eq__(other) and self.directory_levels == other.directory_levels
+
+    def __hash__(self) -> int:
+        return super().__hash__() ^ hash(self.directory_levels)
+
+
+class HashCacheMapper(AbstractCacheMapper):
+    """Cache mapper that uses a hash of the remote URL."""
+
+    def __call__(self, path: str) -> str:
+        return hashlib.sha256(path.encode()).hexdigest()
+
+
+def create_cache_mapper(same_names: bool) -> AbstractCacheMapper:
+    """Factory method to create cache mapper for backward compatibility with
+    ``CachingFileSystem`` constructor using ``same_names`` kwarg.
+    """
+    if same_names:
+        return BasenameCacheMapper()
+    else:
+        return HashCacheMapper()
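The two concrete mappers trade readability for collision-resistance: BasenameCacheMapper keeps the trailing path components (joined with "_@_"), while HashCacheMapper hashes the whole URL. A quick sketch of the resulting cache basenames, with a made-up URL:

from fsspec.implementations.cache_mapper import BasenameCacheMapper, HashCacheMapper

url = "https://example.com/data/2024/file.csv"  # hypothetical remote path
print(BasenameCacheMapper()(url))                    # file.csv
print(BasenameCacheMapper(directory_levels=1)(url))  # 2024_@_file.csv
print(HashCacheMapper()(url))                        # 64-char sha256 hex digest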
.venv/lib/python3.13/site-packages/fsspec/implementations/cache_metadata.py ADDED
@@ -0,0 +1,233 @@
+from __future__ import annotations
+
+import os
+import pickle
+import time
+from typing import TYPE_CHECKING
+
+from fsspec.utils import atomic_write
+
+try:
+    import ujson as json
+except ImportError:
+    if not TYPE_CHECKING:
+        import json
+
+if TYPE_CHECKING:
+    from collections.abc import Iterator
+    from typing import Any, Literal
+
+    from typing_extensions import TypeAlias
+
+    from .cached import CachingFileSystem
+
+    Detail: TypeAlias = dict[str, Any]
+
+
+class CacheMetadata:
+    """Cache metadata.
+
+    All reading and writing of cache metadata is performed by this class,
+    accessing the cached files and blocks is not.
+
+    Metadata is stored in a single file per storage directory in JSON format.
+    For backward compatibility, also reads metadata stored in pickle format
+    which is converted to JSON when next saved.
+    """
+
+    def __init__(self, storage: list[str]):
+        """
+
+        Parameters
+        ----------
+        storage: list[str]
+            Directories containing cached files, must be at least one. Metadata
+            is stored in the last of these directories by convention.
+        """
+        if not storage:
+            raise ValueError("CacheMetadata expects at least one storage location")
+
+        self._storage = storage
+        self.cached_files: list[Detail] = [{}]
+
+        # Private attribute to force saving of metadata in pickle format rather than
+        # JSON for use in tests to confirm can read both pickle and JSON formats.
+        self._force_save_pickle = False
+
+    def _load(self, fn: str) -> Detail:
+        """Low-level function to load metadata from specific file"""
+        try:
+            with open(fn, "r") as f:
+                loaded = json.load(f)
+        except ValueError:
+            with open(fn, "rb") as f:
+                loaded = pickle.load(f)
+        for c in loaded.values():
+            if isinstance(c.get("blocks"), list):
+                c["blocks"] = set(c["blocks"])
+        return loaded
+
+    def _save(self, metadata_to_save: Detail, fn: str) -> None:
+        """Low-level function to save metadata to specific file"""
+        if self._force_save_pickle:
+            with atomic_write(fn) as f:
+                pickle.dump(metadata_to_save, f)
+        else:
+            with atomic_write(fn, mode="w") as f:
+                json.dump(metadata_to_save, f)
+
+    def _scan_locations(
+        self, writable_only: bool = False
+    ) -> Iterator[tuple[str, str, bool]]:
+        """Yield locations (filenames) where metadata is stored, and whether
+        writable or not.
+
+        Parameters
+        ----------
+        writable: bool
+            Set to True to only yield writable locations.
+
+        Returns
+        -------
+        Yields (str, str, bool)
+        """
+        n = len(self._storage)
+        for i, storage in enumerate(self._storage):
+            writable = i == n - 1
+            if writable_only and not writable:
+                continue
+            yield os.path.join(storage, "cache"), storage, writable
+
+    def check_file(
+        self, path: str, cfs: CachingFileSystem | None
+    ) -> Literal[False] | tuple[Detail, str]:
+        """If path is in cache return its details, otherwise return ``False``.
+
+        If the optional CachingFileSystem is specified then it is used to
+        perform extra checks to reject possible matches, such as if they are
+        too old.
+        """
+        for (fn, base, _), cache in zip(self._scan_locations(), self.cached_files):
+            if path not in cache:
+                continue
+            detail = cache[path].copy()
+
+            if cfs is not None:
+                if cfs.check_files and detail["uid"] != cfs.fs.ukey(path):
+                    # Wrong file as determined by hash of file properties
+                    continue
+                if cfs.expiry and time.time() - detail["time"] > cfs.expiry:
+                    # Cached file has expired
+                    continue
+
+            fn = os.path.join(base, detail["fn"])
+            if os.path.exists(fn):
+                return detail, fn
+        return False
+
+    def clear_expired(self, expiry_time: int) -> tuple[list[str], bool]:
+        """Remove expired metadata from the cache.
+
+        Returns names of files corresponding to expired metadata and a boolean
+        flag indicating whether the writable cache is empty. Caller is
+        responsible for deleting the expired files.
+        """
+        expired_files = []
+        for path, detail in self.cached_files[-1].copy().items():
+            if time.time() - detail["time"] > expiry_time:
+                fn = detail.get("fn", "")
+                if not fn:
+                    raise RuntimeError(
+                        f"Cache metadata does not contain 'fn' for {path}"
+                    )
+                fn = os.path.join(self._storage[-1], fn)
+                expired_files.append(fn)
+                self.cached_files[-1].pop(path)
+
+        if self.cached_files[-1]:
+            cache_path = os.path.join(self._storage[-1], "cache")
+            self._save(self.cached_files[-1], cache_path)
+
+        writable_cache_empty = not self.cached_files[-1]
+        return expired_files, writable_cache_empty
+
+    def load(self) -> None:
+        """Load all metadata from disk and store in ``self.cached_files``"""
+        cached_files = []
+        for fn, _, _ in self._scan_locations():
+            if os.path.exists(fn):
+                # TODO: consolidate blocks here
+                cached_files.append(self._load(fn))
+            else:
+                cached_files.append({})
+        self.cached_files = cached_files or [{}]
+
+    def on_close_cached_file(self, f: Any, path: str) -> None:
+        """Perform side-effect actions on closing a cached file.
+
+        The actual closing of the file is the responsibility of the caller.
+        """
+        # File must be writeble, so in self.cached_files[-1]
+        c = self.cached_files[-1][path]
+        if c["blocks"] is not True and len(c["blocks"]) * f.blocksize >= f.size:
+            c["blocks"] = True
+
+    def pop_file(self, path: str) -> str | None:
+        """Remove metadata of cached file.
+
+        If path is in the cache, return the filename of the cached file,
+        otherwise return ``None``. Caller is responsible for deleting the
+        cached file.
+        """
+        details = self.check_file(path, None)
+        if not details:
+            return None
+        _, fn = details
+        if fn.startswith(self._storage[-1]):
+            self.cached_files[-1].pop(path)
+            self.save()
+        else:
+            raise PermissionError(
+                "Can only delete cached file in last, writable cache location"
+            )
+        return fn
+
+    def save(self) -> None:
+        """Save metadata to disk"""
+        for (fn, _, writable), cache in zip(self._scan_locations(), self.cached_files):
+            if not writable:
+                continue
+
+            if os.path.exists(fn):
+                cached_files = self._load(fn)
+                for k, c in cached_files.items():
+                    if k in cache:
+                        if c["blocks"] is True or cache[k]["blocks"] is True:
+                            c["blocks"] = True
+                        else:
+                            # self.cached_files[*][*]["blocks"] must continue to
+                            # point to the same set object so that updates
+                            # performed by MMapCache are propagated back to
+                            # self.cached_files.
+                            blocks = cache[k]["blocks"]
+                            blocks.update(c["blocks"])
+                            c["blocks"] = blocks
+                        c["time"] = max(c["time"], cache[k]["time"])
+                        c["uid"] = cache[k]["uid"]
+
+                # Files can be added to cache after it was written once
+                for k, c in cache.items():
+                    if k not in cached_files:
+                        cached_files[k] = c
+            else:
+                cached_files = cache
+            cache = {k: v.copy() for k, v in cached_files.items()}
+            for c in cache.values():
+                if isinstance(c["blocks"], set):
+                    c["blocks"] = list(c["blocks"])
+            self._save(cache, fn)
+        self.cached_files[-1] = cached_files
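CacheMetadata persists one JSON (or legacy pickle) file named "cache" in each storage directory, writing only to the last. A round-trip sketch under stated assumptions; the remote path, the "abc123" basename, and the uid are made-up placeholders:

import os
import tempfile

from fsspec.implementations.cache_metadata import CacheMetadata

storage = [tempfile.mkdtemp()]
meta = CacheMetadata(storage)
meta.update_file(
    "s3://bucket/key",  # hypothetical remote path
    {"original": "s3://bucket/key", "fn": "abc123", "blocks": True,
     "time": 0.0, "uid": "uid-0"},
)
meta.save()  # writes <storage>/cache as JSON

# check_file() also requires the cached file itself to exist on disk.
open(os.path.join(storage[-1], "abc123"), "wb").close()

meta2 = CacheMetadata(storage)
meta2.load()
detail, fn = meta2.check_file("s3://bucket/key", None)
print(fn)  # .../abc123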
.venv/lib/python3.13/site-packages/fsspec/implementations/cached.py ADDED
@@ -0,0 +1,997 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from __future__ import annotations
2
+
3
+ import inspect
4
+ import logging
5
+ import os
6
+ import tempfile
7
+ import time
8
+ import weakref
9
+ from shutil import rmtree
10
+ from typing import TYPE_CHECKING, Any, Callable, ClassVar
11
+
12
+ from fsspec import AbstractFileSystem, filesystem
13
+ from fsspec.callbacks import DEFAULT_CALLBACK
14
+ from fsspec.compression import compr
15
+ from fsspec.core import BaseCache, MMapCache
16
+ from fsspec.exceptions import BlocksizeMismatchError
17
+ from fsspec.implementations.cache_mapper import create_cache_mapper
18
+ from fsspec.implementations.cache_metadata import CacheMetadata
19
+ from fsspec.implementations.local import LocalFileSystem
20
+ from fsspec.spec import AbstractBufferedFile
21
+ from fsspec.transaction import Transaction
22
+ from fsspec.utils import infer_compression
23
+
24
+ if TYPE_CHECKING:
25
+ from fsspec.implementations.cache_mapper import AbstractCacheMapper
26
+
27
+ logger = logging.getLogger("fsspec.cached")
28
+
29
+
30
+ class WriteCachedTransaction(Transaction):
31
+ def complete(self, commit=True):
32
+ rpaths = [f.path for f in self.files]
33
+ lpaths = [f.fn for f in self.files]
34
+ if commit:
35
+ self.fs.put(lpaths, rpaths)
36
+ self.files.clear()
37
+ self.fs._intrans = False
38
+ self.fs._transaction = None
39
+ self.fs = None # break cycle
40
+
41
+
42
+ class CachingFileSystem(AbstractFileSystem):
43
+ """Locally caching filesystem, layer over any other FS
44
+
45
+ This class implements chunk-wise local storage of remote files, for quick
46
+ access after the initial download. The files are stored in a given
47
+ directory with hashes of URLs for the filenames. If no directory is given,
48
+ a temporary one is used, which should be cleaned up by the OS after the
49
+ process ends. The files themselves are sparse (as implemented in
50
+ :class:`~fsspec.caching.MMapCache`), so only the data which is accessed
51
+ takes up space.
52
+
53
+ Restrictions:
54
+
55
+ - the block-size must be the same for each access of a given file, unless
56
+ all blocks of the file have already been read
57
+ - caching can only be applied to file-systems which produce files
58
+ derived from fsspec.spec.AbstractBufferedFile ; LocalFileSystem is also
59
+ allowed, for testing
60
+ """
61
+
62
+ protocol: ClassVar[str | tuple[str, ...]] = ("blockcache", "cached")
63
+
64
+ def __init__(
65
+ self,
66
+ target_protocol=None,
67
+ cache_storage="TMP",
68
+ cache_check=10,
69
+ check_files=False,
70
+ expiry_time=604800,
71
+ target_options=None,
72
+ fs=None,
73
+ same_names: bool | None = None,
74
+ compression=None,
75
+ cache_mapper: AbstractCacheMapper | None = None,
76
+ **kwargs,
77
+ ):
78
+ """
79
+
80
+ Parameters
81
+ ----------
82
+ target_protocol: str (optional)
83
+ Target filesystem protocol. Provide either this or ``fs``.
84
+ cache_storage: str or list(str)
85
+ Location to store files. If "TMP", this is a temporary directory,
86
+ and will be cleaned up by the OS when this process ends (or later).
87
+ If a list, each location will be tried in the order given, but
88
+ only the last will be considered writable.
89
+ cache_check: int
90
+ Number of seconds between reload of cache metadata
91
+ check_files: bool
92
+ Whether to explicitly see if the UID of the remote file matches
93
+ the stored one before using. Warning: some file systems such as
94
+ HTTP cannot reliably give a unique hash of the contents of some
95
+ path, so be sure to set this option to False.
96
+ expiry_time: int
97
+ The time in seconds after which a local copy is considered useless.
98
+ Set to falsy to prevent expiry. The default is equivalent to one
99
+ week.
100
+ target_options: dict or None
101
+ Passed to the instantiation of the FS, if fs is None.
102
+ fs: filesystem instance
103
+ The target filesystem to run against. Provide this or ``protocol``.
104
+ same_names: bool (optional)
105
+ By default, target URLs are hashed using a ``HashCacheMapper`` so
106
+ that files from different backends with the same basename do not
107
+ conflict. If this argument is ``true``, a ``BasenameCacheMapper``
108
+ is used instead. Other cache mapper options are available by using
109
+ the ``cache_mapper`` keyword argument. Only one of this and
110
+ ``cache_mapper`` should be specified.
111
+ compression: str (optional)
112
+ To decompress on download. Can be 'infer' (guess from the URL name),
113
+ one of the entries in ``fsspec.compression.compr``, or None for no
114
+ decompression.
115
+ cache_mapper: AbstractCacheMapper (optional)
116
+ The object use to map from original filenames to cached filenames.
117
+ Only one of this and ``same_names`` should be specified.
118
+ """
119
+ super().__init__(**kwargs)
120
+ if fs is None and target_protocol is None:
121
+ raise ValueError(
122
+ "Please provide filesystem instance(fs) or target_protocol"
123
+ )
124
+ if not (fs is None) ^ (target_protocol is None):
125
+ raise ValueError(
126
+ "Both filesystems (fs) and target_protocol may not be both given."
127
+ )
128
+ if cache_storage == "TMP":
129
+ tempdir = tempfile.mkdtemp()
130
+ storage = [tempdir]
131
+ weakref.finalize(self, self._remove_tempdir, tempdir)
132
+ else:
133
+ if isinstance(cache_storage, str):
134
+ storage = [cache_storage]
135
+ else:
136
+ storage = cache_storage
137
+ os.makedirs(storage[-1], exist_ok=True)
138
+ self.storage = storage
139
+ self.kwargs = target_options or {}
140
+ self.cache_check = cache_check
141
+ self.check_files = check_files
142
+ self.expiry = expiry_time
143
+ self.compression = compression
144
+
145
+ # Size of cache in bytes. If None then the size is unknown and will be
146
+ # recalculated the next time cache_size() is called. On writes to the
147
+ # cache this is reset to None.
148
+ self._cache_size = None
149
+
150
+ if same_names is not None and cache_mapper is not None:
151
+ raise ValueError(
152
+ "Cannot specify both same_names and cache_mapper in "
153
+ "CachingFileSystem.__init__"
154
+ )
155
+ if cache_mapper is not None:
156
+ self._mapper = cache_mapper
157
+ else:
158
+ self._mapper = create_cache_mapper(
159
+ same_names if same_names is not None else False
160
+ )
161
+
162
+ self.target_protocol = (
163
+ target_protocol
164
+ if isinstance(target_protocol, str)
165
+ else (fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0])
166
+ )
167
+ self._metadata = CacheMetadata(self.storage)
168
+ self.load_cache()
169
+ self.fs = fs if fs is not None else filesystem(target_protocol, **self.kwargs)
170
+
171
+ def _strip_protocol(path):
172
+ # acts as a method, since each instance has a difference target
173
+ return self.fs._strip_protocol(type(self)._strip_protocol(path))
174
+
175
+ self._strip_protocol: Callable = _strip_protocol
176
+
177
+ @staticmethod
178
+ def _remove_tempdir(tempdir):
179
+ try:
180
+ rmtree(tempdir)
181
+ except Exception:
182
+ pass
183
+
184
+ def _mkcache(self):
185
+ os.makedirs(self.storage[-1], exist_ok=True)
186
+
187
+ def cache_size(self):
188
+ """Return size of cache in bytes.
189
+
190
+ If more than one cache directory is in use, only the size of the last
191
+ one (the writable cache directory) is returned.
192
+ """
193
+ if self._cache_size is None:
194
+ cache_dir = self.storage[-1]
195
+ self._cache_size = filesystem("file").du(cache_dir, withdirs=True)
196
+ return self._cache_size
197
+
198
+ def load_cache(self):
199
+ """Read set of stored blocks from file"""
200
+ self._metadata.load()
201
+ self._mkcache()
202
+ self.last_cache = time.time()
203
+
204
+ def save_cache(self):
205
+ """Save set of stored blocks from file"""
206
+ self._mkcache()
207
+ self._metadata.save()
208
+ self.last_cache = time.time()
209
+ self._cache_size = None
210
+
211
+ def _check_cache(self):
212
+ """Reload caches if time elapsed or any disappeared"""
213
+ self._mkcache()
214
+ if not self.cache_check:
215
+ # explicitly told not to bother checking
216
+ return
217
+ timecond = time.time() - self.last_cache > self.cache_check
218
+ existcond = all(os.path.exists(storage) for storage in self.storage)
219
+ if timecond or not existcond:
220
+ self.load_cache()
221
+
222
+ def _check_file(self, path):
223
+ """Is path in cache and still valid"""
224
+ path = self._strip_protocol(path)
225
+ self._check_cache()
226
+ return self._metadata.check_file(path, self)
227
+
228
+ def clear_cache(self):
229
+ """Remove all files and metadata from the cache
230
+
231
+ In the case of multiple cache locations, this clears only the last one,
232
+ which is assumed to be the read/write one.
233
+ """
234
+ rmtree(self.storage[-1])
235
+ self.load_cache()
236
+ self._cache_size = None
237
+
238
+ def clear_expired_cache(self, expiry_time=None):
239
+ """Remove all expired files and metadata from the cache
240
+
241
+ In the case of multiple cache locations, this clears only the last one,
242
+ which is assumed to be the read/write one.
243
+
244
+ Parameters
245
+ ----------
246
+ expiry_time: int
247
+ The time in seconds after which a local copy is considered useless.
248
+ If not defined the default is equivalent to the attribute from the
249
+ file caching instantiation.
250
+ """
251
+
252
+ if not expiry_time:
253
+ expiry_time = self.expiry
254
+
255
+ self._check_cache()
256
+
257
+ expired_files, writable_cache_empty = self._metadata.clear_expired(expiry_time)
258
+ for fn in expired_files:
259
+ if os.path.exists(fn):
260
+ os.remove(fn)
261
+
262
+ if writable_cache_empty:
263
+ rmtree(self.storage[-1])
264
+ self.load_cache()
265
+
266
+ self._cache_size = None
267
+
268
+ def pop_from_cache(self, path):
269
+ """Remove cached version of given file
270
+
271
+ Deletes local copy of the given (remote) path. If it is found in a cache
272
+ location which is not the last, it is assumed to be read-only, and
273
+ raises PermissionError
274
+ """
275
+ path = self._strip_protocol(path)
276
+ fn = self._metadata.pop_file(path)
277
+ if fn is not None:
278
+ os.remove(fn)
279
+ self._cache_size = None
280
+
281
+ def _open(
282
+ self,
283
+ path,
284
+ mode="rb",
285
+ block_size=None,
286
+ autocommit=True,
287
+ cache_options=None,
288
+ **kwargs,
289
+ ):
290
+ """Wrap the target _open
291
+
292
+ If the whole file exists in the cache, just open it locally and
293
+ return that.
294
+
295
+ Otherwise, open the file on the target FS, and make it have a mmap
296
+ cache pointing to the location which we determine, in our cache.
297
+ The ``blocks`` instance is shared, so as the mmap cache instance
298
+ updates, so does the entry in our ``cached_files`` attribute.
299
+ We monkey-patch this file, so that when it closes, we call
300
+ ``close_and_update`` to save the state of the blocks.
301
+ """
302
+ path = self._strip_protocol(path)
303
+
304
+ path = self.fs._strip_protocol(path)
305
+ if "r" not in mode:
306
+ return self.fs._open(
307
+ path,
308
+ mode=mode,
309
+ block_size=block_size,
310
+ autocommit=autocommit,
311
+ cache_options=cache_options,
312
+ **kwargs,
313
+ )
314
+ detail = self._check_file(path)
315
+ if detail:
316
+ # file is in cache
317
+ detail, fn = detail
318
+ hash, blocks = detail["fn"], detail["blocks"]
319
+ if blocks is True:
320
+ # stored file is complete
321
+ logger.debug("Opening local copy of %s", path)
322
+ return open(fn, mode)
323
+ # TODO: action where partial file exists in read-only cache
324
+ logger.debug("Opening partially cached copy of %s", path)
325
+ else:
326
+ hash = self._mapper(path)
327
+ fn = os.path.join(self.storage[-1], hash)
328
+ blocks = set()
329
+ detail = {
330
+ "original": path,
331
+ "fn": hash,
332
+ "blocks": blocks,
333
+ "time": time.time(),
334
+ "uid": self.fs.ukey(path),
335
+ }
336
+ self._metadata.update_file(path, detail)
337
+ logger.debug("Creating local sparse file for %s", path)
338
+
339
+ # explicitly submitting the size to the open call will avoid extra
340
+ # operations when opening. This is particularly relevant
341
+ # for any file that is read over a network, e.g. S3.
342
+ size = detail.get("size")
343
+
344
+ # call target filesystems open
345
+ self._mkcache()
346
+ f = self.fs._open(
347
+ path,
348
+ mode=mode,
349
+ block_size=block_size,
350
+ autocommit=autocommit,
351
+ cache_options=cache_options,
352
+ cache_type="none",
353
+ size=size,
354
+ **kwargs,
355
+ )
356
+
357
+ # set size if not already set
358
+ if size is None:
359
+ detail["size"] = f.size
360
+ self._metadata.update_file(path, detail)
361
+
362
+ if self.compression:
363
+ comp = (
364
+ infer_compression(path)
365
+ if self.compression == "infer"
366
+ else self.compression
367
+ )
368
+ f = compr[comp](f, mode="rb")
369
+ if "blocksize" in detail:
370
+ if detail["blocksize"] != f.blocksize:
371
+ raise BlocksizeMismatchError(
372
+ f"Cached file must be reopened with same block"
373
+ f" size as original (old: {detail['blocksize']},"
374
+ f" new {f.blocksize})"
375
+ )
376
+ else:
377
+ detail["blocksize"] = f.blocksize
378
+
379
+ def _fetch_ranges(ranges):
380
+ return self.fs.cat_ranges(
381
+ [path] * len(ranges),
382
+ [r[0] for r in ranges],
383
+ [r[1] for r in ranges],
384
+ **kwargs,
385
+ )
386
+
387
+ multi_fetcher = None if self.compression else _fetch_ranges
388
+ f.cache = MMapCache(
389
+ f.blocksize, f._fetch_range, f.size, fn, blocks, multi_fetcher=multi_fetcher
390
+ )
391
+ close = f.close
392
+ f.close = lambda: self.close_and_update(f, close)
393
+ self.save_cache()
394
+ return f
395
+
396
+ def _parent(self, path):
397
+ return self.fs._parent(path)
398
+
399
+ def hash_name(self, path: str, *args: Any) -> str:
400
+ # Kept for backward compatibility with downstream libraries.
401
+ # Ignores extra arguments, previously same_name boolean.
402
+ return self._mapper(path)
403
+
404
+ def close_and_update(self, f, close):
405
+ """Called when a file is closing, so store the set of blocks"""
406
+ if f.closed:
407
+ return
408
+ path = self._strip_protocol(f.path)
409
+ self._metadata.on_close_cached_file(f, path)
410
+ try:
411
+ logger.debug("going to save")
412
+ self.save_cache()
413
+ logger.debug("saved")
414
+ except OSError:
415
+ logger.debug("Cache saving failed while closing file")
416
+ except NameError:
417
+ logger.debug("Cache save failed due to interpreter shutdown")
418
+ close()
419
+ f.closed = True
420
+
421
+ def ls(self, path, detail=True):
422
+ return self.fs.ls(path, detail)
423
+
424
+ def __getattribute__(self, item):
425
+ if item in {
426
+ "load_cache",
427
+ "_open",
428
+ "save_cache",
429
+ "close_and_update",
430
+ "__init__",
431
+ "__getattribute__",
432
+ "__reduce__",
433
+ "_make_local_details",
434
+ "open",
435
+ "cat",
436
+ "cat_file",
437
+ "_cat_file",
438
+ "cat_ranges",
439
+ "_cat_ranges",
440
+ "get",
441
+ "read_block",
442
+ "tail",
443
+ "head",
444
+ "info",
445
+ "ls",
446
+ "exists",
447
+ "isfile",
448
+ "isdir",
449
+ "_check_file",
450
+ "_check_cache",
451
+ "_mkcache",
452
+ "clear_cache",
453
+ "clear_expired_cache",
454
+ "pop_from_cache",
455
+ "local_file",
456
+ "_paths_from_path",
457
+ "get_mapper",
458
+ "open_many",
459
+ "commit_many",
460
+ "hash_name",
461
+ "__hash__",
462
+ "__eq__",
463
+ "to_json",
464
+ "to_dict",
465
+ "cache_size",
466
+ "pipe_file",
467
+ "pipe",
468
+ "start_transaction",
469
+ "end_transaction",
470
+ }:
471
+ # all the methods defined in this class. Note `open` here, since
472
+ # it calls `_open`, but is actually in superclass
473
+ return lambda *args, **kw: getattr(type(self), item).__get__(self)(
474
+ *args, **kw
475
+ )
476
+ if item in ["__reduce_ex__"]:
477
+ raise AttributeError
478
+ if item in ["transaction"]:
479
+ # property
480
+ return type(self).transaction.__get__(self)
481
+ if item in ["_cache", "transaction_type"]:
482
+ # class attributes
483
+ return getattr(type(self), item)
484
+ if item == "__class__":
485
+ return type(self)
486
+ d = object.__getattribute__(self, "__dict__")
487
+ fs = d.get("fs", None) # fs is not immediately defined
488
+ if item in d:
489
+ return d[item]
490
+ elif fs is not None:
491
+ if item in fs.__dict__:
492
+ # attribute of instance
493
+ return fs.__dict__[item]
494
+ # attributed belonging to the target filesystem
495
+ cls = type(fs)
496
+ m = getattr(cls, item)
497
+ if (inspect.isfunction(m) or inspect.isdatadescriptor(m)) and (
498
+ not hasattr(m, "__self__") or m.__self__ is None
499
+ ):
500
+ # instance method
501
+ return m.__get__(fs, cls)
502
+ return m # class method or attribute
503
+ else:
504
+ # attributes of the superclass, while target is being set up
505
+ return super().__getattribute__(item)
506
+
507
+ def __eq__(self, other):
508
+ """Test for equality."""
509
+ if self is other:
510
+ return True
511
+ if not isinstance(other, type(self)):
512
+ return False
513
+ return (
514
+ self.storage == other.storage
515
+ and self.kwargs == other.kwargs
516
+ and self.cache_check == other.cache_check
517
+ and self.check_files == other.check_files
518
+ and self.expiry == other.expiry
519
+ and self.compression == other.compression
520
+ and self._mapper == other._mapper
521
+ and self.target_protocol == other.target_protocol
522
+ )
523
+
524
+ def __hash__(self):
525
+ """Calculate hash."""
526
+ return (
527
+ hash(tuple(self.storage))
528
+ ^ hash(str(self.kwargs))
529
+ ^ hash(self.cache_check)
530
+ ^ hash(self.check_files)
531
+ ^ hash(self.expiry)
532
+ ^ hash(self.compression)
533
+ ^ hash(self._mapper)
534
+ ^ hash(self.target_protocol)
535
+ )
536
+
537
+
538
+ class WholeFileCacheFileSystem(CachingFileSystem):
539
+ """Caches whole remote files on first access
540
+
541
+ This class is intended as a layer over any other file system, and
542
+ will make a local copy of each file accessed, so that all subsequent
543
+ reads are local. This is similar to ``CachingFileSystem``, but without
544
+ the block-wise functionality and so can work even when sparse files
545
+ are not allowed. See its docstring for definition of the init
546
+ arguments.
547
+
548
+ The class still needs access to the remote store for listing files,
549
+ and may refresh cached files.
550
+ """
551
+
552
+ protocol = "filecache"
553
+ local_file = True
554
+
555
+ def open_many(self, open_files, **kwargs):
556
+ paths = [of.path for of in open_files]
557
+ if "r" in open_files.mode:
558
+ self._mkcache()
559
+ else:
560
+ return [
561
+ LocalTempFile(
562
+ self.fs,
563
+ path,
564
+ mode=open_files.mode,
565
+ fn=os.path.join(self.storage[-1], self._mapper(path)),
566
+ **kwargs,
567
+ )
568
+ for path in paths
569
+ ]
570
+
571
+ if self.compression:
572
+ raise NotImplementedError
573
+ details = [self._check_file(sp) for sp in paths]
574
+ downpath = [p for p, d in zip(paths, details) if not d]
575
+ downfn0 = [
576
+ os.path.join(self.storage[-1], self._mapper(p))
577
+ for p, d in zip(paths, details)
578
+ ] # keep these path names for opening later
579
+ downfn = [fn for fn, d in zip(downfn0, details) if not d]
580
+ if downpath:
581
+ # skip if all files are already cached and up to date
582
+ self.fs.get(downpath, downfn)
583
+
584
+ # update metadata - only happens when downloads are successful
585
+ newdetail = [
586
+ {
587
+ "original": path,
588
+ "fn": self._mapper(path),
589
+ "blocks": True,
590
+ "time": time.time(),
591
+ "uid": self.fs.ukey(path),
592
+ }
593
+ for path in downpath
594
+ ]
595
+ for path, detail in zip(downpath, newdetail):
596
+ self._metadata.update_file(path, detail)
597
+ self.save_cache()
598
+
599
+ def firstpart(fn):
600
+ # helper to adapt both whole-file and simple-cache
601
+ return fn[1] if isinstance(fn, tuple) else fn
602
+
603
+ return [
604
+ open(firstpart(fn0) if fn0 else fn1, mode=open_files.mode)
605
+ for fn0, fn1 in zip(details, downfn0)
606
+ ]
607
+
608
+ def commit_many(self, open_files):
609
+ self.fs.put([f.fn for f in open_files], [f.path for f in open_files])
610
+ [f.close() for f in open_files]
611
+ for f in open_files:
612
+ # in case autocommit is off, and so close did not already delete
613
+ try:
614
+ os.remove(f.name)
615
+ except FileNotFoundError:
616
+ pass
617
+ self._cache_size = None
618
+
619
+ def _make_local_details(self, path):
620
+ hash = self._mapper(path)
621
+ fn = os.path.join(self.storage[-1], hash)
622
+ detail = {
623
+ "original": path,
624
+ "fn": hash,
625
+ "blocks": True,
626
+ "time": time.time(),
627
+ "uid": self.fs.ukey(path),
628
+ }
629
+ self._metadata.update_file(path, detail)
630
+ logger.debug("Copying %s to local cache", path)
631
+ return fn
632
+
633
+ def cat(
634
+ self,
635
+ path,
636
+ recursive=False,
637
+ on_error="raise",
638
+ callback=DEFAULT_CALLBACK,
639
+ **kwargs,
640
+ ):
641
+ paths = self.expand_path(
642
+ path, recursive=recursive, maxdepth=kwargs.get("maxdepth")
643
+ )
644
+ getpaths = []
645
+ storepaths = []
646
+ fns = []
647
+ out = {}
648
+ for p in paths.copy():
649
+ try:
650
+ detail = self._check_file(p)
651
+ if not detail:
652
+ fn = self._make_local_details(p)
653
+ getpaths.append(p)
654
+ storepaths.append(fn)
655
+ else:
656
+ detail, fn = detail if isinstance(detail, tuple) else (None, detail)
657
+ fns.append(fn)
658
+ except Exception as e:
659
+ if on_error == "raise":
660
+ raise
661
+ if on_error == "return":
662
+ out[p] = e
663
+ paths.remove(p)
664
+
665
+ if getpaths:
666
+ self.fs.get(getpaths, storepaths)
667
+ self.save_cache()
668
+
669
+ callback.set_size(len(paths))
670
+ for p, fn in zip(paths, fns):
671
+ with open(fn, "rb") as f:
672
+ out[p] = f.read()
673
+ callback.relative_update(1)
674
+ if isinstance(path, str) and len(paths) == 1 and recursive is False:
675
+ out = out[paths[0]]
676
+ return out
677
+
678
+ def _open(self, path, mode="rb", **kwargs):
679
+ path = self._strip_protocol(path)
680
+ if "r" not in mode:
681
+ hash = self._mapper(path)
682
+ fn = os.path.join(self.storage[-1], hash)
683
+ user_specified_kwargs = {
684
+ k: v
685
+ for k, v in kwargs.items()
686
+ # those kwargs were added by open(), we don't want them
687
+ if k not in ["autocommit", "block_size", "cache_options"]
688
+ }
689
+ return LocalTempFile(self, path, mode=mode, fn=fn, **user_specified_kwargs)
690
+ detail = self._check_file(path)
691
+ if detail:
692
+ detail, fn = detail
693
+ _, blocks = detail["fn"], detail["blocks"]
694
+ if blocks is True:
695
+ logger.debug("Opening local copy of %s", path)
696
+
697
+ # In order to support downstream filesystems to be able to
698
+ # infer the compression from the original filename, like
699
+ # the `TarFileSystem`, let's extend the `io.BufferedReader`
700
+ # fileobject protocol by adding a dedicated attribute
701
+ # `original`.
702
+ f = open(fn, mode)
703
+ f.original = detail.get("original")
704
+ return f
705
+ else:
706
+ raise ValueError(
707
+ f"Attempt to open partially cached file {path}"
708
+ f" as a wholly cached file"
709
+ )
710
+ else:
711
+ fn = self._make_local_details(path)
712
+ kwargs["mode"] = mode
713
+
714
+ # call target filesystems open
715
+ self._mkcache()
716
+ if self.compression:
717
+ with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2:
718
+ if isinstance(f, AbstractBufferedFile):
719
+ # want no type of caching if just downloading whole thing
720
+ f.cache = BaseCache(0, f.cache.fetcher, f.size)
721
+ comp = (
722
+ infer_compression(path)
723
+ if self.compression == "infer"
724
+ else self.compression
725
+ )
726
+ f = compr[comp](f, mode="rb")
727
+ data = True
728
+ while data:
729
+ block = getattr(f, "blocksize", 5 * 2**20)
730
+ data = f.read(block)
731
+ f2.write(data)
732
+ else:
733
+ self.fs.get_file(path, fn)
734
+ self.save_cache()
735
+ return self._open(path, mode)
736
+
737
+
738
+ class SimpleCacheFileSystem(WholeFileCacheFileSystem):
739
+ """Caches whole remote files on first access
740
+
741
+ This class is intended as a layer over any other file system, and
742
+ will make a local copy of each file accessed, so that all subsequent
743
+ reads are local. This implementation only copies whole files, and
744
+ does not keep any metadata about the download time or file details.
745
+ It is therefore safer to use in multi-threaded/concurrent situations.
746
+
747
+ This is the only one of the caching filesystems that supports write: you
748
+ will be given a real local open file, and upon close and commit, it will
749
+ be uploaded to the target filesystem; the writability of the target URL
750
+ is not checked until that time.
751
+
752
+ """
753
+
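+ # Hedged usage sketch (the URL below is a placeholder, not part of this
+ # module):
+ #
+ #   import fsspec
+ #   with fsspec.open("simplecache::https://example.com/data.bin") as f:
+ #       data = f.read()  # first access downloads; later reads are local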
754
+ protocol = "simplecache"
755
+ local_file = True
756
+ transaction_type = WriteCachedTransaction
757
+
758
+ def __init__(self, **kwargs):
759
+ kw = kwargs.copy()
760
+ for key in ["cache_check", "expiry_time", "check_files"]:
761
+ kw[key] = False
762
+ super().__init__(**kw)
763
+ for storage in self.storage:
764
+ if not os.path.exists(storage):
765
+ os.makedirs(storage, exist_ok=True)
766
+
767
+ def _check_file(self, path):
768
+ self._check_cache()
769
+ sha = self._mapper(path)
770
+ for storage in self.storage:
771
+ fn = os.path.join(storage, sha)
772
+ if os.path.exists(fn):
773
+ return fn
774
+
775
+ def save_cache(self):
776
+ pass
777
+
778
+ def load_cache(self):
779
+ pass
780
+
781
+ def pipe_file(self, path, value=None, **kwargs):
782
+ if self._intrans:
783
+ with self.open(path, "wb") as f:
784
+ f.write(value)
785
+ else:
786
+ super().pipe_file(path, value)
787
+
788
+ def ls(self, path, detail=True, **kwargs):
789
+ path = self._strip_protocol(path)
790
+ details = []
791
+ try:
792
+ details = self.fs.ls(
793
+ path, detail=True, **kwargs
794
+ ).copy() # don't edit original!
795
+ except FileNotFoundError as e:
796
+ ex = e
797
+ else:
798
+ ex = None
799
+ if self._intrans:
800
+ path1 = path.rstrip("/") + "/"
801
+ for f in self.transaction.files:
802
+ if f.path == path:
803
+ details.append(
804
+ {"name": path, "size": f.size or f.tell(), "type": "file"}
805
+ )
806
+ elif f.path.startswith(path1):
807
+ if f.path.count("/") == path1.count("/"):
808
+ details.append(
809
+ {"name": f.path, "size": f.size or f.tell(), "type": "file"}
810
+ )
811
+ else:
812
+ dname = "/".join(f.path.split("/")[: path1.count("/") + 1])
813
+ details.append({"name": dname, "size": 0, "type": "directory"})
814
+ if ex is not None and not details:
815
+ raise ex
816
+ if detail:
817
+ return details
818
+ return sorted(_["name"] for _ in details)
819
+
820
+ def info(self, path, **kwargs):
821
+ path = self._strip_protocol(path)
822
+ if self._intrans:
823
+ f = [_ for _ in self.transaction.files if _.path == path]
824
+ if f:
825
+ size = os.path.getsize(f[0].fn) if f[0].closed else f[0].tell()
826
+ return {"name": path, "size": size, "type": "file"}
827
+ f = any(_.path.startswith(path + "/") for _ in self.transaction.files)
828
+ if f:
829
+ return {"name": path, "size": 0, "type": "directory"}
830
+ return self.fs.info(path, **kwargs)
831
+
832
+ def pipe(self, path, value=None, **kwargs):
833
+ if isinstance(path, str):
834
+ self.pipe_file(self._strip_protocol(path), value, **kwargs)
835
+ elif isinstance(path, dict):
836
+ for k, v in path.items():
837
+ self.pipe_file(self._strip_protocol(k), v, **kwargs)
838
+ else:
839
+ raise ValueError("path must be str or dict")
840
+
841
+ async def _cat_file(self, path, start=None, end=None, **kwargs):
842
+ logger.debug("async cat_file %s", path)
843
+ path = self._strip_protocol(path)
844
+ sha = self._mapper(path)
845
+ fn = self._check_file(path)
846
+
847
+ if not fn:
848
+ fn = os.path.join(self.storage[-1], sha)
849
+ await self.fs._get_file(path, fn, **kwargs)
850
+
851
+ with open(fn, "rb") as f: # noqa ASYNC230
852
+ if start:
853
+ f.seek(start)
854
+ size = -1 if end is None else end - f.tell()
855
+ return f.read(size)
856
+
857
+ async def _cat_ranges(
858
+ self, paths, starts, ends, max_gap=None, on_error="return", **kwargs
859
+ ):
860
+ logger.debug("async cat ranges %s", paths)
861
+ lpaths = []
862
+ rset = set()
863
+ download = []
864
+ rpaths = []
865
+ for p in paths:
866
+ fn = self._check_file(p)
867
+ if fn is None and p not in rset:
868
+ sha = self._mapper(p)
869
+ fn = os.path.join(self.storage[-1], sha)
870
+ download.append(fn)
871
+ rset.add(p)
872
+ rpaths.append(p)
873
+ lpaths.append(fn)
874
+ if download:
875
+ await self.fs._get(rpaths, download, on_error=on_error)
876
+
877
+ return LocalFileSystem().cat_ranges(
878
+ lpaths, starts, ends, max_gap=max_gap, on_error=on_error, **kwargs
879
+ )
880
+
881
+ def cat_ranges(
882
+ self, paths, starts, ends, max_gap=None, on_error="return", **kwargs
883
+ ):
884
+ logger.debug("cat ranges %s", paths)
885
+ # Download any uncached paths first (cf. _cat_ranges above), then serve
+ # all ranges from the local copies.
+ lpaths = [self._check_file(p) for p in paths]
+ rpaths = [p for fn, p in zip(lpaths, paths) if fn is None]
+ downloads = [os.path.join(self.storage[-1], self._mapper(p)) for p in rpaths]
+ if rpaths:
+     self.fs.get(rpaths, downloads)
+ lpaths = [
+     fn or os.path.join(self.storage[-1], self._mapper(p))
+     for fn, p in zip(lpaths, paths)
+ ]
+ return LocalFileSystem().cat_ranges(
+     lpaths, starts, ends, max_gap=max_gap, on_error=on_error, **kwargs
891
+ )
892
+
893
+ def _open(self, path, mode="rb", **kwargs):
894
+ path = self._strip_protocol(path)
895
+ sha = self._mapper(path)
896
+
897
+ if "r" not in mode:
898
+ fn = os.path.join(self.storage[-1], sha)
899
+ user_specified_kwargs = {
900
+ k: v
901
+ for k, v in kwargs.items()
902
+ if k not in ["autocommit", "block_size", "cache_options"]
903
+ } # those were added by open()
904
+ return LocalTempFile(
905
+ self,
906
+ path,
907
+ mode=mode,
908
+ autocommit=not self._intrans,
909
+ fn=fn,
910
+ **user_specified_kwargs,
911
+ )
912
+ fn = self._check_file(path)
913
+ if fn:
914
+ return open(fn, mode)
915
+
916
+ fn = os.path.join(self.storage[-1], sha)
917
+ logger.debug("Copying %s to local cache", path)
918
+ kwargs["mode"] = mode
919
+
920
+ self._mkcache()
921
+ self._cache_size = None
922
+ if self.compression:
923
+ with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2:
924
+ if isinstance(f, AbstractBufferedFile):
925
+ # want no type of caching if just downloading whole thing
926
+ f.cache = BaseCache(0, f.cache.fetcher, f.size)
927
+ comp = (
928
+ infer_compression(path)
929
+ if self.compression == "infer"
930
+ else self.compression
931
+ )
932
+ f = compr[comp](f, mode="rb")
933
+ data = True
934
+ while data:
935
+ block = getattr(f, "blocksize", 5 * 2**20)
936
+ data = f.read(block)
937
+ f2.write(data)
938
+ else:
939
+ self.fs.get_file(path, fn)
940
+ return self._open(path, mode)
941
+
942
+
943
+ class LocalTempFile:
944
+ """A temporary local file, which will be uploaded on commit"""
945
+
946
+ def __init__(self, fs, path, fn, mode="wb", autocommit=True, seek=0, **kwargs):
947
+ self.fn = fn
948
+ self.fh = open(fn, mode)
949
+ self.mode = mode
950
+ if seek:
951
+ self.fh.seek(seek)
952
+ self.path = path
953
+ self.size = None
954
+ self.fs = fs
955
+ self.closed = False
956
+ self.autocommit = autocommit
957
+ self.kwargs = kwargs
958
+
959
+ def __reduce__(self):
960
+ # always open in r+b to allow continuing writing at a location
961
+ return (
962
+ LocalTempFile,
963
+ (self.fs, self.path, self.fn, "r+b", self.autocommit, self.tell()),
964
+ )
965
+
966
+ def __enter__(self):
967
+ return self.fh
968
+
969
+ def __exit__(self, exc_type, exc_val, exc_tb):
970
+ self.close()
971
+
972
+ def close(self):
973
+ # self.size = self.fh.tell()
974
+ if self.closed:
975
+ return
976
+ self.fh.close()
977
+ self.closed = True
978
+ if self.autocommit:
979
+ self.commit()
980
+
981
+ def discard(self):
982
+ self.fh.close()
983
+ os.remove(self.fn)
984
+
985
+ def commit(self):
986
+ self.fs.put(self.fn, self.path, **self.kwargs)
987
+ # we do not delete the local copy, it's still in the cache.
988
+
989
+ @property
990
+ def name(self):
991
+ return self.fn
992
+
993
+ def __repr__(self) -> str:
994
+ return f"LocalTempFile: {self.path}"
995
+
996
+ def __getattr__(self, item):
997
+ return getattr(self.fh, item)
.venv/lib/python3.13/site-packages/fsspec/implementations/dask.py ADDED
@@ -0,0 +1,152 @@
1
+ import dask
2
+ from distributed.client import Client, _get_global_client
3
+ from distributed.worker import Worker
4
+
5
+ from fsspec import filesystem
6
+ from fsspec.spec import AbstractBufferedFile, AbstractFileSystem
7
+ from fsspec.utils import infer_storage_options
8
+
9
+
10
+ def _get_client(client):
11
+ if client is None:
12
+ return _get_global_client()
13
+ elif isinstance(client, Client):
14
+ return client
15
+ else:
16
+ # e.g., connection string
17
+ return Client(client)
18
+
19
+
20
+ def _in_worker():
21
+ return bool(Worker._instances)
22
+
23
+
24
+ class DaskWorkerFileSystem(AbstractFileSystem):
25
+ """View files accessible to a worker as any other remote file-system
26
+
27
+ When instances are run on the worker, uses the real filesystem. When
28
+ run on the client, they call the worker to provide information or data.
29
+
30
+ **Warning** this implementation is experimental, and read-only for now.
31
+ """
32
+
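+ # Hedged usage sketch (assumes a running distributed cluster and that this
+ # class is registered under the "dask" protocol in fsspec's registry):
+ #
+ #   import fsspec
+ #   fs = fsspec.filesystem("dask", target_protocol="file")
+ #   fs.ls("/tmp")  # executed on a worker via dask.delayed, then computed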
33
+ def __init__(
34
+ self, target_protocol=None, target_options=None, fs=None, client=None, **kwargs
35
+ ):
36
+ super().__init__(**kwargs)
37
+ if not (fs is None) ^ (target_protocol is None):
38
+ raise ValueError(
39
+ "Please provide one of filesystem instance (fs) or"
40
+ " target_protocol, not both"
41
+ )
42
+ self.target_protocol = target_protocol
43
+ self.target_options = target_options
44
+ self.worker = None
45
+ self.client = client
46
+ self.fs = fs
47
+ self._determine_worker()
48
+
49
+ @staticmethod
50
+ def _get_kwargs_from_urls(path):
51
+ so = infer_storage_options(path)
52
+ if "host" in so and "port" in so:
53
+ return {"client": f"{so['host']}:{so['port']}"}
54
+ else:
55
+ return {}
56
+
57
+ def _determine_worker(self):
58
+ if _in_worker():
59
+ self.worker = True
60
+ if self.fs is None:
61
+ self.fs = filesystem(
62
+ self.target_protocol, **(self.target_options or {})
63
+ )
64
+ else:
65
+ self.worker = False
66
+ self.client = _get_client(self.client)
67
+ self.rfs = dask.delayed(self)
68
+
69
+ def mkdir(self, *args, **kwargs):
70
+ if self.worker:
71
+ self.fs.mkdir(*args, **kwargs)
72
+ else:
73
+ self.rfs.mkdir(*args, **kwargs).compute()
74
+
75
+ def rm(self, *args, **kwargs):
76
+ if self.worker:
77
+ self.fs.rm(*args, **kwargs)
78
+ else:
79
+ self.rfs.rm(*args, **kwargs).compute()
80
+
81
+ def copy(self, *args, **kwargs):
82
+ if self.worker:
83
+ self.fs.copy(*args, **kwargs)
84
+ else:
85
+ self.rfs.copy(*args, **kwargs).compute()
86
+
87
+ def mv(self, *args, **kwargs):
88
+ if self.worker:
89
+ self.fs.mv(*args, **kwargs)
90
+ else:
91
+ self.rfs.mv(*args, **kwargs).compute()
92
+
93
+ def ls(self, *args, **kwargs):
94
+ if self.worker:
95
+ return self.fs.ls(*args, **kwargs)
96
+ else:
97
+ return self.rfs.ls(*args, **kwargs).compute()
98
+
99
+ def _open(
100
+ self,
101
+ path,
102
+ mode="rb",
103
+ block_size=None,
104
+ autocommit=True,
105
+ cache_options=None,
106
+ **kwargs,
107
+ ):
108
+ if self.worker:
109
+ return self.fs._open(
110
+ path,
111
+ mode=mode,
112
+ block_size=block_size,
113
+ autocommit=autocommit,
114
+ cache_options=cache_options,
115
+ **kwargs,
116
+ )
117
+ else:
118
+ return DaskFile(
119
+ fs=self,
120
+ path=path,
121
+ mode=mode,
122
+ block_size=block_size,
123
+ autocommit=autocommit,
124
+ cache_options=cache_options,
125
+ **kwargs,
126
+ )
127
+
128
+ def fetch_range(self, path, mode, start, end):
129
+ if self.worker:
130
+ with self._open(path, mode) as f:
131
+ f.seek(start)
132
+ return f.read(end - start)
133
+ else:
134
+ return self.rfs.fetch_range(path, mode, start, end).compute()
135
+
136
+
137
+ class DaskFile(AbstractBufferedFile):
138
+ def __init__(self, mode="rb", **kwargs):
139
+ if mode != "rb":
140
+ raise ValueError('Remote dask files can only be opened in "rb" mode')
141
+ super().__init__(**kwargs)
142
+
143
+ def _upload_chunk(self, final=False):
144
+ pass
145
+
146
+ def _initiate_upload(self):
147
+ """Create remote file/upload"""
148
+ pass
149
+
150
+ def _fetch_range(self, start, end):
151
+ """Get the specified set of bytes from remote"""
152
+ return self.fs.fetch_range(self.path, self.mode, start, end)
.venv/lib/python3.13/site-packages/fsspec/implementations/data.py ADDED
@@ -0,0 +1,58 @@
1
+ import base64
2
+ import io
3
+ from typing import Optional
4
+ from urllib.parse import unquote
5
+
6
+ from fsspec import AbstractFileSystem
7
+
8
+
9
+ class DataFileSystem(AbstractFileSystem):
10
+ """A handy decoder for data-URLs
11
+
12
+ Example
13
+ -------
14
+ >>> with fsspec.open("data:,Hello%2C%20World%21") as f:
15
+ ... print(f.read())
16
+ b'Hello, World!'
17
+
18
+ See https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs
19
+ """
20
+
21
+ protocol = "data"
22
+
23
+ def __init__(self, **kwargs):
24
+ """No parameters for this filesystem"""
25
+ super().__init__(**kwargs)
26
+
27
+ def cat_file(self, path, start=None, end=None, **kwargs):
28
+ pref, data = path.split(",", 1)
29
+ if pref.endswith("base64"):
30
+ return base64.b64decode(data)[start:end]
31
+ return unquote(data).encode()[start:end]
32
+
33
+ def info(self, path, **kwargs):
34
+ pref, name = path.split(",", 1)
35
+ data = self.cat_file(path)
36
+ mime = pref.split(":", 1)[1].split(";", 1)[0]
37
+ return {"name": name, "size": len(data), "type": "file", "mimetype": mime}
38
+
39
+ def _open(
40
+ self,
41
+ path,
42
+ mode="rb",
43
+ block_size=None,
44
+ autocommit=True,
45
+ cache_options=None,
46
+ **kwargs,
47
+ ):
48
+ if "r" not in mode:
49
+ raise ValueError("Read only filesystem")
50
+ return io.BytesIO(self.cat_file(path))
51
+
52
+ @staticmethod
53
+ def encode(data: bytes, mime: Optional[str] = None):
54
+ """Format the given data into data-URL syntax
55
+
56
+ This version always base64 encodes, even when the data is ascii/url-safe.
57
+ """
58
+ return f"data:{mime or ''};base64,{base64.b64encode(data).decode()}"
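+ # Round-trip sketch: encode bytes into a data-URL, then read them back.
+ #
+ #   url = DataFileSystem.encode(b"hello", mime="text/plain")
+ #   assert DataFileSystem().cat_file(url) == b"hello"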
.venv/lib/python3.13/site-packages/fsspec/implementations/dbfs.py ADDED
@@ -0,0 +1,468 @@
1
+ import base64
2
+ import urllib
3
+
4
+ import requests
5
+ import requests.exceptions
6
+ from requests.adapters import HTTPAdapter, Retry
7
+
8
+ from fsspec import AbstractFileSystem
9
+ from fsspec.spec import AbstractBufferedFile
10
+
11
+
12
+ class DatabricksException(Exception):
13
+ """
14
+ Helper class for exceptions raised in this module.
15
+ """
16
+
17
+ def __init__(self, error_code, message, details=None):
18
+ """Create a new DatabricksException"""
19
+ super().__init__(message)
20
+
21
+ self.error_code = error_code
22
+ self.message = message
23
+ self.details = details
24
+
25
+
26
+ class DatabricksFileSystem(AbstractFileSystem):
27
+ """
28
+ Get access to the Databricks filesystem implementation over HTTP.
29
+ Can be used inside and outside of a databricks cluster.
30
+ """
31
+
32
+ def __init__(self, instance, token, **kwargs):
33
+ """
34
+ Create a new DatabricksFileSystem.
35
+
36
+ Parameters
37
+ ----------
38
+ instance: str
39
+ The instance URL of the databricks cluster.
40
+ For example, for an Azure Databricks cluster, this
41
+ has the form adb-<some-number>.<two digits>.azuredatabricks.net.
42
+ token: str
43
+ Your personal token. Find out more
44
+ here: https://docs.databricks.com/dev-tools/api/latest/authentication.html
45
+ """
46
+ self.instance = instance
47
+ self.token = token
48
+ self.session = requests.Session()
49
+ self.retries = Retry(
50
+ total=10,
51
+ backoff_factor=0.05,
52
+ status_forcelist=[408, 429, 500, 502, 503, 504],
53
+ )
54
+
55
+ self.session.mount("https://", HTTPAdapter(max_retries=self.retries))
56
+ self.session.headers.update({"Authorization": f"Bearer {self.token}"})
57
+
58
+ super().__init__(**kwargs)
59
+
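+ # Hedged usage sketch (instance and token below are placeholders):
+ #
+ #   fs = DatabricksFileSystem(
+ #       instance="adb-1234567890123456.7.azuredatabricks.net",
+ #       token="dapiXXXXXXXXXXXXXXXX",
+ #   )
+ #   fs.ls("/FileStore")  # hypothetical DBFS path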
60
+ def ls(self, path, detail=True, **kwargs):
61
+ """
62
+ List the contents of the given path.
63
+
64
+ Parameters
65
+ ----------
66
+ path: str
67
+ Absolute path
68
+ detail: bool
69
+ Return not only the list of filenames,
70
+ but also additional information on file sizes
71
+ and types.
72
+ """
73
+ out = self._ls_from_cache(path)
74
+ if not out:
75
+ try:
76
+ r = self._send_to_api(
77
+ method="get", endpoint="list", json={"path": path}
78
+ )
79
+ except DatabricksException as e:
80
+ if e.error_code == "RESOURCE_DOES_NOT_EXIST":
81
+ raise FileNotFoundError(e.message) from e
82
+
83
+ raise
84
+ files = r.get("files", [])
85
+ out = [
86
+ {
87
+ "name": o["path"],
88
+ "type": "directory" if o["is_dir"] else "file",
89
+ "size": o["file_size"],
90
+ }
91
+ for o in files
92
+ ]
93
+ self.dircache[path] = out
94
+
95
+ if detail:
96
+ return out
97
+ return [o["name"] for o in out]
98
+
99
+ def makedirs(self, path, exist_ok=True):
100
+ """
101
+ Create a given absolute path and all of its parents.
102
+
103
+ Parameters
104
+ ----------
105
+ path: str
106
+ Absolute path to create
107
+ exist_ok: bool
108
+ If false, checks if the folder
109
+ exists before creating it (and raises an
110
+ Exception if this is the case)
111
+ """
112
+ if not exist_ok:
113
+ try:
114
+ # If the following succeeds, the path is already present
115
+ self._send_to_api(
116
+ method="get", endpoint="get-status", json={"path": path}
117
+ )
118
+ raise FileExistsError(f"Path {path} already exists")
119
+ except DatabricksException as e:
120
+ if e.error_code == "RESOURCE_DOES_NOT_EXIST":
121
+ pass
122
+
123
+ try:
124
+ self._send_to_api(method="post", endpoint="mkdirs", json={"path": path})
125
+ except DatabricksException as e:
126
+ if e.error_code == "RESOURCE_ALREADY_EXISTS":
127
+ raise FileExistsError(e.message) from e
128
+
129
+ raise
130
+ self.invalidate_cache(self._parent(path))
131
+
132
+ def mkdir(self, path, create_parents=True, **kwargs):
133
+ """
134
+ Create a given absolute path and all of its parents.
135
+
136
+ Parameters
137
+ ----------
138
+ path: str
139
+ Absolute path to create
140
+ create_parents: bool
141
+ Whether to create all parents or not.
142
+ "False" is not implemented so far.
143
+ """
144
+ if not create_parents:
145
+ raise NotImplementedError
146
+
147
+ self.mkdirs(path, **kwargs)
148
+
149
+ def rm(self, path, recursive=False, **kwargs):
150
+ """
151
+ Remove the file or folder at the given absolute path.
152
+
153
+ Parameters
154
+ ----------
155
+ path: str
156
+ Absolute path of what to remove
157
+ recursive: bool
158
+ Recursively delete all files in a folder.
159
+ """
160
+ try:
161
+ self._send_to_api(
162
+ method="post",
163
+ endpoint="delete",
164
+ json={"path": path, "recursive": recursive},
165
+ )
166
+ except DatabricksException as e:
167
+ # This is not really an exception, it just means
168
+ # not everything was deleted so far
169
+ if e.error_code == "PARTIAL_DELETE":
170
+ self.rm(path=path, recursive=recursive)
171
+ elif e.error_code == "IO_ERROR":
172
+ # Using the same exception as the os module would use here
173
+ raise OSError(e.message) from e
174
+
175
+ raise
176
+ self.invalidate_cache(self._parent(path))
177
+
178
+ def mv(
179
+ self, source_path, destination_path, recursive=False, maxdepth=None, **kwargs
180
+ ):
181
+ """
182
+ Move a source to a destination path.
183
+
184
+ A note from the original [databricks API manual]
185
+ (https://docs.databricks.com/dev-tools/api/latest/dbfs.html#move).
186
+
187
+ When moving a large number of files the API call will time out after
188
+ approximately 60s, potentially resulting in partially moved data.
189
+ Therefore, for operations that move more than 10k files, we strongly
190
+ discourage using the DBFS REST API.
191
+
192
+ Parameters
193
+ ----------
194
+ source_path: str
195
+ From where to move (absolute path)
196
+ destination_path: str
197
+ To where to move (absolute path)
198
+ recursive: bool
199
+ Not implemented so far.
200
+ maxdepth:
201
+ Not implemented so far.
202
+ """
203
+ if recursive:
204
+ raise NotImplementedError
205
+ if maxdepth:
206
+ raise NotImplementedError
207
+
208
+ try:
209
+ self._send_to_api(
210
+ method="post",
211
+ endpoint="move",
212
+ json={"source_path": source_path, "destination_path": destination_path},
213
+ )
214
+ except DatabricksException as e:
215
+ if e.error_code == "RESOURCE_DOES_NOT_EXIST":
216
+ raise FileNotFoundError(e.message) from e
217
+ elif e.error_code == "RESOURCE_ALREADY_EXISTS":
218
+ raise FileExistsError(e.message) from e
219
+
220
+ raise
221
+ self.invalidate_cache(self._parent(source_path))
222
+ self.invalidate_cache(self._parent(destination_path))
223
+
224
+ def _open(self, path, mode="rb", block_size="default", **kwargs):
225
+ """
226
+ Override the base class method to make sure to create a DatabricksFile.
227
+ All arguments are copied from the base method.
228
+
229
+ Only the default blocksize is allowed.
230
+ """
231
+ return DatabricksFile(self, path, mode=mode, block_size=block_size, **kwargs)
232
+
233
+ def _send_to_api(self, method, endpoint, json):
234
+ """
235
+ Send the given json to the DBFS API
236
+ using a get or post request (specified by the argument `method`).
237
+
238
+ Parameters
239
+ ----------
240
+ method: str
241
+ Which http method to use for communication; "get" or "post".
242
+ endpoint: str
243
+ Where to send the request to (last part of the API URL)
244
+ json: dict
245
+ Dictionary of information to send
246
+ """
247
+ if method == "post":
248
+ session_call = self.session.post
249
+ elif method == "get":
250
+ session_call = self.session.get
251
+ else:
252
+ raise ValueError(f"Do not understand method {method}")
253
+
254
+ url = urllib.parse.urljoin(f"https://{self.instance}/api/2.0/dbfs/", endpoint)
255
+
256
+ r = session_call(url, json=json)
257
+
258
+ # The DBFS API will return a json, also in case of an exception.
259
+ # We want to preserve this information as good as possible.
260
+ try:
261
+ r.raise_for_status()
262
+ except requests.HTTPError as e:
263
+ # try to extract json error message
264
+ # if that fails, fall back to the original exception
265
+ try:
266
+ exception_json = e.response.json()
267
+ except Exception:
268
+ raise e from None
269
+
270
+ raise DatabricksException(**exception_json) from e
271
+
272
+ return r.json()
273
+
274
+ def _create_handle(self, path, overwrite=True):
275
+ """
276
+ Internal function to create a handle, which can be used to
277
+ write blocks of a file to DBFS.
278
+ A handle has a unique identifier which needs to be passed
279
+ whenever written during this transaction.
280
+ The handle is active for 10 minutes - after that a new
281
+ write transaction needs to be created.
282
+ Make sure to close the handle after you are finished.
283
+
284
+ Parameters
285
+ ----------
286
+ path: str
287
+ Absolute path for this file.
288
+ overwrite: bool
289
+ If a file already exists at this location, either overwrite
290
+ it or raise an exception.
291
+ """
292
+ try:
293
+ r = self._send_to_api(
294
+ method="post",
295
+ endpoint="create",
296
+ json={"path": path, "overwrite": overwrite},
297
+ )
298
+ return r["handle"]
299
+ except DatabricksException as e:
300
+ if e.error_code == "RESOURCE_ALREADY_EXISTS":
301
+ raise FileExistsError(e.message) from e
302
+
303
+ raise
304
+
305
+ def _close_handle(self, handle):
306
+ """
307
+ Close a handle, which was opened by :func:`_create_handle`.
308
+
309
+ Parameters
310
+ ----------
311
+ handle: str
312
+ Which handle to close.
313
+ """
314
+ try:
315
+ self._send_to_api(method="post", endpoint="close", json={"handle": handle})
316
+ except DatabricksException as e:
317
+ if e.error_code == "RESOURCE_DOES_NOT_EXIST":
318
+ raise FileNotFoundError(e.message) from e
319
+
320
+ raise
321
+
322
+ def _add_data(self, handle, data):
323
+ """
324
+ Upload data to an already opened file handle
325
+ (opened by :func:`_create_handle`).
326
+ The maximal allowed data size is 1MB after
327
+ conversion to base64.
328
+ Remember to close the handle when you are finished.
329
+
330
+ Parameters
331
+ ----------
332
+ handle: str
333
+ Which handle to upload data to.
334
+ data: bytes
335
+ Block of data to add to the handle.
336
+ """
337
+ data = base64.b64encode(data).decode()
338
+ try:
339
+ self._send_to_api(
340
+ method="post",
341
+ endpoint="add-block",
342
+ json={"handle": handle, "data": data},
343
+ )
344
+ except DatabricksException as e:
345
+ if e.error_code == "RESOURCE_DOES_NOT_EXIST":
346
+ raise FileNotFoundError(e.message) from e
347
+ elif e.error_code == "MAX_BLOCK_SIZE_EXCEEDED":
348
+ raise ValueError(e.message) from e
349
+
350
+ raise
351
+
352
+ def _get_data(self, path, start, end):
353
+ """
354
+ Download data in bytes from a given absolute path in a block
355
+ from [start, end).
356
+ The maximum number of allowed bytes to read is 1MB.
357
+
358
+ Parameters
359
+ ----------
360
+ path: str
361
+ Absolute path to download data from
362
+ start: int
363
+ Start position of the block
364
+ end: int
365
+ End position of the block
366
+ """
367
+ try:
368
+ r = self._send_to_api(
369
+ method="get",
370
+ endpoint="read",
371
+ json={"path": path, "offset": start, "length": end - start},
372
+ )
373
+ return base64.b64decode(r["data"])
374
+ except DatabricksException as e:
375
+ if e.error_code == "RESOURCE_DOES_NOT_EXIST":
376
+ raise FileNotFoundError(e.message) from e
377
+ elif e.error_code in ["INVALID_PARAMETER_VALUE", "MAX_READ_SIZE_EXCEEDED"]:
378
+ raise ValueError(e.message) from e
379
+
380
+ raise
381
+
382
+ def invalidate_cache(self, path=None):
383
+ if path is None:
384
+ self.dircache.clear()
385
+ else:
386
+ self.dircache.pop(path, None)
387
+ super().invalidate_cache(path)
388
+
389
+
390
+ class DatabricksFile(AbstractBufferedFile):
391
+ """
392
+ Helper class for files referenced in the DatabricksFileSystem.
393
+ """
394
+
395
+ DEFAULT_BLOCK_SIZE = 1 * 2**20 # only allowed block size
396
+
397
+ def __init__(
398
+ self,
399
+ fs,
400
+ path,
401
+ mode="rb",
402
+ block_size="default",
403
+ autocommit=True,
404
+ cache_type="readahead",
405
+ cache_options=None,
406
+ **kwargs,
407
+ ):
408
+ """
409
+ Create a new instance of the DatabricksFile.
410
+
411
+ The blocksize needs to be the default one.
412
+ """
413
+ if block_size is None or block_size == "default":
414
+ block_size = self.DEFAULT_BLOCK_SIZE
415
+
416
+ assert block_size == self.DEFAULT_BLOCK_SIZE, (
417
+ f"Only the default block size is allowed, not {block_size}"
418
+ )
419
+
420
+ super().__init__(
421
+ fs,
422
+ path,
423
+ mode=mode,
424
+ block_size=block_size,
425
+ autocommit=autocommit,
426
+ cache_type=cache_type,
427
+ cache_options=cache_options or {},
428
+ **kwargs,
429
+ )
430
+
431
+ def _initiate_upload(self):
432
+ """Internal function to start a file upload"""
433
+ self.handle = self.fs._create_handle(self.path)
434
+
435
+ def _upload_chunk(self, final=False):
436
+ """Internal function to add a chunk of data to a started upload"""
437
+ self.buffer.seek(0)
438
+ data = self.buffer.getvalue()
439
+
440
+ data_chunks = [
441
+ data[start:end] for start, end in self._to_sized_blocks(len(data))
442
+ ]
443
+
444
+ for data_chunk in data_chunks:
445
+ self.fs._add_data(handle=self.handle, data=data_chunk)
446
+
447
+ if final:
448
+ self.fs._close_handle(handle=self.handle)
449
+ return True
450
+
451
+ def _fetch_range(self, start, end):
452
+ """Internal function to download a block of data"""
453
+ return_buffer = b""
454
+ length = end - start
455
+ for chunk_start, chunk_end in self._to_sized_blocks(length, start):
456
+ return_buffer += self.fs._get_data(
457
+ path=self.path, start=chunk_start, end=chunk_end
458
+ )
459
+
460
+ return return_buffer
461
+
462
+ def _to_sized_blocks(self, length, start=0):
463
+ """Helper function to split a range of the given length into block-sized chunks"""
464
+ end = start + length
465
+ for data_chunk in range(start, end, self.blocksize):
466
+ data_start = data_chunk
467
+ data_end = min(end, data_chunk + self.blocksize)
468
+ yield data_start, data_end
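+ # For example, with the 1 MB blocksize, _to_sized_blocks(2621440) yields
+ # (0, 1048576), (1048576, 2097152) and (2097152, 2621440).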
.venv/lib/python3.13/site-packages/fsspec/implementations/dirfs.py ADDED
@@ -0,0 +1,388 @@
1
+ from .. import filesystem
2
+ from ..asyn import AsyncFileSystem
3
+
4
+
5
+ class DirFileSystem(AsyncFileSystem):
6
+ """Directory prefix filesystem
7
+
8
+ The DirFileSystem is a filesystem-wrapper. It assumes every path it is dealing with
9
+ is relative to the `path`. After performing the necessary path operations it
10
+ delegates everything to the wrapped filesystem.
11
+ """
12
+
13
+ protocol = "dir"
14
+
15
+ def __init__(
16
+ self,
17
+ path=None,
18
+ fs=None,
19
+ fo=None,
20
+ target_protocol=None,
21
+ target_options=None,
22
+ **storage_options,
23
+ ):
24
+ """
25
+ Parameters
26
+ ----------
27
+ path: str
28
+ Path to the directory.
29
+ fs: AbstractFileSystem
30
+ An instantiated filesystem to wrap.
31
+ target_protocol, target_options:
32
+ if fs is none, construct it from these
33
+ fo: str
34
+ Alternate for path; do not provide both
35
+ """
36
+ super().__init__(**storage_options)
37
+ if fs is None:
38
+ fs = filesystem(protocol=target_protocol, **(target_options or {}))
39
+ path = path or fo
40
+
41
+ if self.asynchronous and not fs.async_impl:
42
+ raise ValueError("can't use asynchronous with non-async fs")
43
+
44
+ if fs.async_impl and self.asynchronous != fs.asynchronous:
45
+ raise ValueError("both dirfs and fs should be in the same sync/async mode")
46
+
47
+ self.path = fs._strip_protocol(path)
48
+ self.fs = fs
49
+
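+ # Minimal usage sketch (the local directory below is an assumption):
+ #
+ #   import fsspec
+ #   dirfs = fsspec.filesystem("dir", path="/tmp/root", target_protocol="file")
+ #   dirfs.ls("")           # lists /tmp/root
+ #   dirfs.open("a.txt")    # opens /tmp/root/a.txt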
50
+ def _join(self, path):
51
+ if isinstance(path, str):
52
+ if not self.path:
53
+ return path
54
+ if not path:
55
+ return self.path
56
+ return self.fs.sep.join((self.path, self._strip_protocol(path)))
57
+ if isinstance(path, dict):
58
+ return {self._join(_path): value for _path, value in path.items()}
59
+ return [self._join(_path) for _path in path]
60
+
61
+ def _relpath(self, path):
62
+ if isinstance(path, str):
63
+ if not self.path:
64
+ return path
65
+ # We need to account for S3FileSystem returning paths that do not
66
+ # start with a '/'
67
+ if path == self.path or (
68
+ self.path.startswith(self.fs.sep) and path == self.path[1:]
69
+ ):
70
+ return ""
71
+ prefix = self.path + self.fs.sep
72
+ if self.path.startswith(self.fs.sep) and not path.startswith(self.fs.sep):
73
+ prefix = prefix[1:]
74
+ assert path.startswith(prefix)
75
+ return path[len(prefix) :]
76
+ return [self._relpath(_path) for _path in path]
77
+
78
+ # Wrappers below
79
+
80
+ @property
81
+ def sep(self):
82
+ return self.fs.sep
83
+
84
+ async def set_session(self, *args, **kwargs):
85
+ return await self.fs.set_session(*args, **kwargs)
86
+
87
+ async def _rm_file(self, path, **kwargs):
88
+ return await self.fs._rm_file(self._join(path), **kwargs)
89
+
90
+ def rm_file(self, path, **kwargs):
91
+ return self.fs.rm_file(self._join(path), **kwargs)
92
+
93
+ async def _rm(self, path, *args, **kwargs):
94
+ return await self.fs._rm(self._join(path), *args, **kwargs)
95
+
96
+ def rm(self, path, *args, **kwargs):
97
+ return self.fs.rm(self._join(path), *args, **kwargs)
98
+
99
+ async def _cp_file(self, path1, path2, **kwargs):
100
+ return await self.fs._cp_file(self._join(path1), self._join(path2), **kwargs)
101
+
102
+ def cp_file(self, path1, path2, **kwargs):
103
+ return self.fs.cp_file(self._join(path1), self._join(path2), **kwargs)
104
+
105
+ async def _copy(
106
+ self,
107
+ path1,
108
+ path2,
109
+ *args,
110
+ **kwargs,
111
+ ):
112
+ return await self.fs._copy(
113
+ self._join(path1),
114
+ self._join(path2),
115
+ *args,
116
+ **kwargs,
117
+ )
118
+
119
+ def copy(self, path1, path2, *args, **kwargs):
120
+ return self.fs.copy(
121
+ self._join(path1),
122
+ self._join(path2),
123
+ *args,
124
+ **kwargs,
125
+ )
126
+
127
+ async def _pipe(self, path, *args, **kwargs):
128
+ return await self.fs._pipe(self._join(path), *args, **kwargs)
129
+
130
+ def pipe(self, path, *args, **kwargs):
131
+ return self.fs.pipe(self._join(path), *args, **kwargs)
132
+
133
+ async def _pipe_file(self, path, *args, **kwargs):
134
+ return await self.fs._pipe_file(self._join(path), *args, **kwargs)
135
+
136
+ def pipe_file(self, path, *args, **kwargs):
137
+ return self.fs.pipe_file(self._join(path), *args, **kwargs)
138
+
139
+ async def _cat_file(self, path, *args, **kwargs):
140
+ return await self.fs._cat_file(self._join(path), *args, **kwargs)
141
+
142
+ def cat_file(self, path, *args, **kwargs):
143
+ return self.fs.cat_file(self._join(path), *args, **kwargs)
144
+
145
+ async def _cat(self, path, *args, **kwargs):
146
+ ret = await self.fs._cat(
147
+ self._join(path),
148
+ *args,
149
+ **kwargs,
150
+ )
151
+
152
+ if isinstance(ret, dict):
153
+ return {self._relpath(key): value for key, value in ret.items()}
154
+
155
+ return ret
156
+
157
+ def cat(self, path, *args, **kwargs):
158
+ ret = self.fs.cat(
159
+ self._join(path),
160
+ *args,
161
+ **kwargs,
162
+ )
163
+
164
+ if isinstance(ret, dict):
165
+ return {self._relpath(key): value for key, value in ret.items()}
166
+
167
+ return ret
168
+
169
+ async def _put_file(self, lpath, rpath, **kwargs):
170
+ return await self.fs._put_file(lpath, self._join(rpath), **kwargs)
171
+
172
+ def put_file(self, lpath, rpath, **kwargs):
173
+ return self.fs.put_file(lpath, self._join(rpath), **kwargs)
174
+
175
+ async def _put(
176
+ self,
177
+ lpath,
178
+ rpath,
179
+ *args,
180
+ **kwargs,
181
+ ):
182
+ return await self.fs._put(
183
+ lpath,
184
+ self._join(rpath),
185
+ *args,
186
+ **kwargs,
187
+ )
188
+
189
+ def put(self, lpath, rpath, *args, **kwargs):
190
+ return self.fs.put(
191
+ lpath,
192
+ self._join(rpath),
193
+ *args,
194
+ **kwargs,
195
+ )
196
+
197
+ async def _get_file(self, rpath, lpath, **kwargs):
198
+ return await self.fs._get_file(self._join(rpath), lpath, **kwargs)
199
+
200
+ def get_file(self, rpath, lpath, **kwargs):
201
+ return self.fs.get_file(self._join(rpath), lpath, **kwargs)
202
+
203
+ async def _get(self, rpath, *args, **kwargs):
204
+ return await self.fs._get(self._join(rpath), *args, **kwargs)
205
+
206
+ def get(self, rpath, *args, **kwargs):
207
+ return self.fs.get(self._join(rpath), *args, **kwargs)
208
+
209
+ async def _isfile(self, path):
210
+ return await self.fs._isfile(self._join(path))
211
+
212
+ def isfile(self, path):
213
+ return self.fs.isfile(self._join(path))
214
+
215
+ async def _isdir(self, path):
216
+ return await self.fs._isdir(self._join(path))
217
+
218
+ def isdir(self, path):
219
+ return self.fs.isdir(self._join(path))
220
+
221
+ async def _size(self, path):
222
+ return await self.fs._size(self._join(path))
223
+
224
+ def size(self, path):
225
+ return self.fs.size(self._join(path))
226
+
227
+ async def _exists(self, path):
228
+ return await self.fs._exists(self._join(path))
229
+
230
+ def exists(self, path):
231
+ return self.fs.exists(self._join(path))
232
+
233
+ async def _info(self, path, **kwargs):
234
+ info = await self.fs._info(self._join(path), **kwargs)
235
+ info = info.copy()
236
+ info["name"] = self._relpath(info["name"])
237
+ return info
238
+
239
+ def info(self, path, **kwargs):
240
+ info = self.fs.info(self._join(path), **kwargs)
241
+ info = info.copy()
242
+ info["name"] = self._relpath(info["name"])
243
+ return info
244
+
245
+ async def _ls(self, path, detail=True, **kwargs):
246
+ ret = (await self.fs._ls(self._join(path), detail=detail, **kwargs)).copy()
247
+ if detail:
248
+ out = []
249
+ for entry in ret:
250
+ entry = entry.copy()
251
+ entry["name"] = self._relpath(entry["name"])
252
+ out.append(entry)
253
+ return out
254
+
255
+ return self._relpath(ret)
256
+
257
+ def ls(self, path, detail=True, **kwargs):
258
+ ret = self.fs.ls(self._join(path), detail=detail, **kwargs).copy()
259
+ if detail:
260
+ out = []
261
+ for entry in ret:
262
+ entry = entry.copy()
263
+ entry["name"] = self._relpath(entry["name"])
264
+ out.append(entry)
265
+ return out
266
+
267
+ return self._relpath(ret)
268
+
269
+ async def _walk(self, path, *args, **kwargs):
270
+ async for root, dirs, files in self.fs._walk(self._join(path), *args, **kwargs):
271
+ yield self._relpath(root), dirs, files
272
+
273
+ def walk(self, path, *args, **kwargs):
274
+ for root, dirs, files in self.fs.walk(self._join(path), *args, **kwargs):
275
+ yield self._relpath(root), dirs, files
276
+
277
+ async def _glob(self, path, **kwargs):
278
+ detail = kwargs.get("detail", False)
279
+ ret = await self.fs._glob(self._join(path), **kwargs)
280
+ if detail:
281
+ return {self._relpath(path): info for path, info in ret.items()}
282
+ return self._relpath(ret)
283
+
284
+ def glob(self, path, **kwargs):
285
+ detail = kwargs.get("detail", False)
286
+ ret = self.fs.glob(self._join(path), **kwargs)
287
+ if detail:
288
+ return {self._relpath(path): info for path, info in ret.items()}
289
+ return self._relpath(ret)
290
+
291
+ async def _du(self, path, *args, **kwargs):
292
+ total = kwargs.get("total", True)
293
+ ret = await self.fs._du(self._join(path), *args, **kwargs)
294
+ if total:
295
+ return ret
296
+
297
+ return {self._relpath(path): size for path, size in ret.items()}
298
+
299
+ def du(self, path, *args, **kwargs):
300
+ total = kwargs.get("total", True)
301
+ ret = self.fs.du(self._join(path), *args, **kwargs)
302
+ if total:
303
+ return ret
304
+
305
+ return {self._relpath(path): size for path, size in ret.items()}
306
+
307
+ async def _find(self, path, *args, **kwargs):
308
+ detail = kwargs.get("detail", False)
309
+ ret = await self.fs._find(self._join(path), *args, **kwargs)
310
+ if detail:
311
+ return {self._relpath(path): info for path, info in ret.items()}
312
+ return self._relpath(ret)
313
+
314
+ def find(self, path, *args, **kwargs):
315
+ detail = kwargs.get("detail", False)
316
+ ret = self.fs.find(self._join(path), *args, **kwargs)
317
+ if detail:
318
+ return {self._relpath(path): info for path, info in ret.items()}
319
+ return self._relpath(ret)
320
+
321
+ async def _expand_path(self, path, *args, **kwargs):
322
+ return self._relpath(
323
+ await self.fs._expand_path(self._join(path), *args, **kwargs)
324
+ )
325
+
326
+ def expand_path(self, path, *args, **kwargs):
327
+ return self._relpath(self.fs.expand_path(self._join(path), *args, **kwargs))
328
+
329
+ async def _mkdir(self, path, *args, **kwargs):
330
+ return await self.fs._mkdir(self._join(path), *args, **kwargs)
331
+
332
+ def mkdir(self, path, *args, **kwargs):
333
+ return self.fs.mkdir(self._join(path), *args, **kwargs)
334
+
335
+ async def _makedirs(self, path, *args, **kwargs):
336
+ return await self.fs._makedirs(self._join(path), *args, **kwargs)
337
+
338
+ def makedirs(self, path, *args, **kwargs):
339
+ return self.fs.makedirs(self._join(path), *args, **kwargs)
340
+
341
+ def rmdir(self, path):
342
+ return self.fs.rmdir(self._join(path))
343
+
344
+ def mv(self, path1, path2, **kwargs):
345
+ return self.fs.mv(
346
+ self._join(path1),
347
+ self._join(path2),
348
+ **kwargs,
349
+ )
350
+
351
+ def touch(self, path, **kwargs):
352
+ return self.fs.touch(self._join(path), **kwargs)
353
+
354
+ def created(self, path):
355
+ return self.fs.created(self._join(path))
356
+
357
+ def modified(self, path):
358
+ return self.fs.modified(self._join(path))
359
+
360
+ def sign(self, path, *args, **kwargs):
361
+ return self.fs.sign(self._join(path), *args, **kwargs)
362
+
363
+ def __repr__(self):
364
+ return f"{self.__class__.__qualname__}(path='{self.path}', fs={self.fs})"
365
+
366
+ def open(
367
+ self,
368
+ path,
369
+ *args,
370
+ **kwargs,
371
+ ):
372
+ return self.fs.open(
373
+ self._join(path),
374
+ *args,
375
+ **kwargs,
376
+ )
377
+
378
+ async def open_async(
379
+ self,
380
+ path,
381
+ *args,
382
+ **kwargs,
383
+ ):
384
+ return await self.fs.open_async(
385
+ self._join(path),
386
+ *args,
387
+ **kwargs,
388
+ )
.venv/lib/python3.13/site-packages/fsspec/implementations/ftp.py ADDED
@@ -0,0 +1,387 @@
1
+ import os
2
+ import uuid
3
+ from ftplib import FTP, FTP_TLS, Error, error_perm
4
+ from typing import Any
5
+
6
+ from ..spec import AbstractBufferedFile, AbstractFileSystem
7
+ from ..utils import infer_storage_options, isfilelike
8
+
9
+
10
+ class FTPFileSystem(AbstractFileSystem):
11
+ """A filesystem over classic FTP"""
12
+
13
+ root_marker = "/"
14
+ cachable = False
15
+ protocol = "ftp"
16
+
17
+ def __init__(
18
+ self,
19
+ host,
20
+ port=21,
21
+ username=None,
22
+ password=None,
23
+ acct=None,
24
+ block_size=None,
25
+ tempdir=None,
26
+ timeout=30,
27
+ encoding="utf-8",
28
+ tls=False,
29
+ **kwargs,
30
+ ):
31
+ """
32
+ You can use _get_kwargs_from_urls to get some kwargs from
33
+ a reasonable FTP url.
34
+
35
+ Authentication will be anonymous if username/password are not
36
+ given.
37
+
38
+ Parameters
39
+ ----------
40
+ host: str
41
+ The remote server name/ip to connect to
42
+ port: int
43
+ Port to connect with
44
+ username: str or None
45
+ If authenticating, the user's identifier
46
+ password: str of None
47
+ User's password on the server, if using
48
+ acct: str or None
49
+ Some servers also need an "account" string for auth
50
+ block_size: int or None
51
+ If given, the read-ahead or write buffer size.
52
+ tempdir: str
53
+ Directory on remote to put temporary files when in a transaction
54
+ timeout: int
55
+ Timeout of the ftp connection in seconds
56
+ encoding: str
57
+ Encoding to use for directories and filenames in FTP connection
58
+ tls: bool
59
+ Use FTP-TLS, by default False
60
+ """
61
+ super().__init__(**kwargs)
62
+ self.host = host
63
+ self.port = port
64
+ self.tempdir = tempdir or "/tmp"
65
+ self.cred = username or "", password or "", acct or ""
66
+ self.timeout = timeout
67
+ self.encoding = encoding
68
+ if block_size is not None:
69
+ self.blocksize = block_size
70
+ else:
71
+ self.blocksize = 2**16
72
+ self.tls = tls
73
+ self._connect()
74
+ if self.tls:
75
+ self.ftp.prot_p()
76
+
77
+ def _connect(self):
78
+ if self.tls:
79
+ ftp_cls = FTP_TLS
80
+ else:
81
+ ftp_cls = FTP
82
+ self.ftp = ftp_cls(timeout=self.timeout, encoding=self.encoding)
83
+ self.ftp.connect(self.host, self.port)
84
+ self.ftp.login(*self.cred)
85
+
86
+ @classmethod
87
+ def _strip_protocol(cls, path):
88
+ return "/" + infer_storage_options(path)["path"].lstrip("/").rstrip("/")
89
+
90
+ @staticmethod
91
+ def _get_kwargs_from_urls(urlpath):
92
+ out = infer_storage_options(urlpath)
93
+ out.pop("path", None)
94
+ out.pop("protocol", None)
95
+ return out
96
+
97
+ def ls(self, path, detail=True, **kwargs):
98
+ path = self._strip_protocol(path)
99
+ out = []
100
+ if path not in self.dircache:
101
+ try:
102
+ try:
103
+ out = [
104
+ (fn, details)
105
+ for (fn, details) in self.ftp.mlsd(path)
106
+ if fn not in [".", ".."]
107
+ and details["type"] not in ["pdir", "cdir"]
108
+ ]
109
+ except error_perm:
110
+ out = _mlsd2(self.ftp, path) # Not platform independent
111
+ for fn, details in out:
112
+ details["name"] = "/".join(
113
+ ["" if path == "/" else path, fn.lstrip("/")]
114
+ )
115
+ if details["type"] == "file":
116
+ details["size"] = int(details["size"])
117
+ else:
118
+ details["size"] = 0
119
+ if details["type"] == "dir":
120
+ details["type"] = "directory"
121
+ self.dircache[path] = out
122
+ except Error:
123
+ try:
124
+ info = self.info(path)
125
+ if info["type"] == "file":
126
+ out = [(path, info)]
127
+ except (Error, IndexError) as exc:
128
+ raise FileNotFoundError(path) from exc
129
+ files = self.dircache.get(path, out)
130
+ if not detail:
131
+ return sorted([fn for fn, details in files])
132
+ return [details for fn, details in files]
133
+
134
+ def info(self, path, **kwargs):
135
+ # implement with direct method
136
+ path = self._strip_protocol(path)
137
+ if path == "/":
138
+ # special case, since this dir has no real entry
139
+ return {"name": "/", "size": 0, "type": "directory"}
140
+ files = self.ls(self._parent(path).lstrip("/"), True)
141
+ try:
142
+ out = next(f for f in files if f["name"] == path)
143
+ except StopIteration as exc:
144
+ raise FileNotFoundError(path) from exc
145
+ return out
146
+
147
+ def get_file(self, rpath, lpath, **kwargs):
148
+ if self.isdir(rpath):
149
+ if not os.path.exists(lpath):
150
+ os.mkdir(lpath)
151
+ return
152
+ if isfilelike(lpath):
153
+ outfile = lpath
154
+ else:
155
+ outfile = open(lpath, "wb")
156
+
157
+ def cb(x):
158
+ outfile.write(x)
159
+
160
+ self.ftp.retrbinary(
161
+ f"RETR {rpath}",
162
+ blocksize=self.blocksize,
163
+ callback=cb,
164
+ )
165
+ if not isfilelike(lpath):
166
+ outfile.close()
167
+
168
+ def cat_file(self, path, start=None, end=None, **kwargs):
169
+ if end is not None:
170
+ return super().cat_file(path, start, end, **kwargs)
171
+ out = []
172
+
173
+ def cb(x):
174
+ out.append(x)
175
+
176
+ try:
177
+ self.ftp.retrbinary(
178
+ f"RETR {path}",
179
+ blocksize=self.blocksize,
180
+ rest=start,
181
+ callback=cb,
182
+ )
183
+ except (Error, error_perm) as orig_exc:
184
+ raise FileNotFoundError(path) from orig_exc
185
+ return b"".join(out)
186
+
187
+ def _open(
188
+ self,
189
+ path,
190
+ mode="rb",
191
+ block_size=None,
192
+ cache_options=None,
193
+ autocommit=True,
194
+ **kwargs,
195
+ ):
196
+ path = self._strip_protocol(path)
197
+ block_size = block_size or self.blocksize
198
+ return FTPFile(
199
+ self,
200
+ path,
201
+ mode=mode,
202
+ block_size=block_size,
203
+ tempdir=self.tempdir,
204
+ autocommit=autocommit,
205
+ cache_options=cache_options,
206
+ )
207
+
208
+ def _rm(self, path):
209
+ path = self._strip_protocol(path)
210
+ self.ftp.delete(path)
211
+ self.invalidate_cache(self._parent(path))
212
+
213
+ def rm(self, path, recursive=False, maxdepth=None):
214
+ paths = self.expand_path(path, recursive=recursive, maxdepth=maxdepth)
215
+ for p in reversed(paths):
216
+ if self.isfile(p):
217
+ self.rm_file(p)
218
+ else:
219
+ self.rmdir(p)
220
+
221
+ def mkdir(self, path: str, create_parents: bool = True, **kwargs: Any) -> None:
222
+ path = self._strip_protocol(path)
223
+ parent = self._parent(path)
224
+ if parent != self.root_marker and not self.exists(parent) and create_parents:
225
+ self.mkdir(parent, create_parents=create_parents)
226
+
227
+ self.ftp.mkd(path)
228
+ self.invalidate_cache(self._parent(path))
229
+
230
+ def makedirs(self, path: str, exist_ok: bool = False) -> None:
231
+ path = self._strip_protocol(path)
232
+ if self.exists(path):
233
+ # NB: "/" does not "exist" as it has no directory entry
234
+ if not exist_ok:
235
+ raise FileExistsError(f"{path} exists without `exist_ok`")
236
+ # exists_ok=True -> no-op
237
+ else:
238
+ self.mkdir(path, create_parents=True)
239
+
240
+ def rmdir(self, path):
241
+ path = self._strip_protocol(path)
242
+ self.ftp.rmd(path)
243
+ self.invalidate_cache(self._parent(path))
244
+
245
+ def mv(self, path1, path2, **kwargs):
246
+ path1 = self._strip_protocol(path1)
247
+ path2 = self._strip_protocol(path2)
248
+ self.ftp.rename(path1, path2)
249
+ self.invalidate_cache(self._parent(path1))
250
+ self.invalidate_cache(self._parent(path2))
251
+
252
+ def __del__(self):
253
+ self.ftp.close()
254
+
255
+ def invalidate_cache(self, path=None):
256
+ if path is None:
257
+ self.dircache.clear()
258
+ else:
259
+ self.dircache.pop(path, None)
260
+ super().invalidate_cache(path)
261
+
262
+
263
+ class TransferDone(Exception):
264
+ """Internal exception to break out of transfer"""
265
+
266
+ pass
267
+
268
+
269
+ class FTPFile(AbstractBufferedFile):
270
+ """Interact with a remote FTP file with read/write buffering"""
271
+
272
+ def __init__(
273
+ self,
274
+ fs,
275
+ path,
276
+ mode="rb",
277
+ block_size="default",
278
+ autocommit=True,
279
+ cache_type="readahead",
280
+ cache_options=None,
281
+ **kwargs,
282
+ ):
283
+ super().__init__(
284
+ fs,
285
+ path,
286
+ mode=mode,
287
+ block_size=block_size,
288
+ autocommit=autocommit,
289
+ cache_type=cache_type,
290
+ cache_options=cache_options,
291
+ **kwargs,
292
+ )
293
+ if not autocommit:
294
+ self.target = self.path
295
+ self.path = "/".join([kwargs["tempdir"], str(uuid.uuid4())])
296
+
297
+ def commit(self):
298
+ self.fs.mv(self.path, self.target)
299
+
300
+ def discard(self):
301
+ self.fs.rm(self.path)
302
+
303
+ def _fetch_range(self, start, end):
304
+ """Get bytes between given byte limits
305
+
306
+ Implemented by raising an exception in the fetch callback when the
307
+ number of bytes received reaches the requested amount.
308
+
309
+ Will fail if the server does not respect the REST command on
310
+ retrieve requests.
311
+ """
312
+ out = []
313
+ total = [0]
314
+
315
+ def callback(x):
316
+ total[0] += len(x)
317
+ if total[0] > end - start:
318
+ out.append(x[: (end - start) - total[0]])
319
+ if end < self.size:
320
+ raise TransferDone
321
+ else:
322
+ out.append(x)
323
+
324
+ if total[0] == end - start and end < self.size:
325
+ raise TransferDone
326
+
327
+ try:
328
+ self.fs.ftp.retrbinary(
329
+ f"RETR {self.path}",
330
+ blocksize=self.blocksize,
331
+ rest=start,
332
+ callback=callback,
333
+ )
334
+ except TransferDone:
335
+ try:
336
+ # stop transfer, we got enough bytes for this block
337
+ self.fs.ftp.abort()
338
+ self.fs.ftp.getmultiline()
339
+ except Error:
340
+ self.fs._connect()
341
+
342
+ return b"".join(out)
343
+
344
+ def _upload_chunk(self, final=False):
345
+ self.buffer.seek(0)
346
+ self.fs.ftp.storbinary(
347
+ f"STOR {self.path}", self.buffer, blocksize=self.blocksize, rest=self.offset
348
+ )
349
+ return True
350
+
351
+
352
+ def _mlsd2(ftp, path="."):
353
+ """
354
+ Fall back to using `dir` instead of `mlsd` if not supported.
355
+
356
+ This parses a Linux style `ls -l` response to `dir`, but the response may
357
+ be platform dependent.
358
+
359
+ Parameters
360
+ ----------
361
+ ftp: ftplib.FTP
362
+ path: str
363
+ Expects to be given path, but defaults to ".".
364
+ """
365
+ lines = []
366
+ minfo = []
367
+ ftp.dir(path, lines.append)
368
+ for line in lines:
369
+ split_line = line.split()
370
+ if len(split_line) < 9:
371
+ continue
372
+ this = (
373
+ split_line[-1],
374
+ {
375
+ "modify": " ".join(split_line[5:8]),
376
+ "unix.owner": split_line[2],
377
+ "unix.group": split_line[3],
378
+ "unix.mode": split_line[0],
379
+ "size": split_line[4],
380
+ },
381
+ )
382
+ if this[1]["unix.mode"][0] == "d":
383
+ this[1]["type"] = "dir"
384
+ else:
385
+ this[1]["type"] = "file"
386
+ minfo.append(this)
387
+ return minfo
.venv/lib/python3.13/site-packages/fsspec/implementations/gist.py ADDED
@@ -0,0 +1,232 @@
1
+ import requests
2
+
3
+ from ..spec import AbstractFileSystem
4
+ from ..utils import infer_storage_options
5
+ from .memory import MemoryFile
6
+
7
+
8
+ class GistFileSystem(AbstractFileSystem):
9
+ """
10
+ Interface to files in a single GitHub Gist.
11
+
12
+ Provides read-only access to a gist's files. Gists do not contain
13
+ subdirectories, so file listing is straightforward.
14
+
15
+ Parameters
16
+ ----------
17
+ gist_id : str
18
+ The ID of the gist you want to access (the long hex value from the URL).
19
+ filenames : list[str] (optional)
20
+ If provided, only make a file system representing these files, and do not fetch
21
+ the list of all files for this gist.
22
+ sha : str (optional)
23
+ If provided, fetch a particular revision of the gist. If omitted,
24
+ the latest revision is used.
25
+ username : str (optional)
26
+ GitHub username for authentication (required if token is given).
27
+ token : str (optional)
28
+ GitHub personal access token (required if username is given).
29
+ timeout : (float, float) or float, optional
30
+ Connect and read timeouts for requests (default 60s each).
31
+ kwargs : dict
32
+ Stored on `self.request_kw` and passed to `requests.get` when fetching Gist
33
+ metadata or reading ("opening") a file.
34
+ """
35
+
36
+ protocol = "gist"
37
+ gist_url = "https://api.github.com/gists/{gist_id}"
38
+ gist_rev_url = "https://api.github.com/gists/{gist_id}/{sha}"
39
+
40
+ def __init__(
41
+ self,
42
+ gist_id,
43
+ filenames=None,
44
+ sha=None,
45
+ username=None,
46
+ token=None,
47
+ timeout=None,
48
+ **kwargs,
49
+ ):
50
+ super().__init__()
51
+ self.gist_id = gist_id
52
+ self.filenames = filenames
53
+ self.sha = sha # revision of the gist (optional)
54
+ if (username is None) ^ (token is None):
55
+ # Both or neither must be set
56
+ if username or token:
57
+ raise ValueError("Auth requires both username and token, or neither.")
58
+ self.username = username
59
+ self.token = token
60
+ self.request_kw = kwargs
61
+ # Default timeouts to 60s connect/read if none provided
62
+ self.timeout = timeout if timeout is not None else (60, 60)
63
+
64
+ # We use a single-level "directory" cache, because a gist is essentially flat
65
+ self.dircache[""] = self._fetch_file_list()
66
+
67
+ @property
68
+ def kw(self):
69
+ """Auth parameters passed to 'requests' if we have username/token."""
70
+ if self.username is not None and self.token is not None:
71
+ return {"auth": (self.username, self.token), **self.request_kw}
72
+ return self.request_kw
73
+
74
+ def _fetch_gist_metadata(self):
75
+ """
76
+ Fetch the JSON metadata for this gist (possibly for a specific revision).
77
+ """
78
+ if self.sha:
79
+ url = self.gist_rev_url.format(gist_id=self.gist_id, sha=self.sha)
80
+ else:
81
+ url = self.gist_url.format(gist_id=self.gist_id)
82
+
83
+ r = requests.get(url, timeout=self.timeout, **self.kw)
84
+ if r.status_code == 404:
85
+ raise FileNotFoundError(
86
+ f"Gist not found: {self.gist_id}@{self.sha or 'latest'}"
87
+ )
88
+ r.raise_for_status()
89
+ return r.json()
90
+
91
+ def _fetch_file_list(self):
92
+ """
93
+ Returns a list of dicts describing each file in the gist. These get stored
94
+ in self.dircache[""].
95
+ """
96
+ meta = self._fetch_gist_metadata()
97
+ if self.filenames:
98
+ available_files = meta.get("files", {})
99
+ files = {}
100
+ for fn in self.filenames:
101
+ if fn not in available_files:
102
+ raise FileNotFoundError(fn)
103
+ files[fn] = available_files[fn]
104
+ else:
105
+ files = meta.get("files", {})
106
+
107
+ out = []
108
+ for fname, finfo in files.items():
109
+ if finfo is None:
110
+ # Occasionally GitHub returns a file entry with null if it was deleted
111
+ continue
112
+ # Build a directory entry
113
+ out.append(
114
+ {
115
+ "name": fname, # file's name
116
+ "type": "file", # gists have no subdirectories
117
+ "size": finfo.get("size", 0), # file size in bytes
118
+ "raw_url": finfo.get("raw_url"),
119
+ }
120
+ )
121
+ return out
122
+
123
+ @classmethod
124
+ def _strip_protocol(cls, path):
125
+ """
126
+ Remove 'gist://' from the path, if present.
127
+ """
128
+ # The default infer_storage_options can handle gist://username:token@id/file
129
+ # or gist://id/file, but let's ensure we handle a normal usage too.
130
+ # We'll just strip the protocol prefix if it exists.
131
+ path = infer_storage_options(path).get("path", path)
132
+ return path.lstrip("/")
133
+
134
+ @staticmethod
135
+ def _get_kwargs_from_urls(path):
136
+ """
137
+ Parse 'gist://' style URLs into GistFileSystem constructor kwargs.
138
+ For example:
139
+ gist://:TOKEN@<gist_id>/file.txt
140
+ gist://username:TOKEN@<gist_id>/file.txt
141
+ """
142
+ so = infer_storage_options(path)
143
+ out = {}
144
+ if "username" in so and so["username"]:
145
+ out["username"] = so["username"]
146
+ if "password" in so and so["password"]:
147
+ out["token"] = so["password"]
148
+ if "host" in so and so["host"]:
149
+ # We interpret 'host' as the gist ID
150
+ out["gist_id"] = so["host"]
151
+
152
+ # Extract SHA and filename from path
153
+ if "path" in so and so["path"]:
154
+ path_parts = so["path"].rsplit("/", 2)[-2:]
155
+ if len(path_parts) == 2:
156
+ if path_parts[0]: # SHA present
157
+ out["sha"] = path_parts[0]
158
+ if path_parts[1]: # filename also present
159
+ out["filenames"] = [path_parts[1]]
160
+
161
+ return out
162
+
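
For reference, a sketch of what the URL decomposition above yields; the token, gist ID, SHA, and filename are placeholders, and the exact output depends on `infer_storage_options`:

```python
# Hypothetical inputs, shown with the expected decomposition:
GistFileSystem._get_kwargs_from_urls("gist://:TOKEN@abc123/file.txt")
# -> {"token": "TOKEN", "gist_id": "abc123", "filenames": ["file.txt"]}

GistFileSystem._get_kwargs_from_urls("gist://user:TOKEN@abc123/SHA/file.txt")
# -> {"username": "user", "token": "TOKEN", "gist_id": "abc123",
#     "sha": "SHA", "filenames": ["file.txt"]}
```
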
163
+ def ls(self, path="", detail=False, **kwargs):
164
+ """
165
+ List files in the gist. Gists are single-level, so any 'path' is basically
166
+ the filename, or empty for all files.
167
+
168
+ Parameters
169
+ ----------
170
+ path : str, optional
171
+ The filename to list. If empty, returns all files in the gist.
172
+ detail : bool, default False
173
+ If True, return a list of dicts; if False, return a list of filenames.
174
+ """
175
+ path = self._strip_protocol(path or "")
176
+ # If path is empty, return all
177
+ if path == "":
178
+ results = self.dircache[""]
179
+ else:
180
+ # We want just the single file with this name
181
+ all_files = self.dircache[""]
182
+ results = [f for f in all_files if f["name"] == path]
183
+ if not results:
184
+ raise FileNotFoundError(path)
185
+ if detail:
186
+ return results
187
+ else:
188
+ return sorted(f["name"] for f in results)
189
+
190
+ def _open(self, path, mode="rb", block_size=None, **kwargs):
191
+ """
192
+ Read a single file from the gist.
193
+ """
194
+ if mode != "rb":
195
+ raise NotImplementedError("GitHub Gist FS is read-only (no write).")
196
+
197
+ path = self._strip_protocol(path)
198
+ # Find the file entry in our dircache
199
+ matches = [f for f in self.dircache[""] if f["name"] == path]
200
+ if not matches:
201
+ raise FileNotFoundError(path)
202
+ finfo = matches[0]
203
+
204
+ raw_url = finfo.get("raw_url")
205
+ if not raw_url:
206
+ raise FileNotFoundError(f"No raw_url for file: {path}")
207
+
208
+ r = requests.get(raw_url, timeout=self.timeout, **self.kw)
209
+ if r.status_code == 404:
210
+ raise FileNotFoundError(path)
211
+ r.raise_for_status()
212
+ return MemoryFile(path, None, r.content)
213
+
214
+ def cat(self, path, recursive=False, on_error="raise", **kwargs):
215
+ """
216
+ Return {path: contents} for the given file or files. If 'recursive' is True,
217
+ and path is empty, returns all files in the gist.
218
+ """
219
+ paths = self.expand_path(path, recursive=recursive)
220
+ out = {}
221
+ for p in paths:
222
+ try:
223
+ with self.open(p, "rb") as f:
224
+ out[p] = f.read()
225
+ except FileNotFoundError as e:
226
+ if on_error == "raise":
227
+ raise e
228
+ elif on_error == "omit":
229
+ pass # skip
230
+ else:
231
+ out[p] = e
232
+ return out
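
A usage sketch for the class above; the gist ID and filename are placeholders for a real public gist:

```python
import fsspec

# Placeholder gist ID; substitute the hex ID of a real public gist.
fs = fsspec.filesystem("gist", gist_id="0123456789abcdef0123456789abcdef")
print(fs.ls(""))                 # all filenames in the gist
with fs.open("notes.txt") as f:  # "notes.txt" assumed to exist in the gist
    print(f.read())
```
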
.venv/lib/python3.13/site-packages/fsspec/implementations/git.py ADDED
@@ -0,0 +1,114 @@
1
+ import os
2
+
3
+ import pygit2
4
+
5
+ from fsspec.spec import AbstractFileSystem
6
+
7
+ from .memory import MemoryFile
8
+
9
+
10
+ class GitFileSystem(AbstractFileSystem):
11
+ """Browse the files of a local git repo at any hash/tag/branch
12
+
13
+ (experimental backend)
14
+ """
15
+
16
+ root_marker = ""
17
+ cachable = True
18
+
19
+ def __init__(self, path=None, fo=None, ref=None, **kwargs):
20
+ """
21
+
22
+ Parameters
23
+ ----------
24
+ path: str (optional)
25
+ Local location of the repo (uses current directory if not given).
26
+ May be deprecated in favour of ``fo``. When used with a higher
27
+ level function such as fsspec.open(), may be of the form
28
+ "git://[path-to-repo[:]][ref@]path/to/file" (but the actual
29
+ file path should not contain "@" or ":").
30
+ fo: str (optional)
31
+ Same as ``path``, but passed as part of a chained URL. This one
32
+ takes precedence if both are given.
33
+ ref: str (optional)
34
+ Reference to work with, could be a hash, tag or branch name. Defaults
35
+ to "master". Note that ``ls`` and ``open`` also take hash,
36
+ so this becomes the default for those operations
37
+ kwargs
38
+ """
39
+ super().__init__(**kwargs)
40
+ self.repo = pygit2.Repository(fo or path or os.getcwd())
41
+ self.ref = ref or "master"
42
+
43
+ @classmethod
44
+ def _strip_protocol(cls, path):
45
+ path = super()._strip_protocol(path).lstrip("/")
46
+ if ":" in path:
47
+ path = path.split(":", 1)[1]
48
+ if "@" in path:
49
+ path = path.split("@", 1)[1]
50
+ return path.lstrip("/")
51
+
52
+ def _path_to_object(self, path, ref):
53
+ comm, ref = self.repo.resolve_refish(ref or self.ref)
54
+ parts = path.split("/")
55
+ tree = comm.tree
56
+ for part in parts:
57
+ if part and isinstance(tree, pygit2.Tree):
58
+ if part not in tree:
59
+ raise FileNotFoundError(path)
60
+ tree = tree[part]
61
+ return tree
62
+
63
+ @staticmethod
64
+ def _get_kwargs_from_urls(path):
65
+ path = path.removeprefix("git://")
66
+ out = {}
67
+ if ":" in path:
68
+ out["path"], path = path.split(":", 1)
69
+ if "@" in path:
70
+ out["ref"], path = path.split("@", 1)
71
+ return out
72
+
73
+ @staticmethod
74
+ def _object_to_info(obj, path=None):
75
+ # obj.name and obj.filemode are None for the root tree!
76
+ is_dir = isinstance(obj, pygit2.Tree)
77
+ return {
78
+ "type": "directory" if is_dir else "file",
79
+ "name": (
80
+ "/".join([path, obj.name or ""]).lstrip("/") if path else obj.name
81
+ ),
82
+ "hex": str(obj.id),
83
+ "mode": "100644" if obj.filemode is None else f"{obj.filemode:o}",
84
+ "size": 0 if is_dir else obj.size,
85
+ }
86
+
87
+ def ls(self, path, detail=True, ref=None, **kwargs):
88
+ tree = self._path_to_object(self._strip_protocol(path), ref)
89
+ return [
90
+ GitFileSystem._object_to_info(obj, path)
91
+ if detail
92
+ else GitFileSystem._object_to_info(obj, path)["name"]
93
+ for obj in (tree if isinstance(tree, pygit2.Tree) else [tree])
94
+ ]
95
+
96
+ def info(self, path, ref=None, **kwargs):
97
+ tree = self._path_to_object(self._strip_protocol(path), ref)
98
+ return GitFileSystem._object_to_info(tree, path)
99
+
100
+ def ukey(self, path, ref=None):
101
+ return self.info(path, ref=ref)["hex"]
102
+
103
+ def _open(
104
+ self,
105
+ path,
106
+ mode="rb",
107
+ block_size=None,
108
+ autocommit=True,
109
+ cache_options=None,
110
+ ref=None,
111
+ **kwargs,
112
+ ):
113
+ obj = self._path_to_object(path, ref or self.ref)
114
+ return MemoryFile(data=obj.data)
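
A usage sketch, assuming pygit2 is installed and a local repository exists at the placeholder path (note the backend defaults to ref "master"):

```python
import fsspec

# Placeholder repo path and ref.
fs = fsspec.filesystem("git", path="/path/to/repo", ref="main")
print(fs.ls(""))                  # entries of the root tree at `ref`
with fs.open("README.md", ref="main") as f:
    print(f.read()[:100])
```
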
.venv/lib/python3.13/site-packages/fsspec/implementations/github.py ADDED
@@ -0,0 +1,333 @@
1
+ import base64
2
+ import re
3
+
4
+ import requests
5
+
6
+ from ..spec import AbstractFileSystem
7
+ from ..utils import infer_storage_options
8
+ from .memory import MemoryFile
9
+
10
+
11
+ class GithubFileSystem(AbstractFileSystem):
12
+ """Interface to files in github
13
+
14
+ An instance of this class provides the files residing within a remote GitHub
15
+ repository. You may specify a point in the repo's history by SHA, branch,
16
+ or tag (the default is the repository's default branch).
17
+
18
+ For files less than 1 MB in size, file content is returned directly in a
19
+ MemoryFile. For larger files, or for files tracked by git-lfs, file content
20
+ is returned as an HTTPFile wrapping the ``download_url`` provided by the
21
+ GitHub API.
22
+
23
+ When using fsspec.open, allows URIs of the form:
24
+
25
+ - "github://path/file", in which case you must specify org, repo and
26
+ may specify sha in the extra args
27
+ - 'github://org:repo@/precip/catalog.yml', where the org and repo are
28
+ part of the URI
29
+ - 'github://org:repo@sha/precip/catalog.yml', where the sha is also included
30
+
31
+ ``sha`` can be the full or abbreviated hex of the commit you want to fetch
32
+ from, or a branch or tag name (so long as it doesn't contain special characters
33
+ like "/", "?", which would have to be HTTP-encoded).
34
+
35
+ For authorised access, you must provide username and token, which can be made
36
+ at https://github.com/settings/tokens
37
+ """
38
+
39
+ url = "https://api.github.com/repos/{org}/{repo}/git/trees/{sha}"
40
+ content_url = "https://api.github.com/repos/{org}/{repo}/contents/{path}?ref={sha}"
41
+ protocol = "github"
42
+ timeout = (60, 60) # connect, read timeouts
43
+
44
+ def __init__(
45
+ self, org, repo, sha=None, username=None, token=None, timeout=None, **kwargs
46
+ ):
47
+ super().__init__(**kwargs)
48
+ self.org = org
49
+ self.repo = repo
50
+ if (username is None) ^ (token is None):
51
+ raise ValueError("Auth required both username and token")
52
+ self.username = username
53
+ self.token = token
54
+ if timeout is not None:
55
+ self.timeout = timeout
56
+ if sha is None:
57
+ # look up default branch (not necessarily "master")
58
+ u = "https://api.github.com/repos/{org}/{repo}"
59
+ r = requests.get(
60
+ u.format(org=org, repo=repo), timeout=self.timeout, **self.kw
61
+ )
62
+ r.raise_for_status()
63
+ sha = r.json()["default_branch"]
64
+
65
+ self.root = sha
66
+ self.ls("")
67
+ try:
68
+ from .http import HTTPFileSystem
69
+
70
+ self.http_fs = HTTPFileSystem(**kwargs)
71
+ except ImportError:
72
+ self.http_fs = None
73
+
74
+ @property
75
+ def kw(self):
76
+ if self.username:
77
+ return {"auth": (self.username, self.token)}
78
+ return {}
79
+
80
+ @classmethod
81
+ def repos(cls, org_or_user, is_org=True):
82
+ """List repo names for given org or user
83
+
84
+ This may become the top level of the FS
85
+
86
+ Parameters
87
+ ----------
88
+ org_or_user: str
89
+ Name of the github org or user to query
90
+ is_org: bool (default True)
91
+ Whether the name is an organisation (True) or user (False)
92
+
93
+ Returns
94
+ -------
95
+ List of string
96
+ """
97
+ r = requests.get(
98
+ f"https://api.github.com/{['users', 'orgs'][is_org]}/{org_or_user}/repos",
99
+ timeout=cls.timeout,
100
+ )
101
+ r.raise_for_status()
102
+ return [repo["name"] for repo in r.json()]
103
+
104
+ @property
105
+ def tags(self):
106
+ """Names of tags in the repo"""
107
+ r = requests.get(
108
+ f"https://api.github.com/repos/{self.org}/{self.repo}/tags",
109
+ timeout=self.timeout,
110
+ **self.kw,
111
+ )
112
+ r.raise_for_status()
113
+ return [t["name"] for t in r.json()]
114
+
115
+ @property
116
+ def branches(self):
117
+ """Names of branches in the repo"""
118
+ r = requests.get(
119
+ f"https://api.github.com/repos/{self.org}/{self.repo}/branches",
120
+ timeout=self.timeout,
121
+ **self.kw,
122
+ )
123
+ r.raise_for_status()
124
+ return [t["name"] for t in r.json()]
125
+
126
+ @property
127
+ def refs(self):
128
+ """Named references, tags and branches"""
129
+ return {"tags": self.tags, "branches": self.branches}
130
+
131
+ def ls(self, path, detail=False, sha=None, _sha=None, **kwargs):
132
+ """List files at given path
133
+
134
+ Parameters
135
+ ----------
136
+ path: str
137
+ Location to list, relative to repo root
138
+ detail: bool
139
+ If True, returns list of dicts, one per file; if False, returns
140
+ list of full filenames only
141
+ sha: str (optional)
142
+ List at the given point in the repo history, branch or tag name or commit
143
+ SHA
144
+ _sha: str (optional)
145
+ List this specific tree object (used internally to descend into trees)
146
+ """
147
+ path = self._strip_protocol(path)
148
+ if path == "":
149
+ _sha = sha or self.root
150
+ if _sha is None:
151
+ parts = path.rstrip("/").split("/")
152
+ so_far = ""
153
+ _sha = sha or self.root
154
+ for part in parts:
155
+ out = self.ls(so_far, True, sha=sha, _sha=_sha)
156
+ so_far += "/" + part if so_far else part
157
+ out = [o for o in out if o["name"] == so_far]
158
+ if not out:
159
+ raise FileNotFoundError(path)
160
+ out = out[0]
161
+ if out["type"] == "file":
162
+ if detail:
163
+ return [out]
164
+ else:
165
+ return path
166
+ _sha = out["sha"]
167
+ if path not in self.dircache or sha not in [self.root, None]:
168
+ r = requests.get(
169
+ self.url.format(org=self.org, repo=self.repo, sha=_sha),
170
+ timeout=self.timeout,
171
+ **self.kw,
172
+ )
173
+ if r.status_code == 404:
174
+ raise FileNotFoundError(path)
175
+ r.raise_for_status()
176
+ types = {"blob": "file", "tree": "directory"}
177
+ out = [
178
+ {
179
+ "name": path + "/" + f["path"] if path else f["path"],
180
+ "mode": f["mode"],
181
+ "type": types[f["type"]],
182
+ "size": f.get("size", 0),
183
+ "sha": f["sha"],
184
+ }
185
+ for f in r.json()["tree"]
186
+ if f["type"] in types
187
+ ]
188
+ if sha in [self.root, None]:
189
+ self.dircache[path] = out
190
+ else:
191
+ out = self.dircache[path]
192
+ if detail:
193
+ return out
194
+ else:
195
+ return sorted([f["name"] for f in out])
196
+
197
+ def invalidate_cache(self, path=None):
198
+ self.dircache.clear()
199
+
200
+ @classmethod
201
+ def _strip_protocol(cls, path):
202
+ opts = infer_storage_options(path)
203
+ if "username" not in opts:
204
+ return super()._strip_protocol(path)
205
+ return opts["path"].lstrip("/")
206
+
207
+ @staticmethod
208
+ def _get_kwargs_from_urls(path):
209
+ opts = infer_storage_options(path)
210
+ if "username" not in opts:
211
+ return {}
212
+ out = {"org": opts["username"], "repo": opts["password"]}
213
+ if opts["host"]:
214
+ out["sha"] = opts["host"]
215
+ return out
216
+
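
A sketch of how the chained-URL forms from the class docstring decompose; org, repo, and sha here are placeholders:

```python
GithubFileSystem._get_kwargs_from_urls("github://org:repo@sha/path/file.txt")
# -> {"org": "org", "repo": "repo", "sha": "sha"}

GithubFileSystem._get_kwargs_from_urls("github://org:repo@/path/file.txt")
# -> {"org": "org", "repo": "repo"}  (no sha: the default branch is used)
```
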
217
+ def _open(
218
+ self,
219
+ path,
220
+ mode="rb",
221
+ block_size=None,
222
+ cache_options=None,
223
+ sha=None,
224
+ **kwargs,
225
+ ):
226
+ if mode != "rb":
227
+ raise NotImplementedError
228
+
229
+ # construct a url to hit the GitHub API's repo contents API
230
+ url = self.content_url.format(
231
+ org=self.org, repo=self.repo, path=path, sha=sha or self.root
232
+ )
233
+
234
+ # make a request to this API, and parse the response as JSON
235
+ r = requests.get(url, timeout=self.timeout, **self.kw)
236
+ if r.status_code == 404:
237
+ raise FileNotFoundError(path)
238
+ r.raise_for_status()
239
+ content_json = r.json()
240
+
241
+ # if the response's content key is not empty, try to parse it as base64
242
+ if content_json["content"]:
243
+ content = base64.b64decode(content_json["content"])
244
+
245
+ # as long as the content does not start with the string
246
+ # "version https://git-lfs.github.com/"
247
+ # then it is probably not a git-lfs pointer and we can just return
248
+ # the content directly
249
+ if not content.startswith(b"version https://git-lfs.github.com/"):
250
+ return MemoryFile(None, None, content)
251
+
252
+ # we land here if the content was not present in the first response
253
+ # (regular file over 1MB or git-lfs tracked file)
254
+ # in this case, we get let the HTTPFileSystem handle the download
255
+ if self.http_fs is None:
256
+ raise ImportError(
257
+ "Please install fsspec[http] to access github files >1 MB "
258
+ "or git-lfs tracked files."
259
+ )
260
+ return self.http_fs.open(
261
+ content_json["download_url"],
262
+ mode=mode,
263
+ block_size=block_size,
264
+ cache_options=cache_options,
265
+ **kwargs,
266
+ )
267
+
268
+ def rm(self, path, recursive=False, maxdepth=None, message=None):
269
+ path = self.expand_path(path, recursive=recursive, maxdepth=maxdepth)
270
+ for p in reversed(path):
271
+ self.rm_file(p, message=message)
272
+
273
+ def rm_file(self, path, message=None, **kwargs):
274
+ """
275
+ Remove a file from a specified branch using a given commit message.
276
+
277
+ Since the GitHub DELETE operation requires a branch name, and we can't reliably
278
+ determine whether the provided SHA refers to a branch, tag, or commit, we
279
+ assume it's a branch. If it's not, the user will encounter an error when
280
+ attempting to retrieve the file SHA or delete the file.
281
+
282
+ Parameters
283
+ ----------
284
+ path: str
285
+ The file's location relative to the repository root.
286
+ message: str, optional
287
+ The commit message for the deletion.
288
+ """
289
+
290
+ if not self.username:
291
+ raise ValueError("Authentication required")
292
+
293
+ path = self._strip_protocol(path)
294
+
295
+ # Attempt to get SHA from cache or Github API
296
+ sha = self._get_sha_from_cache(path)
297
+ if not sha:
298
+ url = self.content_url.format(
299
+ org=self.org, repo=self.repo, path=path.lstrip("/"), sha=self.root
300
+ )
301
+ r = requests.get(url, timeout=self.timeout, **self.kw)
302
+ if r.status_code == 404:
303
+ raise FileNotFoundError(path)
304
+ r.raise_for_status()
305
+ sha = r.json()["sha"]
306
+
307
+ # Delete the file
308
+ delete_url = self.content_url.format(
309
+ org=self.org, repo=self.repo, path=path, sha=self.root
310
+ )
311
+ branch = self.root
312
+ data = {
313
+ "message": message or f"Delete {path}",
314
+ "sha": sha,
315
+ **({"branch": branch} if branch else {}),
316
+ }
317
+
318
+ r = requests.delete(delete_url, json=data, timeout=self.timeout, **self.kw)
319
+ error_message = r.json().get("message", "")
320
+ if re.search(r"Branch .+ not found", error_message):
321
+ error = "Remove only works when the filesystem is initialised from a branch or default (None)"
322
+ raise ValueError(error)
323
+ r.raise_for_status()
324
+
325
+ self.invalidate_cache(path)
326
+
327
+ def _get_sha_from_cache(self, path):
328
+ for entries in self.dircache.values():
329
+ for entry in entries:
330
+ entry_path = entry.get("name")
331
+ if entry_path and entry_path == path and "sha" in entry:
332
+ return entry["sha"]
333
+ return None
.venv/lib/python3.13/site-packages/fsspec/implementations/http.py ADDED
@@ -0,0 +1,890 @@
1
+ import asyncio
2
+ import io
3
+ import logging
4
+ import re
5
+ import weakref
6
+ from copy import copy
7
+ from urllib.parse import urlparse
8
+
9
+ import aiohttp
10
+ import yarl
11
+
12
+ from fsspec.asyn import AbstractAsyncStreamedFile, AsyncFileSystem, sync, sync_wrapper
13
+ from fsspec.callbacks import DEFAULT_CALLBACK
14
+ from fsspec.exceptions import FSTimeoutError
15
+ from fsspec.spec import AbstractBufferedFile
16
+ from fsspec.utils import (
17
+ DEFAULT_BLOCK_SIZE,
18
+ glob_translate,
19
+ isfilelike,
20
+ nullcontext,
21
+ tokenize,
22
+ )
23
+
24
+ from ..caching import AllBytes
25
+
26
+ # https://stackoverflow.com/a/15926317/3821154
27
+ ex = re.compile(r"""<(a|A)\s+(?:[^>]*?\s+)?(href|HREF)=["'](?P<url>[^"']+)""")
28
+ ex2 = re.compile(r"""(?P<url>http[s]?://[-a-zA-Z0-9@:%_+.~#?&/=]+)""")
29
+ logger = logging.getLogger("fsspec.http")
30
+
31
+
32
+ async def get_client(**kwargs):
33
+ return aiohttp.ClientSession(**kwargs)
34
+
35
+
36
+ class HTTPFileSystem(AsyncFileSystem):
37
+ """
38
+ Simple File-System for fetching data via HTTP(S)
39
+
40
+ ``ls()`` is implemented by loading the parent page and doing a regex
41
+ match on the result. If simple_links=True, anything that looks like a URL
42
+ (of the form "http(s)://server.com/stuff?thing=other") will also be matched;
43
+ otherwise only links within HTML href tags will be used.
44
+ """
45
+
46
+ sep = "/"
47
+
48
+ def __init__(
49
+ self,
50
+ simple_links=True,
51
+ block_size=None,
52
+ same_scheme=True,
53
+ size_policy=None,
54
+ cache_type="bytes",
55
+ cache_options=None,
56
+ asynchronous=False,
57
+ loop=None,
58
+ client_kwargs=None,
59
+ get_client=get_client,
60
+ encoded=False,
61
+ **storage_options,
62
+ ):
63
+ """
64
+ NB: if this is called async, you must await set_session
65
+
66
+ Parameters
67
+ ----------
68
+ block_size: int
69
+ Block size, in bytes, to read per request; if 0, will default to raw
70
+ streaming file-like objects instead of HTTPFile instances
71
+ simple_links: bool
72
+ If True, will consider both HTML <a> tags and anything that looks
73
+ like a URL; if False, will consider only the former.
74
+ same_scheme: bool (default True)
75
+ When doing ls/glob, if this is True, only consider paths that have
76
+ http/https matching the input URLs.
77
+ size_policy: this argument is deprecated
78
+ client_kwargs: dict
79
+ Passed to aiohttp.ClientSession, see
80
+ https://docs.aiohttp.org/en/stable/client_reference.html
81
+ For example, ``{'auth': aiohttp.BasicAuth('user', 'pass')}``
82
+ get_client: Callable[..., aiohttp.ClientSession]
83
+ A callable, which takes keyword arguments and constructs
84
+ an aiohttp.ClientSession. Its state will be managed by
85
+ the HTTPFileSystem class.
86
+ storage_options: key-value
87
+ Any other parameters passed on to aiohttp calls
88
+ cache_type, cache_options: defaults used in open()
89
+ """
90
+ super().__init__(self, asynchronous=asynchronous, loop=loop, **storage_options)
91
+ self.block_size = block_size if block_size is not None else DEFAULT_BLOCK_SIZE
92
+ self.simple_links = simple_links
93
+ self.same_schema = same_scheme
94
+ self.cache_type = cache_type
95
+ self.cache_options = cache_options
96
+ self.client_kwargs = client_kwargs or {}
97
+ self.get_client = get_client
98
+ self.encoded = encoded
99
+ self.kwargs = storage_options
100
+ self._session = None
101
+
102
+ # Clean caching-related parameters from `storage_options`
103
+ # before propagating them as `request_options` through `self.kwargs`.
104
+ # TODO: Maybe rename `self.kwargs` to `self.request_options` to make
105
+ # it clearer.
106
+ request_options = copy(storage_options)
107
+ self.use_listings_cache = request_options.pop("use_listings_cache", False)
108
+ request_options.pop("listings_expiry_time", None)
109
+ request_options.pop("max_paths", None)
110
+ request_options.pop("skip_instance_cache", None)
111
+ self.kwargs = request_options
112
+
113
+ @property
114
+ def fsid(self):
115
+ return "http"
116
+
117
+ def encode_url(self, url):
118
+ return yarl.URL(url, encoded=self.encoded)
119
+
120
+ @staticmethod
121
+ def close_session(loop, session):
122
+ if loop is not None and loop.is_running():
123
+ try:
124
+ sync(loop, session.close, timeout=0.1)
125
+ return
126
+ except (TimeoutError, FSTimeoutError, NotImplementedError):
127
+ pass
128
+ connector = getattr(session, "_connector", None)
129
+ if connector is not None:
130
+ # close after loop is dead
131
+ connector._close()
132
+
133
+ async def set_session(self):
134
+ if self._session is None:
135
+ self._session = await self.get_client(loop=self.loop, **self.client_kwargs)
136
+ if not self.asynchronous:
137
+ weakref.finalize(self, self.close_session, self.loop, self._session)
138
+ return self._session
139
+
140
+ @classmethod
141
+ def _strip_protocol(cls, path):
142
+ """For HTTP, we always want to keep the full URL"""
143
+ return path
144
+
145
+ @classmethod
146
+ def _parent(cls, path):
147
+ # override, since _strip_protocol is different for URLs
148
+ par = super()._parent(path)
149
+ if len(par) > 7: # "http://..."
150
+ return par
151
+ return ""
152
+
153
+ async def _ls_real(self, url, detail=True, **kwargs):
154
+ # ignoring URL-encoded arguments
155
+ kw = self.kwargs.copy()
156
+ kw.update(kwargs)
157
+ logger.debug(url)
158
+ session = await self.set_session()
159
+ async with session.get(self.encode_url(url), **kw) as r:
160
+ self._raise_not_found_for_status(r, url)
161
+
162
+ if "Content-Type" in r.headers:
163
+ mimetype = r.headers["Content-Type"].partition(";")[0]
164
+ else:
165
+ mimetype = None
166
+
167
+ if mimetype in ("text/html", None):
168
+ try:
169
+ text = await r.text(errors="ignore")
170
+ if self.simple_links:
171
+ links = ex2.findall(text) + [u[2] for u in ex.findall(text)]
172
+ else:
173
+ links = [u[2] for u in ex.findall(text)]
174
+ except UnicodeDecodeError:
175
+ links = [] # binary, not HTML
176
+ else:
177
+ links = []
178
+
179
+ out = set()
180
+ parts = urlparse(url)
181
+ for l in links:
182
+ if isinstance(l, tuple):
183
+ l = l[1]
184
+ if l.startswith("/") and len(l) > 1:
185
+ # absolute URL on this server
186
+ l = f"{parts.scheme}://{parts.netloc}{l}"
187
+ if l.startswith("http"):
188
+ if self.same_schema and l.startswith(url.rstrip("/") + "/"):
189
+ out.add(l)
190
+ elif l.replace("https", "http").startswith(
191
+ url.replace("https", "http").rstrip("/") + "/"
192
+ ):
193
+ # allowed to cross http <-> https
194
+ out.add(l)
195
+ else:
196
+ if l not in ["..", "../"]:
197
+ # Ignore FTP-like "parent"
198
+ out.add("/".join([url.rstrip("/"), l.lstrip("/")]))
199
+ if not out and url.endswith("/"):
200
+ out = await self._ls_real(url.rstrip("/"), detail=False)
201
+ if detail:
202
+ return [
203
+ {
204
+ "name": u,
205
+ "size": None,
206
+ "type": "directory" if u.endswith("/") else "file",
207
+ }
208
+ for u in out
209
+ ]
210
+ else:
211
+ return sorted(out)
212
+
213
+ async def _ls(self, url, detail=True, **kwargs):
214
+ if self.use_listings_cache and url in self.dircache:
215
+ out = self.dircache[url]
216
+ else:
217
+ out = await self._ls_real(url, detail=detail, **kwargs)
218
+ self.dircache[url] = out
219
+ return out
220
+
221
+ ls = sync_wrapper(_ls)
222
+
223
+ def _raise_not_found_for_status(self, response, url):
224
+ """
225
+ Raises FileNotFoundError for 404s, otherwise uses raise_for_status.
226
+ """
227
+ if response.status == 404:
228
+ raise FileNotFoundError(url)
229
+ response.raise_for_status()
230
+
231
+ async def _cat_file(self, url, start=None, end=None, **kwargs):
232
+ kw = self.kwargs.copy()
233
+ kw.update(kwargs)
234
+ logger.debug(url)
235
+
236
+ if start is not None or end is not None:
237
+ if start == end:
238
+ return b""
239
+ headers = kw.pop("headers", {}).copy()
240
+
241
+ headers["Range"] = await self._process_limits(url, start, end)
242
+ kw["headers"] = headers
243
+ session = await self.set_session()
244
+ async with session.get(self.encode_url(url), **kw) as r:
245
+ out = await r.read()
246
+ self._raise_not_found_for_status(r, url)
247
+ return out
248
+
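
A sketch of fetching a byte range through the method above via the sync facade; the URL is a placeholder and the server must honour `Range` headers:

```python
import fsspec

fs = fsspec.filesystem("http")
# Fetches bytes [0, 100) with a single ranged GET (placeholder URL).
data = fs.cat_file("https://example.com/data.bin", start=0, end=100)
print(len(data))
```
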
249
+ async def _get_file(
250
+ self, rpath, lpath, chunk_size=5 * 2**20, callback=DEFAULT_CALLBACK, **kwargs
251
+ ):
252
+ kw = self.kwargs.copy()
253
+ kw.update(kwargs)
254
+ logger.debug(rpath)
255
+ session = await self.set_session()
256
+ async with session.get(self.encode_url(rpath), **kw) as r:
257
+ try:
258
+ size = int(r.headers["content-length"])
259
+ except (ValueError, KeyError):
260
+ size = None
261
+
262
+ callback.set_size(size)
263
+ self._raise_not_found_for_status(r, rpath)
264
+ if isfilelike(lpath):
265
+ outfile = lpath
266
+ else:
267
+ outfile = open(lpath, "wb") # noqa: ASYNC230
268
+
269
+ try:
270
+ chunk = True
271
+ while chunk:
272
+ chunk = await r.content.read(chunk_size)
273
+ outfile.write(chunk)
274
+ callback.relative_update(len(chunk))
275
+ finally:
276
+ if not isfilelike(lpath):
277
+ outfile.close()
278
+
279
+ async def _put_file(
280
+ self,
281
+ lpath,
282
+ rpath,
283
+ chunk_size=5 * 2**20,
284
+ callback=DEFAULT_CALLBACK,
285
+ method="post",
286
+ mode="overwrite",
287
+ **kwargs,
288
+ ):
289
+ if mode != "overwrite":
290
+ raise NotImplementedError("Exclusive write")
291
+
292
+ async def gen_chunks():
293
+ # Support passing arbitrary file-like objects
294
+ # and use them instead of streams.
295
+ if isinstance(lpath, io.IOBase):
296
+ context = nullcontext(lpath)
297
+ use_seek = False # might not support seeking
298
+ else:
299
+ context = open(lpath, "rb") # noqa: ASYNC230
300
+ use_seek = True
301
+
302
+ with context as f:
303
+ if use_seek:
304
+ callback.set_size(f.seek(0, 2))
305
+ f.seek(0)
306
+ else:
307
+ callback.set_size(getattr(f, "size", None))
308
+
309
+ chunk = f.read(chunk_size)
310
+ while chunk:
311
+ yield chunk
312
+ callback.relative_update(len(chunk))
313
+ chunk = f.read(chunk_size)
314
+
315
+ kw = self.kwargs.copy()
316
+ kw.update(kwargs)
317
+ session = await self.set_session()
318
+
319
+ method = method.lower()
320
+ if method not in ("post", "put"):
321
+ raise ValueError(
322
+ f"method has to be either 'post' or 'put', not: {method!r}"
323
+ )
324
+
325
+ meth = getattr(session, method)
326
+ async with meth(self.encode_url(rpath), data=gen_chunks(), **kw) as resp:
327
+ self._raise_not_found_for_status(resp, rpath)
328
+
329
+ async def _exists(self, path, **kwargs):
330
+ kw = self.kwargs.copy()
331
+ kw.update(kwargs)
332
+ try:
333
+ logger.debug(path)
334
+ session = await self.set_session()
335
+ r = await session.get(self.encode_url(path), **kw)
336
+ async with r:
337
+ return r.status < 400
338
+ except aiohttp.ClientError:
339
+ return False
340
+
341
+ async def _isfile(self, path, **kwargs):
342
+ return await self._exists(path, **kwargs)
343
+
344
+ def _open(
345
+ self,
346
+ path,
347
+ mode="rb",
348
+ block_size=None,
349
+ autocommit=None, # XXX: This differs from the base class.
350
+ cache_type=None,
351
+ cache_options=None,
352
+ size=None,
353
+ **kwargs,
354
+ ):
355
+ """Make a file-like object
356
+
357
+ Parameters
358
+ ----------
359
+ path: str
360
+ Full URL with protocol
361
+ mode: string
362
+ must be "rb"
363
+ block_size: int or None
364
+ Bytes to download in one request; use instance value if None. If
365
+ zero, will return a streaming file-like instance.
366
+ kwargs: key-value
367
+ Any other parameters, passed to aiohttp calls
368
+ """
369
+ if mode != "rb":
370
+ raise NotImplementedError
371
+ block_size = block_size if block_size is not None else self.block_size
372
+ kw = self.kwargs.copy()
373
+ kw["asynchronous"] = self.asynchronous
374
+ kw.update(kwargs)
375
+ info = {}
376
+ size = size or info.update(self.info(path, **kwargs)) or info["size"]
377
+ session = sync(self.loop, self.set_session)
378
+ if block_size and size and info.get("partial", True):
379
+ return HTTPFile(
380
+ self,
381
+ path,
382
+ session=session,
383
+ block_size=block_size,
384
+ mode=mode,
385
+ size=size,
386
+ cache_type=cache_type or self.cache_type,
387
+ cache_options=cache_options or self.cache_options,
388
+ loop=self.loop,
389
+ **kw,
390
+ )
391
+ else:
392
+ return HTTPStreamFile(
393
+ self,
394
+ path,
395
+ mode=mode,
396
+ loop=self.loop,
397
+ session=session,
398
+ **kw,
399
+ )
400
+
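
A sketch of the two open modes this selects between: random access (requires server range support and a known size) versus pure streaming with `block_size=0`; the URL is a placeholder:

```python
import fsspec

# Random-access file (HTTPFile): seeking requires Range support and a size.
with fsspec.open("https://example.com/large.bin", "rb") as f:
    f.seek(1024)
    head = f.read(16)

# Streaming file (HTTPStreamFile): sequential reads only, no seeking back.
with fsspec.open("https://example.com/large.bin", "rb", block_size=0) as f:
    first = f.read(16)
```
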
401
+ async def open_async(self, path, mode="rb", size=None, **kwargs):
402
+ session = await self.set_session()
403
+ if size is None:
404
+ try:
405
+ size = (await self._info(path, **kwargs))["size"]
406
+ except FileNotFoundError:
407
+ pass
408
+ return AsyncStreamFile(
409
+ self,
410
+ path,
411
+ loop=self.loop,
412
+ session=session,
413
+ size=size,
414
+ **kwargs,
415
+ )
416
+
417
+ def ukey(self, url):
418
+ """Unique identifier; assume HTTP files are static, unchanging"""
419
+ return tokenize(url, self.kwargs, self.protocol)
420
+
421
+ async def _info(self, url, **kwargs):
422
+ """Get info of URL
423
+
424
+ Tries to access location via HEAD, and then GET methods, but does
425
+ not fetch the data.
426
+
427
+ It is possible that the server does not supply any size information, in
428
+ which case size will be given as None (and certain operations on the
429
+ corresponding file will not work).
430
+ """
431
+ info = {}
432
+ session = await self.set_session()
433
+
434
+ for policy in ["head", "get"]:
435
+ try:
436
+ info.update(
437
+ await _file_info(
438
+ self.encode_url(url),
439
+ size_policy=policy,
440
+ session=session,
441
+ **self.kwargs,
442
+ **kwargs,
443
+ )
444
+ )
445
+ if info.get("size") is not None:
446
+ break
447
+ except Exception as exc:
448
+ if policy == "get":
449
+ # If get failed, then raise a FileNotFoundError
450
+ raise FileNotFoundError(url) from exc
451
+ logger.debug("", exc_info=exc)
452
+
453
+ return {"name": url, "size": None, **info, "type": "file"}
454
+
455
+ async def _glob(self, path, maxdepth=None, **kwargs):
456
+ """
457
+ Find files by glob-matching.
458
+
459
+ This implementation is identical to the one in AbstractFileSystem,
460
+ but "?" is not considered a globbing character, because it is
461
+ so common in URLs, often identifying the "query" part.
462
+ """
463
+ if maxdepth is not None and maxdepth < 1:
464
+ raise ValueError("maxdepth must be at least 1")
465
+ import re
466
+
467
+ ends_with_slash = path.endswith("/") # _strip_protocol strips trailing slash
468
+ path = self._strip_protocol(path)
469
+ append_slash_to_dirname = ends_with_slash or path.endswith(("/**", "/*"))
470
+ idx_star = path.find("*") if path.find("*") >= 0 else len(path)
471
+ idx_brace = path.find("[") if path.find("[") >= 0 else len(path)
472
+
473
+ min_idx = min(idx_star, idx_brace)
474
+
475
+ detail = kwargs.pop("detail", False)
476
+
477
+ if not has_magic(path):
478
+ if await self._exists(path, **kwargs):
479
+ if not detail:
480
+ return [path]
481
+ else:
482
+ return {path: await self._info(path, **kwargs)}
483
+ else:
484
+ if not detail:
485
+ return [] # glob of non-existent returns empty
486
+ else:
487
+ return {}
488
+ elif "/" in path[:min_idx]:
489
+ min_idx = path[:min_idx].rindex("/")
490
+ root = path[: min_idx + 1]
491
+ depth = path[min_idx + 1 :].count("/") + 1
492
+ else:
493
+ root = ""
494
+ depth = path[min_idx + 1 :].count("/") + 1
495
+
496
+ if "**" in path:
497
+ if maxdepth is not None:
498
+ idx_double_stars = path.find("**")
499
+ depth_double_stars = path[idx_double_stars:].count("/") + 1
500
+ depth = depth - depth_double_stars + maxdepth
501
+ else:
502
+ depth = None
503
+
504
+ allpaths = await self._find(
505
+ root, maxdepth=depth, withdirs=True, detail=True, **kwargs
506
+ )
507
+
508
+ pattern = glob_translate(path + ("/" if ends_with_slash else ""))
509
+ pattern = re.compile(pattern)
510
+
511
+ out = {
512
+ (
513
+ p.rstrip("/")
514
+ if not append_slash_to_dirname
515
+ and info["type"] == "directory"
516
+ and p.endswith("/")
517
+ else p
518
+ ): info
519
+ for p, info in sorted(allpaths.items())
520
+ if pattern.match(p.rstrip("/"))
521
+ }
522
+
523
+ if detail:
524
+ return out
525
+ else:
526
+ return list(out)
527
+
528
+ async def _isdir(self, path):
529
+ # override, since all URLs are (also) files
530
+ try:
531
+ return bool(await self._ls(path))
532
+ except (FileNotFoundError, ValueError):
533
+ return False
534
+
535
+ async def _pipe_file(self, path, value, mode="overwrite", **kwargs):
536
+ """
537
+ Write bytes to a remote file over HTTP.
538
+
539
+ Parameters
540
+ ----------
541
+ path : str
542
+ Target URL where the data should be written
543
+ value : bytes
544
+ Data to be written
545
+ mode : str
546
+ How to write to the file - 'overwrite' or 'append'
547
+ **kwargs : dict
548
+ Additional parameters to pass to the HTTP request
549
+ """
550
+ url = self._strip_protocol(path)
551
+ headers = kwargs.pop("headers", {})
552
+ headers["Content-Length"] = str(len(value))
553
+
554
+ session = await self.set_session()
555
+
556
+ async with session.put(url, data=value, headers=headers, **kwargs) as r:
557
+ r.raise_for_status()
558
+
559
+
560
+ class HTTPFile(AbstractBufferedFile):
561
+ """
562
+ A file-like object pointing to a remote HTTP(S) resource
563
+
564
+ Supports only reading, with read-ahead of a predetermined block-size.
565
+
566
+ In the case that the server does not supply the filesize, only reading of
567
+ the complete file in one go is supported.
568
+
569
+ Parameters
570
+ ----------
571
+ url: str
572
+ Full URL of the remote resource, including the protocol
573
+ session: aiohttp.ClientSession or None
574
+ All calls will be made within this session, to avoid restarting
575
+ connections where the server allows this
576
+ block_size: int or None
577
+ The amount of read-ahead to do, in bytes. Default is 5MB, or the value
578
+ configured for the FileSystem creating this file
579
+ size: None or int
580
+ If given, this is the size of the file in bytes, and we don't attempt
581
+ to call the server to find the value.
582
+ kwargs: all other key-values are passed to requests calls.
583
+ """
584
+
585
+ def __init__(
586
+ self,
587
+ fs,
588
+ url,
589
+ session=None,
590
+ block_size=None,
591
+ mode="rb",
592
+ cache_type="bytes",
593
+ cache_options=None,
594
+ size=None,
595
+ loop=None,
596
+ asynchronous=False,
597
+ **kwargs,
598
+ ):
599
+ if mode != "rb":
600
+ raise NotImplementedError("File mode not supported")
601
+ self.asynchronous = asynchronous
602
+ self.loop = loop
603
+ self.url = url
604
+ self.session = session
605
+ self.details = {"name": url, "size": size, "type": "file"}
606
+ super().__init__(
607
+ fs=fs,
608
+ path=url,
609
+ mode=mode,
610
+ block_size=block_size,
611
+ cache_type=cache_type,
612
+ cache_options=cache_options,
613
+ **kwargs,
614
+ )
615
+
616
+ def read(self, length=-1):
617
+ """Read bytes from file
618
+
619
+ Parameters
620
+ ----------
621
+ length: int
622
+ Read up to this many bytes. If negative, read all content to end of
623
+ file. If the server has not supplied the filesize, attempting to
624
+ read only part of the data will raise a ValueError.
625
+ """
626
+ if (
627
+ (length < 0 and self.loc == 0) # explicit read all
628
+ # but not when the size is known and fits into a block anyway
629
+ and not (self.size is not None and self.size <= self.blocksize)
630
+ ):
631
+ self._fetch_all()
632
+ if self.size is None:
633
+ if length < 0:
634
+ self._fetch_all()
635
+ else:
636
+ length = min(self.size - self.loc, length)
637
+ return super().read(length)
638
+
639
+ async def async_fetch_all(self):
640
+ """Read whole file in one shot, without caching
641
+
642
+ This is only called when position is still at zero,
643
+ and read() is called without a byte-count.
644
+ """
645
+ logger.debug(f"Fetch all for {self}")
646
+ if not isinstance(self.cache, AllBytes):
647
+ r = await self.session.get(self.fs.encode_url(self.url), **self.kwargs)
648
+ async with r:
649
+ r.raise_for_status()
650
+ out = await r.read()
651
+ self.cache = AllBytes(
652
+ size=len(out), fetcher=None, blocksize=None, data=out
653
+ )
654
+ self.size = len(out)
655
+
656
+ _fetch_all = sync_wrapper(async_fetch_all)
657
+
658
+ def _parse_content_range(self, headers):
659
+ """Parse the Content-Range header"""
660
+ s = headers.get("Content-Range", "")
661
+ m = re.match(r"bytes (\d+-\d+|\*)/(\d+|\*)", s)
662
+ if not m:
663
+ return None, None, None
664
+
665
+ if m[1] == "*":
666
+ start = end = None
667
+ else:
668
+ start, end = [int(x) for x in m[1].split("-")]
669
+ total = None if m[2] == "*" else int(m[2])
670
+ return start, end, total
671
+
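
A worked example of the header parsing above (`self` is unused, so it can be sketched with `None`):

```python
# "bytes 0-499/1234" -> first 500 bytes of a 1234-byte resource
print(HTTPFile._parse_content_range(None, {"Content-Range": "bytes 0-499/1234"}))
# (0, 499, 1234)

print(HTTPFile._parse_content_range(None, {"Content-Range": "bytes */1234"}))
# (None, None, 1234)
```
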
672
+ async def async_fetch_range(self, start, end):
673
+ """Download a block of data
674
+
675
+ The expectation is that the server returns only the requested bytes,
676
+ with HTTP code 206. If this is not the case, we first check the headers,
677
+ and then stream the output - if the data size is bigger than we
678
+ requested, an exception is raised.
679
+ """
680
+ logger.debug(f"Fetch range for {self}: {start}-{end}")
681
+ kwargs = self.kwargs.copy()
682
+ headers = kwargs.pop("headers", {}).copy()
683
+ headers["Range"] = f"bytes={start}-{end - 1}"
684
+ logger.debug(f"{self.url} : {headers['Range']}")
685
+ r = await self.session.get(
686
+ self.fs.encode_url(self.url), headers=headers, **kwargs
687
+ )
688
+ async with r:
689
+ if r.status == 416:
690
+ # range request outside file
691
+ return b""
692
+ r.raise_for_status()
693
+
694
+ # If the server has handled the range request, it should reply
695
+ # with status 206 (partial content). But we'll guess that a suitable
696
+ # Content-Range header or a Content-Length no more than the
697
+ # requested range also mean we have got the desired range.
698
+ response_is_range = (
699
+ r.status == 206
700
+ or self._parse_content_range(r.headers)[0] == start
701
+ or int(r.headers.get("Content-Length", end + 1)) <= end - start
702
+ )
703
+
704
+ if response_is_range:
705
+ # partial content, as expected
706
+ out = await r.read()
707
+ elif start > 0:
708
+ raise ValueError(
709
+ "The HTTP server doesn't appear to support range requests. "
710
+ "Only reading this file from the beginning is supported. "
711
+ "Open with block_size=0 for a streaming file interface."
712
+ )
713
+ else:
714
+ # Response is not a range, but we want the start of the file,
715
+ # so we can read the required amount anyway.
716
+ cl = 0
717
+ out = []
718
+ while True:
719
+ chunk = await r.content.read(2**20)
720
+ # data size unknown, let's read until we have enough
721
+ if chunk:
722
+ out.append(chunk)
723
+ cl += len(chunk)
724
+ if cl > end - start:
725
+ break
726
+ else:
727
+ break
728
+ out = b"".join(out)[: end - start]
729
+ return out
730
+
731
+ _fetch_range = sync_wrapper(async_fetch_range)
732
+
733
+
734
+ magic_check = re.compile("([*[])")
735
+
736
+
737
+ def has_magic(s):
738
+ match = magic_check.search(s)
739
+ return match is not None
740
+
741
+
742
+ class HTTPStreamFile(AbstractBufferedFile):
743
+ def __init__(self, fs, url, mode="rb", loop=None, session=None, **kwargs):
744
+ self.asynchronous = kwargs.pop("asynchronous", False)
745
+ self.url = url
746
+ self.loop = loop
747
+ self.session = session
748
+ if mode != "rb":
749
+ raise ValueError
750
+ self.details = {"name": url, "size": None}
751
+ super().__init__(fs=fs, path=url, mode=mode, cache_type="none", **kwargs)
752
+
753
+ async def cor():
754
+ r = await self.session.get(self.fs.encode_url(url), **kwargs).__aenter__()
755
+ self.fs._raise_not_found_for_status(r, url)
756
+ return r
757
+
758
+ self.r = sync(self.loop, cor)
759
+ self.loop = fs.loop
760
+
761
+ def seek(self, loc, whence=0):
762
+ if loc == 0 and whence == 1:
763
+ return
764
+ if loc == self.loc and whence == 0:
765
+ return
766
+ raise ValueError("Cannot seek streaming HTTP file")
767
+
768
+ async def _read(self, num=-1):
769
+ out = await self.r.content.read(num)
770
+ self.loc += len(out)
771
+ return out
772
+
773
+ read = sync_wrapper(_read)
774
+
775
+ async def _close(self):
776
+ self.r.close()
777
+
778
+ def close(self):
779
+ asyncio.run_coroutine_threadsafe(self._close(), self.loop)
780
+ super().close()
781
+
782
+
783
+ class AsyncStreamFile(AbstractAsyncStreamedFile):
784
+ def __init__(
785
+ self, fs, url, mode="rb", loop=None, session=None, size=None, **kwargs
786
+ ):
787
+ self.url = url
788
+ self.session = session
789
+ self.r = None
790
+ if mode != "rb":
791
+ raise ValueError
792
+ self.details = {"name": url, "size": None}
793
+ self.kwargs = kwargs
794
+ super().__init__(fs=fs, path=url, mode=mode, cache_type="none")
795
+ self.size = size
796
+
797
+ async def read(self, num=-1):
798
+ if self.r is None:
799
+ r = await self.session.get(
800
+ self.fs.encode_url(self.url), **self.kwargs
801
+ ).__aenter__()
802
+ self.fs._raise_not_found_for_status(r, self.url)
803
+ self.r = r
804
+ out = await self.r.content.read(num)
805
+ self.loc += len(out)
806
+ return out
807
+
808
+ async def close(self):
809
+ if self.r is not None:
810
+ self.r.close()
811
+ self.r = None
812
+ await super().close()
813
+
814
+
815
+ async def get_range(session, url, start, end, file=None, **kwargs):
816
+ # explicitly get a range when we know it must be safe
817
+ kwargs = kwargs.copy()
818
+ headers = kwargs.pop("headers", {}).copy()
819
+ headers["Range"] = f"bytes={start}-{end - 1}"
820
+ r = await session.get(url, headers=headers, **kwargs)
821
+ r.raise_for_status()
822
+ async with r:
823
+ out = await r.read()
824
+ if file:
825
+ with open(file, "r+b") as f: # noqa: ASYNC230
826
+ f.seek(start)
827
+ f.write(out)
828
+ else:
829
+ return out
830
+
831
+
832
+ async def _file_info(url, session, size_policy="head", **kwargs):
833
+ """Call HEAD on the server to get details about the file (size/checksum etc.)
834
+
835
+ Default operation is to explicitly allow redirects and use encoding
836
+ 'identity' (no compression) to get the true size of the target.
837
+ """
838
+ logger.debug("Retrieve file size for %s", url)
839
+ kwargs = kwargs.copy()
840
+ ar = kwargs.pop("allow_redirects", True)
841
+ head = kwargs.get("headers", {}).copy()
842
+ head["Accept-Encoding"] = "identity"
843
+ kwargs["headers"] = head
844
+
845
+ info = {}
846
+ if size_policy == "head":
847
+ r = await session.head(url, allow_redirects=ar, **kwargs)
848
+ elif size_policy == "get":
849
+ r = await session.get(url, allow_redirects=ar, **kwargs)
850
+ else:
851
+ raise TypeError(f'size_policy must be "head" or "get", got {size_policy}')
852
+ async with r:
853
+ r.raise_for_status()
854
+
855
+ if "Content-Length" in r.headers:
856
+ # Some servers may choose to ignore Accept-Encoding and return
857
+ # compressed content, in which case the returned size is unreliable.
858
+ if "Content-Encoding" not in r.headers or r.headers["Content-Encoding"] in [
859
+ "identity",
860
+ "",
861
+ ]:
862
+ info["size"] = int(r.headers["Content-Length"])
863
+ elif "Content-Range" in r.headers:
864
+ info["size"] = int(r.headers["Content-Range"].split("/")[1])
865
+
866
+ if "Content-Type" in r.headers:
867
+ info["mimetype"] = r.headers["Content-Type"].partition(";")[0]
868
+
869
+ if r.headers.get("Accept-Ranges") == "none":
870
+ # Some servers may explicitly discourage partial content requests, but
871
+ # the lack of "Accept-Ranges" does not always indicate they would fail
872
+ info["partial"] = False
873
+
874
+ info["url"] = str(r.url)
875
+
876
+ for checksum_field in ["ETag", "Content-MD5", "Digest"]:
877
+ if r.headers.get(checksum_field):
878
+ info[checksum_field] = r.headers[checksum_field]
879
+
880
+ return info
881
+
882
+
883
+ async def _file_size(url, session=None, *args, **kwargs):
884
+ if session is None:
885
+ session = await get_client()
886
+ info = await _file_info(url, session=session, *args, **kwargs)
887
+ return info.get("size")
888
+
889
+
890
+ file_size = sync_wrapper(_file_size)
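
A sketch of how the size probing in `_file_info` surfaces through the filesystem API (placeholder URL; `size` may be None if the server reports nothing usable):

```python
import fsspec

fs = fsspec.filesystem("http")
info = fs.info("https://example.com/data.bin")  # tries HEAD, then GET
print(info.get("size"), info.get("mimetype"), info.get("ETag"))
```
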
.venv/lib/python3.13/site-packages/fsspec/implementations/http_sync.py ADDED
@@ -0,0 +1,931 @@
1
+ """This file is largely copied from http.py"""
2
+
3
+ import io
4
+ import logging
5
+ import re
6
+ import urllib.error
7
+ import urllib.parse
8
+ from copy import copy
9
+ from json import dumps, loads
10
+ from urllib.parse import urlparse
11
+
12
+ try:
13
+ import yarl
14
+ except (ImportError, ModuleNotFoundError, OSError):
15
+ yarl = False
16
+
17
+ from fsspec.callbacks import _DEFAULT_CALLBACK
18
+ from fsspec.registry import register_implementation
19
+ from fsspec.spec import AbstractBufferedFile, AbstractFileSystem
20
+ from fsspec.utils import DEFAULT_BLOCK_SIZE, isfilelike, nullcontext, tokenize
21
+
22
+ from ..caching import AllBytes
23
+
24
+ # https://stackoverflow.com/a/15926317/3821154
25
+ ex = re.compile(r"""<(a|A)\s+(?:[^>]*?\s+)?(href|HREF)=["'](?P<url>[^"']+)""")
26
+ ex2 = re.compile(r"""(?P<url>http[s]?://[-a-zA-Z0-9@:%_+.~#?&/=]+)""")
27
+ logger = logging.getLogger("fsspec.http")
28
+
29
+
30
+ class JsHttpException(urllib.error.HTTPError): ...
31
+
32
+
33
+ class StreamIO(io.BytesIO):
34
+ # fake class, so you can set attributes on it
35
+ # will eventually actually stream
36
+ ...
37
+
38
+
39
+ class ResponseProxy:
40
+ """Looks like a requests response"""
41
+
42
+ def __init__(self, req, stream=False):
43
+ self.request = req
44
+ self.stream = stream
45
+ self._data = None
46
+ self._headers = None
47
+
48
+ @property
49
+ def raw(self):
50
+ if self._data is None:
51
+ b = self.request.response.to_bytes()
52
+ if self.stream:
53
+ self._data = StreamIO(b)
54
+ else:
55
+ self._data = b
56
+ return self._data
57
+
58
+ def close(self):
59
+ if hasattr(self, "_data"):
60
+ del self._data
61
+
62
+ @property
63
+ def headers(self):
64
+ if self._headers is None:
65
+ self._headers = dict(
66
+ [
67
+ _.split(": ")
68
+ for _ in self.request.getAllResponseHeaders().strip().split("\r\n")
69
+ ]
70
+ )
71
+ return self._headers
72
+
73
+ @property
74
+ def status_code(self):
75
+ return int(self.request.status)
76
+
77
+ def raise_for_status(self):
78
+ if not self.ok:
79
+ raise JsHttpException(
80
+ self.url, self.status_code, self.reason, self.headers, None
81
+ )
82
+
83
+ def iter_content(self, chunksize, *_, **__):
84
+ while True:
85
+ out = self.raw.read(chunksize)
86
+ if out:
87
+ yield out
88
+ else:
89
+ break
90
+
91
+ @property
92
+ def reason(self):
93
+ return self.request.statusText
94
+
95
+ @property
96
+ def ok(self):
97
+ return self.status_code < 400
98
+
99
+ @property
100
+ def url(self):
101
+ return self.request.response.responseURL
102
+
103
+ @property
104
+ def text(self):
105
+ # TODO: encoding from headers
106
+ return self.content.decode()
107
+
108
+ @property
109
+ def content(self):
110
+ self.stream = False
111
+ return self.raw
112
+
113
+ def json(self):
114
+ return loads(self.text)
115
+
116
+
117
+ class RequestsSessionShim:
118
+ def __init__(self):
119
+ self.headers = {}
120
+
121
+ def request(
122
+ self,
123
+ method,
124
+ url,
125
+ params=None,
126
+ data=None,
127
+ headers=None,
128
+ cookies=None,
129
+ files=None,
130
+ auth=None,
131
+ timeout=None,
132
+ allow_redirects=None,
133
+ proxies=None,
134
+ hooks=None,
135
+ stream=None,
136
+ verify=None,
137
+ cert=None,
138
+ json=None,
139
+ ):
140
+ from js import Blob, XMLHttpRequest
141
+
142
+ logger.debug("JS request: %s %s", method, url)
143
+
144
+ if cert or verify or proxies or files or cookies or hooks:
145
+ raise NotImplementedError
146
+ if data and json:
147
+ raise ValueError("Use json= or data=, not both")
148
+ req = XMLHttpRequest.new()
149
+ extra = auth if auth else ()
150
+ if params:
151
+ url = f"{url}?{urllib.parse.urlencode(params)}"
152
+ req.open(method, url, False, *extra)
153
+ if timeout:
154
+ req.timeout = timeout
155
+ if headers:
156
+ for k, v in headers.items():
157
+ req.setRequestHeader(k, v)
158
+
159
+ req.setRequestHeader("Accept", "application/octet-stream")
160
+ req.responseType = "arraybuffer"
161
+ if json:
162
+ blob = Blob.new([dumps(json)], {type: "application/json"})
163
+ req.send(blob)
164
+ elif data:
165
+ if isinstance(data, io.IOBase):
166
+ data = data.read()
167
+ blob = Blob.new([data], {type: "application/octet-stream"})
168
+ req.send(blob)
169
+ else:
170
+ req.send(None)
171
+ return ResponseProxy(req, stream=stream)
172
+
173
+ def get(self, url, **kwargs):
174
+ return self.request("GET", url, **kwargs)
175
+
176
+ def head(self, url, **kwargs):
177
+ return self.request("HEAD", url, **kwargs)
178
+
179
+ def post(self, url, **kwargs):
180
+ return self.request("POST}", url, **kwargs)
181
+
182
+ def put(self, url, **kwargs):
183
+ return self.request("PUT", url, **kwargs)
184
+
185
+ def patch(self, url, **kwargs):
186
+ return self.request("PATCH", url, **kwargs)
187
+
188
+ def delete(self, url, **kwargs):
189
+ return self.request("DELETE", url, **kwargs)
190
+
191
+
192
+ class HTTPFileSystem(AbstractFileSystem):
193
+ """
194
+ Simple File-System for fetching data via HTTP(S)
195
+
196
+ This is the BLOCKING version of the normal HTTPFileSystem. It uses
197
+ requests in normal python and the JS runtime in pyodide.
198
+
199
+ ***This implementation is extremely experimental, do not use unless
200
+ you are testing pyodide/pyscript integration***
201
+ """
202
+
203
+ protocol = ("http", "https", "sync-http", "sync-https")
204
+ sep = "/"
205
+
206
+ def __init__(
207
+ self,
208
+ simple_links=True,
209
+ block_size=None,
210
+ same_scheme=True,
211
+ cache_type="readahead",
212
+ cache_options=None,
213
+ client_kwargs=None,
214
+ encoded=False,
215
+ **storage_options,
216
+ ):
217
+ """
218
+
219
+ Parameters
220
+ ----------
221
+ block_size: int
222
+ Blocks to read bytes; if 0, will default to raw requests file-like
223
+ objects instead of HTTPFile instances
224
+ simple_links: bool
225
+ If True, will consider both HTML <a> tags and anything that looks
226
+ like a URL; if False, will consider only the former.
227
+ same_scheme: bool
228
+ When doing ls/glob, if this is True, only consider paths that have
229
+ http/https matching the input URLs.
230
+ size_policy: this argument is deprecated
231
+ client_kwargs: dict
232
+ Passed when constructing the CPython ``requests.Session``; not
233
+ used by the JS (pyodide) session shim.
235
+ storage_options: key-value
236
+ Any other parameters passed on to requests
237
+ cache_type, cache_options: defaults used in open
238
+ """
239
+ super().__init__(self, **storage_options)
240
+ self.block_size = block_size if block_size is not None else DEFAULT_BLOCK_SIZE
241
+ self.simple_links = simple_links
242
+ self.same_scheme = same_scheme
243
+ self.cache_type = cache_type
244
+ self.cache_options = cache_options
245
+ self.client_kwargs = client_kwargs or {}
246
+ self.encoded = encoded
247
+ self.kwargs = storage_options
248
+
249
+ try:
250
+ import js # noqa: F401
251
+
252
+ logger.debug("Starting JS session")
253
+ self.session = RequestsSessionShim()
254
+ self.js = True
255
+ except Exception as e:
256
+ import requests
257
+
258
+ logger.debug("Starting cpython session because of: %s", e)
259
+ self.session = requests.Session(**(client_kwargs or {}))
260
+ self.js = False
261
+
262
+ request_options = copy(storage_options)
263
+ self.use_listings_cache = request_options.pop("use_listings_cache", False)
264
+ request_options.pop("listings_expiry_time", None)
265
+ request_options.pop("max_paths", None)
266
+ request_options.pop("skip_instance_cache", None)
267
+ self.kwargs = request_options
268
+
269
+ @property
270
+ def fsid(self):
271
+ return "sync-http"
272
+
273
+ def encode_url(self, url):
274
+ if yarl:
275
+ return yarl.URL(url, encoded=self.encoded)
276
+ return url
277
+
278
+ @classmethod
279
+ def _strip_protocol(cls, path: str) -> str:
280
+ """For HTTP, we always want to keep the full URL"""
281
+ path = path.replace("sync-http://", "http://").replace(
282
+ "sync-https://", "https://"
283
+ )
284
+ return path
285
+
286
+ @classmethod
287
+ def _parent(cls, path):
288
+ # override, since _strip_protocol is different for URLs
289
+ par = super()._parent(path)
290
+ if len(par) > 7: # "http://..."
291
+ return par
292
+ return ""
293
+
294
+ def _ls_real(self, url, detail=True, **kwargs):
295
+ # ignoring URL-encoded arguments
296
+ kw = self.kwargs.copy()
297
+ kw.update(kwargs)
298
+ logger.debug(url)
299
+ r = self.session.get(self.encode_url(url), **kw)
300
+ self._raise_not_found_for_status(r, url)
301
+ text = r.text
302
+ if self.simple_links:
303
+ links = ex2.findall(text) + [u[2] for u in ex.findall(text)]
304
+ else:
305
+ links = [u[2] for u in ex.findall(text)]
306
+ out = set()
307
+ parts = urlparse(url)
308
+ for l in links:
309
+ if isinstance(l, tuple):
310
+ l = l[1]
311
+ if l.startswith("/") and len(l) > 1:
312
+ # absolute URL on this server
313
+ l = parts.scheme + "://" + parts.netloc + l
314
+ if l.startswith("http"):
315
+ if self.same_scheme and l.startswith(url.rstrip("/") + "/"):
316
+ out.add(l)
317
+ elif l.replace("https", "http").startswith(
318
+ url.replace("https", "http").rstrip("/") + "/"
319
+ ):
320
+ # allowed to cross http <-> https
321
+ out.add(l)
322
+ else:
323
+ if l not in ["..", "../"]:
324
+ # Ignore FTP-like "parent"
325
+ out.add("/".join([url.rstrip("/"), l.lstrip("/")]))
326
+ if not out and url.endswith("/"):
327
+ out = self._ls_real(url.rstrip("/"), detail=False)
328
+ if detail:
329
+ return [
330
+ {
331
+ "name": u,
332
+ "size": None,
333
+ "type": "directory" if u.endswith("/") else "file",
334
+ }
335
+ for u in out
336
+ ]
337
+ else:
338
+ return sorted(out)
339
+
340
+ def ls(self, url, detail=True, **kwargs):
341
+ if self.use_listings_cache and url in self.dircache:
342
+ out = self.dircache[url]
343
+ else:
344
+ out = self._ls_real(url, detail=detail, **kwargs)
345
+ self.dircache[url] = out
346
+ return out
347
+
348
+ def _raise_not_found_for_status(self, response, url):
349
+ """
350
+ Raises FileNotFoundError for 404s, otherwise uses raise_for_status.
351
+ """
352
+ if response.status_code == 404:
353
+ raise FileNotFoundError(url)
354
+ response.raise_for_status()
355
+
356
+ def cat_file(self, url, start=None, end=None, **kwargs):
357
+ kw = self.kwargs.copy()
358
+ kw.update(kwargs)
359
+ logger.debug(url)
360
+
361
+ if start is not None or end is not None:
362
+ if start == end:
363
+ return b""
364
+ headers = kw.pop("headers", {}).copy()
365
+
366
+ headers["Range"] = self._process_limits(url, start, end)
367
+ kw["headers"] = headers
368
+ r = self.session.get(self.encode_url(url), **kw)
369
+ self._raise_not_found_for_status(r, url)
370
+ return r.content
371
+
372
+ def get_file(
373
+ self, rpath, lpath, chunk_size=5 * 2**20, callback=_DEFAULT_CALLBACK, **kwargs
374
+ ):
375
+ kw = self.kwargs.copy()
376
+ kw.update(kwargs)
377
+ logger.debug(rpath)
378
+ r = self.session.get(self.encode_url(rpath), **kw)
379
+ try:
380
+ size = int(
381
+ r.headers.get("content-length", None)
382
+ or r.headers.get("Content-Length", None)
383
+ )
384
+ except (ValueError, KeyError, TypeError):
385
+ size = None
386
+
387
+ callback.set_size(size)
388
+ self._raise_not_found_for_status(r, rpath)
389
+ if not isfilelike(lpath):
390
+ lpath = open(lpath, "wb")
391
+ for chunk in r.iter_content(chunk_size, decode_unicode=False):
392
+ lpath.write(chunk)
393
+ callback.relative_update(len(chunk))
394
+
395
+ def put_file(
396
+ self,
397
+ lpath,
398
+ rpath,
399
+ chunk_size=5 * 2**20,
400
+ callback=_DEFAULT_CALLBACK,
401
+ method="post",
402
+ **kwargs,
403
+ ):
404
+ def gen_chunks():
405
+ # Support passing arbitrary file-like objects
406
+ # and use them instead of streams.
407
+ if isinstance(lpath, io.IOBase):
408
+ context = nullcontext(lpath)
409
+ use_seek = False # might not support seeking
410
+ else:
411
+ context = open(lpath, "rb")
412
+ use_seek = True
413
+
414
+ with context as f:
415
+ if use_seek:
416
+ callback.set_size(f.seek(0, 2))
417
+ f.seek(0)
418
+ else:
419
+ callback.set_size(getattr(f, "size", None))
420
+
421
+ chunk = f.read(chunk_size)
422
+ while chunk:
423
+ yield chunk
424
+ callback.relative_update(len(chunk))
425
+ chunk = f.read(chunk_size)
426
+
427
+ kw = self.kwargs.copy()
428
+ kw.update(kwargs)
429
+
430
+ method = method.lower()
431
+ if method not in ("post", "put"):
432
+ raise ValueError(
433
+ f"method has to be either 'post' or 'put', not: {method!r}"
434
+ )
435
+
436
+ meth = getattr(self.session, method)
437
+ resp = meth(rpath, data=gen_chunks(), **kw)
438
+ self._raise_not_found_for_status(resp, rpath)
439
+
440
+ def _process_limits(self, url, start, end):
441
+ """Helper for "Range"-based _cat_file"""
442
+ size = None
443
+ suff = False
444
+ if start is not None and start < 0:
445
+ # if start is negative and end None, end is the "suffix length"
446
+ if end is None:
447
+ end = -start
448
+ start = ""
449
+ suff = True
450
+ else:
451
+ size = size or self.info(url)["size"]
452
+ start = size + start
453
+ elif start is None:
454
+ start = 0
455
+ if not suff:
456
+ if end is not None and end < 0:
457
+ if start is not None:
458
+ size = size or self.info(url)["size"]
459
+ end = size + end
460
+ elif end is None:
461
+ end = ""
462
+ if isinstance(end, int):
463
+ end -= 1 # bytes range is inclusive
464
+ return f"bytes={start}-{end}"
465
+
466
+ def exists(self, path, **kwargs):
467
+ kw = self.kwargs.copy()
468
+ kw.update(kwargs)
469
+ try:
470
+ logger.debug(path)
471
+ r = self.session.get(self.encode_url(path), **kw)
472
+ return r.status_code < 400
473
+ except Exception:
474
+ return False
475
+
476
+ def isfile(self, path, **kwargs):
477
+ return self.exists(path, **kwargs)
478
+
479
+ def _open(
480
+ self,
481
+ path,
482
+ mode="rb",
483
+ block_size=None,
484
+ autocommit=None, # XXX: This differs from the base class.
485
+ cache_type=None,
486
+ cache_options=None,
487
+ size=None,
488
+ **kwargs,
489
+ ):
490
+ """Make a file-like object
491
+
492
+ Parameters
493
+ ----------
494
+ path: str
495
+ Full URL with protocol
496
+ mode: string
497
+ must be "rb"
498
+ block_size: int or None
499
+ Bytes to download in one request; use instance value if None. If
500
+ zero, will return a streaming Requests file-like instance.
501
+ kwargs: key-value
502
+ Any other parameters, passed to requests calls
503
+ """
504
+ if mode != "rb":
505
+ raise NotImplementedError
506
+ block_size = block_size if block_size is not None else self.block_size
507
+ kw = self.kwargs.copy()
508
+ kw.update(kwargs)
509
+ size = size or self.info(path, **kwargs)["size"]
510
+ if block_size and size:
511
+ return HTTPFile(
512
+ self,
513
+ path,
514
+ session=self.session,
515
+ block_size=block_size,
516
+ mode=mode,
517
+ size=size,
518
+ cache_type=cache_type or self.cache_type,
519
+ cache_options=cache_options or self.cache_options,
520
+ **kw,
521
+ )
522
+ else:
523
+ return HTTPStreamFile(
524
+ self,
525
+ path,
526
+ mode=mode,
527
+ session=self.session,
528
+ **kw,
529
+ )
530
+
531
+ def ukey(self, url):
532
+ """Unique identifier; assume HTTP files are static, unchanging"""
533
+ return tokenize(url, self.kwargs, self.protocol)
534
+
535
+ def info(self, url, **kwargs):
536
+ """Get info of URL
537
+
538
+ Tries to access location via HEAD, and then GET methods, but does
539
+ not fetch the data.
540
+
541
+ It is possible that the server does not supply any size information, in
542
+ which case size will be given as None (and certain operations on the
543
+ corresponding file will not work).
544
+ """
545
+ info = {}
546
+ for policy in ["head", "get"]:
547
+ try:
548
+ info.update(
549
+ _file_info(
550
+ self.encode_url(url),
551
+ size_policy=policy,
552
+ session=self.session,
553
+ **self.kwargs,
554
+ **kwargs,
555
+ )
556
+ )
557
+ if info.get("size") is not None:
558
+ break
559
+ except Exception as exc:
560
+ if policy == "get":
561
+ # If get failed, then raise a FileNotFoundError
562
+ raise FileNotFoundError(url) from exc
563
+ logger.debug(str(exc))
564
+
565
+ return {"name": url, "size": None, **info, "type": "file"}
566
+
567
+ def glob(self, path, maxdepth=None, **kwargs):
568
+ """
569
+ Find files by glob-matching.
570
+
571
+ This implementation is identical to the one in AbstractFileSystem,
572
+ but "?" is not considered as a character for globbing, because it is
573
+ so common in URLs, often identifying the "query" part.
574
+ """
575
+ import re
576
+
577
+ ends = path.endswith("/")
578
+ path = self._strip_protocol(path)
579
+ indstar = path.find("*") if path.find("*") >= 0 else len(path)
580
+ indbrace = path.find("[") if path.find("[") >= 0 else len(path)
581
+
582
+ ind = min(indstar, indbrace)
583
+
584
+ detail = kwargs.pop("detail", False)
585
+
586
+ if not has_magic(path):
587
+ root = path
588
+ depth = 1
589
+ if ends:
590
+ path += "/*"
591
+ elif self.exists(path):
592
+ if not detail:
593
+ return [path]
594
+ else:
595
+ return {path: self.info(path)}
596
+ else:
597
+ if not detail:
598
+ return [] # glob of non-existent returns empty
599
+ else:
600
+ return {}
601
+ elif "/" in path[:ind]:
602
+ ind2 = path[:ind].rindex("/")
603
+ root = path[: ind2 + 1]
604
+ depth = None if "**" in path else path[ind2 + 1 :].count("/") + 1
605
+ else:
606
+ root = ""
607
+ depth = None if "**" in path else path[ind + 1 :].count("/") + 1
608
+
609
+ allpaths = self.find(
610
+ root, maxdepth=maxdepth or depth, withdirs=True, detail=True, **kwargs
611
+ )
612
+ # Escape characters special to python regex, leaving our supported
613
+ # special characters in place.
614
+ # See https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html
615
+ # for shell globbing details.
616
+ pattern = (
617
+ "^"
618
+ + (
619
+ path.replace("\\", r"\\")
620
+ .replace(".", r"\.")
621
+ .replace("+", r"\+")
622
+ .replace("//", "/")
623
+ .replace("(", r"\(")
624
+ .replace(")", r"\)")
625
+ .replace("|", r"\|")
626
+ .replace("^", r"\^")
627
+ .replace("$", r"\$")
628
+ .replace("{", r"\{")
629
+ .replace("}", r"\}")
630
+ .rstrip("/")
631
+ )
632
+ + "$"
633
+ )
634
+ pattern = re.sub("[*]{2}", "=PLACEHOLDER=", pattern)
635
+ pattern = re.sub("[*]", "[^/]*", pattern)
636
+ pattern = re.compile(pattern.replace("=PLACEHOLDER=", ".*"))
637
+ out = {
638
+ p: allpaths[p]
639
+ for p in sorted(allpaths)
640
+ if pattern.match(p.replace("//", "/").rstrip("/"))
641
+ }
642
+ if detail:
643
+ return out
644
+ else:
645
+ return list(out)
646
+
647
+ def isdir(self, path):
648
+ # override, since all URLs are (also) files
649
+ try:
650
+ return bool(self.ls(path))
651
+ except (FileNotFoundError, ValueError):
652
+ return False
653
+
654
+
655
+ class HTTPFile(AbstractBufferedFile):
656
+ """
657
+ A file-like object pointing to a remote HTTP(S) resource
658
+
659
+ Supports only reading, with read-ahead of a predetermined block-size.
660
+
661
+ In the case that the server does not supply the filesize, only reading of
662
+ the complete file in one go is supported.
663
+
664
+ Parameters
665
+ ----------
666
+ url: str
667
+ Full URL of the remote resource, including the protocol
668
+ session: requests.Session or None
669
+ All calls will be made within this session, to avoid restarting
670
+ connections where the server allows this
671
+ block_size: int or None
672
+ The amount of read-ahead to do, in bytes. Default is 5MB, or the value
673
+ configured for the FileSystem creating this file
674
+ size: None or int
675
+ If given, this is the size of the file in bytes, and we don't attempt
676
+ to call the server to find the value.
677
+ kwargs: all other key-values are passed to requests calls.
678
+ """
679
+
680
+ def __init__(
681
+ self,
682
+ fs,
683
+ url,
684
+ session=None,
685
+ block_size=None,
686
+ mode="rb",
687
+ cache_type="bytes",
688
+ cache_options=None,
689
+ size=None,
690
+ **kwargs,
691
+ ):
692
+ if mode != "rb":
693
+ raise NotImplementedError("File mode not supported")
694
+ self.url = url
695
+ self.session = session
696
+ self.details = {"name": url, "size": size, "type": "file"}
697
+ super().__init__(
698
+ fs=fs,
699
+ path=url,
700
+ mode=mode,
701
+ block_size=block_size,
702
+ cache_type=cache_type,
703
+ cache_options=cache_options,
704
+ **kwargs,
705
+ )
706
+
707
+ def read(self, length=-1):
708
+ """Read bytes from file
709
+
710
+ Parameters
711
+ ----------
712
+ length: int
713
+ Read up to this many bytes. If negative, read all content to end of
714
+ file. If the server has not supplied the filesize, attempting to
715
+ read only part of the data will raise a ValueError.
716
+ """
717
+ if (
718
+ (length < 0 and self.loc == 0) # explicit read all
719
+ # but not when the size is known and fits into a block anyways
720
+ and not (self.size is not None and self.size <= self.blocksize)
721
+ ):
722
+ self._fetch_all()
723
+ if self.size is None:
724
+ if length < 0:
725
+ self._fetch_all()
726
+ else:
727
+ length = min(self.size - self.loc, length)
728
+ return super().read(length)
729
+
730
+ def _fetch_all(self):
731
+ """Read whole file in one shot, without caching
732
+
733
+ This is only called when position is still at zero,
734
+ and read() is called without a byte-count.
735
+ """
736
+ logger.debug(f"Fetch all for {self}")
737
+ if not isinstance(self.cache, AllBytes):
738
+ r = self.session.get(self.fs.encode_url(self.url), **self.kwargs)
739
+ r.raise_for_status()
740
+ out = r.content
741
+ self.cache = AllBytes(size=len(out), fetcher=None, blocksize=None, data=out)
742
+ self.size = len(out)
743
+
744
+ def _parse_content_range(self, headers):
745
+ """Parse the Content-Range header"""
746
+ s = headers.get("Content-Range", "")
747
+ m = re.match(r"bytes (\d+-\d+|\*)/(\d+|\*)", s)
748
+ if not m:
749
+ return None, None, None
750
+
751
+ if m[1] == "*":
752
+ start = end = None
753
+ else:
754
+ start, end = [int(x) for x in m[1].split("-")]
755
+ total = None if m[2] == "*" else int(m[2])
756
+ return start, end, total
757
+
758
+ def _fetch_range(self, start, end):
759
+ """Download a block of data
760
+
761
+ The expectation is that the server returns only the requested bytes,
762
+ with HTTP code 206. If this is not the case, we first check the headers,
763
+ and then stream the output - if the data size is bigger than we
764
+ requested, an exception is raised.
765
+ """
766
+ logger.debug(f"Fetch range for {self}: {start}-{end}")
767
+ kwargs = self.kwargs.copy()
768
+ headers = kwargs.pop("headers", {}).copy()
769
+ headers["Range"] = f"bytes={start}-{end - 1}"
770
+ logger.debug("%s : %s", self.url, headers["Range"])
771
+ r = self.session.get(self.fs.encode_url(self.url), headers=headers, **kwargs)
772
+ if r.status_code == 416:
773
+ # range request outside file
774
+ return b""
775
+ r.raise_for_status()
776
+
777
+ # If the server has handled the range request, it should reply
778
+ # with status 206 (partial content). But we'll guess that a suitable
779
+ # Content-Range header or a Content-Length no more than the
780
+ # requested range also means we got the desired range.
781
+ cl = r.headers.get("Content-Length", r.headers.get("content-length", end + 1))
782
+ response_is_range = (
783
+ r.status_code == 206
784
+ or self._parse_content_range(r.headers)[0] == start
785
+ or int(cl) <= end - start
786
+ )
787
+
788
+ if response_is_range:
789
+ # partial content, as expected
790
+ out = r.content
791
+ elif start > 0:
792
+ raise ValueError(
793
+ "The HTTP server doesn't appear to support range requests. "
794
+ "Only reading this file from the beginning is supported. "
795
+ "Open with block_size=0 for a streaming file interface."
796
+ )
797
+ else:
798
+ # Response is not a range, but we want the start of the file,
799
+ # so we can read the required amount anyway.
800
+ cl = 0
801
+ out = []
802
+ for chunk in r.iter_content(2**20, False):
803
+ out.append(chunk)
804
+ cl += len(chunk)
805
+ out = b"".join(out)[: end - start]
806
+ return out
807
+
808
+
809
+ magic_check = re.compile("([*[])")
810
+
811
+
812
+ def has_magic(s):
813
+ match = magic_check.search(s)
814
+ return match is not None
815
+
816
+
817
+ class HTTPStreamFile(AbstractBufferedFile):
818
+ def __init__(self, fs, url, mode="rb", session=None, **kwargs):
819
+ self.url = url
820
+ self.session = session
821
+ if mode != "rb":
822
+ raise ValueError
823
+ self.details = {"name": url, "size": None}
824
+ super().__init__(fs=fs, path=url, mode=mode, cache_type="readahead", **kwargs)
825
+
826
+ r = self.session.get(self.fs.encode_url(url), stream=True, **kwargs)
827
+ self.fs._raise_not_found_for_status(r, url)
828
+ self.it = r.iter_content(1024, False)
829
+ self.leftover = b""
830
+
831
+ self.r = r
832
+
833
+ def seek(self, *args, **kwargs):
834
+ raise ValueError("Cannot seek streaming HTTP file")
835
+
836
+ def read(self, num=-1):
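+ # Pull chunks from the response iterator until `num` bytes are buffered
+ # (or the stream ends); any surplus is stashed in self.leftover for the
+ # next call.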
837
+ bufs = [self.leftover]
838
+ leng = len(self.leftover)
839
+ while leng < num or num < 0:
840
+ try:
841
+ out = self.it.__next__()
842
+ except StopIteration:
843
+ break
844
+ if out:
845
+ bufs.append(out)
846
+ else:
847
+ break
848
+ leng += len(out)
849
+ out = b"".join(bufs)
850
+ if num >= 0:
851
+ self.leftover = out[num:]
852
+ out = out[:num]
853
+ else:
854
+ self.leftover = b""
855
+ self.loc += len(out)
856
+ return out
857
+
858
+ def close(self):
859
+ self.r.close()
860
+ self.closed = True
861
+
862
+
863
+ def get_range(session, url, start, end, **kwargs):
864
+ # explicit get a range when we know it must be safe
865
+ kwargs = kwargs.copy()
866
+ headers = kwargs.pop("headers", {}).copy()
867
+ headers["Range"] = f"bytes={start}-{end - 1}"
868
+ r = session.get(url, headers=headers, **kwargs)
869
+ r.raise_for_status()
870
+ return r.content
871
+
872
+
873
+ def _file_info(url, session, size_policy="head", **kwargs):
874
+ """Call HEAD on the server to get details about the file (size/checksum etc.)
875
+
876
+ Default operation is to explicitly allow redirects and use encoding
877
+ 'identity' (no compression) to get the true size of the target.
878
+ """
879
+ logger.debug("Retrieve file size for %s", url)
880
+ kwargs = kwargs.copy()
881
+ ar = kwargs.pop("allow_redirects", True)
882
+ head = kwargs.get("headers", {}).copy()
883
+ # TODO: not allowed in JS
884
+ # head["Accept-Encoding"] = "identity"
885
+ kwargs["headers"] = head
886
+
887
+ info = {}
888
+ if size_policy == "head":
889
+ r = session.head(url, allow_redirects=ar, **kwargs)
890
+ elif size_policy == "get":
891
+ r = session.get(url, allow_redirects=ar, **kwargs)
892
+ else:
893
+ raise TypeError(f'size_policy must be "head" or "get", got {size_policy}')
894
+ r.raise_for_status()
895
+
896
+ # TODO:
897
+ # recognise lack of 'Accept-Ranges',
898
+ # or 'Accept-Ranges': 'none' (not 'bytes')
899
+ # to mean streaming only, no random access => return None
900
+ if "Content-Length" in r.headers:
901
+ info["size"] = int(r.headers["Content-Length"])
902
+ elif "Content-Range" in r.headers:
903
+ info["size"] = int(r.headers["Content-Range"].split("/")[1])
904
+ elif "content-length" in r.headers:
905
+ info["size"] = int(r.headers["content-length"])
906
+ elif "content-range" in r.headers:
907
+ info["size"] = int(r.headers["content-range"].split("/")[1])
908
+
909
+ for checksum_field in ["ETag", "Content-MD5", "Digest"]:
910
+ if r.headers.get(checksum_field):
911
+ info[checksum_field] = r.headers[checksum_field]
912
+
913
+ return info
914
+
915
+
916
+ # importing this is enough to register it
917
+ def register():
918
+ register_implementation("http", HTTPFileSystem, clobber=True)
919
+ register_implementation("https", HTTPFileSystem, clobber=True)
920
+ register_implementation("sync-http", HTTPFileSystem, clobber=True)
921
+ register_implementation("sync-https", HTTPFileSystem, clobber=True)
922
+
923
+
924
+ register()
925
+
926
+
927
+ def unregister():
928
+ from fsspec.implementations.http import HTTPFileSystem
929
+
930
+ register_implementation("http", HTTPFileSystem, clobber=True)
931
+ register_implementation("https", HTTPFileSystem, clobber=True)
.venv/lib/python3.13/site-packages/fsspec/implementations/jupyter.py ADDED
@@ -0,0 +1,124 @@
1
+ import base64
2
+ import io
3
+ import re
4
+
5
+ import requests
6
+
7
+ import fsspec
8
+
9
+
10
+ class JupyterFileSystem(fsspec.AbstractFileSystem):
11
+ """View of the files as seen by a Jupyter server (notebook or lab)"""
12
+
13
+ protocol = ("jupyter", "jlab")
14
+
15
+ def __init__(self, url, tok=None, **kwargs):
16
+ """
17
+
18
+ Parameters
19
+ ----------
20
+ url : str
21
+ Base URL of the server, like "http://127.0.0.1:8888". May include
22
+ token in the string, which is given by the process when starting up
23
+ tok : str
24
+ If the token is obtained separately, can be given here
25
+ kwargs
26
+ """
27
+ if "?" in url:
28
+ if tok is None:
29
+ try:
30
+ tok = re.findall("token=([a-z0-9]+)", url)[0]
31
+ except IndexError as e:
32
+ raise ValueError("Could not determine token") from e
33
+ url = url.split("?", 1)[0]
34
+ self.url = url.rstrip("/") + "/api/contents"
35
+ self.session = requests.Session()
36
+ if tok:
37
+ self.session.headers["Authorization"] = f"token {tok}"
38
+
39
+ super().__init__(**kwargs)
40
+
41
+ def ls(self, path, detail=True, **kwargs):
42
+ path = self._strip_protocol(path)
43
+ r = self.session.get(f"{self.url}/{path}")
44
+ if r.status_code == 404:
45
+ return FileNotFoundError(path)
46
+ r.raise_for_status()
47
+ out = r.json()
48
+
49
+ if out["type"] == "directory":
50
+ out = out["content"]
51
+ else:
52
+ out = [out]
53
+ for o in out:
54
+ o["name"] = o.pop("path")
55
+ o.pop("content")
56
+ if o["type"] == "notebook":
57
+ o["type"] = "file"
58
+ if detail:
59
+ return out
60
+ return [o["name"] for o in out]
61
+
62
+ def cat_file(self, path, start=None, end=None, **kwargs):
63
+ path = self._strip_protocol(path)
64
+ r = self.session.get(f"{self.url}/{path}")
65
+ if r.status_code == 404:
66
+ return FileNotFoundError(path)
67
+ r.raise_for_status()
68
+ out = r.json()
69
+ if out["format"] == "text":
70
+ # data should be binary
71
+ b = out["content"].encode()
72
+ else:
73
+ b = base64.b64decode(out["content"])
74
+ return b[start:end]
75
+
76
+ def pipe_file(self, path, value, **_):
77
+ path = self._strip_protocol(path)
78
+ json = {
79
+ "name": path.rsplit("/", 1)[-1],
80
+ "path": path,
81
+ "size": len(value),
82
+ "content": base64.b64encode(value).decode(),
83
+ "format": "base64",
84
+ "type": "file",
85
+ }
86
+ self.session.put(f"{self.url}/{path}", json=json)
87
+
88
+ def mkdir(self, path, create_parents=True, **kwargs):
89
+ path = self._strip_protocol(path)
90
+ if create_parents and "/" in path:
91
+ self.mkdir(path.rsplit("/", 1)[0], True)
92
+ json = {
93
+ "name": path.rsplit("/", 1)[-1],
94
+ "path": path,
95
+ "size": None,
96
+ "content": None,
97
+ "type": "directory",
98
+ }
99
+ self.session.put(f"{self.url}/{path}", json=json)
100
+
101
+ def _rm(self, path):
102
+ path = self._strip_protocol(path)
103
+ self.session.delete(f"{self.url}/{path}")
104
+
105
+ def _open(self, path, mode="rb", **kwargs):
106
+ path = self._strip_protocol(path)
107
+ if mode == "rb":
108
+ data = self.cat_file(path)
109
+ return io.BytesIO(data)
110
+ else:
111
+ return SimpleFileWriter(self, path, mode="wb")
112
+
113
+
114
+ class SimpleFileWriter(fsspec.spec.AbstractBufferedFile):
115
+ def _upload_chunk(self, final=False):
116
+ """Never uploads a chunk until file is done
117
+
118
+ Not suitable for large files
119
+ """
120
+ if final is False:
121
+ return False
122
+ self.buffer.seek(0)
123
+ data = self.buffer.read()
124
+ self.fs.pipe_file(self.path, data)
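
A short usage sketch (illustrative: the server URL and token are placeholders):

    import fsspec

    fs = fsspec.filesystem("jupyter", url="http://127.0.0.1:8888", tok="<token>")
    fs.mkdir("results")
    fs.pipe_file("results/out.bin", b"some bytes")  # uploads via the contents API
    print(fs.ls("results", detail=False))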
.venv/lib/python3.13/site-packages/fsspec/implementations/libarchive.py ADDED
@@ -0,0 +1,213 @@
1
+ from contextlib import contextmanager
2
+ from ctypes import (
3
+ CFUNCTYPE,
4
+ POINTER,
5
+ c_int,
6
+ c_longlong,
7
+ c_void_p,
8
+ cast,
9
+ create_string_buffer,
10
+ )
11
+
12
+ import libarchive
13
+ import libarchive.ffi as ffi
14
+
15
+ from fsspec import open_files
16
+ from fsspec.archive import AbstractArchiveFileSystem
17
+ from fsspec.implementations.memory import MemoryFile
18
+ from fsspec.utils import DEFAULT_BLOCK_SIZE
19
+
20
+ # Libarchive requires seekable files or memory only for certain archive
21
+ # types. However, since we read the directory first to cache the contents
22
+ # and also allow random access to any file, the file-like object needs
23
+ # to be seekable no matter what.
24
+
25
+ # Seek call-backs (not provided in the libarchive python wrapper)
26
+ SEEK_CALLBACK = CFUNCTYPE(c_longlong, c_int, c_void_p, c_longlong, c_int)
27
+ read_set_seek_callback = ffi.ffi(
28
+ "read_set_seek_callback", [ffi.c_archive_p, SEEK_CALLBACK], c_int, ffi.check_int
29
+ )
30
+ new_api = hasattr(ffi, "NO_OPEN_CB")
31
+
32
+
33
+ @contextmanager
34
+ def custom_reader(file, format_name="all", filter_name="all", block_size=ffi.page_size):
35
+ """Read an archive from a seekable file-like object.
36
+
37
+ The `file` object must support the standard `readinto` and `seek` methods.
38
+ """
39
+ buf = create_string_buffer(block_size)
40
+ buf_p = cast(buf, c_void_p)
41
+
42
+ def read_func(archive_p, context, ptrptr):
43
+ # readinto the buffer, returns number of bytes read
44
+ length = file.readinto(buf)
45
+ # write the address of the buffer into the pointer
46
+ ptrptr = cast(ptrptr, POINTER(c_void_p))
47
+ ptrptr[0] = buf_p
48
+ # tell libarchive how much data was written into the buffer
49
+ return length
50
+
51
+ def seek_func(archive_p, context, offset, whence):
52
+ file.seek(offset, whence)
53
+ # tell libarchive the current position
54
+ return file.tell()
55
+
56
+ read_cb = ffi.READ_CALLBACK(read_func)
57
+ seek_cb = SEEK_CALLBACK(seek_func)
58
+
59
+ if new_api:
60
+ open_cb = ffi.NO_OPEN_CB
61
+ close_cb = ffi.NO_CLOSE_CB
62
+ else:
63
+ open_cb = libarchive.read.OPEN_CALLBACK(ffi.VOID_CB)
64
+ close_cb = libarchive.read.CLOSE_CALLBACK(ffi.VOID_CB)
65
+
66
+ with libarchive.read.new_archive_read(format_name, filter_name) as archive_p:
67
+ read_set_seek_callback(archive_p, seek_cb)
68
+ ffi.read_open(archive_p, None, open_cb, read_cb, close_cb)
69
+ yield libarchive.read.ArchiveRead(archive_p)
70
+
71
+
72
+ class LibArchiveFileSystem(AbstractArchiveFileSystem):
73
+ """Compressed archives as a file-system (read-only)
74
+
75
+ Supports the following formats:
76
+ tar, pax, cpio, ISO9660, zip, mtree, shar, ar, raw, xar, lha/lzh, rar,
77
+ Microsoft CAB, 7-Zip, WARC
78
+
79
+ See the libarchive documentation for further restrictions.
80
+ https://www.libarchive.org/
81
+
82
+ Keeps the file object open while the instance lives. It only works with
84
+ seekable file-like objects. If the filesystem does not support such file
85
+ objects, it is recommended to cache locally first.
85
+
86
+ This class is pickleable, but not necessarily thread-safe (depends on the
87
+ platform). See libarchive documentation for details.
88
+ """
89
+
90
+ root_marker = ""
91
+ protocol = "libarchive"
92
+ cachable = False
93
+
94
+ def __init__(
95
+ self,
96
+ fo="",
97
+ mode="r",
98
+ target_protocol=None,
99
+ target_options=None,
100
+ block_size=DEFAULT_BLOCK_SIZE,
101
+ **kwargs,
102
+ ):
103
+ """
104
+ Parameters
105
+ ----------
106
+ fo: str or file-like
107
+ Contains ZIP, and must exist. If a str, will fetch file using
108
+ :meth:`~fsspec.open_files`, which must return one file exactly.
109
+ mode: str
110
+ Currently, only 'r' accepted
111
+ target_protocol: str (optional)
112
+ If ``fo`` is a string, this value can be used to override the
113
+ FS protocol inferred from a URL
114
+ target_options: dict (optional)
115
+ Kwargs passed when instantiating the target FS, if ``fo`` is
116
+ a string.
117
+ """
118
+ super().__init__(self, **kwargs)
119
+ if mode != "r":
120
+ raise ValueError("Only read from archive files accepted")
121
+ if isinstance(fo, str):
122
+ files = open_files(fo, protocol=target_protocol, **(target_options or {}))
123
+ if len(files) != 1:
124
+ raise ValueError(
125
+ f'Path "{fo}" did not resolve to exactly one file: "{files}"'
126
+ )
127
+ fo = files[0]
128
+ self.of = fo
129
+ self.fo = fo.__enter__() # the whole instance is a context
130
+ self.block_size = block_size
131
+ self.dir_cache = None
132
+
133
+ @contextmanager
134
+ def _open_archive(self):
135
+ self.fo.seek(0)
136
+ with custom_reader(self.fo, block_size=self.block_size) as arc:
137
+ yield arc
138
+
139
+ @classmethod
140
+ def _strip_protocol(cls, path):
141
+ # file paths are always relative to the archive root
142
+ return super()._strip_protocol(path).lstrip("/")
143
+
144
+ def _get_dirs(self):
145
+ fields = {
146
+ "name": "pathname",
147
+ "size": "size",
148
+ "created": "ctime",
149
+ "mode": "mode",
150
+ "uid": "uid",
151
+ "gid": "gid",
152
+ "mtime": "mtime",
153
+ }
154
+
155
+ if self.dir_cache is not None:
156
+ return
157
+
158
+ self.dir_cache = {}
159
+ list_names = []
160
+ with self._open_archive() as arc:
161
+ for entry in arc:
162
+ if not entry.isdir and not entry.isfile:
163
+ # Skip symbolic links, fifo entries, etc.
164
+ continue
165
+ self.dir_cache.update(
166
+ {
167
+ dirname: {"name": dirname, "size": 0, "type": "directory"}
168
+ for dirname in self._all_dirnames([entry.name])
169
+ }
170
+ )
171
+ f = {key: getattr(entry, fields[key]) for key in fields}
172
+ f["type"] = "directory" if entry.isdir else "file"
173
+ list_names.append(entry.name)
174
+
175
+ self.dir_cache[f["name"]] = f
176
+ # libarchive does not seem to return an entry for the directories (at least
177
+ # not in all formats), so get the directories names from the files names
178
+ self.dir_cache.update(
179
+ {
180
+ dirname: {"name": dirname, "size": 0, "type": "directory"}
181
+ for dirname in self._all_dirnames(list_names)
182
+ }
183
+ )
184
+
185
+ def _open(
186
+ self,
187
+ path,
188
+ mode="rb",
189
+ block_size=None,
190
+ autocommit=True,
191
+ cache_options=None,
192
+ **kwargs,
193
+ ):
194
+ path = self._strip_protocol(path)
195
+ if mode != "rb":
196
+ raise NotImplementedError
197
+
198
+ data = bytes()
199
+ with self._open_archive() as arc:
200
+ for entry in arc:
201
+ if entry.pathname != path:
202
+ continue
203
+
204
+ if entry.size == 0:
205
+ # empty file, so there are no blocks
206
+ break
207
+
208
+ for block in entry.get_blocks(entry.size):
209
+ data = block
210
+ break
211
+ else:
212
+ raise ValueError
213
+ return MemoryFile(fs=self, path=path, data=data)
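
A usage sketch, assuming the libarchive Python bindings are installed (the archive path and member name are placeholders):

    import fsspec

    fs = fsspec.filesystem("libarchive", fo="backup.7z")
    print(fs.ls(""))                        # listing comes from one pass over the archive
    with fs.open("docs/readme.txt") as f:   # each member is materialised as a MemoryFile
        print(f.read())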
.venv/lib/python3.13/site-packages/fsspec/implementations/local.py ADDED
@@ -0,0 +1,514 @@
1
+ import datetime
2
+ import io
3
+ import logging
4
+ import os
5
+ import os.path as osp
6
+ import shutil
7
+ import stat
8
+ import tempfile
9
+ from functools import lru_cache
10
+
11
+ from fsspec import AbstractFileSystem
12
+ from fsspec.compression import compr
13
+ from fsspec.core import get_compression
14
+ from fsspec.utils import isfilelike, stringify_path
15
+
16
+ logger = logging.getLogger("fsspec.local")
17
+
18
+
19
+ class LocalFileSystem(AbstractFileSystem):
20
+ """Interface to files on local storage
21
+
22
+ Parameters
23
+ ----------
24
+ auto_mkdir: bool
25
+ Whether, when opening a file, the directory containing it should
26
+ be created (if it doesn't already exist). This is assumed by pyarrow
27
+ code.
28
+ """
29
+
30
+ root_marker = "/"
31
+ protocol = "file", "local"
32
+ local_file = True
33
+
34
+ def __init__(self, auto_mkdir=False, **kwargs):
35
+ super().__init__(**kwargs)
36
+ self.auto_mkdir = auto_mkdir
37
+
38
+ @property
39
+ def fsid(self):
40
+ return "local"
41
+
42
+ def mkdir(self, path, create_parents=True, **kwargs):
43
+ path = self._strip_protocol(path)
44
+ if self.exists(path):
45
+ raise FileExistsError(path)
46
+ if create_parents:
47
+ self.makedirs(path, exist_ok=True)
48
+ else:
49
+ os.mkdir(path, **kwargs)
50
+
51
+ def makedirs(self, path, exist_ok=False):
52
+ path = self._strip_protocol(path)
53
+ os.makedirs(path, exist_ok=exist_ok)
54
+
55
+ def rmdir(self, path):
56
+ path = self._strip_protocol(path)
57
+ os.rmdir(path)
58
+
59
+ def ls(self, path, detail=False, **kwargs):
60
+ path = self._strip_protocol(path)
61
+ path_info = self.info(path)
62
+ infos = []
63
+ if path_info["type"] == "directory":
64
+ with os.scandir(path) as it:
65
+ for f in it:
66
+ try:
67
+ # Only get the info if requested since it is a bit expensive (the stat call inside)
68
+ # The strip_protocol is also used in info() and calls make_path_posix to always return posix paths
69
+ info = self.info(f) if detail else self._strip_protocol(f.path)
70
+ infos.append(info)
71
+ except FileNotFoundError:
72
+ pass
73
+ else:
74
+ infos = [path_info] if detail else [path_info["name"]]
75
+
76
+ return infos
77
+
78
+ def info(self, path, **kwargs):
79
+ if isinstance(path, os.DirEntry):
80
+ # scandir DirEntry
81
+ out = path.stat(follow_symlinks=False)
82
+ link = path.is_symlink()
83
+ if path.is_dir(follow_symlinks=False):
84
+ t = "directory"
85
+ elif path.is_file(follow_symlinks=False):
86
+ t = "file"
87
+ else:
88
+ t = "other"
89
+
90
+ size = out.st_size
91
+ if link:
92
+ try:
93
+ out2 = path.stat(follow_symlinks=True)
94
+ size = out2.st_size
95
+ except OSError:
96
+ size = 0
97
+ path = self._strip_protocol(path.path)
98
+ else:
99
+ # str or path-like
100
+ path = self._strip_protocol(path)
101
+ out = os.stat(path, follow_symlinks=False)
102
+ link = stat.S_ISLNK(out.st_mode)
103
+ if link:
104
+ out = os.stat(path, follow_symlinks=True)
105
+ size = out.st_size
106
+ if stat.S_ISDIR(out.st_mode):
107
+ t = "directory"
108
+ elif stat.S_ISREG(out.st_mode):
109
+ t = "file"
110
+ else:
111
+ t = "other"
112
+
113
+ # Check for the 'st_birthtime' attribute, which is not always present; fallback to st_ctime
114
+ created_time = getattr(out, "st_birthtime", out.st_ctime)
115
+
116
+ result = {
117
+ "name": path,
118
+ "size": size,
119
+ "type": t,
120
+ "created": created_time,
121
+ "islink": link,
122
+ }
123
+ for field in ["mode", "uid", "gid", "mtime", "ino", "nlink"]:
124
+ result[field] = getattr(out, f"st_{field}")
125
+ if link:
126
+ result["destination"] = os.readlink(path)
127
+ return result
128
+
129
+ def lexists(self, path, **kwargs):
130
+ return osp.lexists(path)
131
+
132
+ def cp_file(self, path1, path2, **kwargs):
133
+ path1 = self._strip_protocol(path1)
134
+ path2 = self._strip_protocol(path2)
135
+ if self.auto_mkdir:
136
+ self.makedirs(self._parent(path2), exist_ok=True)
137
+ if self.isfile(path1):
138
+ shutil.copyfile(path1, path2)
139
+ elif self.isdir(path1):
140
+ self.mkdirs(path2, exist_ok=True)
141
+ else:
142
+ raise FileNotFoundError(path1)
143
+
144
+ def isfile(self, path):
145
+ path = self._strip_protocol(path)
146
+ return os.path.isfile(path)
147
+
148
+ def isdir(self, path):
149
+ path = self._strip_protocol(path)
150
+ return os.path.isdir(path)
151
+
152
+ def get_file(self, path1, path2, callback=None, **kwargs):
153
+ if isfilelike(path2):
154
+ with open(path1, "rb") as f:
155
+ shutil.copyfileobj(f, path2)
156
+ else:
157
+ return self.cp_file(path1, path2, **kwargs)
158
+
159
+ def put_file(self, path1, path2, callback=None, **kwargs):
160
+ return self.cp_file(path1, path2, **kwargs)
161
+
162
+ def mv(self, path1, path2, recursive: bool = True, **kwargs):
163
+ """Move files/directories
164
+ For the specific case of local, all ops on directories are recursive and
165
+ the recursive= kwarg is ignored.
166
+ """
167
+ path1 = self._strip_protocol(path1)
168
+ path2 = self._strip_protocol(path2)
169
+ shutil.move(path1, path2)
170
+
171
+ def link(self, src, dst, **kwargs):
172
+ src = self._strip_protocol(src)
173
+ dst = self._strip_protocol(dst)
174
+ os.link(src, dst, **kwargs)
175
+
176
+ def symlink(self, src, dst, **kwargs):
177
+ src = self._strip_protocol(src)
178
+ dst = self._strip_protocol(dst)
179
+ os.symlink(src, dst, **kwargs)
180
+
181
+ def islink(self, path) -> bool:
182
+ return os.path.islink(self._strip_protocol(path))
183
+
184
+ def rm_file(self, path):
185
+ os.remove(self._strip_protocol(path))
186
+
187
+ def rm(self, path, recursive=False, maxdepth=None):
188
+ if not isinstance(path, list):
189
+ path = [path]
190
+
191
+ for p in path:
192
+ p = self._strip_protocol(p)
193
+ if self.isdir(p):
194
+ if not recursive:
195
+ raise ValueError("Cannot delete directory, set recursive=True")
196
+ if osp.abspath(p) == os.getcwd():
197
+ raise ValueError("Cannot delete current working directory")
198
+ shutil.rmtree(p)
199
+ else:
200
+ os.remove(p)
201
+
202
+ def unstrip_protocol(self, name):
203
+ name = self._strip_protocol(name) # normalise for local/win/...
204
+ return f"file://{name}"
205
+
206
+ def _open(self, path, mode="rb", block_size=None, **kwargs):
207
+ path = self._strip_protocol(path)
208
+ if self.auto_mkdir and "w" in mode:
209
+ self.makedirs(self._parent(path), exist_ok=True)
210
+ return LocalFileOpener(path, mode, fs=self, **kwargs)
211
+
212
+ def touch(self, path, truncate=True, **kwargs):
213
+ path = self._strip_protocol(path)
214
+ if self.auto_mkdir:
215
+ self.makedirs(self._parent(path), exist_ok=True)
216
+ if self.exists(path):
217
+ os.utime(path, None)
218
+ else:
219
+ open(path, "a").close()
220
+ if truncate:
221
+ os.truncate(path, 0)
222
+
223
+ def created(self, path):
224
+ info = self.info(path=path)
225
+ return datetime.datetime.fromtimestamp(
226
+ info["created"], tz=datetime.timezone.utc
227
+ )
228
+
229
+ def modified(self, path):
230
+ info = self.info(path=path)
231
+ return datetime.datetime.fromtimestamp(info["mtime"], tz=datetime.timezone.utc)
232
+
233
+ @classmethod
234
+ def _parent(cls, path):
235
+ path = cls._strip_protocol(path)
236
+ if os.sep == "/":
237
+ # posix native
238
+ return path.rsplit("/", 1)[0] or "/"
239
+ else:
240
+ # NT
241
+ path_ = path.rsplit("/", 1)[0]
242
+ if len(path_) <= 3:
243
+ if path_[1:2] == ":":
244
+ # nt root (something like c:/)
245
+ return path_[0] + ":/"
246
+ # More cases may be required here
247
+ return path_
248
+
249
+ @classmethod
250
+ def _strip_protocol(cls, path):
251
+ path = stringify_path(path)
252
+ if path.startswith("file://"):
253
+ path = path[7:]
254
+ elif path.startswith("file:"):
255
+ path = path[5:]
256
+ elif path.startswith("local://"):
257
+ path = path[8:]
258
+ elif path.startswith("local:"):
259
+ path = path[6:]
260
+
261
+ path = make_path_posix(path)
262
+ if os.sep != "/":
263
+ # This code-path is a stripped down version of
264
+ # > drive, path = ntpath.splitdrive(path)
265
+ if path[1:2] == ":":
266
+ # Absolute drive-letter path, e.g. X:\Windows
267
+ # Relative path with drive, e.g. X:Windows
268
+ drive, path = path[:2], path[2:]
269
+ elif path[:2] == "//":
270
+ # UNC drives, e.g. \\server\share or \\?\UNC\server\share
271
+ # Device drives, e.g. \\.\device or \\?\device
272
+ if (index1 := path.find("/", 2)) == -1 or (
273
+ index2 := path.find("/", index1 + 1)
274
+ ) == -1:
275
+ drive, path = path, ""
276
+ else:
277
+ drive, path = path[:index2], path[index2:]
278
+ else:
279
+ # Relative path, e.g. Windows
280
+ drive = ""
281
+
282
+ path = path.rstrip("/") or cls.root_marker
283
+ return drive + path
284
+
285
+ else:
286
+ return path.rstrip("/") or cls.root_marker
287
+
288
+ def _isfilestore(self):
289
+ # Inheriting from DaskFileSystem makes this False (S3, etc. were the
290
+ # original motivation). But we are a posix-like file system.
291
+ # See https://github.com/dask/dask/issues/5526
292
+ return True
293
+
294
+ def chmod(self, path, mode):
295
+ path = stringify_path(path)
296
+ return os.chmod(path, mode)
297
+
298
+
299
+ def make_path_posix(path):
300
+ """Make path generic and absolute for current OS"""
301
+ if not isinstance(path, str):
302
+ if isinstance(path, (list, set, tuple)):
303
+ return type(path)(make_path_posix(p) for p in path)
304
+ else:
305
+ path = stringify_path(path)
306
+ if not isinstance(path, str):
307
+ raise TypeError(f"could not convert {path!r} to string")
308
+ if os.sep == "/":
309
+ # Native posix
310
+ if path.startswith("/"):
311
+ # most common fast case for posix
312
+ return path
313
+ elif path.startswith("~"):
314
+ return osp.expanduser(path)
315
+ elif path.startswith("./"):
316
+ path = path[2:]
317
+ elif path == ".":
318
+ path = ""
319
+ return f"{os.getcwd()}/{path}"
320
+ else:
321
+ # NT handling
322
+ if path[0:1] == "/" and path[2:3] == ":":
323
+ # path is like "/c:/local/path"
324
+ path = path[1:]
325
+ if path[1:2] == ":":
326
+ # windows full path like "C:\\local\\path"
327
+ if len(path) <= 3:
328
+ # nt root (something like c:/)
329
+ return path[0] + ":/"
330
+ path = path.replace("\\", "/")
331
+ return path
332
+ elif path[0:1] == "~":
333
+ return make_path_posix(osp.expanduser(path))
334
+ elif path.startswith(("\\\\", "//")):
335
+ # windows UNC/DFS-style paths
336
+ return "//" + path[2:].replace("\\", "/")
337
+ elif path.startswith(("\\", "/")):
338
+ # windows relative path with root
339
+ path = path.replace("\\", "/")
340
+ return f"{osp.splitdrive(os.getcwd())[0]}{path}"
341
+ else:
342
+ path = path.replace("\\", "/")
343
+ if path.startswith("./"):
344
+ path = path[2:]
345
+ elif path == ".":
346
+ path = ""
347
+ return f"{make_path_posix(os.getcwd())}/{path}"
348
+
349
+
350
+ def trailing_sep(path):
351
+ """Return True if the path ends with a path separator.
352
+
353
+ A forward slash is always considered a path separator, even on operating
354
+ systems that normally use a backslash.
355
+ """
356
+ # TODO: if all incoming paths were posix-compliant then separator would
357
+ # always be a forward slash, simplifying this function.
358
+ # See https://github.com/fsspec/filesystem_spec/pull/1250
359
+ return path.endswith(os.sep) or (os.altsep is not None and path.endswith(os.altsep))
360
+
361
+
362
+ @lru_cache(maxsize=1)
363
+ def get_umask(mask: int = 0o666) -> int:
364
+ """Get the current umask.
365
+
366
+ Follows https://stackoverflow.com/a/44130549 to get the umask.
367
+ Temporarily sets the umask to the given value, and then resets it to the
368
+ original value.
369
+ """
370
+ value = os.umask(mask)
371
+ os.umask(value)
372
+ return value
373
+
374
+
375
+ class LocalFileOpener(io.IOBase):
376
+ def __init__(
377
+ self, path, mode, autocommit=True, fs=None, compression=None, **kwargs
378
+ ):
379
+ logger.debug("open file: %s", path)
380
+ self.path = path
381
+ self.mode = mode
382
+ self.fs = fs
383
+ self.f = None
384
+ self.autocommit = autocommit
385
+ self.compression = get_compression(path, compression)
386
+ self.blocksize = io.DEFAULT_BUFFER_SIZE
387
+ self._open()
388
+
389
+ def _open(self):
390
+ if self.f is None or self.f.closed:
391
+ if self.autocommit or "w" not in self.mode:
392
+ self.f = open(self.path, mode=self.mode)
393
+ if self.compression:
394
+ compress = compr[self.compression]
395
+ self.f = compress(self.f, mode=self.mode)
396
+ else:
397
+ # TODO: check if path is writable?
398
+ i, name = tempfile.mkstemp()
399
+ os.close(i) # we want normal open and normal buffered file
400
+ self.temp = name
401
+ self.f = open(name, mode=self.mode)
402
+ if "w" not in self.mode:
403
+ self.size = self.f.seek(0, 2)
404
+ self.f.seek(0)
405
+ self.f.size = self.size
406
+
407
+ def _fetch_range(self, start, end):
408
+ # probably only used by cached FS
409
+ if "r" not in self.mode:
410
+ raise ValueError
411
+ self._open()
412
+ self.f.seek(start)
413
+ return self.f.read(end - start)
414
+
415
+ def __setstate__(self, state):
416
+ self.f = None
417
+ loc = state.pop("loc", None)
418
+ self.__dict__.update(state)
419
+ if "r" in state["mode"]:
420
+ self.f = None
421
+ self._open()
422
+ self.f.seek(loc)
423
+
424
+ def __getstate__(self):
425
+ d = self.__dict__.copy()
426
+ d.pop("f")
427
+ if "r" in self.mode:
428
+ d["loc"] = self.f.tell()
429
+ else:
430
+ if not self.f.closed:
431
+ raise ValueError("Cannot serialise open write-mode local file")
432
+ return d
433
+
434
+ def commit(self):
435
+ if self.autocommit:
436
+ raise RuntimeError("Can only commit if not already set to autocommit")
437
+ try:
438
+ shutil.move(self.temp, self.path)
439
+ except PermissionError as e:
440
+ # shutil.move raises PermissionError if os.rename
441
+ # and the default copy2 fallback with shutil.copystat fail.
442
+ # The file should be there nonetheless, but without copied permissions.
443
+ # If it doesn't exist, there was no permission to create the file.
444
+ if not os.path.exists(self.path):
445
+ raise e
446
+ else:
447
+ # If PermissionError is not raised, permissions can be set.
448
+ try:
449
+ mask = 0o666
450
+ os.chmod(self.path, mask & ~get_umask(mask))
451
+ except RuntimeError:
452
+ pass
453
+
454
+ def discard(self):
455
+ if self.autocommit:
456
+ raise RuntimeError("Cannot discard if set to autocommit")
457
+ os.remove(self.temp)
458
+
459
+ def readable(self) -> bool:
460
+ return True
461
+
462
+ def writable(self) -> bool:
463
+ return "r" not in self.mode
464
+
465
+ def read(self, *args, **kwargs):
466
+ return self.f.read(*args, **kwargs)
467
+
468
+ def write(self, *args, **kwargs):
469
+ return self.f.write(*args, **kwargs)
470
+
471
+ def tell(self, *args, **kwargs):
472
+ return self.f.tell(*args, **kwargs)
473
+
474
+ def seek(self, *args, **kwargs):
475
+ return self.f.seek(*args, **kwargs)
476
+
477
+ def seekable(self, *args, **kwargs):
478
+ return self.f.seekable(*args, **kwargs)
479
+
480
+ def readline(self, *args, **kwargs):
481
+ return self.f.readline(*args, **kwargs)
482
+
483
+ def readlines(self, *args, **kwargs):
484
+ return self.f.readlines(*args, **kwargs)
485
+
486
+ def close(self):
487
+ return self.f.close()
488
+
489
+ def truncate(self, size=None) -> int:
490
+ return self.f.truncate(size)
491
+
492
+ @property
493
+ def closed(self):
494
+ return self.f.closed
495
+
496
+ def fileno(self):
497
+ return self.raw.fileno()
498
+
499
+ def flush(self) -> None:
500
+ self.f.flush()
501
+
502
+ def __iter__(self):
503
+ return self.f.__iter__()
504
+
505
+ def __getattr__(self, item):
506
+ return getattr(self.f, item)
507
+
508
+ def __enter__(self):
509
+ self._incontext = True
510
+ return self
511
+
512
+ def __exit__(self, exc_type, exc_value, traceback):
513
+ self._incontext = False
514
+ self.f.__exit__(exc_type, exc_value, traceback)
.venv/lib/python3.13/site-packages/fsspec/implementations/memory.py ADDED
@@ -0,0 +1,311 @@
1
+ from __future__ import annotations
2
+
3
+ import logging
4
+ from datetime import datetime, timezone
5
+ from errno import ENOTEMPTY
6
+ from io import BytesIO
7
+ from pathlib import PurePath, PureWindowsPath
8
+ from typing import Any, ClassVar
9
+
10
+ from fsspec import AbstractFileSystem
11
+ from fsspec.implementations.local import LocalFileSystem
12
+ from fsspec.utils import stringify_path
13
+
14
+ logger = logging.getLogger("fsspec.memoryfs")
15
+
16
+
17
+ class MemoryFileSystem(AbstractFileSystem):
18
+ """A filesystem based on a dict of BytesIO objects
19
+
20
+ This is a global filesystem, so instances of this class all point to the
21
+ same in-memory filesystem.
22
+ """
23
+
24
+ store: ClassVar[dict[str, Any]] = {} # global, do not overwrite!
25
+ pseudo_dirs = [""] # global, do not overwrite!
26
+ protocol = "memory"
27
+ root_marker = "/"
28
+
29
+ @classmethod
30
+ def _strip_protocol(cls, path):
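+ # e.g. "memory://a/b/" -> "/a/b"; "memory://" -> "" (the root)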
31
+ if isinstance(path, PurePath):
32
+ if isinstance(path, PureWindowsPath):
33
+ return LocalFileSystem._strip_protocol(path)
34
+ else:
35
+ path = stringify_path(path)
36
+
37
+ path = path.removeprefix("memory://")
38
+ if "::" in path or "://" in path:
39
+ return path.rstrip("/")
40
+ path = path.lstrip("/").rstrip("/")
41
+ return "/" + path if path else ""
42
+
43
+ def ls(self, path, detail=True, **kwargs):
44
+ path = self._strip_protocol(path)
45
+ if path in self.store:
46
+ # there is a key with this exact name
47
+ if not detail:
48
+ return [path]
49
+ return [
50
+ {
51
+ "name": path,
52
+ "size": self.store[path].size,
53
+ "type": "file",
54
+ "created": self.store[path].created.timestamp(),
55
+ }
56
+ ]
57
+ paths = set()
58
+ starter = path + "/"
59
+ out = []
60
+ for p2 in tuple(self.store):
61
+ if p2.startswith(starter):
62
+ if "/" not in p2[len(starter) :]:
63
+ # exact child
64
+ out.append(
65
+ {
66
+ "name": p2,
67
+ "size": self.store[p2].size,
68
+ "type": "file",
69
+ "created": self.store[p2].created.timestamp(),
70
+ }
71
+ )
72
+ elif len(p2) > len(starter):
73
+ # implied child directory
74
+ ppath = starter + p2[len(starter) :].split("/", 1)[0]
75
+ if ppath not in paths:
76
+ out = out or []
77
+ out.append(
78
+ {
79
+ "name": ppath,
80
+ "size": 0,
81
+ "type": "directory",
82
+ }
83
+ )
84
+ paths.add(ppath)
85
+ for p2 in self.pseudo_dirs:
86
+ if p2.startswith(starter):
87
+ if "/" not in p2[len(starter) :]:
88
+ # exact child pdir
89
+ if p2 not in paths:
90
+ out.append({"name": p2, "size": 0, "type": "directory"})
91
+ paths.add(p2)
92
+ else:
93
+ # directory implied by deeper pdir
94
+ ppath = starter + p2[len(starter) :].split("/", 1)[0]
95
+ if ppath not in paths:
96
+ out.append({"name": ppath, "size": 0, "type": "directory"})
97
+ paths.add(ppath)
98
+ if not out:
99
+ if path in self.pseudo_dirs:
100
+ # empty dir
101
+ return []
102
+ raise FileNotFoundError(path)
103
+ if detail:
104
+ return out
105
+ return sorted([f["name"] for f in out])
106
+
107
+ def mkdir(self, path, create_parents=True, **kwargs):
108
+ path = self._strip_protocol(path)
109
+ if path in self.store or path in self.pseudo_dirs:
110
+ raise FileExistsError(path)
111
+ if self._parent(path).strip("/") and self.isfile(self._parent(path)):
112
+ raise NotADirectoryError(self._parent(path))
113
+ if create_parents and self._parent(path).strip("/"):
114
+ try:
115
+ self.mkdir(self._parent(path), create_parents, **kwargs)
116
+ except FileExistsError:
117
+ pass
118
+ if path and path not in self.pseudo_dirs:
119
+ self.pseudo_dirs.append(path)
120
+
121
+ def makedirs(self, path, exist_ok=False):
122
+ try:
123
+ self.mkdir(path, create_parents=True)
124
+ except FileExistsError:
125
+ if not exist_ok:
126
+ raise
127
+
128
+ def pipe_file(self, path, value, mode="overwrite", **kwargs):
129
+ """Set the bytes of given file
130
+
131
+ Avoids copies of the data if possible
132
+ """
133
+ mode = "xb" if mode == "create" else "wb"
134
+ self.open(path, mode=mode, data=value)
135
+
136
+ def rmdir(self, path):
137
+ path = self._strip_protocol(path)
138
+ if path == "":
139
+ # silently avoid deleting FS root
140
+ return
141
+ if path in self.pseudo_dirs:
142
+ if not self.ls(path):
143
+ self.pseudo_dirs.remove(path)
144
+ else:
145
+ raise OSError(ENOTEMPTY, "Directory not empty", path)
146
+ else:
147
+ raise FileNotFoundError(path)
148
+
149
+ def info(self, path, **kwargs):
150
+ logger.debug("info: %s", path)
151
+ path = self._strip_protocol(path)
152
+ if path in self.pseudo_dirs or any(
153
+ p.startswith(path + "/") for p in list(self.store) + self.pseudo_dirs
154
+ ):
155
+ return {
156
+ "name": path,
157
+ "size": 0,
158
+ "type": "directory",
159
+ }
160
+ elif path in self.store:
161
+ filelike = self.store[path]
162
+ return {
163
+ "name": path,
164
+ "size": filelike.size,
165
+ "type": "file",
166
+ "created": getattr(filelike, "created", None),
167
+ }
168
+ else:
169
+ raise FileNotFoundError(path)
170
+
171
+ def _open(
172
+ self,
173
+ path,
174
+ mode="rb",
175
+ block_size=None,
176
+ autocommit=True,
177
+ cache_options=None,
178
+ **kwargs,
179
+ ):
180
+ path = self._strip_protocol(path)
181
+ if "x" in mode and self.exists(path):
182
+ raise FileExistsError
183
+ if path in self.pseudo_dirs:
184
+ raise IsADirectoryError(path)
185
+ parent = path
186
+ while len(parent) > 1:
187
+ parent = self._parent(parent)
188
+ if self.isfile(parent):
189
+ raise FileExistsError(parent)
190
+ if mode in ["rb", "ab", "r+b"]:
191
+ if path in self.store:
192
+ f = self.store[path]
193
+ if mode == "ab":
194
+ # position at the end of file
195
+ f.seek(0, 2)
196
+ else:
197
+ # position at the beginning of file
198
+ f.seek(0)
199
+ return f
200
+ else:
201
+ raise FileNotFoundError(path)
202
+ elif mode in {"wb", "xb"}:
203
+ if mode == "xb" and self.exists(path):
204
+ raise FileExistsError
205
+ m = MemoryFile(self, path, kwargs.get("data"))
206
+ if not self._intrans:
207
+ m.commit()
208
+ return m
209
+ else:
210
+ name = self.__class__.__name__
211
+ raise ValueError(f"unsupported file mode for {name}: {mode!r}")
212
+
213
+ def cp_file(self, path1, path2, **kwargs):
214
+ path1 = self._strip_protocol(path1)
215
+ path2 = self._strip_protocol(path2)
216
+ if self.isfile(path1):
217
+ self.store[path2] = MemoryFile(
218
+ self, path2, self.store[path1].getvalue()
219
+ ) # implicit copy
220
+ elif self.isdir(path1):
221
+ if path2 not in self.pseudo_dirs:
222
+ self.pseudo_dirs.append(path2)
223
+ else:
224
+ raise FileNotFoundError(path1)
225
+
226
+ def cat_file(self, path, start=None, end=None, **kwargs):
227
+ logger.debug("cat: %s", path)
228
+ path = self._strip_protocol(path)
229
+ try:
230
+ return bytes(self.store[path].getbuffer()[start:end])
231
+ except KeyError as e:
232
+ raise FileNotFoundError(path) from e
233
+
234
+ def _rm(self, path):
235
+ path = self._strip_protocol(path)
236
+ try:
237
+ del self.store[path]
238
+ except KeyError as e:
239
+ raise FileNotFoundError(path) from e
240
+
241
+ def modified(self, path):
242
+ path = self._strip_protocol(path)
243
+ try:
244
+ return self.store[path].modified
245
+ except KeyError as e:
246
+ raise FileNotFoundError(path) from e
247
+
248
+ def created(self, path):
249
+ path = self._strip_protocol(path)
250
+ try:
251
+ return self.store[path].created
252
+ except KeyError as e:
253
+ raise FileNotFoundError(path) from e
254
+
255
+ def isfile(self, path):
256
+ path = self._strip_protocol(path)
257
+ return path in self.store
258
+
259
+ def rm(self, path, recursive=False, maxdepth=None):
260
+ if isinstance(path, str):
261
+ path = self._strip_protocol(path)
262
+ else:
263
+ path = [self._strip_protocol(p) for p in path]
264
+ paths = self.expand_path(path, recursive=recursive, maxdepth=maxdepth)
265
+ for p in reversed(paths):
266
+ if self.isfile(p):
267
+ self.rm_file(p)
268
+ # If the expanded path doesn't exist, it is only because the expanded
269
+ # path was a directory that does not exist in self.pseudo_dirs. This
270
+ # is possible if you directly create files without making the
271
+ # directories first.
272
+ elif not self.exists(p):
273
+ continue
274
+ else:
275
+ self.rmdir(p)
276
+
277
+
278
+ class MemoryFile(BytesIO):
279
+ """A BytesIO which can't close and works as a context manager
280
+
281
+ Can initialise with data. Each path should only be active once at any moment.
282
+
283
+ No need to provide fs, path if auto-committing (default)
284
+ """
285
+
286
+ def __init__(self, fs=None, path=None, data=None):
287
+ logger.debug("open file %s", path)
288
+ self.fs = fs
289
+ self.path = path
290
+ self.created = datetime.now(tz=timezone.utc)
291
+ self.modified = datetime.now(tz=timezone.utc)
292
+ if data:
293
+ super().__init__(data)
294
+ self.seek(0)
295
+
296
+ @property
297
+ def size(self):
298
+ return self.getbuffer().nbytes
299
+
300
+ def __enter__(self):
301
+ return self
302
+
303
+ def close(self):
304
+ pass
305
+
306
+ def discard(self):
307
+ pass
308
+
309
+ def commit(self):
310
+ self.fs.store[self.path] = self
311
+ self.modified = datetime.now(tz=timezone.utc)
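
A short usage sketch for the in-memory filesystem above (illustrative only; the paths are arbitrary examples, and `fsspec.filesystem("memory")` is the standard entry point for this class):

```python
import fsspec

# "memory" resolves to MemoryFileSystem; its store is global to the process
fs = fsspec.filesystem("memory")
fs.makedirs("/data/raw", exist_ok=True)      # registers pseudo-directories
fs.pipe_file("/data/raw/a.bin", b"hello")    # commits a MemoryFile into the store

print(fs.ls("/data", detail=False))          # ['/data/raw']
print(fs.cat_file("/data/raw/a.bin"))        # b'hello'
print(fs.info("/data/raw/a.bin")["type"])    # 'file'
```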
.venv/lib/python3.13/site-packages/fsspec/implementations/reference.py ADDED
@@ -0,0 +1,1305 @@
+import base64
+import collections
+import io
+import itertools
+import logging
+import math
+import os
+from functools import lru_cache
+from itertools import chain
+from typing import TYPE_CHECKING, Literal
+
+import fsspec.core
+from fsspec.spec import AbstractBufferedFile
+
+try:
+    import ujson as json
+except ImportError:
+    if not TYPE_CHECKING:
+        import json
+
+from fsspec.asyn import AsyncFileSystem
+from fsspec.callbacks import DEFAULT_CALLBACK
+from fsspec.core import filesystem, open, split_protocol
+from fsspec.implementations.asyn_wrapper import AsyncFileSystemWrapper
+from fsspec.utils import isfilelike, merge_offset_ranges, other_paths
+
+logger = logging.getLogger("fsspec.reference")
+
+
+class ReferenceNotReachable(RuntimeError):
+    def __init__(self, reference, target, *args):
+        super().__init__(*args)
+        self.reference = reference
+        self.target = target
+
+    def __str__(self):
+        return f'Reference "{self.reference}" failed to fetch target {self.target}'
+
+
+def _first(d):
+    return next(iter(d.values()))
+
+
+def _prot_in_references(path, references):
+    ref = references.get(path)
+    if isinstance(ref, (list, tuple)) and isinstance(ref[0], str):
+        return split_protocol(ref[0])[0] if ref[0] else ref[0]
+
+
+def _protocol_groups(paths, references):
+    if isinstance(paths, str):
+        return {_prot_in_references(paths, references): [paths]}
+    out = {}
+    for path in paths:
+        protocol = _prot_in_references(path, references)
+        out.setdefault(protocol, []).append(path)
+    return out
+
+
+class RefsValuesView(collections.abc.ValuesView):
+    def __iter__(self):
+        for val in self._mapping.zmetadata.values():
+            yield json.dumps(val).encode()
+        yield from self._mapping._items.values()
+        for field in self._mapping.listdir():
+            chunk_sizes = self._mapping._get_chunk_sizes(field)
+            if len(chunk_sizes) == 0:
+                yield self._mapping[field + "/0"]
+                continue
+            yield from self._mapping._generate_all_records(field)
+
+
+class RefsItemsView(collections.abc.ItemsView):
+    def __iter__(self):
+        return zip(self._mapping.keys(), self._mapping.values())
+
+
+def ravel_multi_index(idx, sizes):
+    val = 0
+    mult = 1
+    for i, s in zip(idx[::-1], sizes[::-1]):
+        val += i * mult
+        mult *= s
+    return val
+
+
+class LazyReferenceMapper(collections.abc.MutableMapping):
+    """This interface can be used to read/write references from Parquet stores.
+    It is not intended for other types of references.
+    It can be used with Kerchunk's MultiZarrToZarr method to combine
+    references into a parquet store.
+    Examples of this use-case can be found here:
+    https://fsspec.github.io/kerchunk/advanced.html?highlight=parquet#parquet-storage"""
+
+    # import is class level to prevent numpy dep requirement for fsspec
+    @property
+    def np(self):
+        import numpy as np
+
+        return np
+
+    @property
+    def pd(self):
+        import pandas as pd
+
+        return pd
+
+    def __init__(
+        self,
+        root,
+        fs=None,
+        out_root=None,
+        cache_size=128,
+        categorical_threshold=10,
+        engine: Literal["fastparquet", "pyarrow"] = "fastparquet",
+    ):
+        """
+
+        This instance will be writable, storing changes in memory until full partitions
+        are accumulated or .flush() is called.
+
+        To create an empty lazy store, use .create()
+
+        Parameters
+        ----------
+        root : str
+            Root of parquet store
+        fs : fsspec.AbstractFileSystem
+            fsspec filesystem object, default is local filesystem.
+        cache_size : int, default=128
+            Maximum size of LRU cache, where cache_size*record_size denotes
+            the total number of references that can be loaded in memory at once.
+        categorical_threshold : int
+            Encode urls as pandas.Categorical to reduce memory footprint if the ratio
+            of the number of unique urls to total number of refs for each variable
+            is greater than or equal to this number. (default 10)
+        engine: Literal["fastparquet","pyarrow"]
+            Engine choice for reading parquet files. (default is "fastparquet")
+        """
+
+        self.root = root
+        self.chunk_sizes = {}
+        self.cat_thresh = categorical_threshold
+        self.engine = engine
+        self.cache_size = cache_size
+        self.url = self.root + "/{field}/refs.{record}.parq"
+        # TODO: derive fs from `root`
+        self.fs = fsspec.filesystem("file") if fs is None else fs
+        self.out_root = self.fs.unstrip_protocol(out_root or self.root)
+
+        from importlib.util import find_spec
+
+        if self.engine == "pyarrow" and find_spec("pyarrow") is None:
+            raise ImportError("engine choice `pyarrow` is not installed.")
+
+    def __getattr__(self, item):
+        if item in ("_items", "record_size", "zmetadata"):
+            self.setup()
+            # avoid possible recursion if setup fails somehow
+            return self.__dict__[item]
+        raise AttributeError(item)
+
+    def setup(self):
+        self._items = {}
+        self._items[".zmetadata"] = self.fs.cat_file(
+            "/".join([self.root, ".zmetadata"])
+        )
+        met = json.loads(self._items[".zmetadata"])
+        self.record_size = met["record_size"]
+        self.zmetadata = met["metadata"]
+
+        # Define function to open and decompress refs
+        @lru_cache(maxsize=self.cache_size)
+        def open_refs(field, record):
+            """cached parquet file loader"""
+            path = self.url.format(field=field, record=record)
+            data = io.BytesIO(self.fs.cat_file(path))
+            try:
+                df = self.pd.read_parquet(data, engine=self.engine)
+                refs = {c: df[c].to_numpy() for c in df.columns}
+            except OSError:
+                refs = None
+            return refs
+
+        self.open_refs = open_refs
+
+    @staticmethod
+    def create(root, storage_options=None, fs=None, record_size=10000, **kwargs):
+        """Make empty parquet reference set
+
+        First deletes the contents of the given directory, if it exists.
+
+        Parameters
+        ----------
+        root: str
+            Directory to contain the output; will be created
+        storage_options: dict | None
+            For making the filesystem to use for writing, if fs is None
+        fs: FileSystem | None
+            Filesystem for writing
+        record_size: int
+            Number of references per parquet file
+        kwargs: passed to __init__
+
+        Returns
+        -------
+        LazyReferenceMapper instance
+        """
+        met = {"metadata": {}, "record_size": record_size}
+        if fs is None:
+            fs, root = fsspec.core.url_to_fs(root, **(storage_options or {}))
+        if fs.exists(root):
+            fs.rm(root, recursive=True)
+        fs.makedirs(root, exist_ok=True)
+        fs.pipe("/".join([root, ".zmetadata"]), json.dumps(met).encode())
+        return LazyReferenceMapper(root, fs, **kwargs)
+
+    @lru_cache()
+    def listdir(self):
+        """List top-level directories"""
+        dirs = (p.rsplit("/", 1)[0] for p in self.zmetadata if not p.startswith(".z"))
+        return set(dirs)
+
+    def ls(self, path="", detail=True):
+        """Shortcut file listings"""
+        path = path.rstrip("/")
+        pathdash = path + "/" if path else ""
+        dirnames = self.listdir()
+        dirs = [
+            d
+            for d in dirnames
+            if d.startswith(pathdash) and "/" not in d.lstrip(pathdash)
+        ]
+        if dirs:
+            others = {
+                f
+                for f in chain(
+                    [".zmetadata"],
+                    (name for name in self.zmetadata),
+                    (name for name in self._items),
+                )
+                if f.startswith(pathdash) and "/" not in f.lstrip(pathdash)
+            }
+            if detail is False:
+                others.update(dirs)
+                return sorted(others)
+            dirinfo = [{"name": name, "type": "directory", "size": 0} for name in dirs]
+            fileinfo = [
+                {
+                    "name": name,
+                    "type": "file",
+                    "size": len(
+                        json.dumps(self.zmetadata[name])
+                        if name in self.zmetadata
+                        else self._items[name]
+                    ),
+                }
+                for name in others
+            ]
+            return sorted(dirinfo + fileinfo, key=lambda s: s["name"])
+        field = path
+        others = set(
+            [name for name in self.zmetadata if name.startswith(f"{path}/")]
+            + [name for name in self._items if name.startswith(f"{path}/")]
+        )
+        fileinfo = [
+            {
+                "name": name,
+                "type": "file",
+                "size": len(
+                    json.dumps(self.zmetadata[name])
+                    if name in self.zmetadata
+                    else self._items[name]
+                ),
+            }
+            for name in others
+        ]
+        keys = self._keys_in_field(field)
+
+        if detail is False:
+            return list(others) + list(keys)
+        recs = self._generate_all_records(field)
+        recinfo = [
+            {"name": name, "type": "file", "size": rec[-1]}
+            for name, rec in zip(keys, recs)
+            if rec[0]  # filters out path==None, deleted/missing
+        ]
+        return fileinfo + recinfo
+
+    def _load_one_key(self, key):
+        """Get the reference for one key
+
+        Returns bytes, one-element list or three-element list.
+        """
+        if key in self._items:
+            return self._items[key]
+        elif key in self.zmetadata:
+            return json.dumps(self.zmetadata[key]).encode()
+        elif "/" not in key or self._is_meta(key):
+            raise KeyError(key)
+        field, _ = key.rsplit("/", 1)
+        record, ri, chunk_size = self._key_to_record(key)
+        maybe = self._items.get((field, record), {}).get(ri, False)
+        if maybe is None:
+            # explicitly deleted
+            raise KeyError
+        elif maybe:
+            return maybe
+        elif chunk_size == 0:
+            return b""
+
+        # Chunk keys can be loaded from row group and cached in LRU cache
+        try:
+            refs = self.open_refs(field, record)
+        except (ValueError, TypeError, FileNotFoundError) as exc:
+            raise KeyError(key) from exc
+        columns = ["path", "offset", "size", "raw"]
+        selection = [refs[c][ri] if c in refs else None for c in columns]
+        raw = selection[-1]
+        if raw is not None:
+            return raw
+        if selection[0] is None:
+            raise KeyError("This reference does not exist or has been deleted")
+        if selection[1:3] == [0, 0]:
+            # URL only
+            return selection[:1]
+        # URL, offset, size
+        return selection[:3]
+
+    @lru_cache(4096)
+    def _key_to_record(self, key):
+        """Details needed to construct a reference for one key"""
+        field, chunk = key.rsplit("/", 1)
+        chunk_sizes = self._get_chunk_sizes(field)
+        if len(chunk_sizes) == 0:
+            return 0, 0, 0
+        chunk_idx = [int(c) for c in chunk.split(".")]
+        chunk_number = ravel_multi_index(chunk_idx, chunk_sizes)
+        record = chunk_number // self.record_size
+        ri = chunk_number % self.record_size
+        return record, ri, len(chunk_sizes)
+
+    def _get_chunk_sizes(self, field):
+        """The number of chunks along each axis for a given field"""
+        if field not in self.chunk_sizes:
+            zarray = self.zmetadata[f"{field}/.zarray"]
+            size_ratio = [
+                math.ceil(s / c) for s, c in zip(zarray["shape"], zarray["chunks"])
+            ]
+            self.chunk_sizes[field] = size_ratio or [1]
+        return self.chunk_sizes[field]
+
+    def _generate_record(self, field, record):
+        """The references for a given parquet file of a given field"""
+        refs = self.open_refs(field, record)
+        it = iter(zip(*refs.values()))
+        if len(refs) == 3:
+            # All urls
+            return (list(t) for t in it)
+        elif len(refs) == 1:
+            # All raws
+            return refs["raw"]
+        else:
+            # Mix of urls and raws
+            return (list(t[:3]) if not t[3] else t[3] for t in it)
+
+    def _generate_all_records(self, field):
+        """Load all the references within a field by iterating over the parquet files"""
+        nrec = 1
+        for ch in self._get_chunk_sizes(field):
+            nrec *= ch
+        nrec = math.ceil(nrec / self.record_size)
+        for record in range(nrec):
+            yield from self._generate_record(field, record)
+
+    def values(self):
+        return RefsValuesView(self)
+
+    def items(self):
+        return RefsItemsView(self)
+
+    def __hash__(self):
+        return id(self)
+
+    def __getitem__(self, key):
+        return self._load_one_key(key)
+
+    def __setitem__(self, key, value):
+        if "/" in key and not self._is_meta(key):
+            field, chunk = key.rsplit("/", 1)
+            record, i, _ = self._key_to_record(key)
+            subdict = self._items.setdefault((field, record), {})
+            subdict[i] = value
+            if len(subdict) == self.record_size:
+                self.write(field, record)
+        else:
+            # metadata or top-level
+            if hasattr(value, "to_bytes"):
+                val = value.to_bytes().decode()
+            elif isinstance(value, bytes):
+                val = value.decode()
+            else:
+                val = value
+            self._items[key] = val
+            new_value = json.loads(val)
+            self.zmetadata[key] = {**self.zmetadata.get(key, {}), **new_value}
+
+    @staticmethod
+    def _is_meta(key):
+        return key.startswith(".z") or "/.z" in key
+
+    def __delitem__(self, key):
+        if key in self._items:
+            del self._items[key]
+        elif key in self.zmetadata:
+            del self.zmetadata[key]
+        else:
+            if "/" in key and not self._is_meta(key):
+                field, _ = key.rsplit("/", 1)
+                record, i, _ = self._key_to_record(key)
+                subdict = self._items.setdefault((field, record), {})
+                subdict[i] = None
+                if len(subdict) == self.record_size:
+                    self.write(field, record)
+            else:
+                # metadata or top-level
+                self._items[key] = None
+
+    def write(self, field, record, base_url=None, storage_options=None):
+        # extra requirements if writing
+        import kerchunk.df
+        import numpy as np
+        import pandas as pd
+
+        partition = self._items[(field, record)]
+        original = False
+        if len(partition) < self.record_size:
+            try:
+                original = self.open_refs(field, record)
+            except OSError:
+                pass
+
+        if original:
+            paths = original["path"]
+            offsets = original["offset"]
+            sizes = original["size"]
+            raws = original["raw"]
+        else:
+            paths = np.full(self.record_size, np.nan, dtype="O")
+            offsets = np.zeros(self.record_size, dtype="int64")
+            sizes = np.zeros(self.record_size, dtype="int64")
+            raws = np.full(self.record_size, np.nan, dtype="O")
+        for j, data in partition.items():
+            if isinstance(data, list):
+                if (
+                    str(paths.dtype) == "category"
+                    and data[0] not in paths.dtype.categories
+                ):
+                    paths = paths.add_categories(data[0])
+                paths[j] = data[0]
+                if len(data) > 1:
+                    offsets[j] = data[1]
+                    sizes[j] = data[2]
+            elif data is None:
+                # delete
+                paths[j] = None
+                offsets[j] = 0
+                sizes[j] = 0
+                raws[j] = None
+            else:
+                # this is the only call into kerchunk, could remove
+                raws[j] = kerchunk.df._proc_raw(data)
+        # TODO: only save needed columns
+        df = pd.DataFrame(
+            {
+                "path": paths,
+                "offset": offsets,
+                "size": sizes,
+                "raw": raws,
+            },
+            copy=False,
+        )
+        if df.path.count() / (df.path.nunique() or 1) > self.cat_thresh:
+            df["path"] = df["path"].astype("category")
+        object_encoding = {"raw": "bytes", "path": "utf8"}
+        has_nulls = ["path", "raw"]
+
+        fn = f"{base_url or self.out_root}/{field}/refs.{record}.parq"
+        self.fs.mkdirs(f"{base_url or self.out_root}/{field}", exist_ok=True)
+
+        if self.engine == "pyarrow":
+            df_backend_kwargs = {"write_statistics": False}
+        elif self.engine == "fastparquet":
+            df_backend_kwargs = {
+                "stats": False,
+                "object_encoding": object_encoding,
+                "has_nulls": has_nulls,
+            }
+        else:
+            raise NotImplementedError(f"{self.engine} not supported")
+        df.to_parquet(
+            fn,
+            engine=self.engine,
+            storage_options=storage_options
+            or getattr(self.fs, "storage_options", None),
+            compression="zstd",
+            index=False,
+            **df_backend_kwargs,
+        )
+
+        partition.clear()
+        self._items.pop((field, record))
+
+    def flush(self, base_url=None, storage_options=None):
+        """Output any modified or deleted keys
+
+        Parameters
+        ----------
+        base_url: str
+            Location of the output
+        """
+
+        # write what we have so far and clear sub chunks
+        for thing in list(self._items):
+            if isinstance(thing, tuple):
+                field, record = thing
+                self.write(
+                    field,
+                    record,
+                    base_url=base_url,
+                    storage_options=storage_options,
+                )
+
+        # gather .zmetadata from self._items and write that too
+        for k in list(self._items):
+            if k != ".zmetadata" and ".z" in k:
+                self.zmetadata[k] = json.loads(self._items.pop(k))
+        met = {"metadata": self.zmetadata, "record_size": self.record_size}
+        self._items.clear()
+        self._items[".zmetadata"] = json.dumps(met).encode()
+        self.fs.pipe(
+            "/".join([base_url or self.out_root, ".zmetadata"]),
+            self._items[".zmetadata"],
+        )
+
+        # TODO: only clear those that we wrote to?
+        self.open_refs.cache_clear()
+
+    def __len__(self):
+        # Caveat: This counts expected references, not actual - but is fast
+        count = 0
+        for field in self.listdir():
+            if field.startswith("."):
+                count += 1
+            else:
+                count += math.prod(self._get_chunk_sizes(field))
+        count += len(self.zmetadata)  # all metadata keys
+        # any other files not in reference partitions
+        count += sum(1 for _ in self._items if not isinstance(_, tuple))
+        return count
+
+    def __iter__(self):
+        # Caveat: returns only existing keys, so the number of these does not
+        # match len(self)
+        metas = set(self.zmetadata)
+        metas.update(self._items)
+        for bit in metas:
+            if isinstance(bit, str):
+                yield bit
+        for field in self.listdir():
+            for k in self._keys_in_field(field):
+                if k in self:
+                    yield k
+
+    def __contains__(self, item):
+        try:
+            self._load_one_key(item)
+            return True
+        except KeyError:
+            return False
+
+    def _keys_in_field(self, field):
+        """List key names in given field
+
+        Produces strings like "field/x.y" appropriate from the chunking of the array
+        """
+        chunk_sizes = self._get_chunk_sizes(field)
+        if len(chunk_sizes) == 0:
+            yield field + "/0"
+            return
+        inds = itertools.product(*(range(i) for i in chunk_sizes))
+        for ind in inds:
+            yield field + "/" + ".".join([str(c) for c in ind])
+
+
+class ReferenceFileSystem(AsyncFileSystem):
+    """View byte ranges of some other file as a file system
+    Initial version: single file system target, which must support
+    async, and must allow start and end args in _cat_file. Later versions
+    may allow multiple arbitrary URLs for the targets.
+    This FileSystem is read-only. It is designed to be used with async
+    targets (for now). We do not get original file details from the target FS.
+    Configuration is by passing a dict of references at init, or a URL to
+    a JSON file containing the same; this dict
+    can also contain concrete data for some set of paths.
+    Reference dict format:
+    {path0: bytes_data, path1: (target_url, offset, size)}
+    https://github.com/fsspec/kerchunk/blob/main/README.md
+    """
+
+    protocol = "reference"
+    cachable = False
+
+    def __init__(
+        self,
+        fo,
+        target=None,
+        ref_storage_args=None,
+        target_protocol=None,
+        target_options=None,
+        remote_protocol=None,
+        remote_options=None,
+        fs=None,
+        template_overrides=None,
+        simple_templates=True,
+        max_gap=64_000,
+        max_block=256_000_000,
+        cache_size=128,
+        **kwargs,
+    ):
+        """
+        Parameters
+        ----------
+        fo : dict or str
+            The set of references to use for this instance, with a structure as above.
+            If str referencing a JSON file, will use fsspec.open, in conjunction
+            with target_options and target_protocol to open and parse JSON at this
+            location. If a directory, then assume references are a set of parquet
+            files to be loaded lazily.
+        target : str
+            For any references having target_url as None, this is the default file
+            target to use
+        ref_storage_args : dict
+            If references is a str, use these kwargs for loading the JSON file.
+            Deprecated: use target_options instead.
+        target_protocol : str
+            Used for loading the reference file, if it is a path. If None, protocol
+            will be derived from the given path
+        target_options : dict
+            Extra FS options for loading the reference file ``fo``, if given as a path
+        remote_protocol : str
+            The protocol of the filesystem on which the references will be evaluated
+            (unless fs is provided). If not given, will be derived from the first
+            URL that has a protocol in the templates or in the references, in that
+            order.
+        remote_options : dict
+            kwargs to go with remote_protocol
+        fs : AbstractFileSystem | dict(str, (AbstractFileSystem | dict))
+            Directly provide a file system(s):
+                - a single filesystem instance
+                - a dict of protocol:filesystem, where each value is either a filesystem
+                  instance, or a dict of kwargs that can be used to create an
+                  instance for the given protocol
+
+            If this is given, remote_options and remote_protocol are ignored.
+        template_overrides : dict
+            Swap out any templates in the references file with these - useful for
+            testing.
+        simple_templates: bool
+            Whether templates can be processed with simple replace (True) or if
+            jinja is needed (False, much slower). All reference sets produced by
+            ``kerchunk`` are simple in this sense, but the spec allows for complex.
+        max_gap, max_block: int
+            For merging multiple concurrent requests to the same remote file.
+            Neighboring byte ranges will only be merged when their
+            inter-range gap is <= ``max_gap``. Default is 64KB. Set to 0
+            to only merge when it requires no extra bytes. Pass a negative
+            number to disable merging, appropriate for local target files.
+            Neighboring byte ranges will only be merged when the size of
+            the aggregated range is <= ``max_block``. Default is 256MB.
+        cache_size : int
+            Maximum size of LRU cache, where cache_size*record_size denotes
+            the total number of references that can be loaded in memory at once.
+            Only used for lazily loaded references.
+        kwargs : passed to parent class
+        """
+        super().__init__(**kwargs)
+        self.target = target
+        self.template_overrides = template_overrides
+        self.simple_templates = simple_templates
+        self.templates = {}
+        self.fss = {}
+        self._dircache = {}
+        self.max_gap = max_gap
+        self.max_block = max_block
+        if isinstance(fo, str):
+            dic = dict(
+                **(ref_storage_args or target_options or {}), protocol=target_protocol
+            )
+            ref_fs, fo2 = fsspec.core.url_to_fs(fo, **dic)
+            if ref_fs.isfile(fo2):
+                # text JSON
+                with fsspec.open(fo, "rb", **dic) as f:
+                    logger.info("Read reference from URL %s", fo)
+                    text = json.load(f)
+                self._process_references(text, template_overrides)
+            else:
+                # Lazy parquet refs
+                logger.info("Open lazy reference dict from URL %s", fo)
+                self.references = LazyReferenceMapper(
+                    fo2,
+                    fs=ref_fs,
+                    cache_size=cache_size,
+                )
+        else:
+            # dictionaries
+            self._process_references(fo, template_overrides)
+        if isinstance(fs, dict):
+            self.fss = {
+                k: (
+                    fsspec.filesystem(k.split(":", 1)[0], **opts)
+                    if isinstance(opts, dict)
+                    else opts
+                )
+                for k, opts in fs.items()
+            }
+            if None not in self.fss:
+                self.fss[None] = filesystem("file")
+            return
+        if fs is not None:
+            # single remote FS
+            remote_protocol = (
+                fs.protocol[0] if isinstance(fs.protocol, tuple) else fs.protocol
+            )
+            self.fss[remote_protocol] = fs
+
+        if remote_protocol is None:
+            # get single protocol from any templates
+            for ref in self.templates.values():
+                if callable(ref):
+                    ref = ref()
+                protocol, _ = fsspec.core.split_protocol(ref)
+                if protocol and protocol not in self.fss:
+                    fs = filesystem(protocol, **(remote_options or {}))
+                    self.fss[protocol] = fs
+        if remote_protocol is None:
+            # get single protocol from references
+            # TODO: warning here, since this can be very expensive?
+            for ref in self.references.values():
+                if callable(ref):
+                    ref = ref()
+                if isinstance(ref, list) and ref[0]:
+                    protocol, _ = fsspec.core.split_protocol(ref[0])
+                    if protocol not in self.fss:
+                        fs = filesystem(protocol, **(remote_options or {}))
+                        self.fss[protocol] = fs
+                    # only use first remote URL
+                    break
+
+        if remote_protocol and remote_protocol not in self.fss:
+            fs = filesystem(remote_protocol, **(remote_options or {}))
+            self.fss[remote_protocol] = fs
+
+        self.fss[None] = fs or filesystem("file")  # default one
+        # Wrap any non-async filesystems to ensure async methods are available below
+        for k, f in self.fss.items():
+            if not f.async_impl:
+                self.fss[k] = AsyncFileSystemWrapper(f, asynchronous=self.asynchronous)
+            elif self.asynchronous ^ f.asynchronous:
+                raise ValueError(
+                    "Reference-FS's target filesystem must have same value "
+                    "of asynchronous"
+                )
+
+    def _cat_common(self, path, start=None, end=None):
+        path = self._strip_protocol(path)
+        logger.debug(f"cat: {path}")
+        try:
+            part = self.references[path]
+        except KeyError as exc:
+            raise FileNotFoundError(path) from exc
+        if isinstance(part, str):
+            part = part.encode()
+        if hasattr(part, "to_bytes"):
+            part = part.to_bytes()
+        if isinstance(part, bytes):
+            logger.debug(f"Reference: {path}, type bytes")
+            if part.startswith(b"base64:"):
+                part = base64.b64decode(part[7:])
+            return part, None, None
+
+        if len(part) == 1:
+            logger.debug(f"Reference: {path}, whole file => {part}")
+            url = part[0]
+            start1, end1 = start, end
+        else:
+            url, start0, size = part
+            logger.debug(f"Reference: {path} => {url}, offset {start0}, size {size}")
+            end0 = start0 + size
+
+            if start is not None:
+                if start >= 0:
+                    start1 = start0 + start
+                else:
+                    start1 = end0 + start
+            else:
+                start1 = start0
+            if end is not None:
+                if end >= 0:
+                    end1 = start0 + end
+                else:
+                    end1 = end0 + end
+            else:
+                end1 = end0
+        if url is None:
+            url = self.target
+        return url, start1, end1
+
+    async def _cat_file(self, path, start=None, end=None, **kwargs):
+        part_or_url, start0, end0 = self._cat_common(path, start=start, end=end)
+        if isinstance(part_or_url, bytes):
+            return part_or_url[start:end]
+        protocol, _ = split_protocol(part_or_url)
+        try:
+            return await self.fss[protocol]._cat_file(
+                part_or_url, start=start0, end=end0
+            )
+        except Exception as e:
+            raise ReferenceNotReachable(path, part_or_url) from e
+
+    def cat_file(self, path, start=None, end=None, **kwargs):
+        part_or_url, start0, end0 = self._cat_common(path, start=start, end=end)
+        if isinstance(part_or_url, bytes):
+            return part_or_url[start:end]
+        protocol, _ = split_protocol(part_or_url)
+        try:
+            return self.fss[protocol].cat_file(part_or_url, start=start0, end=end0)
+        except Exception as e:
+            raise ReferenceNotReachable(path, part_or_url) from e
+
+    def pipe_file(self, path, value, **_):
+        """Temporarily add binary data or reference as a file"""
+        self.references[path] = value
+
+    async def _get_file(self, rpath, lpath, **kwargs):
+        if self.isdir(rpath):
+            return os.makedirs(lpath, exist_ok=True)
+        data = await self._cat_file(rpath)
+        with open(lpath, "wb") as f:
+            f.write(data)
+
+    def get_file(self, rpath, lpath, callback=DEFAULT_CALLBACK, **kwargs):
+        if self.isdir(rpath):
+            return os.makedirs(lpath, exist_ok=True)
+        data = self.cat_file(rpath, **kwargs)
+        callback.set_size(len(data))
+        if isfilelike(lpath):
+            lpath.write(data)
+        else:
+            with open(lpath, "wb") as f:
+                f.write(data)
+        callback.absolute_update(len(data))
+
+    def get(self, rpath, lpath, recursive=False, **kwargs):
+        if recursive:
+            # trigger directory build
+            self.ls("")
+        rpath = self.expand_path(rpath, recursive=recursive)
+        fs = fsspec.filesystem("file", auto_mkdir=True)
+        targets = other_paths(rpath, lpath)
+        if recursive:
+            data = self.cat([r for r in rpath if not self.isdir(r)])
+        else:
+            data = self.cat(rpath)
+        for remote, local in zip(rpath, targets):
+            if remote in data:
+                fs.pipe_file(local, data[remote])
+
+    def cat(self, path, recursive=False, on_error="raise", **kwargs):
+        if isinstance(path, str) and recursive:
+            raise NotImplementedError
+        if isinstance(path, list) and (recursive or any("*" in p for p in path)):
+            raise NotImplementedError
+        # TODO: if references is lazy, pre-fetch all paths in batch before access
+        proto_dict = _protocol_groups(path, self.references)
+        out = {}
+        for proto, paths in proto_dict.items():
+            fs = self.fss[proto]
+            urls, starts, ends, valid_paths = [], [], [], []
+            for p in paths:
+                # find references or label not-found. Early exit if any not
+                # found and on_error is "raise"
+                try:
+                    u, s, e = self._cat_common(p)
+                    if not isinstance(u, (bytes, str)):
+                        # nan/None from parquet
+                        continue
+                except FileNotFoundError as err:
+                    if on_error == "raise":
+                        raise
+                    if on_error != "omit":
+                        out[p] = err
+                else:
+                    urls.append(u)
+                    starts.append(s)
+                    ends.append(e)
+                    valid_paths.append(p)
+
+            # process references into form for merging
+            urls2 = []
+            starts2 = []
+            ends2 = []
+            paths2 = []
+            whole_files = set()
+            for u, s, e, p in zip(urls, starts, ends, valid_paths):
+                if isinstance(u, bytes):
+                    # data
+                    out[p] = u
+                elif s is None:
+                    # whole file - limits are None, None, but no further
+                    # entries take for this file
+                    whole_files.add(u)
+                    urls2.append(u)
+                    starts2.append(s)
+                    ends2.append(e)
+                    paths2.append(p)
+            for u, s, e, p in zip(urls, starts, ends, valid_paths):
+                # second run to account for files that are to be loaded whole
+                if s is not None and u not in whole_files:
+                    urls2.append(u)
+                    starts2.append(s)
+                    ends2.append(e)
+                    paths2.append(p)
+
+            # merge and fetch consolidated ranges
+            new_paths, new_starts, new_ends = merge_offset_ranges(
+                list(urls2),
+                list(starts2),
+                list(ends2),
+                sort=True,
+                max_gap=self.max_gap,
+                max_block=self.max_block,
+            )
+            bytes_out = fs.cat_ranges(new_paths, new_starts, new_ends)
+
+            # unbundle from merged bytes - simple approach
+            for u, s, e, p in zip(urls, starts, ends, valid_paths):
+                if p in out:
+                    continue  # was bytes, already handled
+                for np, ns, ne, b in zip(new_paths, new_starts, new_ends, bytes_out):
+                    if np == u and (ns is None or ne is None):
+                        if isinstance(b, Exception):
+                            out[p] = b
+                        else:
+                            out[p] = b[s:e]
+                    elif np == u and s >= ns and e <= ne:
+                        if isinstance(b, Exception):
+                            out[p] = b
+                        else:
+                            out[p] = b[s - ns : (e - ne) or None]
+
+        for k, v in out.copy().items():
+            # these were valid references, but fetch failed, so transform exc
+            if isinstance(v, Exception) and k in self.references:
+                ex = out[k]
+                new_ex = ReferenceNotReachable(k, self.references[k])
+                new_ex.__cause__ = ex
+                if on_error == "raise":
+                    raise new_ex
+                elif on_error != "omit":
+                    out[k] = new_ex
+
+        if len(out) == 1 and isinstance(path, str) and "*" not in path:
+            return _first(out)
+        return out
+
+    def _process_references(self, references, template_overrides=None):
+        vers = references.get("version", None)
+        if vers is None:
+            self._process_references0(references)
+        elif vers == 1:
+            self._process_references1(references, template_overrides=template_overrides)
+        else:
+            raise ValueError(f"Unknown reference spec version: {vers}")
+        # TODO: we make dircache by iterating over all entries, but for Spec >= 1,
+        # can replace with programmatic. Is it even needed for mapper interface?
+
+    def _process_references0(self, references):
+        """Make reference dict for Spec Version 0"""
+        if isinstance(references, dict):
+            # do not do this for lazy/parquet backend, which will not make dicts,
+            # but must remain writable in the original object
+            references = {
+                key: json.dumps(val) if isinstance(val, dict) else val
+                for key, val in references.items()
+            }
+        self.references = references
+
+    def _process_references1(self, references, template_overrides=None):
+        if not self.simple_templates or self.templates:
+            import jinja2
+        self.references = {}
+        self._process_templates(references.get("templates", {}))
+
+        @lru_cache(1000)
+        def _render_jinja(u):
+            return jinja2.Template(u).render(**self.templates)
+
+        for k, v in references.get("refs", {}).items():
+            if isinstance(v, str):
+                if v.startswith("base64:"):
+                    self.references[k] = base64.b64decode(v[7:])
+                self.references[k] = v
+            elif isinstance(v, dict):
+                self.references[k] = json.dumps(v)
+            elif self.templates:
+                u = v[0]
+                if "{{" in u:
+                    if self.simple_templates:
+                        u = (
+                            u.replace("{{", "{")
+                            .replace("}}", "}")
+                            .format(**self.templates)
+                        )
+                    else:
+                        u = _render_jinja(u)
+                self.references[k] = [u] if len(v) == 1 else [u, v[1], v[2]]
+            else:
+                self.references[k] = v
+        self.references.update(self._process_gen(references.get("gen", [])))
+
+    def _process_templates(self, tmp):
+        self.templates = {}
+        if self.template_overrides is not None:
+            tmp.update(self.template_overrides)
+        for k, v in tmp.items():
+            if "{{" in v:
+                import jinja2
+
+                self.templates[k] = lambda temp=v, **kwargs: jinja2.Template(
+                    temp
+                ).render(**kwargs)
+            else:
+                self.templates[k] = v
+
+    def _process_gen(self, gens):
+        out = {}
+        for gen in gens:
+            dimension = {
+                k: (
+                    v
+                    if isinstance(v, list)
+                    else range(v.get("start", 0), v["stop"], v.get("step", 1))
+                )
+                for k, v in gen["dimensions"].items()
+            }
+            products = (
+                dict(zip(dimension.keys(), values))
+                for values in itertools.product(*dimension.values())
+            )
+            for pr in products:
+                import jinja2
+
+                key = jinja2.Template(gen["key"]).render(**pr, **self.templates)
+                url = jinja2.Template(gen["url"]).render(**pr, **self.templates)
+                if ("offset" in gen) and ("length" in gen):
+                    offset = int(
+                        jinja2.Template(gen["offset"]).render(**pr, **self.templates)
+                    )
+                    length = int(
+                        jinja2.Template(gen["length"]).render(**pr, **self.templates)
+                    )
+                    out[key] = [url, offset, length]
+                elif ("offset" in gen) ^ ("length" in gen):
+                    raise ValueError(
+                        "Both 'offset' and 'length' are required for a "
+                        "reference generator entry if either is provided."
+                    )
+                else:
+                    out[key] = [url]
+        return out
+
+    def _dircache_from_items(self):
+        self.dircache = {"": []}
+        it = self.references.items()
+        for path, part in it:
+            if isinstance(part, (bytes, str)) or hasattr(part, "to_bytes"):
+                size = len(part)
+            elif len(part) == 1:
+                size = None
+            else:
+                _, _, size = part
+            par = path.rsplit("/", 1)[0] if "/" in path else ""
+            par0 = par
+            subdirs = [par0]
+            while par0 and par0 not in self.dircache:
+                # collect parent directories
+                par0 = self._parent(par0)
+                subdirs.append(par0)
+
+            subdirs.reverse()
+            for parent, child in zip(subdirs, subdirs[1:]):
+                # register newly discovered directories
+                assert child not in self.dircache
+                assert parent in self.dircache
+                self.dircache[parent].append(
+                    {"name": child, "type": "directory", "size": 0}
+                )
+                self.dircache[child] = []
+
+            self.dircache[par].append({"name": path, "type": "file", "size": size})
+
+    def _open(self, path, mode="rb", block_size=None, cache_options=None, **kwargs):
+        part_or_url, start0, end0 = self._cat_common(path)
+        # This logic is kept outside `ReferenceFile` to avoid unnecessary redirection.
+        # That does mean `_cat_common` gets called twice if it eventually reaches `ReferenceFile`.
+        if isinstance(part_or_url, bytes):
+            return io.BytesIO(part_or_url[start0:end0])
+
+        protocol, _ = split_protocol(part_or_url)
+        if start0 is None and end0 is None:
+            return self.fss[protocol]._open(
+                part_or_url,
+                mode,
+                block_size=block_size,
+                cache_options=cache_options,
+                **kwargs,
+            )
+
+        return ReferenceFile(
+            self,
+            path,
+            mode,
+            block_size=block_size,
+            cache_options=cache_options,
+            **kwargs,
+        )
+
+    def ls(self, path, detail=True, **kwargs):
+        logger.debug("list %s", path)
+        path = self._strip_protocol(path)
+        if isinstance(self.references, LazyReferenceMapper):
+            try:
+                return self.references.ls(path, detail)
+            except KeyError:
+                pass
+            raise FileNotFoundError(f"'{path}' is not a known key")
+        if not self.dircache:
+            self._dircache_from_items()
+        out = self._ls_from_cache(path)
+        if out is None:
+            raise FileNotFoundError(path)
+        if detail:
+            return out
+        return [o["name"] for o in out]
+
+    def exists(self, path, **kwargs):  # overwrite auto-sync version
+        return self.isdir(path) or self.isfile(path)
+
+    def isdir(self, path):  # overwrite auto-sync version
+        if self.dircache:
+            return path in self.dircache
+        elif isinstance(self.references, LazyReferenceMapper):
+            return path in self.references.listdir()
+        else:
+            # this may be faster than building dircache for single calls, but
+            # by looping will be slow for many calls; could cache it?
+            return any(_.startswith(f"{path}/") for _ in self.references)
+
+    def isfile(self, path):  # overwrite auto-sync version
+        return path in self.references
+
+    async def _ls(self, path, detail=True, **kwargs):  # calls fast sync code
+        return self.ls(path, detail, **kwargs)
+
+    def find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):
+        if withdirs:
+            return super().find(
+                path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, **kwargs
+            )
+        if path:
+            path = self._strip_protocol(path)
+            r = sorted(k for k in self.references if k.startswith(path))
+        else:
+            r = sorted(self.references)
+        if detail:
+            if not self.dircache:
+                self._dircache_from_items()
+            return {k: self._ls_from_cache(k)[0] for k in r}
+        else:
+            return r
+
+    def info(self, path, **kwargs):
+        out = self.references.get(path)
+        if out is not None:
+            if isinstance(out, (str, bytes)):
+                # decode base64 here
+                return {"name": path, "type": "file", "size": len(out)}
+            elif len(out) > 1:
+                return {"name": path, "type": "file", "size": out[2]}
+            else:
+                out0 = [{"name": path, "type": "file", "size": None}]
+        else:
+            out = self.ls(path, True)
+            out0 = [o for o in out if o["name"] == path]
+            if not out0:
+                return {"name": path, "type": "directory", "size": 0}
+        if out0[0]["size"] is None:
+            # if this is a whole remote file, update size using remote FS
+            prot, _ = split_protocol(self.references[path][0])
+            out0[0]["size"] = self.fss[prot].size(self.references[path][0])
+        return out0[0]
+
+    async def _info(self, path, **kwargs):  # calls fast sync code
+        return self.info(path)
+
+    async def _rm_file(self, path, **kwargs):
+        self.references.pop(
+            path, None
+        )  # ignores FileNotFound, just as well for directories
+        self.dircache.clear()  # this is a bit heavy handed
+
+    async def _pipe_file(self, path, data, mode="overwrite", **kwargs):
+        if mode == "create" and self.exists(path):
+            raise FileExistsError
+        # can be str or bytes
+        self.references[path] = data
+        self.dircache.clear()  # this is a bit heavy handed
+
+    async def _put_file(self, lpath, rpath, mode="overwrite", **kwargs):
+        # puts binary
+        if mode == "create" and self.exists(rpath):
+            raise FileExistsError
+        with open(lpath, "rb") as f:
+            self.references[rpath] = f.read()
+        self.dircache.clear()  # this is a bit heavy handed
+
+    def save_json(self, url, **storage_options):
+        """Write modified references into new location"""
+        out = {}
+        for k, v in self.references.items():
+            if isinstance(v, bytes):
+                try:
+                    out[k] = v.decode("ascii")
+                except UnicodeDecodeError:
+                    out[k] = (b"base64:" + base64.b64encode(v)).decode()
+            else:
+                out[k] = v
+        with fsspec.open(url, "wb", **storage_options) as f:
+            f.write(json.dumps({"version": 1, "refs": out}).encode())
+
+
+class ReferenceFile(AbstractBufferedFile):
+    def __init__(
+        self,
+        fs,
+        path,
+        mode="rb",
+        block_size="default",
+        autocommit=True,
+        cache_type="readahead",
+        cache_options=None,
+        size=None,
+        **kwargs,
+    ):
+        super().__init__(
+            fs,
+            path,
+            mode=mode,
+            block_size=block_size,
+            autocommit=autocommit,
+            size=size,
+            cache_type=cache_type,
+            cache_options=cache_options,
+            **kwargs,
+        )
+        part_or_url, self.start, self.end = self.fs._cat_common(self.path)
+        protocol, _ = split_protocol(part_or_url)
+        self.src_fs = self.fs.fss[protocol]
+        self.src_path = part_or_url
+        self._f = None
+
+    @property
+    def f(self):
+        if self._f is None or self._f.closed:
+            self._f = self.src_fs._open(
+                self.src_path,
+                mode=self.mode,
+                block_size=self.blocksize,
+                autocommit=self.autocommit,
+                cache_type="none",
+                **self.kwargs,
+            )
+        return self._f
+
+    def close(self):
+        if self._f is not None:
+            self._f.close()
+        return super().close()
+
+    def _fetch_range(self, start, end):
+        start = start + self.start
+        end = min(end + self.start, self.end)
+        self.f.seek(start)
+        return self.f.read(end - start)
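
A minimal sketch of how the reference dict drives reads, using the in-memory backend as the target; the key names and byte ranges here are arbitrary examples:

```python
import fsspec

# A target file whose byte ranges the references will point into
mem = fsspec.filesystem("memory")
mem.pipe_file("/target.bin", b"0123456789")

refs = {
    "inline/a": b"bytes stored directly in the reference dict",
    "ranged/b": ["memory://target.bin", 2, 5],  # (url, offset, size)
}
fs = fsspec.filesystem("reference", fo=refs)

print(fs.cat("ranged/b"))       # b'23456' - bytes 2..7 of the target file
print(fs.cat("inline/a"))       # the inline bytes, returned as-is
print(fs.ls("", detail=False))  # ['inline', 'ranged']
```

For parquet-backed reference sets, `fo` would instead point at a directory, which routes through `LazyReferenceMapper` as shown in `__init__` above.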
.venv/lib/python3.13/site-packages/fsspec/implementations/sftp.py ADDED
@@ -0,0 +1,180 @@
+ import datetime
2
+ import logging
3
+ import os
4
+ import types
5
+ import uuid
6
+ from stat import S_ISDIR, S_ISLNK
7
+
8
+ import paramiko
9
+
10
+ from .. import AbstractFileSystem
11
+ from ..utils import infer_storage_options
12
+
13
+ logger = logging.getLogger("fsspec.sftp")
14
+
15
+
16
+ class SFTPFileSystem(AbstractFileSystem):
17
+ """Files over SFTP/SSH
18
+
19
+ Peer-to-peer filesystem over SSH using paramiko.
20
+
21
+ Note: if using this with the ``open`` or ``open_files``, with full URLs,
22
+ there is no way to tell if a path is relative, so all paths are assumed
23
+ to be absolute.
24
+ """
25
+
26
+ protocol = "sftp", "ssh"
27
+
28
+ def __init__(self, host, **ssh_kwargs):
29
+ """
30
+
31
+ Parameters
32
+ ----------
33
+ host: str
34
+ Hostname or IP as a string
35
+ temppath: str
36
+ Location on the server to put files, when within a transaction
37
+ ssh_kwargs: dict
38
+ Parameters passed on to connection. See details in
39
+ https://docs.paramiko.org/en/3.3/api/client.html#paramiko.client.SSHClient.connect
40
+ May include port, username, password...
41
+ """
42
+ if self._cached:
43
+ return
44
+ super().__init__(**ssh_kwargs)
45
+ self.temppath = ssh_kwargs.pop("temppath", "/tmp") # remote temp directory
46
+ self.host = host
47
+ self.ssh_kwargs = ssh_kwargs
48
+ self._connect()
49
+
50
+ def _connect(self):
51
+ logger.debug("Connecting to SFTP server %s", self.host)
52
+ self.client = paramiko.SSHClient()
53
+ self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
54
+ self.client.connect(self.host, **self.ssh_kwargs)
55
+ self.ftp = self.client.open_sftp()
56
+
57
+ @classmethod
58
+ def _strip_protocol(cls, path):
59
+ return infer_storage_options(path)["path"]
60
+
61
+ @staticmethod
62
+    def _get_kwargs_from_urls(urlpath):
+        out = infer_storage_options(urlpath)
+        out.pop("path", None)
+        out.pop("protocol", None)
+        return out
+
+    def mkdir(self, path, create_parents=True, mode=511):
+        logger.debug("Creating folder %s", path)
+        if self.exists(path):
+            raise FileExistsError(f"File exists: {path}")
+
+        if create_parents:
+            self.makedirs(path)
+        else:
+            self.ftp.mkdir(path, mode)
+
+    def makedirs(self, path, exist_ok=False, mode=511):
+        if self.exists(path) and not exist_ok:
+            raise FileExistsError(f"File exists: {path}")
+
+        parts = path.split("/")
+        new_path = "/" if path[:1] == "/" else ""
+
+        for part in parts:
+            if part:
+                new_path = f"{new_path}/{part}" if new_path else part
+                if not self.exists(new_path):
+                    self.ftp.mkdir(new_path, mode)
+
+    def rmdir(self, path):
+        logger.debug("Removing folder %s", path)
+        self.ftp.rmdir(path)
+
+    def info(self, path):
+        stat = self._decode_stat(self.ftp.stat(path))
+        stat["name"] = path
+        return stat
+
+    @staticmethod
+    def _decode_stat(stat, parent_path=None):
+        if S_ISDIR(stat.st_mode):
+            t = "directory"
+        elif S_ISLNK(stat.st_mode):
+            t = "link"
+        else:
+            t = "file"
+        out = {
+            "name": "",
+            "size": stat.st_size,
+            "type": t,
+            "uid": stat.st_uid,
+            "gid": stat.st_gid,
+            "time": datetime.datetime.fromtimestamp(
+                stat.st_atime, tz=datetime.timezone.utc
+            ),
+            "mtime": datetime.datetime.fromtimestamp(
+                stat.st_mtime, tz=datetime.timezone.utc
+            ),
+        }
+        if parent_path:
+            out["name"] = "/".join([parent_path.rstrip("/"), stat.filename])
+        return out
+
+    def ls(self, path, detail=False):
+        logger.debug("Listing folder %s", path)
+        stats = [self._decode_stat(stat, path) for stat in self.ftp.listdir_iter(path)]
+        if detail:
+            return stats
+        else:
+            paths = [stat["name"] for stat in stats]
+            return sorted(paths)
+
+    def put(self, lpath, rpath, callback=None, **kwargs):
+        logger.debug("Put file %s into %s", lpath, rpath)
+        self.ftp.put(lpath, rpath)
+
+    def get_file(self, rpath, lpath, **kwargs):
+        if self.isdir(rpath):
+            os.makedirs(lpath, exist_ok=True)
+        else:
+            self.ftp.get(self._strip_protocol(rpath), lpath)
+
+    def _open(self, path, mode="rb", block_size=None, **kwargs):
+        """
+        block_size: int or None
+            If 0, no buffering, if 1, line buffering, if >1, buffer that many
+            bytes, if None use default from paramiko.
+        """
+        logger.debug("Opening file %s", path)
+        if kwargs.get("autocommit", True) is False:
+            # writes to temporary file, move on commit
+            path2 = "/".join([self.temppath, str(uuid.uuid4())])
+            f = self.ftp.open(path2, mode, bufsize=block_size if block_size else -1)
+            f.temppath = path2
+            f.targetpath = path
+            f.fs = self
+            f.commit = types.MethodType(commit_a_file, f)
+            f.discard = types.MethodType(discard_a_file, f)
+        else:
+            f = self.ftp.open(path, mode, bufsize=block_size if block_size else -1)
+        return f
+
+    def _rm(self, path):
+        if self.isdir(path):
+            self.ftp.rmdir(path)
+        else:
+            self.ftp.remove(path)
+
+    def mv(self, old, new):
+        logger.debug("Renaming %s into %s", old, new)
+        self.ftp.posix_rename(old, new)
+
+
+def commit_a_file(self):
+    self.fs.mv(self.temppath, self.targetpath)
+
+
+def discard_a_file(self):
+    self.fs._rm(self.temppath)
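
The commit/discard plumbing above is easiest to see end to end. A minimal sketch, assuming a reachable SFTP host; the hostname, credentials and paths are placeholders::

    import fsspec

    # "sftp" resolves to SFTPFileSystem; connection details are hypothetical.
    fs = fsspec.filesystem(
        "sftp", host="sftp.example.com", username="user", password="secret"
    )

    # autocommit=False writes to a temporary remote file; commit() renames it
    # onto the target via posix_rename, discard() removes it instead.
    f = fs.open("/data/output.csv", "wb", autocommit=False)
    try:
        f.write(b"col1,col2\n1,2\n")
        f.close()
        f.commit()   # mv temp file -> /data/output.csv
    except Exception:
        f.discard()  # drop the temp file on failure
        raise
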
.venv/lib/python3.13/site-packages/fsspec/implementations/smb.py ADDED
@@ -0,0 +1,416 @@
+"""
+This module contains SMBFileSystem class responsible for handling access to
+Windows Samba network shares by using package smbprotocol
+"""
+
+import datetime
+import re
+import uuid
+from stat import S_ISDIR, S_ISLNK
+
+import smbclient
+import smbprotocol.exceptions
+
+from .. import AbstractFileSystem
+from ..utils import infer_storage_options
+
+# ! pylint: disable=bad-continuation
+
+
+class SMBFileSystem(AbstractFileSystem):
+    """Allow reading and writing to Windows and Samba network shares.
+
+    When using `fsspec.open()` for getting a file-like object the URI
+    should be specified in this format:
+    ``smb://workgroup;user:password@server:port/share/folder/file.csv``.
+
+    Example::
+
+        >>> import fsspec
+        >>> with fsspec.open(
+        ...     'smb://myuser:mypassword@myserver.com/' 'share/folder/file.csv'
+        ... ) as smbfile:
+        ...     df = pd.read_csv(smbfile, sep='|', header=None)
+
+    Note that you need to pass in a valid hostname or IP address for the host
+    component of the URL. Do not use the Windows/NetBIOS machine name for the
+    host component.
+
+    The first component of the path in the URL points to the name of the shared
+    folder. Subsequent path components will point to the directory/folder/file.
+
+    The URL components ``workgroup``, ``user``, ``password`` and ``port`` may be
+    optional.
+
+    .. note::
+
+        This source requires `smbprotocol`_ to be installed, e.g.::
+
+            $ pip install smbprotocol
+            # or
+            # pip install smbprotocol[kerberos]
+
+    .. _smbprotocol: https://github.com/jborean93/smbprotocol#requirements
+
+    Note: if using this with the ``open`` or ``open_files``, with full URLs,
+    there is no way to tell if a path is relative, so all paths are assumed
+    to be absolute.
+    """
+
+    protocol = "smb"
+
+    # pylint: disable=too-many-arguments
+    def __init__(
+        self,
+        host,
+        port=None,
+        username=None,
+        password=None,
+        timeout=60,
+        encrypt=None,
+        share_access=None,
+        register_session_retries=4,
+        register_session_retry_wait=1,
+        register_session_retry_factor=10,
+        auto_mkdir=False,
+        **kwargs,
+    ):
+        """
+        You can use _get_kwargs_from_urls to get some kwargs from
+        a reasonable SMB url.
+
+        Authentication will be anonymous or integrated if username/password are not
+        given.
+
+        Parameters
+        ----------
+        host: str
+            The remote server name/ip to connect to
+        port: int or None
+            Port to connect with. Usually 445, sometimes 139.
+        username: str or None
+            Username to connect with. Required if Kerberos auth is not being used.
+        password: str or None
+            User's password on the server, if using username
+        timeout: int
+            Connection timeout in seconds
+        encrypt: bool
+            Whether to force encryption or not, once this has been set to True
+            the session cannot be changed back to False.
+        share_access: str or None
+            Specifies the default access applied to file open operations
+            performed with this file system object.
+            This affects whether other processes can concurrently open a handle
+            to the same file.
+
+            - None (the default): exclusively locks the file until closed.
+            - 'r': Allow other handles to be opened with read access.
+            - 'w': Allow other handles to be opened with write access.
+            - 'd': Allow other handles to be opened with delete access.
+        register_session_retries: int
+            Number of retries to register a session with the server. Retries are
+            not performed for authentication errors, as they are considered as
+            invalid credentials and not network issues. If set to a negative
+            value, no register attempts will be performed.
+        register_session_retry_wait: int
+            Time in seconds to wait between each retry. Number must be non-negative.
+        register_session_retry_factor: int
+            Base factor for the wait time between each retry. The wait time
+            is calculated using an exponential function. For factor=1 all wait
+            times will be equal to `register_session_retry_wait`. For any number
+            of retries, the last wait time will be equal to
+            `register_session_retry_wait` and for retries>1 the first wait time
+            will be equal to `register_session_retry_wait / factor`.
+            Number must be equal to or greater than 1. Optimal factor is 10.
+        auto_mkdir: bool
+            Whether, when opening a file, the directory containing it should
+            be created (if it doesn't already exist). This is assumed by pyarrow
+            and zarr-python code.
+        """
+        super().__init__(**kwargs)
+        self.host = host
+        self.port = port
+        self.username = username
+        self.password = password
+        self.timeout = timeout
+        self.encrypt = encrypt
+        self.temppath = kwargs.pop("temppath", "")
+        self.share_access = share_access
+        self.register_session_retries = register_session_retries
+        if register_session_retry_wait < 0:
+            raise ValueError(
+                "register_session_retry_wait must be a non-negative integer"
+            )
+        self.register_session_retry_wait = register_session_retry_wait
+        if register_session_retry_factor < 1:
+            raise ValueError(
+                "register_session_retry_factor must be a positive "
+                "integer equal to or greater than 1"
+            )
+        self.register_session_retry_factor = register_session_retry_factor
+        self.auto_mkdir = auto_mkdir
+        self._connect()
+
+    @property
+    def _port(self):
+        return 445 if self.port is None else self.port
+
+    def _connect(self):
+        import time
+
+        if self.register_session_retries <= -1:
+            return
+
+        retried_errors = []
+
+        wait_time = self.register_session_retry_wait
+        n_waits = (
+            self.register_session_retries - 1
+        )  # -1 = No wait time after the last retry
+        factor = self.register_session_retry_factor
+
+        # Generate wait times for each retry attempt.
+        # Wait times are calculated using an exponential function. For factor=1
+        # all wait times will be equal to `wait`. For any number of retries the
+        # last wait time will be equal to `wait` and for retries>1 the first
+        # wait time will be equal to `wait / factor`.
+        wait_times = iter(
+            factor ** (n / n_waits - 1) * wait_time for n in range(0, n_waits + 1)
+        )
+
+        for attempt in range(self.register_session_retries + 1):
+            try:
+                smbclient.register_session(
+                    self.host,
+                    username=self.username,
+                    password=self.password,
+                    port=self._port,
+                    encrypt=self.encrypt,
+                    connection_timeout=self.timeout,
+                )
+                return
+            except (
+                smbprotocol.exceptions.SMBAuthenticationError,
+                smbprotocol.exceptions.LogonFailure,
+            ):
+                # These exceptions should not be repeated, as they clearly indicate
+                # that the credentials are invalid and not a network issue.
+                raise
+            except ValueError as exc:
+                if re.findall(r"\[Errno -\d+]", str(exc)):
+                    # This exception is raised by smbprotocol.transport:Tcp.connect
+                    # and originates from socket.gaierror (OSError). These exceptions
+                    # might be raised due to network instability. We will retry to
+                    # connect.
+                    retried_errors.append(exc)
+                else:
+                    # All other ValueError exceptions should be raised, as they
+                    # are not related to network issues.
+                    raise
+            except Exception as exc:
+                # Save the exception and retry to connect. This except block might
+                # be dropped in the future, once all exceptions suited for retry
+                # are identified.
+                retried_errors.append(exc)
+
+            if attempt < self.register_session_retries:
+                time.sleep(next(wait_times))
+
+        # Raise last exception to inform user about the connection issues.
+        # Note: Should we use ExceptionGroup to raise all exceptions?
+        raise retried_errors[-1]
+
+    @classmethod
+    def _strip_protocol(cls, path):
+        return infer_storage_options(path)["path"]
+
+    @staticmethod
+    def _get_kwargs_from_urls(path):
+        # smb://workgroup;user:password@host:port/share/folder/file.csv
+        out = infer_storage_options(path)
+        out.pop("path", None)
+        out.pop("protocol", None)
+        return out
+
+    def mkdir(self, path, create_parents=True, **kwargs):
+        wpath = _as_unc_path(self.host, path)
+        if create_parents:
+            smbclient.makedirs(wpath, exist_ok=False, port=self._port, **kwargs)
+        else:
+            smbclient.mkdir(wpath, port=self._port, **kwargs)
+
+    def makedirs(self, path, exist_ok=False):
+        if _share_has_path(path):
+            wpath = _as_unc_path(self.host, path)
+            smbclient.makedirs(wpath, exist_ok=exist_ok, port=self._port)
+
+    def rmdir(self, path):
+        if _share_has_path(path):
+            wpath = _as_unc_path(self.host, path)
+            smbclient.rmdir(wpath, port=self._port)
+
+    def info(self, path, **kwargs):
+        wpath = _as_unc_path(self.host, path)
+        stats = smbclient.stat(wpath, port=self._port, **kwargs)
+        if S_ISDIR(stats.st_mode):
+            stype = "directory"
+        elif S_ISLNK(stats.st_mode):
+            stype = "link"
+        else:
+            stype = "file"
+        res = {
+            "name": path + "/" if stype == "directory" else path,
+            "size": stats.st_size,
+            "type": stype,
+            "uid": stats.st_uid,
+            "gid": stats.st_gid,
+            "time": stats.st_atime,
+            "mtime": stats.st_mtime,
+        }
+        return res
+
+    def created(self, path):
+        """Return the created timestamp of a file as a datetime.datetime"""
+        wpath = _as_unc_path(self.host, path)
+        stats = smbclient.stat(wpath, port=self._port)
+        return datetime.datetime.fromtimestamp(stats.st_ctime, tz=datetime.timezone.utc)
+
+    def modified(self, path):
+        """Return the modified timestamp of a file as a datetime.datetime"""
+        wpath = _as_unc_path(self.host, path)
+        stats = smbclient.stat(wpath, port=self._port)
+        return datetime.datetime.fromtimestamp(stats.st_mtime, tz=datetime.timezone.utc)
+
+    def ls(self, path, detail=True, **kwargs):
+        unc = _as_unc_path(self.host, path)
+        listed = smbclient.listdir(unc, port=self._port, **kwargs)
+        dirs = ["/".join([path.rstrip("/"), p]) for p in listed]
+        if detail:
+            dirs = [self.info(d) for d in dirs]
+        return dirs
+
+    # pylint: disable=too-many-arguments
+    def _open(
+        self,
+        path,
+        mode="rb",
+        block_size=-1,
+        autocommit=True,
+        cache_options=None,
+        **kwargs,
+    ):
+        """
+        block_size: int or None
+            If 0, no buffering, 1, line buffering, >1, buffer that many bytes
+
+        Notes
+        -----
+        By specifying 'share_access' in 'kwargs' it is possible to override the
+        default shared access setting applied in the constructor of this object.
+        """
+        if self.auto_mkdir and "w" in mode:
+            self.makedirs(self._parent(path), exist_ok=True)
+        bls = block_size if block_size is not None and block_size >= 0 else -1
+        wpath = _as_unc_path(self.host, path)
+        share_access = kwargs.pop("share_access", self.share_access)
+        if "w" in mode and autocommit is False:
+            temp = _as_temp_path(self.host, path, self.temppath)
+            return SMBFileOpener(
+                wpath, temp, mode, port=self._port, block_size=bls, **kwargs
+            )
+        return smbclient.open_file(
+            wpath,
+            mode,
+            buffering=bls,
+            share_access=share_access,
+            port=self._port,
+            **kwargs,
+        )
+
+    def copy(self, path1, path2, **kwargs):
+        """Copy within two locations in the same filesystem"""
+        wpath1 = _as_unc_path(self.host, path1)
+        wpath2 = _as_unc_path(self.host, path2)
+        if self.auto_mkdir:
+            self.makedirs(self._parent(path2), exist_ok=True)
+        smbclient.copyfile(wpath1, wpath2, port=self._port, **kwargs)
+
+    def _rm(self, path):
+        if _share_has_path(path):
+            wpath = _as_unc_path(self.host, path)
+            stats = smbclient.stat(wpath, port=self._port)
+            if S_ISDIR(stats.st_mode):
+                smbclient.rmdir(wpath, port=self._port)
+            else:
+                smbclient.remove(wpath, port=self._port)
+
+    def mv(self, path1, path2, recursive=None, maxdepth=None, **kwargs):
+        wpath1 = _as_unc_path(self.host, path1)
+        wpath2 = _as_unc_path(self.host, path2)
+        smbclient.rename(wpath1, wpath2, port=self._port, **kwargs)
+
+
+def _as_unc_path(host, path):
+    rpath = path.replace("/", "\\")
+    unc = f"\\\\{host}{rpath}"
+    return unc
+
+
+def _as_temp_path(host, path, temppath):
+    share = path.split("/")[1]
+    temp_file = f"/{share}{temppath}/{uuid.uuid4()}"
+    unc = _as_unc_path(host, temp_file)
+    return unc
+
+
+def _share_has_path(path):
+    parts = path.count("/")
+    if path.endswith("/"):
+        return parts > 2
+    return parts > 1
+
+
+class SMBFileOpener:
+    """writes to remote temporary file, move on commit"""
+
+    def __init__(self, path, temp, mode, port=445, block_size=-1, **kwargs):
+        self.path = path
+        self.temp = temp
+        self.mode = mode
+        self.block_size = block_size
+        self.kwargs = kwargs
+        self.smbfile = None
+        self._incontext = False
+        self.port = port
+        self._open()
+
+    def _open(self):
+        if self.smbfile is None or self.smbfile.closed:
+            self.smbfile = smbclient.open_file(
+                self.temp,
+                self.mode,
+                port=self.port,
+                buffering=self.block_size,
+                **self.kwargs,
+            )
+
+    def commit(self):
+        """Move temp file to definitive on success."""
+        # TODO: use transaction support in SMB protocol
+        smbclient.replace(self.temp, self.path, port=self.port)
+
+    def discard(self):
+        """Remove the temp file on failure."""
+        smbclient.remove(self.temp, port=self.port)
+
+    def __fspath__(self):
+        return self.path
+
+    def __iter__(self):
+        return self.smbfile.__iter__()
+
+    def __getattr__(self, item):
+        return getattr(self.smbfile, item)
+
+    def __enter__(self):
+        self._incontext = True
+        return self.smbfile.__enter__()
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        self._incontext = False
+        self.smbfile.__exit__(exc_type, exc_value, traceback)
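
The module-level helpers above are pure functions, so their behaviour can be pinned down without a server. A small sketch of the expected UNC translation and share/path split (paths are illustrative; importing requires smbprotocol to be installed)::

    from fsspec.implementations.smb import _as_unc_path, _share_has_path

    # "/share/folder/file.csv" -> "\\server\share\folder\file.csv"
    assert _as_unc_path("server", "/share/folder/file.csv") == (
        "\\\\server\\share\\folder\\file.csv"
    )

    # Only paths strictly below the share root count as "having a path";
    # makedirs/rmdir/_rm above are deliberately no-ops on the share itself.
    assert _share_has_path("/share/folder/file.csv")
    assert not _share_has_path("/share")
    assert not _share_has_path("/share/")
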
.venv/lib/python3.13/site-packages/fsspec/implementations/tar.py ADDED
@@ -0,0 +1,124 @@
+import logging
+import tarfile
+
+import fsspec
+from fsspec.archive import AbstractArchiveFileSystem
+from fsspec.compression import compr
+from fsspec.utils import infer_compression
+
+typemap = {b"0": "file", b"5": "directory"}
+
+logger = logging.getLogger("tar")
+
+
+class TarFileSystem(AbstractArchiveFileSystem):
+    """Compressed Tar archives as a file-system (read-only)
+
+    Supports the following formats:
+    tar.gz, tar.bz2, tar.xz
+    """
+
+    root_marker = ""
+    protocol = "tar"
+    cachable = False
+
+    def __init__(
+        self,
+        fo="",
+        index_store=None,
+        target_options=None,
+        target_protocol=None,
+        compression=None,
+        **kwargs,
+    ):
+        super().__init__(**kwargs)
+        target_options = target_options or {}
+
+        if isinstance(fo, str):
+            self.of = fsspec.open(fo, protocol=target_protocol, **target_options)
+            fo = self.of.open()  # keep the reference
+
+        # Try to infer compression.
+        if compression is None:
+            name = None
+
+            # Try different ways to get hold of the filename. `fo` might either
+            # be a `fsspec.LocalFileOpener`, an `io.BufferedReader` or an
+            # `fsspec.AbstractFileSystem` instance.
+            try:
+                # Amended io.BufferedReader or similar.
+                # This uses a "protocol extension" where original filenames are
+                # propagated to archive-like filesystems in order to let them
+                # infer the right compression appropriately.
+                if hasattr(fo, "original"):
+                    name = fo.original
+
+                # fsspec.LocalFileOpener
+                elif hasattr(fo, "path"):
+                    name = fo.path
+
+                # io.BufferedReader
+                elif hasattr(fo, "name"):
+                    name = fo.name
+
+                # fsspec.AbstractFileSystem
+                elif hasattr(fo, "info"):
+                    name = fo.info()["name"]
+
+            except Exception as ex:
+                logger.warning(
+                    f"Unable to determine file name, not inferring compression: {ex}"
+                )
+
+            if name is not None:
+                compression = infer_compression(name)
+                logger.info(f"Inferred compression {compression} from file name {name}")
+
+        if compression is not None:
+            # TODO: tarfile already implements compression with modes like "'r:gz'",
+            # but would seeking to an offset in the file still work then?
+            fo = compr[compression](fo)
+
+        self._fo_ref = fo
+        self.fo = fo  # the whole instance is a context
+        self.tar = tarfile.TarFile(fileobj=self.fo)
+        self.dir_cache = None
+
+        self.index_store = index_store
+        self.index = None
+        self._index()
+
+    def _index(self):
+        # TODO: load and set saved index, if exists
+        out = {}
+        for ti in self.tar:
+            info = ti.get_info()
+            info["type"] = typemap.get(info["type"], "file")
+            name = ti.get_info()["name"].rstrip("/")
+            out[name] = (info, ti.offset_data)
+
+        self.index = out
+        # TODO: save index to self.index_store here, if set
+
+    def _get_dirs(self):
+        if self.dir_cache is not None:
+            return
+
+        # This enables ls to get directories as children as well as files
+        self.dir_cache = {
+            dirname: {"name": dirname, "size": 0, "type": "directory"}
+            for dirname in self._all_dirnames(self.tar.getnames())
+        }
+        for member in self.tar.getmembers():
+            info = member.get_info()
+            info["name"] = info["name"].rstrip("/")
+            info["type"] = typemap.get(info["type"], "file")
+            self.dir_cache[info["name"]] = info
+
+    def _open(self, path, mode="rb", **kwargs):
+        if mode != "rb":
+            raise ValueError("Read-only filesystem implementation")
+        details, offset = self.index[path]
+        if details["type"] != "file":
+            raise ValueError("Can only handle regular files")
+        return self.tar.extractfile(path)
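
In practice this class is usually reached through fsspec's URL chaining rather than instantiated directly. A sketch, assuming a local example.tar.gz that contains inner/data.csv (both names are placeholders)::

    import fsspec

    # The part before "::" is the path inside the archive; the part after is
    # the URL of the archive itself. Compression is inferred from the name.
    with fsspec.open("tar://inner/data.csv::file://example.tar.gz", "rb") as f:
        header = f.read(100)

    # Equivalent explicit construction:
    from fsspec.implementations.tar import TarFileSystem

    fs = TarFileSystem("example.tar.gz")
    print(fs.ls(""))  # list the archive root
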
.venv/lib/python3.13/site-packages/fsspec/implementations/webhdfs.py ADDED
@@ -0,0 +1,485 @@
+# https://hadoop.apache.org/docs/r1.0.4/webhdfs.html
+
+import logging
+import os
+import secrets
+import shutil
+import tempfile
+import uuid
+from contextlib import suppress
+from urllib.parse import quote
+
+import requests
+
+from ..spec import AbstractBufferedFile, AbstractFileSystem
+from ..utils import infer_storage_options, tokenize
+
+logger = logging.getLogger("webhdfs")
+
+
+class WebHDFS(AbstractFileSystem):
+    """
+    Interface to HDFS over HTTP using the WebHDFS API. Also supports HttpFS gateways.
+
+    Four auth mechanisms are supported:
+
+    insecure: no auth is done, and the user is assumed to be whoever they
+        say they are (parameter ``user``), or a predefined value such as
+        "dr.who" if not given
+    spnego: when kerberos authentication is enabled, auth is negotiated by
+        requests_kerberos https://github.com/requests/requests-kerberos .
+        This establishes a session based on existing kinit login and/or
+        specified principal/password; parameters are passed with ``kerb_kwargs``
+    token: uses an existing Hadoop delegation token from another secured
+        service. Indeed, this client can also generate such tokens when
+        not insecure. Note that tokens expire, but can be renewed (by a
+        previously specified user) and may allow for proxying.
+    basic-auth: used when both parameter ``user`` and parameter ``password``
+        are provided.
+
+    """
+
+    tempdir = str(tempfile.gettempdir())
+    protocol = "webhdfs", "webHDFS"
+
+    def __init__(
+        self,
+        host,
+        port=50070,
+        kerberos=False,
+        token=None,
+        user=None,
+        password=None,
+        proxy_to=None,
+        kerb_kwargs=None,
+        data_proxy=None,
+        use_https=False,
+        session_cert=None,
+        session_verify=True,
+        **kwargs,
+    ):
+        """
+        Parameters
+        ----------
+        host: str
+            Name-node address
+        port: int
+            Port for webHDFS
+        kerberos: bool
+            Whether to authenticate with kerberos for this connection
+        token: str or None
+            If given, use this token on every call to authenticate. A user
+            and user-proxy may be encoded in the token and should not be also
+            given
+        user: str or None
+            If given, assert the user name to connect with
+        password: str or None
+            If given, assert the password to use for basic auth. If password
+            is provided, user must be provided also
+        proxy_to: str or None
+            If given, the user has the authority to proxy, and this value is
+            the user in whose name actions are taken
+        kerb_kwargs: dict
+            Any extra arguments for HTTPKerberosAuth, see
+            `<https://github.com/requests/requests-kerberos/blob/master/requests_kerberos/kerberos_.py>`_
+        data_proxy: dict, callable or None
+            If given, map data-node addresses. This can be necessary if the
+            HDFS cluster is behind a proxy, running on Docker or otherwise has
+            a mismatch between the host-names given by the name-node and the
+            address by which to refer to them from the client. If a dict,
+            maps host names ``host->data_proxy[host]``; if a callable, full
+            URLs are passed, and function must conform to
+            ``url->data_proxy(url)``.
+        use_https: bool
+            Whether to connect to the Name-node using HTTPS instead of HTTP
+        session_cert: str or Tuple[str, str] or None
+            Path to a certificate file, or tuple of (cert, key) files to use
+            for the requests.Session
+        session_verify: str, bool or None
+            Path to a certificate file to use for verifying the requests.Session.
+        kwargs
+        """
+        if self._cached:
+            return
+        super().__init__(**kwargs)
+        self.url = f"{'https' if use_https else 'http'}://{host}:{port}/webhdfs/v1"
+        self.kerb = kerberos
+        self.kerb_kwargs = kerb_kwargs or {}
+        self.pars = {}
+        self.proxy = data_proxy or {}
+        if token is not None:
+            if user is not None or proxy_to is not None:
+                raise ValueError(
+                    "If passing a delegation token, must not set "
+                    "user or proxy_to, as these are encoded in the"
+                    " token"
+                )
+            self.pars["delegation"] = token
+        self.user = user
+        self.password = password
+
+        if password is not None:
+            if user is None:
+                raise ValueError(
+                    "If passing a password, the user must also be "
+                    "set in order to set up the basic-auth"
+                )
+        else:
+            if user is not None:
+                self.pars["user.name"] = user
+
+        if proxy_to is not None:
+            self.pars["doas"] = proxy_to
+        if kerberos and user is not None:
+            raise ValueError(
+                "If using Kerberos auth, do not specify the "
+                "user, this is handled by kinit."
+            )
+
+        self.session_cert = session_cert
+        self.session_verify = session_verify
+
+        self._connect()
+
+        self._fsid = f"webhdfs_{tokenize(host, port)}"
+
+    @property
+    def fsid(self):
+        return self._fsid
+
+    def _connect(self):
+        self.session = requests.Session()
+
+        if self.session_cert:
+            self.session.cert = self.session_cert
+
+        self.session.verify = self.session_verify
+
+        if self.kerb:
+            from requests_kerberos import HTTPKerberosAuth
+
+            self.session.auth = HTTPKerberosAuth(**self.kerb_kwargs)
+
+        if self.user is not None and self.password is not None:
+            from requests.auth import HTTPBasicAuth
+
+            self.session.auth = HTTPBasicAuth(self.user, self.password)
+
+    def _call(self, op, method="get", path=None, data=None, redirect=True, **kwargs):
+        path = self._strip_protocol(path) if path is not None else ""
+        url = self._apply_proxy(self.url + quote(path, safe="/="))
+        args = kwargs.copy()
+        args.update(self.pars)
+        args["op"] = op.upper()
+        logger.debug("sending %s with %s", url, method)
+        out = self.session.request(
+            method=method.upper(),
+            url=url,
+            params=args,
+            data=data,
+            allow_redirects=redirect,
+        )
+        if out.status_code in [400, 401, 403, 404, 500]:
+            try:
+                err = out.json()
+                msg = err["RemoteException"]["message"]
+                exp = err["RemoteException"]["exception"]
+            except (ValueError, KeyError):
+                pass
+            else:
+                if exp in ["IllegalArgumentException", "UnsupportedOperationException"]:
+                    raise ValueError(msg)
+                elif exp in ["SecurityException", "AccessControlException"]:
+                    raise PermissionError(msg)
+                elif exp in ["FileNotFoundException"]:
+                    raise FileNotFoundError(msg)
+                else:
+                    raise RuntimeError(msg)
+        out.raise_for_status()
+        return out
+
+    def _open(
+        self,
+        path,
+        mode="rb",
+        block_size=None,
+        autocommit=True,
+        replication=None,
+        permissions=None,
+        **kwargs,
+    ):
+        """
+
+        Parameters
+        ----------
+        path: str
+            File location
+        mode: str
+            'rb', 'wb', etc.
+        block_size: int
+            Client buffer size for read-ahead or write buffer
+        autocommit: bool
+            If False, writes to temporary file that only gets put in final
+            location upon commit
+        replication: int
+            Number of copies of file on the cluster, write mode only
+        permissions: str or int
+            posix permissions, write mode only
+        kwargs
+
+        Returns
+        -------
+        WebHDFile instance
+        """
+        block_size = block_size or self.blocksize
+        return WebHDFile(
+            self,
+            path,
+            mode=mode,
+            block_size=block_size,
+            tempdir=self.tempdir,
+            autocommit=autocommit,
+            replication=replication,
+            permissions=permissions,
+        )
+
+    @staticmethod
+    def _process_info(info):
+        info["type"] = info["type"].lower()
+        info["size"] = info["length"]
+        return info
+
+    @classmethod
+    def _strip_protocol(cls, path):
+        return infer_storage_options(path)["path"]
+
+    @staticmethod
+    def _get_kwargs_from_urls(urlpath):
+        out = infer_storage_options(urlpath)
+        out.pop("path", None)
+        out.pop("protocol", None)
+        if "username" in out:
+            out["user"] = out.pop("username")
+        return out
+
+    def info(self, path):
+        out = self._call("GETFILESTATUS", path=path)
+        info = out.json()["FileStatus"]
+        info["name"] = path
+        return self._process_info(info)
+
+    def ls(self, path, detail=False):
+        out = self._call("LISTSTATUS", path=path)
+        infos = out.json()["FileStatuses"]["FileStatus"]
+        for info in infos:
+            self._process_info(info)
+            info["name"] = path.rstrip("/") + "/" + info["pathSuffix"]
+        if detail:
+            return sorted(infos, key=lambda i: i["name"])
+        else:
+            return sorted(info["name"] for info in infos)
+
+    def content_summary(self, path):
+        """Total numbers of files, directories and bytes under path"""
+        out = self._call("GETCONTENTSUMMARY", path=path)
+        return out.json()["ContentSummary"]
+
+    def ukey(self, path):
+        """Checksum info of file, giving method and result"""
+        out = self._call("GETFILECHECKSUM", path=path, redirect=False)
+        if "Location" in out.headers:
+            location = self._apply_proxy(out.headers["Location"])
+            out2 = self.session.get(location)
+            out2.raise_for_status()
+            return out2.json()["FileChecksum"]
+        else:
+            out.raise_for_status()
+            return out.json()["FileChecksum"]
+
+    def home_directory(self):
+        """Get user's home directory"""
+        out = self._call("GETHOMEDIRECTORY")
+        return out.json()["Path"]
+
+    def get_delegation_token(self, renewer=None):
+        """Retrieve token which can give the same authority to other users
+
+        Parameters
+        ----------
+        renewer: str or None
+            User who may use this token; if None, will be current user
+        """
+        if renewer:
+            out = self._call("GETDELEGATIONTOKEN", renewer=renewer)
+        else:
+            out = self._call("GETDELEGATIONTOKEN")
+        t = out.json()["Token"]
+        if t is None:
+            raise ValueError("No token available for this user/security context")
+        return t["urlString"]
+
+    def renew_delegation_token(self, token):
+        """Make token live longer. Returns new expiry time"""
+        out = self._call("RENEWDELEGATIONTOKEN", method="put", token=token)
+        return out.json()["long"]
+
+    def cancel_delegation_token(self, token):
+        """Stop the token from being useful"""
+        self._call("CANCELDELEGATIONTOKEN", method="put", token=token)
+
+    def chmod(self, path, mod):
+        """Set the permission at path
+
+        Parameters
+        ----------
+        path: str
+            location to set (file or directory)
+        mod: str or int
+            posix representation of permission, given as an oct string,
+            e.g. '777', or 0o777
+        """
+        self._call("SETPERMISSION", method="put", path=path, permission=mod)
+
+    def chown(self, path, owner=None, group=None):
+        """Change owning user and/or group"""
+        kwargs = {}
+        if owner is not None:
+            kwargs["owner"] = owner
+        if group is not None:
+            kwargs["group"] = group
+        self._call("SETOWNER", method="put", path=path, **kwargs)
+
+    def set_replication(self, path, replication):
+        """
+        Set file replication factor
+
+        Parameters
+        ----------
+        path: str
+            File location (not for directories)
+        replication: int
+            Number of copies of file on the cluster. Should be smaller than
+            number of data nodes; normally 3 on most systems.
+        """
+        self._call("SETREPLICATION", path=path, method="put", replication=replication)
+
+    def mkdir(self, path, **kwargs):
+        self._call("MKDIRS", method="put", path=path)
+
+    def makedirs(self, path, exist_ok=False):
+        if exist_ok is False and self.exists(path):
+            raise FileExistsError(path)
+        self.mkdir(path)
+
+    def mv(self, path1, path2, **kwargs):
+        self._call("RENAME", method="put", path=path1, destination=path2)
+
+    def rm(self, path, recursive=False, **kwargs):
+        self._call(
+            "DELETE",
+            method="delete",
+            path=path,
+            recursive="true" if recursive else "false",
+        )
+
+    def rm_file(self, path, **kwargs):
+        self.rm(path)
+
+    def cp_file(self, lpath, rpath, **kwargs):
+        with self.open(lpath) as lstream:
+            tmp_fname = "/".join([self._parent(rpath), f".tmp.{secrets.token_hex(16)}"])
+            # Perform an atomic copy (stream to a temporary file and
+            # move it to the actual destination).
+            try:
+                with self.open(tmp_fname, "wb") as rstream:
+                    shutil.copyfileobj(lstream, rstream)
+                self.mv(tmp_fname, rpath)
+            except BaseException:
+                with suppress(FileNotFoundError):
+                    self.rm(tmp_fname)
+                raise
+
+    def _apply_proxy(self, location):
+        if self.proxy and callable(self.proxy):
+            location = self.proxy(location)
+        elif self.proxy:
+            # as a dict
+            for k, v in self.proxy.items():
+                location = location.replace(k, v, 1)
+        return location
+
+
+class WebHDFile(AbstractBufferedFile):
+    """A file living in HDFS over webHDFS"""
+
+    def __init__(self, fs, path, **kwargs):
+        super().__init__(fs, path, **kwargs)
+        kwargs = kwargs.copy()
+        if kwargs.get("permissions", None) is None:
+            kwargs.pop("permissions", None)
+        if kwargs.get("replication", None) is None:
+            kwargs.pop("replication", None)
+        self.permissions = kwargs.pop("permissions", 511)
+        tempdir = kwargs.pop("tempdir")
+        if kwargs.pop("autocommit", False) is False:
+            self.target = self.path
+            self.path = os.path.join(tempdir, str(uuid.uuid4()))
+
+    def _upload_chunk(self, final=False):
+        """Write one part of a multi-block file upload
+
+        Parameters
+        ==========
+        final: bool
+            This is the last block, so should complete file, if
+            self.autocommit is True.
+        """
+        out = self.fs.session.post(
+            self.location,
+            data=self.buffer.getvalue(),
+            headers={"content-type": "application/octet-stream"},
+        )
+        out.raise_for_status()
+        return True
+
+    def _initiate_upload(self):
+        """Create remote file/upload"""
+        kwargs = self.kwargs.copy()
+        if "a" in self.mode:
+            op, method = "APPEND", "POST"
+        else:
+            op, method = "CREATE", "PUT"
+            kwargs["overwrite"] = "true"
+        out = self.fs._call(op, method, self.path, redirect=False, **kwargs)
+        location = self.fs._apply_proxy(out.headers["Location"])
+        if "w" in self.mode:
+            # create empty file to append to
+            out2 = self.fs.session.put(
+                location, headers={"content-type": "application/octet-stream"}
+            )
+            out2.raise_for_status()
+            # after creating empty file, change location to append to
+            out2 = self.fs._call("APPEND", "POST", self.path, redirect=False, **kwargs)
+            self.location = self.fs._apply_proxy(out2.headers["Location"])
+
+    def _fetch_range(self, start, end):
+        start = max(start, 0)
+        end = min(self.size, end)
+        if start >= end or start >= self.size:
+            return b""
+        out = self.fs._call(
+            "OPEN", path=self.path, offset=start, length=end - start, redirect=False
+        )
+        out.raise_for_status()
+        if "Location" in out.headers:
+            location = out.headers["Location"]
+            out2 = self.fs.session.get(self.fs._apply_proxy(location))
+            return out2.content
+        else:
+            return out.content
+
+    def commit(self):
+        self.fs.mv(self.path, self.target)
+
+    def discard(self):
+        self.fs.rm(self.path)
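
Putting the pieces together, a minimal round trip against a name-node. Host, port and paths are placeholders; 9870 is the default WebHDFS port on Hadoop 3.x, while the constructor default of 50070 matches Hadoop 2.x::

    import fsspec

    # Insecure auth: the asserted user name is sent as the "user.name" parameter.
    fs = fsspec.filesystem(
        "webhdfs", host="namenode.example.com", port=9870, user="hadoop"
    )

    fs.mkdir("/tmp/demo")
    with fs.open("/tmp/demo/hello.txt", "wb", replication=2) as f:
        f.write(b"hello hdfs\n")  # buffered locally; uploaded via CREATE/APPEND

    print(fs.ls("/tmp/demo"))
    print(fs.content_summary("/tmp/demo"))
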
.venv/lib/python3.13/site-packages/fsspec/implementations/zip.py ADDED
@@ -0,0 +1,177 @@
+import os
+import zipfile
+
+import fsspec
+from fsspec.archive import AbstractArchiveFileSystem
+
+
+class ZipFileSystem(AbstractArchiveFileSystem):
+    """Read/Write contents of ZIP archive as a file-system
+
+    Keeps file object open while instance lives.
+
+    This class is pickleable, but not necessarily thread-safe
+    """
+
+    root_marker = ""
+    protocol = "zip"
+    cachable = False
+
+    def __init__(
+        self,
+        fo="",
+        mode="r",
+        target_protocol=None,
+        target_options=None,
+        compression=zipfile.ZIP_STORED,
+        allowZip64=True,
+        compresslevel=None,
+        **kwargs,
+    ):
+        """
+        Parameters
+        ----------
+        fo: str or file-like
+            Contains ZIP, and must exist. If a str, will fetch file using
+            :meth:`~fsspec.open_files`, which must return one file exactly.
+        mode: str
+            Accept: "r", "w", "a"
+        target_protocol: str (optional)
+            If ``fo`` is a string, this value can be used to override the
+            FS protocol inferred from a URL
+        target_options: dict (optional)
+            Kwargs passed when instantiating the target FS, if ``fo`` is
+            a string.
+        compression, allowZip64, compresslevel: passed to ZipFile
+            Only relevant when creating a ZIP
+        """
+        super().__init__(self, **kwargs)
+        if mode not in set("rwa"):
+            raise ValueError(f"mode '{mode}' not understood")
+        self.mode = mode
+        if isinstance(fo, (str, os.PathLike)):
+            if mode == "a":
+                m = "r+b"
+            else:
+                m = mode + "b"
+            fo = fsspec.open(
+                fo, mode=m, protocol=target_protocol, **(target_options or {})
+            )
+        self.force_zip_64 = allowZip64
+        self.of = fo
+        self.fo = fo.__enter__()  # the whole instance is a context
+        self.zip = zipfile.ZipFile(
+            self.fo,
+            mode=mode,
+            compression=compression,
+            allowZip64=allowZip64,
+            compresslevel=compresslevel,
+        )
+        self.dir_cache = None
+
+    @classmethod
+    def _strip_protocol(cls, path):
+        # zip file paths are always relative to the archive root
+        return super()._strip_protocol(path).lstrip("/")
+
+    def __del__(self):
+        if hasattr(self, "zip"):
+            self.close()
+            del self.zip
+
+    def close(self):
+        """Commits any write changes to the file. Done on ``del`` too."""
+        self.zip.close()
+
+    def _get_dirs(self):
+        if self.dir_cache is None or self.mode in set("wa"):
+            # when writing, dir_cache is always in the ZipFile's attributes,
+            # not read from the file.
+            files = self.zip.infolist()
+            self.dir_cache = {
+                dirname.rstrip("/"): {
+                    "name": dirname.rstrip("/"),
+                    "size": 0,
+                    "type": "directory",
+                }
+                for dirname in self._all_dirnames(self.zip.namelist())
+            }
+            for z in files:
+                f = {s: getattr(z, s, None) for s in zipfile.ZipInfo.__slots__}
+                f.update(
+                    {
+                        "name": z.filename.rstrip("/"),
+                        "size": z.file_size,
+                        "type": ("directory" if z.is_dir() else "file"),
+                    }
+                )
+                self.dir_cache[f["name"]] = f
+
+    def pipe_file(self, path, value, **kwargs):
+        # override upstream, because we know the exact file size in this case
+        self.zip.writestr(path, value, **kwargs)
+
+    def _open(
+        self,
+        path,
+        mode="rb",
+        block_size=None,
+        autocommit=True,
+        cache_options=None,
+        **kwargs,
+    ):
+        path = self._strip_protocol(path)
+        if "r" in mode and self.mode in set("wa"):
+            if self.exists(path):
+                raise OSError("ZipFS can only be open for reading or writing, not both")
+            raise FileNotFoundError(path)
+        if "r" in self.mode and "w" in mode:
+            raise OSError("ZipFS can only be open for reading or writing, not both")
+        out = self.zip.open(path, mode.strip("b"), force_zip64=self.force_zip_64)
+        if "r" in mode:
+            info = self.info(path)
+            out.size = info["size"]
+            out.name = info["name"]
+        return out
+
+    def find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):
+        if maxdepth is not None and maxdepth < 1:
+            raise ValueError("maxdepth must be at least 1")
+
+        # Remove the leading slash, as the zip file paths are always
+        # given without a leading slash
+        path = path.lstrip("/")
+        path_parts = list(filter(lambda s: bool(s), path.split("/")))
+
+        def _matching_starts(file_path):
+            file_parts = filter(lambda s: bool(s), file_path.split("/"))
+            return all(a == b for a, b in zip(path_parts, file_parts))
+
+        self._get_dirs()
+
+        result = {}
+        # To match posix find, if an exact file name is given, we should
+        # return only that file
+        if path in self.dir_cache and self.dir_cache[path]["type"] == "file":
+            result[path] = self.dir_cache[path]
+            return result if detail else [path]
+
+        for file_path, file_info in self.dir_cache.items():
+            if not (path == "" or _matching_starts(file_path)):
+                continue
+
+            if file_info["type"] == "directory":
+                if withdirs:
+                    if file_path not in result:
+                        result[file_path.strip("/")] = file_info
+                continue
+
+            if file_path not in result:
+                result[file_path] = file_info if detail else None
+
+        if maxdepth:
+            path_depth = path.count("/")
+            result = {
+                k: v for k, v in result.items() if k.count("/") - path_depth < maxdepth
+            }
+        return result if detail else sorted(result)
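
A short sketch of the read/write split described above (the archive name is illustrative): an archive is either written entry by entry and committed on close(), or opened read-only and browsed like any other filesystem::

    from fsspec.implementations.zip import ZipFileSystem

    # Write mode: pipe_file maps straight onto writestr, which knows the size.
    fs = ZipFileSystem("archive.zip", mode="w")
    fs.pipe_file("folder/hello.txt", b"hello zip\n")
    fs.close()  # flushes the central directory

    # Read mode: a fresh instance on the same archive.
    fs = ZipFileSystem("archive.zip")
    print(fs.find("", withdirs=True))  # e.g. ['folder', 'folder/hello.txt']
    with fs.open("folder/hello.txt") as f:
        print(f.read())
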
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/__init__.py ADDED
@@ -0,0 +1,289 @@
+import os
+from hashlib import md5
+
+import pytest
+
+from fsspec.implementations.local import LocalFileSystem
+from fsspec.tests.abstract.copy import AbstractCopyTests  # noqa: F401
+from fsspec.tests.abstract.get import AbstractGetTests  # noqa: F401
+from fsspec.tests.abstract.open import AbstractOpenTests  # noqa: F401
+from fsspec.tests.abstract.pipe import AbstractPipeTests  # noqa: F401
+from fsspec.tests.abstract.put import AbstractPutTests  # noqa: F401
+
+
+class BaseAbstractFixtures:
+    """
+    Abstract base class containing fixtures that are used by but never need to
+    be overridden in derived filesystem-specific classes to run the abstract
+    tests on such filesystems.
+    """
+
+    @pytest.fixture
+    def fs_bulk_operations_scenario_0(self, fs, fs_join, fs_path):
+        """
+        Scenario on remote filesystem that is used for many cp/get/put tests.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._bulk_operations_scenario_0(fs, fs_join, fs_path)
+        yield source
+        fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def fs_glob_edge_cases_files(self, fs, fs_join, fs_path):
+        """
+        Scenario on remote filesystem that is used for glob edge cases cp/get/put tests.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._glob_edge_cases_files(fs, fs_join, fs_path)
+        yield source
+        fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def fs_dir_and_file_with_same_name_prefix(self, fs, fs_join, fs_path):
+        """
+        Scenario on remote filesystem that is used to check cp/get/put on directory
+        and file with the same name prefixes.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._dir_and_file_with_same_name_prefix(fs, fs_join, fs_path)
+        yield source
+        fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def fs_10_files_with_hashed_names(self, fs, fs_join, fs_path):
+        """
+        Scenario on remote filesystem that is used to check cp/get/put files order
+        when source and destination are lists.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._10_files_with_hashed_names(fs, fs_join, fs_path)
+        yield source
+        fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def fs_target(self, fs, fs_join, fs_path):
+        """
+        Return name of remote directory that does not yet exist to copy into.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        target = fs_join(fs_path, "target")
+        yield target
+        if fs.exists(target):
+            fs.rm(target, recursive=True)
+
+    @pytest.fixture
+    def local_bulk_operations_scenario_0(self, local_fs, local_join, local_path):
+        """
+        Scenario on local filesystem that is used for many cp/get/put tests.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._bulk_operations_scenario_0(local_fs, local_join, local_path)
+        yield source
+        local_fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def local_glob_edge_cases_files(self, local_fs, local_join, local_path):
+        """
+        Scenario on local filesystem that is used for glob edge cases cp/get/put tests.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._glob_edge_cases_files(local_fs, local_join, local_path)
+        yield source
+        local_fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def local_dir_and_file_with_same_name_prefix(
+        self, local_fs, local_join, local_path
+    ):
+        """
+        Scenario on local filesystem that is used to check cp/get/put on directory
+        and file with the same name prefixes.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._dir_and_file_with_same_name_prefix(
+            local_fs, local_join, local_path
+        )
+        yield source
+        local_fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def local_10_files_with_hashed_names(self, local_fs, local_join, local_path):
+        """
+        Scenario on local filesystem that is used to check cp/get/put files order
+        when source and destination are lists.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        source = self._10_files_with_hashed_names(local_fs, local_join, local_path)
+        yield source
+        local_fs.rm(source, recursive=True)
+
+    @pytest.fixture
+    def local_target(self, local_fs, local_join, local_path):
+        """
+        Return name of local directory that does not yet exist to copy into.
+
+        Cleans up at the end of each test in which it is used.
+        """
+        target = local_join(local_path, "target")
+        yield target
+        if local_fs.exists(target):
+            local_fs.rm(target, recursive=True)
+
+    def _glob_edge_cases_files(self, some_fs, some_join, some_path):
+        """
+        Scenario that is used for glob edge cases cp/get/put tests.
+        Creates the following directory and file structure:
+
+        📁 source
+        ├── 📄 file1
+        ├── 📄 file2
+        ├── 📁 subdir0
+        │   ├── 📄 subfile1
+        │   ├── 📄 subfile2
+        │   └── 📁 nesteddir
+        │       └── 📄 nestedfile
+        └── 📁 subdir1
+            ├── 📄 subfile1
+            ├── 📄 subfile2
+            └── 📁 nesteddir
+                └── 📄 nestedfile
+        """
+        source = some_join(some_path, "source")
+        some_fs.touch(some_join(source, "file1"))
+        some_fs.touch(some_join(source, "file2"))
+
+        for subdir_idx in range(2):
+            subdir = some_join(source, f"subdir{subdir_idx}")
+            nesteddir = some_join(subdir, "nesteddir")
+            some_fs.makedirs(nesteddir)
+            some_fs.touch(some_join(subdir, "subfile1"))
+            some_fs.touch(some_join(subdir, "subfile2"))
+            some_fs.touch(some_join(nesteddir, "nestedfile"))
+
+        return source
+
+    def _bulk_operations_scenario_0(self, some_fs, some_join, some_path):
+        """
+        Scenario that is used for many cp/get/put tests. Creates the following
+        directory and file structure:
+
+        📁 source
+        ├── 📄 file1
+        ├── 📄 file2
+        └── 📁 subdir
+            ├── 📄 subfile1
+            ├── 📄 subfile2
+            └── 📁 nesteddir
+                └── 📄 nestedfile
+        """
+        source = some_join(some_path, "source")
+        subdir = some_join(source, "subdir")
+        nesteddir = some_join(subdir, "nesteddir")
+        some_fs.makedirs(nesteddir)
+        some_fs.touch(some_join(source, "file1"))
+        some_fs.touch(some_join(source, "file2"))
+        some_fs.touch(some_join(subdir, "subfile1"))
+        some_fs.touch(some_join(subdir, "subfile2"))
+        some_fs.touch(some_join(nesteddir, "nestedfile"))
+        return source
+
+    def _dir_and_file_with_same_name_prefix(self, some_fs, some_join, some_path):
+        """
+        Scenario that is used to check cp/get/put on directory and file with
+        the same name prefixes. Creates the following directory and file structure:
+
+        📁 source
+        ├── 📄 subdir.txt
+        └── 📁 subdir
+            └── 📄 subfile.txt
+        """
+        source = some_join(some_path, "source")
+        subdir = some_join(source, "subdir")
+        file = some_join(source, "subdir.txt")
+        subfile = some_join(subdir, "subfile.txt")
+        some_fs.makedirs(subdir)
+        some_fs.touch(file)
+        some_fs.touch(subfile)
+        return source
+
+    def _10_files_with_hashed_names(self, some_fs, some_join, some_path):
+        """
+        Scenario that is used to check cp/get/put files order when source and
+        destination are lists. Creates the following directory and file structure:
+
+        📁 source
+        └── 📄 {hashed([0-9])}.txt
+        """
+        source = some_join(some_path, "source")
+        for i in range(10):
+            hashed_i = md5(str(i).encode("utf-8")).hexdigest()
+            path = some_join(source, f"{hashed_i}.txt")
+            some_fs.pipe(path=path, value=f"{i}".encode())
+        return source
+
+
+class AbstractFixtures(BaseAbstractFixtures):
+    """
+    Abstract base class containing fixtures that may be overridden in derived
+    filesystem-specific classes to run the abstract tests on such filesystems.
+
+    For any particular filesystem some of these fixtures must be overridden,
+    such as ``fs`` and ``fs_path``, and others may be overridden if the
+    default functions here are not appropriate, such as ``fs_join``.
+    """
+
+    @pytest.fixture
+    def fs(self):
+        raise NotImplementedError("This function must be overridden in derived classes")
+
+    @pytest.fixture
+    def fs_join(self):
+        """
+        Return a function that joins its arguments together into a path.
+
+        Most fsspec implementations join paths in a platform-dependent way,
+        but some will override this to always use a forward slash.
+        """
+        return os.path.join
+
+    @pytest.fixture
+    def fs_path(self):
+        raise NotImplementedError("This function must be overridden in derived classes")
+
+    @pytest.fixture(scope="class")
+    def local_fs(self):
+        # Maybe need an option for auto_mkdir=False? This is only relevant
+        # for certain implementations.
+        return LocalFileSystem(auto_mkdir=True)
+
+    @pytest.fixture
+    def local_join(self):
+        """
+        Return a function that joins its arguments together into a path, on
+        the local filesystem.
+        """
+        return os.path.join
+
+    @pytest.fixture
+    def local_path(self, tmpdir):
+        return tmpdir
+
+    @pytest.fixture
+    def supports_empty_directories(self):
+        """
+        Return whether this implementation supports empty directories.
+        """
+        return True
+
+    @pytest.fixture
+    def fs_sanitize_path(self):
+        return lambda x: x
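
A concrete suite plugs in by overriding the ``fs`` and ``fs_path`` fixtures and mixing in the test classes. A sketch, using MemoryFileSystem purely as an illustration::

    import pytest

    from fsspec.implementations.memory import MemoryFileSystem
    from fsspec.tests.abstract import AbstractCopyTests, AbstractFixtures


    class MemoryFixtures(AbstractFixtures):
        @pytest.fixture
        def fs(self):
            m = MemoryFileSystem()
            m.store.clear()  # isolate each test from earlier state
            return m

        @pytest.fixture
        def fs_path(self):
            return ""

        @pytest.fixture
        def fs_join(self):
            # Memory paths always use forward slashes, regardless of platform.
            return lambda *args: "/".join(args)


    class TestMemoryCopy(AbstractCopyTests, MemoryFixtures):
        pass
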
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/common.py ADDED
@@ -0,0 +1,175 @@
+GLOB_EDGE_CASES_TESTS = {
+    "argnames": ("path", "recursive", "maxdepth", "expected"),
+    "argvalues": [
+        ("fil?1", False, None, ["file1"]),
+        ("fil?1", True, None, ["file1"]),
+        ("file[1-2]", False, None, ["file1", "file2"]),
+        ("file[1-2]", True, None, ["file1", "file2"]),
+        ("*", False, None, ["file1", "file2"]),
+        (
+            "*",
+            True,
+            None,
+            [
+                "file1",
+                "file2",
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir0/nesteddir/nestedfile",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        ("*", True, 1, ["file1", "file2"]),
+        (
+            "*",
+            True,
+            2,
+            [
+                "file1",
+                "file2",
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+            ],
+        ),
+        ("*1", False, None, ["file1"]),
+        (
+            "*1",
+            True,
+            None,
+            [
+                "file1",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        ("*1", True, 2, ["file1", "subdir1/subfile1", "subdir1/subfile2"]),
+        (
+            "**",
+            False,
+            None,
+            [
+                "file1",
+                "file2",
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir0/nesteddir/nestedfile",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        (
+            "**",
+            True,
+            None,
+            [
+                "file1",
+                "file2",
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir0/nesteddir/nestedfile",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        ("**", True, 1, ["file1", "file2"]),
+        (
+            "**",
+            True,
+            2,
+            [
+                "file1",
+                "file2",
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir0/nesteddir/nestedfile",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        (
+            "**",
+            False,
+            2,
+            [
+                "file1",
+                "file2",
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+            ],
+        ),
+        ("**/*1", False, None, ["file1", "subdir0/subfile1", "subdir1/subfile1"]),
+        (
+            "**/*1",
+            True,
+            None,
+            [
+                "file1",
+                "subdir0/subfile1",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        ("**/*1", True, 1, ["file1"]),
+        (
+            "**/*1",
+            True,
+            2,
+            ["file1", "subdir0/subfile1", "subdir1/subfile1", "subdir1/subfile2"],
+        ),
+        ("**/*1", False, 2, ["file1", "subdir0/subfile1", "subdir1/subfile1"]),
+        ("**/subdir0", False, None, []),
+        ("**/subdir0", True, None, ["subfile1", "subfile2", "nesteddir/nestedfile"]),
+        ("**/subdir0/nested*", False, 2, []),
+        ("**/subdir0/nested*", True, 2, ["nestedfile"]),
+        ("subdir[1-2]", False, None, []),
+        ("subdir[1-2]", True, None, ["subfile1", "subfile2", "nesteddir/nestedfile"]),
+        ("subdir[1-2]", True, 2, ["subfile1", "subfile2"]),
+        ("subdir[0-1]", False, None, []),
+        (
+            "subdir[0-1]",
+            True,
+            None,
+            [
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir0/nesteddir/nestedfile",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+                "subdir1/nesteddir/nestedfile",
+            ],
+        ),
+        (
+            "subdir[0-1]/*fil[e]*",
+            False,
+            None,
+            [
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+            ],
+        ),
+        (
+            "subdir[0-1]/*fil[e]*",
+            True,
+            None,
+            [
+                "subdir0/subfile1",
+                "subdir0/subfile2",
+                "subdir1/subfile1",
+                "subdir1/subfile2",
+            ],
+        ),
+    ],
+}
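
The dict's keys mirror the signature of ``pytest.mark.parametrize``, so a test module can unpack it directly. A sketch (the assertion body is illustrative; a real test would exercise cp/get/put against the glob edge-cases scenario)::

    import pytest

    from fsspec.tests.abstract.common import GLOB_EDGE_CASES_TESTS


    @pytest.mark.parametrize(**GLOB_EDGE_CASES_TESTS)
    def test_glob_edge_cases(path, recursive, maxdepth, expected):
        # Each case pairs a glob pattern and options with the relative paths
        # that a copy of "source/<path>" is expected to produce.
        assert isinstance(expected, list)
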
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/copy.py ADDED
@@ -0,0 +1,557 @@
1
+ from hashlib import md5
2
+ from itertools import product
3
+
4
+ import pytest
5
+
6
+ from fsspec.tests.abstract.common import GLOB_EDGE_CASES_TESTS
7
+
8
+
9
+ class AbstractCopyTests:
10
+ def test_copy_file_to_existing_directory(
11
+ self,
12
+ fs,
13
+ fs_join,
14
+ fs_bulk_operations_scenario_0,
15
+ fs_target,
16
+ supports_empty_directories,
17
+ ):
18
+ # Copy scenario 1a
19
+ source = fs_bulk_operations_scenario_0
20
+
21
+ target = fs_target
22
+ fs.mkdir(target)
23
+ if not supports_empty_directories:
24
+ # Force target directory to exist by adding a dummy file
25
+ fs.touch(fs_join(target, "dummy"))
26
+ assert fs.isdir(target)
27
+
28
+ target_file2 = fs_join(target, "file2")
29
+ target_subfile1 = fs_join(target, "subfile1")
30
+
31
+ # Copy from source directory
32
+ fs.cp(fs_join(source, "file2"), target)
33
+ assert fs.isfile(target_file2)
34
+
35
+ # Copy from sub directory
36
+ fs.cp(fs_join(source, "subdir", "subfile1"), target)
37
+ assert fs.isfile(target_subfile1)
38
+
39
+ # Remove copied files
40
+ fs.rm([target_file2, target_subfile1])
41
+ assert not fs.exists(target_file2)
42
+ assert not fs.exists(target_subfile1)
43
+
44
+ # Repeat with trailing slash on target
45
+ fs.cp(fs_join(source, "file2"), target + "/")
46
+ assert fs.isdir(target)
47
+ assert fs.isfile(target_file2)
48
+
49
+ fs.cp(fs_join(source, "subdir", "subfile1"), target + "/")
50
+ assert fs.isfile(target_subfile1)
51
+
52
+ def test_copy_file_to_new_directory(
53
+ self, fs, fs_join, fs_bulk_operations_scenario_0, fs_target
54
+ ):
55
+ # Copy scenario 1b
56
+ source = fs_bulk_operations_scenario_0
57
+
58
+ target = fs_target
59
+ fs.mkdir(target)
60
+
61
+ fs.cp(
62
+ fs_join(source, "subdir", "subfile1"), fs_join(target, "newdir/")
63
+ ) # Note trailing slash
64
+ assert fs.isdir(target)
65
+ assert fs.isdir(fs_join(target, "newdir"))
66
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
67
+
68
+ def test_copy_file_to_file_in_existing_directory(
69
+ self,
70
+ fs,
71
+ fs_join,
72
+ fs_bulk_operations_scenario_0,
73
+ fs_target,
74
+ supports_empty_directories,
75
+ ):
76
+ # Copy scenario 1c
77
+ source = fs_bulk_operations_scenario_0
78
+
79
+ target = fs_target
80
+ fs.mkdir(target)
81
+ if not supports_empty_directories:
82
+ # Force target directory to exist by adding a dummy file
83
+ fs.touch(fs_join(target, "dummy"))
84
+ assert fs.isdir(target)
85
+
86
+ fs.cp(fs_join(source, "subdir", "subfile1"), fs_join(target, "newfile"))
87
+ assert fs.isfile(fs_join(target, "newfile"))
88
+
89
+ def test_copy_file_to_file_in_new_directory(
90
+ self, fs, fs_join, fs_bulk_operations_scenario_0, fs_target
91
+ ):
92
+ # Copy scenario 1d
93
+ source = fs_bulk_operations_scenario_0
94
+
95
+ target = fs_target
96
+ fs.mkdir(target)
97
+
98
+ fs.cp(
99
+ fs_join(source, "subdir", "subfile1"), fs_join(target, "newdir", "newfile")
100
+ )
101
+ assert fs.isdir(fs_join(target, "newdir"))
102
+ assert fs.isfile(fs_join(target, "newdir", "newfile"))
103
+
104
+ def test_copy_directory_to_existing_directory(
105
+ self,
106
+ fs,
107
+ fs_join,
108
+ fs_bulk_operations_scenario_0,
109
+ fs_target,
110
+ supports_empty_directories,
111
+ ):
112
+ # Copy scenario 1e
113
+ source = fs_bulk_operations_scenario_0
114
+
115
+ target = fs_target
116
+ fs.mkdir(target)
117
+ if not supports_empty_directories:
118
+ # Force target directory to exist by adding a dummy file
119
+ dummy = fs_join(target, "dummy")
120
+ fs.touch(dummy)
121
+ assert fs.isdir(target)
122
+
123
+ for source_slash, target_slash in zip([False, True], [False, True]):
124
+ s = fs_join(source, "subdir")
125
+ if source_slash:
126
+ s += "/"
127
+ t = target + "/" if target_slash else target
128
+
129
+ # Without recursive does nothing
130
+ fs.cp(s, t)
131
+ assert fs.ls(target, detail=False) == (
132
+ [] if supports_empty_directories else [dummy]
133
+ )
134
+
135
+ # With recursive
136
+ fs.cp(s, t, recursive=True)
137
+ if source_slash:
138
+ assert fs.isfile(fs_join(target, "subfile1"))
139
+ assert fs.isfile(fs_join(target, "subfile2"))
140
+ assert fs.isdir(fs_join(target, "nesteddir"))
141
+ assert fs.isfile(fs_join(target, "nesteddir", "nestedfile"))
142
+ assert not fs.exists(fs_join(target, "subdir"))
143
+
144
+ fs.rm(
145
+ [
146
+ fs_join(target, "subfile1"),
147
+ fs_join(target, "subfile2"),
148
+ fs_join(target, "nesteddir"),
149
+ ],
150
+ recursive=True,
151
+ )
152
+ else:
153
+ assert fs.isdir(fs_join(target, "subdir"))
154
+ assert fs.isfile(fs_join(target, "subdir", "subfile1"))
155
+ assert fs.isfile(fs_join(target, "subdir", "subfile2"))
156
+ assert fs.isdir(fs_join(target, "subdir", "nesteddir"))
157
+ assert fs.isfile(fs_join(target, "subdir", "nesteddir", "nestedfile"))
158
+
159
+ fs.rm(fs_join(target, "subdir"), recursive=True)
160
+ assert fs.ls(target, detail=False) == (
161
+ [] if supports_empty_directories else [dummy]
162
+ )
163
+
164
+ # Limit recursive by maxdepth
165
+ fs.cp(s, t, recursive=True, maxdepth=1)
166
+ if source_slash:
167
+ assert fs.isfile(fs_join(target, "subfile1"))
168
+ assert fs.isfile(fs_join(target, "subfile2"))
169
+ assert not fs.exists(fs_join(target, "nesteddir"))
170
+ assert not fs.exists(fs_join(target, "subdir"))
171
+
172
+ fs.rm(
173
+ [
174
+ fs_join(target, "subfile1"),
175
+ fs_join(target, "subfile2"),
176
+ ],
177
+ recursive=True,
178
+ )
179
+ else:
180
+ assert fs.isdir(fs_join(target, "subdir"))
181
+ assert fs.isfile(fs_join(target, "subdir", "subfile1"))
182
+ assert fs.isfile(fs_join(target, "subdir", "subfile2"))
183
+ assert not fs.exists(fs_join(target, "subdir", "nesteddir"))
184
+
185
+ fs.rm(fs_join(target, "subdir"), recursive=True)
186
+ assert fs.ls(target, detail=False) == (
187
+ [] if supports_empty_directories else [dummy]
188
+ )
189
+
190
+ def test_copy_directory_to_new_directory(
191
+ self,
192
+ fs,
193
+ fs_join,
194
+ fs_bulk_operations_scenario_0,
195
+ fs_target,
196
+ supports_empty_directories,
197
+ ):
198
+ # Copy scenario 1f
199
+ source = fs_bulk_operations_scenario_0
200
+
201
+ target = fs_target
202
+ fs.mkdir(target)
203
+
204
+ for source_slash, target_slash in zip([False, True], [False, True]):
205
+ s = fs_join(source, "subdir")
206
+ if source_slash:
207
+ s += "/"
208
+ t = fs_join(target, "newdir")
209
+ if target_slash:
210
+ t += "/"
211
+
212
+ # Without recursive does nothing
213
+ fs.cp(s, t)
214
+ if supports_empty_directories:
215
+ assert fs.ls(target) == []
216
+ else:
217
+ with pytest.raises(FileNotFoundError):
218
+ fs.ls(target)
219
+
220
+ # With recursive
221
+ fs.cp(s, t, recursive=True)
222
+ assert fs.isdir(fs_join(target, "newdir"))
223
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
224
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
225
+ assert fs.isdir(fs_join(target, "newdir", "nesteddir"))
226
+ assert fs.isfile(fs_join(target, "newdir", "nesteddir", "nestedfile"))
227
+ assert not fs.exists(fs_join(target, "subdir"))
228
+
229
+ fs.rm(fs_join(target, "newdir"), recursive=True)
230
+ assert not fs.exists(fs_join(target, "newdir"))
231
+
232
+ # Limit recursive by maxdepth
233
+ fs.cp(s, t, recursive=True, maxdepth=1)
234
+ assert fs.isdir(fs_join(target, "newdir"))
235
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
236
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
237
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir"))
238
+ assert not fs.exists(fs_join(target, "subdir"))
239
+
240
+ fs.rm(fs_join(target, "newdir"), recursive=True)
241
+ assert not fs.exists(fs_join(target, "newdir"))
242
+
243
+ def test_copy_glob_to_existing_directory(
244
+ self,
245
+ fs,
246
+ fs_join,
247
+ fs_bulk_operations_scenario_0,
248
+ fs_target,
249
+ supports_empty_directories,
250
+ ):
251
+ # Copy scenario 1g
252
+ source = fs_bulk_operations_scenario_0
253
+
254
+ target = fs_target
255
+ fs.mkdir(target)
256
+ if not supports_empty_directories:
257
+ # Force target directory to exist by adding a dummy file
258
+ dummy = fs_join(target, "dummy")
259
+ fs.touch(dummy)
260
+ assert fs.isdir(target)
261
+
262
+ for target_slash in [False, True]:
263
+ t = target + "/" if target_slash else target
264
+
265
+ # Without recursive
266
+ fs.cp(fs_join(source, "subdir", "*"), t)
267
+ assert fs.isfile(fs_join(target, "subfile1"))
268
+ assert fs.isfile(fs_join(target, "subfile2"))
269
+ assert not fs.isdir(fs_join(target, "nesteddir"))
270
+ assert not fs.exists(fs_join(target, "nesteddir", "nestedfile"))
271
+ assert not fs.exists(fs_join(target, "subdir"))
272
+
273
+ fs.rm(
274
+ [
275
+ fs_join(target, "subfile1"),
276
+ fs_join(target, "subfile2"),
277
+ ],
278
+ recursive=True,
279
+ )
280
+ assert fs.ls(target, detail=False) == (
281
+ [] if supports_empty_directories else [dummy]
282
+ )
283
+
284
+ # With recursive
285
+ for glob, recursive in zip(["*", "**"], [True, False]):
286
+ fs.cp(fs_join(source, "subdir", glob), t, recursive=recursive)
287
+ assert fs.isfile(fs_join(target, "subfile1"))
288
+ assert fs.isfile(fs_join(target, "subfile2"))
289
+ assert fs.isdir(fs_join(target, "nesteddir"))
290
+ assert fs.isfile(fs_join(target, "nesteddir", "nestedfile"))
291
+ assert not fs.exists(fs_join(target, "subdir"))
292
+
293
+ fs.rm(
294
+ [
295
+ fs_join(target, "subfile1"),
296
+ fs_join(target, "subfile2"),
297
+ fs_join(target, "nesteddir"),
298
+ ],
299
+ recursive=True,
300
+ )
301
+ assert fs.ls(target, detail=False) == (
302
+ [] if supports_empty_directories else [dummy]
303
+ )
304
+
305
+ # Limit recursive by maxdepth
306
+ fs.cp(
307
+ fs_join(source, "subdir", glob), t, recursive=recursive, maxdepth=1
308
+ )
309
+ assert fs.isfile(fs_join(target, "subfile1"))
310
+ assert fs.isfile(fs_join(target, "subfile2"))
311
+ assert not fs.exists(fs_join(target, "nesteddir"))
312
+ assert not fs.exists(fs_join(target, "subdir"))
313
+
314
+ fs.rm(
315
+ [
316
+ fs_join(target, "subfile1"),
317
+ fs_join(target, "subfile2"),
318
+ ],
319
+ recursive=True,
320
+ )
321
+ assert fs.ls(target, detail=False) == (
322
+ [] if supports_empty_directories else [dummy]
323
+ )
324
+
325
+ def test_copy_glob_to_new_directory(
326
+ self, fs, fs_join, fs_bulk_operations_scenario_0, fs_target
327
+ ):
328
+ # Copy scenario 1h
329
+ source = fs_bulk_operations_scenario_0
330
+
331
+ target = fs_target
332
+ fs.mkdir(target)
333
+
334
+ for target_slash in [False, True]:
335
+ t = fs_join(target, "newdir")
336
+ if target_slash:
337
+ t += "/"
338
+
339
+ # Without recursive
340
+ fs.cp(fs_join(source, "subdir", "*"), t)
341
+ assert fs.isdir(fs_join(target, "newdir"))
342
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
343
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
344
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir"))
345
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir", "nestedfile"))
346
+ assert not fs.exists(fs_join(target, "subdir"))
347
+ assert not fs.exists(fs_join(target, "newdir", "subdir"))
348
+
349
+ fs.rm(fs_join(target, "newdir"), recursive=True)
350
+ assert not fs.exists(fs_join(target, "newdir"))
351
+
352
+ # With recursive
353
+ for glob, recursive in zip(["*", "**"], [True, False]):
354
+ fs.cp(fs_join(source, "subdir", glob), t, recursive=recursive)
355
+ assert fs.isdir(fs_join(target, "newdir"))
356
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
357
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
358
+ assert fs.isdir(fs_join(target, "newdir", "nesteddir"))
359
+ assert fs.isfile(fs_join(target, "newdir", "nesteddir", "nestedfile"))
360
+ assert not fs.exists(fs_join(target, "subdir"))
361
+ assert not fs.exists(fs_join(target, "newdir", "subdir"))
362
+
363
+ fs.rm(fs_join(target, "newdir"), recursive=True)
364
+ assert not fs.exists(fs_join(target, "newdir"))
365
+
366
+ # Limit recursive by maxdepth
367
+ fs.cp(
368
+ fs_join(source, "subdir", glob), t, recursive=recursive, maxdepth=1
369
+ )
370
+ assert fs.isdir(fs_join(target, "newdir"))
371
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
372
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
373
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir"))
374
+ assert not fs.exists(fs_join(target, "subdir"))
375
+ assert not fs.exists(fs_join(target, "newdir", "subdir"))
376
+
377
+ fs.rm(fs_join(target, "newdir"), recursive=True)
378
+ assert not fs.exists(fs_join(target, "newdir"))
379
+
380
+ @pytest.mark.parametrize(
381
+ GLOB_EDGE_CASES_TESTS["argnames"],
382
+ GLOB_EDGE_CASES_TESTS["argvalues"],
383
+ )
384
+ def test_copy_glob_edge_cases(
385
+ self,
386
+ path,
387
+ recursive,
388
+ maxdepth,
389
+ expected,
390
+ fs,
391
+ fs_join,
392
+ fs_glob_edge_cases_files,
393
+ fs_target,
394
+ fs_sanitize_path,
395
+ ):
396
+ # Copy scenario 1g
397
+ source = fs_glob_edge_cases_files
398
+
399
+ target = fs_target
400
+
401
+ for new_dir, target_slash in product([True, False], [True, False]):
402
+ fs.mkdir(target)
403
+
404
+ t = fs_join(target, "newdir") if new_dir else target
405
+ t = t + "/" if target_slash else t
406
+
407
+ fs.copy(fs_join(source, path), t, recursive=recursive, maxdepth=maxdepth)
408
+
409
+ output = fs.find(target)
410
+ if new_dir:
411
+ prefixed_expected = [
412
+ fs_sanitize_path(fs_join(target, "newdir", p)) for p in expected
413
+ ]
414
+ else:
415
+ prefixed_expected = [
416
+ fs_sanitize_path(fs_join(target, p)) for p in expected
417
+ ]
418
+ assert sorted(output) == sorted(prefixed_expected)
419
+
420
+ try:
421
+ fs.rm(target, recursive=True)
422
+ except FileNotFoundError:
423
+ pass
424
+
425
+ def test_copy_list_of_files_to_existing_directory(
426
+ self,
427
+ fs,
428
+ fs_join,
429
+ fs_bulk_operations_scenario_0,
430
+ fs_target,
431
+ supports_empty_directories,
432
+ ):
433
+ # Copy scenario 2a
434
+ source = fs_bulk_operations_scenario_0
435
+
436
+ target = fs_target
437
+ fs.mkdir(target)
438
+ if not supports_empty_directories:
439
+ # Force target directory to exist by adding a dummy file
440
+ dummy = fs_join(target, "dummy")
441
+ fs.touch(dummy)
442
+ assert fs.isdir(target)
443
+
444
+ source_files = [
445
+ fs_join(source, "file1"),
446
+ fs_join(source, "file2"),
447
+ fs_join(source, "subdir", "subfile1"),
448
+ ]
449
+
450
+ for target_slash in [False, True]:
451
+ t = target + "/" if target_slash else target
452
+
453
+ fs.cp(source_files, t)
454
+ assert fs.isfile(fs_join(target, "file1"))
455
+ assert fs.isfile(fs_join(target, "file2"))
456
+ assert fs.isfile(fs_join(target, "subfile1"))
457
+
458
+ fs.rm(
459
+ [
460
+ fs_join(target, "file1"),
461
+ fs_join(target, "file2"),
462
+ fs_join(target, "subfile1"),
463
+ ],
464
+ recursive=True,
465
+ )
466
+ assert fs.ls(target, detail=False) == (
467
+ [] if supports_empty_directories else [dummy]
468
+ )
469
+
470
+ def test_copy_list_of_files_to_new_directory(
471
+ self, fs, fs_join, fs_bulk_operations_scenario_0, fs_target
472
+ ):
473
+ # Copy scenario 2b
474
+ source = fs_bulk_operations_scenario_0
475
+
476
+ target = fs_target
477
+ fs.mkdir(target)
478
+
479
+ source_files = [
480
+ fs_join(source, "file1"),
481
+ fs_join(source, "file2"),
482
+ fs_join(source, "subdir", "subfile1"),
483
+ ]
484
+
485
+ fs.cp(source_files, fs_join(target, "newdir") + "/") # Note trailing slash
486
+ assert fs.isdir(fs_join(target, "newdir"))
487
+ assert fs.isfile(fs_join(target, "newdir", "file1"))
488
+ assert fs.isfile(fs_join(target, "newdir", "file2"))
489
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
490
+
491
+ def test_copy_two_files_new_directory(
492
+ self, fs, fs_join, fs_bulk_operations_scenario_0, fs_target
493
+ ):
494
+ # This is a duplicate of test_copy_list_of_files_to_new_directory and
495
+ # can eventually be removed.
496
+ source = fs_bulk_operations_scenario_0
497
+
498
+ target = fs_target
499
+ assert not fs.exists(target)
500
+ fs.cp([fs_join(source, "file1"), fs_join(source, "file2")], target)
501
+
502
+ assert fs.isdir(target)
503
+ assert fs.isfile(fs_join(target, "file1"))
504
+ assert fs.isfile(fs_join(target, "file2"))
505
+
506
+ def test_copy_directory_without_files_with_same_name_prefix(
507
+ self,
508
+ fs,
509
+ fs_join,
510
+ fs_target,
511
+ fs_dir_and_file_with_same_name_prefix,
512
+ supports_empty_directories,
513
+ ):
514
+ # Create the test dirs
515
+ source = fs_dir_and_file_with_same_name_prefix
516
+ target = fs_target
517
+
518
+ # Test without glob
519
+ fs.cp(fs_join(source, "subdir"), target, recursive=True)
520
+
521
+ assert fs.isfile(fs_join(target, "subfile.txt"))
522
+ assert not fs.isfile(fs_join(target, "subdir.txt"))
523
+
524
+ fs.rm([fs_join(target, "subfile.txt")])
525
+ if supports_empty_directories:
526
+ assert fs.ls(target) == []
527
+ else:
528
+ assert not fs.exists(target)
529
+
530
+ # Test with glob
531
+ fs.cp(fs_join(source, "subdir*"), target, recursive=True)
532
+
533
+ assert fs.isdir(fs_join(target, "subdir"))
534
+ assert fs.isfile(fs_join(target, "subdir", "subfile.txt"))
535
+ assert fs.isfile(fs_join(target, "subdir.txt"))
536
+
537
+ def test_copy_with_source_and_destination_as_list(
538
+ self, fs, fs_target, fs_join, fs_10_files_with_hashed_names
539
+ ):
540
+ # Create the test dir
541
+ source = fs_10_files_with_hashed_names
542
+ target = fs_target
543
+
544
+ # Create list of files for source and destination
545
+ source_files = []
546
+ destination_files = []
547
+ for i in range(10):
548
+ hashed_i = md5(str(i).encode("utf-8")).hexdigest()
549
+ source_files.append(fs_join(source, f"{hashed_i}.txt"))
550
+ destination_files.append(fs_join(target, f"{hashed_i}.txt"))
551
+
552
+ # Copy and assert order was kept
553
+ fs.copy(path1=source_files, path2=destination_files)
554
+
555
+ for i in range(10):
556
+ file_content = fs.cat(destination_files[i]).decode("utf-8")
557
+ assert file_content == str(i)
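AbstractCopyTests is designed to be mixed into backend-specific classes that supply the fixtures it references (fs, fs_join, fs_target, supports_empty_directories, and the scenario fixtures). A minimal sketch using the in-memory backend, assuming the fixture surface defined in fsspec/tests/abstract/__init__.py; the adapter class and fixture bodies are illustrative, not the shipped implementation:

import pytest

import fsspec
from fsspec.tests.abstract import AbstractFixtures
from fsspec.tests.abstract.copy import AbstractCopyTests

class MemoryFixtures(AbstractFixtures):  # hypothetical adapter class
    @pytest.fixture
    def fs(self):
        m = fsspec.filesystem("memory")
        m.store.clear()  # isolate each test in an empty store
        return m

    @pytest.fixture
    def fs_path(self):
        return ""

class TestMemoryCopy(MemoryFixtures, AbstractCopyTests):
    pass  # inherits every test_copy_* method defined above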
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/get.py ADDED
@@ -0,0 +1,587 @@
1
+ from hashlib import md5
2
+ from itertools import product
3
+
4
+ import pytest
5
+
6
+ from fsspec.implementations.local import make_path_posix
7
+ from fsspec.tests.abstract.common import GLOB_EDGE_CASES_TESTS
8
+
9
+
10
+ class AbstractGetTests:
11
+ def test_get_file_to_existing_directory(
12
+ self,
13
+ fs,
14
+ fs_join,
15
+ fs_bulk_operations_scenario_0,
16
+ local_fs,
17
+ local_join,
18
+ local_target,
19
+ ):
20
+ # Copy scenario 1a
21
+ source = fs_bulk_operations_scenario_0
22
+
23
+ target = local_target
24
+ local_fs.mkdir(target)
25
+ assert local_fs.isdir(target)
26
+
27
+ target_file2 = local_join(target, "file2")
28
+ target_subfile1 = local_join(target, "subfile1")
29
+
30
+ # Copy from source directory
31
+ fs.get(fs_join(source, "file2"), target)
32
+ assert local_fs.isfile(target_file2)
33
+
34
+ # Copy from sub directory
35
+ fs.get(fs_join(source, "subdir", "subfile1"), target)
36
+ assert local_fs.isfile(target_subfile1)
37
+
38
+ # Remove copied files
39
+ local_fs.rm([target_file2, target_subfile1])
40
+ assert not local_fs.exists(target_file2)
41
+ assert not local_fs.exists(target_subfile1)
42
+
43
+ # Repeat with trailing slash on target
44
+ fs.get(fs_join(source, "file2"), target + "/")
45
+ assert local_fs.isdir(target)
46
+ assert local_fs.isfile(target_file2)
47
+
48
+ fs.get(fs_join(source, "subdir", "subfile1"), target + "/")
49
+ assert local_fs.isfile(target_subfile1)
50
+
51
+ def test_get_file_to_new_directory(
52
+ self,
53
+ fs,
54
+ fs_join,
55
+ fs_bulk_operations_scenario_0,
56
+ local_fs,
57
+ local_join,
58
+ local_target,
59
+ ):
60
+ # Copy scenario 1b
61
+ source = fs_bulk_operations_scenario_0
62
+
63
+ target = local_target
64
+ local_fs.mkdir(target)
65
+
66
+ fs.get(
67
+ fs_join(source, "subdir", "subfile1"), local_join(target, "newdir/")
68
+ ) # Note trailing slash
69
+
70
+ assert local_fs.isdir(target)
71
+ assert local_fs.isdir(local_join(target, "newdir"))
72
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
73
+
74
+ def test_get_file_to_file_in_existing_directory(
75
+ self,
76
+ fs,
77
+ fs_join,
78
+ fs_bulk_operations_scenario_0,
79
+ local_fs,
80
+ local_join,
81
+ local_target,
82
+ ):
83
+ # Copy scenario 1c
84
+ source = fs_bulk_operations_scenario_0
85
+
86
+ target = local_target
87
+ local_fs.mkdir(target)
88
+
89
+ fs.get(fs_join(source, "subdir", "subfile1"), local_join(target, "newfile"))
90
+ assert local_fs.isfile(local_join(target, "newfile"))
91
+
92
+ def test_get_file_to_file_in_new_directory(
93
+ self,
94
+ fs,
95
+ fs_join,
96
+ fs_bulk_operations_scenario_0,
97
+ local_fs,
98
+ local_join,
99
+ local_target,
100
+ ):
101
+ # Copy scenario 1d
102
+ source = fs_bulk_operations_scenario_0
103
+
104
+ target = local_target
105
+ local_fs.mkdir(target)
106
+
107
+ fs.get(
108
+ fs_join(source, "subdir", "subfile1"),
109
+ local_join(target, "newdir", "newfile"),
110
+ )
111
+ assert local_fs.isdir(local_join(target, "newdir"))
112
+ assert local_fs.isfile(local_join(target, "newdir", "newfile"))
113
+
114
+ def test_get_directory_to_existing_directory(
115
+ self,
116
+ fs,
117
+ fs_join,
118
+ fs_bulk_operations_scenario_0,
119
+ local_fs,
120
+ local_join,
121
+ local_target,
122
+ ):
123
+ # Copy scenario 1e
124
+ source = fs_bulk_operations_scenario_0
125
+
126
+ target = local_target
127
+ local_fs.mkdir(target)
128
+ assert local_fs.isdir(target)
129
+
130
+ for source_slash, target_slash in zip([False, True], [False, True]):
131
+ s = fs_join(source, "subdir")
132
+ if source_slash:
133
+ s += "/"
134
+ t = target + "/" if target_slash else target
135
+
136
+ # Without recursive does nothing
137
+ fs.get(s, t)
138
+ assert local_fs.ls(target) == []
139
+
140
+ # With recursive
141
+ fs.get(s, t, recursive=True)
142
+ if source_slash:
143
+ assert local_fs.isfile(local_join(target, "subfile1"))
144
+ assert local_fs.isfile(local_join(target, "subfile2"))
145
+ assert local_fs.isdir(local_join(target, "nesteddir"))
146
+ assert local_fs.isfile(local_join(target, "nesteddir", "nestedfile"))
147
+ assert not local_fs.exists(local_join(target, "subdir"))
148
+
149
+ local_fs.rm(
150
+ [
151
+ local_join(target, "subfile1"),
152
+ local_join(target, "subfile2"),
153
+ local_join(target, "nesteddir"),
154
+ ],
155
+ recursive=True,
156
+ )
157
+ else:
158
+ assert local_fs.isdir(local_join(target, "subdir"))
159
+ assert local_fs.isfile(local_join(target, "subdir", "subfile1"))
160
+ assert local_fs.isfile(local_join(target, "subdir", "subfile2"))
161
+ assert local_fs.isdir(local_join(target, "subdir", "nesteddir"))
162
+ assert local_fs.isfile(
163
+ local_join(target, "subdir", "nesteddir", "nestedfile")
164
+ )
165
+
166
+ local_fs.rm(local_join(target, "subdir"), recursive=True)
167
+ assert local_fs.ls(target) == []
168
+
169
+ # Limit recursive by maxdepth
170
+ fs.get(s, t, recursive=True, maxdepth=1)
171
+ if source_slash:
172
+ assert local_fs.isfile(local_join(target, "subfile1"))
173
+ assert local_fs.isfile(local_join(target, "subfile2"))
174
+ assert not local_fs.exists(local_join(target, "nesteddir"))
175
+ assert not local_fs.exists(local_join(target, "subdir"))
176
+
177
+ local_fs.rm(
178
+ [
179
+ local_join(target, "subfile1"),
180
+ local_join(target, "subfile2"),
181
+ ],
182
+ recursive=True,
183
+ )
184
+ else:
185
+ assert local_fs.isdir(local_join(target, "subdir"))
186
+ assert local_fs.isfile(local_join(target, "subdir", "subfile1"))
187
+ assert local_fs.isfile(local_join(target, "subdir", "subfile2"))
188
+ assert not local_fs.exists(local_join(target, "subdir", "nesteddir"))
189
+
190
+ local_fs.rm(local_join(target, "subdir"), recursive=True)
191
+ assert local_fs.ls(target) == []
192
+
193
+ def test_get_directory_to_new_directory(
194
+ self,
195
+ fs,
196
+ fs_join,
197
+ fs_bulk_operations_scenario_0,
198
+ local_fs,
199
+ local_join,
200
+ local_target,
201
+ ):
202
+ # Copy scenario 1f
203
+ source = fs_bulk_operations_scenario_0
204
+
205
+ target = local_target
206
+ local_fs.mkdir(target)
207
+
208
+ for source_slash, target_slash in zip([False, True], [False, True]):
209
+ s = fs_join(source, "subdir")
210
+ if source_slash:
211
+ s += "/"
212
+ t = local_join(target, "newdir")
213
+ if target_slash:
214
+ t += "/"
215
+
216
+ # Without recursive does nothing
217
+ fs.get(s, t)
218
+ assert local_fs.ls(target) == []
219
+
220
+ # With recursive
221
+ fs.get(s, t, recursive=True)
222
+ assert local_fs.isdir(local_join(target, "newdir"))
223
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
224
+ assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
225
+ assert local_fs.isdir(local_join(target, "newdir", "nesteddir"))
226
+ assert local_fs.isfile(
227
+ local_join(target, "newdir", "nesteddir", "nestedfile")
228
+ )
229
+ assert not local_fs.exists(local_join(target, "subdir"))
230
+
231
+ local_fs.rm(local_join(target, "newdir"), recursive=True)
232
+ assert local_fs.ls(target) == []
233
+
234
+ # Limit recursive by maxdepth
235
+ fs.get(s, t, recursive=True, maxdepth=1)
236
+ assert local_fs.isdir(local_join(target, "newdir"))
237
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
238
+ assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
239
+ assert not local_fs.exists(local_join(target, "newdir", "nesteddir"))
240
+ assert not local_fs.exists(local_join(target, "subdir"))
241
+
242
+ local_fs.rm(local_join(target, "newdir"), recursive=True)
243
+ assert not local_fs.exists(local_join(target, "newdir"))
244
+
245
+ def test_get_glob_to_existing_directory(
246
+ self,
247
+ fs,
248
+ fs_join,
249
+ fs_bulk_operations_scenario_0,
250
+ local_fs,
251
+ local_join,
252
+ local_target,
253
+ ):
254
+ # Copy scenario 1g
255
+ source = fs_bulk_operations_scenario_0
256
+
257
+ target = local_target
258
+ local_fs.mkdir(target)
259
+
260
+ for target_slash in [False, True]:
261
+ t = target + "/" if target_slash else target
262
+
263
+ # Without recursive
264
+ fs.get(fs_join(source, "subdir", "*"), t)
265
+ assert local_fs.isfile(local_join(target, "subfile1"))
266
+ assert local_fs.isfile(local_join(target, "subfile2"))
267
+ assert not local_fs.isdir(local_join(target, "nesteddir"))
268
+ assert not local_fs.exists(local_join(target, "nesteddir", "nestedfile"))
269
+ assert not local_fs.exists(local_join(target, "subdir"))
270
+
271
+ local_fs.rm(
272
+ [
273
+ local_join(target, "subfile1"),
274
+ local_join(target, "subfile2"),
275
+ ],
276
+ recursive=True,
277
+ )
278
+ assert local_fs.ls(target) == []
279
+
280
+ # With recursive
281
+ for glob, recursive in zip(["*", "**"], [True, False]):
282
+ fs.get(fs_join(source, "subdir", glob), t, recursive=recursive)
283
+ assert local_fs.isfile(local_join(target, "subfile1"))
284
+ assert local_fs.isfile(local_join(target, "subfile2"))
285
+ assert local_fs.isdir(local_join(target, "nesteddir"))
286
+ assert local_fs.isfile(local_join(target, "nesteddir", "nestedfile"))
287
+ assert not local_fs.exists(local_join(target, "subdir"))
288
+
289
+ local_fs.rm(
290
+ [
291
+ local_join(target, "subfile1"),
292
+ local_join(target, "subfile2"),
293
+ local_join(target, "nesteddir"),
294
+ ],
295
+ recursive=True,
296
+ )
297
+ assert local_fs.ls(target) == []
298
+
299
+ # Limit recursive by maxdepth
300
+ fs.get(
301
+ fs_join(source, "subdir", glob), t, recursive=recursive, maxdepth=1
302
+ )
303
+ assert local_fs.isfile(local_join(target, "subfile1"))
304
+ assert local_fs.isfile(local_join(target, "subfile2"))
305
+ assert not local_fs.exists(local_join(target, "nesteddir"))
306
+ assert not local_fs.exists(local_join(target, "subdir"))
307
+
308
+ local_fs.rm(
309
+ [
310
+ local_join(target, "subfile1"),
311
+ local_join(target, "subfile2"),
312
+ ],
313
+ recursive=True,
314
+ )
315
+ assert local_fs.ls(target) == []
316
+
317
+ def test_get_glob_to_new_directory(
318
+ self,
319
+ fs,
320
+ fs_join,
321
+ fs_bulk_operations_scenario_0,
322
+ local_fs,
323
+ local_join,
324
+ local_target,
325
+ ):
326
+ # Copy scenario 1h
327
+ source = fs_bulk_operations_scenario_0
328
+
329
+ target = local_target
330
+ local_fs.mkdir(target)
331
+
332
+ for target_slash in [False, True]:
333
+ t = fs_join(target, "newdir")
334
+ if target_slash:
335
+ t += "/"
336
+
337
+ # Without recursive
338
+ fs.get(fs_join(source, "subdir", "*"), t)
339
+ assert local_fs.isdir(local_join(target, "newdir"))
340
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
341
+ assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
342
+ assert not local_fs.exists(local_join(target, "newdir", "nesteddir"))
343
+ assert not local_fs.exists(
344
+ local_join(target, "newdir", "nesteddir", "nestedfile")
345
+ )
346
+ assert not local_fs.exists(local_join(target, "subdir"))
347
+ assert not local_fs.exists(local_join(target, "newdir", "subdir"))
348
+
349
+ local_fs.rm(local_join(target, "newdir"), recursive=True)
350
+ assert local_fs.ls(target) == []
351
+
352
+ # With recursive
353
+ for glob, recursive in zip(["*", "**"], [True, False]):
354
+ fs.get(fs_join(source, "subdir", glob), t, recursive=recursive)
355
+ assert local_fs.isdir(local_join(target, "newdir"))
356
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
357
+ assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
358
+ assert local_fs.isdir(local_join(target, "newdir", "nesteddir"))
359
+ assert local_fs.isfile(
360
+ local_join(target, "newdir", "nesteddir", "nestedfile")
361
+ )
362
+ assert not local_fs.exists(local_join(target, "subdir"))
363
+ assert not local_fs.exists(local_join(target, "newdir", "subdir"))
364
+
365
+ local_fs.rm(local_join(target, "newdir"), recursive=True)
366
+ assert not local_fs.exists(local_join(target, "newdir"))
367
+
368
+ # Limit recursive by maxdepth
369
+ fs.get(
370
+ fs_join(source, "subdir", glob), t, recursive=recursive, maxdepth=1
371
+ )
372
+ assert local_fs.isdir(local_join(target, "newdir"))
373
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
374
+ assert local_fs.isfile(local_join(target, "newdir", "subfile2"))
375
+ assert not local_fs.exists(local_join(target, "newdir", "nesteddir"))
376
+ assert not local_fs.exists(local_join(target, "subdir"))
377
+ assert not local_fs.exists(local_join(target, "newdir", "subdir"))
378
+
379
+ local_fs.rm(local_fs.ls(target, detail=False), recursive=True)
380
+ assert not local_fs.exists(local_join(target, "newdir"))
381
+
382
+ @pytest.mark.parametrize(
383
+ GLOB_EDGE_CASES_TESTS["argnames"],
384
+ GLOB_EDGE_CASES_TESTS["argvalues"],
385
+ )
386
+ def test_get_glob_edge_cases(
387
+ self,
388
+ path,
389
+ recursive,
390
+ maxdepth,
391
+ expected,
392
+ fs,
393
+ fs_join,
394
+ fs_glob_edge_cases_files,
395
+ local_fs,
396
+ local_join,
397
+ local_target,
398
+ ):
399
+ # Copy scenario 1g
400
+ source = fs_glob_edge_cases_files
401
+
402
+ target = local_target
403
+
404
+ for new_dir, target_slash in product([True, False], [True, False]):
405
+ local_fs.mkdir(target)
406
+
407
+ t = local_join(target, "newdir") if new_dir else target
408
+ t = t + "/" if target_slash else t
409
+
410
+ fs.get(fs_join(source, path), t, recursive=recursive, maxdepth=maxdepth)
411
+
412
+ output = local_fs.find(target)
413
+ if new_dir:
414
+ prefixed_expected = [
415
+ make_path_posix(local_join(target, "newdir", p)) for p in expected
416
+ ]
417
+ else:
418
+ prefixed_expected = [
419
+ make_path_posix(local_join(target, p)) for p in expected
420
+ ]
421
+ assert sorted(output) == sorted(prefixed_expected)
422
+
423
+ try:
424
+ local_fs.rm(target, recursive=True)
425
+ except FileNotFoundError:
426
+ pass
427
+
428
+ def test_get_list_of_files_to_existing_directory(
429
+ self,
430
+ fs,
431
+ fs_join,
432
+ fs_bulk_operations_scenario_0,
433
+ local_fs,
434
+ local_join,
435
+ local_target,
436
+ ):
437
+ # Copy scenario 2a
438
+ source = fs_bulk_operations_scenario_0
439
+
440
+ target = local_target
441
+ local_fs.mkdir(target)
442
+
443
+ source_files = [
444
+ fs_join(source, "file1"),
445
+ fs_join(source, "file2"),
446
+ fs_join(source, "subdir", "subfile1"),
447
+ ]
448
+
449
+ for target_slash in [False, True]:
450
+ t = target + "/" if target_slash else target
451
+
452
+ fs.get(source_files, t)
453
+ assert local_fs.isfile(local_join(target, "file1"))
454
+ assert local_fs.isfile(local_join(target, "file2"))
455
+ assert local_fs.isfile(local_join(target, "subfile1"))
456
+
457
+ local_fs.rm(
458
+ [
459
+ local_join(target, "file1"),
460
+ local_join(target, "file2"),
461
+ local_join(target, "subfile1"),
462
+ ],
463
+ recursive=True,
464
+ )
465
+ assert local_fs.ls(target) == []
466
+
467
+ def test_get_list_of_files_to_new_directory(
468
+ self,
469
+ fs,
470
+ fs_join,
471
+ fs_bulk_operations_scenario_0,
472
+ local_fs,
473
+ local_join,
474
+ local_target,
475
+ ):
476
+ # Copy scenario 2b
477
+ source = fs_bulk_operations_scenario_0
478
+
479
+ target = local_target
480
+ local_fs.mkdir(target)
481
+
482
+ source_files = [
483
+ fs_join(source, "file1"),
484
+ fs_join(source, "file2"),
485
+ fs_join(source, "subdir", "subfile1"),
486
+ ]
487
+
488
+ fs.get(source_files, local_join(target, "newdir") + "/") # Note trailing slash
489
+ assert local_fs.isdir(local_join(target, "newdir"))
490
+ assert local_fs.isfile(local_join(target, "newdir", "file1"))
491
+ assert local_fs.isfile(local_join(target, "newdir", "file2"))
492
+ assert local_fs.isfile(local_join(target, "newdir", "subfile1"))
493
+
494
+ def test_get_directory_recursive(
495
+ self, fs, fs_join, fs_path, local_fs, local_join, local_target
496
+ ):
497
+ # https://github.com/fsspec/filesystem_spec/issues/1062
498
+ # Recursive cp/get/put of source directory into non-existent target directory.
499
+ src = fs_join(fs_path, "src")
500
+ src_file = fs_join(src, "file")
501
+ fs.mkdir(src)
502
+ fs.touch(src_file)
503
+
504
+ target = local_target
505
+
506
+ # get without slash
507
+ assert not local_fs.exists(target)
508
+ for loop in range(2):
509
+ fs.get(src, target, recursive=True)
510
+ assert local_fs.isdir(target)
511
+
512
+ if loop == 0:
513
+ assert local_fs.isfile(local_join(target, "file"))
514
+ assert not local_fs.exists(local_join(target, "src"))
515
+ else:
516
+ assert local_fs.isfile(local_join(target, "file"))
517
+ assert local_fs.isdir(local_join(target, "src"))
518
+ assert local_fs.isfile(local_join(target, "src", "file"))
519
+
520
+ local_fs.rm(target, recursive=True)
521
+
522
+ # get with slash
523
+ assert not local_fs.exists(target)
524
+ for loop in range(2):
525
+ fs.get(src + "/", target, recursive=True)
526
+ assert local_fs.isdir(target)
527
+ assert local_fs.isfile(local_join(target, "file"))
528
+ assert not local_fs.exists(local_join(target, "src"))
529
+
530
+ def test_get_directory_without_files_with_same_name_prefix(
531
+ self,
532
+ fs,
533
+ fs_join,
534
+ local_fs,
535
+ local_join,
536
+ local_target,
537
+ fs_dir_and_file_with_same_name_prefix,
538
+ ):
539
+ # Create the test dirs
540
+ source = fs_dir_and_file_with_same_name_prefix
541
+ target = local_target
542
+
543
+ # Test without glob
544
+ fs.get(fs_join(source, "subdir"), target, recursive=True)
545
+
546
+ assert local_fs.isfile(local_join(target, "subfile.txt"))
547
+ assert not local_fs.isfile(local_join(target, "subdir.txt"))
548
+
549
+ local_fs.rm([local_join(target, "subfile.txt")])
550
+ assert local_fs.ls(target) == []
551
+
552
+ # Test with glob
553
+ fs.get(fs_join(source, "subdir*"), target, recursive=True)
554
+
555
+ assert local_fs.isdir(local_join(target, "subdir"))
556
+ assert local_fs.isfile(local_join(target, "subdir", "subfile.txt"))
557
+ assert local_fs.isfile(local_join(target, "subdir.txt"))
558
+
559
+ def test_get_with_source_and_destination_as_list(
560
+ self,
561
+ fs,
562
+ fs_join,
563
+ local_fs,
564
+ local_join,
565
+ local_target,
566
+ fs_10_files_with_hashed_names,
567
+ ):
568
+ # Create the test dir
569
+ source = fs_10_files_with_hashed_names
570
+ target = local_target
571
+
572
+ # Create list of files for source and destination
573
+ source_files = []
574
+ destination_files = []
575
+ for i in range(10):
576
+ hashed_i = md5(str(i).encode("utf-8")).hexdigest()
577
+ source_files.append(fs_join(source, f"{hashed_i}.txt"))
578
+ destination_files.append(
579
+ make_path_posix(local_join(target, f"{hashed_i}.txt"))
580
+ )
581
+
582
+ # Copy and assert order was kept
583
+ fs.get(rpath=source_files, lpath=destination_files)
584
+
585
+ for i in range(10):
586
+ file_content = local_fs.cat(destination_files[i]).decode("utf-8")
587
+ assert file_content == str(i)
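AbstractGetTests mirrors the copy suite but downloads into a local target, which is why assertions go through local_fs/local_join and expected paths are normalized with make_path_posix. A small, hedged illustration of the trailing-slash convention the tests exercise (the local path is hypothetical):

import fsspec

fs = fsspec.filesystem("memory")
fs.pipe_file("/src/file1", b"data")

# A trailing slash marks the target as a directory, created on demand,
# matching the "Note trailing slash" cases in the suite above.
fs.get("/src/file1", "/tmp/demo_target/")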
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/mv.py ADDED
@@ -0,0 +1,57 @@
1
+ import os
2
+
3
+ import pytest
4
+
5
+ import fsspec
6
+
7
+
8
+ def test_move_raises_error_with_tmpdir(tmpdir):
9
+ # Create a file in the temporary directory
10
+ source = tmpdir.join("source_file.txt")
11
+ source.write("content")
12
+
13
+ # Define a destination that simulates a protected or invalid path
14
+ destination = tmpdir.join("non_existent_directory/destination_file.txt")
15
+
16
+ # Instantiate the filesystem (assuming the local file system interface)
17
+ fs = fsspec.filesystem("file")
18
+
19
+ # Use the actual file paths as string
20
+ with pytest.raises(FileNotFoundError):
21
+ fs.mv(str(source), str(destination))
22
+
23
+
24
+ @pytest.mark.parametrize("recursive", (True, False))
25
+ def test_move_raises_error_with_tmpdir_permission(recursive, tmpdir):
26
+ # Create a file in the temporary directory
27
+ source = tmpdir.join("source_file.txt")
28
+ source.write("content")
29
+
30
+ # Create a protected directory (non-writable)
31
+ protected_dir = tmpdir.mkdir("protected_directory")
32
+ protected_path = str(protected_dir)
33
+
34
+ # Set the directory to read-only
35
+ if os.name == "nt":
36
+ os.system(f'icacls "{protected_path}" /deny Everyone:(W)')
37
+ else:
38
+ os.chmod(protected_path, 0o555) # Sets the directory to read-only
39
+
40
+ # Define a destination inside the protected directory
41
+ destination = protected_dir.join("destination_file.txt")
42
+
43
+ # Instantiate the filesystem (assuming the local file system interface)
44
+ fs = fsspec.filesystem("file")
45
+
46
+ # Try to move the file to the read-only directory, expecting a permission error
47
+ with pytest.raises(PermissionError):
48
+ fs.mv(str(source), str(destination), recursive=recursive)
49
+
50
+ # Assert the file was not created in the destination
51
+ assert not os.path.exists(destination)
52
+
53
+ # Cleanup: Restore permissions so the directory can be cleaned up
54
+ if os.name == "nt":
55
+ os.system(f'icacls "{protected_path}" /remove:d Everyone')
56
+ else:
57
+ os.chmod(protected_path, 0o755) # Restore write permission for cleanup
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/open.py ADDED
@@ -0,0 +1,11 @@
1
+ import pytest
2
+
3
+
4
+ class AbstractOpenTests:
5
+ def test_open_exclusive(self, fs, fs_target):
6
+ with fs.open(fs_target, "wb") as f:
7
+ f.write(b"data")
8
+ with fs.open(fs_target, "rb") as f:
9
+ assert f.read() == b"data"
10
+ with pytest.raises(FileExistsError):
11
+ fs.open(fs_target, "xb")
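test_open_exclusive pins down "x" (exclusive-create) semantics: a second open of an existing path must raise FileExistsError rather than truncate it. A quick sketch against the in-memory backend, assuming it honours the mode:

import fsspec

fs = fsspec.filesystem("memory")
with fs.open("/exclusive", "xb") as f:  # "x": create, fail if present
    f.write(b"data")
# A repeat of fs.open("/exclusive", "xb") now raises FileExistsError.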
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/pipe.py ADDED
@@ -0,0 +1,11 @@
1
+ import pytest
2
+
3
+
4
+ class AbstractPipeTests:
5
+ def test_pipe_exclusive(self, fs, fs_target):
6
+ fs.pipe_file(fs_target, b"data")
7
+ assert fs.cat_file(fs_target) == b"data"
8
+ with pytest.raises(FileExistsError):
9
+ fs.pipe_file(fs_target, b"data", mode="create")
10
+ fs.pipe_file(fs_target, b"new data", mode="overwrite")
11
+ assert fs.cat_file(fs_target) == b"new data"
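AbstractPipeTests checks the same create/overwrite distinction for pipe_file. A short usage sketch, again assuming the backend supports the mode argument shown in the test:

import fsspec

fs = fsspec.filesystem("memory")
fs.pipe_file("/key", b"v1", mode="create")     # errors if /key already exists
fs.pipe_file("/key", b"v2", mode="overwrite")  # unconditional replace
assert fs.cat_file("/key") == b"v2"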
.venv/lib/python3.13/site-packages/fsspec/tests/abstract/put.py ADDED
@@ -0,0 +1,591 @@
1
+ from hashlib import md5
2
+ from itertools import product
3
+
4
+ import pytest
5
+
6
+ from fsspec.tests.abstract.common import GLOB_EDGE_CASES_TESTS
7
+
8
+
9
+ class AbstractPutTests:
10
+ def test_put_file_to_existing_directory(
11
+ self,
12
+ fs,
13
+ fs_join,
14
+ fs_target,
15
+ local_join,
16
+ local_bulk_operations_scenario_0,
17
+ supports_empty_directories,
18
+ ):
19
+ # Copy scenario 1a
20
+ source = local_bulk_operations_scenario_0
21
+
22
+ target = fs_target
23
+ fs.mkdir(target)
24
+ if not supports_empty_directories:
25
+ # Force target directory to exist by adding a dummy file
26
+ fs.touch(fs_join(target, "dummy"))
27
+ assert fs.isdir(target)
28
+
29
+ target_file2 = fs_join(target, "file2")
30
+ target_subfile1 = fs_join(target, "subfile1")
31
+
32
+ # Copy from source directory
33
+ fs.put(local_join(source, "file2"), target)
34
+ assert fs.isfile(target_file2)
35
+
36
+ # Copy from sub directory
37
+ fs.put(local_join(source, "subdir", "subfile1"), target)
38
+ assert fs.isfile(target_subfile1)
39
+
40
+ # Remove copied files
41
+ fs.rm([target_file2, target_subfile1])
42
+ assert not fs.exists(target_file2)
43
+ assert not fs.exists(target_subfile1)
44
+
45
+ # Repeat with trailing slash on target
46
+ fs.put(local_join(source, "file2"), target + "/")
47
+ assert fs.isdir(target)
48
+ assert fs.isfile(target_file2)
49
+
50
+ fs.put(local_join(source, "subdir", "subfile1"), target + "/")
51
+ assert fs.isfile(target_subfile1)
52
+
53
+ def test_put_file_to_new_directory(
54
+ self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0
55
+ ):
56
+ # Copy scenario 1b
57
+ source = local_bulk_operations_scenario_0
58
+
59
+ target = fs_target
60
+ fs.mkdir(target)
61
+
62
+ fs.put(
63
+ local_join(source, "subdir", "subfile1"), fs_join(target, "newdir/")
64
+ ) # Note trailing slash
65
+ assert fs.isdir(target)
66
+ assert fs.isdir(fs_join(target, "newdir"))
67
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
68
+
69
+ def test_put_file_to_file_in_existing_directory(
70
+ self,
71
+ fs,
72
+ fs_join,
73
+ fs_target,
74
+ local_join,
75
+ supports_empty_directories,
76
+ local_bulk_operations_scenario_0,
77
+ ):
78
+ # Copy scenario 1c
79
+ source = local_bulk_operations_scenario_0
80
+
81
+ target = fs_target
82
+ fs.mkdir(target)
83
+ if not supports_empty_directories:
84
+ # Force target directory to exist by adding a dummy file
85
+ fs.touch(fs_join(target, "dummy"))
86
+ assert fs.isdir(target)
87
+
88
+ fs.put(local_join(source, "subdir", "subfile1"), fs_join(target, "newfile"))
89
+ assert fs.isfile(fs_join(target, "newfile"))
90
+
91
+ def test_put_file_to_file_in_new_directory(
92
+ self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0
93
+ ):
94
+ # Copy scenario 1d
95
+ source = local_bulk_operations_scenario_0
96
+
97
+ target = fs_target
98
+ fs.mkdir(target)
99
+
100
+ fs.put(
101
+ local_join(source, "subdir", "subfile1"),
102
+ fs_join(target, "newdir", "newfile"),
103
+ )
104
+ assert fs.isdir(fs_join(target, "newdir"))
105
+ assert fs.isfile(fs_join(target, "newdir", "newfile"))
106
+
107
+ def test_put_directory_to_existing_directory(
108
+ self,
109
+ fs,
110
+ fs_join,
111
+ fs_target,
112
+ local_bulk_operations_scenario_0,
113
+ supports_empty_directories,
114
+ ):
115
+ # Copy scenario 1e
116
+ source = local_bulk_operations_scenario_0
117
+
118
+ target = fs_target
119
+ fs.mkdir(target)
120
+ if not supports_empty_directories:
121
+ # Force target directory to exist by adding a dummy file
122
+ dummy = fs_join(target, "dummy")
123
+ fs.touch(dummy)
124
+ assert fs.isdir(target)
125
+
126
+ for source_slash, target_slash in zip([False, True], [False, True]):
127
+ s = fs_join(source, "subdir")
128
+ if source_slash:
129
+ s += "/"
130
+ t = target + "/" if target_slash else target
131
+
132
+ # Without recursive does nothing
133
+ fs.put(s, t)
134
+ assert fs.ls(target, detail=False) == (
135
+ [] if supports_empty_directories else [dummy]
136
+ )
137
+
138
+ # With recursive
139
+ fs.put(s, t, recursive=True)
140
+ if source_slash:
141
+ assert fs.isfile(fs_join(target, "subfile1"))
142
+ assert fs.isfile(fs_join(target, "subfile2"))
143
+ assert fs.isdir(fs_join(target, "nesteddir"))
144
+ assert fs.isfile(fs_join(target, "nesteddir", "nestedfile"))
145
+ assert not fs.exists(fs_join(target, "subdir"))
146
+
147
+ fs.rm(
148
+ [
149
+ fs_join(target, "subfile1"),
150
+ fs_join(target, "subfile2"),
151
+ fs_join(target, "nesteddir"),
152
+ ],
153
+ recursive=True,
154
+ )
155
+ else:
156
+ assert fs.isdir(fs_join(target, "subdir"))
157
+ assert fs.isfile(fs_join(target, "subdir", "subfile1"))
158
+ assert fs.isfile(fs_join(target, "subdir", "subfile2"))
159
+ assert fs.isdir(fs_join(target, "subdir", "nesteddir"))
160
+ assert fs.isfile(fs_join(target, "subdir", "nesteddir", "nestedfile"))
161
+
162
+ fs.rm(fs_join(target, "subdir"), recursive=True)
163
+ assert fs.ls(target, detail=False) == (
164
+ [] if supports_empty_directories else [dummy]
165
+ )
166
+
167
+ # Limit recursive by maxdepth
168
+ fs.put(s, t, recursive=True, maxdepth=1)
169
+ if source_slash:
170
+ assert fs.isfile(fs_join(target, "subfile1"))
171
+ assert fs.isfile(fs_join(target, "subfile2"))
172
+ assert not fs.exists(fs_join(target, "nesteddir"))
173
+ assert not fs.exists(fs_join(target, "subdir"))
174
+
175
+ fs.rm(
176
+ [
177
+ fs_join(target, "subfile1"),
178
+ fs_join(target, "subfile2"),
179
+ ],
180
+ recursive=True,
181
+ )
182
+ else:
183
+ assert fs.isdir(fs_join(target, "subdir"))
184
+ assert fs.isfile(fs_join(target, "subdir", "subfile1"))
185
+ assert fs.isfile(fs_join(target, "subdir", "subfile2"))
186
+ assert not fs.exists(fs_join(target, "subdir", "nesteddir"))
187
+
188
+ fs.rm(fs_join(target, "subdir"), recursive=True)
189
+ assert fs.ls(target, detail=False) == (
190
+ [] if supports_empty_directories else [dummy]
191
+ )
192
+
193
+ def test_put_directory_to_new_directory(
194
+ self,
195
+ fs,
196
+ fs_join,
197
+ fs_target,
198
+ local_bulk_operations_scenario_0,
199
+ supports_empty_directories,
200
+ ):
201
+ # Copy scenario 1f
202
+ source = local_bulk_operations_scenario_0
203
+
204
+ target = fs_target
205
+ fs.mkdir(target)
206
+
207
+ for source_slash, target_slash in zip([False, True], [False, True]):
208
+ s = fs_join(source, "subdir")
209
+ if source_slash:
210
+ s += "/"
211
+ t = fs_join(target, "newdir")
212
+ if target_slash:
213
+ t += "/"
214
+
215
+ # Without recursive does nothing
216
+ fs.put(s, t)
217
+ if supports_empty_directories:
218
+ assert fs.ls(target) == []
219
+ else:
220
+ with pytest.raises(FileNotFoundError):
221
+ fs.ls(target)
222
+
223
+ # With recursive
224
+ fs.put(s, t, recursive=True)
225
+ assert fs.isdir(fs_join(target, "newdir"))
226
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
227
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
228
+ assert fs.isdir(fs_join(target, "newdir", "nesteddir"))
229
+ assert fs.isfile(fs_join(target, "newdir", "nesteddir", "nestedfile"))
230
+ assert not fs.exists(fs_join(target, "subdir"))
231
+
232
+ fs.rm(fs_join(target, "newdir"), recursive=True)
233
+ assert not fs.exists(fs_join(target, "newdir"))
234
+
235
+ # Limit recursive by maxdepth
236
+ fs.put(s, t, recursive=True, maxdepth=1)
237
+ assert fs.isdir(fs_join(target, "newdir"))
238
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
239
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
240
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir"))
241
+ assert not fs.exists(fs_join(target, "subdir"))
242
+
243
+ fs.rm(fs_join(target, "newdir"), recursive=True)
244
+ assert not fs.exists(fs_join(target, "newdir"))
245
+
246
+ def test_put_glob_to_existing_directory(
247
+ self,
248
+ fs,
249
+ fs_join,
250
+ fs_target,
251
+ local_join,
252
+ supports_empty_directories,
253
+ local_bulk_operations_scenario_0,
254
+ ):
255
+ # Copy scenario 1g
256
+ source = local_bulk_operations_scenario_0
257
+
258
+ target = fs_target
259
+ fs.mkdir(target)
260
+ if not supports_empty_directories:
261
+ # Force target directory to exist by adding a dummy file
262
+ dummy = fs_join(target, "dummy")
263
+ fs.touch(dummy)
264
+ assert fs.isdir(target)
265
+
266
+ for target_slash in [False, True]:
267
+ t = target + "/" if target_slash else target
268
+
269
+ # Without recursive
270
+ fs.put(local_join(source, "subdir", "*"), t)
271
+ assert fs.isfile(fs_join(target, "subfile1"))
272
+ assert fs.isfile(fs_join(target, "subfile2"))
273
+ assert not fs.isdir(fs_join(target, "nesteddir"))
274
+ assert not fs.exists(fs_join(target, "nesteddir", "nestedfile"))
275
+ assert not fs.exists(fs_join(target, "subdir"))
276
+
277
+ fs.rm(
278
+ [
279
+ fs_join(target, "subfile1"),
280
+ fs_join(target, "subfile2"),
281
+ ],
282
+ recursive=True,
283
+ )
284
+ assert fs.ls(target, detail=False) == (
285
+ [] if supports_empty_directories else [dummy]
286
+ )
287
+
288
+ # With recursive
289
+ for glob, recursive in zip(["*", "**"], [True, False]):
290
+ fs.put(local_join(source, "subdir", glob), t, recursive=recursive)
291
+ assert fs.isfile(fs_join(target, "subfile1"))
292
+ assert fs.isfile(fs_join(target, "subfile2"))
293
+ assert fs.isdir(fs_join(target, "nesteddir"))
294
+ assert fs.isfile(fs_join(target, "nesteddir", "nestedfile"))
295
+ assert not fs.exists(fs_join(target, "subdir"))
296
+
297
+ fs.rm(
298
+ [
299
+ fs_join(target, "subfile1"),
300
+ fs_join(target, "subfile2"),
301
+ fs_join(target, "nesteddir"),
302
+ ],
303
+ recursive=True,
304
+ )
305
+ assert fs.ls(target, detail=False) == (
306
+ [] if supports_empty_directories else [dummy]
307
+ )
308
+
309
+ # Limit recursive by maxdepth
310
+ fs.put(
311
+ local_join(source, "subdir", glob),
312
+ t,
313
+ recursive=recursive,
314
+ maxdepth=1,
315
+ )
316
+ assert fs.isfile(fs_join(target, "subfile1"))
317
+ assert fs.isfile(fs_join(target, "subfile2"))
318
+ assert not fs.exists(fs_join(target, "nesteddir"))
319
+ assert not fs.exists(fs_join(target, "subdir"))
320
+
321
+ fs.rm(
322
+ [
323
+ fs_join(target, "subfile1"),
324
+ fs_join(target, "subfile2"),
325
+ ],
326
+ recursive=True,
327
+ )
328
+ assert fs.ls(target, detail=False) == (
329
+ [] if supports_empty_directories else [dummy]
330
+ )
331
+
332
+ def test_put_glob_to_new_directory(
333
+ self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0
334
+ ):
335
+ # Copy scenario 1h
336
+ source = local_bulk_operations_scenario_0
337
+
338
+ target = fs_target
339
+ fs.mkdir(target)
340
+
341
+ for target_slash in [False, True]:
342
+ t = fs_join(target, "newdir")
343
+ if target_slash:
344
+ t += "/"
345
+
346
+ # Without recursive
347
+ fs.put(local_join(source, "subdir", "*"), t)
348
+ assert fs.isdir(fs_join(target, "newdir"))
349
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
350
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
351
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir"))
352
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir", "nestedfile"))
353
+ assert not fs.exists(fs_join(target, "subdir"))
354
+ assert not fs.exists(fs_join(target, "newdir", "subdir"))
355
+
356
+ fs.rm(fs_join(target, "newdir"), recursive=True)
357
+ assert not fs.exists(fs_join(target, "newdir"))
358
+
359
+ # With recursive
360
+ for glob, recursive in zip(["*", "**"], [True, False]):
361
+ fs.put(local_join(source, "subdir", glob), t, recursive=recursive)
362
+ assert fs.isdir(fs_join(target, "newdir"))
363
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
364
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
365
+ assert fs.isdir(fs_join(target, "newdir", "nesteddir"))
366
+ assert fs.isfile(fs_join(target, "newdir", "nesteddir", "nestedfile"))
367
+ assert not fs.exists(fs_join(target, "subdir"))
368
+ assert not fs.exists(fs_join(target, "newdir", "subdir"))
369
+
370
+ fs.rm(fs_join(target, "newdir"), recursive=True)
371
+ assert not fs.exists(fs_join(target, "newdir"))
372
+
373
+ # Limit recursive by maxdepth
374
+ fs.put(
375
+ local_join(source, "subdir", glob),
376
+ t,
377
+ recursive=recursive,
378
+ maxdepth=1,
379
+ )
380
+ assert fs.isdir(fs_join(target, "newdir"))
381
+ assert fs.isfile(fs_join(target, "newdir", "subfile1"))
382
+ assert fs.isfile(fs_join(target, "newdir", "subfile2"))
383
+ assert not fs.exists(fs_join(target, "newdir", "nesteddir"))
384
+ assert not fs.exists(fs_join(target, "subdir"))
385
+ assert not fs.exists(fs_join(target, "newdir", "subdir"))
386
+
387
+ fs.rm(fs_join(target, "newdir"), recursive=True)
388
+ assert not fs.exists(fs_join(target, "newdir"))
389
+
390
+    @pytest.mark.parametrize(
+        GLOB_EDGE_CASES_TESTS["argnames"],
+        GLOB_EDGE_CASES_TESTS["argvalues"],
+    )
+    def test_put_glob_edge_cases(
+        self,
+        path,
+        recursive,
+        maxdepth,
+        expected,
+        fs,
+        fs_join,
+        fs_target,
+        local_glob_edge_cases_files,
+        local_join,
+        fs_sanitize_path,
+    ):
+        # Copy scenario 1g
+        source = local_glob_edge_cases_files
+
+        target = fs_target
+
+        for new_dir, target_slash in product([True, False], [True, False]):
+            fs.mkdir(target)
+
+            t = fs_join(target, "newdir") if new_dir else target
+            t = t + "/" if target_slash else t
+
+            fs.put(local_join(source, path), t, recursive=recursive, maxdepth=maxdepth)
+
+            output = fs.find(target)
+            if new_dir:
+                prefixed_expected = [
+                    fs_sanitize_path(fs_join(target, "newdir", p)) for p in expected
+                ]
+            else:
+                prefixed_expected = [
+                    fs_sanitize_path(fs_join(target, p)) for p in expected
+                ]
+            assert sorted(output) == sorted(prefixed_expected)
+
+            try:
+                fs.rm(target, recursive=True)
+            except FileNotFoundError:
+                pass
+
+    def test_put_list_of_files_to_existing_directory(
+        self,
+        fs,
+        fs_join,
+        fs_target,
+        local_join,
+        local_bulk_operations_scenario_0,
+        supports_empty_directories,
+    ):
+        # Copy scenario 2a
+        source = local_bulk_operations_scenario_0
+
+        target = fs_target
+        fs.mkdir(target)
+        if not supports_empty_directories:
+            # Force target directory to exist by adding a dummy file
+            dummy = fs_join(target, "dummy")
+            fs.touch(dummy)
+        assert fs.isdir(target)
+
+        source_files = [
+            local_join(source, "file1"),
+            local_join(source, "file2"),
+            local_join(source, "subdir", "subfile1"),
+        ]
+
+        for target_slash in [False, True]:
+            t = target + "/" if target_slash else target
+
+            fs.put(source_files, t)
+            assert fs.isfile(fs_join(target, "file1"))
+            assert fs.isfile(fs_join(target, "file2"))
+            assert fs.isfile(fs_join(target, "subfile1"))
+
+            fs.rm(
+                [
+                    fs_join(target, "file1"),
+                    fs_join(target, "file2"),
+                    fs_join(target, "subfile1"),
+                ],
+                recursive=True,
+            )
+            assert fs.ls(target, detail=False) == (
+                [] if supports_empty_directories else [dummy]
+            )
+
+    def test_put_list_of_files_to_new_directory(
+        self, fs, fs_join, fs_target, local_join, local_bulk_operations_scenario_0
+    ):
+        # Copy scenario 2b
+        source = local_bulk_operations_scenario_0
+
+        target = fs_target
+        fs.mkdir(target)
+
+        source_files = [
+            local_join(source, "file1"),
+            local_join(source, "file2"),
+            local_join(source, "subdir", "subfile1"),
+        ]
+
+        fs.put(source_files, fs_join(target, "newdir") + "/")  # Note trailing slash
+        assert fs.isdir(fs_join(target, "newdir"))
+        assert fs.isfile(fs_join(target, "newdir", "file1"))
+        assert fs.isfile(fs_join(target, "newdir", "file2"))
+        assert fs.isfile(fs_join(target, "newdir", "subfile1"))
+
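Note: the trailing slash is what marks the destination as a directory here: given a list of sources and an rpath ending in "/", fs.put creates the directory and copies each file into it. A small sketch, assuming hypothetical /tmp inputs and the in-memory filesystem:

    import fsspec
    from pathlib import Path

    Path("/tmp/f1.txt").write_text("1")  # hypothetical inputs
    Path("/tmp/f2.txt").write_text("2")

    fs = fsspec.filesystem("memory")
    # "/newdir" does not exist yet; the trailing slash makes put() create it
    fs.put(["/tmp/f1.txt", "/tmp/f2.txt"], "/newdir/")
    # files now live at /newdir/f1.txt and /newdir/f2.txt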
+    def test_put_directory_recursive(
+        self, fs, fs_join, fs_target, local_fs, local_join, local_path
+    ):
+        # https://github.com/fsspec/filesystem_spec/issues/1062
+        # Recursive cp/get/put of source directory into non-existent target directory.
+        src = local_join(local_path, "src")
+        src_file = local_join(src, "file")
+        local_fs.mkdir(src)
+        local_fs.touch(src_file)
+
+        target = fs_target
+
+        # put without slash
+        assert not fs.exists(target)
+        for loop in range(2):
+            fs.put(src, target, recursive=True)
+            assert fs.isdir(target)
+
+            if loop == 0:
+                assert fs.isfile(fs_join(target, "file"))
+                assert not fs.exists(fs_join(target, "src"))
+            else:
+                assert fs.isfile(fs_join(target, "file"))
+                assert fs.isdir(fs_join(target, "src"))
+                assert fs.isfile(fs_join(target, "src", "file"))
+
+        fs.rm(target, recursive=True)
+
+        # put with slash
+        assert not fs.exists(target)
+        for loop in range(2):
+            fs.put(src + "/", target, recursive=True)
+            assert fs.isdir(target)
+            assert fs.isfile(fs_join(target, "file"))
+            assert not fs.exists(fs_join(target, "src"))
+
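Note: the two loops above capture a subtle asymmetry. Without a trailing slash on the source, the first recursive put copies the directory's contents into the newly created target, but a second call, now that the target exists, nests the source directory itself inside it. With a trailing slash, only the contents are copied on every call. Sketched with hypothetical paths and the in-memory filesystem:

    import fsspec
    from pathlib import Path

    Path("/tmp/src").mkdir(exist_ok=True)  # hypothetical source dir
    Path("/tmp/src/file").write_text("x")

    fs = fsspec.filesystem("memory")
    fs.put("/tmp/src", "/dest", recursive=True)    # creates /dest/file
    fs.put("/tmp/src", "/dest", recursive=True)    # adds /dest/src/file
    fs.put("/tmp/src/", "/dest2", recursive=True)  # slash: contents only, always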
+    def test_put_directory_without_files_with_same_name_prefix(
+        self,
+        fs,
+        fs_join,
+        fs_target,
+        local_join,
+        local_dir_and_file_with_same_name_prefix,
+        supports_empty_directories,
+    ):
+        # Create the test dirs
+        source = local_dir_and_file_with_same_name_prefix
+        target = fs_target
+
+        # Test without glob
+        fs.put(local_join(source, "subdir"), fs_target, recursive=True)
+
+        assert fs.isfile(fs_join(fs_target, "subfile.txt"))
+        assert not fs.isfile(fs_join(fs_target, "subdir.txt"))
+
+        fs.rm([fs_join(target, "subfile.txt")])
+        if supports_empty_directories:
+            assert fs.ls(target) == []
+        else:
+            assert not fs.exists(target)
+
+        # Test with glob
+        fs.put(local_join(source, "subdir*"), fs_target, recursive=True)
+
+        assert fs.isdir(fs_join(fs_target, "subdir"))
+        assert fs.isfile(fs_join(fs_target, "subdir", "subfile.txt"))
+        assert fs.isfile(fs_join(fs_target, "subdir.txt"))
+
+    def test_copy_with_source_and_destination_as_list(
+        self, fs, fs_target, fs_join, local_join, local_10_files_with_hashed_names
+    ):
+        # Create the test dir
+        source = local_10_files_with_hashed_names
+        target = fs_target
+
+        # Create list of files for source and destination
+        source_files = []
+        destination_files = []
+        for i in range(10):
+            hashed_i = md5(str(i).encode("utf-8")).hexdigest()
+            source_files.append(local_join(source, f"{hashed_i}.txt"))
+            destination_files.append(fs_join(target, f"{hashed_i}.txt"))
+
+        # Copy and assert order was kept
+        fs.put(lpath=source_files, rpath=destination_files)
+
+        for i in range(10):
+            file_content = fs.cat(destination_files[i]).decode("utf-8")
+            assert file_content == str(i)
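Note: when both lpath and rpath are lists, fs.put pairs them element-wise (lpath[i] is written to rpath[i]), which is exactly what the order check above relies on. A minimal sketch with hypothetical paths and the in-memory filesystem:

    import fsspec
    from pathlib import Path

    Path("/tmp/a.txt").write_text("a")  # hypothetical inputs
    Path("/tmp/b.txt").write_text("b")

    fs = fsspec.filesystem("memory")
    # source i is written to destination i
    fs.put(lpath=["/tmp/a.txt", "/tmp/b.txt"],
           rpath=["/up/one.txt", "/up/two.txt"])
    print(fs.cat("/up/one.txt"))  # b'a'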
.venv/lib/python3.13/site-packages/hf_xet-1.1.5.dist-info/licenses/LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/__init__.cpython-313.pyc ADDED
Binary file (21.9 kB).
 
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_commit_api.cpython-313.pyc ADDED
Binary file (41.3 kB).
 
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_commit_scheduler.cpython-313.pyc ADDED
Binary file (18.2 kB).
 
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_inference_endpoints.cpython-313.pyc ADDED
Binary file (19.2 kB).
 
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_local_folder.cpython-313.pyc ADDED
Binary file (19.9 kB).
 
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_login.cpython-313.pyc ADDED
Binary file (21.2 kB).
 
.venv/lib/python3.13/site-packages/huggingface_hub/__pycache__/_snapshot_download.cpython-313.pyc ADDED
Binary file (14.7 kB).