| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
miguelgrinberg/Flask-Migrate | flask | 242 | Tutorial sections 4.5+ - No Changes in Schema detected (Solution?) | Hi,
For the command-line examples in Sections 4.5-4.9 (namely `flask db migrate -m "users table"` and the subsequent upgrades) to work,
you need the import statements seen in the Shell Context example in Ex. 4.10.
As stated in other issues/posts, the models and the db instance aren't imported where the Flask db commands can see them; once you add the imports, the problem is solved.
Maybe the tutorial needs an amendment?
Example: I compared my Section 4.5 code base against the microblog "source" repo, as seen here.
``` shell
(microblog_venv) ╭─jrock@ritchie ~/git
╰─$ diff --ignore-all-space ./microblog_source/microblog.py ./jrock/microblog/microblog.py 1 ↵
1,7c1,2
< from app import app, db
< from app.models import User, Post
<
<
< @app.shell_context_processor
< def make_shell_context():
< return {'db': db, 'User': User, 'Post': Post}
---
> # From our module import that app 'file'
> from app import app
\ No newline at end of file
(microblog_venv) ╭─jrock@ritchie ~/git
╰─$
``` | closed | 2018-12-24T19:30:01Z | 2019-04-07T09:55:51Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/242 | [
"question"
] | JeanNiBee | 2 |
jina-ai/serve | machine-learning | 5,272 | Align on OpenTelemetry service names and cloud semantic attributes. | The current tracers use the default runtime name or module name to create spans. These names are very generic and don't differentiate between different flows. Without unique service names and other cloud deployment attributes ([k8s](https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/k8s/), [cloud provider attributes](https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/#cloud-provider-specific-attributes)), viewing traces will become unfilterable and very messy.
# Proposal
- Allow users to provide/override the deployment name using environment variables which is then set in the instrumentation library.
- Jina AI Cloud specific attributes need to be manually supported and documented.
- Self-hosted cloud-specific attributes need to be set by the user using environment variables, as defined in the OpenTelemetry [documentation](https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/#cloud-provider-specific-attributes). | closed | 2022-10-11T15:07:15Z | 2022-11-21T12:38:55Z | https://github.com/jina-ai/serve/issues/5272 | [] | girishc13 | 1 |
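A minimal sketch of the env-var override proposed above (the variable name `JINA_DEPLOYMENT_NAME` and the fallback value are illustrative assumptions, not existing Jina settings):

```python
import os


def resolve_service_name(default: str = "executor/rep-0") -> str:
    """Resolve the OpenTelemetry service.name, preferring a user override.

    JINA_DEPLOYMENT_NAME is a hypothetical env var used for this sketch;
    the default mimics the generic runtime name the issue complains about.
    """
    return os.environ.get("JINA_DEPLOYMENT_NAME", default)


# without an override, traces fall back to the generic runtime name
assert resolve_service_name() == "executor/rep-0"

# with an override, each flow gets a distinguishable service.name
os.environ["JINA_DEPLOYMENT_NAME"] = "indexer-flow-1"
assert resolve_service_name() == "indexer-flow-1"
```

The resolved name would then be handed to the instrumentation library as the `service.name` resource attribute.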
holoviz/panel | matplotlib | 6,927 | panel.serve() implicitly and unconditionally captures SIGINT under the hood | ### ALL software version info
panel 1.4.4 (currently latest). Others are irrelevant, I believe, but still:
- python 3.9
- bokeh 3.4.1
- OS Windows 11
- browser FireFox (definitely irrelevant)
### Description of expected behavior and the observed behavior
#### Observed:
`panel.serve(..., threaded=False)` delegates to `panel.io.server.get_server()`.
This in turn attempts (with silent failure!?) to install a `SIGINT` handler that eventually calls `server.io_loop.stop()`.
This behavior stops the asyncio loop in its tracks without allowing any opportunity for code that shares the same event loop to perform its own orderly cleanup.
Note also that the signal handler is installed unconditionally; no arguments can be passed in to prevent it. In particular, this is done even when `start=False` is passed.
#### Expected:
IMHO, being intended for programmatic server operation from user code, it is none of `panel.serve()`'s business to do ANY kind of OS signal handling. Also, it is not within its charter to stop the event loop it is running on.
All of this should be handled by higher level code that has better awareness of the environment it is being run in, and what else may be running in it.
I expect `panel.serve()`, especially if called with `start=False`, to just create a server. That's it. Not `start()` and definitely not `stop()` it for me.
Even if this overreach is somehow deemed to be within the charter of `panel.serve()` in some cases, there should be a mechanism in place to allow the caller to prevent this when undesired.
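Until that happens, a caller can snapshot and restore the handler around the `panel.serve()` call. The sketch below demonstrates the pattern with a stand-in for the library call (the stand-in function is hypothetical; only the save/restore dance matters):

```python
import signal


def library_call_that_grabs_sigint():
    # stands in for panel.serve() installing its own SIGINT handler
    signal.signal(signal.SIGINT, lambda signum, frame: None)


original = signal.getsignal(signal.SIGINT)   # snapshot before the call
library_call_that_grabs_sigint()             # handler silently replaced here
assert signal.getsignal(signal.SIGINT) is not original
signal.signal(signal.SIGINT, original)       # put the original handler back
assert signal.getsignal(signal.SIGINT) is original
```

This is a workaround, not a fix: it only helps if the library registers its handler synchronously during the call being wrapped.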
### Complete, minimal, self-contained example code that reproduces the issue
In the following program, if SIGINT is received while in the `try` block, things will explode.
This is because the event loop gets stopped by `panel.serve` while the coroutine is inside `asyncio.wait(tasks_running, ...)`, and the `finally` clause never gets to run.
```python
import asyncio
import panel

fueling_dashboard: panel.viewable.Viewable


class SpaceShip:
    async def monitor_fuel_tanks(self, how: str):
        while True:
            fueling_dashboard.update_fuel_display(self.poll_fuel_sensors(how))
            await asyncio.sleep(0.5)

    async def make_launch_preparations(self, ...):
        from asyncio import create_task, ALL_COMPLETED
        fueling_dashboard_server = panel.serve(fueling_dashboard, start=False, threaded=False, ...)
        panel.state.execute(self.monitor_fuel_tanks('carefully'))  # schedule background update task before starting server
        fueling_dashboard_server.start()
        tasks_running = {create_task(self.pump_in_oxygen(fill_level=0.95)), create_task(self.pump_in_hydrogen(fill_level=0.95))}
        try:
            _, tasks_running = await asyncio.wait(tasks_running, return_when=ALL_COMPLETED)
        finally:
            was_successful = not tasks_running
            if not was_successful:  # something went wrong; e.g. KeyboardInterrupt or asyncio.CancelledError because SIGINT was received
                fueling_dashboard.big_red_flashing_lamp.turn_on()
                for task in tasks_running:  # stop pumping in
                    task.cancel()
                # empty tanks to prevent explosion
                await asyncio.wait({self.pump_out_oxygen(), self.pump_out_hydrogen()}, return_when=ALL_COMPLETED)
            fueling_dashboard_server.stop()  # no longer needed
        return was_successful
```
### Stack traceback and/or browser JavaScript console output
not applicable
### Screenshots or screencasts of the bug in action
not applicable
- [x] I may be interested in making a pull request to address this (but I'm not sure I know enough about what else could break)
| open | 2024-06-17T13:26:21Z | 2024-06-17T13:45:18Z | https://github.com/holoviz/panel/issues/6927 | [] | mcskatkat | 0 |
inducer/pudb | pytest | 257 | Failure on Windows 10 | `pip install` reports success:
```
C:\Users\bruce\Documents\Git\on-java>pip install pudb
Collecting pudb
Downloading pudb-2017.1.2.tar.gz (53kB)
100% |████████████████████████████████| 61kB 72kB/s
Collecting urwid>=1.1.1 (from pudb)
Downloading urwid-1.3.1.tar.gz (588kB)
100% |████████████████████████████████| 593kB 262kB/s
Collecting pygments>=1.0 (from pudb)
Using cached Pygments-2.2.0-py2.py3-none-any.whl
Installing collected packages: urwid, pygments, pudb
Running setup.py install for urwid ... done
Running setup.py install for pudb ... done
Successfully installed pudb-2017.1.2 pygments-2.2.0 urwid-1.3.1
```
But when I try to run the first example:
`python -m pudb.run binsearch.py`
I see:
```
Traceback (most recent call last):
File "C:\Python\Python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Python\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Python\Python36\lib\site-packages\pudb\run.py", line 38, in <module>
main()
File "C:\Python\Python36\lib\site-packages\pudb\run.py", line 34, in main
steal_output=options.steal_output)
File "C:\Python\Python36\lib\site-packages\pudb\__init__.py", line 64, in runscript
dbg = _get_debugger(steal_output=steal_output)
File "C:\Python\Python36\lib\site-packages\pudb\__init__.py", line 50, in _get_debugger
dbg = Debugger(**kwargs)
File "C:\Python\Python36\lib\site-packages\pudb\debugger.py", line 152, in __init__
self.ui = DebuggerUI(self, stdin=stdin, stdout=stdout, term_size=term_size)
File "C:\Python\Python36\lib\site-packages\pudb\debugger.py", line 1905, in __init__
self.screen = ThreadsafeRawScreen()
File "C:\Python\Python36\lib\site-packages\urwid\raw_display.py", line 89, in __init__
fcntl.fcntl(self._resize_pipe_rd, fcntl.F_SETFL, os.O_NONBLOCK)
NameError: name 'fcntl' is not defined
``` | closed | 2017-06-06T16:44:20Z | 2017-06-06T21:20:14Z | https://github.com/inducer/pudb/issues/257 | [] | BruceEckel | 10 |
alteryx/featuretools | data-science | 2,287 | Move all Natural Language primitives that don't require an external library into core Featuretools | - As a user of Featuretools core, I want to be able to apply NL primitives without installing nlp-primitives.
- We need to move all the primitives that don't require an external library into Featuretools core:
- https://github.com/alteryx/nlp_primitives | closed | 2022-09-12T18:00:12Z | 2022-10-20T16:25:16Z | https://github.com/alteryx/featuretools/issues/2287 | [] | gsheni | 0 |
PokeAPI/pokeapi | graphql | 373 | Availability of named resources (for current hosting) | This issue is to consolidate updates for the current outage of named resources (eg. /pokemon/bulbasaur) into one place.
See also #374 for work on getting named resources working when we move to Netlify. | closed | 2018-09-22T22:10:54Z | 2018-09-22T23:55:51Z | https://github.com/PokeAPI/pokeapi/issues/373 | [] | tdmalone | 14 |
slackapi/python-slack-sdk | asyncio | 959 | RTMClient v2 - OSError: [Errno 9] Bad file descriptor | RTMClient v2 regularly disconnects (26 times in a 24-hour period).
### Reproducible in:
#### The Slack SDK version
slack-sdk==3.3.1
#### Python runtime version
Python 3.8.7
#### OS info
Linux 5.4.97-gentoo #1 SMP
#### Steps to reproduce:
Run the RTMClientv2 for extended periods of time.
### Expected result:
The connection should remain established and only disconnect when the Slack server sends a `goodbye` event (after approximately 8 hours) or when there is a network disruption.
### Actual result:
There doesn't appear to be any `goodbye` event sent to trigger a disconnect. The disconnection issue only affects RTMClientv2 but not Events Socket Mode client. The bot is idle when the error occurs.
```
2021-02-15 16:51:17,915 DEBUG slack_sdk.rtm.v2 Message processing completed (type: hello)
2021-02-15 17:28:09,571 INFO slack_sdk.rtm.v2 The connection seems to be stale. Disconnecting... (session id: ae73a7cb-f2f0-41cd-ad7d-ced66fde6717)
2021-02-15 17:28:09,636 ERROR slack_sdk.rtm.v2 on_error invoked (session id: ae73a7cb-f2f0-41cd-ad7d-ced66fde6717, error: OSError, message: [Errno 9] Bad file descriptor)
Traceback (most recent call last):
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/connection.py", line 255, in run_until_completion
] = _receive_messages(
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/internals.py", line 131, in _receive_messages
return _fetch_messages(
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/internals.py", line 154, in _fetch_messages
remaining_bytes = receive()
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/internals.py", line 125, in receive
received_bytes = sock.recv(size)
File "/usr/lib/python3.8/ssl.py", line 1228, in recv
return super().recv(buflen, flags)
OSError: [Errno 9] Bad file descriptor
2021-02-15 17:28:09,636 INFO slack_sdk.rtm.v2 The connection has been closed (session id: ae73a7cb-f2f0-41cd-ad7d-ced66fde6717)
2021-02-15 17:28:09,637 INFO slack_sdk.rtm.v2 Stopped receiving messages from a connection (session id: ae73a7cb-f2f0-41cd-ad7d-ced66fde6717)
2021-02-15 17:28:09,637 INFO slack_sdk.rtm.v2 The session seems to be already closed. Going to reconnect... (session id: ae73a7cb-f2f0-41cd-ad7d-ced66fde6717)
2021-02-15 17:28:09,637 INFO slack_sdk.rtm.v2 Connecting to a new endpoint...
2021-02-15 17:28:10,215 INFO slack_sdk.rtm.v2 The connection has been closed (session id: ae73a7cb-f2f0-41cd-ad7d-ced66fde6717)
2021-02-15 17:28:10,215 INFO slack_sdk.rtm.v2 A new session has been established (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5)
2021-02-15 17:28:10,215 INFO slack_sdk.rtm.v2 Connected to a new endpoint...
2021-02-15 17:28:10,638 INFO slack_sdk.rtm.v2 Starting to receive messages from a new connection (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5)
2021-02-15 17:28:10,639 DEBUG slack_sdk.rtm.v2 on_message invoked: (message: {"type": "hello", "region":"eu-central-1", "start": true, "host_id":"gs-fra-angr"})
2021-02-15 17:28:10,639 DEBUG slack_sdk.rtm.v2 A new message enqueued (current queue size: 1)
2021-02-15 17:28:10,639 DEBUG slack_sdk.rtm.v2 A message dequeued (current queue size: 0)
2021-02-15 17:28:10,639 DEBUG slack_sdk.rtm.v2 Message processing started (type: hello)
2021-02-15 17:28:10,644 DEBUG slack_sdk.rtm.v2 Message processing completed (type: hello)
2021-02-15 18:34:23,979 INFO slack_sdk.rtm.v2 The connection seems to be stale. Disconnecting... (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5)
2021-02-15 18:34:24,127 ERROR slack_sdk.rtm.v2 on_error invoked (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5, error: OSError, message: [Errno 9] Bad file descriptor)
Traceback (most recent call last):
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/connection.py", line 255, in run_until_completion
] = _receive_messages(
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/internals.py", line 131, in _receive_messages
return _fetch_messages(
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/internals.py", line 154, in _fetch_messages
remaining_bytes = receive()
File "/opt/errbot/lib/python3.8/site-packages/slack_sdk/socket_mode/builtin/internals.py", line 125, in receive
received_bytes = sock.recv(size)
File "/usr/lib/python3.8/ssl.py", line 1228, in recv
return super().recv(buflen, flags)
OSError: [Errno 9] Bad file descriptor
2021-02-15 18:34:24,127 INFO slack_sdk.rtm.v2 The connection has been closed (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5)
2021-02-15 18:34:24,131 INFO slack_sdk.rtm.v2 Stopped receiving messages from a connection (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5)
2021-02-15 18:34:24,131 INFO slack_sdk.rtm.v2 The session seems to be already closed. Going to reconnect... (session id: 55954e5b-0b93-4c11-b515-362c90bf59f5)
2021-02-15 18:34:24,132 INFO slack_sdk.rtm.v2 Connecting to a new endpoint...
```
| closed | 2021-02-17T00:46:52Z | 2021-02-19T08:42:31Z | https://github.com/slackapi/python-slack-sdk/issues/959 | [
"rtm-client",
"Version: 3x"
] | nzlosh | 4 |
PaddlePaddle/models | computer-vision | 4,833 | Resolved | closed | 2020-09-03T03:12:17Z | 2020-09-03T07:32:16Z | https://github.com/PaddlePaddle/models/issues/4833 | [] | kyuer | 0 | |
d2l-ai/d2l-en | deep-learning | 2,097 | d2l makes os.environ["CUDA_VISIBLE_DEVICES"] = "1" invalid | When I try to import d2l into my project, I found that if os.environ["CUDA_VISIBLE_DEVICES"] is set after d2l has been imported, the setting has no effect.
<img width="443" alt="image" src="https://user-images.githubusercontent.com/38461329/162663129-7294395b-f58a-4de4-8eed-4dde85bd2229.png">
Before running this program:

When I run this program:

Even though setting it before d2l is imported works, I still hope this bug can be fixed, since that would be more elegant and convenient.
| closed | 2022-04-11T04:05:52Z | 2022-04-11T09:05:07Z | https://github.com/d2l-ai/d2l-en/issues/2097 | [] | FelliYang | 1 |
Gozargah/Marzban | api | 1,059 | Returning Null for admin_id When a User Is Created in Telegram Bot | The sudo admin has set their numeric Telegram ID in `.env` via `TELEGRAM_ADMIN_ID`.
In addition, the Telegram ID was also entered when the admin was created (in the `admins` table, the sudo user's numeric Telegram ID is present in the `telegram_id` column).
Now, when the sudo admin creates a user through the bot, the `admin_id` value in the `users` table is null:


The function `get_admin_by_telegram_id` exists in `app/db/crud.py`, but it is not used in the `confirm_user_command` function in `app/telegram/handlers/admin.py`:

| closed | 2024-06-23T21:17:27Z | 2024-07-03T06:34:55Z | https://github.com/Gozargah/Marzban/issues/1059 | [
"Bug"
] | amotlagh | 1 |
blb-ventures/strawberry-django-plus | graphql | 249 | hard dependency on django.contrib.auth | This line in utils/typing.py:
UserType: TypeAlias = Union[AbstractUser, AnonymousUser]
requires "django.contrib.auth" to be included in INSTALLED_APPS.
It would be better to use AbstractBaseUser from django/contrib/auth/base_user.py, as it doesn't require that inclusion and is even the base of AbstractUser and AnonymousUser, so an extra UserType is not necessary.
This would also improve compatibility with third-party user apps.
| closed | 2023-06-19T09:05:20Z | 2023-06-24T12:25:58Z | https://github.com/blb-ventures/strawberry-django-plus/issues/249 | [] | devkral | 1 |
KrishnaswamyLab/PHATE | data-visualization | 124 | Install issue, probably sklearn -> scikit-learn versions | We are having errors installing phate, and I think it tracks back to here, and to updates in scikit-learn. I posted details in the graphtools repo: https://github.com/KrishnaswamyLab/graphtools/issues/64.
I believe scikit-learn 1.2.0 breaks graphtools /PHATE. Have you seen this issue? Thanks in advance. | closed | 2022-12-27T15:29:49Z | 2023-01-03T07:14:53Z | https://github.com/KrishnaswamyLab/PHATE/issues/124 | [
"bug"
] | bbimber | 4 |
onnx/onnx | pytorch | 6,572 | It should be dim instead of a | https://github.com/onnx/onnx/blob/96a0ca4374d2198944ff882bd273e64222b59cb9/onnx/reference/ops/op_center_crop_pad.py#L24 | open | 2024-12-04T08:03:06Z | 2024-12-04T08:03:06Z | https://github.com/onnx/onnx/issues/6572 | [] | aksenventwo | 0 |
postmanlabs/httpbin | api | 661 | URL prefix? | Hey there,
My company has been using httpbin extensively, and while we think it's fantastic, one consistent problem we've been having is trying to serve httpbin behind a URL path prefix (e.g. not httpbin.mydomain.com but mydomain.com/httpbin). We've scoured the documentation and it does not seem like this is possible, although a Flask app is supposed to be able to support this. Is there any documentation I'm missing, or would it be possible to add support for this feature? | open | 2021-11-30T18:24:13Z | 2023-08-29T00:05:59Z | https://github.com/postmanlabs/httpbin/issues/661 | [] | juandiegopalomino | 1 |
mckinsey/vizro | pydantic | 798 | Add default styling for waterfall chart to chart template | Context: https://github.com/mckinsey/vizro/pull/786 | open | 2024-10-08T09:50:55Z | 2024-10-08T13:35:27Z | https://github.com/mckinsey/vizro/issues/798 | [] | huong-li-nguyen | 3 |
ansible/ansible | python | 84,743 | PowerShell module doesn't accept type list in spec | ### Summary
When I try to run a custom module whose spec has a list type, it errors with
`"Exception calling "Create" with "2" argument(s): Unable to cast object of type 'System.String' to type 'System.Collections.IList'.`
# min viable:
```powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
$spec = @{
    options = @{
        list_option = @{
            type = "list"
        }
    }
}
[Ansible.Basic.AnsibleModule]::Create($args, $spec)
```
### Issue Type
Bug Report
### Component Name
module_util Ansible.Basic
### Ansible Version
```console
ansible [core 2.18.2]
config file = None
configured module search path = [ '/home/vscode/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules' ]
ansible python module location = /mnt/poetry-persistent-cache/virtua1envs/j1abaca-8Y3pZR--lS-py3.12/1ib/python3.12/site-packages/ansib1e
ansible collection location = /hane/vscode/. ansible/collections:/usr/share/ansible/collections
executable location = /mnt/poetry-persistent-cache/virtua1envs/<env name>/bin/ansib1e
python version = 3.12.8 (main, Dec 4 2024, 20:37:48) [GCC 10.2.1 20210110] (/mnt/poetry-persistent-cache/virtua1envs/<env name>/bin/python)
jinja version = 3.1.5
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = None
GALAXY SERVERS:
```
### OS / Environment
VSCode devcontainers:
mcr.microsoft.com/devcontainers/python:3.12-bullseye
### Steps to Reproduce
```shell
ansible-playbook <playbook.yaml> -i <inventory file.yml>
```
### Expected Results
I expect it to create a valid Ansible module.
### Actual Results
```console
The full traceback is:
Exception calling "Create" with "2" argument(s): "Unable to cast object of type 'System.String' to type 'System.Collections.IList'."
At line:49 char:1
+ New-AnsibleModule -Arguments $args -Spec $spec
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AnsibleModule], MethodInvocationException
+ FullyQualifiedErrorId : InvalidCastException,New-AnsibleModule
ScriptStackTrace:
at New-AnsibleModule, <No file>: line 66
at <ScriptBlock>, <No file>: line 49
System.Management.Automation.MethodInvocationException: Exception calling "Create" with "2" argument(s): "Unable to cast object of type 'System.String' to type 'System.Collections.IList'." ---> System.InvalidCastException: Unable to cast object of type 'System.String' to type 'System.Collections.IList'.
at Ansible.Basic.AnsibleModule.CheckRequiredIf(IDictionary param, IList requiredIf) in c:\Users\mcanady\AppData\Local\Temp\1dfcd3e2-e15a-4631-b743-71d8667ecbbd\vltxogcu.0.cs:line 1198
at Ansible.Basic.AnsibleModule.CheckArguments(IDictionary spec, IDictionary param, List`1 legalInputs) in c:\Users\mcanady\AppData\Local\Temp\1dfcd3e2-e15a-4631-b743-71d8667ecbbd\vltxogcu.0.cs:line 1004
at Ansible.Basic.AnsibleModule..ctor(String[] args, IDictionary argumentSpec, IDictionary[] fragments) in c:\Users\mcanady\AppData\Local\Temp\1dfcd3e2-e15a-4631-b743-71d8667ecbbd\vltxogcu.0.cs:line 276
at CallSite.Target(Closure , CallSite , Type , String[] , Object )
--- End of inner exception stack trace ---
at System.Management.Automation.ExceptionHandlingOps.CheckActionPreference(FunctionContext funcContext, Exception exception)
at System.Management.Automation.Interpreter.ActionCallInstruction`2.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.EnterTryCatchFinallyInstruction.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.Interpreter.Run(InterpretedFrame frame)
at System.Management.Automation.Interpreter.LightLambda.RunVoid1[T0](T0 arg0)
at System.Management.Automation.PSScriptCmdlet.RunClause(Action`1 clause, Object dollarUnderbar, Object inputToProcess)
at System.Management.Automation.PSScriptCmdlet.DoEndProcessing()
at System.Management.Automation.CommandProcessorBase.Complete()
failed: [129.57.30.48] (item={'identity': 'JLAB\\CNILDS2$', 'rights': ['Enroll'], 'control_type': 'Deny'}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"control_type": "Deny",
"identity": "JLAB\\CNILDS2$",
"rights": [
"Enroll"
]
},
"msg": "Unhandled exception while executing module: Exception calling \"Create\" with \"2\" argument(s): \"Unable to cast object of type 'System.String' to type 'System.Collections.IList'.\""
}
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2025-02-24T16:12:51Z | 2025-03-10T13:00:03Z | https://github.com/ansible/ansible/issues/84743 | [
"bug",
"affects_2.18"
] | michaeldcanady | 5 |
python-gino/gino | sqlalchemy | 692 | Continually Getting TypeError: 'GinoExecutor' object is not callable | * GINO version: 1.0.0
* Python version: 3.7
* asyncpg version: 0.20.1
* aiocontextvars version: Not installed
* PostgreSQL version: 12.1
### Description
I am attempting to make a simple query but am unable to.
### What I Did
While running:
```
domain = await Domain.get(Domain.domain == 'http://domain.com').gino().first()
```
I get:
```
TypeError: 'GinoExecutor' object is not callable
```
In context, here's some more code:
```python
# ./db/__init__.py
from gino import Gino
from scraper import config
print(config.DB_DSN)
import asyncio

gino_db = Gino()


async def main():
    await gino_db.set_bind(config.DB_DSN)

asyncio.get_event_loop().run_until_complete(main())

# Import your models here so Alembic will pick them up
from db.models import *
```
My main.py:
```python
...
loop.run_until_complete(init_db())
...
```
And finally the init_db() function
```python
async def init_db():
engine = await gino_db.set_bind(config.DB_DSN, echo=True)
domain = await Domain.query.where(Domain.domain == 'http://equipomedia.com').gino().first()
```
What's going wrong? In the same code I'm able to call ```domains = await Domain.query.gino.all()``` and get a valid response.
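The error suggests `gino` is an attribute (a `GinoExecutor` instance), not a method, so calling it with parentheses fails. A self-contained sketch of the mechanism (toy classes, not gino's real implementation):

```python
class Executor:
    """Toy stand-in for GinoExecutor."""

    def first(self):
        return "first row"


class Query:
    @property
    def gino(self):
        # attribute access returns the executor object itself
        return Executor()


q = Query()
try:
    q.gino()           # parentheses try to *call* the executor -> TypeError
except TypeError as e:
    print(e)           # 'Executor' object is not callable
print(q.gino.first())  # attribute access, then the method call: works
```

If the analogy holds, `await Domain.query.where(...).gino.first()` (no parentheses after `gino`) should behave like the working `Domain.query.gino.all()` call.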
[Github Repo](https://github.com/austincollinpena/spell-check-scraper) | closed | 2020-06-03T08:46:19Z | 2020-06-03T15:06:26Z | https://github.com/python-gino/gino/issues/692 | [] | austincollinpena | 4 |
ultralytics/ultralytics | pytorch | 19,708 | Assertion Error | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hellooo,
I have done fine-tuning like this, with my images at a different size than usual:
```python
from ultralytics import YOLO

model = YOLO('yolo11m-seg.pt')
results = model.train(
    data='data.yaml',
    epochs=100,
    imgsz=2560,
    batch=1,
    device=0,
    lr0=0.01,
    weight_decay=0.0005,
    workers=8
)
```
With my .pt model, I compiled it to .engine like this: `model.export(format="engine")`
with input shape (1, 3, 2560, 2560) BCHW and output shape(s) ((1, 37, 134400), (1, 32, 640, 640)) (129.3 MB)
TensorRT: input "images" with shape(1, 3, 2560, 2560) DataType.FLOAT
TensorRT: output "output0" with shape(1, 37, 134400) DataType.FLOAT
TensorRT: output "output1" with shape(1, 32, 640, 640) DataType.FLOAT
[03/14/2025-18:32:49] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[03/14/2025-18:34:20] [TRT] [I] Compiler backend is used during engine build.
[03/14/2025-18:36:05] [TRT] [E] Error Code: 9: Skipping tactic 0x001526e231ae2e51 due to exception Cask convolution execution
[03/14/2025-18:37:08] [TRT] [I] [GraphReduction] The approximate region cut reduction algorithm is called.
[03/14/2025-18:37:08] [TRT] [I] Detected 1 inputs and 5 output network tensors.
[03/14/2025-18:37:09] [TRT] [I] Total Host Persistent Memory: 731472 bytes
[03/14/2025-18:37:09] [TRT] [I] Total Device Persistent Memory: 1952256 bytes
[03/14/2025-18:37:09] [TRT] [I] Max Scratch Memory: 1320550400 bytes
[03/14/2025-18:37:09] [TRT] [I] [BlockAssignment] Started assigning block shifts. This will take 363 steps to complete.
[03/14/2025-18:37:09] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 27.7015ms to assign 13 blocks to 363 nodes requiring 2123367424 bytes.
[03/14/2025-18:37:09] [TRT] [I] Total Activation Memory: 2123366400 bytes
[03/14/2025-18:37:09] [TRT] [I] Total Weights Memory: 105195780 bytes
[03/14/2025-18:37:09] [TRT] [I] Compiler backend is used during engine execution.
[03/14/2025-18:37:09] [TRT] [I] Engine generation completed in 260.223 seconds.
[03/14/2025-18:37:09] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 9 MiB, GPU 3202 MiB
Everything seems to end up correctly but when I try to run inference, I always get the same error:
assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}"
AssertionError: input size torch.Size([1, 3, 640, 640]) not equal to max model size (1, 3, 2560, 2560)
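A minimal reproduction of that shape check (a sketch of the assertion's logic with plain tuples, not ultralytics' actual code). Since the exported engine has a static 2560 input, the likely fix is to request the matching size at inference time, e.g. `model.predict(source, imgsz=2560)`:

```python
# sketch of the check that raises, with plain tuples standing in for tensors
engine_shape = (1, 3, 2560, 2560)   # static shape baked into the .engine export
im_shape = (1, 3, 640, 640)         # default size the predictor letterboxes to
dynamic = False                      # engine exported without dynamic axes

try:
    assert im_shape == engine_shape, (
        f"input size {im_shape} "
        f"{'>' if dynamic else 'not equal to'} max model size {engine_shape}"
    )
except AssertionError as e:
    print(e)  # input size (1, 3, 640, 640) not equal to max model size (1, 3, 2560, 2560)

# requesting imgsz=2560 at predict time makes the preprocessed shape match:
im_shape = (1, 3, 2560, 2560)
assert im_shape == engine_shape      # passes
```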
### Additional
_No response_ | closed | 2025-03-14T23:15:49Z | 2025-03-15T22:13:59Z | https://github.com/ultralytics/ultralytics/issues/19708 | [
"question",
"segment",
"exports"
] | davidpacios | 5 |
vanna-ai/vanna | data-visualization | 408 | Add No-SQL Database Support | Description: This feature request proposes adding support for No-SQL databases to the vanna project. No-SQL databases are a type of database that is not based on the relational model. They are a good option for storing data that does not fit well into a relational schema, such as JSON data.
Benefits:
- Increased flexibility: No-SQL databases can store a wider variety of data types than relational databases.
- Improved scalability: No-SQL databases can scale more easily than relational databases.
- Reduced complexity: No-SQL databases can simplify the development process by eliminating the need to design and manage relational schemas.
Drawbacks:
- Potential for increased complexity: While No-SQL databases can simplify the development process in some cases, they can also introduce complexity if not used carefully.
- Limited querying capabilities: No-SQL databases may not offer the same level of querying capabilities as relational databases.
Proposed Implementation:
- The vanna project should be extended to support a popular No-SQL database, such as MongoDB or Firebase.
- The project should provide a way for users to connect to their No-SQL database instance.
- The project should provide a way for users to query and manipulate their data in the No-SQL database.
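To make the proposal concrete, here is one possible shape for a backend interface, with an in-memory implementation standing in for a real No-SQL client (all names are hypothetical, not part of vanna's API):

```python
from abc import ABC, abstractmethod


class NoSQLConnector(ABC):
    """Hypothetical interface a No-SQL backend could implement."""

    @abstractmethod
    def connect(self, uri: str) -> "NoSQLConnector": ...

    @abstractmethod
    def run_query(self, collection: str, query: dict) -> list: ...


class InMemoryConnector(NoSQLConnector):
    """Stand-in for a MongoDB/Firebase client, good enough for tests."""

    def __init__(self):
        self.collections = {}

    def connect(self, uri: str) -> "NoSQLConnector":
        self.uri = uri  # a real backend would open a client connection here
        return self

    def run_query(self, collection: str, query: dict) -> list:
        docs = self.collections.get(collection, [])
        # match documents whose fields equal every key/value in the query
        return [d for d in docs if all(d.get(k) == v for k, v in query.items())]


db = InMemoryConnector().connect("memory://local")
db.collections["users"] = [{"name": "ada", "role": "admin"}, {"name": "bob", "role": "user"}]
print(db.run_query("users", {"role": "admin"}))  # [{'name': 'ada', 'role': 'admin'}]
```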
I hope this helps! | open | 2024-05-04T16:33:26Z | 2025-03-20T06:33:58Z | https://github.com/vanna-ai/vanna/issues/408 | [] | shrijayan | 4 |
deeppavlov/DeepPavlov | nlp | 1,239 | Provide an extensive example of API usage in go-bot | The go-bot is able ([example](https://github.com/deepmipt/DeepPavlov/blob/master/examples/gobot_extended_tutorial.ipynb)) to query an API to get data. This allows it to carry out a dialog relying on some explicit knowledge.
In the provided example the bot queries a database to perform read operations. Operations of other classes (e.g. update) seem to be possible, but we have no examples of such cases.
**The contribution could follow these steps**:
* discover the purpose of update- queries in goal-oriented datasets
* find the dataset with such api calls or generate an artificial one
* provide an example of the go-bot trained on this dataset and using this data | closed | 2020-06-04T13:50:55Z | 2022-04-06T10:17:34Z | https://github.com/deeppavlov/DeepPavlov/issues/1239 | [
"Documentation",
"code",
"easy"
] | oserikov | 1 |
mithi/hexapod-robot-simulator | plotly | 33 | ❗Some unstable poses are marked as stable | ❗ Some unstable poses are not marked as unstable
<img width="1253" alt="Screen Shot 2020-04-18 at 10 19 55 PM" src="https://user-images.githubusercontent.com/1670421/79640224-f9a1f200-81c2-11ea-9ee0-c18d8d655417.png">
| closed | 2020-04-11T10:26:25Z | 2020-04-23T15:22:07Z | https://github.com/mithi/hexapod-robot-simulator/issues/33 | [
"bug",
"help wanted",
"PRIORITY"
] | mithi | 1 |
Johnserf-Seed/TikTokDownload | api | 719 | [BUG] Error Downloading TikTok Posts Due to msToken API Error | ## Detailed description of the error
When using the `f2 tk` command to try to download TikTok posts, I frequently run into errors. The same setup process works fine when downloading Douyin.
## System platform
<details>
<summary>Click to expand</summary>
- **Operating system**: Windows 11
- **Python version**: Python 3.11.1
- **F2 version**: 0.0.1.5
- **Browser**: Firefox
- **Network environment**: United States, no proxy
</details>
## Steps to reproduce
<details>
<summary>Click to expand</summary>
1. Set up the TikTok configuration in the `douyin\Lib\site-packages\f2\conf\app.yaml` file.
2. Run the command `f2 tk --auto-cookie firefox`.
3. Run the command `f2 tk -u https://www.tiktok.com/@xxxx`.
**Configuration file (app.yaml)**:
```yaml
tiktok:
cookie: ak_bmsc=xxxx; passport_csrf_token=xxxx; passport_csrf_token_default=xxxx; multi_sids=xxxx; cmpl_token=xxxx; passport_auth_status=xxxx; passport_auth_status_ss=xxxx; sid_guard=xxxx; uid_tt=xxxx; uid_tt_ss=xxxx; sid_tt=xxxx; sessionid=xxxx; sessionid_ss=xxxx; ssid_ucp_v1=xxxx; tt-target-idc-sign=xxxx; tt_chain_token=xxxx; bm_sv=xxxx; store-idc=xxxx; ttwid=xxxx; odin_tt=xxxx; msToken=xxxx; tiktok_webapp_theme=light
cover: false
desc: false
folderize: false
interval: all
languages: en_US
max_connections: 5
max_counts: 0
max_retries: 4
max_tasks: 6
mode: post
music: false
naming: '{create}_{aweme_id}_{desc}'
page_counts: 20
path: ./Download
timeout: 6
```
**Error message**:
```plaintext
ERROR msToken API error: the msToken content does not meet the requirements
INFO Generating a fake msToken
INFO Generating a fake msToken
ERROR 解析
https://www.tiktok.com/api/user/detail/?WebIdLastTime=1716643217&aid=1988&app_language=zh-Hans&app_name=tiktok_web&browser_language=zh-CN&browser_name=Mozilla&browser_online=true&browser_platform=Win32&browser_version=5.0%20(Windows%20NT%2010.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36&channel=tiktok_web&cookie_enabled=true&device_id=7306060721837852167&device_platform=web_pc&focus_state=true&from_page=user&history_len=4&is_fullscreen=false&is_page_visible=true&language=zh-Hans&os=windows&priority_region=&referer=®ion=SG&root_referer=https://www.tiktok.com/&screen_height=1080&screen_width=1920&webcast_language=zh-Hans&tz_name=Asia/Hong_Kong&msToken=BsX4yCeHR4DH+4Z1kGkzncqJnTIraH3-ZL85kv0lC3us+O04tDhlndLqN7Ff8zZq134EIxtk06RDjnRFvhQRVgIHJFhJDRee+RcSZ8d6dWQ9k22tf5uVDevFoNkO3q43FlszpFZ1X2WzgFM9MP==&secUid=MS4wLjABAAAA1TQUIEMS7ThX22wMrfKDn1G_yIYHRQ4kCM3WxsgDUJbLYN5SExsIbJH_-L5YG-gY&uniqueId=&X-Bogus=DFS...
API JSON failed: Expecting property name enclosed in double quotes: line 2 column 5 (char 6)
ERROR    API content request failed; please replace the cookie with a new one and try again
```
**Debug command**:
Please run the command `f2 -d DEBUG` and attach the log files from the log directory to provide information for diagnosis.
</details>
## Expected behavior
I expect the posts to download successfully without hitting the msToken error.
## Screenshots
If applicable, add screenshots to help explain the problem.
## Log files
Please attach the debug log files to help diagnose the issue.
## Additional information
Any additional information that may help resolve the issue.
- [x] I have checked the [documentation](https://johnserf-seed.github.io/f2/quick-start.html) and [closed issues](https://github.com/Johnserf-Seed/f2/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I did not find my issue in the [FAQ](https://johnserf-seed.github.io/f2/question-answer/qa.html).
- [x] This issue is public, and I have removed all sensitive information.
- [x] I understand that issues not submitted according to the template will not be prioritized. | closed | 2024-05-25T13:39:14Z | 2024-07-03T00:11:59Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/719 | [
"故障(bug)",
"已确认(confirmed)"
] | iDataist | 4 |
521xueweihan/HelloGitHub | python | 2,844 | [Open Source Self-Recommendation] ReactPress — a React-based blog & CMS content management system | ## Recommended project
- Project URL: <https://github.com/fecommunity/reactpress>
<!-- Please choose one of: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: JS
<!-- Describe what it does in about 20 words, like an article title, so it is clear at a glance -->
- Project title: A React-based blog & CMS content management system
<!-- What is this project, what can it do, what features or pain points does it address, what scenarios does it suit, and what can beginners learn from it? Length 32-256 characters -->
- Project description: `ReactPress` is an open-source publishing platform developed with React. Users can set up their own blog or website on a server that supports React and a MySQL database. `ReactPress` can also be used as a content management system (CMS).
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights:
  - 📦 Tech stack: built with `React` + `NextJS` + `MySQL 5.7` + `NestJS`
  - 🌈 Componentized: interaction language and visual style based on the latest `antd 5.20`
  - 🌍 Internationalization: supports switching between Chinese and English, with i18n configuration management
  - 🌞 Light/dark themes: freely switch between light and dark mode
  - 🖌️ Writing management: built-in `Markdown` editor with support for writing articles, managing categories, and managing tags
  - 📃 Page management: supports creating custom pages
  - 💬 Comment management: supports managing content comments
  - 📷️ Media management: supports local file upload and `OSS` file upload
  - 📱 Mobile: fully adapted for mobile H5 pages
- Example code:
```bash
$ git clone --depth=1 https://github.com/fecommunity/reactpress.git
$ cd reactpress
$ npm i -g pnpm
$ pnpm i
```
- Screenshots:






- Future update plan:
Updated weekly.
| open | 2024-11-11T14:40:52Z | 2024-11-11T14:40:52Z | https://github.com/521xueweihan/HelloGitHub/issues/2844 | [] | fecommunity | 0 |
odoo/odoo | python | 202,218 | [18.0] hr_holidays: on leave refusal, first_approver_id and second_approver_id are wrongly updated | ### Odoo Version
- [ ] 16.0
- [ ] 17.0
- [x] 18.0
- [ ] Other (specify)
### Steps to Reproduce
Given a leave whose validation type is 'both' (validated 'By Employee's Approver and Time Off Officer'):
When the employee's first approver approves the leave and the second approver then refuses it,
Then the leave's first_approver_id and second_approver_id are wrong.
When the employee's first approver refuses the leave,
Then the leave's first_approver_id and second_approver_id are wrong.
```
def action_refuse(self):
...
validated_holidays = self.filtered(lambda hol: hol.state == 'validate1')
validated_holidays.write({'state': 'refuse', 'first_approver_id': current_employee.id})
(self - validated_holidays).write({'state': 'refuse', 'second_approver_id': current_employee.id})
....
```
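To make the mis-assignment concrete, here is a minimal, framework-free sketch of the quoted filtering (plain dicts stand in for `hr.leave` records; nothing here touches Odoo itself). When the second approver refuses a leave that sits in `validate1`, the refuser's id overwrites `first_approver_id` while `second_approver_id` stays unset:

```python
# Plain-Python sketch of the snippet above; dicts stand in for hr.leave records.
def action_refuse(leaves, current_employee_id):
    validated_holidays = [l for l in leaves if l["state"] == "validate1"]
    others = [l for l in leaves if l["state"] != "validate1"]
    for leave in validated_holidays:
        leave.update(state="refuse", first_approver_id=current_employee_id)
    for leave in others:
        leave.update(state="refuse", second_approver_id=current_employee_id)

# A 'both'-validation leave already approved once: state 'validate1',
# first_approver_id set to the real first approver (employee 1).
leave = {"state": "validate1", "first_approver_id": 1, "second_approver_id": False}
action_refuse([leave], current_employee_id=2)  # employee 2 (the second approver) refuses
print(leave)  # first_approver_id is now 2, the refusing second approver
```

In the first-refusal scenario the complement branch applies instead, writing the refusing first approver into `second_approver_id`, which matches the second case described above.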
### Log Output
```shell
```
### Support Ticket
_No response_ | open | 2025-03-18T05:20:46Z | 2025-03-18T05:20:46Z | https://github.com/odoo/odoo/issues/202218 | [] | dsauvage | 0 |
learning-at-home/hivemind | asyncio | 320 | [Minor] make "could not connect" errors in example more pronounced | In examples/albert, if a peer cannot connect to others, it will print something like:
```
[...][WARN][dht.node.create:234] DHTNode bootstrap failed: none of the initial_peers responded to a ping.
```
Which is nice, but no one will ever see this warning among 100+ lines of other logs (training config, module warnings, etc)
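A minimal sketch of the proposed change (hypothetical names, not hivemind's actual API), where the bootstrap failure raises instead of only logging:

```python
import logging

logger = logging.getLogger("dht.node")

def finish_bootstrap(num_peers_responded: int, *, strict: bool = True) -> None:
    """Hypothetical sketch: fail hard when no initial peer answered the ping."""
    if num_peers_responded == 0:
        msg = "DHTNode bootstrap failed: none of the initial_peers responded to a ping."
        if strict:
            # A raised error stops the run and cannot be missed among other logs.
            raise ConnectionError(msg)
        logger.warning(msg)  # current behavior: easy to lose in 100+ log lines

finish_bootstrap(3)  # peers responded: nothing happens
```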
Let's make this warning into an error? | closed | 2021-07-15T13:10:20Z | 2021-08-20T15:08:42Z | https://github.com/learning-at-home/hivemind/issues/320 | [
"enhancement"
] | justheuristic | 0 |
jupyterlab/jupyter-ai | jupyter | 871 | Jupyter AI plugin schema is never loaded; contains unused cell toolbar and menus | I was trying to add a shortcut for https://github.com/jupyterlab/jupyter-ai/issues/799 and noticed that there is `schema/plugin.json` which contains a cell toolbar button and menu actions:
https://github.com/jupyterlab/jupyter-ai/blob/5183bc9281d81a953b0f360e76b03dc15f3d8987/packages/jupyter-ai/schema/plugin.json#L7-L43
However, these do not show up in the UI because the schema is not correctly hooked up and these commands are not defined. Is this intended?
Also, I am quite confused about the design direction, because the original version of the UI did contain a dedicated cell toolbar button, but it seems this was changed early on and I cannot find an issue or PR documenting the rationale for this change. | closed | 2024-07-05T09:24:31Z | 2024-07-08T15:51:15Z | https://github.com/jupyterlab/jupyter-ai/issues/871 | [
"bug"
] | krassowski | 1 |
facebookresearch/fairseq | pytorch | 5,538 | Has anyone got the MMPT example to work? | ## 🐛 Bug
I am running into many dependency errors when trying to use the VideoCLIP model. If anyone has gotten it to work, please share details such as which package versions you installed.
By the way, I'm trying to get the model to compare whether a video and a text match and produce a score from that.
I am currently stuck on this error while running the example code:
```
---------------------------------------------------------------------------
ConfigAttributeError Traceback (most recent call last)
Cell In[1], line 6
1 import torch
3 from mmpt.models import MMPTModel
----> 6 model, tokenizer, aligner = MMPTModel.from_pretrained(
7 "projects/retri/videoclip/how2.yaml")
9 model.eval()
12 # B, T, FPS, H, W, C (VideoCLIP is trained on 30 fps of s3d)
File ~/PycharmProjects/fairseq/examples/MMPT/mmpt/models/mmfusion.py:39, in MMPTModel.from_pretrained(cls, config, checkpoint)
37 from ..utils import recursive_config
38 from ..tasks import Task
---> 39 config = recursive_config(config)
40 mmtask = Task.config_task(config)
41 checkpoint_path = os.path.join(config.eval.save_path, checkpoint)
File ~/PycharmProjects/fairseq/examples/MMPT/mmpt/utils/load_config.py:58, in recursive_config(config_path)
56 """allows for stacking of configs in any depth."""
57 config = OmegaConf.load(config_path)
---> 58 if config.includes is not None:
59 includes = config.includes
60 config.pop("includes")
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/dictconfig.py:355, in DictConfig.__getattr__(self, key)
351 return self._get_impl(
352 key=key, default_value=_DEFAULT_MARKER_, validate_key=False
353 )
354 except ConfigKeyError as e:
--> 355 self._format_and_raise(
356 key=key, value=None, cause=e, type_override=ConfigAttributeError
357 )
358 except Exception as e:
359 self._format_and_raise(key=key, value=None, cause=e)
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/base.py:231, in Node._format_and_raise(self, key, value, cause, msg, type_override)
223 def _format_and_raise(
224 self,
225 key: Any,
(...)
229 type_override: Any = None,
230 ) -> None:
--> 231 format_and_raise(
232 node=self,
233 key=key,
234 value=value,
235 msg=str(cause) if msg is None else msg,
236 cause=cause,
237 type_override=type_override,
238 )
239 assert False
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/_utils.py:899, in format_and_raise(node, key, value, msg, cause, type_override)
896 ex.ref_type = ref_type
897 ex.ref_type_str = ref_type_str
--> 899 _raise(ex, cause)
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/_utils.py:797, in _raise(ex, cause)
795 else:
796 ex.__cause__ = None
--> 797 raise ex.with_traceback(sys.exc_info()[2])
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/dictconfig.py:351, in DictConfig.__getattr__(self, key)
348 raise AttributeError()
350 try:
--> 351 return self._get_impl(
352 key=key, default_value=_DEFAULT_MARKER_, validate_key=False
353 )
354 except ConfigKeyError as e:
355 self._format_and_raise(
356 key=key, value=None, cause=e, type_override=ConfigAttributeError
357 )
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/dictconfig.py:442, in DictConfig._get_impl(self, key, default_value, validate_key)
438 def _get_impl(
439 self, key: DictKeyType, default_value: Any, validate_key: bool = True
440 ) -> Any:
441 try:
--> 442 node = self._get_child(
443 key=key, throw_on_missing_key=True, validate_key=validate_key
444 )
445 except (ConfigAttributeError, ConfigKeyError):
446 if default_value is not _DEFAULT_MARKER_:
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/basecontainer.py:73, in BaseContainer._get_child(self, key, validate_access, validate_key, throw_on_missing_value, throw_on_missing_key)
64 def _get_child(
65 self,
66 key: Any,
(...)
70 throw_on_missing_key: bool = False,
71 ) -> Union[Optional[Node], List[Optional[Node]]]:
72 """Like _get_node, passing through to the nearest concrete Node."""
---> 73 child = self._get_node(
74 key=key,
75 validate_access=validate_access,
76 validate_key=validate_key,
77 throw_on_missing_value=throw_on_missing_value,
78 throw_on_missing_key=throw_on_missing_key,
79 )
80 if isinstance(child, UnionNode) and not _is_special(child):
81 value = child._value()
File /opt/anaconda3/envs/test_env_3_9/lib/python3.9/site-packages/omegaconf/dictconfig.py:480, in DictConfig._get_node(self, key, validate_access, validate_key, throw_on_missing_value, throw_on_missing_key)
478 if value is None:
479 if throw_on_missing_key:
--> 480 raise ConfigKeyError(f"Missing key {key!s}")
481 elif throw_on_missing_value and value._is_missing():
482 raise MissingMandatoryValue("Missing mandatory value: $KEY")
ConfigAttributeError: Missing key includes
full_key: includes
object_type=dict
``` | open | 2024-09-05T18:08:36Z | 2024-10-13T07:51:31Z | https://github.com/facebookresearch/fairseq/issues/5538 | [
"bug",
"needs triage"
] | qingy1337 | 1 |
jeffknupp/sandman2 | sqlalchemy | 112 | does sandman2 query data support “>” "<" | ### Environment
MySQL 8.0
sandman2 1.2.1
pymysql 0.9.3
Postman 7.2.2
Operating system: win7 x64
### Description of issue
**step1 create table and insert data**
Create database pspace, create table t_areainfo, and insert data into the table,
like this:
```sql
create database if not exists pspace;
create table if not exists pspace.t_areainfo(
id int primary key,
level int,
name varchar(255),
parentId int,
status int
);
insert into pspace.t_areainfo values(1, 0, 'aaa', 0, 0),(2, 0, 'bbb', 1, 0),(3, 0, 'ccc', 1, 0),(4, 0, 'ddd', 2, 0);
```
**step2 start sandman2ctl then use postman query data**
Start sandman2ctl in cmd like this: `sandman2ctl mysql+pymysql://admin:juan@localhost/pspace`.
Then use Postman to query data with GET at the URL `127.0.0.1:5000/t_areainfo/`; it works well.
**step3 query data by get and use "<"**
Query data with GET at the URL `127.0.0.1:5000/t_areainfo/?id<3` and receive this message:
"{
"message": "Invalid field [id<3]"
}"
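For what it's worth, the error text hints at what the server sees: in a query string, `id<3` contains no `=`, so the whole token is parsed as a field *name*, and sandman2 then looks for a column literally named `id<3`. A quick check with the standard library (no server needed):

```python
from urllib.parse import urlsplit, parse_qsl

# `id<3` has no '=', so the entire token becomes the parameter *name*,
# which is why the server answers "Invalid field [id<3]".
query = urlsplit("http://127.0.0.1:5000/t_areainfo/?id<3").query
pairs = parse_qsl(query, keep_blank_values=True)
print(pairs)  # → [('id<3', '')]
```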
Is this a bug, or is it simply not supported? Please let me know, thanks. | closed | 2019-07-11T06:35:48Z | 2019-07-22T07:33:35Z | https://github.com/jeffknupp/sandman2/issues/112 | [] | abcweizhuo | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,151 | Why aren't the camera's intrinsic parameters (cx, cy) adjusted with image resolution? | Why, when the image resolution is reduced by factors of 2, 4, or 8, are the camera's intrinsic parameters not adjusted, especially the values of cx and cy?
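For reference, the standard pinhole relation does tie intrinsics to resolution: downscaling an image by a factor s divides fx, fy, cx and cy by s. Whether a given implementation needs to apply this explicitly, or instead derives its projection from a resolution-independent field of view, is the open question here. A minimal sketch with generic variable names:

```python
def scale_intrinsics(fx, fy, cx, cy, s):
    """Pinhole intrinsics for the same camera after downscaling the image by s."""
    return fx / s, fy / s, cx / s, cy / s

# e.g. a 1600x960 image whose principal point sits at the center
fx, fy, cx, cy = 1000.0, 1000.0, 800.0, 480.0
print(scale_intrinsics(fx, fy, cx, cy, 2))  # → (500.0, 500.0, 400.0, 240.0)
```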
| open | 2025-02-03T16:43:58Z | 2025-03-04T08:34:51Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1151 | [] | scott198510 | 1 |
strawberry-graphql/strawberry | fastapi | 3,349 | When i use run with python3, ImportError occured | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
ImportError: cannot import name 'GraphQLError' from 'graphql'
## Describe the Bug
It works well when executed with `poetry run app.main:main`.
However, when executing with `python3 app/main.py`, the following ImportError occurs.
_**Error occured code line**_
<img width="849" alt="image" src="https://github.com/strawberry-graphql/strawberry/assets/10377550/713ab6b4-76c1-4b0d-84e1-80903f8855ea">
**_Traceback_**
```bash
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/evanhwang/dev/ai-hub/hub-api/app/bootstrap/admin/bootstrapper.py", line 4, in <module>
from app.bootstrap.admin.router import AdminRouter
File "/Users/evanhwang/dev/ai-hub/hub-api/app/bootstrap/admin/router.py", line 3, in <module>
import strawberry
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/__init__.py", line 1, in <module>
from . import experimental, federation, relay
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/federation/__init__.py", line 1, in <module>
from .argument import argument
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/federation/argument.py", line 3, in <module>
from strawberry.arguments import StrawberryArgumentAnnotation
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/arguments.py", line 18, in <module>
from strawberry.annotation import StrawberryAnnotation
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/annotation.py", line 23, in <module>
from strawberry.custom_scalar import ScalarDefinition
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/custom_scalar.py", line 19, in <module>
from strawberry.exceptions import InvalidUnionTypeError
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/exceptions/__init__.py", line 6, in <module>
from graphql import GraphQLError
ImportError: cannot import name 'GraphQLError' from 'graphql' (/Users/evanhwang/dev/ai-hub/hub-api/app/graphql/__init__.py)
```
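One observation from the traceback (an observation, not a confirmed diagnosis): the failing `from graphql import GraphQLError` resolves to the project's own `/app/graphql/__init__.py` instead of the installed graphql-core package. Running `python3 app/main.py` puts `app/` itself at the front of `sys.path`, so any local package named like a dependency shadows it; `poetry run` with a module entry point does not hit this. A self-contained reproduction of the mechanism, using `json` as the shadowed name:

```python
import subprocess, sys, tempfile
from pathlib import Path

# A local package named like a dependency shadows it when the script is run
# directly, because sys.path[0] is then the script's own directory.
with tempfile.TemporaryDirectory() as tmp:
    app = Path(tmp, "app")
    (app / "json").mkdir(parents=True)
    (app / "json" / "__init__.py").write_text("")             # empty shadow package
    (app / "main.py").write_text("from json import loads\n")  # resolves to app/json
    proc = subprocess.run([sys.executable, str(app / "main.py")],
                          capture_output=True, text=True)
    print("ImportError" in proc.stderr)  # → True
```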
## System Information
- Operating System: Mac Ventura 13.5.1(22G90)
- Strawberry Version (if applicable):
Entered in pyproject.toml as follows:
```bash
strawberry-graphql = {extras = ["debug-server", "fastapi"], version = "^0.217.1"}
```
**_pyproject.toml_**
```toml
##############################################################################
# poetry dependency settings
# - https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# - By default, dependencies are resolved from PyPI.
##############################################################################
[tool.poetry.dependencies]
python = "3.11.*"
fastapi = "^0.103.2"
uvicorn = "^0.23.2"
poethepoet = "^0.24.0"
requests = "^2.31.0"
poetry = "^1.6.1"
sqlalchemy = "^2.0.22"
sentry-sdk = "^1.32.0"
pydantic-settings = "^2.0.3"
psycopg2-binary = "^2.9.9"
cryptography = "^41.0.4"
python-ulid = "^2.2.0"
ulid = "^1.1"
redis = "^5.0.1"
aiofiles = "^23.2.1"
pyyaml = "^6.0.1"
python-jose = "^3.3.0"
strawberry-graphql = {extras = ["debug-server", "fastapi"], version = "^0.217.1"}
[tool.poetry.group.dev.dependencies]
pytest = "^7.4.0"
pytest-mock = "^3.6.1"
httpx = "^0.24.1"
poetry = "^1.5.1"
sqlalchemy = "^2.0.22"
redis = "^5.0.1"
mypy = "^1.7.0"
types-aiofiles = "^23.2.0.0"
types-pyyaml = "^6.0.12.12"
commitizen = "^3.13.0"
black = "^23.3.0" # fortmatter
isort = "^5.12.0" # import 정렬
pycln = "^2.1.5" # unused import 정리
ruff = "^0.0.275" # linting
##############################################################################
# poethepoet
# - https://github.com/nat-n/poethepoet
# - Task runner configuration via poe
##############################################################################
types-requests = "^2.31.0.20240106"
pre-commit = "^3.6.0"
[tool.poe.tasks.format-check-only]
help = "Check without formatting with 'pycln', 'black', 'isort'."
sequence = [
{cmd = "pycln --check ."},
{cmd = "black --check ."},
{cmd = "isort --check-only ."}
]
[tool.poe.tasks.format]
help = "Run formatter with 'pycln', 'black', 'isort'."
sequence = [
{cmd = "pycln -a ."},
{cmd = "black ."},
{cmd = "isort ."}
]
[tool.poe.tasks.lint]
help = "Run linter with 'ruff'."
cmd = "ruff ."
[tool.poe.tasks.type-check]
help = "Run type checker with 'mypy'"
cmd = "mypy ."
[tool.poe.tasks.clean]
help = "Clean mypy_cache, pytest_cache, pycache..."
cmd = "rm -rf .coverage .mypy_cache .pytest_cache **/__pycache__"
##############################################################################
# isort
# - https://pycqa.github.io/isort/
# - Python import sorting module configuration
##############################################################################
[tool.isort]
profile = "black"
##############################################################################
# ruff
# - https://github.com/astral-sh/ruff
# - A Rust-based formatter and linter.
##############################################################################
[tool.ruff]
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # pyflakes
"C", # flake8-comprehensions
"B", # flake8-bugbear
# "T20", # flake8-print
]
ignore = [
"E501", # line too long, handled by black
"E402", # line too long, handled by black
"B008", # do not perform function calls in argument defaults
"C901", # too complex
]
[tool.commitizen]
##############################################################################
# mypy settings
# - https://mypy.readthedocs.io/en/stable/
# - Performs static type checking.
##############################################################################
[tool.mypy]
python_version = "3.11"
packages=["app"]
exclude=["tests"]
ignore_missing_imports = true
show_traceback = true
show_error_codes = true
disable_error_code="misc, attr-defined"
follow_imports="skip"
#strict = false
# The following are options included in --strict.
warn_unused_configs = true # Warns about unused [mypy-<pattern>] config sections. (Requires --no-incremental to turn off incremental mode)
disallow_any_generics = false # Disallows generic types without explicit type parameters; e.g. plain x: list is not allowed, always write x: list[int].
disallow_subclassing_any = true # Reports an error when a class subclasses Any. This can happen when the base class comes from a missing module (with --ignore-missing-imports) or the import carries a # type: ignore comment.
disallow_untyped_calls = true # Reports an error on calls to functions defined without type annotations.
disallow_untyped_defs = false # Reports function definitions with missing or incomplete type annotations. (A superset of --disallow-incomplete-defs)
disallow_incomplete_defs = false # Reports partially annotated function definitions, while still allowing fully annotated ones.
check_untyped_defs = true # Always type-checks the bodies of unannotated functions. (By default such bodies are skipped.) All parameters are treated as Any and the return value is assumed to be Any.
disallow_untyped_decorators = true # Reports an error when using a decorator without type annotations.
warn_redundant_casts = true # Reports an error when the code uses an unnecessary cast; a warning is raised when a cast can safely be removed.
warn_unused_ignores = false # Warns about # type: ignore comments that do not actually suppress an error message.
warn_return_any = false # Warns about functions that return a value of type Any.
no_implicit_reexport = true # By default, values imported into a module count as exported and mypy lets other modules import them. With this flag, they are not exported unless imported with from-as or listed in __all__.
strict_equality = true # By default mypy allows always-false comparisons such as 42 == 'no'. This flag forbids such comparisons and reports similar identity and container checks. (e.g. from typing import Text)
extra_checks = true # Enables extra checks that are technically correct but can be inconvenient in real code; notably it forbids partial overlap in TypedDict updates and creates positional-only arguments via Concatenate.
# Without the pydantic plugin configured, spurious type errors may occur
# - https://www.twoistoomany.com/blog/2023/04/12/pydantic-mypy-plugin-in-pyproject/
plugins = ["pydantic.mypy", "strawberry.ext.mypy_plugin"]
##############################################################################
# Build system settings
##############################################################################
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
##############################################################################
# virtualenv settings
# - When poetry commands are invoked in the project and no venv exists, create one at '.venv'
##############################################################################
[virtualenvs]
create = true
in-project = true
path = ".venv"
```
## Additional Context
- I already ran 'Invalidate Caches' in PyCharm. | closed | 2024-01-19T02:17:47Z | 2025-03-20T15:56:34Z | https://github.com/strawberry-graphql/strawberry/issues/3349 | [] | evan-hwang | 6 |
mckinsey/vizro | data-visualization | 991 | Allow horizontally-aligned radio items | ### Which package?
vizro
### What's the problem this feature will solve?
[dbc.RadioItems](https://dash-bootstrap-components.opensource.faculty.ai/docs/components/input/) on which `vm.RadioItems` is based allows horizontally-aligned radio items through the `inline` option.
### Describe the solution you'd like
Allow `vm.RadioItems` to take an `inline` option and propagate it to `dbc.RadioItems`.
### Code of Conduct
- [x] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2025-02-04T16:39:06Z | 2025-03-24T20:03:57Z | https://github.com/mckinsey/vizro/issues/991 | [
"Feature Request :nerd_face:"
] | gtauzin | 11 |
pydata/xarray | numpy | 9,877 | infer_freq() doesn't recognize monthly output if the time dimension is the middle of each month | ### What happened?
CESM used to write the time dimension of its output files at the end of the averaging period, so for monthly output the following would hold:
* January averages would have a time dimension of midnight on February 1
* February averages would have a time dimension of midnight on March 1
* etc
The version currently being developed uses the middle of the averaging period, so
* January averages now have a time dimension of noon on January 16 (15.5 days into a 31 day month)
* February averages now have a time dimension of midnight on February 15 (14 days into a 28 day month)
* etc
Some of our diagnostic packages (https://geocat-comp.readthedocs.io/en/latest/user_api/generated/geocat.comp.climatologies.climatology_average.html) require uniformly spaced data and rely on `xr.infer_freq()` to enforce that. `infer_freq()` recognizes Feb 1, March 1, April 1, ... as monthly but does not do the same for January 16 (12:00), Feb 15, March 16 (12:00), April 16, ...
### What did you expect to happen?
It would be great if `infer_freq()` could recognize a time dimension of monthly mid-points as having a monthly frequency
### Minimal Complete Verifiable Example
```Python
import numpy as np
import xarray as xr
month_bounds = np.array([0., 31., 59., 90., 120., 151., 181., 212., 243., 273., 304., 334., 365.])
mid_month = xr.decode_cf(xr.DataArray(0.5*(month_bounds[:-1] + month_bounds[1:]), attrs={'units': 'days since 0001-01-01 00:00:00', 'calendar': 'noleap'}).to_dataset(name='time'))['time']
end_month = xr.decode_cf(xr.DataArray(month_bounds[1:], attrs={'units': 'days since 0001-01-01 00:00:00', 'calendar': 'noleap'}).to_dataset(name='time'))['time']
print(f'infer_freq(mid_month) = {xr.infer_freq(mid_month)}') # None
print(f'infer_freq(end_month) = {xr.infer_freq(end_month)}') # 'MS'
```
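A possibly relevant observation (about the data, not a confirmed reading of xarray internals): the mid-month stamps land a varying number of days into their month, 15.5 for 31-day months, 14 for February, 15 for 30-day months, so they sit on no single anchored monthly offset, whereas the end-of-month stamps all fall exactly on a month boundary. Plain Python, no xarray required:

```python
# noleap month boundaries in days, as in the MVCE above
month_bounds = [0., 31., 59., 90., 120., 151., 181., 212., 243., 273., 304., 334., 365.]
mid_month = [(a + b) / 2 for a, b in zip(month_bounds[:-1], month_bounds[1:])]

# days into the month for each mid-month stamp: not a single anchor
offsets = [m - start for m, start in zip(mid_month, month_bounds[:-1])]
print(sorted(set(offsets)))  # → [14.0, 15.0, 15.5]
```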
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
>>> print(f'infer_freq(mid_month) = {xr.infer_freq(mid_month)}') # None
infer_freq(mid_month) = None
>>> print(f'infer_freq(end_month) = {xr.infer_freq(end_month)}') # 'MS'
infer_freq(end_month) = MS
```
### Anything else we need to know?
I'm not familiar enough with `xarray` to be able to offer up a solution, but I figured logging the issue was a good first step. Sorry I can't do more!
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0]
python-bits: 64
OS: Linux
OS-release: 5.14.21-150400.24.18-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: None
libnetcdf: None
xarray: 2024.11.0
pandas: 2.2.3
numpy: 2.2.0
scipy: None
netCDF4: None
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 75.6.0
pip: 24.3.1
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
</details>
| open | 2024-12-11T22:31:47Z | 2024-12-11T22:31:47Z | https://github.com/pydata/xarray/issues/9877 | [
"bug",
"needs triage"
] | mnlevy1981 | 0 |
joeyespo/grip | flask | 121 | Perhaps it would be nice to have WYSIWYG possibility | I'd like to see if it would be nice to edit on localhost and then have the changes pushed back to the readme file once done.
Not sure if it is possible and not sure if it would be an improvement, but just thought to throw it out there.
| closed | 2015-05-23T08:50:57Z | 2015-05-23T16:30:46Z | https://github.com/joeyespo/grip/issues/121 | [
"out-of-scope"
] | kootenpv | 1 |
wkentaro/labelme | deep-learning | 1,495 | Raster and ghost appear when adjusting brightness and contrast labelme v5.5 | ### Provide environment information
python 3.8.19
### What OS are you using?
Windows 10
### Describe the Bug
When I adjust the brightness and contrast, a raster pattern and ghosting appear. The contrast does change, but the image is obscured by the raster and overlapping artifacts, and going back to the default values does not undo it; I have to reopen the file to get it back to normal. Nothing appears on the console.
The images are a series of .png images, 1616x970 in size, 32-bit.
labelme: 5.5.0
### Expected Behavior
_No response_
### To Reproduce
_No response_ | open | 2024-09-19T02:20:25Z | 2024-09-19T02:27:36Z | https://github.com/wkentaro/labelme/issues/1495 | [
"issue::bug"
] | Downsiren | 1 |
AirtestProject/Airtest | automation | 493 | About the opencv-contrib-python version issue: "There is no SIFT module in your OpenCV environment!" | (Please fill this in following the prompts below; it helps us locate and resolve the issue quickly, thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue classification)**
* AirtestIDE test/development environment usage issues -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, tree structure, or poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control issues -> follow the steps below
**Describe the bug**
(Concisely summarize the problem you encountered, or paste the error traceback.)
Running an airtest script on Linux reports "There is no SIFT module in your OpenCV environment!"
Following https://github.com/AirtestProject/Airtest/issues/377, I tried to install opencv-contrib-python 3.2.0.7. Some Linux machines can install it, but others report "ERROR: airtest 1.0.24 has requirement opencv-contrib-python==3.4.2.17, but you'll have opencv-contrib-python 3.2.0.7 which is incompatible."
Environments where opencv-contrib-python==3.2.0.7 can be installed indeed no longer report "There is no SIFT module in your OpenCV environment!".
Why does installing opencv-contrib-python==3.2.0.7 report an error in some environments?
(Neither python2 nor python3 works.)
```
Collecting opencv-contrib-python==3.2.0.7
  Downloading https://files.pythonhosted.org/packages/34/70/323020070a925c75d53042923265807c7915181921e0484911249f1d3336/opencv_contrib_python-3.2.0.7-cp27-cp27m-win_amd64.whl (28.4MB)
     |████████████████████████████████| 28.4MB 7.3MB/s
Requirement already satisfied: numpy>=1.11.1 in c:\python27\lib\site-packages (from opencv-contrib-python==3.2.0.7) (1.16.4)
ERROR: airtest 1.0.24 has requirement opencv-contrib-python==3.4.2.17, but you'll have opencv-contrib-python 3.2.0.7 which is incompatible.
Installing collected packages: opencv-contrib-python
Successfully installed opencv-contrib-python-3.2.0.7
```
**Relevant screenshots**
(Paste screenshots of the problem here, if any)
(For image and device related problems in AirtestIDE, please paste the relevant error messages from the AirtestIDE console window)
**Steps to reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
(What you expected to get or see)
**Python version:** `python2.7`
**airtest version:** `1.0.24`
> The airtest version can be found with the `pip freeze` command
**Device:**
- Model: [e.g. google pixel 2]
- OS: [e.g. Android 8.1]
- (other information)
**Other environment information**
(Other runtime environments, e.g. abnormal on linux ubuntu16.04 but normal on windows.)
It runs abnormally on linux ubuntu16.04; on windows, opencv-contrib-python==3.4.2.17 works fine; installing opencv-contrib-python==3.2.0.7 is problematic on both Linux and windows.
| open | 2019-08-12T13:41:51Z | 2019-08-13T01:43:04Z | https://github.com/AirtestProject/Airtest/issues/493 | [] | SHUJIAN01 | 1 |
babysor/MockingBird | deep-learning | 234 | At 25k training steps the attention line still hasn't appeared, and an extra file has shown up; is this normal? | At 25k training steps the attention line still hasn't appeared, and an extra file has shown up; is this normal?




Is this situation correct, or did I do something wrong?
The CPU is an i5 9400F and the graphics card is a 1505ti; the GPU value in the config can only be set up to 6, and GPU utilization is already bouncing between 80-100%.
Tacotron Training
```python
tts_schedule = [(2,  1e-3,  10_000,  6),   # Progressive training schedule
                (2,  5e-4,  15_000,  6),   # (r, lr, step, batch_size)
                (2,  2e-4,  20_000,  6),   # (r, lr, step, batch_size)
                (2,  1e-4,  30_000,  6),   #
                (2,  5e-5,  40_000,  6),   #
                (2,  1e-5,  60_000,  6),   #
                (2,  5e-6, 160_000,  6),   # r = reduction factor (# of mel frames
                (2,  3e-6, 320_000,  6),   #     synthesized for each decoder iteration)
                (2,  1e-6, 640_000,  6)],  # lr = learning rate
```
Any pointers from the experts would be much appreciated, thanks! | open | 2021-11-25T08:53:44Z | 2022-05-27T13:22:16Z | https://github.com/babysor/MockingBird/issues/234 | [] | yemaohaker | 13 |
ultralytics/ultralytics | computer-vision | 18,920 | Validation of YOLO pretrained COCO on custom Dataset - Zero metrics | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, I want to test the YOLO model pre-trained on COCO on my custom dataset.
My test dataset contains only one label class: the COCO boat class.
My YAML file is as follows:
```
path: N:/IA/data_2024/split_coco_to_data
train:
val: images/test
test:
# Classes
names:
0: person
1: bicycle
2: car
3: motorcycle
4: airplane
5: bus
6: train
7: truck
8: boat
9: traffic light
10: fire hydrant
11: stop sign
12: parking meter
13: bench
14: bird
15: cat
16: dog
17: horse
18: sheep
19: cow
20: elephant
21: bear
22: zebra
23: giraffe
24: backpack
25: umbrella
26: handbag
27: tie
28: suitcase
29: frisbee
30: skis
31: snowboard
32: sports ball
33: kite
34: baseball bat
35: baseball glove
36: skateboard
37: surfboard
38: tennis racket
39: bottle
40: wine glass
41: cup
42: fork
43: knife
44: spoon
45: bowl
46: banana
47: apple
48: sandwich
49: orange
50: broccoli
51: carrot
52: hot dog
53: pizza
54: donut
55: cake
56: chair
57: couch
58: potted plant
59: bed
60: dining table
61: toilet
62: tv
63: laptop
64: mouse
65: remote
66: keyboard
67: cell phone
68: microwave
69: oven
70: toaster
71: sink
72: refrigerator
73: book
74: clock
75: vase
76: scissors
77: teddy bear
78: hair drier
79: toothbrush
```
**As I don't have the COCO training data, I'm leaving the train path in the YAML file blank.**
When I attempt to print the metrics, all I get is an empty result:
```
[]
```
I get the test images with the predictions as well as the predictions.json file containing all the predictions.
However, I don't get any output metrics, which are set to 0 by default. I don't understand where the error comes from, given that my YAML file is well defined, as is my entire folder tree.
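One thing worth checking with a boat-only test set evaluated against the full 80-class mapping is that every label file really uses index `8` (boat) rather than `0`: if the labels were exported as a single-class dataset with index `0`, the COCO-pretrained model's boat predictions can never match them and all metrics collapse to zero. A small stdlib check (the directory path mirrors the YAML above and is an assumption):

```python
from pathlib import Path

# hypothetical path, derived from the dataset root in the YAML above
LABEL_DIR = Path("N:/IA/data_2024/split_coco_to_data/labels/test")

def used_class_ids(label_dir):
    """Collect the class indices that actually appear in YOLO-format label files."""
    ids = set()
    for txt in Path(label_dir).glob("*.txt"):
        for line in txt.read_text().splitlines():
            if line.strip():
                ids.add(int(line.split()[0]))
    return ids

if LABEL_DIR.exists():
    print(sorted(used_class_ids(LABEL_DIR)))  # boat-only labels should print [8]
```

If this prints `[0]` instead of `[8]`, the labels need to be remapped before validation makes sense.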
### Additional
_No response_ | open | 2025-01-27T15:58:13Z | 2025-01-31T14:07:29Z | https://github.com/ultralytics/ultralytics/issues/18920 | [
"question",
"detect"
] | adriengoleb | 37 |
streamlit/streamlit | python | 10,112 | pills and segmented_control with customized image display for options | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
The `pills` and `segmented_control` components are fantastic. However, I'm currently missing one capability: displaying arbitrary images on the "buttons" (the option labels), which is already supported in `st.button`.
### Why?
The reason is the same as for allowing arbitrary images in `st.button`: in many use cases, images are a clear and compact way to present options.
### How?
Allow `format_func` to accept the base64 text or a path to a local image / icon file.
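For reference, the kind of value `format_func` would need to emit can already be built with the stdlib; whether the frontend would render a data-URI markdown image this way is an assumption of this proposal:

```python
import base64
from pathlib import Path

def icon_label(path, alt=""):
    """Encode a local image as a data-URI markdown image, usable as an option label."""
    # the image/png MIME type is hardcoded here for brevity
    b64 = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"![{alt}](data:image/png;base64,{b64})"
```

An option list could then use something like `format_func=lambda opt: icon_label(icons[opt], opt)`, where `icons` maps options to local files (again, hypothetical rendering behavior).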
### Additional Context
https://github.com/streamlit/streamlit/issues/7300
https://github.com/streamlit/streamlit/pull/9670 | closed | 2025-01-04T16:09:34Z | 2025-01-10T15:26:58Z | https://github.com/streamlit/streamlit/issues/10112 | [
"type:enhancement",
"feature:st.segmented_control",
"feature:st.pills"
] | nycjersey | 2 |
streamlit/streamlit | streamlit | 10,115 | Input widget does not take input from password manager | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
When my password manager (Dashlane version 6.2451.1 - latest) autofills an e-mail address into a text input widget, the on_change callback is not triggered and the session state is not updated.
Typing in the e-mail address in the input widget with the keyboard works as expected.
### Reproducible Code Example
```Python
import streamlit as st
st.text_input('Input e-mail', key='test_key')
if st.button('Submit'):
print("input: " + st.session_state['test_key'])
```
### Steps To Reproduce
1. Set up the Dashlane password manager (add some credentials for autofill)
2. Open page with code example
3. Use Dashlane to autofill the text input widget
4. Click submit (does not work)
5. Adjust the text using keyboard input
6. Click submit again (works)
### Expected Behavior
I expect the session state to be updated (and on_change to be triggered) when an e-mail address is set in the input widget by the password manager.
### Current Behavior
https://github.com/user-attachments/assets/10fd4c53-cf87-46bb-9316-ef62f0a7b867
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version: 3.12.3
- Operating System: MacOs 12.6
- Browser: Chrome version 131.0.6778.205 (Official Build) (arm64)
### Additional Information
_No response_ | open | 2025-01-06T14:24:31Z | 2025-03-09T08:25:35Z | https://github.com/streamlit/streamlit/issues/10115 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.text_input"
] | sandervdhimst | 4 |
TencentARC/GFPGAN | pytorch | 499 | gfpgan | billing problem
| open | 2024-01-29T06:48:54Z | 2024-02-29T04:42:55Z | https://github.com/TencentARC/GFPGAN/issues/499 | [] | assassin1382 | 6 |
ultralytics/yolov5 | deep-learning | 13,042 | how to find why mAP suddenly increased | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I trained YOLOv5s for 500 epochs, and around the 385th to 387th epochs, there was a sudden increase in mAP, resulting in the best result at about 80%. After this peak, the mAP gradually decreased.
I've repeated this training several times to see if the sudden increase would appear again, but it didn't. The best results from these subsequent runs, without the sudden increase, were around 70% instead of 80%.
My questions are:
How can this phenomenon be explained?
How can I identify the specific reason for this sudden increase in mAP?
I suspect that an inappropriate learning rate might have caused this issue. Should I adjust the learning rate or other hyperparameters?
Attached are images showing the mAP increase during the initial training (around the 385th to 387th epochs) and subsequent trainings where the sudden increase did not appear.

👆 Sudden increase at about the 385th to 387th epochs in the initial training

👆 No sudden increase in subsequent trainings with the same dataset and parameters

👆 Batch size and epochs were changed from 16 to 96 and from 500 to 1000 respectively, but the result was the same
### Additional
_No response_ | closed | 2024-05-28T02:09:34Z | 2024-10-20T19:46:43Z | https://github.com/ultralytics/yolov5/issues/13042 | [
"question"
] | MiNaMisan | 6 |
aimhubio/aim | tensorflow | 2,645 | Remove soft lock from UI | ## 🚀 Feature
When running in aim remote server mode it's difficult for users to clear failed runs / soft locks.
From the UI they see a generic error. Remote server side in logs shows:
```
Error while trying to delete run '1901419848fb433ab111647c'. Cannot delete Run '1901419848fb433ab111647c'. Run is locked..
```
This can be resolved by an admin deleting the lock manually from the filesystem but this is an operationally expensive exercise.
Would be great if a user could solve this from the UI.
### Motivation
- Minimise operational overheads.
### Alternatives
- Operational time manually deleting locks for failed runs.
| open | 2023-04-11T12:45:24Z | 2023-12-29T14:28:13Z | https://github.com/aimhubio/aim/issues/2645 | [
"type / enhancement",
"area / Web-UI"
] | dcarrion87 | 5 |
dolevf/graphql-cop | graphql | 27 | How can I use graphql-cop as package in the test suite | Hi,
I would like to know if graphql-cop can be imported as a package
Aswathy | closed | 2023-08-30T09:00:15Z | 2023-10-31T05:12:31Z | https://github.com/dolevf/graphql-cop/issues/27 | [
"question"
] | abnair24 | 2 |
mljar/mljar-supervised | scikit-learn | 100 | Add `explain` and `performance` modes | There should be `explain` mode in the AutoML which will produce explanations.
There should be `performance` mode for max accuracy of models from AutoML. | closed | 2020-06-02T10:57:12Z | 2020-07-08T09:38:35Z | https://github.com/mljar/mljar-supervised/issues/100 | [
"enhancement"
] | pplonski | 1 |
allenai/allennlp | pytorch | 5,033 | Publish info about each model implementation in models repo | From our discussion on Slack.
This could be markdown docs, ideally automatically generated. Could also publish to our API docs.
Kind of related to #4720 | open | 2021-03-02T18:12:27Z | 2021-03-02T18:12:27Z | https://github.com/allenai/allennlp/issues/5033 | [] | epwalsh | 0 |
Esri/arcgis-python-api | jupyter | 1,750 | Doc - remove reference to "PlacesAPI (beta)" | In the "Find Places" guide, we have a reference to the Places API beta. This feature is no longer in beta so we can remove reference to it in the guide. We can simply refer to it as "Places service".
https://developers.arcgis.com/python/guide/find-places/

| closed | 2024-01-30T18:33:07Z | 2024-02-22T00:31:36Z | https://github.com/Esri/arcgis-python-api/issues/1750 | [
"bug"
] | nginer316 | 1 |
gradio-app/gradio | deep-learning | 10,057 | Expected 3 arguments for function ChatInterface | ### Describe the bug
I'm trying to use Chatinterface as follows:
```
gr.ChatInterface(
type="messages",
fn=partial(chat, args=args),
chatbot=chatbot,
textbox=message_box,
theme="soft",
cache_examples=True,
)
```
This causes a warning when I'm running the solution.
```
> Gradio version: 5.6.0
> Application is starting...
> /usr/local/lib/python3.10/dist-packages/gradio/utils.py:999: UserWarning: Expected 3 arguments for function <function ChatInterface._setup_api.<locals>.api_fn at 0x7f225c67c8b0>, received 2.
> warnings.warn(
> /usr/local/lib/python3.10/dist-packages/gradio/utils.py:1003: UserWarning: Expected at least 3 arguments for function <function ChatInterface._setup_api.<locals>.api_fn at 0x7f225c67c8b0>, received 2.
```
This is a bit confusing as the [docs for 5.6.0](https://www.gradio.app/docs/gradio/chatinterface) explicitly say
> Only one parameter is required: fn, which takes a function that governs the response of the chatbot based on the user input and chat history.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
from functools import partial

import gradio as gr

def chat(message, history, args=None):  # any chat fn with an extra kwarg reproduces it
    return f"echo: {message}"

chatbot = gr.Chatbot(type="messages")
message_box = gr.Textbox()

gr.ChatInterface(
    type="messages",
    fn=partial(chat, args={"dummy": True}),
    chatbot=chatbot,
    textbox=message_box,
    theme="soft",
    cache_examples=True,
)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.0
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.25.2
jinja2: 3.1.2
markupsafe: 2.1.2
numpy: 1.24.3
orjson: 3.10.7
packaging: 23.1
pandas: 2.2.3
pillow: 9.3.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0
ruff: 0.6.9
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.12.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.31.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.6.0
httpx: 0.27.2
huggingface-hub: 0.25.2
packaging: 23.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2024-11-27T22:27:08Z | 2024-11-29T16:59:12Z | https://github.com/gradio-app/gradio/issues/10057 | [
"bug",
"needs repro"
] | csanadpoda | 2 |
lepture/authlib | django | 251 | Django OAuth2/OIDC server example | This Issue is not related to any problems regarding authlib but it want just to get some useful information and also purpose some Documentation and example enhanchements. Is there a working example as an example django project, to get a fully featured and working authlib AS/Provider?
- I found some django client/relying-party in github but not a AS/Provider one.
- I have read the Django related unit tests.
I'd like a rudimentary structure to work on and start developing an OIDC Provider ready for a production environment. I'd like an example folder in this project, or a separate repository, to serve as a bootstrap project.
Is there some additional resources from which to draw? | open | 2020-07-12T14:27:26Z | 2025-02-21T10:23:08Z | https://github.com/lepture/authlib/issues/251 | [
"feature request",
"server",
"in future"
] | peppelinux | 2 |
encode/httpx | asyncio | 3,029 | Auth type makes problem in `dbt-duckdb` > `fsspec` > `webdav4` > `httpx` | ```python
# _types.py
AuthTypes = Union[
Mapping[str, Any], # added
Tuple[Union[str, bytes], Union[str, bytes]],
Callable[["Request"], "Request"],
"Auth",
]
# _client.py
def _build_auth(self, auth: typing.Optional[AuthTypes]) -> typing.Optional[Auth]:
    if auth is None:
        return None
    elif isinstance(auth, typing.Mapping):  # added: {'username': ..., 'password': ...}
        return BasicAuth(
            username=auth.get("username", ""),
            password=auth.get("password", ""),
        )
    elif isinstance(auth, tuple):
        return BasicAuth(username=auth[0], password=auth[1])
    elif isinstance(auth, Auth):
        return auth
    elif callable(auth):
        return FunctionAuth(func=auth)
    else:
        raise TypeError(f'Invalid "auth" argument: {auth!r}')
```
This auth handling came up when I used the `webdav` option in the filesystem interface of `dbt-duckdb`, and it relates to how `AuthTypes` is handled in this repository.
However, the `fs: webdav` option is currently almost unusable, because the external module (especially `dbt`) parses the `profiles.yml` YAML into a mapping and cannot convert it into the tuple httpx expects. Therefore, we could consider either enhancing `httpx` or changing how `dbt-core` treats `profiles.yml`.
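For anyone hitting this in the meantime, a workaround that avoids patching httpx is to normalize the mapping before it ever reaches `_build_auth` (a sketch):

```python
from collections.abc import Mapping

def normalize_auth(auth):
    """Coerce a YAML-style mapping into the (username, password) tuple httpx accepts."""
    if isinstance(auth, Mapping):
        return (auth.get("username", ""), auth.get("password", ""))
    return tuple(auth)

print(normalize_auth({"username": "u", "password": "p"}))  # ('u', 'p')
```

With this, `Client(auth=normalize_auth(profile_auth))` works with today's httpx, keeping the dict shape only on the dbt side.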
Can you give me any advice on this matter? @jwills | closed | 2023-12-31T16:14:09Z | 2023-12-31T16:55:43Z | https://github.com/encode/httpx/issues/3029 | [] | ZergRocks | 0 |
jacobgil/pytorch-grad-cam | computer-vision | 224 | GradCAM results don't seem valid with resnet50 | ```python
import cv2
import torch
import numpy as np
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image, preprocess_image
from torchvision.models import resnet50
from torchvision import transforms
import matplotlib.pyplot as plt
rgb_img = cv2.imread("/home/zeeshan/Downloads/dog.jpg")
rgb_img = np.float32(rgb_img) / 255
input_tensor = preprocess_image(rgb_img,
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
model = resnet50(pretrained=True)
target_layers = [model.layer4]
cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor, targets=None)
grayscale_cam = grayscale_cam[0, :]
visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)
visualization = cv2.cvtColor(visualization, cv2.COLOR_BGR2RGB)
plt.imshow(visualization)
```
I am not getting the desired result? Could somebody please help me out, here is the [link](https://www.pexels.com/photo/short-coated-tan-dog-2253275/) to the image.
**Output Generated**

**Desired Output**

| closed | 2022-04-05T11:29:51Z | 2022-04-06T01:53:38Z | https://github.com/jacobgil/pytorch-grad-cam/issues/224 | [] | zeahmd | 2 |
ydataai/ydata-profiling | pandas | 723 | Negative exponents appear positive | **Describe the bug**
Negative exponents do not appear in a profiling report
**To Reproduce**
```python
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
data = { 'some_numbers' : (0.0001, 0.00001, 0.00000001, 0.002, 0.0002, 0.00003) * 100}
df = pd.DataFrame(data)
profile = ProfileReport(df, 'No Negative Exponents')
profile.to_file('NoNegativeExponents.html')
```

Minimum should be 1 x 10<sup>-8</sup> rather than 1 x 10<sup>8</sup>. The issue also arises for Mean and Maximum.
**Version information:**
Python 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
| closed | 2021-03-05T15:36:26Z | 2021-03-25T23:29:32Z | https://github.com/ydataai/ydata-profiling/issues/723 | [
"bug 🐛"
] | rdpapworth | 1 |
apache/airflow | automation | 47,891 | clean duplications of Xcom backend docs from core | ### Body
Most of the text in [Object Storage XCom Backend](https://airflow.apache.org/docs/apache-airflow/2.10.5/core-concepts/xcoms.html#object-storage-xcom-backend) is duplicated with [Object Storage XCom Backend (common-io provider)](https://airflow.apache.org/docs/apache-airflow-providers-common-io/1.5.1/xcom_backend.html#object-storage-xcom-backend)
The task:
The core part should give a high-level explanation and link to the provider docs for further reading.
Similar to what we do with [Kubernetes](https://airflow.apache.org/docs/apache-airflow/2.10.5/administration-and-deployment/kubernetes.html#kubernetes): we give a high-level explanation and point to further reading in the Helm chart or provider docs.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-18T06:45:18Z | 2025-03-20T09:10:51Z | https://github.com/apache/airflow/issues/47891 | [
"good first issue",
"kind:documentation",
"provider:common-io"
] | eladkal | 1 |
wkentaro/labelme | computer-vision | 879 | How to convert a dataset to cityscape format using labelme | closed | 2021-07-01T02:12:35Z | 2022-06-25T04:42:29Z | https://github.com/wkentaro/labelme/issues/879 | [] | dreamlychina | 0 | |
dask/dask | numpy | 11,087 | test_division_or_partition in test_sql is failing for pandas 3 | The test is broken; the memory usage doesn't behave as expected. | open | 2024-04-30T10:15:15Z | 2024-04-30T10:15:15Z | https://github.com/dask/dask/issues/11087 | [
"dataframe"
] | phofl | 0 |
tensorflow/tensor2tensor | machine-learning | 1,867 | The checkpoint is always not found when I use the decoder command | ### Description
The checkpoint is never found when I run t2t-decoder. After deleting the checkpoint, t2t-decoder does run; which model is used in that case, and why?
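As a first diagnostic, it can help to confirm which checkpoint t2t-decoder resolves from `--output_dir` before it tries to restore variables; the `checkpoint` bookkeeping file can be parsed with the stdlib (a sketch; the path below is the `--output_dir` from the command):

```python
from pathlib import Path

def latest_checkpoint(output_dir):
    """Parse TF's 'checkpoint' bookkeeping file, like tf.train.latest_checkpoint."""
    ckpt_file = Path(output_dir) / "checkpoint"
    if not ckpt_file.exists():
        return None
    for line in ckpt_file.read_text().splitlines():
        if line.startswith("model_checkpoint_path:"):
            return line.split(":", 1)[1].strip().strip('"')
    return None

print(latest_checkpoint("./train_evolved_transformer_v1"))
```

If this resolves to a checkpoint trained with a different `--model` or `--hparams_set`, the variable names won't match and you get exactly this `NotFoundError`. If there is no checkpoint at all, the Estimator likely falls back to freshly initialized (random) weights, which would explain why decoding "runs" after the checkpoint is deleted but produces meaningless output.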
### Environment information
$ pip freeze | grep tensor
mesh-tensorflow==0.1.17
tensor2tensor==1.15.7
tensorboard==2.3.0
tensorboard-plugin-wit==1.7.0
tensorflow==2.3.1
tensorflow-addons==0.11.2
tensorflow-datasets==4.0.1
tensorflow-estimator==2.3.0
tensorflow-gan==2.0.0
tensorflow-gpu==2.3.0
tensorflow-hub==0.9.0
tensorflow-metadata==0.24.0
tensorflow-probability==0.7.0
$ python -V
3.7.7
### For bugs: reproduction and error logs
# Steps to reproduce:
this is my decoder command。
t2t-decoder --t2t_usr_dir=self_script --problem=my_problem --data_dir=./self_data --model=evolved_transformer --hparams_set=evolved_transformer_deep --output_dir=./train_evolved_transformer_v1 --decode_hparams="beam_size=4,alpha=0.6" --decode_from_file=./decoder/test_C.txt --decode_to_file=./decoder/test_OE.out
# Error logs:
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: Key evolved_transformer/body/decoder/layer_0/first_attend_to_encoder/multihead_attention/k/kernel not found i n checkpoint
[[{{node save/RestoreV2}}]]
(1) Not found: Key evolved_transformer/body/decoder/layer_0/first_attend_to_encoder/multihead_attention/k/kernel not found i n checkpoint
[[{{node save/RestoreV2}}]]
[[save/RestoreV2_1/_25]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/WwhStuGrp/WwhStu11G/anaconda3/envs/py3.7-tensorflow/lib/python3.7/site-packages/tensorflow/python/training/saver .py", line 1299, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/WwhStuGrp/WwhStu11G/anaconda3/envs/py3.7-tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session .py", line 958, in run
run_metadata_ptr)
File "/home/WwhStuGrp/WwhStu11G/anaconda3/envs/py3.7-tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session .py", line 1181, in _run
feed_dict_tensor, options, run_metadata)
File "/home/WwhStuGrp/WwhStu11G/anaconda3/envs/py3.7-tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session .py", line 1359, in _do_run
run_metadata)
File "/home/WwhStuGrp/WwhStu11G/anaconda3/envs/py3.7-tensorflow/lib/python3.7/site-packages/tensorflow/python/client/session .py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: Key evolved_transformer/body/decoder/layer_0/first_attend_to_encoder/multihead_attention/k/kernel not found i n checkpoint
[[node save/RestoreV2 (defined at /lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py:629 ) ]]
(1) Not found: Key evolved_transformer/body/decoder/layer_0/first_attend_to_encoder/multihead_attention/k/kernel not found i n checkpoint
[[node save/RestoreV2 (defined at /lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py:629 ) ]]
[[save/RestoreV2_1/_25]]
0 successful operations.
0 derived errors ignored.
| open | 2020-11-04T16:33:40Z | 2020-11-04T16:33:40Z | https://github.com/tensorflow/tensor2tensor/issues/1867 | [] | Nanamumuhan | 0 |
InstaPy/InstaPy | automation | 6,472 | like_by_tags or like_by_feed doesn't work anymore | Hi!
From one day to the next, the script no longer worked. No matter whether I used the method:
session.like_by_tags
or
session.like_by_feed
the script logs in, opens the feed and then crashes with the error. I don't know if this is the correct error message. I am new to python. If you want me to look for the error message from another source, please feel free to let me know and I'll look there again.
To rule out an error or misconfiguration on my local machine, I installed python and instapy again on another machine, same problem there.
Windows 10
Python 3.10.0
I don't know how to get the InstaPy version, but I updated it via `py -m pip install instapy -U`.
```
Traceback (most recent call last):
  File "D:\Instabot\instagram_bot.py", line 58, in <module>
    session.like_by_feed(amount=65, randomize=False, unfollow=False, interact=True)
  File "C:\Users\zunam\AppData\Roaming\Python\Python310\site-packages\instapy\instapy.py", line 4137, in like_by_feed
    for _ in self.like_by_feed_generator(amount, randomize, unfollow, interact):
  File "C:\Users\zunam\AppData\Roaming\Python\Python310\site-packages\instapy\instapy.py", line 4244, in like_by_feed_generator
    ) = check_link(
  File "C:\Users\zunam\AppData\Roaming\Python\Python310\site-packages\instapy\like_util.py", line 619, in check_link
    media = post_page[0]["shortcode_media"]
KeyError: 0
```
I took a look at other issues, but found no solution.
Thank you for your help
Best wishes from Germany
| open | 2022-01-25T15:57:11Z | 2022-03-28T14:38:20Z | https://github.com/InstaPy/InstaPy/issues/6472 | [] | T1000MG | 10 |
httpie/cli | api | 964 | Support header values read from file ? | httpie allows to get url params and form data from file, by using @ syntax
There are sometimes complicated headers, for example tokens.
It seems this is not yet possible for headers.
Can this feature be added for headers as well?
Probably with a syntax like:
```
':@'
```
like:
```
Referer:http://httpie.org Cookie:foo=bar User-Agent:bacon/1.0 FunnyHeader:@somefile
```
That would be very helpful.
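Until a `Header:@file` form exists, a shell-level workaround is command substitution (works for single-line values; the last comment shows how the variable would be passed to httpie):

```shell
# build the header value from a file via command substitution
printf '%s' 'secret-token-123' > /tmp/token.txt
header="Authorization:$(cat /tmp/token.txt)"
echo "$header"              # Authorization:secret-token-123
# then pass it as usual: http example.org "$header"
```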
| closed | 2020-09-10T12:42:57Z | 2021-12-23T19:06:35Z | https://github.com/httpie/cli/issues/964 | [] | coldcoff | 3 |
modin-project/modin | data-science | 7,453 | Modin creates a new partition when adding a new column to a dataframe | ```
import logging
logger = logging.getLogger(__name__)
def log_partitions(input_df):
partitions = input_df._query_compiler._modin_frame._partitions
# Iterate through the partition matrix
logger.info(f"Row partitions: {len(partitions)}")
row_index = 0
for partition_row in partitions:
print(f"Row {row_index} has Column partitions {len(partition_row)}")
col_index = 0
for partition in partition_row:
print(f"DF Shape {partition.get().shape} is for row {row_index} column {col_index}")
col_index = col_index + 1
row_index = row_index + 1
import modin.pandas as pd
df = pd.DataFrame({"col": ["A,B,C", "X,Y,Z", "1,2,3"]})
log_partitions(df)
for i in range(3): # Adding columns one by one
df[f"split_{i}"] = df["col"].str.split(",").str[i]
print(df)
log_partitions(df)
```
This gives output
```
Row 0 has Column partitions 1
DF Shape (3, 1) is for row 0 column 0
col split_0 split_1 split_2
0 A,B,C A B C
1 X,Y,Z X Y Z
2 1,2,3 1 2 3
Row 0 has Column partitions 4
DF Shape (3, 1) is for row 0 column 0
DF Shape (3, 1) is for row 0 column 1
DF Shape (3, 1) is for row 0 column 2
DF Shape (3, 1) is for row 0 column 3
```
Modin creates new partitions for each column addition. This is sample code to reproduce the issue; the real problem appears in a pipeline step: once many partitions have been created, if the next step works on multiple columns that belong to different partitions, performance is very bad. What is the solution for this?
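For reference, the workaround I'm experimenting with is to create all derived columns in one operation rather than one assignment per loop iteration (pandas-compatible, so it runs under Modin too; whether it keeps a single column partition may depend on the Modin version):

```python
import pandas as pd  # the same code runs unchanged with `import modin.pandas as pd`

df = pd.DataFrame({"col": ["A,B,C", "X,Y,Z", "1,2,3"]})

# build every derived column in one operation instead of one assignment per loop
splits = df["col"].str.split(",", expand=True)
splits.columns = [f"split_{i}" for i in splits.columns]
df = pd.concat([df, splits], axis=1)

print(df.columns.tolist())  # ['col', 'split_0', 'split_1', 'split_2']
```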
Thanks in advance | open | 2025-03-04T07:42:04Z | 2025-03-06T21:15:26Z | https://github.com/modin-project/modin/issues/7453 | [
"question ❓"
] | Sumukhagc | 1 |
xonsh/xonsh | data-science | 5,768 | Remove unexpected sourcing of foreign shell run control files | We need to remove unwanted sourcing of RC files for other shells.
### How to repeat
```xsh
echo 'echo 1' >> ~/.bashrc
xonsh --no-rc --no-env
# 1
# @
```
### Discussed in https://github.com/xonsh/xonsh/discussions/5764
<details>
<div type='discussions-op-text'>
<sup>Originally posted by **JeffMelton** December 29, 2024</sup>
I stumbled on this when I added carapace to my Xonsh run control, and I opened an issue over there. Because there's still unexpected behavior from Xonsh without carapace in the mix, I'm also dropping a question here. [Caveat: I'm just getting started with Xonsh, slowly rewriting stuff from my Elvish daily driver, so I'm starting with the assumption that PEBKAC. Sprinkle a grain of salt on everything here because it's possible I've unwittingly footgunned myself.]
My `~/.config/xonsh` directory looks like this:
```
❯ tree ~/.config/xonsh
├── rc.d
│ ├── 00-path.xsh
│ ├── 01-xontribs.xsh
│ ├── <redacted>
│ ├── 06-env.xsh
│ ├── 10-k8s.xsh
│ ├── 15-util.xsh
│ ├── 20-ops.xsh
│ ├── 98-aliases.xsh
│ ├── 99-completions.xsh
│ └── utils.xsh
├── rc.xsh
```
The exceptions thrown when I have carapace in my `rc.xsh` suggested that Xonsh is sourcing foreign shell run control files, though I haven't asked it to do so. To test that theory, I added `echo "this is <whatever>"` to other shell run control files. When `~/.bashrc` is available and carapace is not enabled in `rc.xsh`, launching an iTerm2 profile that starts `~/.local/xonsh-env/xbin/xonsh` will print `this is bashrc`, and everything will start up just fine. Note that it also prints that when running `xonsh --no-rc` from another shell.
When `~/.bashrc` is not available and carapace is disabled in `rc.xsh`, Xonsh throws exceptions indicating that `starship` and `zoxide` — both initialized in `rc.xsh` aren't available. They're both in `/opt/homebrew/bin`, which is added to PATH in `.bashrc` and in `~/.config/xonsh/rc.d/00-path.xsh`, so that tells me that Xonsh sources `rc.xsh` first. That's fine, I can move the stuff in `00-path.xsh` to `rc.xsh`. I do occasionally drop to Bash, so `~/.bashrc` needs to stay put.
But why is `.bashrc` being sourced in the first place? I get that it's a convenience feature to be able to source a foreign shell config, and that's lovely if you want it, but I don't, and I can't find any documentation on how to disable it. Everything I've read seems to suggest that it's opt-in, and I'm about 99% sure I didn't.
So, questions: How can I tell for sure whether I've configured Xonsh to source `~/.bashrc`, and how can I configure it to no longer do so? TIA 🙏🏽 </div>
</details>
| closed | 2025-01-03T14:47:02Z | 2025-01-10T13:53:35Z | https://github.com/xonsh/xonsh/issues/5768 | [
"source-foreign",
"xonshrc",
"startup",
"v1"
] | anki-code | 3 |
unionai-oss/pandera | pandas | 967 | Request: A date-time type that is timezone aware, or any timezone. | **Is your feature request related to a problem? Please describe.**
We have the need to accept timestamps that require having a timezone (any timezone, not a specific one).
I believe Pandera requires you to provide the specific timezone at the moment.
**Describe the solution you'd like**
A new dtype for "datetimes with timezone".
Naive datetimes should fail; aware datetimes should always validate (no matter the timezone).
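The acceptance rule itself is simple to state in pandas terms (a sketch of the desired semantics only; wrapping it as a pandera `Check` or custom dtype is the part that needs the new feature):

```python
import pandas as pd

def is_tz_aware(series):
    """Accept any timezone, reject naive datetimes."""
    return isinstance(series.dtype, pd.DatetimeTZDtype)

naive = pd.Series(pd.to_datetime(["2022-10-18 04:12"]))
aware_utc = pd.Series(pd.to_datetime(["2022-10-18 04:12"]).tz_localize("UTC"))
aware_est = pd.Series(pd.to_datetime(["2022-10-18 04:12"]).tz_localize("US/Eastern"))

print(is_tz_aware(naive), is_tz_aware(aware_utc), is_tz_aware(aware_est))  # False True True
```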
**Describe alternatives you've considered**
Making up my own dtype. I'm working on it; so far it's unclear whether I'm covering all the required bases (but it runs).
Thank you (I love the library!) | open | 2022-10-18T04:12:45Z | 2023-02-03T16:05:17Z | https://github.com/unionai-oss/pandera/issues/967 | [
"enhancement"
] | blais | 8 |
biolab/orange3 | data-visualization | 6,675 | File widget: add option to skip a range of rows not holding headers or data | <!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
Many datasets that are available online as open data have rows that are not column headers or actual records holding the data. Often they hold descriptions of the features and/or other general data such as licensing info. These could be either above or below the actual data. Also, some datasets have column headers spanning more than one row.
When importing these datasets in Orange, these rows confuse the mechanisms to recognize variables and variable types.
**What's your proposed solution?**
It would be nice to be able to specify a range of rows that has to be disregarded, e.g. rows 3-5, when importing a file
**Are there any alternative solutions?**
Using a spreadsheet to do it manually.
| closed | 2023-12-13T14:28:02Z | 2024-01-05T10:38:14Z | https://github.com/biolab/orange3/issues/6675 | [] | wvdvegte | 5 |
pydata/xarray | numpy | 9,880 | ⚠️ Nightly upstream-dev CI failed ⚠️ | [Workflow Run URL](https://github.com/pydata/xarray/actions/runs/12307067834)
<details><summary>Python 3.12 Test Summary</summary>
```
xarray/tests/test_backends.py::TestInstrumentedZarrStore::test_append: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestInstrumentedZarrStore::test_region_write: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_zero_dimensional_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_store[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_test_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_load[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_dataset_compute[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_pickle[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_pickle_dataarray[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_None_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_object_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_encoded_characters[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_numpy_datetime_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_cftime_datetime_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_timedelta_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_float64_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_coordinates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_global_coordinates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_coordinates_with_space[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_boolean_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_orthogonal_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_vectorized_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_vectorized_indexing_negative_step[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_outer_indexing_reversed[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_isel_dataarray[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_array_type_after_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_dropna[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_ondisk_after_print[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_bytes_with_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_empty_vlen_string_array[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[8 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_grid_mapping_and_bounds_are_not_coordinates_in_file[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_coordinate_variables_after_dataset_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_grid_mapping_and_bounds_are_coordinates_after_dataarray_roundtrip[2 failing variants]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: [].
xarray/tests/test_backends.py::TestZarrDictStore::test_coordinates_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_endian[2]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_kwarg_dates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_default_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_via_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_in_coord[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_in_coord_via_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_same_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_overwrite_values[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_multiindex_not_implemented[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_string_object_warning[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_read_non_consolidated_warning[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_auto_chunk[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_manual_chunk[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_warning_on_bad_chunks[2 failing variants]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: [].
xarray/tests/test_backends.py::TestZarrDictStore::test_write_uneven_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_dask[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_hidden_zarr_keys[2]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_persistence_modes[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_compressor_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_group[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_zarr_mode_w_overwrites_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_dataset_caching[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_rplus_success[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_rplus_fails[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_invalid_dim_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_no_dims_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_append_dim_not_set_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_not_a_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_existing_encoding_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_string_length_mismatch_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_check_encoding_is_consistent_after_append[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_new_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_append_dim_no_overwrite[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_to_zarr_compute_false_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_to_zarr_append_compute_false_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_save_emptydim[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_no_warning_from_open_emptydim_with_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[72 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_preexisting_override_metadata[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_errors[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_chunksizes[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_partial_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_larger_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_open_zarr_use_cftime[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_write_read_select_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_attributes[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_datetime64_or_timedelta64[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_cftime_datetime[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_zero_dimensional_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_store[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_test_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_load[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_dataset_compute[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_pickle[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_pickle_dataarray[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_None_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_object_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_string_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_string_encoded_characters[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_numpy_datetime_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_cftime_datetime_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_timedelta_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_float64_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_coordinates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_global_coordinates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_coordinates_with_space[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_boolean_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_orthogonal_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_vectorized_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_vectorized_indexing_negative_step[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_outer_indexing_reversed[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_isel_dataarray[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_array_type_after_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_dropna[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_ondisk_after_print[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_bytes_with_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_empty_vlen_string_array[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_mask_and_scale[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_unsigned[8 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_grid_mapping_and_bounds_are_not_coordinates_in_file[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_coordinate_variables_after_dataset_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_grid_mapping_and_bounds_are_coordinates_after_dataarray_roundtrip[2 failing variants]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: [].
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_coordinates_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_endian[2]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_encoding_kwarg_dates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_default_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_explicitly_omit_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_explicitly_omit_fill_value_via_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_explicitly_omit_fill_value_in_coord[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_explicitly_omit_fill_value_in_coord_via_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_encoding_same_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_overwrite_values[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_multiindex_not_implemented[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_string_object_warning[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_roundtrip_consolidated[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_read_non_consolidated_warning[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_auto_chunk[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_manual_chunk[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_warning_on_bad_chunks[2 failing variants]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: [].
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_uneven_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_chunk_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_chunk_encoding_with_dask[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_hidden_zarr_keys[2]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_persistence_modes[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_compressor_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_group[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_zarr_mode_w_overwrites_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_dataset_caching[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_mode_rplus_success[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_mode_rplus_fails[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_invalid_dim_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_no_dims_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_append_dim_not_set_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_mode_not_a_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_existing_encoding_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_string_length_mismatch_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_check_encoding_is_consistent_after_append[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_new_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_with_append_dim_no_overwrite[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_to_zarr_compute_false_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_to_zarr_append_compute_false_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_save_emptydim[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_no_warning_from_open_emptydim_with_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_region[72 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_region_mode[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_preexisting_override_metadata[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_region_errors[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_encoding_chunksizes[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_chunk_encoding_with_partial_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_chunk_encoding_with_larger_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_open_zarr_use_cftime[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_read_select_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_attributes[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_chunked_datetime64_or_timedelta64[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_chunked_cftime_datetime[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_zero_dimensional_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_store[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_test_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_load[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_dataset_compute[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_pickle[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_pickle_dataarray[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_None_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_object_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_string_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_string_encoded_characters[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_numpy_datetime_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_cftime_datetime_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_timedelta_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_float64_data[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_coordinates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_global_coordinates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_coordinates_with_space[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_boolean_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_orthogonal_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_vectorized_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_vectorized_indexing_negative_step[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_outer_indexing_reversed[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_isel_dataarray[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_array_type_after_indexing[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_dropna[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_ondisk_after_print[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_bytes_with_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_empty_vlen_string_array[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_mask_and_scale[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_unsigned[8 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_grid_mapping_and_bounds_are_not_coordinates_in_file[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_coordinate_variables_after_dataset_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_grid_mapping_and_bounds_are_coordinates_after_dataarray_roundtrip[2 failing variants]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted.
Emitted warnings: [].
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_coordinates_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_endian[2]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_encoding_kwarg_dates[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_default_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_explicitly_omit_fill_value[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_explicitly_omit_fill_value_via_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_explicitly_omit_fill_value_in_coord[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_explicitly_omit_fill_value_in_coord_via_encoding_kwarg[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_encoding_same_dtype[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_overwrite_values[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_multiindex_not_implemented[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_string_object_warning[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_roundtrip_consolidated[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_read_non_consolidated_warning[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_auto_chunk[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_manual_chunk[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_warning_on_bad_chunks[2 failing variants]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted.
Emitted warnings: [].
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_uneven_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_chunk_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_chunk_encoding_with_dask[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_hidden_zarr_keys[2]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_persistence_modes[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_compressor_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_group[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_zarr_mode_w_overwrites_encoding[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_dataset_caching[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_mode_rplus_success[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_mode_rplus_fails[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_invalid_dim_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_no_dims_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_append_dim_not_set_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_mode_not_a_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_existing_encoding_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_string_length_mismatch_raises[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_check_encoding_is_consistent_after_append[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_new_variable[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_with_append_dim_no_overwrite[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_to_zarr_compute_false_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_to_zarr_append_compute_false_roundtrip[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_save_emptydim[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_no_warning_from_open_emptydim_with_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_region[72 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_region_mode[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_preexisting_override_metadata[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_region_errors[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_encoding_chunksizes[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_chunk_encoding_with_partial_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_chunk_encoding_with_larger_dask_chunks[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_open_zarr_use_cftime[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_read_select_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_attributes[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_chunked_datetime64_or_timedelta64[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_chunked_cftime_datetime[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_avoid_excess_metadata_calls[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_closing_internal_zip_store[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrRegionAuto::test_zarr_region_auto_all[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrRegionAuto::test_zarr_region_auto_mixed[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrRegionAuto::test_zarr_region_auto_noncontiguous[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrRegionAuto::test_zarr_region_auto_new_coord_vals[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrRegionAuto::test_zarr_region_index_write[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrRegionAuto::test_zarr_region_append[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_region[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_region_chunk_partial[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_append_chunk_partial[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_region_chunk_partial_offset[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_safe_chunk_append_dim[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_safe_chunk_region[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_dimension_names[3]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDictStore::test_append_string_length_mismatch_works[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_dimension_names[3]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_append_string_length_mismatch_works[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_dimension_names[3]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestZarrWriteEmpty::test_append_string_length_mismatch_works[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_zarr_version_deprecated: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestDataArrayToZarr::test_dataarray_to_zarr_no_name[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestDataArrayToZarr::test_dataarray_to_zarr_with_name[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestDataArrayToZarr::test_dataarray_to_zarr_coord_name_clash[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestDataArrayToZarr::test_open_dataarray_options[2 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_open_dataset_chunking_zarr[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::test_chunking_consintency[6 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_backends.py::TestNCZarr::test_overwriting_nczarr: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
xarray/tests/test_distributed.py::test_dask_distributed_zarr_integration_test[4 failing variants]: TypeError: Group.create_array() got an unexpected keyword argument 'exists_ok'
```
</details>
| closed | 2024-12-13T00:34:41Z | 2024-12-13T15:59:56Z | https://github.com/pydata/xarray/issues/9880 | [
"CI"
] | github-actions[bot] | 2 |
plotly/dash-table | plotly | 150 | A final pass on the default styles | Once the styling and sizing tests are fixed, I'll take another pass through the default styles | closed | 2018-10-22T17:13:26Z | 2019-07-02T14:01:53Z | https://github.com/plotly/dash-table/issues/150 | [] | chriddyp | 7 |
ultralytics/ultralytics | python | 18,885 | train yolo with random weighted sampler | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have many data sources. Currently I can write all the data paths in the data.yaml file, and when I start training, the different dataset sources are merged into a single dataset.
Can I use or create a method like a weighted random sampler to take an equal number of samples from each data source in every epoch?
For example:
data1 (20,000 images)
data2 (1000 images)
I want every epoch to contain only 1000 images from each data source instead of all of the images for every epoch.
How can I do this, or which methods should I edit to do it?
Thank you.
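The sampling idea in the question can be sketched with the standard library alone (a hypothetical illustration, not part of the Ultralytics API — in PyTorch the equivalent building block is `torch.utils.data.WeightedRandomSampler`): weight each image inversely to the size of its source, then draw a fixed number of indices per epoch.

```python
import random

# Hypothetical sketch of source-balanced sampling. After merging,
# indices 0..19999 belong to data1 and 20000..20999 to data2.
n1, n2 = 20_000, 1_000
indices = list(range(n1 + n2))

# Weight each image inversely to its source size so both sources carry
# equal total probability mass.
weights = [1.0 / n1] * n1 + [1.0 / n2] * n2

# One "epoch" of 2,000 draws: roughly 1,000 expected from each source.
random.seed(0)
epoch = random.choices(indices, weights=weights, k=2_000)

from_data1 = sum(i < n1 for i in epoch)
from_data2 = len(epoch) - from_data1
print(from_data1, from_data2)  # roughly balanced
```

In PyTorch this corresponds to passing `WeightedRandomSampler(weights, num_samples=2000, replacement=True)` as the `sampler` of a `DataLoader`; wiring it into Ultralytics training would presumably mean customizing the trainer's dataloader, which the maintainers can confirm.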
### Additional
_No response_ | open | 2025-01-25T18:40:01Z | 2025-01-26T16:52:01Z | https://github.com/ultralytics/ultralytics/issues/18885 | [
"enhancement",
"question"
] | Dreahim | 7 |
microsoft/qlib | deep-learning | 1,888 | How to create a new Stock Pool (Market) | ## ❓ Questions and Help
How can I create a new Stock Pool (Market) in Qlib and define a specific set of stocks? For example, how can I classify certain stocks under a category similar to CSI 300?
I have checked the documentation, but it seems that there is no relevant information. If there is, could you please provide the exact link? Thank you!
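One possible approach, inferred from how the bundled pools are stored on disk (worth confirming with the maintainers): the pre-built markets live as tab-separated files under `<provider_uri>/instruments/` (e.g. `csi300.txt`, `all.txt`), with one `<code> <start> <end>` line per instrument, so a custom pool can be added as a new file there. All names and dates below are made up for illustration:

```python
from pathlib import Path

# Assumed layout: <provider_uri>/instruments/<market>.txt with
# tab-separated "<code>\t<start_date>\t<end_date>" lines.
provider_uri = Path("qlib_data/cn_data")  # hypothetical data directory
pool = {
    "SH600000": ("2008-01-01", "2030-12-31"),
    "SZ000001": ("2008-01-01", "2030-12-31"),
}

inst_dir = provider_uri / "instruments"
inst_dir.mkdir(parents=True, exist_ok=True)
with open(inst_dir / "mypool.txt", "w") as f:
    for code, (start, end) in pool.items():
        f.write(f"{code}\t{start}\t{end}\n")

# If the format assumption holds, the pool should then be usable like
# the built-in ones, e.g. market="mypool" in a workflow config or
# D.instruments("mypool") in code.
print((inst_dir / "mypool.txt").read_text())
```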
| open | 2025-02-15T16:39:58Z | 2025-03-11T08:02:27Z | https://github.com/microsoft/qlib/issues/1888 | [
"question"
] | Len1925 | 1 |
RobertCraigie/prisma-client-py | asyncio | 744 | Can't run prisma migrate on mysql8: Unknown authentication plugin `sha256_password'. | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
Using this docker-compose:
```yaml
version: "3.9"
services:
mysql:
image: mysql:8.0
environment:
MYSQL_USER: user
MYSQL_PASSWORD: password
MYSQL_ROOT_PASSWORD: root_password
MYSQL_DATABASE: mydatabase
ports:
- "3306:3306"
volumes:
- mysql_data:/var/lib/mysql
command: --default-authentication-plugin=mysql_native_password
volumes:
mysql_data:
```
When I run
```bash
❯ docker compose up -d
❯ prisma migrate dev --name init
```
I see:
```
Environment variables loaded from .env
Prisma schema loaded from schema.prisma
Datasource "db": MySQL database "mydatabase" at "localhost:3306"
Error: Migration engine error:
Error querying the database: Unknown authentication plugin `sha256_password'.
```
## How to reproduce
Run
```bash
prisma migrate dev --name init
```
## Expected behavior
The migrations are applied successfully
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
}
// generator
generator client {
provider = "prisma-client-py"
interface = "asyncio"
recursive_type_depth = 5
}
model Organization {
id String @id @default(cuid())
name String
websiteUrl String?
contactEmail String?
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
users User[]
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: MacOS Ventura
- Database: MySQL 8
- Python version: Python 3.10.10
- Prisma version:
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
prisma : 4.11.0
prisma client python : 0.8.2
platform : darwin
expected engine version : 8fde8fef4033376662cad983758335009d522acb
installed extras : []
install path : /Users/highlander85/Library/Caches/pypoetry/virtualenvs/my-service-hE8eyDXL-py3.10/lib/python3.10/site-packages/prisma
binary cache dir : /Users/highlander85/.cache/prisma-python/binaries/4.11.0/8fde8fef4033376662cad983758335009d522acb
```
| open | 2023-04-18T23:32:33Z | 2023-04-22T13:10:00Z | https://github.com/RobertCraigie/prisma-client-py/issues/744 | [
"bug/0-needs-info",
"kind/bug"
] | travisneilturner | 1 |
litestar-org/litestar | pydantic | 4,031 | Docs: new middleware docs | ### Summary
Something seems off here: https://docs.litestar.dev/2/usage/middleware/creating-middleware.html

| open | 2025-02-26T17:34:24Z | 2025-02-26T17:34:47Z | https://github.com/litestar-org/litestar/issues/4031 | [
"Documentation :books:"
] | euri10 | 0 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 59 | I need help with "data.py" file | Hello, as the title says, I need some help with the "data.py" file.
When I run the original "data.py" file that I cloned from here, it fails with the error below:
```
Traceback (most recent call last):
File "/Users/NGUYENQUANGHUY/PycharmProjects/stanford-tensorflow-tutorials/assignments/chatbot/data.py", line 255, in <module>
prepare_raw_data()
File "/Users/NGUYENQUANGHUY/PycharmProjects/stanford-tensorflow-tutorials/assignments/chatbot/data.py", line 178, in prepare_raw_data
id2line = get_lines()
File "/Users/NGUYENQUANGHUY/PycharmProjects/stanford-tensorflow-tutorials/assignments/chatbot/data.py", line 34, in get_lines
parts = line.split(' +++$+++ ')
TypeError: a bytes-like object is required, not 'str'
```
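For context, the mismatch behind this traceback can be reproduced in isolation: a file opened in binary mode yields bytes lines, and `bytes.split()` only accepts a bytes separator (the sample line below is made up in the corpus format):

```python
# A bytes line, as returned when the file is opened in 'rb' mode.
line = b"L1045 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ They do not!\n"
try:
    line.split(' +++$+++ ')          # str separator on bytes
except TypeError as e:
    print(e)                         # a bytes-like object is required, not 'str'
parts = line.split(b' +++$+++ ')     # bytes separator works
print(len(parts))                    # 5
```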
So I tried not opening the "movie_lines.txt" file in binary mode, but this time I got the error below:
```
Traceback (most recent call last):
File "/Users/NGUYENQUANGHUY/PycharmProjects/stanford-tensorflow-tutorials/assignments/chatbot/data.py", line 255, in <module>
prepare_raw_data()
File "/Users/NGUYENQUANGHUY/PycharmProjects/stanford-tensorflow-tutorials/assignments/chatbot/data.py", line 178, in prepare_raw_data
id2line = get_lines()
File "/Users/NGUYENQUANGHUY/PycharmProjects/stanford-tensorflow-tutorials/assignments/chatbot/data.py", line 32, in get_lines
lines = f.readlines()
File "/Users/NGUYENQUANGHUY/.pyenv/versions/anaconda3-4.0.0/lib/python3.5/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xad in position 3767: invalid start byte
```
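This second error is about encoding rather than types: the corpus contains bytes (such as 0xad) that are not valid UTF-8, so text mode with the default codec fails partway through the file. A minimal illustration — one common workaround people use for this corpus is opening it with `encoding='iso-8859-1'` or `errors='ignore'`, though that is a suggestion rather than something verified here:

```python
# A byte sequence like the one the codec chokes on: 0xad is not valid
# UTF-8 on its own.
raw = b"movie line with a stray byte \xad here\n"
try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print("utf-8 failed:", e.reason)
# latin-1 / iso-8859-1 maps every byte to a character, so it never fails.
text = raw.decode("latin-1")
print(len(text) == len(raw))         # True: one char per byte
```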
Then I decided to change some string variables to bytes, so my "data.py" file became like this:
```
from __future__ import print_function
import os
import random
import re
import numpy as np
import config
def get_lines():
id2line = {}
file_path = os.path.join(config.DATA_PATH, config.LINE_FILE)
with open(file_path, 'rb') as f:
lines = f.readlines()
for line in lines:
parts = line.split(b' +++$+++ ')
if len(parts) == 5:
if parts[4][-1] == '\n':
parts[4] = parts[4][:-1]
id2line[parts[0]] = parts[4]
return id2line
def get_convos():
""" Get conversations from the raw data """
file_path = os.path.join(config.DATA_PATH, config.CONVO_FILE)
convos = []
with open(file_path, 'rb') as f:
for line in f.readlines():
parts = line.split(b' +++$+++ ')
if len(parts) == 4:
convo = []
for line in parts[3][1:-2].split(b', '):
convo.append(line[1:-1])
convos.append(convo)
return convos
def question_answers(id2line, convos):
""" Divide the dataset into two sets: questions and answers. """
questions, answers = [], []
for convo in convos:
for index, line in enumerate(convo[:-1]):
questions.append(id2line[convo[index]])
answers.append(id2line[convo[index + 1]])
assert len(questions) == len(answers)
return questions, answers
def prepare_dataset(questions, answers):
# create path to store all the train & test encoder & decoder
make_dir(config.PROCESSED_PATH)
# random convos to create the test set
test_ids = random.sample([i for i in range(len(questions))],config.TESTSET_SIZE)
filenames = ['train.enc', 'train.dec', 'test.enc', 'test.dec']
files = []
for filename in filenames:
files.append(open(os.path.join(config.PROCESSED_PATH, filename),'wb'))
for i in range(len(questions)):
if i in test_ids:
files[2].write(questions[i] + b'\n')
files[3].write(answers[i] + b'\n')
else:
files[0].write(questions[i] + b'\n')
files[1].write(answers[i] + b'\n')
for file in files:
file.close()
def make_dir(path):
""" Create a directory if there isn't one already. """
try:
os.mkdir(path)
except OSError:
pass
def basic_tokenizer(line, normalize_digits=True):
""" A basic tokenizer to tokenize text into tokens.
Feel free to change this to suit your need. """
line = re.sub(b'<u>', b'', line)
line = re.sub(b'</u>', b'', line)
line = re.sub(b'\[', b'', line)
line = re.sub(b'\]', b'', line)
words = []
_WORD_SPLIT = re.compile(b"([.,!?\"'-<>:;)(])")
_DIGIT_RE = re.compile(b"\d")
for fragment in line.strip().lower().split():
for token in re.split(_WORD_SPLIT, fragment):
if not token:
continue
if normalize_digits:
token = re.sub(_DIGIT_RE, b'#', token)
words.append(token)
return words
def build_vocab(filename, normalize_digits=True):
in_path = os.path.join(config.PROCESSED_PATH, filename)
out_path = os.path.join(config.PROCESSED_PATH, 'vocab.{}'.format(filename[-3:]))
vocab = {}
with open(in_path, 'rb') as f:
for line in f.readlines():
for token in basic_tokenizer(line):
if not token in vocab:
vocab[token] = 0
vocab[token] += 1
sorted_vocab = sorted(vocab, key=vocab.get, reverse=True)
with open(out_path, 'wb') as f:
f.write(b'<pad>' + b'\n')
f.write(b'<unk>' + b'\n')
f.write(b'<s>' + b'\n')
f.write(b'<\s>' + b'\n')
index = 4
for word in sorted_vocab:
if vocab[word] < config.THRESHOLD:
with open('config.py', 'a') as cf:
if filename[-3:] == 'enc':
cf.write('ENC_VOCAB = ' + str(index) + '\n')
else:
cf.write('DEC_VOCAB = ' + str(index) + '\n')
break
f.write(bytes(word) + b'\n')
index += 1
def load_vocab(vocab_path):
with open(vocab_path, 'rb') as f:
words = f.read().splitlines()
return words, {words[i]: i for i in range(len(words))}
def sentence2id(vocab, line):
return [vocab.get(token, vocab[b'<unk>']) for token in basic_tokenizer(line)]
def token2id(data, mode):
""" Convert all the tokens in the data into their corresponding
index in the vocabulary. """
vocab_path = 'vocab.' + mode
in_path = data + '.' + mode
out_path = data + '_ids.' + mode
_, vocab = load_vocab(os.path.join(config.PROCESSED_PATH, vocab_path))
in_file = open(os.path.join(config.PROCESSED_PATH, in_path), 'rb')
out_file = open(os.path.join(config.PROCESSED_PATH, out_path), 'wb')
lines = in_file.read().splitlines()
for line in lines:
if mode == 'dec': # we only care about '<s>' and </s> in encoder
ids = [vocab[b'<s>']]
else:
ids = []
ids.extend(sentence2id(vocab, line))
# ids.extend([vocab.get(token, vocab['<unk>']) for token in basic_tokenizer(line)])
if mode == 'dec':
ids.append(vocab[b'<\s>'])
out_file.write(b' '.join(bytes(id_) for id_ in ids) + b'\n')
def prepare_raw_data():
print('Preparing raw data into train set and test set ...')
id2line = get_lines()
convos = get_convos()
questions, answers = question_answers(id2line, convos)
prepare_dataset(questions, answers)
def process_data():
print('Preparing data to be model-ready ...')
build_vocab('train.enc')
build_vocab('train.dec')
token2id('train', 'enc')
token2id('train', 'dec')
token2id('test', 'enc')
token2id('test', 'dec')
def load_data(enc_filename, dec_filename, max_training_size=None):
encode_file = open(os.path.join(config.PROCESSED_PATH, enc_filename), 'rb')
decode_file = open(os.path.join(config.PROCESSED_PATH, dec_filename), 'rb')
encode, decode = encode_file.readline(), decode_file.readline()
data_buckets = [[] for _ in config.BUCKETS]
i = 0
while encode and decode:
if (i + 1) % 10000 == 0:
print("Bucketing conversation number", i)
encode_ids = [str(id_) for id_ in encode.split()]
decode_ids = [str(id_) for id_ in decode.split()]
for bucket_id, (encode_max_size, decode_max_size) in enumerate(config.BUCKETS):
if len(encode_ids) <= encode_max_size and len(decode_ids) <= decode_max_size:
data_buckets[bucket_id].append([encode_ids, decode_ids])
break
encode, decode = encode_file.readline(), decode_file.readline()
i += 1
return data_buckets
def _pad_input(input_, size):
return input_ + [config.PAD_ID] * (size - len(input_))
def _reshape_batch(inputs, size, batch_size):
""" Create batch-major inputs. Batch inputs are just re-indexed inputs
"""
batch_inputs = []
for length_id in range(size):
        batch_inputs.append(np.array([inputs[batch_id][length_id]
                                      for batch_id in range(batch_size)], dtype=np.int32))  # ids are ints, not strings
return batch_inputs

def get_batch(data_bucket, bucket_id, batch_size=1):
""" Return one batch to feed into the model """
# only pad to the max length of the bucket
encoder_size, decoder_size = config.BUCKETS[bucket_id]
encoder_inputs, decoder_inputs = [], []
for _ in range(batch_size):
encoder_input, decoder_input = random.choice(data_bucket)
# pad both encoder and decoder, reverse the encoder
encoder_inputs.append(list(reversed(_pad_input(encoder_input, encoder_size))))
decoder_inputs.append(_pad_input(decoder_input, decoder_size))
# now we create batch-major vectors from the data selected above.
batch_encoder_inputs = _reshape_batch(encoder_inputs, encoder_size, batch_size)
batch_decoder_inputs = _reshape_batch(decoder_inputs, decoder_size, batch_size)
# create decoder_masks to be 0 for decoders that are padding.
batch_masks = []
for length_id in range(decoder_size):
batch_mask = np.ones(batch_size, dtype=np.float32)
for batch_id in range(batch_size):
# we set mask to 0 if the corresponding target is a PAD symbol.
# the corresponding decoder is decoder_input shifted by 1 forward.
if length_id < decoder_size - 1:
target = decoder_inputs[batch_id][length_id + 1]
if length_id == decoder_size - 1 or target == config.PAD_ID:
batch_mask[batch_id] = 0.0
batch_masks.append(batch_mask)
return batch_encoder_inputs, batch_decoder_inputs, batch_masks

if __name__ == '__main__':
prepare_raw_data()
process_data()
```
With these changes, the "data.py" file runs smoothly, but I found that I still have some problems with the dataset afterward. I actually get many errors during the training step, and I think they come from the dataset, so I would appreciate it if you could take a look at my post. Thank you.
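For what it's worth, one classic pitfall in a Python 3 port like this is writing ids with `bytes(id_)`: calling `bytes()` on an int allocates that many zero bytes instead of encoding the decimal digits, which silently corrupts the generated ids files and could explain downstream training errors. A minimal, self-contained illustration:

```python
# bytes(n) with an int n returns n zero bytes, not the decimal digits of n,
# so joining bytes(id_) writes NUL-filled garbage to the ids file.
ids = [3, 12, 7]

wrong = b' '.join(bytes(i) for i in ids)        # NUL runs of lengths 3, 12 and 7
right = b' '.join(str(i).encode() for i in ids) # decimal digits as bytes

print(wrong)
print(right)  # b'3 12 7'

# The correct encoding round-trips back to the original ids.
assert [int(tok) for tok in right.split()] == ids
```

Checking one line of the generated `train_ids.enc` against the vocabulary in the same way is a quick sanity test before blaming the training step.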
coqui-ai/TTS | deep-learning | 4,097 | [Bug] OS Error | ### Describe the bug
C:\Users\Emire\Desktop\Kılıç>pip install TTS --user
Collecting TTS
Using cached TTS-0.22.0-cp311-cp311-win_amd64.whl
Requirement already satisfied: numpy>=1.24.3 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (1.26.4)
Requirement already satisfied: cython>=0.29.30 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (3.0.11)
Requirement already satisfied: scipy>=1.11.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (1.14.1)
Requirement already satisfied: torch>=2.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (2.5.1)
Requirement already satisfied: torchaudio in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (2.5.1)
Requirement already satisfied: soundfile>=0.12.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.12.1)
Requirement already satisfied: librosa>=0.10.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.10.2.post1)
Requirement already satisfied: scikit-learn>=1.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (1.6.0)
Requirement already satisfied: numba>=0.57.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.60.0)
Requirement already satisfied: inflect>=5.6.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (7.4.0)
Requirement already satisfied: tqdm>=4.64.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (4.67.1)
Requirement already satisfied: anyascii>=0.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.3.2)
Requirement already satisfied: pyyaml>=6.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (6.0.2)
Requirement already satisfied: fsspec>=2023.6.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (2024.12.0)
Requirement already satisfied: aiohttp>=3.8.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (3.11.11)
Requirement already satisfied: packaging>=23.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (24.2)
Requirement already satisfied: flask>=2.0.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (3.1.0)
Requirement already satisfied: pysbd>=0.3.4 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.3.4)
Requirement already satisfied: umap-learn>=0.5.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.5.7)
Requirement already satisfied: pandas<2.0,>=1.4 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (1.5.3)
Requirement already satisfied: matplotlib>=3.7.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (3.10.0)
Requirement already satisfied: trainer>=0.0.32 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.0.36)
Requirement already satisfied: coqpit>=0.0.16 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.0.17)
Requirement already satisfied: jieba in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.42.1)
Requirement already satisfied: pypinyin in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.53.0)
Requirement already satisfied: hangul_romanize in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.1.0)
Requirement already satisfied: gruut==2.2.3 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut[de,es,fr]==2.2.3->TTS) (2.2.3)
Requirement already satisfied: jamo in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.4.1)
Requirement already satisfied: nltk in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (3.9.1)
Requirement already satisfied: g2pkk>=0.1.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.1.2)
Requirement already satisfied: bangla in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.0.2)
Requirement already satisfied: bnnumerizer in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.0.2)
Requirement already satisfied: bnunicodenormalizer in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.1.7)
Requirement already satisfied: einops>=0.6.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.8.0)
Collecting transformers>=4.33.0 (from TTS)
Using cached transformers-4.47.1-py3-none-any.whl.metadata (44 kB)
Collecting encodec>=0.1.1 (from TTS)
Using cached encodec-0.1.1-py3-none-any.whl
Requirement already satisfied: unidecode>=1.3.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (1.3.8)
Requirement already satisfied: num2words in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from TTS) (0.5.14)
Collecting spacy>=3 (from spacy[ja]>=3->TTS)
Using cached spacy-3.8.3-cp311-cp311-win_amd64.whl.metadata (27 kB)
Requirement already satisfied: Babel<3.0.0,>=2.8.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (2.16.0)
Requirement already satisfied: dateparser~=1.1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (1.1.8)
Requirement already satisfied: gruut-ipa<1.0,>=0.12.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (0.13.0)
Requirement already satisfied: gruut_lang_en~=2.0.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (2.0.1)
Requirement already satisfied: jsonlines~=1.2.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (1.2.0)
Requirement already satisfied: networkx<3.0.0,>=2.5.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (2.8.8)
Requirement already satisfied: python-crfsuite~=0.9.7 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (0.9.11)
Requirement already satisfied: gruut_lang_de~=2.0.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut[de,es,fr]==2.2.3->TTS) (2.0.1)
Requirement already satisfied: gruut_lang_es~=2.0.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut[de,es,fr]==2.2.3->TTS) (2.0.1)
Requirement already satisfied: gruut_lang_fr~=2.0.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from gruut[de,es,fr]==2.2.3->TTS) (2.0.2)
Requirement already satisfied: aiohappyeyeballs>=2.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (2.4.4)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (1.3.2)
Requirement already satisfied: attrs>=17.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (24.3.0)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (1.5.0)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (6.1.0)
Requirement already satisfied: propcache>=0.2.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (0.2.1)
Requirement already satisfied: yarl<2.0,>=1.17.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp>=3.8.1->TTS) (1.18.3)
Requirement already satisfied: Werkzeug>=3.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from flask>=2.0.1->TTS) (3.1.3)
Requirement already satisfied: Jinja2>=3.1.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from flask>=2.0.1->TTS) (3.1.4)
Requirement already satisfied: itsdangerous>=2.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from flask>=2.0.1->TTS) (2.2.0)
Requirement already satisfied: click>=8.1.3 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from flask>=2.0.1->TTS) (8.1.7)
Requirement already satisfied: blinker>=1.9 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from flask>=2.0.1->TTS) (1.9.0)
Requirement already satisfied: more-itertools>=8.5.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from inflect>=5.6.0->TTS) (10.5.0)
Requirement already satisfied: typeguard>=4.0.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from inflect>=5.6.0->TTS) (4.4.1)
Requirement already satisfied: audioread>=2.1.9 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (3.0.1)
Requirement already satisfied: joblib>=0.14 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (1.4.2)
Requirement already satisfied: decorator>=4.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (5.1.1)
Requirement already satisfied: pooch>=1.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (1.8.2)
Requirement already satisfied: soxr>=0.3.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (0.5.0.post1)
Requirement already satisfied: typing-extensions>=4.1.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (4.12.2)
Requirement already satisfied: lazy-loader>=0.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (0.4)
Requirement already satisfied: msgpack>=1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from librosa>=0.10.0->TTS) (1.1.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (1.3.1)
Requirement already satisfied: cycler>=0.10 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (4.55.3)
Requirement already satisfied: kiwisolver>=1.3.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (1.4.8)
Requirement already satisfied: pillow>=8 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (11.0.0)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (3.2.0)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib>=3.7.0->TTS) (2.9.0.post0)
Requirement already satisfied: docopt>=0.6.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from num2words->TTS) (0.6.2)
Requirement already satisfied: llvmlite<0.44,>=0.43.0dev0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from numba>=0.57.0->TTS) (0.43.0)
Requirement already satisfied: pytz>=2020.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from pandas<2.0,>=1.4->TTS) (2024.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from scikit-learn>=1.3.0->TTS) (3.5.0)
Requirement already satisfied: cffi>=1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from soundfile>=0.12.0->TTS) (1.17.1)
Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (3.0.12)
Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (1.0.5)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (1.0.11)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (2.0.10)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (3.0.9)
Collecting thinc<8.4.0,>=8.3.0 (from spacy>=3->spacy[ja]>=3->TTS)
Using cached thinc-8.3.3-cp311-cp311-win_amd64.whl.metadata (15 kB)
Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (1.1.3)
Requirement already satisfied: srsly<3.0.0,>=2.4.3 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (2.5.0)
Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (2.0.10)
Requirement already satisfied: weasel<0.5.0,>=0.1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (0.4.1)
Requirement already satisfied: typer<1.0.0,>=0.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (0.15.1)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (2.32.3)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (2.10.4)
Requirement already satisfied: setuptools in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (75.6.0)
Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy>=3->spacy[ja]>=3->TTS) (3.5.0)
Requirement already satisfied: sudachipy!=0.6.1,>=0.5.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy[ja]>=3->TTS) (0.6.9)
Requirement already satisfied: sudachidict_core>=20211220 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from spacy[ja]>=3->TTS) (20241021)
Requirement already satisfied: filelock in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from torch>=2.1->TTS) (3.16.1)
Requirement already satisfied: sympy==1.13.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from torch>=2.1->TTS) (1.13.1)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from sympy==1.13.1->torch>=2.1->TTS) (1.3.0)
Requirement already satisfied: colorama in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tqdm>=4.64.1->TTS) (0.4.6)
Requirement already satisfied: psutil in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from trainer>=0.0.32->TTS) (6.1.0)
Requirement already satisfied: tensorboard in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from trainer>=0.0.32->TTS) (2.18.0)
Requirement already satisfied: huggingface-hub<1.0,>=0.24.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers>=4.33.0->TTS) (0.27.0)
Requirement already satisfied: regex!=2019.12.17 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers>=4.33.0->TTS) (2024.11.6)
Requirement already satisfied: tokenizers<0.22,>=0.21 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers>=4.33.0->TTS) (0.21.0)
Requirement already satisfied: safetensors>=0.4.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from transformers>=4.33.0->TTS) (0.4.5)
Requirement already satisfied: pynndescent>=0.5 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from umap-learn>=0.5.1->TTS) (0.5.13)
Requirement already satisfied: pycparser in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from cffi>=1.0->soundfile>=0.12.0->TTS) (2.22)
Requirement already satisfied: tzlocal in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from dateparser~=1.1.0->gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (5.2)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from Jinja2>=3.1.2->flask>=2.0.1->TTS) (3.0.2)
Requirement already satisfied: six in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from jsonlines~=1.2.0->gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (1.17.0)
Requirement already satisfied: language-data>=1.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from langcodes<4.0.0,>=3.2.0->spacy>=3->spacy[ja]>=3->TTS) (1.3.0)
Requirement already satisfied: platformdirs>=2.5.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from pooch>=1.1->librosa>=0.10.0->TTS) (4.3.6)
Requirement already satisfied: annotated-types>=0.6.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy>=3->spacy[ja]>=3->TTS) (0.7.0)
Requirement already satisfied: pydantic-core==2.27.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy>=3->spacy[ja]>=3->TTS) (2.27.2)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests<3.0.0,>=2.13.0->spacy>=3->spacy[ja]>=3->TTS) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests<3.0.0,>=2.13.0->spacy>=3->spacy[ja]>=3->TTS) (2.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests<3.0.0,>=2.13.0->spacy>=3->spacy[ja]>=3->TTS) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests<3.0.0,>=2.13.0->spacy>=3->spacy[ja]>=3->TTS) (2024.12.14)
Requirement already satisfied: blis<1.2.0,>=1.1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from thinc<8.4.0,>=8.3.0->spacy>=3->spacy[ja]>=3->TTS) (1.1.0)
Requirement already satisfied: confection<1.0.0,>=0.0.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from thinc<8.4.0,>=8.3.0->spacy>=3->spacy[ja]>=3->TTS) (0.1.5)
Requirement already satisfied: shellingham>=1.3.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from typer<1.0.0,>=0.3.0->spacy>=3->spacy[ja]>=3->TTS) (1.5.4)
Requirement already satisfied: rich>=10.11.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from typer<1.0.0,>=0.3.0->spacy>=3->spacy[ja]>=3->TTS) (13.9.4)
Requirement already satisfied: cloudpathlib<1.0.0,>=0.7.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from weasel<0.5.0,>=0.1.0->spacy>=3->spacy[ja]>=3->TTS) (0.20.0)
Requirement already satisfied: smart-open<8.0.0,>=5.2.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from weasel<0.5.0,>=0.1.0->spacy>=3->spacy[ja]>=3->TTS) (7.1.0)
Requirement already satisfied: absl-py>=0.4 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tensorboard->trainer>=0.0.32->TTS) (2.1.0)
Requirement already satisfied: grpcio>=1.48.2 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tensorboard->trainer>=0.0.32->TTS) (1.68.1)
Requirement already satisfied: markdown>=2.6.8 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tensorboard->trainer>=0.0.32->TTS) (3.7)
Requirement already satisfied: protobuf!=4.24.0,>=3.19.6 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tensorboard->trainer>=0.0.32->TTS) (5.29.2)
Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tensorboard->trainer>=0.0.32->TTS) (0.7.2)
Requirement already satisfied: marisa-trie>=1.1.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy>=3->spacy[ja]>=3->TTS) (1.2.1)
Requirement already satisfied: markdown-it-py>=2.2.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from rich>=10.11.0->typer<1.0.0,>=0.3.0->spacy>=3->spacy[ja]>=3->TTS) (3.0.0)
Requirement already satisfied: pygments<3.0.0,>=2.13.0 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from rich>=10.11.0->typer<1.0.0,>=0.3.0->spacy>=3->spacy[ja]>=3->TTS) (2.18.0)
Requirement already satisfied: wrapt in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from smart-open<8.0.0,>=5.2.1->weasel<0.5.0,>=0.1.0->spacy>=3->spacy[ja]>=3->TTS) (1.17.0)
Requirement already satisfied: tzdata in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tzlocal->dateparser~=1.1.0->gruut==2.2.3->gruut[de,es,fr]==2.2.3->TTS) (2024.2)
Requirement already satisfied: mdurl~=0.1 in c:\users\emire\appdata\local\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0.0,>=0.3.0->spacy>=3->spacy[ja]>=3->TTS) (0.1.2)
Using cached spacy-3.8.3-cp311-cp311-win_amd64.whl (12.2 MB)
Using cached transformers-4.47.1-py3-none-any.whl (10.1 MB)
Using cached thinc-8.3.3-cp311-cp311-win_amd64.whl (1.5 MB)
Installing collected packages: transformers, thinc, encodec, spacy, TTS
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\Emire\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\transformers\\models\\deprecated\\trajectory_transformer\\convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py'
HINT: This error might have occurred since this system does not have Windows Long Path support enabled. You can find information on how to enable this at https://pip.pypa.io/warnings/enable-long-paths
What should I do? Please help.
### To Reproduce
test
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
test
```
### Additional context
_No response_ | closed | 2024-12-26T20:01:33Z | 2025-02-07T06:03:12Z | https://github.com/coqui-ai/TTS/issues/4097 | [
"bug",
"wontfix"
] | JustLmr | 3 |
mars-project/mars | pandas | 2972 | [Proposal] a lineage reconstruction based failover for mars | # Background
Large-scale distributed computing systems can fail for various reasons, including network problems, machine failures, and process restarts. Network failures can prevent nodes and workers from sending and receiving data; machine failures and process restarts can cause data loss and force tasks to be re-executed.
In Mars On Ray, failures fall into three main categories:
1. The Ray objects (input parameters, etc.) that a subtask depends on exist, but the execution of the subtask itself fails (usually caused by OOM). In this case, the subtask can be retried;
2. The Ray objects (input parameters, etc.) that the subtask depends on are lost, but the lineage of those objects exists. In this case, the input parameters need to be recursively reconstructed through lineage reconstruction before the current subtask can be executed;
3. The Ray objects (input parameters, etc.) that the subtask depends on are lost, and the lineage of those objects is also lost. In this case, the subtask cannot be recovered.
Here we propose how to implement Mars on Ray failover based on the distributed futures provided by Ray:
1. Recover subtask based on task retry
2. Recover lost subtask results through lineage reconstruction
3. Cut off the lineage dependency chain through checkpoints and object management, avoiding large amounts of task reconstruction while also reducing the metadata storage overhead.
# Proposal
## Subtask retry
Most subtask execution failures can be directly recovered by task retry. Ray supports automatic rerunning of failed tasks. The default number of retries is 3. We can specify the number of retries through `max_retries` when submitting subtasks. Set to -1 to retry indefinitely; set to 0 to disable retry.
The key to task retry is that each subtask must be **idempotent and side-effect free**, otherwise there will be data and state inconsistencies. Therefore, for each subtask, if it is retryable, set the configured number of retries when submitting the remote task corresponding to the subtask; if it is not retryable, set `max_retries` to 0.
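As a concrete illustration, the retry policy above can be sketched in plain Python. This is a toy model — in the real implementation the budget would be passed to Ray as the task's `max_retries` option; the helper names here are hypothetical:

```python
def retry_budget(retryable: bool, configured_retries: int = 3) -> int:
    """Number of retries to pass as Ray's ``max_retries`` task option.

    Non-retryable subtasks (those with side effects) must use 0 so the
    task is never re-executed; -1 would mean retry indefinitely.
    """
    return configured_retries if retryable else 0


def run_with_retries(fn, max_retries: int):
    """Simulate Ray's retry loop for an idempotent, side-effect-free task."""
    attempts = 0
    while True:
        try:
            return fn()
        except Exception:
            if 0 <= max_retries <= attempts:  # budget exhausted (or 0)
                raise
            attempts += 1


# Example: a flaky but idempotent subtask that fails twice, then succeeds.
calls = {"n": 0}

def flaky_subtask():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated OOM")
    return "result"

value = run_with_retries(flaky_subtask, retry_budget(retryable=True))  # "result"
```

Because the subtask is idempotent, re-running it twice after the simulated OOM failures is safe and converges to the same result.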
## Recover subtask through lineage reconstruction
When the input objects that a subtask depends on are lost (node failure, object GC, etc.), the current subtask cannot be retried directly; all objects that the subtask depends on need to be recursively reconstructed before retrying it. Ray supports **automatically reconstructing lost objects through lineage reconstruction**, thereby recovering the failed subtask. When a task supports retry, the owner caches the object lineage, that is, the task specification needed to recreate the object. If all copies of the object are lost, the owner resubmits the task that created the object, and the objects that the task depends on are recursively reconstructed.
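Conceptually, the owner-side recovery behaves like the following toy model (plain Python, not Ray's actual implementation; `store` stands in for the object store and `lineage` for the owner's cached task specifications):

```python
def reconstruct(obj_id, store, lineage):
    """Recursively recover a lost object from cached lineage (toy model).

    ``store`` maps object id -> value for objects still alive; ``lineage``
    maps object id -> (task_fn, dep_ids), the owner's cached task spec.
    """
    if obj_id in store:
        return store[obj_id]
    if obj_id not in lineage:
        raise KeyError(f"lineage for {obj_id} was evicted; cannot reconstruct")
    task_fn, dep_ids = lineage[obj_id]
    deps = [reconstruct(d, store, lineage) for d in dep_ids]
    store[obj_id] = task_fn(*deps)  # resubmit the task that created the object
    return store[obj_id]


# Chain a -> b -> c where b and c were lost but their lineage survives.
lineage = {
    "b": (lambda a: a + 1, ["a"]),
    "c": (lambda b: b * 2, ["b"]),
}
store = {"a": 10}
result = reconstruct("c", store, lineage)  # rebuilds b (11), then c (22)
```

Note the `KeyError` branch: if the lineage entry has been evicted, reconstruction fails — exactly the `ObjectReconstructionFailedLineageEvictedError` case shown below.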
The key to lineage reconstruction is that the owner holds the lineage of the object and its entire dependency tree. If the lineage is evicted, the object cannot be reconstructed and an error is raised:
```
ray.exceptions.ObjectReconstructionFailedLineageEvictedError: Failed to retrieve object 18b2ad3c688fb947ffffffffffffffffffffffff0100000001000000. To see information about where this ObjectRef was created in Python, set the environment variable RAY_record_ref_creation_sites=1 during `ray start` and `ray.init()`.
E
E The object cannot be reconstructed because its lineage has been evicted to reduce memory pressure. To prevent this error, set the environment variable RAY_max_lineage_bytes=<bytes> (default 1GB) during `ray start`.
../../../anaconda/envs/py3.7/lib/python3.7/site-packages/ray/worker.py:1811: ObjectReconstructionFailedLineageEvictedError
```
If the lineage is not evicted, it will start to resubmit the failed subtasks to recover the lost object:
```
[2022-04-25 23:32:37,446 E 381614 381695] core_worker.cc:510: :info_message: Attempting to recover 18 lost objects by resubmitting their tasks. To disable object reconstruction, set @ray.remote(max_tries=0).
```
Therefore, **the key to supporting lineage reconstruction in Mars is to manage lineage and avoid lineage loss**. Currently, the lineage occupied by each subtask graph is generally less than 100 MB when shuffle is excluded; when shuffle is included, it rapidly grows from hundreds of MB to several GB as the number of chunks increases:
map chunk nums | subtask nums | subtask graph serialization size & duration | ray mapper task spec size | ray reducer task spec size | rough lineage size
-- | -- | -- | -- | -- | --
3000 | 4500 | 10s, 22448580 (22M) | 13015975, i.e. 12.5M | 434057232, i.e. 414M | 427M
6000 | 9000 | 12s, 45064250 (43M) | 26032975, i.e. 25M | 1696331232, i.e. 1.6G | 1.6G
9000 | 13500 | 18s, 67681107 (65M) | 39049975, i.e. 37.2M | 3816821232, i.e. 3.6G | 3.6G
Since Mars uses a fine-grained task graph and Ray keeps fine-grained lineage, the lineage overhead is much higher than for a coarse-grained lineage such as Spark's. Even if Ray supports "collapsing" shared metadata in the future, e.g., keeping one metadata entry for all N outputs of a task, the chunk graphs of most subtasks differ, so their task specs differ too, and Mars cannot use this optimization to reduce lineage storage cost.
Therefore, in order to avoid losing lineage and to reduce the lineage overhead of the supervisor, we should **manage the lineage in a distributed manner**, for example by submitting some subtask graphs to separate Ray actors for execution.
To sum up, Mars failover based on lineage reconstruction can be designed as a distributed supervisor architecture, which also reduces the pressure on a single supervisor:

Specifically:
- `SubtaskGraphSubmitter` is responsible for submitting all subtasks of the specified SubtaskGraph to the cluster for execution using the Ray task API.
- We can also cut a SubtaskGraph into multiple subgraphs and send them to different SubtaskGraphSubmitter processes for submission to avoid scenarios where a single SubtaskGraph is too large;
- `SubtaskGraphSubmitter` needs to be created on a separate node through a placement group (PG), avoiding sharing the node with computing processes and reducing the probability of SubtaskGraphSubmitter actor failure;
- `LineageManager` is responsible for accounting the cost of lineage and then determining whether to submit a SubtaskGraph in the supervisor or in a SubtaskGraphSubmitter actor. The main considerations are:
- SubtaskGraph serialization overhead, end-to-end delay.
- The resource overhead of the Ray actor itself; it can be created with `num_cpus=0`, because it only schedules tasks and holds lineage and does not perform computation. Currently, Ray starts evicting lineage when the memory occupied by lineage exceeds `RAY_max_lineage_bytes` (default 1GB).
- Also provide a switch to allow the remote SubtaskGraphSubmitter to be turned off.
- SubtaskGraphSubmitter itself is also managed by the life cycle service as a reference, ensuring that it can be recycled when the corresponding object is no longer needed.
For simplicity, a set of heuristic rules can be implemented to determine when to submit SubtaskGraph locally or remotely:
- If lineage reconstruction is turned off, all Subtask Graphs are submitted in the supervisor
- If the lineage occupied by the current SubtaskGraph is greater than half the threshold, submit it to the remote actor for scheduling
- If the current lineage has already exceeded 80% of the threshold, all subsequent SubtaskGraphs are submitted to the remote actor for scheduling.
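These rules can be sketched as a small decision function (the 1 GB default threshold mirrors Ray's `RAY_max_lineage_bytes`; the function name and return values are illustrative):

```python
def pick_submitter(lineage_enabled: bool,
                   graph_lineage_bytes: int,
                   current_lineage_bytes: int,
                   threshold: int = 1 << 30) -> str:
    """Decide where to submit a SubtaskGraph per the heuristic rules above.

    Returns "local" (submit in the supervisor) or "remote" (submit in a
    SubtaskGraphSubmitter actor).
    """
    if not lineage_enabled:
        return "local"                      # lineage reconstruction is off
    if current_lineage_bytes > 0.8 * threshold:
        return "remote"                     # supervisor lineage nearly full
    if graph_lineage_bytes > threshold / 2:
        return "remote"                     # this graph alone is too heavy
    return "local"
```

Keeping the policy in one pure function like this makes it trivial to tune thresholds or swap in a smarter cost model later.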
At the same time, Mars also needs to optimize the subtask to reduce the size of the task specification that needs to be stored, so that it can store the lineages of larger computing tasks:
- Prune subtask's chunk graph and input parameters
- Optimize serialized result size
## Cut the dependency chain through checkpoint
Lineage reconstruction has its shortcomings: memory overhead and excessive reconstruction:
- A large number of fine-grained lineage entries takes up a lot of memory. If the dependency chain is very long, the lineage will cause OOM or be evicted.
- When a node fails and an object is lost, reconstruction may start from very early lineage, causing a large number of subtasks to rerun and slowing task execution. For most **narrowly dependent subtasks**, lineage reconstruction generally only needs to reconstruct a few subtasks. However, if the upstream depends on a **shuffle subtask**, then because shuffle is **all-to-all communication**, a large number of upstream subtasks must be rerun, incurring heavy data transfer.
Therefore, Mars needs to implement a **checkpoint mechanism to cut off the dependency chain, removing lineage metadata overhead and avoiding large amounts of subtask reconstruction**.
In traditional computing engines, checkpointing is accomplished by storing computed results in a distributed file system. The reliability provided by the distributed file system ensures that reconstruction can finish at the checkpointed stage.
In Mars on Ray, we can extend Ray's object store to achieve this capability. When submitting a remote task, we can specify the reliability of the object that Ray must ensure. When no node holds the checkpointed object, Ray should automatically load the object data from external systems into the object store. This ensures that even if all object replicas in the object store have been lost, the subtask does not need to be reconstructed.
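The checkpoint-fallback behavior just described can be sketched with a hypothetical helper (dictionaries stand in for the in-memory object store and the external checkpoint storage; this is not Ray's actual API):

```python
def get_or_restore(obj_id, object_store, external_storage):
    """Fetch an object, falling back to its checkpointed copy.

    If all in-memory replicas are lost but the object was checkpointed,
    reload it from external storage instead of re-running its task.
    """
    if obj_id in object_store:
        return object_store[obj_id]
    if obj_id in external_storage:
        object_store[obj_id] = external_storage[obj_id]  # restore a replica
        return object_store[obj_id]
    raise KeyError(f"{obj_id} lost and not checkpointed; needs reconstruction")


object_store = {"x": 1}          # surviving in-memory replicas
checkpoints = {"y": 2}           # durable external copies
x = get_or_restore("x", object_store, checkpoints)
y = get_or_restore("y", object_store, checkpoints)  # reloaded from checkpoint
try:
    get_or_restore("z", object_store, checkpoints)
    z_recoverable = True
except KeyError:
    z_recoverable = False        # "z" would need lineage reconstruction
```

Only the last case — lost and not checkpointed — has to fall back to lineage reconstruction, which is what makes checkpoints a safe place to cut the lineage chain.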
In this way, if we find appropriate cutpoints to cut off the lineage in Mars, Mars can release the lineage before those objects, since lineage reconstruction is guaranteed to succeed up to the cutpoint. The cutpoint rules can be as follows:
- Checkpoint when submitting a wide-dependency task such as a shuffle subtask. Specify multiple object replicas to ensure that lineage reconstruction can be completed by the end of those subtasks, avoiding the heavy overhead of wide-dependency recomputation.
- Extend Mars tileable's execute API to allow users to specify the number of replicas for result chunks. When the number of copies is greater than 1, all lineages of the tileable can be cleaned up, and the lineage reconstruction can be stopped there. This is the key to **failover of iterative computations**.
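The effect of a cutpoint on lineage retention can be illustrated with a toy traversal (purely illustrative, not Mars/Ray internals): traversal stops at checkpointed objects, so lineage entries upstream of them can be released.

```python
def needed_lineage(target, lineage, checkpointed):
    """Lineage entries required to reconstruct ``target``.

    Traversal stops at checkpointed objects, since their replicas are
    guaranteed to be reloadable; everything upstream of them can be
    released.
    """
    needed, stack = set(), [target]
    while stack:
        obj = stack.pop()
        if obj in checkpointed or obj in needed or obj not in lineage:
            continue
        needed.add(obj)
        stack.extend(lineage[obj])
    return needed


# Chain a -> b -> c -> d where b is checkpointed (e.g. a shuffle output).
lineage = {"b": ["a"], "c": ["b"], "d": ["c"]}
kept = needed_lineage("d", lineage, checkpointed={"b"})   # {"c", "d"}
released = set(lineage) - kept                            # {"b"}
```

With `b` checkpointed, the entries for `b` and everything upstream of it can be dropped; without the checkpoint, the full chain must be retained.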
# Limitation
- Lineage reconstruction does not support reconstructing objects created by `ray.put`, but in Mars on Ray all future objects are the results of remote task execution, so this problem does not arise.
- Lineage reconstruction will result in higher memory usage because the supervisor/SubtaskGraphSubmitter needs to store all subtasks that can be re-executed on failure.
- At present, Ray only has a basic cache for lineage, so lineage is easily evicted by the large-scale, complex computations of frameworks built on Ray, causing subsequent failover to fail. In the future, we need to extend Ray's lineage management capabilities to let the upper-level framework control the lineage life cycle in a more fine-grained manner.
- Currently, if the lineage for an object exists, the object won't be garbage collected, which may eventually use up disk space. We may need to allow decoupling the lineage life cycle from the object.
# Development Plan
- [x] #3029
- [ ] Implement SubtaskGraphSubmitter, encapsulate SubtaskGraph submission logic
- [ ] Implement LineageManager to support the submission of lineages in local and remote SubtaskGraphSubmitter based on rules
- [ ] Make SubtaskGraphSubmitter manageable by tileable lifecycle
- [ ] Extend the HA capability of the Ray object store, support specifying the number of object replicas and the object reliability level when submitting a remote task
- [ ] Implement the Mars checkpoint mechanism
- [ ] Subtask submission information pruning
- [ ] Subtask submission information serialization size optimization
- [ ] Decouple the lineage life cycle from ray object.
# Reference
- https://docs.ray.io/en/latest/ray-core/troubleshooting.html#understanding-objectlosterrors
- https://docs.ray.io/en/master/ray-core/actors/fault-tolerance.html
- https://docs.ray.io/en/master/ray-core/actors/patterns/fault-tolerance-actor-checkpointing.html
- https://docs.ray.io/en/master/ray-core/tasks/fault-tolerance.html
- [Ownership: A Distributed Futures System for Fine-Grained Tasks](https://www.usenix.org/system/files/nsdi21_slides_wang-stephanie.pdf)
| open | 2022-04-26T15:12:45Z | 2022-05-20T03:36:23Z | https://github.com/mars-project/mars/issues/2972 | [] | chaokunyang | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1601 | Question about CycleGAN's FCN score source code | Excuse me, could you provide the source code for CycleGAN's FCN score? I want to evaluate my results. Thank you very much. | closed | 2023-09-30T10:20:22Z | 2023-10-20T02:12:06Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1601 | [] | HuXiaokai12138 | 0 |
moshi4/pyCirclize | data-visualization | 83 | Unable to auto adjust annotation | Setting `config.ann_adjust.enable = True` raises `AttributeError: module 'pycirclize.config' has no attribute 'ann_adjust'` (with `pycirclize.__version__ == 1.8.0`).
| closed | 2025-02-14T09:10:01Z | 2025-02-14T09:40:29Z | https://github.com/moshi4/pyCirclize/issues/83 | [
"question"
] | jishnu-lab | 1 |
tox-dev/tox | automation | 2633 | Undocumented user level config file [tox4] | In the FAQ for tox 4 I discovered a note about a "user level config file", but I can't find any details, such as the default file location, the structure of the config, and its behavior. | closed | 2022-12-08T08:15:15Z | 2023-06-17T01:12:09Z | https://github.com/tox-dev/tox/issues/2633 | [
"area:documentation",
"help:wanted"
] | ziima | 5 |
LibrePhotos/librephotos | django | 763 | Implement viewing videos | **Describe the enhancement you'd like**
As a user, I want to be able to watch my videos.
**Describe why this will benefit the LibrePhotos**
Use react-native-vlc-media-player as this should be very stable and reliable.
| open | 2023-02-27T17:09:16Z | 2023-12-11T14:56:50Z | https://github.com/LibrePhotos/librephotos/issues/763 | [
"enhancement",
"good first issue",
"mobile"
] | derneuere | 2 |
dunossauro/fastapi-do-zero | pydantic | 240 | Text inconsistent with the code in lesson 7 about the `get_current_user` function | In lesson 7, when we reach the [Protegendo os Endpoints](https://fastapidozero.dunossauro.com/06/#protegendo-os-endpoints) ("Protecting the Endpoints") section, the `get_current_user` function is defined **synchronously**:
```python
# ...
def get_current_user(
session: Session = Depends(get_session),
token: str = Depends(oauth2_scheme),
):
# ...
```
However, the passage commenting on the function says it was defined as **asynchronous**.

The text therefore causes confusion, since it does not match the code snippet.
The possible solutions would be:
* Change the code to async
* Remove the passage about asynchrony
If the code is changed to async, I think it is worth explaining why adding `async` is enough to query the database in a non-blocking way, given that we are using the database synchronously. | closed | 2024-09-16T21:09:31Z | 2024-10-02T22:03:24Z | https://github.com/dunossauro/fastapi-do-zero/issues/240 | [] | RWallan | 0 |
huggingface/text-generation-inference | nlp | 2363 | Build Intel CPU optimized image automatically | Hello, we are looking for the best way to deploy TGI on Xeons.
I understand that container images tagged with `x.y.z-intel` are the XPU builds, while `Dockerfile_intel` defines both XPU and CPU paths, XPU being the default. I have successfully ran manual builds for CPU and those works great.
By using the default `x.y.z-intel` tag, one launches the XPU-optimized variant. On Xeons, this causes `Target function add_rms_norm on cpu haven't implemented yet.` errors which I don't encounter when building the CPU image manually (via modified `PLATFORM` argument).
@sywangyi @Narsil would it be possible to automatically build the CPU-optimized variant w/ IPEX & upload as part of CI?
It boils down to adding an additional image build that runs `Dockerfile_intel` with the build arg `PLATFORM` set to `cpu` instead of the default `xpu`, and pushing it to a separate tag (e.g. `x.y.z-intel-cpu`).
Additionally, is there any place where all of the available image tags are listed?
Thanks for all of the great work! | open | 2024-08-06T12:47:59Z | 2024-08-08T14:47:11Z | https://github.com/huggingface/text-generation-inference/issues/2363 | [] | Feelas | 1 |
kymatio/kymatio | numpy | 501 | ENH more 1D tests for np/tf frontend/backend | closed | 2020-01-24T03:40:08Z | 2022-05-31T00:49:07Z | https://github.com/kymatio/kymatio/issues/501 | [] | edouardoyallon | 2 | |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1209 | error on requirements.txt MacOS | Trying to install on macOS. When I enter "pip install -r requirements.txt" it goes through a bunch of stuff but then errors out with what is pasted below. Does anyone have any ideas?
MacOS 12.3.1, Mac Studio M1
Error output:
` …
Processing numpy/random/mtrand.pyx
Processing numpy/random/_generator.pyx
Processing numpy/random/_pcg64.pyx
Processing numpy/random/_common.pyx
Cythonizing sources
blas_opt_info:
blas_mkl_info:
customize UnixCCompiler
libraries mkl_rt not found in ['/Users/ben/anaconda3/lib', '/usr/local/lib', '/usr/lib']
NOT AVAILABLE
blis_info:
libraries blis not found in ['/Users/ben/anaconda3/lib', '/usr/local/lib', '/usr/lib']
NOT AVAILABLE
openblas_info:
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/ben/anaconda3/include -fPIC -O2 -isystem /Users/ben/anaconda3/include
creating /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var
creating /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders
creating /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0
creating /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn
creating /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T
creating /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12
compile options: '-c'
clang: /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.c
xcrun: error: invalid active developer path (/Applications/Xcode.app/Contents/Developer), missing xcrun at: /Applications/Xcode.app/Contents/Developer/usr/bin/xcrun
Traceback (most recent call last):
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/unixccompiler.py", line 53, in UnixCCompiler__compile
self.spawn(self.compiler_so + cc_args + [src, '-o', obj] + deps +
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/ccompiler.py", line 90, in <lambda>
m = lambda self, *args, **kw: func(self, *args, **kw)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/ccompiler.py", line 173, in CCompiler_spawn
raise DistutilsExecError('Command "%s" failed with exit status %d%s' %
distutils.errors.DistutilsExecError: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/ben/anaconda3/include -fPIC -O2 -isystem /Users/ben/anaconda3/include -c /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.c -o /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.o -MMD -MF /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.o.d" failed with exit status 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/ben/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/Users/ben/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/ben/anaconda3/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-build-env-7tj7xzft/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 157, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-build-env-7tj7xzft/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 248, in run_setup
super(_BuildMetaLegacyBackend,
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-build-env-7tj7xzft/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 513, in <module>
setup_package()
File "setup.py", line 505, in setup_package
setup(**metadata)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/core.py", line 135, in setup
config = configuration()
File "setup.py", line 173, in configuration
config.add_subpackage('numpy')
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/misc_util.py", line 1019, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/misc_util.py", line 985, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/misc_util.py", line 927, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "numpy/setup.py", line 8, in configuration
config.add_subpackage('core')
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/misc_util.py", line 1019, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/misc_util.py", line 985, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/misc_util.py", line 927, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "numpy/core/setup.py", line 757, in configuration
blas_info = get_info('blas_opt', 0)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 584, in get_info
return cl().get_info(notfound_action)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 844, in get_info
self.calc_info()
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 1989, in calc_info
if self._calc_info(blas):
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 1981, in _calc_info
return getattr(self, '_calc_info_{}'.format(name))()
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 1932, in _calc_info_openblas
info = get_info('openblas')
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 584, in get_info
return cl().get_info(notfound_action)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 844, in get_info
self.calc_info()
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 2195, in calc_info
info = self._calc_info()
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 2183, in _calc_info
if not (skip_symbol_check or self.check_symbols(info)):
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/system_info.py", line 2262, in check_symbols
obj = c.compile([src], output_dir=tmpdir)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/ccompiler.py", line 90, in <lambda>
m = lambda self, *args, **kw: func(self, *args, **kw)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/ccompiler.py", line 361, in CCompiler_compile
single_compile(o)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/ccompiler.py", line 321, in single_compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/ccompiler.py", line 90, in <lambda>
m = lambda self, *args, **kw: func(self, *args, **kw)
File "/private/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/pip-install-3od9mbik/numpy_75f80fbb24b443528443f3b884563934/numpy/distutils/unixccompiler.py", line 57, in UnixCCompiler__compile
raise CompileError(msg)
distutils.errors.CompileError: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Users/ben/anaconda3/include -fPIC -O2 -isystem /Users/ben/anaconda3/include -c /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.c -o /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.o -MMD -MF /var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/var/folders/s0/j8wf6b05697_54202d1tr5800000gn/T/tmpkhmv3_12/source.o.d" failed with exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
` | open | 2023-05-04T14:39:23Z | 2023-09-28T14:15:57Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1209 | [] | dlogneb | 4 |
ultrafunkamsterdam/undetected-chromedriver | automation | 787 | TypeError: expected str, bytes or os.PathLike object, not NoneType | I am trying to run undetected chromedriver with the getting-started code but keep getting this error:
```
Process Process-1:
Traceback (most recent call last):
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\site-packages\undetected_chromedriver\dprocess.py", line 59, in _start_detached
p = Popen([executable, *args], stdin=PIPE, stdout=PIPE, stderr=PIPE, **kwargs)
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 947, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 1356, in _execute_child
args = list2cmdline(args)
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\subprocess.py", line 561, in list2cmdline
for arg in map(os.fsdecode, seq):
File "C:\Users\DC\AppData\Local\Programs\Python\Python39\lib\os.py", line 822, in fsdecode
filename = fspath(filename) # Does type-checking of `filename`.
TypeError: expected str, bytes or os.PathLike object, not NoneType
``` | open | 2022-08-19T11:21:47Z | 2022-12-03T10:45:21Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/787 | [] | saif-byte | 4 |
d2l-ai/d2l-en | data-science | 1,837 | Minor typo in Figure 9.1.1: `function` misspelled as `fuction` | In [Fig. 9.1.1](https://github.com/d2l-ai/d2l-en/blob/master/img/gru-1.svg) as well as some other diagrams in [9.1. GRU](https://d2l.ai/chapter_recurrent-modern/gru.html), `activation function` is misspelled as `activation fuction`.
 | closed | 2021-07-22T08:11:36Z | 2021-07-23T05:04:47Z | https://github.com/d2l-ai/d2l-en/issues/1837 | [] | gudzpoz | 1 |
sqlalchemy/alembic | sqlalchemy | 1,302 | ForeignKeyConstraint argument `match` is not rendered in autogeneration | When specifying `ForeignKeyConstraint(..., match="FULL")` I noticed that the autogeneration does not pick up the keyword argument `match`.
I already checked, the constraint is interpreted correctly and only during rendering in:
https://github.com/sqlalchemy/alembic/blob/dbdec2661b8a01132ea3f7a027f85fed2eaf5e54/alembic/autogenerate/render.py#L983C25-L983C25
the argument is just silently dropped, which totally confuses me.
Am I not seeing the reason for that? I need many `ForeignKeyConstraints` to have full matching, this is troublesome doing by hand. | closed | 2023-08-24T14:08:56Z | 2023-08-31T18:27:04Z | https://github.com/sqlalchemy/alembic/issues/1302 | [
"bug",
"autogenerate - rendering",
"PRs (with tests!) welcome"
] | asibkamalsada | 3 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 10 | Web page shows 404 Not Found after deploying in Docker | Hello, after building and deploying the project in Docker, opening the corresponding website shows "404 Not Found". What could be causing this?
| closed | 2024-03-27T06:42:02Z | 2024-09-23T01:39:26Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/10 | [] | hxj0316 | 1 |
SALib/SALib | numpy | 256 | Unable to install v1.3.7 on Python 2.7 | Install of SALib-1.3.7 fails on Python 2.7. Based on a related discussion, I believe the following would solve the issue, but I am not familiar enough with the library to ensure compatibility:
https://github.com/pbrod/numdifftools/issues/37#issuecomment-395794866
```log
Collecting salib
Using cached https://files.pythonhosted.org/packages/12/8b/14f6c0f0a12b29d5e1766e7a585269cd6ec9728a63886c161a6eddb4e7fa/SALib-1.3.7.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/rv/spd1m5_524j35pnylts_bxvm0000gn/T/pip-install-w_1pbI/salib/setup.py", line 37, in <module>
setup_package()
File "/private/var/folders/rv/spd1m5_524j35pnylts_bxvm0000gn/T/pip-install-w_1pbI/salib/setup.py", line 33, in setup_package
use_pyscaffold=True)
File "/usr/local/lib/python2.7/site-packages/setuptools/__init__.py", line 144, in setup
_install_setup_requires(attrs)
File "/usr/local/lib/python2.7/site-packages/setuptools/__init__.py", line 139, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/usr/local/lib/python2.7/site-packages/setuptools/dist.py", line 717, in fetch_build_eggs
replace_conflicting=True,
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 782, in resolve
replace_conflicting=replace_conflicting
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1077, in obtain
return installer(requirement)
File "/usr/local/lib/python2.7/site-packages/setuptools/dist.py", line 784, in fetch_build_egg
return cmd.easy_install(req)
File "/usr/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 679, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 705, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 890, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1158, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1144, in run_setup
run_setup(setup_script, args)
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/usr/local/lib/python2.7/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/var/folders/rv/spd1m5_524j35pnylts_bxvm0000gn/T/easy_install-ozE0Ta/PyScaffold-3.0.3/setup.py", line 107, in <module>
File "/var/folders/rv/spd1m5_524j35pnylts_bxvm0000gn/T/easy_install-ozE0Ta/PyScaffold-3.0.3/setup.py", line 102, in setup_package
File "/var/folders/rv/spd1m5_524j35pnylts_bxvm0000gn/T/easy_install-ozE0Ta/PyScaffold-3.0.3/setup.py", line 76, in bootstrap_cfg
File "/var/folders/rv/spd1m5_524j35pnylts_bxvm0000gn/T/easy_install-ozE0Ta/PyScaffold-3.0.3/src/pyscaffold/utils.py", line 274, in check_setuptools_version
RuntimeError: Due to a bug in setuptools, PyScaffold currently needs at least Python 3.4! Install PyScaffold 2.5 for Python 2.7 support.
``` | closed | 2019-07-25T10:06:46Z | 2019-08-18T14:29:58Z | https://github.com/SALib/SALib/issues/256 | [] | dxdc | 4 |
litestar-org/litestar | pydantic | 3,899 | Bug: create_static_files_router with S3FS crashes due to unsupported fs info key (`mtime`) | ### Description
When using S3FS in a static router, [ASGIFileResponse](https://github.com/litestar-org/litestar/blob/f31ef97d6cb725bf9898f55abbb5150b36823f27/litestar/response/file.py#L220) uses the `mtime` key of `fs_info`, which is not consistently available across fsspec implementations (see: https://github.com/fsspec/filesystem_spec/issues/526).
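A defensive lookup along these lines sidesteps the `KeyError` (a sketch, not the actual Litestar patch; the `'LastModified'` fallback key is an assumption about what s3fs reports):

```python
from email.utils import formatdate

def last_modified_header(fs_info: dict):
    """Emit a Last-Modified value only when the filesystem reports one.

    Local filesystems expose 'mtime'; s3fs omits it (reporting
    'LastModified' instead), so fall back and finally return None
    rather than raising KeyError.
    """
    mtime = fs_info.get("mtime")
    if mtime is None:
        mtime = fs_info.get("LastModified")
    if mtime is None:
        return None
    if hasattr(mtime, "timestamp"):  # datetime object from s3fs
        mtime = mtime.timestamp()
    return formatdate(mtime, usegmt=True)

assert last_modified_header({}) is None
assert last_modified_header({"mtime": 0}) == "Thu, 01 Jan 1970 00:00:00 GMT"
```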
### URL to code causing the issue
_No response_
### MCVE
```python
# Script Dependencies:
# litestar
# s3fs
from s3fs import S3FileSystem
from litestar import Litestar
from litestar.static_files import create_static_files_router
BUCKET_NAME = "some-s3-bucket"
app = Litestar(route_handlers=[
create_static_files_router(
path="/",
directories=[f"/{BUCKET_NAME}/"],
html_mode=True,
file_system=S3FileSystem()
)
]
)
```
### Steps to reproduce
```bash
1. Seed an s3 bucket with an index.html file
2. Run `litestar run`
3. Go to `localhost:8000`
4. See error
```
### Screenshots
_No response_
### Logs
```bash
INFO: 127.0.0.1:50567 - "GET / HTTP/1.1" 500 Internal Server Error
ERROR - 2024-12-12 14:44:08,439 - litestar - config - Uncaught exception (connection_type=http, path=/):
Traceback (most recent call last):
File ".venv/lib/python3.12/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File ".venv/lib/python3.12/site-packages/litestar/_asgi/asgi_router.py", line 100, in __call__
await asgi_app(scope, receive, send)
File ".venv/lib/python3.12/site-packages/litestar/routes/http.py", line 84, in handle
await response(scope, receive, send)
File ".venv/lib/python3.12/site-packages/litestar/response/base.py", line 194, in __call__
await self.start_response(send=send)
File ".venv/lib/python3.12/site-packages/litestar/response/file.py", line 220, in start_response
self.headers.setdefault("last-modified", formatdate(fs_info["mtime"], usegmt=True))
~~~~~~~^^^^^^^^^
KeyError: 'mtime'
```
### Litestar Version
2.13.0
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-12-12T22:45:54Z | 2025-03-20T15:55:03Z | https://github.com/litestar-org/litestar/issues/3899 | [
"Bug :bug:"
] | thomastu | 3 |
sanic-org/sanic | asyncio | 2,554 | Support overwriting a route of blueprint from copy | **Feature Description**
I want to partially overwrite the implementation of routes when they belong to blueprints of different versions, but doing so raises `sanic_routing.exceptions.RouteExists`.
**Sample Code**
```python
from sanic import Blueprint
from sanic.response import json
from sanic import Sanic
app = Sanic('test')
bpv1 = Blueprint('bpv1', version=1)
@bpv1.route('/hello')
async def root(request):
return json('hello v1')
app.blueprint(bpv1)
bpv2 = bpv1.copy('bpv2', version=2)
@bpv2.route('/hello')
async def root(request):
return json('hello v2')
app.blueprint(bpv2)
```
**Current Solution**
I got this solution from the forum, and I was encouraged to post a new feature request based on it:
```python
bpv2 = bpv1.copy("bpv2", version=2)
bpv2._future_routes = {
route for route in bpv2._future_routes if route.uri != "/hello"
}
@bpv2.route("/hello")
async def root2(request):
return json("hello v2")
```
**Web Link**
https://community.sanicframework.org/t/how-to-overwrite-a-route-when-using-blueprint-copy/1067
| closed | 2022-09-28T09:23:42Z | 2023-07-07T11:56:44Z | https://github.com/sanic-org/sanic/issues/2554 | [
"idea discussion",
"feature request"
] | Tpinion | 5 |
gee-community/geemap | jupyter | 1,062 | geemap.ee_export_image problem. | Hi! Thanks for the great package and awesome work. I have a problem with `ee_export_image` that I will explain below.
### Environment Information
- geemap version: 0.11.0
- Python version: 3.9.8
- Operating System: Windows 10 x64
### Description
I want to download a decadal MVC (maximum value composite) of NDSI from the MOD10A1 product (days 1-10, 11-20, and 21 to the last day of each month). When I visualize the first image of the final collection of images, it is the correct one (filtered to take into account only NDSI pixels with more than 2 observations per decadal period, clipped, etc.). When I export the images from the collection, however, they are wrong: namely, the images from before filtering out pixels below the minimum number of observations.
### What I Did
```
import os
import ee
import datetime
from datetime import datetime, timedelta
import geemap
Map=geemap.Map()
pol3=ee.FeatureCollection('geometry of interest')
## a function to extract values from specific bits
def bitwiseExtract(input, fromBit, toBit):
maskSize=ee.Number(1).add(toBit).subtract(fromBit)
mask=ee.Number(1).leftShift(maskSize).subtract(1)
return input.rightShift(fromBit).bitwiseAnd(mask)
# A function to mask out low quality zones
def qa_mask(image):
# Select the QA band.
QA = image.select('NDSI_Snow_Cover_Basic_QA')
qaMask=bitwiseExtract(QA, 0, 15).lte(2)
mask=qaMask
return image.updateMask(mask)
## a function to keep values for count gte 2
def mask_count(image):
maskCount=image.select('NDSI_count').gte(2)
image=image.updateMask(maskCount)
return image.select('NDSI_max')
###function to count the number of bands to omit images with 0 bands
def nb_bands(image):
return image.set('numar',image.bandNames().size())
###GLS water dataset to filter out all regions that are covered with lakes, pond, big rivers etc.
water_dataset_orig = ee.ImageCollection('GLCF/GLS_WATER').select('water').min()
# Remap values.
water_dataset = ee.Image(1)\
.where(water_dataset_orig.lte(1), 0)\
.where(water_dataset_orig.eq(2), 1)\
.where(water_dataset_orig.gte(3),0);
###Mask for NDSI values greater than 30
def masker(image):
test=water_dataset.select('constant').eq(0)
test2=image.updateMask(test)
mask1 = test2.select('NDSI_Snow_Cover').gt(30)
return image.updateMask(mask1)
#Create NDSI
def func_dko(image):
return image.select('NDSI_Snow_Cover').rename('NDSI').clip(pol3).copyProperties(image,['system:time_start','system:time_end'])
col1 = ee.ImageCollection('MODIS/006/MOD10A1').map(qa_mask)
NDSI = col1.map(masker).map(func_dko)
###variables for the decadal composite image
#define starting and ending dates
startyear = 2002
endyear = 2003
mapYears = ee.List.sequence(startyear, endyear)
startmonth = 6
endmonth =1
mapMonths = ee.List.sequence(1, 12)
start = ee.Date.fromYMD(startyear, startmonth, 1)
end = ee.Date.fromYMD(endyear, endmonth+1, 1); # end date is always exclusive
# define list of dates to filter on
startdays = ee.List([1, 11, 21])
enddays = ee.List([11, 21, 1])
# Create a sequence of numbers, one for each time interval.
sequence = ee.List.sequence(0, 2)
def collect(collection, start, end, mapYears, mapMonths):
def func_years(year):
def func_months(month):
def func_sequence(dayRange):
startDate = ee.Date.fromYMD(year, month, startdays.get(dayRange))
# the end date needs some if statements to be well formatted
endDate = ee.Date(ee.Algorithms.If( ee.Number(dayRange).eq(2), \
ee.Algorithms.If(ee.Number(month).eq(12), \
ee.Date.fromYMD(ee.Number(year).add(1), 1, enddays.get(dayRange)),\
ee.Date.fromYMD(year, ee.Number(month).add(1), enddays.get(dayRange))),\
ee.Date.fromYMD(year, month, enddays.get(dayRange))))
return collection.filterDate(startDate, endDate).reduce(ee.Reducer.max().combine(reducer2=ee.Reducer.count(), sharedInputs=True)).set('system:time_start', startDate.millis()).set('system:time_end', endDate.millis()).set('numbImages',collection.size())
images2 = sequence.map(func_sequence)
return images2
images1 = mapMonths.map(func_months)
return images1
images=mapYears.map(func_years).flatten()
collection = ee.ImageCollection.fromImages(images)
return collection.filterDate(start, end).set('system:time_start', start.millis()).set('system:time_end', end.millis())
##applying the function defined above
colectie_decad_init=collect(NDSI,start,end,mapYears,mapMonths)
colectie_decad_number=colectie_decad_init.map(nb_bands)
colectie_decad=colectie_decad_number.filter(ee.Filter.gte("numar",2))
colectie_final=colectie_decad.map(mask_count)
##visualize data for confirmation
final_list=colectie_final.toList(200)
image_1=ee.Image(final_list.get(0))
date_image_1 = image_1.date().format('dd-MM-yyyy').getInfo()
print(date_image_1 )
ndviParams = {'min': -1, 'max':100, 'palette': ['8B0000','FF0000', 'FF4500', 'FFFF00', '00FF00','008000', '006400']}
Map.addLayer(ee.Image(final_list.get(0)), ndviParams, 'NDSI image')
Map
##final step, export your images
out_dir = os.path.expanduser("Downloads_new")
if not os.path.exists(out_dir):
os.makedirs(out_dir)
for i in range(0, 200, 1):
image_2=ee.Image(final_list.get(i))
date = image_2.date().format('dd-MM-yyyy').getInfo()
name1= 'SnowDecad'+'_'+date+'.tif'
filename1 = os.path.join(out_dir, name1)
print(name1)
geemap.ee_export_image(image_2, filename=filename1, scale=500,region=pol3.geometry(), crs='EPSG:3035')
```
I think something is wrong here, at the export step. I checked and rechecked the variables and the code. I also tested the code on the **Earth Engine web platform** and with **ee.batch.Export.image.toDrive**, and the results are correct.
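Separately from the export bug itself, here is a pure-Python sanity check of the `bitwiseExtract` helper used above (plain ints instead of `ee.Number`; the values are illustrative):

```python
def bitwise_extract(value: int, from_bit: int, to_bit: int) -> int:
    """Plain-int mirror of the ee bitwiseExtract helper above."""
    mask_size = to_bit - from_bit + 1
    mask = (1 << mask_size) - 1
    return (value >> from_bit) & mask

# Basic QA occupies bits 0-15; the code above keeps pixels with QA <= 2.
assert bitwise_extract(0b10, 0, 15) == 2
assert bitwise_extract(0b110100, 2, 4) == 0b101  # bits 2..4 of 0b110100
```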
Thank you for everything!
| closed | 2022-05-18T08:30:43Z | 2022-05-19T14:21:26Z | https://github.com/gee-community/geemap/issues/1062 | [
"bug"
] | georgeboldeanu | 12 |
ultralytics/ultralytics | python | 19,466 | Pausing and Resuming Model Training in YOLO | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi everyone,
Thank you for all the amazing work you do! I know Glenn Jocher will likely answer this, as he’s always been a pioneer in providing solutions. :)
My question is: **Is it possible to pause and resume the training process in YOLO?**
Here’s the scenario:
- I have a training process with 300 epochs.
- Due to limited GPU resources (e.g., I'm using Kaggle, which provides 30 hours of free GPU per week, with a 12-hour limit per session), I need to pause training at, say, epoch 40, and then resume it later.
I'm aware that I can save the best model checkpoint during training and use it as a base for a new training session. However, the issue is that when I use the best model as a base, the scores often drop significantly in the new training session, and it takes a long time to recover. This is especially problematic when my dataset is complex and each epoch takes about 1 hour to train.
By the time the model recovers from the drop, I’ve already spent a lot of time and resources, which feels inefficient.
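For the drop-after-restart problem: resuming from `last.pt` with `resume=True` (the documented Ultralytics pattern) restores the optimizer and LR-scheduler state, whereas starting a fresh run from `best.pt` does not, which is likely why the scores dip. The mechanism, as a dependency-free sketch (file names only mirror YOLO's `last.pt` convention):

```python
import json
import os
import tempfile

# Toy checkpoint/resume: persist the epoch and training state, then
# pick up where training left off instead of starting over.
def save_ckpt(path: str, epoch: int, state: dict) -> None:
    with open(path, "w") as f:
        json.dump({"epoch": epoch, "state": state}, f)

def resume_ckpt(path: str):
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["epoch"] + 1, ckpt["state"]  # resume at the next epoch

path = os.path.join(tempfile.mkdtemp(), "last.json")
save_ckpt(path, 40, {"lr": 0.01, "momentum": 0.937})
start_epoch, state = resume_ckpt(path)
assert (start_epoch, state["lr"]) == (41, 0.01)
```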
I hope I’ve explained my challenge clearly. If there’s a solution or a better approach to handle this, I’d greatly appreciate your advice!
Thank you in advance!
### Additional
_No response_ | closed | 2025-02-27T19:19:15Z | 2025-02-28T14:54:15Z | https://github.com/ultralytics/ultralytics/issues/19466 | [
"question"
] | AISoltani | 4 |
flasgger/flasgger | api | 327 | Feature request: support `pydantic` schema | I recently found that [fastapi](https://github.com/tiangolo/fastapi) has very nice OpenAPI support with the help of [pydantic](https://github.com/samuelcolvin/pydantic) (Python 3.6+), so people can declare schemas with Python types, which is very convenient.
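For context, the appeal is deriving the OpenAPI schema purely from annotations. A dependency-free sketch of that idea (this is not flasgger or pydantic API, just the mechanism):

```python
from dataclasses import dataclass, fields

@dataclass
class Pet:
    name: str
    age: int

TYPE_MAP = {str: "string", int: "integer"}

def to_openapi_schema(cls) -> dict:
    """Derive an OpenAPI object schema from the class annotations alone."""
    return {
        "type": "object",
        "properties": {f.name: {"type": TYPE_MAP[f.type]} for f in fields(cls)},
        "required": [f.name for f in fields(cls)],
    }

assert to_openapi_schema(Pet) == {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}
```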
I would like to know if this library will support pydantic in the future. Or is there something I can do to help? | closed | 2019-08-16T08:21:29Z | 2022-02-17T11:22:26Z | https://github.com/flasgger/flasgger/issues/327 | [] | kemingy | 2 |
hbldh/bleak | asyncio | 731 | BleakClient accessing attribute that does not exist | * bleak version: 0.14.0
* Python version: 3.10.1
* Operating System: Windows 10
I'm trying to debug some Windows connection issues. I noticed that if I supply `BleakClient` with a device like this:
```python
import os
os.environ["BLEAK_LOGGING"] = "1"
import asyncio
from bleak import BleakClient, BleakScanner
address = "AA:AA:AA:AA:AA:AA" # YOUR MAC ADDRESS
async def run(address):
device = await BleakScanner.find_device_by_address(address)
async with BleakClient(device) as client:
await client.connect()
asyncio.run(run(address))
```
Then I get the following error:
```
Traceback (most recent call last):
File "C:\somepath\windowstest.py", line 15, in <module>
asyncio.run(run(address))
File "C:\Program Files\Python310\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Program Files\Python310\lib\asyncio\base_events.py", line 641, in run_until_complete
return future.result()
File "C:\somepath\windowstest.py", line 12, in run
async with BleakClient(device) as client:
File "C:\somepath\venv\lib\site-packages\bleak\backends\winrt\client.py", line 129, in __init__
address_or_ble_device.address.details.adv.bluetooth_address
AttributeError: 'str' object has no attribute 'details'
```
However if I change this to use the address directly:
```python
import os
os.environ["BLEAK_LOGGING"] = "1"
import asyncio
from bleak import BleakClient, BleakScanner
address = "AA:AA:AA:AA:AA:AA" # YOUR MAC ADDRESS
async def run(address):
async with BleakClient(address) as client:
print(client)
asyncio.run(run(address))
```
Then it works. It seems like `address_or_ble_device.address` is just a string and the desired property is actually at `address_or_ble_device.details.adv.bluetooth_address`. | closed | 2022-01-11T17:21:48Z | 2022-01-12T21:12:50Z | https://github.com/hbldh/bleak/issues/731 | [
"Backend: WinRT"
] | rhyst | 1 |
pytorch/vision | machine-learning | 8,034 | Need to update Efficientnet weight | As reported by @ar0ck in https://github.com/pytorch/vision/issues/7744#issuecomment-1754154799, the EfficientNet weights still have the wrong hash. | closed | 2023-10-10T08:36:25Z | 2023-10-11T10:13:55Z | https://github.com/pytorch/vision/issues/8034 | [
"bug",
"module: models"
] | NicolasHug | 0 |
Guovin/iptv-api | api | 431 | Don't append $1920x1080 after the links | Please don't append suffixes like $1920x1080 or $LR•IPV4『线路28』|1920x1080 to the links; some players don't support them, which makes the streams unplayable. | closed | 2024-10-22T03:13:08Z | 2024-10-25T09:10:32Z | https://github.com/Guovin/iptv-api/issues/431 | [
"enhancement"
] | wuyihuai | 3 |
litestar-org/litestar | pydantic | 3,058 | Bug: Response examples are not generated even with `generate_examples=True` | ### Description
It's unclear how to enable/disable example generation for responses.
Is it on by default or not? `Parameter` defaults to `generate_examples=True`, but no examples appear by default.
Yet in a larger app I do get examples, and I don't seem to understand how to reproduce the same in an MCVE.
(Is this mentioned anywhere in the docs?)
### URL to code causing the issue
_No response_
### MCVE
```python
import json
from litestar import Litestar, post
from litestar.openapi import ResponseSpec
from pydantic import BaseModel
class Response(BaseModel):
text: str
num: int
@post("/", responses={201: ResponseSpec(Response, generate_examples=True)})
def endpoint() -> Response:
return Response(text="hello", num=1)
app = Litestar(route_handlers=[endpoint])
print(json.dumps(app.openapi_schema.to_schema(), indent=4))
```
The schema:
```json
"components": {
"schemas": {
"Response": {
"properties": {
"text": {
"type": "string"
},
"num": {
"type": "integer"
}
},
"type": "object",
"required": [
"num",
"text"
],
"title": "Response"
}
}
}
```
No `"examples"` there.
### Steps to reproduce
```bash
1. `python app.py`
2. See the schema
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.5.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-02-01T15:18:28Z | 2025-03-20T15:54:23Z | https://github.com/litestar-org/litestar/issues/3058 | [
"Bug :bug:"
] | tuukkamustonen | 5 |
keras-team/keras | machine-learning | 20,285 | Possible typo in the "Transfer learning & fine-tuning" guide | I'm not sure if this is an issue; for now it is an observation.
Reading the guide about [Transfer learning & fine-tuning](https://github.com/keras-team/keras/blob/master/guides/transfer_learning.py), I had some trouble understanding the following statement: https://github.com/keras-team/keras/blob/38db71acb207fb1dda380a40df68636e45d46174/guides/transfer_learning.py#L234-L237
`base_model` is part of a Functional model that is trained with the `fit` method. It invokes the `call` method of the model with `training=True`: https://github.com/keras-team/keras/blob/38db71acb207fb1dda380a40df68636e45d46174/keras/src/backend/tensorflow/trainer.py#L51
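The precedence being described can be paraphrased as a tiny truth table (pure Python; this mirrors the `BatchNormalization` condition under discussion, not Keras source):

```python
def bn_runs_in_training_mode(trainable: bool, training) -> bool:
    """BatchNormalization only updates its moving statistics when the
    call-time training flag AND the layer's trainable flag both hold."""
    return bool(training) and trainable

assert bn_runs_in_training_mode(trainable=False, training=True) is False  # frozen base_model
assert bn_runs_in_training_mode(trainable=True, training=True) is True    # after unfreezing
assert bn_runs_in_training_mode(trainable=True, training=None) is False
```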
This overrides the previous `training=False`. I tested it using [pdb](https://docs.python.org/3/library/pdb.html) and Google Colab. [Here](https://colab.research.google.com/drive/1gk383gklxOdUKUZsfn6eJm53UsCrkyBa?usp=sharing) the IPython notebook. As you can see in the last cell of the notebook `training` is `True` but the `BatchNormalization` layer (included in the Xception model) works in **inference mode** because `self.trainable` is `False`: https://github.com/keras-team/keras/blob/38db71acb207fb1dda380a40df68636e45d46174/keras/src/layers/normalization/batch_normalization.py#L254-L266
But after unfreezing the `base_model`: https://github.com/keras-team/keras/blob/38db71acb207fb1dda380a40df68636e45d46174/guides/transfer_learning.py#L531-L536
`BatchNormalization` layer works in **training mode**. | closed | 2024-09-24T23:02:56Z | 2024-09-30T04:21:12Z | https://github.com/keras-team/keras/issues/20285 | [
"type:support"
] | miticollo | 6 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 504 | setting db on the fly | Wanted to ask if there is something in the library to let me change the database connection to point to a different db on the fly? The thing I'm trying to do is from a POST request I use the credentials to connect to the database. | closed | 2017-06-09T21:41:27Z | 2020-12-05T20:55:49Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/504 | [] | mtung2017 | 6 |
zappa/Zappa | flask | 436 | [Migrated] Pass Additional Arguments to Zappa Manage | Originally from: https://github.com/Miserlou/Zappa/issues/1136 by [lifeignite](https://github.com/lifeignite)
If a column or some data could be changed or deleted while migrating, Django warns like below.
> Any objects related to these content types by a foreign key will also
> be deleted. Are you sure you want to delete these content types?
> If you're unsure, answer 'no'.
>
> Type 'yes' to continue, or 'no' to cancel:
But as far as I know, I cannot input yes or no through Zappa. Django provides a `--noinput` option, but it doesn't work here, because that option selects 'no' automatically.
I want to migrate using `zappa manage migrate`. what should I do? | closed | 2021-02-20T08:32:54Z | 2024-04-13T16:17:49Z | https://github.com/zappa/Zappa/issues/436 | [
"enhancement",
"help wanted",
"hacktoberfest",
"no-activity",
"auto-closed"
] | jneves | 2 |
wemake-services/django-test-migrations | pytest | 109 | Add more DB configuration checks | In #91 we are introducing new Django checks that validate `system timeout` settings on the following databases:
+ `postgresql` -`statement_timeout`
+ `mysql` - `max_execution_time`
The idea behind this group of checks is to help developers configure databases according to best practices.
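As an illustration of the kind of check in question (function name, message ID, and settings shape are made up, not the library's actual API):

```python
def check_statement_timeout(database_settings: dict) -> list:
    """Warn when a PostgreSQL connection sets no statement_timeout."""
    options = database_settings.get("OPTIONS", {}).get("options", "")
    if "statement_timeout" not in options:
        return ["W001: set statement_timeout so runaway queries get cancelled"]
    return []

assert check_statement_timeout({}) == [
    "W001: set statement_timeout so runaway queries get cancelled"
]
assert check_statement_timeout(
    {"OPTIONS": {"options": "-c statement_timeout=5000"}}
) == []
```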
If you have any ideas for such rules/checks, please share them in the comments on this issue!
Some nice articles/sites about database configuration/settings:
+ https://postgresqlco.nf/en/doc/param/ | open | 2020-07-24T13:14:17Z | 2020-07-25T07:25:17Z | https://github.com/wemake-services/django-test-migrations/issues/109 | [] | skarzi | 0 |
RafaelMiquelino/dash-flask-login | dash | 5 | current_user.is_authenticated returns false in deployment | Hello,
Thanks for making this repository.
I have been using it successfully on localhost, but as soon as I deploy it on a hosted server, the user authentication stops behaving. When the user logs in, it registers that the user is authenticated, but within less than 1 s the boolean `current_user.is_authenticated` is set to `False`.
I have tried everything, and this problem is consistent both for my code (which includes the code from this repository) and when this repository itself is put on a server and run.
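One classic cause of exactly this symptom (a guess, not confirmed from the report): each server worker generating its own `SECRET_KEY`, so a session cookie signed by one worker fails validation when the next request lands on another. A stdlib illustration of the mechanism:

```python
import hashlib
import hmac
import os

def sign(payload: bytes, key: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

worker_a_key, worker_b_key = os.urandom(16), os.urandom(16)  # per-process keys
cookie = sign(b"user=42", worker_a_key)

# The next request may hit worker B, whose key differs -> session rejected:
assert sign(b"user=42", worker_b_key) != cookie
# Pinning one shared SECRET_KEY keeps the login valid on every worker:
assert sign(b"user=42", worker_a_key) == cookie
```

If that's the cause, setting one fixed `SECRET_KEY` (e.g. from an environment variable) on the Flask server should stop the immediate logouts.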
Thanks and all the best,
Max H | closed | 2020-06-04T07:39:04Z | 2020-06-09T09:23:49Z | https://github.com/RafaelMiquelino/dash-flask-login/issues/5 | [] | max454545 | 12 |
piskvorky/gensim | machine-learning | 3,165 | Streaming instead of online LDA | The current implementation of LDA in gensim is not actually well suited for streaming.
Here is an interesting [publication](http://proceedings.mlr.press/v37/theis15.pdf):
Theis, Lucas, and Matt Hoffman. "A trust-region method for stochastic variational inference with applications to streaming data." International Conference on Machine Learning. PMLR, 2015.
All LDA implementations can be found here: https://github.com/lucastheis/trlda/ | open | 2021-06-06T14:04:15Z | 2021-06-06T14:07:33Z | https://github.com/piskvorky/gensim/issues/3165 | [] | jonaschn | 0 |
ageitgey/face_recognition | python | 1,348 | One column per each feature | Can I use this method for storage? How can I achieve it?

[Original link](https://ardas-it.com/comparing-3-ways-to-store-faces-when-developing-facial-recognition-search) | open | 2021-07-21T07:40:55Z | 2021-07-21T07:40:55Z | https://github.com/ageitgey/face_recognition/issues/1348 | [] | Flour-MO | 0 |
akfamily/akshare | data-science | 5,689 | AKShare interface issue report | stock_zh_a_hist raises an error | akshare version 1.16.3
df = ak.stock_zh_a_hist("300114", period='daily', start_date="20200101", end_date="20250218", adjust='qfq')
site-packages\akshare\stock_feature\stock_hist_em.py:1049, in stock_zh_a_hist(symbol, period, start_date, end_date, adjust, timeout)
1041 period_dict = {"daily": "101", "weekly": "102", "monthly": "103"}
1042 url = "https://push2his.eastmoney.com/api/qt/stock/kline/get"
1043 params = {
1044 "fields1": "f1,f2,f3,f4,f5,f6",
1045 "fields2": "f51,f52,f53,f54,f55,f56,f57,f58,f59,f60,f61,f116",
1046 "ut": "7eea3edcaed734bea9cbfc24409ed989",
1047 "klt": period_dict[period],
1048 "fqt": adjust_dict[adjust],
-> 1049 "secid": f"{code_id_dict[symbol]}.{symbol}",
1050 "beg": start_date,
1051 "end": end_date,
1052 "_": "1623766962675",
1053 }
1054 r = requests.get(url, params=params, timeout=timeout)
1055 data_json = r.json()
KeyError: '300114' | closed | 2025-02-18T06:37:42Z | 2025-02-18T08:46:03Z | https://github.com/akfamily/akshare/issues/5689 | [
"bug"
] | caihua | 0 |