| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
erdewit/ib_insync | asyncio | 321 | consistently retrieving last price | I see 'last' price cannot be retrieved after market close. I tried reqMktData(233), reqTickers() and even reqTickByTickData("AllLast"). I cannot use 'close' because it is close of the previous day and IB seems to be updating close way later as it waits for corp action processing. For instance, as of Saturday 4pm the close is still reflecting Thursday close.
Example contract: Contract(secType='CONTFUT', conId=383974339, symbol='ES', lastTradeDateOrContractMonth='20201218', multiplier='50', exchange='GLOBEX', currency='USD', localSymbol='ESZ0', tradingClass='ES')
I suppose I could use reqHistoricalTicks as a backup, but it seems weird that the most obvious field is not available with a simple call. I suppose this is all on the IB side. Is there any straightforward way to fetch the last price consistently at all times?
| closed | 2020-12-05T21:16:20Z | 2020-12-13T10:27:18Z | https://github.com/erdewit/ib_insync/issues/321 | [] | satyan-g | 2 |
Miserlou/Zappa | django | 2,227 | Lambda functions with s3 event sources are publicly accessible | <!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.6/3.7/3.8 -->
AWS Security Hub flags Zappa-deployed Lambda functions with an S3 event source as allowing public access.
```
PCI.Lambda.1 Lambda functions should prohibit public access
CRITICAL:
This AWS control checks whether the Lambda function policy attached to the Lambda resource prohibits public access.
Related requirements: PCI DSS 1.2.1, PCI DSS 1.3.1, PCI DSS 1.3.2, PCI DSS 1.3.4, PCI DSS 7.2.1
For directions on how to fix this issue, consult the AWS Security Hub PCI DSS documentation.
https://docs.aws.amazon.com/console/securityhub/PCI.Lambda.1/remediation
```
## Expected Behavior
<!--- Tell us what should happen -->
While I'm not sure this satisfies cases where there are multiple AWS accounts involved, it seems to me the default behavior should be to create private lambda functions by including the AWS:SourceAccount in the lambda resource policy conditions as shown in my steps to reproduce below.
## Actual Behavior
<!--- Tell us what happens instead -->
Zappa creates lambdas that can be invoked by anyone in control of the s3 bucket leading to AWS Security Hub flagging a security finding.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
Since S3 buckets are involved and bucket names are global, you'll need to edit the references to the S3 bucket name in the steps below.
1. Create a zappa_settings.json file as below
```
{
"test": {
"app_function": "my_project_name.lambda_handler",
"aws_region": "us-west-2",
"project_name": "my_project_name",
"timeout_seconds": 900,
"apigateway_enabled": false,
"runtime": "python3.8",
"keep_warm": false,
"s3_bucket": "zappa-my-project-name",
"events": [
{
"function": "lambda_handler",
"event_source": {
"arn": "arn:aws:s3:::zappa-my-project-name",
"events": [
"s3:ObjectCreated:*"
]
}
}
]
}
}
```
2. create a lambda function file named lambda_handler.py with the following content:
```
def lambda_handler(event, context):
pass
```
3. Run `zappa deploy test`
4. Open the resource policy in the AWS console by navigating to the Lambda function / Configuration / Permissions / Resource policy
5. Here's an example policy:
```
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "redacted",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-west-2:redacted:function:my-project-name-test",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:s3:::zappa-my-project-name"
}
}
}
]
}
```
6. Note that the policy conditions only check whether the principal is s3.amazonaws.com. This means anyone in control of the S3 bucket in the event source can trigger your Lambda function. For example, if you were to delete the bucket, someone else could create a bucket with the same name, drop an object in it, and trigger your Lambda.
7. If we add the AWS account ID as a condition, the function is no longer publicly invocable, and AWS Security Hub is satisfied:
```
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "redacted",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-west-2:redacted:function:my-project-name-test",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "575985943108"
},
"ArnLike": {
"AWS:SourceArn": "arn:aws:s3:::zappa-my-project-name"
}
}
}
]
}
```
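Mechanically, the tightening shown above is just a dict transformation on the policy document. A minimal sketch of that transformation (the function name and account ID are made up for illustration; this is not Zappa's code):

```python
import json


def restrict_to_account(policy, account_id):
    """Add an AWS:SourceAccount condition to every statement of a
    Lambda resource policy, returning a patched deep copy."""
    patched = json.loads(json.dumps(policy))  # cheap deep copy via JSON round-trip
    for stmt in patched.get("Statement", []):
        cond = stmt.setdefault("Condition", {})
        cond.setdefault("StringEquals", {})["AWS:SourceAccount"] = account_id
    return patched


policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "lambda:InvokeFunction",
        "Condition": {
            "ArnLike": {"AWS:SourceArn": "arn:aws:s3:::zappa-my-project-name"}
        },
    }],
}

patched = restrict_to_account(policy, "123456789012")
print(patched["Statement"][0]["Condition"]["StringEquals"])
```

The existing `ArnLike` condition is preserved and only the `StringEquals` block is added, which is exactly the difference between the two example policies above.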
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.53.0
* Operating System and Python version: Ubuntu 20.04, python 3.8.10
* The output of `pip freeze`: argcomplete==1.12.3
boto3==1.18.42
botocore==1.21.42
certifi==2021.5.30
cfn-flip==1.2.3
charset-normalizer==2.0.5
click==8.0.1
durationpy==0.5
future==0.18.2
hjson==3.0.2
idna==3.2
jmespath==0.10.0
kappa==0.6.0
pep517==0.11.0
pip-tools==6.2.0
placebo==0.9.0
python-dateutil==2.8.2
python-slugify==5.0.2
PyYAML==5.4.1
requests==2.26.0
s3transfer==0.5.0
six==1.16.0
text-unidecode==1.3
toml==0.10.2
tomli==1.2.1
tqdm==4.62.2
troposphere==3.0.3
urllib3==1.26.6
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.53.0
* Link to your project (optional):
* Your `zappa_settings.json`:
See steps to reproduce | closed | 2021-09-16T14:43:06Z | 2021-09-16T14:49:19Z | https://github.com/Miserlou/Zappa/issues/2227 | [] | bruceduhamel | 1 |
lepture/authlib | django | 210 | httpx content_stream module import failure | **Describe the bug**
I started getting this error yesterday, and it is blocking me from deploying a new version of my code. I don't know how this `content_streams` import has been working for me so far, but the module name seems to be `_content_streams`. https://github.com/encode/httpx/blob/master/httpx/_content_streams.py
**Error Stacks**
```
2020-03-20T13:52:18.914-07:00 [APP/PROC/WEB/8] [ERR] from authlib.integrations.starlette_client import OAuth
2020-03-20T13:52:18.914-07:00 [APP/PROC/WEB/8] [ERR] File "/home/vcap/deps/0/python/lib/python3.7/site-packages/authlib/integrations/starlette_client/__init__.py", line 4, in <module>
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] from .integration import StartletteIntegration, StarletteRemoteApp
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] File "/home/vcap/deps/0/python/lib/python3.7/site-packages/authlib/integrations/starlette_client/integration.py", line 2, in <module>
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] from ..httpx_client import AsyncOAuth1Client, AsyncOAuth2Client
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] File "/home/vcap/deps/0/python/lib/python3.7/site-packages/authlib/integrations/httpx_client/__init__.py", line 9, in <module>
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] from .oauth1_client import OAuth1Auth, AsyncOAuth1Client
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] File "/home/vcap/deps/0/python/lib/python3.7/site-packages/authlib/integrations/httpx_client/oauth1_client.py", line 11, in <module>
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] from .utils import extract_client_kwargs, rebuild_request
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] File "/home/vcap/deps/0/python/lib/python3.7/site-packages/authlib/integrations/httpx_client/utils.py", line 2, in <module>
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] from httpx.content_streams import ByteStream
2020-03-20T13:52:18.915-07:00 [APP/PROC/WEB/8] [ERR] ModuleNotFoundError: No module named 'httpx.content_streams'
```
**To Reproduce**
import - `from authlib.integrations.starlette_client import OAuth`
**Expected behavior**
The imported module name should be accurate.
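Until a release pins compatible versions, the usual workaround shape is a fallback import that tries both module names. Sketched here with a generic stdlib-only helper so it runs anywhere (the httpx module names come from this report; I haven't verified them against every httpx release):

```python
import importlib


def import_first(*names):
    """Return the first importable module from a list of candidate names."""
    errors = []
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError as exc:
            errors.append(str(exc))
    raise ImportError("none importable: " + "; ".join(errors))


# The authlib case would look like:
#   streams = import_first("httpx.content_streams", "httpx._content_streams")
# Demonstrated below with stdlib names so the example is self-contained:
mod = import_first("definitely_not_a_module", "json")
print(mod.__name__)  # json
```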
**Environment:**
- OS: Linux
- Python Version: 3.5+
- Authlib Version: 0.14.1
| closed | 2020-03-20T21:14:06Z | 2020-04-25T02:42:35Z | https://github.com/lepture/authlib/issues/210 | [
"bug"
] | pdiwan | 4 |
wandb/wandb | tensorflow | 8,930 | [Bug]: WandB Stuck When Fetching Artifacts | #### Description
The process becomes unresponsive while fetching artifacts using the WandB API. The issue occurs specifically during the `api.artifact` call.
- Authentication has been successfully set up using the API token.
- No error messages or timeouts are encountered.
- The last observed output is:
```
- Retrieving metadata for me/project/test-results:v{i}
```
indicating that it gets stuck during the `api.artifact` function. I let it run for 30+ minutes without any progress and no indication that it is downloading big files. The expected artifact is ~0.5MB
#### Code to Reproduce
```python
from wandb import Api
api = Api()
for i in range(1, 5):
print(f"- Retrieving metadata for me/project/test-results:v{i}")
artifact = api.artifact(f'me/project/test-results:v{i}', type='test-results')
print("- Metadata successfully retrieved")
artifact_dir = artifact.download()
print(f"- Artifact downloaded successfully to directory: {artifact_dir}")
```
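As a debugging step (not a fix), the blocking call can be wrapped so it fails fast instead of hanging indefinitely. A generic stdlib-only sketch; `fetch` here is a stand-in for the `api.artifact(...)` call:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError


def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a worker thread; raise TimeoutError if it takes too long.

    Note: the worker thread itself is not killed -- this only stops the
    caller from blocking forever, which is enough for debugging."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout_s)
    finally:
        pool.shutdown(wait=False)


def fetch():          # stand-in for api.artifact(f'me/project/test-results:v{i}')
    time.sleep(0.5)   # simulate the hung metadata request
    return "artifact"


timed_out = False
try:
    call_with_timeout(fetch, timeout_s=0.1)
except TimeoutError:
    timed_out = True
print("timed out:", timed_out)
```

With a wrapper like this, the loop over versions can log which exact call hangs and retry, rather than sitting silently for 30+ minutes.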
#### Environment
- WandB SDK version: 0.18.7
- Python version: Python 3.8.20
- Operating system: MB Pro M3, 15.1.1
#### Additional Notes
Any advice on what might be causing this issue or further debugging steps would be appreciated. | closed | 2024-11-21T08:32:51Z | 2024-11-27T19:29:05Z | https://github.com/wandb/wandb/issues/8930 | [
"ty:bug",
"c:artifacts"
] | lumoe | 3 |
encode/databases | asyncio | 186 | Support nosql databases | Right now core is coupled with sqlalchemy and expects an SQL database for the backend. This prevents the ability to implement a non-sql DB backend. What I propose is to move the [_build_query](https://github.com/encode/databases/blob/master/databases/core.py#L275) out of core and into a SQL DB layer that can wrap existing sql backend implementations this would make core truly agnostic to the backend and enable support for nosql backends.
FWIW, my company would like to use this database abstraction to implement a neo4j backend so we can standardize on our API for our sql and neo4j transaction management. I could take a stab at doing this PR if you guys are interested in supporting nosql databases or if you only want this tool to be used for SQL DBs then we can just close this issue. | closed | 2020-04-08T17:30:26Z | 2021-09-12T08:44:34Z | https://github.com/encode/databases/issues/186 | [] | nikordaris | 1 |
numpy/numpy | numpy | 27,861 | Dropping Python 3.10 support. | We are scheduled to drop Python 3.10 support in NumPy 2.3. I will make a PR to get started on that, but
have noticed a few issues I have questions about:
@ngoldbaum `numpy/_core/src/multiarray/stringdtype/static_string.c` has a 3.10 workaround.
@seberg `numpy/_core/include/numpy/numpyconfig.h` NPY_FEATURE_VERSION needs update.
@r-devulap `linux_simd.yml` could probably use an update/rework.
@mattip can PyPy support 3.11 yet?
| closed | 2024-11-26T19:59:16Z | 2024-12-04T23:52:45Z | https://github.com/numpy/numpy/issues/27861 | [
"17 - Task"
] | charris | 4 |
blacklanternsecurity/bbot | automation | 1,501 | Don't add subnets to whitelist + blacklist if their parent is already included | Feature + tests. | closed | 2024-06-26T13:14:32Z | 2024-06-26T20:10:18Z | https://github.com/blacklanternsecurity/bbot/issues/1501 | [
"enhancement"
] | TheTechromancer | 1 |
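The subnet dedup requested in the BBOT issue above boils down to a supernet check, sketchable with the standard library's `ipaddress` module (this is a minimal illustration, not BBOT's actual code): a network is skipped when an already-included network is its parent, and previously added children are dropped when their parent arrives later.

```python
import ipaddress


def add_targets(targets):
    """Keep only networks whose parent isn't already in the list."""
    kept = []
    for t in targets:
        net = ipaddress.ip_network(t, strict=False)
        # skip if an already-kept network of the same IP version contains it
        if any(net.subnet_of(existing) for existing in kept
               if existing.version == net.version):
            continue
        # drop previously-kept children of the new, wider network
        kept = [e for e in kept
                if not (e.version == net.version and e.subnet_of(net))]
        kept.append(net)
    return kept


nets = add_targets(["10.0.0.0/8", "10.1.0.0/16", "192.168.1.0/24"])
print([str(n) for n in nets])  # ['10.0.0.0/8', '192.168.1.0/24']
```

The same logic works regardless of the order the subnets arrive in, which is the tricky part a naive "is it already present?" check misses.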
chaos-genius/chaos_genius | data-visualization | 594 | [Feature] Add the product version in the app | Acceptance Criteria:
Add product version in the APP.
| closed | 2022-01-17T07:17:32Z | 2022-01-24T12:43:31Z | https://github.com/chaos-genius/chaos_genius/issues/594 | [] | ChartistDev | 2 |
aimhubio/aim | data-visualization | 2,436 | Runs hanging "in progress" & can't `aim runs close` due to IO Error: While lock file | ## 🐛 Bug
Most recent run in Aim always hangs "In progress" until another run is started. When trying to force close the run using `aim runs close e23474f7c433427ab61ec693` I get the stack trace:
```
Closing runs: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/user/miniconda3/envs/tf/bin/aim", line 8, in <module>
sys.exit(cli_entry_point())
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/aim/cli/runs/commands.py", line 179, in close_runs
for _ in tqdm.tqdm(
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/user/miniconda3/envs/tf/lib/python3.9/multiprocessing/pool.py", line 870, in next
raise value
File "/home/user/miniconda3/envs/tf/lib/python3.9/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/aim/cli/runs/commands.py", line 175, in close_run
index_manager.index(run_hash)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/aim/sdk/index_manager.py", line 116, in index
meta_tree = self.repo.request_tree('meta', run_hash, read_only=False).subtree('meta')
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/aim/sdk/repo.py", line 313, in request_tree
return self.request(name, sub, read_only=read_only, from_union=from_union).tree()
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/aim/sdk/repo.py", line 339, in request
container = self._get_container(path, read_only=False, from_union=False)
File "/home/user/miniconda3/envs/tf/lib/python3.9/site-packages/aim/sdk/repo.py", line 279, in _get_container
container = RocksContainer(path, read_only=read_only)
File "aim/storage/rockscontainer.pyx", line 104, in aim.storage.rockscontainer.RocksContainer.__init__
File "aim/storage/rockscontainer.pyx", line 154, in aim.storage.rockscontainer.RocksContainer.writable_db
File "aim/storage/rockscontainer.pyx", line 146, in aim.storage.rockscontainer.RocksContainer.db
File "src/aimrocks/lib_rocksdb.pyx", line 1686, in aimrocks.lib_rocksdb.DB.__cinit__
File "src/aimrocks/lib_rocksdb.pyx", line 89, in aimrocks.lib_rocksdb.check_status
aimrocks.errors.RocksIOError: b'IO error: While lock file: /home/user/Desktop/classification/.aim/meta/chunks/e23474f7c433427ab61ec693/LOCK: Resource temporarily unavailable'
```
I see there's another issue (#2434) with a similar problem, so feel free to close this if it's considered a duplicate. Unlike that issue, though, I am not using a remote server, and the error appears only once, when trying to close the run manually.
Exiting the local server from terminal and restarting with `aim up` doesn't change it.
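The `Resource temporarily unavailable` on the `LOCK` file is the classic `EAGAIN` you get when another process still holds an advisory file lock, which suggests some aim or training process is still attached to that run. This is my reading of the error, not a confirmed aim diagnosis; a stdlib-only demonstration of the mechanism (Linux/macOS):

```python
import fcntl
import os
import tempfile

# One handle takes the exclusive lock, playing the role of a live
# aim / RocksDB process that is still attached to the run.
fd, lock_path = tempfile.mkstemp()
os.close(fd)
holder = open(lock_path, "w")
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second open of the same file now gets EAGAIN, surfacing in
# Python as BlockingIOError ("Resource temporarily unavailable").
contender = open(lock_path, "w")
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True
print("second locker blocked:", blocked)

contender.close()
holder.close()
os.unlink(lock_path)
```

In practice, `lsof .aim/meta/chunks/<run_hash>/LOCK` shows which PID still holds the lock; once that process exits, `aim runs close` should be able to acquire it.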
### To reproduce
I am using `aim.tensorflow.AimCallback` to interact with the run metadata.
```
from aim.tensorflow import AimCallback
config = model.get_config()['layers']
a = {}
for i in config:
a.setdefault(i['class_name'], []).append(i['config'])
aim_callback = AimCallback(experiment="aim_on_keras")
for key, val in a.items():
aim_callback._run[key] = val
aim_callback._run['data_train_tensor'] = X_train.shape
aim_callback._run['data_test_tensor'] = X_test.shape
aim_callback._run['label_train_tensor'] = Y_train.shape
aim_callback._run['label_test_tensor'] = Y_test.shape
aim_callback._run['epochs'] = epochs
aim_callback._run['batch_size'] = batch_size
aim_callback._run['max_words'] = MAX_NB_WORDS
aim_callback._run['max_seq_length'] = MAX_SEQUENCE_LENGTH
aim_callback._run['test_size'] = TEST_SIZE
```
and using `callbacks=[aim_callback]` in keras `model.fit()`
### Expected behavior
Run should automatically end after notebook completes execution.
Run should also close when manually using `aim runs close xxxx`
### Environment
- Aim Version (e.g., 3.0.1)
aim==3.15.1
aim-ui==3.15.1
aimrecords==0.0.7
aimrocks==0.2.1
- Python version
Python 3.9.15
- pip version
pip 22.3.1
- OS (e.g., Linux)
Ubuntu 20.04.1
- Any other relevant information
### Additional context
<!-- Add any other context about the problem here. -->
| closed | 2022-12-16T16:10:53Z | 2022-12-17T08:43:48Z | https://github.com/aimhubio/aim/issues/2436 | [
"type / bug",
"help wanted"
] | mohammed-zia | 2 |
InstaPy/InstaPy | automation | 6,272 | Image not liked: b'Unavailable Page' | all interactions i'm getting this error can anyone help me?
| open | 2021-07-13T18:28:27Z | 2021-07-17T16:00:57Z | https://github.com/InstaPy/InstaPy/issues/6272 | [] | voxoff79 | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 1,628 | Problem installation (python version) | What python version i need to install requirement (v5.6)?
now i catch this error (arc linux):
```
(env) [Dokjolly@arch ultimatevocalremovergui-5.6]$ pip install -r requirements.txt
Ignoring SoundFile: markers 'sys_platform == "windows"' don't match your environment
Collecting altgraph==0.17.3 (from -r requirements.txt (line 1))
  Downloading altgraph-0.17.3-py2.py3-none-any.whl.metadata (7.4 kB)
Collecting audioread==3.0.0 (from -r requirements.txt (line 2))
  Downloading audioread-3.0.0.tar.gz (377 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      Traceback (most recent call last):
        File "/home/Dokjolly/Desktop/ultimatevocalremovergui-5.6/env/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/Dokjolly/Desktop/ultimatevocalremovergui-5.6/env/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/Dokjolly/Desktop/ultimatevocalremovergui-5.6/env/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-nyx53dey/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=[])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-nyx53dey/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-nyx53dey/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 522, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-nyx53dey/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 320, in run_setup
          exec(code, locals())
        File "<string>", line 17, in <module>
      ModuleNotFoundError: No module named 'imp'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.

[notice] A new release of pip is available: 24.2 -> 24.3.1
[notice] To update, run: pip install --upgrade pip
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
| open | 2024-11-19T09:11:24Z | 2024-12-21T17:27:08Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1628 | [] | Dokjolly0 | 2 |
miguelgrinberg/Flask-SocketIO | flask | 759 | [Query] Error after 500+ client connections | Hi Miguel,
The server stops accepting new connections once it reaches 500+ clients. After that point I see socket errors, and no new connections succeed.
I am running the server in eventlet mode.
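One thing worth checking at this scale (an assumption on my part, not a confirmed diagnosis): each connection consumes a file descriptor, and many Linux setups cap a process at 1024 open files, which eventlet tends to surface as socket errors once connection counts reach the hundreds. Checking and raising the limit:

```shell
ulimit -n                           # show current per-process limit (commonly 1024)
ulimit -n 4096 2>/dev/null || true  # try to raise it for this shell
# Then start the server from the same shell, e.g.:
#   python run_server.py            # hypothetical launch command
echo "fd limit now: $(ulimit -n)"
```

Raising it above the hard limit requires root or an `/etc/security/limits.conf` change; the server process inherits whatever limit its launching shell has.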
Thanks,
Swathin | closed | 2018-08-09T11:15:00Z | 2018-09-29T09:42:00Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/759 | [
"question"
] | swathinsankaran | 1 |
postmanlabs/httpbin | api | 205 | Digest authentication doesn't work for CORS | Digest auth endpoint is missing the "Access-Control-Expose-Headers: WWW-Authenticate" header in order to correctly support CORS requests. Without it, the browser doesn't allow the client to get the value of the WWW-Authenticate header.
| closed | 2015-01-22T06:54:09Z | 2018-04-26T17:51:05Z | https://github.com/postmanlabs/httpbin/issues/205 | [] | reinert | 4 |
piskvorky/gensim | data-science | 3,473 | Merging corpora requires converting itertools chain object to list object | When merging corpora, it is essential to convert the itertools.chain object to a list. Otherwise the serialization will not save the older corpus.
# now we can merge corpora from the two incompatible dictionaries into one
merged_corpus = itertools.chain(some_corpus_from_dict1, dict2_to_dict1[some_corpus_from_dict2])
should be
merged_corpus = list(itertools.chain(some_corpus_from_dict1, dict2_to_dict1[some_corpus_from_dict2]))
Then the merged_corpus can be serialized using the standard
MmCorpus.serialize(merged_corpus_output_fname, merged_corpus) | closed | 2023-05-16T16:40:25Z | 2023-05-16T19:33:30Z | https://github.com/piskvorky/gensim/issues/3473 | [] | mspezio | 2 |
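The underlying gotcha in the gensim report above is generic Python, not gensim: `itertools.chain` returns a one-shot iterator, so any code path that walks the corpus more than once (as serialization can) sees it empty on the second pass. A minimal demonstration:

```python
import itertools

corpus1 = [[(0, 1.0)], [(1, 2.0)]]   # toy bag-of-words corpora
corpus2 = [[(2, 1.0)]]

merged = itertools.chain(corpus1, corpus2)
first_pass = list(merged)    # consumes the iterator
second_pass = list(merged)   # now empty!
print(len(first_pass), len(second_pass))  # 3 0

# Materializing up front makes it safely re-iterable:
merged_list = list(itertools.chain(corpus1, corpus2))
print(len(list(merged_list)), len(list(merged_list)))  # 3 3
```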
graphistry/pygraphistry | pandas | 36 | Hint to set notebook to Trusted | ( @thibaudh : can you take, or should I?)
When opening a third-party notebook, our viz won't be shown because our JS won't run by default.
I propose either:
A) Print out warning/hint HTML to do `File -> Trusted Notebook` and then have JS delete that warning
B) Load an iframe URL and then have our existing iframe js logic overwrite it.
Leaning towards A due to embedding issues motivating the JS logic.
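A sketch of option A's mechanism (all names hypothetical): emit a warning `<div>` plus a `<script>` that hides it. In an untrusted notebook the script never runs, so the hint stays visible; once the user trusts the notebook, the same script removes it.

```python
WARNING_ID = "graphistry-trust-warning"


def trust_hint_html(viz_html):
    """Wrap viz HTML with a self-removing trust warning."""
    warning = (
        f'<div id="{WARNING_ID}">'
        "Plot not rendering? Enable JS via File &gt; Trust Notebook."
        "</div>"
    )
    # Runs only when the notebook is trusted, hiding the warning.
    remover = (
        f'<script>'
        f'document.getElementById("{WARNING_ID}").style.display="none";'
        f'</script>'
    )
    return warning + viz_html + remover


html = trust_hint_html("<iframe src='...'></iframe>")
print(WARNING_ID in html)  # True
```

Hiding rather than deleting the node keeps the DOM stable for any other scripts holding references into the output cell.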
| closed | 2015-09-22T03:09:15Z | 2016-05-08T01:59:06Z | https://github.com/graphistry/pygraphistry/issues/36 | [
"enhancement"
] | lmeyerov | 1 |
3b1b/manim | python | 1,203 | How to animate shifts in camera frame center in a SpecialThreeDScene ? | I've tried **`self.play(self.camera.frame_center.shift, 2*UP)`** , but the [result ](https://streamable.com/xyjzlq) is weird.
This is the code I currently have :
```python
class ThreeDFrameShifts(SpecialThreeDScene):
def construct(self):
self.set_camera_orientation(45*DEGREES, 45*DEGREES)
plane = NumberPlane(
y_min=-10, y_max=10, x_min=-10, x_max=10,
background_line_style=dict(stroke_color=GREY)
)
vects = VGroup()
for direction, text, col in zip([2 * RIGHT, 2 * UP], ["X", "Y"], [BLUE, GREEN]):
vect = Vector(direction, color=col)
vect.add(TextMobject(text, color=col).next_to(vect, UR, buff=0))
vects.add(vect)
dot = Dot(color=MAROON).scale(1.5).move_to(self.camera.get_frame_center())
dot.add_updater(lambda d: d.move_to(self.camera.get_frame_center()))
self.add(plane, dot, vects)
self.play(self.camera.frame_center.shift, 2*UP, run_time=2)
self.wait()
```
I want to animate shifts in frame center like [this ](https://streamable.com/i5jt07) one, but inside a **`SpecialThreeDScene`**. I should probably be inheriting from both **`SpecialThreeDScene`** and **`MovingCameraScene`** , but I don't know what to do next. | closed | 2020-08-16T08:35:35Z | 2020-10-21T16:28:13Z | https://github.com/3b1b/manim/issues/1203 | [] | ghost | 0 |
jina-ai/clip-as-service | pytorch | 509 | when i run example2.py raise error | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [ ] Are you running the latest `bert-as-service`?
* [ ] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [ ] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [ ] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version:
- Python version:
- `bert-as-service` version:
- GPU model and memory:
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```bash
bert-serving-start YOUR_SERVER_ARGS
```
and calling the server via:
```python
bc = BertClient(YOUR_CLIENT_ARGS)
bc.encode()
```
Then this issue shows up:
... | open | 2020-01-22T04:25:20Z | 2020-01-22T04:27:15Z | https://github.com/jina-ai/clip-as-service/issues/509 | [] | cqray1990 | 1 |
iperov/DeepFaceLab | deep-learning | 857 | No preview appearing | no preview is showing. I have tried training mode and in every training mode no preview showed.
this error message also appears even though i am using H64. (specs: GTX1050 2gb, intel(R) Xeon(R), 12gb ram)
Starting. Press "Enter" to stop training and save model.
Error: OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'training/Adam/Variable_30/Assign', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 111, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\ModelBase.py", line 507, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\Model_H64\Model.py", line 88, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1216, in train_on_batch
self._make_train_function()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 509, in _make_train_function
loss=self.total_loss)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in get_updates
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in <listcomp>
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 704, in zeros
return variable(v, dtype=dtype, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 402, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1329, in __init__
constraint=constraint)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1481, in _init_from_args
validate_shape=validate_shape).op
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/Variable_30/Assign}} = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 111, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\ModelBase.py", line 507, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\Model_H64\Model.py", line 88, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2697, in __call__
if hasattr(get_session(), '_make_callable_from_options'):
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 206, in get_session
session.run(tf.variables_initializer(uninitialized_vars))
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'training/Adam/Variable_30/Assign', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 111, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\ModelBase.py", line 507, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\Model_H64\Model.py", line 88, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1216, in train_on_batch
self._make_train_function()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 509, in _make_train_function
loss=self.total_loss)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in get_updates
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in <listcomp>
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 704, in zeros
return variable(v, dtype=dtype, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 402, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1329, in __init__
constraint=constraint)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1481, in _init_from_args
validate_shape=validate_shape).op
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
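For scale, the failing allocation above can be computed directly from the logged shape, and Adam roughly triples it, since it keeps two moment buffers per weight tensor (the 3x factor is my assumption about the optimizer state, not something the log states):

```python
# Size of the tensor the allocator failed on: shape [3, 3, 512, 2048], float32.
elements = 3 * 3 * 512 * 2048
bytes_needed = elements * 4              # float32 = 4 bytes per element
print(bytes_needed / 2**20)              # 36.0 MiB for the weights alone

# Adam keeps two moment tensors (m and v) alongside each weight tensor,
# so the optimizer roughly triples the per-tensor footprint:
print(3 * bytes_needed / 2**20)          # 108.0 MiB
```

So even at batch size 1, the optimizer state for the largest layers alone can exhaust a small GPU.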
I believe there is a way around this problem. I know that OOM means out of memory, but I have already set my batch size to 1, so I hope someone here can help. Thank you. | closed | 2020-08-10T14:03:29Z | 2023-06-11T07:42:19Z | https://github.com/iperov/DeepFaceLab/issues/857 | [] | JeezLoveJazzMusic | 3 |
JaidedAI/EasyOCR | machine-learning | 1,019 | Print out alternate predicted results string and probabilities | Hi,
Is there a method to print out the alternate results from the "Reader" class? For instance, here is an example of my code:
Example code:

```python
reader = easyocr.Reader(['en'], gpu=True)
results = reader.readtext('example_labeltif')

for result in results:
    bbox, text, score = result
    print(f"Text: {text}")
    print(f"Confidence Score: {score}")
    print()
```

This returns:

```
Text: 5023-00013 A-3
Confidence Score: 0.7890631529022496

Text: BEAKER,
Confidence Score: 0.9955028038505163

Text: ERIKA
Confidence Score: 0.6211073188069697
```
I would like to see what the alternate predictions were for "5023-00013 A-3".
An example alternate prediction for "5023-00013 A-3" would be something like this (e.g. "S823-88813 A-3", confidence score: 0.200234).
Is there a way to do this? Possibly by altering the `get_recognizer` method in "recognition.py"? I don't want to alter any source code if I can avoid it, but if someone has a solution, I am all ears.
| open | 2023-05-16T19:38:40Z | 2024-10-02T11:15:16Z | https://github.com/JaidedAI/EasyOCR/issues/1019 | [] | rdavis22 | 2 |
inducer/pudb | pytest | 46 | Python3 setup.py install fails | I was able to build using python3 setup.py build. But doing a setup.py install failed with the following traceback.
```
zip_safe flag not set; analyzing archive contents...
pudb.__pycache__.debugger.cpython-33: module references __file__
Traceback (most recent call last):
  File "setup.py", line 47, in <module>
    packages=["pudb"])
  File "/home/vagrant/sandbox/python3/lib/python3.3/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/vagrant/sandbox/python3/lib/python3.3/distutils/dist.py", line 917, in run_commands
    self.run_command(cmd)
  File "/home/vagrant/sandbox/python3/lib/python3.3/distutils/dist.py", line 936, in run_command
    cmd_obj.run()
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/install.py", line 73, in run
    self.do_egg_install()
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/install.py", line 93, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/vagrant/sandbox/python3/lib/python3.3/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/vagrant/sandbox/python3/lib/python3.3/distutils/dist.py", line 936, in run_command
    cmd_obj.run()
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/bdist_egg.py", line 227, in run
    os.path.join(archive_root,'EGG-INFO'), self.zip_safe()
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/bdist_egg.py", line 266, in zip_safe
    return analyze_egg(self.bdist_dir, self.stubs)
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/bdist_egg.py", line 402, in analyze_egg
    safe = scan_module(egg_dir, base, name, stubs) and safe
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/bdist_egg.py", line 435, in scan_module
    symbols = dict.fromkeys(iter_symbols(code))
  File "/home/vagrant/sandbox/python3/lib/python3.3/site-packages/distribute-0.6.29dev-py3.3.egg/setuptools/command/bdist_egg.py", line 457, in iter_symbols
    for name in code.co_names: yield name
AttributeError: 'int' object has no attribute 'co_names'
```
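For context, my reading of the crash (the exact distribute internals may differ): the scanner walks `code.co_consts` and eventually calls `.co_names` on entries that, under Python 3, can be plain ints rather than nested code objects. A stdlib-only sketch of that failure mode:

```python
def sample():
    return 42

consts = sample.__code__.co_consts
print(consts)  # the literal 42 sits in co_consts as a plain int, not a code object

# A scanner that treats every constant as a code object will crash on the int;
# the guard below is what the failing scanner apparently lacks:
for const in consts:
    if not hasattr(const, "co_names"):
        continue
    print(const.co_names)
```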
| closed | 2012-08-24T00:10:57Z | 2013-03-12T23:15:35Z | https://github.com/inducer/pudb/issues/46 | [] | orsenthil | 7 |
tqdm/tqdm | jupyter | 1,641 | Tqdm prints duplicated progress bar | I want to use tqdm in a loop such as:
```python
def __process_each_iteration(self, imputer) -> tuple[int, float, float]:
    progress_bar = tqdm(
        range(self.base_imputed_df.shape[1]),
        desc="Processing...: ",
        bar_format=(
            "{l_bar}{bar}| Iteration {n_fmt}/{total_fmt} "
            "[{elapsed}<{remaining}, {rate_fmt}]"
        ),
    )
    for col_index in progress_bar:
        pass
    progress_bar.close()

def fit_transform(self):
    for idx, imputer in enumerate(range(10)):
        change, avg_train_metric, avg_val_metric = self.__process_each_iteration(imputer)
        pass
```

When I run the above code, it gives me the following output:

```
Iteration 1/9
Processing...: 100%|██████████| Iteration 30/30 [00:09<00:00, 3.26it/s]
Processing...:   0%|          | Iteration 0/30 [00:00<?, ?it/s]30 columns updated.
Average r2_score -> train: 0.9665220914801507, val: 0.7951696912960284
------------------------------------------------------------
Iteration 2/9
Processing...: 100%|██████████| Iteration 30/30 [00:13<00:00, 2.30it/s]
Processing...:   0%|          | Iteration 0/30 [00:00<?, ?it/s]19 columns updated.
Average r2_score -> train: 0.9849819806147938, val: 0.85501137134333
------------------------------------------------------------
```

It prints two progress bars in each iteration. I also tried using tqdm as a context manager (`with tqdm(...) as ...:`), but I had the same problem.
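For what it's worth, the stacked bars may just be tqdm's default `leave=True` behavior: a finished bar stays on screen when closed, so each call leaves its completed 100% line behind before the next 0% bar starts. A minimal sketch of the workaround under that assumption (the `process_once` helper is hypothetical, and `file=` is used here only so the demo captures its own output):

```python
from io import StringIO

from tqdm import tqdm

def process_once(n: int, out) -> None:
    # leave=False erases the bar when it closes instead of leaving a
    # completed 100% line behind for every outer iteration.
    bar = tqdm(range(n), desc="Processing...: ", leave=False, file=out)
    for _ in bar:
        pass
    bar.close()

buf = StringIO()
for _ in range(3):
    process_once(5, buf)
```

If the finished bars should stay visible, each on its own line, `position=` is the other knob to try.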
| open | 2024-12-15T20:22:26Z | 2025-01-14T20:45:53Z | https://github.com/tqdm/tqdm/issues/1641 | [] | fatemeakbari | 1 |
sammchardy/python-binance | api | 1,150 | Incompatible with Python 3.10 | In Python 3.10 many "loop" keyword arguments were removed from various `asyncio` APIs. Sadly it is heavily used to implement websocket streams. | open | 2022-02-23T13:44:37Z | 2022-03-04T02:54:48Z | https://github.com/sammchardy/python-binance/issues/1150 | [] | ydm | 1 |
seleniumbase/SeleniumBase | pytest | 2406 | SB with Stealth not pass iphey.com , pixelscan.net | Hello, I am trying to use SB with selenium-stealth, but it does not pass iphey.com or pixelscan.net.
This is my code:

```python
from seleniumbase import Driver
from selenium_stealth import stealth

driver = Driver(uc=True, mobile=True)

stealth(
    driver,
    languages=["en-US", "en"],
    vendor="Google Inc.",
    platform="Android",
    webgl_vendor="Google Inc. (Imagination Technologies)",
    renderer="ANGLE (Imagination Technologies,PowerVR Rogue GE8320, OpenGL ES 3.2)",
    fix_hairline=True,
)
driver.get("https://iphey.com")

# driver.get("https://browserleaks.com/webrtc")
driver.sleep(10000)
driver.quit()
``` | closed | 2024-01-02T06:46:09Z | 2024-03-15T00:35:13Z | https://github.com/seleniumbase/SeleniumBase/issues/2406 | [
"question",
"UC Mode / CDP Mode"
] | pythondeveloperz | 10 |
krish-adi/barfi | streamlit | 8 | Poetry: Migrate to package management to use poetry | Use Poetry to manage the package and use `poetry install` to run the package with the environment in editable mode. | closed | 2022-08-30T11:09:57Z | 2025-01-06T04:11:53Z | https://github.com/krish-adi/barfi/issues/8 | [
"enhancement"
] | krish-adi | 1 |
inducer/pudb | pytest | 125 | Can't see how to open __init__ file of package when pressing m | Oh, actually, it's a problem with this specific package, which has an `ImportError` (which is what I'm trying to use pudb to debug, so it's frustrating that I can't open the file. Please allow open by file and not by module.)
| open | 2014-09-09T01:38:19Z | 2014-09-09T01:39:47Z | https://github.com/inducer/pudb/issues/125 | [] | cool-RR | 0 |
PokeAPI/pokeapi | api | 1,141 | Alola Route 16 east/west distinction is not correct | location-area/1063 and location-area/1064 seem to be saying that there is a Scraggy which you can only find in the eastern grass field in USUM. I've personally confirmed in Ultra Sun that the Scraggy is in both fields. | open | 2024-10-08T04:25:39Z | 2024-10-08T04:25:39Z | https://github.com/PokeAPI/pokeapi/issues/1141 | [] | Pinsplash | 0 |
indico/indico | flask | 6,009 | List of bookings for a user + cloning booking | - UI mockups to share w/ Burotel admins (skipping in favor for an already existing design - the bookings page card view could work well for this)
- New page with user search + choice booked-for/booked-by + date filter (maybe)
- Show all bookings from that user
- Option to clone a booking (with data prefilled, just new dates) | open | 2023-10-30T10:27:50Z | 2023-11-17T22:00:46Z | https://github.com/indico/indico/issues/6009 | [] | GovernmentPlates | 0 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 88 | Web server is returning an unknown error | Web server is returning an unknown error Error code 520
Visit [cloudflare.com](https://www.cloudflare.com/5xx-error-landing?utm_source=errorcode_520&utm_campaign=douyin.wtf) for more information.
2022-10-10 01:54:46 UTC | closed | 2022-10-10T01:56:15Z | 2022-10-10T02:10:38Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/88 | [
"Web Down"
] | Astar-Li | 3 |
pytest-dev/pytest-qt | pytest | 98 | Logging hookwrapper hides exceptions | I'm currently investigating a problem with `pytest-bdd` where it raises an `INTERNALERROR>` - unfortunately, `pytest-qt` hides it with another one :wink:
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 90, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 121, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/main.py", line 146, in pytest_runtestloop
INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
INTERNALERROR> return _wrapped_call(hook_impl.function(*args), self.execute)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
INTERNALERROR> return call_outcome.get_result()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
INTERNALERROR> self.result = func()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
INTERNALERROR> return _wrapped_call(hook_impl.function(*args), self.execute)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
INTERNALERROR> return call_outcome.get_result()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
INTERNALERROR> self.result = func()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/runner.py", line 65, in pytest_runtest_protocol
INTERNALERROR> runtestprotocol(item, nextitem=nextitem)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/runner.py", line 75, in runtestprotocol
INTERNALERROR> reports.append(call_and_report(item, "call", log))
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/runner.py", line 121, in call_and_report
INTERNALERROR> report = hook.pytest_runtest_makereport(item=item, call=call)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
INTERNALERROR> return _wrapped_call(hook_impl.function(*args), self.execute)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call
INTERNALERROR> wrap_controller.send(call_outcome)
INTERNALERROR> File "/home/florian/proj/qutebrowser/git/.tox/py35/lib/python3.5/site-packages/pytestqt/logging.py", line 43, in pytest_runtest_makereport
INTERNALERROR> report = outcome.result
INTERNALERROR> AttributeError: '_CallOutcome' object has no attribute 'result'
```
Looking at it with `pdb`, that seems to be the case because there was an exception:
```
(Pdb) outcome.result
*** AttributeError: '_CallOutcome' object has no attribute 'result'
(Pdb) outcome.excinfo
(<class 'IndexError'>, IndexError('list index out of range',), <traceback object at 0x7f1645d85688>)
```
Shouldn't `pytest-qt` use `outcome.get_result()` instead, which raises the exception if there is one?
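A self-contained sketch of why that matters, using a stand-in class with hypothetical names (the real `_CallOutcome` lives in pytest's vendored pluggy): on failure the outcome never assigns `.result`, so a hookwrapper has to go through `get_result()`, which re-raises the stored exception instead of hitting the `AttributeError`:

```python
import sys

class CallOutcome:
    """Stand-in mimicking pluggy's call-outcome object (hypothetical names)."""

    def __init__(self, func):
        try:
            self.result = func()           # only assigned when func() succeeds
        except BaseException:
            self.excinfo = sys.exc_info()  # (type, value, traceback)

    def get_result(self):
        if hasattr(self, "result"):
            return self.result
        _, value, tb = self.excinfo
        raise value.with_traceback(tb)     # re-raise the original failure

outcome = CallOutcome(lambda: [][0])       # raises IndexError internally
print(hasattr(outcome, "result"))          # False: .result was never set
try:
    outcome.get_result()                   # re-raises the stored IndexError
except IndexError:
    print("get_result re-raised the original exception")
```

So accessing `outcome.result` directly masks the underlying `IndexError` with an `AttributeError`, which is exactly the traceback above.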
| closed | 2015-10-12T04:55:21Z | 2015-10-14T23:53:04Z | https://github.com/pytest-dev/pytest-qt/issues/98 | [] | The-Compiler | 4 |
gradio-app/gradio | deep-learning | 10,199 | Auto-Reloading doesn't run gr.render(input=state_object) | ### Describe the bug
Auto-Reloading doesn't run the `@gr.render(...)` decorated function if the input is a gr.State object.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
1. Run this official doc's example on dynamic event listeners
https://www.gradio.app/guides/dynamic-apps-with-render-decorator#dynamic-event-listeners
```python
import gradio as gr
with gr.Blocks() as demo:
text_count = gr.State(1)
add_btn = gr.Button("Add Box")
add_btn.click(lambda x: x + 1, text_count, text_count)
@gr.render(inputs=text_count)
def render_count(count):
boxes = []
for i in range(count):
box = gr.Textbox(key=i, label=f"Box {i}")
boxes.append(box)
def merge(*args):
return " ".join(args)
merge_btn.click(merge, boxes, output)
merge_btn = gr.Button("Merge")
output = gr.Textbox(label="Merged Output")
demo.launch()
```
it should render correctly like this:

2. Now change the code slightly, e.g. change the button text to `Add a Box` and wait for auto-reloading to re-render

### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio environment
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.3.0
gradio_client version: 1.4.2
```
### Severity
I can work around it by refreshing the page, however, if it works as expected, it will be more ergonomic and make the development experience more enjoyable and less disruptive. | open | 2024-12-13T13:04:20Z | 2024-12-18T19:24:40Z | https://github.com/gradio-app/gradio/issues/10199 | [
"bug"
] | cliffxuan | 2 |
gradio-app/gradio | deep-learning | 10,680 | Support streaming for chat models in `gr.load` | - [x] I have searched to see if a similar issue already exists.
Currently, when creating a chat interface with `gr.load` for chat models, the model execution seems to be handled with this code:
https://github.com/gradio-app/gradio/blob/b43200d7df92e40285c1e5fb1a2f010278fce5d2/gradio/external_utils.py#L131-L139
and the chat response is returned at once.
For example,
```py
import gradio as gr
gr.load(
"models/deepseek-ai/DeepSeek-R1",
provider="together",
chatbot=gr.Chatbot(type="messages", allow_tags=["think"], scale=1),
).launch()
```
https://github.com/user-attachments/assets/72c3268d-abed-43f8-b0e6-2f08b90e06dc
But I think the UX would be better if it supported streaming as well.
```py
import os
import gradio as gr
from huggingface_hub import InferenceClient
client = InferenceClient(model="deepseek-ai/DeepSeek-R1", provider="together", api_key=os.getenv("HF_TOKEN"))
def fn(message, history):
messages = [*history, {"role": "user", "content": message}]
out = ""
for chunk in client.chat_completion(messages=messages, max_tokens=2000, stream=True):
out += chunk.choices[0].delta.content or ""
yield out
gr.ChatInterface(fn=fn, type="messages", chatbot=gr.Chatbot(type="messages", allow_tags=["think"], scale=1)).launch()
```
https://github.com/user-attachments/assets/0bc5ff06-4a4c-42fa-bf56-e010deb08fd0 | closed | 2025-02-26T03:33:52Z | 2025-03-08T09:05:27Z | https://github.com/gradio-app/gradio/issues/10680 | [
"enhancement"
] | hysts | 3 |
collerek/ormar | fastapi | 348 | Integer nullable not working | **Describe the bug**
When I create a model with an integer-type id, the generated migration gives this field nullable=True, but if I specify a UUID type the property is nullable=False.
**To Reproduce**
Model with id as int:
```python
class Meeting(BaseModel):
class Meta(MainMeta):
tablename = 'meetings'
constraints = [ormar.UniqueColumns("time", "week_day", "teacher_id")]
id = ormar.Integer(primary_key=True, nullable=False)
week_day: int = ormar.Integer(minimum=1, maximum=7, choices=[1, 2, 3, 4, 5, 6, 7], nullable=False)
time: datetime.time = ormar.Time()
start: datetime.date = ormar.Date()
end: datetime.date = ormar.Date()
group: Union[Group, Dict] = ormar.ForeignKey(Group, related_name='meetings', name="group_id", ondelete="CASCADE",
nullable=False)
teacher: User = ormar.ForeignKey(User, related_name='meetings', name="teacher_id", ondelete="CASCADE",
nullable=False)
```
migrattions with id as int:
```python
op.create_table('meetings',
sa.Column('id', sa.Integer(), nullable=True),
sa.Column('week_day', sa.Integer(), nullable=False),
sa.Column('time', sa.Time(), nullable=False),
sa.Column('start', sa.Date(), nullable=False),
sa.Column('end', sa.Date(), nullable=False),
sa.Column('group_id', sa.Integer(), nullable=False),
sa.Column('teacher_id', sa.CHAR(36), nullable=False),
sa.ForeignKeyConstraint(['group_id'], ['groups.id'], name='fk_meetings_groups_id_group', ondelete='CASCADE'),
sa.ForeignKeyConstraint(['teacher_id'], ['users.id'], name='fk_meetings_users_id_teacher', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('time', 'week_day', 'teacher_id', name='uc_meetings_time_week_day_teacher_id')
)
```
Model with id as uuid:
```python
class Meeting(BaseModel):
class Meta(MainMeta):
tablename = 'meetings'
constraints = [ormar.UniqueColumns("time", "week_day", "teacher_id")]
id = ormar.UUID(primary_key=True, nullable=False)
week_day: int = ormar.Integer(minimum=1, maximum=7, choices=[1, 2, 3, 4, 5, 6, 7], nullable=False)
time: datetime.time = ormar.Time()
start: datetime.date = ormar.Date()
end: datetime.date = ormar.Date()
group: Union[Group, Dict] = ormar.ForeignKey(Group, related_name='meetings', name="group_id", ondelete="CASCADE",
nullable=False)
teacher: User = ormar.ForeignKey(User, related_name='meetings', name="teacher_id", ondelete="CASCADE",
nullable=False)
```
migrations with id as uuid:
```python
op.create_table('meetings',
sa.Column('id', sa.CHAR(32), nullable=False),
sa.Column('week_day', sa.Integer(), nullable=False),
sa.Column('time', sa.Time(), nullable=False),
sa.Column('start', sa.Date(), nullable=False),
sa.Column('end', sa.Date(), nullable=False),
sa.Column('group_id', sa.Integer(), nullable=False),
sa.Column('teacher_id', sa.CHAR(36), nullable=False),
sa.ForeignKeyConstraint(['group_id'], ['groups.id'], name='fk_meetings_groups_id_group', ondelete='CASCADE'),
sa.ForeignKeyConstraint(['teacher_id'], ['users.id'], name='fk_meetings_users_id_teacher', ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('time', 'week_day', 'teacher_id', name='uc_meetings_time_week_day_teacher_id')
)
```
The migrations run successfully, but in the database the id column ends up NOT NULL, and when autogenerating again, Alembic tries to change it back to nullable...
```
INFO [alembic.autogenerate.compare] Detected NULL on column 'meetings.id'
```
second migration:
```python
op.alter_column('meetings', 'id',
    existing_type=sa.INTEGER(),
    nullable=True,
    autoincrement=True,
    existing_server_default=sa.text("nextval('meetings_id_seq'::regclass)"))
```
and I get this error:
```python
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\users\user\appdata\local\programs\python\python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\user\appdata\local\programs\python\python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\User\.virtualenvs\borkovSchool-SnKBJgJY\Scripts\alembic.exe\__main__.py", line 7, in <module>
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\config.py", line 588, in main
CommandLine(prog=prog).main(argv=argv)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\config.py", line 582, in main
self.run_cmd(cfg, options)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\config.py", line 559, in run_cmd
fn(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\command.py", line 320, in upgrade
script.run_env()
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\script\base.py", line 563, in run_env
util.load_python_file(self.dir, "env.py")
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\util\pyfiles.py", line 92, in load_python_file
module = load_module_py(module_id, path)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\util\pyfiles.py", line 108, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "alembic\env.py", line 84, in <module>
run_migrations_online()
File "alembic\env.py", line 78, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\runtime\environment.py", line 851, in run_migrations
self.get_context().run_migrations(**kw)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\runtime\migration.py", line 612, in run_migrations
step.migration_fn(**kw)
File "D:\Projects\Programming\borkovSchool\alembic\versions\96e30f57bf1a_second.py", line 21, in upgrade
op.alter_column('addresses', 'id',
File "<string>", line 8, in alter_column
File "<string>", line 3, in alter_column
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\operations\ops.py", line 1880, in alter_column
return operations.invoke(alt)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\operations\base.py", line 387, in invoke
return fn(self, operation)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\operations\toimpl.py", line 50, in alter_column
operations.impl.alter_column(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\ddl\postgresql.py", line 173, in alter_column
super(PostgresqlImpl, self).alter_column(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\ddl\impl.py", line 231, in alter_column
self._exec(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\alembic\ddl\impl.py", line 197, in _exec
return conn.execute(construct, multiparams)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\engine\base.py", line 1263, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\sql\ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\engine\base.py", line 1353, in _execute_ddl
ret = self._execute_context(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\engine\base.py", line 1814, in _execute_context
self._handle_dbapi_exception(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\engine\base.py", line 1995, in _handle_dbapi_exception
util.raise_(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_
raise exception
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\engine\base.py", line 1771, in _execute_context
self.dialect.do_execute(
File "c:\users\user\.virtualenvs\borkovschool-snkbjgjy\lib\site-packages\sqlalchemy\engine\default.py", line 717, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidTableDefinition) ERROR: column "id" is part of the primary key
[SQL: ALTER TABLE addresses ALTER COLUMN id DROP NOT NULL]
```
**Expected behavior**
The `id` column is expected to stay nullable = false across migrations
**Versions**
- Database backend used `postgress`
- Python version `python 3.9.6`
- `ormar` version `0.10.19`
- `pydantic` version `1.8.2`
- `fastapi` version `0.68.1`
| closed | 2021-09-16T20:15:50Z | 2021-09-26T16:06:48Z | https://github.com/collerek/ormar/issues/348 | [
"bug"
] | artel1992 | 6 |
deepspeedai/DeepSpeed | machine-learning | 6,723 | CUBLAS_STATUS_NOT_SUPPORTED | **Describe the bug**
When I run my code, I get the following error:
```
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasGemmStridedBatchedEx(handle, opa, opb, (int)m, (int)n, (int)k, (void*)&falpha, a, CUDA_R_16BF, (int)lda, stridea, b, CUDA_R_16BF, (int)ldb, strideb, (void*)&fbeta, c, CUDA_R_16BF, (int)ldc, stridec, (int)num_batches, compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)`
```
**ds_report output**
```
[2024-11-07 11:09:25,334] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2024-11-07 11:09:27.985422: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1730948968.001130 14809 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1730948968.005845 14809 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-07 11:09:28.021106: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Ampere and newer architectures
[WARNING] FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
gds .................... [NO] ....... [OKAY]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
transformer_inference .. [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
inference_core_ops ..... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
cutlass_ops ............ [NO] ....... [NO]
quantizer .............. [NO] ....... [OKAY]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
ragged_device_ops ...... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
ragged_ops ............. [NO] ....... [NO]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/root/anaconda3/envs/deepspeed/lib/python3.9/site-packages/torch']
torch version .................... 2.5.0+cu121
deepspeed install path ........... ['/root/anaconda3/envs/deepspeed/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.15.3, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.4
shared memory (/dev/shm) size .... 125.75 GB
```
**System info (please complete the following information):**
- OS: Ubuntu 22.04
- GPU count and types: 1 GPU NVIDIA GeForce GTX TITAN X
- Python version: 3.9.18
**my ds_config:**
```
ds_config = {
    "train_batch_size": args.batch_size,
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 0.00006,
            "betas": [0.9, 0.95],
            "weight_decay": 0.01
        }
    },
    "bf16": {
        "enabled": True
    },
    "data_types": {
        "grad_accum_dtype": "bf16"
    },
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/mnt/nvme0n1",
            "pin_memory": False
        },
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/mnt/nvme0n1",
            "buffer_count": 32,
            "buffer_size": 1e8,
            "max_in_cpu": 1e6,
            "pin_memory": False
        },
        "sub_group_size": 0
    },
    "communication_data_type": "bf16"
}
```
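For context — and this is my own guess, not a confirmed diagnosis: the ds_report output itself warns that several kernels need Ampere or newer, the GTX TITAN X is a Maxwell card (compute capability 5.2), and the failing GEMM uses `CUDA_R_16BF`, i.e. bf16, which needs compute capability 8.0+. A tiny helper illustrating the gate I probably need before enabling `bf16` (hypothetical, not a DeepSpeed API):

```python
def bf16_supported(compute_capability):
    """bf16 tensor ops need an Ampere-or-newer GPU (compute capability >= 8.0)."""
    major, _minor = compute_capability
    return major >= 8

print(bf16_supported((5, 2)))  # False -> GTX TITAN X: fall back to fp16/fp32
print(bf16_supported((8, 0)))  # True  -> A100 and newer
```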
Thanks for your help! | closed | 2024-11-07T03:40:52Z | 2024-11-12T14:36:13Z | https://github.com/deepspeedai/DeepSpeed/issues/6723 | [
"bug",
"training"
] | niebowen666 | 5 |
errbotio/errbot | automation | 1,293 | Provide official support and guidance for Docker deployments | ### I am...
* [ ] Reporting a bug
* [x] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### Issue description
There are many different options for deploying and running Errbot, but Docker seems to be among the more discussed strategies. I suggest that we add some documentation about some possible ways that Docker can be used with Errbot, and maybe even include an official Errbot Docker image catered to the most common use cases.
I don't know if it's an official page, but there is an [Errbot Docker Hub][1] page that has not been updated in a number of years. If Docker Hub is going to be part of the strategy, we might also consider using [Automated Builds][3], as it would make keeping the image up to date much easier.
A community member has been maintaining a [docker-errbot][2] repo with the Docker strategy they use for their deployment. It may be helpful reference material, but there are also a few extras there that most users probably do not want or need.
In short, I'm posting this issue to start the discussion. There may be a few challenging parts around Dockerizing an Errbot deployment, such as provisioning and maintaining plugins. It would be nice to get a feel for what the community has tried already and where the pain points might be. Also, we might want to talk about using Kubernetes or some other similar tooling to achieve the same goal of containerized deployment, but Docker seems like the most obvious starting point.
[1]: https://hub.docker.com/r/errbot/err/
[2]: https://github.com/rroemhild/docker-errbot/
[3]: https://docs.docker.com/docker-hub/builds/ | open | 2019-01-28T15:51:36Z | 2021-07-23T06:52:18Z | https://github.com/errbotio/errbot/issues/1293 | [
"type: documentation",
"#deployment",
"#release-process"
] | sheluchin | 7 |
ultralytics/yolov5 | pytorch | 13,280 | Split features map of data | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi everyone,
I am working with two types of data: RGB and IR images. I want to apply the CSSA method (as described in this repository: [CSSA GitHub](https://github.com/artrela/mulitmodal-cssa/tree/main)). To do this, I need to extract feature maps from the data before feeding them into the CSSA model. I want to apply CSSA before C3 stage on the image below.
Could you please advise on the best way to split the RGB and IR data within the pipeline? Additionally, which specific files or parts of the codebase would need modification to implement this process effectively?
Thanks in advance for your help!


### Additional
_No response_ | open | 2024-08-25T14:54:59Z | 2024-08-25T23:20:44Z | https://github.com/ultralytics/yolov5/issues/13280 | [
"question"
] | letriluan | 1 |
ccxt/ccxt | api | 24,846 | `decimal.ConversionSyntax` for stop loss and take profit orders | ### Operating System
Windows 10
### Programming Languages
Python
### CCXT Version
4.4.47
### Description
I get this error when I try to place stop loss order for an open short position. I was able to open short position with the same base coin amount, but placing stop loss failed.
```
venv\Lib\site-packages\ccxt\base\exchange.py", line 5339, in amount_to_precision
result = self.decimal_to_precision(amount, TRUNCATE, market['precision']['amount'], self.precisionMode, self.paddingMode)
venv\Lib\site-packages\ccxt\base\decimal_to_precision.py", line 58, in decimal_to_precision
dec = decimal.Decimal(str(n))
decimal.InvalidOperation: [<class 'decimal.ConversionSyntax'>]
```
### Code
I have Order objects for each type of order (market open, market close, stop loss, take profit etc). This is the code that **opened the short position**:
```py
async def execute(self):
    """Executes the order

    Returns:
        str: response received from Binance
    """
    if self.executed:
        return self.execute_response
    if self.base_currency_amount:
        positionSide = 'SHORT' if self.short else 'LONG'
        self.execute_response = await self.exchange.create_market_sell_order(self.symbol, self.base_currency_amount, params={'positionSide': positionSide})
        self.id = self.execute_response['id']
        self.entered_at_price = self.execute_response['price'] or self.execute_response['info']['price']
        self.price = self.entered_at_price
    await super().execute()
    return self.execute_response
```
And this is the code that **tried to place stop loss order** for the open position:
```py
async def execute(self):
    if self.executed:
        return self.execute_response
    positionSide = 'LONG' if self.side == 'sell' else 'SHORT'
    response = await self.exchange.create_order(self.symbol, 'STOP', self.side, self.base_currency_amount, self.entered_at_price, params={'stopPrice': self.entered_at_price, 'positionSide': positionSide})
    self.execute_response = response
    self.id = self.execute_response['id']
    await super().execute()
    return self.execute_response
```
The `base_currency_amount` variable in both cases equaled `2.8011533166141267`. The exchange is `binanceusdm`.
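For reference, the failing call in the traceback is `decimal.Decimal(str(n))`, and `Decimal('2.8011533166141267')` parses fine — the `ConversionSyntax` only appears when `n` is not numeric (e.g. `None`), which makes me suspect the amount or price is `None` by the time precision is applied. A minimal repro outside ccxt:

```python
import decimal

def to_dec(n):
    # same entry point as ccxt's decimal_to_precision: Decimal(str(n))
    return decimal.Decimal(str(n))

print(to_dec(2.8011533166141267))  # the amount itself parses fine

try:
    to_dec(None)  # str(None) == 'None' -> not a valid decimal literal
except decimal.InvalidOperation as exc:
    print("reproduced:", exc)
```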
| closed | 2025-01-11T16:19:20Z | 2025-01-11T16:35:05Z | https://github.com/ccxt/ccxt/issues/24846 | [] | fam04s | 0 |
iterative/dvc | machine-learning | 9,946 | dvc quick tips | Two items in one - feel free to close, primarily for google and anyone who has been dealing with the same issue.
It would be great if there were a quick-tips page alongside the how-to section of the dvc.org site for community contributions.
The tip I'd add:
If you are building a Docker image with models baked in, using DVC with Google Cloud Build, authentication for cloud storage is documented as requiring
```bash
$ export GOOGLE_APPLICATION_CREDENTIALS='.../project-XXX.json'
```
i.e. a path to a service account key file: https://dvc.org/doc/user-guide/data-management/remote-storage/google-cloud-storage#custom-authentication
This raises the question of how you store a sensitive key for CI.
## Assumptions
* You are using dvc and google storage
* You are using source control and storing your .dvc reference files in source control
* You are using cloud build with a repo cloudbuild.yaml file (can also be done in json)
An approach is to grant the build principal read permission on the bucket you are using for your models / data:
```
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME --member=CLOUD_BUILD_PRINCIPAL_IDENTIFIER --role=roles/storage.objectViewer
```
Substituting CLOUD_BUILD_PRINCIPAL_IDENTIFIER for the cloudbuilder service account, which is generally found on the settings tab of your cloud builds page https://console.cloud.google.com/cloud-build/settings/service-account
And BUCKET_NAME for the bucket you are storing your models / data in
Once that's done, modify the cloudbuild.yaml to add a step using a python image that installs dvc and pulls the models from your source-control references. Adjust `dvc pull models` as required.
```yaml
steps:
- name: python
entrypoint: bash
args: ['-c', 'pip install -U dvc dvc[gs]; dvc pull models;']
id: Model_Pull
.....
```
DVC will authenticate using the metadata server available in the cloud environment and will not require GOOGLE_APPLICATION_CREDENTIALS or service account key files.
All steps in a cloud build run docker images, which mount /workspace/[your code]; any modifications to that file system remain for the next step, e.g. Build.
At that point you will have performed a dvc pull of your models, allowing a Dockerfile with a
```Dockerfile
COPY models /destination
```
to copy your models to the appropriate destination.
An additional tip is that you may need to run a chown or chmod on the destination directory if you are using a non-root user
e.g.
```Dockerfile
RUN chmod -R 644 /models
```
| closed | 2023-09-14T21:01:11Z | 2023-09-14T21:53:02Z | https://github.com/iterative/dvc/issues/9946 | [] | pjaol | 1 |
graphql-python/graphene | graphql | 1,440 | Incorrect query AST when there are duplicated nested fields with different selection sets | When resolving a query with duplicated nested fields with different selection sets, only the first selection set is available to the resolver info AST.
Given the following query:
```gql
query {
  person {
    firstName
  }
  person {
    lastName
  }
}
```
When inspecting the AST available in `info.field_nodes`, only `firstName` will be available.
Expected behavior: the person resolver is called once, and `firstName` and `lastName` are available in the AST. Or, the person resolver gets called twice: first with `firstName` in the AST and the second with `lastName` in the AST.
Current behavior: the person resolver is called once, and only with `firstName`.
This is a problem, because I want to be able to know all the full selection set of a queried field for optimization purposes. This bug is especially puzzling because the data returned is in the correct shape with the merged selection sets, yet it seems like the resolver is only returning incomplete data. Any insight on this issue would be greatly appreciated!
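For anyone skimming: the expected merge comes from the GraphQL spec's *CollectFields* algorithm, which groups fields by response key and keeps every selection set for later execution. A toy, pure-Python sketch of that rule (simplified — not graphql-core's actual code):

```python
def collect_fields(selections):
    """Group duplicated fields by response key, keeping every selection set."""
    grouped = {}
    for field, subselections in selections:
        grouped.setdefault(field, []).append(subselections)
    return grouped

query = [("person", ["firstName"]), ("person", ["lastName"])]
print(collect_fields(query))
# {'person': [['firstName'], ['lastName']]} -> the resolver should see both
```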
Here is a minimal reproducible example: https://gist.github.com/fireteam99/20be417e397a1672380e33b18164ec12.
- Version: 3.1
- Platform: MacOS/Linux
| closed | 2022-08-08T19:47:10Z | 2022-08-29T18:32:24Z | https://github.com/graphql-python/graphene/issues/1440 | [
"🐛 bug"
] | fireteam99 | 7 |
postmanlabs/httpbin | api | 112 | deprecate pypi package | closed | 2013-07-23T14:13:37Z | 2018-04-26T17:51:00Z | https://github.com/postmanlabs/httpbin/issues/112 | [] | kennethreitz | 0 | |
huggingface/datasets | pytorch | 6,729 | Support zipfiles that span multiple disks? | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response
get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files
split_modules = {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp>
split: infer_module_for_data_files_list(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list
return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives
for f in xglob(extracted, recursive=True, download_config=download_config)[
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob
fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem
return cls(**storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
self.zip = zipfile.ZipFile(
File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents
endrec = _EndRecData(fp)
File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData
return _EndRecData64(fpin, -sizeEndCentDir, endrec)
File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64
raise BadZipFile("zipfiles that span multiple disks are not supported")
zipfile.BadZipFile: zipfiles that span multiple disks are not supported
```
The files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are:
<img width="629" alt="Capture d’écran 2024-03-11 à 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
| closed | 2024-03-11T21:07:41Z | 2024-06-26T05:08:59Z | https://github.com/huggingface/datasets/issues/6729 | [
"enhancement",
"question"
] | severo | 6 |
tensorflow/tensor2tensor | machine-learning | 1,702 | AttributeError: module 'tensorflow' has no attribute 'contrib' | ### Description
Error when importing problems.
...
### Environment information
```
OS: Windows10
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.14.0
tensorboard==1.14.0
tensorflow==2.0.0b1
tensorflow-datasets==1.2.0
tensorflow-estimator==1.14.0
tensorflow-gan==1.0.0.dev0
tensorflow-metadata==0.14.0
tensorflow-probability==0.7.0
python -V
Python 3.7.3
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
from tensor2tensor import problems
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Anaconda3\lib\site-packages\tensor2tensor\problems.py", line 22, in <module>
from tensor2tensor.utils import registry
File "C:\Users\Anaconda3\lib\site-packages\tensor2tensor\ut
```
I want to execute the given example program.
```
# Error logs:
...
```
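For context (my understanding, not verified against every release): `tf.contrib` was removed entirely in TensorFlow 2.x, and tensor2tensor 1.14 still imports it, so this AttributeError is expected with `tensorflow==2.0.0b1`; pinning `tensorflow<2` should avoid it. A trivial guard one could run before importing tensor2tensor (hypothetical helper):

```python
def contrib_available(tf_version):
    """tf.contrib only exists in the TensorFlow 1.x line."""
    return int(tf_version.split(".")[0]) < 2

print(contrib_available("2.0.0b1"))  # False -> tensor2tensor 1.14 will break
print(contrib_available("1.14.0"))   # True
```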
| open | 2019-09-16T02:32:18Z | 2019-11-11T10:38:27Z | https://github.com/tensorflow/tensor2tensor/issues/1702 | [] | newmluser | 4 |
piskvorky/gensim | nlp | 2,743 | Word2vec: total loss suspiciously drops with worker count, probably thread-unsafe tallying | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
The word2vec implementation requires a workaround, as detailed in #2735, to correctly report the total loss per epoch. After doing that though, the next issue is that the total loss reported seems to vary depending on the number of workers.
#### Steps/code/corpus to reproduce
This is my code:
```python
class MyLossCalculatorII(CallbackAny2Vec):
    def __init__(self):
        self.epoch = 1
        self.losses = []
        self.cumu_loss = 0.0
        self.previous_epoch_time = time.time()

    def on_epoch_end(self, model):
        loss = model.get_latest_training_loss()
        norms = [linalg.norm(v) for v in model.wv.vectors]
        now = time.time()
        epoch_seconds = now - self.previous_epoch_time
        self.previous_epoch_time = now
        self.cumu_loss += float(loss)
        print(f"Loss after epoch {self.epoch}: {loss} (cumulative loss so far: {self.cumu_loss}) " +
              f"-> epoch took {round(epoch_seconds, 2)} s - vector norms min/avg/max: " +
              f"{round(float(min(norms)), 2)}, {round(float(sum(norms)/len(norms)), 2)}, {round(float(max(norms)), 2)}")
        self.epoch += 1
        self.losses.append(float(loss))
        model.running_training_loss = 0.0
```
```python
def train_and_check(my_sentences, my_epochs, my_workers=8, my_loss_calc_class=MyLossCalculatorII):
    print(f"Building vocab...")
    my_model: Word2Vec = Word2Vec(sg=1, compute_loss=True, workers=my_workers)
    my_model.build_vocab(my_sentences)
    print(f"Vocab done. Training model for {my_epochs} epochs, with {my_workers} workers...")
    loss_calc = my_loss_calc_class()
    trained_word_count, raw_word_count = my_model.train(my_sentences, total_examples=my_model.corpus_count, compute_loss=True,
                                                       epochs=my_epochs, callbacks=[loss_calc])
    loss = loss_calc.losses[-1]
    print(trained_word_count, raw_word_count, loss)
    loss_df = pd.DataFrame({"training loss": loss_calc.losses})
    loss_df.plot(color="blue")
    # print(f"Calculating accuracy...")
    # acc, details = my_model.wv.evaluate_word_analogies(questions_file, case_insensitive=True)
    # print(acc)
    return loss_calc, my_model
```
My data is an in-memory list of sentences of Finnish text, each sentence being a list of strings:
```
[18]: sentences[0]
[18]: ['hän', 'tietää', 'minkälainen', 'tilanne', 'tulla']
```
I'm running the following code:
```python
lc4, model4 = train_and_check(sentences, my_epochs=20, my_workers=4)
lc8, model8 = train_and_check(sentences, my_epochs=20, my_workers=8)
lc16, model16 = train_and_check(sentences, my_epochs=20, my_workers=16)
lc32, model32 = train_and_check(sentences, my_epochs=20, my_workers=32)
```
And the outputs are (last few lines + plot only):
```
# lc4
Loss after epoch 20: 40341580.0 (cumulative loss so far: 830458060.0) -> epoch took 58.15 s - vector norms min/avg/max: 0.02, 3.79, 12.27
589841037 669998240 40341580.0
Wall time: 20min 14s
```

```
# lc8
Loss after epoch 20: 25501282.0 (cumulative loss so far: 521681620.0) -> epoch took 36.6 s - vector norms min/avg/max: 0.02, 3.79, 12.24
589845960 669998240 25501282.0
Wall time: 12min 46s
```

```
# lc16
Loss after epoch 20: 14466763.0 (cumulative loss so far: 295212011.0) -> epoch took 26.25 s - vector norms min/avg/max: 0.02, 3.79, 12.55
589839763 669998240 14466763.0
Wall time: 9min 35s
```

```
# lc32
Loss after epoch 20: 7991086.5 (cumulative loss so far: 161415654.5) -> epoch took 27.5 s - vector norms min/avg/max: 0.02, 3.79, 12.33
589843184 669998240 7991086.5
Wall time: 9min 37s
```

What is going on here? The loss (whether total loss, final-epoch loss or average loss per epoch) varies, although the data is the same and the number of epochs is the same. I would imagine that "1 epoch" means "each data point is considered precisely once", in which case the number of workers should only affect how quickly the training is done and not the loss (the loss would still vary randomly a bit depending on which order the data points are considered etc, but that should be minor). Here though the loss seems to be roughly proportional to 1/n where n = number of workers.
I'm guessing based on the similar shape of the loss progressions and the very similar vector magnitudes that the training is actually fine in all four cases, so hopefully this is just another display bug similar to #2735.
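To make the "thread-unsafe tallying" hypothesis concrete, here is a toy demonstration (my own sketch — not gensim's code) of how an unsynchronized read-modify-write on a shared total loses updates, and tends to lose more of them as the worker count grows:

```python
import threading

class Tally:
    def __init__(self):
        self.total = 0.0

    def add_unsafe(self, n):
        for _ in range(n):
            t = self.total      # read
            t += 1.0            # modify
            self.total = t      # write -- may clobber another thread's update

def run(workers, per_worker=100_000):
    tally = Tally()
    threads = [threading.Thread(target=tally.add_unsafe, args=(per_worker,))
               for _ in range(workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return tally.total

print(run(1))  # exactly 100000.0
print(run(8))  # typically well under 800000.0 -- updates were lost
```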
#### Versions
The output of
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
is
```
Windows-10-10.0.18362-SP0
Python 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 22:01:29) [MSC v.1900 64 bit (AMD64)]
NumPy 1.17.3
SciPy 1.3.1
gensim 3.8.1
FAST_VERSION 1
```
| open | 2020-02-02T20:27:38Z | 2020-02-06T00:20:37Z | https://github.com/piskvorky/gensim/issues/2743 | [
"bug"
] | tsaastam | 1 |
andrew-hossack/dash-tools | plotly | 49 | ⬆️ [Feature Request] ReadTheDocs | Add ReadTheDocs to improve current documentation
-> https://docs.readthedocs.io/en/stable/tutorial/ | closed | 2022-08-10T21:30:50Z | 2022-08-12T22:10:04Z | https://github.com/andrew-hossack/dash-tools/issues/49 | [] | andrew-hossack | 1 |
supabase/supabase-py | fastapi | 420 | Create a client with Auth context of a user | Hi everyone
**Is your feature request related to a problem? Please describe.**
I am trying to write a Python cloud function (instead of a Supabase edge function). I want to get the caller's identity so I can perform database reads/writes in their RLS context.
In JS, this is possible as described in the documentation.
https://supabase.com/docs/guides/functions/auth
```js
import { serve } from 'https://deno.land/std@0.177.0/http/server.ts'
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'
serve(async (req: Request) => {
try {
// Create a Supabase client with the Auth context of the logged in user.
const supabaseClient = createClient(
// Supabase API URL - env var exported by default.
Deno.env.get('SUPABASE_URL') ?? '',
// Supabase API ANON KEY - env var exported by default.
Deno.env.get('SUPABASE_ANON_KEY') ?? '',
// Create client with Auth context of the user that called the function.
// This way your row-level-security (RLS) policies are applied.
{ global: { headers: { Authorization: req.headers.get('Authorization')! } } }
)
```
With the Python client, I couldn't reproduce this behavior. I tried:
```python
supa_client = create_client(
    "https://****.supabase.co",
    "***anon_api_key***",
    ClientOptions().replace(headers={"authorization": "Bearer ***user_session_token***"}),
)
```
I also tried
```python
supa_client = create_client(
    "https://****.supabase.co",
    "***anon_api_key***",
)
supa_client.auth.set_session("***user_session_token***","")
```
None of this works. After studying the code a bit, I think this may be the problem:
https://github.com/supabase-community/supabase-py/blob/2bba842449ccd0b5f933198c343f54c5a67db7ed/supabase/client.py#L61
https://github.com/supabase-community/supabase-py/blob/2bba842449ccd0b5f933198c343f54c5a67db7ed/supabase/client.py#L208
The Authorization token is always overwritten with the anon API key:
```python
options.headers.update(self._get_auth_headers())
```
```python
def _get_auth_headers(self) -> Dict[str, str]:
"""Helper method to get auth headers."""
# What's the corresponding method to get the token
return {
"apiKey": self.supabase_key,
"Authorization": f"Bearer {self.supabase_key}",
}
```
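The overwrite itself is easy to demonstrate with plain dictionaries (a stdlib-only sketch mirroring the update order above; the names are illustrative, not the real client internals):

```python
# Stdlib-only sketch of the overwrite described above.
# The user-supplied headers are merged first, then the client's own auth
# headers are applied on top, so the anon key always wins.
ANON_KEY = "***anon_api_key***"
USER_JWT = "***user_session_token***"

def get_auth_headers(supabase_key):
    # mirrors what Client._get_auth_headers() returns
    return {"apiKey": supabase_key, "Authorization": f"Bearer {supabase_key}"}

headers = {"Authorization": f"Bearer {USER_JWT}"}   # what the caller passed in
headers.update(get_auth_headers(ANON_KEY))          # options.headers.update(...)

print(headers["Authorization"])  # Bearer ***anon_api_key***, the user JWT is gone
```

Because `dict.update` overwrites existing keys, any `Authorization` header supplied through the options is silently replaced.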
**Describe the solution you'd like**
It should be possible to reproduce the JS behavior and create a client with the Auth context of the user that called the function (the logged-in user's JWT).
Am I missing something?
| closed | 2023-04-24T15:26:06Z | 2024-04-17T14:23:09Z | https://github.com/supabase/supabase-py/issues/420 | [
"bug"
] | vlebert | 7 |
SYSTRAN/faster-whisper | deep-learning | 1,151 | Add the possibility of using `return_logits_vocab` from Ctranslate2 | As latest version of Ctranslate2 generate [method](https://opennmt.net/CTranslate2/python/ctranslate2.models.Whisper.html#ctranslate2.models.Whisper.generate) allow passing `return_logits_vocab` to include the log probs in its output.
Would it be possible to expose a parameter to `return_logits_vocab` in `FasterWhisperPipeline.transcribe()` method ? | open | 2024-11-18T10:59:14Z | 2024-11-25T08:15:19Z | https://github.com/SYSTRAN/faster-whisper/issues/1151 | [] | pierrepv8 | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,181 | Synthesizer training speed is not varying with batch size. | I'm following this #437 to fine-tune the synthesizer model. One thing I noticed, training time is not varying with batch size.
With default parameters, batch size = 12.
memory used ~ 3683MiB
```
Found 476 samples
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
| 25k Steps      | 12         | 3e-05         | 2                |
+----------------+------------+---------------+------------------+
{| Epoch: 1/625 (40/40) | Loss: 0.4793 | 0.79 steps/s | Step: 295k | }
```
It took 1352.45 seconds for fine-tuning 1000 steps.
When I increased the batch size to 64, anticipating that the training time would decrease, it took nearly the same time as above for 1000 steps.
batch size = 64
memory used ~ 10435MiB
```
Found 476 samples
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
| 25k Steps      | 64         | 3e-05         | 2                |
+----------------+------------+---------------+------------------+
{| Epoch: 1/3125 (8/8) | Loss: 0.4669 | 0.70 steps/s | Step: 295k | }
```
Even though I can see that GPU memory consumption has increased for batch size = 64, this is not reflected in the training time.
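As a sanity check on the numbers above (rough arithmetic only), the per-step speed barely changed between the two runs, so the wall time for a fixed number of steps cannot drop; what does change is throughput in samples/s:

```python
import math

samples = 476
for batch, steps_per_s in [(12, 0.79), (64, 0.70)]:
    steps_per_epoch = math.ceil(samples / batch)   # matches the 40/40 and 8/8 above
    sec_per_1000_steps = 1000 / steps_per_s        # what a fixed-step schedule costs
    samples_per_s = batch * steps_per_s            # actual data throughput
    print(batch, steps_per_epoch, round(sec_per_1000_steps), round(samples_per_s, 1))
```

This gives 40 steps/epoch and ~1266 s per 1000 steps at batch 12, versus 8 steps/epoch and ~1429 s at batch 64: throughput jumps from ~9.5 to ~44.8 samples/s even though the fixed-step wall time is similar. If per-step time stays flat while the batch grows, the run may be input-bound rather than compute-bound, so a larger batch raises samples/s without shortening a schedule measured in steps.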
| open | 2023-03-30T13:49:46Z | 2023-03-31T14:50:47Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1181 | [] | sanal-176 | 1 |
plotly/dash-table | plotly | 322 | Add LICENCE to MANIFEST.in | Please add an appropriate license entry to the MANIFEST.in file. | closed | 2019-01-02T14:02:18Z | 2019-02-13T17:08:50Z | https://github.com/plotly/dash-table/issues/322 | [
"dash-type-maintenance"
] | dkucharc | 1 |
microsoft/nni | machine-learning | 5,610 | ImportError: Cannot use a path to identify something from __main__. | **Describe the issue**:
Hi,
I was able to run the demo scripts. Now I am trying my own architecture, and I am running into this error while running the `experiment.run` command:
"ImportError: Cannot use a path to identify something from __main__.
During handling of the above exception, another exception occurred:
.
.
.
TypeError: cannot pickle '_io.BufferedReader' object."
**Full Log message**:
```
ImportError                               Traceback (most recent call last)
File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/common/serializer.py:791, in get_hybrid_cls_or_func_name(cls_or_func, pickle_size_limit)
    790 try:
--> 791     name = _get_cls_or_func_name(cls_or_func)
    792     # import success, use a path format

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/common/serializer.py:770, in _get_cls_or_func_name(cls_or_func)
    769 if module_name == '__main__':
--> 770     raise ImportError('Cannot use a path to identify something from __main__.')
    771 full_name = module_name + '.' + cls_or_func.__name__

ImportError: Cannot use a path to identify something from __main__.

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
Cell In[11], line 1
----> 1 exp.run(exp_config, 8081)

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/nas/experiment/pytorch.py:298, in RetiariiExperiment.run(self, config, port, debug)
    291 if self._action == 'create':
    292     base_model_ir, self.applied_mutators = preprocess_model(
    293         self.base_model, self.evaluator, self.applied_mutators,
    294         full_ir=not isinstance(canoni_conf.execution_engine, (PyEngineConfig, BenchmarkEngineConfig)),
    295         dummy_input=canoni_conf.execution_engine.dummy_input
    296         if isinstance(canoni_conf.execution_engine, (BaseEngineConfig, CgoEngineConfig)) else None
    297     )
--> 298 self._save_experiment_checkpoint(base_model_ir, self.applied_mutators, self.strategy,
    299                                  canoni_conf.experiment_working_directory)
    300 elif self._action == 'resume':
    301     base_model_ir, self.applied_mutators, self.strategy = self._load_experiment_checkpoint(
    302         canoni_conf.experiment_working_directory)

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/nas/experiment/pytorch.py:226, in RetiariiExperiment._save_experiment_checkpoint(self, base_model_ir, applied_mutators, strategy, exp_work_dir)
    224 ckp_path = os.path.join(exp_work_dir, self.id, 'checkpoint')
    225 with open(os.path.join(ckp_path, 'nas_model'), 'w') as fp:
--> 226     dump(base_model_ir._dump(), fp, pickle_size_limit=int(os.getenv('PICKLE_SIZE_LIMIT', 64 * 1024)))
    227 with open(os.path.join(ckp_path, 'applied_mutators'), 'w') as fp:
    228     dump(applied_mutators, fp)

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/common/serializer.py:341, in dump(obj, fp, use_trace, pickle_size_limit, allow_nan, **json_tricks_kwargs)
    339 if json_tricks_kwargs.get('compression') is not None:
    340     raise ValueError('If you meant to compress the dumped payload, please use `dump_bytes`.')
--> 341 result = _dump(
    342     obj=obj,
    343     fp=fp,
    344     use_trace=use_trace,
    345     pickle_size_limit=pickle_size_limit,
    346     allow_nan=allow_nan,
    347     **json_tricks_kwargs)
    348 return cast(str, result)

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/common/serializer.py:390, in _dump(obj, fp, use_trace, pickle_size_limit, allow_nan, **json_tricks_kwargs)
    387 json_tricks_kwargs['allow_nan'] = allow_nan
    389 if fp is not None:
--> 390     return json_tricks.dump(obj, fp, obj_encoders=encoders, **json_tricks_kwargs)
    391 else:
    392     return json_tricks.dumps(obj, obj_encoders=encoders, **json_tricks_kwargs)

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/json_tricks/nonp.py:151, in dump(obj, fp, sort_keys, cls, obj_encoders, extra_obj_encoders, primitives, compression, force_flush, allow_nan, conv_str_byte, fallback_encoders, properties, **jsonkwargs)
    149 if (isinstance(obj, str_type) or hasattr(obj, 'write')) and isinstance(fp, (list, dict)):
    150     raise ValueError('json-tricks dump arguments are in the wrong order: provide the data to be serialized before file handle')
--> 151 txt = dumps(obj, sort_keys=sort_keys, cls=cls, obj_encoders=obj_encoders, extra_obj_encoders=extra_obj_encoders,
    152     primitives=primitives, compression=compression, allow_nan=allow_nan, conv_str_byte=conv_str_byte,
    153     fallback_encoders=fallback_encoders, properties=properties, **jsonkwargs)
    154 if isinstance(fp, str_type):
    155     if compression:

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/json_tricks/nonp.py:125, in dumps(obj, sort_keys, cls, obj_encoders, extra_obj_encoders, primitives, compression, allow_nan, conv_str_byte, fallback_encoders, properties, **jsonkwargs)
    121 cls = TricksEncoder
    122 combined_encoder = cls(sort_keys=sort_keys, obj_encoders=encoders, allow_nan=allow_nan,
    123     primitives=primitives, fallback_encoders=fallback_encoders,
    124     properties=properties, **jsonkwargs)
--> 125 txt = combined_encoder.encode(obj)
    126 if not is_py3 and isinstance(txt, str):
    127     txt = unicode(txt, ENCODING)

File ~/anaconda3/envs/tpot/lib/python3.10/json/encoder.py:199, in JSONEncoder.encode(self, o)
    195 return encode_basestring(o)
    196 # This doesn't pass the iterator directly to ''.join() because the
    197 # exceptions aren't as detailed.  The list call should be roughly
    198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
    200 if not isinstance(chunks, (list, tuple)):
    201     chunks = list(chunks)

File ~/anaconda3/envs/tpot/lib/python3.10/json/encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot)
    252 else:
    253     _iterencode = _make_iterencode(
    254         markers, self.default, _encoder, self.indent, floatstr,
    255         self.key_separator, self.item_separator, self.sort_keys,
    256         self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/json_tricks/encoders.py:77, in TricksEncoder.default(self, obj, *args, **kwargs)
     75 prev_id = id(obj)
     76 for encoder in self.obj_encoders:
---> 77     obj = encoder(obj, primitives=self.primitives, is_changed=id(obj) != prev_id, properties=self.properties)
     78 if id(obj) == prev_id:
     79     raise TypeError(('Object of type {0:} could not be encoded by {1:} using encoders [{2:s}]. '
     80         'You can add an encoders for this type using `extra_obj_encoders`. If you want to \'skip\' this '
     81         'object, consider using `fallback_encoders` like `str` or `lambda o: None`.').format(
     82         type(obj), self.__class__.__name__, ', '.join(str(encoder) for encoder in self.obj_encoders)))

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/json_tricks/utils.py:66, in filtered_wrapper.<locals>.wrapper(*args, **kwargs)
     65 def wrapper(*args, **kwargs):
---> 66     return encoder(*args, **{k: v for k, v in kwargs.items() if k in names})

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/common/serializer.py:818, in _json_tricks_func_or_cls_encode(cls_or_func, primitives, pickle_size_limit)
    813 if not isinstance(cls_or_func, type) and not _is_function(cls_or_func):
    814     # not a function or class, continue
    815     return cls_or_func
    817 return {
--> 818     '__nni_type__': get_hybrid_cls_or_func_name(cls_or_func, pickle_size_limit)
    819 }

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/nni/common/serializer.py:795, in get_hybrid_cls_or_func_name(cls_or_func, pickle_size_limit)
    793     return 'path:' + name
    794 except (ImportError, AttributeError):
--> 795     b = cloudpickle.dumps(cls_or_func)
    796     if len(b) > pickle_size_limit:
    797         raise ValueError(f'Pickle too large when trying to dump {cls_or_func}. '
    798             'Please try to raise pickle_size_limit if you insist.')

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py:73, in dumps(obj, protocol, buffer_callback)
     69 with io.BytesIO() as file:
     70     cp = CloudPickler(
     71         file, protocol=protocol, buffer_callback=buffer_callback
     72     )
---> 73     cp.dump(obj)
     74     return file.getvalue()

File ~/anaconda3/envs/tpot/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py:632, in CloudPickler.dump(self, obj)
    630 def dump(self, obj):
    631     try:
--> 632         return Pickler.dump(self, obj)
    633     except RuntimeError as e:
    634         if "recursion" in e.args[0]:

TypeError: cannot pickle '_io.BufferedReader' object
```
`Log screenshot:`

.
.
.

Any ideas on what might be the problem? Thanks.
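If it helps, both halves of the failure can be reproduced in isolation (a minimal stdlib sketch; `ModelSpace` is just a stand-in for whatever is defined in the executed script, not my real model):

```python
import io
import pickle

class ModelSpace:   # stand-in for a class defined in the script you run
    pass

# NNI first tries to identify the class by import path; anything defined in
# the executed script reports __module__ == '__main__' and cannot be
# re-imported by path, which triggers the fallback to cloudpickle.
print(ModelSpace.__module__)   # '__main__' when this file is run as a script

# cloudpickle then fails if the object (or something it references) holds an
# unpicklable handle, such as an open file.
handle = io.BufferedReader(io.BytesIO(b""))
try:
    pickle.dumps(handle)
except TypeError as e:
    print(e)   # cannot pickle '_io.BufferedReader' object
```

A commonly suggested direction (not verified here) is to move the model space and evaluator definitions out of the top-level script into an importable module, and to check whether anything reachable from the model holds an open file handle.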
| closed | 2023-06-15T17:56:07Z | 2023-07-07T23:24:30Z | https://github.com/microsoft/nni/issues/5610 | [] | ekurtgl | 17 |
pyg-team/pytorch_geometric | pytorch | 9,653 | RuntimeError when using CaptumExplainer after GNNExplainer | ### 🐛 Describe the bug
Trying to run CaptumExplainer after using GNNExplainer throws a RuntimeError. However, running CaptumExplainer _before_ running GNNExplainer does not. (A similar thing happens with GraphMaskExplainer as well.) The expected result is that both GNNExplainer and CaptumExplainer successfully return explanations regardless of the order in which they are called.
Below is the MWE:
```python
from torch_geometric.nn import GCN
from torch_geometric.explain import Explainer, GNNExplainer, CaptumExplainer
from torch_geometric.datasets import FakeDataset
dataset = FakeDataset()
data = dataset[0]
model = GCN(64, 16, 2, 1)
gnnexplainer = Explainer(
model=model,
algorithm=GNNExplainer(),
explanation_type='model',
edge_mask_type='object',
model_config=dict(
mode='multiclass_classification',
task_level='node',
return_type='raw',
)
)
captumexplainer = Explainer(
model=model,
algorithm=CaptumExplainer('IntegratedGradients'),
explanation_type='model',
edge_mask_type='object',
model_config=dict(
mode='multiclass_classification',
task_level='node',
return_type='raw',
)
)
gnnexplainer(data.x, data.edge_index, index=0)
captumexplainer(data.x, data.edge_index, index=0)
```
Here is the full traceback:
```
Traceback (most recent call last):
File "c:\Users\jesse\Documents\gnn-project\bug_mwe.py", line 32, in <module>
captumexplainer(data.x, data.edge_index, index=0)
File "C:\Users\jesse\miniconda3\Lib\site-packages\torch_geometric\explain\explainer.py", line 205, in __call__
explanation = self.algorithm(
^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\torch_geometric\explain\algorithm\captum_explainer.py", line 170, in forward
attributions = self.attribution_method_instance.attribute(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\captum\log\__init__.py", line 42, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\captum\attr\_core\integrated_gradients.py", line 274, in attribute
attributions = _batch_attribution(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\captum\attr\_utils\batching.py", line 78, in _batch_attribution
current_attr = attr_method._attribute(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\captum\attr\_core\integrated_gradients.py", line 351, in _attribute
grads = self.gradient_func(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\captum\_utils\gradient.py", line 119, in compute_gradients
grads = torch.autograd.grad(torch.unbind(outputs), inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jesse\miniconda3\Lib\site-packages\torch\autograd\__init__.py", line 411, in grad
result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```
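Purely as an illustration of the suspected failure mode (this is a toy sketch, not PyG code): if the first explainer attaches state to the shared model and does not fully remove it, the second explainer sees a different autograd graph than it would on a fresh model, which makes the result order-dependent:

```python
class ToyModel:
    def __init__(self):
        self.attached_masks = []

def gnn_explain(model):
    model.attached_masks.append("edge_mask")   # mutates the shared model...
    # ...and (hypothetically) fails to clean up afterwards

def captum_explain(model):
    if model.attached_masks:                   # leftover state breaks the run
        raise RuntimeError("One of the differentiated Tensors appears to "
                           "not have been used in the graph.")
    return "explanation"

model = ToyModel()
gnn_explain(model)
try:
    captum_explain(model)
except RuntimeError as e:
    print("order-dependent failure reproduced:", e)
```

On a fresh `ToyModel`, `captum_explain` succeeds, which mirrors the observed order dependence.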
### Versions
```
Collecting environment information...
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:42:21) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 528.97
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=4001
DeviceID=CPU0
Family=107
L2CacheSize=8192
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=4001
Name=AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
ProcessorType=3
Revision=29697
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torch_geometric==2.5.2
[pip3] torchaudio==2.2.2
[pip3] torchvision==0.17.2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl_fft 1.3.8 py312h2bbff1b_0
[conda] mkl_random 1.2.4 py312h59b6b97_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] pyg 2.5.2 py312_torch_2.2.0_cu121 pyg
[conda] pytorch 2.2.2 py3.12_cuda12.1_cudnn8_0 pytorch
[conda] pytorch-cuda 12.1 hde6ce7c_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi
``` | open | 2024-09-11T00:19:26Z | 2024-09-11T00:19:26Z | https://github.com/pyg-team/pytorch_geometric/issues/9653 | [
"bug"
] | he-jesse | 0 |
zappa/Zappa | flask | 831 | [Migrated] Slow import of zappa.asynchronous due to session.client() calls | Originally from: https://github.com/Miserlou/Zappa/issues/2072 by [schuyler1d](https://github.com/schuyler1d)
## Context
Merely importing `from zappa.asynchronous import run` takes several seconds due to the `aws_session.client('sns')` etc. calls at the top of the module. It's a worthy goal to keep these client sessions global/in-memory, precisely because of this delay; however, we should only pay that cost if/when we actually need those sessions.
## Expected Behavior
`from zappa.asynchronous import run` should be fast/immediate
## Actual Behavior
It takes several seconds for the session to 'initialize'
## Possible Fix
I propose we memoize the sessions instead.
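A minimal sketch of that memoization (stdlib only; in the real code the cached call would wrap `boto3.Session().client(...)`, which is the expensive part):

```python
from functools import lru_cache

created = []   # records how many times the expensive call actually runs

@lru_cache(maxsize=None)
def aws_client(service):
    created.append(service)   # stands in for boto3.Session().client(service)
    return f"client:{service}"

aws_client("sns")
aws_client("sns")             # cached: no second expensive call
aws_client("lambda")
print(created)                # ['sns', 'lambda']
```

With this shape, importing the module costs nothing; the first use of each service pays the setup exactly once, and the session is still held in memory afterwards.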
## Steps to Reproduce
## Your Environment
* Zappa version used: 0.51.0
* Operating System and Python version: 3.6 (ubuntu/linux)
* The output of `pip freeze`:
argcomplete==1.11.1
asgiref==3.2.3
boto3==1.12.4
botocore==1.15.4
certifi==2019.11.28
cfn-flip==1.2.2
chardet==3.0.4
Click==7.0
coverage==5.0.3
coveralls==1.11.1
Django==3.0.3
docopt==0.6.2
docutils==0.15.2
durationpy==0.5
entrypoints==0.3
flake8==3.7.9
Flask==1.1.1
future==0.18.2
hjson==3.0.1
idna==2.9
importlib-metadata==1.5.0
itsdangerous==1.1.0
Jinja2==2.11.1
jmespath==0.9.4
kappa==0.6.0
MarkupSafe==1.1.1
mccabe==0.6.1
mock==4.0.1
nose==1.3.7
nose-timer==0.7.5
pip-tools==4.5.0
placebo==0.9.0
pycodestyle==2.5.0
pyflakes==2.1.1
python-dateutil==2.6.1
python-slugify==4.0.0
pytz==2019.3
PyYAML==5.3
requests==2.23.0
s3transfer==0.3.3
six==1.14.0
sqlparse==0.3.0
text-unidecode==1.3
toml==0.10.0
tqdm==4.43.0
troposphere==2.5.3
urllib3==1.25.8
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zipp==3.0.0
* Link to your project (optional):
* Your `zappa_settings.json`:
| closed | 2021-02-20T12:52:13Z | 2022-08-18T01:57:38Z | https://github.com/zappa/Zappa/issues/831 | [
"duplicate"
] | jneves | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 768 | How to run test in the aligned mode? | I try to train and test the CycleGAN in the aligned mode. All works fine except of finding resuts of test. A put test images to the dataset\test folder, run test.py, it wrote processing (0000)-th image..., processing (0005)-th image... and exit. And nothing was happend, no any error or exception, and no result images anywhere. | closed | 2019-09-15T21:43:51Z | 2019-09-18T04:19:40Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/768 | [] | Makhaon | 2 |
opengeos/leafmap | jupyter | 144 | Add labels to the map | Add labels to the map based on pandas DataFarme, GeoPandas GeoDataFrame, GeoJSON, etc.
Reference: https://github.com/giswqs/geemap/issues/815 | closed | 2021-12-16T15:43:55Z | 2021-12-24T02:14:19Z | https://github.com/opengeos/leafmap/issues/144 | [
"Feature Request"
] | giswqs | 1 |
huggingface/pytorch-image-models | pytorch | 1,959 | [FEATURE] Is it possible for adding hparams to model.default_cfg? | **Is your feature request related to a problem? Please describe.**
When I look at a model such as https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384, the model card says it was fine-tuned on ImageNet-1k in timm by Ross Wightman. Though it links to some more details on pretraining, the hparams for this fine-tuning process are hard to find.
**Describe the solution you'd like**
Maybe we could add the hparams in model.finetune_cfg to provide more useful information?
**Describe alternatives you've considered**
or maybe the args.yaml file can be provided or linked to the model card?
**Additional context**
Thank you very much! I found some ConvNeXt hparams at https://gist.github.com/rwightman/ee0b02c1e99a0761264d1d1319e95e5b,
but only for nano and atto. I'm not sure whether they are still strong hparams for fine-tuning large models. Should I start my sweep from these much smaller models' hparams?
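While waiting, one heuristic I'm thinking of using to adapt the nano/atto hparams to a bigger run is the linear learning-rate scaling rule (just a rule of thumb, not something taken from the timm recipes):

```python
def scale_lr(base_lr, base_batch, new_batch):
    # linear scaling heuristic: lr grows/shrinks with the global batch size
    return base_lr * new_batch / base_batch

# e.g. an hparam set tuned at batch 2048 (placeholder values, not real timm
# recipe numbers), adapted to a batch-512 fine-tune of a larger model:
print(scale_lr(8e-3, 2048, 512))   # 0.002
```

I'd still sweep around the scaled value rather than trust it directly, since fine-tuning larger models often wants lower learning rates than the rule suggests.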
"enhancement"
] | luckyhug | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 1,514 | [question] Extending the User model and further classes at the same time | ### class diagram
https://gist.githubusercontent.com/atlasloewenherz/c3915d826c0f3fc67070be60cc0dc240/raw/5b1773163a78b3130169854ebb197c8d08b7e519/diamond.png
### Extending the User model and Joined Table Inheritance
Hi everyone,
I have the following models:
Both the User and the Supplier inherit from Party (as in contract party). A contract is a composition of two or more Parties, traditionally a Supplier and a User.
Even though the User and Supplier both inherit from the Party class, they still have different attributes and behavior.
Instead of duplicating the User model (my own application's User <- Party plus the F.A.B User), I'm trying to extend the F.A.B User to "be" both a Party and remain the F.A.B User.
Here is how I proceed:
Helper class
------------
```python
class PartyTypes(enum.Enum):
PARTY = 'party'
USER = 'user'
SUPPLIER = 'supplier'
```
My Application base model using F.A.B Base model
---------------------------------------------------
```python
class PrimaryKeyIdMixin(AuditMixin): # uses the F.A.B base class
__abstract__ = True
id = Column(Integer, primary_key=True, autoincrement=True)
#...
```
Address model
---------------
#### every party have one or many addresses
```python
class Address(PrimaryKeyIdMixin): # uses the F.A.B base class
__tablename__ = 'address'
number = Column(String(5), nullable=True)
city = Column(String(255), nullable=True)
street = Column(String(255), nullable=True)
state = Column(String(255), nullable=True)
zip = Column(String(5), nullable=True)
country = Column(String(255), nullable=True)
party_id = Column(Integer, ForeignKey('party.id'))
party = relationship("Party", back_populates="addresses")
```
Contract model
----------------
#### Contract has two or Many Parties
```python
class Contract(AppBuilderModel, AuditMixin):
__tablename__ = 'contract'
parties = relationship("Party", secondary=association_table)
```
The Party model
----------------
```python
association_table = Table('contract_party_association', metadata,
Column('contract_id', Integer, ForeignKey('contract.id')),
Column('party_id', Integer, ForeignKey('party.id'))
)
class Party(AppBuilderModel, AuditMixin):
__tablename__ = 'party'
id = Column(Integer, primary_key=True, autoincrement=True)
type = Column(String(20))
# inheritance
__mapper_args__ = {
'polymorphic_identity': PartyTypes.PARTY.value,
'polymorphic_on': type
}
def addresses(cls):
return relationship("Address", back_populates="party")
```
#### I extended the User as the following
```python
from flask_appbuilder.security.sqla.models import User
class AppUser(User,Party):
__tablename__ = 'ab_user'
dob = Column(DateTime)
phone = Column(PhoneNumberType)
@declared_attr
def addresses(cls):
return relationship("Address", back_populates="party")
```
Error message
---------------
```python
raise exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: Class <class 'memoris.sec_models.AppUser'> has multiple mapped bases: [<class 'flask_appbuilder.security.sqla.models.User'>, <class 'memoris.models.Party'>]
```
# Environment
Flask-Appbuilder version: **Flask-AppBuilder 3.1.1**
pip freeze output:
```
apispec==3.3.2
attrs==20.2.0
Babel==2.8.0
bcrypt==3.2.0
cffi==1.14.3
click==7.1.2
colorama==0.4.4
defusedxml==0.6.0
dnspython==2.0.0
email-validator==1.1.1
Flask==1.1.2
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-BabelPkg==0.9.6
Flask-Bcrypt==0.7.1
Flask-Cors==3.0.9
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
Flask-WTF==0.14.3
healthcheck==1.3.3
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jsonschema==3.2.0
MarkupSafe==1.1.1
marshmallow==3.8.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
phonenumbers==8.12.12
prison==0.1.3
pycparser==2.20
PyJWT==1.7.1
pyrsistent==0.17.3
python-dateutil==2.8.1
python3-openid==3.2.0
pytz==2020.1
PyYAML==5.3.1
six==1.15.0
speaklater==1.3
SQLAlchemy==1.3.20
SQLAlchemy-Utils==0.36.8
Werkzeug==1.0.1
WTForms==2.3.3
```
### This is the full stack trace
```python
/Users/yelassad/projects/buildr/buildr-venv/bin/python -m run
SECRET KEY ENV VAR NOT SET! SHOULD NOT SEE IN PRODUCTION
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/yelassad/projects/buildr/memoris/run.py", line 1, in <module>
from memoris import app
File "/Users/yelassad/projects/buildr/memoris/memoris/__init__.py", line 20, in <module>
from memoris.sec import MySecurityManager
File "/Users/yelassad/projects/buildr/memoris/memoris/sec.py", line 3, in <module>
from .sec_models import AppUser
File "/Users/yelassad/projects/buildr/memoris/memoris/sec_models.py", line 8, in <module>
class AppUser(User, Party):
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/flask_sqlalchemy/model.py", line 67, in __init__
super(NameMetaMixin, cls).__init__(name, bases, d)
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/flask_sqlalchemy/model.py", line 121, in __init__
super(BindMetaMixin, cls).__init__(name, bases, d)
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/sqlalchemy/ext/declarative/api.py", line 76, in __init__
_as_declarative(cls, classname, cls.__dict__)
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/sqlalchemy/ext/declarative/base.py", line 131, in _as_declarative
_MapperConfig.setup_mapping(cls, classname, dict_)
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/sqlalchemy/ext/declarative/base.py", line 160, in setup_mapping
cfg_cls(cls_, classname, dict_)
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/sqlalchemy/ext/declarative/base.py", line 192, in __init__
self._setup_inheritance()
File "/Users/yelassad/projects/buildr/buildr-venv/lib/python3.9/site-packages/sqlalchemy/ext/declarative/base.py", line 573, in _setup_inheritance
raise exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: Class <class 'memoris.sec_models.AppUser'> has multiple mapped bases: [<class 'flask_appbuilder.security.sqla.models.User'>, <class 'memoris.models.Party'>]
```
Any help or guidance will be appreciated!
| closed | 2020-11-09T23:57:21Z | 2021-07-09T12:34:33Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1514 | [
"stale"
] | atlasloewenherz | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,099 | how to offline render RGB image | After training I got a 3DGS file in PLY format. Given the camera's height, width, and intrinsics (or FOV) together with its R and t, how can I render an RGB image, online or offline, from the PLY file and the camera info? | closed | 2024-12-07T17:03:47Z | 2024-12-13T05:38:32Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1099 | [] | kaixin-bai | 2 |
sinaptik-ai/pandas-ai | data-visualization | 807 | Error 404: Resource Not Found | ### System Info
platform: windows
python version: 3.11.5
pandasai version: 1.5.5
### 🐛 Describe the bug
I am getting a Resource Not Found error, although the same key and URL work fine with the OpenAI API directly. Can you please suggest possible reasons for the error? Below is my code:
```
import pandas as pd
import os
from pandasai import SmartDataframe
from pandasai.llm import AzureOpenAI

llm = AzureOpenAI(
    api_token="",
    api_base="",
    api_version="2022-12-01",
    deployment_name="my_model",
    is_chat_model=True
)

df = SmartDataframe(pd.read_csv('incident_event_log.csv'), config={"llm": llm})
```
The code works fine up to this point, but whenever I try to execute the line below, I get the Resource Not Found error:
`df.chat('How many incidents are active?')`
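A guess worth checking, not something confirmed in this report: with `api_version="2022-12-01"` and `is_chat_model=True`, the client would call Azure's chat-completions endpoint, which to my knowledge only exists from api-version `2023-03-15-preview` onward, so the service answers 404. A sketch of the URL shape being hit (the resource name and deployment below are placeholders):

```python
# Hypothetical values; substitute your own Azure resource and deployment.
api_base = "https://my-resource.openai.azure.com"
deployment = "my_model"

def chat_url(api_version: str) -> str:
    # Shape of the Azure OpenAI chat-completions REST endpoint:
    return (f"{api_base}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

print(chat_url("2022-12-01"))  # this api-version predates chat completions
print(chat_url("2023-05-15"))  # a version that does support chat completions
```

If the first URL 404s in a plain REST call while the second works, the fix is to bump `api_version`.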
| closed | 2023-12-08T08:49:39Z | 2024-01-05T11:26:37Z | https://github.com/sinaptik-ai/pandas-ai/issues/807 | [] | adeepmalmotra | 9 |
s3rius/FastAPI-template | graphql | 231 | Add grpc support | closed | 2025-02-06T12:12:00Z | 2025-02-17T22:49:07Z | https://github.com/s3rius/FastAPI-template/issues/231 | [] | FedorArbuzov | 1 | |
jacobgil/pytorch-grad-cam | computer-vision | 470 | Can I use grad-cam for video classification? | Hi Jacob,
I am trying to visualize attention maps for video data. I am using the ViViT model (variant 2) and my inputs have the shape [B x T x C x H x W]. I tried grad-cam but got the error: axis 2 is out of bounds for array of dimension 0.
Error occurs in grad_cam.py while returning the np.mean(grads, axis=(2,3)) in the grad_cam_weights function.
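Not from the library itself, just a hedged sketch of how the channel-weight computation might be adapted for 5-D video activations (the [B, T, C, H, W] layout and the axis choice are assumptions):

```python
import numpy as np

# 2D Grad-CAM averages gradients over the spatial axes (2, 3) of a
# [B, C, H, W] tensor. For video activations shaped [B, T, C, H, W],
# averaging over time and space (axes 1, 3, 4) still yields one
# weight per channel:
grads = np.random.rand(2, 8, 64, 7, 7)    # B, T, C, H, W
weights = np.mean(grads, axis=(1, 3, 4))  # -> shape (B, C) == (2, 64)
print(weights.shape)
```

The same idea would need to be mirrored wherever the CAM is reduced back to a 2-D (or 3-D) heatmap.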
I am curious if it is possible in any way to use grad-cam for video data.
Thanks in advance | open | 2023-12-14T14:22:04Z | 2023-12-14T14:22:04Z | https://github.com/jacobgil/pytorch-grad-cam/issues/470 | [] | purentap | 0 |
donnemartin/data-science-ipython-notebooks | scikit-learn | 88 | Data Science | open | 2022-07-01T22:25:03Z | 2023-03-16T10:41:22Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/88 | [
"needs-review"
] | SibiyaS | 1 | |
ultralytics/ultralytics | pytorch | 19,197 | labels.cache permission error | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
While training on a custom dataset, I encountered the following error:

```
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:
'...\\temp\\labels.cache.npy' -> '...\\temp\\labels.cache'
```
After tracing the code, I found that the issue originates from the save_dataset_cache_file() function in ultralytics\data\utils.py, specifically in the following lines:
```python
np.save(str(path), x)  # Save cache for next time
path.with_suffix(".cache.npy").rename(path)  # Remove .npy suffix
```
The problem occurs because np.save() does not write the file immediately, causing the rename operation to fail due to the file being locked.
To resolve this, I modified the code as follows:
```python
with open(str(path), "wb") as outf:
    np.save(outf, x)
```
After making this change, the error no longer occurs.
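For readers hitting the same WinError 32, here is a minimal self-contained sketch of the fixed save-then-rename pattern (the function name mirrors `save_dataset_cache_file`, but this is a standalone illustration, not the library's code):

```python
import tempfile
from pathlib import Path

import numpy as np

def save_dataset_cache_file(path: Path, x: dict) -> None:
    # Writing through an explicit handle guarantees the OS releases the
    # file before the rename; np.save(str(path), ...) can keep the handle
    # open long enough on Windows to trigger WinError 32 on the rename.
    tmp = path.with_suffix(".cache.npy")
    with open(tmp, "wb") as f:
        np.save(f, x)
    tmp.rename(path)  # drop the .npy suffix, as the library does

cache = Path(tempfile.mkdtemp()) / "labels.cache"
save_dataset_cache_file(cache, {"version": 1})
print(cache.exists())  # True
```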
### Environment
OS: Windows10
python: 3.8.10
Ultralytics: 8.3.54
### Minimal Reproducible Example
```python
model = YOLO(pretrain_model_path, task='detect')
model.train(
    data=data_yaml,
    augment=True,
    imgsz=imgsz,
    epochs=epochs,
    workers=0,
    batch=batch_size,
    cfg=cfg_yaml,
    plots=False,
    project='new',
    name=project_name,
    exist_ok=True
)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-12T07:14:04Z | 2025-02-13T04:39:10Z | https://github.com/ultralytics/ultralytics/issues/19197 | [
"bug",
"fixed",
"detect"
] | eric80739 | 4 |
coqui-ai/TTS | deep-learning | 2,736 | [Feature request] Multilingual YourTTS checkpoint: Dutch, French, German, Italian, Portuguese, Polish, Spanish, and English | <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
Adding a YourTTS checkpoint in the languages: Dutch, French, German, Italian, Portuguese, Polish, Spanish, and English
**Solution**
I have added a new checkpoint for the YourTTS model, which has been trained in multiple languages, including Dutch, French, German, Italian, Portuguese, Polish, Spanish, and English.
This checkpoint corresponds to the work available in the paper: [CML-TTS A Multilingual Dataset for Speech Synthesis in Low-Resource Languages](https://arxiv.org/abs/2306.10097). The model was trained using the CML-TTS dataset and the LibriTTS dataset in English. The checkpoint can be downloaded from:
[https://drive.google.com/u/2/uc?id=1yDCSJ1pFZQTHhL09GMbOrdjcPULApa0p](https://drive.google.com/u/2/uc?id=1yDCSJ1pFZQTHhL09GMbOrdjcPULApa0p)
I would also like to inform you that samples generated using this checkpoint can be verified by accessing the following link: [https://freds0.github.io/CML-TTS-Dataset/](https://freds0.github.io/CML-TTS-Dataset/)
| closed | 2023-07-02T22:12:46Z | 2024-12-22T19:11:39Z | https://github.com/coqui-ai/TTS/issues/2736 | [
"feature request"
] | freds0 | 8 |
apify/crawlee-python | web-scraping | 1,052 | API docs do not render defaults | - `BeautifulSoupCrawler` as an example:

There is a default for the `parser`, but not for the other arguments.
| open | 2025-03-05T10:06:48Z | 2025-03-06T09:23:29Z | https://github.com/apify/crawlee-python/issues/1052 | [
"documentation",
"t-tooling"
] | vdusek | 0 |
JoeanAmier/XHS-Downloader | api | 235 | [Malfunction] Download failure: Chrome flags the file as a virus and blocks the download; after continuing the download, Windows also reports a virus | **Problem Description**
A clear and concise description of what the bug is.
**Steps to Reproduce**
Steps to reproduce the behavior:
1. ...
2. ...
3. ...
**Expected Behavior**
A clear and concise description of what you expected to happen.
**Additional Context**
Add any other contextual information about the issue here, such as operating system, runtime mode, configuration files, error screenshots, runtime logs, etc.
Please note: When providing configuration files, please delete cookie content to avoid sensitive data leakage!
| open | 2025-03-24T06:34:37Z | 2025-03-24T12:10:40Z | https://github.com/JoeanAmier/XHS-Downloader/issues/235 | [
"不会处理(wontfix)"
] | gitkukara | 1 |
oegedijk/explainerdashboard | plotly | 250 | explainerdashboard 'what if' searchbox for index dropdown/selection isn't working | ### Discussed in https://github.com/oegedijk/explainerdashboard/discussions/249
<div type='discussions-op-text'>
<sup>Originally posted by **AkshayRShiraguppi** January 11, 2023</sup>
I am not able to type and search for a specific value in the search box in the 'what if' and 'Individual prediction' tabs to choose an index. (When I type a specific number, it just shows wrong values in the dropdown and clears what I typed.) (The Random button seems to work fine.)
I also don't see all indexes initially after clicking the dropdown in the index search box in the 'what if' section. I have to type something in the box before I see the values.
@oegedijk </div> | closed | 2023-01-12T21:45:50Z | 2023-02-15T17:22:26Z | https://github.com/oegedijk/explainerdashboard/issues/250 | [] | AkshayRShiraguppi | 0 |
ageitgey/face_recognition | machine-learning | 1,082 | AttributeError: 'Image' object has no attribute 'read' in Google Colab | * face_recognition version:
* Python version: 3.0
* Operating System: Google Colab
### Description
I am trying to implement the library in Google Colab.
Here is my code so far:

```python
# pip install dependencies
!pip install face_recognition
!pip install opencv-python

import face_recognition
import os
import cv2

# Load the Drive helper and mount
from google.colab import drive

# This will prompt for authorization.
drive.mount('/content/drive')

# After executing the cell above, Drive
# files will be present in "/content/drive/My Drive".
!ls "/content/drive/My Drive"
!ls "/content/drive/My Drive/faces/unknown"
# 'sigrid.jpeg' is in unknown

from IPython.display import Image
from IPython.display import display

Embed = Image('sigrid.jpeg')
Embed
# Here the image doesn't fully show in the notebook

import face_recognition
image = face_recognition.load_image_file(Embed)
face_locations = face_recognition.face_locations(image)
```

Then I get this error:

```
AttributeError: 'Image' object has no attribute 'read'
```
Can anyone please tell me how to solve this? Thanks.
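The error message itself suggests the fix: `IPython.display.Image` is only a display helper, not a file-like object, so pass the path instead, e.g. `face_recognition.load_image_file('sigrid.jpeg')`. A stdlib-only sketch of the mechanism (both `DisplayOnly` and this `load_image_file` are invented stand-ins for illustration):

```python
import io

class DisplayOnly:
    """Stand-in for IPython.display.Image: it holds bytes for rendering
    in the notebook but exposes no .read() method."""
    def __init__(self, data: bytes):
        self.data = data

def load_image_file(file):
    """Toy stand-in: treat the argument as a file-like object."""
    return file.read()

print(load_image_file(io.BytesIO(b"jpeg-bytes")))  # a real file-like object works

try:
    load_image_file(DisplayOnly(b"jpeg-bytes"))    # the display helper does not
except AttributeError as e:
    print(e)  # 'DisplayOnly' object has no attribute 'read'
```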
| closed | 2020-03-10T02:08:47Z | 2020-03-25T22:36:49Z | https://github.com/ageitgey/face_recognition/issues/1082 | [] | aanis | 2 |
onnx/onnx | tensorflow | 6,267 | [1.16.2/1.17?] ONNX build Windows | # Bug Report
### Is the issue related to model conversion?
No. I can't even perform imports.
### Describe the bug
My projects are permissive with respect to which `onnx` PyPI package version is installed. `onnx 1.16.2` came out this morning and broke my projects.
For example, in a turnkeyml environment that used to work:
```
pip install --upgrade onnx
turnkey -h
```
results in:
```
(tkml) PS C:\work\turnkeyml> turnkey -h
Traceback (most recent call last):
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\Scripts\turnkey.exe\__main__.py", line 4, in <module>
File "C:\work\turnkeyml\src\turnkeyml\__init__.py", line 3, in <module>
from .files_api import evaluate_files
File "C:\work\turnkeyml\src\turnkeyml\files_api.py", line 8, in <module>
from turnkeyml.sequence import Sequence
File "C:\work\turnkeyml\src\turnkeyml\sequence\__init__.py", line 1, in <module>
from .sequence import Sequence
File "C:\work\turnkeyml\src\turnkeyml\sequence\sequence.py", line 11, in <module>
import turnkeyml.common.status as status
File "C:\work\turnkeyml\src\turnkeyml\common\status.py", line 10, in <module>
import turnkeyml.common.analyze_model as analyze_model
File "C:\work\turnkeyml\src\turnkeyml\common\analyze_model.py", line 4, in <module>
import onnx
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.
```
Setting `onnx<1.16.2` resolves the issue.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Windows 11
- ONNX version (*e.g. 1.13*): 1.16.2
- Python version: 3.10 (works on Python 3.8)
- Protobuf version: 3.20.2
### Reproduction instructions
```
conda create -n otest python=3.10
conda activate otest
pip install turnkeyml==3.0.1
turnkey -h
```
### Expected behavior
Patch-version increases (1.16.1 -> 1.16.2) should not include breaking changes.
| open | 2024-08-01T14:26:01Z | 2025-03-14T07:51:06Z | https://github.com/onnx/onnx/issues/6267 | [
"bug",
"announcement"
] | jeremyfowers | 45 |
Lightning-AI/pytorch-lightning | deep-learning | 19,596 | save_hyperparameter incorrectly infers parameters from superclass | ### Bug description
Given a model with a submodel, both of which call `save_hyperparameters`, hyperparameters of the submodel that share a name with the main model are overwritten.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
from pytorch_lightning.core.mixins.hparams_mixin import HyperparametersMixin
class Submodel(HyperparametersMixin):
    def __init__(self, hparam: int):
        super().__init__()
        self.save_hyperparameters()

class Model(HyperparametersMixin):
    def __init__(self, hparam: int):
        super().__init__()
        self.submodel = Submodel(hparam=3)
        self.save_hyperparameters()

model = Model(hparam=5)
print(model.hparams)
print(model.submodel.hparams)
```
**Expectation**: `model.hparams.hparam == 5`, `model.submodel.hparams.hparam == 3`
**Reality**: `model.hparams.hparam == 5`, `model.submodel.hparams.hparam == 5`
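To make the suspected mechanism concrete, here is a toy stdlib-only sketch (not Lightning's actual implementation, only an assumption about how its frame inspection behaves) showing why a submodel constructed inside a parent `__init__` can pick up the parent's arguments:

```python
import inspect

class HparamsMixin:
    """Toy sketch: capture hyperparameters by walking the call stack
    up to the outermost __init__ and reading its local variables."""
    def save_hyperparameters(self):
        frame = inspect.currentframe().f_back  # frame of the calling __init__
        while frame.f_back is not None and frame.f_back.f_code.co_name == "__init__":
            frame = frame.f_back               # climb through nested __init__ frames
        self.hparams = {k: v for k, v in frame.f_locals.items() if k != "self"}

class Submodel(HparamsMixin):
    def __init__(self, hparam: int):
        self.save_hyperparameters()

class Model(HparamsMixin):
    def __init__(self, hparam: int):
        self.submodel = Submodel(hparam=3)  # constructed inside Model.__init__
        self.save_hyperparameters()

m = Model(hparam=5)
print(m.hparams)           # {'hparam': 5}
print(m.submodel.hparams)  # {'hparam': 5}, although 3 was passed
```

Because the submodel's call climbs into `Model.__init__`'s frame, the shared name `hparam` resolves to the parent's value.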
### Error messages and logs
_No response_
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: None
* Lightning:
- lightning: 2.2.1
- lightning-utilities: 0.8.0
- pytorch-lightning: 2.2.1
- torch: 2.0.1
- torch-cluster: 1.6.1
- torch-geometric: 2.3.1
- torchmetrics: 1.0.0
* Packages:
- accessible-pygments: 0.0.4
- aiohttp: 3.8.4
- aiosignal: 1.3.1
- alabaster: 0.7.13
- alembic: 1.11.1
- antlr4-python3-runtime: 4.9.3
- anyio: 3.7.0
- appdirs: 1.4.4
- appnope: 0.1.3
- argon2-cffi: 21.3.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.2.3
- astroid: 3.0.0
- asttokens: 2.2.1
- async-lru: 2.0.2
- async-timeout: 4.0.2
- attrs: 23.1.0
- awkward: 2.2.4
- awkward-cpp: 17
- babel: 2.12.1
- backcall: 0.2.0
- backports.functools-lru-cache: 1.6.4
- beautifulsoup4: 4.12.2
- bleach: 6.0.0
- certifi: 2023.5.7
- cffi: 1.15.1
- cfgv: 3.3.1
- charset-normalizer: 3.1.0
- click: 8.1.3
- cmaes: 0.9.1
- codespell: 2.2.6
- colorama: 0.4.6
- colorlog: 6.7.0
- comm: 0.1.3
- commonmark: 0.9.1
- contourpy: 1.0.7
- coolname: 2.2.0
- coverage: 7.2.7
- cycler: 0.11.0
- debugpy: 1.6.7
- decorator: 5.1.1
- defusedxml: 0.7.1
- dill: 0.3.7
- diskcache: 5.6.3
- distlib: 0.3.6
- docker-pycreds: 0.4.0
- docstring-parser: 0.15
- docutils: 0.19
- entrypoints: 0.4
- exceptiongroup: 1.1.1
- executing: 1.2.0
- fastjsonschema: 2.17.1
- filelock: 3.12.0
- flit-core: 3.9.0
- fonttools: 4.39.4
- fqdn: 1.5.1
- frozenlist: 1.3.3
- fsspec: 2023.6.0
- gitdb: 4.0.10
- gitpython: 3.1.31
- gmpy2: 2.1.2
- gnn-tracking: 0.0.1
- gnntrack: 0.0.1
- greenlet: 2.0.2
- hpo2: 0.1.0
- hsfparana: 0.1.0
- hydra-core: 1.3.2
- identify: 2.5.24
- idna: 3.4
- imagesize: 1.4.1
- importlib-metadata: 6.6.0
- importlib-resources: 5.12.0
- iniconfig: 2.0.0
- ipykernel: 6.23.1
- ipython: 8.14.0
- ipython-genutils: 0.2.0
- ipywidgets: 8.0.6
- isoduration: 20.11.0
- isort: 5.12.0
- jedi: 0.18.2
- jinja2: 3.1.2
- joblib: 1.2.0
- json5: 0.9.14
- jsonargparse: 4.21.2
- jsonpointer: 2.4
- jsonschema: 4.17.3
- jupyter: 1.0.0
- jupyter-client: 8.2.0
- jupyter-console: 6.6.3
- jupyter-core: 5.3.0
- jupyter-events: 0.6.3
- jupyter-lsp: 2.2.0
- jupyter-server: 2.6.0
- jupyter-server-terminals: 0.4.4
- jupyterlab: 4.0.2
- jupyterlab-pygments: 0.2.2
- jupyterlab-server: 2.23.0
- jupyterlab-widgets: 3.0.7
- kiwisolver: 1.4.4
- lazy-object-proxy: 1.9.0
- lightning: 2.2.1
- lightning-utilities: 0.8.0
- llvmlite: 0.40.1
- mako: 1.2.4
- markdown-it-py: 3.0.0
- markupsafe: 2.1.3
- matplotlib: 3.7.1
- matplotlib-inline: 0.1.6
- mccabe: 0.7.0
- mdmm: 0.1.3
- mdurl: 0.1.2
- mistune: 2.0.5
- mplhep: 0.3.28
- mplhep-data: 0.0.3
- mpmath: 1.3.0
- msgpack: 1.0.5
- multidict: 6.0.4
- nbclassic: 1.0.0
- nbclient: 0.8.0
- nbconvert: 7.4.0
- nbformat: 5.9.0
- nest-asyncio: 1.5.6
- networkx: 3.1
- nodeenv: 1.8.0
- notebook: 6.5.4
- notebook-shim: 0.2.3
- numba: 0.57.1
- numpy: 1.24.4
- object-condensation: 0.1.dev20+gf5708c7
- omegaconf: 2.3.0
- optuna: 3.2.0
- overrides: 7.3.1
- packaging: 23.1
- pandas: 2.0.2
- pandocfilters: 1.5.0
- parso: 0.8.3
- pathtools: 0.1.2
- pexpect: 4.8.0
- pickleshare: 0.7.5
- pillow: 9.5.0
- pip: 23.1.2
- pkgutil-resolve-name: 1.3.10
- platformdirs: 3.5.1
- pluggy: 1.0.0
- pooch: 1.7.0
- pre-commit: 3.3.2
- prometheus-client: 0.17.0
- prompt-toolkit: 3.0.38
- protobuf: 3.20.3
- psutil: 5.9.5
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pyarrow: 12.0.1
- pycparser: 2.21
- pydata-sphinx-theme: 0.13.3
- pygments: 2.15.1
- pylint: 3.0.0
- pyobjc-core: 9.2
- pyobjc-framework-cocoa: 9.2
- pyparsing: 3.0.9
- pyrsistent: 0.19.3
- pysocks: 1.7.1
- pytest: 7.4.0
- pytest-cov: 4.1.0
- pytest-cover: 3.0.0
- pytest-coverage: 0.0
- python-dateutil: 2.8.2
- python-frontmatter: 1.0.0
- python-json-logger: 2.0.7
- pytorch-lightning: 2.2.1
- pytz: 2023.3
- pyyaml: 6.0
- pyzmq: 25.1.0
- ray: 2.5.1
- recommonmark: 0.7.1
- requests: 2.31.0
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.4.2
- ruff: 0.0.276
- scienceplots: 2.1.1
- scikit-learn: 1.2.2
- scipy: 1.10.1
- send2trash: 1.8.2
- sentry-sdk: 1.21.1
- setproctitle: 1.3.2
- setuptools: 67.7.2
- six: 1.16.0
- smmap: 3.0.5
- sniffio: 1.3.0
- snowballstemmer: 2.2.0
- soupsieve: 2.3.2.post1
- sphinx: 6.2.1
- sphinx-autoapi: 2.1.0
- sphinx-book-theme: 1.0.1
- sphinxcontrib-applehelp: 1.0.4
- sphinxcontrib-devhelp: 1.0.2
- sphinxcontrib-htmlhelp: 2.0.1
- sphinxcontrib-jsmath: 1.0.1
- sphinxcontrib-qthelp: 1.0.3
- sphinxcontrib-serializinghtml: 1.1.5
- sqlalchemy: 2.0.15
- stack-data: 0.6.2
- sympy: 1.12
- tabulate: 0.9.0
- tensorboardx: 2.6
- terminado: 0.17.1
- threadpoolctl: 3.1.0
- tinycss2: 1.2.1
- tomlkit: 0.12.1
- torch: 2.0.1
- torch-cluster: 1.6.1
- torch-geometric: 2.3.1
- torchmetrics: 1.0.0
- tornado: 6.3.2
- tqdm: 4.65.0
- trackml: 3
- traitlets: 5.9.0
- types-markupsafe: 1.1.10
- typeshed-client: 2.3.0
- typing-extensions: 4.6.3
- typing-utils: 0.1.0
- tzdata: 2023.3
- uhi: 0.3.3
- unidecode: 1.3.6
- uproot: 5.0.9
- uri-template: 1.3.0
- urllib3: 2.0.3
- virtualenv: 20.23.0
- wandb: 0.16.3
- wandb-osh: 1.0.4
- wcwidth: 0.2.6
- webcolors: 1.13
- webencodings: 0.5.1
- websocket-client: 1.5.2
- wheel: 0.40.0
- widgetsnbextension: 4.0.7
- wrapt: 1.15.0
- yarl: 1.9.2
- zipp: 3.15.0
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: arm
- python: 3.11.3
- release: 23.2.0
- version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
</details>
### More info
_No response_ | open | 2024-03-08T02:09:52Z | 2024-06-02T14:00:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19596 | [
"bug"
] | klieret | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 19,750 | Trainer does not wait for neptune logger completion and logger connection stays open unless explicitly closed | ### Bug description
I'm performing a naive hyperparameter sweep using the PL Trainer and NeptuneLogger. After the successful completion of `Trainer.fit()` I see that the neptune run is still not complete on Neptune App until the Jupyter Kernel is killed. I also see odd behavior where the next run will start and the training will be terminated almost immediately (what I suspect to be the NeptuneLogger instance synchronizing with the server and then stopping training?).
A snippet of the code is below:
```python
def train(hparams):
    model = ImageClassifier(hparams["model_name"], num_classes=hparams["num_classes"], lr=hparams["lr"])
    neptune_logger = NeptuneLogger(
        project="project_name",
        api_token=neptune_token
    )
    neptune_logger.log_hyperparams(params=hparams)
    trainer = Trainer(
        callbacks=[checkpoint_callback, early_stopping_callback],
        max_epochs=hparams["max_epochs"],
        accelerator=hparams["training_device"],
        logger=neptune_logger,
    )
    trainer.fit(model, train_dataloader, val_dataloader)
    neptune_logger.log_model_summary(model=model, max_depth=-1)
    model = ImageClassifier.load_from_checkpoint("checkpoints/best-checkpoint.ckpt", model_name=hparams["model_name"], num_classes=num_classes, lr=hparams["lr"])
    script = model.to_torchscript()
    torch.jit.save(script, "traced_model.pt")
    neptune_logger.run["model"].track_files("traced_model.pt")

max_epochs = [30, 40, 50]
lrs = [5e-2, 1e-3, 5e-3]
batch_sizes = [(32, 32, 32), (64, 64, 64), (128, 128, 128)]
val_split_sizes = [0.2, 0.3, 0.4]
combinations = list(itertools.product(max_epochs, lrs, batch_sizes, val_split_sizes))

for hp in combinations:
    hparams = {"num_classes": num_classes, "max_epochs": hp[0], "lr": hp[1], "batch_sizes": hp[2], "val_split_size": hp[3]}
    print("=============Training============")
    print(f"Parameters: {hparams}")
    train(hparams)
    print("=============Complete============")
```
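Not a confirmed diagnosis, but since the run only finishes when the kernel dies, the workaround I would try is stopping each run explicitly (assuming Neptune's `Run.stop()` / `NeptuneLogger.experiment.stop()` API) in a `finally` block, so the background sync thread shuts down before the next sweep iteration starts. A stdlib-only mock of the pattern:

```python
class FakeRun:
    """Mock of a neptune run: the connection stays open until .stop()."""
    def __init__(self):
        self.open = True
    def stop(self):
        self.open = False

def train(run):
    try:
        pass  # trainer.fit(...), model export, file tracking ...
    finally:
        run.stop()  # always release the connection, even if fit() raises

runs = [FakeRun() for _ in range(3)]
for r in runs:
    train(r)
print(all(not r.open for r in runs))  # True: nothing left open between iterations
```

With the real logger, the equivalent call at the end of `train()` would be `neptune_logger.experiment.stop()`.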
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 2060 SUPER
- available: True
- version: 12.1
* Lightning:
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.2.1
- torch: 2.1.0
- torchaudio: 2.1.0
- torchmetrics: 1.3.2
- torchvision: 0.16.0
* Packages:
- aiohttp: 3.9.3
- aiosignal: 1.3.1
- anyio: 4.0.0
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- asttokens: 2.4.1
- async-lru: 2.0.4
- attrs: 23.1.0
- babel: 2.13.1
- backports.functools-lru-cache: 1.6.5
- beautifulsoup4: 4.12.2
- bleach: 6.1.0
- boto3: 1.34.81
- botocore: 1.34.81
- bottleneck: 1.3.5
- bravado: 11.0.3
- bravado-core: 6.1.1
- brotli: 1.1.0
- cached-property: 1.5.2
- certifi: 2023.7.22
- cffi: 1.16.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- comm: 0.1.4
- contourpy: 1.1.1
- cycler: 0.12.1
- datasets: 2.18.0
- debugpy: 1.8.0
- decorator: 5.1.1
- defusedxml: 0.7.1
- dill: 0.3.8
- entrypoints: 0.4
- exceptiongroup: 1.1.3
- executing: 2.0.1
- fastjsonschema: 2.18.1
- filelock: 3.13.1
- fonttools: 4.43.1
- fqdn: 1.5.1
- frozenlist: 1.4.1
- fsspec: 2024.2.0
- future: 1.0.0
- gitdb: 4.0.11
- gitpython: 3.1.43
- gmpy2: 2.1.2
- huggingface-hub: 0.22.2
- idna: 3.4
- importlib-metadata: 6.8.0
- importlib-resources: 6.1.0
- ipykernel: 6.26.0
- ipython: 8.17.2
- ipython-genutils: 0.2.0
- ipywidgets: 8.1.1
- isoduration: 20.11.0
- jedi: 0.19.1
- jinja2: 3.1.2
- jmespath: 1.0.1
- json5: 0.9.14
- jsonpointer: 2.4
- jsonref: 1.1.0
- jsonschema: 4.19.2
- jsonschema-specifications: 2023.7.1
- jupyter: 1.0.0
- jupyter-client: 7.4.9
- jupyter-console: 6.6.3
- jupyter-contrib-core: 0.4.0
- jupyter-contrib-nbextensions: 0.7.0
- jupyter-core: 5.5.0
- jupyter-events: 0.8.0
- jupyter-highlight-selected-word: 0.2.0
- jupyter-latex-envs: 1.4.6
- jupyter-lsp: 2.2.0
- jupyter-nbextensions-configurator: 0.6.1
- jupyter-server: 2.9.1
- jupyter-server-terminals: 0.4.4
- jupyterlab: 4.0.7
- jupyterlab-pygments: 0.2.2
- jupyterlab-server: 2.25.0
- jupyterlab-widgets: 3.0.9
- kiwisolver: 1.4.5
- lightning-utilities: 0.11.2
- lxml: 4.9.2
- markupsafe: 2.1.3
- matplotlib: 3.8.1
- matplotlib-inline: 0.1.6
- mistune: 3.0.1
- monotonic: 1.6
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.0.5
- multiprocess: 0.70.16
- munkres: 1.1.4
- nbclassic: 1.0.0
- nbclient: 0.8.0
- nbconvert: 7.10.0
- nbformat: 5.9.2
- neptune: 1.10.2
- nest-asyncio: 1.5.8
- networkx: 3.2.1
- notebook: 6.5.6
- notebook-shim: 0.2.3
- numexpr: 2.8.7
- numpy: 1.26.0
- oauthlib: 3.2.2
- overrides: 7.4.0
- packaging: 23.2
- pandas: 2.1.1
- pandocfilters: 1.5.0
- parso: 0.8.3
- pexpect: 4.8.0
- pickleshare: 0.7.5
- pillow: 9.4.0
- pip: 23.3.1
- pkgutil-resolve-name: 1.3.10
- platformdirs: 3.11.0
- ply: 3.11
- prometheus-client: 0.18.0
- prompt-toolkit: 3.0.39
- psutil: 5.9.5
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pyarrow: 15.0.2
- pyarrow-hotfix: 0.6
- pycparser: 2.21
- pygments: 2.16.1
- pyjwt: 2.8.0
- pyparsing: 3.1.1
- pyqt5: 5.15.9
- pyqt5-sip: 12.12.2
- pysocks: 1.7.1
- python-dateutil: 2.8.2
- python-json-logger: 2.0.7
- pytorch-lightning: 2.2.1
- pytz: 2023.3.post1
- pyyaml: 6.0.1
- pyzmq: 24.0.1
- qtconsole: 5.4.4
- qtpy: 2.4.1
- referencing: 0.30.2
- requests: 2.31.0
- requests-oauthlib: 2.0.0
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rpds-py: 0.10.6
- s3transfer: 0.10.1
- safetensors: 0.4.2
- send2trash: 1.8.2
- setuptools: 68.2.2
- simplejson: 3.19.2
- sip: 6.7.12
- six: 1.16.0
- smmap: 5.0.1
- sniffio: 1.3.0
- soupsieve: 2.5
- stack-data: 0.6.2
- swagger-spec-validator: 3.0.3
- sympy: 1.12
- terminado: 0.17.1
- timm: 0.9.16
- tinycss2: 1.2.1
- toml: 0.10.2
- tomli: 2.0.1
- torch: 2.1.0
- torchaudio: 2.1.0
- torchmetrics: 1.3.2
- torchvision: 0.16.0
- tornado: 6.3.3
- tqdm: 4.66.2
- traitlets: 5.13.0
- triton: 2.1.0
- types-python-dateutil: 2.8.19.14
- typing-extensions: 4.8.0
- typing-utils: 0.1.0
- tzdata: 2023.3
- uri-template: 1.3.0
- urllib3: 2.0.7
- wcwidth: 0.2.9
- webcolors: 1.13
- webencodings: 0.5.1
- websocket-client: 1.6.4
- wheel: 0.41.3
- widgetsnbextension: 4.0.9
- xxhash: 3.4.1
- yarl: 1.9.4
- zipp: 3.17.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.11.6
- release: 5.4.0-166-generic
- version: #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023
</details>
### More info
_No response_ | open | 2024-04-10T03:21:26Z | 2024-04-10T03:25:13Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19750 | [
"bug",
"needs triage"
] | videetparekh | 1 |
serengil/deepface | machine-learning | 1,352 | Error detecting faces: too many values to unpack (expected 4) | ### Before You Report a Bug, Please Confirm You Have Done The Following...
- [X] I have updated to the latest version of the packages.
- [X] I have searched for both [existing issues](https://github.com/serengil/deepface/issues) and [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed) and found none that matched my issue.
### DeepFace's version
deepface-0.0.93
### Python version
31.0
### Operating System
linux
### Dependencies
absl-py==1.4.0
accelerate==0.34.2
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
alabaster==0.7.16
albucore==0.0.16
albumentations==1.4.15
altair==4.2.2
annotated-types==0.7.0
anyio==3.7.1
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
array_record==0.5.1
arviz==0.19.0
astropy==6.1.3
astropy-iers-data==0.2024.9.16.0.32.21
astunparse==1.6.3
async-timeout==4.0.3
atpublic==4.1.0
attrs==24.2.0
audioread==3.0.1
autograd==1.7.0
babel==2.16.0
backcall==0.2.0
beautifulsoup4==4.12.3
bigframes==1.18.0
bigquery-magics==0.2.0
bleach==6.1.0
blinker==1.4
blis==0.7.11
blosc2==2.0.0
bokeh==3.4.3
bqplot==0.12.43
branca==0.7.2
build==1.2.2
CacheControl==0.14.0
cachetools==5.5.0
catalogue==2.0.10
certifi==2024.8.30
cffi==1.17.1
chardet==5.2.0
charset-normalizer==3.3.2
chex==0.1.86
clarabel==0.9.0
click==8.1.7
cloudpathlib==0.19.0
cloudpickle==2.2.1
cmake==3.30.3
cmdstanpy==1.2.4
colorcet==3.1.0
colorlover==0.3.0
colour==0.1.5
community==1.0.0b1
confection==0.1.5
cons==0.4.6
contextlib2==21.6.0
contourpy==1.3.0
cryptography==43.0.1
cuda-python==12.2.1
cudf-cu12 @ https://pypi.nvidia.com/cudf-cu12/cudf_cu12-24.4.1-cp310-cp310-manylinux_2_28_x86_64.whl#sha256=57366e7ef09dc63e0b389aff20df6c37d91e2790065861ee31a4720149f5b694
cufflinks==0.17.3
cupy-cuda12x==12.2.0
cvxopt==1.3.2
cvxpy==1.5.3
cycler==0.12.1
cymem==2.0.8
Cython==3.0.11
dask==2024.8.0
datascience==0.17.6
db-dtypes==1.3.0
dbus-python==1.2.18
debugpy==1.6.6
decorator==4.4.2
deepface==0.0.93
defusedxml==0.7.1
distributed==2024.8.0
distro==1.7.0
dlib==19.24.2
dm-tree==0.1.8
docstring_parser==0.16
docutils==0.18.1
dopamine_rl==4.0.9
duckdb==1.1.0
earthengine-api==1.0.0
easydict==1.13
ecos==2.0.14
editdistance==0.8.1
eerepr==0.0.4
einops==0.8.0
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl#sha256=86cc141f63942d4b2c5fcee06630fd6f904788d2f0ab005cce45aadb8fb73889
entrypoints==0.4
et-xmlfile==1.1.0
etils==1.9.4
etuples==0.3.9
eval_type_backport==0.2.0
exceptiongroup==1.2.2
facenet-pytorch==2.6.0
fastai==2.7.17
fastcore==1.7.8
fastdownload==0.0.7
fastjsonschema==2.20.0
fastprogress==1.0.3
fastrlock==0.8.2
filelock==3.16.1
fire==0.6.0
firebase-admin==6.5.0
Flask==2.2.5
Flask-Cors==5.0.0
flatbuffers==24.3.25
flax==0.8.5
folium==0.17.0
fonttools==4.53.1
frozendict==2.4.4
frozenlist==1.4.1
fsspec==2024.6.1
future==1.0.0
gast==0.6.0
gcsfs==2024.6.1
GDAL==3.6.4
gdown==5.2.0
geemap==0.34.3
gensim==4.3.3
geocoder==1.38.1
geographiclib==2.0
geopandas==1.0.1
geopy==2.4.1
gin-config==0.5.0
glob2==0.7
google==2.0.3
google-ai-generativelanguage==0.6.6
google-api-core==2.19.2
google-api-python-client==2.137.0
google-auth==2.27.0
google-auth-httplib2==0.2.0
google-auth-oauthlib==1.2.1
google-cloud-aiplatform==1.67.1
google-cloud-bigquery==3.25.0
google-cloud-bigquery-connection==1.15.5
google-cloud-bigquery-storage==2.26.0
google-cloud-bigtable==2.26.0
google-cloud-core==2.4.1
google-cloud-datastore==2.19.0
google-cloud-firestore==2.16.1
google-cloud-functions==1.16.5
google-cloud-iam==2.15.2
google-cloud-language==2.13.4
google-cloud-pubsub==2.23.1
google-cloud-resource-manager==1.12.5
google-cloud-storage==2.8.0
google-cloud-translate==3.15.5
google-colab @ file:///colabtools/dist/google_colab-1.0.0.tar.gz#sha256=07bb3e866a2fb3dc3072920a4722b4a4c9c2fc953a97253597f3e5391c3dd17c
google-crc32c==1.6.0
google-generativeai==0.7.2
google-pasta==0.2.0
google-resumable-media==2.7.2
googleapis-common-protos==1.65.0
googledrivedownloader==0.4
graphviz==0.20.3
greenlet==3.1.1
grpc-google-iam-v1==0.13.1
grpcio==1.64.1
grpcio-status==1.48.2
gspread==6.0.2
gspread-dataframe==3.3.1
gunicorn==23.0.0
gym==0.25.2
gym-notices==0.0.8
h5netcdf==1.3.0
h5py==3.11.0
holidays==0.57
holoviews==1.19.1
html5lib==1.1
httpimport==1.4.0
httplib2==0.22.0
huggingface-hub==0.24.7
humanize==4.10.0
hyperopt==0.2.7
ibis-framework==9.2.0
idna==3.10
imageio==2.35.1
imageio-ffmpeg==0.5.1
imagesize==1.4.1
imbalanced-learn==0.12.3
imgaug==0.4.0
immutabledict==4.2.0
importlib_metadata==8.5.0
importlib_resources==6.4.5
imutils==0.5.4
inflect==7.4.0
iniconfig==2.0.0
intel-cmplr-lib-ur==2024.2.1
intel-openmp==2024.2.1
ipyevents==2.0.2
ipyfilechooser==0.6.0
ipykernel==5.5.6
ipyleaflet==0.19.2
ipyparallel==8.8.0
ipython==7.34.0
ipython-genutils==0.2.0
ipython-sql==0.5.0
ipytree==0.2.2
ipywidgets==7.7.1
itsdangerous==2.2.0
jax==0.4.33
jax-cuda12-pjrt==0.4.33
jax-cuda12-plugin==0.4.33
jaxlib==0.4.33
jeepney==0.7.1
jellyfish==1.1.0
jieba==0.42.1
Jinja2==3.1.4
joblib==1.4.2
jsonpickle==3.3.0
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
jupyter-client==6.1.12
jupyter-console==6.1.0
jupyter-leaflet==0.19.2
jupyter-server==1.24.0
jupyter_core==5.7.2
jupyterlab_pygments==0.3.0
jupyterlab_widgets==3.0.13
kaggle==1.6.17
kagglehub==0.3.0
keras==3.4.1
keyring==23.5.0
kiwisolver==1.4.7
langcodes==3.4.0
language_data==1.2.0
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy_loader==0.4
libclang==18.1.1
librosa==0.10.2.post1
lightgbm==4.5.0
linkify-it-py==2.0.3
llvmlite==0.43.0
locket==1.0.0
logical-unification==0.4.6
lxml==4.9.4
marisa-trie==1.2.0
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.7.1
matplotlib-inline==0.1.7
matplotlib-venn==1.1.1
mdit-py-plugins==0.4.2
mdurl==0.1.2
mediapipe==0.10.15
miniKanren==1.0.3
missingno==0.5.2
mistune==0.8.4
mizani==0.11.4
mkl==2024.2.2
ml-dtypes==0.4.1
mlxtend==0.23.1
more-itertools==10.5.0
moviepy==1.0.3
mpmath==1.3.0
msgpack==1.0.8
mtcnn==0.1.1
multidict==6.1.0
multipledispatch==1.0.0
multitasking==0.0.11
murmurhash==1.0.10
music21==9.1.0
namex==0.0.8
natsort==8.4.0
nbclassic==1.1.0
nbclient==0.10.0
nbconvert==6.5.4
nbformat==5.10.4
nest-asyncio==1.6.0
networkx==3.3
nibabel==5.2.1
nltk==3.8.1
notebook==6.5.5
notebook_shim==0.2.4
numba==0.60.0
numexpr==2.10.1
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvcc-cu12==12.6.68
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.6.68
nvidia-nvtx-cu12==12.1.105
nvtx==0.2.10
oauth2client==4.1.3
oauthlib==3.2.2
opencv-contrib-python==4.10.0.84
opencv-python==4.10.0.84
opencv-python-headless==4.10.0.84
openpyxl==3.1.5
opt-einsum==3.3.0
optax==0.2.3
optree==0.12.1
orbax-checkpoint==0.6.4
osqp==0.6.7.post0
packaging==24.1
pandas==2.1.4
pandas-datareader==0.10.0
pandas-gbq==0.23.1
pandas-stubs==2.1.4.231227
pandocfilters==1.5.1
panel==1.4.5
param==2.1.1
parso==0.8.4
parsy==2.1
partd==1.4.2
pathlib==1.0.1
patsy==0.5.6
peewee==3.17.6
pexpect==4.9.0
pickleshare==0.7.5
pillow==10.2.0
pip-tools==7.4.1
platformdirs==4.3.6
plotly==5.24.1
plotnine==0.13.6
pluggy==1.5.0
polars==1.6.0
pooch==1.8.2
portpicker==1.5.2
prefetch_generator==1.0.3
preshed==3.0.9
prettytable==3.11.0
proglog==0.1.10
progressbar2==4.5.0
prometheus_client==0.21.0
promise==2.3
prompt_toolkit==3.0.47
prophet==1.1.5
proto-plus==1.24.0
protobuf==4.25.5
psutil==5.9.5
psycopg2==2.9.9
ptyprocess==0.7.0
py-cpuinfo==9.0.0
py4j==0.10.9.7
pyarrow==14.0.2
pyarrow-hotfix==0.6
pyasn1==0.6.1
pyasn1_modules==0.4.1
pycocotools==2.0.8
pycparser==2.22
pydantic==2.9.2
pydantic_core==2.23.4
pydata-google-auth==1.8.2
pydot==3.0.1
pydot-ng==2.0.0
pydotplus==2.0.2
PyDrive==1.3.1
PyDrive2==1.20.0
pyerfa==2.0.1.4
pygame==2.6.0
Pygments==2.18.0
PyGObject==3.42.1
PyJWT==2.9.0
pymc==5.16.2
pymystem3==0.2.0
pynvjitlink-cu12==0.3.0
pyogrio==0.9.0
PyOpenGL==3.1.7
pyOpenSSL==24.2.1
pyparsing==3.1.4
pyperclip==1.9.0
pyproj==3.6.1
pyproject_hooks==1.1.0
pyshp==2.3.1
PySocks==1.7.1
pytensor==2.25.4
pytest==7.4.4
python-apt==2.4.0
python-box==7.2.0
python-dateutil==2.8.2
python-louvain==0.16
python-slugify==8.0.4
python-utils==3.8.2
pytz==2024.2
pyviz_comms==3.0.3
PyYAML==6.0.2
pyzmq==24.0.1
qdldl==0.1.7.post4
ratelim==0.1.6
referencing==0.35.1
regex==2024.9.11
requests==2.32.3
requests-oauthlib==1.3.1
requirements-parser==0.9.0
retina-face==0.0.17
rich==13.8.1
rmm-cu12==24.4.0
rpds-py==0.20.0
rpy2==3.4.2
rsa==4.9
safetensors==0.4.5
scikit-image==0.24.0
scikit-learn==1.5.2
scipy==1.13.1
scooby==0.10.0
scs==3.2.7
seaborn==0.13.1
SecretStorage==3.3.1
Send2Trash==1.8.3
sentencepiece==0.2.0
shapely==2.0.6
shellingham==1.5.4
simple-parsing==0.1.6
six==1.16.0
sklearn-pandas==2.2.0
smart-open==7.0.4
sniffio==1.3.1
snowballstemmer==2.2.0
sortedcontainers==2.4.0
sounddevice==0.5.0
soundfile==0.12.1
soupsieve==2.6
soxr==0.5.0.post1
spacy==3.7.6
spacy-legacy==3.0.12
spacy-loggers==1.0.5
Sphinx==5.0.2
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
SQLAlchemy==2.0.35
sqlglot==25.1.0
sqlparse==0.5.1
srsly==2.4.8
stanio==0.5.1
statsmodels==0.14.3
StrEnum==0.4.15
sympy==1.13.3
tables==3.8.0
tabulate==0.9.0
tbb==2021.13.1
tblib==3.0.0
tenacity==9.0.0
tensorboard==2.17.0
tensorboard-data-server==0.7.2
tensorflow==2.17.0
tensorflow-datasets==4.9.6
tensorflow-hub==0.16.1
tensorflow-io-gcs-filesystem==0.37.1
tensorflow-metadata==1.15.0
tensorflow-probability==0.24.0
tensorstore==0.1.65
termcolor==2.4.0
terminado==0.18.1
text-unidecode==1.3
textblob==0.17.1
tf-slim==1.1.0
tf_keras==2.17.0
thinc==8.2.5
threadpoolctl==3.5.0
tifffile==2024.9.20
tinycss2==1.3.0
tokenizers==0.19.1
toml==0.10.2
tomli==2.0.1
toolz==0.12.1
torch==2.2.2
torchaudio @ https://download.pytorch.org/whl/cu121_full/torchaudio-2.4.1%2Bcu121-cp310-cp310-linux_x86_64.whl#sha256=da8c87c80a1c1376a48dc33eef30b03bbdf1df25a05bd2b1c620b8811c7b19be
torchsummary==1.5.1
torchvision==0.17.2
tornado==6.3.3
tqdm==4.66.5
traitlets==5.7.1
traittypes==0.2.1
transformers==4.44.2
triton==2.2.0
tweepy==4.14.0
typeguard==4.3.0
typer==0.12.5
types-pytz==2024.2.0.20240913
types-setuptools==75.1.0.20240917
typing_extensions==4.12.2
tzdata==2024.1
tzlocal==5.2
uc-micro-py==1.0.3
ultralytics==8.3.0
ultralytics-thop==2.0.8
uritemplate==4.1.1
urllib3==2.2.3
vega-datasets==0.9.0
wadllib==1.3.6
wasabi==1.1.3
wcwidth==0.2.13
weasel==0.4.1
webcolors==24.8.0
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.0.4
widgetsnbextension==3.6.9
wordcloud==1.9.3
wrapt==1.16.0
xarray==2024.9.0
xarray-einstats==0.8.0
xgboost==2.1.1
xlrd==2.0.1
xyzservices==2024.9.0
yarl==1.11.1
yellowbrick==1.5
yfinance==0.2.43
zict==3.0.0
zipp==3.20.2
### Reproducible example
```Python
I have downloaded a video from YouTube and tried to detect faces in it frame by frame (I am working on a CCTV project). I tried all resolutions but keep getting this error:
Error detecting faces: Face could not be detected in numpy array.Please confirm that the picture is a face photo or consider to set enforce_detection param to False.
I tried to enhance the quality and then got this error:
Error detecting faces: too many values to unpack (expected 4)
# YouTube video link: https://www.youtube.com/watch?v=Si3odzG3MZs
# Following is the code
# =======================================================
import cv2
from deepface import DeepFace
from google.colab.patches import cv2_imshow
from PIL import Image, ImageOps
from deepface import DeepFace
import numpy as np
import time

# Path to your video file
video_path = "/content/drive/MyDrive/unknown/623.mp4"

# Load the video
cap = cv2.VideoCapture(video_path)

# Check if video opened successfully
if not cap.isOpened():
    print("Error: Could not open video.")
    exit()

# Function to enhance frames
def enhance_frame(frame):
    # Convert OpenCV frame (BGR) to PIL image (RGB)
    pil_image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # Apply auto contrast using PIL
    enhanced_pil_image = ImageOps.autocontrast(pil_image)
    # Convert back to OpenCV format (RGB -> BGR)
    enhanced_frame = cv2.cvtColor(np.array(enhanced_pil_image), cv2.COLOR_RGB2BGR)
    # Apply bilateral filter to reduce noise (preserving edges)
    enhanced_frame = cv2.bilateralFilter(enhanced_frame, d=9, sigmaColor=75, sigmaSpace=75)
    return enhanced_frame

# Start time tracking
start_time = time.time()

# Read video frame by frame
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("Finished processing the video.")
        break

    # Enhance the frame
    # enhanced_frame = enhance_frame(frame)

    # Convert the enhanced frame from BGR to RGB (as DeepFace expects RGB format)
    rgb_frame = cv2.cvtColor(enhanced_frame, cv2.COLOR_BGR2RGB)

    try:
        # Detect faces in the frame using DeepFace
        face_objs = DeepFace.extract_faces(img_path=rgb_frame, detector_backend='opencv')
        # Loop through detected faces and draw rectangles around them
        for face_obj in face_objs:
            x, y, w, h = face_obj['facial_area'].values()
            # Draw rectangle around the face
            cv2.rectangle(enhanced_frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    except Exception as e:
        print(f"Error detecting faces: {e}")

    # Display the resulting frame with detected faces
    cv2_imshow(enhanced_frame)

    # Break the loop when 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    # Get the current time and calculate elapsed time
    elapsed_time = time.time() - start_time
    if elapsed_time > 90:  # Stop after 30 seconds
        print("Stopping after 30 seconds.")
        break

# Release the video capture object and close display windows
```
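Not part of the original report: the "too many values to unpack (expected 4)" error is consistent with newer DeepFace releases returning more than four keys in `facial_area` (eye coordinates in addition to x/y/w/h), so unpacking `.values()` positionally breaks. A small stdlib-only sketch of the safer key-based unpacking (the dict shape below is an assumption modeled on that error):

```python
# Assumed shape of one entry from DeepFace.extract_faces(); newer
# releases add eye coordinates, so .values() yields more than 4 items.
face_obj = {
    "facial_area": {"x": 10, "y": 20, "w": 64, "h": 64,
                    "left_eye": (30, 40), "right_eye": (55, 40)},
    "confidence": 0.99,
}

area = face_obj["facial_area"]
# Unpack by key instead of relying on dict length/order.
x, y, w, h = area["x"], area["y"], area["w"], area["h"]
print(x, y, w, h)
```

For frames where no face is present, the first error message itself suggests passing `enforce_detection=False` to `extract_faces` so it returns a result instead of raising.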
### Relevant Log Output
_No response_
### Expected Result
it should have detected the faces
### What happened instead?
_No response_
### Additional Info
_No response_ | closed | 2024-09-30T13:17:13Z | 2024-10-01T08:44:31Z | https://github.com/serengil/deepface/issues/1352 | [
"bug",
"invalid"
] | MeharG811 | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 582 | About the HRNet part | Hello, and thank you for your work. I would like to ask about the keypoint-detection HRNet part: both repositories you listed include an nms module (which contains a lot of C-related code), but your code here does not seem to have one. Could you explain why, and how I should read and understand the nms part? Thanks! | closed | 2022-06-28T04:51:34Z | 2023-03-02T10:35:32Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/582 | [] | davidpengiupui | 3 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 256 | After instruction fine-tuning chinese-alpaca-2-13b, adapter_model.bin is only 158KB | ### The following items must be checked before submitting
- [X] Make sure you are using the latest code from this repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; for those it is also recommended to look for solutions in the corresponding project.
### Issue type
Model training and fine-tuning
### Base model
Chinese-Alpaca-2 (7B/13B)
### Operating system
Linux
### Detailed description of the issue
```
After instruction fine-tuning chinese-alpaca-2-13b, adapter_model.bin is only 158KB. Fine-tuning runs on 8x 3090 GPUs, with deepspeed using zero3.
Since fine-tuning chinese-alpaca-2-7b on the same data with zero2 produces an adapter_model.bin of 1169MB, could something be wrong with zero3 fine-tuning of the 13b model?
Another issue mentioned looking for adapter_model.bin one directory level up, but I did not find it there.
I pulled the latest code today; peft was upgraded from the previous 0.3.0.dev to 0.5.0, and re-running the fine-tuning gives the same problem.
-----run_sft.sh code
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/media/disk1/source/text-generation-webui/models/ziqingyang_chinese-alpaca-2-13b
chinese_tokenizer_path=/media/disk1/source/text-generation-webui/models/ziqingyang_chinese-alpaca-2-13b
dataset_dir=/media/disk1/source/Chinese-LLaMA-Alpaca-2/data/20230907/json
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
max_seq_length=512
output_dir=/media/disk1/source/Chinese-LLaMA-Alpaca-2/output_lora_13b
validation_file=/media/disk1/source/Chinese-LLaMA-Alpaca-2/data/20230907/eval.json
deepspeed_config_file=ds_zero3_no_offload.json
torchrun --nnodes 1 --nproc_per_node 8 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 100 \
--save_steps 200 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length ${max_seq_length} \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--lora_dropout ${lora_dropout} \
--modules_to_save ${modules_to_save} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--load_in_kbits 16 \
--gradient_checkpointing \
--ddp_find_unused_parameters False
Because zero3 raised an error saying low_cpu_mem_usage and device_map cannot be used, these two parameters are commented out.
-----run_clm_sft_with_peft.py code
model = LlamaForCausalLM.from_pretrained(
model_args.model_name_or_path,
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
torch_dtype=torch_dtype,
#low_cpu_mem_usage=True,
#device_map=device_map,
load_in_4bit=load_in_4bit,
load_in_8bit=load_in_8bit,
quantization_config=quantization_config,
)
----ds_zero3_no_offload.json code
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 100,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1e-10
},
"zero_optimization": {
"stage": 3,
"allgather_partitions": true,
"allgather_bucket_size": 1e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 1e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
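One thing worth checking for the ZeRO-3 config above (an assumption, not verified against the logs): under ZeRO-3 the parameters are partitioned across GPUs, and a checkpoint saved without gathering them can come out nearly empty, which would match a tiny adapter_model.bin. DeepSpeed has a dedicated flag for this:

```json
"zero_optimization": {
    "stage": 3,
    "stage3_gather_16bit_weights_on_model_save": true
}
```

If this fixes the file size, the 158KB file was likely saved from un-gathered, partitioned weights.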
### Dependencies (required for code-related issues)
```
bitsandbytes 0.41.0
ctransformers 0.2.25+cu117
peft 0.5.0
sentencepiece 0.1.97
torch 2.0.1
transformers 4.31.0
```
### Run logs or screenshots
```
# Paste run logs here (inside this code block)
``` | closed | 2023-09-08T04:51:13Z | 2023-09-24T10:50:55Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/256 | [] | Damonproto | 5 |
fbdesignpro/sweetviz | data-visualization | 100 | target_feat is numerical but is interpreted as categorical | Hi,
thanks for bringing this software to life!!! :-)
If I execute the following code, I get the error below. Am I doing something wrong here?
```python3
import sweetviz as sv
import pandas as pd
import io
import sys
print("sys.version=", sys.version)
print("sv.__version__=", sv.__version__)
print("pd.__version__=", pd.__version__)
myString = """
-0.029905, 1.0, 2.0
1.803806, 0.5, 2.0
0.729614, 2.0, 2.0
0.702528, 0.5, 1.0
0.718835, 1.0, 2.0
0.603640, 2.0, 2.0
0.492969, 2.0, 2.0
0.573862, 0.5, 2.0
4.357443, 0.5, 2.0
0.129235, 2.0, 1.0
"""
df = pd.read_csv(io.StringIO(myString), sep=',', header=None)
df.columns=['a', 'b', 'c']
df['a'] = df.a.astype(float)
print("type(df.a.iloc[0])=", type(df.a.iloc[0]))
my_report = sv.analyze(df, target_feat="a")
```
error:
```
sys.version= 3.8.7 (default, Dec 21 2020, 21:23:03)
[GCC 5.4.0 20160609]
sv.__version__= 2.1.3
pd.__version__= 1.3.2
type(df.a.iloc[0])= <class 'numpy.float64'>
Feature: a (TARGET) |██▌ | [ 25%] 00:00 -> (00:00 left)Gtk-Message: 20:00:57.393: Failed to load module "appmenu-gtk-module"
Feature: a (TARGET) |█████ | [ 50%] 00:00 -> (00:00 left)Traceback (most recent call last):
File "/home/user/PycharmProjects/my_project/file01.py", line 31, in <module>
my_report = sv.analyze(df, target_feat="a")
File "/home/user/venv/numba38/lib/python3.8/site-packages/sweetviz/sv_public.py", line 12, in analyze
report = sweetviz.DataframeReport(source, target_feat, None,
File "/home/user/venv/numba38/lib/python3.8/site-packages/sweetviz/dataframe_report.py", line 238, in __init__
this_feat = FeatureToProcess(cur_order_index,
File "/home/user/venv/numba38/lib/python3.8/site-packages/sweetviz/sv_types.py", line 77, in __init__
raise ValueError("TARGET values can only be of NUMERICAL or BOOLEAN type for now.\n"
ValueError: TARGET values can only be of NUMERICAL or BOOLEAN type for now.
CATEGORICAL type was detected; if you meant the target to be
NUMERICAL, use a FeatureConfig(force_num=...) object.
Feature: a (TARGET) |█████ | [ 50%] 00:01 -> (00:01 left)
``` | closed | 2021-08-29T18:12:12Z | 2022-01-15T04:03:06Z | https://github.com/fbdesignpro/sweetviz/issues/100 | [
"bug"
] | bm765 | 2 |
pyppeteer/pyppeteer | automation | 398 | Set Browser Window Size? | How would you go about passing in args to the launch command so you can set browser window size?
I am trying to do something along the lines of ```args=[`--window-size=1920,1080`]``` but the syntax is incorrect.
```python
async def main():
    browser = await launch(
        headless=True,
        ignoreHTTPSErrors=True,
        args=[`--window-size=1920,1080`]
    )
    page = await browser.newPage()
    await page.goto('https://www.google.com')
    time.sleep(2)
    await page.evaluate('''() => {
        document.getElementsByClassName('xxx')[0].style.display = 'none'
    }''')
    time.sleep(1)
    await page.screenshot({'path': 'example.png'})
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
```
| open | 2022-08-15T16:12:43Z | 2022-08-15T16:14:03Z | https://github.com/pyppeteer/pyppeteer/issues/398 | [] | ThePieMonster | 0 |
lukas-blecher/LaTeX-OCR | pytorch | 16 | Installation Help | I'm trying to install using the
`pip install -r requirements.txt`
line of code, but it seems like my computer is stuck in an infinite loop. I successfully installed Pytorch and Python 3.7 before running the requirements line. At first it was unsuccessful and the error line recommended trying --user. No dice. I appreciate your help! I'm excited to try out your code. | closed | 2021-05-19T21:57:27Z | 2021-06-02T15:56:18Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/16 | [] | scrawfo9 | 3 |
psf/requests | python | 6,743 | requests library seems to ignore "Transfer-Encoding" header | I want to send a request with "Transfer-Encoding:chunked" but somehow the header is never set.
Below is my code for testing and the corresponding captured request.
```
import requests
url = 'http://[replaced]/test.php'

def data_chunks():
    yield b'8\r\n'
    yield b'search=1\r\n'
    yield b'0\r\n'

response = requests.post(url,data=data_chunks(), headers={"Content-Type":"application/x-www-form-urlencoded","Transfer-Encoding":"chunked"}, proxies={"http":"http://127.0.0.1:8080"})
```
> POST /test.php HTTP/1.1
> Host: [replaced]
> User-Agent: python-requests/2.28.1
> Accept-Encoding: gzip, deflate
> Accept: */*
> Connection: close
> Content-Type: application/x-www-form-urlencoded
> Content-Length: 16
>
> 8
> search=1
> 0
If I do not set the "Transfer-Encoding" header it is not used and even if I explicitly set the "Transfer-Encoding" header it is not used. The requests library always seems to put a "Content-Length" instead.
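For comparison, a sketch of how chunked uploads are normally triggered in requests (the URL is assumed; note also that an intercepting proxy such as Burp may re-frame a chunked body into Content-Length before it reaches the server). The generator should yield only the raw payload, because urllib3 adds the hex chunk-size framing and the terminating 0-chunk itself; when no length can be computed for the body, requests falls back to `Transfer-Encoding: chunked` on its own:

```python
import requests

def body():
    # Yield raw payload bytes only; do not hand-build "8\r\n...\r\n0\r\n"
    # framing, since urllib3 wraps each yielded chunk itself.
    yield b"search="
    yield b"1"

def post_chunked(url):
    # A generator has no len(), so requests cannot set Content-Length
    # and emits Transfer-Encoding: chunked instead.
    return requests.post(
        url,
        data=body(),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
```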
What am I supposed to do? | closed | 2024-06-15T18:12:30Z | 2024-06-15T18:12:42Z | https://github.com/psf/requests/issues/6743 | [
"Question/Not a bug",
"actions/autoclose-qa"
] | Green360 | 1 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 959 | How do I define the model? | I connect to Oracle this way:
`SQLALCHEMY_DATABASE_URI = 'oracle://username:password@ip:port/servername'`
How do I specify the database (schema) when writing the Model?
```python
class MyselfModel(BaseModel):
    __tablename__ = 'user'
    username = db.Column(db.String(32))
```
How do I specify the database (schema) that the `user` table belongs to?
I checked the documentation, but did not find it.
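In case it helps, a hedged sketch of the usual SQLAlchemy answer, which is to pin the table to a schema via `__table_args__` ("myschema" is a placeholder); in Flask-SQLAlchemy the same `__table_args__` works on `db.Model` subclasses:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyselfModel(Base):
    __tablename__ = "user"
    # "myschema" is a placeholder for the Oracle schema/owner name.
    __table_args__ = {"schema": "myschema"}
    id = Column(Integer, primary_key=True)
    username = Column(String(32))

print(MyselfModel.__table__.fullname)
```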
Help me! Thanks! | closed | 2021-04-23T10:48:52Z | 2021-05-08T00:03:42Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/959 | [] | importTthis | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1573 | CF checkbox infinite loop even when clicked by mouse | Chrome version: 117
```
import undetected_chromedriver as uc
driver = uc.Chrome(headless=False)
driver.get('https://nowsecure.nl')
``` | open | 2023-09-21T07:33:08Z | 2023-09-23T03:50:20Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1573 | [] | fyxtc | 4 |
scikit-optimize/scikit-optimize | scikit-learn | 782 | How to give sample_weight in the fit method in BayesSearchCV? | open | 2019-07-24T07:36:16Z | 2019-07-25T09:41:29Z | https://github.com/scikit-optimize/scikit-optimize/issues/782 | [] | Prasant1993 | 0 | |
mwaskom/seaborn | pandas | 3785 | Is it possible to configure seaborn to follow mpl behavior and break line plots at NaN values? | Is it possible to configure seaborn to follow mpl's behavior of breaking a line plot at NaN values? Sometimes it is useful, and it is also the default for other plotting tools like echarts.
Refer [Plotting masked and NaN values](https://matplotlib.org/stable/gallery/lines_bars_and_markers/masked_demo.html)
```py
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
x = np.linspace(-np.pi/2, np.pi/2, 31)
y = np.cos(x)**3
# 1) remove points where y > 0.7
x2 = x[y <= 0.7]
y2 = y[y <= 0.7]
# 2) mask points where y > 0.7
y3 = np.ma.masked_where(y > 0.7, y)
# 3) set to NaN where y > 0.7
y4 = y.copy()
y4[y3 > 0.7] = np.nan
fig, ax = plt.subplots(figsize=(10, 8), nrows=2)
ax[0].plot(x*0.1, y, 'o-', color='lightgrey', label='No mask')
ax[0].plot(x2*0.4, y2, 'o-', label='Points removed')
ax[0].plot(x*0.7, y3, 'o-', label='Masked values')
ax[0].plot(x*1.0, y4, 'o-', label='NaN values')
ax[0].legend()
ax[0].set_title('Masked and NaN data in Matplotlib')
sns.lineplot(x=x*0.1, y=y, marker='o', color='lightgrey', label='No mask', ax=ax[1])
sns.lineplot(x=x2*0.4, y=y2, marker='o', label='Points removed', ax=ax[1])
sns.lineplot(x=x*0.7, y=y3, marker='o', label='Masked values', ax=ax[1])
sns.lineplot(x=x*1.0, y=y4, marker='o', label='NaN values', ax=ax[1])
ax[1].legend()
ax[1].set_title('Masked and NaN data in Seaborn')
plt.tight_layout()
plt.show()
```

| closed | 2024-11-14T04:52:43Z | 2024-11-14T13:51:32Z | https://github.com/mwaskom/seaborn/issues/3785 | [] | randomseed42 | 1 |
langmanus/langmanus | automation | 65 | Fantastic project, finally one built on the langchain open-source ecosystem | Fantastic project, finally one built on the langchain open-source ecosystem. Other agent projects all reinvent the wheel, which makes reading their code very difficult. With langgraph, it took me only five minutes to understand this project's code. | closed | 2025-03-20T05:41:05Z | 2025-03-20T13:39:37Z | https://github.com/langmanus/langmanus/issues/65 | [] | shell-nlp | 2 |
biolab/orange3 | numpy | 6,643 | Bulk .ows reader for monitoring our assets | **What's your use case?**
I would like to analyse my Orange workflows (.ows), and for that, what would be better than using Orange Data Mining itself?
**What's your proposed solution?**
A tool where you can provide folders or addresses (like \\srv_simon\odm\workflows\prod*2023*.ows and \\srv_simon\odm\workflows\) and choose whether to scan subfolders. Two outputs:
- workflow level (1 row per workflow): workflow name and address, some file metadata (creation date, etc.)
- node level: workflow name and address, the tools used, the annotations, etc.
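For the node-level output, .ows files are plain XML, so a stdlib sketch could look like this (the `scheme`/`nodes`/`node` element names and attributes are assumptions about the .ows schema, not verified here):

```python
import xml.etree.ElementTree as ET

ows_text = """<scheme title="demo" version="2.0">
  <nodes>
    <node id="0" name="File" qualified_name="Orange.widgets.data.owfile.OWFile" />
    <node id="1" name="Scatter Plot" qualified_name="Orange.widgets.visualize.owscatterplot.OWScatterPlot" />
  </nodes>
</scheme>"""

root = ET.fromstring(ows_text)  # for real files: ET.parse(path).getroot()
rows = [
    {"workflow": root.get("title"), "widget": node.get("name")}
    for node in root.iter("node")
]
print(rows)
```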
**Are there any alternative solutions?**
Using something else than Orange but that means no cool inception.
| closed | 2023-11-19T09:30:23Z | 2023-11-26T18:13:43Z | https://github.com/biolab/orange3/issues/6643 | [] | simonaubertbd | 1 |
dynaconf/dynaconf | fastapi | 714 | Add combo converters to the docs | Implemented here https://github.com/rochacbruno/dynaconf/pull/704
@EdwardCuiPeacock implemented and now we need to add to the proper docs section. | closed | 2022-01-29T18:40:22Z | 2022-04-16T16:29:39Z | https://github.com/dynaconf/dynaconf/issues/714 | [] | rochacbruno | 2 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 312 | [BUG] Briefly and clearly describe the issue | ***On which platform did the error occur?***
e.g.: Douyin/TikTok
***On which endpoint did the error occur?***
e.g.: API-V1/API-V2/Web APP
***What input value was submitted?***
e.g.: a short-video link
***Did you try again?***
e.g.: Yes, the error still persisted X amount of time after it occurred.
***Have you read this project's README or API documentation?***
e.g.: Yes, and I am quite sure this problem is caused by the program.
| closed | 2023-11-02T02:11:16Z | 2024-02-07T03:44:32Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/312 | [
"BUG"
] | RobinsChens | 0 |
yzhao062/pyod | data-science | 288 | Refactor: use a consistent name for the Keras model sub-object |
When saving the Keras model, it creates issues by having different names:
```
self.model_
self.combine_model
```
| open | 2021-03-03T01:55:39Z | 2021-03-15T04:32:29Z | https://github.com/yzhao062/pyod/issues/288 | [] | arita37 | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 193 | Add a "tuples_per_anchor" or "num_tuples" argument to BaseTupleMiner | This would allow users to limit the number of pairs/triplets returned by a miner. The ```triplets_per_anchor``` flag would be removed from TripletMarginLoss and MarginLoss. See #192 for related discussion. | open | 2020-09-09T16:33:12Z | 2020-09-09T16:33:32Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/193 | [
"enhancement"
] | KevinMusgrave | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 2267 | in general: is it possible to remove firstname, lastname, username from the User entity? | I want to make the registration process as easy as possible by removing unnecessary information.
I can add my own attributes like phone_number by creating a new class and inheriting from User, but the problem is:
**>>> I don't need firstname, lastname and username... How can I change the User entity?**
I saw that these variables are used all over the codebase (for example in the login route, the registration route, the registration user model, ...)
I thought of using a custom form and add_post(self, item) to overwrite firstname, lastname, and username with the provided user email before persisting to the database... but is there a better (cleaner) way? | open | 2024-08-23T07:54:48Z | 2024-09-07T12:38:38Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2267 | [] | AttaM1rza | 1 |
littlecodersh/ItChat | api | 754 | How to implement remote login | When running itchat locally, the QR code shown at login is only valid on the current network, so it cannot be scanned remotely.
How can I get around this?

| open | 2018-11-04T05:12:34Z | 2019-06-23T06:55:33Z | https://github.com/littlecodersh/ItChat/issues/754 | [] | wqw547243068 | 3 |
JoeanAmier/XHS-Downloader | api | 7 | My Windows machine cannot run it; the output is stuck at ←[0m←[38 ;5;32;48;5;234m. What is going wrong? |

Also, how do I obtain the cookie? | open | 2023-10-20T03:07:23Z | 2023-10-21T02:46:38Z | https://github.com/JoeanAmier/XHS-Downloader/issues/7 | [] | quchifanma | 3 |
indico/indico | flask | 6,113 | Allow events to be linked to room booking occurrences | ### Is your feature request related to a problem? Please describe.
Room bookings may have multiple occurrences but events cannot be linked to them. Events can only be linked to the room booking itself. This means that if you have a room booking with multiple occurrences, you can only link one event to it.
### Describe the solution you'd like
1. Change the database schema so that events are linked to room booking occurrences instead.
2. Implement a WTForm field to select among existing room booking occurrences.
3. Display this form field in the room booking area of the event.
### Additional context
This is part of a series of improvements that will more easily allow users to link events and existing room bookings (i.e. #6046).
| closed | 2023-12-22T10:48:16Z | 2024-03-11T09:57:05Z | https://github.com/indico/indico/issues/6113 | [
"enhancement"
] | OmeGak | 0 |
psf/requests | python | 6,007 | No Error when iteration over streamed content randomly stops | <!-- Summary. -->
## Expected Result
An error is thrown when iterating over streamed content and the connection is closed/timed out by server.
<!-- What you expected. -->
## Actual Result
Iterating over streamed content randomly thinks it has all the content when it doesn't.
<!-- What happened instead. -->
## Reproduction Steps
If you try to use this function to download a large file and the download speed is slow, it will sometimes eventually just think it has finished writing all the content when it has not. I have tried including different headers, and it didn't prevent this from happening. Sorry if this doesn't count as a bug; I cannot find any similar issues online.
edit: found that setting chunk_size=1 on a large video file will have the same effect as a slow connection.
```python
import requests

def requests_download(url: str, file_name: str):
    responce = requests.get(url=url, stream=True)
    with open(file_name, 'wb') as f:
        for chunk in responce.iter_content(chunk_size=1024):
            f.write(chunk)
```
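One defensive variant worth considering (a sketch; the chunk size and timeout values are arbitrary): raising for HTTP errors and comparing the bytes written against the Content-Length header at least turns a silently truncated body into an explicit failure:

```python
import requests

def requests_download(url: str, file_name: str) -> int:
    written = 0
    with requests.get(url, stream=True, timeout=(5, 30)) as r:
        r.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=65536):
                f.write(chunk)
                written += len(chunk)
    expected = r.headers.get("Content-Length")
    if expected is not None and written != int(expected):
        raise IOError(f"short read: got {written} of {expected} bytes")
    return written
```

When the server uses chunked encoding there is no Content-Length to compare against, so this only covers the plain case.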
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": "4.0.0"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "2.10"
},
"implementation": {
"name": "CPython",
"version": "3.9.5"
},
"platform": {
"release": "10",
"system": "Windows"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.25.1"
},
"system_ssl": {
"version": "101010bf"
},
"urllib3": {
"version": "1.26.4"
},
"using_pyopenssl": false
}
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
| closed | 2021-12-15T04:31:41Z | 2022-03-17T02:21:45Z | https://github.com/psf/requests/issues/6007 | [] | AlphaSlayer1964 | 1 |
tableau/server-client-python | rest-api | 1,573 | Enhancement request: Add support for embedding snowflake key pair authentication credentials in data sources | ## Description
I want to embed key pair authentication credentials after publishing a Snowflake-connected data source to Tableau Cloud using Tableau Server Client. | open | 2025-02-17T23:56:37Z | 2025-02-17T23:56:37Z | https://github.com/tableau/server-client-python/issues/1573 | [
"enhancement",
"needs investigation"
] | george-prg | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,492 | remove False from ping() for mysqlclient; need to keep it for pymysql but at the same time watch out for removal | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10489
see thread at https://github.com/PyMySQL/mysqlclient/discussions/651#discussioncomment-7308971
we need to remove the False call for mysqlclient immediately, and probably apply introspection to the pymysql version to make sure we are either sending False or we are checking that it no longer accepts an argument.
we should backport and release for 1.4.x also | closed | 2023-10-18T13:31:32Z | 2023-10-20T13:27:28Z | https://github.com/sqlalchemy/sqlalchemy/issues/10492 | [
"bug",
"mysql",
"near-term release"
] | zzzeek | 2 |
matplotlib/mplfinance | matplotlib | 560 | Update data in Tkinter application | Hello!
I'm sorry if this is the wrong forum to ask a question about this.
I have a TkInter application where I create an mplfinance plot to show some data of a certain ticker. I also want to be able to update this data. Currently I have a GUI class that handles all the visuals, then I have a method that adds a plot and then another method that tries to edit/refresh the figure with the new data.
Essentially it works a bit like the following
```py
class GUI(tk.Tk):
    # Some nonrelevant code
    def createPlot(self):
        self.fig, _ = mpf.plot(self.pdDataframe, returnfig=True)
        self.canvas = FigureCanvasTkAgg(self.fig, master=self)
        self.canvas.draw()
```
The createPlot works fine, but I want to create a method that updates/re-renders the plot. How would I go about doing that? Creating a new figure doesn't seem to work, since it gives me the following error: "Starting a Matplotlib GUI outside of the main thread will likely fail." Is there a workaround for this? | closed | 2022-10-12T10:56:05Z | 2022-10-19T21:10:33Z | https://github.com/matplotlib/mplfinance/issues/560 | [
"question"
] | eliasseverholt | 3 |
ivy-llc/ivy | numpy | 28,082 | Fix Frontend Failing Test: jax - stat.paddle.mean | To-do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-01-27T15:14:59Z | 2024-01-31T15:01:42Z | https://github.com/ivy-llc/ivy/issues/28082 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
Urinx/WeixinBot | api | 245 | Software crashes when sending a message | 4029534919478924547 滴雨的屋檐 -> 甜心宝贝我永远爱你: hi
Traceback (most recent call last):
  File "weixin.py", line 1219, in <module>
    webwx.start()
  File "weixin.py", line 36, in wrapper
    return fn(*args)
  File "weixin.py", line 1013, in start
    _thread.start_new_thread(self.listenMsgMode())
  File "weixin.py", line 900, in listenMsgMode
    self.handleMsg(r)
  File "weixin.py", line 801, in handleMsg
    ans = self._xiaodoubi(content) + '\n[微信机器人自动回复]'
TypeError: can't concat str to bytes | open | 2018-01-02T01:05:25Z | 2019-03-12T17:15:53Z | https://github.com/Urinx/WeixinBot/issues/245 | [] | xiegeyang | 7 |
dask/dask | pandas | 11395 | Adding columns from 2 dataframes will not compute in some instances; the fix for one instance seems to break the other | **Describe the issue**:
Upgrading to pandas 2 with existing code. We add some columns together as part of our process. Some data seems to require a compute mid-process, but that then causes other data to fail, which seems buggy: two errors are generated in the code below, and the fix for one set of data seems to break the process for the other.
**Minimal Complete Verifiable Example**:
```python
import pandas as pd
import dask.dataframe as dd
preds1=pd.DataFrame({
"prediction_probability":[1.0] * 2,
"prediction": [1,1],
"num_runs": [1,1],
"Idx":[1,4],
})
preds1=preds1.set_index("Idx")
ads1=pd.DataFrame({
"prediction_probability":[1.0] * 2,
"prediction": [1,1],
"num_runs": [1,1,],
"Idx":[1,4],
})
ads1=ads1.set_index("Idx")
preds2=pd.DataFrame({
"prediction_probability":[1.0] * 4,
"prediction": [1,1,1,1],
"num_runs": [1,1,1,1],
"Idx":[1,2,3,4],
})
preds2=preds2.set_index("Idx")
ads2=pd.DataFrame({
"prediction_probability":[1.0] * 2,
"prediction": [1,1],
"num_runs": [1,1],
"Idx":[1,2],
})
ads2=ads2.set_index("Idx")
# computing at end
# this works
preds_dd = dd.from_pandas(preds1)
ads_dd = dd.from_pandas(ads1)
preds_dd["prediction"] = preds_dd.prediction.add(
ads_dd.prediction, fill_value=0
)
print(preds_dd.compute())
# this fails
preds_dd = dd.from_pandas(preds2)
ads_dd = dd.from_pandas(ads2)
preds_dd["prediction"] = preds_dd.prediction.add(
ads_dd.prediction, fill_value=0
)
print(preds_dd.compute())
# extra compute in the middle on the series
# this fails
preds_dd = dd.from_pandas(preds1)
ads_dd = dd.from_pandas(ads1)
preds_dd["prediction"] = preds_dd.prediction.add(
ads_dd.prediction.compute(), fill_value=0
)
print(preds_dd.compute())
# this works
preds_dd = dd.from_pandas(preds1)
ads_dd = dd.from_pandas(ads1)
preds_dd["prediction"] = preds_dd.prediction.add(
ads_dd.prediction.compute(), fill_value=0
)
print(preds_dd.compute())
```
**Anything else we need to know?**:
stack trace
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[6], line 6
2 ads_dd = dd.from_pandas(ads1)
3 preds_dd["prediction"] = preds_dd.prediction.add(
4 ads_dd.prediction.compute(), fill_value=0
5 )
----> 6 print(preds_dd.compute())
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_collection.py:476, in FrameBase.compute(self, fuse, **kwargs)
474 if not isinstance(out, Scalar):
475 out = out.repartition(npartitions=1)
--> 476 out = out.optimize(fuse=fuse)
477 return DaskMethodsMixin.compute(out, **kwargs)
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_collection.py:591, in FrameBase.optimize(self, fuse)
573 def optimize(self, fuse: bool = True):
574 """Optimizes the DataFrame.
575
576 Runs the optimizer with all steps over the DataFrame and wraps the result in a
(...)
589 The optimized Dask Dataframe
590 """
--> 591 return new_collection(self.expr.optimize(fuse=fuse))
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:94, in Expr.optimize(self, **kwargs)
93 def optimize(self, **kwargs):
---> 94 return optimize(self, **kwargs)
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:3070, in optimize(expr, fuse)
3049 """High level query optimization
3050
3051 This leverages three optimization passes:
(...)
3066 optimize_blockwise_fusion
3067 """
3068 stage: core.OptimizerStage = "fused" if fuse else "simplified-physical"
-> 3070 return optimize_until(expr, stage)
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:3031, in optimize_until(expr, stage)
3028 return expr
3030 # Lower
-> 3031 expr = expr.lower_completely()
3032 if stage == "physical":
3033 return expr
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_core.py:447, in Expr.lower_completely(self)
445 lowered = {}
446 while True:
--> 447 new = expr.lower_once(lowered)
448 if new._name == expr._name:
449 break
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_core.py:402, in Expr.lower_once(self, lowered)
399 expr = self
401 # Lower this node
--> 402 out = expr._lower()
403 if out is None:
404 out = expr
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:3435, in MaybeAlignPartitions._lower(self)
3432 def _lower(self):
3433 # This can be expensive when something that has expensive division
3434 # calculation is in the Expression
-> 3435 dfs = self.args
3436 if (
3437 len(dfs) == 1
3438 or all(
(...)
3441 or len(self.divisions) == 2
3442 ):
3443 return self._expr_cls(*self.operands)
File ~/.pyenv/versions/3.10.14/lib/python3.10/functools.py:981, in cached_property.__get__(self, instance, owner)
979 val = cache.get(self.attrname, _NOT_FOUND)
980 if val is _NOT_FOUND:
--> 981 val = self.func(instance)
982 try:
983 cache[self.attrname] = val
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:3430, in MaybeAlignPartitions.args(self)
3427 @functools.cached_property
3428 def args(self):
3429 dfs = [op for op in self.operands if isinstance(op, Expr)]
-> 3430 return [op for op in dfs if not is_broadcastable(dfs, op)]
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:3430, in <listcomp>(.0)
3427 @functools.cached_property
3428 def args(self):
3429 dfs = [op for op in self.operands if isinstance(op, Expr)]
-> 3430 return [op for op in dfs if not is_broadcastable(dfs, op)]
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:3086, in is_broadcastable(dfs, s)
3081 except (TypeError, ValueError):
3082 return False
3084 return (
3085 s.ndim == 1
-> 3086 and s.npartitions == 1
3087 and s.known_divisions
3088 and any(compare(s, df) for df in dfs if df.ndim == 2)
3089 or s.ndim == 0
3090 )
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:398, in Expr.npartitions(self)
396 return self.operands[idx]
397 else:
--> 398 return len(self.divisions) - 1
File ~/.pyenv/versions/3.10.14/lib/python3.10/functools.py:981, in cached_property.__get__(self, instance, owner)
979 val = cache.get(self.attrname, _NOT_FOUND)
980 if val is _NOT_FOUND:
--> 981 val = self.func(instance)
982 try:
983 cache[self.attrname] = val
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:382, in Expr.divisions(self)
380 @functools.cached_property
381 def divisions(self):
--> 382 return tuple(self._divisions())
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:2633, in Binop._divisions(self)
2631 return tuple(self.operation(left_divisions, right_divisions))
2632 else:
-> 2633 return super()._divisions()
File ~/git/ukho-bathymetry-ml/ukho-bathymetry-ml/.venv/lib/python3.10/site-packages/dask_expr/_expr.py:530, in Blockwise._divisions(self)
528 for arg in dependencies:
529 if not self._broadcast_dep(arg):
--> 530 assert arg.divisions == dependencies[0].divisions
531 return dependencies[0].divisions
AssertionError:
```
**Environment**:
- Dask version: 2024.8.2
- Python version: 3.10.14
 - Operating System: Ubuntu
 - Install method (conda, pip, source): Poetry
| open | 2024-09-18T10:23:45Z | 2025-03-10T01:51:01Z | https://github.com/dask/dask/issues/11395 | [
"needs attention",
"dask-expr"
] | JimHBeam | 1 |
microsoft/qlib | machine-learning | 943 | Bug in TCTS model | https://github.com/microsoft/qlib/blob/4dc66932d571fe6008db403f5780317ff2a3d09f/qlib/contrib/model/pytorch_tcts.py#L146-L148
Why set `p.init_fore_model = False` in line 148? This field makes no sense and is not used anywhere else.
I think it is a typo and should be `p.requires_grad = False`, since `init_fore_model` does not stop the parameters from being updated when training `fore_model`.
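To illustrate why the line as written is a no-op, here is a conceptual stand-in (not the actual qlib/torch code): assigning an arbitrary attribute such as `init_fore_model` to a parameter object is simply ignored by the optimizer, whereas a `requires_grad`-style flag actually freezes the parameter.

```python
# Stand-in Parameter class mimicking the relevant torch behaviour:
# optimizers only respect the requires_grad flag, never ad-hoc attributes.

class Parameter:
    def __init__(self, value):
        self.value = value
        self.requires_grad = True  # mirrors torch.nn.Parameter's default

def sgd_step(params, grads, lr=0.1):
    # Only parameters with requires_grad=True are updated, as an optimizer would.
    for p, g in zip(params, grads):
        if p.requires_grad:
            p.value -= lr * g

params = [Parameter(1.0), Parameter(2.0)]

# Line 148 as written: creates a brand-new attribute that nothing ever reads.
for p in params:
    p.init_fore_model = False
sgd_step(params, [1.0, 1.0])
print([p.value for p in params])  # both still updated: [0.9, 1.9]

# The suspected intent: actually freeze the forecast model's parameters.
for p in params:
    p.requires_grad = False
sgd_step(params, [1.0, 1.0])
print([p.value for p in params])  # unchanged this time: [0.9, 1.9]
```

The first step updates the parameters despite `init_fore_model = False`; only the second (`requires_grad = False`) freezes them.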
Even though it is a typo, the result of TCTS should be unchanged, because only the parameters in `fore_optimizer` are updated. | closed | 2022-03-02T09:46:10Z | 2022-06-07T09:02:03Z | https://github.com/microsoft/qlib/issues/943 | [
"stale"
] | Tribleave | 2 |
stanford-oval/storm | nlp | 260 | Automatic Evaluation: Soft Heading Recall | Hi, I'm wondering whether your team has implemented the **"Soft Heading Recall"** metric. I'm very interested in it, as I'm looking for a metric that can evaluate the similarity of article outlines.
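For concreteness, here is a sketch of the kind of metric I mean (assumptions mine: cosine similarity between heading embeddings, and a gold heading counts as "recalled" if its best match among the predicted headings exceeds a threshold; the bag-of-words embedding is just a stand-in so the example runs — in practice one would use sentence embeddings):

```python
import math
from collections import Counter

def bow_embed(text):
    # Toy bag-of-words embedding; swap in a sentence encoder for real use.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[k] * b[k] for k in a if k in b)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def soft_heading_recall(gold_headings, pred_headings, embed=bow_embed, threshold=0.5):
    # Fraction of gold headings whose best-matching predicted heading
    # clears the similarity threshold.
    if not gold_headings:
        return 1.0
    gold_vecs = [embed(h) for h in gold_headings]
    pred_vecs = [embed(h) for h in pred_headings]
    recalled = sum(
        1 for g in gold_vecs
        if pred_vecs and max(cosine(g, p) for p in pred_vecs) >= threshold
    )
    return recalled / len(gold_headings)

gold = ["Early life", "Career", "Awards and honors"]
pred = ["Early life and education", "Professional career"]
print(soft_heading_recall(gold, pred))  # 2 of 3 recalled -> ~0.667
```

Is this roughly the shape of what your team implemented, or does the official version differ (e.g. in the similarity model or the threshold)?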
I've implemented a version, but I'm not sure it's right. I'd be grateful if you could provide me with that code. | closed | 2024-11-28T15:15:38Z | 2024-11-29T05:22:44Z | https://github.com/stanford-oval/storm/issues/260 | [] | nkwejj | 1 |
gradio-app/gradio | python | 10,818 | Bitdefender Issue | ### Describe the bug

### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
I have no clue
### Screenshot

### Logs
```shell
```
### System Info
```shell
PS G:\Projects2\Imagen Edit> gradio environment
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.15.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.7
ffmpy: 0.4.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.29.3
jinja2: 3.1.6
markupsafe: 2.1.5
numpy: 1.24.3
orjson: 3.10.7
packaging: 23.2
pandas: 2.2.3
pillow: 10.0.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.18
pyyaml: 6.0.2
ruff: 0.9.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.12.0
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.10.0
httpx: 0.28.1
huggingface-hub: 0.29.3
packaging: 23.2
typing-extensions: 4.12.2
websockets: 14.2
PS G:\Projects2\Imagen Edit>
```
### Severity
Blocking usage of gradio | open | 2025-03-17T17:42:00Z | 2025-03-17T19:20:36Z | https://github.com/gradio-app/gradio/issues/10818 | [
"bug"
] | PierrunoYT | 1 |
pydata/xarray | numpy | 9,690 | Temporal polyfit does not use the same coordinate as polyval | ### What happened?
When fitting a variable along a temporal dimension, results from computing the fitted "trend" with `xr.polyval` were very different from the original variable. It seems that the function is not evaluated on the same coordinate the fit was made on.
### What did you expect to happen?
The fit should be made on the same coordinate as the evaluation.
### Minimal Complete Verifiable Example
```Python
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
da = xr.DataArray(np.arange(100), dims=('time',), coords={'time': xr.date_range('2001-01-01', periods=100, freq='YS')})
fit = da.polyfit('time', deg=3)
val = xr.polyval(da.time, fit.polyfit_coefficients)
print(da[0], val[0])
# 0, 31.00174342
# The data is linear, the fit should be exact.
plt.plot(da.time, da, label='Original')
plt.plot(da.time, val, label='Fit')
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
`xr.polyval` transforms the temporal coordinate here : https://github.com/pydata/xarray/blob/dbb98b4a40d1679800f7f85d0dad59ef60b5b790/xarray/core/computation.py#L2070
And `da.polyfit` does that here : https://github.com/pydata/xarray/blob/dbb98b4a40d1679800f7f85d0dad59ef60b5b790/xarray/core/dataset.py#L9069-L9077
The results are similar, however, in `polyfit` the new `x` starts at 0, while this offsetting is not done in `polyval`.
This bug was introduced in #9369, I believe (I even added a review there, woops :sweat:). Would it be possible to use the same function in both `polyfit` and `polyval`?
I can make a PR. My intuition would be to use `xr.core.computation._ensure_numeric` in `da.polyfit`.
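To show the arithmetic behind the mismatch, here is a pure-Python sketch (not xarray's actual code paths, and assuming `polyval` uses absolute nanoseconds while `polyfit` subtracts the first timestamp): fitting on an offset axis but evaluating on the absolute axis shifts the result by roughly slope times the epoch offset, which is where the ~31 in the example above comes from.

```python
from datetime import datetime, timezone

times = [datetime(2001 + i, 1, 1, tzinfo=timezone.utc) for i in range(5)]
ns = [t.timestamp() * 1e9 for t in times]   # polyval-style x: absolute nanoseconds
y = list(range(5))                          # (almost) perfectly linear data

def linfit(x, y):
    # Ordinary least squares for y = a*x + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# polyfit-style x: offset so the first timestamp maps to 0.
x_fit = [v - ns[0] for v in ns]
a, b = linfit(x_fit, y)

# Evaluating on the *offset* axis reproduces the data ...
fit_on_offset = [a * xi + b for xi in x_fit]
# ... but evaluating on the *absolute* axis (what polyval does) is shifted
# by a * ns[0], i.e. slope times the 2001 epoch offset.
fit_on_absolute = [a * xi + b for xi in ns]

print(fit_on_offset[0])    # ~0.0, matches the data
print(fit_on_absolute[0])  # ~31, like the 31.00 seen in the MVCE
```

With yearly data starting in 2001, slope times the epoch offset works out to about 31 years' worth of trend, matching the discrepancy in the MVCE, so using the same conversion (e.g. `_ensure_numeric`) in both places should make the fit exact again.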
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0]
python-bits: 64
OS: Linux
OS-release: 6.11.5-200.fc40.x86_64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: fr_CA.UTF-8
LOCALE: ('fr_CA', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.10.0
pandas: 2.2.3
numpy: 2.0.2
scipy: 1.14.1
netCDF4: 1.7.1
pydap: None
h5netcdf: 1.4.0
h5py: 3.12.1
zarr: None
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.4.2
dask: 2024.10.0
distributed: 2024.10.0
matplotlib: 3.9.2
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: 0.24.3
sparse: 0.16.0a10.dev3+ga73b20d
flox: 0.9.12
numpy_groupies: 0.11.2
setuptools: 75.1.0
pip: 24.2
conda: None
pytest: 8.3.3
mypy: 1.13.0
IPython: 8.29.0
sphinx: 8.1.3
</details>
| closed | 2024-10-28T15:09:43Z | 2024-10-29T23:35:20Z | https://github.com/pydata/xarray/issues/9690 | [
"bug",
"regression"
] | aulemahal | 0 |