| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
dnouri/nolearn | scikit-learn | 181 | Trying to use Lasagne **tags when creating a Neural Network | I'm currently trying to set `trainable=False` on the base layer class for the neural network I am setting up using Lasagne (http://lasagne.readthedocs.org/en/latest/modules/layers/base.html). When I try to pass it as a parameter when I set up my neural net, I get the following error:
TypeError: Failed to instantiate <class 'lasagne.layers.conv.Conv2DLayer'> with args {'incoming': <lasagne.layers.input.InputLayer object at 0x110b05c90>, 'name': 'conv2d1', 'trainable': False, 'filter_size': 9, 'pad': 'valid', 'num_filters': 16}.
Maybe parameter names have changed?
It appears that when nolearn constructs the neural network in lasagne, it's not passing the tags correctly. Is there something I am missing?
The snippet of my code that is failing is:
```python
layers = [
    (InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
    (Conv2DLayerFast, {'num_filters': conv_num_filters, 'filter_size': filter_size1,
                       'pad': pad_in, 'trainable': False}),
]
```
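For context, here is a simplified stand-in sketched from the Lasagne docs (not nolearn code): in Lasagne, `trainable` is a parameter *tag* consumed by `add_param`, not a layer-constructor keyword, which would explain the `TypeError` above. Freezing a layer is then typically done after construction by editing its tag set:

```python
# Simplified stand-in for Lasagne's parameter-tag mechanism, based on
# lasagne.layers.Layer.add_param -- NOT nolearn/Lasagne source code.
# Each parameter maps to a set of tag strings; get_all_params filters on tags.
class Layer:
    def __init__(self):
        self.params = {}  # param -> set of tag names

    def add_param(self, param, **tags):
        tags.setdefault('trainable', True)
        tags.setdefault('regularizable', True)
        self.params[param] = {k for k, v in tags.items() if v}
        return param

def get_all_params(layer, **tags):
    return [p for p, t in layer.params.items()
            if all((k in t) == v for k, v in tags.items())]

layer = Layer()
w = layer.add_param('W')
layer.params[w].discard('trainable')          # freeze after construction
print(get_all_params(layer, trainable=True))  # -> []
```

If that matches the real API, the tag kwarg cannot be forwarded to the layer constructor; a user-side workaround is to remove the tag from the built layer's `params` entry as above.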
Thanks
| closed | 2015-11-23T03:17:30Z | 2016-01-21T22:26:58Z | https://github.com/dnouri/nolearn/issues/181 | [] | caleytown | 4 |
cleanlab/cleanlab | data-science | 403 | color_sentence fails in tutorial notebook |
In the notebook, `display_issues` highlights all token issues with a call to `color_sentence`:
https://github.com/cleanlab/cleanlab/blob/1a239922fe195d2a6104d6dc3552d53da16380ce/docs/source/tutorials/token_classification.ipynb?short_path=2ebceca#L369-L379
One of the examples trips everything up with the following error:
```
missing ), unterminated subpattern at position 2
```
# Stack trace
From [failed CI job](https://github.com/cleanlab/cleanlab/actions/runs/2996555945):
<details><summary> Click to toggle stack trace</summary>
```bash
---------------------------------------------------------------------------
error Traceback (most recent call last)
Input In [12], in <module>
----> 1 display_issues(issues,given_words,pred_probs=pred_probs,given_labels=labels,
2 exclude=[(0,1),(1,0)],class_names=merged_entities)
File ~/work/cleanlab/cleanlab/cleanlab/token_classification/summary.py:81, in display_issues(issues, given_words, pred_probs, given_labels, exclude, class_names, top)
78 given = class_names[given]
80 shown += 1
---> 81 print("Sentence %d, token %d: \n%s" % (i, j, color_sentence(sentence,word)))
82 if given_labels and not pred_probs:
83 print("Given label: %s\n" % str(given))
File ~/work/cleanlab/cleanlab/cleanlab/internal/token_classification_utils.py:175, in color_sentence(sentence, word)
158 """
159 Searches for a given token in the sentence and returns the sentence where the given token is colored red
160
(...)
172
173 """
174 colored_word = colored(word, "red")
--> 175 colored_sentence, number_of_substitions = re.subn(
176 r"\b{}\b".format(word),colored_word,sentence
177 )
178 if number_of_substitions == 0:
179 # Use basic string manipulation if regex fails
180 colored_sentence = sentence.replace(word, colored_word)
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/re.py:221, in subn(pattern, repl, string, count, flags)
212 def subn(pattern, repl, string, count=0, flags=0):
213 """Return a 2-tuple containing (new_string, number).
214 new_string is the string obtained by replacing the leftmost
215 non-overlapping occurrences of the pattern in the source
(...)
219 If it is a callable, it's passed the Match object and must
220 return a replacement string to be used."""
--> 221 return _compile(pattern,flags).subn(repl, string, count)
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/re.py:304, in _compile(pattern, flags)
302 if not sre_compile.isstring(pattern):
303 raise TypeError("first argument must be string or compiled pattern")
--> 304 p = sre_compile.compile(pattern,flags)
305 if not (flags & DEBUG):
306 if len(_cache) >= _MAXCACHE:
307 # Drop the oldest item
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_compile.py:764, in compile(p, flags)
762 if isstring(p):
763 pattern = p
--> 764 p = sre_parse.parse(p,flags)
765 else:
766 pattern = None
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:948, in parse(str, flags, state)
945 state.str = str
947 try:
--> 948 p = _parse_sub(source,state,flags&SRE_FLAG_VERBOSE,0)
949 except Verbose:
950 # the VERBOSE flag was switched on inside the pattern. to be
951 # on the safe side, we'll parse the whole thing again...
952 state = State()
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:443, in _parse_sub(source, state, verbose, nested)
441 start = source.tell()
442 while True:
--> 443 itemsappend(_parse(source,state,verbose,nested+1,
444 notnestedandnotitems))
445 if not sourcematch("|"):
446 break
File /opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/sre_parse.py:836, in _parse(source, state, verbose, nested, first)
834 p = _parse_sub(source, state, sub_verbose, nested + 1)
835 if not source.match(")"):
--> 836 raise source.error("missing ), unterminated subpattern",
837 source.tell() - start)
838 if group is not None:
839 state.closegroup(group, p)
error: missing ), unterminated subpattern at position 2
```
</details>
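The failing pattern is built by interpolating the raw token into the regex, so a token containing a metacharacter such as `(` yields an invalid pattern. A minimal sketch of the likely fix, escaping the token with `re.escape` (a hypothetical helper mirroring `color_sentence`'s structure):

```python
import re

RED, RESET = "\x1b[31m", "\x1b[0m"

def color_word(sentence: str, word: str) -> str:
    colored = f"{RED}{word}{RESET}"
    # re.escape keeps metacharacters in `word` (e.g. "(") from being parsed
    # as regex syntax, which is what raises
    # "missing ), unterminated subpattern".
    result, n = re.subn(r"\b{}\b".format(re.escape(word)), colored, sentence)
    if n == 0:
        # No word-boundary match (e.g. the token is pure punctuation):
        # fall back to plain substring replacement.
        result = sentence.replace(word, colored)
    return result

print(color_word("an unbalanced ( token", "("))
```

With the escape in place, a parenthesis token no longer crashes the regex compile and is handled by the existing string-replace fallback instead.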
| closed | 2022-09-06T10:51:14Z | 2022-09-06T16:29:01Z | https://github.com/cleanlab/cleanlab/issues/403 | [
"bug"
] | elisno | 2 |
Yorko/mlcourse.ai | plotly | 345 | /assignments_demo/assignment04_habr_popularity_ridge.ipynb - Typo in the assignment text | "Initialize DictVectorizer with default parameters.
Apply the fit_transform method to X_train['title'] and the transform method to X_valid['title'] and X_test['title']"
Most likely this is a typo: it should be X_train[feats], X_valid[feats], X_test[feats] | closed | 2018-07-19T10:19:13Z | 2018-08-04T16:07:08Z | https://github.com/Yorko/mlcourse.ai/issues/345 | [
"minor_fix"
] | pavel-petkun | 1 |
ydataai/ydata-profiling | jupyter | 1,498 | Getting requirements to build wheel did not run successfully (ydata-profiling 4.6 error) | ### Current Behaviour
I was installing ydata-profiling from a downloaded copy of the repository and got this error:
### Expected Behaviour
It should download and install successfully.
### Data Description
.
### Code that reproduces the bug
```Python
C:\Users\Usuario>pip install "C:\Users\Usuario\Downloads\ydata-profiling-develop"
Processing c:\users\usuario\downloads\ydata-profiling-develop
Preparing metadata (setup.py) ... done
Requirement already satisfied: scipy<1.12,>=1.4.1 in c:\users\usuario\appdata\local\programs\python\python312\lib\site-packages (from ydata-profiling==0.0.dev0) (1.11.3)
Collecting pandas!=1.4.0,<2.1,>1.1 (from ydata-profiling==0.0.dev0)
Using cached pandas-2.0.3.tar.gz (5.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting matplotlib<=3.7.3,>=3.2 (from ydata-profiling==0.0.dev0)
Using cached matplotlib-3.7.3-cp312-cp312-win_amd64.whl.metadata (5.8 kB)
Requirement already satisfied: pydantic>=2 in c:\users\usuario\appdata\local\programs\python\python312\lib\site-packages (from ydata-profiling==0.0.dev0) (2.4.2)
Requirement already satisfied: PyYAML<6.1,>=5.0.0 in c:\users\usuario\appdata\local\programs\python\python312\lib\site-packages (from ydata-profiling==0.0.dev0) (6.0.1)
Requirement already satisfied: jinja2<3.2,>=2.11.1 in c:\users\usuario\appdata\local\programs\python\python312\lib\site-packages (from ydata-profiling==0.0.dev0) (3.1.2)
Requirement already satisfied: visions==0.7.5 in c:\users\usuario\appdata\local\programs\python\python312\lib\site-packages (from visions[type_image_path]==0.7.5->ydata-profiling==0.0.dev0) (0.7.5)
Collecting numpy<1.26,>=1.16.0 (from ydata-profiling==0.0.dev0)
Using cached numpy-1.25.2.tar.gz (10.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [33 lines of output]
Traceback (most recent call last):
File "C:\Users\Usuario\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\Usuario\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 112, in get_requires_for_build_wheel
backend = _build_backend()
^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\AppData\Local\Programs\Python\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Usuario\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1304, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 929, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 994, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\Users\Usuario\AppData\Local\Temp\pip-build-env-20xx808f\overlay\Lib\site-packages\setuptools\__init__.py", line 16, in <module>
import setuptools.version
File "C:\Users\Usuario\AppData\Local\Temp\pip-build-env-20xx808f\overlay\Lib\site-packages\setuptools\version.py", line 1, in <module>
import pkg_resources
File "C:\Users\Usuario\AppData\Local\Temp\pip-build-env-20xx808f\overlay\Lib\site-packages\pkg_resources\__init__.py", line 2172, in <module>
register_finder(pkgutil.ImpImporter, find_on_path)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
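For context, the `AttributeError` at the bottom of the trace is what an old vendored `pkg_resources` produces under Python 3.12, where the long-deprecated `pkgutil.ImpImporter` was removed (pip is building a numpy sdist whose pinned build backend predates that removal). A quick stdlib check:

```python
import pkgutil
import sys

# pkgutil.ImpImporter was deprecated since Python 3.3 and removed in 3.12;
# vendored copies of pkg_resources from older setuptools still reference it.
has_imp_importer = hasattr(pkgutil, "ImpImporter")
print(sys.version_info[:2], has_imp_importer)
```

The usual way out (an assumption, not verified here) is to upgrade pip/setuptools and install package versions that ship cp312 wheels, so that no sdist build is attempted.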
### pandas-profiling version
4.6
### Dependencies
```Text
pandas 2.1.2
numpy 1.26.1
setuptools 68.2.2
pip 23.3.1
```
### OS
Windows 10, using cmd
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-11-12T20:10:07Z | 2023-12-04T19:31:44Z | https://github.com/ydataai/ydata-profiling/issues/1498 | [
"information requested ❔"
] | luch3x | 1 |
pydantic/pydantic-ai | pydantic | 129 | How can one cache the tool calls? | open | 2024-12-03T12:27:48Z | 2025-01-24T14:30:56Z | https://github.com/pydantic/pydantic-ai/issues/129 | [
"Feature request",
"caching"
] | pedroallenrevez | 5 | |
d2l-ai/d2l-en | deep-learning | 2,107 | Evaluation results of DenseNet TF look wrong | http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_convolutional-modern/densenet.html

sqlalchemy/alembic | sqlalchemy | 1,482 | command.upgrade(alembic_cfg, "head") never returns and blocks the process indefinitely. | **Bug Description**:
Running `command.upgrade(alembic_cfg, "head")` never returns and blocks the process indefinitely.
**Expected Behavior**:
The process should complete and return, either successfully or with a failure.
**To Reproduce**:
```python
def run_migrations():
try:
logger.info("Starting database migrations")
alembic_cfg = Config("alembic.ini")
command.upgrade(alembic_cfg, "head")
logger.info("Database migrations completed successfully")
except Exception as e:
logger.error(f"Migration failed: {e}")
raise
```
**Environment**:
- **OS**: macOS 14.15
- **Python**: 3.12
- **Alembic**: 1.13.1
- **SQLAlchemy**: 2.0.30
- **Database**: SQLite
- **DBAPI**: SQLModel 2.0.0
**Additional Context**:
The following workaround using `subprocess` works:
```python
def run_migrations():
try:
logger.info("Starting database migrations")
# Run the Alembic upgrade command using subprocess
result = subprocess.run(["alembic", "upgrade", "head"], capture_output=True, text=True)
if result.returncode != 0:
logger.error(f"Alembic upgrade failed: {result.stderr}")
raise RuntimeError(f"Alembic upgrade failed: {result.stderr}")
logger.info("Database migrations completed successfully")
except Exception as e:
logger.error(f"Migration failed: {e}")
raise
finally:
# Ensure the engine is disposed
engine.dispose()
logger.info("Disposed of the engine to close all connections.")
```
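The `engine.dispose()` in the workaround suggests lock contention (an assumption, since the report includes no trace): SQLite permits a single writer, so if the application engine still holds an open write transaction, Alembic's migration connection waits on the lock. A minimal stdlib sketch of that contention:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

writer = sqlite3.connect(path, timeout=0.1)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()
writer.execute("INSERT INTO t VALUES (1)")  # opens a write txn, lock held

other = sqlite3.connect(path, timeout=0.1)  # stands in for Alembic's connection
try:
    other.execute("INSERT INTO t VALUES (2)")
    locked = False
except sqlite3.OperationalError:            # "database is locked"
    locked = True
print(locked)  # -> True
```

In a larger app, the second writer retrying on the busy lock can look like a hang; disposing the engine before running migrations releases the lock.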
**Have a nice day!** | closed | 2024-06-01T07:51:15Z | 2024-06-01T08:06:26Z | https://github.com/sqlalchemy/alembic/issues/1482 | [] | oefterdal | 0 |
strawberry-graphql/strawberry-django | graphql | 315 | Add strawberry django auth implementation to doc | ## Add Strawberry DJango Auth tutorials in the doc
## Description
It would be helpful to add a tutorial for the [Django Auth implementation](https://github.com/nrbnlulu/strawberry-django-auth) to [/guide/authentication](https://strawberry-graphql.github.io/strawberry-graphql-django/guide/authentication/).
It would be beneficial for newcomers to integrate strawberry-django-auth into a Strawberry Django application, especially on:
1. Authorization
- manage permissions / authorization
- determine if the user has the privilege to access certain resources
2. Authentication
- determine if the user is logged in
- determine | open | 2023-07-19T10:40:18Z | 2025-03-20T15:57:15Z | https://github.com/strawberry-graphql/strawberry-django/issues/315 | [
"documentation",
"help wanted"
] | Skyquek | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,058 | Different account names for 2FA codes depending on how it is invoked | Hi!
We are facing issues with the MFA setup, and cannot get it to work properly.
We have tried two different scenarios: 1 - User selects MFA voluntarily from account, 2 - System enforces MFA
Scenario 1
If enforcement of MFA is not demanded by the admin, the user can select the option from the user settings. A QR code is shown, and when scanning it, it pulls the name of the portal and adds it to the authenticator app, but when entering the code to verify the connection, nothing happens - the option is not saved.
Scenario 2:
MFA enforcement is set by admin and when user is logging in, he is prompted to scan a QR code. This code ONLY adds the name GlobaLeaks to the authenticator, but not the name of the site (as in previous scenario). When entering the presented token, it states that it is invalid/expired.
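For reference, as background on how TOTP codes expire rather than a diagnosis of this deployment: an "invalid/expired" result on a freshly scanned token is commonly clock drift between server and phone, which RFC 6238 verifiers absorb by also accepting adjacent time steps:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, for_time: float, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP over the number of `step`-second intervals since the epoch.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(for_time // step))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32: str, code: str, now: float, drift_steps: int = 1) -> bool:
    # Accept neighbouring windows so small clock drift doesn't reject valid codes.
    return any(totp(secret_b32, now + d * 30) == code
               for d in range(-drift_steps, drift_steps + 1))

secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, 59))  # -> "287082" (RFC 6238 test secret, T=59)
```

If the server clock is off by more than the accepted drift window, every token the phone shows will be rejected as invalid or expired.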
We are caught in a 'Catch 22'
What to do? | closed | 2021-09-29T07:52:42Z | 2021-09-30T15:55:09Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3058 | [] | schris-dk | 8 |
baoliay2008/lccn_predictor | pydantic | 77 | Select and deselect question on graph | Please add an option to select or deselect a question in the graph, such that it zooms in onto the selected questions.
This way we can look at the individual graphs; usually Q4 has fewer ACs, so we can't look at its graph in detail.
For example, if we deselect Q1, Q2, and Q3, the contest graph will focus on Q4 and rescale to give more clarity. | open | 2024-06-22T17:54:59Z | 2024-11-03T06:32:09Z | https://github.com/baoliay2008/lccn_predictor/issues/77 | [
"enhancement"
] | 21Cash | 1 |
praw-dev/praw | api | 2,012 | Add ability to instantiate praw with access_token | ### Describe the solution you'd like
Currently, we can instantiate a Reddit instance with the `refresh_token`, but when you need to handle potentially multiple users/accounts (such as in a web application), or need to run praw in an isolated context (e.g. in a worker as part of a job, where we cannot 'persist' the Reddit instance), you necessarily need a new "instance" of Reddit (as a thin Reddit API client layer) for each request/worker context.
And having to re-instantiate with `refresh_token` only means we necessarily need to make an extra call in each "context" to get the access token from the Reddit API.
Can we add the ability to instantiate it with access token (_and_ refresh token) as well (so that when access token isn't expired, praw can just use it to make API requests directly without having to make that extra call)?
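Independent of PRAW's internals, the request boils down to caching the short-lived access token across request/worker contexts. A minimal sketch of that pattern (a hypothetical helper, not PRAW API):

```python
import time

class TokenCache:
    """Reuse an OAuth2 access token until shortly before it expires."""

    def __init__(self, fetch):
        self._fetch = fetch          # callable -> (access_token, ttl_seconds)
        self._token = None
        self._expires_at = 0.0

    def token(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - 30:  # 30 s margin
            self._token, ttl = self._fetch()
            self._expires_at = now + ttl
        return self._token

calls = []
def fetch():
    calls.append(1)                  # stands in for the token-endpoint call
    return f"token-{len(calls)}", 3600

cache = TokenCache(fetch)
print(cache.token(now=0), cache.token(now=100), cache.token(now=4000))
# -> token-1 token-1 token-2
```

Persisting the cached token (plus expiry) wherever the contexts share state avoids the extra token-endpoint round trip on every request.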
Thanks.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-01-09T23:53:16Z | 2024-07-01T12:31:08Z | https://github.com/praw-dev/praw/issues/2012 | [
"Stale",
"Auto-closed - Stale"
] | JaneJeon | 13 |
PokemonGoF/PokemonGo-Bot | automation | 5,441 | No module named UpdateLiveInventory | ### Expected Behavior
<!-- Tell us what you expect to happen -->
Submit pull request
### Actual Behavior
<!-- Tell us what is happening -->
Getting this error, although I did not touch anything to do with UpdateLiveInventory.
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->
### Output when issue occurred
<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->
### Steps to Reproduce
<!-- Tell us the steps you have taken to reproduce the issue -->
Submit a pull request and travis will fail?
### Other Information
OS:
<!-- Tell us what Operating system you're using -->
Branch:
<!-- dev or master -->
Git Commit:
<!-- run 'git log -n 1 --pretty=format:"%H"' -->
Python Version:
<!-- run 'python -V' and paste it here) -->
Any other relevant files/configs (eg: path files)
<!-- Anything else which may be of relevance -->
| closed | 2016-09-14T00:08:14Z | 2016-09-14T00:23:32Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5441 | [] | javajohnHub | 3 |
zappa/Zappa | django | 1,000 | AttributeError: 'Template' object has no attribute 'add_description' | Error on deploy:
```
    self.cf_template.add_description("Automatically generated with Zappa")
AttributeError: 'Template' object has no attribute 'add_description'
```
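The missing attribute matches the rename in troposphere's `Template` API, where `add_description` became `set_description` (stated as an assumption, since the report doesn't pin a troposphere version). A version-tolerant call can be sketched with stand-in classes:

```python
class NewTemplate:
    """Stands in for a troposphere release that only has set_description."""
    def set_description(self, d):
        self.description = d

class OldTemplate:
    """Stands in for an older release that only has add_description."""
    def add_description(self, d):
        self.description = d

def describe(template, text):
    # Prefer the new method name, fall back to the legacy one.
    setter = getattr(template, "set_description", None)
    if setter is None:
        setter = template.add_description
    setter(text)

for cls in (NewTemplate, OldTemplate):
    t = cls()
    describe(t, "Automatically generated with Zappa")
    print(type(t).__name__, t.description)
```

Alternatively, pinning the dependency to a version whose API the installed Zappa expects avoids the attribute error entirely.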
| closed | 2021-07-06T18:29:10Z | 2021-11-08T10:14:26Z | https://github.com/zappa/Zappa/issues/1000 | [] | MayaraMacielMatos | 23 |
onnx/onnx | deep-learning | 6,736 | Security risk around auto_update_doc.yml Github Action | ## Security Risk Description
I believe there is a potential security risk in auto_update_doc.yml. The workflow runs on `pull_request_target` with [contents:write](https://github.com/onnx/onnx/blob/3d5acaf3e23ae8db7ac01b8cfedb17b8817121f4/.github/workflows/auto_update_doc.yml#L22), and the only gating mechanism is the `auto update doc` label.
GitHub Actions label gating is vulnerable to a race condition, as described here: https://github.com/AdnaneKhan/ActionsTOCTOU?tab=readme-ov-file#label-gating-toctou. Thus, a user could submit a benign-looking PR, socially engineer an onnx member into assigning it the `auto update doc` label, and then exploit the TOCTOU window to execute malicious code within CI/CD. From there, they could abuse the extensive contents:write [permissions](https://docs.github.com/en/rest/authentication/permissions-required-for-github-apps?apiVersion=2022-11-28#repository-permissions-for-contents) for malicious purposes.
## Recommendation
Option 1. Deprecate the workflow. Based on my analysis it is not used frequently.
Option 2. Pull latest code using head.sha instead of head.ref as it is [now](https://github.com/onnx/onnx/blob/3d5acaf3e23ae8db7ac01b8cfedb17b8817121f4/.github/workflows/auto_update_doc.yml#L30). See for more details: https://0xn3va.gitbook.io/cheat-sheets/ci-cd/github/actions#confusion-between-head.ref-and-head.sha
| closed | 2025-02-27T23:15:44Z | 2025-03-23T19:47:04Z | https://github.com/onnx/onnx/issues/6736 | [
"vulnerability"
] | mshudrak | 11 |
tensorpack/tensorpack | tensorflow | 1,351 | tracing the GPU memory usage | Hi Yuxin, I'm training the FCOS model using Tensorpack. The following is some logging info.

It seems there is some bad GPU memory allocation somewhere. Is it possible to trace where the bad allocation happens? Or could we find the operations or tensors that lead to it? Thanks.
"unrelated"
] | Remember2018 | 2 |
jazzband/django-oauth-toolkit | django | 974 | Add project to django packages | Hi,
I've tried to add your project to https://djangopackages.org/grids/g/oidc/ as it looks like it supports OpenID Connect.
I hope I did well, I've marked it as a "provider"

Have a good day! | closed | 2021-05-04T10:00:20Z | 2021-10-23T01:10:35Z | https://github.com/jazzband/django-oauth-toolkit/issues/974 | [
"question"
] | HugoDelval | 0 |
miguelgrinberg/python-socketio | asyncio | 811 | Packets with binary are encoded with the wrong packet type when using msgpack | **Describe the bug**
When using the msgpack serializer, packets that contain binary data are encoded as BINARY_EVENT or BINARY_ACK, however, the Javascript Socket.IO implementation does not use these types when using msgpack. Since msgpack can already efficiently transmit binary data, the binary data is inlined (which does work in python-socketio) and the type is left as EVENT or ACK (which is wrong in python-socketio).
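The remapping described above can be sketched with the Socket.IO protocol's packet-type numbers (an illustration, not python-socketio's actual code):

```python
# Socket.IO packet types (per the Socket.IO protocol):
EVENT, ACK, BINARY_EVENT, BINARY_ACK = 2, 3, 5, 6

def msgpack_packet_type(packet_type: int) -> int:
    # msgpack carries bytes natively, so the JS implementation inlines binary
    # attachments and never emits the BINARY_* variants; map them back to
    # their plain counterparts before encoding.
    return {BINARY_EVENT: EVENT, BINARY_ACK: ACK}.get(packet_type, packet_type)

print(msgpack_packet_type(BINARY_EVENT), msgpack_packet_type(ACK))  # -> 2 3
```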
**To Reproduce**
Steps to reproduce the behavior:
1. Configure a python-socketio server using serializer="msgpack"
2. Connect a Javascript client to it using the official Socket.IO implementation
3. Emit an event containing binary data from the server to the client
4. The client will fail to decode the message
**Expected behavior**
The BINARY_EVENT type should be encoded as EVENT, and the BINARY_ACK type should be encoded as ACK. | closed | 2021-10-26T03:15:53Z | 2021-10-27T23:29:44Z | https://github.com/miguelgrinberg/python-socketio/issues/811 | [
"bug"
] | ttarhan | 0 |
dmlc/gluon-cv | computer-vision | 1,400 | [Website] API ref gluoncv.model_zoo doesn't include ResNEXT | While GluonCV -> Model Zoo -> Classification shows ResNEXT: https://gluon-cv.mxnet.io/model_zoo/classification.html
The API reference guide, however, doesn't include `resnext` in
```
gluoncv.model_zoo.get_model()
```
https://gluon-cv.mxnet.io/api/model_zoo.html#gluoncv-model-zoo-get-model | closed | 2020-08-04T17:32:28Z | 2020-09-07T21:52:05Z | https://github.com/dmlc/gluon-cv/issues/1400 | [] | ChaiBapchya | 1 |
pallets/quart | asyncio | 125 | Quart OIDC Keycloak | Hello,
I followed the link: https://github.com/pgjones/quart/issues/103 and found it is closed.
I am trying to get my Quart app working with flask_oidc_ext, but it fails. Is it because there is no asyncio support in flask_oidc_ext? I use `import quart.flask_patch`, but it does not help. Any help is greatly appreciated.
test_oidc.py:
```
import quart.flask_patch # noqa
from quart import jsonify, Quart
from flask_oidc_ext import OpenIDConnect
app = Quart(__name__)
app.clients = set()
app.config.update({
'SECRET_KEY': 'SomethingNotEntirelySecret',
'OIDC_CLIENT_SECRETS': './client_secrets.json',
'OIDC_DEBUG': True,
'OIDC_ID_TOKEN_COOKIE_SECURE': False,
'OIDC_REQUIRE_VERIFIED_EMAIL': False,
'OIDC_USER_INFO_ENABLED': True,
'OIDC_SCOPES': ['openid', 'email', 'profile'],
'OIDC_INTROSPECTION_AUTH_METHOD': 'bearer'
})
oidc = OpenIDConnect(app)
@app.route("/")
@oidc.require_login
async def home():
print(f"!!!!!! HOME")
return jsonify(sucess=True)
if __name__ == '__main__':
app.run(host="localhost", port=8080, debug=True)
```
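One plausible reading of the failure (an assumption, since flask_oidc_ext is synchronous Flask code): `require_login` wraps the async view with a plain function, so the coroutine returned by `home()` reaches Quart's response machinery unawaited, hence `'coroutine' object is not iterable`. A hypothetical async-aware decorator would await it:

```python
import asyncio
import functools

def require_login(is_authenticated, redirect_to_login):
    # Hypothetical async-friendly variant of a login decorator: if the wrapped
    # view returns a coroutine, await it instead of handing it to the
    # framework as-is.
    def decorator(view):
        @functools.wraps(view)
        async def wrapper(*args, **kwargs):
            if not is_authenticated():
                return redirect_to_login()
            result = view(*args, **kwargs)
            if asyncio.iscoroutine(result):
                result = await result
            return result
        return wrapper
    return decorator

@require_login(lambda: True, lambda: "302 -> /login")
async def home():
    return "home page"

print(asyncio.run(home()))  # -> home page
```

The stub callables stand in for the OIDC session check and redirect; the key point is the `iscoroutine`/`await` step that a sync-only decorator omits.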
When I start the server and navigate to http://localhost:8080, I am redirected to the login page provided by Keycloak. This works as expected. However, after I am authenticated, I see the following error:
```
[2021-05-04 16:03:25,067] Error in ASGI Framework
Traceback (most recent call last):
File "../venv/lib/python3.7/site-packages/hypercorn/asyncio/context.py", line 39, in _handle
await invoke_asgi(app, scope, receive, send)
File "../venv/lib/python3.7/site-packages/hypercorn/utils.py", line 239, in invoke_asgi
await app(scope, receive, send)
File "../venv/lib/python3.7/site-packages/quart/app.py", line 2069, in __call__
await self.asgi_app(scope, receive, send)
File "../venv/lib/python3.7/site-packages/quart/app.py", line 2092, in asgi_app
await asgi_handler(receive, send)
File "../venv/lib/python3.7/site-packages/quart/asgi.py", line 31, in __call__
_raise_exceptions(done)
File "../venv/lib/python3.7/site-packages/quart/asgi.py", line 234, in _raise_exceptions
raise task.exception()
File "../lib/python3.7/asyncio/tasks.py", line 223, in __step
result = coro.send(None)
File "../venv/lib/python3.7/site-packages/quart/asgi.py", line 79, in handle_request
await asyncio.wait_for(self._send_response(send, response), timeout=timeout)
File "../lib/python3.7/asyncio/tasks.py", line 416, in wait_for
return fut.result()
File "../lib/python3.7/asyncio/futures.py", line 178, in result
raise self._exception
File "../lib/python3.7/asyncio/tasks.py", line 223, in __step
result = coro.send(None)
File "../venv/lib/python3.7/site-packages/quart/asgi.py", line 93, in _send_response
async for data in body:
File "../venv/lib/python3.7/site-packages/quart/wrappers/response.py", line 124, in _aiter
for data in iterable: # type: ignore
TypeError: 'coroutine' object is not iterable
``` | closed | 2021-05-04T23:23:11Z | 2022-07-05T01:58:53Z | https://github.com/pallets/quart/issues/125 | [] | git999-cmd | 3 |
python-gitlab/python-gitlab | api | 3,131 | Typing overloads for http_list and its callers | To finish the `list` / `terator` typing situation if possible before the next release, we could also add overloading to `http_list` and potentially its callers in our code (or migrate those to a full manager with a listmixin).
We currently still have over 20 occurrences, though maybe some of them could be refactored into a normal list:
```
~/repos/python-gitlab$ grep '\.http_list(' -r gitlab/v4
gitlab/v4/objects/files.py: result = self.gitlab.http_list(path, query_data, **kwargs)
gitlab/v4/objects/runners.py: obj = self.gitlab.http_list(path, query_data, **kwargs)
gitlab/v4/objects/geo_nodes.py: result = self.gitlab.http_list("/geo_nodes/status", **kwargs)
gitlab/v4/objects/geo_nodes.py: result = self.gitlab.http_list("/geo_nodes/current/failures", **kwargs)
gitlab/v4/objects/commits.py: return self.manager.gitlab.http_list(path, **kwargs)
gitlab/v4/objects/commits.py: return self.manager.gitlab.http_list(path, query_data=query_data, **kwargs)
gitlab/v4/objects/commits.py: return self.manager.gitlab.http_list(path, **kwargs)
gitlab/v4/objects/merge_requests.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/merge_requests.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/merge_requests.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/milestones.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/milestones.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/milestones.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/milestones.py: data_list = self.manager.gitlab.http_list(path, iterator=True, **kwargs)
gitlab/v4/objects/repositories.py: return self.manager.gitlab.http_list(gl_path, query_data=query_data, **kwargs)
gitlab/v4/objects/repositories.py: return self.manager.gitlab.http_list(path, **kwargs)
gitlab/v4/objects/ldap.py: obj = self.gitlab.http_list(path, **data)
gitlab/v4/objects/groups.py: return self.manager.gitlab.http_list(path, query_data=data, **kwargs)
gitlab/v4/objects/issues.py: result = self.manager.gitlab.http_list(path, **kwargs)
gitlab/v4/objects/issues.py: result = self.manager.gitlab.http_list(path, **kwargs)
gitlab/v4/objects/projects.py: return self.manager.gitlab.http_list(path, query_data=data, **kwargs)
```
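For reference, the `Literal`-based overload shape this would take on `http_list` (with a stub body for illustration, not python-gitlab's implementation):

```python
from typing import Any, Dict, Iterator, List, Literal, Union, overload

@overload
def http_list(path: str, *, iterator: Literal[True]) -> Iterator[Dict[str, Any]]: ...
@overload
def http_list(path: str, *, iterator: Literal[False] = ...) -> List[Dict[str, Any]]: ...
def http_list(
    path: str, *, iterator: bool = False
) -> Union[Iterator[Dict[str, Any]], List[Dict[str, Any]]]:
    # Stub body: a real implementation would page through the API.
    items = [{"id": 1}, {"id": 2}]
    return iter(items) if iterator else items

print(http_list("/projects"))                       # eager list
print(next(http_list("/projects", iterator=True)))  # lazy iterator
```

With this shape, call sites passing `iterator=True` type-check as `Iterator`, while the default path narrows to a concrete list, which is what the callers listed above would need.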
/cc @JohnVillalovos @igorp-collabora just FYI, as I saw we still have https://github.com/python-gitlab/python-gitlab/issues/2338. | open | 2025-02-12T23:28:43Z | 2025-02-26T15:54:00Z | https://github.com/python-gitlab/python-gitlab/issues/3131 | [] | nejch | 3 |
mitmproxy/mitmproxy | python | 6,686 | Switch from light only to dark only mode | #### Problem Description
Cf. #3886
#### Proposal
Add a dark mode like #3886 requested, but as a replacement instead of a complement to light mode, in order to solve this :
> Dark mode has constant maintenance cost if you don't want to let it perish, and I don't want to maintain that.
If dark is the only mode then the maintenance cost remains unchanged compared to the current light mode's maintenance cost.
#### Alternatives
Refactoring the current light mode colors into variables, so that adding dark mode as a complement instead of a replacement would be as simple as providing alternative variables, resulting in a minimal additional maintenance cost compared to doubling selectors.
#### Additional context
Dark mode for the web UI is only natural considering that the terminal UI also has one. | closed | 2024-02-26T17:37:45Z | 2024-02-26T18:16:59Z | https://github.com/mitmproxy/mitmproxy/issues/6686 | [
"kind/feature"
] | KaKi87 | 1 |
liangliangyy/DjangoBlog | django | 52 | Separate articles from pages | closed | 2017-12-04T13:37:53Z | 2017-12-09T15:52:52Z | https://github.com/liangliangyy/DjangoBlog/issues/52 | [] | liangliangyy | 0 |
liangliangyy/DjangoBlog | django | 749 | Hy | closed | 2024-12-17T19:32:22Z | 2024-12-25T09:56:29Z | https://github.com/liangliangyy/DjangoBlog/issues/749 | [] | ghost | 0 | |
lepture/authlib | django | 190 | oauth2.auth.encode_none() changes the body size but leaves content-length header set to the old size | **Describe the bug**
When using the HTTPX AsyncOAuth2Client with password grant, the httpx_client OAuth2ClientAuth.auth_flow() method modifies the body by adding the client_id, but it does not update the Content-Length header which has already been calculated. This causes an exception in the httpx h11 processing when it deletes more characters from the stream buffer than the content-length has specified.
**Error Stacks**
```
.virtualenvs/traffica_stc/lib/python3.7/site-packages/authlib/integrations/httpx_client/oauth2_client.py:109: in _fetch_token
auth=auth, **kwargs)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/client.py:1316: in post
timeout=timeout,
.virtualenvs/traffica_stc/lib/python3.7/site-packages/authlib/integrations/httpx_client/oauth2_client.py:89: in request
method, url, auth=auth, **kwargs)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/client.py:1097: in request
request, auth=auth, allow_redirects=allow_redirects, timeout=timeout,
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/client.py:1118: in send
request, auth=auth, timeout=timeout, allow_redirects=allow_redirects,
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/client.py:1148: in send_handling_redirects
request, auth=auth, timeout=timeout, history=history
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/client.py:1184: in send_handling_auth
response = await self.send_single_request(request, timeout)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/client.py:1208: in send_single_request
response = await dispatcher.send(request, timeout=timeout)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/dispatch/connection_pool.py:157: in send
raise exc
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/dispatch/connection_pool.py:153: in send
response = await connection.send(request, timeout=timeout)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/dispatch/connection.py:44: in send
return await self.connection.send(request, timeout=timeout)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/dispatch/http11.py:51: in send
await self._send_request_body(request, timeout)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/dispatch/http11.py:101: in _send_request_body
await self._send_event(event, timeout)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/httpx/dispatch/http11.py:117: in _send_event
bytes_to_send = self.h11_state.send(event)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/h11/_connection.py:464: in send
data_list = self.send_with_data_passthrough(event)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/h11/_connection.py:498: in send_with_data_passthrough
writer(event, data_list.append)
.virtualenvs/traffica_stc/lib/python3.7/site-packages/h11/_writers.py:69: in __call__
self.send_data(event.data, write)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <h11._writers.ContentLengthWriter object at 0x7fdbaa265a58>
data = b'grant_type=password&username=foo&password=barbar%23123&client_id=foo-backend'
write = <built-in method append of list object at 0x7fdbaa25ba88>
def send_data(self, data, write):
self._length -= len(data)
if self._length < 0:
raise LocalProtocolError(
> "Too much data for declared Content-Length")
E h11._util.LocalProtocolError: Too much data for declared Content-Length
.virtualenvs/traffica_stc/lib/python3.7/site-packages/h11/_writers.py:89: LocalProtocolError
---------------------------- Captured log teardown -----------------------------
```
**To Reproduce**
```
import pytest
from authlib.integrations.httpx_client import AsyncOAuth2Client
@pytest.mark.asyncio
async def test_keycloak():
client = AsyncOAuth2Client(client_id="foo-backend",
client_secret=None,
username="foo", password="barbar#123",
token_endpoint="https://keycloak/auth/realms/myrealm/protocol/openid-connect/token",
verify=False, trust_env=False)
client.token = await client.fetch_token(url="https://keycloak/auth/realms/myrealm/protocol/openid-connect/token", username="foo",
password="barbar#123")
print(client.token)
```
**Expected behavior**
The token should be fetched from the server
**Environment:**
- OS: CentOS Linux
- Python Version: 3.7
- Authlib Version: 0.14
**Additional context**
This change in oauth2/auth.py fixes the problem:
```
def encode_none(client, method, uri, headers, body):
if method == 'GET':
uri = add_params_to_uri(uri, [('client_id', client.client_id)])
return uri, headers, body
body = add_params_to_qs(body, [('client_id', client.client_id)])
# Update Content-Length header
headers['Content-Length'] = str(len(body))
return uri, headers, body
```
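The mismatch is easy to demonstrate outside authlib with nothing but the standard library (`add_params_to_qs` is approximated here with `urllib.parse`; names are illustrative):

```python
from urllib.parse import parse_qsl, urlencode

def add_client_id(body, client_id):
    """Append client_id to a form-encoded body, like encode_none() does."""
    params = parse_qsl(body) + [("client_id", client_id)]
    return urlencode(params)

body = "grant_type=password&username=foo&password=bar"
headers = {"Content-Length": str(len(body))}  # computed before the auth hook runs

new_body = add_client_id(body, "foo-backend")

# The body grew, so the previously computed Content-Length is now stale:
assert len(new_body) > int(headers["Content-Length"])

# The fix: recompute the header whenever the body is rewritten.
headers["Content-Length"] = str(len(new_body))
assert int(headers["Content-Length"]) == len(new_body)
```

Any auth hook that rewrites the body after headers are finalized needs the same recomputation.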
| closed | 2020-02-12T12:56:36Z | 2020-02-16T04:39:47Z | https://github.com/lepture/authlib/issues/190 | [
"bug"
] | bobh66 | 0 |
ets-labs/python-dependency-injector | flask | 23 | Implement JointJS graph renderer | Example:
- http://www.jointjs.com/demos/umlcd
AutoLayout:
- http://www.daviddurman.com/automatic-graph-layout-with-jointjs-and-dagre.html
- http://jointjs.com/rappid/docs/layout/directedGraph
| closed | 2015-03-12T23:15:18Z | 2020-06-29T20:56:58Z | https://github.com/ets-labs/python-dependency-injector/issues/23 | [
"feature"
] | rmk135 | 1 |
ultralytics/yolov5 | machine-learning | 12,931 | polygon annotation to object detection | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I want to run object detection with segmentation-labeled data, but I got an error.
As far as I know, object detection should be possible with segmentation-labeled data - is this a labeling issue?
`python tools/train.py --batch 32 --conf configs/yolov6s_finetune.py --epoch 50 --data ./FST1/data.yaml --fuse_ab --device 0`
```
img record infomation path is:./FST1/train/.images_cache.json
Traceback (most recent call last):
  File "tools/train.py", line 143, in <module>
    main(args)
  File "tools/train.py", line 128, in main
    trainer = Trainer(args, cfg, device)
  File "/media/HDD/조홍석/YOLOv6/yolov6/core/engine.py", line 91, in __init__
    self.train_loader, self.val_loader = self.get_data_loader(self.args, self.cfg, self.data_dict)
  File "/media/HDD/조홍석/YOLOv6/yolov6/core/engine.py", line 387, in get_data_loader
    train_loader = create_dataloader(train_path, args.img_size, args.batch_size // args.world_size, grid_size,
  File "/media/HDD/조홍석/YOLOv6/yolov6/data/data_load.py", line 46, in create_dataloader
    dataset = TrainValDataset(
  File "/media/HDD/조홍석/YOLOv6/yolov6/data/datasets.py", line 82, in __init__
    self.img_paths, self.labels = self.get_imgs_labels(self.img_dir)
  File "/media/HDD/조홍석/YOLOv6/yolov6/data/datasets.py", line 435, in get_imgs_labels
    *[
  File "/media/HDD/조홍석/YOLOv6/yolov6/data/datasets.py", line 438, in <listcomp>
    np.array(info["labels"], dtype=np.float32)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
```
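For context, the detection loader expects every label row to have the fixed shape `class x_center y_center width height`, while segmentation rows have a variable number of polygon coordinates - which is exactly what produces an inhomogeneous-array error like the one above. A hedged sketch of collapsing a polygon row into a detection row (assuming normalized `class x1 y1 x2 y2 ...` labels):

```python
def polygon_to_bbox_row(row):
    """Convert 'cls x1 y1 x2 y2 ...' to 'cls x_center y_center w h' (normalized)."""
    cls, *coords = row
    xs, ys = coords[0::2], coords[1::2]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return [cls,
            (x_min + x_max) / 2, (y_min + y_max) / 2,
            x_max - x_min, y_max - y_min]

# a triangle polygon label -> class 0 with its axis-aligned bounding box
row = [0, 0.1, 0.1, 0.5, 0.1, 0.3, 0.4]
bbox = polygon_to_bbox_row(row)
```

Running every label file through a conversion like this before training should give the loader the fixed-width rows it expects.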
### Additional
_No response_ | closed | 2024-04-17T07:31:16Z | 2024-05-28T00:21:51Z | https://github.com/ultralytics/yolov5/issues/12931 | [
"question",
"Stale"
] | Cho-Hong-Seok | 2 |
jupyter-book/jupyter-book | jupyter | 1,824 | Faulty call to _message_box in pdf.py | ### Describe the bug
**context**
When modifying the `_config.yml` file to include a custom `sphinx:config:latex_documents` list, the jupyter-book build command fails.
**expectation**
I expected a document to be generated.
**bug**
But instead, the build command returns the error message:
```console
File "..\jupyter_book\sphinx.py", line 138, in build_sphinx
new_latex_documents = update_latex_documents(
File "..\jupyter_book\pdf.py", line 80, in update_latex_documents
_message_box(
File "..\jupyter_book\utils.py", line 30, in _message_box
border_colored = _color_message(border, color)
File "..\jupyter_book\utils.py", line 22, in _color_message
return bcolors[style] + msg + endc
KeyError: 'This suggests the user has made custom settings to their build'
```
**cause**
The bug is caused by a faulty call to `_message_box` inside `pdf.py`. The three lines of the message are passed as three separate arguments to the function in:
```
_message_box(
"Latex documents specified as a multi element list in the _config",
"This suggests the user has made custom settings to their build",
"[Skipping] processing of automatic latex overrides",
)
```
**fix**
The bug could be fixed by changing the call to `_message_box` to:
```
_message_box(
"Latex documents specified as a multi element list in the _config\n This suggests the user has made custom settings to their build \n [Skipping] processing of automatic latex overrides",
)
```
### Reproduce the bug
1. Add any custom `sphinx:config:latex_documents` list with more than one element
2. Run the `jb build` command
### List your environment
_No response_ | open | 2022-08-30T09:26:27Z | 2023-03-23T12:34:02Z | https://github.com/jupyter-book/jupyter-book/issues/1824 | [
"bug"
] | paulremo | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,947 | Weird behavior on AWS beanstalk deployment | I'm kinda running out of ideas and getting desperate at the same time ahah. I also don't know if this is the correct place but here it goes:
Trying to deploy a flask web-app using flask-socketio over a Docker AWS Beanstalk deployment. gevent is being used to serve the web app:
Note 1: The deployment uses an AWS Application Load Balancer as the entry point, with a certificate, and routes the traffic to an EC2 instance where an nginx server is running on port 80 to receive the requests and proxy them to the Docker container on port 5000 - this is the normal flow of the AWS Beanstalk stack
Note: no configuration at all was made to the nginx on the EC2 instance
```
CMD ["gunicorn", "--bind", "0.0.0.0:5000","--workers","1", "--threads", "1", "application:application", "-k", "geventwebsocket.gunicorn.workers.GeventWebSocketWorker", "--timeout", "180"]
```
First weird thing:
From the client, if I do not force the protocol to only use WebSocket, I can't make the server upgrade from polling to WebSocket; I get a 400 on the request. If I force the client to use WebSocket, I can make the ws protocol work.
Second weird thing:
If I actually remove the gevent-websocket requirement and start the server with a plain gevent worker, the server can handle the requests over polling with no problem at all.
Third weird thing:
If I run the server with gevent-websocket support and force the client to run with ws only (to make sure I can actually use websockets), I can run the web app normally, except for one tiny little thing that is driving me crazy: the first request to the server (after the login) gets a 400 regardless of what I do. And this request is a normal HTTP GET method. All the requests after that first one run normally.
I'm also getting this weird message in the server output:
```
<gevent._socket3.socket at 0x7fd3c5d2c1c0 object, fd=14, family=2, type=1, proto=6>: Invalid HTTP method: '3ff5ce1-583e6c977c0b74f445ff044d\r\n'
```
I have no idea what this is and where it comes from
Happy to present all info needed | closed | 2023-03-01T14:21:54Z | 2023-03-01T16:43:45Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1947 | [] | RodrigoNeves95 | 0 |
mitmproxy/pdoc | api | 226 | Data Descriptors aren't properly captured | #### Problem Description
When a class attribute is implemented as a data descriptor, pdoc doesn't seem to be able to capture its docstring correctly.
For instance, put the code below into a module called descriptor.py and run pdoc on it.
```python
class Descriptor():
def __init__(self, func):
self.__doc__ = func.__doc__
def __get__(self, instance, owner):
return self if instance is None else getattr(instance, "_x", 0)
def __set__(self, instance, value):
instance._x = value
class Dummy:
@Descriptor
def size(self):
"""This is the size"""
```
The result is like this:

i.e. the size attribute is correctly identified as a `Descriptor` instance, but its `__doc__` is not captured.
---
Now remove method `__set__` from the `Descriptor` class, so that it is a non-data descriptor. This changes the result to:

i.e. now the docstring is present.
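A quick sanity check (classes copied from above) confirms the docstring is reachable in the data-descriptor case too - the information exists, it just isn't picked up:

```python
class Descriptor:
    def __init__(self, func):
        self.__doc__ = func.__doc__

    def __get__(self, instance, owner):
        return self if instance is None else getattr(instance, "_x", 0)

    def __set__(self, instance, value):
        instance._x = value

class Dummy:
    @Descriptor
    def size(self):
        """This is the size"""

# The descriptor instance carries the docstring either way:
assert Dummy.__dict__["size"].__doc__ == "This is the size"
# Class-level access returns the descriptor itself (instance is None), so this works too:
assert Dummy.size.__doc__ == "This is the size"
```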
#### System Information
```
pdoc: 6.3.1
Python: 3.7.9
Platform: Windows-10-10.0.19041-SP0
``` | closed | 2021-02-24T22:23:51Z | 2021-02-25T12:08:02Z | https://github.com/mitmproxy/pdoc/issues/226 | [
"bug"
] | barbester | 3 |
cvat-ai/cvat | pytorch | 9,101 | [Question] Zooming/panning/annotating contextual images in sync with main image | Hello,
I'm working with tracking fairly small (close to point-source) objects across 2D frames.
- I would like to be able to have contextual images zoom and pan in sync with the zooming/panning I do in the main image, so that the same zoomed-in/panned view is maintained across the main image and the contextual images.
- I would also like to have annotations sync to the contextual image, so that an annotation I make on the main image is annotated simultaneously on the corresponding object on all the contextual images. This would be helpful where our context images are a couple of temporally previous and next frames, to speed up the annotation process for many objects. I don't think this exists given that contextual images are entirely separate files, instead of being able to use the previous/next frames in the main image sequence, but figured I would ask.
Is there any functionality for this in CVAT? If not are there any plans for it? I couldn't find anything relating to this in the docs or the source code.
Thank you! | open | 2025-02-13T02:52:44Z | 2025-02-26T03:23:23Z | https://github.com/cvat-ai/cvat/issues/9101 | [
"enhancement"
] | jonvanveen | 3 |
babysor/MockingBird | deep-learning | 49 | Could you make a video tutorial? | I'm a complete beginner and I really did try. Fortunately other people have posted tutorials for some of the installation, download and configuration steps, but different people's tutorials don't connect with each other, which left me thoroughly confused - a lot of this comes down to details. Maybe the method someone teaches works for that particular problem but not for the whole project. Please! | closed | 2021-08-26T05:44:37Z | 2021-12-26T03:37:25Z | https://github.com/babysor/MockingBird/issues/49 | [
"good first issue"
] | ffspig | 7 |
flasgger/flasgger | flask | 511 | More maintainers wanted | Responsibilities:
- ensure production-level code quality
- coordinate releases over pypi
- debug project for compatibility issues
Please reply to this issue to request to be added as a maintainer (prior pull requests will be considered) | open | 2021-12-16T02:08:55Z | 2023-04-24T20:31:34Z | https://github.com/flasgger/flasgger/issues/511 | [] | billyrrr | 14 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 208 | data loader is so slow! Anyone can fix it? | open | 2020-07-31T05:56:53Z | 2020-07-31T05:56:53Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/208 | [] | shawnthu | 0 | |
ultralytics/yolov5 | machine-learning | 12,668 | Roc graph and Auc graph using yolo5 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I need a ROC graph and AUC, but I can't figure out how to draw them. I have already run validation and I don't know what to do next. I read about the metrics but I still could not do it.
Thank you.
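YOLOv5's `val.py` writes P/R/PR/F1 curves but not ROC/AUC. Assuming you can export each prediction's confidence score together with a binary correct/incorrect flag (e.g. from the TP-matching step during validation), AUC is just a ranking statistic - a dependency-free sketch:

```python
def roc_auc(scores, labels):
    """AUC = P(score of a random positive > score of a random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical confidences and ground-truth match flags
scores = [0.95, 0.80, 0.70, 0.40, 0.30]
labels = [1,    1,    0,    1,    0]
auc = roc_auc(scores, labels)  # 5 of 6 positive/negative pairs ranked correctly
```

With the same `(scores, labels)` pairs, `sklearn.metrics.roc_curve` plus matplotlib will give you the full ROC plot.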
### Additional
_No response_ | closed | 2024-01-24T17:46:04Z | 2024-10-20T19:38:09Z | https://github.com/ultralytics/yolov5/issues/12668 | [
"question",
"Stale"
] | elavlo126 | 3 |
mirumee/ariadne | graphql | 1,208 | python_multipart deprecation use `import python_multipart` | Python_multipart package is throwing the following deprecation warning
```
/venv/lib/python3.12/site-packages/ariadne/__init__.py:15: in <module>
from .graphql import graphql, graphql_sync, subscribe
/venv/lib/python3.12/site-packages/ariadne/graphql.py:50: in <module>
from .validation.introspection_disabled import IntrospectionDisabledRule
/venv/lib/python3.12/site-packages/ariadne/validation/introspection_disabled.py:5: in <module>
from ..contrib.tracing.utils import is_introspection_key
/venv/lib/python3.12/site-packages/ariadne/contrib/tracing/utils.py:10: in <module>
from multipart.multipart import File
/venv/lib/python3.12/site-packages/multipart/__init__.py:22: in <module>
warnings.warn("Please use `import python_multipart` instead.", PendingDeprecationWarning, stacklevel=2)
E PendingDeprecationWarning: Please use `import python_multipart` instead.
``` | closed | 2024-12-02T10:52:39Z | 2025-02-19T14:37:17Z | https://github.com/mirumee/ariadne/issues/1208 | [] | kevinvalk | 0 |
daleroberts/itermplot | matplotlib | 4 | Bpython support | Bpython seems to do something funny to the `stdout` that corrupts the terminal escape characters. Is there a workaround? | open | 2017-01-13T20:37:37Z | 2017-08-10T12:31:47Z | https://github.com/daleroberts/itermplot/issues/4 | [
"enhancement",
"help wanted"
] | daleroberts | 1 |
laughingman7743/PyAthena | sqlalchemy | 559 | How to connect using VPC endpoint? | Athena allows for the use of VPC endpoints to improve security: https://docs.aws.amazon.com/athena/latest/ug/interface-vpc-endpoint.html. How can I use this library when connecting to such endpoints (i.e. VPC_Endpoint_ID.athena.Region.vpce.amazonaws.com)? | closed | 2024-11-18T16:15:18Z | 2024-11-19T14:34:02Z | https://github.com/laughingman7743/PyAthena/issues/559 | [] | mrcolumbia | 6 |
huggingface/pytorch-image-models | pytorch | 1,167 | [BUG] train error | hi,
I am trying to use timm to train on my own dataset, arranged in the ImageNet directory structure, but the training loss doesn't decrease and I don't know if I am doing something wrong. My train script is:
sh ./distributed_train.sh 4 ./data --model swin_small_patch4_window7_224 -b 54 --class-map ./data/class_map.txt --sched cosine_lr --epochs 50 --opt adamw -j 16 --weight-decay 0.05 --lr .001 --reprob 0.25 --remode pixel --aa rand-m9-mstd0.5-inc1 --drop-path 0.1 --pretrained --num-class ${class_num}
It runs normally, but the training loss does not decrease, and at the start it warned:
Reducer buckets have been rebuilt in this iteration.
I used timm for training last year and it worked perfectly, but after pulling the newest version and running with the same parameters as before, it no longer trains well. | closed | 2022-03-10T02:26:48Z | 2022-03-11T02:47:30Z | https://github.com/huggingface/pytorch-image-models/issues/1167 | [
"bug"
] | 523997931 | 1 |
vllm-project/vllm | pytorch | 14,599 | [Usage]: python -m vllm.entrypoints.openai.api_server --model models/QwQ-32B --served-model-name QwQ-32B --max-model-len=2048 --dtype=bfloat16 --quantization=bitsandbytes --load_format=bitsandbytes | ### Your current environment
I want to quantize the model at load time for deployment. I started the service with this command: python -m vllm.entrypoints.openai.api_server --model models/QwQ-32B --served-model-name QwQ-32B --max-model-len=2048 --dtype=bfloat16 --quantization=bitsandbytes --load_format=bitsandbytes. It started successfully, but I don't know whether this is INT4 quantization.
### How would you like to use vllm
I want to quantize the model at load time for deployment. I started the service with this command: python -m vllm.entrypoints.openai.api_server --model models/QwQ-32B --served-model-name QwQ-32B --max-model-len=2048 --dtype=bfloat16 --quantization=bitsandbytes --load_format=bitsandbytes. It started successfully, but I don't know whether this is INT4 quantization.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-11T05:45:36Z | 2025-03-24T09:33:49Z | https://github.com/vllm-project/vllm/issues/14599 | [
"usage"
] | longglecc | 3 |
vitalik/django-ninja | pydantic | 347 | Inheritance with ModelSchema? | I have two similar `ModelSchema` and to avoid useless repetition, I'd like to use an abstract ModelSchema to group common fields.
I didn't find a mention of this in [this page](https://django-ninja.rest-framework.com/tutorial/django-pydantic/) so I don't know if:
1) it's possible and documented somewhere else
2) it's possible but undocumented
3) it's impossible
For example I have this:
```python
class ItemInBasesSchema(ModelSchema):
is_favorite: bool = None
class Config:
model = Item
model_fields = (
"id",
"slug",
"name",
"image_path",
"length_in_mn",
"special_field_for_base" # specific to this schema
)
class ItemInMealsSchema(ModelSchema):
is_favorite: bool = None
class Config:
model = Item
model_fields = (
"id",
"slug",
"name",
"image_path",
"length_in_mn",
"special_field_for_meal" # specific to this schema
)
```
And I would like to do something like:
```python
class ItemBase(ModelSchema):
is_favorite: bool = None
class Config:
model = Item
model_fields = (
"id",
"slug",
"name",
"image_path",
"length_in_mn",
)
class ItemInBasesSchema(ItemBase):
pass # + get the ability to add specific fields here
class ItemInMealsSchema(ItemBase):
pass # + get the ability to add specific fields here
```
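Until/unless schema inheritance is supported, the duplication can at least be factored out at the tuple level, since `model_fields` is an ordinary class attribute (sketch; field names taken from the example above):

```python
COMMON_ITEM_FIELDS = ("id", "slug", "name", "image_path", "length_in_mn")

# each Config would then set: model_fields = COMMON_ITEM_FIELDS + (extra_field,)
bases_fields = COMMON_ITEM_FIELDS + ("special_field_for_base",)
meals_fields = COMMON_ITEM_FIELDS + ("special_field_for_meal",)

assert bases_fields[:5] == meals_fields[:5] == COMMON_ITEM_FIELDS
assert bases_fields[-1] == "special_field_for_base"
```

This only deduplicates the field list, not the shared `is_favorite` annotation, so real schema inheritance would still be nicer.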
As inheritance is a very common need, I think we should mention it in the documentation (even if it's currently impossible). What do you think? | open | 2022-02-04T16:02:39Z | 2024-07-18T13:51:15Z | https://github.com/vitalik/django-ninja/issues/347 | [] | ddahan | 9 |
ymcui/Chinese-BERT-wwm | tensorflow | 177 | Wasn't rbt4 further pretrained/fine-tuned with MLM? | On my own corpus:
rbt4 MLM accuracy: 0.06
roberta MLM accuracy: 0.54+
"stale"
] | SysuCharon | 2 |
onnx/onnx | tensorflow | 6,476 | [Feature request] please add onnx model opset version downgrade (v19 -> v18) support | ### System information
1.12.0 ~ 1.16.1
### What is the problem that this feature solves?
Some 3rd-party inference frameworks (like RKNN / Ascend) don't support high opset versions when converting an ONNX model to their local format, so I need to downgrade it.

### Alternatives considered
Find a suitable model with the required opset version, but if the author did not provide one, I have no idea what else to do.
### Describe the feature
_No response_
### Will this influence the current api (Y/N)?
N
### Feature Area
converters, version_converter
### Are you willing to contribute it (Y/N)
None
### Notes
_No response_ | open | 2024-10-21T02:01:06Z | 2024-10-21T09:01:07Z | https://github.com/onnx/onnx/issues/6476 | [
"topic: enhancement"
] | Liuhehe2019 | 1 |
RobertCraigie/prisma-client-py | asyncio | 74 | Singleton Class | Hey, in the original Prisma library, I used a singleton class to instantiate the client only once and then use it in my code. I did not need to connect and disconnect it every time either.
I tried doing something similar here but I am having a rough time importing the singleton class from another module.
Do you think it can be done?
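The usual Python pattern is a module-level instance plus a lazy accessor, which any other module can import - sketched here with a stand-in class, since the exact Prisma connect API may differ by version:

```python
import asyncio

# db.py -- shared module; other modules would do `from db import get_client`

class FakePrismaClient:
    """Stand-in for the real Prisma client, just to show the wiring."""
    def __init__(self):
        self.connected = False

    async def connect(self):
        self.connected = True

_client = None

def get_client():
    """Create the client once; every later call returns the same instance."""
    global _client
    if _client is None:
        _client = FakePrismaClient()
    return _client

client = get_client()
asyncio.run(client.connect())
assert get_client() is client   # every importer sees the same instance
assert get_client().connected   # ...and it stays connected
```

Because Python caches modules, every `from db import get_client` shares the same `_client`, so you connect once at startup and disconnect once at shutdown.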
Thanks! | closed | 2021-09-28T16:30:15Z | 2021-10-02T10:25:55Z | https://github.com/RobertCraigie/prisma-client-py/issues/74 | [
"kind/question"
] | danielweil | 10 |
noirbizarre/flask-restplus | api | 212 | Using abort with custom messages prints a massive traceback in console | I'm using the `abort()` functionality in RESTPlus to return an error and it returns everything properly. However, I get this mess in my console:
``` python
--------------------------------------------------------------------------------
ERROR in app [/home/user/API/venv/lib/python3.5/site-packages/flask/app.py:1587]:
Exception on /command/1 [GET]
--------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/API/venv/lib/python3.5/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/home/user/API/venv/lib/python3.5/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/api.py", line 309, in wrapper
resp = resource(*args, **kwargs)
File "/home/user/API/venv/lib/python3.5/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/resource.py", line 44, in dispatch_request
resp = meth(*args, **kwargs)
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/marshalling.py", line 101, in wrapper
resp = f(*args, **kwargs)
File "/home/user/API/app/resources/command.py", line 58, in get
abort(500, "foo", custom="bar", spam="eggs")
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/errors.py", line 29, in abort
flask.abort(code)
File "/home/user/API/venv/lib/python3.5/site-packages/werkzeug/exceptions.py", line 646, in __call__
raise self.mapping[code](*args, **kwargs)
werkzeug.exceptions.InternalServerError: 500: Internal Server Error
2016-10-25 13:12:17 user-Thinkpad-W520 app[14146] ERROR Exception on /command/1 [GET]
Traceback (most recent call last):
File "/home/user/API/venv/lib/python3.5/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/home/user/API/venv/lib/python3.5/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/api.py", line 309, in wrapper
resp = resource(*args, **kwargs)
File "/home/user/API/venv/lib/python3.5/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/resource.py", line 44, in dispatch_request
resp = meth(*args, **kwargs)
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/marshalling.py", line 101, in wrapper
resp = f(*args, **kwargs)
File "/home/user/API/app/resources/command.py", line 58, in get
abort(500, "foo", custom="bar", spam="eggs")
File "/home/user/API/venv/lib/python3.5/site-packages/flask_restplus/errors.py", line 29, in abort
flask.abort(code)
File "/home/user/API/venv/lib/python3.5/site-packages/werkzeug/exceptions.py", line 646, in __call__
raise self.mapping[code](*args, **kwargs)
werkzeug.exceptions.InternalServerError: 500: Internal Server Error
```
I understand that this may be caused by `werkzeug`, but I was wondering if there is any way I can return an error without having to have that massive traceback spamming my server log/console?
Preferably it would be along the lines of Flask's `make_response(<content>, <HTTP code>)`.
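One traceback-free workaround (hedged - this sidesteps `abort()` entirely): Flask-RESTPlus resources can return a `(payload, status_code)` tuple, so a small helper keeps it close to one line without raising anything:

```python
def error_response(message, code, **extra):
    """Build a (body, status) tuple that a flask-restplus Resource can return."""
    body = {"message": message}
    body.update(extra)
    return body, code

# inside a Resource method you would write:
#     return error_response("foo", 500, custom="bar", spam="eggs")
body, status = error_response("foo", 500, custom="bar", spam="eggs")
assert status == 500
assert body == {"message": "foo", "custom": "bar", "spam": "eggs"}
```

Since nothing is raised, Flask never logs an exception, so the console stays clean.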
| open | 2016-10-25T17:15:41Z | 2018-09-27T11:40:34Z | https://github.com/noirbizarre/flask-restplus/issues/212 | [
"enhancement"
] | RPiAwesomeness | 3 |
OWASP/Nettacker | automation | 239 | [Medium] Multiple logical bugs within header_xss module | The `header_xss` module, contains the following on line 34:
```
r = requests.head(host)
for header in r.headers:
headers_xss[header] = payloads_xss
req = requests.post(host, headers=headers_xss)
if payloads_xss.lower() in req.text.lower():
return True
else:
return False
```
So what the engine does is, it makes a HEAD request to the site, and then it iterates over the received headers from the HEAD request and adds them to the already existing `headers_xss` dictionary. By doing this we are actually violating the HTTP standards defined in RFC 7230-7235.
A quick glance at the [wikipedia page](https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Request_fields) says a lot about the specific set of header fields which can be used during a client request. But, when we are appending the set of response header fields, to the request header set, we are defying the standard.
Elaborately explaining, taking the following example:
- Site: http://www.webscantest.com
- Making the HEAD request yields the following headers:
```
{
'Date': 'Wed, 25 Mar 2020 06:05:32 GMT',
'Server': 'Apache/2.4.7 (Ubuntu)',
'X-Powered-By': 'PHP/5.5.9-1ubuntu4.29',
'Set-Cookie': 'TEST_SESSIONID=ak6b8793njuaai9lns68ka9q54; path=/, NB_SRVID=srv140717; path=/',
'Expires': 'Thu, 19 Nov 1981 08:52:00 GMT',
'Cache-Control': 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0',
'Pragma': 'no-cache',
'Connection': 'close',
'Content-Type': 'text/html'
}
```
- Performing the dict transformation: `for header in r.headers: headers_xss[header] = payloads_xss`, the header set we get is:
```
{
'User-Agent': '<script>alert()</script>',
'Except': '<script>alert()</script>',
'Accept-Encoding': '<script>alert()</script>',
'Referer': '<script>alert()</script>',
'Accept-Language': '<script>alert()</script>',
'Date': '<script>alert()</script>',
'Server': '<script>alert()</script>',
'X-Powered-By': '<script>alert()</script>',
'Set-Cookie': '<script>alert()</script>',
'Expires': '<script>alert()</script>',
'Cache-Control': '<script>alert()</script>',
'Pragma': '<script>alert()</script>',
'Connection': '<script>alert()</script>',
'Content-Type': '<script>alert()</script>'
}
```
Now if we are performing the POST request using this set of headers, we are violating the client-server model, since we are both using the header set of a server as well as the browser/client lol.
Also, I see we are making a POST request without any data?
Also, the `payloads_xss` value is set to `<script>alert(/1/)</script>` which is a HTML context payload. I think we should support javascript context based payloads as well as reflected input based contexts too, since we are never sure how and where the input is getting reflected/stored!
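To make the standards point concrete, the injection set should be built from request-side fields only - e.g. (sketch; the header list is illustrative, not exhaustive):

```python
REQUEST_HEADERS = (  # legitimate *request* fields that are commonly reflected
    "User-Agent", "Referer", "X-Forwarded-For",
    "Accept-Language", "Cookie", "Origin",
)

def build_xss_headers(payload):
    """Inject the payload only into valid request header fields."""
    return {name: payload for name in REQUEST_HEADERS}

headers = build_xss_headers("<script>alert(1)</script>")
assert "Set-Cookie" not in headers        # response-only fields stay out
assert headers["Referer"] == "<script>alert(1)</script>"
```

The resulting dict can then be passed to the POST request (ideally together with some body data) without mixing response headers into the client request.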
I may be wrong in my understanding, hence cc'ing @pradeepjairamani, the author of the code. He might have had other intentions while writing the code, so I'm open to a quality discussion. :) | closed | 2020-03-25T06:29:55Z | 2021-02-02T21:35:16Z | https://github.com/OWASP/Nettacker/issues/239 | [
"bug"
] | 0xInfection | 0 |
man-group/arctic | pandas | 61 | a bug in _ndarray_store.py in windows? | Hi all, there may be a bug in `_ndarray_store`.
Using a pandas DataFrame:
```
index = pd.date_range('1/1/2010', periods=8, tz=mktz())
df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=list('abc'))
arctic = Arctic('localhost')
arctic.initialize_library('nasdaq')
store_db = arctic.get_library('nasdaq')
store_db.append('sym001', df, metadata={'source': 'test'})
```
It's OK... then read it back:
```
print store_db.read('sym001', date_range=DateRange(start=20100101)).data
```
Here I got an exception:
```
  File "D:\Python27\lib\site-packages\arctic-1.17.0-py2.7-win-amd64.egg\arctic\store\version_store.py", line 321, in read
    date_range=date_range, read_preference=read_preference, **kwargs)
  File "D:\Python27\lib\site-packages\arctic-1.17.0-py2.7-win-amd64.egg\arctic\store\version_store.py", line 366, in _do_read
    data = handler.read(self._arctic_lib, version, symbol, from_version=from_version, **kwargs)
  File "D:\Python27\lib\site-packages\arctic-1.17.0-py2.7-win-amd64.egg\arctic\store\_pandas_ndarray_store.py", line 301, in read
    item = super(PandasDataFrameStore, self).read(arctic_lib, version, symbol, **kwargs)
  File "D:\Python27\lib\site-packages\arctic-1.17.0-py2.7-win-amd64.egg\arctic\store\_pandas_ndarray_store.py", line 197, in read
    date_range=date_range, **kwargs)
  File "D:\Python27\lib\site-packages\arctic-1.17.0-py2.7-win-amd64.egg\arctic\store\_ndarray_store.py", line 170, in read
    return self._do_read(collection, version, symbol, index_range=index_range)
  File "D:\Python27\lib\site-packages\arctic-1.17.0-py2.7-win-amd64.egg\arctic\store\_ndarray_store.py", line 194, in _do_read
    for i, x in enumerate(collection.find(spec, sort=[('segment', pymongo.ASCENDING)],)):
  File "D:\Python27\lib\site-packages\pymongo\cursor.py", line 1097, in next
    if len(self.__data) or self._refresh():
  File "D:\Python27\lib\site-packages\pymongo\cursor.py", line 1019, in _refresh
    self.__read_concern))
  File "D:\Python27\lib\site-packages\pymongo\cursor.py", line 850, in __send_message
    **kwargs)
  File "D:\Python27\lib\site-packages\pymongo\mongo_client.py", line 794, in _send_message_with_response
    exhaust)
  File "D:\Python27\lib\site-packages\pymongo\mongo_client.py", line 805, in _reset_on_error
    return func(*args, **kwargs)
  File "D:\Python27\lib\site-packages\pymongo\server.py", line 108, in send_message_with_response
    set_slave_okay, sock_info.is_mongos, use_find_cmd)
  File "D:\Python27\lib\site-packages\pymongo\message.py", line 275, in get_message
    spec, self.fields, self.codec_options)
bson.errors.InvalidDocument: Cannot encode object: 7
```
It looked very strange...
I spent a whole day on this question, and I found the problem was here:
```python
spec = {'symbol': symbol,
        'parent': version.get('base_version_id', version['_id']),
        'segment': {'$lt': to_index}
        }
if from_index:
    # ----->
    spec['segment']['$gte'] = from_index
```
I changed this line:
```python
spec['segment']['$gte'] = from_index
```
to
```python
spec['segment']['$gte'] = long(from_index)
```
because `type(from_index)` was numpy.int64, which is not compatible with pymongo on 64-bit Windows.
The solution is to use Python types instead of numpy types; here are two ways to do this:
1. `to_index.item()` and `from_index.item()`
2. `long(to_index)` and `long(from_index)`
Is there a better solution?
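One generic way to avoid this class of problem — beyond casting the two indices by hand — is to normalize every value to a built-in Python type before it goes into the Mongo query spec. The sketch below relies only on the fact that numpy scalars expose `.item()`; the helper name `to_native` is my own, not arctic's:

```python
def to_native(value):
    # numpy scalar types (numpy.int64, numpy.float64, ...) expose .item(),
    # which returns the equivalent built-in Python type that BSON can encode.
    # Plain Python values pass through unchanged.
    return value.item() if hasattr(value, "item") else value


segment_filter = {'$lt': to_native(20100101)}      # plain int: unchanged
# segment_filter['$gte'] = to_native(from_index)   # numpy.int64 -> int
```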
| closed | 2015-12-18T14:14:27Z | 2015-12-30T10:27:01Z | https://github.com/man-group/arctic/issues/61 | [] | testmana2 | 2 |
koxudaxi/datamodel-code-generator | pydantic | 1,955 | Casts `default` values of type `number` (scientific notation) to `str` | **Describe the bug**
Bug in parsing `number` in scientific notation
**To Reproduce**
Example schema:
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"test": {
"type": "number",
"title": "Test",
"description": "Testcase",
"default": 1e-5
}
}
}
```
Used commandline:
```
$ datamodel-codegen --input test_codegen.json --output model_test_codegen.py --output-model-type pydantic_v2.BaseModel --input-file-type jsonschema
```
**Observed behavior**
```python
class Model(BaseModel):
test: Optional[float] = Field('1e-5', description='Testcase', title='Test')
```
**Expected behavior**
```python
class Model(BaseModel):
test: Optional[float] = Field(1e-5, description='Testcase', title='Test')
```
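For reference, Python's own `json` parser already yields a `float` for `1e-5`, so the default only needs to be emitted with `repr()` rather than quoted — a minimal check of the parsing side (not the generator's code):

```python
import json

schema = json.loads('{"type": "number", "default": 1e-5}')
default = schema["default"]
assert isinstance(default, float)   # parsed as a number, not a string
assert repr(default) == '1e-05'     # repr keeps it a valid Python float literal
```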
**Version:**
- OS: Linux
- Python version: 3.10.12
- datamodel-code-generator version: 0.25.6
Maybe related to #1952 . | open | 2024-05-11T16:03:39Z | 2024-05-11T16:03:39Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1955 | [] | maximilian-tech | 0 |
davidsandberg/facenet | tensorflow | 753 | problem with Pre-trained model VGG2. | Hey, I am facing this issue when using the pre-trained model of VGG2.
/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Loading model...
2018-05-19 18:45:58.151132: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-05-19 18:45:58.151162: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-05-19 18:45:58.151166: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-05-19 18:45:58.151185: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-05-19 18:45:58.151189: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Traceback (most recent call last):
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1039, in _do_call
return fn(*args)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1021, in _run_fn
status, run_metadata)
File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [128] rhs shape= [512]
[[Node: save/Assign_18 = Assign[T=DT_FLOAT, _class=["loc:@InceptionResnetV1/Bottleneck/BatchNorm/beta"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](InceptionResnetV1/Bottleneck/BatchNorm/beta, save/RestoreV2_18)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "temp_img1.py", line 163, in <module>
extract_feature = FaceFeature(FRGraph)
File "/home/anju/rashmi_folder/FaceRec_old_before_24Apr_2018/face_feature.py", line 25, in __init__
saver.restore(self.sess, model_path)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1457, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [128] rhs shape= [512]
[[Node: save/Assign_18 = Assign[T=DT_FLOAT, _class=["loc:@InceptionResnetV1/Bottleneck/BatchNorm/beta"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](InceptionResnetV1/Bottleneck/BatchNorm/beta, save/RestoreV2_18)]]
Caused by op 'save/Assign_18', defined at:
File "temp_img1.py", line 163, in <module>
extract_feature = FaceFeature(FRGraph)
File "/home/anju/rashmi_folder/FaceRec_old_before_24Apr_2018/face_feature.py", line 24, in __init__
saver = tf.train.Saver() #saver load pretrain model
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1056, in __init__
self.build()
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1086, in build
restore_sequentially=self._restore_sequentially)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 691, in build
restore_sequentially, reshape)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 419, in _AddRestoreOps
assign_ops.append(saveable.restore(tensors, shapes))
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 155, in restore
self.op.get_shape().is_fully_defined())
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/ops/state_ops.py", line 270, in assign
validate_shape=validate_shape)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign
use_locking=use_locking, name=name)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/anju/.virtualenvs/dl4cv2/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [128] rhs shape= [512]
[[Node: save/Assign_18 = Assign[T=DT_FLOAT, _class=["loc:@InceptionResnetV1/Bottleneck/BatchNorm/beta"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](InceptionResnetV1/Bottleneck/BatchNorm/beta, save/RestoreV2_18)]]
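The shape pair in the message (`lhs shape= [128] rhs shape= [512]`) suggests the graph was built with a 128-d embedding while the checkpoint stores 512-d bottleneck weights; the VGGFace2 pretrained models use a 512-dimensional embedding. `tf.train.Saver` only restores a variable when the shapes match exactly — a minimal illustration of that precondition in plain Python (not TensorFlow; the 512 value is an inference from the `rhs shape` in the log, not something the log states directly):

```python
def restorable(graph_shape, checkpoint_shape):
    # Saver.restore assigns checkpoint tensors into graph variables;
    # any shape mismatch raises the InvalidArgumentError seen above.
    return tuple(graph_shape) == tuple(checkpoint_shape)

assert not restorable([128], [512])   # the failing pair from this log
assert restorable([512], [512])       # rebuild the graph with embedding size 512
```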
Please help me with this. | closed | 2018-05-19T13:20:42Z | 2018-09-25T10:04:09Z | https://github.com/davidsandberg/facenet/issues/753 | [] | rashmisgh | 10 |
the0demiurge/ShadowSocksShare | flask | 15 | Found a free SS website | http://shadowsocksph.space/
It is blocked by the GFW; it can be accessed through a socks proxy. | closed | 2017-11-04T05:53:04Z | 2018-01-18T06:14:52Z | https://github.com/the0demiurge/ShadowSocksShare/issues/15 | [
"资源分享"
] | zebradonna | 8 |
xlwings/xlwings | automation | 1,922 | When I use xlwings to write to Excel, yyyy-mm-dd becomes yyyy/mm/dd. How do I set it up so this doesn't happen? | #### OS (e.g. Windows 10 or macOS Sierra)
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
#### Describe your issue (incl. Traceback!)
```python
# Your traceback here
```
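Two workarounds for the question in the title: write the date as a pre-formatted string, or keep it a real date and set the cell's number format. The string half is runnable below; the xlwings calls are commented out since they need a live Excel instance (`Range.number_format` is xlwings' property for the cell format):

```python
from datetime import date

d = date(2022, 5, 24)
iso = d.strftime("%Y-%m-%d")   # '2022-05-24' -- hyphens survive as text
# With a live workbook (not runnable here):
#   sheet.range("A1").value = iso                    # write as text
#   sheet.range("A1").number_format = "yyyy-mm-dd"   # or force the display format
```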
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
# Your code here
``` | closed | 2022-05-24T12:34:24Z | 2022-05-29T08:54:35Z | https://github.com/xlwings/xlwings/issues/1922 | [] | niuhongbin1 | 4 |
huggingface/transformers | python | 36,334 | Some of test/utils tests fail being invalidated by tests/utils/test_import_utils.py::test_clear_import_cache | With:
* https://github.com/huggingface/transformers/commit/547911e727fffa06052fcd35776c1df115ec32ed
On:
* Nvidia A10
* Intel Data Center GPU Max (PVC)
Test defined in [tests/utils/test_import_utils.py](https://github.com/huggingface/transformers/blob/main/tests/utils/test_import_utils.py) seems to invalidate the state of the test engine (unloads loaded modules) which causes failures for some of the next tests, but not all. Overall running `pytest tests/utils` will result in 19 failing tests which otherwise would pass with `--ignore=tests/utils/test_import_utils.py`. Below is the minimal reproducer:
```
# python3 -m pytest tests/utils/test_import_utils.py tests/utils/test_logging.py
...
tests/utils/test_import_utils.py::test_clear_import_cache PASSED [ 14%]
tests/utils/test_logging.py::HfArgumentParserTest::test_advisory_warnings PASSED [ 28%]
tests/utils/test_logging.py::HfArgumentParserTest::test_env_invalid_override FAILED [ 42%]
tests/utils/test_logging.py::HfArgumentParserTest::test_env_override FAILED [ 57%]
tests/utils/test_logging.py::HfArgumentParserTest::test_integration PASSED [ 71%]
tests/utils/test_logging.py::HfArgumentParserTest::test_set_level PASSED [ 85%]
tests/utils/test_logging.py::test_set_progress_bar_enabled PASSED [100%]
...
E AssertionError: 'Unknown option TRANSFORMERS_VERBOSITY=super-error' not found in ''
...
E AssertionError: 40 != 30 : TRANSFORMERS_VERBOSITY=error/40, but internal verbosity is 30
```
Compare with running without `tests/utils/test_import_utils.py`:
```
# python3 -m pytest tests/utils/test_logging.py
...
tests/utils/test_logging.py::HfArgumentParserTest::test_advisory_warnings PASSED
tests/utils/test_logging.py::HfArgumentParserTest::test_env_invalid_override
--------------------------------------------------- live log call ----------------------------------
WARNING root:logging.py:66 Unknown option TRANSFORMERS_VERBOSITY=super-error, has to be one of: detail, debug, info, warning, error, critical
PASSED
tests/utils/test_logging.py::HfArgumentParserTest::test_env_override PASSED
tests/utils/test_logging.py::HfArgumentParserTest::test_integration PASSED
tests/utils/test_logging.py::HfArgumentParserTest::test_set_level PASSED
tests/utils/test_logging.py::test_set_progress_bar_enabled PASSED
```
The `tests/utils/test_import_utils.py` was introduced by:
* https://github.com/huggingface/transformers/pull/35858
Can the test in `tests/utils/test_import_utils.py` be modified to reset the state of the test session so the following tests run without being impacted? Or can the test be isolated (executed in a separate process) to avoid invalidating the test session?
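One way to keep a cache-clearing test from leaking into the rest of the session is to snapshot `sys.modules` around it and restore the snapshot afterwards. The helper below is a sketch of that idea, not the fix actually applied in the repo; in pytest the same pair would naturally live in a `yield` fixture:

```python
import sys

def run_isolated(test_fn):
    """Run test_fn, then restore the interpreter's module cache so
    later tests see the same loaded modules they started with."""
    saved = sys.modules.copy()
    try:
        return test_fn()
    finally:
        # Drop anything the test imported and bring back anything it unloaded.
        sys.modules.clear()
        sys.modules.update(saved)
```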
CC: @sambhavnoobcoder @ydshieh | closed | 2025-02-21T19:50:07Z | 2025-03-24T09:56:26Z | https://github.com/huggingface/transformers/issues/36334 | [] | dvrogozh | 4 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 803 | Screens like statistics(how many jobs applied, how many got response) help users to track their job search. | closed | 2024-11-11T01:20:15Z | 2024-12-04T02:06:13Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/803 | [
"enhancement",
"stale"
] | surapuramakhil | 2 | |
modelscope/modelscope | nlp | 643 | Video-to-Video model finetuning | Hi, I'm trying to finetune the `damo/Video-to-Video` model. But after a couple of iteration my results degenerate to noise.
Could you please take a look and see if the following alignes with your training process.
I'm initializing the `unet`, `vae` and `text encoder` unsing the official weights.
```
model_ms = VideoToVideo(model_dir="modules/Video-to-Video")
vae = model_ms.autoencoder
unet = model_ms.generator
text_encoder = model_ms.clip_encoder
```
and use the following training scheduler, the (sigmas are calculated from betas obtained using [cosine beta schedule](https://github.com/modelscope/modelscope/issues/606)).
```
class GaussianDiffusion(object):
def __init__(self, sigmas):
self.sigmas = sigmas
self.alphas = torch.sqrt(1 - sigmas**2)
self.num_timesteps = len(sigmas)
def add_noise(self, x0, noise, t):
noise = torch.randn_like(x0) if noise is None else noise
xt = _i(self.alphas, t, x0) * x0 + \
_i(self.sigmas, t, x0) * noise
return xt
def get_velocity(self, x0, noise, t):
v = _i(self.alphas, t, noise) * noise - _i(self.sigmas, t, x0) * x0
return v
```
the loss is calculate with the following step:
```
noise = torch.randn_like(z_0)
prompt_embeddings = text_encoder(caption).detach()
z_t = noise_scheduler.add_noise(z_0, timesteps, noise)
target = noise_scheduler.get_velocity(z_0, noise, timesteps)
z_0_pred = unet(z_t, timesteps, prompt_embeddings)
loss = torch.nn.functional.mse_loss(z_0_pred.float(), target.float(), reduction=reduction)
```
Finetune related: @tastelikefeet @Jintao-Huang
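One cheap sanity check on the scheduler above: with alpha_t^2 + sigma_t^2 = 1 (which holds by construction, since `self.alphas = sqrt(1 - sigmas**2)`), the v-target is an orthogonal rotation of (x0, noise), so x0 must be exactly recoverable as alpha * x_t - sigma * v. The snippet verifies that algebra with plain floats; the `cosine_sigmas` construction is my reading of the cosine-beta-schedule issue linked above, not modelscope's code:

```python
import math

def cosine_sigmas(T=1000, s=0.008):
    # sigmas from a cosine alpha-bar schedule: sigma_t = sqrt(1 - alpha_bar_t)
    f = lambda t: math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2
    return [math.sqrt(1 - f(t) / f(0)) for t in range(1, T + 1)]

sigma = cosine_sigmas()[500]
alpha = math.sqrt(1 - sigma ** 2)       # matches self.alphas above
x0, eps = 0.3, -1.2                     # toy scalars standing in for tensors
xt = alpha * x0 + sigma * eps           # add_noise
v = alpha * eps - sigma * x0            # get_velocity
assert abs((alpha * xt - sigma * v) - x0) < 1e-12   # x0 recovered exactly
```

If this identity failed on real tensors in the training loop, it would point at an argument-order or schedule mismatch rather than the model.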
| closed | 2023-11-14T08:25:04Z | 2024-06-23T01:53:37Z | https://github.com/modelscope/modelscope/issues/643 | [
"Stale"
] | hpoghos | 9 |
ultralytics/ultralytics | pytorch | 19,490 | Turn off early stopping for first "n" epochs | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When I am finetuning a model, the loss is initially very low. After that, it increases a little bit and the model starts to converge. What I want to do is, turn off the earlystopping for the first 100 epochs and after 100 epochs are completed, turn it on.

I could set earlystopping to a very high value, but that would leave it to train even though the model has converged.
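The behaviour described above is a small wrapper around the usual fitness-tracking logic: keep updating the best fitness from epoch 0, but refuse to stop until a warm-up period has passed. A sketch (hypothetical class, not part of the ultralytics API):

```python
class DelayedEarlyStopper:
    """Patience-based early stopping that is inert for the first
    `warmup` epochs, so an initially low loss can't trigger it."""

    def __init__(self, patience=50, warmup=100):
        self.patience = patience
        self.warmup = warmup
        self.best_fitness = float("-inf")
        self.best_epoch = 0

    def step(self, epoch, fitness):
        # Track the best model throughout, including during warm-up.
        if fitness > self.best_fitness:
            self.best_fitness = fitness
            self.best_epoch = epoch
        if epoch < self.warmup:
            return False                  # early stopping switched off
        return (epoch - self.best_epoch) >= self.patience
```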
### Additional
_No response_ | open | 2025-03-02T04:14:37Z | 2025-03-02T21:01:42Z | https://github.com/ultralytics/ultralytics/issues/19490 | [
"question"
] | chinge55 | 2 |
glumpy/glumpy | numpy | 28 | Problems with Texture2D | Hi!
Trying to install glumpy and try it on a Mac Pro with Yosemite and macports. I have two, probably related issues. When I import glumpy, I get the following warning:
In [2]: import glumpy
[w] Cannot set error on copy on GPU copy
Then when I run any of the more complex examples, I get the following errors:
```
run voronoi.py
[i] Using GLFW (GL 2.1)
[i] Running at 60 frames/second
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/Users/bror/git/code/glumpy/examples/voronoi.py in <module>()
    109 cones.bind(C)
    110
--> 111 app.run()

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/__init__.pyc in run(clock, framerate, interactive, duration, framecount)
    299     global __running__
    300
--> 301     clock = __init__(clock=clock, framerate=framerate, backend=__backend__)
    302     options = parser.get_options()
    303

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/__init__.pyc in __init__(clock, framerate, backend)
    259
    260     # Dispatch an initial resize event
--> 261     window.dispatch_event('on_resize', window._width, window._height)
    262
    263     return __clock__

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/window/event.pyc in dispatch_event(self, event_type, *args)
    377             except TypeError:
    378                 self._raise_dispatch_exception(
--> 379                     event_type, args, getattr(self, event_type))
    380
    381         if invoked:

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/window/event.pyc in dispatch_event(self, event_type, *args)
    373             try:
    374                 invoked = True
--> 375                 if getattr(self, event_type)(*args):
    376                     return EVENT_HANDLED
    377             except TypeError:

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/window/window.pyc in on_resize(self, width, height)
    191         """ Default resize handler that set viewport """
    192         gl.glViewport(0, 0, width, height)
--> 193         self.dispatch_event('on_draw', 0.0)
    194         self.swap()
    195

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/window/event.pyc in dispatch_event(self, event_type, *args)
    366                     return EVENT_HANDLED
    367             except TypeError:
--> 368                 self._raise_dispatch_exception(event_type, args, handler)
    369
    370

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/app/window/event.pyc in dispatch_event(self, event_type, *args)
    363             try:
    364                 invoked = True
--> 365                 if handler(*args):
    366                     return EVENT_HANDLED
    367             except TypeError:

/Users/bror/git/code/glumpy/examples/voronoi.py in on_draw(dt)
     59 @window.event
     60 def on_draw(dt):
---> 61     with borders:
     62         window.clear()
     63         gl.glEnable(gl.GL_DEPTH_TEST)

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/graphics/filter.pyc in __enter__(self)
    148         # Prepare framebuffer for "original" rendering
    149         gl.glViewport(0, 0, self.width, self.height)
--> 150         self._framebuffers[0].activate()
    151
    152

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/gloo/globject.pyc in activate(self)
     88             self._need_create = False
     89
---> 90         self._activate()
     91
     92         if self.need_setup:

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/gloo/framebuffer.pyc in _activate(self)
    378
    379         if self._need_attach:
--> 380             self._attach()
    381             self._need_attach = False
    382         attachments = [gl.GL_COLOR_ATTACHMENT0+i for i in range(len(self.color))]

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/gloo/framebuffer.pyc in _attach(self)
    407                 buffer.deactivate()
    408             elif isinstance(buffer, Texture2D):
--> 409                 buffer.activate()
    410                 # INFO: 0 is for mipmap level 0 (default) of the texture
    411                 gl.glFramebufferTexture2D(gl.GL_FRAMEBUFFER, attachment,

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/gloo/globject.pyc in activate(self)
     95
     96         if self.need_update:
---> 97             self._update()
     98             self._need_update = False
     99

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/glumpy/gloo/texture.pyc in _update(self)
    312             gl.glBindTexture(self._target, self.handle)
    313             gl.glTexSubImage2D(self.target, 0, x, y, width, height,
--> 314                                self._cpu_format, self.gtype, self)
    315         gl.glBindTexture(self._target, self.handle)
    316

latebind.pyx in OpenGL_accelerate.latebind.LateBind.__call__ (src/latebind.c:989)()

wrapper.pyx in OpenGL_accelerate.wrapper.Wrapper.__call__ (src/wrapper.c:6294)()

wrapper.pyx in OpenGL_accelerate.wrapper.PyArgCalculator.c_call (src/wrapper.c:4241)()

wrapper.pyx in OpenGL_accelerate.wrapper.PyArgCalculatorElement.c_call (src/wrapper.c:3601)()

wrapper.pyx in OpenGL_accelerate.wrapper.PyArgCalculatorElement.c_call (src/wrapper.c:3520)()

/Users/bror/.virtualenvs/glumpy/lib/python2.7/site-packages/OpenGL/GL/images.pyc in __call__(self, arg, baseOperation, pyArgs)
    455         type = pyArgs[ self.typeIndex ]
    456         arrayType = arrays.GL_CONSTANT_TO_ARRAY_TYPE[ images.TYPE_TO_ARRAYTYPE[ type ] ]
--> 457         return arrayType.asArray( arg )
    458     # def cResolver( self, array ):
    459     #     return array

arraydatatype.pyx in OpenGL_accelerate.arraydatatype.ArrayDatatype.asArray (src/arraydatatype.c:4221)()

arraydatatype.pyx in OpenGL_accelerate.arraydatatype.HandlerRegistry.c_lookup (src/arraydatatype.c:2230)()

TypeError: ("No array-type handler for type <class 'glumpy.gloo.texture.Texture2D'> (value: Texture2D([[[ 0.,  0.,  0.],\n [ 0.,  0., 0) registered", <OpenGL.GL.images.ImageInputConverter object at 0x10a5ba750>)
```
Any idea of what I'm doing wrong? I've tried using virtualenv to avoid problems from my normal setup.
/Bror
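For what it's worth, the last frame shows PyOpenGL-accelerate's handler registry refusing the `Texture2D` ndarray subclass; registries that dispatch on the exact class will miss subclasses even when the underlying data is fine. A minimal model of that failure mode (illustrative stdlib code, not PyOpenGL's actual implementation):

```python
# A handler registry keyed by exact type: a subclass of a registered type
# fails lookup even though it carries the same data.
registry = {list: "list-handler"}

class MyList(list):
    pass

def lookup(value):
    handler = registry.get(type(value))
    if handler is None:
        raise TypeError(
            "No array-type handler for type %r registered" % type(value))
    return handler

assert lookup([1, 2]) == "list-handler"
try:
    lookup(MyList([1, 2]))   # same data, unregistered subclass
except TypeError as exc:
    assert "No array-type handler" in str(exc)
```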
| closed | 2015-03-12T22:09:21Z | 2015-03-20T16:36:02Z | https://github.com/glumpy/glumpy/issues/28 | [] | brorfred | 6 |
jupyter/nbgrader | jupyter | 1,411 | Unpining depency on nbconvert 5.6.1? | ### `nbgrader --version`
0.7 dev
### Issue
nbgrader is pinning an old version of nbconvert (5.6.1).
https://github.com/jupyter/nbgrader/blob/f5729fb2b6638ea2c6f9b0f9226a0dd7c8cb2ad5/setup.py#L94
Is this a strong dependency? Could it be lifted to allow for nbconvert 6, which
was released in August 2020?
We have a rather fat image for our JupyterHub, and this is putting stress
on the constraint resolution. It is also preventing the use of, e.g., voila, which
uses nbconvert 6.
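For context, the usual way to lift such a pin without opening the door to untested majors is a bounded range in `setup.py` (the exact upper bound here is illustrative, not a tested claim):

```python
# setup.py (sketch)
install_requires = [
    "nbconvert>=5.6.1,<7",   # accept nbconvert 6.x instead of pinning ==5.6.1
]
```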
Thanks in advance! | closed | 2021-02-09T16:04:35Z | 2021-03-19T08:01:37Z | https://github.com/jupyter/nbgrader/issues/1411 | [
"bug",
"duplicate"
] | nthiery | 6 |
serengil/deepface | machine-learning | 768 | please find exception stacktrace - using arcface as model | ```
return DeepFace.find(img_path=img_path, db_path=config.tdes_images_location, align=align,
File "/home/akhil/PycharmProjects/TDES-analytics/venv/lib/python3.10/site-packages/deepface/DeepFace.py", line 488, in find
img_objs = functions.extract_faces(
File "/home/akhil/PycharmProjects/TDES-analytics/venv/lib/python3.10/site-packages/deepface/commons/functions.py", line 105, in extract_faces
img_region = [0, 0, img.shape[1], img.shape[0]]
AttributeError: 'NoneType' object has no attribute 'shape'
``` | closed | 2023-06-03T19:17:18Z | 2023-10-15T21:32:30Z | https://github.com/serengil/deepface/issues/768 | [
"question"
] | surapuramakhil | 3 |
PokeAPI/pokeapi | graphql | 499 | giratina is not a pokemon? | 
| closed | 2020-06-03T05:04:32Z | 2020-06-03T05:15:38Z | https://github.com/PokeAPI/pokeapi/issues/499 | [] | FirezTheGreat | 1 |
3b1b/manim | python | 1,224 | Text object bug with size=0.5 | I used latest Manim from master branch on Windows 10. [Both fonts](https://fonts.google.com/?query=open+sans).
[Demo.zip](https://github.com/3b1b/manim/files/5210715/Demo.zip)
`size=1`:

`size=0.5`:

```
strs = ['OpenSansCondensed-Bold.ttf',
'OpenSansCondensed-Light.ttf',
'OpenSansCondensed-LightItalic.ttf',
'OpenSans-Bold.ttf',
'OpenSans-BoldItalic.ttf',
'OpenSans-ExtraBold.ttf',
'OpenSans-ExtraBoldItalic.ttf',
'OpenSans-Italic.ttf',
'OpenSans-Light.ttf',
'OpenSans-LightItalic.ttf',
'OpenSans-Regular.ttf',
'OpenSans-SemiBold.ttf',
'OpenSans-SemiBoldItalic.ttf']
fonts = {strs[0]: 'Open Sans Condensed Bold',
strs[1]: 'Open Sans Condensed Light',
strs[2]: 'Open Sans Condensed Light Italic',
strs[3]: 'Open Sans Bold',
strs[4]: 'Open Sans Bold Italic',
strs[5]: 'Open Sans ExtraBold',
strs[6]: 'Open Sans ExtraBold Italic',
strs[7]: 'Open Sans Italic',
strs[8]: 'Open Sans Light',
strs[9]: 'Open Sans Light Italic',
strs[10]: 'Open Sans Regular',
strs[11]: 'Open Sans SemiBold',
strs[12]: 'Open Sans SemiBold Italic'}
class Demo(Scene):
def construct(self):
for size in [2, 1, .7, .5]:
for i in range(len(strs)):
text = Text(strs[i], font=fonts[strs[i]], stroke_width=0, size=size)
text.move_to([text.length_over_dim(0) / 2 - 5, .6 * (6 - i), 0])
self.add(text)
self.wait(.1)
self.remove(*self.mobjects)
``` | open | 2020-09-11T17:34:59Z | 2020-09-11T17:41:50Z | https://github.com/3b1b/manim/issues/1224 | [] | qo4on | 0 |
huggingface/datasets | tensorflow | 6,495 | Newline characters don't behave as expected when calling dataset.info | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@marios
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[Source](https://huggingface.co/docs/datasets/v2.2.1/en/access)
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, 
post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
### Expected behavior
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n',
citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398',
license='',
features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')},
download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}},
download_size=1494541,
post_processing_size=None,
dataset_size=1492156,
size_in_bytes=2986697
) | open | 2023-12-12T23:07:51Z | 2023-12-13T13:24:22Z | https://github.com/huggingface/datasets/issues/6495 | [] | gerald-wrona | 0 |
autogluon/autogluon | data-science | 3,808 | score_tests are all negative | Hi AutoGluon team,
Thanks for making AutoGluon for us to use. I heard about AutoGluon at a conference in New Orleans this week.
I ran some of my data with the code in Colab. It ran well; however, the score_test values are all negative (see the link below):
https://colab.research.google.com/drive/1QznEsKdz8MyymerQ7W76kWJeiybjSjie#scrollTo=AfJX-sv7ZJY2
I don't know what I did wrong. It would be very helpful if you could help me diagnose this problem.
Thanks,
Feiyang
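For readers hitting the same surprise: this is AutoGluon's documented convention rather than a bug — every metric is reported in higher-is-better form, so error metrics (RMSE, MAE, log loss, ...) have their sign flipped and a *less negative* score means a better model. The sign logic in isolation (the metric names in the set are illustrative):

```python
LOWER_IS_BETTER = {"rmse", "mae", "mean_squared_error", "log_loss"}

def reported_score(metric, raw_value):
    # AutoGluon-style convention: flip error metrics so that
    # "greater is better" holds uniformly across all metrics.
    return -raw_value if metric in LOWER_IS_BETTER else raw_value

assert reported_score("rmse", 3.2) == -3.2   # an RMSE of 3.2 shows up as -3.2
assert reported_score("r2", 0.9) == 0.9      # already higher-is-better
```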
| closed | 2023-12-12T06:03:17Z | 2023-12-16T17:53:41Z | https://github.com/autogluon/autogluon/issues/3808 | [] | FeiyangBai | 6 |
aidlearning/AidLearning-FrameWork | jupyter | 16 | performance degradation after running overnight | I run your face recognition application on my android tablet overnight and the app becomes quite slow (you can see video refresh on the screen quite slow). I can see a couple frames/sec at the beginning.
Are you aware of this issue?
| closed | 2019-05-22T05:01:32Z | 2020-08-03T09:10:32Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/16 | [] | kaishijeng | 6 |
xonsh/xonsh | data-science | 4,773 | Create None file | A file called None is being generated
## xonfig
<details>
```
$ xonfig
+------------------+----------------------+
| xonsh | 0.11.0 |
| Python | 3.10.4 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.24 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.11.2 |
| on posix | True |
| on linux | True |
| distro | manjaro |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| on jupyter | False |
| jupyter kernel | None |
| xontrib 1 | add_variable |
| xontrib 2 | base16_shell |
| RC file 1 | /home/erick/.xonshrc |
+------------------+----------------------+
```
</details>
## Expected Behavior
Don't create None file
## Current Behavior
Create None file
### Traceback (if applicable)
<details>
```
Exception ignored in atexit callback: <function _lastflush at 0x7fda69716ef0>
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/xonsh/built_ins.py", line 506, in _lastflush
XSH.history.flush(at_exit=True)
File "/usr/lib/python3.10/site-packages/xonsh/history/__amalgam__.py", line 765, in flush
hf = JsonHistoryFlusher(
File "/usr/lib/python3.10/site-packages/xonsh/history/__amalgam__.py", line 514, in __init__
self.dump()
File "/usr/lib/python3.10/site-packages/xonsh/history/__amalgam__.py", line 548, in dump
with open(self.filename, "r", newline="\n") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'None'
```
</details>
## Steps to Reproduce
1. Open terminal
2. cd Desktop
3. exit
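A minimal sketch of what I suspect is happening, based on the traceback and the env dump below, where `XONSH_HISTORY_FILE` is the string "None" (this is my assumption, not xonsh's actual code): if the history filename has been stringified from the Python value `None`, any flush creates a file literally named `None` in the current directory.

```python
import os
import tempfile

# Hypothetical reproduction, not xonsh's real flush code: simulate a history
# filename that was stringified from the Python value None.
os.chdir(tempfile.mkdtemp())
history_file = str(None)              # -> the string "None"
with open(history_file, "w") as f:    # creates a file literally named "None"
    f.write("{}")
created = os.path.exists("None")
print(created)  # True
```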
## Files
##### screenshot

##### Content of None file
<details>
```json
{"locs": [ 69, 3674, 3754, 5293],
"index": {"offsets": {"__total__": 0, "cmds": [9], "env": {"ANDROID_HOME": 38, "BASE16_SHELL": 81, "BASE16_THEME": 134, "BASH_COMPLETIONS": 163, "BOTTOM_TOOLBAR": 227, "COC_DATA_HOME": 249, "COC_VIMCONFIG": 293, "COLORTERM": 336, "DBUS_SESSION_BUS_ADDRESS": 377, "DESKTOP_SESSION": 428, "DISPLAY": 448, "EDITOR": 464, "GDMSESSION": 495, "GDM_LANG": 516, "GNOME_SETUP_DISPLAY": 554, "GNOME_TERMINAL_SCREEN": 585, "GNOME_TERMINAL_SERVICE": 678, "HOME": 696, "LANG": 719, "LC_ADDRESS": 748, "LC_IDENTIFICATION": 784, "LC_MEASUREMENT": 817, "LC_MONETARY": 847, "LC_NAME": 873, "LC_NUMERIC": 902, "LC_PAPER": 929, "LC_TELEPHONE": 960, "LC_TIME": 986, "LOGNAME": 1012, "LS_COLORS": 1034, "MAIL": 2399, "MOTD_SHOWN": 2438, "MOZ_ENABLE_WAYLAND": 2467, "MULTILINE_PROMPT": 2492, "MYVIMRC": 2524, "NVIM_LISTEN_ADDRESS": 2586, "NVIM_LOG_FILE": 2624, "NVM_BIN": 2666, "NVM_CD_FLAGS": 2729, "NVM_DIR": 2744, "NVM_INC": 2775, "PATH": 2839, "PROMPT": 3518, "PWD": 3607, "QT_AUTO_SCREEN_SCALE_FACTOR": 3687, "QT_IM_MODULE": 3708, "QT_QPA_PLATFORMTHEME": 3740, "RIGHT_PROMPT": 3765, "SESSION_MANAGER": 3788, "SHELL": 3870, "SHELL_TYPE": 3902, "SHLVL": 3929, "SSH_AUTH_SOCK": 3951, "SYSTEMD_EXEC_PID": 4001, "TERM": 4017, "THREAD_SUBPROCS": 4051, "TITLE": 4065, "USER": 4085, "USERNAME": 4106, "VIMRUNTIME": 4129, "VK_ICD_FILENAMES": 4176, "VTE_VERSION": 4284, "WAYLAND_DISPLAY": 4311, "XAUTHORITY": 4338, "XDG_CURRENT_DESKTOP": 4407, "XDG_DATA_DIRS": 4433, "XDG_MENU_PREFIX": 4563, "XDG_RUNTIME_DIR": 4592, "XDG_SESSION_CLASS": 4631, "XDG_SESSION_DESKTOP": 4662, "XDG_SESSION_TYPE": 4691, "XMODIFIERS": 4716, "XONSHRC": 4739, "XONSHRC_DIR": 4824, "XONSH_CONFIG_DIR": 4894, "XONSH_DATA_DIR": 4941, "XONSH_HISTORY_FILE": 4997, "XONSH_INTERACTIVE": 5026, "XONSH_LOGIN": 5046, "XONSH_SHOW_TRACEBACK": 5075, "XONSH_TRACEBACK_LOGFILE": 5107, "XONSH_VERSION": 5157, "_": 5172, "__total__": 21}, "locked": 5200, "sessionid": 5219, "ts": [5266, 5285, 5265]}, "sizes": {"__total__": 5293, "cmds": [3], "env": {"ANDROID_HOME": 
25, "BASE16_SHELL": 35, "BASE16_THEME": 7, "BASH_COMPLETIONS": 44, "BOTTOM_TOOLBAR": 3, "COC_DATA_HOME": 25, "COC_VIMCONFIG": 28, "COLORTERM": 11, "DBUS_SESSION_BUS_ADDRESS": 30, "DESKTOP_SESSION": 7, "DISPLAY": 4, "EDITOR": 15, "GDMSESSION": 7, "GDM_LANG": 13, "GNOME_SETUP_DISPLAY": 4, "GNOME_TERMINAL_SCREEN": 65, "GNOME_TERMINAL_SERVICE": 8, "HOME": 13, "LANG": 13, "LC_ADDRESS": 13, "LC_IDENTIFICATION": 13, "LC_MEASUREMENT": 13, "LC_MONETARY": 13, "LC_NAME": 13, "LC_NUMERIC": 13, "LC_PAPER": 13, "LC_TELEPHONE": 13, "LC_TIME": 13, "LOGNAME": 7, "LS_COLORS": 1355, "MAIL": 23, "MOTD_SHOWN": 5, "MOZ_ENABLE_WAYLAND": 3, "MULTILINE_PROMPT": 19, "MYVIMRC": 37, "NVIM_LISTEN_ADDRESS": 19, "NVIM_LOG_FILE": 29, "NVM_BIN": 45, "NVM_CD_FLAGS": 2, "NVM_DIR": 18, "NVM_INC": 54, "PATH": 667, "PROMPT": 80, "PWD": 47, "QT_AUTO_SCREEN_SCALE_FACTOR": 3, "QT_IM_MODULE": 6, "QT_QPA_PLATFORMTHEME": 7, "RIGHT_PROMPT": 2, "SESSION_MANAGER": 71, "SHELL": 16, "SHELL_TYPE": 16, "SHLVL": 3, "SSH_AUTH_SOCK": 28, "SYSTEMD_EXEC_PID": 6, "TERM": 13, "THREAD_SUBPROCS": 3, "TITLE": 10, "USER": 7, "USERNAME": 7, "VIMRUNTIME": 25, "VK_ICD_FILENAMES": 91, "VTE_VERSION": 6, "WAYLAND_DISPLAY": 11, "XAUTHORITY": 44, "XDG_CURRENT_DESKTOP": 7, "XDG_DATA_DIRS": 109, "XDG_MENU_PREFIX": 8, "XDG_RUNTIME_DIR": 16, "XDG_SESSION_CLASS": 6, "XDG_SESSION_DESKTOP": 7, "XDG_SESSION_TYPE": 9, "XMODIFIERS": 10, "XONSHRC": 68, "XONSHRC_DIR": 48, "XONSH_CONFIG_DIR": 27, "XONSH_DATA_DIR": 32, "XONSH_HISTORY_FILE": 6, "XONSH_INTERACTIVE": 3, "XONSH_LOGIN": 3, "XONSH_SHOW_TRACEBACK": 3, "XONSH_TRACEBACK_LOGFILE": 31, "XONSH_VERSION": 8, "_": 14, "__total__": 5167}, "locked": 4, "sessionid": 38, "ts": [17, 4, 26]}},
"data": {"cmds": []
, "env": {"ANDROID_HOME": "/home/erick/Android/Sdk", "BASE16_SHELL": "/home/erick/.config/base16-shell/", "BASE16_THEME": "atlas", "BASH_COMPLETIONS": "/usr/share/bash-completion/bash_completion", "BOTTOM_TOOLBAR": " ", "COC_DATA_HOME": "/home/erick/.config/coc", "COC_VIMCONFIG": "/home/erick/.dotfiles/nvim", "COLORTERM": "truecolor", "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus", "DESKTOP_SESSION": "gnome", "DISPLAY": ":0", "EDITOR": "/usr/bin/nano", "GDMSESSION": "gnome", "GDM_LANG": "es_PE.UTF-8", "GNOME_SETUP_DISPLAY": ":1", "GNOME_TERMINAL_SCREEN": "/org/gnome/Terminal/screen/5bc0ff37_3736_41cb_8350_11f6b86cffde", "GNOME_TERMINAL_SERVICE": ":1.431", "HOME": "/home/erick", "LANG": "es_PE.UTF-8", "LC_ADDRESS": "es_PE.UTF-8", "LC_IDENTIFICATION": "es_PE.UTF-8", "LC_MEASUREMENT": "es_PE.UTF-8", "LC_MONETARY": "es_PE.UTF-8", "LC_NAME": "es_PE.UTF-8", "LC_NUMERIC": "es_PE.UTF-8", "LC_PAPER": "es_PE.UTF-8", "LC_TELEPHONE": "es_PE.UTF-8", "LC_TIME": "es_PE.UTF-8", "LOGNAME": "erick", "LS_COLORS": 
"*.7z=1;31:*.aac=36:*.ace=1;31:*.alz=1;31:*.arc=1;31:*.arj=1;31:*.asf=1;35:*.au=36:*.avi=1;35:*.bmp=1;35:*.bz=1;31:*.bz2=1;31:*.cab=1;31:*.cgm=1;35:*.cpio=1;31:*.deb=1;31:*.dl=1;35:*.dwm=1;31:*.dz=1;31:*.ear=1;31:*.emf=1;35:*.esd=1;31:*.flac=36:*.flc=1;35:*.fli=1;35:*.flv=1;35:*.gif=1;35:*.gl=1;35:*.gz=1;31:*.jar=1;31:*.jpeg=1;35:*.jpg=1;35:*.lha=1;31:*.lrz=1;31:*.lz=1;31:*.lz4=1;31:*.lzh=1;31:*.lzma=1;31:*.lzo=1;31:*.m2v=1;35:*.m4a=36:*.m4v=1;35:*.mid=36:*.midi=36:*.mjpeg=1;35:*.mjpg=1;35:*.mka=36:*.mkv=1;35:*.mng=1;35:*.mov=1;35:*.mp3=36:*.mp4=1;35:*.mp4v=1;35:*.mpc=36:*.mpeg=1;35:*.mpg=1;35:*.nuv=1;35:*.oga=36:*.ogg=36:*.ogm=1;35:*.ogv=1;35:*.ogx=1;35:*.opus=36:*.pbm=1;35:*.pcx=1;35:*.pgm=1;35:*.png=1;35:*.ppm=1;35:*.qt=1;35:*.ra=36:*.rar=1;31:*.rm=1;35:*.rmvb=1;35:*.rpm=1;31:*.rz=1;31:*.sar=1;31:*.spx=36:*.svg=1;35:*.svgz=1;35:*.swm=1;31:*.t7z=1;31:*.tar=1;31:*.taz=1;31:*.tbz=1;31:*.tbz2=1;31:*.tga=1;35:*.tgz=1;31:*.tif=1;35:*.tiff=1;35:*.tlz=1;31:*.txz=1;31:*.tz=1;31:*.tzo=1;31:*.tzst=1;31:*.vob=1;35:*.war=1;31:*.wav=36:*.webm=1;35:*.webp=1;35:*.wim=1;31:*.wmv=1;35:*.xbm=1;35:*.xcf=1;35:*.xpm=1;35:*.xspf=36:*.xwd=1;35:*.xz=1;31:*.yuv=1;35:*.z=1;31:*.zip=1;31:*.zoo=1;31:*.zst=1;31:bd=40;1;33:ca=30;41:cd=40;1;33:di=1;34:do=1;35:ex=1;32:ln=1;36:mh=0:mi=0:or=40;1;31:ow=34;42:pi=40;33:rs=0:sg=30;43:so=1;35:st=37;44:su=37;41:tw=30;42", "MAIL": "/var/spool/mail/erick", "MOTD_SHOWN": "pam", "MOZ_ENABLE_WAYLAND": "1", "MULTILINE_PROMPT": "`*\u00b7.\u00b7*`", "MYVIMRC": "/home/erick/.dotfiles/nvim/init.lua", "NVIM_LISTEN_ADDRESS": "/tmp/nvim2oQhBZ/0", "NVIM_LOG_FILE": "/home/erick/.cache/nvim/log", "NVM_BIN": "/home/erick/.nvm/versions/node/v14.18.2/bin", "NVM_CD_FLAGS": "", "NVM_DIR": "/home/erick/.nvm", "NVM_INC": "/home/erick/.nvm/versions/node/v14.18.2/include/node", "PATH": 
"/home/erick/.nvm/versions/node/v14.18.2/bin:/usr/local/bin:/usr/bin:/home/erick/.cargo/env:/home/erick/Android/Sdk/platform-tools:/opt/blender:/opt/node14/bin:/opt/ngrok:/opt/vagrant:/usr/local/go/bin:/home/erick/go/bin:/home/erick/.yarn/bin:/home/erick/.config/composer/vendor/bin:/home/erick/.npm-global/bin:/home/erick/.local/bin:/home/erick/Tools/azuredatastudio:/home/erick/.cargo/env:/home/erick/Android/Sdk/platform-tools:/opt/blender:/opt/node14/bin:/opt/ngrok:/opt/vagrant:/usr/local/go/bin:/home/erick/go/bin:/home/erick/.yarn/bin:/home/erick/.config/composer/vendor/bin:/home/erick/.npm-global/bin:/home/erick/.local/bin:/home/erick/Tools/azuredatastudio", "PROMPT": "{env_name:{} }{YELLOW}{cwd_base}{branch_color}{curr_branch: [{}]} {RED}\uf490 ", "PWD": "/home/erick/TG/proyectos/wortix-dashboard/app", "QT_AUTO_SCREEN_SCALE_FACTOR": "1", "QT_IM_MODULE": "ibus", "QT_QPA_PLATFORMTHEME": "qt5ct", "RIGHT_PROMPT": "", "SESSION_MANAGER": "local/stixcode:@/tmp/.ICE-unix/1237,unix/stixcode:/tmp/.ICE-unix/1237", "SHELL": "/usr/bin/xonsh", "SHELL_TYPE": "prompt_toolkit", "SHLVL": "2", "SSH_AUTH_SOCK": "/run/user/1000/keyring/ssh", "SYSTEMD_EXEC_PID": "1252", "TERM": "xterm-color", "THREAD_SUBPROCS": "1", "TITLE": "Terminal", "USER": "erick", "USERNAME": "erick", "VIMRUNTIME": "/usr/share/nvim/runtime", "VK_ICD_FILENAMES": "/usr/share/vulkan/icd.d/radeon_icd.i686.json:/usr/share/vulkan/icd.d/radeon_icd.x86_64.js", "VTE_VERSION": "6800", "WAYLAND_DISPLAY": "wayland-0", "XAUTHORITY": "/run/user/1000/.mutter-Xwaylandauth.3DHVK1", "XDG_CURRENT_DESKTOP": "GNOME", "XDG_DATA_DIRS": "/home/erick/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share/:/usr/share/", "XDG_MENU_PREFIX": "gnome-", "XDG_RUNTIME_DIR": "/run/user/1000", "XDG_SESSION_CLASS": "user", "XDG_SESSION_DESKTOP": "gnome", "XDG_SESSION_TYPE": "wayland", "XMODIFIERS": "@im=ibus", "XONSHRC": "/etc/xonshrc:/home/erick/.config/xonsh/rc.xsh:/home/erick/.xonshrc", "XONSHRC_DIR": 
"/etc/xonsh/rc.d:/home/erick/.config/xonsh/rc.d", "XONSH_CONFIG_DIR": "/home/erick/.config/xonsh", "XONSH_DATA_DIR": "/home/erick/.local/share/xonsh", "XONSH_HISTORY_FILE": "None", "XONSH_INTERACTIVE": "1", "XONSH_LOGIN": "1", "XONSH_SHOW_TRACEBACK": "1", "XONSH_TRACEBACK_LOGFILE": "/home/erick/.xonsh/errors.log", "XONSH_VERSION": "0.11.0", "_": "/usr/bin/env"}
, "locked": true, "sessionid": "33b596a1-3d26-4910-9379-7a30b126f7f8", "ts": [1650461292.774055, null]
}
}
```
</details>
##### Content of None file (formatted)
<details>
```json
{
"locs": [69, 3674, 3754, 5293],
"index": {
"offsets": {
"__total__": 0,
"cmds": [9],
"env": {
"ANDROID_HOME": 38,
"BASE16_SHELL": 81,
"BASE16_THEME": 134,
"BASH_COMPLETIONS": 163,
"BOTTOM_TOOLBAR": 227,
"COC_DATA_HOME": 249,
"COC_VIMCONFIG": 293,
"COLORTERM": 336,
"DBUS_SESSION_BUS_ADDRESS": 377,
"DESKTOP_SESSION": 428,
"DISPLAY": 448,
"EDITOR": 464,
"GDMSESSION": 495,
"GDM_LANG": 516,
"GNOME_SETUP_DISPLAY": 554,
"GNOME_TERMINAL_SCREEN": 585,
"GNOME_TERMINAL_SERVICE": 678,
"HOME": 696,
"LANG": 719,
"LC_ADDRESS": 748,
"LC_IDENTIFICATION": 784,
"LC_MEASUREMENT": 817,
"LC_MONETARY": 847,
"LC_NAME": 873,
"LC_NUMERIC": 902,
"LC_PAPER": 929,
"LC_TELEPHONE": 960,
"LC_TIME": 986,
"LOGNAME": 1012,
"LS_COLORS": 1034,
"MAIL": 2399,
"MOTD_SHOWN": 2438,
"MOZ_ENABLE_WAYLAND": 2467,
"MULTILINE_PROMPT": 2492,
"MYVIMRC": 2524,
"NVIM_LISTEN_ADDRESS": 2586,
"NVIM_LOG_FILE": 2624,
"NVM_BIN": 2666,
"NVM_CD_FLAGS": 2729,
"NVM_DIR": 2744,
"NVM_INC": 2775,
"PATH": 2839,
"PROMPT": 3518,
"PWD": 3607,
"QT_AUTO_SCREEN_SCALE_FACTOR": 3687,
"QT_IM_MODULE": 3708,
"QT_QPA_PLATFORMTHEME": 3740,
"RIGHT_PROMPT": 3765,
"SESSION_MANAGER": 3788,
"SHELL": 3870,
"SHELL_TYPE": 3902,
"SHLVL": 3929,
"SSH_AUTH_SOCK": 3951,
"SYSTEMD_EXEC_PID": 4001,
"TERM": 4017,
"THREAD_SUBPROCS": 4051,
"TITLE": 4065,
"USER": 4085,
"USERNAME": 4106,
"VIMRUNTIME": 4129,
"VK_ICD_FILENAMES": 4176,
"VTE_VERSION": 4284,
"WAYLAND_DISPLAY": 4311,
"XAUTHORITY": 4338,
"XDG_CURRENT_DESKTOP": 4407,
"XDG_DATA_DIRS": 4433,
"XDG_MENU_PREFIX": 4563,
"XDG_RUNTIME_DIR": 4592,
"XDG_SESSION_CLASS": 4631,
"XDG_SESSION_DESKTOP": 4662,
"XDG_SESSION_TYPE": 4691,
"XMODIFIERS": 4716,
"XONSHRC": 4739,
"XONSHRC_DIR": 4824,
"XONSH_CONFIG_DIR": 4894,
"XONSH_DATA_DIR": 4941,
"XONSH_HISTORY_FILE": 4997,
"XONSH_INTERACTIVE": 5026,
"XONSH_LOGIN": 5046,
"XONSH_SHOW_TRACEBACK": 5075,
"XONSH_TRACEBACK_LOGFILE": 5107,
"XONSH_VERSION": 5157,
"_": 5172,
"__total__": 21
},
"locked": 5200,
"sessionid": 5219,
"ts": [5266, 5285, 5265]
},
"sizes": {
"__total__": 5293,
"cmds": [3],
"env": {
"ANDROID_HOME": 25,
"BASE16_SHELL": 35,
"BASE16_THEME": 7,
"BASH_COMPLETIONS": 44,
"BOTTOM_TOOLBAR": 3,
"COC_DATA_HOME": 25,
"COC_VIMCONFIG": 28,
"COLORTERM": 11,
"DBUS_SESSION_BUS_ADDRESS": 30,
"DESKTOP_SESSION": 7,
"DISPLAY": 4,
"EDITOR": 15,
"GDMSESSION": 7,
"GDM_LANG": 13,
"GNOME_SETUP_DISPLAY": 4,
"GNOME_TERMINAL_SCREEN": 65,
"GNOME_TERMINAL_SERVICE": 8,
"HOME": 13,
"LANG": 13,
"LC_ADDRESS": 13,
"LC_IDENTIFICATION": 13,
"LC_MEASUREMENT": 13,
"LC_MONETARY": 13,
"LC_NAME": 13,
"LC_NUMERIC": 13,
"LC_PAPER": 13,
"LC_TELEPHONE": 13,
"LC_TIME": 13,
"LOGNAME": 7,
"LS_COLORS": 1355,
"MAIL": 23,
"MOTD_SHOWN": 5,
"MOZ_ENABLE_WAYLAND": 3,
"MULTILINE_PROMPT": 19,
"MYVIMRC": 37,
"NVIM_LISTEN_ADDRESS": 19,
"NVIM_LOG_FILE": 29,
"NVM_BIN": 45,
"NVM_CD_FLAGS": 2,
"NVM_DIR": 18,
"NVM_INC": 54,
"PATH": 667,
"PROMPT": 80,
"PWD": 47,
"QT_AUTO_SCREEN_SCALE_FACTOR": 3,
"QT_IM_MODULE": 6,
"QT_QPA_PLATFORMTHEME": 7,
"RIGHT_PROMPT": 2,
"SESSION_MANAGER": 71,
"SHELL": 16,
"SHELL_TYPE": 16,
"SHLVL": 3,
"SSH_AUTH_SOCK": 28,
"SYSTEMD_EXEC_PID": 6,
"TERM": 13,
"THREAD_SUBPROCS": 3,
"TITLE": 10,
"USER": 7,
"USERNAME": 7,
"VIMRUNTIME": 25,
"VK_ICD_FILENAMES": 91,
"VTE_VERSION": 6,
"WAYLAND_DISPLAY": 11,
"XAUTHORITY": 44,
"XDG_CURRENT_DESKTOP": 7,
"XDG_DATA_DIRS": 109,
"XDG_MENU_PREFIX": 8,
"XDG_RUNTIME_DIR": 16,
"XDG_SESSION_CLASS": 6,
"XDG_SESSION_DESKTOP": 7,
"XDG_SESSION_TYPE": 9,
"XMODIFIERS": 10,
"XONSHRC": 68,
"XONSHRC_DIR": 48,
"XONSH_CONFIG_DIR": 27,
"XONSH_DATA_DIR": 32,
"XONSH_HISTORY_FILE": 6,
"XONSH_INTERACTIVE": 3,
"XONSH_LOGIN": 3,
"XONSH_SHOW_TRACEBACK": 3,
"XONSH_TRACEBACK_LOGFILE": 31,
"XONSH_VERSION": 8,
"_": 14,
"__total__": 5167
},
"locked": 4,
"sessionid": 38,
"ts": [17, 4, 26]
}
},
"data": {
"cmds": [],
"env": {
"ANDROID_HOME": "/home/erick/Android/Sdk",
"BASE16_SHELL": "/home/erick/.config/base16-shell/",
"BASE16_THEME": "atlas",
"BASH_COMPLETIONS": "/usr/share/bash-completion/bash_completion",
"BOTTOM_TOOLBAR": " ",
"COC_DATA_HOME": "/home/erick/.config/coc",
"COC_VIMCONFIG": "/home/erick/.dotfiles/nvim",
"COLORTERM": "truecolor",
"DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/1000/bus",
"DESKTOP_SESSION": "gnome",
"DISPLAY": ":0",
"EDITOR": "/usr/bin/nano",
"GDMSESSION": "gnome",
"GDM_LANG": "es_PE.UTF-8",
"GNOME_SETUP_DISPLAY": ":1",
"GNOME_TERMINAL_SCREEN": "/org/gnome/Terminal/screen/5bc0ff37_3736_41cb_8350_11f6b86cffde",
"GNOME_TERMINAL_SERVICE": ":1.431",
"HOME": "/home/erick",
"LANG": "es_PE.UTF-8",
"LC_ADDRESS": "es_PE.UTF-8",
"LC_IDENTIFICATION": "es_PE.UTF-8",
"LC_MEASUREMENT": "es_PE.UTF-8",
"LC_MONETARY": "es_PE.UTF-8",
"LC_NAME": "es_PE.UTF-8",
"LC_NUMERIC": "es_PE.UTF-8",
"LC_PAPER": "es_PE.UTF-8",
"LC_TELEPHONE": "es_PE.UTF-8",
"LC_TIME": "es_PE.UTF-8",
"LOGNAME": "erick",
"LS_COLORS": "*.7z=1;31:*.aac=36:*.ace=1;31:*.alz=1;31:*.arc=1;31:*.arj=1;31:*.asf=1;35:*.au=36:*.avi=1;35:*.bmp=1;35:*.bz=1;31:*.bz2=1;31:*.cab=1;31:*.cgm=1;35:*.cpio=1;31:*.deb=1;31:*.dl=1;35:*.dwm=1;31:*.dz=1;31:*.ear=1;31:*.emf=1;35:*.esd=1;31:*.flac=36:*.flc=1;35:*.fli=1;35:*.flv=1;35:*.gif=1;35:*.gl=1;35:*.gz=1;31:*.jar=1;31:*.jpeg=1;35:*.jpg=1;35:*.lha=1;31:*.lrz=1;31:*.lz=1;31:*.lz4=1;31:*.lzh=1;31:*.lzma=1;31:*.lzo=1;31:*.m2v=1;35:*.m4a=36:*.m4v=1;35:*.mid=36:*.midi=36:*.mjpeg=1;35:*.mjpg=1;35:*.mka=36:*.mkv=1;35:*.mng=1;35:*.mov=1;35:*.mp3=36:*.mp4=1;35:*.mp4v=1;35:*.mpc=36:*.mpeg=1;35:*.mpg=1;35:*.nuv=1;35:*.oga=36:*.ogg=36:*.ogm=1;35:*.ogv=1;35:*.ogx=1;35:*.opus=36:*.pbm=1;35:*.pcx=1;35:*.pgm=1;35:*.png=1;35:*.ppm=1;35:*.qt=1;35:*.ra=36:*.rar=1;31:*.rm=1;35:*.rmvb=1;35:*.rpm=1;31:*.rz=1;31:*.sar=1;31:*.spx=36:*.svg=1;35:*.svgz=1;35:*.swm=1;31:*.t7z=1;31:*.tar=1;31:*.taz=1;31:*.tbz=1;31:*.tbz2=1;31:*.tga=1;35:*.tgz=1;31:*.tif=1;35:*.tiff=1;35:*.tlz=1;31:*.txz=1;31:*.tz=1;31:*.tzo=1;31:*.tzst=1;31:*.vob=1;35:*.war=1;31:*.wav=36:*.webm=1;35:*.webp=1;35:*.wim=1;31:*.wmv=1;35:*.xbm=1;35:*.xcf=1;35:*.xpm=1;35:*.xspf=36:*.xwd=1;35:*.xz=1;31:*.yuv=1;35:*.z=1;31:*.zip=1;31:*.zoo=1;31:*.zst=1;31:bd=40;1;33:ca=30;41:cd=40;1;33:di=1;34:do=1;35:ex=1;32:ln=1;36:mh=0:mi=0:or=40;1;31:ow=34;42:pi=40;33:rs=0:sg=30;43:so=1;35:st=37;44:su=37;41:tw=30;42",
"MAIL": "/var/spool/mail/erick",
"MOTD_SHOWN": "pam",
"MOZ_ENABLE_WAYLAND": "1",
"MULTILINE_PROMPT": "`*\u00b7.\u00b7*`",
"MYVIMRC": "/home/erick/.dotfiles/nvim/init.lua",
"NVIM_LISTEN_ADDRESS": "/tmp/nvim2oQhBZ/0",
"NVIM_LOG_FILE": "/home/erick/.cache/nvim/log",
"NVM_BIN": "/home/erick/.nvm/versions/node/v14.18.2/bin",
"NVM_CD_FLAGS": "",
"NVM_DIR": "/home/erick/.nvm",
"NVM_INC": "/home/erick/.nvm/versions/node/v14.18.2/include/node",
"PATH": "/home/erick/.nvm/versions/node/v14.18.2/bin:/usr/local/bin:/usr/bin:/home/erick/.cargo/env:/home/erick/Android/Sdk/platform-tools:/opt/blender:/opt/node14/bin:/opt/ngrok:/opt/vagrant:/usr/local/go/bin:/home/erick/go/bin:/home/erick/.yarn/bin:/home/erick/.config/composer/vendor/bin:/home/erick/.npm-global/bin:/home/erick/.local/bin:/home/erick/Tools/azuredatastudio:/home/erick/.cargo/env:/home/erick/Android/Sdk/platform-tools:/opt/blender:/opt/node14/bin:/opt/ngrok:/opt/vagrant:/usr/local/go/bin:/home/erick/go/bin:/home/erick/.yarn/bin:/home/erick/.config/composer/vendor/bin:/home/erick/.npm-global/bin:/home/erick/.local/bin:/home/erick/Tools/azuredatastudio",
"PROMPT": "{env_name:{} }{YELLOW}{cwd_base}{branch_color}{curr_branch: [{}]} {RED}\uf490 ",
"PWD": "/home/erick/TG/proyectos/wortix-dashboard/app",
"QT_AUTO_SCREEN_SCALE_FACTOR": "1",
"QT_IM_MODULE": "ibus",
"QT_QPA_PLATFORMTHEME": "qt5ct",
"RIGHT_PROMPT": "",
"SESSION_MANAGER": "local/stixcode:@/tmp/.ICE-unix/1237,unix/stixcode:/tmp/.ICE-unix/1237",
"SHELL": "/usr/bin/xonsh",
"SHELL_TYPE": "prompt_toolkit",
"SHLVL": "2",
"SSH_AUTH_SOCK": "/run/user/1000/keyring/ssh",
"SYSTEMD_EXEC_PID": "1252",
"TERM": "xterm-color",
"THREAD_SUBPROCS": "1",
"TITLE": "Terminal",
"USER": "erick",
"USERNAME": "erick",
"VIMRUNTIME": "/usr/share/nvim/runtime",
"VK_ICD_FILENAMES": "/usr/share/vulkan/icd.d/radeon_icd.i686.json:/usr/share/vulkan/icd.d/radeon_icd.x86_64.js",
"VTE_VERSION": "6800",
"WAYLAND_DISPLAY": "wayland-0",
"XAUTHORITY": "/run/user/1000/.mutter-Xwaylandauth.3DHVK1",
"XDG_CURRENT_DESKTOP": "GNOME",
"XDG_DATA_DIRS": "/home/erick/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share/:/usr/share/",
"XDG_MENU_PREFIX": "gnome-",
"XDG_RUNTIME_DIR": "/run/user/1000",
"XDG_SESSION_CLASS": "user",
"XDG_SESSION_DESKTOP": "gnome",
"XDG_SESSION_TYPE": "wayland",
"XMODIFIERS": "@im=ibus",
"XONSHRC": "/etc/xonshrc:/home/erick/.config/xonsh/rc.xsh:/home/erick/.xonshrc",
"XONSHRC_DIR": "/etc/xonsh/rc.d:/home/erick/.config/xonsh/rc.d",
"XONSH_CONFIG_DIR": "/home/erick/.config/xonsh",
"XONSH_DATA_DIR": "/home/erick/.local/share/xonsh",
"XONSH_HISTORY_FILE": "None",
"XONSH_INTERACTIVE": "1",
"XONSH_LOGIN": "1",
"XONSH_SHOW_TRACEBACK": "1",
"XONSH_TRACEBACK_LOGFILE": "/home/erick/.xonsh/errors.log",
"XONSH_VERSION": "0.11.0",
"_": "/usr/bin/env"
},
"locked": true,
"sessionid": "33b596a1-3d26-4910-9379-7a30b126f7f8",
"ts": [1650461292.774055, null]
}
}
```
</details>
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2022-04-21T17:52:26Z | 2022-07-18T18:45:09Z | https://github.com/xonsh/xonsh/issues/4773 | [
"good first issue",
"history",
"history-json"
] | ericktucto | 2 |
BeanieODM/beanie | pydantic | 851 | [BUG] (docs) migrations *have* to specify a `-db` argument, even if it's included in the db string? | The docs at https://beanie-odm.dev/tutorial/migrations/ show a few different invocations of the `migrate` command. In some, the `db` name is included only in the URL string; in others it's passed explicitly to the `-db` argument. After a bunch of debugging, it seems that passing it only in the URI string doesn't work; the docs should reflect this fact. | closed | 2024-02-05T11:27:13Z | 2024-08-27T07:24:55Z | https://github.com/BeanieODM/beanie/issues/851 | [
"bug"
] | ldorigo | 1 |
deepspeedai/DeepSpeed | deep-learning | 7,129 | [REQUEST] Is there any plan to support deepseek v3's MOE structure | I like training with transformers + deepspeed very much. After reading the DeepSpeed MoE material, I want to see whether deepseek v3 can be supported through deepspeed. It seems there is still a long way to go. Is there a plan? | open | 2025-03-11T03:36:42Z | 2025-03-11T16:25:34Z | https://github.com/deepspeedai/DeepSpeed/issues/7129 | [
"enhancement"
] | glowwormX | 0 |
ploomber/ploomber | jupyter | 217 | Support for env when using DAGSpec.auto_load | closed | 2020-08-10T19:09:19Z | 2020-10-03T19:57:36Z | https://github.com/ploomber/ploomber/issues/217 | [] | edublancas | 0 | |
xonsh/xonsh | data-science | 5,008 | Escape character isn't sent correctly to the execution system | # «`\;`» is impossible to send on a command line.
## xonsh version and config.
```bash
$ xonsh -V
xonsh/0.13.3
$ cat ~/.xonshrc
#$XONSH_COLOR_STYLE = 'material'
$XONSH_COLOR_STYLE = 'fruity'
$PROMPT = '{env_name}{BOLD_GREEN}{user}@{hostname}{BOLD_BLUE}: {cwd}{branch_color}{curr_branch: {}}{RESET} {RED}{last_return_code_if_nonzero:[{BOLD_INTENSE_RED}{}{RED}] }\n{RESET}{BOLD_BLUE}{prompt_end}{RESET} '
source-foreign --login True "echo loading xonsh foreign shell"
xontrib load abbrevs autovox fish_completer vox xog
import re
import os
import shutil
$ xonfig
+------------------+---------------------+
| xonsh | 0.13.3 |
| Python | 3.10.8 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.33 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.13.0 |
| on posix | True |
| on linux | True |
| distro | arch |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib 1 | abbrevs |
| xontrib 2 | autovox |
| xontrib 3 | fish_completer |
| xontrib 4 | vox |
| xontrib 5 | voxapi |
| xontrib 6 | xog |
| RC file 1 | /home/andy/.xonshrc |
+------------------+---------------------+
$ cat /etc/os-release
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/"
LOGO=archlinux-logo
$ uname -a
Linux p 6.0.11-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 02 Dec 2022 17:25:31 +0000 x86_64 GNU/Linux
```
## Expected Behavior.
I want to run this command:
```bash
$ find . -name "*~" -exec rm -v {} \;
```
This should remove all `*~` files.
## Current Behavior.
This command doesn't run because `\;` is substituted with `\\;`:
```bash
$ find . -name "*~" -exec echo "{}" \;
find: falta el argumento de `-exec'
$ find . -name "*~" -exec echo "{}" \\;
find: falta el argumento de `-exec'
$ !(find . -name "*~" -exec echo "{}" \;)
CommandPipeline(
stdin=<_io.BytesIO object at 0x7f92bc8d2610>,
stdout=<_io.BytesIO object at 0x7f92bc8d0cc0>,
stderr=<_io.BytesIO object at 0x7f92bc8d1ad0>,
pid=63298,
returncode=1,
args=['find', '.', '-name', '*~', '-exec', 'echo', '{}', '\\;'],
alias=None,
stdin_redirect=['<stdin>', 'r'],
stdout_redirect=[7, 'wb'],
stderr_redirect=[9, 'w'],
timestamps=[1670496486.3960257, 1670496486.41144],
executed_cmd=['find', '.', '-name', '*~', '-exec', 'echo', '{}', '\\;'],
input='',
output='',
errors="find: falta el argumento de `-exec'\n"
)
```
I've tried this too, but it doesn't work either:
```bash
$ echo @(r'\;')
\;
$ !(find . -name "*~" -exec echo "{}" @(r'\;'))
CommandPipeline(
stdin=<_io.BytesIO object at 0x7f92bc9224d0>,
stdout=<_io.BytesIO object at 0x7f92bc921bc0>,
stderr=<_io.BytesIO object at 0x7f92bc920810>,
pid=68016,
returncode=1,
args=['find', '.', '-name', '*~', '-exec', 'echo', '{}', '\\;'],
alias=None,
stdin_redirect=['<stdin>', 'r'],
stdout_redirect=[7, 'wb'],
stderr_redirect=[9, 'w'],
timestamps=[1670496961.2162602, 1670496961.2519572],
executed_cmd=['find', '.', '-name', '*~', '-exec', 'echo', '{}', '\\;'],
input='',
output='',
errors="find: falta el argumento de `-exec'\n"
)
$ echo @(print(r'\;'))
\;
None
$ !(find . -name "*~" -exec echo "{}" @(print(r'\;')))
\;
CommandPipeline(
stdin=<_io.BytesIO object at 0x7f92bc8d3740>,
stdout=<_io.BytesIO object at 0x7f92bc8d35b0>,
stderr=<_io.BytesIO object at 0x7f92bc8d0db0>,
pid=64878,
returncode=1,
args=['find', '.', '-name', '*~', '-exec', 'echo', '{}', 'None'],
alias=None,
stdin_redirect=['<stdin>', 'r'],
stdout_redirect=[7, 'wb'],
stderr_redirect=[9, 'w'],
timestamps=[1670496650.9127395, 1670496650.9378898],
executed_cmd=['find', '.', '-name', '*~', '-exec', 'echo', '{}', 'None'],
input='',
output='',
errors="find: falta el argumento de `-exec'\n"
)
```
Perhaps I'm making a mistake somewhere.
How is it possible to run this command?
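For reference, here is a workaround sketch. This is an assumption about the quoting rules, not a confirmed fix: `find` only needs to receive a literal `;` token, so quoting the semicolon should avoid the backslash handling entirely.

```shell
# Assumption: quoting the -exec terminator as ';' passes a literal
# semicolon through to find (the paths here are just examples).
cd "$(mktemp -d)"
touch keep.txt junk.txt~ other.py~
find . -name "*~" -exec rm -v {} ';'
ls   # only keep.txt remains
```

The same quoting works in POSIX shells, so it should not break muscle memory elsewhere.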
| closed | 2022-12-08T10:54:17Z | 2022-12-08T18:46:22Z | https://github.com/xonsh/xonsh/issues/5008 | [] | orencio | 2 |
biolab/orange3 | numpy | 6,173 | Feature Statistics - open issues |
After PR #6158 there are some open issues left that should be discussed:
- [x] Compute Mode for numeric variables?
- [x] Show Mode in the widget to make it consistent with the new (and improved) output? Currently the mode is squeezed into the Median column, which would otherwise be empty for categorical variables. But numeric variables could have both...
- [ ] Documentation should be updated
- [x] There are some warnings which could cause issues in the future:
```
Orange/statistics/util.py:510: FutureWarning: Unlike other reduction functions (e.g. `skew`, `kurtosis`), the default behavior of `mode` typically preserves the axis it acts along. In SciPy 1.11.0, this behavior will change: the default value of `keepdims` will become False, the `axis` over which the statistic is taken will be eliminated, and the value None will no longer be accepted. Set `keepdims` to True or False to avoid this warning.
res = scipy.stats.stats.mode(x, axis)
```
```
orange-widget-base/orangewidget/gui.py:2068: UserWarning: decorate OWFeatureStatistics.commit with @gui.deferred and then explicitly call commit.now or commit.deferred.
``` | closed | 2022-10-14T13:50:34Z | 2023-02-23T11:00:15Z | https://github.com/biolab/orange3/issues/6173 | [
"wish",
"snack"
] | lanzagar | 2 |
Lightning-AI/pytorch-lightning | data-science | 19,956 | ValueError: range() arg 3 must not be zero - Need to Identify the Root Cause | ### Bug description
I am encountering a `ValueError: range() arg 3 must not be zero` while processing video frames in batches. The relevant code section is provided below.
### What version are you seeing the problem on?
v2.0
### How to reproduce the bug
## Code Snippet

### Definition of the `VideoDataset` Class

```python
class VideoDataset:
    def __init__(self, frame_batch_size=1, ...):
        self.frame_batch_size = frame_batch_size
        print(f"Initialized frame_batch_size: {self.frame_batch_size}")
        # Other initialization code
        # ...

    def __getitem__(self, idx):
        print(f"self.get_j_frames: {self.get_j_frames}")  # Debug output
        print(f"frame_batch_size before call: {self.get_j_frames.frame_batch_size}")  # Debug output

        # Set a default value if frame_batch_size is zero
        if self.get_j_frames.frame_batch_size == 0:
            print("Warning: frame_batch_size is 0, setting to default value 1")
            self.get_j_frames.frame_batch_size = 1

        top_j_sim_video_embeddings_list = self.get_j_frames(df)
        print(f"Video frames for index {idx} fetched")

        video_output_avg = self.video_processor(top_j_sim_video_embeddings_list)
        return video_output_avg
```

### Initialization of `self.get_j_frames`

```python
class GetJFrames:
    def __init__(self, frame_batch_size):
        self.frame_batch_size = frame_batch_size
        print(f"Initialized GetJFrames frame_batch_size: {self.frame_batch_size}")


# Initialization within VideoDataset
class VideoDataset:
    def __init__(self, frame_batch_size=1, ...):
        self.frame_batch_size = frame_batch_size
        print(f"Initialized VideoDataset frame_batch_size: {self.frame_batch_size}")

        # Initialize get_j_frames here
        self.get_j_frames = GetJFrames(frame_batch_size)
        print(f"Initialized self.get_j_frames with frame_batch_size: {self.get_j_frames.frame_batch_size}")

        # Other initialization code
        # ...
```
### Error messages and logs
```
ValueError Traceback (most recent call last)
Cell In[95], line 4
2 print(f"Length of dataset: {len(dataset)}")
3 print("Fetching first item from dataset...")
----> 4 first_item = dataset[0]
5 print("First item fetched:", first_item)
File ~/main/reproduct/choi/video_dataset.py:179, in VideoDataset.__getitem__(self, idx)
177 print(f"self.get_j_frames: {self.get_j_frames}") # デバッグ出力
178 print(f"frame_batch_size before call: {self.get_j_frames.frame_batch_size}") # デバッグ出力
--> 179 top_j_sim_video_embeddings_list = self.get_j_frames(df)
180 print(f"Video frames for index {idx} fetched")
182 video_output_avg = self.video_processor(top_j_sim_video_embeddings_list)
File ~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)
File ~/anaconda3/envs/choi_venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
...
--> 259 for i in range(0, len(frame_paths), frame_batch_size):
260 batch_frame_paths = frame_paths[i:i+frame_batch_size]
261 batch_frames = [load_image(frame_path).unsqueeze(0) for frame_path in batch_frame_paths]
ValueError: range() arg 3 must not be zero
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version: 2.0.8
#- PyTorch Version: 2.3.0
#- Python version: 3.8.18
#- OS: Ubuntu 20.04
#- CUDA/cuDNN version: 11.8
#- How you installed Lightning: `conda`
#- Running environment of LightningApp: remote server
```
</details>
### More info
**Debug Output**
The following debug output shows that frame_batch_size is zero at some point:
```python
frame_batch_size before call: 0
self.frame_batch_size: 0
```
**What I Have Tried**
1. Added debug statements to trace where frame_batch_size becomes zero.
2. Set a default value for frame_batch_size when it is zero to prevent the error, but I want to identify the root cause.
**Questions**
1. What could be causing frame_batch_size to be zero at initialization or at some point in the code execution?
2. What are the best practices to prevent such issues where default values are overridden unexpectedly?
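To make question 1 concrete, here is a minimal, self-contained sketch of the failure mode with a guard (the `batch_paths` helper and the values are hypothetical, not the project's real code): a zero step makes `range()` raise before any batching runs, so validating `frame_batch_size` up front turns the cryptic error into a clear one.

```python
def batch_paths(frame_paths, frame_batch_size):
    # Guard: range(0, n, 0) raises "range() arg 3 must not be zero",
    # so reject a non-positive batch size up front with a clear message.
    if frame_batch_size <= 0:
        raise ValueError(f"frame_batch_size must be positive, got {frame_batch_size}")
    return [frame_paths[i:i + frame_batch_size]
            for i in range(0, len(frame_paths), frame_batch_size)]

print(batch_paths(["f0", "f1", "f2"], 2))  # [['f0', 'f1'], ['f2']]
```

With a guard like this in place, the zero value surfaces with a readable message at the call site, which should help trace where `frame_batch_size` is being overwritten.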
## All Relevant Code
All relevant code can be downloaded from the following link:
https://note.com/rafo/n/n979dc84fdf14
The issue likely resides within the `video_dataset.py` file.
Any help or guidance to identify the root cause and fix this issue would be greatly appreciated. Thank you! | closed | 2024-06-07T13:18:25Z | 2024-06-17T07:07:33Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19956 | [
"bug",
"needs triage",
"ver: 2.0.x"
] | YuyaWake | 1 |
SALib/SALib | numpy | 181 | Interpretation of S_conf | Hello everyone,
I have the following question about the interpretation of S_conf.
I ran a Sobol sensitivity analysis with three variables. The results are good; it looks like I have almost no interactions.
The S_i value and the S_i_conf value are okay, as are the S_T and S_T_conf values, but there is only a small difference between them. All the effects of second order (x1x2, ...) and third order (x1x2x3) are statistically insignificant (confidence interval overlapping zero).
My question is whether this automatically means that the difference between S_i and S_T is insignificant as well.
My approach was to check it with the difference: when S_T minus S_i is bigger than S_i_conf plus S_T_conf, the difference between them is significant; otherwise it's not.
What do you think?
Thank you very much in advance for your answer.
Kind regards
Stefan Gemperle
| closed | 2017-12-22T13:53:16Z | 2019-11-07T22:51:39Z | https://github.com/SALib/SALib/issues/181 | [
"question_interpretation"
] | GemsHSLU | 6 |
tfranzel/drf-spectacular | rest-api | 763 | authorization type changes using drf spectacular library | Hi Team,
Using the drf-spectacular library we are generating a schema file in which the default authorization appears as:
tokenAuth:
type: http
scheme: bearer
bearerFormat: Token
but we want the authorization type to be 'API Key'. Is there any way to change the authorization type with the drf-spectacular library, so that once the schema file is created, 'API Key' appears as the authorization type instead of the bearer token? | closed | 2022-07-04T04:10:41Z | 2022-07-14T07:40:22Z | https://github.com/tfranzel/drf-spectacular/issues/763 | [] | shubhambajad | 4 |
QingdaoU/OnlineJudge | django | 338 | FPS import fails! | Before submitting an issue, please
- read the documentation carefully: http://docs.onlinejudge.me/#/
- search and review past issues
- do not disclose security issues on GitHub; email `admin@qduoj.com` instead, and a red packet will be sent as thanks depending on the severity of the vulnerability.
Then, when submitting an issue, please clearly state:
- what operation you were performing when the problem occurred, ideally with reproduction steps
- what the error message is; if you cannot see one, check the corresponding log file in the data folder. Please wrap long error messages in code blocks.
- what you tried in order to fix the problem
- for page issues, state the browser version and include screenshots if possible
Environment: Ubuntu 18.04
When uploading an XML file via the FPS upload feature, clicking upload results in a "server error" message.
XML file: https://github.com/zhblue/freeproblemset/blob/master/fps-examples/fps-loj-small-pics.zip
gunicorn.log:
`[2020-11-15 15:45:54] - [ERROR] - [root:156] - Invalid xml, error 'test_input' tag order
Traceback (most recent call last):
File "/app/utils/api/api.py", line 149, in dispatch
return super(APIView, self).dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/views/generic/base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "/app/problem/views/admin.py", line 683, in post
problems = FPSParser(tf.name).parse()
File "/app/fps/parser.py", line 32, in parse
ret.append(self._parse_one_problem(node))
File "/app/fps/parser.py", line 97, in _parse_one_problem
raise ValueError("Invalid xml, error 'test_input' tag order")
ValueError: Invalid xml, error 'test_input' tag order
[2020-11-15 15:45:59] - [ERROR] - [sentry.errors:684] - Sentry responded with an API error: RateLimited(None)
b'Sentry responded with an API error: RateLimited(None)'
[2020-11-15 15:46:00] - [ERROR] - [sentry.errors.uncaught:712] - ["Invalid xml, error 'test_input' tag order", ' File "utils/api/api.py", line 149, in dispatch', ' File "django/views/generic/base.py", line 88, in dispatch', ' File "problem/views/admin.py", line 683, in post', ' File "fps/parser.py", line 32, in parse', ' File "fps/parser.py", line 97, in _parse_one_problem']
`
Also, when migrating from an old version, if a problem uses special judge (and has no sample test cases), the program simply stops! | open | 2020-11-15T15:52:46Z | 2022-12-09T06:29:06Z | https://github.com/QingdaoU/OnlineJudge/issues/338 | [] | FishZe | 4 |
deepfakes/faceswap | machine-learning | 794 | GPU Not working | ```shell
Loading...
07/15/2019 04:46:55 INFO Log level set to: INFO
07/15/2019 04:46:56 INFO Output Directory: C:\Users\LPDR\Desktop\cysd
07/15/2019 04:46:56 INFO Input Video: C:\Users\LPDR\Desktop\cysd-06.mp4
07/15/2019 04:46:56 INFO Loading Detect from Mtcnn plugin...
07/15/2019 04:46:56 INFO Loading Align from Fan plugin...
07/15/2019 04:46:57 INFO Starting, this may take a while...
07/15/2019 04:46:59 INFO Initializing Face Alignment Network...
07/15/2019 04:47:00 WARNING Using CPU
07/15/2019 04:47:06 INFO Initialized Face Alignment Network.
07/15/2019 04:47:08 INFO Initializing MTCNN Detector...
07/15/2019 04:47:08 WARNING From c:\program files\python3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.
07/15/2019 04:47:08 WARNING From C:\Users\LPDR\faceswap\plugins\extract\detect\mtcnn.py:423: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nDeprecated in favor of operator or tf.math.divide.
07/15/2019 04:47:09 WARNING Using CPU
```
`nvcc -V` :
```shell
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:32_Central_Daylight_Time_2017
Cuda compilation tools, release 9.0, V9.0.176
```
tensorflow@1.13.1
And I installed it using `faceswap_setup_x64_v0.99.1.exe` with the NVIDIA GPU option.
| closed | 2019-07-14T20:55:51Z | 2019-07-16T18:23:28Z | https://github.com/deepfakes/faceswap/issues/794 | [] | HyperSimon | 11 |
StratoDem/sd-material-ui | dash | 53 | Rename components to remove "SD" prefix | ### Description
Components should be renamed to drop the "SD" prefix, e.g., the `SDDropdown` should be renamed to just `Dropdown`.
This will require importing the base `material-ui` components with aliases, like
```js
import MuiDropdown from 'material-ui/Dropdown'; // vs import Dropdown from 'material-ui/Dropdown';
...
class Dropdown extends React.Component {
render() {
    return <MuiDropdown ... />
...
``` | closed | 2018-01-26T15:13:20Z | 2018-01-26T19:11:14Z | https://github.com/StratoDem/sd-material-ui/issues/53 | [
"Priority: Medium",
"Tech: Architecture",
"Type: Maintenance",
"v2 release"
] | mjclawar | 1 |
matplotlib/matplotlib | data-visualization | 28,907 | [Bug]: completely freezes | ### Bug summary
When using PyCharm (regardless of the version), in debug mode or when starting matplotlib.pyplot.plot in the Python console, the process completely freezes, and I can only force it to end.
### Code for reproduction
```Python
import matplotlib
matplotlib.use('tkagg')
import matplotlib.pyplot as plt
import numpy as np
plt.imshow(np.zeros((10, 10)))
plt.show()
```
### Actual outcome
any version of pycharm
### Expected outcome
nothing
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.9.*
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
pip | closed | 2024-09-29T14:50:21Z | 2024-10-30T01:16:24Z | https://github.com/matplotlib/matplotlib/issues/28907 | [
"status: needs clarification"
] | name-used | 17 |
streamlit/streamlit | streamlit | 10,383 | Make st.toast appear/bring it to the front (stack order) when used in st.dialog | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Not sure whether to file this as a feature request or a bug, but it seems that when using st.toast inside st.dialog, the toast is rendered behind the dialog.
### Reproducible Code Example
```Python
import streamlit as st
@st.dialog(title="Streamlit Toast Notification")
def toast_notification():
activate_toast = st.button(label="send toast")
if activate_toast:
st.toast("Hi, I am in the background!")
toast_notification()
```
### Steps To Reproduce
1. Create dialog
2. Click button to show toast
### Expected Behavior
st.toast should be stacked at the front of the dialog.
### Current Behavior
Stacks behind st.dialog.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version: 3.10
- Operating System: Windows
- Browser: Chrome
### Additional Information
_No response_ | open | 2025-02-12T20:19:16Z | 2025-02-13T12:10:54Z | https://github.com/streamlit/streamlit/issues/10383 | [
"type:enhancement",
"feature:st.toast",
"feature:st.dialog"
] | Socvest | 4 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,475 | [Feature Request]: Integrate --sd-webui-ar-plusplus | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Note that I also [requested this over at Forge](https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/586)
### One of the primary elements in image generation is resolution.
**Image models are trained on various sized images, but "bucketing" has them trained on 64px increments, which is why ideal image generation resolutions are also in 64px increments.**
Most people do not understand this, and do not connect popular aspect ratios to values which are rounded to that precision.
Last week, I found myself wanting an extension that provides ideal image resolutions and found that **there are currently none**.
---
**My findings**:
<details>
<summary>The Aspect Ratio / Resolution extensions in the current list of Available Extensions</summary>
https://github.com/xhoxye/sd-webui-ar_xhox
- Makes an enormous panel of user configured static resolutions
https://github.com/bit9labs/sd-ratio-lock
- Lets you select an aspect ratio from a dropdown list and locks the sliders to it.
- Rounds to increment of 2px
https://github.com/thomasasfk/sd-webui-aspect-ratio-helper
- Lets you select an aspect ratio from a dropdown list and locks the sliders to it.
- Rounds to increment of 1px
- Has an extremely verbose configuration
https://github.com/alemelis/sd-webui-ar
- Adds a few aspect ratio buttons and static resolution buttons.
- Built in 4px rounding precision.
- Does not have any toggles.
https://github.com/LEv145/--sd-webui-ar-plus
- Same as above, but improves upon it with:
- Toggle to switch from Width or Height being updated.
- A verbose calculator tool that is not particularly helpful
</details>
---
**In my opinion**: [The fork by LEv145](https://github.com/LEv145/--sd-webui-ar-plus) is the best of these, but none of these are satisfactory.
I made [an issue](https://github.com/LEv145/--sd-webui-ar-plus/issues/21) in regards to their calculation method and rounding precision, and they were not bothered to do anything about it.
I decided I would fork that project and try to resolve the issues I perceived.
As I was nearing completion, I thought the features of this extension could be useful for everyone. I'm suggesting that you take a look and consider integrating this.
If you are interested, but do not like one feature or another, please let me know.
---
**Default view**
<img width="944" alt="1" src="https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/1613484/d33d7e87-8fc6-49a6-a21c-66890dbff484">
---
**With Information panel toggled open**
<img width="937" alt="2" src="https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/1613484/ca0f01fe-056f-45e3-8a30-d15918a32f19">
### Proposed workflow
## https://github.com/altoiddealer/--sd-webui-ar-plusplus
### Additional information
_No response_ | open | 2024-04-10T00:26:43Z | 2024-06-24T02:05:03Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15475 | [
"enhancement"
] | altoiddealer | 4 |
sanic-org/sanic | asyncio | 2,996 | Add Python 3.13 to CI Tests | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
Can we be proactive and make sure that Sanic will be compatible with Python 3.13 when it's released in early October?
MagicStack just released v0.21.0beta1 which has support for Python 3.13
### Additional context
_No response_ | closed | 2024-09-04T23:57:14Z | 2024-12-31T13:37:39Z | https://github.com/sanic-org/sanic/issues/2996 | [
"feature request"
] | robd003 | 4 |
widgetti/solara | jupyter | 51 | Solara App running on VM is not reachable | Hi guys,
thanks for this great lib. I've already used it for some time in Jupyter notebooks and for running apps on my local machine. However, I am not able to access apps running on my VM (Oracle Cloud). All I see is a "solara" spinner and the message "Loading app". I think it's a problem with my firewall configuration. Which ports need to be open for running apps on a virtual machine?
Thanks,
legout | closed | 2023-03-24T19:10:40Z | 2023-03-27T20:49:00Z | https://github.com/widgetti/solara/issues/51 | [] | legout | 3 |
matplotlib/mplfinance | matplotlib | 337 | Is it possible to apply mpf styles to other matplotlib plots? | I would like to use the themes for other matplotlib plots. | closed | 2021-02-22T21:52:58Z | 2021-02-23T15:14:08Z | https://github.com/matplotlib/mplfinance/issues/337 | [
"question"
] | BlackArbsCEO | 2 |
amisadmin/fastapi-amis-admin | sqlalchemy | 144 | 307 Temporary Redirect loop | Installed with the default code; after logging in as root successfully, the page enters an infinite 307 Temporary Redirect loop. Logs below:
```
2023-11-17 08:57:37,025 INFO sqlalchemy.engine.Engine [cached since 408.3s ago] (2,)
INFO: 15.204.16.79:9173 - "GET /auth/form/login?redirect=/admin/ HTTP/1.1" 307 Temporary Redirect
2023-11-17 08:57:37,037 INFO sqlalchemy.engine.Engine COMMIT
2023-11-17 08:57:37,507 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-11-17 08:57:37,507 INFO sqlalchemy.engine.Engine SELECT auth_token.token, auth_token.create_time, auth_token.id, auth_token.data
FROM auth_token
WHERE auth_token.token = %s
2023-11-17 08:57:37,507 INFO sqlalchemy.engine.Engine [cached since 409.4s ago] ('_JDgGlV_etBtYw77BC5OhaoBaGlRYEXXe7FOIqCWdFM',)
INFO: 15.204.16.79:9173 - "GET / HTTP/1.1" 307 Temporary Redirect
2023-11-17 08:57:37,513 INFO sqlalchemy.engine.Engine COMMIT
2023-11-17 08:57:37,948 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-11-17 08:57:37,948 INFO sqlalchemy.engine.Engine SELECT auth_token.token, auth_token.create_time, auth_token.id, auth_token.data
FROM auth_token
WHERE auth_token.token = %s
2023-11-17 08:57:37,949 INFO sqlalchemy.engine.Engine [cached since 409.9s ago] ('_JDgGlV_etBtYw77BC5OhaoBaGlRYEXXe7FOIqCWdFM',)
2023-11-17 08:57:37,955 INFO sqlalchemy.engine.Engine SELECT auth_user.email AS auth_user_email, auth_user.password AS auth_user_password, auth_user.username AS auth_user_username, auth_user.delete_time AS auth_user_delete_time, auth_user.update_time AS auth_user_update_time, auth_user.create_time AS auth_user_create_time, auth_user.id AS auth_user_id, auth_user.is_active AS auth_user_is_active, auth_user.nickname AS auth_user_nickname, auth_user.avatar AS auth_user_avatar
FROM auth_user
WHERE auth_user.id = %s
2023-11-17 08:57:37,955 INFO sqlalchemy.engine.Engine [cached since 409.3s ago] (2,)
INFO: 15.204.16.79:9173 - "GET /auth/form/login?redirect=/admin/ HTTP/1.1" 307 Temporary Redirect
``` | open | 2023-11-17T09:25:11Z | 2024-11-28T07:59:00Z | https://github.com/amisadmin/fastapi-amis-admin/issues/144 | [] | lifengmds | 6 |
jmcnamara/XlsxWriter | pandas | 749 | Failure when FIPS mode is enabled in the kernel due to MD5 being restricted | I am attempting to use XlsxWriter to generate Excel files on a Red Hat Enterprise Linux 7.8 machine with [FIPS mode](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-federal_standards_and_regulations-federal_information_processing_standard) enabled. XlsxWriter fails because access to MD5 is restricted when this mode is enabled. I see XlsxWriter is using MD5 to determine if two images are the same to avoid storing it multiple times within the file.
I was able to work around it by modifying workbook.py line 1224 as follows:
`md5 = hashlib.md5(data, usedforsecurity=False).hexdigest()`
However I do not think this is a portable solution as best I can tell the usedforsecurity keyword parameter is a Red Hat-specific change. Instead I think the best option is to replace MD5 with another algorithm, I propose SHA256 which has been available since hashlib was introduced in Python 2.5. I may be able to fork and attempt this change in the next few days if you think this is an acceptable solution.
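A minimal sketch of what the proposed change relies on — this is not XlsxWriter's actual code, just the stdlib call that would replace the MD5 one:

```python
import hashlib

# Illustrative only: hash image bytes with SHA-256 instead of MD5.
# SHA-256 is permitted under FIPS mode, so no usedforsecurity flag is needed.
data = b"example image bytes"
digest = hashlib.sha256(data).hexdigest()
```

The resulting hex digest can be compared between images exactly as the MD5 digest is today, since it is only used for de-duplication, not security.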
I am using Python version 3.6.8 and XlsxWriter 1.2.8 on Red Hat Enterprise Linux 7.8.
Here is some code that demonstrates the problem:
```python
import xlsxwriter
workbook = xlsxwriter.Workbook('images.xlsx')
worksheet = workbook.add_worksheet()
worksheet.insert_image('B2', 'python.png')
workbook.close()
```
Here is what I get when I run the script (without modifying XlsxWriter).
```
$ python3 test2.py
Traceback (most recent call last):
File "test2.py", line 15, in <module>
workbook.close()
File "/home/user/app/env/lib64/python3.6/site-packages/xlsxwriter/workbook.py", line 316, in close
self._store_workbook()
File "/home/user/app/env/lib64/python3.6/site-packages/xlsxwriter/workbook.py", line 667, in _store_workbook
self._prepare_drawings()
File "/home/user/app/env/lib64/python3.6/site-packages/xlsxwriter/workbook.py", line 1128, in _prepare_drawings
self._get_image_properties(filename, image_data)
File "/home/user/app/env/lib64/python3.6/site-packages/xlsxwriter/workbook.py", line 1224, in _get_image_properties
md5 = hashlib.md5(data).hexdigest()
ValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips
``` | closed | 2020-09-17T23:58:12Z | 2020-09-18T22:37:21Z | https://github.com/jmcnamara/XlsxWriter/issues/749 | [
"feature request",
"under investigation"
] | quanterium | 5 |
python-visualization/folium | data-visualization | 1,395 | plugins.MarkerCluster - markers are not showned when tiles=None | **Describe the bug**
When using `MarkerCluster` plugin to add marker to a map, markers are not showned if I add `tiles=None` parameter to `folium.Map()`
**To Reproduce**
```python
import folium
from folium import plugins
coords = [[45.01, 3.05]]
texts = ["hello"]
m = folium.Map([45, 3], tiles=None)
plugins.MarkerCluster(coords, popups=texts).add_to(m)
m
```
**Expected behavior**
I expected to actualy see markers. I actually see markers when I remove `tiles=None`
**Environment (please complete the following information):**
- Browser: firefox
- Jupyter Notebook or html files? => Jupyter Notebook
- Python version (check it with `import sys; print(sys.version_info)`) => sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0)
- folium version (check it with `import folium; print(folium.__version__)`) => 0.11.0
- branca version (check it with `import branca; print(branca.__version__)`) => 0.4.1
**Possible solutions**
- Removing `tiles=None` from map init, but I didn't need map tiles in that case
- Do not use plugins.MarkerCluster plugin but use folium.Marker directly
folium is maintained by volunteers. Can you help making a fix for this issue? => No, I'm a beginner user
| closed | 2020-10-15T11:18:05Z | 2022-11-22T15:42:04Z | https://github.com/python-visualization/folium/issues/1395 | [] | brunetton | 3 |
xlwings/xlwings | automation | 2,005 | Sporadic 'NoneType' object has no attribute error | #### OS (e.g. Windows 10 or macOS Sierra)
Windows 10
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings-0.27.12
Python-3.8
Excel-Office365
#### Describe your issue (incl. Traceback!)
Thank you for the great work on this package, it is awesome!
I have a process that runs with xlwings.
Each morning a batch script starts up and kills running Excel instances.
Python starts an xlwings Excel instance and opens a workbook (lines 23-24)
The script waits for several hours and then begins to iteratively insert data into the Excel workbook, wait for formulas to calculate, and save the workbook before reading the sheet's values with pandas (lines 35-77)
This has been working, except that from time to time the script fails in the section that writes the data into Excel (lines 48-53)
The error thrown is either `'NoneType' object has no attribute 'Range'` or `'NoneType' object has no attribute 'Worksheet'`.
I have attempted to open the workbook again when this error is thrown (line 76).
Do you know what causes this error and if this is the best way to handle such issues?
After I run `wbxl = xw.books.open(rf"{filepath + filename}")`, do the other xlwings Excels still exist at this point or are still running as zombies?
Is there a way to make sure the excel is successfully recycled to continue the process without duplicate workbooks being opened by xlwings?
Also worth noting, the Excel workbook sits on **Google Drive**. Not sure if that causes some issue? Or is not recommended for repeated inserting/saving/reading?
Additionally, the last line that calls `wb.close()` throws this error. Is this last line not recommended/needed since `xl.App` is called via context manager?
Thank you again! Looking forward to getting this to work for my use case.
```
wbxl.close()
ERROR:root:
ERROR:root: File "C:\Python\lib\site-packages\xlwings\main.py", line 708, in __exit__
ERROR:root:
ERROR:root:self.quit()
ERROR:root:
ERROR:root: File "C:\Python\lib\site-packages\xlwings\main.py", line 373, in quit
ERROR:root:
ERROR:root:return self.impl.quit()
ERROR:root:
ERROR:root: File "C:\Python\lib\site-packages\xlwings\_xlwindows.py", line 586, in quit
ERROR:root:
ERROR:root:self.xl.DisplayAlerts = False
ERROR:root:
ERROR:root: File "C:\Python\lib\site-packages\xlwings\_xlwindows.py", line 143, in __setattr__
ERROR:root:
ERROR:root:return setattr(self._inner, key, value)
ERROR:root:
ERROR:root: File "C:\Python\lib\site-packages\win32com\client\__init__.py", line 595, in __setattr__
ERROR:root:
ERROR:root:self._oleobj_.Invoke(*(args + (value,) + defArgs))
ERROR:root:
ERROR:root:pywintypes
ERROR:root:.
ERROR:root:com_error
ERROR:root::
ERROR:root:(-2146777998, 'OLE error 0x800ac472', None, None)
```
```python
# Your traceback here
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
from datetime import datetime
import os
import sys
import time
import numpy as np
import pandas as pd
import xlwings as xw
print(f"XLWINGS VERSION: {xw.__version__}")
filepath = 'C:/Users/User/My Drive/Contents/'
filename = "WB1.xlsx"
df_all_tracking_exp = pd.DataFrame({
""
})
n = 40
splits = int(np.floor(len(df_all_tracking_exp.index) / n))
chunks = np.split(df_all_tracking_exp.iloc[:splits * n], splits)
chunks.append(df_all_tracking_exp.iloc[splits * n:])
with xw.App(visible=False) as app:
wbxl = xw.books.open(rf"{filepath + filename}")
while datetime.now().hour < 7:
print(f"Time is: {datetime.now()}......sleeping")
time.sleep(600)
print(f"Time is: {datetime.now()}...getting ready")
time.sleep(1800)
print("Starting loop...")
loop_n = 0
post_close_run = 0
while datetime.now().hour <= 18 \
and post_close_run == 0:
print(f"Loop: {loop_n}")
print(f"...Hour is {datetime.now().hour}....running")
if datetime.now().hour >= 18:
post_close_run = 1
df_list = []
start_time = datetime.now()
err_cnt = 0
for idx, chunk in enumerate(chunks):
try:
print(f"Pulling chunk {idx} of {len(chunks)}")
(
wbxl.sheets['Sheet1']
.range("A3")
.options(chunksize=1_000, index=False, header=False)
.value
) = chunk['Components_list']
time.sleep(7)
wbxl.save()
df = pd.read_excel(filepath + filename, sheet_name="Sheet1", header=1)
# if not logged in
while (df['Data'] == 0).all():
print("...waiting")
# input(login_script_prompt)
time.sleep(30)
wbxl.save()
df = pd.read_excel(filepath + filename, sheet_name="Sheet1", header=1)
df_list.append(df)
except Exception as e:
print(str(e))
err_cnt += 1
print("Error count:", err_cnt)
if err_cnt >= 10:
print("Reached error count on workbooks...exiting")
sys.exit
print("Re-opening workbook")
wbxl = xw.books.open(rf"{filepath + filename}")
continue
wbxl.save()
wbxl.close()
``` | closed | 2022-09-01T15:59:05Z | 2022-09-16T00:01:09Z | https://github.com/xlwings/xlwings/issues/2005 | [] | CollierKing | 4 |
viewflow/viewflow | django | 476 | Transition comparison error in `fsm/chart.py` | Hi,
I am working on a codebase that uses **FSM** in viewflow 2.9. I am now trying to add admin support for some manual transitions. It works fine on an individual object view, however it throws the following error on the list view.
`'<' not supported between instances of 'Transition' and 'Transition'`
The error seems to be caused by `sorted(edges)` at line 82
https://github.com/viewflow/viewflow/blob/b33bd37ec9675e84e0e18a296627c1ec153491f6/viewflow/fsm/chart.py#L79-L83
I don't know the code base much, but, at a glance, the [`Transition`](https://github.com/viewflow/viewflow/blob/b33bd37ec9675e84e0e18a296627c1ec153491f6/viewflow/fsm/base.py#L30-L58) class doesn't seem to support sorting. Is this a bug or am I missing something else?
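As a possible workaround (a sketch, not viewflow's actual fix), the sort could use a key that looks only at the source/target states and never compares the Transition objects themselves; the stand-in class below is hypothetical:

```python
# Hypothetical stand-in for viewflow's Transition, which defines no
# ordering methods, so "<" between two instances raises TypeError.
class Transition:
    def __init__(self, label):
        self.label = label

edges = [
    ("review", "approved", Transition("approve")),
    ("draft", "review", Transition("submit")),
]

# Sorting by the (source, target) pair only means sorted() never has to
# fall back to comparing the Transition objects in position 2.
ordered = sorted(edges, key=lambda edge: (edge[0], edge[1]))
```

With plain `sorted(edges)`, two edges sharing the same source and target would force a `Transition < Transition` comparison, which is presumably what the list view hits.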
Here is the traceback:
```
Request Method: GET
Request URL: http://localhost:8000/admin/reviews/review/
Django Version: 4.2
Python Version: 3.12.7
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.admin',
'django.forms',
# ...
'viewflow',
# ...
]
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware',
'viewflow.middleware.SiteMiddleware',
'viewflow.middleware.HotwireTurboMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/contrib/admin/options.py", line 688, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/utils/decorators.py", line 134, in _wrapper_view
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/views/decorators/cache.py", line 62, in _wrapper_view_func
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/django/contrib/admin/sites.py", line 242, in inner
return view(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/viewflow/fsm/admin.py", line 118, in changelist_view
flow_chart = fsm.chart(state)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/viewflow/fsm/chart.py", line 82, in chart
for source, target, transition in sorted(edges)
^^^^^^^^^^^^^
Exception Type: TypeError at /admin/reviews/review/
Exception Value: '<' not supported between instances of 'Transition' and 'Transition'
```
Thanks | open | 2025-01-18T02:01:23Z | 2025-02-25T08:38:56Z | https://github.com/viewflow/viewflow/issues/476 | [
"request/bug",
"dev/flow"
] | 100cube | 1 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 680 | More restrictive version lock of SQLAlchemy in setup.py | I just installed Flask-SQLAlchemy 2.1 today (part of an older code base) but it pulled down SQLAlchemy 1.3.0 which is brand new. SQLAlchemy 1.3.0 has a number of backwards incompatible changes with 1.2 and previous releases which is wreaking havoc in a code base.
A big one is:
_[sql] [bug] Fully removed the behavior of strings passed directly as components of a select() or Query object being coerced to text() constructs automatically; the warning that has been emitted is now an ArgumentError or in the case of order_by() / group_by() a CompileError. This has emitted a warning since version 1.0 however its presence continues to create concerns for the potential of mis-use of this behavior._
The expected behavior IMO would be that Flask-SQLAlchemy version locks something more specific in setup.py for SQLAlchemy and then you can manage bumping it up with new Flask-SQLAlchemy releases.
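For illustration, a compatible-range pin in setup.py might look like the excerpt below; the exact bounds are only an example, not a maintainer recommendation:

```python
# Hypothetical setup.py excerpt: pin SQLAlchemy to the tested minor series
# instead of the open-ended ">=0.8.0" range.
install_requires = [
    "Flask>=0.10",
    "SQLAlchemy>=1.2,<1.3",  # avoid silently pulling in 1.3.x with breaking changes
]
```

Each Flask-SQLAlchemy release could then bump the upper bound once the new SQLAlchemy series has been verified.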
Right now it just pulls in versions >= 0.8.0 which is very dangerous. | closed | 2019-03-05T22:15:07Z | 2020-12-05T20:37:38Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/680 | [] | nickjj | 8 |
xonsh/xonsh | data-science | 5,776 | history-json: Ctrl-d outside homedir throws JSONDecodeError | ## Current Behavior
Pressing ctrl-d in any directory other than ~ will throw a json decode error.
Edit:
Same goes for ctrl-c
Traceback (if applicable):
<details>
```xsh
~/.config/hypr took 9s
@
Traceback (most recent call last):
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/main.py", line 512, in main
sys.exit(main_xonsh(args))
~~~~~~~~~~^^^^^^
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/main.py", line 618, in main_xonsh
postmain(args)
~~~~~~~~^^^^^^
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/main.py", line 624, in postmain
XSH.unload()
~~~~~~~~~~^^
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/built_ins.py", line 714, in unload
self.history.flush(at_exit=True)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/history/json.py", line 526, in flush
hf = JsonHistoryFlusher(
self.filename,
...<4 lines>...
skip=skip,
)
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/history/json.py", line 273, in __init__
self.dump()
~~~~~~~~~^^
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/history/json.py", line 308, in dump
hist = xlj.LazyJSON(f).load()
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/lib/lazyjson.py", line 138, in load
return self._load_or_node(offset, size)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/home/hplar/.local/share/pipx/venvs/xonsh/lib/python3.13/site-packages/xonsh/lib/lazyjson.py", line 145, in _load_or_node
val = json.loads(s)
ujson.JSONDecodeError: Unmatched '"' when decoding 'string'
Xonsh encountered an issue during launch.
Please report to https://github.com/xonsh/xonsh/issues
Failback to /bin/sh
sh-5.2$
```
</details>
## Expected Behavior
I expect to ctrl-d to exit my terminal
## xonfig
<details>
```xsh
~
@ xonfig
+-----------------------------+----------------------+
| xonsh | 0.19.0 |
| Python | 3.13.1 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.48 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.19.1 |
| on posix | True |
| on linux | True |
| distro | arch |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib 1 | fzf-completions |
| xontrib 2 | langenv_common |
| xontrib 3 | prompt_starship |
| xontrib 4 | makefile_complete |
| xontrib 5 | vox |
| xontrib 6 | voxapi |
| xontrib 7 | pyenv |
| RC file 1 | /home/hplar/.xonshrc |
| UPDATE_OS_ENVIRON | False |
| XONSH_CAPTURE_ALWAYS | False |
| XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines |
| THREAD_SUBPROCS | True |
| XONSH_CACHE_SCRIPTS | True |
+-----------------------------+----------------------+
```
</details>
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2025-01-14T08:19:38Z | 2025-01-15T17:41:58Z | https://github.com/xonsh/xonsh/issues/5776 | [
"history",
"history-json"
] | hplar | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 786 | Number of Generator and How to increase gpu | Hi!
I have a question. I'm sorry if I had the same question in the past.
I want to increase the number of gpu to use, but even if I increase the option as follows, only one gpu is used.
--gpu_ids 0,1,2 --batch_size 24
I also want to increase the number of Generators, but I don't know how to increase them.
Could you tell me how to solve the above two problems?
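For what it's worth, the option parser in this repo turns `--gpu_ids` into a list of integer ids roughly like the sketch below (a simplification, not the exact code); if only one GPU ends up used, a first check is that the parsed list really contains all three ids and that the model is wrapped for data parallelism (shown as a comment because it depends on the model object):

```python
def parse_gpu_ids(gpu_ids_str):
    """Mimic '--gpu_ids 0,1,2' parsing: keep non-negative integer ids."""
    ids = []
    for part in gpu_ids_str.split(","):
        idx = int(part)
        if idx >= 0:
            ids.append(idx)
    return ids

# Hypothetical multi-GPU wrapping (depends on the actual model class):
# model = torch.nn.DataParallel(model, device_ids=parse_gpu_ids("0,1,2"))
```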
| closed | 2019-10-10T15:23:06Z | 2019-10-11T04:04:56Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/786 | [] | Migita6 | 2 |
scikit-image/scikit-image | computer-vision | 6,860 | `measure.regionprops_table` with `properties=['centroid_weighted']` fails on images with >1 color channel. | ### Description:
Calling `skimage.measure.regionprops_table` with `properties=['centroid_weighted']` on an image with >1 color channel raises `ValueError: setting an array element with a sequence.` I expected to get a dictionary with keys `centroid_weighted-n-c` where `n` are the indices of the spatial dimensions and `c` are the indices of the color channels.
### Way to reproduce:
```python
import numpy as np
import skimage
# Create 3D label image with a sphere
label_img = np.zeros((9, 9, 9), dtype='uint8')
label_img[:, :, :] = skimage.morphology.ball(radius=4)
n_pixels = label_img.sum()
# Create random number generator
rng = np.random.default_rng(seed=123)
# Create intensity image of sphere filled with random values
intensity_img = np.zeros((9, 9, 9, 3))
intensity_img[:, :, :, 0][label_img == 1] = rng.uniform(low=10, high=20, size=n_pixels)
intensity_img[:, :, :, 1][label_img == 1] = rng.uniform(low=10, high=20, size=n_pixels)
intensity_img[:, :, :, 2][label_img == 1] = rng.uniform(low=10, high=20, size=n_pixels)
# Measure weighted centroid for 1 color channel - this works
skimage.measure.regionprops_table(label_img, intensity_img[:, :, :, 0:1], properties=['centroid_weighted'])
# output: {'centroid_weighted-0': array([4.0115177]), 'centroid_weighted-1': array([3.98670859]), 'centroid_weighted-2': array([3.99605087])}
# Measure weighted centroid for 2 color channels - this DOESN'T work
skimage.measure.regionprops_table(label_img, intensity_img[:, :, :, 0:2], properties=['centroid_weighted'])
# Measure weighted centroid for 3 color channels - this DOESN'T work
skimage.measure.regionprops_table(label_img, intensity_img[:, :, :, 0:3], properties=['centroid_weighted'])
```
The `ValueError` with traceback is below. The error is the same whether 2 or 3 color intensity image is used.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: only size-1 arrays can be converted to Python scalars
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
Cell In[86], line 26
16 intensity_img[:, :, :, 2][label_img == 1] = rng.uniform(low=10, high=20, size=n_pixels)
18 # Measure weighted centroid for 1 color channel - this works
19 # skimage.measure.regionprops_table(label_img, intensity_img[:, :, :, 0:1], properties=['centroid_weighted'])
20 # output: {'centroid_weighted-0': array([4.0115177]), 'centroid_weighted-1': array([3.98670859]), 'centroid_weighted-2': array([3.99605087])}
(...)
24
25 # Measure weighted centroid for 3 color channels - this DOESN'T work
---> 26 skimage.measure.regionprops_table(label_img, intensity_img[:, :, :, 0:3], properties=['centroid_weighted'])
File /Applications/miniconda3/envs/idpimg/lib/python3.10/site-packages/skimage/measure/_regionprops.py:1045, in regionprops_table(label_image, intensity_image, properties, cache, separator, extra_properties, spacing)
1041 out_d = _props_to_dict(regions, properties=properties,
1042 separator=separator)
1043 return {k: v[:0] for k, v in out_d.items()}
-> 1045 return _props_to_dict(
1046 regions, properties=properties, separator=separator
1047 )
File /Applications/miniconda3/envs/idpimg/lib/python3.10/site-packages/skimage/measure/_regionprops.py:877, in _props_to_dict(regions, properties, separator)
875 rp = regions[k][prop]
876 for i, loc in enumerate(locs):
--> 877 column_data[k, i] = rp[loc]
879 # add the columns to the output dictionary
880 for i, modified_prop in enumerate(modified_props):
ValueError: setting an array element with a sequence.
```
### Version information:
```Shell
3.10.8 (main, Nov 11 2022, 08:11:25) [Clang 12.0.0 ]
macOS-10.16-x86_64-i386-64bit
scikit-image version: 0.20.0
numpy version: 1.22.3
```
| closed | 2023-03-29T01:53:44Z | 2023-09-17T11:41:53Z | https://github.com/scikit-image/scikit-image/issues/6860 | [
":bug: Bug"
] | aslyon | 3 |
arogozhnikov/einops | numpy | 156 | tensorflow 1.13 | Could einops support tensorflow 1.13?
| closed | 2021-12-10T10:12:36Z | 2021-12-11T07:28:36Z | https://github.com/arogozhnikov/einops/issues/156 | [
"bug"
] | yunzqq | 1 |
tox-dev/tox | automation | 2,496 | requirement file updates aren't detected | Passing a requirements file to `deps` is really helpful; it would be great if updates to the requirements files were detected and a env recreate/install was triggered.
Version: 3.26.0
tox.ini
```ini
[testenv]
basepython = python3
install_command = pip3 install {opts} {packages}
deps = -r{toxinidir}/requirements-dev.txt
```
requirements-dev.txt
```
-r requirements.txt
pytest==6.2.5
...
```
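Until tox tracks these files itself, one workaround is to hash the requirements files and force a recreate (`tox -r`) whenever the hash changes. A minimal sketch (the cache-file location and the idea of shelling out to tox are my assumptions, not tox features):

```python
import hashlib
from pathlib import Path

def requirements_fingerprint(paths):
    """Stable hash over the contents of the given requirements files."""
    digest = hashlib.sha256()
    for p in sorted(paths):
        digest.update(Path(p).read_bytes())
    return digest.hexdigest()

def needs_recreate(paths, cache_file=".tox-req-hash"):
    """True when any requirements file changed since the last check."""
    current = requirements_fingerprint(paths)
    cache = Path(cache_file)
    previous = cache.read_text() if cache.exists() else None
    cache.write_text(current)
    return current != previous
```

A wrapper can then run `tox -r` when `needs_recreate(...)` is true and plain `tox` otherwise.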
Updating a dependency in requirements-dev.txt and requirements.txt doesn't trigger a recreate/reinstall, so old dependencies are used.
| closed | 2022-09-09T01:21:41Z | 2022-09-09T02:39:24Z | https://github.com/tox-dev/tox/issues/2496 | [
"feature:new"
] | gebhardtr | 1 |
PaddlePaddle/ERNIE | nlp | 742 | self.sp_model.Load(sp_model_path) raises an error; converting the pathlib path to str fixes it | https://github.com/PaddlePaddle/ERNIE/blob/develop/ernie/tokenizing_ernie.py#L272 | closed | 2021-09-13T09:22:17Z | 2021-11-23T08:13:12Z | https://github.com/PaddlePaddle/ERNIE/issues/742 | [
"wontfix"
] | jjxyai | 1 |
wger-project/wger | django | 1,028 | Swap Body weight with Nutrition in header. | ### Discussed in https://github.com/wger-project/wger/discussions/1027
<div type='discussions-op-text'>
<sup>Originally posted by **ImTheTom** May 2, 2022</sup>
Hey,
How would you feel about swapping the body weight drop down in the header with the nutrition drop down in the web app.
This would mean the header has the same ordering as the cards in the dashboard on the home page and the same order as the tabs in app.
Cons would include users who are already used to the ordering.</div> | closed | 2022-05-02T09:47:59Z | 2022-05-09T15:15:07Z | https://github.com/wger-project/wger/issues/1028 | [] | rolandgeider | 0 |
onnx/onnxmltools | scikit-learn | 708 | convert_lightgbm incorrectly saves onnx format | Predictions change after saving to onnx and then loading the model again. See below, where I compare to saving an XGBoost model, which results in the same predictions after loading from ONNX format


Source:
```python
import os
import shutil
from datetime import datetime, timedelta
import numpy as np
import pandas as pd
import plotly
import lightgbm as lgb
import xgboost as xgb
import onnxmltools
import onnxruntime
from skl2onnx.common.data_types import FloatTensorType
pd.options.plotting.backend = "plotly"
end_date = datetime.now()
start_date = end_date - timedelta(days=30)
date_range = pd.date_range(start=start_date, end=end_date, freq='5min')
df_timestamps = pd.DataFrame(index=date_range)
N = len(df_timestamps)
used = pd.Series([0] * N, index=date_range)
used[(used.index.dayofweek <= 4) & (used.index.hour == 8)] = 1
used[(used.index.dayofweek <= 4) & (used.index.hour == 12)] = 2
used[(used.index.dayofweek <= 4) & (used.index.hour == 14)] = 3
y = pd.DataFrame(
{
'y': used,
},
index=date_range
)
X = pd.DataFrame(
{
'sin_day_of_week': np.sin(2 * np.pi * date_range.dayofweek / 7),
'cos_day_of_week': np.cos(2 * np.pi * date_range.dayofweek / 7),
'sin_hour_of_day': np.sin(2 * np.pi * date_range.hour / 24),
'cos_hour_of_day': np.cos(2 * np.pi * date_range.hour / 24),
},
index=date_range
)
X.columns = [f'f{i}' for i in range(X.shape[1])]
fig = y.plot()
fig.show()
# Get predictions directly from trained model
lgb_model = lgb.LGBMRegressor(
objective='quantile', # Use quantile loss
alpha=.95, # Quantile for the loss (default is median: 0.5)
n_estimators=500, # Number of boosting iterations
max_depth=10, # Maximum tree depth
)
xgb_model = xgb.XGBRegressor(
objective='reg:quantileerror', # Use quantile loss
quantile_alpha=.95, # Quantile for the loss (default is median: 0.5)
n_estimators=500, # Number of boosting iterations
max_depth=10, # Maximum tree depth
)
lgb_model.fit(X, y)
xgb_model.fit(X, y)
initial_type = [
('float_input', FloatTensorType([None, X.shape[1]]))
]
onnx_model_lgmb = onnxmltools.convert_lightgbm(lgb_model, initial_types=initial_type)
onnx_model_xgboost = onnxmltools.convert_xgboost(xgb_model, initial_types=initial_type)
lgmb_path = "tmp/lgbm/"
xgboost_path = "tmp/xgboost/"
if os.path.exists(lgmb_path):
shutil.rmtree(lgmb_path)
if os.path.exists(xgboost_path):
shutil.rmtree(xgboost_path)
os.makedirs(lgmb_path, exist_ok=True)
os.makedirs(xgboost_path, exist_ok=True)
onnxmltools.utils.save_model(onnx_model_lgmb, lgmb_path + "model.onnx")
onnxmltools.utils.save_model(onnx_model_xgboost, xgboost_path + "model.onnx")
# Predictions before saving
lgb_predictions = lgb_model.predict(X)
xgb_predictions = xgb_model.predict(X)
df = pd.DataFrame(
{
'actual': y['y'],
'lbgm predictions': lgb_predictions,
'xgb predictions': xgb_predictions,
},
index=X.index
)
fig = df.plot(title="Before Saving")
fig.show()
# Get predictions from saved model
lgbm_sess = onnxruntime.InferenceSession(lgmb_path + "model.onnx")
xgb_sess = onnxruntime.InferenceSession(xgboost_path + "model.onnx")
loaded_lgb_predictions = lgbm_sess.run(output_names=['variable'], input_feed={'float_input': X.to_numpy().astype(np.float32)})[0]
loaded_lgb_predictions = pd.Series(loaded_lgb_predictions.ravel(), index=X.index)
loaded_xgb_predictions = xgb_sess.run(output_names=['variable'], input_feed={'float_input': X.to_numpy().astype(np.float32)})[0]
loaded_xgb_predictions = pd.Series(loaded_xgb_predictions.ravel(), index=X.index)
df = pd.DataFrame(
{
'actual': y['y'],
'lbgm predictions': loaded_lgb_predictions,
'xgb predictions': loaded_xgb_predictions,
},
index=X.index
)
fig = df.plot(title="After Saving")
fig.show()
```
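A plot makes the drift visible, but it is easier to pin down numerically. A sketch of the comparison step (the arrays below are placeholders standing in for the script's `lgb_predictions` and `loaded_lgb_predictions`):

```python
import numpy as np

def max_abs_error(before, after):
    """Largest elementwise discrepancy between two prediction vectors."""
    before = np.asarray(before, dtype=float).ravel()
    after = np.asarray(after, dtype=float).ravel()
    return float(np.max(np.abs(before - after)))

# Placeholder arrays; in the repro these would be the predictions before
# saving and after the ONNX round-trip.
preds_before = np.array([0.0, 1.0, 2.0, 3.0])
preds_after = np.array([0.0, 1.0, 2.5, 3.0])
print(max_abs_error(preds_before, preds_after))  # 0.5 here; near 0 expected for a faithful conversion
```

For the XGBoost model this value should be near float32 rounding error; a large value for the LightGBM quantile model would confirm the conversion bug.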
| open | 2025-01-09T22:52:38Z | 2025-02-03T13:22:30Z | https://github.com/onnx/onnxmltools/issues/708 | [] | msnilsen | 1 |
ivy-llc/ivy | pytorch | 28,067 | Fix Frontend Failing Test: paddle - creation.paddle.eye | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-01-27T10:59:47Z | 2024-01-27T11:10:43Z | https://github.com/ivy-llc/ivy/issues/28067 | [
"Sub Task"
] | Sai-Suraj-27 | 2 |
deepfakes/faceswap | deep-learning | 736 | no grapsh is show train warning | Logs:
Loading...
05/29/2019 16:25:18 INFO Log level set to: INFO
Using TensorFlow backend.
05/29/2019 16:25:21 INFO Model A Directory: C:\Users\lipeng\faceswap\workspace\data_dst\aligned
05/29/2019 16:25:21 INFO Model B Directory: C:\Users\lipeng\faceswap\workspace\data_src\aligned
05/29/2019 16:25:21 INFO Training data directory: C:\Users\lipeng\faceswap\workspace\model
05/29/2019 16:25:21 INFO ===============================================
05/29/2019 16:25:21 INFO - Starting -
05/29/2019 16:25:21 INFO - Press 'ENTER' to save and quit -
05/29/2019 16:25:21 INFO - Press 'S' to save model weights immediately -
05/29/2019 16:25:21 INFO ===============================================
05/29/2019 16:25:22 INFO Loading data, this may take a while...
05/29/2019 16:25:22 INFO Loading Model from Original plugin...
05/29/2019 16:25:25 INFO Using configuration saved in state file
05/29/2019 16:25:25 WARNING From E:\anaconda\envs\faceswap\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.
05/29/2019 16:25:36 INFO Loaded model from disk: 'C:\Users\wblipeng1\faceswap\workspace\model'
05/29/2019 16:25:36 INFO Loading Trainer from Original plugin...
05/29/2019 16:25:41 INFO Enabled TensorBoard Logging
05/29/2019 16:25:41 WARNING From E:\anaconda\envs\faceswap\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.
05/29/2019 16:27:55 INFO saved models
05/29/2019 16:30:59 WARNING No handles with labels found to put in legend.
Question:
When training the model, no graph is shown; after clicking the refresh button, the logs print "No handles with labels found to put in legend".
| closed | 2019-05-29T06:55:21Z | 2019-05-31T08:08:27Z | https://github.com/deepfakes/faceswap/issues/736 | [] | fortunelee | 0 |
roboflow/supervision | deep-learning | 1,566 | Bugfix: Class-agnostic mAP | # Bugfix: Class-agnostic metrics.
> [!TIP]
> [Hacktoberfest](https://hacktoberfest.com/) is calling! Whether it's your first PR or your 50th, you’re helping shape the future of open source. Help us build the most reliable and user-friendly computer vision library out there! 🌱
We recently realized that the `class_agnostic` argument in `MeanAveragePrecision` does nothing. Oops! Fix the algorithm, so if this argument is set, the metric treats all predictions and targets as if they belong to the same class.
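The fix amounts to a preprocessing step: before matching, collapse every class id to a single shared value so all boxes compete in one class. A sketch of that step in plain Python (names are illustrative; the real change lives inside `MeanAveragePrecision`):

```python
def collapse_classes(class_ids, agnostic_id=0):
    """Map every class id to one shared id for class-agnostic evaluation."""
    return [agnostic_id for _ in class_ids]

# With class_agnostic=True, both predictions and targets would pass through
# this step, so e.g. a 'car' prediction can match a 'truck' target.
```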
---
Helpful links:
* [Contribution guide](https://supervision.roboflow.com/develop/contributing/#how-to-contribute-changes)
* Metrics:
* mAP metric: [docs](https://supervision.roboflow.com/develop/metrics/mean_average_precision/), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/mean_average_precision.py#L25)
* F1 Score: [docs](https://supervision.roboflow.com/develop/metrics/f1_score/), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/f1_score.py#L25)
* [Supervision Cheatsheet](https://roboflow.github.io/cheatsheet-supervision/)
* [Colab Starter Template](https://colab.research.google.com/drive/1rin7WrS-UvVIe-_Gfxmu-yVslGphOq89#scrollTo=pjmCrNre2g58)
* [Prior metrics test Colab](https://colab.research.google.com/drive/1qSMDDpImc9arTgQv-qvxlTA87KRRegYN) | closed | 2024-10-03T19:21:08Z | 2024-10-11T15:51:28Z | https://github.com/roboflow/supervision/issues/1566 | [
"bug",
"help wanted",
"hacktoberfest"
] | LinasKo | 9 |
Lightning-AI/pytorch-lightning | machine-learning | 20,281 | `NeptuneCallback` produces lots of `X-coordinates (step) must be strictly increasing` errors | ### Bug description
When Optuna is run in parallel mode (`n_jobs=-1`), with `NeptuneCallback`, I get:
`[neptune] [error ] Error occurred during asynchronous operation processing: X-coordinates (step) must be strictly increasing for series attribute: trials/values. Invalid point: 0.0`
It's normal that during parallel or distributed hyperparameter optimization, results arrive out of order. Either Neptune should support adding steps out of order, or `NeptuneCallback` should support it somehow (e.g. by using an artificial step number).
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
```python
study.optimize(..., callbacks=[NeptuneCallback(run)], n_jobs=-1)
```
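One way `NeptuneCallback` could sidestep this without Neptune-side changes is the artificial step number mentioned above: a process-wide counter guarded by a lock, used in place of the trial number. A stdlib sketch (not Neptune's actual API):

```python
import itertools
import threading

class MonotonicStep:
    """Thread-safe, strictly increasing step counter for out-of-order trials."""
    def __init__(self):
        self._counter = itertools.count()
        self._lock = threading.Lock()

    def next(self):
        with self._lock:
            return next(self._counter)

steps = MonotonicStep()
# Inside the callback, log with step=steps.next() instead of trial.number,
# so steps stay strictly increasing even when trials finish out of order.
```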
### Error messages and logs
`[neptune] [error ] Error occurred during asynchronous operation processing: X-coordinates (step) must be strictly increasing for series attribute: trials/values. Invalid point: 0.0`
### Environment
Any multi-threaded environment.
### More info
_No response_ | open | 2024-09-14T11:49:28Z | 2024-09-28T23:45:22Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20281 | [
"bug",
"needs triage"
] | iirekm | 1 |
adbar/trafilatura | web-scraping | 782 | "Chinese dot" breaks the fingerprint method | Hi everyone,
I noticed that if this "Chinese dot" ("。") is included in the text being fingerprinted, then the hash is always "ffffffffffffffff":
```
from trafilatura.deduplication import content_fingerprint, Simhash
content_a = Simhash("""行政長官岑浩""")
print(Simhash("""行政長官岑浩""").to_hex()) # 13dd8c82d4634a48
print(Simhash("""欢迎与我交流""").to_hex()) # 58429793861fa351
print(Simhash("""行政長官岑浩。""").to_hex()) #ffffffffffffffff
print(Simhash("""欢迎与我交流。""").to_hex()) #ffffffffffffffff
```
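One plausible mechanism (an assumption; I have not traced trafilatura's tokenizer): if tokenization yields no features for the text, a typical simhash implementation sets every bit, because each bit position sums to zero and `0 >= 0` counts as set. A toy simhash showing that failure mode:

```python
import hashlib

def toy_simhash(tokens, bits=64):
    """Minimal simhash; with no tokens every bit sums to 0 and gets set."""
    sums = [0] * bits
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        for i in range(bits):
            sums[i] += 1 if (h >> i) & 1 else -1
    value = 0
    for i in range(bits):
        if sums[i] >= 0:
            value |= 1 << i
    return value

print(hex(toy_simhash([])))  # 0xffffffffffffffff
```

If trafilatura's tokenizer drops or chokes on "。" and ends up with an empty feature set, this would explain the constant hash.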
I hope the encoding is not all messed up. | closed | 2025-02-03T07:21:22Z | 2025-02-17T16:29:56Z | https://github.com/adbar/trafilatura/issues/782 | [
"enhancement"
] | reinoldus | 2 |
roboflow/supervision | computer-vision | 1,247 | Speed Estimator for Vehicle Tracking | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Where and how do I specifically add the configurations for both "vehicles.mp4" and "vehicles-result.mp4" in the ultralytics script, "ultralytics_example.py"?
Do the filenames simply become the values passed to the "--source_video_path" and "--target_video_path" options?
Can you specifically send the 146-line ultralytics script to incorporate "vehicles.mp4" and "vehicles-result.mp4"?
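For what it's worth, the likely answer is yes: the filenames are the values passed to those flags, and the script itself does not change. A sketch of the presumed argparse wiring and invocation (flag names taken from the question; the script's real parser may differ):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Vehicle speed estimation")
    parser.add_argument("--source_video_path", required=True)
    parser.add_argument("--target_video_path", required=True)
    return parser

# Presumed invocation:
#   python ultralytics_example.py \
#       --source_video_path vehicles.mp4 --target_video_path vehicles-result.mp4
args = build_parser().parse_args(
    ["--source_video_path", "vehicles.mp4",
     "--target_video_path", "vehicles-result.mp4"]
)
print(args.source_video_path, args.target_video_path)  # vehicles.mp4 vehicles-result.mp4
```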
### Additional
_No response_ | closed | 2024-05-30T11:10:04Z | 2024-05-30T11:38:05Z | https://github.com/roboflow/supervision/issues/1247 | [
"question"
] | bthoma48 | 1 |