| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
HIT-SCIR/ltp | nlp | 1 | Organize related documentation | Organize the related documentation, write the Wiki, etc.
| closed | 2011-06-09T06:11:51Z | 2013-09-01T07:22:31Z | https://github.com/HIT-SCIR/ltp/issues/1 | [] | carfly | 0 |
apify/crawlee-python | web-scraping | 951 | Template issues: failing Docker image build and an error in the camoufox template | 1) poetry-plugin-export and poetry mismatch
A few weeks ago poetry v2 was released. Currently the Docker image build of a generated actor template fails during poetry-plugin-export dependency resolution: the plugin requires poetry 2.x, but our poetry is locked to <2
```
Because no versions of poetry-plugin-export match >1.9.0,<2.0.0
2025-02-03T06:59:32.156Z and poetry-plugin-export (1.9.0) depends on poetry (>=2.0.0,<3.0.0), poetry-plugin-export (>=1.9.0,<2.0.0) requires poetry (>=2.0.0,<3.0.0).
2025-02-03T06:59:32.158Z So, because poetry-instance depends on both poetry (1.8.5) and poetry-plugin-export (^1.9.0), version solving failed.
```
Either migrate to poetry 2.x (might be too big a change) or lock poetry-plugin-export to <1.9.0.
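If the second option is chosen, a hypothetical pin in the template Dockerfile could look like this (the exact line and versions are illustrative, not the actual template content):

```dockerfile
# Pin poetry to 1.x and the export plugin below 1.9.0, which still supports poetry 1.x
RUN pip install "poetry<2.0.0" "poetry-plugin-export<1.9.0"
```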
2) Cookiecutter camoufox name
In several places an incorrect project alias is used, so the wrong files are generated.
This `cookiecutter.crawler_type == 'camoufox'` should be `cookiecutter.crawler_type == 'playwright-camoufox'`.
| closed | 2025-02-03T07:21:47Z | 2025-02-03T10:44:12Z | https://github.com/apify/crawlee-python/issues/951 | [
"bug",
"t-tooling"
] | Pijukatel | 0 |
laurentS/slowapi | fastapi | 39 | Add to docs: the limiter must be below the route decorator in order to work | This works:
```python
@router.get("/test")
@limiter.limit("2/minute")
async def test(
request: Request
):
return "hi"
```
but this doesn't:
```python
@limiter.limit("2/minute")
@router.get("/test")
async def test(
request: Request
):
return "hi"
```
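The reason is decorator application order: decorators apply bottom-up, and the router registers exactly the callable it receives. A minimal stand-in sketch (`route` and `limit` are hypothetical toy decorators, not the slowapi or FastAPI API):

```python
routes = {}

def route(path):                 # stand-in for @router.get(path)
    def register(func):
        routes[path] = func      # the router stores whatever callable it receives
        return func
    return register

def limit(func):                 # stand-in for @limiter.limit(...)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1       # pretend rate-limit bookkeeping
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@route("/works")                 # router registers the *limited* wrapper
@limit
def ok():
    return "hi"

@limit                           # limiter wraps the function, but the router
@route("/broken")                # already registered the bare function below it
def broken():
    return "hi"

routes["/works"]()               # goes through the limiter
routes["/broken"]()              # bypasses the limiter entirely
print(routes["/works"].calls)    # 1 -> the limiter saw the request
```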
That would be useful for beginners to know (if they haven't used any rate-limiter module before). | closed | 2021-02-23T12:49:58Z | 2021-03-14T00:01:12Z | https://github.com/laurentS/slowapi/issues/39 | [] | transfluxus | 0 |
paperless-ngx/paperless-ngx | machine-learning | 7,612 | [BUG] Error occurred while consuming: UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 60-61: invalid continuation byte | ### Description

I am adding documents in German and getting a bunch of errors like the following. The documents are scanned using a ScanSnap iX1500.
Here is my docker config
```yaml
version: '3.8'
services:
broker:
image: redis
read_only: true
healthcheck:
test: ["CMD-SHELL", "redis-cli ping || exit 1"]
container_name: Paperless-NGX-REDIS
security_opt:
- no-new-privileges:true
environment:
REDIS_ARGS: "--save 60 10"
restart: on-failure:5
volumes:
- /volume1/docker/paperless/redis:/data
gotenberg:
image: docker.io/gotenberg/gotenberg:8.7
restart: on-failure:5
security_opt:
- no-new-privileges:true
command:
- "gotenberg"
- "--chromium-disable-javascript=true"
- "--chromium-allow-list=file:///tmp/.*"
tika:
image: docker.io/apache/tika:latest
restart: on-failure:5
db:
image: postgres:16
container_name: Paperless-NGX-DB
restart: on-failure:5
healthcheck:
test: ["CMD", "pg_isready", "-q", "-d", "paperless", "-U", "paperless"]
timeout: 45s
interval: 10s
retries: 10
security_opt:
- no-new-privileges:true
volumes:
- /volume1/docker/paperless/db:/var/lib/postgresql/data
environment:
POSTGRES_DB: paperless
POSTGRES_USER: paperless
POSTGRES_PASSWORD: paperless
webserver:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
container_name: Paperless-NGX
healthcheck:
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
interval: 30s
timeout: 10s
retries: 5
security_opt:
- no-new-privileges:true
restart: on-failure:5
depends_on:
db:
condition: service_healthy
broker:
condition: service_healthy
tika:
condition: service_started
gotenberg:
condition: service_started
ports:
- 8001:8000
volumes:
- /volume1/docker/paperless/data:/usr/src/paperless/data
- /volume1/docker/paperless/media:/usr/src/paperless/media
- /volume1/docker/paperless/export:/usr/src/paperless/export
- /volume1/docker/paperless/consume:/usr/src/paperless/consume
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_OCR_SKIP_ARCHIVE_FILE: always
PAPERLESS_OCR_PAGES: 1
PAPERLESS_TIME_ZONE: Europe/Berlin
PAPERLESS_ADMIN_USER: admin
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
PAPERLESS_FILENAME_FORMAT: "{correspondent}/{created_year}/{created} {title}"
PAPERLESS_OCR_USER_ARGS: '{"invalidate_digital_signatures": true}'
PAPERLESS_OCR_LANGUAGE: "aze+deu+eng+rus"
PAPERLESS_OCR_LANGUAGES: "ces tur aze deu eng rus"
PAPERLESS_DEBUG: false
```
### Steps to reproduce
1. Scan PDFs using ScanSnap ix1500
2. Upload to the paperless
3. Get the error
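For illustration only (the byte values are made up, not taken from the failing PDF), this is the kind of decode that fails: 0xe8 is 'è' in Latin-1, but in UTF-8 it announces a multi-byte sequence, and the following ASCII byte is not a valid continuation byte.

```python
data = b"Stra\xe8e"             # hypothetical Latin-1-style bytes
try:
    data.decode("utf-8")
    error = None
except UnicodeDecodeError as exc:
    error = str(exc)

print(error)                    # ...invalid continuation byte
print(data.decode("latin-1"))   # 'Straèe' -> a lenient fallback decodes fine
```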
### Webserver logs
```bash
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 60: invalid continuation byte
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 327, in main_wrap
raise exc_info[1]
File "/usr/src/paperless/src/documents/consumer.py", line 598, in run
document_parser.parse(self.working_copy, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 435, in parse
raise ParseError(f"{e.__class__.__name__}: {e!s}") from e
documents.parsers.ParseError: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 60: invalid continuation byte
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 149, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 629, in run
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 304, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 03092024_005.pdf: Error occurred while consuming document 03092024_005.pdf: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 60: invalid continuation byte
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.6
### Host OS
Synology DSM 7.2.2
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-09-02T22:52:04Z | 2024-10-01T07:44:22Z | https://github.com/paperless-ngx/paperless-ngx/issues/7612 | [
"not a bug"
] | tural-ali | 1 |
albumentations-team/albumentations | machine-learning | 1,767 | Can I inform the initial bounding box visibilities? | ## My Question
Is it possible to inform a transformation of the initial visibility of each bounding box?
As far as I know, the transformations always assume that the objects are 100% visible before the transformation. But in real life, that is not always the case.
### Additional Context
I work with object detection on very high resolution images. As a preprocessing step, the images of the training dataset have to be sliced before they can be used by the model. During this preprocessing, the visibility of many bounding boxes becomes less than 100%. Of course, I can calculate those values, but is there a way to use them with albumentations? | open | 2024-06-03T13:18:11Z | 2024-06-04T09:10:22Z | https://github.com/albumentations-team/albumentations/issues/1767 | [
"question",
"Need more info"
] | gui-miotto | 3 |
RobertCraigie/prisma-client-py | asyncio | 418 | Decimal breaks sqlserver schema | ## Bug description
`prisma generate` produces no output; the Prisma client is not generated with the following schema:
## How to reproduce
prisma generate
## Expected behavior
The prisma client is generated
## Prisma information
```prisma
datasource db {
provider = "sqlserver"
url = ""
}
generator db {
provider = "prisma-client-py"
interface = "asyncio"
}
model Test {
id Int @id
value Decimal
}
```
## Environment & setup
- OS: Windows 11
- Database: SQL Server
- Python version: 3.8.10
- Prisma version: 3.13.0
```
prisma : 3.13.0
prisma client python : 0.6.6
platform : windows
engines : efdf9b1183dddfd4258cd181a72125755215ab7b
install path : C:\Users\x\AppData\Local\Programs\Python\Python38\Lib\site-packages\prisma
installed extras : []
```
| closed | 2022-06-03T14:17:41Z | 2022-06-03T19:37:38Z | https://github.com/RobertCraigie/prisma-client-py/issues/418 | [
"kind/question",
"topic: types"
] | nitedani | 1 |
AirtestProject/Airtest | automation | 774 | Android 10 MaxTouch: when the x, y passed in are integers, the transformed values are always 0 and the touch method doesn't work | On Android 10 with MaxTouch, when the x and y passed in are integers, the transformed coordinates are always 0 and the touch method doesn't work.
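A hypothetical reconstruction of why the transform yields 0: under Python 2, `/` between two ints is floor division, so normalizing an integer pixel coordinate by the screen size always gives 0 (shown in Python 3 syntax, where `//` reproduces Python 2's `int / int`):

```python
x, y = 540, 960        # integer touch point (illustrative values)
w, h = 1080, 1920      # screen resolution (illustrative values)
print(x // w, y // h)  # 0 0     <- what Python 2 computes for x / w, y / h
print(x / w, y / h)    # 0.5 0.5 <- the intended normalized coordinates
```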
This is running in a Python 2.7 environment. | closed | 2020-07-23T12:03:32Z | 2020-07-31T02:32:01Z | https://github.com/AirtestProject/Airtest/issues/774 | [] | fredlluo | 1 |
pnkraemer/tueplots | matplotlib | 104 | ICLR 2023 bundle | Would it be possible to add the style bundle for iclr 2023? | closed | 2022-08-30T13:35:40Z | 2022-09-16T14:43:29Z | https://github.com/pnkraemer/tueplots/issues/104 | [] | gisilvs | 2 |
jazzband/django-oauth-toolkit | django | 621 | ValueError: invalid literal for int() with base 10: 'abcdefg' (/o/applications/{pk}/) | Is there a reason to have this regex in urls.py `[\w-]` instead of `[0-9^/]` (for `pk`)?
Visit, for example, http://example.com/o/applications/1asdgsdg/ and you will get this error:
> ValueError: invalid literal for int() with base 10: '1asdgsdg'
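A quick sketch of the difference (the patterns are illustrative, not the exact urls.py): a permissive pk pattern lets non-numeric values reach `int()`, which raises, while `\d+` rejects the URL at routing time and yields a clean 404 instead.

```python
import re

permissive = re.compile(r"^o/applications/(?P<pk>[\w-]+)/$")  # current-style pattern
numeric = re.compile(r"^o/applications/(?P<pk>\d+)/$")        # digits-only alternative

match = permissive.match("o/applications/1asdgsdg/")
print(match.group("pk"))                          # '1asdgsdg' -> int() fails later
print(numeric.match("o/applications/1asdgsdg/"))  # None -> no view is ever called
```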
Take a look at this pull request: | closed | 2018-07-11T07:56:12Z | 2020-10-16T15:02:35Z | https://github.com/jazzband/django-oauth-toolkit/issues/621 | [] | ptrojanowski | 1 |
ray-project/ray | machine-learning | 51,120 | CI test linux://doc:doc_code_cgraph_overlap is consistently_failing | CI test **linux://doc:doc_code_cgraph_overlap** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8711#01956a36-520a-4bea-a351-c4b739770614
- https://buildkite.com/ray-project/postmerge/builds/8711#01956a0e-a7de-4ac8-a743-147f7e001fb9
DataCaseName-linux://doc:doc_code_cgraph_overlap-END
Managed by OSS Test Policy | closed | 2025-03-06T07:25:41Z | 2025-03-07T22:35:15Z | https://github.com/ray-project/ray/issues/51120 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 7 |
getsentry/sentry | python | 87,094 | Clickup Connection not working - Infinite Spinner | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
Video: https://www.loom.com/share/677562533f7842ba826c93032c26d17b?sid=91cc82d1-2013-47e3-b9a7-073ef2340dda
Try to connect Clickup with Sentry following the documentation.
This issue was mentioned here and at least for me it is not solved: https://github.com/getsentry/sentry/issues/72727
### Expected Result
Successful connection for Clickup.
### Actual Result
Infinite spinner, and the connection stays in the "Pending" state. I tested it in Chrome and Safari and was logged in to both Sentry and Clickup.


### Product Area
Settings
### Link
_No response_
### DSN
_No response_
### Version
_No response_ | open | 2025-03-14T16:37:43Z | 2025-03-18T19:35:02Z | https://github.com/getsentry/sentry/issues/87094 | [
"Product Area: Settings - Integrations"
] | ik4Rus | 3 |
onnx/onnxmltools | scikit-learn | 504 | Unsupported objective 'quantile' for LGBMRegressor | ### Issue description
When trying to run `onnxmltools.convert_lightgbm` on a `lightgbm.sklearn.LGBMRegressor` model with `objective='quantile'`, the conversion fails with a RuntimeError.
### Open PR
This is an update: I just saw [this pull request](https://github.com/onnx/onnxmltools/pull/503). After directly applying the change, it works properly. I would really recommend approving and merging the PR. :)
### Similar closed issue
This problem seems to be similar to [this one](https://github.com/onnx/onnxmltools/issues/462), reported by @janjagusch. I could not reproduce the workaround for the original (poisson) problem.
### Reproducible example
```python
import numpy as np
from lightgbm.sklearn import LGBMRegressor
from onnxmltools import convert_lightgbm
from skl2onnx.common.data_types import FloatTensorType
examples = 1000
features = 10
X = np.random.randn(examples, features)
y = np.round(np.random.randn(examples), 0)
model = LGBMRegressor(objective="quantile", metric="quantile", alpha=0.05)
model.fit(X, y)
initial_types = [("float_input", FloatTensorType([None, features]))]
onnx_model = convert_lightgbm(model, initial_types=initial_types)
```
Results in the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_33543/973951346.py in <module>
14
15 initial_types = [("float_input", FloatTensorType([None, features]))]
---> 16 onnx_model = convert_lightgbm(model, initial_types=initial_types)
/media/bruno/dados/dev/virtual_envs/generic_env/lib/python3.9/site-packages/onnxmltools/convert/main.py in convert_lightgbm(model, name, initial_types, doc_string, target_opset, targeted_onnx, custom_conversion_functions, custom_shape_calculators, without_onnx_ml, zipmap)
134
135 from .lightgbm.convert import convert
--> 136 return convert(model, name, initial_types, doc_string, target_opset, targeted_onnx,
137 custom_conversion_functions, custom_shape_calculators, without_onnx_ml,
138 zipmap=zipmap)
/media/bruno/dados/dev/virtual_envs/generic_env/lib/python3.9/site-packages/onnxmltools/convert/lightgbm/convert.py in convert(model, name, initial_types, doc_string, target_opset, targeted_onnx, custom_conversion_functions, custom_shape_calculators, without_onnx_ml, zipmap)
54 custom_shape_calculators, zipmap=zipmap)
55 topology.compile()
---> 56 onnx_ml_model = convert_topology(topology, name, doc_string, target_opset, targeted_onnx)
57
58 if without_onnx_ml:
/media/bruno/dados/dev/virtual_envs/generic_env/lib/python3.9/site-packages/onnxconverter_common/topology.py in convert_topology(topology, model_name, doc_string, target_opset, targeted_onnx, channel_first_inputs)
774 else:
775 # Convert the selected operator into some ONNX objects and save them into the container
--> 776 get_converter(operator.type)(scope, operator, container)
777
778 # When calling ModelComponentContainer's add_initializer(...), nothing is added into the input list.
/media/bruno/dados/dev/virtual_envs/generic_env/lib/python3.9/site-packages/onnxmltools/convert/lightgbm/operator_converters/LightGbm.py in convert_lightgbm(scope, operator, container)
398 post_transform = "Exp"
399 else:
--> 400 raise RuntimeError(
401 "LightGBM objective should be cleaned already not '{}'.".format(
402 gbm_text['objective']))
RuntimeError: LightGBM objective should be cleaned already not 'quantile'.
```
### Versions
```
numpy==1.21.1
lightgbm==3.2.1
onnxmltools==1.9.1
skl2onnx==1.10.0
```
| closed | 2021-10-03T13:05:20Z | 2021-10-04T11:41:29Z | https://github.com/onnx/onnxmltools/issues/504 | [] | brunovilar | 1 |
vitalik/django-ninja | pydantic | 689 | Just a share, Developed a web-admin system with django-ninja | Hi @vitalik
This is Jerry. I previously used DRF to develop applications. When I found out about Django-ninja, I loved it right away. So I used Django-ninja and Vue3 to develop a web-admin system; you are welcome to visit my project and provide some suggestions. Thank you a lot.
_Project address_: [Fuadmin Code](https://github.com/FuAdmin/fu-admin)
_Preview address_: [Fuadmin Preview](http://175.24.184.165:9090/)
BTW, because the server is in China, I'm not sure if you can access the website.
Below are some screenshots of the system



| closed | 2023-02-25T07:44:56Z | 2023-04-20T17:40:40Z | https://github.com/vitalik/django-ninja/issues/689 | [] | FuAdmin | 3 |
marimo-team/marimo | data-science | 3,176 | Persistent cache with auto-cleanup | ### Description
### Objective
Enable https://github.com/marimo-team/marimo/issues/3054 (option to save all cell results by default) in a scalable way.
### Assumptions
1. Stable cell_ids across marimo app instances (both on the same and different machines) and branches/commits. See more details [here](https://github.com/marimo-team/marimo/issues/3055#issuecomment-2531427949) on why without stable cell_ids, the qualities listed below are not realisable. (I've created a separate issue for this: https://github.com/marimo-team/marimo/issues/3177)
### The qualities that I think persistent cache _should_ obtain
1. Fast and scalable (to thousands of cells) pruning of cell_id's list from outdated entries and addition of new entries, which requires a fast lookup from cell_id to the list of tuples (module_hash, created_at, content_blob).
2. Pruning shouldn't base decisions on a file's ctime or mtime attributes from the native file system, as they are not guaranteed to be preserved in some sync/distribution setups, primarily in Git ([Git _doesn't_ store file creation times](https://stackoverflow.com/questions/62039244/does-git-store-the-file-creation-time)).
3. Protection from content blob corruption in case of sudden machine power outages, marimo app crashes, etc.
4. Agnostic towards the way cache is shared between machines and/or backed up: Git/Github, git-annex, CI systems cache directories (such as Github Actions's [cache action](https://github.com/actions/cache)), Marimo Cloud, Dropbox, rsync, backup systems, etc.
5. Legible to the user/developer and other software outside of the marimo app: prefer text over binary formats.
6. Basic deduplication, that is, if a module coinciding with a cell produces the same results (despite having different inputs) as some previous version, we don't store the results multiple times.
**Non-goals**
1. Deep/fine-grained deduplication or deduplication of results _across_ cells: may be added later if strongly needed by someone, but I doubt this, because primarily the cache space usage should be addressed by auto-cleanup. For [top-level functions](https://github.com/marimo-team/marimo/issues/2293) (when they are supported), I think for most cases using [XTDB](https://xtdb.com/) to store the results per "input columns" will achieve much better space compression than a generic cache (apart from various other benefits).
2. 100% cached content loss guarantee. This is still a cache, not a replacement for a database.
### Limitation of the current design of persistent cache
The current design of persistent cache doesn't obtain qualities 1-3 and 6 above, and I don't see how it can be fixed "on the margin".
### Suggested solution
Cache dir structure:
```
cache/
- 00/ (bucketing by the first two characters of cell_ids (uuids))
- 00...[uuid -- cell_id]/
- .table
- [content-hash-sha1].pkl (first blob)
- [content-hash-sha1].pkl (second blob)
- [...]
- 00...[uuid -- cell_id]-cell/ (note the -cell suffix of the dir name; it indicates "cached module coinciding with the cell")
- .table
- <content-hash>.pkl (first blob)
- <content-hash>.pkl (second blob)
- [...]
```
`.table` file format is as follows:
```
[cache type] [module hash] [created time] [content hash]
[cache type] [module hash] [created time] [content hash]
...
```
### Design considerations
**Why bucketing by the first two characters of cell_ids**: a future-proofing measure that will help Git not break down when working with monorepos with thousands of cells and more. It will also aid Marimo itself: when it caches results for a new cell, it needs to create the directory for that cell_id, and in some file systems that may become slow when there are already thousands of entries (i.e., other directories) in the `cache` directory. Bucketing by the first two characters keeps the `cache` dir no bigger than 256 entries. This future-proofing is very cheap (new cells are not created often), so I think it makes sense to add it already. (This idea is borrowed from Git itself, see `ls -l .git/objects`.)
**Why directory-per-cell**: keeps `.table` files very small (almost always less than 4KB, one page), which makes loading, updating, and writing them back easier. Also, importantly, per-cell tables exclude Git merge conflicts unless the same cell was updated in both branches. Even then, having `__marimo__/cache/*/*/.table merge=union` in `.gitattributes` would help (Marimo may prune obsolete lines every time it overwrites the `.table` file anyway).
Note: `[cell_id]/` and `[cell_id]-cell/` are two different directories that can co-exist for the same cell_id. The `[cell_id]-cell/` contains only the module hashes for the cell itself, whereas `[cell_id]/` contains module hashes for `with persistent_cache()` blocks _within_ the cell.
For different auto-cleanup algorithms for cached-modules-coinciding-with-cells and modules _within_ cells, see comments in #3055.
Note the departure from the current persistent cache design, where the file name is the module hash. Naming content blobs by their own hashes makes it very simple and robust for two module_hashes in the table to point to the same content blob. It will not add extra read latency most of the time, because after the first access the marimo app can easily keep all hashes in memory. It helps that in large workspaces a single app will not access most cells, so the overhead of this in-memory table caching is small, thanks to the directory-per-cell design.
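A minimal sketch of the write path this proposal implies (all function and variable names are hypothetical; only the directory layout and the `.table` line format come from the proposal above):

```python
import hashlib
import pickle
import tempfile
import time
from pathlib import Path

def store(cache_root, cell_id, cache_type, module_hash, value):
    blob = pickle.dumps(value)
    content_hash = hashlib.sha1(blob).hexdigest()
    # Bucket by the first two characters of the cell_id, then one dir per cell.
    cell_dir = Path(cache_root) / cell_id[:2] / cell_id
    cell_dir.mkdir(parents=True, exist_ok=True)
    blob_path = cell_dir / f"{content_hash}.pkl"
    if not blob_path.exists():      # basic dedup: identical results share one blob
        tmp = blob_path.with_suffix(".tmp")
        tmp.write_bytes(blob)       # write-then-rename guards against torn
        tmp.rename(blob_path)       # writes on crashes / power loss
    # Append a "[cache type] [module hash] [created time] [content hash]" line.
    with open(cell_dir / ".table", "a") as table:
        table.write(f"{cache_type} {module_hash} {int(time.time())} {content_hash}\n")
    return content_hash

root = tempfile.mkdtemp()
h1 = store(root, "00aabbcc-cell", "Pickle", "module-hash-1", [1, 2, 3])
h2 = store(root, "00aabbcc-cell", "Pickle", "module-hash-2", [1, 2, 3])
print(h1 == h2)  # True: two module hashes point at the same content blob
```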
### Alternative
This proposal supersedes https://github.com/marimo-team/marimo/issues/3055. It is the opposite of the current persistent cache design: obtains qualities 1-3 and 6, but fails for 4 (Git merge conflicts!!) and 5 (not super legible). | open | 2024-12-15T18:10:25Z | 2025-02-07T12:31:00Z | https://github.com/marimo-team/marimo/issues/3176 | [
"enhancement"
] | leventov | 37 |
PokemonGoF/PokemonGo-Bot | automation | 5,620 | Feature request: FollowPath with different vehicles | ### Short Description
It would be nice if, for every point in the path for the FollowPath task, you could indicate which vehicle should be simulated.
- walking (default)
- bicycle (speed around 20 km/h)
- car (max speed as indicated by the google maps api?)
- plane (900 km/h, no connection to the API for the flying time)
### Possible solution
In the FollowPath walker, use a different speed depending on the vehicle for the current stretch. When 'flying', pause for the flying time.
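A sketch of the per-leg speed lookup (the bicycle and plane speeds come from this request; the walking and car values, and all names, are placeholders, not actual bot config keys):

```python
SPEED_KMH = {"walking": 4.5, "bicycle": 20.0, "car": 90.0, "plane": 900.0}

def leg_duration_hours(distance_km, vehicle="walking"):
    """Simulated travel time for one stretch of the path."""
    return distance_km / SPEED_KMH[vehicle]

print(leg_duration_hours(9000, "plane"))   # 10.0 -> pause this long while 'flying'
print(leg_duration_hours(10, "bicycle"))   # 0.5
```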
### How it would help others
Helps to get to an interesting location more quickly without teleporting. Combine with loitering to, e.g. fly to California, walk around there from pokestop to pokestop, catching pokemon on the way, for a couple of hours, then drive to another city (no egg incubating during the trip), walk around there for another couple of hours, then fly back home.
<!-- ==========END OF FEATURE REQUEST SECTION========== -->
| open | 2016-09-22T20:21:24Z | 2016-09-30T11:43:12Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5620 | [
"Feature Request"
] | janpascal | 5 |
flavors/django-graphql-jwt | graphql | 198 | @login_required returns 200 instead of 401 | When there is no JWT token or the token is invalid, I get a 200 status from the server. That's an issue; it should return a 401 status, since we can't check the payload of a successful response every time. I've seen the same topic here https://github.com/flavors/django-graphql-jwt/issues/68, but django-graphql-extensions doesn't allow changing the response status. I understand that there are no standards, but common sense says that if I haven't got the data, that's not a 200. Following your logic, graphene should return 200 as well when a query isn't correct. But it returns 400. If there is any way to implement a 401 status code, please let me know. For me that's a blocker for using django-graphql-jwt in prod projects. | closed | 2020-05-10T20:20:46Z | 2020-05-17T14:10:27Z | https://github.com/flavors/django-graphql-jwt/issues/198 | [] | PetrShchukin | 1 |
AutoGPTQ/AutoGPTQ | nlp | 283 | Failure to install from source when using Docker | root@docker-desktop:/data1/llm/code/AutoGPTQ# pip install .
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Processing /data1/llm/code/AutoGPTQ
Preparing metadata (setup.py) ... done
Requirement already satisfied: accelerate>=0.19.0 in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (0.21.0)
Requirement already satisfied: datasets in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (2.14.4)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (1.24.4)
Collecting rouge (from auto-gptq==0.4.1+cu1211009)
Downloading rouge-1.0.1-py3-none-any.whl (13 kB)
Requirement already satisfied: torch>=1.13.0 in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (2.1.0a0+b5021ba)
Requirement already satisfied: safetensors in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (0.3.1)
Requirement already satisfied: transformers>=4.31.0 in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (4.31.0)
Requirement already satisfied: peft in /usr/local/lib/python3.10/dist-packages (from auto-gptq==0.4.1+cu1211009) (0.4.0)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.19.0->auto-gptq==0.4.1+cu1211009) (23.1)
Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.19.0->auto-gptq==0.4.1+cu1211009) (5.9.4)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.19.0->auto-gptq==0.4.1+cu1211009) (6.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (3.12.2)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (4.7.1)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (2.6.3)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (3.1.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (2023.6.0)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (0.16.4)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (2023.6.3)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (2.31.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (0.13.3)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (4.65.0)
Requirement already satisfied: pyarrow>=8.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets->auto-gptq==0.4.1+cu1211009) (11.0.0)
Requirement already satisfied: dill<0.3.8,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from datasets->auto-gptq==0.4.1+cu1211009) (0.3.7)
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets->auto-gptq==0.4.1+cu1211009) (2.0.3)
Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets->auto-gptq==0.4.1+cu1211009) (3.3.0)
Requirement already satisfied: multiprocess in /usr/local/lib/python3.10/dist-packages (from datasets->auto-gptq==0.4.1+cu1211009) (0.70.15)
Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets->auto-gptq==0.4.1+cu1211009) (3.8.4)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from rouge->auto-gptq==0.4.1+cu1211009) (1.16.0)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (23.1.0)
Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (3.1.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets->auto-gptq==0.4.1+cu1211009) (1.3.1)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers>=4.31.0->auto-gptq==0.4.1+cu1211009) (2023.5.7)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (2.1.3)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->auto-gptq==0.4.1+cu1211009) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->auto-gptq==0.4.1+cu1211009) (2023.3)
Requirement already satisfied: tzdata>=2022.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets->auto-gptq==0.4.1+cu1211009) (2023.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.13.0->auto-gptq==0.4.1+cu1211009) (1.3.0)
Building wheels for collected packages: auto-gptq
Building wheel for auto-gptq (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [219 lines of output]
conda_cuda_include_dir /usr/lib/python3/dist-packages/nvidia/cuda_runtime/include
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.10
creating build/lib.linux-x86_64-3.10/auto_gptq
copying auto_gptq/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq
creating build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks
copying auto_gptq/eval_tasks/language_modeling_task.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks
copying auto_gptq/eval_tasks/sequence_classification_task.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks
copying auto_gptq/eval_tasks/text_summarization_task.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks
copying auto_gptq/eval_tasks/_base.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks
copying auto_gptq/eval_tasks/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks
creating build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/auto.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/baichuan.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/bloom.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/codegen.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/gpt2.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/gptj.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/gpt_bigcode.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/gpt_neox.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/internlm.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/llama.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/moss.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/opt.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/qwen.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/rw.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/_base.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/_const.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
copying auto_gptq/modeling/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/modeling
creating build/lib.linux-x86_64-3.10/auto_gptq/nn_modules
copying auto_gptq/nn_modules/fused_gptj_attn.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules
copying auto_gptq/nn_modules/fused_llama_attn.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules
copying auto_gptq/nn_modules/fused_llama_mlp.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules
copying auto_gptq/nn_modules/_fused_base.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules
copying auto_gptq/nn_modules/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules
creating build/lib.linux-x86_64-3.10/auto_gptq/quantization
copying auto_gptq/quantization/gptq.py -> build/lib.linux-x86_64-3.10/auto_gptq/quantization
copying auto_gptq/quantization/quantizer.py -> build/lib.linux-x86_64-3.10/auto_gptq/quantization
copying auto_gptq/quantization/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/quantization
creating build/lib.linux-x86_64-3.10/auto_gptq/utils
copying auto_gptq/utils/data_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/utils
copying auto_gptq/utils/exllama_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/utils
copying auto_gptq/utils/import_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/utils
copying auto_gptq/utils/peft_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/utils
copying auto_gptq/utils/perplexity_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/utils
copying auto_gptq/utils/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/utils
creating build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks/_utils
copying auto_gptq/eval_tasks/_utils/classification_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks/_utils
copying auto_gptq/eval_tasks/_utils/generation_utils.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks/_utils
copying auto_gptq/eval_tasks/_utils/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/eval_tasks/_utils
creating build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/qlinear
copying auto_gptq/nn_modules/qlinear/qlinear_cuda.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/qlinear
copying auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/qlinear
copying auto_gptq/nn_modules/qlinear/qlinear_exllama.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/qlinear
copying auto_gptq/nn_modules/qlinear/qlinear_triton.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/qlinear
copying auto_gptq/nn_modules/qlinear/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/qlinear
creating build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/triton_utils
copying auto_gptq/nn_modules/triton_utils/custom_autotune.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/triton_utils
copying auto_gptq/nn_modules/triton_utils/kernels.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/triton_utils
copying auto_gptq/nn_modules/triton_utils/mixin.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/triton_utils
copying auto_gptq/nn_modules/triton_utils/__init__.py -> build/lib.linux-x86_64-3.10/auto_gptq/nn_modules/triton_utils
running build_ext
building 'autogptq_cuda_64' extension
creating /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10
creating /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/autogptq_cuda
Emitting ninja build file /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/autogptq_cuda/autogptq_cuda_64.o.d -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.10/dist-packages/torch/include -I/usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.10/dist-packages/torch/include/TH -I/usr/local/lib/python3.10/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/data1/llm/code/AutoGPTQ/autogptq_cuda -I/usr/include/python3.10 -c -c /data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_64.cpp -o /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/autogptq_cuda/autogptq_cuda_64.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1016"' -DTORCH_EXTENSION_NAME=autogptq_cuda_64 -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17
[2/2] /usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.10/dist-packages/torch/include -I/usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.10/dist-packages/torch/include/TH -I/usr/local/lib/python3.10/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/data1/llm/code/AutoGPTQ/autogptq_cuda -I/usr/include/python3.10 -c -c /data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu -o /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/autogptq_cuda/autogptq_cuda_kernel_64.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1016"' -DTORCH_EXTENSION_NAME=autogptq_cuda_64 -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_90,code=compute_90 -gencode=arch=compute_90,code=sm_90 -std=c++17
FAILED: /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/autogptq_cuda/autogptq_cuda_kernel_64.o
/usr/local/cuda/bin/nvcc -I/usr/local/lib/python3.10/dist-packages/torch/include -I/usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.10/dist-packages/torch/include/TH -I/usr/local/lib/python3.10/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/data1/llm/code/AutoGPTQ/autogptq_cuda -I/usr/include/python3.10 -c -c /data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu -o /data1/llm/code/AutoGPTQ/build/temp.linux-x86_64-3.10/autogptq_cuda/autogptq_cuda_kernel_64.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1016"' -DTORCH_EXTENSION_NAME=autogptq_cuda_64 -D_GLIBCXX_USE_CXX11_ABI=1 -gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_90,code=compute_90 -gencode=arch=compute_90,code=sm_90 -std=c++17
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(62): error: no suitable conversion function from "__half_raw" to "int" exists
half tmpres = __hadd(hsum, val);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1167): error: identifier "__hfma2" is undefined
res2 = __hfma2(__hfma2(deq2[(tmp >> 0) & 0xf][off], scale, zero), blockvec[k + 0], res2);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1167): error: identifier "__hfma2" is undefined
res2 = __hfma2(__hfma2(deq2[(tmp >> 0) & 0xf][off], scale, zero), blockvec[k + 0], res2);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1301): error: identifier "__hfma2" is undefined
res2 = __hfma2(__hfma2(deq2[(tmp1 >> 0) & 0x3f][off], scale, zero), blockvec[k + 0], res2);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1301): error: identifier "__hfma2" is undefined
res2 = __hfma2(__hfma2(deq2[(tmp1 >> 0) & 0x3f][off], scale, zero), blockvec[k + 0], res2);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1419): error: identifier "__hfma2" is undefined
res2 = __hfma2(__hfma2(deq2[(tmp >> 0) & 0xff][off], scale, zero), blockvec[k + 0], res2);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1419): error: identifier "__hfma2" is undefined
res2 = __hfma2(__hfma2(deq2[(tmp >> 0) & 0xff][off], scale, zero), blockvec[k + 0], res2);
^
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(332): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant2MatMulKernel(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, const int *, int, int, int, int, int) [with scalar_t=double]" at line 270
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(477): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant3MatMulKernel(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, const int *, int, int, int, int, int) [with scalar_t=double]" at line 357
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(565): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant4MatMulKernel(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, const int *, int, int, int, int, int) [with scalar_t=double]" at line 502
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(652): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant8MatMulKernel(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, const int *, int, int, int, int, int) [with scalar_t=double]" at line 590
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(750): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant2MatMulKernel_old(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, int, int, int, int, int, int) [with scalar_t=double]" at line 679
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(909): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant3MatMulKernel_old(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, int, int, int, int, int, int) [with scalar_t=double]" at line 774
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(996): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant4MatMulKernel_old(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, int, int, int, int, int, int) [with scalar_t=double]" at line 933
/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu(1079): error: no instance of overloaded function "atomicAdd" matches the argument list
argument types are: (double *, double)
atomicAdd(&mul[b * width + w], res);
^
detected during instantiation of "void VecQuant8MatMulKernel_old(const scalar_t *, const int *, scalar_t *, const scalar_t *, const int *, int, int, int, int, int, int) [with scalar_t=double]" at line 1020
15 errors detected in the compilation of "/data1/llm/code/AutoGPTQ/autogptq_cuda/autogptq_cuda_kernel_64.cu".
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1902, in _run_ninja_build
subprocess.run(
File "/usr/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/data1/llm/code/AutoGPTQ/setup.py", line 147, in <module>
setup(
File "/usr/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 107, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.10/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.10/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/usr/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.10/dist-packages/wheel/bdist_wheel.py", line 343, in run
self.run_command("build")
File "/usr/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/usr/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3.10/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.10/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/usr/lib/python3.10/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.10/dist-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/usr/local/lib/python3.10/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/usr/lib/python3.10/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 848, in build_extensions
build_ext.build_extensions(self)
File "/usr/local/lib/python3.10/dist-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/usr/lib/python3.10/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/usr/lib/python3.10/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/usr/local/lib/python3.10/dist-packages/setuptools/command/build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "/usr/lib/python3.10/distutils/command/build_ext.py", line 529, in build_extension
objects = self.compiler.compile(sources,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 661, in unix_wrap_ninja_compile
_write_ninja_file_and_compile_objects(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1575, in _write_ninja_file_and_compile_objects
_run_ninja_build(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 1918, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for auto-gptq
Running setup.py clean for auto-gptq
Failed to build auto-gptq
ERROR: Could not build wheels for auto-gptq, which is required to install pyproject.toml-based projects | closed | 2023-08-24T14:05:05Z | 2024-03-28T11:50:53Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/283 | [
"bug"
] | jackaihfia2334 | 10 |
hzwer/ECCV2022-RIFE | computer-vision | 361 | Details are slightly blurred when testing on the MiddleBurySet | Hello author, when I use your parameters to run inference on the MiddleBurySet, I find that the interpolated frames of many samples have problems such as blurred details. Have you looked into what causes this? I have run several models and they all show this problem; very few of them preserve the details really well.


| open | 2024-03-27T08:42:14Z | 2024-03-27T15:39:23Z | https://github.com/hzwer/ECCV2022-RIFE/issues/361 | [] | 11923303233 | 1 |
jonaswinkler/paperless-ng | django | 591 | [BUG] Experienced an exception upon (re-)archiving of PDF documents. | Experienced an exception upon (re-)archiving of PDF docs.
**To Reproduce**
Steps to reproduce the behavior:
1. $ cd /home/user/path_to_paperless_ng_container
2. $ docker-compose exec webserver document_archiver --overwrite
3. ~25% of documents processed (~6 with Corrupt JPEG data, and 1 with CreationDate could not be copied to XMP warnings). The exception happened well after the two types of warning listed above.
4. See error
**Expected behavior**
It was expected that archival PDF/A documents would be (re-)created for all documents.
**Webserver logs**
New to docker, not sure where logs are kept. Below is the output to the CLI (it is a bit mangled, but the exception should be clear).
```
$ docker-compose exec webserver document_archiver --overwrite
15%|█████████████████████████▌ | 337/2180 [1:37:56<10:52:15, 21.23s/it]Corrupt JPEG data: 6 extraneous bytes before marker 0xd9
16%|█████████████████████████▋ | 338/2180 [1:38:09<9:35:04, 18.73s/it]Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
16%|██████████████████████████▎ | 346/2180 [1:39:33<7:12:22, 14.15s/it]Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
16%|██████████████████████████▍ | 347/2180 [1:39:53<8:03:46, 15.84s/it]Corrupt JPEG data: 3 extraneous bytes before marker 0xd9
Corrupt JPEG data: 5 extraneous bytes before marker 0xd9
Corrupt JPEG data: 7 extraneous bytes before marker 0xd9
Corrupt JPEG data: 1 extraneous bytes before marker 0xd9
18%|█████████████████████████████▎ | 390/2180 [2:03:59<64:18:53, 129.35s/it]/usr/local/lib/python3.7/site-packages/pikepdf/models/metadata.py:372: UserWarning: The metadata field /CreationDate could not be copied to XMP
warn(msg)
34%|████████████████████████████████████████████████████████▉ | 748/2180 [4:01:17<7:41:56, 19.36s/it]
multiprocessing.pool.RemoteTraceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/usr/src/paperless/src/documents/management/commands/document_archiver.py", line 34, in handle_document
parser = parser_class(logging_group=uuid.uuid4())
TypeError: 'NoneType' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/usr/src/paperless/src/documents/management/commands/document_archiver.py", line 127, in handle
total=len(document_ids)
File "/usr/local/lib/python3.7/site-packages/tqdm/std.py", line 1166, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
raise value
TypeError: 'NoneType' object is not callable
```
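From the traceback, `handle_document` does `parser = parser_class(logging_group=...)` while `parser_class` is `None` — i.e. no parser was matched for that document. A minimal illustration of that failure mode and an obvious guard (names here are hypothetical, not paperless-ng's actual code):

```python
# Hypothetical sketch of the failing pattern seen in the traceback, plus a
# guard. Function, class, and registry names are illustrative only.
import uuid

def get_parser_class(mime_type, registry):
    """Return the first parser class registered for mime_type, else None."""
    for parser_cls, mimes in registry:
        if mime_type in mimes:
            return parser_cls
    return None  # calling this None is the reported TypeError

class PdfParser:
    def __init__(self, logging_group=None):
        self.logging_group = logging_group

registry = [(PdfParser, {"application/pdf"})]

parser_class = get_parser_class("image/tiff", registry)
if parser_class is None:
    # guard instead of crashing the whole archiver run
    print("skipping document: no parser for this mime type")
else:
    parser = parser_class(logging_group=uuid.uuid4())
```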
**Relevant information**
- ~2.2K documents, all but a handful using paperless.
- I went from bare-metal paperless to docker paperless-ng. I manually copied the media/data/DB from paperless into the paperless-ng docker volumes. The install was successful and I have been using/importing for ~PT1W. I wanted to ensure that I had PDF/A for all my documents (including older documents processed with paperless) so chose to run document_archiver --overwrite. There were a small number of documents (<= 5) which paperless-ng thought had no content, which were (maybe) poor quality for OCR.
- Host OS: Debian Testing (bullseye) 5.10.13
- Browser: N/A
- Version: 1.1.4 (using SQLite)
- Installation method: Originally bare-metal paperless -> migrated to docker paperless-ng.
- No changes made to `docker-compose.yml`, `docker-compose.env` or `paperless.conf`.
- simple-scan and pdfarranger (almost) exclusively used for scanning and concatenating.
| closed | 2021-02-21T23:46:52Z | 2021-02-22T22:48:11Z | https://github.com/jonaswinkler/paperless-ng/issues/591 | [
"bug",
"fixed in next release"
] | git-help-eng | 6 |
pydantic/FastUI | pydantic | 172 | Audio component | It would be great to have an audio recording and playback components. With OpenAI Whisper for audio transcription and fast advances in TTS (Text To Speech) models, there are a lot of use cases for recording and playing back audio. See for example https://ttsdemo.themetavoice.xyz/ | closed | 2024-02-08T19:02:18Z | 2024-02-09T07:13:47Z | https://github.com/pydantic/FastUI/issues/172 | [] | chemeris | 1 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 385 | [BUG] Many endpoints are returning 400 | ***On which platform did the error occur?***
Douyin
***On which endpoint did the error occur?***
API-V4.0
***What input value was submitted?***
https://douyin.wtf/api/douyin/web/fetch_one_video?aweme_id=7345492945006595379
[Short video link](https://douyin.wtf/api/douyin/web/fetch_one_video?aweme_id=7345492945006595379)
***Did you try again?***
Yes, I retried after the error occurred and it still persists.
| closed | 2024-05-11T08:28:11Z | 2024-05-12T04:02:26Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/385 | [
"BUG"
] | idcim | 1 |
dynaconf/dynaconf | fastapi | 1,203 | [RFC] Load databricks job parameters | **Is your feature request related to a problem? Please describe.**
I love dynaconf, it's a perfect tool.
However, the company I'm at uses Databricks, and I was running my code (with dynaconf in it) there.
Databricks has a notion of "Jobs" that can have "parameters".
However, these are not passed as environment variables; we have to use the Databricks SDK (`dbutils.widget.get("param_name")`) to read them.
The only way to get "real" environment variables is to define them cluster-wide...
**Describe the solution you'd like**
Be able to load the databricks parameters via dynaconf.
Could be an optional package like `pip install dynaconf[databricks]`
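To make it concrete, a rough sketch of such a loader — assuming dynaconf's custom-loader hook (a module exposing `load(obj, env=None, silent=True, key=None, filename=None)`, registered via `LOADERS_FOR_DYNACONF`); the `DATABRICKS_PARAMS` setting and the injectable `getter` argument are my own inventions for illustration:

```python
# Hypothetical custom loader module (e.g. databricks_loader.py) for dynaconf.
# Not part of dynaconf: the DATABRICKS_PARAMS setting lists which widget
# names to pull, and `getter` exists only so the loader can be exercised
# outside Databricks.

def _widget_getter():
    """Return dbutils.widgets.get when running on Databricks, else None."""
    try:
        from pyspark.dbutils import DBUtils  # present only on Databricks
        from pyspark.sql import SparkSession
        dbutils = DBUtils(SparkSession.builder.getOrCreate())
        return dbutils.widgets.get
    except Exception:
        return None

def load(obj, env=None, silent=True, key=None, filename=None, getter=None):
    getter = getter or _widget_getter()
    if getter is None:
        return  # not on Databricks: no-op, local runs unaffected
    names = [key] if key else obj.get("DATABRICKS_PARAMS", [])
    for name in names:
        try:
            obj.set(name, getter(name))
        except Exception:
            if not silent:
                raise
```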
**Describe alternatives you've considered**
Use cluster environment variables but it's a scope too big, I have to restart the cluster anytime I want to change a variable. | closed | 2024-12-09T16:38:36Z | 2024-12-09T20:15:36Z | https://github.com/dynaconf/dynaconf/issues/1203 | [
"Not a Bug",
"RFC"
] | ierezell | 2 |
pytest-dev/pytest-django | pytest | 939 | asserts module cannot import name 'assertDictEqual' and other TestCase assertion methods | Trying to get assertDictEqual from pytest_django.asserts in multiple ways:
`from pytest_django.asserts import assertDictEqual`
or
`from pytest_django import asserts`
both of them throw errors
the first one's error:
> from pytest_django.asserts import assertDictEqual
> E ImportError: cannot import name 'assertDictEqual' from 'pytest_django.asserts' (/usr/local/lib/python3.7/site-packages/pytest_django/asserts.py)
the second one's error:
> AttributeError: module 'pytest_django.asserts' has no attribute 'assertDictEqual'
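A stopgap that works in the meantime — `assertDictEqual` comes from the stdlib's `unittest.TestCase` rather than from Django's additions, so it can be borrowed directly, with no pytest-django involved:

```python
# Workaround sketch: borrow the assertion from a bare unittest.TestCase
# instance (instantiating TestCase without a test method is allowed).
import unittest

_assert = unittest.TestCase()
assertDictEqual = _assert.assertDictEqual

assertDictEqual({"a": 1, "b": 2}, {"b": 2, "a": 1})  # passes silently
```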
| closed | 2021-07-04T11:08:28Z | 2021-07-06T23:39:05Z | https://github.com/pytest-dev/pytest-django/issues/939 | [] | MohamedAbdultawab | 1 |
widgetti/solara | fastapi | 349 | [Maintenance] Run pre-commit in CI and update periodically | Currently there is a nice pre-commit configuration, in [`.pre-commit-config.yaml`](https://github.com/widgetti/solara/blob/master/.pre-commit-config.yaml), but there are two problems:
- It isn't run and enforced in CI, so currently it doesn't do anything.
- The file isn't periodically updated, so most pre-commit actions are over a year old.
Currently, running pre-commit.ci results in **123 files changed** - all with very minor things like whitespace/line endings and import order. This is why pre-commit only works if it's run on every commit (in CI).
There are different approaches on this, but by far the most convenient (at least that I know of) is using [pre-commit.ci](https://pre-commit.ci/). It takes care of:
- Running pre-commit in CI, and warning if any tests fail
- Updating the pre-commit configuration periodically (daily/weekly/monthly)
- (optionally) directly applying fixes to open PRs (you can enable or disable this)
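For reference, pre-commit.ci reads its own settings from the same `.pre-commit-config.yaml`, via a top-level `ci:` section — keys as documented by pre-commit.ci, values below are only suggestions:

```yaml
# optional pre-commit.ci settings inside .pre-commit-config.yaml
ci:
  autofix_prs: true             # push the auto-fixes to open PRs
  autoupdate_schedule: monthly  # how often hook revisions get bumped
```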
If this sounds useful, the first step is enabling [pre-commit.ci](https://pre-commit.ci/) for this repo. | open | 2023-10-26T14:16:53Z | 2023-10-30T15:46:24Z | https://github.com/widgetti/solara/issues/349 | [] | EwoutH | 4 |
glumpy/glumpy | numpy | 213 | 'glumpy.app.window.key' has no attribute 'MOD_CONTROL' | While using the pyqt5 backend, the program exits when pressing a Command or Control key on an Apple keyboard.
'glumpy.app.window.key' has no attribute 'MOD_CONTROL'
"glumpy/app/window/backends/backend_qt5.py", line 338, in _modifiers_translate
| open | 2019-04-04T15:20:42Z | 2020-09-10T07:14:01Z | https://github.com/glumpy/glumpy/issues/213 | [] | prithivi-iviz | 6 |
fastapi-users/fastapi-users | fastapi | 253 | JWT token refresh | Hi, thanks for the great package and documentation first of all!
However, I was wondering if I missed something or if the JWT logic is missing a `/refresh_token` router?
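To make the question concrete, this is the flow I have in mind — a `/refresh_token` handler would essentially call `refresh()` below. The HS256 plumbing is hand-rolled with the stdlib only to keep the sketch self-contained; it is *not* fastapi-users' API, and a real app would use its JWT backend instead:

```python
# Library-agnostic sketch of token refresh: verify a still-valid token,
# then re-issue the same claims with a fresh expiry. SECRET is a stand-in.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"

def _b64(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _unb64(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def encode(claims: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def decode(token: str) -> dict:
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(_unb64(payload))
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims

def refresh(token: str, lifetime: int = 3600) -> str:
    claims = decode(token)                       # only still-valid tokens pass
    claims["exp"] = int(time.time()) + lifetime  # new expiry, same identity
    return encode(claims)
```

i.e. as long as the presented token still verifies and hasn't expired, mint the same claims with a new `exp` — no password involved.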
How can I make sure the user doesn't need to supply the password again when the token expires? I was planning on automatically refreshing it as long as it is still valid. | closed | 2020-07-10T12:08:21Z | 2023-06-08T21:54:27Z | https://github.com/fastapi-users/fastapi-users/issues/253 | [
"documentation",
"question"
] | moreinhardt | 8 |
tensorlayer/TensorLayer | tensorflow | 1,024 | RuntimeError using SubpixelConv1d: SubpixelConv1d._PS is a private method | ### New Issue Checklist
- [x] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [x] I searched for [existing GitHub issues](https://github.com/tensorlayer/tensorlayer/issues)
### Issue Description
The SubpixelConv1d class decorates the method _PS as @private_method, but when its own forward method calls the _PS function, the decorator for some reason gets confused and can't detect it's a valid use of the method and raises the exception.
### Reproducible Code
- Debian GNU/Linux (testing/bullseye)
- Please provide a reproducible code of your issue. Without any reproducible code, you will probably not receive any help.
```python
#!/usr/bin/python3
import tensorflow as tf
import tensorlayer as tl
import numpy as np
inputs = tl.layers.Input((1, 2, 2))
prev = tl.layers.SubpixelConv1d(2, in_channels=2)(inputs)
model = tl.models.Model(inputs, prev)
train_batch = np.array([1, 2, 3, 4])
train_batch = train_batch.reshape((1,2,2))
valid_batch = train_batch
tl.utils.fit(model,
train_op=tf.optimizers.Adam(learning_rate=0.0001),
cost=tl.cost.cross_entropy,
X_train=train_batch, y_train=train_batch,
acc=tf.compat.v1.metrics.accuracy, batch_size=len(train_batch), n_epoch=20, X_val=valid_batch, y_val=valid_batch, eval_train=True,
)
```
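For what it's worth, here is a minimal self-contained sketch — emphatically *not* TensorLayer's actual implementation — of how a stack-inspecting `private_method` decorator can reject a legitimate internal call as soon as any indirection sits between the public entry point and the private call:

```python
# Naive frame-inspecting "private method" decorator: it only accepts calls
# whose *immediate* caller frame holds a `self` of the same class, so any
# indirection (a wrapper function, a dispatch helper) makes a legitimate
# internal call look external and raises.
import inspect

def private_method(func):
    def wrapper(self, *args, **kwargs):
        caller = inspect.stack()[1].frame
        if not isinstance(caller.f_locals.get("self"), type(self)):
            raise RuntimeError(f"{func.__name__} is a private method")
        return func(self, *args, **kwargs)
    return wrapper

class Layer:
    @private_method
    def _ps(self, x):
        return x * 2

    def forward(self, x):
        return self._ps(x)           # direct call: caller frame has `self` -> allowed

def apply_layer(layer, x):
    return layer._ps(x)              # same logical owner, but no `self` in frame

class Wrapped(Layer):
    def forward(self, x):
        return apply_layer(self, x)  # indirection -> RuntimeError
```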
| closed | 2019-07-19T12:45:47Z | 2019-07-23T15:10:49Z | https://github.com/tensorlayer/TensorLayer/issues/1024 | [] | Rafagd | 4 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,500 | [Bug]: Some malicious extension is getting installed automatically after making 10K+ calls to Stable diffusion model through the API. |
[sd.txt](https://github.com/user-attachments/files/17046338/sd.txt)
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When we made 10K calls to generate different images we observed that a new extension with URL "http://77.90.22.129:3000/WCZMKQKVIQ/na8672" is getting installed.

### Steps to reproduce the problem
1. Install Stable Diffusion.
2. Install following Extensions:
A. https://github.com/Mikubill/sd-webui-controlnet
B. https://github.com/AUTOMATIC1111/stable-diffusion-webui-nsfw-censor
C. https://github.com/w-e-w/sd-webui-nudenet-nsfw-censor
3. Try making 10K calls to Stable diffusion using the endpoint: sdapi/v1/txt2img
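For reference, step 3 can be scripted like this (a sketch: the endpoint path is the one from the report, and `prompt`/`steps` are standard txt2img payload fields; stdlib only):

```python
# Sketch of the 10K-call reproduction loop against the txt2img endpoint.
import json
import urllib.request

API_PATH = "/sdapi/v1/txt2img"

def build_request(base_url: str, prompt: str, steps: int = 20):
    """Build one POST request for the txt2img endpoint."""
    body = json.dumps({"prompt": prompt, "steps": steps}).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + API_PATH,
        data=body,
        headers={"Content-Type": "application/json"},
    )

def hammer(base_url: str, prompt: str, n: int = 10_000):
    """Issue n generation calls back to back."""
    for _ in range(n):
        with urllib.request.urlopen(build_request(base_url, prompt)) as resp:
            json.load(resp)  # response JSON carries the generated images
```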
### What should have happened?
The malicious extension shouldn't have been installed automatically.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-09-18-15-03.json](https://github.com/user-attachments/files/17046063/sysinfo-2024-09-18-15-03.json)
### Console logs
```Shell
Attached in files section.
```
### Additional information
We have deployed it on K8s on a pod using a Dockerfile. | open | 2024-09-18T15:18:10Z | 2024-09-20T03:06:35Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16500 | [
"bug-report"
] | knowitall12 | 1 |
ray-project/ray | tensorflow | 51,474 | CI test windows://python/ray/tests:test_state_api is consistently_failing | CI test **windows://python/ray/tests:test_state_api** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aab6-90e6-41dc-a317-44713639c7da
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4e-4615-9f26-d591e0616ce7
DataCaseName-windows://python/ray/tests:test_state_api-END
Managed by OSS Test Policy | closed | 2025-03-18T22:03:55Z | 2025-03-19T21:54:51Z | https://github.com/ray-project/ray/issues/51474 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 4 |
mwaskom/seaborn | pandas | 3,027 | JointGrid can't rotation the xticks | 
## How can I rotate the xticks as in the picture above?
``` python
plt.figure(figsize = (10,8))
g = sns.JointGrid(x='u',y='t',data = tdata)
g.plot(sns.scatterplot, sns.histplot)
plt.show()
```
Also, `plt.figure` has no effect on `JointGrid`. | closed | 2022-09-15T14:54:43Z | 2022-09-15T15:26:29Z | https://github.com/mwaskom/seaborn/issues/3027 | [] | duckbill | 1 |
sunscrapers/djoser | rest-api | 745 | Figure out a way to make sure translations are up to date | closed | 2023-07-02T09:49:09Z | 2023-07-02T10:48:36Z | https://github.com/sunscrapers/djoser/issues/745 | [] | tomwojcik | 0 | |
AntonOsika/gpt-engineer | python | 545 | Application files are created with backticks |
## Issue Template
File names created with backticks
## Expected Behavior
Application files should be created with normal names e.g. App.js
## Current Behavior
Application files are generated with backticks e.g. `App.js` on Mac.
## Failure Information (for bugs)
Application files are generated with backticks e.g. `App.js` on Mac.
### Steps to Reproduce
Ask gpt-engineer to create a React app. The resulting generated filenames are enclosed in backticks. Thus the generated application doesn't run successfully out of the box.
Please provide detailed steps for reproducing the issue.
1. Create prompt file
2. Run gpt-engineer on directory containing prompt file.
3. you get it...
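As a stopgap until this is fixed, the backticks can be stripped from the generated filenames with a small bash loop (a sketch; running it inside the generated workspace directory is an assumption):

```bash
# run inside the generated workspace directory
for f in *\`*; do
  mv -- "$f" "${f//\`/}"   # drop every backtick from the name
done
```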
### Failure Logs
Please include any relevant log snippets or files here.
| closed | 2023-07-18T01:05:57Z | 2023-08-16T19:36:45Z | https://github.com/AntonOsika/gpt-engineer/issues/545 | [] | femibyte | 4 |
ionelmc/pytest-benchmark | pytest | 84 | Fixtures are executed even when all tests that use it are skipped | I have [an example](https://0xacab.org/drebs/soledad/-/jobs/15880) of a fixture being executed even when all the tests that use it are being skipped.
In that example, tests that use `pytest-benchmark` live in the `testing/tests/benchmarks` folder, and all others should be skipped. I have introduced new (non-benchmark) tests that use fixtures in the `testing/tests/responsiveness` folder, and those fixtures are being executed even though the corresponding tests are correctly skipped.
I can provide details on how to reproduce that if you need it (but you may already know what this is about as we talked about it in IRC). | closed | 2017-08-01T21:24:03Z | 2017-08-05T19:56:26Z | https://github.com/ionelmc/pytest-benchmark/issues/84 | [] | drebs | 0 |
horovod/horovod | deep-learning | 4,039 | v0.28.1 Version Mismatch with TF 2.12.0. Works with v0.28.0 | **Environment:**
1. Framework: TensorFlow, Keras
2. Framework version: 2.12.0
3. Horovod version: 0.28.1
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: 3.10
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
**Bug report:**
1. pip install keras==2.12.0
2. pip install tensorflow==2.12.0
3. HOROVOD_WITH_TENSORFLOW=1 pip install horovod
4. (in python) : import horovod.keras as hvd
I get this error:
> horovod.common.exceptions.HorovodVersionMismatchError: Framework tensorflow installed with version 2.14.0 but found version 2.12.0.
> This can result in unexpected behavior including runtime errors.
> Reinstall Horovod using `pip install --no-cache-dir` to build with the new version.
I tried reinstalling Horovod with --no-cache-dir, but it still gave the same error.
It works however when I install with Horovod v0.28.0
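Until the mismatch detection is fixed, a pinned reinstall (using the combination reported working above; anything beyond that is an assumption) avoids the error:

```shell
pip uninstall -y horovod
HOROVOD_WITH_TENSORFLOW=1 pip install --no-cache-dir horovod==0.28.0
```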
| open | 2024-04-16T14:54:09Z | 2024-04-16T16:03:55Z | https://github.com/horovod/horovod/issues/4039 | [
"bug"
] | liamaltarac | 0 |
unit8co/darts | data-science | 2,491 | [BUG] TimeSeries.from_group_dataframe incompatible with integer timestamps | **Describe the bug**
`TimeSeries.from_group_dataframe` should support both timestamp and integer time columns, but it internally converts to `DatetimeIndex` regardless of what is passed.
https://github.com/unit8co/darts/blob/a646adf10fa73d8facb16a611f8a3682dc8a1191/darts/timeseries.py#L869-L871
**To Reproduce**
```python
import pandas as pd
import darts
df = pd.DataFrame({
"group": ["a", "a", "a", "b", "b", "b"],
"t": [0, 1, 2, 0, 1, 2],
"x": [1.0, 2.0, 3.0, 2.0, 3.0, 4.0],
})
ts = darts.TimeSeries.from_group_dataframe(df, group_cols="group", time_col="t")
ts[0].time_index
# DatetimeIndex([ '1970-01-01 00:00:00',
# '1970-01-01 00:00:00.000000001',
# '1970-01-01 00:00:00.000000002'],
# dtype='datetime64[ns]', name='t', freq='ns')
```
**Expected behavior**
`time_index` should be a `RangeIndex`.
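A possible workaround until this is fixed (a sketch on my side, not an official recommendation): build one series per group with `TimeSeries.from_dataframe`, which does keep an integer time column as a `RangeIndex`:

```python
import pandas as pd
import darts

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "t": [0, 1, 2, 0, 1, 2],
    "x": [1.0, 2.0, 3.0, 2.0, 3.0, 4.0],
})

series = [
    darts.TimeSeries.from_dataframe(g.drop(columns="group"), time_col="t", value_cols="x")
    for _, g in df.groupby("group")
]
series[0].time_index  # expected: RangeIndex(start=0, stop=3, step=1)
```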
**System (please complete the following information):**
- Python version: 3.10
- darts version 0.30.0
| closed | 2024-08-07T14:43:42Z | 2024-08-30T12:04:23Z | https://github.com/unit8co/darts/issues/2491 | [
"bug",
"good first issue"
] | slishak-PX | 0 |
dask/dask | scikit-learn | 10,995 | Feedback - DataFrame query planning | The latest release `2024.3.0` enabled query planning for `DataFrame`s by default. This issue can be used to report feedback and ask related questions.
If you encountered a bug or unexpected behavior, please check if you got the most recent version of `dask-expr` installed. This is a separate package with a decoupled release process allowing us to roll out hotfixes quickly.
If you are still encountering issues after an update, please open an issue with a reproducer and we will respond as soon as possible.
See below for a list of known issues and/or check the [issue tracker](https://github.com/dask/dask/issues?q=is%3Aopen+is%3Aissue+label%3Adask-expr)
## Brief introduction
The legacy DataFrame implementation did not offer a way to optimize your query and `dask` executed whatever you requested literally. In many situations this was suboptimal and could cause significant performance overhead.
Let's take a simple example by using the NYC Uber/Lyft dataset
```python
import dask.dataframe as dd

df = dd.read_parquet(
"s3://coiled-datasets/uber-lyft-tlc/",
filesystem='pyarrow',
)
df = df[df.hvfhs_license_num == "HV0003"]
result = df.sum(numeric_only=True)["tips"]
```
This query loads the data, applies a filter on the vendor, calculates a sum and picks a single column for the result. The legacy `DataFrame` would load **all** data, only then apply the filter, compute the sum over all columns and then throw all the data away since we're only interested in a single column.
Advanced users may be tempted to rewrite this to an optimized version that provides the required columns and filters to the ``read_parquet`` call already but this is now done automatically for you. After optimization, the above query is identical to
```python
df_man_opt = dd.read_parquet(
"s3://coiled-datasets/uber-lyft-tlc/",
columns=["tips"],
filters=[("hvfhs_license_num", "==", "HV0003")],
)
result_man_opt = df_man_opt.sum(numeric_only=True)
```
which loads only the required columns from storage and applies filters even on rowgroup level, if applicable.
```python
result.simplify().pprint()
Projection: columns='tips'
Sum: numeric_only=True
ReadParquetPyarrowFS: path='s3://coiled-datasets/uber-lyft-tlc/' columns=['tips'] filters=[[('hvfhs_license_num', '==', 'HV0003')]] filesystem='pyarrow'
```
Column projection and predicate pushdown are only some of the most obvious optimizations that are performed automatically for you.
See for yourself and let us know what you think.
## Known issues
### `dask-expr` is not installed when updating dask
When using a package manager like `pip` it can happen that an upgrade like `pip update dask` does not pull in the extra dependencies of `dask.dataframe`. To ensure all dependencies are installed, please install `dask[dataframe]` or use `conda install dask`
### Pandas copy-on-write enabled as an import side effect
Prior to version dask-expr==1.0.2, importing dask.dataframe set the pandas option copy-on-write to True
See also https://github.com/dask/dask/issues/10996 | open | 2024-03-12T10:07:17Z | 2024-07-12T11:15:15Z | https://github.com/dask/dask/issues/10995 | [
"dataframe",
"discussion",
"dask-expr"
] | fjetter | 17 |
InstaPy/InstaPy | automation | 6,470 | like_util.py line 618 problem | I am using instapy-0.6.14
code:
from instapy import InstaPy
import random
from instapy import InstaPy
from instapy import smart_run
#login credentials
insta_username = '****'
insta_password = '****'
#login session
session = InstaPy(username=insta_username, password=insta_password)
session.login()
session.like_by_tags(["#carz"], amount=5)
Getting the error below:

```
Traceback (most recent call last):
  File "E:\Study\Python_Automation\Insta_Commentor\quickstart.py", line 56, in <module>
    session.like_by_tags(my_hashtags, amount=90, media=None)
  File "C:\Users\sonu3\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\instapy.py", line 1977, in like_by_tags
    inappropriate, user_name, is_video, reason, scope = check_link(
  File "C:\Users\sonu3\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\like_util.py", line 618, in check_link
    media = post_page[0]["shortcode_media"]
KeyError: 0
```
I tried everything but I didn't solve this problem.
| open | 2022-01-24T07:29:07Z | 2022-02-10T22:41:10Z | https://github.com/InstaPy/InstaPy/issues/6470 | [] | sergenkuzey | 19 |
huggingface/datasets | pytorch | 7,389 | Getting statistics about filtered examples | @lhoestq wondering if the team has thought about this and if there are any recommendations?
Currently, when processing datasets, some examples are bound to get filtered out, whether due to bad format, excessive length, or any other custom filters that might be applied.
What would be a good way to go about hooking this up? Because the map/filter operations happen before the DataLoader batches are created, at training time if we're just grabbing batches from the DataLoader then we won't know how many things have been filtered already. But there's not really a good way to include a 'num_filtered' key into the dataset itself either because dataset map/filter process examples independently and don't have a way to track a running sum.
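One hedged sketch of a workable pattern (the `is_filtered` key and the logging hook are my assumptions, not `datasets` API): tag examples in `map` instead of dropping them, then tally the flag in a custom collate function at batch time:

```python
from collections import Counter

stats = Counter()  # running totals; could be logged to W&B each step

def counting_collate(batch):
    """Collate fn that tallies a hypothetical `is_filtered` flag per example."""
    stats["filtered"] += sum(ex["is_filtered"] for ex in batch)
    kept = [ex for ex in batch if not ex["is_filtered"]]
    stats["kept"] += len(kept)
    return kept  # real code would stack tensors here

batch = [{"is_filtered": True}, {"is_filtered": False}, {"is_filtered": False}]
counting_collate(batch)
print(stats)  # Counter({'kept': 2, 'filtered': 1})
```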
The only approach I can kind of think of is having a 'is_filtered' key in the dataset, and then creating a custom batcher/collator that reads that and tracks the metric? | closed | 2025-02-10T20:48:29Z | 2025-02-11T20:44:15Z | https://github.com/huggingface/datasets/issues/7389 | [] | jonathanasdf | 2 |
allure-framework/allure-python | pytest | 182 | how to i can remove parameter from allure report | 
@sseliverstov | closed | 2017-11-28T09:24:35Z | 2018-01-11T19:33:32Z | https://github.com/allure-framework/allure-python/issues/182 | [] | abelofcn | 1 |
huggingface/transformers | tensorflow | 36,006 | current transformers library does not work with microsoft/Phi-3.5-mini-instruct | ### System Info
- `transformers` version: 4.48.2
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
The following Python program does not work:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'microsoft/Phi-3.5-mini-instruct'
model = AutoModelForCausalLM.from_pretrained(model_name,
revision='main',
trust_remote_code=False,
attn_implementation='flash_attention_2',
torch_dtype=torch.bfloat16,
use_cache=False,
device_map="cuda",
low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()
messages = [{ "content": """You are an educated researcher and always answer in correct scientific terms.
You are very deep into AI and its methodologies. You are very creative.""",
"role": "system" },
{ "content": "Write an abstract with the title 'New Training Methods for LLMs'",
"role": "user" },
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors = "pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens = 512, use_cache = True,
do_sample=True, temperature=0.7, top_k=25, top_p=0.8)
```
I get the following error message:
```
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Traceback (most recent call last):
File "phi3.py", line 21, in <module>
outputs = model.generate(inputs, max_new_tokens = 512, use_cache = True,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/transformers/generation/utils.py", line 2255, in generate
result = self._sample(
^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/transformers/generation/utils.py", line 3254, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/transformers/models/phi3/modeling_phi3.py", line 899, in forward
outputs = self.model(
^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/transformers/models/phi3/modeling_phi3.py", line 605, in forward
position_embeddings = self.rotary_emb(hidden_states, position_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "$PYTHON_PATH/lib/python3.12/site-packages/transformers/models/phi3/modeling_phi3.py", line 368, in forward
freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_CUDA_bmm)
```
All other models that I have tried work with this code
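Two things may help as a workaround (hedged suggestions on my side, not verified against this model): pass the attention mask by requesting `return_dict=True` from `apply_chat_template`, and move the whole encoding to the model's device so no tensor is left on CPU:

```python
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,          # also returns attention_mask
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,                  # input_ids + attention_mask, both on CUDA
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=25,
    top_p=0.8,
)
```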
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the Python program above.
### Expected behavior
The program should run without an error. It works for other models. | closed | 2025-02-02T17:27:09Z | 2025-02-05T16:26:01Z | https://github.com/huggingface/transformers/issues/36006 | [
"bug"
] | datanizing | 3 |
A3M4/YouTube-Report | matplotlib | 21 | ModuleNotFoundError: No module named 'tkinter' | seaborn uses tkinter, which I dont have (and probably won't need for this)
`Traceback (most recent call last):
File "report.py", line 5, in <module>
import seaborn as sns
File "C:\Program Files\Python37\lib\site-packages\seaborn\__init__.py", line 6, in <module>
from .rcmod import *
File "C:\Program Files\Python37\lib\site-packages\seaborn\rcmod.py", line 5, in <module>
from . import palettes, _orig_rc_params
File "C:\Program Files\Python37\lib\site-packages\seaborn\palettes.py", line 12, in <module>
from .utils import desaturate, set_hls_values, get_color_cycle
File "C:\Program Files\Python37\lib\site-packages\seaborn\utils.py", line 11, in <module>
import matplotlib.pyplot as plt
File "C:\Program Files\Python37\lib\site-packages\matplotlib\pyplot.py", line 2349, in <module>
switch_backend(rcParams["backend"])
File "C:\Program Files\Python37\lib\site-packages\matplotlib\pyplot.py", line 221, in switch_backend
backend_mod = importlib.import_module(backend_name)
File "C:\Program Files\Python37\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Program Files\Python37\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 1, in <module>
from . import _backend_tk
File "C:\Program Files\Python37\lib\site-packages\matplotlib\backends\_backend_tk.py", line 6, in <module>
import tkinter as tk
ModuleNotFoundError: No module named 'tkinter'` | open | 2019-12-18T08:34:08Z | 2019-12-27T03:25:18Z | https://github.com/A3M4/YouTube-Report/issues/21 | [] | R3tr0BoiDX | 6 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,320 | A x added | open | 2024-12-09T02:45:07Z | 2024-12-09T02:45:07Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1320 | [] | Cyber4tff | 0 | |
microsoft/nni | deep-learning | 5,717 | after speed up the number of output dimention change | **Describe the issue**:
I used `nni.compression.speedup`, and after the speed-up the number of output features changed from 10 to 5:
GoogLeNet(
(conv1): BasicConv2d(
(conv): Conv2d(3, 32, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(conv2): BasicConv2d(
(conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(conv3): BasicConv2d(
(conv): Conv2d(32, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(maxpool2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(inception3a): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(96, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(96, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(48, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(96, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(8, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(96, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(inception3b): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(16, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(maxpool3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
(inception4a): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(240, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(240, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(48, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(104, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(240, 8, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(8, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(8, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(240, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(inception4b): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(256, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(256, 56, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(56, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(56, 112, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(12, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(inception4c): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(12, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(inception4d): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(256, 56, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(56, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(256, 72, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(72, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(72, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(256, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(inception4e): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(264, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(264, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(80, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(264, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(264, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(maxpool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)
(inception5a): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(416, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(80, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(416, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(416, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(inception5b): Inception(
(branch1): BasicConv2d(
(conv): Conv2d(416, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(branch2): Sequential(
(0): BasicConv2d(
(conv): Conv2d(416, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(96, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch3): Sequential(
(0): BasicConv2d(
(conv): Conv2d(416, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicConv2d(
(conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
(branch4): Sequential(
(0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
(1): BasicConv2d(
(conv): Conv2d(416, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(aux1): None
(aux2): None
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(dropout): Dropout(p=0.2, inplace=False)
(fc): Linear(in_features=512, out_features=5, bias=True)
)
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2023-11-30T18:01:26Z | 2023-11-30T18:05:54Z | https://github.com/microsoft/nni/issues/5717 | [] | m3bbass | 0 |
tensorpack/tensorpack | tensorflow | 1,388 | How to deal with ERR [EnqueueThread] Exception in thread EnqueueThread QueueInput/input_queue? | When run the codes tensorpack\examples\GAN\DCGAN.py, an error occured, 'ERR **[EnqueueThread]** Exception in thread EnqueueThread QueueInput/input_queue:'
The Command:
`python ./DCGAN-CelebA.py --data E:/projects/下载程序/tensorpack/examples/GAN/mytry1/img_align_celeba/ --crop-size 140`
And the error is:
```
Traceback (most recent call last):
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\input_source\input_source.py", line 161, in run
    dp = next(self._itr)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\dataflow\common.py", line 386, in __iter__
    for dp in self.ds:
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\dataflow\common.py", line 118, in __iter__
    for data in self.ds:
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\dataflow\common.py", line 312, in __iter__
    for dp in self.ds:
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\dataflow\image.py", line 75, in __iter__
    assert im is not None,
AssertionError: E:/projects/下载程序/tensorpack/examples/GAN/mytry1/img_align_celeba\060315.jpg
```
```
[0121 10:03:55 @input_source.py:179] [EnqueueThread] Thread EnqueueThread QueueInput/input_queue Exited.
[0121 10:03:56 @base.py:275] Start Epoch 1 ...
  0%|          |0/300[00:00<?,?it/s]2020-01-21 10:03:56.580060: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2020-01-21 10:03:56.823400: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-01-21 10:03:57.849404: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
[0121 10:03:58 @base.py:291] Training was stopped by exception FIFOQueue '_0_QueueInput/input_queue' is closed and has insufficient elements (requested 1, current size 0)
  [[node QueueInput/input_deque (defined at C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]

Original stack trace for 'QueueInput/input_deque':
  File "./DCGAN-CelebA.py", line 166, in <module>
    model=M).train_with_defaults(
  File "E:\projects\下载程序\tensorpack\examples\GAN\GAN.py", line 95, in __init__
    self._build_gan_trainer(input, model)
  File "E:\projects\下载程序\tensorpack\examples\GAN\GAN.py", line 110, in _build_gan_trainer
    self.tower_func(*input.get_input_tensors())
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\input_source\input_source_base.py", line 83, in get_input_tensors
    return self._get_input_tensors()
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorpack\input_source\input_source.py", line 272, in _get_input_tensors
    ret = self.queue.dequeue(name='input_deque')
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\ops\data_flow_ops.py", line 446, in dequeue
    self._queue_ref, self._dtypes, name=name)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\ops\gen_data_flow_ops.py", line 4140, in queue_dequeue_v2
    timeout_ms=timeout_ms, name=name)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
    op_def=op_def)
  File "C:\Users\a\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()
  0%|          |0/300[00:01<?,?it/s]
```
How can I solve this problem? Thanks very much!
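Not part of the issue, but for anyone hitting the same assertion: it fires because `cv2.imread` returned `None` for the listed file (here `060315.jpg`), i.e. the image is missing, unreadable, or truncated. A minimal stdlib sketch for pre-scanning the dataset follows; the helper names are hypothetical, and the JPEG-marker check is only a heuristic — the authoritative test is `cv2.imread(path) is not None`, since that is exactly what tensorpack applies.

```python
import os

def jpeg_looks_broken(data: bytes) -> bool:
    # A well-formed JPEG starts with the SOI marker FF D8 and ends with
    # the EOI marker FF D9; truncated downloads usually fail this check.
    return not (data.startswith(b"\xff\xd8")
                and data.rstrip(b"\x00").endswith(b"\xff\xd9"))

def find_broken_jpegs(folder: str):
    """Yield paths under `folder` that are empty, unreadable, or truncated."""
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        path = os.path.join(folder, name)
        try:
            with open(path, "rb") as f:
                data = f.read()
        except OSError:
            yield path          # unreadable: permissions, broken symlink, ...
            continue
        if not data or jpeg_looks_broken(data):
            yield path
```

Deleting (or re-downloading) the files this reports lets `ImageFromFile` iterate past the assertion.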
| closed | 2020-01-21T02:18:21Z | 2020-01-21T04:56:39Z | https://github.com/tensorpack/tensorpack/issues/1388 | [
"unrelated"
] | ming-nju | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 270 | RFC: Switch to mkdocs? | Shall we switch to a more mature and maintained documentation generator framework? E.g. http://www.mkdocs.org/ as used by Keras.
The current builder is working OK, but incredibly hackish in retrospect. | closed | 2016-11-17T10:21:48Z | 2020-02-02T11:53:20Z | https://github.com/scikit-optimize/scikit-optimize/issues/270 | [] | glouppe | 6 |
horovod/horovod | machine-learning | 3,801 | Unable to use GPU on 2nd machine | Hi I have setup horovod on a k8s cluster with 2 GPU nodes using spark-operator. I have executed the mnist example using tensorflow, and it was executed successfully on both nodes (utlilizing GPUs on both nodes). However when I am using KerasEstimator on spark, the training executes successfully but I think that only one gpu is getting used.
I am following this example:
https://docs.databricks.com/_static/notebooks/deep-learning/horovod-spark-estimator-keras.html
here are the logs:
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Bootstrap : Using eth0:10.84.52.31<0>
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO NET/IB : No device found.
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO NET/Socket : Using [0]eth0:10.84.52.31<0>
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Using network Socket
[1,0]<stdout>:NCCL version 2.11.4+cuda11.4
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Bootstrap : Using eth0:10.84.179.52<0>
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO NET/IB : No device found.
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO NET/Socket : Using [0]eth0:10.84.179.52<0>
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Using network Socket
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Setting affinity for GPU 0 to 55555555,55555555
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 00/02 : 0 1
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 01/02 : 0 1
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Setting affinity for GPU 0 to 55555555,55555555
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 00 : 0[2000] -> 1[4000] [receive] via NET/Socket/0
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 01 : 0[2000] -> 1[4000] [receive] via NET/Socket/0
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 00 : 1[4000] -> 0[2000] [send] via NET/Socket/0
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Channel 01 : 1[4000] -> 0[2000] [send] via NET/Socket/0
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 00 : 1[4000] -> 0[2000] [receive] via NET/Socket/0
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 01 : 1[4000] -> 0[2000] [receive] via NET/Socket/0
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 00 : 0[2000] -> 1[4000] [send] via NET/Socket/0
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Channel 01 : 0[2000] -> 1[4000] [send] via NET/Socket/0
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Connected all rings
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Connected all trees
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO comm 0x7fd2247488e0 rank 0 nranks 2 cudaDev 0 busId 2000 - Init COMPLETE
[1,0]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-1:246:259 [0] NCCL INFO Launch mode Parallel
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Connected all rings
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO Connected all trees
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
[1,1]<stdout>:fraud-engine-application-5422-6f5af3856318205f-exec-2:1240:1253 [0] NCCL INFO comm 0x7fad647478a0 rank 1 nranks 2 cudaDev 0 busId 4000 - Init COMPLETE
[1,0]<stdout>:
[1,1]<stderr>:WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0086s vs `on_train_batch_end` time: 0.0658s). Check your callbacks.
[1,0]<stderr>:WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0053s vs `on_train_batch_end` time: 0.0687s). Check your callbacks.
1/4851 [..............................] - ETA: 8:35:39 - loss: 1.0356 - accuracy: 0.4844[1,0]<stdout>:
9/4851 [..............................] - ETA: 30s - loss: 0.9629 - accuracy: 0.4219 [1,0]<stdout>:
17/4851 [..............................] - ETA: 31s - loss: 0.9131 - accuracy: 0.4265[1,0]<stdout>:
24/4851 [..............................] - ETA: 33s - loss: 0.8747 - accuracy: 0.4421[1,0]<stdout>:
31/4851 [..............................] - ETA: 34s - loss: 0.8364 - accuracy: 0.4768[1,0]<stdout>:
39/4851 [..............................] - ETA: 34s - loss: 0.7905 - accuracy: 0.5445[1,0]<stdout>:
48/4851 [..............................] - ETA: 32s - loss: 0.7389 - accuracy: 0.6286[1,0]<stdout>:
56/4851 [..............................] - ETA: 32s - loss: 0.6957 - accuracy: 0.6816[1,0]<stdout>:
64/4851 [..............................] - ETA: 32s - loss: 0.6540 - accuracy: 0.7214[1,0]<stdout>:
71/4851 [..............................] - ETA: 32s - loss: 0.6205 - accuracy: 0.7489[1,0]<stdout>:
79/4851 [..............................] - ETA: 32s - loss: 0.5844 - accuracy: 0.7743[1,0]<stdout>:
87/4851 [..............................] - ETA: 32s - loss: 0.5504 - accuracy: 0.7951[1,0]<stdout>:
95/4851 [..............................] - ETA: 32s - loss: 0.5194 - accuracy: 0.8123[1,0]<stdout>:
103/4851 [..............................] - ETA: 32s - loss: 0.4912 - accuracy: 0.8269[1,0]<stdout>:
112/4851 [..............................] - ETA: 31s - loss: 0.4623 - accuracy: 0.8408[1,0]<stdout>:
121/4851 [..............................] - ETA: 31s - loss: 0.4364 - accuracy: 0.8525[1,0]<stdout>:
131/4851 [..............................] - ETA: 30s - loss: 0.4106 - accuracy: 0.8637[1,0]<stdout>:
140/4851 [..............................] - ETA: 30s - loss: 0.3886 - accuracy: 0.8724[1,0]<stdout>:
148/4851 [..............................] - ETA: 30s - loss: 0.3706 - accuracy: 0.8793[1,0]<stdout>:
156/4851 [..............................] - ETA: 30s - loss: 0.3542 - accuracy: 0.8855[1,0]<stdout>:
164/4851 [>.............................] - ETA: 30s - loss: 0.3388 - accuracy: 0.8911[1,0]<stdout>:
172/4851 [>.............................] - ETA: 30s - loss: 0.3246 - accuracy: 0.8962[1,0]<stdout>:
180/4851 [>.............................] - ETA: 30s - loss: 0.3116 - accuracy: 0.9008[1,0]<stdout>:
188/4851 [>.............................] - ETA: 30s - loss: 0.2994 - accuracy: 0.9050[1,0]<stdout>:
196/4851 [>.............................] - ETA: 30s - loss: 0.2882 - accuracy: 0.9089[1,0]<stdout>:
204/4851 [>.............................] - ETA: 30s - loss: 0.2778 - accuracy: 0.9125[1,0]<stdout>:
212/4851 [>.............................] - ETA: 30s - loss: 0.2680 - accuracy: 0.9158[1,0]<stdout>:
220/4851 [>.............................] - ETA: 30s - loss: 0.2588 - accuracy: 0.9188[1,0]<stdout>:
227/4851 [>.............................] - ETA: 30s - loss: 0.2513 - accuracy: 0.9213[1,0]<stdout>:
235/4851 [>.............................] - ETA: 30s - loss: 0.2432 - accuracy: 0.9240[1,0]<stdout>:
243/4851 [>.............................] - ETA: 30s - loss: 0.2356 - accuracy: 0.9265[1,0]<stdout>:
251/4851 [>.............................] - ETA: 30s - loss: 0.2285 - accuracy: 0.9288[1,0]<stdout>:
259/4851 [>.............................] - ETA: 30s - loss: 0.2218 - accuracy: 0.9310[1,0]<stdout>:
267/4851 [>.............................] - ETA: 30s - loss: 0.2155 - accuracy: 0.9331[1,0]<stdout>:
275/4851 [>.............................] - ETA: 30s - loss: 0.2095 - accuracy: 0.9351[1,0]<stdout>:
283/4851 [>.............................] - ETA: 30s - loss: 0.2038 - accuracy: 0.9369[1,0]<stdout>:
291/4851 [>.............................] - ETA: 30s - loss: 0.1985 - accuracy: 0.9386[1,0]<stdout>:
299/4851 [>.............................] - ETA: 30s - loss: 0.1933 - accuracy: 0.9403[1,0]<stdout>:
307/4851 [>.............................] - ETA: 30s - loss: 0.1885 - accuracy: 0.9418[1,0]<stdout>:
316/4851 [>.............................] - ETA: 30s - loss: 0.1833 - accuracy: 0.9435[1,0]<stdout>:
325/4851 [=>............................] - ETA: 30s - loss: 0.1784 - accuracy: 0.9450[1,0]<stdout>:
334/4851 [=>............................] - ETA: 30s - loss: 0.1738 - accuracy: 0.9465[1,0]<stdout>:
343/4851 [=>............................] - ETA: 30s - loss: 0.1694 - accuracy: 0.9479[1,0]<stdout>:
351/4851 [=>............................] - ETA: 29s - loss: 0.1656 - accuracy: 0.9491[1,0]<stdout>:
358/4851 [=>............................] - ETA: 30s - loss: 0.1625 - accuracy: 0.9501[1,0]<stdout>:
366/4851 [=>............................] - ETA: 29s - loss: 0.1590 - accuracy: 0.9512[1,0]<stdout>:
374/4851 [=>............................] - ETA: 29s - loss: 0.1557 - accuracy: 0.9522[1,0]<stdout>:
383/4851 [=>............................] - ETA: 29s - loss: 0.1521 - accuracy: 0.9534[1,0]<stdout>:
391/4851 [=>............................] - ETA: 29s - loss: 0.1491 - accuracy: 0.9543[1,0]<stdout>:
400/4851 [=>............................] - ETA: 29s - loss: 0.1458 - accuracy: 0.9554[1,0]<stdout>:
408/4851 [=>............................] - ETA: 29s - loss: 0.1430 - accuracy: 0.9562[1,0]<stdout>:
417/4851 [=>............................] - ETA: 29s - loss: 0.1400 - accuracy: 0.9572[1,0]<stdout>:
422/4851 [=>............................] - ETA: 29s - loss: 0.1384 - accuracy: 0.9577[1,0]<stdout>:
428/4851 [=>............................] - ETA: 29s - loss: 0.1365 - accuracy: 0.9583[1,0]<stdout>:
437/4851 [=>............................] - ETA: 29s - loss: 0.1338 - accuracy: 0.9591[1,0]<stdout>:
447/4851 [=>............................] - ETA: 29s - loss: 0.1314 - accuracy: 0.9600[1,0]<stdout>:
456/4851 [=>............................] - ETA: 29s - loss: 0.1289 - accuracy: 0.9608[1,0]<stdout>:
465/4851 [=>............................] - ETA: 29s - loss: 0.1264 - accuracy: 0.9616[1,0]<stdout>:
474/4851 [=>............................] - ETA: 29s - loss: 0.1241 - accuracy: 0.9623[1,0]<stdout>:
483/4851 [=>............................] - ETA: 29s - loss: 0.1218 - accuracy: 0.9630[1,0]<stdout>:
491/4851 [==>...........................] - ETA: 28s - loss: 0.1199 - accuracy: 0.9636[1,0]<stdout>:
499/4851 [==>...........................] - ETA: 28s - loss: 0.1180 - accuracy: 0.9642[1,0]<stdout>:
508/4851 [==>...........................] - ETA: 28s - loss: 0.1160 - accuracy: 0.9648[1,0]<stdout>:
518/4851 [==>...........................] - ETA: 28s - loss: 0.1138 - accuracy: 0.9655[1,0]<stdout>:
527/4851 [==>...........................] - ETA: 28s - loss: 0.1118 - accuracy: 0.9661[1,0]<stdout>:
536/4851 [==>...........................] - ETA: 28s - loss: 0.1100 - accuracy: 0.9667[1,0]<stdout>:
545/4851 [==>...........................] - ETA: 28s - loss: 0.1082 - accuracy: 0.9672[1,0]<stdout>:
554/4851 [==>...........................] - ETA: 28s - loss: 0.1065 - accuracy: 0.9677[1,0]<stdout>:
562/4851 [==>...........................] - ETA: 28s - loss: 0.1050 - accuracy: 0.9682[1,0]<stdout>:
572/4851 [==>...........................] - ETA: 27s - loss: 0.1032 - accuracy: 0.9688[1,0]<stdout>: | open | 2022-12-30T17:14:54Z | 2023-02-13T10:58:48Z | https://github.com/horovod/horovod/issues/3801 | [
"bug",
"spark"
] | obaid1922 | 0 |
tableau/server-client-python | rest-api | 930 | Unable to fetch all workbooks for users.populate_workbooks(user) | Unable to fetch all workbooks for a given user.
**Versions**
Details of your environment, including:
- Tableau Online
- Python 3.7+
- tableauserverclient<=0.14.1
**To Reproduce**
`
import tableauserverclient as tsc
from tableauserverclient.server.endpoint.exceptions import ServerResponseError
def get_workbooks_users(self) -> list: #tableau credentials are passed through Self
list_workbooks_n_users = []
try:
with self.server.auth.sign_in(self.tableau_auth):
for user in tsc.Pager(self.server.users):
if 'tol.admin.api.broker.service.usera@tableau.com' not in user.name:
self.server.users.populate_workbooks(user)
for wkb in user.workbooks:
print(wkb.name, user.name)
list_workbooks_n_users.append({
'user_id': user.id,
'user_name': user.name,
'wb_id': wkb.id,
'wb_name': wkb.name,
'project_id': wkb.project_id,
'project_name': wkb.project_name,
"site_id": self.site_id
})
except ValueError:
print("some thing")
finally:
print("some other thing")
return list_workbooks_n_users
`
**Results**
I only get the first one hundred results for my user. I know the user has access to all tableau assets.
**NOTE:** Be careful not to post user names, passwords, auth tokens or any other private or sensitive information.
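Editorial note, not from the reporter: the 100-workbook ceiling matches TSC's default page size of 100 — `populate_workbooks` retrieves only a single page unless told otherwise. One possible workaround (unverified against 0.14.1; the `req_options` parameter name is an assumption) is `self.server.users.populate_workbooks(user, req_options=tsc.RequestOptions(pagesize=1000))`. The generic pattern a pager applies can be sketched with a plain function, independent of TSC:

```python
def fetch_all(fetch_page, page_size=100):
    """Collect every item from a paged endpoint.

    `fetch_page(page_number, page_size)` must return (items, total_available),
    mirroring the (items, pagination_item) pair TSC endpoints return.
    """
    items, page = [], 1
    while True:
        batch, total = fetch_page(page, page_size)
        items.extend(batch)
        # Stop when the server says we have everything, or it stops sending.
        if not batch or len(items) >= total:
            return items
        page += 1
```

This is what `tsc.Pager` already does for top-level endpoints; the bug report amounts to the populate-style calls not going through an equivalent loop.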
| closed | 2021-10-27T14:34:44Z | 2023-02-16T23:03:32Z | https://github.com/tableau/server-client-python/issues/930 | [
"bug"
] | kdutta-c | 17 |
apache/airflow | automation | 47,349 | Scheduler crash while using PythonVirtualenvOperator | ### Apache Airflow version
3.0.0b1
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
The scheduler is crashing while using PythonVirtualenvOperator.
**Error**:
```text
File "/opt/airflow/airflow/cli/commands/local_commands/daemon_utils.py", line 86, in run_command_with_daemon_option
callback()
File "/opt/airflow/airflow/cli/commands/local_commands/scheduler_command.py", line 55, in <lambda>
callback=lambda: _run_scheduler_job(args),
File "/opt/airflow/airflow/cli/commands/local_commands/scheduler_command.py", line 43, in _run_scheduler_job
run_job(job=job_runner.job, execute_callable=job_runner._execute)
File "/opt/airflow/airflow/utils/session.py", line 101, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/airflow/airflow/jobs/job.py", line 342, in run_job
return execute_job(job, execute_callable=execute_callable)
File "/opt/airflow/airflow/jobs/job.py", line 371, in execute_job
ret = execute_callable()
File "/opt/airflow/airflow/jobs/scheduler_job_runner.py", line 926, in _execute
self._run_scheduler_loop()
File "/opt/airflow/airflow/jobs/scheduler_job_runner.py", line 1051, in _run_scheduler_loop
executor.heartbeat()
File "/opt/airflow/airflow/traces/tracer.py", line 54, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/airflow/executors/base_executor.py", line 252, in heartbeat
self.trigger_tasks(open_slots)
File "/opt/airflow/airflow/traces/tracer.py", line 54, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/airflow/executors/base_executor.py", line 409, in trigger_tasks
self._process_workloads(workloads) # type: ignore[attr-defined]
File "/opt/airflow/providers/celery/src/airflow/providers/celery/executors/celery_executor.py", line 281, in _process_workloads
self._send_tasks(tasks)
File "/opt/airflow/providers/celery/src/airflow/providers/celery/executors/celery_executor.py", line 290, in _send_tasks
key_and_async_results = self._send_tasks_to_celery(task_tuples_to_send)
File "/opt/airflow/providers/celery/src/airflow/providers/celery/executors/celery_executor.py", line 329, in _send_tasks_to_celery
return list(map(send_task_to_executor, task_tuples_to_send))
File "/opt/airflow/providers/celery/src/airflow/providers/celery/executors/celery_executor_utils.py", line 266, in send_task_to_executor
args = (args.model_dump_json(),)
File "/usr/local/lib/python3.9/site-packages/pydantic/main.py", line 477, in model_dump_json
return self.__pydantic_serializer__.to_json(
pydantic_core._pydantic_core.PydanticSerializationError: Unable to serialize unknown type: <class 'kubernetes.client.models.v1_pod.V1Pod'>
```
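Editorial aside: the traceback shows pydantic's JSON serializer has no handler for the `V1Pod` stored in `executor_config`. This is not Airflow's actual fix, but the general pattern is a fallback encoder that converts unknown objects via the kubernetes client models' `to_dict()` before dumping — sketched here with a stand-in class so it runs without the kubernetes package:

```python
import json

def to_jsonable(obj):
    # Fallback hook: kubernetes client models (like V1Pod) expose
    # to_dict(); anything else is stringified rather than crashing.
    if hasattr(obj, "to_dict"):
        return obj.to_dict()
    return str(obj)

class FakeV1Pod:  # stand-in for kubernetes.client.models.V1Pod
    def to_dict(self):
        return {"spec": {"containers": [{"name": "base"}]}}

payload = {"executor_config": {"pod_override": FakeV1Pod()}}
encoded = json.dumps(payload, default=to_jsonable)
```

`json.dumps(..., default=...)` calls the hook only for objects the encoder cannot handle natively, which is exactly the point where pydantic raises `PydanticSerializationError` in the traceback above.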
### What you think should happen instead?
PythonVirtualenvOperator should not cause the scheduler to crash.
### How to reproduce
Run the below DAG:
```python
from airflow.models import DAG
from airflow.providers.standard.operators.python import PythonVirtualenvOperator
from pendulum import today
from kubernetes.client import models as k8s
def callable_virtualenv():
"""
Example function that will be performed in a virtual environment.
Importing at the module level ensures that it will not attempt to import the
library before it is installed.
"""
from time import sleep
from colorama import Back, Fore, Style
print(Fore.RED + "some red text")
print(Back.GREEN + "and with a green background")
print(Style.DIM + "and in dim text")
print(Style.RESET_ALL)
for _ in range(10):
print(Style.DIM + "Please wait...", flush=True)
sleep(10)
print("Finished")
with DAG(
dag_id="virtualenv_python_operator",
default_args={"owner": "airflow"},
schedule=None,
start_date=today('UTC').add(days=-2),
tags=["core"],
) as dag:
task = PythonVirtualenvOperator(
task_id="virtualenv_python",
python_callable=callable_virtualenv,
requirements=["colorama==0.4.0"],
system_site_packages=False,
executor_config={
"pod_override": k8s.V1Pod(
spec=k8s.V1PodSpec(
containers=[
k8s.V1Container(
name="base",
resources=k8s.V1ResourceRequirements(
requests={
"cpu": "100m",
"memory": "384Mi",
},
limits={
"cpu": 1,
"memory": "500Mi",
}
)
)
]
)
)
}
)
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-04T14:09:44Z | 2025-03-05T08:41:15Z | https://github.com/apache/airflow/issues/47349 | [
"kind:bug",
"area:Scheduler",
"priority:critical",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 3 |
wkentaro/labelme | computer-vision | 987 | [Question] Why doesn't the Labelme GUI add an option to open flags.txt? | closed | 2022-02-15T06:00:50Z | 2022-02-25T21:09:07Z | https://github.com/wkentaro/labelme/issues/987 | [] | YuaXan | 1 | |
xuebinqin/U-2-Net | computer-vision | 30 | Is there a docker image? | Hi
It would be great to have a docker image for this so people less experienced with Python (me 😛 ) can try it out.
Or maybe just point out in the README an already-made docker image that would work.
apache/airflow | data-science | 47,413 | Scheduler HA mode, DagFileProcessor Race Condition | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.1
### What happened?
We use dynamic dag generation in our Airflow environment. We have one base dag definition file, which we will call `big_dag.py`, that generates >1500 dags. Recently, after introducing a handful more dags generated from `big_dag.py`, all the `big_dag.py`-generated dags disappeared from the UI and now reappear randomly in a loop.
We noticed that if we restarted our env a couple of times, we could randomly achieve stability, so we started to believe some timing issue was at play.
### What you think should happen instead?
Goal state: dag files that generate >1500 dags should not cause any disruption to the environment, given appropriate timeouts.
After checking the dag_process_manager log stream we noticed a prevalence of this error:
`(psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "serialized_dag_pkey" DETAIL: Key (dag_id)=(<dag_name>)`
I believe the issue is on this line of the `write_dag` function of the `SerializedDagModel`:
**This code is from the main branch, I believe the issue is still present in main**
https://github.com/apache/airflow/blob/7bfe283cf4fa28453c857e659f4c1d5917f9e11c/airflow/models/serialized_dag.py#L197
The check for whether a serialized dag should be updated is NOT ATOMIC, which leads to a race condition when more than one scheduler tries to update the serialization at the same time.
I believe a "check-then-update" atomic action should be used here through a mechanism like the row level `SELECT ... FOR UPDATE`.
### How to reproduce
You can reproduce this by having an environment with multiple schedulers/standalone_dag_file_processors and dag files that dynamically generate > 1500 dags. Time for a full processing of a >1500 dag file should be ~200 seconds (make sure timeout accommodates this).
To increase the likelihood that the duplicate serialized pkey issue happens, reduce min_file_process_interval to something like 30 seconds.
### Operating System
Amazon Linux 2023
### Versions of Apache Airflow Providers
_No response_
### Deployment
Amazon (AWS) MWAA
### Deployment details
2.10.1
2 Schedulers
xL Environment Size:

min_file_process_interval= 600
standalone_dag_processor = True (we believe MWAA creates one per scheduler)
dag_file_processor_timeout = 900
dagbag_import_timeout = 900
### Anything else?
I am not sure why the timing works out when dag definition files generate <<1500 dags; it could just be that the speed of the environment finishes all work before a race condition can occur.
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-05T19:43:20Z | 2025-03-11T16:09:10Z | https://github.com/apache/airflow/issues/47413 | [
"kind:bug",
"area:Scheduler",
"area:MetaDB",
"area:core",
"needs-triage"
] | robertchinezon | 4 |
netbox-community/netbox | django | 18,713 | Deprecation Warning during Netbox Update | ### Proposed Changes
```log
INFO - DeprecationWarning: Setting a fallback anchor function is deprecated and will be removed in a future release.
File "/data/netbox-4.2.4/venv/lib/python3.10/site-packages/mkdocstrings/plugin.py", line 190, in on_config
autorefs.get_fallback_anchor = self.handlers.get_anchors
File "/data/netbox-4.2.4/venv/lib/python3.10/site-packages/mkdocs_autorefs/plugin.py", line 130, in get_fallback_anchor
warn(
INFO - Cleaning site directory
INFO - Building documentation to directory: /data/netbox-4.2.4/netbox/project-static/docs
/data/netbox-4.2.4/venv/lib/python3.10/site-packages/strawberry/utils/deprecations.py:23: UserWarning: _type_definition is deprecated, use __strawberry_definition__ instead
self.warn()
INFO - Documentation built in 10.35 seconds
Collecting static files (python3 netbox/manage.py collectstatic --no-input)...
```
I think it's necessary to look into upgrading the modules concerned to avoid future problems.
### Justification
When a dependency or module begins to be deprecated, it could pose a number of incompatibility or security problems in the future. It would be worth looking into updating the code concerned.
### Impact
The impact may differ depending on the severity of the deprecation. | closed | 2025-02-24T07:52:45Z | 2025-02-24T13:29:15Z | https://github.com/netbox-community/netbox/issues/18713 | [] | TheGuardianLight | 1 |
zappa/Zappa | flask | 1,087 | Error in zappa tail - Unable to import module 'handler': No module named 'dataclasses' | <!--- Provide a general summary of the issue in the Title above -->
## Context
Redeploying existing app with Python 3.8 / 3.9 produces error seen in zappa tail:
```
Unable to import module 'handler': No module named 'dataclasses'
```
No reference to "handler"/"dataclasses" found in my code.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.6/3.7/3.8 -->
## Expected Behavior
<!--- Tell us what should happen -->
Should not be there
## Actual Behavior
<!--- Tell us what happens instead -->
The error above
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: latest
* Operating System and Python version: Python 3.8/3.9, Ubuntu Linux 20.04.
* The output of `pip freeze`:
```
aniso8601==9.0.1
argcomplete==1.12.3
boto3==1.20.14
botocore==1.23.14
certifi==2021.10.8
cfn-flip==1.3.0
charset-normalizer==2.0.8
click==8.0.3
durationpy==0.5
Flask==2.0.2
Flask-RESTful==0.3.9
future==0.18.2
hjson==3.0.2
idna==3.3
itsdangerous==2.0.1
Jinja2==3.0.3
jmespath==0.10.0
kappa==0.6.0
MarkupSafe==2.0.1
pep517==0.12.0
pip-tools==6.4.0
placebo==0.10.0
python-dateutil==2.8.2
python-dotenv==0.19.2
python-slugify==5.0.2
pytz==2021.3
PyYAML==6.0
requests==2.26.0
s3transfer==0.5.0
six==1.16.0
text-unidecode==1.3
toml==0.10.2
tomli==1.2.2
tqdm==4.62.3
troposphere==3.1.1
urllib3==1.26.7
Werkzeug==2.0.2
wsgi-request-logger==0.4.6
zappa==0.54.1
```
| closed | 2021-11-26T22:04:46Z | 2022-08-18T11:43:55Z | https://github.com/zappa/Zappa/issues/1087 | [] | synergiator | 1 |
pykaldi/pykaldi | numpy | 315 | Path in run_vad.sh is incorrect | The path in run_vad.sh is:
- `python compute-vad.py --test-plot scp:data/wav.scp ark,t:out/ltsv-feats.ark`
The path should be:
- `python compute-vad.py --test-plot scp:data/test/wav.scp ark,t:out/ltsv-feats.ark` | open | 2023-02-27T09:22:17Z | 2023-02-27T09:22:17Z | https://github.com/pykaldi/pykaldi/issues/315 | [] | JPunch | 0 |
xlwings/xlwings | automation | 1,799 | Incorrect encoding in error messages (UTF-8 not decoded) | #### Windows 8
#### xlwings-0.25.3, Excel 2013, Python 3.7
If in Python code I execute, for example, `raise RuntimeError('ошибка')`,
in Excel I get `err.Description = "RuntimeError: РѕСРёР±РєР°"`, which is the raw UTF-8 bytes reinterpreted in the system codepage.
So it is impossible to raise errors in the user's native language.
The error messages received from python should be decoded from UTF-8 to wide chars. | closed | 2022-01-24T21:23:28Z | 2022-02-19T11:20:18Z | https://github.com/xlwings/xlwings/issues/1799 | [] | panda-34 | 7 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 34 | Can you give Autoencoder/autoencoder.py example code? | Can you give Autoencoder/autoencoder.py example code? Thanks!
| open | 2017-06-25T15:13:00Z | 2017-08-18T11:41:46Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/34 | [] | liangyanfeng | 1 |
Python3WebSpider/ProxyPool | flask | 218 | When requesting a proxy, a 500 error is returned |
172.16.2.136 - - [2024-07-26 08:50:23] "GET /random HTTP/1.1" 500 426 0.002383
proxypool | [2024-07-26 08:50:51,941] ERROR in app: Exception on /random [GET]
proxypool | Traceback (most recent call last):
proxypool | File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
proxypool | response = self.full_dispatch_request()
proxypool | File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
proxypool | rv = self.handle_user_exception(e)
proxypool | File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
proxypool | reraise(exc_type, exc_value, tb)
proxypool | File "/root/.local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
proxypool | raise value
proxypool | File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
proxypool | rv = self.dispatch_request()
proxypool | File "/root/.local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
proxypool | return self.view_functions[rule.endpoint](**req.view_args)
proxypool | File "/app/proxypool/processors/server.py", line 19, in decorator
proxypool | return func(*args, **kwargs)
proxypool | File "/app/proxypool/processors/server.py", line 70, in get_proxy
proxypool | return conn.random().string()
proxypool | File "/app/proxypool/storages/redis.py", line 69, in random
proxypool | return convert_proxy_or_proxies(choice(proxies))
proxypool | File "/app/proxypool/utils/proxy.py", line 66, in convert_proxy_or_proxies
proxypool | host, port = data.split(':')
proxypool | ValueError: too many values to unpack (expected 2) | open | 2024-07-26T08:57:11Z | 2024-07-26T08:57:11Z | https://github.com/Python3WebSpider/ProxyPool/issues/218 | [
"bug"
] | wwf1227 | 0 |
hzwer/ECCV2022-RIFE | computer-vision | 101 | Support for both real-time and non-realtime UltraHFR | Hello,
Inventor of TestUFO / Founder of Blur Busters here.
We are [pioneers](https://blurbusters.com/category/area51-display-research) in UltraHFR experiments (true 240fps on true 240Hz displays), and we're also working with laboratory true-1000Hz displays of the future that will display 1000fps content in real time.
(Before anyone asks, yes, [researchers have shown 1000Hz has human visible benefits -- scientific citations here](https://blurbusters.com/1000hz-journey) -- and another research article, [Stroboscopic Effect of Finite Frame Rate Displays](https://blurbusters.com/stroboscopics).)
I have an [Ultra HFR FAQ: Real-time 240fps, 480fps and 1000fps on real-time 240Hz, 480Hz and 1000Hz displays](https://blurbusters.com/ultrahfr) which shows how to speed up Phantom Flex videos to real-time speeds, as well as other high speed camera footage (e.g. 240fps GoPro HERO8 videos). However, these are low quality, and filming 4K 60fps to be interpolated to 4K 240fps on prototype 4K 240Hz displays look much better.
I'm researching RIFE for potential suitability.
Is it possible that RIFE could be modified to use multiple RTX 3090-class GPUs (SLI configurations) to do real-time UltraHFR -- say, converting a 4K 120fps HFR camera feed to 4K 960fps HFR in real time? We have access to such prototype display technology, albeit the GPU technology is lagging slightly behind what custom-built laboratory displays can now do.

(P.S. Some of the precursors are already on the market. An E-Cinema projector, the Christie 4K digital projector, is already capable of 480Hz, albeit at only 1080p.) | closed | 2021-02-03T20:52:23Z | 2021-04-09T10:11:57Z | https://github.com/hzwer/ECCV2022-RIFE/issues/101 | [] | mdrejhon | 1 |
dunossauro/fastapi-do-zero | pydantic | 163 | Slides for synchronous classes | The slides are quite outdated relative to the text. They should be updated, and some even created from scratch.
- [x] Presentation
- [x] Setting Up the Development Environment
- [x] Introduction to Web Development
- [x] Structuring the Project and Creating CRUD Routes
- [x] Configuring the Database and Managing Migrations with Alembic
- [x] Integrating the Database with the API
- [x] Authentication and Authorization with JWT
- [x] Refactoring the Project Structure
- [x] Making the Authentication System Robust
- [x] Creating CRUD Routes for Task Management
- [x] Dockerizing Our Application and Introducing PostgreSQL
- [x] Automating Tests with Continuous Integration (CI)
- [x] Deploying to Fly.io
- [x] Farewell and Next Steps | closed | 2024-06-04T20:11:31Z | 2024-07-30T19:32:16Z | https://github.com/dunossauro/fastapi-do-zero/issues/163 | [] | dunossauro | 1 |
mwaskom/seaborn | matplotlib | 3,563 | Using weighted mean estimator for bootstrapped confidence intervals in seaborn plots | Hi, I would like to use a weighted mean estimator for calculating confidence intervals on various seaborn plots. In the past I have done this via a 'hack' suggested [here](https://github.com/mwaskom/seaborn/issues/722) which uses complex numbers to encode the data and its weights before passing to a seaborn plotting function.
Unfortunately, as of seaborn v0.13.0 this approach no longer works: it seems the complex numbers are cast to reals at some point in the plotting process (and hence lose part of the data). It had previously worked up to and including v0.12.2.
I appreciate this was always a bit of a hack, but would either of the following be possible:
a) Add native support for weighted mean estimators to the seaborn plotting functions or,
b) Restore this hacky behaviour for now in a future release
I have tried alternatives such as storing the data and its weights in tuples or dataclasses, however neither of these approaches work as the data types are not numeric.
Language and package versions:
- Python v3.11.5
- numpy v1.26.2
- matplotlib v3.8.1
- pandas v2.1.3
Example code:
```
import pandas as pd
import seaborn as sns
import numpy as np
randomGenerator = np.random.default_rng(123)
values = randomGenerator.normal(10, 5, size=(100,))
weights = randomGenerator.uniform(size=(100,))
dataFrame = pd.DataFrame({'values': values, 'weights': weights})
dataFrame['valuesWithWeights'] = dataFrame['values'] + 1j * dataFrame['weights']
def WeightedMean(valuesWithWeights, **kwargs):
values, weights = np.real(valuesWithWeights), np.imag(valuesWithWeights)
weightedSum = np.sum((weights * values)) / np.sum(weights)
return weightedSum
sns.barplot(data=dataFrame, y='valuesWithWeights', estimator=WeightedMean)
```
Output using seaborn v0.12.2

Output using seaborn v0.13.0
```
[c:\Temp\seaborn_test\seaborn-venv\Lib\site-packages\matplotlib\cbook.py:1699](file:///C:/Temp/seaborn_test/seaborn-venv/Lib/site-packages/matplotlib/cbook.py:1699): ComplexWarning: Casting complex values to real discards the imaginary part
return math.isfinite(val)
[c:\Temp\seaborn_test\seaborn-venv\Lib\site-packages\pandas\core\dtypes\astype.py:134](file:///C:/Temp/seaborn_test/seaborn-venv/Lib/site-packages/pandas/core/dtypes/astype.py:134): ComplexWarning: Casting complex values to real discards the imaginary part
return arr.astype(dtype, copy=True)
[C:\Users\idunn\AppData\Local\Temp\ipykernel_40880\4206068624.py:3](file:///C:/Users/idunn/AppData/Local/Temp/ipykernel_40880/4206068624.py:3): RuntimeWarning: invalid value encountered in scalar divide
weightedSum = np.sum((weights * values)) / np.sum(weights)
[c:\Temp\seaborn_test\seaborn-venv\Lib\site-packages\numpy\lib\nanfunctions.py:1384](file:///C:/Temp/seaborn_test/seaborn-venv/Lib/site-packages/numpy/lib/nanfunctions.py:1384): RuntimeWarning: All-NaN slice encountered
return _nanquantile_unchecked(
```

| closed | 2023-11-17T10:06:52Z | 2023-12-09T23:03:15Z | https://github.com/mwaskom/seaborn/issues/3563 | [
"wishlist",
"statistics"
] | iainAtIon | 13 |
getsentry/sentry | django | 86,837 | [RELEASES] Add empty state to session health tab | We should have an empty state screen for the "Session Health" pages if someone lands on that page with no data. | closed | 2025-03-11T20:40:25Z | 2025-03-14T22:35:11Z | https://github.com/getsentry/sentry/issues/86837 | [] | michellewzhang | 0 |
oegedijk/explainerdashboard | plotly | 248 | v0.3.8 is breaking due to changes in dtreeviz API | Hi,
At the moment we are using v0.3.8, which no longer works due to changes in the dtreeviz API. Can I request pinning the dtreeviz version for the v0.3.8 release to protect against breaking changes in the dtreeviz v2 API? Thank you!
`ImportError: cannot import name 'dtreeviz' from 'dtreeviz.trees'` | closed | 2023-01-11T16:42:38Z | 2023-02-14T08:21:04Z | https://github.com/oegedijk/explainerdashboard/issues/248 | [] | sinha1 | 1 |
httpie/cli | api | 1,410 | Python 3.11 test failures: Different enum reprs and different cookie order | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
The tests are failing with Python 3.11.0b3.
1. `git clone` httpie and `cd` into it, this is on the master branch @ 418b12bbd6072585118c06c5c4e17996d7f0b085
2. `python3.11 -m venv __venv__` and `. __venv__/bin/activate`
3. `pip install '.[test]'`
4. `pytest tests/`
## Current result
```
============================= test session starts ==============================
platform linux -- Python 3.11.0b3, pytest-7.1.2, pluggy-1.0.0
rootdir: .../httpie, configfile: pytest.ini
plugins: mock-3.7.0, lazy-fixture-0.6.3, httpbin-1.0.2, forked-1.4.0, xdist-2.5.0
collected 1026 items
tests/test_auth.py .......... [ 0%]
tests/test_compress.py ....... [ 1%]
tests/test_downloads.py ...... [ 2%]
tests/test_errors.py .... [ 2%]
tests/test_httpie.py ...........s............... [ 5%]
tests/test_json.py . [ 5%]
tests/test_auth.py .......... [ 6%]
tests/test_compress.py ....... [ 7%]
tests/test_downloads.py ...... [ 7%]
tests/test_errors.py .... [ 7%]
tests/test_httpie.py ...........s............... [ 10%]
tests/test_json.py . [ 10%]
tests/test_auth.py ......... [ 11%]
tests/test_auth_plugins.py .... [ 11%]
tests/test_binary.py ...... [ 12%]
tests/test_cli.py ..............F....................... [ 16%]
tests/test_cli_ui.py .... [ 16%]
tests/test_cli_utils.py .. [ 16%]
tests/test_compress.py .. [ 17%]
tests/test_config.py ........s [ 17%]
tests/test_cookie.py . [ 18%]
tests/test_cookie_on_redirects.py ..................... [ 20%]
tests/test_defaults.py ................. [ 21%]
tests/test_downloads.py .................... [ 23%]
tests/test_encoding.py ................................................. [ 28%]
.... [ 28%]
tests/test_errors.py .... [ 29%]
tests/test_exit_status.py ......... [ 30%]
tests/test_httpie.py ...................... [ 32%]
tests/test_httpie_cli.py ..................................... [ 35%]
tests/test_json.py ..................................................... [ 41%]
........................................................................ [ 48%]
........................................................................ [ 55%]
........................................................................ [ 62%]
....................... [ 64%]
tests/test_meta.py ...... [ 64%]
tests/test_offline.py ......... [ 65%]
tests/test_output.py ......FF.FxXX...................................... [ 70%]
........................................................................ [ 77%]
. [ 77%]
tests/test_parser_schema.py . [ 77%]
tests/test_plugins_cli.py ......FFFF..F.F [ 79%]
tests/test_redirects.py ...x...... [ 80%]
tests/test_regressions.py ... [ 80%]
tests/test_sessions.py .........................FF.F.................... [ 85%]
............ [ 86%]
tests/test_ssl.py .ss.....pytest-httpbin server hit an exception serving request: EOF occurred in violation of protocol (_ssl.c:992)
attempting to ignore so the rest of the tests can run
............pytest-httpbin server hit an exception serving request: EOF occurred in violation of protocol (_ssl.c:992)
attempting to ignore so the rest of the tests can run
... [ 88%]
tests/test_stream.py ................ [ 90%]
tests/test_tokens.py ................... [ 92%]
tests/test_transport_plugin.py . [ 92%]
tests/test_update_warnings.py ............ [ 93%]
tests/test_uploads.py ........................... [ 96%]
tests/test_windows.py s. [ 96%]
tests/test_xml.py ................. [ 98%]
tests/utils/matching/test_matching.py .................... [100%]
=================================== FAILURES ===================================
_______________________ test_url_colon_slash_slash_only ________________________
def test_url_colon_slash_slash_only():
r = http('://', tolerate_error_exit_status=True)
> assert r.stderr.strip() == "http: error: InvalidURL: Invalid URL 'http://': No host supplied"
E AssertionError: assert 'http: LogLev...host supplied' == 'http: error:...host supplied'
E - http: error: InvalidURL: Invalid URL 'http://': No host supplied
E ? ^^^^
E + http: LogLevel.ERROR: InvalidURL: Invalid URL 'http://': No host supplied
E ? ++++ ^^^^^^^^^
tests/test_cli.py:192: AssertionError
----------------------------- Captured stderr call -----------------------------
http: LogLevel.ERROR: InvalidURL: Invalid URL 'http://': No host supplied
_____________ TestQuietFlag.test_quiet_with_check_status_non_zero ______________
self = <tests.test_output.TestQuietFlag object at 0x7ff4dcbdfe90>
httpbin = <pytest_httpbin.serve.Server object at 0x7ff4dcb5a4d0>
def test_quiet_with_check_status_non_zero(self, httpbin):
r = http(
'--quiet', '--check-status', httpbin + '/status/500',
tolerate_error_exit_status=True,
)
> assert 'http: warning: HTTP 500' in r.stderr
E AssertionError: assert 'http: warning: HTTP 500' in '\nhttp: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR\n\n\n'
E + where '\nhttp: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR\n\n\n' = ''.stderr
tests/test_output.py:69: AssertionError
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [07/Jun/2022 16:23:54] "GET /status/500 HTTP/1.1" 500 0
http: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR
___________ TestQuietFlag.test_quiet_with_check_status_non_zero_pipe ___________
self = <tests.test_output.TestQuietFlag object at 0x7ff4dcbe4650>
httpbin = <pytest_httpbin.serve.Server object at 0x7ff4dcb5a4d0>
def test_quiet_with_check_status_non_zero_pipe(self, httpbin):
r = http(
'--quiet', '--check-status', httpbin + '/status/500',
tolerate_error_exit_status=True,
env=MockEnvironment(stdout_isatty=False)
)
> assert 'http: warning: HTTP 500' in r.stderr
E AssertionError: assert 'http: warning: HTTP 500' in '\nhttp: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR\n\n\n'
E + where '\nhttp: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR\n\n\n' = ''.stderr
tests/test_output.py:77: AssertionError
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [07/Jun/2022 16:23:54] "GET /status/500 HTTP/1.1" 500 0
http: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR
________ TestQuietFlag.test_quiet_quiet_with_check_status_non_zero_pipe ________
self = <tests.test_output.TestQuietFlag object at 0x7ff4dcbe5550>
httpbin = <pytest_httpbin.serve.Server object at 0x7ff4dcb5a4d0>
def test_quiet_quiet_with_check_status_non_zero_pipe(self, httpbin):
r = http(
'--quiet', '--quiet', '--check-status', httpbin + '/status/500',
tolerate_error_exit_status=True,
env=MockEnvironment(stdout_isatty=False)
)
> assert 'http: warning: HTTP 500' in r.stderr
E AssertionError: assert 'http: warning: HTTP 500' in '\nhttp: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR\n\n\n'
E + where '\nhttp: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR\n\n\n' = ''.stderr
tests/test_output.py:92: AssertionError
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [07/Jun/2022 16:23:54] "GET /status/500 HTTP/1.1" 500 0
http: LogLevel.WARNING: HTTP 500 INTERNAL SERVER ERROR
_________________________ test_plugins_uninstall[True] _________________________
interface = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_True_0/interface'), environment=<...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c78410>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>)
httpie_plugins_success = <function httpie_plugins_success.<locals>.runner at 0x7ff4d5cbe0c0>
dummy_plugin = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_True_0/interface...ue}>), name='httpie-4bef7587', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')])
cli_mode = True
@pytest.mark.requires_installation
@pytest.mark.parametrize('cli_mode', [True, False])
def test_plugins_uninstall(interface, httpie_plugins_success, dummy_plugin, cli_mode):
httpie_plugins_success('install', dummy_plugin.path, cli_mode=cli_mode)
httpie_plugins_success('uninstall', dummy_plugin.name, cli_mode=cli_mode)
> assert not interface.is_installed(dummy_plugin.name)
E AssertionError: assert not True
E + where True = <bound method Interface.is_installed of Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uni...ut': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c78410>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>)>('httpie-4bef7587')
E + where <bound method Interface.is_installed of Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uni...ut': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c78410>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>)> = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_True_0/interface'), environment=<...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c78410>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>).is_installed
E + and 'httpie-4bef7587' = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_True_0/interface...ue}>), name='httpie-4bef7587', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')]).name
tests/test_plugins_cli.py:59: AssertionError
________________________ test_plugins_uninstall[False] _________________________
interface = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_False_0/interface'), environment=...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c811d0>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>)
httpie_plugins_success = <function httpie_plugins_success.<locals>.runner at 0x7ff4d5cac220>
dummy_plugin = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_False_0/interfac...ue}>), name='httpie-300cc8fa', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')])
cli_mode = False
@pytest.mark.requires_installation
@pytest.mark.parametrize('cli_mode', [True, False])
def test_plugins_uninstall(interface, httpie_plugins_success, dummy_plugin, cli_mode):
httpie_plugins_success('install', dummy_plugin.path, cli_mode=cli_mode)
httpie_plugins_success('uninstall', dummy_plugin.name, cli_mode=cli_mode)
> assert not interface.is_installed(dummy_plugin.name)
E AssertionError: assert not True
E + where True = <bound method Interface.is_installed of Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uni...ut': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c811d0>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>)>('httpie-300cc8fa')
E + where <bound method Interface.is_installed of Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uni...ut': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c811d0>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>)> = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_False_0/interface'), environment=...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5c811d0>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>).is_installed
E + and 'httpie-300cc8fa' = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_False_0/interfac...ue}>), name='httpie-300cc8fa', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')]).name
tests/test_plugins_cli.py:59: AssertionError
_____________________ test_plugins_listing_after_uninstall _____________________
interface = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_listing_after_uni0/interface'), environment...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5dbc410>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>)
httpie_plugins_success = <function httpie_plugins_success.<locals>.runner at 0x7ff4d5cd2200>
dummy_plugin = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_listing_after_uni0/interfa...ue}>), name='httpie-99a195f1', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')])
@pytest.mark.requires_installation
def test_plugins_listing_after_uninstall(interface, httpie_plugins_success, dummy_plugin):
httpie_plugins_success('install', dummy_plugin.path)
httpie_plugins_success('uninstall', dummy_plugin.name)
data = parse_listing(httpie_plugins_success('list'))
> assert len(data) == 0
E AssertionError: assert 1 == 0
E + where 1 = len({'httpie-99a195f1': {'entry_points': [{'group': 'httpie.plugins.auth.v1', 'name': 'test'}], 'version': '1.0.0'}})
tests/test_plugins_cli.py:68: AssertionError
_______________________ test_plugins_uninstall_specific ________________________
interface = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_specifi0/interface'), environment...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5cc8950>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>)
httpie_plugins_success = <function httpie_plugins_success.<locals>.runner at 0x7ff4d5cd3560>
@pytest.mark.requires_installation
def test_plugins_uninstall_specific(interface, httpie_plugins_success):
new_plugin_1 = interface.make_dummy_plugin()
new_plugin_2 = interface.make_dummy_plugin()
target_plugin = interface.make_dummy_plugin()
httpie_plugins_success('install', new_plugin_1.path, new_plugin_2.path, target_plugin.path)
httpie_plugins_success('uninstall', target_plugin.name)
assert interface.is_installed(new_plugin_1.name)
assert interface.is_installed(new_plugin_2.name)
> assert not interface.is_installed(target_plugin.name)
E AssertionError: assert not True
E + where True = <bound method Interface.is_installed of Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uni...ut': <tempfile._TemporaryFileWrapper object at 0x7ff4d5cc8950>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>)>('httpie-79b6b9b0')
E + where <bound method Interface.is_installed of Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uni...ut': <tempfile._TemporaryFileWrapper object at 0x7ff4d5cc8950>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>)> = Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_specifi0/interface'), environment...out': <tempfile._TemporaryFileWrapper object at 0x7ff4d5cc8950>,\n 'stdout_encoding': 'utf-8',\n 'stdout_isatty': True}>).is_installed
E + and 'httpie-79b6b9b0' = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_uninstall_specifi0/interfa...ue}>), name='httpie-79b6b9b0', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')]).name
tests/test_plugins_cli.py:82: AssertionError
________________________ test_plugins_double_uninstall _________________________
httpie_plugins = <function httpie_plugins.<locals>.runner at 0x7ff4d5cfa160>
httpie_plugins_success = <function httpie_plugins_success.<locals>.runner at 0x7ff4d5cfa3e0>
dummy_plugin = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_plugins_double_uninstall0/interfac...ue}>), name='httpie-6d157572', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')])
@pytest.mark.requires_installation
def test_plugins_double_uninstall(httpie_plugins, httpie_plugins_success, dummy_plugin):
httpie_plugins_success("install", dummy_plugin.path)
httpie_plugins_success("uninstall", dummy_plugin.name)
result = httpie_plugins("uninstall", dummy_plugin.name)
> assert result.exit_status == ExitStatus.ERROR
E AssertionError: assert <ExitStatus.SUCCESS: 0> == <ExitStatus.ERROR: 1>
E + where <ExitStatus.SUCCESS: 0> = 'Successfully uninstalled httpie-6d157572\n'.exit_status
E + and <ExitStatus.ERROR: 1> = ExitStatus.ERROR
tests/test_plugins_cli.py:113: AssertionError
_____________________________ test_broken_plugins ______________________________
httpie_plugins = <function httpie_plugins.<locals>.runner at 0x7ff4d6d29300>
httpie_plugins_success = <function httpie_plugins_success.<locals>.runner at 0x7ff4d76eb920>
dummy_plugin = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_broken_plugins0/interface'), envir...ue}>), name='httpie-8972797e', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')])
broken_plugin = Plugin(interface=Interface(path=PosixPath('/tmp/pytest-of-.../pytest-31/test_broken_plugins0/interface'), envir...ue}>), name='httpie-4cd7d933', version='1.0.0', entry_points=[EntryPoint(name='test', group='httpie.plugins.auth.v1')])
@pytest.mark.requires_installation
def test_broken_plugins(httpie_plugins, httpie_plugins_success, dummy_plugin, broken_plugin):
httpie_plugins_success("install", dummy_plugin.path, broken_plugin.path)
with pytest.warns(
UserWarning,
match=(
f'While loading "{broken_plugin.name}", an error'
' occurred: broken plugin'
)
):
data = parse_listing(httpie_plugins_success('list'))
assert len(data) == 2
# We load before the uninstallation, so it will warn again.
with pytest.warns(UserWarning):
httpie_plugins_success("uninstall", broken_plugin.name)
# No warning now, since it is uninstalled.
data = parse_listing(httpie_plugins_success('list'))
> assert len(data) == 1
E AssertionError: assert 2 == 1
E + where 2 = len({'httpie-4cd7d933': {'entry_points': [{'group': 'httpie.plugins.auth.v1', 'name': 'test'}], 'version': '1.0.0'}, 'httpie-8972797e': {'entry_points': [{'group': 'httpie.plugins.auth.v1', 'name': 'test'}], 'version': '1.0.0'}})
tests/test_plugins_cli.py:153: AssertionError
_ TestCookieStorage.test_existing_and_new_cookies_sent_in_request[new=bar;chocolate=milk-new_cookies_dict1-chocolate=milk; cookie1=foo; cookie2=foo; new=bar] _
self = <tests.test_sessions.TestCookieStorage object at 0x7ff4dca91c10>
new_cookies = 'new=bar;chocolate=milk'
new_cookies_dict = {'chocolate': 'milk', 'new': 'bar'}
expected = 'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
httpbin = <pytest_httpbin.serve.Server object at 0x7ff4dcb5a4d0>
@pytest.mark.parametrize(
'new_cookies, new_cookies_dict, expected',
[(
'new=bar',
{'new': 'bar'},
'cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar;chocolate=milk',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar; chocolate=milk',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar;; chocolate=milk;;;',
{'new': 'bar', 'chocolate': 'milk'},
'cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar; chocolate=milk;;;',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
)
]
)
def test_existing_and_new_cookies_sent_in_request(self, new_cookies, new_cookies_dict, expected, httpbin):
r = http(
'--session', str(self.session_path),
'--print=H',
httpbin.url,
'Cookie:' + new_cookies,
)
# Note: cookies in response are in alphabetical order
> assert f'Cookie: {expected}' in r
E AssertionError: assert 'Cookie: chocolate=milk; cookie1=foo; cookie2=foo; new=bar' in 'GET / HTTP/1.1\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate, br\r\nConnection: keep-alive\r\nCookie: cookie1=foo; cookie2=foo; new=bar; chocolate=milk\r\nHost: 127.0.0.1:38085\r\nUser-Agent: HTTPie/3.2.1\r\n\r\n'
tests/test_sessions.py:485: AssertionError
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [07/Jun/2022 16:24:20] "GET / HTTP/1.1" 200 12144
_ TestCookieStorage.test_existing_and_new_cookies_sent_in_request[new=bar; chocolate=milk-new_cookies_dict2-chocolate=milk; cookie1=foo; cookie2=foo; new=bar] _
self = <tests.test_sessions.TestCookieStorage object at 0x7ff4dca91e90>
new_cookies = 'new=bar; chocolate=milk'
new_cookies_dict = {'chocolate': 'milk', 'new': 'bar'}
expected = 'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
httpbin = <pytest_httpbin.serve.Server object at 0x7ff4dcb5a4d0>
@pytest.mark.parametrize(
'new_cookies, new_cookies_dict, expected',
[(
'new=bar',
{'new': 'bar'},
'cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar;chocolate=milk',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar; chocolate=milk',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar;; chocolate=milk;;;',
{'new': 'bar', 'chocolate': 'milk'},
'cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar; chocolate=milk;;;',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
)
]
)
def test_existing_and_new_cookies_sent_in_request(self, new_cookies, new_cookies_dict, expected, httpbin):
r = http(
'--session', str(self.session_path),
'--print=H',
httpbin.url,
'Cookie:' + new_cookies,
)
# Note: cookies in response are in alphabetical order
> assert f'Cookie: {expected}' in r
E AssertionError: assert 'Cookie: chocolate=milk; cookie1=foo; cookie2=foo; new=bar' in 'GET / HTTP/1.1\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate, br\r\nConnection: keep-alive\r\nCookie: cookie1=foo; cookie2=foo; new=bar; chocolate=milk\r\nHost: 127.0.0.1:38085\r\nUser-Agent: HTTPie/3.2.1\r\n\r\n'
tests/test_sessions.py:485: AssertionError
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [07/Jun/2022 16:24:20] "GET / HTTP/1.1" 200 12144
_ TestCookieStorage.test_existing_and_new_cookies_sent_in_request[new=bar; chocolate=milk;;;-new_cookies_dict4-chocolate=milk; cookie1=foo; cookie2=foo; new=bar] _
self = <tests.test_sessions.TestCookieStorage object at 0x7ff4dca925d0>
new_cookies = 'new=bar; chocolate=milk;;;'
new_cookies_dict = {'chocolate': 'milk', 'new': 'bar'}
expected = 'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
httpbin = <pytest_httpbin.serve.Server object at 0x7ff4dcb5a4d0>
@pytest.mark.parametrize(
'new_cookies, new_cookies_dict, expected',
[(
'new=bar',
{'new': 'bar'},
'cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar;chocolate=milk',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar; chocolate=milk',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar;; chocolate=milk;;;',
{'new': 'bar', 'chocolate': 'milk'},
'cookie1=foo; cookie2=foo; new=bar'
),
(
'new=bar; chocolate=milk;;;',
{'new': 'bar', 'chocolate': 'milk'},
'chocolate=milk; cookie1=foo; cookie2=foo; new=bar'
)
]
)
def test_existing_and_new_cookies_sent_in_request(self, new_cookies, new_cookies_dict, expected, httpbin):
r = http(
'--session', str(self.session_path),
'--print=H',
httpbin.url,
'Cookie:' + new_cookies,
)
# Note: cookies in response are in alphabetical order
> assert f'Cookie: {expected}' in r
E AssertionError: assert 'Cookie: chocolate=milk; cookie1=foo; cookie2=foo; new=bar' in 'GET / HTTP/1.1\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate, br\r\nConnection: keep-alive\r\nCookie: cookie1=foo; cookie2=foo; new=bar; chocolate=milk\r\nHost: 127.0.0.1:38085\r\nUser-Agent: HTTPie/3.2.1\r\n\r\n'
tests/test_sessions.py:485: AssertionError
----------------------------- Captured stderr call -----------------------------
127.0.0.1 - - [07/Jun/2022 16:24:20] "GET / HTTP/1.1" 200 12144
=============================== warnings summary ===============================
...
=========================== short test summary info ============================
FAILED tests/test_cli.py::test_url_colon_slash_slash_only - AssertionError: a...
FAILED tests/test_output.py::TestQuietFlag::test_quiet_with_check_status_non_zero
FAILED tests/test_output.py::TestQuietFlag::test_quiet_with_check_status_non_zero_pipe
FAILED tests/test_output.py::TestQuietFlag::test_quiet_quiet_with_check_status_non_zero_pipe
FAILED tests/test_plugins_cli.py::test_plugins_uninstall[True] - AssertionErr...
FAILED tests/test_plugins_cli.py::test_plugins_uninstall[False] - AssertionEr...
FAILED tests/test_plugins_cli.py::test_plugins_listing_after_uninstall - Asse...
FAILED tests/test_plugins_cli.py::test_plugins_uninstall_specific - Assertion...
FAILED tests/test_plugins_cli.py::test_plugins_double_uninstall - AssertionEr...
FAILED tests/test_plugins_cli.py::test_broken_plugins - AssertionError: asser...
FAILED tests/test_sessions.py::TestCookieStorage::test_existing_and_new_cookies_sent_in_request[new=bar;chocolate=milk-new_cookies_dict1-chocolate=milk; cookie1=foo; cookie2=foo; new=bar]
FAILED tests/test_sessions.py::TestCookieStorage::test_existing_and_new_cookies_sent_in_request[new=bar; chocolate=milk-new_cookies_dict2-chocolate=milk; cookie1=foo; cookie2=foo; new=bar]
FAILED tests/test_sessions.py::TestCookieStorage::test_existing_and_new_cookies_sent_in_request[new=bar; chocolate=milk;;;-new_cookies_dict4-chocolate=milk; cookie1=foo; cookie2=foo; new=bar]
= 13 failed, 1003 passed, 6 skipped, 2 xfailed, 2 xpassed, 351 warnings in 56.38s =
```
## Expected result
Tests pass.
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
Not relevant.
## Additional information, screenshots, or code examples
We are about to update to Python 3.11 in Fedora 37 (the development version of Fedora). This is not yet blocking our users, but getting it sorted out will eventually be required. Fedora 37 Beta is to be released in September 2022.
I believe that most of the test failures observed, if not all, are bad test expectations rather than actual problems in httpie.
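To make the cookie-order mismatch concrete: the failing assertions expect the `Cookie:` header alphabetized, while the actual request serializes cookies in insertion order. A stdlib-only illustration (my own sketch, not httpie's code):

```python
cookies = {"cookie1": "foo", "cookie2": "foo", "new": "bar", "chocolate": "milk"}

# What the failing request actually sent: insertion order.
actual = "; ".join(f"{k}={v}" for k, v in cookies.items())

# What the test expects: alphabetical order.
expected = "; ".join(f"{k}={cookies[k]}" for k in sorted(cookies))

print(actual)    # cookie1=foo; cookie2=foo; new=bar; chocolate=milk
print(expected)  # chocolate=milk; cookie1=foo; cookie2=foo; new=bar
```

Both headers carry the same cookies, which supports the "bad test expectations" reading rather than a data-loss bug.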
| closed | 2022-06-07T14:30:54Z | 2023-07-08T08:06:49Z | https://github.com/httpie/cli/issues/1410 | [
"bug",
"testing"
] | hroncok | 1 |
pytorch/pytorch | python | 149,199 | DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64 (__main__.TestForeachCUDA) | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38766586800).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99 | open | 2025-03-14T15:42:55Z | 2025-03-14T15:42:59Z | https://github.com/pytorch/pytorch/issues/149199 | [
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | pytorch-bot[bot] | 1 |
fbdesignpro/sweetviz | pandas | 105 | formatter gets series instead of float value | Hi, when using `sv.compare` I get the error below:
```
File "/home/jurgis/PycharmProjects/debitum-portfolio/notebooks/jurgis/sw_report.py", line 22, in <module>
my_report = sv.compare([df_A, "A"], [df_B, "B"])
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/sv_public.py", line 22, in compare
report = sweetviz.DataframeReport(source, target_feat, compare,
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/dataframe_report.py", line 256, in __init__
self._features[f.source.name] = sa.analyze_feature_to_dictionary(f)
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/series_analyzer.py", line 142, in analyze_feature_to_dictionary
sweetviz.series_analyzer_text.analyze(to_process, returned_feature_dict)
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/series_analyzer_text.py", line 50, in analyze
feature_dict["html_summary"] = sv_html.generate_html_summary_text(feature_dict, compare_dict)
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/sv_html.py", line 226, in generate_html_summary_text
output = template.render(feature_dict = feature_dict, compare_dict = compare_dict, \
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/templates/feature_summary_text.html", line 25, in top-level template code
<div class="pair-pos__num dim">{{ rowdata.count_compare.number|fmt_int_limit }}</div>
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/sweetviz/sv_html_formatters.py", line 16, in fmt_int_limit
if value > 999999:
File "/home/jurgis/anaconda3/envs/debitum-portfolio/lib/python3.8/site-packages/pandas/core/generic.py", line 1442, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
With the debugger I see that `fmt_int_limit` receives a Series with a single row (a datetime in the index and an integer value) instead of a plain number. | open | 2022-01-18T15:34:02Z | 2023-11-17T15:33:07Z | https://github.com/fbdesignpro/sweetviz/issues/105 | [
"bug"
] | dz0 | 4 |
tensorlayer/TensorLayer | tensorflow | 719 | In training process, validation data are necessary? | ### New Issue Checklist
- [* ] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [* ] I searched for [existing GitHub issues](https://github.com/tensorlayer/tensorlayer/issues)
### Issue Description
While reading the `tl.utils.fit` source code, I have a question: "In the training process, is validation data necessary?" Maybe I misunderstand the code; the `fit` function is below:
```python
def fit(
sess, network, train_op, cost, X_train, y_train, x, y_, acc=None, batch_size=100, n_epoch=100, print_freq=5,
X_val=None, y_val=None, eval_train=True, tensorboard=False, tensorboard_epoch_freq=5,
tensorboard_weight_histograms=True, tensorboard_graph_vis=True
):
"""Training a given non time-series network by the given cost function, training data, batch_size, n_epoch etc.
- MNIST example click `here <https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_mnist_simple.py>`_.
- In order to control the training details, the authors HIGHLY recommend ``tl.iterate`` see two MNIST examples `1 <https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_mlp_dropout1.py>`_, `2 <https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_mlp_dropout1.py>`_.
Parameters
----------
sess : Session
TensorFlow Session.
network : TensorLayer layer
the network to be trained.
train_op : TensorFlow optimizer
The optimizer for training e.g. tf.train.AdamOptimizer.
X_train : numpy.array
The input of training data
y_train : numpy.array
The target of training data
x : placeholder
For inputs.
y_ : placeholder
For targets.
acc : TensorFlow expression or None
Metric for accuracy or others. If None, would not print the information.
batch_size : int
The batch size for training and evaluating.
n_epoch : int
The number of training epochs.
print_freq : int
Print the training information every ``print_freq`` epochs.
X_val : numpy.array or None
The input of validation data. If None, would not perform validation.
y_val : numpy.array or None
The target of validation data. If None, would not perform validation.
eval_train : boolean
Whether to evaluate the model during training.
If X_val and y_val are not None, it reflects whether to evaluate the model on training data.
tensorboard : boolean
If True, summary data will be stored to the log/ directory for visualization with tensorboard.
See also detailed tensorboard_X settings for specific configurations of features. (default False)
Also runs `tl.layers.initialize_global_variables(sess)` internally in fit() to setup the summary nodes.
tensorboard_epoch_freq : int
How many epochs between storing tensorboard checkpoint for visualization to log/ directory (default 5).
tensorboard_weight_histograms : boolean
If True updates tensorboard data in the logs/ directory for visualization
of the weight histograms every tensorboard_epoch_freq epoch (default True).
tensorboard_graph_vis : boolean
If True stores the graph in the tensorboard summaries saved to log/ (default True).
Examples
--------
See `tutorial_mnist_simple.py <https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_mnist_simple.py>`_
>>> tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
... acc=acc, batch_size=500, n_epoch=200, print_freq=5,
... X_val=X_val, y_val=y_val, eval_train=False)
>>> tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
... acc=acc, batch_size=500, n_epoch=200, print_freq=5,
... X_val=X_val, y_val=y_val, eval_train=False,
... tensorboard=True, tensorboard_weight_histograms=True, tensorboard_graph_vis=True)
Notes
--------
If tensorboard=True, the `global_variables_initializer` will be run inside the fit function
in order to initialize the automatically generated summary nodes used for tensorboard visualization,
thus `tf.global_variables_initializer().run()` before the `fit()` call will be undefined.
"""
if X_train.shape[0] < batch_size:
raise AssertionError("Number of training examples should be bigger than the batch size")
if (tensorboard):
logging.info("Setting up tensorboard ...")
#Set up tensorboard summaries and saver
tl.files.exists_or_mkdir('logs/')
#Only write summaries for more recent TensorFlow versions
if hasattr(tf, 'summary') and hasattr(tf.summary, 'FileWriter'):
if tensorboard_graph_vis:
train_writer = tf.summary.FileWriter('logs/train', sess.graph)
val_writer = tf.summary.FileWriter('logs/validation', sess.graph)
else:
train_writer = tf.summary.FileWriter('logs/train')
val_writer = tf.summary.FileWriter('logs/validation')
#Set up summary nodes
if (tensorboard_weight_histograms):
for param in network.all_params:
if hasattr(tf, 'summary') and hasattr(tf.summary, 'histogram'):
logging.info('Param name %s' % param.name)
tf.summary.histogram(param.name, param)
if hasattr(tf, 'summary') and hasattr(tf.summary, 'histogram'):
tf.summary.scalar('cost', cost)
merged = tf.summary.merge_all()
#Initalize all variables and summaries
tl.layers.initialize_global_variables(sess)
logging.info("Finished! use $tensorboard --logdir=logs/ to start server")
logging.info("Start training the network ...")
start_time_begin = time.time()
tensorboard_train_index, tensorboard_val_index = 0, 0
for epoch in range(n_epoch):
start_time = time.time()
loss_ep = 0
n_step = 0
for X_train_a, y_train_a in iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update(network.all_drop) # enable noise layers
loss, _ = sess.run([cost, train_op], feed_dict=feed_dict)
loss_ep += loss
n_step += 1
loss_ep = loss_ep / n_step
if tensorboard and hasattr(tf, 'summary'):
if epoch + 1 == 1 or (epoch + 1) % tensorboard_epoch_freq == 0:
for X_train_a, y_train_a in iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
dp_dict = dict_to_one(network.all_drop) # disable noise layers
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update(dp_dict)
result = sess.run(merged, feed_dict=feed_dict)
train_writer.add_summary(result, tensorboard_train_index)
tensorboard_train_index += 1
if (X_val is not None) and (y_val is not None):
for X_val_a, y_val_a in iterate.minibatches(X_val, y_val, batch_size, shuffle=True):
dp_dict = dict_to_one(network.all_drop) # disable noise layers
feed_dict = {x: X_val_a, y_: y_val_a}
feed_dict.update(dp_dict)
result = sess.run(merged, feed_dict=feed_dict)
val_writer.add_summary(result, tensorboard_val_index)
tensorboard_val_index += 1
if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
if (X_val is not None) and (y_val is not None):
logging.info("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
if eval_train is True:
train_loss, train_acc, n_batch = 0, 0, 0
for X_train_a, y_train_a in iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
dp_dict = dict_to_one(network.all_drop) # disable noise layers
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update(dp_dict)
if acc is not None:
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
train_acc += ac
else:
err = sess.run(cost, feed_dict=feed_dict)
train_loss += err
n_batch += 1
logging.info(" train loss: %f" % (train_loss / n_batch))
if acc is not None:
logging.info(" train acc: %f" % (train_acc / n_batch))
val_loss, val_acc, n_batch = 0, 0, 0
for X_val_a, y_val_a in iterate.minibatches(X_val, y_val, batch_size, shuffle=True):
dp_dict = dict_to_one(network.all_drop) # disable noise layers
feed_dict = {x: X_val_a, y_: y_val_a}
feed_dict.update(dp_dict)
if acc is not None:
err, ac = sess.run([cost, acc], feed_dict=feed_dict)
val_acc += ac
else:
err = sess.run(cost, feed_dict=feed_dict)
val_loss += err
n_batch += 1
logging.info(" val loss: %f" % (val_loss / n_batch))
if acc is not None:
logging.info(" val acc: %f" % (val_acc / n_batch))
else:
logging.info(
"Epoch %d of %d took %fs, loss %f" % (epoch + 1, n_epoch, time.time() - start_time, loss_ep)
)
logging.info("Total training time: %fs" % (time.time() - start_time_begin))
```
while "eval_train" is set as True , just compute train_acc/train_loss again, and then val_loss/val_acc, and log it? Then i confused...
| closed | 2018-07-02T02:26:13Z | 2018-09-05T17:28:31Z | https://github.com/tensorlayer/TensorLayer/issues/719 | [] | ilibx | 1 |
huggingface/datasets | computer-vision | 7,107 | load_dataset broken in 2.21.0 | ### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
used to work till 2.20.0 but doesn't work in 2.21.0
In 2.20.0:

in 2.21.0:

### Steps to reproduce the bug
1. Spin up a new Google Colab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. Will throw an error.
### Expected behavior
Try steps 1-5 again, but with `datasets` version 2.20.0; it will work.
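Until this is fixed upstream, a simple guard can flag the affected version (the >= 2.21.0 bound reflects only this report; whether later releases fix it is not known here):

```python
def parse_version(v: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_affected(installed: str) -> bool:
    """True for the datasets release where this report sees the regression."""
    return parse_version(installed) >= parse_version("2.21.0")

print(is_affected("2.20.0"))  # False, the version the report says still works
print(is_affected("2.21.0"))  # True, the version that breaks load_dataset here
```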
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
| closed | 2024-08-16T14:59:51Z | 2024-08-18T09:28:43Z | https://github.com/huggingface/datasets/issues/7107 | [] | anjor | 4 |
airtai/faststream | asyncio | 1,415 | Bug: TestRabbitBroker does not throw error when `rpc` and `reply_to` are both provided | **Describe the bug**
When using the `TestRabbitBroker` with the argument `with_real=False`, the command `broker.publish()` will allow the simultaneous use of the arguments `rpc` and `reply_to`, even though this should trigger a `SetupError`.
**How to reproduce**
Save the following script as `example.py`, then run
```
$ pytest example.py
```
```python
import pytest
from faststream import FastStream
from faststream.rabbit import RabbitBroker, TestRabbitBroker
broker = RabbitBroker("amqp://guest:guest@localhost:5672/")
app = FastStream(broker)
@broker.subscriber("in-queue")
async def handle_msg(msg: str) -> str:
return f"Received message: {msg!r}"
@pytest.mark.asyncio
async def test_rpc():
async with TestRabbitBroker(broker, with_real=False) as br:
response = await br.publish(
"Hello there!",
"in-queue",
rpc=True,
reply_to="reponse-queue",
)
assert response == "Received message: 'Hello there!'"
```
**Expected behavior**
I expect FastStream to throw an exception (which it does when using `with_real=True`):
```
faststream.exceptions.SetupError: You should use `reply_to` to send response to
long-living queue and `rpc` to get response in sync mode.
```
**Observed behavior**
There is no error, and the test passes even though it should fail.
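For comparison, the check that `with_real=True` performs can be sketched as a plain function (the error message is copied from the exception text above; the function itself is my sketch, not FastStream's actual code):

```python
class SetupError(Exception):
    """Stand-in for faststream.exceptions.SetupError."""

def validate_publish_kwargs(rpc: bool = False, reply_to: str = "") -> None:
    """Reject the rpc + reply_to combination, as the real broker should."""
    if rpc and reply_to:
        raise SetupError(
            "You should use `reply_to` to send response to long-living queue "
            "and `rpc` to get response in sync mode."
        )

validate_publish_kwargs(rpc=True)                    # fine on its own
validate_publish_kwargs(reply_to="response-queue")   # fine on its own
try:
    validate_publish_kwargs(rpc=True, reply_to="response-queue")
except SetupError as exc:
    print(f"raised as expected: {exc}")
```

With `with_real=False`, no equivalent check appears to run, which is the inconsistency reported here.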
**Environment**
```bash
$ faststream -v
Running FastStream 0.5.3 with CPython 3.12.3 on Darwin
```
| closed | 2024-05-02T10:19:05Z | 2024-05-03T05:11:54Z | https://github.com/airtai/faststream/issues/1415 | [
"bug"
] | maxalbert | 0 |
matterport/Mask_RCNN | tensorflow | 2,586 | Problem in implementation | I am trying to run the Mask_RCNN demo and a pre-trained custom model on my dataset.
I always get this error message:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-2-65dd16d484db> in <module>()
     14 sys.path.append(ROOT_DIR)  # To find local version of the library
     15 from mrcnn import utils
---> 16 import mrcnn.model as modellib
     17 from mrcnn import visualize
     18 # Import COCO config
/content/Mask_RCNN/mrcnn/model.py in <module>()
    253
    254
--> 255 class ProposalLayer(KE.Layer):
    256     """Receives anchor scores and selects a subset to pass as proposals
    257     to the second stage. Filtering is done based on anchor scores and
AttributeError: module 'keras.engine' has no attribute 'Layer'
```
which occurred while running the command `import mrcnn.model as modellib`.
Can anyone explain what exactly the problem is, and how I can solve it? I would be very thankful.
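For what it's worth, a workaround often suggested for this class of error is to alias `Layer` back onto `keras.engine` before importing `mrcnn.model`, since newer Keras releases no longer expose it there. This shim is an assumption on my part, not an official fix, and the stand-in modules below exist only so the pattern can be shown without Keras installed:

```python
import types

def ensure_layer_alias(engine_mod, layers_mod):
    """Alias `Layer` onto the engine module if newer Keras no longer exposes it."""
    if not hasattr(engine_mod, "Layer"):
        engine_mod.Layer = layers_mod.Layer
    return engine_mod

# Stand-ins for keras.engine / keras.layers, just to demonstrate the shim.
engine = types.ModuleType("keras.engine")
layers = types.ModuleType("keras.layers")
layers.Layer = type("Layer", (), {})

ensure_layer_alias(engine, layers)
print(hasattr(engine, "Layer"))  # True, so mrcnn's `KE.Layer` lookup would now succeed
```

In a real environment the same pattern would be applied to the actual `keras.engine` and `keras.layers` modules before `import mrcnn.model as modellib`; pinning an older TensorFlow/Keras pair is the other commonly reported route.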
| open | 2021-06-03T12:25:38Z | 2021-08-18T14:06:29Z | https://github.com/matterport/Mask_RCNN/issues/2586 | [] | alaa-shubbak | 7 |
horovod/horovod | machine-learning | 3,903 | Horovod distributed optimizer is not working. | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.12.0
3. Horovod version: 0.27.0
**Bug report:**
The Horovod distributed optimizer is not working. I have tried `SGD`, the base `Optimizer`, etc., but nothing works.

| closed | 2023-04-27T06:59:02Z | 2023-04-27T09:20:47Z | https://github.com/horovod/horovod/issues/3903 | [
"bug"
] | Shup-46 | 1 |
MagicStack/asyncpg | asyncio | 753 | asyncpg==0.22.0: TypeError: _execute() got an unexpected keyword argument 'record_class' | * **asyncpg version**: asyncpg==0.22.0
* **PostgreSQL version**: PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**:
* **Python version**: Python 3.8.5 (v3.8.5:580fbb018f, Jul 20 2020, 12:11:27)
* **Platform**: OS X
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: yes
After updating to version 0.22.0 I can't run any select:
```
"""
docker run --rm --detach \
--env POSTGRES_USER=user \
--env POSTGRES_PASSWORD=hackme \
--env POSTGRES_DB=db \
--publish 5432:5432 postgres
"""
import asyncio
from asyncpgsa import PG
async def main():
pg = PG()
await pg.init('postgresql://user:hackme@localhost/db')
await pg.fetch('SELECT 1')
asyncio.run(main())
``` | closed | 2021-04-30T07:56:50Z | 2021-04-30T08:04:03Z | https://github.com/MagicStack/asyncpg/issues/753 | [] | alvassin | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,195 | The main language is always English, despite the change to another language | ### What version of GlobaLeaks are you using?
5.08
### What browser(s) are you seeing the problem on?
Chrome
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
Globaleaks is installed on a VPS running Debian 11
Although the only language set is Polish, in the application on html lang="en’
This causes the Chrome browser, to want to translate the page which is confusing for users and additionally the translation is wrong
How can this be changed?
Please help
### Proposed solution
_No response_ | closed | 2024-09-20T08:41:58Z | 2024-09-21T10:27:13Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4195 | [] | dawnog | 3 |
autogluon/autogluon | computer-vision | 4,291 | [timeseries] Improve error message when specifying an invalid path to load | As of v1.1, the error message produced by TimeSeriesPredictor when loaded with an invalid path is confusing. We should make sure that users understand that the `predictor.pkl` could not be found, as that is more important than not finding the `version.txt` file.
## TimeSeries
### Code
```
from autogluon.timeseries import TimeSeriesPredictor
TimeSeriesPredictor.load("some_invalid_path/")
```
### Exception
```
WARNING: Could not find version file at "some_invalid_path/version.txt".
This means that the predictor was fit in an AutoGluon version `<=0.7.0`.
############################## WARNING ##############################
WARNING: AutoGluon version differs from the version used to create the predictor! This may lead to instability and it is highly recommended the predictor be loaded with the exact AutoGluon version it was created with. AutoGluon does not support backwards compatibility.
Predictor Version: Unknown (Likely <=0.7.0)
Current Version: 1.1.1b20240601
############################## WARNING ##############################
Traceback (most recent call last):
File "/home/ubuntu/workspace/code/scratch/scratch_run.py", line 77, in <module>
predictor = TimeSeriesPredictor.load("some_invalid_path/")
File "/home/ubuntu/workspace/code/autogluon/timeseries/src/autogluon/timeseries/predictor.py", line 1110, in load
check_saved_predictor_version(
File "/home/ubuntu/workspace/code/autogluon/common/src/autogluon/common/utils/utils.py", line 162, in check_saved_predictor_version
raise AssertionError(
AssertionError: Predictor was created on version Unknown (Likely <=0.7.0) but is being loaded with version 1.1.1. Please ensure the versions match to avoid instability. While it is NOT recommended, this error can be bypassed by specifying `require_version_match=False`. Exceptions encountered after setting `require_version_match=False` may be very cryptic, and in most cases mean that the predictor is fully incompatible with the installed version.
```
## Tabular
### Code
```
from autogluon.tabular import TabularPredictor
TabularPredictor.load("some_invalid_path/")
```
### Exception
```
WARNING: Could not find version file at "some_invalid_path/version.txt".
This means that the predictor was fit in an AutoGluon version `<=0.3.1`.
Traceback (most recent call last):
File "/home/ubuntu/workspace/code/scratch/scratch_run.py", line 79, in <module>
predictor = TabularPredictor.load("some_invalid_path/")
File "/home/ubuntu/workspace/code/autogluon/tabular/src/autogluon/tabular/predictor/predictor.py", line 4440, in load
predictor = cls._load(path=path)
File "/home/ubuntu/workspace/code/autogluon/tabular/src/autogluon/tabular/predictor/predictor.py", line 4363, in _load
predictor: TabularPredictor = load_pkl.load(path=os.path.join(path, cls.predictor_file_name))
File "/home/ubuntu/workspace/code/autogluon/common/src/autogluon/common/loaders/load_pkl.py", line 43, in load
with compression_fn_map[compression_fn]["open"](validated_path, "rb", **compression_fn_kwargs) as fin:
FileNotFoundError: [Errno 2] No such file or directory: 'some_invalid_path/predictor.pkl'
```
| closed | 2024-06-25T04:47:10Z | 2024-08-19T07:23:55Z | https://github.com/autogluon/autogluon/issues/4291 | [
"API & Doc",
"module: timeseries",
"priority: 1"
] | Innixma | 0 |
mwaskom/seaborn | data-science | 2,796 | ImportError: DLL load failed while importing qhull: The specified module could not be found. | Getting this error when importing seaborn
ImportError: DLL load failed while importing qhull: The specified module could not be found.
```
>>> import seaborn as sns
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\seaborn\__init__.py", line 2, in <module>
    from .rcmod import *  # noqa: F401,F403
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\seaborn\rcmod.py", line 7, in <module>
    from . import palettes
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\seaborn\palettes.py", line 9, in <module>
    from .utils import desaturate, get_color_cycle
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\seaborn\utils.py", line 10, in <module>
    from scipy import stats
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\scipy\stats\__init__.py", line 441, in <module>
    from .stats import *
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\scipy\stats\stats.py", line 37, in <module>
    from scipy.spatial.distance import cdist
  File "C:\WPy64-3980\python-3.9.8.amd64\lib\site-packages\scipy\spatial\__init__.py", line 98, in <module>
    from .qhull import *
ImportError: DLL load failed while importing qhull: The specified module could not be found.
```
Using seaborn-0.11.0-py3-none-any.whl on Python 3.9.8 (tags/v3.9.8:bb3fdcf, Nov 5 2021, 20:48:33) [MSC v.1929 64 bit (AMD64)] on win32 | closed | 2022-05-11T21:08:13Z | 2022-05-14T08:34:32Z | https://github.com/mwaskom/seaborn/issues/2796 | [] | Code4SAFrankie | 1 |
polarsource/polar | fastapi | 5,120 | Subscriptions and benefits not revoked when deleting a Customer | When (soft-)deleting a Customer, subscriptions and benefits are **not** revoked! | closed | 2025-02-27T12:59:55Z | 2025-02-28T09:38:16Z | https://github.com/polarsource/polar/issues/5120 | [
"bug"
] | frankie567 | 0 |
Textualize/rich | python | 3,538 | [BUG] Progress bar doesn't work correctly under MPI | - [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
When using rich with MPI the progress bar doesn't work correctly.
To reproduce:
1. `mpirun -n 1 python -m rich.progress`
2. press ctrl+c
The results differ depending on which mpi implementation one uses.
- openmpi (4.1.6): the program exits without new line and no cursor visible (see screenshot below)
- mpich (4.1.2): no progress bar can be seen at all
Note that under openmpi this is not related to [this closed issue](https://github.com/Textualize/rich/issues/127#issue-646602229), because running `mpirun -n 1 python -c "import shutil; print(shutil.get_terminal_size())"` returns correct values.

I first discovered this when running `pymc` (5.17). Running a sampler with mpirun results in no progress bar visible until the sampler finishes. I believe this is related to the issue described above.
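One more data point that may help triage: under `mpirun`, stdout is sometimes a pipe rather than a TTY, which changes how terminal-aware libraries behave. A stdlib-only diagnostic (my addition, not part of rich) that gathers roughly the signals terminal detection consults:

```python
import os
import shutil
import sys

def terminal_report() -> dict:
    """Collect the signals terminal-detection logic typically consults."""
    return {
        "size": tuple(shutil.get_terminal_size()),
        "stdout_isatty": sys.stdout.isatty(),
        "stderr_isatty": sys.stderr.isatty(),
        "TERM": os.environ.get("TERM"),
    }

print(terminal_report())
```

Running this both directly and under `mpirun -n 1 python ...` should show whether the launcher is replacing the TTY with a pipe.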
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
I'm on nixos. This was tested on kitty terminal (and partially on xterm).
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```
python -m rich.diagnose
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=175 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 48 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=175, height=48), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=175, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=48, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=175, height=48) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 175 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-kitty', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
```
```
pip freeze | grep rich
rich @ file:///home/conda/feedstock_root/build_artifacts/rich_1729622917073/work/dist
```
</details>
| open | 2024-10-24T14:09:21Z | 2024-10-24T14:18:07Z | https://github.com/Textualize/rich/issues/3538 | [
"Needs triage"
] | jwnki | 2 |
man-group/arctic | pandas | 810 | MemoryError when saving a dataframe with large strings to TickStore | #### Arctic Version
1.79.2
#### Arctic Store
TickStore
#### Platform and version
Python 3.6.7, Linux Mint 19 Cinnamon 64-bit
#### Description of problem and/or code sample that reproduces the issue
Hi,
I'm trying to save the following data:
https://drive.google.com/file/d/1dWWBNvx6vjyNK4kjZTVL4-YM0fmWxT5b/view?usp=sharing
to TickStore, code:
https://pastebin.com/jEqXxq2t
and getting a MemoryError, see the stack traces:
https://pastebin.com/Uy4pYAfH
I'm quite new to arctic, so I might be doing something wrong, and I would appreciate it if you could guide me with this.
Side question:
Considering the nature of my data (2 columns: a timestamp and a long string/JSON document), what is the best way to store this using arctic?
Thanks,
Alan | closed | 2019-08-04T09:01:02Z | 2019-09-08T14:51:10Z | https://github.com/man-group/arctic/issues/810 | [] | alanbogossian | 12 |
adamerose/PandasGUI | pandas | 36 | Variable Dragging not working on osx | I love this project. I can see myself using it constantly. It really fits a niche of working between spreadsheet and notebook. I actually think you could charge for this when it's mature.
Running miniconda python on macOS Catalina, and when trying to plot, the variables won't drag over. Let me know the best way to communicate my system state, and I can give you what you need.
Here's my setup.
```bash
λ which python
/Users/rob/miniconda/envs/viz/bin/python
•_simbiz•[mac_pro][19:17:04]( master )[ ~/rob/repos/sales_ops ]
λ python --version
Python 3.6.8 :: Anaconda, Inc.
•_simbiz•[mac_pro][19:17:08]( master )[ ~/rob/repos/sales_ops ]
λ pip freeze | grep gui
pandasgui==0.2.4.3
λ pip freeze | grep -i pyqt
PyQt5==5.15.1
PyQt5-sip==12.8.1
PyQtWebEngine==5.15.1
``` | open | 2020-10-15T23:20:31Z | 2020-11-26T19:49:40Z | https://github.com/adamerose/PandasGUI/issues/36 | [] | robdmc | 20 |
jessevig/bertviz | nlp | 18 | Horizontal head view feature | Hi, thanks for the great visualization tool!
I'm just wondering whether we could have a feature that renders the head view in the **horizontal** direction? The reason is that it's more suitable to show the sequence of tokens horizontally for languages like Chinese, Japanese, or Korean.

In the above example, typical sentences in Chinese take about 6,70 characters but it already uses a lot of space showing 10 of them in the current head view.
Thanks again for the great tool!
| open | 2019-08-31T06:44:42Z | 2019-09-21T19:17:49Z | https://github.com/jessevig/bertviz/issues/18 | [
"enhancement"
] | leemengtw | 1 |
slackapi/python-slack-sdk | asyncio | 1,323 | Ability to render previews of message/post content as html or similar. | I'm not sure if this is possible, but I'd like to be able to validate that my message formatting is correct by programmatically rendering previews of posts without sending them to the Slack server. Today, if we want to test that rendering is correct, we have to send sample messages to a dummy channel. We'd like to be able to confirm formatting locally.
It looks like the desired capability exists in the [Block Kit Builder Web Tool](https://app.slack.com/block-kit-builder/TFG99TU9K), but I'd like to perform the equivalent previews locally using the SDK.
Is this a possibility today by chance, or are there other workarounds to get similar capability? | closed | 2023-02-08T01:13:53Z | 2023-02-08T06:21:12Z | https://github.com/slackapi/python-slack-sdk/issues/1323 | [
"question"
] | aaronsteers | 2 |
nok/sklearn-porter | scikit-learn | 18 | Errors when porting LinearSVC model | Sorry to bother you again, but when attempting to run:
`python3 -m sklearn_porter -i model_notokenizer.pkl -l java`, I get:
```
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.5/dist-packages/sklearn_porter/__main__.py", line 71, in <module>
main()
File "/usr/local/lib/python3.5/dist-packages/sklearn_porter/__main__.py", line 49, in main
porter = Porter(model, language=language)
File "/usr/local/lib/python3.5/dist-packages/sklearn_porter/Porter.py", line 65, in __init__
raise ValueError(error)
ValueError: The given model 'Pipeline(memory=None,
steps=[('vect', TfidfVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=0.5, max_features=None, min_df=0.001,
ngram_range=(1, 1), norm='l2', preprocessor=None, smooth_idf=True...ax_iter=1000,
multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
verbose=0))])' isn't supported.
```
I'm running python 3.5.2, numpy 1.13.1, and sklearn 0.19.0. | closed | 2017-08-21T23:50:26Z | 2017-10-05T23:04:19Z | https://github.com/nok/sklearn-porter/issues/18 | [
"bug"
] | FakeNameSE | 9 |
hayabhay/frogbase | streamlit | 70 | "Download error" when attempting to upload from local storage | DownloadError: [0;31mERROR:[0m [generic] None: Unable to download webpage: <urlopen error unknown url type: c> (caused by URLError('unknown url type: c'))
Traceback:
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "C:\Users\tzundo\Documents\frogbase (ai transcribe)\frogbase-2.0.0a1\ui\01_🏠_Home.py", line 103, in <module>
fb.add(sources, **opts).transcribe(ignore_captioned=True)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tzundo\Documents\frogbase (ai transcribe)\frogbase-2.0.0a1\frogbase\core.py", line 237, in add
self._media_buffer = self.media.add(sources, **opts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tzundo\Documents\frogbase (ai transcribe)\frogbase-2.0.0a1\frogbase\media.py", line 499, in add
added_media += self._add_from_web(source, **opts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tzundo\Documents\frogbase (ai transcribe)\frogbase-2.0.0a1\frogbase\media.py", line 264, in _add_from_web
ydl.download(url)
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yt_dlp\YoutubeDL.py", line 3485, in download
self.__download_wrapper(self.extract_info)(
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yt_dlp\YoutubeDL.py", line 3460, in wrapper
res = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yt_dlp\YoutubeDL.py", line 1549, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yt_dlp\YoutubeDL.py", line 1578, in wrapper
self.report_error(str(e), e.format_traceback())
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yt_dlp\YoutubeDL.py", line 1042, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "C:\Users\tzundo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\yt_dlp\YoutubeDL.py", line 981, in trouble
raise DownloadError(message, exc_info) | open | 2023-07-30T13:59:26Z | 2023-07-30T13:59:26Z | https://github.com/hayabhay/frogbase/issues/70 | [] | Tzundoku | 0 |
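The root cause appears to be that a local Windows path (e.g. `C:\Users\...`) is handed to yt-dlp as if it were a URL: `urlopen` parses the drive letter `C:` as a one-letter URL scheme, hence `unknown url type: c`. A hedged sketch of a guard the upload path could apply before routing a source to the downloader; the function name and heuristic are hypothetical, not frogbase API:

```python
from pathlib import Path
from urllib.parse import urlparse

def looks_like_local_file(source: str) -> bool:
    """Heuristic: treat drive-letter 'schemes' and existing paths as local files."""
    scheme = urlparse(source).scheme
    # r"C:\Users\me\clip.mp4" parses with scheme "c"; real URL schemes
    # ("http", "https", "file", ...) are at least two characters long.
    return len(scheme) <= 1 or Path(source).exists()

print(looks_like_local_file(r"C:\Users\me\clip.mp4"))      # True
print(looks_like_local_file("https://example.com/v.mp4"))  # False
```

Sources that pass this check could be copied into the library directly instead of being passed to `ydl.download(url)`.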
aio-libs/aiomysql | sqlalchemy | 577 | SQLAlchemy 1.4 statements with .in_ operator not compiled correctly | In SQLAlchemy version 1.4 they way statements using the `.in_` operator are compiled was changed to enable better caching. However this means statements need to be compiled with additional compile args in order to render correctly. You can read about the change here:
https://docs.sqlalchemy.org/en/14/changelog/migration_14.html#change-4645
The issue this manifests itself as:
```python3
stmt = select([my_table]).where(my_table.c.column.in_([1, 2]))
async with database.acquire() as conn:
result = await conn.execute(stmt)
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[POSTCOMPILE_column_1])
``` | open | 2021-04-17T21:04:38Z | 2023-06-11T20:02:53Z | https://github.com/aio-libs/aiomysql/issues/577 | [
"bug",
"sqlalchemy"
] | Askaholic | 5 |
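For reference, the `[POSTCOMPILE_...]` placeholder in the failing SQL is SQLAlchemy 1.4's deferred IN expansion; a driver layer that is not 1.4-aware ends up sending it to MySQL verbatim. The expansion step can be reproduced directly with the compiler, assuming SQLAlchemy 1.4+ is installed (table and column names below are illustrative):

```python
from sqlalchemy import column, select, table

t = table("my_table", column("col"))
stmt = select(t).where(t.c.col.in_([1, 2]))

# Default string compilation keeps the deferred placeholder, roughly:
#   SELECT ... WHERE my_table.col IN (__[POSTCOMPILE_col_1])
print(stmt.compile())

# Asking the compiler to run the post-compile step expands it into
# ordinary bind parameters, which any driver can execute, roughly:
#   SELECT ... WHERE my_table.col IN (:col_1_1, :col_1_2)
expanded = stmt.compile(compile_kwargs={"render_postcompile": True})
print(expanded)
```

This is what the `execute()` path in aiomysql's SQLAlchemy integration would need to do (or delegate to the dialect) before handing the statement to pymysql.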
ansible/awx | django | 15,272 | Allow awx cli to use the API searching features | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
Enhancement to Existing Feature
### Feature Summary
Unless I'm misreading the docs, there doesn't seem to be a way to use the API's search lookup-type features on resources.
For instance
`awx credentials list --name "exact match" `
could be
`awx credentials list --name__istartswith "content"`
or
`awx credentials list --name__icontains "content"`
### Select the relevant components
- [ ] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [X] CLI
- [ ] Other
### Steps to reproduce
.
### Current results
.
### Sugested feature result
.
### Additional information
_No response_ | open | 2024-06-14T13:40:26Z | 2024-07-25T07:56:20Z | https://github.com/ansible/awx/issues/15272 | [
"type:enhancement",
"needs_triage",
"community"
] | nodje | 2 |
PaddlePaddle/models | computer-vision | 5,107 | TypeError: a bytes-like object is required, not 'str' | My Python version is 3.7.7, and I get the following error:
```
(base) weidawang@weidawang-TUF-Gaming-FX506LU-FX506LU:~/Repo/PaddlePaddle/models/PaddleCV/video/data/dataset/tall$ python gen_infer.py
Traceback (most recent call last):
File "gen_infer.py", line 37, in <module>
select_name = movies_sentence[0][0].split('.')[0]
TypeError: a bytes-like object is required, not 'str'
```
| open | 2020-12-18T06:33:22Z | 2024-02-26T05:09:40Z | https://github.com/PaddlePaddle/models/issues/5107 | [] | wwdok | 2 |
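The traceback above is the classic Python 3 bytes/str mismatch: the sentence data is loaded as byte strings, and `bytes.split()` refuses a `str` separator. A self-contained illustration of the failure and two possible fixes (the filename here is made up for the example; the real values come from `movies_sentence`):

```python
name = b"s13-d21.avi"  # movies_sentence entries arrive as bytes under Python 3

try:
    name.split('.')  # str separator on a bytes object raises the reported error
except TypeError as exc:
    print(exc)       # a bytes-like object is required, not 'str'

# Fix 1: decode to str first.
select_name = name.decode('utf-8').split('.')[0]
print(select_name)           # s13-d21

# Fix 2: keep everything as bytes and use a bytes separator.
print(name.split(b'.')[0])   # b's13-d21'
```

In `gen_infer.py` that would mean changing line 37 to decode the entry (or split on `b'.'`) before indexing.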
babysor/MockingBird | deep-learning | 784 | Poor speech synthesis quality with aishell3 | Preparing the encoder, the synthesizer and the vocoder...
Loaded encoder "pretrained.pt" trained to step 1594501
Synthesizer using device: cuda
Building hifigan
Loading 'vocoder/saved_models/pretrained/g_hifigan.pt'
Complete.
Removing weight norm...
Trainable Parameters: 0.000M
Loaded synthesizer "mandarin.pt" trained to step 75000
+----------+---+
| Tacotron | r |
+----------+---+
| 75k | 2 |
+----------+---+
Read ['江苏修鞋奶奶婉拒捐款一人养活患病老伴和儿子']
Synthesizing ['jiang1 su1 xiu1 xie2 nai3 nai3 wan3 ju4 juan1 kuan3 yi1 ren2 yang3 huo2 huan4 bing4 lao3 ban4 he2 er2 zi5']
| Generating 1/1
Done.
I used the synthesizer downloaded from Baidu Netdisk; what could be the problem? | open | 2022-11-19T07:15:01Z | 2022-11-19T07:15:01Z | https://github.com/babysor/MockingBird/issues/784 | [] | JJun-Guo | 0 |
ckan/ckan | api | 8,379 | CKAN login breaks when setting to beaker.session.data_serializer cookie setting to json | ## CKAN version
2.10.4
## Describe the bug
When upgrading from CKAN 2.9.11 to 2.10.4 with the recommended settings from the changelog for Beaker and Flask authentication, login breaks and shows a blank page (Chrome) or nothing happens (Firefox). CKAN crashes with the following error message: "TypeError: Object of type datetime is not JSON serializable" (see full error message below). We see two cookies set when trying to login, ckan and remember_token (see image below).
Setting beaker.session.data_serializer to json seems to break the setup; when it is not set, login works fine. It seems that within CKAN a date is added to the cookie which cannot be JSON serialized.
I followed the changelog (https://docs.ckan.org/en/2.10/changelog.html), using the settings from the 2.10.1 migration guide.
### Steps to reproduce
This setup does not setup the flask remember_me cookie, as the issues seems to lie within the beaker session cookie.
- git clone https://github.com/ckan/ckan-docker.git
- cd ckan-docker
- cp .env.example .env
- Edit .env and set
- line 30 to: CKAN_VERSION=2.10.4
- line 32 to: CKAN_SITE_URL=http://localhost:5000
- Add after line 32:
CKAN___BEAKER__SESSION__KEY=ckan
CKAN___BEAKER__SESSION__TYPE=cookie
CKAN___BEAKER__SESSION__SECRET=arandomstring
CKAN___BEAKER__SESSION__VALIDATE_KEY=arandomstring
CKAN___BEAKER__SESSION__DATA_SERIALIZER=json
CKAN___BEAKER__SESSION__COOKIE_EXPIRES=true
CKAN___BEAKER__SESSION__SAVE_ACCESSED_TIME=false
CKAN___BEAKER__SESSION__SECURE=true
CKAN___BEAKER__SESSION__HTTPONLY=true
CKAN___BEAKER__SESSION__SAMESITE=strict
- docker compose -f docker-compose.dev.yml --env-file .env up --build -d
- Navigate to http://localhost:5000/user/login
- Try and log in as ckan_admin:test1234
- Login fails with a 500 and the error TypeError: Object of type datetime is not JSON serializable. The whole of localhost:5000 is now unavailable.
- Manually delete the ckan cookie and localhost is available again.
### Expected behavior
During the update to 2.10.4, we want to set data_serializer to json for security issues around pickle. We expected no login issues after setting the repoze.who settings to Flask and the beaker settings to json.
### Additional details
**Error message when after login**
ckan | Debugging middleware caught exception in streamed response at a point where response headers were already sent.
ckan | Traceback (most recent call last):
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/flask/app.py", line 2080, in wsgi_app
ckan | return response(environ, start_response)
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 632, in __call__
ckan | start_response(status, headers)
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/middleware.py", line 150, in session_start_response
ckan | session.persist()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 880, in persist
ckan | self._session().save()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 723, in save
ckan | self._create_cookie()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 737, in _create_cookie
ckan | val = self._encrypt_data()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 381, in _encrypt_data
ckan | data = self.serializer.dumps(session_data)
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/util.py", line 469, in dumps
ckan | return zlib.compress(json.dumps(data).encode('utf-8'))
ckan | File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
ckan | return _default_encoder.encode(obj)
ckan | File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
ckan | chunks = self.iterencode(o, _one_shot=True)
ckan | File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
ckan | return _iterencode(o, 0)
ckan | File "/usr/lib/python3.10/json/encoder.py", line 179, in default
ckan | raise TypeError(f'Object of type {o.__class__.__name__} '
ckan | TypeError: Object of type datetime is not JSON serializable
ckan | 2024-07-30 09:07:05,063 INFO [ckan.config.middleware.flask_app] 404 /webassets/webassets-external/activity.css.map render time 0.016 seconds
ckan | Debugging middleware caught exception in streamed response at a point where response headers were already sent.
ckan | Traceback (most recent call last):
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/flask/app.py", line 2080, in wsgi_app
ckan | return response(environ, start_response)
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 632, in __call__
ckan | start_response(status, headers)
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/middleware.py", line 150, in session_start_response
ckan | session.persist()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 880, in persist
ckan | self._session().save()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 723, in save
ckan | self._create_cookie()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 737, in _create_cookie
ckan | val = self._encrypt_data()
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/session.py", line 381, in _encrypt_data
ckan | data = self.serializer.dumps(session_data)
ckan | File "/usr/lib/ckan/venv/lib/python3.10/site-packages/beaker/util.py", line 469, in dumps
ckan | return zlib.compress(json.dumps(data).encode('utf-8'))
ckan | File "/usr/lib/python3.10/json/__init__.py", line 231, in dumps
ckan | return _default_encoder.encode(obj)
ckan | File "/usr/lib/python3.10/json/encoder.py", line 199, in encode
ckan | chunks = self.iterencode(o, _one_shot=True)
ckan | File "/usr/lib/python3.10/json/encoder.py", line 257, in iterencode
ckan | return _iterencode(o, 0)
ckan | File "/usr/lib/python3.10/json/encoder.py", line 179, in default
ckan | raise TypeError(f'Object of type {o.__class__.__name__} '
ckan | TypeError: Object of type datetime is not JSON serializable
**Cookies set after login:**

| open | 2024-07-30T10:58:14Z | 2025-03-13T20:20:25Z | https://github.com/ckan/ckan/issues/8379 | [] | rosinaderks | 4 |
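The root cause is easy to isolate with the standard library: somewhere a raw datetime lands in the session payload (the traceback doesn't say where; the key name below is only a guess at Beaker's internal timestamps), and Beaker's default pickle serializer accepts it while the `json` serializer cannot encode it:

```python
import datetime
import json
import pickle

# A session payload containing a timestamp, as the traceback implies.
session_data = {"_creation_time": datetime.datetime.now(datetime.timezone.utc)}

# The default pickle serializer round-trips this without complaint:
blob = pickle.dumps(session_data)

# The json serializer is the one that crashes during _encrypt_data:
try:
    json.dumps(session_data)
except TypeError as exc:
    print(exc)  # Object of type datetime is not JSON serializable
```

So as long as an unserialized datetime reaches the cookie session, `beaker.session.data_serializer = json` will keep failing at login time, regardless of the other cookie settings.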
donnemartin/system-design-primer | python | 449 | System design primer | open | 2020-07-20T19:37:18Z | 2020-08-02T18:05:09Z | https://github.com/donnemartin/system-design-primer/issues/449 | [
"needs-review"
] | Orinfini | 0 | |
open-mmlab/mmdetection | pytorch | 11,736 | inference on images | I run the following script to perform inference on multiple images using a trained model (trained on custom data). I want to save only those images in which objects are detected.
Below is my inference code:
```
import os
from mmdet.apis import init_detector, inference_detector
import mmcv
import cv2
from mmdet.registry import VISUALIZERS
config_file = 'configs/_base_/custom2.py'
checkpoint_file = 'work_dirs/custom2/epoch_300.pth'
model = init_detector(config_file, checkpoint_file)
image_folder = 'images'
output_folder = 'outputs'
image_paths = [os.path.join(image_folder, img_name) for img_name in os.listdir(image_folder)]
visualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = model.dataset_meta
for img_path in image_paths:
image = mmcv.imread(img_path)
result = inference_detector(model, image)
os.makedirs(output_folder, exist_ok=True)
output_file = os.path.join(output_folder, os.path.basename(img_path))
visualizer.add_datasample(
'result',
cv2.cvtColor(image, cv2.COLOR_BGR2RGB),
data_sample=result,
draw_gt=False,
wait_time=0,
pred_score_thr=0.3,
out_file=output_file
)
``` | open | 2024-05-23T12:49:23Z | 2024-06-20T10:33:00Z | https://github.com/open-mmlab/mmdetection/issues/11736 | [] | Syed05 | 1 |
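To address the stated goal of saving only images in which objects are detected, the loop could gate on the prediction scores before calling the visualizer. A hedged sketch: in MMDetection 3.x the result is a `DetDataSample` whose `pred_instances.scores` holds per-box confidences (check this against your version); the thresholding logic itself is plain Python:

```python
def has_detections(scores, score_thr=0.3):
    """True if at least one predicted box clears the score threshold."""
    return any(float(s) >= score_thr for s in scores)

# Inside the loop, assuming MMDetection 3.x result objects:
#     scores = result.pred_instances.scores.tolist()
#     if not has_detections(scores, score_thr=0.3):
#         continue  # skip images with no confident detections
#     visualizer.add_datasample(..., out_file=output_file)

print(has_detections([0.12, 0.57]))           # True
print(has_detections([0.12], score_thr=0.3))  # False
```

Using the same threshold here as in `pred_score_thr` keeps the saved set consistent with what the visualizer actually draws.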
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 536 | [NOT A BUG] A silly mistake of mine: the cookie was fetched correctly, but the web UI kept reporting a cookie error and failing to parse | After fiddling for ages and digging through the issues, I found this comment from the developer: https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/491#issuecomment-2461989573
The demo video linked in the README (https://www.bilibili.com/video/BV1vE421j7NR/) tells you to note down the User-Agent, so I went ahead and changed the ua in config.yaml.
Changing it back to the ua that originally ships with config.yaml makes parsing work normally again. | closed | 2025-01-12T08:31:05Z | 2025-01-21T04:30:08Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/536 | [
"BUG",
"enhancement"
] | Readedd | 2 |
davidsandberg/facenet | tensorflow | 1,085 | Where are the export_embeddings.py output files saved? | closed | 2019-09-17T17:25:12Z | 2019-09-17T17:40:41Z | https://github.com/davidsandberg/facenet/issues/1085 | [] | yuanbit | 0 | |
dsdanielpark/Bard-API | nlp | 1 | Problem when executing Bard().get_answer(...) | Bard-API/bardapi/core.py", line 32, in _get_snim0e
return re.search(r"SNlM0e\":\"(.*?)\"", resp.text).group(1)
AttributeError: 'NoneType' object has no attribute 'group' | closed | 2023-05-14T20:47:01Z | 2024-03-05T08:22:29Z | https://github.com/dsdanielpark/Bard-API/issues/1 | [] | vipin211 | 44 |
SALib/SALib | numpy | 334 | Problem with seed in Saltelli.sample | I noticed a possible bug in the Saltelli.sample function.
In particular, the seed option does not seem to work properly (Python 3.6, Ubuntu).
I attached the following script:
```python
N=1
problem = {'num_vars': 3,
'names': ['x1','x2','x3'],
'bounds': [
[0,1],
[0,1],
[0,1]
]
}
from SALib.sample import saltelli
Sample = saltelli.sample(problem, N, calc_second_order=False, seed=None)
print(Sample)
```
As can be easily verified, the sampling always provides the same results (Sample) regardless of the "seed" setting.
| closed | 2020-07-22T16:47:34Z | 2020-11-08T08:21:50Z | https://github.com/SALib/SALib/issues/334 | [
"bug",
"clean up/maintenance"
] | gianlucamaracchini | 3 |
coqui-ai/TTS | python | 2,966 | [Feature request] simple script for the mass-data case | Hello. I have 10+ years of call-center calls with transcripts. Please provide a universal solution for training on that data. Should recordings be grouped by speaker, or is fully random sampling fine? Should noisy segments be cut out, or can all variants be used?
In Russia we have a few thousand hours of Radio Fantasy Club: an announcer reads fantasy books while a DJ backs him with wall-to-wall music in the background. One person has done this for 15+ years, several days every week. And the license is public (we only need to get the text part from the books).
I think everyone should be given a simple script for the popular types of personal archives.
| closed | 2023-09-18T21:58:13Z | 2023-09-28T10:05:36Z | https://github.com/coqui-ai/TTS/issues/2966 | [
"feature request"
] | slavonnet | 2 |
wagtail/wagtail | django | 12,770 | Sidebar search + ES + large number of pages in DB. Result in Elasticsearch terms query error | <!--
Found a bug? Please fill out the sections below. 👍
-->
### Issue Summary
The sidebar search for "Pages" (not images, documents nor users) crashes when large amount of pages present in the database AND using Elasticsearch as main search backend. This is a similar issue to #12349 where image search applied all image id's as a filter. @gasman picked this issues up when we notified on the support Slack channel, where we described how to reproduce the issue as well.
The query made to ES exceeds the maximum amount of term for a term query. Which was the same exact error in the image search, now fixed.
`RequestError(400, 'search_phase_execution_exception', 'failed to create query: The number of terms [376711] used in the Terms Query request has exceeded the allowed maximum of [65536]. This maximum can be set by changing the [index.max_terms_count] index level setting.')`
### Steps to Reproduce
The steps have NOT been tested.
1. Reduce the index.max_terms_count in ES to a more manageable amount
2. Have the number of pages in the DB more than the term limit allows
3. Search in the Wagtail sidebar search box.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: no
### Technical details
- Python version: 3.11.9
- Django version: 5.14
- Wagtail version: 6.3.2
- Browser version: Firefox 130.0.1
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| open | 2025-01-13T13:27:15Z | 2025-02-03T14:01:33Z | https://github.com/wagtail/wagtail/issues/12770 | [
"type:Bug",
"component:Search"
] | ruv-arnar | 20 |
tensorflow/datasets | numpy | 5,217 | [data request] <Berkeley DeepDrive Dataset(images)> | * Name of dataset: <Berkeley DeepDrive Dataset>
* URL of dataset: <https://bdd-data.berkeley.edu/portal.html>
* License of dataset: <Copyright ©2018. The Regents of the University of California (Regents). All Rights Reserved.
THIS SOFTWARE AND/OR DATA WAS DEPOSITED IN THE BAIR OPEN RESEARCH COMMONS REPOSITORY ON 1/1/2021
Permission to use, copy, modify, and distribute this software and its documentation for educational, research, and not-for-profit purposes, without fee and without a signed licensing agreement; and permission to use, copy, modify and distribute this software for commercial purposes (such rights not subject to transfer) to BDD and BAIR Commons members and their affiliates, is hereby granted, provided that the above copyright notice, this paragraph and the following two paragraphs appear in all copies, modifications, and distributions. Contact The Office of Technology Licensing, UC Berkeley, 2150 Shattuck Avenue, Suite 510, Berkeley, CA 94720-1620, (510) 643-7201, otl@berkeley.edu, http://ipira.berkeley.edu/industry-info for commercial licensing opportunities.
IN NO EVENT SHALL REGENTS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF REGENTS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
REGENTS SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE AND ACCOMPANYING DOCUMENTATION, IF ANY, PROVIDED HEREUNDER IS PROVIDED "AS IS". REGENTS HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.>
* Short description of dataset and use case(s): <
Short description of the BDD100K dataset and use cases:

Content:
- 100,000 diverse driving scene images: captures a wide range of real-world driving scenarios, including various weather conditions, locations, and traffic situations.
- Rich annotations: includes detailed object bounding boxes, lane markings, and drivable areas for comprehensive scene understanding.
- Multiple tasks: supports object detection, semantic segmentation, lane line detection, and more, enabling multifaceted research and development.

Common use cases:
- Autonomous driving research: used extensively for training and evaluating deep learning models for object detection, scene understanding, and control systems in autonomous vehicles.
- Computer vision research: serves as a benchmark for developing and testing new algorithms for tasks such as object segmentation, image classification, and image retrieval in various driving scenarios.
- Robotics research: facilitates research in navigation, path planning, and decision-making for autonomous robots operating in complex environments.
- Multitask learning: the diverse tasks within BDD100K make it ideal for exploring techniques that can learn multiple tasks simultaneously, potentially improving model efficiency and performance.>
Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize.
And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md). | open | 2023-12-27T07:37:21Z | 2024-01-07T19:05:48Z | https://github.com/tensorflow/datasets/issues/5217 | [
"dataset request"
] | Yashsharma009 | 1 |
ivy-llc/ivy | tensorflow | 28,746 | upsample | - [ ] #25880 | closed | 2024-05-04T09:01:46Z | 2024-05-19T23:00:48Z | https://github.com/ivy-llc/ivy/issues/28746 | [
"Sub Task",
"Stale"
] | Ajitofy | 2 |
frappe/frappe | rest-api | 31,086 | Image not updating | I replaced an image file in the public folder with the same name to update the item’s image. After refreshing the site, the new image appears on the server device, but on other devices, the old image is still showing. I tried clearing the cache in every way, but the issue remains.
Frappe Framework: v15.54.1 (version-15) | open | 2025-02-03T06:57:44Z | 2025-02-03T06:57:44Z | https://github.com/frappe/frappe/issues/31086 | [
"bug"
] | nilpatel42 | 0 |
jina-ai/serve | machine-learning | 5,683 | Unable to delete the executor from local CLI command | **Describe the bug**
I am unable to delete the Executor that was created by following the steps described in the section "Deploy to JCloud" of the document "https://docs.jina.ai/get-started/create-app/". Here is the error response I received:

Also, the code snippet below is hard-coded in the documentation:
`jina cloud remove 1655d050ad`. The executor ID should be dynamic, and the documentation should reflect that.
| closed | 2023-02-09T16:28:11Z | 2023-04-13T17:00:48Z | https://github.com/jina-ai/serve/issues/5683 | [] | Keertiraj | 1 |
serengil/deepface | deep-learning | 1,226 | FAU Detection | ### Description
Hi all,
Can I detect the FAU (Facial Action Units) using this library? If not exists a direct way, is there any way to extract these features using DeepFace?
Thanks a lot in advance :D
### Additional Info
_No response_ | closed | 2024-05-01T11:30:46Z | 2024-05-01T12:52:32Z | https://github.com/serengil/deepface/issues/1226 | [
"enhancement",
"invalid"
] | giammy677dev | 1 |