| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pinry/pinry | django | 85 | Multiple N+1 queries in pin loading | I was wondering why loading pins took close to half a second and looked at the queries. Loading 50 pins takes 304 queries (url: `/api/v1/pin/?format=json&order_by=-id&offset=0`):
- two auth related ones I ignored
- one getting the total count
``` sql
SELECT COUNT(*) FROM "core_pin"
```
- one getting the pins
``` sql
SELECT
"core_pin"."id", "core_pin"."submitter_id",
"core_pin"."url", "core_pin"."origin",
"core_pin"."description", "core_pin"."image_id",
"core_pin"."published"
FROM
"core_pin"
ORDER BY
"core_pin"."id" DESC
LIMIT
50
```
- **six** queries **per pin**
``` sql
SELECT
DISTINCT "taggit_tag"."id",
"taggit_tag"."name",
"taggit_tag"."slug"
FROM
"taggit_tag"
INNER JOIN
"taggit_taggeditem"
ON ("taggit_tag"."id" = "taggit_taggeditem"."tag_id")
WHERE (
"taggit_taggeditem"."object_id" = 100 AND
"taggit_taggeditem"."content_type_id" = 12
);
```
``` sql
SELECT
"django_images_image"."id", "django_images_image"."image",
"django_images_image"."height", "django_images_image"."width"
FROM
"django_images_image"
WHERE
"django_images_image"."id" = 103 ;
```
``` sql
SELECT
"django_images_thumbnail"."id", "django_images_thumbnail"."original_id",
"django_images_thumbnail"."image", "django_images_thumbnail"."size",
"django_images_thumbnail"."height", "django_images_thumbnail"."width"
FROM
"django_images_thumbnail"
WHERE (
"django_images_thumbnail"."original_id" = 103 AND
"django_images_thumbnail"."size" = square
);
```
``` sql
SELECT
"django_images_thumbnail"."id", "django_images_thumbnail"."original_id",
"django_images_thumbnail"."image", "django_images_thumbnail"."size",
"django_images_thumbnail"."height", "django_images_thumbnail"."width"
FROM
"django_images_thumbnail"
WHERE (
"django_images_thumbnail"."original_id" = 103 AND
"django_images_thumbnail"."size" = standard
);
```
``` sql
SELECT
"django_images_thumbnail"."id", "django_images_thumbnail"."original_id",
"django_images_thumbnail"."image", "django_images_thumbnail"."size",
"django_images_thumbnail"."height", "django_images_thumbnail"."width"
FROM
"django_images_thumbnail"
WHERE (
"django_images_thumbnail"."original_id" = 103 AND
"django_images_thumbnail"."size" = thumbnail
);
```
``` sql
SELECT
"auth_user"."id", "auth_user"."password",
"auth_user"."last_login", "auth_user"."is_superuser",
"auth_user"."username", "auth_user"."first_name",
"auth_user"."last_name", "auth_user"."email",
"auth_user"."is_staff", "auth_user"."is_active",
"auth_user"."date_joined"
FROM
"auth_user"
WHERE
"auth_user"."id" = 1
```
All of these queries are very fast in isolation, but not in aggregate. This is just following and loading of references in a loop and can be done with `JOIN`s / eager loading in the ORM layer.
I found one of the offenders in [pinry/core/api.py line 99](https://github.com/pinry/pinry/blob/master/pinry/core/api.py#L99), I guess the other ones are hidden somewhere I didn't expect them to be.
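In Django terms, the eager-loading fix is roughly `Pin.objects.select_related('submitter', 'image').prefetch_related('tags')` (names guessed from the queries above, untested against pinry). The query-count difference itself can be shown with a stdlib-only sqlite3 sketch, not pinry's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE pin (id INTEGER PRIMARY KEY, submitter_id INTEGER);
""")
conn.executemany("INSERT INTO user VALUES (?, ?)", [(i, "user%d" % i) for i in range(3)])
conn.executemany("INSERT INTO pin VALUES (?, ?)", [(i, i % 3) for i in range(50)])

query_count = 0

def run(sql, args=()):
    """Execute one statement and count it, like Django's connection.queries would."""
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# N+1 pattern: one query for the pins, then one extra query per pin
pins = run("SELECT id, submitter_id FROM pin ORDER BY id")
lazy = [(pid, run("SELECT name FROM user WHERE id = ?", (sid,))[0][0]) for pid, sid in pins]
lazy_queries = query_count  # 51 queries for 50 pins

# eager loading: one JOIN fetches the same data in a single round trip
query_count = 0
eager = run("SELECT pin.id, user.name FROM pin JOIN user ON user.id = pin.submitter_id ORDER BY pin.id")
eager_queries = query_count  # 1 query

print(lazy_queries, eager_queries, lazy == eager)  # 51 1 True
```

Same rows either way; the difference is 51 round trips versus 1, which is exactly the pattern the query log above shows.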
**Middleware for getting query statistics**
``` python
from django.db import connection


class SQLLogMiddleware:
    """
    Based on: http://djangosnippets.org/snippets/161/
    """

    def process_response(self, request, response):
        total_time = sum(float(q['time']) for q in connection.queries)
        print "Total query count:", len(connection.queries)
        print "Total execution time:", total_time
        for q in connection.queries:
            print q['time'], ":", q['sql']
        return response
```
| closed | 2015-03-22T23:55:27Z | 2015-04-10T19:02:27Z | https://github.com/pinry/pinry/issues/85 | [] | JensGutermuth | 3 |
adbar/trafilatura | web-scraping | 404 | Corrupted Markdown output when TXT+formatting | I wrote a fairly complicated testcase.. then realized I could use the command line tool :-D
The docs indicate Markdown is an option
https://github.com/adbar/trafilatura/blob/d78fbb5e0d88566cb1326f04210a93b46db8ac87/docs/usage-python.rst?plain=1#L71
* The plain text output (no Markdown) looks good.
* In the examples I've tried so far the Markdown output is not usable, it appears to have the same content as text BUT the formatting is incorrect, new paragraph (line) breaks appear at odd places (e.g. the 2nd character on a line).
# Demo
## Session 1 - server test data
Get test data (once) and serve it to avoid repeatedly hitting web site (I could not see a way to pass in a file to trafilatura)
```
wget -O wget_output.html http://www.pcgamer.com/2012/08/09/an-illusionist-in-skyrim-part-1/
echo http://localhost:1234/wget_output.html
python3 -m http.server 1234
```
## Session 2 - scrape data
```
cd /tmp
mkdir trafilatura_demo
cd trafilatura_demo/
python3 -m venv py3venv
. py3venv/bin/activate
python -m pip install trafilatura
trafilatura --version
```
Then:
```
# good text output, without formatting
trafilatura -u http://localhost:1234/wget_output.html

# not great - some new lines show up
trafilatura --links -u http://localhost:1234/wget_output.html
trafilatura --links --images -u http://localhost:1234/wget_output.html

# messed up paragraphs and newlines in markdown
trafilatura --formatting --links --images -u http://localhost:1234/wget_output.html
trafilatura --formatting -u http://localhost:1234/wget_output.html
```
Partial extract showing problem:
```
In
[Skyrim]...
....
"
*Legends ....
```
There are others in the same document but I'm reluctant to include too much of the content. Hopefully the test case above is enough to reproduce for other people.
It's really obvious there is odd formatting when converting back into html (e.g. using pandoc in gfm mode, or any other md2html tool).
------
There is no option for html (only xml) which was my idea for a workaround.
I did poke around the code but I can't get a handle on why white space is being injected into the XML cleaning code (I can see there are reasons for it; my ham-fisted attempt to remove them all was unsuccessful :-D).
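For anyone needing a stopgap before the root cause is fixed: since the content itself is intact and only stray single line breaks are injected, a post-processing pass can rejoin them. A stdlib-only sketch; the heuristic (joining single newlines that are not list items, headings, quotes, or blank-line paragraph breaks) is mine, not trafilatura's:

```python
import re

def rejoin_paragraphs(md: str) -> str:
    """Join single line breaks inside paragraphs; keep blank lines,
    headings, list items and quote lines as real breaks."""
    out = []
    for block in md.split("\n\n"):  # blank-line paragraph breaks stay
        lines = block.split("\n")
        merged = [lines[0]]
        for line in lines[1:]:
            # keep structural lines on their own line
            if re.match(r"^(\s*([-*+]|\d+\.)\s|#{1,6}\s|>)", line) or not line.strip():
                merged.append(line)
            else:
                merged[-1] = merged[-1].rstrip() + " " + line.lstrip()
        out.append("\n".join(merged))
    return "\n\n".join(out)

broken = "In\n[Skyrim](url) you can\nplay an illusionist.\n\n- a list item\n- another"
print(rejoin_paragraphs(broken))
```

This obviously cannot restore intentional hard breaks, so it is only a workaround, not a fix.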
Thanks for making this tool available. I'm using the Python readability module, and trafilatura does a much better job at the metadata extraction (so far, readability works better for me for content extraction). I'm not sure if I'm misusing the library.
| closed | 2023-08-06T22:38:28Z | 2024-03-28T12:44:28Z | https://github.com/adbar/trafilatura/issues/404 | [
"bug"
] | clach04 | 2 |
jpadilla/django-rest-framework-jwt | django | 2 | Overriding JSONWebTokenSerializer.validate() to auth based on 3rd party response and not user model? | Hi there!
I'm a bit of a Python neophyte so huge apologies if this is way more obvious than I'm making it out to be, but any suggestions on how I'd override `JSONWebTokenSerializer.validate()` so that, instead of validating against [django.contrib.auth.authenticate() here](https://github.com/GetBlimp/django-rest-framework-jwt/blob/master/rest_framework_jwt/serializers.py#L27), it passes the details to a third-party API and then validates based on the response from that?
(Essentially, I work at a news org with a giant non-metered paywall, and I need to pass details from the user's paywall cookie to the paywall API to validate that it's a valid cookie. I'm thinking DRFJWT will then generate a JWT that gets passed back to the user, and then every request thereafter just validates based on that, thus reducing load on the paywall API. If I'm totally out to lunch with my thinking, please let me know.)
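Not a definitive answer, but the usual pattern is to subclass the serializer and swap the `authenticate()` call for a paywall check, then wire the subclass up via the token view's `serializer_class` (check the DRF-JWT docs for your version). To keep the sketch runnable without Django/DRF installed, `BaseTokenSerializer` below is a stand-in for `JSONWebTokenSerializer`, and `paywall_api` is a hypothetical callable:

```python
class ValidationError(Exception):
    pass


class BaseTokenSerializer:
    """Stand-in for rest_framework_jwt.serializers.JSONWebTokenSerializer,
    so this sketch runs without Django/DRF installed."""

    def validate(self, attrs):
        raise NotImplementedError


class PaywallTokenSerializer(BaseTokenSerializer):
    """Validate against a third-party paywall API instead of authenticate()."""

    def __init__(self, paywall_api):
        # hypothetical callable: cookie value -> account dict, or None if invalid
        self.paywall_api = paywall_api

    def validate(self, attrs):
        account = self.paywall_api(attrs.get("paywall_cookie"))
        if account is None:
            raise ValidationError("Invalid paywall cookie.")
        # the real serializer returns {'token': ..., 'user': ...} here
        return {"account": account}


# demo with a fake paywall API
serializer = PaywallTokenSerializer(lambda c: {"id": 1} if c == "valid" else None)
print(serializer.validate({"paywall_cookie": "valid"}))  # {'account': {'id': 1}}
```

The JWT issued on success then carries the validated identity, so subsequent requests never touch the paywall API, which matches the load-reduction goal described above.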
Thanks!
| closed | 2014-01-20T18:02:25Z | 2017-08-21T08:47:29Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/2 | [
"discussion"
] | aendra-rininsland | 6 |
PokemonGoF/PokemonGo-Bot | automation | 5,628 | Location Confirmation before starting bot | ### Short Description
Currently, when starting the bot, it checks whether the name given in `location` can be found (via Google geocoding?) and then gets the coordinates.
The problem is, if I have the same name in my favorite locations, it will use the location found by geocoding instead of the one in my favorite locations. For example, if I put East Coast Park in my favorite locations, the bot might take me to some other East Coast Park; there are many East Coast Parks around the world.
### Possible solution
Pause the bot, show the coordinates (and the name of the country they are in?), and ask the user to confirm they are correct before proceeding.
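A sketch of what the confirmation step could look like (the function name and prompt wording are hypothetical, and the prompt is injected so the snippet runs without stdin):

```python
def confirm_location(name, candidates, ask=input):
    """Show each geocoding candidate for `name` and ask the user to confirm
    before the bot starts. `candidates` is a list of (lat, lng, country)."""
    for lat, lng, country in candidates:
        answer = ask("Start at %s (%s, %s) in %s? [y/N] " % (name, lat, lng, country))
        if answer.strip().lower() == "y":
            return (lat, lng)
    return None  # nothing confirmed: refuse to start instead of risking a softban

# demo: user rejects the first East Coast Park and accepts the second
answers = iter(["n", "y"])
coords = confirm_location(
    "East Coast Park",
    [(43.64, -70.22, "United States"), (1.301, 103.912, "Singapore")],
    ask=lambda prompt: next(answers),
)
print(coords)  # (1.301, 103.912)
```

Defaulting to "do not start" when nothing is confirmed is the safer choice here, since starting at the wrong coordinates is exactly what triggers the ban.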
### How it would help others
For new users who aren't aware, this would prevent them from getting a softban/hardban without knowing what happened.
| open | 2016-09-23T04:56:32Z | 2016-09-23T11:00:43Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5628 | [] | MerlionRock | 2 |
tensorly/tensorly | numpy | 287 | Issue with test_svd | `test_backend.test_svd` seems to occasionally fail, at least with the MXNet backend, see e.g. https://github.com/tensorly/tensorly/runs/2945783934
Relevant traceback:
```
=================================== FAILURES ===================================
___________________________________ test_svd ___________________________________
def test_svd():
"""Test for the SVD functions"""
tol = 0.1
tol_orthogonality = 0.01
for name, svd_fun in T.SVD_FUNS.items():
sizes = [(100, 100), (100, 5), (10, 10), (10, 4), (5, 100)]
n_eigenvecs = [90, 4, 5, 4, 5]
for s, n in zip(sizes, n_eigenvecs):
matrix = np.random.random(s)
matrix_backend = T.tensor(matrix)
fU, fS, fV = svd_fun(matrix_backend, n_eigenvecs=n)
U, S, V = svd(matrix)
U, S, V = U[:, :n], S[:n], V[:n, :]
> assert_array_almost_equal(np.abs(S), T.abs(fS), decimal=3,
err_msg='eigenvals not correct for "{}" svd fun VS svd and backend="{}, for {} eigenenvecs, and size {}".'.format(
name, tl.get_backend(), n, s))
tensorly/tests/test_backend.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = array([50.15259113, 5.49374303, 5.39877779, 5.32205362, 5.09081317,
5.03235189, 4.93070913, 4.8058477 , ...691386, 0.81209131, 0.75197604, 0.72356455,
0.65506754, 0.63010537, 0.57366523, 0.54597993, 0.48835856])
b = array([50.15259113, 5.49374303, 5.39877779, 5.32205362, 5.09081317,
5.03235189, 4.93070913, 4.8058477 , ...9059, 0.75193308, 0.72347523,
0.65506358, 0.63002172, 0.57352005, 0.54564354, 0.48644761], dtype=float64)
args = ()
kwargs = {'decimal': 3, 'err_msg': 'eigenvals not correct for "randomized_svd" svd fun VS svd and backend="mxnet, for 90 eigenenvecs, and size (100, 100)".'}
def assert_array_almost_equal(a, b, *args, **kwargs):
> np.testing.assert_array_almost_equal(T.to_numpy(a), T.to_numpy(b),
*args, **kwargs)
E AssertionError:
E Arrays are not almost equal to 3 decimals
E eigenvals not correct for "randomized_svd" svd fun VS svd and backend="mxnet, for 90 eigenenvecs, and size (100, 100)".
E Mismatched elements: 1 / 90 (1.11%)
E Max absolute difference: 0.00191095
E Max relative difference: 0.00392838
E x: array([50.153, 5.494, 5.399, 5.322, 5.091, 5.032, 4.931, 4.806,
E 4.743, 4.701, 4.562, 4.529, 4.433, 4.383, 4.325, 4.315,
E 4.194, 4.158, 4.095, 4.016, 3.929, 3.88 , 3.829, 3.801,...
E y: array([50.153, 5.494, 5.399, 5.322, 5.091, 5.032, 4.931, 4.806,
E 4.743, 4.701, 4.562, 4.529, 4.433, 4.383, 4.325, 4.315,
E 4.194, 4.158, 4.095, 4.016, 3.929, 3.88 , 3.829, 3.801,...
tensorly/testing.py:12: AssertionError
``` | closed | 2021-06-29T20:21:47Z | 2022-07-09T18:25:12Z | https://github.com/tensorly/tensorly/issues/287 | [
"bug"
] | JeanKossaifi | 0 |
nolar/kopf | asyncio | 864 | Is there a way to add namespaceSelector to the generated ValidatingWebhookConfiguration? | ### Keywords
namespaceSelector, ValidatingWebhookConfiguration
### Problem
I couldn't find a way to add a `namespaceSelector` to the generated `ValidatingWebhookConfiguration`. I found `kopf.WebhookClientConfigService`, so I can add the service options to the `ValidatingWebhookConfiguration`, but I didn't find a similar class for `namespaceSelector`. | open | 2021-11-18T13:37:13Z | 2021-12-05T11:35:04Z | https://github.com/nolar/kopf/issues/864 | [
"question"
] | devopstales | 2 |
marshmallow-code/flask-smorest | rest-api | 365 | Flask commands to generate OpenAPI schema in yaml | Currently there are two Flask command:
1. `flask openapi print` to print OpenAPI schema to output
2. `flask openapi write <file>` to write OpenAPI schema to file
Both of them serialize OpenAPI schema to json.
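A sketch of what a YAML-capable writer could look like, assuming the underlying spec object exposes `to_yaml()`/`to_dict()` (true for apispec when PyYAML is installed, but unverified here; the spec object is passed in so the sketch runs standalone):

```python
import json


def write_openapi(spec, path, fmt="json"):
    """Serialize an OpenAPI spec object to a JSON or YAML file."""
    if fmt == "yaml":
        serialized = spec.to_yaml()  # apispec provides this when PyYAML is installed
    elif fmt == "json":
        serialized = json.dumps(spec.to_dict(), indent=2)
    else:
        raise ValueError("fmt must be 'json' or 'yaml'")
    with open(path, "w", encoding="utf-8") as f:
        f.write(serialized)
    return serialized


# demo with a minimal stand-in for the real spec object
class FakeSpec:
    def to_dict(self):
        return {"openapi": "3.0.2"}

    def to_yaml(self):
        return "openapi: 3.0.2\n"
```

The CLI side would then just need a format flag (name hypothetical) that picks between the two branches.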
It would be great to have an option to generate it to yaml as well. | closed | 2022-06-09T15:17:29Z | 2022-06-20T13:07:28Z | https://github.com/marshmallow-code/flask-smorest/issues/365 | [
"enhancement"
] | derlikh-smart | 3 |
ultralytics/yolov5 | machine-learning | 13,069 | Visualizing YOLOv5 Segmentation Data | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Greetings, I would like to use YOLOv5 Segmentation for one of my projects. However, before I collect and annotate data, I want to get a proof of concept that v5-seg can do everything that I need to do.
I have the model loaded to my needs with this code:
````
MODEL_NAME = "yolov5s-seg.pt"
MODEL_PATH = os.path.dirname(__file__) + f"/{MODEL_NAME}"
# Load YOLO model
model = torch.hub.load('ultralytics/yolov5', 'custom', path=MODEL_PATH)
model.conf = 0.65
````
When it comes to retrieving data for the model, I think I did it correctly.
```
while True:
    # Get the current frame capture using DXcam (Screen Capture)
    frame = camera.grab(region=(capture_x, capture_y, capture_x + capture_width, capture_y + capture_height))
    if frame is None: # Failsafe
        continue
    # Make a copy of the frame for YOLO inference
    yolo_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Convert the image to a tensor and add a batch dimension
    img_tensor = torch.from_numpy(yolo_frame).permute(2, 0, 1).float().div(255.0).unsqueeze(0)
    # Perform inference
    results = model(img_tensor)
    # Parse the results
    if len(results) > 0 and len(results[0]) > 0: # Check if there are any detections
        # Assuming results[0] is a tensor with the following columns: [x1, y1, x2, y2, confidence, class, *masks]
        masks = results[0][:, 6:] # Extract masks (assuming masks are from the 7th column onwards)
        boxes = results[0][:, :4].cpu().numpy() # Bounding boxes
        scores = results[0][:, 4].cpu().numpy() # Confidence scores
        classes = results[0][:, 5].cpu().numpy() # Class predictions
        # Convert the masks to the same size as the original image
        if masks is not None:
            masks = masks.cpu().numpy() # Convert masks to numpy array
            # Iterate through masks and draw them on the image
            for mask in masks:
                mask_resized = cv2.resize(mask, (frame.shape[1], frame.shape[0]), interpolation=cv2.INTER_LINEAR)
                mask_binary = (mask_resized > 0.5).astype('uint8') * 255
                contours, _ = cv2.findContours(mask_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
                cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)
    # Display the results in an OpenCV window
    cv2.imshow('Vehicle Detection', frame)
    # Wait for a key press, with a short delay to allow for video playback
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
```
However, no matter what image I give it, the output is always this:

I am not sure what I'm doing wrong. Is it model loading? Is it inference? Is it plotting the data? From my knowledge this should work just fine.
Any help on the matter is appreciated!
### Additional
Also, another small question. My GPU is not really good, so I tend to train my models in Google Colab. My issue is that my model will get to 150 - 200 epochs and then the runtime will disconnect because of inactivity. I feel like I'm checking in on it and clicking around often enough for it to not crash (every ~10 minutes or so). I understand this can be easily fixed by getting a Colab subscription, but I would like to avoid paying for Colab if at all possible. Input on this issue is also appreciated!
"question",
"Stale"
] | DylDevs | 10 |
AirtestProject/Airtest | automation | 1,148 | 微信小程序 text 输入内容到真机,内容不全 | (请尽量按照下面提示内容填写,有助于我们快速定位和解决问题,感谢配合。否则直接关闭。)
**(重要!问题分类)**
* 测试开发环境AirtestIDE使用问题 -> https://github.com/AirtestProject/AirtestIDE/issues
* 控件识别、树状结构、poco库报错 -> https://github.com/AirtestProject/Poco/issues
* 图像识别、设备控制相关问题 -> 按下面的步骤
**描述问题bug**
执行 `text("18519004293", enter=False)` 输入内容,页面只展示了 18590093
**相关截图**
(贴出遇到问题时的截图内容,如果有的话)
(在AirtestIDE里产生的图像和设备相关的问题,请贴一些AirtestIDE控制台黑窗口相关报错信息)
https://github.com/AirtestProject/Airtest/assets/19237129/d52f4ed8-b848-444e-b66c-5c58efba536e
**复现步骤**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**预期效果**
可以输入完整内容
**python 版本:** `python3.5`
**airtest 版本:** `1.0.69`
> airtest版本通过`pip freeze`可以命令可以查到
**设备:**
- 型号: ios
- 系统: 16.4
- (别的信息)
**其他相关环境信息**
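A workaround that often helps when fast input drops characters on iOS is to type one character at a time with a short pause. The helper below is hypothetical and takes the typing/sleep calls as parameters so it runs standalone; in a real script you would pass `airtest.core.api.text` and `airtest.core.api.sleep`:

```python
def slow_text(s, type_fn, sleep_fn, delay=0.2):
    """Type `s` one character at a time, pausing between characters,
    so the device-side keyboard does not drop input."""
    for ch in s:
        type_fn(ch, enter=False)
        sleep_fn(delay)

# demo with stand-ins that just record what would be typed
typed = []
slow_text("18519004293", type_fn=lambda ch, enter: typed.append(ch), sleep_fn=lambda d: None)
print("".join(typed))  # 18519004293
```

This trades speed for reliability; tune `delay` down until characters start disappearing again.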
| open | 2023-07-26T11:21:49Z | 2023-07-26T11:22:48Z | https://github.com/AirtestProject/Airtest/issues/1148 | [] | bluescurry | 0 |
robotframework/robotframework | automation | 4,792 | Add Vietnamese translation | We added translation infrastructure and as well as translations for various languages in RF 6.0 (#4390). We now have PR #4791 adding Vietnamese translation. | closed | 2023-06-09T11:53:31Z | 2023-06-13T14:00:42Z | https://github.com/robotframework/robotframework/issues/4792 | [
"enhancement",
"priority: high",
"acknowledge",
"effort: small"
] | pekkaklarck | 2 |
Miserlou/Zappa | flask | 1,248 | re-deploying with custom domain name gives "forbidden" error | <!--- Provide a general summary of the issue in the Title above -->
I used undeploy followed by deploy on a site with a custom domain name and an AWS certificate, then tried to certify since this changed the Amazon URL (which previously worked with the site), but certify apparently can only be run one time, leaving the custom domain broken (it gives the "{message:forbidden}" error) with no obvious way to fix it.
This site uses a domain name managed by a non-AWS provider, but I configured with an AWS certificate.
Site was certified and working with the custom domain name for the first time yesterday. The site wasn't working with the custom domain this morning (I don't know why). Unable to find a cause, and not knowing how the AWS configuration works, I tried undeploy then deploy (updating the DNS to point to the new Amazon URL) and then attempted to run certify, which threw an exception indicating that it is already certified. After a long period of Amazon education I was able to determine that the "Base Path Mappings" (in Amazon's console at "your-region.console.aws.amazon.com/apigateway/", then select "Custom Domain Names" - for those like me who don't know where to do this) was empty, and that setting it to:
Path: /
Destination: (production-deployment):production
allowed my custom domain to work again.
I am not certain if this is the configuration that is created by Zappa using deploy/certify, only that this works. I am also uncertain as to how/why the site stopped working overnight and if this was the issue then.
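If anyone wants to script the manual fix, the API Gateway call is (I believe) `create_base_path_mapping`; the function below takes the client as a parameter so the sketch runs without AWS credentials. Verify the call and the empty-`basePath` convention against the boto3 docs before relying on it:

```python
def restore_base_path_mapping(apigw_client, domain_name, rest_api_id, stage="production"):
    """Recreate the base path mapping ('/' -> deployment:stage) that
    `zappa certify` would normally have set up."""
    return apigw_client.create_base_path_mapping(
        domainName=domain_name,
        basePath="",  # empty string should map the bare '/' path (check the docs)
        restApiId=rest_api_id,
        stage=stage,
    )


# demo with a stub client that just records the call
class StubClient:
    def create_base_path_mapping(self, **kwargs):
        self.called_with = kwargs
        return kwargs


client = StubClient()
restore_base_path_mapping(client, "example.com", "abc123")
print(client.called_with)
```

With real boto3 you would pass `boto3.client("apigateway")` instead of the stub.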
## Expected Behavior
Zappa should provide some means of verifying that the API gateway configuration is correct/matches the current configuration and updating the API gateway if it isn't correct when the "certify" option is used.
## Actual Behavior
Throws exception indicating domain name is already certified
## Possible Fix
Ideally Zappa would automatically detect the existing configuration and verify that it matches what would otherwise be uploaded. Alternatively, add a certify command-line option `--update` which would force replacement of any current configuration.
## Steps to Reproduce
Starting with a site that has never been deployed:
1. zappa deploy production (then update with amazon generated url)
2. zappa certify production
3. zappa undeploy production
4. zappa deploy production
5. zappa certify production
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: Debian Linux (jessie), python 3.6.3
* The output of `pip freeze`:
[pip-freeze.txt](https://github.com/Miserlou/Zappa/files/1489783/pip-freeze.txt)
* Link to your project (optional):
* Your `zappa_settings.py`:
| open | 2017-11-21T02:17:59Z | 2019-09-20T09:21:08Z | https://github.com/Miserlou/Zappa/issues/1248 | [
"bug",
"aws"
] | SCDealy | 9 |
healthchecks/healthchecks | django | 263 | 1.7.0 sendalerts is not working | I've got 1.7.0 installed in an Alpine Linux 3.10 container... the entire site it working, everything migrated over, but sentalerts does nothing.... I get:
```
sendalerts is now running
```
But then nothing happens... no more timing marks, no output at all... it just sits there doing nothing. I've made a test trigger, configured my webhook... tested the webhook and it runs fine, hitting the ping URL updates the status as it should... but sendalerts doesn't seem to be detecting the change in status. I've even run it with the `-v 3` option... and after it says it's running, it still outputs nothing.
SALib/SALib | numpy | 151 | delta.analyze randomly returns singular matrix error | Dear,
feeding the `delta.analyze()` function with:
```
problem = {
'num_vars': 3,
'names': ['K_IO_AOBden', 'K_SO_AOBden', 'K_SNO_AOBden'],
'bounds': [[0.01, 11], [0.01, 13.2], [0.08, 4.3]]
}
```
```
In: param_values
Out: array([[ 2.12591411, 1.8200793 , 0.10082878],
[ 6.21754537, 5.85214066, 2.8846046 ],
[ 8.01209459, 12.95017617, 1.00034018],
[ 10.72587215, 9.03629573, 1.88932555],
[ 4.17005179, 1.30648717, 3.7287254 ],
[ 3.24299291, 3.44273292, 4.01808892],
[ 8.80676551, 10.03481953, 1.43068963],
[ 0.29322542, 6.80514303, 2.43057428]])
```
and
`metric` (a simple average value of the last 100 points of my time series output)
I randomly (meaning that sometimes it works out and gives the output normally and sometimes not) get the following error:
```
...................python2.7/site-packages/SALib/analyze/delta.pyc in calc_delta(Y, Ygrid, X, m)
89 ix = np.where((xr > m[j]) & (xr <= m[j + 1]))[0]
90 nm = len(ix)
---> 91 fyc = gaussian_kde(Y[ix], bw_method='silverman')(Ygrid)
92 d_hat += (nm / (2 * N)) * np.trapz(np.abs(fy - fyc), Ygrid)
93
...................python2.7/site-packages/scipy/stats/kde.pyc in __init__(self, dataset, bw_method)
170
171 self.d, self.n = self.dataset.shape
--> 172 self.set_bandwidth(bw_method=bw_method)
173
174 def evaluate(self, points):
...................python2.7/site-packages/scipy/stats/kde.pyc in set_bandwidth(self, bw_method)
497 raise ValueError(msg)
498
--> 499 self._compute_covariance()
500
501 def _compute_covariance(self):
...................python2.7/site-packages/scipy/stats/kde.pyc in _compute_covariance(self)
508 self._data_covariance = atleast_2d(np.cov(self.dataset, rowvar=1,
509 bias=False))
--> 510 self._data_inv_cov = linalg.inv(self._data_covariance)
511
512 self.covariance = self._data_covariance * self.factor**2
...................python2.7/site-packages/scipy/linalg/basic.pyc in inv(a, overwrite_a, check_finite)
817 inv_a, info = getri(lu, piv, lwork=lwork, overwrite_lu=1)
818 if info > 0:
--> 819 raise LinAlgError("singular matrix")
820 if info < 0:
821 raise ValueError('illegal value in %d-th argument of internal '
LinAlgError: singular matrix
```
It might be useful to know that when increasing `num_resamples` the error occurs more often (e.g. with 5 resamples it occurs roughly every third run, while with 15 resamples you almost always get the error).
Do you think this is due to a bug in SALib or scipy, or rather something I'm missing?
| closed | 2017-06-29T08:13:02Z | 2017-06-29T11:12:31Z | https://github.com/SALib/SALib/issues/151 | [
"bug"
] | gbellandi | 6 |
pallets/flask | python | 4,739 | Tests failing on latest Flask version 2.2.1 | Hi All,
I am working on a Flask project and suddenly my unit tests started failing on the latest Flask version `2.2.1`.
I got the following error while running the [tox](https://pypi.org/project/tox/) command:
```
/usr/lib/python3.8/doctest.py:939: in find
    self._find(tests, obj, name, module, source_lines, globs, {})
.tox/tests/lib/python3.8/site-packages/_pytest/doctest.py:533: in _find
    super()._find(  # type:ignore[misc]
/usr/lib/python3.8/doctest.py:998: in _find
    if ((inspect.isroutine(inspect.unwrap(val))
.tox/tests/lib/python3.8/site-packages/_pytest/doctest.py:475: in _mock_aware_unwrap
    return real_unwrap(func, stop=_is_mocked)
/usr/lib/python3.8/inspect.py:520: in unwrap
    while _is_wrapper(func):
/usr/lib/python3.8/inspect.py:514: in _is_wrapper
    return hasattr(f, '__wrapped__') and not stop(f)
.tox/tests/lib/python3.8/site-packages/werkzeug/local.py:316: in __get__
    obj = instance._get_current_object()  # type: ignore[misc]
.tox/tests/lib/python3.8/site-packages/werkzeug/local.py:509: in _get_current_object
    raise RuntimeError(unbound_message) from None
E   RuntimeError: Working outside of request context.
E
E   This typically means that you attempted to use functionality that needed
E   an active HTTP request. Consult the documentation on testing for
E   information about how to avoid this problem.
```
When I changed my flask version back to `2.1.3` it started working again.
| closed | 2022-08-04T12:20:50Z | 2022-08-19T00:07:38Z | https://github.com/pallets/flask/issues/4739 | [] | hiteshgoyal18 | 3 |
pytest-dev/pytest-cov | pytest | 155 | Testing on Windows doesn't produce coverage data | Specifically, this [appveyor config file](https://github.com/xonsh/slug/blob/master/.appveyor.yml) was [run through appveyor](https://ci.appveyor.com/project/xonsh/slug/build/job/kepc6ge0a09n5a97) and produced [this coverage file](https://codecov.s3.amazonaws.com/v4/raw/2017-04-04/0765DFC7D7C1F1017C42EC5A811DAF9C/2ad236b4b86dee1a79086a57cce44c3744879301/e8a29a29-eaf6-4148-9922-50110c876b5f.txt).
The coverage information is missing only on Windows/AppVeyor, not on Mac/Travis or Linux/Travis.
"question"
] | AstraLuma | 7 |
unit8co/darts | data-science | 1,899 | Fine-tuning quesiton | In the current version, it's fine-tuning possible? I saw an older post about that and I wonder if it's a way in the current version to achieve that.
| closed | 2023-07-14T14:51:13Z | 2023-07-15T08:37:08Z | https://github.com/unit8co/darts/issues/1899 | [
"bug",
"triage"
] | LaplaceSingularity | 5 |
huggingface/text-generation-inference | nlp | 2,503 | Add support for Idefics 3 | ### Model description
Please add support for HuggingFaceM4/Idefics3-8B-Llama3 in tgi:
_Idefics3 is an open multimodal model that accepts arbitrary sequences of image and text inputs and produces text outputs. The model can answer questions about images, describe visual content, create stories grounded on multiple images, or simply behave as a pure language model without visual inputs._
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Well, the necessary changes for the transformers library are just waiting for a review for the PR:
https://github.com/huggingface/transformers/pull/32473
as the time of writing this model request.
As model/finetune and transformers lib is made by the same famous company I would assume there should be no big problems. ;-) | open | 2024-09-07T12:27:41Z | 2024-09-09T11:22:43Z | https://github.com/huggingface/text-generation-inference/issues/2503 | [
"new model"
] | stelterlab | 3 |
drivendataorg/cookiecutter-data-science | data-science | 105 | MacOS-specific .DS_Store files missing from .gitignore file | My colleagues and I were having issues collaborating using this repository template, and noticed that _.DS_Store_ files, which are created during the creation of new folders in MacOS directories, were being tracked by git. This was causing merge issues. I have added this file extension at the end of the _.gitignore_ file.
| closed | 2018-03-28T10:26:07Z | 2018-04-14T12:57:18Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/105 | [] | randallrs | 2 |
cleanlab/cleanlab | data-science | 887 | add class-imbalance detection to the default set of Datalab issue types | closed | 2023-11-09T18:20:54Z | 2023-11-11T21:54:34Z | https://github.com/cleanlab/cleanlab/issues/887 | [
"enhancement",
"needs triage"
] | jwmueller | 1 | |
mljar/mercury | data-visualization | 132 | make automatic website refresh as optional for scheduled notebooks | Right now, when running a scheduled notebook there is an automatic refresh of the website every 1 minute. In the case of notebooks scheduled with longer intervals (daily), it is not needed. Please make it optional.

| closed | 2022-07-09T10:16:07Z | 2023-02-15T10:05:53Z | https://github.com/mljar/mercury/issues/132 | [
"enhancement"
] | pplonski | 0 |
google/seq2seq | tensorflow | 133 | Add conversational modeling walkthrough | Hi,
I want to use this project for conversational modeling. I'm wondering if it's possible for you to write a walkthrough or provide steps needed to accomplish this goal.
Thanks in advance. | closed | 2017-03-31T14:39:08Z | 2017-03-31T18:58:54Z | https://github.com/google/seq2seq/issues/133 | [] | mohgh | 1 |
PablocFonseca/streamlit-aggrid | streamlit | 205 | Auto size all columns not work as expected | The current implementation of the `columns_auto_size_mode=ColumnsAutoSizeMode.FIT_CONTENTS` feature may not be working as expected. It seems that only some columns are properly sized and that you need to click the "Autosize Columns" button once to ensure that all columns are sized correctly. | open | 2023-03-23T09:18:15Z | 2023-03-30T16:39:45Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/205 | [] | zbjdonald | 3 |
coqui-ai/TTS | python | 2,842 | Special character like ö, ä, ü not spoken [Bug] | ### Describe the bug
The special characters are not correctly converted to spoken text.
```python
from TTS.api import TTS

def read_file_to_string(file_path):
    try:
        with open(file_path, 'r', encoding='utf-8') as file:
            content = file.read()
        return content
    except FileNotFoundError:
        print("Datei nicht gefunden.")
        return ""
    except Exception as e:
        print("Fehler beim Lesen der Datei:", e)
        return ""

file_content = read_file_to_string("text.txt")
print(file_content)

api = TTS(model_name="tts_models/de/thorsten/tacotron2-DCA", gpu=False)
api.tts_to_file(file_content, file_path="output.wav", encoding='utf-8')
```
The string file_content is in correct utf-8 format.
### To Reproduce
Run the code and check the output.wav.
### Expected behavior
Correct speaking with ö, ä, ü
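As a stopgap while the model/phonemizer mishandles umlauts, transliterating before synthesis usually gets intelligible (if slightly off) pronunciation. A stdlib-only sketch; the mapping follows the standard German ASCII fallback:

```python
UMLAUTS = str.maketrans({
    "ä": "ae", "ö": "oe", "ü": "ue",
    "Ä": "Ae", "Ö": "Oe", "Ü": "Ue",
    "ß": "ss",
})

def asciify_german(text: str) -> str:
    """Replace German special characters with their ASCII fallbacks."""
    return text.translate(UMLAUTS)

print(asciify_german("Schöne Grüße aus München"))  # Schoene Gruesse aus Muenchen
```

In the script above you would then pass `asciify_german(file_content)` to `tts_to_file`. This is a workaround only; it does not address why the model swallows the characters.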
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cpu",
"TTS": "0.14.3",
"numpy": "1.21.6"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "AMD64 Family 25 Model 97 Stepping 2, AuthenticAMD",
"python": "3.8.17",
"version": "10.0.22621"
}
}
```
### Additional context
_No response_ | closed | 2023-08-06T09:21:44Z | 2024-08-09T08:25:35Z | https://github.com/coqui-ai/TTS/issues/2842 | [
"bug"
] | frixos25 | 9 |
MagicStack/asyncpg | asyncio | 346 | Very bad performance using insert many | * **asyncpg version**: 0.17.0
* **PostgreSQL version**: 10
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: local
* **Python version**: 3.7
* **Platform**: Windows
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: No
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: did not use UVLoop
<!-- Enter your issue details below this comment. -->
I am building a DB for storing financial tick data, so was testing insert performance for different DBs using different libs. For postgres I used psycopg2 in sync mode, aiopg, and asyncpg. I was inserting 120000 rows of symbol and OHLC data.
I was getting very bad insert performance using asyncpg's `executemany`.
Code for creating the sample data:
```
import asyncio
import string
import random
from time import time

import numpy as np
import pandas as pd
import asyncpg
import psycopg2
import aiopg

# Number of securities to insert
SECURITIES = 2000

def string_gen(size=6, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

def generate_random_data():
    """Generate random data for inserting into the DB."""
    index = pd.date_range(start='2018-01-01 00:00:01', end='2018-01-01 00:01:00', freq='s')
    dflist = []
    for _ in range(SECURITIES):
        data = np.random.rand(len(index), 4)
        data = pd.DataFrame(data, index=index, columns=['open', 'high', 'low', 'close'])
        data['symbol'] = string_gen()
        dflist.append(data)
    data = pd.concat(dflist)
    data.index.name = 'time'
    data = data.reset_index()
    data = data[['time', 'symbol', 'open', 'high', 'low', 'close']]
    return [tuple(x) for x in data.values]
```
With psycopg2, the insert took 5 to 6 seconds:
```
args_str = b','.join(cur.mogrify(string, row) for row in data)
args_str = args_str.decode('utf-8') # Convert byte string to UTF-8
cur.execute("INSERT INTO ohlc (time, symbol, open, high, low, close) VALUES " + args_str)
conn.commit()
```
With asyncpg, `executemany` took 30 seconds, so I tried building the insert statement with psycopg2's `mogrify` and executing it with asyncpg; it still took 7 to 8 seconds:
```
p_conn = psycopg2.connect(user='postgres', password='postgres')
cur = p_conn.cursor()
string = '(' + ('%s,' * len(data[0]))[:-1] + ')'
args_str = b','.join(cur.mogrify(string, row) for row in data)
cur.close()
args_str = args_str.decode('utf-8')
insert_str = "INSERT INTO ohlc (time, symbol, open, high, low, close) VALUES " + args_str
await conn.execute(insert_str)
```
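For what it's worth, asyncpg's fast path for bulk loads is `copy_records_to_table`, which uses the binary COPY protocol and is typically much faster than `executemany` for large batches. A hedged sketch (the table and column names are taken from the snippets above; the DSN is a placeholder):

```python
import asyncio

async def bulk_insert(dsn, records):
    # asyncpg is imported lazily so the sketch can be read without a live DB.
    import asyncpg

    conn = await asyncpg.connect(dsn)
    try:
        # COPY-based bulk load of (time, symbol, open, high, low, close) rows.
        await conn.copy_records_to_table(
            "ohlc",
            records=records,
            columns=["time", "symbol", "open", "high", "low", "close"],
        )
    finally:
        await conn.close()
```

On similar OHLC batches, COPY-based loading is usually the fastest option asyncpg offers, so it may be worth benchmarking alongside the approaches shown here.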
Similarly, with aiopg I was getting 6 to 7 seconds.
I guess I am doing something wrong; since the DB is the same for all 3 libs, we can rule out DB-side performance issues. | closed | 2018-08-21T15:02:16Z | 2018-08-26T20:58:02Z | https://github.com/MagicStack/asyncpg/issues/346 | [] | akashgurava | 2
robotframework/robotframework | automation | 5,304 | Libdoc: Support documentation written with Markdown | It seems this was [discussed briefly back in 2016](https://github.com/robotframework/robotframework/issues/2476) but I wanted to see if there are any thoughts on supporting the Markdown format for Libdoc in 2025.
With Markdown being the preferred (currently only?) markup [for copilot knowledge bases](https://docs.github.com/en/enterprise-cloud@latest/copilot/customizing-copilot/managing-copilot-knowledge-bases), this would be a helpful enhancement for anyone trying to leverage specific library documentation for their Copilot code generation or autocompletes.
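As a stopgap, Libdoc's machine-readable JSON spec output (`python -m robot.libdoc MyLibrary out.json`) can be rendered to Markdown directly, sidestepping the HTML entirely. A minimal, hypothetical sketch (it assumes the spec's top-level `name`/`doc`/`keywords` fields; real specs carry more detail worth rendering):

```python
import json

def libdoc_json_to_markdown(spec_path):
    # Render a minimal Markdown page from a Libdoc JSON spec file.
    with open(spec_path, encoding="utf-8") as f:
        spec = json.load(f)
    lines = [f"# {spec['name']}", "", spec.get("doc", ""), "", "## Keywords"]
    for kw in spec.get("keywords", []):
        lines.append(f"### {kw['name']}")
        lines.append(kw.get("doc", ""))
    return "\n".join(lines)
```

Something like this could feed a Copilot knowledge base today, though native Markdown output from Libdoc would obviously be cleaner.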
Due to the complexity of the HTML in Libdoc's outputs, they don't seem to work with any HTML-to-Markdown conversion tools, so perhaps a more simplified Libdoc HTML output would work as well? | open | 2025-01-02T16:52:01Z | 2025-03-07T15:12:22Z | https://github.com/robotframework/robotframework/issues/5304 | [
"priority: critical",
"effort: large"
] | Wolfe1 | 4 |
yzhao062/pyod | data-science | 256 | Cannot save AutoEncoder | The [official instructions](https://pyod.readthedocs.io/en/latest/model_persistence.html) say to use joblib for pickling PyOD models.
This fails for AutoEncoders, or any other TensorFlow-backed model as far as I can tell. The error is:
```
>>> dump(model, 'model.joblib')
...
TypeError: can't pickle _thread.RLock objects
```
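The `_thread.RLock` in the traceback comes from the TF/Keras model the detector holds. A generic workaround — not pyod's API, just a hypothetical sketch of the usual `__getstate__`/`__setstate__` pattern — is to drop the unpicklable attribute when pickling and save/reattach the model separately:

```python
import pickle
import threading

class Detector:
    """Stand-in for a TF-backed detector: holds an unpicklable attribute."""
    def __init__(self):
        self.decision_scores_ = [0.1, 0.9]
        self._model = threading.RLock()  # plays the role of the Keras model

    def __getstate__(self):
        # Exclude the unpicklable model; callers save it separately
        # (e.g. with Keras' own save()) and reattach it after unpickling.
        state = self.__dict__.copy()
        state["_model"] = None
        return state

restored = pickle.loads(pickle.dumps(Detector()))
```

In practice this means keeping two artifacts on disk — the Keras model file and a pickle of the remaining detector state — and stitching them back together on load.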
Note that it's not sufficient to save the underlying Keras Sequential model, since I need the methods & variables of BaseDetector (like `.decision_scores_` or `.decision_function()`). | open | 2020-12-06T07:50:26Z | 2021-03-03T01:29:11Z | https://github.com/yzhao062/pyod/issues/256 | [
"bug",
"help wanted",
"good first issue"
] | kennysong | 6 |
wger-project/wger | django | 1,770 | Weight tracker: Date selection | ## Steps to Reproduce
1. Open the F-Droid App on GrapheneOS/Android
2. Press + on the Weight tab
**Expected results:** The current date is automatically selected.
**Actual results:** The date of the latest entry is selected.
| open | 2024-09-17T05:01:44Z | 2024-09-25T10:17:53Z | https://github.com/wger-project/wger/issues/1770 | [] | hubortje | 2 |
scanapi/scanapi | rest-api | 352 | Add Dotenv (.env) support | Hi.
I added Dotenv to ScanAPI.
Can I submit a PR? | closed | 2021-03-08T04:28:47Z | 2021-03-24T15:48:23Z | https://github.com/scanapi/scanapi/issues/352 | [] | jpsilva15 | 1
aleju/imgaug | machine-learning | 40 | [MacOS] IOError when running generate_example_images.py | When I clone the repo and run `generate_example_images.py`, I get a runtime error:
```
$ cd ~/repos/imgaug
$ python generate_example_images.py
[draw_per_augmenter_images] Loading image...
[draw_per_augmenter_images] Initializing...
[draw_per_augmenter_images] Augmenting...
Traceback (most recent call last):
File "generate_example_images.py", line 290, in <module>
main()
File "generate_example_images.py", line 18, in main
draw_per_augmenter_images()
File "generate_example_images.py", line 252, in draw_per_augmenter_images
misc.imsave("examples.jpg", output_image.draw())
File "generate_example_images.py", line 271, in draw
rows_drawn = [self.draw_row(title, images, subtitles) for title, images, subtitles in self.rows]
File "generate_example_images.py", line 277, in draw_row
title_cell = ia.draw_text(title_cell, x=2, y=2, text=title, color=[0, 0, 0], size=12)
File "/Users/erickim/repos/imgaug/imgaug/imgaug.py", line 129, in draw_text
font = ImageFont.truetype("DejaVuSans.ttf", size)
File "/usr/local/lib/python2.7/site-packages/PIL/ImageFont.py", line 238, in truetype
return FreeTypeFont(font, size, index, encoding)
File "/usr/local/lib/python2.7/site-packages/PIL/ImageFont.py", line 127, in __init__
self.font = core.getfont(font, size, index, encoding)
IOError: cannot open resource
```
A quick fix is to modify `imgaug/imgaug.py:128` and give the absolute path of the `DejaVuSans.ttf` file that is included in the repo:
```
diff --git a/imgaug/imgaug.py b/imgaug/imgaug.py
index 7e94c82..b2b2485 100644
--- a/imgaug/imgaug.py
+++ b/imgaug/imgaug.py
@@ -9,6 +9,7 @@ import math
from scipy import misc
import multiprocessing
import threading
+import os
import sys
import six
import six.moves as sm
@@ -125,7 +126,8 @@ def draw_text(img, y, x, text, color=[0, 255, 0], size=25):
shape = img.shape
img = Image.fromarray(img)
- font = ImageFont.truetype("DejaVuSans.ttf", size)
+ font = ImageFont.truetype(os.path.join(os.path.abspath(os.path.split(__file__)[0]), "DejaVuSans.ttf"), size)
+
context = ImageDraw.Draw(img)
context.text((x, y), text, fill=tuple(color), font=font)
img_np = np.asarray(img)
```
Thoughts on this change? | open | 2017-06-09T21:04:13Z | 2017-06-09T21:29:37Z | https://github.com/aleju/imgaug/issues/40 | [] | erickim555 | 1 |
open-mmlab/mmdetection | pytorch | 11,354 | How to improve CPU utilization? | When I train YOLOX on an RTX 4090, the CPU usage is very low. Only two cores are used.

The GPU utilization is also low; only the GPU memory usage is high.

**How can I improve CPU and GPU utilization? Does the dataloader use the GPU or CPU by default?** | open | 2024-01-10T01:35:13Z | 2024-07-26T01:20:55Z | https://github.com/open-mmlab/mmdetection/issues/11354 | [] | gitleej | 6
s3rius/FastAPI-template | graphql | 36 | Issue with project name and k8s namespace | The regex for the project name is not the same as the one allowed for k8s namespaces:
```
Error from server (Invalid): error when creating "deploy/kube/namespace.yml": Namespace "tmp_test" is invalid: metadata.name: Invalid value: "tmp_test": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')
``` | closed | 2021-10-08T15:48:02Z | 2021-10-09T12:39:53Z | https://github.com/s3rius/FastAPI-template/issues/36 | [] | gpkc | 1 |
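A possible fix on the template side is to derive the namespace from the project name by mapping it into the RFC 1123 label charset that the error message quotes. A hypothetical sketch:

```python
import re

# The validation regex quoted in the kubectl error above.
RFC1123 = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def to_rfc1123_label(name: str) -> str:
    # Lowercase, map invalid characters to '-', trim edge dashes,
    # and respect the 63-character label limit.
    label = re.sub(r"[^a-z0-9-]", "-", name.lower()).strip("-")
    return label[:63] or "app"
```

With this, a project named `tmp_test` would yield the valid namespace `tmp-test`.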
voila-dashboards/voila | jupyter | 882 | Voila not rendering on JupyterLab Extension | Hello, I am trying to launch Voila via the Lab extension; however, it keeps loading without rendering the output, as shown below:

list of extensions:
JupyterLab v3.0.14
C:\Users\Anaconda3\share\jupyter\labextensions
@jupyter-widgets/jupyterlab-manager v3.0.0 enabled ok (python, jupyterlab_widgets)
@voila-dashboards/jupyterlab-preview v2.0.2 enabled ok (python, voila)
voila:
voila 0.2.10 pyhd8ed1ab_0 conda-forge | open | 2021-05-05T09:02:57Z | 2021-05-11T15:30:31Z | https://github.com/voila-dashboards/voila/issues/882 | [] | siglacredit | 5 |
unit8co/darts | data-science | 1,998 | [BUG] Can't fit a loaded darts RNNModel | **Describe the bug**
I can't fit and use a saved model after loading it. I get the error "FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/darts_logs/Karachi_RNN/_model.pth.tar'" when I try to fit it on new data.
**To Reproduce**
This is how I created the darts RNNModel
```
%%time
n_params = {'training_length': 13, 'lr': 0.00039982048662377925, 'dropout': 0.28118606540118873, 'input_chunk_length': 6, 'Model': 'GRU', 'batch_size': 48, 'num_loader_workers': 8}
col = 'et0_fao_evapotranspiration'
train = data.loc[:'2023-07-28']
test = data.loc['2023-07-28':]
y_train = TimeSeries.from_series(train[col])
y_test = TimeSeries.from_series(test[col])
scaler = StandardScaler()
transformer = Scaler(scaler)
series_transformed = transformer.fit_transform(y_train)
early_stopper = EarlyStopping("train_loss",min_delta=0.001, patience=10,verbose=False)
callbacks = [early_stopper]
pl_trainer_kwargs = {
    "accelerator": "auto",
    "callbacks": callbacks,
}
model1 = RNNModel(
    input_chunk_length=n_params['input_chunk_length'],
    model=n_params['Model'],
    hidden_dim=20,
    dropout=n_params['dropout'],
    batch_size=n_params['batch_size'],
    n_epochs=300,
    optimizer_kwargs={"lr": n_params['lr']},
    model_name="Karachi_RNN",
    pl_trainer_kwargs=pl_trainer_kwargs,
    # log_tensorboard=True,
    random_state=42,
    training_length=n_params['training_length'],
    force_reset=True,
    save_checkpoints=True
)
model1.fit(
    series=series_transformed, verbose=0,
    num_loader_workers=n_params['num_loader_workers']
)
preds = model1.predict(n=len(test), series=series_transformed)
n_preds = transformer.inverse_transform(preds)
val = rmse(y_test, n_preds)
print(f'RMSE: {val}')
#saving the model
model1.save('/kaggle/working/evapotranspiration_model.pt')
```
I have then downloaded this model so that I can use it in a new notebook.
#loading the model
`evo_model = RNNModel.load('evapotranspiration_model.pt')`
Trying to fit new data using the saved model and then make predictions.
```
json_data = {
"data_columns" : "weathercode,temperature_2m_max,temperature_2m_min,temperature_2m_mean,apparent_temperature_max,apparent_temperature_min,apparent_temperature_mean,sunrise,sunset,shortwave_radiation_sum,precipitation_sum,rain_sum,snowfall_sum,precipitation_hours,windspeed_10m_max,windgusts_10m_max,winddirection_10m_dominant,et0_fao_evapotranspiration"
}
print(evo_model)
data_columns = json_data['data_columns']
now = datetime.now() - relativedelta(days=7)
start = now - relativedelta(months=11)
date_string_end = now.strftime('%Y-%m-%d')
date_string_start = start.strftime('%Y-%m-%d')
date_pred = []
for date in pd.date_range(start=datetime.now() - relativedelta(days=6), periods=10):
    date_pred.append(date.strftime('%Y-%m-%d'))
url = "https://archive-api.open-meteo.com/v1/archive"
cities = [
{ "name": "Karachi", "country": "Pakistan", "latitude": 24.8608, "longitude": 67.0104 }
]
cities_df =[]
for city in cities:
    params = {
        "latitude": city["latitude"],
        "longitude": city['longitude'],
        "start_date": date_string_start,
        "end_date": date_string_end,
        "daily": data_columns,
        "timezone": "GMT",
        "min": date_string_start,
        "max": date_string_end,
    }
    res = requests.get(url, params=params)
    data = res.json()
    df = pd.DataFrame(data["daily"])
    df["latitude"] = data["latitude"]
    df["longitude"] = data["longitude"]
    df["elevation"] = data["elevation"]
    df["country"] = city["country"]
    df["city"] = city["name"]
    cities_df.append(df)
concat_df = pd.concat(cities_df, ignore_index=True)
concat_df.set_index('time', inplace=True)
print(concat_df.columns)
total_hours = concat_df['precipitation_hours'].sum()
concat_df['precipitation_rate'] = concat_df['precipitation_sum']/total_hours
##generate prediction for evo_transpiration
et0_fao_evapotranspiration = TimeSeries.from_series(concat_df['et0_fao_evapotranspiration'].values)
scaler = StandardScaler()
transformer = Scaler(scaler)
series_transformed = transformer.fit_transform(et0_fao_evapotranspiration)
evo_model.fit(
series=series_transformed, verbose=0,
)
evo_preds = evo_model.predict(n=10, series=series_transformed)
evo_preds = transformer.inverse_transform(evo_preds)
print(evo_preds)
```
I get the error "FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/darts_logs/Karachi_RNN/_model.pth.tar'" when it tries to execute the line `evo_model.fit(series=series_transformed, verbose=0)`
I don't understand why it's raising FileNotFoundError, because I have already downloaded the 'evapotranspiration_model.pt' model to my computer.
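One likely cause (an assumption, not confirmed against the darts source here): `save()` typically writes two artifacts — the wrapper file and a sibling torch checkpoint such as `evapotranspiration_model.pt.ckpt` — and fitting a loaded model can fail if only one of them was moved to the new machine. A small sketch that copies both:

```python
import os
import shutil

def copy_darts_model(src: str, dst_dir: str) -> list:
    # Copy the wrapper file and, if present, its sibling checkpoint file.
    os.makedirs(dst_dir, exist_ok=True)
    copied = []
    for suffix in ("", ".ckpt"):
        path = src + suffix
        if os.path.exists(path):
            shutil.copy(path, dst_dir)
            copied.append(path)
    return copied
```

If only the `.pt` file was downloaded, grabbing the `.ckpt` sibling from the Kaggle working directory and keeping the pair together may resolve the error.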
**Expected behavior**
I expected it to execute without any error and make predictions because I have properly saved & loaded the RNNModel. Please help
**System (please complete the following information):**
- Python version: [e.g. 3.8]
- darts version [e.g. 0.24.0]
**Additional context**
Add any other context about the problem here.
| closed | 2023-09-18T10:53:34Z | 2024-01-21T15:26:48Z | https://github.com/unit8co/darts/issues/1998 | [
"q&a"
] | Kamal-Moha | 4 |
reloadware/reloadium | django | 151 | Reloadium experienced a fatal error and has to quit. | ## Describe the bug*
A clear and concise description of what the bug is.
C:\Users\test\Documents\python_code\gitlab_code\venv\Scripts\python.exe" -m reloadium_launcher pydev_proxy "C:/Program Files/JetBrains/PyCharm Community Edition 2022.1/plugins/python-ce/helpers/pydev/pydevd.py" --multiprocess --client 127.0.0.1 --port 50448 --file "C:\Users\test\Documents\python_code\gitlab_code\test\Main.py"
Connected to pydev debugger (build 231.9011.38)
■■■■■■■■■■■■■■■
Reloadium 1.1.1
■■■■■■■■■■■■■■■
If you like this project consider becoming a sponsor or giving a star at https://github.com/reloadware/reloadium
Reloadium experienced a fatal error and has to quit.
Please submit a github issue to let us know at https://github.com/reloadware/reloadium
Process finished with exit code 1
<img width="367" alt="image" src="https://github.com/reloadware/reloadium/assets/7973168/5cca3127-e98d-4a92-b42e-1b17cce6d3cb">
<img width="732" alt="image" src="https://github.com/reloadware/reloadium/assets/7973168/0ca2eabe-6cb5-4f36-a19e-79ba068ee41e">
python environment
Python 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
But I can easily use it in another test project, like this:
<img width="690" alt="image" src="https://github.com/reloadware/reloadium/assets/7973168/3184c9bf-2442-483d-8145-122699d99432">
<img width="784" alt="image" src="https://github.com/reloadware/reloadium/assets/7973168/116fa567-9a23-40a5-ad91-c152188f0e3b">
| closed | 2023-06-10T14:54:02Z | 2023-07-06T12:43:42Z | https://github.com/reloadware/reloadium/issues/151 | [] | gengchaogit | 11 |
deepfakes/faceswap | machine-learning | 731 | Effmpeg not working with Docker installation because Dockerfile does not include an installation of ffmpeg | Effmpeg does not work with the Docker container because there is no ffmpeg installation. | closed | 2019-05-18T00:19:02Z | 2019-05-18T00:55:11Z | https://github.com/deepfakes/faceswap/issues/731 | [] | timothydelter | 2 |
pykaldi/pykaldi | numpy | 132 | Why are compute-mfcc-feats and ivector-extract-online2 not found? Is there a simple way to fix this issue? | 
| closed | 2019-06-04T20:35:22Z | 2020-10-13T20:17:56Z | https://github.com/pykaldi/pykaldi/issues/132 | [] | zachadams16 | 3 |
iterative/dvc | data-science | 9,786 | `pull`: fails unless target specified | # Bug Report
## Description
`dvc pull` fails but `dvc pull target` succeeds for the same file.
Reported multiple times in discord:
https://discord.com/channels/485586884165107732/563406153334128681/1131979446379888853
https://discord.com/channels/485586884165107732/485596304961962003/1135912803086119043
### Reproduce
Reproduction script:
```bash
rm -rf /Library/Caches/dvc/repo/63a86d1aa938a14ed3f3014e34dbe38a
rm -rf example-get-started-http-private-fixture
git clone git@github.com:iterative/example-get-started-http-private-fixture.git
cd example-get-started-http-private-fixture
dvc pull
dvc remote modify private-http auth basic
dvc remote modify private-http user user1
dvc remote modify private-http password password1
dvc pull
dvc pull data/features
```
The 1st and 2nd pull both fail to pull `data/features`, but the final `dvc pull data/features` succeeds.
### Expected
The 2nd `dvc pull` (after the remote config has been fixed) should succeed. | closed | 2023-08-01T16:57:26Z | 2023-08-04T07:51:12Z | https://github.com/iterative/dvc/issues/9786 | [
"bug",
"A: data-sync"
] | dberenbaum | 4 |
scrapy/scrapy | python | 5,769 | Plans to use "GOOGLE_APPLICATION_CREDENTIALS_JSON" (FEEDS) | Hello,
I am currently calling the crawlers from Airflow, using `PythonOperator`. It works perfectly. But because of that, I cannot set an env variable named `GOOGLE_APPLICATION_CREDENTIALS` with a path to the JSON file, since it's a limitation I have (fully managed environment, can't edit the image). Instead, I have the content of the JSON, provided by an Airflow Connection.
I know that besides the `GOOGLE_APPLICATION_CREDENTIALS` env var, Google's credentials mechanism ([google-cloud-storage](https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-python) lib) also "looks" at `GOOGLE_APPLICATION_CREDENTIALS_JSON`, which is exactly what I have when I call `crawl(MySpider)`.
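If `GOOGLE_APPLICATION_CREDENTIALS_JSON` ends up not being honored, one workaround is to materialize the JSON content into a temp file at startup and point `GOOGLE_APPLICATION_CREDENTIALS` at it before the GCS client is created — a sketch (the Airflow-connection plumbing is assumed, not shown):

```python
import os
import tempfile

def export_credentials(creds_json: str) -> str:
    # Write the credentials JSON (e.g. pulled from an Airflow Connection)
    # to a temp file and expose it via GOOGLE_APPLICATION_CREDENTIALS,
    # which google-auth's default discovery reads.
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        f.write(creds_json)
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = path
    return path
```

Calling this inside the `PythonOperator` callable, before the crawl starts, keeps the fully managed image untouched.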
I also tried [scrapy-s3pipeline](https://github.com/orangain/scrapy-s3pipeline) in order to use the JSON credentials content instead of this env var, but somehow it is not working (the feed is not exported).
So, my question is whether you plan to move in this direction, providing this alternative way to handle authentication so that a crawler can export FEEDS to GCS.
Thank you in advance.
| closed | 2022-12-23T15:27:18Z | 2022-12-27T16:08:02Z | https://github.com/scrapy/scrapy/issues/5769 | [] | elitongadotti | 5 |
ray-project/ray | python | 51,471 | CI test windows://python/ray/tests:test_cancel is consistently_failing | CI test **windows://python/ray/tests:test_cancel** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa9f-d3d7-4df6-a372-b880dc10a310
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4e-4a62-bd29-e5408e12b496
DataCaseName-windows://python/ray/tests:test_cancel-END
Managed by OSS Test Policy | closed | 2025-03-18T21:44:28Z | 2025-03-19T21:52:37Z | https://github.com/ray-project/ray/issues/51471 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
csurfer/pyheat | matplotlib | 18 | LOADING Module issue - Module not found | Hi,
I have a customized module **xyz.pyd** on Windows 10, which my program loads and executes perfectly since it's in the same directory, but pyheat fails to load it. Does it load modules only from site-packages?
Regards
Prabhat | open | 2021-11-04T02:27:14Z | 2022-08-01T10:47:19Z | https://github.com/csurfer/pyheat/issues/18 | [] | prabhatM | 2 |
TracecatHQ/tracecat | fastapi | 67 | Datadog Security Monitoring | **User Story:** I want to build automated investigations given findings from Datadog security products.
Datadog's key security features can be grouped in the following:
- CSPM findings
- SIEM signals
- SIEM signal state management
- CSPM findings state management
- SIEM detection rules
- Suppressions for SIEM detections
- Filters for SIEM detections
We will prioritize GET and UPDATE operations for alerts first.
API reference: https://docs.datadoghq.com/api/latest/security-monitoring/
## TODOs
Note: this list is non-exhaustive. We are using this issue as the tracker for all Datadog integrations.
- [x] [Get a quick list of security signals](https://docs.datadoghq.com/api/latest/security-monitoring/?code-lang=curl#get-a-quick-list-of-security-signals)
- [ ] [Change the triage state of a security signal](https://docs.datadoghq.com/api/latest/security-monitoring/?code-lang=curl#change-the-triage-state-of-a-security-signal)
- [ ] [List rules](https://docs.datadoghq.com/api/latest/security-monitoring/?code-lang=curl#list-rules)
## Use Cases
- Run automated detection hardening with stratus-red-team and SIEM detections (LIST operation with date / account ID filter)
- Automated threat intel to detections checker? | closed | 2024-04-19T09:59:14Z | 2024-06-16T19:19:12Z | https://github.com/TracecatHQ/tracecat/issues/67 | [
"enhancement",
"good first issue",
"integrations",
"tracker"
] | topher-lo | 1 |
scanapi/scanapi | rest-api | 403 | Fix anti pattern issues mentioned in static analysis | ## Feature request
### Description of the feature
<!-- A clear and concise description of what the new feature is. -->
To increase the quality of the project we are using static analysis to find out anti-patterns in the project.
A detailed list of the issues can be found [here](https://deepsource.io/gh/scanapi/scanapi/issues/?category=antipattern)
💡 The Issue requires multiple PRs so more than one person can contribute to the issue.
| closed | 2021-06-12T08:11:38Z | 2021-07-30T20:38:00Z | https://github.com/scanapi/scanapi/issues/403 | [
"Feature",
"Refactor",
"Code Quality",
"Antipattern",
"Multi Contributors"
] | Pradhvan | 0 |
TencentARC/GFPGAN | deep-learning | 34 | Training question | Hello author, I'd like to ask: why does training work fine on my local machine, while the loss is always NaN when training on the cluster? I tried rebuilding the environment from scratch and kept the local and cluster environments identical throughout, but in the end local training still works and the cluster still fails. What could be going on? Thanks. | closed | 2021-08-06T13:04:06Z | 2022-03-15T02:20:50Z | https://github.com/TencentARC/GFPGAN/issues/34 | [] | ZZFanya-DWR | 2
vastsa/FileCodeBox | fastapi | 27 | Security issue! | After a successful deployment, with no configuration at all, the default administrator password is `admin`, which is very dangerous. Please consider changing this so that a complex random password is generated on first successful deployment. | closed | 2022-12-28T07:48:19Z | 2023-01-16T06:58:40Z | https://github.com/vastsa/FileCodeBox/issues/27 | [
"enhancement"
] | tinyxingqiu | 2 |
slackapi/python-slack-sdk | asyncio | 1,216 | Conversations_members and direct message channels | Is `conversations_members` not compatible with direct message channels? I have an app with all four user level scopes (groups, channels, im, mpim all have read permissions). When I try:
```python
client.conversations_members(channel="D1234567890") #Example direct channel id
```
I get a `channel_not_found` response from Slack. Is there a way to achieve this? According to the docs, the old `im` APIs have all migrated to the `conversations` API. But this doesn't seem to work.
Similarly, given a `message` event (specifically during `message.im` events), the payload doesn't seem to contain the recipient user. Is there a way to retrieve it from the Events API?
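As a side note on the second question: for a `D…` channel, `conversations.info` typically includes the other party in the channel object's `user` field (with the `im:read` scope), which may be a workable substitute for listing members — a hedged sketch:

```python
def im_counterpart(client, channel_id):
    # For an IM (D...) channel, the channel object returned by
    # conversations.info usually carries the peer's user ID in "user".
    resp = client.conversations_info(channel=channel_id)
    return resp["channel"].get("user")
```

Here `client` is assumed to be a `slack_sdk.WebClient` (or anything exposing a compatible `conversations_info` method).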
| closed | 2022-05-20T09:14:36Z | 2023-12-27T19:52:13Z | https://github.com/slackapi/python-slack-sdk/issues/1216 | [
"question",
"needs info"
] | skewwhiff | 9 |
deepspeedai/DeepSpeed | deep-learning | 5,719 | Issue with LoRA Tuning on llama3-70b using PEFT and TRL's SFTTrainer | We are attempting to perform LoRA tuning on llama3-70b using PEFT with TRL's SFTTrainer. We are using 8 H100 GPUs and distributed training with ZeRO-stage3, but we encounter an error. Could you please provide any solutions?
Here is the error message:
```
Loading checkpoint shards: 77%|███████▋ | 23/30 [00:58<00:25, 3.68s/it]
Loading checkpoint shards: 80%|████████ | 24/30 [00:59<00:17, 2.87s/it]
Loading checkpoint shards: 83%|████████▎ | 25/30 [01:00<00:14, 2.89s/it]
Loading checkpoint shards: 80%|████████ | 24/30 [01:00<00:18, 3.12s/it]
Loading checkpoint shards: 83%|████████▎ | 25/30 [01:01<00:12, 2.47s/it]
Loading checkpoint shards: 87%|████████▋ | 26/30 [01:01<00:09, 2.49s/it]
Loading checkpoint shards: 83%|████████▎ | 25/30 [01:02<00:13, 2.70s/it]
Loading checkpoint shards: 87%|████████▋ | 26/30 [01:02<00:08, 2.18s/it]
Loading checkpoint shards: 90%|█████████ | 27/30 [01:03<00:06, 2.20s/it]
Loading checkpoint shards: 87%|████████▋ | 26/30 [01:03<00:09, 2.36s/it]
Loading checkpoint shards: 90%|█████████ | 27/30 [01:04<00:05, 1.99s/it]
Loading checkpoint shards: 93%|█████████▎| 28/30 [01:04<00:03, 1.99s/it]
Loading checkpoint shards: 90%|█████████ | 27/30 [01:05<00:06, 2.10s/it]
Loading checkpoint shards: 93%|█████████▎| 28/30 [01:05<00:03, 1.85s/it]
Loading checkpoint shards: 97%|█████████▋| 29/30 [01:06<00:01, 1.83s/it]
Loading checkpoint shards: 93%|█████████▎| 28/30 [01:06<00:03, 1.84s/it]
Loading checkpoint shards: 97%|█████████▋| 29/30 [01:06<00:01, 1.65s/it]
Loading checkpoint shards: 100%|██████████| 30/30 [01:06<00:00, 1.50s/it]
Loading checkpoint shards: 100%|██████████| 30/30 [01:06<00:00, 2.23s/it]
Loading checkpoint shards: 100%|██████████| 30/30 [01:07<00:00, 1.37s/it]
Loading checkpoint shards: 100%|██████████| 30/30 [01:07<00:00, 2.26s/it]
[WARNING|logging.py:314] 2024-07-02 18:12:30,312 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 97%|█████████▋| 29/30 [01:07<00:01, 1.59s/it][WARNING|logging.py:314] 2024-07-02 18:12:30,672 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 30/30 [01:08<00:00, 1.26s/it]
Loading checkpoint shards: 100%|██████████| 30/30 [01:08<00:00, 2.27s/it]
[WARNING|logging.py:314] 2024-07-02 18:12:31,194 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/trl/trainer/utils.py:116: UserWarning: The pad_token_id and eos_token_id values of this tokenizer are identical. If you are planning for multi-turn training, it can result in the model continuously generating questions and answers without eos token. To avoid this, set the pad_token_id to a different value.
warnings.warn(
/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/trl/trainer/utils.py:116: UserWarning: The pad_token_id and eos_token_id values of this tokenizer are identical. If you are planning for multi-turn training, it can result in the model continuously generating questions and answers without eos token. To avoid this, set the pad_token_id to a different value.
warnings.warn(
/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/trl/trainer/utils.py:116: UserWarning: The pad_token_id and eos_token_id values of this tokenizer are identical. If you are planning for multi-turn training, it can result in the model continuously generating questions and answers without eos token. To avoid this, set the pad_token_id to a different value.
warnings.warn(
[2024-07-02 18:12:37,303] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 14959
[2024-07-02 18:12:37,304] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 14960
Traceback (most recent call last):
File "/work/scripts/train_py/run_clm_sft_update.py", line 686, in <module>
main()
File "/work/scripts/train_py/run_clm_sft_update.py", line 609, in main
trainer = SFTTrainer(
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/trl/trainer/sft_trainer.py", line 278, in __init__
with PartialState().local_main_process_first():
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/accelerate/state.py", line 520, in local_main_process_first
yield from self._goes_first(self.is_local_main_process)
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/accelerate/state.py", line 384, in _goes_first
self.wait_for_everyone()
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/accelerate/state.py", line 378, in wait_for_everyone
torch.distributed.barrier()
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
return func(*args, **kwargs)
File "/home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3439, in barrier
work = default_pg.barrier(opts=opts)
torch.distributed.DistBackendError: [3] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '0', but store->get('0') got error: Connection reset by peer
Exception raised from recvBytes at ../torch/csrc/distributed/c10d/Utils.hpp:670 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fe5eecf4d87 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x5894fde (0x7fe5db5f0fde in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #2: c10d::TCPStore::doWait(c10::ArrayRef<std::string>, std::chrono::duration<long, std::ratio<1l, 1000l> >) + 0x360 (0x7fe5db5eb7f0 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: c10d::TCPStore::doGet(std::string const&) + 0x32 (0x7fe5db5ebb32 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #4: c10d::TCPStore::get(std::string const&) + 0xa1 (0x7fe5db5ec961 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #5: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fe5db5a1dd1 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fe5db5a1dd1 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #7: c10d::PrefixStore::get(std::string const&) + 0x31 (0x7fe5db5a1dd1 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #8: c10d::ProcessGroupNCCL::broadcastUniqueNCCLID(ncclUniqueId*, bool, std::string const&, int) + 0xa9 (0x7fe5a47dfc69 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #9: c10d::ProcessGroupNCCL::getNCCLComm(std::string const&, std::vector<c10::Device, std::allocator<c10::Device> > const&, c10d::OpType, int, bool) + 0x22b (0x7fe5a47e6c5b in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0x10ad03d (0x7fe5a47f003d in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #11: c10d::ProcessGroupNCCL::allreduce_impl(std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllreduceOptions const&) + 0x21 (0x7fe5a47f18e1 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #12: c10d::ProcessGroupNCCL::allreduce(std::vector<at::Tensor, std::allocator<at::Tensor> >&, c10d::AllreduceOptions const&) + 0x3bf (0x7fe5a47f38ff in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #13: c10d::ProcessGroupNCCL::barrier(c10d::BarrierOptions const&) + 0xb0e (0x7fe5a4802d4e in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so)
frame #14: <unknown function> + 0x5838872 (0x7fe5db594872 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #15: <unknown function> + 0x5843590 (0x7fe5db59f590 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #16: <unknown function> + 0x5843695 (0x7fe5db59f695 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0x4e8937c (0x7fe5dabe537c in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #18: <unknown function> + 0x1a08a38 (0x7fe5d7764a38 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #19: <unknown function> + 0x584cca4 (0x7fe5db5a8ca4 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #20: <unknown function> + 0x584da55 (0x7fe5db5a9a55 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #21: <unknown function> + 0xc93e88 (0x7fe5ede1ee88 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #22: <unknown function> + 0x413ef4 (0x7fe5ed59eef4 in /home/user1/.pyenv/versions/3.10.14/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #58: <unknown function> + 0x29d90 (0x7fe5ef964d90 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #59: __libc_start_main + 0x80 (0x7fe5ef964e40 in /usr/lib/x86_64-linux-gnu/libc.so.6)
frame #60: _start + 0x25 (0x55b3be923095 in /home/user1/.pyenv/versions/3.10.14/bin/python3.10)
. This may indicate a possible application crash on rank 0 or a network set up issue.
```
Strangely, the same code worked correctly during a previous test run; below is the log from that run. We haven't changed the code since then, but we are now encountering this new error.
One point of concern is that in the successful run log, there is a message:
```
[INFO|modeling_utils.py:3363] 2024-07-01 15:02:46,215 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
```
before loading the model. However, this message is missing in the current log, and it seems the model is loaded into CPU memory first (previously, it was loaded directly into GPU memory).
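As a quick sanity check, the presence of that marker can be tested programmatically against a saved run log. This is a plain-Python sketch (not DeepSpeed or transformers code); the marker string is copied from the log line above:

```python
def zero3_init_activated(log_text: str) -> bool:
    """Return True if a transformers run log shows zero.init() was activated."""
    marker = "Detected DeepSpeed ZeRO-3: activating zero.init() for this model"
    return marker in log_text

good_run = "[INFO|modeling_utils.py:3363] >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model"
current_run = "[INFO|modeling_utils.py:3363] >> loading weights file model.safetensors"

print(zero3_init_activated(good_run))     # True
print(zero3_init_activated(current_run))  # False
```

Running this against both saved logs confirms whether the difference between the two runs really is the missing zero.init() activation.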
Training Python Scripts:
```python
import logging
import os
from contextlib import nullcontext

TRL_USE_RICH = os.environ.get("TRL_USE_RICH", False)

from trl.commands.cli_utils import init_zero_verbose, SFTScriptArguments, TrlParser

if TRL_USE_RICH:
    init_zero_verbose()
    FORMAT = "%(message)s"

    from rich.console import Console
    from rich.logging import RichHandler

import torch
from datasets import load_dataset
from tqdm.rich import tqdm
from transformers import AutoTokenizer

from trl import (
    ModelConfig,
    RichProgressCallback,
    SFTConfig,
    SFTTrainer,
    get_peft_config,
    get_quantization_config,
    get_kbit_device_map,
)

tqdm.pandas()

if TRL_USE_RICH:
    logging.basicConfig(format=FORMAT, datefmt="[%X]", handlers=[RichHandler()], level=logging.INFO)

if __name__ == "__main__":
    parser = TrlParser((SFTScriptArguments, SFTConfig, ModelConfig))
    args, training_args, model_config = parser.parse_args_and_config()

    # Force use our print callback
    if TRL_USE_RICH:
        training_args.disable_tqdm = True
        console = Console()

    ################
    # Model & Tokenizer
    ################
    torch_dtype = (
        model_config.torch_dtype
        if model_config.torch_dtype in ["auto", None]
        else getattr(torch, model_config.torch_dtype)
    )
    quantization_config = get_quantization_config(model_config)
    model_kwargs = dict(
        revision=model_config.model_revision,
        trust_remote_code=model_config.trust_remote_code,
        attn_implementation=model_config.attn_implementation,
        torch_dtype=torch_dtype,
        use_cache=False if training_args.gradient_checkpointing else True,
        device_map=get_kbit_device_map() if quantization_config is not None else None,
        quantization_config=quantization_config,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_config.model_name_or_path, use_fast=True)
    tokenizer.pad_token = tokenizer.eos_token

    ################
    # Dataset
    ################
    raw_datasets = load_dataset(args.dataset_name)
    train_dataset = raw_datasets[args.dataset_train_split]
    eval_dataset = raw_datasets[args.dataset_test_split]

    ################
    # Optional rich context managers
    ###############
    init_context = nullcontext() if not TRL_USE_RICH else console.status("[bold green]Initializing the SFTTrainer...")
    save_context = (
        nullcontext()
        if not TRL_USE_RICH
        else console.status(f"[bold green]Training completed! Saving the model to {training_args.output_dir}")
    )

    ################
    # Training
    ################
    with init_context:
        trainer = SFTTrainer(
            model=model_config.model_name_or_path,
            model_init_kwargs=model_kwargs,
            args=training_args,
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
            tokenizer=tokenizer,
            peft_config=get_peft_config(model_config),
            callbacks=[RichProgressCallback] if TRL_USE_RICH else None,
        )

    trainer.train()

    with save_context:
        trainer.save_model(training_args.output_dir)
```
Training Shell Scripts:
```bash
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
time \
deepspeed \
sft.py \
--deepspeed ds_config_zero3.json \
--dataset_dir mytest \
--model_name_or_path meta-llama/Meta-Llama-3-70B-Instruct \
--tokenizer_name meta-llama/Meta-Llama-3-70B-Instruct \
--num_train_epochs 5 \
--do_train \
--do_eval \
--bf16 \
--output_dir ./lora-test \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 16 \
--learning_rate=5e-6 \
--lr_scheduler_type "constant" \
--warmup_ratio 0.03 \
--logging_steps 1 \
--evaluation_strategy steps \
--evaluation_steps 100 \
--save_strategy epoch \
--overwrite_output_dir \
--gradient_checkpointing \
--use_peft True \
--lora_r 16 \
--ddp_timeout 72000 \
--lora_alpha 32 \
--lora_dropout 0.05 \
--lora_target_modules q_proj v_proj k_proj o_proj gate_proj down_proj up_proj
```
DeepSpeed Config:
```json
{
"bf16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"reduce_bucket_size": 5e7,
"stage3_prefetch_bucket_size": 5e7,
"stage3_param_persistence_threshold": 0,
"stage3_max_live_parameters": 1e8,
"stage3_max_reuse_distance": 1e8,
"sub_group_size": 5e7,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
``` | open | 2024-07-02T09:37:29Z | 2024-07-02T16:04:35Z | https://github.com/deepspeedai/DeepSpeed/issues/5719 | [
"training"
] | yutanozaki1 | 0 |
davidsandberg/facenet | computer-vision | 798 | Cudnn incompatibilty | I am using Linux Mint 18.2 Cinnamon 64 bit OS.I have installed Cuda 9.1 installed with Cudnn 7.1.Whenever I run the commands to train on my custom images I get the following error:
**Command:**
`src/classifier.py TRAIN ~/datasets/my_dataset/train/ ~/models/20180402-114759.pb ~/models/my_classifier.pkl --batch_size 32`
**Error**
SyntaxWarning: assertion is always true, perhaps remove parentheses?
assert(len(cls.image_paths)>0, 'There must be at least one image for each class in the dataset')
/home/big15/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
2018-06-21 12:13:54.714762: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-06-21 12:13:54.715066: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: GeForce GT 710 major: 3 minor: 5 memoryClockRate(GHz): 0.954
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.81GiB
2018-06-21 12:13:54.715089: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-06-21 12:13:55.022402: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-21 12:13:55.022451: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-06-21 12:13:55.022462: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-06-21 12:13:55.022623: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1595 MB memory) -> physical GPU (device: 0, name: GeForce GT 710, pci bus id: 0000:01:00.0, compute capability: 3.5)
Number of classes: 11
Number of images: 1760
Loading feature extraction model
Model filename: /home/big15/models/20180402-114759.pb
Calculating features for images
2018-06-21 12:14:00.283701: E tensorflow/stream_executor/cuda/cuda_dnn.cc:396] Loaded runtime CuDNN library: 7102 (compatibility version 7100) but source was compiled with 7005 (compatibility version 7000). If using a binary install, upgrade your CuDNN library to match. If building from sources, make sure the library loaded at runtime matches a compatible version specified during compile configuration.
2018-06-21 12:14:00.284164: F tensorflow/core/kernels/conv_ops.cc:712] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo<T>(), &algorithms)
Aborted
I tried downgrading cuDNN to a 7.0.x version, but it still gives the same error.
Has anyone else faced the same issue? | open | 2018-06-21T09:49:41Z | 2018-06-21T09:50:44Z | https://github.com/davidsandberg/facenet/issues/798 | [] | nirajvermafcb | 0 |
httpie/cli | rest-api | 1,388 | No such file or directory: '~/.config/httpie/version_info.json' | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. do use `http` command, e.g. `http GET http://localhost:8004/consumers/test`
## Current result
```bash
❯ http GET http://localhost:8004/consumers/test
HTTP/1.1 200
Connection: keep-alive
Content-Length: 0
Date: Fri, 06 May 2022 06:28:00 GMT
Keep-Alive: timeout=60
X-B3-TraceId: baf0d94787afeb82
~ on ☁️ (ap-southeast-1)
❯ Traceback (most recent call last):
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 19, in <module>
sys.exit(main())
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 9, in main
exit_status = main()
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 162, in main
return raw_main(
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 44, in raw_main
return run_daemon_task(env, args)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/daemon_runner.py", line 47, in run_daemon_task
DAEMONIZED_TASKS[options.task_id](env)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/update_warnings.py", line 51, in _fetch_updates
with open_with_lockfile(file, 'w') as stream:
File "/opt/homebrew/Cellar/python@3.10/3.10.4/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/utils.py", line 287, in open_with_lockfile
with open(file, *args, **kwargs) as stream:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/na/.config/httpie/version_info.json'
```
## Expected result
```bash
❯ http GET http://localhost:8004/consumers/test
HTTP/1.1 200
Connection: keep-alive
Content-Length: 0
Date: Fri, 06 May 2022 06:28:00 GMT
Keep-Alive: timeout=60
X-B3-TraceId: baf0d94787afeb82
```
(without the `FileNotFoundError`)
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
❯ http --debug GET http://localhost:8004/consumers/test
HTTPie 3.2.0
Requests 2.27.1
Pygments 2.12.0
Python 3.10.4 (main, Apr 26 2022, 19:36:29) [Clang 13.1.6 (clang-1316.0.21.2)]
/opt/homebrew/Cellar/httpie/3.2.0/libexec/bin/python3.10
Darwin 21.4.0
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x104179d80>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x104179c60>,
'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/Users/nico.arianto/.config/httpie'),
'devnull': <property object at 0x104153b00>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x104179cf0>,
'program_name': 'http',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x104169570>,
'rich_error_console': <functools.cached_property object at 0x10416b0a0>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.2.0')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x104483220>,
'url': 'http://localhost:8004/consumers/test'})
HTTP/1.1 200
Connection: keep-alive
Content-Length: 0
Date: Fri, 06 May 2022 06:37:17 GMT
Keep-Alive: timeout=60
X-B3-TraceId: 5c2a368fd5b3f98a
~ on ☁️ (ap-southeast-1)
❯ Traceback (most recent call last):
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 19, in <module>
sys.exit(main())
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 9, in main
exit_status = main()
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 162, in main
return raw_main(
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 44, in raw_main
return run_daemon_task(env, args)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/daemon_runner.py", line 47, in run_daemon_task
DAEMONIZED_TASKS[options.task_id](env)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/update_warnings.py", line 51, in _fetch_updates
with open_with_lockfile(file, 'w') as stream:
File "/opt/homebrew/Cellar/python@3.10/3.10.4/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/utils.py", line 287, in open_with_lockfile
with open(file, *args, **kwargs) as stream:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/nico.arianto/.config/httpie/version_info.json'
```
## Additional information, screenshots, or code examples
Installation via `homebrew`
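For what it's worth, a minimal sketch of a possible workaround (hypothetical code, not HTTPie's actual implementation) is to create the missing parent directory before the write that currently fails:

```python
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def open_ensuring_parent(file, *args, **kwargs):
    # Create the parent directory (e.g. ~/.config/httpie) if it is missing,
    # so open(..., 'w') cannot raise FileNotFoundError for the directory.
    path = Path(file)
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, *args, **kwargs) as stream:
        yield stream

# Usage sketch against a throwaway directory:
target = Path(tempfile.mkdtemp()) / ".config" / "httpie" / "version_info.json"
with open_ensuring_parent(target, "w") as f:
    f.write("{}")
print(target.exists())  # True
```

In other words, the daemonized update check assumes `~/.config/httpie` already exists, which is not guaranteed on a fresh install.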
| closed | 2022-05-06T06:38:24Z | 2022-05-07T00:43:46Z | https://github.com/httpie/cli/issues/1388 | [
"bug",
"new"
] | nico-arianto | 3 |
pydantic/pydantic-ai | pydantic | 836 | Prompt prefill/prefix | One essential prompt engineering technique is to prefill the response to steer the model to certain outcomes:
- `{` to get JSON
- `<!DOCTYPE html>` to get back an HTML document
- `<svg width="200" height="200"` to get back an SVG with the specified dimensions
- Or, in the case of thinking models, add `<think>` and prefill the thinking direction to guide the thinking process
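For context, the general pattern (shown here as plain message dicts in the common OpenAI/Anthropic style, not as any pydantic-ai API) is to seed the conversation with a partial assistant turn that the model must continue:

```python
# A prefilled request body: the trailing assistant message is intentionally
# incomplete, so the model's completion starts right after the "{".
messages = [
    {"role": "user", "content": "List three colors as JSON."},
    {"role": "assistant", "content": "{"},  # prefill / response prefix
]

# The last turn is the prefix the model continues from.
print(messages[-1]["role"], repr(messages[-1]["content"]))  # assistant '{'
```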
I wonder whether this is possible with pydantic-ai at the moment, or if there are plans to support it. | open | 2025-02-01T04:19:14Z | 2025-02-05T06:48:48Z | https://github.com/pydantic/pydantic-ai/issues/836 | [] | tranhoangnguyen03 | 7 |
tensorpack/tensorpack | tensorflow | 992 | Do you have plans to reproduce deformable convolution? | closed | 2018-11-28T17:40:34Z | 2018-11-28T22:49:11Z | https://github.com/tensorpack/tensorpack/issues/992 | [
"examples"
] | jianlong-yuan | 1 | |
tqdm/tqdm | jupyter | 613 | RuntimeError: cannot join current thread | - [ ] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
1. version info:
```
4.25.0 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] linux
```
2. My code:
```python
from pathlib import Path
import tqdm

for index, file_path in enumerate(tqdm.tqdm(Path(dirpath).iterdir())):
    ...  # do something with file_path
```
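As a workaround sketch (the traceback suggests the crash happens when tqdm's background monitor thread tries to join itself at shutdown), disabling the monitor before creating any bars avoids that join; materializing the generator also gives tqdm a total:

```python
from pathlib import Path
import tqdm

tqdm.tqdm.monitor_interval = 0  # disable the background monitor thread

dirpath = "."  # hypothetical directory; substitute your own
files = list(Path(dirpath).iterdir())  # a list gives tqdm a known total
for file_path in tqdm.tqdm(files):
    pass  # do something with file_path
```

`monitor_interval` is a class attribute, so it must be set before the first bar is constructed.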
3. Error messages:
```
Exception ignored in: <bound method tqdm.__del__ of 50it [00:31, 1.56it/s]>
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 885, in __del__
self.close()
File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 1090, in close
self._decr_instances(self)
File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 454, in _decr_instances
cls.monitor.exit()
File "/usr/local/lib/python3.5/dist-packages/tqdm/_monitor.py", line 52, in exit
self.join()
File "/usr/lib/python3.5/threading.py", line 1051, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
``` | closed | 2018-09-17T02:15:36Z | 2021-11-17T01:53:17Z | https://github.com/tqdm/tqdm/issues/613 | [
"p0-bug-critical ☢",
"synchronisation ⇶"
] | david30907d | 22 |
dropbox/PyHive | sqlalchemy | 40 | allow user to specify other authMechanism | I want to use PyHive integrated with SQLAlchemy to work with Hive and Presto.
Presto works well, but for Hive the authMechanism is fixed to `PLAIN`:
https://github.com/dropbox/PyHive/search?utf8=%E2%9C%93&q=PLAIN
So when the required mechanism is not `PLAIN`, it will complain:
```
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
```
Is there a way to support other authMechanism values, like pyhs2 does?
https://github.com/BradRuderman/pyhs2/search?utf8=%E2%9C%93&q=authMechanism&type=Code
| closed | 2016-02-15T06:26:54Z | 2018-07-10T07:30:29Z | https://github.com/dropbox/PyHive/issues/40 | [
"duplicate"
] | twds | 3 |
PrefectHQ/prefect | automation | 17,299 | incorrect image tag for auto-generated Dockerfile | ### Bug summary
Reported by Brain on Slack.
When using the auto-generated Dockerfile created by Prefect in version 3.1.9:
```
prefect.utilities.dockerutils.BuildError: failed to resolve reference "docker.io/prefecthq/prefect:3.1.9-python3.13": docker.io/prefecthq/prefect:3.1.9-python3.13: not found
```
Indeed, this reference doesn't exist on Docker Hub.
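A guess at the mechanics (sketched below; the exact template Prefect uses is an assumption): the default tag is assembled from the installed Prefect version and the local interpreter's major.minor, so running the deploy under Python 3.13 produces a tag that was never published for 3.1.9:

```python
import sys

def default_prefect_tag(prefect_version: str = "3.1.9") -> str:
    # Hypothetical reconstruction of the tag pattern seen in the error above:
    # the local interpreter's major.minor ends up in the image tag.
    py = f"{sys.version_info.major}.{sys.version_info.minor}"
    return f"prefecthq/prefect:{prefect_version}-python{py}"

print(default_prefect_tag())  # e.g. prefecthq/prefect:3.1.9-python3.13 under Python 3.13
```

If that pattern holds, the fix on the user side is either pinning an explicit published image or deploying from a Python version that has a matching tag.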
### Version info
```Text
3.1.9
```
### Additional context
_No response_ | open | 2025-02-26T23:19:11Z | 2025-02-28T15:43:49Z | https://github.com/PrefectHQ/prefect/issues/17299 | [
"bug"
] | zzstoatzz | 5 |
vastsa/FileCodeBox | fastapi | 244 | Latest version: after changing settings in the admin backend, the frontend does not update | After updating settings in the admin backend, they do not take effect on the frontend.
If the configuration file has to be changed manually, where can it be found? | closed | 2025-02-01T15:03:37Z | 2025-02-08T14:30:22Z | https://github.com/vastsa/FileCodeBox/issues/244 | [] | haobangme | 4 |
deepset-ai/haystack | nlp | 8,903 | Proposal to make input variables to `PromptBuilder` and `ChatPromptBuilder` required by default | **Is your feature request related to a problem? Please describe.**
Most of our components require some (or all) inputs during runtime. For components whose inputs are based on Jinja2 templates (e.g. `ConditionalRouter`, `OutputAdapter`, `PromptBuilder`, and `ChatPromptBuilder`) we are inconsistent about whether the Jinja2 variables are required or optional by default. The components `ConditionalRouter` and `OutputAdapter` require all Jinja2 variables defined in their templates in order to run, but `PromptBuilder` and `ChatPromptBuilder` treat all Jinja2 variables as optional by default.
This optionality has caused "intended" but usually unexpected behavior (from the perspective of the user) when running pipelines with multiple branches where each branch may contain a (Chat)PromptBuilder + (Chat)Generator. Specifically, if no required variables are set in the prompt builder then that component will always trigger even if it's along a branch that has been turned "off" by a previous `ConditionalRouter`.
This can lead to unexpected responses from a branch of the pipeline that wasn't meant to trigger from the users perspective. Ie you could end up with two answers from two branches in a pipeline even though a user only expected one to occur.
**Describe the solution you'd like**
I'd like to propose changing the default behavior of the PromptBuilder and ChatPromptBuilder to require all input variables rather than having them be optional by default. This would be a **breaking change** but one that I think is more intuitive to users and would be inline with how our `ConditionalRouter` and `OutputAdapter` work.
**Describe alternatives you've considered**
Leave as is and just add a warning to PromptBuilder and ChatPromptBuilder. See https://github.com/deepset-ai/haystack/issues/8901
**Additional context**
@ju-gu and I have run into this multiple times when building pipelines for clients.
| closed | 2025-02-21T14:03:22Z | 2025-03-21T14:53:27Z | https://github.com/deepset-ai/haystack/issues/8903 | [
"P2"
] | sjrl | 7 |
explosion/spaCy | machine-learning | 13,772 | In requirements.txt, thinc>=8.3.4,<8.4.0 was not found, so I changed it to thinc>=8.3.0,<8.4.0, but now it gives an error that it failed building the wheel for thinc |
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
(dlenv) [manshika@lappy spaCy]$ pip install -r requirements.txt
Collecting spacy-legacy<3.1.0,>=3.0.11 (from -r requirements.txt (line 2))
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0 (from -r requirements.txt (line 3))
Using cached spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting cymem<2.1.0,>=2.0.2 (from -r requirements.txt (line 4))
Using cached cymem-2.0.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.5 kB)
Collecting preshed<3.1.0,>=3.0.2 (from -r requirements.txt (line 5))
Using cached preshed-3.0.9-cp313-cp313-linux_x86_64.whl
ERROR: Ignored the following yanked versions: 6.10.4.dev0, 7.4.4
ERROR: Could not find a version that satisfies the requirement thinc<8.4.0,>=8.3.4 (from versions: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.41, 1.42, 1.60, 1.61, 1.62, 1.63, 1.64, 1.65, 1.66, 1.67, 1.68, 1.69, 1.70, 1.71, 1.72, 1.73, 1.74, 1.75, 1.76, 2.0, 3.0, 3.1, 3.2, 3.3, 3.4.1, 4.0.0, 4.1.0, 4.2.0, 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7, 5.0.8, 6.0.0, 6.1.0, 6.1.1, 6.1.2, 6.1.3, 6.2.0, 6.3.0, 6.4.0, 6.5.0, 6.5.2, 6.6.0, 6.7.0, 6.7.1, 6.7.2, 6.7.3, 6.8.0, 6.8.1, 6.8.2, 6.9.0, 6.10.0, 6.10.1.dev0, 6.10.1, 6.10.2.dev0, 6.10.2.dev1, 6.10.2, 6.10.3.dev0, 6.10.3.dev1, 6.10.3, 6.11.0.dev2, 6.11.1.dev0, 6.11.1.dev1, 6.11.1.dev2, 6.11.1.dev3, 6.11.1.dev4, 6.11.1.dev6, 6.11.1.dev7, 6.11.1.dev10, 6.11.1.dev11, 6.11.1.dev12, 6.11.1.dev13, 6.11.1.dev15, 6.11.1.dev16, 6.11.1.dev17, 6.11.1.dev18, 6.11.1.dev19, 6.11.1.dev20, 6.11.1, 6.11.2.dev0, 6.11.2, 6.11.3.dev1, 6.11.3.dev2, 6.12.0, 6.12.1, 7.0.0.dev0, 7.0.0.dev1, 7.0.0.dev2, 7.0.0.dev3, 7.0.0.dev4, 7.0.0.dev5, 7.0.0.dev6, 7.0.0.dev8, 7.0.0, 7.0.1.dev0, 7.0.1.dev1, 7.0.1.dev2, 7.0.1, 7.0.2, 7.0.3, 7.0.4.dev0, 7.0.4, 7.0.5.dev0, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.1.0.dev0, 7.1.0, 7.1.1, 7.2.0.dev3, 7.2.0, 7.3.0.dev0, 7.3.0, 7.3.1, 7.4.0.dev0, 7.4.0.dev1, 7.4.0.dev2, 7.4.0, 7.4.1, 7.4.2, 7.4.3, 7.4.5, 7.4.6, 8.0.0.dev0, 8.0.0.dev2, 8.0.0.dev4, 8.0.0a0, 8.0.0a1, 8.0.0a2, 8.0.0a3, 8.0.0a6, 8.0.0a8, 8.0.0a9, 8.0.0a11, 8.0.0a12, 8.0.0a13, 8.0.0a14, 8.0.0a16, 8.0.0a17, 8.0.0a18, 8.0.0a19, 8.0.0a20, 8.0.0a21, 8.0.0a22, 8.0.0a23, 8.0.0a24, 8.0.0a25, 8.0.0a26, 8.0.0a27, 8.0.0a28, 8.0.0a29, 8.0.0a30, 8.0.0a31, 8.0.0a32, 8.0.0a33, 8.0.0a34, 8.0.0a35, 8.0.0a36, 8.0.0a40, 8.0.0a41, 8.0.0a42, 8.0.0a43, 8.0.0a44, 8.0.0rc0, 8.0.0rc1, 8.0.0rc2, 8.0.0rc3, 8.0.0rc4, 8.0.0rc5, 8.0.0rc6.dev0, 8.0.0rc6, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.0.8, 8.0.9, 8.0.10, 8.0.11, 8.0.12, 8.0.13, 8.0.14.dev0, 8.0.14, 8.0.15, 8.0.16, 8.0.17, 8.1.0.dev0, 8.1.0.dev1, 8.1.0.dev2, 8.1.0.dev3, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 
8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.3.0, 8.3.1, 8.3.2, 9.0.0.dev0, 9.0.0.dev1, 9.0.0.dev2, 9.0.0.dev3, 9.0.0.dev4, 9.0.0.dev5, 9.0.0, 9.1.0, 9.1.1)
ERROR: No matching distribution found for thinc<8.4.0,>=8.3.4
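Mechanically, that first failure is just the pin excluding every published 8.3.x release; a small sketch using the versions listed in the error makes it obvious (version list copied from the log above, comparison logic simplified):

```python
# Within the 8.3.x line, the resolver's error above lists only these releases:
available = ["8.3.0", "8.3.1", "8.3.2"]

def satisfies_pin(version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    return (8, 3, 4) <= parts < (8, 4, 0)  # the original thinc>=8.3.4,<8.4.0 pin

print([v for v in available if satisfies_pin(v)])  # [] -> "No matching distribution found"
```

Relaxing the pin to `>=8.3.0,<8.4.0` (as the second run below does) then resolves to 8.3.2, but only as an sdist on this Python, which is where the wheel build subsequently fails.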
(dlenv) [manshika@lappy spaCy]$ pip install -r requirements.txt
Collecting spacy-legacy<3.1.0,>=3.0.11 (from -r requirements.txt (line 2))
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0 (from -r requirements.txt (line 3))
Using cached spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting cymem<2.1.0,>=2.0.2 (from -r requirements.txt (line 4))
Using cached cymem-2.0.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.5 kB)
Collecting preshed<3.1.0,>=3.0.2 (from -r requirements.txt (line 5))
Using cached preshed-3.0.9-cp313-cp313-linux_x86_64.whl
Collecting thinc<8.4.0,>=8.3.0 (from -r requirements.txt (line 6))
Using cached thinc-8.3.2.tar.gz (193 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting ml_datasets<0.3.0,>=0.2.0 (from -r requirements.txt (line 7))
Using cached ml_datasets-0.2.0-py3-none-any.whl.metadata (7.5 kB)
Collecting murmurhash<1.1.0,>=0.28.0 (from -r requirements.txt (line 8))
Using cached murmurhash-1.0.12-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting wasabi<1.2.0,>=0.9.1 (from -r requirements.txt (line 9))
Using cached wasabi-1.1.3-py3-none-any.whl.metadata (28 kB)
Collecting srsly<3.0.0,>=2.4.3 (from -r requirements.txt (line 10))
Using cached srsly-2.5.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (19 kB)
Collecting catalogue<2.1.0,>=2.0.6 (from -r requirements.txt (line 11))
Using cached catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
Collecting typer<1.0.0,>=0.3.0 (from -r requirements.txt (line 12))
Using cached typer-0.15.2-py3-none-any.whl.metadata (15 kB)
Collecting weasel<0.5.0,>=0.1.0 (from -r requirements.txt (line 13))
Using cached weasel-0.4.1-py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: numpy<3.0.0,>=2.0.0 in /home/manshika/.virtualenvs/dlenv/lib/python3.13/site-packages (from -r requirements.txt (line 15)) (2.2.4)
Collecting requests<3.0.0,>=2.13.0 (from -r requirements.txt (line 16))
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting tqdm<5.0.0,>=4.38.0 (from -r requirements.txt (line 17))
Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Collecting pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 (from -r requirements.txt (line 18))
Using cached pydantic-2.10.6-py3-none-any.whl.metadata (30 kB)
Collecting jinja2 (from -r requirements.txt (line 19))
Using cached jinja2-3.1.6-py3-none-any.whl.metadata (2.9 kB)
Collecting langcodes<4.0.0,>=3.2.0 (from -r requirements.txt (line 20))
Using cached langcodes-3.5.0-py3-none-any.whl.metadata (29 kB)
Requirement already satisfied: setuptools in /home/manshika/.virtualenvs/dlenv/lib/python3.13/site-packages (from -r requirements.txt (line 22)) (76.1.0)
Collecting packaging>=20.0 (from -r requirements.txt (line 23))
Using cached packaging-24.2-py3-none-any.whl.metadata (3.2 kB)
Collecting pre-commit>=2.13.0 (from -r requirements.txt (line 25))
Using cached pre_commit-4.1.0-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting cython<3.0,>=0.25 (from -r requirements.txt (line 26))
Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
Collecting pytest!=7.1.0,>=5.2.0 (from -r requirements.txt (line 27))
Using cached pytest-8.3.5-py3-none-any.whl.metadata (7.6 kB)
Collecting pytest-timeout<2.0.0,>=1.3.0 (from -r requirements.txt (line 28))
Using cached pytest_timeout-1.4.2-py2.py3-none-any.whl.metadata (11 kB)
Collecting mock<3.0.0,>=2.0.0 (from -r requirements.txt (line 29))
Using cached mock-2.0.0-py2.py3-none-any.whl.metadata (3.2 kB)
Collecting flake8<6.0.0,>=3.8.0 (from -r requirements.txt (line 30))
Using cached flake8-5.0.4-py2.py3-none-any.whl.metadata (4.1 kB)
Collecting hypothesis<7.0.0,>=3.27.0 (from -r requirements.txt (line 31))
Using cached hypothesis-6.129.4-py3-none-any.whl.metadata (4.4 kB)
Collecting mypy<1.6.0,>=1.5.0 (from -r requirements.txt (line 32))
Using cached mypy-1.5.1-py3-none-any.whl.metadata (1.7 kB)
Collecting types-mock>=0.1.1 (from -r requirements.txt (line 33))
Using cached types_mock-5.2.0.20250306-py3-none-any.whl.metadata (2.0 kB)
Collecting types-setuptools>=57.0.0 (from -r requirements.txt (line 34))
Using cached types_setuptools-76.0.0.20250313-py3-none-any.whl.metadata (2.2 kB)
Collecting types-requests (from -r requirements.txt (line 35))
Using cached types_requests-2.32.0.20250306-py3-none-any.whl.metadata (2.3 kB)
Collecting black==22.3.0 (from -r requirements.txt (line 37))
Using cached black-22.3.0-py3-none-any.whl.metadata (45 kB)
Collecting cython-lint>=0.15.0 (from -r requirements.txt (line 38))
Using cached cython_lint-0.16.6-py3-none-any.whl.metadata (4.9 kB)
Collecting isort<6.0,>=5.0 (from -r requirements.txt (line 39))
Using cached isort-5.13.2-py3-none-any.whl.metadata (12 kB)
Collecting click>=8.0.0 (from black==22.3.0->-r requirements.txt (line 37))
Using cached click-8.1.8-py3-none-any.whl.metadata (2.3 kB)
Collecting platformdirs>=2 (from black==22.3.0->-r requirements.txt (line 37))
Using cached platformdirs-4.3.6-py3-none-any.whl.metadata (11 kB)
Collecting pathspec>=0.9.0 (from black==22.3.0->-r requirements.txt (line 37))
Using cached pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
Collecting mypy-extensions>=0.4.3 (from black==22.3.0->-r requirements.txt (line 37))
Using cached mypy_extensions-1.0.0-py3-none-any.whl.metadata (1.1 kB)
Collecting blis<1.1.0,>=1.0.0 (from thinc<8.4.0,>=8.3.0->-r requirements.txt (line 6))
Using cached blis-1.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.6 kB)
Collecting confection<1.0.0,>=0.0.1 (from thinc<8.4.0,>=8.3.0->-r requirements.txt (line 6))
Using cached confection-0.1.5-py3-none-any.whl.metadata (19 kB)
Collecting numpy<3.0.0,>=2.0.0 (from -r requirements.txt (line 15))
Using cached numpy-2.0.2-cp313-cp313-linux_x86_64.whl
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/manshika/.virtualenvs/dlenv/lib/python3.13/site-packages (from typer<1.0.0,>=0.3.0->-r requirements.txt (line 12)) (4.12.2)
Collecting shellingham>=1.3.0 (from typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached shellingham-1.5.4-py2.py3-none-any.whl.metadata (3.5 kB)
Collecting rich>=10.11.0 (from typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached rich-13.9.4-py3-none-any.whl.metadata (18 kB)
Collecting cloudpathlib<1.0.0,>=0.7.0 (from weasel<0.5.0,>=0.1.0->-r requirements.txt (line 13))
Using cached cloudpathlib-0.21.0-py3-none-any.whl.metadata (14 kB)
Collecting smart-open<8.0.0,>=5.2.1 (from weasel<0.5.0,>=0.1.0->-r requirements.txt (line 13))
Using cached smart_open-7.1.0-py3-none-any.whl.metadata (24 kB)
Collecting charset-normalizer<4,>=2 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached urllib3-2.3.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests<3.0.0,>=2.13.0->-r requirements.txt (line 16))
Using cached certifi-2025.1.31-py3-none-any.whl.metadata (2.5 kB)
Collecting annotated-types>=0.6.0 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->-r requirements.txt (line 18))
Using cached annotated_types-0.7.0-py3-none-any.whl.metadata (15 kB)
Collecting pydantic-core==2.27.2 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->-r requirements.txt (line 18))
Using cached pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->-r requirements.txt (line 19))
Using cached MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Collecting language-data>=1.2 (from langcodes<4.0.0,>=3.2.0->-r requirements.txt (line 20))
Using cached language_data-1.3.0-py3-none-any.whl.metadata (4.3 kB)
Collecting cfgv>=2.0.0 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached cfgv-3.4.0-py2.py3-none-any.whl.metadata (8.5 kB)
Collecting identify>=1.0.0 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached identify-2.6.9-py2.py3-none-any.whl.metadata (4.4 kB)
Collecting nodeenv>=0.11.1 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached nodeenv-1.9.1-py2.py3-none-any.whl.metadata (21 kB)
Collecting pyyaml>=5.1 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting virtualenv>=20.10.0 (from pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached virtualenv-20.29.3-py3-none-any.whl.metadata (4.5 kB)
Collecting iniconfig (from pytest!=7.1.0,>=5.2.0->-r requirements.txt (line 27))
Using cached iniconfig-2.0.0-py3-none-any.whl.metadata (2.6 kB)
Collecting pluggy<2,>=1.5 (from pytest!=7.1.0,>=5.2.0->-r requirements.txt (line 27))
Using cached pluggy-1.5.0-py3-none-any.whl.metadata (4.8 kB)
Collecting pbr>=0.11 (from mock<3.0.0,>=2.0.0->-r requirements.txt (line 29))
Using cached pbr-6.1.1-py2.py3-none-any.whl.metadata (3.4 kB)
Collecting six>=1.9 (from mock<3.0.0,>=2.0.0->-r requirements.txt (line 29))
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting mccabe<0.8.0,>=0.7.0 (from flake8<6.0.0,>=3.8.0->-r requirements.txt (line 30))
Using cached mccabe-0.7.0-py2.py3-none-any.whl.metadata (5.0 kB)
Collecting pycodestyle<2.10.0,>=2.9.0 (from flake8<6.0.0,>=3.8.0->-r requirements.txt (line 30))
Using cached pycodestyle-2.9.1-py2.py3-none-any.whl.metadata (31 kB)
Collecting pyflakes<2.6.0,>=2.5.0 (from flake8<6.0.0,>=3.8.0->-r requirements.txt (line 30))
Using cached pyflakes-2.5.0-py2.py3-none-any.whl.metadata (3.8 kB)
Collecting attrs>=22.2.0 (from hypothesis<7.0.0,>=3.27.0->-r requirements.txt (line 31))
Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
Collecting sortedcontainers<3.0.0,>=2.1.0 (from hypothesis<7.0.0,>=3.27.0->-r requirements.txt (line 31))
Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl.metadata (10 kB)
Collecting tokenize-rt>=3.2.0 (from cython-lint>=0.15.0->-r requirements.txt (line 38))
Using cached tokenize_rt-6.1.0-py2.py3-none-any.whl.metadata (4.1 kB)
Collecting marisa-trie>=1.1.0 (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->-r requirements.txt (line 20))
Using cached marisa_trie-1.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.0 kB)
Collecting markdown-it-py>=2.2.0 (from rich>=10.11.0->typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting pygments<3.0.0,>=2.13.0 (from rich>=10.11.0->typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached pygments-2.19.1-py3-none-any.whl.metadata (2.5 kB)
Collecting wrapt (from smart-open<8.0.0,>=5.2.1->weasel<0.5.0,>=0.1.0->-r requirements.txt (line 13))
Using cached wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.4 kB)
Collecting distlib<1,>=0.3.7 (from virtualenv>=20.10.0->pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached distlib-0.3.9-py2.py3-none-any.whl.metadata (5.2 kB)
Collecting filelock<4,>=3.12.2 (from virtualenv>=20.10.0->pre-commit>=2.13.0->-r requirements.txt (line 25))
Using cached filelock-3.18.0-py3-none-any.whl.metadata (2.9 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0.0,>=0.3.0->-r requirements.txt (line 12))
Using cached mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB)
Using cached black-22.3.0-py3-none-any.whl (153 kB)
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Using cached spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Using cached cymem-2.0.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (222 kB)
Using cached ml_datasets-0.2.0-py3-none-any.whl (15 kB)
Using cached murmurhash-1.0.12-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (133 kB)
Using cached wasabi-1.1.3-py3-none-any.whl (27 kB)
Using cached srsly-2.5.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Using cached typer-0.15.2-py3-none-any.whl (45 kB)
Using cached weasel-0.4.1-py3-none-any.whl (50 kB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Using cached pydantic-2.10.6-py3-none-any.whl (431 kB)
Using cached pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
Using cached jinja2-3.1.6-py3-none-any.whl (134 kB)
Using cached langcodes-3.5.0-py3-none-any.whl (182 kB)
Using cached packaging-24.2-py3-none-any.whl (65 kB)
Using cached pre_commit-4.1.0-py2.py3-none-any.whl (220 kB)
Using cached Cython-0.29.37-py2.py3-none-any.whl (989 kB)
Using cached pytest-8.3.5-py3-none-any.whl (343 kB)
Using cached pytest_timeout-1.4.2-py2.py3-none-any.whl (10 kB)
Using cached mock-2.0.0-py2.py3-none-any.whl (56 kB)
Using cached flake8-5.0.4-py2.py3-none-any.whl (61 kB)
Using cached hypothesis-6.129.4-py3-none-any.whl (489 kB)
Using cached mypy-1.5.1-py3-none-any.whl (2.5 MB)
Using cached types_mock-5.2.0.20250306-py3-none-any.whl (10 kB)
Using cached types_setuptools-76.0.0.20250313-py3-none-any.whl (65 kB)
Using cached types_requests-2.32.0.20250306-py3-none-any.whl (20 kB)
Using cached cython_lint-0.16.6-py3-none-any.whl (12 kB)
Using cached isort-5.13.2-py3-none-any.whl (92 kB)
Using cached annotated_types-0.7.0-py3-none-any.whl (13 kB)
Using cached attrs-25.3.0-py3-none-any.whl (63 kB)
Using cached blis-1.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.2 MB)
Using cached certifi-2025.1.31-py3-none-any.whl (166 kB)
Using cached cfgv-3.4.0-py2.py3-none-any.whl (7.2 kB)
Using cached charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (144 kB)
Using cached click-8.1.8-py3-none-any.whl (98 kB)
Using cached cloudpathlib-0.21.0-py3-none-any.whl (52 kB)
Using cached confection-0.1.5-py3-none-any.whl (35 kB)
Using cached identify-2.6.9-py2.py3-none-any.whl (99 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached language_data-1.3.0-py3-none-any.whl (5.4 MB)
Using cached MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23 kB)
Using cached mccabe-0.7.0-py2.py3-none-any.whl (7.3 kB)
Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Using cached nodeenv-1.9.1-py2.py3-none-any.whl (22 kB)
Using cached pathspec-0.12.1-py3-none-any.whl (31 kB)
Using cached pbr-6.1.1-py2.py3-none-any.whl (108 kB)
Using cached platformdirs-4.3.6-py3-none-any.whl (18 kB)
Using cached pluggy-1.5.0-py3-none-any.whl (20 kB)
Using cached pycodestyle-2.9.1-py2.py3-none-any.whl (41 kB)
Using cached pyflakes-2.5.0-py2.py3-none-any.whl (66 kB)
Using cached PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (759 kB)
Using cached rich-13.9.4-py3-none-any.whl (242 kB)
Using cached shellingham-1.5.4-py2.py3-none-any.whl (9.8 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached smart_open-7.1.0-py3-none-any.whl (61 kB)
Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Using cached tokenize_rt-6.1.0-py2.py3-none-any.whl (6.0 kB)
Using cached urllib3-2.3.0-py3-none-any.whl (128 kB)
Using cached virtualenv-20.29.3-py3-none-any.whl (4.3 MB)
Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Using cached distlib-0.3.9-py2.py3-none-any.whl (468 kB)
Using cached filelock-3.18.0-py3-none-any.whl (16 kB)
Using cached marisa_trie-1.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.4 MB)
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Using cached pygments-2.19.1-py3-none-any.whl (1.2 MB)
Using cached wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (89 kB)
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Building wheels for collected packages: thinc
Building wheel for thinc (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for thinc (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [380 lines of output]
Cythonizing sources
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-313/thinc
copying thinc/util.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/types.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/schedules.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/optimizers.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/mypy.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/model.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/loss.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/initializers.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/config.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/compat.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/api.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/about.py -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc
creating build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/util.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_util.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_types.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_serialize.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_schedules.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_optimizers.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_loss.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_initializers.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_indexing.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_import__all__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_examples.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/test_config.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/strategies.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/enable_tensorflow.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/enable_mxnet.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/conftest.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
copying thinc/tests/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests
creating build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/torchscript.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/tensorflow.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/shim.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/pytorch_grad_scaler.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/pytorch.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/mxnet.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
copying thinc/shims/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/shims
creating build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_signpost_interval.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_reshape.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_ragged.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_padded.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_nvtx_range.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_getitem.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_flatten_v2.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_flatten.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_debug.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_cpu.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_array2d.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/with_array.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/uniqued.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/tuplify.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/torchscriptwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/tensorflowwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/swish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/strings2arrays.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/softmax_activation.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/softmax.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/sigmoid_activation.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/sigmoid.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/siamese.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/resizable.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/residual.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/remap_ids.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/relu.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_sum.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_mean.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_max.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_last.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/reduce_first.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/ragged2list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/pytorchwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/parametricattention_v2.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/parametricattention.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/padded2list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/noop.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/mxnetwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/multisoftmax.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/mish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/maxout.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/map_list.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/lstm.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/logistic.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/list2ragged.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/list2padded.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/list2array.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/linear.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/layernorm.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/hashembed.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/hard_swish_mobilenet.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/hard_swish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/gelu.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/expand_window.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/embed.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/dropout.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/dish.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/concatenate.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/clone.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/clipped_linear.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/chain.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/cauchysimilarity.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/bidirectional.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/array_getitem.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/add.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/layers
creating build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/extra/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/extra
creating build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/ops.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/mps_ops.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cupy_ops.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_param_server.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_custom_kernels.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_cupy_allocators.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/backends
creating build/lib.linux-x86_64-cpython-313/thinc/tests/shims
copying thinc/tests/shims/test_pytorch_grad_scaler.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/shims
copying thinc/tests/shims/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/shims
creating build/lib.linux-x86_64-cpython-313/thinc/tests/regression
copying thinc/tests/regression/test_issue564.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression
copying thinc/tests/regression/test_issue208.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression
copying thinc/tests/regression/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy
copying thinc/tests/mypy/test_mypy.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy
copying thinc/tests/mypy/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy
creating build/lib.linux-x86_64-cpython-313/thinc/tests/model
copying thinc/tests/model/test_validation.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/model
copying thinc/tests/model/test_model.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/model
copying thinc/tests/model/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/model
creating build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_with_transforms.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_with_flatten.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_with_debug.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_uniqued.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_transforms.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_torchscriptwrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_tensorflow_wrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_sparse_linear.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_softmax.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_shim.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_resizable.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_reduce.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_pytorch_wrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_parametric_attention_v2.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_mxnet_wrapper.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_mnist.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_mappers.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_lstm.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_linear.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_layers_api.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_hash_embed.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_feed_forward.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_combinators.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/test_basic_tagger.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
copying thinc/tests/layers/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/layers
creating build/lib.linux-x86_64-cpython-313/thinc/tests/extra
copying thinc/tests/extra/test_beam_search.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/extra
copying thinc/tests/extra/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/extra
creating build/lib.linux-x86_64-cpython-313/thinc/tests/backends
copying thinc/tests/backends/test_ops.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/backends
copying thinc/tests/backends/test_mem.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/backends
copying thinc/tests/backends/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/backends
creating build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
copying thinc/tests/regression/issue519/test_issue519.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
copying thinc/tests/regression/issue519/program.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
copying thinc/tests/regression/issue519/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/regression/issue519
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/success_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/success_no_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/fail_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/fail_no_plugin.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
copying thinc/tests/mypy/modules/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/modules
creating build/lib.linux-x86_64-cpython-313/thinc/extra/tests
copying thinc/extra/tests/__init__.py -> build/lib.linux-x86_64-cpython-313/thinc/extra/tests
running egg_info
writing thinc.egg-info/PKG-INFO
writing dependency_links to thinc.egg-info/dependency_links.txt
writing entry points to thinc.egg-info/entry_points.txt
writing requirements to thinc.egg-info/requires.txt
writing top-level names to thinc.egg-info/top_level.txt
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/arrayscalars.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarrayobject.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ndarraytypes.h won't be automatically included in the manifest: the path must be relative
dependency /tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include/numpy/ufuncobject.h won't be automatically included in the manifest: the path must be relative
dependency /usr/include/python3.13/Python.h won't be automatically included in the manifest: the path must be relative
reading manifest file 'thinc.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'tmp'
adding license file 'LICENSE'
writing manifest file 'thinc.egg-info/SOURCES.txt'
/tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/setuptools/command/build_py.py:212: _Warning: Package 'thinc.tests.mypy.configs' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'thinc.tests.mypy.configs' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'thinc.tests.mypy.configs' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'thinc.tests.mypy.configs' to be distributed and are
already explicitly excluding 'thinc.tests.mypy.configs' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
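The warning above boils down to the difference between setuptools' two discovery helpers: `find_packages()` skips directories that lack an `__init__.py` (like thinc's data-only `tests/mypy/configs` directory), while `find_namespace_packages()` includes them. A minimal sketch of that behavior, using a throwaway directory tree (the `pkg`/`sub`/`configs` names are illustrative, not from thinc):

```python
# Sketch of the setuptools discovery difference the warning describes.
# A directory without __init__.py mimics thinc/tests/mypy/configs,
# a data-only directory that find_packages() would silently ignore.
import os
import tempfile

from setuptools import find_namespace_packages, find_packages

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg", "sub", "configs"))
open(os.path.join(root, "pkg", "__init__.py"), "w").close()
open(os.path.join(root, "pkg", "sub", "__init__.py"), "w").close()

# find_packages() only returns regular packages (those with __init__.py).
print(find_packages(where=root))            # e.g. ['pkg', 'pkg.sub']

# find_namespace_packages() also picks up the data-only directory.
print(find_namespace_packages(where=root))  # includes 'pkg.sub.configs'
```

Switching a `setup.py` from `find_packages(...)` to `find_namespace_packages(...)` (or listing the directory explicitly in `packages`) is what the warning suggests; for directories that are really just data, `package_data`/`exclude_package_data` is the alternative it mentions.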
/tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/setuptools/command/build_py.py:212: _Warning: Package 'thinc.tests.mypy.outputs' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'thinc.tests.mypy.outputs' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'thinc.tests.mypy.outputs' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'thinc.tests.mypy.outputs' to be distributed and are
already explicitly excluding 'thinc.tests.mypy.outputs' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
copying thinc/__init__.pxd -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/py.typed -> build/lib.linux-x86_64-cpython-313/thinc
copying thinc/layers/premap_ids.pyx -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/layers/sparselinear.pyx -> build/lib.linux-x86_64-cpython-313/thinc/layers
copying thinc/extra/__init__.pxd -> build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/extra/search.pxd -> build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/extra/search.pyx -> build/lib.linux-x86_64-cpython-313/thinc/extra
copying thinc/backends/__init__.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_custom_kernels.cu -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/_murmur3.cu -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cblas.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cblas.pyx -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/cpu_kernels.hh -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/linalg.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/linalg.pyx -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/numpy_ops.pxd -> build/lib.linux-x86_64-cpython-313/thinc/backends
copying thinc/backends/numpy_ops.pyx -> build/lib.linux-x86_64-cpython-313/thinc/backends
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/configs
copying thinc/tests/mypy/configs/mypy-default.ini -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/configs
copying thinc/tests/mypy/configs/mypy-plugin.ini -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/configs
creating build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/fail-no-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/fail-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/success-no-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/tests/mypy/outputs/success-plugin.txt -> build/lib.linux-x86_64-cpython-313/thinc/tests/mypy/outputs
copying thinc/extra/tests/c_test_search.pyx -> build/lib.linux-x86_64-cpython-313/thinc/extra/tests
running build_ext
building 'thinc.backends.cblas' extension
creating build/temp.linux-x86_64-cpython-313/thinc/backends
g++ -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC -I/tmp/pip-build-env-w27chjg7/overlay/lib/python3.13/site-packages/numpy/_core/include -I/usr/include/python3.13 -I/home/manshika/.virtualenvs/dlenv/include -I/usr/include/python3.13 -c thinc/backends/cblas.cpp -o build/temp.linux-x86_64-cpython-313/thinc/backends/cblas.o -O3 -Wno-strict-prototypes -Wno-unused-function -std=c++11
cc1plus: warning: command-line option ‘-Wno-strict-prototypes’ is valid for C/ObjC but not for C++
thinc/backends/cblas.cpp:871:72: warning: ‘Py_UNICODE’ is deprecated [-Wdeprecated-declarations]
871 | static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
| ^
In file included from /usr/include/python3.13/unicodeobject.h:1014,
from /usr/include/python3.13/Python.h:79,
from thinc/backends/cblas.cpp:24:
/usr/include/python3.13/cpython/unicodeobject.h:10:37: note: declared here
10 | Py_DEPRECATED(3.13) typedef wchar_t Py_UNICODE;
| ^~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE*)’:
thinc/backends/cblas.cpp:872:23: warning: ‘Py_UNICODE’ is deprecated [-Wdeprecated-declarations]
872 | const Py_UNICODE *u_end = u;
| ^~~~~
/usr/include/python3.13/cpython/unicodeobject.h:10:37: note: declared here
10 | Py_DEPRECATED(3.13) typedef wchar_t Py_UNICODE;
| ^~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘int __Pyx_PyList_Extend(PyObject*, PyObject*)’:
thinc/backends/cblas.cpp:1908:22: error: ‘_PyList_Extend’ was not declared in this scope; did you mean ‘PyList_Extend’?
1908 | PyObject* none = _PyList_Extend((PyListObject*)L, v);
| ^~~~~~~~~~~~~~
| PyList_Extend
thinc/backends/cblas.cpp: In function ‘void __Pyx_init_assertions_enabled()’:
thinc/backends/cblas.cpp:1946:39: error: ‘_PyInterpreterState_GetConfig’ was not declared in this scope; did you mean ‘PyInterpreterState_GetID’?
1946 | __pyx_assertions_enabled_flag = ! _PyInterpreterState_GetConfig(__Pyx_PyThreadState_Current->interp)->optimization_level;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| PyInterpreterState_GetID
thinc/backends/cblas.cpp: In function ‘int __Pyx_PyInt_As_int(PyObject*)’:
thinc/backends/cblas.cpp:20354:46: error: too few arguments to function ‘int _PyLong_AsByteArray(PyLongObject*, unsigned char*, size_t, int, int, int)’
20354 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
20355 | bytes, sizeof(val),
| ~~~~~~~~~~~~~~~~~~~
20356 | is_little, !is_unsigned);
| ~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/python3.13/longobject.h:107,
from /usr/include/python3.13/Python.h:81:
/usr/include/python3.13/cpython/longobject.h:111:17: note: declared here
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^~~~~~~~~~~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘long int __Pyx_PyInt_As_long(PyObject*)’:
thinc/backends/cblas.cpp:20550:46: error: too few arguments to function ‘int _PyLong_AsByteArray(PyLongObject*, unsigned char*, size_t, int, int, int)’
20550 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
20551 | bytes, sizeof(val),
| ~~~~~~~~~~~~~~~~~~~
20552 | is_little, !is_unsigned);
| ~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/python3.13/cpython/longobject.h:111:17: note: declared here
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^~~~~~~~~~~~~~~~~~~
thinc/backends/cblas.cpp: In function ‘char __Pyx_PyInt_As_char(PyObject*)’:
thinc/backends/cblas.cpp:20822:46: error: too few arguments to function ‘int _PyLong_AsByteArray(PyLongObject*, unsigned char*, size_t, int, int, int)’
20822 | int ret = _PyLong_AsByteArray((PyLongObject *)v,
| ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
20823 | bytes, sizeof(val),
| ~~~~~~~~~~~~~~~~~~~
20824 | is_little, !is_unsigned);
| ~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/python3.13/cpython/longobject.h:111:17: note: declared here
111 | PyAPI_FUNC(int) _PyLong_AsByteArray(PyLongObject* v,
| ^~~~~~~~~~~~~~~~~~~
error: command '/usr/bin/g++' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for thinc
Failed to build thinc
ERROR: Failed to build installable wheels for some pyproject.toml based projects (thinc)
## Your Environment
* Operating System: Arch Linux
* Python Version Used: Python 3.13.2
* spaCy Version Used: Latest
* Environment Information: virtual environment
| open | 2025-03-18T12:44:49Z | 2025-03-18T12:44:49Z | https://github.com/explosion/spaCy/issues/13772 | [] | manshika13 | 0 |
davidsandberg/facenet | tensorflow | 294 | Only 0.750+-0.083 Accuracy , far away from 0.99???? | How did u get 0.99 ?? | closed | 2017-05-27T05:58:08Z | 2017-06-01T00:49:04Z | https://github.com/davidsandberg/facenet/issues/294 | [] | ouyangbei | 1 |
Kanaries/pygwalker | matplotlib | 628 | Hide table render | Is there a way to not render the table? I tried passing a parameter via `gw_mode`, but with that I can only hide the view, not the table.
| open | 2024-09-25T12:27:22Z | 2024-09-26T02:14:36Z | https://github.com/Kanaries/pygwalker/issues/628 | [
"enhancement"
] | RodrigoSKohl | 2 |
liangliangyy/DjangoBlog | django | 207 | Error when running ./manage.py migrate | Running `./manage.py migrate` reports the error below. How should I fix it?
-----------------------------------------------
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions, sites
Running migrations:
Applying admin.0001_initial...Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 71, in execute
return self.cursor.execute(query, args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 684, in _read_packet
packet.check_error()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/protocol.py", line 220, in check_error
err.raise_mysql_exception(self._data)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/err.py", line 109, in raise_mysql_exception
raise errorclass(errno, errval)
pymysql.err.IntegrityError: (1215, 'Cannot add foreign key constraint')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/management/base.py", line 316, in run_from_argv
self.execute(*args, **cmd_options)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/management/base.py", line 353, in execute
output = self.handle(*args, **options)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/management/commands/migrate.py", line 203, in handle
fake_initial=fake_initial,
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/migrations/executor.py", line 244, in apply_migration
state = migration.apply(state, schema_editor)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/base/schema.py", line 106, in __exit__
self.execute(sql)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/base/schema.py", line 133, in execute
cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 71, in execute
return self.cursor.execute(query, args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/connections.py", line 684, in _read_packet
packet.check_error()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/protocol.py", line 220, in check_error
err.raise_mysql_exception(self._data)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pymysql/err.py", line 109, in raise_mysql_exception
raise errorclass(errno, errval)
django.db.utils.IntegrityError: (1215, 'Cannot add foreign key constraint') | closed | 2019-02-05T08:45:42Z | 2019-02-05T14:07:03Z | https://github.com/liangliangyy/DjangoBlog/issues/207 | [] | JamesAiit | 1 |
JaidedAI/EasyOCR | pytorch | 1,208 | compute_ratio_and_resize passes a PIL constant to a cv2 function | utils.py contains the following code
https://github.com/JaidedAI/EasyOCR/blob/c999505ef6b43be1c4ee36aa04ad979175178352/easyocr/utils.py#L566-L577
Note that `Image` is part of PIL:
https://github.com/JaidedAI/EasyOCR/blob/c999505ef6b43be1c4ee36aa04ad979175178352/easyocr/utils.py#L7-L8
Since both of these map to an integer, this is not a type error. However PIL maps LANCZOS to 1 (https://github.com/python-pillow/Pillow/blob/e47877587fb8aa1853ef7473285a2964f5e98520/src/PIL/Image.py#L158-L164 ), while a 1 means bilinear interpolation in opencv (https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html ) | open | 2024-02-02T00:20:22Z | 2024-05-02T03:53:28Z | https://github.com/JaidedAI/EasyOCR/issues/1208 | [] | andreaswimmer | 1 |
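A tiny self-contained sketch of the collision described above. The numeric values are hard-coded with their documented values so the snippet runs without Pillow or OpenCV installed, and the mapping helper is a hypothetical fix, not EasyOCR's actual code:

```python
# Documented values: PIL.Image.LANCZOS == 1, cv2.INTER_LINEAR == 1,
# cv2.INTER_LANCZOS4 == 4. Hard-coded here so the snippet is self-contained.
PIL_LANCZOS = 1
CV2_INTER_LINEAR = 1
CV2_INTER_LANCZOS4 = 4

# Both flags are plain ints, so passing the PIL constant to cv2.resize()
# type-checks fine but silently selects bilinear interpolation.
assert PIL_LANCZOS == CV2_INTER_LINEAR

# Hypothetical fix: translate PIL resampling flags to their cv2 equivalents
# before calling cv2.resize().
PIL_TO_CV2 = {PIL_LANCZOS: CV2_INTER_LANCZOS4}

def to_cv2_interpolation(pil_flag):
    """Map a PIL resampling constant to the matching cv2 interpolation flag."""
    return PIL_TO_CV2[pil_flag]

print(to_cv2_interpolation(PIL_LANCZOS))  # 4
```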
alteryx/featuretools | data-science | 1,867 | Change features_only to feature_defs_only | When looking at the [DFS](https://featuretools.alteryx.com/en/stable/generated/featuretools.dfs.html#featuretools-dfs) call, I feel that the `features_only` option is misleading. Setting this to `True` only returns definitions and not the feature matrix. So I believe the option should be `feature_defs_only`
#### Code Example
```python
import featuretools as ft
es = ft.demo.load_mock_customer(return_entityset=True)
feature_defs = ft.dfs(
entityset=es,
target_dataframe_name="customers",
agg_primitives=["mean"],
trans_primitives=["time_since_previous"],
    feature_defs_only=True,
)
feature_defs
```
| closed | 2022-01-26T20:47:31Z | 2023-03-15T20:10:49Z | https://github.com/alteryx/featuretools/issues/1867 | [
"enhancement"
] | dvreed77 | 8 |
ibis-project/ibis | pandas | 10,709 | feat: add `missing_ok: bool = False` kwarg to Table.drop() signature | ### Is your feature request related to a problem?
I have several places where I do:
```python
if "must_not_be_present" in table.columns:
table.drop("must_not_be_present")
# ...continue on
```
I want to be able to do this unconditionally: `t.drop("must_not_be_present", missing_ok=True)`
This is similar to `Path.mkdir(exist_ok=True)` and `Path.unlink(missing_ok=False)`. These were the inspiration for the name of the param, but I am open to suggestions on different kwarg names. I ran this through chatGPT and other options that were decent were ignore_missing, skip_missing, allow_missing.
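A minimal pure-Python sketch of the requested semantics. This only illustrates the behaviour; it is not ibis's implementation, and `drop_columns` is a hypothetical helper:

```python
def drop_columns(columns, to_drop, missing_ok=False):
    """Return `columns` without `to_drop`; raise on absent names unless missing_ok."""
    for name in to_drop:
        if name not in columns and not missing_ok:
            raise KeyError(name)
    return [c for c in columns if c not in to_drop]

cols = ["a", "b"]
# With missing_ok=True an absent column is silently skipped:
print(drop_columns(cols, ["must_not_be_present"], missing_ok=True))  # ['a', 'b']
```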
### What is the motivation behind your request?
_No response_
### Describe the solution you'd like
Adding it as a kwarg argument. This should't be breaking to anyone.
### What version of ibis are you running?
main
### What backend(s) are you using, if any?
_No response_
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct | open | 2025-01-23T01:03:06Z | 2025-02-10T23:17:26Z | https://github.com/ibis-project/ibis/issues/10709 | [
"feature"
] | NickCrews | 2 |
horovod/horovod | pytorch | 3,420 | Installation steps | Hello! I have followed all the steps, but I still cannot install Horovod properly.
I created these two files:
yml
> name: adel
>
> channels:
> - pytorch=1.9.0
> - conda-forge
> - defaults
>
> dependencies:
> - ccache
> - cmake
> - cudatoolkit=11.3
> - cudnn
> - cxx-compiler
> - jupyterlab
> - mpi4py # installs cuda-aware openmpi
> - nccl
> - nvcc_linux-64
> - openmpi
> - pip
> - pip:
> - tensorflow-gpu==2.4.*
> - -r requirements.txt
> - python=3.8
> - tensorboard=2.6.0
> - torchaudio=0.9.0
> - torchvision=0.10.0
> - numpy
> - tqdm
> - tokenizers=0.10.3
> - prettytable=2.2.1
> - einops=0.3.2
and requirements
>
> horovod==0.22.1
> transformers==4.8.2
> datasets==1.8.0
> jupyterlab-nvdashboard==0.2.*
> jupyter-tensorboard==0.2.*
> jupyterlab-nvdashboard==0.2.*
> jupyter-tensorboard==0.2.*
> sacrebleu==2.0.0
Then I ran `export HOROVOD_GPU_OPERATIONS=NCCL`, followed by `conda env create --file horovod.yml --force`,
which produced this error:
```
Collecting package metadata (repodata.json): failed
UnavailableInvalidChannel: The channel is not accessible or is invalid.
channel name: pytorch=1.9.0
channel url: https://conda.anaconda.org/pytorch=1.9.0
error code: 404
You will need to adjust your conda configuration to proceed.
Use `conda config --show channels` to view your configuration's current state,
and use `conda config --show-sources` to view config file locations.
```
Am I missing something?
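The 404 points at the `channels` section: `pytorch=1.9.0` is listed as a channel, but channel entries name repositories and cannot carry version pins. A likely fix (a sketch, untested against this exact environment) is to move the pin under `dependencies`:

```yaml
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - pytorch=1.9.0
  # ...rest of the dependencies unchanged
```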
| closed | 2022-02-25T13:51:41Z | 2022-03-03T13:30:56Z | https://github.com/horovod/horovod/issues/3420 | [] | Arij-Aladel | 1 |
rthalley/dnspython | asyncio | 679 | Problem | Hello, I have a problem with Kodi. When I play certain videos, an error message appears saying: dnspython addon missing.
Can someone help me, please? | closed | 2021-08-24T12:58:09Z | 2021-08-24T17:02:43Z | https://github.com/rthalley/dnspython/issues/679 | [] | WazYon9 | 1
piskvorky/gensim | data-science | 3,104 | lsi_dispatcher is not working from command-line when not specifying maxsize argument | #### Problem description
When running `lsi_dispatcher` from the command-line, if you don't specify the `maxsize` argument explicitly, you get an error for the missing positional argument:
```
usage: lsi_dispatcher.py [-h] maxsize
lsi_dispatcher.py: error: the following arguments are required: maxsize
```
According to the documentation, this argument should be optional.
The issue seems to be that the nargs argument to `add_argument` is missing:
```python
parser.add_argument(
'maxsize', type=int, help='Maximum number of jobs to be kept pre-fetched in the queue.', default=MAX_JOBS_QUEUE
)
```
In order to make this argument optional, this should be:
```python
parser.add_argument(
'maxsize', nargs='?', type=int, help='Maximum number of jobs to be kept pre-fetched in the queue.', default=MAX_JOBS_QUEUE
)
```
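A self-contained demonstration of the difference (standalone argparse, not gensim's actual module; `MAX_JOBS_QUEUE` is a stand-in value):

```python
import argparse

MAX_JOBS_QUEUE = 10  # stand-in for gensim's constant

parser = argparse.ArgumentParser()
parser.add_argument('maxsize', nargs='?', type=int, default=MAX_JOBS_QUEUE,
                    help='Maximum number of jobs to be kept pre-fetched in the queue.')

# With nargs='?' the positional becomes optional and falls back to the default.
print(parser.parse_args([]).maxsize)     # 10
print(parser.parse_args(['5']).maxsize)  # 5
```

Without `nargs='?'`, `parse_args([])` exits with the "the following arguments are required: maxsize" error shown in the report.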
#### Steps/code/corpus to reproduce
```shell
$ python3 -m gensim.models.lsi_dispatcher
usage: lsi_dispatcher.py [-h] maxsize
lsi_dispatcher.py: error: the following arguments are required: maxsize
```
#### Versions
```
Linux-5.4.0-67-generic-x86_64-with-glibc2.2
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0]
Bits 64
NumPy 1.19.4
SciPy 1.6.0
gensim 4.0.1
FAST_VERSION 1
```
| closed | 2021-04-06T08:54:18Z | 2021-04-28T05:22:01Z | https://github.com/piskvorky/gensim/issues/3104 | [] | robguinness | 4 |
Lightning-AI/pytorch-lightning | data-science | 20,407 | update dataset at "on_train_epoch_start", but "training_step" still get old data | ### Bug description
I use `trainer.fit(model, datamodule=dm)` to start training.
"dm" is an object whose class inherited from `pl.LightningDataModule`, and in the class, I override the function:
```python
def train_dataloader(self):
train_dataset = MixedBatchMultiviewDataset(self.args, self.tokenizer,
known_exs=self.known_train,
unknown_exs=self.unknown_train,
feature=self.args.feature)
train_dataloader = DataLoader(train_dataset,
batch_size = self.args.train_batch_size,
shuffle=True, num_workers=self.args.num_workers,
pin_memory=True, collate_fn=self.collate_batch_feat)
return train_dataloader
```
In the model's `on_train_epoch_start` hook, I update the dataset:
```python
train_dl = self.trainer.train_dataloader
train_dl.dataset.update_pseudo_labels(uid2pl)
loop = self.trainer.fit_loop
loop._combined_loader = None
loop.setup_data()
```
In `training_step`, the batch still contains the old data, even though `trainer.train_dataloader.dataset` is new:
```python
def training_step(self, batch: List[Dict[str, torch.Tensor]], batch_idx: int):
self.mv_model._on_train_batch_start()
logger.info(self.trainer.train_dataloader.dataset.unknown_feats) # new
logger.info(batch) # old
```
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
cc @justusschock | open | 2024-11-08T16:22:03Z | 2024-11-18T22:48:19Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20407 | [
"bug",
"waiting on author",
"loops"
] | Yak1m4Sg | 1 |
faif/python-patterns | python | 3 | Is Borg really implemented right? | I thought that the point of Borg was to allow subclassing. However, the implementation doesn't match expectations in this case:
```
class Borg:
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
self.state = 'Running'
def __str__(self):
return self.state
instance0 = Borg()
instance1 = Borg()
instance0.state = 'Idle'
print(instance1)  # prints 'Idle', instead of 'Running', as expected
borg = Borg()
borg.state = 'Idle'
class YourBorg(Borg):
pass
borg = YourBorg()
print(borg)  # prints 'Running' instead of 'Idle', **not** as expected
```
Are you sure that Borg supports setting attributes in the constructor?
It looks like it defeats the purpose of Borg...
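For reference, one common variant that makes the subclassing case behave is to initialize the shared state only once, with `setdefault`, instead of resetting it on every construction. This is a sketch of that variant, not the project's canonical implementation:

```python
class Borg:
    _shared_state = {}

    def __init__(self):
        self.__dict__ = self._shared_state
        # Initialize only on first construction; later instances (including
        # subclass instances) keep whatever the shared state currently holds.
        self.__dict__.setdefault('state', 'Running')

    def __str__(self):
        return self.state


class YourBorg(Borg):
    pass


borg = Borg()
borg.state = 'Idle'
print(YourBorg())  # prints 'Idle': constructing a subclass no longer resets the state
```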
| closed | 2012-08-27T22:04:18Z | 2020-07-05T19:55:40Z | https://github.com/faif/python-patterns/issues/3 | [
"enhancement"
] | jpic | 7 |
aleju/imgaug | machine-learning | 525 | [Question/FR] Way to make a dependency between different StochasticParameters between sequences | I have a specific problem:
I want to translate in one sequence and then to scale in another sequence. Like this:
```python
seq1 = iaa.Sequential([
iaa.Affine(
translate_px={
'x': iap.Uniform(-200, 200),
'y': iap.Uniform(-200, 200)
}
),
])
seq2 = iaa.Sequential([
iaa.Affine(
scale=iap.Uniform(0.2, 0.6),
),
])
# Apply seq1
# Apply seq2 separately
```
But I want the amount it is translated by to be proportional to how much it will be scaled.
I thought something like `iap.Divide(-200, scale_param)` would solve it, but it works differently than I expected.
Is there a way to achieve it? | open | 2019-12-17T08:04:14Z | 2019-12-17T21:53:14Z | https://github.com/aleju/imgaug/issues/525 | [] | soswow | 2 |
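One workaround is to sample the scale yourself first, derive the translation from it, and then feed both fixed values into the augmenters (in imgaug terms, via `iap.Deterministic(...)` per image). This pure-Python sketch shows only the sampling idea, not the imgaug API, and it assumes the shift should shrink with the scale; invert the relation if your use case needs the opposite:

```python
import random

def sample_linked_params(rng):
    """Draw a scale, then a translation whose range is proportional to it."""
    scale = rng.uniform(0.2, 0.6)
    max_shift = 200 * scale  # translation budget scales with the sampled value
    tx = rng.uniform(-max_shift, max_shift)
    ty = rng.uniform(-max_shift, max_shift)
    return scale, tx, ty

rng = random.Random(0)
scale, tx, ty = sample_linked_params(rng)
print(scale, tx, ty)
```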
django-import-export/django-import-export | django | 1,355 | Can't use example | **Describe the bug**
Cannot use manage.py in example app.
**To Reproduce**
Steps to reproduce the behavior:
1. Clone repo (_pip install -e git+https://github.com/django-import-export/django-import-export.git#egg=django-import-export_)
2. `cd tests`
3. `python manage.py makemigrations`
4. Error: `ModuleNotFoundError: No module named 'django_extensions'`
**Versions (please complete the following information):**
- Django Import Export: 2.7.1
- Python 3.9
- Django 4.0
**Expected behavior**
Run makemigrations in example app
| closed | 2021-12-08T09:05:24Z | 2021-12-22T13:07:09Z | https://github.com/django-import-export/django-import-export/issues/1355 | [
"bug"
] | Samoht1 | 2 |
robotframework/robotframework | automation | 5,063 | Robot Framework does not run outside Windows if `signal.setitimer` is not available (affects e.g. Pyodide) | As described by @Snooz82 in Slack:
> Pyodide 0.23 and newer runs Python 3.11.2 which officially supports WebAssembly as a [PEP11 Tier 3](https://peps.python.org/pep-0011/#tier-3) platform. [#3252](https://github.com/pyodide/pyodide/pull/3252), [#3614](https://github.com/pyodide/pyodide/pull/3614)
>
> That causes incompatibility to Robot Framework… :sob:
> RF uses setitimer and this seems not be included anymore, because it never worked on JavaScript…
> That means we can not update to Pyodide 0.23.0 from March 30 2023…
> current Version is 0.25.0
>
> I updated it on the code playground now from 18.1 to 22.1 which caused RF 3.1 to die, due to missing python 3.10 support.
> We could refactor the code so, that depending on the selected Robot Version we do use a different Pyodide Version.
> And in the next Release of Robot Framework, we could actively check if we can support running on newest Pyodide version.
> Maybe that mean, that some timer signals are not working, but i think that would be ok.
> Pekka: if you think there is a possibility to patch robot/running/timeouts/posix.py so that RF could still live without it, this would be an option too.
When I tried Robot Framework in a Jupyter Lite Notebook using Pyodide, I received the error below:
`ImportError: cannot import name 'setitimer' from 'signal' (/lib/python311.zip/signal.py)`
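A guarded-import sketch of the kind of patch the report asks for. This is an assumption about the shape of a fix, not Robot Framework's actual code:

```python
import signal

if hasattr(signal, "setitimer"):
    def start_timeout(seconds):
        """Arm a real-time timer; SIGALRM fires after `seconds` (0 disarms)."""
        signal.setitimer(signal.ITIMER_REAL, seconds)
else:
    def start_timeout(seconds):
        """Timeouts unsupported on this platform (e.g. Pyodide); no-op."""
        pass

print(callable(start_timeout))  # True
```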
| closed | 2024-02-23T16:54:17Z | 2024-06-04T14:08:51Z | https://github.com/robotframework/robotframework/issues/5063 | [
"bug",
"priority: high",
"rc 1",
"effort: small"
] | manykarim | 3 |
deezer/spleeter | tensorflow | 325 | [Discussion] Python 3.8 support? | Ubuntu 20.04 LTS is around the corner and it will come pre-installed with Python 3.8. Is there any plans to support Python 3.8 in Spleeter?
Thanks. | closed | 2020-04-14T10:44:53Z | 2020-04-14T10:49:14Z | https://github.com/deezer/spleeter/issues/325 | [
"question",
"wontfix"
] | Tantawi | 1 |
mage-ai/mage-ai | data-science | 4,932 | [DOCUMENTATION] - Add documentation for "Features" in Settings menu | Requesting documentation for "Features" items in the Settings menu.

| closed | 2024-04-12T18:27:02Z | 2024-04-25T05:06:46Z | https://github.com/mage-ai/mage-ai/issues/4932 | [
"documentation"
] | amlloyd | 2 |
Evil0ctal/Douyin_TikTok_Download_API | api | 250 | Testing single Douyin video download with the Douyin TikTok Download/Scraper API-V1 |

Calling the endpoint returns a 500 error.
Has this API been taken down?
| closed | 2023-08-23T07:14:24Z | 2023-08-28T07:54:52Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/250 | [
"BUG"
] | chenningxi | 2 |
slackapi/python-slack-sdk | asyncio | 1,000 | RTMClient v2 still requires aiohttp installed (even though it's unused) | Thanks @max-arnold for pointing this out!
---
For me the latest release (3.5.0rc1) still fails without aiohttp:
```
In [1]: from slack_sdk.rtm.v2 import RTMClient
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-0f1e042af01f> in <module>
----> 1 from slack_sdk.rtm.v2 import RTMClient
~/.virtualenvs/slack-test/lib/python3.7/site-packages/slack_sdk/rtm/__init__.py in <module>
14 from typing import Optional, Callable, DefaultDict
15
---> 16 import aiohttp
17
18 import slack_sdk.errors as client_err
ModuleNotFoundError: No module named 'aiohttp'
```
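The usual cure for this class of problem is to defer the import to the code path that needs it. A generic sketch of that pattern (not the actual slack_sdk change; `optional_import` is a hypothetical helper):

```python
import importlib

def optional_import(name, feature):
    """Import `name` lazily, raising a clear error that names the feature."""
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        raise ImportError(
            f"{name!r} must be installed to use {feature}"
        ) from exc

# Modules that are always present import fine:
print(optional_import("json", "demo").__name__)  # json
```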
_Originally posted by @max-arnold in https://github.com/slackapi/python-slack-sdk/issues/932#issuecomment-818875690_ | closed | 2021-04-13T21:27:10Z | 2021-04-14T22:16:46Z | https://github.com/slackapi/python-slack-sdk/issues/1000 | [
"bug",
"rtm-client",
"Version: 3x"
] | seratch | 0 |
pyjanitor-devs/pyjanitor | pandas | 853 | Clean up tests for row_to_name function | Going to suggest a test that, perhaps, more generally tests the exact _property_ we'd like to guarantee.
Because the intent of the function is to delete rows without resetting the index, could we do something akin to:
```python
def test_row_to_names_delete_the_row_without_resetting_index(dataframe):
"""Test that executing row_to_names does not reset the index."""
expected_index = pd.Index(...)
pd.testing.assert_index_equal(df.index, expected_index)
```
The idea here is that as far as possible, we test desired properties of the resultant dataframe, rather than only test exact values. What we gain here is generality; the test becomes less specific to a particular dataframe's values and much more general. We get a more powerful test. In the case of this function, we want to test that the dataframe index has some form that relates to the original index in a particular way (i.e. it is missing a specified row).
I'll be the first to admit that the tests that you encountered here as a pattern could have been done better, perhaps. Hopefully, the code sketch above gives you the right ideas, @fireddd. I'd only ask that you do the style of a test I suggested for the two tests that you've added, don't worry about the previously-existing tests.
Could we also make sure there are docstrings for the tests? The intent of the test should be described; I provided an example up there in the code sketch.
_Originally posted by @ericmjl in https://github.com/pyjanitor-devs/pyjanitor/pull/849#discussion_r675995500_ | open | 2021-07-25T23:55:20Z | 2021-07-25T23:55:20Z | https://github.com/pyjanitor-devs/pyjanitor/issues/853 | [] | fireddd | 0 |
aio-libs-abandoned/aioredis-py | asyncio | 1,008 | [2.0] Type annotations break mypy | I tried porting an existing project to aioredis 2.0. I've got it almost working, but the type annotations that have been added are too strict (and in some cases just wrong) and break mypy. The main problem is that all the functions that take keys annotate them as `str`, when `bytes` (and I think several other types) are perfectly acceptable and are used in my code. The same applies to `register_script`.
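For illustration, the kind of widening the key annotations need could look like this (the alias name mirrors redis-py's typeshed stubs; the exact set of accepted types here is an assumption):

```python
from typing import Union

# widened key alias: str and bytes are both valid Redis keys at runtime;
# memoryview and int-like keys may also belong here
KeyT = Union[str, bytes]

def normalize_key(key: KeyT) -> bytes:
    """Accept str or bytes keys, matching what the client tolerates at runtime."""
    return key.encode() if isinstance(key, str) else bytes(key)

print(normalize_key("user:1"))   # b'user:1'
print(normalize_key(b"user:1"))  # b'user:1'
```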
The `min` and `max` arguments of `zrangebylex` and `zrevrangebylex` are annotated as int, but they're used for lexicographical sorting so are string-like.
Getting the type annotations right is a fairly large undertaking. If there is a desire to release 2.0 soon, I'd suggest deleting `py.typed` so that mypy doesn't see this package as annotated. There are annotations for redis-py in typeshed; perhaps that would be a good place to start, although I've occasionally also had issues there. | closed | 2021-06-09T13:00:46Z | 2021-07-13T05:09:59Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1008 | [] | bmerry | 5 |
pydata/bottleneck | numpy | 26 | Wrong dtype on win64 for functions that return indices | As reported by Christoph Gohlke, the bottleneck functions that return indices return the wrong dtype on win64.
See the following threads:
http://mail.scipy.org/pipermail/numpy-discussion/2011-June/056679.html
http://groups.google.com/group/cython-users/browse_thread/thread/f8022ee7ccbf7c5b
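For context, NumPy's own index-returning functions yield `np.intp`, the pointer-sized integer type; the likely culprit is that a C `long` is only 32 bits on win64, so code that declares indices as `long` ends up with `int32` there. A quick reference check of the expected dtype (a sketch, not Bottleneck code):

```python
import numpy as np

a = np.array([1.0, 3.0, 2.0])
idx = np.argmax(a)

# NumPy returns index results as np.intp: 64-bit on any 64-bit build,
# including win64, even though the platform's C long is 32-bit there.
print(idx, np.asarray(idx).dtype)
```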
| closed | 2011-06-10T16:13:52Z | 2011-06-10T18:00:28Z | https://github.com/pydata/bottleneck/issues/26 | [] | kwgoodman | 0 |
huggingface/diffusers | deep-learning | 11,133 | bug while using cogvideox image to video pipeline | ### Describe the bug
While using the CogVideoX pipeline script from https://huggingface.co/docs/diffusers/using-diffusers/text-img2vid, an error occurred:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 8 for tensor number 1 in the list.
### Reproduction
```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

prompt = "A vast, shimmering ocean flows gracefully under a twilight sky, its waves undulating in a mesmerizing dance of blues and greens. The surface glints with the last rays of the setting sun, casting golden highlights that ripple across the water. Seagulls soar above, their cries blending with the gentle roar of the waves. The horizon stretches infinitely, where the ocean meets the sky in a seamless blend of hues. Close-ups reveal the intricate patterns of the waves, capturing the fluidity and dynamic beauty of the sea in motion."
image = load_image(image="cogvideox_rocket.png")

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V",
    torch_dtype=torch.bfloat16
)
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

video = pipe(
    prompt=prompt,
    image=image,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```
### Logs
```shell
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.28it/s]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00, 1.60it/s]
Traceback (most recent call last):
File "/home/qsy/snvd/demo/demo_generate_cogvideo.py", line 30, in <module>
video = pipe(
^^^^^
File "/home/qsy/.conda/envs/snvds/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/qsy/snvd/model/pipeline_cogvideox_image2video.py", line 787, in __call__
latents, image_latents = self.prepare_latents(
^^^^^^^^^^^^^^^^^^^^^
File "/home/qsy/snvd/model/pipeline_cogvideox_image2video.py", line 407, in prepare_latents
image_latents = torch.cat([image_latents, latent_padding], dim=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 8 for tensor number 1 in the list.
```
### System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.11.11
- PyTorch version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A800 80GB PCIe, 81920 MiB
NVIDIA A800 80GB PCIe, 81920 MiB
NVIDIA A800 80GB PCIe, 81920 MiB
NVIDIA A800 80GB PCIe, 81920 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | open | 2025-03-21T11:41:22Z | 2025-03-21T11:41:22Z | https://github.com/huggingface/diffusers/issues/11133 | [
"bug"
] | MrTom34 | 0 |
harry0703/MoneyPrinterTurbo | automation | 73 | Font not supported on Ubuntu | On Ubuntu, ImageMagick fails with: convert: delegate library support not built-in '/home/project/MoneyPrinterTurbo/resource/fonts/STHeitiLight.ttc' (Freetype/2112.
| closed | 2024-03-27T08:51:43Z | 2024-03-28T08:33:32Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/73 | [] | zhuangzhuang3 | 4 |
sammchardy/python-binance | api | 1,012 | NameError: name 'Client' is not defined | I just started coding yesterday, but when I type;
```python
from binance.client import Client

api_key = 'api_key'
api_secret = 'api_secret'

client = Client(api_key, api_secret, tld='us')
```
it returns the error;
```
SyntaxError: EOL while scanning string literal
client = Client(api_key, api_secret, tld='us')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'Client' is not defined
```
Any help? | open | 2021-09-08T14:18:13Z | 2023-04-11T19:44:44Z | https://github.com/sammchardy/python-binance/issues/1012 | [] | Chaneriel | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,548 | Calling disconnect on a write_only manager doesn't trigger anything | Created in the wrong repo, sorry!
**Describe the bug**
Related to #1174, I needed to be able to disconnect clients by their sid from external processes (such as Celery). Thanks to the work done in #1174 this is possible, but it looks like it may have partially broken somewhere along the way. Calling `disconnect` on a write_only manager doesn't throw an exception, but nothing happens. I've inspected the messages being sent through redis and it simply isn't sending one. That said, calling `can_disconnect` or `_publish` directly both work.
**To Reproduce**
I have only tested with an `AsyncServer` and redis, so I'm not sure at the moment if it affects all managers.
1. Set up a server:
```
import socketio
mgr = socketio.AsyncRedisManager(...)
sio = socketio.AsyncServer(client_manager=mgr, ...)
```
2. Set up a write-only manager:
```
external_sio = socketio.RedisManager(..., write_only=True)
```
3. Connect to the SocketIO server and grab the `sid` (not strictly necessary)
4. Run `external_sio.disconnect('sid_here', namespace='/some_namespace')`
5. Observe that the client isn't disconnected. Furthermore, you'll notice that the `pubsub message: disconnect` log doesn't show up ([see this line](https://github.com/miguelgrinberg/python-socketio/blob/master/socketio/asyncio_pubsub_manager.py#L170)). I subclassed the manager and confirmed that nothing was being received at all, so it isn't just getting stopped somewhere in the middle of `_thread`.
**Expected behavior**
The disconnect event should be sent through redis.
**Logs**
No logs, but that's part of the problem 😄
**Additional context**
Thank you so much for this fantastic library, it's a life saver. To anyone else experiencing this in the meantime, you can use `external_sio.can_disconnect` instead. | closed | 2021-05-10T16:58:13Z | 2021-05-10T16:58:47Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1548 | [] | lsapan | 0 |
LAION-AI/Open-Assistant | python | 2,778 | Please add a "Are you Sure you want to Skip" popup to avoid frustration and loss of progress | Unfortunately, I have already written many long answers, and while correcting them I have clicked the `skip` button, which led to my messages being deleted immediately. That is extremely frustrating, especially when you have already spent half an hour or so curating that particular answer...
If the Assistant or User Message is empty skipping immediately should be no problem, but if there is a message in the response field there should be a popup! | closed | 2023-04-20T12:21:47Z | 2023-06-17T18:56:05Z | https://github.com/LAION-AI/Open-Assistant/issues/2778 | [
"website",
"UI/UX"
] | Logophoman | 5 |
AntonOsika/gpt-engineer | python | 1,073 | `--improve` changes are no longer applied after #1052 was merged | ## Expected Behavior
At the end of a `gpte -i` run, I should get the prompt
```
Do you want to apply these changes? [y/N]
```
## Current Behavior
I get this instead:
```
[...]
Added line print("foo") to src/my_file.py at line 24 end
No changes applied. Could you please upload the debug_log_file.txt in /home/akaihola/prg/my_app/.gpteng/memory folder to github?
```
## Failure Information
This failure happens with `main` branch at commit a8b82d1 as well as the merge point of #1052 in commit 9486b87 four days earlier.
Before the merge at commit 841e5b1 everything works as expected.
This Git history tree view illustrates which commits I found broken and which one works:
```
a8b82d1 * Merge pull request #1068 from gpt-engineer-org/dev/add-tomlkit *** BROKEN ***
|\
a8e6850 | * build(deps): added tomlkit
|/
2a4e59f * Merge pull request #1067 from azrv/patch_regex_diff
|\
b0d3c1e | * Add timeout while searching for git diffs in LLMs response
|/
9486b87 * Merge pull request #1052 from gpt-engineer-org/1031-feature-request-automat *** BROKEN ***
|\
3431e78 | * add test for unexpected errors in validating hunks
bb284e1 | * add both unchanged and exceptions to return cases
b0e96e2 | * add test case for failing diff
6b0a267 | * remove old improve log parameter and force return with error
4382ec3 | * remove old improve log function tests
dd711a1 | * update bug report template
0642ae8 | * add test for log creation
4682327 | * merge logs into one file for easy copy
d93f55f | * remove file selector sections from try block
3ab5aca | * Merge remote-tracking branch 'origin/main' into 1031-feature-request-automated-issue-logging
| |\
535c7da | * \ Merge remote-tracking branch 'origin/main' into 1031-feature-request-automated-issue-logging
| |\ \
c3b8404 | * | | add improve_mode_output_log for better tracking
2f32955 | * | | remove exceptions to ensure the program runs
ee38321 | * | | store user uploaded files and prompts locally
841e5b1 * | | | fix: fixed broken test for gpt-engineer.toml after renaming of config s ***** OK *****
``` | closed | 2024-03-18T21:01:37Z | 2024-03-19T18:15:12Z | https://github.com/AntonOsika/gpt-engineer/issues/1073 | [
"bug"
] | akaihola | 2 |
gee-community/geemap | streamlit | 1,883 | `download_ee_image()` | `download_ee_image()` is a wrapper for the [geedim](https://github.com/leftfield-geospatial/geedim) package. It is most suitable for downloading original images rather than computation results. The more complicated your workflow, the less likely `download_ee_image` is to work. For long-running computational results, you should use `ee_export_image_to_drive`.
_Originally posted by @giswqs in https://github.com/gee-community/geemap/issues/1882#issuecomment-1890277036_
| closed | 2024-01-14T07:03:07Z | 2024-01-14T19:12:04Z | https://github.com/gee-community/geemap/issues/1883 | [] | zwy1502 | 1 |
plotly/plotly.py | plotly | 4,126 | CORS needed to use some tile servers with mapbox | Some tile servers need CORS headers for Mapbox to be able to GET the tiles.
I can turn off the check in browser, but that doesn't seem the right way to do it.
Could this be added in the function?
Kindly | closed | 2023-03-28T00:14:30Z | 2023-03-30T16:28:50Z | https://github.com/plotly/plotly.py/issues/4126 | [] | jontis | 1 |
mljar/mercury | data-visualization | 420 | Waiting for worker... (Vanilla Docker-Compose Install) | The demo notebooks are stuck "Waiting for worker ..."

This is a brand-new install, followed all the instructions.
.env file:
> NOTEBOOKS_PATH=../mercury-deploy-demo/
> DJANGO_SUPERUSER_USERNAME=adminusername
> DJANGO_SUPERUSER_PASSWORD=astrongpassword
> DJANGO_SUPERUSER_EMAIL=username@email.com
> ALLOWED_HOSTS=[static, public user ipv4],[static, public server ipv4],0.0.0.0
> SECRET_KEY="******************************************"
> DEBUG=False
> SERVE_STATIC=False
> WELCOME=/app/notebooks/welcome.md
> TIME_ZONE=US/Eastern
> DJANGO_LOG_LEVEL=INFO
> MERCURY_VERBOSE=0
> ACCOUNT_EMAIL_VERIFICATION=none
Looking at Issue #391, I implemented the changes recommended by @mariliaribeiro [here](https://github.com/mljar/mercury/issues/391#issuecomment-1862479838). No change.
I saw some issues with a number of files containing references to port 8000 vice 9000. I swapped all to port 9000 using `grep -rl :8000 . | xargs sed -i 's/:8000/:9000/g'`. No change in results. No change.
Results from `celery status`: ConnectionRefusedError: [Errno 111] Connection refused
Results from `django-errors.log`: [Blank file]
I'm not sure where to find any more answers describing what is going on. Are there other logs that I can pull that would assist in troubleshooting?
| open | 2024-02-12T22:33:29Z | 2025-02-10T11:40:06Z | https://github.com/mljar/mercury/issues/420 | [
"bug"
] | mikep11 | 14 |
danimtb/dasshio | dash | 32 | Dashio not working after upgrading Hassio 0.68.1 | Here is the log:
> 2018-05-06 10:37:37,448 | INFO | Mutfak button pressed!
> 2018-05-06 10:37:37,450 | INFO | Request: http://hassio/homeassistant/api/services/switch/toggle
> 2018-05-06 10:37:37,498 | INFO | Status Code: 500
> 2018-05-06 10:37:37,499 | ERROR | Bad request
> 2018-05-06 10:37:37,563 | INFO | Packet captured, waiting 10s ...
> 2018-05-06 10:37:47,572 | INFO | Starting sniffing...
hassio => 0.68.1
supervisor => 103.1 | closed | 2018-05-06T07:39:02Z | 2018-05-07T16:44:01Z | https://github.com/danimtb/dasshio/issues/32 | [] | cryptooth | 4 |
marcomusy/vedo | numpy | 965 | Can't undo twice | I tried to create an undo button with a PyQt button that connects to this function:
```python
self.button_undo.clicked.connect(self.handle_undo_button)
```
```python
def handle_undo_button(self):
    self.plt.remove([self.mesh])
    self.mesh = self.mesh_prev
    self.plt.add(self.mesh).render()
```
So, when I delete some cells from the mesh with this:
```python
def delete_mesh_cell(self):
    if len(self.selected_ids) > 0:
        self.mesh.delete_cells(self.selected_ids)
        self.selected_ids = np.array([]).astype(int)
        self.plt.render()
```
The problem is that the first time I click the undo button it works fine, but the second time, after deleting cells again, it does nothing. (`mesh_prev` contains the original mesh as imported.)
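One likely cause: `mesh_prev` is assigned only once (the original import), so after the first undo there is no fresh snapshot left to restore. Taking a snapshot before every destructive edit gives multi-level undo. A minimal sketch of that pattern with plain Python lists standing in for meshes (in vedo itself the snapshot would be something like `self.mesh.clone()`; treat that exact call as an assumption):

```python
import copy

history = []  # stack of snapshots, one per destructive edit

def delete_cells(mesh, ids):
    history.append(copy.deepcopy(mesh))  # snapshot *before* mutating
    for i in sorted(ids, reverse=True):  # delete from the end so indices stay valid
        del mesh[i]
    return mesh

def undo(mesh):
    return history.pop() if history else mesh

m = [0, 1, 2, 3]
delete_cells(m, [1])
delete_cells(m, [0])
m = undo(m)
m = undo(m)
print(m)  # back to [0, 1, 2, 3] after two undos
```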
Could you please give some suggestions on how we can solve this problem?
| closed | 2023-11-13T11:03:55Z | 2023-11-14T17:39:13Z | https://github.com/marcomusy/vedo/issues/965 | [] | Thanatossan | 2 |
Lightning-AI/pytorch-lightning | pytorch | 19,729 | `log_dir` contains both forward and backward slashes as path separator when using remote file location as `default_root_dir` on windows | ### Bug description
I am running on Windows and storing logs and artifacts in a remote location (AWS S3).
```python
path_pl_logs = f"s3://{bucket_name}/pytorch-lightning-logs/{experiment_name}"
trainer = pl.Trainer(
    accelerator="gpu",
    devices=1,
    max_epochs=checkpoint_n_epoch,
    default_root_dir=path_pl_logs,
)
```
The resulting `log_dir` contains a mixture of forward and backward slashes, which causes problems with the folder structure:
```python
trainer.log_dir
>> s3://{bucket_name}/pytorch-lightning-logs/{experiment_name}\lightning_logs\version_0
```
Ideally, when using a remote location, it would not use `os.path.join` to create folder/file names but instead just use a forward-slash separator.
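The difference is reproducible with the standard library alone; `posixpath.join` is one OS-independent way to build URI-style paths (a sketch of the desired behaviour, not Lightning's internals):

```python
import posixpath

parts = ("s3://bucket/pytorch-lightning-logs/exp", "lightning_logs", "version_0")

# os.path.join picks the platform separator, i.e. backslashes on Windows,
# which is what produces the mixed-separator log_dir above.
# posixpath.join always uses "/", regardless of the OS:
remote_dir = posixpath.join(*parts)
print(remote_dir)  # s3://bucket/pytorch-lightning-logs/exp/lightning_logs/version_0
```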
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): Trainer
#- PyTorch Lightning Version (e.g., 1.5.0): 2.2.1
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0): 2.2.2
#- Python version (e.g., 3.9): 3.11
#- OS (e.g., Linux): Windows 11
#- CUDA/cuDNN version: 12.1
#- GPU models and configuration: NVIDIA RTX A2000 Laptop GPU
#- How you installed Lightning(`conda`, `pip`, source): pipenv
#- Running environment of LightningApp (e.g. local, cloud): local
```
</details>
### More info
_No response_ | open | 2024-04-03T11:03:39Z | 2024-04-03T11:03:39Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19729 | [
"bug",
"needs triage"
] | jochemvankempen | 0 |
holoviz/panel | jupyter | 6,926 | VideoStream from CCTV | When I use the `VideoStream` class, it can only capture the video stream of a local camera. If I want to use a remote network camera, e.g. a video stream transmitted over the RTMP or RTSP protocol, how should I display it in Panel?
Using opencv can achieve the effect I want:
```python
import cv2

cap = cv2.VideoCapture("rtsp://cctv_url")
ret, frame = cap.read()
while ret:
    ret, frame = cap.read()
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
cap.release()
```
Is it possible to use Panel's `VideoStream` to achieve the above effect? | open | 2024-06-16T16:43:55Z | 2024-06-17T01:24:30Z | https://github.com/holoviz/panel/issues/6926 | [] | lankoestee | 2 |
LibreTranslate/LibreTranslate | api | 710 | Uppercase text leads to no translation | E.g. `GAS RUN/INSPECTION PROPANE FIREPOTS` from English to Spanish returns the original string.
But `gas run/inspection propane firepots` returns an actual translation. | open | 2024-11-27T18:03:28Z | 2025-01-09T15:00:10Z | https://github.com/LibreTranslate/LibreTranslate/issues/710 | [
"model improvement"
] | pierotofy | 1 |
microsoft/qlib | deep-learning | 1,311 | How can indexes be excluded from training? And how can they be excluded during backtesting? | When training a model, how can I exclude index instruments, for example the CSI 300 index SH000300?
Also, during backtesting, how can I exclude indexes, i.e. make sure no index instruments are traded? | closed | 2022-10-09T05:36:48Z | 2023-06-20T15:02:01Z | https://github.com/microsoft/qlib/issues/1311 | [
"question",
"stale"
] | quantcn | 3 |
influxdata/influxdb-client-python | jupyter | 367 | Pandas outputs warning when calling dataframe.append in flux_csv_parser._prepare_data_frame | https://github.com/influxdata/influxdb-client-python/blob/922477ff2499165ad2018106ad7cb5a72ddb59d9/influxdb_client/client/flux_csv_parser.py#L190
This method should return:
```
return self._data_frame.append(_temp_df, sort=True)
```
Pandas outputs a warning when calling `dataframe.append` in `flux_csv_parser._prepare_data_frame()`.
The output is:
```
pandas/core/frame.py:6211: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
```
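The warning can be reproduced and silenced in isolation. Note that `DataFrame.append` was removed in pandas 2.0, so `pd.concat` (which takes the same `sort` argument) is the longer-lived spelling of the fix:

```python
import pandas as pd

a = pd.DataFrame({"b": [1], "a": [2]})
b = pd.DataFrame({"a": [3], "c": [4]})

# the column sets differ, so the non-concatenation axis must be aligned;
# passing sort explicitly silences the FutureWarning
out = pd.concat([a, b], sort=True)
print(list(out.columns))  # ['a', 'b', 'c']
```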
Description of the issue can be found here:
https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.23.0.html#concatenation-will-no-longer-sort | closed | 2021-11-20T03:53:54Z | 2021-11-26T06:56:04Z | https://github.com/influxdata/influxdb-client-python/issues/367 | [
"wontfix"
] | mm0 | 2 |
igorbenav/FastAPI-boilerplate | sqlalchemy | 113 | nginx failed | docker compose up
fastapi-boilerplate-db-1 | 2024-02-06 05:51:59.250 UTC [27] LOG: database system was shut down at 2024-02-06 05:51:57 UTC
fastapi-boilerplate-db-1 | 2024-02-06 05:51:59.260 UTC [1] LOG: database system is ready to accept connections
fastapi-boilerplate-web-1 | /bin/sh: 1:
fastapi-boilerplate-web-1 | [gunicorn,: not found | closed | 2024-02-06T05:53:17Z | 2024-02-09T22:24:07Z | https://github.com/igorbenav/FastAPI-boilerplate/issues/113 | [
"bug"
] | saakethtypes | 8 |
autokey/autokey | automation | 935 | External scripts called with absolute path name do not have API access | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [X] autokey-qt
- [ ] beta
- [X] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [X] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
Kubuntu 23.10
### Which AutoKey GUI did you use?
Qt
### Which AutoKey version did you use?
0.96.0
### How did you install AutoKey?
From the .deb file from git
### Can you briefly describe the issue?
Pretty much the title: calling an external script via its absolute path name does not work properly, i.e. the AutoKey API objects aren't available inside it. Calling it via the script description works normally.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. Create the following scripts in a folder called "Scripts" within AutoKey:
CallExternal:
```
engine.run_script("~/.config/autokey/data/Scripts/External.py")
```
External:
```
if not store.has_key("runs"):
    # Create the value on the first run of the script
    store.set_value("runs", 1)
else:
    # Otherwise, get the current value and increment it
    cur = store.get_value("runs")
    store.set_value("runs", cur + 1)

dialog.info_dialog(message="I've been run %d times!" % store.get_value("runs"))
```
2. Bind the `CallExternal` script to the F1 key, for example
3. Save and press the F1 key
You should see the following error message pop up:
```
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/autokey/service.py", line 530, in _execute
exec(compiled_code, scope)
File "/home/<user>/.config/autokey/data/Scripts/CallExternal.py", line 2, in <module>
engine.run_script("~/.config/autokey/data/Scripts/External.py")
File "/usr/lib/python3/dist-packages/autokey/scripting/engine.py", line 362, in run_script
self.runner.run_subscript(path)
File "/usr/lib/python3/dist-packages/autokey/service.py", line 580, in run_subscript
exec(compiled_code, scope)
File "/home/<user>/.config/autokey/data/Scripts/External.py", line 3, in <module>
if not store.has_key("runs"):
^^^^^
NameError: name 'store' is not defined
```
Which suggests the API isn't being imported properly in this case.
However, changing the `CallExternal` script to read
```
engine.run_script("External")
```
And then pressing F1 again reveals the script working normally.
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | open | 2024-02-06T16:47:18Z | 2024-02-10T22:55:52Z | https://github.com/autokey/autokey/issues/935 | [
"enhancement",
"scripting"
] | elydpg | 7 |
pydantic/pydantic-ai | pydantic | 490 | Isn't a foobar a bad choice for examples? | Hi. Thank you very much for all the work you've done. I really like your approach to agency.
But going through documentation examples, I've come across several Foobar models in examples.
If to think, it's very confusing why apples and carrots as comments, why x,y,z(yeah, foobar, but do we have to?) as variables etc.

| closed | 2024-12-18T22:32:27Z | 2024-12-23T13:20:47Z | https://github.com/pydantic/pydantic-ai/issues/490 | [
"documentation"
] | snqb | 3 |
microsoft/nni | machine-learning | 5,435 | aten::ScalarImplicit is not Supported | **Describe the issue**:
**Environment**:
- NNI version:2.10
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
I have a model:
```
class get_model(nn.Module):
    def __init__(self, args, num_channel=3, num_class=40, **kwargs):
        super(get_model, self).__init__()
        self.args = args
        self.bn1 = nn.BatchNorm2d(64)
        self.bn2 = nn.BatchNorm2d(64)
        self.bn3 = nn.BatchNorm2d(128)
        self.bn4 = nn.BatchNorm2d(256)
        self.bn5 = nn.BatchNorm1d(args.emb_dims)
        self.conv1 = nn.Sequential(nn.Conv2d(num_channel*2, 64, kernel_size=1, bias=False),
                                   self.bn1,
                                   nn.LeakyReLU(negative_slope=0.2))
        self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
                                   self.bn2,
                                   nn.LeakyReLU(negative_slope=0.2))
        self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
                                   self.bn3,
                                   nn.LeakyReLU(negative_slope=0.2))
        self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
                                   self.bn4,
                                   nn.LeakyReLU(negative_slope=0.2))
        self.conv5 = nn.Sequential(nn.Conv1d(512, args.emb_dims, kernel_size=1, bias=False),
                                   self.bn5,
                                   nn.LeakyReLU(negative_slope=0.2))
        self.linear1 = nn.Linear(args.emb_dims*2, 512, bias=False)
        self.bn6 = nn.BatchNorm1d(512)
        self.dp1 = nn.Dropout(p=args.dropout)
        self.linear2 = nn.Linear(512, 256)
        self.bn7 = nn.BatchNorm1d(256)
        self.dp2 = nn.Dropout(p=args.dropout)
        self.linear3 = nn.Linear(256, num_class)

    def forward(self, x):
        batch_size = x.size()[0]
        x = get_graph_feature(x, k=self.args.k)
        x = self.conv1(x)
        x1 = x.max(dim=-1, keepdim=False)[0]
        x = get_graph_feature(x1, k=self.args.k)
        x = self.conv2(x)
        x2 = x.max(dim=-1, keepdim=False)[0]
        x = get_graph_feature(x2, k=self.args.k)
        x = self.conv3(x)
        x3 = x.max(dim=-1, keepdim=False)[0]
        x = get_graph_feature(x3, k=self.args.k)
        x = self.conv4(x)
        x4 = x.max(dim=-1, keepdim=False)[0]
        x = torch.cat((x1, x2, x3, x4), dim=1)
        x = self.conv5(x)
        x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
        x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
        x = torch.cat((x1, x2), 1)
        x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
        x = self.dp1(x)
        x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)
        x = self.dp2(x)
        x = self.linear3(x)
        return x
```
And try to prune it and speed it up:
```
prune_config_list = [{
    'sparsity_per_layer': 0.5,
    'op_types': ['Linear', 'Conv2d']
}, {
    'exclude': True,
    'op_names': ['conv1.0', 'linear3']
}]
from nni.algorithms.compression.v2.pytorch.pruning import L2NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup
pruner = L2NormPruner(classifier, prune_config_list)
masked_model, masks = pruner.compress()
pruner.show_pruned_weights()
# need to unwrap the model, if the model is wrapped before speedup
pruner._unwrap_model()
# speedup the model, for more information about speedup, please refer :doc:`pruning_speedup`.
ModelSpeedup(classifier, torch.randn(1, 3, 1024, device='cuda'), masks).speedup_model()
print(classifier)
```
But I get this error:
```
[2023-03-13 10:06:07] simulated prune conv2.0 remain/total: 32/64
[2023-03-13 10:06:07] simulated prune conv3.0 remain/total: 64/128
[2023-03-13 10:06:07] simulated prune conv4.0 remain/total: 128/256
[2023-03-13 10:06:07] simulated prune linear1 remain/total: 256/512
[2023-03-13 10:06:07] simulated prune linear2 remain/total: 128/256
[2023-03-13 10:06:08] start to speedup the model
[2023-03-13 10:06:08] infer module masks...
[2023-03-13 10:06:08] Update mask for .aten::size.22
[2023-03-13 10:06:08] Update mask for .aten::size.25
[2023-03-13 10:06:08] Update mask for .aten::size.30
[2023-03-13 10:06:08] Update mask for .aten::size.33
[2023-03-13 10:06:08] Update mask for .aten::Int.23
[2023-03-13 10:06:08] Update mask for .aten::Int.24
[2023-03-13 10:06:08] Update mask for .aten::Int.26
[2023-03-13 10:06:08] Update mask for .aten::Int.27
[2023-03-13 10:06:08] Update mask for .aten::ScalarImplicit.28
[2023-03-13 10:06:08] ERROR: aten::ScalarImplicit is not Supported! Please report an issue at https://github.com/microsoft/nni. Thanks~
[2023-03-13 10:06:08] Update mask for .aten::Int.29
[2023-03-13 10:06:08] Update mask for .aten::Int.31
[2023-03-13 10:06:08] Update mask for .aten::Int.32
[2023-03-13 10:06:08] Update mask for .aten::Int.34
[2023-03-13 10:06:08] Update mask for .aten::Int.35
[2023-03-13 10:06:08] Update mask for .aten::Int.36
[2023-03-13 10:06:08] Update mask for .aten::mul.57
[2023-03-13 10:06:08] Update mask for .aten::arange.50
Traceback (most recent call last):
File "nni_optim.py", line 246, in <module>
ModelSpeedup(classifier, torch.randn(1, 3, 1024, device='cuda'), masks).speedup_model()
File "/home/zyfra-devbox/.conda/envs/nni/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 546, in speedup_model
self.infer_modules_masks()
File "/home/zyfra-devbox/.conda/envs/nni/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 383, in infer_modules_masks
self.update_direct_sparsity(curnode)
File "/home/zyfra-devbox/.conda/envs/nni/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 237, in update_direct_sparsity
_auto_infer = AutoMaskInference(
File "/home/zyfra-devbox/.conda/envs/nni/lib/python3.8/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 80, in __init__
self.output = self.module(*dummy_input)
File "/home/zyfra-devbox/.conda/envs/nni/lib/python3.8/site-packages/nni/compression/pytorch/speedup/jit_translate.py", line 227, in __call__
assert len(args) >= len(self.undetermined)
AssertionError
```
I've checked several related issues:
https://github.com/microsoft/nni/issues/5097
https://github.com/microsoft/nni/issues/5090
But I did not find a workaround there; it was mentioned that `torch.full` can cause `aten::ScalarImplicit`, but I have no such operations.
What could the workaround be?
In addition, I've tried to prune with the v2 API. I cloned and built the code from the repo and tried:
```
from nni.compression.pytorch.speedup.v2 import ModelSpeedup
ModelSpeedup(classifier, torch.randn(1, 3, 1024, device='cuda'), masks).speedup_model()
```
But got:
```
[2023-03-13 08:12:57] simulated prune conv2.0 remain/total: 32/64
[2023-03-13 08:12:57] simulated prune conv3.0 remain/total: 64/128
[2023-03-13 08:12:57] simulated prune conv4.0 remain/total: 128/256
[2023-03-13 08:12:57] simulated prune linear1 remain/total: 256/512
[2023-03-13 08:12:57] simulated prune linear2 remain/total: 128/256
Traceback (most recent call last):
File "nni_optim.py", line 248, in <module>
ModelSpeedup(classifier, torch.randn(1, 3, 1024, device='cuda'), masks).speedup_model()
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 93, in __init__
self.graph_module = graph_module if isinstance(graph_module, GraphModule) else concrete_trace(model, self.dummy_input)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 1378, in concrete_trace
graph = tracer.trace(root,
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 911, in trace
(self.create_arg(OperatorPatcherContext.patch_run(fn, *args, *more_args, **kwargs)),),
File "/root/devel/nni/nni/common/concrete_trace_utils/operator_patcher.py", line 270, in patch_run
return new_func(*args, **kwargs)
File "/root/devel/IAE/downstream_tasks/classification/models/dgcnn_clsft.py", line 98, in new_func
x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
File "/root/devel/nni/nni/common/concrete_trace_utils/operator_patcher.py", line 270, in patch_run
return new_func(*args, **kwargs)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 659, in module_call_wrapper
return self.create_proxy('call_module', module_qualified_name, args, kwargs)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 283, in create_proxy
value_unwrapped = self.run_target(kind, target, args_unwrapped, kwargs_unwrapped)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 250, in run_target
return OperatorPatcherContext.patch_run(mod, *args, **kwargs)
File "/root/devel/nni/nni/common/concrete_trace_utils/operator_patcher.py", line 270, in patch_run
return new_func(*args, **kwargs)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 639, in module_call_wrapper
return _orig_module_call(mod, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/batchnorm.py", line 168, in forward
return F.batch_norm(
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 1122, in func_wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2280, in batch_norm
_verify_batch_size(input.size())
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 1122, in func_wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2248, in _verify_batch_size
raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512])
```
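Looking at the trace, the `ValueError` is raised by `F.batch_norm` itself, not by NNI: a BatchNorm layer in training mode cannot compute batch statistics from a single sample, and the speedup dummy input has batch size 1. This plain-torch snippet (no NNI involved) reproduces the behaviour, which makes me think calling `classifier.eval()` before speedup could be a workaround:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(512)   # same shape as the failing torch.Size([1, 512])
x = torch.randn(1, 512)    # batch of 1, like the speedup dummy input

bn.train()                 # training mode: needs >1 value per channel
try:
    bn(x)
    raised = False
except ValueError:
    raised = True

bn.eval()                  # eval mode: uses running statistics instead
out = bn(x)
print(raised, tuple(out.shape))
```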
So I've tried to increase the batch size. I tried different numbers, but I get a CUDA OOM even with a batch size of 2:
```
[2023-03-13 08:14:57] simulated prune conv2.0 remain/total: 32/64
[2023-03-13 08:14:57] simulated prune conv3.0 remain/total: 64/128
[2023-03-13 08:14:57] simulated prune conv4.0 remain/total: 128/256
[2023-03-13 08:14:57] simulated prune linear1 remain/total: 256/512
[2023-03-13 08:14:57] simulated prune linear2 remain/total: 128/256
[2023-03-13 08:14:57] Start to speedup the model...
[2023-03-13 08:14:57] Resolve the mask conflict before mask propagate...
[2023-03-13 08:14:57] Infer module masks...
[2023-03-13 08:14:57] Propagate original variables
[2023-03-13 08:14:57] Propagate variables for placeholder: x
[2023-03-13 08:14:57] Propagate variables for call_method: size
[2023-03-13 08:14:57] Propagate variables for call_function: getitem
[2023-03-13 08:14:57] Propagate variables for call_method: size_1
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_1
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_2
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_3
[2023-03-13 08:14:57] Propagate variables for call_method: view
[2023-03-13 08:14:57] Propagate variables for call_method: transpose
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous
[2023-03-13 08:14:57] Propagate variables for call_function: matmul
[2023-03-13 08:14:57] Propagate variables for call_function: mul
[2023-03-13 08:14:57] Propagate variables for call_function: pow_1
[2023-03-13 08:14:57] Propagate variables for call_function: sum_1
[2023-03-13 08:14:57] Propagate variables for call_function: neg
[2023-03-13 08:14:57] Propagate variables for call_function: sub
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_1
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_1
[2023-03-13 08:14:57] Propagate variables for call_function: sub_1
[2023-03-13 08:14:57] Propagate variables for call_method: topk
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_4
[2023-03-13 08:14:57] Propagate variables for call_function: arange
[2023-03-13 08:14:57] Propagate variables for call_method: view_1
[2023-03-13 08:14:57] Propagate variables for call_function: mul_1
[2023-03-13 08:14:57] Propagate variables for call_function: iadd
[2023-03-13 08:14:57] Propagate variables for call_method: view_2
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_2
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_2
[2023-03-13 08:14:57] Propagate variables for call_function: mul_2
[2023-03-13 08:14:57] Propagate variables for call_method: view_3
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_5
[2023-03-13 08:14:57] Propagate variables for call_method: view_4
[2023-03-13 08:14:57] Propagate variables for call_method: view_5
[2023-03-13 08:14:57] Propagate variables for call_method: repeat
[2023-03-13 08:14:57] Propagate variables for call_function: sub_2
[2023-03-13 08:14:57] Propagate variables for call_function: cat
[2023-03-13 08:14:57] Propagate variables for call_method: permute
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_3
[2023-03-13 08:14:57] Propagate variables for call_module: conv1_0
[2023-03-13 08:14:57] Propagate variables for call_module: bn1
[2023-03-13 08:14:57] Propagate variables for call_module: conv1_2
[2023-03-13 08:14:57] Propagate variables for call_method: max_1
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_6
[2023-03-13 08:14:57] Propagate variables for call_method: size_2
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_7
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_8
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_9
[2023-03-13 08:14:57] Propagate variables for call_method: view_6
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_3
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_4
[2023-03-13 08:14:57] Propagate variables for call_function: matmul_1
[2023-03-13 08:14:57] Propagate variables for call_function: mul_3
[2023-03-13 08:14:57] Propagate variables for call_function: pow_2
[2023-03-13 08:14:57] Propagate variables for call_function: sum_2
[2023-03-13 08:14:57] Propagate variables for call_function: neg_1
[2023-03-13 08:14:57] Propagate variables for call_function: sub_3
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_4
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_5
[2023-03-13 08:14:57] Propagate variables for call_function: sub_4
[2023-03-13 08:14:57] Propagate variables for call_method: topk_1
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_10
[2023-03-13 08:14:57] Propagate variables for call_function: arange_1
[2023-03-13 08:14:57] Propagate variables for call_method: view_7
[2023-03-13 08:14:57] Propagate variables for call_function: mul_4
[2023-03-13 08:14:57] Propagate variables for call_function: iadd_1
[2023-03-13 08:14:57] Propagate variables for call_method: view_8
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_5
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_6
[2023-03-13 08:14:57] Propagate variables for call_function: mul_5
[2023-03-13 08:14:57] Propagate variables for call_method: view_9
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_11
[2023-03-13 08:14:57] Propagate variables for call_method: view_10
[2023-03-13 08:14:57] Propagate variables for call_method: view_11
[2023-03-13 08:14:57] Propagate variables for call_method: repeat_1
[2023-03-13 08:14:57] Propagate variables for call_function: sub_5
[2023-03-13 08:14:57] Propagate variables for call_function: cat_1
[2023-03-13 08:14:57] Propagate variables for call_method: permute_1
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_7
[2023-03-13 08:14:57] Propagate variables for call_module: conv2_0
[2023-03-13 08:14:57] Propagate variables for call_module: bn2
[2023-03-13 08:14:57] Propagate variables for call_module: conv2_2
[2023-03-13 08:14:57] Propagate variables for call_method: max_2
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_12
[2023-03-13 08:14:57] Propagate variables for call_method: size_3
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_13
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_14
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_15
[2023-03-13 08:14:57] Propagate variables for call_method: view_12
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_6
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_8
[2023-03-13 08:14:57] Propagate variables for call_function: matmul_2
[2023-03-13 08:14:57] Propagate variables for call_function: mul_6
[2023-03-13 08:14:57] Propagate variables for call_function: pow_3
[2023-03-13 08:14:57] Propagate variables for call_function: sum_3
[2023-03-13 08:14:57] Propagate variables for call_function: neg_2
[2023-03-13 08:14:57] Propagate variables for call_function: sub_6
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_7
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_9
[2023-03-13 08:14:57] Propagate variables for call_function: sub_7
[2023-03-13 08:14:57] Propagate variables for call_method: topk_2
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_16
[2023-03-13 08:14:57] Propagate variables for call_function: arange_2
[2023-03-13 08:14:57] Propagate variables for call_method: view_13
[2023-03-13 08:14:57] Propagate variables for call_function: mul_7
[2023-03-13 08:14:57] Propagate variables for call_function: iadd_2
[2023-03-13 08:14:57] Propagate variables for call_method: view_14
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_8
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_10
[2023-03-13 08:14:57] Propagate variables for call_function: mul_8
[2023-03-13 08:14:57] Propagate variables for call_method: view_15
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_17
[2023-03-13 08:14:57] Propagate variables for call_method: view_16
[2023-03-13 08:14:57] Propagate variables for call_method: view_17
[2023-03-13 08:14:57] Propagate variables for call_method: repeat_2
[2023-03-13 08:14:57] Propagate variables for call_function: sub_8
[2023-03-13 08:14:57] Propagate variables for call_function: cat_2
[2023-03-13 08:14:57] Propagate variables for call_method: permute_2
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_11
[2023-03-13 08:14:57] Propagate variables for call_module: conv3_0
[2023-03-13 08:14:57] Propagate variables for call_module: bn3
[2023-03-13 08:14:57] Propagate variables for call_module: conv3_2
[2023-03-13 08:14:57] Propagate variables for call_method: max_3
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_18
[2023-03-13 08:14:57] Propagate variables for call_method: size_4
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_19
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_20
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_21
[2023-03-13 08:14:57] Propagate variables for call_method: view_18
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_9
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_12
[2023-03-13 08:14:57] Propagate variables for call_function: matmul_3
[2023-03-13 08:14:57] Propagate variables for call_function: mul_9
[2023-03-13 08:14:57] Propagate variables for call_function: pow_4
[2023-03-13 08:14:57] Propagate variables for call_function: sum_4
[2023-03-13 08:14:57] Propagate variables for call_function: neg_3
[2023-03-13 08:14:57] Propagate variables for call_function: sub_9
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_10
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_13
[2023-03-13 08:14:57] Propagate variables for call_function: sub_10
[2023-03-13 08:14:57] Propagate variables for call_method: topk_3
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_22
[2023-03-13 08:14:57] Propagate variables for call_function: arange_3
[2023-03-13 08:14:57] Propagate variables for call_method: view_19
[2023-03-13 08:14:57] Propagate variables for call_function: mul_10
[2023-03-13 08:14:57] Propagate variables for call_function: iadd_3
[2023-03-13 08:14:57] Propagate variables for call_method: view_20
[2023-03-13 08:14:57] Propagate variables for call_method: transpose_11
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_14
[2023-03-13 08:14:57] Propagate variables for call_function: mul_11
[2023-03-13 08:14:57] Propagate variables for call_method: view_21
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_23
[2023-03-13 08:14:57] Propagate variables for call_method: view_22
[2023-03-13 08:14:57] Propagate variables for call_method: view_23
[2023-03-13 08:14:57] Propagate variables for call_method: repeat_3
[2023-03-13 08:14:57] Propagate variables for call_function: sub_11
[2023-03-13 08:14:57] Propagate variables for call_function: cat_3
[2023-03-13 08:14:57] Propagate variables for call_method: permute_3
[2023-03-13 08:14:57] Propagate variables for call_method: contiguous_15
[2023-03-13 08:14:57] Propagate variables for call_module: conv4_0
[2023-03-13 08:14:57] Propagate variables for call_module: bn4
[2023-03-13 08:14:57] Propagate variables for call_module: conv4_2
[2023-03-13 08:14:57] Propagate variables for call_method: max_4
[2023-03-13 08:14:57] Propagate variables for call_function: getitem_24
[2023-03-13 08:14:57] Propagate variables for call_function: cat_4
[2023-03-13 08:14:57] Propagate variables for call_module: conv5_0
[2023-03-13 08:14:57] Propagate variables for call_module: bn5
[2023-03-13 08:14:57] Propagate variables for call_module: conv5_2
[2023-03-13 08:14:57] Propagate variables for call_function: adaptive_max_pool1d
[2023-03-13 08:14:57] Propagate variables for call_method: view_24
[2023-03-13 08:14:57] Propagate variables for call_function: adaptive_avg_pool1d
[2023-03-13 08:14:57] Propagate variables for call_method: view_25
[2023-03-13 08:14:57] Propagate variables for call_function: cat_5
[2023-03-13 08:14:57] Propagate variables for call_module: linear1
[2023-03-13 08:14:57] Propagate variables for call_module: bn6
[2023-03-13 08:14:57] Propagate variables for call_function: leaky_relu
[2023-03-13 08:14:57] Propagate variables for call_module: dp1
[2023-03-13 08:14:57] Propagate variables for call_module: linear2
[2023-03-13 08:14:57] Propagate variables for call_module: bn7
[2023-03-13 08:14:57] Propagate variables for call_function: leaky_relu_1
[2023-03-13 08:14:57] Propagate variables for call_module: dp2
[2023-03-13 08:14:57] Propagate variables for call_module: linear3
[2023-03-13 08:14:57] Propagate variables for output: output
[2023-03-13 08:14:57] Update direct sparsity...
Traceback (most recent call last):
File "nni_optim.py", line 248, in <module>
ModelSpeedup(classifier, torch.randn(2, 3, 1024, device='cuda'), masks).speedup_model()
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 383, in speedup_model
self.update_direct_sparsity()
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 237, in update_direct_sparsity
self.node_infos[node].mask_updater.direct_update_preprocess(self, node)
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 95, in direct_update_preprocess
node_info.output_randomize = tree_map_zip(lambda t: randomize_if_tensor(t, batch_dim, batch_size), node_info.output_origin)
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/utils.py", line 73, in tree_map_zip
return tree_map(fn, pytrees[0])
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_pytree.py", line 179, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_pytree.py", line 179, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 95, in <lambda>
node_info.output_randomize = tree_map_zip(lambda t: randomize_if_tensor(t, batch_dim, batch_size), node_info.output_origin)
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/utils.py", line 55, in randomize_if_tensor
new_obj = obj.clone().detach().contiguous()
RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 10.76 GiB total capacity; 9.16 GiB already allocated; 69.31 MiB free; 9.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
It's very strange, because I have an NVIDIA RTX 2080 Ti with 11 GB of memory and I'm able to train this model with a batch size of 32.
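The OOM message itself suggests one knob to try before anything else: capping the allocator's split size to reduce fragmentation. The value 128 below is just a guess to experiment with; the variable must be set before the first CUDA allocation in the process:

```python
import os

# Must be set before torch makes its first CUDA allocation,
# e.g. at the very top of nni_optim.py.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```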
Since I have two GPUs, I tried to run in parallel:
```python
import torch

from nni.compression.pytorch.speedup.v2 import ModelSpeedup

classifier = torch.nn.DataParallel(classifier, device_ids=[0, 1])
ModelSpeedup(classifier, torch.randn(1, 3, 1024, device='cuda'), masks).speedup_model()
```
But got the following error:
```
File "nni_optim.py", line 248, in <module>
ModelSpeedup(classifier, torch.randn(2, 3, 1024, device='cuda'), masks).speedup_model()
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 93, in __init__
self.graph_module = graph_module if isinstance(graph_module, GraphModule) else concrete_trace(model, self.dummy_input)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 1378, in concrete_trace
graph = tracer.trace(root,
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 589, in trace
fn, args, more_args, kwargs = self.create_args_for_root(fn, isinstance(root, torch.nn.Module), concrete_args)
File "/root/devel/nni/nni/common/concrete_trace_utils/concrete_tracer.py", line 464, in create_args_for_root
raise RuntimeError(f"Tracing expected {len(arg_names)} arguments but got {len(concrete_args)} concrete arguments")
RuntimeError: Tracing expected 0 arguments but got 1 concrete arguments
```
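Since `DataParallel` is only a wrapper, I suspect the tracer fails because the wrapper's `forward(*inputs, **kwargs)` exposes no named arguments. A workaround might be to run speedup on the unwrapped network and re-wrap it afterwards; a minimal sketch with a stand-in model (I'm not sure this is the intended usage):

```python
import torch
import torch.nn as nn

classifier = nn.Linear(3, 4)   # stand-in for the real classifier

wrapped = nn.DataParallel(classifier)
inner = wrapped.module         # the original, traceable module
print(inner is classifier)
```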
What is the best way to deal with this problem?
UPD:
I've run the code on CPU, with the library built from source, and got the following error:
```
[2023-03-13 09:55:49] simulated prune conv1.0 remain/total: 32/64
[2023-03-13 09:55:49] simulated prune conv2.0 remain/total: 32/64
[2023-03-13 09:55:49] simulated prune conv3.0 remain/total: 64/128
[2023-03-13 09:55:49] simulated prune conv4.0 remain/total: 128/256
[2023-03-13 09:55:49] simulated prune linear1 remain/total: 256/512
[2023-03-13 09:55:49] simulated prune linear2 remain/total: 128/256
[2023-03-13 09:55:50] Start to speedup the model...
[2023-03-13 09:55:50] Resolve the mask conflict before mask propagate...
[2023-03-13 09:55:50] Infer module masks...
[2023-03-13 09:55:50] Propagate original variables
[... node-by-node "Propagate variables" log trimmed: identical node sequence to the GPU run above ...]
[2023-03-13 09:55:52] Update direct sparsity...
[2023-03-13 09:55:56] Update direct mask for placeholder: x
[2023-03-13 09:55:56] Update direct mask for call_method: size
[2023-03-13 09:55:56] Update direct mask for call_function: getitem
[2023-03-13 09:55:56] Update direct mask for call_method: size_1
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_1
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_2
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_3
[2023-03-13 09:55:56] Update direct mask for call_method: view
[2023-03-13 09:55:56] Update direct mask for call_method: transpose
[2023-03-13 09:55:56] Update direct mask for call_method: contiguous
[2023-03-13 09:55:56] Update direct mask for call_function: matmul
[2023-03-13 09:55:56] Update direct mask for call_function: mul
[2023-03-13 09:55:56] Update direct mask for call_function: pow_1
[2023-03-13 09:55:56] Update direct mask for call_function: sum_1
[2023-03-13 09:55:56] Update direct mask for call_function: neg
[2023-03-13 09:55:56] Update direct mask for call_function: sub
[2023-03-13 09:55:56] Update direct mask for call_method: transpose_1
[2023-03-13 09:55:56] Update direct mask for call_method: contiguous_1
[2023-03-13 09:55:56] Update direct mask for call_function: sub_1
[2023-03-13 09:55:56] Update direct mask for call_method: topk
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_4
[2023-03-13 09:55:56] Update direct mask for call_function: arange
[2023-03-13 09:55:56] Update direct mask for call_method: view_1
[2023-03-13 09:55:56] Update direct mask for call_function: mul_1
[2023-03-13 09:55:56] Update direct mask for call_function: iadd
[2023-03-13 09:55:56] Update direct mask for call_method: view_2
[2023-03-13 09:55:56] Update direct mask for call_method: transpose_2
[2023-03-13 09:55:56] Update direct mask for call_method: contiguous_2
[2023-03-13 09:55:56] Update direct mask for call_function: mul_2
[2023-03-13 09:55:56] Update direct mask for call_method: view_3
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_5
[2023-03-13 09:55:56] Update direct mask for call_method: view_4
[2023-03-13 09:55:56] Update direct mask for call_method: view_5
[2023-03-13 09:55:56] Update direct mask for call_method: repeat
[2023-03-13 09:55:56] Update direct mask for call_function: sub_2
[2023-03-13 09:55:56] Update direct mask for call_function: cat
[2023-03-13 09:55:56] Update direct mask for call_method: permute
[2023-03-13 09:55:56] Update direct mask for call_method: contiguous_3
[2023-03-13 09:55:56] Update direct mask for call_module: conv1_0
[2023-03-13 09:55:56] Update direct mask for call_module: bn1
[2023-03-13 09:55:56] Update direct mask for call_module: conv1_2
[2023-03-13 09:55:56] Update direct mask for call_method: max_1
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_6
[2023-03-13 09:55:56] Update direct mask for call_method: size_2
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_7
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_8
[2023-03-13 09:55:56] Update direct mask for call_function: getitem_9
[2023-03-13 09:55:56] Update direct mask for call_method: view_6
[2023-03-13 09:55:56] Update direct mask for call_method: transpose_3
[2023-03-13 09:55:56] Update direct mask for call_method: contiguous_4
[2023-03-13 09:55:57] Update direct mask for call_function: matmul_1
[2023-03-13 09:55:57] Update direct mask for call_function: mul_3
[2023-03-13 09:55:57] Update direct mask for call_function: pow_2
[2023-03-13 09:55:57] Update direct mask for call_function: sum_2
[2023-03-13 09:55:57] Update direct mask for call_function: neg_1
[2023-03-13 09:55:57] Update direct mask for call_function: sub_3
[2023-03-13 09:55:57] Update direct mask for call_method: transpose_4
[2023-03-13 09:55:57] Update direct mask for call_method: contiguous_5
[2023-03-13 09:55:57] Update direct mask for call_function: sub_4
[2023-03-13 09:55:57] Update direct mask for call_method: topk_1
[2023-03-13 09:55:57] Update direct mask for call_function: getitem_10
[2023-03-13 09:55:57] Update direct mask for call_function: arange_1
[2023-03-13 09:55:57] Update direct mask for call_method: view_7
[2023-03-13 09:55:57] Update direct mask for call_function: mul_4
[2023-03-13 09:55:57] Update direct mask for call_function: iadd_1
[2023-03-13 09:55:57] Update direct mask for call_method: view_8
[2023-03-13 09:55:57] Update direct mask for call_method: transpose_5
[2023-03-13 09:55:57] Update direct mask for call_method: contiguous_6
[2023-03-13 09:55:57] Update direct mask for call_function: mul_5
[2023-03-13 09:55:57] Update direct mask for call_method: view_9
[2023-03-13 09:55:57] Update direct mask for call_function: getitem_11
[2023-03-13 09:55:57] Update direct mask for call_method: view_10
[2023-03-13 09:55:57] Update direct mask for call_method: view_11
[2023-03-13 09:55:57] Update direct mask for call_method: repeat_1
[2023-03-13 09:55:57] Update direct mask for call_function: sub_5
[2023-03-13 09:55:57] Update direct mask for call_function: cat_1
[2023-03-13 09:55:58] Update direct mask for call_method: permute_1
[2023-03-13 09:55:58] Update direct mask for call_method: contiguous_7
[2023-03-13 09:55:58] Update direct mask for call_module: conv2_0
[2023-03-13 09:55:58] Update direct mask for call_module: bn2
[2023-03-13 09:55:58] Update direct mask for call_module: conv2_2
[2023-03-13 09:55:58] Update direct mask for call_method: max_2
[2023-03-13 09:55:58] Update direct mask for call_function: getitem_12
[2023-03-13 09:55:58] Update direct mask for call_method: size_3
[2023-03-13 09:55:58] Update direct mask for call_function: getitem_13
[2023-03-13 09:55:58] Update direct mask for call_function: getitem_14
[2023-03-13 09:55:58] Update direct mask for call_function: getitem_15
[2023-03-13 09:55:58] Update direct mask for call_method: view_12
[2023-03-13 09:55:58] Update direct mask for call_method: transpose_6
[2023-03-13 09:55:58] Update direct mask for call_method: contiguous_8
[2023-03-13 09:55:58] Update direct mask for call_function: matmul_2
[2023-03-13 09:55:59] Update direct mask for call_function: mul_6
[2023-03-13 09:55:59] Update direct mask for call_function: pow_3
[2023-03-13 09:55:59] Update direct mask for call_function: sum_3
[2023-03-13 09:55:59] Update direct mask for call_function: neg_2
[2023-03-13 09:55:59] Update direct mask for call_function: sub_6
[2023-03-13 09:55:59] Update direct mask for call_method: transpose_7
[2023-03-13 09:55:59] Update direct mask for call_method: contiguous_9
[2023-03-13 09:55:59] Update direct mask for call_function: sub_7
[2023-03-13 09:55:59] Update direct mask for call_method: topk_2
[2023-03-13 09:55:59] Update direct mask for call_function: getitem_16
[2023-03-13 09:55:59] Update direct mask for call_function: arange_2
[2023-03-13 09:55:59] Update direct mask for call_method: view_13
[2023-03-13 09:55:59] Update direct mask for call_function: mul_7
[2023-03-13 09:55:59] Update direct mask for call_function: iadd_2
[2023-03-13 09:55:59] Update direct mask for call_method: view_14
[2023-03-13 09:55:59] Update direct mask for call_method: transpose_8
[2023-03-13 09:55:59] Update direct mask for call_method: contiguous_10
[2023-03-13 09:55:59] Update direct mask for call_function: mul_8
[2023-03-13 09:55:59] Update direct mask for call_method: view_15
[2023-03-13 09:55:59] Update direct mask for call_function: getitem_17
[2023-03-13 09:55:59] Update direct mask for call_method: view_16
[2023-03-13 09:55:59] Update direct mask for call_method: view_17
[2023-03-13 09:55:59] Update direct mask for call_method: repeat_2
[2023-03-13 09:55:59] Update direct mask for call_function: sub_8
[2023-03-13 09:55:59] Update direct mask for call_function: cat_2
[2023-03-13 09:56:00] Update direct mask for call_method: permute_2
[2023-03-13 09:56:00] Update direct mask for call_method: contiguous_11
[2023-03-13 09:56:00] Update direct mask for call_module: conv3_0
[2023-03-13 09:56:00] Update direct mask for call_module: bn3
[2023-03-13 09:56:00] Update direct mask for call_module: conv3_2
[2023-03-13 09:56:00] Update direct mask for call_method: max_3
[2023-03-13 09:56:01] Update direct mask for call_function: getitem_18
[2023-03-13 09:56:01] Update direct mask for call_method: size_4
[2023-03-13 09:56:01] Update direct mask for call_function: getitem_19
[2023-03-13 09:56:01] Update direct mask for call_function: getitem_20
[2023-03-13 09:56:01] Update direct mask for call_function: getitem_21
[2023-03-13 09:56:01] Update direct mask for call_method: view_18
[2023-03-13 09:56:01] Update direct mask for call_method: transpose_9
[2023-03-13 09:56:01] Update direct mask for call_method: contiguous_12
[2023-03-13 09:56:01] Update direct mask for call_function: matmul_3
[2023-03-13 09:56:01] Update direct mask for call_function: mul_9
[2023-03-13 09:56:01] Update direct mask for call_function: pow_4
[2023-03-13 09:56:01] Update direct mask for call_function: sum_4
[2023-03-13 09:56:01] Update direct mask for call_function: neg_3
[2023-03-13 09:56:01] Update direct mask for call_function: sub_9
[2023-03-13 09:56:01] Update direct mask for call_method: transpose_10
[2023-03-13 09:56:01] Update direct mask for call_method: contiguous_13
[2023-03-13 09:56:01] Update direct mask for call_function: sub_10
[2023-03-13 09:56:01] Update direct mask for call_method: topk_3
[2023-03-13 09:56:01] Update direct mask for call_function: getitem_22
[2023-03-13 09:56:01] Update direct mask for call_function: arange_3
[2023-03-13 09:56:01] Update direct mask for call_method: view_19
[2023-03-13 09:56:01] Update direct mask for call_function: mul_10
[2023-03-13 09:56:01] Update direct mask for call_function: iadd_3
[2023-03-13 09:56:01] Update direct mask for call_method: view_20
[2023-03-13 09:56:01] Update direct mask for call_method: transpose_11
[2023-03-13 09:56:01] Update direct mask for call_method: contiguous_14
[2023-03-13 09:56:01] Update direct mask for call_function: mul_11
[2023-03-13 09:56:01] Update direct mask for call_method: view_21
[2023-03-13 09:56:01] Update direct mask for call_function: getitem_23
[2023-03-13 09:56:01] Update direct mask for call_method: view_22
[2023-03-13 09:56:01] Update direct mask for call_method: view_23
[2023-03-13 09:56:01] Update direct mask for call_method: repeat_3
[2023-03-13 09:56:02] Update direct mask for call_function: sub_11
[2023-03-13 09:56:02] Update direct mask for call_function: cat_3
[2023-03-13 09:56:02] Update direct mask for call_method: permute_3
[2023-03-13 09:56:03] Update direct mask for call_method: contiguous_15
[2023-03-13 09:56:04] Update direct mask for call_module: conv4_0
[2023-03-13 09:56:04] Update direct mask for call_module: bn4
[2023-03-13 09:56:04] Update direct mask for call_module: conv4_2
[2023-03-13 09:56:04] Update direct mask for call_method: max_4
[2023-03-13 09:56:04] Update direct mask for call_function: getitem_24
[2023-03-13 09:56:04] Update direct mask for call_function: cat_4
[2023-03-13 09:56:04] Update direct mask for call_module: conv5_0
[2023-03-13 09:56:05] Update direct mask for call_module: bn5
[2023-03-13 09:56:05] Update direct mask for call_module: conv5_2
[2023-03-13 09:56:05] Update direct mask for call_function: adaptive_max_pool1d
[2023-03-13 09:56:05] Update direct mask for call_method: view_24
[2023-03-13 09:56:05] Update direct mask for call_function: adaptive_avg_pool1d
[2023-03-13 09:56:05] Update direct mask for call_method: view_25
[2023-03-13 09:56:05] Update direct mask for call_function: cat_5
[2023-03-13 09:56:05] Update direct mask for call_module: linear1
[2023-03-13 09:56:05] Update direct mask for call_module: bn6
[2023-03-13 09:56:05] Update direct mask for call_function: leaky_relu
[2023-03-13 09:56:05] Update direct mask for call_module: dp1
[2023-03-13 09:56:05] Update direct mask for call_module: linear2
[2023-03-13 09:56:05] Update direct mask for call_module: bn7
[2023-03-13 09:56:05] Update direct mask for call_function: leaky_relu_1
[2023-03-13 09:56:05] Update direct mask for call_module: dp2
[2023-03-13 09:56:05] Update direct mask for call_module: linear3
[2023-03-13 09:56:05] Update direct mask for output: output
[2023-03-13 09:56:05] Update indirect sparsity...
[2023-03-13 09:56:05] Update indirect mask for output: output
[2023-03-13 09:56:05] Update indirect mask for call_module: linear3
[2023-03-13 09:56:05] Update indirect mask for call_module: dp2
[2023-03-13 09:56:05] Update indirect mask for call_function: leaky_relu_1
[2023-03-13 09:56:05] Update indirect mask for call_module: bn7
[2023-03-13 09:56:05] Update indirect mask for call_module: linear2
[2023-03-13 09:56:05] Update indirect mask for call_module: dp1
[2023-03-13 09:56:05] Update indirect mask for call_function: leaky_relu
[2023-03-13 09:56:05] Update indirect mask for call_module: bn6
[2023-03-13 09:56:05] Update indirect mask for call_module: linear1
[2023-03-13 09:56:05] Update indirect mask for call_function: cat_5
[2023-03-13 09:56:05] Update indirect mask for call_method: view_25
[2023-03-13 09:56:05] Update indirect mask for call_function: adaptive_avg_pool1d
[2023-03-13 09:56:05] Update indirect mask for call_method: view_24
[2023-03-13 09:56:05] Update indirect mask for call_function: adaptive_max_pool1d
[2023-03-13 09:56:05] Update indirect mask for call_module: conv5_2
[2023-03-13 09:56:05] Update indirect mask for call_module: bn5
[2023-03-13 09:56:05] Update indirect mask for call_module: conv5_0
[2023-03-13 09:56:05] Update indirect mask for call_function: cat_4
[2023-03-13 09:56:05] Update indirect mask for call_function: getitem_24
[2023-03-13 09:56:05] Update indirect mask for call_method: max_4
[2023-03-13 09:56:05] Update indirect mask for call_module: conv4_2
Traceback (most recent call last):
File "nni_optim.py", line 246, in <module>
ModelSpeedup(classifier, torch.randn(2, 3, 1024, device='cpu'), masks).speedup_model()
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 384, in speedup_model
self.update_indirect_sparsity()
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/model_speedup.py", line 259, in update_indirect_sparsity
self.node_infos[node].mask_updater.indirect_update_process(self, node)
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 438, in indirect_update_process
indirect_fn(model_speedup, node)
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 382, in indirect_activation
input_grad = tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) else t, \
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/utils.py", line 81, in tree_map_zip
return tree_unflatten([fn(*args) for args in zip(*flat_args_list)], spec_list[0])
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/utils.py", line 81, in <listcomp>
return tree_unflatten([fn(*args) for args in zip(*flat_args_list)], spec_list[0])
File "/root/devel/nni/nni/compression/pytorch/speedup/v2/mask_updater.py", line 382, in <lambda>
input_grad = tree_map_zip(lambda t, m: (t * m).type_as(t) if isinstance(m, torch.Tensor) else t, \
TypeError: unsupported operand type(s) for *: 'NoneType' and 'Tensor'
```
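The lambda in the traceback guards only the mask side (`isinstance(m, torch.Tensor)`), so when the gradient slot `t` is `None` while its mask is a tensor, the multiplication raises exactly this `TypeError`. A minimal sketch of the failure mode and a None-aware guard — plain floats stand in for torch tensors, and `apply_mask`/`apply_mask_safe` are illustrative names, not NNI's API or its actual fix:

```python
# Sketch of the failing guard from mask_updater.py and a hypothetical
# fix. Floats stand in for tensors; names here are made up.

def apply_mask(t, m):
    # Mirrors the original lambda: only the mask side is checked,
    # so a None gradient still reaches the multiplication.
    return t * m if m is not None else t

def apply_mask_safe(t, m):
    # Guard both operands: leave the slot untouched when either is absent.
    if t is None or m is None:
        return t
    return t * m

grads = [None, 2.0]   # e.g. an input that received no gradient
masks = [0.5, 0.5]

try:
    [apply_mask(t, m) for t, m in zip(grads, masks)]
except TypeError as e:
    print("original guard fails:", e)

print([apply_mask_safe(t, m) for t, m in zip(grads, masks)])  # [None, 1.0]
```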
| closed | 2023-03-13T08:22:08Z | 2023-04-14T04:18:21Z | https://github.com/microsoft/nni/issues/5435 | [] | Kracozebr | 7 |
geex-arts/django-jet | django | 126 | Related popups for new items aren't working right | An example can be found on the demo site:
http://demo.jet.geex-arts.com/admin/menu/menuitemcategory/2/#/tab/inline_1/
1. Click add another menu item
2. Click the plus icon on the new row
3. The popup doesn't open in the related-popup iframe tab; it opens in a full window.
| closed | 2016-09-27T19:49:44Z | 2016-11-19T15:58:50Z | https://github.com/geex-arts/django-jet/issues/126 | [] | kmorey | 5 |
replicate/cog | tensorflow | 1,663 | ERROR: failed to solve: circular dependency detected on stage: weights | Hello,
Trying to push a model with `--separate-weights` fails with the error below; it works fine when the flag is not used. I am using cog version 0.9.7. I also tried deleting `.cog` and removing the `models/` entry from the auto-generated `.dockerignore`, without success.
```
Building Docker image from environment in cog.yaml as r8.im/xxxxx/xxxxx...
[+] Building 187.8s (8/8) FINISHED docker:desktop-linux
=> [internal] load .dockerignore 0.0s
=> => transferring context: 1.47kB 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 141B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1.4 1.1s
=> CACHED docker-image://docker.io/docker/dockerfile:1.4@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s
=> [internal] load build context 98.2s
=> => transferring context: 10.62GB 98.2s
=> [1/1] COPY models /src/models 58.0s
=> preparing layers for inline cache 30.1s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:5be97ec87e2fb8b2f8f001848c916dd1a2031b3624ff2974d14e8558adc3c655 0.0s
=> => naming to r8.im/xxxxxx/xxxxxx 0.0s
[+] Building 3.7s (5/5) FINISHED docker:desktop-linux
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 2.47kB 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 1.49kB 0.0s
=> resolve image config for docker.io/docker/dockerfile:1.4 3.4s
=> [auth] docker/dockerfile:pull token for registry-1.docker.io 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1.4@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc 0.0s
Dockerfile:1
--------------------
1 | >>> #syntax=docker/dockerfile:1.4
2 | FROM python:3.11 as deps
3 | COPY .cog/tmp/build3627906443/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl
--------------------
ERROR: failed to solve: circular dependency detected on stage: weights
ⅹ Failed to build runner Docker image: Failed to build Docker image: exit status 1
```
Below is cog.yaml
```
# Configuration for Cog ⚙️
# Reference: https://cog.run/yaml
build:
# set to true if your model requires a GPU
gpu: true
# a list of ubuntu apt packages to install
system_packages:
- "git"
# python version in the form '3.11' or '3.11.4'
python_version: "3.11"
python_requirements: requirements.txt
# predict.py defines how predictions are run on your model
predict: "predict.py:Predictor"
```
 | closed | 2024-05-14T07:35:28Z | 2024-07-17T16:44:24Z | https://github.com/replicate/cog/issues/1663 | [] | gurteshwar | 10
ultralytics/yolov5 | machine-learning | 12943 | Improve training speed | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I improve the training speed of the yolov5-seg model? The GPU is being used, but its utilization is very low; on a 3070 Ti, one epoch over eight thousand images takes nine minutes.
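Low GPU utilization with a capable card usually means the CPU-side input pipeline (decoding and augmenting ~8k images per epoch) is the bottleneck rather than the GPU itself. A framework-agnostic way to confirm this is to time data loading separately from compute inside the training loop; in the sketch below, `sleep` calls stand in for real work and the numbers are purely illustrative:

```python
# Diagnose a dataloader bottleneck: time the load step and the compute
# step separately. sleep() is a stand-in for next(dataloader) and for
# the forward/backward pass.

import time

def profile_epoch(batches, load_time, compute_time):
    loading, computing = 0.0, 0.0
    for _ in range(batches):
        t0 = time.perf_counter()
        time.sleep(load_time)        # stand-in for fetching a batch
        t1 = time.perf_counter()
        time.sleep(compute_time)     # stand-in for forward/backward
        t2 = time.perf_counter()
        loading += t1 - t0
        computing += t2 - t1
    return loading, computing

loading, computing = profile_epoch(5, 0.02, 0.005)
print(f"load {loading:.3f}s vs compute {computing:.3f}s")
```

If loading dominates, the usual levers in YOLOv5's `train.py` are a higher `--workers` count, `--cache` to keep images in RAM, and a larger `--batch-size` (check the exact flags against your YOLOv5 version).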
### Additional
_No response_ | closed | 2024-04-18T21:03:59Z | 2024-05-30T00:22:03Z | https://github.com/ultralytics/yolov5/issues/12943 | [
"question",
"Stale"
] | 2375963934a | 2 |