| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
plotly/dash | data-visualization | 2,234 | [BUG] multi_page layout (use_pages=True) won't recognize the pages if they are compiled .pyc files. | As mentioned in the title, it looks like Dash only picks up .py files. Am I doing something wrong, or are .pyc files just not supported "yet"? | closed | 2022-09-17T17:47:59Z | 2024-07-24T15:12:38Z | https://github.com/plotly/dash/issues/2234 | [] | TheBubblePopped | 3 |
holoviz/panel | plotly | 6,953 | Feature Request: CalendarMonthWidget | I want something like this
<img width="682" alt="image" src="https://github.com/holoviz/panel/assets/15331990/9a563d0c-da1c-4d22-a67e-965b1c9ffb04">
I imagine the `value` can be a dict:
```python
{
'2024-02-01': "Some event",
'2024-02-02 12:00': pn.widgets.TextInput(), # maybe?
}
``` | closed | 2024-07-03T18:41:38Z | 2025-01-14T00:01:01Z | https://github.com/holoviz/panel/issues/6953 | [
"type: feature"
] | ahuang11 | 2 |
albumentations-team/albumentations | machine-learning | 1,473 | HueSaturationValue with 4-channel RGBA images with transparency | First of all, I need to say Albumentations is amazing, and I appreciate everything y'all have done! 10/10!
One problem I ran into: albumentations.augmentations.transforms.HueSaturationValue only works for RGB or greyscale images.
I get the following error when running HueSaturationValue transform over an RGBA image:
`TypeError: HueSaturationValue transformation expects 1-channel or 3-channel images.`
```python
def apply(self, image, hue_shift=0, sat_shift=0, val_shift=0, **params):
    if not is_rgb_image(image) and not is_grayscale_image(image):
        raise TypeError("HueSaturationValue transformation expects 1-channel or 3-channel images.")
    return F.shift_hsv(image, hue_shift, sat_shift, val_shift)
```
https://github.com/albumentations-team/albumentations/blob/e3b47b3a127f92541cfeb16abbb44a6f8bf79cc8/albumentations/augmentations/transforms.py#L1005C21-L1005C21
I'd like to simply:
1. Create my transform function with all of my transforms (including HueSaturationValue)
2. Run the RGBA image through that transform function
If I have multiple transforms, is the only solution to:
1. Convert a copy of the RGBA image to an RGB image
2. Create a transform function with a single HueSaturationValue transform
3. Run the RGB image through that transform function
4. Apply the mask from the RGBA image to the newly transformed RGB image, creating a new RGBA image with HueSaturationValue applied
5. Create another transform function with the rest of our transforms
6. Run the RGBA image through that second transform function
Can we run the HueSaturationValue on the RGB channels only and ignore the alpha channel?
I haven't tested it yet, but if we could treat it like an np array, could we do something like this?:
```python
def apply(self, image, hue_shift=0, sat_shift=0, val_shift=0, **params):
    if not is_rgb_image(image) and not is_grayscale_image(image) and not is_rgba_image(image):
        raise TypeError("HueSaturationValue transformation expects 1-channel, 3-channel, or 4-channel images.")
    if is_rgba_image(image):
        # channels are the last axis (HWC), so split and re-join along axis -1
        return np.concatenate(
            (F.shift_hsv(image[..., :3], hue_shift, sat_shift, val_shift), image[..., 3:]),
            axis=-1,
        )
    return F.shift_hsv(image, hue_shift, sat_shift, val_shift)
```
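Until then, a caller-side workaround is to split off the alpha channel, run the pipeline on the RGB part, and re-attach it afterwards. A minimal sketch assuming channels-last (HWC) uint8 arrays; the lambda stands in for the real albumentations pipeline call:

```python
import numpy as np

def apply_rgb_only(transform, rgba):
    """Run an RGB-only transform on an RGBA image, carrying alpha through untouched."""
    rgb, alpha = rgba[..., :3], rgba[..., 3:]
    out = transform(rgb)  # e.g. A.Compose([...])(image=rgb)["image"]
    return np.concatenate((out, alpha), axis=-1)

# stand-in transform: channel inversion
rgba = np.zeros((2, 2, 4), dtype=np.uint8)
rgba[..., 3] = 255  # fully opaque alpha
result = apply_rgb_only(lambda rgb: 255 - rgb, rgba)
```

The alpha channel passes through unchanged while the RGB channels get transformed.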
Of course I don't know albumentations perfectly, maybe there's a good reason why it shouldn't work with RGBA images? Please let me know!
Again, thanks so much for y'alls hard work! Albumentations is amazing! | closed | 2023-08-15T21:26:19Z | 2024-10-31T02:28:46Z | https://github.com/albumentations-team/albumentations/issues/1473 | [] | JJrodny | 1 |
autokey/autokey | automation | 757 | AutoKey on Ubuntu 22.04 no longer allows setting hotkeys. What can I do? | ## Classification:
(Pick one: Bug, Crash/Hang/Data loss, Enhancement, Feature (new), Performance, UI/Usability)
## Reproducibility:
(Pick one: Always, Sometimes, Rarely, Unable, I didn't try)
## AutoKey version:
(Paste in your AutoKey version and, if the problem is known to be present in more than one version, please list them all.)
## Used GUI:
(Pick one: Gtk, Qt, Both):
## Installed via:
(Pick one: PPA, pip3, Git, package manager, etc.)
## Linux distribution:
(Provide information about your Linux distribution and its release or version number.)
## Summary:
(Provide a brief summary of the problem.)
## Steps to reproduce:
- I do this.
- Then I do that.
## Expected result:
(Explain what should happen.)
## Actual result:
(Explain what actually happens.)
## Screenshot (if applicable):
(Include one or more screenshots of the issue by dragging the image file(s) here to help with debugging.)
## Verbose output (if applicable):
(Include the output from launching AutoKey via the `autokey-gtk --verbose` or `autokey-qt --verbose` command to help with debugging. Please upload the output somewhere accessible or paste it into a code block here, enclosing it in triple backticks.)
```
Example code block. Replace this with your output content.
```
## Notes:
(Describe debugging steps you've taken, a workaround you've figured out, or any other information you think we might need.)
| closed | 2023-01-06T10:10:08Z | 2023-01-12T19:32:55Z | https://github.com/autokey/autokey/issues/757 | [
"user support"
] | hsc57 | 3 |
Miserlou/Zappa | flask | 1,915 | switching "keep_warm" off doesn't work | If you switch the keep_warm functionality on and then back off again, the CloudWatch rule is NOT deleted.
## Context
If you set keep_warm to true and deploy, then set it to false and deploy again, the CloudWatch rule is not deleted. Thus, the function will still be kept warm.
## Expected Behavior
if keep_warm: false, then the function should not be kept warm
## Actual Behavior
it is kept warm
## Possible Fix
not sure ...
## Steps to Reproduce
1. deploy a function with "keep_warm": true
2. update this function with "keep_warm": false
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.48.2
* Operating System and Python version: ubuntu 18.04 python 3.7.3
* The output of `pip freeze`:
appdirs==1.4.3
argcomplete==1.9.3
aspy.yaml==1.3.0
atomicwrites==1.3.0
attrs==19.1.0
black==19.3b0
boto3==1.9.194
botocore==1.12.194
certifi==2019.6.16
cfgv==2.0.1
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
cognitojwt==1.1.0
coverage==4.5.4
dash==1.0.1
dash-auth==1.3.2
dash-bootstrap-components==0.6.3
dash-core-components==1.0.0
dash-daq==0.1.0
dash-google-auth==0.1.2
dash-html-components==1.0.0
dash-renderer==1.0.0
dash-table==4.0.1
docutils==0.14
durationpy==0.5
ecdsa==0.13.2
Flask==1.1.1
Flask-Cognito==1.13
Flask-Compress==1.4.0
Flask-Dance==2.2.0
Flask-SeaSurf==0.2.2
future==0.16.0
hjson==3.0.1
identify==1.4.5
idna==2.8
importlib-metadata==0.18
itsdangerous==1.1.0
Jinja2==2.10.1
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
more-itertools==7.2.0
nodeenv==1.3.3
numpy==1.16.4
oauthlib==3.0.2
pandas==0.24.2
pkg-resources==0.0.0
placebo==0.9.0
plotly==4.0.0
pluggy==0.12.0
pre-commit==1.17.0
py==1.8.0
pyasn1==0.4.5
pytest==4.3.1
pytest-cov==2.7.1
python-dateutil==2.6.1
python-jose==3.0.1
python-slugify==1.2.4
pytz==2019.1
PyYAML==5.1.1
requests==2.7.0
requests-oauthlib==1.2.0
retrying==1.3.3
rsa==4.0
s3transfer==0.2.1
six==1.12.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.9
ua-parser==0.8.0
Unidecode==1.1.1
urllib3==1.25.3
URLObject==2.4.3
virtualenv==16.6.2
Werkzeug==0.15.5
wsgi-request-logger==0.4.6
zappa==0.48.2
zipp==0.5.2
| open | 2019-08-08T08:54:48Z | 2019-09-11T05:49:09Z | https://github.com/Miserlou/Zappa/issues/1915 | [] | filthysocks | 5 |
jumpserver/jumpserver | django | 14,396 | [Feature] Language PT-BR | ### Product Version
Language PT-BR
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [x] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### ⭐️ Feature Description
Language PT-BR
### Proposed Solution
Language PT-BR
### Additional Information
_No response_ | closed | 2024-11-01T16:59:30Z | 2024-12-19T10:48:02Z | https://github.com/jumpserver/jumpserver/issues/14396 | [
"✅ Done",
"⏳ Pending feedback",
"⭐️ Feature Request",
"📝 Recorded",
"📦 z~release:v4.5.0"
] | renan-moraes1 | 3 |
A3M4/YouTube-Report | seaborn | 51 | Who Im | > Wooo nice
_Originally posted by @Whoim1122 in [dcdbc29](https://github.com/A3M4/YouTube-Report/commit/dcdbc29e8c05fca643da03ca0ae3fa7bd1b8d0a9#commitcomment-146973287)_ | open | 2024-09-20T10:47:08Z | 2024-09-20T10:47:08Z | https://github.com/A3M4/YouTube-Report/issues/51 | [] | Whoim1122 | 0 |
miguelgrinberg/python-socketio | asyncio | 1,306 | Option to reconnect on initial connection failure | A failure on the initial connection is currently not retried. Add an option to engage the existing reconnection mechanism when the initial connection fails. | closed | 2024-02-05T00:18:20Z | 2024-02-05T12:56:00Z | https://github.com/miguelgrinberg/python-socketio/issues/1306 | [
"enhancement"
] | miguelgrinberg | 0 |
Esri/arcgis-python-api | jupyter | 1,870 | Feature layer query introduces duplicates when querying above 2000 records | Since the query refactor in 2.3.0, the query method does not correctly request all features when the number of features exceeds the maxRecord limit of the feature service.
A query that should result in 18k features results in 20k features with ~3k duplicates. The same query on 2.2.x and earlier has no issues.
The issue was found when querying a feature service with a maxRecord set at 5000, with a query returning more than 15k features.
I found some discrepancies between the old and new code.
1. Default maxRecord count instead of service property
When the query exceeds the transfer limit, it sets a `resultRecordCount` of 2000 and a `resultOffset` of the same amount.
```python
if "resultRecordCount" not in params:
# assign initial value after first query
params["resultRecordCount"] = 2000
if "resultOffset" in params:
# add the number we found to the offset so we don't have doubles
params["resultOffset"] = params["resultOffset"] + len(
result["features"]
)
else:
# initial offset after first query (result record count set by user or up above)
params["resultOffset"] = params["resultRecordCount"]
```
This may be a default value, but when the default return amount on the Feature Service is higher this will result in a faulty second query with 2000 returned records and a 2000 offset, while already 5000 features had been returned in the first query.
2. First query may or may not be ordered.
The second problem arises from the fact that the features returned in the first query (at the top of the query method) are not ordered. However, the subsequent query results using `resultRecordCount` and `resultOffset` are ordered, which means they may or may not contain features that were already returned in the very first query. Before the refactor this wasn't an issue, because the code checked whether pagination was needed before performing the first query.
```python
def _query(layer, url, params, raw=False):
"""returns results of query"""
result = {}
try:
# Layer query call
result = layer._con.post(url, params, token=layer._token) # This one is not ordered?
# Figure out what to return
if "error" in result:
raise ValueError(result)
elif "returnCountOnly" in params and _is_true(params["returnCountOnly"]):
# returns an int
return result["count"]
elif "returnIdsOnly" in params and _is_true(params["returnIdsOnly"]):
# returns a dict with keys: 'objectIdFieldName' and 'objectIds'
return result
elif "returnExtentOnly" in params and _is_true(params["returnExtentOnly"]):
# returns extent dictionary with key: 'extent'
return result
elif _is_true(raw):
return result
elif "resultRecordCount" in params and params["resultRecordCount"] == len(
result["features"]
):
return arcgis_features.FeatureSet.from_dict(result)
else:
# we have features to return
features = result["features"]
# If none of the ifs above worked then keep going to find more features
# Make sure we have all features
if "exceededTransferLimit" in result:
while (
"exceededTransferLimit" in result
and result["exceededTransferLimit"] == True
):
if "resultRecordCount" not in params:
# assign initial value after first query
params["resultRecordCount"] = 2000
if "resultOffset" in params:
# add the number we found to the offset so we don't have doubles
params["resultOffset"] = params["resultOffset"] + len(
result["features"]
)
else:
# initial offset after first query (result record count set by user or up above)
params["resultOffset"] = params["resultRecordCount"]
result = layer._con.post(path=url, postdata=params, token=layer._token) # These queries are ordered?
# add new features to the list
features = features + result["features"]
# assign complete list
result["features"] = features
```
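Until the pagination logic is fixed, duplicates can also be stripped client-side by de-duplicating the returned features on their object ID — a minimal sketch, assuming the standard Esri JSON feature shape (`{"attributes": {"OBJECTID": ...}, ...}`):

```python
def dedupe_features(features, oid_field="OBJECTID"):
    """Drop duplicate features from paginated query results, keyed on the object ID."""
    seen, unique = set(), []
    for feat in features:
        oid = feat["attributes"][oid_field]
        if oid not in seen:
            seen.add(oid)
            unique.append(feat)
    return unique

feats = [
    {"attributes": {"OBJECTID": 1}},
    {"attributes": {"OBJECTID": 2}},
    {"attributes": {"OBJECTID": 1}},  # duplicate from an overlapping page
]
unique_feats = dedupe_features(feats)
```

This only masks the symptom, of course; the offset accounting described above is the real bug.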
I use a workaround for these issues by:
- forcing an ordering on all query so the first query will also have forced ordering. Changing the code to check if pagination is needed before performing the feature queries would also fix this (like before 2.3.0)
`order_by_fields="OBJECTID ASC"`
- To make queries with the correct number of features, this part in the query method `params["resultRecordCount"] = 2000` is replaced by `params["resultRecordCount"] = len(features)` where the length of the returned features from the first query is set as the maxRecord amount that the first query has reached. This might as well be a value read from the service properties like before. | closed | 2024-07-17T13:08:23Z | 2024-10-09T11:40:16Z | https://github.com/Esri/arcgis-python-api/issues/1870 | [
"bug"
] | HDO-B | 4 |
chezou/tabula-py | pandas | 374 | Pls add "orientation" parameter to read_pdf | ### Is your feature request related to a problem? Please describe.
Today Tabula (which is pretty great) reads text in both vertical and horizontal orientations alike.
When a page mixes the two (for whatever reason), the results are typically messed up.
### Describe the solution you'd like
Add a parameter to read_pdf
orientation="vertical" or "horizontal" or "any"
"Any" is what we all get today by default.
"vertical" will ignore all text written horizontally,
"horizontal" will ignore all text written vertically
### Describe alternatives you've considered
selecting only the right-orientation areas with a template.
This solution only works per-specific PDF file, it does not scale to many different PDFs
### Additional context
_No response_ | closed | 2023-11-23T15:34:44Z | 2024-01-16T02:02:22Z | https://github.com/chezou/tabula-py/issues/374 | [
"enhancement",
"help wanted",
"tabula-java limitation"
] | EZcat335 | 4 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,464 | Training High Resolution + Rectangular images | Hi,
I intend to train on images that are both rectangular and high-resolution. Which training flags should be used in this case?
I checked the training/test tips link. However, it covers high-resolution and rectangular images separately, and it seems the two cannot be combined. | closed | 2022-07-21T16:26:06Z | 2022-09-06T20:52:59Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1464 | [] | ankur-chr | 2 |
pallets-eco/flask-sqlalchemy | flask | 1,361 | Flask plugin breaks DeclarativeBase | The plugin breaks the DeclarativeBase, e.g.:
```python
from typing import Any
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.types import JSON
class Base(DeclarativeBase):
type_annotation_map = {dict[str, Any]: JSON}
db = SQLAlchemy(model_class=Base)
```
Gives the error:
`sqlalchemy.exc.InvalidRequestError: Declarative base class has both a 'registry' attribute and a type_annotation_map entry. Per-base type_annotation_maps are not supported. Please apply the type_annotation_map to this registry directly.`
However, `type_annotation_map` is supported on a base class. By subclassing the base (which seems to happen inside the plugin), the subclass now sees both the registry inherited from the base class and the base's `type_annotation_map` entry. This doesn't happen without the Flask plugin.
Environment:
- Python version: 3.11.6
- Flask-SQLAlchemy version: 3.1.1
- SQLAlchemy version: 2.0.31
| open | 2024-07-23T14:12:17Z | 2024-12-02T07:42:37Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1361 | [] | pvanderlinden | 4 |
ultralytics/yolov5 | machine-learning | 13,508 | A problem about calculating confusion matrices | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
In `yolov5/utils/metrics.py`:

```python
class ConfusionMatrix:
    """Generates and visualizes a confusion matrix for evaluating object detection classification performance."""

    def __init__(self, nc, conf=0.25, iou_thres=0.45):
        """Initializes ConfusionMatrix with given number of classes, confidence, and IoU threshold."""
        self.matrix = np.zeros((nc + 1, nc + 1))
        self.nc = nc  # number of classes
        self.conf = conf
        self.iou_thres = iou_thres

    def process_batch(self, detections, labels):
        """
        Return intersection-over-union (Jaccard index) of boxes.

        Both sets of boxes are expected to be in (x1, y1, x2, y2) format.

        Arguments:
            detections (Array[N, 6]), x1, y1, x2, y2, conf, class
            labels (Array[M, 5]), class, x1, y1, x2, y2

        Returns:
            None, updates confusion matrix accordingly
        """
        if detections is None:
            gt_classes = labels.int()
            for gc in gt_classes:
                self.matrix[self.nc, gc] += 1  # background FN
            return

        detections = detections[detections[:, 4] > self.conf]
        gt_classes = labels[:, 0].int()
        detection_classes = detections[:, 5].int()
        iou = box_iou(labels[:, 1:], detections[:, :4])

        x = torch.where(iou > self.iou_thres)
        if x[0].shape[0]:
            matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy()
            if x[0].shape[0] > 1:
                matches = matches[matches[:, 2].argsort()[::-1]]
                matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
                matches = matches[matches[:, 2].argsort()[::-1]]
                matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
        else:
            matches = np.zeros((0, 3))

        n = matches.shape[0] > 0
        m0, m1, _ = matches.transpose().astype(int)
        for i, gc in enumerate(gt_classes):
            j = m0 == i
            if n and sum(j) == 1:
                self.matrix[detection_classes[m1[j]], gc] += 1  # correct
            else:
                self.matrix[self.nc, gc] += 1  # true background

        if n:
            for i, dc in enumerate(detection_classes):
                if not any(m1 == i):
                    self.matrix[dc, self.nc] += 1  # predicted background
```
When a detection box does not match any ground-truth box, i.e., when all IoUs are below the threshold, the detection is not counted as a false positive. Is this a deliberate design choice or a bug? The confusion matrix in YOLOv11 does not behave the same way.
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-12T01:58:28Z | 2025-02-16T19:01:55Z | https://github.com/ultralytics/yolov5/issues/13508 | [
"bug",
"detect"
] | SwustLiC | 3 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 990 | Change the __bind_key__ dynamically on model before querying? | I need to retrieve data from several databases, same data model, different data.
```py
class SomeModel(db.Model):
id = db.Column(db.Integer, primary_key=True, nullable=False)
site = db.Column(db.String)
other = db.Column(db.String)
```
```py
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/main.db'
app.config['SQLALCHEMY_BINDS'] = {
'site1':'sqlite:////tmp/site1.db'',
'site2':'sqlite:////tmp/site2.db'',
}
```
```py
def get_data(site):
model = SomeModel()
model.__bind_key__ = site
return model.query.all()
```
It does not work but is there a way to achieve this?
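A hedged sketch of an alternative: instead of mutating `__bind_key__`, keep one engine per site and choose the engine when opening the session. This is plain SQLAlchemy (with Flask-SQLAlchemy you would presumably fetch the per-bind engine, e.g. via `db.engines` / `db.get_engine`, instead of building the `engines` dict by hand):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class SomeModel(Base):
    __tablename__ = "some_model"
    id = Column(Integer, primary_key=True)
    site = Column(String)
    other = Column(String)

# one engine per site database (in-memory here for illustration)
engines = {
    "site1": create_engine("sqlite://"),
    "site2": create_engine("sqlite://"),
}
for engine in engines.values():
    Base.metadata.create_all(engine)

def get_data(site):
    # pick the engine at query time, instead of mutating the model class
    with Session(engines[site]) as session:
        return session.query(SomeModel).all()

with Session(engines["site1"]) as session:
    session.add(SomeModel(site="site1"))
    session.commit()
```

The same model class then queries whichever database the chosen engine points at.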
| closed | 2021-08-04T19:33:58Z | 2022-10-03T00:21:51Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/990 | [] | mxdev88 | 3 |
slackapi/python-slack-sdk | asyncio | 1,062 | Need help converting my websocket class to slack sdk websocket client | I am trying to convert our current framework that uses the RTM client to using the API for event listening so we can use bots that don't have the deprecated bot client. I have a class that uses websocket, [here](https://gist.github.com/jcastro-sfdc/dbd16f8f9d462c7d782a5ea25d2b7012). I am trying to rewrite our call back class and think I need a little help converting it. It currently uses a named BASE_URL for call backs. Since we want to enable socket mode on all our apps I need help as to how I should be configuring this class [right here](https://gist.github.com/jcastro-sfdc/8cfa529070e92aede880df8f50472ad5). I am not familiar enough with bolt or you websocket client on how it generates a call back address. Thanks for any help. | closed | 2021-07-14T13:21:28Z | 2021-07-15T13:24:54Z | https://github.com/slackapi/python-slack-sdk/issues/1062 | [
"question",
"socket-mode"
] | jcastro-sfdc | 3 |
FlareSolverr/FlareSolverr | api | 833 | FlareSolverr doesn't work anymore with the yggtorrent.wtf website | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes]
- FlareSolverr User-Agent (3.0.2 and 3.2.2 win):
- Are you using a VPN: [yes and no]
- Are you using a Proxy: [no]
- Are you using Captcha Solver: [no]
```
### Description
Since around 2 AM French time, FlareSolverr no longer gets past the Cloudflare DDoS check on yggtorrent.wtf. This happens with both the Windows install and the Docker install, with or without a VPN; it just drops this error in the log:
### Logged Error Messages
```text
2023-07-30 11:45:04 INFO Incoming request => POST /v1 body: {'maxTimeout': 60000, 'cmd': 'request.get', 'url': 'https://www3.yggtorrent.wtf/engine/search?category=all&do=search&order=desc&sort=publish_date', 'proxy': {}}
2023-07-30 11:45:19 INFO Challenge detected. Selector found: #challenge-spinner
2023-07-30 11:46:06 ERROR Error: Error solving the challenge. Timeout after 60.0 seconds.
2023-07-30 11:46:06 INFO Response in 61.468 s
2023-07-30 11:46:06 INFO 192.168.1.51 POST http://192.168.1.45:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_ | closed | 2023-07-30T09:53:27Z | 2023-08-03T16:19:07Z | https://github.com/FlareSolverr/FlareSolverr/issues/833 | [
"duplicate"
] | Siagutrop | 48 |
OpenInterpreter/open-interpreter | python | 777 | Support OSS model serving (ollama etc) | ### Is your feature request related to a problem? Please describe.
Currently, to use open-interpreter I need to install a closed-source application.
### Describe the solution you'd like
I would like to use the open source alternative
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2023-11-19T20:04:52Z | 2024-03-18T20:47:53Z | https://github.com/OpenInterpreter/open-interpreter/issues/777 | [
"Enhancement"
] | JayDoubleu | 5 |
AirtestProject/Airtest | automation | 298 | I would like to know more details on how to export test results to Excel. | (Remove any of the following parts if you don't have details for them.)
**Describe the bug**
I would like to know more details on how to export test results to Excel.
I would like a detailed explanation and an example of how to organize the test results into an Excel file.
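(The issue doesn't describe the data to export, so as a hedged illustration with made-up field names: collect each step's outcome into a list of dicts and write it out as an Excel-readable CSV; `pandas.DataFrame(results).to_excel("report.xlsx")` would be the direct .xlsx route.)

```python
import csv
import io

# assumed shape of the collected results -- adapt to however your
# script records each step's outcome
results = [
    {"step": "tap login", "status": "passed", "duration_s": 1.2},
    {"step": "enter name", "status": "failed", "duration_s": 0.8},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["step", "status", "duration_s"])
writer.writeheader()
writer.writerows(results)
report = buf.getvalue()  # write this string to e.g. report.csv and open it in Excel
```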
```
paste traceback here
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**python version:** `python3.5`
**airtest version:** `1.0.22`
> You can get airtest version via `pip freeze` command.
**Smartphone (please complete the following information):**
- Device: [e.g. google pixel 2]
- OS: [e.g. Android 8.1]
- more information if have
**Additional context**
Add any other context about the problem here.
| closed | 2019-03-06T10:11:48Z | 2019-03-21T00:05:51Z | https://github.com/AirtestProject/Airtest/issues/298 | [] | JJunM | 5 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,027 | Avoid redownloading driver every time? | I see that if I avoid redownloading the undetected-chromedriver every time I run my program, then the site in question (familysearch.org) detects me. Could someone please explain how it fingerprints the driver? Why does redownloading the same driver fix the issue? I'm wondering if there's a less costly fix here. | closed | 2023-02-03T18:43:45Z | 2023-02-04T21:36:45Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1027 | [] | alexreg | 9 |
PablocFonseca/streamlit-aggrid | streamlit | 93 | how to make text wrap in aggrid cells? | Hi,
Two goals:
1. simple wrap of "abcdefghijkl"
---
abcdef
ghijkl
---
2. respect line feeds/cr in strings
---
line 1
line 2
etc.
--- | closed | 2022-05-17T05:33:27Z | 2024-04-04T17:53:21Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/93 | [
"question"
] | fredzannarbor | 2 |
voxel51/fiftyone | computer-vision | 5,461 | [FR] Loading/uploading dataset with CVAT docker container's "share" mounted drive | I would like to request an official way for CVAT to load images directly from the "share" mounted drive of the CVAT Docker container when using FiftyOne for annotation tasks, rather than uploading (copying) them from the local drive to the container each time.
### Motivation

**Use case**: When working with large datasets, uploading files from the local drive to the CVAT Docker container every time an annotation task is created can be time-consuming and redundant, especially if the data already exists in a shared mounted directory.

**Value to FiftyOne users**: This would significantly reduce the time and resources required for setting up annotation tasks, improving workflow efficiency for users handling large datasets.

**Value to my project/organization**: My project has dozens of GBs that I would not need to upload every time (uploading takes a long time) when the dataset could be accessed in seconds through the mounted shared drive.

**Current difficulty**: Currently, CVAT uploads the files even if they are already available in the Docker container via a shared mount. There is no documented method to instruct CVAT to directly reference these files, leading to unnecessary data transfer.

### What areas of FiftyOne does this feature affect?

The `.annotate()` method, for example.

### Details

I came across a related issue here: [voxel51/fiftyone#1235](https://github.com/voxel51/fiftyone/issues/1235), which discusses a similar challenge. The solution proposed in that thread involves modifying the `fiftyone.utils.cvat` module to allow CVAT to use `server_files` when referencing images from the shared directory. However, I wasn't sure which implementation works best, and there may already be a newer integrated solution that resolves this exact issue.
### Willingness to contribute
- Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
| open | 2025-02-04T01:11:56Z | 2025-02-12T01:53:43Z | https://github.com/voxel51/fiftyone/issues/5461 | [
"feature"
] | lejrn | 1 |
microsoft/nni | data-science | 5,783 | WARNING: GPU found but will not be used. Please set `experiment.config.trial_gpu_number` to the number of GPUs you want to use for each trial. | Hello! When running NAS, I found this problem: WARNING: GPU found but will not be used. Please set `experiment.config.trial_gpu_number` to the number of GPUs you want to use for each trial.
| open | 2024-05-16T14:40:12Z | 2024-05-29T02:27:43Z | https://github.com/microsoft/nni/issues/5783 | [] | xutongpure | 1 |
pyqtgraph/pyqtgraph | numpy | 3,245 | Add LTTB downsampling support | Consider adding LTTB downsampling support for timeseries based plots. See demo at https://www.base.is/flot/.
Python based modules to consider:
https://github.com/dgoeries/lttbc/
https://pypi.org/project/lttb/
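For reference, the bucketed largest-triangle selection is compact to sketch in NumPy — a minimal illustration of the algorithm, not the optimized `lttbc` implementation:

```python
import numpy as np

def lttb(x, y, n_out):
    """Largest-Triangle-Three-Buckets downsampling (minimal sketch).

    Keeps the first and last points; from each interior bucket it keeps the
    point forming the largest triangle with the previously kept point and
    the average of the next bucket.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    if n_out >= n or n_out < 3:
        return x, y
    # edges of the n_out - 2 interior buckets (over points 1 .. n-2)
    edges = np.linspace(1, n - 1, n_out - 1).astype(int)
    out_x, out_y = [x[0]], [y[0]]
    a = 0  # index of the last kept point
    for i in range(n_out - 2):
        lo, hi = edges[i], edges[i + 1]  # current bucket
        if i + 2 < n_out - 1:            # average of the next bucket
            avg_x = x[edges[i + 1]:edges[i + 2]].mean()
            avg_y = y[edges[i + 1]:edges[i + 2]].mean()
        else:                            # next "bucket" is the final point
            avg_x, avg_y = x[-1], y[-1]
        # triangle area of (kept point, candidate, next-bucket average), x2
        area = np.abs((x[a] - avg_x) * (y[lo:hi] - y[a])
                      - (x[a] - x[lo:hi]) * (avg_y - y[a]))
        a = lo + int(area.argmax())
        out_x.append(x[a])
        out_y.append(y[a])
    out_x.append(x[-1])
    out_y.append(y[-1])
    return np.array(out_x), np.array(out_y)
```

Because selection is area-based, isolated spikes survive the downsampling, which is the property that makes LTTB attractive for timeseries plots.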
| open | 2025-02-12T13:38:38Z | 2025-02-22T14:00:55Z | https://github.com/pyqtgraph/pyqtgraph/issues/3245 | [] | hinxx | 5 |
mithi/hexapod-robot-simulator | dash | 2 | Foot tips in contact with polygon disregards non-foot tip points | If the height of the foot tip is greater than the point of contact with the body,
use the point of contact with the body instead. | closed | 2020-02-17T16:25:19Z | 2020-02-18T07:40:14Z | https://github.com/mithi/hexapod-robot-simulator/issues/2 | [] | mithi | 0 |
modelscope/data-juicer | streamlit | 383 | Confused with the meaning of 'preprocess' time-consuming in the `reproduced_redpajama /README.md` | I am confused with the `preprocess` and `read+unify` stage meaning in `configs/reproduced_redpajama/README.md`, could you explain in more detail about the meaning of these two stages?

| closed | 2024-08-13T03:25:03Z | 2024-08-13T08:23:28Z | https://github.com/modelscope/data-juicer/issues/383 | [] | flyflypeng | 2 |
Lightning-AI/pytorch-lightning | deep-learning | 19,858 | Dynamically link arguments in `LightningCLI`? | ### Description & Motivation
Is it possible to _dynamically_ link arguments in the `LightningCLI`, say, depending on the module or datamodule subclass that is specified in a config file or at the command line?
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @carmocca @mauvilsa | closed | 2024-05-09T17:17:19Z | 2024-05-14T20:11:52Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19858 | [
"feature",
"lightningcli"
] | EthanMarx | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,701 | xy plot Grid legend font size LARGER | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Make font size bigger in Grid legends
### Proposed workflow
Add Settings option for font size
### Additional information
_No response_ | closed | 2024-05-04T10:23:19Z | 2024-05-20T06:02:32Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15701 | [
"enhancement"
] | rafstahelin | 0 |
hyperspy/hyperspy | data-visualization | 3,334 | Interactive ROI plot does not update limits | Running the following code:
```python
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal2D(np.random.random((500, 500)))
line_roi = hs.roi.Line2DROI()
s.plot()
s_line = line_roi.interactive(s, color='red')
s_line.plot()
```
Gives the plot:

Then, changing the length of the red ROI line, does update the data in `s_line` plot, but it does not update the x-limits.
Example of this:


| open | 2024-03-14T18:15:16Z | 2024-03-15T08:22:24Z | https://github.com/hyperspy/hyperspy/issues/3334 | [
"type: enhancement",
"good first PR"
] | magnunor | 5 |
fastapi/sqlmodel | fastapi | 397 | pathlib.Path is probably an unsupported type | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from pathlib import Path
from typing import Optional
import pytest
from sqlmodel import VARCHAR, Column, Field, Session, SQLModel, create_engine
class MyFile(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
path: Path = Field(sa_column=Column(VARCHAR))
@pytest.fixture
def session() -> Session:
engine = create_engine(
'sqlite://', connect_args={'check_same_thread': False})
SQLModel.metadata.create_all(engine)
with Session(engine) as session:
yield session
def test_myfile(session: Session, tmp_path: Path):
session.add(MyFile(path=tmp_path / 'test.txt'))
session.commit()
```
### Description
* Create a model with a field of type `pathlib.Path`.
* Create a table where the field is mapped as a VARCHAR.
* Insert into the table.
* Select from table and have the field mapped back as a `pathlib.Path`.
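For context, the failure originates in the DB-API driver rather than in SQLModel itself: `sqlite3` does not know how to bind a `Path` parameter. A stdlib-only reproduction, plus the str round-trip that a custom SQLAlchemy `TypeDecorator` would automate (assumption: nothing here is SQLModel API):

```python
import sqlite3
from pathlib import Path

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myfile (path VARCHAR)")
p = Path("/tmp/test.txt")

# Binding a Path object directly fails ("probably unsupported type" on
# Python <= 3.11; newer versions raise a ProgrammingError instead).
try:
    con.execute("INSERT INTO myfile (path) VALUES (?)", (p,))
    direct_bind_ok = True
except sqlite3.Error:
    direct_bind_ok = False

# Converting to str on the way in and back to Path on the way out works:
con.execute("INSERT INTO myfile (path) VALUES (?)", (str(p),))
stored = con.execute("SELECT path FROM myfile").fetchone()[0]
round_tripped = Path(stored)
```

A `TypeDecorator` over `VARCHAR` would simply perform these two conversions in `process_bind_param` and `process_result_value`.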
### Operating System
Linux
### Operating System Details
Ubuntu 22.04.
### SQLModel Version
0.0.6
### Python Version
3.10
### Additional Context
Logs from the test case:
```
2022-08-10 10:56:17,187 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-08-10 10:56:17,188 INFO sqlalchemy.engine.Engine PRAGMA main.table_info("myfile")
2022-08-10 10:56:17,188 INFO sqlalchemy.engine.Engine [raw sql] ()
2022-08-10 10:56:17,188 INFO sqlalchemy.engine.Engine PRAGMA temp.table_info("myfile")
2022-08-10 10:56:17,188 INFO sqlalchemy.engine.Engine [raw sql] ()
2022-08-10 10:56:17,189 INFO sqlalchemy.engine.Engine
CREATE TABLE myfile (
path VARCHAR,
id INTEGER,
PRIMARY KEY (id)
)
2022-08-10 10:56:17,189 INFO sqlalchemy.engine.Engine [no key 0.00012s] ()
2022-08-10 10:56:17,189 INFO sqlalchemy.engine.Engine COMMIT
-------------------------------------------------------------------------- Captured stderr setup ---------------------------------------------------------------------------
sqlalchemy.engine.Engine - BEGIN (implicit)
sqlalchemy.engine.Engine - PRAGMA main.table_info("myfile")
sqlalchemy.engine.Engine - [raw sql] ()
sqlalchemy.engine.Engine - PRAGMA temp.table_info("myfile")
sqlalchemy.engine.Engine - [raw sql] ()
sqlalchemy.engine.Engine -
CREATE TABLE myfile (
path VARCHAR,
id INTEGER,
PRIMARY KEY (id)
)
sqlalchemy.engine.Engine - [no key 0.00012s] ()
sqlalchemy.engine.Engine - COMMIT
---------------------------------------------------------------------------- Captured log setup ----------------------------------------------------------------------------
INFO sqlalchemy.engine.Engine:base.py:953 BEGIN (implicit)
INFO sqlalchemy.engine.Engine:base.py:1772 PRAGMA main.table_info("myfile")
INFO sqlalchemy.engine.Engine:base.py:1777 [raw sql] ()
INFO sqlalchemy.engine.Engine:base.py:1772 PRAGMA temp.table_info("myfile")
INFO sqlalchemy.engine.Engine:base.py:1777 [raw sql] ()
INFO sqlalchemy.engine.Engine:base.py:1772
CREATE TABLE myfile (
path VARCHAR,
id INTEGER,
PRIMARY KEY (id)
)
INFO sqlalchemy.engine.Engine:base.py:1777 [no key 0.00012s] ()
INFO sqlalchemy.engine.Engine:base.py:1013 COMMIT
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
2022-08-10 10:56:17,194 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-08-10 10:56:17,195 INFO sqlalchemy.engine.Engine INSERT INTO myfile (path) VALUES (?)
2022-08-10 10:56:17,195 INFO sqlalchemy.engine.Engine [generated in 0.00029s] (PosixPath('/tmp/pytest-of-vscode/pytest-187/test_myfile0/test.txt'),)
2022-08-10 10:56:17,195 INFO sqlalchemy.engine.Engine ROLLBACK
--------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------
sqlalchemy.engine.Engine - BEGIN (implicit)
sqlalchemy.engine.Engine - INSERT INTO myfile (path) VALUES (?)
sqlalchemy.engine.Engine - [generated in 0.00029s] (PosixPath('/tmp/pytest-of-vscode/pytest-187/test_myfile0/test.txt'),)
sqlalchemy.engine.Engine - ROLLBACK
---------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
INFO sqlalchemy.engine.Engine:base.py:953 BEGIN (implicit)
INFO sqlalchemy.engine.Engine:base.py:1772 INSERT INTO myfile (path) VALUES (?)
INFO sqlalchemy.engine.Engine:base.py:1777 [generated in 0.00029s] (PosixPath('/tmp/pytest-of-vscode/pytest-187/test_myfile0/test.txt'),)
INFO sqlalchemy.engine.Engine:base.py:981 ROLLBACK
========================================================================= short test summary info ==========================================================================
FAILED tests/test_misc.py::test_myfile - sqlalchemy.exc.InterfaceError: (sqlite3.InterfaceError) Error binding parameter 0 - probably unsupported type.
``` | open | 2022-08-10T10:59:15Z | 2024-10-21T09:08:52Z | https://github.com/fastapi/sqlmodel/issues/397 | [
"question"
] | matutter | 3 |
bendichter/brokenaxes | matplotlib | 47 | xticks chaos after changing xscale | Hi, I just want to plot the specified xticks (250, 500, 1000). However, after I set_xscale("log"), the plot shows both the default xticks and mine. How do I deal with this? Code is as follows:
``` python
import os, matplotlib
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
from matplotlib.gridspec import GridSpec
import brokenaxes
matplotlib.rcParams.update({'font.size': 16})
num_labels_svhn = np.array([250, 500, 1000])
labeled_svhn = np.array([21.65, 13.86, 9.76])
TE_svhn = np.array([None, 5.12, 4.42])
MT_svhn = np.array([4.35, 4.18, 3.95])
MT_SNTG_svhn = np.array([4.29, 3.99, 3.86])
ours_svhn = np.array([4.18, 3.80, 3.15])
# fig, ax = plt.subplots(1, 2, figsize=(18, 6))
fig = plt.figure(figsize=(18, 6))
sps1, sps2 = GridSpec(1, 2)
bax = brokenaxes.BrokenAxes(xlims=((220,1100),),
ylims=((2.7, 5.5), (9, 23)),
d=0.005, tilt=45, hspace=0.015,
wspace=0.015, subplot_spec=sps1,
height_ratios=(2, 3))
bax.plot(num_labels_svhn, labeled_svhn, marker='o', ms=8, label='Labeled-Only', c='blue', ls='--')
bax.plot(num_labels_svhn, TE_svhn, marker='o', ms=8, label='Temp. Ensem.', c='orange', ls='--')
bax.plot(num_labels_svhn, MT_svhn, marker='o', ms=8, label='Mean Teacher', c='green', ls='--')
bax.plot(num_labels_svhn, MT_SNTG_svhn, marker='o', ms=8, label='MT+SNTG', c='red', ls='--')
bax.plot(num_labels_svhn, ours_svhn, marker='o', ms=8, label='Ours', ls='-', c='purple', linewidth=2)
bax.big_ax.set_xlabel('#Labels', fontsize=16)
bax.big_ax.xaxis.set_label_coords(0.50, -0.075)
bax.big_ax.set_ylabel('Error Rate of SVHN', fontsize=16)
bax.big_ax.yaxis.set_label_coords(-0.06, 0.52)
for ax in bax.axs:
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(16)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(16)
bax.grid(which='both', axis='both')
bax.big_ax.spines['right'].set_visible(True)
bax.big_ax.spines['top'].set_visible(True)
bax.first_col[0].set_yticks([10., 15., 20.])
bax.first_col[1].set_yticks([3., 4., 5.])
bax.last_row[0].set_xscale("log") # the order of these two lines does matter
bax.last_row[0].set_xticks([250, 500, 1000]) # the order of these two lines does matter
```
this is the figure drawn by this code

I want to set the xticks strictly to (250, 500, 1000) with `log` scale.
Could you please help me with this situation. Appreciate your help in advance.
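For what it's worth, in plain matplotlib the stray labels come from the log scale's default major/minor locators; pinning the ticks and silencing the defaults looks like the sketch below (standalone and hypothetical: it would still need to be applied to the relevant `bax.last_row[0]` axis):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import NullLocator, ScalarFormatter

fig, ax = plt.subplots()
ax.plot([250, 500, 1000], [21.65, 13.86, 9.76])
ax.set_xscale("log")
ax.set_xticks([250, 500, 1000])
ax.xaxis.set_major_formatter(ScalarFormatter())  # 250/500/1000, not 10^k
ax.xaxis.set_minor_locator(NullLocator())        # drop the default minor ticks

major = list(ax.get_xticks())
minor = list(ax.xaxis.get_minorticklocs())
```

With the minor locator nulled and a plain `ScalarFormatter`, only the three requested ticks remain labelled.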
| closed | 2020-03-26T03:00:50Z | 2020-03-26T23:55:42Z | https://github.com/bendichter/brokenaxes/issues/47 | [] | Harrypotterrrr | 3 |
zappa/Zappa | django | 615 | [Migrated] Feature: Ability to use existing API Gateway | Originally from: https://github.com/Miserlou/Zappa/issues/1576 by [bharling](https://github.com/bharling)
## Context
Hi, we're a small company in the process of gradually expanding our monolithic elasticbeanstalk application with smaller targeted microservices based on ApiStar / Zappa. We're getting good mileage so far and have deployed a small selection of background task-type services using SNS messaging as triggers, however we'd really like to start adding actual endpoints to our stack that call lambda functions. I've been trying for a couple of days to figure out how to properly setup an API gateway that will proxy to both our existing stack and newer services deployed with zappa. I've tried manually setting up a custom domain for the zappa apps and a cloudfront distro in front of both the existing stack and the new api gateway but am not getting very far. It would be a lot simpler IMO if we could specify an existing API gateway ARN in the zappa_settings.json and elect to use that so that it could integrate with an existing configuration.
## Expected Behavior
Would be great if I could specify "api_gateway_arn" in the zappa_settings.json file to use an existing api gateway
## Actual Behavior
There is no option to do this
## Possible Fix
I guess this is not straightforward, given the API gateway is created via a CloudFormation template; it would probably require moving some of that to boto functions instead?
## Steps to Reproduce
N/A
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.46.2
* Operating System and Python version: 3.6
* The output of `pip freeze`:
ably==1.0.1
apistar==0.5.40
argcomplete==1.9.3
attrs==18.1.0
awscli==1.15.19
backcall==0.1.0
base58==1.0.0
boto3==1.7.0
botocore==1.10.36
cairocffi==0.8.1
CairoSVG==2.1.3
certifi==2018.4.16
cffi==1.11.5
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
colorama==0.3.7
cookies==2.2.1
coverage==4.5.1
cssselect2==0.2.1
decorator==4.3.0
defusedxml==0.5.0
docutils==0.14
durationpy==0.5
et-xmlfile==1.0.1
Faker==0.8.15
Flask==0.12.2
future==0.16.0
hjson==3.0.1
html5lib==1.0.1
humanize==0.5.1
idna==2.7
ipython==6.4.0
ipython-genutils==0.2.0
itsdangerous==0.24
jdcal==1.4
jedi==0.12.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.0
more-itertools==4.1.0
msgpack-python==0.5.6
openpyxl==2.5.3
parso==0.2.1
pdfrw==0.4
pexpect==4.5.0
pickleshare==0.7.4
Pillow==5.1.0
placebo==0.8.1
pluggy==0.6.0
prompt-toolkit==1.0.15
psycopg2==2.7.4
ptyprocess==0.5.2
py==1.5.3
pyasn1==0.4.2
pycparser==2.18
Pygments==2.2.0
PyNaCl==1.2.1
Pyphen==0.9.4
pytest==3.5.1
pytest-cov==2.5.1
pytest-cover==3.0.0
pytest-coverage==0.0
pytest-faker==2.0.0
pytest-mock==1.10.0
-e git+https://github.com/crowdcomms/pytest-sqlalchemy.git@f9c666196d59edd289ecf57704ec7b51cff4440a#egg=pytest_sqlalchemy
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.19.0
responses==0.9.0
rsa==3.4.2
s3transfer==0.1.13
simplegeneric==0.8.1
six==1.11.0
SQLAlchemy==1.2.7
text-unidecode==1.2
tinycss2==0.6.1
toml==0.9.4
tqdm==4.19.1
traitlets==4.3.2
troposphere==2.3.0
Unidecode==1.0.22
urllib3==1.23
virtualenv==15.2.0
wcwidth==0.1.7
WeasyPrint==0.42.3
webencodings==0.5.1
Werkzeug==0.14.1
whitenoise==3.3.1
wsgi-request-logger==0.4.6
zappa==0.46.2
| closed | 2021-02-20T12:26:41Z | 2024-04-13T17:10:16Z | https://github.com/zappa/Zappa/issues/615 | [
"no-activity",
"auto-closed"
] | jneves | 3 |
opengeos/leafmap | jupyter | 707 | Doesn't work in Kaggle notebooks | ### Environment Information
- leafmap version: 0.31.5
- Python version: 3.10
- Operating System: Linux
It just prints "loading widget", but won't display a map. It does seem to work with the folium backend; maybe it should fall back to using folium when it can't display.

https://www.kaggle.com/code/opensourcerer9000/leafmap-hello-world | closed | 2024-03-20T18:04:53Z | 2024-03-20T18:24:24Z | https://github.com/opengeos/leafmap/issues/707 | [
"bug"
] | openSourcerer9000 | 3 |
voila-dashboards/voila | jupyter | 955 | Switch to `voila` as a Jupyter Server `ExtensionApp` | <!--
Welcome! Thanks for thinking of a way to improve Voilà. If this solves a problem for you, then it probably solves that problem for lots of people! So the whole community will benefit from this request.
Before creating a new feature request please search the issues for relevant feature requests.
-->
### Problem
With the `ExtensionApp` now part of the latest Jupyter Server stable releases since 1.0, we should switch `voila` to using it.
### Proposed Solution
There has already been a couple of PRs for this:
- https://github.com/voila-dashboards/voila/pull/270
- https://github.com/voila-dashboards/voila/pull/492
- https://github.com/voila-dashboards/voila/pull/563
- https://github.com/voila-dashboards/voila/pull/592
We could pick the more recent PR, rebase it, fix conflicts and continue from there.
Or start from scratch with a new PR (since there are a lot of conflicts), adding everyone who contributed before to the commits as co-authors.
### Additional Context
We can track that for the `0.3.0` since it will most likely introduce some breaking changes. | open | 2021-09-09T07:26:56Z | 2021-10-22T14:11:53Z | https://github.com/voila-dashboards/voila/issues/955 | [
"enhancement"
] | jtpio | 2 |
kizniche/Mycodo | automation | 869 | Flask - Upgrade Page Issue | Recently I've been getting an issue (using a couple different mycodo versions) when trying to load the upgrade page through the web interface. Not a huge deal since I can upgrade through the upgrade commands script, but thought it might be helpful:
```
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_restx/api.py", line 639, in error_router
return original_handler(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_login/utils.py", line 272, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_admin.py", line 423, in admin_upgrade
is_internet=False)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 2, in top-level template code
{% set active_page = "upgrade" %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 325, in top-level template code
{%- block body %}{% endblock -%}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/admin/upgrade.html", line 52, in block "body"
{% elif current_latest_major_version != current_release.split('.')[0] and
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 471, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'current_release' is undefined
``` | closed | 2020-10-19T17:11:53Z | 2020-10-27T22:31:49Z | https://github.com/kizniche/Mycodo/issues/869 | [
"bug"
] | not5 | 2 |
dagster-io/dagster | data-science | 28,423 | Add json asset kind | ### What's the use case?
The `json` asset kind is currently missing.
Probably other common formats such as `yaml` would be nice to have too.
### Ideas of implementation
_No response_
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | closed | 2025-03-12T10:48:45Z | 2025-03-19T16:36:23Z | https://github.com/dagster-io/dagster/issues/28423 | [
"type: feature-request",
"area: tags"
] | danielgafni | 1 |
ultralytics/yolov5 | deep-learning | 13,256 | Marking YOLOv5 Detection Text Outputs with TP or FP | ### Discussed in https://github.com/ultralytics/yolov5/discussions/13184
<div type='discussions-op-text'>
<sup>Originally posted by **kyrangraves** July 11, 2024</sup>
Hi All,
I have trained a YOLOv5 model and used val.py to test the performance of the trained model using independent labelled datasets. I have set the val.py script to write detections to text files. Is there a way to extract/denote for each detection made whether or not the detection is a true positive or false positive? This information is surely determined for each detection to calculate performance metrics. Is there a way to easily output this information in the text files?
I would ideally like to produce a text-output like this,
5 0.794286 0.599206 0.0373207 0.0625995 0.956099 TP
3 0.652693 0.639765 0.0693291 0.138997 0.847412 TP
3 0.954316 0.848078 0.0913681 0.159411 0.170322 FP
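As a note, the TP/FP flag is not something `val.py` writes out, but it can be recomputed offline from the prediction files and ground-truth labels with a standard greedy IoU match (highest confidence first). A minimal framework-free sketch; the helper names are made up:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def mark_detections(dets, gts, iou_thr=0.5):
    """dets: [(cls, box, conf)], gts: [(cls, box)].
    Returns a 'TP'/'FP' flag per detection, in the original order,
    matching greedily from the highest-confidence detection down."""
    used = set()
    flags = {}
    for i, (cls, box, conf) in sorted(enumerate(dets), key=lambda t: -t[1][2]):
        best, best_iou = None, iou_thr
        for j, (gcls, gbox) in enumerate(gts):
            if j in used or gcls != cls:
                continue
            overlap = iou(box, gbox)
            if overlap >= best_iou:
                best, best_iou = j, overlap
        if best is None:
            flags[i] = "FP"  # no unmatched ground truth of this class overlaps
        else:
            used.add(best)
            flags[i] = "TP"
    return [flags[i] for i in range(len(dets))]
```

Each prediction line could then be suffixed with the corresponding flag when writing the text files.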
I have looked through the discussion threads and elsewhere online and I seemingly cannot find any answers. This may be because it's obvious, or just not yet been discussed. Any information would be greatly appreciated!
Thank you,
Kyran </div> | closed | 2024-08-13T08:37:41Z | 2024-10-20T19:51:43Z | https://github.com/ultralytics/yolov5/issues/13256 | [
"question"
] | kyrangraves | 4 |
FactoryBoy/factory_boy | django | 206 | Factory attribute with reserved name (attributes). | Have a model with a field name `attributes` that I'm unable to create due to it being a reserved method on the factory:
```
class TestFactory(factory_boy.Factory):
attributes = 'something'
```
Results in:
```
>>> TestFactory.create()
*** TypeError: 'str' object is not callable
```
Is there a way to alias the attribute names to avoid this collision?
| closed | 2015-05-07T20:36:36Z | 2015-05-22T03:26:23Z | https://github.com/FactoryBoy/factory_boy/issues/206 | [
"Bug"
] | kevinastone | 2 |
hankcs/HanLP | nlp | 897 | Pinyin conversion problem with "苹果" | <!--
Notes and the version number are required; otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [Home documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have already searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer.
* I understand that the open-source community is a volunteer community formed out of shared interest and bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have put an x in the brackets to confirm the items above
## Version
<!-- For release builds, give the jar file name without the extension; for the GitHub repository version, state the master or portable branch -->
The current latest version is: portable-1.6.6
The version I am using is: portable-1.6.6
<!-- The items above are required; feel free below -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->
Converting the Chinese word `苹果` to pinyin, I expect the output `pingguo`, but the result is `pinguo`
## Reproducing the problem
<!-- What did you do to trigger the problem? E.g. modified the code? Modified a dictionary or model? -->
### Triggering code
```
System.out.println(HanLP.convertToPinyinString("苹果", " ", true));
```
### Expected output
<!-- What correct result do you expect? -->
```
Expected output
```
pingguo
### Actual output
<!-- What did HanLP actually output? What was the effect? Where is it wrong? -->
```
Actual output
```
pinguo
## Other information
<!-- Any potentially useful information, including screenshots, logs, configuration files, related issues, etc. -->
I have tried many versions and they all behave this way
| closed | 2018-07-23T03:22:27Z | 2018-07-29T15:03:07Z | https://github.com/hankcs/HanLP/issues/897 | [
"improvement"
] | yhan219 | 1 |
ivy-llc/ivy | pytorch | 28,114 | Fix Frontend Failing Test: paddle - pointwise_ops.torch.real | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-01-30T09:18:27Z | 2024-03-08T14:53:46Z | https://github.com/ivy-llc/ivy/issues/28114 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 894 | In use or deprecated? | Is the flask-sqlalchemy package still receiving support? | closed | 2020-12-18T13:44:06Z | 2021-01-02T00:47:47Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/894 | [] | jorgebg2016 | 1 |
microsoft/nni | data-science | 5,741 | Comparison exception: The values for attribute 'shape' do not match: torch.Size([]) != torch.Size([1, 8400]). | While using NNI to prune YOLOv8, I got the following error:
Comparison exception: The values for attribute 'shape' do not match: torch.Size([]) != torch.Size([1, 8400]). | open | 2024-02-01T12:37:04Z | 2024-02-01T12:37:04Z | https://github.com/microsoft/nni/issues/5741 | [] | Gooddz1 | 0 |
collerek/ormar | pydantic | 273 | PostgreSQL Array Columns | I really like the Postgres feature of having array type columns.
SQLAlchemy supports this via its PostgreSQL dialect; see:
https://docs.sqlalchemy.org/en/14/dialects/postgresql.html?highlight=array#sqlalchemy.dialects.postgresql.ARRAY
It would be nice to use them with ormar like:
`characteristics:list[str] = ormar.Array(element_type='String')`
or something like that.
Until that is possible I will use JSON(), but it seems like overkill.
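For reference, the raw SQLAlchemy Core column that such an `ormar.Array` field would presumably wrap looks like this (plain SQLAlchemy, not ormar syntax):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
items = sa.Table(
    "items",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    # a PostgreSQL text-array column, e.g. characteristics TEXT[]/VARCHAR[]
    sa.Column("characteristics", postgresql.ARRAY(sa.String)),
)

# What the column type renders to in PostgreSQL DDL:
ddl_type = items.c.characteristics.type.compile(dialect=postgresql.dialect())
```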
Considering the DRY Principle, this package is the way to go when using a Database for an API. Thank you for that | closed | 2021-07-17T17:52:31Z | 2022-02-25T11:28:57Z | https://github.com/collerek/ormar/issues/273 | [
"enhancement"
] | Lester1989 | 5 |
Neoteroi/BlackSheep | asyncio | 158 | OpenAPI Documentation: generic type schema improperly set as array | **Describe the bug**
OpenAPI Documentation: generic type schema improperly set as array.
**To Reproduce**
```python
T = TypeVar("T")
class PaginatedSet(Generic[T]):
items: List[T]
total: int
@app.route("/api/events-statuses")
def paginated_example() -> PaginatedSet[EventSystemStatusSummary]:
...
```
```yaml
/api/events-statuses:
get:
responses:
'200':
description: Success response
content:
application/json:
schema:
type: array
nullable: false
items:
$ref: '#/components/schemas/EventSystemStatusSummary'
operationId: paginated_example
```
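For reference, the type information needed to emit an object schema here, instead of collapsing to an array, is available through standard `typing` introspection; a minimal stdlib sketch, unrelated to BlackSheep internals:

```python
from typing import Generic, List, TypeVar, get_args, get_origin

T = TypeVar("T")


class PaginatedSet(Generic[T]):
    items: List[T]
    total: int


concrete = PaginatedSet[int]
origin = get_origin(concrete)      # the generic class, i.e. the object schema
(item_type,) = get_args(concrete)  # the binding for T, here int
```

Resolving `T` this way would let a documentation generator emit `items` as an array property inside an object schema rather than treating the whole response as an array.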
The following works:
```python
@dataclass
class EventStatuses:
items: List[EventSystemStatusSummary]
total: int
@app.route("/api/events-statuses")
def paginated_example() -> EventStatuses:
...
```
```yaml
/api/events-statuses:
get:
responses:
'200':
description: Success response
content:
application/json:
schema:
$ref: '#/components/schemas/EventStatuses'
operationId: paginated_example
# ...
components:
schemas:
EventSystemStatusSummary:
type: object
required:
- systemName
- status
- confirmationDate
properties:
systemName:
type: string
nullable: false
status:
type: string
nullable: false
confirmationDate:
type: string
format: date-time
nullable: false
EventStatuses:
type: object
required:
- items
- total
properties:
items:
type: array
nullable: false
items:
$ref: '#/components/schemas/EventSystemStatusSummary'
total:
type: integer
format: int64
nullable: false
``` | closed | 2021-05-31T23:08:46Z | 2021-06-09T07:18:24Z | https://github.com/Neoteroi/BlackSheep/issues/158 | [
"fixed in branch"
] | RobertoPrevato | 1 |
jupyterlab/jupyter-ai | jupyter | 392 | Chat history in notebook magics for Chat model providers | ## Problem
The current magics implementation uses an artificial list of chat message history to keep track of exchanges between 2 or more cells when the `openai-chat` model is used. Although this might appear to work as expected for OpenAI, it does not translate or scale to other chat model providers, for example ChatAnthropic.
## Solution
Create a conversation chain and memory to generate messages for providers that support `BaseChatModel`. Using a memory class in an LLM chain automatically keeps track of message history and avoids manual tracking of exchanges. This also helps future-proof the magics implementation for new chat providers introduced in the future.
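To make the automatic-history point concrete without tying it to a specific LangChain class, a minimal pure-Python buffer could look like this (all names hypothetical):

```python
class ChatMemory:
    """Minimal message-history buffer: each exchange is appended
    automatically, so callers never track history by hand."""

    def __init__(self, max_turns=10):
        self.max_turns = max_turns
        self.messages = []  # list of (role, content) pairs

    def add(self, role, content):
        self.messages.append((role, content))
        # keep only the most recent exchanges (2 messages per turn)
        self.messages = self.messages[-2 * self.max_turns:]

    def as_prompt(self):
        return "\n".join(f"{role}: {content}" for role, content in self.messages)


memory = ChatMemory(max_turns=2)
memory.add("user", "What is 2+2?")
memory.add("assistant", "4")
memory.add("user", "Double it")
prompt = memory.as_prompt()
```

A chain that consults such a memory before each call gets multi-cell context for free, regardless of which provider generates the reply.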
| open | 2023-09-22T03:31:07Z | 2024-09-16T23:34:55Z | https://github.com/jupyterlab/jupyter-ai/issues/392 | [
"bug",
"priority"
] | 3coins | 1 |
postmanlabs/httpbin | api | 587 | How to return custom response data | As the title says: how do I return custom response data? | open | 2019-11-21T06:46:55Z | 2019-11-21T06:46:55Z | https://github.com/postmanlabs/httpbin/issues/587 | [] | wd603546401 | 0 |
microsoft/unilm | nlp | 987 | Language Pre-Training of VLMO | This is because I used a wrong pre-training task. Sorry for the bother. | closed | 2023-01-29T13:15:01Z | 2023-01-29T13:27:05Z | https://github.com/microsoft/unilm/issues/987 | [] | Adam-lxd | 0 |
huggingface/transformers | python | 35,995 | Unable to load openai/whisper-tiny | ### System Info
Python 3.11.8
huggingface-hub==0.28.1
transformers==4.48.2
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Merely run:
`WhisperProcessor.from_pretrained("openai/whisper-tiny")`
Error is:
```
File "/home/probst/Projects/whisper-iara/finetune.py", line 101, in __init__
self.processor = WhisperProcessor.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/whisper-iara/.venv/lib/python3.11/site-packages/transformers/processing_utils.py", line 974, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/whisper-iara/.venv/lib/python3.11/site-packages/transformers/processing_utils.py", line 1020, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/whisper-iara/.venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1925, in from_pretrained
raise ValueError(
ValueError: Calling WhisperTokenizer.from_pretrained() with the path to a single file or url is not supported for this tokenizer. Use a model identifier or the path to a directory instead.
```
The code above works for whisper-tiny.en, whisper-base, whisper-small and so on, but not for whisper-tiny.
The workaround for now is to manually git clone the repository and specify the local path to it.
### Expected behavior
I should be able to load the model without any errors. | closed | 2025-01-31T16:53:28Z | 2025-02-06T11:32:38Z | https://github.com/huggingface/transformers/issues/35995 | [
"bug"
] | pprobst | 3 |
samuelcolvin/watchfiles | asyncio | 227 | Track file modifications | Hi,
This might be redundant with #201, sorry in advance if that’s the case.
I would like to be able to run the callback upon file modifications (i.e. the contents have changed). Currently, even `touch`-ing a file triggers the shell command. | closed | 2023-04-21T09:34:05Z | 2023-10-19T07:09:25Z | https://github.com/samuelcolvin/watchfiles/issues/227 | [] | frank-lenormand | 3 |
noirbizarre/flask-restplus | flask | 469 | How to create subnamespace for swagger documentation? | The problem is that within a single namespace there might be many api endpoints making the list too long to read. Move part of them into another namespace is OK. But in swagger documentation it becomes a peer rather then a nested folded subnamespace. I want to preserve the hierarchical order for example:
in original swagger documentation:
person:
-------[get]person/pid
-------[get]person/device
-------[post]person/device
--------...... many other endpoints
I wish I could make it clearer and preserve the hierarchical structure like this:
person:
-------[get]person/pid
-------device:
-------[post]person/device
-------[get]person/device
--------...... many other endpoints
| open | 2018-06-07T15:27:54Z | 2018-06-10T15:32:21Z | https://github.com/noirbizarre/flask-restplus/issues/469 | [] | LLCcom | 3 |
thtrieu/darkflow | tensorflow | 560 | Error in cfg_yielder while trying to import yolo trained on coco | I downloaded the yolov2 608x608 .cfg and .weights files trained on the coco dataset from [here](https://pjreddie.com/darknet/yolo/)
and I'm running this code
from darkflow.net.build import TFNet
    options = {"model": "cfg/yolov2_608.cfg", "load": "bin/yolov2_608.weights", "threshold": 0.1, "gpu": 1.0}
tfnet = TFNet(options)
I get this error
Parsing ./cfg/yolov2_608.cfg
Traceback (most recent call last):
File "<ipython-input-26-06cf761ac459>", line 3, in <module>
tfnet = TFNet(options)
File "C:\Users\CDS Lab\Downloads\darkflow-master\darkflow\net\build.py", line 58, in __init__
darknet = Darknet(FLAGS)
File "C:\Users\CDS Lab\Downloads\darkflow-master\darkflow\dark\darknet.py", line 17, in __init__
src_parsed = self.parse_cfg(self.src_cfg, FLAGS)
File "C:\Users\CDS Lab\Downloads\darkflow-master\darkflow\dark\darknet.py", line 68, in parse_cfg
for i, info in enumerate(cfg_layers):
File "C:\Users\CDS Lab\Downloads\darkflow-master\darkflow\utils\process.py", line 316, in cfg_yielder
exit('Layer {} not implemented'.format(d['type']))
NameError: name 'exit' is not defined
I have no idea what is causing this. I have successfully loaded and used the .cfg and .weights files trained on the voc dataset and they have worked fine. | open | 2018-02-05T23:32:02Z | 2019-07-07T11:39:51Z | https://github.com/thtrieu/darkflow/issues/560 | [] | gauravspatil | 6 |
PaddlePaddle/models | computer-vision | 4,968 | When using the ETS model, how do I prepare a dataset from my own videos? | Hello,
I'd like to ask: when running the ETS model, how can I turn my own video data into a dataset that the ETS model can use? Thanks! | closed | 2020-11-23T09:40:42Z | 2020-11-26T02:51:26Z | https://github.com/PaddlePaddle/models/issues/4968 | [] | mc261670164 | 3 |
FlareSolverr/FlareSolverr | api | 368 | docker is not behaving similar in different machines | ### Environment
* **FlareSolverr version**: v2.2.1
* **Last working FlareSolverr version**: v2.2.1
* **Operating system**: Linux
* **Are you using Docker**: yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0
* **Are you using a proxy or VPN?** [yes/no] no
* **Are you using Captcha Solver:** [yes/no] no
* **If using captcha solver, which one:**
* **URL to test this issue:** please see below for detailed instructions
### Description
Hi,
First of all thanks to developer(s) of this project, this is the only solution which can pass the cloudflare detection for the site I'm scraping from.
I'm using the Docker version of FlareSolverr and it works great on my laptop: I just generate the cf_clearance cookie and then use it in a script. However, when I tried to migrate the system to a server, I encountered problems.
I used FlareSolverr on a Raspberry Pi at home and on a server in a remote location, both in Docker environments, and they both fail (or only rarely succeed) to generate cookies. Since I was using Docker, I expected them to behave the same, but they didn't. So I'm guessing something is leaking into the Docker environment from the host OS, and the Cloudflare protection can detect it (i.e. whether it's a laptop or a headless server).
I wasn't sure where to start for troubleshooting so I'm asking for help.
Here's the first script to generate cookie:
```bash
#!/usr/bin/bash
docker run -d --rm --name=flaresolverr -p 8191:8191 -e LOG_LEVEL=info ghcr.io/flaresolverr/flaresolverr:latest
sleep 5
until curl -s -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{"cmd": "sessions.create"}'; do echo Trying again..; sleep 5; done
SESSION=$(curl -s -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{"cmd": "sessions.list"}' | jq '.sessions[0]' | sed -e 's/"//g')
sleep 2
# this is just to initiate traffic, does not collect cookie, maybe it's unnecessary
curl -s -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{"cmd":"request.get","maxTimeout":60000,"session":"'$SESSION'","url":"'$URL'"}'
curl -s -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{"cmd":"request.get","maxTimeout":60000,"session":"'$SESSION'","url":"'$URL'","returnOnlyCookies": true}' |\
jq '[.solution.cookies[] | ([.name,.value|@uri]|join("="))]|join(";")' | \
sed -e's/"//g' >| active_cookie
```
The cookie should at least have the `cf_clearance`, `PHPSESSID` and `XSRF-TOKEN-V2` items. By the way, the response occasionally says "Cloudflare Error: Cloudflare has blocked this request", but the cookie still works.
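For reference, a stand-alone sketch (not part of the scripts above; the helper name is made up) for checking whether a saved cookie string contains those three items:

```python
# Hypothetical helper (illustration only): verify that the saved cookie
# string contains the items the target site needs.
REQUIRED = {"cf_clearance", "PHPSESSID", "XSRF-TOKEN-V2"}

def missing_cookies(cookie_string: str) -> set:
    # The cookie file format is "name=value;name=value;..."
    present = {pair.split("=", 1)[0] for pair in cookie_string.split(";") if "=" in pair}
    return REQUIRED - present

print(missing_cookies("cf_clearance=abc;PHPSESSID=def"))  # {'XSRF-TOKEN-V2'}
```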
After that, the cookie can be used with curl. Below are a couple of examples: querying a search page, visiting an ad page, and finally getting details about an ad. The curl header information might be crowded with unnecessary items; I just collected it from Chrome Dev Tools' "Copy as cURL".
```bash
COOKIE="$(cat active_cookie)"
# search
curl $URL -X POST -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Origin: '"$HOST" -H 'Connection: keep-alive' -H 'Referer: '"$URL" -H 'Cookie: '"$COOKIE"'' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Site: same-origin' -H 'DNT: 1' -H 'Sec-GPC: 1' --data-raw 'sort_by=0&min_price=36&max_price=500&seller_online%5B%5D=online&hide_botted=1&page=1' --compressed
# individual ad
curl $AD_URL -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate, br' -H 'Connection: keep-alive' -H 'Cookie: '"$COOKIE"'' -H 'Upgrade-Insecure-Requests: 1' -H 'Sec-Fetch-Dest: document' -H 'Sec-Fetch-Mode: navigate' -H 'Sec-Fetch-Site: none' -H 'Sec-Fetch-User: ?1' -H 'DNT: 1' -H 'Sec-GPC: 1' -H 'TE: trailers' --compressed
## individual ad details
curl $AD_URL -X POST -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'X-Requested-With: XMLHttpRequest' -H 'Origin: '"$HOST" -H 'Connection: keep-alive' -H 'Referer: '"$AD_URL" -H 'Cookie: '"$COOKIE"'' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Site: same-origin' -H 'DNT: 1' -H 'Sec-GPC: 1' --data-raw 'get_details=1' --compressed
```
If the curl command works, you should see the webpage content. If the cookie fails, you see a page saying "Having trouble?".
URL, AD_URL and HOST information are at [pastebin site](https://pastebin.com/raw/4MgS795n). Sorry for hiding the URL/HOST info; I didn't want the target site to find this post and develop a defence against FlareSolverr.
Any advice on how to troubleshoot this problem would be greatly appreciated.
### Logged Error Messages
```
2022-04-19T15:23:18+00:00 INFO REQ-3 Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.
```
### Screenshots
no screenshots are available
| closed | 2022-04-19T15:32:20Z | 2022-04-24T18:33:50Z | https://github.com/FlareSolverr/FlareSolverr/issues/368 | [
"more information needed"
] | alperyilmaz | 5 |
davidsandberg/facenet | computer-vision | 748 | Label list for the trained model so it can be used on Android/IOS | Hi, I'm trying to run the frozen casia graph (.pb format) in the tensorflow detector so that it can be used to detect faces on Android/IOS devices. These are the parameters needed -
` private static Classifier detector = TensorFlowObjectDetectionAPIModel.create(
TF_OD_API_MODEL_FILE, TF_OD_API_LABELS_FILE, TF_OD_API_INPUT_SIZE);
`
**TF_OD_API_MODEL_FILE** is the frozen .pb graph that you provided, which is fine.
**TF_OD_API_INPUT_SIZE** is the size of the faces trained in your model (for example for the Android SSD example it is 300). **Can you please tell the size of the faces trained in your model?**
**TF_OD_API_LABELS_FILE** is a list of labels that the graph was trained on. I tried using 'face' but it says 'Failed to find input node 'image_tensor'' so you used a different label when training. Can you please tell the label used to train the faces?
Below is an example of the labels file if I trained it for these categories:

```
dummy
face
eye
door
```
| open | 2018-05-15T16:33:23Z | 2018-05-15T16:34:43Z | https://github.com/davidsandberg/facenet/issues/748 | [] | Zod20 | 0 |
ansible/ansible | python | 84,465 | deb822_repository: cannot configure multiple repositories in one .sources file | ### Summary
I am using this task to configure a .sources file for gitlab repositories (converting from apt_repository)
```yaml
- ansible.builtin.set_fact:
apt_get_repo_name: "gitlab"
- name: "apt: adding Gitlab repository (deb822)"
ansible.builtin.deb822_repository:
name: "{{ apt_get_repo_name }}"
enabled: true
uris:
- "{{ item }}"
components:
- "main"
suites:
- "{{ ansible_distribution_release }}"
types:
- "deb"
- "deb-src"
signed_by: "{{ lookup('ansible.builtin.file', 'keys/{{ apt_get_repo_name }}.gpg') }}"
allow_insecure: false
allow_weak: false
check_date: true
check_valid_until: true
loop:
- "https://packages.gitlab.com/gitlab/gitlab-ce/{{ ansible_distribution | lower }}"
- "https://packages.gitlab.com/gitlab/gitlab-ee/{{ ansible_distribution | lower }}"
- "https://packages.gitlab.com/runner/gitlab-runner/{{ ansible_distribution | lower }}"
when:
- "ansible_distribution_version is version('24.04', '>=')"
notify:
- apt-get-update
```
During execution the gitlab.sources file is written three times, once for each item. At the end of the loop the file contains only an entry for the final gitlab-runner URL. The expected result is that all three URLs are configured in separate blocks in the same file.
```
$ apt-cache policy ansible-core
ansible-core:
Installed: 2.17.7-1ppa~jammy
Candidate: 2.17.7-1ppa~jammy
```
### Issue Type
Bug Report
### Component Name
ansible.builtin.deb822_repository
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.7]
config file = /home/username/.ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/username/.ansible.cfg
DEFAULT_JINJA2_EXTENSIONS(/home/username/.ansible.cfg) = jinja2.ext.do
EDITOR(env: EDITOR) = /bin/nano
```
### OS / Environment
```
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.5 LTS
Release: 22.04
Codename: jammy
```
### Steps to Reproduce
Example task included in summary.
### Expected Results
Three configuration blocks are written in to the .sources file. (I had just done an upgrade on the target machine from jammy->noble which rewrote the equivalent .list to a .sources with multiple entries so this appears to be supported.)
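As a possible interim workaround (untested sketch; this assumes the module accepts several entries in `uris`, which the deb822 format itself allows as multiple values in one `URIs:` field), all three repositories can be passed to a single task so the file is only written once. Note this would produce one block with three URIs rather than three separate blocks:

```yaml
- name: "apt: adding Gitlab repositories (deb822, single write)"
  ansible.builtin.deb822_repository:
    name: gitlab
    enabled: true
    uris:
      - "https://packages.gitlab.com/gitlab/gitlab-ce/{{ ansible_distribution | lower }}"
      - "https://packages.gitlab.com/gitlab/gitlab-ee/{{ ansible_distribution | lower }}"
      - "https://packages.gitlab.com/runner/gitlab-runner/{{ ansible_distribution | lower }}"
    components: ["main"]
    suites: ["{{ ansible_distribution_release }}"]
    types: ["deb", "deb-src"]
    signed_by: "{{ lookup('ansible.builtin.file', 'keys/gitlab.gpg') }}"
```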
### Actual Results
```console
The .sources file only contained configuration for the final item in the loop.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-12-11T15:49:51Z | 2025-01-07T15:10:06Z | https://github.com/ansible/ansible/issues/84465 | [
"module",
"bug",
"affects_2.17"
] | JKDingwall | 3 |
litestar-org/litestar | pydantic | 3,766 | Bug: schema_extra does not recognize upstream JSONSchema property/key names | ### Description
With code generated for other (non-litestar) JSONSchema consumers, using upstream names (such as `uniqueItems` instead of `unique_items`) inside of `schema_extra` results in a ValueError.
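The failure reduces to a naming-convention mismatch: litestar's internal `Schema` object uses snake_case attribute names (`unique_items`), while JSON Schema keywords are camelCase (`uniqueItems`). A tiny stand-alone converter (illustrative only, not litestar code) shows the mapping the plugin would need to apply when validating `schema_extra` keys:

```python
import re

def camel_to_snake(name: str) -> str:
    # Insert an underscore before each interior capital, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_snake("uniqueItems"))  # unique_items
```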
### URL to code causing the issue
_No response_
### MCVE
```python
from typing import Annotated
import pydantic
from litestar import Litestar, get
# Presume that this type's JSONSchema is used by consumers other than litestar, and needs to be spec-compliant
class RandomType(pydantic.BaseModel):
# uniqueItems is per spec -- see https://json-schema.org/understanding-json-schema/reference/array
unique_items: Annotated[list[str], pydantic.Field(json_schema_extra={"uniqueItems": True})]
@get("/")
async def hello_world() -> RandomType:
"""Route Handler that outputs hello world."""
return RandomType(unique_items=["hi", "bye"])
app = Litestar(route_handlers=[hello_world])
```
### Steps to reproduce
```bash
1. Run `litestar --app app:app run --debug`
2. Load `http://localhost:8000/schema`
3. Watch the resulting stack trace
```
### Screenshots
_No response_
### Logs
```bash
ERROR - 2024-09-29 21:52:29,271 - litestar - config - Uncaught exception (connection_type=http, path=/schema):
Traceback (most recent call last):
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_asgi/asgi_router.py", line 100, in __call__
await asgi_app(scope, receive, send)
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/routes/http.py", line 80, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/routes/http.py", line 132, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/routes/http.py", line 152, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/routes/http.py", line 195, in _get_response_data
data = route_handler.fn(**parsed_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/plugin.py", line 161, in _handler
return plugin_.render(request, self.provide_openapi_schema())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/plugin.py", line 99, in provide_openapi_schema
self._openapi_schema = self.provide_openapi().to_schema()
^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/plugin.py", line 94, in provide_openapi
self._openapi = self._build_openapi()
^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/plugin.py", line 83, in _build_openapi
path_item = create_path_item_for_route(context, route)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/path_item.py", line 139, in create_path_item_for_route
return path_item_factory.create_path_item()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/path_item.py", line 44, in create_path_item
operation = self.create_operation_for_handler_method(route_handler, HttpMethod(http_method))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/path_item.py", line 73, in create_operation_for_handler_method
responses = create_responses_for_handler(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/responses.py", line 340, in create_responses_for_handler
return ResponseFactory(context, route_handler).create_responses(raises_validation_error=raises_validation_error)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/responses.py", line 91, in create_responses
str(self.route_handler.status_code): self.create_success_response(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/responses.py", line 150, in create_success_response
result = self.schema_creator.for_field_definition(field_def)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/schema_generation/schema.py", line 333, in for_field_definition
result = self.for_plugin(field_definition, plugin_for_annotation)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/schema_generation/schema.py", line 515, in for_plugin
schema = plugin.to_openapi_schema(field_definition=field_definition, schema_creator=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/contrib/pydantic/pydantic_schema_plugin.py", line 235, in to_openapi_schema
return self.for_pydantic_model(field_definition=field_definition, schema_creator=schema_creator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/contrib/pydantic/pydantic_schema_plugin.py", line 252, in for_pydantic_model
return schema_creator.create_component_schema(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/schema_generation/schema.py", line 645, in create_component_schema
schema.properties = {k: self.for_field_definition(v) for k, v in property_fields.items()}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/schema_generation/schema.py", line 361, in for_field_definition
return self.process_schema_result(field_definition, result) if isinstance(result, Schema) else result
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/schema_generation/schema.py", line 596, in process_schema_result
raise ValueError(
ValueError: `schema_extra` declares key `uniqueItems` which does not exist in `Schema` object
> /nix/store/abpcb2xnkhh64yh30nkc6nv19nv720y4-python3-3.12.4-env/lib/python3.12/site-packages/litestar/_openapi/schema_generation/schema.py(596)process_schema_result()
```
### Litestar Version
2.12.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-09-30T02:54:57Z | 2025-03-20T15:54:57Z | https://github.com/litestar-org/litestar/issues/3766 | [
"Bug :bug:"
] | charles-dyfis-net | 1 |
wandb/wandb | data-science | 9,349 | [Bug]: Cannot run wandb with ray on a drive that is not C: on Windows 11 | ### Describe the bug
I am running into an issue previously described in [wandb#1991](https://github.com/wandb/wandb/issues/1991), which I do not have permission to re-open. This prevents me from using wandb 😞
### Setup
**Windows 11 (OS Build = 22631.4751)**
**Virtual Environment created using [uv package manager](https://docs.astral.sh/uv/)**
**Python = 3.10**
**CUDA 12.4**
**Packages:**
"torch==2.5.1",
"lightning==2.5.0",
"wandb==0.19.4",
"ray[data,train,tune]==2.41"
### Reproducible example
```python
import pytorch_lightning as pl
import torch
import os
from torch import nn
from torch.utils.data import DataLoader, random_split, TensorDataset
from pytorch_lightning.loggers import WandbLogger, CSVLogger
from ray import train, tune
USE_WANDB = True
#Turn off ray logging
os.environ["TUNE_DISABLE_AUTO_CALLBACK_LOGGERS"] = "1"
def create_dataset():
x = torch.randn(100, 1)
y = 3 * x + torch.randn(100, 1)
dataset = TensorDataset(x, y)
train_dataset, val_dataset = random_split(dataset, [80, 20])
train_loader = DataLoader(train_dataset, batch_size=16)
val_loader = DataLoader(val_dataset, batch_size=16)
return train_loader, val_loader
class SimpleModel(pl.LightningModule):
def __init__(self):
super(SimpleModel, self).__init__()
self.layer = nn.Linear(1, 1)
def forward(self, x):
return self.layer(x)
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = nn.functional.mse_loss(y_hat, y)
self.log('train_loss', loss)
return loss
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(), lr=0.01)
def train_model(cfg):
model = SimpleModel()
if USE_WANDB:
logger = WandbLogger(project='misc')
else:
logger = CSVLogger('misc')
trainer = pl.Trainer(max_epochs=cfg['n_epochs'], logger=logger)
train_loader, val_loader = create_dataset()
trainer.fit(model, train_loader, val_loader)
resources_per_trial = {"cpu": 1, "gpu": 1}
search_space = {'n_epochs': tune.choice([1, 2])}
tuner = tune.Tuner(
tune.with_resources(train_model, resources=resources_per_trial),
tune_config=tune.TuneConfig(num_samples=3),
param_space=search_space,
)
tuner.fit()
```
### Traceback
```
Trial train_model_cd6d2_00002 errored after 0 iterations at 2025-01-28 10:15:24. Total running time: 17s
Error file: C:/Users/hseely/AppData/Local/Temp/ray/session_2025-01-28_10-14-57_757185_38132/artifacts/2025-01-28_10-15-06/train_model_2025-01-28_10-14-54/driver_artifacts/train_model_cd6d2_00002_2_n_epochs=2_2025-01-28_10-15-06/error.txt
2025-01-28 10:15:24,453 ERROR tune_controller.py:1331 -- Trial task failed for trial train_model_cd6d2_00000
Traceback (most recent call last):
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\air\execution\_internal\event_manager.py", line 110, in resolve_future
result = ray.get(future)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\_private\auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\_private\worker.py", line 2772, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\_private\worker.py", line 919, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::ImplicitFunc.train() (pid=42668, ip=127.0.0.1, actor_id=9b402c5556939abc2982653201000000, repr=train_model)
File "python\ray\_raylet.pyx", line 1883, in ray._raylet.execute_task
File "python\ray\_raylet.pyx", line 1824, in ray._raylet.execute_task.function_executor
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\_private\function_manager.py", line 696, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 463, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\tune\trainable\trainable.py", line 331, in train
raise skipped from exception_cause(skipped)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\air\_internal\util.py", line 107, in run
self._ret = self._target(*self._args, **self._kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\tune\trainable\function_trainable.py", line 44, in <lambda>
training_func=lambda: self._trainable_func(self.config),
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\util\tracing\tracing_helper.py", line 463, in _resume_span
return method(self, *_args, **_kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\ray\tune\trainable\function_trainable.py", line 249, in _trainable_func
output = fn()
File "D:\Sync\RQ3\Analysis\wandb_bug_reprex.py", line 52, in train_model
trainer.fit(model, train_loader, val_loader)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 539, in fit
call._call_and_handle_interrupt(
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\pytorch_lightning\trainer\call.py", line 47, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 575, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 944, in _run
call._call_setup_hook(self) # allow user to set up LightningModule in accelerator environment
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\pytorch_lightning\trainer\call.py", line 96, in _call_setup_hook
if hasattr(logger, "experiment"):
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\lightning_fabric\loggers\logger.py", line 118, in experiment
return fn(self)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\pytorch_lightning\loggers\wandb.py", line 407, in experiment
self._experiment = wandb.init(**self._wandb_init)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_init.py", line 1458, in init
wandb._sentry.reraise(e)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\analytics\sentry.py", line 156, in reraise
raise exc.with_traceback(sys.exc_info()[2])
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_init.py", line 1402, in init
wl = wandb.setup()
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_setup.py", line 383, in setup
return _setup(settings=settings)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_setup.py", line 323, in _setup
_singleton = _WandbSetup(settings=settings, pid=pid)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_setup.py", line 100, in __init__
self._settings = self._settings_setup(settings)
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_setup.py", line 134, in _settings_setup
s.update_from_system_environment()
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_settings.py", line 1077, in update_from_system_environment
self.program_relpath = self.program_relpath or self._get_program_relpath(
File "D:\Sync\RQ3\analysis\.venv\lib\site-packages\wandb\sdk\wandb_settings.py", line 1209, in _get_program_relpath
relative_path = os.path.relpath(full_path_to_program, start=root)
File "C:\Users\hseely\AppData\Roaming\uv\python\cpython-3.10.4-windows-x86_64-none\lib\ntpath.py", line 718, in relpath
raise ValueError("path is on mount %r, start on mount %r" % (
ValueError: path is on mount 'D:', start on mount 'C:'
```
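The root failure is reproducible in isolation: `ntpath` implements Windows path semantics on any platform, so the cross-drive `relpath` error at the bottom of the traceback can be triggered without wandb or ray installed:

```python
import ntpath

try:
    # Mirrors what wandb's _get_program_relpath ends up doing when the
    # script lives on D: but the start directory is on C:.
    ntpath.relpath(r"D:\Sync\RQ3\Analysis\wandb_bug_reprex.py", start=r"C:\Users\hseely")
except ValueError as exc:
    print(exc)  # path is on mount 'D:', start on mount 'C:'
```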
| open | 2025-01-28T18:32:06Z | 2025-03-13T15:13:32Z | https://github.com/wandb/wandb/issues/9349 | [
"ty:bug",
"a:sdk",
"c:sdk:settings"
] | harryseely | 8 |
lepture/authlib | flask | 116 | documentation describes version that's not released yet | [Current documentation](https://docs.authlib.org/en/latest/index.html) points to the latest code, which hasn't been released yet. The latest version released is 0.10, which can also be found on [Pypi](https://pypi.org/project/Authlib/). I feel like pointing the correct documentation to the corresponding release would definitely be less confusing. | closed | 2019-03-13T13:20:51Z | 2019-03-18T13:29:43Z | https://github.com/lepture/authlib/issues/116 | [] | alantw | 2 |
huggingface/pytorch-image-models | pytorch | 2,120 | [BUG] Changing patch size does not work with CLIP/OpenCLIP ViT | **Describe the bug**
```python
import timm
m = timm.create_model("vit_tiny_patch16_224.augreg_in21k", pretrained=True, patch_size=8) # this works - Google checkpoint
m = timm.create_model("vit_base_patch16_224.augreg2_in21k_ft_in1k", pretrained=True, patch_size=8) # also works - timm checkpoint
m = timm.create_model("vit_base_patch16_clip_224.datacompxl", pretrained=True, patch_size=8) # this doesn't - OpenCLIP checkpoint
```
The problem seems to lie in these lines
https://github.com/huggingface/pytorch-image-models/blob/492947d1293fe7b5151bb2b9d9add3df37fabf63/timm/models/vision_transformer.py#L1016-L1019
which returns the state dict early, instead of continuing with the logic below. DINOv2 state dict is not returned early.
Modifying the code as shown below seems to work correctly (note that the check for the DINOv2 state dict is now an `elif`):
```python
if 'visual.class_embedding' in state_dict:
state_dict = _convert_openai_clip(state_dict, model)
elif 'module.visual.class_embedding' in state_dict:
state_dict = _convert_openai_clip(state_dict, model, prefix='module.visual.')
elif "mask_token" in state_dict:
state_dict = _convert_dinov2(state_dict, model)
```
**Expected behavior**
Any ViT checkpoint should work with patch embed resampling.
**Desktop (please complete the following information):**
- OS: Ubuntu 22.04
- This repository version: 0.9.16
- PyTorch version w/ CUDA/cuDNN: 2.2.1 | closed | 2024-03-21T01:09:51Z | 2024-03-21T20:13:55Z | https://github.com/huggingface/pytorch-image-models/issues/2120 | [
"bug"
] | gau-nernst | 1 |
plotly/dash | data-visualization | 2,475 | Allow modification of position/direction and style of dash_table tooltips | **Context**
- The tooltip is always positioned under its corresponding cell, except in the last rows where it's positioned on top. This automatic behaviour cannot be modified.
- Right now it's only possible to modify the _general_ style of _all_ of the tooltips with the `css` argument of `dash_table.DataTable`
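For reference, the table-wide styling that is currently possible looks roughly like this (sketch only; the `.dash-table-tooltip` selector is an assumption based on the class the rendered tooltip container carries):

```python
# Table-wide tooltip styling via the `css` argument; this applies to
# every tooltip at once, which is exactly the limitation described above.
tooltip_css = [{
    "selector": ".dash-table-tooltip",
    "rule": "background-color: red; max-width: 100px;",
}]

# dash_table.DataTable(..., css=tooltip_css)
print(tooltip_css[0]["selector"])  # .dash-table-tooltip
```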
**Describe the solution you'd like**
Add an argument similar to `tooltip_position` and a `style` key in `tooltip_conditional`, which could be used like:
```
dash_table.DataTable(
...,
tooltip_position='top',
    tooltip_conditional=[
        {
            'if': {
                'filter_query': '{Region} contains "New"'
            },
            'type': 'markdown',
            'value': 'This row is significant.',
            'style': {'background-color': 'red', 'max-width': '100px'}
        }
    ]
)
```
**Describe alternatives you've considered**
The tooltip is not a container per cell, but a general container that covers the whole table; I guess it somehow gets the mouse position and calculates the appropriate position for the visible hover div (the position is specified in CSS with `position: absolute` and then `top: XXXpx; left: XXXpx`).
I have explored solutions with the different tooltip properties of dash_table (https://dash.plotly.com/datatable/tooltips#conditional-tooltips) but there are no keys in the tooltip dict to specify the position/direction.
I've explored a workaround by including the table as the children of a [dmc.Tooltip](https://www.dash-mantine-components.com/components/tooltip) and modifying its position based on the hover info of the table, but it didn't work. I will open a feature request so that the Product Team takes this into account for future developments.
| open | 2023-03-22T12:02:07Z | 2024-08-13T19:29:34Z | https://github.com/plotly/dash/issues/2475 | [
"feature",
"dash-data-table",
"P3"
] | celia-lm | 0 |
voxel51/fiftyone | data-science | 5,107 | How to annotate in CVAT both Classifications and Detections simultaneously? | Hello there,
I'm working on a dataset that consists both of bounding boxes (as fo.Detections) and labels (as fo.Classifications).
I can use fiftyone to upload those to CVAT, which will automatically create a CVAT project, task and job with the desired images, but I can only upload one field at a time:
```python
curr_view.annotate(
    'uploading_bb',
    label_field='ground_truth',
    project_name=project_name,
    organization=organization,
    launch_editor=False,
    url=url,
)

curr_view.annotate(
    'upload_tags',
    label_field='TAGs',
    label_type='classifications',
    classes=[x for x in classes_datalake if 'TAG' in x],
    project_name=project_name,
    organization=organization,
    launch_editor=False,
    url=url,
)
```
Which results in two different tasks being created, one just with bounding boxes and another just with CVAT TAGs.
How can I create a job that simultaneously has both bounding boxes and TAGs?
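A possible direction (untested sketch; it assumes `annotate()` accepts a `label_schema` dict describing several fields at once, and uses placeholder class values): build one schema covering both fields and make a single `annotate()` call instead of two:

```python
classes_datalake = ["TAG_color", "TAG_shape", "other_label"]  # placeholder values

# One schema describing both label fields for a single CVAT task.
label_schema = {
    "ground_truth": {"type": "detections"},
    "TAGs": {
        "type": "classifications",
        "classes": [x for x in classes_datalake if "TAG" in x],
    },
}

# curr_view.annotate(
#     "upload_bb_and_tags",
#     label_schema=label_schema,
#     project_name=project_name,
#     organization=organization,
#     launch_editor=False,
#     url=url,
# )
print(sorted(label_schema))  # ['TAGs', 'ground_truth']
```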
"question"
] | thiagoribeirodamotta | 1 |
TvoroG/pytest-lazy-fixture | pytest | 54 | skipping one of parametrized tests with `lazy_fixture` fails | Hi all,
Each test is run once for every environment in our test suite by default.
`conftest.py`
```python
from dataclasses import dataclass
from enum import auto, Flag
import pytest
from pytest_lazyfixture import is_lazy_fixture
class Environment(Flag):
SLOVAK = auto()
CZECH = auto()
ALL = SLOVAK | CZECH
@dataclass
class TestItem:
SUPPORTED_ENVIRONMENT_MARK_NAME = 'supported_environments'
@classmethod
def only_environment(cls, env):
return getattr(pytest.mark, cls.SUPPORTED_ENVIRONMENT_MARK_NAME)(env)
item: pytest.Function
@property
def current_environment(self):
try: current_environment = self.item.callspec.getparam(for_all_envs.__name__)
except ValueError: current_environment = None
else:
if is_lazy_fixture(current_environment):
current_environment = self.item._request.getfixturevalue(current_environment.name)
return current_environment
@property
def supported_environments(self):
return mark.args[0] \
if (mark := next(self.item.iter_markers(self.SUPPORTED_ENVIRONMENT_MARK_NAME), None)) \
else Environment.ALL
@pytest.fixture(
autouse = True,
params = [Environment.CZECH, Environment.SLOVAK],
)
def for_all_envs(request):
...
def pytest_runtest_setup(item):
test_item = TestItem(item = item)
if test_item.current_environment not in test_item.supported_environments:
pytest.skip(f'cannot run on environment `{test_item.current_environment}`')
```
and the usage below:
`test.py`
```python
from tests.conftest import Environment, TestItem
@TestItem.only_environment(Environment.CZECH)
def test_tests_skipping():
...
```
For now everything works well (output below):
```bash
collected 2 items
tests/test_tests_skipping.py::test_tests_skipping[Environment.CZECH] PASSED
tests/test_tests_skipping.py::test_tests_skipping[Environment.SLOVAK] SKIPPED (cannot run on environment `Environment.SLOVAK`)
```
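For reference, the `Flag` membership semantics that the `pytest_runtest_setup` hook relies on can be checked in isolation:

```python
from enum import Flag, auto

class Environment(Flag):
    SLOVAK = auto()
    CZECH = auto()
    ALL = SLOVAK | CZECH

# `in` on Flag values is a bitwise-containment test.
print(Environment.CZECH in Environment.ALL)     # True
print(Environment.SLOVAK in Environment.CZECH)  # False
```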
Each environment needs a custom setup, and that's where `pytest_lazyfixture` comes in. I added two fixtures and changed the parameter values of the `for_all_envs` fixture.
`conftest.py`
```python
@pytest.fixture
def czech_environment():
# NOTE: do some env specific setup here
return Environment.CZECH
@pytest.fixture
def slovak_environment():
# NOTE: do some env specific setup here
return Environment.SLOVAK
@pytest.fixture(
autouse = True,
params = [
pytest.param(lazy_fixture(czech_environment.__name__), id = 'CZ'),
pytest.param(lazy_fixture(slovak_environment.__name__), id = 'SK'),
]
)
def for_all_envs(request):
return request.param
```
and now the output is:
```bash
collected 2 items
tests/test_tests_skipping.py::test_tests_skipping[cz] PASSED
tests/test_tests_skipping.py::test_tests_skipping[sk] SKIPPED (cannot run on environment `Environment.SLOVAK`)
tests/test_tests_skipping.py::test_tests_skipping[sk] ERROR
=========================================================================== ERRORS =================================================================================
________________________________________________________ ERROR at teardown of test_tests_skipping[sk] ______________________________________________________________
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:311: in from_call
result: Optional[TResult] = func()
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:255: in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/pluggy/_hooks.py:265: in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/pluggy/_manager.py:80: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:175: in pytest_runtest_teardown
item.session._setupstate.teardown_exact(item, nextitem)
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:419: in teardown_exact
self._teardown_towards(needed_collectors)
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:434: in _teardown_towards
raise exc
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:427: in _teardown_towards
self._pop_and_teardown()
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:387: in _pop_and_teardown
self._teardown_with_finalization(colitem)
/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:408: in _teardown_with_finalization
assert colitem in self.stack
E AssertionError
```
If the test is run for both env, everything is OK:
`test.py`
```python
def test_tests_skipping():
    ...
```
`output`
```bash
collected 2 items
tests/test_tests_skipping.py::test_tests_skipping[cz] PASSED
tests/test_tests_skipping.py::test_tests_skipping[sk] PASSED
=========================================================================== 2 passed in 0.06s ===========================================================================
```
It seems `colitem <Function test_tests_skipping[sk]>` is missing from `self.stack` in `/home/jcas/.pyenv/versions/3.9.6/envs/pytest_lazy_fixture_bug3.9.6/lib/python3.9/site-packages/_pytest/runner.py:408` (from the last frame).
I'm not sure what is wrong, because I don't know these `pytest` internals. Also I'm not sure if it is
related to `pytest-lazy-fixture`.
Can anybody help me with this please?
| closed | 2021-10-21T15:12:40Z | 2021-10-22T13:13:43Z | https://github.com/TvoroG/pytest-lazy-fixture/issues/54 | [] | micuda | 2 |
benbusby/whoogle-search | flask | 1,104 | [FEATURE] Multipass image for Whoogle | It would be one more way to use Whoogle in a containerized local environment. It's really useful for Ubuntu users and users that don't like Docker. | open | 2023-12-22T02:52:32Z | 2023-12-22T02:52:32Z | https://github.com/benbusby/whoogle-search/issues/1104 | [
"enhancement"
] | ghost | 0 |
ClimbsRocks/auto_ml | scikit-learn | 5 | FUTURE: run hyperparameter optimization on all the models | closed | 2016-08-08T00:16:50Z | 2016-08-09T04:27:59Z | https://github.com/ClimbsRocks/auto_ml/issues/5 | [] | ClimbsRocks | 1 | |
jupyter/nbgrader | jupyter | 1,685 | where can I find complete list of formgrader API urls | I am integrating nbgrader into my LMS. In my LMS backend, I am using formgrader URLs like
1. formgrader/api/assignment/{assignment_name}/generate_feedback
2. formgrader/api/assignment/{assignment_name}/release_feedback
3. formgrader/api/assignments
to fetch details and perform some actions directly from my LMS frontend.
These are URLs that I found out myself by using the Chrome developer tools
**I need one for auto-grading all assignments.**
I only found the one that auto grades for a single student, like so
- /formgrader/api/submission/{assignment_name}/{student_name}/autograde
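Until a bulk endpoint exists (or is documented), the per-student URL above can simply be driven in a loop from the LMS backend. A sketch of just the URL construction, with a made-up assignment name and roster (a real call would still need the notebook server's base URL and auth token):

```python
# Hypothetical roster and assignment name, only to show the loop shape.
template = "/formgrader/api/submission/{assignment}/{student}/autograde"
roster = ["alice", "bob", "carol"]

urls = [template.format(assignment="hw1", student=student) for student in roster]
for url in urls:
    print(url)  # e.g. /formgrader/api/submission/hw1/alice/autograde
```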
Where can I find the entire list of formgrader endpoints (URLs) and their documentation? | closed | 2022-10-18T11:14:16Z | 2024-02-23T15:52:56Z | https://github.com/jupyter/nbgrader/issues/1685 | [
"question"
] | adarshverma19 | 2 |
cobrateam/splinter | automation | 724 | Main example throws error | When I run the given Google example on Linux I get the following (Python 3.6)
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/splinter/element_list.py", line 42, in __getitem__
return self._container[index]
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/splinter/element_list.py", line 76, in __getattr__
return getattr(self.first, name)
File "/usr/local/lib/python3.6/dist-packages/splinter/element_list.py", line 57, in first
return self[0]
File "/usr/local/lib/python3.6/dist-packages/splinter/element_list.py", line 46, in __getitem__
self.find_by, self.query
splinter.exceptions.ElementDoesNotExist: no elements could be found with name "btnG"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/splinter/element_list.py", line 79, in __getattr__
return getattr(self._container, name)
AttributeError: 'list' object has no attribute 'click'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "splinter_test.py", line 6, in <module>
browser.find_by_name('btnG').click()
File "/usr/local/lib/python3.6/dist-packages/splinter/element_list.py", line 83, in __getattr__
self.__class__.__name__, name
AttributeError: 'ElementList' object has no attribute 'click'
```
I did an inspect on the google website and it seems the button is called 'btnK' so I changed it to that but now get this
```
selenium.common.exceptions.ElementNotInteractableException: Message: Element <input class="gNO89b" name="btnK" type="submit"> could not be scrolled into view
``` | closed | 2019-10-30T04:20:44Z | 2021-04-02T17:57:41Z | https://github.com/cobrateam/splinter/issues/724 | [
"NeedsInvestigation"
] | Zylatis | 2 |
Textualize/rich | python | 3,672 | [BUG] Console in RichHandler is unable to pass TAB characters through to terminal. | - [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
Currently, I can force rich.Console to pass TAB characters through when I call `Console.print(..., expand_tabs=False)`, which is good, but there is no way to set a `Console` to _not_ expand TABs by _default_, which I need if I want to use `RichHandler` for logging.
Code like this would be useful:
```python
logging.addHandler( RichHandler( console=Console( expand_tabs=False, file=sys.stderr ) ) )
```
for instance, I like log output to be valid TSV (TAB-Separated Values) files as well as prettily-rendered console output. This allows me to process large log files efficiently. Unfortunately, this isn't valid Rich. Since TAB _is_ a valid character to print to console, I consider not being able to do so a bug, rather than a feature request, as it is not currently possible for me to print the characters I want to console (TABs are getting replaced with space characters in RichHandler).
**Platform**
<details>
<summary>Click to expand</summary>
Linux, and any combination of bash or fish.
Output of
```
python -m rich.diagnose
pip freeze | grep rich
```
```
$ python -m rich.diagnose; pip freeze | grep rich
Γò¡ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ <class 'rich.console.Console'> ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò«
Γöé A high level console interface. Γöé
Γöé Γöé
Γöé Γò¡ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò« Γöé
Γöé Γöé <console width=270 ColorSystem.EIGHT_BIT> Γöé Γöé
Γöé Γò░ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò» Γöé
Γöé Γöé
Γöé color_system = '256' Γöé
Γöé encoding = 'utf-8' Γöé
Γöé file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> Γöé
Γöé height = 71 Γöé
Γöé is_alt_screen = False Γöé
Γöé is_dumb_terminal = False Γöé
Γöé is_interactive = True Γöé
Γöé is_jupyter = False Γöé
Γöé is_terminal = True Γöé
Γöé legacy_windows = False Γöé
Γöé no_color = False Γöé
Γöé options = ConsoleOptions(size=ConsoleDimensions(width=270, height=71), legacy_windows=False, min_width=1, max_width=270, is_terminal=True, encoding='utf-8', max_height=71, justify=None, overflow=None, no_wrap=False, highlight=None, markup=None, height=None) Γöé
Γöé quiet = False Γöé
Γöé record = False Γöé
Γöé safe_box = True Γöé
Γöé size = ConsoleDimensions(width=270, height=71) Γöé
Γöé soft_wrap = False Γöé
Γöé stderr = False Γöé
Γöé style = None Γöé
Γöé tab_size = 8 Γöé
Γöé width = 270 Γöé
Γò░ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò»
Γò¡ΓöÇΓöÇΓöÇ <class 'rich._windows.WindowsConsoleFeatures'> ΓöÇΓöÇΓöÇΓöÇΓò«
Γöé Windows features available. Γöé
Γöé Γöé
Γöé Γò¡ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò« Γöé
Γöé Γöé WindowsConsoleFeatures(vt=False, truecolor=False) Γöé Γöé
Γöé Γò░ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò» Γöé
Γöé Γöé
Γöé truecolor = False Γöé
Γöé vt = False Γöé
Γò░ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò»
Γò¡ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇ Environment Variables ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò«
Γöé {'TERM': 'xterm-256color', 'COLORTERM': None, 'CLICOLOR': None, 'NO_COLOR': None, 'TERM_PROGRAM': 'tmux', 'COLUMNS': None, 'LINES': None, 'JUPYTER_COLUMNS': None, 'JUPYTER_LINES': None, 'JPY_PARENT_PID': None, 'VSCODE_VERBOSE_LOGGING': None} Γöé
Γò░ΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓöÇΓò»
platform="Linux"
rich==13.9.4
```
</details>
| closed | 2025-03-21T15:01:14Z | 2025-03-21T17:35:40Z | https://github.com/Textualize/rich/issues/3672 | [
"Needs triage"
] | tokuchan | 5 |
plotly/dash | plotly | 2,863 | [BUG] running does not support wildcards in ids | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-bootstrap-components 1.6.0
```
**Describe the bug**
Using wildcards in the ids of Outputs in the `running` kwarg leads to an `itempath is undefined` error.
**Expected behavior**
Wildcards are properly resolved and matching components are updated while the callback is running.
**MVE**
```
from time import sleep

import dash_bootstrap_components as dbc
from dash import MATCH, Input, Output, callback

layout = [
    dbc.Button(
        "Test1",
        id={"component": "button", "index": "1"},
    ),
    dbc.Button(
        "Test2",
        id={"component": "button", "index": "2"},
    ),
]


@callback(
    Output({"component": "button", "index": MATCH}, "color"),
    Input({"component": "button", "index": MATCH}, "n_clicks"),
    running=[
        (Output({"component": "button", "index": MATCH}, "children"), "running", "finished"),
    ],
    prevent_initial_call=True,
)
def test(_) -> str:
    sleep(3)
    return "warning"
```
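For context on what resolving the wildcard would involve: the `running` handler needs to match the pattern id against each concrete component id, the same way ordinary pattern-matching callbacks do. A dependency-free sketch of that matching rule (this is an illustration, not dash source code):

```python
MATCH = object()  # stand-in for dash.MATCH


def id_matches(pattern: dict, concrete: dict) -> bool:
    """A pattern id matches a concrete id when the keys agree and every
    non-wildcard value is equal; MATCH accepts any value."""
    return pattern.keys() == concrete.keys() and all(
        value is MATCH or value == concrete[key] for key, value in pattern.items()
    )


pattern = {"component": "button", "index": MATCH}
print(id_matches(pattern, {"component": "button", "index": "1"}))  # True
print(id_matches(pattern, {"component": "slider", "index": "1"}))  # False
```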
| closed | 2024-05-20T14:53:45Z | 2024-06-12T13:04:15Z | https://github.com/plotly/dash/issues/2863 | [
"bug",
"sev-2"
] | tlauli | 1 |
pytorch/vision | computer-vision | 8,793 | masks_to_boxes Does Not Enforce 0 <= x1 < x2 and 0 <= y1 < y2, Leading to Invalid Bounding Boxes | ### 🐛 Describe the bug
```
def masks_to_boxes(masks: torch.Tensor) -> torch.Tensor:
    """
    Compute the bounding boxes around the provided masks.

    Returns a [N, 4] tensor containing bounding boxes. The boxes are in ``(x1, y1, x2, y2)`` format with
    ``0 <= x1 < x2`` and ``0 <= y1 < y2``.

    Args:
        masks (Tensor[N, H, W]): masks to transform where N is the number of masks
            and (H, W) are the spatial dimensions.

    Returns:
        Tensor[N, 4]: bounding boxes
    """
    if not torch.jit.is_scripting() and not torch.jit.is_tracing():
        _log_api_usage_once(masks_to_boxes)
    if masks.numel() == 0:
        return torch.zeros((0, 4), device=masks.device, dtype=torch.float)
    n = masks.shape[0]
    bounding_boxes = torch.zeros((n, 4), device=masks.device, dtype=torch.float)
    for index, mask in enumerate(masks):
        y, x = torch.where(mask != 0)
        bounding_boxes[index, 0] = torch.min(x)
        bounding_boxes[index, 1] = torch.min(y)
        bounding_boxes[index, 2] = torch.max(x)
        bounding_boxes[index, 3] = torch.max(y)
    return bounding_boxes
```
This is the official implementation of the `masks_to_boxes` function, intended to calculate the bounding box for a given mask. The bug is that while the documentation clearly states ``0 <= x1 < x2`` and ``0 <= y1 < y2``, this format is clearly not enforced: for a mask that is a single pixel wide or tall, `min` and `max` coincide and the returned box has width or height 0. Returning such a bounding box can cause significant issues, since e.g. some model architectures like Mask R-CNN normalize the ground-truth bounding box before loss calculation. Normalizing with a box height/width of 0 makes the coordinates nan, which makes the loss nan, which then destabilizes the training process. And because of the misleading documentation, users look for the root cause of their nan loss in a lot of places but this one.
Due to the potential consequences of such a malformed bounding box, I would recommend enforcing the documented behavior and raising an error if this condition is violated. Alternatively, the bounding box could be expanded by a single pixel, but this might lead to inaccuracies for very small objects. At the very least the documentation should be adapted and users should be made aware to check for this edge case themselves.
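A dependency-free sketch of the degenerate case and the proposed check, using plain Python lists in place of tensors so the shapes are easy to see (the validation helper is the suggestion above, not torchvision code):

```python
def mask_to_box(mask):
    """Mimic masks_to_boxes for one binary mask given as a list of rows."""
    xs = [x for row in mask for x, value in enumerate(row) if value]
    ys = [y for y, row in enumerate(mask) if any(row)]
    return (min(xs), min(ys), max(xs), max(ys))


def validate_box(box):
    """Enforce the documented invariant 0 <= x1 < x2 and 0 <= y1 < y2."""
    x1, y1, x2, y2 = box
    if not (0 <= x1 < x2 and 0 <= y1 < y2):
        raise ValueError(f"degenerate box {box}: width and height must be > 0")
    return box


two_wide = [[0, 1, 1],
            [0, 1, 1]]
print(mask_to_box(two_wide))        # (1, 0, 2, 1), a valid box

one_wide = [[0, 1, 0],
            [0, 1, 0]]
box = mask_to_box(one_wide)
print(box)                          # (1, 0, 1, 1): min(x) == max(x), so x1 == x2
try:
    validate_box(box)
except ValueError as exc:
    print(exc)
```

A single-pixel-wide (or tall) mask makes `min` and `max` coincide, which is exactly the case that later turns into nan after normalization.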
### Versions
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.11.3 (main, Oct 1 2024, 01:42:12) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.33.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-63
Off-line CPU(s) list: 64-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9334 32-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 69%
CPU max MHz: 3910.2529
CPU min MHz: 0.0000
BogoMIPS: 5400.13
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 64 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
NUMA node2 CPU(s): 16-23
NUMA node3 CPU(s): 24-31
NUMA node4 CPU(s): 32-39
NUMA node5 CPU(s): 40-47
NUMA node6 CPU(s): 48-55
NUMA node7 CPU(s): 56-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.1.2+cu121
[pip3] torchaudio==2.1.2+cu121
[pip3] torchvision==0.16.2+cu121
[conda] Could not collect | closed | 2024-12-11T16:48:58Z | 2024-12-12T13:25:54Z | https://github.com/pytorch/vision/issues/8793 | [] | H4nz0u | 2 |
Esri/arcgis-python-api | jupyter | 1,252 | arcgis.geometry.Geometry.from_shapely doesn't work in 2.0.0 | **Describe the bug**
Currently trying to use Geometry.from_shapely throws a NameError due to _HASSHAPELY not being defined.
**To Reproduce**
Steps to reproduce the behavior:
```python
>>> import arcgis
>>> from shapely.geometry import box
>>> g = box(0, 0, 1, 1)
>>> x = arcgis.geometry.Geometry.from_shapely(g)
```
error:
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dwilson/miniconda3/envs/geodesic/lib/python3.8/site-packages/arcgis/geometry/_types.py", line 877, in from_shapely
if _HASSHAPELY:
NameError: name '_HASSHAPELY' is not defined
```
**Expected behavior**
I assume there was an API change and `_HASSHAPELY` got orphaned. If I explicitly set `_HASSHAPELY` to `True` on `arcgis.geometry._types`, all is well.
**Platform (please complete the following information):**
- OS: Windows/WSL2
- Python API Version 2.0.0
| closed | 2022-05-17T21:11:33Z | 2022-05-19T13:50:44Z | https://github.com/Esri/arcgis-python-api/issues/1252 | [
"bug"
] | dwilson1988 | 1 |
encode/databases | sqlalchemy | 574 | session manager/ session maker | I see `await database.connect()` being used, but is that the equivalent of SQLAlchemy's `orm.sessionmaker`?
Do I just open a new connection every time up to the defined limit and `databases` handles them in the connection pool?
I do have the connect and disconnect calls at startup, but I seem to have to call the connect function again as a dependency whenever I try to execute a query elsewhere in the code:
```
@app.on_event("startup")
async def startup():
    await database.connect()


@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()
```
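If I understand the design right, the answer is conceptually yes: `connect()` sets up a pool once, and each query borrows a connection and returns it, so reusing the single global `Database` object everywhere is the intended pattern (no per-request `connect()`). A toy stdlib-only illustration of that borrowing pattern, not the `databases` internals (all names here are made up):

```python
import asyncio


class ToyPool:
    """Stand-in for the pool that a single startup connect() would create."""

    def __init__(self, size: int):
        self._free: asyncio.Queue = asyncio.Queue()
        for i in range(size):
            self._free.put_nowait(f"conn-{i}")

    async def run_query(self, sql: str) -> str:
        conn = await self._free.get()       # borrow a pooled connection
        try:
            await asyncio.sleep(0)          # pretend to talk to the database
            return f"{sql} ran on {conn}"
        finally:
            self._free.put_nowait(conn)     # always hand it back


async def main():
    pool = ToyPool(size=2)                  # like a max-connections limit
    return await asyncio.gather(*(pool.run_query(f"SELECT {i}") for i in range(4)))


results = asyncio.run(main())
for line in results:
    print(line)
```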
| open | 2023-11-17T04:15:22Z | 2023-11-17T04:15:22Z | https://github.com/encode/databases/issues/574 | [] | eddyizm | 0 |
serpapi/google-search-results-python | web-scraping | 42 | AttributeError during the import of the module: `initialized module 'serpapi' has no attribute 'GoogleSearch' (most likely due to a circular import)` | The first line of your code example:
```python
from serpapi import GoogleSearch
```
[here](https://serpapi.com/blog/using-google-maps-local-results-from-serpapi/#fullcode)
gives me
`AttributeError: partially initialized module 'serpapi' has no attribute 'GoogleSearch' (most likely due to a circular import)` | closed | 2023-04-03T20:50:27Z | 2023-04-06T12:42:40Z | https://github.com/serpapi/google-search-results-python/issues/42 | [] | Ciroxxxx | 5
open-mmlab/mmdetection | pytorch | 11,711 | python tools/train.py configs/yolox/yolox_tiny_8xb8-300e_coco.py, then reports AssertionError: | 
| closed | 2024-05-14T03:44:22Z | 2024-05-14T06:57:53Z | https://github.com/open-mmlab/mmdetection/issues/11711 | [] | realTravisYou | 0 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 77 | How to use the fine-tuned model | Do I just need to swap in the fine-tuned weights to make predictions?

| closed | 2023-07-19T12:49:34Z | 2023-09-08T08:02:08Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/77 | [] | handsomexiaoyi | 1 |
iterative/dvc | machine-learning | 9,722 | Epic: params/metrics/plots collection | p1:
- [x] https://github.com/iterative/studio/issues/4912
- [x] https://github.com/iterative/dvc/issues/9588
- [x] https://github.com/iterative/dvc/issues/9478
p2:
- [x] https://github.com/iterative/dvc/issues/9452
| closed | 2023-07-11T12:34:30Z | 2024-04-24T20:22:44Z | https://github.com/iterative/dvc/issues/9722 | [
"p1-important",
"A: plots",
"A: params"
] | dberenbaum | 5 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,107 | Is there any benefit to doing the training myself? | I see in the Wiki that there is a guide on how to train the models myself. For this, I would need around 500GB of data. For what I'm trying to do this might be impractical. What is the benefit of doing the training myself? All I want to do is use the program to generate speech for 3 voices of my choosing via code (i.e. not using the toolbox). Would I need to retrain the 3 models in order to do this?
I'm new to ML so sorry if this seems like a rookie question. | open | 2022-09-06T01:22:25Z | 2023-03-23T13:04:26Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1107 | [] | CodingRox82 | 2 |
httpie/cli | rest-api | 1,404 | Inconsistent BUILD_CHANNEL value | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
Hey, sorry for deleting your template, but this is a request from your Fedora packager and not from a user.
I've figured out that httpie 3.2.0+ now includes an update notification https://httpie.io/blog/httpie-3.2.0#update-warnings
It points the users to https://httpie.io/docs/cli/pypi -- so I thought, well, maybe I can point them to https://httpie.io/docs/cli/fedora instead? Maybe there could be a file marker that contains the last bit of this URL and I could just `echo fedora > httpie/internal/update_method.marker` right?
So I looked and apparently, this is all being handled already, wow :tada:
https://github.com/httpie/httpie/blob/d9e1dc08c9b4eb0d6021360118daca87fa248400/httpie/internal/update_warnings.py#L23
Where is this installation_method coming from, I asked? It is coming from
https://github.com/httpie/httpie/blob/d9e1dc08c9b4eb0d6021360118daca87fa248400/httpie/internal/__build_channel__.py#L1-L5
This is so awesome, let me just replace that `unknown` with `fedora`, and everything will work! Thanks
I however see that this `BUILD_CHANNEL` thing is also used to search keys in https://packages.httpie.io/latest.json but that includes `fedora_rawhide`. Now I have a problem:
- If I set `BUILD_CHANNEL = 'fedora'`, there's nothing in https://packages.httpie.io/latest.json of that key and the warning will never be shown
- If I set `BUILD_CHANNEL = 'fedora_rawhide'`, the warning will be shown, but the link will lead to https://httpie.io/docs/cli/ which is not helpful. Also, there are other Fedora versions.
What do I do with this? How can I help you populate the data for fedora-XY and make all fedora-XY links point to the Fedora build instructions? (XY is the Fedora version.)
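One possible direction, sketched as plain string munging: keep per-version keys in `latest.json` but collapse them to a single docs slug when the warning builds its URL. The `fedora-36` spelling below is hypothetical; only `fedora_rawhide` actually appears in this issue:

```python
def docs_slug(build_channel: str) -> str:
    """Collapse versioned channels (fedora_rawhide, fedora-36, ...) to one docs page."""
    return build_channel.replace("-", "_").split("_")[0]


for channel in ["fedora_rawhide", "fedora-36", "fedora", "pypi"]:
    print(channel, "->", f"https://httpie.io/docs/cli/{docs_slug(channel)}")
```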
Thanks. | open | 2022-05-17T11:06:11Z | 2022-05-17T11:42:57Z | https://github.com/httpie/cli/issues/1404 | [
"enhancement"
] | hroncok | 4 |
kizniche/Mycodo | automation | 962 | Support for I2C or SPI controlled potentiometers for physical control of humidifier output? | Not sure if this is the correct place to ask this question, so I apologize in advance if I have broken protocol :-P
Does Mycodo have built-in support for I2C or SPI controlled digital potentiometers?
I do not see such devices listed in the "Supported Outputs" section of the manual.
I would like to be able to use I2C or SPI controlled potentiometers to automate the control of devices.
Specifically in this instance I am using a cheap ultrasonic humidifier to maintain RH and VPD in a hydroponics system.
I live in a very hot, dry climate and the humidifier is an absolute must-have component for maintaining an ideal environment. However, the only way to control its output is via a manual knob (an analog potentiometer).
I would love to replace that analog potentiometer with a remotely-controllable digital potentiometer such as this...
https://learn.adafruit.com/ds3502-i2c-potentiometer
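As far as I know Mycodo has no such output built in, so this would be custom work, but the control-side math is small. A sketch of mapping a 0-100 % humidifier-power setpoint onto the DS3502's 128-position wiper (codes 0-127); the actual I2C register write is left out because it depends on which driver library is used:

```python
def percent_to_wiper(percent: float, wiper_max: int = 127) -> int:
    """Clamp a 0-100 % setpoint and scale it to a digital-pot wiper code."""
    percent = max(0.0, min(100.0, percent))
    return round(percent / 100.0 * wiper_max)


for setpoint in (0, 33, 50, 100, 140):      # 140 exercises the clamp
    print(setpoint, "->", percent_to_wiper(setpoint))
```

A Mycodo PID or autotune routine could then feed its 0-100 % output through this mapping instead of a PWM duty cycle.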
Right now the only way I can control my RH is to increase or decrease the intake & exhaust fan speeds in my hydroponics system (via PWM). While this does work to some extent, the RH takes too long to balance and reach desired levels, and I have to constantly adjust the output on the humidifier manually several times a day to get the RH just right.
It would be a HUGE advantage to be able to use a Mycodo widget to physically control my humidifier output via automation in conjunction with a temp/humidity/pressure sensor rather than having to manually fuss with analog knobs all day. :-(
There are several devices that could benefit from the use of digital potentiometers:
- Fan speed controllers (as an alternative to PWM).
- Light dimmers.
- Humidifiers.
- Water chillers.
- Tank stirrers.
- DC water pumps (to control flow rates).
- Etc., etc.
Is this something that Mycodo already does, or is it something that I would have to custom program? Unfortunately I am not a programmer, and I am really looking for control software that is mostly "plug and play" with some relatively simple calibration rather than spending weeks to figure out how to write and integrate the code for such a device. :-(
By the way, Mycodo is an AMAZING project, and I can't wait to take my hydroponics system design to the next level using it's automation! THANK YOU TO EVERYONE for creating this incredible software! <3
Thank you,
JCL
| closed | 2021-03-25T20:19:10Z | 2021-09-20T19:14:17Z | https://github.com/kizniche/Mycodo/issues/962 | [
"enhancement"
] | LucidEye | 11 |
fastapi-users/fastapi-users | asyncio | 392 | Tortoise-orm custom user | I'm trying to make a custom user with fastapi_users,
so:
```python
from fastapi_users import models
from fastapi_users.db import TortoiseBaseUserModel, TortoiseUserDatabase
import datetime


class User(models.BaseUser):
    nome: str = "defaultuserteste"
    data_criado: str = datetime.datetime.now()
    is_fund: bool = False
    is_agendador: bool = False
    is_revisor: bool = False
    is_aprovador: bool = False


class UserCreate(models.BaseUserCreate):
    nome: str = "defaultuserteste"
    data_criado: str = datetime.datetime.now()
    is_fund: bool = False
    is_agendador: bool = False
    is_revisor: bool = False
    is_aprovador: bool = False


class UserUpdate(User, models.BaseUserUpdate):
    nome: str = "defaultuserteste"
    data_criado: str = datetime.datetime.now()
    is_fund: bool = False
    is_agendador: bool = False
    is_revisor: bool = False
    is_aprovador: bool = False


class UserDB(User, models.BaseUserDB):
    nome: str = "defaultuserteste"
    data_criado: str = datetime.datetime.now()
    is_fund: bool = False
    is_agendador: bool = False
    is_revisor: bool = False
    is_aprovador: bool = False


class UserModel(TortoiseBaseUserModel):
    nome: str = "defaultuserteste"
    data_criado: str = datetime.datetime.now()
    is_fund: bool = False
    is_agendador: bool = False
    is_revisor: bool = False
    is_aprovador: bool = False


user_db = TortoiseUserDatabase(UserDB, UserModel)
```
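A likely cause (an assumption on my part, not a confirmed diagnosis): Tortoise ORM's model metaclass only turns `fields.*` descriptor instances into database columns, so plain annotated defaults like `nome: str = "defaultuserteste"` on `UserModel` never reach the schema generator. The toy collector below uses made-up names (`Field`, `collect_columns`) purely to illustrate that distinction:

```python
# Toy illustration (made-up names, not Tortoise ORM's real internals):
# an ORM metaclass typically keeps only attributes whose *values* are
# Field instances, so plain annotated defaults are silently ignored.
class Field:
    def __init__(self, default=None):
        self.default = default

def collect_columns(cls):
    return {name for name, value in vars(cls).items() if isinstance(value, Field)}

class BadUser:                       # mirrors the code in the question
    nome: str = "defaultuserteste"   # plain default -> not seen as a column
    is_fund: bool = False

class GoodUser:
    nome = Field(default="defaultuserteste")
    is_fund = Field(default=False)

print(sorted(collect_columns(BadUser)))   # []
print(sorted(collect_columns(GoodUser)))  # ['is_fund', 'nome']
```

If that assumption holds, declaring the extra attributes on the Tortoise model with `fields.CharField(...)`, `fields.BooleanField(...)`, etc. (rather than bare annotations) should make them appear in MySQL.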
The database is not changed. I'm using MySQL ... nothing's happening. | closed | 2020-11-20T03:54:56Z | 2020-11-23T12:21:11Z | https://github.com/fastapi-users/fastapi-users/issues/392 | [
"documentation",
"question"
] | ScrimForever | 10 |
tfranzel/drf-spectacular | rest-api | 849 | Unable to guess serializer on apidoc schema endpoint | **Describe the bug**
drf-spectacular complains that it's unable to guess the serializer for the `/api/schema` endpoint, which returns the schema for the apidoc.
**To Reproduce**
* Add to `urls.py`:
  ```py
  path("api/schema", SpectacularAPIView.as_view(), name="schema")
  ```
* Go to `localhost:8001/api/schema`
* Error entry in Django console:
* `Error #107: schema: unable to guess serializer. This is graceful fallback handling for APIViews. Consider using GenericAPIView as view base class, if view is under your control. Ignoring view for now. `
`"SERVE_INCLUDE_SCHEMA": False` has been overridden, but changing it back to `True` does not seem to make a difference
| closed | 2022-11-04T06:14:21Z | 2022-11-04T07:42:16Z | https://github.com/tfranzel/drf-spectacular/issues/849 | [] | HansAarneLiblik | 2 |
plotly/dash | plotly | 2,983 | exception handling mechanism for using keyword parameters to orchestrate callbacks reported an error | Hi there.
I encountered a bug while using the exception handling feature in version 2.18.
Dash related information is as follows
```
dash 2.18.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
The test code is as follows
```
from dash import Dash, html, Input, Output, set_props


def global_callback_error_handler(err):
    set_props('output-global', {'children': f'global: {err}'})


app = Dash(on_error=global_callback_error_handler)

app.layout = [
    html.Button('start', id='start-local'),
    html.Button('start-global', id='start-global'),
    html.Div(id='output'),
    html.Div(id='output-global'),
    html.Div(id='error-message'),
]


def on_callback_error(err):
    set_props('error-message', {'children': f'message: {err}'})
    return dict(test=f'callback: {err}')


@app.callback(
    output=dict(test=Output('output', 'children')),
    inputs=dict(start=Input('start-local', 'n_clicks')),
    on_error=on_callback_error,
    prevent_initial_call=True,
)
def on_start(start):
    raise Exception('local error')


@app.callback(
    output=dict(test=Output('output-global', 'children')),
    inputs=dict(start=Input('start-global', 'n_clicks')),
    prevent_initial_call=True,
)
def on_start_global(start):
    raise Exception('global error')


if __name__ == '__main__':
    app.run(debug=True)
```
The error message is as follows
```
dash._grouping.SchemaTypeValidationError: Schema: {'test': <Output `output-global.children`>}
Path: ()
Expected type: <class 'dict'>
Received value of type <class 'list'>:
[<dash._callback.NoUpdate object at 0x00000215C014BF40>]
```
It seems to return a list containing NoUpdate by default. | closed | 2024-09-05T08:56:58Z | 2024-09-06T17:45:01Z | https://github.com/plotly/dash/issues/2983 | [
"bug",
"P2"
] | insistence | 0 |
qubvel-org/segmentation_models.pytorch | computer-vision | 30 | Use concatenation for feature pyramid aggregation? | Hi! Thanks for repo owner's contribution! This repository is useful and benefits lots of people!
I would like to discuss the implementation of FPN in this repo with the people watching this repo.
According to this [document](http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf), I think page 25 suggests that we should use concatenation instead of summation, if I did not misunderstand the page.
https://github.com/qubvel/segmentation_models.pytorch/blob/02ef0e6ec4351cec9c707d0c3984941867e8a4d8/segmentation_models_pytorch/fpn/decoder.py#L102
```
def forward(self, x):
    c5, c4, c3, c2, _ = x

    p5 = self.conv1(c5)
    p4 = self.p4([p5, c4])
    p3 = self.p3([p4, c3])
    p2 = self.p2([p3, c2])

    s5 = self.s5(p5)
    s4 = self.s4(p4)
    s3 = self.s3(p3)
    s2 = self.s2(p2)

    # use concatenation instead of summation?
    # x = s5 + s4 + s3 + s2
    x = torch.cat([s5, s4, s3, s2], dim=1)

    x = self.dropout(x)
    x = self.final_conv(x)
    x = F.interpolate(x, scale_factor=4, mode='bilinear', align_corners=True)
    return x
```
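One practical consequence of the proposed switch, sketched below with plain arithmetic (the names `segmentation_channels`, `n_levels`, and `final_conv_in_channels` are illustrative, not the library's API): concatenation multiplies the channel width seen by `final_conv` by the number of pyramid levels, whereas summation leaves it unchanged, so `final_conv`'s `in_channels` would need to be scaled accordingly.

```python
# Channel-count arithmetic for the two merge strategies (names invented
# for illustration; they are not part of segmentation_models.pytorch).
def final_conv_in_channels(segmentation_channels: int, n_levels: int, merge: str) -> int:
    """Input width expected by the decoder's final conv layer."""
    if merge == "add":   # x = s5 + s4 + s3 + s2 -> width unchanged
        return segmentation_channels
    if merge == "cat":   # x = torch.cat([...], dim=1) -> width scales with levels
        return segmentation_channels * n_levels
    raise ValueError(merge)

print(final_conv_in_channels(128, 4, "add"))  # 128
print(final_conv_in_channels(128, 4, "cat"))  # 512
```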
| closed | 2019-07-08T09:17:02Z | 2019-07-15T08:18:59Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/30 | [] | hyc-xyz | 3 |
cvat-ai/cvat | computer-vision | 9,222 | Preview picture in models from nuclio | Hi,
I want to show a preview picture for my models from nuclio. By default, no picture exists:

If someone installs a model from Roboflow, you see a preview picture:

It would be helpful for my colleagues if they see a picture for our self-developed models from nuclio.
How can I insert a picture?
Best Regards
Rose | closed | 2025-03-18T09:56:35Z | 2025-03-18T12:31:59Z | https://github.com/cvat-ai/cvat/issues/9222 | [
"question"
] | RoseDeSable | 1 |
open-mmlab/mmdetection | pytorch | 11,922 | what's the difference between mm-grounding dino and the open-grounding dino? I found some differences in odvg.py | open | 2024-08-26T02:59:10Z | 2024-08-26T03:45:03Z | https://github.com/open-mmlab/mmdetection/issues/11922 | [] | lyf6 | 2 |
plotly/dash | data-visualization | 2,525 | [BUG] | Hi everyone,
**Describe your context**
I'm trying to create a multi-page Dash application, using a Flask server.
Here are the main requirements:
dash==2.7.1
flask==2.2.2
Here is my project folder:
dashapp/
- app.py
- pages /
- un.py
- deux.py
Content of app.py :
```
from dash import html
from dash import dcc
import dash
from flask import Flask

server = Flask(__name__)
server.config.update(SECRET_KEY="dash123KeySecret")
app = dash.Dash(__name__, server=server, use_pages=True)

app.layout = html.Div(
    [
        # One Link for each page in page_registry
        html.Div(
            [
                html.Div(
                    dcc.Link(
                        f"{page['name']} - {page['path']}", href=page["path"]
                    )
                )
                for page in dash.page_registry.values()
            ]
        ),
        dash.page_container
    ]
)
```
Content of un.py :
```
from dash import html
from dash_labs.plugins import register_page

def layout():
    return html.Div(children=[
        html.H1('1')
    ])

register_page(__name__, path='/')
```
Content of deux.py :
```
from dash import html
from dash_labs.plugins import register_page

def layout():
    return html.Div(children=[
        html.H1('2')
    ])

register_page(__name__, path='/deux')
```
**bug**
When running the app, I get a 404 error on the layout of the page "deux", whereas page "un" is OK.


**Expected behavior**
As in a one-page application, the chosen layout should be displayed.
google-research/bert | nlp | 865 | how to use fine-tuned bert for another task | For example, I fine-tuned BERT on a sentence classification (5 labels) task; now I want to fine-tune this fine-tuned BERT model on a sentence-pair classification (binary label) task. I guess the performance of sentence-pair classification can be improved. How can I realize this? Thanks a lot! | open | 2019-09-26T03:22:21Z | 2019-09-26T03:22:21Z | https://github.com/google-research/bert/issues/865 | [] | OYE93 | 0 |
huggingface/datasets | tensorflow | 6,552 | Loading a dataset from Google Colab hangs at "Resolving data files". | ### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:

It is happening when the `_get_origin_metadata` definition is invoked:
```python
def _get_origin_metadata(
    data_files: List[str],
    max_workers=64,
    download_config: Optional[DownloadConfig] = None,
) -> Tuple[str]:
    return thread_map(
        partial(_get_single_origin_metadata, download_config=download_config),
        data_files,
        max_workers=max_workers,
        tqdm_class=hf_tqdm,
        desc="Resolving data files",
        disable=len(data_files) <= 16,
    )
```
The thread is then stuck at `waiter.acquire()` in the builtin `threading.py` file.
I can load the dataset just fine on my machine.
Cheers,
Thomas
### Steps to reproduce the bug
In Google Colab:
```python
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("colour-science/color-checker-detection-dataset")
```
### Expected behavior
The dataset should be loaded.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | closed | 2024-01-03T02:18:17Z | 2024-01-08T10:09:04Z | https://github.com/huggingface/datasets/issues/6552 | [] | KelSolaar | 2 |
zappa/Zappa | flask | 565 | [Migrated] xgboost error after zappa deploy | Originally from: https://github.com/Miserlou/Zappa/issues/1487 by [umernawaz0301](https://github.com/umernawaz0301)
Cannot find XGBoost Library in the candidate path, did you install compilers and run build.sh in root path?
List of candidates:
/tmp/zappa-py-3-ml/xgboost/libxgboost.so
/tmp/zappa-py-3-ml/xgboost/../../lib/libxgboost.so
/tmp/zappa-py-3-ml/xgboost/./lib/libxgboost.so
/var/lang/xgboost/libxgboost.so
In the local environment it runs perfectly, but it gives this error after deploying to AWS.
Can anyone explain why this error occurred and how to resolve it?
Thanks in advance. | closed | 2021-02-20T12:22:50Z | 2024-04-13T17:09:26Z | https://github.com/zappa/Zappa/issues/565 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
sanic-org/sanic | asyncio | 2,174 | How do I parse the request content in xml format | like this
```
<xml>
<ToUserName>
<![CDATA[wx5823bf96d3bd56c7]]>
</ToUserName>
<Encrypt>
<![CDATA[RypEvHKD8QQKFhvQ6QleEB4J58tiPdvo+rtK1I9qca6aM/wvqnLSV5zEPeusUiX5L5X/0lWfrf0QADHHhGd3QczcdCUpj911L3vg3W/sYYvuJTs3TUUkSUXxaccAS0qhxchrRYt66wiSpGLYL42aM6A8dTT+6k4aSknmPj48kzJs8qLjvd4Xgpue06DOdnLxAUHzM6+kDZ+HMZfJYuR+LtwGc2hgf5gsijff0ekUNXZiqATP7PF5mZxZ3Izoun1s4zG4LUMnvw2r+KqCKIw+3IQH03v+BCA9nMELNqbSf6tiWSrXJB3LAVGUcallcrw8V2t9EL4EhzJWrQUax5wLVMNS0+rUPA3k22Ncx4XXZS9o0MBH27Bo6BpNelZpS+/uh9KsNlY6bHCmJU9p8g7m3fVKn28H3KDYA5Pl/T8Z1ptDAVe0lXdQ2YoyyH2uyPIGHBZZIs2pDBS8R07+qN+E7Q==]]>
</Encrypt>
<AgentID>
<![CDATA[218]]>
</AgentID>
</xml>
``` | closed | 2021-06-25T03:49:23Z | 2021-06-25T05:15:42Z | https://github.com/sanic-org/sanic/issues/2174 | [] | zxj17815 | 1 |
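As a follow-up note on parsing a body like the one above: a minimal standard-library sketch (the `Encrypt` payload is shortened for illustration, and in a Sanic handler the raw bytes would presumably come from `request.body` rather than a literal):

```python
import xml.etree.ElementTree as ET

# Standard-library sketch: parse the raw XML body and pull out the
# CDATA values by tag name.
body = b"""<xml>
<ToUserName><![CDATA[wx5823bf96d3bd56c7]]></ToUserName>
<Encrypt><![CDATA[RypEvHKD...]]></Encrypt>
<AgentID><![CDATA[218]]></AgentID>
</xml>"""

root = ET.fromstring(body)
data = {child.tag: child.text for child in root}
print(data["ToUserName"])  # wx5823bf96d3bd56c7
print(data["AgentID"])     # 218
```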
ipython/ipython | jupyter | 13,993 | Proposal: Use `AutoSuggestFromHistory` by default | (Sorry this ended up being a bit of an essay :). There are a number of related issues and I wanted to be clear exactly what this one is about.)
This is a proposal to make the default configuration for up/down completion be as given by this configuration:
```python
%config TerminalInteractiveShell.autosuggestions_provider = 'AutoSuggestFromHistory'
```
Essentially this restores the behaviour of `up/down` for navigating to history with search that was changed in gh-13888 (IPython 8.9.0). This refers to when typing a partial match of a previously entered command and pushing `up` to retrieve that previously entered command without needing to push `right` to complete and run the command.
This follows from the discussion in gh-13878 (It was suggested in https://github.com/ipython/ipython/issues/13878#issuecomment-1484060390 to open a new issue). This is also referring to the same problem as gh-13979.
This proposal is at least partially an alternative to gh-13987.
A useful and longstanding IPython feature is the ability to rerun previous commands and to search for them by typing some letters and pushing `up`. For example you might enter these commands:
```python
In [2]: result = sum([1, 3)] # typed this wrong
In [3]: result = sum([1, 3, 4, 6, 7, 78])
In [4]: a = -result
In [5]: b = a + a
In [6]: long_list = [1, 2, 3, 4]
...: print(sum(long_list))
10
```
Now if you want to rerun `In [3]` IPython provides a number of ways to retrieve that command. One quick way of retrieving it is to type the first letter(s) of the command e.g. `r` and then push `up` which before gh-13888 would complete the line ready to push enter and rerun that command:
```python
In [7]: result = sum([1, 3, 4, 6, 7, 78])# cursor at end of line
```
If there are multiple matches then pressing up repeatedly will list them in reverse history order. This behaviour is consistent with other REPLs such as Julia, Matlab etc.
The change in gh-13888 means that pressing up no longer invokes the "history buffer" and instead uses the "suggestion buffer". This brings a number of behavioural changes that mean that searching with `up` requires different keypresses and is now inconsistent with other REPLs having this feature as well as a little less convenient (in my opinion!) to use. I have tried this behaviour for over a month now to see if I could get used to it but am now convinced the previous behaviour is a better default and this proposal is to restore that.
The differences between how the history buffer and the suggestion buffer work are as follows:
1. With the suggestion buffer the most recently matched command is displayed automatically without needing to press `up` but it is not completed and so pressing right is needed to run the command. This means that where previously you would type `r up enter` you should now type `r right enter` to rerun the line shown above. To find a match that is not the most recent you should push `r up... right enter` rather than previously `r up... enter`.
2. If searching for the most recent command the old way using `r up enter` (which is still the way used in other REPLs) then the search will not find the most recent command but the *second* most recent command and if enter is pressed then it will run without completing which will be an error. Completing that line with `right enter` is also incorrect for example it would run the incorrectly typed `In [2]` above (I couldn't possibly count how many times I've done this!).
3. With the history buffer it is possible to rerun a multiline code block such as `In [6]` above using `l up enter`. With the suggestion buffer suggestions are only given for one line at a time meaning that it is necessary to search for and run each line separately e.g. for `In [6]` above the keypresses would be `l right enter p right enter`.
4. With the history buffer as soon as you see a search result that is what you were looking for `enter` will rerun it whereas with the suggestion buffer `right enter` is needed after confirming that the correct command is shown.
5. The suggestion buffer offers the possibility to partially complete from the previous command using `ctrl+right`. For example typing `l ctrl+right` will complete `long_list = ` meaning that multiple `ctrl-right` can be used to reconstruct part of the command and then it is possible to type something different for the remainder of the command.
Apart from the case of multiline blocks it is possible to get out of suggestion mode in a few ways:
1. Pressing escape and waiting 2 seconds turns the suggestion buffer off. At that point up/down will use the history buffer as before.
2. Pressing right will complete the line leaving it in a state where you can press enter or edit the line and then press enter. This is similar to what the history buffer does except that it is not possible from here to press up/down to navigate to alternate commands.
3. It is proposed in gh-13987 that the `delete` key could be used to disable the suggestion buffer as `escape` does except without the 2 second delay.
The proposal here is essentially that pressing `up` will immediately disable the suggestion buffer and move to the history buffer restoring previous behaviour for searching with `up/down`.
I have listed a number of differences from having `up/down` use the suggestion buffer rather than the history buffer. While some of them might seem like small differences rather than necessarily being disadvantageous, it is important to note that they are differences relative to IPython's previous behaviour and also relative to other systems that have the same feature. The changed behaviour also means that navigating like `r up` behaves very differently from just navigating with `up`, whereas previously these behaved the same except for the restriction to only navigate through matching commands. Consistency across different UIs is inherently advantageous for anyone who needs to learn and use those multiple UIs.
Also the small differences do stack up when something is heavily used. For example I regularly type `r up enter` pausing only (if at all) before the `enter` to confirm that it is the right command or whether I need to push `up` more times to get that command. When I do this the speed at which I type `r up` is too fast to look at any suggestion. Now though I need to type `r`, look at the faintly coloured suggestion, and then decide whether to push `up` or `right`. That small decision that needs to be made and the involvement of one extra key causes an unavoidable slowdown/frustration every time I do this.
The only advantage as far as I can tell of having suggestion mode rather than history mode is the possibility of using `ctrl+right` for partial completion. I have tried to use this but it is just never useful for what I want to do when searching with `up`. Basically there are two possibilities for what I am trying to do:
1. Rerun a previous command exactly as it was.
2. Search for the previous command but then edit it and run a modified version of that command.
The partial completion is only potentially useful in case 2 but actually it is not a good fit there and so I always use `right` and then edit the line before running it. The reason partial completion is not useful here is because you have a line like this:
```python
In [1]: result = function(arg1, 2*arg2, arg3, arg4)
```
Say you want to rerun this line but using `3*arg2` instead of `2*arg2`. I would do this by pressing `r up` which previously would complete the whole line (now `r up right` is needed). Then you can use e.g. arrow keys to navigate to the `2` and change it to a `3` and then hit `enter`. With the suggestion buffer and partial completion you would type `r` and then `ctrl+right` 3 times to complete `result = function(args1, `. Now you don't want to complete the `2` so you type `3` but then the suggestion buffer has no suggestions any more because your line doesn't match the history: you will have to type `*arg2, arg3, arg4)` manually.
Also perhaps this should be a separate issue but on my Mac laptop ctrl+right is the key combination for switching between different fullscreen applications so it actually doesn't work for completing suggestions in IPython at all. | closed | 2023-03-26T15:29:20Z | 2023-07-07T09:25:50Z | https://github.com/ipython/ipython/issues/13993 | [
"needs-decision",
"autosuggestions"
] | oscarbenjamin | 7 |
pydata/xarray | numpy | 9,285 | DataTree.update can cause multiple root groups. | ### What happened?
Reviewing the documentation for hierarchical-data.rst, I saw the abe/herbert example didn't look right; I updated `abe.assign()` -> `abe = abe.assign()` and it still looked wrong:
```
>>> abe = xr.DataTree(name="abe")
>>> herbert = xr.DataTree(name="Herb")
>>> abe.update({"herbert": herbert})
>>> print(abe)
<xarray.DataTree 'abe'>
Group: /
└── Group: /
>>> print(abe.groups)
('/', '/')
```
### What did you expect to happen?
I expected not to see two root groups.
### Minimal Complete Verifiable Example
```Python
abe = xr.DataTree(name="abe")
herbert = xr.DataTree(name="Herb")
abe.update({"herbert": herbert})
print(abe)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
It is failing in the update or in the assign (but assign uses update under the hood).
### Environment
<details>
/Users/savoie/.pyenv/versions/miniconda3-4.7.12/envs/xarray-tests/lib/python3.11/site-packages/_distutils_hack/__init__.py:26: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
INSTALLED VERSIONS
------------------
commit: 9e4b737ee0907e16703dab79e118e51b1fc36d46
python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:45:13) [Clang 16.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.6.1.dev88+g068bab28b.d20240724
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.13.0
netCDF4: 1.6.5
pydap: installed
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: 2.17.2
cftime: 1.6.3
nc_time_axis: 1.4.1
iris: 3.9.0
bottleneck: 1.3.8
dask: 2024.4.2
distributed: 2024.4.2
matplotlib: 3.8.4
cartopy: 0.23.0
seaborn: 0.13.2
numbagg: 0.8.1
fsspec: 2024.3.1
cupy: None
pint: 0.23
sparse: 0.15.1
flox: 0.9.6
numpy_groupies: 0.10.2
setuptools: 69.5.1
pip: 24.0
conda: None
pytest: 8.1.1
mypy: 1.8.0
IPython: 8.25.0
sphinx: None
</details>
| closed | 2024-07-26T21:01:58Z | 2024-08-27T19:36:28Z | https://github.com/pydata/xarray/issues/9285 | [
"bug",
"topic-DataTree"
] | flamingbear | 7 |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 91 | The explanation of `load_balancing_loss` is kind of confusing | Hi, thanks for the nice implementation of the Switch Transformer. But I find the explanation of the "load_balancing_loss" may be confusing. In the [tutorial](https://nn.labml.ai/transformers/switch/experiment.html#section-33), the formula on the left side calculates the loss for a single layer. However, if I read it right, the code logic on the right side computes the load_balance_loss sum over all layers. I know this is a small problem, but I just want to confirm the consistency 😁.
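To make the distinction concrete, a toy numeric sketch (the values are invented, not taken from the implementation): the paper's formula yields one auxiliary loss per switch layer, while an aggregate over layers can be either a sum or a per-layer average.

```python
# Toy numbers only: one auxiliary load-balancing loss per switch layer.
per_layer_losses = [0.12, 0.08, 0.10, 0.14]

summed = sum(per_layer_losses)             # an aggregate over all layers
averaged = summed / len(per_layer_losses)  # a per-layer figure, matching a single-layer formula
print(round(summed, 2))    # 0.44
print(round(averaged, 2))  # 0.11
```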
| closed | 2021-09-03T14:02:20Z | 2021-09-06T07:58:17Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/91 | [] | hobbitlzy | 2 |
HumanSignal/labelImg | deep-learning | 181 | how to run it on MacOSX? |
When I run it like this:
```
$make qt4py2
pyrcc4 -py2 -o resources.py resources.qrc
$python labelImg.py
Segmentation fault: 11
```
- **OS:**
- **PyQt version:** | closed | 2017-10-24T06:49:20Z | 2019-01-23T07:47:43Z | https://github.com/HumanSignal/labelImg/issues/181 | [] | huary | 8 |
hankcs/HanLP | nlp | 1,667 | ImportError: cannot import name 'TFAutoModel' | **Describe the bug**
ImportError: cannot import name 'TFAutoModel'
**Code to reproduce the issue**
from transformers import BertTokenizer, BertConfig, PretrainedConfig, TFAutoModel, \
**Describe the current behavior**
ImportError: cannot import name 'TFAutoModel'
**Expected behavior**
ImportError: cannot import name 'TFAutoModel'
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Darwin-19.6.0-x86_64-i386-64bit
- Python version:3.6.9
- HanLP version:2.1.0-alpha.53
**Other info / logs**
Failed to load https://file.hankcs.com/hanlp/tok/pku98_6m_conv_ngram_20200110_134736.zip. See traceback below:
================================ERROR LOG BEGINS================================
Traceback (most recent call last):
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp/utils/component_util.py", line 74, in load_from_meta_file
obj: Component = object_from_classpath(cls)
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp_common/reflection.py", line 27, in object_from_classpath
classpath = str_to_type(classpath)
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp_common/reflection.py", line 44, in str_to_type
cls = getattr(importlib.import_module(module_name), class_name)
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp/components/tok_tf.py", line 12, in <module>
from hanlp.components.taggers.transformers.transformer_tagger_tf import TransformerTaggerTF
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp/components/taggers/transformers/transformer_tagger_tf.py", line 11, in <module>
from hanlp.layers.transformers.loader_tf import build_transformer
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp/layers/transformers/loader_tf.py", line 12, in <module>
from hanlp.layers.transformers.tf_imports import zh_albert_models_google, bert_models_google
File "/Users/yuanxiao/.pyenv/versions/3.6.9/lib/python3.6/site-packages/hanlp/layers/transformers/tf_imports.py", line 5, in <module>
from transformers import BertTokenizer, BertConfig, PretrainedConfig, TFAutoModel, \
ImportError: cannot import name 'TFAutoModel'
=================================ERROR LOG ENDS=================================
If the problem still persists, please submit an issue to https://github.com/hankcs/HanLP/issues
When reporting an issue, make sure to paste the FULL ERROR LOG above and the system info below.
OS: Darwin-19.6.0-x86_64-i386-64bit
Python: 3.6.9
PyTorch: 1.9.0
HanLP: 2.1.0-alpha.53
* [x] I've completed this form and searched the web for solutions. | closed | 2021-07-28T11:04:43Z | 2021-07-28T15:05:28Z | https://github.com/hankcs/HanLP/issues/1667 | [
"invalid"
] | yx179971 | 1 |
ultralytics/yolov5 | deep-learning | 13,096 | tflite error | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When I make some changes to the C3 module and name it C3_dysnake, train to get the best.pt file, and convert it to TFLite format, the following problem occurs. How can I solve it?


### Additional
_No response_ | closed | 2024-06-17T14:01:12Z | 2024-10-20T19:48:01Z | https://github.com/ultralytics/yolov5/issues/13096 | [
"question",
"Stale"
] | Selfpline6 | 3 |
InstaPy/InstaPy | automation | 6,542 | Cannot detect post media type, tried many web solutions | Hey bots, I've got the error "**Cannot detect post media type**" when I use a like function. I'm asking again because I have already followed many suggested fixes from the internet, for example [#6346](https://github.com/InstaPy/InstaPy/pull/6346).
This previous solution suggested changing line 905 of like_util.py to:
```
post_category = element.find_element_by_xpath(
    "//a[@href='/p/"
    + post_href.split("/")[-2]
    + "/']/child::div[@class='u7YqG']/child::div/*[name()='svg']"
).get_attribute("aria-label")
```
Another online suggestion is to change `span` to `div` in the same place I mentioned above. But none of these solutions worked for me. Has anyone in 2022 had the same issue and been unable to solve it using those solutions? Or, even better, can anyone give me a good solution for this?
| open | 2022-03-08T00:13:15Z | 2022-03-09T09:40:39Z | https://github.com/InstaPy/InstaPy/issues/6542 | [] | adrielkirch | 1 |
gradio-app/gradio | data-visualization | 10,846 | Model3D: Use Babylon Viewer ESM | Model3D uses Babylon.js as a UMD module. Updating to ESM would allow tree-shaking and a reduced bundle size.
Also, Babylon.js's new viewer can help make the code smaller. It can come with UI elements like loading bars, animation controls, etc.
Current Canvas3D size: 4.6 MB
Updated Canvas3D size: 1.2 MB | open | 2025-03-20T16:52:47Z | 2025-03-21T11:04:30Z | https://github.com/gradio-app/gradio/issues/10846 | [
"enhancement"
] | CedricGuillemet | 0 |
pydantic/pydantic-ai | pydantic | 1,211 | Agent run regenerates a dynamic system-prompt if it is missing in 'message_history' | ### Description
Currently, the way an Agent run is implemented (as also stated in the docs) is that if 'message_history' is passed, a dynamic system prompt is not regenerated, even if 'message_history' does not contain a system prompt. This makes a fixed-size message buffer unnecessarily inelegant: rather than simply fetching the N most recent messages, you must first fetch a system prompt and then prepend it to the message buffer.
This would be easily solved if the 'run' function implemented the required check and called the system-prompt generation functions by itself when it does not find a system-prompt part_kind.
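To illustrate the workaround described above, a dependency-free sketch (the message shape and the helper name `with_system_prompt` are invented for illustration, not pydantic-ai's real types): keep the N most recent messages, but re-prepend the system prompt when trimming has dropped it.

```python
# Illustrative sketch only (not pydantic-ai's API): a fixed-size buffer
# that restores the system prompt when it has been trimmed away.
def with_system_prompt(history, system_prompt, n=4):
    recent = history[-n:]
    if not any(m["kind"] == "system" for m in recent):
        recent = [{"kind": "system", "content": system_prompt}] + recent
    return recent

msgs = [{"kind": "system", "content": "be brief"}] + [
    {"kind": "user", "content": f"q{i}"} for i in range(10)
]
trimmed = with_system_prompt(msgs, "be brief", n=4)
print(trimmed[0])    # {'kind': 'system', 'content': 'be brief'}
print(len(trimmed))  # 5
```

Having the Agent run do this check internally would remove the need for every caller to carry such a helper.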
### References
_No response_ | open | 2025-03-22T16:45:02Z | 2025-03-22T16:45:02Z | https://github.com/pydantic/pydantic-ai/issues/1211 | [] | abshake96 | 0 |
RobertCraigie/prisma-client-py | asyncio | 9 | Add support for batching queries | ## Problem
In some situations it is desirable that a write operation being committed is dependent on other write operations succeeding. For example, if you update 200 users within a transaction, each update must succeed; if not, all changes are rolled back and the transaction fails as a whole.
## Suggested solution
> We cannot support the same syntax that prisma [does](https://www.prisma.io/docs/concepts/components/prisma-client/transactions#the-transaction-api) as we need to support a synchronous API as well
This is a good solution as it can easily be modified to support dependencies between write operations if it is added to the prisma query engine, [#1844](https://github.com/prisma/prisma/issues/1844).
The only potential problem with this solution is that we are creating a new client reference, which could potentially be difficult for some users to implement, but I cannot think of a way to do this while maintaining type safety.
`async`
```py
async with client.batch_() as tx:
    tx.user.create({'name': 'Robert'})
    tx.user.create({'name': 'Bob'})

tx = client.batch_()
tx.user.create({'name': 'Robert'})
tx.user.create({'name': 'Bob'})
await tx.commit()
```
`sync`
```py
with client.batch_() as tx:
    tx.user.create({'name': 'Robert'})
    tx.user.create({'name': 'Bob'})

tx = client.batch_()
tx.user.create({'name': 'Robert'})
tx.user.create({'name': 'Bob'})
tx.commit()
```
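For the implementation discussion, here is a dependency-free sketch of the queue-then-commit semantics shown above (the names `Batch`, `create`, and `ops` are placeholders, not the planned prisma-client-py API): writes are only recorded until `commit()`, and the context-manager form commits automatically on a clean exit.

```python
# Dependency-free sketch of the proposed semantics (names are placeholders,
# not the planned prisma-client-py API).
class Batch:
    def __init__(self):
        self.ops = []
        self.committed = False

    def create(self, data):
        self.ops.append(("create", data))  # recorded, not executed yet

    def commit(self):
        self.committed = True  # the real client would send one request here
        return list(self.ops)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:   # only commit when the block raised nothing
            self.commit()
        return False

with Batch() as tx:
    tx.create({"name": "Robert"})
    tx.create({"name": "Bob"})
print(tx.committed)  # True
print(len(tx.ops))   # 2
```

This also shows why rollback-on-error falls out naturally: if the block raises, `__exit__` simply never commits the queued operations.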
## Additional context
* [https://www.prisma.io/docs/guides/prisma-guides/prisma-client-transactions-guide](https://www.prisma.io/docs/guides/prisma-guides/prisma-client-transactions-guide)
* [https://www.prisma.io/docs/concepts/components/prisma-client/transactions#the-transaction-api](https://www.prisma.io/docs/concepts/components/prisma-client/transactions#the-transaction-api)
## Implementation notes
As we have to type every action to return None there will be some refactoring to support it, but it should be possible with generics and if not we can always duplicate the whole client/actions classes with Jinja | closed | 2021-02-04T12:45:04Z | 2021-12-20T20:02:52Z | https://github.com/RobertCraigie/prisma-client-py/issues/9 | [
"kind/feature"
] | RobertCraigie | 2 |
plotly/dash-bio | dash | 558 | JSME Dash component | Would be a cool addition to Dash Bio for drawing molecules within Dash apps:
https://www.npmjs.com/package/jsme-react
<img width="792" alt="image" src="https://user-images.githubusercontent.com/1865834/116279586-908d7c80-a73c-11eb-998a-ede4dfcde0f3.png">
Looks like some prior work here:
https://iwatobipen.wordpress.com/2020/12/30/embed-molecular-editor-into-streamlit-app-streamlit-chemoinformatics-rdkit/ | closed | 2021-04-27T16:40:26Z | 2021-11-15T16:47:40Z | https://github.com/plotly/dash-bio/issues/558 | [] | jackparmer | 0 |
art049/odmantic | pydantic | 194 | key_name is not respected on embedded documents | # Bug
The key_name property on field is not respected for embedded documents.
Reproducer:
```
class Username(EmbeddedModel):
    name: str = Field(key_name="username", alias="username")

class Player(Model):
    name: Username = Field(key_name="username", alias="username")

print(Player(username=Username(username="Jack")).doc())
```
This prints:
```
{'username': {'name': 'Jack'}, '_id': ObjectId('61712ba07b12f975d53c9d4a')}
```
The expected outcome is:
```
{'username': {'username': 'Jack'}, '_id': ObjectId('61712ba07b12f975d53c9d4a')}
``` | closed | 2021-10-21T08:59:31Z | 2022-08-15T16:27:46Z | https://github.com/art049/odmantic/issues/194 | [
"bug"
] | jvanegmond | 1 |