| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
blacklanternsecurity/bbot | automation | 2,269 | Debug logs no longer attached to tests | Strangely it looks like our tests aren't uploading debug logs anymore.
 | closed | 2025-02-10T01:20:32Z | 2025-02-11T16:20:36Z | https://github.com/blacklanternsecurity/bbot/issues/2269 | [
"bug"
] | TheTechromancer | 1 |
geopandas/geopandas | pandas | 2,848 | BUG: Geopandas not displaying correctly all fields from geojson file | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import geopandas as gpd
# Read GeoPandas dataframe
gdf = gpd.read_file('filename.shp')
# Get GeoJSON features with all fields
features = gdf.__geo_interface__(include_bbox=True)['features']
# Print features
print(features)
```
#### Problem description
Please read my answer:
https://stackoverflow.com/a/75841445/5558021
I get wrong ids enumerated from 0, while the ids in my geojson file look like 11125, ...
I needed the IDs to display the GeoJSON in Folium.
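For reference, a framework-free sketch of the workaround from the linked answer (the helper name and the sample feature are illustrative, not geopandas API): inject the real ids into the feature dicts yourself:

```python
def features_with_ids(features, ids):
    """Attach the real ids (e.g. from an 'Id' column) in place of 0..n-1."""
    out = []
    for feature, real_id in zip(features, ids):
        fixed = dict(feature)  # shallow copy, leave the input untouched
        fixed["id"] = real_id
        out.append(fixed)
    return out

features = [{"type": "Feature", "id": 0, "properties": {}, "geometry": None}]
fixed = features_with_ids(features, [11125])
print(fixed[0]["id"])  # 11125
```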
#### Expected Output
Correct Id's and easier access to such fields.
#### Output of ``geopandas.show_versions()``
<details>
latest
</details>
| open | 2023-03-28T17:33:09Z | 2023-05-01T07:00:01Z | https://github.com/geopandas/geopandas/issues/2848 | [
"bug",
"needs triage"
] | dimka11 | 1 |
onnx/onnx | deep-learning | 6,533 | [reference] Improve ConcatFromSequence reference implementation | > I still have the same question. I agree there is an ambiguity in the ONNX documentation ... but the ambiguity applies to values other than -1 also. Eg., -2 as well.
>
> I think that the spec means that the specified axis is interpreted with respect to the output's shape (not the input's shape). So, the specified axis in the output will be the newly inserted axis.
>
> If I understand the spec of `np.expand_dims`, it is similar: https://numpy.org/doc/stable/reference/generated/numpy.expand_dims.html ... so, something seems off here.
_Originally posted by @gramalingam in https://github.com/onnx/onnx/pull/6369#discussion_r1774104460_
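The interpretation quoted above (the axis indexes into the output's shape, exactly as with `np.expand_dims`) can be sketched in pure Python; the helper name is illustrative:

```python
def expand_dims_shape(shape, axis):
    # axis is an index into the *output* shape, which has rank + 1 positions,
    # so axis=-1 always means "the new last axis of the output"
    ndim_out = len(shape) + 1
    if axis < 0:
        axis += ndim_out
    return shape[:axis] + (1,) + shape[axis:]

print(expand_dims_shape((2, 3), -1))  # (2, 3, 1)
print(expand_dims_shape((2, 3), -2))  # (2, 1, 3)
```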
| open | 2024-11-05T23:17:22Z | 2024-11-05T23:18:34Z | https://github.com/onnx/onnx/issues/6533 | [
"module: reference implementation",
"contributions welcome"
] | justinchuby | 0 |
FlareSolverr/FlareSolverr | api | 314 | error "The procedure entry point GetHostNameW could not be located in the dynamic link library WS2_32.dll" | Hello,
When trying to run flaresolverr.exe on Windows 7 64-bit, I get the following error:
"The procedure entry point GetHostNameW could not be located in the dynamic link library WS2_32.dll"
I ran `sfc /scannow` and the problem remains the same; the DLL is indeed present in System32.
| closed | 2022-02-15T18:58:19Z | 2022-04-16T19:00:23Z | https://github.com/FlareSolverr/FlareSolverr/issues/314 | [] | Bmxfou | 1 |
pyppeteer/pyppeteer | automation | 350 | Browser.dumpio Does Not Work On Windows | When running a basic example of a page that has some JS output, the JavaScript messages do not go to stdout on Windows as they should. Sample script was tested on Ubuntu 20.04 as a baseline.
Environment:
OS: Windows Server 2016
Python: 3.8.5
Pyppeteer: 1.0.2
Example Code:
```python
import sys, time, traceback, subprocess, asyncio

from pyppeteer import launch
from pyppeteer.page import Page, Response
from pyppeteer.browser import Browser

async def main(argv):
    try:
        print("Started")
        browser: Browser = await launch(
            headless=True
            , dumpio=True
            , args=["--log-level=1"]
        )
        page: Page = await browser.newPage()
        await page.setUserAgent('Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Mobile Safari/537.36')
        res: Response = await page.goto("https://www.programiz.com/javascript/debugging")
        if res.status != 200:
            raise RuntimeError(f'site is not available. status: {res.status}')
        time.sleep(1)
        await page.pdf({'path': './test.pdf'})
        print("Done")
        _exit(1)
    except Exception as e:
        exc_type, exc_value, exc_traceback = sys.exc_info()
        sTB = '\n'.join(traceback.format_tb(exc_traceback))
        print("Fatal exception: {}\n - msg: {}\n stack: {}".format(exc_type, exc_value, sTB))
        _exit(1)

def _exit(i):
    # Suppress further stdout from async processes
    sys.stdout = sys.stderr = subprocess.DEVNULL
    sys.exit(i)

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(main(sys.argv[1:]))
``` | open | 2022-01-20T03:42:38Z | 2022-01-21T01:15:33Z | https://github.com/pyppeteer/pyppeteer/issues/350 | [] | JavaScriptDude | 4 |
aiogram/aiogram | asyncio | 1,300 | Inline button not responding | ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
ubuntu 22
### Python version
3.10
### aiogram version
3.0.0
### Expected behavior
I expect the bot to respond with a message saying "Hi" when the inline button is clicked.
### Current behavior
The bot does not respond when the inline button is clicked. No message is displayed.
### Steps to reproduce
1. Start the application using python main.py.
2. Send the /test command to the bot.
3. Observe that an inline keyboard with a "Test" button is displayed.
4. Click on the "Test" button.
### Code example
```python3
# main.py
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def on_startup():
    webhook_info = await Bot.get_webhook_info()
    if webhook_info.url != webhook_url:
        await Bot.set_webhook(url=webhook_url)

@app.post(webhook_path)
async def feed_update(update: Dict[str, Any]) -> None:
    telegram_update = types.Update(**update)
    await dp.feed_webhook_update(bot=Bot, update=telegram_update)

if __name__ == "__main__":
    import logging
    logging.basicConfig(level=logging.INFO)
    main()
    uvicorn.run(app, port=5050)

# routers/init.py
router = Router()

@router.message(Command("test"))
async def language(message: Message):
    button = InlineKeyboardButton(text="Test", callback_data="test_me")
    keyboard = InlineKeyboardMarkup(inline_keyboard=[[button]])
    await message.reply("Just a test", reply_markup=keyboard)

@router.callback_query(lambda c: c.data == "test_me")
async def language_callback(callback: CallbackQuery):
    await callback.answer("Hi", show_alert=True)
```
### Logs
_No response_
### Additional information
I am using aiogram v3.0.
I have properly configured the ngrok tunnel and webhook URL.
The rest of the bot functionality is working fine. | closed | 2023-09-11T13:48:38Z | 2023-09-13T18:16:32Z | https://github.com/aiogram/aiogram/issues/1300 | [
"bug"
] | dagimafro | 2 |
akanz1/klib | data-visualization | 123 | Fix dpi computation | | closed | 2023-07-27T16:41:42Z | 2023-07-27T16:46:51Z | https://github.com/akanz1/klib/issues/123 | [] | akanz1 | 0 |
christabor/flask_jsondash | plotly | 199 | Support for PostgreSQL | I am trying to use flask_jsondash with my existing Flask app, which uses a PostgreSQL database. Any plans to add support for PostgreSQL? | closed | 2019-01-07T00:38:03Z | 2019-07-29T06:45:13Z | https://github.com/christabor/flask_jsondash/issues/199 | [] | threemonks | 4 |
onnx/onnx | deep-learning | 6,534 | [BUG] Unable to infer shapes for the Q and DQ nodes with INT16 data type. | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
I am using `from onnx.utils import extract_model` to extract a subgraph from a `16bit QDQ` quantized model, but I find that `onnx.shape_inference.infer_shapes` is unable to infer shapes for `16bit QDQ` nodes. It works fine for the other nodes, including `8bit QDQ` nodes.
### System information
ONNX version: 1.17.0
Python version: 3.9.13
Protobuf version:3.20.3
### Reproduction instructions
### Expected behavior
Since `8bit QDQ` nodes can infer shapes normally, I believe it should also apply to `16bit QDQ` nodes because there won't be any difference in terms of shape.
### Notes
In addition, I found that using `from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference` works fine for `16bit QDQ` nodes. However, I think it might be unreasonable to additionally introduce onnxruntime for `extract_model` in onnx. | open | 2024-11-07T06:02:49Z | 2025-01-02T08:52:06Z | https://github.com/onnx/onnx/issues/6534 | [
"bug",
"good first issue",
"module: shape inference",
"contributions welcome"
] | duanshengliu | 3 |
predict-idlab/plotly-resampler | plotly | 171 | Error with graph when a column has a boolean with always the same value | Hi,
we encounter an error when trying to draw a plot with a boolean column that always has the same value (True or False).
It appears when showing the mean aggregation size.
In that case, this line (https://github.com/predict-idlab/plotly-resampler/blob/39072f55409a12c7955cd977659c048cb28865c2/plotly_resampler/figure_resampler/figure_resampler_interface.py#L307) always returns 0, because it uses np.diff.
As a consequence, the following line (https://github.com/predict-idlab/plotly-resampler/blob/39072f55409a12c7955cd977659c048cb28865c2/plotly_resampler/figure_resampler/utils.py#L171) calls math.log10 on 0, and we get a math domain error (because log(0) is undefined).
We are bypassing this by disabling the display of the mean aggregation size (for all our graphs), but we love this feature, so it would be great if you could find a workaround! Just let me know, so I can tell my users whether this number will come back or not. And thanks for your great work!
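A possible guard, as a minimal sketch (the helper below is hypothetical; it only mirrors the `utils.py` logic and is not the library's actual code): special-case a zero bin size before calling `math.log10`:

```python
import math

def mean_bin_size_label(value):
    # a constant series (e.g. an all-True boolean column) yields
    # np.diff(...) == 0 everywhere, and math.log10(0) raises
    # "math domain error"; short-circuit that case first
    if value <= 0:
        return "0"
    exponent = math.floor(math.log10(value))
    return f"{value / 10 ** exponent:.1f}e{exponent}"

print(mean_bin_size_label(0))     # 0
print(mean_bin_size_label(1500))  # 1.5e3
```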
| closed | 2023-02-14T09:38:10Z | 2023-03-28T08:28:46Z | https://github.com/predict-idlab/plotly-resampler/issues/171 | [
"bug"
] | bwatt-fr | 8 |
thunlp/OpenPrompt | nlp | 164 | Error while using "wrapped_tokenizer.tokenize_one_example" | wrapped_tokenizer is a "WrapperClass" object I defined. When I run "wrapped_tokenizer.tokenize_one_example", I hit an error at line 48 of mlm.py (`if piece['loss_ids']==1:`). My wrapped sample looks like [[{"text":"xxxx"},{"text":"xxxxx"}....],{"guid":0, "label":0}], where the keys in the first part are all "text", but line 48 of mlm.py seems to require the inputs to include a "loss_ids" key.
I suppose the problem is in the "wrap_one_example" function of the prompt_base.py file. Is line 207 (`keys, values = ['text'], [text]`) correct, or is it a bug? The model I use is "bert_base". | open | 2022-06-22T07:25:26Z | 2022-06-22T07:25:26Z | https://github.com/thunlp/OpenPrompt/issues/164 | [] | yuand23 | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,294 | I am not able to understand how to run this repository | @CorentinJ, can you list all the steps for running your Real-Time Voice Cloning repository on Linux (Ubuntu)?
| open | 2024-03-28T12:51:27Z | 2024-04-19T06:16:04Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1294 | [] | Lakshbansal124 | 1 |
dynaconf/dynaconf | flask | 849 | [RFC] Ability to add custom error message for Validator | **Is your feature request related to a problem? Please describe.**
Currently, when using a Validator with a custom condition, the error message is not indicative enough.
For example:
`Validator("ipify_api_list", condition=lambda x: len(x) > 5)`
would result in:
`dynaconf.validator.ValidationError: ipify_api_list invalid for <lambda>(['api.ipify.com']) in env main`
**Describe the solution you'd like**
Add a new, "error_msg" option for Validator that will be shown as the error message
**Additional context**
I'll be glad to make a PR for this myself.
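A minimal sketch of how such an option could look (hypothetical API, not dynaconf's actual implementation): let the caller supply an error message template that is formatted with the setting's name and value:

```python
# Hypothetical sketch of the proposed "error_msg" option.
class ValidationError(Exception):
    pass

class Validator:
    def __init__(self, name, condition, error_msg=None):
        self.name = name
        self.condition = condition
        self.error_msg = error_msg or "{name} invalid for {value}"

    def validate(self, settings):
        value = settings.get(self.name)
        if not self.condition(value):
            raise ValidationError(
                self.error_msg.format(name=self.name, value=value))

v = Validator("ipify_api_list", condition=lambda x: len(x) > 5,
              error_msg="{name} must contain more than 5 entries, got {value}")
```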
| closed | 2023-01-03T10:26:43Z | 2023-03-02T14:22:28Z | https://github.com/dynaconf/dynaconf/issues/849 | [
"Not a Bug",
"RFC",
"Docs"
] | iTaybb | 3 |
mitmproxy/mitmproxy | python | 6,857 | When using mitmproxy, the client sometimes gets "502 Bad Gateway". | #### Problem Description
I use mitmproxy for testing, running it on my own PC (127.0.0.1). When I use Chrome or Firefox to load the test page, the response is always "502 Bad Gateway".
| closed | 2024-05-19T12:43:23Z | 2024-05-19T12:45:01Z | https://github.com/mitmproxy/mitmproxy/issues/6857 | [
"kind/triage"
] | yunhao666888 | 0 |
littlecodersh/ItChat | api | 639 | Question about @-mentioning users in a group | I have read some issues and understand that the web-version bot has no way to @-mention a user and trigger a notification.
But group-growth tools similar to Jianqunbao do @-mention users after they join a group; how is that implemented?
Is there a way to fake a group @-mention, or is there a WeChat bot based on the client rather than the web version?
| open | 2018-04-18T02:09:27Z | 2024-05-30T06:35:46Z | https://github.com/littlecodersh/ItChat/issues/639 | [
"help wanted"
] | Diyilou | 4 |
biolab/orange3 | numpy | 6,746 | Report doesn't displayed and I can't download report |
When I press the report button, I first receive errors, and after that nothing is shown in the report preview window. I also can't download the report as HTML or PDF: after I press the save button nothing happens, and I can only save an x.report file.

Click on report button.
**What's your environment?**
Orange Version 3.36.2
```
PRETTY_NAME="LMDE 6 (faye)"
NAME="LMDE"
VERSION_ID="6"
VERSION="6 (faye)"
VERSION_CODENAME=faye
ID=linuxmint
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.linuxmint.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
ID_LIKE=debian
DEBIAN_CODENAME=bookworm
```
How you installed Orange:
```
pipx install orange3
pipx inject orange3 PyQt5
pipx inject orange3 PyQtWebEngine
#run orange3
orange-canvas
```
```
pipx list --include-injected
venvs are in /home/lol/.local/pipx/venvs
apps are exposed on your $PATH at /home/lol/.local/bin
package orange3 3.36.2, installed using Python 3.11.2
- orange-canvas
Injected Packages:
- pyqt5 5.15.10
- pyqtwebengine 5.15.6
package pyqt5 5.15.10, installed using Python 3.11.2
- pylupdate5
- pyrcc5
- pyuic5
``` | open | 2024-02-25T17:14:57Z | 2024-11-29T16:11:59Z | https://github.com/biolab/orange3/issues/6746 | [
"bug",
"bug report"
] | DevopsDmytro | 4 |
babysor/MockingBird | deep-learning | 243 | Have you experimented with computing the m2_hat loss against the linear spectrogram? | Hello, I noticed that the postnet is used to convert mel to linear, but looking at the training process, the linear spectrogram is not used; the losses are computed against the mel spectrogram instead.
May I ask whether you have experimented with computing the m2_hat loss against the linear spectrogram?
If you have, could you share how the results compared?
The loss computation is pasted below:
```python
# Backward pass
m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels)
m2_loss = F.mse_loss(m2_hat, mels)
stop_loss = F.binary_cross_entropy(stop_pred, stop)
``` | open | 2021-11-29T12:45:14Z | 2021-11-30T08:48:58Z | https://github.com/babysor/MockingBird/issues/243 | [] | ghost | 4 |
scrapy/scrapy | web-scraping | 6,652 | Remove the AjaxCrawlMiddleware docs | In https://github.com/scrapy/scrapy/pull/6651 we deprecated the component and marked it as such in the docs.
However, I think we should remove it from the docs entirely, except for `news.rst`. | closed | 2025-02-03T12:13:49Z | 2025-02-05T17:05:55Z | https://github.com/scrapy/scrapy/issues/6652 | [
"docs",
"cleanup"
] | Gallaecio | 2 |
Johnserf-Seed/TikTokDownload | api | 445 | Rewriting the X-Bogus algorithm in Python | The X-Bogus algorithm has now been fully rewritten in Python.

| open | 2023-06-07T10:42:12Z | 2023-08-07T13:33:06Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/445 | [] | Johnserf-Seed | 6 |
keras-team/keras | tensorflow | 20,333 | Custom activation functions cause TensorFlow to crash | I originally posted this issue in the [TensorFlow GitHub](https://github.com/tensorflow/tensorflow/issues/77048), and was told it looks like a Keras issue and I should post it here.
TensorFlow version:
2.17.0
OS:
Linux Mint 22
Python version:
3.12.7
Issue:
I can successfully define a custom activation function, but when I try to use it TensorFlow crashes.
Minimal reproducible example:
```python
import tensorflow as tf
from tensorflow.keras.utils import get_custom_objects
from tensorflow.keras.layers import Activation

def fourier_activation_lambda(freq):
    fn = lambda x: tf.sin(freq * x)
    return fn

freq = 1.0
fourier = fourier_activation_lambda(freq)
get_custom_objects()["fourier"] = Activation(fourier)

print(3 * "\n")
print(f"After addition: {get_custom_objects()=}")

x_input = tf.keras.Input(shape=[5])
activation = "fourier"
layer_2 = tf.keras.layers.Dense(100, input_shape=[5],
                                activation=activation,
                                )(x_input)
model = tf.keras.Model(inputs=x_input, outputs=layer_2)
model.compile(optimizer='adam', loss='mse')
model.summary()
```
The output of the print statement above indicates that the custom activation function was added successfully. Maybe the crash is related to "built=False"?
```bash
# output of print statement
get_custom_objects()={'fourier': <Activation name=activation, built=False>}
```
The error message reads:
```bash
# error message
Traceback (most recent call last):
  File "/home/orca/Downloads/minimal_tf_err.py", line 20, in <module>
    layer_2 = tf.keras.layers.Dense(100, input_shape = [5],
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/orca/.local/lib/python3.12/site-packages/keras/src/layers/core/dense.py", line 89, in __init__
    self.activation = activations.get(activation)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/orca/.local/lib/python3.12/site-packages/keras/src/activations/__init__.py", line 104, in get
    raise ValueError(
ValueError: Could not interpret activation function identifier: fourier
``` | closed | 2024-10-08T14:04:04Z | 2024-11-29T02:07:11Z | https://github.com/keras-team/keras/issues/20333 | [
"stat:awaiting response from contributor",
"stale"
] | AtticusBeachy | 7 |
miguelgrinberg/Flask-Migrate | flask | 518 | The pgvector vector field cannot generate a proper migration script | Pgvector is a vector extension plugin for PostgreSQL databases.
This is my code.

Running `flask db migrate` generates a broken migration script with the pgvector import missing.

| closed | 2023-07-07T02:42:16Z | 2023-07-07T09:15:05Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/518 | [] | glacierck | 1 |
pinry/pinry | django | 20 | Error when self-testing with `python manage.py test` | Hi,
I am not sure whether it is OK to post this here; if it is not allowed, please ignore or delete it.
I am getting a Windows error 123 when running the self tests. I am using the sqlite3 database.
If I run `python manage.py runserver` I can log in, but I cannot upload any picture.
The errors are as below:
```
======================================================================
ERROR: test_put_detail_unauthorized (pinry.core.tests.api.PinResourceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\djcode\pinry-master\pinry\core\tests\api.py", line 148, in test_put_detail_unauthorized
    uri = '/api/v1/pin/{}/'.format(PinFactory(submitter=self.user).pk)
  File "C:\Python27\lib\site-packages\factory\base.py", line 77, in __call__
    return cls.create(**kwargs)
  File "C:\Python27\lib\site-packages\factory\base.py", line 647, in create
    attrs = cls.attributes(create=True, extra=kwargs)
  File "C:\Python27\lib\site-packages\factory\base.py", line 314, in attributes
    return containers.AttributeBuilder(cls, extra).build(create)
  File "C:\Python27\lib\site-packages\factory\containers.py", line 274, in build
    return stub.__fill__()
  File "C:\Python27\lib\site-packages\factory\containers.py", line 75, in __fill__
    res[attr] = getattr(self, attr)
  File "C:\Python27\lib\site-packages\factory\containers.py", line 94, in __getattr__
    val = val.evaluate(self, self.__containers)
  File "C:\Python27\lib\site-packages\factory\containers.py", line 196, in evaluate
    expanded_containers)
  File "C:\Python27\lib\site-packages\factory\declarations.py", line 279, in evaluate
    return self.generate(create, defaults)
  File "C:\Python27\lib\site-packages\factory\declarations.py", line 338, in generate
    return subfactory.create(**params)
  File "C:\Python27\lib\site-packages\factory\base.py", line 648, in create
    return cls._generate(True, attrs)
  File "C:\Python27\lib\site-packages\factory\base.py", line 617, in _generate
    obj = cls._prepare(create, **attrs)
  File "C:\Python27\lib\site-packages\factory\base.py", line 593, in _prepare
    return creation_function(target_class, *args, **kwargs)
  File "C:\Python27\lib\site-packages\factory\base.py", line 41, in DJANGO_CREATION
    return class_to_create.objects.create(**kwargs)
  File "C:\Python27\lib\site-packages\django\db\models\manager.py", line 149, in create
    return self.get_query_set().create(**kwargs)
  File "C:\Python27\lib\site-packages\django\db\models\query.py", line 414, in create
    obj.save(force_insert=True, using=self.db)
  File "C:\Python27\lib\site-packages\django\db\models\base.py", line 546, in save
    force_update=force_update, update_fields=update_fields)
  File "C:\Python27\lib\site-packages\django\db\models\base.py", line 591, in save_base
    update_fields=update_fields)
  File "C:\Python27\lib\site-packages\django\db\models\base.py", line 650, in save_base
    result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
  File "C:\Python27\lib\site-packages\django\db\models\manager.py", line 215, in _insert
    return insert_query(self.model, objs, fields, **kwargs)
  File "C:\Python27\lib\site-packages\django\db\models\query.py", line 1673, in insert_query
    return query.get_compiler(using=using).execute_sql(return_id)
  File "C:\Python27\lib\site-packages\django\db\models\sql\compiler.py", line 936, in execute_sql
    for sql, params in self.as_sql():
  File "C:\Python27\lib\site-packages\django\db\models\sql\compiler.py", line 894, in as_sql
    for obj in self.query.objs
  File "C:\Python27\lib\site-packages\django\db\models\fields\files.py", line 250, in pre_save
    file.save(file.name, file, save=False)
  File "C:\Python27\lib\site-packages\django\db\models\fields\files.py", line 86, in save
    self.name = self.storage.save(name, content)
  File "C:\Python27\lib\site-packages\django\core\files\storage.py", line 48, in save
    name = self._save(name, content)
  File "C:\Python27\lib\site-packages\django\core\files\storage.py", line 171, in _save
    os.makedirs(directory)
  File "C:\Python27\lib\os.py", line 150, in makedirs
    makedirs(head, mode)
  File "C:\Python27\lib\os.py", line 150, in makedirs
    makedirs(head, mode)
  File "C:\Python27\lib\os.py", line 150, in makedirs
    makedirs(head, mode)
  File "C:\Python27\lib\os.py", line 157, in makedirs
    mkdir(name, mode)
WindowsError: [Error 123] The filename, directory name, or volume label syntax is incorrect: u'C:\Python27\djcode\pinry-master\media\image\original\by-md5\6\5\658e8dc0bf8b9a09b36994abf9242099\C:'

======================================================================
ERROR: test_has_perm_on_pin (pinry.users.tests.CombinedAuthBackendTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "C:\Python27\djcode\pinry-master\pinry\users\tests.py", line 39, in test_has_perm_on_pin
    image = Image.objects.create_for_url('http://testserver/mocked/screenshot.png')
  File "C:\Python27\djcode\pinry-master\pinry\core\models.py", line 22, in create_for_url
    return Image.objects.create(image=obj)
  File "C:\Python27\lib\site-packages\django\db\models\manager.py", line 149, in create
    return self.get_query_set().create(**kwargs)
  File "C:\Python27\lib\site-packages\django\db\models\query.py", line 414, in create
    obj.save(force_insert=True, using=self.db)
  File "C:\Python27\lib\site-packages\django\db\models\base.py", line 546, in save
    force_update=force_update, update_fields=update_fields)
  File "C:\Python27\lib\site-packages\django\db\models\base.py", line 591, in save_base
    update_fields=update_fields)
  File "C:\Python27\lib\site-packages\django\db\models\base.py", line 650, in save_base
    result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
  File "C:\Python27\lib\site-packages\django\db\models\manager.py", line 215, in _insert
    return insert_query(self.model, objs, fields, **kwargs)
  File "C:\Python27\lib\site-packages\django\db\models\query.py", line 1673, in insert_query
    return query.get_compiler(using=using).execute_sql(return_id)
  File "C:\Python27\lib\site-packages\django\db\models\sql\compiler.py", line 936, in execute_sql
    for sql, params in self.as_sql():
  File "C:\Python27\lib\site-packages\django\db\models\sql\compiler.py", line 894, in as_sql
    for obj in self.query.objs
  File "C:\Python27\lib\site-packages\django\db\models\fields\files.py", line 250, in pre_save
    file.save(file.name, file, save=False)
  File "C:\Python27\lib\site-packages\django\db\models\fields\files.py", line 87, in save
    setattr(self.instance, self.field.name, self.name)
  File "C:\Python27\lib\site-packages\django\db\models\fields\files.py", line 309, in __set__
    self.field.update_dimension_fields(instance, force=True)
  File "C:\Python27\lib\site-packages\django\db\models\fields\files.py", line 379, in update_dimension_fields
    width = file.width
  File "C:\Python27\lib\site-packages\django\core\files\images.py", line 15, in _get_width
    return self._get_image_dimensions()[0]
TypeError: 'NoneType' object has no attribute '__getitem__'
```
| closed | 2013-03-25T08:06:28Z | 2013-04-05T17:11:36Z | https://github.com/pinry/pinry/issues/20 | [
"bug"
] | jashpal | 3 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 229 | Inference with C++ code | Hi,
I am very interested in your project. I would like to know whether it is possible to run inference with the pretrained model directly on a test set of images, but using a C++ implementation. If yes, could I ask for some help with a rough sketch of how to do it?
Thank you for your feedback.
Best | closed | 2022-04-26T09:18:25Z | 2022-07-23T06:49:42Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/229 | [] | FrancescaCi | 3 |
sunscrapers/djoser | rest-api | 508 | Extend functionality of token-based auth | I want to extend the functionality of token auth to check the password as well as one-time passwords. I didn't find anything relevant in the docs.
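A hypothetical sketch of the idea in plain Python (none of this is djoser's actual API; `DummyUser` stands in for a Django user): try the password first, then fall back to a single-use value stored elsewhere:

```python
class DummyUser:
    def __init__(self, username, password):
        self.username = username
        self._password = password

    def check_password(self, raw):
        return raw == self._password  # stand-in for Django's hashed check

def check_credentials(user, candidate, otp_store):
    if user.check_password(candidate):
        return True
    stored = otp_store.pop(user.username, None)  # one-time: consumed on use
    return stored is not None and stored == candidate

user = DummyUser("alice", "s3cret")
otps = {"alice": "914277"}
print(check_credentials(user, "914277", otps))  # True
print(check_credentials(user, "914277", otps))  # False (already consumed)
```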
Edit: Basically, check the password. If it doesn't match, check whether the provided value matches another value stored somewhere else. | closed | 2020-06-24T14:42:11Z | 2020-06-25T06:14:03Z | https://github.com/sunscrapers/djoser/issues/508 | [] | sprajosh | 0 |
HIT-SCIR/ltp | nlp | 411 | How can I train an NER model with my own dataset? | How can I train an NER model with my own dataset? | closed | 2020-09-21T07:59:19Z | 2020-10-12T00:30:18Z | https://github.com/HIT-SCIR/ltp/issues/411 | [] | LHMdanchaofan | 2 |
jessevig/bertviz | nlp | 93 | Show Error | Hello, I can't run the demo: the layer and attention dropdown boxes are shown, but I can't select a layer, and no figure appears.

| closed | 2022-03-24T03:27:34Z | 2022-04-06T01:02:27Z | https://github.com/jessevig/bertviz/issues/93 | [] | ShDdu | 23 |
apache/airflow | data-science | 47,949 | Support deferral mode for `TriggerDagRunOperator` with Task SDK | Follow-up of https://github.com/apache/airflow/pull/47882 . In that PR `TriggerDagRunOperator` was ported to work with the Task SDK.
https://github.com/apache/airflow/blob/c7a0681a61c19f14055c5dfd4e58d915f73d16c3/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py#L236
However, the `deferral` mode needs access to the DB, which we should port over.
and add similar logic to Airflow 2
https://github.com/apache/airflow/blob/c7a0681a61c19f14055c5dfd4e58d915f73d16c3/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py#L276-L287
at
https://github.com/apache/airflow/blob/c7a0681a61c19f14055c5dfd4e58d915f73d16c3/providers/standard/src/airflow/providers/standard/operators/trigger_dagrun.py#L307-L334 | open | 2025-03-19T11:21:08Z | 2025-03-21T06:51:05Z | https://github.com/apache/airflow/issues/47949 | [
"priority:medium",
"area:core-operators",
"area:async-operators",
"area:task-execution-interface-aip72",
"area:task-sdk",
"affected_version:3.0.0beta"
] | kaxil | 1 |
albumentations-team/albumentations | deep-learning | 1,993 | ImportError: cannot import name 'KeypointType' from 'albumentations' | ## Describe the bug
The bug is as stated: I cannot import `KeypointType`, and I constantly get this error:
```Python
ImportError: cannot import name 'KeypointType' from 'albumentations'
```
I have seen [this closed issue](https://github.com/albumentations-team/albumentations/issues/1600) about what seems to be the exact same problem. However, nothing suggested in that thread solved my issue, and the versions aren't the same.
I have tried importing in the following ways:
```Python
from albumentations import KeyPointType
```
```Python
from albumentations.core.transforms_interface import KeypointType
```
Both which landed me right where I was at the beginning.
### To Reproduce
Steps to reproduce the behavior:
1. Ubuntu 22.04.5 LTS
2. Using Albumentations 1.4.18
### Additional context
- I have tried to downgrade to 1.4.14 to no avail
- I have tried to remake my virtual environment about thrice, removing and rebuilding my lock file with poetry to ensure that it wasn't a dependency issue. | closed | 2024-10-17T01:10:24Z | 2024-10-18T13:07:11Z | https://github.com/albumentations-team/albumentations/issues/1993 | [
"bug"
] | VincentPelletier1 | 2 |
nschloe/tikzplotlib | matplotlib | 582 | AttributeError: 'Legend' object has no attribute '_ncol'. Did you mean: '_ncols'? | When attempting to use Legends, the following error is presented.
> AttributeError: 'Legend' object has no attribute '_ncol'. Did you mean: '_ncols'?
A quick search of the [matplotlib docs](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.legend.html) showed that
> The number of columns that the legend has.
> For backward compatibility, the spelling ncol is also supported but it is discouraged. If both are given, ncols takes precedence
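Since `_ncol` is what older matplotlib exposes and `_ncols` is the new spelling, a version-tolerant sketch (the helper name and stand-in classes are illustrative) is to read the attribute with a `getattr` fallback:

```python
def legend_ncols(legend):
    # matplotlib >= 3.6 renamed Legend._ncol to Legend._ncols;
    # try the new spelling first and fall back to the old one
    return getattr(legend, "_ncols", getattr(legend, "_ncol", 1))

class OldLegend:   # stand-in for a pre-3.6 Legend
    _ncol = 2

class NewLegend:   # stand-in for a 3.6+ Legend
    _ncols = 3

print(legend_ncols(OldLegend()))  # 2
print(legend_ncols(NewLegend()))  # 3
```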
### The Fix
`_legends.py` (line 81): change
```python
if obj._ncol != 1:
    data["current axes"].axis_options.append(f"legend columns={obj._ncol}")
```
to
```python
if obj._ncols != 1:
    data["current axes"].axis_options.append(f"legend columns={obj._ncols}")
``` | open | 2023-04-24T01:19:53Z | 2024-04-17T13:46:37Z | https://github.com/nschloe/tikzplotlib/issues/582 | [] | aiodfy | 2 |
Asabeneh/30-Days-Of-Python | flask | 547 | 02_variables_builtin_functions.md example code | **Error in example for Casting in section Checking Data types and Casting for the following code;**
```# int to float
num_int = 10
print('num_int',num_int) # 10
num_float = float(num_int)
print('num_float:', num_float) # 10.0
# float to int
gravity = 9.81
print(int(gravity)) # 9
# int to str
num_int = 10
print(num_int) # 10
num_str = str(num_int)
print(num_str) # '10'
# str to int or float
num_str = '10.6'
print('num_int', int(num_str)) # 10
print('num_float', float(num_str)) # 10.6
# str to list
first_name = 'Asabeneh'
print(first_name) # 'Asabeneh'
first_name_to_list = list(first_name)
print(first_name_to_list) # ['A', 's', 'a', 'b', 'e', 'n', 'e', 'h']
```
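The failure comes from `int(num_str)` when `num_str` is `'10.6'`: `int()` cannot parse a string that contains a decimal point. A small sketch of the fix is to convert via `float` first:

```python
num_str = '10.6'
# int() rejects '10.6' directly, so go through float and then truncate
num_int = int(float(num_str))
print('num_int', num_int)           # num_int 10
print('num_float', float(num_str))  # num_float 10.6
```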
**Error:**
```
line 87, in <module>
    print('num_int', int(num_str)) # 10
    ^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: '10.6'
``` | open | 2024-07-04T21:45:40Z | 2024-07-05T06:39:00Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/547 | [] | BVenetta | 1 |
modelscope/data-juicer | data-visualization | 145 | [bug] HF_DATASETS_CACHE dependencies need to be modified as well | After updating HF_DATASETS_CACHE for datasets using customized ds_cache_dir arg, the variables that depend on HF_DATASETS_CACHE also need to be modified. (e.g. DOWNLOADED_DATASETS_PATH, EXTRACTED_DATASETS_PATH)
Otherwise, when reading '.jsonl.zst' files, the extracted intermediate files will still be stored in the default cache dir.
Related vars in datasets.config:
<img width="920" alt="image" src="https://github.com/alibaba/data-juicer/assets/12782861/67aae311-d862-4b9f-b607-8bea726868fd">
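A minimal sketch of pointing the cache and both derived locations at a custom directory before `datasets` is imported. The environment variable names follow the `datasets.config` snippet above, but whether setting them is sufficient depends on the `datasets` version, so treat this as an assumption to verify:

```python
import os

def set_hf_cache_dirs(base_dir):
    """Point the HF datasets cache and its derived paths at base_dir.

    Must be called before `import datasets`, because datasets.config
    reads these environment variables once at import time.
    """
    downloads = os.path.join(base_dir, "downloads")
    extracted = os.path.join(downloads, "extracted")
    os.environ["HF_DATASETS_CACHE"] = base_dir
    os.environ["HF_DATASETS_DOWNLOADED_DATASETS_PATH"] = downloads
    os.environ["HF_DATASETS_EXTRACTED_DATASETS_PATH"] = extracted
    return {"cache": base_dir, "downloads": downloads, "extracted": extracted}

paths = set_hf_cache_dirs("/tmp/my_ds_cache")
print(paths["extracted"])
```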
| closed | 2023-12-20T07:59:46Z | 2023-12-26T14:40:47Z | https://github.com/modelscope/data-juicer/issues/145 | [
"bug"
] | HYLcool | 0 |
mobarski/ask-my-pdf | streamlit | 57 | Question | Is it necessary to use an OpenAI API key, or do you have any advice on creating the API from scratch? | open | 2023-05-20T20:49:49Z | 2023-05-20T20:49:49Z | https://github.com/mobarski/ask-my-pdf/issues/57 | [] | Kell1000 | 0
dask/dask | pandas | 11,146 | Dask 2024.5.1 removed `.attrs` | The latest release removed `.attrs`. This breaks backward compatibility and also Pandas still uses it, so I kindly ask if it can be reinstated.
To reproduce (the last assert should pass, but currently fails):
```python
import pandas as pd
import dask.dataframe as dd
df = dd.from_pandas(pd.DataFrame())
assert hasattr(df.compute(), 'attrs') == True
assert hasattr(df, 'attrs') == True
```
**Environment**:
- Dask version: 2024.5.1
- Python version: 3.12.2
- Operating System: macOS Sonoma 14.5
- Install method (conda, pip, source): conda
| open | 2024-05-24T17:50:52Z | 2024-11-13T18:29:49Z | https://github.com/dask/dask/issues/11146 | [
"needs triage"
] | LucaMarconato | 13 |
scikit-multilearn/scikit-multilearn | scikit-learn | 207 | new unseen combination of labels | @niedakh @queirozfcom @fmaguire @Antiavanti @elzbietaZ
Hi! And thank you for your awesome service! I use your product heavily, with very helpful and insightful results! Much appreciated! Maybe you can help me understand what happens in LabelPowerset when there is a combination of labels in the test set that was unseen, i.e. it did not exist in the training set. How can the algorithm possibly predict the right answer? Maybe there is a solution for this. Thank you!
yishairasowsky@gmail.com | closed | 2020-08-22T20:42:54Z | 2023-03-14T16:54:42Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/207 | [] | yishairasowsky | 4 |
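To illustrate the limitation being asked about: LabelPowerset maps each distinct label combination seen in training to a single multiclass target, so an unseen combination has no class the model could ever output. A library-free sketch of that transformation (toy data, not the scikit-multilearn implementation):

```python
# Sketch of the label-powerset transformation: each distinct label
# combination seen in training becomes one multiclass label.
train_label_sets = [
    (1, 0, 0),
    (1, 1, 0),
    (0, 0, 1),
]

# Build the combination -> class mapping from the training set only.
combo_to_class = {}
for combo in train_label_sets:
    combo_to_class.setdefault(combo, len(combo_to_class))

print(combo_to_class)  # {(1, 0, 0): 0, (1, 1, 0): 1, (0, 0, 1): 2}

# A test-set combination never seen in training has no class id, so a
# label-powerset classifier can never predict it exactly; at best it
# predicts the "closest" seen combination.
unseen = (0, 1, 1)
print(unseen in combo_to_class)  # False
```

If unseen combinations matter for your task, problem transformations that predict labels separately (for example Binary Relevance or Classifier Chains) can emit combinations never seen during training.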
miguelgrinberg/Flask-SocketIO | flask | 1,052 | flask socketio with gunicorn | I am working on a chat app, and while deploying the app with gunicorn I am facing the error below.
My wsgi.py file looks like:
```
from server import app
from flask_socketio import SocketIO
# async_mode = None
socketio = SocketIO(app)
if __name__ == "__main__":
socketio.run(app)
```
After running this command:
`gunicorn -w 3 wsgi:app`
I am facing the error below.
Error:
RuntimeError: You need to use the gevent-websocket server. See the Deployment section of the documentation for more information.
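For reference, the Flask-SocketIO deployment docs call for a single async-capable worker rather than gunicorn's default sync workers, which is what the `RuntimeError` above is pointing at. A hedged sketch of the two common invocations (module path `wsgi:app` assumed from the snippet above; the extra packages must be installed):

```shell
# Option 1: eventlet worker (pip install eventlet)
gunicorn --worker-class eventlet -w 1 wsgi:app

# Option 2: gevent with WebSocket support (pip install gevent gevent-websocket)
gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 wsgi:app
```

Note that `-w 3` in the original command is also a problem: without a message queue, Flask-SocketIO supports only one worker process.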
Authentication block in the server | closed | 2019-09-03T03:50:18Z | 2020-08-14T06:04:25Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1052 | [
"question"
] | tasnuvaleeya | 4 |
lucidrains/vit-pytorch | computer-vision | 223 | MaxViT's MbConv doesn't match article. | According to [article](https://arxiv.org/pdf/2204.01697.pdf), this block should be F -> 4 F -> F channel mapping, as authors use 4 for InvertedBottleneck's expansion.
But [here](https://github.com/lucidrains/vit-pytorch/blob/81661e3966629fd8dcf3d39839ff661ab2af19ce/vit_pytorch/max_vit.py#L103) `hidden_dim` is not used at all; only `dim_in`/`dim_out` are used.
Given that you call this like `MBConv(layer_dim, layer_dim)`, you actually use 1 for effective expansion rate (i.e. no expansion), which is likely undesired.
Also, original article uses GELU activation instead of SiLU, but I don't know whether this should be fixed too.
It should be
```python
def MBConv(
dim_in,
dim_out,
*,
downsample,
expansion_rate = 4,
shrinkage_rate = 0.25,
dropout = 0.
):
hidden_dim = int(expansion_rate * dim_out)
stride = 2 if downsample else 1
net = nn.Sequential(
nn.Conv2d(dim_in, hidden_dim, 1),
nn.BatchNorm2d(hidden_dim),
nn.GELU(),
nn.Conv2d(hidden_dim, hidden_dim, 3, stride=stride, padding=1, groups=hidden_dim),
nn.BatchNorm2d(hidden_dim),
nn.GELU(),
SqueezeExcitation(hidden_dim, shrinkage_rate=shrinkage_rate),
nn.Conv2d(hidden_dim, dim_out, 1),
nn.BatchNorm2d(dim_out),
)
if dim_in == dim_out and not downsample:
net = MBConvResidual(net, dropout=dropout)
return net
```
| closed | 2022-06-24T21:51:13Z | 2022-06-28T23:01:22Z | https://github.com/lucidrains/vit-pytorch/issues/223 | [] | arquolo | 1 |
aminalaee/sqladmin | asyncio | 806 | Support for pgvector.sqlalchemy.vector.VECTOR field support | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
sqladmin/forms.py", line 274, in get_converter
raise NoConverterFound( # pragma: nocover
sqladmin.exceptions.NoConverterFound: Could not find field converter for column embedding (<class 'pgvector.sqlalchemy.vector.VECTOR'>).
[tool.poetry.dependencies]
python = "^3.11"
pydantic = "^2.8.2"
sqlmodel = "^0.0.21"
pgvector = "^0.3.2"
psycopg2-binary = "^2.9.9"
sqladmin = {extras = ["full"], version = "^0.18.0"}
uvicorn = "^0.30.6"
### Describe the solution you would like.
Need field converter for column embedding (<class 'pgvector.sqlalchemy.vector.VECTOR'>)
### Describe alternatives you considered
Raised issue in pgvector git repo - https://github.com/pgvector/pgvector-python/issues/88
### Additional context
Your tool is a life saver. Thanks for the hard work | open | 2024-08-20T12:47:53Z | 2024-09-12T09:07:23Z | https://github.com/aminalaee/sqladmin/issues/806 | [] | spsoni | 1 |
jina-ai/clip-as-service | pytorch | 414 | ModuleNotFoundError:No Module named 'tensorflow' | Hello, I installed the latest client and server following the installation steps, and Python and TF are both installed. After running `source activate tensorflow` and entering the `bert...start` command, it tells me TensorFlow cannot be found in help.py, even though I had clearly activated the environment. I don't know why and really can't figure it out, help me please~~~ | open | 2019-07-15T05:27:25Z | 2021-05-20T07:05:55Z | https://github.com/jina-ai/clip-as-service/issues/414 | [] | 27232xsl | 5
jmcnamara/XlsxWriter | pandas | 214 | Support for "text box" insert | Are there any plans to support inserting text boxes? For large bodies of text, such as disclaimers, a text box is easier to manage than trying to fit the text in a cell.
Thanks
George Lovas
| closed | 2015-01-22T15:27:45Z | 2020-04-13T09:56:40Z | https://github.com/jmcnamara/XlsxWriter/issues/214 | [
"feature request"
] | georgelovas | 13 |
Nemo2011/bilibili-api | api | 896 | [Question] bilibili_api.comment has no BV variant for videos | The library code for fetching comments only supports AV IDs, not BV IDs:
"Video: AV number: av{170001}."
| closed | 2025-03-04T08:07:15Z | 2025-03-10T14:18:45Z | https://github.com/Nemo2011/bilibili-api/issues/896 | [
"question"
] | EngineYe | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 601 | Vision Transformer: question about modifying image_size and patch_size | Hello!
1. In ViT, after adding `def resize_pos_embed(posemb, posemb_new)` to interpolate the position embedding, can I then change `image_size` and `patch_size` (while still using the weight file originally provided by the author)?
2. Are the weights of the 2D conv in Patch Embedding updated during training? It seems this convolution does not need to be trained; if its weights were not in the weight file, it feels like `patch_size` could be modified?
| closed | 2022-07-24T03:49:47Z | 2022-07-24T15:27:41Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/601 | [] | Liqq1 | 2 |
CTFd/CTFd | flask | 2,684 | [Bug/Question] Changes in views.py not reflected | **Environment**:
- CTFd Version/Commit: Version 3.7.4
- Operating System: Ubuntu and Windows 11
- Web Browser and Version: Mozilla Firefox 133.0 and Opera GX 115.0.5322.124
I've modified the views.py file to add links to my networks. But the changes are not reflected on the homepage, even after restarting the Docker containers and clearing the browser cache.
How can I ensure that the changes are taken into account?
Here's what I've changed at line 209 of views.py:
```html
<div class="col-md-6 offset-md-3">
<img class="w-100 mx-auto d-block" style="max-width: 500px;padding: 50px;padding-top: 14vh;" src="{default_ctf_banner_location}" />
<h3 class="text-center">
<p>A cool CTF platform from <a href="https://ctfd.io">ctfd.io</a></p>
<p>Follow ctfd on social media:</p>
<a href="https://twitter.com/ctfdio"><i class="fab fa-twitter fa-2x" aria-hidden="true"></i></a>
<a href="https://facebook.com/ctfdio"><i class="fab fa-facebook fa-2x" aria-hidden="true"></i></a>
<a href="https://github.com/ctfd"><i class="fab fa-github fa-2x" aria-hidden="true"></i></a>
</h3>
<br>
<h4 class="text-center">
<p>Esteban social media:</p>
<a href="https://github.com/EstebanRemond"><i class="fab fa-github fa-2x" aria-hidden="true"></i></a>
</h4>
</div>
```
Thank you in advance for your help.
| closed | 2024-12-24T10:24:03Z | 2024-12-26T20:55:05Z | https://github.com/CTFd/CTFd/issues/2684 | [] | EstebanRemond | 2 |
piskvorky/gensim | machine-learning | 2,708 | Error When Running LDA with 2000 Topics | Hi All,
I'm running LDA using Gensim on the full English Wikipedia corpus. I've been trying out different numbers of topics to figure out what gives the best performance and I've tried out 100, 500 and 1000 with no problems. However, when I set the number of topics to 2000 I get the following errors:
```
INFO : accepted corpus with 4245368 documents, 100000 features, 702829711 non-zero entries
INFO : using symmetric alpha at 0.0005
INFO : using symmetric eta at 0.0005
INFO : using serial LDA version on this node
INFO : running online LDA training, 2000 topics, 1 passes over the supplied corpus of 4245368 documents, updating every 158000 documents, evaluating every ~1580000 documents, iterating 50x with a convergence threshold of 0.001000
INFO : training LDA model using 79 processes
INFO : PROGRESS: pass 0, dispatched chunk #0 = documents up to #2000/4245368, outstanding queue size 1
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 393, in _send_bytes
header = struct.pack("!i", n)
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
INFO : PROGRESS: pass 0, dispatched chunk #1 = documents up to #4000/4245368, outstanding queue size 2
```
Which I consistently get for each chunk. Is there some kind of limit that I'm hitting with LDA? Also, as an aside, if anyone has run LDA on the full Wikipedia corpus, what was the most topics that you could get out of it?
Thanks
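For context: the `struct.error` in the traceback comes from the 4-byte signed length header that `multiprocessing` writes when sending a pickled object through a pipe, so any single payload over 2147483647 bytes (about 2 GiB) cannot be dispatched. With 2000 topics and 100000 features, the per-job payload (chunk plus model state) likely crosses that limit. The limit itself can be reproduced with the stdlib alone:

```python
import struct

INT32_MAX = 2_147_483_647  # largest payload length "!i" can encode

def fits_in_pipe_header(nbytes: int) -> bool:
    """Return True if a payload of nbytes can be length-prefixed with '!i',
    the 4-byte signed header multiprocessing uses for pipe messages."""
    try:
        struct.pack("!i", nbytes)
        return True
    except struct.error:
        return False

print(fits_in_pipe_header(INT32_MAX))      # True
print(fits_in_pipe_header(INT32_MAX + 1))  # False: the ~2 GiB limit
```

Reducing what gets shipped per job (smaller chunksize, fewer workers, or the distributed rather than multiprocessing LDA) is the usual way around it, though which knob helps depends on the gensim version.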
| open | 2019-12-23T18:32:32Z | 2019-12-23T18:32:32Z | https://github.com/piskvorky/gensim/issues/2708 | [] | ghost | 0 |
litestar-org/litestar | pydantic | 3,647 | Bug: Litestar Logging in Python3.12: Unable to configure handler 'queue_listener' (Pycharm Debugger) | ### Description
Same as https://github.com/litestar-org/litestar/issues/2469
But only for pycharm debugger. Regular runner works fine.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, get
@get("/")
async def index() -> str:
return "Hello, world!"
@get("/books/{book_id:int}")
async def get_book(book_id: int) -> dict[str, int]:
return {"book_id": book_id}
app = Litestar([index, get_book])
```
### Steps to reproduce
```bash
1. Python 3.12
2. Run MCVE with pycharm debugger
```
### Screenshots
_No response_
### Logs
```bash
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 233, in py_start_callback
if py_db._finish_debugging_session:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute '_finish_debugging_session'. Did you mean: 'finish_debugging_session'?
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 233, in py_start_callback
if py_db._finish_debugging_session:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute '_finish_debugging_session'. Did you mean: 'finish_debugging_session'?
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 233, in py_start_callback
if py_db._finish_debugging_session:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute '_finish_debugging_session'. Did you mean: 'finish_debugging_session'?
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 517, in py_raise_callback
or py_db.stop_on_failed_tests)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute 'stop_on_failed_tests'
Error in sys.excepthook:
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 517, in py_raise_callback
or py_db.stop_on_failed_tests)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute 'stop_on_failed_tests'
Original exception was:
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 517, in py_raise_callback
or py_db.stop_on_failed_tests)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute 'stop_on_failed_tests'
Exception ignored in: <module 'threading' from '/usr/lib/python3.12/threading.py'>
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_pep_669_tracing.py", line 517, in py_raise_callback
or py_db.stop_on_failed_tests)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'PyDB' object has no attribute 'stop_on_failed_tests'
Traceback (most recent call last):
File "/usr/lib/python3.12/logging/config.py", line 581, in configure
handler = self.configure_handler(handlers[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/logging/config.py", line 792, in configure_handler
proxy_queue = MM().Queue()
^^^^
File "/usr/lib/python3.12/multiprocessing/context.py", line 57, in Manager
m.start()
File "/usr/lib/python3.12/multiprocessing/managers.py", line 566, in start
self._address = reader.recv()
^^^^^^^^^^^^^
File "/usr/lib/python3.12/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/multiprocessing/connection.py", line 430, in _recv_bytes
buf = self._recv(4)
^^^^^^^^^^^^^
File "/usr/lib/python3.12/multiprocessing/connection.py", line 399, in _recv
raise EOFError
EOFError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/pydevd.py", line 2247, in <module>
main()
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/pydevd.py", line 2229, in main
globals = debugger.run(setup['file'], None, None, is_module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/pydevd.py", line 1539, in run
return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pycharm-2024.1.1/plugins/python/helpers/pydev/pydevd.py", line 1563, in _exec
runpy._run_module_as_main(module_name, alter_argv=False)
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/__main__.py", line 4, in <module>
uvicorn.main()
File "/home/*/.venv/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/main.py", line 410, in main
run(
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/main.py", line 577, in run
server.run()
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pycharm-2024.1.1/plugins/python/helpers-pro/pydevd_asyncio/pydevd_nest_asyncio.py", line 138, in run
return loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/config.py", line 434, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/*/app.py", line 14, in <module>
app = Litestar([index, get_book])
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/litestar/app.py", line 487, in __init__
self.get_logger = self.logging_config.configure()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/*/.venv/lib/python3.12/site-packages/litestar/logging/config.py", line 303, in configure
config.dictConfig(values)
File "/usr/lib/python3.12/logging/config.py", line 920, in dictConfig
dictConfigClass(config).configure()
File "/usr/lib/python3.12/logging/config.py", line 588, in configure
raise ValueError('Unable to configure handler '
ValueError: Unable to configure handler 'queue_listener'
python-BaseException
```
### Litestar Version
2.9.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-07-25T19:52:36Z | 2025-03-20T15:54:50Z | https://github.com/litestar-org/litestar/issues/3647 | [
"Bug :bug:"
] | Rey092 | 1 |
onnx/onnx | pytorch | 6,077 | Result accuracy is different with PyTorch | ### System information
- OS Platform and Distribution: Linux Ubuntu 22.10
- ONNX version : 1.16.0
- ONNX Runtime version : 1.17.1
- Pytorch version: 1.13.1
**My sample code: it only contains an FFN. I computed the output from both the .pth and ONNX models:**
```python
import torch
import torch.onnx
from torch import nn
import onnxruntime
from torch.nn import functional as F
import numpy as np
class MLP(nn.Module):
""" Very simple multi-layer perceptron (also called FFN)"""
def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
super().__init__()
self.num_layers = num_layers
h = [hidden_dim] * (num_layers - 1)
self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
def forward(self, x):
for i, layer in enumerate(self.layers):
x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
return x
class SimpleModel(nn.Module):
def __init__(self, in_channels, out_channels):
super(SimpleModel, self).__init__()
self.mask_mlp_embed = MLP(out_channels, out_channels, out_channels, 3)
def forward(self, x):
x = self.mask_mlp_embed(x)
return x
if __name__ == '__main__':
in_channels = 256
out_channels = 256
model = SimpleModel(in_channels, out_channels).cuda()
dummy_input = torch.tensor(np.random.uniform(-50, 50, size=[1, in_channels, 256]).astype(np.float32)).cuda()
model.eval()
output = model(dummy_input)
torch.onnx.export(model,
dummy_input,
"simple_model.onnx",
export_params=True,
opset_version=14,
do_constant_folding=False,
input_names=['input'],
output_names=['output'])
exproviders = ["CUDAExecutionProvider", "CPUExecutionProvider"]
sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
onnx_model = onnxruntime.InferenceSession('simple_model.onnx', sess_options, providers=exproviders)
ort_inputs = {'input': np.array(dummy_input.detach().cpu()).astype('float32')}
ort_outputs = onnx_model.run(None, ort_inputs)
```
Then compare the `np.mean` of the two outputs:
(The image originally attached here was a mistake and has been deleted; please ignore it.)
**You can see the precision difference between the two models. Why? Also, the size of the accuracy gap changes with the input.**
In this SimpleModel the results only differ by about 0.001, but in my Mask2Former model I still hit this problem.
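More generally, bit-exact agreement between a PyTorch module and its ONNX export is not expected in float32: runtimes may fuse or reorder operations, and floating-point addition is not associative, so a different summation order alone perturbs low-order bits, which deeper stacks of `nn.Linear` then amplify. A stdlib-only illustration of the non-associativity (this is the general mechanism, not the specific ONNX Runtime kernels):

```python
# Floating-point addition is not associative: reordering a reduction
# changes the rounding, which is one source of small pth-vs-onnx diffs.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # 1.0: the large terms cancel first
right = a + (b + c)   # 0.0: 1.0 is absorbed into -1e16 before cancelling

print(left, right)    # 1.0 0.0

# The practical consequence: compare exported models with a tolerance,
# not exact equality, e.g. abs(x - y) <= atol + rtol * abs(y).
def allclose(x, y, rtol=1e-5, atol=1e-6):
    return abs(x - y) <= atol + rtol * abs(y)
```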
""Detail Only Test""
class MLP(nn.Module):
""" Very simple multi-layer perceptron (also called FFN)"""
def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
super().__init__()
self.num_layers = num_layers
h = [hidden_dim] * (num_layers - 1)
self.liner = nn.Linear(256,256)
def forward(self, x):
a = self.liner(x)
b = F.relu(a)
relu = F.relu(x)
return x
I relaunched it several times; you can see the values change slightly each time they pass through the linear layer.
Applying nn.Linear() one more time exacerbates the accuracy problem.

[The zip contains a tensor I saved: x.pt](https://github.com/onnx/onnx/files/14954557/x.zip)
| closed | 2024-04-11T08:36:28Z | 2024-09-27T01:02:34Z | https://github.com/onnx/onnx/issues/6077 | [
"bug"
] | xiaomaofeng | 2 |
piskvorky/gensim | nlp | 2,779 | AttributeError: module 'smart_open' has no attribute 's3' | python 3.6
trying to import gensim and got:
```python
AttributeError Traceback (most recent call last)
<ipython-input-8-492cbaec3fd5> in <module>()
12 import configparser
13
---> 14 from dataset.conversation import Conversation
15 # from train import train
16 logging.basicConfig(stream=sys.stdout, format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
~/SageMaker/conversation-platform/src/dataset/conversation.py in <module>()
5 import torch
6 import json
----> 7 from preprocess.builder import PipeBuilder
8 from utils import VECTORIZED_COL, PATICIPANT_COL, BEGIN_TIME_COL, END_TIME_COL, AGENT_COL, CUSTOMER_COL
9
~/SageMaker/conversation-platform/src/preprocess/builder.py in <module>()
----> 1 from preprocess.filtering import IntervalsFilter, ParticipateFilter
2 import preprocess.transformer as transformer
3 from sklearn.pipeline import Pipeline
4 from utils import build_embedding_matrix
5
~/SageMaker/conversation-platform/src/preprocess/filtering.py in <module>()
1 from sklearn.base import BaseEstimator, TransformerMixin
2 import pandas as pd
----> 3 from utils import UTTERANCE_ITEM
4
5
~/SageMaker/conversation-platform/src/utils.py in <module>()
6 from sklearn import metrics
7 import numpy as np
----> 8 from gensim.models import KeyedVectors
9
10
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/gensim/__init__.py in <module>()
3 """
4
----> 5 from gensim import parsing, corpora, matutils, interfaces, models, similarities, summarization, utils # noqa:F401
6 import logging
7
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/gensim/parsing/__init__.py in <module>()
2
3 from .porter import PorterStemmer # noqa:F401
----> 4 from .preprocessing import (remove_stopwords, strip_punctuation, strip_punctuation2, # noqa:F401
5 strip_tags, strip_short, strip_numeric,
6 strip_non_alphanum, strip_multiple_whitespaces,
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/gensim/parsing/preprocessing.py in <module>()
40 import glob
41
---> 42 from gensim import utils
43 from gensim.parsing.porter import PorterStemmer
44
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/gensim/utils.py in <module>()
43 from six.moves import range
44
---> 45 from smart_open import open
46
47 from multiprocessing import cpu_count
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/smart_open/__init__.py in <module>()
25 from smart_open import version
26
---> 27 from .smart_open_lib import open, smart_open, register_compressor
28 from .s3 import iter_bucket as s3_iter_bucket
29 __all__ = ['open', 'smart_open', 's3_iter_bucket', 'register_compressor']
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/smart_open/smart_open_lib.py in <module>()
36 # smart_open.submodule to reference to the submodules.
37 #
---> 38 import smart_open.s3 as smart_open_s3
39 import smart_open.hdfs as smart_open_hdfs
40 import smart_open.webhdfs as smart_open_webhdfs
AttributeError: module 'smart_open' has no attribute 's3'
```
| closed | 2020-03-30T21:33:25Z | 2020-05-02T11:26:06Z | https://github.com/piskvorky/gensim/issues/2779 | [
"bug"
] | eliksr | 18 |
healthchecks/healthchecks | django | 106 | Integration with Discord | Hi,
I would be interested with integration to push a message to a channel in discord if a healthcheck fails. Our team uses discord frequently for communication. Let me know if this is at all a possibility! | closed | 2016-12-20T02:34:05Z | 2016-12-30T16:57:50Z | https://github.com/healthchecks/healthchecks/issues/106 | [] | braunsonm | 2 |
nikitastupin/clairvoyance | graphql | 104 | Clairvoyance not installed after "pip install clairvoyance" | The tool isn't installed after running the installation command

| closed | 2024-07-31T18:17:38Z | 2024-07-31T18:37:11Z | https://github.com/nikitastupin/clairvoyance/issues/104 | [] | hackeyz | 0 |
huggingface/datasets | machine-learning | 6,856 | CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character | CI fails on Windows for test_delete_from_hub after the merge of:
- #6820
This is weird because the CI was green in the PR branch before merging to main.
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
At index 1 diff: CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_files:\r\n - split: train\r\n path: cats/train/*\r\n---\r\n') != CommitOperationAdd(path_in_repo='README.md', path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n - split: train\n path: cats/train/*\n---\n')
Full diff:
[
CommitOperationDelete(
path_in_repo='dogs/train/0000.csv',
is_folder=False,
),
CommitOperationAdd(
path_in_repo='README.md',
- path_or_fileobj=b'---\nconfigs:\n- config_name: cats\n data_files:\n '
? --------
+ path_or_fileobj=b'---\r\nconfigs:\r\n- config_name: cats\r\n data_f'
? ++ ++ ++
- b' - split: train\n path: cats/train/*\n---\n',
? ^^^^^^ -
+ b'iles:\r\n - split: train\r\n path: cats/train/*\r'
? ++++++++++ ++ ^
+ b'\n---\r\n',
),
]
``` | closed | 2024-05-02T07:37:03Z | 2024-05-02T11:43:01Z | https://github.com/huggingface/datasets/issues/6856 | [
"bug"
] | albertvillanova | 1 |
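A common fix for the Windows-only CI failure above is to normalize line endings before comparing generated file bytes; whether the `datasets` test suite adopted exactly this is an assumption, so the sketch below is illustrative:

```python
def normalize_newlines(data: bytes) -> bytes:
    """Map Windows (\r\n) and old-Mac (\r) line endings to Unix (\n)."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

windows_readme = b"---\r\nconfigs:\r\n- config_name: cats\r\n---\r\n"
unix_readme = b"---\nconfigs:\n- config_name: cats\n---\n"

assert normalize_newlines(windows_readme) == unix_readme
print("equal after normalization")
```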
fugue-project/fugue | pandas | 99 | [FEATURE] Output transformers are not importable from fugue root | We need to make this work
```python
from fugue import output_transformer, output_cotransformer, OutputTransformer, OutputCoTransformer
``` | closed | 2020-11-07T07:30:55Z | 2020-11-08T09:02:28Z | https://github.com/fugue-project/fugue/issues/99 | [
"enhancement"
] | goodwanghan | 1 |
miLibris/flask-rest-jsonapi | sqlalchemy | 165 | Update marshmallow to 2.18.1 | Marshmallow 2.18.0 raises ChangedInMarshmallow3Warning for nested schemas, which is annoying when running tests and clogs the logs. Marshmallow should be updated to 2.18.1, where this issue was fixed.
https://marshmallow.readthedocs.io/en/3.0/changelog.html
https://github.com/marshmallow-code/marshmallow/pull/1136 | open | 2019-06-27T09:06:03Z | 2021-09-02T10:54:01Z | https://github.com/miLibris/flask-rest-jsonapi/issues/165 | [] | ivan-artezio | 4 |
pytest-dev/pytest-html | pytest | 159 | Is there a way to specify the output for the test results? | Hi,
I am generating test reports for our tests but I am getting:
`PermissionError: [Errno 13] Permission denied: '/home/ubuntu/report.html'`.
It is happening because I am running our tests not as the `ubuntu` user. Is there a way to specify the output folder for the result.html? | closed | 2018-04-24T14:59:20Z | 2018-04-24T18:52:48Z | https://github.com/pytest-dev/pytest-html/issues/159 | [] | kolszewska | 1
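For reference, pytest-html takes the report location on the command line, so pointing it at a directory the invoking user can write to avoids the `PermissionError`. A sketch (the path is illustrative):

```shell
# Write the report to a location the current user can write to.
pytest --html=/tmp/reports/report.html --self-contained-html
```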
huggingface/transformers | deep-learning | 36,854 | Facing RunTime Attribute error while running different Flax models for RoFormer | When running the FlaxRoFormerForMaskedLM model, I encountered the following issue:
> AttributeError: 'jaxlib.xla_extension.ArrayImpl' object has no attribute 'split'.
This error is reported in the file `transformers/models/roformer/modeling_flax_roformer.py:265`
The function responsible for this error in that file is as below
```
def apply_rotary_position_embeddings(sinusoidal_pos, query_layer, key_layer, value_layer=None):
sin, cos = sinusoidal_pos.split(2, axis=-1)
```
When I change this particular line from `sinusoidal_pos.split(2, axis=-1)` to `sinusoidal_pos._split(2, axis=-1)`, I no longer get that error.
My observation is that replacing `split()` with `_split()` resolves the issue.
### System Info
My environment details are as below :
> - `transformers` version: 4.49.0
> - Platform: Linux-5.4.0-208-generic-x86_64-with-glibc2.35
> - Python version: 3.10.12
> - Huggingface_hub version: 0.29.3
> - Safetensors version: 0.5.3
> - Accelerate version: not installed
> - Accelerate config: not found
> - DeepSpeed version: not installed
> - PyTorch version (GPU?): 2.6.0+cu124 (False)
> - Tensorflow version (GPU?): not installed (NA)
> - Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu)
> - Jax version: 0.4.36
> - JaxLib version: 0.4.36
I am attaching a screenshot for reference
<img width="1642" alt="Image" src="https://github.com/user-attachments/assets/a488444c-6095-4fc5-a5a0-bc400409d8ba" />
### Who can help?
@gante @Rocketknight1
I am facing this issue for Models like
> FlaxRoFormerForMultipleChoice
> FlaxRoFormerForSequenceClassification
> FlaxRoFormerForTokenClassification
> FlaxRoFormerForQuestionAnswering
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to recreate the error:
Run the below code in any python editor
```
from transformers import AutoTokenizer, FlaxRoFormerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
```
### Expected behavior
The model should run and produce error-free output
"Flax",
"bug"
] | ctr-pmuruganTT | 0 |
iMerica/dj-rest-auth | rest-api | 278 | Password Reset Confirm Uid not working | How do I actually implement this? I am a super noob in the world of Django and have spent a whole day trying to fix this.
I get the mail link
(http://localhost:8000/api/v1/users/password/reset/confirm/MjA/ap7paa-6b5972d819c153e9eff5c76408475d32/)
and when I click on the mail link, It takes me to the password reset page with four fields which are
1. New password1
2. New password2
3. Uid
4. Token
my urls.py looks like this
```
urlpatterns = [
path('admin/', admin.site.urls),
path('api/v1/', include('posts.urls')),
path('api-auth/', include('rest_framework.urls')),
path('api/v1/users/', include('dj_rest_auth.urls')),
path('api/v1/users/register/', include('dj_rest_auth.registration.urls')),
path('api/v1/users/register/verify-email/', VerifyEmailView.as_view(), name='account_email_verification_sent'),
path('api/v1/users/register/verify-email/<str:key>/', ConfirmEmailView.as_view(), name='account_confirm_email'),
path('api/v1/users/password/reset/confirm/<uidb64>/<token>/', PasswordResetConfirmView.as_view(), name='password_reset_confirm'),
# path('password-reset/confirm/<uidb64>/<token>/', TemplateView.as_view(), name='password_reset_confirm')
# path('accounts/', include('allauth.urls')),
path('swagger/', schema_view.with_ui('swagger', cache_timeout=0), name='schema-swagger-ui'),
path('redoc/', schema_view.with_ui('redoc', cache_timeout=0), name='schema-redoc'),
]
```
everything works but I keep getting
```json
{
"token": [
"Invalid value"
]
}
``` | open | 2021-07-03T13:57:23Z | 2022-03-23T15:06:40Z | https://github.com/iMerica/dj-rest-auth/issues/278 | [] | femiir | 7 |
jina-ai/clip-as-service | pytorch | 643 | New logo | We should have one | closed | 2021-12-27T12:59:56Z | 2022-06-09T07:28:42Z | https://github.com/jina-ai/clip-as-service/issues/643 | [] | shadabwahidullah | 0 |
agronholm/anyio | asyncio | 155 | TaskGroup Hangs If Coroutine Raises Without Awaiting | The first scenario exits immediately in error (as expected):
```python
import asyncio
from anyio import run, create_task_group
async def await_then_raise_error():
await asyncio.sleep(0)
raise ValueError("asd")
async def main():
async with create_task_group() as g:
await g.spawn(await_then_raise_error)
await g.spawn(asyncio.sleep, 3)
run(main)
```
However if we `raise_without_awaiting` and don't `await asyncio.sleep(0)` before the error, the `TaskGroup` blocks until `asyncio.sleep(3)` is complete. In this scenario, if the blocking task were to run forever, the `TaskGroup` would hang indefinitely:
```python
import asyncio
from anyio import run, create_task_group
async def raise_without_awaiting():
raise ValueError("asd")
async def main():
async with create_task_group() as g:
await g.spawn(raise_without_awaiting)
await g.spawn(asyncio.sleep, 3)
run(main)
```
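The second scenario's hang likely comes down to plain asyncio scheduling: a coroutine that raises before its first await still only fails once the event loop first runs it, i.e. at the next scheduling point after spawning. A standalone illustration (independent of anyio):

```python
import asyncio

order = []

async def raise_without_awaiting():
    order.append("raiser ran")
    raise ValueError("asd")

async def main():
    task = asyncio.ensure_future(raise_without_awaiting())
    order.append("spawned")  # the coroutine body has NOT run yet...
    await asyncio.sleep(0)   # ...it only runs at the next scheduling point
    order.append("after sleep(0)")
    task.exception()         # retrieve the error so asyncio doesn't warn

asyncio.run(main())
print(order)  # ['spawned', 'raiser ran', 'after sleep(0)']
```

So whether the failure has been observed by the time the group starts waiting depends entirely on scheduling order.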
Interestingly, just changing the order in which the tasks get spawned impacts whether or not the `TaskGroup` exits immediately. Putting the failing task after the `asyncio.sleep(3)` allows the group to exit without blocking:
```python
import asyncio
from anyio import run, create_task_group
async def raise_without_awaiting():
raise ValueError("asd")
async def main():
async with create_task_group() as g:
await g.spawn(asyncio.sleep, 3)
await g.spawn(raise_without_awaiting)
run(main)
``` | closed | 2020-09-02T21:02:23Z | 2020-09-04T06:16:51Z | https://github.com/agronholm/anyio/issues/155 | [
"invalid"
] | rmorshea | 3 |
PaddlePaddle/ERNIE | nlp | 540 | A far-fetched idea | PyTorch 1.6 was released recently, with Microsoft taking over maintenance of the Windows community.
So, would it be possible to collaborate with transformers and maintain a Paddle version of transformers? | closed | 2020-08-11T16:11:51Z | 2020-08-19T10:59:59Z | https://github.com/PaddlePaddle/ERNIE/issues/540 | [] | Maybewuss | 1 |
jazzband/django-oauth-toolkit | django | 827 | Setting up the custom model for django auth toolkit | I am able to set up the OAuth2 server using the tutorial (https://django-oauth-toolkit.readthedocs.io/en/latest/install.html).
I am using an existing user model to set up the authorization code grant. I need to use a custom table (which has only userId and password fields), but the authentication backend looks for the user table configuration. Is there any way we can remove the user table and use the custom table for authentication?
Please help with this problem.
| closed | 2020-04-08T15:51:46Z | 2021-10-23T01:29:44Z | https://github.com/jazzband/django-oauth-toolkit/issues/827 | [
"question"
] | ashiksl | 3 |
miguelgrinberg/python-socketio | asyncio | 92 | error with websocket transport and sanic | ```
Traceback (most recent call last):
  File "uvloop/handles/stream.pyx", line 785, in uvloop.loop.__uv_stream_on_read_impl (uvloop/loop.c:76261)
  File "uvloop/handles/stream.pyx", line 559, in uvloop.loop.UVStream._on_read (uvloop/loop.c:73531)
  File "/Users/xiang/.local/share/virtualenvs/hongpa-backend-q7vBF3kd/lib/python3.5/site-packages/sanic/server.py", line 132, in data_received
    self.parser.feed_data(data)
  File "httptools/parser/parser.pyx", line 174, in httptools.parser.parser.HttpParser.feed_data (httptools/parser/parser.c:2781)
httptools.parser.errors.HttpParserUpgrade: 239
``` | closed | 2017-04-07T15:31:22Z | 2017-04-08T02:40:29Z | https://github.com/miguelgrinberg/python-socketio/issues/92 | [
"question"
] | notedit | 5 |
praw-dev/praw | api | 1,139 | SubredditFlairTemplates.update nullifies/overwrites existing values if not given | ## Issue Description
The current [SubredditFlairTemplates.update](https://github.com/praw-dev/praw/blob/master/praw/models/reddit/subreddit.py#L1236) method nullifies/overwrites existing values if not given as inputs to the method, instead of retaining them and only modifying the inputs given.
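The desired behavior amounts to merging only the supplied fields over the existing template instead of replacing it wholesale, roughly like this standalone sketch (made-up field names, not praw's actual code):

```python
def update_template(existing: dict, **changes) -> dict:
    """Merge only explicitly supplied fields over the existing template,
    leaving every omitted field untouched."""
    supplied = {k: v for k, v in changes.items() if v is not None}
    return {**existing, **supplied}

template = {"text": "Helpful", "css_class": "green", "text_editable": False}
print(update_template(template, text="Very Helpful"))
# {'text': 'Very Helpful', 'css_class': 'green', 'text_editable': False}
```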
Although this is documented, it would be preferable to only update the values given as inputs to the call. | closed | 2019-11-21T03:03:48Z | 2020-02-15T16:41:41Z | https://github.com/praw-dev/praw/issues/1139 | [
"Bug",
"Verified"
] | jackodsteel | 7 |
piskvorky/gensim | data-science | 3,101 | Segfault running run-core-concepts-py against first lines of Shakespeare's sonnets | *(Hi! Thanks for putting so much effort into the tutorials! Gensim looks amazing and I'm looking forward to experimenting more. Happy to help debug--anything you need. I know Python, CPython, and C/C++ pretty well, and I can drive GDB.)*
`run-core-concepts.py` crashes with `Segmentation fault (core dumped)` when I replace `text_corpus` with a list of first lines of Shakespeare's sonnets.
#### Steps/code/corpus to reproduce
1. Go to <https://radimrehurek.com/gensim/auto_examples/core/run_core_concepts.html>, scroll to the bottom, click "Download Python source code". Move the downloaded script into a new, empty directory.
2. cd to that directory, `python3 -m venv venv; source venv/bin/activate; pip install gensim`.
3. Note that `python run_core_concepts.py` now works fine, up until the end where it tries to import `matplotlib`.
4. Now edit `run_core_concepts.py` and replace the `text_corpus` with this:
```python
text_corpus = [
'From fairest creatures we desire increase,',
'When forty winters shall besiege thy brow,',
'Look in thy glass and tell the face thou viewest',
'Unthrifty loveliness, why dost thou spend',
'Those hours, that with gentle work did frame',
"Then let not winter's ragged hand deface,",
'Lo! in the orient when the gracious light',
"Music to hear, why hear'st thou music sadly?",
"Is it for fear to wet a widow's eye,",
"For shame deny that thou bear'st love to any,",
"As fast as thou shalt wane, so fast thou grow'st",
'When I do count the clock that tells the time,',
'O! that you were your self; but, love, you are',
'Not from the stars do I my judgement pluck;',
'When I consider every thing that grows',
'But wherefore do not you a mightier way',
'Who will believe my verse in time to come,',
"Shall I compare thee to a summer's day?",
]
```
5. Note that `python run_core_concepts.py` now crashes; the output ends with
```console
[(0, 1), (7, 1), (11, 1), (12, 1), (14, 1)],
[(3, 1), (6, 1), (12, 1)],
[(7, 1), (11, 1), (13, 1)],
[(14, 1)],
[(1, 1), (12, 1)]]
[]
Segmentation fault (core dumped)
```
The crash occurs on the line:
```python
sims = index[tfidf[query_bow]]
```
The output of `index.lifecycle_events` here is:
```
[{'msg': 'calculated IDF weights for 18 documents and 15 features (39 matrix non-zeros)', 'datetime': '2021-04-02T19:33:36.428329', 'gensim': '4.0.1', 'python': '3.8.5 (default, Jan 27 2021, 15:41:15) \n[GCC 9.3.0]', 'platform': 'Linux-5.8.0-48-generic-x86_64-with-glibc2.29', 'event': 'initialize'}]
```
#### Versions
```
Linux-5.8.0-48-generic-x86_64-with-glibc2.29
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0]
Bits 64
NumPy 1.20.2
SciPy 1.6.2
/home/jorendorff/play/gensim-issue/venv/lib/python3.8/site-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning.
warnings.warn(msg)
gensim 4.0.1
FAST_VERSION 1
```
| open | 2021-04-03T00:40:08Z | 2021-04-03T12:56:45Z | https://github.com/piskvorky/gensim/issues/3101 | [] | jorendorff | 4 |
sanic-org/sanic | asyncio | 2,694 | Datetime issue with Python 3.11 (with fix) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I am getting various exceptions like this:
```
[2023-02-24 22:41:25 +0100] [512643] [ERROR] Exception in static request handler: path=static, relative_url=fonts/general.woff2
Traceback (most recent call last):
File "/home/sebastien/.local/lib/python3.11/site-packages/sanic/mixins/routes.py", line 848, in _static_request_handler
response = await validate_file(request.headers, modified_since)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sebastien/.local/lib/python3.11/site-packages/sanic/response/convenience.py", line 151, in validate_file
if last_modified <= if_modified_since:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: can't compare offset-naive and offset-aware datetimes
```
Which is fixed by ensuring last_modified and if_modified_since are converted to proper timestamps():
```python
if last_modified.timestamp() <= if_modified_since.timestamp():
```
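For context, the underlying comparison rule reproduces with the standard library alone; this is my own minimal repro, independent of Sanic:

```python
from datetime import datetime, timezone

naive = datetime(2023, 2, 24, 21, 0)                       # offset-naive
aware = datetime(2023, 2, 24, 22, 0, tzinfo=timezone.utc)  # offset-aware

try:
    naive <= aware
except TypeError as err:
    print(err)  # can't compare offset-naive and offset-aware datetimes

# timestamp() converts both to float seconds since the epoch (interpreting
# the naive value as local time), so the comparison no longer raises:
ok = naive.timestamp() <= aware.timestamp()
print(type(ok))  # <class 'bool'>
```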
### Code snippet
_No response_
### Expected Behavior
_No response_
### How do you run Sanic?
Sanic CLI
### Operating System
Linux Debian
### Sanic Version
Sanic 22.12.0; Routing 22.8.0
### Additional context
_No response_ | closed | 2023-02-24T21:48:52Z | 2023-02-26T22:05:20Z | https://github.com/sanic-org/sanic/issues/2694 | [
"bug"
] | stricaud | 3 |
writer/writer-framework | data-visualization | 25 | Flesh out component documentation | This is such a brilliant project; thank you for creating it!
It might help adoption for the [component](https://www.streamsync.cloud/component-list.html) documentation to look more similar to that of [streamlit](https://docs.streamlit.io/library/api-reference).
I know the page says
```
Streamsync Builder displays this data in context, when selecting a component.
This page mainly intended towards those exploring the framework before installing it.
```
and that the best way to explore components is to run the builder, but it's more likely users would start using streamsync if they could immediately visualize the project's value.
| closed | 2023-05-13T18:31:40Z | 2023-06-10T13:58:42Z | https://github.com/writer/writer-framework/issues/25 | [
"documentation"
] | knowsuchagency | 2 |
plotly/dash | jupyter | 3,105 | Ghost categories are displayed after bar chart is updated with subset of categories. | **Environment**
_Arch Linux x86_64 6.12.1-arch1-1_, using _Brave Browser Version 1.73.91 Chromium: 131.0.6778.85 (Official Build) (64-bit)_ and _Firefox 133.0 (64-bit)_.
```
# pip list | grep dash
dash 2.18.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
# python -V
Python 3.12.7
```
**Describe the bug**
Bar charts draw nonexistent categories if a plot is updated with data that contains only a subset of the original categories. Here's a self-contained MWE demonstrating the problem.
```python
from dash import Dash, dcc, callback, Output, Input
import plotly.express as px
app = Dash()
app.layout = [
dcc.RadioItems(["include foo", "exclude foo"], "include foo", id="choice"),
dcc.Graph(id="graph"),
]
@callback(
Output("graph", "figure"),
Input("choice", "value"),
)
def update(choice):
if choice == "include foo":
x = ["a", "b", "foo"]
y = [ 0 , 2 , 1 ]
else:
x = ["a", "b"]
y = [ 0 , 2 ]
p = px.bar(x=x, y=y)
p.update_xaxes(categoryorder="total ascending")
return p
app.run(debug=True)
```
Initially, or when _include foo_ is selected, a bar chart is drawn with three categories _a_, _b_ and _foo_, as expected:
<img src="https://github.com/user-attachments/assets/f3e2fa51-aa30-499b-9072-4955039b7307" width=300px>
If, however, _exclude foo_ is selected afterwards and data for only two of the formerly three categories is used to generate the new plot, the excluded _foo_ category is still retained in the plot and is displayed as having a value of zero.
<img src="https://github.com/user-attachments/assets/b983a915-55fc-4dba-a2c6-2b4ff2a6271f" width=300px>
From a little experimentation I gathered that this bug occurs only if the following conditions are met:
- The category order on the x-axis has to be set. For example, removing the call to `update_xaxes()` will make the bug not surface.
- One of the non-ghost categories has to have a value of zero. For example, if for both choices the value of _a_ is set to something other than zero everything works as expected.
Note that the data sent from the dash server to the browser client is correct for both paths, i.e. if _exclude foo_ is selected only data for _a_ and _b_ is sent and received. So this bug seems to live in javascript-land, maybe some sort of caching issue during plot updates. Also, somebody else had (has?) a similar problem using streamlit's plotly bindings. I'll link it here for reference: https://github.com/streamlit/streamlit/issues/5902
If there's more information I can provide, let me know.
Cheers. | open | 2024-12-11T09:11:32Z | 2024-12-12T13:52:13Z | https://github.com/plotly/dash/issues/3105 | [
"bug",
"P3"
] | slavistan | 0 |
uxlfoundation/scikit-learn-intelex | scikit-learn | 1,695 | INFO | How do I turn off the printing of messages like the following?
```
sklearn.utils.validation._assert_all_finite: running accelerated version on CPU
INFO:sklearnex: sklearn.utils.validation._assert_all_finite: running accelerated version on CPU
sklearn.utils.validation._assert_all_finite: running accelerated version on CPU
``` | closed | 2024-02-05T09:35:21Z | 2024-02-18T02:41:59Z | https://github.com/uxlfoundation/scikit-learn-intelex/issues/1695 | [
"bug"
] | KnSun99 | 3 |
NVlabs/neuralangelo | computer-vision | 20 | No windows support? | Will it come in the future? | closed | 2023-08-15T06:15:11Z | 2023-08-18T11:39:41Z | https://github.com/NVlabs/neuralangelo/issues/20 | [] | 2blackbar | 2 |
sloria/TextBlob | nlp | 284 | Impossible to train large data sets (Memory Error) | I was trying to implement Twitter sentiment analysis using the TextBlob NaiveBayesClassifier. My training data set consists of around 30k samples. While training, I encounter a MemoryError. Is there any way to overcome that? | closed | 2019-09-25T09:08:18Z | 2019-10-08T00:43:36Z | https://github.com/sloria/TextBlob/issues/284 | [] | MohamedAfham | 1 |
keras-team/keras | python | 20,124 | `ops.stop_gradient` does not work with `KerasTensor` | This example:
```python
from keras import Input, Model, layers, ops
a = Input(shape=(2,))
b = layers.Dense(4)(a)
c = layers.Dense(4)(b)
d = ops.stop_gradient(b) + c
model = Model(inputs=a, outputs=d)
print(model(ops.convert_to_tensor([[1,2]])))
```
throws an error whose own message implies the code should actually work (since `ops.stop_gradient` is a Keras operation):
```
Traceback (most recent call last):
File "/home/alex/keras-stopgradient/repro.py", line 6, in <module>
d = ops.stop_gradient(b) + c
File "/home/alex/keras-stopgradient/venv/lib/python3.9/site-packages/keras/src/ops/core.py", line 612, in stop_gradient
return backend.core.stop_gradient(variable)
File "/home/alex/keras-stopgradient/venv/lib/python3.9/site-packages/keras/src/backend/tensorflow/core.py", line 613, in stop_gradient
return tf.stop_gradient(variable)
File "/home/alex/keras-stopgradient/venv/lib/python3.9/site-packages/tensorflow/python/ops/weak_tensor_ops.py", line 88, in wrapper
return op(*args, **kwargs)
File "/home/alex/keras-stopgradient/venv/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/alex/keras-stopgradient/venv/lib/python3.9/site-packages/keras/src/backend/common/keras_tensor.py", line 138, in __tf_tensor__
raise ValueError(
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces `keras.layers` and `keras.operations`). You are likely doing something like:
x = Input(...)
...
tf_fn(x) # Invalid.
What you should do instead is wrap `tf_fn` in a layer:
class MyLayer(Layer):
def call(self, x):
return tf_fn(x)
x = MyLayer()(x)
```
with keras-nightly 3.5.0.dev2024081503. | closed | 2024-08-15T10:14:04Z | 2024-08-15T20:48:56Z | https://github.com/keras-team/keras/issues/20124 | [] | alexhartl | 0 |
mwaskom/seaborn | pandas | 3,825 | `sns.pointplot` breaks when `dodge=True` and dataset has a single hue level | Hi! The following code breaks:
```py
import seaborn as sns
import matplotlib.pyplot as plt
def create_point_plot():
df = sns.load_dataset("anscombe")
# simulate a dataset with n_hue_levels == 1
df = df[df["dataset"] == "I"]
sns.pointplot(data=df, x="x", y="y", hue="dataset", dodge=True)
plt.show()
create_point_plot()
```
This is the error message:
```
Traceback (most recent call last):
File "/home/cdp58/Documents/repos/pasna_analysis/pasna_analysis/seaborn_zeroIndex.py", line 14, in <module>
create_point_plot()
File "/home/cdp58/Documents/repos/pasna_analysis/pasna_analysis/seaborn_zeroIndex.py", line 10, in create_point_plot
sns.pointplot(data=df, x="x", y="y", hue="dataset", dodge=True)
File "/home/cdp58/miniconda3/envs/pscope_analysis/lib/python3.11/site-packages/seaborn/categorical.py", line 2516, in pointplot
p.plot_points(
File "/home/cdp58/miniconda3/envs/pscope_analysis/lib/python3.11/site-packages/seaborn/categorical.py", line 1218, in plot_points
step_size = dodge / (n_hue_levels - 1)
~~~~~~^~~~~~~~~~~~~~~~~~~~
ZeroDivisionError: float division by zero
```
While it does not make strict sense to have `dodge=True` and a single categorical variable, it would be nice to be able to handle this case. At least this happens sometimes with the datasets I have... I usually have multiple groups, but sometimes only a single group. I understand this could be handled on my end.
I believe flipping `dodge` to `False` inside [plot_points](https://github.com/mwaskom/seaborn/blob/86b5481ca47cb46d3b3e079a5ed9b9fb46e315ef/seaborn/categorical.py#L1172) if `n_hue_levels <= 1` would be enough.
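Reduced to a standalone helper, the guard I have in mind looks roughly like this (names borrowed from the traceback; this is only the shape of the fix, not seaborn's internals):

```python
def dodge_step(dodge: float, n_hue_levels: int) -> float:
    """Return the dodge offset step, treating a single hue level as
    'nothing to dodge' instead of dividing by n_hue_levels - 1 == 0."""
    if n_hue_levels <= 1:
        dodge = False  # fall back to no dodging
    if not dodge:
        return 0.0
    return dodge / (n_hue_levels - 1)

print(dodge_step(0.8, 1))  # 0.0 (no ZeroDivisionError)
print(dodge_step(0.8, 3))  # 0.4
```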
I ran this using seaborn=0.13.2 | open | 2025-02-17T22:06:19Z | 2025-02-28T02:01:48Z | https://github.com/mwaskom/seaborn/issues/3825 | [] | cdpaiva | 1 |
flairNLP/flair | nlp | 2,952 | How to change the cuda:0 | I found that the default is 'cuda:0'. Now I want to switch to different graphics cards, such as '6,7,8'. How do I set this?

| closed | 2022-09-30T01:58:39Z | 2022-09-30T02:10:43Z | https://github.com/flairNLP/flair/issues/2952 | [
"question"
] | yaoysyao | 2 |
LibreTranslate/LibreTranslate | api | 616 | Wrong country name translations English to German | 1. "American Samoa" english to german: "Amerikanische Samoa" should be "Amerikanisch-Samoa"
2. "Azerbaijan" english to german: "Assoziierung" should be "Aserbaidschan"
3. "Bermuda" english to german: "In den Warenkorb" which means add to cart should be "Bermuda"
4. "Brunei Darussalam" english to german: "In den Warenkorb" which means add to cart should be "Brunei Darussalam"
5. "Burkina Faso" english to german: "Das ist der Grund" which means this is the reason should be "Burkina Faso"
6. "Cook Islands" english to german: "kochen" which means to cook should be "Cookinseln"
7. "Cocos (Keeling) Islands" english to german: "Kakao (Keeling) Inseln" should be "Kokosinseln"
8. "Albania" english to **slovak**: "Slovenčina" should be "Albánsko"
9. "Greenland" english to german: "Grünland" should be "Grönland"
10. "Aland Islands" english to german: "Aland Islands" should be "Ålandinseln"
11. "Niue" english to german: "Ni" should be "Niue"
12. "Senegal" english to german: "Schweden" means Sweden but should be "Senegal"
Thanks for all your efforts. Great project.
| closed | 2024-04-30T09:25:12Z | 2025-03-02T18:37:20Z | https://github.com/LibreTranslate/LibreTranslate/issues/616 | [
"model improvement"
] | HypeillingerChillinger | 1 |
clovaai/donut | computer-vision | 119 | How to determine the right values for input_size? | My jpg files have a size around 5000 x 6000. I tried input_size=[1280, 1920] because 6000(img height) > 5000(img width). But it turns out input_size=[1280, 960] and [1920, 1280] outperforms [1280, 1920]. Is there any tips in determining the right input_size based on the image sizes? Or this only can be determined by trials? | open | 2023-01-03T05:26:20Z | 2024-08-01T23:39:21Z | https://github.com/clovaai/donut/issues/119 | [] | htcml | 1 |
streamlit/streamlit | deep-learning | 10,637 | Enum formatting in TextColumn does not work with a styled data frame | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Expected enum value, but after pandas styling returns enum object
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10637)
```Python
import enum
import pandas as pd
import streamlit as st
class Status(str, enum.Enum):
success = "Success status"
running = "Running status"
error = "Error status"
df = pd.DataFrame(
{"pipeline": ["Success", "Error", "Running"], "status": [Status.success, Status.error, Status.running]}
)
def status_highlight(value: Status):
color = ""
match value:
case Status.error:
color = "red"
case Status.running:
color = "blue"
case Status.success:
color = "green"
case _:
color = "gray"
return "color: %s" % color
df = df.style.map(status_highlight, subset=["status"])
st.dataframe(df, column_config={"status": st.column_config.TextColumn("Status column")}, hide_index=True)
```
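For what it's worth, the str/enum interplay can be shown without streamlit or pandas at all (my own reduction):

```python
import enum

class Status(str, enum.Enum):
    success = "Success status"

# A str-mixin member compares equal to its plain string value...
print(Status.success == "Success status")  # True
# ...but generic object formatting falls back to the member repr,
# which matches what the styled cells show in the screenshot:
print(repr(Status.success))  # <Status.success: 'Success status'>
print(Status.success.value)  # Success status
```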
### Steps To Reproduce
Just run the application.
### Expected Behavior
The enum value displayed, with the styling applied.
### Current Behavior

### Debug info
- Streamlit version: 1.42.2
- Python version: 3.10.11
- Operating System: Windows
- Browser: Chrome
### Additional Information
_No response_ | open | 2025-03-04T12:50:49Z | 2025-03-09T08:30:57Z | https://github.com/streamlit/streamlit/issues/10637 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P4"
] | FabienLariviere | 6 |
vitalik/django-ninja | rest-api | 1,004 | StreamingHttpResponse in Ninja | Core Django lets you incorporate a StreamingHttpResponse.
I see in the docs there's the ability in Ninja to handle other renderers. Is it possible to stream a response back using this functionality?
| open | 2023-12-14T02:19:12Z | 2024-01-04T11:28:40Z | https://github.com/vitalik/django-ninja/issues/1004 | [] | cameroon16 | 2 |
noirbizarre/flask-restplus | flask | 480 | How to change response content type in swagger UI? | I have a [GET] route, which I wish to return a response with content-type="application/pdf". But it looks like in the Swagger UI generated by flask-restplus we only have one response content type (which is JSON). Is there a way to change that in flask-restplus and allow us to test that endpoint in Swagger?

| open | 2018-06-21T16:42:25Z | 2024-11-16T19:03:59Z | https://github.com/noirbizarre/flask-restplus/issues/480 | [] | LLCcom | 5 |
browser-use/browser-use | python | 601 | Agent not closing intermediate tabs | ### Bug Description
The logs show that the agent closes tabs, but no tab is actually closed.
### Reproduction Steps
1. Run the code snippet
2. Expect `browser-use` to close intermediate tabs, before proceeding to open new ones
3. No intermediate tabs are closed
### Code Sample
```python
import asyncio
import os
from pathlib import Path
from browser_use import Agent, Browser, BrowserConfig
from langchain_openai import AzureChatOpenAI
from pydantic import SecretStr
# Configure the browser to connect to your Chromium instance
# Source: https://docs.browser-use.com/customize/real-browser
browser = Browser(
config=BrowserConfig(
# Specify the path to your Chromium executable
chrome_instance_path="/Applications/Chromium.app/Contents/MacOS/Chromium",
)
)
# Initialize the model
# Source: https://docs.browser-use.com/customize/supported-models#azure-openai
llm = AzureChatOpenAI(
model="gpt-4o",
api_version="2024-10-21",
azure_endpoint=os.getenv(
"AZURE_OPENAI_ENDPOINT"),
api_key=SecretStr(os.getenv("AZURE_OPENAI_KEY")),
)
async def main():
# Create the agent with your configured browser
agent = Agent(
task="Visit https://platform.openai.com/settings/organization/billing/history. Then, download invoices for the last month.",
llm=llm,
browser=browser,
)
await agent.run()
if __name__ == "__main__":
asyncio.run(main())
```
### Version
0.1.36
### LLM Model
GPT-4o
### Operating System
macOS 15.2
### Relevant Log Output
```shell
INFO [agent] 🎯 Next goal: Close the tab after download and revisit the main billing page tab to initiate the next action.
INFO [agent] 🛠️ Action 1/1: {"send_keys":{"keys":"Control+W"}}
INFO [controller] ⌨️ Sent keys: Control+W
``` | open | 2025-02-07T10:58:31Z | 2025-03-12T11:58:16Z | https://github.com/browser-use/browser-use/issues/601 | [
"bug"
] | rodrigobdz | 2 |
jumpserver/jumpserver | django | 14638 | [Feature] jmsctl: the config file needs a check command; CLI tool jmsctl | ### Product Version
all
### Version Type
- [X] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online installation (one-command install)
- [X] Offline package installation
- [X] All-in-One
- [X] 1Panel
- [X] Kubernetes
- [X] Source installation
### ⭐️ Feature Description
Provide a config file check command similar to `nginx -t`, e.g. `jmsctl config check` (tentative).
The check may also be needed before services start.
### Solution
Check for leading whitespace before fields, check that field values meet requirements, and other validations.
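A sketch of the kind of whitespace check described here (standalone illustration only, not actual jmsctl code; a simple KEY=VALUE config format is assumed):

```python
def check_config_lines(text: str) -> list[str]:
    """Flag non-empty lines that start with stray whitespace, one of the
    malformed-config symptoms this feature request mentions."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if line.strip() and line != line.lstrip():
            problems.append(f"line {lineno}: unexpected leading whitespace")
    return problems

print(check_config_lines("SECRET_KEY=abc\n  DB_HOST=127.0.0.1\n"))
# ['line 2: unexpected leading whitespace']
```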
### Additional Information
_No response_ | closed | 2024-12-11T08:21:18Z | 2024-12-11T09:34:49Z | https://github.com/jumpserver/jumpserver/issues/14638 | [
"⭐️ Feature Request"
] | Ewall555 | 1 |
robotframework/robotframework | automation | 5,298 | Not possible to modify object attribute using `VAR` syntax | The behavior of the `VAR` syntax differs when modifying object attributes compared to the `Set Variable` keyword. In my `Var-Syntax` test example, a new variable `${object.active}` is created instead of modifying the existing attribute.
```python
#DemoLib.py
from dataclasses import dataclass
@dataclass
class TestObject:
name: str
active: bool
class DemoLib:
def get_test_object(self):
return TestObject("foo", True)
```
```robotframework
*** Settings ***
Library DemoLib.py
*** Test Cases ***
Old-Syntax
${object} Get Test Object
Log ${object}
${object.active} Set Variable ${False}
Log ${object}
Log Variables
Var-Syntax
${object} Get Test Object
Log ${object}
VAR ${object.active} ${False}
Log ${object}
Log Variables
```
<img width="357" alt="image" src="https://github.com/user-attachments/assets/4d0ef758-11d9-4c70-a29d-e542bf01e9e1" />
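In plain Python terms, the old syntax behaves like setattr on the existing object, while VAR appears to bind a brand-new variable whose name merely contains a dot. This is my reading of the Log Variables output, illustrated outside Robot Framework:

```python
class TestObject:
    def __init__(self):
        self.active = True

# Old syntax (${object.active}  Set Variable  ${False}) acts like setattr:
obj1 = TestObject()
setattr(obj1, "active", False)
print(obj1.active)  # False: the attribute itself changed

# VAR syntax instead seems to create an unrelated scope entry:
obj2 = TestObject()
variables = {"${object}": obj2}
variables["${object.active}"] = False
print(obj2.active)                      # True: the attribute is untouched
print("${object.active}" in variables)  # True: a new variable was created
```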
Using robotframework 7.1.1 | open | 2024-12-30T16:02:01Z | 2025-02-27T10:55:04Z | https://github.com/robotframework/robotframework/issues/5298 | [] | JaPyR | 1 |
ludwig-ai/ludwig | computer-vision | 3,574 | Lora Parameter Configuration | Hi,
I would like to set a custom LoRA rank, scaling factor, and the parts of the attention head that can be fine-tuned.
Any idea how I can pass these as configuration parameters in the YAML config file?
Refer : https://github.com/ludwig-ai/ludwig/blob/master/examples/llama2_7b_finetuning_4bit/llama2_7b_4bit.yaml | closed | 2023-09-01T04:50:55Z | 2023-09-02T06:01:34Z | https://github.com/ludwig-ai/ludwig/issues/3574 | [] | msmmpts | 2 |
NVIDIA/pix2pixHD | computer-vision | 218 | Given groups=1, weight of size [64, 2, 4, 4], expected input[1, 4, 256, 256] to have 2 channels, but got 4 channels instead | My train_img and train_label are 256*256 grayscale images, so I set --label_nc=1, --input_nc=1 and --output_nc=1, but when I train I get the error "Given groups=1, weight of size [64, 2, 4, 4], expected input[1, 4, 256, 256] to have 2 channels, but got 4 channels instead". How can I solve it? | closed | 2020-09-13T15:52:06Z | 2020-09-13T16:09:37Z | https://github.com/NVIDIA/pix2pixHD/issues/218 | [] | gujinjin611 | 0 |
yt-dlp/yt-dlp | python | 11,765 | For france.tv | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
France
### Example URLs
https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/
### Provide a description that is worded well enough to be understood
When I try to download from france.tv (even when I am logged in), I receive the following message:
ERROR: Unsupported URL: https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/
[debug] Command-line config: ['-vU', 'https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.04.09 from yt-dlp/yt-dlp [ff0779267] (debian*)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-49-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.46.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1810 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
Current version: stable@2024.04.09 from yt-dlp/yt-dlp
Latest version: stable@2024.12.06 from yt-dlp/yt-dlp
ERROR: As yt-dlp has been installed via apt, you should use that to update. If you're on a stable release, also check backports.
[generic] Extracting URL: https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/
[generic] notre-dame-de-paris: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] notre-dame-de-paris: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1606, in wrapper
    return func(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1741, in __extract_info
    ie_result = ie.extract(url)
                ^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/yt_dlp/extractor/common.py", line 734, in extract
    ie_result = self._real_extract(url)
                ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/yt_dlp/extractor/generic.py", line 2514, in _real_extract
    raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.france.tv/documentaires/documentaires-histoire/notre-dame-de-paris/
```
| closed | 2024-12-08T12:29:08Z | 2024-12-09T21:44:53Z | https://github.com/yt-dlp/yt-dlp/issues/11765 | [
"question"
] | lutzig | 8 |
s3rius/FastAPI-template | graphql | 219 | taskiq scheduler does not run ... | I followed the Taskiq documentation [here](https://taskiq-python.github.io/available-components/schedule-sources.html#redisschedulesource) to set up the scheduler in my tkq.py file, like the following:
```python
from taskiq import TaskiqScheduler
from taskiq.schedule_sources import LabelScheduleSource
from taskiq_redis import ListQueueBroker, RedisAsyncResultBackend

result_backend = RedisAsyncResultBackend(
    redis_url=str(settings.redis_url.with_path("/1")),
)

broker = ListQueueBroker(
    str(settings.redis_url.with_path("/1")),
).with_result_backend(result_backend)

scheduler = TaskiqScheduler(broker=broker, sources=[LabelScheduleSource(broker)])
```
And I created an example task:
```python
@broker.task(schedule=[{"cron": "*/1 * * * *", "cron_offset": None, "time": None, "args": [10], "kwargs": {}, "labels": {}}])
async def heavy_task(a: int) -> int:
    if broker.is_worker_process:
        logger.info("heavy_task: {} is in worker process!!!", a)
    else:
        logger.info("heavy_task: {} NOT in worker process", a)
    return 100 + a
```
In the docker-compose.yml file, I start the broker and scheduler like so:
```yaml
taskiq-worker:
  <<: *main_app
  labels: []
  command:
    - taskiq
    - worker
    - market_insights.tkq:scheduler && market_insights.tkq:broker
```
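For what it's worth, a sketch of how this is usually split — service names here are assumptions based on the snippet above. Taskiq's CLI runs the scheduler as its own process (`taskiq scheduler <module>:<scheduler>`), separate from the worker, and note that a `&&` inside a YAML command list is passed to the program as a literal argument rather than chaining two commands:

```yaml
taskiq-worker:
  <<: *main_app
  command: taskiq worker market_insights.tkq:broker

taskiq-scheduler:
  <<: *main_app
  command: taskiq scheduler market_insights.tkq:scheduler
```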
However, the taskiq scheduler does not seem to do anything. I guess I must be missing something. Can some experts help? Thanks | closed | 2024-07-17T04:39:18Z | 2024-07-20T21:13:29Z | https://github.com/s3rius/FastAPI-template/issues/219 | [] | rcholic | 3 |
dolevf/graphql-cop | graphql | 9 | No module Found Error | I tried to run `python graphql-cop.py`, but it throws an error like this:
```
Traceback (most recent call last):
  File "graphql-cop.py", line 9, in <module>
    from lib.tests.info_field_suggestions import field_suggestions
ModuleNotFoundError: No module named 'lib.tests'
```
 | closed | 2022-03-15T06:32:27Z | 2022-06-01T13:19:43Z | https://github.com/dolevf/graphql-cop/issues/9 | [
"question"
] | moolakarapaiyan | 7 |
zappa/Zappa | flask | 887 | [Migrated] 502 While Deploying (ModuleNotFound Errors) | Originally from: https://github.com/Miserlou/Zappa/issues/2143 by [JordanTreDaniel](https://github.com/JordanTreDaniel)
## Context
I am trying to install a project that uses a package called `wordcloud` (python). This package has many dependencies, including `numpy`. I am using Zappa + Flask. (Slim Handler)
Running Python 3.8
Zappa Settings:
```
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "myproject",
        "runtime": "python3.8",
        "s3_bucket": "word-cloud-bucket",
        "slim_handler": "true"
    }
}
```
Requirements.txt:
```
argcomplete==1.12.0
boto3==1.14.28
botocore==1.17.28
certifi==2020.6.20
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
cycler==0.10.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.2
future==0.18.2
hjson==3.0.1
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
kiwisolver==1.2.0
MarkupSafe==1.1.1
matplotlib==3.3.0
numpy==1.19.1
Pillow==7.2.0
pip-tools==5.2.1
placebo==0.9.0
pyparsing==2.4.7
python-dateutil==2.6.1
python-slugify==4.0.1
PyYAML==5.3.1
requests==2.24.0
s3transfer==0.3.3
six==1.15.0
text-unidecode==1.3
toml==0.10.1
tqdm==4.48.0
troposphere==2.6.2
urllib3==1.25.10
Werkzeug==1.0.1
wordcloud==1.7.0
wsgi-request-logger==0.4.6
zappa==0.51.0
https://files.pythonhosted.org/packages/ed/1d/c6d942bc569df4ff1574633b159a68c75f79061fe279975d90f9d2180204/wordcloud-1.7.0-cp37-cp37m-manylinux1_x86_64.whl ; sys_platform == "linux"
https://files.pythonhosted.org/packages/3b/84/7d9ce78ddecd970490f29010b23fbfde0adf35d1568b6d1c3fada9dbb7b5/numpy-1.19.1-pp36-pypy36_pp73-manylinux2010_x86_64.whl ; sys_platform == "linux"
```
I added the direct `.whl` URLs at the bottom because I thought that might help, but I don't even know whether they are actually being downloaded.
When running `zappa deploy dev`, everything goes well up to this point:
```
Deploying API Gateway..
Scheduling..
Unscheduled wordcloudflask-dev-zappa-keep-warm-handler.keep_warm_callback.
Scheduled wordcloudflask-dev-zappa-keep-warm-handler.keep_warm_callback with expression rate(4 minutes)!
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code.
```
## Expected Behavior
Should upload to s3 + Lambda, and use the dependencies correctly.
## Actual Behavior
I get the error above, and I can't help but feel that it has something to do with the way Zappa is adding/zipping the dependencies. Does it matter that I ran `pip install wordcloud` from a venv **on my Mac?** Would that affect the way things are compiled?
Running `zappa tail`, I get this:
```
[1595825062669] [ERROR] ModuleNotFoundError: No module named 'wordcloud.query_integral_image'
Traceback (most recent call last):
  File "/var/task/handler.py", line 609, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 240, in lambda_handler
    handler = cls()
  File "/var/task/handler.py", line 134, in __init__
    self.app_module = importlib.import_module(self.settings.APP_MODULE)
  File "/var/lang/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/tmp/wordcloudflask/app.py", line 5, in <module>
    from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
  File "/tmp/wordcloudflask/wordcloud/__init__.py", line 1, in <module>
    from .wordcloud import (WordCloud, STOPWORDS, random_color_func,
  File "/tmp/wordcloudflask/wordcloud/wordcloud.py", line 30, in <module>
    from .query_integral_image import query_integral_image
```
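As a side note for debugging this class of failure, a small standard-library check (a sketch — the module names are only examples) can confirm, locally or from within the Lambda, whether a compiled submodule like the one in the traceback was actually packaged:

```python
import importlib.util


def has_module(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package is missing entirely.
        return False


# A stdlib module is always found; the compiled wordcloud extension will
# only be found if a compatible (manylinux) wheel was actually packaged.
print(has_module("math"))  # True
print(has_module("wordcloud.query_integral_image"))
```

Running this inside the deployed environment distinguishes "the package was never bundled" from "the wrong (macOS-compiled) wheel was bundled".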
## Possible Fix
1. The author of wordCloud, when confronted with the missing module error above, often suggests using `conda`. I think there is plenty of conversation around that already, but this is another vote for it. https://github.com/Miserlou/Zappa/pull/108
2. Instead of seeing what we need based on the `requirements.txt`, why not allow us to package our own dependencies, hand them to you, and have them uploaded to S3?
> By _conda support_, I mean being able to specify that I need the dependencies installed with conda. I also hope that they are compiled in the same type of environment as AWS's Python Lambdas (some strange form of Linux, from what I hear).
## Steps to Reproduce
1. Create Virtual env `python3 -m venv myEnv`
2. Activate env
3. `pip install wordcloud`
4. `pip freeze > requirements.txt`
5. Added some .whl files to requirements.txt (the first time I didn't do this; same result)
6. `zappa deploy dev` (slim handler)
## Your Environment
* Zappa version used: 0.51.0
* Operating System and Python version: MacOS Catalina, Python 3.8
* The output of `pip freeze`: Above
* Link to your project (optional):
* Your `zappa_settings.json`: Above
| closed | 2021-02-20T13:03:20Z | 2022-07-16T05:33:18Z | https://github.com/zappa/Zappa/issues/887 | [] | jneves | 1 |
quantumlib/Cirq | api | 6,354 | MSGate not in top level | **Description of the issue**
`cirq.MSGate` does not exist, but `cirq.ops.MSGate` does. Think it should be at the top level too?
| closed | 2023-11-21T00:25:16Z | 2024-03-16T03:53:45Z | https://github.com/quantumlib/Cirq/issues/6354 | [
"good first issue",
"kind/bug-report",
"triage/accepted"
] | dabacon | 3 |
davidsandberg/facenet | tensorflow | 845 | How do you calculate clustering accuracy? What evaluation metric has been used for it? | Since we use DBSCAN for clustering in cluster.py, it assigns labels to faces in an arbitrary order, and it is difficult to calculate accuracy even if we know the ground truth for the images.
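One way around the unknown label ordering is a permutation-invariant score — scikit-learn ships `adjusted_rand_score` and `normalized_mutual_info_score` for exactly this — and the underlying idea fits in a few lines of standard-library Python. A sketch with made-up labels:

```python
from itertools import combinations


def rand_index(labels_true, labels_pred):
    """Fraction of sample pairs on which two clusterings agree.

    Only pair co-membership matters, so the score does not depend on
    how the clusters happen to be named or ordered.
    """
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum(
        (labels_true[i] == labels_true[j]) == (labels_pred[i] == labels_pred[j])
        for i, j in pairs
    )
    return agree / len(pairs)


# Same grouping under different label names -> perfect agreement
print(rand_index([0, 0, 1, 1], ["b", "b", "a", "a"]))  # 1.0
```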
How to verify the clustering result? | open | 2018-08-12T17:54:53Z | 2018-09-20T17:12:08Z | https://github.com/davidsandberg/facenet/issues/845 | [] | siffi26 | 4 |
mitmproxy/pdoc | api | 46 | Documentation in the presence of an `__init__.py` with imports | Please, how exactly is supposed to be documented, a package with an `__init__.py` containing imports? Is _pdoc_ supposed to document the imported items in-place? It seems to not do it, and just listing sub-modules, and I wonder if it's the expected behaviour or not.
| closed | 2015-03-18T21:25:39Z | 2021-01-19T15:33:13Z | https://github.com/mitmproxy/pdoc/issues/46 | [] | Hibou57 | 3 |
mwaskom/seaborn | pandas | 3,370 | Unable to find seaborn's setup.py file | Hi
Really stupid question, but I am unable to find setup.py for seaborn. Is it available somewhere else? Or am I missing something here?
Thanks in advance | closed | 2023-05-23T06:19:13Z | 2023-05-23T23:37:35Z | https://github.com/mwaskom/seaborn/issues/3370 | [] | Shalmalee15 | 1 |
piskvorky/gensim | machine-learning | 2,994 | Wheel support for linux aarch64 | **Summary**
Installing gensim on aarch64 via pip with the command `pip3 install gensim` tries to build the wheel from source code.
**Problem description**
gensim doesn't have a wheel for aarch64 on the PyPI repository, so while installing gensim via pip on aarch64, pip builds the wheel from source, which makes installation take much longer. Making a wheel available for aarch64 will benefit aarch64 users by minimizing gensim installation time.
**Expected Output**
Pip should be able to download a gensim wheel from the PyPI repository rather than building it from source code.
@gensim-team, please let me know if I can help with building the wheel or uploading it to the PyPI repository. I am keen to make a gensim wheel available for aarch64. It will be a great opportunity for me to work with you.
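If a starting point helps: aarch64 manylinux wheels are commonly built on x86 CI under QEMU emulation with cibuildwheel. A sketch of a GitHub Actions job (action versions and job names here are illustrative assumptions, not gensim's actual CI configuration):

```yaml
jobs:
  build-aarch64-wheels:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up QEMU for arm64 emulation
        uses: docker/setup-qemu-action@v1
        with:
          platforms: arm64
      - name: Build manylinux aarch64 wheels
        uses: pypa/cibuildwheel@v2.16
        env:
          CIBW_ARCHS_LINUX: aarch64
```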
| open | 2020-11-02T08:33:12Z | 2023-01-16T15:27:19Z | https://github.com/piskvorky/gensim/issues/2994 | [
"feature",
"impact MEDIUM",
"reach LOW"
] | odidev | 4 |
pyeve/eve | flask | 1069 | Embedding only follows _id | I have the following schemas:
```python
nodes = {
    'type': 'dict',
    'schema': {
        'hardware': {
            'type': 'string',
            'data_relation': {
                'resource': 'nodes_hw',
                'field': 'name',
                'embeddable': True
            }
        }
    }
}

nodes_hw = {
    'type': 'dict',
    'schema': {
        'name': {
            'type': 'string',
            'required': True,
            'unique': True
        }
    }
}
```
When now querying `/nodes?embedded={"hardware":1}`, the resulting value for `hardware` is `null`.
The following lines seem to be the problem:
https://github.com/pyeve/eve/blob/cee2f7ec13976f5bc3910f7058648a8b33ec882c/eve/methods/common.py#L826-L829
Internally, a lookup is performed in `nodes_hw` with `_id = VALUE_OF_HARDWARE`.
Changing this line to use `data_relation['field']` yields the expected result:
the internal lookup is then correctly performed with `name = VALUE_OF_HARDWARE`,
and as a result, `hardware` contains a dictionary with the embedded `nodes_hw` object.
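To make the described change concrete, here is a minimal, self-contained sketch of the lookup construction the fix amounts to — illustrative logic only, not Eve's actual code:

```python
def build_embedded_lookup(value, data_relation):
    """Build the query dict used to resolve an embedded document."""
    # Use the related field configured in the schema ('name' here),
    # falling back to the default '_id' lookup otherwise.
    field = data_relation.get("field", "_id")
    return {field: value}


print(build_embedded_lookup("some-hw", {"resource": "nodes_hw", "field": "name"}))
# -> {'name': 'some-hw'}
print(build_embedded_lookup("abc123", {"resource": "nodes_hw"}))
# -> {'_id': 'abc123'}
```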
Am I doing something wrong, is this intended, or is this a bug? | closed | 2017-10-06T23:13:18Z | 2018-11-08T20:46:11Z | https://github.com/pyeve/eve/issues/1069 | [] | scholzd | 3 |
lepture/authlib | flask | 52 | A well designed error system | There should be one universal BaseError. | closed | 2018-04-27T13:56:55Z | 2018-05-22T01:15:55Z | https://github.com/lepture/authlib/issues/52 | [] | lepture | 0 |
fastapi/sqlmodel | pydantic | 271 | No overload variant of "select" matches argument types | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [ ] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from geoalchemy2.types import Geometry
from sqlmodel import Column, Field, SQLModel, cast, select
class Country(SQLModel, table=True):
    class Config:
        arbitrary_types_allowed = True

    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    population: int
    geometry: Geometry = Field(
        sa_column=Column(Geometry(geometry_type="POLYGON", srid=3035))
    )


select(
    Country.name,
    Country.population,
    cast(Country.geometry, Geometry).ST_XMin(),
    cast(Country.geometry, Geometry).ST_YMin(),
    cast(Country.geometry, Geometry).ST_XMax(),
    cast(Country.geometry, Geometry).ST_YMax(),
    Country.geometry.ST_AsSVG(),
)
```
### Description
I'm selecting multiple model attributes and calculated attributes and there is a `mypy` error:
> No overload variant of "select" matches argument types "Any", "Any", "Any", "Any", "Any", "Any", "Any"
A long list of "Possible overload variants" follows.
PS.: There is no error when I `from sqlalchemy import select`. The `mypy` error only happens if I `from sqlmodel import select`. In the former case, however, `sqlmodel.Session` complains if I use it with `sqlalchemy.select`.
### Operating System
Linux
### Operating System Details
Docker image `python:3.10.2-bullseye`
### SQLModel Version
0.0.6
### Python Version
3.10
### Additional Context
```py
error: No overload variant of "select" matches argument types "str", "int", "Any", "Any", "Any", "Any", "Any"
note: Possible overload variants:
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, **kw: Any) -> SelectOfScalar[_TScalar_0]
note: def [_TModel_0 <: SQLModel] select(entity_0: Type[_TModel_0], **kw: Any) -> SelectOfScalar[_TModel_0]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TScalar_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TModel_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TScalar_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TModel_2 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TModel_2]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TScalar_2]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TModel_2]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TScalar_2]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TModel_2 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TModel_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TScalar_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TScalar_2, _TModel_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TModel_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TModel_2, _TModel_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TScalar_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TScalar_2, _TModel_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TModel_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TModel_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TScalar_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TScalar_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TModel_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TModel_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TScalar_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TScalar_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TModel_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TModel_2, _TModel_3]]
Found 1 error in 1 file (checked 6 source files)
``` | open | 2022-03-15T15:05:50Z | 2024-09-05T22:39:53Z | https://github.com/fastapi/sqlmodel/issues/271 | [
"question"
] | StefanBrand | 4 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1189 | Support for Arabic language. | Hello, I'm working on an Arabic model for this project. I'm planning on adding Arabic character support as well as having the software synthesize audio that speaks Arabic. So far I'm trying to do the following:
1. Downloading the corpus Arabic dataset linked here: http://en.arabicspeechcorpus.com/arabic-speech-corpus.zip
2. Modifying synthesizer/utils/symbols.py to include Arabic characters and numbers and I must have successfully done that.
3. Running encoder_train, vocoder_train, and synthesizer_train with the corpus dataset to generate .pt files for each, which will serve as the new pretrained models for the Arabic language.
4. Finally, running the toolbox to check that the software functions well.
Is there anything else that needs to be done??? | open | 2023-04-12T11:48:25Z | 2023-04-12T11:48:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1189 | [] | ghost | 0 |
pydantic/pydantic | pydantic | 10,528 | Annotated field schemas broken after v2.9.0 | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
The following example shows how `Annotated` fields lost their schema `type` and `format` after v2.9.0.
I cannot see a mention of such a breaking change in the changelog.
### Example Code
```Python
from datetime import datetime
from typing import Annotated
from pydantic import BaseModel, PlainValidator
class Model(BaseModel):
    my_field: Annotated[datetime, PlainValidator(lambda v: v)]


# In v2.8.2 this is correct:
# {'format': 'date-time', 'title': 'My Field', 'type': 'string'}
# In v2.9.0 type information is lost. Also in latest v2.9.2:
# {'title': 'My Field'}
print(Model.model_json_schema()["properties"]["my_field"])
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
python version: 3.12.4 (main, Jun 18 2024, 13:47:41) [Clang 15.0.0 (clang-1500.3.9.4)]
platform: macOS-14.6.1-arm64-arm-64bit
related packages: fastapi-0.115.0 mypy-1.11.2 typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-10-01T09:59:14Z | 2024-10-10T19:41:56Z | https://github.com/pydantic/pydantic/issues/10528 | [
"documentation"
] | MarkusSintonen | 14 |
sloria/TextBlob | nlp | 447 | OSS-Fuzz Integration | My name is McKenna Dallmeyer and I would like to submit TextBlob to OSS-Fuzz. If you are not familiar with the project, OSS-Fuzz is Google's platform for continuous fuzzing of Open Source Software. In order to get the most out of this program, it would be greatly beneficial to be able to merge-in my fuzz harness and build scripts into the upstream repository and contribute bug fixes if they come up. Is this something that you would support me putting the effort into?
Thank you! | open | 2024-05-19T18:53:26Z | 2024-05-19T18:53:26Z | https://github.com/sloria/TextBlob/issues/447 | [] | ennamarie19 | 0 |
zappa/Zappa | flask | 848 | [Migrated] assume_policy setting mostly without effect | Originally from: https://github.com/Miserlou/Zappa/issues/2094 by [tommie-lie](https://github.com/tommie-lie)
## Context
I am creating a trust relationship between `ZappaLambdaExecutionRole` and itself (to call AssumeRole with a session policy in the authorizer in order to drop certain privileges). To that end, I created a policy document like this:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::XXXX:role/zappa-project-dev-ZappaLambdaExecutionRole"
                ],
                "Service": [
                    "apigateway.amazonaws.com",
                    "lambda.amazonaws.com",
                    "events.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```
and set its filename as `assume_policy` in the Zappa settings.
## Expected Behavior
After `zappa update`, the trust relationship should appear in the IAM console and a call to `AssumeRole` should work.
## Actual Behavior
The IAM console shows only the default trust relationships:
> The identity provider(s) events.amazonaws.com
> The identity provider(s) lambda.amazonaws.com
> The identity provider(s) apigateway.amazonaws.com
and calls to `AssumeRole` fail with permission denied.
## Possible Fix
There is a strange check in https://github.com/Miserlou/Zappa/blob/80a6881f0ec0be525a8fd7835b5a1157f9e66100/zappa/core.py#L2583-L2584
This check causes the policy to only be update if the policy reported from IAM and the local one differ *and* their first Statement's **service principals** differ as well.
As we normally want the apigateway and lambda service principals in a Zappa app, and events.amazonaws.com is often handy, too, this default set of service principals never change.
Therefore, the manually added other principals are never added. If two statements are used in the policy, the check even causes a `KeyError`, because the first statement does not have service principals:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::XXXX:role/zappa-project-dev-ZappaLambdaExecutionRole"
],
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com",
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
```
The bogus check was added in 937dbf5a8c39f19bf38f8024e1b8c091f93d9c01 for an unknown reason. Python dict comparison is invariant for the order of the dict entries and JSON lists are order-sensitive, so the normal check `role.assume_role_policy_document != assume_policy_obj` would be perfectly fine. Coincidentally, it's the same check that is used for the more common `attach_policy` setting:
https://github.com/Miserlou/Zappa/blob/80a6881f0ec0be525a8fd7835b5a1157f9e66100/zappa/core.py#L2572
Therefore, the check should be simplified to
```python
if role.assume_role_policy_document != assume_policy_obj:
```
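The order-invariance claim can be checked directly; this is a stdlib-only illustration, not Zappa code:

```python
# Dicts compare by key/value pairs, so key order does not matter.
a = {"Effect": "Allow", "Action": "sts:AssumeRole"}
b = {"Action": "sts:AssumeRole", "Effect": "Allow"}
assert a == b

# Lists compare element by element, so order does matter.
x = {"Service": ["apigateway.amazonaws.com", "lambda.amazonaws.com"]}
y = {"Service": ["lambda.amazonaws.com", "apigateway.amazonaws.com"]}
assert x != y
```

So comparing the whole parsed policy documents with `!=`, as is already done for `attach_policy`, is order-safe for the dict parts and correctly order-sensitive for the list parts.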
## Steps to Reproduce
1. Create a zappa project and copy the above policy into a file called `assume-policy.json`, replace `arn:aws:iam::XXXX:role/zappa-project-dev-` with your project's account ID, project name and stage, respectively
2. `zappa update`
3. go to https://console.aws.amazon.com/iam/home, select your policy and check the tab "Trust relationships"
4. the ZappaLambdaExecutionRole is missing from the "Trusted entities" section
## Your Environment
* Zappa version used: 0.51.0
* Operating System and Python version: Linux, Python 3.8
* The output of `pip freeze`: irrelevant
* Your `zappa_settings.yaml`: abbreviated:
```yaml
dev:
  project_name: zappa-project
  runtime: python3.8
  attach_policy: app-exec-policy.json
  assume_policy: assume-policy.json
```
| closed | 2021-02-20T12:52:26Z | 2024-04-13T19:10:24Z | https://github.com/zappa/Zappa/issues/848 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
fastapi/sqlmodel | fastapi | 1,260 | Default values for fields not working as expected with SQLModel in PostgreSQL | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
I'm encountering an issue where default values for fields in my SQLModel class are not being applied when using PostgreSQL. Despite specifying `default` or `default_factory` for various fields, the database does not seem to handle these defaults, and errors occur during table creation or insertion.
```python
from datetime import datetime
from sqlmodel import Field, SQLModel
from typing import Optional


class Posts(SQLModel, table=True):
    id: Optional[int] = Field(default_factory=int, primary_key=True)
    title: str
    content: str
    published: Optional[bool] = Field(default_factory=lambda: True)
    created_at: Optional[datetime] = Field(default_factory=datetime.now)
```
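For context on where such defaults actually run, stdlib dataclasses make a useful analogy (this is an illustration, not SQLModel itself): `default` and `default_factory` are evaluated on the Python side when the object is created, which is why they would not appear as a `DEFAULT` clause in the PostgreSQL table definition.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Post:
    title: str
    # Evaluated in Python at instantiation time, not by the database.
    published: bool = field(default=True)
    created_at: datetime = field(default_factory=datetime.now)


p = Post(title="hello")
assert p.published is True
assert isinstance(p.created_at, datetime)
```

If a database-side `DEFAULT` is needed (e.g. for rows inserted outside the ORM), SQLModel's `sa_column_kwargs` with a `server_default` is the usual route to investigate — treat that as a pointer to verify against the SQLModel/SQLAlchemy docs, not a confirmed fix for this report.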
| open | 2024-12-29T08:18:00Z | 2025-03-16T15:02:18Z | https://github.com/fastapi/sqlmodel/issues/1260 | [] | Yoosuph | 1 |
pytest-dev/pytest-selenium | pytest | 301 | Unable to find handler for (POST) /wd/hub/session | Using pytest-selenium v.4 and selenium grid v4 I got the "Message: Unable to find handler for (POST) /wd/hub/session".
Selenium 4 changes the URL. It's no longer behind /wd/hub, but In remote.py the executor has /wd/hub | open | 2022-09-26T13:03:31Z | 2022-10-05T07:13:32Z | https://github.com/pytest-dev/pytest-selenium/issues/301 | [
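A minimal sketch of the path difference (the helper name and shape are illustrative, not pytest-selenium API):

```python
def grid_session_url(base: str, selenium_major: int) -> str:
    """Return the WebDriver endpoint for a Selenium Grid of the given major version."""
    base = base.rstrip("/")
    # Grid 3 served the wire protocol under /wd/hub; Grid 4 serves it at the root.
    return f"{base}/wd/hub" if selenium_major < 4 else base


assert grid_session_url("http://grid:4444/", 3) == "http://grid:4444/wd/hub"
assert grid_session_url("http://grid:4444", 4) == "http://grid:4444"
```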
"waiting on input"
] | infertest | 2 |
healthchecks/healthchecks | django | 706 | Self hosted: email summary not working | Hello,
first thanks for the amazing software. Im really happy with health checks!
But:
the weekly/monthly summary is not being sent.
The email integration is working fine and sends emails if a service goes down.
When I activate monthly or weekly reports, the date is correctly shown when the next one should be sent. However, it never gets sent and the date stays. For example today:

If I deactivate it, save, activate, and save again:

Everything else is working fine.
I'm running it self hosted on docker.
Thanks for your help. | closed | 2022-09-19T19:24:06Z | 2022-09-20T11:43:27Z | https://github.com/healthchecks/healthchecks/issues/706 | [] | whirlfire | 3 |
waditu/tushare | pandas | 1,014 | Read timed out | HTTPConnectionPool(host='api.tushare.pro', port=80): Read timed out. | closed | 2019-04-16T09:01:17Z | 2019-04-16T15:17:04Z | https://github.com/waditu/tushare/issues/1014 | [] | misshuanghm | 1 |