| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
stanfordnlp/stanza | nlp | 1,436 | Missing attributes start_char and end_char for certain tokens in Stanza pipeline output. | ### Describe the bug:
The output of the Stanza pipeline is missing start_char and end_char values for certain tokens. This issue can be observed in the following example, where the token 'It\"s' lacks start_char and end_char values, even though these fields are present for other tokens in the output.
### Steps to reproduce:
1. Import Stanza and initialize a pipeline:
```python
import stanza
pipeline = stanza.Pipeline('en', processors='tokenize,mwt,pos,lemma,ner,depparse', download_method=None, verbose=0)
```
2. Pass a simple sentence through the pipeline:
```python
pipeline('It"s an example sentence.')
```
3. Check the output. The token with id=1 does not have start_char and end_char values.
### Expected behavior:
All tokens in the output should include start_char and end_char values.
### Actual behavior:
The token with id=1 ('It\"s') lacks start_char and end_char values, while all other tokens include these fields.
```
[
[
{
"id": 1,
"text": "It\"s",
"lemma": "irc",
"upos": "VERB",
"xpos": "VBZ",
"feats": "Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin",
"head": 0,
"deprel": "root",
"ner": "O",
"multi_ner": [
"O"
]
},
{
"id": 2,
"text": "an",
"lemma": "a",
"upos": "DET",
"xpos": "DT",
"feats": "Definite=Ind|PronType=Art",
"head": 4,
"deprel": "det",
"start_char": 5,
"end_char": 7,
"ner": "O",
"multi_ner": [
"O"
]
},
...
```
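A small helper (not part of Stanza's API) can scan the pipeline's dict output for tokens that lack offsets, shown here on toy data mirroring the report:

```python
def find_tokens_missing_offsets(sentences):
    """Scan to_dict()-style output (a list of sentences, each a list of
    token dicts) for tokens missing character offsets."""
    missing = []
    for sentence_index, sentence in enumerate(sentences):
        for token in sentence:
            if "start_char" not in token or "end_char" not in token:
                missing.append((sentence_index, token["id"]))
    return missing

# Toy data mirroring the output above: token 1 has no offsets, token 2 does.
sentences = [[
    {"id": 1, "text": 'It"s'},
    {"id": 2, "text": "an", "start_char": 5, "end_char": 7},
]]
print(find_tokens_missing_offsets(sentences))  # [(0, 1)]
```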
### Environment (please complete the following information):
- Kernel version: 6.11.2-amd64
- Python version: 3.11.10
- Installed packages:
- numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1728239949208/work/dist/numpy-2.1.2-cp311-cp311-linux_x86_64.whl#sha256=768addcb66d11bf95f7d2036c7b2595c638ef6539dba5f6a98faa0cdd9170ce8
- stanza==1.9.2 | closed | 2024-11-26T22:19:54Z | 2024-12-03T11:25:55Z | https://github.com/stanfordnlp/stanza/issues/1436 | [
"bug"
] | al3xkras | 3 |
Ehco1996/django-sspanel | django | 159 | After deleting a user in the admin backend, the database is not updated? | I deleted a user in the admin backend, but when I then tried to register with the same nickname, it said the name was already taken.
The backend indeed shows only one user left, but the database still has two, and the deleted user can still log in: after entering the username and password it jumps to a 500 page, but clicking the browser's back button shows "login successful". Although some features are unusable, viewing announcements, contacting the site owner, donations, the top-up page, the product page, purchase records, invite management, and rebate records all still work. I am not sure whether this counts as a bug.



| closed | 2018-07-22T06:01:59Z | 2018-07-23T08:36:38Z | https://github.com/Ehco1996/django-sspanel/issues/159 | [] | Aruelius | 3 |
twopirllc/pandas-ta | pandas | 584 | Description for Each Indicator | I am very new to technical indicators, and there are over 200 indicators.
Where can I find the documentation for each indicator, including the math behind it? If documentation is not available, will this project accept my contribution? | closed | 2022-09-01T15:59:49Z | 2024-12-16T17:26:50Z | https://github.com/twopirllc/pandas-ta/issues/584 | [
"enhancement"
] | chandraveshchaudhari | 4 |
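Re: the pandas-ta documentation question above: until per-indicator docs exist, each indicator's docstring can be read programmatically with the standard library (demonstrated here on a stdlib function, since pandas_ta may not be installed):

```python
import inspect
import textwrap

def show_doc(func):
    """Return a function's docstring, dedented, or a placeholder if missing."""
    doc = inspect.getdoc(func)
    return textwrap.dedent(doc) if doc else "(no docstring available)"

# With pandas_ta installed this would be, e.g., show_doc(ta.rsi); shown here
# on a standard-library function so the sketch runs anywhere:
print(show_doc(inspect.getdoc).splitlines()[0])
```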
tortoise/tortoise-orm | asyncio | 1,462 | fields.TimeDeltaField in pydantic_model_creator raise Input should be None [type=none_required, input_value=datetime.timedelta(second...104, microseconds=48382), input_type=timedelta] | **Describe the bug**
with
tortoise-orm==0.20.0
pydantic==2.2.1
code like:
```
class Task(Model):
usage_time = fields.TimeDeltaField(null=True, blank=True, description='usgae')
class TaskListSchema(pydantic_model_creator(Task, name='TaskListSchema', exclude=('usage_time',))):
pass
TaskListSchema.from_orm(task).dict()
```
will get error:
```
Traceback (most recent call last):
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\fastapi\applications.py", line 289, in __call__
await super().__call__(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\python_workspace\AutoTestServer\common\middlewares.py", line 18, in dispatch
response = await call_next(request)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
raise app_exc
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
raise e
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\fastapi\routing.py", line 273, in app
raw_response = await run_endpoint_function(
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\fastapi\routing.py", line 190, in run_endpoint_function
return await dependant.call(**values)
File "D:\python_workspace\AutoTestServer\task\views.py", line 232, in task_list
data = [TaskListSchema.from_orm(task).model_dump() for task in queryset]
File "D:\python_workspace\AutoTestServer\task\views.py", line 232, in <listcomp>
data = [TaskListSchema.from_orm(task).model_dump() for task in queryset]
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\typing_extensions.py", line 2562, in wrapper
return __arg(*args, **kwargs)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\pydantic\main.py", line 1066, in from_orm
return cls.model_validate(obj)
File "D:\python_workspace\AutoTestServer\venv\lib\site-packages\pydantic\main.py", line 496, in model_validate
return cls.__pydantic_validator__.validate_python(
pydantic_core._pydantic_core.ValidationError: 1 validation error for TaskListSchema
usage_time
Input should be None [type=none_required, input_value=datetime.timedelta(second...104, microseconds=48382), input_type=timedelta]
For further information visit https://errors.pydantic.dev/2.2/v/none_required
```
**To Reproduce**
print TaskListSchema will get
```
{
"additionalProperties": false,
"properties": {
"usage_time": {
"description": "\u4efb\u52a1\u8017\u65f6",
"nullable": true,
"title": "Usage Time",
"type": "null"
}
},
"required": [
"usage_time"
],
"title": "TaskListSchema",
"type": "object"
}
```
when i use
tortoise-orm==0.19.2
pydantic==1.10.2
print TaskListSchema will get
```
{
"title": "TaskListSchema",
"description": "\u81ea\u52a8\u5316\u4efb\u52a1",
"type": "object",
"properties": {
"usage_time": {
"title": "Usage Time",
"description": "\u4efb\u52a1\u8017\u65f6",
"nullable": true,
"type": "number",
"format": "time-delta"
}
},
"additionalProperties": false
}
```
This schema works! Did something change in how pydantic_model_creator handles fields.TimeDeltaField?
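For reference, the annotation pydantic_model_creator would be expected to produce for a nullable TimeDeltaField is `Optional[timedelta]` (both `None` and `timedelta` accepted), whereas the reported schema's `"type": "null"` corresponds to a bare `None` annotation that only accepts `None`. A stdlib illustration of the difference (the class names here are mine, not tortoise/pydantic internals):

```python
from datetime import timedelta
from typing import Optional, get_args, get_type_hints

class ExpectedSchema:
    # what a nullable TimeDeltaField should map to
    usage_time: Optional[timedelta] = None

class BrokenSchema:
    # what the generated '"type": "null"' schema effectively encodes
    usage_time: None = None

expected = get_type_hints(ExpectedSchema)["usage_time"]
broken = get_type_hints(BrokenSchema)["usage_time"]
print(timedelta in get_args(expected))  # True: timedelta values are allowed
print(broken is type(None))             # True: only None is allowed
```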
**Expected behavior**
fields.TimeDeltaField in pydantic_model_creator works good
**Additional context**
Add any other context about the problem here.
| open | 2023-08-23T07:16:43Z | 2023-08-23T10:02:17Z | https://github.com/tortoise/tortoise-orm/issues/1462 | [] | yinkh | 6 |
unit8co/darts | data-science | 2,256 | Wonder if there is any way to make a model predict more spread 'dynamic'? explain more in body | Hello! I have been trying a lot to get a model to predict well on some electricity data. I have tried several models (but not all yet), and TCN is one of the ones that performs best.
The model achieves a pretty good RMSE and it outperforms the naive models substantially.
If I train and predict on accumulated kWh usage, the results look better (but with a worse RMSE).
If I train on the interval sum (which is stationary), then it gets a better RMSE but the results look kind of off.


My judgement is that the iSum forecast is better even though it looks worse. Which brings me to my question: is there a parameter that makes the model predict more similarly to the data it trains on? Or perhaps the iSum prediction could simply be transformed so that it looks more like typical iSum data.
| closed | 2024-02-29T00:40:12Z | 2024-08-23T09:18:04Z | https://github.com/unit8co/darts/issues/2256 | [
"question"
] | Allena101 | 7 |
piskvorky/gensim | data-science | 3,177 | Incompatible types of unicode strings in `gensim.similarities.fastss.editdist` | #### Problem description
In #3146, a new algorithm for fast Levenshtein distance computation has been added. However, the algorithm currently cannot cope with different internal representations of Unicode strings, resulting in cryptic errors:
#### Steps/code/corpus to reproduce
``` python
$ pip install git+https://github.com/RaRe-Technologies/gensim.git@develop
$ python3
>>> from gensim.similarities.fastss import editdist
>>>
>>> editdist('Žižka', 'šiška')
2
>>> editdist('Žižka', 'Zizka')
gensim/similarities/fastss.pyx in gensim.similarities.fastss.editdist()
ValueError: incompatible types of unicode strings
```
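The crash comes from mixing CPython's compact unicode storage kinds: 'Žižka' and 'šiška' both contain code points above 255 and are stored as UCS-2, while 'Zizka' is pure ASCII and stored as UCS-1, which the Cython fast path refuses to compare. As a representation-agnostic fallback (a workaround sketch, not gensim's patch in #3178):

```python
def editdist(s1: str, s2: str) -> int:
    """Plain-Python Levenshtein distance; works for any pair of unicode
    strings regardless of CPython's internal storage (UCS-1/2/4)."""
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        current = [i]
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

print(editdist('Žižka', 'šiška'))  # 2
print(editdist('Žižka', 'Zizka'))  # 2 (no ValueError)
```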
#### Suggested patch
See #3178.
#### Versions
```python
>>> import platform; print(platform.platform())
Linux-4.15.0-108-generic-x86_64-with-glibc2.10
>>> import sys; print("Python", sys.version)
Python 3.8.3 (default, Jul 2 2020, 16:21:59)
[GCC 7.3.0]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.20.3
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.6.3
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.1.0.dev0
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
| closed | 2021-06-21T18:25:05Z | 2021-06-29T05:23:41Z | https://github.com/piskvorky/gensim/issues/3177 | [] | Witiko | 3 |
PokeAPI/pokeapi | api | 306 | CORS Issue | This is a very specific question but I am trying to access the PokeAPI V1 and V2 and they seem to behave differently.
My code for both is
```javascript
$.ajax({
    type: "GET",
    dataType: "jsonp",
    url: (Uone or Utwo),  // placeholder for the V1 or V2 URL
    success: function(dataone) {
        // somefunction
    }
});
```
. When I put in the url from the pokeAPI V1 it works but when I put in the url from the pokeAPI V2 it gives me an error that Unexpected Error occured: Unexpected Syntax : found. I read about this on StackOverflow and it appears that the file is not truly a JSON file. I tried everything to fix this issue but it didn't work.
In a more general sense, I was receiving error messages along the lines of "NO ACCESS HEADERS. BLOCKED BY CROSS ORIGIN RESOURCE SHARING". I downloaded a Chrome extension which fixed this issue, but it is not a good solution. I tried two ways to fix this permanently: one was to add "?/callback?" to the end of my URL, and the other was to add dataType JSONP to my ajax call. This worked for the V1 API but not for the V2 API. I have no clue why.
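A likely explanation: with `dataType: "jsonp"`, jQuery injects the response as a script and expects a wrapped payload like `callback({...})`. PokeAPI v2 returns plain JSON, and a bare `{"name": ...}` evaluated as a script is exactly an "unexpected token :" syntax error. Since v2 sends open CORS headers, `dataType: "json"` should work there without the callback trick. The wrapping difference, illustrated with a small Python sketch (the function name is mine, not part of any API, and the paren-stripping is deliberately naive):

```python
import json

def unwrap_jsonp(payload: str) -> dict:
    """Strip a JSONP wrapper like `callback({...});` down to the JSON inside.
    If there is no wrapper, the payload is treated as plain JSON."""
    start = payload.find("(")
    end = payload.rfind(")")
    if start != -1 and end > start:
        payload = payload[start + 1:end]
    return json.loads(payload)

# A JSONP response (what dataType: "jsonp" expects the server to send):
print(unwrap_jsonp('cb({"name": "pikachu"});'))  # {'name': 'pikachu'}
# A plain JSON response (what PokeAPI v2 actually sends) also parses:
print(unwrap_jsonp('{"name": "pikachu"}'))       # {'name': 'pikachu'}
```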
| closed | 2017-10-01T05:38:16Z | 2017-10-04T09:57:25Z | https://github.com/PokeAPI/pokeapi/issues/306 | [] | mbapna123 | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,101 | Using uc option with proxy in multiprocessing repeats same ip | 
Hi, using this code in multiprocessing and different proxy, repeats ip:
driver = Driver(uc=True, proxy=proxy_dict['http'], headless=headless, uc_cdp_events=api,incognito=True)
Thanks
| closed | 2023-09-12T16:23:10Z | 2023-09-15T16:18:30Z | https://github.com/seleniumbase/SeleniumBase/issues/2101 | [
"self-resolved",
"UC Mode / CDP Mode"
] | FranciscoPalomares | 11 |
httpie/cli | api | 822 | Fix simple typo: downland -> download | There is a small typo in httpie/downloads.py.
Should read download rather than downland.
| closed | 2019-12-04T11:08:59Z | 2019-12-04T12:32:09Z | https://github.com/httpie/cli/issues/822 | [] | timgates42 | 0 |
Neoteroi/BlackSheep | asyncio | 246 | Add missing article to exception messages | Add the missing articles to these error messages:
```python
def validate_source_path(source_folder: str) -> None:
source_folder_path = Path(source_folder)
if not source_folder_path.exists():
raise InvalidArgument("given root path does not exist")
if not source_folder_path.is_dir():
raise InvalidArgument("given root path is not a directory")
``` | closed | 2022-03-25T19:48:22Z | 2022-04-26T20:26:22Z | https://github.com/Neoteroi/BlackSheep/issues/246 | [
"enhancement",
"fixed in branch"
] | RobertoPrevato | 0 |
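For the BlackSheep messages above, the fix is just the added article; a runnable sketch (with `InvalidArgument` stubbed in so the example is self-contained):

```python
from pathlib import Path

class InvalidArgument(Exception):
    """Stand-in for BlackSheep's InvalidArgument, so this sketch runs standalone."""

def validate_source_path(source_folder: str) -> None:
    source_folder_path = Path(source_folder)
    if not source_folder_path.exists():
        raise InvalidArgument("the given root path does not exist")
    if not source_folder_path.is_dir():
        raise InvalidArgument("the given root path is not a directory")

validate_source_path(".")  # an existing directory passes silently
```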
ymcui/Chinese-BERT-wwm | nlp | 160 | Suggest adding example demos for quick onboarding | Suggest adding example demos to make it easier to get started quickly. | closed | 2020-11-23T05:13:08Z | 2020-12-01T05:29:55Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/160 | [
"stale"
] | zeng8280 | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 670 | Default normalization in distances is counterintuitive (or wrong) | Not a real bug, and maybe it's just personal preference, but I feel like the normalization in several distances is counterintuitive.
For example, the documentation for `CosineSimilarity`, says
> This class is equivalent to [DotProductSimilarity(normalize_embeddings=True)](https://kevinmusgrave.github.io/pytorch-metric-learning/distances/#dotproductsimilarity).
Which of course is correct, however, the default `DotProductSimilarity` itself normalizes the input vectors.
Also, the documentation for the `LpDistance` says
> With default parameters, this is the Euclidean distance.
This is not true, as the Euclidean distance operates on unnormalized vectors.
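Both points can be checked without any library (plain-Python stand-ins for the distance computations, not pytorch-metric-learning code): cosine similarity equals the dot product of L2-normalized vectors, and the Euclidean distance on raw vectors differs from the distance on normalized ones.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

def cosine_similarity(u, v):
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

u, v = [3.0, 4.0], [6.0, 8.0]
# CosineSimilarity == dot product of normalized inputs:
print(math.isclose(cosine_similarity(u, v), dot(normalize(u), normalize(v))))  # True
# Euclidean distance on raw vectors is not the distance on normalized ones:
print(euclidean(u, v), euclidean(normalize(u), normalize(v)))  # 5.0 0.0
```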
So maybe a bug after all(?) | open | 2023-10-16T17:16:28Z | 2023-10-17T10:37:39Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/670 | [
"bug",
"documentation"
] | mcschmitz | 1 |
flasgger/flasgger | rest-api | 281 | Add YAML in request body | I have an API that accepts YAML or JSON in the body (depending on what I send in the other parameter)...
But even when I use the `consumes` tag, flasgger only sends `"Content-Type: application/json"`.
Here's the YAML docstring:
- in: body
name: yaml
consumes:
- application/yaml
- application/json
``` | open | 2019-02-04T10:26:41Z | 2019-02-04T10:26:41Z | https://github.com/flasgger/flasgger/issues/281 | [] | AseedUsmani | 0 |
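In Swagger 2.0, `consumes` is an operation-level key, not a property of an individual body parameter, which may be why flasgger never picks it up here. A sketch of the docstring with the key moved to the top level (the rest of the spec assumed unchanged):

```yaml
consumes:
  - application/yaml
  - application/json
parameters:
  - in: body
    name: yaml
```

Whether the UI then actually switches the request's Content-Type still depends on the Swagger-UI version flasgger ships; treat this as a spec-shape fix, not a guaranteed one.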
matplotlib/matplotlib | data-visualization | 29,492 | [Bug]: Face color of right-side table cells without right edge rendered wrongly | ### Bug summary
When the right edge of a colored right-most cell is removed, the rendering goes wrong, and a diagonal is created.
Here's an example where cell 4 is rendered wrongly, while cell 3 without its left edge is rendered correctly:

### Code for reproduction
```Python
"""
This is a reproducer to show that right sided cells of a Matplotlib.Table,
without a border, render the face color of that cell wrongly.
"""
from matplotlib import pyplot as plt
def main():
# Setup data.
values = [[1, 2], [3, 4], [5, 6]]
labels = ["Col 1", "Col 2"]
colors = [["w", "w"], ["r", "r"], ["w", "w"]]
# Setup figure.
_, ax = plt.subplots()
ax.axis("off")
# Plot the table
table = plt.table(
cellText=values,
colLabels=labels,
cellColours=colors,
loc="center",
cellLoc="center",
)
# Remove a left side border, showing cell 3 has NO rendering issues.
table.get_celld()[(2, 0)].visible_edges = "BTR"
# Remove a right side border, showing cell 4 has rendering issues.
table.get_celld()[(2, 1)].visible_edges = "BTL"
# Plot figure.
plt.show()
if __name__ == "__main__":
main()
```
### Actual outcome
Cell 4 has its face color half-rendered with a diagonal going from left bottom to right upper.
### Expected outcome
You would expect cell 4 to be rendered with a squared color instead of a diagonal, just as cell 3 is rendered.
### Additional information
Fresh install of Matplotlib, I have no clue how long this has been in the package.
### Operating system
Ubuntu
### Matplotlib Version
3.10.0
### Matplotlib Backend
tkagg
### Python version
3.12.3
### Jupyter version
_No response_
### Installation
pip | closed | 2025-01-20T16:28:47Z | 2025-01-20T18:16:40Z | https://github.com/matplotlib/matplotlib/issues/29492 | [
"topic: table"
] | okkevaneck | 1 |
mage-ai/mage-ai | data-science | 5,024 | [BUG] First Time only Project Git initialization Error | ### Mage version
9.70
### Describe the bug
The project Git initialization error occurs only the first time. Once you close the error and try again, it works fine. See the screenshot for the error. This happens in multi-project platform mode.
### To reproduce
Create a project
Navigate to the version control app
Initialize git in the project directory
### Expected behavior
The directory initializes without error
### Screenshots

### Operating system
_No response_
### Additional context
_No response_ | open | 2024-05-02T20:02:32Z | 2024-05-03T19:26:26Z | https://github.com/mage-ai/mage-ai/issues/5024 | [
"bug"
] | Arthidon | 0 |
huggingface/datasets | nlp | 7,000 | IterableDataset: Unsupported ScalarType BFloat16 | ### Describe the bug
`IterableDataset.from_generator` crashes when using BFloat16:
```
File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor
args = (obj.detach().cpu().numpy(),)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType BFloat16
```
### Steps to reproduce the bug
```python
import torch
from datasets import IterableDataset
def demo(x):
yield {"x": x}
x = torch.tensor([1.], dtype=torch.bfloat16)
dataset = IterableDataset.from_generator(
demo,
gen_kwargs=dict(x=x),
)
example = next(iter(dataset))
print(example)
```
### Expected behavior
Code sample should print:
```python
{'x': tensor([1.], dtype=torch.bfloat16)}
```
### Environment info
```
datasets==2.20.0
torch==2.2.2
``` | closed | 2024-06-25T14:43:26Z | 2024-06-25T16:04:00Z | https://github.com/huggingface/datasets/issues/7000 | [] | stoical07 | 3 |
TheAlgorithms/Python | python | 11,585 | Please like my code |
```python
userInput = input('Enter 1 or 2: ')
if userInput == "1":
    print("Hello world")
    print("I love Python")
elif userInput == "2":
    print("python rocks")
    print("How are you?")
else:
    print("you did not enter a valid number")
```
| closed | 2024-09-26T09:31:43Z | 2024-09-30T10:01:19Z | https://github.com/TheAlgorithms/Python/issues/11585 | [] | Phyhoncoder13 | 2 |
OpenInterpreter/open-interpreter | python | 1,507 | Fails with "json.decoder.JSONDecodeError" trying to manipulate with screen using Antropic | ### Describe the bug
I've run `interpreter --os` to play with the Anthropic OS control feature, and it crashed with `json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)`.
### Reproduce
I've run "interpreter --os", set a key and asked if it can hear me through the mic
it tried to open settings and failed:
I'll click on the Sound settings to check your microphone settings.
{"action": "mouse_move", "coordinate": [40, 369]}{"action": "left_click"}{"action": "screenshot"}
I notice I'm still in the Notifications settings. Let me try clicking the Sound icon again to access sound settings.
{"action": "key", "text": "Command+Space"}{"action": "type", "text": "Sound"}
Let me click the "Open" button to access Sound settings.
{"action": "mouse_move", "coordinate": [558, 202]}Traceback (most recent call last):
File "/opt/anaconda3/bin/interpreter", line 5, in <module>
from interpreter.terminal_interface.start_terminal_interface import main
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/__init__.py", line 54, in <module>
run_async_main()
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/computer_use/loop.py", line 528, in run_async_main
asyncio.run(main())
File "/opt/anaconda3/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/computer_use/loop.py", line 505, in main
async for chunk in sampling_loop(
File "/opt/anaconda3/lib/python3.12/site-packages/interpreter/computer_use/loop.py", line 186, in sampling_loop
current_block.input = json.loads(current_block.partial_json)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.12/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
### Expected behavior
I was expecting it to say something like "can't perform this action" rather than crash.
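The traceback shows the crash happens when `json.loads(current_block.partial_json)` runs on an empty (or truncated) streamed payload. A defensive wrapper of the kind that would turn this into a graceful fallback, as a sketch, not the project's actual fix:

```python
import json

def parse_partial_json(partial, default=None):
    """Parse a possibly empty or incomplete streamed tool-call payload,
    returning a default instead of raising JSONDecodeError."""
    if default is None:
        default = {}
    if not partial or not partial.strip():
        return default
    try:
        return json.loads(partial)
    except json.JSONDecodeError:
        return default

print(parse_partial_json(""))                          # {}
print(parse_partial_json('{"action": "mouse_move"}'))  # {'action': 'mouse_move'}
print(parse_partial_json('{"action": "mouse_mo'))      # {} (truncated stream)
```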
### Screenshots

### Open Interpreter version
0.4.3
### Python version
3.12.4
### Operating System name and version
macOS 14.7 (23H124)
### Additional context
_No response_ | open | 2024-10-29T08:57:57Z | 2024-11-04T14:30:26Z | https://github.com/OpenInterpreter/open-interpreter/issues/1507 | [
"Needs Verification"
] | artemu78 | 1 |
flairNLP/flair | pytorch | 3,292 | [Feature]: add BINDER | ### Problem statement
Extend the label verbalizer decoder with the approach from the [BINDER](https://openreview.net/forum?id=9EAQVEINuum) paper, which uses a contrastive/similarity loss, whereas the current implementation uses matrix multiplication.
### Solution
Write a new class that implements the BINDER paper.
### Additional Context
_No response_ | open | 2023-08-06T10:36:21Z | 2023-08-06T10:36:22Z | https://github.com/flairNLP/flair/issues/3292 | [
"feature"
] | whoisjones | 0 |
gradio-app/gradio | python | 10,572 | gr.Dataframe style is not working on gradio 5.16.0 | ### Describe the bug
As described in https://www.gradio.app/guides/styling-the-gradio-dataframe, the style previously displayed properly. Now it no longer works on the FIRST run; only after running multiple times does the style display properly.
Please check the code below and run it in https://www.gradio.app/playground for reproduction.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import pandas as pd
import gradio as gr
# Creating a sample dataframe
def run():
df = pd.DataFrame({
"A" : [14, 4, 5, 4, -1],
"B" : [5, 2, 54, 3, 2],
"C" : [20, 20, 7, 3, 8],
"D" : [14, 3, 6, 2, 6],
"E" : [23, 45, 64, 32, 23]
})
df = df.style.map(color_num, subset=["A"])
return df
# Function to apply text color
def color_num(value: float) -> str:
color = "red" if value >= 0 else "green"
color_style = "color: {}".format(color)
return color_style
# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
gr.Textbox("{}".format(gr.__version__))
a = gr.DataFrame()
b = gr.Button("run")
b.click(run,outputs=a)
demo.launch()
```
### Screenshot
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/b91e7dc3-be4c-4931-beee-cafe426e015d" />
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/c48d06bc-cb64-4540-b471-3734226d3204" />
### Logs
```shell
```
### System Info
```shell
gradio 5.16.0
```
### Severity
I can work around it | closed | 2025-02-12T10:21:49Z | 2025-02-19T19:01:42Z | https://github.com/gradio-app/gradio/issues/10572 | [
"bug",
"Priority",
"💾 Dataframe",
"Regression"
] | jamie0725 | 2 |
ray-project/ray | deep-learning | 51,084 | [core] Cover cpplint for `/src/ray/object_manager` (excluding `plasma`) | ### Description
As part of the initiative to introduce cpplint into the pre-commit hook, we are gradually cleaning up C++ folders to ensure compliance with code style requirements. This issue focuses on cleaning up `src/ray/object_manager` (excluding `plasma`, as it's being covered through #50954).
### Goal
Ensure all .h and .cc files in src/ray/object_manager (excluding plasma) comply with cpplint rules.
Address or suppress all cpplint warnings.
### Steps to Complete
- Checkout the latest main branch and install the pre-commit hook.
- Manually modify all C++ files in src/ray/object_manager to trigger cpplint (e.g., by adding a newline).
- Run git commit to trigger cpplint and identify issues.
- Fix the reported issues or suppress them using clang-tidy if necessary.
This is a sub-issue of #50583
| open | 2025-03-05T01:47:39Z | 2025-03-05T04:48:13Z | https://github.com/ray-project/ray/issues/51084 | [
"enhancement",
"core"
] | elimelt | 2 |
gunthercox/ChatterBot | machine-learning | 2,038 | Erroneous chatterbot reponses | While running the following script:
```python
from chatterbot import ChatBot

bot = ChatBot(
    'Maxine',
    storage_adapter='chatterbot.storage.SQLStorageAdapter',
    database_uri='sqlite:///database.sqlite3',
    logic_adapters=[
        {
            'import_path': 'chatterbot.logic.BestMatch',
            'threshold': 0.95,
            'default_response': 'what?'
        },
        'chatterbot.logic.MathematicalEvaluation',
        'chatterbot.logic.TimeLogicAdapter'
    ],
)

print('ask me something')

while True:
    try:
        bot_input = bot.get_response(input())
        print(bot_input)
    except (KeyboardInterrupt, EOFError, SystemExit):
        break
```
I get the following responses:
ask me something
how are you
I am good.
what time is it
The current time is 12:41 PM
how much is 2 * 2
2 * 2 = 4
eeee
**The current time is 12:41 PM**
how much is 2 + 5
2 + 5 = 7
rrrr
what?
how are you
I am good.
gggg
what?
The bold face response should have been 'what?'
Later it does respond with the correct response.
Is this just a glitch that you have to live with, or am I doing something wrong?
scikit-hep/awkward | numpy | 2,968 | ak.to_parquet_row_groups | ### Description of new feature
Awkward 1.x had a secret back-door to `ak.to_parquet` that would interpret the first argument as an iterator of `ak.Arrays` to write as row groups, rather than a single `ak.Array` to write as a single row group, if its type was a `PartitionedArray`, which no longer exists in Awkward 2.x.
A better way to do it would be to have a separate function, `ak.to_parquet_row_groups`, which always interprets its first argument as an iterable of arrays and write each as a row group in the output file.
Once an `iterator` has been constructed, this is how Awkward 1.x wrote Parquet row groups:
https://github.com/scikit-hep/awkward/blob/cf8f37360e6aa8e064ff28bbb4a2d18702e35fb0/src/awkward/operations/convert.py#L3071-L3083
The new function should do it in a similar way. However, the new function also shares 99% of its functionality with `ak.to_parquet`, which should not be reimplemented. Instead, the
```python
@high_level_function()
def to_parquet
```
in [src/awkward/operations/ak_to_parquet.py](https://github.com/scikit-hep/awkward/blob/main/src/awkward/operations/ak_to_parquet.py) should be split into an entry point and a hidden implementation `_impl` (as almost all of the other `ak.*` functions are) and that `_impl` should have a back-door way to tell it that the first argument is to be interpreted as an iterable of arrays, rather than an array. When it gets to the point of creating the writer and filling it,
https://github.com/scikit-hep/awkward/blob/f9a29effed4998054235649c4ee1dfadc059ede0/src/awkward/operations/ak_to_parquet.py#L320-L339
the back-door should use Awkward 1.x's procedure of writing the first row group and then continuing to write row groups until `StopIteration`. Then `ak.to_parquet_row_groups` should use this `_impl`'s back-door and `ak.to_parquet` should not. | closed | 2024-01-19T20:30:33Z | 2024-02-05T16:01:17Z | https://github.com/scikit-hep/awkward/issues/2968 | [
"feature"
] | jpivarski | 0 |
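For the `ak.to_parquet_row_groups` design above, the row-group loop reduces to a common iterator pattern: peel off the first element to set up the writer, then stream the rest until `StopIteration`. A dependency-free sketch of that control flow (the writer is stubbed with a recorder, since pyarrow may not be installed; names here are illustrative, not Awkward's internals):

```python
def write_row_groups(arrays, open_writer):
    """Write each array in `arrays` as one row group.
    `open_writer(first)` returns an object with .write(item) and .close()."""
    iterator = iter(arrays)
    try:
        first = next(iterator)
    except StopIteration:
        raise ValueError("cannot write an empty iterable of arrays") from None
    writer = open_writer(first)   # schema comes from the first row group
    writer.write(first)
    for array in iterator:        # remaining row groups reuse the same writer
        writer.write(array)
    writer.close()

class _Recorder:
    def __init__(self):
        self.events = []
    def write(self, x):
        self.events.append(("write", x))
    def close(self):
        self.events.append(("close", None))

rec = _Recorder()
write_row_groups([1, 2, 3], lambda first: rec)
print(rec.events)  # [('write', 1), ('write', 2), ('write', 3), ('close', None)]
```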
giotto-ai/giotto-tda | scikit-learn | 350 | Bindings externals wasserstein, input parameter q type | <!-- Instructions For Filing a Bug: https://github.com/giotto-ai/giotto-tda/blob/master/CONTRIBUTING.rst -->
#### Description
<!-- Example: Joblib Error thrown when calling fit on VietorisRipsPersistence
-->
The variable `q` in the `wasserstein` bindings is of type [int](https://github.com/giotto-ai/giotto-tda/blob/adfb1489bf31fa31286fef772cf8b793290a1aeb/gtda/externals/bindings/wasserstein_bindings.cpp#L14), but in the C++ implementation `q`, called `wasserstein_power`, is of type `Real` => `double` ([link](https://github.com/giotto-ai/giotto-tda/blob/adfb1489bf31fa31286fef772cf8b793290a1aeb/gtda/externals/hera/wasserstein/basic_defs_ws.h#L83)).
`q` should therefore be converted to a double.
#### Versions
<!--
Please run the following snippet and paste the output below.
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import joblib; print("Joblib", joblib.__version__)
import sklearn; print("Scikit-learn", sklearn.__version__)
import gtda; print("Giotto-tda", gtda.__version__)
-->
Latest release
<!-- Thanks for contributing! -->
| closed | 2020-03-04T12:45:07Z | 2020-03-06T12:24:45Z | https://github.com/giotto-ai/giotto-tda/issues/350 | [] | MonkeyBreaker | 0 |
axnsan12/drf-yasg | django | 316 | sorting elements | Currently, elements are sorted by URL: a method with the tag `auth` and URL 'other' comes after a method with URL 'next' that has no tag.
How can I sort the list in ReDoc so that elements with tags come first, sorted by tag, and elements without tags come after the tagged ones?
In other words, 'other' with the tag `auth` should come before the untagged methods. | closed | 2019-02-15T15:08:18Z | 2022-02-03T13:43:53Z | https://github.com/axnsan12/drf-yasg/issues/316 | [] | fenist19 | 3 |
streamlit/streamlit | machine-learning | 10,128 | Pre set selections for `st.dataframe` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Is there any way to set selection on dataframe or data_editor ?
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-01-08T10:19:34Z | 2025-02-12T04:20:14Z | https://github.com/streamlit/streamlit/issues/10128 | [
"type:enhancement",
"feature:st.dataframe"
] | StarDustEins | 6 |
piskvorky/gensim | machine-learning | 3,149 | Python int too large to convert to C long |
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt


def make_coordinates(image, line_parameters):
    slope, intercept = line_parameters
    y1 = image.shape[0]
    y2 = int(y1 * (3 / 5))
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return np.array([x1, y1, x2, y2])


def average_slope_intercept(image, lines):
    left_fit = []
    right_fit = []
    for line in lines:
        x1, y1, x2, y2 = line.reshape(4)
        parameters = np.polyfit((x1, x2), (y1, y2), 1)
        slope = parameters[0]
        intercept = parameters[1]
        if slope < 0:
            left_fit.append((slope, intercept))
        else:
            right_fit.append((slope, intercept))
    left_fit_average = np.average(left_fit, axis=0)
    right_fit_average = np.average(right_fit, axis=0)
    left_line = make_coordinates(image, left_fit_average)
    right_line = make_coordinates(image, right_fit_average)
    return np.array([left_line, right_line])


def canny(image):
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    canny = cv2.Canny(blur, 50, 150)
    return canny


def display_lines(image, lines):
    line_image = np.zeros_like(image)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return line_image


def region_of_interest(image):
    height = image.shape[0]
    polygon = np.array([
        [(95, height), (1010, height), (450, 250)]
    ])
    mask = np.zeros_like(image)
    cv2.fillPoly(mask, polygon, 255)
    masked_image = np.bitwise_and(canny, mask)
    return masked_image


#image = cv2.imread('Nagpur_image.jpg')
#lane_image = np.copy(image)
#canny = canny(lane_image)
#cropped_image = region_of_interest(canny)
#lines = cv2.HoughLinesP(cropped_image, 2, np.pi/180, 100, np.array([]), minLineLength = 40, maxLineGap = 5)
#averaged_lines = average_slope_intercept(lane_image, lines)
#line_image = display_lines(lane_image, averaged_lines)
#combo_image = cv2.addWeighted(lane_image, 0.8, line_image, 1, 1)
#cv2.imshow("result", combo_image)
#cv2.waitKey(0)

cap = cv2.VideoCapture("test3.mp4")
while cap.isOpened():
    _, frame = cap.read()
    canny = canny(frame)
    cropped_image = region_of_interest(canny)
    lines = cv2.HoughLinesP(cropped_image, 2, np.pi/180, 100, np.array([]), minLineLength=40, maxLineGap=5)
    averaged_lines = average_slope_intercept(frame, lines)
    line_image = display_lines(frame, averaged_lines)
    combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
    cv2.imshow("result", combo_image)
    cv2.waitKey(1)
```
**ERROR**:
```
Traceback (most recent call last):
  File "nagpurlanes.py", line 75, in <module>
    line_image = display_lines(frame, averaged_lines)
  File "nagpurlanes.py", line 42, in display_lines
    cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
OverflowError: Python int too large to convert to C long
```
| closed | 2021-05-16T10:11:08Z | 2021-05-16T10:20:44Z | https://github.com/piskvorky/gensim/issues/3149 | [] | aayushchimurkar7m | 1
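A plausible cause of this `OverflowError` (an assumption, not confirmed from the traceback alone): when `np.polyfit` returns a near-zero slope for an almost-horizontal Hough segment, `int((y1 - intercept) / slope)` yields an x coordinate far larger than a C `long`, which `cv2.line` then rejects. A minimal sketch of a clamping helper (the name `safe_int` and the bounds are hypothetical):

```python
def safe_int(value, lo=-10**6, hi=10**6):
    """Clamp a coordinate into a range that safely fits a C long."""
    return int(max(lo, min(hi, value)))

# A near-zero slope blows up the x coordinate:
slope, intercept, y1 = 1e-9, 100.0, 720
x1 = (y1 - intercept) / slope          # ~6.2e11, far beyond any image width
print(safe_int(x1))                    # 1000000
```

Clamping the coordinates (or skipping segments whose `abs(slope)` is below a small threshold) before calling `cv2.line` avoids passing out-of-range integers to OpenCV.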
JoeanAmier/TikTokDownloader | api | 227 | A work with multiple UIDs may cause problems |
https://github.com/JoeanAmier/TikTokDownloader/blob/master/src/extract/extractor.py#L508

By default the UID of the **last work** is used, but in some cases several UIDs are collected for a single account, so there is a chance of getting the wrong UID. The UID that occurs most often among the current works could be used instead.

### Case where a single work carries multiple UIDs

### In extreme cases 7 or 8 UIDs can accumulate; this is one example

```python
    uid_lst = [item['author']['uid'] for item in data]

    from collections import Counter
    counts = Counter(uid_lst)

    # Get the most frequent element and its index
    most_common_element = counts.most_common(1)[0][0]
    most_common_index = uid_lst.index(most_common_element)
    true_data = data[most_common_index]

    self.log.info(f"UID list for the current account: {set(uid_lst)}, most frequent UID: {most_common_element}")

    item = self.generate_data_object(true_data)
```
| open | 2024-06-04T02:37:56Z | 2024-06-09T15:44:30Z | https://github.com/JoeanAmier/TikTokDownloader/issues/227 | [
"功能异常(bug)",
"确认问题(confirm)",
"处理完成(complete)"
] | sansi98h | 3 |
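The `Counter`-based selection proposed in that issue can be exercised on its own with a toy `data` list (the dict shape mirrors the issue's snippet; the values are made up):

```python
from collections import Counter

data = [
    {'author': {'uid': 'A'}},
    {'author': {'uid': 'B'}},
    {'author': {'uid': 'A'}},
    {'author': {'uid': 'A'}},
]
uid_lst = [item['author']['uid'] for item in data]
counts = Counter(uid_lst)

# most_common(1) returns [(element, count)] for the most frequent UID
most_common_element = counts.most_common(1)[0][0]
most_common_index = uid_lst.index(most_common_element)

print(most_common_element)  # A
print(most_common_index)    # 0  (first occurrence of the most frequent UID)
```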
marimo-team/marimo | data-science | 4,150 | [Newbie Feedback] First glance at marimo | Great:
- youtube guides
- single python file
- can change column width easily
- dark mode
- AI integration
- scratchpad!
- detects changes made in VSCode
- lightning fast answers on github
Could be better:
- strange that auto-save isn't on by default
- scratchpad different for each file. Often I work on many files at the same time, and want to have an overall scratchpad for all files. Will keep using an .md file for now
- AI integration doesn't rival the likes of cursor obviously. 2 improvements would be great tho: 1. allow editing cells inline. Right now the LLM output can only be added as a new file, same as with Cursor and working with Jupyter Notebooks. 2. If the knowledge base of marimo isn't used under the hood, that would be a great opportunity. The chatbot seems to know about marimo, so perhaps you already apply a RAG approach with your docs, discord messages and github issues
- when using Jupyter Notebooks in VSCode forks, I can set breakpoints in Jupyter cells and debug them. Something for marimo?
Pretty impressive git history btw, perhaps take a week off c: :
 | closed | 2025-03-18T16:01:42Z | 2025-03-18T21:52:27Z | https://github.com/marimo-team/marimo/issues/4150 | [] | dentroai | 7 |
plotly/dash | plotly | 2,296 | [BUG] Component as Props inserted by callbacks not found with dynamic inputs/outputs callbacks. | **Describe your context**
```
dash 2.6.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
When the component property is updated by a callback, the newly added components won't trigger their callbacks; only the components present in the initial property value work. For example:

```Python
import dash
import uuid
from dash import html, dcc
from dash.dependencies import Input, Output, State, MATCH

app = dash.Dash()

app.layout = html.Div(
    [
        html.Button(
            'add option',
            id='add-option',
            style={
                'marginBottom': '25px'
            }
        ),
        dcc.Checklist(
            [
                {
                    "label": [
                        html.Button(
                            'click me',
                            id={
                                'type': 'button',
                                'index': 0
                            }
                        ),
                        html.Span(
                            id={
                                'type': 'text',
                                'index': 0
                            }
                        )
                    ],
                    "value": 0,
                }
            ],
            id='options'
        )
    ],
    style={
        'padding': '50px'
    }
)


@app.callback(
    Output('options', 'options'),
    Input('add-option', 'n_clicks'),
    State('options', 'options'),
    prevent_initial_call=True
)
def add_option(n_clicks, options):
    new_uuid = str(uuid.uuid4())
    return [
        *options,
        {
            "label": [
                html.Button(
                    'click me',
                    id={
                        'type': 'button',
                        'index': new_uuid
                    },
                ),
                html.Span(
                    id={
                        'type': 'text',
                        'index': new_uuid
                    }
                )
            ],
            "value": new_uuid
        }
    ]


@app.callback(
    Output(
        {
            'type': 'text',
            'index': MATCH
        },
        'children'
    ),
    Input(
        {
            'type': 'button',
            'index': MATCH
        },
        'n_clicks'
    )
)
def demo(n_clicks):
    return n_clicks


if __name__ == '__main__':
    app.run(debug=True)
```
**Expected behavior**
All components in the `component properties` should work as defined callback.
@chriddyp @alexcjohnson
| closed | 2022-11-02T01:36:28Z | 2022-12-05T17:07:55Z | https://github.com/plotly/dash/issues/2296 | [
"bug"
] | CNFeffery | 1 |
albumentations-team/albumentations | deep-learning | 2,340 | RandomFog behaves differently in version 2.0.4 compared to previous versions | ## Describe the bug
The result of RandomFog in version 1.4.6 is completely different from the newest version 2.0.4
### To Reproduce
Steps to reproduce the behavior:
1. Environment (e.g., OS, Python version, Albumentations version, etc.)
OS: MacOS Sonoma 14.2 (23C64)
Python: 3.10.16
Albumentations version: 1.4.6 and 2.0.4
2. Sample code that produces the bug.
```python
import albumentations as A
from PIL import Image
import numpy as np
# read image
img_path = 'path/to/image'
pil_image = Image.open(img_path).convert('RGB')
# apply data aug
# for version 1.4.6
_augs_fnc = A.RandomFog(alpha_coef=0.08, fog_coef_lower=0.6, fog_coef_upper=1.0, p=1.0)
# for version 2.0.4
# _augs_fnc = A.RandomFog(alpha_coef=0.08, fog_coef_range=(0.6, 1.0), p=1.0,)
img_np = np.array(pil_image)
augmented = _augs_fnc(image=img_np)
aug_image = Image.fromarray(augmented['image'])
```
3. Any error messages or incorrect outputs.
None
### Expected behavior
I expect both results should look the same or very similar
### Actual behavior
The two results are completely different and RandomFog in version 2.0.4 are not really foggy
### Screenshots
#### Load image
<img width="619" alt="Image" src="https://github.com/user-attachments/assets/4c8741b0-e6ae-4506-adf4-1ee3d78edf8f" />
#### Version 1.4.6
<img width="753" alt="Image" src="https://github.com/user-attachments/assets/1e93d006-c75a-4b2d-9858-9bf2e2d68360" />
#### Version 2.0.4
<img width="775" alt="Image" src="https://github.com/user-attachments/assets/599238a5-e385-409b-a6cc-4d4e28be3be6" /> | closed | 2025-02-13T03:39:28Z | 2025-02-26T21:20:56Z | https://github.com/albumentations-team/albumentations/issues/2340 | [
"bug"
] | huuquan1994 | 5 |
mwaskom/seaborn | data-visualization | 2,748 | `histplot` isn't showing bar for a bin in certain distributions | For an example like this:
```
import numpy as np
import seaborn as sns
np.random.seed(42)
ZEROS=700
ONES=30000
dummy_data = np.hstack([np.zeros(ZEROS), 1 - (np.random.binomial(1000, 0.1, ONES) / 1000)])
sns.histplot(dummy_data)
```
We get a plot without any visible bar at x==0:

Interestingly if I Make the distribution 10 times smaller:
```
ZEROS=70
ONES=3000
```
The bar starts being visible.
Another way to overcome it is to use fill=False:

I have as well noticed that it depends on the figsize, and another way to make the bar visible is to add e.g. `plt.figure(figsize=(9,4))` before:

See reproduction in collab: https://colab.research.google.com/drive/1x79KLWA6mJrr1-P8zsKNfKCV5EoZ4lhr?usp=sharing | closed | 2022-02-24T07:46:34Z | 2022-03-04T11:49:34Z | https://github.com/mwaskom/seaborn/issues/2748 | [
"upstream"
] | kretes | 8 |
xinntao/Real-ESRGAN | pytorch | 249 | Unwanted texture added | Hi, first of all thanks for the great work on this project. I have a question regarding a specific example found below. I find that in most cases the model gives good results, but in this case I find that there is an unwanted texture added to the suit which was not there in the original image. This is true for both the RealESRGAN_x2plus model and the RealESRGAN_x4plus model. I would expect the parts that are out of focus to remain out of focus for the upscaled image.
Do you have any advice on what I could do to fix this issue, and what can be the cause? Thanks in advance.
**Input:**

**Output:**

| open | 2022-02-09T15:36:27Z | 2022-02-13T22:37:50Z | https://github.com/xinntao/Real-ESRGAN/issues/249 | [] | TimovNiedek | 1 |
iperov/DeepFaceLive | machine-learning | 43 | 'path' is not recognized as an internal or external command | Destination path can't have spaces, otherwise you get the error 'not recognized as an internal or external command' | closed | 2022-02-27T07:05:02Z | 2022-02-27T07:41:16Z | https://github.com/iperov/DeepFaceLive/issues/43 | [] | MateSteinforth | 1 |
JaidedAI/EasyOCR | deep-learning | 351 | Optimize for number plate recognition | How would I optimize easyocr to operate in just one language (e.g. English) to do number plate recognition/ANPR?
Can we make the OCR capability faster by stripping out other languages or is this controlled by passing it the language as a parameter? I think this has great potential for number plate recognition, but it needs to be faster.
Thank you for your help. | closed | 2021-01-18T04:32:41Z | 2021-07-02T08:52:49Z | https://github.com/JaidedAI/EasyOCR/issues/351 | [] | Stevew7777 | 3 |
mljar/mljar-supervised | scikit-learn | 713 | Please document all preprocessing methods | Digging into the code, it looks like mljar tries various preprocessing techniques on the data, but none of the documentation covers what it attempts. | open | 2024-03-06T22:01:05Z | 2024-03-11T18:15:09Z | https://github.com/mljar/mljar-supervised/issues/713 | [
"docs"
] | gdevenyi | 4 |
davidsandberg/facenet | tensorflow | 477 | Windows Platform | Will this work on windows? | closed | 2017-10-08T07:42:58Z | 2018-04-01T21:08:04Z | https://github.com/davidsandberg/facenet/issues/477 | [] | elnasdgreat | 1 |
gee-community/geemap | streamlit | 277 | Python out of memory, kernel get restarted | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: 0.8.8
- Python version: 3.7
- Operating System: Ubuntu 20.04
### Description
I am trying to load OpenStreetMap shapefiles into an Earth Engine map, but the notebook kernel gets restarted after some time; it seems there is not enough memory, though my PC has 12 GB RAM.
Here is the shape file link : http://download.geofabrik.de/europe/denmark.html
### What I Did
```
denmark_landuse = geemap.shp_to_ee(os.path.join(data_path, 'denmark_street/gis_osm_landuse_a_free_1.shp'))
Map.addLayer(denmark_landuse, {}, 'Denmark_landuse')
```
| closed | 2021-01-28T10:54:43Z | 2021-01-28T15:54:10Z | https://github.com/gee-community/geemap/issues/277 | [
"bug"
] | MISSEY | 1 |
holoviz/panel | jupyter | 6,870 | New widget combining Multichoice and AutocompleteInput | I would like to allow arbitrary text in MultiChoice, along with the given options, i.e. the user is able to introduce new elements (which, however, do not enter the options list), always of the same type.
In the end I would like to have maybe an extension of AutocompleteInput into a list. IPywidgets already implement it rather well: https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html#tag-widgets
Alternatively one could extend LiteralInput of list type, where the type of each entered element of the list is validated (pure string, float, int). | closed | 2024-05-28T17:33:04Z | 2024-06-27T22:17:07Z | https://github.com/holoviz/panel/issues/6870 | [
"duplicate"
] | paoloalba | 3 |
bmoscon/cryptofeed | asyncio | 505 | is CANDLES interval only available as 1m? | I tried Binance futures candles but I only see 1m intervals passing by. Is it possible to get e.g. 5m candles, or do we need to compose them ourselves? | closed | 2021-05-30T14:15:25Z | 2021-05-30T15:55:03Z | https://github.com/bmoscon/cryptofeed/issues/505 | [
"question"
] | gigitalz | 3 |
wagtail/wagtail | django | 12,300 | JS slugify (with unicode enabled) leaves invalid characters in the slug | ### Issue Summary
The JS `slugify` function (used when cleaning a manually-entered slug) fails to strip out certain spacer/combining characters that are disallowed by Django slug validation.
### Steps to Reproduce
1. On a site with `WAGTAIL_ALLOW_UNICODE_SLUGS = True`, create a new page
2. Paste the text "উইকিপিডিয়ায় স্বাগতম!" into the slug field
3. Defocus the field; it will be rewritten to "উইকিপিডিয়ায়-স্বাগতম"
4. Attempt to save the page; this returns the error "Enter a valid “slug” consisting of Unicode letters, numbers, underscores, or hyphens."
The expected behaviour is that the browser-side slug cleanup step should result in a slug that is accepted by the Django validation. However, the output contains the character [U+09BF](https://www.fileformat.info/info/unicode/char/09bf/index.htm) which is disallowed, being a spacing combining character.
Note that the bug only arises for slugs entered manually into the slug field; pasting "উইকিপিডিয়ায় স্বাগতম!" into the _title_ field results in the slug "উইকপডযয-সবগতম" which is valid. This is because the title-to-slug conversion uses a more complex function which also performs transliteration when unicode slugs are disabled (which wouldn't be appropriate for manually-entered slugs); however, the simplified function used for cleaning manually-entered slugs is _overly_ simple:
https://github.com/wagtail/wagtail/blob/84b3bf70349171e44f283085f1d89ccd9deed40e/client/src/utils/slugify.ts#L10-L13
since it's only stripping out a designated short list of punctuation characters, where it would be better to strip out everything that is _not_ classed as a letter/number or explicitly allowed punctuation (just hyphen and underscore I think). Would be useful to compare this against the behaviour of the more advanced title-to-slug conversion, and Django's `django.utils.text.slugify`.
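For illustration (not part of the Wagtail codebase), Python's `unicodedata` confirms why this character slips through, and a category-based filter sketches the stricter cleanup the paragraph above suggests — keeping only letters, numbers, hyphen, and underscore:

```python
import unicodedata

# U+09BF BENGALI VOWEL SIGN I is a spacing combining mark (category 'Mc'),
# which, per this report, Django's slug validation rejects.
print(unicodedata.category("\u09bf"))  # Mc

def strict_clean(slug: str) -> str:
    """Hypothetical stricter cleanup: keep letters, numbers, '-' and '_'."""
    allowed = {"-", "_"}
    return "".join(
        ch for ch in slug
        if ch in allowed or unicodedata.category(ch)[0] in ("L", "N")
    )

print(strict_clean("abc-দ\u09bf_1!"))  # abc-দ_1
```

Filtering on the first letter of the Unicode general category ('L' for letters, 'N' for numbers) is closer to what Django's server-side validation accepts than a fixed punctuation denylist.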
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.12.4.
- Django version: 5.1.1
- Wagtail version: 6.3a0
- Browser version: Chrome 128
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| closed | 2024-09-11T11:26:40Z | 2024-09-19T14:22:12Z | https://github.com/wagtail/wagtail/issues/12300 | [
"type:Bug",
"good first issue"
] | gasman | 6 |
davidsandberg/facenet | computer-vision | 508 | Real time face recognition | Hello,
I am trying to understand how to use facenet for real time face recognition from the video stream.
I followed the steps from the wiki page Classifier training of inception resnet v1, but I am not sure how to use the results further and if I performed all the steps correctly.
1. I ran alignment and extraction algorithm to a folder of images containing 20 images of my colleague and me, as the idea is that our faces should be recognized from the stream. Images are 182x182 px and the output is saved in training_models_182 directory.
2. I ran alignment and extraction algorithm to lsw database and generated folder lfw_mtcnnpy_160 containing aligned and cropped 160x160px faces of celebrities.
3. I ran train_softmax algorithm where data_dir is folder generated in step 1 and lfw_dir is folder generated in the step 2.
From my understanding a classifier should have been generated and further on applied to the data coming from the stream. The content of the folder are 8 files: checkpoint, meta file and ckpt.9, ckpt.12 and ckpt.15 index and data files. How should I use this files further to check whether persons in the stream are correctly classified or not?
I would really appreciate any suggestions.
| open | 2017-10-30T09:16:56Z | 2019-05-11T11:46:23Z | https://github.com/davidsandberg/facenet/issues/508 | [] | mia-petkovic | 8 |
tensorpack/tensorpack | tensorflow | 903 | How many frames in one episode when evaluating DDQN? How to set the number? | Hi, nice to meet you.
I notice that the README says "An episode is limited to 10000 steps." However, in the code I found:

```python
env = AtariPlayer(ROM_FILE, frame_skip=ACTION_REPEAT, viz=viz,
                  live_lost_as_eoe=train, max_num_frames=60000)
```
It seems that there are 60000 frames per episode when evaluating, while the paper (DDQN) uses 18000 frames. Right? I am confused about the difference. | closed | 2018-09-21T07:53:27Z | 2018-09-21T15:09:35Z | https://github.com/tensorpack/tensorpack/issues/903 | [
"examples"
] | silentobservers | 1 |
allure-framework/allure-python | pytest | 772 | allure-pytest 2.13.2 will merge parameterized use cases | #### Bug Description
When using `@pytest.mark.parametrize()`, the parametrized test cases are merged in the generated Allure report, and no individual execution record is produced for each parametrized case.
#### Steps to reproduce

#### Getting report results
The `@pytest.mark.parametrize()` run executes four test cases: three pass and one fails, and the failing case is set to retry once. After execution completes, the generated Allure report only shows the last record; the other executions are all grouped under "Retries" below.

#### Expected Result
Execution results should be displayed for each parametrized test case individually, as in allure-pytest 2.13.1.

#### Running environment
- Allure version: 2.13.0
- Test framework: pytest 7.1.0
- Allure adaptor: allure-pytest 2.13.2 | open | 2023-10-10T02:40:53Z | 2023-12-06T11:55:58Z | https://github.com/allure-framework/allure-python/issues/772 | [] | puhong112 | 2 |
sunscrapers/djoser | rest-api | 761 | `resend_activation` action uses `password_reset` permission from settings | Hello there,
As mentionned in the title of this issue, [`resend_activation`](https://github.com/sunscrapers/djoser/blob/master/djoser/views.py#L68-L69) action uses the `password_reset` permission from the settings:
``` py
def get_permissions(self):
    ...
    elif self.action == "resend_activation":
        self.permission_classes = settings.PERMISSIONS.password_reset
    ...
```
What is the reason for this? In my opinion, it should use its own permission or at least the `activation` permission.
I can open a PR in order to fix this if needed. | open | 2023-09-07T12:28:54Z | 2024-12-11T09:27:00Z | https://github.com/sunscrapers/djoser/issues/761 | [
"help wanted"
] | 73VW | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 1,010 | GPR with noise .fit bug | ## Bug description
Calling multiple times .fit(X, y) whenever self.noise exists causes self.kernel to add new WhiteKernel noise.
## Steps to reproduce
```python
from skopt.learning.gaussian_process import gpr, kernels  # gpr is the package with the bug
import numpy as np

np.random.seed(0)  # for reproduction
gp = gpr.GaussianProcessRegressor(kernel=kernels.RBF(), alpha=1e-7, noise=0.01)
train_inputs = np.array([0, 0.1, 0.2]).reshape(-1, 1)
train_targets = np.array([1, 1.5, 0.6])
for i in range(10):
    gp.fit(train_inputs, train_targets)
    test_inputs = np.arange(0, 0.25, 0.05).reshape(-1, 1)
    gp.predict(test_inputs)
    print(str(gp.get_params()['kernel__k1']).count("WhiteKernel"))  # number of WhiteKernels in self.kernel increases with each .fit
    train_inputs = np.vstack([train_inputs, np.random.rand()])
    train_targets = np.append(train_targets, np.random.rand())
```
## Proposed solution
Lines 182-184 and 189-194 of `skopt.learning.gaussian_process.gpr` should be moved into the `__init__` method, right after line 161:
```python
if isinstance(self.noise, str) and self.noise != "gaussian":
    raise ValueError("expected noise to be 'gaussian', got %s"
                     % self.noise)

if self.noise == "gaussian":
    kernel = kernel + WhiteKernel()
elif self.noise:
    kernel = kernel + WhiteKernel(
        noise_level=self.noise, noise_level_bounds="fixed"
    )
``` | open | 2021-03-16T11:41:00Z | 2021-03-16T23:48:07Z | https://github.com/scikit-optimize/scikit-optimize/issues/1010 | [] | PeraltaFede | 1 |
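The accumulation reported in that issue can be illustrated without scikit-optimize at all — a minimal pure-Python stand-in (the `Kernel` and `BuggyGPR` classes here are toy substitutes, not the real API) showing why augmenting the kernel inside `fit` grows it on every call, while doing it once in `__init__` would not:

```python
class Kernel:
    """Toy stand-in for a sklearn kernel; '+' composes kernels by name."""
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Kernel(f"{self.name} + {other.name}")

class BuggyGPR:
    def __init__(self, kernel, noise=0.01):
        self.kernel = kernel
        self.noise = noise
    def fit(self):
        # Bug: a new WhiteKernel is appended on *every* fit() call.
        if self.noise:
            self.kernel = self.kernel + Kernel("WhiteKernel")

gp = BuggyGPR(Kernel("RBF"))
for _ in range(10):
    gp.fit()
print(gp.kernel.name.count("WhiteKernel"))  # 10 -- one per fit() call
```

Moving the kernel augmentation into `__init__` keeps the count at 1 no matter how many times `fit()` is called, which is exactly the fix proposed above.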
plotly/plotly.py | plotly | 4,127 | Plotly and tqdm in the same notebook cell causes blank plot | In a colab notebook, if a tqdm progress bar is being output in the same cell as a plotly plot, then rendering of the plotly plot fails. See [MRE colab notebook](https://colab.research.google.com/drive/1zZYwqRLuwSCvbg851g4szR_VxyiXoShT?usp=sharing).
In the chrome developer console I see "Uncaught ReferenceError: Plotly is not defined".
This problem appears in plotly.py version 5.12 and later. Version 5.11 is fine. | open | 2023-03-28T12:49:36Z | 2024-09-28T18:34:00Z | https://github.com/plotly/plotly.py/issues/4127 | [
"bug",
"P3"
] | alimanfoo | 1 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 14 | Encoding problem in "11_char_rnn_gist.py" example | Hi, I'm reading the code of `11_char_rnn_gist.py`, and I found the following problem:
In line 57, we encode the sequence `seq` with one-hot code with `depth=len(vocab)`.
However, `seq` is generated with `[vocab.index(x) + 1 for x in text if x in vocab]`, so the code of characters is between `1` to `len(vocab)`, then we pad them with `0`. So with `tf.one_hot`, the last character in `vocab` is neglected, and the PAD symbol is encoded to `[1 0 0 0 ...]`.
When we run the demo, it seems ok because the last character `}` hardly appears in our dataset.
If we change `vocab` from (let `a` be the last character in 'vocab')
```
vocab = (
" $%'()+,-./0123456789:;=?ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"\\^_abcdefghijklmnopqrstuvwxyz{|}")
```
to
```
vocab = (
" $%'()+,-./0123456789:;=?ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"\\^_bcdefghijklmnopqrstuvwxyz{|}a")
```
Then the outputs are complete nonsense (something like `T8WrtP sVM -reca5 r, ...`), while it should work as before. | open | 2017-04-11T14:34:23Z | 2017-07-11T17:42:40Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/14 | [] | alanwang93 | 1
TheAlgorithms/Python | python | 11,937 | would like to add travelling salesman problem under dynamic_programming | ### Feature description
Adding the travelling salesman problem to learn and understand the concepts and algorithms of dynamic programming | closed | 2024-10-10T14:13:52Z | 2024-10-10T14:15:40Z | https://github.com/TheAlgorithms/Python/issues/11937 | [
"enhancement"
] | OmMahajan29 | 0 |
PaddlePaddle/PaddleNLP | nlp | 9,693 | [Question]: On Ascend 910B3, Paddle UIE inference works, but loss becomes NaN during training | ### Please describe your question
PaddlePaddle==2.6.1
[PaddleCustomDevice](https://github.com/PaddlePaddle/PaddleCustomDevice)==2.6, compiled and installed from source
```
λ 5926b66120ca /app/output/PaddleNLP-2.6.1 python ./applications/information_extraction/text/finetune.py \
> --device npu \
> --logging_steps 10 \
> --save_steps 100 \
> --eval_steps 100 \
> --seed 1000 \
> --model_name_or_path /app/output/PaddleNLP-2.6.1/uie-base/ \
> --output_dir ./checkpoint/model_best \
> --train_path ./applications/information_extraction/text/data/train.txt \
> --dev_path ./applications/information_extraction/text/data/dev.txt \
> --max_seq_len 512 \
> --per_device_train_batch_size 16 \
> --per_device_eval_batch_size 16 \
> --num_train_epochs 20 \
> --learning_rate 1e-5 \
> --do_train \
> --do_eval \
> --do_export \
> --export_model_dir ./checkpoint/model_best \
> --overwrite_output_dir \
> --disable_tqdm False \
> --metric_for_best_model eval_f1 \
> --load_best_model_at_end True \
> --save_total_limit 1
I1225 11:43:11.952183 131826 init.cc:233] ENV [CUSTOM_DEVICE_ROOT]=/usr/local/lib/python3.9/dist-packages/paddle_custom_device
I1225 11:43:11.952232 131826 init.cc:142] Try loading custom device libs from: [/usr/local/lib/python3.9/dist-packages/paddle_custom_device]
I1225 11:43:12.406082 131826 custom_device.cc:1108] Successed in loading custom runtime in lib: /usr/local/lib/python3.9/dist-packages/paddle_custom_device/libpaddle-custom-npu.so
I1225 11:43:12.409425 131826 custom_kernel.cc:63] Successed in loading 326 custom kernel(s) from loaded lib(s), will be used like native ones.
I1225 11:43:12.409597 131826 init.cc:154] Finished in LoadCustomDevice with libs_path: [/usr/local/lib/python3.9/dist-packages/paddle_custom_device]
I1225 11:43:12.409634 131826 init.cc:239] CustomDevice: npu, visible devices count: 1
/usr/local/lib/python3.9/dist-packages/_distutils_hack/__init__.py:26: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
[2024-12-25 11:43:15,415] [ WARNING] - evaluation_strategy reset to IntervalStrategy.STEPS for do_eval is True. you can also set evaluation_strategy='epoch'.
[2024-12-25 11:43:15,415] [ INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
[2024-12-25 11:43:15,415] [ INFO] - ============================================================
[2024-12-25 11:43:15,415] [ INFO] - Model Configuration Arguments
[2024-12-25 11:43:15,415] [ INFO] - paddle commit id :fbf852dd832bc0e63ae31cd4aa37defd829e4c03
[2024-12-25 11:43:15,416] [ INFO] - export_model_dir :./checkpoint/model_best
[2024-12-25 11:43:15,416] [ INFO] - model_name_or_path :/app/output/PaddleNLP-2.6.1/uie-base/
[2024-12-25 11:43:15,416] [ INFO] - multilingual :False
[2024-12-25 11:43:15,416] [ INFO] -
[2024-12-25 11:43:15,416] [ INFO] - ============================================================
[2024-12-25 11:43:15,416] [ INFO] - Data Configuration Arguments
[2024-12-25 11:43:15,417] [ INFO] - paddle commit id :fbf852dd832bc0e63ae31cd4aa37defd829e4c03
[2024-12-25 11:43:15,417] [ INFO] - dev_path :./applications/information_extraction/text/data/dev.txt
[2024-12-25 11:43:15,417] [ INFO] - dynamic_max_length :None
[2024-12-25 11:43:15,417] [ INFO] - max_seq_length :512
[2024-12-25 11:43:15,417] [ INFO] - train_path :./applications/information_extraction/text/data/train.txt
[2024-12-25 11:43:15,417] [ INFO] -
[2024-12-25 11:43:15,418] [ WARNING] - Process rank: -1, device: npu, world_size: 1, distributed training: False, 16-bits training: False
[2024-12-25 11:43:15,418] [ INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load '/app/output/PaddleNLP-2.6.1/uie-base/'.
[2024-12-25 11:43:15,441] [ INFO] - Loading configuration file /app/output/PaddleNLP-2.6.1/uie-base/config.json
[2024-12-25 11:43:15,441] [ INFO] - Loading weights file /app/output/PaddleNLP-2.6.1/uie-base/model_state.pdparams
[2024-12-25 11:43:15,708] [ INFO] - Loaded weights file from disk, setting weights to model.
[2024-12-25 11:43:32,588] [ INFO] - All model checkpoint weights were used when initializing UIE.
[2024-12-25 11:43:32,589] [ INFO] - All the weights of UIE were initialized from the model checkpoint at /app/output/PaddleNLP-2.6.1/uie-base/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use UIE for predictions without further training.
[2024-12-25 11:43:32,708] [ INFO] - ============================================================
[2024-12-25 11:43:32,709] [ INFO] - Training Configuration Arguments
[2024-12-25 11:43:32,709] [ INFO] - paddle commit id : fbf852dd832bc0e63ae31cd4aa37defd829e4c03
[2024-12-25 11:43:32,709] [ INFO] - _no_sync_in_gradient_accumulation: True
[2024-12-25 11:43:32,709] [ INFO] - activation_quantize_type : None
[2024-12-25 11:43:32,709] [ INFO] - adam_beta1 : 0.9
[2024-12-25 11:43:32,709] [ INFO] - adam_beta2 : 0.999
[2024-12-25 11:43:32,709] [ INFO] - adam_epsilon : 1e-08
[2024-12-25 11:43:32,709] [ INFO] - algo_list : None
[2024-12-25 11:43:32,710] [ INFO] - amp_custom_black_list : None
[2024-12-25 11:43:32,710] [ INFO] - amp_custom_white_list : None
[2024-12-25 11:43:32,710] [ INFO] - amp_master_grad : False
[2024-12-25 11:43:32,710] [ INFO] - batch_num_list : None
[2024-12-25 11:43:32,710] [ INFO] - batch_size_list : None
[2024-12-25 11:43:32,710] [ INFO] - bf16 : False
[2024-12-25 11:43:32,710] [ INFO] - bf16_full_eval : False
[2024-12-25 11:43:32,710] [ INFO] - bias_correction : False
[2024-12-25 11:43:32,710] [ INFO] - current_device : npu:0
[2024-12-25 11:43:32,711] [ INFO] - data_parallel_rank : 0
[2024-12-25 11:43:32,711] [ INFO] - dataloader_drop_last : False
[2024-12-25 11:43:32,711] [ INFO] - dataloader_num_workers : 0
[2024-12-25 11:43:32,711] [ INFO] - dataset_rank : 0
[2024-12-25 11:43:32,711] [ INFO] - dataset_world_size : 1
[2024-12-25 11:43:32,711] [ INFO] - device : npu
[2024-12-25 11:43:32,711] [ INFO] - disable_tqdm : False
[2024-12-25 11:43:32,711] [ INFO] - distributed_dataloader : False
[2024-12-25 11:43:32,711] [ INFO] - do_compress : False
[2024-12-25 11:43:32,711] [ INFO] - do_eval : True
[2024-12-25 11:43:32,712] [ INFO] - do_export : True
[2024-12-25 11:43:32,712] [ INFO] - do_predict : False
[2024-12-25 11:43:32,712] [ INFO] - do_train : True
[2024-12-25 11:43:32,712] [ INFO] - eval_accumulation_steps : None
[2024-12-25 11:43:32,712] [ INFO] - eval_batch_size : 16
[2024-12-25 11:43:32,712] [ INFO] - eval_steps : 100
[2024-12-25 11:43:32,712] [ INFO] - evaluation_strategy : IntervalStrategy.STEPS
[2024-12-25 11:43:32,712] [ INFO] - flatten_param_grads : False
[2024-12-25 11:43:32,712] [ INFO] - fp16 : False
[2024-12-25 11:43:32,712] [ INFO] - fp16_full_eval : False
[2024-12-25 11:43:32,713] [ INFO] - fp16_opt_level : O1
[2024-12-25 11:43:32,713] [ INFO] - gradient_accumulation_steps : 1
[2024-12-25 11:43:32,713] [ INFO] - greater_is_better : True
[2024-12-25 11:43:32,713] [ INFO] - hybrid_parallel_topo_order : None
[2024-12-25 11:43:32,713] [ INFO] - ignore_data_skip : False
[2024-12-25 11:43:32,713] [ INFO] - input_dtype : int64
[2024-12-25 11:43:32,713] [ INFO] - input_infer_model_path : None
[2024-12-25 11:43:32,713] [ INFO] - label_names : ['start_positions', 'end_positions']
[2024-12-25 11:43:32,713] [ INFO] - lazy_data_processing : True
[2024-12-25 11:43:32,713] [ INFO] - learning_rate : 1e-05
[2024-12-25 11:43:32,714] [ INFO] - load_best_model_at_end : True
[2024-12-25 11:43:32,714] [ INFO] - load_sharded_model : False
[2024-12-25 11:43:32,714] [ INFO] - local_process_index : 0
[2024-12-25 11:43:32,714] [ INFO] - local_rank : -1
[2024-12-25 11:43:32,714] [ INFO] - log_level : -1
[2024-12-25 11:43:32,714] [ INFO] - log_level_replica : -1
[2024-12-25 11:43:32,714] [ INFO] - log_on_each_node : True
[2024-12-25 11:43:32,714] [ INFO] - logging_dir : ./checkpoint/model_best/runs/Dec25_11-43-15_5926b66120ca
[2024-12-25 11:43:32,714] [ INFO] - logging_first_step : False
[2024-12-25 11:43:32,714] [ INFO] - logging_steps : 10
[2024-12-25 11:43:32,715] [ INFO] - logging_strategy : IntervalStrategy.STEPS
[2024-12-25 11:43:32,715] [ INFO] - lr_end : 1e-07
[2024-12-25 11:43:32,715] [ INFO] - lr_scheduler_type : SchedulerType.LINEAR
[2024-12-25 11:43:32,715] [ INFO] - max_evaluate_steps : -1
[2024-12-25 11:43:32,715] [ INFO] - max_grad_norm : 1.0
[2024-12-25 11:43:32,715] [ INFO] - max_steps : -1
[2024-12-25 11:43:32,715] [ INFO] - metric_for_best_model : eval_f1
[2024-12-25 11:43:32,715] [ INFO] - minimum_eval_times : None
[2024-12-25 11:43:32,715] [ INFO] - moving_rate : 0.9
[2024-12-25 11:43:32,715] [ INFO] - no_cuda : False
[2024-12-25 11:43:32,716] [ INFO] - num_cycles : 0.5
[2024-12-25 11:43:32,716] [ INFO] - num_train_epochs : 20.0
[2024-12-25 11:43:32,716] [ INFO] - onnx_format : True
[2024-12-25 11:43:32,716] [ INFO] - optim : OptimizerNames.ADAMW
[2024-12-25 11:43:32,716] [ INFO] - optimizer_name_suffix : None
[2024-12-25 11:43:32,716] [ INFO] - output_dir : ./checkpoint/model_best
[2024-12-25 11:43:32,716] [ INFO] - overwrite_output_dir : True
[2024-12-25 11:43:32,716] [ INFO] - past_index : -1
[2024-12-25 11:43:32,716] [ INFO] - per_device_eval_batch_size : 16
[2024-12-25 11:43:32,716] [ INFO] - per_device_train_batch_size : 16
[2024-12-25 11:43:32,717] [ INFO] - pipeline_parallel_config :
[2024-12-25 11:43:32,717] [ INFO] - pipeline_parallel_degree : -1
[2024-12-25 11:43:32,717] [ INFO] - pipeline_parallel_rank : 0
[2024-12-25 11:43:32,717] [ INFO] - power : 1.0
[2024-12-25 11:43:32,717] [ INFO] - prediction_loss_only : False
[2024-12-25 11:43:32,717] [ INFO] - process_index : 0
[2024-12-25 11:43:32,717] [ INFO] - prune_embeddings : False
[2024-12-25 11:43:32,717] [ INFO] - recompute : False
[2024-12-25 11:43:32,718] [ INFO] - remove_unused_columns : True
[2024-12-25 11:43:32,718] [ INFO] - report_to : ['visualdl']
[2024-12-25 11:43:32,718] [ INFO] - resume_from_checkpoint : None
[2024-12-25 11:43:32,718] [ INFO] - round_type : round
[2024-12-25 11:43:32,718] [ INFO] - run_name : ./checkpoint/model_best
[2024-12-25 11:43:32,718] [ INFO] - save_on_each_node : False
[2024-12-25 11:43:32,718] [ INFO] - save_sharded_model : False
[2024-12-25 11:43:32,718] [ INFO] - save_steps : 100
[2024-12-25 11:43:32,718] [ INFO] - save_strategy : IntervalStrategy.STEPS
[2024-12-25 11:43:32,718] [ INFO] - save_total_limit : 1
[2024-12-25 11:43:32,719] [ INFO] - scale_loss : 32768
[2024-12-25 11:43:32,719] [ INFO] - seed : 1000
[2024-12-25 11:43:32,719] [ INFO] - sharding : []
[2024-12-25 11:43:32,719] [ INFO] - sharding_degree : -1
[2024-12-25 11:43:32,719] [ INFO] - sharding_parallel_config :
[2024-12-25 11:43:32,719] [ INFO] - sharding_parallel_degree : -1
[2024-12-25 11:43:32,719] [ INFO] - sharding_parallel_rank : 0
[2024-12-25 11:43:32,719] [ INFO] - should_load_dataset : True
[2024-12-25 11:43:32,719] [ INFO] - should_load_sharding_stage1_model: False
[2024-12-25 11:43:32,719] [ INFO] - should_log : True
[2024-12-25 11:43:32,720] [ INFO] - should_save : True
[2024-12-25 11:43:32,720] [ INFO] - should_save_model_state : True
[2024-12-25 11:43:32,720] [ INFO] - should_save_sharding_stage1_model: False
[2024-12-25 11:43:32,720] [ INFO] - skip_memory_metrics : True
[2024-12-25 11:43:32,720] [ INFO] - skip_profile_timer : True
[2024-12-25 11:43:32,720] [ INFO] - strategy : dynabert+ptq
[2024-12-25 11:43:32,720] [ INFO] - tensor_parallel_config :
[2024-12-25 11:43:32,720] [ INFO] - tensor_parallel_degree : -1
[2024-12-25 11:43:32,720] [ INFO] - tensor_parallel_rank : 0
[2024-12-25 11:43:32,720] [ INFO] - train_batch_size : 16
[2024-12-25 11:43:32,721] [ INFO] - use_hybrid_parallel : False
[2024-12-25 11:43:32,721] [ INFO] - use_pact : True
[2024-12-25 11:43:32,721] [ INFO] - warmup_ratio : 0.1
[2024-12-25 11:43:32,721] [ INFO] - warmup_steps : 0
[2024-12-25 11:43:32,721] [ INFO] - weight_decay : 0.0
[2024-12-25 11:43:32,721] [ INFO] - weight_name_suffix : None
[2024-12-25 11:43:32,721] [ INFO] - weight_quantize_type : channel_wise_abs_max
[2024-12-25 11:43:32,721] [ INFO] - width_mult_list : None
[2024-12-25 11:43:32,721] [ INFO] - world_size : 1
[2024-12-25 11:43:32,721] [ INFO] -
[2024-12-25 11:43:32,722] [ INFO] - ***** Running training *****
[2024-12-25 11:43:32,723] [ INFO] - Num examples = 1,167
[2024-12-25 11:43:32,723] [ INFO] - Num Epochs = 20
[2024-12-25 11:43:32,723] [ INFO] - Instantaneous batch size per device = 16
[2024-12-25 11:43:32,723] [ INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2024-12-25 11:43:32,723] [ INFO] - Gradient Accumulation steps = 1
[2024-12-25 11:43:32,723] [ INFO] - Total optimization steps = 1,460
[2024-12-25 11:43:32,723] [ INFO] - Total num train samples = 23,340
[2024-12-25 11:43:32,725] [ INFO] - Number of trainable parameters = 117,946,370 (per device)
0%| | 0/1460 [00:00<?, ?it/s]/app/output/PaddleNLP-2.6.1/paddlenlp/transformers/tokenizer_utils_base.py:2478: FutureWarning: The `max_seq_len` argument is deprecated and will be removed in a future version, please use `max_length` instead.
warnings.warn(
/app/output/PaddleNLP-2.6.1/paddlenlp/transformers/tokenizer_utils_base.py:1878: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
warnings.warn(
1%|▋ | 10/1460 [01:53<41:33, 1.72s/it]loss: nan, learning_rate: 1e-05, global_step: 10, interval_runtime: 113.6572, interval_samples_per_second: 1.4077422000673658, interval_steps_per_second: 0.08798388750421036, epoch: 0.137
loss: nan, learning_rate: 1e-05, global_step: 20, interval_runtime: 3.6064, interval_samples_per_second: 44.36522606484956, interval_steps_per_second: 2.7728266290530974, epoch: 0.274
loss: nan, learning_rate: 1e-05, global_step: 30, interval_runtime: 3.6258, interval_samples_per_second: 44.127987665259184, interval_steps_per_second: 2.757999229078699, epoch: 0.411
loss: nan, learning_rate: 1e-05, global_step: 40, interval_runtime: 3.5736, interval_samples_per_second: 44.77264604545965, interval_steps_per_second: 2.7982903778412282, epoch: 0.5479
loss: nan, learning_rate: 1e-05, global_step: 50, interval_runtime: 3.6283, interval_samples_per_second: 44.09766862567628, interval_steps_per_second: 2.7561042891047673, epoch: 0.6849
loss: 0.0, learning_rate: 1e-05, global_step: 60, interval_runtime: 3.6448, interval_samples_per_second: 43.8985043642171, interval_steps_per_second: 2.7436565227635685, epoch: 0.8219
loss: nan, learning_rate: 1e-05, global_step: 70, interval_runtime: 3.7433, interval_samples_per_second: 42.74318439757554, interval_steps_per_second: 2.6714490248484712, epoch: 0.9589
5%|████▊ | 72/1460 [02:16<07:23, 3.13it/s]loss: nan, learning_rate: 1e-05, global_step: 80, interval_runtime: 70.2024, interval_samples_per_second: 2.2791238350855676, interval_steps_per_second: 0.14244523969284797, epoch: 1.0959
loss: nan, learning_rate: 1e-05, global_step: 90, interval_runtime: 3.7313, interval_samples_per_second: 42.88079748004118, interval_steps_per_second: 2.680049842502574, epoch: 1.2329
loss: nan, learning_rate: 1e-05, global_step: 100, interval_runtime: 3.8448, interval_samples_per_second: 41.61510258783223, interval_steps_per_second: 2.6009439117395146, epoch: 1.3699
7%|██████▌ | 100/1460 [03:33<09:22, 2.42it/s][2024-12-25 11:47:05,984] [ INFO] - ***** Running Evaluation *****
[2024-12-25 11:47:05,985] [ INFO] - Num examples = 120
[2024-12-25 11:47:05,985] [ INFO] - Total prediction steps = 8
[2024-12-25 11:47:05,985] [ INFO] - Pre device batch size = 16
[2024-12-25 11:47:05,985] [ INFO] -
``` | closed | 2024-12-25T07:35:38Z | 2025-03-18T03:22:00Z | https://github.com/PaddlePaddle/PaddleNLP/issues/9693 | [
"question",
"stale"
] | modderBUG | 3 |
roboflow/supervision | deep-learning | 1,610 | How to define image size (dimensions) for detection inference while using sv.Detections.from_ultralytics() for images/videos? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
How can I define a specific image size while running inference with supervision?

Whenever I use the `sv.Detections.from_ultralytics()` method to run inference with COCO-pretrained YOLO models on videos (1280 x 720 resolution), the inference output always shows a fixed dimension (384 x 640) for the frames.

Here is the sample code I am running:
```
import cv2
import numpy as np
from tqdm import tqdm

from ultralytics import YOLO
import supervision as sv

model = YOLO("yolov9e.pt")
model.fuse()
CLASS_NAMES_DICT = model.model.names  # id -> class-name mapping used below
selected_classes = [1, 2, 3, 5, 7] # from COCO
selected_class_info = [(class_id, CLASS_NAMES_DICT.get(class_id, f"No class name found for Class ID: {class_id}"))
for class_id in selected_classes]
video_info = sv.VideoInfo.from_video_path(SOURCE_VIDEO_PATH)
print(video_info)
stride =15
generator = sv.get_video_frames_generator(SOURCE_VIDEO_PATH,stride=stride)
iterator = iter(generator)
frame = next(iterator)
fps = video_info.fps
bounding_box_annotator = sv.BoxAnnotator(thickness=1)
label_annotator = sv.LabelAnnotator(text_thickness=1, text_scale=0.4, text_padding=1)
# Define callback function for video processing
def callback(frame: np.ndarray, index: int, fps: int) -> np.ndarray:
# Resize frames using cv2
#frame = cv2.resize(frame, (video_info.width, video_info.height))
#print("Current frame dimensions: {}x{}".format(frame.shape[1], frame.shape[0]))
results = model(frame, verbose=True)[0]
detections = sv.Detections.from_ultralytics(results)
detections = detections[np.isin(detections.class_id, selected_classes)]
detections = detections[detections.confidence > 0.1]
# annotate
labels = [
f"{model.model.names[class_id]} {confidence:0.2f}" for confidence, class_id
in zip(detections.confidence, detections.class_id)
]
annotated_frame = bounding_box_annotator.annotate(scene=frame, detections=detections)
annotated_frame = label_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels)
return annotated_frame
# Process video
with sv.VideoSink(target_path=TARGET_VIDEO_PATH, video_info=video_info) as sink:
for index, frame in enumerate(tqdm(generator, desc="Processing Video", unit="frame", total=video_info.total_frames // stride)):
annotated_frame = callback(frame, index, fps)
sink.write_frame(annotated_frame)
```
Here's what the output looks like:
```
Processing Video: 1%| | 6/1024 [00:02<03:54, 4.35frame/s]
0: 384x640 16 cars, 1 truck, 5 traffic lights, 20.3ms
Speed: 3.5ms preprocess, 20.3ms inference, 0.9ms postprocess per image at shape (1, 3, 384, 640)
0: 384x640 1 person, 13 cars, 1 truck, 5 traffic lights, 22.7ms
Speed: 1.7ms preprocess, 22.7ms inference, 0.9ms postprocess per image at shape (1, 3, 384, 640)
0: 384x640 16 cars, 1 truck, 5 traffic lights, 22.3ms
Speed: 2.0ms preprocess, 22.3ms inference, 1.0ms postprocess per image at shape (1, 3, 384, 640)
```
Even if I resize the frames inside the callback function:
```
def callback(frame: np.ndarray, index: int, fps: int) -> np.ndarray:
...
frame = cv2.resize(frame, (video_info.width, video_info.height))
print("Current frame dimensions: {}x{}".format(frame.shape[1], frame.shape[0]))
results = model(frame, verbose=True)[0]
detections = sv.Detections.from_ultralytics(results)
...
```
I still have the same inference outputs:
```
Processing Video: 0%| | 1/1024 [00:01<30:54, 1.81s/frame]
Current frame dimensions: 1280x720
0: 384x640 14 cars, 5 traffic lights, 18.3ms
Speed: 1.7ms preprocess, 18.3ms inference, 0.9ms postprocess per image at shape (1, 3, 384, 640)
Current frame dimensions: 1280x720
0: 384x640 1 person, 17 cars, 6 traffic lights, 39.9ms
Speed: 2.5ms preprocess, 39.9ms inference, 1.5ms postprocess per image at shape (1, 3, 384, 640)
```
However, if I run the models separately (without supervision), this isn't a problem, because I can define the image size. For example:
Running from Ultralytics (using `imgsz=1280`):
```
from ultralytics import YOLO
model = YOLO('yolov9e.pt')
result = model.predict(source=SOURCE_VIDEO_PATH, save=True, imgsz=1280, conf=0.1, classes=[1, 2, 3, 5, 7])  # 'source' is the Ultralytics argument name
```
or, from the Github package (using `img=1280`):
```
%cd yolov9
!python detect.py --img 1280 --conf 0.1 --weights /yolov9/weights/yolov9-e.pt --source SOURCE_VIDEO_PATH --verbose 1 --classes 1 2 3 5 7 --line-thickness 1
```
Running the YOLO models directly from Ultralytics or from the GitHub package gives better detection results than going through the `sv.Detections.from_ultralytics()` method. I tried this with YOLOv8, YOLOv9, YOLOv11, and YOLOv8+SAHI, and all showed the same trend. So, am I missing something?
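For reference, Ultralytics' `predict()` also accepts `imgsz` per call, so presumably `results = model(frame, imgsz=1280, verbose=True)[0]` inside the callback would control the inference size before the results reach `sv.Detections.from_ultralytics()`. A tiny dependency-free stand-in (names are illustrative, not real Ultralytics code) showing only the call shape:

```python
def fake_model(frame, imgsz=640, verbose=False):
    """Stand-in for YOLO.__call__ that just records the requested size."""
    return [{"requested_imgsz": imgsz}]

# The same keyword the callback would forward to the real model:
result = fake_model("frame", imgsz=1280)[0]
print(result["requested_imgsz"])  # 1280
```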
### Additional
_No response_ | closed | 2024-10-20T21:04:17Z | 2024-10-21T02:29:57Z | https://github.com/roboflow/supervision/issues/1610 | [
"question"
] | tonmoy-TS | 1 |
MaartenGr/BERTopic | nlp | 2,114 | reduce_outliers and update_topics remove stop_words and ngram_range effects | ### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Describe the bug
After running `reduce_outliers` and `update_topics`, the effects of all settings passed to the `vectorizer_model` (stop words, n-gram range) are gone. The resulting topic representations only contain single words. Thanks.

```
from bertopic import BERTopic
from bertopic.representation import MaximalMarginalRelevance
from sklearn.feature_extraction.text import CountVectorizer

vectorizer_model = CountVectorizer(stop_words=stop_words, ngram_range=(1, 4), min_df=5)
representation_model = MaximalMarginalRelevance(diversity=0.5)
topic_model_outlier_reduction = BERTopic(
vectorizer_model=vectorizer_model,
representation_model=representation_model,
top_n_words=15,
min_topic_size=15,
calculate_probabilities=True
)
topics_outlier_reduction, probs_outlier_reduction = topic_model_outlier_reduction.fit_transform(docs, embeddings)
new_topics = topic_model_outlier_reduction.reduce_outliers(docs,
topics_outlier_reduction,
threshold=0.2, strategy="distributions") # probabilities=probs_outlier_reduction,
topic_model_outlier_reduction.update_topics(docs, topics=new_topics)
```
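For context, `update_topics` takes a `vectorizer_model` argument; the symptom would be explained if it falls back to a fresh default `CountVectorizer` when that argument is omitted, in which case re-passing the configured vectorizer (`update_topics(docs, topics=new_topics, vectorizer_model=vectorizer_model)`) should preserve the n-gram and stop-word settings. A tiny dependency-free stand-in (all names illustrative, not real BERTopic code) of that suspected default-fallback pattern:

```python
class FakeTopicModel:
    """Stand-in for the suspected behaviour; not real BERTopic code."""

    def __init__(self, vectorizer_model):
        self.vectorizer_model = vectorizer_model

    def update_topics(self, docs, topics, vectorizer_model=None):
        # Falls back to a default vectorizer when none is supplied.
        self.vectorizer_model = vectorizer_model or {"ngram_range": (1, 1)}

configured = {"ngram_range": (1, 4), "stop_words": "english"}
model = FakeTopicModel(configured)
model.update_topics([], [])  # settings silently reset to the default
print(model.vectorizer_model["ngram_range"])  # (1, 1)
model.update_topics([], [], vectorizer_model=configured)  # settings preserved
print(model.vectorizer_model["ngram_range"])  # (1, 4)
```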
### BERTopic Version
0.16.0 | open | 2024-08-06T17:36:32Z | 2024-08-07T09:10:17Z | https://github.com/MaartenGr/BERTopic/issues/2114 | [
"bug"
] | shj37 | 1 |
FlareSolverr/FlareSolverr | api | 953 | Unable to request using the first cookie, each request needs to be bypassed again | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes/no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [yes/no]
- Are you using a Proxy: [yes/no]
- Are you using Captcha Solver: [yes/no]
- If using captcha solver, which one:
- URL to test this issue:https://www.ozon.ru
```
### Description
https://www.ozon.ru
### Logged Error Messages
```text
https://www.ozon.ru
```
### Screenshots

 | closed | 2023-11-09T03:28:25Z | 2023-11-26T12:52:00Z | https://github.com/FlareSolverr/FlareSolverr/issues/953 | [
"more information needed"
] | aini123152011 | 1 |
pydantic/pydantic-settings | pydantic | 434 | Clarification Needed on pyproject_toml_table_header Logic in Pydantic Settings | # Issue Context
I tried to use pydantic-settings for project configuration management, but I couldn't understand why `pyproject_toml_table_header` is restricted to a single table.
```
self.toml_table_header: tuple[str, ...] = settings_cls.model_config.get(
'pyproject_toml_table_header', ('tool', 'pydantic-settings')
)
self.toml_data = self._read_files(self.toml_file_path)
for key in self.toml_table_header:
self.toml_data = self.toml_data.get(key, {})
super(TomlConfigSettingsSource, self).__init__(settings_cls, self.toml_data)
```
If multiple header keys are provided, this loop seems to overwrite `toml_data` repeatedly, so `toml_data` ends up containing only the content nested under the final key. Is my understanding correct?
Is there alternative logic that would handle this better? For instance, would it be more appropriate to use something like:
```
self.toml_data = {k:v for k, v in self.toml_data.items() if k in self.toml_table_header}
```
or other
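To make my reading concrete, here is a plain-dict sketch (table contents are made up) of what the loop does — it treats the header tuple as a single path into nested tables, not as a set of alternative headers:

```python
# Stand-in for parsed pyproject.toml contents (values are illustrative).
toml_data = {
    "tool": {
        "pydantic-settings": {"foo": "bar"},
        "other-tool": {"baz": 1},
    }
}

data = toml_data
for key in ("tool", "pydantic-settings"):  # the default header tuple
    data = data.get(key, {})

print(data)  # {'foo': 'bar'} -- only the [tool.pydantic-settings] table survives
```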
| closed | 2024-10-01T16:38:32Z | 2025-01-09T09:57:07Z | https://github.com/pydantic/pydantic-settings/issues/434 | [
"feature request"
] | py-mu | 9 |
vitalik/django-ninja | django | 869 | import error when using Ninja in Docker | I have an existing, large Django application running in a Docker container, and I added Ninja.
However, after restarting the container, I get an import error: `ModuleNotFoundError: No module named 'ninja'`.
But then after making a change, Django reloads (while the container keeps running) and it gets imported just fine.
So it seems to be a timing issue: Ninja isn't fully available yet while the rest of the code tries to import it.
How can this be prevented?
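One hedged possibility, assuming the package is being installed while the container starts (e.g. an entrypoint running `pip install` against a mounted volume): bake the dependency into the image at build time so `ninja` is importable before Django runs. A sketch (file names and commands are assumptions, not from the issue):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install dependencies at *build* time, before any application code runs,
# so django-ninja is already importable when the container starts.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # includes django-ninja
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```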
| closed | 2023-10-03T06:22:17Z | 2023-10-03T08:42:06Z | https://github.com/vitalik/django-ninja/issues/869 | [] | AlexDcm | 1 |
Nemo2011/bilibili-api | api | 831 | [Bug] One of Bilibili's domains has been taken down, so the probe_url check fails and files cannot be uploaded | **Module version:** 16.3.0
**Module path:** `bilibili_api.video_uploader.py`
**Error message:**
```
[2024-10-17 21:29:35,235: WARNING/ForkPoolWorker-2] File "/usr/local/lib/python3.11/site-packages/bilibili_api/video_uploader.py", line 91, in _probe
httpx.post(f'https:{line["probe_url"]}', data=data, timeout=timeout)
```
**Offending code:**
```
/usr/local/lib/python3.11/site-packages/bilibili_api/data/video_uploader_lines.json
"ws": {
"os": "upos",
"upcdn": "ws",
"probe_version": 20221109,
"query": "upcdn=ws&probe_version=20221109",
"probe_url": "//upos-cs-upcdnws.bilivideo.com/OK"
}
```
---
The domain has been taken down:

```
$ ping upos-cs-upcdnws.bilivideo.com
ping: upos-cs-upcdnws.bilivideo.com: Name or service not known
```
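A hedged local workaround sketch (the second entry below is illustrative placeholder data, not from the real file): filter out any upload line whose probe host no longer resolves — e.g. drop the `ws` entry from `video_uploader_lines.json` — before a line gets probed:

```python
# Abridged line data; "other" is a made-up placeholder entry.
lines = {
    "ws": {"upcdn": "ws", "probe_url": "//upos-cs-upcdnws.bilivideo.com/OK"},
    "other": {"upcdn": "other", "probe_url": "//example.bilivideo.com/OK"},
}

dead_hosts = {"upos-cs-upcdnws.bilivideo.com"}
usable = {
    name: cfg
    for name, cfg in lines.items()
    if cfg["probe_url"].lstrip("/").split("/")[0] not in dead_hosts
}
print(sorted(usable))  # ['other']
```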
| open | 2024-10-18T01:48:10Z | 2024-10-23T02:25:40Z | https://github.com/Nemo2011/bilibili-api/issues/831 | [
"bug"
] | liaozd | 2 |
manrajgrover/halo | jupyter | 14 | Debug log on terminal | Hi,
When using Halo with boto3, I see strange behaviour: all of boto's debug logs are displayed.

MacOS/Python3.6
halo (0.0.5)
boto3 (1.4.4)
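In case it's the usual cause — the `boto3`/`botocore` loggers inheriting a DEBUG-level root logger — a hedged workaround sketch is to raise their levels before spinning up Halo:

```python
import logging

# Silence AWS debug chatter while keeping warnings and errors.
for name in ("boto3", "botocore", "urllib3"):
    logging.getLogger(name).setLevel(logging.WARNING)

print(logging.getLogger("botocore").isEnabledFor(logging.DEBUG))  # False
```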
Thanks | closed | 2017-10-03T13:21:14Z | 2017-10-06T16:37:45Z | https://github.com/manrajgrover/halo/issues/14 | [
"bug",
"up-for-grabs",
"hacktoberfest"
] | antoinerrr | 3 |
0b01001001/spectree | pydantic | 350 | bug: SecuritySchemeData root_validator conflicts with alias | ### Describe the bug
It is impossible to create a `SecuritySchemeData` using the `field_in` keyword, because of the `alias="in"` set on the pydantic `field_in` field.
### To Reproduce
```
from spectree.models import InType, SecureType, SecuritySchemeData
if __name__ == "__main__":
foo = SecuritySchemeData(
type=SecureType.API_KEY,
name="X-MIRO-ID",
field_in=InType.HEADER,
description="Miro passport authentication",
scheme="passport",
)
```
### Expected behavior
It should create a valid `SecuritySchemeData`, but instead it raises this exception:
```
pydantic.error_wrappers.ValidationError: 1 validation error for SecuritySchemeData
__root__
For `apiKey` type `name, field_in` field(s) is required. (type=value_error)
```
### The spectree version
Name: spectree
Version: 1.2.3
Summary: generate OpenAPI document and validate request&response with Python annotations.
Home-page:
Author:
Author-email: Keming Yang <kemingy94@gmail.com>
License: Apache-2.0
Location: /Users/jkuperus/Library/Caches/pypoetry/virtualenvs/data-workflow-service-VxjL5469-py3.8/lib/python3.8/site-packages
Requires: pydantic
Required-by:
Darwin Jelmers-MacBook-Pro.local 22.6.0 Darwin Kernel Version 22.6.0: Wed Jul 5 22:22:05 PDT 2023; root:xnu-8796.141.3~6/RELEASE_ARM64_T6000 arm64
Python 3.8.10
### Additional context
As a workaround you can do
```
SecuritySchemeData(
**{
"type": SecureType.API_KEY,
"name": "X-MIRO-ID",
"in": InType.HEADER,
"scheme": "passport",
"description": "Miro passport authentication",
}
)
``` | closed | 2023-10-03T11:30:12Z | 2024-10-30T09:24:55Z | https://github.com/0b01001001/spectree/issues/350 | [
"bug"
] | jelmerk | 2 |
aleju/imgaug | machine-learning | 743 | Would ElasticTransformation add a function to process only keypoints? | Same as the title.
I don't have an image, just keypoints.
I can use keypoints-only augmentation in PiecewiseAffine, and I rewrote `_augment_keypoints_by_samples` in PiecewiseAffine as in https://github.com/aleju/imgaug/issues/621
So I want to show the difference in the keypoints before and after ElasticTransformation. Do you have any advice?
nteract/papermill | jupyter | 119 | Review third party package for parameterizing notebooks (Python 3) | Mostly a note to self, as I'll be traveling for a few weeks and would like to check this out: https://github.com/hz-inova/run_jnb | closed | 2018-02-12T16:53:54Z | 2018-08-03T23:10:50Z | https://github.com/nteract/papermill/issues/119 | [] | willingc | 0 |
aleju/imgaug | machine-learning | 730 | 0.5.0 release? | Hi, just wondering when the next release will come out? I'm seeing a very intermittent, hard-to-reproduce bug in v0.4.0 (see below) that looks like it's probably been fixed in the current source.
Bug stack trace (snipped to last few lines) is below. Happy to post a separate issue for it, but probably unnecessary when it looks like it's been fixed?
```
...
File "/opt/conda/lib/python3.7/site-packages/imgaug/augmenters/geometric.py", line 4344, in _augment_batch_
self._order_segmentation_maps)
File "/opt/conda/lib/python3.7/site-packages/imgaug/augmenters/geometric.py", line 4422, in _augment_hm_or_sm_by_samples
arr, dx, dy, order=order, cval=cval, mode=mode)
File "/opt/conda/lib/python3.7/site-packages/imgaug/augmenters/geometric.py", line 4753, in _map_coordinates
x_shifted = x + (-1) * dx
ValueError: operands could not be broadcast together with shapes (1081,1921) (1080,1920)
``` | open | 2020-11-03T22:21:57Z | 2020-11-08T19:50:25Z | https://github.com/aleju/imgaug/issues/730 | [] | OliverColeman | 1 |
JaidedAI/EasyOCR | machine-learning | 1,109 | Any advice on how to improve performance on custom datasets? | The final goal is to increase the recognition rate of Korean and English in Korean, English, Chinese, etc. In the case of the character area, I know that it is recognized in units of sentences. Which do you think will be a better dataset for training, word units or sentence units? Also, is it possible to learn with a Korean-applied dataset and apply it together with an English recognition model? | open | 2023-08-10T04:08:52Z | 2023-09-25T22:15:22Z | https://github.com/JaidedAI/EasyOCR/issues/1109 | [] | Seoung-wook | 1 |
gevent/gevent | asyncio | 1,489 | Random crash on Windows when gevent.idle() and threadpool used | gevent version: 1.4.0
Python version: cPython 3.7.3 downloaded from python.org
Operating System: Win10
### Description:
The code below crashes on every second run.
Output:
```
Run #0 threadpool object num: 12
Run #1 threadpool object num: 13
Run #2 threadpool object num: 13
Run #3 threadpool object num: 14
Run #4 threadpool object num: 16
Run #5 threadpool object num: 16
Run #6 threadpool object num: 16
Run #7 threadpool object num: 18
Run #8 threadpool object num: 22
[Finished in 0.6s with exit code 3221226356]
```
### What I've run:
```python
import time
import gevent
import gevent.monkey
import gevent.threadpool
import sqlite3
import shutil
gevent.monkey.patch_all(thread=False, threading=False)
import threading
def worker():
db = sqlite3.connect("test.db")
db.close()
return "worker done"
def test():
pool = gevent.get_hub().threadpool
gevent.idle()
for i in range(20):
pool.spawn(worker)
assert pool.apply(worker) == "worker done"
time.sleep(0.1)
import gc
for i in range(100):
test()
pools = [obj for obj in gc.get_objects() if "threadpool" in str(type(obj))]
print("Run #%s threadpool object num: %s" % (i, len(pools)))
print("Done.")
```
Update:
Adding `hub = gevent.hub.Hub()` into the `test` function makes it crash at a similar rate even without `gevent.idle()`. In this case I got this error message:
```
Fatal Python error: ffi.from_handle() detected that the address passed points to garbage. If it is really the result of ffi.new_handle(), then the Python object has already been garbage collected
Thread 0x00000e34 (most recent call first):
File "E:\Web\Today\test2.py", line 11 in worker
File "C:\python3\lib\site-packages\gevent\threadpool.py", line 281 in _worker
Thread 0x00004030 (most recent call first):
File "C:\python3\lib\site-packages\gevent\_threading.py", line 84 in wait
File "C:\python3\lib\site-packages\gevent\_threading.py", line 166 in get
File "C:\python3\lib\site-packages\gevent\threadpool.py", line 270 in _worker
…
Current thread 0x00004b60 (most recent call first):
File "C:\python3\lib\site-packages\gevent\libuv\loop.py", line 32 in _find_loop_from_c_watcher
File "C:\python3\lib\site-packages\gevent\_ffi\loop.py", line 261 in python_prepare_callback
File "C:\python3\lib\site-packages\gevent\libuv\loop.py", line 473 in run
File "C:\python3\lib\site-packages\gevent\hub.py", line 582 in run
[Finished in 1.2s with exit code 3221226505]
```
It's a pretty edge case, but it may be connected to the original error. | closed | 2019-12-07T20:43:31Z | 2019-12-12T12:58:35Z | https://github.com/gevent/gevent/issues/1489 | [
"Type: Bug",
"Loop: libuv"
] | HelloZeroNet | 4 |
fastapi/sqlmodel | fastapi | 271 | No overload variant of "select" matches argument types | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [ ] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from geoalchemy2.types import Geometry
from sqlmodel import Column, Field, SQLModel, cast, select
class Country(SQLModel, table=True):
class Config:
arbitrary_types_allowed = True
id: Optional[int] = Field(default=None, primary_key=True)
name: str
population: int
geometry: Geometry = Field(
sa_column=Column(Geometry(geometry_type="POLYGON", srid=3035))
)
select(
Country.name,
Country.population,
cast(Country.geometry, Geometry).ST_XMin(),
cast(Country.geometry, Geometry).ST_YMin(),
cast(Country.geometry, Geometry).ST_XMax(),
cast(Country.geometry, Geometry).ST_YMax(),
Country.geometry.ST_AsSVG(),
)
```
### Description
I'm selecting multiple model attributes and calculated attributes and there is a `mypy` error:
> No overload variant of "select" matches argument types "Any", "Any", "Any", "Any", "Any", "Any", "Any"
A long list of "Possible overload variants" follows.
P.S.: There is no error when I use `from sqlalchemy import select`; the `mypy` error only happens with `from sqlmodel import select`. With the SQLAlchemy import, however, `sqlmodel.Session` complains if I use it with `sqlalchemy.select`.
### Operating System
Linux
### Operating System Details
Docker image `python:3.10.2-bullseye`
### SQLModel Version
0.0.6
### Python Version
3.10
### Additional Context
```py
error: No overload variant of "select" matches argument types "str", "int", "Any", "Any", "Any", "Any", "Any"
note: Possible overload variants:
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, **kw: Any) -> SelectOfScalar[_TScalar_0]
note: def [_TModel_0 <: SQLModel] select(entity_0: Type[_TModel_0], **kw: Any) -> SelectOfScalar[_TModel_0]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TScalar_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TModel_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TScalar_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TModel_2 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TModel_2]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TScalar_2]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TModel_2]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: _TScalar_2, **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TScalar_2]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TModel_2 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TModel_2]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TScalar_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TScalar_2, _TModel_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TModel_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TScalar_1, _TModel_2, _TModel_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TScalar_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TScalar_2, _TModel_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TModel_2, _TScalar_3]]
note: def [_TScalar_0 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: _TScalar_0, entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TScalar_0, _TModel_1, _TModel_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TScalar_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TScalar_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TModel_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TScalar_1 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: _TScalar_1, entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TScalar_1, _TModel_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TScalar_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TScalar_2 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None), _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: _TScalar_2, entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TScalar_2, _TModel_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TScalar_3 in (Column[Any], Sequence[Any], Mapping[Any, Any], UUID, datetime, float, int, bool, bytes, str, None)] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: _TScalar_3, **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TModel_2, _TScalar_3]]
note: def [_TModel_0 <: SQLModel, _TModel_1 <: SQLModel, _TModel_2 <: SQLModel, _TModel_3 <: SQLModel] select(entity_0: Type[_TModel_0], entity_1: Type[_TModel_1], entity_2: Type[_TModel_2], entity_3: Type[_TModel_3], **kw: Any) -> Select[Tuple[_TModel_0, _TModel_1, _TModel_2, _TModel_3]]
Found 1 error in 1 file (checked 6 source files)
``` | open | 2022-03-15T15:05:50Z | 2024-09-05T22:39:53Z | https://github.com/fastapi/sqlmodel/issues/271 | [
"question"
] | StefanBrand | 4 |
dynaconf/dynaconf | django | 922 | [bug] Integer like array is not cast to string even if said so. | **Describe the bug**
I am trying to have a number-only string as a value in the dynaconf settings object, but the parser only treats it as a number.
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<!-- Describe or use the command `$ tree -v` and paste below -->
<details>
<summary> Project structure </summary>
```bash
# /path/
# ...../folder/...
# please provide your folder structure here
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
```TOML
# settings.toml
[default]
password = 123456
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**/path/src/config.py**
```python
import os

from dynaconf import Dynaconf, Validator
settings = Dynaconf(
environments=True,
settings_files=["settings.toml", ".secrets.toml"],
root_path=os.path.dirname(os.path.abspath(__file__)),
validators=[
Validator("PASSWORD", must_exist=True, cast=str)
],
)
```
**/path/src/app.py**
```python
from config import settings
assert settings.password == "123456" # This throws an error and can only read the password as an integer.
```
| closed | 2023-04-19T15:33:55Z | 2023-04-27T18:10:20Z | https://github.com/dynaconf/dynaconf/issues/922 | [
"bug"
] | FeryET | 3 |
rthalley/dnspython | asyncio | 748 | dns.resolver.NXDOMAIN problem | hello. The cname of the subdomain "images.mytech.walmart.com" appears to be "mytechcmsprod.azureedge.net". but "images.mytech.walmart.com" not has an A dns record. if the subdomain address does not have an "A" dns record, I wrote a function that tries to print the "CNAME" dns record to the screen.
```
import dns.resolver

def test_dns():
    try:
        r = dns.resolver.resolve("images.mytech.walmart.com", "A")
    except dns.resolver.NXDOMAIN as nx:
        print(nx.canonical_name.to_text())
```
When I call the above function, it gives me the output "cs1.wpc.phicdn.net.". But the CNAME it should give me is "mytechcmsprod.azureedge.net".
For other sites the same function works correctly; it gives the wrong output only for this subdomain. Why am I getting the wrong output for the subdomain "images.mytech.walmart.com"?
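For what it's worth, this output is consistent with `canonical_name` being the name at the *end* of the whole CNAME chain rather than the first hop — Azure CDN names commonly alias further to `phicdn.net` edge names. A toy sketch of the distinction (the intermediate chain entries here are made up for illustration):

```python
# Hypothetical CNAME chain, roughly mirroring the real lookup:
chain = {
    "images.mytech.walmart.com.": "mytechcmsprod.azureedge.net.",
    "mytechcmsprod.azureedge.net.": "mytechcmsprod.ec.azureedge.net.",
    "mytechcmsprod.ec.azureedge.net.": "cs1.wpc.phicdn.net.",
}

def first_cname(name):
    # What a direct CNAME query for `name` returns: one hop.
    return chain[name]

def canonical_name(name):
    # What resolvers report as the canonical name: the end of the chain.
    while name in chain:
        name = chain[name]
    return name

print(first_cname("images.mytech.walmart.com."))     # mytechcmsprod.azureedge.net.
print(canonical_name("images.mytech.walmart.com."))  # cs1.wpc.phicdn.net.
```

If you want the one-hop alias rather than the final name, querying the `CNAME` record type directly (instead of reading `canonical_name` off the exception) should give you `mytechcmsprod.azureedge.net`.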
| closed | 2022-01-03T17:48:51Z | 2022-01-03T23:02:58Z | https://github.com/rthalley/dnspython/issues/748 | [] | Phoenix1112 | 8 |
521xueweihan/HelloGitHub | python | 1,857 | C++ | ## 项目推荐
- 项目地址:仅收录 GitHub 的开源项目,请填写 GitHub 的项目地址
- 类别:请从中选择(C、C#、C++、CSS、Go、Java、JS、Kotlin、Objective-C、PHP、Python、Ruby、Swift、其它、书籍、机器学习)
- 项目后续更新计划:
- 项目描述:
- 必写:这是个什么项目、能用来干什么、有什么特点或解决了什么痛点
- 可选:适用于什么场景、能够让初学者学到什么
- 描述长度(不包含示例代码): 10 - 256 个字符
- 推荐理由:令人眼前一亮的点是什么?解决了什么痛点?
- 示例代码:(可选)长度:1-20 行
- 截图:(可选)gif/png/jpg
## 提示(提交时请删除以下内容)
> 点击上方 “Preview” 更方便地阅读以下内容,
提高项目收录的概率方法如下:
1. 到 HelloGitHub 网站首页:https://hellogithub.com 搜索要推荐的项目地址,查看准备推荐的项目是否被推荐过。
2. 根据 [项目审核标准说明](https://github.com/521xueweihan/HelloGitHub/issues/271) 修改项目
3. 如您推荐的项目收录到《HelloGitHub》月刊,您的 GitHub 帐号将展示在 [贡献人列表](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md),**同时会在本 issues 中通知您**。
再次感谢您对 HelloGitHub 项目的支持!
| closed | 2021-08-28T12:43:38Z | 2021-08-28T12:43:43Z | https://github.com/521xueweihan/HelloGitHub/issues/1857 | [
"恶意issue"
] | zcfloat | 1 |
ageitgey/face_recognition | machine-learning | 1,019 | ImportError: No module named face_recognition | * face_recognition version:
* Python version:
* Operating System:
### Description
I am trying to get started with face_recognition.
### What I Did
Running mac OS
Installed Anaconda - running python 3.7
pip 19.0.3 from /anaconda3/lib/python3.7/site-packages/pip (python 3.7)
pip install face_recognition succeeds

Tried to import face_recognition, but it's not working. It keeps saying the module was not found. Any help, please?
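One thing worth checking on Anaconda setups (a common cause of this symptom, though only a guess here): `pip` may be installing into a different interpreter than the one actually running your script. A quick diagnostic sketch:

```python
import sys

# The interpreter actually executing your code — compare this path with
# the one shown by `pip --version` (/anaconda3/... in your case).
print(sys.executable)
print(".".join(map(str, sys.version_info[:3])))
```

If the two paths differ, running `python -m pip install face_recognition` installs the package into the interpreter you actually run.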
| open | 2020-01-04T11:58:11Z | 2022-11-01T07:43:14Z | https://github.com/ageitgey/face_recognition/issues/1019 | [] | leeadh | 6 |
plotly/dash | plotly | 2,447 | [BUG] Content-type of image type is application/json | **Context**
I'm using Dash 2.8.0 with Python 3.8.
I have a multi-page app using the default `use_pages=True` setup with pages defined in the `pages/` directory.
I also have a variety of static assets in the `assets/` directory. The folder structure is as follows:
```
assets/
- styles.css
- typography.css
- mountain.png
pages/
- some_page.py
app.py
```
If I try to load the image using an `Img(src=get_asset_url('mountain.png'))` tag, I get the following response.

The Content-Type header of the requested mountain.png is "application/json", so my binary_encoding rules try to load the image as plaintext, causing an error when rendering the resource.
**Expected behavior**
I would expect the Content-type header to be image/png or another generic image MIME type.
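As a quick sanity check of what the header ought to be, the stdlib `mimetypes` table (which Flask's static-file handling normally consults) maps the filename to an image type, not JSON — a minimal sketch:

```python
import mimetypes

# Derive the expected Content-Type from the asset's filename:
guessed, _encoding = mimetypes.guess_type("assets/mountain.png")
print(guessed)  # image/png — what the response header should carry
```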
| closed | 2023-03-07T09:38:10Z | 2024-07-24T17:38:04Z | https://github.com/plotly/dash/issues/2447 | [] | rsewell97 | 4 |
iperov/DeepFaceLab | machine-learning | 857 | No preview appearing | no preview is showing. I have tried training mode and in every training mode no preview showed.
this error message also appears even though i am using H64. (specs: GTX1050 2gb, intel(R) Xeon(R), 12gb ram)
Starting. Press "Enter" to stop training and save model.
Error: OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'training/Adam/Variable_30/Assign', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 111, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\ModelBase.py", line 507, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\Model_H64\Model.py", line 88, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1216, in train_on_batch
self._make_train_function()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 509, in _make_train_function
loss=self.total_loss)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in get_updates
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in <listcomp>
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 704, in zeros
return variable(v, dtype=dtype, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 402, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1329, in __init__
constraint=constraint)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1481, in _init_from_args
validate_shape=validate_shape).op
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/Variable_30/Assign}} = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 111, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\ModelBase.py", line 507, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\Model_H64\Model.py", line 88, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 2697, in __call__
if hasattr(get_session(), '_make_callable_from_options'):
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 206, in get_session
session.run(tf.variables_initializer(uninitialized_vars))
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'training/Adam/Variable_30/Assign', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 111, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\ModelBase.py", line 507, in train_one_iter
losses = self.onTrainOneIter(sample, self.generator_list)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\models\Model_H64\Model.py", line 88, in onTrainOneIter
total, loss_src_bgr, loss_src_mask, loss_dst_bgr, loss_dst_mask = self.ae.train_on_batch( [warped_src, target_src_full_mask, warped_dst, target_dst_full_mask], [target_src, target_src_full_mask, target_dst, target_dst_full_mask] )
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 1216, in train_on_batch
self._make_train_function()
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\engine\training.py", line 509, in _make_train_function
loss=self.total_loss)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in get_updates
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\DeepFaceLab\nnlib\nnlib.py", line 1075, in <listcomp>
ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 704, in zeros
return variable(v, dtype=dtype, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py", line 402, in variable
v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1329, in __init__
constraint=constraint)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1481, in _init_from_args
validate_shape=validate_shape).op
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[3,3,512,2048] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node training/Adam/Variable_30/Assign (defined at C:\Users\Admin\Desktop\deepfake\DeepFaceLab_CUDA\_internal\python-3.6.8\lib\site-packages\keras\backend\tensorflow_backend.py:402) = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_14)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
I believe there is a way around this problem. I know that OOM means out of memory, but I have already set my batch size to 1, so I hope someone can help. Thank you. | closed | 2020-08-10T14:03:29Z | 2023-06-11T07:42:19Z | https://github.com/iperov/DeepFaceLab/issues/857 | [] | JeezLoveJazzMusic | 3 |
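For a sense of scale, the single tensor that failed to allocate (`shape=[3,3,512,2048]`, float32) is only about 36 MiB — the rest of the 2 GB card is already consumed by the model weights and optimizer state, which is why reducing the batch size alone doesn't help here. A quick back-of-the-envelope:

```python
# Bytes needed by the tensor the allocator could not place:
shape = (3, 3, 512, 2048)
num_elements = 1
for d in shape:
    num_elements *= d
bytes_needed = num_elements * 4  # float32 = 4 bytes per element

print(f"{bytes_needed / 2**20:.0f} MiB")  # 36 MiB
```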
ultralytics/yolov5 | pytorch | 12,522 | Inference Resolutions,Parameters and TensorRT Usage in 1280x1280 models | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello. First of all, I have read the issues and documentation related to this topic. Although there are answers to my questions in different topic headings, I couldn't generally reach the result I wanted. The subject I want to ask about is the usage of 1280x1280 models and running these models with Tensor RT support.
First of all, during training, we set the model input to 1280 using the "--img" command. We also choose the yolov5n6 from the 1280 model family available in the YOLOv5 repository. We started the model training with appropriate commands. At the end of the day, we have our own fine-tuned yolov5n6 model specific to our custom dataset (with images at a resolution of 1920x1080, 16x9 aspect ratio).
Now, when using this model during the inference stage, if we don't specify a size parameter, it defaults to 640. However, if we specify a size parameter of 1280, it uses 1280 as the size.
Now, here's a question that arises: How does an architecture designed to work with 1280x1280 input images handle 640x640 inputs? Shouldn't the input size of an architecture be fixed? When the 1280-size model is used, the inference time increases, while it decreases with 640. This situation indicates that the model operates differently at different input sizes and even slows down as the size increases. It is possible to observe this phenomenon in detect.py as well. In short, how can it be possible?
My first question was about inference. Now I'm moving on to the part about TensorRT. When I want to use TensorRT, my intention is actually to keep the input size the same at 1280x1280. I just want to optimize the architecture to work better on NVIDIA devices. This way, an inference process running at 20 Hz should increase to 30 or 40 Hz.
Using Export.py, I can successfully generate a .engine model with a resolution of 1280x1280. The only thing to be aware of in Export.py is that the resolutions provided should be multiples of 64 because the stride value is set to 64 in common.py.
The problem arises during inference. When I want to perform inference with the .engine model, it tells me that the expected input for the engine model is 1,3,1280,1280, but the image I provide is 1,3,1280,960. So, there is a resolution problem. If I update the size parameter to 1280x1280 during inference, I still encounter a problem. It is highly likely that the software you are using internally resizes the input from 1920x1080 to 1280x960 for a specific reason.
To solve these issues, I resized my input image to 1280x1280 using cv2.resize, and I provided the [1280,1280] size parameter. Only then was I able to run it with TensorRT.
With the methods I mentioned at the end, I can create the desired TensorRT engine model. It just needs to be a multiple of 64. For example, I create a 960x960 model and resize the source image to 960x960 myself, and it works. Is this process correct? Can we adjust engine models to the desired resolution?
If you can provide more clarity on the topics I mentioned, I would greatly appreciate it. Thank you in advance.
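For what it is worth, the square input can also be reached without distorting the 16:9 frame by letterboxing: scale the image to fit, then pad the rest. A minimal numpy sketch of the idea (the grey fill value 114 and the nearest-neighbour resize are illustrative assumptions; YOLOv5's own letterbox uses OpenCV interpolation):

```python
import numpy as np

def letterbox(img, size=1280, fill=114):
    """Scale an HxWxC frame to fit in a size x size canvas (aspect kept), then pad."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbour resize via index lookup (illustration only)
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.full((size, size, img.shape[2]), fill, dtype=img.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # 1920x1080 source frame
square = letterbox(frame)                           # 1280x1280, aspect ratio preserved
```

Compared with a plain cv2.resize to 1280x1280, this keeps the geometry of the detections intact at the cost of some padded pixels.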
### Additional
_No response_ | closed | 2023-12-18T12:30:53Z | 2024-10-20T19:34:45Z | https://github.com/ultralytics/yolov5/issues/12522 | [
"question"
] | harunalperentoktas | 6 |
blb-ventures/strawberry-django-plus | graphql | 14 | Error when using relay module without Django | I'm interested in using the `relay` module in a non-Django project (keeping my eyes on how https://github.com/strawberry-graphql/strawberry/issues/1573 progresses).
I copied the `relay` and `aio` modules out of this package and tried running your example on https://github.com/strawberry-graphql/strawberry/issues/157.
```python
# schema.py
from typing import Iterable, Optional
import strawberry
from strawberry.types.info import Info
from .utils import relay
fruits = [
{
"id": 1,
"name": "Banana",
"description": "Lorem ipsum",
},
{
"id": 2,
"name": "Apple",
"description": None,
},
{
"id": 3,
"name": "Orange",
"description": "Lorem ipsum",
},
]
@strawberry.type
class Fruit(relay.Node):
name: str
description: Optional[str]
@classmethod
def resolve_node(
cls,
node_id: str,
*,
info: Optional[Info] = None,
required: bool = False,
):
for fruit in fruits:
if str(fruit["id"]) == node_id:
return Fruit(**fruit)
if required:
raise ValueError(f"Fruit by id {node_id} not found.")
return None
@classmethod
def resolve_nodes(
cls,
*,
info: Optional[Info] = None,
node_ids: Optional[Iterable[str]] = None,
):
node_ids = node_ids and set(node_ids)
for fruit in fruits:
if node_ids is not None and str(fruit["id"]) not in node_ids:
continue
yield Fruit(**fruit)
@strawberry.type
class Query:
fruit: Fruit = relay.node()
fruits_conn: relay.Connection[Fruit] = relay.connection()
@relay.connection
def fruits_conn_with_filter(self, name_startswith: str) -> Iterable[Fruit]:
for fruit in fruits:
if fruit["name"].startswith(name_startswith):
yield Fruit(**fruit)
@strawberry.type
class Mutation:
@relay.input_mutation
def create_fruit(self, name: str, description: Optional[str]) -> Fruit:
fruit_data = {
"id": max(f["id"] for f in fruits) + 1,
"name": name,
"description": description,
}
fruits.append(fruit_data)
return Fruit(**fruit_data)
schema = strawberry.Schema(query=Query, mutation=Mutation)
```
```
strawberry server schema
```
When I issue the following query for `Fruit` 1, I get an error:
```graphql
{
fruit(id: "RnJ1aXQ6MQ==") {
id
name
}
}
```
```
File "./utils/relay.py", line 783, in get_result
return gid.resolve_node(info)
File "./utils/relay.py", line 293, in resolve_node
node = n_type.resolve_node(
File "./schema.py", line 42, in resolve_node
return Fruit(**fruit)
TypeError: __init__() got an unexpected keyword argument 'id'
```
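One workaround for the `TypeError` is to drop the reserved key before constructing the object, since `id` is supplied by the relay `Node` machinery rather than by `__init__`. A sketch of the pattern, with a plain dataclass standing in for the strawberry type so it is self-contained:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fruit:  # stand-in for the strawberry/relay type; it takes no `id` init argument
    name: str
    description: Optional[str]

fruits = [{"id": 1, "name": "Banana", "description": "Lorem ipsum"}]

def resolve_node(node_id: str) -> Optional[Fruit]:
    for fruit in fruits:
        if str(fruit["id"]) == node_id:
            data = {k: v for k, v in fruit.items() if k != "id"}  # drop the reserved key
            return Fruit(**data)
    return None

banana = resolve_node("1")
```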
I'm unsure how this is supposed to work, given that [`id` is a classmethod](https://github.com/blb-ventures/strawberry-django-plus/blob/bd00e68e2daa574f7a5c9c984a0f3d0302ed5ae7/strawberry_django_plus/relay.py#L343-L364). Can I get a hand debugging this please? | closed | 2022-02-23T23:35:55Z | 2022-02-26T18:36:34Z | https://github.com/blb-ventures/strawberry-django-plus/issues/14 | [] | sloria | 3 |
dask/dask | numpy | 10,931 | combine_first: conditional type-cast to rhs's dtype | - XREF https://github.com/pandas-dev/pandas/pull/52532
`test_combine_first` is flaky. This is because the test randomly generates the test data, and if a chunk ends up being full of NaNs it will improperly trigger a FutureWarning in pandas that should only trigger when the whole Series is full of NaNs.
Reproducer linked below.
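As a self-contained illustration (not the linked reproducer): dask evaluates `combine_first` per partition, so a partition that happens to be all-NaN looks like an all-NaN Series to pandas even though the full series is not.

```python
import numpy as np
import pandas as pd

lhs = pd.Series([np.nan, np.nan, 1.0, 2.0])
rhs = pd.Series([10.0, 20.0, 30.0, 40.0])

# whole-series result
full = lhs.combine_first(rhs)

# chunked evaluation, as dask would do with two partitions; lhs[:2] is
# all-NaN on its own, which is what can mis-trigger the FutureWarning
per_chunk = pd.concat([lhs[:2].combine_first(rhs[:2]),
                       lhs[2:].combine_first(rhs[2:])])
```

The values agree either way; only the warning behaviour differs between whole-series and per-chunk evaluation.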
Additionally, if lhs is of dtype='datetime64[s]' and full of NaTs, while rhs has dtype='datetime64[ns]', the output dtype is rhs's dtype in pandas, but lhs's dtype in dask. This can't be fixed due to the dtype being eager in dask.
| open | 2024-02-16T12:34:12Z | 2024-02-16T16:30:06Z | https://github.com/dask/dask/issues/10931 | [
"dataframe",
"p3"
] | crusaderky | 6 |
mouredev/Hello-Python | fastapi | 568 | Why online gambling accounts get blacklisted, and solutions | Fund-recovery consulting WeChat: zdn200 Telegram: @lc15688
Remember: once you have won money, any excuse for not letting you withdraw basically means you have been blacklisted.
If any of the following happens to you, it means you have already been blacklisted: ↓ ↓
[1] Some of your account's functions are restricted! Withdrawal and deposit channels are closed, or you are asked to top up to unlock the withdrawal channel, and so on!
[2] Customer service offers excuses such as system maintenance or risk-control review, anything to keep you from withdrawing!
[Blacklisted by an online gambling site] [Won but the platform refuses to pay out] [System update] [Withdrawal failed] [Abnormal bet records] [Network fluctuations] [Submission failed]
[Single bet not settled] [Single bet not updated] [Withdrawal channel under maintenance] [Asked to wager double turnover] [Asked to deposit an equal amount]
The latest ways to deal with online gambling platforms that refuse to pay out winnings under various excuses
Remember: once you have won money, any excuse for not letting you withdraw basically means you have been blacklisted.
First: withdrawal requests are rejected with all kinds of excuses, such as being told to wager turnover multiples, etc.!
Second: some of your account's functions are restricted! Withdrawal and deposit channels are closed, or you must top up to unlock withdrawals!
Third: customer service cites system maintenance, risk-control review and other excuses, and simply will not let you withdraw!
Fourth: you have confirmed you are blacklisted; what should you do? (Find a professional team to help you recover the loss; no upfront fee is charged if the withdrawal fails)
Fifth: stay calm and do not argue with customer service, to keep the account from being frozen.
Sixth: keep customer service at ease, so the platform believes you are still playing normally.
Seventh: string customer service along, casually hint at your financial means, and play dumb where appropriate.
Eighth: as long as you can still log in and convert balances, leave the rest to us and we will keep your losses to a minimum.
Deeply experienced team (an 8-year-old team); the team commands some of the latest payout techniques that can help you.
As long as the account can still log in normally, our team is 80% confident of getting your money out. (Note: our team charges only after the payout succeeds; if anyone asks you to pay first, do not be scammed again. Honest cooperation!)
If you must gamble online, play only on real, physical platforms where funds are protected. Anyone promising guaranteed wins or dangling bonuses is running a pig-butchering or fly-by-night site, and having your funds and account blacklisted or frozen is only a matter of time. #StayAwayFromGambling
| closed | 2025-03-21T05:46:55Z | 2025-03-21T08:03:57Z | https://github.com/mouredev/Hello-Python/issues/568 | [] | zdn200dali | 0 |
jina-ai/clip-as-service | pytorch | 494 | Please add option to return tokenized text via http route /encode or a dedicated route /tokenize | Please add option to return tokenized text via http route /encode or a dedicated route /tokenize
[x] Are you running the latest `bert-as-service`?
[x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
[x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
[x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
A review of the Flask route for /encode indicates that it only passes a single possible parameter to the actual encode process; no option is parsed to indicate that the caller wants the tokenized version of the encoded text returned.
**System information**
docker
custom entry_point.sh as:
```bash
#!/bin/sh
bert-serving-start -num_worker=$1 -model_dir /model -show_tokens_to_client -max_seq_len NONE -http_port 8080
```
running as:
```bash
PATH_MODEL="/opt/docker/bert-as-service-models/multi_cased_L-12_H-768_A-12"
NUM_WORKER=1
docker run --gpus=all --restart always --network=lan0 --ip="$IP" --name "bert-as-service" -di -v "$PATH_MODEL:/model" -t "$TAG" "$NUM_WORKER"
``` | open | 2019-12-19T18:49:57Z | 2019-12-19T18:52:36Z | https://github.com/jina-ai/clip-as-service/issues/494 | [] | michael-newsrx | 0 |
jazzband/django-oauth-toolkit | django | 976 | Ping Federate and Django Oauth Toolkit | <!-- What is your question? -->
Hello, I am trying to get the Django Oauth toolkit to work with my company's Ping Federate basic client credentials, but I am so lost. I want my users to sign in through our corporate sign in, and then grab their username and then take credentials from our admin.contrib
Am I in the totally wrong package? | closed | 2021-05-04T21:18:28Z | 2022-07-25T13:12:32Z | https://github.com/jazzband/django-oauth-toolkit/issues/976 | [
"question"
] | TimothyMalahy | 5 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,054 | Not all resources support annotations configuration | ### Proposed change
Introduce `annotations` in Hub & CHP Deployment
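A sketch of what the proposed configuration could look like in values.yaml (the key names and nesting are assumptions for illustration, not the chart's actual schema):

```yaml
# hypothetical values.yaml fragment
hub:
  annotations:
    example.com/owner: data-platform
proxy:
  chp:
    annotations:
      example.com/owner: data-platform
```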
### Alternative options
Introduce a global `extraAnnotations` for all possible generated resources
### Who would use this feature?
Would be useful for different k8s management solutions that use annotations for searching resources. | closed | 2023-03-09T11:20:39Z | 2023-03-12T13:16:33Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3054 | [
"duplicate"
] | dev-dsp | 4 |
hatchet-dev/hatchet | fastapi | 785 | docs: timeouts page should say scheduling timeouts are for workflows, not steps | See https://github.com/hatchet-dev/hatchet/blame/69a7bc3b7553b324a51f72abb71918ef17b28d5c/frontend/docs/pages/home/features/timeouts.mdx#L9 | closed | 2024-08-14T12:58:29Z | 2025-02-15T00:20:08Z | https://github.com/hatchet-dev/hatchet/issues/785 | [] | wodow | 0 |
Nemo2011/bilibili-api | api | 597 | Incorrect function name | https://github.com/Nemo2011/bilibili-api/blob/11c33b16003f3111684421fb9ad4cbea5833ef18/bilibili_api/hot.py#L31
https://github.com/Nemo2011/bilibili-api/blob/11c33b16003f3111684421fb9ad4cbea5833ef18/bilibili_api/rank.py#L235
`weakly` should be renamed to `weekly` | closed | 2023-12-15T20:06:58Z | 2023-12-15T23:35:30Z | https://github.com/Nemo2011/bilibili-api/issues/597 | [
"bug"
] | kaixinol | 1 |
keras-team/keras | tensorflow | 20,143 | TensorBoard Callback never updates internal step counter | Using the TensorBoard callback to generate time series data is currently not possible, since the internal `self._train_step` counter is never updated. Instead, looking at the time series tab only ever yields a single data point instead of a graph. This is caused by the fact that every call to `self.summary.scalar(...)` is given the same step value.
The simplest fix that comes to my mind is adding this line to `keras.callbacks.TensorBoard.on_train_batch_end()`, but it is likely that a better fix exists that I am not aware of:
```py3
def on_train_batch_end(...):
...
self._train_step += 1
```
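Until such a fix lands upstream, the same change can be applied from user code by subclassing the callback. A self-contained sketch (a stub stands in for `keras.callbacks.TensorBoard` here, and relying on the private `_train_step` attribute is illustrative, not a supported API):

```python
class TensorBoardStub:
    """Stand-in for keras.callbacks.TensorBoard: it holds the step counter
    but, like the real class, never advances it."""
    def __init__(self):
        self._train_step = 0
    def on_train_batch_end(self, batch, logs=None):
        pass  # the real class writes scalars here with step=self._train_step

class SteppedTensorBoard(TensorBoardStub):
    def on_train_batch_end(self, batch, logs=None):
        super().on_train_batch_end(batch, logs)
        self._train_step += 1  # proposed fix: advance the step once per batch

cb = SteppedTensorBoard()
for batch in range(3):
    cb.on_train_batch_end(batch)
```

With the counter advancing, each scalar logged per batch gets a distinct step, so the time series tab shows a curve instead of a single point.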
Below, you may find what this fix looks like, running multiple steps (despite what the screenshot says, multiple steps are logged).
Without the fix:

With the fix:

| open | 2024-08-21T15:26:20Z | 2024-11-26T16:42:40Z | https://github.com/keras-team/keras/issues/20143 | [
"stat:awaiting keras-eng",
"type:Bug"
] | LarsKue | 5 |
apache/airflow | machine-learning | 47,778 | [Regression]Missing Asset Alias dependency graph | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
In the current UI implementation, there is no way to see Asset Alias dependencies with DAGs. In Airflow 2.10.5 we were able to see them in the dependency graph.
**AF2**
<img width="694" alt="Image" src="https://github.com/user-attachments/assets/b082371e-615c-417f-bd92-48132efe8030" />
Now in AF3 there is only the DAG graph where we can see Alias info but that will not show dependencies with other DAG
<img width="759" alt="Image" src="https://github.com/user-attachments/assets/723a2d45-0ce2-4a1e-85f0-4f44724ffcfd" />
### What you think should happen instead?
_No response_
### How to reproduce
Add the DAG below; as per it, the dependency should be `example_dataset_alias_mapped` (DAG) --> `alias-dataset-1` (Alias) --> `downstream_alias` (DAG). In the current UI there is no way to see this dependency, as we were able to in AF2.
```
"""
###
"""
from airflow.decorators import dag, task
from airflow.datasets import Dataset, DatasetAlias
from airflow.datasets.metadata import Metadata
from pendulum import datetime
my_alias_name = "alias-dataset-1"
@dag(
dag_display_name="example_dataset_alias_mapped",
start_date=datetime(2024, 8, 1),
schedule=None,
catchup=False,
tags=["datasets"],
)
def dataset_alias_dynamic_test():
@task
def upstream_task():
return ["a", "b"]
@task(outlets=[DatasetAlias(my_alias_name)])
def use_metadata(name):
yield Metadata(
Dataset(name),
alias=my_alias_name,
extra={} # extra is NOT optional
)
use_metadata.expand(name=upstream_task())
dataset_alias_dynamic_test()
@dag(
start_date=datetime(2024, 8, 1),
schedule=[DatasetAlias(my_alias_name)],
catchup=False,
tags=["dataset"]
)
def downstream_alias():
@task
def t1():
return 0
t1()
downstream_alias()
```
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-14T10:47:15Z | 2025-03-18T22:33:26Z | https://github.com/apache/airflow/issues/47778 | [
"kind:bug",
"priority:high",
"area:core",
"area:UI",
"area:datasets",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 3 |
computationalmodelling/nbval | pytest | 180 | new pytest version throws deprecation warning | Earlier today, pytest v7.0.0 was released.
With this new pytest version installed, running `pytest --nbval my_notebook.ipynb` throws a deprecation warning:
```text
E pytest.PytestRemovedIn8Warning: The (fspath: py.path.local) argument to IPyNbFile is deprecated. Please use the (path: pathlib.Path) argument instead.
E See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path
```
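The migration the warning asks for is mechanical: receive a `pathlib.Path` and forward it via `path=` instead of the deprecated `fspath=`. A sketch of the shape the hook would take (a stub replaces nbval's `IPyNbFile` so the snippet is self-contained; the `file_path` parameter name follows pytest 7's hook convention):

```python
from pathlib import Path

class IPyNbFileStub:
    # stand-in for nbval's IPyNbFile collector
    @classmethod
    def from_parent(cls, parent, *, path: Path):
        obj = cls()
        obj.parent, obj.path = parent, path
        return obj

def pytest_collect_file(file_path: Path, parent):
    # pytest >= 7: construct nodes with `path=` (pathlib.Path), not `fspath=`
    if file_path.suffix == ".ipynb":
        return IPyNbFileStub.from_parent(parent, path=file_path)
    return None

node = pytest_collect_file(Path("my_notebook.ipynb"), parent=None)
```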
Here is the full traceback for the warning:
```text
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/pluggy/_hooks.py:265: in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/pluggy/_manager.py:80: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/nbval/plugin.py:117: in pytest_collect_file
return IPyNbFile.from_parent(parent, fspath=path)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/_pytest/nodes.py:633: in from_parent
return super().from_parent(parent=parent, fspath=fspath, path=path, **kw)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/_pytest/nodes.py:264: in from_parent
return cls._create(parent=parent, **kw)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/_pytest/nodes.py:140: in _create
return super().__call__(*k, **kw)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/nbval/plugin.py:206: in __init__
super(IPyNbFile, self).__init__(*args, **kwargs)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/_pytest/nodes.py:588: in __init__
path = _imply_path(type(self), path, fspath=fspath)
.nox/test_jupyter_notebooks-3-9/lib/python3.9/site-packages/_pytest/nodes.py:110: in _imply_path
warnings.warn(
E pytest.PytestRemovedIn8Warning: The (fspath: py.path.local) argument to IPyNbFile is deprecated. Please use the (path: pathlib.Path) argument instead.
E See https://docs.pytest.org/en/latest/deprecations.html#fspath-argument-for-node-constructors-replaced-with-pathlib-path
``` | closed | 2022-02-04T20:39:18Z | 2023-02-17T22:26:02Z | https://github.com/computationalmodelling/nbval/issues/180 | [] | Jasha10 | 3 |
polakowo/vectorbt | data-visualization | 76 | portfolio.iloc crash if num_tests is too big | Hi,
I tested PortfolioOptimization.ipynb; it always crashes at `print(rb_portfolio.iloc[rb_best_asset_group].stats())`,
so I cloned it into PyCharm and got the error `Process finished with exit code -1073741819 (0xC0000005)`.
It crashes in `portfolio.base._indexing_func`:
```python
new_order_records = self._orders._col_idxs_records(col_idxs)
```
https://github.com/polakowo/vectorbt/blob/master/vectorbt/portfolio/base.py#L435
```python
new_records_arr = nb.record_col_map_select_nb(
self.values, self.col_mapper.col_map, to_1d(col_idxs))
```
https://github.com/polakowo/vectorbt/blob/master/vectorbt/records/base.py#L299
If it doesn't crash, increase num_tests and test again.
Do you have any advice on how to avoid this?
```python
import os
import numpy as np
import pandas as pd
import yfinance as yf
from datetime import datetime
# os.environ['NUMBA_DISABLE_JIT'] = '1' # uncomment this if you want to use pypfopt within simulation
from numba import njit, jit
import vectorbt as vbt
from vectorbt.generic.nb import nanmean_nb
from vectorbt.portfolio.nb import create_order_nb, auto_call_seq_ctx_nb
from vectorbt.portfolio.enums import SizeType, Direction
# Define params
assets = ['FB', 'AMZN', 'NFLX', 'GOOG', 'AAPL']
start_date = datetime(2017, 1, 1)
end_date = datetime.now()
num_tests = 2000
vbt.settings.returns['year_freq'] = '252 days'
# # Download data
# asset_price = pd.DataFrame({
# s: yf.Ticker(s).history(start=start_date, end=end_date)['Close']
# for s in assets
# }, columns=pd.Index(assets, name='asset'))
# print(asset_price.shape)
# asset_price.to_pickle('asset_price.pkl')
asset_price = pd.read_pickle('asset_price.pkl')
np.random.seed(42)
# Generate random weights, n times
weights = []
for i in range(num_tests):
w = np.random.random_sample(len(assets))
w = w / np.sum(w)
weights.append(w)
print(len(weights))
_asset_price = asset_price.vbt.tile(num_tests, keys=pd.Index(np.arange(num_tests), name='asset_group'))
_asset_price = _asset_price.vbt.stack_index(pd.Index(np.concatenate(weights), name='weights'))
print(_asset_price.columns)
# Select the first index of each month
rb_mask = ~_asset_price.index.to_period('m').duplicated()
print(rb_mask.sum())
rb_size = np.full_like(_asset_price, np.nan)
rb_size[rb_mask, :] = np.concatenate(weights) # allocate at mask
print(rb_size.shape)
rb_portfolio = vbt.Portfolio.from_orders(
close=_asset_price,
size=rb_size,
size_type='targetpercent',
group_by='asset_group',
cash_sharing=True,
call_seq='auto', # important: sell before buy
freq='D',
incl_unrealized=True
)
print(len(rb_portfolio.orders()))
rb_portfolio.iloc[0]
```
| closed | 2020-12-31T01:26:28Z | 2021-01-02T10:48:10Z | https://github.com/polakowo/vectorbt/issues/76 | [] | wukan1986 | 10 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 842 | SV-EER calculation | Hi,
I'm trying to do the evaluation on test datasets. How can I calculate SV-EER for the synthesizer? I did the following steps; correct me if I'm wrong.
- run the synthesizer on test data to get mel spectrograms
- invert the mel spectrograms to get wav data
- get embeds for the wav data
- calculate similarity matrix and EER
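For the last step, a minimal numpy sketch of an EER computation over verification trials (the scores and labels below are illustrative; the EER is read off where the false-acceptance and false-rejection rates cross):

```python
import numpy as np

def compute_eer(scores, labels):
    """scores: similarity per trial; labels: 1 = same speaker, 0 = different."""
    order = np.argsort(scores)[::-1]          # sweep the threshold from high to low
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                    # true accepts at each threshold
    fp = np.cumsum(1 - labels)                # false accepts at each threshold
    fnr = 1 - tp / labels.sum()               # false rejection rate
    fpr = fp / (len(labels) - labels.sum())   # false acceptance rate
    i = np.argmin(np.abs(fnr - fpr))          # closest crossing point
    return (fnr[i] + fpr[i]) / 2

scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
eer = compute_eer(scores, labels)             # 1/3 for these toy trials
```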
Thanks,
Srinivas. | closed | 2021-09-09T08:44:12Z | 2021-09-14T20:42:00Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/842 | [] | cnu1439 | 1 |
keras-team/autokeras | tensorflow | 1,650 | Big CSV files and reducing RAM requirement | I have about 80-160 GB CSV files (2000 - 10 000 features, including strings, floats and ints) that I am trying to run with AutoKeras. However, after .fit() I first see RAM get filled (128 GB) and then the swap file get used (1 TB). Finally, it crashes. RTX 3090, 128 GB RAM, 1 TB swap.
What are the ways to reduce RAM size?
- I would be okay with smaller data types, like float16, for numberic fields, but I would like to avoid explicitly specifying types for 10 000 columns
- autokeras fit() does not support dask's datasets
- tf.data.experimental.make_csv_dataset seems to work but it requires all data to be in a single datatype (otherwise, getting an error from tf.pack). I am able to use it with one dataset by ignoring all non-numeric data. It seems to have low RAM usage, but it is still processing before start of GPU training (=I can't yet confirm it works).
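On the first point, numeric columns can be downcast without listing thousands of names by selecting them by dtype while reading in chunks. A pandas sketch (the column names and the in-memory CSV are illustrative stand-ins for the big file):

```python
import io
import pandas as pd

csv = io.StringIO("f0,f1,label\n1.5,2.0,0\n3.25,4.0,1\n")  # stands in for the big file

chunks = []
for chunk in pd.read_csv(csv, chunksize=1):
    numeric = chunk.select_dtypes("number").columns
    chunk[numeric] = chunk[numeric].astype("float16")  # shrink the per-column footprint
    chunks.append(chunk)

df = pd.concat(chunks, ignore_index=True)
```

This shrinks the frame before fit() is called, though it does not prevent any copies AutoKeras makes internally.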
| open | 2021-11-21T10:54:45Z | 2021-11-26T19:03:31Z | https://github.com/keras-team/autokeras/issues/1650 | [] | torronen | 2 |
mwaskom/seaborn | data-science | 3,339 | Parameter `layout_rect` for PairGrid() | In some situations axis labels disappear outside the figure when using PairGrid().
Like this:

I think, this could be fixed by adding a parameter `layout_rect` (in the fashion of the existing parameter `layout_pad`) to the constructor, which is then passed on as `rect` parameter to `tight_layout`. | closed | 2023-04-24T17:04:13Z | 2023-04-25T10:54:08Z | https://github.com/mwaskom/seaborn/issues/3339 | [] | leoluecken | 2 |
pyeve/eve | flask | 621 | Cannot set auth_field | Hello @nicolaiarocci ,
Thank you so much for creating this project. I'm having a blast tinkering with it.
However I'm having an issue regarding the `auth_field`
I've created a `users` collection, and I want my users to be able to update their email, but only their account. However the only author reference I have in that collection is the `_id` object, but I can't get it to work. Python returns the following error:
`eve.exceptions.ConfigException: "accounts": auth_field cannot be set to ID_FIELD (_id)`
Here my code:
```
schema = {
'username': {
'type': 'string',
'required': True,
'maxlength': 50,
'unique': True,
},
'email': {
'type': 'email',
'required': True
},
'password': {
'type': 'string',
'required': True,
},
....
}
domain = {
'auth_field': '_id',
'extra_response_fields': ['token'],
'authentication': auth.Token,
'resource_methods': ['POST'],
'item_methods': ['GET', 'DELETE'],
'schema': schema,
}
```
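For reference, the restriction can be sidestepped by keeping the owner reference in a dedicated field instead of reusing the primary key. A sketch of the idea (the `user_id` field, and whatever hook would populate it on insert, are hypothetical illustrations, not Eve specifics):

```python
schema = {
    'username': {'type': 'string', 'required': True, 'unique': True},
    'email': {'type': 'email', 'required': True},
    # hypothetical owner field, set server-side when the account is created
    'user_id': {'type': 'objectid'},
}

domain = {
    'auth_field': 'user_id',   # any field other than _id passes the config check
    'resource_methods': ['POST'],
    'item_methods': ['GET', 'PATCH', 'DELETE'],
    'schema': schema,
}
```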
Am I missing anything ?
Thanks,
| closed | 2015-05-06T16:59:17Z | 2015-05-07T08:21:48Z | https://github.com/pyeve/eve/issues/621 | [] | hbarroso | 3 |
kornia/kornia | computer-vision | 2,910 | AttributeError: 'list' object has no attribute 'ndim' with RandomTransplantation and batch with keys | ### Describe the bug
I can run:
```python
import torch
import kornia.augmentation as K
from kornia.augmentation.container import AugmentationSequential
# Example batch with correct shapes
batch = {
"image": torch.randn(10, 3, 64, 64), # [N, C, H, W]
"mask": torch.randint(0, 2, (10, 64, 64)) # [N, H, W]
}
aug = AugmentationSequential(
K.RandomTransplantation(p=1.0, excluded_labels=[0]),
data_keys=None
)
augmented_batch = aug(batch)
```
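One thing worth checking when a hand-built batch works but the datamodule's does not: every value reaching the pipeline must expose `.ndim`, so a collate step, or an extra dict key such as file names, that leaves a plain Python list in the batch reproduces the `AttributeError` in the title. A speculative sketch, with numpy standing in for torch:

```python
import numpy as np

batch = {
    "image": np.zeros((2, 3, 8, 8)),
    "mask": np.zeros((2, 8, 8)),
    "filename": ["a.png", "b.png"],   # extra non-array key: lists have no .ndim
}

# drop (or stack) anything that is not array-like before augmentation
clean = {k: v for k, v in batch.items() if hasattr(v, "ndim")}
```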
However when I use my own datamodule, I receive an error:
```python
datamodule.setup(stage="fit")
aug = AugmentationSequential(
ka.RandomTransplantation(p=1., excluded_labels=[0]),
data_keys = None,
)
train_dataloader = datamodule.train_dataloader()
batch = next(iter(train_dataloader))
# where batch["image"].shape, batch["mask"].shape == (torch.Size([10, 3, 64, 64]), torch.Size([10, 64, 64]))
augmented_batch = aug(batch)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 augmented_batch = aug(batch)
File /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
   1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525 or _global_backward_pre_hooks or _global_backward_hooks
   1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
   1529 try:
   1530 result = None
File /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/container/augment.py:421, in AugmentationSequential.forward(self, params, data_keys, *args)
   419 for param in params:
   420 module = self.get_submodule(param.name)
--> 421 outputs = self.transform_op.transform( # type: ignore
   422 *outputs, module=module, param=param, extra_args=self.extra_args
   423 )
   424 if not isinstance(outputs, (list, tuple)):
   425 # Make sure we are unpacking a list whilst post-proc
   426 outputs = [outputs]
File /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/container/ops.py:120, in AugmentationSequentialOps.transform(self, module, param, extra_args, data_keys, *arg)
   114 _data_keys = self.preproc_datakeys(data_keys)
   116 if isinstance(module, K.RandomTransplantation):
   117 # For transforms which require the full input to calculate the parameters (e.g. RandomTransplantation)
   118 param = ParamItem(
   119 name=param.name,
--> 120 data=module.params_from_input(
   121 *arg, # type: ignore[arg-type]
   122 data_keys=_data_keys,
   123 params=param.data, # type: ignore[arg-type]
   124 extra_args=extra_args,
   125 ),
   126 )
   128 outputs = []
   129 for inp, dcate in zip(arg, _data_keys):
File /home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:219, in RandomTransplantation.params_from_input(self, data_keys, params, extra_args, *input)
   216 for _input, key in zip(input, data_keys):
   217 if key == DataKey.INPUT:
   218 KORNIA_CHECK(
--> 219 _input.ndim == mask.ndim + 1,
[220](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:220) "Every image input must have one additional dimension (channel dimension) than the segmentation "
[221](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:221) f"mask, but got {_input.ndim} for the input image and {mask.ndim} for the segmentation mask.",
[222](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:222) )
[223](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:223) KORNIA_CHECK(
[224](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:224) mask.size() == torch.Size([s for i, s in enumerate(_input.size()) if i != self._channel_dim]),
[225](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:225) "The dimensions of the input image and segmentation mask must match except for the channel "
[226](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:226) f"dimension, but got {_input.size()} for the input image and {mask.size()} for the segmentation "
[227](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:227) "mask.",
[228](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:228) )
[230](https://vscode-remote+vscode-002d01hkygqybbyxdgxvrbmrk03kmn-002estudio-002elightning-002eai.vscode-resource.vscode-cdn.net/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/transplantation.py:230) if "acceptor_indices" not in params:
AttributeError: 'list' object has no attribute 'ndim'
```
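The error at the bottom of the traceback is a type mismatch rather than a shape mismatch: the check reaches `.ndim` while one of the operands (`_input` or `mask`) is still a plain Python list. A minimal stand-in sketch (numpy arrays standing in for torch tensors; this is not kornia's actual code) reproduces the failure mode and the usual remedy of stacking into a single batched array:

```python
import numpy as np

# Hypothetical illustration: the failing KORNIA_CHECK assumes tensor-like
# operands with an .ndim attribute. A plain Python list of arrays has no
# .ndim, reproducing the AttributeError from the traceback.
mask = [np.zeros((4, 4)), np.zeros((4, 4))]  # e.g. one mask per image

try:
    _ = mask.ndim  # same failure mode as in the traceback
    message = None
except AttributeError as exc:
    message = str(exc)

# Stacking the list into a single batched array restores the expected interface:
mask = np.stack(mask)
print(message)    # 'list' object has no attribute 'ndim'
print(mask.ndim)  # 3
```

If the augmentation container receives per-sample masks as a list, converting them to one batched tensor before the call avoids this code path.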
### Reproduction steps
```bash
As shown
```
### Expected behavior
As shown
### Environment
```shell
⚡ ~ python collect_env.py
Collecting environment information...
PyTorch version: 2.1.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1058-aws-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2799.998
BogoMIPS: 5599.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 8 MiB
L3 cache: 64 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] pytorch-lightning==2.1.2
[pip3] segmentation-models-pytorch==0.3.3
[pip3] torch==2.1.1+cu121
[pip3] torchaudio==2.1.1+cu121
[pip3] torchgeo==0.6.0.dev0
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.16.1+cu121
[pip3] triton==2.1.0
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 1.26.2 pypi_0 pypi
[conda] pytorch-lightning 2.1.2 pypi_0 pypi
[conda] segmentation-models-pytorch 0.3.3 pypi_0 pypi
[conda] torch 2.1.1+cu121 pypi_0 pypi
[conda] torchaudio 2.1.1+cu121 pypi_0 pypi
[conda] torchgeo 0.6.0.dev0 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchvision 0.16.1+cu121 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
```
### Additional context
_No response_ | closed | 2024-05-16T14:03:08Z | 2024-05-17T13:50:50Z | https://github.com/kornia/kornia/issues/2910 | ["help wanted"] | robmarkcole | 2 |
nltk/nltk | nlp | 2,371 | Replace nltk.decorators.py with decorator library | From @Copper-Head,
Can we get rid of `decorators.py` by using the [decorator](https://pypi.org/project/decorator/) library? | open | 2019-08-19T01:56:13Z | 2019-08-19T01:56:34Z | https://github.com/nltk/nltk/issues/2371 | ["nice idea", "pythonic"] | alvations | 0 |
deezer/spleeter | tensorflow | 868 | Nevermind, delete the issue | Nevermind, delete the issue | closed | 2023-08-25T02:04:30Z | 2023-08-26T21:54:32Z | https://github.com/deezer/spleeter/issues/868 | [
"question"
] | otro678 | 0 |
browser-use/browser-use | python | 875 | step endless loop | ### Bug Description
With gpt-4o the agent always gets stuck in an endless loop of steps, but with Qwen2.5-72B-Instruct it succeeds.

### Reproduction Steps
version:
python:3.12.9
browser-use:0.1.40
langchain-openai:0.3.1
langchain-core:0.3.37
openai:1.64.0
playwright:1.50.0
### Code Sample
```python
from langchain_openai import ChatOpenAI
from browser_use import Agent
from dotenv import load_dotenv
import os
import asyncio

load_dotenv()

async def main():
    api_key = os.getenv('OPENAI_API_KEY')
    base_url = os.getenv('OPENAI_BASE_URL')
    llm = ChatOpenAI(model='gpt-4o', api_key=api_key, base_url=base_url)
    agent = Agent(
        # task: fetch the top five Baidu trending searches from https://www.baidu.com
        task="获取https://www.baidu.com页面的百度热搜前五条",
        llm=llm,
        use_vision=False
    )
    result = await agent.run(max_steps=20)
    print(result)

asyncio.run(main())
### Version
0.1.40
### LLM Model
GPT-4o
### Operating System
macOS 13.4.1
### Relevant Log Output
```shell
``` | open | 2025-02-26T06:45:32Z | 2025-02-26T10:12:59Z | https://github.com/browser-use/browser-use/issues/875 | ["bug"] | shukuang-1 | 4 |
MaartenGr/BERTopic | nlp | 1,219 | Using topics as a predictor in regression or classification | Hi!
I am using BERTopic to 'cluster' descriptions of online orders. These descriptions range from a single word ('shoes') to lengthier phrases ('5 boxes of local herbal tea'). I am wondering whether I can use the output of BERTopic (the topic assignments) as a feature/predictor in a classification or regression model.
Are there any caveats to using an unsupervised technique within a supervised model?
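One common pattern for the question above is to treat the topic assignment as a categorical feature. A hypothetical sketch in plain numpy, where `topics` stands in for the list returned by `topic_model.fit_transform(docs)`:

```python
import numpy as np

# topics stands in for BERTopic's per-document topic ids; -1 is its outlier topic.
topics = np.array([0, 1, -1, 0, 2, 1])

# One-hot encode the topic id so it can sit next to other predictors.
categories = np.unique(topics)  # [-1, 0, 1, 2]
X_topic = (topics[:, None] == categories[None, :]).astype(float)

print(X_topic.shape)  # (6, 4): one column per topic, one row per document
# X = np.hstack([X_topic, other_features])  # combine with other predictors
```

Two caveats worth keeping in mind: fit the topic model on the training split only (assigning topics to held-out documents with `transform`) to avoid leakage, and remember that topic ids are not stable across refits, so refitting BERTopic invalidates a downstream model trained on the old encoding.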
Thank you in advance. | closed | 2023-04-27T17:55:30Z | 2023-05-23T08:27:42Z | https://github.com/MaartenGr/BERTopic/issues/1219 | [] | corabola | 2 |
flasgger/flasgger | rest-api | 35 | Change "no content" response from 204 status | Hi,
Is it possible to change the response body when the code is 204?
The only way I found was to change the JavaScript code.
| closed | 2016-10-14T19:27:35Z | 2017-03-24T20:19:49Z | https://github.com/flasgger/flasgger/issues/35 | [
"enhancement",
"help wanted"
] | andryw | 1 |
pytest-dev/pytest-selenium | pytest | 32 | Selenium plugin listed twice | Hi,
When using pytest-selenium and running my test, I see this at the start:
``` sh
platform darwin -- Python 3.4.3 -- py-1.4.30 -- pytest-2.7.2
rootdir: /bokeh/bokeh, inifile: setup.cfg
plugins: html, selenium, selenium, variables
collected 410 items
```
I think it's disconcerting that the plugins line reads: `plugins: html, selenium, selenium, variables`
I think it comes from having the separate `safety.py` file.
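One way a single distribution shows up twice in pytest's `plugins:` line is by registering two `pytest11` entry points, one per module. A hypothetical packaging excerpt (not the project's actual metadata) illustrating that shape:

```ini
; Hypothetical setup.cfg excerpt: two modules registered under the
; pytest11 entry-point group, so pytest reports the plugin twice.
[options.entry_points]
pytest11 =
    selenium = pytest_selenium.pytest_selenium
    selenium_safety = pytest_selenium.safety
```

If `safety.py` only needs to attach hooks, importing and registering it from the main plugin module would avoid the second entry point.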
| closed | 2015-09-10T23:39:05Z | 2015-09-11T20:54:37Z | https://github.com/pytest-dev/pytest-selenium/issues/32 | [] | birdsarah | 3 |
matterport/Mask_RCNN | tensorflow | 2,389 | Problem loading model using keras.models.load_model() | Hi. I am trying to make a program using the already pre-trained model mask_rcnn_balloons.h5, but get a problem when loadning the file.
how I load the model:
path = "E:/Dokumenter/Ikt450/Assignments/7/models/mask_rcnn_balloon.h5"
model = keras.models.load_model(path)
error:
Traceback (most recent call last):
File "E:/Dokumenter/Ikt450/Assignments/7/main.py", line 9, in <module>
model = keras.models.load_model(path)
File "C:\Users\suvat\.virtualenvs\7-p3jOesU2\lib\site-packages\tensorflow\python\keras\saving\save.py", line 182, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "C:\Users\suvat\.virtualenvs\7-p3jOesU2\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 175, in load_model_from_hdf5
raise ValueError('No model found in config file.')
ValueError: No model found in config file. | open | 2020-10-13T07:14:32Z | 2020-10-13T07:14:32Z | https://github.com/matterport/Mask_RCNN/issues/2389 | [] | Mariussuv | 0 |
polarsource/polar | fastapi | 5,158 | Checkout: Store `attribution_id` & `utm_*` query params automatically in `metadata` | Allow setting attribution via query params for the checkout that we store as an `attribution` object on checkout and pass to the order(s) & subscription(s). Useful for affiliate partners and sellers in general to use in combination with built-in efforts, e.g checkout links.
Rough idea: We would support `attribution_id` along with `utm_*` query params that would be set on the `attribution` object (JSON)
And set the following keys automatically in metadata:
```
{
"attribution.id": "<?attribution_id>",
"attribution.utm_source": "<?utm_source>",
"attribution.utm_medium": "<?utm_medium>",
"attribution.utm_campaign": "<?utm_campaign>",
}
```
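A minimal sketch of the proposed mapping, honoring the rule that preset metadata keys take precedence over URL parameters. Function and variable names here are illustrative, not Polar's actual implementation:

```python
from urllib.parse import parse_qsl

ALLOWED_UTM = {"utm_source", "utm_medium", "utm_campaign"}

def apply_attribution(query_string: str, metadata: dict) -> dict:
    """Map attribution_id and utm_* query params onto checkout metadata.

    Preset metadata keys win: URL parameters never override them.
    """
    params = dict(parse_qsl(query_string))
    candidates = {}
    if "attribution_id" in params:
        candidates["attribution.id"] = params["attribution_id"]
    for key in ALLOWED_UTM:
        if key in params:
            candidates[f"attribution.{key}"] = params[key]
    # setdefault keeps any value the seller already set beforehand.
    for k, v in candidates.items():
        metadata.setdefault(k, v)
    return metadata

meta = apply_attribution(
    "attribution_id=aff_123&utm_source=newsletter",
    {"attribution.utm_source": "preset"},
)
print(meta["attribution.id"])          # aff_123
print(meta["attribution.utm_source"])  # preset (not overridden)
```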
In case such keys are set beforehand, the URL parameters should not override the preset values. | open | 2025-03-04T16:30:15Z | 2025-03-13T08:15:59Z | https://github.com/polarsource/polar/issues/5158 | ["feature", "changelog"] | birkjernstrom | 0 |
PokemonGoF/PokemonGo-Bot | automation | 5,824 | New Pokemon Hunter | ### Short Description
The sniping functionality doesn't really work in new API (0.45) because of the new speed limitations. I propose the following idea for sniping/hunting (rare) pokemons.
### Possible solution
Create a new task (modified MoveToMapPokemon/PokemonHunter?) with following functionality:
1. Get JSON data of pokemons from raw_data or similar JSON for limited area (personal/public PoGoMap). The prerequisite is that the bot is located in or near the same limited area.
2. Get distance to the nearest pokemon with highest priority from catch list.
3. Pause the bot / logout for the time that it takes to move from current location to the location of the target pokemon with average speed e.g. 80 km/h (straight line, speed configurable). Add a random additional time to the result, to simulate moving by car (configurable). If the total time exceeds e.g. 15 minutes (configurable) or the disappear time from the JSON, abort task.
4. After the time runs out, set location of the bot to be the same as with target pokemon.
5. Resume the bot / log in, look for the target pokemon. If it isn't visible immediately, walk around for a while (e.g. 1 minute, configurable). If it appears, try to catch it.
6. (Configurable) Either return to the saved previous location (with the pause/logout and configurable speed) or stay at the new location and continue running other tasks until hunting timeout runs out (if any) or until new pokemon from the list within reach appears in the JSON data.
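Steps 2 and 3 above can be sketched with standard formulas; the function and parameter names here are illustrative, not from the bot's codebase, and they assume the stated defaults (straight-line distance, 80 km/h average, random extra delay):

```python
import math
import random

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two coordinates (step 2)."""
    r = 6371.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def travel_pause_seconds(distance_km, avg_speed_kmh=80, max_extra_s=120):
    """Seconds to pause/log out the bot to simulate driving there (step 3)."""
    base = distance_km / avg_speed_kmh * 3600
    return base + random.uniform(0, max_extra_s)

# 80 km at 80 km/h: at least an hour of simulated travel, plus jitter.
pause = travel_pause_seconds(80)
print(3600 <= pause <= 3600 + 120)  # True
```

If `pause` exceeds the configured maximum (e.g. 15 minutes) or the pokemon's disappear time, the task would abort instead of teleporting.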
### How it would help others
Should improve chances to catch rare pokemons.
<!-- ==========END OF FEATURE REQUEST SECTION========== -->
| open | 2016-11-19T23:49:18Z | 2016-11-19T23:49:18Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5824 | [] | nucl3x | 0 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 424 | Solution for 'eval_loss' always being NaN during training | ### Pre-submission checklist
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; please also look for solutions in the corresponding projects.
### Issue type
Model training and fine-tuning
### Base model
None
### Operating system
Linux
### Detailed description
Many people seem to have hit eval_loss being NaN during SFT, and I ran into it myself over the past few days. Some attribute it to training hyperparameters or numerical precision, but eval_loss=nan still appears either way. After debugging, the tokenization step may be the culprit: https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/blob/0189e8be7706d2aceb90d238149d5fda6b6aea8d/scripts/training/build_dataset.py#L45
Here the tokenized input and target are concatenated directly and truncated if they exceed `max_seq_length`. But when the target is very short, it can be lost entirely (some targets in my data are only a single word). Such a sample's `labels` then become all `IGNORE_INDEX`, which destabilizes the training loss and makes eval_loss NaN. My change is to use `max_input_length` and `max_target_length` parameters instead of `max_seq_length` alone. Add these two lines before `input_ids = torch.LongTensor(s + t)[:max_seq_length]`:
```
if len(s) > max_input_length:
    s = s[:max_input_length]
```
With this, my training loss settled down and eval_loss is normal again. Posted here for reference.
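The truncation failure can be demonstrated in isolation. This is a hypothetical sketch with illustrative token ids, not the repository's actual `build_dataset.py`:

```python
IGNORE_INDEX = -100
max_seq_length = 8

# Hypothetical token ids: a long prompt and a one-token target.
s = [101, 5, 6, 7, 8, 9, 10, 11]  # input tokens (already 8 long)
t = [42]                          # target token

# Original behavior: concatenate, then truncate to max_seq_length.
input_ids = (s + t)[:max_seq_length]
labels = ([IGNORE_INDEX] * len(s) + t)[:max_seq_length]
# The target token 42 was truncated away, so every label is IGNORE_INDEX
# and this sample contributes no loss signal.
print(all(l == IGNORE_INDEX for l in labels))  # True

# Proposed fix: cap the input first so the target always survives.
max_input_length = max_seq_length - len(t)
if len(s) > max_input_length:
    s = s[:max_input_length]
input_ids = (s + t)[:max_seq_length]
labels = ([IGNORE_INDEX] * len(s) + t)[:max_seq_length]
print(labels[-1])  # 42: the target token is preserved
```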
### Dependencies (required for code-related issues)
```
# Paste your dependency information here (inside this code block)
```
### Run logs or screenshots
```
# Paste your run logs here (inside this code block)
``` | closed | 2023-11-27T03:08:05Z | 2024-04-22T14:23:49Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/424 | ["stale"] | 5663015 | 12 |
biolab/orange3 | pandas | 6,688 | FreeViz automatically runs when changing Gravity settings | FreeViz runs automatically when Gravity controls are changed in certain situations. I'm not sure if this is intended because every other setting is applied immediately, but the the fact that the optimization is stopped would suggest it should not start until I press Start. Either way, it still behaves in two different ways.
When it happens:
1. Click the Start button in FreeViz to **run it at least once**
2. Check or uncheck Gravity, or move the slider
When it does **not** happen:
- FreeViz has not been run yet
- When Initialization is changed
OS: Windows 10
Orange: 3.36.1 | open | 2024-01-01T16:06:22Z | 2024-01-05T09:09:48Z | https://github.com/biolab/orange3/issues/6688 | [
"wish",
"snack"
] | processo | 3 |
mlfoundations/open_clip | computer-vision | 674 | How can I decode the image feature to RGB-image | I want to use the image feature to do some downstream tasks (anomaly detection), could I decode the reconstructed feature to image like an autoencoder? I'm new to this and would really appreciate some simple guidance! | closed | 2023-10-15T12:38:02Z | 2023-10-21T21:34:18Z | https://github.com/mlfoundations/open_clip/issues/674 | [] | 1216537742 | 1 |