| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
recommenders-team/recommenders | data-science | 1,833 | [FEATURE] AzureML SDK v2 support | ### Description
AzureML SDK v2 went GA at Ignite 2022.
We saw that the code in [the examples](https://github.com/microsoft/recommenders/tree/main/examples) is based on SDK v1.
Are there any plans to update the examples to follow SDK v2?
### Expected behavior with the suggested feature
### Other Comments
| closed | 2022-10-25T02:20:07Z | 2024-05-06T14:49:21Z | https://github.com/recommenders-team/recommenders/issues/1833 | [
"enhancement"
] | shohei1029 | 2 |
unionai-oss/pandera | pandas | 871 | Dask DataFrame filter fails | **Describe the bug**
Dask DataFrames validated with `strict='filter'` do not drop extraneous columns.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
#### Code Sample
```python
import pandas as pd
import pandera as pa
import dask.dataframe as dd

df1 = pd.DataFrame([[1, 1], [3, 2], [5, 3]], columns=["col1", "col2"])
df2 = dd.from_pandas(df1, npartitions=1)

my_schema = pa.DataFrameSchema(
    {
        "col2": pa.Column(int),
    },
    strict="filter",
)

new_df1 = my_schema(df1)
new_df2 = my_schema(df2)
```
#### Expected behavior
DataFrames should be filtered such that `col2` remains and `col1` is dropped. The validated pandas DataFrame `new_df1` behaves as expected. However, the resulting Dask DataFrame `new_df2` retains both columns.
#### Additional context
Apologies if this falls under the wider net of #119. I am interpreting that issue as pertaining to more complex memory management problems. Thanks for your help. | open | 2022-06-01T14:35:04Z | 2022-06-01T14:35:43Z | https://github.com/unionai-oss/pandera/issues/871 | [
"bug"
] | gg314 | 0 |
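Until the underlying bug is fixed, a possible workaround (a sketch, assuming `my_schema.columns` exposes the schema's declared columns, as pandera's `DataFrameSchema` does) is to subset the Dask result explicitly:

```python
def filter_to_schema(frame_columns, schema_columns):
    """Mirror strict='filter': keep only schema columns, in frame order."""
    keep = set(schema_columns)
    return [c for c in frame_columns if c in keep]

# Hypothetical application to the report's Dask frame:
# new_df2 = my_schema(df2)[filter_to_schema(df2.columns, my_schema.columns)]
print(filter_to_schema(["col1", "col2"], ["col2"]))  # → ['col2']
```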
Kav-K/GPTDiscord | asyncio | 368 | Unknown interaction errors | Sometimes there are unknown interaction errors with a response in /gpt converse takes too long to return, or if the user deleted their original message it is responding to before it responds | closed | 2023-10-31T01:18:43Z | 2023-11-12T19:40:57Z | https://github.com/Kav-K/GPTDiscord/issues/368 | [
"bug",
"help wanted",
"good first issue",
"help-wanted-important"
] | Kav-K | 11 |
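For context, Discord invalidates an interaction token a few seconds after it fires, so any slow /gpt response has to acknowledge the interaction before doing the work. A minimal stdlib sketch of that defer-first pattern; the `interaction_ack` callable stands in for discord.py's `interaction.response.defer()`, and that mapping is an assumption, not something the report confirms:

```python
import asyncio

async def respond(interaction_ack, do_work):
    """Ack the interaction first, then do the slow work.

    Acking up front keeps the interaction token valid while the slow
    GPT call runs, avoiding the Unknown Interaction timeout.
    """
    await interaction_ack()      # always defer before slow work
    return await do_work()

async def demo():
    acked = []
    async def ack():
        acked.append(True)       # records that we deferred first
    async def slow():
        await asyncio.sleep(0)   # stands in for a long GPT call
        return "reply"
    result = await respond(ack, slow)
    return acked, result

print(asyncio.run(demo()))  # → ([True], 'reply')
```

In a real handler one would also guard the follow-up send, since the original message may have been deleted by the user in the meantime.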
Hironsan/BossSensor | computer-vision | 6 | how many pictures are used to train | How many pictures are used to train? I use your code. Boss picture count: 300; other picture count: 300*10 (10 persons, each with 300). The model gives an accuracy of 0.93. I think it is too low. How can I improve the accuracy? | open | 2017-01-09T08:50:22Z | 2017-01-10T13:15:31Z | https://github.com/Hironsan/BossSensor/issues/6 | [] | seeyourcell | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,274 | Cannot connect to chrome at 127.0.0.1:33773 when deployed on Render | Hello,
I am trying to run a python script for headless browsing deployed on Render but I have this error message:
```
May 19 03:15:54 PM Incoming POST request for /api/strategies/collaborative/
May 19 03:15:56 PM stderr: 2023/05/19 13:15:56 INFO ====== WebDriver manager ======
May 19 03:15:56 PM
May 19 03:15:57 PM stderr: 2023/05/19 13:15:57 INFO There is no [linux64] chromedriver "latest" for browser google-chrome "113.0.5672" in cache
May 19 03:15:57 PM
May 19 03:15:57 PM stderr: 2023/05/19 13:15:57 INFO Get LATEST chromedriver version for google-chrome
May 19 03:15:57 PM
May 19 03:15:57 PM stderr: 2023/05/19 13:15:57 INFO About to download new driver from https://chromedriver.storage.googleapis.com/113.0.5672.63/chromedriver_linux64.zip
May 19 03:15:57 PM
May 19 03:15:57 PM stderr:
[WDM] - Downloading: 0%| | 0.00/6.98M [00:00<?, ?B/s]
May 19 03:15:58 PM stderr:
[WDM] - Downloading: 19%|█▉ | 1.35M/6.98M [00:00<00:00, 7.96MB/s]
May 19 03:15:58 PM stderr:
[WDM] - Downloading: 39%|███▉ | 2.73M/6.98M [00:00<00:00, 10.9MB/s]
May 19 03:15:58 PM stderr:
[WDM] - Downloading: 58%|█████▊ | 4.02M/6.98M [00:00<00:00, 11.9MB/s]
May 19 03:15:58 PM stderr:
[WDM] - Downloading: 75%|███████▍ | 5.23M/6.98M [00:00<00:00, 12.1MB/s]
May 19 03:15:58 PM stderr:
[WDM] - Downloading: 92%|█████████▏| 6.42M/6.98M [00:00<00:00, 12.2MB/s]
May 19 03:15:58 PM stderr:
[WDM] - Downloading: 100%|██████████| 6.98M/6.98M [00:00<00:00, 12.5MB/s]
May 19 03:15:59 PM stderr:
May 19 03:15:59 PM 2023/05/19 13:15:59 INFO Driver has been saved in cache [/opt/render/.wdm/drivers/chromedriver/linux64/113.0.5672.63]
May 19 03:15:59 PM
May 19 03:15:59 PM stderr: 2023/05/19 13:15:59 INFO Loading undetected Chrome
May 19 03:15:59 PM
May 19 03:17:02 PM stderr: Traceback (most recent call last):
May 19 03:17:02 PM File "scripts/python.py", line 26, in <module>
May 19 03:17:02 PM main()
May 19 03:17:02 PM File "scripts/python.py", line 17, in main
May 19 03:17:02 PM python = Python_Client(login, password)
May 19 03:17:02 PM File "/opt/render/project/src/server/scripts/python_utils.py", line 93, in __init__
May 19 03:17:02 PM version_main=112
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/undetected_chromedriver/__init__.py", line 461, in __init__
May 19 03:17:02 PM service=service, # needed or the service will be re-created
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/selenium/webdriver/chrome/webdriver.py", line 93, in __init__
May 19 03:17:02 PM keep_alive,
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/selenium/webdriver/chromium/webdriver.py", line 112, in __init__
May 19 03:17:02 PM options=options,
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__
May 19 03:17:02 PM self.start_session(capabilities, browser_profile)
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/undetected_chromedriver/__init__.py", line 717, in start_session
May 19 03:17:02 PM capabilities, browser_profile
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session
May 19 03:17:02 PM response = self.execute(Command.NEW_SESSION, parameters)
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
May 19 03:17:02 PM self.error_handler.check_response(response)
May 19 03:17:02 PM File "/opt/render/project/src/.venv/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
May 19 03:17:02 PM raise exception_class(message, screen, stacktrace)
May 19 03:17:02 PM selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:33773
May 19 03:17:02 PM from chrome not reachable
May 19 03:17:02 PM Stacktrace:
May 19 03:17:02 PM #0 0x55d383f67fe3 <unknown>
May 19 03:17:02 PM #1 0x55d383ca6bc1 <unknown>
May 19 03:17:02 PM #2 0x55d383c94ff6 <unknown>
May 19 03:17:02 PM #3 0x55d383cd3e00 <unknown>
May 19 03:17:02 PM #4 0x55d383ccb352 <unknown>
May 19 03:17:02 PM #5 0x55d383d0daf7 <unknown>
May 19 03:17:02 PM #6 0x55d383d0d11f <unknown>
May 19 03:17:02 PM #7 0x55d383d04693 <unknown>
May 19 03:17:02 PM #8 0x55d383cd703a <unknown>
May 19 03:17:02 PM #9 0x55d383cd817e <unknown>
May 19 03:17:02 PM #10 0x55d383f29dbd <unknown>
May 19 03:17:02 PM #11 0x55d383f2dc6c <unknown>
May 19 03:17:02 PM #12 0x55d383f374b0 <unknown>
May 19 03:17:02 PM #13 0x55d383f2ed63 <unknown>
May 19 03:17:02 PM #14 0x55d383f01c35 <unknown>
May 19 03:17:02 PM #15 0x55d383f52138 <unknown>
May 19 03:17:02 PM #16 0x55d383f522c7 <unknown>
May 19 03:17:02 PM #17 0x55d383f60093 <unknown>
May 19 03:17:02 PM #18 0x7f0685c35fa3 start_thread
May 19 03:17:02 PM
May 19 03:17:02 PM
May 19 03:17:03 PM child process exited with code 1
May 19 03:17:03 PM Script output:
May 19 03:17:03 PM JSON string is empty
May 19 03:17:03 PM Error: TypeError: Cannot read properties of undefined (reading 'map')
```
Here is my code:
```
def __init__(
    self,
    username: str,
    password: str,
    headless: bool = True,
    cold_start: bool = False,
    verbose: bool = False
):
    if verbose:
        logging.getLogger().setLevel(logging.INFO)
        logging.info('Verbose mode active')

    options = uc.ChromeOptions()
    # options.add_argument('--incognito')
    options.binary_location = '/opt/render/.local/share/undetected_chromedriver/undetected_chromedriver'
    service = Service(ChromeDriverManager().install())
    if headless:
        options.add_argument('--headless')

    logging.info('Loading undetected Chrome')
    self.browser = uc.Chrome(
        # use_subprocess=True,
        service=service,
        options=options,
        headless=headless,
        version_main=112
    )
    self.browser.set_page_load_timeout(30)

    logging.info('Opening service')
    # Retry mechanism for opening the ChatGPT page
    for i in range(3):  # Try 3 times
        try:
            self.browser.get('https://xxxxxxxx')
            logging.info('Successfully opened service')
            break
        except Exception:
            logging.warning('Failed to open the page, retrying')
```
It works locally
What is wrong? | closed | 2023-05-19T13:45:05Z | 2023-05-19T17:53:57Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1274 | [] | Louvivien | 0 |
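The "cannot connect to chrome at 127.0.0.1:&lt;port&gt;" message usually means the Chrome process died at startup rather than a driver-version mismatch. A hedged sketch of flags that are commonly required on containerized hosts such as Render; these flags are an assumption about the fix, not something confirmed by this report:

```python
def headless_server_flags():
    """Chrome flags often needed when running inside a container.

    Without --no-sandbox and --disable-dev-shm-usage, Chrome frequently
    exits at startup in containers, which then surfaces in Selenium as
    "cannot connect to chrome at 127.0.0.1:<port>".
    """
    return [
        "--headless=new",
        "--no-sandbox",             # container environments often lack the sandbox's requirements
        "--disable-dev-shm-usage",  # /dev/shm is tiny in many containers
        "--disable-gpu",
    ]

# Hypothetical application to the snippet above:
# options = uc.ChromeOptions()
# for flag in headless_server_flags():
#     options.add_argument(flag)
print(len(headless_server_flags()))  # → 4
```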
tensorflow/tensor2tensor | deep-learning | 1,809 | Character level language model does not work | ### Description
I am trying to train a character-level language model but only get strange tokens as output. The model trains and the loss decreases.
Am I just trying to infer from the model incorrectly, or is there something else going on?
...
### Environment information
```
OS: Debian GNU/Linux 9.11 (stretch) (GNU/Linux 4.9.0-11-amd64 x86_64\n)
$ pip freeze | grep tensor
mesh-tensorflow==0.1.13
tensor2tensor==1.15.5
tensorboard==1.15.0
tensorflow-datasets==1.2.0
tensorflow-estimator==1.15.1
tensorflow-gan==2.0.0
tensorflow-gpu==1.15.2
tensorflow-hub==0.6.0
tensorflow-io==0.8.1
tensorflow-metadata==0.21.1
tensorflow-probability==0.7.0
tensorflow-serving-api-gpu==1.14.0
$ python3 -V
Python 3.5.3
```
### For bugs: reproduction and error logs
# Steps to reproduce:
Train a character level model by running the following:
```
sudo pip3 install tensor2tensor
PROBLEM=languagemodel_ptb10k
MODEL=transformer
HPARAMS=transformer_small
DATA_DIR=$HOME/t2t_data
TMP_DIR=/tmp/t2t_datagen
TRAIN_DIR=$HOME/t2t_train/$PROBLEM/$MODEL-$HPARAMS
mkdir -p $DATA_DIR $TMP_DIR $TRAIN_DIR
t2t-datagen --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR --problem=$PROBLEM
t2t-trainer --data_dir=$DATA_DIR \
--problem=$PROBLEM\
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR
```
It trains and the loss decreases.
After a while, try to make an inference by running the following:
```
DECODE_FILE=$DATA_DIR/decode_this.txt
echo "My name is Joh" >> $DECODE_FILE
echo "The last character of this senten" >> $DECODE_FILE
echo "th" >> $DECODE_FILE
BEAM_SIZE=4
ALPHA=0.6
t2t-decoder \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_from_file=$DECODE_FILE \
--decode_to_file=output.txt
cat output.txt
```
I expect to at least see "e" after "th", but instead I get:
```
����������������������������������������������������������������������������������������������������
����������������������������������������������������������������������������������������������������
����������������������������������������������������������������������������������������������������
```
Please let me know what other information I can provide.
I'm just trying to get reasonable output from a character-level language model. I might be doing the inference step wrong. Appreciate any help, thanks. | open | 2020-05-02T03:05:00Z | 2020-05-02T03:09:37Z | https://github.com/tensorflow/tensor2tensor/issues/1809 | [] | KosayJabre | 0 |
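One observation on the garbage output: the '�' glyphs are U+FFFD replacement characters, which appear when byte sequences that are not valid UTF-8 are decoded leniently. A minimal stdlib sketch of how they arise:

```python
# How '\ufffd' (the '�' in the decode output) is produced: invalid
# bytes decoded with errors="replace" each become the Unicode
# replacement character.
bad = bytes([0xFF, 0xFE, 0xFF])               # not valid UTF-8
text = bad.decode("utf-8", errors="replace")
print(text)                                    # → '���'
print(text == "\ufffd" * 3)                    # → True
```

As an aside not confirmed here: `languagemodel_ptb10k` is a word-level, 10k-vocabulary problem; a character-level run would presumably use a character problem such as `languagemodel_ptb_characters` instead.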
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 714 | Issue with chunking? Getting error: Token indices sequence length is longer than specified maximum token length | I am getting an error: `**Token indices sequence length is longer than the specified maximum sequence length for this model (4504 > 1024)**. Running this sequence through the model will result in indexing errors`
```python
from scrapegraphai.graphs import SmartScraperGraph
from scrapegraphai.utils import prettify_exec_info

graph_config = {
    "llm": {
        "model": "ollama/llama3.1:8b-instruct-q8_0",
        "temperature": 1,
        "format": "json",  # Ollama needs the format to be specified explicitly
        "model_tokens": 2000,  # depending on the model, set the context length
        "base_url": "http://localhost:11434",  # set Ollama URL of the local host (you can change it if you have a different endpoint)
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "temperature": 0,
        "base_url": "http://localhost:11434",  # set Ollama URL
    }
}

# ************************************************
# Create the SmartScraperGraph instance and run it
# ************************************************
smart_scraper_graph = SmartScraperGraph(
    prompt="List all articles; also provide article url, upvotes and author name",
    # also accepts a string with the already downloaded HTML code
    source="https://news.ycombinator.com/",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)

graph_exec_info = smart_scraper_graph.get_execution_info()
print(prettify_exec_info(graph_exec_info))
```
```
C:\Users\djds4\llm-scraper\src>python -m test
Model ollama/llama3.1:8b-instruct-q8_0 not found,
using default token size (8192)
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
C:\Users\djds4\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Token indices sequence length is longer than the specified maximum sequence length for this model (4504 > 1024). Running this sequence through the model will result in indexing errors
{'24': {'title': 'Ask HN: Any good essays/books/advice about software sales?', 'user': 'nikasakana', 'points': 148, 'comments': 59}, '25': {'title': 'The Case of the Missing Increment', 'url': 'https://www.computerenhance.com/p/the-case-of-the-missing-increment', 'points': 17, 'comments': 3}, '26': {'title': 'Show HN: A macOS app to prevent sound quality degradation on AirPods', 'url': 'https://apps.apple.com/us/app/crystalclear-sound/id6695723746?mt=12', 'points': 165, 'comments': 216}, '27': {'title': 'Do AI companies work?', 'url': 'https://benn.substack.com/p/do-ai-companies-work', 'points': 326, 'comments': 334}, '28': {'title': 'Keep Track: 3D Satellite Toolkit', 'url': 'https://app.keeptrack.space', 'points': 162, 'comments': 36}, '29': {'title': "Fix photogrammetry bridges so that they are not 'solid' underneath (2020)", 'url': 'https://forums.flightsimulator.com/t/fix-photogrammetry-bridges-so-that-they-are-not-solid-underneath/326917', 'points': 50, 'comments': 14}, '30': {'title': 'GnuCash 5.9', 'url': 'https://www.gnucash.org/news.phtml', 'points': 223, 'comments': 115}}
node_name total_tokens prompt_tokens completion_tokens successful_requests total_cost_USD exec_time
0 Fetch 0 0 0 0 0.0 1.782655
1 ParseNode 0 0 0 0 0.0 0.903249
2 GenerateAnswer 1386 1026 360 1 0.0 149.856989
3 TOTAL RESULT 1386 1026 360 1 0.0 152.542893
```
- Not sure why I get the error `Model ollama/llama3.1:8b-instruct-q8_0 not found` when the response seems to be served from that model.
- The articles returned are from the end of the page, so there definitely is some kind of chunking running.
| closed | 2024-10-01T08:17:17Z | 2025-01-04T09:57:35Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/714 | [] | djds4rce | 9 |
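The warning means a tokenizer with a 1024-token limit was handed 4,504 tokens, so anything past position 1024 can be truncated or mis-indexed. A minimal sketch of limit-aware chunking with overlap; plain integer lists stand in for real token IDs, and the 1024/128 numbers are illustrative assumptions:

```python
def chunk_tokens(tokens, max_len=1024, overlap=128):
    """Split a token list into overlapping chunks no longer than max_len."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    step = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # the last chunk already reaches the end
    return chunks

tokens = list(range(4504))  # stand-in for the report's 4504 token ids
chunks = chunk_tokens(tokens, max_len=1024, overlap=128)
print(len(chunks), max(len(c) for c in chunks))  # → 5 1024
```

With a real model one would count tokens with the tokenizer itself rather than a plain list, and run each chunk through the model separately.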
pandas-dev/pandas | pandas | 60,410 | DOC: incorrect formula for half-life of exponentially weighted window | ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/user_guide/window.html#exponentially-weighted-window
### Documentation problem

In the documentation for alpha as a function of half-life, the formula says 1-exp^(log(0.5)/h).
The exponential should be written either exp(log(0.5)/h) or e^(log(0.5)/h), but not exp^(log(0.5)/h).
### Suggested fix for documentation
I suggest changing it to 1-e^(log(0.5)/h) | closed | 2024-11-25T03:53:39Z | 2024-11-25T18:36:09Z | https://github.com/pandas-dev/pandas/issues/60410 | [
"Docs",
"Needs Triage"
] | partev | 0 |
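For reference, the corrected relation follows from requiring the weight to decay to one half after h periods:

```latex
(1-\alpha)^{h} = \frac{1}{2}
\quad\Longrightarrow\quad
1-\alpha = \exp\!\left(\frac{\ln 0.5}{h}\right)
\quad\Longrightarrow\quad
\alpha = 1 - \exp\!\left(\frac{\ln 0.5}{h}\right), \qquad h > 0,
```

which matches the suggested fix, 1 - e^(log(0.5)/h).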
mlfoundations/open_clip | computer-vision | 1,021 | RuntimeError: expected scalar type Float but found BFloat16 (ComfyUI) | !!! Exception during processing !!! expected scalar type Float but found BFloat16
Traceback (most recent call last):
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 651, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 985, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 953, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 936, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 715, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 161, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 380, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 916, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 919, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 360, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 196, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 309, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 131, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 160, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy\ldm\flux\model.py", line 204, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options, attn_mask=kwargs.get("attention_mask", None))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 148, in forward_orig
img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 53, in forward
latents = self.norm2(latents)
^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\normalization.py", line 217, in forward
return F.layer_norm(
^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 2900, in layer_norm
return torch.layer_norm(
^^^^^^^^^^^^^^^^^
RuntimeError: expected scalar type Float but found BFloat16 | closed | 2025-01-15T14:03:44Z | 2025-01-15T20:39:42Z | https://github.com/mlfoundations/open_clip/issues/1021 | [] | Ehsan7104 | 0 |
aleju/imgaug | deep-learning | 496 | Keypoints relocate wrongly when rotated | I observed that when I use `iaa.Affine(rotate="anything that is not zero")`, the keypoints shift wrongly on the small image.
In the following image, the 3rd plot is rotated by 180 degrees, but somehow the keypoints are not in the center of the circle. The problem is more noticeable when you resize the image further to 30x30.

Here's the minimal code to reproduce the issue:
```python
import cv2 as cv
import numpy as np
import imgaug as ia
from imgaug import augmenters as iaa
from imgaug.augmentables.kps import KeypointsOnImage


def enlarge_and_plot(img, koi):
    plot = cv.resize(img, (600, 600))
    plot = koi.on(plot).draw_on_image(plot, size=10)
    return plot


# creating big image and big keypoints
img = np.ones((500, 500, 3), dtype=np.uint8) * 255
coords = [100, 100, 200, 200, 300, 300, 400, 400, 100, 400]  # list of XY coordinates
coords = np.float32(coords).reshape((-1, 2))
for coord in coords:
    cv.circle(img, tuple(int(c) for c in coord), 10, (0, 0, 0), thickness=3)
koi = KeypointsOnImage.from_xy_array(coords, img.shape)
plot1 = enlarge_and_plot(img, koi)

# make small image and small keypoints
img = cv.resize(img, (45, 45))
koi = koi.on(img)
plot2 = enlarge_and_plot(img, koi)

# rotate small image and small keypoints
print(koi)
img, koi = iaa.Affine(rotate=180)(image=img, keypoints=koi)
print(koi)
plot3 = enlarge_and_plot(img, koi)
ia.imshow(np.hstack([plot1, plot2, plot3]))
```
I don't know if this is a bug or it's simply my mistake of how I'm using the library. Please give me insight. @aleju
But I think it's kind of an off-by-one error. I see that the first coordinate (100,100) gets mapped to (9,9) on 45x45 image, which is correct, but after rotation, the coordinate gets mapped to (35,35) which should be (36,36) if I understand correctly.
Here's the output of the program:
```python
KeypointsOnImage([Keypoint(x=9.00000000, y=9.00000000), Keypoint(x=18.00000000, y=18.00000000), Keypoint(x=27.00000191, y=27.00000191), Keypoint(x=36.00000000, y=36.00000000), Keypoint(x=9.00000000, y=36.00000000)], shape=(45, 45, 3))
KeypointsOnImage([Keypoint(x=35.00000000, y=35.00000000), Keypoint(x=26.00000000, y=26.00000000), Keypoint(x=16.99999809, y=16.99999809), Keypoint(x=8.00000000, y=8.00000000), Keypoint(x=35.00000000, y=8.00000000)], shape=(45, 45, 3))
```
If this is the case, it means you won't notice this bug in a big image, because being one pixel off is not visible to the naked eye. | closed | 2019-11-14T13:15:33Z | 2019-11-16T08:53:19Z | https://github.com/aleju/imgaug/issues/496 | [] | offchan42 | 6 |
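The off-by-one reading is consistent with a rotation-center convention mismatch: a 180-degree rotation about the discrete-index center (w - 1)/2 sends 9 to 35, which is what the library returned, while rotation about the continuous image center w/2 sends 9 to 36, which is where the circle centers land. A minimal check; attributing the former convention to imgaug's internals is an inference from these numbers, not a reading of its source:

```python
def rotate180(x, cx):
    """180-degree rotation of a 1-D coordinate x about center cx."""
    return 2 * cx - x

w = 45
print(rotate180(9.0, (w - 1) / 2))  # → 35.0, matches the reported output
print(rotate180(9.0, w / 2))        # → 36.0, what the reporter expects
```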
tableau/server-client-python | rest-api | 760 | assign groups to projects while publishing workbooks | Is there a way to assign projects to groups based on the requirement
| open | 2020-12-10T17:05:38Z | 2021-02-22T19:17:17Z | https://github.com/tableau/server-client-python/issues/760 | [
"enhancement"
] | yashwathreddy | 3 |
ymcui/Chinese-BERT-wwm | tensorflow | 205 | Computing the similarity of two sentences | '''
>>> import torch
>>> from transformers import BertModel, BertTokenizer
>>> model_name = "hfl/chinese-roberta-wwm-ext-large"
>>> tokenizer = BertTokenizer.from_pretrained(model_name)
>>> model = BertModel.from_pretrained(model_name)
>>> input_text1 = "今天天气不错,你觉得呢?"
>>> input_text2 = "今天天气不错,你觉得呢?我喜欢吃饺子"
>>> input_ids1 = tokenizer.encode(input_text1, add_special_tokens=True)
>>> input_ids2 = tokenizer.encode(input_text2, add_special_tokens=True)
>>> input_ids1 = torch.tensor([input_ids1])
>>> input_ids2 = torch.tensor([input_ids2])
>>> out1 = model(input_ids1)[0]
>>> out2 = model(input_ids2)[0]
>>> out1.shape
torch.Size([1, 14, 1024])
>>> out2.shape
torch.Size([1, 20, 1024])
'''
Why are the output feature shapes different? I want to compare the similarity of the two sentences; which features should I use? | closed | 2021-11-24T08:12:51Z | 2022-02-13T12:04:20Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/205 | [] | yfq512 | 2 |
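To the question at the end: the two outputs differ because dimension 1 is the sequence length (14 vs. 20 tokens); the hidden size (1024) is the same. For sentence similarity, one first pools over the token dimension, for example mean pooling or taking the [CLS] vector at index 0, and then compares the fixed-size vectors with cosine similarity. A stdlib sketch of that pooling-then-cosine step; the toy nested lists stand in for `out1[0]` / `out2[0]`:

```python
import math

def mean_pool(token_vectors):
    """Average token vectors (seq_len x dim) into one sentence vector."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(v[i] for v in token_vectors) / n for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stand-ins for out1[0] / out2[0]: different seq lengths, same dim.
s1 = mean_pool([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])  # 3 tokens, dim 2
s2 = mean_pool([[2.0, 0.0], [2.0, 0.0]])              # 2 tokens, dim 2
print(cosine(s1, s2))  # → 1.0 (same direction)
```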
scanapi/scanapi | rest-api | 514 | Add hacktoberfest topic on the repository | It would be nice to have the topic for next month | closed | 2021-09-26T20:57:59Z | 2022-02-02T16:41:24Z | https://github.com/scanapi/scanapi/issues/514 | [
"Question"
] | patrickelectric | 6 |
JaidedAI/EasyOCR | machine-learning | 430 | CUDA out of memory error while trying to transcribe a lot of images. | Like the title suggests I am trying to transcribe thousands of images but I ran into this CUDA OOM error after 40 images were transcribed
```
Traceback (most recent call last):
File "C:/Users/cubeservdev/Dev/OCRTest/ocr_test.py", line 15, in <module>
transcription = reader.readtext(f'images/{image_file}', detail=0)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\easyocr\easyocr.py", line 378, in readtext
add_margin, False)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\easyocr\easyocr.py", line 273, in detect
False, self.device, optimal_num_chars)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\easyocr\detection.py", line 81, in get_textbox
bboxes, polys = test_net(canvas_size, mag_ratio, detector, image, text_threshold, link_threshold, low_text, poly, device, estimate_num_chars)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\easyocr\detection.py", line 38, in test_net
y, feature = net(x)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\parallel\data_parallel.py", line 159, in forward
return self.module(*inputs[0], **kwargs[0])
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\easyocr\craft.py", line 60, in forward
sources = self.basenet(x)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\easyocr\model\modules.py", line 61, in forward
h = self.slice1(X)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
input = module(input)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\modules\batchnorm.py", line 136, in forward
self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
File "C:\ProgramData\Anaconda3\envs\EasyOCRTest\lib\site-packages\torch\nn\functional.py", line 2058, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
RuntimeError: CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 3.00 GiB total capacity; 1.75 GiB already allocated; 0 bytes free; 1.84 GiB reserved in total by PyTorch)
libpng warning: iCCP: known incorrect sRGB profile
```
```py
import os
import time
import easyocr
if __name__ == '__main__':
images = os.listdir('images')
image_count = len(images)
reader = easyocr.Reader(['ch_sim', 'en'], recog_network='latin_g1', gpu=True)
for index, image_file in enumerate(images):
text_file_name = image_file.split('.')[0]
print(f'Transcribing ({index}/{image_count}) {text_file_name}', end='')
start_time = time.time()
with open(f'transcriptions/{text_file_name}.txt', 'w+', encoding='utf-8') as text_file:
transcription = reader.readtext(f'images/{image_file}', detail=0)
text_file.write('\n'.join(transcription))
print(f' finished in {round(time.time() - start_time, 2)} seconds.')
```
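One mitigation worth trying (a sketch, not from this thread; whether it cures the underlying leak is an assumption): catch the CUDA OOM `RuntimeError` and retry the image with a CPU reader, optionally calling `torch.cuda.empty_cache()` between images.

```python
# `readtext` is the real easyocr API; the fallback strategy itself is the
# illustrative part.
def read_with_fallback(gpu_reader, cpu_reader, image_path):
    try:
        return gpu_reader.readtext(image_path, detail=0)
    except RuntimeError as exc:
        if 'out of memory' not in str(exc):
            raise                          # unrelated error: re-raise
        # Optionally free cached GPU memory here: torch.cuda.empty_cache()
        return cpu_reader.readtext(image_path, detail=0)
```

Construct `easyocr.Reader(..., gpu=True)` and `easyocr.Reader(..., gpu=False)` once up front, then call `read_with_fallback` inside the loop.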
I don't think the memory issues/leaks are being caused by my code but I could most definitely be wrong. How can I resolve this issue? | closed | 2021-05-16T18:42:48Z | 2022-03-02T09:24:59Z | https://github.com/JaidedAI/EasyOCR/issues/430 | [] | cubeserverdev | 4 |
mage-ai/mage-ai | data-science | 5,299 | [BUG] The unique_conflict_method='UPDATE' function of MySQL data exporter did not work properly | ### Mage version
v0.9.72
### Describe the bug
When I use the MySQL data exporter like following code
```Python
with MySQL.with_config(ConfigFileLoader(config_path, config_profile)) as loader:
loader.export(
df,
schema_name=None,
table_name=table_name,
index=False, # Specifies whether to include index in exported table
if_exists='append', # Specify resolution policy if table name already exists
allow_reserved_words=True,
unique_conflict_method='UPDATE',
unique_constraints=constraints_columns,
)
```
It reports an error: `You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'AS new`
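For context: MariaDB does not support the MySQL 8.0.19+ row-alias upsert syntax (`INSERT ... AS new ... ON DUPLICATE KEY UPDATE col = new.col`), which matches the error above; that this is the syntax being generated is my reading. The older `VALUES()` form is portable to MariaDB and builds a clause like this (column names are illustrative):

```python
# Sketch of the portable ON DUPLICATE KEY UPDATE clause.
cleaned_columns = ["id", "name", "updated_at"]
update_command = [f"{col} = VALUES({col})" for col in cleaned_columns]
clause = f"ON DUPLICATE KEY UPDATE {', '.join(update_command)}"
print(clause)
# ON DUPLICATE KEY UPDATE id = VALUES(id), name = VALUES(name), updated_at = VALUES(updated_at)
```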
### To reproduce
1. use the code I provided
2. run the block
### Expected behavior
1. update the conflict record successfully
### Screenshots
_No response_
### Operating system
_No response_
### Additional context
The problem could be fixed with the code below in MySQL.py:
```Python
if UNIQUE_CONFLICT_METHOD_UPDATE == unique_conflict_method:
update_command = [f'{col} = VALUES({col})' for col in cleaned_columns]
query += [
f"ON DUPLICATE KEY UPDATE {', '.join(update_command)}",
]
``` | open | 2024-07-29T14:49:13Z | 2024-07-29T14:49:13Z | https://github.com/mage-ai/mage-ai/issues/5299 | [
"bug"
] | highkay | 0 |
rougier/scientific-visualization-book | matplotlib | 53 | Wrong Code in page 96 (Size, aspect & layout)? | When I run the `F`, `H` and `I` scripts on page 96 (Chapter 8, Size, aspect & layout), I get the figures shown below. They differ from Figure 8.1. When I instead set the axes aspect to 0.5, 1 and 2 respectively, the output looks normal. It seems that `aspect='auto'` doesn't work.
- F, H and I
<img width="295" alt="F script" src="https://user-images.githubusercontent.com/39882510/174038682-c45a16a8-5a21-4ec5-b2c6-2358fb59e9aa.png">
<img width="291" alt="H script" src="https://user-images.githubusercontent.com/39882510/174038799-696424c4-88fa-4527-a2e6-114aba0ca5c9.png">
<img width="290" alt="I script" src="https://user-images.githubusercontent.com/39882510/174038996-7cf45c48-2920-4ed7-81c7-e43ec7361380.png">
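For reference, the two aspect modes being compared (a minimal sketch, not the book's code): `'auto'` stretches data units to fill the axes box, while a number fixes the length ratio of a data unit on the two axes.

```python
import matplotlib
matplotlib.use("Agg")          # headless backend, for illustration only
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2)
axes[0].set_aspect("auto")     # fill the axes box
axes[1].set_aspect(1)          # equal data-unit lengths on x and y
print(axes[0].get_aspect(), axes[1].get_aspect())
```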
| closed | 2022-06-16T09:32:37Z | 2022-06-23T07:08:39Z | https://github.com/rougier/scientific-visualization-book/issues/53 | [] | zhangkaihua88 | 3 |
aminalaee/sqladmin | asyncio | 316 | Feature parity with Flask-Admin | # Feature parity with Flask-Admin
## General features
| Feature | Status |
| ---------------------------------------------- | ------- |
| `ModelView` with configurations | ✓ |
| `BaseView` for creating custom views | ✓ |
| Authentication | ✓ |
| Ajax search related model | ✓ |
| Customizing the templates (basic) | ✓ |
| Batch operations | |
| Inline models | |
| Limited support for multiple PK models | |
| Managing files | |
| Allow usage of related model in list/sort/etc. | |
| Grouping views | |
| Customizing batch operations (actions) | |
## ModelView options parity
| Option | Status |
| ----------------------------- | ------- |
| `can_create` | ✓ |
| `can_edit` | ✓ |
| `can_delete` | ✓ |
| `can_view_details` | ✓ |
| `can_export` | ✓ |
| `column_list` | ✓ |
| `column_exclude_list` | ✓ |
| `column_formatters` | ✓ |
| `page_size` | ✓ |
| `page_size_options` | ✓ |
| `column_searchable_list` | ✓ |
| `column_sortable_list` | ✓ |
| `column_default_sort` | ✓ |
| `column_details_list` | ✓ |
| `column_details_exclude_list` | ✓ |
| `column_formatters_detail` | ✓ |
| `list_template` | ✓ |
| `create_template` | ✓ |
| `details_template` | ✓ |
| `edit_template` | ✓ |
| `column_export_list` | ✓ |
| `column_export_exclude_list` | ✓ |
| `export_types` | ✓ |
| `export_max_rows` | ✓ |
| `form` | ✓ |
| `form_base_class` | ✓ |
| `form_args` | ✓ |
| `form_columns` | ✓ |
| `form_excluded_columns` | ✓ |
| `form_overrides` | ✓ |
| `form_include_pk` | ✓ |
| `form_ajax_refs` | ✓ |
| `column_filters` | |
# Django features
## General features
| Feature | Status |
| ---------------- | ------- |
| `save_as` option | ✓ |
| open | 2022-09-14T09:32:47Z | 2022-12-21T11:00:26Z | https://github.com/aminalaee/sqladmin/issues/316 | [] | aminalaee | 0 |
microsoft/unilm | nlp | 1,325 | E5: what prompt is used in training and evaluation? | @intfloat
**Describe**
I tried to reproduce the E5 scores on MTEB (particularly BEIR) using the released checkpoint, but I observe a big gap on some datasets (e.g. on TREC-COVID, reproduced 51.0 vs. reported 79.6). I suspect that the prompt may be the key (as in the eval [code](https://github.com/microsoft/unilm/blob/027f0eb1cedac529915721110ab9a8dbdfad4dd9/e5/mteb_eval.py#L27C24-L27C30)). It is not clearly stated in the paper (Sec 4.3 only mentions that "For tasks other than zero-shot text classification and retrieval, we use the query embeddings by default.").
So I wonder if you can elaborate what prompts are used in E5 training and MTEB evaluation?
1. What prompts are used in pretraining?
2. What prompts are used in fine-tuning?
3. What prompts are used in MTEB evaluation? Is the prompt applied to both query and doc ([line 32](https://github.com/microsoft/unilm/blob/027f0eb1cedac529915721110ab9a8dbdfad4dd9/e5/mteb_eval.py#L32) shows ['', 'query: ', 'passage: '])?
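For what it's worth, the `'query: '`/`'passage: '` strings do appear in the linked `mteb_eval.py`; the sketch below shows how applying them per input would look (that this matches the training recipe is my assumption, hence the question):

```python
def add_prefix(texts, kind):
    # Prefixes taken from the eval script; applied verbatim to each input.
    prefix = {"query": "query: ", "passage": "passage: "}[kind]
    return [prefix + t for t in texts]

queries = add_prefix(["what prompts does E5 use?"], "query")
passages = add_prefix(["E5 prepends short prefixes to inputs."], "passage")
print(queries[0])
print(passages[0])
```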
Thank you!
Rui
| closed | 2023-10-11T21:21:23Z | 2023-10-12T02:18:52Z | https://github.com/microsoft/unilm/issues/1325 | [] | memray | 2 |
apache/airflow | python | 47,274 | Clearing Task Instances Intermittently Throws HTTP 500 Error | ### Apache Airflow version
AF3 beta1
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
When we try to clear a task instance, it intermittently throws an HTTP 500 error.
**Logs:**
```
INFO: 192.168.207.1:54306 - "POST /public/dags/etl_dag/clearTaskInstances HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
+ Exception Group Traceback (most recent call last):
| File "/usr/local/lib/python3.9/site-packages/starlette/_utils.py", line 76, in collapse_excgroups
| yield
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 178, in __call__
| recv_stream.close()
| File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 767, in __aexit__
| raise BaseExceptionGroup(
| exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
| await super().__call__(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 187, in __call__
| raise exc
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 165, in __call__
| await self.app(scope, receive, _send)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 29, in __call__
| await responder(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 126, in __call__
| await super().__call__(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 46, in __call__
| await self.app(scope, receive, self.send_with_compression)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 93, in __call__
| await self.simple_response(scope, receive, send, request_headers=headers)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 144, in simple_response
| await self.app(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 178, in __call__
| recv_stream.close()
| File "/usr/local/lib/python3.9/contextlib.py", line 137, in __exit__
| self.gen.throw(typ, value, traceback)
| File "/usr/local/lib/python3.9/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
| raise exc
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 175, in __call__
| response = await self.dispatch_func(request, call_next)
| File "/opt/airflow/airflow/api_fastapi/core_api/middleware.py", line 28, in dispatch
| response = await call_next(request)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 153, in call_next
| raise app_exc
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 140, in coro
| await self.app(scope, receive_or_disconnect, send_no_error)
| File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
| await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| raise exc
| File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
| await app(scope, receive, sender)
| File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 714, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 734, in app
| await route.handle(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 288, in handle
| await self.app(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 76, in app
| await wrap_app_handling_exceptions(app, request)(scope, receive, send)
| File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| raise exc
| File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
| await app(scope, receive, sender)
| File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 73, in app
| response = await f(request)
| File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 301, in app
| raw_response = await run_endpoint_function(
| File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 214, in run_endpoint_function
| return await run_in_threadpool(dependant.call, **values)
| File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 37, in run_in_threadpool
| return await anyio.to_thread.run_sync(func)
| File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
| return await get_async_backend().run_sync_in_worker_thread(
| File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
| return await future
| File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 962, in run
| result = context.run(func, *args)
| File "/opt/airflow/airflow/api_fastapi/core_api/routes/public/task_instances.py", line 651, in post_clear_task_instances
| dag = dag.partial_subset(
| File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 811, in partial_subset
| dag.task_dict = {
| File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 812, in <dictcomp>
| t.task_id: _deepcopy_task(t)
| File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 808, in _deepcopy_task
| return copy.deepcopy(t, memo)
| File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy
| y = copier(memo)
| File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/baseoperator.py", line 1188, in __deepcopy__
| object.__setattr__(result, k, v)
| AttributeError: can't set attribute
+------------------------------------
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 29, in __call__
await responder(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 126, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 46, in __call__
await self.app(scope, receive, self.send_with_compression)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 93, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 144, in simple_response
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 178, in __call__
recv_stream.close()
File "/usr/local/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/usr/local/lib/python3.9/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 175, in __call__
response = await self.dispatch_func(request, call_next)
File "/opt/airflow/airflow/api_fastapi/core_api/middleware.py", line 28, in dispatch
response = await call_next(request)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 153, in call_next
raise app_exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 140, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 214, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 962, in run
result = context.run(func, *args)
File "/opt/airflow/airflow/api_fastapi/core_api/routes/public/task_instances.py", line 651, in post_clear_task_instances
dag = dag.partial_subset(
File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 811, in partial_subset
dag.task_dict = {
File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 812, in <dictcomp>
t.task_id: _deepcopy_task(t)
File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 808, in _deepcopy_task
return copy.deepcopy(t, memo)
File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/baseoperator.py", line 1188, in __deepcopy__
object.__setattr__(result, k, v)
AttributeError: can't set attribute
```
### What you think should happen instead?
Task instances endpoint should not throw HTTP500
### How to reproduce
As I mentioned, its intermittent you need to try clearing task instance couple of times from UI and you will observe this issue
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-02T10:59:30Z | 2025-03-12T07:18:15Z | https://github.com/apache/airflow/issues/47274 | [
"kind:bug",
"priority:high",
"area:core",
"AIP-84",
"area:task-sdk",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 11 |
ray-project/ray | pytorch | 51,493 | CI test windows://python/ray/tests:test_actor_pool is consistently_failing | CI test **windows://python/ray/tests:test_actor_pool** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_actor_pool-END
Managed by OSS Test Policy | closed | 2025-03-19T00:05:15Z | 2025-03-19T21:51:38Z | https://github.com/ray-project/ray/issues/51493 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 3 |
albumentations-team/albumentations | machine-learning | 1,630 | [Tech debt] Improve Interface for RandomFog | Right now in the transform we have separate parameters for `fog_coef_upper` and `fog_coef_upper`
Better would be to have one parameter `fog_coef_range = [fog_coef_lower, fog_coef_upper]`
=>
We can update the transform to use the new signature, keep the old parameters working, but mark them as deprecated.
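A sketch of that migration (parameter names follow the issue; the 0.3/1.0 defaults and the warning plumbing are illustrative, not the actual albumentations implementation):

```python
import warnings

def resolve_fog_range(fog_coef_range=None, fog_coef_lower=None, fog_coef_upper=None):
    # Old-style parameters still work, but emit a DeprecationWarning.
    if fog_coef_lower is not None or fog_coef_upper is not None:
        warnings.warn(
            "fog_coef_lower/fog_coef_upper are deprecated; use fog_coef_range",
            DeprecationWarning,
        )
        return (fog_coef_lower if fog_coef_lower is not None else 0.3,
                fog_coef_upper if fog_coef_upper is not None else 1.0)
    return fog_coef_range if fog_coef_range is not None else (0.3, 1.0)

print(resolve_fog_range())                           # (0.3, 1.0)
print(resolve_fog_range(fog_coef_range=(0.2, 0.8)))  # (0.2, 0.8)
```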
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704 | closed | 2024-04-05T18:37:00Z | 2024-06-07T04:34:28Z | https://github.com/albumentations-team/albumentations/issues/1630 | [
"good first issue",
"Tech debt"
] | ternaus | 1 |
dot-agent/nextpy | fastapi | 157 | Why hasn't this project been updated for a while | This is a good project, but it hasn't been updated for a long time. Why is that | open | 2024-09-23T09:25:54Z | 2025-01-13T12:36:11Z | https://github.com/dot-agent/nextpy/issues/157 | [] | redpintings | 2 |
hyperspy/hyperspy | data-visualization | 2,970 | Should `hs.load` always return a list? | One common source of confusion for beginners with hyperspy is issues like https://github.com/hyperspy/hyperspy/issues/2959, because `hs.load` can return either a list of signals or a single signal... this is very simple to explain, but it could be avoided if `hs.load` always returned a list, even when the file to load contains only a single dataset.
For example, we could change the API to have syntax very similar to what matplotlib does with `plot`:
```python
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
lines = ax.plot([0, 1, 2])
print(lines)
# lines is a list
# [<matplotlib.lines.Line2D at 0x252af81fa90>]
line, = ax.plot([3, 4, 5])
print(line)
# line is not a list
# <matplotlib.lines.Line2D at 0x252afa14640>
```
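A sketch of the proposed behavior, mirroring the matplotlib idiom above (`load` and the toy reader below are stand-ins, not hyperspy code):

```python
_FAKE_FILES = {"single.hspy": ["signal_a"], "multi.hspy": ["signal_a", "signal_b"]}

def load(path):
    # Always return a list, even when the file holds a single dataset.
    return list(_FAKE_FILES[path])

signals = load("multi.hspy")      # a list of two signals
signal, = load("single.hspy")     # unpacking asserts exactly one signal
print(signals, signal)
```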
| open | 2022-06-22T10:38:02Z | 2023-09-08T20:16:42Z | https://github.com/hyperspy/hyperspy/issues/2970 | [
"type: API change"
] | ericpre | 1 |
Significant-Gravitas/AutoGPT | python | 9,587 | Deepseek support | Hey Devs,
let me start by saying that this repo is great. Good job on your work, and thanks for sharing it.
Could we include support for the DeepSeek V3 API? If there is a solution out there, when can it be implemented?
Thanks! | open | 2025-03-06T10:34:11Z | 2025-03-12T10:43:11Z | https://github.com/Significant-Gravitas/AutoGPT/issues/9587 | [] | q377985133 | 3 |
tqdm/tqdm | jupyter | 697 | DataFrameGroupBy.progress_apply not always equal to DataFrameGroupBy.apply | - [X] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
- [X] I have mentioned version numbers, operating system and
environment, where applicable:
**tqdm version**: 4.31.1
**Python version:** 3.7.1
**OS Version:** Ubuntu 16.04
**Context:**
```python
import pandas as pd
import numpy as np
from tqdm import tqdm
tqdm.pandas()
df_size = int(5e6)
df = pd.DataFrame(dict(a=np.random.randint(1, 8, df_size),
b=np.random.rand(df_size)))
```
**Observed:**
```python
df.groupby('a').apply(max)
a b
a
1 1.0 0.999999
2 2.0 0.999997
3 3.0 0.999999
4 4.0 0.999997
5 5.0 0.999999
6 6.0 1.000000
7 7.0 1.000000
# but
df.groupby('a').progress_apply(max)
a
1 b
2 b
3 b
4 b
5 b
6 b
7 b
dtype: object
```
**Expected:**
`df.groupby('a').apply(max) ` and `df.groupby('a').progress_apply(max)` return the same value.
**Additional information:**
Replacing `max` with `sum` returns a normal result for `apply`, but throws this error for `progress_apply`:
> TypeError: unsupported operand type(s) for +: 'int' and 'str'
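My reading of the root cause (an inference, not stated in the thread): pandas substitutes fast NumPy equivalents for the *builtin* `max`/`sum` via an identity check, so any wrapper like tqdm's hides the builtin and it runs as-is; and calling the builtin `max` on a DataFrame iterates column labels, which yields `'b'` (while builtin `sum` over labels raises the `int + str` TypeError).

```python
import pandas as pd

df = pd.DataFrame(dict(a=[1, 1, 2], b=[0.1, 0.9, 0.5]))

def wrapped_max(group):
    return max(group)        # what a wrapped builtin effectively does

print(max(df))                               # 'b': max over column labels
print(df.groupby('a').apply(wrapped_max))    # 'b' per group, as reported
```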
| open | 2019-03-16T17:32:53Z | 2019-05-09T16:29:45Z | https://github.com/tqdm/tqdm/issues/697 | [
"help wanted 🙏",
"to-fix ⌛",
"submodule ⊂"
] | nalepae | 4 |
holoviz/panel | jupyter | 7,402 | Unable to run ChartJS example from custom_models.md | <details>
<summary>Software Version Info</summary>
```plaintext
panel.__version__ = '1.5.2.post1.dev8+gef313542.d20241015'
bokeh.__version__ = 3.6.0
OS Windows
Browser firefox
```
</details>
The impression I get is that the [custom_models.md](https://github.com/holoviz/panel/blob/main/doc/developer_guide/custom_models.md) is potentially a little out of date or does not build on all systems. While attempting to create the example custom model using ChartJS, I encountered a few errors in the chartjs.ts file that had to be changed, but the bulk of the issue is when I run the final:
```
panel serve panel/tests/pane/test_chartjs.py --auto --show
```
A window is opened showing the python error:
```
AttributeError: unexpected attribute 'title' to ChartJS, possible attributes are align, aspect_ratio, clicks, context_menu, css_classes, css_variables, disabled, elements, flow_mode, height, height_policy, js_event_callbacks, js_property_callbacks, margin, max_height, max_width, min_height, min_width, name, object, resizable, sizing_mode, styles, stylesheets, subscribed_events, syncable, tags, visible, width or width_policy
```
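The error means the Python model class has no declared `title` property, so presumably something (the test file or a template) passes `title=...` to `ChartJS`. Below is a minimal stand-in for Bokeh's keyword validation (illustrative, not Bokeh's actual implementation); the fix would be declaring the property, e.g. `title = String()` in chartjs.py, mirrored in chartjs.ts:

```python
class Model:
    _props = ("object", "clicks")

    def __init__(self, **kwargs):
        # Bokeh-style validation: reject keywords with no declared property.
        for name, value in kwargs.items():
            if name not in self._props:
                raise AttributeError(
                    f"unexpected attribute {name!r} to {type(self).__name__}, "
                    f"possible attributes are {', '.join(self._props)}"
                )
            setattr(self, name, value)

try:
    Model(title="Chart")      # `title` was never declared on the model
except AttributeError as exc:
    print(exc)
```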
### Changes Made from the custom_models example:
for the most part I have followed the custom_models.md to the word except for these changes required to get the code to run:
panel/model/chartjs.py
```python
from bokeh.core.properties import Int, String
from .layout import HTMLBox # changed from 'from bokeh.models import HTMLBox' as that threw 'ImportError: cannot import name 'HTMLBox' from 'bokeh.models' (C:\Users\lyndo\OneDrive\Documents\Professional_Work\NMIS\three-panel\panel\.pixi\envs\test-312\Lib\site-packages\bokeh\models\__init__.py)'
class ChartJS(HTMLBox):
"""Custom ChartJS Model"""
object = String()
clicks = Int()
```
panel/model/chartjs.ts
```typescript
// See https://docs.bokeh.org/en/latest/docs/reference/models/layouts.html
import { HTMLBox, HTMLBoxView } from "./layout" // changed from 'import { HTMLBox, HTMLBoxView } from "@bokehjs/models/layouts/html_box"'
// See https://docs.bokeh.org/en/latest/docs/reference/core/properties.html
import * as p from "@bokehjs/core/properties"
// The view of the Bokeh extension/ HTML element
// Here you can define how to render the model as well as react to model changes or View events.
export class ChartJSView extends HTMLBoxView {
declare model: ChartJS // declare added
objectElement: any // Element
  override connect_signals(): void { // override added, etc.
super.connect_signals()
this.on_change(this.model.properties.object, () => {
this.render();
})
}
override render(): void {
super.render()
this.el.innerHTML = `<button type="button">${this.model.object}</button>`
this.objectElement = this.el.firstElementChild
this.objectElement.addEventListener("click", () => {this.model.clicks+=1;}, false)
}
}
export namespace ChartJS {
export type Attrs = p.AttrsOf<Props>
export type Props = HTMLBox.Props & {
object: p.Property<string>,
clicks: p.Property<number>,
}
}
export interface ChartJS extends ChartJS.Attrs { }
// The Bokeh .ts model corresponding to the Bokeh .py model
export class ChartJS extends HTMLBox {
declare properties: ChartJS.Props
constructor(attrs?: Partial<ChartJS.Attrs>) {
super(attrs)
}
static override __module__ = "panel.models.chartjs"
static {
this.prototype.default_view = ChartJSView;
this.define<ChartJS.Props>(({Int, String}) => ({
object: [String, "Click Me!"],
clicks: [Int, 0],
}))
}
}
```
| closed | 2024-10-15T16:48:03Z | 2024-10-21T09:14:18Z | https://github.com/holoviz/panel/issues/7402 | [] | LyndonAlcock | 4 |
jupyter/nbgrader | jupyter | 1,210 | Autograde cell with input or print statements |
1. Is it possible to autograde a cell that contains an input statement?
2. Is it possible to autograde a cell that contains a print statement?
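Both are possible in plain Python; here is a standard-library sketch of the usual pattern (not nbgrader-specific machinery): stub `input()` with canned answers and capture `print()` output, then assert on the captured text from a hidden test cell.

```python
import builtins
import contextlib
import io

def run_with_input(func, feed):
    # Replace input() with canned answers and capture everything printed.
    answers = iter(feed)
    real_input = builtins.input
    builtins.input = lambda prompt="": next(answers)
    out = io.StringIO()
    try:
        with contextlib.redirect_stdout(out):
            func()
    finally:
        builtins.input = real_input
    return out.getvalue()

def student_cell():
    name = input("Name? ")
    print(f"Hello, {name}!")

print(repr(run_with_input(student_cell, ["Ada"])))   # 'Hello, Ada!\n'
```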
### Operating system
OS X 10.14.6
### `nbgrader --version`
Python version 3.7.3 (default, Mar 27 2019, 16:54:48)
[Clang 4.0.1 (tags/RELEASE_401/final)]
nbgrader version 0.6.0
### `jupyterhub --version` (if used with JupyterHub)
### `jupyter notebook --version`
6.0.0
### Expected behavior
### Actual behavior
### Steps to reproduce the behavior
| open | 2019-08-30T22:10:49Z | 2023-07-12T21:21:23Z | https://github.com/jupyter/nbgrader/issues/1210 | [
"enhancement"
] | hebertodelrio | 5 |
awesto/django-shop | django | 262 | django-shop is not python3 compatible | I'm trying to fix this in my branch python3. I got rid of classy-tags since they are obsolete and not ported to python3. All other libraries are already ported.
Then I made necessary changes to the source code and raised minimum django version to 1.5.1.
Now all tests (except one under python3 - the circular import is somehow not circular there) are passing under both versions. But it needs far more testing.
| closed | 2013-12-22T11:17:00Z | 2016-02-02T13:56:48Z | https://github.com/awesto/django-shop/issues/262 | [] | katomaso | 3 |
xonsh/xonsh | data-science | 5,157 | ast DeprecationWarnings with Python 3.12b1 | Since upgrading from Python 3.12a7 to 3.12b2 and using xonsh 0.14, I see a flurry of DeprecationWarnings when starting up xonsh (in my case on Windows):
```
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\ast.py:9: DeprecationWarning: ast.Bytes is deprecated and will be removed in Python 3.14; use ast.Constant instead
from ast import (
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\ast.py:40: DeprecationWarning: ast.Ellipsis is deprecated and will be removed in Python 3.14; use ast.Constant instead
from ast import Ellipsis as EllipsisNode
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\ast.py:41: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
from ast import (
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\ast.py:41: DeprecationWarning: ast.Num is deprecated and will be removed in Python 3.14; use ast.Constant instead
from ast import (
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\ast.py:41: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
from ast import (
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:3550: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
p[0] = ast.Str(s=p1.value, lineno=p1.lineno, col_offset=p1.lexpos)
C:\Program Files\Python 3.12\Lib\ast.py:587: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
return Constant(*args, **kwargs)
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:179: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
return "*" in x.s
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:2420: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
p[0] = ast.NameConstant(value=True, lineno=p1.lineno, col_offset=p1.lexpos)
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:2410: DeprecationWarning: ast.Ellipsis is deprecated and will be removed in Python 3.14; use ast.Constant instead
p[0] = ast.EllipsisNode(lineno=p1.lineno, col_offset=p1.lexpos)
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:2657: DeprecationWarning: ast.Num is deprecated and will be removed in Python 3.14; use ast.Constant instead
p[0] = ast.Num(
C:\Program Files\Python 3.12\Lib\ast.py:587: DeprecationWarning: Attribute n is deprecated and will be removed in Python 3.14; use value instead
return Constant(*args, **kwargs)
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\context_check.py:23: DeprecationWarning: ast.Num is deprecated and will be removed in Python 3.14; use ast.Constant instead
elif isinstance(x, (ast.Set, ast.Dict, ast.Num, ast.Str, ast.Bytes)):
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\context_check.py:23: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
elif isinstance(x, (ast.Set, ast.Dict, ast.Num, ast.Str, ast.Bytes)):
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\context_check.py:23: DeprecationWarning: ast.Bytes is deprecated and will be removed in Python 3.14; use ast.Constant instead
elif isinstance(x, (ast.Set, ast.Dict, ast.Num, ast.Str, ast.Bytes)):
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\context_check.py:45: DeprecationWarning: ast.NameConstant is deprecated and will be removed in Python 3.14; use ast.Constant instead
elif isinstance(x, ast.NameConstant):
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:3550: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
p[0] = ast.Str(s=p1.value, lineno=p1.lineno, col_offset=p1.lexpos)
C:\Program Files\Python 3.12\Lib\ast.py:587: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
return Constant(*args, **kwargs)
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:179: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
return "*" in x.s
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:3550: DeprecationWarning: ast.Str is deprecated and will be removed in Python 3.14; use ast.Constant instead
p[0] = ast.Str(s=p1.value, lineno=p1.lineno, col_offset=p1.lexpos)
C:\Program Files\Python 3.12\Lib\ast.py:587: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
return Constant(*args, **kwargs)
C:\Users\jaraco\.local\pipx\venvs\xonsh\Lib\site-packages\xonsh\parsers\base.py:179: DeprecationWarning: Attribute s is deprecated and will be removed in Python 3.14; use value instead
return "*" in x.s
```
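Until xonsh migrates off the deprecated `ast` aliases, a user-side stopgap is a `warnings` filter. A minimal stdlib sketch of how such a filter behaves (the message pattern is an assumption, and in practice the filter would have to be installed before xonsh is imported, e.g. via `PYTHONWARNINGS`):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # filterwarnings() prepends its rule, so this "ignore" is consulted first.
    warnings.filterwarnings(
        "ignore",
        message=r".* is deprecated and will be removed .*",
        category=DeprecationWarning,
    )
    warnings.warn(
        "ast.Str is deprecated and will be removed in Python 3.14",
        DeprecationWarning,
    )
    warnings.warn("unrelated warning", UserWarning)

print(len(caught))  # 1 -> only the non-matching warning got through
```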
Hopefully these warnings can be suppressed or a fix added soon to streamline the experience for users on Python 3.12+. | closed | 2023-06-19T15:37:00Z | 2023-07-29T15:45:32Z | https://github.com/xonsh/xonsh/issues/5157 | [
"parser",
"py312"
] | jaraco | 4 |
strawberry-graphql/strawberry-django | graphql | 261 | relay depth limit | Hi,
I was reading a [blog post](https://blog.cloudflare.com/protecting-graphql-apis-from-malicious-queries/) from Cloudflare about malicious queries in GraphQL.
Is there a way to detect query depth so I can respond accordingly?
```gql
query {
petition(ID: 123) {
signers {
nodes {
petitions {
nodes {
signers {
nodes {
petitions {
nodes {
...
}
}
}
}
}
}
}
}
}
}
```
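For reference, depth can be estimated cheaply before execution by counting brace nesting. A rough stdlib sketch (it ignores braces inside string literals, so a proper AST-based validation rule, or a depth-limiter extension if your strawberry version ships one, would be the robust option):

```python
def query_depth(query: str) -> int:
    """Rough GraphQL query depth via brace nesting (heuristic only)."""
    depth = deepest = 0
    for char in query:
        if char == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif char == "}":
            depth -= 1
    return deepest

q = "query { petition { signers { nodes { petitions { nodes { id } } } } } }"
print(query_depth(q))  # 6 -> reject if this exceeds a chosen limit
```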
Thanks | closed | 2023-06-14T22:47:41Z | 2023-06-15T23:00:37Z | https://github.com/strawberry-graphql/strawberry-django/issues/261 | [] | tasiotas | 6 |
dynaconf/dynaconf | fastapi | 979 | [bug] when using a validator with a default for nested data it parses value via toml twice | **Describe the bug**
```py
from __future__ import annotations
from dynaconf import Dynaconf
from dynaconf import Validator
settings = Dynaconf()
settings.validators.register(
Validator("group.something_new", default=5),
)
settings.validators.validate()
assert settings.group.test_list == ["1", "2"], settings.group
```
Execution
```console
$ DYNACONF_GROUP__TEST_LIST="['1','2']" python app.py
```
expectation:
```py
settings.group.test_list == ["1", "2"]
```
current behavior
```console
$ DYNACONF_GROUP__TEST_LIST="['1','2']" python app.py
Traceback (most recent call last):
File "/home/rochacbruno/Projects/dynaconf/tests_functional/issues/905_item_duplication_in_list/app.py", line 21, in <module>
assert settings.group.test_list == ["1", "2"], settings.group
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: {'something_new': 5, 'TEST_LIST': [1, 2]}
```
## What is happening?
At the end of `validators.validate.validate`, the `Validator("group.something_new", default=5)` is executed and its default value is merged with the current data. During that process, dynaconf calls `parse_conf_data` with `tomlfy=True`, which forces toml to evaluate the already-parsed data again:
```py
In [1]: from dynaconf.utils.parse_conf import parse_conf_data
In [2]: data = {"TEST_LIST": ["1", "2"]}
In [3]: parse_conf_data(data, tomlfy=True)
Out[3]: {'TEST_LIST': [1, 2]}
In [4]: parse_conf_data(data, tomlfy=False)
Out[4]: {'TEST_LIST': ['1', '2']}
```
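The non-idempotent step can be illustrated without dynaconf at all. Below is a hypothetical `parse` function standing in for `parse_conf_data(tomlfy=True)`, just to show why running the parser over already-parsed data changes it:

```python
def parse(value):
    """Toy stand-in for a tomlfy=True parse: digit-strings become ints."""
    if isinstance(value, dict):
        return {key: parse(item) for key, item in value.items()}
    if isinstance(value, list):
        return [parse(item) for item in value]
    if isinstance(value, str) and value.isdigit():
        return int(value)
    return value

# After the first (legitimate) parse, the quotes kept these as strings:
settings = {"TEST_LIST": ["1", "2"]}

# The default-merging step parses the already-parsed data a second time:
print(parse(settings))  # {'TEST_LIST': [1, 2]}
```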
## Possible solutions:
- Change the `setdefault` method to use `tomlfy=False`.
- Change `parse_with_toml` to avoid doing that transformation.
| closed | 2023-08-16T17:05:12Z | 2024-07-08T18:09:40Z | https://github.com/dynaconf/dynaconf/issues/979 | [
"bug",
"4.0-breaking-change"
] | rochacbruno | 1 |
FlareSolverr/FlareSolverr | api | 626 | [yggtorrent] (testing) Exception (yggtorrent): Error connecting to FlareSolverr | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-12-20T12:15:19Z | 2022-12-21T01:00:42Z | https://github.com/FlareSolverr/FlareSolverr/issues/626 | [
"invalid"
] | o0-sicnarf-0o | 0 |
microsoft/qlib | machine-learning | 1,371 | Can provider_uri be other protocols like http | ## ❓ Questions and Help
We sincerely suggest you to carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue. | closed | 2022-11-21T06:56:35Z | 2023-02-24T12:02:34Z | https://github.com/microsoft/qlib/issues/1371 | [
"question",
"stale"
] | Vincent4zzzz | 1 |
psf/requests | python | 6512 | Requests are not retried when received body length is shorter than Content-Length | When a server sends fewer bytes than indicated by Content-Length, we get a ChunkedEncodingError instead of retrying the request.
urllib3 supports retrying requests in this situation by setting `preload_content=True`. When a user specifies `stream=True`, obviously, all bets are off: the response cannot be preloaded and therefore the request cannot be retried. However, even when `stream=False`, the response is still not preloaded and therefore the urllib3 retry mechanism in this situation is bypassed.
---
As background for this issue, I've been investigating rare failures in my CI builds during `pip install`. I believe this issue to be the proximate cause: pip makes some requests to PyPI with `stream=False` and retries configured, but they still fail.
In the current version of pip (which has an out of date urllib3 package), pip falls victim to https://github.com/psf/requests/issues/4956 and fails to parse the PyPI metadata with a `JSONDecodeError`. Upgrading pip's urllib3 version results in a `ChunkedEncodingError` as below.
## Expected Result
The request is retried according to the specified retry policy.
## Actual Result
`requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(10 bytes read, 26227 more expected)', IncompleteRead(10 bytes read, 26227 more expected))`
Because the response is not preloaded, urllib3 cannot retry the request, and requests has no retry functionality of its own.
## Reproduction Steps
```python
import requests
from requests.adapters import HTTPAdapter
s = requests.Session()
s.mount("http://", HTTPAdapter(max_retries=5))
r = s.get('http://127.0.0.1:5000/test', stream=False)
```
I'm using an intentionally broken local server for testing. See [here](https://github.com/psf/requests/issues/4956#issuecomment-573325001) for an example.
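For completeness, such a truncating server can be sketched with the stdlib alone (this is not the exact server from the linked comment); `http.client` then raises the underlying `IncompleteRead` that requests wraps:

```python
import http.client
import socket
import threading

def serve_truncated(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    conn.recv(4096)  # read (and ignore) the request
    body = b"truncated"
    # Advertise 100 more bytes than we actually send, then close.
    header = b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % (len(body) + 100)
    conn.sendall(header + body)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve_truncated, args=(listener,), daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", listener.getsockname()[1])
conn.request("GET", "/test")
resp = conn.getresponse()
err = None
try:
    resp.read()
except http.client.IncompleteRead as exc:
    err = exc
print(type(err).__name__, len(err.partial))  # IncompleteRead 9
```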
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.2.0"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.4"
},
"implementation": {
"name": "CPython",
"version": "3.11.4"
},
"platform": {
"release": "6.4.11-100.fc37.x86_64",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "30000090"
},
"urllib3": {
"version": "2.0.4"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
## Proposed Patch
I have a proposed patch which I believe fixes this problem. Unfortunately, my patch breaks a bunch of the tests (and probably also breaks backwards compatibility, in particular, this patch causes requests to start leaking urllib3 exceptions). On the off chance it's useful in coming up with a proper fix, here it is:
```
diff --git a/src/requests/adapters.py b/src/requests/adapters.py
index eb240fa9..ce01c2a5 100644
--- a/src/requests/adapters.py
+++ b/src/requests/adapters.py
@@ -489,8 +489,8 @@ class HTTPAdapter(BaseAdapter):
headers=request.headers,
redirect=False,
assert_same_host=False,
- preload_content=False,
- decode_content=False,
+ preload_content=not stream,
+ decode_content=not stream,
retries=self.max_retries,
timeout=timeout,
chunked=chunked,
diff --git a/src/requests/models.py b/src/requests/models.py
index 44556394..f43f1bf8 100644
--- a/src/requests/models.py
+++ b/src/requests/models.py
@@ -893,6 +893,8 @@ class Response:
if self.status_code == 0 or self.raw is None:
self._content = None
+ elif getattr(self.raw, "data", None) is not None:
+ self._content = self.raw.data
else:
self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b""
diff --git a/tests/test_lowlevel.py b/tests/test_lowlevel.py
index 859d07e8..39a1175e 100644
--- a/tests/test_lowlevel.py
+++ b/tests/test_lowlevel.py
@@ -4,6 +4,7 @@ import pytest
from tests.testserver.server import Server, consume_socket_content
import requests
+from requests.adapters import HTTPAdapter
from requests.compat import JSONDecodeError
from .utils import override_environ
@@ -426,3 +427,33 @@ def test_json_decode_compatibility_for_alt_utf_encodings():
assert isinstance(excinfo.value, requests.exceptions.RequestException)
assert isinstance(excinfo.value, JSONDecodeError)
assert r.text not in str(excinfo.value)
+
+
+def test_retry_truncated_response():
+ data = b"truncated before retry"
+ response_lengths = [len(data), 9]
+
+ def retry_handler(sock):
+ request_content = consume_socket_content(sock, timeout=0.5)
+
+ response = (
+ b"HTTP/1.1 200 OK\r\n"
+ b"Content-Length: %d\r\n\r\n"
+ b"%s"
+ ) % (len(data), data[:response_lengths.pop()])
+ sock.send(response)
+
+ return request_content
+
+ close_server = threading.Event()
+ server = Server(retry_handler, wait_to_close_event=close_server, requests_to_handle=2)
+
+ s = requests.Session()
+ s.mount("http://", HTTPAdapter(max_retries=2))
+
+ with server as (host, port):
+ url = f"http://{host}:{port}/"
+ r = s.get(url, stream=False)
+ assert r.status_code == 200
+ assert r.content == data
+ close_server.set()
``` | open | 2023-08-23T19:11:31Z | 2024-07-01T16:42:59Z | https://github.com/psf/requests/issues/6512 | [] | zweger | 5 |
litestar-org/litestar | api | 3,760 | Bug: `return_dto` not used in openapi spec when `Response` is returned by handler | ### Description
When I return a `Response[MyDTO]`, I expect the generated OpenAPI spec to include the full DTO, but it appears to produce an empty schema `{}` instead. https://github.com/litestar-org/litestar/issues/1631 appears to have implemented support for this, but maybe it broke since then. The code below shows that the DTO is being used (because the camelCase rename strategy is applied), but it is just not included in the API spec.
### URL to code causing the issue
_No response_
### MCVE
```python
import json
from litestar.contrib.pydantic import PydanticDTO
from litestar.dto import DTOConfig
from litestar import Controller, get, MediaType, Response, Litestar
from litestar.testing import TestClient
from pydantic import BaseModel
class MyResponseSchema(BaseModel):
my_name: str
class MyResponseDTO(PydanticDTO[MyResponseSchema]):
config = DTOConfig(rename_strategy="camel")
class MyController(Controller):
path = "/my-endpoint"
@get(
"",
media_type=MediaType.JSON,
return_dto=MyResponseDTO,
)
async def get_my_data(self) -> Response[MyResponseSchema]:
data = MyResponseSchema(id=1, my_name="example")
return Response(data)
if __name__ == "__main__":
app = Litestar(
route_handlers=[MyController],
)
print(json.dumps(app.openapi_schema.to_schema(), indent=4))
with TestClient(app=app) as client:
print(client.get("/my-endpoint").text)
```
### Steps to reproduce
```bash
1. Run the script.
2. Look at the printed openapi spec.
```
### Screenshots
```bash
This is the output of that script:
{
"info": {
"title": "Litestar API",
"version": "1.0.0"
},
"openapi": "3.1.0",
"servers": [
{
"url": "/"
}
],
"paths": {
"/my-endpoint": {
"get": {
"summary": "GetMyData",
"operationId": "MyEndpointGetMyData",
"responses": {
"200": {
"description": "Request fulfilled, document follows",
"headers": {},
"content": {
"application/json": {
"schema": {}
}
}
}
},
"deprecated": false
}
}
},
"components": {
"schemas": {}
}
}
{"myName":"example"}
```
```
### Logs
_No response_
### Litestar Version
2.12.1
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-09-27T17:11:36Z | 2025-03-20T15:54:55Z | https://github.com/litestar-org/litestar/issues/3760 | [
"Bug :bug:"
] | atom-andrew | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1290 | Getting detected by Google after running for 2-3 minutes. | It's getting detected on Google login after continuously running for 2-3 minutes. | open | 2023-05-25T09:35:53Z | 2023-06-21T04:06:39Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1290 | [] | sprogdept001 | 1 |
aiortc/aiortc | asyncio | 208 | How to fix 'ClientConnectorError' in aiohttp in Janus example | Hi!
I'm trying to run the Janus example. I do the same as in this [description](https://github.com/aiortc/aiortc/tree/master/examples/janus), but I get an error:
`aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:8088 ssl:None [Connect call failed ('127.0.0.1', 8088)]`
The issue occurs in this function:
```
async def _wrap_create_connection(
self, *args: Any,
req: 'ClientRequest',
timeout: 'ClientTimeout',
client_error: Type[Exception]=ClientConnectorError,
**kwargs: Any) -> Tuple[asyncio.Transport, ResponseHandler]:
try:
with CeilTimeout(timeout.sock_connect):
return await self._loop.create_connection(*args, **kwargs) # type: ignore # noqa
except cert_errors as exc:
raise ClientConnectorCertificateError(
req.connection_key, exc) from exc
except ssl_errors as exc:
raise ClientConnectorSSLError(req.connection_key, exc) from exc
except OSError as exc:
raise client_error(req.connection_key, exc) from exc
```
in the last line. I tried to run the example on 2 different Ubuntu machines, but the error was identical.
I searched for similar problems, but did not find anything that helps.
What could it be? Can you help, please?
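For what it's worth, a quick stdlib check can rule out the most common cause, namely that nothing is listening on the Janus HTTP port (8088 from the error above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# False here means nothing is listening, i.e. Janus is not running
# (or is bound to a different host/port than the example expects).
print(port_open("127.0.0.1", 8088))
```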
| closed | 2019-09-24T13:44:57Z | 2019-09-24T19:21:10Z | https://github.com/aiortc/aiortc/issues/208 | [] | Glunky | 2 |
TencentARC/GFPGAN | deep-learning | 59 | How to get original size? (no upscale) | I've tried `--UPSCALE 1` and I also tried to completely remove the command.
But I always get at least upscale x2
EDIT:
Sorry for the mess. It works now; I guess I did something wrong earlier, but now it works fine and I finally get 100% resolution. | open | 2021-09-05T02:40:23Z | 2021-09-17T02:58:36Z | https://github.com/TencentARC/GFPGAN/issues/59 | [] | AlonDan | 4 |
HumanSignal/labelImg | deep-learning | 310 | the minX mustn't equal maxX and the same as minY&maxY value | initPos = self.current[0]
minX = initPos.x()
minY = initPos.y()
targetPos = self.line[1]
maxX = targetPos.x()
maxY = targetPos.y()
self.current.addPoint(QPointF(maxX, minY))
self.current.addPoint(targetPos)
self.current.addPoint(QPointF(minX, maxY))
self.current.addPoint(initPos)
self.line[0] = self.current[-1]
| closed | 2018-06-05T09:25:43Z | 2018-06-14T02:49:35Z | https://github.com/HumanSignal/labelImg/issues/310 | [] | loulansuiye | 1 |
tortoise/tortoise-orm | asyncio | 1,449 | DatetimeField Query Questions | **Describe the bug**
tortoise-orm: 0.20.0
python:3.11
I can't get my existing row when using the `get_or_create` method.
Could the cause be that `DatetimeField` fails to convert the time zone during the lookup? I didn't encounter this issue with similar queries in Django.
**To Reproduce**
My database configuration:
```python
register_tortoise(
    app,
    config={
        'connections': {
            'default': 'postgres://postgres:postgres@localhost:5432/test'
        },
        'apps': {
            'models': {
                'models': ['app.db'],
                'default_connection': 'default',
            }
        },
        "use_tz": True,
        "timezone": "Asia/Shanghai",
    },
    generate_schemas=True,
    add_exception_handlers=True,
)
```
**Expected behavior**
Database data:
```
id  time_fields
2   2023-07-28 07:00:33+08
```
code:
```python
instance, is_create = await table.get_or_create(time_fields='2023-07-28 07:00:33')
```
The expected result of is_create should be False
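For what it's worth, the instant mismatch can be illustrated with the stdlib alone (values taken from this report; the exact conversion tortoise applies internally may differ):

```python
from datetime import datetime, timedelta, timezone

shanghai = timezone(timedelta(hours=8))
stored = datetime(2023, 7, 28, 7, 0, 33, tzinfo=shanghai)   # row in Postgres
queried = datetime.fromisoformat("2023-07-28 07:00:33")     # naive filter value

# If the naive value is interpreted as UTC, it names a different instant,
# so the lookup misses and get_or_create inserts a new row:
print(stored == queried.replace(tzinfo=timezone.utc))  # False
print(stored == queried.replace(tzinfo=shanghai))      # True
```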
**Additional context**
| closed | 2023-08-04T02:17:16Z | 2023-08-04T02:25:26Z | https://github.com/tortoise/tortoise-orm/issues/1449 | [] | smomop | 0 |
gtalarico/django-vue-template | rest-api | 3 | Add Documentation for CDN configuration | closed | 2018-08-15T07:37:47Z | 2018-09-03T21:52:16Z | https://github.com/gtalarico/django-vue-template/issues/3 | [
"Documentation"
] | gtalarico | 0 | |
piskvorky/gensim | nlp | 2,796 | Inconsistency between pip version (3.8.2) and installed version (3.8.1) | #### Problem description
The gensim package version is 3.8.2 on [pip](https://pypi.org/project/gensim/), but after installing it and checking the version in the Python console I see 3.8.1:
```python
import gensim
gensim.__version__
# '3.8.1'
```
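A stdlib-only way to compare what pip installed against what the interpreter actually imports (a mismatch typically means a stale copy earlier on `sys.path`, or a different interpreter than the one pip used):

```python
import importlib
from importlib import metadata

def report_versions(dist_name, module_name=None):
    """Return (version per installed metadata, version the module reports)."""
    try:
        installed = metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        installed = None
    try:
        module = importlib.import_module(module_name or dist_name)
        imported = getattr(module, "__version__", None)
    except ImportError:
        imported = None
    return installed, imported

# In the environment reported above this would show ('3.8.2', '3.8.1').
print(report_versions("gensim"))
```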
#### Versions
```bash
Linux-5.3.0-46-generic-x86_64-with-glibc2.10
Python 3.8.2 (default, Mar 26 2020, 15:53:00)
[GCC 7.3.0]
NumPy 1.18.1
SciPy 1.4.1
gensim 3.8.1
FAST_VERSION 1
```
| closed | 2020-04-15T10:05:36Z | 2020-07-15T09:36:43Z | https://github.com/piskvorky/gensim/issues/2796 | [
"bug"
] | dpasqualin | 4 |
OpenInterpreter/open-interpreter | python | 670 | Infrastructure intelligence | ### Is your feature request related to a problem? Please describe.
<img width="569" alt="Screen Shot 2023-10-21 at 10 43 45 AM" src="https://github.com/KillianLucas/open-interpreter/assets/115367894/c1b53e8d-07b3-4637-9d57-1cb5e514ea6e">
### Describe the solution you'd like
I'd like for open interpreter to keep track of its intelligence resource and even be able to replenish its credits from within open interpreter itself.
Perhaps interface methods generic to language models for resource maintenance.
### Describe alternatives you've considered
_No response_
### Additional context
Generalize to multiple language models. | closed | 2023-10-21T07:47:28Z | 2023-12-19T08:42:35Z | https://github.com/OpenInterpreter/open-interpreter/issues/670 | [
"Enhancement"
] | ngoiyaeric | 4 |
holoviz/panel | plotly | 6,897 | ReactiveHTML components cannot have a String param initialized on creation | There appears to be a bug in `ReactiveHTML` which makes it impossible to initialize string parameters with values passed into the constructor.
#### ALL software version info
`panel==1.4.4`
#### Description of expected behavior and the observed behavior
When `ReactiveHTML` is subclassed and a `param.String` is defined on the derived class, attempting to initialize that class with a value for that param results in the following error (trace truncated):
```
File "/Users/jerry/Library/Caches/pypoetry/virtualenvs/panel-ui-extensions-BV4D3BUD-py3.12/lib/python3.12/site-packages/param/parameterized.py", line 1647, in _validate
self._validate_value(val, self.allow_None)
File "/Users/jerry/Library/Caches/pypoetry/virtualenvs/panel-ui-extensions-BV4D3BUD-py3.12/lib/python3.12/site-packages/param/parameterized.py", line 1641, in _validate_value
raise ValueError(
ValueError: String parameter 'ReactiveHTMLMetaclass.text' only takes a string value, not value of <class 'panel.pane.markup.Markdown'>.
```
#### Complete, minimal, self-contained example code that reproduces the issue
The error may be reproduced using the following custom component code taken directly from the Panel website (imports added):
```python
import panel as pn
import param
from panel.reactive import ReactiveHTML
class CustomComponent(ReactiveHTML):
"""I'm a custom component"""
text = param.String("I'm **bold**")
color = param.Color("silver", label="Select a color", doc="""The color of the component""")
_template = """
<p style="background:{{ color }}">Jinja literal value. {{ text }}</p>
<p id="el" style="background:${color}">Javascript template variable. ${text}</p>
"""
main = pn.Column()
component = CustomComponent(text="hello world")
main.append(component)
template = pn.template.BootstrapTemplate(main=main)
template.servable()
```
I would expect the custom component to be displayed, but instead the above error is thrown in the `__init__` method of `ReactiveHTML`.
#### Stack traceback and/or browser JavaScript console output
```
2024-06-06 20:45:22,332 Error running application handler <panel.io.handlers.ScriptHandler object at 0x12c2af520>: String parameter 'ReactiveHTMLMetaclass.text' only takes a string value, not value of <class 'panel.pane.markup.Markdown'>.
File 'parameterized.py', line 1634, in _validate_value:
raise ValueError( Traceback (most recent call last):
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/io/handlers.py", line 389, in run
exec(self._code, module.__dict__)
File "/Users/jerry/Development/machine-prospector/scratch/reactive_test.py", line 19, in <module>
component = CustomComponent(text='hello world')
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/reactive.py", line 1615, in __init__
super().__init__(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/reactive.py", line 560, in __init__
super().__init__(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/reactive.py", line 120, in __init__
super().__init__(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/viewable.py", line 700, in __init__
super().__init__(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/viewable.py", line 539, in __init__
super().__init__(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/panel/viewable.py", line 300, in __init__
super().__init__(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 4148, in __init__
refs, deps = self.param._setup_params(**params)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 1678, in override_initialization
ret = fn(self_, *args, **kw)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 1971, in _setup_params
setattr(self, name, resolved)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 527, in _f
return f(self, obj, val)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 1490, in __set__
self._validate(val)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 1640, in _validate
self._validate_value(val, self.allow_None)
File "/Users/jerry/Development/machine-prospector/venv/lib/python3.10/site-packages/param/parameterized.py", line 1634, in _validate_value
raise ValueError(
ValueError: String parameter 'ReactiveHTMLMetaclass.text' only takes a string value, not value of <class 'panel.pane.markup.Markdown'>.
```
The direct cause of the bug appears to be line 1614 in `panel/reactive.py`:
```
params[children_param] = panel(child_value)
```
During `__init__` this line is hit when an external value is passed to the constructor, but not otherwise. If this happens, the `child_value` (in my above example the text `"hello world"`) is turned into a `Markdown` pane, which throws a subsequent validation error as the pane is not a string.
- [X] I may be interested in making a pull request to address this
Happy to take a swing at fixing this with some guidance about how not to break anything else and in particular, why `self._parser.children` is not populated when a default is supplied but is populated when an external value is passed in. | closed | 2024-06-07T00:58:11Z | 2024-06-27T22:48:04Z | https://github.com/holoviz/panel/issues/6897 | [
"component: reactivehtml"
] | jerry-kobold | 3 |
ipython/ipython | data-science | 14,250 | The bang magic and the line numbering | ```
In [1]: 1 ! ls \
2 /dev/null
3 int( \
4 qwerty)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[1], line 3
1 get_ipython().system(' ls /dev/null')
2 int( \
----> 3 qwerty)
NameError: name 'qwerty' is not defined
```
Expected: the original code in the traceback.
8.15.0 | open | 2023-11-22T13:45:27Z | 2024-03-03T16:53:39Z | https://github.com/ipython/ipython/issues/14250 | [] | kuraga | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 758 | Model Files Not Found | I tried to create the project and got rid of basically every problem except a Model Files not found problem! It goes as follows:
Arguments:
datasets_root: None
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
********************************************************************************
Error: Model files not found. Follow these instructions to get and install the models:
https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models
********************************************************************************
I'm not sure what I'm doing wrong. I have all three pretrained model folders in place, but I'm still getting the error. | closed | 2021-05-15T01:01:31Z | 2022-06-02T21:18:16Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/758 | [] | TKTSWalker | 4 |
jwkvam/bowtie | plotly | 229 | ImportError: No module named html | I'm stuck trying to run the example code:
https://github.com/jwkvam/bowtie-demo/blob/master/example.py
I believe I have all dependencies installed via conda and pip (plotlywrapper), and I've tried running it in several environments and consoles.
>>> #!/usr/bin/env python
... # -*- coding: utf-8 -*-
...
... from bowtie import App, command
... from bowtie.control import Dropdown, Slider
... from bowtie.visual import Plotly, Table
... from bowtie.html import Markdown
...
... import numpy as np
... import pandas as pd
... import plotlywrapper as pw
...
... from sklearn.kernel_ridge import KernelRidge
...
... iris = pd.read_csv('./iris.csv')
... iris = iris.drop(iris.columns[0], axis=1)
...
... attrs = iris.columns[:-1]
...
... description = Markdown("""Bowtie Demo
... ===========
...
... Demonstrates interactive elements with the iris dataset.
... Select some attributes to plot and select some data on the 2d plot.
... Change the alpha parameter to see how that affects the model.
... """)
...
... xdown = Dropdown(caption='X variable', labels=attrs, values=attrs)
... ydown = Dropdown(caption='Y variable', labels=attrs, values=attrs)
... zdown = Dropdown(caption='Z variable', labels=attrs, values=attrs)
... alphaslider = Slider(caption='alpha parameter', start=10, minimum=1, maximum=50)
...
... mainplot = Plotly()
... mplot3 = Plotly()
... linear = Plotly()
... table1 = Table()
...
...
... def pairplot(x, y):
... print('hellox')
... if x is None or y is None:
... return
... x = x['value']
... y = y['value']
... plot = pw.Chart()
... for i, df in iris.groupby('Species'):
... plot += pw.scatter(df[x], df[y], label=i)
... plot.xlabel(x)
... plot.ylabel(y)
... mainplot.do_all(plot.to_json())
...
...
... def threeplot(x, y, z):
... if x is None or y is None or z is None:
... return
... x = x['value']
... y = y['value']
... z = z['value']
... plot = pw.Chart()
... for i, df in iris.groupby('Species'):
... plot += pw.scatter3d(df[x], df[y], df[z], label=i)
... plot.xlabel(x)
... plot.ylabel(y)
... plot.zlabel(z)
... mplot3.do_all(plot.to_json())
...
...
... def mainregress(selection, alpha):
... if len(selection) < 2:
... return
...
... x = xdown.get()['value']
... y = ydown.get()['value']
...
... tabdata = []
... mldatax = []
... mldatay = []
... species = iris.Species.unique()
... for i, p in enumerate(selection['points']):
... mldatax.append(p['x'])
... mldatay.append(p['y'])
... tabdata.append({
... x: p['x'],
... y: p['y'],
... 'species': species[p['curve']]
... })
...
...
... X = np.c_[mldatax, np.array(mldatax) ** 2]
... ridge = KernelRidge(alpha=alpha).fit(X, mldatay)
...
... xspace = np.linspace(min(mldatax)-1, max(mldatax)+1, 100)
...
... plot = pw.scatter(mldatax, mldatay, label='train', markersize=15)
... for i, df in iris.groupby('Species'):
... plot += pw.scatter(df[x], df[y], label=i)
... plot += pw.line(xspace, ridge.predict(np.c_[xspace, xspace**2]), label='model', mode='lines')
... plot.xlabel(x)
... plot.ylabel(y)
... linear.do_all(plot.to_json())
... table1.do_data(pd.DataFrame(tabdata))
...
...
... @command
... def main():
... app = App(rows=2, columns=3, background_color='PaleTurquoise', debug=False)
... app.columns[0].fraction(2)
... app.columns[1].fraction(1)
... app.columns[2].fraction(1)
...
... app.add_sidebar(description)
... app.add_sidebar(xdown)
... app.add_sidebar(ydown)
... app.add_sidebar(zdown)
... app.add_sidebar(alphaslider)
...
... app[0, 0] = mainplot
... app[0, 1:] = mplot3
... app[1, :2] = linear
... app[1, 2] = table1
...
... app.subscribe(pairplot, xdown.on_change, ydown.on_change)
... app.subscribe(threeplot, xdown.on_change, ydown.on_change, zdown.on_change)
... app.subscribe(mainregress, mainplot.on_select, alphaslider.on_change)
...
... return app
...
ImportError: No module named html
ImportErrorTraceback (most recent call last)
<ipython-input-1-528c9221da1d> in <module>()
5 from bowtie.control import Dropdown, Slider
6 from bowtie.visual import Plotly, Table
----> 7 from bowtie.html import Markdown
8
9 import numpy as np
ImportError: No module named html | closed | 2018-05-08T19:20:01Z | 2018-05-08T22:54:19Z | https://github.com/jwkvam/bowtie/issues/229 | [] | willupowers | 3 |
clovaai/donut | computer-vision | 129 | How to train the model for supporting Arabic Language | Hello,
How can I train the model to support the Arabic language?
From what I understand from this issue https://github.com/clovaai/donut/issues/77,
Arabic may not be supported,
but we have a team that can create a custom Arabic tokenizer, and we need more info on how to do it and how to integrate it
into Donut.
thanks in advance | open | 2023-01-30T09:39:27Z | 2023-02-12T00:53:43Z | https://github.com/clovaai/donut/issues/129 | [] | Abdullamhd | 1 |
mckinsey/vizro | data-visualization | 99 | Have a component like Plotly Textarea to get text input from the user. | ### What's the problem this feature will solve?
Enabling users to input SQL queries for data retrieval can significantly enhance the utility of data connectors. This feature would allow for the generation of dynamic dashboards that can be customized according to user-defined queries entered as text.
### Describe the solution you'd like
The following functionality from https://dash.plotly.com/dash-core-components/textarea#textarea-properties will suit text-based input.
```python
from dash import Dash, dcc, html, Input, Output, callback

app = Dash(__name__)

app.layout = html.Div([
    dcc.Textarea(
        id='textarea-example',
        value='Textarea content initialized\nwith multiple lines of text',
        style={'width': '100%', 'height': 300},
    ),
    html.Div(id='textarea-example-output', style={'whiteSpace': 'pre-line'})
])

@callback(
    Output('textarea-example-output', 'children'),
    Input('textarea-example', 'value')
)
def update_output(value):
    return 'You have entered: \n{}'.format(value)
```
### Alternative Solutions
A different approach would be to have dropdown menus where the user could select the list of tables and filters and we generate the query in the backend.
### Additional context
I was thinking of implementing a component like the following. I haven't tested it yet, but will work on such a solution.
```python
from typing import List, Optional

from dash import ClientsideFunction, Input, Output, State, clientside_callback, dcc, html
from pydantic import Field, validator

from vizro.models import Action, VizroBaseModel
from vizro.models._action._actions_chain import _action_validator_factory
from vizro.models._models_utils import _log_call


class Textarea(VizroBaseModel):
    """Textarea component for Vizro.

    Can be provided to [`Filter`][vizro.models.Filter] or
    [`Parameter`][vizro.models.Parameter]. Based on the underlying
    [`dcc.Textarea`](https://dash.plotly.com/dash-core-components/textarea).

    Args:
        value (Optional[str]): Default value for textarea. Defaults to `None`.
        title (Optional[str]): Title to be displayed. Defaults to `None`.
        actions (List[Action]): See [`Action`][vizro.models.Action]. Defaults to `[]`.
    """

    value: Optional[str] = Field(None, description="Default value for textarea.")
    title: Optional[str] = Field(None, description="Title to be displayed.")
    actions: List[Action] = []

    # Validator for actions, if needed
    _set_actions = _action_validator_factory("value")

    @_log_call
    def build(self):
        output = [Output(f"{self.id}_output", "children")]
        inputs = [Input(f"{self.id}_textarea", "value")]

        clientside_callback(
            ClientsideFunction(namespace="clientside", function_name="update_textarea_output"),
            output=output,
            inputs=inputs,
        )

        return html.Div(
            [
                html.P(self.title) if self.title else None,
                dcc.Textarea(
                    id=f"{self.id}_textarea",
                    value=self.value,
                    style={'width': '100%', 'height': 300},
                ),
                html.Div(
                    id=f"{self.id}_output",
                    style={'whiteSpace': 'pre-line'}
                ),
            ],
            className="textarea_container",
            id=f"{self.id}_outer",
        )
```
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2023-10-06T05:16:26Z | 2024-07-09T15:09:00Z | https://github.com/mckinsey/vizro/issues/99 | [
"Custom Components :rocket:"
] | farshidbalan | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 12,029 | Python 3.8 is eol, remove support from 2.1 | as title says.
Things we could use:
- dict union -> does not really have too much impact since sqlalchemy mostly uses immutable dict
- use lowercase collections -> IIRC I had issues there, so it's likely safer to just wait until 3.10+. Also would make backporting to 2.0 harder | closed | 2024-10-24T15:45:14Z | 2024-11-06T23:56:09Z | https://github.com/sqlalchemy/sqlalchemy/issues/12029 | [
"setup"
] | CaselIT | 4 |
Farama-Foundation/PettingZoo | api | 1,269 | [Pyright] Pyright complains in random_demo.py and average_total_reward.py | ### Describe the bug
When I run `pre-commit run --all-files`, I get the following error:
```
/home/albert-han/PettingZoo/pettingzoo/utils/average_total_reward.py
/home/albert-han/PettingZoo/pettingzoo/utils/average_total_reward.py:37:40 - error: Argument of type "int | list[int] | list[list[int]] | list[list[list[Any]]]" cannot be assigned to parameter "seq" of type "SupportsLenAndGetItem[_T@choice]" in function "choice"
Type "int | list[int] | list[list[int]] | list[list[list[Any]]]" cannot be assigned to type "SupportsLenAndGetItem[_T@choice]"
"int" is incompatible with protocol "SupportsLenAndGetItem[_T@choice]"
"__len__" is not present
"__getitem__" is not present (reportGeneralTypeIssues)
/home/albert-han/PettingZoo/pettingzoo/utils/random_demo.py
/home/albert-han/PettingZoo/pettingzoo/utils/random_demo.py:26:40 - error: Argument of type "int | list[int] | list[list[int]] | list[list[list[Any]]]" cannot be assigned to parameter "seq" of type "SupportsLenAndGetItem[_T@choice]" in function "choice"
Type "int | list[int] | list[list[int]] | list[list[list[Any]]]" cannot be assigned to type "SupportsLenAndGetItem[_T@choice]"
"int" is incompatible with protocol "SupportsLenAndGetItem[_T@choice]"
"__len__" is not present
"__getitem__" is not present (reportGe
````
These pyright errors seem valid to me as `tolist()`'s return type includes a scalar, and `random.choice()` doesn't accept scalar values as arguments.
### Code example
```shell
Following line causes a pyright error:
action = random.choice(np.flatnonzero(obs["action_mask"]).tolist())
```
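For what it's worth, a sketch of a rewrite that should keep pyright happy (my own suggestion, not from the repo): since `np.flatnonzero` always returns a 1-D `ndarray`, indexing it directly avoids the scalar/list union that `tolist()` is typed with.

```python
import random

import numpy as np

obs = {"action_mask": np.array([0, 1, 1, 0])}

# flatnonzero is typed as returning an ndarray, so indexing it keeps a
# precise element type instead of going through tolist()'s union type.
legal = np.flatnonzero(obs["action_mask"])
action = int(legal[random.randrange(len(legal))])
```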
### System info
_No response_
### Additional context
_No response_
### Checklist
- [x] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
| closed | 2025-02-23T21:56:19Z | 2025-02-25T16:35:31Z | https://github.com/Farama-Foundation/PettingZoo/issues/1269 | [
"bug"
] | yjhan96 | 0 |
PaddlePaddle/ERNIE | nlp | 278 | When will the pretraining code for ERNIE 2.0 multi-task learning be released? | closed | 2019-08-12T15:17:45Z | 2020-05-28T11:52:49Z | https://github.com/PaddlePaddle/ERNIE/issues/278 | [
"wontfix"
] | Albert-Ma | 3 | |
HIT-SCIR/ltp | nlp | 401 | batch时报错 | `ltp.seg()`对输入的list有什么限制吗?
在跑大批量的数据时,batch从2-10都会中断报错。只有batch为1的时候才能正常跑完。
单测过一个batch,是没问题。
报错是list out of range
猜想是不是内部会对batch做concat还是啥。 | closed | 2020-08-19T08:41:39Z | 2020-08-21T07:41:31Z | https://github.com/HIT-SCIR/ltp/issues/401 | [] | Damcy | 2 |
ipython/ipython | data-science | 14,038 | Document that `%gui qt` currently forces Qt5 | <!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
Currently, when users select `qt` with `%gui qt` or `%matplotlib qt`, IPython forces matplotlib to switch to `Qt5Agg` first, before matplotlib's `qt_compat.py` kicks in (which, if run on its own, can actually pick up the right Qt bindings installed).
https://github.com/ipython/ipython/blob/396593e7ad8cab3a9c36fb0f3e26cbf79cff069c/IPython/core/pylabtools.py#L26
https://github.com/ipython/ipython/blob/396593e7ad8cab3a9c36fb0f3e26cbf79cff069c/IPython/core/pylabtools.py#L301-L322
This isn't documented and has caused me a lot of confusion and trouble (and will cause further trouble in the future as usage of Qt6 increases over Qt5):
* https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-matplotlib
* https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-gui
I found out about this after spending months trying to work out why `%matplotlib qt` doesn't work when I have Qt6 bindings installed: https://github.com/matplotlib/matplotlib/issues/25673#event-9105154889
I suggest we either explicitly document this in the IPython line magic docs, or we change this default behavior.
Relevant but not duplicate:
* https://github.com/ipython/ipython/issues/13859 | open | 2023-04-26T15:59:31Z | 2023-07-12T13:38:01Z | https://github.com/ipython/ipython/issues/14038 | [] | kwsp | 4 |
widgetti/solara | flask | 306 | fix: typing failures with modern typed libraries | We limit traitlets and matplotlib to avoid lint failures in https://github.com/widgetti/solara/pull/305
We should fix those errors, and unpin the installations in CI. | open | 2023-09-27T09:49:30Z | 2023-10-02T17:10:58Z | https://github.com/widgetti/solara/issues/306 | [
"good first issue",
"help wanted"
] | maartenbreddels | 1 |
huggingface/transformers | tensorflow | 36,854 | Facing RunTime Attribute error while running different Flax models for RoFormer | when running FlaxRoFormerForMaskedLM model, I have encountered an issue as
> AttributeError: 'jaxlib.xla_extension.ArrayImpl' object has no attribute 'split'.
This error is reported in the file `transformers/models/roformer/modeling_flax_roformer.py:265`
The function responsible for this error in that file is shown below:
```
def apply_rotary_position_embeddings(sinusoidal_pos, query_layer, key_layer, value_layer=None):
sin, cos = sinusoidal_pos.split(2, axis=-1)
```
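A less invasive alternative to the private `_split` (my own suggestion, untested against the repo) is the module-level `jnp.split`, which mirrors NumPy's `np.split`. The sketch below uses NumPy as a stand-in; in `modeling_flax_roformer.py` the line would read `sin, cos = jnp.split(sinusoidal_pos, 2, axis=-1)`.

```python
import numpy as np

# Toy positional-embedding tensor standing in for sinusoidal_pos.
sinusoidal_pos = np.arange(24.0).reshape(1, 3, 8)

# jax.numpy.split has the same signature and semantics as np.split:
# split into 2 equal chunks along the last axis.
sin, cos = np.split(sinusoidal_pos, 2, axis=-1)
```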
After changing this particular line from `sinusoidal_pos.split(2, axis=-1)` to `sinusoidal_pos._split(2, axis=-1)`, I no longer get that error.
My observation is that when I replace `split()` with `_split()`, the issue is resolved.
### System Info
My environment details are as below :
> - `transformers` version: 4.49.0
> - Platform: Linux-5.4.0-208-generic-x86_64-with-glibc2.35
> - Python version: 3.10.12
> - Huggingface_hub version: 0.29.3
> - Safetensors version: 0.5.3
> - Accelerate version: not installed
> - Accelerate config: not found
> - DeepSpeed version: not installed
> - PyTorch version (GPU?): 2.6.0+cu124 (False)
> - Tensorflow version (GPU?): not installed (NA)
> - Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu)
> - Jax version: 0.4.36
> - JaxLib version: 0.4.36
I am attaching a screenshot for reference
<img width="1642" alt="Image" src="https://github.com/user-attachments/assets/a488444c-6095-4fc5-a5a0-bc400409d8ba" />
### Who can help?
@gante @Rocketknight1
I am facing this issue for models like:
> FlaxRoFormerForMultipleChoice
> FlaxRoFormerForSequenceClassification
> FlaxRoFormerForTokenClassification
> FlaxRoFormerForQuestionAnswering
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to recreate the error:
Run the below code in any python editor
```
from transformers import AutoTokenizer, FlaxRoFormerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
```
### Expected behavior
The model should run and produce error-free output
"Flax",
"bug"
] | ctr-pmuruganTT | 0 |
harry0703/MoneyPrinterTurbo | automation | 262 | PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'D:\\kevinzhang\\tmp\\MoneyPrinterTurbo\\storage\\tasks\\0fa8fa91-1ca6-49b2-a7a0-60c1c3b7be1f\\combined-1.mp4' | 生成视频之后,调用删除api会报错。重启服务再删除成功。
```
curl -X 'DELETE' \
'http://127.0.0.1:8502/api/v1/tasks/0fa8fa91-1ca6-49b2-a7a0-60c1c3b7be1f' \
-H 'accept: application/json'
```
Log attached:
[2024_04_15.txt](https://github.com/harry0703/MoneyPrinterTurbo/files/14975885/2024_04_15.txt)
| closed | 2024-04-15T08:06:08Z | 2024-04-16T01:04:37Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/262 | [] | KevinZhang19870314 | 2 |
plotly/dash | data-science | 2,946 | [BUG] Component value changing without user interaction or callbacks firing | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Windows 11
- Browser Chrome & Safari
**Describe the bug**
A clear and concise description of what the bug is.
**Expected behavior**
The app has the following layout:
```
session_picker_row = dbc.Row(
[
...
dbc.Col(
dcc.Dropdown(
options=[],
placeholder="Select a session",
value=None,
id="session",
),
),
...
dbc.Col(
dbc.Button(
children="Load Session / Reorder Drivers",
n_clicks=0,
disabled=True,
color="success",
id="load-session",
)
),
],
)
```
The dataflow within callbacks is unidirectional from `session` to `load-session` using the following callback:
```
@callback(
Output("load-session", "disabled"),
Input("season", "value"),
Input("event", "value"),
Input("session", "value"),
prevent_initial_call=True,
)
def enable_load_session(season: int | None, event: str | None, session: str | None) -> bool:
"""Toggles load session button on when the previous three fields are filled."""
return not (season is not None and event is not None and session is not None)
```
I have noticed that sometimes the `n_clicks` property of `load-session`, which starts from 0, goes to 1 and drops back down to 0. Simultaneously, the `value` property of `session` reverts to `None`, which is what I initialize it with. This all happens without any callback firing.
The line causing this behavior is editing a cached (with `dcc.store`) dataframe and doesn't trigger any callback. Might this have something to do with the browser cache?
| closed | 2024-08-10T03:24:28Z | 2024-08-10T16:54:40Z | https://github.com/plotly/dash/issues/2946 | [] | Casper-Guo | 8 |
dpgaspar/Flask-AppBuilder | rest-api | 2,104 | missing .get(col_name) | https://github.com/dpgaspar/Flask-AppBuilder/blob/6130d078b608658eb66e15530502efd309653435/flask_appbuilder/views.py#L463
@dpgaspar as you can see, in the add path you have the `.get(col_name)` lookup:
```python
if self.add_form_query_rel_fields:
filter_rel_fields = self.add_form_query_rel_fields.get(col_name)
```
But it's missing in the edit path!
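Presumably the fix is to mirror the add branch, i.e. `filter_rel_fields = self.edit_form_query_rel_fields.get(col_name)` (assuming the standard `edit_form_query_rel_fields` attribute). A minimal stand-alone sketch of the symmetric per-column lookup, with a hypothetical mapping:

```python
# Hypothetical mapping shaped like add_form_query_rel_fields /
# edit_form_query_rel_fields: column name -> relation filters.
edit_form_query_rel_fields = {"group": [["name", "FilterStartsWith", "W"]]}

def rel_filters_for(col_name, query_rel_fields):
    # Same per-column .get() the add path already performs.
    if query_rel_fields:
        return query_rel_fields.get(col_name)
    return None
```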
Adding it just fixes the issue! | closed | 2023-08-16T18:07:49Z | 2023-10-23T11:43:55Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2104 | [
"bug"
] | gbrault | 1 |
horovod/horovod | machine-learning | 3,240 | One process are worked in two GPUs? | **Environment:**
1. Framework: PyTorch
2. Framework version: I do not know
3. Horovod version: 0.23.0
4. MPI version: 4.0.0
5. CUDA version:11.2
6. NCCL version:2.8.4 + cuda 11.1
7. Python version: 3.8
8. Spark / PySpark version: no
9. Ray version: no
10. OS and version: Ubuntu 18.04
11. GCC version: I do not know
12. CMake version: I do not know
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes but no answer
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? yes, but no answer
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? no
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I use the following Dockerfile to make a docker image:
https://drive.google.com/file/d/1aZAGyqCyBbB7hgR1uHn-KPX98ymLjBMx/view?usp=sharing
And then I run the horovod example: pytorch_mnist.py
But I got the following picture:
<img width="483" alt="b25845e196c2411c2a4b7350da28749" src="https://user-images.githubusercontent.com/30434881/138592927-2f8b2abe-5fe4-4bfd-9746-59c553c3a5f5.png">
It seems that one PID is running on two GPUs, e.g. 801, 802 and 803.
The training process still completes, though.
What can I do about this?
Thank you in advance. | open | 2021-10-24T11:56:55Z | 2021-10-24T12:01:13Z | https://github.com/horovod/horovod/issues/3240 | [
"bug"
] | xml94 | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 110 | Can "integrated charts" be used with streamlit-aggrid? | I can see from the showcase example that e.g. groups and pivot-tables work. Before buying a license, I would like to know if the enterprise "inegrated charts" feature will work when AGGrid is used via Streamlit. | closed | 2022-07-12T14:08:55Z | 2024-04-04T17:53:58Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/110 | [] | fluence-world | 6 |
recommenders-team/recommenders | data-science | 1,217 | [BUG] AttributeError: 'HParams' object has no attribute 'use_entity' | I am using DKN_mind and repeatedly getting this error

any suggestions? | closed | 2020-10-19T15:02:33Z | 2020-10-29T14:13:37Z | https://github.com/recommenders-team/recommenders/issues/1217 | [
"bug"
] | shainaraza | 1 |
QuivrHQ/quivr | api | 3,058 | Filter knowledge by folder type (quivr folder / integrations) | closed | 2024-08-22T14:41:41Z | 2024-10-23T08:06:27Z | https://github.com/QuivrHQ/quivr/issues/3058 | [
"Feature"
] | linear[bot] | 1 | |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 45 | Weight Standardization | ### Papers
* [Micro-Batch Training with Batch-Channel Normalization and Weight Standardization](https://arxiv.org/pdf/1903.10520.pdf) [papers with code](https://paperswithcode.com/paper/weight-standardization)
* [CHARACTERIZING SIGNAL PROPAGATION TO CLOSE THE PERFORMANCE GAP IN UNNORMALIZED RESNETS](https://arxiv.org/pdf/2101.08692.pdf)
* [Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks](https://arxiv.org/pdf/1602.07868.pdf)
| closed | 2021-04-24T10:19:28Z | 2021-06-21T16:08:08Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/45 | [
"paper implementation"
] | vpj | 0 |
babysor/MockingBird | pytorch | 254 | ValueError: operands could not be broadcast together with shapes (2200,) (4000,) (2200,) | Getting this error while running the synthesize() function on Google Colab. Any solution?
| open | 2021-12-08T03:00:09Z | 2022-05-21T03:34:15Z | https://github.com/babysor/MockingBird/issues/254 | [] | joeynmq | 7 |
Farama-Foundation/PettingZoo | api | 1,099 | [Bug Report] Cannot import package: circular import with pettingzoo and gymnasium-robotics | ### Describe the bug
Importing `pettingzoo` crashes when the package `gymnasium-robotics` is installed in the system.
Code to reproduce behavior is included below. It consists of installing both packages and importing pettingzoo.
### Code example
```shell
# Install dependencies
pip install pettingzoo gymnasium-robotics
# Import pettingzoo in Python
>>> import pettingzoo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/dist-packages/pettingzoo/__init__.py", line 4, in <module>
from pettingzoo.utils import AECEnv, ParallelEnv
File "/usr/local/lib/python3.9/dist-packages/pettingzoo/utils/__init__.py", line 2, in <module>
from pettingzoo.utils.average_total_reward import average_total_reward
File "/usr/local/lib/python3.9/dist-packages/pettingzoo/utils/average_total_reward.py", line 7, in <module>
from pettingzoo.utils.env import AECEnv
File "/usr/local/lib/python3.9/dist-packages/pettingzoo/utils/env.py", line 6, in <module>
import gymnasium.spaces
File "/usr/local/lib/python3.9/dist-packages/gymnasium/__init__.py", line 12, in <module>
from gymnasium.envs.registration import (
File "/usr/local/lib/python3.9/dist-packages/gymnasium/envs/__init__.py", line 387, in <module>
load_plugin_envs()
File "/usr/local/lib/python3.9/dist-packages/gymnasium/envs/registration.py", line 592, in load_plugin_envs
fn = plugin.load()
File "/usr/local/lib/python3.9/dist-packages/importlib_metadata/__init__.py", line 209, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.9/dist-packages/gymnasium_robotics/__init__.py", line 6, in <module>
from gymnasium_robotics.envs.multiagent_mujoco import mamujoco_v0
File "/usr/local/lib/python3.9/dist-packages/gymnasium_robotics/envs/multiagent_mujoco/__init__.py", line 12, in <module>
from gymnasium_robotics.envs.multiagent_mujoco.mujoco_multi import ( # noqa: F401
File "/usr/local/lib/python3.9/dist-packages/gymnasium_robotics/envs/multiagent_mujoco/mujoco_multi.py", line 59, in <module>
class MultiAgentMujocoEnv(pettingzoo.utils.env.ParallelEnv):
AttributeError: partially initialized module 'pettingzoo' has no attribute 'utils' (most likely due to a circular import)
# Works if gymnasium robotics is imported first (new Python instance)
>>> import gymnasium_robotics
>>> import pettingzoo
>>>
```
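For context, a generic stand-alone illustration (not these packages) of why the error mentions a "partially initialized module": during a circular import the module object is already registered in `sys.modules`, but its attributes are only bound once its top-level code finishes running.

```python
import sys
import types

# Register a module that has not finished initializing yet.
mod = types.ModuleType("pkg_a")
sys.modules["pkg_a"] = mod

def plugin_load():
    import pkg_a  # returns the partially initialized module object
    return getattr(pkg_a, "utils", None)

before = plugin_load()  # attribute not bound yet -> None
mod.utils = "ready"     # simulate the module finishing its import
after = plugin_load()
```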
### System info
```
Distributor ID: Ubuntu
Description: Ubuntu 20.04.5 LTS
Release: 20.04
Codename: focal
```
```
gymnasium-robotics 1.2.2
pettingzoo 1.24.1
```
```
Python 3.9.15
```
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
| closed | 2023-09-11T11:11:07Z | 2023-09-27T19:17:49Z | https://github.com/Farama-Foundation/PettingZoo/issues/1099 | [
"bug"
] | thomasbbrunner | 4 |
dropbox/PyHive | sqlalchemy | 345 | Peewee ORM support? | Can I use Peewee ORM with PyHive? | closed | 2020-07-02T16:06:50Z | 2020-07-28T23:36:11Z | https://github.com/dropbox/PyHive/issues/345 | [] | wilberh | 1 |
recommenders-team/recommenders | machine-learning | 1,946 | [ASK] I can't getting start | When I tried to get start and type` pip install recommenders[examples]` ,it shows:
Building wheels for collected packages: lightfm, scikit-surprise, Flask-BasicAuth, future
Building wheel for lightfm (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [42 lines of output]
Compiling without OpenMP support.
C:\ProgramData\Anaconda3\envs\recommenders\lib\site-packages\setuptools\dist.py:755: SetuptoolsDeprecationWarning: Invalid dash-separated options
!!
********************************************************************************
Usage of dash-separated 'description-file' will not be supported in future
versions. Please use the underscore name 'description_file' instead.
By 2023-Sep-26, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
opt = self.warn_dash_deprecation(opt, section)
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
creating build\lib.win-amd64-cpython-39\lightfm
copying lightfm\cross_validation.py -> build\lib.win-amd64-cpython-39\lightfm
copying lightfm\data.py -> build\lib.win-amd64-cpython-39\lightfm
copying lightfm\evaluation.py -> build\lib.win-amd64-cpython-39\lightfm
copying lightfm\lightfm.py -> build\lib.win-amd64-cpython-39\lightfm
copying lightfm\_lightfm_fast.py -> build\lib.win-amd64-cpython-39\lightfm
copying lightfm\__init__.py -> build\lib.win-amd64-cpython-39\lightfm
creating build\lib.win-amd64-cpython-39\lightfm\datasets
copying lightfm\datasets\movielens.py -> build\lib.win-amd64-cpython-39\lightfm\datasets
copying lightfm\datasets\stackexchange.py -> build\lib.win-amd64-cpython-39\lightfm\datasets
copying lightfm\datasets\_common.py -> build\lib.win-amd64-cpython-39\lightfm\datasets
copying lightfm\datasets\__init__.py -> build\lib.win-amd64-cpython-39\lightfm\datasets
copying lightfm\_lightfm_fast_no_openmp.c -> build\lib.win-amd64-cpython-39\lightfm
copying lightfm\_lightfm_fast_openmp.c -> build\lib.win-amd64-cpython-39\lightfm
running build_ext
building 'lightfm._lightfm_fast_no_openmp' extension
creating build\temp.win-amd64-cpython-39
creating build\temp.win-amd64-cpython-39\Release
creating build\temp.win-amd64-cpython-39\Release\lightfm
cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\ProgramData\Anaconda3\envs\recommenders\include -IC:\ProgramData\Anaconda3\envs\recommenders\Include /Tclightfm/_lightfm_fast_no_openmp.c /Fobuild\temp.win-amd64-cpython-39\Release\lightfm/_lightfm_fast_no_openmp.obj -ffast-math -march=native
error: command 'cl.exe' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for lightfm
Running setup.py clean for lightfm
error: subprocess-exited-with-error
× python setup.py clean did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
Compiling without OpenMP support.
C:\ProgramData\Anaconda3\envs\recommenders\lib\site-packages\setuptools\dist.py:755: SetuptoolsDeprecationWarning: Invalid dash-separated options
!!
********************************************************************************
Usage of dash-separated 'description-file' will not be supported in future
versions. Please use the underscore name 'description_file' instead.
By 2023-Sep-26, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
opt = self.warn_dash_deprecation(opt, section)
running clean
        error: [WinError 2] The system cannot find the file specified.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed cleaning build dir for lightfm
Building wheel for scikit-surprise (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [105 lines of output]
C:\Users\user\AppData\Local\Temp\pip-install-21zxdjc0\scikit-surprise_62954189362e4de6b574a39d547e2beb\setup.py:65: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!
********************************************************************************
Requirements should be satisfied by a PEP 517 installer.
If you are using pip, you can try `pip install --use-pep517`.
********************************************************************************
!!
dist.Distribution().fetch_build_eggs(["numpy>=1.17.3"])
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
creating build\lib.win-amd64-cpython-39\surprise
copying surprise\accuracy.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\builtin_datasets.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\dataset.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\dump.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\reader.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\trainset.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\utils.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\__init__.py -> build\lib.win-amd64-cpython-39\surprise
copying surprise\__main__.py -> build\lib.win-amd64-cpython-39\surprise
creating build\lib.win-amd64-cpython-39\surprise\model_selection
copying surprise\model_selection\search.py -> build\lib.win-amd64-cpython-39\surprise\model_selection
copying surprise\model_selection\split.py -> build\lib.win-amd64-cpython-39\surprise\model_selection
copying surprise\model_selection\validation.py -> build\lib.win-amd64-cpython-39\surprise\model_selection
copying surprise\model_selection\__init__.py -> build\lib.win-amd64-cpython-39\surprise\model_selection
creating build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\algo_base.py -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\baseline_only.py -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\knns.py -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\predictions.py -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\random_pred.py -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\__init__.py -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
running egg_info
writing scikit_surprise.egg-info\PKG-INFO
writing dependency_links to scikit_surprise.egg-info\dependency_links.txt
writing entry points to scikit_surprise.egg-info\entry_points.txt
writing requirements to scikit_surprise.egg-info\requires.txt
writing top-level names to scikit_surprise.egg-info\top_level.txt
reading manifest file 'scikit_surprise.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE.md'
writing manifest file 'scikit_surprise.egg-info\SOURCES.txt'
C:\ProgramData\Anaconda3\envs\recommenders\lib\site-packages\setuptools\command\build_py.py:201: _Warning: Package 'surprise.prediction_algorithms' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'surprise.prediction_algorithms' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'surprise.prediction_algorithms' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'surprise.prediction_algorithms' to be distributed and are
already explicitly excluding 'surprise.prediction_algorithms' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
copying surprise\similarities.c -> build\lib.win-amd64-cpython-39\surprise
copying surprise\similarities.pyx -> build\lib.win-amd64-cpython-39\surprise
copying surprise\prediction_algorithms\co_clustering.c -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\matrix_factorization.c -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\optimize_baselines.c -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\slope_one.c -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\co_clustering.pyx -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\matrix_factorization.pyx -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\optimize_baselines.pyx -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
copying surprise\prediction_algorithms\slope_one.pyx -> build\lib.win-amd64-cpython-39\surprise\prediction_algorithms
running build_ext
building 'surprise.similarities' extension
creating build\temp.win-amd64-cpython-39
creating build\temp.win-amd64-cpython-39\Release
creating build\temp.win-amd64-cpython-39\Release\surprise
cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Ic:\users\user\appdata\local\temp\pip-install-21zxdjc0\scikit-surprise_62954189362e4de6b574a39d547e2beb\.eggs\numpy-1.25.0-py3.9-win-amd64.egg\numpy\core\include -IC:\ProgramData\Anaconda3\envs\recommenders\include -IC:\ProgramData\Anaconda3\envs\recommenders\Include /Tcsurprise/similarities.c /Fobuild\temp.win-amd64-cpython-39\Release\surprise/similarities.obj
error: command 'cl.exe' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for scikit-surprise
Running setup.py clean for scikit-surprise
Building wheel for Flask-BasicAuth (setup.py) ... done
Created wheel for Flask-BasicAuth: filename=Flask_BasicAuth-0.2.0-py3-none-any.whl size=4261 sha256=26d58eed63f5d93cebf12ba5cae3cd70b66550a7f8cf1c48910e45d8a1d979d4
Stored in directory: c:\users\user\appdata\local\pip\cache\wheels\d4\5a\db\e442580c22be34f69e537448832d7e1ee5a9c5adb63ace30bf
Building wheel for future (setup.py) ... done
Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492055 sha256=b790230f9e1ada98b146c71846ea47735114d1ba36a3e0cce703776203e098cd
Stored in directory: c:\users\user\appdata\local\pip\cache\wheels\bf\5d\6a\2e53874f7ec4e2bede522385439531fafec8fafe005b5c3d1b
Successfully built Flask-BasicAuth future
Failed to build lightfm scikit-surprise
ERROR: Could not build wheels for lightfm, scikit-surprise, which is required to install pyproject.toml-based projects
What should I do? | closed | 2023-06-20T09:15:46Z | 2023-07-11T08:25:28Z | https://github.com/recommenders-team/recommenders/issues/1946 | [
"help wanted"
] | b856741 | 2 |
keras-team/autokeras | tensorflow | 1,874 | Bug: StructuredDataClassifier ignores loss parameter | ### Bug Description
The StructuredDataClassifier trains with default loss and ignores any user input. Even setting the loss to some random string like `loss='this is not a loss'` does not change the behavior (or result in an error).
### Bug Reproduction
```python
import tensorflow as tf
import autokeras as ak
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# Initialize the structured data classifier.
clf = ak.StructuredDataClassifier(
overwrite=True, max_trials=3, loss='mse'
)
# Feed the structured data classifier with training data.
clf.fit(
# The path to the train.csv file.
train_file_path,
# The name of the label column.
"survived",
epochs=10,
)
# Print loss function used during training
model = clf.export_model()
print('loss function:', model.loss)
```
### Expected Behavior
The model is supposed to be trained using the mean squared error, but it is actually being trained using <keras.losses.BinaryCrossentropy>.
### Setup Details
- OS: Windows
- Python: 3
- autokeras: 1.0.20
- keras-tuner: 1.1.3
- scikit-learn: 1.0.2
- numpy: 1.21.5
- pandas: 1.5.1
- tensorflow: 2.9.1
### Additional context
Upon examining the code, it appears that the user input is being overwritten at [this](https://github.com/keras-team/autokeras/blob/5abd2d51396134b1d3e5831adb8d25572f39c003/autokeras/blocks/heads.py#L139) point.
[#1608](https://github.com/keras-team/autokeras/issues/1608) raises a similar issue as a feature request.
| open | 2023-04-04T07:56:09Z | 2023-04-04T07:56:09Z | https://github.com/keras-team/autokeras/issues/1874 | [
"bug report"
] | DF-Damm | 0 |
cobrateam/splinter | automation | 1,079 | 👋 From the Selenium project! | At the Selenium Project we want to collaborate with you and work together to improve the WebDriver ecosystem. We would like to meet you, understand your pain points, and discuss ideas around Selenium and/or WebDriver.
If you are interested, please fill out the form below and we will reach out to you.
https://forms.gle/Z72BmP4FTsM1GKgE6
We are looking forward to hearing from you!
PS: Feel free to close this issue, it was just meant as a way to reach out to you 😄 | closed | 2022-08-04T13:59:14Z | 2022-08-04T14:57:30Z | https://github.com/cobrateam/splinter/issues/1079 | [] | diemol | 0 |
talkpython/data-driven-web-apps-with-flask | sqlalchemy | 3 | Chapter 07 | When the Login button is clicked with empty fields the video shows that password field turns red. This does occur when using Firefox but it does not occur when using Safari on the Mac.
Not a bug per se, but something that may confuse users. | closed | 2019-07-27T23:21:15Z | 2019-07-29T16:38:46Z | https://github.com/talkpython/data-driven-web-apps-with-flask/issues/3 | [] | cmcknight | 1 |
Lightning-AI/pytorch-lightning | deep-learning | 20,572 | auto_scale_batch_size arg not accepted by lightning.Trainer | ### Bug description
The `auto_scale_batch_size` arg is not accepted by `lightning.Trainer`, but is accepted by `pytorch_lightning.Trainer`.
```
Error in call to target 'lightning.pytorch.trainer.trainer.Trainer':
TypeError("Trainer.__init__() got an unexpected keyword argument 'auto_scale_batch_size'")
```
### What version are you seeing the problem on?
v2.5
### How to reproduce the bug
```python
import lightning as L
L.Trainer(auto_scale_batch_size="binsearch")
```
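For context, this flag was removed in Lightning 2.x and the feature moved to the `Tuner` helper. A hedged migration sketch follows (the import path and `scale_batch_size` call are from the 2.x Tuner API; the guard keeps the snippet importable in environments where lightning is absent or broken):

```python
try:
    import lightning as L
    from lightning.pytorch.tuner import Tuner

    def find_batch_size(model, datamodule=None):
        # 2.x replacement for Trainer(auto_scale_batch_size="binsearch");
        # the LightningModule (or datamodule) must expose a `batch_size` attribute
        trainer = L.Trainer(max_epochs=1)
        tuner = Tuner(trainer)
        return tuner.scale_batch_size(model, datamodule=datamodule, mode="binsearch")
except Exception:
    find_batch_size = None  # lightning not installed in this environment
```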
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
lightning 2.5.0.post0 pypi_0 pypi
lightning-bolts 0.7.0 pypi_0 pypi
lightning-utilities 0.11.9 pypi_0 pypi
pytorch-lightning 1.9.5 pypi_0 pypi
torch 2.5.1 pypi_0 pypi
torchmetrics 1.6.1 pypi_0 pypi
torchvision 0.20.1 pypi_0 pypi
python 3.12.8 h5148396_0
#- OS (e.g., Linux): 22.04.2 LTS
#- CUDA/cuDNN version: CUDA 12.0
#- GPU models and configuration: 8x Quadro RTX 6000
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | open | 2025-02-03T22:58:59Z | 2025-02-03T22:59:11Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20572 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | yc-tao | 0 |
biolab/orange3 | data-visualization | 6,312 | Row number as a variable |
**What's your use case?**
When using Select Rows, it should also be possible to select by row number. For example, in some datasets from connected products, the first x rows originate from prototypes or they are otherwise not representative, for instance due to start-up problems. In such cases it is useful to be able to use "row number is greater than" as a row selection criterion.
I have also once encountered a situation where I would have liked to use the row number as a variable in Feature Constructor, but I cannot remember what the exact use case was ...
**What's your proposed solution?**
Make the row number available as a variable in Select Rows and Feature Constructor. Even better, allow use such as
`newvar := existingvar (row - 2) * othervar`
in Feature Constructor to refer to the value of a variable 2 rows back (which will of course not work for the first two rows).
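To pin down the requested semantics, here is the row-number-plus-lag behaviour sketched in plain Python (a hypothetical helper, not Orange API; the first `lag` rows get `None`, matching the caveat above):

```python
def lagged(values, lag=2):
    """Value of a column `lag` rows back; None where no earlier row exists."""
    return [values[i - lag] if i >= lag else None for i in range(len(values))]

def new_var(existingvar, othervar, lag=2):
    """newvar := existingvar(row - 2) * othervar, computed row by row."""
    return [
        None if prev is None else prev * other
        for prev, other in zip(lagged(existingvar, lag), othervar)
    ]
```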
**Are there any alternative solutions?**
Yes:
- use a Python Script as suggested [here](https://discord.com/channels/633376992607076354/822470786346516501/940962165001695242) or
- abuse Melt and Group By as suggested [here](https://discord.com/channels/633376992607076354/822470786346516501/1016643062480511056)
| closed | 2023-01-24T14:22:43Z | 2023-01-25T21:11:59Z | https://github.com/biolab/orange3/issues/6312 | [] | wvdvegte | 5 |
deezer/spleeter | deep-learning | 112 | [Discussion] Why is the model rebuilt every time? | Does anyone want to explain to me why the model is rebuilt every time?
My rationale: since the model can be recycled within a batch, why can't it be saved to a file and loaded the next time a separation is executed?
It just doesn't make sense to me.
I think that the model should be rebuilt only when asked to do so. | closed | 2019-11-18T05:22:40Z | 2019-11-22T18:35:36Z | https://github.com/deezer/spleeter/issues/112 | [
"question"
] | aidv | 2 |
python-gitlab/python-gitlab | api | 2,839 | get_id() returns name of label | ## Description of the problem, including code/CLI snippet
get_id() of a GroupLabel returns the name of the label and not the id.
In the case of a label inherited from a parent group, it is not possible to distinguish between two labels if they have the same name.
## Expected Behavior
I expect to get the value of `label.id`
## Actual Behavior
I receive the value of `label.name`
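A minimal stand-in illustrating the mechanism (in python-gitlab, label objects use `name` as their `_id_attr`, which is what `get_id()` returns; the numeric id remains readable as a plain attribute). This is a sketch mimicking the behaviour, not the library's actual class:

```python
class GroupLabelStub:
    """Mimics python-gitlab's RESTObject id lookup for labels."""
    _id_attr = "name"  # labels are addressed by name, hence the behaviour

    def __init__(self, attrs):
        self._attrs = attrs

    def __getattr__(self, key):
        # fall back to the raw attribute dict, as the real RESTObject does
        return self._attrs[key]

    def get_id(self):
        return self._attrs[self._id_attr]

label = GroupLabelStub({"id": 42, "name": "bug"})
print(label.get_id())  # prints: bug  (the reported behaviour)
print(label.id)        # prints: 42  (numeric id, still available directly)
```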
## Specifications
- python-gitlab version: 3.8.1
- API version you are using (v3/v4):
- Gitlab server version (or gitlab.com): 16.9.2
| open | 2024-04-16T06:26:29Z | 2024-04-16T06:26:29Z | https://github.com/python-gitlab/python-gitlab/issues/2839 | [] | cweber-dbs | 0 |
mkhorasani/Streamlit-Authenticator | streamlit | 174 | Inquiry Regarding Persistent Login Issue | I am writing to inquire about an authentication issue that we have observed in our Streamlit application. Specifically, we have noticed that once User A logs into the system, other individuals are able to access and browse the application in the name of User A, regardless of the computer or device they are using. However, once User A logs out, other users are then required to log in before accessing the application.
We are seeking clarification on the root cause of this behavior. It seems counterintuitive that a user's session would persist across different devices and computers without any form of authentication or session token validation. This poses a significant security risk as it allows unauthorized access to potentially sensitive information.
Here are a few key points that we would like to understand:
How is the session management implemented in Streamlit? Are there any known limitations or vulnerabilities that could explain this behavior?
Are there any specific configuration settings or code changes that we need to make to ensure that sessions are properly isolated and require re-authentication for each user on each device?
Are there any best practices or recommendations that you can provide to strengthen the authentication and session management in our Streamlit application?
We appreciate your assistance in resolving this issue and ensuring the security of our application. Thank you for your time and consideration. | closed | 2024-06-27T10:15:08Z | 2024-06-28T17:34:54Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/174 | [
"help wanted"
] | 3togo | 2 |
pydata/pandas-datareader | pandas | 508 | CDC datasets, WISQARS, WONDER | - https://wonder.cdc.gov/datasets.html
- https://www.cdc.gov/nchs/data_access/ftp_data.htm
- WISQARS/WONDER data comparison:
- https://www.cdc.gov/injury/wisqars/fatal_help/faq.html#WONDER
- https://wonder.cdc.gov/wonder/help/WONDER-API.html
Is there a recommended way to cache whole (compressed) datasets retrievable via FTP?
| open | 2018-03-21T10:30:43Z | 2018-03-21T10:30:43Z | https://github.com/pydata/pandas-datareader/issues/508 | [] | westurner | 0 |
apache/airflow | python | 47,501 | AIP-38 | Add API Endpoint to serve connection types and extra form meta data | ### Body
To be able to implement #47496 and #47497, the connection types and extra form element metadata need to be served by an additional API endpoint.
Note: The extra form parameters should be served in the same structure and format like the DAG params such that the form elements of FlexibleForm can be re-used in the UI.
Assumption is that the needed connection types are serialized in a DB table. (No dependency to providers manager should be added to API server)
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-07T14:54:17Z | 2025-03-12T22:28:20Z | https://github.com/apache/airflow/issues/47501 | [
"kind:feature",
"area:API",
"kind:meta"
] | jscheffl | 0 |
sherlock-project/sherlock | python | 2,423 | Requesting support for: programming.dev | ### Site URL
https://programming.dev
### Additional info
- Link to the site main page: https://programming.dev
- Link to an existing account: https://programming.dev/u/pylapp
- Link to a nonexistent account: https://programming.dev/u/noonewouldeverusethis42
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct | open | 2025-03-05T12:14:42Z | 2025-03-05T12:27:12Z | https://github.com/sherlock-project/sherlock/issues/2423 | [
"site support request"
] | pylapp | 1 |
Asabeneh/30-Days-Of-Python | numpy | 157 | V | V | closed | 2021-04-28T17:10:08Z | 2021-07-05T21:58:51Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/157 | [] | eknatx | 0 |
HumanSignal/labelImg | deep-learning | 424 | JPG format is not supported |
- **OS:**
- **PyQt version:**
| open | 2019-01-04T01:48:03Z | 2023-03-23T02:21:35Z | https://github.com/HumanSignal/labelImg/issues/424 | [] | cuiluguang | 3 |
jeffknupp/sandman2 | sqlalchemy | 59 | how to install it without pip | how to install it on hosts where the flying dependency ``https://pypi.python.org/simple/Flask-HTTPAuth/`` ( when executing ``python setup.py install`` ) can't be accessed due to network administration ?
thanks | closed | 2017-03-24T10:59:34Z | 2017-03-24T21:33:04Z | https://github.com/jeffknupp/sandman2/issues/59 | [] | downgoon | 1 |
miguelgrinberg/python-socketio | asyncio | 520 | HTTP Basic Authentication? | Does SocketIO support authenticating with endpoints which require HTTP basic authentication? I cannot see any indication that it does / does not in documentation.
Thanks! | closed | 2020-07-13T10:14:32Z | 2020-07-13T11:02:13Z | https://github.com/miguelgrinberg/python-socketio/issues/520 | [
"question"
] | 9ukn23nq | 3 |
jina-ai/clip-as-service | pytorch | 194 | 'bert-serving-start' is not recognized as an internal or external command | Hi,
This is a very silly question.....
I have python 3.6.6, tensorflow 1.12.0, doing everything in conda environment, Windows 10.
I pip installed bert-serving-server/client and it shows
`Successfully installed GPUtil-1.4.0 bert-serving-client-1.7.2 bert-serving-server-1.7.2 pyzmq-17.1.2`
but when I run the following as CLI
`bert-serving-start -model_dir /tmp/english_L-12_H-768_A-12/ -num_worker=4`
it says
`'bert-serving-start' is not recognized as an internal or external command`
I found the bert-serving library is located under C:\Users\Name\Anaconda\Lib\site-packages. So I tried to run bert-serving-start again under these three folders:
1. site-packages
2. site-packages\bert_serving
3. site-packages\bert_serving_server-1.7.2.dist-info
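For reference, pip installs console scripts such as `bert-serving-start` into the environment's `Scripts` directory on Windows (`bin` elsewhere), not into `site-packages`, so running it from the package folders cannot work unless that directory is on `PATH`. A stdlib sketch to locate the script:

```python
import os
import shutil
import sys

def find_console_script(name):
    """Return the path of a pip-installed console script, or None."""
    hit = shutil.which(name)  # 1) already reachable via PATH?
    if hit:
        return hit
    # 2) look in the environment's script directory directly
    scripts_dir = os.path.join(sys.prefix, "Scripts" if os.name == "nt" else "bin")
    for candidate in (name, name + ".exe"):
        path = os.path.join(scripts_dir, candidate)
        if os.path.exists(path):
            return path
    return None

print(find_console_script("bert-serving-start"))
# prints the full path if found, or None; if a path is printed, add its
# folder to PATH (or invoke the script by that full path)
```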

However, the result is the same: not recognized. Can anyone help me? | closed | 2019-01-16T15:16:56Z | 2021-10-20T04:26:11Z | https://github.com/jina-ai/clip-as-service/issues/194 | [] | moon-home | 13 |
ultralytics/ultralytics | python | 19,571 | YOLOv11 tuning: best fitness=0.0 observed at all iterations | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I am using the YOLOv11 model. I am able to successfully train with model.train() and infer using model.predict(). However, I am having some difficulty with tuning. Any help would be greatly appreciated; I am stumped as to what the underlying issue could be.
Below is the code I am running for tuning:
```
dataset_path = 'dataset.yaml'
tuning_results = model.tune(data=dataset_path, epochs=30, iterations=5, optimizer="AdamW", plots=True, save=True, val=True)
```
Below is the command window output for the first iteration of tuning. Every subsequent iteration also produces fitness of 0, so the best iteration remains the first one throughout and after the tuning process.
```
Transferred 1009/1015 items from pretrained weights
TensorBoard: Start with 'tensorboard --logdir runs/training/x_e5_is576_oTrue___2', view at http://localhost:6006/
Freezing layer 'model.23.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
AMP: checks passed ✅
train: Scanning /home/user/datasets/labels/train.cache... 48 images, 20
val: Scanning /home/user/datasets/labels/val.cache... 48 images, 20 back
Plotting labels to runs/training/x_e5_is576_oTrue___2/labels.jpg...
optimizer: AdamW(lr=0.01, momentum=0.937) with parameter groups 167 weight(decay=0.0), 174 weight(decay=0.0005), 173 bias(decay=0.0)
TensorBoard: model graph visualization added ✅
Image sizes 576 train, 576 val
Using 8 dataloader workers
Logging results to runs/training/x_e5_is576_oTrue___2
Starting training for 30 epochs...
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
1/30 11.6G 0.4824 301.3 0.1395 20 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
2/30 13.2G 2.527 304.5 0.6108 17 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
3/30 13.2G 2.642 308.9 0.7432 13 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
4/30 13.2G 1.291 307.7 0.3846 12 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
5/30 13.2G 0.9325 1252 0.2793 7 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
6/30 13.1G 3.196 3062 0.9984 16 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
7/30 13G 1.845 3220 0.5244 15 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
8/30 13G 3.086 3258 0.906 13 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
9/30 13G 2.992 5055 0.8702 11 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
10/30 13G 1.05 168.7 0.3335 10 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
11/30 13G 1.363 95.66 0.4132 19 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
12/30 13G 2.445 41.94 0.7493 13 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
13/30 13.1G 1.822 21.96 0.5235 9 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
14/30 13G 4.284 10.96 1.131 10 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
15/30 13G 6.077 7.64 1.456 17 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
16/30 13.1G 4.513 5.696 1.262 12 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
17/30 13G 3.867 5.742 1.157 7 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
18/30 13.2G 4.719 5.728 1.21 15 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
19/30 13.2G 4.554 5.561 1.21 16 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
20/30 13.1G 4.375 5.36 1.16 15 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Closing dataloader mosaic
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
21/30 13.2G 4.509 5.606 1.205 9 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
22/30 12G 4.28 5.265 1.209 10 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
23/30 13.1G 3.73 5.298 1.187 9 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
24/30 13.1G 4.077 5.431 1.177 7 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
25/30 12G 4.047 5.216 1.132 10 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
26/30 12G 3.873 5.164 1.114 10 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
27/30 12G 4.309 5.152 1.238 8 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
28/30 12G 4.014 5.135 1.17 8 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
29/30 13.1G 4.342 5.066 1.144 8 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
30/30 13.1G 4.31 5.147 1.243 12 576: 100%|█
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
30 epochs completed in 0.069 hours.
Optimizer stripped from runs/training/x_e5_is576_oTrue___2/weights/last.pt, 114.4MB
Optimizer stripped from runs/training/x_e5_is576_oTrue___2/weights/best.pt, 114.4MB
Validating runs/training/x_e5_is576_oTrue___2/weights/best.pt...
Ultralytics 8.3.35 🚀 Python-3.10.12 torch-2.5.1+cu124 CUDA:0 (Tesla T4, 14918MiB)
YOLO11x summary (fused): 464 layers, 56,828,179 parameters, 0 gradients, 194.4 GFLOPs
Class Images Instances Box(P R mAP50 mAP50-
all 48 28 0 0 0 0
Speed: 0.1ms preprocess, 15.9ms inference, 0.0ms loss, 0.1ms postprocess per image
Results saved to runs/training/x_e5_is576_oTrue___2
💡 Learn more at https://docs.ultralytics.com/modes/train
VS Code: view Ultralytics VS Code Extension ⚡ at https://docs.ultralytics.com/integrations/vscode
Saved runs/training/tune/tune_scatter_plots.png
Saved runs/training/tune/tune_fitness.png
Tuner: 1/5 iterations complete ✅ (275.02s)
Tuner: Results saved to runs/training/tune
Tuner: Best fitness=0.0 observed at iteration 1
Tuner: Best fitness metrics are {'metrics/precision(B)': 0.0, 'metrics/recall(B)': 0.0, 'metrics/mAP50(B)': 0.0, 'metrics/mAP50-95(B)': 0.0, 'val/box_loss': nan, 'val/cls_loss': nan, 'val/dfl_loss': nan, 'fitness': 0.0}
Tuner: Best fitness model is runs/training/x_e5_is576_oTrue___2
Tuner: Best fitness hyperparameters are printed below.
Printing 'runs/training/tune/best_hyperparameters.yaml'
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3.0
warmup_momentum: 0.8
box: 7.5
cls: 0.5
dfl: 1.5
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
bgr: 0.0
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0
```
Command window output for final iteration:
```
Tuner: 5/5 iterations complete ✅ (1385.57s)
Tuner: Results saved to runs/training/tune
Tuner: Best fitness=0.0 observed at iteration 1
Tuner: Best fitness metrics are {'metrics/precision(B)': 0.0, 'metrics/recall(B)': 0.0, 'metrics/mAP50(B)': 0.0, 'metrics/mAP50-95(B)': 0.0, 'val/box_loss': nan, 'val/cls_loss': nan, 'val/dfl_loss': nan, 'fitness': 0.0}
Tuner: Best fitness model is runs/training/x_e5_is576_oTrue___2
Tuner: Best fitness hyperparameters are printed below.
Printing 'runs/training/tune/best_hyperparameters.yaml'
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3.0
warmup_momentum: 0.8
box: 7.5
cls: 0.5
dfl: 1.5
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
bgr: 0.0
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0
``` | closed | 2025-03-08T00:03:43Z | 2025-03-11T14:40:53Z | https://github.com/ultralytics/ultralytics/issues/19571 | [
"question",
"detect"
] | ss4824 | 5 |
FactoryBoy/factory_boy | sqlalchemy | 869 | Use Pydantic models as the model in Meta | #### The problem
Pydantic is a fast growing library that handles data validation in a very clean way using type hinting. If you are working on a python project that needs to digest and output data models you are likely to use pydantic these days, even more so if you are using fastapi since it uses pydantic to validate json objects by default.
Factory_boy handles different types of schemas, but it does not handle JSON Schema, which can be generated from a pydantic model very easily (at least as far as I know). It does not accept pydantic models as a valid model either, so we cannot use factory_boy to handle data generation with such a well-known library as pydantic.
#### Proposed solution
It would be great if factory_boy accepted a pydantic model in the Meta class the same way it accepts different ORMs. This would allow it to understand the data types and restrictions from the model and generate data that matches that specific model.
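To make the proposal concrete, here is a minimal stdlib sketch of the kind of type-driven generation being requested (hypothetical helper, not factory_boy API; a real implementation would map user declarations the way factory_boy does for ORMs):

```python
import random
import string
from dataclasses import dataclass
from typing import get_type_hints

# trivial per-type generators standing in for Faker/Sequence declarations
_GENERATORS = {
    int: lambda: random.randint(0, 100),
    float: lambda: random.random(),
    bool: lambda: random.choice([True, False]),
    str: lambda: "".join(random.choices(string.ascii_lowercase, k=8)),
}

def build_stub(model_cls):
    """Instantiate model_cls with a generated value for each annotated field.

    Works for anything accepting one keyword argument per field -- a pydantic
    BaseModel or, as below, a plain dataclass.
    """
    hints = get_type_hints(model_cls)
    return model_cls(**{name: _GENERATORS[tp]() for name, tp in hints.items()})

@dataclass
class User:  # stand-in for a pydantic model
    id: int
    name: str

user = build_stub(User)
```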
#### Extra notes
I have been looking around and I could not find anything similar to this and in fact I saw an issue opened in Pydantic in which [the author mentioned](https://github.com/samuelcolvin/pydantic/issues/1652#issuecomment-651186177) factory_boy as a good tool to handle this kind of data generation (as pydantic itself is not intended to that and it would open a different can of worms).
I am not sure if this is something you would be interested in but I am sure that, with the number of people using pydantic either by itself or as part of fastapi, this addition would be very much appreciated.
If there is already a way of producing data models from JSON Schema (which would work as well), please let me know. I have found nothing that does this.
Thanks!
| open | 2021-06-14T07:31:11Z | 2023-09-15T10:38:26Z | https://github.com/FactoryBoy/factory_boy/issues/869 | [
"Feature",
"NeedInfo"
] | jaraqueffdc | 9 |
marimo-team/marimo | data-visualization | 3,748 | API URL discrepancy between Ollama's GitHub example and what works with Marimo. HTTP/404 returned when Ollama example is used... | ### Describe the bug
Hello Friends:
First of all, thank you for this delightful and exciting product. I'm eager to use it. `=:)`
I run `Ollama` in a `podman(1)` container and expose its `11434` port to the `HOST`. That all works fine. For instance, from the `podman(1) HOST` itself (_and from anywhere on my network when I specify the HOST-IP_), this works:
```
user@fedora$ curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{ "model": "qwen2.5:32b-instruct",
"messages": [ { "role": "system", "content": "You are a helpful code assistant."},
{ "role": "user", "content": "What model are you?" }] }'
```
**Output** (_which is correct_):
```
{"id":"chatcmpl-987","object":"chat.completion","created":1739235326,
"model":"qwen2.5:32b-instruct","system_fingerprint":"fp_ollama",
"choices":[{"index":0,"message":{"role":"assistant",
"content":"I am based on a large language model created by
Alibaba Cloud, known as Qwen."},"finish_reason":"stop"}],
"usage":{"prompt_tokens":25,"completion_tokens":19,"total_tokens":44}}
```
However, I just wanted to point out that, while the above `API URL` works for `curl(1)`:
- `http://localhost:11434/v1/chat/completions` (Used with `curl(1)` above)
it does not also work for `Marimo`. In fact, neither do any of these `API URL` variants work - each returning a `HTTP/404` error:
- ❌ `http://localhost:11434`
- ❌ `http://localhost:11434/api`
- ❌ `http://localhost:11434/v1/api`
- ❌ `http://localhost:11434/v1/chat/completions` (Works with `curl(1)` above)
Through trial & error, I discovered that only this variant works with `Marimo`:
- ✅ `http://localhost:11434/v1`
I just wanted to point out this discrepancy relative to the [`Ollama GitHub example`:](https://github.com/ollama/ollama/blob/main/docs/openai.md)
Just a heads-up in case others run into this also.

### Environment
Also, here is the environment:
```
{
"marimo": "0.11.2",
"OS": "Linux",
"OS Version": "6.12.11-200.fc41.x86_64",
"Processor": "",
"Python Version": "3.12.8",
"Binaries": {
"Browser": "133.0.6943.53",
"Node": "v22.11.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.26.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.6",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {
"pandas": "2.2.3",
"pyarrow": "19.0.0"
},
"Experimental Flags": {
"chat_sidebar": true
}
}
```
### Code to reproduce
_No response_ | closed | 2025-02-11T02:00:09Z | 2025-02-11T02:55:52Z | https://github.com/marimo-team/marimo/issues/3748 | [
"bug"
] | nmvega | 2 |
tensorly/tensorly | numpy | 94 | Unable to target GPU with MXNET backend | I need some assistance with targeting GPU with MXNET backend. I attempted to use the Robust PCA API with tensors on GPU. Using the following code, execution is occurring on the CPU cores and not GPU(0). Thanks!
<img width="691" alt="image" src="https://user-images.githubusercontent.com/15822729/50386331-046d7500-06aa-11e9-950d-2d9631828c0f.png">
| closed | 2018-12-23T17:40:08Z | 2018-12-23T18:25:13Z | https://github.com/tensorly/tensorly/issues/94 | [] | DFuller134 | 0 |
slackapi/bolt-python | fastapi | 728 | Add slack reaction, edit previous message even possible ? | Hi fellows,
I'm not sure; maybe the Bolt API does not support it. I can't find any similar examples.
Is it possible to do things like:
1. Add a Slack reaction to the message that my bot has just sent, like adding 1️⃣ 2️⃣ ... ✅ as reactions (I don't want to spam Slack messages)
2. Edit a Slack message that has just been sent by my bot, appending "dots" every minute to show that "progress" is still ongoing...
Thanks in advance
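For what it's worth, both are possible with the Slack Web API: `reactions.add` attaches an emoji reaction to an existing message, and `chat.update` edits a message your bot already sent, identified by the `ts` value returned when it was posted. A sketch of the call pattern, shown against a stub client so it runs standalone (in Bolt you would use the real `client` passed to your listener):

```python
class StubClient:
    """Stand-in for Bolt's WebClient, recording calls for illustration only."""
    def __init__(self):
        self.calls = []

    def chat_postMessage(self, *, channel, text):
        self.calls.append(("chat_postMessage", channel, text))
        return {"channel": channel, "ts": "1700000000.000100"}  # fake message id

    def reactions_add(self, *, channel, timestamp, name):
        self.calls.append(("reactions_add", channel, timestamp, name))

    def chat_update(self, *, channel, ts, text):
        self.calls.append(("chat_update", channel, ts, text))


client = StubClient()

# 1. Post once, then react to our own message instead of posting more messages:
resp = client.chat_postMessage(channel="C123", text="Deploy started")
client.reactions_add(channel=resp["channel"], timestamp=resp["ts"], name="white_check_mark")

# 2. Later, edit the same message (identified by its `ts`) to show progress:
client.chat_update(channel=resp["channel"], ts=resp["ts"], text="Deploy started ...")
```

In a Bolt listener the same three calls exist on the injected `client`; only the channel/`ts` bookkeeping shown here matters.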
| closed | 2022-09-29T16:10:26Z | 2022-09-29T20:35:22Z | https://github.com/slackapi/bolt-python/issues/728 | [
"question"
] | sielaq | 2 |
google-research/bert | nlp | 1,304 | help | Why does the same input batch produce different logits during debugging? | open | 2022-05-03T15:04:04Z | 2022-05-03T15:04:04Z | https://github.com/google-research/bert/issues/1304 | [] | DreamH1gh | 0 |
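A common cause, assumed here since the issue includes no code, is that the model is still in training mode, so stochastic layers such as dropout resample a new mask on every forward pass; in eval mode the output is deterministic. A tiny pure-Python illustration of the effect:

```python
import random

def forward(batch, *, training, rng):
    """Toy 'layer': dropout with keep probability 0.5, active only in training mode."""
    if not training:
        return batch
    return [v if rng.random() < 0.5 else 0.0 for v in batch]

batch = [0.3, 1.2, -0.7, 2.0]
rng = random.Random(0)

train_out_1 = forward(batch, training=True, rng=rng)
train_out_2 = forward(batch, training=True, rng=rng)   # same batch, new random mask
eval_out_1 = forward(batch, training=False, rng=rng)
eval_out_2 = forward(batch, training=False, rng=rng)

print(train_out_1 == train_out_2)  # False with this seed: the masks differ
print(eval_out_1 == eval_out_2)    # True: eval mode is deterministic
```

In TensorFlow/BERT terms this corresponds to running inference with `is_training=False` (or the framework's eval mode) so dropout is disabled.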
glumpy/glumpy | numpy | 149 | ModuleNotFoundError: No module named 'glumpy.ext.sdf.sdf' | when trying to run `examples/font-sdf.py`:
```
Traceback (most recent call last):
File ".\Graph.py", line 26, in <module>
labels.append(text, regular, origin = (x,y,z), scale=scale, anchor_x="left")
File "C:\python\Python36\lib\site-packages\glumpy\graphics\collections\sdf_glyph_collection.py", line 76, in append
V, I = self.bake(text, font, anchor_x, anchor_y)
File "C:\python\Python36\lib\site-packages\glumpy\graphics\collections\sdf_glyph_collection.py", line 128, in bake
glyph = font[charcode]
File "C:\python\Python36\lib\site-packages\glumpy\graphics\text\sdf_font.py", line 75, in __getitem__
self.load('%c' % charcode)
File "C:\python\Python36\lib\site-packages\glumpy\graphics\text\sdf_font.py", line 130, in load
data,offset,advance = self.load_glyph(face, charcode)
File "C:\python\Python36\lib\site-packages\glumpy\graphics\text\sdf_font.py", line 82, in load_glyph
from glumpy.ext.sdf import compute_sdf
File "C:\python\Python36\lib\site-packages\glumpy\ext\sdf\__init__.py", line 5, in <module>
from .sdf import *
ModuleNotFoundError: No module named 'glumpy.ext.sdf.sdf'
``` | open | 2018-04-16T18:42:12Z | 2018-04-23T07:07:23Z | https://github.com/glumpy/glumpy/issues/149 | [] | Axel1492 | 1 |
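The missing `glumpy.ext.sdf.sdf` is a compiled (Cython) extension module, so the likely cause is that the extension was never built for this interpreter; reinstalling from source with Cython available (`pip install cython`, then `pip install --no-binary glumpy glumpy`) is a plausible remedy, though I have not verified it on Windows. A quick stdlib check to confirm whether a compiled submodule can even be located:

```python
import importlib.util

def is_importable(name: str) -> bool:
    """True if `name` can be located by the import machinery, without importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:   # a parent package is missing entirely
        return False

print(is_importable("json"))                # True on any CPython
print(is_importable("glumpy.ext.sdf.sdf"))  # False when the compiled extension is absent
```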
strawberry-graphql/strawberry | graphql | 2,815 | relay: conflict with GlobalId of strawberry_django_plus | I get the error:
strawberry.exceptions.scalar_already_registered.ScalarAlreadyRegisteredError: Scalar `GlobalID` has already been registered
I think it is because I use strawberry_django_plus and the new relay implementation.
I don't know on which side to report this.
"bug"
] | devkral | 4 |
psf/black | python | 3,918 | Formatting one-tuples as multi-line if already multi-line | **Describe the style change**
Black excludes one-tuples `(1,)` and single-item subscripts with trailing comma `tuple[int,]` from magic comma handling because, unlike in list literals, the comma here is required, so it cannot signal whether the user wants single-line or multi-line formatting.
The single-line format chosen by Black is the desired behavior for "actual" one-tuples, but is not the desired behavior for "currently 1 item but maybe more in the future" tuples.
**Examples in the current _Black_ style**
Given **input**:
```python
class WidgetAdmin(admin.ModelAdmin):
readonly_fields = (
'id',
)
fields = ('foo',)
```
the formatting is:
```python
class WidgetAdmin(admin.ModelAdmin):
readonly_fields = ("id",)
fields = ("foo",)
```
**Desired style**
I would like Black to have a special case for one-tuples (and one-subscripts), distinguishing between the newline and no-newline cases. Black would use the multiline format if the input is multiline.
```python
class WidgetAdmin(admin.ModelAdmin):
readonly_fields = (
"id",
)
fields = ("foo",)
```
This adds a new form of context sensitivity in addition to magic trailing comma, but I think it makes sense since it plugs a hole in the magic trailing comma handling.
**Additional context**
Working on moving some large projects to use Black, this is my major gripe. The problem with the current style is that it removes the git-diff friendliness of the multi-line format, which magic trailing comma normally handles nicely.
Common examples for us are in Django Admin classes (like above), another is in `Literal`s, but we have them in *a lot* of different cases.
Known workarounds:
1. Switch to list literals. Not great because lists are mutable, heavier than tuples, and sometimes a tuple specifically is needed.
2. Add a "forcing" comment:
```py
readonly_fields = (
"id",
#
)
```
I think it's not very pretty.
Backward compat: it breaks compat in the sense that arbitrary input will get different output. But given already-formatted input, the output isn't changed. This adheres to the [Stability Policy](https://black.readthedocs.io/en/latest/the_black_code_style/index.html#stability-policy) if I understand it correctly. | open | 2023-10-03T13:12:47Z | 2023-11-04T21:26:08Z | https://github.com/psf/black/issues/3918 | [
"T: style"
] | bluetech | 6 |
zappa/Zappa | django | 925 | [Migrated] fails to pip install zappa | Originally from: https://github.com/Miserlou/Zappa/issues/2191 by [AndroLee](https://github.com/AndroLee)
"pip install zappa" returns an error
## Context
Collecting zappa
Using cached zappa-0.52.0-py3-none-any.whl (114 kB)
Requirement already satisfied: wheel in c:\users\leeandr\pycharmprojects\mychatbot2\venv\lib\site-packages (from zappa) (0.36.1)
Requirement already satisfied: pip>=9.0.1 in c:\users\leeandr\pycharmprojects\mychatbot2\venv\lib\site-packages (from zappa) (20.3.3)
Collecting kappa==0.6.0
Using cached kappa-0.6.0.tar.gz (29 kB)
ERROR: Command errored out with exit status 1:
command: 'c:\users\leeandr\pycharmprojects\mychatbot2\venv\scripts\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\leeandr\\AppData\\Local\\Temp\\pip-install-qbrjlaxm\\kappa_53d6de6b432849b3bdabd76fb0731947\\setup.py'"'"'; __file__='"'"'C:\\Users\\leeandr\\AppData\\Local\\Temp\\pip-install-qbrjlaxm\\kappa_53d6de6b432849b3bdabd76fb0731947\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\leeandr\AppData\Local\Temp\pip-pip-egg-info-s73bwltt'
cwd: C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\setup.py", line 54, in <module>
run_setup()
File "C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\setup.py", line 22, in run_setup
long_description=open_file('README.rst').read(),
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 2339: illegal multibyte sequence
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
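The traceback points at kappa's `setup.py` opening its README without an explicit encoding, so Python falls back to the locale codec (`cp950` on Traditional Chinese Windows) and fails on UTF-8 bytes. A minimal reproduction of the decode failure and the fix pattern (setting `PYTHONUTF8=1` before running pip is a commonly suggested workaround, untested here):

```python
# kappa's README contains UTF-8 punctuation; an em dash encodes to bytes e2 80 94,
# matching the 0xe2 the traceback complains about.
utf8_bytes = "legal \u2014 notice".encode("utf-8")

try:
    utf8_bytes.decode("cp950")   # what a bare open(path).read() does under a cp950 locale
except UnicodeDecodeError as exc:
    print(exc)                   # 'cp950' codec can't decode byte 0xe2 ...

# The fix inside setup.py is to pin the codec explicitly:
text = utf8_bytes.decode("utf-8")
assert "\u2014" in text
```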
## Expected Behavior
pip install is successful.
## Actual Behavior
It returns a failure as above.
## Possible Fix
NA
## Steps to Reproduce
Not sure if a Traditional Chinese version of Windows matters but it is what I have.
## Your Environment
* Zappa version used: installing latest one
* Operating System and Python version: 3.9
* The output of `pip freeze`: none
* Link to your project (optional):
* Your `zappa_settings.json`:
| closed | 2021-02-20T13:24:36Z | 2022-07-16T05:30:26Z | https://github.com/zappa/Zappa/issues/925 | [] | jneves | 1 |
amdegroot/ssd.pytorch | computer-vision | 337 | Error in training | Error in training
iter 900 || Loss: 6.6272 || timer: 0.1010 sec.
iter 910 || Loss: 7.0335 || timer: 0.1023 sec.
iter 920 || Loss: 6.6000 || timer: 0.1001 sec.
iter 930 || Loss: 6.7137 || timer: 0.1013 sec.
iter 940 || Loss: 6.9450 || timer: 0.1027 sec.
iter 950 || Loss: 6.5815 || timer: 0.1038 sec.
iter 960 || Loss: 6.8804 || timer: 0.1021 sec.
iter 970 || Loss: 6.6749 || timer: 0.1279 sec.
iter 980 || Loss: 6.4802 || timer: 0.1018 sec.
iter 990 || Loss: 6.1978 || timer: 0.1184 sec.
iter 1000 || Loss: 6.7934 || timer: 0.1019 sec.
iter 1010 || Loss: 6.5664 || timer: 0.1028 sec.
iter 1020 || Loss: 6.6167 || timer: 0.0977 sec.
iter 1030 || Loss: 6.3809 || Traceback (most recent call last):
File "train.py", line 261, in <module>
train()
File "train.py", line 166, in train
images, targets = next(batch_iterator)
File "/home/xlm/anaconda3/envs/M2Det/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 627, in __next__
raise StopIteration
StopIteration
when I change the batch size and the learning rate, the iteration at which it stops changes, but it is always
< 1200 | open | 2019-05-05T12:14:26Z | 2019-07-09T08:35:27Z | https://github.com/amdegroot/ssd.pytorch/issues/337 | [] | xlm998 | 3 |
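The trace above shows `next(batch_iterator)` being called after the `DataLoader` is exhausted: the loop runs for a fixed number of iterations, but one pass over the dataset here yields fewer than 1200 batches. A common fix, hedged since it depends on how `train.py` builds its loop, is to re-create the iterator on `StopIteration`:

```python
def infinite_batches(make_iterator):
    """Yield batches forever, restarting the underlying iterator after each epoch."""
    iterator = make_iterator()
    while True:
        try:
            yield next(iterator)
        except StopIteration:
            iterator = make_iterator()   # epoch finished: start a new pass

# Stand-in for `lambda: iter(data_loader)`; a real DataLoader is used the same way.
batches = infinite_batches(lambda: iter([("img0", "tgt0"), ("img1", "tgt1")]))
seen = [next(batches) for _ in range(5)]   # happily crosses epoch boundaries
print(seen[0], seen[4])
```

In the training loop, `images, targets = next(batch_iterator)` would then draw from such a generator instead of a single-epoch iterator.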
graphql-python/graphene-django | graphql | 616 | Is it possible to use AsyncExecutor? | Hello!
How can I use AsyncExecutor with **graphene-django?**
I try this setup:
```
# types.py
class Source(graphene.ObjectType):
value = graphene.String()
# queries.py
class SourcesQuery(graphene.ObjectType):
sources = graphene.List(
of_type=Source
)
async def resolve_sources(self, info):
await asyncio.sleep(0.0001)
return [Source(value='foo'), Source(value='bar')]
# backend.py
class CustomBackend(GraphQLCoreBackend):
def __init__(self, *args, **kwargs):
self.execute_params = {
"executor": AsyncioExecutor(),
"return_promise": False,
}
# urls.py
path("graphql", GraphQLView.as_view(
backend=CustomBackend(),
schema=graphene.Schema(query=SourcesQuery),
)),
```
But this didn't work.
What is wrong? | closed | 2019-04-08T12:39:33Z | 2020-09-22T12:49:07Z | https://github.com/graphql-python/graphene-django/issues/616 | [] | artinnok | 5 |
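One likely culprit, offered as an assumption since I have not run this against graphene-django: `CustomBackend.__init__` replaces the base initializer without calling `super().__init__()`, so whatever state `GraphQLCoreBackend` normally sets up is missing. A stub sketch of why that matters and the safer pattern:

```python
class BaseBackend:
    """Stand-in for GraphQLCoreBackend: the parent sets state the subclass must keep."""
    def __init__(self, executor=None):
        self.document_cache = {}                       # dropped if super() is never called
        self.execute_params = {"executor": executor}

class BrokenBackend(BaseBackend):
    def __init__(self):                                # mirrors the snippet above
        self.execute_params = {"executor": "asyncio"}  # no super().__init__()!

class FixedBackend(BaseBackend):
    def __init__(self, executor=None):
        super().__init__(executor)                     # parent state initialized first
        self.execute_params["return_promise"] = False

broken = BrokenBackend()
fixed = FixedBackend(executor="asyncio")
print(hasattr(broken, "document_cache"))  # False: the parent initializer never ran
print(fixed.execute_params)               # {'executor': 'asyncio', 'return_promise': False}
```

With the real classes, the equivalent would be calling `super().__init__()` first and then adjusting `self.execute_params`, rather than rebuilding the dict from scratch.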
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 850 | Encoder zero output | Hi blue-fish:
Follow-up on #776. I set up an environment on Linux according to the guidance posted here, re-processed everything, and started training on LibriSpeech train-other plus vox1 and vox2 with 768/256. The loss converges slightly faster than some other folks' results here. But when I examined the embedding output after 160,000 iterations, I found that it consists predominantly of zeroes. Is that normal? I uploaded a UMAP.

| closed | 2021-09-16T19:59:20Z | 2021-09-20T18:13:04Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/850 | [] | ARKEYTECT | 1 |
jofpin/trape | flask | 153 | ERROR in app: Exception on /register [POST] | Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask_cors/extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/user/trape/core/user.py", line 98, in register
db.sentences_victim('insert_victim', [victimConnect, vId, time.time()], 2)
File "/home/user/trape/core/db.py", line 153, in sentences_victim
return self.sql_insert(self.prop_sentences_victim(type, data))
File "/home/user/trape/core/db.py", line 55, in sql_insert
self.conn.commit()
OperationalError: disk I/O error
Any fix to this? Is the error related to Flask? | open | 2019-05-03T06:55:32Z | 2019-08-15T16:01:02Z | https://github.com/jofpin/trape/issues/153 | [] | cr4shcod3 | 1 |
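The `OperationalError: disk I/O error` above is raised by SQLite during `self.conn.commit()`, not by Flask. Typical causes are a database file on a read-only or full filesystem, a stale `-journal` file, or a lock held by another process. A quick stdlib probe you can point at the suspected database file (the commented path is hypothetical):

```python
import sqlite3

def check_writable(db_path):
    """Return True if SQLite can create a table and commit at db_path."""
    try:
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS _probe (x INTEGER)")
        conn.commit()   # the call that fails in the trape traceback
        conn.execute("DROP TABLE _probe")
        conn.commit()
        conn.close()
        return True
    except sqlite3.OperationalError:
        return False

print(check_writable(":memory:"))            # True: in-memory is always writable
# print(check_writable("core/database.db"))  # hypothetical location of trape's DB file
```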
ultralytics/ultralytics | deep-learning | 18,752 | What is the effect of cropped objects after cropping training images? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hi,
As far as I know, images may be cropped during training. As a result, objects near the edges of an image may be cropped too (as in the picture below). In tiny object detection, when a tiny object is cropped, we cannot tell what the cropped region actually is. So what is the effect of these cropped objects on training? Will the model be trained to detect the cropped object rather than the whole object, or are these cropped objects ignored during training?

### Additional
_No response_ | open | 2025-01-18T12:38:16Z | 2025-01-20T11:13:53Z | https://github.com/ultralytics/ultralytics/issues/18752 | [
"question",
"detect"
] | sayyaradonis1 | 4 |
Yorko/mlcourse.ai | matplotlib | 345 | /assignments_demo/assignment04_habr_popularity_ridge.ipynb - Typo in the assignment text | "Initialize DictVectorizer with default parameters.
Apply the fit_transform method to X_train['title'] and the transform method to X_valid['title'] and X_test['title']"
This is most likely a typo: it should be X_train[feats], X_valid[feats], X_test[feats]
"minor_fix"
] | pavel-petkun | 1 |