| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
adbar/trafilatura | web-scraping | 644 | Validate value of `output_format` in `extract()` and `bare_extraction()` | On the command-line the number of supported output formats is limited and the input is validated. In Python an invalid output format will silently default to TXT. | closed | 2024-07-16T16:11:06Z | 2024-07-18T16:49:11Z | https://github.com/adbar/trafilatura/issues/644 | [
"enhancement"
] | adbar | 0 |
Lightning-AI/pytorch-lightning | pytorch | 19,803 | Current FSDPPrecision does not support custom scaler for 16-mixed precision | ### Bug description

`self.precision` here inherits from parent class `Precision`, so it is always "32-true"

The subsequent definition of `self.scaler` also assigns `None` if the `scaler is not None` and `precision == "16-mixed"`.

Is this intentional or a bug?
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-23T04:14:07Z | 2024-04-23T04:14:07Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19803 | [
"bug",
"needs triage"
] | SongzhouYang | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 417 | modifications to backref made inside _wrap_with_default_query_class(fn, cls) get lost | ```
def _wrap_with_default_query_class(fn, cls):
@functools.wraps(fn)
def newfn(*args, **kwargs):
_set_default_query_class(kwargs, cls)
if "backref" in kwargs:
backref = kwargs['backref']
if isinstance(backref, string_types):
backref = (backref, {})
_set_default_query_class(backref[1], cls)
return fn(*args, **kwargs)
return newfn
```
In the code above `backref` is extracted from `kwargs`, modified if it's a string type, but the modifications are never reintroduced to `kwargs`.
| closed | 2016-08-19T18:03:12Z | 2022-10-03T00:22:03Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/417 | [] | ekoka | 1 |
mwaskom/seaborn | data-visualization | 2,835 | Two bugs in catplot with kind="swarm" | ```python
sns.catplot(
tips, x="day", y="total_bill", col="time",
height=3, kind="swarm",
)
```

Issues:
- A palette is mapped to the x variable (`catplot` probably has some logic to do this to override `FacetGrid` passing `color`?)
- The points on the left facet are not swarmed.
Interestingly, the latter issue does not arise when using `FacetGrid` directly:
```python
sns.FacetGrid(tips, col="time").map(sns.swarmplot, "day", "total_bill")
```

| closed | 2022-06-04T21:44:25Z | 2022-06-11T16:07:00Z | https://github.com/mwaskom/seaborn/issues/2835 | [
"bug",
"mod:categorical"
] | mwaskom | 0 |
taverntesting/tavern | pytest | 452 | How to test an API which requires Public key id and a pem file for authorization | I'm trying to test an API using the Tavern framework. The authorization required when testing the API from Python code is an API key ID and a pem file (which contains the private key). Both of these are used to encrypt the request, and the public API key is sent in the header.
I have tried using the 'cert' tag along with the 'auth' tag, but the test always fails, as the response from the application is
`{"error": {"status_code": 401, "status": "Unauthorized"}}` instead of 200 OK.
The yaml file is included below.
Is there a mistake in the yaml file? And are there any other ways in Tavern with which I can authorize the request using the pem file and API key?
Edit : Attaching the yaml file for the correct indentation
<img width="557" alt="tavern_file" src="https://user-images.githubusercontent.com/24609616/65417837-a75d4600-de18-11e9-82dc-4adccc74f238.png">
| closed | 2019-09-23T10:00:46Z | 2019-12-03T11:29:26Z | https://github.com/taverntesting/tavern/issues/452 | [] | anuragc1729 | 5 |
seleniumbase/SeleniumBase | web-scraping | 2,993 | uc mode doesn't pass the CF in Ubuntu system | I attempted to bypass CF Turnstile with the following code. It works without any issues on the Windows system, but it always fails on the Linux system. My SeleniumBase version is 4.29.6.
```python
with SB(uc=True, incognito=True, test=True, rtf=True, agent=ua, disable_features="UserAgentClientHint", proxy=proxy, headed=True, xvfb=True) as sb:
try:
url = "https://fr.tlscontact.com/oauth2/authorization/oidc"
sb.uc_open_with_reconnect(url, 4)
sb.uc_gui_click_captcha() # Ready if needed!
sb.uc_gui_click_cf()
sb.wait_for_element('div[class="tls-form"]', timeout=30)
except Exception as e:
sb.save_screenshot("log.png", folder="./downloaded_files")
sb.assert_downloaded_file("log.png")
print('\n"%s/%s" was saved!' % ("downloaded_files", "log.png"))
sb.save_page_source('./downloaded_files/log.html')
print('\n"%s/%s" was saved!' % ("downloaded_files", "log.html"))
```
I also tried the following code:
```
with SB(uc=True, test=True, incognito=True, proxy=proxy, agent=agent) as sb:
sb.driver.uc_open_with_reconnect("https://fr.tlscontact.com/oauth2/authorization/oidc", 15)
sb.switch_to_frame("iframe")
sb.execute_script('document.querySelector("input").focus()')
save_log(sb, '1')
sb.send_keys("html", "\t")
sb.driver.disconnect()
pyautogui.press(" ")
sb.sleep(10)
sb.driver.reconnect(4)
sb.sleep(10)
#sb.assert_element("img#captcha-success", timeout=3)
print(sb.get_current_url())
sb.set_messenger_theme(location="top_left")
sb.post_message("SeleniumBase wasn't detected", duration=3)
save_log(sb, '2')
```
However, it resulted in the following error:
```
Traceback (most recent call last):
File "/home/ubuntu/t3.py", line 134, in <module>
sb.switch_to_frame("iframe")
File "/home/ubuntu/tlss/lib/python3.10/site-packages/seleniumbase/fixtures/base_case.py", line 3496, in switch_to_frame
if isinstance(frame, str) and self.is_element_visible(frame):
File "/home/ubuntu/tlss/lib/python3.10/site-packages/seleniumbase/fixtures/base_case.py", line 1434, in is_element_visible
self.wait_for_ready_state_complete()
File "/home/ubuntu/tlss/lib/python3.10/site-packages/seleniumbase/fixtures/base_case.py", line 4540, in wait_for_ready_state_complete
self._check_browser()
File "/home/ubuntu/tlss/lib/python3.10/site-packages/seleniumbase/fixtures/base_case.py", line 8985, in _check_browser
raise NoSuchWindowException("Active window was already closed!")
selenium.common.exceptions.NoSuchWindowException: Message: Active window was already closed!
```
this is my system environment:
(tlss) ubuntu@ip-172-31-42-185:~$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=24.04
DISTRIB_CODENAME=noble
DISTRIB_DESCRIPTION="Ubuntu 24.04 LTS"
PRETTY_NAME="Ubuntu 24.04 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
please kindly help :(
| closed | 2024-08-04T18:01:42Z | 2024-08-04T19:46:56Z | https://github.com/seleniumbase/SeleniumBase/issues/2993 | [
"invalid usage",
"UC Mode / CDP Mode"
] | yingrulinn | 1 |
sebastianruder/NLP-progress | machine-learning | 166 | what about unsupervised tasks, such as topic modelling? | open | 2018-11-20T21:00:21Z | 2018-12-21T09:49:38Z | https://github.com/sebastianruder/NLP-progress/issues/166 | [] | shgidi | 3 | |
matplotlib/matplotlib | matplotlib | 29,192 | [Bug]: Can't "import matplotlib.pyplot as plt" | ### Bug summary
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[43], line 2
1 # Plot the data points
----> 2 plt.scatter(x_train, y_train, marker='x', c='r')
3 # Set the title
4 plt.title("Housing Prices")
File D:\Anaconda\Lib\site-packages\matplotlib\_api\__init__.py:226, in caching_module_getattr.<locals>.__getattr__(name)
224 if name in props:
225 return props[name].__get__(instance)
--> 226 raise AttributeError(
227 f"module {cls.__module__!r} has no attribute {name!r}")
AttributeError: module 'matplotlib' has no attribute 'scatter'

### Code for reproduction
```Python
# Plot the data points
plt.scatter(x_train, y_train, marker='x', c='r')
# Set the title
plt.title("Housing Prices")
# Set the y-axis label
plt.ylabel('Price (in 1000s of dollars)')
# Set the x-axis label
plt.xlabel('Size (1000 sqft)')
plt.show()
```
### Actual outcome
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[43], line 2
1 # Plot the data points
----> 2 plt.scatter(x_train, y_train, marker='x', c='r')
3 # Set the title
4 plt.title("Housing Prices")
File D:\Anaconda\Lib\site-packages\matplotlib\_api\__init__.py:226, in caching_module_getattr.<locals>.__getattr__(name)
224 if name in props:
225 return props[name].__get__(instance)
--> 226 raise AttributeError(
227 f"module {cls.__module__!r} has no attribute {name!r}")
AttributeError: module 'matplotlib' has no attribute 'scatter'

### Expected outcome
a graph
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
None | closed | 2024-11-26T03:29:42Z | 2024-11-30T04:45:54Z | https://github.com/matplotlib/matplotlib/issues/29192 | [
"Community support"
] | ilxy2 | 3 |
dsdanielpark/Bard-API | nlp | 169 | Will ask_about_image and tts support conversation_id? | I try to use `ask_about_image` with `conversation_id`, but it always creates a new conversation! | closed | 2023-08-19T10:44:40Z | 2023-08-19T11:11:39Z | https://github.com/dsdanielpark/Bard-API/issues/169 | [] | kogakisaki | 1 |
marshmallow-code/flask-marshmallow | rest-api | 242 | Python3.10: Distutils depreciation warning | Couldn't find an issue for this yet, but in py3.10 there's a warning about distutils. Should be an easy transition.
```
/usr/local/lib/python3.10/site-packages/flask_marshmallow/__init__.py:10: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.version import LooseVersion
``` | closed | 2022-02-21T04:21:04Z | 2023-02-24T16:22:40Z | https://github.com/marshmallow-code/flask-marshmallow/issues/242 | [] | tgross35 | 1 |
stanford-oval/storm | nlp | 328 | [BUG] LM configurations lost when reloading a saved CoSTORM runner | ## Description
When saving a CoStormRunner session and attempting to reload it later, the language model configurations are not properly preserved. The session loads with default LM configurations instead of the ones that were used in the original session. This causes inconsistency in behavior between original and reloaded sessions.
Additionally, sensitive authentication information was potentially being serialized with the LM configurations, creating a security risk.
## Steps to Reproduce
1. Create a CoStormRunner instance with custom LM configurations
2. Run a partial session (e.g., through warm start or a few rounds of conversation)
3. Save the session using `to_dict()`
4. Try to reload the session using `from_dict()`
5. Observe that LM configurations are reset to default values
## Expected Behavior
When reloading a saved session, all LM configurations should be preserved exactly as they were in the original session, while properly handling sensitive authentication data.
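A sketch of the serialization behavior this report asks for (class and field names here are illustrative, not CoStormRunner's real API): persist the LM settings on save, deliberately drop the credential, and re-supply it on load.

```python
class LMConfig:
    """Illustrative stand-in for an LM configuration object."""

    def __init__(self, model, temperature, api_key=None):
        self.model = model
        self.temperature = temperature
        self.api_key = api_key  # sensitive: must never be serialized

    def to_dict(self):
        # Persist tunables only; the credential is deliberately omitted.
        return {"model": self.model, "temperature": self.temperature}

    @classmethod
    def from_dict(cls, data, api_key=None):
        # The credential is re-supplied at load time, not read from disk.
        return cls(data["model"], data["temperature"], api_key=api_key)
```

Round-tripping through `to_dict()`/`from_dict()` then preserves the configuration exactly while keeping authentication data out of the saved session.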
## Environment
- OS: Ubuntu 22.04
- Python version: 3.10
| open | 2025-03-03T17:57:03Z | 2025-03-03T17:57:03Z | https://github.com/stanford-oval/storm/issues/328 | [] | JamesHWade | 0 |
ranaroussi/yfinance | pandas | 1,610 | The historic data is not correct for stocks. Data does not match with Yahoo Finance website also. Website is showing correct data. | ### Are you up-to-date?
Yes
### Does Yahoo actually have the data?
Yes, And the data on the website looks accurate.
### Are you spamming Yahoo?
I'm trying to find ATH of all the stocks within Nifty 500 index.
### Still think it's a bug?
Yes, For example, below is the code to find ATH of INFY.NS. Since, I know that it made it's ATH around 17 Jan 2022. I am using time range to get the data -
```python
import yfinance as yf
stock = yf.Ticker("INFY.NS")
history = stock.history(start="2022-01-05", end="2022-01-20")
yf.enable_debug_mode()
print("Yahoo Finance Version", yf.__version__)
print("***********")
print(history)
print("All time HIGH", history["High"].max())
print("All time HIGH Date", history[history["High"] == history["High"].max()].index[0])
```
Output shows
```
Yahoo Finance Version 0.2.24
***********
Open High Low Close Volume Dividends Stock Splits
Date
2022-01-05 00:00:00+05:30 1810.738793 1813.502575 1753.557568 1757.989136 6995719 0.0 0.0
2022-01-06 00:00:00+05:30 1742.121151 1742.121151 1715.436582 1732.400391 6449205 0.0 0.0
2022-01-07 00:00:00+05:30 1730.160822 1749.745437 1721.917288 1729.064941 4834389 0.0 0.0
2022-01-10 00:00:00+05:30 1729.732011 1782.052854 1727.825970 1763.802490 7857560 0.0 0.0
2022-01-11 00:00:00+05:30 1767.852894 1782.148200 1763.230768 1768.424683 5142287 0.0 0.0
2022-01-12 00:00:00+05:30 1781.099914 1800.255602 1772.618009 1789.248169 5362535 0.0 0.0
2022-01-13 00:00:00+05:30 1815.503807 1822.651460 1778.669547 1807.689087 14277630 0.0 0.0
2022-01-14 00:00:00+05:30 1793.584466 1842.188509 1792.631446 1838.709961 7688506 0.0 0.0
2022-01-17 00:00:00+05:30 1847.477856 1862.106697 1839.805995 1848.383179 5262464 0.0 0.0
2022-01-18 00:00:00+05:30 1831.800470 1853.624660 1821.650780 1830.513916 3690315 0.0 0.0
2022-01-19 00:00:00+05:30 1825.987220 1825.987220 1775.334158 1779.336914 5747770 0.0 0.0
All time HIGH 1862.1066965763564
All time HIGH Date 2022-01-17 00:00:00+05:30
```
It says the ATH is 1862.1066965763564 but actually the ATH is 1953.90 . The website shows it correct - https://finance.yahoo.com/quote/INFY.NS/history?period1=1641340800&period2=1642636800&interval=1d&filter=history&frequency=1d&includeAdjustedClose=true
As I see the Open, High, Low, Close data is shown wrong by the yfinance module.
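A plausible explanation, offered as an assumption rather than a confirmed diagnosis: `Ticker.history()` back-adjusts OHLC for dividends and splits by default (`auto_adjust=True`), while the website table shows raw prices, so passing `history(..., auto_adjust=False)` should reproduce the site's numbers. The adjustment itself is a multiplicative rescale of every price before each ex-dividend date, sketched here in plain Python:

```python
def back_adjust(closes, dividend, ex_idx):
    # Scale every price before the ex-dividend date by
    # (1 - dividend / close_on_day_before_ex), the usual
    # total-return adjustment factor that yfinance applies.
    factor = 1.0 - dividend / closes[ex_idx - 1]
    return [c * factor if i < ex_idx else c for i, c in enumerate(closes)]
```

Over INFY's cumulative dividends since January 2022, this kind of scaling would shrink the reported high from 1953.90 toward the ~1862 value seen above.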
| closed | 2023-07-16T12:08:32Z | 2023-09-03T14:26:43Z | https://github.com/ranaroussi/yfinance/issues/1610 | [] | sourabhgupta385 | 2 |
deepfakes/faceswap | machine-learning | 864 | The Python script is not working in Linux Mint. | The full details of the system:
```
RELEASE=19.1
CODENAME=tessa
EDITION="Xfce"
DESCRIPTION="Linux Mint 19.1 Tessa"
DESKTOP=Gnome
TOOLKIT=GTK
NEW_FEATURES_URL=https://www.linuxmint.com/rel_tessa_xfce_whatsnew.php
RELEASE_NOTES_URL=https://www.linuxmint.com/rel_tessa_xfce.php
USER_GUIDE_URL=https://www.linuxmint.com/documentation.php
GRUB_TITLE=Linux Mint 19.1 Xfce
```
The actual error (for any parameter to faceswap.py)
```
Traceback (most recent call last):
File "faceswap.py", line 5, in <module>
import lib.cli as cli
File "/home/januario/Documents/faceswap/lib/cli.py", line 16, in <module>
from lib.logger import crash_log, log_setup
File "/home/januario/Documents/faceswap/lib/logger.py", line 5, in <module>
from logging.handlers import QueueHandler, QueueListener, RotatingFileHandler
ImportError: cannot import name QueueHandler
```
Yeah I could try Docker or simply installing this on Windows, but thought it would be a good idea to report 🤷 | closed | 2019-09-06T08:18:49Z | 2019-09-06T09:43:33Z | https://github.com/deepfakes/faceswap/issues/864 | [] | januarionclx | 1 |
allenai/allennlp | data-science | 4,823 | Add the Gaussian Error Linear Unit as an Activation option | The [Gaussian Error Linear Unit](https://arxiv.org/pdf/1606.08415.pdf) activation is currently not a possible option from the set of registered Activations. Since this class just directly called the PyTorch classes - adding this in is a 1 line addition. Motivation is that models like BART/BERT use this activation in many places and elegant consistency of activation function across models that are "something pretrained" + "more weights trained on AllenNLP" would be nice.
**Describe the solution you'd like**
Add the following snippet to the end of the [Activations class](https://github.com/allenai/allennlp/blob/master/allennlp/nn/activations.py) class
```
"gelu": (torch.nn.GELU, None),
```
**Describe alternatives you've considered**
Manually hardcoding the activation. This isn't very robust, and modules such as FeedForward complain since Gelu isn't a registered activation to insert between layers (as far as I can tell).
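For reference, the exact GELU from the cited paper is x * Phi(x), where Phi is the standard normal CDF; a quick pure-Python version (matching what `torch.nn.GELU` computes by default, i.e. the erf form rather than the tanh approximation):

```python
import math

def gelu(x: float) -> float:
    # Exact Gaussian Error Linear Unit: x * Phi(x), where Phi is the
    # standard normal CDF (Hendrycks & Gimpel, 2016).
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```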
Thanks - happy to submit a tiny PR for this | closed | 2020-11-26T18:20:18Z | 2020-12-02T04:04:05Z | https://github.com/allenai/allennlp/issues/4823 | [
"Feature request"
] | tomsherborne | 1 |
janosh/pymatviz | plotly | 106 | Periodic table heatmap raises error for values = 1 for `log=True` | I get an error when using `ptable_heatmap_plotly` with a dataset with element prevalence = 1.
Maybe we could modify the following logic to allow displaying that?
```python
if log and values.dropna()[values != 0].min() <= 1:
smaller_1 = values[values <= 1]
raise ValueError(
"Log color scale requires all heat map values to be > 1 since values <= 1 "
f"map to negative log values which throws off the color scale. Got "
f"{smaller_1.size} values <= 1: {dict(smaller_1)}"
)
```
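One possible relaxation, sketched in plain Python rather than as an actual patch to pymatviz: reject only values strictly below 1, since log10(1) == 0 is still a valid position on the color scale.

```python
def check_log_values(values: dict) -> None:
    # values maps element symbol -> heatmap value; mirrors the pandas
    # check in plain Python, but with a strict `< 1` comparison so that
    # an element prevalence of exactly 1 is allowed on a log scale.
    smaller_1 = {k: v for k, v in values.items() if v < 1}
    if smaller_1:
        raise ValueError(
            "Log color scale requires all heatmap values to be >= 1. Got "
            f"{len(smaller_1)} values < 1: {smaller_1}"
        )
```

With this check, the dataset above (all element prevalences equal to 1.0) would render instead of raising.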
This is the error I get:
```py
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[61], line 3
1 from pymatviz import ptable_heatmap_plotly
----> 3 ptable_heatmap_plotly(df_grouped_formula['formula'], log=True)
File /opt/conda/lib/python3.10/site-packages/pymatviz/ptable.py:552, in ptable_heatmap_plotly(values, count_mode, colorscale, showscale, heat_mode, precision, hover_props, hover_data, font_colors, gap, font_size, bg_color, color_bar, cscale_range, exclude_elements, log, fill_value, label_map, **kwargs)
550 if log and values.dropna()[values != 0].min() <= 1:
551 smaller_1 = values[values <= 1]
--> 552 raise ValueError(
553 "Log color scale requires all heat map values to be > 1 since values <= 1 "
554 f"map to negative log values which throws off the color scale. Got "
555 f"{smaller_1.size} values <= 1: {dict(smaller_1)}"
556 )
558 if heat_mode in ("fraction", "percent"):
559 # normalize heat values
560 clean_vals = values.replace([np.inf, -np.inf], np.nan).dropna()
ValueError: Log color scale requires all heat map values to be > 1 since values <= 1 map to negative log values which throws off the color scale. Got 8 values <= 1: {'Al': 1.0, 'Ti': 1.0, 'Mn': 1.0, 'Fe': 1.0, 'Y': 1.0, 'Te': 1.0, 'Pt': 1.0, 'Au': 1.0}
```
| closed | 2023-11-28T10:13:19Z | 2023-11-29T08:54:08Z | https://github.com/janosh/pymatviz/issues/106 | [
"enhancement",
"plotly",
"ptable"
] | Pepe-Marquez | 3 |
deepspeedai/DeepSpeed | deep-learning | 7,117 | safe_get_full_grad & safe_set_full_grad | deepspeed 0.15.3
zero 3 is used
For "safe_get_full_grad", does it return the same gradient values on each process/rank?
As for "safe_set_full_grad", should it be called on all the processes/ranks, or is calling it on just one of them enough?
If it's the former, will users need to ensure that the gradient values set on each process/rank are the same?
Also, which float type should be used for "safe_set_full_grad"? Is there any way to check this? | open | 2025-03-09T10:10:19Z | 2025-03-21T22:12:20Z | https://github.com/deepspeedai/DeepSpeed/issues/7117 | [] | ProjectDisR | 3 |
yeongpin/cursor-free-vip | automation | 213 | [Discussion]: We're experiencing high demand for Claude 3.5 Sonnet right now. Please upgrade to Pro, switch to the 'default' model, which finds an available Premium model, try sonnet 3.7, or try again in a few moments. | ### Issue Checklist
- [x] I understand that Issues are used to provide feedback and solve problems, not to complain in the comments section, and will provide more information to help solve the problem.
- [x] I confirm that I need to raise questions and discuss problems, not Bug feedback or demand suggestions.
- [x] I have read [Github Issues](https://github.com/yeongpin/cursor-free-vip/issues) and searched for existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar issues.
### Platform
Windows x64
### Version
lastest
### Your question
Sonnet 3.7 high load is common. But Sonnet 3.5 high load? That's crazy. How can I fix it?
### Additional information
```shell
```
### Priority
Low (I'll look at it when I have time) | closed | 2025-03-12T14:10:43Z | 2025-03-13T03:49:26Z | https://github.com/yeongpin/cursor-free-vip/issues/213 | [
"question"
] | OHOHAI | 3 |
pyppeteer/pyppeteer | automation | 202 | process response based on conditions | closed | 2020-12-21T09:38:47Z | 2020-12-21T16:04:54Z | https://github.com/pyppeteer/pyppeteer/issues/202 | [] | gulfpearl | 0 | |
allenai/allennlp | pytorch | 4,840 | Support SWA - Stochastic Weight Averaging optimizer | **Is your feature request related to a problem? Please describe.**
The PyTorch starting from version 1.6 support a set of tools to implement the Stochastic Weight Averaging (SWA) technique.
https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/
They suggest that SWA can drastically improve the generalization performance and demonstrated to have a strong performance in several areas.
[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018.
**Describe the solution you'd like**
The SWA training it is not a drop-on replacement. It requires some very minor changes to the training loop. It would be nice to natively support a SWA training.
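PyTorch 1.6 exposes this as `torch.optim.swa_utils` (`AveragedModel`, `SWALR`, `update_bn`); the core averaging rule itself is just an equal-weight running mean over parameter snapshots, sketched framework-free below:

```python
class SWAAverager:
    """Equal-weight running average of parameter snapshots
    (the update rule from Izmailov et al., 2018)."""

    def __init__(self):
        self.avg = None   # averaged parameters
        self.n = 0        # number of snapshots folded in so far

    def update(self, params):
        if self.avg is None:
            self.avg = list(params)
        else:
            # w_swa <- (w_swa * n + w) / (n + 1)
            self.avg = [(a * self.n + p) / (self.n + 1)
                        for a, p in zip(self.avg, params)]
        self.n += 1
```

In a trainer, `update` would be called once per epoch after a chosen `swa_start`, and BatchNorm statistics recomputed at the end (what `update_bn` does in PyTorch).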
| open | 2020-12-04T16:29:59Z | 2021-12-09T03:56:33Z | https://github.com/allenai/allennlp/issues/4840 | [
"Contributions welcome",
"Feature request"
] | bratao | 5 |
Anjok07/ultimatevocalremovergui | pytorch | 1,691 | I Have A issue When Trying To Extract Vocals From A Song | This Error keeps Coming Up Every Single Time Even though It Worked The Other Day.
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Could not allocate tensor with 1272936384 bytes. There is not enough GPU video memory available!"
Traceback Error: "
File "UVR.py", line 6890, in process_start
File "separate.py", line 668, in seperate
File "separate.py", line 841, in demix
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "lib_v5\bs_roformer.py", line 520, in forward
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "lib_v5\bs_roformer.py", line 226, in forward
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "lib_v5\bs_roformer.py", line 127, in forward
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "lib_v5\attend.py", line 95, in forward
File "lib_v5\attend.py", line 74, in flash_attn
"
Error Time Stamp [2025-01-01 20:46:04]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: True
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: True
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: False
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_save_to_input_path: False
is_task_complete: False
is_normalization: True
is_use_opencl: True
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_24
device_set: NVIDIA GeForce GTX 1060 6GB:0
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2025-01-01T20:46:27Z | 2025-01-15T13:21:08Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1691 | [] | Bambam0304 | 1 |
AirtestProject/Airtest | automation | 956 | After running test cases with pytest, the exitfunc function raises an error | After running test cases with pytest, the exitfunc function raised an error.
Environment: Python 3.9
Test framework: pytest
The log is as follows:
testcases/testlogin.py::TestLogin::test_login --- Logging error ---
Traceback (most recent call last):
File "D:\Python39\lib\logging\__init__.py", line 1086, in emit
stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
============================= 1 passed in 45.01s ==============================
File "D:\Python39\lib\site-packages\airtest\utils\snippet.py", line 92, in exitfunc
_cleanup()
File "D:\Python39\lib\site-packages\airtest\utils\snippet.py", line 64, in _cleanup
func(*args, **kwargs)
File "D:\Python39\lib\site-packages\airtest\core\android\adb.py", line 1569, in cleanup_adb_forward
adb._cleanup_forwards()
File "D:\Python39\lib\site-packages\airtest\core\android\adb.py", line 830, in _cleanup_forwards
self.remove_forward(local)
File "D:\Python39\lib\site-packages\airtest\core\android\adb.py", line 549, in remove_forward
self.cmd(cmds)
File "D:\Python39\lib\site-packages\airtest\core\android\adb.py", line 179, in cmd
proc = self.start_cmd(cmds, device)
File "D:\Python39\lib\site-packages\airtest\core\android\adb.py", line 147, in start_cmd
LOGGING.debug(" ".join(cmds))
Message: 'D:\\Python39\\Lib\\site-packages\\airtest\\core\\android\\static\\adb\\windows\\adb.exe -s e3b7d428 forward --remove tcp:15324'
Arguments: ()
------------------
The test code is as follows:
```python
from common.commonbase import CommonBasePage


class TestLogin(object):
    c = None

    def setup(self):
        self.c = CommonBasePage('Android:///', 'com.jinxin.namibox')
        self.c.closeapp()
        # install("path/to/your/apk")

    def teardown(self):
        pass

    def test_login(self):
        self.c.startapp()
        self.c.sleepProcess(30)
        e = self.c.findElementById('com.jinxin.namibox:id/tab_item_text3')
        self.c.elementClick(e)
``` | closed | 2021-08-18T06:48:41Z | 2021-09-30T07:39:09Z | https://github.com/AirtestProject/Airtest/issues/956 | [] | kuailel45 | 1 |
pytest-dev/pytest-cov | pytest | 417 | LocalPath has no attribute startswith in pytest_load_initial_conftests | # Summary
In my latest version of xdoctest my dashboards are failing in pytest-cov with the error:
```python
@pytest.mark.tryfirst
def pytest_load_initial_conftests(early_config, parser, args):
options = early_config.known_args_namespace
no_cov = options.no_cov_should_warn = False
for arg in args:
# arg = str(arg)
if arg == '--no-cov':
no_cov = True
> elif arg.startswith('--cov') and no_cov:
E AttributeError: 'LocalPath' object has no attribute 'startswith'
/home/joncrall/.local/conda/envs/py38/lib/python3.8/site-packages/pytest_cov/plugin.py:121: AttributeError
```
The traceback can be seen here:
https://app.circleci.com/pipelines/github/Erotemic/xdoctest/360/workflows/81333393-b945-4d01-9714-4039305e7dce/jobs/1856/steps
## Expected vs actual result
I'm not sure if this is a pytest-cov error or not; pytest fixtures are incredibly hard to debug, as they don't really allow for IPython embedding, and you can't make instances of them without running inside the pytest entry point. For whatever reason, the `args` passed to the pytest-cov function:
```python
@pytest.mark.tryfirst
def pytest_load_initial_conftests(early_config, parser, args):
options = early_config.known_args_namespace
no_cov = options.no_cov_should_warn = False
for arg in args:
if arg == '--no-cov':
no_cov = True
elif arg.startswith('--cov') and no_cov:
options.no_cov_should_warn = True
break
if early_config.known_args_namespace.cov_source:
plugin = CovPlugin(options, early_config.pluginmanager)
early_config.pluginmanager.register(plugin, '_cov')
```
includes a non-string LocalPath object, which does not have the startswith method. If we simply force the arg to be a string by adding: `arg = str(arg)` as the first line of the loop everything works as expected.
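A runnable sketch of that workaround: the loop from `pytest_load_initial_conftests` with the coercion added, plus a minimal stand-in for `py.path.local` (which, like the real class, is stringifiable but lacks `startswith`).

```python
class LocalPath:
    """Minimal stand-in for py.path.local: stringifiable, no startswith()."""

    def __init__(self, path):
        self._path = path

    def __str__(self):
        return self._path


def no_cov_should_warn(args):
    # Same logic as the plugin hook, with the str() coercion applied first.
    no_cov = False
    for arg in args:
        arg = str(arg)  # LocalPath (or any path-like) becomes a plain string
        if arg == '--no-cov':
            no_cov = True
        elif arg.startswith('--cov') and no_cov:
            return True
    return False
```

With the coercion, an argument list containing a `LocalPath` no longer raises `AttributeError`, and the warning logic behaves as before for plain strings.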
Any advice on if this is a pytest-cov bug, a pytest-bug, or a bug in the way I'm using them in xdoctest would be greatly appreciated. I can't seem to find anything in xdoctest that would trigger it, but I also don't see this error report anywhere else, so my suspicion is that its something on my end. But I guess it could also be that I'm the first one to find this, so I figured it wouldn't hurt to submit a bug report.
# Reproducer
This can be reproduced by
```
mkdir -p $HOME/tmp
cd $HOME/tmp
git clone git@github.com:Erotemic/xdoctest.git -b dev/0.13.0 $HOME/tmp/xdoctest_0_13_0
cd $HOME/tmp/xdoctest_0_13_0
pip install -r requirements.txt -U
pytest testing/test_plugin.py::TestXDoctestModuleLevel::test_collect_module_two_doctest_no_modulelevel
```
## Versions
```
(py38) joncrall@Ooo:~/tmp/xdoctest_0_13_0$ pytest --version
This is pytest version 5.4.3, imported from /home/joncrall/.local/conda/envs/py38/lib/python3.8/site-packages/pytest/__init__.py
setuptools registered plugins:
pytest-cov-2.10.0 at /home/joncrall/.local/conda/envs/py38/lib/python3.8/site-packages/pytest_cov/plugin.py
pytest-timeout-1.3.4 at /home/joncrall/.local/conda/envs/py38/lib/python3.8/site-packages/pytest_timeout.py
```
| closed | 2020-07-02T14:35:25Z | 2020-07-12T12:52:03Z | https://github.com/pytest-dev/pytest-cov/issues/417 | [] | Erotemic | 2 |
davidsandberg/facenet | tensorflow | 1,062 | Difference between proposed inception resnet v2 and tensorflow's repo inception resnet v2 model | I tried to train my dataset with the TensorFlow repo's Inception ResNet v2 model, but it yields different results than your Inception model. I am not familiar with the slim package, as I started learning deep learning with tf.keras. Can you explain what makes your model different and why it yields "correct" results? @davidsandberg | closed | 2019-07-29T18:10:44Z | 2022-03-29T12:02:33Z | https://github.com/davidsandberg/facenet/issues/1062 | [] | christk1 | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,645 | [Bug]: Prompt in textarea cannot be drag and drop since 1.9.0 | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Drag and drop of prompt text in the textarea has not worked since 1.9.0rc; it works in 1.8.0.
### Steps to reproduce the problem
1. Type "masterpiece, highly details, ultra quality" in prompt
2. Select masterpiece and drag it between "highly details" and "ultra quality"
3. Nothing happened
### What should have happened?
"masterpiece" between "highly details" and "ultra quality"
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-04-27-20-12.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15139759/sysinfo-2024-04-27-20-12.json)
### Console logs
```Shell
dragdrop.js?1713798036.0:97
GET https://127.0.0.1:8080/masterpiece 404 (Not Found)
(anonymous) @ dragdrop.js?1713798036.0:97
dragdrop.js?1713798036.0:99 Error fetching URL: masterpiece 404
(anonymous) @ dragdrop.js?1713798036.0:99
```
### Additional information
_No response_ | closed | 2024-04-27T20:23:04Z | 2024-06-08T12:51:40Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15645 | [
"bug"
] | xdejiko | 3 |
davidteather/TikTok-Api | api | 844 | KeyError: 'hasMore' | Whenever I run this
`tiktoks = api.trending()`
It returns this error. Also, I am bad at coding, so please make it simple for me :)
```
Traceback (most recent call last):
  File "C:\Users\AdminNoMicrosoft\Downloads\automated-yt-channel-main\main.py", line 13, in <module>
    import procedures.DownloadVideos as download_tiktok
  File "C:\Users\AdminNoMicrosoft\Downloads\automated-yt-channel-main\procedures\DownloadVideos.py", line 15, in <module>
    tiktoks = api.trending()
  File "C:\Users\AdminNoMicrosoft\AppData\Local\Programs\Python\Python39\lib\site-packages\TikTokApi\tiktok.py", line 413, in by_trending
    if not res["hasMore"] and not first:
KeyError: 'hasMore'
```
I expected it to return the trending videos.
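For what it's worth, the crash mechanism is just a plain dictionary lookup on a response that lacks the key. An illustrative snippet (the `res` dict below is made up; the real fix would have to live inside TikTokApi's `by_trending`, since `res` is built there):

```python
# Hypothetical response payload missing the "hasMore" key, as in the traceback.
res = {"statusCode": 0, "itemList": []}

try:
    has_more = res["hasMore"]  # same access pattern as tiktok.py line 413
except KeyError:
    has_more = None  # raises KeyError: 'hasMore'

# A defensive alternative: default to False instead of crashing.
safe_has_more = res.get("hasMore", False)
print(has_more, safe_has_more)  # → None False
```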
- OS: [e.g. Windows 10]
- TikTokApi Version [e.g. 5.0.0] - if out of date upgrade before posting an issue
| closed | 2022-02-27T03:52:34Z | 2023-08-08T15:16:58Z | https://github.com/davidteather/TikTok-Api/issues/844 | [
"bug"
] | me-rgb | 4 |
ansible/awx | django | 14,969 | AWX Community Meeting Agenda - March 2024 | # AWX Office Hours
## Proposed agenda based on topics
@thedoubl3j postgres 15 [upgrade](https://forum.ansible.com/t/awx-postgres-15-support-container-readiness-checks-available/4262)
@AlanCoding progress update/plan on RBAC changes
@fosterseth https://github.com/ansible/awx-operator/pull/1674 - readiness check and migration change
@TheRealHaoLiu Bind EE images version with DEFAULT_AWX_VERSION https://github.com/ansible/awx-operator/pull/1740
@TheRealHaoLiu VScode Debugging demo #14942
## What
After a successful Contributor Summit in October 2023, one of the bits of feedback we got was to host a regular time for the Automation Controller (AWX) team to be available for you folks in the AWX Community, so we are happy to announce a new regular video meeting.
This kind of feedback loop is vital to the success of AWX and the AWX team wants to make it as easy as possible for you - our community - to get involved.
## Where & When
Our next meeting will be held on Tuesday, March 12th, 2024 at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)
* [Google Meet](https://meet.google.com/vyk-dfow-cfi)
* Via Phone PIN: 842522378 [Guide](https://support.google.com/meet/answer/9518557)
This meeting is held once a month, on the second Tuesday of the month, at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)
## How
Add one topic per comment in this GitHub issue
If you don't have a GitHub account, jump on [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) on Matrix and we can add the topic for you
## Talk with us
As well as the monthly video meeting, you can join the community (including the development team) on Matrix chat.
* Matrix: [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) (recommended)
* libera.chat IRC: `#ansible-awx` (If you are already setup on IRC)
The Matrix & IRC channels are bridged, you'll just have a better experience on Matrix
## Links
[AWX YouTube Channel](https://www.youtube.com/@ansible-awx)
[Previous Meeting](https://github.com/ansible/awx/issues/14756)
[Meeting recording](https://youtu.be/i1Kd6ts0RNk)
[Next Meeting](https://github.com/ansible/awx/issues/15062)
See you soon!
| closed | 2024-03-07T17:55:11Z | 2024-04-10T19:05:53Z | https://github.com/ansible/awx/issues/14969 | [] | TheRealHaoLiu | 6 |
scikit-learn/scikit-learn | data-science | 30,339 | DOC: clarify the documentation for the loss functions used in GBRT, and Absolute Error in particular. | ### Describe the bug
From my understanding, currently there is no way to minimize the MAE (Mean Absolute Error). Quantile regression with quantile=0.5 will optimize for the Median Absolute Error. This would be different from optimizing the MAE when the conditional distribution of the response variable is not symmetrically-distributed.
https://github.com/scikit-learn/scikit-learn/blob/46a7c9a5e4fe88dfdfd371bf36477f03498a3390/sklearn/_loss/loss.py#L574-L577
**What I expect**
- Using `HistGradientBoostingRegressor(loss="absolute_error")` should optimize for the mean of absolute errors.
- Using `HistGradientBoostingRegressor(loss="quantile", quantile=0.5)` should optimize for the median of absolute errors.
```python
if sample_weight is None:
return np.mean(y_true, axis=0)
else:
return _weighted_mean(y_true, sample_weight)
```
**What happens**
Both give the same results
- Using `HistGradientBoostingRegressor(loss="absolute_error")` optimizes for the median of absolute errors
- Using `HistGradientBoostingRegressor(loss="quantile", quantile=0.5)` optimizes for the median of absolute errors
**Suggested Actions**
If this is intended behavior:
- Feel free to close this issue as resolved.
- Kindly add a note in the documentation that "absolute_error optimizes for the median absolute error, not the mean absolute error", as the name "absolute_error" is not very clear on its own.
- I would appreciate more explanation on using custom loss functions (#21614). That way, we could optimize for mean absolute error, median absolute error, log-cosh, etc. as per the requirement.
**Note**
I have tried my best to go through the documentation prior to creating this issue. I am a fresh graduate in Computer Science, and if you believe this issue is not well-framed due to a misunderstanding of my concepts, kindly advise me and I'll work on it.
### Steps/Code to Reproduce
```python
# Imports
from sklearn.ensemble import HistGradientBoostingRegressor
import numpy as np
# Dataset Generation
x = np.linspace(start=0, stop=10, num=100)
n_repeat = 100 # no of x for each x
X = np.repeat(x, n_repeat)[:, np.newaxis]
y_true_mean = 1 * np.repeat(x, n_repeat)
noise = np.random.RandomState(0).lognormal(mean=0, sigma=1, size=y_true_mean.shape[0])
y_noisy = y_true_mean + noise
# Model Creation
mae = HistGradientBoostingRegressor(loss="absolute_error") # should be mean of absolute errors
quantile = HistGradientBoostingRegressor(loss="quantile", quantile=0.5) # should be median of absolute errors
# Fit & Prediction
y_pred_mae = mae.fit(X, y_noisy).predict(X)
y_pred_quantile = quantile.fit(X, y_noisy).predict(X)
# Prediction Comparison
print((y_pred_mae - y_pred_quantile).sum()) # both give same results
```
### Expected Results
Median and mean of absolute errors should give different results for a log-normally distributed response. Hence, the predictions should be different from each other, and the difference of their predictions, should total as a non-zero value.
### Actual Results
Predictions by both models are the same, which can be seen in the difference of their predictions, totaling as 0.
```shell
0.
```
### Versions
```shell
System:
python: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
executable: /usr/bin/python3
machine: Linux-6.1.85+-x86_64-with-glibc2.35
Python dependencies:
sklearn: 1.5.2
pip: 24.1.2
setuptools: 75.1.0
numpy: 1.26.4
scipy: 1.13.1
Cython: 3.0.11
pandas: 2.2.2
matplotlib: 3.8.0
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 2
prefix: libopenblas
filepath: /usr/local/lib/python3.10/dist-packages/numpy.libs/libopenblas64_p-r0-0cf96a72.3.23.dev.so
version: 0.3.23.dev
threading_layer: pthreads
architecture: Haswell
user_api: blas
internal_api: openblas
num_threads: 2
prefix: libopenblas
filepath: /usr/local/lib/python3.10/dist-packages/scipy.libs/libopenblasp-r0-01191904.3.27.so
version: 0.3.27
threading_layer: pthreads
architecture: Haswell
user_api: openmp
internal_api: openmp
num_threads: 2
prefix: libgomp
filepath: /usr/local/lib/python3.10/dist-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
version: None
``` | open | 2024-11-23T19:46:07Z | 2024-12-03T10:45:13Z | https://github.com/scikit-learn/scikit-learn/issues/30339 | [
"Documentation"
] | AhmedThahir | 13 |
scikit-learn-contrib/metric-learn | scikit-learn | 312 | TypeError: _inplace_paired_L2() missing 2 required positional arguments: 'A' and 'B' | #### Description
I get this error: `TypeError: _inplace_paired_L2() missing 2 required positional arguments: 'A' and 'B'`
#### Steps/Code to Reproduce
Example:
```python
from sklearn.datasets import make_friedman1
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
def friedman_np_to_df(X,y):
return pd.DataFrame(X,columns=['x0','x1', 'x2', 'x3', 'x4']), pd.Series(y)
# Make training set
X_train, NA = make_friedman1(n_samples=1000, n_features=5, random_state = 1) #dont care about Y so call it NA
X_train, NA = friedman_np_to_df(X_train,NA)
#categorize training set based off of x0
domain_list = []
for i in range(len(X_train)):
if X_train.iloc[i]['x0'] < 0.6:
domain_list.append(1)
else:
domain_list.append(0)
X_train['domain'] = domain_list
# Set training set to where domain == 1 (x0 < 0.6)
X_train = X_train[X_train['domain']==1]
y_train = X_train.copy()
X_train = X_train.drop(columns = ['domain'])
y_train = y_train['domain']
# Make testing set with a different random_state
X_test, NA2 = make_friedman1(n_samples=1000, n_features=5, random_state = 3)
X_test, NA2 = friedman_np_to_df(X_test,NA2)
#categorize testing set based off of x0
domain_list = []
for i in range(len(X_test)):
if X_test.iloc[i]['x0'] < 0.6:
domain_list.append(1)
else:
domain_list.append(0)
X_test['domain'] = domain_list
y_test = X_test['domain'].copy()
X_test = X_test.drop(columns = ['domain'])
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from metric_learn import LMNN
lmnn_knn = Pipeline(steps=[('lmnn', LMNN()), ('knn', KNeighborsClassifier())])
parameters = {'lmnn__k':[1, 2,3], 'knn__n_neighbors':[1 , 2]}
grid_lmnn_knn = GridSearchCV(lmnn_knn, parameters, n_jobs=-1, verbose=True)
grid_lmnn_knn.fit(X_train,y_train)
grid_lmnn_knn.score(X_test, y_test)
```
#### Expected Results
Example: No error is thrown. Score is calculated
#### Actual Results
```ptb
Fitting 5 folds for each of 6 candidates, totalling 30 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 2 concurrent workers.
[Parallel(n_jobs=-1)]: Done 30 out of 30 | elapsed: 0.5s finished
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-54-e89c6a61ea02> in <module>()
6 parameters = {'lmnn__k':[1, 2,3], 'knn__n_neighbors':[1 , 2]}
7 grid_lmnn_knn = GridSearchCV(lmnn_knn, parameters, n_jobs=-1, verbose=True)
----> 8 grid_lmnn_knn.fit(X_train,y_train)
9 grid_lmnn_knn.score(X_test, y_test)
10
7 frames
/usr/local/lib/python3.7/dist-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
737 refit_start_time = time.time()
738 if y is not None:
--> 739 self.best_estimator_.fit(X, y, **fit_params)
740 else:
741 self.best_estimator_.fit(X, **fit_params)
/usr/local/lib/python3.7/dist-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params)
348 This estimator
349 """
--> 350 Xt, fit_params = self._fit(X, y, **fit_params)
351 with _print_elapsed_time('Pipeline',
352 self._log_message(len(self.steps) - 1)):
/usr/local/lib/python3.7/dist-packages/sklearn/pipeline.py in _fit(self, X, y, **fit_params)
313 message_clsname='Pipeline',
314 message=self._log_message(step_idx),
--> 315 **fit_params_steps[name])
316 # Replace the transformer of the step with the fitted
317 # transformer. This is necessary when loading the transformer
/usr/local/lib/python3.7/dist-packages/joblib/memory.py in __call__(self, *args, **kwargs)
350
351 def __call__(self, *args, **kwargs):
--> 352 return self.func(*args, **kwargs)
353
354 def call_and_shelve(self, *args, **kwargs):
/usr/local/lib/python3.7/dist-packages/sklearn/pipeline.py in _fit_transform_one(transformer, X, y, weight, message_clsname, message, **fit_params)
726 with _print_elapsed_time(message_clsname, message):
727 if hasattr(transformer, 'fit_transform'):
--> 728 res = transformer.fit_transform(X, y, **fit_params)
729 else:
730 res = transformer.fit(X, y, **fit_params).transform(X)
/usr/local/lib/python3.7/dist-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
572 else:
573 # fit method of arity 2 (supervised transformation)
--> 574 return self.fit(X, y, **fit_params).transform(X)
575
576
/usr/local/lib/python3.7/dist-packages/metric_learn/lmnn.py in fit(self, X, y)
180 G, objective, total_active = self._loss_grad(X, L, dfG, k,
181 reg, target_neighbors,
--> 182 label_inds)
183
184 it = 1 # we already made one iteration
/usr/local/lib/python3.7/dist-packages/metric_learn/lmnn.py in _loss_grad(self, X, L, dfG, k, reg, target_neighbors, label_inds)
246 label_inds, L)
247
--> 248 g0 = _inplace_paired_L2(*Lx[impostors])
249
250 # we reorder the target neighbors
TypeError: _inplace_paired_L2() missing 2 required positional arguments: 'A' and 'B'
```
#### Versions
Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
Python 3.7.10 (default, Feb 20 2021, 21:17:23)
[GCC 7.5.0]
NumPy 1.19.5
SciPy 1.4.1
Scikit-Learn 0.22.2.post1
Metric-Learn 0.6.2
<!-- Thanks for contributing! -->
| open | 2021-03-31T23:34:42Z | 2021-04-05T02:41:05Z | https://github.com/scikit-learn-contrib/metric-learn/issues/312 | [
"bug"
] | angelotc | 12 |
ageitgey/face_recognition | machine-learning | 965 | Error while installing on CentOS 7 | * face_recognition version: Latest
* Python version: 3.8
* Operating System: CentOS7
### Description
I followed the installation instructions available in the repository. I tried to install dlib separately and it also failed. I was able to install cmake, but the other components are not installing.
### What I Did
```
$ sudo pip3 install face_recognition
Collecting face_recognition
Using cached https://files.pythonhosted.org/packages/3f/ed/ad9a28042f373d4633fc8b49109b623597d6f193d3bbbef7780a5ee8eef2/face_recognition-1.2.3-py2.py3-none-any.whl
Requirement already satisfied: Pillow in /usr/local/lib/python3.8/site-packages (from face_recognition) (6.2.1)
Requirement already satisfied: face-recognition-models>=0.3.0 in /usr/local/lib/python3.8/site-packages (from face_recognition) (0.3.0)
Requirement already satisfied: Click>=6.0 in /usr/local/lib/python3.8/site-packages (from face_recognition) (7.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.8/site-packages (from face_recognition) (1.17.3)
Collecting dlib>=19.7
Using cached https://files.pythonhosted.org/packages/1e/62/aacb236d21fbd08148b1d517d58a9d80ea31bdcd386d26f21f8b23b1eb28/dlib-19.18.0.tar.gz
Installing collected packages: dlib, face-recognition
Running setup.py install for dlib ... error
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-d00i72gh/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-d00i72gh/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ynn72kjh/install-record.txt --single-version-externally-managed --compile
cwd: /tmp/pip-install-d00i72gh/dlib/
Complete output (54 lines):
running install
running build
running build_py
package init file 'dlib/__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "/usr/local/bin/cmake", line 8, in <module>
sys.exit(cmake())
File "/usr/local/lib/python3.8/site-packages/cmake/__init__.py", line 46, in cmake
raise SystemExit(_program('cmake', sys.argv[1:]))
File "/usr/local/lib/python3.8/site-packages/cmake/__init__.py", line 42, in _program
return subprocess.call([os.path.join(CMAKE_BIN_DIR, name)] + args)
File "/usr/local/lib/python3.8/subprocess.py", line 340, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/local/lib/python3.8/subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.8/subprocess.py", line 1702, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.8/site-packages/cmake/data/bin/cmake'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-d00i72gh/dlib/setup.py", line 223, in <module>
setup(
File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/local/lib/python3.8/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-install-d00i72gh/dlib/setup.py", line 129, in run
cmake_version = self.get_cmake_version()
File "/tmp/pip-install-d00i72gh/dlib/setup.py", line 120, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "/usr/local/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/local/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['cmake', '--version']' returned non-zero exit status 1.
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/local/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-d00i72gh/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-d00i72gh/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ynn72kjh/install-record.txt --single-version-externally-managed --compile Check the logs for full command output.
```
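The log's `PermissionError: [Errno 13]` on `/usr/local/lib/python3.8/site-packages/cmake/data/bin/cmake` suggests the pip-installed cmake wrapper cannot execute its bundled binary. Purely as an illustration of how one might check a file's execute bit from Python (the temp file below is a stand-in, not the real cmake path):

```python
import os
import tempfile

# Create a throwaway file to demonstrate the check itself.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o644)              # readable, but no execute bits anywhere
before = os.access(path, os.X_OK)  # False

os.chmod(path, 0o755)              # now executable
after = os.access(path, os.X_OK)   # True

os.unlink(path)
print(before, after)  # → False True
```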
| open | 2019-10-26T00:28:48Z | 2019-12-06T19:22:18Z | https://github.com/ageitgey/face_recognition/issues/965 | [] | dharamhbtik | 1 |
arogozhnikov/einops | numpy | 23 | problem with styles | The beginning of "Improving RNN language modelling" and "CNNs for text classification" blocks aren't visible in https://arogozhnikov.github.io/einops/pytorch-examples.html
And a typo: "Improving RNN language modilling" should be "modelling". ^_^
"bug"
] | oMalyugina | 1 |
OthersideAI/self-operating-computer | automation | 179 | [FEATURE] Azure open AI support | ### Is your feature request related to a problem? Please describe.
Does this work with Azure Open AI today? We can add that support.
### Describe the solution you'd like
Works with Azure Open AI
### Describe alternatives you've considered
### Additional context
| open | 2024-03-13T15:39:58Z | 2025-03-14T18:44:45Z | https://github.com/OthersideAI/self-operating-computer/issues/179 | [
"enhancement"
] | sashankh | 3 |
asacristani/fastapi-rocket-boilerplate | pytest | 17 | Monitoring: add logging system | Include logs using different levels for the following events:
- db interactions
- celery task interactions
- endpoint interactions | open | 2023-10-11T10:56:06Z | 2023-10-11T11:07:03Z | https://github.com/asacristani/fastapi-rocket-boilerplate/issues/17 | [
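A minimal sketch of what per-component, per-level logging could look like with the stdlib `logging` module; the logger names (`app.db`, `app.celery`, `app.endpoints`) are invented for illustration, not taken from this project:

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects records so the example is self-checking; real code would
    use a StreamHandler or FileHandler instead."""
    def emit(self, record):
        records.append((record.name, record.levelname, record.getMessage()))

root = logging.getLogger("app")
root.setLevel(logging.DEBUG)
root.addHandler(ListHandler())

# One logger per event source; they propagate up to the "app" handler.
logging.getLogger("app.db").debug("executing query")
logging.getLogger("app.celery").info("task enqueued")
logging.getLogger("app.endpoints").warning("slow endpoint")

print(records)  # three (name, level, message) tuples
```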
"enhancement"
] | asacristani | 0 |
autokey/autokey | automation | 174 | Can't add sporkwitch repository on Debian 9 | ## Classification:
Bug
## Reproducibility:
Always
## Summary
Tried to install autokey based on instructions here:
https://github.com/autokey/autokey
Ubuntu/Mint/Debian
There is a repository available for Ubuntu 18.04 LTS (and compatible derivatives, such as Kubuntu):
sudo add-apt-repository ppa:sporkwitch/autokey
sudo apt update
sudo apt install autokey-gtk
## Steps to Reproduce (if applicable)
[See below]
## Expected Results
Install autokey with apt packaging system
## Actual Results
```
$ sudo add-apt-repository ppa:sporkwitch/autokey
*buntu packaging for https://github.com/autokey/autokey
More info: https://launchpad.net/~sporkwitch/+archive/ubuntu/autokey
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keybox '/tmp/tmp8wims4b5/pubring.gpg' created
gpg: /tmp/tmp8wims4b5/trustdb.gpg: trustdb created
gpg: key D69DB56333E1F169: public key "Launchpad PPA for Robert Klebes" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: no valid OpenPGP data found.
$ sudo apt update
Hit:1 http://security.debian.org/debian-security stretch/updates InRelease
Ign:2 http://ftp.us.debian.org/debian stretch InRelease
Hit:3 http://ftp.us.debian.org/debian stretch-updates InRelease
Hit:4 http://ftp.us.debian.org/debian stretch Release
Hit:5 https://deb.nodesource.com/node_10.x stretch InRelease
Ign:7 http://deb.debian.org/debian stretch InRelease
Hit:8 http://deb.debian.org/debian unstable InRelease
Hit:9 http://deb.debian.org/debian stretch Release
Ign:10 http://ppa.launchpad.net/sporkwitch/autokey/ubuntu cosmic InRelease
Err:11 http://ppa.launchpad.net/sporkwitch/autokey/ubuntu cosmic Release
404 Not Found
Reading package lists... Done
E: The repository 'http://ppa.launchpad.net/sporkwitch/autokey/ubuntu cosmic Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
$ sudo apt-get install autokey-gtk
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
autokey-common python-simplejson python-xlib
The following NEW packages will be installed:
autokey-common autokey-gtk python-simplejson python-xlib
0 upgraded, 4 newly installed, 0 to remove and 5 not upgraded.
Need to get 0 B/270 kB of archives.
After this operation, 1,781 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Selecting previously unselected package python-xlib.
(Reading database ... 174569 files and directories currently installed.)
Preparing to unpack .../python-xlib_0.14+20091101-5_all.deb ...
Unpacking python-xlib (0.14+20091101-5) ...
Selecting previously unselected package python-simplejson.
Preparing to unpack .../python-simplejson_3.10.0-1_amd64.deb ...
Unpacking python-simplejson (3.10.0-1) ...
Selecting previously unselected package autokey-common.
Preparing to unpack .../autokey-common_0.90.4-1_all.deb ...
Unpacking autokey-common (0.90.4-1) ...
Selecting previously unselected package autokey-gtk.
Preparing to unpack .../autokey-gtk_0.90.4-1_all.deb ...
Unpacking autokey-gtk (0.90.4-1) ...
Setting up python-simplejson (3.10.0-1) ...
Processing triggers for mime-support (3.60) ...
Processing triggers for desktop-file-utils (0.23-1) ...
Setting up python-xlib (0.14+20091101-5) ...
Processing triggers for man-db (2.7.6.1-2) ...
Processing triggers for hicolor-icon-theme (0.15-1) ...
Setting up autokey-common (0.90.4-1) ...
Setting up autokey-gtk (0.90.4-1) ...
update-alternatives: using /usr/bin/autokey-gtk to provide /usr/bin/autokey (autokey) in auto mode
```

autokey-gtk (0.90.4-1) is broken on Debian 9. Here's what happens when you try to run it:

```
$ autokey
/usr/lib/python2.7/dist-packages/autokey/gtkapp.py:24: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
from gi.repository import Gtk, Gdk, GObject, GLib
/usr/lib/python2.7/dist-packages/autokey/gtkui/notifier.py:19: PyGIWarning: Notify was imported without specifying a version first. Use gi.require_version('Notify', '0.7') before import to ensure that the right version gets loaded.
from gi.repository import Gtk, Gdk, Notify
/usr/lib/python2.7/dist-packages/autokey/gtkui/configwindow.py:20: PyGIWarning: GtkSource was imported without specifying a version first. Use gi.require_version('GtkSource', '3.0') before import to ensure that the right version gets loaded.
from gi.repository import Gtk, Pango, GtkSource, Gdk, Gio
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 250, sequence_number = 15, major_opcode = 33, minor_opcode = 0
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 250, sequence_number = 16, major_opcode = 33, minor_opcode = 0
```
## Version
AutoKey version: 0.90.4-1
Used GUI (Gtk, Qt, or both): Gtk
If the problem is known to be present in more than one version, please list all of those.
Installed via: PPA
Distro:
Operating System: Debian GNU/Linux 9 (stretch)
Kernel: Linux 4.9.0-6-amd64
Architecture: x86-64
| closed | 2018-08-14T06:20:04Z | 2019-02-10T22:20:01Z | https://github.com/autokey/autokey/issues/174 | [
"documentation"
] | crasch | 5 |
albumentations-team/albumentations | machine-learning | 1,736 | [Tech debt] Remove scikit-learn as dependency | We can remove scikit-learn as a dependency. A MinMax scaler is easy to implement, and PCA exists in OpenCV. | closed | 2024-05-21T01:33:29Z | 2024-07-23T23:59:32Z | https://github.com/albumentations-team/albumentations/issues/1736 | [
"Tech debt"
] | ternaus | 0 |
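To illustrate the "easy to implement" claim in the issue above, a dependency-free sketch of min-max scaling (plain Python lists for brevity; the real replacement in albumentations would presumably operate on NumPy arrays):

```python
def min_max_scale(column, eps=1e-12):
    """Scale a 1-D sequence to [0, 1]; eps guards against constant columns."""
    lo, hi = min(column), max(column)
    span = (hi - lo) or eps  # avoid division by zero when all values are equal
    return [(v - lo) / span for v in column]

data = [2.0, 4.0, 6.0, 10.0]
print(min_max_scale(data))  # → [0.0, 0.25, 0.5, 1.0]
```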
plotly/dash | flask | 3,160 | `progress` and `cancel` on background callbacks show type errors | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 3.0.0rc1
```
**Describe the bug**
`progress` and `cancel` on background callbacks show type errors in Dash 3.0.0rc1
`cancel` shows an error when passing a single component
<img width="880" alt="Image" src="https://github.com/user-attachments/assets/b972a11f-0301-4ddd-9e81-129de626dd77" />
`progress` shows an error when passing a list of outputs:
<img width="887" alt="Image" src="https://github.com/user-attachments/assets/246de795-fbbc-4426-b229-9df6997d435e" />
| closed | 2025-02-12T18:21:46Z | 2025-02-13T22:10:32Z | https://github.com/plotly/dash/issues/3160 | [
"bug",
"P1"
] | LiamConnors | 0 |
Lightning-AI/pytorch-lightning | pytorch | 19,936 | FileNotFoundError: [Errno 2] No such file or directory tfevents file | ### Bug description
I am working on the Stable Diffusion code base here: https://github.com/CompVis/latent-diffusion. I am getting the below error in multi-GPU training, where it cannot find the tfevents file.
```
trainer.fit(model, data)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._dispatch()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
return self._run_train()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1058, in _run_train
self.training_type_plugin.reconciliate_processes(traceback.format_exc())
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 453, in reconciliate_processes
raise DeadlockDetectedException(f"DeadLock detected from rank: {self.global_rank} \n {trace}")
pytorch_lightning.utilities.exceptions.DeadlockDetectedException: DeadLock detected from rank: 0
Traceback (most recent call last):
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
self.fit_loop.run()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 149, in advance
self.trainer.call_hook(
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1217, in call_hook
trainer_hook(*args, **kwargs)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/trainer/callback_hook.py", line 189, in on_train_batch_end
callback.on_train_batch_end(self, self.lightning_module, outputs, batch, batch_idx, dataloader_idx)
File "/home/csgrad/mbhosale/phd/Pathdiff/PathLDM/main.py", line 443, in on_train_batch_end
self.log_img(pl_module, batch, batch_idx, split="train")
File "/home/csgrad/mbhosale/phd/Pathdiff/PathLDM/main.py", line 424, in log_img
logger_log_images(pl_module, images, pl_module.global_step, split)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 48, in wrapped_fn
return fn(*args, **kwargs)
File "/home/csgrad/mbhosale/phd/Pathdiff/PathLDM/main.py", line 363, in _testtube
pl_module.logger.experiment.add_image(tag, grid, global_step=pl_module.global_step)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 614, in add_image
self._get_file_writer().add_summary(
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 113, in add_summary
self.add_event(event, global_step, walltime)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 98, in add_event
self.event_writer.add_event(event)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 117, in add_event
self._async_writer.write(event.SerializeToString())
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 171, in write
self._check_worker_status()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 212, in _check_worker_status
raise exception
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 244, in run
self._run()
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 275, in _run
self._record_writer.write(data)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/summary/writer/record_writer.py", line 40, in write
self._writer.write(header + header_crc + data + footer_crc)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/io/gfile.py", line 773, in write
self.fs.append(self.filename, file_content, self.binary_mode)
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/io/gfile.py", line 167, in append
self._write(filename, file_content, "ab" if binary_mode else "a")
File "/home/csgrad/mbhosale/anaconda3/envs/pathldm1/lib/python3.8/site-packages/tensorboard/compat/tensorflow_stub/io/gfile.py", line 171, in _write
with io.open(filename, mode, encoding=encoding) as f:
FileNotFoundError: [Errno 2] No such file or directory: b'logs/06-03T05-49_plip_imagenet_finetune_PanNuke/testtube/version_0/tf/events.out.tfevents.1717408192.deepbull8.818802.0'
```
I checked the file path, and there is no folder named `tf` under `version_0`. Interestingly, I get this error only when I run on multiple GPUs; with a single GPU it somehow gets resolved. I have no idea how to resolve this issue or even start debugging it.
### What version are you seeing the problem on?
v1.4.2
### How to reproduce the bug
```python
Run the multi-GPU training of https://github.com/cvlab-stonybrook/PathLDM/blob/main/main.py
```
### Error messages and logs
```
As shown in above comments.
```
### Environment
<details>
<summary>Current environment</summary>
```
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 2.0.0 pypi_0 pypi
aiohttp 3.9.1 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
albumentations 0.4.3 pypi_0 pypi
altair 5.2.0 pypi_0 pypi
antlr4-python3-runtime 4.8 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports-zoneinfo 0.2.1 pypi_0 pypi
blas 1.0 mkl
blessed 1.20.0 py38h06a4308_0
blinker 1.7.0 pypi_0 pypi
bottleneck 1.3.7 py38ha9d4c09_0
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotli-python 1.0.9 py38h6a678d5_7
bzip2 1.0.8 h7b6447c_0
c-ares 1.19.1 h5eee18b_0
ca-certificates 2024.3.11 h06a4308_0
cachetools 5.3.2 pypi_0 pypi
certifi 2024.2.2 py38h06a4308_0
cffi 1.15.1 py38h74dc2b5_0
chardet 5.2.0 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
click 8.1.7 pypi_0 pypi
clip 1.0 dev_0 <develop>
comm 0.1.4 pyhd8ed1ab_0 conda-forge
contourpy 1.0.5 py38hdb19cb5_0
cryptography 41.0.3 py38h130f0dd_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
cudatoolkit 11.0.221 h6bb024c_0
cycler 0.11.0 pyhd3eb1b0_0
cyrus-sasl 2.1.28 h9c0eb46_1
dbus 1.13.18 hb2f20db_0
debugpy 1.6.7 py38h6a678d5_0
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
docker-pycreds 0.4.0 pypi_0 pypi
einops 0.3.0 pypi_0 pypi
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
expat 2.5.0 h6a678d5_0
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.13.1 py38h06a4308_0
fontconfig 2.14.1 h4c34cd2_2
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 h4a9f257_0
frozenlist 1.4.1 pypi_0 pypi
fsspec 2023.12.2 pypi_0 pypi
ftfy 6.1.3 pypi_0 pypi
future 0.18.3 pypi_0 pypi
giflib 5.2.1 h5eee18b_3
gitdb 4.0.11 pypi_0 pypi
gitpython 3.1.40 pypi_0 pypi
glib 2.69.1 h4ff587b_1
gmp 6.2.1 h295c915_3
gmpy2 2.1.2 py38heeb90bb_0
gnutls 3.6.15 he1e5248_0
google-auth 2.25.2 pypi_0 pypi
google-auth-oauthlib 1.0.0 pypi_0 pypi
gpustat 1.1.1 py38h06a4308_0
grpcio 1.60.0 pypi_0 pypi
gst-plugins-base 1.14.1 h6a678d5_1
gstreamer 1.14.1 h5eee18b_1
h5py 3.9.0 py38he06866b_0
hdf5 1.12.1 h70be1eb_2
huggingface-hub 0.20.1 pypi_0 pypi
icu 73.1 h6a678d5_0
idna 3.6 pypi_0 pypi
imageio 2.9.0 pypi_0 pypi
imageio-ffmpeg 0.4.2 pypi_0 pypi
imgaug 0.2.6 pypi_0 pypi
importlib-metadata 6.11.0 pypi_0 pypi
importlib_resources 6.1.1 py38h06a4308_0
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.26.0 pyhf8b6a83_0 conda-forge
ipython 8.12.0 pyh41d4057_0 conda-forge
ipywidgets 8.1.2 pypi_0 pypi
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.2 py38h06a4308_0
joblib 1.3.2 pypi_0 pypi
jpeg 9e h5eee18b_1
jsonschema 4.20.0 pypi_0 pypi
jsonschema-specifications 2023.11.2 pypi_0 pypi
jupyter_client 7.3.4 pyhd8ed1ab_0 conda-forge
jupyter_core 5.6.0 py38h578d9bd_0 conda-forge
jupyterlab-widgets 3.0.10 pypi_0 pypi
kiwisolver 1.4.4 py38h6a678d5_0
krb5 1.20.1 h568e23c_1
lame 3.100 h7b6447c_0
latent-diffusion 0.0.1 dev_0 <develop>
lazy-loader 0.3 pypi_0 pypi
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libclang 14.0.6 default_hc6dbbc7_1
libclang13 14.0.6 default_he11475f_1
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.8.1.2 0 nvidia
libcups 2.4.2 ha637b67_0
libcurand 10.3.4.101 0 nvidia
libcurl 8.2.1 h91b91d3_0
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_1
libedit 3.1.20230828 h5eee18b_0
libev 4.33 h7f8727e_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libllvm14 14.0.6 hef93074_0
libnghttp2 1.52.0 ha637b67_1
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libpq 12.15 h37d81fd_1
libsodium 1.0.18 h36c2ea0_1 conda-forge
libssh2 1.10.0 h37d81fd_2
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libuuid 1.41.5 h5eee18b_0
libuv 1.44.2 h5eee18b_0
libwebp 1.3.2 h11a3e52_0
libwebp-base 1.3.2 h5eee18b_0
libxcb 1.15 h7f8727e_0
libxkbcommon 1.0.1 h5eee18b_1
libxml2 2.10.4 hf1b16e4_1
lightning-utilities 0.10.0 pypi_0 pypi
lz4-c 1.9.4 h6a678d5_0
markdown 3.5.1 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 py38h06a4308_0
matplotlib-base 3.7.2 py38h1128e8f_0
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mdurl 0.1.2 pypi_0 pypi
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.3.0 py38h06a4308_0
multidict 6.0.4 pypi_0 pypi
munkres 1.1.4 py_0
mysql 5.7.24 he378463_2
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.8 pyhd8ed1ab_0 conda-forge
nettle 3.7.3 hbbd107a_1
networkx 3.1 py38h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
numexpr 2.8.4 py38he184ba9_0
numpy 1.24.4 pypi_0 pypi
numpy-base 1.24.3 py38h31eccc5_0
nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
nvidia-ml-py 12.535.133 py38h06a4308_0
nvidia-nccl-cu12 2.18.1 pypi_0 pypi
nvidia-nvjitlink-cu12 12.3.101 pypi_0 pypi
nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
omegaconf 2.1.1 pypi_0 pypi
open-clip-torch 2.23.0 pypi_0 pypi
opencv-python 4.1.2.30 pypi_0 pypi
opencv-python-headless 4.8.1.78 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openjpeg 2.4.0 h3ad879b_0
openssl 1.1.1w h7f8727e_0
packaging 21.3 pypi_0 pypi
pandas 2.0.3 py38h1128e8f_0
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pcre 8.45 h295c915_0
pexpect 4.8.0 pyh1a96a4e_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.0.1 py38ha6cbd5a_0
pip 20.3.3 py38h06a4308_0
pkgutil-resolve-name 1.3.10 pypi_0 pypi
platformdirs 4.1.0 pyhd8ed1ab_0 conda-forge
ply 3.11 py38_0
pooch 1.7.0 py38h06a4308_0
prompt-toolkit 3.0.42 pyha770c72_0 conda-forge
prompt_toolkit 3.0.42 hd8ed1ab_0 conda-forge
protobuf 3.20.1 pypi_0 pypi
psutil 5.9.0 py38h5eee18b_0
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pudb 2019.2 pypi_0 pypi
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pyarrow 14.0.2 pypi_0 pypi
pyasn1 0.5.1 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pydeck 0.8.1b0 pypi_0 pypi
pydeprecate 0.3.1 pypi_0 pypi
pygments 2.17.2 pyhd8ed1ab_0 conda-forge
pyopenssl 23.2.0 py38h06a4308_0
pyparsing 3.0.9 py38h06a4308_0
pyqt 5.15.10 py38h6a678d5_0
pyqt5-sip 12.13.0 py38h5eee18b_0
pysocks 1.7.1 py38h06a4308_0
python 3.8.5 h7579374_1
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd3eb1b0_0
python_abi 3.8 2_cp38 conda-forge
pytorch 2.0.1 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_5 pytorch
pytorch-fid 0.3.0 pypi_0 pypi
pytorch-lightning 1.4.2 pypi_0 pypi
pytorch-mutex 1.0 cuda pytorch
pytz 2023.3.post1 py38h06a4308_0
pywavelets 1.4.1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
pyzmq 25.1.0 py38h6a678d5_0
qt-main 5.15.2 h110a718_10
readline 8.2 h5eee18b_0
referencing 0.32.0 pypi_0 pypi
regex 2023.12.25 pypi_0 pypi
requests 2.31.0 py38h06a4308_0
requests-oauthlib 1.3.1 pypi_0 pypi
rich 13.7.0 pypi_0 pypi
rpds-py 0.15.2 pypi_0 pypi
rsa 4.9 pypi_0 pypi
sacremoses 0.1.1 pypi_0 pypi
safetensors 0.4.1 pypi_0 pypi
scikit-image 0.20.0 pypi_0 pypi
scikit-learn 1.3.0 py38h1128e8f_0 anaconda
scipy 1.9.1 pypi_0 pypi
seaborn 0.12.2 py38h06a4308_0
sentencepiece 0.1.99 pypi_0 pypi
sentry-sdk 1.39.1 pypi_0 pypi
setproctitle 1.3.3 pypi_0 pypi
setuptools 68.2.2 py38h06a4308_0
sip 6.7.12 py38h6a678d5_0
six 1.16.0 pyhd3eb1b0_1
smmap 5.0.1 pypi_0 pypi
sqlite 3.41.2 h5eee18b_0
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
streamlit 1.29.0 pypi_0 pypi
sympy 1.12 py38h06a4308_0
taming-transformers 0.0.1 dev_0 <develop>
tenacity 8.2.3 pypi_0 pypi
tensorboard 2.14.0 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
test-tube 0.7.5 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0
tifffile 2023.7.10 pypi_0 pypi
timm 0.9.12 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.13.3 pypi_0 pypi
toml 0.10.2 pypi_0 pypi
tomli 2.0.1 py38h06a4308_0
toolz 0.12.0 pypi_0 pypi
torch-fidelity 0.3.0 pypi_0 pypi
torchaudio 2.0.2 py38_cu117 pytorch
torchmetrics 0.6.0 pypi_0 pypi
torchtriton 2.0.0 py38 pytorch
torchvision 0.15.2 py38_cu117 pytorch
tornado 6.1 py38h0a891b7_3 conda-forge
tqdm 4.66.1 pypi_0 pypi
traitlets 5.14.0 pyhd8ed1ab_0 conda-forge
transformers 4.28.0 pypi_0 pypi
triton 2.1.0 pypi_0 pypi
typing_extensions 4.7.1 py38h06a4308_0
tzlocal 5.2 pypi_0 pypi
urllib3 2.1.0 pypi_0 pypi
urwid 2.3.4 pypi_0 pypi
validators 0.22.0 pypi_0 pypi
wandb 0.16.1 pypi_0 pypi
watchdog 3.0.0 pypi_0 pypi
wcwidth 0.2.12 pyhd8ed1ab_0 conda-forge
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py38h06a4308_0
widgetsnbextension 4.0.10 pypi_0 pypi
xz 5.4.5 h5eee18b_0
yarl 1.9.4 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.17.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0
```
</details>
### More info
_No response_ | closed | 2024-06-03T10:57:26Z | 2024-08-04T09:16:52Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19936 | [
"bug",
"ver: 1.8.x"
] | bhosalems | 1 |
unit8co/darts | data-science | 2,388 | TSMixer ConditionalMixer Skip Connections | **Is your feature request related to a current problem? Please describe.**
TSMixer models with num_blocks higher than 4 aren't training well. It is somewhat nebulous to pinpoint, but a higher number of blocks can lead to much worse results. In my dataset, anything with num_blocks of 8 remains stagnant at extremely suboptimal metrics. Even on simpler datasets like ETTh it leads to worse results, although there it is easier to attribute to overfitting.
There is no clear mention of this in the original paper, but they do not train deeper "extended" models (the type implemented in Darts) according to the benchmarks.
**Describe proposed solution**
Through some experimentation, a simple skip connection in the Conditional Mixer layer that combines the input with the output of the time+feature mixer layers greatly alleviates this issue. This operation isn't in the original paper, but it seems like a simple way to extend the functionality without modifying the general architecture.
There are a few ways to implement skip connections, and since it isn't the default choice, it must remain optional. Adding a new argument mixing_skip_connection_cls that is instantiated by the ConditionalMixerLayer if specified seems to work quite cleanly. Darts can even provide the simplest variation of `x_inp + x_processed` as one of the `str` variants. I have tried the recursive variant from https://aclanthology.org/2020.coling-main.320.pdf and it is quite effective on my dataset.
<img width="640" alt="Screenshot 2024-05-19 at 12 50 20 AM" src="https://github.com/unit8co/darts/assets/25041325/a5874ef6-4ff1-49dc-b5cf-2cdf4b107bec">
**Describe potential alternatives**
- don't train deep models and hope you don't need to?
**Additional context**
Results from ETTh.
This is a very exaggerated example:
num_blocks=2 vs num_blocks=16 vs num_blocks=2 + simple skip vs num_blocks=16 + simple skip
<img width="705" alt="Screenshot 2024-05-19 at 12 49 06 AM" src="https://github.com/unit8co/darts/assets/25041325/d415d7ce-de4a-4794-b61b-7505b181929b">
Clearly, the deep model without skip connections underperforms compared to the rest.
I can post training curves from my dataset with/without skip connections showing the improvements but can't share the data. The dataset is fairly similar to M5 though
skip implementation
```python
from torch import nn

class Skip(nn.Module):
    def __init__(self, sequence_length, input_dim, output_dim, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        # Project the residual only when the input/output dimensions differ.
        self.projection = nn.Linear(input_dim, output_dim) if input_dim != output_dim else nn.Identity()

class SimpleSkip(Skip):
    # Intended use inside a mixer block: x = skip(x, mixing_layers(x))
    def forward(self, x_original, x_processed):
        x_original = self.projection(x_original)
        return x_processed + x_original
``` | open | 2024-05-19T07:57:10Z | 2024-09-23T09:40:11Z | https://github.com/unit8co/darts/issues/2388 | [
"improvement",
"pr_welcome"
] | tRosenflanz | 2 |
coqui-ai/TTS | deep-learning | 2,705 | [Bug] Bad result of YourTTS training | ### Describe the bug
I am training the YourTTS model [https://github.com/coqui-ai/TTS/blob/dev/recipes/vctk/yourtts/train_yourtts.py](url)
on the VCTK dataset, but I am getting bad speech generation results.
The generated file doesn't even sound like English.
The model does not seem to converge.
I trained on a single RTX 3090 for 20 hours.
### To Reproduce
[https://github.com/coqui-ai/TTS/blob/dev/recipes/vctk/yourtts/train_yourtts.py](url)
### Expected behavior
A relatively good-quality voice.
### Logs
```shell
 --> STEP: 1039/1243 -- GLOBAL_STEP: 34600
| > loss_disc: 2.54361 (2.52598)
| > loss_disc_real_0: 0.21437 (0.18234)
| > loss_disc_real_1: 0.18707 (0.21857)
| > loss_disc_real_2: 0.19539 (0.22308)
| > loss_disc_real_3: 0.20699 (0.23064)
| > loss_disc_real_4: 0.21294 (0.22657)
| > loss_disc_real_5: 0.18105 (0.21795)
| > loss_0: 2.54361 (2.52598)
| > grad_norm_0: 48.09296 (43.48993)
| > loss_gen: 2.17479 (2.24237)
| > loss_kl: 2.19664 (2.13727)
| > loss_feat: 4.84878 (4.85987)
| > loss_mel: 19.80973 (19.19988)
| > loss_duration: 1.70732 (1.70623)
| > loss_1: 30.73726 (30.14562)
| > grad_norm_1: 224.25027 (240.00607)
| > current_lr_0: 0.00020
| > current_lr_1: 0.00020
| > step_time: 1.62350 (1.68084)
| > loader_time: 0.01860 (0.01940)
```
### Environment
```shell
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3090",
"NVIDIA GeForce RTX 3090"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu117",
"TTS": "0.14.3",
"numpy": "1.21.6"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.9.13",
"version": "#161~18.04.1-Ubuntu SMP Fri Feb 10 15:55:22 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-06-23T23:51:46Z | 2023-06-25T08:03:17Z | https://github.com/coqui-ai/TTS/issues/2705 | [
"bug"
] | ggbangyes | 0 |
pydantic/FastUI | fastapi | 226 | Form data is being sent as empty after 2 errors | When submitting form data, if it fails twice, then on the third attempt the form is sent with empty data.
<img width="1512" alt="Screenshot 2024-02-28 at 16 14 46" src="https://github.com/pydantic/FastUI/assets/86913668/5c5cf90c-db2c-48c2-998d-963c2941d102">
<img width="1512" alt="Screenshot 2024-02-28 at 16 15 33" src="https://github.com/pydantic/FastUI/assets/86913668/0ca4798c-cb07-4a2e-8b82-c311f5d26f76">
To reproduce, run this app with uvicorn, then fill in the form and submit it 3 times:
```python
from typing import Annotated
from fastapi import FastAPI, HTTPException
from fastapi.responses import HTMLResponse, RedirectResponse
from fastui import FastUI, prebuilt_html
from fastui import components as c
from fastui.forms import fastui_form
from pydantic import BaseModel, Field, SecretStr
class LoginForm(BaseModel):
username: str = Field(title='Email Address')
password: SecretStr
def login_component():
return [
c.Heading(text='Login Form', level=2, class_name='center'),
c.ModelForm(model=LoginForm, display_mode='page', submit_url='/api/forms/login'),
]
app = FastAPI()
@app.get('/')
async def main():
return HTMLResponse(prebuilt_html(title='Operation Dashboard'))
@app.get('/api', response_model=FastUI, response_model_exclude_none=True)
async def login() -> list[c.AnyComponent] | RedirectResponse:
"""Root of website"""
return [*login_component()]
@app.post('/api/forms/login', response_model=FastUI, response_model_exclude_none=True)
async def submit_login(
form: Annotated[LoginForm, fastui_form(LoginForm)],
):
raise HTTPException(401, 'Bad data')
```
Console log:
<img width="1024" alt="Screenshot 2024-02-28 at 16 20 53" src="https://github.com/pydantic/FastUI/assets/86913668/fe7dd2ea-1280-41d0-b992-0c60ad2623df">
| open | 2024-02-28T15:21:08Z | 2024-02-28T15:21:57Z | https://github.com/pydantic/FastUI/issues/226 | [] | ManiMozaffar | 0 |
ckan/ckan | api | 8,007 | Incorrect organization dataset counts in side bar | ## CKAN version
At least CKAN 2.9 onwards
## Describe the bug
The organization snippet in the side bar, shown on the dataset and organization pages, shows the total dataset count for this organization:

But if you search datasets within that organization, the total reflects the number of results returned, not the total datasets:

### Steps to reproduce
* Create an organization
* Create at least two datasets
* Go to the organization index page, search for just one of the datasets
### Expected behavior
The number of datasets in the side bar org snippet should be constant and reflect the total number of datasets.
| closed | 2024-01-09T11:54:41Z | 2024-06-24T13:40:48Z | https://github.com/ckan/ckan/issues/8007 | [
"Good for Contribution"
] | amercader | 4 |
tfranzel/drf-spectacular | rest-api | 822 | Contact in spectacular settings renders 'URL' instead of name provided | ```
'CONTACT': {
"name": "Email",
"url": "email@domain.io"
},
```
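For comparison, the OpenAPI contact object defines separate `name`, `url`, and `email` keys; a settings fragment that uses the dedicated `email` key (my suggestion, not taken from the report) might look like:

```python
# Hypothetical corrected settings: put the address under "email" so the
# renderer isn't asked to treat an email address as a URL.
SPECTACULAR_SETTINGS = {
    "CONTACT": {
        "name": "Email",
        "email": "email@domain.io",
    },
}
```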

| closed | 2022-09-28T13:54:40Z | 2022-09-28T16:27:34Z | https://github.com/tfranzel/drf-spectacular/issues/822 | [] | mugane-dj | 1 |
pandas-dev/pandas | python | 60,911 | Date format different in the same page | https://github.com/pandas-dev/pandas/blob/02de8140251096386cbefab0186d45af0c3d8ebd/web/pandas/index.html#L124 This date format line is different from the other date lines on the same page. While this one is %Y-%m-%d, the others are "%b %d, %Y". | closed | 2025-02-11T16:31:51Z | 2025-02-11T17:08:51Z | https://github.com/pandas-dev/pandas/issues/60911 | [] | rffontenelle | 0 |
vi3k6i5/flashtext | nlp | 44 | bug | ```python
len("İ") # 1
```
```python
len("İ".lower()) # 2
```
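The length change is exactly why indices go out of range: positions computed on the lowercased text can exceed the length of the original string. A minimal standalone sketch of the failure mode (not flashtext's actual code):

```python
original = "BAYİ NO"        # contains U+0130, LATIN CAPITAL LETTER I WITH DOT ABOVE
lowered = original.lower()  # "İ".lower() expands to "i" + U+0307 (combining dot above)

assert len(original) == 7 and len(lowered) == 8

last = len(lowered) - 1     # an index that is valid for the lowered text...
try:
    original[last]          # ...but one past the end of the original
except IndexError as exc:
    print(exc)              # -> string index out of range
```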
This will cause a "string index out of range" error in flashtext. | open | 2018-01-19T11:59:37Z | 2021-02-20T09:59:02Z | https://github.com/vi3k6i5/flashtext/issues/44 | [] | chenkovsky | 15 |
X-PLUG/MobileAgent | automation | 27 | Mobile-Agent-v2 can't type even when ADB Keyboard is activated | Hi, thanks for the kind open-sourcing. I found an issue when running my experiments, and I am wondering whether it is something wrong on my side or a potential corner case for the code, so I would like to discuss it here.
I found that even when my _ADB Keyboard_ is activated, the ``keyboard`` variable still shows as _False_, which affects the ``get_action_prompt()`` function. This causes the agent to perceive that the keyboard is not activated, preventing the agent from choosing the **Type** action. Below is an example of the issue:
> Unable to Type. You cannot use the action "Type" because the keyboard has not been activated. If you want to type, please first activate the keyboard by tapping on the input box on the screen.
I then tried to debug and found the related code:
https://github.com/X-PLUG/MobileAgent/blob/35a2264f53aaa769b2c2b24fbb1805b837d45aa8/Mobile-Agent-v2/run.py#L284
https://github.com/X-PLUG/MobileAgent/blob/35a2264f53aaa769b2c2b24fbb1805b837d45aa8/Mobile-Agent-v2/run.py#L290-L296
Based on this code, there are two reasons why the agent cannot type on my side:
1. **Line 284**: The ``keyboard`` variable can only be switched to _True_ in the first iteration. However, in my case (which might differ for different Android phones), my agent can only observe _ADB Keyboard {ON}_ when it can input something (e.g., already focused on the search box), which is almost impossible in the first iteration. Therefore, the ``keyboard`` variable is always _False_ for the agent.
2. **Line 292**: The switch might be skipped if the condition is not satisfied. In my case (due to the phone I am using, Google Pixel 8 Pro), the location where _ADB Keyboard {ON}_ appears is relatively too high to satisfy the condition. When I make the threshold smaller (e.g., 0.8), the issue is fixed.
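To make the second point concrete, here is a toy version of the kind of position check involved. The function and the numbers are hypothetical, not MobileAgent's actual code; the point is only that a smaller threshold accepts a label detected higher on the screen:

```python
def keyboard_flag_from_label(label_y: float, screen_height: float, threshold: float) -> bool:
    # Hypothetical check: treat the keyboard as active only if the
    # "ADB Keyboard {ON}" label is detected low enough on the screen.
    return label_y > screen_height * threshold

label_y, screen_height = 2040, 2400  # label at 85% of screen height (made-up values)
print(keyboard_flag_from_label(label_y, screen_height, 0.9))  # False: stricter threshold misses it
print(keyboard_flag_from_label(label_y, screen_height, 0.8))  # True: relaxed threshold catches it
```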
Though this issue might be a rare case, I would greatly appreciate it if you could share some comments about it.
Many thanks :D | closed | 2024-06-14T15:05:52Z | 2024-06-14T17:20:23Z | https://github.com/X-PLUG/MobileAgent/issues/27 | [] | jingxuanchen916 | 7 |
indico/indico | flask | 5,908 | [A11Y] Timezone list markup improvements | **Describe the bug**
The timezone selector's behavior currently suffers from several issues related to keyboard interaction. It could use some UX improvements as well.
**To Reproduce**
Steps to reproduce the behavior:
Scenario A:
1. Go to any page which has a time zone selector
2. Focus the timezone button using a keyboard
3. Open the timezone popup by pressing Enter/Return or Spacebar
4. Try to navigate into the timezone popup
5. Observe that navigation "skips" the timezone popup
Scenario B:
1. Click at the top of the timezone popup so that further keyboard navigation will start from the top of the popup
2. Focus the radios
3. Select the "Specify a timezone" radio
4. Observe that the focus is automatically moved to the select list below
**Expected behavior**
When the popup is expanded, the keyboard focus should move to the popup *when the user presses Tab key* (not automatically).
When the "Specify a timezone" radio is selected, focus should *not* automatically move to the select list.
There are also several issues with the semantics of the markup which I will address in a patch for this issue.
**Additional context**
- https://talk.getindico.io/t/keyboard-navigation-session-bar-timezone-picker/3197/3
- https://www.w3.org/WAI/WCAG21/Understanding/keyboard.html | closed | 2023-08-28T10:07:45Z | 2023-10-04T10:49:53Z | https://github.com/indico/indico/issues/5908 | [
"bug"
] | foxbunny | 0 |
babysor/MockingBird | pytorch | 2 | Support a Chinese-language toolbox with direct Chinese input (支持中文版toolbox,直接输入中文) | closed | 2021-08-07T08:51:04Z | 2021-08-17T03:15:06Z | https://github.com/babysor/MockingBird/issues/2 | [] | babysor | 0 | |
scikit-learn/scikit-learn | data-science | 30,052 | ⚠️ CI failed on linux_arm64_wheel (last failure: Oct 13, 2024) ⚠️ | **CI failed on [linux_arm64_wheel](https://cirrus-ci.com/build/5764259953508352)** (Oct 13, 2024)
| closed | 2024-10-13T03:45:56Z | 2024-10-15T06:57:38Z | https://github.com/scikit-learn/scikit-learn/issues/30052 | [
"Needs Triage"
] | scikit-learn-bot | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 336 | ModuleNotFoundError: No module named 'utils.distribution' | I can get SV2TTS working fine with the base pretrained models. However, when I paste the Tacotron 2 files into the Real-Time-Voice-Cloning synthesizer folder and the WaveRNN Vocoder + TTS files into the Real-Time-Voice-Cloning vocoder folder, I get the following error when launching python demo_toolbox.py:
Traceback (most recent call last):
File "demo_toolbox.py", line 2, in <module>
from toolbox import Toolbox
File "D:\VoiceClone2\toolbox\__init__.py", line 4, in <module>
from vocoder import inference as vocoder
File "D:\VoiceClone2\vocoder\inference.py", line 1, in <module>
from vocoder.models.fatchord_version import WaveRNN
File "D:\VoiceClone2\vocoder\models\fatchord_version.py", line 4, in <module>
from utils.distribution import sample_from_discretized_mix_logistic
ModuleNotFoundError: No module named 'utils.distribution'
Running `conda install utils` does not fix the problem, and there is no `conda install utils.distribution` package. | closed | 2020-05-06T01:33:28Z | 2020-07-04T14:15:59Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/336 | [] | Lanev | 2 |
iMerica/dj-rest-auth | rest-api | 640 | Feature Request: Extensibility of the fields when using the RegisterSerializer | There is an issue with the RegisterSerializer which prevents saving additional fields that are added for the CustomUser. There should be a way to extend the fields to accommodate these field changes and persist them. The limitation is caused by how the fields are manually defined in the adapter's save_user function | open | 2024-06-02T22:47:50Z | 2024-06-02T22:47:50Z | https://github.com/iMerica/dj-rest-auth/issues/640 | [] | Strapchay | 0 |
Farama-Foundation/PettingZoo | api | 1,078 | [Question] Help to understand PettingZoo + SuperSuit + StableBaselines3 approach | ### Question
Hi everyone,
I have successfully trained a simple multi-agent game environment using Stable Baselines 3 + PettingZoo + SuperSuit. Surprisingly, all of the agents learn incredibly well through a single-agent interface such as the one Stable Baselines 3 provides.
Now, my question is: I don't really get the classification of this algorithm. Is it an example of "joint action learning" or "centralized training and decentralized execution"?
I have been following this tutorial which is also available on PettingZoo examples: https://towardsdatascience.com/multi-agent-deep-reinforcement-learning-in-15-lines-of-code-using-pettingzoo-e0b963c0820b
Unfortunately, SuperSuit doesn't seem to provide a detailed explanation of its workflow. It seems like observation and chosen actions are stacked together, so I tend to think that it's a joint action learning implementation.
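For concreteness, here is a toy sketch (mine, not SuperSuit's code) of what exposing several agents through one single-agent interface looks like: each agent becomes one slot of a batched environment served by the same policy.

```python
# Toy sketch only. One policy function, applied independently per agent slot:
def shared_policy(observation):
    return sum(observation) % 3  # stand-in for a single shared network

observations = {"agent_0": [1, 2], "agent_1": [3, 4]}
actions = {name: shared_policy(obs) for name, obs in observations.items()}
print(actions)  # {'agent_0': 0, 'agent_1': 1}
```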
Thank you in advance! | closed | 2023-08-28T07:14:24Z | 2023-09-13T14:05:05Z | https://github.com/Farama-Foundation/PettingZoo/issues/1078 | [
"question"
] | Lauqz | 4 |
codertimo/BERT-pytorch | nlp | 37 | shape match for mul? | https://github.com/codertimo/BERT-pytorch/blob/alpha0.0.1a5/bert_pytorch/model/embedding/position.py#L18
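For context, the linked line computes the standard Transformer sinusoidal positional encoding. The following NumPy re-sketch (illustrative; the variable names mirror the linked code, the sizes are example values) annotates the shapes involved:

```python
import math
import numpy as np

max_len, d_model = 512, 768
position = np.arange(max_len, dtype=float)[:, None]    # shape (max_len, 1)
div_term = np.exp(np.arange(0, d_model, 2, dtype=float)
                  * -(math.log(10000.0) / d_model))    # shape (d_model // 2,)
angles = position * div_term   # broadcasts (512, 1) * (384,) -> (512, 384)
print(position.shape, div_term.shape, angles.shape)
```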
What are the shapes of `position` and `div_term`? | closed | 2018-10-26T08:34:06Z | 2018-10-26T23:51:51Z | https://github.com/codertimo/BERT-pytorch/issues/37 | [] | guotong1988 | 1 |
thtrieu/darkflow | tensorflow | 361 | How to retrain my model with new Images and classes from a checkpoint | How can I retrain my model? E.g., I have trained a model with 3 classes and stopped at checkpoint 1250. Now I intend to add some more images, annotations, and classes/labels to be trained along with my existing data.
Is it possible to restart training from a checkpoint with new data? Which command can I use to do that? | open | 2017-07-28T06:21:50Z | 2019-11-23T23:10:01Z | https://github.com/thtrieu/darkflow/issues/361 | [] | ManojPabani | 3 |
OthersideAI/self-operating-computer | automation | 64 | Error parsing JSON: X get_image failed: error 8 (73, 0, 967) | [Self-Operating Computer]
Hello, I can help you with anything. What would you like done?
[User]
google the word HI
Error parsing JSON: X get_image failed: error 8 (73, 0, 967)
[Self-Operating Computer][Error] something went wrong :(
[Self-Operating Computer][Error] AI response
Failed take action after looking at the screenshot
what could be the problem? | open | 2023-12-02T17:23:04Z | 2023-12-21T15:27:41Z | https://github.com/OthersideAI/self-operating-computer/issues/64 | [
"bug"
] | Andy1996247 | 11 |
jpadilla/django-rest-framework-jwt | django | 288 | Is it possible to revoke refresh tokens? | I'm not even sure that this is how refresh tokens are meant to behave, but how can a user effectively notify the system to stop issuing new tokens by using a refresh token in the case their token is compromised?
My settings file contains the following JWT settings
```
JWT_AUTH = {
'JWT_EXPIRATION_DELTA': datetime.timedelta(seconds=300),
'JWT_REFRESH_EXPIRATION_DELTA': datetime.timedelta(days=7),
'JWT_ALLOW_REFRESH': True,
'JWT_AUTH_HEADER_PREFIX': 'Token'
}
```
After a user obtains a valid JWT token from **_rest_framework_jwt.views.obtain_jwt_token_** they can use it to access my system's APIs, for up to 7 days by getting new tokens each time using **_rest_framework_jwt.views.refresh_jwt_token_**. However, what if one of the expired JWT Tokens is compromised before the refresh token's expiration delta (7 days), couldn't it be used to obtain a valid token by calling the same refresh endpoint? If so, how can a refresh token be revoked so this does not happen?
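One framework-agnostic pattern for this (a toy sketch, not a feature of django-rest-framework-jwt): embed a per-user token version in the payload and bump it to revoke everything outstanding, refresh tokens included.

```python
# Toy in-memory version store; in Django this would be a column on the user model.
TOKEN_VERSION = {}

def payload_for(user_id):
    return {"user_id": user_id, "ver": TOKEN_VERSION.setdefault(user_id, 0)}

def payload_is_valid(payload):
    # Called during refresh: reject payloads minted before the last revocation.
    return payload.get("ver") == TOKEN_VERSION.get(payload["user_id"])

def revoke_all_tokens(user_id):
    TOKEN_VERSION[user_id] = TOKEN_VERSION.get(user_id, 0) + 1

p = payload_for("alice")
assert payload_is_valid(p)
revoke_all_tokens("alice")
assert not payload_is_valid(p)   # old payload can no longer be refreshed
```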
Note: still trying to wrap my head around using JWT tokens securely | open | 2016-11-22T21:01:06Z | 2017-02-28T14:14:45Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/288 | [] | alexolivas | 6 |
tiangolo/uwsgi-nginx-flask-docker | flask | 43 | NGINX doesn't start | Hi, sorry for my noob question, but I'm a beginner with Docker.
Is there something that disables NGINX when I start a container with debug directives? I started the container using this command line:
docker run -d --name mycontainer -p 80:80 -v $(pwd)/app:/app -e FLASK_APP=main.py -e FLASK_DEBUG=1 myimage flask run --host=0.0.0.0 --port=80
After doing some package installations in the container, I committed it to a new image. After this, when I start a new container using this new image, it always runs in debug mode. | closed | 2018-02-26T12:50:03Z | 2018-09-22T17:37:23Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/43 | [] | diegoalbuquerque | 5 |
NullArray/AutoSploit | automation | 815 | Divided by zero exception85 | Error: Attempted to divide by zero.85 | closed | 2019-04-19T16:01:08Z | 2019-04-19T16:37:45Z | https://github.com/NullArray/AutoSploit/issues/815 | [] | AutosploitReporter | 0 |
iperov/DeepFaceLab | machine-learning | 874 | Can Radeon be used? | I tried hard to use DeepFaceLab, but it has many errors and I can't get past part 4. Can DeepFaceLab be used on a Radeon HD 5455 DDR3 video card?
Thank you | open | 2020-08-26T11:12:42Z | 2023-06-08T21:20:34Z | https://github.com/iperov/DeepFaceLab/issues/874 | [] | admobwebmaster | 7 |
tflearn/tflearn | data-science | 534 | Run name and layer names incorrect in TensorBoard | I use the latest TFLearn (0.2.2) and TensorFlow (0.12) on Windows (from Git), and have issues visualizing results in TensorBoard.
I use something like:
```python
model.fit({'input': X }, {'target': Y}, n_epoch=99, batch_size=512,
          validation_set=({'input': testX}, {'target': testY}),
          snapshot_step=None, show_metric=True, run_id='BBv44')
```
TensorFlow does write log files in the correct location (in my case: `E:\tmp\tflearn_logs\BBv44`).
*Problem 1*: Runs do not show up in Tensorboard. When I start Tensorboard with `--logdir=E:\tmp\tflearn_logs` it used to show all runs in sub-directories (e.g. BBv40, BBv41, ...). New runs (BBv44) are not in the list. A workaround is to start Tensorboard explicitly with `--logdir=E:\tmp\tflearn_logs\BBv41`. When I do this, data is visualized (but the run has no name). However, I can no longer compare results of multiple runs.
*Problem 2*: The grouping within TensorBoard seems to be messed up. Instead of a category such as "Accuracy/Validation" I now see groups such as "Adam" with layer image captions like "Adam/Conv2D_1/W/Regularizer/L2-Loss". From TensorFlow I got the following messages:
```
INFO:tensorflow:Summary name Loss & var loss/ is illegal; using Loss___var_loss/ instead.
INFO:tensorflow:Summary name Loss & var loss/ (raw) is illegal; using Loss___var_loss/__raw_ instead.
INFO:tensorflow:Summary name Loss/ (raw) is illegal; using Loss/__raw_ instead.
INFO:tensorflow:Summary name Conv2D/W/Regularizer/L2-Loss (raw) is illegal; using Conv2D/W/Regularizer/L2-Loss__raw_ instead.
INFO:tensorflow:Summary name Conv2D_1/W/Regularizer/L2-Loss (raw) is illegal; using Conv2D_1/W/Regularizer/L2-Loss__raw_ instead.
INFO:tensorflow:Summary name Conv2D_2/W/Regularizer/L2-Loss (raw) is illegal; using Conv2D_2/W/Regularizer/L2-Loss__raw_ instead.
```
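For what it's worth, those renames come from summary-tag sanitization: characters outside `[-/\w.]` are replaced with underscores. A stand-alone sketch of that rule (the regex mirrors TensorBoard's `clean_tag` behavior as I understand it, so treat it as illustrative rather than the exact implementation):

```python
import re

# Tags may only contain alphanumerics, '-', '/', '_' and '.';
# everything else becomes '_' (mirrors TensorBoard's clean_tag rule).
_INVALID_TAG_CHARACTERS = re.compile(r"[^-/\w\.]")

def clean_tag(name):
    return _INVALID_TAG_CHARACTERS.sub("_", name)

print(clean_tag("Loss & var loss/"))        # Loss___var_loss/
print(clean_tag("Loss & var loss/ (raw)"))  # Loss___var_loss/__raw_
```

This reproduces the exact renames in the log above, which suggests the odd grouping comes from how the summary names are constructed, not from the log-writing itself.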
| open | 2016-12-23T12:46:30Z | 2016-12-23T12:46:30Z | https://github.com/tflearn/tflearn/issues/534 | [] | werner-rammer | 0 |
gevent/gevent | asyncio | 1,397 | gevent.[i]map convenience APIs | Currently these are hidden in `gevent.pool.Group`. Are they useful enough with common enough defaults or easily understood tradeoffs in the defaults that they should be exposed as convenient top-level APIs?
This occurred to me when writing a response to https://github.com/benoitc/gunicorn/issues/2013
Unbounded (group):
```python
responses = gevent.map(requests.get, urls)
```
Bounded (pool):
```python
responses = gevent.map_limited(4, requests.get, urls)
``` | open | 2019-04-15T13:56:25Z | 2019-04-29T19:20:16Z | https://github.com/gevent/gevent/issues/1397 | [
"Type: Enhancement",
"Type: Question"
] | jamadden | 1 |
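The `gevent.map` / `gevent.map_limited` proposal above has a close stdlib analogy: executors express the same bounded/unbounded distinction. A rough thread-based sketch of the semantics (not gevent; `fetch` is a stand-in for `requests.get`):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for requests.get(url)
    return f"response for {url}"

urls = ["http://a", "http://b", "http://c"]

# Unbounded-ish: one worker per item (greenlet-per-item, i.e. a Group)
with ThreadPoolExecutor(max_workers=len(urls)) as ex:
    responses = list(ex.map(fetch, urls))

# Bounded: at most 4 concurrent workers, like a Pool of size 4
with ThreadPoolExecutor(max_workers=4) as ex:
    responses_limited = list(ex.map(fetch, urls))

print(responses == responses_limited)  # True -- same results, different concurrency
```

Results come back in input order in both cases; only the concurrency cap differs, which is the tradeoff the proposed top-level API would have to document.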
MaartenGr/BERTopic | nlp | 1,181 | Manual assignment of a topic to a document that is not covered by a specific topic category. | Hi, @MaartenGr, I have a question regarding manually assigning a topic to a document. In our study, we want to know the prevalence of a specific topic across the entire dataset. However, after reviewing the documents under this topic category, we found that some posts are related to this topic but have been assigned to other low-similarity topics. To ensure that all posts relevant to the specific topic are captured, we were wondering whether manual assignment of the related documents to the topic is possible, and if any function exists for this purpose.
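As a hedged sketch of the manual-override idea, independent of BERTopic's API (every name below is hypothetical): force documents matching your keyword criteria into the target topic, then write the result back into the fitted model. In recent BERTopic versions the per-document assignments live in an attribute like `topic_model.topics_`, but verify that against your installed version.

```python
def reassign(topics, documents, keywords, target_topic):
    """Return a copy of `topics` where any document containing one of
    `keywords` is forced into `target_topic`."""
    out = list(topics)
    for i, doc in enumerate(documents):
        text = doc.lower()
        if any(k in text for k in keywords):
            out[i] = target_topic
    return out

docs = ["vaccine side effects", "stock prices fall", "mild vaccine reaction"]
topics = [3, 7, 9]  # 9 = a low-similarity topic we want to correct
fixed = reassign(topics, docs, keywords=["vaccine"], target_topic=3)
print(fixed)  # [3, 7, 3]
```

Prevalence of the specific topic is then just the share of documents with the corrected assignment.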
We appreciate your valuable time and thank you for establishing such a splendid community. | closed | 2023-04-12T03:16:34Z | 2023-05-23T08:34:20Z | https://github.com/MaartenGr/BERTopic/issues/1181 | [] | han-1231 | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 475 | Rectangular input shape results in square output shape | Thanks for the implementation of SMP, I have been playing with this project and noticed the following:
- My dataset consists of images with shape 512x288
- My output results have the shape 512x512, which means I manually need to trim a section of each image in order to obtain the correct result. Besides that, it also makes me think that quite a few calculations are happening (roughly a factor of 1.7) that should not be happening.
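Until the root cause is found, the manual trim can be a plain slice of the prediction. A minimal sketch on nested lists (with NumPy or PyTorch this is just indexing, e.g. `mask[:288, :]`; which axis was padded, and whether the padding is top-left or centered, depends on your pipeline):

```python
def center_crop(mask, target_h, target_w):
    """Crop a 2-D prediction (list of rows) to target_h x target_w,
    keeping the top-left-anchored region the model actually covered
    (adjust the offsets if your padding is centered)."""
    return [row[:target_w] for row in mask[:target_h]]

# 4x4 dummy "square" output cropped back to 4x2 (stand-in for 512 -> 288)
square = [[r * 10 + c for c in range(4)] for r in range(4)]
cropped = center_crop(square, 4, 2)
print(len(cropped), len(cropped[0]))  # 4 2
```
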
Does anybody know how this phenomenon arises? I expected the cause to be in the dataloader or in the upsampling, but in either case I cannot really tell what I should change in order to get the 512x288 output instead of the default 512x512. | closed | 2021-08-24T08:45:13Z | 2021-08-24T11:14:48Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/475 | [] | Fritskee | 4 |
modin-project/modin | pandas | 7165 | Fix upload coverage | Looks like the root cause: https://github.com/codecov/codecov-action/issues/1359 | closed | 2024-04-10T12:54:32Z | 2024-04-11T22:02:03Z | https://github.com/modin-project/modin/issues/7165 | [
"P0",
"CI"
] | anmyachev | 0 |
HIT-SCIR/ltp | nlp | 141 | Can the offline version here use semantic dependency parsing? | I want to use the semantic dependency parsing feature, but it seems only the online API supports it (http://www.ltp-cloud.com/demo/ ).
The offline models do not mention support for semantic dependency parsing: http://ltp.readthedocs.org/zh_CN/latest/ltptest.html
Besides the online API, is there any other way to use the semantic dependency parsing feature?
| closed | 2015-11-27T15:34:54Z | 2017-11-05T04:23:01Z | https://github.com/HIT-SCIR/ltp/issues/141 | [
"user-questions"
] | bohaoist | 4 |
hbldh/bleak | asyncio | 1,498 | Bluetooth Short UUIDs are not resolving correctly | * bleak version: 0.21.1
* Python version: Python 3.11.2
* Operating System: Debian GNU/Linux 12 (bookworm)
* BlueZ version (`bluetoothctl -v`) in case of Linux: bluetoothctl: 5.66
* Raspberry PI 4B.
### Description
After installing bleak, I tried the basic 'USAGE' sample from here: https://github.com/hbldh/bleak
...which reads the model number characteristic. Since my device does not have this characteristic, I changed to the Manufacturer Name UUID, "2A29", instead of "2A24". With no other changes, the sample fails with the error 'Characteristic with UUID 2A29 could not be found'.
### What I Did
I experimented with using the characteristic object, the characteristic handle, the short integer UUID, the short string UUID, and the full UUID. The characteristic object, the characteristic handle, and the full UUID all returned the manufacturer name. Both short-UUID tests did not.
Test Program
```python
import asyncio
from bleak import BleakClient
from bleak import BleakScanner
async def main():
print("Scanning for device...")
bledevice = await BleakScanner.find_device_by_name("Train.Red FYER 0220")
if bledevice == None:
print("Device not found")
exit
else:
print("Device found.")
print("Connecting to device...")
client = BleakClient(bledevice)
try:
await client.connect()
print(f"Connected={client.is_connected}")
print(f"Address={client.address}")
ManChar = None
for handle in client.services.services:
service = client.services.services[handle]
print(handle, service.uuid, service.description)
for characteristic in service.characteristics:
print(f" {characteristic.handle} {characteristic.uuid} {characteristic.description}")
if(characteristic.uuid=='00002a29-0000-1000-8000-00805f9b34fb'):
print(' (Found Manufacturer Characteristic)')
ManChar=characteristic
for descriptor in characteristic.descriptors:
print(f" {descriptor.handle} {descriptor.uuid} {descriptor.description}")
if(ManChar==None): exit
# This works
print(f"read_gatt_char({ManChar}):")
try:
manufacturer_name = await client.read_gatt_char(ManChar)
print(" {0}".format("".join(map(chr, manufacturer_name))))
except Exception as e:
print(f" {e}")
# This works
print(f"read_gatt_char({ManChar.handle}):")
try:
manufacturer_name = await client.read_gatt_char(ManChar.handle)
print(" {0}".format("".join(map(chr, manufacturer_name))))
except Exception as e:
print(f" {e}")
# This works
print("read_gatt_char('00002a29-0000-1000-8000-00805f9b34fb'):")
try:
manufacturer_name = await client.read_gatt_char('00002a29-0000-1000-8000-00805f9b34fb')
print(" {0}".format("".join(map(chr, manufacturer_name))))
except Exception as e:
print(f" {e}")
# This fails
print("read_gatt_char(0x2A29):")
try:
manufacturer_name = await client.read_gatt_char(0x2A29)
print(" {0}".format("".join(map(chr, manufacturer_name))))
except Exception as e:
print(f" {e}")
# This fails
print("read_gatt_char('2A29'):")
try:
manufacturer_name = await client.read_gatt_char('2A29')
print(" {0}".format("".join(map(chr, manufacturer_name))))
except Exception as e:
print(f" {e}")
# This fails
print("read_gatt_char('2a29'):")
try:
manufacturer_name = await client.read_gatt_char('2a29')
print(" {0}".format("".join(map(chr, manufacturer_name))))
except Exception as e:
print(f" {e}")
finally:
await client.disconnect()
asyncio.run(main())
```
Test Output
```
Scanning for device...
Device found.
Connecting to device...
Connected=True
Address=CB:93:EC:D1:C2:2D
20 0000180f-0000-1000-8000-00805f9b34fb Battery Service
21 00002a19-0000-1000-8000-00805f9b34fb Battery Level
10 00001801-0000-1000-8000-00805f9b34fb Generic Attribute Profile
11 00002a05-0000-1000-8000-00805f9b34fb Service Changed
13 00002902-0000-1000-8000-00805f9b34fb Client Characteristic Configuration
32 00004400-12df-4fbd-b4cd-a4471afe3d11 Unknown
33 00004401-12df-4fbd-b4cd-a4471afe3d11 Unknown
35 00002902-0000-1000-8000-00805f9b34fb Client Characteristic Configuration
36 00004402-12df-4fbd-b4cd-a4471afe3d11 Unknown
14 00002200-d578-4741-9b5b-7c64e958cfc6 Unknown
15 00002201-d578-4741-9b5b-7c64e958cfc6 Unknown
17 00002902-0000-1000-8000-00805f9b34fb Client Characteristic Configuration
18 00002202-d578-4741-9b5b-7c64e958cfc6 Unknown
23 0000180a-0000-1000-8000-00805f9b34fb Device Information
24 00002a29-0000-1000-8000-00805f9b34fb Manufacturer Name String
(Found Manufacturer Characteristic)
26 00002a24-0000-1000-8000-00805f9b34fb Model Number String
30 00002a26-0000-1000-8000-00805f9b34fb Firmware Revision String
28 00002a27-0000-1000-8000-00805f9b34fb Hardware Revision String
read_gatt_char(00002a29-0000-1000-8000-00805f9b34fb (Handle: 24): Manufacturer Name String):
Train.Red
read_gatt_char(24):
Train.Red
read_gatt_char('00002a29-0000-1000-8000-00805f9b34fb'):
Train.Red
read_gatt_char(0x2A29):
Characteristic with UUID 10793 could not be found!
read_gatt_char('2A29'):
Characteristic with UUID 2A29 could not be found!
read_gatt_char('2a29'):
Characteristic with UUID 2a29 could not be found!
```
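The failing cases are consistent with only full 128-bit strings being matched. A 16-bit Bluetooth SIG UUID is, by definition, an alias for `0000XXXX-0000-1000-8000-00805f9b34fb`, so one workaround (plain Python, no bleak required) is to expand the short form yourself before calling `read_gatt_char`:

```python
BT_SIG_BASE = "0000{:04x}-0000-1000-8000-00805f9b34fb"

def expand_uuid(short):
    """Expand a 16-bit UUID (int like 0x2A29 or str like '2A29')
    into the full 128-bit Bluetooth SIG form."""
    if isinstance(short, str):
        short = int(short, 16)
    return BT_SIG_BASE.format(short)

print(expand_uuid(0x2A29))  # 00002a29-0000-1000-8000-00805f9b34fb
print(expand_uuid("2A29"))  # 00002a29-0000-1000-8000-00805f9b34fb
```

Whether bleak itself should do this normalization internally is presumably the question for the maintainers; the helper above just sidesteps it on the caller's side.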
| closed | 2024-01-30T10:31:18Z | 2024-04-29T01:06:03Z | https://github.com/hbldh/bleak/issues/1498 | [
"bug"
] | StephenDone | 2 |
OpenBB-finance/OpenBB | machine-learning | 6,729 | [Bug] clone OpenBB-finance | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps(from the start) and commands to reproduce the behavior
**Screenshots**
If applicable, add screenshots to help explain your problem.
If you are running the terminal using the conda version please
rerun the terminal with `python terminal.py --debug`, and then
recreate your issue. Then include a screenshot of the entire
error printout.
**Desktop (please complete the following information):**
- OS: [e.g. Mac Sierra]
- Python version [e.g. 3.6.8]
**Additional context**
Add any other information that you think could be useful for us.
| closed | 2024-10-02T20:56:36Z | 2024-10-02T22:00:12Z | https://github.com/OpenBB-finance/OpenBB/issues/6729 | [] | FR3DERICO24 | 1 |
ludwig-ai/ludwig | computer-vision | 3,686 | Config type safety and auto-complete | Love the platform and low code ideas it brings to ML infra. ❤️
**Is your feature request related to a problem? Please describe.**
LudwigModel accepts a YAML string or raw dict as the model config, and in many examples you will see a decent amount of YAML hard-coded as strings. But the structure of such YAML is not guided, nor known upfront, which makes it more work to adopt the declarative approach through the programmatic interface.
**Describe the use case**
Looking at LLM fine tuning:
https://colab.research.google.com/drive/1c3AO8l_H6V_x37RwQ8V7M6A-RmcBf2tG?usp=sharing#scrollTo=JfZq1-qbulcg
It looks like I can specify the preprocessing sample rate; I'm curious what else I can do in preprocessing, and whether I can set a fixed parallelism for expected multi-core utilization 🤔
**Describe the solution you'd like**
Ideally, have pydantic classes defining each YAML structure, so you get type safety, input and output validation, and IDE auto-complete for quicker lookup of possible fields and values, with documentation living alongside the code. However, the dataclasses already present in the code base for configs are also good enough, as they provide auto-complete and allow documenting arguments.
Something similar to:
```python
config = LLMConfig(
base_model="meta-llama/Llama-2-7b-hf",
quantization=BitsAndBites(bits=4),
adapter=Lora(alpha=16),
prompt=Prompt(template="Say hello to {username}"),
input_features=TextFeature(name="prompt", preprocessing=Preprocessing(max_sequence_length=256)),
output_feature=TextFeature(name="output", preprocessing=Preprocessing(max_sequence_length=256)),
preprocessing=Preprocessing(sample_ratio=0.1, parallelism=4)
)
model = LudwigModel(config=config, logging_level=logging.INFO)
```
**Describe alternatives you've considered**
I see there are such domain models already present:
https://github.com/ludwig-ai/ludwig/blob/e46a9890b9f6345a0ba2face03d0e6fcedb909d9/ludwig/features/text_feature.py#L211
They contain more implementation details than are needed for easy and fast auto-complete discovery of possible properties. Still, it would potentially be faster to reuse those if their API is not meant to change, and I am assuming the config YAML is deserialized into such classes anyway.
**Additional context**
This should not only help adoption of the declarative approach, but could also save a deserialization step.
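A stdlib-only sketch of the idea (all class and field names here are hypothetical, not Ludwig's actual schema classes): typed config objects that still serialize to the dict form `LudwigModel(config=...)` accepts today.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class Preprocessing:
    sample_ratio: float = 1.0
    max_sequence_length: Optional[int] = None

@dataclass
class TextFeature:
    name: str
    preprocessing: Preprocessing = field(default_factory=Preprocessing)

@dataclass
class LLMConfig:
    base_model: str
    input_features: List[TextFeature]
    output_features: List[TextFeature]
    preprocessing: Preprocessing = field(default_factory=Preprocessing)

    def to_dict(self):
        # The typed config can still produce the dict/YAML form
        # that the existing programmatic interface accepts.
        return asdict(self)

cfg = LLMConfig(
    base_model="meta-llama/Llama-2-7b-hf",
    input_features=[TextFeature("prompt", Preprocessing(max_sequence_length=256))],
    output_features=[TextFeature("output", Preprocessing(max_sequence_length=256))],
    preprocessing=Preprocessing(sample_ratio=0.1),
)
print(cfg.to_dict()["preprocessing"]["sample_ratio"])  # 0.1
```

Invalid field names fail at construction time instead of silently falling through, which is the type-safety benefit the request is after.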
| closed | 2023-10-04T20:46:20Z | 2024-10-18T17:04:31Z | https://github.com/ludwig-ai/ludwig/issues/3686 | [] | Tradunsky | 1 |
TencentARC/GFPGAN | deep-learning | 19 | train | This training process looks a bit strange. Is there something wrong with my configuration file? I am training at 1024 resolution.
> 2021-07-13 16:20:56,380 INFO: [train..][epoch: 0, iter: 100, lr:(2.000e-03,)] [eta: 112 days, 17:17:33, time (data): 1.796 (0.015)] l_g_pix: inf l_p_8: nan l_p_16: nan l_p_32: nan l_p_64: nan l_p_128: nan l_p_256: nan l_p_512: inf l_p_1024: inf l_g_percep: inf l_g_style: inf l_g_gan: nan l_g_gan_left_eye: nan l_g_gan_right_eye: nan l_g_gan_mouth: nan l_g_comp_style_loss: nan l_identity: nan l_d: nan real_score: nan fake_score: nan l_d_left_eye: nan l_d_right_eye: nan l_d_mouth: nan
| closed | 2021-07-13T08:46:51Z | 2023-04-12T07:56:19Z | https://github.com/TencentARC/GFPGAN/issues/19 | [] | ZZFanya-DWR | 2 |
dmlc/gluon-cv | computer-vision | 1036 | cls_pred, box_pred, mask_pred, roi, samples, matches, rpn_score, rpn_box, anchors, cls_targets, box_targets, box_masks, indices = net(data, gt_box, gt_label) | 4 values are needed, but 5 are given, which raises an error. | closed | 2019-11-11T01:58:28Z | 2019-11-11T02:55:47Z | https://github.com/dmlc/gluon-cv/issues/1036 | [] | wb315 | 1 |
keras-team/keras | tensorflow | 21,001 | EarlyStopping with list of metrics to monitor | In addition to this example:
`callback = keras.callbacks.EarlyStopping(monitor='val_loss')`
Allow monitoring of multiple metrics, as in this example:
`callback = keras.callbacks.EarlyStopping(monitor=['val_loss', 'val_accuracy', 'val_f1measure'])`
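A stand-alone sketch of the requested semantics, outside Keras (the class below is hypothetical, not Keras internals): stop only when *none* of the monitored metrics has improved for `patience` consecutive epochs.

```python
class MultiMetricEarlyStopping:
    """Hypothetical logic: a metric 'improves' when it moves in its
    preferred direction; stop when no metric improved for `patience` epochs."""

    def __init__(self, modes, patience=2):
        self.modes = modes              # e.g. {"val_loss": "min", "val_accuracy": "max"}
        self.patience = patience
        self.best = {}
        self.wait = 0

    def update(self, logs):
        improved = False
        for name, mode in self.modes.items():
            value = logs[name]
            best = self.best.get(name)
            if best is None or (value < best if mode == "min" else value > best):
                self.best[name] = value
                improved = True
        self.wait = 0 if improved else self.wait + 1
        return self.wait >= self.patience  # True -> stop training

es = MultiMetricEarlyStopping({"val_loss": "min", "val_accuracy": "max"}, patience=2)
history = [
    {"val_loss": 0.9, "val_accuracy": 0.60},
    {"val_loss": 0.8, "val_accuracy": 0.60},  # loss improved -> keep going
    {"val_loss": 0.8, "val_accuracy": 0.65},  # accuracy improved -> keep going
    {"val_loss": 0.8, "val_accuracy": 0.65},  # no improvement (1)
    {"val_loss": 0.8, "val_accuracy": 0.65},  # no improvement (2) -> stop
]
stops = [es.update(h) for h in history]
print(stops)  # [False, False, False, False, True]
```
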
This way, training should not stop while any of these metrics get better values, not just one of them. | open | 2025-03-07T19:02:29Z | 2025-03-17T17:00:52Z | https://github.com/keras-team/keras/issues/21001 | [
"type:feature"
] | fabriciorsf | 13 |
AirtestProject/Airtest | automation | 816 | Issues with version 1.2.6 | 1. During use, the image quality of the preview window is now noticeably degraded.
2. The app sometimes crashes during use.
3. When taking a screenshot, the selection box is drawn erratically, similar to the behavior inside poco.
4. When recording with poco, recorded events such as clicks do not respond in sync. | closed | 2020-10-19T09:08:12Z | 2021-02-21T08:56:18Z | https://github.com/AirtestProject/Airtest/issues/816 | [
"bug"
] | YongShing | 2 |
GibbsConsulting/django-plotly-dash | plotly | 376 | Live-Updating: `ValueError: No route found for path 'dpd/ws/channel'` | I am attempting to implement live-updating according to the documentation. I encountered the issue described in #368 and, following the suggestion in #369 to copy `websocketbridge.js` to my static folder, I am now encountering the above error on the server.
This error is printed every few seconds.
Here is the full stack trace (note: custom formatting):
```python
2022-01-11 16:10:11.473 | ERROR | daphne.server:application_checker:290 - Exception inside application: No route found for path 'dpd/ws/channel'.
Traceback (most recent call last):
> File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/staticfiles.py", line 44, in __call__
return await self.application(scope, receive, send)
│ │ │ │ └ functools.partial(<bound method Server.handle_reply of <daphne.server.Server object at 0x7fbc3518ca60>>, <WebSocketProtocol c...
│ │ │ └ <bound method Queue.get of <Queue at 0x7fbc25cf32e0 maxsize=0 _queue=[{'type': 'websocket.connect'}] tasks=1>>
│ │ └ {'type': 'websocket', 'path': '/dpd/ws/channel', 'raw_path': b'/dpd/ws/channel', 'headers': [(b'host', b'127.0.0.1:8000'), (b...
│ └ <channels.routing.ProtocolTypeRouter object at 0x7fbc42648ca0>
└ <channels.staticfiles.StaticFilesWrapper object at 0x7fbc36373ca0>
File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/routing.py", line 71, in __call__
return await application(scope, receive, send)
│ │ │ └ functools.partial(<bound method Server.handle_reply of <daphne.server.Server object at 0x7fbc3518ca60>>, <WebSocketProtocol c...
│ │ └ <bound method Queue.get of <Queue at 0x7fbc25cf32e0 maxsize=0 _queue=[{'type': 'websocket.connect'}] tasks=1>>
│ └ {'type': 'websocket', 'path': '/dpd/ws/channel', 'raw_path': b'/dpd/ws/channel', 'headers': [(b'host', b'127.0.0.1:8000'), (b...
└ <channels.sessions.CookieMiddleware object at 0x7fbc42648af0>
File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/sessions.py", line 47, in __call__
return await self.inner(dict(scope, cookies=cookies), receive, send)
│ │ │ │ │ └ functools.partial(<bound method Server.handle_reply of <daphne.server.Server object at 0x7fbc3518ca60>>, <WebSocketProtocol c...
│ │ │ │ └ <bound method Queue.get of <Queue at 0x7fbc25cf32e0 maxsize=0 _queue=[{'type': 'websocket.connect'}] tasks=1>>
│ │ │ └ {'csrftoken': 'ksFsl6mpPkmQrJeeq2kedERpL4Vd9JuRoORFC976y5NQANbzxgrag27QcUyoDx75', 'sessionid': 'phyh1f1jkryztx0c16ndjy3e7wnx0...
│ │ └ {'type': 'websocket', 'path': '/dpd/ws/channel', 'raw_path': b'/dpd/ws/channel', 'headers': [(b'host', b'127.0.0.1:8000'), (b...
│ └ <channels.sessions.SessionMiddleware object at 0x7fbc426486d0>
└ <channels.sessions.CookieMiddleware object at 0x7fbc42648af0>
File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/sessions.py", line 263, in __call__
return await self.inner(wrapper.scope, receive, wrapper.send)
│ │ │ │ │ │ └ <function InstanceSessionWrapper.send at 0x7fbc350f4af0>
│ │ │ │ │ └ <channels.sessions.InstanceSessionWrapper object at 0x7fbc34309580>
│ │ │ │ └ <bound method Queue.get of <Queue at 0x7fbc25cf32e0 maxsize=0 _queue=[{'type': 'websocket.connect'}] tasks=1>>
│ │ │ └ {'type': 'websocket', 'path': '/dpd/ws/channel', 'raw_path': b'/dpd/ws/channel', 'headers': [(b'host', b'127.0.0.1:8000'), (b...
│ │ └ <channels.sessions.InstanceSessionWrapper object at 0x7fbc34309580>
│ └ <channels.auth.AuthMiddleware object at 0x7fbc426485e0>
└ <channels.sessions.SessionMiddleware object at 0x7fbc426486d0>
File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/auth.py", line 185, in __call__
return await super().__call__(scope, receive, send)
│ │ └ <bound method InstanceSessionWrapper.send of <channels.sessions.InstanceSessionWrapper object at 0x7fbc34309580>>
│ └ <bound method Queue.get of <Queue at 0x7fbc25cf32e0 maxsize=0 _queue=[{'type': 'websocket.connect'}] tasks=1>>
└ {'type': 'websocket', 'path': '/dpd/ws/channel', 'raw_path': b'/dpd/ws/channel', 'headers': [(b'host', b'127.0.0.1:8000'), (b...
File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/middleware.py", line 26, in __call__
return await self.inner(scope, receive, send)
│ │ │ │ └ <bound method InstanceSessionWrapper.send of <channels.sessions.InstanceSessionWrapper object at 0x7fbc34309580>>
│ │ │ └ <bound method Queue.get of <Queue at 0x7fbc25cf32e0 maxsize=0 _queue=[{'type': 'websocket.connect'}] tasks=1>>
│ │ └ {'type': 'websocket', 'path': '/dpd/ws/channel', 'raw_path': b'/dpd/ws/channel', 'headers': [(b'host', b'127.0.0.1:8000'), (b...
│ └ <channels.routing.URLRouter object at 0x7fbc42690430>
└ <channels.auth.AuthMiddleware object at 0x7fbc426485e0>
File "/home/dur10420/miniconda3/envs/dtautomation/lib/python3.9/site-packages/channels/routing.py", line 168, in __call__
raise ValueError("No route found for path %r." % path)
└ 'dpd/ws/channel'
ValueError: No route found for path 'dpd/ws/channel'.
```
My `urls.py`:
```python
from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import include, path
urlpatterns = [
path("", include("apps.index.urls")),
path("admin/", admin.site.urls),
path("circuits/", include("apps.circuits.urls")),
path("django_plotly_dash/", include("django_plotly_dash.urls")),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```
Also, possibly related to this, I encounter these errors on the client when the tag `{% plotly_message_pipe %}` is added to my template:

I am directly inserting my app into the template using `{% plotly_direct %}` with the required `{% plotly_header %}` and `{% plotly_footer %}` tags as described in the docs.
Thanks for your time,
Jules
| closed | 2022-01-11T22:19:54Z | 2024-10-19T16:38:33Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/376 | [] | JulianOrteil | 7 |
encode/apistar | api | 664 | Handle deprecated links and parameters | Both OpenAPI 2/Swagger and OpenAPI 3 offer a `deprecated` boolean attribute on [operation objects](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#operationObject), [parameter objects](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#parameter-object) and [schema objects](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#fixed-fields-20): APIStar could raise a warning when calling a deprecated endpoint, using a deprecated parameter or schema, or at least make the attributes available to the user in the `Link` and `Field` classes. | open | 2019-09-05T12:04:57Z | 2019-09-05T12:04:57Z | https://github.com/encode/apistar/issues/664 | [] | Lucidiot | 0 |
skypilot-org/skypilot | data-science | 4847 | uvicorn worker count does not honor cgroup resource limit | | closed | 2025-02-28T02:33:11Z | 2025-03-01T09:13:28Z | https://github.com/skypilot-org/skypilot/issues/4847 | [
"api server"
] | aylei | 0 | |
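For context on the skypilot issue above: under cgroup v2 the CPU quota is exposed in `cpu.max` as a `"<quota> <period>"` string (or `max` for unlimited). A hedged sketch (the policy below is an assumption, not SkyPilot's actual code) of deriving a worker count from it instead of `os.cpu_count()`:

```python
import os

def workers_from_cpu_max(cpu_max: str, fallback: int) -> int:
    """Parse a cgroup v2 cpu.max string ('<quota> <period>' or 'max')
    into an effective CPU/worker count, never below 1."""
    quota_s, _, period_s = cpu_max.strip().partition(" ")
    if quota_s == "max":
        return fallback
    quota, period = int(quota_s), int(period_s)
    return max(1, quota // period)

host_cpus = os.cpu_count() or 1
print(workers_from_cpu_max("200000 100000", host_cpus))       # 2
print(workers_from_cpu_max("max", host_cpus) == host_cpus)    # True
```

In a container the string would typically be read from `/sys/fs/cgroup/cpu.max` (path is environment-dependent), with `os.cpu_count()` kept only as the unlimited fallback.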
tflearn/tflearn | tensorflow | 467 | tflearn and gym.render() cannot work together | I just used the following code. And the code ran fine when `import tflearn` was commented. If I have both `import tflearn` and `env.render()`, error occurred.
```python
import tflearn
import tensorflow as tf
import gym
with tf.Session() as sess:
env = gym.make('Pendulum-v0')
env.seed(1234)
env.reset()
env.render()
```
The error is like this:
```
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.253
pciBusID 0000:01:00.0
Total memory: 3.93GiB
Free memory: 3.31GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x36e0a20
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 1 with properties:
name: GeForce GTX 980
major: 5 minor: 2 memoryClockRate (GHz) 1.253
pciBusID 0000:07:00.0
Total memory: 3.94GiB
Free memory: 3.87GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 980, pci bus id: 0000:07:00.0)
[2016-11-17 10:40:25,058] Making new env: Pendulum-v0
Traceback (most recent call last):
File "test.py", line 14, in <module>
env.render()
File "/home/chentao/software/gym/gym/core.py", line 192, in render
return self._render(mode=mode, close=close)
File "/home/chentao/software/gym/gym/envs/classic_control/pendulum.py", line 66, in _render
from gym.envs.classic_control import rendering
File "/home/chentao/software/gym/gym/envs/classic_control/rendering.py", line 23, in <module>
from pyglet.gl import *
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/gl/__init__.py", line 236, in <module>
import pyglet.window
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/window/__init__.py", line 1817, in <module>
gl._create_shadow_window()
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/gl/__init__.py", line 205, in _create_shadow_window
_shadow_window = Window(width=1, height=1, visible=False)
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/window/xlib/__init__.py", line 163, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/window/__init__.py", line 505, in __init__
config = screen.get_best_config(template_config)
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/canvas/base.py", line 161, in get_best_config
configs = self.get_matching_configs(template)
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/canvas/xlib.py", line 179, in get_matching_configs
configs = template.match(canvas)
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/gl/xlib.py", line 29, in match
have_13 = info.have_version(1, 3)
File "/home/chentao/software/anaconda2/envs/tensorflow/lib/python2.7/site-packages/pyglet/gl/glx_info.py", line 89, in have_version
client = [int(i) for i in client_version.split('.')]
ValueError: invalid literal for int() with base 10: 'None'
``` | open | 2016-11-17T02:45:18Z | 2017-08-09T19:47:02Z | https://github.com/tflearn/tflearn/issues/467 | [] | taochenshh | 5 |
mckinsey/vizro | plotly | 191 | Make grouped charts | ### Question
Hi, how can I build grouped charts?
For example, I don't want to build a bar chart with the raw price; I want to use the sum or mean of the price for each year placed on the x-axis.
But in the selector I want to filter the raw DataFrame on prices, for example (and build the grouped chart afterwards).
### Code/Examples
```python
df = df[["brand", "year", "price"]]

components = [
    vm.Graph(
        id="win_time_graph",
        figure=px.bar(
            data_frame=df,
            x="year",
            y="price",
            color="brand",
        ),
    ),
]
```
---> I want to build the MEAN of prices for every year
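The aggregation itself can be computed before the frame is handed to `px.bar`. With pandas this would typically be something like `df.groupby(["year", "brand"], as_index=False)["price"].mean()` (treat that one-liner as a hedged suggestion); here is a dependency-free sketch of the same idea:

```python
from collections import defaultdict

rows = [
    {"brand": "A", "year": 2020, "price": 10},
    {"brand": "A", "year": 2020, "price": 30},
    {"brand": "B", "year": 2020, "price": 50},
    {"brand": "A", "year": 2021, "price": 40},
]

# Collect every price per (year, brand) pair...
grouped = defaultdict(list)
for r in rows:
    grouped[(r["year"], r["brand"])].append(r["price"])

# ...then reduce each group to its mean before plotting.
agg = [
    {"year": year, "brand": brand, "price": sum(p) / len(p)}
    for (year, brand), p in sorted(grouped.items())
]
print(agg[0])  # {'year': 2020, 'brand': 'A', 'price': 20.0}
```

The filtered-but-unaggregated DataFrame can still back the selector, with the aggregated frame feeding the chart.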
### Other information
_No response_
### vizro version
last
### Python version
3.11
### OS
win 10
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2023-12-05T08:54:11Z | 2024-01-25T09:52:47Z | https://github.com/mckinsey/vizro/issues/191 | [
"General Question :question:"
] | vmisusu | 7 |
Yorko/mlcourse.ai | plotly | 719 | Typo in the feature naming in the impurity-reduction counting | In the book, on the [Feature importance page](https://mlcourse.ai/book/topic05/topic5_part3_feature_importance.html), there is a typo in a feature name. One of the chosen features should be "Petal length (cm)".
<img width="767" alt="image" src="https://user-images.githubusercontent.com/17138883/189652317-d999f0a6-43bc-4b74-99c7-a3b0ba1a117d.png">
| closed | 2022-09-12T12:26:14Z | 2022-09-13T23:01:01Z | https://github.com/Yorko/mlcourse.ai/issues/719 | [] | aulasau | 1 |
Nekmo/amazon-dash | dash | 172 | logger.setLevel wrong argument type | ### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### Guideline for bug reports
* amazon-dash version: 1.4.0
* Python version: 3.7.3
* Pip & Setuptools version: pip 18.1, setuptools 40.8.0
* Operating System: raspbian 10
How to get your version:
```
amazon-dash --version
python --version
pip --version
easy_install --version
```
- [x] The `pip install` or `setup install` command has been completed without errors
- [x] The `python -m amazon_dash.install` command has been completed without errors
- [ ] The `amazon-dash discovery` command works without errors
- [ ] I have created/edited the configuration file
- [ ] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
I tried to run the amazon-dash command, but it didn't complete. The problem is in the `cli` function.
Checking the `loglevel` variable, the data type is `str`, but it should be `int`.
```
[DEBUG] function cli: loglevel type: <class 'str'> | logging.INFO type: <class 'int'>
```
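For reference, the stdlib behavior can be reproduced in isolation: `Logger.setLevel` accepts an `int`, or a *named* level string such as `"INFO"`, but a numeric string like `'20'` raises exactly the `ValueError` in the traceback below. A minimal stand-alone demo:

```python
import logging

logger = logging.getLogger("amazon-dash-demo")

logger.setLevel(20)        # fine: int level
logger.setLevel("INFO")    # fine: a *named* level as str
try:
    logger.setLevel("20")  # the bug: numeric level as str
except ValueError as e:
    print(e)  # Unknown level: '20'
```

So the fix is to convert `loglevel` to `int` (or to a level name) before `create_logger` passes it to `logger.setLevel`.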
#### What I Did
```bash
> amazon-dash hack-device
```
```
Welcome to Amazon-dash v1.4.0 using Python 3.7.3
December 31 is the last day to block requests from your Amazon-dash buttons to Amazon servers. In 2020 your buttons can be bricked in an update from Amazon servers.
Traceback (most recent call last):
File "/usr/local/bin/amazon-dash", line 6, in <module>
catch(cli)()
File "/usr/local/lib/python3.7/dist-packages/amazon_dash/exceptions.py", line 103, in wrap
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1134, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1059, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1662, in invoke
super().invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1401, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 767, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/amazon_dash/management.py", line 110, in cli
create_logger('amazon-dash', loglevel)
File "/usr/local/lib/python3.7/dist-packages/amazon_dash/management.py", line 35, in create_logger
logger.setLevel(level)
File "/usr/lib/python3.7/logging/__init__.py", line 1358, in setLevel
self.level = _checkLevel(level)
File "/usr/lib/python3.7/logging/__init__.py", line 192, in _checkLevel
raise ValueError("Unknown level: %r" % level)
ValueError: Unknown level: '20'
``` | closed | 2021-05-17T17:27:50Z | 2022-02-05T16:28:11Z | https://github.com/Nekmo/amazon-dash/issues/172 | [] | scmanjarrez | 3 |
ionelmc/pytest-benchmark | pytest | 11 | Graph plotting | closed | 2015-07-22T10:46:39Z | 2015-08-11T01:23:48Z | https://github.com/ionelmc/pytest-benchmark/issues/11 | [] | ionelmc | 0 | |
InstaPy/InstaPy | automation | 6218 | Is InstaPy able to extract bio information of someone else's followers? | Hi, I'm new to InstaPy. I have searched the documentation and the Discord, but I couldn't find how to get the profiles (name, bio) of someone else's followers.
I am looking for specific words in the bios of someone else's followers, and I am not sure whether InstaPy can do this for me.
I would even be happy if I could just extract the bios of followers with InstaPy.
Can someone please tell me whether this is possible and, if so, how? | open | 2021-06-08T04:22:15Z | 2021-07-08T12:43:48Z | https://github.com/InstaPy/InstaPy/issues/6218 | [] | LightMoon | 5
521xueweihan/HelloGitHub | python | 2,699 | [Open-source self-recommendation] PhpWebStudy: a powerful PHP development environment management tool for macOS. | ## Recommended project
<!-- This is the submission entry for projects recommended in the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view your submission right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/xpf0000/PhpWebStudy
<!-- Please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning) -->
- Category: JS
<!-- Please describe what it does in about 20 characters, like an article title, so it is clear at a glance -->
- Project title: A powerful PHP development environment management tool for macOS
<!-- What is this project, what can it be used for, what features does it have or what pain points does it solve, what scenarios is it suited for, and what can beginners learn from it. Length 32-256 characters -->
- Project description: PhpWebStudy is an all-in-one PHP development environment manager that includes everything needed for PHP development: dynamic and static web servers, a DNS server, an FTP server, PHP and NodeJS, databases, data caches, and data queues. Its design takes cues from the BaoTa panel (宝塔面板), MAMP Pro, and similar tools, aiming to be as simple and easy to use as possible.
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights: Fairly comprehensive module support. A clean, intuitive UI. Very convenient viewing and editing of each module's configuration files and logs. A fairly complete set of PHP extensions that are easy to install.
- Sample code: (optional)
- Screenshots: (optional) gif/png/jpg

- Future plans:
  1. Add support for more modules, e.g. caddy
  2. Keep improving the user experience to make the software easier to use
| open | 2024-03-08T10:26:51Z | 2024-04-24T12:15:58Z | https://github.com/521xueweihan/HelloGitHub/issues/2699 | [
"JavaScript 项目"
] | xpf0000 | 0 |
explosion/spaCy | machine-learning | 12,933 | ValueError: [E949] Unable to align tokens for the predicted and reference docs. Windows, spacy 3.6.0, Python 3.8.10 | ### Discussed in https://github.com/explosion/spaCy/discussions/12932
<div type='discussions-op-text'>
<sup>Originally posted by **PeachDew** August 24, 2023</sup>
Hi! I referred to spacy's custom tokenization doc here: https://spacy.io/usage/linguistic-features#custom-tokenizer-training
and tried using a custom-trained tokenizer in my NER project.
Here is my functions.py file:
<details>
```
from tokenizers import Tokenizer
from spacy.tokens import Doc
import spacy
import pickle
TK_PATH = "./tokenizers/WPC-trained.json"
tokenizer = Tokenizer.from_file(TK_PATH)
class CustomTokenizer:
def __init__(self, vocab):
self.vocab = vocab
self._tokenizer = tokenizer
def __call__(self, text):
tokens = self._tokenizer.encode(text)
words = []
spaces = []
for i, (text, (start, end)) in enumerate(zip(tokens.tokens, tokens.offsets)):
words.append(text)
if i < len(tokens.tokens) - 1:
# If next start != current end we assume a space in between
next_start, next_end = tokens.offsets[i + 1]
spaces.append(next_start > end)
else:
spaces.append(True)
return Doc(self.vocab, words=words, spaces=spaces)
def to_bytes(self):
return pickle.dumps(self.__dict__)
def from_bytes(self, data):
self.__dict__.update(pickle.loads(data))
def to_disk(self, path, **kwargs):
with open(path, 'wb') as file_:
file_.write(self.to_bytes())
def from_disk(self, path, **kwargs):
with open(path, 'rb') as file_:
self.from_bytes(file_.read())
@spacy.registry.tokenizers("custom_tokenizer")
def create_whitespace_tokenizer():
def create_tokenizer(nlp):
return CustomTokenizer(nlp.vocab)
return create_tokenizer
```
</details>
and in my config.cfg:
```
[nlp.tokenizer]
@tokenizers = "custom_tokenizer"
```
I trained different tokenizers, and the BPE one worked without any hiccups, but when training with the WordLevel tokenizer I get:
```
ValueError: [E949] Unable to align tokens for the predicted and reference docs.
It is only possible to align the docs when both texts are the same except for whitespace and capitalization.
The predicted tokens start with: ['AAA', 'BBB', ':', '0']. The reference tokens start with: ['AAA', 'BBB:0.999', '"', '\r']
```
It seems that spacy is not using my custom tokenizer for prediction. Or is it an issue with an additional alignment step I have to include in the config?
I used https://huggingface.co/docs/tokenizers/quicktour to train my custom tokenizers.
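As I understand the precondition behind E949 (my paraphrase, not official wording), spaCy can only align the two tokenizations when joining their tokens yields the same text up to whitespace and capitalization. A stdlib-only sketch, based only on the token prefixes shown in the error message:

```python
# Sketch of the E949 precondition: predicted and reference tokens must
# spell out the same text, ignoring whitespace and capitalization.
def normalized(tokens):
    return "".join(tokens).replace(" ", "").lower()

pred_tokens = ["AAA", "BBB", ":", "0"]        # from the custom tokenizer
ref_tokens = ["AAA", "BBB:0.999", '"', "\r"]  # from the training reference

print(normalized(pred_tokens))                # aaabbb:0
assert normalized(pred_tokens) != normalized(ref_tokens)  # -> spaCy raises E949
```

If that is the cause, the WordLevel tokenizer's normalizer or unknown-token handling is probably altering characters (mapping out-of-vocabulary pieces, dropping `\r`, etc.), whereas the BPE tokenizer preserved the raw text.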
</div> | closed | 2023-08-24T04:59:43Z | 2023-09-30T00:02:08Z | https://github.com/explosion/spaCy/issues/12933 | [
"duplicate"
] | PeachDew | 2 |
deepspeedai/DeepSpeed | deep-learning | 5,640 | Does deepspeed support aarch64? | I am seeing the following error when trying to run it on an aarch64 machine with an H100.
Linux r8-u37 6.5.0-1019-nvidia-64k #19-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 12:54:40 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
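For context (I may be wrong here): `immintrin.h` is an x86 SIMD intrinsics header (SSE/AVX) that simply does not exist in aarch64 toolchains, so any op that includes it unconditionally cannot build on this machine. A quick check of whether a build host even ships it:

```python
# My own sketch: on x86 hosts the compiler provides <immintrin.h>;
# on aarch64 this check is False, matching the failure below.
import platform

arch = platform.machine().lower()
is_x86 = arch in ("x86_64", "amd64", "i386", "i686")
print(arch, is_x86)
```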
```
anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/comm/shm.cpp:10:10: fatal error: immintrin.h: No such file or directory
10 | #include <immintrin.h>
| ^~~~~~~~~~~~~
compilation terminated.
[2/3] c++ -MMD -MF shm_interface.o.d -DTORCH_EXTENSION_NAME=deepspeed_shm_comm -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/includes -isystem /home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch/include -isystem /home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch/include/TH -isystem /home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/torch/include/THC -isystem /home/khayam/anaconda3/envs/deepspeed/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O2 -fopenmp -c /home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/comm/shm_interface.cpp -o shm_interface.o
/home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/comm/shm_interface.cpp: In function ‘void initialize(int, int)’:
/home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/comm/shm_interface.cpp:42:46: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
42 | if (addr_string == NULL) { addr_string = ""; }
| ^~
/home/khayam/anaconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/ops/csrc/cpu/comm/shm_interface.cpp:44:46: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
44 | if (port_string == NULL) { port_string = ""; }
| ^~
ninja: build stopped: subcommand failed.
``` | closed | 2024-06-11T16:53:30Z | 2024-08-13T23:14:12Z | https://github.com/deepspeedai/DeepSpeed/issues/5640 | [] | khayamgondal | 7 |
ultralytics/yolov5 | machine-learning | 12,840 | Getting a ValueError when training using the high augmentation yaml file | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question

Hello,
I have trained on this data countless times and never gotten such an error before. Now that I am using the high augmentation option (hyp.scratch-high.yaml) it never finishes training and around the same point (epoch ~190) it returns this error.
What could the problem be?
### Additional
_No response_ | closed | 2024-03-22T15:05:36Z | 2024-05-03T00:21:02Z | https://github.com/ultralytics/yolov5/issues/12840 | [
"question",
"Stale"
] | osamamer | 2 |
ageitgey/face_recognition | machine-learning | 713 | Try to avoid photo's and validate real human face | Hi!
Can I "emulate" 3D face recognition?
I need to ensure the webcam takes a human face and not a photography.
Thanks! | open | 2019-01-04T19:46:56Z | 2019-01-18T10:44:26Z | https://github.com/ageitgey/face_recognition/issues/713 | [] | neumartin | 1 |
Lightning-AI/pytorch-lightning | data-science | 20,255 | Weights are misshapen when using model's forward in on_fit_end() hook with FSDP | ### Bug description
Hi everyone !
I am training an image classifier and would like to see the embeddings at the end of training, but I can't find how to do it while using FSDP, since the weights seem to get flattened outside of `training_step`/`validation_step`. Indeed, with the following code, I get a `RuntimeError: weight should have at least three dimensions`.
Is this intended behaviour? I don't understand, because I also tried running in the predict step instead of the on_predict_end hook and got the same error.
Thanks for your help already
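In case it helps, a hedged workaround sketch (my own idea, not verified on this exact setup; treating `trainer.strategy.model` as the FSDP-wrapped root is an assumption): outside the managed train/validation steps, FSDP keeps parameters flattened and sharded, so gathering the full parameters explicitly before calling the model may avoid the error:

```python
# Hedged sketch: gather full (unflattened) parameters before a manual forward.
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def embed_after_fit(trainer, lightning_module, dataloader):
    outputs = []
    fsdp_root = trainer.strategy.model  # assumption: the FSDP-wrapped root module
    with torch.no_grad(), FSDP.summon_full_params(fsdp_root):
        for imgs, _labels in dataloader:
            outputs.append(lightning_module(imgs.to(lightning_module.device)))
    return outputs
```

If `summon_full_params` doesn't apply here, running the embedding pass after `trainer.fit` returns, on an unwrapped copy of the model loaded from a checkpoint, would be another option.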
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
```python
import certifi
import os
import timm
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torch.nn.functional as F
import lightning as L
from lightning.pytorch.core import LightningModule, LightningDataModule
from lightning.pytorch import loggers as pl_loggers
from lightning.pytorch.trainer import Trainer
from lightning.pytorch.strategies import FSDPStrategy
from torchvision.datasets import MNIST
from torchvision.transforms import v2
from torchvision.transforms import Lambda
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from einops import rearrange
class ResnetModule(LightningModule):
def __init__(self, num_classes=10, in_chans=3):
super().__init__()
self.model = timm.create_model("resnet50", pretrained = False, num_classes = num_classes, in_chans = in_chans, drop_rate = 0.3)
self.loss_fn = nn.BCEWithLogitsLoss()
def forward(self, x):
out = self.model(x)
return out
def training_step(self, batch, batch_idx):
# Here we have self.model.conv1.weight.shape = torch.Size([64, 3, 7, 7])
loss = self._calculate_loss(batch, batch_idx, "train")
return loss
def validation_step(self, batch, batch_idx):
# Here we have self.model.conv1.weight.shape = torch.Size([64, 3, 7, 7])
self._calculate_loss(batch, batch_idx, "val")
def _calculate_loss(self, batch, batch_idx, mode = "train"):
images, labels = batch
outputs = self(images)
loss = self.loss_fn(outputs, labels)
return loss
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.parameters(), 1e-4)
return optimizer
def get_tb_logger(self, experiment:bool = False) -> pl_loggers.TensorBoardLogger | SummaryWriter:
for lg in self.trainer.loggers:
if isinstance(lg, pl_loggers.TensorBoardLogger):
return lg.experiment if experiment else lg
return None
def on_train_end(self):
# !!!!!!!!!
# Here we have self.model.conv1.weight.shape = torch.Size([9408])
# !!!!!!!!!
pass
def on_fit_end(self):
# !!!!!!!!!
# Here we have self.model.conv1.weight.shape = torch.Size([9408])
# !!!!!!!!!
embeddings_activations = []
embeddings_inputs = []
embeddings_labels = []
def hook(model, input, output):
embeddings_activations.append(output)
# Attach hook to the wanted layer
hook_handle = self.model.global_pool.register_forward_hook(hook)
val_dataloader = self.trainer.datamodule.val_dataloader()
for batch_idx, batch in enumerate(val_dataloader):
imgs, labels = batch
embeddings_inputs.append(imgs)
embeddings_labels.append(labels)
self(imgs)
tb_logger = self.get_tb_logger(experiment=True)
if tb_logger:
features = rearrange(embeddings_activations, 'n b p -> (n b) p')
images = rearrange(embeddings_inputs, 'n b c h w -> (n b) c h w')
labels_one_hot = rearrange(embeddings_labels, 'n b l -> (n b) l')
metadata = [torch.argmax(t).item() for t in labels_one_hot]
tb_logger.add_embedding(
features,
metadata = metadata,
label_img = images,
global_step = self.current_epoch,
tag = f"{self.model.__class__.__name__}'s embeddings",
)
hook_handle.remove()
del embeddings_activations
del embeddings_inputs
del embeddings_labels
class MNISTModule(LightningDataModule):
def __init__(self, num_workers, pin_memory):
super().__init__()
self.batch_size = 64
self.num_workers = num_workers
self.pin_memory = pin_memory
self.transform = transforms.Compose(
[
v2.Grayscale(num_output_channels=3),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
]
)
self.mnist_path = os.path.join(os.getcwd(),'dataset','mnist')
def prepare_data(self):
dataset = MNIST(
self.mnist_path,
download=True,
transform=self.transform if self.transform is not None else transforms.ToTensor(),
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
def setup(self, stage: str):
if stage == "fit":
self.train_set = MNIST(
self.mnist_path,
download=False,
transform=self.transform if self.transform is not None else transforms.ToTensor(),
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
self.val_set = MNIST(
self.mnist_path,
download=False,
transform=self.transform if self.transform is not None else transforms.ToTensor(),
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
def train_dataloader(self):
return DataLoader(self.train_set, batch_size=self.batch_size,num_workers=self.num_workers,pin_memory=self.pin_memory)
def val_dataloader(self):
return DataLoader(self.val_set, batch_size=self.batch_size,num_workers=self.num_workers,pin_memory=self.pin_memory)
def main():
print("Using PyTorch {} and Lightning {}".format(torch.__version__, L.__version__))
# Lightning modules
datamodule = MNISTModule(num_workers=8, pin_memory=True)
resnet = ResnetModule(num_classes=10, in_chans=3)
# Logger
tensorboard = pl_loggers.TensorBoardLogger(
save_dir = os.path.join(os.getcwd(), 'results'),
log_graph = False,
)
# Strategy
strategy = FSDPStrategy(
activation_checkpointing_policy={nn.Linear,nn.Conv2d},
sharding_strategy="FULL_SHARD",
)
# Trainer
trainer = Trainer(
devices=2,
max_epochs=2,
strategy=strategy,
logger=tensorboard,
)
trainer.fit(resnet, datamodule)
trainer.print(torch.cuda.memory_summary())
if __name__ == '__main__':
main()
```
### Error messages and logs
```
Traceback (most recent call last):
File "/shared/nfs/apps/python/gcc/12.1.0/python-3.10.5/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/shared/nfs/apps/python/gcc/12.1.0/python-3.10.5/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/shared/nfs/home/my_project/minimal_working_example.py", line 217, in <module>
Traceback (most recent call last):
File "/shared/nfs/apps/python/gcc/12.1.0/python-3.10.5/lib/python3.10/runpy.py", line 196, in _run_module_as_main
main()
File "/shared/nfs/home/my_project/minimal_working_example.py", line 193, in main
return _run_code(code, main_globals, None,
File "/shared/nfs/apps/python/gcc/12.1.0/python-3.10.5/lib/python3.10/runpy.py", line 86, in _run_code
trainer.fit(resnet, datamodule)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 543, in fit
exec(code, run_globals)
File "/shared/nfs/home/my_project/minimal_working_example.py", line 217, in <module>
main()
File "/shared/nfs/home/my_project/minimal_working_example.py", line 193, in main
call._call_and_handle_interrupt(
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 43, in _call_and_handle_interrupt
trainer.fit(resnet, datamodule)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 543, in fit
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 105, in launch
call._call_and_handle_interrupt(
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
return function(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 579, in _fit_impl
return trainer_fn(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 996, in _run
self._run(model, ckpt_path=ckpt_path)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 996, in _run
call._call_lightning_module_hook(self, "on_fit_end")
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 159, in _call_lightning_module_hook
call._call_lightning_module_hook(self, "on_fit_end")
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 159, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/shared/nfs/home/my_project/minimal_working_example.py", line 92, in on_fit_end
output = fn(*args, **kwargs)
File "/shared/nfs/home/my_project/minimal_working_example.py", line 92, in on_fit_end
self(imgs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
self(imgs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/shared/nfs/home/my_project/minimal_working_example.py", line 44, in forward
return forward_call(*args, **kwargs)
File "/shared/nfs/home/my_project/minimal_working_example.py", line 44, in forward
out = self.model(x)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
out = self.model(x)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/timm/models/resnet.py", line 635, in forward
return forward_call(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/timm/models/resnet.py", line 635, in forward
x = self.forward_features(x)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/timm/models/resnet.py", line 614, in forward_features
x = self.forward_features(x)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/timm/models/resnet.py", line 614, in forward_features
x = self.conv1(x)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
x = self.conv1(x)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py", line 164, in forward
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return self.checkpoint_fn( # type: ignore[misc]
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
return forward_call(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py", line 164, in forward
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return self.checkpoint_fn( # type: ignore[misc]
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
return fn(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 458, in checkpoint
return fn(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
ret = function(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return fn(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 458, in checkpoint
ret = function(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
return self._call_impl(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return self._conv_forward(input, self.weight, self.bias)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: weight should have at least three dimensions
return forward_call(*args, **kwargs)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/shared/nfs/home/user/venv/torch21/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: weight should have at least three dimensions
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.3.3
#- PyTorch Version (e.g., 2.4): 2.1.0
#- Python version (e.g., 3.12): 3.10.5
#- OS (e.g., Linux): Linux
#- CUDA/cuDNN version: 118
#- GPU models and configuration: 2 TeslaV100
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | open | 2024-09-06T14:44:37Z | 2024-09-06T15:33:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20255 | [
"bug",
"needs triage",
"ver: 2.3.x"
] | QuentinAndre11 | 0 |
chaos-genius/chaos_genius | data-visualization | 437 | Account for NaN & NULL values in DeepDrill analysis | closed | 2021-11-26T12:35:44Z | 2021-12-09T06:34:33Z | https://github.com/chaos-genius/chaos_genius/issues/437 | [
"✨ enhancement",
"🧮 algorithms"
] | suranah | 3 | |
dsdanielpark/Bard-API | nlp | 82 | Response error with example from documentation | When running any of the examples from the `README.md` file, I get such errors:
`'Response Error: b\')]}\\\'\\n\\n38\\n[["wrb.fr",null,null,null,null,[9]]]\\n55\\n[["di",65],["af.httprm",65,"3225051010106282097",10]]\\n25\\n[["e",4,null,null,130]]\\n\'.'`
```
from bardapi import Bard
api_key = "xxxxxxx"
Bard(token=api_key).get_answer("나와 내 동년배들이 좋아하는 뉴진스에 대해서 알려줘")['content']
```
I followed the instructions on retrieving the `__Secure-1PSID` key from the official website's cookies.
Note that I am located in a European country where the official Bard is not yet available. Does this mean that this unofficial version also preserves those geographical restrictions?
Device: MacOS Ventura 13.3.1 (a)
| closed | 2023-06-28T14:31:24Z | 2024-01-18T16:24:31Z | https://github.com/dsdanielpark/Bard-API/issues/82 | [] | leweex95 | 10 |
xorbitsai/xorbits | numpy | 221 | ENH: provide accurate parameter docstrings | ### Is your feature request related to a problem? Please describe
Currently, we mark unsupported parameters as 'not supported yet', but for supported parameters that behave differently from pandas, there's no hint or warning for users.
### Describe the solution you'd like
Provide accurate parameter docstrings.
| open | 2023-02-17T08:51:46Z | 2023-05-17T04:30:24Z | https://github.com/xorbitsai/xorbits/issues/221 | [
"enhancement"
] | UranusSeven | 0 |
horovod/horovod | pytorch | 3,380 | mpirun check failed with version | **Environment:**
4. MPI version: 4.1.2a1
**Bug report:**
I think it's not a Horovod problem, but I'm still reporting it here.
With MPI version 4.1.2a1, I see
```
Checking whether extension tensorflow was built with MPI.
Extension tensorflow was built with MPI.
Was unable to run mpirun --version:
mpirun: Error: unknown option "--version"
```
I tried modifying Horovod to change it to "-version" and "-V"; all attempts failed with the same `unknown option` error.
Then I went back to version 4.0.5, and everything is fine. | open | 2022-01-24T07:03:41Z | 2022-02-02T12:59:42Z | https://github.com/horovod/horovod/issues/3380 | [
"bug"
] | wjxiz1992 | 6 |
Kanaries/pygwalker | plotly | 455 | Question: How to hide the mode function | Hello
I display pygwalker in a web app.
I use the latest version (0.4.6).
I don't need to display the mode function because I always use only walker mode.
Is it possible to hide the mode function?

| closed | 2024-03-02T08:14:49Z | 2024-03-09T03:57:48Z | https://github.com/Kanaries/pygwalker/issues/455 | [
"P1",
"proposal"
] | relakuman | 6 |
autokey/autokey | automation | 68 | AutoKey don't run under Ubuntu 16.04 -> Xlib.error.BadAccess | Hello,
I have tried to install autokey-py3 on my Ubuntu 16.04 LTS.
The first installation with Git and setup.py failed, maybe a Python version problem.
Then I added the PPA and installed from there, but now, when I start it via the UI shortcut, I see the icon in the launcher for 10-15 seconds before it disappears. When I start it from a terminal, I see these errors:
```
:~$ autokey
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 247, sequence_number = 15, major_opcode = 33, minor_opcode = 0
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 247, sequence_number = 16, major_opcode = 33, minor_opcode = 0
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 247, sequence_number = 17, major_opcode = 33, minor_opcode = 0
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 247, sequence_number = 18, major_opcode = 33, minor_opcode = 0
```
What can I do?
| closed | 2017-02-02T12:42:38Z | 2017-02-02T17:21:58Z | https://github.com/autokey/autokey/issues/68 | [] | MultiStorm | 1 |
mljar/mercury | data-visualization | 358 | Issue with PDF download | I was trying to download a PDF on a machine without Chromium installed and got an error from pyppeteer. It was for a static notebook.
```
[INFO] Starting Chromium download.
[2023-08-31 08:36:44,927: INFO/MainProcess] Starting Chromium download.
[INFO] Beginning extraction
[2023-08-31 08:36:45,445: INFO/MainProcess] Beginning extraction
DJ INFO 2023-08-31 08:36:45,717 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.20, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:36:46,860 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.4.134:42565]
DJ INFO 2023-08-31 08:36:48,003 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.4.134:42565]
DJ INFO 2023-08-31 08:36:49,146 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.4.134:26694]
DJ INFO 2023-08-31 08:36:50,289 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:36:51,431 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.4.134:42565]
DJ INFO 2023-08-31 08:36:52,574 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:36:53,716 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.02, 10.16.4.134:42565]
[INFO] Chromium extracted to: /home/user/.local/share/pyppeteer/local-chromium/588429
[2023-08-31 08:36:54,666: INFO/MainProcess] Chromium extracted to: /home/user/.local/share/pyppeteer/local-chromium/588429
DJ INFO 2023-08-31 08:36:54,864 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.4.134:26694]
DJ INFO 2023-08-31 08:36:56,006 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:36:57,162 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:53586]
DJ INFO 2023-08-31 08:36:58,302 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:36:59,443 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:37:00,585 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
DJ INFO 2023-08-31 08:37:02,391 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.68, 10.16.30.23:53586]
# ... rest of logs
DJ INFO 2023-08-31 08:37:24,166 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
[2023-08-31 08:37:24,778: ERROR/MainProcess] Task apps.tasks.tasks_export.export_to_pdf[5213a10b-7f8d-432d-9702-3514b39fa9ad] raised unexpected: BrowserError('Browser closed unexpectedly:\n')
Traceback (most recent call last):
File "/home/user/.local/lib/python3.10/site-packages/celery/app/trace.py", line 477, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/celery/app/trace.py", line 760, in __protected_call__
return self.run(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/mercury/apps/tasks/tasks_export.py", line 42, in export_to_pdf
to_pdf(notebook_os_path + slides_postfix, pdf_os_path)
File "/home/user/.local/lib/python3.10/site-packages/mercury/apps/tasks/export_pdf.py", line 146, in to_pdf
).result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/user/.local/lib/python3.10/site-packages/mercury/apps/tasks/export_pdf.py", line 17, in html_to_pdf
browser = await launch(
File "/home/user/.local/lib/python3.10/site-packages/pyppeteer/launcher.py", line 307, in launch
return await Launcher(options, **kwargs).launch()
File "/home/user/.local/lib/python3.10/site-packages/pyppeteer/launcher.py", line 168, in launch
self.browserWSEndpoint = get_ws_endpoint(self.url)
File "/home/user/.local/lib/python3.10/site-packages/pyppeteer/launcher.py", line 227, in get_ws_endpoint
raise BrowserError('Browser closed unexpectedly:\n')
pyppeteer.errors.BrowserError: Browser closed unexpectedly:
DJ INFO 2023-08-31 08:37:25,307 runserver HTTP GET /api/v1/get_pdf/5213a10b-7f8d-432d-9702-3514b39fa9ad/ 200 [0.01, 10.16.30.23:11989]
``` | open | 2023-08-31T08:39:41Z | 2023-08-31T08:39:41Z | https://github.com/mljar/mercury/issues/358 | [] | pplonski | 0 |
flasgger/flasgger | rest-api | 351 | apispec_1.json 404 not found at development server | Hi~!
I'm using flasgger 0.9.2
The problem is that when I deploy the source code to EC2, the `/apidocs` page works properly when accessed via IP address and port.
But with the domain name, the `/apidocs` page cannot fetch apispec_1.json (404 Not Found), which I can see in the Network tab of Chrome's developer tools.
The weird thing is that I can view the apispec_1.json content directly in the Chrome browser using the same URL that fails on the `/apidocs` page.
Could anyone get me a hint for this kind of situation plz... | closed | 2019-12-16T07:53:34Z | 2020-05-21T00:38:50Z | https://github.com/flasgger/flasgger/issues/351 | [] | zooozoo | 3 |
docarray/docarray | fastapi | 1,890 | add int64 support the form of a millisecond timestamp | ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.docarray.org) and still think this feature is missing
### Description
I use DocArray in Jina. When I set a millisecond timestamp on a doc in an executor, I get the error below:
```python
doc = MyDocument(data=int(time.time() * 1000))
proto = doc.to_protobuf()
res_doc = MyDocument.from_protobuf(proto)
print(res_doc)
```

I have reviewed the source code and found that NodeProto does not support int64, even though the native Google protobuf library does have this type.
In our scenario we need to record millisecond timestamps. Currently we can only work around this in the executor by converting the value to a string, which is very inconvenient, so we hope official support for the native int64 type will be added.
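The string workaround mentioned above can be sketched as follows. This is a minimal illustration of why the value needs 64 bits and how the string round-trip works — the helper code is my own, not DocArray's API:

```python
import time

# A signed 32-bit protobuf field tops out at 2**31 - 1 (~2.1e9), but
# millisecond timestamps are already ~1.7e12, so they only fit in int64.
INT32_MAX = 2**31 - 1

ts_ms = int(time.time() * 1000)
print(ts_ms > INT32_MAX)  # True: the value cannot survive a 32-bit field

# Temporary workaround used in the executor: carry the value as a string
# across serialization and parse it back on the other side.
payload = str(ts_ms)
restored = int(payload)
assert restored == ts_ms
```

Native int64 support in NodeProto would make this string round-trip unnecessary.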


### Affected Components
- [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [ ] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [X] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | open | 2024-06-03T03:40:59Z | 2024-06-03T22:11:34Z | https://github.com/docarray/docarray/issues/1890 | [] | Janus-Xu | 1 |
polarsource/polar | fastapi | 5,170 | Write a README for new web backoffice | Basics about how it's structured, how to add endpoints, etc. | closed | 2025-03-05T09:06:02Z | 2025-03-06T09:39:02Z | https://github.com/polarsource/polar/issues/5170 | [] | frankie567 | 0 |
davidteather/TikTok-Api | api | 251 | [BUG] - Response 403 | Since yesterday i have a problem with few endpoints. I've tested only getby endpoints and byHashtag and by Sound have responses 403
```
>>> api.byHashtag('love')
{'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36', 'accept-encoding': 'gzip, deflate, br', 'accept': 'application/json, text/plain, */*', 'Connection': 'keep-alive', 'authority': 'm.tiktok.com', 'method': 'GET', 'path': '/share/item/list?aid=1988&app_name=tiktok_web&device_platform=web&referer=&user_agent=Mozilla%2F5.0+(Windows+NT+10.0%3B+Win64%3B+x64)+AppleWebKit%2F537.36+(KHTML,+like+Gecko)+Chrome%2F84.0.4147.125+Safari%2F537.36&cookie_enabled=true&screen_width=2560&screen_height=1440&browser_language=&browser_platform=&browser_name=&browser_version=&browser_online=true&timezone_name=&priority_region=&appId=1233&appType=m&isAndroid=false&isMobile=false&isIOS=false&OS=windows&did=822148675®ion=US&secUid=&id=4231&type=3&count=30&minCursor=0&maxCursor=0&shareUid=&recType=&lang=en&verifyFp=AaN99VturDQ2&_signature=_02B4Z6wo00f01SDv8tgAAIBB6rMaD8GoX..ZAABdw74', 'scheme': 'https', 'accept-language': 'en-US,en;q=0.9', 'referrer': 'https://www.tiktok.com/', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-site'}
Converting response to JSON failed response is below (probably empty)
Traceback (most recent call last):
File "~/PycharmProjects/backup_scrapers/vevn/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 85, in getData
return r.json()
File "~/PycharmProjects/backup_scrapers/vevn/lib/python3.8/site-packages/requests/models.py", line 898, in json
return complexjson.loads(self.text, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "~/PycharmProjects/backup_scrapers/vevn/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 357, in byHashtag
res = self.getData(api_url, b, proxy=proxy, language=language)
File "~/PycharmProjects/backup_scrapers/vevn/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 91, in getData
raise Exception('Invalid Response')
Exception: Invalid Response
```
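The `JSONDecodeError` at the bottom of the traceback is secondary: the 403 response comes back with an empty body, so `r.json()` blows up. A hedged sketch of defensive parsing (a helper of my own, not part of TikTokApi):

```python
import json

def safe_json(text: str):
    """Return the parsed JSON body, or None for empty/non-JSON bodies
    such as the empty 403 response seen in the traceback above."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

print(safe_json(""))          # None - empty 403 body
print(safe_json('{"a": 1}'))  # {'a': 1}
```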
Is this a wider problem, or did something go wrong in my project?
**Desktop:**
- OS: macOS Catalina
- TikTokApi Version 3.4.3/3.4.6
| closed | 2020-09-06T22:13:14Z | 2020-09-08T00:45:13Z | https://github.com/davidteather/TikTok-Api/issues/251 | [
"bug",
"help wanted"
] | rocailler | 10 |
errbotio/errbot | automation | 963 | Restore backup | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [ ] Reporting a bug
* [ ] Suggesting a new feature
* [x] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: Errbot version 4.3.6
* OS version: Debian 8 in Docker
* Python version: Python 3.5.2
* Using a virtual environment: no
### Issue description
I am trying to restore a running bot in Docker via the backup/restore configs and get this error:
```
15:31:10 INFO errbot.bootstrap **** RESTORING the bot from /var/lib/err/backup.py
Traceback (most recent call last):
File "/usr/local/bin/errbot", line 9, in <module>
load_entry_point('errbot==4.3.6', 'console_scripts', 'errbot')()
File "/usr/local/lib/python3.5/site-packages/errbot/cli.py", line 296, in main
bootstrap(backend, root_logger, config, restore)
File "/usr/local/lib/python3.5/site-packages/errbot/bootstrap.py", line 198, in bootstrap
bot = setup_bot(bot_class, logger, config, restore)
File "/usr/local/lib/python3.5/site-packages/errbot/bootstrap.py", line 143, in setup_bot
ast.literal_eval(f.read())
File "/usr/local/lib/python3.5/ast.py", line 46, in literal_eval
node_or_string = parse(node_or_string, mode='eval')
File "/usr/local/lib/python3.5/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 4
log.info("Restoring plugin_manager")
^
SyntaxError: invalid syntax
```

My top of the backup file (created via the `!backup` command):

```
## This file is not executable on its own. use errbot -r FILE to restore your bot.
log.info("Restoring repo_manager")
log.info("Restoring plugin_manager")
bot.plugin_manager["configs"] = {'Webserver': {'SSL': {'certificate': '', 'host': '0.0.0.0', 'key': '', 'enabled': False, 'port': 3142}, 'HOST': '127.0.0.1', 'PORT': 3141}}
log.info("Installing plugins.")
if "installed_repos" in bot.repo_manager:
for repo in bot.repo_manager["installed_repos"]:
log.error(bot.repo_manager.install_repo(repo))
log.info("Restoring plugins data.")
bot.plugin_manager.update_dynamic_plugins()
pobj = bot.plugin_manager.get_plugin_by_name("Plugins").plugin_object
pobj.init_storage()
pobj.close_storage()
``` | closed | 2017-02-08T15:39:49Z | 2019-06-19T06:08:44Z | https://github.com/errbotio/errbot/issues/963 | [
"type: bug",
"#needs_validation"
] | ingtarius | 3 |
mage-ai/mage-ai | data-science | 5,116 | [BUG] integer division or modulo by zero | ### Mage version
v0.9.70
### Describe the bug
Sometimes the following error occurs when executing triggers:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/mage_ai/shared/retry.py", line 38, in retry_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/block_executor.py", line 588, in __execute_with_retry
return self._execute(
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/block_executor.py", line 1077, in _execute
result = self.block.execute_sync(
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/models/block/dynamic/child.py", line 183, in execute_sync
parent_index = dynamic_block_index % count
ZeroDivisionError: integer division or modulo by zero
Stack trace
File "/usr/local/bin/mage", line 8, in <module>
sys.exit(app())
File "/usr/local/lib/python3.10/site-packages/typer/main.py", line 311, in __call__
return get_command(self)(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/typer/core.py", line 778, in main
return _main(
File "/usr/local/lib/python3.10/site-packages/typer/core.py", line 216, in _main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/typer/main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "/usr/local/lib/python3.10/site-packages/mage_ai/cli/main.py", line 163, in start
start_server(
File "/usr/local/lib/python3.10/site-packages/mage_ai/server/server.py", line 773, in start_server
scheduler_manager.start_scheduler()
File "/usr/local/lib/python3.10/site-packages/mage_ai/server/scheduler_manager.py", line 92, in start_scheduler
proc.start()
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/local/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
return Popen(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/popen_fork.py", line 71, in _launch
code = process_obj._bootstrap(parent_sentinel=child_r)
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/db/process.py", line 15, in start_session_and_run
results = target(*args)
File "/usr/local/lib/python3.10/site-packages/mage_ai/server/scheduler_manager.py", line 53, in run_scheduler
LoopTimeTrigger().start()
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/triggers/loop_time_trigger.py", line 14, in start
self.run()
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/triggers/time_trigger.py", line 11, in run
schedule_all()
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/pipeline_scheduler_original.py", line 1633, in schedule_all
PipelineScheduler(r).start()
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/db/__init__.py", line 157, in func_with_rollback
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/pipeline_scheduler_original.py", line 190, in start
self.schedule()
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/db/__init__.py", line 157, in func_with_rollback
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/pipeline_scheduler_original.py", line 317, in schedule
self.__schedule_blocks(block_runs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/pipeline_scheduler_original.py", line 618, in __schedule_blocks
job_manager.add_job(
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/job_manager.py", line 28, in add_job
self.queue.enqueue(job_id, target, *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/queue/process_queue.py", line 121, in enqueue
self.start_worker_pool()
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/queue/process_queue.py", line 198, in start_worker_pool
self.worker_pool_proc.start()
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/local/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
return Popen(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/popen_fork.py", line 71, in _launch
code = process_obj._bootstrap(parent_sentinel=child_r)
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/queue/process_queue.py", line 351, in poll_job_and_execute
worker.start()
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/usr/local/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/context.py", line 281, in _Popen
return Popen(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/usr/local/lib/python3.10/multiprocessing/popen_fork.py", line 71, in _launch
code = process_obj._bootstrap(parent_sentinel=child_r)
File "/usr/local/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/site-packages/newrelic/api/background_task.py", line 117, in wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/queue/process_queue.py", line 315, in run
start_session_and_run(args[1], *args[2], **args[3])
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/db/process.py", line 15, in start_session_and_run
results = target(*args)
File "/usr/local/lib/python3.10/site-packages/mage_ai/orchestration/pipeline_scheduler_original.py", line 1170, in run_block
return ExecutorFactory.get_block_executor(
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/block_executor.py", line 613, in execute
result = __execute_with_retry()
File "/usr/local/lib/python3.10/site-packages/mage_ai/shared/retry.py", line 46, in retry_func
logger.warning(
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/logging/logger.py", line 39, in warning
self.__send_message('warning', message, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/logging/logger.py", line 65, in __send_message
data['error_stack'] = traceback.format_stack(),
```
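The failing line is `parent_index = dynamic_block_index % count` with `count == 0`, i.e. the upstream dynamic block has produced no output yet. A defensive guard could look like the sketch below — the function name and the `None` convention are my assumptions, not Mage's actual fix:

```python
from typing import Optional

def parent_index_for(dynamic_block_index: int, count: int) -> Optional[int]:
    """Map a dynamic child block back to one of its parent's outputs."""
    if count == 0:
        # No upstream dynamic outputs exist yet, so there is nothing to
        # map to; returning None lets the caller skip or retry instead
        # of raising ZeroDivisionError.
        return None
    return dynamic_block_index % count

print(parent_index_for(5, 0))  # None
print(parent_index_for(5, 3))  # 2
```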
### To reproduce
_No response_
### Expected behavior
_No response_
### Screenshots
_No response_
### Operating system
_No response_
### Additional context
_No response_ | closed | 2024-05-27T02:56:13Z | 2024-06-19T22:03:35Z | https://github.com/mage-ai/mage-ai/issues/5116 | [
"bug"
] | yanlingsishao | 2 |
coqui-ai/TTS | pytorch | 4,169 | [Feature request] upgrade TTS Python packahe | **🚀 Feature Description**
The TTS package does not install when the Python version is greater than 3.11.
This is problematic considering that the current version is 3.13, and 3.14 is around the corner.
**Solution**
Support Python version 3.13 (at least)
**Alternative Solutions**
If upgrading is not hassle-free, fork the project to provide a version that is compatible.
**Additional context**
| open | 2025-03-13T01:56:22Z | 2025-03-22T13:34:43Z | https://github.com/coqui-ai/TTS/issues/4169 | [
"feature request"
] | hros | 8 |
Lightning-AI/pytorch-lightning | pytorch | 20,551 | Tensorboard Logger is flushed on every step | ### Bug description
I noticed significantly degraded performance with the TensorBoard logger on S3.
I printed the call stack of the TensorBoard logger's flush call and found that on every call to `log_metrics`, TensorBoard's `flush` is invoked.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
logger = TensorBoardLogger("s3-mountpoint", max_queue=1000, flush_secs=20)
trainer = L.Trainer(
num_nodes=num_nodes,
devices=local_world_size,
accelerator="cuda",
max_epochs=1,
precision="bf16-true",
strategy="fsdp",
log_every_n_steps=1,
enable_checkpointing=False,
default_root_dir="mountpoint",
logger=logger,
)
```
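One way around the per-step flush would be to throttle how often `save()` actually calls `flush()`. The sketch below only shows the throttling logic one could wire into a custom logger subclass — it is an assumption of mine, not Lightning's API:

```python
import time

class FlushThrottle:
    """Allow a real flush at most once every `flush_secs` seconds."""

    def __init__(self, flush_secs: float = 20.0):
        self._flush_secs = flush_secs
        self._last_flush = None  # monotonic time of the last real flush

    def maybe_flush(self, flush_fn) -> bool:
        now = time.monotonic()
        if self._last_flush is None or now - self._last_flush >= self._flush_secs:
            flush_fn()  # e.g. the underlying SummaryWriter.flush()
            self._last_flush = now
            return True
        return False

throttle = FlushThrottle(flush_secs=1000.0)
print(throttle.maybe_flush(lambda: None))  # True: first call always flushes
print(throttle.maybe_flush(lambda: None))  # False: throttled
```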
### Error messages and logs
```
trainer.fit(lit_model, data)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 538, in fit
call._call_and_handle_interrupt(
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py", line 46, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 105, in launch
return function(*args, **kwargs)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run
results = self._run_stage()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage
self.fit_loop.run()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run
self.advance()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance
self.epoch_loop.run(self._data_fetcher)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 140, in run
self.advance(data_fetcher)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 278, in advance
trainer._logger_connector.update_train_step_metrics()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py", line 163, in update_train_step_metrics
self.log_metrics(self.metrics["log"])
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py", line 118, in log_metrics
logger.save()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning_utilities/core/rank_zero.py", line 42, in wrapped_fn
return fn(*args, **kwargs)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/pytorch/loggers/tensorboard.py", line 210, in save
super().save()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning_utilities/core/rank_zero.py", line 42, in wrapped_fn
return fn(*args, **kwargs)
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/lightning/fabric/loggers/tensorboard.py", line 290, in save
self.experiment.flush()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/torch/utils/tensorboard/writer.py", line 1194, in flush
writer.flush()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/torch/utils/tensorboard/writer.py", line 153, in flush
self.event_writer.flush()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/tensorboard/summary/writer/event_file_writer.py", line 127, in flush
self._async_writer.flush()
File "/root/miniforge3/envs/lightning/lib/python3.11/site-packages/tensorboard/summary/writer/event_file_writer.py", line 185, in flush
traceback.print_stack()
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.5.0): 2.4.0
#- PyTorch Version (e.g., 2.5):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux): Linux
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
| open | 2025-01-17T05:57:38Z | 2025-01-17T16:44:57Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20551 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | leoleoasd | 1 |