| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | machine-learning | 36,293 | Bug in v4.49 where the attention mask is ignored during generation (t5-small) | ### System Info
Hi all!
First, thank you very much for your hard work and for making these features available.
I'm seeing a bug after updating to v4.49 where the output changes even though the attention mask should be masking padded values. Below is a script to reproduce the error.
It will tokenize two prompts, and then call `.generate` on the shorter prompt while trying different slices of the padded `input_ids` and padded `attention_mask`. At some point, the generated response will change for v4.49 but not v4.48.
Environment information
```
- `transformers` version: 4.49.0
- Platform: macOS-15.3-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.29.0
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
```
output of `uv pip compile requirements.in`
```
transformers==4.48.0 # change this to 4.49.0 to reproduce the error
asttokens==3.0.0
certifi==2025.1.31
charset-normalizer==3.4.1
decorator==5.1.1
exceptiongroup==1.2.2
executing==2.2.0
filelock==3.17.0
fsspec==2025.2.0
huggingface-hub==0.29.0
idna==3.10
ipython==8.32.0
jedi==0.19.2
jinja2==3.1.5
markupsafe==3.0.2
matplotlib-inline==0.1.7
mpmath==1.3.0
networkx==3.4.2
numpy==2.2.3
packaging==24.2
parso==0.8.4
pexpect==4.9.0
prompt-toolkit==3.0.50
ptyprocess==0.7.0
pure-eval==0.2.3
pygments==2.19.1
pyyaml==6.0.2
regex==2024.11.6
requests==2.32.3
safetensors==0.5.2
sentencepiece==0.2.0
stack-data==0.6.3
sympy==1.13.1
tokenizers==0.21.0
torch==2.6.0
tqdm==4.67.1
traitlets==5.14.3
typing-extensions==4.12.2
urllib3==2.3.0
wcwidth==0.2.13
```
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

cfg = GenerationConfig(
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,  # same behavior with use_cache=False
)

shortprompt = ("summarize: Transformers v4.49 appears to have a bug where .generate stops respecting "
               "the attention_mask after some number of tokens.")
longprompt = ("summarize: I enjoy walking with my cute dog, especially in the early mornings "
              "when the air is crisp and the streets are quiet. Watching my dog happily trot along, "
              "always brings a smile to my face.")

# ---
print("# Single prompt ---")
inputs = tokenizer(
    [shortprompt], return_tensors="pt", padding=True
)
outputs = model.generate(**inputs, generation_config=cfg)
expected = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(f"short prompt: '{expected}'")
print()

# ---
print("# Double prompt ---")
inputs = tokenizer(
    [shortprompt, longprompt], return_tensors="pt", padding=True
)
outputs = model.generate(**inputs, generation_config=cfg)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(f"short prompt: '{text[0]}'")
print(f"long prompt: '{text[1]}'")
print()

# ---
print("# Single shortprompt with mask ---")

def run_sliced_input(slice_, show_text=False):
    shortprompt_tokens = inputs.input_ids[0:1, slice_]
    shortprompt_mask = inputs.attention_mask[0:1, slice_]
    outputs = model.generate(inputs=shortprompt_tokens, attention_mask=shortprompt_mask, generation_config=cfg)
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    if show_text:
        print(f"'{text}'")
    return text != expected

# run a bisect search to find the first slice that fails
import bisect

start = inputs.attention_mask[0].sum().item()
full_range = inputs.attention_mask.size(1)
ends = range(start, full_range)
print(f"searching in range {start} to {full_range}")
first_failure = start + bisect.bisect_left(
    [slice(None, end) for end in ends], True, key=run_sliced_input
)
if first_failure == full_range:
    print("No failure found in the full range!")
else:
    print(f"First failing slice: {first_failure}")
    print(f"Output with slice at {first_failure - 1}: ", end="")
    run_sliced_input(slice(None, first_failure - 1), show_text=True)
    print(f"Output with slice at {first_failure}: ", end="")
    run_sliced_input(slice(None, first_failure), show_text=True)
```
### Expected behavior
version 4.48
```
# Single prompt ---
short prompt: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'
# Double prompt ---
short prompt: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'
long prompt: 'i enjoy walking with my cute dog, especially in the early mornings. watching my dog happily trot along brings a smile to my face.'
# Single shortprompt with mask ---
searching in range 36 to 46
No failure found in the full range!
```
version 4.49
```
# Single prompt ---
short prompt: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'
# Double prompt ---
short prompt: ''
long prompt: 'i enjoy walking with my cute dog, especially in the early mornings. watching my dog happily trot along brings a smile to my face.'
# Single shortprompt with mask ---
searching in range 36 to 46
First failing slice: 39
Output with slice at 38: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'
Output with slice at 39: 'Transformers v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'
``` | closed | 2025-02-20T02:16:23Z | 2025-02-20T16:28:11Z | https://github.com/huggingface/transformers/issues/36293 | [
"bug"
] | bdhammel | 3 |
gradio-app/gradio | data-visualization | 10,763 | Support ability to create native gr.Barplots with multiple series side-by-side | I wanted to create a `gr.Barplot` that plots multiple `y` columns for each `x` value, but it seems like this is not possible with our `gr.Barplot`. We do support the ability to stack bars like this:
<img width="639" alt="Image" src="https://github.com/user-attachments/assets/bd436d9b-afb1-4aca-a48c-c2dba646e40a" />
But we cannot place them side by side. The API I would expect is to be able to pass a list of column names for the `y` parameter, not just a single column name. | open | 2025-03-08T09:17:20Z | 2025-03-08T09:17:25Z | https://github.com/gradio-app/gradio/issues/10763 | [
"enhancement"
] | abidlabs | 0 |
saulpw/visidata | pandas | 2,660 | command to freeze the current column directly | Often I want to replace the current column with a frozen copy. It would be convenient to have a command that does the equivalent of:
`Sheet.addCommand("", 'setcol-freeze', 'i = cursorVisibleColIndex; name = cursorCol.name; fc = freeze_col(cursorCol); fc.name = name; addColumnAtCursor(fc); columns.pop(i)', 'replace current column with a frozen copy, with all cells evaluated')`
(right now this command triggers a bug that's already been reported #2607)
I can't think of a good keyboard shortcut that is not yet taken.
Do other people want this feature? | open | 2024-12-31T05:27:57Z | 2025-01-17T22:03:52Z | https://github.com/saulpw/visidata/issues/2660 | [
"wishlist"
] | midichef | 7 |
sebp/scikit-survival | scikit-learn | 368 | Add support for predict_survival_function to Stacking | As mentioned in #364, `Stacking` currently does not support `predict_survival_function` nor `predict_cumulative_hazard_function`.
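The requested behavior is essentially delegation: the ensemble forwards the call to its meta-estimator when that estimator implements it. A generic, stdlib-only sketch of the pattern (illustrative only, not sksurv's actual implementation; in the real `Stacking`, `X` would first be transformed through the base estimators, and `Meta` is a stand-in class):

```python
class StackingSketch:
    def __init__(self, meta_estimator):
        self.meta_estimator = meta_estimator

    def predict_survival_function(self, X):
        # Forward to the meta-estimator if (and only if) it supports the call.
        fn = getattr(self.meta_estimator, "predict_survival_function", None)
        if fn is None:
            raise AttributeError(
                "meta-estimator does not implement predict_survival_function")
        return fn(X)

class Meta:
    def predict_survival_function(self, X):
        return [f"S(t|{x})" for x in X]

print(StackingSketch(Meta()).predict_survival_function([1, 2]))  # → ['S(t|1)', 'S(t|2)']
```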
If the meta-estimator supports these functions, so should `Stacking`. | closed | 2023-06-05T17:34:07Z | 2023-07-11T20:50:35Z | https://github.com/sebp/scikit-survival/issues/368 | [
"enhancement",
"help wanted"
] | sebp | 0 |
clovaai/donut | computer-vision | 227 | Easier to fine tune using this repository code or Transformers and nielsr code? | open | 2023-07-23T00:30:35Z | 2024-02-07T17:05:12Z | https://github.com/clovaai/donut/issues/227 | [] | DoctorSlimm | 3 | |
dnouri/nolearn | scikit-learn | 232 | TypeError: __init__() got multiple values for keyword argument 'scales' | Trying to change the scales parameter when calling the dbn class in nolearn 0.5:
```
dbn = DBN(
    # [[numNodes input layer], numNodes hidden layer, numNodes output layer]
    hiddenAr,
    # Learning rate of algorithm
    learn_rates,
    learn_rates_pretrain=0.01,
    # Decay of learn rate
    learn_rate_decays=1,
    # Iterations of training data (epochs)
    epochs=ep,
    # Verbosity level
    verbose=1,
    momentum=mom,
    scales=0.03,
    use_re_lu=True
)
```
Getting the error described in the title. Not sure why.
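This error is Python's generic complaint that one parameter received both a positional and a keyword value. In nolearn 0.5 the `DBN` constructor takes `scales` as its second positional parameter, so the positional `learn_rates` argument may be landing on `scales` before the keyword `scales=0.03` arrives. That is an assumption about the cause, but the mechanism itself can be reproduced with a plain function:

```python
def dbn(layer_sizes, scales=0.05, learn_rates=0.1):
    # Stand-in for nolearn's DBN.__init__ parameter order (assumed).
    return layer_sizes, scales, learn_rates

try:
    # The second positional argument fills `scales`, then the keyword
    # `scales=0.03` supplies it again.
    dbn([784, 300, 10], 0.3, scales=0.03)
    msg = None
except TypeError as e:
    msg = str(e)
print(msg)  # → dbn() got multiple values for argument 'scales'
```

If that is what is happening here, passing `learn_rates=...` by keyword would avoid the collision.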
| closed | 2016-03-24T10:37:06Z | 2016-03-25T18:50:34Z | https://github.com/dnouri/nolearn/issues/232 | [] | CAWilson94 | 1 |
modin-project/modin | pandas | 7,254 | Support right merge/join | closed | 2024-05-13T00:32:16Z | 2024-05-13T23:39:23Z | https://github.com/modin-project/modin/issues/7254 | [
"new feature/request 💬"
] | anmyachev | 0 | |
AutoGPTQ/AutoGPTQ | nlp | 716 | [FEATURE] Quantization of internlm/internlm-xcomposer2-4khd-7b to 4bit? | Hello, I have a question regarding quantization of internlm/internlm-xcomposer2-4khd-7b model to 4bit. Is it possible to make with autogptq? As the plan is to use it for fine tuning with https://github.com/InternLM/InternLM-XComposer.
I have already make quantization with https://github.com/InternLM/lmdeploy, however the only way, how I can infer the quantized model with lmdeploy pipeline. So I am not able to make fine tuning of quanitized model with lmdeploy.
Sending the issue, where I was trying to make quantization with AutoGPTQ: https://github.com/InternLM/InternLM-XComposer/issues/337 | open | 2024-07-28T11:53:51Z | 2024-07-28T11:53:51Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/716 | [
"enhancement"
] | zhuraromdev | 0 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 156 | Resize or Crop when training large photo | What do you suggest? Or just try both ways and see. Looking forward to your reply | closed | 2021-04-30T02:56:16Z | 2021-07-13T02:09:00Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/156 | [] | syfbme | 5 |
praw-dev/praw | api | 1,887 | Improve ability to properly handle failures and retries for API calls that are not idempotent | ### Describe the solution you'd like
Many Reddit API calls can be repeated without anything bad happening (e.g., removing a post is more or less idempotent, you might get multiple log entries and a few timestamps might be updated, but nothing bad happens), but some API calls are not so friendly. This is particularly the case for API calls that create content such as submitting a post, making a comment, or sending a message.
The problem is that when you call something like `subreddit.submit(...)` there's currently no way to find out how many times the underlying Reddit API call was made, what intermediate errors happened, etc. Most of the time when you call submit, you create one post or you get one failure. But sometimes, you create multiple posts for one reason or another. It could be due to retries after failures that weren't actually failures, time outs when the call actually worked, etc. Sometimes it was a failure and the retry was definitely needed, but the failure was a "partial success" that might need to be deleted (I've seen multiple examples of PRAW creating two posts, but the first post that only shows up in the user history and/or via search, but isn't indexed in /new).
Two ideas for how this could be improved:
1. Add an option that causes PRAW to raise an exception any time an API call has been repeated *after* the call is done. This would allow running check/recovery logic that could do whatever needs to be done for that particular situation (e.g., remove any extra submissions found via the user history, /new, or /search). Something like this:
```
reddit.raise_exception_after_retries = True
try:
    submission = subreddit.submit(...)
except prawcore.exceptions.retry as e:
    # check/recovery logic to make sure we haven't made multiple submissions
    ...
except Exception as e:
    # other error handling
    ...
reddit.raise_exception_after_retries = False
```
The exception object would ideally also include some information about what happened: the number of retries, any errors returned prior to the call finally succeeding, etc.
2. Add an option that disables retries completely and all errors, timeouts, etc. get raised the first time. This would allow writing your own retry logic instead. Something like this:
```
reddit.retries = False
for attempt in range(3):
    try:
        submission = subreddit.submit(...)
        break
    except Exception as e:
        # call a check function to make sure it wasn't created or half-created
        # so we can delete and try again and succeed without any errors or timeouts
        check, result = submission_exists(title=mytitle, url=myurl)
        if check:
            if result == "good":
                break
            else:
                check.delete()
reddit.retries = True
```
The logic would probably be more complex than that, of course.
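The check-then-retry flow described above can be expressed as a small generic helper. This is a stdlib-only illustration of the pattern; the `submit` and `already_exists` callables are hypothetical stand-ins, not PRAW APIs:

```python
def submit_with_verification(submit, already_exists, attempts=3):
    """Call `submit()`, and after each failure check whether the call
    actually succeeded server-side before retrying."""
    last_error = None
    for _ in range(attempts):
        try:
            return submit()
        except Exception as e:
            last_error = e
            existing = already_exists()
            if existing is not None:
                # The "failed" call actually went through; don't retry.
                return existing
    raise last_error

# Example: a flaky submit that raises once, then succeeds on retry.
calls = {"n": 0}
def flaky_submit():
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("timed out")
    return "post-123"

result = submit_with_verification(flaky_submit, lambda: None)
print(result)  # → post-123
```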
### Describe alternatives you've considered
I believe the only way to do this right now would be writing a logging filter or monkey patching praw or prawcore. Those do not seem like good solutions.
### Additional context
Duplicate submissions and comments are the worst. That is all.
| closed | 2022-07-21T22:45:03Z | 2022-09-20T18:10:08Z | https://github.com/praw-dev/praw/issues/1887 | [
"Feature",
"Stale",
"Auto-closed - Stale"
] | dequeued0 | 5 |
unionai-oss/pandera | pandas | 968 | Unnecessary pandas-stubs pin | #916 pinned the version of pandas-stubs for the mypy plugin at 1.4.3.220807. The pandas-stubs issue that was stated as the reason for this pin (https://github.com/pandas-dev/pandas-stubs/issues/197) has been resolved and released. Would it be possible to remove the pin?
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [x] (optional) I have confirmed this bug exists on the master branch of pandera.
| closed | 2022-10-18T09:13:50Z | 2022-11-04T14:35:20Z | https://github.com/unionai-oss/pandera/issues/968 | [
"bug"
] | sebwills | 1 |
pytest-dev/pytest-selenium | pytest | 327 | `Screenshot` doesn't match with `HTML` | I expect that `Screenshot` will display the innerText of `recognized-texts` in `HTML`:
```html
<p id="recognized-texts">2024-02-18 00:17:20 [00059baiden]<br>2024-02-18 00:18:22 [005657obama]<br></p>
```
However, it is not behaving as expected:
|Screenshot|HTML|
|-|-|
|<img src="https://github.com/pytest-dev/pytest-selenium/assets/46549482/6bc3fe53-f1cc-4047-b1f2-7fae008ec551" alt="drawing" width="400"/>|<img src="https://github.com/pytest-dev/pytest-selenium/assets/46549482/3afb6dc7-c4f4-4900-a42c-80f2b2acecce" alt="drawing" width="400"/>|
Should I manually insert a `time.sleep()` in the code section found at
https://github.com/pytest-dev/pytest-selenium/blob/c5be64bc8fffef5f4639a375619b614472f561ab/src/pytest_selenium/pytest_selenium.py#L297-L306
, or is there any existing argument that I may have overlooked? | closed | 2024-02-17T17:08:59Z | 2024-02-19T03:16:18Z | https://github.com/pytest-dev/pytest-selenium/issues/327 | [] | changchiyou | 7 |
MaartenGr/BERTopic | nlp | 1,648 | [QST] Is there a way to make bertopic library skinnier? | I'm trying to run a BERTopic model in Docker. It works fine, but the bertopic library pulls in a lot of dependencies, making the Docker image really heavy. Is there a way to make BERTopic bare-bones? | closed | 2023-11-27T18:23:42Z | 2023-11-29T09:44:32Z | https://github.com/MaartenGr/BERTopic/issues/1648 | [] | bjpietrzak | 2 |
pytorch/vision | computer-vision | 8,697 | torchvision is restricted to ffmpeg-4 on conda | ### 🐛 Describe the bug
torchvision is currently restricted to ffmpeg-4 on conda. This makes it impossible for me to upgrade my environment to newer versions of torch. The reason is that I need additional libraries which depend on newer versions of ffmpeg. ffmpeg-5 was released in 2022, so it's no surprise that some packages depend on it (or newer).
I saw in the commit log that the reason is a build failure, so I have mild hopes that this is something that could be worked around?
### Versions
Given the output of the script and the nature of the issue, this is likely meaningless.
I am currently using
[conda] torchvision 0.16.2 py310_cpu pytorch
I can go a _bit_ higher, but not to where I need to (which is 0.19/0.20) | open | 2024-10-25T16:51:19Z | 2024-10-29T09:54:22Z | https://github.com/pytorch/vision/issues/8697 | [] | bschindler | 5 |
nltk/nltk | nlp | 2,527 | Quote author names mixed up in wordnet definitions | If I run the following code:
```python
from nltk.corpus import wordnet

for ss in wordnet.all_synsets():
    if ' - ' in ss.definition():
        print(ss, ss.definition())
```
I get a list of definitions like this:
```
Synset('abstemious.a.01') sparing in consumption of especially food and drink; - John Galsworthy
Synset('ascetic.s.02') practicing great self-denial; - William James
Synset('dead-on.s.01') accurate and to the point; ; - Peter S.Prescott
Synset('used_to.s.01') in the habit; ; ; - Henry David Thoreau
Synset('predaceous.s.02') living by or given to victimizing others for personal gain; ; - Peter S. Prescott; - W.E.Swinton
Synset('passive.a.01') lacking in energy or will; - George Meredith
Synset('resistless.s.02') offering no resistance; ; - Theodore Roosevelt
Synset('alcoholic.s.02') addicted to alcohol; - Carl Van Doren
Synset('reductive.s.01') characterized by or causing diminution or curtailment; - R.H.Rovere
Synset('mounted.s.02') decorated with applied ornamentation; often used in combination; - F.V.W.Mason
Synset('coordinated.s.02') being dexterous in the use of more than one set of muscle movements; - Mary McCarthy
Synset('light-fingered.s.01') having nimble fingers literally or figuratively; especially for stealing or picking pockets; - Harry Hansen; - Time
Synset('bumbling.s.01') lacking physical movement skills, especially with the hands; ; ; ; - Mary H. Vorse
Synset('uninfluenced.s.01') not influenced or affected; - V.L.Parrington
```
```
I'm concerned that these authors (such as `- Theodore Roosevelt`) possibly shouldn't be in the definition. I think these are the authors of the last example in the `ss.examples()` list, which haven't been parsed as part of the example because they aren't within the double quotes. | open | 2020-04-08T07:13:36Z | 2021-09-21T21:25:58Z | https://github.com/nltk/nltk/issues/2527 | [] | multimeric | 6 |
Kav-K/GPTDiscord | asyncio | 275 | IO On closed file in /internet chat | 
| closed | 2023-04-19T14:38:00Z | 2023-04-24T02:33:56Z | https://github.com/Kav-K/GPTDiscord/issues/275 | [
"bug",
"high-prio",
"help-wanted-important"
] | Kav-K | 0 |
pytest-dev/pytest-qt | pytest | 125 | How to test QML Components? | I'm wondering how you would test a QML application. Here is how I am creating my QML application. What should I call on qtbot?
``` Python
import sys
import threading

from PyQt5 import QtCore, QtGui, QtQml  # imports assumed; adjust for your Qt binding


class UsersManager(QtCore.QObject):
    users = QtCore.pyqtSignal(QtCore.QVariant)

    @QtCore.pyqtSlot()
    def LoadUsers(self):
        def thread():
            users = FetchUsers()
            self.users.emit(users)
        threading.Thread(target=thread).start()


app = QtGui.QGuiApplication(sys.argv)
QtQml.qmlRegisterType(UsersManager, 'UsersManager', 1, 0, 'UsersManager')
engine = QtQml.QQmlApplicationEngine("Main.qml")
app.exec_()
```
| closed | 2016-03-28T18:07:34Z | 2016-05-16T19:43:31Z | https://github.com/pytest-dev/pytest-qt/issues/125 | [] | Siecje | 3 |
miguelgrinberg/Flask-SocketIO | flask | 1,845 | Access to flask socketio msg queue and difference with celery msg queue | ### Discussed in https://github.com/miguelgrinberg/Flask-SocketIO/discussions/1844
<div type='discussions-op-text'>
<sup>Originally posted by **MarioCiranni** July 13, 2022</sup>
Hi,
I've looked around the web a lot for a satisfying answer about how to access the Socket.IO message queue, but to no avail.
I'm still not sure how I can get access to the Socket.IO message queue in a Flask application that implements WebSocket with the Flask-SocketIO library. @miguelgrinberg I've also noticed that in your talk back at PyCon 2016 you clearly distinguish between the Socket.IO message queue and the Celery message queue and explain how they have different purposes, but I am still not sure how they are used and what makes them different.
I'm bringing up this topic because I am developing a web server. On the backend I need to process a stream of images coming from the webcam and feed it to an ML model. Since I had some lag when displaying the images back in the browser, which I take to be due to the CPU-intensive work on the backend delaying the response to the client, I thought I could access the message queue / buffer of the socket and keep only the last image (or the last n images) in order to speed up execution and avoid this CPU bottleneck. Yet, I am not sure which is the right decision to make here. I've seen that for CPU-intensive tasks Celery is usually used in conjunction with Flask, but I do not completely understand whether it fits my case, or whether I could opt for a (maybe worse but) simpler solution like just accessing the Socket.IO message queue and dropping some images to speed up execution, as I was saying above.
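The frame-dropping idea in the question can be sketched independently of Socket.IO with a bounded deque: new frames evict the oldest, so the consumer always processes the freshest data. This is a generic pattern, not a Flask-SocketIO API:

```python
from collections import deque

class LatestFrames:
    """Keep only the newest `maxlen` frames; older ones are dropped."""
    def __init__(self, maxlen=1):
        self.frames = deque(maxlen=maxlen)

    def push(self, frame):
        self.frames.append(frame)  # evicts the oldest frame when full

    def pop_latest(self):
        return self.frames.pop() if self.frames else None

buf = LatestFrames(maxlen=2)
for frame in ["f1", "f2", "f3", "f4"]:
    buf.push(frame)
latest = buf.pop_latest()
print(latest)  # → f4
```

On the web server side, the socket handler would only ever `push` frames, while the ML worker pulls the freshest one whenever it is ready.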
I would appreciate it very much if you could give me some feedback on this.
Thanks,
Mario </div> | closed | 2022-07-13T10:16:18Z | 2022-07-13T10:58:57Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1845 | [] | MarioCiranni | 2 |
lux-org/lux | jupyter | 371 | [BUG] | C:\Users\Sherzod Azamov\anaconda3\lib\site-packages\IPython\core\formatters.py:345: UserWarning:
Unexpected error in rendering Lux widget and recommendations. Falling back to Pandas display.

| closed | 2021-05-03T07:10:40Z | 2021-05-18T21:25:58Z | https://github.com/lux-org/lux/issues/371 | [] | sherzod-az | 2 |
falconry/falcon | api | 1,762 | Is it possible to access the route that the data will be forwarded to inside a middleware (process request)? | I have a middleware where I want to process some data depending on what route it will be forwarded to, thus I'm wondering if it's possible to obtain any information on what route the data will be forwarded to. | closed | 2020-08-15T16:08:46Z | 2020-08-15T23:35:27Z | https://github.com/falconry/falcon/issues/1762 | [
"needs-information",
"question"
] | FerusAndBeyond | 3 |
yihong0618/running_page | data-visualization | 484 | Error fetching Keep data | Hi yihong!
I deployed this project almost two years ago, and it has been running fine ever since. But this week I suddenly noticed that the Keep data for my two most recent runs was not synced.
After running `python scripts/keep_sync.py **** *** --with-gpx` locally, the following error is reported:
2 new keep runs to generate
parsing keep id 59d47317e666861941f1cf50_9223370343116489210_rn
Something wrong paring keep id 59d47317e666861941f1cf50_9223370343116489210_rnInvalid base64-encoded string: number of data characters (21) cannot be 1 more than a multiple of 4
parsing keep id 59d47317e666861941f1cf50_9223370343501096357_rn
Something wrong paring keep id 59d47317e666861941f1cf50_9223370343501096357_rnInvalid base64-encoded string: number of data characters (21) cannot be 1 more than a multiple of 4
No tracks found.
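The `Invalid base64-encoded string` message comes from Python's `base64`/`binascii` machinery whenever the payload length is one more than a multiple of four, which suggests the data Keep returned for these two runs is truncated or not base64 at all. A stdlib sketch reproducing the error (the payload here is a dummy, not Keep's actual data):

```python
import base64
import binascii

payload = "A" * 21  # 21 data characters: 1 more than a multiple of 4
try:
    base64.b64decode(payload)
    err = None
except binascii.Error as e:
    err = e  # "Invalid base64-encoded string: ..."
print(err)
```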
Could you tell me what is going on here? | closed | 2023-09-06T13:29:53Z | 2024-02-02T05:42:50Z | https://github.com/yihong0618/running_page/issues/484 | [] | Epiphany-git | 41 |
pallets-eco/flask-sqlalchemy | flask | 941 | Our implementation of binds can cause table name conflicts | This is a description of the issue that #222 tries to solve. After investigating further, and based on differences between SQLAlchemy 1.3 and 1.4, I don't think I'll be able to merge that PR so I'm writing up more here.
We have the `SQLALCHEMY_BINDS` config mapping keys to engine URLs. Each model can have an optional `__bind_key__` attribute. We override `Session.get_bind` to look for this key and choose the engine from the config. In this way, you can define models that are present in different databases.
This is slightly different than SQLAlchemy itself. There the `Session(binds={})` maps classes to engines. So a specific model could have a specific engine, or a base class could be used to map all its models to the same engine. The configuration is done when setting up the session and engines, not when defining the models and tables.
Our way causes issues when two models with the same name / table name are defined that belong to separate binds. The `Model` base class has one `metadata` associated with it, and all names registered with a metadata must be unique. It also makes it possible to write foreign keys between models that will end up using separate engines, which won't work. When using plain SQLAlchemy, you would create separate declarative bases and bind each of them to a different engine, but in Flask-SQLAlchemy there is only one `db.Model` base class.
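The name clash can be seen in plain SQLAlchemy, independent of Flask-SQLAlchemy: one `MetaData` refuses a second table registration under the same name. A minimal sketch (requires SQLAlchemy):

```python
from sqlalchemy import Column, Integer, MetaData, Table

metadata = MetaData()
Table("user", metadata, Column("id", Integer, primary_key=True))
try:
    # A second table with the same name in the same MetaData is rejected.
    Table("user", metadata, Column("id", Integer, primary_key=True))
    err = None
except Exception as exc:
    err = exc
print(type(err).__name__)  # → InvalidRequestError
```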
#222 addresses this by creating a different metadata in the metaclass when creating a model with a different bind key. I wasn't super comfortable with that, but the release of SQLAlchemy 1.4 made it more clear why. In 1.4, it uses a new `registry` object, and `Base.metadata` is essentially an alias to `Base.registry.metadata`. Looking back at how 1.3 did it, `Base._decl_class_registry` was the equivalent, and it wasn't being overridden by #222, so you'd still have names overwriting each other in the registry if not the metadata. With 1.4, we'd need to override `registry`, not `metadata` directly, and looking at #222's current implementation this seems even more complex and messy. And if we want to support SQLAlchemy <= 1.3 we need to detect and override both the old and new implementations.
The problem is that our `__bind_key__` is only available after class creation has started, but `metadata`/`registry` was created before that when creating the declarative base. There's a disconnect between when we have the information to know what to create, and when it needs to be created. *Maybe* it could be addressed with metaclass trickery to substitute a different base class when creating a subclass with a different key, but I haven't investigated very far and I'm not particularly enthusiastic about trying to deal with that complexity.
Alternatively, we could make `db.make_declarative_base` more of a public API (or a new method) and have it take a `bind_key` parameter. So when you want to use a separate bind, instead of inheriting `db.Model`, inherit `db.get_base_model("key")`. We'd probably want to disallow setting `__bind_key__` manually and show a message saying what to do instead, except there are also valid reasons to have models in the same metadata use different binds, as long as there's no conflicts. However, this seems pretty confusing to teach users, I foresee it causing a bunch of new questions even as it solves the current problem. | closed | 2021-03-24T19:12:24Z | 2022-10-03T00:21:45Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/941 | [] | davidism | 13 |
pywinauto/pywinauto | automation | 977 | Support use child_window in pywinauto for desktop application | ## Expected Behavior
I want to control an alert dialog of a desktop application using pywinauto.
## Actual Behavior
## Steps to Reproduce the Problem
print_control_identifiers()
Dialog - 'MainWindow' (L-32000, T-32000, R-31840, B-31972)
['MainWindowDialog', 'MainWindow', 'Dialog']
child_window(title="MainWindow", control_type="Window")
|
| Custom - '' (L-32000, T-32000, R-30634, B-31232)
| ['Custom', 'Phần mềm quản lý nhà hàng, quán cafeCustom', 'Custom0', 'Custom1', 'Phần mềm quản lý nhà hàng, quán cafeCustom0', 'Phần mềm quản lý nhà hàng, quán cafeCustom1']
| |
| | Custom - '' (L-32000, T-32000, R-30634, B-31800)
| | ['Custom2', 'Phần mềm quản lý nhà hàng, quán cafeCustom2']
| | |
| | | Image - '' (L-30739, T-31980, R-30714, B-31955)
| | | ['Image', 'Image0', 'Image1']
| | |
| | | Image - '' (L-30689, T-31980, R-30664, B-31955)
| | | ['Image2']
| | | child_window(auto_id="imgClose", control_type="Image")
| | |
| | | Static - '' (L-30871, T-31930, R-30664, B-31894)
| | | ['Static', 'Về trang chủStatic', 'Static0', 'Static1', 'Về trang chủStatic0', 'Về trang chủStatic1']
| | | child_window(auto_id="lblBackHome", control_type="Text")
| | | |
| | | | Image - '' (L-30871, T-31923, R-30843, B-31895)
| | | | ['Image3']
| | | |
| | | | Static - 'Về trang chủ' (L-30833, T-31930, R-30664, B-31894)
| | | | ['Static2', 'Về trang chủ', 'Về trang chủStatic2']
| | | | child_window(title="Về trang chủ", control_type="Text")
| | |
| | | Image - '' (L-31417, T-31923, R-31217, B-31868)
| | | ['Image4']
| | |
| | | Static - 'Phần mềm quản lý nhà hàng, quán cafe' (L-32000, T-31848, R-30634, B-31800)
| | | ['Static3', 'Phần mềm quản lý nhà hàng, quán cafeStatic', 'Phần mềm quản lý nhà hàng, quán cafe']
| | | child_window(title="Phần mềm quản lý nhà hàng, quán cafe", control_type="Text")
| |
| | Custom - '' (L-32000, T-31800, R-30634, B-31332)
| | ['Custom3', 'Phần mềm quản lý nhà hàng, quán cafeCustom3']
| | child_window(auto_id="uc", control_type="Custom")
| | |
| | | Static - 'Đăng nhập tài khoản chủ nhà hàng' (L-31565, T-31780, R-31069, B-31742)
| | | ['Static4', 'Đăng nhập tài khoản chủ nhà hàng', 'Đăng nhập tài khoản chủ nhà hàngStatic']
| | | child_window(title="Đăng nhập tài khoản chủ nhà hàng", control_type="Text")
| | |
| | | Image - '' (L-31629, T-31695, R-31579, B-31658)
| | | ['Image5', 'Phần mềm quản lý nhà hàng, quán cafeImage']
| | |
| | | Static - '(+84)' (L-31579, T-31705, R-31465, B-31648)
| | | ['Static5', '(+84)Static', '(+84)', '(+84)Static0', '(+84)Static1']
| | | child_window(title="(+84)", control_type="Text")
| | |
| | | Edit - '' (L-31459, T-31712, R-30995, B-31642)
| | | ['Edit', '(+84)Edit']
| | | child_window(auto_id="txtPhone", control_type="Edit")
| | | |
| | | | Static - 'Nhập số điện thoại' (L-31452, T-31690, R-31000, B-31663)
| | | | ['Static6', 'Nhập số điện thoại', 'Nhập số điện thoạiStatic']
| | | | child_window(title="Nhập số điện thoại", control_type="Text")
| | |
| | | Static - 'Phương thức đăng nhập khác' (L-31669, T-31612, R-31369, B-31584)
| | | ['Static7', 'Phương thức đăng nhập khác', 'Phương thức đăng nhập khácStatic']
| | | child_window(title="Phương thức đăng nhập khác", control_type="Text")
| | |
| | | Static - 'Kích hoạt bằng mã thiết bị' (L-31234, T-31612, R-30965, B-31584)
| | | ['Static8', 'Kích hoạt bằng mã thiết bị', 'Kích hoạt bằng mã thiết bịStatic']
| | | child_window(title="Kích hoạt bằng mã thiết bị", control_type="Text")
| | |
| | | Button - 'GỬI SMS' (L-31417, T-31484, R-31217, B-31399)
| | | ['GỬI SMSButton', 'Button', 'GỬI SMS', 'GỬI SMS0', 'GỬI SMS1']
| | | child_window(title="GỬI SMS", control_type="Button")
| | | |
| | | | Static - 'GỬI SMS' (L-31348, T-31450, R-31286, B-31432)
| | | | ['Static9', 'GỬI SMSStatic', 'GỬI SMS2']
| | | | child_window(title="GỬI SMS", control_type="Text")
| |
| | Custom - '' (L-32000, T-31332, R-30634, B-31232)
| | ['GỬI SMSCustom', 'Custom4', 'Phần mềm quản lý nhà hàng, quán cafeCustom4']
| | |
| | | Static - '' (L-31602, T-31298, R-31329, B-31265)
| | | ['Static10', 'WebsiteStatic', 'WebsiteStatic0', 'WebsiteStatic1']
| | | |
| | | | Image - '' (L-31602, T-31298, R-31574, B-31270)
| | | | ['Image6', 'OKImage']
| | | |
| | | | Static - 'Website' (L-31564, T-31298, R-31479, B-31265)
| | | | ['Static11', 'WebsiteStatic2', 'Website']
| | | | child_window(title="Website", control_type="Text")
| | | |
| | | | Static - ': ' (L-31479, T-31298, R-31467, B-31265)
| | | | ['Static12', ': ', ': Static']
| | | | child_window(title=": ", control_type="Text")
| | | |
| | | | Static - 'www.sapo.vn' (L-31467, T-31298, R-31329, B-31265)
| | | | ['Static13', 'www.sapo.vnStatic', 'www.sapo.vn', 'www.sapo.vn0', 'www.sapo.vn1']
| | | | child_window(title="www.sapo.vn", control_type="Text")
| | | | |
| | | | | Hyperlink - 'www.sapo.vn' (L-31467, T-31298, R-31329, B-31270)
| | | | | ['Hyperlink', 'www.sapo.vn2', 'www.sapo.vnHyperlink']
| | | | | child_window(title="www.sapo.vn", control_type="Hyperlink")
| | |
| | | Static - ' ' (L-31329, T-31298, R-31316, B-31265)
| | | ['Static14', ' Static', ' ', ' Static0', ' Static1']
| | | child_window(title=" ", control_type="Text")
| | |
| | | Static - '' (L-31269, T-31298, R-31032, B-31265)
| | | ['Static15', ' Static2']
| | | |
| | | | Image - '' (L-31269, T-31298, R-31241, B-31270)
| | | | [' Image', 'Image7']
| | | |
| | | | Static - 'Hotline' (L-31231, T-31298, R-31154, B-31270)
| | | | ['Static16', 'HotlineStatic', 'Hotline']
| | | | child_window(title="Hotline", control_type="Text")
| | | |
| | | | Static - ': 1800 6750' (L-31154, T-31298, R-31032, B-31270)
| | | | ['Static17', ': 1800 6750', ': 1800 6750Static']
| | | | child_window(title=": 1800 6750", control_type="Text")
|
| Custom - '' (L-31636, T-31741, R-30997, B-31491)
| ['Đăng nhập tài khoản chủ nhà hàngCustom', 'Custom5']
| |
| | Static - '' (L-31421, T-31731, R-31213, B-31685)
| | ['Static18', '(+84)Static2']
| | |
| | | Image - '' (L-31416, T-31726, R-31380, B-31690)
| | | ['Image8', '(+84)Image']
| | |
| | | Static - 'Thông báo' (L-31370, T-31726, R-31218, B-31690)
| | | ['Static19', 'Thông báo', 'Thông báoStatic']
| | | child_window(title="Thông báo", control_type="Text")
| |
| | Static - 'OK' (L-31636, T-31571, R-30997, B-31491)
| | ['Static20', 'OKStatic', 'OK', 'OKStatic0', 'OKStatic1', 'OK0', 'OK1']
| | child_window(title="OK", control_type="Text")
| | |
| | | Static - 'OK' (L-31334, T-31546, R-31300, B-31516)
| | | ['Static21', 'OKStatic2', 'OK2']
| | | child_window(title="OK", control_type="Text")
| |
| | Static - 'Số điện thoại không đúng định dạng' (L-31520, T-31643, R-31114, B-31613)
| | ['Static22', 'Số điện thoại không đúng định dạngStatic', 'Số điện thoại không đúng định dạng']
| | child_window(title="Số điện thoại không đúng định dạng", control_type="Text")
==> I want to control
Static - 'OK' (L-31636, T-31571, R-30997, B-31491)
| | ['Static20', 'OKStatic', 'OK', 'OKStatic0', 'OKStatic1', 'OK0', 'OK1']
| | child_window(title="OK", control_type="Text")
| | |
| | | Static - 'OK' (L-31334, T-31546, R-31300, B-31516)
| | | ['Static21', 'OKStatic2', 'OK2']
| | | child_window(title="OK", control_type="Text")
| |
| | Static - 'Số điện thoại không đúng định dạng' (L-31520, T-31643, R-31114, B-31613)
| | ['Static22', 'Số điện thoại không đúng định dạngStatic', 'Số điện thoại không đúng định dạng']
| | child_window(title="Số điện thoại không đúng định dạng", control_type="Text")
Please help me!
Thanks so much!!
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.7
- Platform and OS: win10
| closed | 2020-09-05T18:41:38Z | 2020-09-18T10:36:09Z | https://github.com/pywinauto/pywinauto/issues/977 | [
"question"
] | yenbka | 4 |
mirumee/ariadne-codegen | graphql | 274 | Get copy of introspected schema | Thanks for an excellent tool!
I'm using the `remote_schema_url` to introspect and generate the Python code, and it works great. However, it would be nice to be able to get a copy of the introspected schema, as it would aid in developing the query code.
Perhaps I've missed something in the docs and this is already possible. If not, I can create a PR, if you think it makes sense to add this. | closed | 2024-02-12T11:12:59Z | 2024-02-12T11:42:02Z | https://github.com/mirumee/ariadne-codegen/issues/274 | [] | rbw | 4 |
scrapy/scrapy | web-scraping | 6,658 | Switch tests to full pytest style | We currently use pytest to run tests but write tests in the `unittest` style, also using `twisted.trial`. This has some restrictions, especially regarding pytest fixtures (see also #6478 and #6637). It seems like a good idea to switch to just using pytest, with pytest-style asserts, fixtures etc., using `pytest-twisted` if needed. Hopefully this can be done gradually. We can also try this migration on any of the smaller repos with a similar way of running tests, such as w3lib.
We may also want to rewrite async tests from `inlineCallbacks` (or even `addCallback`) to `async def` in the process (or separately, whatever is easier).
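To give a flavor of the target style, here is a hedged toy sketch (illustrative stand-in code, not actual Scrapy tests) of the same check written in `unittest` style and then in plain-pytest style:

```python
# Hedged before/after sketch of the migration flavor. The trial/unittest
# style being moved away from:
#
#     class ResponseTest(unittest.TestCase):
#         def test_status(self):
#             self.assertEqual(build_response().status, 200)
#
# becomes a bare function with a plain assert in pytest style:

class _Response:
    """Toy stand-in for whatever a real test would construct."""
    status = 200

def build_response():
    return _Response()

def test_status():
    assert build_response().status == 200
```

Fixtures and `pytest-twisted` decorators would replace the trial-specific setup wherever async behaviour is involved.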
Random related links:
* https://docs.pytest.org/en/stable/how-to/unittest.html
* https://docs.pytest.org/en/stable/how-to/xunit_setup.html
* https://github.com/pytest-dev/pytest-twisted/issues/147
* https://github.com/dannysepler/pytestify
* https://github.com/pytest-dev/unittest2pytest | open | 2025-02-06T16:03:39Z | 2025-03-11T17:43:45Z | https://github.com/scrapy/scrapy/issues/6658 | [
"enhancement",
"CI"
] | wRAR | 1 |
pytorch/pytorch | deep-learning | 148,908 | Numpy v1 v2 compatibility | What's the policy on numpy compatibility in pytorch? I see that requirements-ci.txt pins numpy==1 for <python3.13 and numpy==2 for py3.13, but later in CI numpy gets reinstalled as numpy==2.0.2 for most python versions. Is CI supposed to use v2 or v1? Does being compatible with v2 ensure compatibility with v1?
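For reference, the pinning policy described above can be written down as a tiny helper. This is purely illustrative and mirrors the issue text; it is not an official PyTorch utility:

```python
def numpy_pin(python_version: tuple) -> str:
    """Return the NumPy pin requirements-ci.txt applies for a given
    Python version, per the description above. Illustrative only."""
    return "numpy==2" if python_version >= (3, 13) else "numpy==1"
```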
cc @mruberry @rgommers @malfet | closed | 2025-03-10T20:10:10Z | 2025-03-10T20:13:59Z | https://github.com/pytorch/pytorch/issues/148908 | [
"module: numpy"
] | clee2000 | 1 |
indico/indico | sqlalchemy | 6,460 | Show number of emails about to be sent to avoid mistakes | **Is your feature request related to a problem? Please describe.**
In the sending emails dialog, there is a preview. Currently, it only shows the first email as an example. By mistake, I sent an email to way too many people, which was embarrassing. I am speaking both about the "contributions" list and the "submissions" list "Email" button.
**Describe the solution you'd like**
1. It would already help a lot if the dialog showed how many people will be emailed! This can be in the preview and/or the message editing form.
2. It would be nice to click through other example emails beyond the first one.
**Describe alternatives you've considered**
Alternatively, one could think of storing the prepared emails in an outbox first, where they can be double-checked, and then flushing that outbox upon press of a "resume" button.
**Additional context**
v3.2.9 | open | 2024-07-30T12:05:22Z | 2024-07-30T12:05:22Z | https://github.com/indico/indico/issues/6460 | [
"enhancement"
] | JohannesBuchner | 0 |
Miksus/rocketry | automation | 161 | BUG Using CSVFileRepo raise NotImplementedError | **Install**
```shell
pip install rocketry==2.4.1
```
**Code**
```python
import datetime

from rocketry import Rocketry
from redbird.repos import CSVFileRepo

app = Rocketry(logger_repo=CSVFileRepo(filename='logs.csv'))

@app.task('secondly')
def do_things():
    print(datetime.datetime.now())

if __name__ == '__main__':
    app.run()
```
It raises `NotImplementedError`:
```
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\templates.py", line 68, in last
return self.repo.query_read_last(self.query_)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\templates.py", line 309, in query_read_last
raise NotImplementedError("Read using first not implemented.")
NotImplementedError: Read using first not implemented.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:/mycode/Rocketry_examples/04 日志.py", line 11, in <module>
def do_things():
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\rocketry\tasks\func.py", line 193, in __call__
super().__init__(func=func, **self._delayed_kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\rocketry\core\task.py", line 275, in __init__
self.set_cached()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\rocketry\core\task.py", line 825, in set_cached
self.last_run = self._get_last_action("run", from_logs=True, logger=logger)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\rocketry\core\task.py", line 1064, in _get_last_action
value = self._get_last_action_from_log(action, logger)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\rocketry\core\task.py", line 1074, in _get_last_action_from_log
record = logger.get_latest(action=action)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\rocketry\core\log\adapter.py", line 91, in get_latest
return self.filter_by(**kwargs).last()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\templates.py", line 70, in last
return super().last()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\base.py", line 57, in last
for item in self.query():
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\templates.py", line 23, in query
yield from items
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\repos\csv.py", line 82, in query_items
yield from read_items(self, self.read_file(), query)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\utils\query.py", line 39, in read_items
for data in reader:
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\repos\csv.py", line 114, in read_file
reader = self.get_reader(file)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\repos\csv.py", line 143, in get_reader
return csv.DictReader(buff, fieldnames=self.get_headers(), **self.kwds_csv)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\redbird\repos\csv.py", line 105, in get_headers
raise TypeError("Cannot determine CSV headers")
TypeError: Cannot determine CSV headers
``` | closed | 2022-11-29T03:09:10Z | 2022-11-29T08:35:37Z | https://github.com/Miksus/rocketry/issues/161 | [
"bug"
] | vba34520 | 1 |
ymcui/Chinese-BERT-wwm | nlp | 91 | About pipeline | I'm a complete newcomer and would like to ask a question. The new version of Transformers provides a pipeline interface that can quickly apply models to tasks such as "feature-extraction", "sentiment-analysis", "ner", "question-answering" and "fill-mask". I tried using Chinese-BERT-wwm directly in a pipeline and got an error. Is this feature simply not provided? | closed | 2020-03-08T15:27:10Z | 2020-03-11T04:50:48Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/91 | [] | guofei1989 | 2
Significant-Gravitas/AutoGPT | python | 9,317 | Add XML Parsing block | Use [https://github.com/Significant-Gravitas/gravitasml](https://github.com/Significant-Gravitas/gravitasml) | closed | 2025-01-22T14:23:48Z | 2025-02-12T01:38:31Z | https://github.com/Significant-Gravitas/AutoGPT/issues/9317 | [
"good first issue"
] | ntindle | 17 |
sunscrapers/djoser | rest-api | 712 | Add Blacklist endpoint for jwt endpoints | Please update the rest_framework_simplejwt package to v5.* so we could add blacklisting of token upon logout using jwt | closed | 2023-01-26T04:17:08Z | 2023-04-29T12:16:08Z | https://github.com/sunscrapers/djoser/issues/712 | [] | cooldragon12 | 2 |
graphql-python/graphene-django | graphql | 828 | AttributeError: 'function' object has no attribute 'wrapped' in Django 2.2 | In Django 2.2 (works fine in 2.1) tests, connections are overridden/monkey patched with properties that throw errors, specifically the `connection.cursor` method.
https://github.com/django/django/blob/master/django/test/testcases.py#L210
Graphene also monkey patches `connection.cursor`.
https://github.com/graphql-python/graphene-django/blob/master/graphene_django/debug/sql/tracking.py#L43
This causes tests to fail when Django attempts to undo the monkey patch.
https://github.com/django/django/blob/master/django/test/testcases.py#L220
The following error occurs:
```
ERROR: tearDownClass (point_of_sale.tests.graphene.queries.e2e_test_cash_and_check_batch_query.CashAndCheckBatchQueryTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/cedar/api/common/testing/test_cases/db_test_case.py", line 44, in tearDownClass
super(TestCase, cls).tearDownClass()
File "/usr/local/lib/python3.6/dist-packages/django/test/testcases.py", line 244, in tearDownClass
cls._remove_databases_failures()
File "/usr/local/lib/python3.6/dist-packages/django/test/testcases.py", line 240, in _remove_databases_failures
setattr(connection, name, method.wrapped)
AttributeError: 'function' object has no attribute 'wrapped'
----------------------------------------------------------------------
```
This test is using the Django test client to test the /graphql endpoint.
https://docs.djangoproject.com/en/3.0/topics/testing/tools/#overview-and-a-quick-example
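One possible direction for a workaround, shown here as a hedged sketch with stand-in names (this is not graphene-django's actual code), is to make the monkey patch keep a `.wrapped` reference to the original, which is exactly the attribute Django's teardown looks for:

```python
import functools

class FakeConnection:
    """Stand-in for a Django DB connection (illustration only)."""
    def cursor(self):
        return "real-cursor"

def patch_cursor_preserving_wrapped(connection):
    """Replace connection.cursor while keeping a `.wrapped` reference,
    so teardown code doing `setattr(connection, name, method.wrapped)`
    can still undo the patch. Hedged sketch, not the library's code."""
    original = connection.cursor

    @functools.wraps(original)
    def cursor(*args, **kwargs):
        # tracking/instrumentation would go here
        return original(*args, **kwargs)

    cursor.wrapped = original  # the attribute Django 2.2's teardown expects
    connection.cursor = cursor
```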
| open | 2019-12-18T04:45:59Z | 2023-01-25T14:29:04Z | https://github.com/graphql-python/graphene-django/issues/828 | [
"wontfix"
] | taylor-cedar | 11 |
mwouts/itables | jupyter | 325 | `show(df)` does not work with `modin.pandas` | `show()` does not work when I import pandas from modin. I'm using [modin](https://github.com/modin-project/modin) to improve pandas performance.
```
import modin.pandas as pd
df = pd.read_csv("****.csv")
```
Now calling `show(df, classes="display")` shows the following error:
```
AttributeError: 'DataFrame' object has no attribute 'iter_rows'
``` | open | 2024-10-06T12:42:49Z | 2025-02-17T13:50:39Z | https://github.com/mwouts/itables/issues/325 | [] | wpritom | 10 |
pyeve/eve | flask | 745 | Quickstart instructions produce 500 error | Following the quickstart instructions in the documentation, I get 500 errors for http://127.0.0.1:5000/people with version 0.6.0.
After some investigation, it was because I did not have mongo up and running.
| closed | 2015-10-19T07:14:04Z | 2015-10-19T07:20:23Z | https://github.com/pyeve/eve/issues/745 | [] | einchance | 3 |
Josh-XT/AGiXT | automation | 1,183 | Ask the user if they want to execute the suggested chain of commands. | https://github.com/Josh-XT/AGiXT/blob/b6aa3d617605713619197f7214d939db039f9b35/agixt/Interactions.py#L839
```python
command_args=command_args,
)
)
# TODO: Ask the user if they want to execute the suggested chain of commands.
command_output = f"{command_output}\n\n**Would you like to execute the command `{command_name}` with the following parameters?**\n```json\n{json.dumps(command_args, indent=4)}\n```"
# Ask the AI to make the command output more readable and relevant to the conversation and respond with that.
except Exception as e:
logging.error(
f"Error: {self.agent_name} failed to execute command `{command_name}`. {e}"
``` | closed | 2024-05-09T17:32:20Z | 2024-05-28T14:30:03Z | https://github.com/Josh-XT/AGiXT/issues/1183 | [
"todo"
] | github-actions[bot] | 1 |
axnsan12/drf-yasg | django | 583 | Exclude according to the "request" object | Is there a way to use the "request" object when excluding endpoints?
In my case I want to filter the endpoints displayed to the user according to the user's permissions in our system.
I know about the option to use `permission_classes`, but this didn't work in my case: my viewset uses `permission_classes`, yet the un-permitted views are still displayed in the Swagger UI.
piskvorky/gensim | machine-learning | 2,820 | Prepare gensim 3.8.3 | OK guys, looks like we're getting close to releasing this thing. I've just updated the CHANGELOG - @piskvorky please have a look and make any changes as necessary. Each update will require a re-run of the CI and a rebuild of the wheels, so please keep that in mind.
Some other relevant things to check:
- [Release checklist](https://github.com/RaRe-Technologies/gensim/wiki/Developer-page#making-a-new-release)
- [Release milestone](https://github.com/RaRe-Technologies/gensim/milestone/2?closed=1)
- [Diff with current develop HEAD](https://github.com/RaRe-Technologies/gensim/compare/develop...release-3.8.3?expand=1)
I've gone through the above myself and think like we're ready to release. @piskvorky @menshikh-iv Please let me know if you feel the same and we'll get this thing out the door. | closed | 2020-05-02T23:59:19Z | 2020-10-28T02:12:13Z | https://github.com/piskvorky/gensim/issues/2820 | [
"housekeeping"
] | mpenkov | 7 |
JaidedAI/EasyOCR | deep-learning | 655 | Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) | My operating system is Ubuntu.
The code:
```python
import easyocr

path = "/home/why/work/python/pas/images/shot.png"
reader = easyocr.Reader(['en'])
result = reader.readtext(path)
```
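As a hedged debugging step (a suggestion rather than a confirmed fix), enabling the stdlib `faulthandler` before importing easyocr makes Python dump a traceback even on SIGSEGV, which can show which native call is crashing:

```python
# Hedged debugging sketch: dump the Python stack on fatal signals such
# as SIGSEGV so the crashing call can be located.
import faulthandler

faulthandler.enable()

# ...then run the failing snippet, e.g.:
# import easyocr
# reader = easyocr.Reader(['en'])
# result = reader.readtext("/home/why/work/python/pas/images/shot.png")
```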
| closed | 2022-01-29T04:16:58Z | 2022-08-25T10:52:28Z | https://github.com/JaidedAI/EasyOCR/issues/655 | [] | mwt-why | 1 |
flairNLP/flair | pytorch | 3,609 | [Question]: How to merge output from flair with NER model | ### Question
Hey,
I'm using flair with the ner-english-ontonotes-large model to detect entities in text, and it is working really great.
Further processing of these NER results becomes difficult when texts refer to the same entity in different ways.
For example, if I have a news item about the greatest duck of Duckburg, Donald Duck, like this:
"Donald Duck is the famous person from Duckburg. Donald lives there with his family"
Flair/NLP will generate two person entities: "Donald Duck" and "Donald".
I know this is probably not a flair-specific question, but is there a way to merge/find the connection between "Donald Duck" and "Donald"?
The use case is to collect, for example, all the persons in a text, and it is sub-optimal if the output treats "Donald Duck" and "Donald" as different persons.
On the other hand, the model is great at recognizing when the same word does not belong to the same entity, like "Hamburger": the model "knows" exactly whether it is a GPE, a NORP or a PRODUCT.
What I need is the reverse case: different words that mean the same thing.
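One simple heuristic, sketched below as toy code (it is not a flair API, and real coreference resolution is considerably harder), is to merge PERSON mentions when one mention's tokens are a subset of another's:

```python
def merge_person_mentions(mentions):
    """Map each PERSON mention to the longest mention whose tokens
    contain all of its tokens ("Donald" -> "Donald Duck"). A rough
    heuristic sketch only."""
    by_token_count = sorted(set(mentions), key=lambda m: -len(m.split()))
    canonical = {}
    for mention in mentions:
        tokens = set(mention.lower().split())
        for candidate in by_token_count:
            if tokens <= set(candidate.lower().split()):
                canonical[mention] = candidate
                break
    return canonical
```

For the example above this maps both "Donald" and "Donald Duck" to "Donald Duck"; note that it will happily mis-merge distinct people who share a name, so a full coreference-resolution model is the more robust alternative.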
Any idea how to handle/merge this? | open | 2025-02-03T14:38:33Z | 2025-02-07T18:13:01Z | https://github.com/flairNLP/flair/issues/3609 | [
"question"
] | B0rner | 1 |
gradio-app/gradio | machine-learning | 10,519 | [Gradio 5.15 container] - Width size: Something changed | ### Describe the bug
I was controlling the width of the main interface with custom CSS on the `gradio-container` class, but in this new version it is not working.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
css ="""
.gradio-container {width: 95% !important}
div.gradio-container{
max-width: unset !important;
}
"""
with gr.Blocks(css=css) as app:
with gr.Tabs():
with gr.TabItem("Test"):
gallery = gr.Gallery(label="Generated Images", interactive=True, show_label=True, preview=True, allow_preview=True)
app.launch(inbrowser=True)
```
### Screenshot

### Logs
```shell
N/A
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.15.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.7.0 is not installed.
httpx: 0.27.0
huggingface-hub: 0.28.1
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.26.3
orjson: 3.10.6
packaging: 24.1
pandas: 2.2.2
pillow: 11.0.0
pydantic: 2.8.2
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.1
ruff: 0.9.4
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.3
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.5
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.0
huggingface-hub: 0.28.1
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2025-02-05T21:34:22Z | 2025-02-27T07:03:10Z | https://github.com/gradio-app/gradio/issues/10519 | [
"bug"
] | elismasilva | 4 |
miguelgrinberg/Flask-SocketIO | flask | 810 | Misbehaving websocket client can crash server | I have an app based on Flask-SocketIO (running on eventlet), and this week I was seeing frequent issues where the server would print the trace below and then stop responding to all requests.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/eventlet/wsgi.py", line 547, in handle_one_response
result = self.application(self.environ, start_response)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.6/site-packages/flask_socketio/__init__.py", line 43, in __call__
start_response)
File "/usr/local/lib/python3.6/site-packages/engineio/middleware.py", line 47, in __call__
return self.engineio_app.handle_request(environ, start_response)
File "/usr/local/lib/python3.6/site-packages/socketio/server.py", line 360, in handle_request
return self.eio.handle_request(environ, start_response)
File "/usr/local/lib/python3.6/site-packages/engineio/server.py", line 282, in handle_request
environ, start_response)
File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 103, in handle_get_request
start_response)
File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 145, in _upgrade_websocket
return ws(environ, start_response)
File "/usr/local/lib/python3.6/site-packages/engineio/async_eventlet.py", line 19, in __call__
return super(WebSocketWSGI, self).__call__(environ, start_response)
File "/usr/local/lib/python3.6/site-packages/eventlet/websocket.py", line 130, in __call__
self.handler(ws)
File "/usr/local/lib/python3.6/site-packages/engineio/socket.py", line 170, in _websocket_handler
pkt = ws.wait()
File "/usr/local/lib/python3.6/site-packages/eventlet/websocket.py", line 788, in wait
for i in self.iterator:
File "/usr/local/lib/python3.6/site-packages/eventlet/websocket.py", line 643, in _iter_frames
message = self._recv_frame(message=fragmented_message)
File "/usr/local/lib/python3.6/site-packages/eventlet/websocket.py", line 669, in _recv_frame
header = recv(2)
File "/usr/local/lib/python3.6/site-packages/eventlet/websocket.py", line 578, in _get_bytes
d = self.socket.recv(numbytes - len(data))
File "/usr/local/lib/python3.6/site-packages/eventlet/greenio/base.py", line 364, in recv
return self._recv_loop(self.fd.recv, b'', bufsize, flags)
File "/usr/local/lib/python3.6/site-packages/eventlet/greenio/base.py", line 358, in _recv_loop
self._read_trampoline()
File "/usr/local/lib/python3.6/site-packages/eventlet/greenio/base.py", line 329, in _read_trampoline
timeout_exc=socket_timeout('timed out'))
File "/usr/local/lib/python3.6/site-packages/eventlet/greenio/base.py", line 208, in _trampoline
mark_as_closed=self._mark_as_closed)
File "/usr/local/lib/python3.6/site-packages/eventlet/hubs/__init__.py", line 164, in trampoline
return hub.switch()
File "/usr/local/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 297, in switch
return self.greenlet.switch()
socket.timeout: timed out
```
This looks similar to #557, but most of the debugging there seemed to center around fixing the client to avoid the error. In my case, the client is an Ember.js app. I was using Mirage for data mocking, but attempting to let websocket traffic flow using Mirage's `passthrough`. My best interpretation is that `passthrough` doesn't work with websockets, and was causing an unexpected sequence of events in the websocket exchange. I've since made some adjustments on the client side and it seems to mitigate the issue.
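For illustration, the kind of per-connection defensive handling being argued for here might look like the following toy sketch (assumed names, not eventlet's or engineio's actual implementation):

```python
import socket

def receive_loop(wait_once, drop_client):
    """Per-connection receive loop: a timeout drops only this client
    instead of propagating into the server. Illustrative sketch only."""
    while True:
        try:
            packet = wait_once()
        except socket.timeout:
            drop_client("receive timed out")
            return
        if packet is None:  # client closed the connection
            return
        # ...dispatch `packet` to the application here...
```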
Regardless, if a _misbehaving_ client can lock up and crash my server, it seems like a _malicious_ client could do the same thing. No matter what the client does, the server should not lock up and crash. If a socket times out waiting on a client response, the server should just move on/drop the client/etc - anything to gracefully handle the broken flow and keep serving other requests. | closed | 2018-10-11T14:14:11Z | 2019-04-07T10:09:42Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/810 | [
"question"
] | awrichar | 3 |
yeongpin/cursor-free-vip | automation | 209 | [Discussion]: The 0.47.x update is being released. will be update? | ### Issue Checklist
- [x] I understand that Issues are used to provide feedback and solve problems, not to complain in the comments section, and will provide more information to help solve the problem.
- [x] I confirm that I need to raise questions and discuss problems, not Bug feedback or demand suggestions.
- [x] I have read [Github Issues](https://github.com/yeongpin/cursor-free-vip/issues) and searched for existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar issues.
### Platform
Windows x32
### Version
0.47
### Your question
Hi, I wanted to ask about the 0.47 version of Cursor, which is being released right now. When will there be an update for this tool?

### Additional information
```shell
```
### Priority
Low (I'll look at it when I have time) | closed | 2025-03-12T10:37:59Z | 2025-03-13T05:43:47Z | https://github.com/yeongpin/cursor-free-vip/issues/209 | [
"question"
] | ElnurM1 | 3 |
nschloe/matplotx | matplotlib | 34 | Bar labels when bar is too short | Sometimes when using `show_bar_values(alignemnent="horizontal")` and the bar is too small this can happen:

The expected behaviour would be:

| open | 2022-02-07T15:52:32Z | 2022-02-07T15:55:51Z | https://github.com/nschloe/matplotx/issues/34 | [] | RemDelaporteMathurin | 2 |
automl/auto-sklearn | scikit-learn | 1,412 | I start getting port error already in use | ## Describe the bug ##
I am running Ubuntu on Windows 11 bash and auto sklearn version **0.14.6**
Whenever I try to call auto-sklearn I get this error:
```
An error ocurred while starting the kernel
/home/asmgx/.local/lib/python3.8/site-packages/distributed/node.py:180: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 42913 instead
warnings.warn(
```
Here is the code:
```python
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=60*10,
    per_run_time_limit=60*1,
    memory_limit=1024*10,
    n_jobs=-1,
    metric=autosklearn.metrics.f1_macro,
)
```
I tried restarting my laptop and restarting the kernel, but nothing works and I keep getting the same error.
I tried calling the code from Spyder (got the same error) and also from plain Python, and still got the same error.
| closed | 2022-03-01T04:01:44Z | 2022-03-25T12:16:42Z | https://github.com/automl/auto-sklearn/issues/1412 | [] | asmgx | 5 |
wkentaro/labelme | deep-learning | 527 | Instance segmentation not working | SegmentationObjectPNG and SegmentationClassPNG contain the same type of images and do not show different colors for different instances.
<img src=https://user-images.githubusercontent.com/55757328/71172201-9db37500-2285-11ea-9758-e8decca2be09.png width=30% > <img src=https://user-images.githubusercontent.com/55757328/71172208-a441ec80-2285-11ea-92f0-5c145f4059dc.png width=30%>
This is even though, while labelling, I labelled the shapes as classname-1, classname-2.
The labels.txt file contains classname only once, as you have shown in the instance segmentation example.
What could I be doing wrong? I really want different instances in different colors | closed | 2019-12-19T12:03:40Z | 2020-03-15T00:00:21Z | https://github.com/wkentaro/labelme/issues/527 | [] | aditya-krish | 2 |
bloomberg/pytest-memray | pytest | 109 | Getting different results for @pytest.mark.limit_memory on macOS | ## Bug Report
The urllib3 test suite uses `@pytest.mark.limit_memory`. When using pytest 7.4.4 + pytest-memray 1.5.0 we get the expected behaviour (that is, the test passes).
Then using pytest 8.0.0 + pytest-memray 1.5.0 we got some test failures. This happens on macOS (python 3.10, 3.11 and 3.12 ) and Ubuntu 22.04 (python 3.12, not with python 3.11 or python 3.10).
The test checks for a limit of 10.01 MB but fails due to increased reported memory usage (10.1 MB). Since the test code has not changed (the only change is bumping the pytest version to 8.0.0), we wonder if there is a problem in the way pytest-memray calculates the memory usage.
I'm aware that the [ pytest-memray usage](https://pytest-memray.readthedocs.io/en/latest/usage.html) states:
> As the Python interpreter has its own [object allocator](https://docs.python.org/3/c-api/memory.html) it’s possible that memory is not immediately released to the system when objects are deleted, so tests using this marker **may need to give some room to account for this.**
But we wonder why the memory usage reported is bigger now if the tested code is the same.
Also, we know that if we run urllib3's `test_get_all_memory_usage_single_chunk` alone it will pass (only consuming 10.01M), but if we run `test_socket_close_socket_then_file` before it, it fails on `test_get_all_memory_usage_single_chunk` (reporting that it consumes 10.1M). That suggests that some allocation that happens in `test_socket_close_socket_then_file` is counted as if it happened in `test_get_all_memory_usage_single_chunk`.
See more details on https://github.com/urllib3/urllib3/pull/3335
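As a hedged mitigation idea (an assumption to test, not a confirmed fix), forcing a collection around each test can reduce how much garbage left over from an earlier test gets attributed to the next one:

```python
import gc
from contextlib import contextmanager

@contextmanager
def clean_memory_window():
    """Run a block with forced collections on both sides, so leftovers
    from earlier work are less likely to be counted against it."""
    gc.collect()
    try:
        yield
    finally:
        gc.collect()
```

Wired up as an autouse fixture, this would wrap each test body; whether it actually removes the 0.1 MB delta here is an open question.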
**Environment**
python 3.10 on macOS
python 3.11 on macOS
python 3.12 on macOS
python 3.12 on Ubuntu 22.04
| closed | 2024-02-27T20:30:15Z | 2024-02-27T21:32:46Z | https://github.com/bloomberg/pytest-memray/issues/109 | [] | ecerulm | 5 |
sinaptik-ai/pandas-ai | data-science | 1,027 | need clarification | I am seeing inconsistent results based on the order of fields provided in the data.
Using this dataframe from the provided examples:
```python
dataframe = {
    "country": [
        "United States",
        "United Kingdom",
        "France",
        "Germany",
        "Italy",
        "Spain",
        "Canada",
        "Australia",
        "Japan",
        "China",
    ],
    "gdp": [
        19294482071552,
        2891615567872,
        2411255037952,
        3435817336832,
        1745433788416,
        1181205135360,
        1607402389504,
        1490967855104,
        4380756541440,
        14631844184064,
    ],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12],
}
```
When I ask the PandasAI agent:
```python
llm = OpenAI()
df = Agent([pd.DataFrame(dataframe)], config={"llm": llm})
response = df.chat("What are 3 most happiest countries?")
print(response)
```
I get:
"No happiness index data available."
But when I move this line above `gdp` when defining the dataframe,
```python
"happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12],
```
I get the expected answer.
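A quick pure-Python sanity check (a hedged illustration, not an explanation of the bug) shows that reordering the keys changes only iteration order, not the data, which points at the agent's prompt-serialization step rather than at pandas itself:

```python
# Hedged sanity check: the underlying data is identical either way;
# only the key/column iteration order differs.
a = {"gdp": [1, 2], "happiness_index": [6.9, 7.1]}
b = {"happiness_index": [6.9, 7.1], "gdp": [1, 2]}

assert a == b              # identical data: dict equality ignores key order
assert list(a) != list(b)  # only the iteration (column) order differs
```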
What could have caused this issue? | closed | 2024-03-13T09:58:11Z | 2024-06-20T16:04:12Z | https://github.com/sinaptik-ai/pandas-ai/issues/1027 | [] | PNF404 | 2 |
awesto/django-shop | django | 615 | Modules in common.txt not installed through pip install django-shop | The following modules were not installed with `pip install django-shop` although they are included in [common.txt](https://github.com/awesto/django-shop/blob/master/requirements/common.txt)
* django-filter
* django-sass-processor
* django-compressor
* djangocms-bootstrap3
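For anyone reproducing this, here is a hedged stdlib sketch (our helper, not part of django-shop) that reports which declared distributions are actually missing from the current environment:

```python
from importlib import metadata

def missing_distributions(names):
    """Return the subset of `names` that are not installed."""
    absent = []
    for name in names:
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            absent.append(name)
    return absent

# e.g. missing_distributions(["django-filter", "django-sass-processor",
#                             "django-compressor", "djangocms-bootstrap3"])
```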
| closed | 2017-07-07T14:06:22Z | 2017-07-07T14:29:05Z | https://github.com/awesto/django-shop/issues/615 | [] | raratiru | 2 |
wandb/wandb | tensorflow | 8,981 | [Q]: Do we need to purchase a commercial license if we build server in our internal AWS env? | ### Ask your question
We want to build a wandb server in our company's AWS environment. Do we need to purchase a commercial license?
Reference doc: https://docs.wandb.ai/guides/hosting/self-managed/aws-tf/
| closed | 2024-12-02T07:18:34Z | 2024-12-05T22:59:36Z | https://github.com/wandb/wandb/issues/8981 | [
"ty:question",
"a:app"
] | AaronZhangL | 3 |
jina-ai/clip-as-service | pytorch | 625 | zmq.error.ZMQError: Operation not supported | **Prerequisites**
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
- OS Platform and Distribution (WSL Ubuntu 18.04):
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.10.0
- Python version:3.6
- `bert-as-service` version: 1.10.0
- GPU model and memory:
- CPU model and memory:
---
### Description
I'm using this command to start the server:
```bash
bert-serving-start -model_dir=/mnt/f/cased_L-12_H-768_A-12 -num_worker=4
```
Then this issue shows up:
```bash
bert-serving-start -model_dir=/mnt/f/cased_L-12_H-768_A-12 -num_worker=4
usage: /home/jiaenliu/anaconda3/envs/py36/bin/bert-serving-start -model_dir=/mnt/f/cased_L-12_H-768_A-12 -num_worker=4
ARG VALUE
__________________________________________________
ckpt_name = bert_model.ckpt
config_name = bert_config.json
cors = *
cpu = False
device_map = []
do_lower_case = True
fixed_embed_length = False
fp16 = False
gpu_memory_fraction = 0.5
graph_tmp_dir = None
http_max_connect = 10
http_port = None
mask_cls_sep = False
max_batch_size = 256
max_seq_len = 25
model_dir = /mnt/f/cased_L-12_H-768_A-12
no_position_embeddings = False
no_special_token = False
num_worker = 4
pooling_layer = [-2]
pooling_strategy = REDUCE_MEAN
port = 5555
port_out = 5556
prefetch_size = 10
priority_batch_size = 16
show_tokens_to_client = False
tuned_model_dir = None
verbose = False
xla = False
I:VENTILATOR:[__i:__i: 67]:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:[gra:opt: 53]:model config: /mnt/f/cased_L-12_H-768_A-12/bert_config.json
I:GRAPHOPT:[gra:opt: 56]:checkpoint: /mnt/f/cased_L-12_H-768_A-12/bert_model.ckpt
I:GRAPHOPT:[gra:opt: 60]:build graph...
I:GRAPHOPT:[gra:opt:132]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:136]:optimize...
I:GRAPHOPT:[gra:opt:144]:freeze...
I:GRAPHOPT:[gra:opt:149]:write graph to a tmp file: /tmp/tmparfnu0r5
I:VENTILATOR:[__i:__i: 75]:optimized graph is stored at: /tmp/tmparfnu0r5
I:VENTILATOR:[__i:_ru:129]:bind all sockets
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/bert_serving/server/__init__.py", line 115, in run
self._run()
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/zmq/decorators.py", line 76, in wrapper
return func(*args, **kwargs)
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/zmq/decorators.py", line 76, in wrapper
return func(*args, **kwargs)
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/zmq/decorators.py", line 76, in wrapper
return func(*args, **kwargs)
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/bert_serving/server/zmq_decor.py", line 27, in wrapper
return func(*args, **kwargs)
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/bert_serving/server/__init__.py", line 131, in _run
addr_front2sink = auto_bind(sink)
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/bert_serving/server/helper.py", line 203, in auto_bind
socket.bind('ipc://{}'.format(tmp_dir))
File "/home/jiaenliu/anaconda3/envs/py36/lib/python3.6/site-packages/zmq/sugar/socket.py", line 172, in bind
super().bind(addr)
File "zmq/backend/cython/socket.pyx", line 540, in zmq.backend.cython.socket.Socket.bind
File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Operation not supported
```
I have already tried the solution in this thread https://github.com/hanxiao/bert-as-service/issues/293 and it does not work for me.
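For anyone hitting this on WSL: ZeroMQ's `ipc://` transport binds a Unix-domain socket, and older WSL builds reject some of those operations with `Operation not supported`. A stdlib-only probe to check whether the current environment can bind one at all (a diagnostic sketch, not a fix):

```python
import os
import socket
import tempfile

def can_bind_unix_socket():
    """Try to bind a Unix-domain socket (what zmq's ipc:// transport
    uses under the hood). Returns False where the OS refuses it."""
    if not hasattr(socket, "AF_UNIX"):
        return False
    tmpdir = tempfile.mkdtemp()
    path = os.path.join(tmpdir, "probe.sock")
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.bind(path)
        return True
    except OSError:
        return False
    finally:
        sock.close()
        if os.path.exists(path):
            os.remove(path)
        os.rmdir(tmpdir)

print(can_bind_unix_socket())
```

If this prints `False`, the environment itself cannot support the `ipc://` sockets the server creates, so the fix lies at the OS level (e.g. a newer WSL, or running on a Docker/Linux host) rather than in bert-as-service.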
... | closed | 2021-03-30T07:44:13Z | 2021-03-31T02:23:05Z | https://github.com/jina-ai/clip-as-service/issues/625 | [] | JiaenLiu | 1 |
modelscope/data-juicer | streamlit | 49 | [MM enhancement] support text-based interleaved multimodal data as the intermediate format | Basic support of multimodal data processing. | closed | 2023-10-27T06:46:19Z | 2023-11-13T08:26:40Z | https://github.com/modelscope/data-juicer/issues/49 | [
"enhancement",
"dj:multimodal"
] | HYLcool | 0 |
flaskbb/flaskbb | flask | 33 | I'm randomly getting DetachedInstanceError's. | Since we implemented the Flask-WhooshAlchemy search, I sometimes get this error:
`DetachedInstanceError: Parent instance <Post at 0x10e4fc4d0> is not bound to a Session; lazy load operation of attribute 'topic' cannot proceed`
| closed | 2014-03-27T12:44:21Z | 2018-04-15T07:47:31Z | https://github.com/flaskbb/flaskbb/issues/33 | [
"bug"
] | sh4nks | 1 |
statsmodels/statsmodels | data-science | 8,720 | Wildly different answers replicating a GEE model from SPSS | #### Describe the bug
I'm attempting to replicate a GEE model in statsmodels from a published paper that used SPSS (https://pubmed.ncbi.nlm.nih.gov/33279717/). I am getting very different answers for what seems like the same input structure. I even signed up for a free trial of SPSS and can confirm SPSS gives the answers reported in the paper.
The input matrices are being loaded from the same .csv (and I filter using pandas to achieve the same dataframe as in SPSS).
#### Code Sample, a copy-pastable example if possible
```SPSS
USE ALL.
COMPUTE filter_$=(BehTaskNum = 1 or BehTaskNum = 2 or (BehTaskNum = 3 and BlockNumber = 6)).
FILTER BY filter_$.
EXECUTE.
GENLIN DifferenceScore BY White Right (ORDER=ASCENDING)
/MODEL White Right White*Right INTERCEPT=YES
DISTRIBUTION=NORMAL LINK=IDENTITY
/CRITERIA SCALE=MLE PCONVERGE=1E-006(ABSOLUTE) SINGULAR=1E-012 ANALYSISTYPE=3(WALD) CILEVEL=95
LIKELIHOOD=FULL
/REPEATED SUBJECT=participantID SORT=YES
CORRTYPE=EXCHANGEABLE ADJUSTCORR=YES COVB=ROBUST MAXITERATIONS=1000 PCONVERGE=1e-006(ABSOLUTE)
UPDATECORR=1
/PRINT CPS DESCRIPTIVES MODELINFO FIT SUMMARY SOLUTION.
```
```python
fam = sm.families.Gaussian(link=sm.families.links.identity)
ind = sm.cov_struct.Exchangeable()
GEE_model = smf.gee("DifferenceScore ~ White * Right", groups="ParticipantID",
data=stim_df_with_facename,cov_struct=ind, family=fam)
stim_model_out = GEE_model.fit(maxiter=1000)
stim_model_out.summary()
```
SPSS results:
<img width="530" alt="image" src="https://user-images.githubusercontent.com/29741844/222989361-34f313c6-b734-4a8b-af11-4d92265a0643.png">
statsmodel results:
<img width="233" alt="image" src="https://user-images.githubusercontent.com/29741844/222990165-e16ebe3f-98fc-4720-bf17-a0ff1498eee6.png">
The results aren't even close (seems statsmodels isn't converging--and I've tried up to 10000 iterations but get the same result). I should point out if I run a model with an additional predictor (White*Right+Macro) the results are closer...but still quite a bit different:
SPSS results:
<img width="530" alt="image" src="https://user-images.githubusercontent.com/29741844/222990424-0acc50a6-85a8-4e02-8720-c4b72bd7880f.png">
statsmodel results:
<img width="237" alt="image" src="https://user-images.githubusercontent.com/29741844/222990456-005431f9-cc72-44f6-81c3-c18a665847d0.png">
Thanks for the help--I've spent a lot of time on this. I'm much more familiar with mixed-effects models, but trying those in statsmodels was not replicating the GEE results either (even though in principle they should be similar).
| closed | 2023-03-05T22:52:17Z | 2023-04-14T15:04:20Z | https://github.com/statsmodels/statsmodels/issues/8720 | [] | jjsakon | 8 |
pyppeteer/pyppeteer | automation | 380 | add page number to header or footer |
We can add a header and footer to the PDF with the following code in Puppeteer (JavaScript):
```js
await page.pdf({ path: 'path.pdf',
format: 'a4',
displayHeaderFooter: true,
headerTemplate: ``,
footerTemplate: `
<div style="border-top: solid 1px #bbb; width: 100%; font-size: 9px;
padding: 5px 5px 0; color: #bbb; position: relative;">
<div style="position: absolute; left: 5px; top: 5px;"><span class="date"></span></div>
<div style="position: absolute; right: 5px; top: 5px;"><span class="pageNumber"></span>/<span class="totalPages"></span></div>
</div>
`,
});
```
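Concretely, the hoped-for pyppeteer equivalent would be passing the same camelCase options as a Python dict to `page.pdf` (a sketch based on the assumption that pyppeteer mirrors puppeteer's option names; untested here):

```python
# options assumed to mirror puppeteer's camelCase keys
pdf_options = {
    "path": "path.pdf",
    "format": "a4",
    "displayHeaderFooter": True,
    "headerTemplate": "<span></span>",  # an empty header still needs markup
    "footerTemplate": (
        '<div style="width: 100%; font-size: 9px; color: #bbb; padding: 5px;">'
        '<span class="pageNumber"></span>/<span class="totalPages"></span>'
        "</div>"
    ),
}

# inside an async context with a pyppeteer page object:
#     await page.pdf(pdf_options)
print(pdf_options["displayHeaderFooter"])  # True
```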
Is there any way to do the same in pyppeteer (Python)? | closed | 2022-04-22T04:03:45Z | 2022-05-03T02:16:43Z | https://github.com/pyppeteer/pyppeteer/issues/380 | [] | srkds | 1
aio-libs/aiopg | sqlalchemy | 357 | aiopg.sa queries can block with large result sets | I'm not precisely sure if this is a problem in aiopg, but it seems to be able to manifests through different usages of aiopg queries.
So generally, what I'm seeing is that when trying to make queries which return large number of rows (in my case we're getting back say ~100k rows), using an aiopg.sa engine will cause the event loop to hang while iterating over the rows.
When using the aiopg query/cursor directly, we're good. Example:

```python
import aiopg

async def example():
    async with aiopg.create_pool(CONN_STRING) as pool:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                await cur.execute('SELECT * FROM big_table')
                a = 0
                async for i in cur:
                    a += 1
                print(a)
```
However when doing the same query with an aiopg.sa engine, it "hangs" the event loop. In this case meaning I set `loop.set_debug(True); loop.slow_callback_duration = 2`, and managed to track the source of my periodic hangs to this. example:
```python
import aiopg.sa

async def example():
    async with aiopg.sa.create_engine(**kwargs) as pool:
        async with pool.acquire() as conn:
            result = await conn.execute('SELECT * FROM big_table')
            a = 0
            for row in result:
                a += 1
            print(a)
```
I did manage to (it seems) work around the issue by iterating over the rows in a ThreadPoolExecutor, example:
```python
import asyncio
import aiopg.sa

async def example():
    async with aiopg.sa.create_engine(**kwargs) as pool:
        async with pool.acquire() as conn:
            result = await conn.execute('SELECT * FROM big_table')

            def shenanagins(result):
                # plain `for`: this runs in a worker thread, not the loop
                a = 0
                for row in result:
                    a += 1
                print(a)

            loop = asyncio.get_event_loop()
            result = await loop.run_in_executor(None, shenanagins, result)
```
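The underlying hang is just a synchronous loop monopolizing the event loop; a self-contained illustration with no database (the counting loop stands in for `for row in result`):

```python
import asyncio

async def demo():
    ticks = 0

    async def heartbeat():
        nonlocal ticks
        while True:
            await asyncio.sleep(0.01)
            ticks += 1

    hb = asyncio.ensure_future(heartbeat())
    await asyncio.sleep(0.05)      # the heartbeat gets a chance to run
    before = ticks
    total = sum(range(2_000_000))  # sync work: no awaits, loop is starved
    after = ticks                  # heartbeat could not run in between
    hb.cancel()
    return before, after, total

before, after, total = asyncio.run(demo())
print(before > 0, after == before)  # True True
```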
I was wondering if it may have to do with the dialect attached to aiopg.sa and/or post processing done to the rows, because it otherwise seems like you're mostly just passing through calls to psycopg2's `fetchone` and the `__inext__` for both interfaces are identical. Though it may or may not be something aiopg could handle for the user (or at least you may have more context than I do for a reasonable workaround).
Relatedly, if it *is* related to the post processing of rows, I noticed this [list comprehension](https://github.com/aio-libs/aiopg/blob/master/aiopg/sa/result.py#L366) which seemed like it could be a similar source of the same problem for calls to e.g. `fetchall` | closed | 2017-07-25T19:55:35Z | 2022-11-16T12:33:43Z | https://github.com/aio-libs/aiopg/issues/357 | [] | DanCardin | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,672 | Speed Up Training | I used CycleGAN for CBCT-to-CT reconstruction, but training is very slow: one epoch can take up to 6 hours. Is there any way to speed up the training? | open | 2024-09-04T02:32:25Z | 2024-09-09T23:26:17Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1672 | [] | wyd2 | 1
jmcnamara/XlsxWriter | pandas | 961 | feature request: Add docs for working with Polars | Polars has integrated xlsx output support using xlsxwriter as of v0.16.10: https://github.com/pola-rs/polars/issues/5568
I've added initial docs for this at [Working with Polars and XlsxWriter](https://xlsxwriter.readthedocs.io/working_with_polars.html) in the main documentation. This is somewhat similar to the chapter on [Working with Pandas and XlsxWriter](https://xlsxwriter.readthedocs.io/working_with_pandas.html). | closed | 2023-03-06T00:53:33Z | 2023-03-26T11:31:28Z | https://github.com/jmcnamara/XlsxWriter/issues/961 | [
"feature request"
] | jmcnamara | 7 |
pydata/pandas-datareader | pandas | 898 | Some data missing when downloading from Yahoo | When I download the historical data for a lot of tickers (~1000) from Yahoo Finance, the data starts to be incomplete after about 150 tickers, like this:

```
                  High         Low  ...     Volume   Adj Close
Date                                ...
2021-07-28  160.100006  158.770004  ...  3874300.0  159.419998
2021-07-29  161.070007  160.130005  ...  3621100.0  160.460007
2021-07-30  160.970001  159.720001  ...  4224400.0  159.970001
2021-08-03  160.919998  158.669998  ...  3292000.0  160.899994
2021-08-06  161.460007  160.740005  ...  1235614.0  161.389999
```
Obviously, the data for 08-04 and 08-05 is missing.
When I download a single ticker, there is no problem.
The problem started appearing this week.
------------Update------------
An easy way to work around it temporarily is to add `time.sleep(xxx)` every 100 tickers.
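That workaround can be wrapped in a small helper (a sketch: `fetch` stands in for whatever downloader you use, and the batch size and pause length are guesses to tune):

```python
import time

def download_in_batches(tickers, fetch, batch_size=100, pause=2.0):
    """Fetch tickers in batches, sleeping between batches so Yahoo's
    throttling doesn't silently drop rows from later requests."""
    results = {}
    for start in range(0, len(tickers), batch_size):
        for ticker in tickers[start:start + batch_size]:
            results[ticker] = fetch(ticker)
        if start + batch_size < len(tickers):
            time.sleep(pause)  # back off before the next batch
    return results

# usage sketch (pandas-datareader assumed):
#     data = download_in_batches(all_tickers,
#                                lambda t: web.DataReader(t, "yahoo", start, end))
```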
| open | 2021-08-06T16:44:17Z | 2021-08-06T17:30:59Z | https://github.com/pydata/pandas-datareader/issues/898 | [] | yuzhipeter | 0 |
fastapi/fastapi | fastapi | 12,246 | OpenAPI servers not being returned according how the docs say they should be | ### Discussed in https://github.com/fastapi/fastapi/discussions/12226
<div type='discussions-op-text'>
<sup>Originally posted by **mzealey** September 19, 2024</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from fastapi import FastAPI
app = FastAPI()
# you can add a test endpoint here or not - same bug either way
```
### Description
```
$ curl localhost:8000/openapi.json
{"openapi":"3.1.0","info":{"title":"FastAPI","version":"0.1.0"},"paths":{}}
```
According to the documentation of the `servers` parameter in FastAPI:
> If the servers list is not provided, or is an empty list, the default value would be a dict with a url value of /.
(assuming that `root_path_in_servers = True` (the default))
Clearly this is not happening.
### Operating System
Linux
### Operating System Details
_No response_
### FastAPI Version
0.110.3 (but according to github code seems to be in latest also)
### Pydantic Version
2.5.3
### Python Version
Python 3.10.12
### Additional Context
_No response_</div> | open | 2024-09-22T10:29:30Z | 2024-09-22T16:10:30Z | https://github.com/fastapi/fastapi/issues/12246 | [
"question"
] | Kludex | 3 |
Layout-Parser/layout-parser | computer-vision | 103 | layoutparser doesn't work well for a very well-structured CV | **Describe the bug**
layoutparser doesn't work well for a very well-structured CV. Am I using layoutparser in the wrong way? Could you please help check? Thanks very much.
**To Reproduce**
````
import layoutparser as lp
import cv2
import ssl
import warnings
ssl._create_default_https_context = ssl._create_unverified_context
warnings.filterwarnings('ignore')
image = cv2.imread("data/25.png")
image = image[..., ::-1]
model = lp.Detectron2LayoutModel('lp://PubLayNet/mask_rcnn_R_50_FPN_3x/config',
extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
label_map={0: "Text", 1: "Title", 2: "List", 3:"Table", 4:"Figure"})
layout = model.detect(image)
print(layout)
# Detect the layout of the input image
lp.draw_box(image, layout, box_width=3).show()
````
**Environment**
1. macos
2. use below command to install layoutparser
- pip install layoutparser torchvision && pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.5#egg=detectron2"
- Python 3.9.1
**Screenshots**
<img width="439" alt="Screen Shot 2021-12-02 at 3 51 32 PM" src="https://user-images.githubusercontent.com/7931810/144380496-05e1549e-c987-4649-9161-ff2b5226f33e.png">
<img width="438" alt="Screen Shot 2021-12-02 at 3 51 40 PM" src="https://user-images.githubusercontent.com/7931810/144380517-2e91b752-79fe-457e-8227-5c4e2e8c3dfc.png">
<img width="603" alt="Screen Shot 2021-12-02 at 3 42 58 PM" src="https://user-images.githubusercontent.com/7931810/144380525-8ac69dde-1038-4b53-b831-99566d7c474b.png">
| open | 2021-12-02T08:03:15Z | 2022-08-10T08:29:09Z | https://github.com/Layout-Parser/layout-parser/issues/103 | [
"bug"
] | ttbuffey | 2 |
encode/databases | sqlalchemy | 504 | Question: how to set a custom json_serializer? | Question: how to set a custom json_serializer? I have to store a datetime data in JSONB column, so I have to override json_serializer to take care of it. Is there any way? thanks | open | 2022-08-04T21:26:33Z | 2023-03-10T15:55:48Z | https://github.com/encode/databases/issues/504 | [] | kamikaze | 14 |
piccolo-orm/piccolo | fastapi | 418 | piccolo migrations new my_app doesn't create new blank migration | `piccolo migrations new my_app` doesn't create a new migration if there are no table changes since the last migration. This makes it difficult to create `raw` migrations.
```console
❯ piccolo migrations new my_app
Creating new migration ...
Created tables 0
Dropped tables 0
Renamed tables 0
Created table columns 0
Dropped columns 0
Columns added to existing tables 0
Renamed columns 0
Altered columns 0
No changes detected - exiting.
``` | closed | 2022-02-03T01:08:05Z | 2022-04-15T07:21:51Z | https://github.com/piccolo-orm/piccolo/issues/418 | [
"bug"
] | theelderbeever | 7 |
pinry/pinry | django | 309 | Non-Docker (LXC container) install documentation? | For people who use LXC containers, do we have non-Docker installation documentation? | open | 2021-12-10T21:20:35Z | 2022-02-22T15:34:44Z | https://github.com/pinry/pinry/issues/309 | [] | ithakaa | 1
rthalley/dnspython | asyncio | 881 | BUG - DNS queries for a SOA record fails on subdomains | # Description
- DNS queries for a `SOA` record fail when dealing with subdomains.
- Note I tried this in python3.10 as well as python3.8 and experienced the same error in both.
# To Reproduce
1. Perform `nslookup` requests for a SOA record on a subdomain and observe the behavior
```
$ nslookup -query=SOA manpages.debian.org 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
manpages.debian.org canonical name = static.debian.org.
Authoritative answers can be found from:
debian.org
origin = denis.debian.org
mail addr = hostmaster.debian.org
serial = 2023010950
refresh = 1800
retry = 600
expire = 1814400
minimum = 600
```
```
$ nslookup -query=SOA unit42.paloaltonetworks.com 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
unit42.paloaltonetworks.com canonical name = unit42.paloaltonetworks.com.edgekey.net.
unit42.paloaltonetworks.com.edgekey.net canonical name = e13616.a.akamaiedge.net.
Authoritative answers can be found from:
a.akamaiedge.net
origin = n0a.akamaiedge.net
mail addr = hostmaster.akamai.com
serial = 1673312186
refresh = 1000
retry = 1000
expire = 1000
minimum = 1800
```
2. Try and perform the same requests using dns python from the python interactive prompt:
```
Python 3.10.7 (main, Sep 14 2022, 22:38:23) [Clang 14.0.0 (clang-1400.0.29.102)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns.resolver
>>> my_resolver = dns.resolver.Resolver()
>>> my_resolver.nameservers
['10.0.0.1']
>>> # set nameserver to be the same as used for nslookup
>>> my_resolver.nameservers = ['8.8.8.8']
>>> my_resolver.nameservers
['8.8.8.8']
>>> # using deprecated query()
>>> my_resolver.query('manpages.debian.org', 'SOA').response.to_text()
<stdin>:1: DeprecationWarning: please use dns.resolver.Resolver.resolve() instead
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 1110, in query
return self.resolve(qname, rdtype, rdclass, tcp, source,
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 1090, in resolve
(answer, done) = resolution.query_result(response, None)
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 696, in query_result
raise NoAnswer(response=answer.response)
dns.resolver.NoAnswer: The DNS response does not contain an answer to the question: manpages.debian.org. IN SOA
>>> # using resolve()
>>> my_resolver.resolve('manpages.debian.org', 'SOA')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 1090, in resolve
(answer, done) = resolution.query_result(response, None)
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 696, in query_result
raise NoAnswer(response=answer.response)
dns.resolver.NoAnswer: The DNS response does not contain an answer to the question: manpages.debian.org. IN SOA
```
```
>>> # Trying other host
>>> # using deprecated query()
>>> my_resolver.query('unit42.paloaltonetworks.com', 'SOA').response.to_text()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 1110, in query
return self.resolve(qname, rdtype, rdclass, tcp, source,
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 1090, in resolve
(answer, done) = resolution.query_result(response, None)
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 696, in query_result
raise NoAnswer(response=answer.response)
dns.resolver.NoAnswer: The DNS response does not contain an answer to the question: unit42.paloaltonetworks.com. IN SOA
>>> # using resolve()
>>> my_resolver.resolve('unit42.paloaltonetworks.com', 'SOA').response.to_text()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 1090, in resolve
(answer, done) = resolution.query_result(response, None)
File "/opt/homebrew/lib/python3.10/site-packages/dns/resolver.py", line 696, in query_result
raise NoAnswer(response=answer.response)
dns.resolver.NoAnswer: The DNS response does not contain an answer to the question: unit42.paloaltonetworks.com. IN SOA
```
- Showing that SOA records word on parent domains with no problems.
```
>>> # requesting a SOA record on parent domains
>>> my_resolver.query('debian.org', 'SOA').response.to_text()
'id 28714\nopcode QUERY\nrcode NOERROR\nflags QR RD RA\n;QUESTION\ndebian.org. IN SOA\n;ANSWER\ndebian.org. 3600 IN SOA denis.debian.org. hostmaster.debian.org. 2023011003 1800 600 1814400 600\n;AUTHORITY\n;ADDITIONAL'
>>> my_resolver.resolve('paloaltonetworks.com', 'SOA').response.to_text()
'id 31423\nopcode QUERY\nrcode NOERROR\nflags QR RD RA\n;QUESTION\npaloaltonetworks.com. IN SOA\n;ANSWER\npaloaltonetworks.com. 14400 IN SOA ns1.p23.dynect.net. domains.paloaltonetworks.com. 1672823844 3600 600 604800 3600\n;AUTHORITY\n;ADDITIONAL'
```
# Context
- dnspython version 2.2.1
- Tested with Python versions 3.8.13 and 3.10.7
- Tested with macOS Monterey 12.6 and Linux Debian 10
| closed | 2023-01-10T02:26:49Z | 2023-01-10T14:23:29Z | https://github.com/rthalley/dnspython/issues/881 | [] | 0x303 | 1 |
pyro-ppl/numpyro | numpy | 1,872 | Support constraints.cat and CatTransform | Hello!
I have a custom multi-dimensional distribution where the support may be truncated along some dimensions. In terms of constraints, some dimensions will either be `real`, `greater_than`, `less_than`, or `interval`. I naively was then implementing the `support` as, e.g.:
```python
ivl = constraints.interval([0., -jnp.inf, 5.], [jnp.inf, 0., 10.])
```
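For concreteness, producing a feasible point inside such mixed bounds requires inf-aware logic along these lines (a plain-Python sketch for illustration, not numpyro code):

```python
import math

def inf_safe_feasible(lower, upper):
    """Pick a finite point inside [lower, upper], tolerating infinite
    bounds -- the case the current `feasible_like()` mishandles."""
    if math.isfinite(lower) and math.isfinite(upper):
        return (lower + upper) / 2.0
    if math.isfinite(lower):   # (lower, +inf): greater_than-like
        return lower + 1.0
    if math.isfinite(upper):   # (-inf, upper): less_than-like
        return upper - 1.0
    return 0.0                 # (-inf, +inf): unconstrained

lowers = [0.0, float("-inf"), 5.0]
uppers = [float("inf"), 0.0, 10.0]
print([inf_safe_feasible(lo, hi) for lo, hi in zip(lowers, uppers)])
# [1.0, -1.0, 7.5]
```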
Right now, this is not really supported by the `numpyro.distributions.constraints.Interval` class because of how [`feasible_like()`](https://github.com/pyro-ppl/numpyro/blob/master/numpyro/distributions/constraints.py#L514C5-L517C10) works, or how the `scale` is computed in the [unconstrained transform](https://github.com/pyro-ppl/numpyro/blob/master/numpyro/distributions/transforms.py#L1604). Would you be open to making these things inf-safe? So far I instead implemented a custom subclass `InfSafeInterval(constraints._Interval)` to support this, but thought I would check in on this. Thanks! | open | 2024-09-30T16:38:00Z | 2024-11-03T13:03:30Z | https://github.com/pyro-ppl/numpyro/issues/1872 | [
"enhancement",
"good first issue"
] | adrn | 4 |
HIT-SCIR/ltp | nlp | 448 | tokenized.encodings is None when doing word segmentation with ltp.seg | closed | 2020-12-03T09:12:50Z | 2020-12-17T04:04:46Z | https://github.com/HIT-SCIR/ltp/issues/448 | [] | easonforai | 2
vitalik/django-ninja | rest-api | 1,073 | Add support for different content type responses (e.g. application/octet-stream) | I have been creating a ninja API for my web app and have found the process very smooth, and have been enjoying the open API auto documentation, which I rely on in my front-end. I have encountered one problem in dealing with a file download endpoint.
The response should be easily specifiable under the openapi specs as
```
content:
application/octet-stream:
schema:
type: string
format: binary
```
however I've found no easy way to implement this within django-ninja.
I've had a look through the code and I think I've found where a change could be made
```python
if model not in [None, NOT_SET]:
# ::TODO:: test this: by_alias == True
schema = self._create_schema_from_model(
model, by_alias=operation.by_alias
)[0]
details[status]["content"] = {
self.api.renderer.media_type: {"schema": schema}
}
```
if the `schema` had an `__ninja_override_media_type__` attribute, this could be used to provide a custom media type for a response.
If you want me to have a stab at writing a PR for this, let me know.
| open | 2024-02-06T10:17:51Z | 2024-02-06T13:08:07Z | https://github.com/vitalik/django-ninja/issues/1073 | [] | LevonW-IIS | 2 |
waditu/tushare | pandas | 1,088 | Is there an asynchronous I/O version that uses AsyncIO? | More and more services now use AsyncIO; if the tushare API is called inside such services, concurrency performance is severely affected. | open | 2019-07-09T08:21:57Z | 2019-07-09T08:21:57Z | https://github.com/waditu/tushare/issues/1088 | [] | jaggerwang | 0
Skyvern-AI/skyvern | automation | 1,539 | How to fix these errors? | Could anyone please help with:
How to fix these errors:
" File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py", line 896, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/create.py", line 643, in connect
return dialect.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 621, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psycopg/connection.py", line 748, in connect
raise last_ex.with_traceback(None)
sqlalchemy.exc.OperationalError: (psycopg.OperationalError) connection failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
(Background on this error at: https://sqlalche.me/e/20/e3q8)
"
when building the image locally and running it?
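`Connection refused` means nothing is listening at the host/port the app resolved for Postgres -- typically the database container isn't up, or the hostname in the connection string is wrong for your network. A stdlib-only reachability check (host and port here are assumptions to adjust):

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. in a docker-compose setup the service name is usually the host:
#     can_connect("postgres", 5432)
print(can_connect("127.0.0.1", 5432))
```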
Thanks a lot. | open | 2025-01-12T09:00:01Z | 2025-01-14T02:11:45Z | https://github.com/Skyvern-AI/skyvern/issues/1539 | [] | computer2s | 3
modelscope/data-juicer | streamlit | 198 | [MM enhancement] The saving of the generated meta-data for multi-modal | 1. A global directory should be specified to store the intermediate data generated by multimodal processing, with one subdirectory per op holding the data that op produces. Currently the data is stored alongside the source data path, polluting the source data.
2. Extra generated data, such as images, should use a hash to derive the file name, to avoid overwriting and redundant computation.
3. The ops involved include image_blur_mapper and image_diffusion_mapper.
| closed | 2024-01-26T04:59:29Z | 2024-05-02T09:31:55Z | https://github.com/modelscope/data-juicer/issues/198 | [
"enhancement",
"stale-issue"
] | BeachWang | 6 |
LibreTranslate/LibreTranslate | api | 370 | Error while translating | When I try to translate something, it always throws this error:
```
Running on http://0.0.0.0:5000
/home/vaggos/.local/lib/python3.9/site-packages/torch/serialization.py:953: UserWarning: Failed to initialize NumPy: module compiled against API version 0xf but this version of numpy is 0xd (Triggered internally at /root/pytorch/torch/csrc/utils/tensor_numpy.cpp:77.)
obj = cast(Storage, torch.UntypedStorage(nbytes))
``` | closed | 2022-12-28T09:52:30Z | 2022-12-31T18:51:19Z | https://github.com/LibreTranslate/LibreTranslate/issues/370 | [
"possible bug"
] | vaggos-thanos | 2 |
thtrieu/darkflow | tensorflow | 933 | Where can I find the weights? | closed | 2018-11-14T08:47:37Z | 2018-11-16T12:23:03Z | https://github.com/thtrieu/darkflow/issues/933 | [] | padovanl | 0
SciTools/cartopy | matplotlib | 1,651 | pcolormesh fails with `gouraud` shading | This refers to a question I posted on Stackoverflow https://stackoverflow.com/questions/63776199/cartopy-slow-rendering-with-non-orthographic-projection
When using a `100x100` array (or any size) and using `pcolormesh`, adding the `shading='gouraud'` argument fails but using `'flat'` is fine.
By not specifying the `shading` argument, the rendering is super slow compared to using an Orthographic projection. It seems the `C` array in `geoaxes.py` is not well defined for the `gouraud` shading?
#### Code to reproduce
```python
import numpy as np
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
import time
# Data
# Notice: we can use either phi ∈ [-180, 180] OR phi ∈ [0, 360]
phi = np.linspace(0, 2 * np.pi, 100)
lat = np.linspace(-np.pi / 2, np.pi / 2, 100)
# NOTICE: that PI is defined in the -z direction
theta = (lat + np.pi / 2)[::-1]
data = np.zeros((len(theta), len(phi)), dtype=np.float64)
for j, Th in enumerate(theta):
for i, Ph in enumerate(phi):
data[j, i] = Ph # plot by longitude
# Plot
t = time.time()
# Set up figure
fig = plt.figure(figsize=(8, 4))
ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine())
vlim = np.max(np.abs(data))
p = ax.pcolormesh(phi * 180 / np.pi, lat * 180 / np.pi,
data,
transform=ccrs.PlateCarree(),
cmap='RdBu',vmin=0, vmax=vlim)
ax.autoscale_view()
gl = ax.gridlines(draw_labels=False)
plt.colorbar(p)
plt.show()
print(time.time() - t)
```
#### Traceback
```python
-----------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-b199a9d7c32c> in <module>
2 ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine())
3 vlim = np.max(np.abs(data))
----> 4 p = ax.pcolormesh(phi * 180 / np.pi, lat * 180 / np.pi,
5 data,
6 transform=ccrs.PlateCarree(),
~/miniconda3/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py in wrapper(self, *args, **kwargs)
308
309 kwargs['transform'] = transform
--> 310 return func(self, *args, **kwargs)
311 return wrapper
312
~/miniconda3/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py in pcolormesh(self, *args, **kwargs)
1559
1560 """
-> 1561 result = self._pcolormesh_patched(*args, **kwargs)
1562 self.autoscale_view()
1563 return result
~/miniconda3/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py in _pcolormesh_patched(self, *args, **kwargs)
1672 isinstance(self.projection, wrap_proj_types):
1673
-> 1674 C = C.reshape((Ny - 1, Nx - 1))
1675 transformed_pts = transformed_pts.reshape((Ny, Nx, 2))
1676
~/miniconda3/lib/python3.8/site-packages/numpy/ma/core.py in reshape(self, *s, **kwargs)
4650 """
4651 kwargs.update(order=kwargs.get('order', 'C'))
-> 4652 result = self._data.reshape(*s, **kwargs).view(type(self))
4653 result._update_from(self)
4654 mask = self._mask
ValueError: cannot reshape array of size 10000 into shape (99,99)
```
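The `ValueError` itself is just a count mismatch. My reading of the traceback (an assumption, not verified against cartopy's source): `_pcolormesh_patched` always reshapes `C` to the flat-shading layout `(Ny - 1, Nx - 1)`, while `gouraud` shading keeps one value per vertex, i.e. `(Ny, Nx)`:

```python
# Sketch only: the arithmetic behind "cannot reshape array of size 10000
# into shape (99,99)".
Ny, Nx = 100, 100                   # grid passed to pcolormesh
flat_cells = (Ny - 1) * (Nx - 1)    # C layout assumed by the patched code (flat shading)
gouraud_points = Ny * Nx            # C layout kept by gouraud shading (per-vertex)
print(flat_cells, gouraud_points)   # 9801 10000
assert gouraud_points != flat_cells
```

So with a 100x100 grid, the 10000 per-vertex values can never fill a (99, 99) cell array.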
<details>
<summary>Full environment definition</summary>
### Operating system
Linux. openSUSE Tumbleweed 20200829
### Cartopy version
0.18
### conda list
```
# packages in environment at /home/david/miniconda3:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
argon2-cffi 20.1.0 pypi_0 pypi
astroid 2.4.2 pypi_0 pypi
attrs 19.3.0 pypi_0 pypi
backcall 0.2.0 pypi_0 pypi
beautifulsoup4 4.9.1 pypi_0 pypi
bleach 3.1.5 pypi_0 pypi
ca-certificates 2020.7.22 0
cartopy 0.18.0 pypi_0 pypi
certifi 2020.6.20 pypi_0 pypi
cffi 1.14.0 py38he30daa8_1
chardet 3.0.4 py38_1003
conda 4.8.4 py38_0
conda-package-handling 1.6.1 py38h7b6447c_0
cryptography 2.9.2 py38h1ba5d50_0
cycler 0.10.0 pypi_0 pypi
cython 0.29.21 pypi_0 pypi
decorator 4.4.2 pypi_0 pypi
defusedxml 0.6.0 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
flake8 3.8.3 pypi_0 pypi
geos 3.8.1 he6710b0_0
greenlet 0.4.16 pypi_0 pypi
icu 58.2 he6710b0_3
idna 2.9 py_1
iniconfig 1.0.1 pypi_0 pypi
ipykernel 5.3.4 pypi_0 pypi
ipython 7.17.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 7.5.1 pypi_0 pypi
isort 4.3.21 pypi_0 pypi
jedi 0.17.2 pypi_0 pypi
jinja2 2.11.2 pypi_0 pypi
json5 0.9.5 pypi_0 pypi
jsonschema 3.2.0 pypi_0 pypi
jupyter 1.0.0 pypi_0 pypi
jupyter-client 6.1.6 pypi_0 pypi
jupyter-console 6.1.0 pypi_0 pypi
jupyter-core 4.6.3 pypi_0 pypi
jupyterlab 2.2.5 pypi_0 pypi
jupyterlab-server 1.2.0 pypi_0 pypi
jupytext 1.6.0 pypi_0 pypi
kiwisolver 1.2.0 pypi_0 pypi
lazy-object-proxy 1.4.3 pypi_0 pypi
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20181209 hc058e9b_0
libffi 3.3 he6710b0_1
libgcc-ng 9.1.0 hdf63c60_0
libstdcxx-ng 9.1.0 hdf63c60_0
libxml2 2.9.10 he19cac6_1
libxslt 1.1.34 hc22bd24_0
lxml 4.5.2 py38hefd8a0e_0
markdown-it-py 0.5.3 pypi_0 pypi
markupsafe 1.1.1 pypi_0 pypi
matplotlib 3.3.1 pypi_0 pypi
mccabe 0.6.1 pypi_0 pypi
mistune 0.8.4 pypi_0 pypi
more-itertools 8.5.0 pypi_0 pypi
msgpack 1.0.0 pypi_0 pypi
multipole-inversion 0.1 pypi_0 pypi
mypy 0.782 pypi_0 pypi
mypy-extensions 0.4.3 pypi_0 pypi
nbconvert 5.6.1 pypi_0 pypi
nbformat 5.0.7 pypi_0 pypi
ncurses 6.2 he6710b0_1
neovim 0.3.1 pypi_0 pypi
notebook 6.1.3 pypi_0 pypi
numpy 1.19.1 pypi_0 pypi
openssl 1.1.1g h7b6447c_0
packaging 20.4 pypi_0 pypi
pandocfilters 1.4.2 pypi_0 pypi
parso 0.7.1 pypi_0 pypi
pathlib 1.0.1 pypi_0 pypi
pep8 1.7.1 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pillow 7.2.0 pypi_0 pypi
pip 20.0.2 py38_3
pluggy 0.13.1 pypi_0 pypi
proj 6.2.1 haa6030c_0
prometheus-client 0.8.0 pypi_0 pypi
prompt-toolkit 3.0.6 pypi_0 pypi
psutil 5.7.2 pypi_0 pypi
ptyprocess 0.6.0 pypi_0 pypi
py 1.9.0 pypi_0 pypi
pycodestyle 2.6.0 pypi_0 pypi
pycosat 0.6.3 py38h7b6447c_1
pycparser 2.20 py_0
pyflakes 2.2.0 pypi_0 pypi
pygments 2.6.1 pypi_0 pypi
pylint 2.5.3 pypi_0 pypi
pynvim 0.4.1 pypi_0 pypi
pyopenssl 19.1.0 py38_0
pyparsing 2.4.7 pypi_0 pypi
pyrsistent 0.16.0 pypi_0 pypi
pyshp 2.1.0 pypi_0 pypi
pysocks 1.7.1 py38_0
pytest 6.0.1 pypi_0 pypi
python 3.8.3 hcff3b4d_0
python-dateutil 2.8.1 pypi_0 pypi
pyvtk 0.5.18 pypi_0 pypi
pyyaml 5.3.1 pypi_0 pypi
pyzmq 19.0.2 pypi_0 pypi
qtconsole 4.7.6 pypi_0 pypi
qtpy 1.9.0 pypi_0 pypi
readline 8.0 h7b6447c_0
requests 2.23.0 py38_0
ruamel_yaml 0.15.87 py38h7b6447c_0
scipy 1.5.2 pypi_0 pypi
send2trash 1.5.0 pypi_0 pypi
setuptools 46.4.0 py38_0
shapely 1.8.dev0 pypi_0 pypi
six 1.14.0 py38_0
soupsieve 2.0.1 pypi_0 pypi
sqlite 3.31.1 h62c20be_1
terminado 0.8.3 pypi_0 pypi
testpath 0.4.4 pypi_0 pypi
tk 8.6.8 hbc83047_0
toml 0.10.1 pypi_0 pypi
tornado 6.0.4 pypi_0 pypi
tqdm 4.46.0 py_0
traitlets 4.3.3 pypi_0 pypi
typed-ast 1.4.1 pypi_0 pypi
typing-extensions 3.7.4.2 pypi_0 pypi
urllib3 1.25.8 py38_0
wcwidth 0.2.5 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
wheel 0.34.2 py38_0
widgetsnbextension 3.5.1 pypi_0 pypi
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h7b6447c_0
yaml 0.1.7 had09818_2
zlib 1.2.11 h7b6447c_3
```
### pip list
```
Package Version
---------------------- -------------------
argon2-cffi 20.1.0
astroid 2.4.2
attrs 19.3.0
backcall 0.2.0
beautifulsoup4 4.9.1
bleach 3.1.5
Cartopy 0.18.0
certifi 2020.6.20
cffi 1.14.0
chardet 3.0.4
conda 4.8.4
conda-package-handling 1.7.0
cryptography 2.9.2
cycler 0.10.0
Cython 0.29.21
decorator 4.4.2
defusedxml 0.6.0
entrypoints 0.3
flake8 3.8.3
greenlet 0.4.16
idna 2.9
iniconfig 1.0.1
ipykernel 5.3.4
ipython 7.17.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
isort 4.3.21
jedi 0.17.2
Jinja2 2.11.2
json5 0.9.5
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.6
jupyter-console 6.1.0
jupyter-core 4.6.3
jupyterlab 2.2.5
jupyterlab-server 1.2.0
jupytext 1.6.0
kiwisolver 1.2.0
lazy-object-proxy 1.4.3
lxml 4.5.2
markdown-it-py 0.5.3
MarkupSafe 1.1.1
matplotlib 3.3.1
mccabe 0.6.1
mistune 0.8.4
more-itertools 8.5.0
msgpack 1.0.0
multipole-inversion 0.1
mypy 0.782
mypy-extensions 0.4.3
nbconvert 5.6.1
nbformat 5.0.7
neovim 0.3.1
notebook 6.1.3
numpy 1.19.1
packaging 20.4
pandocfilters 1.4.2
parso 0.7.1
pathlib 1.0.1
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.2.0
pip 20.0.2
pluggy 0.13.1
prometheus-client 0.8.0
prompt-toolkit 3.0.6
psutil 5.7.2
ptyprocess 0.6.0
py 1.9.0
pycodestyle 2.6.0
pycosat 0.6.3
pycparser 2.20
pyflakes 2.2.0
Pygments 2.6.1
pylint 2.5.3
pynvim 0.4.1
pyOpenSSL 19.1.0
pyparsing 2.4.7
pyrsistent 0.16.0
pyshp 2.1.0
PySocks 1.7.1
pytest 6.0.1
python-dateutil 2.8.1
PyVTK 0.5.18
PyYAML 5.3.1
pyzmq 19.0.2
qtconsole 4.7.6
QtPy 1.9.0
requests 2.23.0
ruamel-yaml 0.15.87
scipy 1.5.2
Send2Trash 1.5.0
setuptools 46.4.0.post20200518
Shapely 1.8.dev0
six 1.14.0
soupsieve 2.0.1
terminado 0.8.3
testpath 0.4.4
toml 0.10.1
tornado 6.0.4
tqdm 4.46.0
traitlets 4.3.3
typed-ast 1.4.1
typing-extensions 3.7.4.2
urllib3 1.25.8
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.34.2
widgetsnbextension 3.5.1
wrapt 1.12.1
```
</details>
| closed | 2020-09-08T08:41:28Z | 2024-02-21T12:22:51Z | https://github.com/SciTools/cartopy/issues/1651 | [] | davidcortesortuno | 3 |
jonaswinkler/paperless-ng | django | 224 | Reset tag search after tag selected |
Hello, I noticed a small "fluidity" problem when searching for tags in the drop-down list while editing a document: once we have selected a tag from a search, the text that allowed us to find it is not cleared. If we wish to add another one, we must first delete the leftovers of our previous search.
"fixed in next release"
] | Philmo67 | 0 |
vitalik/django-ninja | rest-api | 864 | ModelSchema does not support reverse relations | macOS Venture 13.6
Python 3.11.4
Django 4.2.2
django-ninja 0.22.2
pydantic 1.10.13
Consider the following object relations and their corresponding schemas. `PrimaryObject` has a one-to-many relationship with `RelatedObject`, but the relation is defined on `RelatedObject`. I want to serialize a `PrimaryObject` and include all of its `RelatedObject` children in the representation.
```python
import typing
from django.db import models
from ninja import ModelSchema
class PrimaryObject(models.Model):
    pass

class RelatedObject(models.Model):
    primary_object = models.ForeignKey(PrimaryObject, related_name='relatedobjects', on_delete=models.CASCADE)

class RelatedObjectSchema(ModelSchema):
    class Config:
        model = RelatedObject
        model_fields = ['id']

class PrimaryObjectSchema(ModelSchema):
    relatedobjects: typing.List[RelatedObjectSchema]

    class Config:
        model = PrimaryObject
        model_fields = ['id', 'relatedobjects']
```
Attempting `manage.py runserver` with the above configuration produces
```
ninja.errors.ConfigError: Field(s) {'relatedobjects'} are not in model <class 'myapp.models.PrimaryObject'>
```
This is because they are excluded from the list of available fields in `ninja.factory.SchemaFactory._model_fields`.
This is a relationship that is supported by the Django ORM, and it should "just work" with any libraries that claim to support the Django ORM. `djangorestframework` and `djantic`, for example, both support this use case without any fuss.
I can understand why it might be undesirable to exclude ORM-generated relationships by default, for example when specifying `fields = '__all__'`, as it could lead to an unintended explosion of database joins, but they should be treated as valid if a developer positively includes them in a schema.
Additionally, this relationship is supported by `ninja.Schema` just fine. Serializing a `PrimaryObject` with the following schema produces the expected output, which means that using `ninja.ModelSchema` results in a loss of important functionality.
```python
from ninja import Schema
class RelatedObjectSchema(Schema):
    id: int

class PrimaryObjectSchema(Schema):
    id: int
    relatedobjects: typing.List[RelatedObjectSchema]
``` | closed | 2023-09-28T03:52:51Z | 2023-09-28T04:44:51Z | https://github.com/vitalik/django-ninja/issues/864 | [] | zbmott | 2 |
HIT-SCIR/ltp | nlp | 375 | Word segmentation seems rather slow? | (1) Word segmentation seems rather slow. After trying it out, I found that running:
```
segment, hidden = ltp.seg(["我的句子"])
```
Whether using the small model or the tiny model, execution takes about 0.2 s. This is much slower than pyltp; is there any way to speed it up? Thanks!
(2) Also, is processing multiple sentences at once currently unsupported? For example:
```
segment, hidden = ltp.seg(["我的句子", "今天天气很的很不错,带你去爬山"])
```
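(Aside: a plain-Python sketch of what I assume is going wrong here; the two sentences tokenize to different lengths, so they cannot be stacked into one rectangular tensor unless the tokenizer pads them, e.g. with `padding=True`.)

```python
# Toy token-id sequences of unequal length (stand-ins for the two sentences).
batch = [[101, 2769, 102], [101, 791, 1921, 1921, 102]]
assert len({len(seq) for seq in batch}) > 1   # ragged -> no fixed-shape tensor

# Padding every sequence to the maximum length restores a rectangular batch,
# which is what padding=True would ask the tokenizer to do.
max_len = max(len(seq) for seq in batch)
padded = [seq + [0] * (max_len - len(seq)) for seq in batch]
assert all(len(seq) == max_len for seq in padded)
```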
The call above raises the following error:
```
File "/opt/conda/lib/python3.7/site-packages/ltp/ltp.py", line 38, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/ltp/ltp.py", line 138, in seg
tokenizerd = self.tokenizer.batch_encode_plus(inputs, return_tensors='pt')
File "/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1831, in batch_encode_plus
**kwargs,
File "/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 378, in _batch_encode_plus
return BatchEncoding(sanitized, encodings, tensor_type=return_tensors)
File "/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 159, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 515, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
``` | closed | 2020-07-01T08:08:52Z | 2022-04-29T07:55:36Z | https://github.com/HIT-SCIR/ltp/issues/375 | [] | MrRace | 13 |
Evil0ctal/Douyin_TikTok_Download_API | api | 226 | How to switch to v2, and would you consider adding a feature to automatically fetch the latest videos? | I have already deployed it; right now there is only single-URL parsing. I don't quite understand the paid API: after purchasing it, how do I swap it in? I used the one-click deployment on Linux; could you give me some brief guidance?
Also, would you consider adding scheduled, automatic fetching of a given user's latest videos? I currently use a Weibo crawler that runs on a schedule and skips results it has already recorded; I feel this feature would be very useful.
"enhancement"
] | AIEOV | 3 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,795 | [Bug]: Automatic1111 works extremely slow if Silly Tavern is also running at the same time | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I just installed Automatic1111, and it runs smoothly, and it stays that way if I run the Oobabooga text UI at the same time.
However, if I run Silly Tavern at the same time, the time to generate a single image goes from 10 seconds to 10-15 minutes.
I had to alter the COMMANDLINE_ARGS setting in the 'webui-user.bat' file because Auto1111's API needs to be enabled to be accessed by Silly Tavern, and because Oobabooga also uses port 7860, so I had to change Forge's port to a random one; I selected 7862 for no particular reason: set COMMANDLINE_ARGS= --api --port 7862
Edit: It seems that it also gets extremely slow when Oobabooga is running, despite Silly Tavern not running....
### Steps to reproduce the problem
1- Run Auto1111
2- Generate an image directly through Auto1111 in seconds
3- Run Silly Taverns
4- DON'T connect Silly Tavern and Auto1111 via http://localhost:7860/
5- Generate a new image directly through Auto1111, without altering any settings, in seconds
6- CONNECT Silly Tavern and Auto1111 via http://localhost:7860/
7- Generate a new image directly through Auto1111, without altering any settings, takes 10-15 minutes
### What should have happened?
I assume that Image generation should have kept almost the same time, maybe a few seconds slower, but not 10-15 minutes for a single image, but it seems that something is wrong with the local connection between ST and Forge.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-05-15-04-20.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15316659/sysinfo-2024-05-15-04-20.json)
### Console logs
```Shell
venv "D:\app\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Launching Web UI with arguments: --skip-torch-cuda-test --no-half-vae --listen --port=7860 --api --cors-allow-origins null --cuda-stream --cuda-malloc --pin-shared-memory
Using cudaMallocAsync backend.
Total VRAM 12282 MB, total RAM 31898 MB
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated: True
Using pytorch cross attention
ControlNet preprocessor location: D:\app\stable-diffusion-webui-forge\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.4.2, num models: 12
Loading weights [529c72f6c3] from D:\app\stable-diffusion-webui-forge\models\Stable-diffusion\mfcgPDXL_v10.safetensors
2024-05-15 02:24:41,233 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://0.0.0.0:7860
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Loading VAE weights specified in settings: D:\app\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 11081.996185302734
[Memory Management] Model Memory (MB) = 2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 7913.641487121582
Moving model(s) has taken 0.76 seconds
Model loaded in 4.0s (load weights from disk: 0.6s, forge load real models: 2.3s, calculate empty prompt: 0.9s).
To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 13.7s (prepare environment: 1.3s, import torch: 2.7s, import gradio: 0.5s, setup paths: 0.6s, other imports: 0.4s, load scripts: 3.1s, create ui: 0.4s, gradio launch: 4.3s, add APIs: 0.3s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 9236.397184371948
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 3315.3106899261475
Moving model(s) has taken 2.47 seconds
87%|███████████████████████████████████████████████████████████████████████ | 13/15 [03:51<00:27, 13.67s/it]
Total progress: 87%|█████████████████████████████████████████████████████████▏ | 13/15 [02:41<00:26, 13.35s/it]
```
### Additional information
Same problem happening with Automatic 1111 and Forge UI. | open | 2024-05-15T05:32:07Z | 2024-05-15T07:44:02Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15795 | [
"bug-report"
] | guispfilho | 0 |
aio-libs/aiopg | sqlalchemy | 123 | aiopg.sa.Engine doesn't implement sqlalchemy.engine.base.Engine | Hi,
It appears to me that the `Engine` class is far from implementing the current interface of `sqlalchemy.engine.base.Engine`. Same is true for `SAConnection`. This causes many duck-typed SQLAlchemy functions to fail.
For example:
``` py
import asyncio
from sqlalchemy import Table, MetaData, Column, Integer
from aiopg.sa import create_engine
metadata = MetaData()
Test = Table("Test", metadata,
             Column("test", Integer)
)

async def run():
    engine = create_engine(
        database="postgres",
        user="postgres",
        password="- snip -",
        host="localhost",
        port="5432"
    )
    metadata.create_all(engine)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run())
```
results in
``` py
Traceback (most recent call last):
File "test.py", line 24, in <module>
loop.run_until_complete(run())
File "C:\tools\python\lib\asyncio\base_events.py", line 337, in run_until_complete
return future.result()
File "C:\tools\python\lib\asyncio\futures.py", line 274, in result
raise self._exception
File "C:\tools\python\lib\asyncio\tasks.py", line 239, in _step
result = coro.send(None)
File "test.py", line 20, in run
metadata.create_all(engine)
File "C:\tools\python\lib\site-packages\sqlalchemy\sql\schema.py", line 3742, in create_all
bind._run_visitor(ddl.SchemaGenerator,
AttributeError: '_PoolContextManager' object has no attribute '_run_visitor'
```
due to the `_run_visitor` method missing in `Engine`. Why are these classes not subclassing SQLAlchemy's `Engine` or `Connectable`?
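To see the duck-typing failure in isolation, here is a stdlib-only sketch (the class names are invented stand-ins; the behavior is inferred from the traceback): `create_all()` simply calls a method on whatever bind it receives, so an engine-like object lacking that method fails at call time.

```python
class SyncEngine:
    """Stand-in for sqlalchemy.engine.base.Engine."""
    def _run_visitor(self, visitor, element):
        return f"ran {visitor} on {element}"

class PoolContextManager:
    """Stand-in for aiopg's _PoolContextManager: no _run_visitor."""

def create_all(bind):
    # Mimics MetaData.create_all(): a duck-typed call on the given bind.
    return bind._run_visitor("SchemaGenerator", "metadata")

assert create_all(SyncEngine()) == "ran SchemaGenerator on metadata"
try:
    create_all(PoolContextManager())
except AttributeError as exc:
    assert "_run_visitor" in str(exc)
```

This mirrors the `AttributeError` above, and is why subclassing (or at least implementing) `Connectable` would make such duck-typed entry points work.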
| closed | 2016-07-13T21:57:38Z | 2016-07-18T22:22:21Z | https://github.com/aio-libs/aiopg/issues/123 | [] | nucular | 3 |
unit8co/darts | data-science | 2,550 | Usage of plot_residuals_analysis function | 
Based on the description in the github repo above, what would happen if there are missing timestamps in the timeseries of interest? For example, in my specific use case, certain timestamps are not considered during the metrics calculation step due to use-case specific reasons. Hence, the rows are removed from my dataframe. When passing that dataframe to the function, e.g.
`fig = plot_residuals_analysis(TimeSeries.from_dataframe(validation_df[['mean_error']], fill_missing_dates=False, freq='15min'))`, results are still returned without any issues.
However, the description of the function states that the plots might be displayed incorrectly if there are NaNs.

| closed | 2024-09-30T17:32:50Z | 2024-10-04T11:15:53Z | https://github.com/unit8co/darts/issues/2550 | [
"question"
] | ETTAN93 | 2 |
polakowo/vectorbt | data-visualization | 688 | portfolio stats calculation for DCA strategies: not all buy orders appear, only the first | A DCA (dollar-cost averaging) strategy makes several buy orders and later closes the trade when it reaches a profit relative to the last average price. What I see is that only the first buy and the final close are registered in the orders, not the other entries between the first buy and the last exit.
I will explain with a simple example:
In this code I have 3 orders: 2 entries (buy_dates = ['2017-11-09', '2017-11-12']) and 1 exit (sell_date = '2017-11-14'); however, portfolio.orders.records_readable only shows the first entry and the exit:
```python
import pandas as pd
import numpy as np
import vectorbt as vbt

# Download the data
eth_price = vbt.YFData.download('ETH-USD').get('Close')

# Create entry and exit signals
entries = pd.Series(False, index=eth_price.index)
exits = pd.Series(False, index=eth_price.index)

# Set the buy and sell dates
buy_dates = ['2017-11-09', '2017-11-12']
sell_date = '2017-11-14'

# Assign the buy and sell signals
entries[buy_dates] = True
exits[sell_date] = True

# Create the portfolio
portfolio = vbt.Portfolio.from_signals(eth_price, entries, exits, freq='D')
print(eth_price[:6])

# Print the trades
trades = portfolio.trades
print(trades.records_readable)

# Print the portfolio statistics
stats = portfolio.stats()
print(stats)
print(portfolio.orders.records_readable)
```
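Before the output, a stdlib-only sketch of what I suspect is happening (an assumption on my part: `from_signals` ignores a new entry signal while a position is already open unless accumulation is enabled, e.g. via an `accumulate=True` argument):

```python
def simulate(sig_entries, sig_exits, accumulate=False):
    """Toy order engine mirroring the suspected signal semantics."""
    orders, in_position = [], False
    for day, (enter, leave) in enumerate(zip(sig_entries, sig_exits)):
        if enter and (accumulate or not in_position):
            orders.append(("Buy", day))
            in_position = True
        elif leave and in_position:
            orders.append(("Sell", day))
            in_position = False
    return orders

sig_entries = [True, False, False, True, False, False]   # buys on day 0 and day 3
sig_exits = [False, False, False, False, False, True]    # sell on day 5

# Default: the second entry is swallowed, leaving only two orders.
assert simulate(sig_entries, sig_exits) == [("Buy", 0), ("Sell", 5)]
# With accumulation, the second buy is recorded as well.
assert simulate(sig_entries, sig_exits, accumulate=True) == [
    ("Buy", 0), ("Buy", 3), ("Sell", 5)]
```

The two-order case matches the `records_readable` output below.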
this is the result:
```
Date
2017-11-09 00:00:00+00:00 320.884003
2017-11-10 00:00:00+00:00 299.252991
2017-11-11 00:00:00+00:00 314.681000
2017-11-12 00:00:00+00:00 307.907990
2017-11-13 00:00:00+00:00 316.716003
2017-11-14 00:00:00+00:00 337.631012
Freq: D, Name: Close, dtype: float64
Exit Trade Id Column Size Entry Timestamp Avg Entry Price Entry Fees Exit Timestamp Avg Exit Price Exit Fees PnL Return Direction Status Position Id
0 0 0 0.311639 2017-11-09 00:00:00+00:00 320.884003 0.0 2017-11-14 00:00:00+00:00 337.631012 0.0 5.219023 0.05219 Long Closed 0
Start 2017-11-09 00:00:00+00:00
End 2024-01-30 00:00:00+00:00
Period 2274 days 00:00:00
Start Value 100.0
End Value 105.219023
Total Return [%] 5.219023
Benchmark Return [%] 619.782758
Max Gross Exposure [%] 100.0
Total Fees Paid 0.0
Max Drawdown [%] 6.741069
Max Drawdown Duration 4 days 00:00:00
Total Trades 1
Total Closed Trades 1
Total Open Trades 0
Open Trade PnL 0.0
Win Rate [%] 100.0
Best Trade [%] 5.219023
Worst Trade [%] 5.219023
Avg Winning Trade [%] 5.219023
Avg Losing Trade [%] NaN
Avg Winning Trade Duration 5 days 00:00:00
Avg Losing Trade Duration NaT
Profit Factor inf
Expectancy 5.219023
Sharpe Ratio 0.202396
Calmar Ratio 0.121631
Omega Ratio 1.643893
Sortino Ratio 0.324209
dtype: object
Order Id Column Timestamp Size Price Fees Side
0 0 0 2017-11-09 00:00:00+00:00 0.311639 320.884003 0.0 Buy
1 1 0 2017-11-14 00:00:00+00:00 0.311639 337.631012 0.0 Sell
``` | open | 2024-02-09T12:32:07Z | 2024-03-16T10:53:02Z | https://github.com/polakowo/vectorbt/issues/688 | [] | spainbox | 1
ets-labs/python-dependency-injector | flask | 318 | Injection not working for class methods | I am not quite sure if this is expected behavior or not. Methods annotated as @classmethod end up getting extra parameters injected. The following code demonstrates. I discovered this while using Closing, but filled out the example a bit as I discovered that it is a general issue for Provide.
```
import sys
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide, Closing
def my_factory():
    return 'test-factory'

def my_resource():
    yield 'test-resource'
    print('Closing')

class Container(containers.DeclarativeContainer):
    factory = providers.Factory(my_factory)
    resource = providers.Resource(my_resource)

def do_function_thing(r:str=Closing[Provide[Container.resource]]) -> None:
    print('from function', r)

class MyClass():
    def do_instance_thing(self, r:str=Closing[Provide[Container.resource]]) -> None:
        print('from instance', r)

    @classmethod
    def do_class_thing(cls, r:str=Closing[Provide[Container.resource]]) -> None:
        print('from class', r)

    @classmethod
    def non_closing_class_thing(cls, r:str=Provide[Container.factory]) -> None:
        print('non-closing from class', r)

container = Container()
container.init_resources()
container.wire(modules=[sys.modules[__name__]])

do_function_thing()
c = MyClass()
c.do_instance_thing()

# both of these end up getting multiple values for r:
c.non_closing_class_thing()
c.do_class_thing()
```
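As an aside, the failure can be reduced to a stdlib-only sketch (my assumption about the mechanism: the wiring patch wraps the already-bound classmethod, so `cls` is forwarded positionally and collides with the injected keyword):

```python
def patched(fn, to_inject):
    # Minimal stand-in for the wiring wrapper that injects defaults as kwargs.
    def wrapper(*args, **kwargs):
        return fn(*args, **to_inject, **kwargs)
    return wrapper

class Service:
    @classmethod
    def do_thing(cls, r="default"):
        return r

# Patching the *unbound* function and rebinding as a classmethod works:
Service.patched_ok = classmethod(patched(Service.do_thing.__func__, {"r": "injected"}))
assert Service.patched_ok() == "injected"

# Patching the *bound* classmethod and forwarding cls positionally collides:
bad = patched(Service.do_thing, {"r": "injected"})
try:
    bad(Service)   # simulates the class being passed through *args
except TypeError as exc:
    assert "multiple values" in str(exc)
```

This reproduces the same `got multiple values for argument 'r'` message; the original script's output follows.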
The resulting output is:
```
from function test-resource
Closing
from instance test-resource
Closing
Traceback (most recent call last):
File "clstest.py", line 49, in <module>
c.non_closing_class_thing()
File "/Users/scott/repos/github.com/scott2b/Starlight/.venv/lib/python3.8/site-packages/dependency_injector/wiring.py", line 296, in _patched
result = fn(*args, **to_inject)
TypeError: non_closing_class_thing() got multiple values for argument 'r'
``` | closed | 2020-11-03T03:07:40Z | 2023-06-02T18:47:51Z | https://github.com/ets-labs/python-dependency-injector/issues/318 | [
"bug"
] | scott2b | 19 |
babysor/MockingBird | pytorch | 768 | Error when preprocessing audio and mel spectrograms | Using data from:
```
E:\BaiduNetdiskDownload\ai克隆语音\aidatatang_200zh\corpus\train
aidatatang_200zh: 0%| | 0/1 [00:00<?, ?speakers/s]no wordS
no wordS
no wordS
.....
no wordS
aidatatang_200zh: 100%|████████████████████████████████████████████████████████████| 1/1 [00:03<00:00, 3.93s/speakers]
The dataset consists of 0 utterances, 0 mel frames, 0 audio timesteps (0.00 hours).
Traceback (most recent call last):
  File "E:\BaiduNetdiskDownload\ai克隆语音\MockingBird\pre.py", line 74, in <module>
    preprocess_dataset(**vars(args))
  File "E:\BaiduNetdiskDownload\ai克隆语音\MockingBird\synthesizer\preprocess.py", line 88, in preprocess_dataset
    print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
ValueError: max() arg is an empty sequence
```
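For context, the final crash is simply `max()` over an empty generator, because zero utterances survived preprocessing (the repeated "no wordS" lines suggest every sample was skipped). A stdlib sketch (the `default=0` guard is my assumption about a possible patch, not the project's actual fix):

```python
metadata = []   # stand-in for the empty preprocessing result

try:
    max(len(m[5]) for m in metadata)
except ValueError as exc:
    assert "empty" in str(exc)

# A default avoids the crash, though it only masks the real problem
# (no utterances survived preprocessing):
assert max((len(m[5]) for m in metadata), default=0) == 0
```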
I also looked around and found that the reported problems are all the same, but many of the proposed solutions have already been tried and it still fails.
Could anyone advise how to solve this? | closed | 2022-10-19T12:05:41Z | 2022-10-20T10:24:52Z | https://github.com/babysor/MockingBird/issues/768 | [] | ten-years-of-invitation | 0
tfranzel/drf-spectacular | rest-api | 1,018 | Unclear how to specify example values | **Describe the bug**
As an engineer implementing schema docs via drf-spectacular it is unclear how to supply values for the documentation or to detail acceptable input formats.
**To Reproduce**
When specifying a type such as `DateField` on a serilalizer, ie.
```python
class MySerializer(serializers.Serializer):
date_of_birth = serializers.DateField()
```
the generated documentation might looks something like this
```json
{
"date_of_birth": "2019-08-24",
}
```
where the example value is a date field in the correct format
if I specify another field (for instance a custom phone number field) there is no clear way to supply or offer an example or formatting instructions
**Expected behavior**
It would be nice to be able to supply formatting instructions in each of the serializer fields.
| closed | 2023-07-05T17:57:04Z | 2024-03-14T22:30:04Z | https://github.com/tfranzel/drf-spectacular/issues/1018 | [] | dashdanw | 0 |
keras-team/keras | deep-learning | 21,004 | Ensured torch import is properly handled | Before:

```python
try:
    import torch  # noqa: F401
except ImportError:
    pass
```

After:

```python
try:
    import torch  # noqa: F401
except ImportError:
    torch = None  # Explicitly set torch to None if not installed
```
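A quick stdlib-only illustration of why the `torch = None` form is safer (it uses a deliberately non-existent module name as a stand-in, since torch may actually be installed locally):

```python
try:
    import _surely_not_installed_xyz as maybe_torch  # stand-in for torch
except ImportError:
    maybe_torch = None

# With the old `pass` form, the name would be undefined here and any
# availability check would raise NameError; with `= None` it is testable:
assert maybe_torch is None
backend = "torch" if maybe_torch is not None else "fallback"
assert backend == "fallback"
```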
| open | 2025-03-07T19:58:17Z | 2025-03-13T07:09:01Z | https://github.com/keras-team/keras/issues/21004 | [
"type:Bug"
] | FNICKE | 1 |
great-expectations/great_expectations | data-science | 10,917 | ExpectColumnValueLengthsToEqual is failing/raising exception when applied on a column having null values as well | **Describe the bug**
**ExpectColumnValueLengthsToEqual** fails/raises an exception in version 1.3.5 when applied to a column that also contains null values. This did not fail in version 0.18.
**To Reproduce**
```
import great_expectations as gx
import great_expectations.expectations as gxe
# Retrieve your Data Context
data_context = gx.get_context(mode="ephemeral")
# Define the Data Source name
data_source_name = "source_system_name_spark_dataframe"
# Add the Data Source to the Data Context
data_source = data_context.data_sources.add_spark(name=data_source_name)
# Define the Data Asset name
data_asset_name = "dataset_name"
# Add a Data Asset to the Data Source
data_asset = data_source.add_dataframe_asset(name=data_asset_name)
# Define the Batch Definition name
batch_definition_name = "dataset_batch_definition"
# Add a Batch Definition to the Data Asset
batch_definition = data_asset.add_batch_definition_whole_dataframe(
    batch_definition_name
)
df = <A pyspark dataframe containing few null values in a string column>
batch_parameters = {"dataframe": df}
# Get the dataframe as a Batch
batch = batch_definition.get_batch(batch_parameters=batch_parameters)
test = gxe.ExpectColumnValueLengthsToEqual(column=<column_name>, value=<length>)
# Test the Expectation
validation_results = batch.validate(test, result_format="COMPLETE")
print(validation_results)
```
**Expected behavior**
No exception should be raised. For the null values, the length should be treated as zero, or they should not be considered part of the expectation result.
**Environment (please complete the following information):**
- Great Expectations Version: [1.3.5]
- Data Source: [Spark dataframe created from a csv file]
- Cloud environment: [Azure Databricks]
**Additional context**
```
{
"success": false,
"expectation_config": {
"type": "expect_column_value_lengths_to_equal",
"kwargs": {
"column": "ID Number",
"value": 7.0,
"batch_id": "source_system_name_spark_dataframe-dataset_name"
},
"meta": {}
},
"result": {},
"meta": {},
"exception_info": {
"('column_values.value_length.map', '0464e137b2cdb1dd819e7ee85c081f95', ())": {
"exception_traceback": "Traceback (most recent call last):\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/execution_engine/execution_engine.py\", line 532, in _process_direct_and_bundled_metric_computation_configurations\n metric_computation_configuration.metric_fn( # type: ignore[misc] # F not callable\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/expectations/metrics/metric_provider.py\", line 99, in inner_func\n return metric_fn(*args, **kwargs)\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/expectations/metrics/map_metric_provider/column_function_partial.py\", line 239, in inner_func\n ) = execution_engine.get_compute_domain(\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/execution_engine/sparkdf_execution_engine.py\", line 800, in get_compute_domain\n data: pyspark.DataFrame = self.get_domain_records(domain_kwargs=domain_kwargs)\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/execution_engine/sparkdf_execution_engine.py\", line 689, in get_domain_records\n data = data.filter(filter_condition.condition)\n File \"/databricks/spark/python/pyspark/instrumentation_utils.py\", line 48, in wrapper\n res = func(*args, **kwargs)\n File \"/databricks/spark/python/pyspark/sql/dataframe.py\", line 3123, in filter\n jdf = self._jdf.filter(condition)\n File \"/databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py\", line 1321, in __call__\n return_value = get_return_value(\n File \"/databricks/spark/python/pyspark/errors/exceptions.py\", line 234, in deco\n raise converted from 
None\npyspark.errors.exceptions.ParseException: \n[PARSE_SYNTAX_ERROR] Syntax error at or near 'IS'.(line 1, pos 10)\n\n== SQL ==\nID Number IS NOT NULL\n----------^^^\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/validator/validation_graph.py\", line 276, in _resolve\n self._execution_engine.resolve_metrics(\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/execution_engine/execution_engine.py\", line 279, in resolve_metrics\n return self._process_direct_and_bundled_metric_computation_configurations(\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-70771ece-6841-4d7b-a9e8-4a8bc864ed04/lib/python3.9/site-packages/great_expectations/execution_engine/execution_engine.py\", line 537, in _process_direct_and_bundled_metric_computation_configurations\n raise gx_exceptions.MetricResolutionError(\ngreat_expectations.exceptions.exceptions.MetricResolutionError: \n[PARSE_SYNTAX_ERROR] Syntax error at or near 'IS'.(line 1, pos 10)\n\n== SQL ==\nID Number IS NOT NULL\n----------^^^\n\n",
"exception_message": "\n[PARSE_SYNTAX_ERROR] Syntax error at or near 'IS'.(line 1, pos 10)\n\n== SQL ==\nID Number IS NOT NULL\n----------^^^\n",
"raised_exception": true
}
}
}
```
| closed | 2025-02-06T14:39:01Z | 2025-03-11T15:33:34Z | https://github.com/great-expectations/great_expectations/issues/10917 | [] | suchintakp5 | 3 |
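The ParseException in the traceback above comes from the unquoted column name: `ID Number` contains a space, so Spark SQL only accepts it as a backtick-quoted identifier (`` `ID Number` IS NOT NULL ``). A minimal, hypothetical helper sketching the quoting rule (not Great Expectations' actual fix):

```python
def quote_spark_identifier(name: str) -> str:
    """Backtick-quote a Spark SQL identifier unless it is already a plain name."""
    if name.isidentifier():
        return name
    # Spark escapes a literal backtick inside a quoted identifier by doubling it
    return "`" + name.replace("`", "``") + "`"

condition = quote_spark_identifier("ID Number") + " IS NOT NULL"
print(condition)  # → `ID Number` IS NOT NULL
```

With the condition built this way, the `data.filter(...)` call in the traceback would receive a string that the Spark parser accepts.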
nltk/nltk | nlp | 2,538 | Add wheel distribution(s) to PyPI | Has nltk considered the feasibility of adding wheels to PyPI?
As of now it is one of ~10% of packages listed on https://pythonwheels.com/ that [does not provide wheels](https://pypi.org/project/nltk/#files).
It looks like nltk is pure-Python with no dependencies on shared libraries or the like. That seems like it would make building the wheel itself pretty painless. | open | 2020-05-10T13:45:50Z | 2020-12-05T00:17:00Z | https://github.com/nltk/nltk/issues/2538 | [] | bsolomon1124 | 8 |
zama-ai/concrete-ml | scikit-learn | 95 | WARNING: high error rate, more details with --display-optimizer-choice? | <img width="394" alt="1bbbb1843a4ed7bd4278b72ad17807e" src="https://github.com/zama-ai/concrete-ml/assets/127387074/24479ef2-6552-407a-89d8-93eaffe98e5c">
Hello, what does this mean?
| closed | 2023-07-10T11:47:09Z | 2023-07-28T04:10:00Z | https://github.com/zama-ai/concrete-ml/issues/95 | [] | maxwellgodv | 16 |
ultralytics/ultralytics | deep-learning | 19,371 | Android deploys yolov12 ncnn | https://github.com/mpj1234/ncnn-yolov12-android/tree/main | closed | 2025-02-22T12:45:41Z | 2025-02-24T07:04:19Z | https://github.com/ultralytics/ultralytics/issues/19371 | [] | mpj1234 | 1 |
sherlock-project/sherlock | python | 2,418 | Requesting support for: pronouns.page | ### Site URL
https://pronouns.page
### Additional info
The best place to query is via the API, e.g. `https://en.pronouns.page/api/profile/get/<username>?version=2`; relevant documentation is [here](https://en.pronouns.page/api).
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct | open | 2025-03-03T15:44:03Z | 2025-03-05T12:27:30Z | https://github.com/sherlock-project/sherlock/issues/2418 | [
"site support request"
] | wrac4242 | 0 |
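For a Sherlock detector, the lookup boils down to building that per-username API URL. A small sketch — the function name and the `lang` parameter are illustrative, not Sherlock's actual manifest format:

```python
from urllib.parse import quote

def pronouns_page_api_url(username: str, lang: str = "en") -> str:
    """Build the profile-lookup URL documented at https://en.pronouns.page/api."""
    return f"https://{lang}.pronouns.page/api/profile/get/{quote(username)}?version=2"

print(pronouns_page_api_url("example_user"))
# → https://en.pronouns.page/api/profile/get/example_user?version=2
```

Whether a missing account is signalled by the HTTP status code or by an empty response body would still need to be checked against the live API.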
d2l-ai/d2l-en | data-science | 2,421 | Discussion Forum Not Showing up on Classic Branch | As the image below shows, none of the lessons on the classic website have functioning discussion forums (e.g. http://classic.d2l.ai/chapter_recurrent-modern/beam-search.html):

I've checked it on Firefox and Edge already; I don't think this is browser-related.
| closed | 2022-12-28T16:41:41Z | 2023-01-06T11:27:15Z | https://github.com/d2l-ai/d2l-en/issues/2421 | [] | Vortexx2 | 2 |
custom-components/pyscript | jupyter | 199 | AttributeError: module 'Crypto.Cipher' has no attribute 'AES' | I have an issue when importing
```python
...
from Crypto.Cipher import AES
...
```
It falls with exception
```
Exception in </config/pyscript/myscript.py> line 13: from Crypto.Cipher import AES ^ AttributeError: module 'Crypto.Cipher' has no attribute 'AES'
```
Any ideas how to fix it? | closed | 2021-04-18T13:27:02Z | 2021-04-29T10:16:31Z | https://github.com/custom-components/pyscript/issues/199 | [] | kenoma | 1
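The usual cause of this error is one of the abandoned `crypto`/`pycrypto` packages shadowing `pycryptodome`, which is the package that actually provides `Crypto.Cipher.AES`; the common fix is `pip uninstall crypto pycrypto` followed by `pip install pycryptodome`. A small stdlib diagnostic sketch (the helper itself is hypothetical):

```python
import importlib.util

def diagnose_crypto() -> str:
    """Report which flavour of the 'Crypto' namespace is importable, if any."""
    if importlib.util.find_spec("Crypto") is None:
        return "no 'Crypto' package installed; try: pip install pycryptodome"
    try:
        has_aes = importlib.util.find_spec("Crypto.Cipher.AES") is not None
    except ModuleNotFoundError:
        has_aes = False
    if not has_aes:
        return ("'Crypto' imports but has no Cipher.AES; an old crypto/pycrypto "
                "install is probably shadowing pycryptodome")
    return "pycryptodome-style Crypto.Cipher.AES is available"

print(diagnose_crypto())
```

Running this inside the same environment that Home Assistant/pyscript uses shows which of the three cases applies.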
15r10nk/inline-snapshot | pytest | 147 | trim should only remove things if all tests were executed successfully | # Problem
`--inline-snapshot=trim` currently triggers even when the user runs only some of the tests (with `testmon` or `pytest -k some_test`).
Checking that all tests were executed successfully should solve this problem. | open | 2024-12-10T08:30:30Z | 2024-12-10T08:30:30Z | https://github.com/15r10nk/inline-snapshot/issues/147 | [] | 15r10nk | 0
15r10nk/inline-snapshot | pytest | 196 | Allow fixing whole snapshot regardless of managed/unmanaged values | I have a case like this:
```python
from dirty_equals import IsJson
from inline_snapshot import snapshot
def test_foo():
assert {"a": '{"b": 1}'} == snapshot({"a": IsJson({"b": 2})})
```
When this test fails, it's easy in this toy example to look at the pytest diff and to update `IsJson({"b": 2})` to `IsJson({"b": 1})` by hand.
But the real snapshot is huge and it's impossible to do this manual process. I essentially need inline-snapshot to just replace `IsJson({"b": 2})` with `'{"b": 1}'` and let me work from there. Of course the dynamic expressions which still match should be left unchanged. | closed | 2025-02-12T13:39:42Z | 2025-02-12T21:44:48Z | https://github.com/15r10nk/inline-snapshot/issues/196 | [] | alexmojaki | 7 |
google-research/bert | nlp | 1,146 | Dear bert team, how could I use BERT for an NER task? | Dear bert team,
I have train and test corpora with BIO tags, like below:
The O
patient O
was O
aged O
36 O
. O
How could I use BERT to train on my data, produce a model, and predict the BIO tags of the test data?
The repository has many programs, but I have no idea which one is the one I need.
I would like to use Google Colab to run the program, which would avoid Python environment problems.
Could you offer a tutorial for the BERT NER task?
Thank you
Best regards; | open | 2020-09-08T10:54:56Z | 2020-09-08T10:54:56Z | https://github.com/google-research/bert/issues/1146 | [] | jasonsu123 | 0 |
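The original repo only ships sentence-level examples like `run_classifier.py`; NER needs token-level (sequence labeling) fine-tuning, as in the community BERT-NER forks. Whichever script is used, the first step is parsing the token-per-line BIO format shown above — a minimal stdlib sketch:

```python
def read_bio(lines):
    """Parse token-per-line BIO data; blank lines separate sentences."""
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        if not line:  # sentence boundary
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, tag = line.split()
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush the last sentence if the file lacks a trailing blank line
        sentences.append((tokens, tags))
    return sentences

sample = ["The O", "patient O", "was O", "aged O", "36 O", ". O"]
print(read_bio(sample))
# → [(['The', 'patient', 'was', 'aged', '36', '.'], ['O', 'O', 'O', 'O', 'O', 'O'])]
```

The resulting (tokens, tags) pairs are what a token-classification fine-tuning script would then convert into WordPiece inputs and label IDs.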
vaexio/vaex | data-science | 1,485 | support reading from avro files [FEATURE-REQUEST] | **Description**
Support reading Avro files natively from cloud storage, e.g.:
```
vaex.open("gs://path_of_many_avro_files", fs_options={'anon': True})
```
**Is your feature request related to a problem? Please describe.**
Currently the workaround is to read the data in with pandas as a pandas DataFrame and then convert it to a vaex DataFrame, which doesn't work when the data is too big.
Thanks | open | 2021-08-03T14:24:35Z | 2023-02-07T18:46:58Z | https://github.com/vaexio/vaex/issues/1485 | [] | stellaywu | 3 |
huggingface/peft | pytorch | 1,438 | ValueError: Tokenizer class XXXXXXXX does not exist or is not currently imported | ### System Info
If I use peft==0.8.2 I get this error, but if I only change the version to 0.7.1, the error is solved.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
When using peft==0.8.2, the error looks like this:
File "*****/test_Qwen_aes_tag.py", line 9, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "/home/tiger/.local/lib/python3.9/site-packages/peft/auto.py", line 124, in from_pretrained
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
File "/home/tiger/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 724, in from_pretrained
raise ValueError(
ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.
### Expected behavior
This looks like a bug in the new version: the traceback shows that in 0.8.2, `AutoPeftModelForCausalLM.from_pretrained` calls `AutoTokenizer.from_pretrained` without forwarding `trust_remote_code`, which custom tokenizers such as `QWenTokenizer` require. | closed | 2024-02-06T03:12:00Z | 2024-03-26T15:03:43Z | https://github.com/huggingface/peft/issues/1438 | [] | Sun-Shiqi | 6
DistrictDataLabs/yellowbrick | matplotlib | 508 | Feature Correlation to Dependent Variable Visualizer | **Describe the solution you'd like**
This issue extends #334 with details about bullet point 3: "plot feature/target correlations".
As seen in [Model comparison using a noisy dataset -1](https://medium.com/@harsha.g1/model-comparison-using-a-noisy-dataset-1-db20f62c5126), it is useful to compare the pairwise correlation between the features and the dependent variable or target as a bar chart; similar to Rank1D and Rank2D, except that this is for the target only.

Once the target package has been created, create a visualizer, `yellowbrick.target.FeatureCorrelation` that creates this bar chart by fitting `X` and `y`. Write documentation that links this visualizer to the `JointPlot` visualizer.
Include the following correlations:
- Pearson
- [Mutual Information](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.mutual_info_classif.html#sklearn.feature_selection.mutual_info_classif) | closed | 2018-07-19T12:47:59Z | 2018-08-19T13:13:48Z | https://github.com/DistrictDataLabs/yellowbrick/issues/508 | [
"type: feature",
"priority: low"
] | bbengfort | 2 |
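For the Pearson case, the numbers behind the proposed bar chart are just per-column correlations of `X` with `y`; a stdlib sketch of that computation (the visualizer would add the plotting and the mutual-information option on top):

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

X = [[1, 10], [2, 8], [3, 6], [4, 4]]  # rows = samples, columns = features
y = [1, 2, 3, 4]
corrs = [pearson([row[j] for row in X], y) for j in range(len(X[0]))]
print(corrs)  # ≈ [1.0, -1.0]: feature 0 tracks y, feature 1 opposes it
```

These per-feature values are exactly what `FeatureCorrelation` would draw as bar heights, one bar per column of `X`.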
jmcnamara/XlsxWriter | pandas | 1,120 | question: How to force a cell to be a text cell even if the value of the cell is changed in Excel | ### Question
I had a project in which I was creating worksheets for people to fill in using Excel. I needed a way to ensure that if they entered "11:00" into a cell it stayed as "11:00", and not converted to an Excel time. Similarly, I needed numbers such as "1.20" to remain as the text "1.20" and not be converted into a number.
I finally found a way to do this by using workbook.add_format({"num_format": "@"}), which doesn't seem to be documented anywhere. Everything else I tried worked until the data in the cell was changed by the user in Excel, at which point Excel changed the cell type to something other than text.
I wanted to put this here on GitHub so that others might have a hope of finding the answer to the question "How do I prevent Excel from changing a cell's type and leaving it as text forever?"
Thank you for your awesome library! Sorry if I should have posted this some other way!
| closed | 2025-02-18T19:28:15Z | 2025-02-18T23:50:13Z | https://github.com/jmcnamara/XlsxWriter/issues/1120 | [
"question"
] | multicron | 1 |
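For anyone landing here, a minimal sketch of the trick described above, writing to an in-memory buffer; `"@"` is Excel's built-in Text number format, which per the report above is what survives later user edits:

```python
import io
import xlsxwriter  # pip install XlsxWriter

buf = io.BytesIO()
workbook = xlsxwriter.Workbook(buf, {"in_memory": True})
worksheet = workbook.add_worksheet()

# "@" = Excel's built-in Text format; the cell stays text even after the user edits it
text_fmt = workbook.add_format({"num_format": "@"})
worksheet.write_string(0, 0, "11:00", text_fmt)  # stays "11:00", not an Excel time
worksheet.write_string(0, 1, "1.20", text_fmt)   # stays "1.20", not the number 1.2
workbook.close()
print(len(buf.getvalue()), "bytes of xlsx written")
```

Using `write_string` alone stores the value as text once, but it is the `"@"` format that keeps the cell as text when the user types a new value in.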
biolab/orange3 | pandas | 6,125 | Group by: Standard deviation and Sum of TimeVariable | **What's wrong?**
In the Group by widget, an error message appears when calculating the "Standard deviation" or "Sum" aggregation for a time variable.
**How can we reproduce the problem?**
- Load a dataset with a TimeVariable (e.g. "Banking Crises")
- In the Group by widget, select the "Standard deviation" or "Sum" aggregation for the TimeVariable.


**What's your environment?**
- Operating system: Win10
- Orange version: '3.32.0.dev0+94958aa'
- How you installed Orange: git clone
| closed | 2022-09-05T10:15:26Z | 2023-01-20T07:31:02Z | https://github.com/biolab/orange3/issues/6125 | [
"bug"
] | mw25 | 2 |
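Outside the widget, the usual workaround for aggregations that are not directly defined on raw datetimes (like sum, and std in some backends) is to convert to epoch seconds first and convert back where it makes sense — a stdlib sketch, unrelated to Orange's internals:

```python
from datetime import datetime, timezone
from statistics import mean, pstdev

times = [datetime(2020, 1, d, tzinfo=timezone.utc) for d in (1, 3, 8)]
secs = [t.timestamp() for t in times]  # datetimes -> epoch seconds

# The mean converts back to a datetime; the std is only meaningful as a duration
center = datetime.fromtimestamp(mean(secs), tz=timezone.utc)
spread_days = pstdev(secs) / 86400
print(center, f"± {spread_days:.2f} days")
```

This also illustrates why "Sum" of a TimeVariable is questionable in the first place: a sum of absolute timestamps has no calendar meaning, unlike mean or spread.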