| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
holoviz/panel | matplotlib | 7,588 | JupyterHub Azure OAuth issues | panel==1.5.5
I'm trying to add Azure OAuth to my Panel application. I'm developing and testing in VS Code on my JupyterHub.
## Setup
```python
import panel as pn
pn.extension()
user = pn.state.user or "Guest User"
pn.panel(user).servable()
```
Without OAuth, I would serve this via
```bash
panel serve script.py --index script
```
And I would be able to open the application at
https://my-domain/workspace/user/userid/vscode/proxy/5006/
and
https://my-domain/workspace/user/userid/vscode/proxy/5006/script
## Azure OAuth works on my laptop
It works on my laptop served with:
```bash
panel serve script.py --index script --oauth-provider=azure --oauth-key='********-****-****-****-************' --oauth-secret='******************************' --cookie-secret='********************************' --oauth-encryption-key='********************************' --oauth-extra-params "{'tenant': '********-****-****-****-************'}"
```
## Issues
Coming in separate posts below
| open | 2025-01-05T16:08:35Z | 2025-03-10T14:40:14Z | https://github.com/holoviz/panel/issues/7588 | [] | MarcSkovMadsen | 13 |
python-visualization/folium | data-visualization | 1,117 | Map attribution not showing up for built-in tiles | When the default tile (OpenStreetMap) or another built-in tile (one of those present in the template/tiles folder) is chosen during map creation, no attribution shows up at the bottom left of the map.
```python
import folium
m = folium.Map(location=[45.5236, -122.6750])
```
Looking into the code, it seems that the `attr.txt` file for each built-in tile is read correctly, but nothing is done with it: the template doesn't refer to `attr` and the options are not updated.
Here is the template:
```python
_template = Template(u"""
{% macro script(this, kwargs) %}
var {{ this.get_name() }} = L.tileLayer(
{{ this.tiles|tojson }},
{{ this.options|tojson }}
).addTo({{ this._parent.get_name() }});
{% endmacro %}
""")
```
and an extract of the code (`raster_layers.py`) that is supposed to set the `self.attr` attribute:
```python
attr_template = 'tiles/' + tiles_flat + '/attr.txt'
if tile_template in templates and attr_template in templates:
self.tiles = self._env.get_template(tile_template).render(API_key=API_key) # noqa
self.attr = self._env.get_template(attr_template).render()
else:
self.tiles = tiles
if not attr:
raise ValueError('Custom tiles must have an attribution.')
self.attr = attr
```
Am I missing something?
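A hedged sketch of one possible fix (not an actual patch from the project): since Leaflet's `L.tileLayer` picks up an `attribution` key from its options object, folding the parsed `attr.txt` text into the serialized options would surface it. Here `json.dumps` stands in for the Jinja `tojson` filter, and the URL and option values are illustrative.

```python
import json

# Sketch of the idea: merge the attribution read from attr.txt into the
# options dict that the template serializes into the L.tileLayer(...) call.
# Leaflet reads the "attribution" key from that options object.
options = {"minZoom": 0, "maxZoom": 18}
attr = "Data by OpenStreetMap"      # stand-in for the attr.txt contents
options["attribution"] = attr       # the step this report says is missing

js_call = "L.tileLayer({}, {})".format(
    json.dumps("https://{s}.tile.example/{z}/{x}/{y}.png"),
    json.dumps(options),
)
print(js_call)
```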
| closed | 2019-04-04T09:52:51Z | 2019-04-14T15:20:29Z | https://github.com/python-visualization/folium/issues/1117 | [
"bug"
] | FabeG | 2 |
learning-at-home/hivemind | asyncio | 64 | Technical debt: RemoteMixtureOfExperts (v0.8) | * [x] beam search uses tuple endpoints (i.e. address, port), while the DHT switched to string endpoints
* [x] beam search needs one extra step because prefix.123.321 != expert.123.321
* [x] we may no longer need parallel autograd if it is implemented in pytorch (not the case)
* [x] remove hivemind.utils.autograd in favor of _RemoteExpertCallMany
* [x] add a more feature-rich test for moe.py (with several DHT nodes and experts)
* [ ] cancel unused queries in first_k_active?
* [ ] when declaring experts, introduce some kind of "grace period" - only "declare" prefixes that have not been updated for that period. (rationale: first prefixes are likely to be already updated by other peers) | closed | 2020-07-03T12:37:09Z | 2020-08-26T07:27:57Z | https://github.com/learning-at-home/hivemind/issues/64 | [
"enhancement",
"help wanted"
] | justheuristic | 1 |
babysor/MockingBird | pytorch | 334 | Could you help check whether this counts as normal convergence, and whether retraining is needed | 

| open | 2022-01-12T08:52:56Z | 2022-01-12T22:41:17Z | https://github.com/babysor/MockingBird/issues/334 | [] | muconai | 1 |
huggingface/transformers | python | 36,848 | GPT2 repetition of words in output | ### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NO
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import pytest
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


@pytest.mark.parametrize("dtype", [torch.float16, torch.float32])
def test_gpt2_cpu_inductor(dtype):
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(dtype)
prompt1 = "GPT2 is model developed by OpenAI"
# run on CPU
input = tokenizer(prompt1, return_tensors="pt")
input_ids1 = input.input_ids
attention_mask = input.attention_mask
gen_tokens1 = model.generate(
input_ids1,
attention_mask = attention_mask,
max_new_tokens=30,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
)
gen_text1 = tokenizer.batch_decode(gen_tokens1)[0]
print(gen_text1)
import torch._inductor.config as inductor_config
inductor_config.inplace_buffers = False
model.transformer.wte.forward = torch.compile(
model.transformer.wte.forward, backend="inductor", fullgraph=False
)
gen_tokens_cpu1 = model.generate(
input_ids1,
attention_mask = attention_mask,
max_new_tokens=30,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
)
gen_text_cpu1 = tokenizer.batch_decode(gen_tokens_cpu1)[0]
assert gen_text1 == gen_text_cpu1
```
### Expected behavior
For above test I see output as
`GPT2 is model developed by OpenAI and is based on the OpenAI-based OpenAI-based OpenAI-based OpenAI-based OpenAI-based OpenAI-based Open`
Is this expected behavior?
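For context: greedy decoding (`do_sample=False`) with base GPT-2 commonly degenerates into loops like this, and it is not specific to inductor. The usual `generate` knobs for countering it are `no_repeat_ngram_size` and `repetition_penalty` (both standard parameters). A self-contained, dependency-free sketch of what n-gram blocking does; `banned_next_tokens` is an illustrative name, not the library's internal:

```python
def banned_next_tokens(generated, n=3):
    """Token ids that would complete an n-gram already present in the
    sequence, which is the idea behind generate(no_repeat_ngram_size=n)."""
    if len(generated) < n - 1:
        return set()
    prefix = tuple(generated[-(n - 1):])   # the (n-1)-token suffix so far
    banned = set()
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

history = [5, 7, 9, 5, 7]            # "... 5 7 9 ... 5 7"
print(banned_next_tokens(history))   # {9}: emitting 9 would repeat "5 7 9"
```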
@ArthurZucker could you please explain why the output is like this? | closed | 2025-03-20T10:55:28Z | 2025-03-20T13:05:33Z | https://github.com/huggingface/transformers/issues/36848 | [
"bug"
] | vpandya-quic | 1 |
holoviz/panel | matplotlib | 7,359 | Enable Perspective to support large tables | I work with lots of tabular datasets in the 50-500MB range. For exploratory data analysis the Perspective viewer is really powerful and unique. Unfortunately, sending the full dataset to the client is slow and often breaks on a websocket max-message-size limit imposed by Panel, JupyterHub or Kubernetes. You can increase these limits, but only to some extent, and doing so can be outside the control or capability of a data scientist.
I'm increasingly seeing this problem and I'm not the only one ([Discourse #6804](https://discourse.holoviz.org/t/websocket-max-message-size-is-not-respected/6804)). It is actually a very common problem in finance and trading, where I work. Currently Excel supports larger tables than we do with Perspective. I believe we should enable users to work with larger files in Perspective than Excel can handle.
Perspective was actually built to support large tabular data via virtualization; see [`regular-table`](https://github.com/finos/regular-table) and [Perspective](https://perspective.finos.org/). But our implementation only uses the `perspective-viewer` web component, not the advanced client-server virtualization architecture it supports.
A Panel user actually showcased how to use the client-server virtualization in [Discourse #6430](https://discourse.holoviz.org/t/panel-perspective-on-the-server/6430), but it is only a proof of concept that is complicated to use.
Please note that the client-server virtualization architecture seems similar to Mosaic - Mosaic is just built on DuckDB. There is a request to add Mosaic in [FR #7358](https://github.com/holoviz/panel/issues/7358).
## Discussion
The Tabulator pane provides a kind of virtualization via the `pagination` parameter ("local" or "remote"). We could support a similar parameter for Perspective, making it really easy for users. On the other hand, there is power in exposing more of the underlying Perspective API, like the `PerspectiveManager`, and hosting tables once but using them across sessions and users. I also think the Panel Perspective pane would be more useful if it implemented the Jupyter Perspective widget API and capabilities. See [PyCon Italy 2024](https://www.youtube.com/watch?v=s6n9vEyM1gY) and the [PerspectiveWidget implementation](https://github.com/finos/perspective/blob/master/rust/perspective-python/perspective/widget/__init__.py) for inspiration.
Today Panel can run on both Tornado and FastAPI servers. The solution should work in both environments. Personally, I want to migrate to FastAPI deployments if that is possible.
Also it should just work in Pyodide because that is where lots of the showcasing of the functionality will take place.
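The client-server virtualization mentioned above boils down to something quite small: the client only ever requests the window its viewport shows, plus the total row count. A toy, dependency-free sketch (all names illustrative):

```python
# Toy sketch of server-side virtualization: the full table stays on the
# server and the client only ever asks for the window its viewport shows
# (plus the total row count), so websocket messages stay tiny.
table = [{"id": i, "value": i * i} for i in range(100_000)]

def fetch_window(start, end):
    """What a virtualized regular-table/Perspective client would request."""
    return {"total": len(table), "rows": table[start:end]}

page = fetch_window(0, 3)
print(page["total"], [r["id"] for r in page["rows"]])  # 100000 [0, 1, 2]
```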
## Cannot use JupyterWidget
Unfortunately, using the Jupyter widget is not a workaround:
```python
import pandas as pd
import panel as pn
from perspective.widget import PerspectiveWidget
pn.extension("ipywidgets")
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=["A", "B", "C"])
p = PerspectiveWidget(df)
pn.pane.IPyWidget(p).servable()
```

| open | 2024-10-05T04:43:52Z | 2024-10-07T19:41:28Z | https://github.com/holoviz/panel/issues/7359 | [
"type: enhancement"
] | MarcSkovMadsen | 3 |
huggingface/transformers | tensorflow | 36,585 | Inconsistent Outputs When Using Flash Attention 2 and SDPA Attention with Attention Mask | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.18
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.4.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: True
- GPU type: NVIDIA A800-SXM4-40GB
### Who can help?
@ylacombe, @eustlb
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi,
I am encountering an issue where the outputs of flash_attention_2 and sdpa attention implementations are inconsistent when an attention mask is used. Below is the code I used to test this:
```python
import torch
import torch.nn as nn
from transformers import WhisperModel
from transformers.modeling_outputs import BaseModelOutput
def make_pad_mask(lengths: torch.Tensor) -> torch.Tensor:
"""
Args:
lengths:
A 1-D tensor containing sentence lengths.
Returns:
Return a 2-D bool tensor, where masked positions
are filled with `True` and non-masked positions are
filled with `False`.
>>> lengths = torch.tensor([1, 3, 2, 5])
>>> make_pad_mask(lengths)
tensor([[False, True, True, True, True],
[False, False, False, True, True],
[False, False, True, True, True],
[False, False, False, False, False]])
"""
assert lengths.ndim == 1, lengths.ndim
max_len = lengths.max()
n = lengths.size(0)
expaned_lengths = torch.arange(max_len).expand(n, max_len).to(lengths)
return expaned_lengths >= lengths.unsqueeze(1)
def whisper_encoder_dynamic_length_forward(
self,
input_features,
attention_mask=None,
head_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
expected_seq_length = (
self.config.max_source_positions * self.conv1.stride[0] * self.conv2.stride[0]
)
if input_features.shape[-1] > expected_seq_length:
raise ValueError(
f"Whisper expects the mel input features to be of length less equal {expected_seq_length}, but found {input_features.shape[-1]}."
)
output_attentions = (
output_attentions
if output_attentions is not None
else self.config.output_attentions
)
output_hidden_states = (
output_hidden_states
if output_hidden_states is not None
else self.config.output_hidden_states
)
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
inputs_embeds = nn.functional.gelu(self.conv1(input_features))
inputs_embeds = nn.functional.gelu(self.conv2(inputs_embeds))
inputs_embeds = inputs_embeds.permute(0, 2, 1)
embed_pos = self.embed_positions.weight
hidden_states = (
inputs_embeds + embed_pos[: inputs_embeds.shape[1], :]
) # NOTE: modified here
hidden_states = nn.functional.dropout(
hidden_states, p=self.dropout, training=self.training
)
encoder_states = () if output_hidden_states else None
all_attentions = () if output_attentions else None
# check if head_mask has a correct number of layers specified if desired
if head_mask is not None:
assert head_mask.size()[0] == (
len(self.layers)
), f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
for idx, encoder_layer in enumerate(self.layers):
if output_hidden_states:
encoder_states = encoder_states + (hidden_states,)
# add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
to_drop = False
if self.training:
dropout_probability = torch.rand([])
if dropout_probability < self.layerdrop: # skip the layer
to_drop = True
if to_drop:
layer_outputs = (None, None)
else:
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(
encoder_layer.__call__,
hidden_states,
attention_mask,
(head_mask[idx] if head_mask is not None else None),
output_attentions,
)
else:
layer_outputs = encoder_layer(
hidden_states,
attention_mask,
layer_head_mask=(head_mask[idx] if head_mask is not None else None),
output_attentions=output_attentions,
)
hidden_states = layer_outputs[0]
if output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
hidden_states = self.layer_norm(hidden_states)
if output_hidden_states:
encoder_states = encoder_states + (hidden_states,)
if not return_dict:
return tuple(
v for v in [hidden_states, encoder_states, all_attentions] if v is not None
)
return BaseModelOutput(
last_hidden_state=hidden_states,
hidden_states=encoder_states,
attentions=all_attentions,
)
def forward_whisper_encoder_sdpa(
encoder,
x: torch.Tensor,
x_lens: torch.Tensor,
):
x = x.permute(0, 2, 1)
audio_feat_lengths = (x_lens - 1) // 2 + 1
encoder_out_lens = audio_feat_lengths
batch_size, _, max_mel_seq_len = x.shape
max_seq_len = (max_mel_seq_len - 1) // 2 + 1
# Create a sequence tensor of shape (batch_size, max_seq_len)
seq_range = (
torch.arange(
0,
max_seq_len,
dtype=audio_feat_lengths.dtype,
device=audio_feat_lengths.device,
)
.unsqueeze(0)
.expand(batch_size, max_seq_len)
)
lengths_expand = audio_feat_lengths.unsqueeze(1).expand(batch_size, max_seq_len)
#Create mask
padding_mask = seq_range >= lengths_expand
audio_attention_mask_ = padding_mask.view(batch_size, 1, 1, max_seq_len).expand(
batch_size, 1, max_seq_len, max_seq_len
)
audio_attention_mask = audio_attention_mask_.to(
dtype=encoder.conv1.weight.dtype,
device=encoder.conv1.weight.device,
)
audio_attention_mask[audio_attention_mask_] = float("-inf")
encoder_out = encoder(
input_features=x, attention_mask=audio_attention_mask
).last_hidden_state
assert encoder_out_lens.max() == encoder_out.size(1)
return encoder_out
def forward_whisper_encoder_flashatten2(
encoder,
x: torch.Tensor,
x_lens: torch.Tensor,
):
x = x.permute(0, 2, 1)
audio_feat_lengths = (x_lens - 1) // 2 + 1
attention_mask = ~make_pad_mask(audio_feat_lengths)
print(attention_mask)
encoder_out = encoder(
input_features=x, attention_mask=attention_mask
).last_hidden_state
return encoder_out
device = torch.device("cuda:0")
speech_encoder_sdpa = WhisperModel.from_pretrained("speech_ssl_models/whisper-large-v3", attn_implementation='sdpa', torch_dtype=torch.float16).encoder
speech_encoder_sdpa.__class__.forward = whisper_encoder_dynamic_length_forward
speech_encoder_sdpa.to(device)
speech_encoder_flash = WhisperModel.from_pretrained("speech_ssl_models/whisper-large-v3", attn_implementation='flash_attention_2', torch_dtype=torch.float16).encoder
speech_encoder_flash.__class__.forward = whisper_encoder_dynamic_length_forward
speech_encoder_flash.to(device)
x = torch.randn(2, 100, 128).half().to(device)
x_lens = torch.tensor([100, 60]).long().to(device)
out1 = forward_whisper_encoder_sdpa(speech_encoder_sdpa, x, x_lens)
out2 = forward_whisper_encoder_flashatten2(speech_encoder_flash, x, x_lens)
print("*******************whisper-sdpa**********************************\n", out1.shape, out1)
print("*******************whisper-flash-attentions2*********************\n", out2.shape, out2)
print("********************outputs allclose*****************************\n",torch.allclose(out1,out2,atol=1e-5))
```
### Expected behavior
The outputs of the two attention implementations are not consistent when an attention mask is used. Could you please help me understand why this is happening and how to resolve it?
Thank you!
You can replace the model path "speech_ssl_models/whisper-large-v3" with the appropriate model path you are using. | closed | 2025-03-06T12:57:07Z | 2025-03-11T07:41:32Z | https://github.com/huggingface/transformers/issues/36585 | [
"bug"
] | tartarleft | 11 |
jazzband/django-oauth-toolkit | django | 835 | Returning the token from `get_token_response` along with the HTTPResponse in introspect view | **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
I need to add some extra headers to the response coming from `get_token_response` in the introspect view and then forward it. In my case I need to get some attribute of the token, like `token.some_attribute`, and add it as a header on the response. Since I don't have access to `token` in the `get` function, I have to query the DB again.
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
Return the token from `get_token_response` along with the `HttpResponse` in the introspect view, so that if we need a token attribute in the `get` or `post` method we do not have to query the token from the DB again.
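A sketch of the requested shape. All names here are illustrative stand-ins, not the actual django-oauth-toolkit or Django API:

```python
# Hypothetical sketch of the proposal: get_token_response hands back the
# token it already fetched, so the view can decorate the response without
# a second DB query. FakeToken/FakeResponse stand in for the real
# django-oauth-toolkit token model and django.http.HttpResponse.
class FakeToken:
    some_attribute = "tenant-42"

class FakeResponse(dict):
    """dict-like stand-in for HttpResponse header assignment."""

def get_token_response(token):               # hypothetical new signature
    response = FakeResponse(active=True)
    return token, response                   # token returned alongside

token, response = get_token_response(FakeToken())
response["X-Some-Attribute"] = token.some_attribute  # no extra query
print(response["X-Some-Attribute"])  # tenant-42
```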
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| open | 2020-04-23T18:54:24Z | 2020-04-23T18:54:24Z | https://github.com/jazzband/django-oauth-toolkit/issues/835 | [
"enhancement"
] | anveshagarwal | 0 |
fastapi/sqlmodel | pydantic | 307 | Locking mechanisms for preventing data integrity issues in concurrent data access scenarios | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
n.a.
```
### Description
So far SQLModel does not provide mechanisms for preventing data integrity issues in concurrent data access scenarios out of the box.
### Wanted Solution
It would be great to have something similar to the [`LockModeType`s](https://docs.oracle.com/javaee/7/api/javax/persistence/LockModeType.html) provided by the Java Persistence API (JPA).
Optimistic locking (`@Lock(LockModeType.OPTIMISTIC_FORCE_INCREMENT)`):
- OPTIMISTIC_FORCE_INCREMENT - Optimistic lock, with version update on the database level.
Pessimistic locking (`@Lock(LockModeType.<...>)`):
- PESSIMISTIC_READ - acquire a shared lock, and the locked entity cannot be changed before a transaction commit.
- PESSIMISTIC_WRITE - acquire an exclusive lock, and the locked entity can be changed.
- PESSIMISTIC_FORCE_INCREMENT - acquire an exclusive lock and update the version column, the locked entity can be changed
When using OPTIMISTIC_FORCE_INCREMENT-based optimistic locking or pessimistic locking, table rows are locked at the database level, so this should be in the scope of SQLModel.
Another option would be to provide optimistic locking on the class instance level instead of the database level, similar to the options available in Java. Hibernate provides optimistic locking via e.g. `@OptimisticLocking(type = OptimisticLockType.<...>)`:
- ALL - perform locking based on all fields
- DIRTY - perform locking based on only changed fields
- VERSION - perform locking using a dedicated version column
In SQLModel one could implement this functionality using Pydantic classes instead of the SQLAlchemy classes.
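For the VERSION flavor, what the database-level mechanism amounts to can be sketched with nothing but stdlib `sqlite3` (SQLAlchemy offers this natively via its `version_id_col` mapper argument, which SQLModel table models should be able to reach through `__mapper_args__`; the table and values below are illustrative):

```python
import sqlite3

# VERSION-style optimistic locking at the SQL level: the UPDATE only
# succeeds if the row still carries the version we originally read.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hero (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
con.execute("INSERT INTO hero VALUES (1, 'Rusty', 0)")

def save(con, hero_id, new_name, read_version):
    cur = con.execute(
        "UPDATE hero SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, hero_id, read_version),
    )
    return cur.rowcount == 1        # False => a concurrent writer won

print(save(con, 1, "Rustier", 0))   # True: first write succeeds
print(save(con, 1, "Rustiest", 0))  # False: stale version, rejected
```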
### Wanted Code
```python
n.a.
```
### Alternatives
Reinvent the wheel over and over again :smile:
### Operating System
Other
### Operating System Details
n.a.
### SQLModel Version
n.a.
### Python Version
n.a.
### Additional Context
The article [Optimistic and Pessimistic Locking in JPA](https://hackernoon.com/optimistic-and-pessimistic-locking-in-jpa) is a nice resource about what types of locking is provided by and used in JPA. | open | 2022-04-19T19:54:17Z | 2025-01-09T02:56:50Z | https://github.com/fastapi/sqlmodel/issues/307 | [
"feature"
] | fkromer | 4 |
axnsan12/drf-yasg | django | 215 | Custom 'x-my-keys' in operations | In my project, I need to add custom 'x-some-keys' to the different paths of the generated json. Is this possible somehow using @swagger_auto_schema decorator?
Thanks. | closed | 2018-09-18T15:43:52Z | 2018-10-09T21:36:09Z | https://github.com/axnsan12/drf-yasg/issues/215 | [] | mbarchein | 5 |
pmaji/crypto-whale-watching-app | plotly | 20 | Order Book chart | Hello,
in my opinion it would be nice to see the total size of the order book as a line in the background.
Since we already have the data, it should not be much of a problem.
My only issue is that I'm not familiar with Plotly...
Thanks and Greets
Theimo
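The "total size" line described here is just a cumulative depth curve, i.e. a running sum of order sizes as you walk away from the best price. A minimal sketch with illustrative numbers:

```python
from itertools import accumulate

# Depth-curve sketch: the background line is the running total of order
# sizes as you move away from the best price.
bid_sizes = [2.0, 0.5, 1.25, 3.0]    # sizes at descending bid prices
depth = list(accumulate(bid_sizes))
print(depth)  # [2.0, 2.5, 3.75, 6.75]
```

In Plotly this would become one extra `Scatter` trace of `depth` against price, drawn behind the existing traces.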
Edit:
Needed Tasks:
- [x] Sum up data for graph
- [x] Add code for Plotly | closed | 2018-02-17T06:54:05Z | 2018-03-19T20:59:21Z | https://github.com/pmaji/crypto-whale-watching-app/issues/20 | [] | theimo1221 | 25 |
ray-project/ray | tensorflow | 51,485 | CI test windows://python/ray/tests:test_runtime_env_plugin is consistently_failing | CI test **windows://python/ray/tests:test_runtime_env_plugin** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aad4-a541-45a9-b1ef-d27f9a1da383
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4168-a0da-6cbdc8cbd2df
DataCaseName-windows://python/ray/tests:test_runtime_env_plugin-END
Managed by OSS Test Policy | closed | 2025-03-18T23:07:47Z | 2025-03-19T21:54:28Z | https://github.com/ray-project/ray/issues/51485 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
agronholm/anyio | asyncio | 130 | Should we expose the user to 4-tuple IPv6 socket addresses? | The result of `sock.getsockname()` for IPv6 sockets is a 4-tuple (address, port, flowinfo, scopeid). The last two fields are specific to IPv6. The flowinfo field is useless/deprecated and the scopeid is only meaningful for link-local addresses. The important part here is that the address field can carry a scope ID by appending it to the address with a `%` separator, like `fe80::dead:beef%1`.
Most people are not used to dealing with 4-tuple socket addresses, so maybe it would make sense to make it part of the address always. On the other hand, others might *expect* the usual behavior and get confused when they get 2-tuples instead! | closed | 2020-07-25T15:25:14Z | 2020-07-31T13:01:31Z | https://github.com/agronholm/anyio/issues/130 | [
"design"
] | agronholm | 1 |
facebookresearch/fairseq | pytorch | 5,537 | Inferring the decoder and encoder of a transformer model in a streaming manner | Is it possible to infer the model separately through encoder.onnx and decoder.onnx? | open | 2024-08-29T03:03:19Z | 2024-08-29T03:03:19Z | https://github.com/facebookresearch/fairseq/issues/5537 | [
"enhancement",
"help wanted",
"needs triage"
] | pengpengtao | 0 |
google-research/bert | tensorflow | 1,043 | how compress the fine-tune model to small? | the size of base google model is abount 400M ,but when I use it for fine-tune ,the size of output model is about 1.2G,how can I compress it ? thanks a lot! | open | 2020-03-27T07:42:33Z | 2020-04-22T16:25:38Z | https://github.com/google-research/bert/issues/1043 | [] | bboyxu5928 | 1 |
nvbn/thefuck | python | 1,370 | Atuin support for history | Atuin is a much better shell history that's searchable, and is really convinient at times, however it uses an SQL database to store history, which means thefuck can't use the command history as reference the same it can with bash and zsh history
It would be nice if thefuck could support Atuin for history | closed | 2023-04-05T02:58:32Z | 2023-04-10T19:59:02Z | https://github.com/nvbn/thefuck/issues/1370 | [] | StandingPadAnimations | 0 |
christabor/flask_jsondash | plotly | 126 | Add embeddable mode | ### Use case:
As a user I need to be able to create more complex dashboards and pages than what is supported in the schema/tool. But I don't want the tool to be so complex that the schema and language effectively become a DSL and are as complex (or more so) than just doing it all myself.
To that end, it would be easiest to embed using a traditional iframe. This allows more complex dashboards to wrap this one.
### Tradeoffs:
* Any con associated with using iframes in general.
### Implementation:
* This would be used as an iframe.
* An `embedded=true` query parameter will be inserted into the url which is then used within the flask blueprint to toggle features off. Exactly the same implementation method as the existing `jsondash_demo_mode` option.
* No way to avoid duplicate asset loading/sharing of assets (e.g. I use d3 in my site and also use it within here).
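The `embedded=true` toggle in the implementation notes above might look like this (stdlib sketch of the query-string check; the real code would read Flask's `request.args` inside the blueprint, mirroring the existing `jsondash_demo_mode` handling):

```python
from urllib.parse import urlparse, parse_qs

def is_embedded(url):
    """True when the wrapping page requested ?embedded=true."""
    qs = parse_qs(urlparse(url).query)
    return qs.get("embedded", ["false"])[0].lower() == "true"

print(is_embedded("/charts/mydash?embedded=true"))  # True
print(is_embedded("/charts/mydash"))                # False
```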
### Requirements:
* Should have a transparent bg so it fits well with other wrapped page designs
* Should hide all titles and editable elements
* Should hide all dragging/dropping/resizing ability
* Should hide all large titles and buttons, only showing the charts and their refresh buttons.
### Testing
Unit tests will suffice
### Examples
Config should be provided that creates a single full width chart that contains the embedded version as an iframe. This gives people an idea and provides fixtures for testing/demoing.
| closed | 2017-06-14T17:38:15Z | 2017-06-15T19:20:06Z | https://github.com/christabor/flask_jsondash/issues/126 | [
"enhancement",
"new chart"
] | christabor | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 647 | Can the viewer see the 3D rendering converge (during train) in real time? like instant ngp | i ran ./gaussian_splatting/SIBR_viewers/viewers/bin/SIBR_gaussianViewer_app.exe -m ./owl_van/ --rendering-size 1500
However, they all come out already rendered. I would like to see them converge in real time like instant ngp, is that possible? | closed | 2024-02-03T07:45:07Z | 2024-02-09T21:53:26Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/647 | [] | hanjoonwon | 1 |
explosion/spaCy | deep-learning | 13,175 | Downloading model error: ModuleNotFoundError: No module named 'spacy.symbols' | When following the `README.md` guide, ***IF*** the repo is cloned locally and the virtual environment is set up in the `spacy` folder, it will fail to download a model. I have been able to download a model through my IDE and at the CLI, but not if I am in the same directory as the repo.
The error is: `ModuleNotFoundError: No module named 'spacy.symbols'`. Below are the steps used once the repo was cloned, the error message, and stack trace.
It appears an issue was previously opened for the same error and was not resolved: [https://github.com/explosion/spaCy/issues/12847](https://github.com/explosion/spaCy/issues/12847).
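A likely explanation (an assumption worth verifying): when the working directory is the cloned repo root, Python resolves `import spacy` to the local, un-built source tree, which lacks the compiled `spacy.symbols` extension, instead of the wheel installed in the virtualenv. A minimal reproduction of that shadowing:

```python
import importlib, os, sys, tempfile

# Fake an un-built "spacy" source checkout and put it first on sys.path,
# which is effectively what running Python from the cloned repo root does.
checkout = tempfile.mkdtemp()
os.makedirs(os.path.join(checkout, "spacy"))
open(os.path.join(checkout, "spacy", "__init__.py"), "w").close()
sys.path.insert(0, checkout)

try:
    importlib.import_module("spacy.symbols")
except ModuleNotFoundError as err:
    print(err)  # No module named 'spacy.symbols'
```

If that is the cause, the fix is simply to run `python -m spacy download en_core_web_sm` (or any model download) from a directory that does not contain the source checkout.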
---
#### Windows
<details>
```(.env) colto@TIMMY C:\Users\colto\Documents\GitHub\test_folder\spacy>pip list
Package Version
---------- -------
pip 23.2.1
setuptools 65.5.0
[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
(.env) colto@TIMMY C:\Users\colto\Documents\GitHub\test_folder\spacy>python -m pip install --upgrade pip
Requirement already satisfied: pip in c:\users\colto\documents\github\test_folder\spacy\.env\lib\site-pac
kages (23.2.1)
Collecting pip
Obtaining dependency information for pip from https://files.pythonhosted.org/packages/47/6a/453160888fa
b7c6a432a6e25f8afe6256d0d9f2cbd25971021da6491d899/pip-23.3.1-py3-none-any.whl.metadata
Using cached pip-23.3.1-py3-none-any.whl.metadata (3.5 kB)
Using cached pip-23.3.1-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 23.2.1
Uninstalling pip-23.2.1:
Successfully uninstalled pip-23.2.1
Successfully installed pip-23.3.1
(.env) colto@TIMMY C:\Users\colto\Documents\GitHub\test_folder\spacy>pip install -U pip setuptools wheel
Requirement already satisfied: pip in c:\users\colto\documents\github\test_folder\spacy\.env\lib\site-pac
kages (23.3.1)
Requirement already satisfied: setuptools in c:\users\colto\documents\github\test_folder\spacy\.env\lib\s
ite-packages (65.5.0)
Collecting setuptools
Downloading setuptools-69.0.2-py3-none-any.whl.metadata (6.3 kB)
Collecting wheel
Downloading wheel-0.42.0-py3-none-any.whl.metadata (2.2 kB)
Downloading setuptools-69.0.2-py3-none-any.whl (819 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 819.5/819.5 kB 2.6 MB/s eta 0:00:00
Downloading wheel-0.42.0-py3-none-any.whl (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.4/65.4 kB 3.7 MB/s eta 0:00:00
Installing collected packages: wheel, setuptools
Attempting uninstall: setuptools
Found existing installation: setuptools 65.5.0
Uninstalling setuptools-65.5.0:
Successfully uninstalled setuptools-65.5.0
Successfully installed setuptools-69.0.2 wheel-0.42.0
(.env) colto@TIMMY C:\Users\colto\Documents\GitHub\test_folder\spacy>pip install -U spacy
Collecting spacy
Using cached spacy-3.7.2-cp311-cp311-win_amd64.whl.metadata (26 kB)
Collecting spacy-legacy<3.1.0,>=3.0.11 (from spacy)
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0 (from spacy)
Using cached spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting murmurhash<1.1.0,>=0.28.0 (from spacy)
Using cached murmurhash-1.0.10-cp311-cp311-win_amd64.whl.metadata (2.0 kB)
Collecting cymem<2.1.0,>=2.0.2 (from spacy)
Using cached cymem-2.0.8-cp311-cp311-win_amd64.whl.metadata (8.6 kB)
Collecting preshed<3.1.0,>=3.0.2 (from spacy)
Using cached preshed-3.0.9-cp311-cp311-win_amd64.whl.metadata (2.2 kB)
Collecting thinc<8.3.0,>=8.1.8 (from spacy)
Using cached thinc-8.2.1-cp311-cp311-win_amd64.whl.metadata (15 kB)
Collecting wasabi<1.2.0,>=0.9.1 (from spacy)
Using cached wasabi-1.1.2-py3-none-any.whl.metadata (28 kB)
Collecting srsly<3.0.0,>=2.4.3 (from spacy)
Using cached srsly-2.4.8-cp311-cp311-win_amd64.whl.metadata (20 kB)
Collecting catalogue<2.1.0,>=2.0.6 (from spacy)
Using cached catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
Collecting weasel<0.4.0,>=0.1.0 (from spacy)
Using cached weasel-0.3.4-py3-none-any.whl.metadata (4.7 kB)
Collecting typer<0.10.0,>=0.3.0 (from spacy)
Using cached typer-0.9.0-py3-none-any.whl (45 kB)
Collecting smart-open<7.0.0,>=5.2.1 (from spacy)
Using cached smart_open-6.4.0-py3-none-any.whl.metadata (21 kB)
Collecting tqdm<5.0.0,>=4.38.0 (from spacy)
Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting requests<3.0.0,>=2.13.0 (from spacy)
Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 (from spacy)
Using cached pydantic-2.5.2-py3-none-any.whl.metadata (65 kB)
Collecting jinja2 (from spacy)
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Requirement already satisfied: setuptools in c:\users\colto\documents\github\test_folder\spacy\.env\lib\s
ite-packages (from spacy) (69.0.2)
Collecting packaging>=20.0 (from spacy)
Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting langcodes<4.0.0,>=3.2.0 (from spacy)
Using cached langcodes-3.3.0-py3-none-any.whl (181 kB)
Collecting numpy>=1.19.0 (from spacy)
Using cached numpy-1.26.2-cp311-cp311-win_amd64.whl.metadata (61 kB)
Collecting annotated-types>=0.4.0 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy)
Using cached annotated_types-0.6.0-py3-none-any.whl.metadata (12 kB)
Collecting pydantic-core==2.14.5 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy)
Using cached pydantic_core-2.14.5-cp311-none-win_amd64.whl.metadata (6.6 kB)
Collecting typing-extensions>=4.6.1 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy)
Using cached typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests<3.0.0,>=2.13.0->spacy)
Using cached charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl.metadata (34 kB)
Collecting idna<4,>=2.5 (from requests<3.0.0,>=2.13.0->spacy)
Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests<3.0.0,>=2.13.0->spacy)
Using cached urllib3-2.1.0-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests<3.0.0,>=2.13.0->spacy)
Using cached certifi-2023.11.17-py3-none-any.whl.metadata (2.2 kB)
Collecting blis<0.8.0,>=0.7.8 (from thinc<8.3.0,>=8.1.8->spacy)
Using cached blis-0.7.11-cp311-cp311-win_amd64.whl.metadata (7.6 kB)
Collecting confection<1.0.0,>=0.0.1 (from thinc<8.3.0,>=8.1.8->spacy)
Using cached confection-0.1.4-py3-none-any.whl.metadata (19 kB)
Collecting colorama (from tqdm<5.0.0,>=4.38.0->spacy)
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting click<9.0.0,>=7.1.1 (from typer<0.10.0,>=0.3.0->spacy)
Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting cloudpathlib<0.17.0,>=0.7.0 (from weasel<0.4.0,>=0.1.0->spacy)
Using cached cloudpathlib-0.16.0-py3-none-any.whl.metadata (14 kB)
Collecting MarkupSafe>=2.0 (from jinja2->spacy)
Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata (3.1 kB)
Using cached spacy-3.7.2-cp311-cp311-win_amd64.whl (12.1 MB)
Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Using cached cymem-2.0.8-cp311-cp311-win_amd64.whl (39 kB)
Using cached murmurhash-1.0.10-cp311-cp311-win_amd64.whl (25 kB)
Using cached numpy-1.26.2-cp311-cp311-win_amd64.whl (15.8 MB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Using cached preshed-3.0.9-cp311-cp311-win_amd64.whl (122 kB)
Using cached pydantic-2.5.2-py3-none-any.whl (381 kB)
Using cached pydantic_core-2.14.5-cp311-none-win_amd64.whl (1.9 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached smart_open-6.4.0-py3-none-any.whl (57 kB)
Using cached spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Using cached srsly-2.4.8-cp311-cp311-win_amd64.whl (479 kB)
Using cached thinc-8.2.1-cp311-cp311-win_amd64.whl (1.5 MB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached wasabi-1.1.2-py3-none-any.whl (27 kB)
Using cached weasel-0.3.4-py3-none-any.whl (50 kB)
Using cached annotated_types-0.6.0-py3-none-any.whl (12 kB)
Using cached blis-0.7.11-cp311-cp311-win_amd64.whl (6.6 MB)
Using cached certifi-2023.11.17-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl (99 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached cloudpathlib-0.16.0-py3-none-any.whl (45 kB)
Using cached confection-0.1.4-py3-none-any.whl (35 kB)
Using cached idna-3.6-py3-none-any.whl (61 kB)
Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl (17 kB)
Using cached typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Using cached urllib3-2.1.0-py3-none-any.whl (104 kB)
Installing collected packages: cymem, urllib3, typing-extensions, spacy-loggers, spacy-legacy, smart-open
, packaging, numpy, murmurhash, MarkupSafe, langcodes, idna, colorama, cloudpathlib, charset-normalizer,
certifi, catalogue, annotated-types, wasabi, tqdm, srsly, requests, pydantic-core, preshed, jinja2, click
, blis, typer, pydantic, confection, weasel, thinc, spacy
Successfully installed MarkupSafe-2.1.3 annotated-types-0.6.0 blis-0.7.11 catalogue-2.0.10 certifi-2023.1
1.17 charset-normalizer-3.3.2 click-8.1.7 cloudpathlib-0.16.0 colorama-0.4.6 confection-0.1.4 cymem-2.0.8
idna-3.6 jinja2-3.1.2 langcodes-3.3.0 murmurhash-1.0.10 numpy-1.26.2 packaging-23.2 preshed-3.0.9 pydant
ic-2.5.2 pydantic-core-2.14.5 requests-2.31.0 smart-open-6.4.0 spacy-3.7.2 spacy-legacy-3.0.12 spacy-logg
ers-1.0.5 srsly-2.4.8 thinc-8.2.1 tqdm-4.66.1 typer-0.9.0 typing-extensions-4.8.0 urllib3-2.1.0 wasabi-1.
1.2 weasel-0.3.4
(.env) colto@TIMMY C:\Users\colto\Documents\GitHub\test_folder\spacy>python -m spacy download en_core_web
_sm
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 148, in _get_module_details
File "<frozen runpy>", line 112, in _get_module_details
File "C:\Users\colto\Documents\GitHub\test_folder\spacy\spacy\__init__.py", line 13, in <module>
from . import pipeline # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\colto\Documents\GitHub\test_folder\spacy\spacy\pipeline\__init__.py", line 1, in <module
>
from .attributeruler import AttributeRuler
File "C:\Users\colto\Documents\GitHub\test_folder\spacy\spacy\pipeline\attributeruler.py", line 6, in <
module>
from .. import util
File "C:\Users\colto\Documents\GitHub\test_folder\spacy\spacy\util.py", line 75, in <module>
from .symbols import ORTH
ModuleNotFoundError: No module named 'spacy.symbols'
```
</details>
#### Linux
<details>
```
colton@tano:~/GitHub$ git clone https://github.com/explosion/spacy.git
Cloning into 'spacy'...
remote: Enumerating objects: 111868, done.
remote: Counting objects: 100% (667/667), done.
remote: Compressing objects: 100% (363/363), done.
remote: Total 111868 (delta 361), reused 558 (delta 292), pack-reused 111201
Receiving objects: 100% (111868/111868), 197.96 MiB | 19.63 MiB/s, done.
Resolving deltas: 100% (84450/84450), done.
colton@tano:~/GitHub$ cd spacy/
colton@tano:~/GitHub/spacy$ python3 -m venv .env
colton@tano:~/GitHub/spacy$ source .env/bin/activate
(.env)
colton@tano:~/GitHub/spacy$ pip install -U pip setuptools wheel
Requirement already satisfied: pip in ./.env/lib/python3.10/site-packages (22.0.2)
Collecting pip
Using cached pip-23.3.1-py3-none-any.whl (2.1 MB)
Requirement already satisfied: setuptools in ./.env/lib/python3.10/site-packages (59.6.0)
Collecting setuptools
Using cached setuptools-69.0.2-py3-none-any.whl (819 kB)
Collecting wheel
Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Installing collected packages: wheel, setuptools, pip
Attempting uninstall: setuptools
Found existing installation: setuptools 59.6.0
Uninstalling setuptools-59.6.0:
Successfully uninstalled setuptools-59.6.0
Attempting uninstall: pip
Found existing installation: pip 22.0.2
Uninstalling pip-22.0.2:
Successfully uninstalled pip-22.0.2
Successfully installed pip-23.3.1 setuptools-69.0.2 wheel-0.42.0
(.env)
colton@tano:~/GitHub/spacy$ pip install -U spacy
Collecting spacy
Using cached spacy-3.7.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (25 kB)
Collecting spacy-legacy<3.1.0,>=3.0.11 (from spacy)
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0 (from spacy)
Using cached spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting murmurhash<1.1.0,>=0.28.0 (from spacy)
Using cached murmurhash-1.0.10-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.0 kB)
Collecting cymem<2.1.0,>=2.0.2 (from spacy)
Using cached cymem-2.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.4 kB)
Collecting preshed<3.1.0,>=3.0.2 (from spacy)
Using cached preshed-3.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.2 kB)
Collecting thinc<8.3.0,>=8.1.8 (from spacy)
Using cached thinc-8.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (15 kB)
Collecting wasabi<1.2.0,>=0.9.1 (from spacy)
Using cached wasabi-1.1.2-py3-none-any.whl.metadata (28 kB)
Collecting srsly<3.0.0,>=2.4.3 (from spacy)
Using cached srsly-2.4.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting catalogue<2.1.0,>=2.0.6 (from spacy)
Using cached catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
Collecting weasel<0.4.0,>=0.1.0 (from spacy)
Using cached weasel-0.3.4-py3-none-any.whl.metadata (4.7 kB)
Collecting typer<0.10.0,>=0.3.0 (from spacy)
Using cached typer-0.9.0-py3-none-any.whl (45 kB)
Collecting smart-open<7.0.0,>=5.2.1 (from spacy)
Using cached smart_open-6.4.0-py3-none-any.whl.metadata (21 kB)
Collecting tqdm<5.0.0,>=4.38.0 (from spacy)
Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting requests<3.0.0,>=2.13.0 (from spacy)
Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 (from spacy)
Using cached pydantic-2.5.2-py3-none-any.whl.metadata (65 kB)
Collecting jinja2 (from spacy)
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Requirement already satisfied: setuptools in ./.env/lib/python3.10/site-packages (from spacy) (69.0.2)
Collecting packaging>=20.0 (from spacy)
Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting langcodes<4.0.0,>=3.2.0 (from spacy)
Using cached langcodes-3.3.0-py3-none-any.whl (181 kB)
Collecting numpy>=1.19.0 (from spacy)
Using cached numpy-1.26.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting annotated-types>=0.4.0 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy)
Using cached annotated_types-0.6.0-py3-none-any.whl.metadata (12 kB)
Collecting pydantic-core==2.14.5 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy)
Using cached pydantic_core-2.14.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.5 kB)
Collecting typing-extensions>=4.6.1 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy)
Using cached typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests<3.0.0,>=2.13.0->spacy)
Using cached charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests<3.0.0,>=2.13.0->spacy)
Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests<3.0.0,>=2.13.0->spacy)
Using cached urllib3-2.1.0-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests<3.0.0,>=2.13.0->spacy)
Using cached certifi-2023.11.17-py3-none-any.whl.metadata (2.2 kB)
Collecting blis<0.8.0,>=0.7.8 (from thinc<8.3.0,>=8.1.8->spacy)
Using cached blis-0.7.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.4 kB)
Collecting confection<1.0.0,>=0.0.1 (from thinc<8.3.0,>=8.1.8->spacy)
Using cached confection-0.1.4-py3-none-any.whl.metadata (19 kB)
Collecting click<9.0.0,>=7.1.1 (from typer<0.10.0,>=0.3.0->spacy)
Using cached click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting cloudpathlib<0.17.0,>=0.7.0 (from weasel<0.4.0,>=0.1.0->spacy)
Using cached cloudpathlib-0.16.0-py3-none-any.whl.metadata (14 kB)
Collecting MarkupSafe>=2.0 (from jinja2->spacy)
Using cached MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Using cached spacy-3.7.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB)
Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Using cached cymem-2.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (46 kB)
Using cached murmurhash-1.0.10-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (29 kB)
Using cached numpy-1.26.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Using cached preshed-3.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (156 kB)
Using cached pydantic-2.5.2-py3-none-any.whl (381 kB)
Using cached pydantic_core-2.14.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached smart_open-6.4.0-py3-none-any.whl (57 kB)
Using cached spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Using cached srsly-2.4.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (493 kB)
Using cached thinc-8.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (920 kB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached wasabi-1.1.2-py3-none-any.whl (27 kB)
Using cached weasel-0.3.4-py3-none-any.whl (50 kB)
Using cached annotated_types-0.6.0-py3-none-any.whl (12 kB)
Using cached blis-0.7.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (10.2 MB)
Using cached certifi-2023.11.17-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (142 kB)
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Using cached cloudpathlib-0.16.0-py3-none-any.whl (45 kB)
Using cached confection-0.1.4-py3-none-any.whl (35 kB)
Using cached idna-3.6-py3-none-any.whl (61 kB)
Using cached MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Using cached typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Using cached urllib3-2.1.0-py3-none-any.whl (104 kB)
Installing collected packages: cymem, wasabi, urllib3, typing-extensions, tqdm, spacy-loggers, spacy-legacy, smart-open, packaging, numpy, murmurhash, MarkupSafe, langcodes, idna, click, charset-normalizer, certifi, catalogue, annotated-types, typer, srsly, requests, pydantic-core, preshed, jinja2, cloudpathlib, blis, pydantic, confection, weasel, thinc, spacy
Successfully installed MarkupSafe-2.1.3 annotated-types-0.6.0 blis-0.7.11 catalogue-2.0.10 certifi-2023.11.17 charset-normalizer-3.3.2 click-8.1.7 cloudpathlib-0.16.0 confection-0.1.4 cymem-2.0.8 idna-3.6 jinja2-3.1.2 langcodes-3.3.0 murmurhash-1.0.10 numpy-1.26.2 packaging-23.2 preshed-3.0.9 pydantic-2.5.2 pydantic-core-2.14.5 requests-2.31.0 smart-open-6.4.0 spacy-3.7.2 spacy-legacy-3.0.12 spacy-loggers-1.0.5 srsly-2.4.8 thinc-8.2.1 tqdm-4.66.1 typer-0.9.0 typing-extensions-4.8.0 urllib3-2.1.0 wasabi-1.1.2 weasel-0.3.4
(.env)
colton@tano:~/GitHub/spacy$ python3 -m spacy download en_core_web_sm
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.10/runpy.py", line 146, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/home/colton/GitHub/spacy/spacy/__init__.py", line 13, in <module>
from . import pipeline # noqa: F401
File "/home/colton/GitHub/spacy/spacy/pipeline/__init__.py", line 1, in <module>
from .attributeruler import AttributeRuler
File "/home/colton/GitHub/spacy/spacy/pipeline/attributeruler.py", line 6, in <module>
from .. import util
File "/home/colton/GitHub/spacy/spacy/util.py", line 75, in <module>
from .symbols import ORTH
ModuleNotFoundError: No module named 'spacy.symbols'
```
</details>
If the repo is not cloned locally and the virtual environment is not set up inside the `spacy` directory, the model download and setup are flawless on both machines I attempted this on.
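A minimal stand-alone sketch of the likely mechanism (this is my assumption, demonstrated with a stand-in package name rather than spaCy itself): running Python from inside a directory that contains a package's source tree makes the import resolve to that tree instead of the installed wheel, so compiled submodules such as `spacy.symbols` are missing.

```python
import os
import subprocess
import sys
import tempfile

# Create a bare "source tree" named shadowpkg and import from inside that
# directory: the local package wins over anything in site-packages, because
# the current directory is first on sys.path for `python -c`.
with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "shadowpkg"))
    open(os.path.join(tmp, "shadowpkg", "__init__.py"), "w").close()
    result = subprocess.run(
        [sys.executable, "-c", "import shadowpkg; print(shadowpkg.__file__)"],
        cwd=tmp, capture_output=True, text=True,
    )
    resolved = result.stdout.strip()
print(resolved)  # points into the temporary directory, not site-packages
```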
#### My Environment
**spaCy version:** 3.7.2
**Platform 1:** Windows 10 Home OS build: 19045.3570 64 bit (**Python version:** 3.11.5)
**Platform 2:** Pop!_OS 22.04 LTS "jammy" 64 bit (**Python version:** 3.10.12)
**Pipeline:** attempted to use en_core_web_sm
Colton | closed | 2023-12-04T21:23:58Z | 2023-12-05T18:12:50Z | https://github.com/explosion/spaCy/issues/13175 | [] | ojo4f3 | 1 |
jowilf/starlette-admin | sqlalchemy | 339 | Enhancement: Flask Admin like `can_edit`, `can_create`, `can_delete` and `can_view_details` features | **Is your feature request related to a problem? Please describe.**
I couldn't find a way to configure editing, creating, deleting and viewing details.
**Describe the solution you'd like**
If `can_edit` is set to `False`, the `Edit` button should be removed. I'm not sure whether we should still construct the endpoint for that action or not.
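A rough sketch of what these flags could look like, modeled on Flask-Admin's ModelView attributes (the class and method names here are hypothetical, not starlette-admin's actual API):

```python
class ModelView:
    # Hypothetical view flags, mirroring Flask-Admin's ModelView attributes.
    can_create = True
    can_edit = True
    can_delete = True
    can_view_details = True

    def allowed_actions(self):
        # The template layer would render buttons only for enabled actions.
        flags = {
            "create": self.can_create,
            "edit": self.can_edit,
            "delete": self.can_delete,
            "view_details": self.can_view_details,
        }
        return [name for name, enabled in flags.items() if enabled]

class ReadOnlyView(ModelView):
    can_create = can_edit = can_delete = False

print(ReadOnlyView().allowed_actions())  # ['view_details']
```

A read-only admin view would then only need to flip the class attributes off.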
**Describe alternatives you've considered**
[Flask Admin](https://flask-admin.readthedocs.io/en/latest/introduction/#modelview-configuration-attributes) has this feature, which probably would be a great reference.
| closed | 2023-10-18T18:32:00Z | 2023-10-19T12:37:35Z | https://github.com/jowilf/starlette-admin/issues/339 | [
"enhancement"
] | hasansezertasan | 2 |
feature-engine/feature_engine | scikit-learn | 397 | extend DatetimeFeatures to extract features from index as well | Hey @dodoarg
Since you created this transformer, I thought I would run this issue by you, in case you have some time on your hands and would be interested in this small addition to its functionality.
The idea is that the parameter `variables` in the init also takes the string `"index"` as input, and then extracts the datetime features from the index, which should of course be a datetime index.
It is common in time series data to have the datetime variable in the index of the df. And we are rolling out a new module on time series forecasting in the next release. Hence, the addition :)
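For concreteness, a small stdlib sketch of the kind of extraction this would amount to when pointed at a datetime index (purely illustrative, not feature-engine's implementation; the list here stands in for a DataFrame's DatetimeIndex):

```python
from datetime import datetime

# Stand-in for a DataFrame's DatetimeIndex.
index = [datetime(2022, 1, 1), datetime(2022, 1, 2), datetime(2022, 1, 3)]

# Conceptually what DatetimeFeatures(variables="index") could extract.
features = {
    "month": [ts.month for ts in index],
    "day_of_week": [ts.weekday() for ts in index],  # Monday=0 ... Sunday=6
}
print(features["day_of_week"])  # [5, 6, 0] -> Sat, Sun, Mon
```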
Hope you are doing well otherwise!
| closed | 2022-03-28T11:30:00Z | 2022-04-06T08:22:25Z | https://github.com/feature-engine/feature_engine/issues/397 | [] | solegalli | 2 |
littlecodersh/ItChat | api | 764 | Which protocol is this based on: web, PC, or iPad? | Which protocol is this based on: web, PC, or iPad? | open | 2018-11-30T08:20:35Z | 2018-11-30T11:05:02Z | https://github.com/littlecodersh/ItChat/issues/764 | [] | miniframework | 1 |
huggingface/diffusers | deep-learning | 10,656 | ControlNet union pipeline fails on multi-model | ### Describe the bug
All ControlNet types are typically defined inside the pipeline as below (example from `StableDiffusionXLControlNetPipeline`):
> controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel],
however, the `StableDiffusionXLControlNetUnionPipeline` defines it simply as:
> controlnet: ControlNetUnionModel
which defeats one of the main advantages of the union ControlNet: being able to perform multiple guidances with the same model.
For reference, ControlNetUnion was added via PR #10131.
Any changes to the txt2img pipeline should also be mirrored in the img2img and inpaint pipelines.
### Reproduction
```py
control1 = ControlNetUnionModel.from_single_file(...)
control2 = ControlNetUnionModel.from_single_file(...)
pipe = StableDiffusionXLControlNetUnionPipeline.from_single_file(..., controlnet=[control1, control2])
```
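For comparison, a minimal sketch of the normalization the other ControlNet pipelines perform on this argument, which the union pipeline could mirror (the class and function names below are hypothetical stand-ins, not diffusers' actual API):

```python
# Hypothetical stand-ins for the model classes involved.
class ControlNetUnionModel: ...

class MultiControlNetUnionModel:
    def __init__(self, nets):
        self.nets = list(nets)

def normalize_controlnet(controlnet):
    # Accept a single model, a list/tuple of models, or an existing
    # multi-model wrapper, as the non-union pipelines already do.
    if isinstance(controlnet, (list, tuple)):
        return MultiControlNetUnionModel(controlnet)
    return controlnet

single = ControlNetUnionModel()
multi = normalize_controlnet([ControlNetUnionModel(), ControlNetUnionModel()])
print(type(multi).__name__)  # MultiControlNetUnionModel
```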
### Logs
```shell
│ 256 │ │ if not isinstance(controlnet, ControlNetUnionModel): │
│ ❱ 257 │ │ │ raise ValueError("Expected `controlnet` to be of type `ControlNetUnionModel`.") │
│ 258 │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Expected `controlnet` to be of type `ControlNetUnionModel`.
```
### System Info
diffusers==0.33.0.dev0
### Who can help?
@hlky @yiyixuxu @sayakpaul @DN6 | closed | 2025-01-26T19:51:39Z | 2025-02-26T17:55:48Z | https://github.com/huggingface/diffusers/issues/10656 | [
"bug",
"stale"
] | vladmandic | 17 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 258 | Error when trying to open a driver through docker | ```
2024-07-11 18:40:56 stderr: Traceback (most recent call last):
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/connector.py", line 1025, in _wrap_create_connection
2024-07-11 18:40:56 stderr: return await self._loop.create_connection(*args, **kwargs)
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/base_events.py", line 1085, in create_connection
2024-07-11 18:40:56 raise exceptions[0]
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/base_events.py", line 1069, in create_connection
2024-07-11 18:40:56 stderr: sock = await self._connect_sock(
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/base_events.py", line 973, in _connect_sock
2024-07-11 18:40:56 stderr: await self.sock_connect(sock, address)
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/selector_events.py", line 634, in sock_connect
2024-07-11 18:40:56 stderr: return await fut
2024-07-11 18:40:56 ^^^^^^^^^
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/selector_events.py", line 674, in _sock_connect_cb
2024-07-11 18:40:56 stderr: raise OSError(err, f'Connect call failed {address}')
2024-07-11 18:40:56 ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 54589)
2024-07-11 18:40:56 The above exception was the direct cause of the following exception:
2024-07-11 18:40:56 Traceback (most recent call last):
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/selenium_driverless/types/base_target.py", line 76, in _init
2024-07-11 18:40:56 res = await session.get(url, timeout=10)
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/client.py", line 581, in _request
2024-07-11 18:40:56 stderr: conn = await self._connector.connect(
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/connector.py", line 544, in connect
2024-07-11 18:40:56 proto = await self._create_connection(req, traces, timeout)
2024-07-11 18:40:56 stderr: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/connector.py", line 944, in _create_connection
2024-07-11 18:40:56 stderr: _, proto = await self._create_direct_connection(req, traces, timeout)
2024-07-11 18:40:56 stderr: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/connector.py", line 1257, in _create_direct_connection
2024-07-11 18:40:56 stderr: raise last_exc
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/connector.py", line 1226, in _create_direct_connection
2024-07-11 18:40:56 stderr: transp, proto = await self._wrap_create_connection(
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/aiohttp/connector.py", line 1033, in _wrap_create_connection
2024-07-11 18:40:56 raise client_error(req.connection_key, exc) from exc
2024-07-11 18:40:56 aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host 127.0.0.1:54589 ssl:default [Connect call failed ('127.0.0.1', 54589)]
2024-07-11 18:40:56 During handling of the above exception, another exception occurred:
2024-07-11 18:40:56 Traceback (most recent call last):
2024-07-11 18:40:56 File "/app/controllers/lilililili/lilililili_python/app.py", line 476, in <module>
2024-07-11 18:40:56 stderr: asyncio.run(main())
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/runners.py", line 190, in run
2024-07-11 18:40:56 return runner.run(main)
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/runners.py", line 118, in run
2024-07-11 18:40:56 return self._loop.run_until_complete(task)
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
2024-07-11 18:40:56 return future.result()
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/app/controllers/lilililili/lilililili_python/app.py", line 390, in main
2024-07-11 18:40:56 stderr: async with webdriver.Chrome(options=config_instance) as browser:
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/selenium_driverless/webdriver.py", line 128, in __aenter__
2024-07-11 18:40:56 await self.start_session()
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/selenium_driverless/webdriver.py", line 260, in start_session
2024-07-11 18:40:56 stderr: self._base_target = await BaseTarget(host=self._host, is_remote=self._is_remote,
2024-07-11 18:40:56 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-11 18:40:56 File "/usr/local/lib/python3.11/dist-packages/selenium_driverless/types/base_target.py", line 81, in _init
2024-07-11 18:40:56 raise asyncio.TimeoutError(
2024-07-11 18:40:56 TimeoutError: Couldn't connect to chrome within 30 seconds
```
code:

```python
from selenium_driverless import webdriver
from selenium_driverless.types.by import By

config_instance = webdriver.ChromeOptions()
config_instance.add_argument("--no-sandbox")
config_instance.add_argument("--disable-dev-shm-usage")
config_instance.add_argument("--disable-gpu")
config_instance.add_argument("--disable-setuid-sandbox")

async with webdriver.Chrome(options=config_instance) as browser:
    ...
```

(+ more code redacted, but it works perfectly on Windows 10.) | closed | 2024-07-11T21:43:38Z | 2024-08-20T12:58:53Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/258 | [
"invalid"
] | darkTrapX0 | 1 |
Layout-Parser/layout-parser | computer-vision | 220 | AttributeError: module 'layoutparser' has no attribute 'load_pdf' | Hi,
I am trying to follow this tutorial (I watched your video on YouTube):
[Tutorial](https://github.com/Layout-Parser/layout-parser/blob/main/examples/Customizing%20Layout%20Models%20with%20Label%20Studio%20Annotation/Customizing%20Layout%20Models%20with%20Label%20Studio%20Annotation.ipynb)
Followed installation instructions from here - [Installation](https://layout-parser.readthedocs.io/en/latest/notes/installation.html)
I am facing the issue when executing this line _pdf_tokens, pdf_images = lp.load_pdf("test.pdf", load_images=True)_
It looks like this function is no longer in the layoutparser module, but the documentation says it is present: [PDF extraction](https://layout-parser.readthedocs.io/en/latest/api_doc/io.html#pdf)
Also, I wanted to check what the best way is to use Layout Parser to extract data from invoices with a complex structure.
Thank you in advance. | open | 2024-12-18T08:11:10Z | 2024-12-18T08:12:20Z | https://github.com/Layout-Parser/layout-parser/issues/220 | [
"bug"
] | iRajesha | 0 |
awtkns/fastapi-crudrouter | fastapi | 29 | TypeError: __init__() got an unexpected keyword argument 'prefix' | Hi there,
I tried to create a sample from the code in the documentation. However, it throws a TypeError.
Please find my code and other details below.
Code:
```
from pydantic import BaseModel
from fastapi import FastAPI
from fastapi_crudrouter import MemoryCRUDRouter as CRUDRouter
class Car(BaseModel):
name: str
year: int
make: str
app = FastAPI()
app.include_router(CRUDRouter(schema=Car))
```
When I try to execute this, I get the following error.
```
Traceback (most recent call last):
File "/home/dineshkumarkb/MyGitHub/MyPractice/Python/fastapi/autogenerate.py", line 13, in <module>
app.include_router(CRUDRouter(Car))
File "/home/dineshkumarkb/.local/lib/python3.8/site-packages/fastapi_crudrouter/core/mem.py", line 11, in __init__
super(MemoryCRUDRouter, self).__init__(schema, *args, **kwargs)
File "/home/dineshkumarkb/.local/lib/python3.8/site-packages/fastapi_crudrouter/core/_base.py", line 35, in __init__
super().__init__(prefix=prefix, tags=[prefix.strip('/').capitalize()], *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'prefix'
```
Am I missing something?
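For what it's worth, a minimal stdlib sketch of how this kind of TypeError arises: a subclass forwards `prefix=` to a parent `__init__` whose signature does not accept it, which would point to a version mismatch between fastapi-crudrouter and the installed FastAPI/Starlette (my assumption, not a confirmed diagnosis):

```python
class APIRouterWithoutPrefix:
    # Stand-in for a router base class whose __init__ lacks `prefix`.
    def __init__(self):
        pass

class CRUDRouter(APIRouterWithoutPrefix):
    def __init__(self, prefix):
        super().__init__(prefix=prefix)  # raises: unexpected keyword argument

message = ""
try:
    CRUDRouter(prefix="/car")
except TypeError as exc:
    message = str(exc)
print(message)  # ...got an unexpected keyword argument 'prefix'
```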
Python version : 3.8.5
OS : Ubuntu 20.04
| closed | 2021-02-19T06:19:44Z | 2021-02-20T01:22:27Z | https://github.com/awtkns/fastapi-crudrouter/issues/29 | [] | dineshkumarkb | 2 |
sepandhaghighi/samila | matplotlib | 31 | Create Samila Art From Files | #### Description
We may come up with a solid behavior toward all file types to make Samila art from them. This issue will be a place to discuss:
+ How can we come up with that general solution?
#### Steps/Code to Reproduce
It may sound like this:
```
>>> g = Samila(path2file)
```
#### Expected Behavior
It will decode the file content into two functions (known as `f1` and `f2`), construct a `GenerativeImage` instance, and return it.
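One possible shape for that decoding step, sketched with the stdlib (purely illustrative; the actual design is exactly what this issue is meant to discuss):

```python
import hashlib
import math

def functions_from_bytes(data: bytes):
    # Hash the file content to a deterministic seed, then derive two
    # parameterized functions f1 and f2 from it.
    seed = int.from_bytes(hashlib.sha256(data).digest()[:4], "big")
    a = (seed % 97) / 97
    b = ((seed >> 8) % 89) / 89
    f1 = lambda x, y: math.sin(a * x) - math.cos(b * y)
    f2 = lambda x, y: math.cos(b * x) + math.sin(a * y)
    return f1, f2

f1, f2 = functions_from_bytes(b"any file content")
print(callable(f1), callable(f2))  # True True
```

The same file bytes always produce the same pair of functions, so the resulting art is reproducible per file.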
#### Samila Version (Use : `samila.__version__`)
0.1 | open | 2021-09-30T07:05:17Z | 2022-04-18T13:02:15Z | https://github.com/sepandhaghighi/samila/issues/31 | [
"enhancement",
"discussion"
] | sadrasabouri | 1 |
stanfordnlp/stanza | nlp | 1,418 | set_logging_level separate from download() | It should not be necessary for `download()` to call `set_logging_level`.
We could have a separate logging level specifically for the downloads if we want to log them at a higher or lower level. | open | 2024-09-11T17:36:18Z | 2024-09-11T17:36:19Z | https://github.com/stanfordnlp/stanza/issues/1418 | [
"enhancement"
] | AngledLuffa | 0 |
pytest-dev/pytest-django | pytest | 703 | AttributeError: 'modify_settings' object has no attribute 'wrapped' | I am getting the following error when doing a simple test.
Error `AttributeError: 'modify_settings' object has no attribute 'wrapped'`
This is my test:
```
class FunctionalTest(LiveServerTestCase):
def setUp(self):
self.browser = webdriver.Firefox()
def tearDown(self):
self.browser.quit()
def test_sample_test(self):
print(f'{self.live_server_url}/login/')
self.browser.get(f'{self.live_server_url}/login/')
self.assertTrue(True)
```
I am not quite sure why this is happening, but I can see the following:
- `test_sample_test` seems to succeed, the browser opens and goes to the page and then closes.
- `LiveServerTestCase.tearDownClass` seems to get called twice.
- If I comment out the call to `_live_server_modified_settings.disable()` within `django.test.testcases.LiveServerTestCase.tearDownClass` then the test "passes"
- The pytest test runner shows 6 tests found; my first 5 unit tests pass fine, then the test in this file shows one 'green dot' to say the test passed and then an 'E' for an error. So in total it looks like there are somehow 7 tests.
- I am using whitenoise for static file management, so I have tried to remove that in case it was interacting, but it did not change anything.
- I ran through the TDD book from Django last week and did not encounter this issue (on another environment though)
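The double-teardown observation can be reproduced in isolation with a stdlib toy (my simplification, not Django's actual code): `modify_settings` stores `wrapped` only while enabled, so a second `disable()` hits a missing attribute.

```python
class ToyModifySettings:
    def enable(self):
        self.wrapped = "original settings"

    def disable(self):
        _restored = self.wrapped   # AttributeError on a second disable()
        del self.wrapped

error = ""
ms = ToyModifySettings()
ms.enable()
ms.disable()            # fine: restores and forgets the saved settings
try:
    ms.disable()        # what a duplicated tearDownClass would do
except AttributeError as exc:
    error = str(exc)
print(error)  # ...has no attribute 'wrapped'
```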
Stack trace from error log:
```
tp = <class 'AttributeError'>, value = None, tb = None
def reraise(tp, value, tb=None):
try:
if value is None:
value = tp()
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
> raise value
..\..\..\appdata\local\programs\python\python36-32\lib\site-packages\six.py:693:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\..\..\appdata\local\programs\python\python36-32\lib\site-packages\six.py:693: in reraise
raise value
..\..\..\appdata\local\programs\python\python36-32\lib\site-packages\six.py:693: in reraise
raise value
..\..\..\appdata\local\programs\python\python36-32\lib\site-packages\pytest_django\plugin.py:514: in teardown
cls.tearDownClass()
..\..\..\appdata\local\programs\python\python36-32\lib\site-packages\django\test\testcases.py:1339: in tearDownClass
cls._live_server_modified_settings.disable()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <django.test.utils.modify_settings object at 0x0610E430>
def disable(self):
if 'INSTALLED_APPS' in self.options:
apps.unset_installed_apps()
> settings._wrapped = self.wrapped
E AttributeError: 'modify_settings' object has no attribute 'wrapped'
```
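For context, the `AttributeError` comes from `disable()` reading an attribute that only `enable()` creates and which `disable()` deletes. A minimal sketch of that asymmetry (a hypothetical class, not Django's actual implementation):

```python
class SettingsOverride:
    """Hypothetical sketch of an enable()/disable() pair in the style of
    Django's override/modify_settings (simplified, not Django's code)."""

    def enable(self):
        # The saved-state attribute is created only here.
        self.wrapped = "original settings"

    def disable(self):
        # Restoring reads self.wrapped and deletes it, so a second
        # disable() on the same instance raises AttributeError.
        state = self.wrapped
        del self.wrapped
        return state


override = SettingsOverride()
override.enable()
assert override.disable() == "original settings"
try:
    override.disable()  # simulates tearDownClass running twice
except AttributeError as exc:
    print(exc)  # 'SettingsOverride' object has no attribute 'wrapped'
```

This would match the observation that `tearDownClass` appears to run twice: the second run calls `disable()` after the saved state has already been removed.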
- Django version: 2.1.7
- Geckodriver version: 0.24.0
- Selenium version: 3.141.0
- Pytest version: 4.2.0
- Pytest-django version: 3.4.5
- Python version: 3.6.5
I explored the following issues but wasn't able to debug root cause:
https://github.com/pytest-dev/pytest-django/issues/557
https://github.com/django-compressor/django-appconf/issues/30
https://github.com/divio/aldryn-search/issues/86 | closed | 2019-02-25T16:27:14Z | 2019-03-14T13:05:42Z | https://github.com/pytest-dev/pytest-django/issues/703 | [] | coler-j | 6 |
2noise/ChatTTS | python | 904 | When using the webui, only text is generated after uploading a sample audio; no audio is produced | My steps were as follows:
1. I uploaded a 17-second audio clip.
2. After the upload succeeded, a sample audio code was generated automatically.
3. I copied that code and pasted it into **Speaker Embedding**.
4. I clicked **Reload**.
5. I clicked **Generate**.
6. The program then generated the output text but produced no output audio; the button stayed stuck in the interrupt state, clicking it had no effect, and the backend showed no errors.
<img width="1448" alt="Image" src="https://github.com/user-attachments/assets/db18ad2d-f76c-4e55-9f99-f57e5e78d0c6" />
<img width="1379" alt="Image" src="https://github.com/user-attachments/assets/dc2af495-0dc3-4bec-95a3-f6cb747a235c" /> | closed | 2025-02-23T00:19:55Z | 2025-02-23T00:27:50Z | https://github.com/2noise/ChatTTS/issues/904 | [] | zpskt | 1 |
Python3WebSpider/ProxyPool | flask | 189 | Hi, which website are the proxy IPs fetched from? | Hi, which website are the proxy IPs fetched from? | closed | 2023-03-11T10:16:21Z | 2023-03-13T16:14:21Z | https://github.com/Python3WebSpider/ProxyPool/issues/189 | [] | gangzi3 | 1
graphdeco-inria/gaussian-splatting | computer-vision | 1,012 | Rendered Output Image Quality Far Exceeds Viewer Display | Hi, I've trained the model and used `render.py` to generate render images for my camera setup, and the results look great. However, when I visualize the splats (the point_cloud.ply) from the exact same camera in the viewer, the quality appears significantly worse. Could you help me understand why there's such a difference in the visualization?
### Examples:
**Rendered output**
<img width="1079" alt="image" src="https://github.com/user-attachments/assets/af5a1b5a-824f-4294-be62-bad53704c78a">
**Viewer display**
<img width="1718" alt="image" src="https://github.com/user-attachments/assets/ed126420-44ee-429e-b8a6-c87e1de06d0f">
**Rendered output**
<img width="1074" alt="image" src="https://github.com/user-attachments/assets/887ba8bf-a83c-418f-9a51-f504f97abfd9">
**Viewer display**
<img width="1725" alt="image" src="https://github.com/user-attachments/assets/1df5619a-980f-4785-80e7-e22b27f2960d">
PS: Solved. The WebGL viewer was zoomed in full-screen mode; after resizing the window, the same scenes were displayed.
vimalloc/flask-jwt-extended | flask | 425 | Callback functions work in the development environment but stop working in production. | Similar to this situation: [https://stackoom.com/question/3o9UM/%E4%BD%BF%E7%94%A8flask-jwt%E6%89%A9%E5%B1%95%E7%9A%84Api%E6%9C%89%E8%BA%AB%E4%BB%BD%E9%AA%8C%E8%AF%81%E9%97%AE%E9%A2%98](https://stackoom.com/question/3o9UM/%E4%BD%BF%E7%94%A8flask-jwt%E6%89%A9%E5%B1%95%E7%9A%84Api%E6%9C%89%E8%BA%AB%E4%BB%BD%E9%AA%8C%E8%AF%81%E9%97%AE%E9%A2%98) | closed | 2021-05-12T13:53:21Z | 2021-05-12T14:06:59Z | https://github.com/vimalloc/flask-jwt-extended/issues/425 | [] | L-HeliantHuS | 0
microsoft/qlib | machine-learning | 1,864 | Cannot get a feature whose name contains spaces | ## 🐛 Bug Description
<!-- A clear and concise description of what the bug is. -->
A feature whose name contains spaces cannot be retrieved.
## To Reproduce
Steps to reproduce the behavior:
```python
from qlib.data import D

# fields = ["P($$roewa_q)", "P($$yoyni_q)"]
# fetch a feature whose name contains a space
fields = ["P($$roe wa_q)", "P($$yoyni_q)"]
instruments = ["sh600000", "sh600519"]
data = D.features(instruments, fields, start_time="2001-01-01", end_time="2022-07-19", freq="day")
data
```

The feature with the space in its name comes back as `NaN`:

```
sh600519,2005-01-04,NaN,0.25
"",2005-01-05,NaN,0.25
"",2005-01-06,NaN,0.25
```
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->

<!-- A screenshot of the error message or anything shouldn't appear-->
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
- Qlib version:0.9.5
- Python version:
- OS (`Windows`, `Linux`, `MacOS`):
- Commit number (optional, please provide it if you are using the dev version):
## Additional Notes
<!-- Add any other information about the problem here. -->
| closed | 2024-12-02T05:55:05Z | 2024-12-02T05:56:09Z | https://github.com/microsoft/qlib/issues/1864 | [
"bug"
] | mmschzs | 0 |
psf/requests | python | 6,742 | inconsistent handling of verify and REQUESTS_CA_BUNDLE | The interaction between the `verify` parameter and REQUESTS_CA_BUNDLE is inconsistent when using a `Session`.
Assuming the REQUESTS_CA_BUNDLE environment variable is set.
The following will NOT result in SSL verification:
```
sess = requests.Session()
sess.get("https://illegalcert.com", verify=False)
```
The following WILL result in SSL verification:
```
sess = requests.Session()
sess.verify = False
sess.get("https://illegalcert.com")
```
## Expected Result
I would expect neither scenario to verify the SSL certificate.
## Actual Result
When session.verify is set to False, it is overruled by the existence of the REQUESTS_CA_BUNDLE environment variable.
## Reproduction Steps
export REQUESTS_CA_BUNDLE=/path/to/ca
```python
sess = requests.Session()
# Will not verify the ssl certificate
sess.get("https://illegalcert.com", verify=False)
# will verify the ssl certificate
sess.verify = False
sess.get("https://illegalcert.com")
```
This is governed by: https://github.com/psf/requests/blob/0e322af87745eff34caffe4df68456ebc20d9068/src/requests/sessions.py#L766
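A simplified sketch of the precedence that produces this behavior (hypothetical helper names, not the literal requests source): the env bundle is only consulted when the per-request value is unset or `True`, and the merged request-level result then shadows the session attribute.

```python
def merge_setting(request_setting, session_setting):
    # Request-level kwargs take precedence over session attributes.
    if session_setting is None:
        return request_setting
    if request_setting is None:
        return session_setting
    return request_setting


def effective_verify(request_verify, session_verify, env):
    # Sketch: the CA-bundle env var is looked up only when the per-request
    # value is None or True, then merged against the session attribute.
    verify = request_verify
    if verify is True or verify is None:
        verify = env.get("REQUESTS_CA_BUNDLE") or verify
    return merge_setting(verify, session_verify)


env = {"REQUESTS_CA_BUNDLE": "/path/to/ca"}
# Explicit per-request verify=False survives the merge:
print(effective_verify(False, True, env))   # False
# session.verify=False is shadowed by the env bundle:
print(effective_verify(None, False, env))   # /path/to/ca
```

In practice, setting `session.trust_env = False` stops requests from consulting the environment at all, which sidesteps the inconsistency.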
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.6"
},
"implementation": {
"name": "CPython",
"version": "3.11.6"
},
"platform": {
"release": "5.15.146.1-microsoft-standard-WSL2",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "30000020"
},
"urllib3": {
"version": "2.2.1"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
| closed | 2024-06-13T12:17:23Z | 2024-06-13T13:36:10Z | https://github.com/psf/requests/issues/6742 | [] | houtmanj | 1 |
ymcui/Chinese-BERT-wwm | nlp | 67 | THUNews中的文章过长 | THUNews中的文章词汇量过长,你们是怎么处理的呢?是直接砍掉后面的内容,只取全面512个词来分类吗? | closed | 2019-10-28T14:36:45Z | 2019-11-04T06:26:48Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/67 | [] | LuMelon | 2 |
STVIR/pysot | computer-vision | 18 | Setup buildup problem | Hi, as I followed the instruction of install.md, I encountered the c1.exe failed with exit status 2 problem when I run python setup.py build_ext --inplace. How can I fix it?
Thanks a lot. | closed | 2019-05-25T15:32:56Z | 2019-07-10T03:26:45Z | https://github.com/STVIR/pysot/issues/18 | [] | 13331112522 | 2 |
flairNLP/flair | pytorch | 3,129 | [Bug]: Sentencepiece wheel issue preventing flair install | ### Describe the bug
I cannot install flair because of a legacy-install-failure error in its sentencepiece dependency.
### To Reproduce
```bash
cd flairtest
pipenv shell
pip3 install flair
```
### Expected behavior
Expected successful install of flair. If I bypass the sentencepiece wheel error by running "pip3 install flair --only-binary=sentencepiece" then flair will install but hardly any of the features (as described in the tutorials) work.
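For what it's worth, the log below shows pip falling back to the `sentencepiece-0.1.95.tar.gz` sdist (so no prebuilt wheel matched this platform) and then ending with `env: pkg-config: No such file or directory`, which suggests the host is missing `pkg-config`. A quick check (the `brew` command in the comment is an assumption that Homebrew is available on this macOS machine):

```shell
# If pkg-config is missing, the sentencepiece source build cannot finish;
# on macOS it is commonly installed with: brew install pkg-config cmake
if command -v pkg-config >/dev/null 2>&1; then
  echo "pkg-config present"
else
  echo "pkg-config missing"
fi
```

After installing it, retrying `pip3 install flair` should let the sentencepiece wheel build complete.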
### Logs and Stack traces
```stacktrace
pip3 install flair [22:46:28]
Collecting flair
Using cached flair-0.11.3-py3-none-any.whl (401 kB)
Collecting tabulate
Using cached tabulate-0.9.0-py3-none-any.whl (35 kB)
Collecting langdetect
Using cached langdetect-1.0.9-py3-none-any.whl
Collecting gdown==4.4.0
Using cached gdown-4.4.0-py3-none-any.whl
Collecting huggingface-hub
Downloading huggingface_hub-0.12.1-py3-none-any.whl (190 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.3/190.3 kB 2.1 MB/s eta 0:00:00
Collecting matplotlib>=2.2.3
Downloading matplotlib-3.7.0-cp310-cp310-macosx_11_0_arm64.whl (7.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 35.0 MB/s eta 0:00:00
Collecting hyperopt>=0.2.7
Using cached hyperopt-0.2.7-py2.py3-none-any.whl (1.6 MB)
Collecting ftfy
Using cached ftfy-6.1.1-py3-none-any.whl (53 kB)
Collecting konoha<5.0.0,>=4.0.0
Using cached konoha-4.6.5-py3-none-any.whl (20 kB)
Collecting scikit-learn>=0.21.3
Downloading scikit_learn-1.2.1-cp310-cp310-macosx_12_0_arm64.whl (8.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.4/8.4 MB 61.9 MB/s eta 0:00:00
Collecting mpld3==0.3
Using cached mpld3-0.3-py3-none-any.whl
Collecting segtok>=1.5.7
Using cached segtok-1.5.11-py3-none-any.whl (24 kB)
Collecting gensim>=3.4.0
Using cached gensim-4.3.0-cp310-cp310-macosx_10_9_universal2.whl (24.5 MB)
Collecting more-itertools
Downloading more_itertools-9.1.0-py3-none-any.whl (54 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.2/54.2 kB 2.1 MB/s eta 0:00:00
Collecting wikipedia-api
Using cached Wikipedia_API-0.5.8-py3-none-any.whl (13 kB)
Collecting deprecated>=1.2.4
Using cached Deprecated-1.2.13-py2.py3-none-any.whl (9.6 kB)
Collecting torch!=1.8,>=1.5.0
Using cached torch-1.13.1-cp310-none-macosx_11_0_arm64.whl (53.2 MB)
Collecting pptree
Using cached pptree-3.1-py3-none-any.whl
Collecting conllu>=4.0
Using cached conllu-4.5.2-py2.py3-none-any.whl (16 kB)
Collecting janome
Using cached Janome-0.4.2-py2.py3-none-any.whl (19.7 MB)
Collecting lxml
Using cached lxml-4.9.2-cp310-cp310-macosx_10_9_universal2.whl
Collecting regex
Using cached regex-2022.10.31-cp310-cp310-macosx_11_0_arm64.whl (287 kB)
Collecting python-dateutil>=2.6.1
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Requirement already satisfied: sqlitedict>=1.6.0 in /Users/Helena.Boyd/.local/share/virtualenvs/flairtest-ltTnab-O/lib/python3.10/site-packages (from flair) (2.1.0)
Collecting sentencepiece==0.1.95
Using cached sentencepiece-0.1.95.tar.gz (508 kB)
Preparing metadata (setup.py) ... done
Collecting transformers>=4.0.0
Downloading transformers-4.26.1-py3-none-any.whl (6.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 61.2 MB/s eta 0:00:00
Collecting tqdm>=4.26.0
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.1/77.1 kB 3.1 MB/s eta 0:00:00
Collecting bpemb>=0.3.2
Using cached bpemb-0.3.4-py3-none-any.whl (19 kB)
Collecting six
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting filelock
Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting beautifulsoup4
Downloading beautifulsoup4-4.11.2-py3-none-any.whl (129 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 129.4/129.4 kB 5.5 MB/s eta 0:00:00
Collecting requests[socks]
Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting numpy
Downloading numpy-1.24.2-cp310-cp310-macosx_11_0_arm64.whl (13.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.9/13.9 MB 55.0 MB/s eta 0:00:00
Collecting wrapt<2,>=1.10
Downloading wrapt-1.15.0-cp310-cp310-macosx_11_0_arm64.whl (36 kB)
Collecting FuzzyTM>=0.4.0
Using cached FuzzyTM-2.0.5-py3-none-any.whl (29 kB)
Collecting smart-open>=1.8.1
Using cached smart_open-6.3.0-py3-none-any.whl (56 kB)
Collecting scipy>=1.7.0
Downloading scipy-1.10.1-cp310-cp310-macosx_12_0_arm64.whl (28.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 28.8/28.8 MB 37.9 MB/s eta 0:00:00
Collecting networkx>=2.2
Using cached networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting cloudpickle
Downloading cloudpickle-2.2.1-py3-none-any.whl (25 kB)
Collecting future
Using cached future-0.18.3-py3-none-any.whl
Collecting py4j
Using cached py4j-0.10.9.7-py2.py3-none-any.whl (200 kB)
Collecting overrides<4.0.0,>=3.0.0
Using cached overrides-3.1.0-py3-none-any.whl
Collecting importlib-metadata<4.0.0,>=3.7.0
Using cached importlib_metadata-3.10.1-py3-none-any.whl (14 kB)
Collecting cycler>=0.10
Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting packaging>=20.0
Using cached packaging-23.0-py3-none-any.whl (42 kB)
Collecting contourpy>=1.0.1
Using cached contourpy-1.0.7-cp310-cp310-macosx_11_0_arm64.whl (229 kB)
Collecting fonttools>=4.22.0
Using cached fonttools-4.38.0-py3-none-any.whl (965 kB)
Collecting kiwisolver>=1.0.1
Using cached kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl (63 kB)
Collecting pillow>=6.2.0
Using cached Pillow-9.4.0-cp310-cp310-macosx_11_0_arm64.whl (3.0 MB)
Collecting pyparsing>=2.3.1
Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting joblib>=1.1.1
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Collecting threadpoolctl>=2.0.0
Using cached threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Collecting typing-extensions
Downloading typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /Users/Helena.Boyd/.local/share/virtualenvs/flairtest-ltTnab-O/lib/python3.10/site-packages (from transformers>=4.0.0->flair) (0.13.2)
Collecting pyyaml>=5.1
Using cached PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB)
Requirement already satisfied: wcwidth>=0.2.5 in /Users/Helena.Boyd/.local/share/virtualenvs/flairtest-ltTnab-O/lib/python3.10/site-packages (from ftfy->flair) (0.2.6)
Collecting pyfume
Using cached pyFUME-0.2.25-py3-none-any.whl (67 kB)
Collecting pandas
Downloading pandas-1.5.3-cp310-cp310-macosx_11_0_arm64.whl (10.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.9/10.9 MB 31.1 MB/s eta 0:00:00
Collecting zipp>=0.5
Downloading zipp-3.15.0-py3-none-any.whl (6.8 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.14-py2.py3-none-any.whl (140 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.0.1-cp310-cp310-macosx_11_0_arm64.whl (122 kB)
Collecting soupsieve>1.2
Downloading soupsieve-2.4-py3-none-any.whl (37 kB)
Collecting PySocks!=1.5.7,>=1.5.6
Using cached PySocks-1.7.1-py3-none-any.whl (16 kB)
Collecting pytz>=2020.1
Using cached pytz-2022.7.1-py2.py3-none-any.whl (499 kB)
Collecting simpful
Downloading simpful-2.10.0-py3-none-any.whl (31 kB)
Collecting fst-pso
Using cached fst_pso-1.8.1-py3-none-any.whl
Collecting miniful
Using cached miniful-0.0.6-py3-none-any.whl
Building wheels for collected packages: sentencepiece
Building wheel for sentencepiece (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [152 lines of output]
/Users/Helena.Boyd/.local/share/virtualenvs/flairtest-ltTnab-O/lib/python3.10/site-packages/setuptools/dist.py:770: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-310
creating build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
copying src/sentencepiece/__init__.py -> build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
/bin/sh: pkg-config: command not found
Cloning into 'sentencepiece'...
Note: switching to '0e6dfbf86e2fa6d86a3d9a8a08a628da71c073e0'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
-- VERSION: 0.1.95
-- The C compiler identification is AppleClang 13.0.0.13000027
-- The CXX compiler identification is AppleClang 13.0.0.13000027
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Not Found TCMalloc: TCMALLOC_LIB-NOTFOUND
-- Configuring done
-- Generating done
-- Build files have been written to: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/sentencepiece/build
./build_bundled.sh: line 16: nproc: command not found
[ 1%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/builder.cc.o
[ 3%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/arena.cc.o
[ 4%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/unicode_script.cc.o
[ 6%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/arenastring.cc.o
[ 7%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/bytestream.cc.o
[ 9%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/strutil.cc.o
[ 11%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/coded_stream.cc.o
[ 12%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/unigram_model_trainer.cc.o
[ 14%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/common.cc.o
[ 15%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/extension_set.cc.o
[ 17%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/trainer_factory.cc.o
[ 19%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/char_model_trainer.cc.o
[ 20%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/word_model_trainer.cc.o
[ 22%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/generated_message_util.cc.o
[ 23%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/generated_enum_util.cc.o
[ 25%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/generated_message_table_driven_lite.cc.o
[ 26%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/error.cc.o
[ 28%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/sentencepiece_trainer.cc.o
[ 30%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/int128.cc.o
[ 31%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/pretokenizer_for_training.cc.o
[ 33%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/parse_context.cc.o
[ 34%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/message_lite.cc.o
[ 36%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/bpe_model_trainer.cc.o
[ 38%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/repeated_field.cc.o
[ 39%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/zero_copy_stream_impl.cc.o
[ 41%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/statusor.cc.o
[ 42%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/time.cc.o
[ 44%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/io_win32.cc.o
[ 46%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/stringprintf.cc.o
[ 47%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/stringpiece.cc.o
[ 49%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/status.cc.o
[ 50%] Building CXX object src/CMakeFiles/sentencepiece_train-static.dir/trainer_interface.cc.o
[ 52%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/structurally_valid.cc.o
[ 53%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/char_model.cc.o
[ 55%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/zero_copy_stream_impl_lite.cc.o
[ 57%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/zero_copy_stream.cc.o
[ 58%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/word_model.cc.o
[ 60%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/implicit_weak_message.cc.o
[ 61%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/protobuf-lite/wire_format_lite.cc.o
[ 63%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/model_interface.cc.o
[ 65%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/bpe_model.cc.o
[ 66%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/builtin_pb/sentencepiece.pb.cc.o
[ 68%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/builtin_pb/sentencepiece_model.pb.cc.o
[ 69%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/sentencepiece_processor.cc.o
[ 71%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/unigram_model.cc.o
[ 73%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/normalizer.cc.o
[ 74%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/util.cc.o
[ 76%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/model_factory.cc.o
[ 77%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/filesystem.cc.o
[ 79%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/absl/strings/string_view.cc.o
[ 80%] Building CXX object src/CMakeFiles/sentencepiece-static.dir/__/third_party/absl/flags/flag.cc.o
/private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/sentencepiece/src/builder.cc:47:15: warning: unused variable 'kMaxUnicode' [-Wunused-const-variable]
constexpr int kMaxUnicode = 0x10FFFF;
^
/private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/sentencepiece/src/builder.cc:49:23: warning: unused variable 'kDefaultNormalizerName' [-Wunused-const-variable]
static constexpr char kDefaultNormalizerName[] = "nfkc";
^
2 warnings generated.
[ 82%] Linking CXX static library libsentencepiece_train.a
[ 84%] Linking CXX static library libsentencepiece.a
[ 84%] Built target sentencepiece_train-static
[ 84%] Built target sentencepiece-static
[ 85%] Building CXX object src/CMakeFiles/spm_encode.dir/spm_encode_main.cc.o
[ 87%] Building CXX object src/CMakeFiles/spm_normalize.dir/spm_normalize_main.cc.o
[ 88%] Building CXX object src/CMakeFiles/spm_decode.dir/spm_decode_main.cc.o
[ 90%] Building CXX object src/CMakeFiles/spm_export_vocab.dir/spm_export_vocab_main.cc.o
[ 92%] Building CXX object src/CMakeFiles/spm_train.dir/spm_train_main.cc.o
[ 93%] Linking CXX executable spm_export_vocab
[ 93%] Built target spm_export_vocab
[ 95%] Linking CXX executable spm_normalize
[ 96%] Linking CXX executable spm_train
[ 96%] Built target spm_normalize
[ 96%] Built target spm_train
[ 98%] Linking CXX executable spm_decode
[ 98%] Built target spm_decode
[100%] Linking CXX executable spm_encode
[100%] Built target spm_encode
[ 66%] Built target sentencepiece-static
[ 84%] Built target sentencepiece_train-static
[ 87%] Built target spm_encode
[ 90%] Built target spm_decode
[ 93%] Built target spm_normalize
[ 96%] Built target spm_train
[100%] Built target spm_export_vocab
Install the project...
-- Install configuration: ""
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/lib/pkgconfig/sentencepiece.pc
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/lib/libsentencepiece.a
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/lib/libsentencepiece_train.a
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_encode
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_decode
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_normalize
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_train
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_export_vocab
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/include/sentencepiece_trainer.h
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/include/sentencepiece_processor.h
env: pkg-config: No such file or directory
Failed to find sentencepiece pkg-config
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for sentencepiece
Running setup.py clean for sentencepiece
Failed to build sentencepiece
Installing collected packages: sentencepiece, pytz, py4j, pptree, overrides, mpld3, janome, charset-normalizer, zipp, wrapt, urllib3, typing-extensions, tqdm, threadpoolctl, tabulate, soupsieve, smart-open, six, regex, pyyaml, PySocks, pyparsing, pillow, packaging, numpy, networkx, more-itertools, lxml, kiwisolver, joblib, idna, future, ftfy, fonttools, filelock, cycler, conllu, cloudpickle, certifi, torch, segtok, scipy, requests, python-dateutil, langdetect, importlib-metadata, deprecated, contourpy, beautifulsoup4, wikipedia-api, simpful, scikit-learn, pandas, miniful, matplotlib, konoha, hyperopt, huggingface-hub, transformers, gdown, fst-pso, pyfume, FuzzyTM, gensim, bpemb, flair
Running setup.py install for sentencepiece ... error
error: subprocess-exited-with-error
× Running setup.py install for sentencepiece did not run successfully.
│ exit code: 1
╰─> [55 lines of output]
/Users/Helena.Boyd/.local/share/virtualenvs/flairtest-ltTnab-O/lib/python3.10/site-packages/setuptools/dist.py:770: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
running install
/Users/Helena.Boyd/.local/share/virtualenvs/flairtest-ltTnab-O/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-310
creating build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
copying src/sentencepiece/__init__.py -> build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.macosx-10.9-universal2-cpython-310/sentencepiece
warning: build_py: byte-compiling is disabled, skipping.
running build_ext
/bin/sh: pkg-config: command not found
mkdir: bundled: File exists
fatal: destination path 'sentencepiece' already exists and is not an empty directory.
fatal: destination path 'sentencepiece' already exists and is not an empty directory.
mkdir: build: File exists
-- VERSION: 0.1.95
-- Not Found TCMalloc: TCMALLOC_LIB-NOTFOUND
-- Configuring done
-- Generating done
-- Build files have been written to: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/sentencepiece/build
./build_bundled.sh: line 16: nproc: command not found
[ 17%] Built target sentencepiece_train-static
[ 84%] Built target sentencepiece-static
[ 87%] Built target spm_encode
[ 90%] Built target spm_decode
[ 93%] Built target spm_normalize
[ 96%] Built target spm_train
[100%] Built target spm_export_vocab
[ 66%] Built target sentencepiece-static
[ 84%] Built target sentencepiece_train-static
[ 87%] Built target spm_encode
[ 90%] Built target spm_decode
[ 93%] Built target spm_normalize
[ 96%] Built target spm_train
[100%] Built target spm_export_vocab
Install the project...
-- Install configuration: ""
-- Up-to-date: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/lib/pkgconfig/sentencepiece.pc
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/lib/libsentencepiece.a
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/lib/libsentencepiece_train.a
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_encode
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_decode
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_normalize
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_train
-- Installing: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/bin/spm_export_vocab
-- Up-to-date: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/include/sentencepiece_trainer.h
-- Up-to-date: /private/var/folders/c1/ggddkdqn54v1cpm8jl682ygr0000gq/T/pip-install-x1lgtehf/sentencepiece_55e71fbe13ac44c6bc59b9af03cc006d/bundled/include/sentencepiece_processor.h
env: pkg-config: No such file or directory
Failed to find sentencepiece pkg-config
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> sentencepiece
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
(flairtest) FAIL
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
I cannot run collect_env.py because I cannot install flair. | closed | 2023-03-04T04:03:34Z | 2023-05-05T12:30:42Z | https://github.com/flairNLP/flair/issues/3129 | [
"bug"
] | boydxh | 9 |
horovod/horovod | machine-learning | 3,450 | No module named 'fsspec.callbacks' thrown at horovod/spark/common/store.py ln 33 | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.6.2
3. Horovod version: 0.24.1
4. MPI version: 4.1.0
5. CUDA version: N/A
6. NCCL version: N/A
7. Python version: 3.7
8. Spark / PySpark version: 3.2
9. Ray version: N/A
10. OS and version: Ubuntu 18.04
11. GCC version: 9.3.1
12. CMake version: 2.8
**Checklist:** >>>>>>>>>>>>>>>>> all "YES"
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
1. Create a GCP Dataproc cluster with 2.0.27-ubuntu18 image
2. install TF 2.6.2 and Horovod 0.24.1
3. `>> from horovod.spark.common.store import HDFSStore`
output:
```
Traceback (most recent call last):
  File "/opt/conda/default/lib/python3.8/site-packages/horovod/spark/common/store.py", line 33, in <module>
    from fsspec.callbacks import _DEFAULT_CALLBACK
ModuleNotFoundError: No module named 'fsspec.callbacks'
```
The `fsspec.callbacks` module was introduced in https://github.com/fsspec/filesystem_spec/releases/tag/2021.07.0
The line
https://github.com/horovod/horovod/blob/ebd135098571722469bb6290a6d098a9e1c96574/setup.py#L169
should be
`spark_require_list = ['numpy', 'petastorm>=0.11.0', 'pyarrow>=0.15.0', 'fsspec>=2021.07.0']`
| closed | 2022-03-04T23:11:35Z | 2022-03-05T10:26:31Z | https://github.com/horovod/horovod/issues/3450 | [
"bug"
] | zyluo | 6 |
TencentARC/GFPGAN | pytorch | 66 | Number of training iterations | Thank you very much for your work.
I'd like to ask: with the default parameters and batch size 12, how many iterations are needed to obtain GFPGANv1.pth? And if I use two GPUs with the total batch size reduced to 6, is that feasible? | closed | 2021-09-17T09:17:49Z | 2021-09-23T14:03:20Z | https://github.com/TencentARC/GFPGAN/issues/66 | [] | ShanglinLi | 0 |
ranaroussi/yfinance | pandas | 1,274 | sqlite database is locked when using multithread download | - Info about your system:
- yfinance version
- 0.2.3
- operating system
- Windows WSL Ubuntu 20.04, running in Docker image from python:3.7-slim
- Simple code that reproduces your problem
```
import yfinance as yf
start_date = '2017-01-01'
end_date= '2022-04-29'
tickers = 'SPY TSLA NVDA MSFT'
data = yf.download( # or pdr.get_data_yahoo(...
# tickers list or string as well, cannot be a numpy.ndarray
tickers=tickers,
# start date
start=start_date,
# end date
end=end_date,
# fetch data by interval (including intraday if period < 60 days)
# valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
# (optional, default is '1d')
interval="1d",
# group by ticker (to access via data['SPY'])
# (optional, default is 'column')
group_by='ticker',
# adjust all OHLC automatically
# (optional, default is False)
auto_adjust=False,
# download pre/post regular market hours data
# (optional, default is False)
prepost=True,
# use threads for mass downloading? (True/False/Integer)
# (optional, default is True)
threads=True,
# proxy URL scheme use use when downloading?
# (optional, default is None)
proxy=None
)
```
- The error message
- `- NVDA: OperationalError('database is locked')`
When running the multithreaded download from my Flask server in a Docker image, I sometimes see this error message; the failing symbol can be a different one each time, and it still produces an empty dataset.
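As a workaround for detecting such silent per-ticker failures after the fact, one can scan the grouped result for empty or all-NaN frames — a sketch (the `failed_tickers` helper is hypothetical, not part of yfinance):

```python
import pandas as pd

def failed_tickers(data, tickers):
    # `data` is the group_by='ticker' frame returned by yf.download;
    # `failed_tickers` is a hypothetical helper, not part of yfinance.
    failed = []
    for ticker in tickers:
        try:
            frame = data[ticker]
        except KeyError:
            failed.append(ticker)
            continue
        if frame.dropna(how="all").empty:
            failed.append(ticker)
    return failed
```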
Is there any way to know that an error occurred when calling `yf.download(threads=True)`, or a way to fix the SQLite database lock issue? (Maybe by increasing the timeout?) | closed | 2023-01-04T14:40:39Z | 2023-05-15T03:49:10Z | https://github.com/ranaroussi/yfinance/issues/1274 | [] | gogog22510 | 11 |
pandas-dev/pandas | python | 60,616 | ENH: RST support | ### Feature Type
- [X] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I wish I could use ReStructured Text with pandas
### Feature Description
The end users code:
```python
import pandas as pd
df=pd.read_rst(rst)
df.to_rst()
```
I believe tabulate has a way to do this.
### Alternative Solutions
I also built a way to make rst tables.
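For reference, a minimal stdlib-only writer for the RST "simple table" format might look like this (a hypothetical helper for illustration, not a proposed pandas API):

```python
def to_rst_simple_table(headers, rows):
    # Render headers + rows as an RST "simple table": columns separated by
    # two spaces, framed by '=' rules sized to the widest cell per column.
    cells = [list(map(str, headers))] + [list(map(str, r)) for r in rows]
    widths = [max(len(row[c]) for row in cells) for c in range(len(headers))]
    rule = "  ".join("=" * w for w in widths)

    def fmt(row):
        return "  ".join(v.ljust(w) for v, w in zip(row, widths)).rstrip()

    lines = [rule, fmt(cells[0]), rule]
    lines += [fmt(r) for r in cells[1:]]
    lines.append(rule)
    return "\n".join(lines)
```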
### Additional Context
- [The RST docs](https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#tables)
I think `Grid Tables` would be best for pandas (or `Simple Tables`)
I did not use sudo-code in the examples due to complexity and that examples of how to do this can be seen in the above packages. See RST docs for what they look like. | open | 2024-12-29T17:41:50Z | 2025-01-11T18:28:22Z | https://github.com/pandas-dev/pandas/issues/60616 | [
"Enhancement",
"Needs Triage"
] | R5dan | 4 |
onnx/onnx | tensorflow | 5,796 | Improved yolov6 to have the same trt model inference speed under int8 and fp16 | # Ask a Question
### Question
<!-- Explain your question here. -->
I improved YOLOv6. After converting to TensorRT, the improved model's inference is faster than the original YOLOv6 under both FP32 and FP16. However, after converting to INT8, the original YOLOv6 nearly doubles in speed (FP16: 66 fps -> INT8: 122 fps), while the improved YOLOv6 barely gains anything (FP16: 100 fps -> INT8: 102 fps). What could be the reason? My added module includes split, concat, and DropPath operations. The ONNX opset is set to 12 because opset 13 raises an error: when converting to TensorRT on the NX with opset 13, the following error is reported:
```
[TensorRT] VERBOSE: ModelImporter.cpp:119: Searching for input: backbone.ERBlock_2.1.spatial_mixing.partial_conv3.0.weight
[TensorRT] VERBOSE: ModelImporter.cpp:125: Conv_6 [Conv] inputs: [input.8 -> (1, 32, 160, 160)], [backbone.ERBlock_2.1.spatial_mixing.partial_conv3.0.weight -> (16, 16, 3, 3)],
[TensorRT] VERBOSE: builtin_op_importers.cpp:450: Convolution input dimensions: (1, 32, 160, 160)
ERROR: ONNX Parse Failed
In node -1 (importConv): INVALID_NODE: Assertion failed: nchan == -1 || kernelWeights.shape.d[1] * ngroup == nchan
ERROR: failed to build the TensorRT engine!
```
### Further information
- Relevant Area: <!--e.g., model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators, IR, ONNX Hub, data preprocessing, CI pipelines. -->
- Is this issue related to a specific model?
**Model name**: <!-- *e.g. mnist* -->
**Model opset**: <!-- *e.g. 17* -->12
### Notes
<!-- Any additional information, code snippets. -->
| closed | 2023-12-08T03:27:28Z | 2023-12-12T17:57:40Z | https://github.com/onnx/onnx/issues/5796 | [
"question"
] | xiaoche-24 | 2 |
deepinsight/insightface | pytorch | 2,464 | RetinaFace Makefile | Hello
I've been working on the face detection model and followed the instructions in the README. I want to use my camera for real-time detection, so I added basic OpenCV code to capture and process frames in test.py. I am able to detect faces, but test.py only works when I set gpuid = -1 (CPU); when I change it to gpuid = 0 there are no errors, but the script does not execute. After researching, I found that not all the files in the cython folder have been built properly; the gpu_nms.pyx file causes errors while building. So I figure this may be why I'm unable to bind the model in test.py to the GPU.
I'm using Windows as my OS, Python 3.9, and CUDA 10.2, and I've installed mxnet-cu10.2 version 1.7.0.
This is the message I get when I run the Makefile
```
PS C:\Users\OMEN 45L\Desktop\gitlab\detectionservice\retinaface> make
cd rcnn/cython/ && python setup.py build_ext --inplace && del -rf build
CUDA is found
cl : Command line warning D9024 : unrecognized source file type 'gcc', object file assumed
cl : Command line warning D9027 : source file 'gcc' ignored
cl : Command line warning D9024 : unrecognized source file type 'nvcc', object file assumed
cl : Command line warning D9027 : source file 'nvcc' ignored
gpu_nms.cpp
c:\users\omen 45l\appdata\local\programs\python\python39\lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
gpu_nms.cpp(4598): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
gpu_nms.cpp(4608): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
gpu_nms.cpp(4793): error C2664: 'void _nms(int *,int *,const float *,int,int,float,int)': cannot convert argument 1 from '__pyx_t_5numpy_int32_t *' to 'int *'
gpu_nms.cpp(4793): note: Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
make: *** [Makefile:2: all] Error 1
``` | open | 2023-11-02T11:02:51Z | 2023-11-02T11:06:42Z | https://github.com/deepinsight/insightface/issues/2464 | [] | yimiox | 0 |
pytest-dev/pytest-selenium | pytest | 133 | Save screenshot to tempfile in text mode | Screenshots are really valuable tools for debugging selenium test (even more so when running in headless Firefox or Chrome). Currently, pytest-selenium injects screenshots into HTML reports, but maybe it could also save those screenshots to the disk and give the file path in the text summary, possibly optionally? At the moment it generates a (base64) screenshot in every case and just drops it when `html` is not enabled. | closed | 2017-09-13T12:22:50Z | 2019-07-12T08:12:59Z | https://github.com/pytest-dev/pytest-selenium/issues/133 | [] | xmo-odoo | 15 |
anselal/antminer-monitor | dash | 4 | Total hashing speed is displayed many times | 
| closed | 2017-10-08T11:09:34Z | 2017-10-08T23:51:04Z | https://github.com/anselal/antminer-monitor/issues/4 | [
":bug: bug"
] | babycicak | 2 |
microsoft/nni | data-science | 5,671 | Shape mismatch after compression/speedup | After running AGPPRUNER followed by model speedup, I'm getting a shape mismatch in a Linear layer. The weight and bias in the linear layer don't appear to be of matching size anymore:
<img width="916" alt="image" src="https://github.com/microsoft/nni/assets/5126549/dcbaa44a-137e-4943-b576-b95a49fd123f">
**Environment**:
- NNI version: 3.0rc1
- Training service (local|remote|pai|aml|etc):
- Python version: 3.10
- PyTorch version: 2.0
- Cpu or cuda version: 11.7
**Reproduce the problem**
- Code|Example:
- How to reproduce: | open | 2023-08-25T22:41:15Z | 2023-08-25T22:41:15Z | https://github.com/microsoft/nni/issues/5671 | [] | lminer | 0 |
dfki-ric/pytransform3d | matplotlib | 211 | Implement direct conversion from Euler angles to quaternions | Related to #207
Interface: `quaternion_from_euler(e: npt.ArrayLike, i: int, j: int, k: int, extrinsic: bool) -> np.ndarray`
Easiest solution:
1. Convert axis indices to rotation axes
2. Convert to three axis-angle representations
3. Convert to three quaternions
4. Concatenate quaternions | closed | 2023-01-02T22:29:02Z | 2023-01-03T11:29:05Z | https://github.com/dfki-ric/pytransform3d/issues/211 | [] | AlexanderFabisch | 0 |
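The four-step solution listed above can be sketched with NumPy only (hypothetical code for the proposed `quaternion_from_euler` interface, not the library's actual implementation; quaternions here are in `[w, x, y, z]` order):

```python
import numpy as np

def _quaternion_from_axis_angle(axis, angle):
    # unit quaternion [w, x, y, z] for a rotation by `angle` about unit `axis`
    q = np.empty(4)
    q[0] = np.cos(angle / 2.0)
    q[1:] = np.sin(angle / 2.0) * np.asarray(axis)
    return q

def _concatenate_quaternions(q1, q2):
    # Hamilton product q1 * q2 (applies q2 first, then q1)
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quaternion_from_euler(e, i, j, k, extrinsic):
    # Steps 1-2: turn axis indices into basis vectors and build one
    # axis-angle rotation per Euler angle.
    e = np.asarray(e, dtype=float)
    basis = np.eye(3)
    # Step 3: convert each axis-angle rotation to a quaternion.
    quats = [
        _quaternion_from_axis_angle(basis[i], e[0]),
        _quaternion_from_axis_angle(basis[j], e[1]),
        _quaternion_from_axis_angle(basis[k], e[2]),
    ]
    if extrinsic:
        # extrinsic rotations compose in the reverse order
        quats = quats[::-1]
    # Step 4: concatenate the three quaternions.
    q = quats[0]
    for qn in quats[1:]:
        q = _concatenate_quaternions(q, qn)
    return q
```

With this convention, the extrinsic case is simply the intrinsic composition with the quaternion order reversed.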
polarsource/polar | fastapi | 4,978 | Discounts: Code is redeemed in case it's applied to a checkout even if it fails | A checkout failed due to becoming expired after too long time (1h+), but the discount applied from the checkout link was redeemed once applied to the checkout vs. upon successful checkout. | closed | 2025-02-07T22:18:21Z | 2025-02-10T14:53:36Z | https://github.com/polarsource/polar/issues/4978 | [
"bug"
] | birkjernstrom | 1 |
streamlit/streamlit | machine-learning | 10,878 | Allow HTML in `st.dataframe` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Allow showing raw HTML in `st.dataframe`, similar to what we allow in `st.html` or `st.markdown` with `unsafe_allow_html`. Note that this might be quite tricky implementation-wise since the underlying library we're using is not using normal HTML but renders on canvas.
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-23T13:13:28Z | 2025-03-23T15:05:47Z | https://github.com/streamlit/streamlit/issues/10878 | [
"type:enhancement",
"feature:st.dataframe",
"feature:st.html"
] | jrieke | 1 |
streamlit/streamlit | deep-learning | 10,737 | Add configurable buttons into `st.chat_input` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add support for configurable toggle buttons which are integrated into the chat input field similar to how its supported in many of the LLM-based apps:

### Why?
Allows implementing more complex LLM-based apps in Streamlit.
### How?
```python
prompt = st.chat_input(..., options=[":material/search: Search", ":material/science: Deep research"])
st.write("Prompt text", prompt.text)
st.write("Selected options", prompt.options)
```
### Additional Context
_No response_ | open | 2025-03-12T12:35:27Z | 2025-03-12T12:36:33Z | https://github.com/streamlit/streamlit/issues/10737 | [
"type:enhancement",
"feature:st.chat_input"
] | lukasmasuch | 1 |
ets-labs/python-dependency-injector | flask | 512 | Correct way to inherit from Configuration | Dear all,
I've been struggling to inherit from Configuration.
I'd like to adapt the __setitem__ method to always execute a method when a configuration is changed.
Has anyone ever done something like that? How could I inherit from Configuration?
Thanks in advance | open | 2021-09-16T22:09:23Z | 2021-11-08T01:12:14Z | https://github.com/ets-labs/python-dependency-injector/issues/512 | [] | japel | 1 |
MaartenGr/BERTopic | nlp | 1,368 | Topic distributions with Supervised BERTopic | Hi there,
When using `.approximate_distribution` on a supervised topic model, the results are mostly not meaningful. Many topic distributions are either zero matrices or have a probability of 1.0 for a particular topic.
Here is my code used to initialise and fit the model:
```
sentence_model = SentenceTransformer('all-MiniLM-L6-v2')
vectorizer_model = CountVectorizer(ngram_range=(1,4), min_df=7)
empty_dimensionality_model = BaseDimensionalityReduction()
clf = LogisticRegression()
ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True)
representation_model = KeyBERTInspired()
supervised_topic_model = BERTopic(
verbose=True,
calculate_probabilities=True,
embedding_model=sentence_model,
vectorizer_model=vectorizer_model,
umap_model=empty_dimensionality_model,
hdbscan_model=clf,
ctfidf_model=ctfidf_model,
representation_model=representation_model
).fit(train_docs, embeddings, y=labels)
```
```
topic_distr, _ = supervised_topic_model.approximate_distribution(train_docs)
```
The resulting output of `topic_distr[0]` is
```
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.])
```
and for `topic_distr[1]`, it is
```
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.])
```
May I know if this is an expected result? If so, how can I generate proper topic distributions for a supervised model?
Thank you!
| closed | 2023-06-27T10:23:05Z | 2023-07-10T14:31:03Z | https://github.com/MaartenGr/BERTopic/issues/1368 | [] | bttpcmdl | 10 |
xlwings/xlwings | automation | 1,931 | FileNotFoundError with embedded code | * Python code embedded
* Using both, UDFs and RunPython with "Use UDF Server" enabled
can lead to:
```
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'c:\\users\\xxx\\appdata\\local\\temp\\xlwings-mg2hf3dl\\xxx.py'
```
| closed | 2022-06-08T14:36:00Z | 2022-06-29T15:17:42Z | https://github.com/xlwings/xlwings/issues/1931 | [
"bug",
"PRO"
] | fzumstein | 2 |
Textualize/rich | python | 2,712 | [BUG] Rich shouldn't explode when using more than one display | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
If you use more than one display, `rich` raises an exception (`rich.errors.LiveError: Only one live display may be active at once`).
A minimal example:
```python
from rich.progress import track
for _ in track(range(100)):
for _ in track(range(100)):
pass
```
My horrible fix:
```python
from rich.progress import track
def safe_track(it):
from rich.errors import LiveError
it_ = track(it)
try:
yield from it_
except LiveError:
yield from it
for _ in track(range(100)):
for _ in safe_track(range(100)):
pass
```
I think it shouldn't raise an exception but at most raise a warning. Here I list some reasons:
- Who uses the iterator shouldn't know if the iterator is decorated with rich or not. If I'm using an iterator from an external library it shouldn't break my program just because it is decorated with rich.
- It violates the Liskov substitution principle. The iterator decorated with rich has stronger pre-conditions than the original iterator, in fact it requires that no other live displays are currently active. It mean that it can't be used interchangeably with the original iterator. | closed | 2022-12-23T11:37:17Z | 2024-08-26T15:51:31Z | https://github.com/Textualize/rich/issues/2712 | [
"Needs triage"
] | domef | 8 |
ultralytics/yolov5 | deep-learning | 13,416 | Loss computation sometimes cause nan values | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training
### Bug
Recently, while fine-tuning my model after pruning by training for several epochs, I found that the loss value becomes nan from time to time. By setting breakpoints and checking, I found a bug in [metrics.py](https://github.com/ultralytics/yolov5/blob/1435a8eed6b16d125e7808c81969a0c879d6b8a0/utils/metrics.py#L239)
Sometimes, if the prediction of some bounding box has a width or height of 0, the result turns into nan, since in the CIoU computation h2 and h1 are used as divisors [here](https://github.com/ultralytics/yolov5/blob/1435a8eed6b16d125e7808c81969a0c879d6b8a0/utils/metrics.py#L265).
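A minimal scalar sketch of one possible fix (the real code operates on torch tensors, and `ciou_aspect_term` is a hypothetical helper isolating just the aspect-ratio term): clamping the heights away from zero with a small epsilon keeps the `atan` arguments finite.

```python
import math

def ciou_aspect_term(w1, h1, w2, h2, eps=1e-7):
    # Clamp the heights away from zero so atan(w / h) never divides by zero;
    # this keeps v (and hence the CIoU loss) finite for degenerate boxes.
    v = (4 / math.pi ** 2) * (
        math.atan(w2 / max(h2, eps)) - math.atan(w1 / max(h1, eps))
    ) ** 2
    return v
```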
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | open | 2024-11-15T08:42:56Z | 2024-12-22T17:30:19Z | https://github.com/ultralytics/yolov5/issues/13416 | [
"bug"
] | tobymuller233 | 4 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 124 | How can I deploy with a GPU? | closed | 2023-04-11T10:30:24Z | 2023-04-13T07:18:30Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/124 | [] | Chenhuaqi6 | 4 | |
matterport/Mask_RCNN | tensorflow | 3,034 | Hyperparamter Tuning Automated | Hi everyone,
is there a way to automatically tune the hyperparameters, like you can do with the keras tuner random search? I tried to implement it, but I could not succeed, as this wants the build function handed over, but the model in matterport is built directly when it initializes and I have not found anything in the issues here so far.
I would basically like to say:
Please take these 5 or 10 parameters and try them out and give me the results. The max/min values should be xy.
I can do all this manually, but that does not seem very effective. Thanks for your help.
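For what it's worth, a plain-Python random search over Config attributes can be sketched like this (all names are hypothetical; `train_fn` stands in for building a fresh Mask R-CNN config/model with the sampled values, training briefly, and returning a validation score, higher being better):

```python
import random

def random_search(space, n_trials, train_fn, seed=0):
    # Sample one value per hyperparameter for each trial, evaluate it via
    # train_fn, and keep the best-scoring (score, params) pair.
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        trials.append((train_fn(params), params))
    return max(trials, key=lambda t: t[0])

search_space = {
    "LEARNING_RATE": [1e-4, 1e-3, 1e-2],
    "DETECTION_MIN_CONFIDENCE": [0.5, 0.7, 0.9],
}
```

Since the matterport model is built when the class is initialized, the key point is that `train_fn` must construct a fresh config and model on every trial rather than reusing one instance.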
| open | 2024-05-16T08:51:40Z | 2024-05-16T08:51:40Z | https://github.com/matterport/Mask_RCNN/issues/3034 | [] | Testbild | 0 |
Gozargah/Marzban | api | 1,036 | add multi user | Please add an API for adding multiple users. | closed | 2024-06-06T07:02:56Z | 2024-07-03T16:09:58Z | https://github.com/Gozargah/Marzban/issues/1036 | [
"Feature"
] | H0sin | 0 |
Asabeneh/30-Days-Of-Python | numpy | 200 | Python | closed | 2022-04-27T04:23:08Z | 2022-04-27T04:23:28Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/200 | [] | praveen-kumar-akkala | 0 | |
serengil/deepface | deep-learning | 548 | How to use deepface in another language? | Hi, it's not an issue,
I just want to know if there is any way to use deepface from other languages without creating a web API.
For example: could I create a DLL from the deepface Python package, then call the DLL's functions from C# or .NET?
Thanks for your help. | closed | 2022-08-26T04:16:28Z | 2022-08-27T20:10:07Z | https://github.com/serengil/deepface/issues/548 | [
"enhancement"
] | SonPH088 | 1 |
Lightning-AI/pytorch-lightning | data-science | 20,332 | Add a Chinese version of README | ### 📚 Documentation
These are the reasons why I want to add a Chinese version of the README:
1. Reduce language barriers and expand the user base: Chinese is one of the most widely spoken languages in the world, and providing a Chinese version of the README will help a large number of Chinese developers and researchers get up to speed with PyTorch Lightning, especially those who are not familiar with English, thereby attracting more people to participate and use the project.
2. Increase the international reach of open source projects: Adding multilingual support, especially Chinese, will help PyTorch Lightning spread globally, especially in academia and industry in China and other Chinese-speaking regions. This will greatly enhance the project's user base and number of contributors.
3. Accelerate community contributions: By providing Chinese documentation, Chinese developers can better understand the project, which makes it easier to participate in the development and contribution of the project, and promotes the activity and growth of the open source community.
4. Improve learning efficiency: Providing Chinese users with native language versions of documents can significantly shorten their learning curve, allowing them to focus on the technology itself instead of spending extra time on language understanding. This will improve the efficiency of learning and research.
cc @borda | open | 2024-10-10T08:18:47Z | 2024-10-10T08:19:08Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20332 | [
"docs",
"needs triage"
] | nocoding03 | 0 |
littlecodersh/ItChat | api | 670 | get_contact error | Before submitting, please make sure you have checked all of the following!
- [x] You can log in to your WeChat account in a browser, but cannot log in with `itchat`
- [x] I have read and followed the instructions in the [documentation][document]
- [x] This problem has not already been reported in [issues][issues]; otherwise, please report it under the existing issue
- [x] This problem is really about `itchat`, not about another project.
- [x] If your problem is about stability, consider trying the [itchatmp][itchatmp] project, which has extremely low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)] on win32
import itchat
itchat.auto_login(hotReload=False)
itchat.run(debug=True)
█
Getting uuid of QR code.
Downloading QR code.
Please scan the QR code to log in.
Please press confirm on your phone.
Loading the contact, this may take a little while.
Traceback (most recent call last):
File "<input>", line 2, in <module>
File "D:\python27\lib\site-packages\itchat\components\register.py", line 36, in auto_login
loginCallback=loginCallback, exitCallback=exitCallback)
File "D:\python27\lib\site-packages\itchat\components\login.py", line 73, in login
self.get_contact(True)
File "D:\python27\lib\site-packages\itchat\components\contact.py", line 285, in get_contact
seq, batchMemberList = _get_contact(seq)
File "D:\python27\lib\site-packages\itchat\components\contact.py", line 281, in _get_contact
j = json.loads(r.content.decode('utf-8', 'replace'))
File "D:\python27\lib\json\__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "D:\python27\lib\json\decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "D:\python27\lib\json\decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
```
My itchat version is: `[1.3.9]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`.)
Other content or a more detailed description of the problem can be added below:
> The web client **can definitely log in**; with itchat, the error seems to occur in the `_get_contact` function when fetching the contact list with a GET request.
> This may be account-related. I previously kept this account logged in with itchat for over a month, so perhaps Tencent has put restrictions on the account's info? But the web client can still log in normally, so the simulated login must differ from the official one in at least one place.
```python
# itchat/components/contact.py
def _get_contact(seq=0):
url = '%s/webwxgetcontact?r=%s&seq=%s&skey=%s' % (self.loginInfo['url'],
int(time.time()), seq, self.loginInfo['skey'])
headers = {
'ContentType': 'application/json; charset=UTF-8',
'User-Agent' : config.USER_AGENT, }
try:
r = self.s.get(url, headers=headers)  # no data is retrieved here
```

[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| open | 2018-05-26T02:48:50Z | 2018-11-08T09:17:07Z | https://github.com/littlecodersh/ItChat/issues/670 | [] | Frankleee | 2 |
sigmavirus24/github3.py | rest-api | 949 | GitHub user by ID | I'm trying to map/link GitHub accounts to internal accounts, since users can change their github usernames I'd like to map them with IDs instead of usernames.
GitHub provides the endpoint `/users/:username` to retrieve a user by username (https://developer.github.com/v3/users/#get-a-single-user)
Example for my user:
```
curl -X GET https://api.github.com/users/ericofusco
```
Although it's also possible to retrieve a GitHub account using the ID, this is not documented by GitHub.
```
curl -X GET https://api.github.com/user/5590224
```
That being said, can we push a new method like `user_by_id` to `GitHub` to use the undocumented endpoint above? | closed | 2019-06-24T10:22:19Z | 2019-06-24T10:35:43Z | https://github.com/sigmavirus24/github3.py/issues/949 | [] | ericofusco | 1 |
huggingface/text-generation-inference | nlp | 2,301 | Can't run llama3.1-70b at full context | ### System Info
2.2.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
On 4*H100:
```
docker stop llama31-70b-tgi ; docker remove llama31-70b-tgi
sudo docker run -d --restart=always --gpus '"device=0,1,2,3"' \
--shm-size 10.24gb \
-e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
-e TRANSFORMERS_CACHE="/.cache/" -p \
5005:80 \
-v $HOME/.cache:/.cache/ \
-v $HOME/.cache/huggingface/hub/:/data \
--name llama31-70b-tgi \
ghcr.io/huggingface/text-generation-inference:2.2.0 \
--model-id meta-llama/Meta-Llama-3.1-70B-Instruct \
--max-input-length 131072 \
--max-total-tokens 139264 \
--max-stop-sequences 6 \
--num-shard 4 --sharded true &>> logs.llama3.1-70b.tgi.txt
```
get:
```
RuntimeError: Not enough memory to handle 131122 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
```
vLLM works fine without errors.
### Expected behavior
able to launch and use without error like vLLM | open | 2024-07-24T17:24:45Z | 2025-02-06T13:50:36Z | https://github.com/huggingface/text-generation-inference/issues/2301 | [] | pseudotensor | 32 |
NullArray/AutoSploit | automation | 763 | Divided by zero exception33 | Error: Attempted to divide by zero.33 | closed | 2019-04-19T16:00:28Z | 2019-04-19T16:38:06Z | https://github.com/NullArray/AutoSploit/issues/763 | [] | AutosploitReporter | 0 |
scikit-learn-contrib/metric-learn | scikit-learn | 150 | [DOC] Add description of the API in Weakly Supervised Parts and Supervised Parts | See comment https://github.com/metric-learn/metric-learn/issues/149#issuecomment-451450818 | closed | 2019-01-04T14:18:32Z | 2019-07-03T09:06:45Z | https://github.com/scikit-learn-contrib/metric-learn/issues/150 | [] | wdevazelhes | 0 |
Lightning-AI/LitServe | api | 81 | Print "Setup complete" when setup is complete | Right now it's kind of a blackbox when things happen... it's useful to know when the setup method was called... so we know everything is ready to go. | closed | 2024-05-11T00:15:27Z | 2024-05-14T20:52:03Z | https://github.com/Lightning-AI/LitServe/issues/81 | [
"enhancement",
"help wanted"
] | williamFalcon | 0 |
strawberry-graphql/strawberry | fastapi | 2,799 | Add the option in ChannelsConsumer to confirm topic subscription | <!--- Provide a general summary of the changes you want in the title above. -->
In order to inform the subscriber (GraphQL client) that the GQL subscription has been activated, it would be useful to return a null message. In order to avoid a race condition, it would be useful to configure `ChannelsConsumer.channel_listen` to return None as soon as the subscription on the channels layer has been activated.
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
<!-- A few sentences describing what it is. -->
In `ChannelsConsumer.channel_listen`, I would like to add the option of yielding None after the `channel_layer.group_add` loop:
```py
# strawberry/channels/handlers/base.py:150
for group in groups:
await self.channel_layer.group_add(group, self.channel_name)
added_groups.append(group)
```
This would allow the subscriber to be written like this:
```py
@strawberry.type
class FooSubscription:
@strawberry.subscription
@staticmethod
async def foo_subscription(info: Info) -> AsyncGenerator[FooType | None, None]:
# yield None
async for message in info.context.ws.channel_listen("foo_type", groups=["a"]):
if message is None:
yield None
continue
yield FooType(message["payload"])
```
The commented out `yield None` is a workaround but confirms the subscription before the topic has been subscribed to. This is noticeable in tests when testing the Channels communication via REDIS. | closed | 2023-06-01T14:31:09Z | 2025-03-20T15:56:11Z | https://github.com/strawberry-graphql/strawberry/issues/2799 | [] | moritz89 | 7 |
aio-libs/aiomysql | sqlalchemy | 9 | Add ssl support | 1) Add _ssl_ support by passing an `ssl.SSLContext` to `asyncio.open_connection`.
2) Start an additional _mysql_ instance on Travis CI and try to pass the test suite.
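As a rough sketch of point 1 (the helper names below are hypothetical; the real hook is simply the `ssl` keyword of `asyncio.open_connection`):

```python
import asyncio
import ssl

def make_client_context(ca_file=None):
    # Build a client-side TLS context; aiomysql would accept a context
    # like this from the user and pass it through unchanged.
    return ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)

async def open_tls_connection(host, port, ctx):
    # asyncio.open_connection takes the context via its `ssl` keyword,
    # which is all that is needed to wrap the MySQL socket in TLS.
    reader, writer = await asyncio.open_connection(host, port, ssl=ctx)
    return reader, writer
```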
| closed | 2015-02-24T21:13:15Z | 2018-04-22T18:49:00Z | https://github.com/aio-libs/aiomysql/issues/9 | [
"task"
] | jettify | 4 |
LibrePhotos/librephotos | django | 983 | [error] 23#23: *6 open() "/protected_media/thumbnails_big/<long string>.webp" failed (13: Permission denied) |
# 🐛 Bug Report
* [x] 📁 I've included a ZIP file containing my librephotos `log` files
* [x] ❌ I have looked for similar issues (including closed ones)
* [x] 🎬 (If applicable) I've provided pictures or links to videos that clearly demonstrate the issue
## 📝 Description of issue:
Hello,
Installation: Docker.
The pictures appear just as colored squares, with different colors.
See picture in screenshot file.
I got the hint that it looks like a permissions error.
The Proxy reports errors, I didn't find errors in the other logs (all included):
When I click on a picture, these errors occur:
10.8.0.6 - - [06/Aug/2023:08:51:34 +0200] "GET /media/thumbnails_big/852ea07c82fa0481bf8f260967f8d62d2 HTTP/1.1" 403 153 "http://192.168.1.31:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0"
2023/08/06 08:51:34 [error] 23#23: *438 open() "/protected_media/thumbnails_big/223698ee5c041c1faf30ada8a6803e592.webp" failed (13: Permission denied), client: 10.8.0.6, server: , request: "GET /media/thumbnails_big/223698ee5c041c1faf30ada8a6803e592 HTTP/1.1", upstream: "http://172.29.64.4:8001/media/thumbnails_big/223698ee5c041c1faf30ada8a6803e592", host: "192.168.1.31:3000", referrer: "http://192.168.1.31:3000/"
10.8.0.6 - - [06/Aug/2023:08:51:34 +0200] "GET /media/thumbnails_big/223698ee5c041c1faf30ada8a6803e592 HTTP/1.1" 403 153 "http://192.168.1.31:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0"
10.8.0.6 - - [06/Aug/2023:08:51:35 +0200] "GET /api/rqavailable/ HTTP/1.1" 200 61 "http://192.168.1.31:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0"
10.8.0.6 - - [06/Aug/2023:08:51:37 +0200] "GET /api/rqavailable/ HTTP/1.1" 200 61 "http://192.168.1.31:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0"
10.8.0.6 - - [06/Aug/2023:08:51:40 +0200] "GET /api/rqavailable/ HTTP/1.1" 200 61 "http://192.168.1.31:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0"
10.8.0.6 - - [06/Aug/2023:08:51:43 +0200] "GET /api/rqavailable/ HTTP/1.1" 200 61 "http://192.168.1.31:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0"
I tried it also with:
- changing the rights of the dirs and files to admin:admin and also to john:everyone (1000:100), in combination with PUID/PGID as below
PUID and PGID for all services:
```yaml
environment:
  - PUID=000   # also tried 1000
  - PGID=00    # also tried 100
  - TZ=Europe/Amsterdam
```
yaml and env are attached.
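For completeness, this is the permission reset I plan to try next on the host; the path and the commented 1000:100 owner are my guesses, matching the PUID/PGID above:

```shell
# Hypothetical fix sketch, not verified: reset ownership/permissions on the
# bind mount so nginx inside the proxy container can read the thumbnails.
fix_perms() {
  dir="$1"
  [ -d "$dir" ] || { echo "skip: $dir not found"; return 0; }
  # chown -R 1000:100 "$dir"      # run as root on the host; IDs are my guess
  chmod -R u+rwX,go+rX "$dir"     # dirs traversable, files readable by all
}
fix_perms /share/librephotos/protected_media   # placeholder path
```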
Could anyone please suggest a solution or some hints to try? I am quite a newbie.
Regards, John
## 🔁 How can we reproduce it:
[error] 23#23: *6 open() "/protected_media/thumbnails_big/852ea07c82fa0481bf8f260967f8d62d2.webp" failed (13: Permission denied)
## Please provide additional information:
- 💻 Operating system: NAS QNAP TVS-671
- ⚙ Architecture (x86 or ARM): X86
- 🔢 Librephotos version: all
- 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.): Docker
* 🐋 If Docker or Kubernetes, provide docker-compose image tag:
- 📁 How is your picture library mounted (Local file system (Type), NFS, SMB, etc.): local shares
- ☁ If you are virtualizing librephotos, Virtualization platform (Proxmox, Xen, HyperV, etc.): --
[librephotos-yaml.txt](https://github.com/LibrePhotos/librephotos/files/12268847/librephotos-yaml.txt)
[librephotos-env.txt](https://github.com/LibrePhotos/librephotos/files/12268848/librephotos-env.txt)

[_proxy_logs.txt](https://github.com/LibrePhotos/librephotos/files/12268851/_proxy_logs.txt)
[_frontend_logs.txt](https://github.com/LibrePhotos/librephotos/files/12268853/_frontend_logs.txt)
[_db_logs.txt](https://github.com/LibrePhotos/librephotos/files/12268854/_db_logs.txt)
[_backend_logs.txt](https://github.com/LibrePhotos/librephotos/files/12268855/_backend_logs.txt)
| closed | 2023-08-06T07:13:47Z | 2023-08-28T13:54:06Z | https://github.com/LibrePhotos/librephotos/issues/983 | [
"bug",
"windows"
] | Keirriek | 1 |
encode/databases | sqlalchemy | 63 | it's better to use another name | `databases` is not search-engine friendly, nor friendly for human beings. | closed | 2019-03-11T10:54:47Z | 2019-03-11T12:02:29Z | https://github.com/encode/databases/issues/63 | [] | scil | 2 |
plotly/dash-component-boilerplate | dash | 96 | npm run build:js-dev doesn't work | ```
\my_dash_component>npm run build:js-dev
npm ERR! missing script: build:js-dev
npm ERR!
npm ERR! Did you mean this?
npm ERR! build:js
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\FruitfulApproach\AppData\Roaming\npm-cache\_logs\2019-12-15T17_04_47_683Z-debug.log
```
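For reference, a guess at the missing `scripts` entry; the exact webpack invocation in the boilerplate may differ, so `--mode development --watch` is only an assumption:

```json
{
  "scripts": {
    "build:js": "webpack --mode production",
    "build:js-dev": "webpack --mode development --watch"
  }
}
```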
`npm run build:js` does work | closed | 2019-12-15T17:06:05Z | 2021-01-28T00:22:44Z | https://github.com/plotly/dash-component-boilerplate/issues/96 | [
"bug"
] | enjoysmath | 3 |
marshmallow-code/apispec | rest-api | 546 | Duplicate parameter with name body and location body | Created a rather simple REST API with the following setup:
Model
```python
class Question(db.Model):
__tablename__ = "initial_questions"
id = Column(Integer, primary_key=True, autoincrement=True)
text = Column(Text, nullable=False)
```
Schema
```python
class QuestionSchema(ma.SQLAlchemyAutoSchema):
class Meta:
model = Question
@post_load
def load(self, data, **kwargs):
return Question(**data)
```
Controller
```python
questions_blueprint = Blueprint("questions", __name__, url_prefix="/questions")
@questions_blueprint.route("/", methods=["POST", "GET"])
@jwt_required()
def controller():
question_controller = QuestionController(request)
return question_controller(request)
class QuestionController(MethodResource):
@use_kwargs(QuestionSchema)
@marshal_with(QuestionSchema(many=True))
def get(self, **kwargs):
try:
return Question.query.filter_by(domain=question_domain).all()
except StatementError as e:
return make_response(str(e.orig), 400)
except Exception as e:
return make_response("", 500)
```
Config + Register:
```python
app.config.update({
'APISPEC_SPEC': APISpec(
title='Co-Rona',
version='v1',
openapi_version="3.0.2",
plugins=[MarshmallowPlugin()],
),
'APISPEC_SWAGGER_URL': '/swagger/',
})
apispec = FlaskApiSpec(app)
apispec.register(QuestionController, blueprint="questions", endpoint="controller")
```
This leads to the following error:
```
apispec.exceptions.DuplicateParameterError: Duplicate parameter with name name and location body
```
After a "bit" of digging, the cause is the property parsing, which defaults the name of every element in 'body' to 'body'.
See here: https://github.com/marshmallow-code/apispec/blob/dev/src/apispec/ext/marshmallow/openapi.py#L232
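To make the failure mode concrete, here is a self-contained illustration; the helpers below only mimic apispec's duplicate check and are not apispec code:

```python
def body_params(field_names, use_field_name):
    # apispec builds one parameter dict per schema field; when the name
    # defaults to "body" for every field, the (name, location) pairs collide.
    return [
        {"name": f if use_field_name else "body", "in": "body"}
        for f in field_names
    ]

def check_duplicates(params):
    # Mimics the duplicate check that raises DuplicateParameterError.
    seen = set()
    for p in params:
        key = (p["name"], p["in"])
        if key in seen:
            raise ValueError(
                f"Duplicate parameter with name {p['name']} and location {p['in']}"
            )
        seen.add(key)
```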
For me changing this to the passed name attribute fixes the problem. | closed | 2020-03-22T00:41:47Z | 2020-03-30T20:40:20Z | https://github.com/marshmallow-code/apispec/issues/546 | [] | dgiebert | 4 |
Lightning-AI/pytorch-lightning | machine-learning | 20,386 | view size is not compatible with input tensor's size and stride | ### Bug description
Hi,
I'm trying to train F-RCNN on a COCO-format dataset of my own images. The image size is 512×512.
I've tested the dataloader separately; it works and prints the batch images and bounding-box details.
I've also tried printing the loss in the network; it does print `batch_mean`, and right after that the error occurs.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
# Imports assumed by the snippet (added for completeness):
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, ops
from torchvision.transforms import v2

img_process = v2.Compose(
[
v2.ToTensor(),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]
)
class SCocoDetection(datasets.CocoDetection):
def __init__(
self,
image_directory_path: str,
annotation_path : str,
train: bool = True,
image_processor = None
):
super().__init__(image_directory_path, annotation_path)
self.image_processor = image_processor
def __getitem__(self, idx):
image, annotations = super().__getitem__(idx)
images, targets = [], []
image_id = self.ids[idx]
for ann in annotations:
bbox = ann['bbox']
#small = (bbox[:, 2] * bbox[:, 3]) <= (image.size[1] * image.size[0] * 0.001)
small = (bbox[2] * bbox[3]) <= (512 * 512 * 0.001)
#print(small)
if small:
bbox = torch.tensor(bbox).unsqueeze(0).float()
boxes = ops.box_convert(bbox, in_fmt='xywh', out_fmt='xyxy')
boxes = boxes.float()
if (boxes[0][0] < boxes[0][2]) and (boxes[0][1] < boxes[0][3]):
output_dict = self.image_processor({"image": image, "boxes": boxes})
images.append(output_dict['image'])
targets.append({
'boxes': output_dict['boxes'],
'labels': torch.ones(len(boxes), dtype=int)
})
else:
print(f"Invalid box : {boxes}")
#print(f"image_id : {image_id} , idx : {idx} , targets :{targets}")
return images, targets
TRAIN_DATASET = SCocoDetection(
image_directory_path='047/v2_coco_train/images',
annotation_path='047/v2_coco_train/result.json',
image_processor=img_process,
train=True)
VAL_DATASET = SCocoDetection(
image_directory_path='047/v2_coco_test/images',
annotation_path= '047/v2_coco_test/result.json',
image_processor=img_process,
train=False)
print("Number of training examples:", len(TRAIN_DATASET))
print("Number of validation examples:", len(VAL_DATASET))
#print("Number of test examples:", len(TEST_DATASET))
def collate_fn(batch):
return tuple(zip(*batch))
TRAIN_DATALOADER = DataLoader(dataset=TRAIN_DATASET,collate_fn = collate_fn, batch_size=2, shuffle=True)
VAL_DATALOADER = DataLoader(dataset=VAL_DATASET,collate_fn = collate_fn, batch_size=4, shuffle=True)
```
```python
# Imports assumed by the snippet (numpy was imported but unused):
import lightning as L
import torch
from torch import optim
from torchvision import models
class CocoDNN(L.LightningModule):
def __init__(self):
super().__init__()
self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
def forward(self, images, targets=None):
return self.model(images, targets)
def training_step(self, batch, batch_idx):
imgs, annot = batch
print(f"Batch :{batch_idx}")
batch_losses = []
for img_b, annot_b in zip(imgs, annot):
print(len(img_b), len(annot_b))
if len(img_b) == 0:
continue
loss_dict = self.model(img_b, annot_b)
losses = sum(loss for loss in loss_dict.values())
#print(losses)
batch_losses.append(losses)
batch_mean = torch.mean(torch.stack(batch_losses))
#print(batch_mean)
self.log('train_loss', batch_mean)
def configure_optimizers(self):
return optim.SGD(self.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)
dnn = CocoDNN()
trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER)
```
### Error messages and logs
```
{
"name": "RuntimeError",
"message": "view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.",
"stack": "---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[192], line 3
1 dnn = CocoDNN()
2 trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
----> 3 trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER)
File site-packages/lightning/pytorch/trainer/trainer.py:538, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
536 self.state.status = TrainerStatus.RUNNING
537 self.training = True
--> 538 call._call_and_handle_interrupt(
539 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
540 )
File site-packages/lightning/pytorch/trainer/call.py:47, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
45 if trainer.strategy.launcher is not None:
46 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 47 return trainer_fn(*args, **kwargs)
49 except _TunerExitException:
50 _call_teardown_hook(trainer)
File site-packages/lightning/pytorch/trainer/trainer.py:574, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
567 assert self.state.fn is not None
568 ckpt_path = self._checkpoint_connector._select_ckpt_path(
569 self.state.fn,
570 ckpt_path,
571 model_provided=True,
572 model_connected=self.lightning_module is not None,
573 )
--> 574 self._run(model, ckpt_path=ckpt_path)
576 assert self.state.stopped
577 self.training = False
File site-packages/lightning/pytorch/trainer/trainer.py:981, in Trainer._run(self, model, ckpt_path)
976 self._signal_connector.register_signal_handlers()
978 # ----------------------------
979 # RUN THE TRAINER
980 # ----------------------------
--> 981 results = self._run_stage()
983 # ----------------------------
984 # POST-Training CLEAN UP
985 # ----------------------------
986 log.debug(f\"{self.__class__.__name__}: trainer tearing down\")
File site-packages/lightning/pytorch/trainer/trainer.py:1025, in Trainer._run_stage(self)
1023 self._run_sanity_check()
1024 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1025 self.fit_loop.run()
1026 return None
1027 raise RuntimeError(f\"Unexpected state {self.state}\")
File site-packages/lightning/pytorch/loops/fit_loop.py:205, in _FitLoop.run(self)
203 try:
204 self.on_advance_start()
--> 205 self.advance()
206 self.on_advance_end()
207 self._restarting = False
File site-packages/lightning/pytorch/loops/fit_loop.py:363, in _FitLoop.advance(self)
361 with self.trainer.profiler.profile(\"run_training_epoch\"):
362 assert self._data_fetcher is not None
--> 363 self.epoch_loop.run(self._data_fetcher)
File site-packages/lightning/pytorch/loops/training_epoch_loop.py:140, in _TrainingEpochLoop.run(self, data_fetcher)
138 while not self.done:
139 try:
--> 140 self.advance(data_fetcher)
141 self.on_advance_end(data_fetcher)
142 self._restarting = False
File site-packages/lightning/pytorch/loops/training_epoch_loop.py:250, in _TrainingEpochLoop.advance(self, data_fetcher)
247 with trainer.profiler.profile(\"run_training_batch\"):
248 if trainer.lightning_module.automatic_optimization:
249 # in automatic optimization, there can only be one optimizer
--> 250 batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
251 else:
252 batch_output = self.manual_optimization.run(kwargs)
File site-packages/lightning/pytorch/loops/optimization/automatic.py:190, in _AutomaticOptimization.run(self, optimizer, batch_idx, kwargs)
183 closure()
185 # ------------------------------
186 # BACKWARD PASS
187 # ------------------------------
188 # gradient update with accumulated gradients
189 else:
--> 190 self._optimizer_step(batch_idx, closure)
192 result = closure.consume_result()
193 if result.loss is None:
File site-packages/lightning/pytorch/loops/optimization/automatic.py:268, in _AutomaticOptimization._optimizer_step(self, batch_idx, train_step_and_backward_closure)
265 self.optim_progress.optimizer.step.increment_ready()
267 # model hook
--> 268 call._call_lightning_module_hook(
269 trainer,
270 \"optimizer_step\",
271 trainer.current_epoch,
272 batch_idx,
273 optimizer,
274 train_step_and_backward_closure,
275 )
277 if not should_accumulate:
278 self.optim_progress.optimizer.step.increment_completed()
File site-packages/lightning/pytorch/trainer/call.py:167, in _call_lightning_module_hook(trainer, hook_name, pl_module, *args, **kwargs)
164 pl_module._current_fx_name = hook_name
166 with trainer.profiler.profile(f\"[LightningModule]{pl_module.__class__.__name__}.{hook_name}\"):
--> 167 output = fn(*args, **kwargs)
169 # restore current_fx when nested context
170 pl_module._current_fx_name = prev_fx_name
File site-packages/lightning/pytorch/core/module.py:1306, in LightningModule.optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure)
1275 def optimizer_step(
1276 self,
1277 epoch: int,
(...)
1280 optimizer_closure: Optional[Callable[[], Any]] = None,
1281 ) -> None:
1282 r\"\"\"Override this method to adjust the default way the :class:`~lightning.pytorch.trainer.trainer.Trainer` calls
1283 the optimizer.
1284
(...)
1304
1305 \"\"\"
-> 1306 optimizer.step(closure=optimizer_closure)
File site-packages/lightning/pytorch/core/optimizer.py:153, in LightningOptimizer.step(self, closure, **kwargs)
150 raise MisconfigurationException(\"When `optimizer.step(closure)` is called, the closure should be callable\")
152 assert self._strategy is not None
--> 153 step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
155 self._on_after_step()
157 return step_output
File site-packages/lightning/pytorch/strategies/strategy.py:238, in Strategy.optimizer_step(self, optimizer, closure, model, **kwargs)
236 # TODO(fabric): remove assertion once strategy's optimizer_step typing is fixed
237 assert isinstance(model, pl.LightningModule)
--> 238 return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File site-packages/lightning/pytorch/plugins/precision/precision.py:122, in Precision.optimizer_step(self, optimizer, model, closure, **kwargs)
120 \"\"\"Hook to run the optimizer step.\"\"\"
121 closure = partial(self._wrap_closure, model, optimizer, closure)
--> 122 return optimizer.step(closure=closure, **kwargs)
File site-packages/torch/optim/optimizer.py:487, in Optimizer.profile_hook_step.<locals>.wrapper(*args, **kwargs)
482 else:
483 raise RuntimeError(
484 f\"{func} must return None or a tuple of (new_args, new_kwargs), but got {result}.\"
485 )
--> 487 out = func(*args, **kwargs)
488 self._optimizer_step_code()
490 # call optimizer step post hooks
File site-packages/torch/optim/optimizer.py:91, in _use_grad_for_differentiable.<locals>._use_grad(self, *args, **kwargs)
89 torch.set_grad_enabled(self.defaults[\"differentiable\"])
90 torch._dynamo.graph_break()
---> 91 ret = func(self, *args, **kwargs)
92 finally:
93 torch._dynamo.graph_break()
File site-packages/torch/optim/sgd.py:112, in SGD.step(self, closure)
110 if closure is not None:
111 with torch.enable_grad():
--> 112 loss = closure()
114 for group in self.param_groups:
115 params: List[Tensor] = []
File site-packages/lightning/pytorch/plugins/precision/precision.py:108, in Precision._wrap_closure(self, model, optimizer, closure)
95 def _wrap_closure(
96 self,
97 model: \"pl.LightningModule\",
98 optimizer: Steppable,
99 closure: Callable[[], Any],
100 ) -> Any:
101 \"\"\"This double-closure allows makes sure the ``closure`` is executed before the ``on_before_optimizer_step``
102 hook is called.
103
(...)
106
107 \"\"\"
--> 108 closure_result = closure()
109 self._after_closure(model, optimizer)
110 return closure_result
File site-packages/lightning/pytorch/loops/optimization/automatic.py:144, in Closure.__call__(self, *args, **kwargs)
142 @override
143 def __call__(self, *args: Any, **kwargs: Any) -> Optional[Tensor]:
--> 144 self._result = self.closure(*args, **kwargs)
145 return self._result.loss
File site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File site-packages/lightning/pytorch/loops/optimization/automatic.py:138, in Closure.closure(self, *args, **kwargs)
135 self._zero_grad_fn()
137 if self._backward_fn is not None and step_output.closure_loss is not None:
--> 138 self._backward_fn(step_output.closure_loss)
140 return step_output
File site-packages/lightning/pytorch/loops/optimization/automatic.py:239, in _AutomaticOptimization._make_backward_fn.<locals>.backward_fn(loss)
238 def backward_fn(loss: Tensor) -> None:
--> 239 call._call_strategy_hook(self.trainer, \"backward\", loss, optimizer)
File site-packages/lightning/pytorch/trainer/call.py:319, in _call_strategy_hook(trainer, hook_name, *args, **kwargs)
316 return None
318 with trainer.profiler.profile(f\"[Strategy]{trainer.strategy.__class__.__name__}.{hook_name}\"):
--> 319 output = fn(*args, **kwargs)
321 # restore current_fx when nested context
322 pl_module._current_fx_name = prev_fx_name
File site-packages/lightning/pytorch/strategies/strategy.py:212, in Strategy.backward(self, closure_loss, optimizer, *args, **kwargs)
209 assert self.lightning_module is not None
210 closure_loss = self.precision_plugin.pre_backward(closure_loss, self.lightning_module)
--> 212 self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
214 closure_loss = self.precision_plugin.post_backward(closure_loss, self.lightning_module)
215 self.post_backward(closure_loss)
File site-packages/lightning/pytorch/plugins/precision/precision.py:72, in Precision.backward(self, tensor, model, optimizer, *args, **kwargs)
52 @override
53 def backward( # type: ignore[override]
54 self,
(...)
59 **kwargs: Any,
60 ) -> None:
61 r\"\"\"Performs the actual backpropagation.
62
63 Args:
(...)
70
71 \"\"\"
---> 72 model.backward(tensor, *args, **kwargs)
File site-packages/lightning/pytorch/core/module.py:1101, in LightningModule.backward(self, loss, *args, **kwargs)
1099 self._fabric.backward(loss, *args, **kwargs)
1100 else:
-> 1101 loss.backward(*args, **kwargs)
File site-packages/torch/_tensor.py:581, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
571 if has_torch_function_unary(self):
572 return handle_torch_function(
573 Tensor.backward,
574 (self,),
(...)
579 inputs=inputs,
580 )
--> 581 torch.autograd.backward(
582 self, gradient, retain_graph, create_graph, inputs=inputs
583 )
File site-packages/torch/autograd/__init__.py:347, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
342 retain_graph = create_graph
344 # The reason we repeat the same comment below is that
345 # some Python versions print out the first line of a multi-line function
346 # calls in the traceback and some print out the last line
--> 347 _engine_run_backward(
348 tensors,
349 grad_tensors_,
350 retain_graph,
351 create_graph,
352 inputs,
353 allow_unreachable=True,
354 accumulate_grad=True,
355 )
File site-packages/torch/autograd/graph.py:825, in _engine_run_backward(t_outputs, *args, **kwargs)
823 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
824 try:
--> 825 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
826 t_outputs, *args, **kwargs
827 ) # Calls into the C++ engine to run the backward pass
828 finally:
829 if attach_logging_hooks:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead."
}
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.4.0
#- PyTorch Version (e.g., 2.4): 2.5.1
#- Python version (e.g., 3.12): 3.11
#- OS (e.g., Linux): MacOS
#- CUDA/cuDNN version:
#- GPU models and configuration: MPS
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | closed | 2024-11-02T13:21:13Z | 2024-11-04T17:59:49Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20386 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | shanalikhan | 1 |
netbox-community/netbox | django | 18,921 | Server Error when accessing VPN tunnels terminations tab | ### Deployment Type
Self-hosted
### NetBox Version
v4.1.4
### Python Version
3.10
### Steps to Reproduce
1. Create a VPN tunnel
2. Create terminations on dcim.interface and virtualization.vminterface
3. Assign an outside IP for virtualization.vminterface
4. Click on VPN tunnel terminations tab
### Expected Behavior
VPN tunnel terminations list is visible.
### Observed Behavior
Server Error
There was a problem with your request. Please contact an administrator.
The complete exception is provided below:
<class 'django.core.exceptions.FieldError'>
Field 'termination' does not generate an automatic reverse relation and therefore cannot be used for reverse querying. If it is a GenericForeignKey, consider adding a GenericRelation.
Python version: 3.10.12
NetBox version: 4.1.4
Plugins:
django3_saml2_nbplugin: 2.0
netbox_documents: 0.7.0
netbox_inventory: 2.1.0
netbox_reorder_rack: 1.1.3
nextbox_ui_plugin: 1.0.7
If further assistance is required, please post to the [NetBox discussion forum](https://github.com/netbox-community/netbox/discussions) on GitHub. | open | 2025-03-17T11:47:43Z | 2025-03-18T19:01:09Z | https://github.com/netbox-community/netbox/issues/18921 | [
"type: bug",
"status: needs owner",
"severity: low"
] | mwitczak86 | 1 |
ets-labs/python-dependency-injector | asyncio | 150 | Create AbstractFactory provider | closed | 2017-04-06T22:04:37Z | 2017-04-06T22:12:32Z | https://github.com/ets-labs/python-dependency-injector/issues/150 | [
"feature",
"docs"
] | rmk135 | 0 | |
google-research/bert | tensorflow | 1,046 | BERT Tiny/Mini/Small/Medium Cased models | Thanks a lot for the newer, smaller BERT models. They are useful for many tasks. Reading the paper, it looks like these were built from bert-base-uncased. Is it possible to provide smaller CASED models from bert-base-cased, which can be used for tasks that depend on casing?
Thanks in advance | open | 2020-04-01T05:38:32Z | 2020-04-01T05:38:32Z | https://github.com/google-research/bert/issues/1046 | [] | hrsmanian | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,213 | (De)serialization breaks some dialect-specific table metadata | ### Describe the bug
I was trying to cache reflected table metadata but got some unexpected results after unpickling
```python
from sqlalchemy import create_engine, MetaData, Table
engine = create_engine(
f'postgresql+psycopg2://{USER}:{PASS}@{HOST}:{PORT}/{NAME}')
meta = MetaData()
some_table = Table('some_table', meta, autoload_with=engine)
some_table.c.some_jsonb_column.comparator
```
```
<sqlalchemy.dialects.postgresql.json.JSONB.Comparator at 0x7f75ec8ddc80>
```
```python
import pickle
some_unpickled_table = pickle.loads(pickle.dumps(some_table))
some_unpickled_table.c.some_jsonb_column.comparator
```
```
<sqlalchemy.sql.sqltypes.NullType.Comparator at 0x7fae9b69a840>
```
This made me unable to use JSONB-specific operators
```python
some_unpickled_table.c.some_jsonb_column.has_key('some_key')
```
```
AttributeError: Neither 'Column' object nor 'Comparator' object has an attribute 'has_key'
```
I understand that table metadata may not be designed to be pickled. But anyway, could this be fixed?
### SQLAlchemy Version in Use
1.4.49
### DBAPI (i.e. the database driver)
psycopg2
### Database Vendor and Major Version
PostgreSQL 12.7
### Python Version
3.9.10
### Operating system
Linux
### To Reproduce
```python
from sqlalchemy import create_engine, MetaData, Table
import pickle
engine = create_engine(
f'postgresql+psycopg2://{USER}:{PASS}@{HOST}:{PORT}/{NAME}')
meta = MetaData()
some_table = Table('some_table', meta, autoload_with=engine)
some_unpickled_table = pickle.loads(pickle.dumps(some_table))
some_unpickled_table.c.some_jsonb_column.has_key('some_key')
```
### Error
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/.pyenv/versions/main/lib/python3.9/site-packages/sqlalchemy/sql/elements.py:855, in ColumnElement.__getattr__(self, key)
854 try:
--> 855 return getattr(self.comparator, key)
856 except AttributeError as err:
AttributeError: 'Comparator' object has no attribute 'has_key'
The above exception was the direct cause of the following exception:
AttributeError Traceback (most recent call last)
Input In [1], in <module>
8 some_table = Table('some_table', meta, autoload_with=engine)
9 some_unpickled_table = pickle.loads(pickle.dumps(some_table))
---> 10 some_unpickled_table.c.some_jsonb_column.has_key('some_key')
File ~/.pyenv/versions/main/lib/python3.9/site-packages/sqlalchemy/sql/elements.py:857, in ColumnElement.__getattr__(self, key)
855 return getattr(self.comparator, key)
856 except AttributeError as err:
--> 857 util.raise_(
858 AttributeError(
859 "Neither %r object nor %r object has an attribute %r"
860 % (
861 type(self).__name__,
862 type(self.comparator).__name__,
863 key,
864 )
865 ),
866 replace_context=err,
867 )
File ~/.pyenv/versions/main/lib/python3.9/site-packages/sqlalchemy/util/compat.py:211, in raise_(***failed resolving arguments***)
208 exception.__cause__ = replace_context
210 try:
--> 211 raise exception
212 finally:
213 # credit to
214 # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
215 # as the __traceback__ object creates a cycle
216 del exception, replace_context, from_, with_traceback
AttributeError: Neither 'Column' object nor 'Comparator' object has an attribute 'has_key'
```
### Additional context
_No response_ | closed | 2023-08-09T12:22:14Z | 2023-08-10T21:05:18Z | https://github.com/sqlalchemy/sqlalchemy/issues/10213 | [
"bug",
"datatypes",
"near-term release"
] | donnillo | 3 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 308 | Temporary workaround for the Douyin API endpoints not working | ## Cause
After some troubleshooting, the code itself turned out to be fine. It is the same old problem: the cookies have expired.
## How to obtain valid cookies
1. Open the [Douyin desktop site](https://www.douyin.com/) and log in manually by scanning the QR code
2. Open Chrome DevTools and run the following code
```javascript
document.cookie.split(";").filter(e => [
"s_v_web_id",
"ttwid",
"passport_csrf_token",
"passport_csrf_token_default",
"__ac_nonce",
"__ac_signature",
"douyin.com",
"device_web_cpu_core",
"device_web_memory_size",
"architecture",
"webcast_local_quality",
"IsDouyinActive",
"home_can_add_dy_2_desktop",
"strategyABtestKey",
"stream_recommend_feed_params",
"VIDEO_FILTER_MEMO_SELECT",
"volume_info",
"FORCE_LOGIN",
"csrf_session_id",
"bd_ticket_guard_client_data",
"msToken",
"msToken",
"tt_scid"
].includes(e.split("=")[0].trim())).join(";")
```
The output looks like this:
```js
'__ac_signature=xxxxxxxxx; webcast_local_quality=null; stream_recommend_feed_params=%22%7B%5C%22cookie_enabled%5C%22%3Atrue%2C%5C%22screen_width%5C%22%3A1920%2C%5C%22screen_height%5C%22%3A1080%2C%5C%22browser_online%5C%22%3Atrue%2C%5C%22cpu_core_num%5C%22%3A12%2C%5C%22device_memory%5C%22%3A8%2C%5C%22downlink%5C%22%3A10%2C%5C%22effective_type%5C%22%3A%5C%224g%5C%22%2C%5C%22round_trip_time%5C%22%3A100%7D%22; passport_csrf_token=1eb85f423b143d00188f1809bd043015; passport_csrf_token_default=1eb85f423b143d00188f1809bd043015; s_v_web_id=verify_lo73dq14_mzz8hbXI_dpiQ_42YL_Ag5H_kHBHpUjc4Peg; douyin.com; device_web_cpu_core=12; device_web_memory_size=8; FORCE_LOGIN=%7B%22videoConsumedRemainSeconds%22%3A180%2C%22isForcePopClose%22%3A1%7D; volume_info=%7B%22isUserMute%22%3Afalse%2C%22isMute%22%3Afalse%2C%22volume%22%3A0.5%7D; csrf_session_id=2ba101be9d71fdac73ce7c0a20e97ff3; __ac_nonce=0653b2046002b88ceba2c; IsDouyinActive=true; VIDEO_FILTER_MEMO_SELECT=%7B%22expireTime%22%3A1698978779536%2C%22type%22%3A1%7D; bd_ticket_guard_client_data=eyJiZC10aWNrZXQtZ3VhcmQtdmVyc2lvbiI6MiwiYmQtdGlja2V0LWd1YXJkLWl0ZXJhdGlvbi12ZXJzaW9uIjoxLCJiZC10aWNrZXQtZ3VhcmQtcmVlLXB1YmxpYy1rZXkiOiJCRkxDOUdEa2RNeGtISUZUS0JGTzVURU5WNWczN05OV3VQQWM0TVpZbzZLaG9ReVVKSkZ6Rm1DMlR6SGlycEh2V2pXbjJBWmFMVW8yZm92VW5GQmw1TEE9IiwiYmQtdGlja2V0LWd1YXJkLXdlYi12ZXJzaW9uIjoxfQ%3D%3D; strategyABtestKey=%221698373980.611%22; home_can_add_dy_2_desktop=%221%22; msToken=txxxxxxxx; tt_scid=kUkr4UAhr0CNjr1tjvYq2nItF-toAFMjwYTImL-GGFh3dCgCdSCxc0Lj8ILUyW1.9e93; msToken=xxxxxxxxx''
```
The output is still missing two cookie fields: `ttwid` and `architecture`.
- architecture
  This is a fixed value: `architecture=amd64;`
- ttwid

### The final cookies look similar to this
```js
'ttwid=xxxxxxxxxxxxx;architecture=amd64;__ac_signature=xxxxxxxxx; webcast_local_quality=null; stream_recommend_feed_params=xxxxx; passport_csrf_token=1eb85f423b143d00188f1809bd043015; passport_csrf_token_default=1eb85f423b143d00188f1809bd043015; s_v_web_id=verify_lo73dq14_mzz8hbXI_dpiQ_42YL_Ag5H_kHBHpUjc4Peg; douyin.com; device_web_cpu_core=12; device_web_memory_size=8; FORCE_LOGIN=%7B%22videoConsumedRemainSeconds%22%3A180%2C%22isForcePopClose%22%3A1%7D; volume_info=%7B%22isUserMute%22%3Afalse%2C%22isMute%22%3Afalse%2C%22volume%22%3A0.5%7D; csrf_session_id=2ba101be9d71fdac73ce7c0a20e97ff3; __ac_nonce=0653b2046002b88ceba2c; IsDouyinActive=true; VIDEO_FILTER_MEMO_SELECT=%7B%22expireTime%22%3A1698978779536%2C%22type%22%3A1%7D; bd_ticket_guard_client_data=eyJiZC10aWNrZXQtZ3VhcmQtdmVyc2lvbiI6MiwiYmQtdGlja2V0LWd1YXJkLWl0ZXJhdGlvbi12ZXJzaW9uIjoxLCJiZC10aWNrZXQtZ3VhcmQtcmVlLXB1YmxpYy1rZXkiOiJCRkxDOUdEa2RNeGtISUZUS0JGTzVURU5WNWczN05OV3VQQWM0TVpZbzZLaG9ReVVKSkZ6Rm1DMlR6SGlycEh2V2pXbjJBWmFMVW8yZm92VW5GQmw1TEE9IiwiYmQtdGlja2V0LWd1YXJkLXdlYi12ZXJzaW9uIjoxfQ%3D%3D; strategyABtestKey=%221698373980.611%22; home_can_add_dy_2_desktop=%221%22; msToken=xxxxxxxxxx; tt_scid=kUkr4UAhr0CNjr1tjvYq2nItF-toAFMjwYTImL-GGFh3dCgCdSCxc0Lj8ILUyW1.9e93; msToken=xxxxxx'
```
A total of `23` fields
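To make the assembly step concrete, here is a small illustrative sketch (the field values are placeholders, not real credentials, and the helper name is made up) of prepending the two manually obtained fields to the cookie string exported above:

```python
# Illustrative sketch: prepend the two manually added fields (ttwid, architecture)
# to the cookie string exported from the browser. Values are placeholders.
def build_cookie_string(extra_fields, base_cookie):
    """Join extra cookie fields onto an existing cookie string."""
    prefix = "".join(f"{key}={value};" for key, value in extra_fields.items())
    return prefix + base_cookie

extra = {"ttwid": "xxxxxxxxxxxxx", "architecture": "amd64"}
base = "__ac_signature=xxxxxxxxx; webcast_local_quality=null"
print(build_cookie_string(extra, base))
# ttwid=xxxxxxxxxxxxx;architecture=amd64;__ac_signature=xxxxxxxxx; webcast_local_quality=null
```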
## Replace the cookies in the original project
<img width="548" alt="image" src="https://github.com/Evil0ctal/Douyin_TikTok_Download_API/assets/7713463/a481307c-71a4-4419-855a-7dc8ba9535f1">
Then restart and the JSON will be returned normally. I hope this helps other users!
| closed | 2023-10-27T03:58:02Z | 2024-02-07T03:44:18Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/308 | [
"BUG"
] | javaswing | 27 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 700 | Distributed training not using all available GPUs on large dataset | Hi,
Thank you for the great package! I am using the example notebook `examples/notebooks/scRNAseq_MetricEmbedding.ipynb` to train on a dataset with 100k data points. In the notebook, training is set up to run distributed with `nn.DataParallel`, and there are 4 GPUs available. However, I have been getting this CUDA out-of-memory error:
`OutOfMemoryError: CUDA out of memory. Tried to allocate 1.33 GiB. GPU 0 has a total capacty of 14.58 GiB of which 1.01 GiB is free. Process 12957 has 13.57 GiB memory in use. Of the allocated memory 12.70 GiB is allocated by PyTorch, and 9.80 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF`
Upon checking the GPU memory usage, it appears that only GPU 0 is being used while the other 3 GPUs sit idle. I think this indicates that training is not being distributed over all GPUs. Could I get some help solving this out-of-memory error? Thank you! | closed | 2024-07-11T01:06:57Z | 2024-08-02T12:52:41Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/700 | [] | ritamyhuang | 2 |
sebastianruder/NLP-progress | machine-learning | 254 | Question about the metric numbers on relation extraction? | I cannot find the numbers in the original papers; could you please show me where you got them?
https://nlpprogress.com/english/relationship_extraction.html
e.g., for the `New York Times Corpus` dataset, you report that the `RESIDE` model reaches `73.6` on `P@10%` and `59.5` on `P@30%`, but I cannot find those numbers in the original paper. Could you please show me which figure you got those numbers from? | closed | 2019-04-01T13:19:48Z | 2019-04-07T03:45:02Z | https://github.com/sebastianruder/NLP-progress/issues/254 | [] | speedcell4 | 3 |
waditu/tushare | pandas | 1,332 | Tushare's MATLAB interface has an incorrect data URL entry point | I am using MATLAB 2017a and found that the URL entry string in the pro_api.m file is wrong. The original string is http_url = 'http://api.tushare.pro', and at runtime it fails to connect to the data source. It should use https; the correct string is http_url = 'https://api.tushare.pro'. This has been verified.
I also checked the ./pro/client.py file in the Python interface library. Its URL entry string is indeed 'http://api.tushare.pro', yet when running with Python under Anaconda there is no URL connection problem.
Tushare ID: 351545. | open | 2020-04-08T09:01:50Z | 2020-04-08T09:01:50Z | https://github.com/waditu/tushare/issues/1332 | [] | yuanzy97 | 0 |
biolab/orange3 | scikit-learn | 6,334 | Calibration Plot: Calibration curve does not show isotonic calibration |
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
In the Calibration Plot widget, the Calibration curve metric does not plot the isotonic calibration curve.
I used the housing data set with the Test & Score widget set to Test on train data.
<img width="574" alt="Screenshot 2023-02-09 at 11 00 43" src="https://user-images.githubusercontent.com/11367976/217783557-50f4963c-1b60-4bdc-bd80-eeb6a71ce141.png">
- Operating system: macOS Big Sur
- Orange version: 3.34.1
- How you installed Orange: installer
| closed | 2023-02-09T10:14:29Z | 2023-02-17T09:59:01Z | https://github.com/biolab/orange3/issues/6334 | [
"bug report"
] | brunap | 3 |
litestar-org/litestar | asyncio | 3,949 | Enhancement: Using installed debugger post mortem | ### Summary
First of all, thank you all for this amazing project. I was wondering if you would consider replacing `pdb.post_mortem` with one of the already-installed debugger packages. This way, we could keep using the terminals we are used to.
In my mind, it is something like this:
middleware/_internal/__init__.py :
```python
def get_post_mortem():
    for package in ["pdbr", "pudb", "ipdb", "pdbpp"]:
        try:
            module = __import__(package, fromlist=["post_mortem"])
            return module.post_mortem
        except ImportError:
            continue
    import pdb
    return pdb.post_mortem
```
middleware/_internal/exceptions/middleware.py :
```python
if litestar_app.pdb_on_exception:
    from .. import get_post_mortem

    get_post_mortem()()
```
### Basic Example

### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | open | 2025-01-13T15:56:08Z | 2025-01-21T13:56:49Z | https://github.com/litestar-org/litestar/issues/3949 | [
"Enhancement"
] | cansarigol | 4 |
recommenders-team/recommenders | deep-learning | 1,559 | Error when I want to pull docker image | When I try to pull the Docker image I get this error:
> Unable to find image 'recommenders:cpu' locally
docker: Error response from daemon: pull access denied for recommenders, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
Also, I have logged in with my Docker Hub ID. | closed | 2021-10-27T15:38:12Z | 2021-10-30T10:32:01Z | https://github.com/recommenders-team/recommenders/issues/1559 | [
"help wanted"
] | ahforoughi | 2 |
albumentations-team/albumentations | deep-learning | 2,101 | [Add transform] Add RandomMotionBlur | Add RandomMotionBlur which is an alias over MotionBlur and has the same API as Kornia's
https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomMotionBlur
| closed | 2024-11-08T15:53:56Z | 2024-11-18T23:57:46Z | https://github.com/albumentations-team/albumentations/issues/2101 | [
"enhancement"
] | ternaus | 1 |
Farama-Foundation/Gymnasium | api | 845 | [Question] Providing type arguments to gymnasium.Env? | ### Question
`gymnasium.Env` is a generic class that custom environments need to inherit from (as is explained in the custom environment creation [tutorial](https://gymnasium.farama.org/tutorials/gymnasium_basics/environment_creation/)). However, simply doing something like `class CustomEnv(gymnasium.Env)` leads to a typing error as caught by `mypy` -
```
Missing type parameters for generic type "Env" [type-arg]
class CustomEnv(gymnasium.Env):
```
Solving this would require providing the requisite type arguments like `class CustomEnv(gymnasium.Env[ObsType, ActType])`, with the appropriate types for `ObsType` and `ActType`. But these are usually defined in terms of spaces, and there's no clear way of obtaining the "inner type" of a space (i.e., the type of what would be returned from `Space.sample()`) from the space itself. For example, if I have -
```python
self.observation_space = gymnasium.spaces.Box(0, 1, shape=(3, 3))
self.action_space = gymnasium.spaces.Discrete(4)
```
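For what it's worth, a dependency-free sketch (added for illustration; it mimics the generic machinery with a stand-in class rather than importing gymnasium, and uses `list`/`int` as stand-in types) of how the two type parameters flow. For the spaces above, `Box.sample()` returns an `np.ndarray` and `Discrete.sample()` an `np.int64`, so the real annotation would be `gymnasium.Env[np.ndarray, np.int64]`:

```python
from typing import Generic, Tuple, TypeVar

ObsType = TypeVar("ObsType")
ActType = TypeVar("ActType")

class Env(Generic[ObsType, ActType]):
    """Stand-in for gymnasium.Env, just to show where the parameters go."""
    def step(self, action: ActType) -> Tuple[ObsType, float, bool]:
        raise NotImplementedError

# Stand-in concrete types; with the spaces above these would be
# np.ndarray (observations) and np.int64 (actions).
class CustomEnv(Env[list, int]):
    def step(self, action: int) -> Tuple[list, float, bool]:
        return [0.0] * 9, 0.0, False

obs, reward, terminated = CustomEnv().step(3)
print(len(obs))  # 9
```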
How could I provide appropriate type arguments to `gymnasium.Env` in order to type it correctly? | closed | 2023-12-15T23:44:04Z | 2024-12-24T09:42:20Z | https://github.com/Farama-Foundation/Gymnasium/issues/845 | [
"question"
] | AbhijeetKrishnan | 7 |
serengil/deepface | machine-learning | 1,208 | [BUG]: TypeError: unhashable type: 'list' | ### Before You Report a Bug, Please Confirm You Have Done The Following...
- [X] I have updated to the latest version of the packages.
- [X] I have searched for both [existing issues](https://github.com/serengil/deepface/issues) and [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed) and found none that matched my issue.
### DeepFace's version
v0.0.90
### Python version
3.9.1
### Operating System
Windows 11
### Dependencies
tensorflow==2.16.1
keras==3.3.2
### Reproducible example
```Python
import threading
import cv2
import os
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'
from deepface import DeepFace
```
### Relevant Log Output
Traceback (most recent call last):
File "C:\Users\stony\OneDrive\Dokumentai\PYTHON\DeepFace\deepface2.py", line 7, in <module>
from deepface import DeepFace
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\deepface\DeepFace.py", line 15, in <module>
import tensorflow as tf
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\__init__.py", line 45, in <module>
from tensorflow._api.v2 import __internal__
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\_api\v2\__internal__\__init__.py", line 8, in <module>
from tensorflow._api.v2.__internal__ import autograph
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\_api\v2\__internal__\autograph\__init__.py", line 8, in <module>
from tensorflow.python.autograph.core.ag_ctx import control_status_ctx # line: 34
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\core\ag_ctx.py", line 21, in <module>
from tensorflow.python.autograph.utils import ag_logging
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\utils\__init__.py", line 17, in <module>
from tensorflow.python.autograph.utils.context_managers import control_dependency_on_returns
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\autograph\utils\context_managers.py", line 19, in <module>
from tensorflow.python.framework import ops
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\site-packages\tensorflow\python\framework\ops.py", line 5906, in <module>
) -> Optional[Callable[[Any], message.Message]]:
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 262, in inner
return func(*args, **kwds)
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 339, in __getitem__
return self._getitem(self, parameters)
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 463, in Optional
return Union[arg, type(None)]
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 262, in inner
return func(*args, **kwds)
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 339, in __getitem__
return self._getitem(self, parameters)
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 451, in Union
parameters = _remove_dups_flatten(parameters)
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 231, in _remove_dups_flatten
return tuple(_deduplicate(params))
File "C:\Users\stony\AppData\Local\Programs\Python\Python39\lib\typing.py", line 205, in _deduplicate
all_params = set(params)
TypeError: unhashable type: 'list'
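As a side note (my own observation, not from the maintainers): the bottom frames show `typing.Union` deduplicating its arguments with `set()`, which fails as soon as an unhashable list sneaks in as a "type" argument. That failure mode reproduces in isolation:

```python
# Minimal reproduction of the underlying failure mode (not DeepFace-specific):
# set() requires hashable elements, and a list is not hashable.
try:
    {int, [1, 2]}  # set literal containing an unhashable list
except TypeError as exc:
    print(exc)  # unhashable type: 'list'
```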
### Expected Result
Expected DeepFace to be working
### What happened instead?
Got this TypeError: unhashable type: 'list' Error instead
### Additional Info
_No response_ | closed | 2024-04-23T20:23:35Z | 2024-04-23T20:28:30Z | https://github.com/serengil/deepface/issues/1208 | [
"bug",
"dependencies"
] | ArturasStonys | 2 |
anapaulagomes/pytest-picked | pytest | 4 | Module collects files with 'test' in the name, even if they are not test files. | Hi. I was just looking through the codebase to familiarize myself with how it works, and I found this bug- It collects files that have `test` in the name, including something with `test` in the middle of the word:
```
╰─ touch sample/intestine.py
╰─ ls sample
__init__.py intestine.py __pycache__ settings.py urls.py wsgi.py
╰─ pytest --picked
======================================================== test session starts =========================================================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/misterrios/Projects/sample/sample, inifile:
plugins: picked-0.1.0
collecting 0 items
Changed test files... 1. ['sample/intestine.py']
Changed test folders... 1. ['.pytest_cache/']
collected 0 items
==================================================== no tests ran in 0.05 seconds ====================================================
```
Pytest discovers tests by checking for `test_*.py` or `*_test.py`
See: https://docs.pytest.org/en/latest/goodpractices.html#test-discovery
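A minimal sketch of that discovery rule in plain Python (illustrative only — this is not the plugin's actual code, and the helper name is made up):

```python
from pathlib import Path

def looks_like_test_file(filename: str) -> bool:
    """Mirror pytest's default discovery: test_*.py or *_test.py."""
    path = Path(filename)
    return path.suffix == ".py" and (
        path.name.startswith("test_") or path.stem.endswith("_test")
    )

print(looks_like_test_file("sample/intestine.py"))   # False
print(looks_like_test_file("sample/test_views.py"))  # True
print(looks_like_test_file("sample/views_test.py"))  # True
```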
So, a possible solution would be to use the built-in module `pathlib` to check the filenames and the file suffix combined with `startswith` and `endswith`. See: https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.name | closed | 2018-05-27T14:39:03Z | 2018-05-27T16:00:06Z | https://github.com/anapaulagomes/pytest-picked/issues/4 | [
"bug",
"good first issue"
] | MisterRios | 2 |
dsdanielpark/Bard-API | nlp | 87 | error when run chat bard on google colab or any server | <img width="576" alt="image" src="https://github.com/dsdanielpark/Bard-API/assets/82095274/0d131fdc-d479-4f51-a1ac-52304009340a">
When I run the code, it doesn't produce any response and just keeps waiting for input, whatever I say.
I'd really appreciate your response. | closed | 2023-07-01T12:38:18Z | 2023-07-03T06:29:21Z | https://github.com/dsdanielpark/Bard-API/issues/87 | [] | Xiansssss | 1 |
suitenumerique/docs | django | 794 | Is there a hosted version one can use? | There's https://docs.numerique.gouv.fr/ but afaict this is limited to french government.
Are there any plans to provide a hosted version (paid) for those outside of government? | open | 2025-03-22T13:39:16Z | 2025-03-24T16:31:18Z | https://github.com/suitenumerique/docs/issues/794 | [] | rufuspollock | 5 |
LAION-AI/Open-Assistant | python | 2,904 | VRAM information estimate | Approximately how much VRAM do the models consume?
OA_SFT_Llama_30B_6 = my estimate about 68GB vram
12B model = 40GB vram
...
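A rough sanity check of the estimates above (my own back-of-the-envelope, assuming fp16 weights at 2 bytes per parameter and ignoring activations, KV cache, and runtime overhead):

```python
def fp16_weight_gib(params_billion: float) -> float:
    """Approximate VRAM needed for the weights alone at 2 bytes/parameter."""
    return params_billion * 1e9 * 2 / 2**30

for size in (12, 30):
    print(f"{size}B params -> ~{fp16_weight_gib(size):.1f} GiB for weights alone")
# 12B -> ~22.4 GiB, 30B -> ~55.9 GiB; loading/runtime overhead comes on top.
```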
Are those assumptions right? Should I be able to run OA_SFT_Llama_30B_6 with 3 x RTX 3090 (72 GB VRAM)? | closed | 2023-04-25T15:32:36Z | 2023-04-27T09:11:52Z | https://github.com/LAION-AI/Open-Assistant/issues/2904 | [] | snapo | 1 |
aiogram/aiogram | asyncio | 1,653 | docs: broken formatting | Code blocks in [docs](https://docs.aiogram.dev/en/dev-3.x/dispatcher/router.html#nested-routers) are broken
 | open | 2025-03-14T13:15:51Z | 2025-03-14T13:16:25Z | https://github.com/aiogram/aiogram/issues/1653 | [] | LagrangeH | 0 |
pywinauto/pywinauto | automation | 1,083 | Access violation in Python 3.9 | Hello,
To be quite honest, I'm not sure whether to report it here or on comtypes GitHub page, as I'm not that familiar with either of the repositories, and importing comtypes by itself works just fine.
## Expected Behavior
Importing pywinauto works without issues
## Actual Behavior
Importing pywinauto causes access violation issues on Windows
## Steps to Reproduce the Problem
1. Install Python 3.9
2. Install pywinauto
3. Import pywinauto in your script
## Short Example of Code to Demonstrate the Problem
```
import faulthandler
import pywinauto
faulthandler.enable()
```
which then results in the following:
```
Windows fatal exception: code 0x80010108
Thread 0x00002358 (most recent call first):
File "C:\Python39\lib\site-packages\comtypes\__init__.py", line 180 in _shutdown
Windows fatal exception: code 0x80010108
Thread 0x00002358 (most recent call first):
File "C:\Python39\lib\site-packages\comtypes\__init__.py", line 180 in _shutdown
```
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.9.5 x64
- Platform and OS: Win 10x64
| open | 2021-06-04T13:40:27Z | 2021-07-23T13:02:55Z | https://github.com/pywinauto/pywinauto/issues/1083 | [
"bug",
"3rd-party issue",
"need investigation"
] | prokmi | 10 |
newpanjing/simpleui | django | 128 | Error when there are no ACTION buttons | **Bug description**
A brief description of the bug encountered:
An error occurs when the permission settings hide the ACTION buttons:
```python
def custom_button(context):
    admin = context.get('cl').model_admin
    data = {}
    actions = admin.get_actions(context.request)
    # if hasattr(admin, 'actions'):
    #     actions = admin.actions
    # Output the attributes of the custom buttons
    for name in actions:
        values = {}
        fun = actions.get(name)[0]
        for key, v in fun.__dict__.items():
            if key != '__len__':
                values[key] = v
        data[name] = values
```
As a temporary workaround I added a guard:
```python
    # Output the attributes of the custom buttons
    if not actions:
        return '{}'
```
**Environment**
1. Operating system:
2. Python version: 3.6
3. Django version: 2.1
4. simpleui version: 2
| closed | 2019-07-29T02:59:25Z | 2019-08-21T09:21:56Z | https://github.com/newpanjing/simpleui/issues/128 | [
"bug"
] | JohnYan2017 | 0 |
home-assistant/core | asyncio | 140,740 | Constant "Login attempt failed" errors... | ### The problem
Hi there,
I get these error messages daily. HAOS is running locally and is **not** reachable from the internet. The IP reported in the log is sometimes my MacBook's, sometimes an iPhone's.
Please advise...
Thank you!
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
http
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/http
### Diagnostics information
<details>
<summary>error as reported in http://192.168.1.7:8123/config/logs - click to expand</summary>
```log
Logger: homeassistant.components.http.ban
Source: components/http/ban.py:136
integration: HTTP (documentation, issues)
First occurred: March 15, 2025 at 22:33:53 (5 occurrences)
Last logged: 16:03:57
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-15T21:18:05.681Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-15T21:18:13.987Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-15T21:48:59.845Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-15T21:49:08.032Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-15T22:04:45.947Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-15T22:19:58.087Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-16T07:20:51.791Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-16T07:23:51.720Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-16T14:40:02.451Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-16T14:57:59.700Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
```
</details>
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-16T17:35:44Z | 2025-03-20T12:58:10Z | https://github.com/home-assistant/core/issues/140740 | [
"integration: http"
] | notDavid | 3 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,602 | If a json with the value true is stored in the database, then replacing it with 1 does not work, the objects are considered the same - the query does not go to the database | ### Describe the bug
There is a problem when updating a column of type jsonb.
If the JSON holds the value true and it should be replaced by 1, no SQL query is executed and the data in the database is not updated. If you replace it with the number 2 instead, the problem does not occur.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
asyncpg
### Database Vendor and Major Version
PostgreSQL 16
### Python Version
python 3.12
### Operating system
linix, windows
### To Reproduce
```python
q = select(Strategy).filter_by(name=name)
record = await session.scalar(q)
# print(record.settings["one"]) >> true
settings = {"one": 1}
record.settings = settings
session.add(record)
await session.commit()
```
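A plausible root cause (my own guess, not confirmed in this thread): in Python, `bool` is a subclass of `int`, so a dict holding `True` compares equal to one holding `1`, and equality-based change detection would see no difference — while `2` compares unequal and is picked up:

```python
# bool is a subclass of int, so True == 1 and hash(True) == hash(1).
# A settings dict that stored True therefore compares equal to one storing 1.
old_settings = {"one": True}

print(old_settings == {"one": 1})  # True  -> looks "unchanged"
print(old_settings == {"one": 2})  # False -> change is detected
```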
### Error
```
No error code is generated; no database query is performed, and the objects are considered identical.
```
### Additional context
_No response_ | closed | 2024-07-12T12:36:31Z | 2024-07-12T13:26:37Z | https://github.com/sqlalchemy/sqlalchemy/issues/11602 | [] | Claud | 0 |
ray-project/ray | pytorch | 50,897 | New RLlib API examples | ### Description
Are there any examples of RLlib new-API training for custom models/envs? All the complete pipelines are written for the old API.
For example, I want to build a basic PPO baseline for such an [env](https://github.com/Lux-AI-Challenge/Lux-Design-S3/blob/main/src/luxai_s3/wrappers.py)
### Use case
It would come in handy to provide such examples to speed up the new-API migration | open | 2025-02-25T22:03:45Z | 2025-03-11T16:20:52Z | https://github.com/ray-project/ray/issues/50897 | [
"enhancement",
"P3",
"rllib",
"rllib-docs-or-examples",
"rllib-newstack"
] | visibledmitrii | 1 |
microsoft/MMdnn | tensorflow | 899 | How does MMDNN implement the "retrain" part? | Platform (like ubuntu 16.04/win10):
Python version:
Source framework with version (like Tensorflow 1.4.1 with GPU):
Destination framework with version (like CNTK 2.3 with GPU):
Pre-trained model path (webpath or webdisk path):
Running scripts:
Since I saw that MMdnn has the ability to "retrain" a model, I do not really get how it works.
Does it mean we can still train our model on MMdnn's IR, or what? | open | 2020-10-06T11:53:09Z | 2020-10-10T05:27:04Z | https://github.com/microsoft/MMdnn/issues/899 | [] | calvin886 | 1 |