| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ray-project/ray | deep-learning | 50,694 | Make sure precommit hook linter and CI matches | ### What happened + What you expected to happen
As of now, these two don't match.
For example, even when a PR passes the local pre-commit hook, it can still fail CI: https://buildkite.com/ray-project/microcheck/builds/11578
### Versions / Dependencies
N/A
### Reproduction script
N/A
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-02-18T20:41:03Z | 2025-03-11T03:40:38Z | https://github.com/ray-project/ray/issues/50694 | [
"bug",
"P2"
] | dentiny | 3 |
plotly/dash-table | plotly | 627 | Footer | Table footers would be useful in some cases:
- show column totals or other aggregations
- live data: you could use the footer for quickly-changing/accumulating current data, without having to push the entire main data array with every change. Then when that data is finalized, move it into the main array just once and start a fresh row in the footer. | open | 2019-10-16T03:26:56Z | 2020-08-28T00:10:16Z | https://github.com/plotly/dash-table/issues/627 | [
"dash-type-enhancement"
] | alexcjohnson | 1 |
nltk/nltk | nlp | 2,459 | nltk.metrics.distance.jaro_similarity returns lower values than it should | Jaro similarity is supposed to give the same results if the strings are reversed:
```
from nltk.metrics import distance as dist
a='rureiter'
b='enreiter'
print("regular={}, reversed={}".format(dist.jaro_similarity(a, b), dist.jaro_similarity(a[::-1], b[::-1])))
```
The code above prints `regular=0.722222222222, reversed=0.833333333333`.
In fact, manual pen-and-paper examination shows that the correct similarity metric, in both cases, is 0.833333333333:
Separate the strings into prefix and suffix:
```
a = 'ru' + 'reiter'
b = 'en' + 'reiter'
```
The suffixes of `a` and `b` are equal. Because the suffixes are equal, and the prefixes have nothing in common between them, then the expected values are `matches == 6` and `transpositions == 0`. With these values:
```
a = 'ru' + 'reiter'
b = 'en' + 'reiter'
matches = 6
transpositions = 0
print(
1
/ 3
* (
matches / len(a)
+ matches / len(b)
+ (matches - transpositions // 2) / matches
)
)
```
The above code, unlike nltk, gives the correct answer of 0.8333333333333333.
The issue lies in the fact that the current implementation does not try to minimize the number of transpositions in its algorithm, contrary to its documentation:
> The Jaro distance between is the min no. of single-character transpositions
> required to change one word into another
The implementation simply finds the first occurrence of each character of the first string (`a`) in the second string (`b`). This order of matching does not guarantee that the match will be optimal. In this example, the match is:
The first character of `a` is "r", and is matched against the third character of `b`. From that point, the suffix "reiter" can't be matched in full. Worse, the next match of `a` is character "e" which is matched against the first character of `b`. This matching makes a transposition:
```
r u r e i t e r
\ _/
\/
/\
/ |
e n r e i t e r
```
Later, things get even worse. The last "e" of `a` gets matched against the middle "e" of `b`:
```
r u r e i t e r
\ _/ /
\/ __/
/\ /
/ | /
e n r e i t e r
```
This matching causes more transpositions, for no reason.
A correct Jaro algorithm should find the minimum possible number of transpositions.
With `matches=6` and `transpositions=4`, the result is 0.722222222222.
Note that `transpositions=4` because the list of matched indices of the second string is sorted, while the first is not. Before sorting:
```
flagged_1 = [0, 3, 4, 5, 6]
flagged_2 = [2, 0, 4, 5, 3]
```
After sorting:
```
flagged_1 = [0, 3, 4, 5, 6]
flagged_2 = [0, 2, 3, 4, 5] # 4 different entries
```
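The two scores discussed above follow directly from the Jaro formula once the match and transposition counts are fixed. A small sketch (the helper name is mine, not nltk's):

```python
def jaro_from_counts(matches, transpositions, len_a, len_b):
    """Jaro similarity from precomputed match/transposition counts."""
    if matches == 0:
        return 0.0
    return (
        matches / len_a
        + matches / len_b
        + (matches - transpositions // 2) / matches
    ) / 3

# Optimal matching of 'rureiter' vs 'enreiter': 6 matches, 0 transpositions.
print(jaro_from_counts(6, 0, 8, 8))  # 0.8333...
# Matching produced by the current implementation: 6 matches, 4 transpositions.
print(jaro_from_counts(6, 4, 8, 8))  # 0.7222...
```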
| closed | 2019-11-10T16:17:21Z | 2024-01-23T16:06:21Z | https://github.com/nltk/nltk/issues/2459 | [
"metrics"
] | michael-veksler | 4 |
eriklindernoren/ML-From-Scratch | data-science | 72 | Runtime error during backward_pass() of PoolingLayer | I greatly appreciate your work and clearly written code which gives incredible insights into the back propagation technique. I've encountered a bit of a bug which is pretty solvable, but I don't want to make a pull request as I'm not sure of default values here.
It's at layers.py:400 (at the end of the line, last param):
https://github.com/eriklindernoren/ML-From-Scratch/blob/a2806c6732eee8d27762edd6d864e0c179d8e9e8/mlfromscratch/deep_learning/layers.py#L400
The last param is supposed to be in the string-style enum format of padding type. It's passing a literal 0 when it should be passing self.padding. PoolingLayer should also have a valid default value for self.padding which is also 0 (which of course causes this same error). In the case of 0 being an acceptable default, that value should be acceptable by the receiving function determine_padding, which is where the error is raised:
https://github.com/eriklindernoren/ML-From-Scratch/blob/a2806c6732eee8d27762edd6d864e0c179d8e9e8/mlfromscratch/deep_learning/layers.py#L718
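A simplified stand-in for the mismatch (illustrative only, not the repo's exact source): determine_padding branches on the string values "same" and "valid", so a literal 0 matches neither branch.

```python
# Simplified stand-in: the real determine_padding in layers.py branches on
# a string padding mode, so a literal 0 falls through every branch.
def determine_padding(filter_shape, output_shape="same"):
    if output_shape == "valid":
        return (0, 0), (0, 0)
    elif output_shape == "same":
        filter_height, filter_width = filter_shape
        pad_h = ((filter_height - 1) // 2, filter_height // 2)
        pad_w = ((filter_width - 1) // 2, filter_width // 2)
        return pad_h, pad_w
    # Any other value (such as the literal 0 passed by PoolingLayer's
    # backward pass) implicitly returns None, breaking the caller.

assert determine_padding((2, 2), "same") == ((0, 1), (0, 1))
assert determine_padding((2, 2), 0) is None
```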
Again, thank you for this repository. Amazing work. | open | 2020-01-01T18:25:58Z | 2020-01-01T19:31:05Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/72 | [] | krworks | 1 |
agronholm/anyio | asyncio | 308 | Document expectations on the `ByteReceiveStream.receive()` method. | It's unclear if [the `ByteReceiveStream.receive()` method](https://anyio.readthedocs.io/en/stable/api.html#anyio.abc.ByteReceiveStream) can be expected to possibly return `b""`.
If it *might* return `b""` then when performing a read-some-data-or-timeout, the user needs to consider that as a possible case, and always loop until we *do* have some data to return.
If it *cannot/should not* return `b""` then that'll need to be an explicitly documented expectation.
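If `b""` is a possible return value, the caller-side loop described above might look like this sketch (generic, not anyio-specific: the stream is anything exposing an async `receive()`, and timeout handling is left to the caller's task-scoping tool of choice):

```python
import asyncio

async def receive_nonempty(stream):
    """Loop until the stream yields a non-empty chunk, treating b''
    as "no data yet" rather than end-of-stream."""
    while True:
        data = await stream.receive()
        if data:
            return data

# Demo against a stand-in stream that yields empty chunks first.
class FakeStream:
    def __init__(self, chunks):
        self._chunks = iter(chunks)

    async def receive(self):
        return next(self._chunks)

print(asyncio.run(receive_nonempty(FakeStream([b"", b"", b"hello"]))))  # b'hello'
```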
Prompted by https://github.com/encode/httpcore/issues/357, https://github.com/encode/httpcore/pull/358
| closed | 2021-06-11T10:29:12Z | 2021-06-17T15:33:41Z | https://github.com/agronholm/anyio/issues/308 | [
"documentation"
] | tomchristie | 2 |
microsoft/nni | machine-learning | 5,602 | Serializer behavior in v2.8 and v2.9 or higher | **Describe the issue**:
I'm trying to perform a model search in retiarii, but the behavior differs depending on the version of NNI.
The settings for nni.trace are as follows:
```
@nni.trace
class MyDataset(torch.utils.data.Dataset):
def __init__(self, root: str, train: bool = True):
filename = 'train.csv' if train else 'valid.csv'
df = pd.read_csv(os.path.join(root, filename))
self.x = df.iloc[:, 1:].values
self.y = df.iloc[:, 0].values
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
return torch.Tensor(self.x[idx]), self.y[idx]
train_dataset = MyDataset(root='./data', train=True)
test_dataset = MyDataset(root='./data', train=False)
import nni.retiarii.evaluator.pytorch.lightning as pl
trainer = pl.Classification(train_dataloader=pl.DataLoader(train_dataset, batch_size=100),
val_dataloaders=pl.DataLoader(test_dataset, batch_size=100),
optimizer=torch.optim.SGD,
max_epochs=100,
accelerator='gpu', devices=1)
```
Note: pl.Classification and pl.DataLoader are used as-is, since nni.trace is already applied to them.
When the NNI version is 2.8, it works fine, but when the NNI version is 2.9/2.10, an error occurs.
`PayloadTooLarge: Pickle too large when trying to dump <nni.nas.nn.pytorch.mutator.ParameterChoiceMutator object at 0x7f7379699550>. This might be caused by classes that are not decorated by @nni.trace. Another option is to force bytes pickling and try to raise pickle_size_limit.`
I need advice on how to modify nni.trace to work with NNI version 2.9/2.10.
I would like to check completed trials in nnictl view, but retiarii results can only be viewed from NNI version 2.9 onward, so I want to run the search with that version.
**Environment**:
- NNI version: 2.8/2.9/2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu
- Server OS (for remote mode only):
- Python version: 3.8.8
- PyTorch/TensorFlow version:1.9.1
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No
| closed | 2023-06-08T12:05:52Z | 2023-06-19T00:40:33Z | https://github.com/microsoft/nni/issues/5602 | [] | makonaga | 10 |
babysor/MockingBird | deep-learning | 524 | Running demo_toolbox.py with python3 reports "Error: Model files not found. Please download the models" | **Summary [Brief description (one sentence)]**
A clear and concise description of what the issue is.
**Env & To Reproduce[复现与环境]**
Describe the environment, code version, and model you used
**Screenshots [if any]**
If applicable, add screenshots to help
<img width="571" alt="image" src="https://user-images.githubusercontent.com/17965372/165520089-6559daf8-eb9f-42d4-86e0-91745516f116.png">
<img width="564" alt="image" src="https://user-images.githubusercontent.com/17965372/165520273-45558385-928d-4ad7-ac44-050a203c8779.png">
These modules are missing. Can they be installed with brew, or do they need to be installed with pip? Hoping someone knowledgeable can explain.
| closed | 2022-04-27T12:42:17Z | 2023-02-11T09:45:27Z | https://github.com/babysor/MockingBird/issues/524 | [] | kirin0926 | 2 |
huggingface/datasets | pytorch | 7,298 | loading dataset issue with load_dataset() when training controlnet | ### Describe the bug
I'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(); however, load_from_disk() seems to work.
I would appreciate it if someone could explain why that's the case.
1. for reference here's the structure of the original training files _before_ dataset creation -
```
- dir train
- dir A (illustrations)
- dir B (SignWriting)
- prompt.json containing:
{"source": "B/file.png", "target": "A/file.png", "prompt": "..."}
```
2. here are features _after_ dataset creation -
```
"features": {
"control_image": {
"_type": "Image"
},
"image": {
"_type": "Image"
},
"caption": {
"dtype": "string",
"_type": "Value"
}
```
3. I've also attempted to upload the dataset to huggingface with the same error output
### Steps to reproduce the bug
1. [dataset creation script](https://github.com/sign-language-processing/signwriting-illustration/blob/main/signwriting_illustration/controlnet_huggingface/dataset.py)
2. controlnet [training script](examples/controlnet/train_controlnet.py) used
3. training parameters -
```
! accelerate launch diffusers/examples/controlnet/train_controlnet.py \
--pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
--output_dir="$OUTPUT_DIR" \
--train_data_dir="$HF_DATASET_DIR" \
--conditioning_image_column=control_image \
--image_column=image \
--caption_column=caption \
--resolution=512 \
--learning_rate=1e-5 \
--validation_image "./validation/0a4b3c71265bb3a726457837428dda78.png" "./validation/0a5922fe2c638e6776bd62f623145004.png" "./validation/1c9f1a53106f64c682cf5d009ee7156f.png" \
--validation_prompt "An illustration of a man with short hair" "An illustration of a woman with short hair" "An illustration of Barack Obama" \
--train_batch_size=4 \
--num_train_epochs=500 \
--tracker_project_name="sd-controlnet-signwriting-test" \
--hub_model_id="sarahahtee/signwriting-illustration-test" \
--checkpointing_steps=5000 \
--validation_steps=1000 \
--report_to wandb \
--push_to_hub
```
4. command -
` sbatch --export=HUGGINGFACE_TOKEN=hf_token,WANDB_API_KEY=api_key script.sh`
### Expected behavior
```
11/25/2024 17:12:18 - INFO - __main__ - Initializing controlnet weights from unet
Generating train split: 1 examples [00:00, 334.85 examples/s]
Traceback (most recent call last):
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 1189, in <module>
main(args)
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 923, in main
train_dataset = make_train_dataset(args, tokenizer, accelerator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 639, in make_train_dataset
raise ValueError(
ValueError: `--image_column` value 'image' not found in dataset columns. Dataset columns are: _data_files, _fingerprint, _format_columns, _format_kwargs, _format_type, _output_all_columns, _split
```
### Environment info
accelerate 1.1.1
huggingface-hub 0.26.2
python 3.11
torch 2.5.1
transformers 4.46.2 | open | 2024-11-26T10:50:18Z | 2024-11-26T10:50:18Z | https://github.com/huggingface/datasets/issues/7298 | [] | sarahahtee | 0 |
aminalaee/sqladmin | asyncio | 361 | Two relations on model (foreign key), but one field on model | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
There are two relations to the users table (seller, buyer), but the generated view has only one users field.
```python
class DealModel(ormar.Model, BaseMixin):
"""DealModel."""
class Meta(ormar.ModelMeta):
"""Meta."""
tablename: str = "deals"
database: databases.Database = database
metadata: sqlalchemy.MetaData = metadata
stage: DealStage = ormar.Enum(
enum_class=DealStage,
default=DealStage.created,
default_server=DealStage.created,
nullable=False,
)
buyer: UserModel = ormar.ForeignKey(UserModel, nullable=False, related_name="buys")
seller: UserModel = ormar.ForeignKey(UserModel, nullable=False, related_name="sells")
pair: CurrencyPairModel = ormar.ForeignKey(CurrencyPairModel, nullable=False, related_name="deals")
session: SessionModel = ormar.ForeignKey(SessionModel, nullable=False, related_name="deals")
count: int = ormar.Integer(minimum=1, nullable=False)
rate: float = ormar.Float(minimum=0.01, nullable=False)
```
### Steps to reproduce the bug
_No response_
### Expected behavior
Expect two fields, Seller and Buyer.
### Actual behavior
_No response_
### Debugging material
No
### Environment
- Ubuntu 20.04
- Python 3.10
### Additional context
_No response_ | closed | 2022-10-19T13:53:22Z | 2022-11-08T10:59:28Z | https://github.com/aminalaee/sqladmin/issues/361 | [] | Egnod | 2 |
deepset-ai/haystack | machine-learning | 8,204 | DocumentStore deserialiation with from_dict creates a new class instead of calling from_dict on the DocumentStore | **Describe the bug**
Components whose `from_dict` deserializes a document store do not call `from_dict` on the document store; instead they create a new instance of it directly. That can be wrong if the functionality differs.
We found this problem while testing the new IAM workflow in the OpenSearch integration together with a basic component.
The OpenSearch components don't have this problem because they call the DocumentStore directly.
Link can be found below.
Affected components are:
- SentenceWindowRetriever
- CacheChecker
- FilterRetriever
Please double check this list, it was only a quick code search from my side.
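The difference can be reduced to a toy example (class and field names are illustrative, not Haystack's actual API):

```python
class ToyStore:
    """Stand-in document store with custom deserialization behavior."""
    def __init__(self, host="localhost"):
        self.host = host
        self.restored = False

    @classmethod
    def from_dict(cls, data):
        store = cls(**data["init_parameters"])
        store.restored = True  # custom logic that plain re-instantiation skips
        return store

serialized = {"type": "ToyStore", "init_parameters": {"host": "remote"}}

# What the affected components effectively do: re-create from init params.
direct = ToyStore(**serialized["init_parameters"])
# What they should do: delegate to the store's own from_dict.
proper = ToyStore.from_dict(serialized)

print(direct.restored, proper.restored)  # False True
```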
**Error message**
No error message thrown, only found that problem while testing the IAM opensearch setup.
**Expected behavior**
Components that serialize a document store should also deserialize it correctly.
**Additional context**
Test setup that shows that a serialized OpenSearch document store is not deserialized correctly:
https://github.com/deepset-ai/haystack-core-integrations/pull/972
**To Reproduce**
Steps to reproduce the behavior
**FAQ Check**
- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Arch Linux / Ubuntu on EC2
- GPU/CPU: /
- Haystack version (commit or version number): 2.3.1
- DocumentStore: OpenSearch
- Reader:
- Retriever: SentenceWindowRetriever or CacheChecker or FilterRetriever or others
| closed | 2024-08-12T16:16:13Z | 2024-08-14T08:56:33Z | https://github.com/deepset-ai/haystack/issues/8204 | [
"type:bug"
] | FHardow | 0 |
vi3k6i5/flashtext | nlp | 53 | Any plans for a Java port? | :-) | open | 2018-06-17T11:26:11Z | 2018-06-17T11:26:11Z | https://github.com/vi3k6i5/flashtext/issues/53 | [] | matanox | 0 |
nltk/nltk | nlp | 2,710 | Empty README file | #2514 added an empty `README` file next to the existing `README.md` | closed | 2021-05-07T16:59:39Z | 2021-05-13T10:42:01Z | https://github.com/nltk/nltk/issues/2710 | [] | remram44 | 0 |
hbldh/bleak | asyncio | 798 | Don't use bluetoothctl | * bleak version: 0.14.2
* Python version: 3
* Operating System: GNU/Linux
* BlueZ version: 5.60
### Description
Trying to use bleak inside a Flatpak sandbox fails because bluetoothctl, like all other system utils, can't be accessed.
Inside the bluezdbus backend in `__init__.py` a bluetoothctl subprocess is spawned to get the current bluez version.
I suggest using the BlueZ D-Bus interface directly to get the version.
| closed | 2022-04-06T09:10:39Z | 2023-03-18T16:50:15Z | https://github.com/hbldh/bleak/issues/798 | [
"enhancement",
"dependencies",
"Backend: BlueZ"
] | kq98 | 2 |
ultralytics/yolov5 | deep-learning | 12,723 | Retraining yolov5 for additional data | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have trained a YOLOv5 model on my custom dataset using the pretrained weights. It contains approximately 13,000 images for training and validation across a total of 8 classes. For one class, more data needs to be added to generalize prediction across various cases. How can I retrain the model to fit the newly added data without retraining it from scratch? It takes around half an hour to complete one epoch.
### Additional
_No response_ | closed | 2024-02-09T07:21:18Z | 2024-03-22T00:20:02Z | https://github.com/ultralytics/yolov5/issues/12723 | [
"question",
"Stale"
] | humairaneha | 2 |
docarray/docarray | fastapi | 1,264 | DocIndex: Validate if `search_field` is valid | When a user passes a `search_field` we should check in the abstract class if it correspons to one of the columns that was parsed from the schema. That we the backend implementer does not have to worry about it, and we can give a uniform error message. | closed | 2023-03-21T15:47:34Z | 2023-04-11T14:01:38Z | https://github.com/docarray/docarray/issues/1264 | [
"DocArray v2",
"good-first-issue",
"area/document-index"
] | JohannesMessner | 5 |
dropbox/PyHive | sqlalchemy | 235 | SyntaxError in Python 3.7 when importing hive | Importing hive yields the following error, with Python 3.7:
```python
>>> from pyhive import hive
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python37\lib\site-packages\pyhive\hive.py", line 337
def execute(self, operation, parameters=None, async=False):
^
SyntaxError: invalid syntax
```
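The failure can be reproduced without PyHive at all, since the definition is rejected at compile time:

```python
# `async` became a reserved keyword in Python 3.7, so a parameter named
# `async` makes the whole module fail to compile on import.
src = "def execute(self, operation, parameters=None, async=False): pass"
try:
    compile(src, "<hive.py>", "exec")
    raise AssertionError("unexpectedly compiled")
except SyntaxError:
    print("SyntaxError: 'async' is a reserved keyword in Python 3.7+")
```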
Undoubtedly due to `async` being a keyword in 3.7. | closed | 2018-09-07T08:09:17Z | 2018-09-10T16:59:09Z | https://github.com/dropbox/PyHive/issues/235 | [] | ragerin | 2 |
python-restx/flask-restx | api | 605 | Please output the schema name that is giving error | I got `Unable to render schema` and I had to tweak the library code to find out which of my 20+ schemas is causing the problem. By tweaking, I mean just printing out schemas in swagger.py's serialize_definitions(self) to until I receive an error. It would be less frustrating if the library directly told the name of my schema.
Here is the full traceback:
```
Unable to render schema
Traceback (most recent call last):
File "/home/ve/lib/python3.12/site-packages/flask_restx/api.py", line 571, in __schema__
self._schema = Swagger(self).as_dict()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/swagger.py", line 304, in as_dict
"definitions": self.serialize_definitions() or None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/swagger.py", line 661, in serialize_definitions
print(model.__schema__)
^^^^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/model.py", line 68, in __schema__
schema = self._schema
^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/model.py", line 151, in _schema
properties[name] = field.__schema__
^^^^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/werkzeug/utils.py", line 107, in __get__
value = self.fget(obj) # type: ignore
^^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/fields.py", line 217, in __schema__
return not_none(self.schema())
^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/fields.py", line 442, in schema
enum = self._v("enum")
^^^^^^^^^^^^^^^
File "/home/ve/lib/python3.12/site-packages/flask_restx/fields.py", line 213, in _v
return value() if callable(value) else value
^^^^^^^
TypeError: EnumType.__call__() missing 1 required positional argument: 'value'
```
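The requested behavior could be sketched like this (a self-contained illustration, not flask-restx's actual internals):

```python
def render_schemas(models):
    """Render each model's schema, naming the model that fails."""
    schemas = {}
    for name, render in models.items():
        try:
            schemas[name] = render()
        except Exception as exc:
            raise RuntimeError(
                f"Unable to render schema for model {name!r}"
            ) from exc
    return schemas

def bad_schema():
    raise TypeError("EnumType.__call__() missing 1 required positional argument")

try:
    render_schemas({"Pet": dict, "Order": bad_schema})
except RuntimeError as exc:
    print(exc)  # Unable to render schema for model 'Order'
```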
| open | 2024-05-23T14:30:26Z | 2024-05-23T14:33:03Z | https://github.com/python-restx/flask-restx/issues/605 | [
"enhancement"
] | Nafyaz | 0 |
lucidrains/vit-pytorch | computer-vision | 33 | why only first vector is sufficient for classification | Thank you very much for sharing this great code.
I wonder why only the [first vector](https://github.com/lucidrains/vit-pytorch/blob/f1deb5fb7e7606dcb1d648f6e22c5f0631dce0e4/vit_pytorch/vit_pytorch.py#L126) is sufficient for classifying the data (referred to in the paper as z0). I checked the paper, but it was not clear there either.
Besides, as mentioned in [this issue](https://github.com/lucidrains/vit-pytorch/issues/29), the other vectors are useless and not representative.
Can you suggest any explanation?
| closed | 2020-11-24T03:49:47Z | 2020-11-25T05:37:17Z | https://github.com/lucidrains/vit-pytorch/issues/33 | [] | besaman | 5 |
nerfstudio-project/nerfstudio | computer-vision | 2,989 | Adjustable camera positions | Is there a reason why nerf.studio, instant-ngp, dust3r etc don't allow users to manually correct the camera positions in the software itself visually? To me it seems like this would make sense for refinement? But at the moment we have to manually adjust a transforms.json file every time which just seems absurd.
| open | 2024-03-08T08:18:33Z | 2024-03-08T09:56:14Z | https://github.com/nerfstudio-project/nerfstudio/issues/2989 | [] | mrbid | 2 |
NullArray/AutoSploit | automation | 774 | Divided by zero exception44 | Error: Attempted to divide by zero.44 | closed | 2019-04-19T16:00:37Z | 2019-04-19T16:37:55Z | https://github.com/NullArray/AutoSploit/issues/774 | [] | AutosploitReporter | 0 |
PokeAPI/pokeapi | api | 1,044 | Why isnt csv a submodule just like sprites and cries? | Question basically given in title, but why isnt the dex data in a submodule like cries and sprites?
pokeapi/pokedex exists, probably cause its cloned from veekun/pokedex, but then this repo could be used to track technical issues, and pokedex for content issues. | open | 2024-02-15T09:37:04Z | 2024-02-21T20:48:52Z | https://github.com/PokeAPI/pokeapi/issues/1044 | [] | GreatNovaDragon | 2 |
google-research/bert | tensorflow | 451 | run_squad.py only seems to use one cpu (and ignore the GPU) | I am trying to perform the SQUAD training, but it seems to ignore the GTX1080 (although it is available to tensorflow) and run on the CPU, on a single core.
```
python run_squad.py \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v2.0.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v2.0.json \
--train_batch_size=8 \
--learning_rate=3e-5 \
--num_train_epochs=1.0 \
--max_seq_length=128 \
--doc_stride=128 \
--output_dir=/home/ben/Data/Bert_Models/Bert_SQUAD/ \
--version_2_with_negative=True \
--max_query_length 32
```
| closed | 2019-02-23T15:09:46Z | 2019-03-04T14:16:17Z | https://github.com/google-research/bert/issues/451 | [] | bbreton3 | 2 |
ionelmc/pytest-benchmark | pytest | 81 | KeyError: 'ops' with pytest-benchmark compare | I only have one saved benchmark currently, which is written by 3.0.0:
[0001_x.json.txt](https://github.com/ionelmc/pytest-benchmark/files/1170147/0001_x.json.txt) (renamed so I can upload it here)
When I do `pytest-benchmark compare` with 3.1.0, I get:
```
Computing stats ...Traceback (most recent call last):
File "./.tox/py36/bin/pytest-benchmark", line 11, in <module>
sys.exit(main())
File "/home/florian/proj/qutebrowser/git/.tox/py36/lib/python3.6/site-packages/pytest_benchmark/cli.py", line 162, in main
results_table.display(TerminalReporter(), groups, progress_reporter=report_noprogress)
File "/home/florian/proj/qutebrowser/git/.tox/py36/lib/python3.6/site-packages/pytest_benchmark/table.py", line 39, in display
benchmarks, tr, "{line} ({pos}/{total})", line=line))
File "/home/florian/proj/qutebrowser/git/.tox/py36/lib/python3.6/site-packages/pytest_benchmark/table.py", line 38, in <genexpr>
worst[prop] = min(bench[prop] for _, bench in progress_reporter(
KeyError: 'ops'
``` | closed | 2017-07-24T14:42:30Z | 2017-07-26T11:56:01Z | https://github.com/ionelmc/pytest-benchmark/issues/81 | [] | The-Compiler | 1 |
aleju/imgaug | deep-learning | 358 | numpy.dtype("f16") is not available (exception in dtype.py, numpy 1.10) | Hi,
```python
Traceback (most recent call last):
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\dtypes.py", line 65, in get_minimal_dtype
promoted_dt_highres = np.dtype(promoted_dt_highres)
TypeError: data type "f16" not understood
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train.py", line 383, in <module>
main(experiment_yml_path)
File "train.py", line 344, in main
callbacks=[model_checkpoint,tboard,reduce_lr])
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\keras\engine\training_generator.py", line 181, in fit_generator
generator_output = next(output_generator)
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\keras\utils\data_utils.py", line 709, in get
six.reraise(*sys.exc_info())
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\six.py", line 693, in reraise
raise value
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\keras\utils\data_utils.py", line 685, in get
inputs = self.queue.get(block=True).get()
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 608, in get
raise self._value
File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\keras\utils\data_utils.py", line 626, in next_sample
return six.next(_SHARED_SEQUENCES[uid])
File "train.py", line 80, in batch_gen
img_batch = img_aug.augment_images(img_batch.astype(np.float32))
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\augmenters\meta.py", line 603, in augment_images
hooks=hooks
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\augmenters\meta.py", line 2823, in _augment_images
hooks=hooks
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\augmenters\meta.py", line 515, in augment_images
hooks=hooks
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\augmenters\arithmetic.py", line 354, in _augment_images
increase_itemsize_factor=2)
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\dtypes.py", line 135, in promote_array_dtypes_
dt = get_minimal_dtype(dtypes, increase_itemsize_factor=increase_itemsize_factor)
File "D:\SEGMENT\new-bioseg\seg\lib\site-packages\imgaug\dtypes.py", line 77, in get_minimal_dtype
increase_itemsize_factor
TypeError: Unable to create a numpy dtype matching the name 'f16'. This error was caused when trying to find a minimal dtype covering the dtypes 'float32, float64' (which was determined to be 'float64') and then increasing its resolution (aka itemsize) by a factor of 2. This error can be avoided by choosing arrays with lower resolution dtypes as inputs, e.g. by reducing float32 to float16.
```
Maybe numpy.dtype("f16") doesn't work in numpy 1.10.
(https://docs.scipy.org/doc/numpy/user/basics.types.html)
So I edited imgaug/dtype.py line 65
(https://github.com/Blosc/bcolz/issues/270)
```python
def get_minimal_dtype(arrays, increase_itemsize_factor=1):
input_dts = [array.dtype if not isinstance(array, np.dtype) else array
for array in arrays]
promoted_dt = np.promote_types(*input_dts)
if increase_itemsize_factor > 1:
promoted_dt_highres = "%s%d" % (promoted_dt.kind, promoted_dt.itemsize * increase_itemsize_factor)
try:
if promoted_dt_highres == "f16":
promoted_dt_highres = np.dtype(np.longdouble) # <----------
promoted_dt_highres = np.dtype(promoted_dt_highres)
return promoted_dt_highres
except TypeError:
raise TypeError(
("Unable to create a numpy dtype matching the name '%s'. "
+ "This error was caused when trying to find a minimal dtype covering the dtypes '%s' (which was "
+ "determined to be '%s') and then increasing its resolution (aka itemsize) by a factor of %d. "
+ "This error can be avoided by choosing arrays with lower resolution dtypes as inputs, e.g. by "
+ "reducing float32 to float16.") % (
promoted_dt_highres,
", ".join([input_dt.name for input_dt in input_dts]),
promoted_dt.name,
increase_itemsize_factor
)
)
return promoted_dt
```
...And it works.
| open | 2019-07-16T05:13:58Z | 2019-08-14T14:54:56Z | https://github.com/aleju/imgaug/issues/358 | [] | KUR-creative | 1 |
assafelovic/gpt-researcher | automation | 981 | "Incompatible Model Error" or "JSON Error" | Hello, I'm encountering an issue with installing and running the GPT Researcher application. After following the installation steps, the application returns several errors when I try to initiate a search.
**Steps Followed to Install the Application:**

**Empty results after research on the application :**

**Summary of the Errors Encountered:**
Incompatible Model Error: I receive a message indicating that my API key does not have access to the gpt-4o-2024-08-06 model. It seems that the required model is not activated for my account, even though I have an active subscription on https://chatgpt.com/.
JSON Format Error: The message "Error in reading JSON, attempting to repair JSON" appears, suggesting an issue with the JSON configuration file. It seems the application is unable to read or load the default.json file correctly.
NoneType Error in JSON Parsing: A TypeError: expected string or bytes-like object, got 'NoneType' error occurs, indicating that the expected response is empty. This could be related to a missing API response or error handling issue in the code.
**Here is the log with errors:**
`INFO: connection open
Warning: Configuration not found at 'default'. Using default configuration.
Do you mean 'default.json'?
⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\Admin\gpt-researcher\gpt_researcher\actions\agent_creator.py", line 27, in choose_agent
response = await create_chat_completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<9 lines>...
)
^
File "C:\Users\Admin\gpt-researcher\gpt_researcher\utils\llm.py", line 60, in create_chat_completion
response = await provider.get_chat_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
messages, stream, websocket
^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Admin\gpt-researcher\gpt_researcher\llm_provider\generic\base.py", line 116, in get_chat_response
output = await self.llm.ainvoke(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\langchain_core\language_models\chat_models.py", line 307, in ainvoke
llm_result = await self.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
)
^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\langchain_core\language_models\chat_models.py", line 796, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
prompt_messages, stop=stop, callbacks=callbacks, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\langchain_core\language_models\chat_models.py", line 756, in agenerate
raise exceptions[0]
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\langchain_core\language_models\chat_models.py", line 924, in _agenerate_with_cache
result = await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
messages, stop=stop, run_manager=run_manager, **kwargs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\langchain_openai\chat_models\base.py", line 860, in _agenerate
response = await self.async_client.create(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\openai\resources\chat\completions.py", line 1661, in create
return await self._post(
^^^^^^^^^^^^^^^^^
...<41 lines>...
)
^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\openai\_base_client.py", line 1839, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\openai\_base_client.py", line 1533, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\openai\_base_client.py", line 1634, in _request
raise self._make_status_error_from_response(err.response) from None
openai.PermissionDeniedError: Error code: 403 - {'error': {'message': 'Project `proj_2VxSRsTQaqjx5PDs2D0LEpin` does not have access to model `gpt-4o-2024-08-06`', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 242, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send) # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\middleware\errors.py", line 152, in __call__
await self.app(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\middleware\cors.py", line 77, in __call__
await self.app(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
raise exc
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\routing.py", line 735, in app
await route.handle(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\routing.py", line 362, in handle
await self.app(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\routing.py", line 95, in app
await wrap_app_handling_exceptions(app, session)(scope, receive, send)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
raise exc
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\starlette\routing.py", line 93, in app
await func(session)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\site-packages\fastapi\routing.py", line 383, in app
await dependant.call(**solved_result.values)
File "C:\Users\Admin\gpt-researcher\backend\server\server.py", line 110, in websocket_endpoint
await handle_websocket_communication(websocket, manager)
File "C:\Users\Admin\gpt-researcher\backend\server\server_utils.py", line 121, in handle_websocket_communication
await handle_start_command(websocket, data, manager)
File "C:\Users\Admin\gpt-researcher\backend\server\server_utils.py", line 28, in handle_start_command
report = await manager.start_streaming(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
task, report_type, report_source, source_urls, tone, websocket, headers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Admin\gpt-researcher\backend\server\websocket_manager.py", line 66, in start_streaming
report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers = headers, config_path = config_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\gpt-researcher\backend\server\websocket_manager.py", line 108, in run_agent
report = await researcher.run()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\gpt-researcher\backend\report_type\basic_report\basic_report.py", line 41, in run
await researcher.conduct_research()
File "C:\Users\Admin\gpt-researcher\gpt_researcher\agent.py", line 88, in conduct_research
self.agent, self.role = await choose_agent(
^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
File "C:\Users\Admin\gpt-researcher\gpt_researcher\actions\agent_creator.py", line 44, in choose_agent
return await handle_json_error(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Admin\gpt-researcher\gpt_researcher\actions\agent_creator.py", line 55, in handle_json_error
json_string = extract_json_with_regex(response)
File "C:\Users\Admin\gpt-researcher\gpt_researcher\actions\agent_creator.py", line 71, in extract_json_with_regex
json_match = re.search(r"{.*?}", response, re.DOTALL)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python313\Lib\re\__init__.py", line 177, in search
return _compile(pattern, flags).search(string)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
TypeError: expected string or bytes-like object, got 'NoneType'
INFO: connection closed
INFO: 127.0.0.1:64127 - "GET / HTTP/1.1" 200 OK
` | closed | 2024-11-11T11:20:40Z | 2024-11-17T10:20:00Z | https://github.com/assafelovic/gpt-researcher/issues/981 | [] | sackosindou | 4 |
joeyespo/grip | flask | 169 | Style issues in Grip 4.x | For comparison, here is what a markdown file looks like in Grip 3.3.0…

…and Grip 4.1.0:

I already tried `grip --clear` but it did not help. Any ideas what could cause this?
_Chromium Version 48.0.2564.116 Ubuntu 15.10 (64-bit)_
| closed | 2016-03-08T15:34:43Z | 2016-04-13T04:24:37Z | https://github.com/joeyespo/grip/issues/169 | [
"duplicate"
] | p3k | 2 |
dolevf/graphw00f | graphql | 6 | ariadne and strawberry have conflicting signatures | While testing on an Ariadne engine sending `query @deprecated {__typename}` returned `Directive '@deprecated' may not be used on query.` which is the signature for strawberry. | closed | 2022-04-07T11:32:30Z | 2022-04-26T13:28:51Z | https://github.com/dolevf/graphw00f/issues/6 | [
"bug"
] | MdotTIM | 3 |
joke2k/django-environ | django | 473 | link in BACKERS.rst is broken causing the CI to break | <img width="966" alt="image" src="https://github.com/joke2k/django-environ/assets/245021/35ffe7a1-5e3d-4f85-8670-12ad04e41285">
Actual CI run is at https://github.com/joke2k/django-environ/actions/runs/5055990988/jobs/9072911874
from this commit https://github.com/joke2k/django-environ/pull/472/commits/688a4d0b941ee9e5bc7d3e8ce8087bcca9489d50
To be solved by merging PR #472 | closed | 2023-05-23T10:25:52Z | 2023-05-24T13:31:32Z | https://github.com/joke2k/django-environ/issues/473 | [] | simkimsia | 0 |
vllm-project/vllm | pytorch | 15,060 | [Bug]: --enable-chunked-prefill setting is not taking effect | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
INFO 03-18 20:15:46 __init__.py:183] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-gcp-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.14
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 4 MiB (4 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-ml-py3==7.352.0
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.48.1
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-7 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_REQUIRE_CUDA=cuda>=12.5 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551
NCCL_VERSION=2.22.3-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
CUDA_VERSION=12.5.1
LD_LIBRARY_PATH=/home/llm/llm_server_env/lib/python3.10/site-packages/cv2/../../lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
VLLM_USE_V1=1
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
When I don't specify a setting for `--enable-chunked-prefill`, I see it listed as `None` in the log. I was trying to use Llama-3.1-8B to make embeddings, so I set `architecture=LlamaModel` in my HF config.json and set `--task embedding`. After doing this, vLLM output an error that chunked prefill is not supported for pooling models. So I set `--enable-chunked-prefill` to `False`, but the setting is not taking effect, and I get the same error. I see this in the log:
INFO 03-18 20:10:54 api_server.py:835] vLLM API server version 0.7.0
INFO 03-18 20:10:54 api_server.py:836] args: Namespace(host='0.0.0.0', port=10000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Llama-3.1-8B-Instruct', task='embedding', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=8000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4.0, cpu_offload_gb=0, gpu_memory_utilization=0.85, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', 
long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, disable_log_requests=True, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
WARNING 03-18 20:10:54 arg_utils.py:1298] Setting max_num_batched_tokens to 2048 for OPENAI_API_SERVER usage context.
INFO 03-18 20:11:03 config.py:1483] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 03-18 20:11:04 core.py:45] Initializing an LLM engine (v0.7.0) with config: model='meta-llama/Llama-3.1-8B-Instruct', speculative_config=None, tokenizer='meta-llama/Llama-3.1-8B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8000, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=meta-llama/Llama-3.1-8B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
WARNING 03-18 20:11:05 registry.py:336] `mm_limits` has already been set for model=meta-llama/Llama-3.1-8B-Instruct, and will be overwritten by the new values.
INFO 03-18 20:11:05 gpu_model_runner.py:843] Starting to load model meta-llama/Llama-3.1-8B-Instruct...
INFO 03-18 20:11:05 cuda.py:157] Using Flash Attention backend on V1 engine.
INFO 03-18 20:11:05 topk_topp_sampler.py:34] Using FlashInfer for top-p & top-k sampling.
INFO 03-18 20:11:06 weight_utils.py:251] Using model weights format ['*.safetensors']
So I'm not able to disable chunked prefill, and so I can't generate the embedding.
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/llm/llm_server_env/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 899, in <module>
uvloop.run(run_server(args))
File "/home/llm/llm_server_env/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
return loop.run_until_complete(wrapper())
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/home/llm/llm_server_env/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
return await main
File "/home/llm/llm_server_env/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 863, in run_server
async with build_async_engine_client(args) as engine_client:
File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/home/llm/llm_server_env/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 133, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/home/llm/llm_server_env/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 157, in build_async_engine_client_from_engine_args
engine_client = AsyncLLMEngine.from_engine_args(
File "/home/llm/llm_server_env/lib/python3.10/site-packages/vllm/v1/engine/async_llm.py", line 100, in from_engine_args
vllm_config = engine_args.create_engine_config(usage_context)
File "/home/llm/llm_server_env/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1125, in create_engine_config
raise ValueError(msg)
ValueError: Chunked prefill is not supported for pooling models
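For what it's worth, a guess at a workaround (an unverified assumption on my part: the V1 engine, enabled by the `VLLM_USE_V1=1` visible in my environment above, may be what forces chunked prefill back on regardless of the flag). A sketch of launching with the V1 engine disabled:

```python
import os

def v0_environment(base=None):
    """Copy of the given environment with the vLLM V1 engine disabled.

    Assumption (not verified): under VLLM_USE_V1=1, the V1 engine re-enables
    chunked prefill unconditionally, so --enable-chunked-prefill=False is
    ignored. Falling back to V0 might let the flag take effect.
    """
    env = dict(os.environ if base is None else base)
    env["VLLM_USE_V1"] = "0"
    return env

launch_env = v0_environment()
# subprocess.Popen(["python", "-m", "vllm.entrypoints.openai.api_server", ...],
#                  env=launch_env)  # sketch only, not executed here
```

If this assumption is wrong, the inconsistency is probably in how `create_engine_config` reconciles the flag with the V1 defaults; I have not verified either path.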
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-18T20:34:56Z | 2025-03-19T03:11:21Z | https://github.com/vllm-project/vllm/issues/15060 | [
"bug"
] | amrobbins | 2 |
microsoft/hummingbird | scikit-learn | 22 | Rounding in small datasets | With small datasets, we often get a rounding error that we don't see with larger datasets.
repro:
```python
import numpy as np
import torch, pickle
from hummingbird import convert_sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

data = load_iris()
X, y = data.data, data.target
X_torch = torch.from_numpy(X)

model = RandomForestClassifier(n_estimators=10)
model.fit(X, y)

pytorch_model = convert_sklearn(
    model,
    extra_config={"tree_implementation": "perf_tree_trav"})

skl = model.predict_proba(X)
pytorch_model.to('cuda')
hum_gpu = pytorch_model(X_torch.to('cuda'))
np.testing.assert_allclose(skl, hum_gpu[1].data.to('cpu').numpy(), rtol=1e-06, atol=1e-06)
```
you get
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/envs/rapids/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 1533, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/opt/conda/envs/rapids/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 846, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-06, atol=1e-06
Mismatched elements: 10 / 450 (2.22%)
Max absolute difference: 0.1
Max relative difference: 1.
x: array([[1. , 0. , 0. ],
[1. , 0. , 0. ],
[1. , 0. , 0. ],...
y: array([[1. , 0. , 0. ],
[1. , 0. , 0. ],
[1. , 0. , 0. ],...
```
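A plausible contributing factor (an assumption, not verified against the converter internals): with `n_estimators=10` and fully-grown trees, each tree's leaf distribution is effectively one-hot, so `predict_proba` moves in steps of 0.1. A single tree taking the other side of a threshold comparison (e.g. float32 vs float64) produces exactly the `Max absolute difference: 0.1` above. A toy illustration:

```python
def forest_proba(votes, n_estimators=10):
    """Hard-vote average over per-tree class votes (toy model of the forest)."""
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    return {cls: n / n_estimators for cls, n in counts.items()}

agree = forest_proba([0] * 10)          # {0: 1.0}
one_flip = forest_proba([0] * 9 + [1])  # {0: 0.9, 1: 0.1}
max_abs_diff = agree[0] - one_flip[0]   # ~0.1, the step size in the report
```

With larger datasets the trees are deeper and samples near a flipped threshold are a smaller fraction, so the same flips stay under the tolerance.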
| closed | 2020-04-08T00:54:31Z | 2020-04-21T00:37:33Z | https://github.com/microsoft/hummingbird/issues/22 | [
"bug"
] | ksaur | 5 |
microsoft/MMdnn | tensorflow | 22 | Convert ResNet101 from TensorFlow to PyTorch | Dear @kitstar,
I want to convert a _ResNet V1 101_ model (from TF-Slim) to PyTorch. Would you please kindly help me to do that?
Just as another suggestion, I think it would be great if you created a README.md file for the PyTorch conversion section.
| closed | 2017-12-06T07:25:40Z | 2022-07-20T07:45:07Z | https://github.com/microsoft/MMdnn/issues/22 | [] | ahkarami | 19 |
getsentry/sentry | python | 87,480 | Sentry Grafana Integration | ### Problem Statement
We have 1 project, and there are 20 teams linked to it, each with their own URL. The errors/issues are linked to the teams with ownership rules. We cannot filter this project within the Grafana dashboard because the `url` field value is not available in Grafana.
### Solution Brainstorm
Could you please add the `url` field value to the Grafana Sentry integration?
### Product Area
Settings - Integrations | closed | 2025-03-20T10:42:21Z | 2025-03-21T22:49:54Z | https://github.com/getsentry/sentry/issues/87480 | [
"Product Area: Settings - Integrations"
] | NazAksay | 2 |
nltk/nltk | nlp | 3,127 | Determine whether a Punkt model is available without loading it | Hi,
In some code, I'd like to check whether a Punkt model is available or not without loading the file (that is, without using `sent_tokenize` with dummy text and the language). The way to do it is not documented at all.
I dug into `nltk.data.load` and the whole module to come up with a solution. It's really different from what I had to do with stopwords, where the solution is quite easy and searchable (`nltk.corpus.stopwords.fileids()`). The final solution I found is `nltk.data.find(f'tokenizers/punkt/{language}.pickle')`, which is super brittle and hard to find.
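For now, a stdlib-only sketch of the check (it looks for the pickle in the conventional `nltk_data` directories instead of calling `nltk.data.find`, which is an assumption about the on-disk layout, but it avoids loading anything):

```python
import os

def punkt_available(language, search_dirs=None):
    """Best-effort check for tokenizers/punkt/<language>.pickle on disk."""
    if search_dirs is None:
        search_dirs = [
            os.path.join(os.path.expanduser("~"), "nltk_data"),
            "/usr/local/share/nltk_data",
            "/usr/share/nltk_data",
        ]
    rel = os.path.join("tokenizers", "punkt", language + ".pickle")
    return any(os.path.isfile(os.path.join(d, rel)) for d in search_dirs)

# e.g. punkt_available("english") -> True only after `nltk.download("punkt")`
```

An official helper along these lines (a `fileids()`-style listing for Punkt models) would remove the need to hard-code the path at all.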
Would it be possible to have an easier way to determine whether a Punkt model is available for a given language? | open | 2023-03-02T16:08:45Z | 2023-11-27T17:38:00Z | https://github.com/nltk/nltk/issues/3127 | [] | dourouc05 | 1 |
huggingface/datasets | computer-vision | 7,354 | A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. | ### Describe the bug
Following this tutorial: https://huggingface.co/docs/diffusers/en/tutorials/basic_training and running it locally using VSCode on my MacBook. The first lines in the tutorial fail:
`from datasets import load_dataset`
`dataset = load_dataset('huggan/smithsonian_butterflies_subset', split="train")`
with this error:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.2 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2. and ImportError: numpy.core.multiarray failed to import.
Does from datasets import load_dataset really use NumPy 1.x?
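A small helper (hypothetical, stdlib-only) to confirm locally which NumPy major version the interpreter resolves, and therefore whether the `pip install "numpy<2"` downgrade the warning suggests would apply:

```python
import importlib

def numpy_major_or_none():
    """Major version of the installed NumPy, or None if it is not importable."""
    try:
        np = importlib.import_module("numpy")
    except ImportError:
        return None
    return int(np.__version__.split(".", 1)[0])

major = numpy_major_or_none()
# If this is 2, some compiled dependency in the venv was built against
# NumPy 1.x, which matches the warning text.
print(major)
```

The warning itself does not come from `datasets` being on NumPy 1.x; it comes from a compiled extension somewhere in the import chain that was built against the 1.x ABI.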
### Steps to reproduce the bug
Open VSCode. Create a new venv. Create a new ipynb file. Install the dependencies with `pip install diffusers[training]`, then try to run this line of code: `from datasets import load_dataset`
### Expected behavior
data is loaded
### Environment info
I ran `datasets-cli env` and got:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.2 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2. | closed | 2025-01-04T18:30:17Z | 2025-01-08T02:20:58Z | https://github.com/huggingface/datasets/issues/7354 | [] | jamessdixon | 1 |
fastapi/sqlmodel | sqlalchemy | 139 | equivalent for .subquery('t2') sqlalchemy | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
subq = session.exec(select(User.user_id,
                           func.max(User.created_at).label('maxdate'))
                    .group_by(User.user_id).subquery('t2'))
query = session.exec(select(User).join(subq, and_(
    User.user_id == subq.c.user_id,
    User.created_at == subq.c.maxdate))).all()
```
### Description
Error when trying to create a subquery
`Executable SQL or text() construct expected, got <sqlalchemy.sql.selectable.Subquery at 0x7f5cac0da990; t2>.`
trying this use case: https://stackoverflow.com/questions/45775724/sqlalchemy-group-by-and-return-max-date
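The target SQL from that answer can be reproduced with the stdlib alone, which at least pins down what the ORM code needs to emit (table and data here are made up to mirror the `User` model):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (user_id INTEGER, created_at TEXT);
    INSERT INTO users VALUES
        (1, '2021-01-01'), (1, '2021-06-01'), (2, '2021-03-01');
""")
# Latest row per user: join the table against a grouped subquery (t2).
rows = conn.execute("""
    SELECT u.user_id, u.created_at
    FROM users AS u
    JOIN (SELECT user_id, MAX(created_at) AS maxdate
          FROM users GROUP BY user_id) AS t2
      ON u.user_id = t2.user_id AND u.created_at = t2.maxdate
    ORDER BY u.user_id
""").fetchall()
assert rows == [(1, '2021-06-01'), (2, '2021-03-01')]
```

If I read the error right, the likely fix on the SQLModel side is to build the subquery without executing it: `subq = select(...).group_by(User.user_id).subquery('t2')` with no `session.exec` around it, and only pass the outer `select(User).join(...)` to `session.exec`.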
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.7.9
### Additional Context
_No response_ | open | 2021-10-19T02:01:37Z | 2022-06-01T12:56:50Z | https://github.com/fastapi/sqlmodel/issues/139 | [
"question"
] | movaldivia | 1 |
serengil/deepface | machine-learning | 796 | how to use gpu or something else to increase the speed of prediction | I want to use video frames as the input, but the fps of the video gets only 3-4 fps | closed | 2023-07-10T02:27:06Z | 2023-07-16T05:26:17Z | https://github.com/serengil/deepface/issues/796 | [
"question"
] | divergent020620 | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 370 | What is the better/best optimization policy (xyz_gradient_accum)? (has detailed description) | Currently:

    for iteration:
        ...
        loss.backward()            # gradients generated !!!
        ...
        add_densification_stats()  # gradients accumulated !!!
        ...
        if iteration > densify_from_iter and iteration % densification_interval == 0:
            densify_and_prune() -> densify_and_clone() | densify_and_split() -> densification_postfix()  # gradients zeroed !!!
        optimizer.step()           # gradients used, every epoch

Question:
How to get a better 'gradient accumulation / re-zero' hyperparameter?
- fixed, like the paper: epochs // fixed-number
- OR referring to the number of images: 1x, 2x, 3x, ... Nx the number of images?
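The two schedules being compared can be sketched as a toy function (all names and default values here are hypothetical, not taken from the codebase):

```python
# Contrast a fixed densification interval (as in the paper) with an
# interval scaled by the number of training images.
def densify_iterations(total_iters, num_images, fixed_interval=100, n=2):
    fixed = [i for i in range(1, total_iters + 1)
             if i % fixed_interval == 0]
    image_scaled = [i for i in range(1, total_iters + 1)
                    if i % (n * num_images) == 0]
    return fixed, image_scaled

fixed, image_scaled = densify_iterations(1000, 150)
```

With an image-scaled interval, the accumulation window grows with dataset size, so each Gaussian sees every view at least n times before its accumulated gradient is reset.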
| closed | 2023-10-21T14:15:29Z | 2023-10-21T14:40:27Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/370 | [] | yuedajiong | 1 |
allenai/allennlp | nlp | 5,405 | Evaluator Class to allow more metrics and saving of input tokens during evaluation | **Is your feature request related to a problem? Please describe.**
An `Evaluator` class, similar to the existing `Trainer`, would allow specifying more metrics to run and how to post-process the input batches for saving.
**Describe the solution you'd like**
An `Evaluator` class similar to [`Trainer`](https://github.com/allenai/allennlp/blob/main/allennlp/training/trainer.py) that would have a key `evaluator` in the config.
The class config would have:
- Metrics to call - Some metrics are too computationally intensive to be run during training, but there currently is no way to use them in `allennlp evaluate`.
- Source & target namespaces - Currently, when predictions are saved, there is no way to save the inputs and targets as well. It makes it difficult to compare multiple models on the same dataset without implementing some other alignment method.
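A rough plain-Python sketch of the shape such a class could take (all names here are hypothetical, not real AllenNLP API):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Evaluator:
    # Metrics that run only at evaluation time, not during training.
    metrics: Dict[str, Callable] = field(default_factory=dict)
    # Namespaces used to pull inputs/targets out of each batch for saving.
    source_namespace: str = "tokens"
    target_namespace: str = "labels"

    def evaluate(self, batches: List[dict]):
        totals = {name: 0.0 for name in self.metrics}
        saved = []
        for batch in batches:
            # Save inputs and targets alongside predictions so that
            # multiple models can be compared on the same dataset.
            saved.append({
                "source": batch[self.source_namespace],
                "target": batch[self.target_namespace],
            })
            for name, metric in self.metrics.items():
                totals[name] += metric(batch)
        n = max(len(batches), 1)
        return {name: total / n for name, total in totals.items()}, saved
```

The point of the sketch is the separation: evaluation-only metrics and the source/target namespaces live in the evaluator's config, not in the model.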
**Describe alternatives you've considered**
Modifying the [`evaluate`](https://github.com/allenai/allennlp/blob/48af9d3477733ec1b63f3351b8b80eab51e370fe/allennlp/training/util.py#L300) method required implementing more mixins to allow postprocessing the batch so that the inputs can be saved.
For the metrics, I have yet to find a way (that is not done inside of the model) to have some metrics not run during training but do run during evaluation.
This is something I could work on but wanted to see first if there was interest in such a feature.
| closed | 2021-09-12T13:17:07Z | 2022-01-27T13:24:39Z | https://github.com/allenai/allennlp/issues/5405 | [
"Contributions welcome"
] | gabeorlanski | 12 |
dropbox/PyHive | sqlalchemy | 21 | Pre-fetch method | When working with large amounts of data, it'd be nice to have fetch continue to pull records in another thread. For example, cursor.prefetchmany(100000) would return 100k rows on the first call, then spawn a new thread to fetch the next 100k rows.
| closed | 2015-06-30T21:58:14Z | 2015-07-01T20:40:36Z | https://github.com/dropbox/PyHive/issues/21 | [] | Downchuck | 3 |
wandb/wandb | data-science | 9,091 | [Bug]: config bug when using wandb with python-lightning | ### Describe the bug
<!--- Describe your issue here --->
When using wandb with pytorch-lightning, we use `from lightning.pytorch.loggers import WandbLogger` and `wandb_logger = WandbLogger()` to define a logger, and then pass it as a parameter to `lightning.Trainer`.
It raises an error when trying to set up a config on the logger through `wandb_logger.experiment.config.update(conf)`.
The error information is:
File "***", line 70, in <module>
wandb_logger.experiment.config.update(conf)
AttributeError: 'function' object has no attribute 'update'
I updated the lightning and wandb packages trying to solve the problem, but it still failed.
The versions of the related packages and the OS are:
- wandb 0.19.1
- lightning 2.4.0
- python 3.10.0
- ubuntu 22.04LTS
all the related codes are like


| closed | 2024-12-15T13:08:03Z | 2025-01-08T17:44:42Z | https://github.com/wandb/wandb/issues/9091 | [
"ty:bug",
"c:sdk:integration",
"c:sdk:config"
] | Chandery | 11 |
lepture/authlib | flask | 173 | Starlette client no longer works with httpx 0.8.0 | **Describe the bug**
With httpx==0.7.8, the example https://github.com/authlib/demo-oauth-client/tree/master/starlette-google-login runs as is. However, with httpx==0.8.0, it does not.
**Error Stacks**
```
ImportError: cannot import name 'AsyncClient' from 'httpx' (/.../venv/lib/python3.7/site-packages/httpx/__init__.py)
```
**To Reproduce**
Run the example code with the latest httpx, httpx==0.8.0.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment:**
- OS: MacOs
- Python Version: 3.7
- Authlib Version: 0.13
**Additional context**
Add any other context about the problem here.
| closed | 2019-11-30T18:31:46Z | 2019-12-01T09:51:11Z | https://github.com/lepture/authlib/issues/173 | [
"bug"
] | jorgecarleitao | 3 |
explosion/spaCy | data-science | 13,462 | The `transition_parser` in `Spacy` is not compatible with the use of cuda for inference | I am facing an issue where I am trying to run a spaCy-based pipeline using the `en_core_web_trf:3.7.3` model, whereby the `transition_parser` seems to be placing tensors on the CPU instead of the GPU, as can be seen in the logs below:
```
2024-04-26 10:31:25,319 [mlserver.parallel] ERROR - An error occurred calling method 'predict' from model 'exemplar-relation-extraction-service'.
Traceback (most recent call last):
File "/home/adarga/app/server.py", line 191, in predict
for sentence_spacy, request in zip(sentences_spacy, requests, strict=False):
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/language.py", line 1618, in pipe
for doc in docs:
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1703, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/transition_parser.pyx", line 245, in pipe
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1650, in minibatch
batch = list(itertools.islice(items, int(batch_size)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1703, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/pipe.pyx", line 55, in pipe
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1703, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/pipe.pyx", line 55, in pipe
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1703, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/transition_parser.pyx", line 245, in pipe
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1650, in minibatch
batch = list(itertools.islice(items, int(batch_size)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1703, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy/pipeline/trainable_pipe.pyx", line 73, in pipe
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1650, in minibatch
batch = list(itertools.islice(items, int(batch_size)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy/util.py", line 1703, in _pipe
yield from proc.pipe(docs, **kwargs)
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy_curated_transformers/pipeline/transformer.py", line 210, in pipe
preds = self.predict(batch)
^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy_curated_transformers/pipeline/transformer.py", line 242, in predict
return self.model.predict(docs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/model.py", line 334, in predict
return self._func(self, X, is_train=False)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy_curated_transformers/models/architectures.py", line 651, in transformer_model_forward
Y, backprop_layer = model.layers[0](docs, is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/model.py", line 310, in __call__
return self._func(self, X, is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy_curated_transformers/models/with_non_ws_tokens.py", line 72, in with_non_ws_tokens_forward
Y_no_ws, backprop_no_ws = inner(tokens, is_train)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/model.py", line 310, in __call__
return self._func(self, X, is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/layers/chain.py", line 54, in forward
Y, inc_layer_grad = layer(X, is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/model.py", line 310, in __call__
return self._func(self, X, is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/spacy_curated_transformers/models/with_strided_spans.py", line 108, in with_strided_spans_forward
output, bp = transformer(cast(TorchTransformerInT, batch), is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/model.py", line 310, in __call__
return self._func(self, X, is_train=is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/layers/pytorchwrapper.py", line 225, in forward
Ytorch, torch_backprop = model.shims[0](Xtorch, is_train)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/shims/pytorch.py", line 97, in __call__
return self.predict(inputs), lambda a: ...
^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/thinc/shims/pytorch.py", line 115, in predict
outputs = self._model(*inputs.args, **inputs.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/curated_transformers/models/curated_transformer.py", line 37, in forward
return self.curated_encoder.forward(input_ids, attention_mask, token_type_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/curated_transformers/models/roberta/encoder.py", line 46, in forward
embeddings = self.embeddings(input_ids, token_type_ids, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/curated_transformers/models/roberta/embeddings.py", line 42, in forward
return self.inner(input_ids, token_type_ids, position_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/curated_transformers/models/bert/embeddings.py", line 61, in forward
input_embeddings = self.word_embeddings(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/modules/sparse.py", line 163, in forward
return F.embedding(
^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/functional.py", line 2206, in embedding
return handle_torch_function(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/overrides.py", line 1604, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/utils/_device.py", line 77, in __torch_function__
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/pysetup/.venv/lib/python3.11/site-packages/torch/nn/functional.py", line 2237, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/pysetup/.venv/lib/python3.11/site-packages/mlserver/parallel/worker.py", line 136, in _process_request
return_value = await method(
^^^^^^^^^^^^^
File "/home/adarga/app/server.py", line 219, in predict
raise InferenceError(f"Error during relation extraction: {e}") from e
mlserver.errors.InferenceError: Error during relation extraction: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
I tried multiple fixes, such as `torch.set_default_device("cuda:0")` and `torch.set_default_dtype`, but none of these seem to work.
## How to reproduce the behaviour
This error is encountered using the model in an MLServer deployment. It is a bit difficult to provide reproduction code here.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04
* Python Version Used: 3.11
* spaCy Version Used: 3.7.4[cupy-cuda12x]
* Environment Information: docker container running on an aws g4 instance
| closed | 2024-04-26T13:26:20Z | 2024-06-29T00:02:28Z | https://github.com/explosion/spaCy/issues/13462 | [
"gpu",
"feat / transformer"
] | hseelawi | 3 |
deepfakes/faceswap | machine-learning | 1,163 | Legacy face centering is not working correctly | Legacy face centering is not working correctly. I'm uploading a preview image for face and legacy centering to compare.


| closed | 2021-06-18T11:39:53Z | 2022-08-29T01:13:38Z | https://github.com/deepfakes/faceswap/issues/1163 | [
"bug"
] | dmiszkiewicz | 2 |
httpie/cli | rest-api | 853 | Cookies set to expire in HTTP response are not removed from session file | When using the `--session` option and storing a session in a file, when the response includes a `Set-Cookie` header that sets a cookie to a date in the past, the cookie is not cleared out of the session file as I would expect. Because of this, for example, cookies that are set to expire remain in the session and are still sent in subsequent requests.
Thank you. | closed | 2020-02-15T17:10:01Z | 2020-06-15T20:28:04Z | https://github.com/httpie/cli/issues/853 | [
"bug",
"help wanted"
] | rca | 2 |
kochlisGit/ProphitBet-Soccer-Bets-Predictor | seaborn | 81 | NaN values in HW Series | I am wondering why there are NaN values in the HW column created by this method:

    def compute_home_wins(self, matches_df: pd.DataFrame) -> pd.DataFrame:
        matches_df['HW'] = self._compute_last_results(matches_df=matches_df, team_column='Home Team', result='H')
        return matches_df

in `database/repositories/leagues.py`.
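For illustration, a small pandas sketch (assumed logic, not the project's actual `_compute_last_results`) of why such "last results" columns naturally contain NaN for a team's first matches:

```python
import pandas as pd

df = pd.DataFrame({"Home Team": ["A", "A", "A"],
                   "FTR": ["H", "D", "H"]})
wins = (df["FTR"] == "H").astype(int)
# Shift so a row only sees *previous* matches; the first row has no
# history at all, which pandas encodes as NaN.
df["HW"] = wins.shift(1).rolling(2, min_periods=1).sum()
```

If the real computation follows a similar shift-then-aggregate pattern, the NaNs would simply mark rows where the team has no prior matches in the window.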
| open | 2024-04-11T12:46:29Z | 2024-04-11T12:46:29Z | https://github.com/kochlisGit/ProphitBet-Soccer-Bets-Predictor/issues/81 | [] | kwadwobro | 0 |
stanfordnlp/stanza | nlp | 1187 | Permission Denied when running stanza | I have a Django app hosted on Ubuntu/Apache where I am carrying out POS tagging with stanza.
Every time I launch the application I am getting this error:
` Permission denied: 'home/ooglobe/tmpw_4rz8p9',` and when I refresh the page the temp file keeps changing `Permission denied: 'home/ooglobe/tmp6nn3pf0o',`
This is my code
`import stanza`
`stanza.download('en',processors='tokenize,pos',model_dir='home/ooglobe/')`
`def extract_nouns_and_verbs(text):`
`nlp = stanza.Pipeline(processors="tokenize,pos", dir="/home/ooglobe/stanza_resources")`
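One thing worth checking (an observation from the error text, not a confirmed fix): the failing paths such as `'home/ooglobe/tmpw_4rz8p9'` have no leading slash, so they are resolved relative to the process working directory rather than under `/home`. A stdlib check:

```python
from pathlib import PurePosixPath

# 'home/ooglobe/...' (no leading slash) is a *relative* path, resolved
# against the current working directory; '/home/ooglobe/...' is absolute.
relative = PurePosixPath("home/ooglobe/tmpw_4rz8p9")
absolute = PurePosixPath("/home/ooglobe/stanza_resources")
```

Under Apache the working directory is often somewhere the app user cannot write, which would produce exactly this kind of permission error.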
Not sure what I am missing here | closed | 2023-02-03T09:02:32Z | 2023-02-03T13:23:48Z | https://github.com/stanfordnlp/stanza/issues/1187 | [] | ebyau | 0 |
sktime/pytorch-forecasting | pandas | 1,678 | [MNT] retroactively fix failing readthedocs versions 1.1.0 and 1.1.1 | Retroactively, the readthedocs versions 1.1.0 and 1.1.1 are failing due to faulty/deprecated `.readthedocs.yml` settings.
We should try to fix this, although I'm not sure whether it can be done.
A fix would be:
1. replace the `.readthedocs.yml` at the tag with the current one, and replace the `pyproject.toml` with one where the current `docs` depset is present, but make no other changes to `pyproject.toml` (I am not sure how to do this properly)
2. initiate build on readthedocs of 1.1.0 and 1.1.1 | open | 2024-09-20T19:18:59Z | 2024-09-20T19:19:15Z | https://github.com/sktime/pytorch-forecasting/issues/1678 | [
"documentation",
"maintenance"
] | fkiraly | 0 |
healthchecks/healthchecks | django | 613 | Integration Disabled | Hello,
I'm using the webhook integration for Mattermost. I don't know why (I can't see any error), but after some time I stop receiving messages, and when I go to the integrations page I see the Mattermost integration is disabled.
If I run a test I receive the message, but I don't get any messages from actual alerts.
The only way I can resolve it is to delete and recreate the integration. | closed | 2022-03-01T22:43:35Z | 2022-03-09T10:25:54Z | https://github.com/healthchecks/healthchecks/issues/613 | [] | lettore | 4 |
pyppeteer/pyppeteer | automation | 436 | 1.0.2: pytest is failing in most of the units wih `Browser closed unexpectedly` | I'm packaging your module as an rpm package so I'm using the typical PEP517 based build, install and test cycle used on building packages from non-root account.
- `python3 -sBm build -w --no-isolation`
- because I'm calling `build` with `--no-isolation` I'm using during all processes only locally installed modules
- install .whl file in </install/prefix> using 'installer` module
- run pytest with $PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>
- build is performed in env which is *`cut off from access to the public network`* (pytest is executed with `-m "not network"`)
Here is pytest output:
<details>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pyppeteer-1.0.2-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pyppeteer-1.0.2-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network'
============================= test session starts ==============================
platform linux -- Python 3.8.16, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/pyppeteer-1.0.2
configfile: tox.ini
plugins: xdist-3.2.0
gw0 I / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I
gw0 [477] / gw1 [477] / gw2 [477] / gw3 [477] / gw4 [477] / gw5 [477]
FEEEEEEEEEEEEEEEFEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEFEEEEEEEEEEEEEEEEEEE [ 15%]
EEEEEEFEEEEEEEEEEEEEEEEEEEEEEEEEFEEEEEEEEEEEEEEEEEEEEFEEEEEEEEEEEEEEEEEE [ 30%]
EFEEEEEEEEEEEEEEEEEEEEE...F..FF...........EEEEEEEEEEEEEEEEEEEEFsFEE.EEEE [ 45%]
EEEEEEEFF.EEEEEEEEEEEEEEEEEEEEEEEEEEEF.EEEEEEEEEEEEEEEEEEEEEEEEEEFF..sEE [ 60%]
EEEEEEEEEEEEEEEEEEEEEEEFF.EEEEEEEEEEEEEFEEEEEEEEEEEEEEEEEEEEEF.sEEEEEEEE [ 75%]
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE [ 90%]
EEEEEEEEEEEEEEFEEFEEFFFEEEEEEEEEEEEEEEEEEEEEE [100%]
==================================== ERRORS ====================================
_________ ERROR at setup of TestCDPSession.test_enable_disable_domain __________
[gw1] linux -- Python 3.8.16 /usr/bin/python3
cls = <class 'tests.test_connection.TestCDPSession'>
@classmethod
def setUpClass(cls):
cls.port = get_free_port()
cls.app = get_application()
cls.server = cls.app.listen(cls.port)
> cls.browser = sync(launch(DEFAULT_OPTIONS))
tests/base.py:22:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python3.8/functools.py:875: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
/usr/lib/python3.8/site-packages/syncer.py:33: in sync_co
return asyncio.get_event_loop().run_until_complete(co)
/usr/lib64/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
pyppeteer/launcher.py:307: in launch
return await Launcher(options, **kwargs).launch()
pyppeteer/launcher.py:168: in launch
self.browserWSEndpoint = get_ws_endpoint(self.url)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
url = 'http://127.0.0.1:52497/json/version'
def get_ws_endpoint(url) -> str:
url = url + '/json/version'
timeout = time.time() + 30
while (True):
if time.time() > timeout:
> raise BrowserError('Browser closed unexpectedly:\n')
E pyppeteer.errors.BrowserError: Browser closed unexpectedly:
pyppeteer/launcher.py:227: BrowserError
______________ ERROR at setup of TestEvaluate.test_frame_evaluate ______________
[gw5] linux -- Python 3.8.16 /usr/bin/python3
cls = <class 'tests.test_frame.TestEvaluate'>
@classmethod
def setUpClass(cls):
cls.port = get_free_port()
cls.app = get_application()
cls.server = cls.app.listen(cls.port)
> cls.browser = sync(launch(DEFAULT_OPTIONS))
tests/base.py:22:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python3.8/functools.py:875: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
/usr/lib/python3.8/site-packages/syncer.py:33: in sync_co
return asyncio.get_event_loop().run_until_complete(co)
/usr/lib64/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
pyppeteer/launcher.py:307: in launch
return await Launcher(options, **kwargs).launch()
pyppeteer/launcher.py:168: in launch
self.browserWSEndpoint = get_ws_endpoint(self.url)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
url = 'http://127.0.0.1:33613/json/version'
def get_ws_endpoint(url) -> str:
url = url + '/json/version'
timeout = time.time() + 30
while (True):
if time.time() > timeout:
> raise BrowserError('Browser closed unexpectedly:\n')
E pyppeteer.errors.BrowserError: Browser closed unexpectedly:
pyppeteer/launcher.py:227: BrowserError
___________ ERROR at setup of TestQuerySelector.test_xpath_not_found ___________
[gw4] linux -- Python 3.8.16 /usr/bin/python3
cls = <class 'tests.test_element_handle.TestQuerySelector'>
@classmethod
def setUpClass(cls):
cls.port = get_free_port()
cls.app = get_application()
cls.server = cls.app.listen(cls.port)
> cls.browser = sync(launch(DEFAULT_OPTIONS))
tests/base.py:22:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python3.8/functools.py:875: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
/usr/lib/python3.8/site-packages/syncer.py:33: in sync_co
return asyncio.get_event_loop().run_until_complete(co)
/usr/lib64/python3.8/asyncio/base_events.py:616: in run_until_complete
return future.result()
pyppeteer/launcher.py:307: in launch
return await Launcher(options, **kwargs).launch()
pyppeteer/launcher.py:168: in launch
self.browserWSEndpoint = get_ws_endpoint(self.url)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
url = 'http://127.0.0.1:45435/json/version'
def get_ws_endpoint(url) -> str:
url = url + '/json/version'
timeout = time.time() + 30
while (True):
if time.time() > timeout:
> raise BrowserError('Browser closed unexpectedly:\n')
E pyppeteer.errors.BrowserError: Browser closed unexpectedly:
pyppeteer/launcher.py:227: BrowserError
[... and so on ...]
=============================== warnings summary ===============================
pyppeteer/us_keyboard_layout.py:73
pyppeteer/us_keyboard_layout.py:73
pyppeteer/us_keyboard_layout.py:73
pyppeteer/us_keyboard_layout.py:73
pyppeteer/us_keyboard_layout.py:73
pyppeteer/us_keyboard_layout.py:73
/home/tkloczko/rpmbuild/BUILD/pyppeteer-1.0.2/pyppeteer/us_keyboard_layout.py:73: DeprecationWarning: invalid escape sequence \(
'Digit9': {'keyCode': 57, 'code': 'Digit9', 'shiftKey': '\(', 'key': '9'},
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/test_launcher.py:232: need server-side implementation
SKIPPED [1] tests/test_launcher.py:102: should fix ignoreHTTPSErrors.
SKIPPED [1] tests/test_launcher.py:483: This test hangs
ERROR tests/test_connection.py::TestCDPSession::test_enable_disable_domain - ...
ERROR tests/test_frame.py::TestEvaluate::test_frame_evaluate - pyppeteer.erro...
ERROR tests/test_element_handle.py::TestQuerySelector::test_xpath_not_found
ERROR tests/test_element_handle.py::TestClick::test_detached_node - pyppeteer...
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_no_coverage
ERROR tests/test_connection.py::TestCDPSession::test_send_event - pyppeteer.e...
ERROR tests/test_frame.py::TestEvaluate::test_frame_evaluate_after_navigation
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_no_reset_navigation
ERROR tests/test_element_handle.py::TestClick::test_hidden_node - pyppeteer.e...
ERROR tests/test_element_handle.py::TestClick::test_recursively_hidden_node
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_reset_navigation
ERROR tests/test_element_handle.py::TestClick::test_shadow_dom - pyppeteer.er...
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_url - pyppet...
ERROR tests/test_element_handle.py::TestClick::test_text_node - pyppeteer.err...
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_ignore_injected_css
ERROR tests/test_execution_context.py::TestQueryObject::test_query_objects - ...
ERROR tests/test_frame.py::TestWaitForFunction::test_bad_polling_value - pypp...
ERROR tests/test_coverage.py::TestJSCoverage::test_ignore_eval_script_by_default
ERROR tests/test_execution_context.py::TestQueryObject::test_query_objects_disposed
ERROR tests/test_frame.py::TestWaitForFunction::test_before_execution_context_resolved
ERROR tests/test_coverage.py::TestJSCoverage::test_ignore_injected_script - p...
ERROR tests/test_execution_context.py::TestQueryObject::test_query_objects_primitive_value_error
ERROR tests/test_frame.py::TestWaitForFunction::test_csp - pyppeteer.errors.B...
ERROR tests/test_coverage.py::TestJSCoverage::test_ignore_injected_script_with_reportAnonymousScript
ERROR tests/test_frame.py::TestWaitForFunction::test_disable_timeout - pyppet...
ERROR tests/test_element_handle.py::TestHover::test_hover - pyppeteer.errors....
ERROR tests/test_dialog.py::TestDialog::test_alert - pyppeteer.errors.Browser...
ERROR tests/test_frame.py::TestWaitForFunction::test_negative_polling_value
ERROR tests/test_dialog.py::TestDialog::test_prompt - pyppeteer.errors.Browse...
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage - pyppeteer.er...
ERROR tests/test_frame.py::TestWaitForFunction::test_poll_on_interval - pyppe...
ERROR tests/test_dialog.py::TestDialog::test_prompt_dismiss - pyppeteer.error...
ERROR tests/test_frame.py::TestWaitForFunction::test_poll_on_mutation - pyppe...
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_condition - py...
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_ignore_empty
ERROR tests/test_frame.py::TestWaitForFunction::test_poll_on_raf - pyppeteer....
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_multiple_script
ERROR tests/test_frame.py::TestWaitForFunction::test_respect_timeout - pyppet...
ERROR tests/test_frame.py::TestWaitForFunction::test_wait_for_expression - py...
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_no_reset_navigation
ERROR tests/test_frame.py::TestWaitForFunction::test_wait_for_function - pypp...
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_ranges - pyppe...
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_reset_navigation
ERROR tests/test_frame.py::TestWaitForFunction::test_wait_for_function_arg_element
ERROR tests/test_coverage.py::TestJSCoverage::test_js_coverage_source_url - p...
ERROR tests/test_frame.py::TestWaitForFunction::test_wait_for_function_args
ERROR tests/test_frame.py::TestWaitForFunction::test_wait_for_function_return_value
ERROR tests/test_coverage.py::TestJSCoverage::test_no_coverage - pyppeteer.er...
ERROR tests/test_coverage.py::TestJSCoverage::test_not_ignore_eval_script_with_reportAnonymousScript
ERROR tests/test_frame.py::TestWaitForFunction::test_wait_for_function_window
ERROR tests/test_execution_context.py::TestJSHandle::test_as_element - pyppet...
ERROR tests/test_execution_context.py::TestJSHandle::test_as_element_non_element
ERROR tests/test_execution_context.py::TestJSHandle::test_as_element_text_node
ERROR tests/test_element_handle.py::TestIsIntersectingViewport::test_is_intersecting_viewport
ERROR tests/test_execution_context.py::TestJSHandle::test_get_properties - py...
ERROR tests/test_execution_context.py::TestJSHandle::test_get_property - pypp...
ERROR tests/test_element_handle.py::TestBoundingBox::test_bounding_box - pypp...
ERROR tests/test_execution_context.py::TestJSHandle::test_json_circular_object_error
ERROR tests/test_element_handle.py::TestBoundingBox::test_force_layout - pypp...
ERROR tests/test_execution_context.py::TestJSHandle::test_json_date_fail - py...
ERROR tests/test_element_handle.py::TestBoundingBox::test_invisible_element
ERROR tests/test_execution_context.py::TestJSHandle::test_json_value - pyppet...
ERROR tests/test_element_handle.py::TestBoundingBox::test_nested_frame - pypp...
ERROR tests/test_execution_context.py::TestJSHandle::test_return_non_own_properties
ERROR tests/test_execution_context.py::TestJSHandle::test_to_string_complicated_object
ERROR tests/test_element_handle.py::TestBoundingBox::test_svg - pyppeteer.err...
ERROR tests/test_execution_context.py::TestJSHandle::test_to_string_number - ...
ERROR tests/test_execution_context.py::TestJSHandle::test_to_string_str - pyp...
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage - pyppeteer....
ERROR tests/test_frame.py::TestWaitForSelector::test_cross_process_navigation
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_complicated
ERROR tests/test_frame.py::TestWaitForSelector::test_error_msg_wait_for_hidden
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_media - pypp...
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_visible_inner
ERROR tests/test_coverage.py::TestCSSCoverage::test_css_coverage_multiple - p...
ERROR tests/test_element_handle.py::TestScreenshot::test_screenshot_larger_than_viewport
ERROR tests/test_element_handle.py::TestBoxModel::test_box_model - pyppeteer....
ERROR tests/test_element_handle.py::TestBoxModel::test_box_model_invisible - ...
ERROR tests/test_frame.py::TestContext::test_frame_context - pyppeteer.errors...
ERROR tests/test_element_handle.py::TestBoxModel::test_debug_error - pyppetee...
ERROR tests/test_frame.py::TestWaitForXPath::test_evaluation_failed - pyppete...
ERROR tests/test_input.py::TestClick::test_double_click - pyppeteer.errors.Br...
ERROR tests/test_frame.py::TestWaitForXPath::test_fancy_xpath - pyppeteer.err...
ERROR tests/test_input.py::TestClick::test_mouse_movement - pyppeteer.errors....
ERROR tests/test_frame.py::TestWaitForXPath::test_frame_detached - pyppeteer....
ERROR tests/test_input.py::TestClick::test_resize_textarea - pyppeteer.errors...
ERROR tests/test_frame.py::TestWaitForXPath::test_hidden - pyppeteer.errors.B...
ERROR tests/test_input.py::TestClick::test_right_click - pyppeteer.errors.Bro...
ERROR tests/test_frame.py::TestWaitForXPath::test_return_element_handle - pyp...
ERROR tests/test_input.py::TestClick::test_scroll_and_click - pyppeteer.error...
ERROR tests/test_frame.py::TestWaitForXPath::test_single_slash - pyppeteer.er...
ERROR tests/test_input.py::TestClick::test_select_text_by_mouse - pyppeteer.e...
ERROR tests/test_frame.py::TestWaitForXPath::test_specified_frame - pyppeteer...
ERROR tests/test_input.py::TestClick::test_select_text_by_triple_click - pypp...
ERROR tests/test_frame.py::TestWaitForXPath::test_text_node - pyppeteer.error...
ERROR tests/test_input.py::TestClick::test_tap_button - pyppeteer.errors.Brow...
ERROR tests/test_frame.py::TestWaitForXPath::test_timeout - pyppeteer.errors....
ERROR tests/test_input.py::TestClick::test_touch_enabled_viewport - pyppeteer...
ERROR tests/test_input.py::TestClick::test_touches_report - pyppeteer.errors....
ERROR tests/test_input.py::TestClick::test_trigger_hover - pyppeteer.errors.B...
ERROR tests/test_element_handle.py::TestQuerySelector::test_J - pyppeteer.err...
ERROR tests/test_element_handle.py::TestQuerySelector::test_JJ - pyppeteer.er...
ERROR tests/test_element_handle.py::TestQuerySelector::test_JJEval - pyppetee...
ERROR tests/test_element_handle.py::TestQuerySelector::test_JJEval_missing_selector
ERROR tests/test_element_handle.py::TestQuerySelector::test_JJEval_subtree - ...
ERROR tests/test_element_handle.py::TestQuerySelector::test_JJ_empty - pyppet...
ERROR tests/test_element_handle.py::TestQuerySelector::test_J_none - pyppetee...
ERROR tests/test_element_handle.py::TestQuerySelector::test_Jeval - pyppeteer...
ERROR tests/test_element_handle.py::TestQuerySelector::test_Jeval_subtree - p...
ERROR tests/test_frame.py::TestEvaluateHandle::test_evaluate_handle - pyppete...
ERROR tests/test_element_handle.py::TestQuerySelector::test_Jeval_with_missing_selector
ERROR tests/test_element_handle.py::TestContentFrame::test_content_frame - py...
ERROR tests/test_element_handle.py::TestQuerySelector::test_xpath - pyppeteer...
ERROR tests/test_frame.py::TestFrames::test_anchor_url - pyppeteer.errors.Bro...
ERROR tests/test_frame.py::TestFrames::test_frame_cross_process - pyppeteer.e...
ERROR tests/test_input.py::TestFileUpload::test_file_upload - pyppeteer.error...
ERROR tests/test_frame.py::TestFrames::test_frame_events - pyppeteer.errors.B...
ERROR tests/test_frame.py::TestFrames::test_frame_events_child - pyppeteer.er...
ERROR tests/test_frame.py::TestFrames::test_frame_events_main - pyppeteer.err...
ERROR tests/test_frame.py::TestFrames::test_frame_name - pyppeteer.errors.Bro...
ERROR tests/test_frame.py::TestEvaluate::test_frame_cross_site - pyppeteer.er...
ERROR tests/test_element_handle.py::TestClick::test_br_node - pyppeteer.error...
ERROR tests/test_frame.py::TestWaitForSelector::test_fail_frame_detached - py...
ERROR tests/test_element_handle.py::TestClick::test_clik - pyppeteer.errors.B...
ERROR tests/test_frame.py::TestWaitForSelector::test_fail_page_closed - pyppe...
ERROR tests/test_frame.py::TestWaitForSelector::test_run_in_specified_frame
ERROR tests/test_frame.py::TestWaitForSelector::test_shortcut_for_main_frame
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_page_navigation
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_after_node_appear
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_display_none
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_fail
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_hidden
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_immediate
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_inner_html
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_node_mutation
ERROR tests/test_input.py::TestType::test_emoji - pyppeteer.errors.BrowserErr...
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_remove
ERROR tests/test_input.py::TestType::test_emoji_in_iframe - pyppeteer.errors....
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_return_element
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_timeout
ERROR tests/test_frame.py::TestWaitForSelector::test_wait_for_selector_visible
ERROR tests/test_browser.py::TestPageClose::test_before_unload - pyppeteer.er...
ERROR tests/test_browser.py::TestPageClose::test_not_visible_in_browser_pages
ERROR tests/test_browser.py::TestPageClose::test_page_close_state - pyppeteer...
ERROR tests/test_input.py::TestType::test_key_arrowkey - pyppeteer.errors.Bro...
ERROR tests/test_input.py::TestType::test_key_location - pyppeteer.errors.Bro...
ERROR tests/test_input.py::TestType::test_key_modifiers - pyppeteer.errors.Br...
ERROR tests/test_frame.py::TestFrames::test_frame_nested - pyppeteer.errors.B...
ERROR tests/test_frame.py::TestFrames::test_frame_parent - pyppeteer.errors.B...
ERROR tests/test_input.py::TestType::test_key_press_element_handle - pyppetee...
ERROR tests/test_input.py::TestType::test_key_send_char - pyppeteer.errors.Br...
ERROR tests/test_input.py::TestType::test_key_type - pyppeteer.errors.Browser...
ERROR tests/test_input.py::TestType::test_key_type_long - pyppeteer.errors.Br...
ERROR tests/test_input.py::TestType::test_key_unknown - pyppeteer.errors.Brow...
ERROR tests/test_input.py::TestType::test_not_type_prevent_events - pyppeteer...
ERROR tests/test_input.py::TestType::test_repeat_multiple_modifiers - pyppete...
ERROR tests/test_input.py::TestType::test_repeat_properly - pyppeteer.errors....
ERROR tests/test_input.py::TestType::test_repeat_shift_key - pyppeteer.errors...
ERROR tests/test_input.py::TestType::test_send_proper_code_while_typing - pyp...
ERROR tests/test_input.py::TestType::test_send_proper_code_while_typing_with_shift
ERROR tests/test_browser_context.py::TestBrowserContext::test_across_session
ERROR tests/test_browser_context.py::TestBrowserContext::test_close_all_targets_once
ERROR tests/test_browser_context.py::TestBrowserContext::test_default_context
ERROR tests/test_browser_context.py::TestBrowserContext::test_fire_target_event
ERROR tests/test_browser_context.py::TestBrowserContext::test_incognito_context
ERROR tests/test_browser_context.py::TestBrowserContext::test_isolate_local_storage_and_cookie
ERROR tests/test_browser_context.py::TestBrowserContext::test_window_open_use_parent_tab_context
ERROR tests/test_input.py::TestClick::test_click - pyppeteer.errors.BrowserEr...
ERROR tests/test_input.py::TestClick::test_click_after_navigation - pyppeteer...
ERROR tests/test_input.py::TestClick::test_click_events - pyppeteer.errors.Br...
ERROR tests/test_input.py::TestClick::test_click_fail - pyppeteer.errors.Brow...
ERROR tests/test_input.py::TestClick::test_click_insilde_frame - pyppeteer.er...
ERROR tests/test_input.py::TestClick::test_click_label - pyppeteer.errors.Bro...
ERROR tests/test_input.py::TestClick::test_click_link - pyppeteer.errors.Brow...
ERROR tests/test_input.py::TestClick::test_click_offscreen_button - pyppeteer...
ERROR tests/test_input.py::TestClick::test_click_partially_obscured_button - ...
ERROR tests/test_network.py::TestNetworkEvent::test_events_order - pyppeteer....
ERROR tests/test_input.py::TestClick::test_click_with_device_scale_factor - p...
ERROR tests/test_network.py::TestNetworkEvent::test_fail_get_redirected_body
ERROR tests/test_input.py::TestClick::test_click_with_disabled_javascript - p...
ERROR tests/test_network.py::TestNetworkEvent::test_from_cache - pyppeteer.er...
ERROR tests/test_input.py::TestClick::test_click_with_modifier_key - pyppetee...
ERROR tests/test_input.py::TestClick::test_click_wrapped_links - pyppeteer.er...
ERROR tests/test_network.py::TestNetworkEvent::test_not_report_body_unless_finished
ERROR tests/test_network.py::TestNetworkEvent::test_redirects - pyppeteer.err...
ERROR tests/test_network.py::TestNetworkEvent::test_response - pyppeteer.erro...
ERROR tests/test_network.py::TestNetworkEvent::test_request - pyppeteer.error...
ERROR tests/test_network.py::TestNetworkEvent::test_response_body - pyppeteer...
ERROR tests/test_network.py::TestNetworkEvent::test_request_failed - pyppetee...
ERROR tests/test_network.py::TestNetworkEvent::test_response_from_service_worker
ERROR tests/test_network.py::TestNetworkEvent::test_request_finished - pyppet...
ERROR tests/test_network.py::TestNetworkEvent::test_request_post - pyppeteer....
ERROR tests/test_connection.py::TestConnection::test_error_msg - pyppeteer.er...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_redirects
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_stop
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_with_file_url
ERROR tests/test_network.py::TestRequestInterception::test_redirect_for_subresource
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_with_hash
ERROR tests/test_network.py::TestRequestInterception::test_referer_header - p...
ERROR tests/test_network.py::TestRequestInterception::test_request_respond - ...
ERROR tests/test_page.py::TestEvaluate::test_accept_none - pyppeteer.errors.B...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception
ERROR tests/test_network.py::TestRequestInterception::test_request_respond_bytes
ERROR tests/test_page.py::TestEvaluate::test_accept_string - pyppeteer.errors...
ERROR tests/test_network.py::TestRequestInterception::test_response_with_cookie
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_abort
ERROR tests/test_page.py::TestEvaluate::test_accept_string_with_comments - py...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_abort_data_url
ERROR tests/test_page.py::TestEvaluate::test_accept_string_with_semicolon - p...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_abort_main
ERROR tests/test_page.py::TestEvaluate::test_after_framenavigation - pyppetee...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_abort_redirects
ERROR tests/test_page.py::TestEvaluate::test_await_promise - pyppeteer.errors...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_amend_http_header
ERROR tests/test_page.py::TestEvaluate::test_element_handle_as_argument - pyp...
ERROR tests/test_page.py::TestEvaluate::test_element_handle_disposed - pyppet...
ERROR tests/test_page.py::TestEvaluate::test_element_handle_from_other_frame
ERROR tests/test_page.py::TestEvaluate::test_error_on_reload - pyppeteer.erro...
ERROR tests/test_connection.py::TestCDPSession::test_create_session - pyppete...
ERROR tests/test_connection.py::TestCDPSession::test_detach - pyppeteer.error...
ERROR tests/test_network.py::TestNavigationRequest::test_image - pyppeteer.er...
ERROR tests/test_network.py::TestNavigationRequest::test_interception - pyppe...
ERROR tests/test_network.py::TestNavigationRequest::test_navigation_request
ERROR tests/test_page.py::TestEvaluate::test_promise_reject - pyppeteer.error...
ERROR tests/test_page.py::TestEvaluate::test_return_complex_object - pyppetee...
ERROR tests/test_page.py::TestEvaluate::test_return_infinity - pyppeteer.erro...
ERROR tests/test_page.py::TestConsole::test_console_event - pyppeteer.errors....
ERROR tests/test_page.py::TestEvaluate::test_return_infinity_minus - pyppetee...
ERROR tests/test_page.py::TestConsole::test_console_event_many - pyppeteer.er...
ERROR tests/test_page.py::TestEvaluate::test_return_minus_zero - pyppeteer.er...
ERROR tests/test_page.py::TestConsole::test_console_window - pyppeteer.errors...
ERROR tests/test_page.py::TestEvaluate::test_return_nan - pyppeteer.errors.Br...
ERROR tests/test_page.py::TestConsole::test_trigger_correct_log - pyppeteer.e...
ERROR tests/test_page.py::TestEvaluate::test_serialize_null_field - pyppeteer...
ERROR tests/test_page.py::TestEvaluate::test_simulate_user_gesture - pyppetee...
ERROR tests/test_page.py::TestEvaluate::test_string_as_error_message - pyppet...
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_badly_encoded_server
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_custom_error_code
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_custom_header
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_custom_referer_header
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_data_url
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_disabled
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_encoded_server
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_encoded_server_2
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_equal_requests
ERROR tests/test_network.py::TestRequestInterception::test_request_interception_invalid_interception_id
ERROR tests/test_page.py::TestOfflineMode::test_emulate_navigator_offline - p...
ERROR tests/test_page.py::TestOfflineMode::test_offline_mode - pyppeteer.erro...
ERROR tests/test_page.py::TestDOMContentLoaded::test_fired - pyppeteer.errors...
ERROR tests/test_page.py::TestGoto::test_data_url_request - pyppeteer.errors....
ERROR tests/test_page.py::TestGoto::test_goto_bad_resource - pyppeteer.errors...
ERROR tests/test_page.py::TestGoto::test_goto_bad_url - pyppeteer.errors.Brow...
ERROR tests/test_page.py::TestGoto::test_goto_blank - pyppeteer.errors.Browse...
ERROR tests/test_page.py::TestGoto::test_goto_documentloaded - pyppeteer.erro...
ERROR tests/test_page.py::TestGoto::test_goto_domcontentloaded - pyppeteer.er...
ERROR tests/test_page.py::TestGoto::test_goto_fail_204 - pyppeteer.errors.Bro...
ERROR tests/test_page.py::TestGoto::test_goto_history_api_beforeunload - pypp...
ERROR tests/test_page.py::TestGoto::test_goto_networkidle - pyppeteer.errors....
ERROR tests/test_page.py::TestGoto::test_show_url_in_error_message - pyppetee...
ERROR tests/test_page.py::TestGoto::test_timeout - pyppeteer.errors.BrowserEr...
ERROR tests/test_page.py::TestGoto::test_timeout_default - pyppeteer.errors.B...
ERROR tests/test_page.py::TestGoto::test_url_with_hash - pyppeteer.errors.Bro...
ERROR tests/test_page.py::TestGoto::test_valid_url - pyppeteer.errors.Browser...
ERROR tests/test_page.py::TestGoto::test_wait_for_network_idle - pyppeteer.er...
ERROR tests/test_page.py::TestGoto::test_goto_subframe_204 - pyppeteer.errors...
ERROR tests/test_page.py::TestGoto::test_nav_networkidle0 - pyppeteer.errors....
ERROR tests/test_page.py::TestGoto::test_nav_networkidle2 - pyppeteer.errors....
ERROR tests/test_page.py::TestGoto::test_no_timeout - pyppeteer.errors.Browse...
ERROR tests/test_page.py::TestGoto::test_redirect - pyppeteer.errors.BrowserE...
ERROR tests/test_page.py::TestGoto::test_response_when_page_changes_url - pyp...
ERROR tests/test_page.py::TestGoto::test_self_request_page - pyppeteer.errors...
ERROR tests/test_page.py::TestEvaluateHandle::test_evaluate_handle - pyppetee...
ERROR tests/test_page.py::TestMetrics::test_metrics - pyppeteer.errors.Browse...
ERROR tests/test_page.py::TestMetrics::test_metrics_event - pyppeteer.errors....
ERROR tests/test_page.py::TestWaitForNavigation::test_both_domcontentloaded_loaded
ERROR tests/test_page.py::TestWaitForNavigation::test_click_anchor_link - pyp...
ERROR tests/test_page.py::TestWaitForNavigation::test_dom_history_back_forward
ERROR tests/test_page.py::TestWaitForNavigation::test_history_push_state - py...
ERROR tests/test_page.py::TestWaitForNavigation::test_history_replace_state
ERROR tests/test_page.py::TestWaitForNavigation::test_return_nevigated_response_reload
ERROR tests/test_page.py::TestWaitForNavigation::test_subframe_issues - pyppe...
ERROR tests/test_page.py::TestWaitForNavigation::test_wait_for_navigatoin - p...
ERROR tests/test_page.py::TestWaitForRequest::test_predicate - pyppeteer.erro...
ERROR tests/test_page.py::TestWaitForRequest::test_wait_for_request - pyppete...
ERROR tests/test_page.py::TestEvaluate::test_evaluate - pyppeteer.errors.Brow...
ERROR tests/test_page.py::TestEvaluate::test_evaluate_force_expression - pypp...
ERROR tests/test_page.py::TestEvaluate::test_fail_for_circular_object - pyppe...
ERROR tests/test_page.py::TestEvaluate::test_fail_window_object - pyppeteer.e...
ERROR tests/test_page.py::TestEvaluate::test_inside_expose_function - pyppete...
ERROR tests/test_page.py::TestEvaluate::test_nice_error_after_navigation - py...
ERROR tests/test_page.py::TestEvaluate::test_number_as_error_message - pyppet...
ERROR tests/test_page.py::TestWaitFor::test_single_slash_fail - pyppeteer.err...
ERROR tests/test_page.py::TestEvaluate::test_object_handle_as_argument - pypp...
ERROR tests/test_page.py::TestWaitFor::test_wait_for_error_type - pyppeteer.e...
ERROR tests/test_page.py::TestEvaluate::test_object_handle_to_primitive_value
ERROR tests/test_page.py::TestWaitFor::test_wait_for_func_with_args - pyppete...
ERROR tests/test_page.py::TestWaitFor::test_wait_for_selector - pyppeteer.err...
ERROR tests/test_page.py::TestWaitFor::test_wait_for_timeout - pyppeteer.erro...
ERROR tests/test_page.py::TestWaitFor::test_wait_for_xpath - pyppeteer.errors...
ERROR tests/test_page.py::TestGoto::test_404 - pyppeteer.errors.BrowserError:...
ERROR tests/test_page.py::TestGoto::test_data_url - pyppeteer.errors.BrowserE...
ERROR tests/test_page.py::TestWaitForRequest::test_no_timeout - pyppeteer.err...
ERROR tests/test_page.py::TestWaitForResponse::test_no_timeout - pyppeteer.er...
ERROR tests/test_page.py::TestWaitForResponse::test_predicate - pyppeteer.err...
ERROR tests/test_page.py::TestWaitForResponse::test_wait_for_response - pyppe...
ERROR tests/test_page.py::TestQuerySelector::test_query_selector - pyppeteer....
ERROR tests/test_page.py::TestQuerySelector::test_query_selector_all - pyppet...
ERROR tests/test_page.py::TestQuerySelector::test_query_selector_all_not_found
ERROR tests/test_page.py::TestQuerySelector::test_xpath - pyppeteer.errors.Br...
ERROR tests/test_page.py::TestQuerySelector::test_xpath_not_found - pyppeteer...
ERROR tests/test_page.py::TestQuerySelector::test_xpath_alias - pyppeteer.err...
ERROR tests/test_page.py::TestQuerySelector::test_xpath_multiple - pyppeteer....
ERROR tests/test_page.py::TestGoBack::test_history_api - pyppeteer.errors.Bro...
ERROR tests/test_page.py::TestRequest::test_request - pyppeteer.errors.Browse...
ERROR tests/test_page.py::TestGoBack::test_back - pyppeteer.errors.BrowserErr...
ERROR tests/test_page.py::TestSetBypassCSP::test_bypass_csp_header - pyppetee...
ERROR tests/test_page.py::TestSetBypassCSP::test_bypass_csp_meta_tag - pyppet...
ERROR tests/test_page.py::TestSetBypassCSP::test_bypass_scp_cross_process - p...
ERROR tests/test_page.py::TestUserAgent::test_user_agent - pyppeteer.errors.B...
ERROR tests/test_page.py::TestAddScriptTag::test_scp_error_content - pyppetee...
ERROR tests/test_page.py::TestUserAgent::test_user_agent_mobile_emulate - pyp...
ERROR tests/test_page.py::TestAddScriptTag::test_scp_error_url - pyppeteer.er...
ERROR tests/test_page.py::TestAddScriptTag::test_script_tag_content - pyppete...
ERROR tests/test_page.py::TestExposeFunction::test_call_from_evaluate_on_document
ERROR tests/test_page.py::TestExposeFunction::test_expose_function - pyppetee...
ERROR tests/test_page.py::TestAddScriptTag::test_script_tag_error - pyppeteer...
ERROR tests/test_page.py::TestExposeFunction::test_expose_function_frames - p...
ERROR tests/test_page.py::TestAddScriptTag::test_script_tag_path - pyppeteer....
ERROR tests/test_page.py::TestAddScriptTag::test_script_tag_path_source_map
ERROR tests/test_page.py::TestExposeFunction::test_expose_function_frames_before_navigation
ERROR tests/test_page.py::TestExposeFunction::test_expose_function_other_page
ERROR tests/test_page.py::TestExposeFunction::test_expose_function_return_promise
ERROR tests/test_page.py::TestQuerySelector::test_JJeval - pyppeteer.errors.B...
ERROR tests/test_page.py::TestQuerySelector::test_jeval - pyppeteer.errors.Br...
ERROR tests/test_page.py::TestQuerySelector::test_jeval_argument - pyppeteer....
ERROR tests/test_page.py::TestQuerySelector::test_jeval_argument_element - py...
ERROR tests/test_page.py::TestQuerySelector::test_jeval_not_found - pyppeteer...
ERROR tests/test_page.py::TestAuthenticate::test_auth - pyppeteer.errors.Brow...
ERROR tests/test_page.py::TestAddScriptTag::test_module_content - pyppeteer.e...
ERROR tests/test_page.py::TestAddScriptTag::test_module_path - pyppeteer.erro...
ERROR tests/test_page.py::TestAddScriptTag::test_module_url - pyppeteer.error...
ERROR tests/test_page.py::TestExtraHTTPHeader::test_extra_http_header - pyppe...
ERROR tests/test_page.py::TestExtraHTTPHeader::test_non_string_value - pyppet...
ERROR tests/test_page.py::TestAddStyleTag::test_style_tag_content - pyppeteer...
ERROR tests/test_page.py::TestAddStyleTag::test_style_tag_error - pyppeteer.e...
ERROR tests/test_page.py::TestAddStyleTag::test_style_tag_path - pyppeteer.er...
ERROR tests/test_page.py::TestErrorPage::test_error_page - pyppeteer.errors.B...
ERROR tests/test_page.py::TestAddStyleTag::test_style_tag_path_source_map - p...
ERROR tests/test_page.py::TestAddStyleTag::test_style_tag_url - pyppeteer.err...
ERROR tests/test_page.py::TestViewport::test_landscape_emulation - pyppeteer....
ERROR tests/test_page.py::TestViewport::test_mobile_emulation - pyppeteer.err...
ERROR tests/test_page.py::TestViewport::test_touch_emulation - pyppeteer.erro...
ERROR tests/test_page.py::TestViewport::test_viewport - pyppeteer.errors.Brow...
ERROR tests/test_page.py::TestAuthenticateFailed::test_auth_fail - pyppeteer....
ERROR tests/test_page.py::TestEmulate::test_click - pyppeteer.errors.BrowserE...
ERROR tests/test_page.py::TestEmulate::test_emulate - pyppeteer.errors.Browse...
ERROR tests/test_page.py::TestAddScriptTag::test_script_tag_url - pyppeteer.e...
ERROR tests/test_page.py::TestAddScriptTag::test_script_tag_url_fail - pyppet...
ERROR tests/test_page.py::TestAddStyleTag::test_style_tag_url_fail - pyppetee...
ERROR tests/test_page.py::TestJavaScriptEnabled::test_set_javascript_enabled
ERROR tests/test_page.py::TestPDF::test_pdf - pyppeteer.errors.BrowserError: ...
ERROR tests/test_page.py::TestAuthenticateDisable::test_disable_auth - pyppet...
ERROR tests/test_page.py::TestEmulateMedia::test_emulate_media - pyppeteer.er...
ERROR tests/test_page.py::TestEmulateMedia::test_emulate_media_bad_arg - pypp...
ERROR tests/test_page.py::TestAddStyleTag::test_csp_error_content - pyppeteer...
ERROR tests/test_page.py::TestAddStyleTag::test_csp_error_url - pyppeteer.err...
ERROR tests/test_page.py::TestUrl::test_url - pyppeteer.errors.BrowserError: ...
ERROR tests/test_page.py::TestEvaluateOnNewDocument::test_csp - pyppeteer.err...
ERROR tests/test_page.py::TestEvaluateOnNewDocument::test_evaluate_before_else_on_page
ERROR tests/test_page.py::TestTitle::test_title - pyppeteer.errors.BrowserErr...
ERROR tests/test_page.py::TestSetContent::test_set_content - pyppeteer.errors...
ERROR tests/test_page.py::TestSetContent::test_with_doctype - pyppeteer.error...
ERROR tests/test_page.py::TestSetContent::test_with_html4_doctype - pyppeteer...
ERROR tests/test_page.py::TestSelect::test_select - pyppeteer.errors.BrowserE...
ERROR tests/test_page.py::TestSelect::test_select_deselect - pyppeteer.errors...
ERROR tests/test_page.py::TestSelect::test_select_deselect_multiple - pyppete...
ERROR tests/test_page.py::TestSelect::test_select_first_item - pyppeteer.erro...
ERROR tests/test_page.py::TestSelect::test_select_multiple - pyppeteer.errors...
ERROR tests/test_page.py::TestSelect::test_select_no_match - pyppeteer.errors...
ERROR tests/test_page.py::TestViewport::test_detect_by_modernizr - pyppeteer....
ERROR tests/test_page.py::TestViewport::test_detect_touch_viewport_touch - py...
ERROR tests/test_page.py::TestCacheEnabled::test_cache_enable_disable - pyppe...
ERROR tests/test_page.py::TestSelect::test_return_selected_elements - pyppete...
ERROR tests/test_page.py::TestSelect::test_select_not_multiple - pyppeteer.er...
ERROR tests/test_page.py::TestSelect::test_select_not_select_element - pyppet...
ERROR tests/test_page.py::TestCookie::test_cookie_blank_page - pyppeteer.erro...
ERROR tests/test_page.py::TestCookie::test_cookie_blank_page2 - pyppeteer.err...
ERROR tests/test_page.py::TestCookie::test_cookie_data_url_page - pyppeteer.e...
ERROR tests/test_page.py::TestCookie::test_cookie_data_url_page2 - pyppeteer....
ERROR tests/test_page.py::TestCookie::test_cookies - pyppeteer.errors.Browser...
ERROR tests/test_page.py::TestCookieDelete::test_delete_cookie - pyppeteer.er...
ERROR tests/test_page.py::TestSelect::test_select_no_value - pyppeteer.errors...
ERROR tests/test_page.py::TestSelect::test_select_nonstring - pyppeteer.error...
ERROR tests/test_page.py::TestCookieFrames::test_frame - pyppeteer.errors.Bro...
ERROR tests/test_page.py::TestEvents::test_close_window_close - pyppeteer.err...
ERROR tests/test_pyppeteer.py::TestPyppeteer::test_get_facebook - pyppeteer.e...
ERROR tests/test_pyppeteer.py::TestPyppeteer::test_inject_file - pyppeteer.er...
ERROR tests/test_page.py::TestCookieWithPath::test_set_cookie_with_path - pyp...
ERROR tests/test_page.py::TestCookieDomain::test_different_domain - pyppeteer...
ERROR tests/test_page.py::TestEvents::test_close_page_close - pyppeteer.error...
ERROR tests/test_page.py::TestBrowser::test_get_browser - pyppeteer.errors.Br...
ERROR tests/test_pyppeteer.py::TestPyppeteer::test_plain_text_depr - pyppetee...
ERROR tests/test_pyppeteer.py::TestScreenshot::test_screenshot_large - pyppet...
ERROR tests/test_target.py::TestTarget::test_browser_target - pyppeteer.error...
ERROR tests/test_target.py::TestTarget::test_report_service_worker - pyppetee...
ERROR tests/test_target.py::TestTarget::test_crash_while_redirect - pyppeteer...
ERROR tests/test_target.py::TestTarget::test_return_all_pages - pyppeteer.err...
ERROR tests/test_target.py::TestTarget::test_default_page - pyppeteer.errors....
ERROR tests/test_target.py::TestTarget::test_targets - pyppeteer.errors.Brows...
ERROR tests/test_target.py::TestTarget::test_not_report_uninitialized_page - ...
ERROR tests/test_target.py::TestTarget::test_url_change - pyppeteer.errors.Br...
ERROR tests/test_target.py::TestTarget::test_opener - pyppeteer.errors.Browse...
ERROR tests/test_target.py::TestTarget::test_report_new_page - pyppeteer.erro...
ERROR tests/test_tracing.py::TestTracing::test_return_null_on_error - pyppete...
ERROR tests/test_tracing.py::TestTracing::test_tracing - pyppeteer.errors.Bro...
ERROR tests/test_tracing.py::TestTracing::test_tracing_two_page_error - pyppe...
ERROR tests/test_tracing.py::TestTracing::test_without_path - pyppeteer.error...
ERROR tests/test_worker.py::TestWorker::test_create_destroy_events - pyppetee...
ERROR tests/test_worker.py::TestWorker::test_execution_context - pyppeteer.er...
ERROR tests/test_worker.py::TestWorker::test_jshandle_for_console_log - pyppe...
ERROR tests/test_worker.py::TestWorker::test_report_console_logs - pyppeteer....
ERROR tests/test_tracing.py::TestTracing::test_custom_categories - pyppeteer....
ERROR tests/test_tracing.py::TestTracing::test_return_buffer - pyppeteer.erro...
ERROR tests/test_worker.py::TestWorker::test_report_error - pyppeteer.errors....
ERROR tests/test_worker.py::TestWorker::test_worker - pyppeteer.errors.Browse...
FAILED tests/test_abnormal_crash.py::TestBrowserCrash::test_browser_crash_send
FAILED tests/test_browser.py::TestBrowser::test_browser_process - pyppeteer.e...
FAILED tests/test_browser.py::TestBrowser::test_crash - pyppeteer.errors.Brow...
FAILED tests/test_browser.py::TestBrowser::test_disconnect - pyppeteer.errors...
FAILED tests/test_browser.py::TestBrowser::test_user_agent - pyppeteer.errors...
FAILED tests/test_browser.py::TestBrowser::test_version - pyppeteer.errors.Br...
FAILED tests/test_launcher.py::TestLauncher::test_await_after_close - pyppete...
FAILED tests/test_launcher.py::TestLauncher::test_launch - pyppeteer.errors.B...
FAILED tests/test_launcher.py::TestLauncher::test_close_no_connection - pyppe...
FAILED tests/test_launcher.py::TestConnect::test_reconnect - pyppeteer.errors...
FAILED tests/test_launcher.py::TestDefaultURL::test_default_url - pyppeteer.e...
FAILED tests/test_launcher.py::TestLauncher::test_default_viewport - pyppetee...
FAILED tests/test_launcher.py::TestLogLevel::test_level_default - pyppeteer.e...
FAILED tests/test_launcher.py::TestLauncher::test_disable_default_viewport - ...
FAILED tests/test_launcher.py::TestUserDataDir::test_user_data_dir_args - pyp...
FAILED tests/test_launcher.py::TestUserDataDir::test_user_data_dir_option - p...
FAILED tests/test_launcher.py::TestLauncher::test_dumpio_enable - AssertionEr...
FAILED tests/test_launcher.py::TestUserDataDir::test_user_data_dir_restore_state
FAILED tests/test_launcher.py::TestLauncher::test_ignore_https_errors_interception
FAILED tests/test_launcher.py::TestTargetEvents::test_target_events - pyppete...
FAILED tests/test_launcher.py::TestConnect::test_connect - pyppeteer.errors.B...
FAILED tests/test_screenshot.py::TestScreenShot::test_screenshot - pyppeteer....
FAILED tests/test_screenshot.py::TestPDF::test_pdf - pyppeteer.errors.Browser...
FAILED tests/test_screenshot.py::TestScreenShot::test_screenshot_base64 - pyp...
FAILED tests/test_screenshot.py::TestScreenShot::test_screenshot_binary - pyp...
FAILED tests/test_screenshot.py::TestScreenShot::test_unresolved_mimetype - p...
= 26 failed, 23 passed, 3 skipped, 6 warnings, 425 errors in 789.59s (0:13:09) =
```
</details>
Here is the list of installed modules in the build env
<details>
```console
Package Version
----------------------------- -----------------
alabaster 0.7.13
appdirs 1.4.4
asttokens 2.2.1
Babel 2.12.1
backcall 0.2.0
build 0.10.0
certifi 2022.12.7
charset-normalizer 3.1.0
comm 0.1.3
decorator 5.1.1
distro 1.8.0
docutils 0.19
exceptiongroup 1.0.0
execnet 1.9.0
executing 1.2.0
gpg 1.19.0
idna 3.4
imagesize 1.4.1
importlib-metadata 6.6.0
iniconfig 2.0.0
installer 0.7.0
ipykernel 6.22.0
ipython 8.12.0
ipython-genutils 0.2.0
jedi 0.18.2
Jinja2 3.1.2
libcomps 0.1.19
MarkupSafe 2.1.2
matplotlib-inline 0.1.6
nest-asyncio 1.5.6
packaging 23.1
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
pluggy 1.0.0
poetry-core 1.5.2
prompt-toolkit 3.0.38
ptyprocess 0.7.0
pure-eval 0.2.2
pyee 9.0.4
Pygments 2.15.1
pyproject_hooks 1.0.0
pytest 7.3.1
pytest-xdist 3.2.1
python-dateutil 2.8.2
pytz 2023.2
requests 2.28.2
setuptools 67.7.2
six 1.16.0
snowballstemmer 2.2.0
Sphinx 6.2.1
sphinxcontrib-applehelp 1.0.4
sphinxcontrib-devhelp 1.0.2.dev20230415
sphinxcontrib-htmlhelp 2.0.0
sphinxcontrib-jsmath 1.0.1.dev20230415
sphinxcontrib-qthelp 1.0.3.dev20230415
sphinxcontrib-serializinghtml 1.1.5
stack-data 0.6.2
syncer 1.3.0
tomli 2.0.1
tornado 6.2
tqdm 4.65.0
traitlets 5.9.0
typing_extensions 4.5.0
urllib3 1.26.15
wcwidth 0.2.6
websockets 11.0.2
wheel 0.40.0
zipp 3.15.0
```
</details>
Do you know what could be the reason why all those unit tests are failing? 🤔 | closed | 2023-04-28T09:53:43Z | 2024-02-09T06:09:41Z | https://github.com/pyppeteer/pyppeteer/issues/436 | [] | kloczek | 1 |
Nike-Inc/koheesio | pydantic | 57 | [FEATURE] Remove Cerberus from secrets |
## Is your feature request related to a problem? Please describe.
Remove Cerberus support from koheesio
| closed | 2024-08-09T10:27:37Z | 2024-08-09T11:31:43Z | https://github.com/Nike-Inc/koheesio/issues/57 | [
"enhancement"
] | mikita-sakalouski | 0 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 33 | About coatnet | I feel the author's CoAtNet implementation is problematic in quite a few places (and, to be fair, the CoAtNet paper itself leaves many details unexplained).
I think the most important concept here is what the paper's authors call relative attention. The paper doesn't really explain the concept itself, but it reworks the weighting formulas of convolution and self-attention on top of it. Most crucially, the authors fuse convolution with the transformer by introducing a **global static convolution kernel** (put more simply: the model diagrams in the paper say Rel-Attention, not plain Attention). Honestly, I don't see this global static convolution kernel anywhere in your implementation.
Also, I don't seem to see any residual connections; where is `x = out + x`?
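To make both points concrete, here is a minimal pure-Python sketch (my own illustration, not CoAtNet's actual formulation) of attention whose logits include a static relative bias, followed by the residual connection:

```python
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def matmul(a, b):
    return [[sum(u * w for u, w in zip(row, col)) for col in zip(*b)] for row in a]

def rel_attention_block(x, wq, wk, wv, rel_bias):
    """Attention with a static relative bias on the logits, plus a residual."""
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    d = len(x[0])
    # the static (input-independent) relative term added to the attention logits
    logits = [[sum(qi * ki for qi, ki in zip(q[i], k[j])) / math.sqrt(d) + rel_bias[i][j]
               for j in range(len(k))] for i in range(len(q))]
    weights = [softmax(row) for row in logits]
    out = matmul(weights, v)
    # the residual connection the post is asking about: x = out + x
    return [[o + xi for o, xi in zip(orow, xrow)] for orow, xrow in zip(out, x)]

x = [[1.0, 2.0], [3.0, 4.0]]
identity = [[1.0, 0.0], [0.0, 1.0]]
zeros = [[0.0, 0.0], [0.0, 0.0]]
print(rel_attention_block(x, identity, identity, zeros, zeros))
# -> [[1.0, 2.0], [3.0, 4.0]]  (wv = 0 means the block reduces to the residual)
```

With wv set to all zeros the attention output is zero, so the block collapses to the identity; that is exactly what the residual guarantees and what seems to be missing here.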
Sorry, it's late at night and my head is a bit fuzzy, so some of my wording may not be ideal, but I think I've gotten the core issues across. | open | 2021-09-12T14:19:11Z | 2021-09-18T06:40:01Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/33 | [] | ShiveryMoon | 0 |
assafelovic/gpt-researcher | automation | 423 | Missing python requirement blocks starting the service from a Docker build | I ran the following and ran into the error below.
```shell
#!/bin/sh
git clone git@github.com:assafelovic/gpt-researcher.git
cd gpt-researcher
docker compose build
docker compose up
```
NOTE: I have my .env populated
Adding lxml[html_clean] to the requirements.txt solved the problem.
> WARN[0000] /home/smoney/src/aigency/gpt-researcher/gpt-researcher/docker-compose.yml: `version` is obsolete
```
gpt-researcher-1 | Traceback (most recent call last):
gpt-researcher-1 | File "/usr/local/bin/uvicorn", line 8, in <module>
gpt-researcher-1 | sys.exit(main())
gpt-researcher-1 | ^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
gpt-researcher-1 | return self.main(*args, **kwargs)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1078, in main
gpt-researcher-1 | rv = self.invoke(ctx)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
gpt-researcher-1 | return ctx.invoke(self.callback, **ctx.params)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 783, in invoke
gpt-researcher-1 | return __callback(*args, **kwargs)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 409, in main
gpt-researcher-1 | run(
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 575, in run
gpt-researcher-1 | server.run()
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 65, in run
gpt-researcher-1 | return asyncio.run(self.serve(sockets=sockets))
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
gpt-researcher-1 | return runner.run(main)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
gpt-researcher-1 | return self._loop.run_until_complete(task)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
gpt-researcher-1 | return future.result()
gpt-researcher-1 | ^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
gpt-researcher-1 | await self._serve(sockets)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 76, in _serve
gpt-researcher-1 | config.load()
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 433, in load
gpt-researcher-1 | self.loaded_app = import_from_string(self.app)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
gpt-researcher-1 | module = importlib.import_module(module_str)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
gpt-researcher-1 | return _bootstrap._gcd_import(name[level:], package, level)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
gpt-researcher-1 | File "<frozen importlib._bootstrap_external>", line 940, in exec_module
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
gpt-researcher-1 | File "/usr/src/app/main.py", line 1, in <module>
gpt-researcher-1 | from backend.server import app
gpt-researcher-1 | File "/usr/src/app/backend/server.py", line 7, in <module>
gpt-researcher-1 | from gpt_researcher.utils.websocket_manager import WebSocketManager
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/__init__.py", line 1, in <module>
gpt-researcher-1 | from .master import GPTResearcher
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/master/__init__.py", line 1, in <module>
gpt-researcher-1 | from .agent import GPTResearcher
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/master/agent.py", line 4, in <module>
gpt-researcher-1 | from gpt_researcher.master.functions import *
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/master/functions.py", line 7, in <module>
gpt-researcher-1 | from gpt_researcher.scraper.scraper import Scraper
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/scraper/__init__.py", line 3, in <module>
gpt-researcher-1 | from .newspaper.newspaper import NewspaperScraper
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/scraper/newspaper/newspaper.py", line 1, in <module>
gpt-researcher-1 | from newspaper import Article
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/newspaper/__init__.py", line 10, in <module>
gpt-researcher-1 | from .api import (build, build_article, fulltext, hot, languages,
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/newspaper/api.py", line 14, in <module>
gpt-researcher-1 | from .article import Article
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/newspaper/article.py", line 15, in <module>
gpt-researcher-1 | from . import network
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/newspaper/network.py", line 14, in <module>
gpt-researcher-1 | from .configuration import Configuration
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/newspaper/configuration.py", line 15, in <module>
gpt-researcher-1 | from .parsers import Parser
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/newspaper/parsers.py", line 12, in <module>
gpt-researcher-1 | import lxml.html.clean
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/lxml/html/clean.py", line 18, in <module>
gpt-researcher-1 | raise ImportError(
gpt-researcher-1 | ImportError: lxml.html.clean module is now a separate project lxml_html_clean.
gpt-researcher-1 | Install lxml[html_clean] or lxml_html_clean directly.
```
 | closed | 2024-04-02T18:55:57Z | 2024-04-03T09:29:05Z | https://github.com/assafelovic/gpt-researcher/issues/423 | [] | scottmoney | 5 |
plotly/dash-core-components | dash | 755 | Multiple Loading Controls broken if ids are substrings of each other | When using multiple Loadings on the same page, and having a lot of controls with similar IDs inside each of them, I have found some irregularities where multiple Loading elements would trigger, even if there was no callback outputting data to any of their children.
I managed to condense this into a minimum proof-of-error below.
When I'm selecting a value, I'm expecting the Loading of the 'weird-loading2' div to trigger, but not the Loading of the Dropdown 'weird-loading', i.e. only the target (output) of the callback triggers the surrounding Loading element.
However, what happens is that both load. But ONLY if the id of the Dropdown is a substring of the id of the div. By changing 'weird-loading' to 'weird-loading3' it works exactly as expected.
``` python
from time import sleep
import dash
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Loading(children=[
dcc.Dropdown(id='weird-loading', options=[
{'label': 'change to load', 'value': 1},
{'label': 'both controls', 'value': 2}
])
]),
dcc.Loading(children=[
html.Div(id='weird-loading2')
]),
])
@app.callback(
Output('weird-loading2', 'children'),
[
Input('weird-loading', 'value'),
]
)
def simple_cb(_value):
sleep(1)
return []
if __name__ == '__main__':
app.run_server(host='0.0.0.0', debug=True, port=3000, dev_tools_hot_reload=True)
```
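For what it's worth, the symptom looks exactly like the loading-state matching being done by id prefix instead of by exact id. This is only a guess at the cause; the function below is hypothetical, not actual dash-renderer code:

```python
def loadings_triggered(output_id, loading_component_ids):
    """Hypothetical prefix-based matching that would explain the bug."""
    return [cid for cid in loading_component_ids if output_id.startswith(cid)]

# exact matching would return only 'weird-loading2', but prefix matching
# also picks up 'weird-loading' because it is a prefix of 'weird-loading2'
print(loadings_triggered("weird-loading2", ["weird-loading", "weird-loading2"]))
# -> ['weird-loading', 'weird-loading2']
# with a non-prefix id the spurious match disappears
print(loadings_triggered("weird-loading2", ["weird-loading3", "weird-loading2"]))
# -> ['weird-loading2']
```

This would also explain why renaming `'weird-loading'` to `'weird-loading3'` makes the problem vanish.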
My versions:
```
dash==1.9.0
dash-core-components==1.8.0
dash-cytoscape==0.1.1
dash-html-components==1.0.2
dash-renderer==1.2.4
dash-table==4.6.0
``` | open | 2020-02-13T13:55:45Z | 2020-02-13T13:57:22Z | https://github.com/plotly/dash-core-components/issues/755 | [] | wolfgangpfnuer | 0 |
huggingface/datasets | pandas | 6,470 | If an image in a dataset is corrupted, we get unescapable error | ### Describe the bug
Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1
### Steps to reproduce the bug
```
from datasets import load_dataset, VerificationMode
dataset = load_dataset(
'sasha/birdsnap',
split="train",
verification_mode=VerificationMode.ALL_CHECKS,
streaming=True # I recommend using streaming=True when reproducing, as this dataset is large
)
for idx, row in enumerate(dataset):
# Iterating to 9287 took 7 minutes for me
# If you already have the data locally cached and set streaming=False, you see the same error just by with dataset[9287]
pass
# error at 9287 OSError: image file is truncated (45 bytes not processed)
# note that we can't avoid the error using a try/except + continue inside the loop
```
### Expected behavior
Able to escape errors in casting to Image() without killing the whole loop
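For the non-streaming case, one workaround sketch is to drive the loop by index so that a failing row can be skipped without killing the iterator (this assumes random access like `dataset[i]` works; the names below are illustrative):

```python
def iterate_skipping_errors(get_item, length, skip=(OSError,)):
    """Yield get_item(i) for each index, skipping rows whose decode raises."""
    for i in range(length):
        try:
            yield get_item(i)
        except skip:
            continue  # e.g. "image file is truncated (45 bytes not processed)"

# toy stand-in for `dataset[i]`: index 1 plays the corrupted image
rows = ["ok-0", None, "ok-2"]

def fake_get(i):
    if rows[i] is None:
        raise OSError("image file is truncated")
    return rows[i]

survivors = list(iterate_skipping_errors(fake_get, len(rows)))
print(survivors)
# -> ['ok-0', 'ok-2']
```

This doesn't help with `streaming=True`, where the failing generator can't be resumed; there an upstream option to skip undecodable examples seems necessary.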
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | open | 2023-12-04T20:58:49Z | 2023-12-04T20:58:49Z | https://github.com/huggingface/datasets/issues/6470 | [] | chigozienri | 0 |
pytest-dev/pytest-html | pytest | 440 | extra.svg(content: type?) | Hi, what should be the content type to pass when appending extra.svg()?
While doing:
`extra.append(extras.svg(svg))`
with
```python
svg = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
<circle cx="50" cy="50" r="43" fill="none" stroke="#000" stroke-width="9"/>
<path d="M50,42c-6-9-20-9,-25,0c-2,5-2,11,0,16c5,9,19,9,25,0l-6-3c-2,5-9,5-11,0c-1-1-1-9,0-10c2-5,9-4,11,0z"/>
<path d="M78,42c-6-9-20-9,-25,0c-2,5-2,11,0,16c5,9,19,9,25,0l-6-3c-2,5-9,5-11,0c-1-1-1-9,0-10c2-5,9-4,11,0z"/>
</svg>"""
```
https://dev.w3.org/SVG/tools/svgweb/samples/svg-files/cc.svg
the generated .svg file in ./assets/xxxx.svg is not a valid svg file, but encoded (see the code at [src/pytest_html/result.py#L234](https://github.com/pytest-dev/pytest-html/blob/cc809864592638abe90c5c3a8bd1c03ab3f9970b/src/pytest_html/result.py#L234)).
The report does not show the proper SVG; however, if I manually update the ./assets/xxx.svg with the proper SVG code, it shows the image as expected.
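For what it's worth, if the asset writer base64-decodes the stored content before writing (my reading of result.py, which may be wrong), then passing the SVG pre-encoded should round-trip cleanly:

```python
import base64

svg = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"></svg>'
# hand `encoded` (not the raw markup) to extras.svg(...)
encoded = base64.b64encode(svg.encode("utf-8")).decode("ascii")
# what an asset writer that calls base64.b64decode(content) would emit:
decoded = base64.b64decode(encoded).decode("utf-8")
```

Here `decoded` equals the original markup, so the written asset would be a valid SVG again.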
What should be the proper way to call extra.svg()?
Thanks,
Danilo. | closed | 2021-01-21T20:59:34Z | 2021-01-25T16:47:55Z | https://github.com/pytest-dev/pytest-html/issues/440 | [] | dramoz | 2 |
onnx/onnx | scikit-learn | 6,606 | Allow release CIs to run when a PR with the "run release CIs" label is updated | We need to update https://github.com/onnx/onnx/blob/64adc906975f6ac32512961c95a1bdeb6f1047a1/.github/workflows/create_release.yml#L10-L15 such that when a PR with the label updates (e.g. new commits are added) the release CIs also run. This way developers do not need to remove and re-add the label to trigger the CIs.
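A sketch of the kind of trigger change this needs (untested; assumes the job is currently gated on the label via an `if:` condition):

```yaml
on:
  pull_request:
    # `labeled` fires when the label is added; `synchronize` fires when the
    # PR branch gets new commits, so labeled PRs re-run on every update
    types: [labeled, synchronize]

jobs:
  release:
    if: contains(github.event.pull_request.labels.*.name, 'run release CIs')
    runs-on: ubuntu-latest
```

With both event types, the `if:` guard keeps the job from running on unlabeled PRs while new pushes to a labeled PR re-trigger it.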
@andife would you like to help with this? Thanks! | open | 2024-12-31T16:41:58Z | 2025-02-19T17:33:27Z | https://github.com/onnx/onnx/issues/6606 | [
"module: CI pipelines",
"contributions welcome"
] | justinchuby | 0 |
httpie/cli | rest-api | 1,171 | Method POST is used always when executing from go as external command | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
```go
package main
import (
"fmt"
"os/exec"
"strings"
)
func main() {
cmd := exec.Command("http", "-v", "--download", "pie.dev/image/png")
cmd.Stdin = strings.NewReader("some input")
out, err := cmd.CombinedOutput()
fmt.Printf("error:%v, output=%s", err, out)
}
```
1. Install go
2. Save above code as `test.go`
3. Execute `http` using `go run test.go`
4. `http` in this case is always using POST method
## Current result
```sh
❯ go run run.go
error:exit status 4, output=POST /image/png HTTP/1.1
User-Agent: HTTPie/2.5.0
Accept-Encoding: identity
Accept: application/json, */*;q=0.5
Connection: keep-alive
Content-Type: application/json
Content-Length: 10
Host: pie.dev
some input
http: warning: HTTP 405 Method Not Allowed
HTTP/1.1 405 Method Not Allowed
Date: Fri, 01 Oct 2021 20:02:08 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
allow: GET, OPTIONS, HEAD
access-control-allow-origin: *
access-control-allow-credentials: true
CF-Cache-Status: DYNAMIC
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=ZozyDmaSZ4O2KT0heUm24MDsaELjcsDSJgpkGrYIgeyPYCMaIPP2czSzaQy2GXyasl63Myp3ywWJpo%2Fg52z6LW6QAcnqiodkmAjxdXZUkepC6hUMzWIX4%2BwS"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Server: cloudflare
CF-RAY: 69784974dc1192f8-SJC
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400, h3-28=":443"; ma=86400, h3-27=":443"; ma=86400
```
## Expected result
Downloads a file
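Digging a little, the implicit POST seems to come from the redirected stdin (`cmd.Stdin = strings.NewReader(...)`): when stdin is not a TTY, HTTPie reads it as the request body and switches the default method to POST. If that is right, adding HTTPie's `--ignore-stdin` flag should restore the GET/download behavior; sketched here just to show the argument change:

```python
# assumption (matches HTTPie's documented scripting advice): when stdin is
# redirected, HTTPie treats it as the request body and implicitly POSTs;
# --ignore-stdin disables that, so the method falls back to GET.
args = ["http", "--ignore-stdin", "-v", "--download", "pie.dev/image/png"]
# Go equivalent of the fix in the reproduction above:
#   exec.Command("http", "--ignore-stdin", "-v", "--download", "pie.dev/image/png")
print(" ".join(args))
# -> http --ignore-stdin -v --download pie.dev/image/png
```

Alternatively, dropping the `cmd.Stdin = ...` line from the Go program should have the same effect.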
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
❯ go run run.go
error:exit status 4, output=HTTPie 2.5.0
Requests 2.26.0
Pygments 2.10.0
Python 3.9.7 (default, Sep 3 2021, 12:37:55)
[Clang 12.0.5 (clang-1205.0.22.9)]
/usr/local/Cellar/httpie/2.5.0/libexec/bin/python3.9
Darwin 20.6.0
<Environment {'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/Users/vr/.config/httpie'),
'devnull': <property object at 0x10f92eb30>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x10f9334c0>,
'program_name': 'http',
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': False,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': False,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': False}>
>>> requests.request(**{'auth': None,
'data': b'some input',
'headers': {'User-Agent': b'HTTPie/2.5.0', 'Accept': b'application/json, */*;q=0.5', 'Content-Type': b'application/json', 'Accept-Encoding': b'identity'},
'method': 'post',
'params': <generator object MultiValueOrderedDict.items at 0x10f9e7f20>,
'url': 'http://pie.dev/image/png'})
http: warning: HTTP 405 Method Not Allowed
```
| closed | 2021-10-01T20:09:20Z | 2021-10-14T15:15:09Z | https://github.com/httpie/cli/issues/1171 | [
"invalid"
] | vishr | 5 |
wagtail/wagtail | django | 12,560 | Internal links with anchor | ### Is your proposal related to a problem?
I want to link to a page. Not just the page, but also anchor within the page.
### Describe the solution you'd like
To be able to select page for the internal link + optional anchor.
### Describe alternatives you've considered
I can use external links for this, but then if I change page slugs, these links break.
### Working on this
I don't mind creating an MR if this is desired/accepted.
| open | 2024-11-10T10:48:50Z | 2024-11-10T10:48:50Z | https://github.com/wagtail/wagtail/issues/12560 | [
"type:Enhancement"
] | hovi | 0 |
littlecodersh/ItChat | api | 559 | Tencent News messages barge in and get the itchat bot stuck; how can this be avoided? | Has anyone run into this problem?
Tencent News messages barge in and cause the itchat bot to get stuck. Fortunately, after pressing Ctrl-C, it can continue running.
How can this be avoided?
The machine is running at the moment; I'll find a chance later to enable itchat debug and see what it says.... | closed | 2017-12-01T10:31:09Z | 2018-02-28T08:47:24Z | https://github.com/littlecodersh/ItChat/issues/559 | [] | hcchengithub | 2 |
jupyter/nbviewer | jupyter | 750 | Scrolling in windows | I am checking in notebooks that show output scrolling in windows. On GitHub and in nbviewer, the scrollable windows are lost.
For example, this has scrollable windows in Jupyter, but not in nbviewer.
https://github.com/biblicalhumanities/greek-new-testament/blob/master/labnotes/dative-direct-objects.ipynb
http://nbviewer.jupyter.org/github/biblicalhumanities/greek-new-testament/blob/master/labnotes/dative-direct-objects.ipynb
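(For concreteness, the scrolled output in Jupyter corresponds to a rule roughly like the following; the exact height is my approximation, not Jupyter's stylesheet:)

```css
div.output_area {
    max-height: 300px;  /* approximate; pick whatever height suits the page */
    overflow-y: auto;   /* restores the scrollbar that is lost in nbviewer */
}
```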
I can fix this by converting to HTML and modifying the CSS for div.output_area, but I would love it if this would Just Work. Any suggestions? | closed | 2017-12-22T19:36:15Z | 2020-02-04T19:31:22Z | https://github.com/jupyter/nbviewer/issues/750 | [
"tag:Upstream"
] | jonathanrobie | 2 |
modin-project/modin | data-science | 7,021 | Implement to/from_dask_dataframe functions | closed | 2024-03-07T09:15:45Z | 2024-03-18T19:04:39Z | https://github.com/modin-project/modin/issues/7021 | [
"new feature/request 💬",
"Dask ⚡"
] | Retribution98 | 0 | |
huggingface/diffusers | deep-learning | 10,866 | Lumina Image 2.0 lora not working with lora available on Civitai | ### Describe the bug
Using a Lumina 2.0 LoRA from Civitai throws an error.
It works fine with https://huggingface.co/sayakpaul/trained-lumina2-lora-yarn
### Reproduction
I tried using loras listed here
https://civitai.com/search/models?baseModel=Lumina&modelType=LORA&sortBy=models_v9&query=lumina
with code
https://huggingface.co/sayakpaul/trained-lumina2-lora-yarn
```python
import torch
from diffusers import Lumina2Text2ImgPipeline
pipe = Lumina2Text2ImgPipeline.from_pretrained(
"Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16
).to("cuda")
# Art Style of Hitoshi Ashinano https://civitai.com/models/1269546/art-style-of-hitoshi-ashinano-lumina-image-20
pipe.load_lora_weights("newgenai79/lumina2_lora",weight_name="Art_Style_of_Hitoshi_Ashinano.safetensors")
# Art Style of Studio Ghibli https://civitai.com/models/1257597/art-style-of-studio-ghibli-lumina-image-20
# pipe.load_lora_weights("newgenai79/lumina2_lora",weight_name="Art_Style_of_Studio_Ghibli.safetensors")
# Yarn https://huggingface.co/sayakpaul/trained-lumina2-lora-yarn
# pipe.load_lora_weights("newgenai79/lumina2_lora",weight_name="lumina2_puppy_lora.safetensors")
prompt = "Hitoshi Ashinano style. A young girl with vibrant green hair and large purple eyes peeks out from behind a white wooden door. She is wearing a white shirt and have a curious expression on her face. The background shows a blue sky with a few clouds, and there's a white fence visible. Green leaves hang down from the top left corner, and a small white circle can be seen in the sky. The scene captures a moment of innocent curiosity and wonder."
image = pipe(
prompt,
negative_prompt="blurry, ugly, bad, deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, cropped, out of frame, worst quality, low quality, jpeg artifacts, fused fingers, morbid, mutilated, extra fingers, mutated hands, bad anatomy, bad proportion, extra limbs",
guidance_scale=6,
num_inference_steps=35,
generator=torch.manual_seed(0)
).images[0]
```
### Logs
```shell
(venv) C:\aiOWN\diffuser_webui>python lumina2_lora.py
Loading checkpoint shards: 100%|████████████████████████████████████| 2/2 [00:07<00:00, 3.75s/it]
Loading checkpoint shards: 100%|████████████████████████████████████| 3/3 [00:11<00:00, 3.70s/it]
Loading pipeline components...: 100%|███████████████████████████████| 5/5 [00:19<00:00, 3.98s/it]
Loading default_0 was unsucessful with the following error:
Target modules {'w2', 'adaLN_modulation.1', 'w1', 'out', 'qkv', 'w3'} not found in the base model. Please check the target modules and try again.
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\lumina2_lora.py", line 8, in <module>
pipe.load_lora_weights("models/lora/lumina2/Art_Style_of_Hitoshi_Ashinano.safetensors")
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\loaders\lora_pipeline.py", line 3957, in load_lora_weights
self.load_lora_into_transformer(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\loaders\lora_pipeline.py", line 3994, in load_lora_into_transformer
transformer.load_lora_adapter(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\diffusers\loaders\peft.py", line 303, in load_lora_adapter
inject_adapter_in_model(lora_config, self, adapter_name=adapter_name, **peft_kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\peft\mapping.py", line 260, in inject_adapter_in_model
peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\peft\tuners\lora\model.py", line 141, in __init__
super().__init__(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\peft\tuners\tuners_utils.py", line 184, in __init__
self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\peft\tuners\tuners_utils.py", line 520, in inject_adapter
raise ValueError(error_msg)
ValueError: Target modules {'w2', 'adaLN_modulation.1', 'w1', 'out', 'qkv', 'w3'} not found in the base model. Please check the target modules and try again.
```
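The module names in the error (`w1`/`w2`/`w3`, `qkv`, `adaLN_modulation`) look like the original Alpha-VLLM layer names rather than the names diffusers uses after conversion, so these LoRAs probably need a key-remapping step. A generic sketch of that kind of remap (the target names below are placeholders, not the real diffusers module names):

```python
def remap_lora_keys(state_dict, name_map):
    """Rename the module segments of each LoRA key (illustrative only)."""
    remapped = {}
    for key, value in state_dict.items():
        parts = [name_map.get(p, p) for p in key.split(".")]
        remapped[".".join(parts)] = value
    return remapped

# hypothetical mapping; the real targets must be read off the diffusers model
name_map = {"qkv": "attn.to_qkv", "w1": "ff.linear_1"}
sd = {"layers.0.qkv.lora_A.weight": 1, "layers.0.w1.lora_B.weight": 2}
print(remap_lora_keys(sd, name_map))
```

If diffusers gains a converter for this checkpoint layout (as it did for other community LoRA formats), `load_lora_weights` could apply such a mapping automatically.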
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Running on Google Colab?: No
- Python version: 3.10.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.1
- Accelerate version: 1.4.0.dev0
- PEFT version: 0.14.0
- Bitsandbytes version: 0.45.3.dev0
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 4060 Laptop GPU, 8188 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sayakpaul | closed | 2025-02-21T19:04:59Z | 2025-03-07T12:28:57Z | https://github.com/huggingface/diffusers/issues/10866 | [
"bug"
] | nitinmukesh | 8 |
3b1b/manim | python | 1,936 | Running the example code works briefly and then generates a stack trace and exits | ### Describe the error
Running the example code works briefly and then generates a stack trace and exits
### Code and Error
```
$ manimgl example_scenes.py OpeningManimExample
```
**Error**:
```
sh: latex: command not found{c}\quad \\\quad \\\end{array}\right]"
[09:38:08] ERROR LaTeX Error! Not a worry, it tex_file_writing.py:112
happens to the best of us.
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/bin/manimgl", line 8, in <module>
sys.exit(main())
File "/Users/chuck/git/manim/manimlib/__main__.py", line 25, in main
scene.run()
File "/Users/chuck/git/manim/manimlib/scene/scene.py", line 131, in run
self.construct()
File "/Users/chuck/git/manim/example_scenes.py", line 29, in construct
IntegerMatrix(matrix, include_background_rectangle=True),
File "/Users/chuck/git/manim/manimlib/mobject/matrix.py", line 205, in __init__
super().__init__(matrix, element_alignment_corner=element_alignment_corner, **kwargs)
File "/Users/chuck/git/manim/manimlib/mobject/matrix.py", line 100, in __init__
self.add_brackets(bracket_v_buff, bracket_h_buff)
File "/Users/chuck/git/manim/manimlib/mobject/matrix.py", line 145, in add_brackets
brackets = Tex("".join((
File "/Users/chuck/git/manim/manimlib/mobject/svg/tex_mobject.py", line 208, in __init__
super().__init__(full_string, **kwargs)
File "/Users/chuck/git/manim/manimlib/mobject/svg/tex_mobject.py", line 58, in __init__
super().__init__(
File "/Users/chuck/git/manim/manimlib/mobject/svg/svg_mobject.py", line 72, in __init__
self.init_svg_mobject()
File "/Users/chuck/git/manim/manimlib/mobject/svg/svg_mobject.py", line 99, in init_svg_mobject
self.generate_mobject()
File "/Users/chuck/git/manim/manimlib/mobject/svg/svg_mobject.py", line 114, in generate_mobject
file_path = self.get_file_path()
File "/Users/chuck/git/manim/manimlib/mobject/svg/tex_mobject.py", line 87, in get_file_path
file_path = tex_content_to_svg_file(
File "/Users/chuck/git/manim/manimlib/utils/tex_file_writing.py", line 81, in tex_content_to_svg_file
create_tex_svg(full_tex, svg_file, compiler)
File "/Users/chuck/git/manim/manimlib/utils/tex_file_writing.py", line 115, in create_tex_svg
with open(root + ".log", "r", encoding="utf-8") as log_file:
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/3j/kxrrqr310jzd37rtx2cdf1zm0000gn/T/Tex/c3372cf5b2620435.log'
[Chucks-MBP:manim (master)]527$ ls -l /var/folders/3j/kxrrqr310jzd37rtx2cdf1zm0000gn/T/Tex/
total 8
-rw-r--r-- 1 chuck staff 625 Dec 19 09:38 c3372cf5b2620435.tex
```
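The first line of the output (`sh: latex: command not found`) suggests the underlying problem is simply that no TeX distribution is on PATH, and the FileNotFoundError for the .log file is just the follow-on failure. A quick check (the install suggestion is my assumption for macOS):

```python
import shutil

# manimgl shells out to `latex` to build Tex mobjects; when the binary is
# missing, no .log file is ever produced, hence the FileNotFoundError above
status = "found" if shutil.which("latex") else "missing (install e.g. MacTeX/BasicTeX)"
print(f"latex on PATH: {status}")
```

After installing a TeX distribution, opening a new shell (so PATH is refreshed) and re-running the example should get past this point.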
### Environment
**MacOS Ventura 13.0**:
**ManimGL v1.6.1**: master <!-- The latest pull from github -->
**Python 3.10.7**:
| closed | 2022-12-19T14:51:32Z | 2024-05-18T02:16:32Z | https://github.com/3b1b/manim/issues/1936 | [] | ocheret | 3 |
docarray/docarray | fastapi | 1,005 | del and delitem | try del and delitem, especially in docarray stacked, and implement/fix it if needed | closed | 2023-01-11T08:51:18Z | 2023-02-08T08:53:32Z | https://github.com/docarray/docarray/issues/1005 | [] | JohannesMessner | 0 |
albumentations-team/albumentations | deep-learning | 2,466 | [Feature request] Add apply_to_images to RandomResizedCrop | open | 2025-03-11T01:34:18Z | 2025-03-12T04:51:03Z | https://github.com/albumentations-team/albumentations/issues/2466 | [
"enhancement",
"good first issue"
] | ternaus | 3 | |
supabase/supabase-py | fastapi | 973 | Unable to perform string concatenation with .select | # Bug report
- [x] I confirm this is a bug with Supabase, not with my own application.
- [x] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
Trying to concatenate strings in a SELECT query just returns * instead.
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
1. Create a table
```sql
CREATE TABLE IF NOT EXISTS "test"(
"json" JSONB NOT NULL
);
```
2. Insert using the Python client
```python
response = await (
client
.from_("test")
.upsert({"json": {"hex": "abc123"}})
.execute()
)
```
3. Try to select the inserted data using a concatenation
```python
response = await (
client
.from_("test")
.select(r"'0x' || (json->>hex) AS hex")
.execute()
)
print(response.data)
```
4. Output is equivalent to `SELECT *`
```python
[{'json': {'hex': 'abc123'}}]
```
## Expected behavior
I expected it to be equivalent to the following query:
```sql
SELECT '0x' || (json->>'hex') AS hex
FROM test;
```
Which would return:
| hex |
| -------- |
| 0xabc123 |
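As far as I can tell, PostgREST's select grammar accepts column renaming and JSON operators but not SQL expressions like `||`, which would explain the silent fallback to `*`. A workaround sketch: select just the JSON field (assuming the `hex:json->>hex` alias syntax is accepted) and do the concatenation client-side:

```python
# shape of response.data as it would come back from .select("hex:json->>hex")
rows = [{"hex": "abc123"}]

# do the '0x' || ... part in Python instead of in the select string
prefixed = [{"hex": "0x" + row["hex"]} for row in rows]
print(prefixed)
# -> [{'hex': '0xabc123'}]
```

For a server-side solution, a Postgres function called through `.rpc(...)`, or a generated column on the table, would also work.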
| closed | 2024-10-21T18:49:40Z | 2024-10-21T22:49:48Z | https://github.com/supabase/supabase-py/issues/973 | [
"bug"
] | rdong8 | 1 |
brightmart/text_classification | nlp | 76 | TextRCNN model predict error | Restoring Variables from Checkpoint
```
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1327, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1312, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1420, in _call_tf_sessionrun
status, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [10171,100] rhs shape= [999,100]
[[Node: save/Assign = Assign[T=DT_FLOAT, _class=["loc:@Embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Embedding, save/RestoreV2)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "p71_TextRCNN_predict.py", line 132, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "p71_TextRCNN_predict.py", line 64, in main
saver.restore(sess,tf.train.latest_checkpoint(FLAGS.ckpt_dir)) #TODO
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1775, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1140, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [10171,100] rhs shape= [999,100]
[[Node: save/Assign = Assign[T=DT_FLOAT, _class=["loc:@Embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Embedding, save/RestoreV2)]]
Caused by op 'save/Assign', defined at:
File "p71_TextRCNN_predict.py", line 132, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "p71_TextRCNN_predict.py", line 61, in main
saver=tf.train.Saver()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1311, in __init__
self.build()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1320, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1357, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 809, in _build_internal
restore_sequentially, reshape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 470, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 162, in restore
self.op.get_shape().is_fully_defined())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 281, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3290, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1654, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [10171,100] rhs shape= [999,100]
```
[[Node: save/Assign = Assign[T=DT_FLOAT, _class=["loc:@Embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Embedding, save/RestoreV2)]] | open | 2018-08-03T09:45:20Z | 2018-12-24T08:44:32Z | https://github.com/brightmart/text_classification/issues/76 | [] | kevinsay | 3 |
vllm-project/vllm | pytorch | 15,025 | [Bug]: Speculative decoding with a draft model makes generation slower | ### Your current environment
I tried several vLLM versions (0.6.2 and the latest 0.7.3) and see a consistent speed drop when using speculative decoding with a draft model. Tried on L4 and T4 GPUs in Colab.
### 🐛 Describe the bug
Main model - `1.7B` parameters, speculative model - `135M` parameters (`SmolLMv2` family of models)
I trained the small model using logit distillation from the main model, so its generation quality is good (the acceptance rate is very high)
Still, I get a consistent ~30% speed drop when using 5 speculative tokens; when I reduce the number of speculative tokens the speed increases, but the best speed is achieved when using the main model only, without speculative decoding.
Here are my parameters:
```python
llm = LLM(
model=MODEL_PATH,
speculative_model=SPECULATIVE_MODEL_PATH,
max_model_len=2500,
num_speculative_tokens=5,
gpu_memory_utilization=0.9
)
sampling_params = SamplingParams(
temperature=0,
top_k=1,
max_tokens=256,
)
outputs = llm.generate(prompts, sampling_params)
```
Is it expected?
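For what it's worth, the crude throughput measurement I used looks like this (a sketch of my own; the `out.outputs[0].token_ids` access follows vLLM's `RequestOutput` structure as I understand it):

```python
import time

def tokens_per_second(llm, prompts, sampling_params):
    # crude throughput: total generated tokens divided by wall-clock time
    start = time.perf_counter()
    outputs = llm.generate(prompts, sampling_params)
    elapsed = time.perf_counter() - start
    n_tokens = sum(len(out.outputs[0].token_ids) for out in outputs)
    return n_tokens / elapsed
```

I compare this number with and without `speculative_model` set, keeping the prompts and sampling parameters identical.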
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-18T10:45:18Z | 2025-03-20T11:38:50Z | https://github.com/vllm-project/vllm/issues/15025 | [
"bug"
] | maiiabocharova | 1 |
gradio-app/gradio | data-visualization | 10,344 | example of adding custom js from Gradio docs is not working | ### Describe the bug
I am struggling to accomplish something similar to the example from here: https://www.gradio.app/guides/custom-CSS-and-JS (passing a value from a Python function to execute in JS), but apparently even the example from the Gradio website is not working. Could you please suggest an example that works and does the same?
I tried to copy-paste the example code and execute it locally (thinking it might be an issue with the Gradio website and not Gradio itself), but it also throws a bunch of errors.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
blocks = gr.Blocks()
with blocks as demo:
subject = gr.Textbox(placeholder="subject")
verb = gr.Radio(["ate", "loved", "hated"])
object = gr.Textbox(placeholder="object")
with gr.Row():
btn = gr.Button("Create sentence.")
reverse_btn = gr.Button("Reverse sentence.")
foo_bar_btn = gr.Button("Append foo")
reverse_then_to_the_server_btn = gr.Button(
"Reverse sentence and send to server."
)
def sentence_maker(w1, w2, w3):
return f"{w1} {w2} {w3}"
output1 = gr.Textbox(label="output 1")
output2 = gr.Textbox(label="verb")
output3 = gr.Textbox(label="verb reversed")
output4 = gr.Textbox(label="front end process and then send to backend")
btn.click(sentence_maker, [subject, verb, object], output1)
reverse_btn.click(
None, [subject, verb, object], output2, js="(s, v, o) => o + ' ' + v + ' ' + s"
)
verb.change(lambda x: x, verb, output3, js="(x) => [...x].reverse().join('')")
foo_bar_btn.click(None, [], subject, js="(x) => x + ' foo'")
reverse_then_to_the_server_btn.click(
sentence_maker,
[subject, verb, object],
output4,
js="(s, v, o) => [s, v, o].map(x => [...x].reverse().join(''))",
)
demo.launch()
```
### Screenshot


### Logs
_No response_
### System Info
```shell
gradio==5.12.0
```
### Severity
Blocking usage of gradio | open | 2025-01-13T12:43:00Z | 2025-02-21T14:02:32Z | https://github.com/gradio-app/gradio/issues/10344 | [
"bug"
] | SlimakSlimak | 1 |
robotframework/robotframework | automation | 4575 | Add `on_limit_message` option to WHILE loops to control message used if loop limit is exceeded | Currently, the error raised when the limit of the WHILE loop is reached isn't customizable. As a result of issue #4562, we decided to add an option named `on_limit_message` to the WHILE loop.
Here is an example:
```
*** Test Cases ***
On limit message
WHILE True limit=5 on_limit_message=Custom error message
Log Test
END
``` | closed | 2022-12-24T16:59:51Z | 2023-05-05T14:23:21Z | https://github.com/robotframework/robotframework/issues/4575 | [
"enhancement",
"priority: medium",
"beta 1",
"acknowledge",
"pr",
"effort: small"
] | asaout | 3 |
QuivrHQ/quivr | api | 3,114 | Invariant SQL scripts | closed | 2024-08-30T07:25:16Z | 2024-09-02T12:41:45Z | https://github.com/QuivrHQ/quivr/issues/3114 | [
"backend",
"area: scripts"
] | linear[bot] | 1 | |
mwaskom/seaborn | matplotlib | 2843 | Adding mask as an argument within the seaborn heatmap | ### Feature improvement to heatmap in seaborn

In the above image, when we create a correlation heatmap, we get the same data twice. This causes some confusion for end users, so usually we build a mask manually and pass it as an argument to the mask parameter.
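The manual workaround mentioned here looks roughly like this (a sketch; random data stands in for the real DataFrame's `corr()` result):

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch
import numpy as np
import seaborn as sns

# stand-in for the user's data.corr() matrix
rng = np.random.default_rng(0)
matrix = np.corrcoef(rng.random((4, 20)))

# hand-built upper-triangle boolean mask (True cells are hidden)
mask = np.triu(np.ones_like(matrix, dtype=bool))
sns.heatmap(matrix, mask=mask)
```

The proposal below would fold this boilerplate into the heatmap call itself.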
We can simplify this by allowing the mask argument to be a boolean which, when set to True, applies a mask as shown in the right diagram.
import seaborn as sns
matrix = data.corr()
sns.heatmap(matrix, mask=True)  # the result will be the right diagram
| closed | 2022-06-09T11:31:03Z | 2022-06-09T11:45:31Z | https://github.com/mwaskom/seaborn/issues/2843 | [] | krishnaduttPanchagnula | 1 |
sunscrapers/djoser | rest-api | 285 | Response message | I found it's a really good library, except the responses don't give any useful information; the only error response given is BAD_REQUEST. I really suggest someone improve that. Also, the documentation is not clear; please give more examples. | open | 2018-07-04T15:13:49Z | 2019-01-18T11:51:44Z | https://github.com/sunscrapers/djoser/issues/285 | [] | songlin-96 | 2 |
kennethreitz/responder | graphql | 443 | AttributeError: 'str' object has no attribute 'decode' | **Responder version:** 3.0.2.0
**Python version:** Python 3.8.6
**Steps to reproduce:** In _/etc/responder/Responder.conf_ `Challenge = 1122334455667788`
```
root@kali:/# responder -wrf -I wlan0
__
.----.-----.-----.-----.-----.-----.--| |.-----.----.
| _| -__|__ --| _ | _ | | _ || -__| _|
|__| |_____|_____| __|_____|__|__|_____||_____|__|
|__|
NBT-NS, LLMNR & MDNS Responder 3.0.2.0
Author: Laurent Gaffie (laurent.gaffie@gmail.com)
To kill this script hit CTRL-C
Traceback (most recent call last):
File "./Responder.py", line 58, in <module>
settings.Config.populate(options)
File "/usr/share/responder/settings.py", line 218, in populate
self.Challenge += self.NumChal[i:i+2].decode("hex")
AttributeError: 'str' object has no attribute 'decode'
```
**Possible solution:** In _/usr/share/responder/settings.py_ on Line _219_
```
[-] self.Challenge += self.NumChal[i:i+2].decode("hex")
[+] self.Challenge += self.NumChal[i:i+2]
```
**What, why?**: Looks like _self.NumChal_ is already string. So no need to decode
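For reference, `str.decode("hex")` only exists on Python 2; the Python 3 way to get the same raw bytes is `bytes.fromhex` (a sketch):

```python
num_chal = "1122334455667788"

# Python 2: num_chal.decode("hex") returned these 8 raw bytes.
# Python 3 removed str.decode("hex"); bytes.fromhex is the equivalent:
challenge = bytes.fromhex(num_chal)
assert challenge == b"\x11\x22\x33\x44\x55\x66\x77\x88"
```

Note that the proposed one-line fix above keeps the challenge as the 16-character hex text rather than 8 raw bytes, which may or may not be what the rest of the code expects.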
Can't create pull request right now as it needs further testing
UPD 1: Everything works without changes on Python2 and Python3.7 - I think Python 3.8 broke something(?) | closed | 2020-10-17T06:19:19Z | 2020-10-18T06:23:01Z | https://github.com/kennethreitz/responder/issues/443 | [] | m41denx | 0 |
postmanlabs/httpbin | api | 80 | IP address returned is no longer correct | If I do:
``` python
import requests
print(requests.get('http://httpbin.org/get').json()['origin'])
```
I get the wrong IP address. Same happens every other place I test this from.
| closed | 2013-01-19T16:54:19Z | 2018-04-26T17:50:58Z | https://github.com/postmanlabs/httpbin/issues/80 | [] | sigmavirus24 | 6 |
graphql-python/graphene-mongo | graphql | 49 | Cannot return null for non-nullable field Type.field. | If I add a new field and the field is null, it leads to the error:
```Cannot return null for non-nullable field Type.field.```
In my model, I didn't set required to true for that field. After inspecting I found this in converter.py
```String(description=field.db_field, required=not field.null)```
which sets all fields to required. | closed | 2018-08-18T15:48:05Z | 2018-09-06T02:15:55Z | https://github.com/graphql-python/graphene-mongo/issues/49 | [] | marvinkome | 4 |
OFA-Sys/Chinese-CLIP | nlp | 304 | Delete | Reserving this spot for now (to be filled in within the week) to record my experience implementing the application, for reference by anyone who needs it.
To build something like [the application in the README](https://www.modelscope.cn/studios/iic/chinese_clip_applications/summary), the deployment is split into frontend and backend.
## Frontend deployment
## Backend service deployment
### Step 1: Data processing
The clip-retrieval example processes the dataset with img2dataset. Besides that approach, you can also prepare the image-text pairs yourself, as long as each image matches its text. For example, make sure your dataset folder contains xxxx.png and xxxx.txt, where xxxx is an ID that must be identical for a given image-text pair.
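A quick sanity check for the xxxx.png / xxxx.txt layout described above (my own sketch; the folder path is whatever your dataset directory is):

```python
from pathlib import Path

def check_pairs(folder):
    """Return (ids missing a .txt, ids missing a .png) for an image-text folder."""
    folder = Path(folder)
    imgs = {p.stem for p in folder.glob("*.png")}
    txts = {p.stem for p in folder.glob("*.txt")}
    return sorted(imgs - txts), sorted(txts - imgs)
```

Both returned lists should be empty before indexing.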
### Step 2: Modifying all_clip
all_clip does not support Chinese-CLIP, so support has to be added manually; refer to the code added here: https://github.com/data2ml/all-clip/pull/27
### Step 3: clip_retrieval code fixes
Problem 1: Can't pickle local object 'get_image_dataset.<locals>.ImageDataset'
This is a [multiprocessing issue](https://github.com/rom1504/clip-retrieval/issues/220); adding the parameter --num_prepro_workers=1 fixes it.
Problem 2: When using your own image-text pairs, the error self.images not found key is raised.
This is a bug; see the [fix](https://github.com/rom1504/clip-retrieval/issues/352) | closed | 2024-04-22T13:57:17Z | 2024-04-24T03:23:46Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/304 | [] | ChesonHuang | 0 |
deeppavlov/DeepPavlov | tensorflow | 880 | NER crashes during initialization | I use the latest version of the ipavlov library from pip, and have the following code:
```self.NER = build_model(configs.ner.ner_rus, download=True)```
This code throws exception:
```
File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/commands/infer.py", line 61, in build_model
component = from_params(component_config, mode=mode, serialized=component_serialized)
File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py", line 104, in from_params
component = cls(**dict(config_params, **kwargs))
File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/models/tf_backend.py", line 47, in __call__
from .keras_model import KerasModel
File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/models/keras_model.py", line 23, in <module>
from keras import backend as K
File "/usr/local/lib/python3.6/dist-packages/keras/__init__.py", line 5, in <module>
from . import applications
File "/usr/local/lib/python3.6/dist-packages/keras/applications/__init__.py", line 13, in <module>
keras_applications.set_keras_submodules(
AttributeError: module 'keras_applications' has no attribute 'set_keras_submodules'
``` | closed | 2019-06-14T12:42:38Z | 2020-05-13T09:48:13Z | https://github.com/deeppavlov/DeepPavlov/issues/880 | [] | bavadim | 1 |
dgtlmoon/changedetection.io | web-scraping | 2,934 | 'Recheck all in ¨tag-name"' doesn't enqueue PAUSED watches | **Describe the bug**
Here is a **CORRECTED** report of the closed issue https://github.com/dgtlmoon/changedetection.io/issues/2932
The UI has a button called 'Recheck all in ¨_tag-name_"' under the list of watches assigned to a tag. Pressing it doesn't enqueue _paused watches_ for re-checking. The appearing notification always says "0 watches queued for rechecking." _Non-paused watches_ are enqueued as expected.
**Version**
0.49.0
**How did you install?**
Docker
**To Reproduce**
Steps to reproduce the behavior:
1. Create one or several watches.
2. _PAUSE these watches_.
3. Assign a tag to these watches.
4. Select the tag in the tag list.
5. Scroll down to the button 'Recheck all in ¨_tag-name_"'.
6. Press it.
7. Observe that none of the _paused_ watches were enqueued.
8. Observe that notification informs: "0 watches queued for rechecking."
**Expected behavior**
1. All watches with the tag should be enqueued, including paused.
2. The notification should inform: "N watches queued for rechecking." | open | 2025-01-27T20:24:57Z | 2025-01-29T16:12:26Z | https://github.com/dgtlmoon/changedetection.io/issues/2934 | [
"triage"
] | birukoff | 5 |
zihangdai/xlnet | nlp | 159 | is Xlnet base released? | It was projected to be released in June 2019. I am waiting eagerly for it. | open | 2019-07-13T13:46:23Z | 2019-07-13T23:49:12Z | https://github.com/zihangdai/xlnet/issues/159 | [] | bhomass | 1 |
hankcs/HanLP | nlp | 1181 | Why do I get different results from the demo when extracting addresses? | For the same address, the demo and my program produce different segmentations; I am using 1.7.3.
上海上海市浦东新区金桥镇金高路2216弄
The demo segments it as: 上海 上海市 浦东新区 金桥镇 金高 路 2216 弄
My program segments it as: [上海/ns, 上海市浦东新区/ns, 金桥镇/ns, 金高路/ns, 2216/m, 弄/v]
It did not split 上海市浦东新区 into 上海市 浦东新区.
Code:
Segment segment = HanLP.newSegment().enablePlaceRecognize(true);
List<Term> termList = segment.seg("上海上海市浦东新区金桥镇金高路2216弄");
System.out.println(termList); | closed | 2019-05-24T11:25:56Z | 2020-01-01T10:49:42Z | https://github.com/hankcs/HanLP/issues/1181 | [
"ignored"
] | hf200012 | 1 |
huggingface/datasets | computer-vision | 7,281 | File not found error | ### Describe the bug
I get a FileNotFoundError:
<img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87">
### Steps to reproduce the bug
See screenshot.
### Expected behavior
I want to load one audiofile from the dataset.
### Environment info
MacOs Intel 14.6.1 (23G93)
Python 3.10.9
Numpy 1.23
Datasets latest version | open | 2024-11-07T09:04:49Z | 2024-11-07T09:22:43Z | https://github.com/huggingface/datasets/issues/7281 | [] | MichielBontenbal | 1 |
dask/dask | pandas | 11,270 | Update optuna docs to point to Optuna-integration | Optuna moved some 3rd party things to optuna-integration
We should update https://docs.dask.org/en/stable/ml.html#hyperparameter-optimization and potentially more to point to the correct location. i.e.
```
from optuna_integration import DaskStorage
``` | open | 2024-08-02T17:22:24Z | 2024-08-02T17:22:29Z | https://github.com/dask/dask/issues/11270 | [
"documentation"
] | phofl | 0 |
paperless-ngx/paperless-ngx | machine-learning | 8,621 | [BUG] Concise description of the issue | ### Description
Using the API to insert tags with specific colors, ignores hex codes. All tags added have the same generic colors, no matter what hex value is provided.
curl -X POST "http://paperless:8000/api/tags/" \
-H "Authorization: Token xxxxxx" \
-H "Content-Type: application/json" \
-d '{"name": "test-tag3","colour":"#1f78b4","match":"","matching_algorithm":1,"is_insensitive":true,"is_inbox_tag":true}'
### Steps to reproduce
curl -X POST "http://paperless:8000/api/tags/" \
-H "Authorization: Token xxxxxx" \
-H "Content-Type: application/json" \
-d '{"name": "test-tag3","colour":"#1f78b4","match":"","matching_algorithm":1,"is_insensitive":true,"is_inbox_tag":true}'
### Webserver logs
```bash
No log
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.5
### Host OS
Docker image
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2025-01-06T17:27:15Z | 2025-01-06T18:11:46Z | https://github.com/paperless-ngx/paperless-ngx/issues/8621 | [
"not a bug"
] | etsiot | 1 |
apache/airflow | data-science | 47,670 | Able to create multiple backfills on same date | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
It is possible to create multiple backfills for the same date range if the logical_date is different.
This happens with Dags that use a timedelta schedule.
<img width="1031" alt="Image" src="https://github.com/user-attachments/assets/864ff70c-5068-41ff-b5a9-c9fda847db14" />
<img width="1138" alt="Image" src="https://github.com/user-attachments/assets/fc206749-ff7c-431d-894c-f31ffa54e9ae" />
### What you think should happen instead?
The user should see a 409 Conflict when trying to create a backfill for the same date range.
### How to reproduce
Run backfills on the Dag below with a gap of 2 minutes:
```python
from airflow.providers.standard.operators.bash import BashOperator
from datetime import datetime, timedelta
from airflow import DAG
dag = DAG(
'test_api_dag',
start_date=datetime(2025, 3, 1, 3, 28, 0),
schedule=timedelta(days=1),
is_paused_upon_creation=False
)
hello_task = BashOperator(
task_id='test_task',
bash_command='echo "Hello World from Airflow!"',
do_xcom_push = True,
dag=dag,
)
hello_task
```
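The uniqueness check I would expect here is a plain date-range overlap on (dag_id, from_date, to_date), independent of logical_date. Roughly (a standalone sketch of my own, not Airflow code):

```python
def overlapping_backfills(existing, new_start, new_end):
    """existing: list of (from_date, to_date) tuples for the same dag_id.
    Return the entries whose range overlaps [new_start, new_end]."""
    return [
        (start, end)
        for (start, end) in existing
        if start <= new_end and new_start <= end
    ]
```

Any non-empty result for the requested range would then trigger the 409.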
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-12T10:17:04Z | 2025-03-15T15:28:34Z | https://github.com/apache/airflow/issues/47670 | [
"kind:bug",
"area:core",
"area:backfill",
"affected_version:3.0.0beta"
] | atul-astronomer | 3 |
scrapy/scrapy | web-scraping | 6,433 | core.engine/Signal handler polluting log | ### Description
The `OffsiteMiddleware` logs a single message for each domain filtered. Great!
But then the `core.engine` logs a message for every single url filtered by the OffsiteMiddleware.
(LOG_LEVEL: DEBUG)
The websites I am scraping have like 10 external links to twitter/youtube/etc in each page. For hundreds pages scrapped, the only thing I can see in the logs is `Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request`.
I don't know if this is intended behavior. If so, it is obviously not a bug.
But nonetheless, it is very different behavior compared to previous 1.x Scrapy versions. (I don't know when it has changed and I couldn't find anything in the release notes about that.)
If not a bug, maybe we could discuss the possibility of changing this behavior so we can have logs less polluted when debugging.
### Steps to Reproduce
#### Just run the following spider.
(url taken from another issue).
```python
import scrapy
class TestSpider(scrapy.spiders.CrawlSpider):
name = 'test'
allowed_domains = ['capybala.com']
start_urls = ['https://capybala.com/']
custom_settings = {
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
'LOG_LEVEL': 'DEBUG'
}
rules = (scrapy.spiders.Rule(scrapy.linkextractors.LinkExtractor(), callback='parse', follow=True),)
def parse(self, response):
print('noop')
```
#### Output:
```txt
2024-07-08 16:34:43 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: scrapybot)
2024-07-08 16:34:43 [scrapy.utils.log] INFO: Versions: lxml 5.2.2.0, libxml2 2.12.6, cssselect 1.2.0, parsel 1.9.1, w3lib 2.2.1, Twisted 24.3.0, Python 3.12.4 (main, Jul 3 2024, 16:55:58) [GCC 11.2.0], pyOpenSSL 24.1.0 (OpenSSL 3.2.2 4 Jun 2024), cryptography 42.0.8, Platform Linux-5.15.145-x86_64-AMD_Ryzen_9_5980HX_with_Radeon_Graphics-with-glibc2.33
2024-07-08 16:34:43 [scrapy.addons] INFO: Enabled addons:
[]
2024-07-08 16:34:43 [asyncio] DEBUG: Using selector: EpollSelector
2024-07-08 16:34:43 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-07-08 16:34:43 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
2024-07-08 16:34:43 [scrapy.extensions.telnet] INFO: Telnet Password: d2c4cce2938fba32
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2024-07-08 16:34:43 [scrapy.crawler] INFO: Overridden settings:
{'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'SPIDER_LOADER_WARN_ONLY': True,
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-07-08 16:34:43 [scrapy.core.engine] INFO: Spider opened
2024-07-08 16:34:43 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-07-08 16:34:43 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/> (referer: None)
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'bokuran.com': <GET https://bokuran.com/>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'webooker.info': <GET http://webooker.info/2013/10/ebook1-release/>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'ebook-1.com': <GET https://ebook-1.com/>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://ebook-1.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'chrome.google.com': <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'twitter.com': <GET https://twitter.com/orangain>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/orangain> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/webooker_log> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/find-kindle-edition/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/bokuran/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/ebook-1/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/dendrogram/> (referer: https://capybala.com/)
noop
2024-07-08 16:34:44 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://capybala.com/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://ebook-1.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/orangain> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/webooker_log> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/> before it reached the scheduler.
noop
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.
noop
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.
noop
noop
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.
2024-07-08 16:34:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://tree.capybala.com/> (referer: https://capybala.com/)
noop
2024-07-08 16:34:45 [scrapy.core.engine] INFO: Closing spider (finished)
2024-07-08 16:34:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1735,
'downloader/request_count': 7,
'downloader/request_method_count/GET': 7,
'downloader/response_bytes': 17486,
'downloader/response_count': 7,
'downloader/response_status_count/200': 7,
'dupefilter/filtered': 16,
'elapsed_time_seconds': 1.950522,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 7, 8, 19, 34, 45, 376469, tzinfo=datetime.timezone.utc),
'httpcompression/response_bytes': 29892,
'httpcompression/response_count': 7,
'log_count/DEBUG': 33,
'log_count/INFO': 10,
'memusage/max': 70103040,
'memusage/startup': 70103040,
'offsite/domains': 5,
'offsite/filtered': 17,
'request_depth_max': 2,
'response_received_count': 7,
'scheduler/dequeued': 7,
'scheduler/dequeued/memory': 7,
'scheduler/enqueued': 7,
'scheduler/enqueued/memory': 7,
'start_time': datetime.datetime(2024, 7, 8, 19, 34, 43, 425947, tzinfo=datetime.timezone.utc)}
2024-07-08 16:34:45 [scrapy.core.engine] INFO: Spider closed (finished)
```
**Expected behavior:**
I was not expecting to see so many `[scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET [...]> before it reached the scheduler.` messages. I believe just the messages given by the OffsiteMiddleware are enough.
**Actual behavior:**
There are **a lot** of "dropped request" messages.
Furthermore the same message is replicated several times if the same url is found more than one time. (e.g. https://twitter.com/orangain or https://twitter.com/webooker_log in the previous log)
**Reproduces how often:** always
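As a stopgap on my side (just a sketch, not a proposed fix), these engine records can be hidden with a standard logging filter:

```python
import logging

class HideDroppedRequestNoise(logging.Filter):
    """Suppress the 'Signal handler ... dropped request' debug lines."""
    def filter(self, record):
        return "dropped request" not in record.getMessage()

logging.getLogger("scrapy.core.engine").addFilter(HideDroppedRequestNoise())
```

The OffsiteMiddleware's own one-line-per-domain messages still come through, which is the behavior I'd prefer by default.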
### Versions
$ scrapy version --verbose
Scrapy : 2.11.2
lxml : 5.2.2.0
libxml2 : 2.12.6
cssselect : 1.2.0
parsel : 1.9.1
w3lib : 2.2.1
Twisted : 24.3.0
Python : 3.12.4 (main, Jul 3 2024, 16:55:58) [GCC 11.2.0]
pyOpenSSL : 24.1.0 (OpenSSL 3.2.2 4 Jun 2024)
cryptography : 42.0.8
Platform : Linux-5.15.145-x86_64-AMD_Ryzen_9_5980HX_with_Radeon_Graphics-with-glibc2.33
### Additional context
I believe this has nothing to do with the `CrawlSpider`, but that is what I am using. | closed | 2024-07-08T20:13:30Z | 2024-09-10T07:01:38Z | https://github.com/scrapy/scrapy/issues/6433 | [] | djuntsu | 6 |
httpie/cli | python | 685 | [Feature] Upload Progress Bar | Would be great to have an upload bar as well. | open | 2018-06-24T20:30:40Z | 2021-06-28T17:47:40Z | https://github.com/httpie/cli/issues/685 | [
"needs product design"
] | qoomon | 8 |
Yorko/mlcourse.ai | numpy | 666 | Misleading hyperlink on https://mlcourse.ai/roadmap | Misleading hyperlink on https://mlcourse.ai/roadmap
Chapter: "Week 5. Bagging and Random Forest"
Link: “Random Forest”
Actual link: https://mlcourse.ai/articles/topic5-part1-bagging/
Expected link: https://mlcourse.ai/articles/topic5-part2-rf/ | closed | 2020-06-02T17:17:10Z | 2020-06-06T07:47:59Z | https://github.com/Yorko/mlcourse.ai/issues/666 | [
"minor_fix"
] | www050 | 1 |
pennersr/django-allauth | django | 3,734 | SOCIALACCOUNT_PROVIDER nextcloud ignores settings | I have to manually change the domain (it just uses the default nextcloud.example.org), but after that connecting my nextcloud account still fails. I entered the settings json through the django admin panel, but I don't see any relevant logging.

| closed | 2024-04-19T13:06:13Z | 2024-04-22T11:08:13Z | https://github.com/pennersr/django-allauth/issues/3734 | [
"Good first issue"
] | rikmeijer | 8 |
graphql-python/graphene-django | graphql | 679 | Add API documentation | Recently @dvndrsn added some **great** API documentation to the Graphene project: https://github.com/graphql-python/graphene/pull/971 We should add do the same for Graphene-Django.
## To document:
* [ ] `DjangoObjectType`
* [ ] `DjangoConnectionField`
* [ ] `GraphQLView`
* [ ] `DjangoFormMutation`
* [ ] `DjangoModelFormMutation`
* [ ] `DjangoFilterConnectionField` | open | 2019-06-17T09:03:24Z | 2021-01-25T20:54:40Z | https://github.com/graphql-python/graphene-django/issues/679 | [
"📖 documentation"
] | jkimbo | 5 |
pydata/bottleneck | numpy | 159 | Py_DECREF in iterators.h | I'm looking at `iterators.h` and it seems to me that the `Py_DECREF` calls might be misplaced. If I'm reading this right, it looks like the intent is to decref the result of the `PyArray_Ravel` calls, but the decref happens before `a` is done being accessed. I could be missing something though, since I'm not super familiar with the numpy C API.
[Example](https://github.com/kwgoodman/bottleneck/blob/e987847285edd514411667f28c0ebc808d7d2b21/bottleneck/src/iterators.h#L114) | closed | 2017-01-03T22:16:29Z | 2017-02-09T20:41:37Z | https://github.com/pydata/bottleneck/issues/159 | [] | llchan | 11 |
deepinsight/insightface | pytorch | 2,254 | shuffle_rec error | Trying WebFace42M + DALI following the instructions in [arcface_torch](https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch). When I get to `python scripts/shuffle_rec.py <path to folder containing train.idx/.lst/.rec>`, I got the following error:
```
Process Process-1:
Traceback (most recent call last):
File "/home/user/anaconda3/envs/insightface/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/user/anaconda3/envs/insightface/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/user/user/FR/insightface/recognition/arcface_torch/scripts/shuffle_rec.py", line 17, in read_worker
assert header.flag > 0
AssertionError
```
What went wrong? | closed | 2023-02-28T09:34:18Z | 2023-04-04T05:40:00Z | https://github.com/deepinsight/insightface/issues/2254 | [] | HeChengHui | 5 |
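For what it's worth, the failing line is a bare `assert header.flag > 0` on an mxnet recordio header. A more forgiving version of that guard (a hypothetical rewrite, using a stand-in header type rather than mxnet's `IRHeader`) would at least report which record is malformed instead of dying with an anonymous `AssertionError`:

```python
from collections import namedtuple

# Stand-in for mxnet.recordio.IRHeader, which has fields (flag, label, id, id2).
IRHeader = namedtuple("IRHeader", ["flag", "label", "id", "id2"])

def check_header(header, record_no):
    """Mimic the failing `assert header.flag > 0`, but return a useful
    error message identifying the offending record instead of raising."""
    if header.flag <= 0:
        return (f"record {record_no}: bad header flag {header.flag} "
                f"(rec file may not be an insightface-style train.rec)")
    return None

ok_msg = check_header(IRHeader(2, 0.0, 0, 0), 0)   # flag > 0: accepted
bad_msg = check_header(IRHeader(0, 0.0, 5, 0), 5)  # flag == 0: rejected
```

My understanding (not verified against the packing script) is that a flag of 0 means the `.rec` was not packed with the label-carrying headers that insightface's shuffle script expects, i.e. the dataset was packed with a plain `im2rec`-style tool rather than the insightface pipeline.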
reiinakano/scikit-plot | scikit-learn | 94 | ValueError: Found input variables with inconsistent numbers of samples | I'm trying to plot the ROC curve, but I get **ValueError: Found input variables with inconsistent numbers of samples.**
Here's the code I use:
```python
skplt.metrics.plot_roc(labels_test.values, pred_w2v_cnn.values)
plt.show()
```
Both labels_test.values and pred_w2v_cnn.values have the same length and both are of type np.ndarray. I'd be thankful if anyone can help me to solve this problem. | open | 2018-09-19T23:19:58Z | 2018-09-25T09:42:01Z | https://github.com/reiinakano/scikit-plot/issues/94 | [] | AntonioAntovski | 3 |
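In case it helps: `plot_roc` expects class labels in the first argument and a 2-D array of per-class probabilities (the output of `predict_proba`, not `predict`) in the second, so equal lengths are not enough — the second argument must also be `(n_samples, n_classes)`. A small shape check (my own sketch, mirroring what scikit-plot validates internally) makes the mismatch visible before plotting:

```python
import numpy as np

def check_roc_inputs(y_true, y_probas):
    """Validate shapes the way skplt.metrics.plot_roc expects them:
    y_true is (n_samples,) labels, y_probas is (n_samples, n_classes)
    probabilities."""
    y_true = np.asarray(y_true)
    y_probas = np.asarray(y_probas)
    if y_probas.ndim != 2:
        raise ValueError(
            f"y_probas must be 2-D (n_samples, n_classes), got shape {y_probas.shape}")
    if len(y_true) != len(y_probas):
        raise ValueError(
            f"inconsistent numbers of samples: {len(y_true)} vs {len(y_probas)}")
    return y_true.shape[0], y_probas.shape[1]

n_samples, n_classes = check_roc_inputs(
    [0, 1, 1], [[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]])
```

If `pred_w2v_cnn` holds hard predicted labels (1-D), that is the likely cause of the `ValueError` even when the lengths match.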
koxudaxi/datamodel-code-generator | pydantic | 1,559 | Dataclasses not ordering properties correctly | **Describe the bug**
dataclass generation does not put fields with initialisers after fields without initialisers
**To Reproduce**
Example schema:
```yaml
$id: https://practique.net/response.json
$schema: http://json-schema.org/schema#
type: object
required:
- a
- b
properties:
a:
type: string
enum:
- default value
b:
type: string
```
Used commandline:
```
$ datamodel-codegen --input-file-type jsonschema --input test.yaml --enum-field-as-literal one --use-one-literal-as-default --output-model-type dataclasses.dataclass --output test.py
```
**Expected behavior**
Code generated:
```python
@dataclass
class Model:
a: Literal['default value'] = 'default value'
b: str
```
Running `python test.py` gives the following error:
```
Traceback (most recent call last):
File "/home/keean/Code/risr-exam-driver/json-schema/test.py", line 12, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1027, in _process_class
_init_fn(all_init_fields,
File "/usr/lib/python3.11/dataclasses.py", line 545, in _init_fn
raise TypeError(f'non-default argument {f.name!r} '
TypeError: non-default argument 'b' follows default argument
```
Manually reordering the properties to:
```python
@dataclass
class Model:
b: str
a: Literal['default value'] = 'default value'
```
fixes the error
**Version:**
- OS: Linux
- Python version: 3.11.5
- datamodel-code-generator version: 0.21.5
| closed | 2023-09-21T08:25:24Z | 2023-10-06T21:28:13Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1559 | [
"bug"
] | keean | 10 |
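The ordering rule here is Python's, not the generator's: in a dataclass, every field without a default must precede every field with one. A minimal sketch of the reordered output (hand-written here, not generator output) shows the fix:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Model:
    # Required field with no default must come first ...
    b: str
    # ... and the Literal field with its one-literal default after it.
    a: Literal['default value'] = 'default value'

m = Model(b="hello")
```

On Python 3.10+ an alternative is marking the defaulted field with `field(kw_only=True)`, which also sidesteps the ordering constraint without reordering the generated properties.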
home-assistant/core | python | 140,651 | [Overkiz] - Entity sensor.luminosite_rssi_level (<class 'homeassistant.components.overkiz.sensor.OverkizStateSensor'>) | ### The problem
I see an error message in the system log. This looks like a unit mismatch on the RSSI sensor for this device (https://boutique.somfy.fr/capteur-de-soleil-exterieur.html).
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Overkiz
### Link to integration documentation on our website
_No response_
### Diagnostics information
```
Logger: homeassistant.components.sensor
Source: components/sensor/__init__.py:709
Integration: Sensor (documentation, issues)
First occurred: March 14, 2025 at 19:53:53 (1 occurrence)
Last logged: March 14, 2025 at 19:53:53
Entity sensor.luminosite_rssi_level (<class 'homeassistant.components.overkiz.sensor.OverkizStateSensor'>) is using native unit of measurement 'lx' which is not a valid unit for the device class ('signal_strength') it is using; expected one of ['dBm', 'dB']; Please update your configuration if your entity is manually configured, otherwise create a bug report at https://github.com/home-assistant/core/issues?q=is%3Aopen+is%3Aissue+label%3A%22integration%3A+overkiz%22
```
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-15T07:52:49Z | 2025-03-16T09:02:22Z | https://github.com/home-assistant/core/issues/140651 | [
"integration: overkiz"
] | alsmaison | 3 |
gee-community/geemap | jupyter | 2,085 | Activation Code Error | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
MacBook Pro M3 / Safari Browser
Please run the following code on your computer and share the output with us so that we can better debug your issue:
```python
import geemap
geemap.Report()
```
### Description
entered the code from google
### What I Did
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
File /opt/miniconda3/envs/gee/lib/python3.8/site-packages/ee/oauth.py:177, in write_private_json(json_path, info_dict)
176 try:
--> 177 os.makedirs(dirname)
178 except OSError as e:
File /opt/miniconda3/envs/gee/lib/python3.8/os.py:223, in makedirs(name, mode, exist_ok)
222 try:
--> 223 mkdir(name, mode)
224 except OSError:
225 # Cannot rely on checking for EEXIST, since the operating system
226 # could give priority to other errors like EACCES or EROFS
PermissionError: [Errno 13] Permission denied: '/Users/douglasgray/.config/earthengine'
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
Cell In[7], line 3
1 import ee
2 import geemap
----> 3 ee.Authenticate()
4 ee.Initialize(project='ee-dgray')
5 m = geemap.Map()
File /opt/miniconda3/envs/gee/lib/python3.8/site-packages/ee/__init__.py:124, in Authenticate(authorization_code, quiet, code_verifier, auth_mode, scopes, force)
93 def Authenticate(
94 authorization_code: Optional[str] = None,
95 quiet: Optional[bool] = None,
(...)
99 force: bool = False,
100 ) -> Optional[bool]:
101 """Prompts the user to authorize access to Earth Engine via OAuth2.
102
103 Args:
(...)
122 True if we found valid credentials and didn't run the auth flow.
123 """
--> 124 return oauth.authenticate(authorization_code, quiet, code_verifier, auth_mode,
125 scopes, force)
File /opt/miniconda3/envs/gee/lib/python3.8/site-packages/ee/oauth.py:512, in authenticate(cli_authorization_code, quiet, cli_code_verifier, auth_mode, scopes, force)
509 if flow.display_instructions(quiet):
510 _open_new_browser(flow.auth_url)
--> 512 flow.save_code()
File /opt/miniconda3/envs/gee/lib/python3.8/site-packages/ee/oauth.py:562, in Flow.save_code(self, code)
560 redirect_uri = self.server.url
561 code = self.server.fetch_code() # Waits for oauth callback
--> 562 _obtain_and_write_token(code, self.code_verifier, self.scopes, redirect_uri)
File /opt/miniconda3/envs/gee/lib/python3.8/site-packages/ee/oauth.py:247, in _obtain_and_write_token(auth_code, code_verifier, scopes, redirect_uri)
245 client_info['refresh_token'] = token
246 client_info['scopes'] = scopes
--> 247 write_private_json(get_credentials_path(), client_info)
248 print('\nSuccessfully saved authorization token.')
File /opt/miniconda3/envs/gee/lib/python3.8/site-packages/ee/oauth.py:181, in write_private_json(json_path, info_dict)
178 except OSError as e:
179 if e.errno != errno.EEXIST:
180 # pylint:disable=broad-exception-raised,raise-missing-from
--> 181 raise Exception('Error creating directory %s: %s' % (dirname, e))
182 # pylint:enable=broad-exception-raised,raise-missing-from
184 file_content = json.dumps(info_dict)
Exception: Error creating directory /Users/douglasgray/.config/earthengine: [Errno 13] Permission denied: '/Users/douglasgray/.config/earthengine'
| closed | 2024-07-18T14:26:19Z | 2024-07-18T14:31:56Z | https://github.com/gee-community/geemap/issues/2085 | [
"bug"
] | douglagug | 2 |
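The traceback above is a filesystem permissions problem, not a geemap bug: `~/.config/earthengine` (or `~/.config` itself) is not writable by the current user, which commonly happens after a `sudo` install leaves it owned by root. A quick diagnostic sketch (my assumption: fixing ownership resolves it):

```python
import os
from pathlib import Path

def writable_ancestor(path):
    """Walk up from `path` to the nearest directory that exists and
    report whether the current user can create entries in it."""
    p = Path(path).expanduser()
    while not p.exists():
        p = p.parent
    return p, os.access(p, os.W_OK)

where, ok = writable_ancestor("~/.config/earthengine")
# If ok is False, something like `sudo chown -R $USER ~/.config`
# (macOS/Linux) is the usual fix before re-running ee.Authenticate().
```

This only reads permissions; it never modifies anything, so it is safe to run before deciding on a `chown`.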
jumpserver/jumpserver | django | 14,179 | [Bug] Recent login records are incomplete | ### Product Version
v3.10.13
### Version Type
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online installation (one-click command)
- [X] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### Environment Information
OS: ubuntu22.04
Installation: initially installed v3.10.12, then upgraded to v3.10.13
### 🐛 Bug Description
I logged in to the system on September 10, September 11, and September 12, but the recent-login records are incomplete: there is no record of the September 11 login.
### Steps to Reproduce
1. Log in to the system every day from Monday to Friday.
2. Check the recent logins panel on the right-hand side.
3. Notice that not every login is recorded.
### Expected Result
Every login should be recorded.
### Additional Information
_No response_
### Attempted Solutions
The problem existed on the initial v3.10.12 installation and still exists after the recent upgrade to v3.10.13. | closed | 2024-09-18T07:43:42Z | 2024-09-23T09:35:31Z | https://github.com/jumpserver/jumpserver/issues/14179 | [
"🐛 Bug"
] | tianmaxingkong168 | 1 |
lepture/authlib | flask | 660 | New token will not be fetched if grant_type='client_credentials' is passed for fetch_token() | **Describe the bug**
If I pass `client_credentials` as the `grant_type` it will not automatically fetch the new token.
**To Reproduce**
My code where I pass the `grant_type`.
```
self.oauth2client = OAuth2Session(token_endpoint=f"{base_url}/auth/token")
self.oauth2client.fetch_token(grant_type="client_credentials",)
```
https://github.com/lepture/authlib/blob/master/authlib/oauth2/client.py#L199-L204
`self.metadata` is only set if `None` was passed for `grant_type`
https://github.com/lepture/authlib/blob/master/authlib/oauth2/client.py#L279-L284
`self.metadata['grant_type']` will be `None` so it will not fetch the new token. `self.metadata['grant_type']` needs to be set in order to re-auth without a refresh token.
My workaround was passing nothing for fetch token because it luckily defaults to `client_credentials` if nothing was passed for the `grant_type`.
Note:
This behavior doesn't come up if there are refresh tokens used for auth because it will just use the refresh token.
https://github.com/lepture/authlib/blob/master/authlib/oauth2/client.py#L276
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment:**
- OS: OSX
- Python Version: 3.11
- Authlib Version: 1.3.1
**Additional context**
Add any other context about the problem here.
| open | 2024-07-16T02:43:57Z | 2025-02-20T09:39:11Z | https://github.com/lepture/authlib/issues/660 | [
"bug",
"client"
] | bryan-prime | 0 |
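The decision the report describes can be reduced to a few lines. This sketch paraphrases the quoted `client.py` logic (it is not authlib's actual code) to show why passing `grant_type="client_credentials"` explicitly defeats the later re-auth path, while passing nothing does not:

```python
def fetch_grant_type(metadata, grant_type=None):
    """Paraphrase of the flow in authlib's OAuth2Session.fetch_token:
    metadata['grant_type'] is only recorded when the argument is None,
    so an explicit value is used once and then forgotten — and a later
    refresh that consults metadata.get('grant_type') finds None."""
    if grant_type is None:
        # The path that works: the default is chosen AND remembered.
        grant_type = metadata.setdefault("grant_type", "client_credentials")
    # The buggy path: an explicit grant_type is returned but never stored.
    return grant_type

meta_explicit, meta_default = {}, {}
fetch_grant_type(meta_explicit, "client_credentials")  # explicit: not stored
fetch_grant_type(meta_default)                         # default: stored
```

Hence the reporter's workaround: call `fetch_token()` with no `grant_type` at all and rely on the `client_credentials` default, so that `metadata['grant_type']` governs both the first fetch and later re-auth.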
Yorko/mlcourse.ai | numpy | 649 | can you help find email for Измайлов Константин | I see
Измайлов Константин Константинович (@Izmajlovkonstantin).
Can you help me find an email address for Измайлов Константин?
I am trying to reach him to ask for the code from
https://sphere.mail.ru/curriculum/program/discipline/818/
and especially for the video
https://www.youtube.com/watch?v=fit-ZAWexZ0&list=PLrCZzMib1e9p6lpNv-yt6uvHGyBxQncEh&index=8
11. Введение в SQL. Курс "ВВЕДЕНИЕ В АНАЛИЗ ДАННЫХ" | Технострим (11. Introduction to SQL. "Introduction to Data Analysis" course | Tekhnostrim)
His authorship is listed in
mlcourse.ai/jupyter_russian/tutorials/boruta_tutorial_Izmajlovkonstantin.ipynb
```json
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "<center>\n",
        "<img src=\"../../img/ods_stickers.jpg\">\n",
        "## Открытый курс по машинному обучению\n",
        "<center>Автор материала: Измайлов Константин Константинович (@Izmajlovkonstantin)."
      ]
    }
```
| closed | 2020-01-30T21:33:58Z | 2020-01-30T23:28:54Z | https://github.com/Yorko/mlcourse.ai/issues/649 | [
"invalid"
] | Sandy4321 | 1 |
pytest-dev/pytest-html | pytest | 376 | Is it possible to change the duration format from just seconds to something like hh:mm:ss? | Is there a way to format the duration time to something like:
HH:MM:SS
or
h hours, m mins, s secs
I tried doing it within here, by changing the attribute value
```
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
report = outcome.get_result()
report.description = str(item.function.__doc__)
# change duration format here
duration = getattr(report, "duration", 0.0)
converted_duration = timedelta(seconds=duration)
setattr(report, "duration", converted_duration)
```
However, **plugin.py** didn't like the format
```
INTERNALERROR> File python-virtual-environments/python3_env/lib/python3.6/site-packages/pytest_html/plugin.py", line 162, in __init__
INTERNALERROR> html.td(f"{self.time:.2f}", class_="col-duration"),
INTERNALERROR> TypeError: unsupported format string passed to datetime.timedelta.__format__
```
| closed | 2020-11-20T21:00:56Z | 2023-03-23T15:13:55Z | https://github.com/pytest-dev/pytest-html/issues/376 | [
"enhancement",
"question"
] | brettnolan | 8 |
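As the traceback shows, the plugin formats `report.duration` with `f"{self.time:.2f}"`, so the attribute must stay a float. One option (my own sketch, independent of pytest-html internals) is to keep the numeric duration for the plugin and format the seconds yourself wherever the value is displayed:

```python
def hhmmss(seconds):
    """Render a float duration in seconds as zero-padded HH:MM:SS
    (fractional seconds truncated)."""
    total = int(seconds)
    hours, rem = divmod(total, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"
```

For example, leave `report.duration` untouched in `pytest_runtest_makereport` and attach the pretty string as a separate, hypothetical attribute (`report.pretty_duration = hhmmss(duration)`) that a custom results-table hook can render instead.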