repo_name stringlengths 9 75 | topic stringclasses 30
values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
ResidentMario/geoplot | matplotlib | 284 | Feature request? Apply pointplot "hue" to edgecolor only | Thank you for this very useful library!
I am interested in making a scatterplot using R-style "hollow" markers, for example: https://statisticsglobe.com/wp-content/uploads/2019/09/group-outside-plot-in-R.png.
This is normally possible in the Matplotlib `scatter` method with `marker="o", facecolor="none", edgecolor="..."`.
However, I would also like to use a `hue=` setting in Geoplot to provide the circle border/edge colors for these markers. Is this possible in Geoplot currently? If not, consider it a feature request! | open | 2023-02-08T00:32:56Z | 2023-02-08T00:32:56Z | https://github.com/ResidentMario/geoplot/issues/284 | [] | gwerbin | 0 |
widgetti/solara | fastapi | 759 | (Re)rendering question | Is there a general (or a specific) guideline on when re-rendering of components takes place?
My understanding is that, it is close to "whenever a reactive variable/object is modified, it triggers a (re)render".
But things get a bit murky (for me) when an app gets a bit more complicated, and there are many interlinked components living in different files, "state" files (reactive objects) etc..
My appologies, I tried to make a pycafe example to illustrate this, but it ended up being too complex.. so I will try to add the relevant sections of my code to illustrate the problem / confusion.
in a file called "state.py" i have define a "state" to be used across different pages of the application
```python
@dataclasses.dataclass(frozen=False)
class AppState:
data: solara.Reactive[Data | None] = solara.reactive(None)
session_state: solara.Reactive[SessionState] = solara.reactive(SessionState())
settings: solara.Reactive[Settings] = solara.reactive(Settings())
app_state = AppState()
```
Then I have a primary, large ish component that goes like this (the main parts only)
```python
# Various obvious imports
@solara.lab.task
def initialize_session(num_questions: int, types: list):
# Load data (long running process)
# Other pre-processing and config
# Result is a dataclass object
return SessionState(
question_pool=question_pool,
current_question=current_question,
num_questions=num_questions,
is_session_active=True
)
@solara.lab.task
def check_answer(user_answer: str, question: QuestionAnswer):
# Some analysis that could be a long running process
# Result is a dataclass object
return SessionState(
question_pool=session_state.question_pool,
current_question=session_state.current_question,
num_questions=session_state.num_questions,
is_session_active=session_state.is_session_active,
question_log=session_state.question_log,
review=session_state.review,
count_mistakes=session_state.count_mistakes,
stats_attempted=session_state.stats_attempted,
stats_correct=session_state.stats_correct
)
Example()
types = solara.use_reactive([])
user_input = solara.use_reactive('')
current_question = solara.use_reactive(app_state.session_state.value.current_question)
# Some other reactive and non reactive variables defined.
def reset():
new_session_state = SessionState()
app_state.session_state.set(new_session_state)
def next():
pass
with solara.Div():
with solara.Card(style={'width': '50%'}):
solara.SelectMultiple(label='Select category', all_values=app_state.data.value.df['type'].unique().tolist(), values=types, disabled=app_state.session_state.value.is_session_active)
if app_state.session_state.value.is_session_active is False:
StartSessionComponent(callback=initialize_session, num_questions=num_questions, types=types.value)
else:
QuestionComponent(question=current_question.value, user_input=user_input.value, check_func=check_answer, next_func=next)
```
Finally i define the components in "components.py"
```python
solara.component
def StartSessionComponent(callback: callable, **kwargs):
with solara.Card(style={'width': '50%'}):
solara.Markdown(f"#### Select a category type and press 'Start' to begin.")
if callback.pending:
solara.Text('Loading...')
solara.Button('Start', on_click=lambda: callback(**kwargs), disabled=True)
elif callback.finished:
solara.Text('Finished')
app_state.session_state.set(callback.value)
else:
solara.Text('We can go now')
solara.Button('Start', on_click=lambda: callback(**kwargs), disabled=False)
@solara.component
def QuestionComponent(question: QuestionAnswer, user_input: str, hotkey_active: bool, check_func: callable, next_func: callable):
# reactive values
user_input = solara.use_reactive(user_input)
refocus_trigger = solara.use_reactive(0)
# other (non) reactive values used elsewhere
with solara.Card(style={'width': '50%'}):
solara.Markdown('Test title')
with FocusOnTrigger(enabled=app_state.session_state.value.is_session_active, target_query='input', refocus_trigger=refocus_trigger.value):
solara.v.TextField(
label='Your translation',
v_model=user_input.value,
on_v_model=user_input.set,
disabled=not app_state.session_state.value.is_session_active or question.is_checked,
continuous_update=True,
autofocus=True,
)
if (question.is_checked is False) and (app_state.session_state.value.is_session_active is True):
with solara.HBox():
if check_func.pending:
solara.Text('Checking...')
elif check_func.finished:
app_state.session_state.set(check_func.value)
print(f'app_state.session_state.current_question: {app_state.session_state.value.current_question}')
else:
solara.Button('Check', on_click=lambda: check_func(user_input.value, question), disabled=question.is_checked)
else:
# Do other logic
```
So basically this is happening..
When run the `Example()` will show a Start button (from the `StartSessionComponent` component). Clicking the Start button makes everything behave normally. the `app_state` is being updated, and the `Example()` is rerendered showing what comes next (according to the conditions).
When I click the `Check` button (from the `QuestionComponent`), the `check_answer` task runs successfully, and the `app_state` is correctly updated (I can see this from the print statement). However the `Example()` is not rerendered.
From what I can see, the approach is basically identical between the usage of the tasks (`initialize_session` and `check_answer`) and in both cases the `app_state` object gets updated correctly. However the former case consistently re-renders the UI, while the later never does.
Aside: I know that that in both cases `app_state` is correctly updated. When I develop, i have hot-reload on, so if I make a trivial change, the app will "soft-refresh" and it will go to the next expected state, as I would expect it to do when `app_state` is updated.
I understand this is a .. long convoluted example and not something you can run to figure out what is wrong. Nor do I expect it to be a bug, but more a design/structuring/flow problem. Any advice would be helpful. Thanks!
| open | 2024-08-28T22:48:59Z | 2024-08-29T19:42:11Z | https://github.com/widgetti/solara/issues/759 | [] | JovanVeljanoski | 2 |
aleju/imgaug | deep-learning | 284 | AssertionError: AssertionFailed on augment_batches | Say I have a list of 10 images X and corresponding 10 masks y. I do the following:
```
b = ia.Batch(X, segmentation_maps=S)
g = g = seq.augment_batches([b])
next(g)
```
I get the following error:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-41-e734f8aca5ac> in <module>()
----> 1 next(g)
~/anaconda3/envs/linda/lib/python3.6/site-packages/imgaug/augmenters/meta.py in augment_batches(self, batches, hooks, background)
266 for i, batch in enumerate(batches):
267 if isinstance(batch, ia.Batch):
--> 268 batch_copy = batch.deepcopy()
269 batch_copy.data = (i, batch_copy.data)
270 batches_normalized.append(batch_copy)
~/anaconda3/envs/linda/lib/python3.6/site-packages/imgaug/imgaug.py in deepcopy(self)
6851 images=_copy_images(self.images_unaug),
6852 heatmaps=_copy_augmentable_objects(self.heatmaps_unaug, HeatmapsOnImage),
-> 6853 segmentation_maps=_copy_augmentable_objects(self.segmentation_maps_unaug, SegmentationMapOnImage),
6854 keypoints=_copy_augmentable_objects(self.keypoints_unaug, KeypointsOnImage),
6855 bounding_boxes=_copy_augmentable_objects(self.bounding_boxes_unaug, BoundingBoxesOnImage),
~/anaconda3/envs/linda/lib/python3.6/site-packages/imgaug/imgaug.py in _copy_augmentable_objects(augmentables, clazz)
6844 else:
6845 do_assert(is_iterable(augmentables))
-> 6846 do_assert(all([isinstance(augmentable, clazz) for augmentable in augmentables]))
6847 augmentables_copy = [augmentable.deepcopy() for augmentable in augmentables]
6848 return augmentables_copy
~/anaconda3/envs/linda/lib/python3.6/site-packages/imgaug/imgaug.py in do_assert(condition, message)
1821 """
1822 if not condition:
-> 1823 raise AssertionError(str(message))
1824
1825
AssertionError: Assertion failed.
```
Not sure what happens here, I think I followed everything I found in the documentation. X is a list of images of size 256x256x3, and y is 256x256x1. Could be a bug. I will also dig a bit more through the code if I have time.
| open | 2019-03-11T13:28:29Z | 2019-03-30T16:12:23Z | https://github.com/aleju/imgaug/issues/284 | [] | vojavocni | 3 |
cvat-ai/cvat | computer-vision | 8,315 | Moving tasks between projects | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
I would like to move some tasks to identical project I have created (same label list and attributes).
> I use the task options in Django to move the task.
> 
When I transfer a task from one project to the other, it moves. I can see it in the new project task list, but I'm not able to open the jobs.
When I transfer it back to the original project, I'm able to open it as normal.
### Describe the solution you'd like
I would like to have an easy, user friendly, option to move multiple tasks in a batch from one project to the other.
* Matching label list
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | open | 2024-08-17T19:51:18Z | 2024-08-17T19:51:18Z | https://github.com/cvat-ai/cvat/issues/8315 | [
"enhancement"
] | ilya-sha | 0 |
stanfordnlp/stanza | nlp | 1,160 | 1.4.0 is buggy when it comes to some dependency parsing tasks, however, 1.3.0 works correctly | I am using the dependency parser and noticed 1.4.0 has bugs that do not exist in 1.3.0. Here is an example:
If B is true and if C is false, perform D; else, perform E and perform F
in 1.3.0, 'else' is correctly detected as a child of the 'perform' coming after it; however, in 1.4.0, it is detected as a child of the 'perform' before it.
How can I force Stanza to load 1.3.0 instead of the latest version, so I can move forward with what I am doing now? | open | 2022-12-07T20:38:41Z | 2023-09-21T05:49:20Z | https://github.com/stanfordnlp/stanza/issues/1160 | [
"bug"
] | apsyio | 3 |
dynaconf/dynaconf | flask | 1,004 | .secrets.toml not read automatically | Hi there,
currently I'm evaluating various libraries for settings management in my next project. One favorite is Dynaconf, it relly shines! :)
One thing which makes me wonder is that according to the docs the file `.secrets.toml` should be automatically read, shouldn't it?
Example:
```
# conf/myprog.toml
BASE='/some/path'
SECRET_KEY='abc123'
# conf/myprog.local.toml
BASE='/foo/bar'
# conf/.secrets.toml
SECRET_KEY='wildwilly'
```
Running a normal Python shell I import the settings and showed the result:
```
>>> from dynaconf import Dynaconf
>>> settings = Dynaconf(root_path='conf', merge_enabled=true, settings_files=['myprog.toml'])
>>> settings.to_dict()
{'BASE': '/foo/bar', 'SECRET_KEY': 'abc123'}
```
To my understanding `SECRET_KEY` should be `wildwilly`. So why it is not changed in the result? Or do I have to specify .secrets.toml explicitly in settings_files? Btw, it does not matter, if .secrets.toml is under conf or in the directory above (from where I started the interpreter from).
Dynaconf version is 3.1.5.
Many thanks | closed | 2023-09-14T13:54:12Z | 2023-11-19T17:58:22Z | https://github.com/dynaconf/dynaconf/issues/1004 | [
"question",
"Docs",
"good first issue"
] | thmsklngr | 4 |
pytest-dev/pytest-html | pytest | 872 | Captured stdio output repeating in HTML report | My test produces log output using the logging module. In the HTML report, the output lines repeat, and the repetition increases for the number of tests run.
e.g
first test:
```
---------------------------- Captured stderr setup -----------------------------
2025-02-06 10:26:49,996 - test - DEBUG - Something
2025-02-06 10:26:49,996 - test - DEBUG - Something else
```
second test
```
---------------------------- Captured stderr setup -----------------------------
2025-02-06 10:26:49,996 - test - DEBUG - Something
2025-02-06 10:26:49,996 - test - DEBUG - Something
2025-02-06 10:26:49,996 - test - DEBUG - Something else
2025-02-06 10:26:49,996 - test - DEBUG - Something else
```
using `pytest-html>=4.1.1` | open | 2025-02-06T10:34:42Z | 2025-02-06T10:35:50Z | https://github.com/pytest-dev/pytest-html/issues/872 | [] | zaoptos | 0 |
koaning/scikit-lego | scikit-learn | 26 | feature request: timeseries features | it might be nice to be able to accept a datetime column and to generate lots of relevant features from it that can be used in an sklearn pipeline.
think: day_of_week, hour, etc. | closed | 2019-03-05T14:01:15Z | 2019-10-18T14:06:20Z | https://github.com/koaning/scikit-lego/issues/26 | [] | koaning | 1 |
jupyter/nbgrader | jupyter | 1,189 | Student courses not appearing | When using the "multiple courses" setup with JupyterHub authentication, it does not seem that students can actually view assignments in the courses they are in. | closed | 2019-08-24T16:17:57Z | 2019-08-24T22:43:12Z | https://github.com/jupyter/nbgrader/issues/1189 | [
"bug"
] | jhamrick | 0 |
pytorch/vision | machine-learning | 8,786 | `download` parameter of `KMNIST()` should be explained at the end | ### 📚 The doc issue
[The doc](https://pytorch.org/vision/stable/generated/torchvision.datasets.KMNIST.html) of `KMNIST()` says `download` parameter is at the end as shown below:
> class torchvision.datasets.KMNIST(root: [Union](https://docs.python.org/3/library/typing.html#typing.Union)[[str](https://docs.python.org/3/library/stdtypes.html#str), [Path](https://docs.python.org/3/library/pathlib.html#pathlib.Path)], train: [bool](https://docs.python.org/3/library/functions.html#bool) = True, transform: [Optional](https://docs.python.org/3/library/typing.html#typing.Optional)[[Callable](https://docs.python.org/3/library/typing.html#typing.Callable)] = None, target_transform: [Optional](https://docs.python.org/3/library/typing.html#typing.Optional)[[Callable](https://docs.python.org/3/library/typing.html#typing.Callable)] = None, download: [bool](https://docs.python.org/3/library/functions.html#bool) = False)
But `download` parameter is explained in the middle as shown below:
> Parameters:
> - root (str or pathlib.Path) – Root directory of dataset where KMNIST/raw/train-images-idx3-ubyte and KMNIST/raw/t10k-images-idx3-ubyte exist.
> - train ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, creates dataset from train-images-idx3-ubyte, otherwise from t10k-images-idx3-ubyte.
> - download ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.
> - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g, transforms.RandomCrop
> - target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
### Suggest a potential alternative/fix
So `download` parameter should be explained at the end as shown below:
> Parameters:
> - root (str or pathlib.Path) – Root directory of dataset where KMNIST/raw/train-images-idx3-ubyte and KMNIST/raw/t10k-images-idx3-ubyte exist.
> - train ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, creates dataset from train-images-idx3-ubyte, otherwise from t10k-images-idx3-ubyte.
> - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g, transforms.RandomCrop
> - target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
> - download ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again. | closed | 2024-12-06T05:16:00Z | 2025-02-19T16:10:57Z | https://github.com/pytorch/vision/issues/8786 | [] | hyperkai | 1 |
piskvorky/gensim | machine-learning | 2,716 | lemmatize: generator raised StopIteration | #### Problem description
I'm trying to use lemmatize function to my text but getting StopIteration exception.
#### Steps/code/corpus to reproduce
```
from gensim.utils import lemmatize
s = lemmatize('eight')
print(s)
```
Result:
```
python3 lem.py
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 609, in _read
raise StopIteration
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "lem.py", line 4, in <module>
s = lemmatize('eight')
File "/usr/local/lib/python3.7/site-packages/gensim/utils.py", line 1692, in lemmatize
parsed = parse(content, lemmata=True, collapse=False)
File "/usr/local/lib/python3.7/site-packages/pattern/text/en/__init__.py", line 169, in parse
return parser.parse(s, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 1172, in parse
s[i] = self.find_tags(s[i], **kwargs)
File "/usr/local/lib/python3.7/site-packages/pattern/text/en/__init__.py", line 114, in find_tags
return _Parser.find_tags(self, tokens, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 1113, in find_tags
lexicon = kwargs.get("lexicon", self.lexicon or {}),
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 376, in __len__
return self._lazy("__len__")
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 368, in _lazy
self.load()
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 625, in load
dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1))
File "/usr/local/lib/python3.7/site-packages/pattern/text/__init__.py", line 625, in <genexpr>
dict.update(self, (x.split(" ")[:2] for x in _read(self._path) if len(x.split(" ")) > 1))
RuntimeError: generator raised StopIteration
```
#### Versions
I'm using MacOS, Python3:
```>>> import platform; print(platform.platform())
Darwin-18.7.0-x86_64-i386-64bit
>>> import sys; print("Python", sys.version)
Python 3.7.4 (default, Sep 7 2019, 18:27:02)
[Clang 10.0.1 (clang-1001.0.46.4)]
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.18.0
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.4.1
>>> import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
gensim 3.8.1
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 0
```
```
pip3 freeze | grep pattern
pattern3==3.0.0
pip3 freeze | grep gensim
gensim==3.8.1
```
| open | 2019-12-29T11:10:15Z | 2020-06-16T19:07:06Z | https://github.com/piskvorky/gensim/issues/2716 | [] | TimurNurlygayanov | 14 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 344 | Support AsyncSession in SQLAlchemy | In SQLAlchemy 1.14 it will support `asyncio` with `AsyncSession`, is there any plan to make `marshmallow-sqlalchemy` work with `AsyncSession`? | closed | 2020-09-12T07:16:44Z | 2023-10-06T20:07:04Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/344 | [] | wei-hai | 1 |
NVIDIA/pix2pixHD | computer-vision | 317 | wrong output when testing with RGB segmentation mask | Hello,
I tried testing with the following segmentation mask:

And the result is the following:

Do you have an idea why I am getting such output?
Best,
| open | 2023-03-02T14:59:09Z | 2023-04-13T01:17:44Z | https://github.com/NVIDIA/pix2pixHD/issues/317 | [] | At-Walid | 1 |
PrefectHQ/prefect | automation | 16,773 | Cache Policy. DEFAULT - "self" doesn't work | ### Bug summary
### Problem
`DEFAULT` is defined as:
```python
DEFAULT = INPUTS + TASK_SOURCE + RUN_ID
```
This makes it a `CompoundCachePolicy`.
The issue is with the `__sub__` method, which `CompoundCachePolicy` inherits from `CachePolicy`. The current implementation in `CachePolicy` is broken. Instead of removing the parameter, it adds redundant inputs.
[Current implementation](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/cache_policies.py#L82) in `CachePolicy`:
```python
def __sub__(self, other: str) -> "CachePolicy":
if not isinstance(other, str): # type: ignore[reportUnnecessaryIsInstance]
raise TypeError("Can only subtract strings from key policies.")
new = Inputs(exclude=[other])
return CompoundCachePolicy(policies=[self, new])
```
### What Happens
When subtracting `"self"` from `DEFAULT`:
- It adds redundant `Inputs`.
- It doesn’t actually remove the parameter.
### Proposed Solution
1. For `CachePolicy`, `__sub__` should simply return `self`:
```python
def __sub__(self, other: str) -> "CachePolicy":
if not isinstance(other, str): # type: ignore[reportUnnecessaryIsInstance]
raise TypeError("Can only subtract strings from key policies.")
return self
```
2. For `CompoundCachePolicy`, `__sub__` should subtract parameter from all policies:
```python
def __sub__(self, other: str) -> "CachePolicy":
if not isinstance(other, str): # type: ignore[reportUnnecessaryIsInstance]
raise TypeError("Can only subtract strings from key policies.")
new = [x - other for x in self.policies]
return CompoundCachePolicy(policies=new)
```
### Expected Behavior
With these changes:
- Subtracting `"self"` from `DEFAULT` will work as expected.
- Parameters will be properly removed without adding redundant inputs.
### Version info
```Text
Version: 3.1.13
API version: 0.8.4
Python version: 3.12.7
Git commit: 16e85ce3
Built: Fri, Jan 17, 2025 8:46 AM
OS/Arch: win32/AMD64
Profile: local
Server type: server
Pydantic version: 2.10.5
```
### Additional context
_No response_ | closed | 2025-01-19T23:27:55Z | 2025-01-21T18:54:27Z | https://github.com/PrefectHQ/prefect/issues/16773 | [
"bug"
] | a14e | 8 |
horovod/horovod | machine-learning | 3,094 | How to conduct validation test during training with multi GPU? | Hi all,
When I use multi GPUs to train, but I want to conduct validation test during the train, how can I realize it? Here is my code:
`
with tf.Session(config=config) as sess:
ckpt = tf.train.latest_checkpoint(hp.checkpoint)
if ckpt is None:
logging.info("Starting new training")
sess.run(tf.global_variables_initializer())
sess.run(bcast)
else:
logging.info("Resuming from checkpoint: %s" % ckpt)
saver.restore(sess, ckpt)
while True:
try:
ids, _gs, _loss, _acc, _summary, _ = sess.run([train_id, global_step, loss, accuracy, train_summary, train_op])
if hvd.rank() == 0:
logging.info("step {}, loss:{:.4f}, accuracy:{:.4f}".format(_gs, _loss, _acc))
if _gs % 10 == 0:
summary_writer.add_summary(_summary, _gs)
if _gs % 1000 == 0:
logging.info("# save models at {} step".format(_gs))
saver.save(sess, ckpt_name, global_step=_gs)
if math.isnan(_loss):
logging.info("第hvd.rank({})个进程的第{}步出现错误".fromat(ids, _gs))
raise Exception('Loss Exploded')
if _gs % hp.eval_per_step == 0:
logging.info("# statrt a validation test: ")
......
`
It give the error as this:
INFO:root:Horovod has been shut down. This was caused by an exception on one of the ranks or an attempt to allreduce, allgather or broadcast a tensor after one of the ranks finished execution. If the shutdown was caused by an exception, you should see the exception in the log before the first shutdown message.
[[Node:DistributedAdamOptimizer_Allreduce/HorovodAllreduce_gradients_encoder_dense_Tensordot_transpose_1_grad_transpose_0 = HorovodAllreduce[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/encoder/dense/Tensordot/transpose_1_grad/transpose)]]
Anyone can give some suggestion? Thanks a lot! | closed | 2021-08-09T05:21:48Z | 2021-08-09T08:34:06Z | https://github.com/horovod/horovod/issues/3094 | [] | yjiangling | 0 |
amdegroot/ssd.pytorch | computer-vision | 178 | redundant information in data.scripts.cocolabels.txt? | it seems the first column in data.scripts.cocolabels.txt should not exist,so the second column represents class id and the third column represents class name | open | 2018-06-13T09:47:20Z | 2018-06-13T09:47:20Z | https://github.com/amdegroot/ssd.pytorch/issues/178 | [] | YingdiZhang | 0 |
huggingface/datasets | computer-vision | 7,461 | List of images behave differently on IterableDataset and Dataset | ### Describe the bug
This code:
```python
def train_iterable_gen():
images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128)))
yield {
"images": np.expand_dims(images, axis=0),
"messages": [
{
"role": "user",
"content": [{"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" }]
},
{
"role": "assistant",
"content": [{"type": "text", "text": "duck" }]
}
]
}
train_ds = Dataset.from_generator(train_iterable_gen,
features=Features({
'images': [datasets.Image(mode=None, decode=True, id=None)],
'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None) }], 'role': datasets.Value(dtype='string', id=None)}]
} )
)
```
works as I'd expect; if I iterate the dataset then the `images` column returns a `List[PIL.Image.Image]`, i.e. `'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x77EFB7EF4680>]`.
But if I change `Dataset` to `IterableDataset`, the `images` column changes into `'images': [{'path': None, 'bytes': ..]`
### Steps to reproduce the bug
The code above +
```python
def load_image(url):
response = requests.get(url)
image = Image.open(io.BytesIO(response.content))
return image
```
I'm feeding it to SFTTrainer
### Expected behavior
Dataset and IterableDataset would behave the same
### Environment info
```yaml
requires-python = ">=3.12"
dependencies = [
"av>=14.1.0",
"boto3>=1.36.7",
"datasets>=3.3.2",
"docker>=7.1.0",
"google-cloud-storage>=2.19.0",
"grpcio>=1.70.0",
"grpcio-tools>=1.70.0",
"moviepy>=2.1.2",
"open-clip-torch>=2.31.0",
"opencv-python>=4.11.0.86; sys_platform == 'darwin'",
"opencv-python-headless>=4.11.0.86; sys_platform == 'linux'",
"pandas>=2.2.3",
"pillow>=10.4.0",
"plotly>=6.0.0",
"py-spy>=0.4.0",
"pydantic>=2.10.6",
"pydantic-settings>=2.7.1",
"pymysql>=1.1.1",
"ray[data,default,serve,train,tune]>=2.43.0",
"torch>=2.6.0",
"torchmetrics>=1.6.1",
"torchvision>=0.21.0",
"transformers[torch]@git+https://github.com/huggingface/transformers",
"wandb>=0.19.4",
# https://github.com/Dao-AILab/flash-attention/issues/833
"flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl; sys_platform == 'linux'",
"trl@https://github.com/huggingface/trl.git",
"peft>=0.14.0",
]
``` | closed | 2025-03-17T15:59:23Z | 2025-03-18T08:57:17Z | https://github.com/huggingface/datasets/issues/7461 | [] | FredrikNoren | 2 |
tortoise/tortoise-orm | asyncio | 1,085 | 'update ... limit' doesn't work with asyncpg backend | **Describe the bug**
the 'limit' attribute of 'update' queries is missing with asyncpg engine
**To Reproduce**
```python3
import asyncio, os
from tortoise import Model, fields, Tortoise
class Table(Model):
x = fields.IntField()
async def test_query(db_url):
print(db_url.split(':')[0])
await Tortoise.init(db_url=db_url, modules={'db': ['__main__']})
query = Table.all().limit(1).update(x=10)
print('query.limit', query.limit)
print(query.sql())
await Tortoise.close_connections()
async def main():
await test_query(os.environ['DATABASE_URL'])
print('---')
await test_query('sqlite://update-limit.sqlite')
asyncio.run(main())
```
output:
```shell
postgres
query.limit 1
UPDATE "table" SET "x"=10
---
sqlite
query.limit 1
UPDATE "table" SET "x"=10 LIMIT 1
```
**Expected behavior**
postgres should generate a sql 'limit' keyword like sqlite
**Additional context**
this was raised in #748, and fixed in #754, but only for sqlite I think
happy to submit a PR if useful, LMK | closed | 2022-03-14T23:08:52Z | 2022-03-15T01:08:23Z | https://github.com/tortoise/tortoise-orm/issues/1085 | [] | abe-winter | 2 |
MaartenGr/BERTopic | nlp | 1,456 | Transform on pre-computed embedding | Hi,
Thanks for your great work on this awesome package!
In my use case I have a custom embedder (FastText with TF-IDF weighting), and therefore I'm pre-computing the embeddings. After training the model, I would like to transform/predict on new documents. I have generated the embeddings for them, but it seems that the `transform` method, unlike `fit_transform`, does not directly accept embeddings. How can this be achieved? Do I need to make an embedder class compatible with BERTopic and pass that on to the model, instead of pre-computing the embeddings? Any ideas or pointers will be appreciated.
Thanks!
| closed | 2023-08-07T06:27:58Z | 2023-08-16T06:13:22Z | https://github.com/MaartenGr/BERTopic/issues/1456 | [] | guymorlan | 4 |
mkhorasani/Streamlit-Authenticator | streamlit | 2 | Reuse username after login | Hi,
Do you know how it would be possible to reuse the username after the user logins? I want to pass it onto a query to search in a pandas dataframe so I can display information pertaining only to that user.
Thanks, | closed | 2022-01-06T09:47:58Z | 2024-09-27T20:02:52Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/2 | [] | pelguetat | 5 |
tensorflow/tensor2tensor | deep-learning | 1,747 | Use transformer encoder for sequence labeling | I would like to use the transformer architecture for a sequence-labeling problem. I have two files, one consisting of the input tokens, and the other one of the labels. The labels are short strings and there are about 100 different types of them. I guess I only need to the encoder and no decoder since the number of input tokens and output tokens is identical. For output, this could be realized by classes for each input token. Now my question might be trivial but how to do this in t2t? I have seen the tansformer_encoder used for phrase classification, but I am not clear on how to use it for classification of each individual token.
| open | 2019-11-17T09:13:14Z | 2019-12-07T03:59:37Z | https://github.com/tensorflow/tensor2tensor/issues/1747 | [] | sebastian-nehrdich | 4 |
huggingface/transformers | tensorflow | 36,725 | `torch.compile` custom backend called by AotAutograd triggers recompiles when used with `CompileConfig` | ### System Info
transformers==4.49.0
### Who can help?
@gante @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
If we use a custom backend with `torch.compile` called by `AotAutograd`, as described in https://pytorch.org/docs/stable/torch.compiler_custom_backends.html#custom-backends-after-aotautograd, and use it with the [CompileConfig](https://github.com/huggingface/transformers/blob/9215cc62d4366072aacafa4e44028c1ca187167b/src/transformers/generation/configuration_utils.py#L1584), each call to `generate` will trigger a recompile.
Minimal reproducer:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, CompileConfig
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"

from functorch.compile import make_boxed_func
from torch._dynamo.backends.common import aot_autograd

dtype = torch.float32
model_path = "hf-internal-testing/tiny-random-LlamaForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(model_path, torch_dtype=dtype)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype)

inputs = tokenizer(["The quick brown "], return_tensors="pt", padding=True)
kwargs = dict(inputs)
kwargs.update(
    {
        "do_sample": False,
        "max_new_tokens": 10,
    }
)
model.generation_config.cache_implementation = "static"

def my_compiler(gm, example_inputs):
    return make_boxed_func(gm.forward)

my_backend = aot_autograd(fw_compiler=my_compiler)

model.generation_config.compile_config = CompileConfig(
    backend=my_backend,
    mode=None
)
model.generation_config.compile_config._compile_all_devices = True

with torch.no_grad():
    for i in range(torch._dynamo.config.cache_size_limit + 1):
        output = model.generate(**kwargs)
        print(output)
```
This can be run with `TORCH_LOGS="recompiles"` for recompile logs and will result in an error due to the cache size limit being exceeded.
If we wrap the call to `aot_autograd` as:
`my_backend = torch._dynamo.disable(aot_autograd(fw_compiler=my_compiler))`
then it runs without recompiles.
### Expected behavior
Recompiles should not be happening and it shouldn't be necessary to disable dynamo with `aot_autograd`. | open | 2025-03-14T14:15:41Z | 2025-03-14T14:15:59Z | https://github.com/huggingface/transformers/issues/36725 | [
"bug"
] | shaurya0 | 0 |
pyqtgraph/pyqtgraph | numpy | 2,900 | Error while drawing item 【GLScatterPlotItem、GLSurfacePlotItem】 | import sys
from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QPushButton
from PySide6.QtCore import Qt
import pyqtgraph.opengl as gl
import numpy as np
import uuid

class TEST3D(QWidget):
    def __init__(self, width=480, height=790):
        super().__init__()
        self.qwidth = width
        self.qheight = height
        self.initWindow()

    def initWindow(self):
        self.setAttribute(Qt.WA_DeleteOnClose)
        self.setWindowModality(Qt.WindowModal)
        self.resize(1366, 768)
        vlayout = QVBoxLayout(self)
        self.gl_widget = gl.GLViewWidget(self)
        vlayout.addWidget(self.gl_widget)
        pos = np.empty((53, 3))
        size = np.empty((53))
        color = np.empty((53, 4))
        pos[0] = (1, 0, 0)
        size[0] = 0.5
        color[0] = (1.0, 0.0, 0.0, 0.5)
        pos[1] = (0, 1, 0)
        size[1] = 0.2
        color[1] = (0.0, 0.0, 1.0, 0.5)
        pos[2] = (0, 0, 1)
        size[2] = 2. / 3.
        color[2] = (0.0, 1.0, 0.0, 0.5)
        z = 0.5
        d = 6.0
        # create the scatter plot
        scatter = gl.GLScatterPlotItem(pos=pos, size=size, color=color, pxMode=False)
        self.gl_widget.addItem(scatter)
        g = gl.GLGridItem()
        self.gl_widget.addItem(g)
        self.show()

    def closeEvent(self, event):
        self.gl_widget.close()
        event.accept()

class MyGLWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("PySide6 with GLViewWidget")
        self.setGeometry(100, 100, 800, 600)
        self.t3d = {}
        self.basewidget = QWidget(self)
        self.vlayout = QVBoxLayout(self.basewidget)
        self.base_btn = QPushButton(u"There will be a problem opening it for the second time")
        self.vlayout.addWidget(self.base_btn)
        self.base_btn.clicked.connect(self.plot_test_data)
        self.setCentralWidget(self.basewidget)

    def plot_test_data(self):
        self.t3d[str(uuid.uuid1())] = TEST3D()

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MyGLWindow()
    window.show()
    sys.exit(app.exec())

| open | 2023-12-14T07:32:38Z | 2023-12-19T03:23:09Z | https://github.com/pyqtgraph/pyqtgraph/issues/2900 | [] | cuish0920 | 2 |
iterative/dvc | data-science | 10,242 | dvc status --json can output non-json | # Bug Report
## Description
When there are large files to hash which are not cached, `dvc status --json` will still print out the message, which makes the output not valid json. I believe the use case of `dvc status --json` is to be able to pipe the output to a file and easily read it with another program, so extra messages make this inconvenient.
I accidentally erased the output I had but I think this is the message that is printed out: https://github.com/iterative/dvc-data/blob/300a3e072e5baba50f7ac5f91240891c0e30d030/src/dvc_data/hashfile/hash.py#L174
### Reproduce
1. large data file stage dependency
2. `dvc status --json` for the first time
### Expected
`dvc status --json` _only_ outputs valid json
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
DVC version: 3.33.4 (choco)
---------------------------
Platform: Python 3.11.6 on Windows-10-10.0.19045-SP0
Subprojects:
dvc_data = 2.24.0
dvc_objects = 2.0.1
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 1.6.0
Supports:
azure (adlfs = 2023.12.0, knack = 0.11.0, azure-identity = 1.15.0),
gdrive (pydrive2 = 1.19.0),
gs (gcsfs = 2023.12.2.post1),
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
oss (ossfs = 2023.12.0),
s3 (s3fs = 2023.12.2, boto3 = 1.33.13),
ssh (sshfs = 2023.10.0)
Config:
Global: C:\Users\starrgw1\AppData\Local\iterative\dvc
System: C:\ProgramData\iterative\dvc
``` | open | 2024-01-17T15:08:12Z | 2024-10-23T08:06:35Z | https://github.com/iterative/dvc/issues/10242 | [
"bug",
"p3-nice-to-have",
"ui",
"A: cli"
] | gregstarr | 10 |
nvbn/thefuck | python | 815 | Red color not reset when no fucks were given |
**The output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0`):**
The Fuck 3.26 using Python 3.6.3
**Your shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.):**
Windows PowerShell
**Your system (Debian 7, ArchLinux, Windows, etc.):**
Windows
**How to reproduce the bug:**
Force a "no fucks given" output (by putting it after a successful command or something)

**The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):**
_not sure how to do this in PowerShell, but it shouldn't be that relevant_
| closed | 2018-05-17T11:55:54Z | 2018-06-12T09:48:05Z | https://github.com/nvbn/thefuck/issues/815 | [
"windows"
] | vijfhoek | 2 |
pytest-dev/pytest-xdist | pytest | 839 | Is ssh and remote socket server deprecated or just rsync? | I read the [warning in the docs](https://pytest-xdist.readthedocs.io/en/latest/remote.html) about "this feature" being deprecated, but it's unclear to me what exactly is deprecated.
Are you deprecating everything involved in running tests on remote machines? This includes the whole ssh, socket server, `--rsyncdir` system. Or are you just deprecating `--rsyncdir`?
If it's just `rsyncdir` then that means I just have to manually `git clone`, `scp`, or otherwise get my source code to the target machine right? | closed | 2022-10-31T21:10:53Z | 2023-07-04T11:21:23Z | https://github.com/pytest-dev/pytest-xdist/issues/839 | [] | cheog | 3 |
microsoft/qlib | deep-learning | 1,590 | generate trade decisions every 10 days? | In the `collect_data_loop` method, it seems that trade decisions are generated every day.
But I want to generate trade decisions every 10 days. Can we do this? | closed | 2023-07-09T03:37:46Z | 2023-10-12T06:01:59Z | https://github.com/microsoft/qlib/issues/1590 | [
"question",
"stale"
] | quant2008 | 1 |
aminalaee/sqladmin | fastapi | 559 | Support multiple databases | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Sometimes we need multiple databases for a project,
but in this application I haven't found how to do that.
### Describe the solution you would like.
One possible solution could be to pass the sessionmaker factory instead of the engine.
SQLAlchemy documentation page about routing:
https://docs.sqlalchemy.org/en/20/orm/persistence_techniques.html#session-partitioning
### Describe alternatives you considered
Multiple instance of Admin()???
### Additional context
I found a PR with sessionmaker:
https://github.com/aminalaee/sqladmin/pull/542 | closed | 2023-07-21T10:42:46Z | 2023-08-01T07:04:01Z | https://github.com/aminalaee/sqladmin/issues/559 | [] | meetinger | 1 |
allure-framework/allure-python | pytest | 40 | Support next model2 version | closed | 2017-02-12T14:06:24Z | 2017-02-13T15:11:46Z | https://github.com/allure-framework/allure-python/issues/40 | [] | sseliverstov | 0 | |
jina-ai/clip-as-service | pytorch | 545 | About configuring the POOL STRATEGY parameter | This is not a bug report but a feature suggestion: could the server-side POOL STRATEGY parameter be made configurable on the client side?
As it stands, the service has to be restarted whenever different needs arise. | open | 2020-04-29T03:22:21Z | 2020-04-29T03:22:21Z | https://github.com/jina-ai/clip-as-service/issues/545 | [] | dongrixinyu | 0 |
lanpa/tensorboardX | numpy | 231 | can't open the url on chrome | demo.py runs OK. However, the URL can't be opened in Chrome.


OS:win10
tensorboardX (1.4)
tensorboard (1.8.0)
tensorflow (1.7.0)
torch (0.4.1)
torchvision (0.2.1)
| closed | 2018-09-28T02:54:00Z | 2018-09-28T07:10:51Z | https://github.com/lanpa/tensorboardX/issues/231 | [] | zhaoxin111 | 0 |
K3D-tools/K3D-jupyter | jupyter | 132 | voxel editing is broken in 2.4.21 | closed | 2019-02-05T10:17:47Z | 2019-02-20T11:02:58Z | https://github.com/K3D-tools/K3D-jupyter/issues/132 | [] | marcinofulus | 1 | |
pytorch/pytorch | python | 148,908 | Numpy v1 v2 compatibility | What's the policy on numpy compatibility in pytorch? I see that requirements-ci.txt pins numpy==1 for <python3.13 and numpy==2 for py3.13, but later in CI numpy gets reinstalled as numpy==2.0.2 for most python versions. Is CI supposed to use v2 or v1? Does being compatible with v2 ensure compatibility with v1?
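As an aside, a per-Python-version pin can also be expressed directly with PEP 508 environment markers instead of CI-side branching; an illustrative requirements fragment (exact versions hypothetical):

```
numpy==1.26.4; python_version < "3.13"
numpy==2.0.2; python_version >= "3.13"
```

pip evaluates the markers at install time, so a single requirements file covers both interpreter versions.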
cc @mruberry @rgommers @malfet | closed | 2025-03-10T20:10:10Z | 2025-03-10T20:13:59Z | https://github.com/pytorch/pytorch/issues/148908 | [
"module: numpy"
] | clee2000 | 1 |
mirumee/ariadne | api | 1,078 | Query cost validation is skipping `InlineFragmentNode` | I was tipped off by @przlada that our query cost validator skips `InlineFragmentNode` when calculating the costs.
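To illustrate the undercount, here is a self-contained sketch with hypothetical node classes that mirror the graphql-core names (the real validator walks graphql-core's AST, not these):

```python
from dataclasses import dataclass, field

@dataclass
class FieldNode:
    name: str
    selections: list = field(default_factory=list)

@dataclass
class InlineFragmentNode:
    selections: list = field(default_factory=list)

def cost(node):
    # a fragment itself is free, but its selections must still be counted
    children = sum(cost(child) for child in node.selections)
    if isinstance(node, InlineFragmentNode):
        return children
    return 1 + children

search = FieldNode("search", [
    InlineFragmentNode([FieldNode("id"), FieldNode("username")]),
    InlineFragmentNode([FieldNode("id"), FieldNode("content")]),
])
assert cost(search) == 5  # skipping InlineFragmentNode entirely would report only 1
```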
`InlineFragmentNode` is a fragment used when querying interfaces and unions:
```graphql
{
search(query: "lorem ipsum") {
... on User {
id
username
}
... on Comment {
id
content
}
}
}
``` | closed | 2023-04-26T08:48:48Z | 2023-04-28T10:56:32Z | https://github.com/mirumee/ariadne/issues/1078 | [
"bug",
"help wanted"
] | rafalp | 0 |
dgtlmoon/changedetection.io | web-scraping | 3,022 | [feature] Allow to set various default request headers (not only user agent header) | **Version and OS**
0.49.3 on termux (mobile linux)
**Is your feature request related to a problem? Please describe.**
To evade bot-detection techniques I need to set up realistic-looking headers https://github.com/dgtlmoon/changedetection.io/issues/2198#issuecomment-2130495118 (not just the user agent), but the default settings (cd.io > settings > fetching) only allow setting the user agent

so I need to set headers **for each watch** (cd.io > watch > edit > request)

**Describe the solution you'd like**
Allow setting default request headers, e.g. cd.io > settings > fetching > request headers | closed | 2025-03-13T10:24:43Z | 2025-03-18T11:32:32Z | https://github.com/dgtlmoon/changedetection.io/issues/3022 | [
"enhancement"
] | gety9 | 3 |
cchen156/Learning-to-See-in-the-Dark | tensorflow | 44 | Why is the output picture so dark? I used the pretrained model. Is any other operation needed? | **I downloaded the pretrained model and ran 'test_Sony.py', but the output is very dark!**

| closed | 2018-07-28T08:53:43Z | 2019-08-26T02:37:45Z | https://github.com/cchen156/Learning-to-See-in-the-Dark/issues/44 | [] | StudentZhangxu | 4 |
vaexio/vaex | data-science | 2,020 | Can I use Plotly graphs with a vaex dataframe? | I want to use a vaex dataframe with Plotly Express to make a Dash app,
but I don't know if I can do this:
df = dfvx.groupby((dfvx.PRO, dfvx.AGE), agg='count')
scatter = px.scatter(df,
                     size="PRO", color="AGE",
                     hover_name="PRO", log_x=True, size_max=50)
The error:
ValueError: Value of 'size' is not the name of a column in 'data_frame'. Expected one of [0] but received: count
If there is a solution, let me know.
Thank you! | closed | 2022-04-15T16:24:19Z | 2022-06-08T02:38:26Z | https://github.com/vaexio/vaex/issues/2020 | [] | sanaeO | 6 |
waditu/tushare | pandas | 832 | fut_basic interface | Interface: fut_basic
Problematic field: trade_time_desc
Description: the base data in trade_time_desc is wrong. It reports identical trading hours for every contract of a given future and does not account for contract changes. For example, before a night session was introduced for some contracts, trades only occurred during the day; after the night session was added, its hours have also changed over time (e.g. for the oils and oilseeds contracts). The trade_time_desc returned by this interface is therefore incorrect: for instance, for periods before a contract had a night session, the returned data still shows trading hours that include one.

| open | 2018-11-20T06:08:00Z | 2018-11-20T14:12:08Z | https://github.com/waditu/tushare/issues/832 | [] | yangxiaobao87 | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 488 | Why fill diagonal with zeroes in get_matches_and_diffs | In this function, why are the diagonal elements filled with zeros?
When using, for example, SupConLoss with two embedding matrices that share the same labels (so the positive pair is always on the diagonal), the loss will always be 0.
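Note the guard in question is an identity check (`ref_labels is labels`), not an equality check; a simplified pure-Python sketch of the behavior:

```python
def get_matches(labels, ref_labels=None):
    # mirrors the logic in question, with nested lists instead of tensors
    if ref_labels is None:
        ref_labels = labels
    matches = [[int(a == b) for b in ref_labels] for a in labels]
    if ref_labels is labels:  # identity, not equality
        for i in range(len(labels)):
            matches[i][i] = 0  # self-pairs removed only in this case
    return matches

labels = [0, 1]
assert get_matches(labels)[0][0] == 0                 # same object: diagonal zeroed
assert get_matches(labels, list(labels))[0][0] == 1   # equal copy: diagonal kept
```

So when `ref_labels` is a distinct tensor, the diagonal is not zeroed even if the labels are equal.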
```
def get_matches_and_diffs(labels, ref_labels=None):
if ref_labels is None:
ref_labels = labels
labels1 = labels.unsqueeze(1)
labels2 = ref_labels.unsqueeze(0)
matches = (labels1 == labels2).byte()
diffs = matches ^ 1
if ref_labels is labels:
matches.fill_diagonal_(0)
return matches, diffs
``` | closed | 2022-06-15T11:44:03Z | 2022-06-21T13:18:10Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/488 | [
"question"
] | BrunoCoric | 1 |
pyppeteer/pyppeteer | automation | 148 | sys:1: RuntimeWarning: coroutine 'Page.xpath' was never awaited | async def main():
    browser = await pp.launch(headless=False)
    site = await browser.newPage()
    await site.goto('https://www.google.com/')
    time.sleep(3)
    # images = site.xpath("""//*[@id="gbw"]/div/div/div[1]/div[2]/a""")
    await site.click(site.xpath("""//*[@id="gbw"]/div/div/div[1]/div[2]/a"""))

asyncio.get_event_loop().run_until_complete(main())
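For anyone hitting this later: the warning means `site.xpath(...)` returned a coroutine that was never awaited. A self-contained illustration of the pattern (pyppeteer not required; `fake_xpath` is a stand-in):

```python
import asyncio

async def fake_xpath(expr):
    # stands in for pyppeteer's Page.xpath, which is also a coroutine
    return ["<element>"]

async def main():
    coro = fake_xpath("//a")    # calling without await gives a coroutine object
    assert asyncio.iscoroutine(coro)
    elements = await coro       # awaiting it yields the actual element list
    assert elements == ["<element>"]

asyncio.run(main())
```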
My code is throwing this error during "await site.click(site.xpath)"
I'm unsure how to fix this, any help? | open | 2020-07-08T07:03:00Z | 2020-07-19T22:53:34Z | https://github.com/pyppeteer/pyppeteer/issues/148 | [
"bug"
] | mutiny27 | 4 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 148 | [BUG] The Docker version doesn't work | ***On which platform did the error occur?***
Douyin
***Which endpoint produced the error?***
Web APP
***What input value was submitted?***
[6914948781100338440](https://www.douyin.com/video/6914948781100338440)
***Did you try again?***
Yes
***Did you check this project's README or the API documentation?***
Yes
| closed | 2023-02-04T14:51:44Z | 2023-02-05T08:17:08Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/148 | [
"BUG"
] | wowadz | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,843 | [Bug]: Getting error 128 | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
While trying to run the .sh file I get error 128

### Steps to reproduce the problem
I have no idea.
### What should have happened?
Maybe run the code
### What browsers do you use to access the UI ?
Other
### Sysinfo

### Console logs
```Shell

```
### Additional information
_No response_ | open | 2024-05-20T09:15:35Z | 2024-06-29T04:26:10Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15843 | [
"bug-report"
] | PrinceKaKKad | 3 |
FlareSolverr/FlareSolverr | api | 628 | [yggtorrent] (updating) FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser. | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-12-20T14:52:51Z | 2022-12-22T16:04:31Z | https://github.com/FlareSolverr/FlareSolverr/issues/628 | [
"duplicate",
"invalid"
] | Letweex | 5 |
microsoft/unilm | nlp | 1,686 | BEiT-3 indomain checkpoints split details | Hi, for my own research I'd like to use your Beit-3 indomain checkpoints - however, it's important to know for me on what exact splits of COCO this second stage of pre-training was done. Was it the old train split (83k images) or the new Karpathy split (113k images)? Thanks a lot in advance! | open | 2025-02-05T15:51:15Z | 2025-02-06T09:17:43Z | https://github.com/microsoft/unilm/issues/1686 | [] | tobiwiecz | 2 |
google-research/bert | tensorflow | 398 | Does training_batch_size affect model accuracy when fine-tuning? | Debating whether it is worth looking at implementing Horovod to use multiple GPUs. | open | 2019-01-26T02:28:04Z | 2019-01-26T02:28:04Z | https://github.com/google-research/bert/issues/398 | [] | echan00 | 0 |
ultralytics/ultralytics | machine-learning | 18,672 | Does YOLO-World version support complex queries for object detection? | ### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello Ultralytics Team,
I’m working on a project where I need to detect and describe objects in images using complex queries (e.g., "a building with a damaged roof and broken windows" or "a road completely submerged in water"). I’m considering using YOLO-World for this task and would like to confirm if the model supports such complex queries.
Specifically:
Can YOLO-World handle natural language prompts that describe multiple attributes of an object (e.g., "a damaged roof with broken windows")?
Does it support paragraph-level descriptions for object detection (e.g., "a flooded road with submerged vehicles and debris")?
Are there any limitations on the complexity or length of the text prompts?
If YOLO-World does not natively support complex queries, are there any recommended approaches or fine-tuning strategies to achieve this functionality?
Thank you for your time and assistance!
Best regards,
### Additional
_No response_ | open | 2025-01-14T04:01:28Z | 2025-02-14T00:19:57Z | https://github.com/ultralytics/ultralytics/issues/18672 | [
"question",
"Stale",
"detect"
] | loucif01 | 4 |
521xueweihan/HelloGitHub | python | 2,292 | [Open-source self-recommendation] regex-vis, a visual regex editor | ## Recommended Project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/Bowen7/regex-vis
<!-- Please choose one of (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning) -->
- Category: JS
<!-- Please describe what it does in about 20 characters, like an article title, so it is clear at a glance -->
- Project title: regex-vis, a visual regex editor
<!-- What is this project, what can it be used for, what are its features or what pain points does it solve, what scenarios does it suit, and what can beginners learn from it? Length 32-256 characters -->
- Project description: After a regular expression is entered, a visual diagram of it is generated; certain nodes in the diagram can then be selected for further editing; finally, the current regular expression can be tested.
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights:
  - Converts the input regular expression into a visual diagram
  - Supports both literal and string forms; the string form supports escaping with the `\` symbol
  - Selecting part of the diagram reveals the corresponding sub-expression
  - Editing the diagram regenerates the regular expression from it
  - The final regular expression can be tested, and a share link containing the test cases can be generated
- Screenshot:
  
- Follow-up plans:
  - Syntax highlighting for the regular expression in the input box
  - e2e tests
  - More language support (i18n)
| closed | 2022-07-21T14:15:50Z | 2022-07-28T01:23:58Z | https://github.com/521xueweihan/HelloGitHub/issues/2292 | [
"已发布",
"JavaScript 项目"
] | Bowen7 | 1 |
aiortc/aiortc | asyncio | 368 | Several examples broken when used against aiortc 0.9.28 | tl;dr I think you might need to ship new binaries to pip
The commit https://github.com/aiortc/aiortc/commit/31abde4c7f142527a2a59c76333aafe627d4b2c6 updates the example code.
The example README files suggest installing dependencies via pip. When I run the example code from github against the pip installed library, the extra `await` trips it up.
If the examples are installed by pip somewhere, then my assumptions are all wrong! But I can't see them anywhere. | closed | 2020-05-26T22:05:56Z | 2021-01-27T12:53:29Z | https://github.com/aiortc/aiortc/issues/368 | [] | alexbird | 3 |
hankcs/HanLP | nlp | 725 | How to recognize transliterated Japanese person names in Python | <!--
Notes and the version number are required, otherwise there will be no reply. If you hope for a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the following documentation and found no answer:
- [Homepage documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a voluntary community formed out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [ ] I put an x inside these brackets to confirm that the items above have been checked.
## Version
<1.5.2;master>
The current latest version is: 1.5.2
The version I am using is: 1.5.2
<!-- The items above are required; feel free below -->
## My question
<How to segment transliterated Japanese person names in Python>
I used the approach for calling HanLP from Python described at http://www.hankcs.com/nlp/python-calls-hanlp.html and successfully reproduced all the features mentioned in the article. How can I enable recognition of Japanese person names?
I am currently loading the com.hankcs.hanlp.recognition.nr.JapanesePersonRecognition class via JClass, but I don't know which method to use.
| closed | 2017-12-27T09:56:52Z | 2020-01-01T10:51:16Z | https://github.com/hankcs/HanLP/issues/725 | [
"ignored"
] | ZhuangAlliswell | 1 |
xinntao/Real-ESRGAN | pytorch | 255 | "Module Not Found" Google Colab | I faced this problem in Google Colab; yesterday it still worked. Are you experiencing the same problem?

| closed | 2022-02-14T06:17:00Z | 2022-02-14T07:52:12Z | https://github.com/xinntao/Real-ESRGAN/issues/255 | [] | TFebbry | 1 |
Significant-Gravitas/AutoGPT | python | 9,569 | Request for multi-arch docker image | ### Duplicates
- [x] I have searched the existing issues
### Summary 💡
It would be great if the developers could publish an official multi-arch Docker image. An official multi-arch image is a requirement for the Umbrel App Store (https://github.com/getumbrel/umbrel), an open-source home-server OS.
### Examples 🌈
I did deploy a multi-arch docker image, and it is working fine so far. (See https://hub.docker.com/repository/docker/impranshu/autogpt/general)
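For reference, the usual way to produce such an image is `docker buildx` (tag and platforms illustrative):

```shell
# illustrative: build and push an image for both amd64 and arm64 in one step
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t significantgravitas/autogpt:latest --push .
```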
### Motivation 🔦
I have also opened a PR based on this image, but it was closed by the maintainer of Umbrel, stating they need official multi-arch images for the app to be published( See https://github.com/getumbrel/umbrel-apps/pull/707/files)
| open | 2025-03-05T03:38:09Z | 2025-03-05T03:38:09Z | https://github.com/Significant-Gravitas/AutoGPT/issues/9569 | [] | IMPranshu | 0 |
tensorpack/tensorpack | tensorflow | 1,120 | How to do inference in GAN (Image2Image.py) | Hi there,
I am using example Image2Image.py super resolution. Like image classification tasks, I want to add an InferenceRunner in callbacks.

But it shows error like this:
KeyError: "The name 'InferenceTower/cost:0' refers to a Tensor which does not exist. The operation, 'InferenceTower/cost', does not exist in the graph."
I notice that GAN uses TowerTrainer. May I know how to use Inference runner under this setting? Thank you.
| closed | 2019-03-26T15:12:49Z | 2019-03-26T16:38:40Z | https://github.com/tensorpack/tensorpack/issues/1120 | [
"usage"
] | HongyangGao | 2 |
DistrictDataLabs/yellowbrick | matplotlib | 962 | Update Zenodo reference for 1.0 | Version 1.0 has been released, time to update our reference on Zenodo! | closed | 2019-08-29T01:35:24Z | 2019-08-29T15:16:10Z | https://github.com/DistrictDataLabs/yellowbrick/issues/962 | [] | rebeccabilbro | 1 |
wyfo/apischema | graphql | 271 | Unions break depending on order | I get an error when deserialising `Union[Literal, MyClass]` but not when deserialising `Union[MyClass, Literal]`.
It seems this error started some time after v0.15.7.
Example:
```python
from typing import Union, Literal
from dataclasses import dataclass
from apischema import deserialize
@dataclass
class Bar:
baz: int
deserialize(Union[Literal["foo"], Bar], {"baz": 1}) # this fails
deserialize(Union[Bar, Literal["foo"]], {"baz": 1}) # this works
```
Here's the traceback for the the call that fails:
```
Traceback (most recent call last):
File "/home/kheavey/anchorpy/throwaway.py", line 11, in <module>
deserialize(Union[Literal["foo"], Bar], {"baz": 1})
File "/home/kheavey/anchorpy/.venv/lib/python3.9/site-packages/apischema/utils.py", line 424, in wrapper
return wrapped(*args, **kwargs)
File "/home/kheavey/anchorpy/.venv/lib/python3.9/site-packages/apischema/deserialization/__init__.py", line 912, in deserialize
return deserialization_method(
File "/home/kheavey/anchorpy/.venv/lib/python3.9/site-packages/apischema/deserialization/__init__.py", line 698, in method
return deserialize_alt(data)
File "/home/kheavey/anchorpy/.venv/lib/python3.9/site-packages/apischema/deserialization/__init__.py", line 271, in method
return value_map[data]
TypeError: unhashable type: 'dict'
```
Here's what `value_map` and `data` look like:
```
(Pdb) value_map
{'foo': 'foo'}
(Pdb) data
{'baz': 1}
``` | closed | 2021-12-06T01:41:03Z | 2021-12-06T06:49:18Z | https://github.com/wyfo/apischema/issues/271 | [] | kevinheavey | 1 |
modin-project/modin | pandas | 6,601 | BUG: `sort_values` is destructive after `join` | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
abbreviations = pd.Series(['Major League Baseball', 'National Basketball Association'], index=['MLB', 'NBA'])
teams = pd.DataFrame({'name': ['Mariners', 'Lakers'] * 500, 'league_abbreviation': ['MLB', 'NBA'] * 500})
# This all works correctly (or seems to -- the sort_values breaks things below the surface)
joined = teams.set_index('league_abbreviation').join(abbreviations.rename('league_name'))
print(joined)
sort_values_result = joined.sort_values('league_name')
print(sort_values_result)
# This breaks!
print(joined)
```
### Issue Description
Calling `sort_values` has a destructive effect on the value of the dataframe. Some minimal debugging shows it has somehow lost the sorted column:
```python
> ~/src/modin/modin/core/dataframe/pandas/dataframe/dataframe.py(4028)to_pandas()
-> ErrorMessage.catch_bugs_and_request_email(
(Pdb) ll
4006 @lazy_metadata_decorator(apply_axis="both")
4007 def to_pandas(self):
4008 """
4009 Convert this Modin DataFrame to a pandas DataFrame.
4010
4011 Returns
4012 -------
4013 pandas.DataFrame
4014 """
4015 df = self._partition_mgr_cls.to_pandas(self._partitions)
4016 if df.empty:
4017 df = pandas.DataFrame(columns=self.columns, index=self.index)
4018 if len(df.columns) and self.has_materialized_dtypes:
4019 df = df.astype(self.dtypes)
4020 else:
4021 for axis, has_external_index in enumerate(
4022 ["has_materialized_index", "has_materialized_columns"]
4023 ):
4024 # no need to check external and internal axes since in that case
4025 # external axes will be computed from internal partitions
4026 if getattr(self, has_external_index):
4027 external_index = self.columns if axis else self.index
4028 -> ErrorMessage.catch_bugs_and_request_email(
4029 not df.axes[axis].equals(external_index),
4030 f"Internal and external indices on axis {axis} do not match.",
4031 )
4032 # have to do this in order to assign some potentially missing metadata,
4033 # the ones that were set to the external index but were never propagated
4034 # into the internal ones
4035 df = df.set_axis(axis=axis, labels=external_index, copy=False)
4036
4037 return df
(Pdb) df.axes[axis]
Index(['name'], dtype='object')
(Pdb) external_index
Index(['name', 'league_name'], dtype='object'
```
Note that simply changing `joined.sort_values` to `joined.copy().sort_values` fixes the problem in the example above.
I am guessing this does not actually have to do with joining, but probably is a result of these columns being in different partitions?
### Expected Behavior
The joined dataframe is unaffected by the `sort_values` call.
### Error Logs
<details>
```python-traceback
Traceback (most recent call last):
File "~/mambaforge/envs/modin/lib/python3.10/pdb.py", line 1723, in main
pdb._runscript(mainpyfile)
File "~/mambaforge/envs/modin/lib/python3.10/pdb.py", line 1583, in _runscript
self.run(statement)
File "~/mambaforge/envs/modin/lib/python3.10/bdb.py", line 598, in run
exec(cmd, globals, locals)
File "<string>", line 1, in <module>
File "~/src/modin/test_sort_values_join.py", line 18, in <module>
print(joined)
File "~/mambaforge/envs/modin/lib/python3.10/site-packages/ray/experimental/tqdm_ray.py", line 48, in safe_print
_print(*args, **kwargs)
File "~/src/modin/modin/logging/logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "~/src/modin/modin/pandas/base.py", line 3997, in __str__
return repr(self)
File "~/src/modin/modin/logging/logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "~/src/modin/modin/pandas/dataframe.py", line 246, in __repr__
result = repr(self._build_repr_df(num_rows, num_cols))
File "~/src/modin/modin/logging/logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "~/src/modin/modin/pandas/base.py", line 261, in _build_repr_df
return self.iloc[indexer]._query_compiler.to_pandas()
File "~/src/modin/modin/logging/logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "~/src/modin/modin/core/storage_formats/pandas/query_compiler.py", line 282, in to_pandas
return self._modin_frame.to_pandas()
File "~/src/modin/modin/logging/logger_decorator.py", line 129, in run_and_log
return obj(*args, **kwargs)
File "~/src/modin/modin/core/dataframe/pandas/dataframe/utils.py", line 501, in run_f_on_minimally_updated_metadata
result = f(self, *args, **kwargs)
File "~/src/modin/modin/core/dataframe/pandas/dataframe/dataframe.py", line 4028, in to_pandas
ErrorMessage.catch_bugs_and_request_email(
File "~/src/modin/modin/error_message.py", line 81, in catch_bugs_and_request_email
raise Exception(
Exception: Internal Error. Please visit https://github.com/modin-project/modin/issues to file an issue with the traceback and the command that caused this error. If you can't file a GitHub issue, please email bug_reports@modin.org.
Internal and external indices on axis 1 do not match.
```
</details>
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : ea8088af4cadfb76294e458e5095f262ca85fea9
python : 3.10.12.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-135-generic
Version : #152-Ubuntu SMP Wed Nov 23 20:19:22 UTC 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.23.0+108.gea8088af
ray : 2.6.1
dask : 2023.7.1
distributed : 2023.7.1
hdk : None
pandas dependencies
-------------------
pandas : 2.1.1
numpy : 1.25.1
pytz : 2023.3
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.2.1
Cython : None
pytest : 7.4.0
hypothesis : None
sphinx : 7.1.0
blosc : None
feather : 0.4.1
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : 2.9.6
jinja2 : 3.1.2
IPython : 8.14.0
pandas_datareader : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat: None
fastparquet : 2022.12.0
fsspec : 2023.6.0
gcsfs : None
matplotlib : 3.7.2
numba : None
numexpr : 2.8.4
odfpy : None
openpyxl : 3.1.2
pandas_gbq : 0.15.0
pyarrow : 12.0.1
pyreadstat : None
pyxlsb : None
s3fs : 2023.6.0
scipy : 1.11.1
sqlalchemy : 1.4.45
tables : 3.8.0
tabulate : None
xarray : None
xlrd : 2.0.1
zstandard : None
tzdata : 2023.3
qtpy : 2.3.1
pyqt5 : None
</details>
| closed | 2023-09-25T21:45:16Z | 2023-09-26T16:15:06Z | https://github.com/modin-project/modin/issues/6601 | [
"bug 🦗",
"P1"
] | zmbc | 1 |
microsoft/JARVIS | deep-learning | 84 | Got error: "Unable to locate package python3.8" | When I run `docker build .`, I get the error below:
```
Fetched 19.9 MB in 3s (5909 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package python3.8
E: Couldn't find any package by glob 'python3.8'
E: Couldn't find any package by regex 'python3.8'
The command '/bin/sh -c apt-get update && apt-get install -y python3.8 python3-pip python3-dev build-essential && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
```
The host is Ubuntu 16.04. | open | 2023-04-07T05:53:41Z | 2023-04-07T10:39:37Z | https://github.com/microsoft/JARVIS/issues/84 | [] | Clarence-pan | 1 |
google-research/bert | tensorflow | 372 | Two to Three mask word prediction at same sentence is very complex? | Predicting two to three masked words in the same sentence is also very complex.
How can I get good accuracy?
If I pre-train a BERT model on my own dataset with **masked_lm_prob=0.25** (https://github.com/google-research/bert#pre-training-with-bert), what will happen?
Thanks.
| open | 2019-01-18T05:48:06Z | 2019-02-11T07:10:39Z | https://github.com/google-research/bert/issues/372 | [] | MuruganR96 | 1 |
microsoft/qlib | deep-learning | 1,276 | HIST: Missing part of the code for generating stock2concept data | ## ❓ Questions and Help
Hello,
In the HIST algorithm, part of the code is missing: the part that generates the stock2concept data,
i.e., the code that generates examples/benchmarks/HIST/data/csi300_stock2concept.npy.
Please add it to the repository.
Thank you.
We sincerely suggest that you carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue. | closed | 2022-09-01T05:28:06Z | 2024-08-21T07:29:30Z | https://github.com/microsoft/qlib/issues/1276 | [
"question",
"stale"
] | smarkovichgolan | 4 |
babysor/MockingBird | deep-learning | 293 | python demo_toolbox.py -d D:\DATA\aidatatang_200zh\corpus\test throws an error | Warning: you do not have any of the recognized datasets in D:\DATA\aidatatang_200zh\corpus\test.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48
aidatatang_200zh/corpus/dev
aidatatang_200zh/corpus/test
aishell3/test/wav
magicdata/train
Feel free to add your own. You can still use the toolbox by recording samples yourself.
Why does the error say there are no recognizable datasets? I have unzipped all the files inside; please help. Also, should `<dataset_root>` point all the way down to the dataset's specific folder? I am using the aidatatang_200zh dataset. | closed | 2021-12-25T04:12:32Z | 2021-12-26T02:59:09Z | https://github.com/babysor/MockingBird/issues/293 | [] | leyangxing | 2 |
PaddlePaddle/ERNIE | nlp | 42 | How can I display the model's answers in dbqa? | Hello, the model seems great, but how can I display the model's answers in dbqa? I also did not see any code in the source that reads text_a and text_b when training the model, and shouldn't test.tsv, being the test set, have no label? | closed | 2019-03-19T08:00:11Z | 2019-06-27T03:30:51Z | https://github.com/PaddlePaddle/ERNIE/issues/42 | [] | ln23415 | 3 |
developmentseed/lonboard | jupyter | 1 | Separate into multiple widgets/layers? | The rendering API/options will be different based on the type of layer. Should you have a PointWidget, LineStringWidget, PolygonWidget, and then have `.get_fill_color` as an autocompletion-able attribute on only the `PolygonWidget`? And have like `create_widget(gdf)` as a top-level API that creates the table and then switches to create one of the widgets? | closed | 2023-09-25T05:10:45Z | 2023-10-04T00:26:43Z | https://github.com/developmentseed/lonboard/issues/1 | [] | kylebarron | 1 |
nvbn/thefuck | python | 707 | Reimplement cache | * read and parse a cache file only on first cache use;
* serialize and save to the cache file [atexit](https://docs.python.org/3/library/atexit.html);
* apply `@memoize` automatically;
* include "dependency" files' full paths in the key, so we can have different cache entries for different `package.json` files, etc.;
* include arguments in the key. | closed | 2017-10-10T03:21:09Z | 2017-12-06T19:22:12Z | https://github.com/nvbn/thefuck/issues/707 | [
"next release"
] | nvbn | 0 |
microsoft/nni | pytorch | 4,969 | detail page empty with tensorflow tutorial code because of the "None" | 
tutorial links:
https://nni.readthedocs.io/en/stable/tutorials/hpo_quickstart_tensorflow/main.html
https://nni.readthedocs.io/zh/stable/tutorials/hpo_quickstart_tensorflow/main.html
No one would have thought the problem was here.
platform: win10
nni version: 2.8
tensorflow version: 2.7.0
python version: 3.9.7 | closed | 2022-06-28T23:31:28Z | 2022-09-05T08:21:24Z | https://github.com/microsoft/nni/issues/4969 | [
"fixed downstream"
] | jax11235 | 2 |
microsoft/MMdnn | tensorflow | 284 | Input Dimension Error When Converting PyTorch ResNet to IR | # Environments
Platform (like ubuntu 16.04/win10): CentOS Linux release 7.4.1708 (Core)
Python version: Python 2.7.5
Source framework with version (like Tensorflow 1.4.1 with GPU): PyTorch '0.4.0'
Destination framework with version (like CNTK 2.3 with GPU): IR (and to TensorFlow 1.4.0 with GPU)
Pre-trained model path (webpath or webdisk path): torchvision.models (with avgpool and fc substituted)
```python
model.avgpool = nn.AvgPool2d(kernel_size=(7, 13))
model.fc = nn.Linear(512 * resnet_expansions[args.model], args.num_classes)
model = model.cuda()
```
Running scripts:
`mmtoir -f pytorch -d resnet50_ir --inputShape 999 999 999 999 -n resnet50_best.pth`
# Problem
I got `RuntimeError: input has less dimensions than expected` when converting a PyTorch ResNet to IR.
To avoid the "less dimensions" error, I tried more dimensions and higher values for each dimension, but I still got the error.
I tried:
- According to the NHWC format,
- 32 224 336 3
- 32 336 224 3
- According to the NCHW format,
- 32 3 224 336
- 32 3 336 224
- high values,
- 999 999 999 999
- more dimensions,
- 999 999 999 999 999 999
By the way, the forward/backward pass in my scripts has no problem.
# Solution
Switched from the newest version to the stable version. | closed | 2018-07-03T09:56:38Z | 2018-07-04T04:33:27Z | https://github.com/microsoft/MMdnn/issues/284 | [] | cheolho | 0 |
mkhorasani/Streamlit-Authenticator | streamlit | 233 | All users being allowed to register after "pre-authorized" list becomes empty | Assume that the "pre-authorized" parameter in config.yaml contains 10 email IDs. Now, if all 10 users (defined in the list) complete their registration, their email IDs get deleted from "pre-authorized" and the **register_user** method starts allowing all users to register, thereby defeating the purpose of this parameter.
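For illustration, here is a hypothetical guard capturing the behaviour I would expect (a sketch with made-up names, not the library's actual code):

```python
from typing import List, Optional


def may_register(email: str, pre_authorized: Optional[List[str]]) -> bool:
    """Hypothetical guard: once a pre-authorized list has been configured,
    an empty list should mean "nobody may register", not "everybody may"."""
    if pre_authorized is None:  # feature never configured: registration stays open
        return True
    return email in pre_authorized  # empty list: always False


print(may_register("user@example.com", []))  # False
```

With such a guard, draining the list would close registration instead of opening it to everyone.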
Please correct me if I got this wrong. Thank you! | closed | 2024-10-22T09:39:34Z | 2025-02-25T19:45:47Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/233 | [
"bug"
] | pallav445 | 5 |
amisadmin/fastapi-amis-admin | sqlalchemy | 21 | Which dependency versions are required? Please provide a requirements.txt | pydantic: v1.6.2
NameError: Field name "fields" shadows a BaseModel attribute; use a different field name with "alias='fields'".
| closed | 2022-05-03T09:16:20Z | 2022-05-06T03:05:28Z | https://github.com/amisadmin/fastapi-amis-admin/issues/21 | [] | littleforce163 | 2 |
rthalley/dnspython | asyncio | 1,176 | Refactoring socket creation code to facilitate connection reuse | I am working on connection reuse in dns_exporter. I want to open a socket to, say, a DoT server and use it for many lookups without having to do the whole TCP+TLS handshake for every query. dnspython supports this by letting you pass a socket to, for example, `dns.query.tls()` via the `sock` argument. To create that socket, I currently have to import and copy a bunch of the socket creation logic from dnspython into dns_exporter.
It would help a lot if the socket creation code in the query functions could be refactored into separate functions. For example, `dns.query.tls()` could call `dns.query.get_tls_socket()` when the `sock` argument is not provided, and `dns.query.get_tls_socket()` could also be called by the implementer for connection reuse purposes. This would make it trivial to use dnspython with, say, a DoT socket that performs many lookups but pays the TCP+TLS handshake penalty only once.
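To illustrate the kind of helper I mean, here is a minimal sketch (the helper name and signature are made up; only the `sock` argument of `dns.query.tls()` mentioned above is the real API):

```python
import socket
import ssl


def make_tls_socket(server: str, port: int = 853, timeout: float = 5.0) -> ssl.SSLSocket:
    """Hypothetical helper: do the TCP + TLS handshake once and return a
    socket that could then be passed to dns.query.tls(..., sock=...) for
    as many queries as desired."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((server, port), timeout=timeout)
    return ctx.wrap_socket(raw, server_hostname=server)
```

The caller would create the socket once and reuse it for every query, paying the handshake cost a single time.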
I am happy to help implement this, but I wanted to gauge your interest before writing too much code. I have a local branch with a working example for DoT, and it isn't that big of a diff; it almost makes the code cleaner to have the socket creation logic in a separate function. Maybe more testable too. Let me know what you think, and if you agree this would be good to have in dnspython then please let me know how you wish to proceed regarding implementation details, who does what, etc.
**Context (please complete the following information):**
- dnspython 2.7.0
- Python 3.12
- OS: debian
| open | 2025-01-17T08:57:03Z | 2025-01-27T12:20:14Z | https://github.com/rthalley/dnspython/issues/1176 | [
"Enhancement Request"
] | tykling | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 455 | Error - AttributeError: 'Colorbar' object has no attribute 'set_clim' | Hello everyone! Every time I run it, I get the following error:
```
inference.py", line 174, in plot_embedding_as_heatmap
cbar.set_clim(*color_range)
AttributeError: 'Colorbar' object has no attribute 'set_clim'
```
I can comment out the line and everything works fine, but I'd be glad to fix this properly.
The specific line is this last one:
```
cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04)
cbar.set_clim(*color_range)
```
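For anyone else hitting this: in recent matplotlib releases the method was removed from `Colorbar`, and calling `set_clim` on the mappable instead keeps the colorbar in sync. A minimal sketch (using the headless Agg backend; adapt to your plotting code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
mappable = ax.imshow(np.zeros((4, 4)))
cbar = plt.colorbar(mappable, ax=ax, fraction=0.046, pad=0.04)
# Instead of cbar.set_clim(*color_range), set the limits on the mappable:
mappable.set_clim(0.0, 1.0)
print(mappable.get_clim())  # (0.0, 1.0)
```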
Could you help me? Thanks a lot! | closed | 2020-07-28T14:19:23Z | 2020-10-26T07:29:14Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/455 | [
"dependencies"
] | barubbabba123 | 4 |
google-research/bert | nlp | 426 | inference time on CPU take so long | I fine-tuned a classification model using BERT; however, the inference time on CPU is very long.
When I run the inference process, it takes nearly 15 seconds for one call (the 15 s is only for prediction, not for loading the model). Below is the code for the inference:
> print("time 10: ", datetime.datetime.now())
> result = estimator.predict(input_fn=predict_input_fn)
> print("time 11: ", datetime.datetime.now())
> predicts = []
> i = 0
> for prediction in result:
> print("time 11-1: ", datetime.datetime.now())
> probabilities = [p for p in prediction["probabilities"]]
and here is the output of time:
> time 10: 2019-02-11 13:06:25.594185
> time 11: 2019-02-11 13:06:25.594229
> time 11-1: 2019-02-11 13:06:39.175300
How can we serve faster?
Thank you very much.
| open | 2019-02-11T06:14:30Z | 2019-04-24T08:39:52Z | https://github.com/google-research/bert/issues/426 | [] | ntson2002 | 7 |
encode/httpx | asyncio | 2,560 | Website is down | From https://pypi.org/project/http3/ we reach this repository and www.encode.io/http3, which returns a 404. | closed | 2023-02-01T13:34:22Z | 2023-02-09T17:53:15Z | https://github.com/encode/httpx/issues/2560 | [] | nmoreaud | 2 |
zihangdai/xlnet | nlp | 204 | Is it a BUG in run_race.py ??? | OK, so I was really curious about how the input ids of the RACE dataset would look, so I inserted a print around line 205 of run_race.py,
like this:
```
cur_input_ids = tokens
cur_input_mask = [0] * len(cur_input_ids)
print(cur_input_ids)
```
And the printed results for ONE question was like:
[context tokens, choice_1_tokens]
[context tokens, choice_1_tokens, [SEP], choice_2_tokens]
[context tokens, choice_1_tokens, [SEP], choice_2_tokens, [SEP], choice_3_tokens]
[context tokens, choice_1_tokens, [SEP], choice_2_tokens, [SEP], choice_3_tokens, [SEP], choice_4_tokens]
I was expecting something like
[context tokens, choice_1_tokens]
[context tokens, choice_2_tokens]
[context tokens, choice_3_tokens]
[context tokens, choice_4_tokens]
Is this a bug, or is it designed to be like this? | open | 2019-08-05T22:37:50Z | 2019-08-05T22:42:51Z | https://github.com/zihangdai/xlnet/issues/204 | [] | JMistral | 0 |
strawberry-graphql/strawberry | fastapi | 3,444 | Broken documentation examples in page https://strawberry.rocks/docs/guides/dataloaders | Example within https://strawberry.rocks/docs/guides/dataloaders#usage-with-context is broken and can't be run due to invalid imports. | closed | 2024-04-10T12:15:52Z | 2025-03-20T15:56:41Z | https://github.com/strawberry-graphql/strawberry/issues/3444 | [] | tejusp | 6 |
ycd/manage-fastapi | fastapi | 10 | Manage FastAPI August-September 2020 Roadmap | <h1 align="center">:hammer: Roadmap August-September 2020 :hammer:</h1>
## Goals
- Adding more templates for databases and object relational mappers.
- Instead of creating the database with async SQL, the database choice will now be up to the user.
Example:
```
manage-fastapi startproject myproject
```
The command we ran above will ask the user something like this to select a database:
```
Select a database:
[0] Postgresql, sqlite3, mysql
[1] Tortoise ORM
[2] Peewee
[3] MongoDB, Couchbase
```
Each selection will have a unique database template.
## New Features
**`runserver`**
Also thinking about **`showmodels`** to show all models; this command will come with an option for the request method, like:
`showmodels --get`
`showmodels --post`
| closed | 2020-08-11T23:48:43Z | 2020-08-30T00:50:02Z | https://github.com/ycd/manage-fastapi/issues/10 | [
"enhancement",
"help wanted"
] | ycd | 12 |
modelscope/data-juicer | streamlit | 105 | [MM] analysis for list data (such as list of sizes of images) | closed | 2023-11-29T04:10:06Z | 2023-11-30T06:23:13Z | https://github.com/modelscope/data-juicer/issues/105 | [
"enhancement",
"dj:multimodal"
] | HYLcool | 0 | |
zappa/Zappa | flask | 778 | [Migrated] -bash: zappa: command not found | Originally from: https://github.com/Miserlou/Zappa/issues/1921 by [3lonious](https://github.com/3lonious)
## Context
I solved my issue by uninstalling pip and Python and re-setting up the environment and installation, etc.
| closed | 2021-02-20T12:42:18Z | 2024-04-13T18:37:20Z | https://github.com/zappa/Zappa/issues/778 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
MilesCranmer/PySR | scikit-learn | 168 | [BUG] module 'sympy.core.core' has no attribute 'numbers' | **Describe the bug**
Can't install.
I ran:
```bash
conda install -c conda-forge pysr
python -c 'import pysr; pysr.install()'
```
and got:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/katherinepaseman/anaconda3/lib/python3.8/site-packages/pysr/__init__.py", line 12, in <module>
from .export_jax import sympy2jax
File "/Users/katherinepaseman/anaconda3/lib/python3.8/site-packages/pysr/export_jax.py", line 52, in <module>
sympy.core.numbers.Half: "(lambda: 0.5)",
AttributeError: module 'sympy.core.core' has no attribute 'numbers'
```
**Version:**
- OS: macOS 13.4
- Does the bug still appear with the latest version of PySR? - yes
| open | 2022-07-26T19:03:59Z | 2023-04-20T06:05:49Z | https://github.com/MilesCranmer/PySR/issues/168 | [
"bug"
] | paseman | 2 |
MycroftAI/mycroft-core | nlp | 2,880 | mycroft.conf silently overwritten | **Describe the bug**
When there's an error in mycroft.conf, it is silently overwritten. This is bad because user settings should not be permanently deleted without consent. Instead, logs and/or the output of mycroft-start should show the error.
**To Reproduce**
Try the following mycroft.conf:
```
{
"max_allowed_core_version": 20.8,
"listener": {
"wake_word": "Lazarus",
"device_name": "default"
"energy_ratio": 1.5
},
"hotwords": {
"Lazarus": {
"module": "pocketsphinx",
"phonemes": "L AE Z ER AH S .",
}
}
}
```
Note the missing comma after "default" and incorrect use of the energy ratio parameter.
After running `mycroft-start restart all`, it is overwritten with the following:
```
{
"max_allowed_core_version": 20.8
}
```
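A loader that guards against this could look like the following minimal sketch (a hypothetical helper, not mycroft-core's actual code):

```python
import json
import shutil
from pathlib import Path


def load_config(path: Path) -> dict:
    """Parse a JSON config; on error, keep a backup and fail loudly
    instead of silently replacing the user's settings."""
    try:
        return json.loads(path.read_text())
    except json.JSONDecodeError as err:
        backup = path.with_suffix(path.suffix + ".old")
        shutil.copy(path, backup)
        raise SystemExit(
            f"Mycroft failed to start because of an error in {path.name}: {err}. "
            f"The original file was preserved as {backup.name}."
        )
```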
**Expected behavior**
One of the following:
"Mycroft failed to start because of an error in mycroft.conf."
or
The config file is copied to `mycroft.conf.old` (or `mycroft.conf.old.1`, etc.) and `mycroft.conf` is overwritten with the following:
```
# The previous mycroft.conf contained errors and was moved to mycroft.conf.old.
{
"max_allowed_core_version": 20.8
}
``` | closed | 2021-04-05T13:04:27Z | 2022-03-07T00:33:11Z | https://github.com/MycroftAI/mycroft-core/issues/2880 | [
"bug"
] | david-morris | 10 |
dadadel/pyment | numpy | 88 | Not working on async functions | Only works with regular functions, not async declared functions. | closed | 2020-08-31T19:08:22Z | 2021-02-22T22:31:13Z | https://github.com/dadadel/pyment/issues/88 | [] | marcodelmoral | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,236 | remove select().c / .columns, completely. no trace | I thought we already removed this in 2.0 but we didn't. Erase it completely for 2.1 please | closed | 2023-08-15T01:51:14Z | 2024-11-18T14:25:12Z | https://github.com/sqlalchemy/sqlalchemy/issues/10236 | [
"task",
"high priority",
"sql"
] | zzzeek | 1 |
BeanieODM/beanie | asyncio | 992 | [BUG] - Beanie migrations run throws no module named 'some_document' | **Describe the bug**
I tried running a migration that follows the [guideline](https://beanie-odm.dev/tutorial/migrations/), but when I run the migration it fails.
I tried putting it in various directory levels (I'm using FastAPI, so I tried the root, src, and inside the package holding the document I want to import and run the migration against).
**To Reproduce**
```shell
beanie migrate -uri 'mongodb://user:pwd@localhost:27017' -db 'some_db' -p src/models/primary --distance 1 --no-use-transaction
# as well as:
beanie migrate -uri 'mongodb://user:pwd@localhost:27017/some_db' -p src/models/primary --distance 1 --no-use-transaction
```
**Expected behavior**
The migration runs with no errors.
| closed | 2024-08-08T08:55:33Z | 2024-10-16T02:41:35Z | https://github.com/BeanieODM/beanie/issues/992 | [
"Stale"
] | danielxpander | 3 |
pydantic/logfire | pydantic | 493 | Logging to multiple logfire projects simultaneously | ### Question
Is there any mechanism to log to multiple logfire projects simultaneously from the same app?
To give you an example:
I have a backend service with an associated logfire project (my_backend_logfire_proj)...
But for whatever reason I also want to log certain specific events that occur in the same backend to a different logfire project. | closed | 2024-10-10T21:00:09Z | 2024-10-17T17:04:17Z | https://github.com/pydantic/logfire/issues/493 | [
"Question"
] | Mumbawa | 3 |
strawberry-graphql/strawberry | django | 3,614 | `TypeError` in Python 3.8 (regression) |
## Describe the Bug
The following line raises a `TypeError` in Python 3.8:
https://github.com/strawberry-graphql/strawberry/blame/54b8a49198bb2f4b2dfca367fa4be52124ee0aee/strawberry/http/async_base_view.py#L236
```
TypeError: 'type' object is not subscriptable
```
This is because `asyncio.Queue` does not support subscripting in Python 3.8.
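A minimal illustration of the failure mode and a version-agnostic workaround: quoting the annotation defers its evaluation, so it is safe on 3.8 as well (a sketch, not necessarily the fix strawberry shipped):

```python
import asyncio
import sys

if sys.version_info >= (3, 9):
    _ = asyncio.Queue[str]  # fine on 3.9+; raises TypeError on 3.8

# A quoted annotation is never evaluated at runtime, so this line is
# safe on every supported version (as is `from __future__ import annotations`):
def make_queue() -> "asyncio.Queue[str]":
    return asyncio.Queue()

async def roundtrip(item: str) -> str:
    q = make_queue()
    await q.put(item)
    return await q.get()

print(asyncio.run(roundtrip("hello")))  # hello
```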
## System Information
- Operating system: Mac, Linux
- Strawberry version (if applicable): 0.239.1
## Additional Context
This bug was discovered as it causes [our Strawberry test suite to fail in Python 3.8](https://github.com/getsentry/sentry-python/pull/3491). This is a regression because the same test suite previously passed CI. | closed | 2024-09-03T08:12:25Z | 2025-03-20T15:56:51Z | https://github.com/strawberry-graphql/strawberry/issues/3614 | [
"bug"
] | szokeasaurusrex | 1 |
microsoft/nni | machine-learning | 5,678 | gpuIndices | **Describe the issue**: Hello everyone, I am a newbie with NNI. I would like to ask about the difference between `gpuIndices` in `tuner` and in `localConfig`. For example, I have one GPU (an NVIDIA GeForce RTX 3060) that I want to use to run NNI; how should I set `gpuIndices` in `tuner` and `localConfig`? Thanks!
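For context, in NNI's v1-style YAML the two settings live in different places: `localConfig.gpuIndices` restricts which GPUs trial jobs may use, while `tuner.gpuIndices` only matters if the tuner process itself needs a GPU. A sketch for a single-GPU machine (field names taken from older NNI docs; please verify against your NNI version):

```yaml
trialConcurrency: 1
trainingServicePlatform: local
tuner:
  builtinTunerName: TPE
  # only needed if the tuner itself runs on the GPU:
  # gpuIndices: "0"
trial:
  command: python3 main.py
  codeDir: .
  gpuNum: 1
localConfig:
  useActiveGpu: true
  gpuIndices: "0"  # trial jobs may only use GPU 0
```

With a single RTX 3060, this pins trials (and, if uncommented, the tuner) to GPU index 0.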
**Environment**:
- NNI version: 2.2
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu
- Python version: 3.6
- PyTorch/TensorFlow version: PyTorch 1.4.0
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No
 | closed | 2023-09-11T09:22:20Z | 2023-10-06T11:24:24Z | https://github.com/microsoft/nni/issues/5678 | [] | Delong-Zhu | 0 |
AutoGPTQ/AutoGPTQ | nlp | 499 | [BUG] qwen-14B int8 inference slow | After quantizing the qwen-14b model with int8, the first-word response time is much slower than with either the unquantized or the int4-quantized model.
The first-word response time after int8 quantization is 2 s, while for the int4-quantized model it is 300 ms. What is the reason for this?
auto-gptq==0.4.2, transformers==4.31.0 | open | 2023-12-28T08:02:49Z | 2023-12-28T08:02:49Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/499 | [
"bug"
] | Originhhh | 0 |
yt-dlp/yt-dlp | python | 12,364 | Can't download MP3 from YouTube | ### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Can't download MP3 from YouTube
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp 2025.01.26
``` | closed | 2025-02-14T15:46:35Z | 2025-02-14T20:30:17Z | https://github.com/yt-dlp/yt-dlp/issues/12364 | [
"incomplete"
] | TARO9547 | 4 |
public-apis/public-apis | api | 3,559 | Include usage of API specifications | First of all, fantastic list!
I think it would be great to include the type of API along with any specifications they follow, e.g. Swagger/OpenAPI (which version), AsyncAPI, GraphQL, etc.
I find myself needing examples of APIs that use each of these, and having that as a column in the list would be a great help!
Thanks! | closed | 2023-07-04T22:48:29Z | 2023-08-14T00:41:39Z | https://github.com/public-apis/public-apis/issues/3559 | [
"enhancement"
] | gregsdennis | 3 |
paperless-ngx/paperless-ngx | machine-learning | 7,530 | [BUG] Error message after uploading any PDF-File "import nltk" | ### Description
I get this error message whenever I try to upload any PDF:

```
Rezept Korsett T-Shirts.pdf
Rezept Korsett T-Shirts.pdf: The following error occurred while storing document Rezept Korsett T-Shirts.pdf after parsing: ********************************************************************** Resource [93mpunkt_tab[0m not found. Please use the NLTK Downloader to obtain the resource: [31m>>> import nltk >>> nltk.download('punkt_tab') [0m For more information see: https://www.nltk.org/data.html Attempted to load [93mtokenizers/punkt_tab/german/[0m Searched in: - PosixPath('/usr/share/nltk_data') **********************************************************************
```
### Steps to reproduce
1. Upload a PDF
2. Get the error message
### Webserver logs
```bash
[2024-08-23 14:51:25,777] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-7gcphg7i
[2024-08-23 14:51:25,778] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: Rezept Korsett T-Shirts.pdf: The following error occurred while storing document Rezept Korsett T-Shirts.pdf after parsing:
**********************************************************************
Resource [93mpunkt_tab[0m not found.
Please use the NLTK Downloader to obtain the resource:
[31m>>> import nltk
>>> nltk.download('punkt_tab')
[0m
For more information see: https://www.nltk.org/data.html
Attempted to load [93mtokenizers/punkt_tab/german/[0m
Searched in:
- PosixPath('/usr/share/nltk_data')
**********************************************************************
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 327, in main_wrap
raise exc_info[1]
File "/usr/src/paperless/src/documents/consumer.py", line 670, in run
document_consumption_finished.send(
File "/usr/local/lib/python3.11/site-packages/django/dispatch/dispatcher.py", line 176, in send
return [
^
File "/usr/local/lib/python3.11/site-packages/django/dispatch/dispatcher.py", line 177, in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/signals/handlers.py", line 150, in set_document_type
potential_document_type = matching.match_document_types(document, classifier)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/matching.py", line 61, in match_document_types
pred_id = classifier.predict_document_type(document.content) if classifier else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/classifier.py", line 424, in predict_document_type
X = self.data_vectorizer.transform([self.preprocess_content(content)])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/classifier.py", line 386, in preprocess_content
words: list[str] = word_tokenize(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/nltk/tokenize/__init__.py", line 142, in word_tokenize
sentences = [text] if preserve_line else sent_tokenize(text, language)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/nltk/tokenize/__init__.py", line 119, in sent_tokenize
tokenizer = _get_punkt_tokenizer(language)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/nltk/tokenize/__init__.py", line 105, in _get_punkt_tokenizer
return PunktTokenizer(language)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/nltk/tokenize/punkt.py", line 1744, in __init__
self.load_lang(lang)
File "/usr/local/lib/python3.11/site-packages/nltk/tokenize/punkt.py", line 1749, in load_lang
lang_dir = find(f"tokenizers/punkt_tab/{lang}/")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/nltk/data.py", line 579, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource [93mpunkt_tab[0m not found.
Please use the NLTK Downloader to obtain the resource:
[31m>>> import nltk
>>> nltk.download('punkt_tab')
[0m
For more information see: https://www.nltk.org/data.html
Attempted to load [93mtokenizers/punkt_tab/german/[0m
Searched in:
- PosixPath('/usr/share/nltk_data')
**********************************************************************
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 149, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 733, in run
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 304, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: Rezept Korsett T-Shirts.pdf: The following error occurred while storing document Rezept Korsett T-Shirts.pdf after parsing:
**********************************************************************
Resource [93mpunkt_tab[0m not found.
Please use the NLTK Downloader to obtain the resource:
[31m>>> import nltk
>>> nltk.download('punkt_tab')
[0m
For more information see: https://www.nltk.org/data.html
Attempted to load [93mtokenizers/punkt_tab/german/[0m
Searched in:
- PosixPath('/usr/share/nltk_data')
**********************************************************************
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.5
### Host OS
Linux-4.4.302+-x86_64-with-glibc2.36
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.11.5",
"server_os": "Linux-4.4.302+-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 15352618299392,
"available": 4169358458880
},
"database": {
"type": "mysql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "documents.1052_document_transaction_id",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-08-23T00:00:12.820137+02:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-08-23T13:05:01.097417Z",
"classifier_error": null
}
}
```
### Browser
Chrome
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-23T13:07:31Z | 2024-09-24T03:07:58Z | https://github.com/paperless-ngx/paperless-ngx/issues/7530 | [
"duplicate",
"not a bug"
] | DerP4si | 18 |
oegedijk/explainerdashboard | plotly | 297 | shap_values should be 2d, instead shape=(200, 21, 2)! | I am running the sample code exactly as given here https://github.com/oegedijk/explainerdashboard, using the Titanic data source.
I am running into the error "shap_values should be 2d, instead shape=(200, 21, 2)!".
The full error trace is attached. Can anyone please help me understand why I am getting this error and how I can resolve it?
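Newer versions of shap return a 3-D array of shape (n_samples, n_features, n_classes) for binary classifiers, while this assertion expects a 2-D array — which would explain the (200, 21, 2) shape. One workaround (an assumption on my part, not an official fix from the library) is to slice out the positive-class values before handing them to the explainer. Sketched here in plain Python so it runs without shap installed; with a real numpy array the equivalent slice is `shap_values[:, :, 1]`:

```python
def select_positive_class(shap_values_3d):
    """Collapse per-class SHAP output of shape (n_samples, n_features, 2)
    to the positive-class slice of shape (n_samples, n_features)."""
    return [[feature_pair[1] for feature_pair in row] for row in shap_values_3d]

# tiny stand-in for a (2 samples, 3 features, 2 classes) SHAP array
vals = [
    [[-0.1, 0.1], [-0.2, 0.2], [-0.3, 0.3]],
    [[0.4, -0.4], [0.5, -0.5], [0.6, -0.6]],
]
flat = select_positive_class(vals)
print(flat)  # [[0.1, 0.2, 0.3], [-0.4, -0.5, -0.6]]
```

For binary classifiers the two class slices are mirror images of each other, so keeping only the positive class loses no information.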
`AssertionError Traceback (most recent call last)
Cell In[7], line 12
1 explainer = ClassifierExplainer(model, X_test, y_test,
2 cats=['Deck', 'Embarked',
3 {'Gender': ['Sex_male', 'Sex_female', 'Sex_nan']}],
(...)
9 target = "Survival", # defaults to y.name
10 )
---> 12 db = ExplainerDashboard(explainer,
13 title="Titanic Explainer", # defaults to "Model Explainer"
14 shap_interaction=False, # you can switch off tabs with bools
15 )
16 db.run(port=8050)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\dashboards.py:803, in ExplainerDashboard.__init__(self, explainer, tabs, title, name, description, simple, hide_header, header_hide_title, header_hide_selector, header_hide_download, hide_poweredby, block_selector_callbacks, pos_label, fluid, mode, width, height, bootstrap, external_stylesheets, server, url_base_pathname, routes_pathname_prefix, requests_pathname_prefix, responsive, logins, port, importances, model_summary, contributions, whatif, shap_dependence, shap_interaction, decision_trees, **kwargs)
801 if isinstance(tabs, list):
802 tabs = [self._convert_str_tabs(tab) for tab in tabs]
--> 803 self.explainer_layout = ExplainerTabsLayout(
804 explainer,
805 tabs,
806 title,
807 description=self.description,
808 **update_kwargs(
809 kwargs,
810 header_hide_title=self.header_hide_title,
811 header_hide_selector=self.header_hide_selector,
812 header_hide_download=self.header_hide_download,
813 hide_poweredby=self.hide_poweredby,
814 block_selector_callbacks=self.block_selector_callbacks,
815 pos_label=self.pos_label,
816 fluid=fluid,
817 ),
818 )
819 else:
820 tabs = self._convert_str_tabs(tabs)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\dashboards.py:119, in ExplainerTabsLayout.__init__(self, explainer, tabs, title, name, description, header_hide_title, header_hide_selector, header_hide_download, hide_poweredby, block_selector_callbacks, pos_label, fluid, **kwargs)
116 self.fluid = fluid
118 self.selector = PosLabelSelector(explainer, name="0", pos_label=pos_label)
--> 119 self.tabs = [
120 instantiate_component(tab, explainer, name=str(i + 1), **kwargs)
121 for i, tab in enumerate(tabs)
122 ]
123 assert (
124 len(self.tabs) > 0
125 ), "When passing a list to tabs, need to pass at least one valid tab!"
127 self.register_components(*self.tabs)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\dashboards.py:120, in <listcomp>(.0)
116 self.fluid = fluid
118 self.selector = PosLabelSelector(explainer, name="0", pos_label=pos_label)
119 self.tabs = [
--> 120 instantiate_component(tab, explainer, name=str(i + 1), **kwargs)
121 for i, tab in enumerate(tabs)
122 ]
123 assert (
124 len(self.tabs) > 0
125 ), "When passing a list to tabs, need to pass at least one valid tab!"
127 self.register_components(*self.tabs)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\dashboard_methods.py:890, in instantiate_component(component, explainer, name, **kwargs)
884 kwargs = {
885 k: v
886 for k, v in kwargs.items()
887 if k in init_argspec.args + init_argspec.kwonlyargs
888 }
889 if "name" in init_argspec.args + init_argspec.kwonlyargs:
--> 890 component = component(explainer, name=name, **kwargs)
891 else:
892 print(
893 f"ExplainerComponent {component} does not accept a name parameter, "
894 f"so cannot assign name='{name}': "
(...)
899 "cluster will generate its own random uuid name!"
900 )
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\dashboard_components\composites.py:545, in IndividualPredictionsComposite.__init__(self, explainer, title, name, hide_predindexselector, hide_predictionsummary, hide_contributiongraph, hide_pdp, hide_contributiontable, hide_title, hide_selector, index_check, **kwargs)
538 self.summary = RegressionPredictionSummaryComponent(
539 explainer, hide_selector=hide_selector, **kwargs
540 )
542 self.contributions = ShapContributionsGraphComponent(
543 explainer, hide_selector=hide_selector, **kwargs
544 )
--> 545 self.pdp = PdpComponent(
546 explainer, name=self.name + "3", hide_selector=hide_selector, **kwargs
547 )
548 self.contributions_list = ShapContributionsTableComponent(
549 explainer, hide_selector=hide_selector, **kwargs
550 )
552 self.index_connector = IndexConnector(
553 self.index,
554 [self.summary, self.contributions, self.pdp, self.contributions_list],
555 explainer=explainer if index_check else None,
556 )
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\dashboard_components\overview_components.py:639, in PdpComponent.__init__(self, explainer, title, name, subtitle, hide_col, hide_index, hide_title, hide_subtitle, hide_footer, hide_selector, hide_popout, hide_dropna, hide_sample, hide_gridlines, hide_gridpoints, hide_cats_sort, index_dropdown, feature_input_component, pos_label, col, index, dropna, sample, gridlines, gridpoints, cats_sort, description, **kwargs)
636 self.index_name = "pdp-index-" + self.name
638 if self.col is None:
--> 639 self.col = self.explainer.columns_ranked_by_shap()[0]
641 if self.feature_input_component is not None:
642 self.exclude_callbacks(self.feature_input_component)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\explainers.py:86, in insert_pos_label.<locals>.inner(self, *args, **kwargs)
84 else:
85 kwargs.update(dict(pos_label=self.pos_label))
---> 86 return func(self, **kwargs)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\explainers.py:1318, in BaseExplainer.columns_ranked_by_shap(self, pos_label)
1306 @insert_pos_label
1307 def columns_ranked_by_shap(self, pos_label=None):
1308 """returns the columns of X, ranked by mean abs shap value
1309
1310 Args:
(...)
1316
1317 """
-> 1318 return self.mean_abs_shap_df(pos_label).Feature.tolist()
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\explainers.py:86, in insert_pos_label.<locals>.inner(self, *args, **kwargs)
84 else:
85 kwargs.update(dict(pos_label=self.pos_label))
---> 86 return func(self, **kwargs)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\explainers.py:3128, in ClassifierExplainer.mean_abs_shap_df(self, pos_label)
3126 """mean absolute SHAP values"""
3127 if not hasattr(self, "_mean_abs_shap_df"):
-> 3128 _ = self.get_shap_values_df()
3129 self._mean_abs_shap_df = [
3130 self.get_shap_values_df(pos_label)[self.merged_cols]
3131 .abs()
(...)
3138 for pos_label in self.labels
3139 ]
3140 return self._mean_abs_shap_df[pos_label]
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\explainers.py:86, in insert_pos_label.<locals>.inner(self, *args, **kwargs)
84 else:
85 kwargs.update(dict(pos_label=self.pos_label))
---> 86 return func(self, **kwargs)
File I:\Explainer Dashboard\explainer-dashboard\lib\site-packages\explainerdashboard\explainers.py:2845, in ClassifierExplainer.get_shap_values_df(self, pos_label)
2843 if len(self.labels) == 2:
2844 if not isinstance(_shap_values, list):
-> 2845 assert (
2846 len(_shap_values.shape) == 2
2847 ), f"shap_values should be 2d, instead shape={_shap_values.shape}!"
2848 elif isinstance(_shap_values, list) and len(_shap_values) == 2:
2849 # for binary classifier only keep positive class
2850 _shap_values = _shap_values[1]
AssertionError: shap_values should be 2d, instead shape=(200, 21, 2)!` | open | 2024-03-09T15:57:24Z | 2024-03-13T13:29:31Z | https://github.com/oegedijk/explainerdashboard/issues/297 | [] | harshil17 | 11 |
pyg-team/pytorch_geometric | pytorch | 9,344 | Still error after installing dependencies:No module named 'torch_geometric.utils.subgraph' | ### 😵 Describe the installation problem
I installed four additional dependencies following the tutorial:
**torch_scatter-2.1.2+pt23cu118-cp38-cp38-linux_x86_64.whl
torch_sparse-0.6.18+pt23cu118-cp38-cp38-linux_x86_64.whl
torch_cluster-1.6.3+pt23cu118-cp38-cp38-linux_x86_64.whl
torch_spline_conv-1.2.2+pt23cu118-cp38-cp38-linux_x86_64.whl**
It is then installed using pip install torch_geometric.
**But when I run the file it still says: No module named 'torch_geometric.utils.subgraph'**
What is going on here?
**I looked inside `torch_geometric.utils`; there is a file called `_subgraph.py`.
Why is there still an error?**
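A guess at the cause: the code being run imports `torch_geometric.utils.subgraph` as a module path, but in the installed version that module has been renamed to the private `_subgraph.py` and its function is re-exported from the package, so the old path no longer resolves. The real fix is matching the PyG version to what the code expects, but a fallback import can paper over a moved module path. Since PyG isn't importable here, the sketch uses a stdlib stand-in (`math`); the hypothetical real-world call would be something like `import_with_fallback("torch_geometric.utils.subgraph", "torch_geometric.utils", "subgraph")`:

```python
import importlib

def import_with_fallback(primary_module, fallback_module, attr):
    """Try the first module path; if it doesn't exist, fall back to the second."""
    try:
        module = importlib.import_module(primary_module)
    except ModuleNotFoundError:
        module = importlib.import_module(fallback_module)
    return getattr(module, attr)

# stdlib stand-in: the first path does not exist, so we fall back to math
sqrt = import_with_fallback("math.does_not_exist", "math", "sqrt")
print(sqrt(9.0))  # 3.0
```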
### Environment
* PyG version:2.1.0
* PyTorch version: 2.3
* OS: Linux
* Python version: 3.8
* CUDA/cuDNN version:11.8
* How you installed PyTorch and PyG (`conda`, `pip`, source): pip
* Any other relevant information (*e.g.*, version of `torch-scatter`):
scatter-2.1.2+pt23cu118-cp38-cp38-linux_x86_64
sparse-0.6.18+pt23cu118-cp38-cp38-linux_x86_64
cluster-1.6.3+pt23cu118-cp38-cp38-linux_x86_64
spline_conv-1.2.2+pt23cu118-cp38-cp38-linux_x86_64 | open | 2024-05-22T03:31:48Z | 2024-05-27T08:09:55Z | https://github.com/pyg-team/pytorch_geometric/issues/9344 | [
"installation"
] | Aminoacid1226 | 2 |
scanapi/scanapi | rest-api | 164 | ADR 3: How to show test results in the markdown report | ## Architecture Decision Review - ADR
- How are we going to show the tests in the markdown report?
- How are we going to show each test case?
- How are we going to show if a test passed?
- How are we going to show if a test failed?
This discussion started [here](https://github.com/scanapi/scanapi/pull/157#pullrequestreview-424762095)
Related ADR: #161 | closed | 2020-06-04T20:49:21Z | 2020-06-14T17:40:39Z | https://github.com/scanapi/scanapi/issues/164 | [
"ADR"
] | camilamaia | 1 |
pyjanitor-devs/pyjanitor | pandas | 998 | [BUG] Extend `fill_empty`'s `column_names` type range | # Brief Description
<!-- Please provide a brief description of your bug. Do NOT paste the stack trace here. -->
https://github.com/pyjanitor-devs/pyjanitor/blob/3fab49e8c89f1a5e4ca7a6e4fdbbe8e2f7b89c66/janitor/functions/fill.py#L148-L152
A quick fix would be to add `pd.Index` to the dispatch.
On further thought, using `Iterable` may be better,
because `check_column`'s `column_names` also supports `Iterable`,
and a `pd.Index` instance is itself `Iterable`.
So the decorator would become `@dispatch(pd.DataFrame, Iterable)`.
Note that the `Iterable` here must be `collections.abc.Iterable`, not `typing.Iterable`;
using `typing.Iterable` would raise a different error.
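The claim that a `pd.Index` instance is `Iterable` can be checked without pandas installed, because `collections.abc.Iterable` recognizes any class that defines `__iter__` as a virtual subclass. A stdlib-only sketch (the `FakeIndex` class is a stand-in, not real pandas):

```python
from collections.abc import Iterable

class FakeIndex:
    """Stand-in for pd.Index: any class defining __iter__ is automatically
    a virtual subclass of collections.abc.Iterable via its __subclasshook__."""
    def __init__(self, labels):
        self._labels = list(labels)
    def __iter__(self):
        return iter(self._labels)

cols = FakeIndex(["a", "b"])
print(isinstance(cols, Iterable))        # True
print(isinstance(["a", "b"], Iterable))  # True
print(isinstance("a", Iterable))         # True — strings are Iterable too!
```

The last line is the gotcha with `@dispatch(pd.DataFrame, Iterable)`: a single string column name would also match that registration, so the `str` case would need to be registered (or checked) separately.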
# Minimally Reproducible Code
<!-- If you provide minimal code that reproduces the problem, this makes it easier for us to debug what's going on.
Minimal code should be trivially copy/pastable into a Python interpreter in its entirety. Be sure to include imports.
-->
```python
>>> import pandas as pd
>>> import janitor # noqa
>>> df = pd.DataFrame({"attr":[None, None]})
>>> df.fill_empty(df.columns, 0)
Traceback (most recent call last):
File "C:\Software\miniforge3\envs\work\lib\site-packages\multipledispatch\dispatcher.py", line 269, in __call__
func = self._cache[types]
KeyError: (<class 'pandas.core.frame.DataFrame'>, <class 'pandas.core.indexes.base.Index'>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Software\miniforge3\envs\work\lib\site-packages\pandas_flavor\register.py", line 29, in __call__
return method(self._obj, *args, **kwargs)
File "C:\Software\miniforge3\envs\work\lib\site-packages\janitor\utils.py", line 231, in wrapper
return func(*args, **kwargs)
File "C:\Software\miniforge3\envs\work\lib\site-packages\janitor\functions\fill.py", line 199, in fill_empty
return _fill_empty(df, column_names, value=value)
File "C:\Software\miniforge3\envs\work\lib\site-packages\multipledispatch\dispatcher.py", line 273, in __call__
raise NotImplementedError(
NotImplementedError: Could not find signature for _fill_empty: <DataFrame, Index>
``` | closed | 2022-01-26T03:03:01Z | 2022-02-10T17:21:13Z | https://github.com/pyjanitor-devs/pyjanitor/issues/998 | [] | Zeroto521 | 2 |
miguelgrinberg/flasky | flask | 66 | Bootstrap does not affect the page on refresh. (3b) | When I type the URL by hand and press enter, everything works as it should: Bootstrap nicely formats the navbar. But when I press the refresh button, the page reloads without Bootstrap (although I can see it does get transferred in the network tab).
The only difference I can see is that on a refresh, the request gets a Cache-Control header with the value max-age=0.
This somehow prevents the Bootstrap CSS from affecting the page.
Any thoughts on the solution?
| closed | 2015-08-29T16:59:36Z | 2015-08-29T19:43:22Z | https://github.com/miguelgrinberg/flasky/issues/66 | [
"question"
] | mfrlin | 4 |
arogozhnikov/einops | numpy | 85 | flipping axis | Is it possible, by means of einops, to flip input akin to np.flipud or np.fliplr? | closed | 2020-11-06T11:01:14Z | 2024-05-06T16:34:21Z | https://github.com/arogozhnikov/einops/issues/85 | [] | CDitzel | 3
stanfordnlp/stanza | nlp | 1,184 | [QUESTION] How to access the dictionary directly to find another variant of a word? | When using a prebuilt pipeline, is there a way to access the original dictionary and find all variants of a specific word given its lemma? | closed | 2023-01-23T19:04:07Z | 2023-01-24T07:23:19Z | https://github.com/stanfordnlp/stanza/issues/1184 | [
"question"
] | czyzby | 2 |
seleniumbase/SeleniumBase | web-scraping | 3,380 | "Hacking websites with CDP" is now on YouTube | "Hacking websites with CDP" is now on YouTube:
<b>https://www.youtube.com/watch?v=vt2zsdiNh3U</b>
<a href="https://www.youtube.com/watch?v=vt2zsdiNh3U"><img src="https://github.com/user-attachments/assets/82ab2715-727e-4d09-9314-b8905795dc43" title="Hacking websites with CDP" width="600" /></a>
| open | 2025-01-01T01:37:41Z | 2025-03-01T20:58:40Z | https://github.com/seleniumbase/SeleniumBase/issues/3380 | [
"News / Announcements",
"Tutorials & Learning",
"UC Mode / CDP Mode"
] | mdmintz | 10 |
litestar-org/litestar | asyncio | 3,466 | Enhancement: Add Pydantic's error dictionary to ValidationException's extra dict | ### Summary
To send a custom message for Pydantic errors, we require the error `type`. Pydantic's error details are lost while building the error message in `SignatureModel._build_error_message`. If we add the `exc` dict to this message, it will be propagated to exception handlers
### Basic Example
```
"SignatureModel"
@classmethod
def _build_error_message(
cls,
keys: Sequence[str],
exc_msg: str,
connection: ASGIConnection,
exc: Optional[Dict[str, Any]] = None
) -> ErrorMessage:
...
if exc:
message["exc"] = exc
...
```
Then in an exception handler, Pydantic's error dict can be accessed by:
`validation_exception["extra"][0]["exc"]`
### Drawbacks and Impact
_No response_
### Unresolved questions
Is there a better way to propagate Pydantic's error object to `ValidationException` received by the handlers? | open | 2024-05-04T05:41:05Z | 2025-03-20T15:54:40Z | https://github.com/litestar-org/litestar/issues/3466 | [
"Enhancement"
] | Anu-cool-007 | 0 |
sinaptik-ai/pandas-ai | data-science | 1,152 | Add Firebase database as connector | ### 🚀 The feature
Add Firebase database as a connector
### Motivation, pitch
Add Firebase database as connector
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-05-13T06:57:59Z | 2024-08-22T17:39:33Z | https://github.com/sinaptik-ai/pandas-ai/issues/1152 | [] | shivatmax | 1 |
google-research/bert | nlp | 435 | TypeError: batch() got an unexpected keyword argument 'drop_remainder' | I am trying to classify the sentiment of a movie review using TF Hub, and I encounter this error: batch() got an unexpected keyword argument 'drop_remainder'.
```
>>> estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jugs/anaconda3/envs/asr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 363, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/jugs/anaconda3/envs/asr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 843, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/jugs/anaconda3/envs/asr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 853, in _train_model_default
input_fn, model_fn_lib.ModeKeys.TRAIN))
File "/home/jugs/anaconda3/envs/asr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 691, in _get_features_and_labels_from_input_fn
result = self._call_input_fn(input_fn, mode)
File "/home/jugs/anaconda3/envs/asr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 798, in _call_input_fn
return input_fn(**kwargs)
File "/home/jugs/anaconda3/envs/asr/lib/python3.6/site-packages/bert/run_classifier.py", line 759, in input_fn
d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
TypeError: batch() got an unexpected keyword argument 'drop_remainder'
```
the input to the estimator.train is:
```
>>> train_input_fn = bert.run_classifier.input_fn_builder(
... features=train_features,
... seq_length=MAX_SEQ_LENGTH,
... is_training=True,
... drop_remainder=False)
```
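If I remember correctly, `drop_remainder` was added to `tf.data.Dataset.batch` around TensorFlow 1.10, so an older TF raises exactly this TypeError and upgrading TensorFlow is the real fix. As a stopgap, a version-tolerant wrapper can probe the method's signature before passing the kwarg. Sketched with a stdlib stand-in dataset, since TF isn't importable here:

```python
import inspect

def batch_compat(dataset, batch_size, drop_remainder=False):
    """Call dataset.batch with drop_remainder only if the API supports it."""
    params = inspect.signature(dataset.batch).parameters
    if "drop_remainder" in params:
        return dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder)
    return dataset.batch(batch_size=batch_size)

class OldDataset:
    """Stand-in for an old tf.data.Dataset whose batch() lacks drop_remainder."""
    def __init__(self, items):
        self.items = items
    def batch(self, batch_size):
        return [self.items[i:i + batch_size]
                for i in range(0, len(self.items), batch_size)]

batches = batch_compat(OldDataset([1, 2, 3, 4, 5]), batch_size=2)
print(batches)  # [[1, 2], [3, 4], [5]]
```

Note the wrapper silently ignores `drop_remainder=True` on old APIs (the trailing partial batch is kept), which is acceptable here since the repro passes `drop_remainder=False` anyway.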
| open | 2019-02-14T06:26:32Z | 2019-03-13T10:32:01Z | https://github.com/google-research/bert/issues/435 | [] | jageshmaharjan | 2 |
codertimo/BERT-pytorch | nlp | 93 | dataset / dataset.py has one error? | "
def get_random_line(self):
    if self.on_memory:
        self.lines[random.randrange(len(self.lines))][1]
"
This code is meant to get an incorrect next sentence (isNotNext: 0),
but it may randomly pick a line that is actually the correct next sentence (isNext: 1). | open | 2021-08-22T09:16:58Z | 2023-05-15T13:57:15Z | https://github.com/codertimo/BERT-pytorch/issues/93 | [] | ndn-love | 1
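The usual fix for the sampler quoted in this report is to resample until the random index differs from the index to exclude (the position of the true next sentence). A hypothetical sketch, not the repository's actual code — and note it only excludes the given index, so a duplicate sentence at another position could still slip through:

```python
import random

def get_random_other_line(lines, exclude_index):
    """Sample a line that is guaranteed NOT to be lines[exclude_index],
    for use as an isNotNext (label 0) negative example."""
    if len(lines) < 2:
        raise ValueError("need at least two lines to sample a negative")
    idx = random.randrange(len(lines))
    while idx == exclude_index:
        idx = random.randrange(len(lines))
    return lines[idx]

lines = ["sent0", "sent1", "sent2", "sent3"]
print(get_random_other_line(lines, 1))  # never "sent1"
```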
unit8co/darts | data-science | 1,978 | Do we need to scale the covariates | Hi folks,
I am very new to machine learning, and I am trying to forecast wind power based on different covariates, i.e. wind speed, wind direction, temperature and air pressure.
As far as I understand, neural-network-based models need all the features scaled to normalise the data so that training is faster and more accurate. So I need to normalise these data (wind speed, wind direction, temperature and air pressure) onto the same scale.
As I understand it, in Darts these features (measured wind speed, wind direction, temperature and air pressure) can be represented as covariates, as they provide external information to the model. So I would like to ask: do I need to normalise/scale these covariates as one would in a normal neural-network-based model?
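Yes — for neural-network models the covariates should be scaled too. In Darts the usual tool is `Scaler` from `darts.dataprocessing.transformers` (which, if I recall correctly, wraps sklearn's `MinMaxScaler` by default), fitted on the training portion only and then applied to the rest to avoid leakage. The underlying operation is just per-feature min–max scaling, sketched here with stdlib Python:

```python
def minmax_scale(values):
    """Scale a 1-D sequence to [0, 1]; each covariate column is scaled
    independently, so wind speed, pressure, etc. end up in the same range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant column: no spread to scale
    return [(v - lo) / (hi - lo) for v in values]

wind_speed = [0.0, 5.0, 10.0, 20.0]       # m/s, hypothetical readings
pressure = [990.0, 1000.0, 1010.0, 1020.0]  # hPa, hypothetical readings
print(minmax_scale(wind_speed))  # [0.0, 0.25, 0.5, 1.0]
```

One caveat for wind direction specifically: it is circular (359° is close to 0°), so rather than min–max scaling the raw angle, it is common to encode it as sin/cos components first.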
Thank you for all suggestions. | closed | 2023-09-04T09:58:29Z | 2023-09-11T06:49:51Z | https://github.com/unit8co/darts/issues/1978 | [
"question",
"q&a"
] | mchirsa5 | 4 |