| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
NullArray/AutoSploit | automation | 1,274 | Unhandled Exception (d0a112a6c) | Autosploit version: `3.0`
OS information: `Linux-5.6.0-kali1-amd64-x86_64-with-debian-kali-rolling`
Running context: `autosploit.py`
Error message: `No JSON object could be decoded`
Error traceback:
```
Traceback (most recent call last):
File "/root/Downloads/AutoSploit-master/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Downloads/AutoSploit-master/lib/jsonize.py", line 72, in load_exploits
_json = json.loads(exploit_file.read())
File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
```
Metasploit launched: `False`
| open | 2020-05-29T16:27:58Z | 2020-05-29T16:27:58Z | https://github.com/NullArray/AutoSploit/issues/1274 | [] | AutosploitReporter | 0 |
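The traceback above shows `json.loads` failing on an exploit file whose contents are not valid JSON (often an empty or truncated file). A minimal defensive sketch of the idea, making no assumptions about AutoSploit's real file layout (the function name here is illustrative, not the project's actual API):

```python
import json

def load_exploits_text(raw_text):
    # Hypothetical guard around the failing json.loads call in
    # lib/jsonize.py: skip empty or malformed files instead of
    # crashing with "No JSON object could be decoded".
    if not raw_text.strip():
        return None
    try:
        return json.loads(raw_text)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return None

print(load_exploits_text('{"exploits": []}'))  # {'exploits': []}
print(load_exploits_text(""))                  # None
```

In practice the caller would also want to log which file was skipped, so a broken install can still be diagnosed.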
marimo-team/marimo | data-science | 3,469 | With mo.md and triple-quoted f-strings I can't get one variable per line of text | ### Describe the bug
I tried to print one variable (with its value) per line using f-strings, e.g. `f"{a=}"`, with `mo.md` and `"""`, but I either get everything on one line or extra blank lines in between. See the screenshot; my goal is one variable (with its value) printed per line.

### Environment
<details>
{
"marimo": "0.10.13",
"OS": "Windows",
"OS Version": "10",
"Processor": "Intel64 Family 6 Model 85 Stepping 4, GenuineIntel",
"Python Version": "3.10.11",
"Binaries": {
"Browser": "132.0.6834.83",
"Node": "--"
},
"Dependencies": {
"click": "8.1.7",
"docutils": "0.20.1",
"itsdangerous": "2.2.0",
"jedi": "0.19.0",
"markdown": "3.6",
"narwhals": "1.22.0",
"packaging": "23.1",
"psutil": "5.9.5",
"pygments": "2.16.1",
"pymdown-extensions": "10.8.1",
"pyyaml": "6.0.1",
"ruff": "0.5.5",
"starlette": "0.37.2",
"tomlkit": "0.12.5",
"typing-extensions": "4.7.1",
"uvicorn": "0.30.1",
"websockets": "12.0"
},
"Optional Dependencies": {
"altair": "5.3.0",
"pandas": "2.1.0",
"polars": "1.19.0",
"pyarrow": "16.1.0"
}
}
</details>
### Code to reproduce
```python
import marimo
__generated_with = "0.10.12"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
return (mo,)
@app.cell
def _(mo):
a = 1
b = 2
c = 3
mo.hstack(
[
mo.md(f"""#one per line please
ohh no they're all on the same line
{a=}
{b=}
{c=}"""),
mo.md(f"""#one per line please
I added breaks myself
but ohh no, now there is extra space between each line
{a=}
{b=}
{c=}"""),
]
)
return a, b, c
if __name__ == "__main__":
app.run()
``` | closed | 2025-01-16T21:22:10Z | 2025-01-18T15:18:15Z | https://github.com/marimo-team/marimo/issues/3469 | [] | ggggggggg | 2 |
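One hedged workaround for the issue above, assuming the goal is exactly one hard line break per variable: in Markdown a single newline does not start a new visual line, and a blank line starts a new paragraph with extra spacing, but two trailing spaces before a newline force a hard break. A sketch that builds the string (inside a marimo cell it would then be passed to `mo.md(...)`; note that with triple-quoted f-strings leading indentation also matters, since four leading spaces start a Markdown code block, so `textwrap.dedent` or unindented lines help):

```python
a, b, c = 1, 2, 3

# Join with "  \n" (two trailing spaces + newline) to get one
# variable per line without the blank line a paragraph break adds.
md_text = "#one per line please  \n" + "  \n".join([f"{a=}", f"{b=}", f"{c=}"])
print(md_text)
# In a marimo cell: mo.md(md_text)
```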
ijl/orjson | numpy | 445 | loads does not handle subclasses of str | Expected outcome:
Subclasses of str should be handled
Actual outcome:
`orjson.loads` blows up
Step by step reproduction:
```
>>> import orjson
>>> class MyStr(str):
... pass
...
>>> obj = MyStr('"abc"')
>>> orjson.loads(obj)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
orjson.JSONDecodeError: Input must be bytes, bytearray, memoryview, or str: line 1 column 1 (char 0)
``` | closed | 2023-12-08T15:36:24Z | 2024-02-06T01:51:38Z | https://github.com/ijl/orjson/issues/445 | [] | emontnemery | 7 |
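Until the decoder accepts `str` subclasses, one hedged workaround is normalizing the input to a plain `str` first. The sketch uses the stdlib `json` module so it stands alone; with orjson the call would be `orjson.loads(str(obj))`:

```python
import json

class MyStr(str):
    pass

obj = MyStr('"abc"')
# str() on a str subclass yields an exact built-in str, which
# sidesteps decoders that check type(obj) rather than isinstance.
payload = str(obj)
print(json.loads(payload))  # abc
```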
pydata/xarray | pandas | 9,481 | Merging coordinates computes array values | ### What is your issue?
Xarray's default handling of coordinate merging (e.g., as used in arithmetic) computes array values, which is not ideal.
(There is probably an older issue to discuss this, but I couldn't find it with a quick search)
This is easiest to see using Dask:
```python
import xarray
import numpy as np
import dask.array
def r(*args):
raise RuntimeError('data accessed')
x1 = dask.array.from_delayed(dask.delayed(r)(1), shape=(), dtype=np.float64)
x2 = dask.array.from_delayed(dask.delayed(r)(2), shape=(), dtype=np.float64)
ds1 = xarray.Dataset(coords={'x': x1})
ds2 = xarray.Dataset(coords={'x': x2})
ds1 + ds2 # RuntimeError: data accessed
```
Traceback:
<details>
```
RuntimeError Traceback (most recent call last)
Cell In[2], line 12
10 ds1 = xarray.Dataset(coords={'x': x1})
11 ds2 = xarray.Dataset(coords={'x': x2})
---> 12 ds1 + ds2
File ~/dev/xarray/xarray/core/_typed_ops.py:35, in DatasetOpsMixin.__add__(self, other)
34 def __add__(self, other: DsCompatible) -> Self:
---> 35 return self._binary_op(other, operator.add)
File ~/dev/xarray/xarray/core/dataset.py:7783, in Dataset._binary_op(self, other, f, reflexive, join)
7781 self, other = align(self, other, join=align_type, copy=False)
7782 g = f if not reflexive else lambda x, y: f(y, x)
-> 7783 ds = self._calculate_binary_op(g, other, join=align_type)
7784 keep_attrs = _get_keep_attrs(default=False)
7785 if keep_attrs:
File ~/dev/xarray/xarray/core/dataset.py:7844, in Dataset._calculate_binary_op(self, f, other, join, inplace)
7841 return type(self)(new_data_vars)
7843 other_coords: Coordinates | None = getattr(other, "coords", None)
-> 7844 ds = self.coords.merge(other_coords)
7846 if isinstance(other, Dataset):
7847 new_vars = apply_over_both(
7848 self.data_vars, other.data_vars, self.variables, other.variables
7849 )
File ~/dev/xarray/xarray/core/coordinates.py:522, in Coordinates.merge(self, other)
519 if not isinstance(other, Coordinates):
520 other = Dataset(coords=other).coords
--> 522 coords, indexes = merge_coordinates_without_align([self, other])
523 coord_names = set(coords)
524 return Dataset._construct_direct(
525 variables=coords, coord_names=coord_names, indexes=indexes
526 )
File ~/dev/xarray/xarray/core/merge.py:413, in merge_coordinates_without_align(objects, prioritized, exclude_dims, combine_attrs)
409 filtered = collected
411 # TODO: indexes should probably be filtered in collected elements
412 # before merging them
--> 413 merged_coords, merged_indexes = merge_collected(
414 filtered, prioritized, combine_attrs=combine_attrs
415 )
416 merged_indexes = filter_indexes_from_coords(merged_indexes, set(merged_coords))
418 return merged_coords, merged_indexes
File ~/dev/xarray/xarray/core/merge.py:290, in merge_collected(grouped, prioritized, compat, combine_attrs, equals)
288 variables = [variable for variable, _ in elements_list]
289 try:
--> 290 merged_vars[name] = unique_variable(
291 name, variables, compat, equals.get(name, None)
292 )
293 except MergeError:
294 if compat != "minimal":
295 # we need more than "minimal" compatibility (for which
296 # we drop conflicting coordinates)
File ~/dev/xarray/xarray/core/merge.py:137, in unique_variable(name, variables, compat, equals)
133 break
135 if equals is None:
136 # now compare values with minimum number of computes
--> 137 out = out.compute()
138 for var in variables[1:]:
139 equals = getattr(out, compat)(var)
File ~/dev/xarray/xarray/core/variable.py:1003, in Variable.compute(self, **kwargs)
985 """Manually trigger loading of this variable's data from disk or a
986 remote source into memory and return a new variable. The original is
987 left unaltered.
(...)
1000 dask.array.compute
1001 """
1002 new = self.copy(deep=False)
-> 1003 return new.load(**kwargs)
File ~/dev/xarray/xarray/core/variable.py:981, in Variable.load(self, **kwargs)
964 def load(self, **kwargs):
965 """Manually trigger loading of this variable's data from disk or a
966 remote source into memory and return this variable.
967
(...)
979 dask.array.compute
980 """
--> 981 self._data = to_duck_array(self._data, **kwargs)
982 return self
File ~/dev/xarray/xarray/namedarray/pycompat.py:130, in to_duck_array(data, **kwargs)
128 if is_chunked_array(data):
129 chunkmanager = get_chunked_array_type(data)
--> 130 loaded_data, *_ = chunkmanager.compute(data, **kwargs) # type: ignore[var-annotated]
131 return loaded_data
133 if isinstance(data, ExplicitlyIndexed):
File ~/dev/xarray/xarray/namedarray/daskmanager.py:86, in DaskManager.compute(self, *data, **kwargs)
81 def compute(
82 self, *data: Any, **kwargs: Any
83 ) -> tuple[np.ndarray[Any, _DType_co], ...]:
84 from dask.array import compute
---> 86 return compute(*data, **kwargs)
File ~/miniconda3/envs/xarray-py312/lib/python3.12/site-packages/dask/base.py:664, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
661 postcomputes.append(x.__dask_postcompute__())
663 with shorten_traceback():
--> 664 results = schedule(dsk, keys, **kwargs)
666 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
Cell In[2], line 6, in r(*args)
5 def r(*args):
----> 6 raise RuntimeError('data accessed')
RuntimeError: data accessed
```
</details>
We use this check to decide whether or not to preserve coordinates on result objects. If coordinates are the same from all arguments, they are kept. Otherwise they are dropped.
There are checks for matching array identity inside `Variable.equals`, so in practice this computation is often skipped, but it isn't ideal. It's basically the only case where Xarray operations on Xarray objects require computing lazy array values.
The simplest fix would be to switch the default `compat` option used for merging inside arithmetic (and other xarray internal operations) to `"override"`, so coordinates are simply copied from the first object on which they appear. Would this make sense? | open | 2024-09-11T18:39:56Z | 2024-09-12T00:25:50Z | https://github.com/pydata/xarray/issues/9481 | [
"topic-combine",
"topic-lazy array"
] | shoyer | 2 |
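The `"override"` semantics proposed at the end can be illustrated without xarray: coordinates are taken from the first object on which they appear and are never compared for equality, so no lazy values need computing. A minimal sketch with plain dicts standing in for coordinate variables:

```python
def merge_coords_override(*coord_dicts):
    # compat="override": first occurrence wins; values are never
    # compared, so chunked/lazy arrays stay uncomputed.
    merged = {}
    for coords in coord_dicts:
        for name, value in coords.items():
            merged.setdefault(name, value)
    return merged

print(merge_coords_override({"x": "lazy-1"}, {"x": "lazy-2"}))
# {'x': 'lazy-1'}
```

The trade-off, of course, is that silently keeping the first object's coordinates can hide genuine mismatches that `equals`-based merging would have surfaced.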
NullArray/AutoSploit | automation | 494 | Unhandled Exception (b059fa380) | Autosploit version: `3.0`
OS information: `Linux-4.9.0-8-amd64-x86_64-with-debian-9.6`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call last):
File "/opt/Autosploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/opt/Autosploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
| closed | 2019-02-19T18:39:21Z | 2019-03-03T03:30:31Z | https://github.com/NullArray/AutoSploit/issues/494 | [] | AutosploitReporter | 0 |
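The NameError itself points at the fix: `Except` is not a Python name; the built-in base class for catchable errors is `Exception`. A hedged sketch of the corrected pattern in `lib/jsonize.py` (the surrounding logic is assumed, not copied from the project):

```python
import json

def load_exploit_json(raw_text):
    try:
        return json.loads(raw_text)
    except Exception:  # was `except Except:`, which is a NameError
        return None

print(load_exploit_json('{"ok": true}'))  # {'ok': True}
print(load_exploit_json("broken"))        # None
```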
ultralytics/ultralytics | pytorch | 18,686 | Custom Image size training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I have a dataset of 1920×480 images (1920 is the width and 480 is the height).
While training a YOLOv8 model I use the parameter --imgsz [1920,480] with --rect True. The best.pt model I get after training seems to be 1920×1920 only, but is square padding done in this scenario as well? If this seems incorrect, please let me know the correct way to train it.
I also need an ONNX model exported with the same size, 1920×480 (w×h). I need help with that as well.
### Additional
_No response_ | open | 2025-01-14T18:09:28Z | 2025-01-15T09:04:41Z | https://github.com/ultralytics/ultralytics/issues/18686 | [
"question",
"detect",
"exports"
] | manoj-kumar-p | 5 |
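For intuition, a small arithmetic sketch of what square padding of a 1920×480 frame up to a 1920×1920 input implies (720-pixel bands top and bottom), which is why rectangular training and a rectangular `imgsz` at export matter for this aspect ratio. The function is illustrative, not Ultralytics' actual letterbox implementation:

```python
def letterbox_padding(h, w, target):
    # Pixels added on each side when padding (h, w) up to target x target.
    top = (target - h) // 2
    bottom = target - h - top
    left = (target - w) // 2
    right = target - w - left
    return top, bottom, left, right

print(letterbox_padding(480, 1920, 1920))  # (720, 720, 0, 0)
```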
dpgaspar/Flask-AppBuilder | flask | 2,274 | How do I associate automatically generated permissions with the Public role? | Hello,
I have a question regarding permissions associated with the "Public" role.
I've spent a lot of time digging through the documentation as well as the source code, but I couldn't figure out the simplest method to add a CRUD permission like "can list on view" without using the security roles/list UI on the actual web app.
For example, I have a view named "HomepageView" and I would like to add the automatically generated permissions "can list on HomepageView" and "can show on HomepageView" to the Public role so that users can view data displayed on that view without having to log in.
I was able to accomplish this using the built-in security UI as shown below:

However, I wasn't able to do this using the FAB_ROLES setting in the config file as explained in the documentation:

Is it even possible to use the config file to accomplish this? | open | 2024-10-07T17:27:37Z | 2024-10-29T09:42:12Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2274 | [] | yamen321 | 1 |
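For reference, `FAB_ROLES` expects a mapping from role name to a list of `[view-or-menu regex, permission]` pairs, so the two permissions in the screenshot would look roughly like this in `config.py` (the view name comes from the question; verify the permission names against the ones FAB actually generated):

```python
# Hedged sketch of a config.py fragment; names are taken from the
# question, not verified against a running app.
FAB_ROLES = {
    "Public": [
        ["HomepageView", "can_list"],
        ["HomepageView", "can_show"],
    ]
}
```

One caveat worth checking: the built-in "Public" role is the one named by `AUTH_ROLE_PUBLIC` and may be handled specially at startup, which could explain why the security UI works while the config-based route appears not to.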
aeon-toolkit/aeon | scikit-learn | 2,344 | [DOC] Two preprocessing notebooks | ### Describe the issue linked to the documentation
I seem to have done the same thing twice, so we have preprocessing in transformations and preprocessing in utils. Needs resolving
### Suggest a potential alternative/fix
_No response_ | closed | 2024-11-12T12:44:52Z | 2024-11-15T20:00:11Z | https://github.com/aeon-toolkit/aeon/issues/2344 | [
"documentation"
] | TonyBagnall | 0 |
plotly/dash | data-science | 2,798 | [Feature Request] openssf scorecard | Would be good to get added https://securityscorecards.dev/ to better know where next improvements could happen and when evaluating the risk of using a component like this.
scorecard --repo=https://github.com/plotly/dash
Starting [Packaging]
Starting [Security-Policy]
Starting [Pinned-Dependencies]
Starting [Signed-Releases]
Starting [Code-Review]
Starting [CI-Tests]
Starting [CII-Best-Practices]
Starting [Token-Permissions]
Starting [License]
Starting [Maintained]
Starting [SAST]
Starting [Binary-Artifacts]
Starting [Branch-Protection]
Starting [Contributors]
Starting [Fuzzing]
Starting [Vulnerabilities]
Starting [Dependency-Update-Tool]
Starting [Dangerous-Workflow]
Finished [Code-Review]
Finished [CI-Tests]
Finished [CII-Best-Practices]
Finished [Token-Permissions]
Finished [Packaging]
Finished [Security-Policy]
Finished [Pinned-Dependencies]
Finished [Signed-Releases]
Finished [SAST]
Finished [Binary-Artifacts]
Finished [License]
Finished [Maintained]
Finished [Branch-Protection]
Finished [Contributors]
Finished [Fuzzing]
Finished [Vulnerabilities]
Finished [Dependency-Update-Tool]
Finished [Dangerous-Workflow]
RESULTS
-------
Aggregate score: 5.4 / 10
Check scores:
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| SCORE | NAME | REASON | DOCUMENTATION/REMEDIATION |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Binary-Artifacts | no binaries found in the repo | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#binary-artifacts |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 8 / 10 | Branch-Protection | branch protection is not | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#branch-protection |
| | | maximal on development and all | |
| | | release branches | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | CI-Tests | 7 out of 7 merged PRs | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#ci-tests |
| | | checked by a CI test -- score | |
| | | normalized to 10 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | CII-Best-Practices | no effort to earn an OpenSSF | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#cii-best-practices |
| | | best practices badge detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 8 / 10 | Code-Review | found 1 unreviewed changesets | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#code-review |
| | | out of 7 -- score normalized | |
| | | to 8 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Contributors | 25 different organizations | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#contributors |
| | | found -- score normalized to | |
| | | 10 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Dangerous-Workflow | no dangerous workflow patterns | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#dangerous-workflow |
| | | detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Dependency-Update-Tool | update tool detected | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#dependency-update-tool |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Fuzzing | project is not fuzzed | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#fuzzing |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | License | license file detected | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#license |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Maintained | 30 commit(s) out of 30 and 1 | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#maintained |
| | | issue activity out of 30 found | |
| | | in the last 90 days -- score | |
| | | normalized to 10 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| ? | Packaging | no published package detected | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#packaging |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 4 / 10 | Pinned-Dependencies | dependency not pinned by hash | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#pinned-dependencies |
| | | detected -- score normalized | |
| | | to 4 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | SAST | SAST tool is not run on all | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#sast |
| | | commits -- score normalized to | |
| | | 0 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Security-Policy | security policy file not | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#security-policy |
| | | detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Signed-Releases | 0 out of 1 artifacts are | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#signed-releases |
| | | signed or have provenance | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Token-Permissions | detected GitHub workflow | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#token-permissions |
| | | tokens with excessive | |
| | | permissions | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Vulnerabilities | 46 existing vulnerabilities | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#vulnerabilities |
| | | detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| closed | 2024-03-16T14:04:22Z | 2024-07-26T13:07:28Z | https://github.com/plotly/dash/issues/2798 | [] | andy778 | 1 |
exaloop/codon | numpy | 316 | Strange bug (?) in for-else-construction | The code
```python
p = 5
best = 4
for s in [(1, 2), (1, 3)]:
S = set()
for i, j in [(u, v) for u in s for v in s]:
a = (i+j)%p
S.add(a)
if len(S) >= best:
break
else:
#print(p, s, S, best)
print(p, s, S)
best = len(S)
```
does not behave as expected. From the logic the `else` block should be run only once (as it does in CPython). However, if one removes the comment in the `else` block, then the code runs correctly. Somehow the passive reference to `best` plays a role here. Sorry for the convoluted code, it is already an attempt to reduce it to as simple an example as possible. | closed | 2023-04-01T11:02:35Z | 2023-04-12T22:14:01Z | https://github.com/exaloop/codon/issues/316 | [] | ypfmde | 1 |
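For reference, the CPython semantics the snippet relies on: a `for` loop's `else` branch runs only when the loop finishes without hitting `break`. Instrumenting the example confirms the `else` body should execute exactly once, on the first `s`, which never accumulates `best` distinct sums:

```python
p, best = 5, 4
ran_else = 0
for s in [(1, 2), (1, 3)]:
    S = set()
    for i, j in [(u, v) for u in s for v in s]:
        S.add((i + j) % p)
        if len(S) >= best:
            break
    else:
        # reached only when the inner loop was not broken out of
        ran_else += 1
        best = len(S)
print(ran_else, best)  # 1 3
```

Any deviation from `ran_else == 1` under Codon would reproduce the reported miscompilation.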
pandas-dev/pandas | data-science | 60,863 | ENH: give a useful error message when `.query` is used on a dataframe with duplicate column names | ### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
The title says it all. Optimally `.query` would give the same error message as the alternative given in the MWE below.
### Feature Description
see MWE below
### Alternative Solutions
see MWE below
### Additional Context
```python
# %%
import pandas as pd
df = pd.DataFrame(
{
"A": range(1, 6),
"B": range(10, 0, -2),
"C": range(10, 5, -1),
},
)
# %%
# rename columns with duplicate names
df.columns = ["A", "B", "A"]
# %%
# gives useful error message:
# ValueError: cannot reindex on an axis with duplicate labels
df[(df.A <= 4) & (df.B <= 8)]
# %%
# does not give useful error message:
# TypeError: dtype 'A int64
# A int64
# dtype: object' not understood
df.query("A <= 4 and B <= 8")
``` | open | 2025-02-06T11:08:48Z | 2025-02-06T11:08:48Z | https://github.com/pandas-dev/pandas/issues/60863 | [
"Enhancement",
"Needs Triage"
] | Mo-Gul | 0 |
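The clearer message could come from an up-front duplicate-label check before the query engine reassembles columns. A stdlib-only sketch of that check (the message wording is an assumption, not pandas' actual text):

```python
from collections import Counter

def duplicate_labels(columns):
    # Labels appearing more than once, in first-seen order.
    counts = Counter(columns)
    seen = set()
    dupes = []
    for label in columns:
        if counts[label] > 1 and label not in seen:
            dupes.append(label)
            seen.add(label)
    return dupes

print(duplicate_labels(["A", "B", "A"]))  # ['A']
# A hypothetical pre-check inside .query could then raise, e.g.:
# ValueError(f"cannot query a frame with duplicate column labels: {dupes}")
```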
mwouts/itables | jupyter | 112 | Object of type NaTType is not JSON serializable | Pandas dataframes that contain `pd.NaT` cannot be displayed. | closed | 2022-11-10T23:33:14Z | 2022-11-11T01:18:36Z | https://github.com/mwouts/itables/issues/112 | [] | mwouts | 0 |
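`pd.NaT` is one of the values that compares unequal to itself, which gives a dependency-free way to map it to `null` in a JSON `default` hook. A hedged sketch; the tiny stand-in class mimics NaT's self-inequality so the example runs without pandas:

```python
import json

class FakeNaT:
    def __eq__(self, other):
        return False  # like pd.NaT: never equal, even to itself

def nat_safe(obj):
    try:
        if obj != obj:  # True for NaT/NaN-style values
            return None
    except TypeError:
        pass
    raise TypeError(f"not serializable: {type(obj).__name__}")

print(json.dumps({"ts": FakeNaT()}, default=nat_safe))  # {"ts": null}
```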
flaskbb/flaskbb | flask | 335 | FlaskBB as a mailing list | As the title says, I want to have (and help implement if it’s on no one else’s to-do list) mailing list functionality in FlaskBB. Specifically, I am referring to something similar as implemented in Discourse [(see here if want more info)](https://meta.discourse.org/t/replacing-mailing-lists-email-in/13099). The point is of course that it behaves similarly from the point of view of the end user, not that the implementation is similar.
Specifically I would like the following features:
- [ ] Send email notifications when someone posts in a thread you follow (via topic tracker?)
- [ ] Send email notifications when someone posts in a forum you follow (set via plugin option)
- [ ] Allow suppression of notification about own posts (i.e. don’t send email notifications about own posts)
- [ ] Allow users to post a reply via email
- [ ] Allow users to post a new topic (i.e. start a discussion thread) via email
- [ ] Allow admin to set defaults for all of the above that apply for all users who don’t override them
I think that email notifications are a rather simple thing to implement. flaskbb.email.send_mail() is already there to send email asynchronously via Celery. Sending emails whenever someone posts should be simple and we could use the topic tracker to decide when to send an email. Selecting notifications on a per-forum basis seems like an easy thing to me as well. Probably, we want to offload the whole process of figuring out who to send email to to Celery, but I’m not sure whether Celery can access the database, so I’d start simple and keep the thing in the main process for now.
Posting via email would probably be a little more challenging. I'm still green and new, but I believe I can do this. Here's how I'd plan on doing it: I'd set up FlaskBB to monitor an email account (from here on, list@forum.example.org) via IMAP IDLE (I found [this very useful snippet](https://gist.github.com/jexhson/3496039/)). The example there works with a thread that is dedicated to waiting for new emails. I'm not sure how scalable this is, but the alternative would be to trigger an HTTP request via the (not yet existent?) API whenever an email is received by the MX server hosting the domain forum.example.org. Since the latter would require lots of manual setup on the MX server (which is of course not possible if you don't own the server, because you use Gmail, as has been recommended many times) and the former would stay in Python, I'd prefer the former. **edit:** It does however make the assumption that there is a FlaskBB process that runs all the time, which might not be true depending on the deployment setup.
I think this is a functionality that is somewhat specialized, so I don’t want it to clutter up the codebase. Setting this up should be possible in a plugin, especially since you seem to have alembic support for plugins (#274) now. About that two questions: I’d probably need some new database columns (e.g. Message-ID of the email associated with a forum post). Would it be safe to add this to the `posts` table or would it be better to keep it in a separate table, indexed with a foreign key?
Background story:
As stated in #183, I'm new to FlaskBB. I was researching forum software and was intrigued by Discourse, mainly because I really liked the idea of being able to use the forum like a (very modern) mailing list. My initial excitement quickly turned into anger and sadness when I tried to install it and broke my server so badly that I had to reinstall the whole thing and fight with reimporting all the backups for all the sites I was hosting on the server. Lesson learned: never run experiments on "production" servers…
I then looked into Flarum, but I soon realized that even though it’s much easier to set up, when I want some additional functionality, I’m totally lost because my PHP knowledge ends with
`<?php include 'navi.shtml'; ?>`
Now I'm going for Flask, and to the best of my knowledge, the only forum software that uses Flask is FlaskBB. The only thing I don't like about FlaskBB is that it uses Bootstrap (which really bloats the JS and CSS), but I'll probably come around once I get used to it. It seems as though Bootstrap makes designing much easier, and in the end we're not all web designers here, are we?
"feature"
] | shunju | 2 |
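On the database question above, storing each outgoing notification's Message-ID (in whichever table it ends up) is what makes reply-by-email routing cheap: an incoming reply carries that ID in its `In-Reply-To`/`References` headers, so mapping it back to a post is a single header lookup. A stdlib sketch of that matching step, with made-up header values:

```python
import email

raw = (
    "Message-ID: <reply-1@example.org>\r\n"
    "In-Reply-To: <post-42@forum.example.org>\r\n"
    "Subject: Re: Some topic\r\n"
    "\r\n"
    "Reply body\r\n"
)
msg = email.message_from_string(raw)
# Look up the referenced Message-ID to find the post being replied to.
parent = msg["In-Reply-To"]
print(parent)  # <post-42@forum.example.org>
```

Whether the Message-ID column lives on `posts` or in a side table keyed by post ID, the lookup above is the same.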
sktime/pytorch-forecasting | pandas | 1,315 | TemporalFusionTransformer logging active during prediction by default | - PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.0.1+cpu
- Python version: 3.10
- Operating System: Ubuntu
### Expected behavior
I execute the following code to use a trained TemporalFusionTransformer model to make real-time (not batch) predictions. I am serving the model via FastAPI.
### Actual behavior
The predictions are coming out clean (no issues there). However, logs are also generated for every prediction.
How do I deactivate them? (I want no logging during prediction.)
### Code to reproduce the problem
```
model = TemporalFusionTransformer.load_from_checkpoint(
model_path + best_checkpoint)
model.eval()
model.freeze()
raw_predictions = model.predict(df, mode="raw", return_x=False, output_dir=None)
``` | closed | 2023-05-26T15:36:15Z | 2023-05-31T00:17:56Z | https://github.com/sktime/pytorch-forecasting/issues/1315 | [] | vilorel | 3 |
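One way to quiet per-prediction output, independent of the model API, is to raise the level of the loggers involved; the logger names below are assumptions to verify against the actual log lines:

```python
import logging

# Silence everything below ERROR from the suspected emitters.
for name in ("pytorch_lightning", "lightning.pytorch", "pytorch_forecasting"):
    logging.getLogger(name).setLevel(logging.ERROR)

print(logging.getLogger("pytorch_lightning").level == logging.ERROR)  # True
```

Note that progress bars from the internal Trainer are separate from the `logging` module and would typically need a trainer-level switch instead.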
streamlit/streamlit | deep-learning | 10,211 | Provide a new dataframe column type to insert Markdown text | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
I have nested data to render as a dataframe. Some columns would render neatly if I had the ability to use a Markdown column type. TextColumn renders neither HTML nor Markdown; I could not even insert a simple "\n".
### Why?
I tried inheriting from TextColumn and modifying its behavior to render Markdown, but it seems too complex. A simple string-based TextColumn is not enough for rendering complex data.
### How?
Provide an st.column_config.MarkdownColumn(), similar to TextColumn, where the user can input a Markdown-formatted string. The expectation is that the table displays the rendered Markdown.
### Additional Context
_No response_ | open | 2025-01-19T06:54:04Z | 2025-01-21T12:45:54Z | https://github.com/streamlit/streamlit/issues/10211 | [
"type:enhancement",
"feature:st.dataframe",
"feature:st.data_editor"
] | bakkiaraj | 1 |
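Until such a column type exists, one workaround is to render the table as a Markdown string with `st.markdown`, trading `st.dataframe` interactivity for formatted cells. A sketch of building that string (the data is illustrative):

```python
rows = [("alpha", "**bold** note"), ("beta", "first line<br>second line")]
lines = ["| name | note |", "|---|---|"]
lines += [f"| {a} | {b} |" for a, b in rows]
table_md = "\n".join(lines)
print(table_md)
# In an app: st.markdown(table_md, unsafe_allow_html=True)
```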
aiortc/aioquic | asyncio | 242 | [Test] Indefinite/infinite stream from server to client | There does not appear to be any means to sustain an indefinite/infinite stream (for example, streaming a live jam session; full-length movie or live event; web radio station) using either datagrams or streams using `aioquic`.
Kindly create tests for indefinite/infinite streams using datagrams and streams (where only one (1) write to server is used to start the stream) from server to client.
| closed | 2021-11-28T18:51:25Z | 2022-01-01T00:24:18Z | https://github.com/aiortc/aioquic/issues/242 | [] | guest271314 | 6 |
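Such a test would essentially pin down a server-side loop that keeps writing to one stream after a single client request. A transport-agnostic sketch of that loop shape; in a real test, `send` would wrap something like `QuicConnection.send_stream_data` (or the datagram equivalent), and `limit` would be replaced by cancellation:

```python
import asyncio

async def stream_forever(send, chunk=b"\x00" * 1200, limit=5):
    # `limit` bounds the demo; a live stream would loop until cancelled.
    for _ in range(limit):
        send(chunk)
        await asyncio.sleep(0)  # yield to the event loop between writes

sent = []
asyncio.run(stream_forever(sent.append))
print(len(sent))  # 5
```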
aleju/imgaug | machine-learning | 61 | Checking image shape | https://github.com/aleju/imgaug/blob/32e8aa7935e187492fc98951bce493244211fbdd/imgaug/imgaug.py#L204
Is this line correct? Shouldn't we rather check for `len(image.shape) == 2`?
If this is implemented, it will break the line https://github.com/aleju/imgaug/blob/master/imgaug/augmenters/geometric.py#L940, which should probably be updated to `result[i] = np.reshape(warped, result[i].shape)`. | closed | 2017-09-18T11:49:34Z | 2017-10-13T14:02:19Z | https://github.com/aleju/imgaug/issues/61 | [] | arahusky | 2 |
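For reference, the distinction the proposed check draws, in isolation (plain tuples stand in for numpy array shapes so the sketch needs no dependencies):

```python
def is_grayscale_2d(shape):
    # A (H, W) grayscale array has two axes; (H, W, C) has three,
    # which is what len(image.shape) == 2 distinguishes.
    return len(shape) == 2

print(is_grayscale_2d((480, 640)))     # True
print(is_grayscale_2d((480, 640, 3)))  # False
```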
Anjok07/ultimatevocalremovergui | pytorch | 1,077 | Dependency conflicts | Looks like there's a dependency conflict in the requirements.txt file
ERROR: Cannot install -r requirements.txt (line 12), -r requirements.txt (line 14) and resampy==0.2.2 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested resampy==0.2.2
librosa 0.9.2 depends on resampy>=0.2.2
matchering 2.0.6 depends on resampy>=0.4.2
Also it doesn't seem like torch 1.9.0 is available for python3.10 - I'm able to install with python3.9 but I didn't catch the python version requirements anywhere on the main page, could be worth explicitly mentioning (also possible I missed it). | open | 2024-01-03T14:41:22Z | 2024-01-03T16:13:27Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1077 | [] | kronkinatorix | 3 |
allenai/allennlp | nlp | 5,432 | ConfigurationError: srl not in acceptable choices for dataset_reader.type | Get the following error when I run the following code:
```
from allennlp.predictors.predictor import Predictor
download = "https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.09.03.tar.gz"
predictor = Predictor.from_path(download)
```
Same is the case for the code below:
```
from allennlp_models.pretrained import load_predictor
predictor = load_predictor("structured-prediction-srl-bert")
```
### ERROR
ConfigurationError: srl not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'multitask_shim', 'sequence_tagging', 'sharded', 'text_classification_json', 'sst_tokens', 'boolq', 'coref', 'preco', 'winobias', 'masked_language_modeling', 'next_token_lm', 'simple_language_modeling', 'copynet_seq2seq', 'seq2seq', 'cnn_dm', 'swag', 'commonsenseqa', 'piqa', 'fake', 'piqa_tt', 'quora_paraphrase', 'snli', 'transformer_superglue_rte', 'drop', 'qangaroo', 'quac', 'squad', 'squad1', 'squad2', 'transformer_squad', 'triviaqa']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically. | closed | 2021-10-06T12:36:47Z | 2021-11-06T22:50:10Z | https://github.com/allenai/allennlp/issues/5432 | [
"stale"
] | vinay1986 | 9 |
plotly/dash-table | plotly | 432 | With fixed rows, columns are as wide as the data and not the headers | In this example, "Date received" is cut off because "2015-02-01" is shorter than that column name. But in the complaints column, the column width is as wide as the cell's content.

```python
import dash
from dash.dependencies import Input, Output
import dash_html_components as html
import dash_design_kit as ddk
from dash_table import DataTable
import json
import pandas as pd
types = {
'id': 'numeric',
'ZIP code': 'text',
'Date received': 'datetime',
'Date sent to company': 'datetime',
}
df = pd.read_csv('1k-consumer-complaints.csv')
df['id'] = df['Unnamed: 0']
df = df.drop(['Unnamed: 0'], axis=1)
df = df.reindex(columns=['id']+df.columns[:-1].tolist())
app = dash.Dash(__name__)
app.scripts.config.serve_locally = True
app.layout = ddk.App([
DataTable(
id='demo-table',
data=df.to_dict('rows'),
columns=[{ 'id': i, 'name': i, 'type': types.get(i, 'any') } for i in df.columns],
filtering='be',
pagination_mode=False,
virtualization=True,
n_fixed_rows=2,
style_cell={
'min-width': '100px'
},
css=[
{ 'selector': '.row-1', 'rule': 'min-height: 500px;' }
]
),
html.Pre(id='filter-input')
])
def to_string(filter):
l_type = filter.get('type')
l_sub = filter.get('subType')
if l_type == 'relational-operator':
if l_sub == '=':
return '=='
else:
return l_sub
else:
return filter.get('value')
def handle_leaf(filter, df_filter):
return (to_string(filter), df_filter)
def handle_default(filter, df_filter):
left = filter.get('left', None)
right = filter.get('right', None)
(left_query, left_df) = to_panda_filter(left, df_filter)
(right_query, right_df) = to_panda_filter(right, left_df)
return ('{} {} {}'.format(
left_query,
to_string(filter) if left_query != '' and right_query != '' else '',
right_query
).strip(), right_df)
def handle_contains(filter, df_filter):
left = filter.get('left', None)
right = filter.get('right', None)
(left_query, left_df) = to_panda_filter(left, df_filter)
(right_query, right_df) = to_panda_filter(right, left_df)
return ('', right_df[right_df[left_query].astype(str).str.contains(right_query)])
def handle_datestartswith(filter, df_filter):
left = filter.get('left', None)
right = filter.get('right', None)
(left_query, left_df) = to_panda_filter(left, df_filter)
(right_query, right_df) = to_panda_filter(right, left_df)
return ('', right_df[right_df[left_query].astype(str).str.startswith(right_query)])
def to_panda_filter(filter, df_filter):
if filter is None:
return ('', df_filter)
l_type = filter.get('type')
l_sub = filter.get('subType')
left = filter.get('left', None)
right = filter.get('right', None)
if left is None and right is None:
return handle_leaf(filter, df_filter)
elif l_type == 'relational-operator' and l_sub == 'contains':
return handle_contains(filter, df_filter)
elif l_type == 'relational-operator' and l_sub == 'datestartswith':
return handle_datestartswith(filter, df_filter)
else:
return handle_default(filter, df_filter)
@app.callback(
[Output("demo-table", "data"),
Output("filter-input", "children")],
[Input("demo-table", "derived_filter_structure")]
)
def onFilterUpdate(filter):
(pandas_query, df_filter) = to_panda_filter(filter, df)
return [
df_filter.query(pandas_query).to_dict('rows') if pandas_query != '' else df_filter.to_dict('rows'),
json.dumps(filter, indent=4)
]
if __name__ == "__main__":
app.run_server(debug=True)
```
Note: Remove `limitation` from https://dash.plotly.com/datatable/height (https://github.com/plotly/dash-docs/pull/847/files) once fixed | open | 2019-05-14T04:36:50Z | 2023-04-27T09:34:57Z | https://github.com/plotly/dash-table/issues/432 | [
"bug"
] | chriddyp | 10 |
erdewit/ib_insync | asyncio | 692 | util.logToFile(path, pylogging.INFO) issues | Hello,
When using the logToFile feature, the file becomes so large that it leads to problems; the operating system seems to have limits on file size.
Additionally, log messages from other standard modules, like the `requests` module, are also captured.
Is it possible to support a rotating file setting, and is there a way to encapsulate the ib_insync logs from the rest of the application?
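For what it's worth, here is a sketch of a possible workaround using the standard `logging` module instead of `util.logToFile` (assuming the library's loggers live under the `ib_insync` name, which is the usual convention): it rotates the file and keeps other libraries' records out.

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate at ~10 MB, keeping 5 backups, so the file cannot grow unbounded.
handler = RotatingFileHandler("ib_insync.log", maxBytes=10_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

# Attach the handler only to the ib_insync logger hierarchy, so records
# from other modules (e.g. requests) are not captured in this file.
ib_logger = logging.getLogger("ib_insync")
ib_logger.setLevel(logging.INFO)
ib_logger.addHandler(handler)
ib_logger.propagate = False  # keep these records out of the root logger
```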
Best regards, | open | 2024-02-13T12:23:29Z | 2024-02-13T12:23:29Z | https://github.com/erdewit/ib_insync/issues/692 | [] | kaidaniel82 | 0 |
babysor/MockingBird | pytorch | 642 | What impact does fmax 8000 have on the model? | I want to do few-shot learning with about 100 samples, fine-tuning the decoder part of Tacotron.
<img width="433" alt="image" src="https://user-images.githubusercontent.com/32589854/178209451-0e75d0e2-f941-4030-aecf-4182328886db.png">
#507
I'd like to know what impact fmax=8000 has on voice similarity. Also, what does the attention plot in the output represent? The horizontal axis is the step count and the vertical axis is attention; how should the outputs below be analyzed? If the horizontal axis represents steps, why is it not monotonically increasing? How should this plot be read?
Many thanks!! I hope we can discuss this in depth.





| open | 2022-07-11T07:22:39Z | 2022-07-16T11:04:37Z | https://github.com/babysor/MockingBird/issues/642 | [] | SG-XM | 2 |
custom-components/pyscript | jupyter | 509 | Feature Request: Service Closure | The ability to use closure for Triggers and being able to define a range of trigger functions in a loop is very handy. Could you also extend this functionality to @service definitions? Looks like its not possible to create multiple services with
```
all_services = []
for i in range(10):
@service(f"test.service{i}')
def func():
log.debug(f"Service {i}")
all_services.append(func)
```
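For context, this matches Python's standard late binding of closure variables; a plain-Python illustration of the behavior in the loop above, plus the usual default-argument workaround (nothing here is pyscript-specific):

```python
# Every closure created in a loop reads i when called, not when
# defined, so they all see the final value.
funcs = [lambda: i for i in range(3)]
assert [f() for f in funcs] == [2, 2, 2]

# Binding the loop variable as a default argument captures its value
# at definition time instead.
funcs_fixed = [lambda i=i: i for i in range(3)]
assert [f() for f in funcs_fixed] == [0, 1, 2]
```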
All of the test.serviceX services will print the last value, service 9. | closed | 2023-08-08T00:58:32Z | 2023-08-08T03:50:06Z | https://github.com/custom-components/pyscript/issues/509 | [] | rajeee | 2
gee-community/geemap | streamlit | 736 | Add a timelapse module | Add a timelapse module dedicated to creating timelapses from common satellite imagery, e.g., Landsat, Sentinel, MODIS, ERA5.
References:
- https://github.com/initze/GEE_visualizations/blob/main/geemap_ERA5.ipynb
- https://twitter.com/i_nitze/status/1455975718463184902 | closed | 2021-11-03T19:53:05Z | 2021-11-11T04:10:46Z | https://github.com/gee-community/geemap/issues/736 | [
"Feature Request"
] | giswqs | 1 |
mljar/mercury | jupyter | 426 | Change the order of package installation and secrets provision | closed | 2024-03-07T10:31:16Z | 2024-03-07T10:35:32Z | https://github.com/mljar/mercury/issues/426 | [] | apjanusz | 0 | |
plotly/dash-bio | dash | 164 | Dash Needle Plot: Reset plot viewing window on dataset change | When changing a dataset, the zoom window at the bottom of the plot should also reset to view the entire x axis.

| closed | 2019-02-07T20:55:59Z | 2021-05-04T20:27:45Z | https://github.com/plotly/dash-bio/issues/164 | [
"App QA"
] | jackparmer | 4 |
sloria/TextBlob | nlp | 281 | Singularize and Lemmatize could use some error handling | I have some text `text = 'This is some of my long text that goes on ....'`. I want to get all the words lemmatized and in their singular form.
As seen in other issues like #274, this can fail when a word is already in its base or singular form, e.g. `your ---> ymy` and `this ---> thi`, etc.
Ideally:
```python
TextBlob(text.lower()).words.lemmatize().singularize()
```
would achieve this without issue.
Looking at [word list](https://textblob.readthedocs.io/en/dev/api_reference.html#textblob.blob.WordList) and more importantly [`Word`](https://textblob.readthedocs.io/en/dev/api_reference.html#textblob.blob.Word) there are no methods like `is_lemma` or `is_singular` which would be helpful for your library to call prior to trying to apply these operations. | open | 2019-08-16T13:22:08Z | 2019-08-16T13:22:08Z | https://github.com/sloria/TextBlob/issues/281 | [] | SumNeuron | 0 |
nonebot/nonebot2 | fastapi | 2,503 | Plugin: nonebot-plugin-imagemaster | ### PyPI 项目名
nonebot-plugin-imagemaster
### 插件 import 包名
nonebot_plugin_imagemaster
### 标签
[{"label":"修图","color":"#52d4ea"}]
### 插件配置项
_No Response_
| closed | 2023-12-28T06:52:02Z | 2023-12-29T08:24:40Z | https://github.com/nonebot/nonebot2/issues/2503 | [
"Plugin"
] | phquathi | 9 |
ultralytics/ultralytics | computer-vision | 19,617 | RuntimeError: torch.cat(): expected a non-empty list of Tensors | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
My kitti_seg.yaml is as follows:
train: /home/imrl-pgt/yolo_ws/src/ultralytics/kitti_seg/images/train_split
val: /home/imrl-pgt/yolo_ws/src/ultralytics/kitti_seg/images/val_split
masks:
train: /home/imrl-pgt/yolo_ws/src/ultralytics/kitti_seg/masks_remapped/train_split
val: /home/imrl-pgt/yolo_ws/src/ultralytics/kitti_seg/masks_remapped/val_split
names:
0: static
1: road
2: sidewalk
3: parking
4: building
5: wall
6: fence
7: pole
8: traffic light
9: traffic sign
10: vegetation
11: terrain
12: sky
13: person
14: rider
15: car
16: truck
17: bus
18: train
19: motorcycle
20: bicycle
nc: 21
I am trying to train using yolov8-seg.yaml with the above kitti_seg.yaml, but I get the following error:
Traceback (most recent call last):
File "/home/imrl-pgt/yolo_ws/src/ultralytics/learn.py", line 5, in <module>
results = model.train(
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/engine/model.py", line 810, in train
self.trainer.train()
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/engine/trainer.py", line 208, in train
self._do_train(world_size)
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/engine/trainer.py", line 433, in _do_train
self.metrics, self.fitness = self.validate()
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/engine/trainer.py", line 611, in validate
metrics = self.validator(self)
File "/home/imrl-pgt/anaconda3/envs/yolo8/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/engine/validator.py", line 199, in __call__
stats = self.get_stats()
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/models/yolo/detect/val.py", line 185, in get_stats
stats = {k: torch.cat(v, 0).cpu().numpy() for k, v in self.stats.items()} # to numpy
File "/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/models/yolo/detect/val.py", line 185, in <dictcomp>
stats = {k: torch.cat(v, 0).cpu().numpy() for k, v in self.stats.items()} # to numpy
RuntimeError: torch.cat(): expected a non-empty list of Tensors
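As a debugging aid (a guess on my side, not a confirmed diagnosis): this empty-tensor error during validation often appears when the validator finds no labels for the validation images, which a quick check like the following can rule out. The directory names are placeholders for your dataset layout.

```python
from pathlib import Path

def missing_labels(image_dir, label_dir, exts=(".png", ".jpg", ".jpeg")):
    """Return stems of images that have no matching YOLO .txt label file."""
    label_stems = {p.stem for p in Path(label_dir).glob("*.txt")}
    return sorted(
        p.stem
        for p in Path(image_dir).iterdir()
        if p.suffix.lower() in exts and p.stem not in label_stems
    )

# Example (paths are hypothetical):
# print(missing_labels("kitti_seg/images/val_split", "kitti_seg/labels/val_split"))
```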
### Additional
from ultralytics import YOLO
model = YOLO("/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/cfg/models/v8/yolov8-seg.yaml")
results = model.train(
data="/home/imrl-pgt/yolo_ws/src/ultralytics/ultralytics/cfg/datasets/kitti_seg.yaml",
imgsz=(640,640),
epochs=300,
batch=8,
workers=16,
device=0,
optimizer="AdamW",
lr0=0.001,
lrf=0.01,
momentum=0.937,
weight_decay=0.0005,
warmup_epochs=3.0,
warmup_momentum=0.8,
warmup_bias_lr=0.1,
resume=False,
box=7.5,
cls=0.5,
dfl=1.5,
pose=12.0,
kobj=1.0,
label_smoothing=0.0,
hsv_h=0.015,
hsv_s=0.7,
hsv_v=0.4,
degrees=0.0,
translate=0.1,
scale=0.5,
shear=0.0,
perspective=0.0,
flipud=0.0,
fliplr=0.5,
mosaic=0.8,
mixup=0.3,
auto_augment='randaugment',
erasing=0.4,
crop_fraction=1.0
)
This is the content of learn.py; training is performed by running `python learn.py`. | open | 2025-03-10T11:51:01Z | 2025-03-11T09:25:34Z | https://github.com/ultralytics/ultralytics/issues/19617 | [
"question",
"segment"
] | gyutae1009 | 2 |
docarray/docarray | fastapi | 1,115 | Proto stack optimization | # Context
TODO when stack mode and the proto are more stable.
`DocumentArrayStack` should optimize the proto serialization. It should store its columns directly stacked so that we don't need the intermediate step of stacking again after deserializing the proto.
- [x] implement a `DocumentArrayProtoStack` where we store the column directly
- [x] make sure we remove the data from the documents that is already inside the columns
- [ ] add a test which benchmarks the implementation against the naive way
Avaiga/taipy | data-visualization | 1,613 | [🐛 BUG] Can't edit tabular data in data node viewer | ### What went wrong? 🤔
I have a Taipy application. I have created a new scenario and want to edit the "demand" data node. In the data node viewer, when I press edit data, the data tab shows a lock sign with hovertext "locked by you" and I can't edit the data:

### Expected Behavior
I want to edit the datanode "demand".
### Steps to Reproduce Issue
1. Clone the demo repo
```bash
git clone https://github.com/Avaiga/demo-workforce-plan.git
```
2. Install the develop version of Taipy
```bash
pip install git+https://github.com/Avaiga/taipy.git
```
3. Install requirements
```bash
pip install -r requirements.txt
```
4. Run main.py
```
python main.py
```
5. Create a new scenario using the scenario selector and try editing the demand data node in the data node viewer
### Browsers
Chrome
### OS
Windows
### Version of Taipy
develop
### Acceptance Criteria
- [ ] Ensure the new code is unit tested, and check that the code coverage is at least 90%.
- [ ] Create related issues in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-30T17:27:09Z | 2024-08-27T08:10:41Z | https://github.com/Avaiga/taipy/issues/1613 | [
"Core",
"🟥 Priority: Critical",
"🖰 GUI",
"💥Malfunction",
"🔒 Staff only"
] | AlexandreSajus | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 801 | Helping users to make their settings in a more friendly interface. | closed | 2024-11-11T01:20:09Z | 2024-12-05T02:06:26Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/801 | [
"enhancement",
"stale"
] | surapuramakhil | 2 | |
nltk/nltk | nlp | 2,866 | word_tokenize/EN hangs on incorrect strings | Hi NLTK team,
I have a string that I pass to `word_tokenize` (which uses `PunktSentenceTokenizer` under the hood), and the call hangs. The string in question is taken from Wikipedia and is the result of some vandalism. It can be generated by:
```
text = 'swirley thing w' + 'e' * (884779 - len('swirley thing w'))
```
The call seems to hang; I did not dig in too deeply, but after running this for a couple of hours I just stopped the process.
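One defensive option (a heuristic I am sketching here, not an nltk feature) is to cap pathological character runs before tokenizing:

```python
import re

def cap_runs(text, max_run=50):
    """Truncate any run of one repeated character longer than max_run.
    A crude guard against vandalized input like the string above."""
    pattern = re.compile(r"(.)\1{%d,}" % max_run)
    return pattern.sub(lambda m: m.group(1) * max_run, text)

text = "swirley thing w" + "e" * (884779 - len("swirley thing w"))
capped = cap_runs(text)
assert len(capped) == len("swirley thing w") + 50
```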
What would be an acceptable way to process this robustly? I have a pipeline that contains correct sentences as well as, from time to time, this kind of sentence. | closed | 2021-10-26T21:56:15Z | 2021-11-26T11:58:19Z | https://github.com/nltk/nltk/issues/2866 | [
"bug",
"tokenizer",
"critical"
] | raffienficiaud | 4 |
unit8co/darts | data-science | 1,981 | Cumulative Sum on Timeseries class | I make frequent use of the TimeSeries.diff() function for forecasting, I was hoping to add the reverse of the function `cumulative sum aka cumsum` to the TimeSeries class in order to provide opposite functionality. I've found this useful in conjunction with baseline models such as NaiveMovingAverage in order to provide decent baselines.
optionally something that I would propose to add is a `starting value` which could be a value to start with when using cumsum. This is again useful with baseline predictions on differenced values.
example psudocode:
```python
ts = TimeSeries.from_values([1,2,3,4,5])
diffs = ts.diff() # [1,1,1,1]
reversed_diff = diffs.cumsum() # [1,2,3,4]
model = NaiveMovingAverage()
model.fit(diffs)
average_diffs = model.predict(5) # [1,1,1,1,1]
baseline_prediction = average_diffs.cumsum(starting_value=ts[-1]) # starting value of 5
# baseline prediction w/ NaiveMovingAverage would be [6,7,8,9,10]
```
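The round trip the pseudocode relies on can be sketched in plain numpy (this is only an illustration of the math, not the darts API): diff followed by cumsum with the starting value prepended recovers the original series.

```python
import numpy as np

values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
diffs = np.diff(values)  # [1, 1, 1, 1]

# Reversing a diff given a starting value: prepend it, then cumsum.
restored = np.cumsum(np.concatenate(([values[0]], diffs)))
assert np.array_equal(restored, values)
```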
**Describe proposed solution**
I would implement the above pseudocode by mirroring the implementation of diff, except I would use the underlying xarray implementation of cumsum() (which relies on the numpy implementation of cumsum).
**Describe potential alternatives**
Alternatively, if this doesn't seem useful, it's always possible to use the xarray implementation, but it requires a little more massaging. I figured it might be nice to have the reverse operation of `diff`, as it was something I found myself implementing at work.
**Additional context**
I'd be happy to contribute this if the maintainers think it would be a good addition! I already have a solution that I use; I would just modify it and add test cases to ensure functionality.
"improvement"
] | Eliotdoesprogramming | 2 |
snarfed/granary | rest-api | 103 | render <video>s | we currently only render links and thumbnail/poster images for videos. we should render actual `<video class="u-video">`s!
inspired by microformats/h-entry#5.
example silo copies of http://tantek.com/2016/055/t12/fast-walk-hike-beach-video
* https://brid.gy/post/flickr/39039882@N00/24615676754
* https://brid.gy/post/facebook/214611/10102143546607213
* https://brid.gy/post/twitter/t/702695028464812034
| closed | 2017-03-03T00:16:29Z | 2017-12-09T01:14:24Z | https://github.com/snarfed/granary/issues/103 | [
"now"
] | snarfed | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 41 | Easy Triplet Hard Negative Mining | Add this method to the miners: http://openaccess.thecvf.com/content_WACV_2020/papers/Xuan_Improved_Embeddings_with_Easy_Positive_Triplet_Mining_WACV_2020_paper.pdf | closed | 2020-04-10T19:36:14Z | 2020-12-11T03:54:55Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/41 | [
"new algorithm request",
"in progress"
] | KevinMusgrave | 11 |
sammchardy/python-binance | api | 1,324 | Binance API problem | There are a lot of coins on the Binance futures market with different numbers of decimal places in the price, for example: MKR 632.9, LTC 80.32, EOS 0.834, BAT 0.2124, KNC 0.59910, etc. If the price is rounded to 2 decimal places, then only coins with 1 or 2 decimal places are opened on the exchange correctly along with stop loss and take profit; if a coin has 3 decimal places, its price is also rounded, which leads to inaccuracy when setting take profit and stop loss as a percentage. If I round to 4 decimal places, then coins with 1 or 2 decimal places produce error -1111, and if I round to 2 decimal places (to hundredths), then coins with 4 decimal places produce error -2021. What should I do? Perhaps there is a request to get the price precision and quantity (QTY) precision, so the bot could poll its chosen pair before each trade? | open | 2023-05-13T23:08:12Z | 2023-05-15T05:24:36Z | https://github.com/sammchardy/python-binance/issues/1324 | [] | helpmeet | 1
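(Regarding the precision question above: each futures symbol's allowed precision is published in the exchange-info filters, e.g. PRICE_FILTER's tickSize and LOT_SIZE's stepSize; with python-binance these can be fetched via `client.futures_exchange_info()`. A sketch of rounding down to a filter step, using illustrative filter values rather than live exchange data:)

```python
from decimal import Decimal

# Illustrative filters in the shape returned per symbol by exchange info;
# real values must be fetched from the exchange before trading.
filters = {"PRICE_FILTER": {"tickSize": "0.0001"},
           "LOT_SIZE": {"stepSize": "0.001"}}

def round_step(value, step):
    """Round value down to a multiple of step (tickSize or stepSize)."""
    step = Decimal(step)
    return float((Decimal(str(value)) // step) * step)

price = round_step(0.83456, filters["PRICE_FILTER"]["tickSize"])
qty = round_step(12.34567, filters["LOT_SIZE"]["stepSize"])
assert price == 0.8345
assert qty == 12.345
```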
explosion/spaCy | deep-learning | 13,547 | ImportError: cannot import name symbols |
## How to reproduce the behaviour
trying to import `Corpus` or `Example` as shown here: https://spacy.io/usage/training#custom-code-readers-batchers
```python
from spacy.training import Corpus
```
produces an import error:
```txt
---------------------------------------------------------------------------
ImportError
Traceback (most recent call last)
Cell In[3], line 1
----> 1 from spacy.training import Corpus
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/__init__.py:13
10 # These are imported as part of the API
11 from thinc.api import Config, prefer_gpu, require_cpu, require_gpu # noqa: F401
---> 13 from . import pipeline # noqa: F401
14 from . import util
15 from .about import __version__ # noqa: F401
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/pipeline/__init__.py:1
----> 1 from .attributeruler import AttributeRuler
2 from .dep_parser import DependencyParser
3 from .edit_tree_lemmatizer import EditTreeLemmatizer
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/pipeline/attributeruler.py:8
6 from .. import util
7 from ..errors import Errors
----> 8 from ..language import Language
9 from ..matcher import Matcher
10 from ..scorer import Scorer
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/language.py:43
41 from .lang.tokenizer_exceptions import BASE_EXCEPTIONS, URL_MATCH
42 from .lookups import load_lookups
---> 43 from .pipe_analysis import analyze_pipes, print_pipe_analysis, validate_attrs
44 from .schemas import (
45 ConfigSchema,
46 ConfigSchemaInit,
(...)
49 validate_init_settings,
50 )
51 from .scorer import Scorer
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/pipe_analysis.py:6
3 from wasabi import msg
5 from .errors import Errors
----> 6 from .tokens import Doc, Span, Token
7 from .util import dot_to_dict
9 if TYPE_CHECKING:
10 # This lets us add type hints for mypy etc. without causing circular imports
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/tokens/__init__.py:1
----> 1 from ._serialize import DocBin
2 from .doc import Doc
3 from .morphanalysis import MorphAnalysis
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/tokens/_serialize.py:14
12 from ..errors import Errors
13 from ..util import SimpleFrozenList, ensure_path
---> 14 from ..vocab import Vocab
15 from ._dict_proxies import SpanGroups
16 from .doc import DOCBIN_ALL_ATTRS as ALL_ATTRS
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/vocab.pyx:1, in init spacy.vocab()
File ~/dev/project/.venv/lib/python3.10/site-packages/spacy/morphology.pyx:9, in init spacy.morphology()
ImportError: cannot import name symbols
```
I'm not sure why dots are used to navigate the submodules instead of the full module path,
i.e. `from . import pipeline` as opposed to `from spacy import pipeline`.
## Your Environment
- **spaCy version:** 3.7.5
- **Platform:** macOS-14.5-arm64-arm-64bit
- **Python version:** 3.10.13
- **Environment Information**: env created with pypoetry
| closed | 2024-06-25T12:38:48Z | 2024-07-26T00:02:37Z | https://github.com/explosion/spaCy/issues/13547 | [] | fschlz | 2 |
deeppavlov/DeepPavlov | tensorflow | 1,010 | 'ValueError: not enough values to unpack' in conll2003_reader.py | Hello.
I've got train files from http://files.deeppavlov.ai/deeppavlov_data/conll2003_v2.tar.gz
When I'm trying to read them by "conll2003_reader" I see got the follow error:
> deeppavlov/dataset_readers/conll2003_reader.py in parse_ner_file(self, file_name)
> 84 pos_tags.append(pos)
> 85 else:
> ---> 86 token, *_, tag = line.split()
> 87 tags.append(tag)
> 88 tokens.append(token)
>
> ValueError: not enough values to unpack (expected at least 2, got 1)
It looks like a bug.
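For triage, here is a defensive version of the failing unpack, which skips lines that do not carry both a token and a tag (a sketch, not the reader's actual fix):

```python
def parse_line(line):
    """Return (token, tag) for a CoNLL line, or None if it is malformed."""
    parts = line.split()
    if len(parts) < 2:
        return None  # e.g. a stray single-column line; caller may skip it
    token, *_, tag = parts
    return token, tag

assert parse_line("EU NNP B-ORG") == ("EU", "B-ORG")
assert parse_line("brokenline") is None
```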
| closed | 2019-09-23T15:03:52Z | 2019-09-26T13:56:31Z | https://github.com/deeppavlov/DeepPavlov/issues/1010 | [] | ldSidious | 5 |
jwkvam/bowtie | jupyter | 64 | creating new components docs needs some editing | to reflect recent changes to the python component api | closed | 2016-12-13T17:25:00Z | 2016-12-14T21:10:09Z | https://github.com/jwkvam/bowtie/issues/64 | [
"documentation"
] | jwkvam | 0 |
koaning/scikit-lego | scikit-learn | 298 | [FEATURE] FairnessWarning | if we have datasets where fairness is an issue, maybe it is naive to assume that folks will read the documentation. raising an explicit fairness warning might be a good idea. it's part of what could have gone better with the load_boston dataset.
@MBrouns agree? | closed | 2020-02-19T23:16:03Z | 2020-05-02T08:46:27Z | https://github.com/koaning/scikit-lego/issues/298 | [
"enhancement",
"good first issue"
] | koaning | 1 |
iperov/DeepFaceLab | machine-learning | 731 | Some issues with reading faces | ## Expected behavior
Reading faces from video.
## Actual behavior
Functions "Video A,B , Images A,b" works, Faces A, B didnt work so MODEL ,SWAPS and MOVIE dont work too
## Steps to reproduce
This error is shown:
Traceback (most recent call last):
File "faceswap\faceswap.py", line 8, in <module>
from lib.cli import FullHelpArgumentParser
File "C:\OpenFaceSwap\faceswap\lib\cli.py", line 7, in <module>
from lib.FaceFilter import FaceFilter
File "C:\OpenFaceSwap\faceswap\lib\FaceFilter.py", line 3, in <module>
import face_recognition
File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\__init__.py", line 7, in <module>
from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
File "C:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\face_recognition\api.py", line 4, in <module>
import dlib
ImportError: DLL load failed:
I've tried "pip install numpy" and "pip install pillow", but nothing changed.
## Other relevant information
- Windows 10, 64x
- Python 3.8.2 | open | 2020-04-28T14:28:23Z | 2023-06-08T20:39:01Z | https://github.com/iperov/DeepFaceLab/issues/731 | [] | TheKingLozo | 3 |
pydantic/pydantic | pydantic | 11,344 | Json[T] is very awkward to use | I'm in the unfortunate position where I have to both deserialize from and serialize to JSON nested inside JSON.
Deserializing works very well; however, serializing is very awkward.
I have the following structure where the outer json describes the type of the inner json.
````python
from typing import Literal
from pydantic import BaseModel, Json
class EventPayload(BaseModel):
type: str
value: str
class DoubleJsonEvent(BaseModel):
type: Literal['EventType1']
payload: Json[EventPayload]
````
---
Now I want to create a ``DoubleJsonEvent`` and serialize it.
---
What doesn't work:
````python
new = DoubleJsonEvent(
type='EventType1',
payload=EventPayload(type='EventType', value='EventPayload')
)
````
````text
pydantic_core._pydantic_core.ValidationError: 1 validation error for DoubleJsonEvent
payload
JSON input should be string, bytes or bytearray [type=json_type, input_value=EventPayload(type='EventT...', value='EventPayload'), input_type=EventPayload]
For further information visit https://errors.pydantic.dev/2.10/v/json_type
````
````python
new = DoubleJsonEvent(
type='EventType1',
payload={'type': 'EvenType', 'value': 'EventPayload'}
)
````
````text
pydantic_core._pydantic_core.ValidationError: 1 validation error for DoubleJsonEvent
payload
JSON input should be string, bytes or bytearray [type=json_type, input_value={'type': 'EvenType', 'value': 'EventPayload'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.10/v/json_type
````
---
What works
1. Dumping to json before creating the object
````python
new = DoubleJsonEvent(
type='EventType1',
payload=EventPayload(type='EventType', value='EventPayload').model_dump_json()
)
print(new.model_dump_json(round_trip=True))
````
````json
{"type":"EventType1","payload":"{\"type\":\"EventType\",\"value\":\"EventPayload\"}"}
````
2. Creating the object with dummy values and then assigning the new value
````python
new = DoubleJsonEvent(
type='EventType1',
payload='{"type":"DummyType","value":"DummyValue"}'
)
new.payload = EventPayload(type='RealType', value='RealPayload')
print(new.model_dump_json(round_trip=True))
````
````text
{"type":"EventType1","payload":"{\"type\":\"RealType\",\"value\":\"RealPayload\"}"}
````
Both working solutions for creating a `DoubleJsonEvent` seem like very hacky workarounds and are not very nice.
Additionally I have to set `round_trip=True` when serializing the model instance which might have unexpected side effects (e.g. one json-in-json should be serialized, the other not).
I would have expected some way to set this on the model or field, the same way as `exclude`.
What is the proper way to achieve this?
---
Idea or suggestion (in case there is no elegant way today)
Add the possibility to annotate the JSON type:
````python
MyJson = Annotated[Json, JsonField(serialize_to_json=True, allow_model_instances=True)]
````
There one could control how the object is serialized and whether model instances are accepted directly.
| closed | 2025-01-25T07:24:02Z | 2025-02-05T14:59:56Z | https://github.com/pydantic/pydantic/issues/11344 | [] | spacemanspiff2007 | 12 |
sammchardy/python-binance | api | 862 | Websocket freezing | Hello!
I noticed that after some time my BinanceSocketManager stalls on the await ms.recv() line. The time it freezes is constantly between 60-65 seconds. After the time it starts working again for a short moment until it freezes again for the same period of time.
I can get rid of the error by removing the line await asyncio.sleep(0.1) in my code. But in my actual trading bot there is some math that needs to be done and the sleep is command is replacing it. I don't know if this is a bug or am I doing something wrong.
Here's my code to illustrate the issue:
```
import asyncio
import time
from binance import BinanceSocketManager, AsyncClient
async def socket():
messages = 0
streams = ["ethbtc@bookTicker", "btcusdt@trade", "ethusdt@bookTicker"]
start_time = time.time()
client = await AsyncClient.create()
bm = BinanceSocketManager(client)
multi_socket = bm.multiplex_socket(streams)
async with multi_socket as ms:
while True:
messages += 1
msg = await ms.recv()
time.sleep(0.1) # This illustrates some math that would be done
# print current message's stream and the total amount of messages received
message = msg["stream"]
msg = f"\rMessages: {messages} Message: {message} Time: {float(time.time()-start_time)} "
print(msg, end="")
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(socket())
```
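In case it helps triage: one possible workaround (assuming the freeze comes from the blocking sleep starving the event loop and its keepalive pings, which I have not verified against the library internals) is to push the blocking math onto a thread so `recv` and ping/pong keep being serviced:

```python
import asyncio
import time

def heavy_math():
    time.sleep(0.1)  # stand-in for the blocking computation
    return 42

async def handle_message(msg):
    loop = asyncio.get_running_loop()
    # Run the blocking work in the default thread pool so the event
    # loop stays free to service the websocket in the meantime.
    return await loop.run_in_executor(None, heavy_math)

result = asyncio.run(handle_message({"stream": "demo"}))
assert result == 42
```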
**Expected behavior**
Expecting a continuous stream of messages. Instead the stream freezes after around 45 seconds. The freeze always lasts a bit over a minute, after which the stream resumes until another freeze.
**Environment (please complete the following information):**
- Python version: [e.g. 3.9]
- OS: Windows
- python-binance version 1.0.12 | open | 2021-05-18T08:51:11Z | 2021-06-07T18:09:11Z | https://github.com/sammchardy/python-binance/issues/862 | [] | kokojiji01 | 3 |
httpie/cli | rest-api | 627 | show Garbled message when Content-Type doesn't have charset part | Many websites' HTTP `Content-Type` headers do not include a charset part, like:
> Content-Type: text/html
HTTPie then shows garbled text in the terminal/iTerm2 on macOS 10.13 when the HTML contains Chinese characters encoded as UTF-8 (the same page displays fine with curl), but when I change the header like this:
> Content-Type: text/html; charset=utf-8
then it displays correctly.
The garbled text appears below in `body > h1`:
```
$ http --debug dev1.cn/test.html
HTTPie 0.9.9
Requests 2.12.3
Pygments 2.1.3
Python 3.6.3 (default, Oct 4 2017, 06:09:15)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]
/usr/local/Cellar/httpie/0.9.9/libexec/bin/python3.6
Darwin 17.0.0
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/docs#config",
"httpie": "0.9.9"
},
"default_options": "[]"
},
"config_dir": "/Users/hh/.httpie",
"is_windows": false,
"stderr": "<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
>>> requests.request(**{
"allow_redirects": false,
"auth": "None",
"cert": "None",
"data": {},
"files": {},
"headers": {
"User-Agent": "HTTPie/0.9.9"
},
"method": "get",
"params": {},
"proxies": {},
"stream": true,
"timeout": 30,
"url": "http://dev1.cn/test.html",
"verify": true
})
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html
Date: Sun, 05 Nov 2017 15:23:00 GMT
ETag: W/"59ff2cc9-a7"
Last-Modified: Sun, 05 Nov 2017 15:22:49 GMT
Server: nginx/1.10.3 (Ubuntu)
Transfer-Encoding: chunked
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title></title>
</head>
<body>
<h1>ä½ å¥½ï¼ä¸ç</h1>
</body>
</html>
```
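For context, the `ä½ å¥½ï¼ä¸ç` above is the classic UTF-8-read-as-Latin-1 mojibake: with no charset parameter, clients historically fell back to ISO-8859-1 for `text/*` (per RFC 2616) instead of sniffing the page's `<meta charset>`. The round trip in plain Python:

```python
text = "你好!世界"                 # the page's <h1> content, served as UTF-8
raw = text.encode("utf-8")

# Decoding the UTF-8 bytes as ISO-8859-1 maps every byte to its own character,
# which is exactly the garble shown in the output above:
garbled = raw.decode("iso-8859-1")
assert garbled.startswith("ä½")    # 你 -> E4 BD A0 -> 'ä', '½', NBSP

# Honouring the <meta charset="utf-8"> recovers the original text:
assert raw.decode("utf-8") == text
```

If memory serves, later HTTPie releases added a `--response-charset` override for exactly this situation; fixing the server's `Content-Type` header also resolves it.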
the garbled message disturbed me very much, which not exists in curl , but I like httpie more, can you improve it ? | closed | 2017-11-05T15:20:43Z | 2021-09-29T18:22:20Z | https://github.com/httpie/cli/issues/627 | [] | hh-in-zhuzhou | 1 |
roboflow/supervision | tensorflow | 1,333 | How to determine the source coordinates better in computer vision speed estimation | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
blog https://blog.roboflow.com/estimate-speed-computer-vision/

Is there a good tool to calculate the coordinates of A, B, C, and D? Thank you very much.
### Additional
_No response_ | closed | 2024-07-08T14:47:59Z | 2024-07-08T20:53:10Z | https://github.com/roboflow/supervision/issues/1333 | [
"question"
] | dearMOMO | 0 |
ijl/orjson | numpy | 219 | Cannot install orjson 3.6.4 on Debian 10. | ```console
# pip3 install orjson
Collecting orjson
Using cached https://files.pythonhosted.org/packages/f6/1e/61225fb33a9f614ac67f595a80008ded0bdc58ea20c0201703a166d37fca/orjson-3.6.4.tar.gz
Installing build dependencies ... done
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.7/tokenize.py", line 447, in open
buffer = _builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-install-krpawaz1/orjson/setup.py'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-krpawaz1/orjson/
```
| closed | 2021-11-10T17:41:10Z | 2021-12-05T15:48:07Z | https://github.com/ijl/orjson/issues/219 | [] | hardpoorcorn | 2 |
nteract/papermill | jupyter | 402 | How to run code blocks with particular tags only via papermill? | Is there a way to run a notebook with papermill where we only want to execute code cells that carry particular tags?
For example: execute, via the CLI, all code cells in the input notebook that have the tag "Data_Preprocessing" and skip the rest. | open | 2019-07-16T18:43:53Z | 2019-08-15T19:00:20Z | https://github.com/nteract/papermill/issues/402 | [
"reference:tags",
"reference:extensions"
] | aayush-jain18 | 3 |
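A stopgap for the tag-filtered run asked about above: since `.ipynb` files are plain JSON with tags stored under each cell's `metadata.tags`, untagged code cells can be stripped before handing the notebook to papermill (a sketch, not a papermill feature):

```python
import json

def keep_tagged_cells(nb_path: str, out_path: str, tag: str) -> int:
    """Write a copy of the notebook containing only code cells carrying `tag`.
    Non-code cells are kept so the notebook still renders; returns kept count."""
    with open(nb_path, encoding="utf-8") as f:
        nb = json.load(f)
    kept = []
    for cell in nb["cells"]:
        if cell["cell_type"] != "code":
            kept.append(cell)
        elif tag in cell.get("metadata", {}).get("tags", []):
            kept.append(cell)
    nb["cells"] = kept
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(nb, f)
    return len(kept)
```

The filtered copy then runs as usual, e.g. `papermill filtered.ipynb output.ipynb`.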
SciTools/cartopy | matplotlib | 1,796 | I don't think "ogc_clients.py" should rely on urn exact match searches. | ### Description
For example, I came across "urn:ogc:def:crs:EPSG:6.3:3857". There is no end to increasing the definitions of METERS_PER_UNIT and _URN_TO_CRS each time we encounter such a variant.
For example, what about the following pattern matching?
```python
class OrderedPatkeyDict(collections.OrderedDict):
def __init__(self, *a):
super(OrderedPatkeyDict, self).__init__(*a)
def get(self, item, fallback=None):
try:
return self.__getitem__(item)
except KeyError:
return fallback
def __getitem__(self, item):
from fnmatch import fnmatch
for k, v in self.items():
if fnmatch(item, k):
return v
raise KeyError(item)
METERS_PER_UNIT = OrderedPatkeyDict({
'urn:ogc:def:crs:EPSG:*:27700': 1,
'urn:ogc:def:crs:EPSG:*:900913': 1,
'urn:ogc:def:crs:OGC:1.3:CRS84': _WGS84_METERS_PER_UNIT,
'urn:ogc:def:crs:EPSG:*:3031': 1,
'urn:ogc:def:crs:EPSG:*:3413': 1,
'urn:ogc:def:crs:EPSG:*:3857': 1,
#'urn:ogc:def:crs:EPSG:*:4326': _WGS84_METERS_PER_UNIT,
})
_URN_TO_CRS = OrderedPatkeyDict(
[('urn:ogc:def:crs:OGC:1.3:CRS84', ccrs.PlateCarree()),
('urn:ogc:def:crs:EPSG:*:4326', ccrs.PlateCarree()),
('urn:ogc:def:crs:EPSG:*:900913', ccrs.GOOGLE_MERCATOR),
('urn:ogc:def:crs:EPSG:*:27700', ccrs.OSGB(approx=True)),
('urn:ogc:def:crs:EPSG:*:3031', ccrs.Stereographic(
central_latitude=-90,
true_scale_latitude=-71)),
('urn:ogc:def:crs:EPSG:*:3413', ccrs.Stereographic(
central_longitude=-45,
central_latitude=90,
true_scale_latitude=70)),
('urn:ogc:def:crs:EPSG:*:3857', ccrs.GOOGLE_MERCATOR),
])
```
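Applied to the version-qualified URN from the traceback below, the lookup behaves like this (a quick self-contained check of the `fnmatch` idea, with toy string values standing in for real CRS objects):

```python
from collections import OrderedDict
from fnmatch import fnmatch

class PatternKeyDict(OrderedDict):
    """First match wins, as in the proposal above."""
    def __getitem__(self, item):
        for pattern, value in self.items():
            if fnmatch(item, pattern):
                return value
        raise KeyError(item)

    def get(self, item, fallback=None):
        try:
            return self[item]
        except KeyError:
            return fallback

crs_by_urn = PatternKeyDict([
    ("urn:ogc:def:crs:EPSG:*:3857", "GOOGLE_MERCATOR"),  # stand-in values,
    ("urn:ogc:def:crs:EPSG:*:4326", "PlateCarree"),      # not real CRS objects
])

# The version-qualified URN from the traceback resolves without a new entry:
assert crs_by_urn["urn:ogc:def:crs:EPSG:6.3:3857"] == "GOOGLE_MERCATOR"
# ...and so does an empty version segment ('*' also matches an empty string):
assert crs_by_urn["urn:ogc:def:crs:EPSG::4326"] == "PlateCarree"
assert crs_by_urn.get("urn:ogc:def:crs:EPSG:6.3:9999") is None
```

One caveat: `fnmatch` passes both arguments through `os.path.normcase`, so matching is case-insensitive on Windows; `fnmatch.fnmatchcase` is the stricter choice for URNs.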
#### Code to reproduce
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
def main():
url = 'https://mrdata.usgs.gov/mapcache/wmts'
layers = ['mrds-PGE_OS', 'mrds-STN_D_G',]
fig = plt.figure()
ax = fig.add_subplot(projection=ccrs.GOOGLE_MERCATOR)
for layer in layers:
ax.add_wmts(url, layer)
ax.coastlines("110m")
plt.show()
```
#### Traceback
```
Traceback (most recent call last):
File "c:\Users\hhsprings\somewhere\wmts_exam_01.py", line 18, in <module>
main()
File "c:\Users\hhsprings\somewhere\wmts_exam_01.py", line 12, in main
ax.add_wmts(url, layer)
File "C:\Program Files\Python39\lib\site-packages\cartopy\mpl\geoaxes.py", line 2141, in add_wmts
return self.add_raster(wmts, **kwargs)
File "C:\Program Files\Python39\lib\site-packages\cartopy\mpl\geoaxes.py", line 1224, in add_raster
raster_source.validate_projection(self.projection)
File "C:\Program Files\Python39\lib\site-packages\cartopy\io\ogc_clients.py", line 421, in validate_projection
self._matrix_set_name(projection)
File "C:\Program Files\Python39\lib\site-packages\cartopy\io\ogc_clients.py", line 416, in _matrix_set_name
raise ValueError(msg)
ValueError: Unable to find tile matrix for projection.
Projection: <cartopy.crs.Mercator object at 0x000001288DD94590>
Available tile CRS URNs:
urn:ogc:def:crs:EPSG:6.3:3857
urn:ogc:def:crs:EPSG:6.3:4326
urn:ogc:def:crs:EPSG:6.3:900913
```
<details>
<summary>Full environment definition</summary>
### Operating system
Windows 10
### Cartopy version
0.19.0.post1
</details>
| open | 2021-06-01T20:25:53Z | 2021-06-02T00:06:37Z | https://github.com/SciTools/cartopy/issues/1796 | [] | hhsprings | 0 |
OthersideAI/self-operating-computer | automation | 7 | wrong coordinate | I asked it to play Spotify and it guessed the play button at x 78%, y 46%, which is wrong.

Maybe for a more precise guess we could have more gridlines?
Something like this, maybe:

| closed | 2023-11-28T11:30:55Z | 2023-12-21T00:49:06Z | https://github.com/OthersideAI/self-operating-computer/issues/7 | [] | daaniyaan | 11 |
assafelovic/gpt-researcher | automation | 489 | pass config.json to GPTResearcher | hi,
I want to pass config.json into GPTResearcher like: researcher = GPTResearcher(query=query, report_type="research_report", config_path='config.json')
but GPTResearcher uses the default config. Why?
The JSON file is:
```json
{
  "SEARCH_RETRIEVER": "serpapi",
  "EMBEDDING_PROVIDER": "azureopenai",
  "LLM_PROVIDER": "azureopenai",
  "FAST_LLM_MODEL": "gpt_35_16k",
  "SMART_LLM_MODEL": "gpt_35_16k",
  "FAST_TOKEN_LIMIT": 2000,
  "SMART_TOKEN_LIMIT": 4000,
  "BROWSE_CHUNK_MAX_LENGTH": 8192,
  "SUMMARY_TOKEN_LIMIT": 700,
  "TEMPERATURE": 0.2,
  "USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0",
  "MAX_SEARCH_RESULTS_PER_QUERY": 5,
  "MEMORY_BACKEND": "local",
  "TOTAL_WORDS": 1000,
  "REPORT_FORMAT": "APA",
  "MAX_ITERATIONS": 3,
  "AGENT_ROLE": "None",
  "SCRAPER": "bs",
  "MAX_SUBTOPICS": 3
}
```
How can I fix it? | closed | 2024-05-10T17:20:02Z | 2024-05-14T15:58:56Z | https://github.com/assafelovic/gpt-researcher/issues/489 | [] | saeid976 | 1 |
pytest-dev/pytest-qt | pytest | 457 | 3 tests fail | ```
========================================================================================== FAILURES ==========================================================================================
_________________________________________________________________________________ test_logging_fails_ignore __________________________________________________________________________________
testdir = <Testdir local('/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0')>
def test_logging_fails_ignore(testdir):
"""
Test qt_log_ignore config option.
:type testdir: _pytest.pytester.TmpTestdir
"""
testdir.makeini(
"""
[pytest]
qt_log_level_fail = CRITICAL
qt_log_ignore =
WM_DESTROY.*sent
WM_PAINT not handled
"""
)
testdir.makepyfile(
"""
from pytestqt.qt_compat import qt_api
import pytest
def test1():
qt_api.qCritical('a critical message')
def test2():
qt_api.qCritical('WM_DESTROY was sent')
def test3():
qt_api.qCritical('WM_DESTROY was sent')
assert 0
def test4():
qt_api.qCritical('WM_PAINT not handled')
qt_api.qCritical('another critical message')
"""
)
res = testdir.runpytest()
lines = [
# test1 fails because it has emitted a CRITICAL message and that message
# does not match any regex in qt_log_ignore
"*_ test1 _*",
"*Failure: Qt messages with level CRITICAL or above emitted*",
"*QtCriticalMsg: a critical message*",
# test2 succeeds because its message matches qt_log_ignore
# test3 fails because of an assert, but the ignored message should
# still appear in the failure message
"*_ test3 _*",
"*AssertionError*",
"*QtCriticalMsg: WM_DESTROY was sent*(IGNORED)*",
# test4 fails because one message is ignored but the other isn't
"*_ test4 _*",
"*Failure: Qt messages with level CRITICAL or above emitted*",
"*QtCriticalMsg: WM_PAINT not handled*(IGNORED)*",
"*QtCriticalMsg: another critical message*",
# summary
"*3 failed, 1 passed*",
]
> res.stdout.fnmatch_lines(lines)
E Failed: nomatch: '*_ test1 _*'
E and: '============================= test session starts =============================='
E and: 'platform freebsd13 -- Python 3.9.13, pytest-7.1.3, pluggy-1.0.0'
E and: 'PyQt5 5.15.6 -- Qt runtime 5.15.5 -- Qt compiled 5.15.5'
E and: 'Using --randomly-seed=3508469783'
E and: 'rootdir: /tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0, configfile: tox.ini'
E and: 'plugins: qt-4.1.0, forked-1.4.0, hypothesis-6.55.0, cov-2.9.0, rerunfailures-10.1, xdist-2.5.0, randomly-3.12.0, typeguard-2.13.3'
E and: 'collected 4 items'
E and: ''
E and: 'test_logging_fails_ignore.py F.FF [100%]'
E and: ''
E and: '=================================== FAILURES ==================================='
E and: '____________________________________ test3 _____________________________________'
E and: ''
E and: ' def test3():'
E and: " qt_api.qCritical('WM_DESTROY was sent')"
E and: '> assert 0'
E and: 'E assert 0'
E and: ''
E and: 'test_logging_fails_ignore.py:10: AssertionError'
E and: '----------------------------- Captured Qt messages -----------------------------'
E and: '/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test3:9:'
E and: ''
E and: ' QtCriticalMsg: WM_DESTROY was sent (IGNORED)'
E fnmatch: '*_ test1 _*'
E with: '____________________________________ test1 _____________________________________'
E fnmatch: '*Failure: Qt messages with level CRITICAL or above emitted*'
E with: 'test_logging_fails_ignore.py:4: Failure: Qt messages with level CRITICAL or above emitted'
E nomatch: '*QtCriticalMsg: a critical message*'
E and: '----------------------------- Captured Qt messages -----------------------------'
E and: '/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test1:5:'
E and: ''
E fnmatch: '*QtCriticalMsg: a critical message*'
E with: ' QtCriticalMsg: a critical message'
E nomatch: '*_ test3 _*'
E and: '____________________________________ test4 _____________________________________'
E and: 'test_logging_fails_ignore.py:11: Failure: Qt messages with level CRITICAL or above emitted'
E and: '----------------------------- Captured Qt messages -----------------------------'
E and: '/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test4:12:'
E and: ''
E and: ' QtCriticalMsg: WM_PAINT not handled (IGNORED)'
E and: '/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test4:13:'
E and: ''
E and: ' QtCriticalMsg: another critical message'
E and: '=========================== short test summary info ============================'
E and: 'FAILED test_logging_fails_ignore.py::test3 - assert 0'
E and: 'FAILED test_logging_fails_ignore.py::test1'
E and: 'FAILED test_logging_fails_ignore.py::test4'
E and: '========================= 3 failed, 1 passed in 0.04s =========================='
E remains unmatched: '*_ test3 _*'
/disk-samsung/freebsd-ports/devel/py-pytest-qt/work-py39/pytest-qt-4.1.0/tests/test_logging.py:339: Failed
------------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------------
============================= test session starts ==============================
platform freebsd13 -- Python 3.9.13, pytest-7.1.3, pluggy-1.0.0
PyQt5 5.15.6 -- Qt runtime 5.15.5 -- Qt compiled 5.15.5
Using --randomly-seed=3508469783
rootdir: /tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0, configfile: tox.ini
plugins: qt-4.1.0, forked-1.4.0, hypothesis-6.55.0, cov-2.9.0, rerunfailures-10.1, xdist-2.5.0, randomly-3.12.0, typeguard-2.13.3
collected 4 items
test_logging_fails_ignore.py F.FF [100%]
=================================== FAILURES ===================================
____________________________________ test3 _____________________________________
def test3():
qt_api.qCritical('WM_DESTROY was sent')
> assert 0
E assert 0
test_logging_fails_ignore.py:10: AssertionError
----------------------------- Captured Qt messages -----------------------------
/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test3:9:
QtCriticalMsg: WM_DESTROY was sent (IGNORED)
____________________________________ test1 _____________________________________
test_logging_fails_ignore.py:4: Failure: Qt messages with level CRITICAL or above emitted
----------------------------- Captured Qt messages -----------------------------
/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test1:5:
QtCriticalMsg: a critical message
____________________________________ test4 _____________________________________
test_logging_fails_ignore.py:11: Failure: Qt messages with level CRITICAL or above emitted
----------------------------- Captured Qt messages -----------------------------
/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test4:12:
QtCriticalMsg: WM_PAINT not handled (IGNORED)
/tmp/pytest-of-yuri/pytest-0/test_logging_fails_ignore0/test_logging_fails_ignore.py:test4:13:
QtCriticalMsg: another critical message
=========================== short test summary info ============================
FAILED test_logging_fails_ignore.py::test3 - assert 0
FAILED test_logging_fails_ignore.py::test1
FAILED test_logging_fails_ignore.py::test4
========================= 3 failed, 1 passed in 0.04s ==========================
_________________________________________________________________________________ test_exceptions_dont_leak __________________________________________________________________________________
testdir = <Testdir local('/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0')>
@pytest.mark.xfail(
condition=sys.version_info[:2] == (3, 4),
reason="failing in Python 3.4, which is about to be dropped soon anyway",
)
def test_exceptions_dont_leak(testdir):
"""
Ensure exceptions are cleared when an exception occurs and don't leak (#187).
"""
testdir.makepyfile(
"""
from pytestqt.qt_compat import qt_api
import gc
import weakref
class MyWidget(qt_api.QtWidgets.QWidget):
def event(self, ev):
called.append(1)
raise RuntimeError('event processed')
weak_ref = None
called = []
def test_1(qapp):
global weak_ref
w = MyWidget()
weak_ref = weakref.ref(w)
qapp.postEvent(w, qt_api.QtCore.QEvent(qt_api.QtCore.QEvent.Type.User))
qapp.processEvents()
def test_2(qapp):
assert called
gc.collect()
assert weak_ref() is None
"""
)
result = testdir.runpytest()
> result.stdout.fnmatch_lines(["*1 failed, 1 passed*"])
E Failed: nomatch: '*1 failed, 1 passed*'
E and: '============================= test session starts =============================='
E and: 'platform freebsd13 -- Python 3.9.13, pytest-7.1.3, pluggy-1.0.0'
E and: 'PyQt5 5.15.6 -- Qt runtime 5.15.5 -- Qt compiled 5.15.5'
E and: 'Using --randomly-seed=3508469783'
E and: 'rootdir: /tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0'
E and: 'plugins: qt-4.1.0, forked-1.4.0, hypothesis-6.55.0, cov-2.9.0, rerunfailures-10.1, xdist-2.5.0, randomly-3.12.0, typeguard-2.13.3'
E and: 'collected 2 items'
E and: ''
E and: 'test_exceptions_dont_leak.py FF [100%]'
E and: ''
E and: '=================================== FAILURES ==================================='
E and: '____________________________________ test_2 ____________________________________'
E and: ''
E and: 'qapp = <PyQt5.QtWidgets.QApplication object at 0x8fc7e2430>'
E and: ''
E and: ' def test_2(qapp):'
E and: '> assert called'
E and: 'E assert []'
E and: ''
E and: 'test_exceptions_dont_leak.py:22: AssertionError'
E and: '____________________________________ test_1 ____________________________________'
E and: 'CALL ERROR: Exceptions caught in Qt event loop:'
E and: '________________________________________________________________________________'
E and: 'Traceback (most recent call last):'
E and: ' File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event'
E and: " raise RuntimeError('event processed')"
E and: 'RuntimeError: event processed'
E and: '________________________________________________________________________________'
E and: 'Traceback (most recent call last):'
E and: ' File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event'
E and: " raise RuntimeError('event processed')"
E and: 'RuntimeError: event processed'
E and: '________________________________________________________________________________'
E and: '----------------------------- Captured stderr call -----------------------------'
E and: 'Exceptions caught in Qt event loop:'
E and: '________________________________________________________________________________'
E and: 'Traceback (most recent call last):'
E and: ' File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event'
E and: " raise RuntimeError('event processed')"
E and: 'RuntimeError: event processed'
E and: '________________________________________________________________________________'
E and: 'Exceptions caught in Qt event loop:'
E and: '________________________________________________________________________________'
E and: 'Traceback (most recent call last):'
E and: ' File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event'
E and: " raise RuntimeError('event processed')"
E and: 'RuntimeError: event processed'
E and: '________________________________________________________________________________'
E and: '=========================== short test summary info ============================'
E and: 'FAILED test_exceptions_dont_leak.py::test_2 - assert []'
E and: 'FAILED test_exceptions_dont_leak.py::test_1'
E and: '============================== 2 failed in 0.04s ==============================='
E remains unmatched: '*1 failed, 1 passed*'
/disk-samsung/freebsd-ports/devel/py-pytest-qt/work-py39/pytest-qt-4.1.0/tests/test_exceptions.py:381: Failed
------------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------------
============================= test session starts ==============================
platform freebsd13 -- Python 3.9.13, pytest-7.1.3, pluggy-1.0.0
PyQt5 5.15.6 -- Qt runtime 5.15.5 -- Qt compiled 5.15.5
Using --randomly-seed=3508469783
rootdir: /tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0
plugins: qt-4.1.0, forked-1.4.0, hypothesis-6.55.0, cov-2.9.0, rerunfailures-10.1, xdist-2.5.0, randomly-3.12.0, typeguard-2.13.3
collected 2 items
test_exceptions_dont_leak.py FF [100%]
=================================== FAILURES ===================================
____________________________________ test_2 ____________________________________
qapp = <PyQt5.QtWidgets.QApplication object at 0x8fc7e2430>
def test_2(qapp):
> assert called
E assert []
test_exceptions_dont_leak.py:22: AssertionError
____________________________________ test_1 ____________________________________
CALL ERROR: Exceptions caught in Qt event loop:
________________________________________________________________________________
Traceback (most recent call last):
File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event
raise RuntimeError('event processed')
RuntimeError: event processed
________________________________________________________________________________
Traceback (most recent call last):
File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event
raise RuntimeError('event processed')
RuntimeError: event processed
________________________________________________________________________________
----------------------------- Captured stderr call -----------------------------
Exceptions caught in Qt event loop:
________________________________________________________________________________
Traceback (most recent call last):
File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event
raise RuntimeError('event processed')
RuntimeError: event processed
________________________________________________________________________________
Exceptions caught in Qt event loop:
________________________________________________________________________________
Traceback (most recent call last):
File "/tmp/pytest-of-yuri/pytest-0/test_exceptions_dont_leak0/test_exceptions_dont_leak.py", line 9, in event
raise RuntimeError('event processed')
RuntimeError: event processed
________________________________________________________________________________
=========================== short test summary info ============================
FAILED test_exceptions_dont_leak.py::test_2 - assert []
FAILED test_exceptions_dont_leak.py::test_1
============================== 2 failed in 0.04s ===============================
_______________________________________________________________________________ test_qt_api_ini_config[pyqt5] ________________________________________________________________________________
testdir = <Testdir local('/tmp/pytest-of-yuri/pytest-0/test_qt_api_ini_config3')>, monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x902e88520>, option_api = 'pyqt5'
@pytest.mark.parametrize("option_api", ["pyqt5", "pyqt6", "pyside2", "pyside6"])
def test_qt_api_ini_config(testdir, monkeypatch, option_api):
"""
Test qt_api ini option handling.
"""
from pytestqt.qt_compat import qt_api
monkeypatch.delenv("PYTEST_QT_API", raising=False)
testdir.makeini(
"""
[pytest]
qt_api={option_api}
""".format(
option_api=option_api
)
)
testdir.makepyfile(
"""
import pytest
def test_foo(qtbot):
pass
"""
)
result = testdir.runpytest_subprocess()
if qt_api.pytest_qt_api == option_api:
result.stdout.fnmatch_lines(["* 1 passed in *"])
else:
try:
ModuleNotFoundError
except NameError:
# Python < 3.6
result.stderr.fnmatch_lines(["*ImportError:*"])
else:
# Python >= 3.6
> result.stderr.fnmatch_lines(["*ModuleNotFoundError:*"])
E Failed: remains unmatched: '*ModuleNotFoundError:*'
/disk-samsung/freebsd-ports/devel/py-pytest-qt/work-py39/pytest-qt-4.1.0/tests/test_basics.py:480: Failed
------------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------------
running: /usr/local/bin/python3.9 -mpytest --basetemp=/tmp/pytest-of-yuri/pytest-0/test_qt_api_ini_config3/runpytest-0
in: /tmp/pytest-of-yuri/pytest-0/test_qt_api_ini_config3
============================= test session starts ==============================
platform freebsd13 -- Python 3.9.13, pytest-7.1.3, pluggy-1.0.0
PyQt5 5.15.6 -- Qt runtime 5.15.5 -- Qt compiled 5.15.5
Using --randomly-seed=655155166
rootdir: /tmp/pytest-of-yuri/pytest-0/test_qt_api_ini_config3, configfile: tox.ini
plugins: qt-4.1.0, forked-1.4.0, hypothesis-6.55.0, cov-2.9.0, rerunfailures-10.1, xdist-2.5.0, randomly-3.12.0, typeguard-2.13.3
collected 1 item
test_qt_api_ini_config.py . [100%]
============================== 1 passed in 0.08s ===============================
================================================================================== short test summary info ===================================================================================
SKIPPED [1] tests/test_wait_signal.py:922: test only makes sense for PySide2, whose signals don't contain a name!
SKIPPED [1] tests/test_wait_signal.py:1206: test only makes sense for PySide, whose signals don't contain a name!
========================================================================= 3 failed, 371 passed, 2 skipped in 55.32s ==========================================================================
*** Error code 1
```
Version: 4.1.0
Python-3.9
FreeBSD 13.1 STABLE | open | 2022-10-08T03:42:58Z | 2024-12-17T09:36:30Z | https://github.com/pytest-dev/pytest-qt/issues/457 | [] | yurivict | 3 |
tortoise/tortoise-orm | asyncio | 1,919 | (2013, 'Lost connection to MySQL server during query) | My database config is:
```python
TORTOISE_ORM = {
    "connections": {
        "default": {
            "engine": "tortoise.backends.mysql",
            "credentials": {
                "host": mysql_host,
                "port": mysql_port,
                "user": mysql_user,
                "password": mysql_password,
                "database": mysql_database,
                "minsize": 2,
                "maxsize": 5,
                "charset": "utf8mb4",
                "echo": False
            }
        },
    },
    "apps": {
        "rules": {
            "models": ["src.dto.db_model"],
            "default_connection": "default",
        }
    },
    "use_tz": False,
    "timezone": "Asia/Shanghai"
}
```
My tortoise-orm version is 0.23.0, MySQL is 8.0.39.
The relevant config on the MySQL server is:
connection_timeout 10
max_allowed_packet 104857600
wait_timeout 2880000
While the program is running, this error occurs frequently: (2013, 'Lost connection to MySQL server during query') | open | 2025-03-07T06:44:19Z | 2025-03-24T19:51:35Z | https://github.com/tortoise/tortoise-orm/issues/1919 | [] | rhmb-ai | 5 |
plotly/dash-table | plotly | 50 | Pressing "Enter" to confirm changes to cell / dataframe does not work in the last row | Normally, pressing "Enter" while editing a cell value triggers the dataframe update and moves the active cell to the cell directly below.
Expected behavior:
Pressing "Enter" while editing a cell in the last row updates the dataframe. Focus remains on the current cell. | closed | 2018-08-22T13:41:38Z | 2018-09-10T17:00:06Z | https://github.com/plotly/dash-table/issues/50 | [] | Marc-Andre-Rivet | 0 |
python-gino/gino | asyncio | 608 | [question] Any advice on factory_boy usage/replacement? | Does anyone have a clue how we can utilize [factory_boy](https://github.com/FactoryBoy/factory_boy) with GINO? Coming from Django I got very used to using factories in pytest tests.
I guess I should override the `_create()` function in the factory model class, but since it is a synchronous function I am having trouble awaiting the GINO model's `create()` there. I tried to experiment a bit with `asyncio._get_running_loop()` etc., but with no luck. As I said, coming from a Django background I lack experience in async. Any help is much appreciated! | closed | 2019-12-07T03:08:21Z | 2019-12-07T18:04:56Z | https://github.com/python-gino/gino/issues/608 | [] | remarkov | 1 |
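One way to bridge factory_boy's synchronous `_create()` into the async world, as mentioned above, is to run the coroutine to completion on a loop that `_create()` controls. A sketch with stand-in classes instead of real GINO/factory_boy models (the names here are hypothetical, not library APIs):

```python
import asyncio

_loop = asyncio.new_event_loop()  # one loop reused for every factory call

class AsyncCreateMixin:
    """Hypothetical mixin for a factory_boy Factory subclass: factory_boy
    calls the synchronous _create(), and we drive the async create() here."""
    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        return _loop.run_until_complete(model_class.create(*args, **kwargs))

class User:
    """Stand-in for a GINO model, just to show the call shape."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

    @classmethod
    async def create(cls, **fields):  # GINO's Model.create() is a coroutine
        return cls(**fields)

class UserFactory(AsyncCreateMixin):  # real code would also mix in factory.Factory
    pass

user = UserFactory._create(User, name="alice")
assert user.name == "alice"
```

In real tests the loop should be the same one GINO's engine is bound to (asyncio connections are loop-local), so wiring this to pytest-asyncio's event-loop fixture rather than a module-level loop is probably the robust version.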
nolar/kopf | asyncio | 300 | Print statement from example does not show up in logs for me but logging lib works | > <a href="https://github.com/janvdvegt"><img align="left" height="50" src="https://avatars3.githubusercontent.com/u/12046878?v=4"></a> An issue by [janvdvegt](https://github.com/janvdvegt) at _2020-01-26 20:37:09+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/300
>
## Long story short
Print statements do not show up in logs in bare bones example. Replacing them with `logging.info` works.
## Description
Example code from docs:
```python
import kopf
@kopf.on.create('zalando.org', 'v1', 'ephemeralvolumeclaims')
def create_fn(body, **kwargs):
print(f"A handler is called with body: {body}")
```
This does work:
```python
import kopf
import logging
@kopf.on.create('zalando.org', 'v1', 'ephemeralvolumeclaims')
def create_fn(body, **kwargs):
logging.info(f"A handler is called with body: {body}")
```
I tried to set `PYTHONUNBUFFERED` to 0 but this did not matter.
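A note on the `PYTHONUNBUFFERED` attempt: per the CPython docs, any *non-empty* value enables unbuffered mode, so `PYTHONUNBUFFERED=0` actually turns it on (only an empty value leaves it off). And `print` itself only writes into `sys.stdout`'s buffer; on a pipe (which is what `kubectl logs` reads), stdout is block-buffered, so output can lag until the buffer fills. A stdlib sketch of that effect:

```python
import io

raw = io.BytesIO()
out = io.TextIOWrapper(raw, encoding="utf-8")  # buffered, like stdout on a pipe

print("A handler is called", file=out)         # written into the buffer only
assert raw.getvalue() == b""                   # nothing has reached the sink yet

out.flush()                                    # what print(..., flush=True) does
assert raw.getvalue() == b"A handler is called\n"
```

So `print(..., flush=True)` inside the handler rules buffering out entirely; since `logging` works here, the remaining suspects are likely on the output-capture side rather than in the handler itself.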
## Environment
* Kopf version: 0.24
* Kubernetes version: 0.12.4
* Python version: 3.7
* OS/platform: MacOS
By using the logging library it does not block me in whatever way but the base example does not work for me so I just wanted to notify you | open | 2020-08-18T20:03:07Z | 2022-02-10T10:48:36Z | https://github.com/nolar/kopf/issues/300 | [
"bug",
"archive"
] | kopf-archiver[bot] | 1 |
microsoft/nni | data-science | 5,329 | Encounter ModuleNotFoundError when run Darts from Example | **Describe the issue**:
Encountered a ModuleNotFoundError when running the DARTS demo code from the examples folder
(/examples/nas/oneshot/darts/search.py)
**Environment**:
- NNI version: 2.10
- Training service: local
- Client OS: Linux
- Python version: 3.10.9
- PyTorch/TensorFlow version: 1.13.1
- CUDA: 11.6
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config: Using the original code
- Search space: Using the original code
**Log message**:
```
root@Ubuntu:~/home/nni_repo/nni/examples/nas/oneshot/darts$ python search.py
Traceback (most recent call last):
File "~/home/nni_repo/nni/examples/nas/oneshot/darts/search.py", line 14, in <module>
from nni.nas.pytorch.callbacks import ArchitectureCheckpoint, LRSchedulerCallback
ModuleNotFoundError: No module named 'nni.nas.pytorch'
```
**How to reproduce it?**:
Run the original code without making any changes
| closed | 2023-01-31T16:21:44Z | 2023-02-17T08:55:12Z | https://github.com/microsoft/nni/issues/5329 | [] | YichaoCode | 6 |
pytorch/vision | machine-learning | 8,628 | Suitable augmentation of Keras in Pytorch | I am reproducing a code (which is in tensorflow/keras) in which the following augmentations are applied.
```python
ImageDataGenerator(zoom_range=0.1, fill_mode='reflect', width_shift_range=0.4, height_shift_range=0.4, rotation_range=90)
```
Are these augmentations available in Pytorch? I couldn't find them. Any help would be appreciated. | closed | 2024-09-03T19:21:06Z | 2024-09-04T08:53:01Z | https://github.com/pytorch/vision/issues/8628 | [] | jawi289o | 1 |
junyanz/iGAN | computer-vision | 21 | The dataset link is not found | The dataset link is not found. (https://people.eecs.berkeley.edu/~junyanz/projects/gvm/datasets/$FILE.zip)
Would you please give a new URL link? | closed | 2018-04-04T04:03:54Z | 2018-06-16T07:14:19Z | https://github.com/junyanz/iGAN/issues/21 | [] | jichunshen | 1 |
robotframework/robotframework | automation | 5,058 | Elapsed time is not updated when merging results | `rebot` of RF7 produces an incorrect total suite runtime when merging results - compared to RF6.
With a few lines of bash, I created a reproducible example where I execute the following steps with both RF6 and 7:
1. Run the suite with `-v FAIL:yes` so that the first test case will FAIL. Test 2 and 3 have a sleep time of 0.111s.
2. Re-run the failed one with `--rerunfailed`, this time the first test case PASSES with a sleep time of 1s.
3. Merge the first and the second XML.
The final HTML log `rebot_1_2.html` can be found under each version number inside the results folder.
RF6 (left) looks good to me: the "merged" total suite runtime is the sum of
- first run: 0.112 (T2) + 0.113 (T3) = 0.225s
- second run with the re-executed 1st Test: 1.001 (T1)
- = in total: 1.226
RF7 (right) however shows a total runtime of 0.242s, which seems to be just the total suite runtime of the first run.

You can find the script, the Pipfile and the XML/HTML logs in the attached zip file.
[rebot-test.zip](https://github.com/robotframework/robotframework/files/14343227/rebot-test.zip)
| closed | 2024-02-20T10:52:28Z | 2025-01-28T08:15:53Z | https://github.com/robotframework/robotframework/issues/5058 | [
"bug",
"priority: medium",
"effort: small"
] | simonmeggle | 2 |
scrapy/scrapy | python | 5,796 | ItemLoader instantiated from a base item does not create a new item instead keeps reference to same item | <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs
-->
### Description
In Scrapy 2.6, an `ItemLoader` instantiated with a base item keeps a reference to that same item; instead of loading new items, it accumulates the values previously added, therefore producing duplicate items.
### Steps to Reproduce
Run the following script:
```
from scrapy.loader import ItemLoader
base_item = {}
for i in range(5):
    il = ItemLoader(item=base_item)
    il.add_value('b', i)
    new_item = il.load_item()
    print(new_item)
```
Output when ran in `scrapinghub-stack-scrapy:2.6`:
```
{'b': [0]}
{'b': [0, 1]}
{'b': [0, 1, 2]}
{'b': [0, 1, 2, 3]}
{'b': [0, 1, 2, 3, 4]}
```
Output when ran in `scrapinghub-stack-scrapy:1.6`:
```
{'b': [0]}
{'b': [1]}
{'b': [2]}
{'b': [3]}
{'b': [4]}
```
**Expected behavior:** The output should be the same as when run in the older Scrapy stack: `scrapinghub-stack-scrapy:1.6`
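My reading of the cause (purely a guess) is shared mutable state: the loader keeps a reference to the passed-in item and writes back into it. A toy sketch of that mechanism, illustration only and not Scrapy's actual code:

```python
from copy import deepcopy

class ToyLoader:
    """Illustration of the suspected mechanism, not Scrapy's ItemLoader."""
    def __init__(self, item):
        self.item = item                      # keeps a reference, no copy

    def add_value(self, key, value):
        self.item.setdefault(key, []).append(value)

    def load_item(self):
        return self.item                      # hands back the shared object

base_item = {}
results = []
for i in range(3):
    loader = ToyLoader(base_item)
    loader.add_value("b", i)
    results.append(deepcopy(loader.load_item()))  # snapshot the shared dict

print(results)  # [{'b': [0]}, {'b': [0, 1]}, {'b': [0, 1, 2]}]
```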
**Possible resolution:** Deep copying the `base_item` works as intended:
```
from scrapy.loader import ItemLoader
from copy import deepcopy
base_item = {}
for i in range(5):
    il = ItemLoader(item=deepcopy(base_item))
    il.add_value('b', i)
    new_item = il.load_item()
    print(new_item)
```
Output:
```
{'b': [0]}
{'b': [1]}
{'b': [2]}
{'b': [3]}
{'b': [4]}
``` | closed | 2023-01-17T19:57:18Z | 2023-01-18T08:45:08Z | https://github.com/scrapy/scrapy/issues/5796 | [] | theumairahmed | 3 |
opengeos/leafmap | jupyter | 180 | add_cog_mosaic fails - does new Titiler version support creating MosaicJSONs? | ### Environment Information
- leafmap version: 0.7.2
- Python version: 3.9
- Operating System: Windows
### Description
I am trying to add a mosaic of COGs to leafmap, using the following code:
```
m.add_cog_mosaic(["https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif"])
```
(This is a simple mosaic for testing, with just a single image in it).
I get the following error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~\Documents\mambaforge\envs\anglo\lib\site-packages\leafmap\common.py in cog_mosaic(links, titiler_endpoint, username, layername, overwrite, verbose, **kwargs)
1072 ).json()
-> 1073 token = r["token"]
1074
KeyError: 'token'
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_18396/3959939009.py in <module>
----> 1 m.add_cog_mosaic(["https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif"])
~\Documents\mambaforge\envs\anglo\lib\site-packages\leafmap\leafmap.py in add_cog_mosaic(self, links, name, attribution, opacity, shown, titiler_endpoint, username, overwrite, show_footprints, verbose, **kwargs)
803 """
804 layername = name.replace(" ", "_")
--> 805 tile = cog_mosaic(
806 links,
807 titiler_endpoint=titiler_endpoint,
~\Documents\mambaforge\envs\anglo\lib\site-packages\leafmap\common.py in cog_mosaic(links, titiler_endpoint, username, layername, overwrite, verbose, **kwargs)
1094
1095 except Exception as e:
-> 1096 raise Exception(e)
1097
1098
Exception: 'token'
```
I've done some investigating of this, and it seems that the titiler server running at http://titiler.xyz doesn't support the `/tokens/create` endpoint, or the `mosaicjson/create` endpoint - see the (automatically generated) docs page at http://titiler.xyz/docs.
From looking at some of the old PRs, it seemed this endpoint is the new version of the https://api.cogeo.xyz/ server - and that server _does_ have the relevant endpoints (see https://api.cogeo.xyz/docs).
I've also had a look at the titiler code, and I can't see anything about /tokens/create or /mosaicjson/create endpoints in the latest titiler version (0.4).
Does anyone know what is going on here? Was this an old feature of titiler that was removed? If so, does this functionality need removing from leafmap? Or is there a possibility to add this feature back into titiler (it would be useful for some other applications of titiler that I'm working on).
Pinging @vincentsarago as he probably knows about the titiler side of this. | closed | 2022-01-14T09:59:52Z | 2022-01-15T00:36:09Z | https://github.com/opengeos/leafmap/issues/180 | [
"bug"
] | robintw | 3 |
JaidedAI/EasyOCR | pytorch | 901 | EasyOCR Links are not reachable. | I am having a problem reaching the direct links for `https://jaided.ai/` | open | 2022-12-06T13:24:29Z | 2022-12-19T01:49:20Z | https://github.com/JaidedAI/EasyOCR/issues/901 | [] | engahmed1190 | 2 |
aio-libs/aiopg | sqlalchemy | 699 | The default read committed isolation level is the wrong choice | Hello,
I have a project where I experienced serialization failures on a statement:
```
select 1 from file where (id, group_id) = (%s, %s) for update
```
resulting in errors like:
```
psycopg2.errors.SerializationFailure: could not serialize access due to concurrent update
```
Now, I happen to know that the statement above would have blocked in an orderly fashion under concurrency if run in psycopg or in psql. I logged some of the database's statements and sure enough found:
```
postgres_1 | 2020-07-16 10:30:36.377 UTC [11914] LOG: statement: BEGIN ISOLATION LEVEL REPEATABLE READ
```
Which led me to find:
https://github.com/aio-libs/aiopg/blob/3fb3256319de766384b4867c7cc6710397bd1a8c//aiopg/cursor.py#L10-L16
this is pretty wrong: the driver is choosing an isolation level that causes side effects, and the behaviour of the program becomes very different from the one it would have in psycopg2.
The right default would be to begin a transaction just with `BEGIN`. Note that, unlike what is stated here:
https://github.com/aio-libs/aiopg/blob/3fb3256319de766384b4867c7cc6710397bd1a8c/aiopg/transaction.py#L51-L55
`BEGIN` is not "read committed" behaviour: it is "use the server-configured behaviour", i.e. whatever is selected by the `default_transaction_isolation` setting, which can be changed at the server configuration level or per session.
So, what I think is:
1) The `ReadCommittedCompiler` should actually be implemented as:
```python
class ReadCommittedCompiler(IsolationCompiler):
    name = 'Read committed'

    def begin(self):
        return 'BEGIN ISOLATION LEVEL READ COMMITTED'
```
2) you need a class
```python
class DefaultCompiler(IsolationCompiler):
    name = 'Default'

    def begin(self):
        return 'BEGIN'
```
3) that class should be the default.
```python
class Cursor:
    def __init__(self, conn, impl, timeout, echo):
        # ...
        self._transaction = Transaction(self, IsolationLevel.default)
```
you also probably want to fix #497 to allow the isolation level to be chosen, but at least by default you should leave the server default unaltered. | closed | 2020-07-16T11:04:21Z | 2020-12-21T06:07:35Z | https://github.com/aio-libs/aiopg/issues/699 | [] | dvarrazzo | 9 |
aio-libs/aiohttp | asyncio | 9,850 | FTP download broken in client >= 3.10.0 | ### Describe the bug
Versions of aiohttp up to 3.9.5 can download a file from FTP via an HTTP proxy. But starting from version 3.10.0, this code throws an exception: `aiohttp.client_exceptions.NonHttpUrlClientError: ftp://demo:password@test.rebex.net/readme.txt`
### To Reproduce
```python
import aiohttp

async with aiohttp.ClientSession() as session:
    async with session.get(
        'ftp://demo:password@test.rebex.net/readme.txt',
        proxy='http://some-http-proxy:3128'
    ) as resp:
        print(resp.status)
        print(await resp.text())
```
### Expected behavior
FTP url downloaded
### Logs/tracebacks
```python-traceback
-
```
### Python Version
```console
3.10.15
```
### aiohttp Version
```console
>= 3.10.0
```
### multidict Version
```console
6.1.0
```
### propcache Version
```console
0.2.0
```
### yarl Version
```console
1.17.1
```
### OS
macOS
### Related component
Client
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | closed | 2024-11-13T07:29:14Z | 2024-11-13T14:22:09Z | https://github.com/aio-libs/aiohttp/issues/9850 | [
"bug"
] | poofeg | 4 |
LibrePhotos/librephotos | django | 881 | Public photo share failed | # 🐛 Bug Report
* [x] 📁 I've Included a ZIP file containing my librephotos `log` files (error log pasted below)
* [x] ❌ I have looked for similar issues (including closed ones)
* [x] 🎬 (If applicable) I've provided pictures or links to videos that clearly demonstrate the issue
## 📝 Description of issue:
Loading public link forever:

## 🔁 How can we reproduce it:
Share some photos (I have 100+ shared), use the www.example.com/user/user_name link to load the shared photos
## Please provide additional information:
- 💻 Operating system: Arch
- ⚙ Architecture (x86 or ARM): x86
- 🔢 Librephotos version: reallibrephotos/librephotos:latest as of date of written
- 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.): docker
* 🐋 If Docker or Kubernets, provide docker-compose image tag: reallibrephotos/librephotos:latest
- 📁 How is you picture library mounted (Local file system (Type), NFS, SMB, etc.): Local
- ☁ If you are virtualizing librephotos, Virtualization platform (Proxmox, Xen, HyperV, etc.): No
```
Internal Server Error: /api/albums/date/list/
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/django/db/models/fields/__init__.py", line 2055, in get_prep_value
return int(value)
^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/contrib/auth/models.py", line 437, in __int__
raise TypeError(
TypeError: Cannot cast AnonymousUser to int. Are you trying to use it in place of User?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.11/dist-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/api/views/albums.py", line 545, in list
serializer = IncompleteAlbumDateSerializer(self.get_queryset(), many=True)
^^^^^^^^^^^^^^^^^^^
File "/code/api/views/albums.py", line 526, in get_queryset
qs.annotate(
File "/usr/local/lib/python3.11/dist-packages/django/db/models/query.py", line 1590, in annotate
return self._annotate(args, kwargs, select=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/query.py", line 1638, in _annotate
clone.query.add_annotation(
File "/usr/local/lib/python3.11/dist-packages/django/db/models/sql/query.py", line 1090, in add_annotation
annotation = annotation.resolve_expression(self, allow_joins=True, reuse=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/aggregates.py", line 65, in resolve_expression
c.filter = c.filter and c.filter.resolve_expression(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/query_utils.py", line 87, in resolve_expression
clause, joins = query._add_q(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/sql/query.py", line 1533, in _add_q
child_clause, needed_inner = self.build_filter(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/sql/query.py", line 1448, in build_filter
condition = self.build_lookup(lookups, col, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/sql/query.py", line 1275, in build_lookup
lookup = lookup_class(lhs, rhs)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/lookups.py", line 27, in __init__
self.rhs = self.get_prep_lookup()
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/fields/related_lookups.py", line 166, in get_prep_lookup
self.rhs = target_field.get_prep_value(self.rhs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/django/db/models/fields/__init__.py", line 2057, in get_prep_value
raise e.__class__(
TypeError: Field 'id' expected a number but got <django.contrib.auth.models.AnonymousUser object at 0x69542cf1f690>.
Unauthorized: /api/searchtermexamples/
Unauthorized: /api/albums/thing/list/
Unauthorized: /api/albums/place/list/
Unauthorized: /api/persons/
Unauthorized: /api/albums/user/list/
```
| closed | 2023-06-12T13:32:47Z | 2023-08-02T20:09:58Z | https://github.com/LibrePhotos/librephotos/issues/881 | [
"bug"
] | J4gQBqqR | 2 |
Evil0ctal/Douyin_TikTok_Download_API | api | 340 | What is wrong with the /douyin_video_comments/ endpoint? There is no get_douyin_video_comments method in scraper | After starting web_api and calling the douyin_video_comments endpoint, an error is raised because 'Scraper' object has no attribute 'get_douyin_video_comments' | closed | 2024-03-26T08:36:38Z | 2024-04-23T05:02:37Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/340 | [
"BUG"
] | Azulsolitan | 2 |
schemathesis/schemathesis | graphql | 2,376 | [BUG] Links being followed on example tests | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
I see that links are being followed even when i specify `--hypothesis-phases=explicit`
minor issue, but even if filter out API using `--include-name` they are still followed
guessing `--include-name` means run only that and not follow links?!
or maybe not follow them when `--hypothesis-phases=explicit`
understand them being followed for stateful testing however
the problem with links being followed on stateless/example testing is for example:
assume `POST /trees` is linked to `DELETE /trees/id` with the id from creation used for the delete.
when `--hypothesis-phase=explicit` then the state/id from POST call is _not_ used for the DELETE
instead any explicit example id under `DELETE /trees/id` are used for the linked test.
hope my explanation is clear with this example
### To Reproduce
1. Run this command `st run --include-name 'POST /trees' tree-openapi20.yaml --base-url=http://localhost:8083 --hypothesis-phases=explicit`
2. See links are followed
Please include a minimal API schema causing this issue:
`any schema with links and examples`
### Expected behavior
_imho_
1. links not followed when `--hypothesis-phases=explicit` specified
2. `--include-name` honored and linked not followed if not in the inclusion
### Environment
`platform Linux -- Python 3.9.18, schemathesis-3.33.1, hypothesis-6.108.2, hypothesis_jsonschema-0.23.1, jsonschema-4.23.0`
| closed | 2024-07-25T17:33:41Z | 2024-08-10T11:07:30Z | https://github.com/schemathesis/schemathesis/issues/2376 | [
"Priority: Medium",
"Type: Bug",
"Specification: OpenAPI",
"Difficulty: Beginner",
"Core: Stateful testing"
] | ravy | 2 |
nteract/papermill | jupyter | 509 | Update C# translator to support dotnet-interactive, include .net-fsharp and .net-powershell kernels | In the most recent preview of .NET's support for Jupyter ([.NET Interactive](https://github.com/dotnet/interactive)), we changed the name of the executable from `dotnet-try` to `dotnet-interactive`, which now supports three .NET languages: C#, F#, and PowerShell: https://devblogs.microsoft.com/dotnet/net-interactive-is-here-net-notebooks-preview-2/
@lobrien added support for `dotnet-try` in #448, which should be updated to the new tool, and translators added for the additional languages.
| closed | 2020-05-28T15:46:53Z | 2020-08-19T14:40:16Z | https://github.com/nteract/papermill/issues/509 | [
"enhancement",
"help wanted",
"new-contributor-friendly"
] | jonsequitur | 5 |
sinaptik-ai/pandas-ai | data-science | 595 | Refactor `_format_results` method in `SmartDatalake` to use `ResponseParser` | ### 🚀 The feature
Currently, the `_format_results` method in the `SmartDatalake` class contains both formatting and processing logic for different types of results. To make the code more modular and maintainable, I propose refactoring this method to delegate the formatting and processing of results to a dedicated `ResponseParser` class.
# Proposed Changes:
1. Create a dedicated `ResponseParser` class that will handle the formatting and processing of results based on their type.
2. Move the logic for formatting dataframes, plots, and other result types into separate methods within the `ResponseParser`.
3. Modify the `_format_results` method in the `SmartDatalake` class to delegate the formatting and processing of results to the appropriate methods in the `ResponseParser`.
4. Ensure that the `ResponseParser` can be customized or extended in the future to allow for different parsers to act differently.
# Example Refactored Code:
```python
class ResponseParser:
    def parse(self, result):
        if result["type"] == "dataframe":
            return self.format_dataframe(result)
        elif result["type"] == "plot":
            return self.format_plot(result)
        else:
            return self.format_other(result)

    def format_dataframe(self, result):
        # Format and process dataframe results here
        # ...

    def format_plot(self, result):
        # Format and process plot results here
        # ...

    def format_other(self, result):
        # Format and process other result types here
        # ...
```
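With that in place, the `SmartDatalake` side would reduce to pure delegation, something like this sketch (the class and attribute names here are assumptions, not the actual implementation):

```python
class StubParser:
    """Stand-in for ResponseParser, just to make the sketch runnable."""
    def parse(self, result):
        return f"parsed:{result['type']}"

class SmartDatalakeSketch:
    def __init__(self, response_parser=None):
        # Injecting the parser is what lets callers customize formatting.
        self._response_parser = response_parser or StubParser()

    def _format_results(self, result):
        return self._response_parser.parse(result)

print(SmartDatalakeSketch()._format_results({"type": "plot"}))  # parsed:plot
```

Passing a different parser at construction time is then all it takes to change the output behavior.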
### Motivation, pitch
* Improved code readability and maintainability.
* Separation of concerns between the `SmartDatalake` class and the `ResponseParser`.
* Flexibility to customize the behavior of the response formatting in the future.
### Alternatives
_No response_
### Additional context
This refactoring will enhance the maintainability of the code and enable future customization by allowing different parsers to be passed to the ResponseParser for handling various result types. | closed | 2023-09-26T17:30:24Z | 2024-06-01T00:21:37Z | https://github.com/sinaptik-ai/pandas-ai/issues/595 | [
"enhancement"
] | gventuri | 1 |
assafelovic/gpt-researcher | automation | 1,108 | Using Fireworks models in the example Colab notebook throws an OpenAI API error | **Describe the bug**
This could easily be user error, but I'm trying to run the Colab notebook using the Fireworks API rather than the OpenAI API and I'm getting the following error:
```
⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
---------------------------------------------------------------------------
AuthenticationError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/gpt_researcher/actions/agent_creator.py](https://localhost:8080/#) in choose_agent(query, cfg, parent_query, cost_callback, headers)
26 try:
---> 27 response = await create_chat_completion(
28 model=cfg.smart_llm_model,
22 frames
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: dummy_key. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
[/usr/lib/python3.11/re/__init__.py](https://localhost:8080/#) in search(pattern, string, flags)
174 """Scan through string looking for a match to the pattern, returning
175 a Match object, or None if no match was found."""
--> 176 return _compile(pattern, flags).search(string)
177
178 def sub(pattern, repl, string, count=0, flags=0):
TypeError: expected string or bytes-like object, got 'NoneType'
```
**To Reproduce**
Steps to reproduce the behavior:
2. Open the Colab link in the README
2. Add the lines of code from https://docs.gptr.dev/docs/gpt-researcher/llms/llms for the Fireworks models
3. Modify the fast/smart/strategic lines to use R1 and V3 instead of Mixtral
4. Add the langchain-fireworks package like the documentation mentions on https://docs.gptr.dev/docs/gpt-researcher/llms/supported-llms
5. Add the dummy OpenAI API key since I saw that in the documentation once or twice (if there's no OpenAI API key I got a different error saying it needed an OpenAI API key instead)
6. Run the code blocks in the notebook top to bottom, as normal. The third block triggers the error.
**Expected behavior**
I expected GPT-Researcher to work and use the fireworks model API's.
**Screenshots**



**Additional context**
There's no desktop, etc. information since this is in Colab.
There are probably instructions I'm missing, but the link to the config page in the docs as of now is dead.
https://docs.gptr.dev/gpt-researcher/config

| closed | 2025-02-04T21:14:37Z | 2025-02-09T00:33:20Z | https://github.com/assafelovic/gpt-researcher/issues/1108 | [] | Ajacmac | 5 |
ranaroussi/yfinance | pandas | 1,952 | 401 Client Error: Unauthorized for url: | ### Describe bug
An issue occurs when attempting to fetch stock information using the yfinance library. The request results in a 401 Client Error: Unauthorized.
```python
import yfinance as yf

msft = yf.Ticker("MSFT")
# get all stock info
msft.info
```
401 Client Error: Unauthorized for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/MSFT?modules=financialData%2CquoteType%2CdefaultKeyStatistics%2CassetProfile%2CsummaryDetail&corsDomain=finance.yahoo.com&formatted=false&symbol=MSFT&crumb=2abgHhN2kUC
Environment:
Operating System: Windows 10, Windows 11
Region: Europe
both with the current yfinance version
### Simple code that reproduces your problem
```python
import yfinance as yf

msft = yf.Ticker("MSFT")
# get all stock info
msft.info
```
### Debug log
...
### Bad data proof
_No response_
### `yfinance` version
0.2.40
### Python version
Python 3.11.4
### Operating system
W10/11 | closed | 2024-05-29T20:39:17Z | 2025-02-16T18:42:21Z | https://github.com/ranaroussi/yfinance/issues/1952 | [] | KingTurbo | 9 |
reloadware/reloadium | django | 7 | Incompatibility with pydash library | pydash is a library to facilitate the work with data structures.
Reloadium library craches when importing pydash module.
This code: `import pydash`
raises this error when running or debugging:
```
Traceback (most recent call last):
File "C:\Users\ludovic.marce\Documents\Lancaster\auto\bacasable\nested_dicts.py", line 5, in <module>
import pydash
File "C:\Users\ludovic.marce\Documents\Lancaster\auto\bacasable\venv\lib\site-packages\pydash\__init__.py", line 5, in <module>
from .arrays import (
File "C:\Users\ludovic.marce\Documents\Lancaster\auto\bacasable\venv\lib\site-packages\pydash\arrays.py", line 13, in <module>
from .helpers import base_get, iteriteratee, parse_iteratee
File "C:\Users\ludovic.marce\Documents\Lancaster\auto\bacasable\venv\lib\site-packages\pydash\helpers.py", line 21, in <module>
BUILTINS = {value: key for key, value in builtins.__dict__.items() if isinstance(value, Hashable)}
File "C:\Users\ludovic.marce\Documents\Lancaster\auto\bacasable\venv\lib\site-packages\pydash\helpers.py", line 21, in <dictcomp>
BUILTINS = {value: key for key, value in builtins.__dict__.items() if isinstance(value, Hashable)}
File "<string>", line 2, in __hash__
TypeError: unhashable type: 'lll11ll111111lllIl1l1'
```
### Versions:
**Python**:3.6 & 3.9
**Pycharm**: 2022.1
**reloadium plugin**: 0.8.0
**pydash**: 5.1.0 | closed | 2022-04-27T09:19:15Z | 2022-05-12T16:50:23Z | https://github.com/reloadware/reloadium/issues/7 | [] | Doxxxxxx | 1 |
sqlalchemy/alembic | sqlalchemy | 1,103 | Allow --config to be a Python package or a file relative to that package | Hi.
I'm trying to distribute alembic migration files with my application. I want the user to basically `pip install my_app` then `alembic upgrade head`.
I can put the migration files directory inside the app along with alembic.ini (and modify setup.py and MANIFEST.in accordingly), but then, AFAIU, the user will need to
alembic -c /path/to/virtual/env/lib/python3.x/site-packages/my_app/alembic.ini upgrade head
We could improve this by allowing the user to do
alembic -c my_app:alembic.ini upgrade head
~The package:file syntax is inspired by Flask but with different semantics. Well, perhaps `:` is wrong since it is a valid character in a filename. The point is we need to be able to discriminate a file system path and a package path. We could even use a different config flag.~
I think this is how `script_location` accepts package resources ([tutorial](https://alembic.sqlalchemy.org/en/latest/tutorial.html)).
The option could accept a package and assume alembic.ini in package root. But then again, to discriminate a package name from a relative file name, we might have to use a different config flag.
I've seen people discussing on SO about distributing migration files, but I couldn't find the answer to the alembic.ini path issue. I'm pretty sure there's an already existing and recommended way to do this. In this case, I'd be more than happy to get a pointer, as I've been searching for hours. But I don't mean to turn this into a support request.
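Meanwhile, the workaround I've settled on resolves the packaged ini at runtime with the stdlib (`my_app` is a placeholder package name, and this assumes the package is installed unzipped):

```python
from importlib.resources import files

def packaged_ini(package: str) -> str:
    """Return the filesystem path of the alembic.ini shipped inside *package*."""
    return str(files(package) / "alembic.ini")

# The user can then run, e.g. (the `my_app.paths` module is hypothetical):
#   alembic -c "$(python -c 'from my_app.paths import packaged_ini; print(packaged_ini("my_app"))')" upgrade head
```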
In any case, I figured the idea above could be worth sharing.
Have a nice day! | closed | 2022-10-19T08:59:21Z | 2023-11-06T23:47:59Z | https://github.com/sqlalchemy/alembic/issues/1103 | [
"use case"
] | lafrech | 5 |
roboflow/supervision | machine-learning | 1,366 | Metrics API | This issue aggregates the discussion and near-future plans to introduce metrics to supervision.
The first steps shall be enacted by the core Roboflow team, and then we'll open submissions for specific metrics for the community.
### I propose the following:
* Aim for ease of usage, compact API, sacrificing completeness if required.
* Provide public classes with aggregation by default (metrics.py), keep implementation in impl.py or equivalent, to be used internally.
* Expose not in global scope, but in supervision.metrics.
* I don't think we need to split into metrics.detection, metrics.segmentation, metrics.classification, but I'm on the fence.
* Focus only on what we can apply to Detections object.
* This means, only implement metrics if they use some of: class_id, confidence, xyxy, mask, xyxyxyxy (in Detections.data).
### :warning: I don't know:
* How metrics are computed when targets and predictions have different numbers of detections or they are mismatched.
* I don't think metrics should fail in that case, but perhaps there's a standard way of addressing this.
### I believe we could start with:
* Importing current metrics into the new system:
* IoU
* mAP
* Confusion Matrix
* Detections
* Accuracy
* Precision
* Recall
* General
* Mean confidence
* Median confidence
* Min confidence
* Max confidence
* (not typical, but I'd find useful) number of unique classes detected & an aggregate count of how many objects of each class were detected (e.g. N defects / hour).
I believe the one parameter `Metric` needs to accept at construction is `queue_size`.
* 1 - don't keep history, only ever give metrics of current batch
* N - keep up to N metric results in history for computation.
### Other thoughts:
* I don't think metrics should know about datasets. Instead of benchmark as it is in current API, let's have def benchmark_dataset(dataset, metric) in metrics/utils.py.
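A rough shape for that helper; the iteration protocol (yielding `(predictions, targets)` pairs) and the toy metric below are assumptions of mine, not existing supervision API:

```python
def benchmark_dataset(dataset, metric):
    """Feed every (predictions, targets) pair from `dataset` into `metric`."""
    for predictions, targets in dataset:
        metric.update(predictions, targets)
    return metric.compute()

# Toy metric, only to show the call shape:
class PairCounter:
    def __init__(self):
        self.n = 0
    def update(self, predictions, targets):
        self.n += 1
    def compute(self):
        return self.n

print(benchmark_dataset([("p1", "t1"), ("p2", "t2")], PairCounter()))  # 2
```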
### API:
```python
class Accuracy(Metric):
    def __init__(self, queue_size=1) -> None

    @override
    def update(self, predictions: Detections, targets: Detections) -> None

    @override
    def compute(self) -> NotSureYet

# `Metric` also provides `def detect_and_compute(*args, **kwargs)`.

accuracy_metric = Accuracy()
accuracy_metric.update(detections, detections_ground_truth)
accuracy = accuracy_metric.compute()
```
Related features:
* https://github.com/roboflow/supervision/issues/140
* https://github.com/roboflow/supervision/pull/177
* https://github.com/roboflow/supervision/issues/232
* https://github.com/roboflow/supervision/pull/236
* https://github.com/roboflow/supervision/issues/292
* https://github.com/roboflow/supervision/issues/480
* https://github.com/roboflow/supervision/issues/632
| closed | 2024-07-16T12:52:52Z | 2024-07-17T12:06:20Z | https://github.com/roboflow/supervision/issues/1366 | [] | LinasKo | 4 |
litestar-org/polyfactory | pydantic | 550 | Enhancement: switch to type-lens | ### Summary
We have a lot of utility functions to help us figure out the various types from models and extract information from them. A lot of this is now included in [type-lens](https://github.com/litestar-org/type-lens), so we should be switching to using that instead.
cc: @peterschutt
### Basic Example
_No response_
### Drawbacks and Impact
I'm not sure if there's an easy way to do this without making breaking changes. There may be, but I am inclined to make the breaking changes regardless. I don't *think* it would affect users too much, since most likely it would only change things like `FieldModel` and some other parts of polyfactory that, while technically public API, are probably not used by many users.
### Unresolved questions
_No response_ | open | 2024-05-17T02:25:30Z | 2025-03-20T15:53:16Z | https://github.com/litestar-org/polyfactory/issues/550 | [
"enhancement"
] | guacs | 0 |
hyperspy/hyperspy | data-visualization | 3,121 | Protect `axis` attribute for uniform data axes | #### Describe the bug
Currently, the `axis` attribute of uniform data axes can be changed (after initialization of the axis), though it should be defined by `offset` and `scale` only. Instead, it should be a protected attribute that can be changed only if `offset` and `scale` are changed, but not directly by user input. An example is that users not understanding the difference between `DataAxis` and `UniformDataAxis`, might actually change an axis to a non-uniform one just by providing the `axis` array without converting the axes type, which can lead to problems in a later analysis.
Would be solved by #3031
#### To Reproduce
Steps to reproduce the behavior:
```python
import numpy as np
import hyperspy.api as hs

S = hs.signals.Signal1D(np.arange(100))
S.axes_manager[0].axis = np.ones(100)
```
`S.axes_manager` and `S.axes_manager[0].axis` are now in contradiction to each other.
#### Expected behavior
Throw an error when trying to manually change the `axis` vector for a uniform or functional data axis.
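For illustration, a minimal sketch of what a protected `axis` could look like (simplified toy code, not HyperSpy's actual implementation):

```python
class ProtectedUniformAxis:
    """Toy model of a uniform axis: the `axis` vector is derived from
    `offset` and `scale`, and direct assignment is rejected."""

    def __init__(self, size, offset=0.0, scale=1.0):
        self.size = size
        self.offset = offset
        self.scale = scale

    @property
    def axis(self):
        return [self.offset + self.scale * i for i in range(self.size)]

    @axis.setter
    def axis(self, value):
        raise AttributeError(
            "axis is defined by `offset` and `scale`; convert to a "
            "non-uniform DataAxis to set the axis vector directly"
        )


ax = ProtectedUniformAxis(5, offset=1.0, scale=2.0)
print(ax.axis)  # [1.0, 3.0, 5.0, 7.0, 9.0]
```

Assigning `ax.axis = ...` then raises `AttributeError` instead of silently contradicting `offset`/`scale`.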
#### Python environment:
- HyperSpy version: 1.7.4
- Python version: 3.11
| open | 2023-03-31T09:19:27Z | 2023-03-31T09:19:27Z | https://github.com/hyperspy/hyperspy/issues/3121 | [
"type: bug"
] | jlaehne | 0 |
mljar/mljar-supervised | scikit-learn | 416 | How many trees are in my Random Forest? | On the summary page I see a report for each classifier. The one I pay most attention to is `RandomForest`:
```
Random Forest
n_jobs: -1
criterion: gini
max_features: 0.5
min_samples_split: 20
max_depth: 4
eval_metric_name: logloss
num_class: 13
explain_level: 0
```
My questions
1. Does `mljar` find the optimal `trees` and `max_depth`?
2. How can I see the number of `trees` in the report? | closed | 2021-06-23T09:05:34Z | 2023-05-01T13:34:46Z | https://github.com/mljar/mljar-supervised/issues/416 | [] | elcolie | 5 |
sigmavirus24/github3.py | rest-api | 806 | Refactor Documentation for 1.0 | Let's make our documentation more than just API reference for the library.
- Let's have some narrative documentation to teach people how to explore the library and understand how to use it.
- Let's separate out the API reference documentation
- Let's improve our examples
- Let's improve our release notes | closed | 2018-03-22T02:54:02Z | 2021-11-01T01:08:43Z | https://github.com/sigmavirus24/github3.py/issues/806 | [] | sigmavirus24 | 1 |
pytorch/pytorch | numpy | 149,522 | DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16 (__main__.TestForeachCUDA) | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39026442978).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner
return f(*args, **kw)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 327, in test_binary_op_with_scalar_self_support
self._binary_test(
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 263, in _binary_test
actual = op(inputs, self.is_cuda, is_fastpath)
File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__
assert mta_called == (expect_fastpath and (not zero_size)), (
AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_pow', keys=('aten::_foreach_pow', 'Unrecognized', 'aten::empty_strided', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3153, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 1: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.float16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.float16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.float16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.float16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.float16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.float16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.float16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.float16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.float16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.float16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.float16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.float16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.float16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.float16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.float16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.float16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.float16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.float16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.float16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.float16]], args=(3), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=1 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99 | open | 2025-03-19T15:43:21Z | 2025-03-19T15:43:26Z | https://github.com/pytorch/pytorch/issues/149522 | [
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | pytorch-bot[bot] | 1 |
kennethreitz/responder | graphql | 575 | GraphQL: Update Python dependencies | Any ideas how to verify updates to those?
- GH-573
- GH-574
It will probably be a manual procedure until there are software tests covering some of the details? Those won't be enough, will they?
https://github.com/kennethreitz/responder/blob/a698eaaab37e5a90f44681950474d2e4259bd9d3/tests/test_graphql.py#L6-L38
| open | 2024-10-31T06:27:41Z | 2024-10-31T06:27:41Z | https://github.com/kennethreitz/responder/issues/575 | [] | amotl | 0 |
pydantic/logfire | pydantic | 853 | Can this be used locally? | ### Question
Neither the description nor the documentation makes it clear from the start whether this can be used purely locally; authentication with logfire.pydantic.dev appears to be required.
So, if this cannot be used completely stand-alone, please point that out up front; it would save time for people who do not want to register with some company just to use a so-called open-source package.
"Question"
] | johann-petrak | 1 |
serengil/deepface | deep-learning | 782 | DeepFace: Enforce Detection | Hi serengil,
I have passport-like images (i.e. portrait images with a face) and am processing them. However, as some of the images are low-res, I encounter an exception where the model is not able to detect a face. As such, I added enforce_detection=False.
However, I realized that adding that flag and vectorizing even the high-res ones results in a DIFFERENT vector compared to when I didn't add the flag. Can you advise for my use case: should I add the flag for ALL images to ensure consistency, or only for cases where detection errors out? Thanks! | closed | 2023-06-22T06:59:12Z | 2023-06-22T08:10:43Z | https://github.com/serengil/deepface/issues/782 | [
"question"
] | jsnleong | 1 |
psf/requests | python | 6,314 | RequestsDependencyWarning | Windows 11 (Professional); Pycharm 2022.3
Getting a dependency warning in both Python 3.10 (latest) and Python 3.11 (latest). The code produces the desired results, but a dependency warning is also generated.
```python
import requests

quote = requests.get(url="https://api.kanye.rest")
quote.raise_for_status()
kanye = quote.json()
print(kanye)
```
Error message (copy & paste from console)
C:\Users\XXX\AppData\Local\Programs\Python\Python310\python.exe E:\Python_Projects\Kanye\scratch.py
C:\Users\XXX\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\__init__.py:109: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (None)/charset_normalizer (3.0.1) doesn't match a supported version!
warnings.warn(
{'quote': "I don't wanna see no woke tweets or hear no woke raps ... it's show time ... it's a whole different energy right now"}
Process finished with exit code 0
Installed Packages

| closed | 2022-12-21T21:06:52Z | 2022-12-21T21:11:39Z | https://github.com/psf/requests/issues/6314 | [] | thetechnodino | 1 |
deepfakes/faceswap | machine-learning | 1,139 | Lot of errors when no space left on device | **Describe the bug**
part of the log:
```shell
File "/Users/lzw/.conda/envs/faceswap/lib/python3.8/logging/__init__.py", line 1065, in flush
self.stream.flush()
OSError: [Errno 28] No space left on device
Call stack:
File "/Users/lzw/.conda/envs/faceswap/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/Users/lzw/.conda/envs/faceswap/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/Users/lzw/faceswap/lib/multithreading.py", line 37, in run
self._target(*self._args, **self._kwargs)
File "/Users/lzw/faceswap/scripts/convert.py", line 632, in _save
self._writer.write(filename, image)
File "/Users/lzw/faceswap/plugins/convert/writer/opencv.py", line 47, in write
logger.error("Failed to save image '%s'. Original Error: %s", filename, err)
Message: "Failed to save image '%s'. Original Error: %s"
Arguments: ('/Users/lzw/Movies/trump_converted/trump_e1_002431.png', OSError(28, 'No space left on device'))
/Users/lzw/.conda/envs/faceswap/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
**To Reproduce**
Steps to reproduce the behavior:
1. follow the tutorial
2. in convert step
**Expected behavior**
Just one line error. Stopping the whole task may be better.
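A sketch of what that could look like in the save worker: keep the log-and-continue behaviour for ordinary I/O errors, but abort on the first ENOSPC instead of logging one error per frame (function names assumed, not faceswap's actual code):

```python
import errno


def save_frame_or_abort(write, filename, image):
    """Write one converted frame; stop the whole convert task on ENOSPC."""
    try:
        write(filename, image)
    except OSError as err:
        if err.errno == errno.ENOSPC:
            raise SystemExit(
                f"No space left on device while writing '{filename}'; stopping conversion"
            ) from err
        print(f"Failed to save image '{filename}'. Original Error: {err}")


def full_disk(filename, image):  # stand-in writer whose disk is full
    raise OSError(errno.ENOSPC, "No space left on device")


try:
    save_frame_or_abort(full_disk, "trump_e1_002431.png", b"...")
except SystemExit as exc:
    print(exc)
```

This way the user sees a single clear message and the task ends, rather than thousands of identical tracebacks.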
**Screenshots**
<img width="1399" alt="Screen Shot 2021-03-30 at 7 40 29 PM" src="https://user-images.githubusercontent.com/5022872/112983065-cb2ce500-918f-11eb-9397-552882f07888.png">
**Desktop (please complete the following information):**
- OS: macOS
- Python Version 3.8
| closed | 2021-03-30T11:41:59Z | 2021-03-30T13:43:42Z | https://github.com/deepfakes/faceswap/issues/1139 | [] | lzwjava | 2 |
Esri/arcgis-python-api | jupyter | 1,272 | No _as_array function when calling from_table on a dbf file | **Describe the bug**
`from_table` throws an error when called on a `.dbf` file in ArcGIS Pro 2.9.2.
The function seems to have been renamed internally from `_as_array` to `_as_narray`.
There is an additional error when no `fields` argument is passed, since `.da.SearchCursor` does not accept `None` for its `field_names` parameter.
**To Reproduce**
Steps to reproduce the behavior:
```python
import pandas as pd
from arcgis.features import GeoAccessor, GeoSeriesAccessor
pd.DataFrame.spatial.from_table('file.dbf', fields="*")
```
error:
```python
da.SearchCursor' object has no attribute '_as_array'
```
from fileops.py -> line 339
**Expected behavior**
table should load as a dataframe
**Platform (please complete the following information):**
- OS: Windows
- Browser: ArcGIS Pro
- Python API Version 2.0.0
**Additional context**
Resolution looks to be updating the reference from `_as_array` to `_as_narray`.
Additionally, passing `field_names=kwargs.pop("fields", "*")` instead of None to the field name prop
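Both fixes together, sketched with a stand-in cursor since the real change lives in `fileops.py` (the `_as_narray` name and the `fields` default come from this report; the surrounding code is illustrative only):

```python
class FakeSearchCursor:
    """Stand-in for arcpy.da.SearchCursor, which rejects field_names=None
    and (in Pro 2.9.x) exposes _as_narray rather than _as_array."""

    def __init__(self, field_names):
        assert field_names is not None, "field_names must not be None"
        self.field_names = field_names

    def _as_narray(self):
        return [(1, "a"), (2, "b")]


def from_table(filename, **kwargs):
    fields = kwargs.pop("fields", "*")  # fix 2: default to "*", never None
    cursor = FakeSearchCursor(field_names=fields)
    return cursor._as_narray()          # fix 1: _as_array -> _as_narray


print(from_table("file.dbf"))               # works without a fields argument
print(from_table("file.dbf", fields=["A"]))
```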
| closed | 2022-06-08T18:55:34Z | 2022-06-23T13:27:41Z | https://github.com/Esri/arcgis-python-api/issues/1272 | [
"bug"
] | cody-scott | 2 |
autokey/autokey | automation | 203 | Update PyPi to latest version | ## Classification:
Enhancement?
## Reproducibility:
Always
## Summary
PyPi is currently at 0.93.10 which is severely outdated.
## Steps to Reproduce (if applicable)
pip3 (or pip-3.x in FreeBSD) install autokey
## Expected Results
Latest version 0.95.4 should install
## Actual Results
Outdated version installs.
## Notes
I am currently porting autokey to FreeBSD and need this updated as using GitHub for PyPi installs is not recommended. Until it's updated I will resort to using GitHub, but would love to update it later.
Thanks! | closed | 2018-10-29T21:15:51Z | 2018-10-30T21:57:34Z | https://github.com/autokey/autokey/issues/203 | [] | y2kbadbug | 1 |
xlwings/xlwings | automation | 1,640 | add 'vector' as a cross-platform format for plots in pictures.add | Translate to `svg` on Windows and `eps` on macOS. | closed | 2021-07-01T10:25:12Z | 2021-07-06T08:31:24Z | https://github.com/xlwings/xlwings/issues/1640 | [
"enhancement"
] | fzumstein | 0 |
deepspeedai/DeepSpeed | pytorch | 7038 | [REQUEST] activation checkpoint API should have parity with PyTorch; keyword arguments not supported | **Is your feature request related to a problem? Please describe.**
`deepspeed.checkpointing.checkpoint` does not support keyword arguments. However, `torch.utils.checkpoint.checkpoint` does support them, which makes it impossible to use `deepspeed.checkpointing.checkpoint` as a drop-in replacement.
```python
from functools import partial
import torch
import deepspeed
import torch.utils.checkpoint
torch_checkpoint = partial(torch.utils.checkpoint.checkpoint, use_reentrant=False)
def fn(a, extra=None):
if extra is not None:
return a + extra
else:
return a
a = torch.tensor([1,2])
b = torch.tensor([3,4])
print(fn(a, extra=b)) # tensor([4, 6])
print(torch_checkpoint(fn, a, extra=b)) # tensor([4, 6])
print(deepspeed.checkpointing.checkpoint(fn, a, extra=b)) # Error, checkpoint() got an unexpected keyword argument 'extra'
```
**Describe the solution you'd like**
Please add support for keyword arguments in `deepspeed.checkpointing.checkpoint`.
**Describe alternatives you've considered**
I could ensure that calls do not use keyword arguments. However, that is not feasible when the module is defined in a third-party library.
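Until that lands, one workaround sketch is to bind the keyword arguments before checkpointing. Caveats: this is not an official DeepSpeed API, and kwargs captured in the closure are kept alive rather than recomputed. A plain function stands in for the DeepSpeed call here so the idea is self-contained:

```python
from functools import partial


def checkpoint_with_kwargs(checkpoint_fn, fn, *args, **kwargs):
    """Adapt a positional-only checkpoint API (e.g.
    deepspeed.checkpointing.checkpoint) to callables that take kwargs."""
    return checkpoint_fn(partial(fn, **kwargs), *args)


def plain_checkpoint(fn, *args):  # stand-in for the DeepSpeed API in this sketch
    return fn(*args)


def add(a, extra=None):
    return a + extra if extra is not None else a


print(checkpoint_with_kwargs(plain_checkpoint, add, 1, extra=3))  # 4
```

In real use you would pass `deepspeed.checkpointing.checkpoint` as `checkpoint_fn`, but this does not change what the checkpoint machinery treats as its inputs, so it is a stopgap, not parity.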
| open | 2025-02-15T00:04:18Z | 2025-02-15T00:04:18Z | https://github.com/deepspeedai/DeepSpeed/issues/7038 | [
"enhancement"
] | AndreasMadsen | 0 |
jupyterlab/jupyter-ai | jupyter | 718 | /help not working due to changes in `_format_help_message` |
## Description
The `/help` command throws an error.
https://github.com/jupyterlab/jupyter-ai/blob/3bfce328e3b6f730d05faa68ca9b6d6434b5fdec/packages/jupyter-ai/jupyter_ai/chat_handlers/help.py#L65
The new `persona` and `unsupported_slash_commands` args were not added to the `_format_help_message` call on L65.
| closed | 2024-04-06T08:49:48Z | 2024-04-12T16:24:24Z | https://github.com/jupyterlab/jupyter-ai/issues/718 | [
"bug"
] | michaelchia | 0 |
521xueweihan/HelloGitHub | python | 1925 | java | ## Project recommendation
- Project URL: only open-source projects on GitHub are accepted; please fill in the GitHub project URL
- Category: please choose one of (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning)
- Planned follow-up updates:
- Project description:
  - Required: what the project is, what it can be used for, and what features or pain points it addresses
  - Optional: applicable scenarios and what beginners can learn from it
  - Description length (excluding sample code): 10-256 characters
- Reason for recommending: what makes it stand out? What pain point does it solve?
- Sample code: (optional) length: 1-20 lines
- Screenshot: (optional) gif/png/jpg
## Tips (delete this section before submitting)
> Click "Preview" above for easier reading of the content below.
To improve the chance of your project being included:
1. Search for the project URL on the HelloGitHub homepage (https://hellogithub.com) to check whether it has already been recommended.
2. Revise the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If your recommended project is included in a HelloGitHub monthly issue, your GitHub account will be shown in the [contributors list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thanks again for your support of the HelloGitHub project!
| closed | 2021-10-14T08:50:22Z | 2021-10-14T08:50:36Z | https://github.com/521xueweihan/HelloGitHub/issues/1925 | [
"恶意issue"
] | panTTT | 1 |
vitalik/django-ninja | rest-api | 1,323 | default 422 validation schema in swagger [fastapi] |
I would like the default 422 validation error schema to appear in Swagger, like FastAPI's:
<img width="1345" alt="Screenshot 2024-10-21 at 12 36 07" src="https://github.com/user-attachments/assets/53caafaa-4ea0-485c-98fd-66d677ff4b7a">
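For reference, the FastAPI-style 422 payload being asked for has this shape (example values only; the request here is for django-ninja to document the same schema by default):

```python
# The documented 422 body FastAPI advertises in Swagger:
validation_error_example = {
    "detail": [
        {
            "loc": ["body", "payload", "limit"],
            "msg": "value is not a valid integer",
            "type": "type_error.integer",
        }
    ]
}

print(sorted(validation_error_example["detail"][0]))  # ['loc', 'msg', 'type']
```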
| open | 2024-10-21T07:36:26Z | 2025-01-22T20:11:55Z | https://github.com/vitalik/django-ninja/issues/1323 | [] | begyy | 2 |
nicodv/kmodes | scikit-learn | 102 | Question about k-prototypes ordinal variables | How can I use an ordinal variable in k-prototypes?
For example, I have a categorical income-level variable from 1 to 7.
A higher value means higher income.
The difference between income levels 1 and 2 is not the same as the difference between levels 1 and 7.
So, I want the distances to respect this ordering.
+ I have 3 clustering variables (1 categorical, 1 ordinal, 1 numerical).
If I use this command, the ordinal variable is not treated as ordinal:
kproto.fit_predict(data3.values, categorical=[0,1])
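One common workaround (not a kmodes feature): rescale the ordinal column and treat it as numerical, so level distances are ordered, and pass only the truly categorical column to `categorical`:

```python
# Columns: [categorical, ordinal income level (1-7), numerical]
rows = [
    [0, 1, 2.5],
    [1, 7, 3.1],
    [0, 4, 0.9],
]

# Min-max scale the ordinal column so it behaves like an ordered numeric feature:
levels = [r[1] for r in rows]
lo, hi = min(levels), max(levels)
for r in rows:
    r[1] = (r[1] - lo) / (hi - lo)

print([r[1] for r in rows])  # [0.0, 1.0, 0.5]
# Then cluster with only column 0 marked categorical:
# kproto.fit_predict(np.array(rows, dtype=object), categorical=[0])
```

This makes the 1-vs-2 distance smaller than the 1-vs-7 distance, which matching dissimilarity on a categorical column cannot do.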
Help. Thank You :D | closed | 2019-01-29T16:08:29Z | 2019-02-04T21:49:03Z | https://github.com/nicodv/kmodes/issues/102 | [
"question"
] | jiyelee14 | 2 |
glumpy/glumpy | numpy | 15 | Standard transforms | ### Implement standard transforms
Implement & document standard transforms
- [x] Polar
- [x] Translate
- [x] Rotate
- [x] Linear scale
- [x] Log scale
- [x] Power scale
- [x] Perspective
- [x] Orthographic
- [x] PVM (Projection, View, Model)
- [x] Trackball
- [x] PanZoom
| closed | 2015-01-02T20:00:30Z | 2015-01-17T18:16:44Z | https://github.com/glumpy/glumpy/issues/15 | [
"1.0 Release"
] | rougier | 0 |
Significant-Gravitas/AutoGPT | python | 8,677 | Condition Block should support strings - It currently only supports numbers | For example "text" == "text"
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/f81d72d6-5ff9-4356-8bfa-a7ed4e7c2184/93334572-7717-4260-9880-7bf40eb683a1?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9mODFkNzJkNi01ZmY5LTQzNTYtOGJmYS1hN2VkNGU3YzIxODQvOTMzMzQ1NzItNzcxNy00MjYwLTk4ODAtN2JmNDBlYjY4M2ExIiwiaWF0IjoxNzMxODU4NDk0LCJleHAiOjMzMzAyNDE4NDk0fQ.7Pd7vJjjrkQ35H9pmP6rzAzJVpQbyuJmgOgDraWkEi8 " alt="image.png" width="514" height="993" /> | closed | 2024-11-17T15:35:26Z | 2024-12-10T17:51:39Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8677 | [] | Torantulino | 1 |
kymatio/kymatio | numpy | 457 | Separate testing of CPU tensor errors in torch_skcuda backend | Right now, these tests are sprinkled in among the rest. Would be more straightforward to have them in separate tests to check them once and for all. | open | 2019-11-28T03:39:54Z | 2020-03-03T17:00:32Z | https://github.com/kymatio/kymatio/issues/457 | [] | janden | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,574 | error in converting file | Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
ModelLoadingError: "Invalid checksum for file C:\Users\my pc\AppData\Local\Programs\Ultimate Vocal Remover\models\Demucs_Models\v3_v4_repo\955717e8-8726e21a.th, expected 8726e21a but got 40cb3802"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 834, in seperate
File "demucs\pretrained.py", line 81, in get_model
File "demucs\repo.py", line 148, in get_model
File "demucs\repo.py", line 129, in get_model
File "demucs\repo.py", line 129, in <listcomp>
File "demucs\repo.py", line 99, in get_model
File "demucs\repo.py", line 39, in check_checksum
"
Error Time Stamp [2024-10-01 14:21:26]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: False
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: True
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2024-10-01T08:57:22Z | 2024-10-01T08:57:22Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1574 | [] | Ricky4270 | 0 |
babysor/MockingBird | pytorch | 418 | The synthesizer errors out at around step 1500; does anyone know what is going on? | The log is below:
{| Epoch: 143/910 (10/11) | Loss: 0.3434 | 1.1 steps/s | Step: 1k | }{| Epoch: 143/910 (11/11) | Loss: 0.3442 | 1.1 steps/s | Step: 1k | }
{| Epoch: 144/910 (11/11) | Loss: 0.3435 | 1.1 steps/s | Step: 1k | }
{| Epoch: 145/910 (2/11) | Loss: 0.3433 | 1.1 steps/s | Step: 1k | }Traceback (most recent call last):
File "D:\MockingBird-main\synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "D:\MockingBird-main\synthesizer\train.py", line 201, in train
loss.backward()
File "D:\anaconda\envs\pytorch\lib\site-packages\torch\_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "D:\anaconda\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` | open | 2022-03-03T00:30:44Z | 2022-03-06T01:30:44Z | https://github.com/babysor/MockingBird/issues/418 | [] | lijielijie | 7 |
521xueweihan/HelloGitHub | python | 2753 | [Project recommendation] A collection of 1000+ Telegram groups from across the web | ## Recommended project
- Project URL: https://github.com/itgoyo/TelegramGroup
- Category: Other
- Project title: A curated collection of 1000+ interesting Telegram groups from across the web.
- Project description: Many people treat Telegram as just an IM chat tool, but its more interesting and fun features are its groups and bots. This project compiles the most complete collection of groups on the web; there is bound to be one you need.
- Highlight: Helps people quickly find groups they are interested in.
- Sample code: (optional)
- Screenshot: (optional) gif/png/jpg
- Planned follow-up updates:
| closed | 2024-05-24T03:27:48Z | 2024-05-24T06:40:40Z | https://github.com/521xueweihan/HelloGitHub/issues/2753 | [] | itgoyo | 0 |
blacklanternsecurity/bbot | automation | 2305 | Possibility to add -rate (rate limiting) in ffuf | **Description**
Which feature would you like to see added to BBOT? What are its use cases?
Hello,
ffuf has the following option:
`-rate Rate of requests per second (default: 0)`
It is possible to set a rate limit for nuclei through bbot; I think it would be useful to support a rate limit for ffuf too.
Is it possible to add this feature into this great project? :)
Thanks a lot in advance | closed | 2025-02-24T14:44:52Z | 2025-02-24T19:32:47Z | https://github.com/blacklanternsecurity/bbot/issues/2305 | [
"enhancement"
] | 4FunAndProfit | 1 |